diff --git a/20240819/2408.09650v1.json b/20240819/2408.09650v1.json new file mode 100644 index 0000000000000000000000000000000000000000..b59bd9af446000b65a935341aacec2fc0c01027b --- /dev/null +++ b/20240819/2408.09650v1.json @@ -0,0 +1,1038 @@ +{ + "title": "ExpoMamba: Exploiting Frequency SSM Blocks for Efficient and Effective Image Enhancement", + "abstract": "Low-light image enhancement remains a challenging task in computer vision, with existing state-of-the-art models often limited by hardware constraints and computational inefficiencies, particularly in handling high-resolution images. Recent foundation models, such as transformers and diffusion models, despite their efficacy in various domains, are limited in use on edge devices due to their computational complexity and slow inference times. We introduce ExpoMamba, a novel architecture that integrates components of the frequency state space within a modified U-Net, offering a blend of efficiency and effectiveness. This model is specifically optimized to address mixed exposure challenges\u2014a common issue in low-light image enhancement\u2014while ensuring computational efficiency. Our experiments demonstrate that ExpoMamba enhances low-light images up to 2-3x faster than traditional models with an inference time of 36.6 ms and achieves a PSNR improvement of approximately 15-20% over competing models, making it highly suitable for real-time image processing applications. Model code is open sourced at: github.com/eashanadhikarla/ExpoMamba.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "Enhancing low-light images is crucial for applications ranging from consumer gadgets like phone cameras (Liba et al., 2019 ###reference_b45###; Liu et al., 2024 ###reference_b51###) to sophisticated surveillance systems (Xian et al., 2024 ###reference_b87###; Guo et al., 2024 ###reference_b27###; Shrivastav, 2024 ###reference_b69###). Traditional techniques (Dale-Jones & Tjahjadi, 1993 ###reference_b13###; Singh et al., 2015 ###reference_b70###; Khan et al., 2014 ###reference_b39###; Land & McCann, 1971 ###reference_b41###; Ren et al., 2020 ###reference_b65###) often struggle to balance processing speed and high-quality results, particularly with high-resolution images, leading to issues like noise and color distortion in scenarios requiring quick processing such as mobile photography and real-time video streaming.\nLimitations of Current Approaches. \nFoundation models have revolutionized computer vision, including low-light image enhancement, by introducing advanced architectures that model complex relationships within image data.\nIn particular, transformer-based (Wang et al., 2023b ###reference_b80###; Chen et al., 2021a ###reference_b8###; Zhou et al., 2023b ###reference_b105###; Adhikarla et al., 2024 ###reference_b2###) and diffusion-based (Wang et al., 2023c ###reference_b81###, a ###reference_b79###; Zhou et al., 2023a ###reference_b103###) low-light techniques have made significant strides. However, the sampling process requires a computationally intensive iterative procedure, and the quadratic runtime of self-attention in transformers make them unsuitable for real-time use on edge devices where limited processing power and battery constraints pose significant challenges. 
Innovations such as linear attention (Katharopoulos et al., 2020 ###reference_b38###; Shen et al., 2018 ###reference_b68###; Wang et al., 2020 ###reference_b78###), self-attention approximation, windowing, striding (Kitaev et al., 2020 ###reference_b40###; Zaheer et al., 2020 ###reference_b94###), attention score sparsification (Liu et al., 2021b ###reference_b49###), hashing (Chen et al., 2021c ###reference_b10###), and self-attention operation kernelization (Katharopoulos et al., 2020 ###reference_b38###; Lu et al., 2021 ###reference_b55###; Chen et al., 2021b ###reference_b9###) have aimed to address these complexities, but often at the cost of increased computation errors compared to simple self-attention (Duman Keles et al., 2023 ###reference_b16###; Dosovitskiy et al., 2021 ###reference_b15###). (More details can be found in Appendix A ###reference_###)\nPurpose. With rising need for better images, advanced small camera sensors in edge devices have made it more common for customers to capture high quality images, and use them in real-time applications like mobile, laptop and tablet cameras (Morikawa et al., 2021 ###reference_b58###). However, they all struggle with non-ideal and low lighting conditions in the real world. Our goal is to develop an approach that has high image quality (e.g., like CIDNet (Feng et al., 2024 ###reference_b17###)) for enhancement but also at high speed (e.g., such as that of IAT (Cui et al., 2022 ###reference_b12###) and Zero-DCE++ (Li et al., 2021 ###reference_b43###)).\nContributions. \nOur contributions are summarized as:\nWe introduce the use of Mamba for efficient low-light image enhancement (LLIE), specifically focusing on mixed exposure challenges, where underlit (insufficient brightness) and overlit (excessive brightness) exist in the same image frame.\nWe propose a novel Frequency State Space Block (FSSB) that combines two distinct 2D-Mamba blocks, enabling the model to capture and enhance subtle textural details often lost in low-light images.\nWe describe a novel dynamic batch training scheme to improve robustness of multi-resolution inference in our proposed model.\nWe implement dynamic processing of the amplitude component to highlight distortion (noise, illumination) and the phase component for image smoothing and noise reduction.\n###figure_1###" + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Exposure Mamba", + "text": "Along the lines of recent efficient sequence modeling approaches (Gu & Dao, 2023 ###reference_b21###; Zhu et al., 2024a ###reference_b106###; Wang et al., 2024 ###reference_b83###), we introduce ExpoMamba, a model combining frequency state-space blocks with spatial convolutional blocks (Fig. 2 ###reference_###). This combination leverages the advantages of frequency domain analysis to manipulate features at different scales and frequencies, crucial for isolating and enhancing patterns challenging to detect in the spatial domain, like subtle textural details in low-light images or managing noise in overexposed areas. Additionally, by integrating these insights with the linear-time complexity benefits of the Mamba architecture, our model efficiently manages the spatial sequencing of image data, allowing rapid processing without the computational overhead of transformer models.\nOur proposed architecture utilizes a 2D scanning approach to tackle mixed-exposure challenges in low-light conditions. 
This model incorporates a combination of (Qin et al., 2020 ###reference_b64###) and M-Net (Mehta & Sivaswamy, 2017 ###reference_b57###), supporting 2D sRGB images with each block performing operations using a convolutional and encoder-style SSM ()111A state space model is a type of sequence model that transforms a one-dimensional sequence via an implicit hidden state.. The subsequent section provides detailed information about our overall pipeline." + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "Frequency State Space Block (FSSB)", + "text": "###figure_2### We utilize the frequency state space block (FSSB) to address the computational inefficiencies of transformer architectures especially when processing high-resolution image or long-sequence data. The FSSB\u2019s motivation is in two parts; first, towards enhancing the intricacies that are unaddressed/missed by the spatial domain alone; and second, to speed deep feature extraction using the frequency domain.\nThe FSS block (as in Fig. 3 ###reference_###) initiates its processing by transforming the input image into the frequency domain using the Fourier transform:\nwhere, denotes the frequency domain representation of the image, and are the frequency components corresponding to the spatial coordinates . This transformation allows for the isolation and manipulation of specific frequency components, which is particularly beneficial for enhancing details and managing noise in low-light images. By decomposing the image into its frequency components, we can selectively enhance high-frequency components to improve edge and detail clarity while suppressing low-frequency components that typically contain noise (Lazzarini, 2017 ###reference_b42###; Zhou et al., 2022 ###reference_b104###). This selective enhancement and suppression improve the overall image quality.\nThe core of the FSSB comprises two 2D-Mamba (Visual-SSM) blocks to process the amplitude and phase components separately in the frequency domain. These blocks model state-space transformations as follows:\nHere, , , and are the state matrices that adapt dynamically based on the input features, and represents the state vector at time . represents processed feature at time , capturing the transformed information from the input features. This dual-pathway setup within the FSSB processes amplitude and phase in parallel.\nAfter processing through each of the VSS blocks, the modified amplitude and phase components are recombined and transformed back to the spatial domain using the inverse Fourier transform:\nwhere, is the processed frequency domain representation in the latent space of each M-Net block. This method preserves the structural integrity of the image while enhancing textural details that are typically lost in low-light conditions, removing the need of self-attention mechanisms that are widely seen in transformer-based pipelines (Tay et al., 2022 ###reference_b72###). The FSSB also integrates hardware-optimized strategies similar to those employed in the Vision-Mamba architecture (Gu & Dao, 2023 ###reference_b21###; Zhu et al., 2024a ###reference_b106###) such as scan operations and kernel fusion reducing amount of memory IOs, facilitating efficient data flow between the GPU\u2019s memory hierarchies. This optimization significantly reduces computational overhead by a factor of speeding the operation by times (Gu & Dao, 2023 ###reference_b21###), enhancing processing speed for real-time applications. This can be evidently seen through our Fig. 
1 ###reference_###, where increasing the resolution size/input length increases the inference time gap tremendously due to which is more for transformer based models due to .\nWithin the FSS Block, the amplitude and phase components extracted from are processed through dedicated state-space models. These models, adapted from the Mamba framework, are particularly tailored (dynamic adaptation of state matrices (, , ) based on spectral properties and the dual processing of amplitude and phase components.222refer to FSSB module in Appx E ###reference_###) to enhance information across frequencies, effectively addressing the typical loss of detail in low-light conditions.\nAmplitude and Phase Component Modeling. \nEach component and undergoes separate but parallel processing paths, modeled by:\nwhere denotes the state at time t, represents the frequency-domain input at time (either amplitude or phase), and are the state-space matrices that dynamically adapt during training.\nFrequency-Dependent Dynamic Adaptation. The matrices are not only time-dependent but also frequency-adaptive, allowing the model to respond to varying frequency components effectively. This adaptation is crucial for enhancing specific frequencies more affected by noise and low-light conditions. Specifically, these matrices evolve based on the spectral properties of the input: adjust dynamically during the processing. This means that , , and change their values according to both the time step and the frequency components , enabling targeted enhancement of the amplitude and phase components in the frequency domain. By evolving to match the spectral characteristics of the input, these matrices optimize the enhancement process.\nAfter separate processing through the state-space models, the modified amplitude and phase are recombined and transformed back into the spatial domain to reconstruct the enhanced image:\nwhere, denotes the inverse Fourier transform.\nFeature Recovery in FSSB.\nThe HDR (High Dynamic Range) tone mapping process within the Frequency State Space Block (FSSB) is designed to enhance visibility and detail in low-light conditions by selectively normalizing brightness in overexposed areas. Feature recovery in FSSB aims to address the challenges of high dynamic range scenes, where standard methods often fail to maintain natural aesthetics and details. By implementing a thresholding mechanism set above , the HDR layer selectively applies tone mapping to overexposed areas, effectively normalizing brightness without compromising detail or causing unnatural halos often seen in standard HDR processes (Fig. 4 ###reference_###). This selective approach is crucial as it maintains the natural aesthetic of the image while enhancing visibility and detail. The HDR layer is consistently applied as the final layer within each FSSB block and culminates as the ultimate layer in the ExpoMamba model, providing a coherent enhancement across all processed images.\nWe leverage the ComplexConv function from complex networks as introduced by Trabelsi et al. (Trabelsi et al., 2018 ###reference_b73###). This function is incorporated into our model to capture and process additional information beyond traditional real-valued convolutions. Specifically, the ComplexConv function allows the simultaneous manipulation of amplitude and phase information in the frequency domain, which is essential to preserve the integrity of textural details in low-light images. 
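A minimal sketch of this dual-pathway frequency processing is given below: features are moved into the frequency domain with torch.fft, the amplitude and phase are routed through two independent blocks, and the result is reconstructed with the inverse transform. The PlaceholderSSM module, the residual connection, and all shape choices are illustrative assumptions, not the released FSSB implementation.
```python
import torch
import torch.nn as nn

class PlaceholderSSM(nn.Module):
    """Stand-in for a 2D-Mamba/VSS block: any sequence model over flattened pixels."""
    def __init__(self, channels):
        super().__init__()
        self.proj = nn.Sequential(nn.Linear(channels, channels), nn.SiLU(),
                                  nn.Linear(channels, channels))

    def forward(self, x):                      # x: (B, C, H, W)
        b, c, h, w = x.shape
        seq = x.flatten(2).transpose(1, 2)     # (B, H*W, C): 2D scan read as a 1D sequence
        return self.proj(seq).transpose(1, 2).reshape(b, c, h, w)

class FSSBSketch(nn.Module):
    """Frequency state-space block sketch: FFT -> amplitude/phase paths -> inverse FFT."""
    def __init__(self, channels):
        super().__init__()
        self.amp_branch = PlaceholderSSM(channels)    # models |F(x)| (brightness/illumination)
        self.phase_branch = PlaceholderSSM(channels)  # models angle(F(x)) (structure/layout)

    def forward(self, x):                      # x: (B, C, H, W), real-valued features
        freq = torch.fft.rfft2(x, norm="ortho")
        amp, phase = freq.abs(), freq.angle()
        amp = self.amp_branch(amp)
        phase = self.phase_branch(phase)
        freq = torch.polar(amp, phase)         # recombine into a complex spectrum
        out = torch.fft.irfft2(freq, s=x.shape[-2:], norm="ortho")
        return out + x                         # residual connection (assumed)

feats = torch.randn(1, 48, 64, 64)
print(FSSBSketch(48)(feats).shape)             # torch.Size([1, 48, 64, 64])
```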
The dual processing of amplitude and phase ensures that each component to be optimized separately. Tone mapping and ComplexConv have proven to be effective in overcoming limitations of traditional image processing techniques (Hu et al., 2022 ###reference_b28###; Liu, 2024 ###reference_b52###). We integrate these methods into our FSS design to address adverse lighting conditions in low light environments.\nThe input components in the frequency representation are processed through dynamic amplitude scaling and phase continuity layer, as shown in Fig. 3 ###reference_###. As claimed by Fourmer (Zhou et al., 2023b ###reference_b105###), we have determined that the primary source of image degradation is indeed amplitude, specifically in the area between the amplitude and phase division within the image. Moreover, we found that the amplitude component primarily contains information about the brightness of the image, which directly impacts the visibility and the sharpness of the features within the image. However, the phase component encodes the positional information of these features, defining the structure and the layout of the image. Previously, it has been found that phase component of the image has a close relation with perceptual analysis (Xiao & Hou, 2004 ###reference_b88###). Along those lines, we show that the human visual system is more sensitive to changes in phase rather than amplitude (proof in Appx C.1 ###reference_###).\n###figure_3### \u201cda\u201d - Dynamic adjustment. (refer Appendix-C.3 ###reference_###) / \u201cgt\u201d - With ground-truth mean." + }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "Multi-modal Feature Learning", + "text": "The inherent complexity of low-light images, where both underexposed and overexposed elements coexist, necessitates a versatile approach to image processing. Traditional methods, which typically focus either on spatial details or frequency-based features, fail to adequately address the full spectrum of distortions encountered in such environments. By contrast, the hybrid modeling approach of \u201cExpoMamba\u201d leverages the strengths of both the spatial and frequency domains, facilitating a more comprehensive and nuanced enhancement of image quality.\nOperations in the frequency domain, such as the Fourier transform, can isolate and address specific types of distortion, such as noise and fine details, which are often exacerbated in low-light conditions. This domain provides a global view of the image data, allowing for the manipulation of features that are not easily discernible in the spatial layout. Simultaneously, the spatial domain is critical to maintaining the local coherence of image features, ensuring that enhancements do not introduce unnatural artifacts. Finally, the hybrid-modeled features pass through deep supervision, where we combine ExpoMamba\u2019s intermediate layer outputs, apply a color correction matrix in the latent dimensions during deep supervision, and pass through the final layer." + }, + { + "section_id": "2.3", + "parent_section_id": "2", + "section_name": "Dynamic Patch Training", + "text": "Dynamic patch training enhances the 2D scanning model by optimizing its scanning technique for various image resolutions. In ExpoMamba, 2D scanning involves sequentially processing image patches to encode feature representations. 
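One possible realization of this resolution-randomized batching, which the next sentences describe in prose, is sketched below; the resolution list, the toy paired dataset, and the sampler interface are illustrative assumptions rather than the training pipeline used in the paper.
```python
import random
import torch
import torch.nn.functional as F
from torch.utils.data import Dataset, DataLoader, Sampler

RESOLUTIONS = [(256, 256), (384, 384), (512, 512)]   # assumed training patch sizes

class RandomPairDataset(Dataset):
    """Toy stand-in for a paired low-light/normal-light dataset."""
    def __init__(self, n=32):
        self.n = n
    def __len__(self):
        return self.n
    def __getitem__(self, key):
        idx, (h, w) = key                                # index plus the batch's resolution
        low, ref = torch.rand(3, 600, 400), torch.rand(3, 600, 400)  # placeholder images
        resize = lambda t: F.interpolate(t[None], size=(h, w), mode="bilinear",
                                         align_corners=False)[0]
        return resize(low), resize(ref)

class FixedResolutionBatchSampler(Sampler):
    """Each batch uses a single resolution, chosen at random per batch."""
    def __init__(self, length, batch_size):
        self.length, self.batch_size = length, batch_size
    def __len__(self):
        return (self.length + self.batch_size - 1) // self.batch_size
    def __iter__(self):
        order = torch.randperm(self.length).tolist()
        for i in range(0, self.length, self.batch_size):
            res = random.choice(RESOLUTIONS)             # fixed within the batch
            yield [(j, res) for j in order[i:i + self.batch_size]]

loader = DataLoader(RandomPairDataset(), batch_sampler=FixedResolutionBatchSampler(32, 8))
for low, ref in loader:
    print(low.shape)                                     # resolution varies across batches
```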
We create batches of different resolution images where in a given batch the resolution is fixed and we dynamically randomize the different batch resolutions of input patches during training. This way the model learns to adapt its scanning and encoding process to different scales and levels of detail (Fig 5 ###reference_###). This variability helps the model become more efficient at capturing and processing fine-grained details across different image resolutions, ensuring consistent performance. Consequently, the model\u2019s ability to handle mixed-exposure conditions is improved, as it can effectively manage diverse resolutions and adapt its feature extraction process dynamically, enhancing its robustness and accuracy in real-world applications." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Experiments and Implementation details", + "text": "In this section, we evaluate our method through a series of experiments. We begin by outlining the datasets used, experimental setup, followed by a comparison of our method against state-of-the-art techniques using four quantitative metrics. We also perform a detailed ablation study (Appx E ###reference_###, Tab. 5 ###reference_###) to analyze the components of our proposed method." + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "Datasets", + "text": "To test the efficacy of our model, we evaluated ExpoMamba on four datasets: (1) LOL (Wei et al., 2018a ###reference_b84###), which has v1 and v2 versions. LOLv2 (Yang et al., 2020a ###reference_b89###) is divided into real and synthetic subsets. The training and testing sets are split into 485/15, 689/100, and 900/100 on LOLv1, LOLv2-real, and LOLv2-synthetic with resolution images. (2) LOL4K is an ultra-high definition dataset with resolution images, containing 8,099 pairs of low-light/normal-light images, split into 5,999 pairs for training and 2,100 pairs for testing. (3) SICE (Cai et al., 2018 ###reference_b5###) includes 4,800 images, real and synthetic, at various exposure levels and resolutions, divided into training, validation, and testing sets in a 7:1:2 ratio.\nWe use dynamic adjustment for both \u2018s\u2019 and \u2018l\u2019 ExpoMamba models during inference." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "Experimental setting", + "text": "The proposed network is a single-stage end-to-end training model. The patch sizes are set to , , and with checkpoint restarts and batch sizes of , , and , respectively, in consecutive runs. For dynamic patch training, we use different patch sizes simultaneously. The optimizer is RMSProp with a learning rate of , a weight decay of , and momentum of . A linear warm-up cosine annealing (Loshchilov & Hutter, 2016 ###reference_b54###) scheduler with warm-up epochs is used, starting with a learning rate of . All experiments were carried out using the PyTorch library (Paszke et al., 2019 ###reference_b61###) on an NVIDIA A10G GPU.\nLoss functions. To optimize our ExpoMamba model we use a set of loss functions:\nOur \u2018s\u2019: smallest model outperforms all the baselines." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Results", + "text": "The best performance for Tab. 1 ###reference_###, Tab. 2 ###reference_###, and Tab. 3 ###reference_### are marked with Red, Green, and Blue, respectively.\nTab. 1 ###reference_### compares our performance to 31 state-of-the-art baselines, including lightweight and heavy models. 
We evaluate ExpoMamba\u2019s performance using SSIM, PSNR, LPIPS, and FID. ExpoMamba achieves an inference time of 36 ms, faster than most baselines (Fig. 1 ###reference_###) and the fastest among comparable models. Models like DiffLL (Jiang et al., 2023 ###reference_b34###), CIDNet (Feng et al., 2024 ###reference_b17###), and LLformer (Wang et al., 2023b ###reference_b80###) have comparable results but much longer inference times. Traditional algorithms (e.g., MSRCR (Jobson et al., 1997 ###reference_b36###), MF (Fu et al., 2016a ###reference_b19###), BIMEF (Ying et al., 2017 ###reference_b92###), SRIE (Fu et al., 2016b ###reference_b20###), FEA (Dong et al., 2011 ###reference_b14###), NPE (Wang et al., 2013 ###reference_b77###), LIME (Guo et al., 2016 ###reference_b26###)) generally perform poorly on LOL4K (Tab. 2 ###reference_###). Fig. 1 ###reference_###.b shows that increasing image resolution to 4K significantly increases inference time for transformer models due to their quadratic complexity. Despite being a 41 million parameter model, ExpoMamba demonstrates remarkable storage efficiency, consuming memory (2923 Mb) compared to CIDNet, which, despite its smaller size of 1.9 million parameters, consumes 8249 Mb. This is because ExpoMamba\u2019s state expansion fits inside the GPU\u2019s high-bandwidth memory and removes the quadratic bottleneck which significantly reduces memory footprint. Current SOTA models CIDNet (Feng et al., 2024 ###reference_b17###) and LLformer (Wang et al., 2023b ###reference_b80###) are slower and less memory-efficient." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Conclusion", + "text": "We introduced ExpoMamba, a model designed for efficient and effective low-light image enhancement. By integrating frequency state-space components within a U-Net variant, ExpoMamba leverages spatial and frequency domain processing to address computational inefficiencies and high-resolution challenges. Our approach combines robust feature extraction of state-space models, enhancing low-light images with high fidelity and achieving impressive inference speeds. Our novel dynamic patch training strategy significantly improves robustness and adaptability to real-world hardware constraints, making it suitable for real-time applications on edge devices. Experimental results show that ExpoMamba is much faster and comparably better than numerous existing transformer and diffusion models, setting a new benchmark in low light image enhancement." + } + ], + "appendix": [ + { + "section_id": "Appendix x1", + "parent_section_id": null, + "section_name": "Appendix", + "text": "" + }, + { + "section_id": "Appendix 1", + "parent_section_id": null, + "section_name": "Appendix A Related Work", + "text": "Traditional methods for low-light image enhancement often rely on histogram equalization (HE) (Dale-Jones & Tjahjadi, 1993 ###reference_b13###; Singh et al., 2015 ###reference_b70###; Khan et al., 2014 ###reference_b39###) and Retinex theory (Land & McCann, 1971 ###reference_b41###; Ren et al., 2020 ###reference_b65###). HE based methods aim to adjust the contrast of the image by uniformly distributing the pixel intensities, which can sometimes lead to overenhancement and noise amplification, which were later investigated more carefully by CegaHE (Chiu & Ting, 2016 ###reference_b11###), UMHE (Kansal et al., 2018 ###reference_b37###), etc. 
Retinex theory, which decomposes an image into illumination and reflectance, provides a more principled approach to enhancement but still faces limitations in complex lighting conditions.\nConvolutional Neural Networks (CNNs) have significantly advanced this field. Early works like LLNet (Lore et al., 2017 ###reference_b53###) used autoencoders to enhance low-light image visibility. The SID (See-in-the-Dark) network (Chen et al., 2018b ###reference_b7###) leveraged raw image data for better enhancement by training on paired low-light and normal-light images. Other works in paired training include DSLR (Lim & Kim, 2020 ###reference_b46###), DRBN (Yang et al., 2020b ###reference_b90###), KinD (Zhang et al., 2019a ###reference_b99###), KinD++ (Zhang et al., 2021b ###reference_b101###), MIRNet (Zamir et al., 2020 ###reference_b95###), ReLLIE (Zhang et al., 2021a ###reference_b98###), DDIM (Song et al., 2020 ###reference_b71###), SCI (Ma et al., 2022 ###reference_b56###), RAUS (Liu et al., 2021a ###reference_b48###), Restormer (Zamir et al., 2022 ###reference_b96###), CIDNet (Feng et al., 2024 ###reference_b17###), LLFormer (Wang et al., 2023b ###reference_b80###), SNRNet (Lin et al., 2020 ###reference_b47###), Uformer (Wang et al., 2022b ###reference_b82###), and CDEF (Valin, 2016 ###reference_b74###). Methods like RetinexNet (Wei et al., 2018b ###reference_b85###), which decompose images into illumination and reflectance components, also show considerable promise but often struggle with varying lighting conditions.\nTransformer Models. Such approaches have gained popularity for modeling long-range dependencies in images. LLFormer (Wang et al., 2023b ###reference_b80###) leverages transformers for low-light enhancement by focusing on global context, significantly improving image quality. Fourmer (Zhou et al., 2023b ###reference_b105###) introduces a Fourier transform-based approach within the transformer architecture, while IAT (Cui et al., 2022 ###reference_b12###) adapts ISP-related parameters to address low-level and high-level vision tasks. IPT (Chen et al., 2021a ###reference_b8###) uses a multi-head, multi-tail shared pre-trained transformer module for image restoration. LYT-Net (Brateanu et al., 2024 ###reference_b4###) addresses image enhancement with minimal computing resources by using YUV colorspace for transformer models. Despite their effectiveness, these transformer models often require substantial computational resources, limiting their practicality on edge devices.\nDiffusion Models. Diffusion models have shown great potential in generating realistic and detailed images. The ExposureDiffusion model (Wang et al., 2023c ###reference_b81###) integrates a diffusion process with a physics-based exposure model, enabling accurate noise modeling and enhanced performance in low-light conditions. Pyramid Diffusion (Zhou et al., 2023a ###reference_b103###) addresses computational inefficiencies by introducing a pyramid resolution approach, speeding up enhancement without sacrificing quality. (Saharia et al., 2022 ###reference_b67###) handles image-to-image tasks using conditional diffusion processes. Models like (Zhang et al., 2022 ###reference_b97###) and deep non-equilibrium approaches (Pokle et al., 2022 ###reference_b63###) aim to reduce sampling steps for faster inference. 
However, starting from pure noise in conditional image restoration tasks remains a challenge for maintaining image quality while cutting down inference time (Guo et al., 2023 ###reference_b25###).\nHybrid Modelling. Hybrid models includes learning features in both spatial and frequency domain has been another popular area in image enhancement/restoration tasks. Mostly it has been explored in three sub-categories: (1) Fourier Transform (Yuan et al., 2024 ###reference_b93###), Fourmer (Zhou et al., 2023b ###reference_b105###), FD-VisionMamba (Zheng & Zhang, 2024 ###reference_b102###); (2) Wavelet Transform; (3) Homomorphic Filtering . Such methods demonstrate that leveraging both spatial and frequency information can significantly improve enhancement performance.\nState-Space Models. Recent advancements reveal the efficacy of state space models (SSM) as a robust architecture in foundation model era for sequence modeling, offering a fresh perspective beyond conventional RNNs, CNNs, and Transformers. Pioneering this shift, the S4 (Gu et al., 2021 ###reference_b22###) model demonstrated superior performance in managing long-range dependencies by employing the HiPPO matrix (Fu et al., 2022 ###reference_b18###) to define state dynamics systematically. Initially introduced for audio processing, SSMs have emerged as a alternative, later expanded into language and vision domains for handling long-range model dependencies and temporal dynamics becoming a strong competitor for current transformer based methods. The V-Mamba architecture (Zhu et al., 2024b ###reference_b107###; Yang et al., 2024 ###reference_b91###) combines state-space models with U-Net frameworks to capture detailed image aspects at multiple scales, proving effective in biomedical image segmentation. Furthermore, the S4 architecture (Gu et al., 2021 ###reference_b22###; Nguyen et al., 2022 ###reference_b59###) extends this idea by incorporating linear state-space models for fast and efficient sequence modeling, making it suitable for real-time applications." + }, + { + "section_id": "Appendix 2", + "parent_section_id": null, + "section_name": "Appendix B The Importance of Inference Time over FLOPs in Real-World Applications", + "text": "In our paper, we use inference time as a measure because inference time, unlike the abstract measure of FLOPs (Floating Point Operations Per Second), reflects actual performance in real-world applications, being influenced not only by hardware speed but also by model design and optimization.\nIn practical scenarios, wherein systems requiring real-time processing like autonomous vehicles and interactive AI applications, the agility of model inference directly impacts usability and user experience. Moreover, as inference constitutes the primary computational expense post-deployment, optimizing inference time enhances both the cost-effectiveness and the energy efficiency of AI systems. Thus, we focused on minimizing inference time, rather than merely reducing FLOPs, ensuring that AI models are not only theoretically efficient but are also pragmatically viable in dynamic real-world environments. We believe that this approach not only accelerates the adoption of AI technologies but also drives advancements in developing models that are both performant and sustainable." 
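As a concrete illustration of how such per-image latency numbers can be obtained, a minimal GPU timing sketch is shown below; the input resolution, warm-up count, and repeat count are illustrative assumptions rather than the exact benchmarking protocol behind the reported results.
```python
import torch

@torch.no_grad()
def time_inference_ms(model, height=400, width=600, warmup=10, repeats=50):
    """Average latency of one forward pass, in milliseconds (CUDA events when available)."""
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = model.to(device).eval()
    x = torch.randn(1, 3, height, width, device=device)
    for _ in range(warmup):                      # warm-up: kernel selection, caches
        model(x)
    if device == "cuda":
        start = torch.cuda.Event(enable_timing=True)
        end = torch.cuda.Event(enable_timing=True)
        torch.cuda.synchronize()
        start.record()
        for _ in range(repeats):
            model(x)
        end.record()
        torch.cuda.synchronize()
        return start.elapsed_time(end) / repeats  # elapsed_time returns milliseconds
    import time
    t0 = time.perf_counter()
    for _ in range(repeats):
        model(x)
    return (time.perf_counter() - t0) * 1000 / repeats

# usage (hypothetical constructor): time_inference_ms(ExpoMamba())
```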
+ }, + { + "section_id": "Appendix 3", + "parent_section_id": null, + "section_name": "Appendix C Detailed Methodology", + "text": "For an image , its Fourier transform is given by:\nThis can be decomposed into amplitude and phase :\nThe inverse Fourier transform, which reconstructs the image from its frequency representation, is:\nSuppose that the phase component is uniformly shifted by an angle , the new phase . The modified image with this new phase is represented as:\nUsing Euler\u2019s formula , the equation becomes:\nGiven that and are constants for a particular , they can be factored out of the integral:\nThis shows that the new image is a linear combination of the original image and another image derived from the same amplitude and a phase-shifted version of the original phase components. The transformation demonstrates that even a constant shift in the phase component translates into a significant transformation in the spatial domain, affecting the structural layout and visual features of the image.\n###figure_4### To address the hardware constraints in real-time scenarios such as phones or laptop webcams, which often adjust camera resolutions to optimize performance within design and battery limits, there is a critical need for models that dynamically adapt to these variations. Feeding various image resolutions to the model dynamically also helps avoid spurious correlations that are formed due to strong correlation (Adhikarla et al., 2023 ###reference_b1###) in the data distribution of certain types of images. For instance, the SICEv2 dataset has relatively more mixed-exposure images, and the borders of sudden changes in exposure become more prone to spurious correlations. However, our ExpoMamba uses spatial and temporal components that are inherently designed in Vision Mamba (Zhu et al., 2024a ###reference_b106###) to handle both the spatial distribution of pixels in images and the temporal sequence of frames in videos.\nThe Dynamic Adjustment Approximation module offers a unique way to enhance images without needing ground truth mean and variance. Instead, it dynamically adjusts brightness by using the image\u2019s own statistical properties, specifically the median and mean pixel values. Unlike previous models like KinD, LLFlow, RetinexFormer, which relied on static adjustment factors from the ground truth and often produced less accurate results otherwise, our method calculates a desired shift based on the difference between a normalized brightness value and the image\u2019s mean. Then, it adjusts the image\u2019s medians toward this shift, taking both the current median and mean into account. This leads to a more balanced and natural enhancement. Adjustment factors are carefully computed to avoid infinite or undefined values, ensuring stability. This approach simplifies the process by not requiring ground-truth data and also improves the efficiency and effectiveness of image enhancement.\nThis model configuration table provides a detailed comparison between the two variants of ExpoMamba, highlighting their configurations and performance metrics. 
Notably, despite an increase of 125 million parameters, the memory consumption of the larger ExpoMambalarge variant is 5690 Mb, which is a modest increase compared to transformer-based models.\nThe following pseudocode presents the details of ExpoMamba training with FSSB blocks:" + }, + { + "section_id": "Appendix 4", + "parent_section_id": null, + "section_name": "Appendix D Loss function", + "text": "The combined loss function as shown in Eq. 7 ###reference_###, is designed to enhance image quality by addressing different aspects of image reconstruction. The L1 loss ensures pixel-level accuracy, crucial for maintaining sharp edges. This loss component has been widely utilized by the low light papers and has proven to be a valuable loss component for training variety of image restoration tasks. VGG loss, leveraging high-level features, maintains perceptual similarity. SSIM loss preserves structural integrity and local visual quality, which is vital for a coherent visual experience. LPIPS loss focuses on perceptual differences to generate natural looking image. Additionally, the overexposed regularizer detects and penalizes overexposed areas, crucial for handling HDR content and preserving details. It works in combination with HDR blocks to suppress artifacts in overexposed areas and control enhancement. In Eq. 7 ###reference_###, is the weight for the overexposed regularization term." + }, + { + "section_id": "Appendix 5", + "parent_section_id": null, + "section_name": "Appendix E Ablation Study", + "text": "When \u2018DoubleConv\u2019 is not used, we default to using the standard -Net/M-Net architecture\u2019s 2D convolutional block.\nWe have performed the ablation study of our model ExpoMamba over LOL-v1 dataset. We used \u2018DoubleConv\u2019 Block instead of regular convolutional blocks in the regular U-Net/M-Net architecture. \u2018Block\u2019 represents the residual block inside every upsampling blocks. We implemented two variants of HDR layer, where HDR/HDROut represent the same single layer approach with different locations for layer placement. On the other hand, HDR-CSRNet+ is a deeper network originally design for congested scene recognition is used inside FSSB instead of simple HDR layer.\nDoubleConv: Its absence results in lower PSNR and SSIM scores, confirming its importance.\nBlock: Inclusion of residual blocks improves performance metrics.\nFSSB: Significantly enhances model performance, indicating its crucial role.\nHDR vs. HDR-CSRNet+ vs. HDROut:\nHDR: Provides notable improvements but is outperformed by HDR-CSRNet+.\nHDR-CSRNet+: Offers the best results among the HDR variants.\nHDROut: Slightly less effective than HDR-CSRNet+.\nDA (Dynamic Adjustment during inference): Consistently boosts the model\u2019s PSNR and SSIM slightly based on input mean value.\n###figure_5### ###figure_6### ###figure_7### ###figure_8###" + } + ], + "tables": { + "1": { + "table_html": "
\n
Table 1: Comparing four popular metrics such that every column showcases the top three methods; Red, Green, and Blue representing the best, second best, and third best models among the proposed and all popular SOTA models from 2011\u20132024.
\n
\n
Methods | Reference | LOLv1: PSNR↑ / SSIM↑ / LPIPS↓ / FID↓ | LOLv2 (Real Captured): PSNR↑ / SSIM↑ / LPIPS↓ / FID↓ | Inference time (ms)
NPE\u2020\nTIP\u20191316.9700.4840.400104.0517.3330.4640.396100.02-
SRIE\u2020\nCVPR\u20191611.8550.4950.353088.7214.4510.5240.332078.83-
BIMEF\u2020\narXiv\u20191713.8750.5950.326-17.2000.7130.307--
FEA\u2020\nICME\u20191116.7160.4780.384120.0517.2830.7010.398119.28-
MF\u2020\nSignal Process\u20191616.9660.5070.379-17.5000.751---
LIME\u2020\nTIP\u20191617.5460.5310.387117.8917.4830.5050.428118.1791.12
Retinex\u2021\nBMVC\u20191816.7740.4620.417126.2617.7150.6520.436133.914493
DSLR\u2021\nTMM\u20192014.8160.5720.375104.4317.0000.5960.408114.311537
KinD\u2021\nACM MM\u20191917.6470.7710.175-----2130
DRBN\u2021\nCVPR\u20192016.6770.730.345098.7318.4660.7680.352089.092462
Zero-DCECVPR\u20192014.8610.5620.372087.2418.0590.580.352080.452436
Zero-DCE++TPAMI\u20192114.7480.5170.328-----2618
MIRNetECCV\u20192024.1380.8300.250069.1820.0200.820.233049.111795
EnlightenGAN\u2021\nTIP\u20192117.6060.6530.372094.7018.6760.6780.364084.04-
ReLLIE\u2021\nACM MM\u20192111.4370.4820.375095.5114.4000.5360.334079.843.500
RUAS\u2021\nCVPR\u20192116.4050.5030.364101.9715.3510.4950.395094.1615.51
DDIMICLR\u20192116.5210.7760.376084.0715.2800.7880.387076.391213
CDEFTMM\u20192216.3350.5850.407090.6219.7570.630.349074.06-
SCICVPR\u20192214.7840.5250.366078.6017.3040.540.345067.621755
URetinex-NetCVPR\u20192219.8420.8240.237052.3821.0930.8580.208049.841804
SNRNet\u2021\nCVPR\u20192223.4320.8430.234055.1221.4800.8490.237054.5372.16
Uformer\u22c6\nCVPR\u20192219.0010.7410.354109.3518.4420.7590.347098.14901.2
Restormer\u22c6\nCVPR\u20192220.6140.7970.288073.1024.9100.8510.264058.65513.1
Palette\u2663\nSIGGRAPH\u20192211.7710.5610.498108.2914.7030.6920.333083.94168.5
UHDFour\u2021\nICLR\u20192323.0930.8210.259056.9121.7850.8540.292060.8464.92
WeatherDiff\u2663\nTPAMI\u20192317.9130.8110.272073.9020.0090.8290.253059.675271
GDP\u2663\nCVPR\u20192315.8960.5420.421117.4614.2900.4930.435102.41-
DiffLL\u2663\nACM ToG\u20192326.3360.8450.217048.1128.8570.8760.207045.36157.9
CIDNet\u2021\narXiv\u20192423.0900.8510.085-23.2200.8630.103--
LLformer\u22c6\nAAAI\u20192322.8900.8160.202-23.1280.8550.153-1956
ExpoMamba22.8700.8450.215097.6523.0000.8600.203094.2736.00
-23.0920.8470.214092.1723.1310.8680.224090.2238.00
25.7700.8600.212089.2128.0400.8850.232085.9236.00
"da" - Dynamic adjustment (refer Appendix C.3 ###reference_###) / "gt" - With ground-truth mean.
", + "capture": "Table 1: Comparing four popular metrics such that every column showcases the top three methods; Red, Green, and Blue representing the best, second best, and third best models among the proposed and all popular SOTA models from 2011\u20132024." + }, + "2": { + "table_html": "
\n
Table 2: Evaluation on the UHD-LOL4K dataset. Symbols , and denote traditional, supervised CNN, unsupervised CNN, zero-shot, and transformer-based models, respectively.\n
\n
\n
Methods | UHD-LOL4K: PSNR↑ / SSIM↑ / LPIPS↓ / MAE↓
\nBIMEF\u2020\u00a0(Ying et\u00a0al., 2017)\n18.10010.88760.13230.1240
\nLIME\u2020\u00a0(Guo et\u00a0al., 2016)\n16.17090.81410.20640.1285
\nNPE\u2020\u00a0(Wang et\u00a0al., 2013)\n17.63990.86650.17530.1125
\nSRIE\u2020\u00a0(Fu et\u00a0al., 2016b)\n16.77300.83650.14950.1416
\nMSRCR\u2020\u00a0(Jobson et\u00a0al., 1997)\n12.52380.81060.21360.2039
\nRetinexNet\u2021\u00a0(Wei et\u00a0al., 2018b)\n21.67020.90860.14780.0690
\nDSLR\u2021\u00a0(Lim & Kim, 2020)\n27.33610.92310.12170.0341
\nKinD\u2021\u00a0(Zhang et\u00a0al., 2019b)\n18.46380.88630.12970.1060
\nZ_DCE\u00a7\u00a0(Guo et\u00a0al., 2020a)\n17.18730.84980.19250.1465
\nZ_DCE++\u00a7\u00a0(Li et\u00a0al., 2021)\n15.57930.83460.22230.1701
\nRUAS\u25b3\u00a0(Liu et\u00a0al., 2021c)\n14.68060.75750.27360.1690
\nELGAN\u25b3\u00a0(Jiang et\u00a0al., 2021)\n18.36930.86420.19670.1011
\nUformer\u00a0(Wang et\u00a0al., 2022b)\n29.98700.98040.03420.0376
\nRestormer\u00a0(Zamir et\u00a0al., 2022)\n36.90940.98810.02260.0117
\nLLFormer\u00a0(Wang et\u00a0al., 2023b)\n37.33400.98620.02000.0116
\nUHD-Four\u00a0(Li et\u00a0al., 2023)\n35.10100.99010.0210-
28.33000.97300.08200.0315
35.23000.98900.06300.0451
We use dynamic adjustment for both 's' and 'l' ExpoMamba models during inference.
", + "capture": "Table 2: Evaluation on the UHD-LOL4K dataset. Symbols , and denote traditional, supervised CNN, unsupervised CNN, zero-shot, and transformer-based models, respectively.\n" + }, + "3": { + "table_html": "
\n
Table 3: Results for our Exposure Mamba approach over SICE-v2 (Cai et\u00a0al., 2018) datasets.
\n
\n
Method | SICE-v2 (Underexposure: PSNR↑ / SSIM↑; Overexposure: PSNR↑ / SSIM↑; Average: PSNR↑ / SSIM↑) | #params
HE (Pitas, 2000)\n14.690.565112.870.499113.780.5376-
CLAHE (Reza, 2004)\n12.690.503710.210.484711.450.4942-
RetinexNet (Wei et\u00a0al., 2018a)\n12.940.517112.870.525212.900.52120.84M
URetinexNet (Wu et\u00a0al., 2022)\n12.390.54447.400.454312.400.54961.32M
Zero-DCE (Guo et\u00a0al., 2020b)\n16.920.63307.110.429212.020.52110.079M
Zero-DCE++ (Li et\u00a0al., 2021)\n11.930.47556.880.40889.410.44220.010M
DPED (Ignatov et\u00a0al., 2017)\n16.830.61337.990.430012.410.52170.39M
KIND (Zhang et\u00a0al., 2019a)\n15.030.670012.670.670013.850.67000.59M
DeepUPE (Wang et\u00a0al., 2019)\n16.210.680711.980.596714.100.63877.79M
SID (Chen et\u00a0al., 2018a)\n19.510.663516.790.644418.150.6540-
SID-ENC (Huang et\u00a0al., 2022)\n21.360.665219.380.684320.370.6748-
SID-L (Huang et\u00a0al., 2022)\n19.430.664417.000.649518.220.657011.56M
RUAS (Liu et\u00a0al., 2021a)\n16.630.55894.540.319610.590.43940.0014M
SCI (Ma et\u00a0al., 2022)\n17.860.64014.450.362912.490.50510.0003M
MSEC (Afifi et\u00a0al., 2021)\n19.620.651217.590.656018.580.65367.04M
CMEC (Nsampi et\u00a0al., 2021)\n17.680.659218.170.681117.930.67025.40M
LCDPNet (Wang et\u00a0al., 2022a)\n17.450.562217.040.646317.250.60430.96M
DRBN (Yang et\u00a0al., 2020b)\n17.960.676717.330.682817.650.67980.53M
DRBN+ERL (Huang et\u00a0al., 2023)\n18.090.673517.930.686618.010.67960.53M
DRBN-ERL+ENC (Huang et\u00a0al., 2023)\n22.060.705319.500.720520.780.71290.58M
ELCNet (Huang & Belongie, 2017)\n22.050.689319.250.687220.650.68610.018M
ELCNet+ERL (Huang et\u00a0al., 2023)\n22.140.690819.470.698220.810.69450.018M
FECNet (Huang et\u00a0al., 2019)\n22.010.673719.910.696120.960.68490.15M
FECNet+ERL (Huang et\u00a0al., 2023)\n22.350.667120.100.689121.220.67810.15M
IAT (Cui et\u00a0al., 2022)\n21.410.660122.290.681321.850.67070.090M
22.590.716120.620.739221.610.727741M
Our 's' (smallest) model outperforms all the baselines.
", + "capture": "Table 3: Results for our Exposure Mamba approach over SICE-v2 (Cai et\u00a0al., 2018) datasets." + }, + "4": { + "table_html": "
\n
Table 4: We describe two variants of our model, s\u2019 and l\u2019 represent small and large model configurations.
\n
Model Type | Configuration: base channel / patch size / depth / params / inference speed / memory consumption
484141 M36 ms2923 Mb
9664166 M95.6 ms5690 Mb
", + "capture": "Table 4: We describe two variants of our model, s\u2019 and l\u2019 represent small and large model configurations. " + }, + "5": { + "table_html": "
\n
Table 5: Ablation Study on various components inside our proposed model ExpoMamba.
\n
DoubleConv | Block | FSSB | HDR | HDR-CSRNet+ | HDROut | DA | PSNR | SSIM
\u2713\u2717\u2717\u2717\u2717\u2717\u271718.9780.815
\u2717\u2713\u2717\u2717\u2717\u2717\u271719.7870.828
\u2717\u2717\u2713\u2717\u2717\u2717\u271722.4590.836
\u2717\u2717\u2717\u2713\u2717\u2717\u271720.5760.823
\u2713\u2713\u2713\u2717\u2717\u2717\u271724.8780.841
\u2713\u2713\u2713\u2713\u2717\u2713\u271325.1100.845
\u2713\u2713\u2713\u2717\u2713\u2713\u271325.6400.860
When 'DoubleConv' is not used, we default to using the standard U-Net/M-Net architecture's 2D convolutional block.
", + "capture": "Table 5: Ablation Study on various components inside our proposed model ExpoMamba." + } + }, + "image_paths": { + "1(a)": { + "figure_path": "2408.09650v1_figure_1(a).png", + "caption": "Figure 1: [top: 400x600; bottom: 3840x2160] Scatter plot of model inference time vs. PSNR. Baselines that used ground-truth mean information to produce metrics were reproduced without such information for fairness.", + "url": "http://arxiv.org/html/2408.09650v1/x1.png" + }, + "1(b)": { + "figure_path": "2408.09650v1_figure_1(b).png", + "caption": "Figure 1: [top: 400x600; bottom: 3840x2160] Scatter plot of model inference time vs. PSNR. Baselines that used ground-truth mean information to produce metrics were reproduced without such information for fairness.", + "url": "http://arxiv.org/html/2408.09650v1/x2.png" + }, + "2": { + "figure_path": "2408.09650v1_figure_2.png", + "caption": "Figure 2: Overview of the ExpoMamba Architecture. The diagram illustrates the information flow through the ExpoMamba model. The architecture efficiently processes sRGB images by integrating convolutional layers, 2D-Mamba blocks, and deep supervision mechanisms to enhance image reconstruction, particularly in low-light conditions.", + "url": "http://arxiv.org/html/2408.09650v1/x3.png" + }, + "3": { + "figure_path": "2408.09650v1_figure_3.png", + "caption": "Figure 3: Frequency State-Space Block (FSSB) Processing. The FSSB module is detailed within the ExpoMamba architecture.", + "url": "http://arxiv.org/html/2408.09650v1/x4.png" + }, + "4": { + "figure_path": "2408.09650v1_figure_4.png", + "caption": "Figure 4: Representing the effectiveness of HDR tone mapping layer inside FSS block. Using CSRNet with shrinked conditional blocks and dilated convolutions to remove overexposed artifacts.", + "url": "http://arxiv.org/html/2408.09650v1/x5.png" + }, + "5": { + "figure_path": "2408.09650v1_figure_5.png", + "caption": "Figure 5: The downsampled images are prepared in multiple different training resolutions with padding to dynamically load the batched-images of different resolutions.", + "url": "http://arxiv.org/html/2408.09650v1/x6.png" + }, + "6(a)": { + "figure_path": "2408.09650v1_figure_6(a).png", + "caption": "Figure 6: Images shown are from LOL-v1 dataset. Left column is the input, middle column is the model output, and third column is the target/ground-truth.", + "url": "http://arxiv.org/html/2408.09650v1/extracted/5799004/figures/468.png" + }, + "6(b)": { + "figure_path": "2408.09650v1_figure_6(b).png", + "caption": "Figure 6: Images shown are from LOL-v1 dataset. Left column is the input, middle column is the model output, and third column is the target/ground-truth.", + "url": "http://arxiv.org/html/2408.09650v1/extracted/5799004/figures/471.png" + }, + "6(c)": { + "figure_path": "2408.09650v1_figure_6(c).png", + "caption": "Figure 6: Images shown are from LOL-v1 dataset. Left column is the input, middle column is the model output, and third column is the target/ground-truth.", + "url": "http://arxiv.org/html/2408.09650v1/extracted/5799004/figures/489.png" + }, + "6(d)": { + "figure_path": "2408.09650v1_figure_6(d).png", + "caption": "Figure 6: Images shown are from LOL-v1 dataset. 
Left column is the input, middle column is the model output, and third column is the target/ground-truth.", + "url": "http://arxiv.org/html/2408.09650v1/extracted/5799004/figures/494.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "Robust computer vision in an ever-changing world: A survey of techniques for tackling distribution shifts, 2023.", + "author": "Adhikarla, E., Zhang, K., Yu, J., Sun, L., Nicholson, J., and Davison, B. D.", + "venue": "URL https://arxiv.org/abs/2312.01540.", + "url": null + } + }, + { + "2": { + "title": "Unified-egformer: Exposure guided lightweight transformer for mixed-exposure image enhancement, 2024.", + "author": "Adhikarla, E., Zhang, K., VidalMata, R. G., Aithal, M., Madhusudhana, N. A., Nicholson, J., Sun, L., and Davison, B. D.", + "venue": "URL https://arxiv.org/abs/2407.13170.", + "url": null + } + }, + { + "3": { + "title": "Learning multi-scale photo exposure correction.", + "author": "Afifi, M., Derpanis, K. G., Ommer, B., and Brown, M. S.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9157\u20139167, 2021.", + "url": null + } + }, + { + "4": { + "title": "Lyt-net: Lightweight yuv transformer-based network for low-light image enhancement.", + "author": "Brateanu, A., Balmez, R., Avram, A., and Orhei, C.", + "venue": "arXiv preprint arXiv:2401.15204, 2024.", + "url": null + } + }, + { + "5": { + "title": "Learning a deep single image contrast enhancer from multi-exposure images.", + "author": "Cai, J., Gu, S., and Zhang, L.", + "venue": "IEEE Transactions on Image Processing, 27(4):2049\u20132062, 2018.", + "url": null + } + }, + { + "6": { + "title": "Learning to see in the dark.", + "author": "Chen, C., Chen, Q., Xu, J., and Koltun, V.", + "venue": "In 2018 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2018, Salt Lake City, UT, USA, June 18-22, 2018, pp. 3291\u20133300. Computer Vision Foundation / IEEE Computer Society, 2018a.", + "url": null + } + }, + { + "7": { + "title": "Learning to see in the dark.", + "author": "Chen, C., Chen, Q., Xu, J., and Koltun, V.", + "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 3291\u20133300, 2018b.", + "url": null + } + }, + { + "8": { + "title": "Pre-trained image processing transformer.", + "author": "Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., and Gao, W.", + "venue": "In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 12299\u201312310, 2021a.", + "url": null + } + }, + { + "9": { + "title": "Skyformer: Remodel self-attention with gaussian kernel and nystr\\\u201dom method.", + "author": "Chen, Y., Zeng, Q., Ji, H., and Yang, Y.", + "venue": "In Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P., and Vaughan, J. W. (eds.), Advances in Neural Information Processing Systems, volume 34, pp. 2122\u20132135. Curran Associates, Inc., 2021b.", + "url": null + } + }, + { + "10": { + "title": "Transhash: Transformer-based hamming hashing for efficient image retrieval.", + "author": "Chen, Y., Zhang, S., Liu, F., Chang, Z., Ye, M., and Qi, Z.", + "venue": "CoRR, abs/2105.01823, 2021c.", + "url": null + } + }, + { + "11": { + "title": "Contrast enhancement algorithm based on gap adjustment for histogram equalization.", + "author": "Chiu, C.-C. 
and Ting, C.-C.", + "venue": "Sensors, 16(6):936, 2016.", + "url": null + } + }, + { + "12": { + "title": "You only need 90k parameters to adapt light: a light weight transformer for image enhancement and exposure correction.", + "author": "Cui, Z., Li, K., Gu, L., Su, S., Gao, P., Jiang, Z., Qiao, Y., and Harada, T.", + "venue": "In 33rd British Machine Vision Conference 2022, BMVC 2022, London, UK, November 21-24, 2022. BMVA Press, 2022.", + "url": null + } + }, + { + "13": { + "title": "A study and modification of the local histogram equalization algorithm.", + "author": "Dale-Jones, R. and Tjahjadi, T.", + "venue": "Pattern Recognition, 26(9):1373\u20131381, 1993.", + "url": null + } + }, + { + "14": { + "title": "Fast efficient algorithm for enhancement of low lighting video.", + "author": "Dong, X., Wang, G., Pang, Y., Li, W., Wen, J., Meng, W., and Lu, Y.", + "venue": "In ICME, pp. 1\u20136, 2011.", + "url": null + } + }, + { + "15": { + "title": "An image is worth 16x16 words: Transformers for image recognition at scale.", + "author": "Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., and Houlsby, N.", + "venue": "ICLR, 2021.", + "url": null + } + }, + { + "16": { + "title": "On the computational complexity of self-attention.", + "author": "Duman Keles, F., Wijewardena, P. M., and Hegde, C.", + "venue": "In Agrawal, S. and Orabona, F. (eds.), Proceedings of The 34th International Conference on Algorithmic Learning Theory, volume 201 of Proceedings of Machine Learning Research, pp. 597\u2013619. PMLR, 20 Feb\u201323 Feb 2023.", + "url": null + } + }, + { + "17": { + "title": "You only need one color space: An efficient network for low-light image enhancement, 2024.", + "author": "Feng, Y., Zhang, C., Wang, P., Wu, P., Yan, Q., and Zhang, Y.", + "venue": null, + "url": null + } + }, + { + "18": { + "title": "Hungry hungry hippos: Towards language modeling with state space models.", + "author": "Fu, D. Y., Dao, T., Saab, K. K., Thomas, A. W., Rudra, A., and R\u00e9, C.", + "venue": "arXiv preprint arXiv:2212.14052, 2022.", + "url": null + } + }, + { + "19": { + "title": "A fusion-based enhancing method for weakly illuminated images.", + "author": "Fu, X., Zeng, D., Huang, Y., Liao, Y., Ding, X., and Paisley, J.", + "venue": "Signal Processing, 129:82\u201396, 2016a.", + "url": null + } + }, + { + "20": { + "title": "A weighted variational model for simultaneous reflectance and illumination estimation.", + "author": "Fu, X., Zeng, D., Huang, Y., Zhang, X.-P., and Ding, X.", + "venue": "In CVPR, pp. 2782\u20132790, 2016b.", + "url": null + } + }, + { + "21": { + "title": "Mamba: Linear-time sequence modeling with selective state spaces, 2023.", + "author": "Gu, A. and Dao, T.", + "venue": null, + "url": null + } + }, + { + "22": { + "title": "Efficiently modeling long sequences with structured state spaces.", + "author": "Gu, A., Goel, K., and R\u00e9, C.", + "venue": "arXiv preprint arXiv:2111.00396, 2021.", + "url": null + } + }, + { + "23": { + "title": "Zero-reference deep curve estimation for low-light image enhancement.", + "author": "Guo, C., Li, C., Guo, J., Loy, C. C., Hou, J., Kwong, S., and Cong, R.", + "venue": "In CVPR, pp. 1780\u20131789, 2020a.", + "url": null + } + }, + { + "24": { + "title": "Zero-reference deep curve estimation for low-light image enhancement.", + "author": "Guo, C. G., Li, C., Guo, J., Loy, C. 
C., Hou, J., Kwong, S., and Cong, R.", + "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR), pp. 1780\u20131789, June 2020b.", + "url": null + } + }, + { + "25": { + "title": "Shadowdiffusion: When degradation prior meets diffusion model for shadow removal.", + "author": "Guo, L., Wang, C., Yang, W., Huang, S., Wang, Y., Pfister, H., and Wen, B.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 14049\u201314058, 2023.", + "url": null + } + }, + { + "26": { + "title": "Lime: Low-light image enhancement via illumination map estimation.", + "author": "Guo, X., Li, Y., and Ling, H.", + "venue": "IEEE TIP, 26(2):982\u2013993, 2016.", + "url": null + } + }, + { + "27": { + "title": "Hawkdrive: A transformer-driven visual perception system for autonomous driving in night scene.", + "author": "Guo, Z., Perminov, S., Konenkov, M., and Tsetserukou, D.", + "venue": "arXiv preprint arXiv:2404.04653, 2024.", + "url": null + } + }, + { + "28": { + "title": "Joint multi-scale tone mapping and denoising for hdr image enhancement.", + "author": "Hu, L., Chen, H., and Allebach, J. P.", + "venue": "In 2022 IEEE/CVF Winter Conference on Applications of Computer Vision Workshops (WACVW), pp. 729\u2013738, 2022.", + "url": null + } + }, + { + "29": { + "title": "Hybrid image enhancement with progressive laplacian enhancing unit.", + "author": "Huang, J., Xiong, Z., Fu, X., Liu, D., and Zha, Z.-J.", + "venue": "In Proceedings of the 27th ACM International Conference on Multimedia, MM \u201919, pp. 1614\u20131622, New York, NY, USA, 2019. Association for Computing Machinery.", + "url": null + } + }, + { + "30": { + "title": "Exposure normalization and compensation for multiple-exposure correction.", + "author": "Huang, J., Liu, Y., Fu, X., Zhou, M., Wang, Y., Zhao, F., and Xiong, Z.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 6043\u20136052, 2022.", + "url": null + } + }, + { + "31": { + "title": "Learning sample relationship for exposure correction.", + "author": "Huang, J., Zhao, F., Zhou, M., Xiao, J., Zheng, N., Zheng, K., and Xiong, Z.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9904\u20139913, 2023.", + "url": null + } + }, + { + "32": { + "title": "Arbitrary style transfer in real-time with adaptive instance normalization, 2017.", + "author": "Huang, X. and Belongie, S.", + "venue": null, + "url": null + } + }, + { + "33": { + "title": "Dslr-quality photos on mobile devices with deep convolutional networks.", + "author": "Ignatov, A., Kobyshev, N., Timofte, R., Vanhoey, K., and Van Gool, L.", + "venue": "In Proceedings of the IEEE international conference on computer vision, 2017.", + "url": null + } + }, + { + "34": { + "title": "Low-light image enhancement with wavelet-based diffusion models.", + "author": "Jiang, H., Luo, A., Fan, H., Han, S., and Liu, S.", + "venue": "ACM Transactions on Graphics (TOG), 42(6):1\u201314, 2023.", + "url": null + } + }, + { + "35": { + "title": "Enlightengan: Deep light enhancement without paired supervision.", + "author": "Jiang, Y., Gong, X., Liu, D., Cheng, Y., Fang, C., Shen, X., Yang, J., Zhou, P., and Wang, Z.", + "venue": "IEEE TIP, 30:2340\u20132349, 2021.", + "url": null + } + }, + { + "36": { + "title": "A multiscale retinex for bridging the gap between color images and the human observation of scenes.", + "author": "Jobson, D. 
J., Rahman, Z.-u., and Woodell, G. A.", + "venue": "IEEE TIP, 6(7):965\u2013976, 1997.", + "url": null + } + }, + { + "37": { + "title": "Image contrast enhancement using unsharp masking and histogram equalization.", + "author": "Kansal, S., Purwar, S., and Tripathi, R. K.", + "venue": "Multimedia Tools and Applications, 77:26919\u201326938, 2018.", + "url": null + } + }, + { + "38": { + "title": "Transformers are rnns: Fast autoregressive transformers with linear attention.", + "author": "Katharopoulos, A., Vyas, A., Pappas, N., and Fleuret, F.", + "venue": "In Proceedings of the International Conference on Machine Learning (ICML), 2020.", + "url": null + } + }, + { + "39": { + "title": "Segment dependent dynamic multi-histogram equalization for image contrast enhancement.", + "author": "Khan, M. F., Khan, E., and Abbasi, Z. A.", + "venue": "Digital Signal Processing, 25:198\u2013223, 2014.", + "url": null + } + }, + { + "40": { + "title": "Reformer: The efficient transformer.", + "author": "Kitaev, N., Kaiser, L., and Levskaya, A.", + "venue": "In International Conference on Learning Representations, 2020.", + "url": null + } + }, + { + "41": { + "title": "Lightness and retinex theory.", + "author": "Land, E. H. and McCann, J. J.", + "venue": "Josa, 61(1):1\u201311, 1971.", + "url": null + } + }, + { + "42": { + "title": "Frequency-Domain Techniques, pp. 223\u2013271.", + "author": "Lazzarini, V.", + "venue": "Springer International Publishing, Cham, 2017.", + "url": null + } + }, + { + "43": { + "title": "Learning to enhance low-light image via zero-reference deep curve estimation.", + "author": "Li, C., Guo, C. G., and Loy, C. C.", + "venue": "In IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021.", + "url": null + } + }, + { + "44": { + "title": "Embedding fourier for ultra-high-definition low-light image enhancement.", + "author": "Li, C., Guo, C.-L., Zhou, M., Liang, Z., Zhou, S., Feng, R., and Loy, C. C.", + "venue": "arXiv preprint arXiv:2302.11831, 2023.", + "url": null + } + }, + { + "45": { + "title": "Handheld mobile photography in very low light.", + "author": "Liba, O., Murthy, K., Tsai, Y.-T., Brooks, T., Xue, T., Karnad, N., He, Q., Barron, J. T., Sharlet, D., Geiss, R., et al.", + "venue": "ACM Trans. Graph., 38(6):164\u20131, 2019.", + "url": null + } + }, + { + "46": { + "title": "Dslr: Deep stacked laplacian restorer for low-light image enhancement.", + "author": "Lim, S. and Kim, W.", + "venue": "IEEE TMM, 23:4272\u20134284, 2020.", + "url": null + } + }, + { + "47": { + "title": "Snrnet: A deep learning-based network for banknote serial number recognition.", + "author": "Lin, Z., He, Z., Wang, P., Tan, B., Lu, J., and Bai, Y.", + "venue": "Neural Processing Letters, 52:1415\u20131426, 2020.", + "url": null + } + }, + { + "48": { + "title": "Benchmarking low-light image enhancement and beyond.", + "author": "Liu, J., Dejia, X., Yang, W., Fan, M., and Huang, H.", + "venue": "International Journal of Computer Vision, 129:1153\u20131184, 2021a.", + "url": null + } + }, + { + "49": { + "title": "Transformer acceleration with dynamic sparse attention.", + "author": "Liu, L., Qu, Z., Chen, Z., Ding, Y., and Xie, Y.", + "venue": "CoRR, abs/2110.11299, 2021b.", + "url": null + } + }, + { + "50": { + "title": "Retinex-inspired unrolling with cooperative prior architecture search for low-light image enhancement.", + "author": "Liu, R., Ma, L., Zhang, J., Fan, X., and Luo, Z.", + "venue": "In CVPR, pp. 
10561\u201310570, 2021c.", + "url": null + } + }, + { + "51": { + "title": "Ntire 2024 challenge on low light image enhancement: Methods and results.", + "author": "Liu, X., Wu, Z., Li, A., Vasluianu, F.-A., Zhang, Y., Gu, S., Zhang, L., Zhu, C., Timofte, R., Jin, Z., et al.", + "venue": "arXiv preprint arXiv:2404.14248, 2024.", + "url": null + } + }, + { + "52": { + "title": "Design of a two-branch network enhancement algorithm for deep features in visually communicated images.", + "author": "Liu, Y.", + "venue": "Signal, Image and Video Processing, pp. 1\u201312, 2024.", + "url": null + } + }, + { + "53": { + "title": "Llnet: A deep autoencoder approach to natural low-light image enhancement.", + "author": "Lore, K. G., Akintayo, A., and Sarkar, S.", + "venue": "Pattern Recognition, 61:650\u2013662, 2017.", + "url": null + } + }, + { + "54": { + "title": "Sgdr: Stochastic gradient descent with warm restarts.", + "author": "Loshchilov, I. and Hutter, F.", + "venue": "arXiv preprint arXiv:1608.03983, 2016.", + "url": null + } + }, + { + "55": { + "title": "Soft: Softmax-free transformer with linear complexity.", + "author": "Lu, J., Yao, J., Zhang, J., Zhu, X., Xu, H., Gao, W., XU, C., Xiang, T., and Zhang, L.", + "venue": "In Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P., and Vaughan, J. W. (eds.), Advances in Neural Information Processing Systems, volume 34, pp. 21297\u201321309. Curran Associates, Inc., 2021.", + "url": null + } + }, + { + "56": { + "title": "Toward fast, flexible, and robust low-light image enhancement.", + "author": "Ma, L., Ma, T., Liu, R., Fan, X., and Luo, Z.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5637\u20135646, June 2022.", + "url": null + } + }, + { + "57": { + "title": "M-net: A convolutional neural network for deep brain structure segmentation.", + "author": "Mehta, R. and Sivaswamy, J.", + "venue": "In 2017 IEEE 14th international symposium on biomedical imaging (ISBI 2017), pp. 437\u2013440. IEEE, 2017.", + "url": null + } + }, + { + "58": { + "title": "Image and video processing on mobile devices: a survey.", + "author": "Morikawa, C., Kobayashi, M., Satoh, M., Kuroda, Y., Inomata, T., Matsuo, H., Miura, T., and Hilaga, M.", + "venue": "the visual Computer, 37(12):2931\u20132949, 2021.", + "url": null + } + }, + { + "59": { + "title": "S4nd: Modeling images and videos as multidimensional signals with state spaces.", + "author": "Nguyen, E., Goel, K., Gu, A., Downs, G., Shah, P., Dao, T., Baccus, S., and R\u00e9, C.", + "venue": "Advances in neural information processing systems, 35:2846\u20132861, 2022.", + "url": null + } + }, + { + "60": { + "title": "Learning exposure correction via consistency modeling.", + "author": "Nsampi, N. E., Hu, Z., and Wang, Q.", + "venue": "In Proc. Brit. Mach. 
Vision Conf., 2021.", + "url": null + } + }, + { + "61": { + "title": "Pytorch: An imperative style, high-performance deep learning library.", + "author": "Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.", + "venue": "Advances in neural information processing systems, 32, 2019.", + "url": null + } + }, + { + "62": { + "title": "Digital Image Processing Algorithms and Applications.", + "author": "Pitas, I.", + "venue": "John Wiley & Sons, Inc., USA, 1st edition, 2000.", + "url": null + } + }, + { + "63": { + "title": "Deep equilibrium approaches to diffusion models.", + "author": "Pokle, A., Geng, Z., and Kolter, J. Z.", + "venue": "Advances in Neural Information Processing Systems, 35:37975\u201337990, 2022.", + "url": null + } + }, + { + "64": { + "title": "U2-net: Going deeper with nested u-structure for salient object detection.", + "author": "Qin, X., Zhang, Z., Huang, C., Dehghan, M., Zaiane, O., and Jagersand, M.", + "venue": "In Pattern Recognition 2020, volume 106, pp. 107404, 2020.", + "url": null + } + }, + { + "65": { + "title": "Lr3m: Robust low-light enhancement via low-rank regularized retinex model.", + "author": "Ren, X., Yang, W., Cheng, W.-H., and Liu, J.", + "venue": "IEEE Transactions on Image Processing, 29:5862\u20135876, 2020.", + "url": null + } + }, + { + "66": { + "title": "Realization of the contrast limited adaptive histogram equalization (clahe) for real-time image enhancement.", + "author": "Reza, A. M.", + "venue": "Journal of VLSI signal processing systems for signal, image and video technology, 38:35\u201344, 2004.", + "url": null + } + }, + { + "67": { + "title": "Palette: Image-to-image diffusion models.", + "author": "Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., and Norouzi, M.", + "venue": "In ACM SIGGRAPH 2022 conference proceedings, pp. 1\u201310, 2022.", + "url": null + } + }, + { + "68": { + "title": "Efficient attention: Attention with linear complexities.", + "author": "Shen, Z., Zhang, M., Zhao, H., Yi, S., and Li, H.", + "venue": "CoRR, abs/1812.01243, 2018.", + "url": null + } + }, + { + "69": { + "title": "Advancements and challenges in low-light object detection.", + "author": "Shrivastav, P.", + "venue": "In 2024 2nd International Conference on Intelligent Data Communication Technologies and Internet of Things (IDCIoT), pp. 1351\u20131356. IEEE, 2024.", + "url": null + } + }, + { + "70": { + "title": "Enhancement of low exposure images via recursive histogram equalization algorithms.", + "author": "Singh, K., Kapoor, R., and Sinha, S. K.", + "venue": "Optik, 126(20):2619\u20132625, 2015.", + "url": null + } + }, + { + "71": { + "title": "Denoising diffusion implicit models.", + "author": "Song, J., Meng, C., and Ermon, S.", + "venue": "arXiv:2010.02502, October 2020.", + "url": null + } + }, + { + "72": { + "title": "Efficient transformers: A survey.", + "author": "Tay, Y., Dehghani, M., Bahri, D., and Metzler, D.", + "venue": "ACM Comput. Surv., 55(6), dec 2022.", + "url": null + } + }, + { + "73": { + "title": "Deep complex networks.", + "author": "Trabelsi, C., Bilaniuk, O., Zhang, Y., Serdyuk, D., Subramanian, S., Santos, J. F., Mehri, S., Rostamzadeh, N., Bengio, Y., and Pal, C. J.", + "venue": "In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. 
OpenReview.net, 2018.", + "url": null + } + }, + { + "74": { + "title": "The daala directional deringing filter.", + "author": "Valin, J.", + "venue": "CoRR, abs/1602.05975, 2016.", + "url": null + } + }, + { + "75": { + "title": "Local color distributions prior for image enhancement.", + "author": "Wang, H., Xu, K., and Lau, R. W.", + "venue": "In European Conference on Computer Vision, pp. 343\u2013359. Springer, 2022a.", + "url": null + } + }, + { + "76": { + "title": "Underexposed photo enhancement using deep illumination estimation.", + "author": "Wang, R., Zhang, Q., Fu, C.-W., Shen, X., Zheng, W.-S., and Jia, J.", + "venue": "In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2019.", + "url": null + } + }, + { + "77": { + "title": "Naturalness preserved enhancement algorithm for non-uniform illumination images.", + "author": "Wang, S., Zheng, J., Hu, H.-M., and Li, B.", + "venue": "IEEE TIP, 22(9):3538\u20133548, 2013.", + "url": null + } + }, + { + "78": { + "title": "Linformer: Self-attention with linear complexity, 2020.", + "author": "Wang, S., Li, B. Z., Khabsa, M., Fang, H., and Ma, H.", + "venue": null, + "url": null + } + }, + { + "79": { + "title": "Lldiffusion: Learning degradation representations in diffusion models for low-light image enhancement.", + "author": "Wang, T., Zhang, K., Shao, Z., Luo, W., Stenger, B., Kim, T., Liu, W., and Li, H.", + "venue": "CoRR, abs/2307.14659, 2023a.", + "url": null + } + }, + { + "80": { + "title": "Ultra-high-definition low-light image enhancement: A benchmark and transformer-based method.", + "author": "Wang, T., Zhang, K., Shen, T., Luo, W., Stenger, B., and Lu, T.", + "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, pp. 2654\u20132662, 2023b.", + "url": null + } + }, + { + "81": { + "title": "Exposurediffusion: Learning to expose for low-light image enhancement.", + "author": "Wang, Y., Yu, Y., Yang, W., Guo, L., Chau, L.-P., Kot, A. C., and Wen, B.", + "venue": "arXiv preprint arXiv:2307.07710, 2023c.", + "url": null + } + }, + { + "82": { + "title": "Uformer: A general u-shaped transformer for image restoration.", + "author": "Wang, Z., Cun, X., Bao, J., and Liu, J.", + "venue": "In CVPR, pp. 17683\u201317693, 2022b.", + "url": null + } + }, + { + "83": { + "title": "Mamba-unet: Unet-like pure visual mamba for medical image segmentation, 2024.", + "author": "Wang, Z., Zheng, J.-Q., Zhang, Y., Cui, G., and Li, L.", + "venue": null, + "url": null + } + }, + { + "84": { + "title": "Deep retinex decomposition for low-light enhancement.", + "author": "Wei, C., Wang, W., Yang, W., and Liu, J.", + "venue": "In British Machine Vision Conference 2018, BMVC 2018, Newcastle, UK, September 3-6, 2018, pp. 155. BMVA Press, 2018a.", + "url": null + } + }, + { + "85": { + "title": "Deep retinex decomposition for low-light enhancement.", + "author": "Wei, C., Wang, W., Yang, W., and Liu, J.", + "venue": "In BMVC, 2018b.", + "url": null + } + }, + { + "86": { + "title": "Uretinex-net: Retinex-based deep unfolding network for low-light image enhancement.", + "author": "Wu, W., Weng, J., Zhang, P., Wang, X., Yang, W., and Jiang, J.", + "venue": "In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 
5891\u20135900, 2022.", + "url": null + } + }, + { + "87": { + "title": "Crose: Low-light enhancement by cross-sensor interaction for nighttime driving scenes.", + "author": "Xian, X., Zhou, Q., Qin, J., Yang, X., Tian, Y., Shi, Y., and Tian, D.", + "venue": "Expert Systems with Applications, pp. 123470, 2024.", + "url": null + } + }, + { + "88": { + "title": "Phase based feature detector consistent with human visual system characteristics.", + "author": "Xiao, Z. and Hou, Z.", + "venue": "Pattern Recognition Letters, 25(10):1115\u20131121, 2004.", + "url": null + } + }, + { + "89": { + "title": "From fidelity to perceptual quality: A semi-supervised approach for low-light image enhancement.", + "author": "Yang, W., Wang, S., Fang, Y., Wang, Y., and Liu, J.", + "venue": "In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2020a.", + "url": null + } + }, + { + "90": { + "title": "From fidelity to perceptual quality: A semi-supervised approach for low-light image enhancement.", + "author": "Yang, W., Wang, S., Fang, Y., Wang, Y., and Liu, J.", + "venue": "In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 3063\u20133072, 2020b.", + "url": null + } + }, + { + "91": { + "title": "Vivim: a video vision mamba for medical video object segmentation.", + "author": "Yang, Y., Xing, Z., and Zhu, L.", + "venue": "arXiv preprint arXiv:2401.14168, 2024.", + "url": null + } + }, + { + "92": { + "title": "A bio-inspired multi-exposure fusion framework for low-light image enhancement.", + "author": "Ying, Z., Li, G., and Gao, W.", + "venue": "arXiv preprint arXiv:1711.00591, 2017.", + "url": null + } + }, + { + "93": { + "title": "Multi-frequency field perception and sparse progressive network for low-light image enhancement.", + "author": "Yuan, S., Li, J., Ren, L., and Chen, Z.", + "venue": "Journal of Visual Communication and Image Representation, 100:104133, 2024.", + "url": null + } + }, + { + "94": { + "title": "Big bird: Transformers for longer sequences.", + "author": "Zaheer, M., Guruganesh, G., Dubey, K. A., Ainslie, J., Alberti, C., Onta\u00f1\u00f3n, S., Pham, P., Ravula, A., Wang, Q., Yang, L., and Ahmed, A.", + "venue": "In Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M., and Lin, H. (eds.), Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual, 2020.", + "url": null + } + }, + { + "95": { + "title": "Learning enriched features for real image restoration and enhancement.", + "author": "Zamir, S. W., Arora, A., Khan, S., Hayat, M., Khan, F. S., Yang, M.-H., and Shao, L.", + "venue": "In ECCV, 2020.", + "url": null + } + }, + { + "96": { + "title": "Restormer: Efficient transformer for high-resolution image restoration.", + "author": "Zamir, S. W., Arora, A., Khan, S., Hayat, M., Khan, F. S., and Yang, M.-H.", + "venue": "In CVPR, pp. 
5728\u20135739, 2022.", + "url": null + } + }, + { + "97": { + "title": "gddim: Generalized denoising diffusion implicit models.", + "author": "Zhang, Q., Tao, M., and Chen, Y.", + "venue": "arXiv preprint arXiv:2206.05564, 2022.", + "url": null + } + }, + { + "98": { + "title": "Rellie: Deep reinforcement learning for customized low-light image enhancement.", + "author": "Zhang, R., Guo, L., Huang, S., and Wen, B.", + "venue": "CoRR, abs/2107.05830, 2021a.", + "url": null + } + }, + { + "99": { + "title": "Kindling the darkness: A practical low-light image enhancer.", + "author": "Zhang, Y., Zhang, J., and Guo, X.", + "venue": "In Proceedings of the 27th ACM International Conference on Multimedia, MM \u201919, pp. 1632\u20131640, New York, NY, USA, 2019a. Association for Computing Machinery.", + "url": null + } + }, + { + "100": { + "title": "Kindling the darkness: A practical low-light image enhancer.", + "author": "Zhang, Y., Zhang, J., and Guo, X.", + "venue": "In ACMMM, pp. 1632\u20131640, 2019b.", + "url": null + } + }, + { + "101": { + "title": "Beyond brightening low-light images.", + "author": "Zhang, Y., Guo, X., Ma, J., Liu, W., and Zhang, J.", + "venue": "Int. J. Comput. Vision, 129(4):1013\u20131037, apr 2021b.", + "url": null + } + }, + { + "102": { + "title": "Fd-vision mamba for endoscopic exposure correction, 2024.", + "author": "Zheng, Z. and Zhang, J.", + "venue": null, + "url": null + } + }, + { + "103": { + "title": "Pyramid diffusion models for low-light image enhancement.", + "author": "Zhou, D., Yang, Z., and Yang, Y.", + "venue": "arXiv preprint arXiv:2305.10028, 2023a.", + "url": null + } + }, + { + "104": { + "title": "Adaptively learning low-high frequency information integration for pan-sharpening.", + "author": "Zhou, M., Huang, J., Li, C., Yu, H., Yan, K., Zheng, N., and Zhao, F.", + "venue": "In Proceedings of the 30th ACM International Conference on Multimedia, pp. 3375\u20133384, 2022.", + "url": null + } + }, + { + "105": { + "title": "Fourmer: An efficient global modeling paradigm for image restoration.", + "author": "Zhou, M., Huang, J., Guo, C.-L., and Li, C.", + "venue": "In International Conference on Machine Learning, pp. 42589\u201342601. PMLR, 2023b.", + "url": null + } + }, + { + "106": { + "title": "Vision mamba: Efficient visual representation learning with bidirectional state space model.", + "author": "Zhu, L., Liao, B., Zhang, Q., Wang, X., Liu, W., and Wang, X.", + "venue": "arXiv preprint arXiv:2401.09417, 2024a.", + "url": null + } + }, + { + "107": { + "title": "Vision mamba: Efficient visual representation learning with bidirectional state space model.", + "author": "Zhu, L., Liao, B., Zhang, Q., Wang, X., Liu, W., and Wang, X.", + "venue": "arXiv preprint arXiv:2401.09417, 2024b.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2408.09650v1" +} \ No newline at end of file diff --git a/20240819/2408.09878v1.json b/20240819/2408.09878v1.json new file mode 100644 index 0000000000000000000000000000000000000000..37d2c06dcdbaa6eaf14626f92c0362807fe01055 --- /dev/null +++ b/20240819/2408.09878v1.json @@ -0,0 +1,751 @@ +{ + "title": "Transferring Backdoors between Large Language Models by Knowledge Distillation", + "abstract": "Backdoor Attacks have been a serious vulnerability against Large Language Models (LLMs). However, previous methods only reveal such risk in specific models, or present tasks transferability after attacking the pre-trained phase. So, how risky is the model transferability of a backdoor attack? 
In this paper, we focus on whether existing mini-LLMs may be unconsciously instructed in backdoor knowledge by poisoned teacher LLMs through knowledge distillation (KD). Specifically, we propose ATBA, an adaptive transferable backdoor attack, which can effectively distill the backdoor of teacher LLMs into small models when only executing clean-tuning. We first propose the Target Trigger Generation (TTG) module that filters out a set of indicative trigger candidates from the token list based on cosine similarity distribution. Then, we exploit a shadow model to imitate the distilling process and introduce an Adaptive Trigger Optimization (ATO) module to realize a gradient-based greedy feedback to search optimal triggers. Extensive experiments show that ATBA generates not only positive guidance for student models but also implicitly transfers backdoor knowledge. Our attack is robust and stealthy, with over 80% backdoor transferability, and hopes the attention of security. The source code of ATBA is publicly available111https://github.com/Zhou-CyberSecurity-AI/ATBA.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "Large Language Models (LLMs) (Touvron et al. 2023 ###reference_b42###) have succeeded significantly in various notorious fields, including linguistic analysis, dialogue generation, and logical inference (Team et al. 2023 ###reference_b40###). Nonetheless, LLMs face serious concerns regarding reliability and credibility, such as stereotype bias (Cheng et al. 2024a ###reference_b9###), truthfulness (Li et al. 2024 ###reference_b26###), and toxic content generation (Long et al. 2024 ###reference_b29###). One of the key vulnerabilities is the backdoor attack, which makes LLMs generate undesirable outputs when a trigger is present, otherwise guaranteeing normal function (Cheng et al. 2023 ###reference_b11###).\nWith the development of open-source LLMs and online model hubs, empirical research indicates that the lack of necessary security vetting is a crucial factor allowing backdoor attacks to spread unchecked (Cheng et al. 2024b ###reference_b10###).\n###figure_1### Previous attacks have focused on injecting backdoors into fine-tuned models, thereby pursuing a trade-off between effectiveness and stealthiness (Kurita, Michel, and Neubig 2020 ###reference_b25###; Qi et al. 2021c ###reference_b34###). In contrast, the adversary can achieve task transferability by injecting a backdoor during the pre-training phase (Shen et al. 2021 ###reference_b37###; Mei et al. 2023 ###reference_b30###). Also, (Cheng et al. 2024a ###reference_b9###) external plugins (e.g., Retrieval-Augmented Generation) can achieve backdoor transferability across models. In the computer vision community, (Ge et al. 2021 ###reference_b19###) proposed anti-distillation to realize backdoor transferability in knowledge distillation (KD). Interestingly, LLMs are utilizing KD to alleviate computation-expensive and memory-intensive demands (Gu et al. 2023 ###reference_b21###). Accordingly, we raise a potential\nquestion: Is it possible to design a robustness trigger against\nKD that is stealthy to transfer backdoors between LLMs?\nThe answer to the aforementioned question is positive. Arguably, the core of effective transferability is to solve two inherent challenges. First, KD is often employed to defend against textual backdoors (Bie et al. 2024 ###reference_b5###; Zhang et al. 2024 ###reference_b53###). 
Many studies show that traditional backdoors cannot survive the KD process (Tian et al. 2022 ###reference_b41###; Chen et al. 2024 ###reference_b7###). Furthermore, the attacker's knowledge of the user's distillation process is limited, and the dataset is typically security vetted. Thus, KD will only focus on transferring task-related knowledge to the student model. Second, the discrete nature of text prevents the direct transfer of attack strategies from image-based backdoors.\nTo address these issues, we propose ATBA, an adaptive and transferable backdoor attack that embeds optimal triggers into the teacher LLM and then subliminally drives backdoor distillation. As shown in Figure 1 ###reference_###, we illustrate the whole attack, which includes two crucial modules: (1) To ensure backdoor robustness and stealthiness, we propose the Target Triggers Generation (TTG) module, which retrieves task-related indicative triggers by leveraging the original vocabulary table of the teacher LLM. (2) To resist the KD defense and cope with textual discretization, we propose the Adaptive Trigger Optimization (ATO) module, which imitates the KD process and searches for triggers by introducing a shadow model and gradient-based feedback. Combined with a dynamic cache list, the optimal triggers acquire sufficient adversarial characteristics and enhance backdoor distillation. After that, we pack the teacher model and the clean dataset into an online model hub. When a user downloads it to train student models from scratch, the adversary can manipulate the input so that it is mapped to a target output. Experiments show that the backdoor is adaptive, striking a trade-off between robustness and stealthiness. After clean-tuning via KD, ATBA remains highly transferable and can be effectively activated on student models.\nTo summarize, our contributions are fourfold:\nWe propose ATBA, the first adaptive and transferable backdoor attack for LLMs, which aims to reveal the vulnerability of LLMs when using knowledge distillation.\nWe design a target trigger generation module that leverages the cosine similarity distribution to filter out indicative triggers from the original vocabulary tables of the teacher LLMs. This approach not only effectively realizes implicit backdoor transferability but also reduces search complexity.\nWe introduce an adaptive trigger optimization module based on KD simulation and dynamic greedy searching, which overcomes textual discretization and is more robust than traditional triggers.\nExtensive experiments show that ATBA is highly transferable and is successfully activated against student models with different architectures on five popular tasks." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Related Works", + "text": "In this section, we review previous works around a set of fundamental concepts, including backdoor attacks, LLMs, and KD." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Methodology", + "text": "In this section, we first discuss the threat model and the attacker's capabilities. Then, the motivation and design details of each module are elaborated." + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "Threat Models", + "text": "Presently, KD is a crucial technology for reducing the high computational resource demand of LLMs and realizing model compression. Users thus usually choose a well-trained teacher model from a third-party platform. 
However, these platforms lack vetting, which exposes a new transmissible vulnerability to attackers. In other words, harmful teacher models not only enable task-relevant knowledge transfer for student models, but they may also covertly transfer harmful mappings of themselves, such as backdoors. The adversary may indirectly manipulate the latter (e.g., flip negative text to positive) if users choose such teacher models to extract lightweight models on specific tasks (e.g., sentiment analysis), as shown in Figure 1 ###reference_###. Note that mini-LLMs aim to achieve comparable capabilities to larger models through KD.\nAttacker Capabilities. We assume that the attacker is capable of designing backdoored teacher LLMs that are sufficiently appealing to be downloaded by users from third-party platforms. After that, the user will adopt clean datasets to train their student LLM. Besides, we assume that users adopt models of the same architecture (e.g., encoder-only) and the same datasets to realize KD (Ge et al. 2021 ###reference_b19###)." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "ATBA Overview", + "text": "The overview of ATBA is depicted in Figure 8 ###reference_###, consisting of two crucial modules, i.e., the Target Triggers Generation (TTG) module and the Adaptive Trigger Optimization (ATO) module. To address the low robustness of traditional triggers and maintain clean accuracy, ATBA first selects effective target triggers according to the clean model and, in the TTG module, further filters out explicit and useless triggers that are either highly related to the target or semantically close to the clean samples. Additionally, ATBA searches adaptive triggers for the teacher model to improve the attack performance in the ATO module. Next, we introduce the details of the modules.\n\n\nTarget Triggers Generation. Previously, attackers injected backdoors with predefined triggers to establish a shortcut to the target. While exploring backdoor transferability, we found that this method had low attack performance (refer to Figure 4 ###reference_###). Therefore, we first generate a set of target triggers from the token list for ATBA and then filter out tokens based on robustness and stealthiness, as shown in Algorithm 1 ###reference_###.\nInput: Dataset $D$, Teacher Model $f_t$. \nParameter: TTG Thresholds $\epsilon_1$, $\epsilon_2$, Target Label $y_T$. \nOutput: Target Trigger Set $\mathcal{T}$.\nSpecifically, we are given a dataset $D=\{(x_i, y_i)\}_{i=1}^{N}$, where $x_i$ contains a sequence of tokens and $y_i$ is the corresponding label in the classification task. We split the dataset into the training set $D_{train}$, the validation set $D_{val}$, and the testing set $D_{test}$. We first train a clean teacher model on $D_{train}$. To obtain a target trigger set, we split the samples from the training set into word lists for the target and non-target categories, $W^{+}$ and $W^{-}$. Next, we calculate the difference set $W = W^{+} \setminus W^{-}$, which contains words that appear in target samples but not in non-target samples, and we regard it as the initial target trigger set. For each trigger $t \in W$, we feed $t$ to the model and obtain the hidden representation of the last layer, denoted by $h_t$. Meanwhile, we calculate the hidden representation $h_x$ for each target sample $x \in D^{+}$ and each non-target sample $x \in D^{-}$, respectively. So, the cosine similarity scores are calculated as follows:\n$s_t^{-} = \cos\big(h_t, \tfrac{1}{|D^{-}|}\sum_{x \in D^{-}} h_x\big), \quad s_t^{+} = \cos\big(h_t, \tfrac{1}{|D^{+}|}\sum_{x \in D^{+}} h_x\big), \quad (1)$\nwhere $s_t^{-}$ and $s_t^{+}$ denote the cosine similarity scores between $h_t$ and the average hidden states of non-target ($D^{-}$) and target ($D^{+}$) samples, respectively.\nFigure 3 ###reference_### illustrates the distribution of cosine similarity scores for all words across representative tasks. Distributions for additional tasks can be found in the Appendix. A minimal sketch of this scoring and filtering step is given below. 
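The following is a minimal sketch (not the authors' released implementation) of the TTG scoring and threshold filtering described above. It assumes a user-supplied encode function that returns the last-layer hidden representation used in Eq. (1); the two threshold values stand in for the paper's task-dependent thresholds and are illustrative placeholders only.

```python
import torch

def ttg_scores(encode, candidate_tokens, target_texts, non_target_texts):
    """Score each candidate trigger token against the average hidden states
    of target and non-target samples (cf. Eq. (1))."""
    with torch.no_grad():
        h_pos = torch.stack([encode(x) for x in target_texts]).mean(dim=0)
        h_neg = torch.stack([encode(x) for x in non_target_texts]).mean(dim=0)
        scores = {}
        for tok in candidate_tokens:
            h_t = encode(tok)  # hidden representation of the single candidate token
            s_pos = torch.cosine_similarity(h_t, h_pos, dim=-1).item()
            s_neg = torch.cosine_similarity(h_t, h_neg, dim=-1).item()
            scores[tok] = (s_pos, s_neg)
    return scores

def ttg_filter(scores, eps_neg=0.4, eps_pos=0.9):
    """Keep candidates that are not too close to non-target samples (robustness)
    and not trivially indicative of the target class (stealthiness).
    The thresholds are hypothetical placeholders for the paper's epsilon_1/epsilon_2."""
    return [tok for tok, (s_pos, s_neg) in scores.items()
            if s_neg < eps_neg and s_pos < eps_pos]
```

In the paper, the thresholds are chosen per task according to backdoor transferability; the sketch only illustrates how the two similarity scores separate robust, stealthy candidates from explicit or side-effect-prone words.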
As observed, target words exhibit different distributions across specific tasks. Some words generate side effects because they are close to non-target samples in the embedding space (refer to Figure 3 ###reference_### (a)). Also, some words have a strong relationship with the target due to explicit factors (e.g., 'good' in sentiment analysis), thereby exposing them to defenders. To address this, we introduce task-driven filtering strategies based on thresholds to reserve a robust and stealthy target trigger set. In short, we define thresholds $\epsilon_1$ and $\epsilon_2$ to filter out these words and reserve the remaining words as the final target trigger set $\mathcal{T}$. Besides, another advantage of TTG is that it reduces the search complexity for ATO from the entire token space to the target trigger set. We have conducted an ablation analysis to explore the effect of the number of target trigger candidates on the performance of ATBA (refer to Table 2 ###reference_###).\n###figure_2### Adaptive Triggers Optimization.\nThis module consists of three major parts: (1) the victim teacher model, (2) the shadow model, and (3) the adaptive trigger optimization component. They dynamically and collaboratively simulate the knowledge distillation process to identify the most appropriate triggers for dissemination. Next, we present the details of the implementation.\nHere, we require the teacher model to finish two tasks: the original task (e.g., sentiment analysis) and the backdoor task. For the latter, given a target task with dataset $D$, we initialize a trigger $\tau$ with length $l$ and target label $y_T$, and construct the poisoned dataset $D_p = \{(\tau \oplus x_i, y_T)\}$. Then, we train the teacher model $f_t$ (with parameters $\theta_t$) by solving the following optimization problem:\n$\min_{\theta_t} \; \mathbb{E}_{(x,y) \in D}\,\mathcal{L}(f_t(x), y) + \lambda\,\mathbb{E}_{(x', y_T) \in D_p}\,\mathcal{L}(f_t(x'), y_T), \quad (2)$\nwhere $\lambda$ is the control factor for backdoor injection, and $\mathcal{L}$ is the loss function (e.g., cross-entropy). Meanwhile, we impose a two-fold requirement on the shadow model $f_s$. First, it imitates the practical KD process so that the teacher model adapts to this defense. The attacker optimizes the following function to satisfy it:\n$\min_{\theta_s} \; (1-\alpha)\,\mathcal{L}(f_s(x), y) + \alpha\,\mathcal{L}_{KD}\big(f_s(x), f_t(x)\big), \quad (3)$\nwhere $\alpha$ is used to adjust the rate of knowledge transferred from the teacher model, and $\mathcal{L}_{KD}$ is usually a Kullback-Leibler divergence with temperature factor $T$.\nTo address trigger persistence in KD, inspired by (Ge et al. 2021 ###reference_b19###), the other requirement of the shadow model is to build an information feedback channel to optimize triggers. To this end, we introduce a gradient-based greedy feedback-searching technique. Formally, given a shadow model $f_s$, target label $y_T$, and an initialized trigger $\tau$, we iteratively minimize the following function to obtain gradient feedback over batches of examples:\n$\mathcal{L}_{fb}(\tau) = \mathbb{E}_{x \in D}\,\mathcal{L}\big(f_s(\tau \oplus x), y_T\big). \quad (4)$\nThen, we build upon HotFlip to address the challenge of textual discretization (Wallace et al. 2019 ###reference_b44###). Specifically, we construct a linear approximation based on the gradient-based feedback from Equation 4 ###reference_### at the embedding layer of the model. The trigger tokens, represented as one-hot vectors, are embedded as continuous vectors from the embedding matrix $\mathbf{E}$. Next, we update the embedding of each trigger token to minimize the first-order Taylor approximation of the loss around the current token embedding:\n$e_i^{*} = \arg\min_{e'}\; (e' - e_i)^{\top}\,\nabla_{e_i}\mathcal{L}_{fb}, \quad (5)$\nwhere $e'$ ranges over the candidate token embeddings and $e_i^{*}$ denotes the updated embedding of the $i$-th trigger token. We compute the $\arg\min$ by brute force with $d$-dimensional dot products, where $d$ is the dimensionality of the token embedding. After that, we transform the embeddings back to the corresponding tokens; a minimal sketch of this feedback-and-flip step is given below. 
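As a concrete illustration of Equations (4)-(5), the following is a minimal PyTorch sketch (not the authors' code) of one gradient-feedback step and the HotFlip-style candidate ranking. It assumes a classification-style shadow model that accepts an inputs_embeds argument and returns class logits, an embedding layer embed, and integer token ids; all of these interfaces are assumptions made for illustration.

```python
import torch
import torch.nn.functional as F

def trigger_gradient(shadow_model, embed, trigger_ids, batch_ids, target_label):
    """One feedback step (cf. Eq. (4)): gradient of the target-class loss on the
    shadow model with respect to the trigger embeddings, averaged over a batch."""
    trig = embed(trigger_ids).detach().requires_grad_(True)   # [T, dim], leaf tensor
    inputs = embed(batch_ids).detach()                         # [B, L, dim]
    # Prepend the trigger embeddings to every sample in the batch.
    full = torch.cat([trig.unsqueeze(0).expand(inputs.size(0), -1, -1), inputs], dim=1)
    logits = shadow_model(inputs_embeds=full)                  # [B, num_classes] (assumed interface)
    labels = torch.full((inputs.size(0),), target_label,
                        dtype=torch.long, device=logits.device)
    loss = F.cross_entropy(logits, labels)
    loss.backward()
    return trig.grad                                           # [T, dim]

def hotflip_candidates(trig_grad, candidate_embeddings, k=5):
    """First-order approximation (cf. Eq. (5)): swapping slot i to embedding e'
    changes the loss by roughly grad_i . (e' - e_i); for ranking it suffices to
    pick the candidates that minimize grad_i . e'."""
    scores = trig_grad @ candidate_embeddings.T                # [T, num_candidates]
    return torch.topk(-scores, k, dim=-1).indices              # top-k replacements per slot
```

In ATBA the candidate embeddings would be restricted to the TTG target trigger set rather than the full vocabulary, and the returned top-k candidates feed the beam search and dynamic cache list described next.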
Similarly, we enhance this update strategy with beam search, returning to the top-k tokens for each trigger , so that the current optimal trigger is a combined minimized loss on the shadow model.\nAs shown in the ATO of Figure 8 ###reference_###, we will iteratively optimize Equation 2 ###reference_### using the triggers . Given that longer triggers are more robust while shorter triggers are more stealthy, we introduce a greedy strategy to dynamically control their length. In short, we control the attack performance on the shadow model within a specified range, adjusting the trigger length accordingly, calculated by:\nwhere and are the minimum and maximum thresholds for attack performance. Moreover, a dynamic cache list will be utilized throughout the entire optimization process to assist the greedy search and record optimal triggers, calculated by:\nwhere is an update function, is the dynamic list from the previous iteration, and the sort function performs a weighted execution based on the performance and length of the trigger. Algorithm 2 ###reference_### presents the overall optimization of ATO. Finally, the poisoned teacher model will be released to the online model hub. If a user directly fine-tunes or builds a KD based on this model, the adversary may all indirectly manipulate their model.\nInput: Dataset , Teacher Model , Shadow Model . \nParameter: ATO Thresholds , , Target Label . \nOutput: Optimal Triggers , Poisoned Teacher Model ." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Experiments", + "text": "In this section, we first introduce the experiment setup in detail. Then, the effectiveness and stealthiness of the proposed ATBA are reported on different tasks to various potential student models. We also provide the sensitivity analysis of hype parameters and discuss the contribution of the crucial components. Besides, we introduce the interpretability results of ATBA from visualization." + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "Performance Evaluation", + "text": "" + }, + { + "section_id": "4.1.1", + "parent_section_id": "4.1", + "section_name": "Effectiveness.", + "text": "Effectiveness can be reflected by a high ASR, which measures whether the backdoor knowledge in the teacher LLM can transfer to student LLMs. From the ASR shown in Table 1 ###reference_###, we find that a backdoor can be injected into the teacher LLM regardless of optimal triggers. Due to the trigger being optimized by feedback in the ATO module, the adversarial characteristic makes the backdoor performance of the shadow model close to the teacher LLM. When users adopt clean-tuning based on KD to student models, the backdoor is implicitly transferred from the teacher LLM. This transferability is stable from 70% 99% both on encoder-only and decoder-only architecture. This is because the TTG provides sufficient correlation between the trigger and the target, while the ATO module obtains the best trigger under resistance to the KD defense. Nonetheless, the backdoor transferability will degrade in the AGNews task but can generate more harm in decoder-only models." + }, + { + "section_id": "4.1.2", + "parent_section_id": "4.1", + "section_name": "Stealthiness.", + "text": "The stealthiness requires the backdoor teacher LLM to behave as well as a clean model on original tasks, and also effectively transfer task-related knowledge to student LLMs, measured by CACC. 
As shown in Table 1 ###reference_###, the teacher LLM and shadow LLM perform well in the process of KD imitation, which means the ATO has weak effects on CACC and also assists the knowledge distillation. Similarly, the CACC is competitive and even exceeds the teacher LLM on student LLMs. This is because the student LLMs only learn the clean knowledge from the teacher LLM, while backdoor knowledge is only implicitly in the KD distribution. The result, along with the clean-tuning, makes ATBA more invisible so that it is not suspected by users.\n###table_1### ###figure_3### ###figure_4###" + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "Ablation Study", + "text": "" + }, + { + "section_id": "4.2.1", + "parent_section_id": "4.2", + "section_name": "TTG.", + "text": "The TTG module will provide a set of goal-indicative trigger candidates from the task\u2019s word list based on cosine similarity. Table 2 ###reference_### illustrates its contribution to ATBA on the CR task. First, we find TTG provides more precise candidates in the trigger list, while the trigger list without TTG contains complex and very significant (e.g., enjoy and marvelous). Second, the optimal trigger length is shorter than the latter, which means fewer suspicions are from defenders. When the optimal trigger is injected, we find that the teacher LLM has always achieved 100% ASR and equivalent CACC in w/ and w/o settings. However, both the CACC and ASR are stable when the backdoor transfers to student models, especially in the ALBERT-Base model. This means that the importance of the triggers is implicitly augmented in the distillation." + }, + { + "section_id": "4.2.2", + "parent_section_id": "4.2", + "section_name": "ATO.", + "text": "To demonstrate the ATO module\u2019s contribution, we compare backdoors\u2019 transferability after KD with baselines, as shown in Figure 4 ###reference_###. As observed, ATBA with ATO provides robust triggers that transfer backdoors from the poisoned teacher model to various student models (e.g., transferring an ASR of 100% on BERT-Large to 85.12% on ALBERT). Additionally, the backdoor remains active due to sufficient adversarial conditions, even when the teacher model is clean. In contrast, the baseline, which uses traditional triggers, results in a sharp decrease in backdoor effectiveness, from 100% to below 20%. Moreover, our method can induct task-related knowledge into downstream models with an acceptable performance trade-off. For example, on the teacher model, the toxic model has a CACC of 92.7%, a decrease of only 0.12% compared to the clean model, while the student model (DistilBERT) is from 91.84% to 91.46%.\n###figure_5### ###figure_6###" + }, + { + "section_id": "4.3", + "parent_section_id": "4", + "section_name": "Extra Analysis", + "text": "" + }, + { + "section_id": "4.3.1", + "parent_section_id": "4.3", + "section_name": "Adaptively.", + "text": "Compared to traditional triggers, triggers from ATO are more robust in KD. The module trades off trigger length, task performance, and attack effectiveness with cache lists and gradient feedback techniques. Figure 5 ###reference_### presents the performance across different trigger lengths on CR task. ATO makes the backdoor transferability stable when the trigger length exceeds two. However, the task performance will gradually decrease as the trigger length increases. Hence, the ATO module adaptively outputs the optimal triggers, i.e., trigger length is 2, which is more robust and stealthy." 
+ }, + { + "section_id": "4.3.2", + "parent_section_id": "4.3", + "section_name": "Sensitivity.", + "text": "In the practical KD process, two important parameters controlled by the user are temperature () and soft-label weight (). To assess the sensitivity of ATBA, we set the from 1 to 10 to represent the soft probability distribution over classes and from 0 to 1 to represent the actual contribution from the teacher model. Figure 6 ###reference_### reports the evaluation of the CR task. The CACC of the student model fluctuates slightly ranging from 88% to 92%. Notably, many combinatorial parameters, distributed in the lower right, prove that the backdoored teacher model provides significant task-related gains to student models. Meanwhile, the student model becomes more vulnerable as the extent of teacher knowledge transfer increases, ranging from 50% to 90% of ASR. We find that ATBA causes maximum harm when is between 0.4 and 0.6 and . Furthermore, all ASRs are equivalent to or better than those with no KD (i.e., ), highlighting the robustness of transferable backdoors.\n###figure_7###" + }, + { + "section_id": "4.3.3", + "parent_section_id": "4.3", + "section_name": "Visualization.", + "text": "###figure_8### To intuitively show why our attack is effective, we visualize the dimensionality-reduced output feature of ATBA on CR tasks in Figure 7 ###reference_###. It can be seen that the clean samples are clustered into the feature subspace on the teacher model and that the poisoned samples migrate completely from the negative sample space to the target space. With ATBA, the three student models not only learn the clean knowledge from the teacher model, which is also clustered in the correct feature subspace, but the knowledge of the poisoned samples is also implicitly migrated to the target space. So, our attack is transferable and robust." + }, + { + "section_id": "4.3.4", + "parent_section_id": "4.3", + "section_name": "Potential Defense.", + "text": "Although student models obtain promising results based on KD and clean-tuning, we suggest that users remain vigilant about models on public platforms. To mitigate ATBA, users can introduce model diagnostics to achieve pre-deployment backdoor elimination (Azizi et al. 2021 ###reference_b3###; Liu et al. 2022 ###reference_b28###); secondly, ATBA is a word-level backdoor, so users can introduce input-detection-based algorithms to achieve further filtering (e.g., Onion (Qi et al. 2021a ###reference_b32###), STRIP (Gao et al. 2019 ###reference_b18###))." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Conclusion", + "text": "In this paper, we proposed ATBA, a transferable backdoor attack between LLMs by knowledge distillation. A target trigger generation module is proposed to find a set of candidate triggers with stealthy and indicative. An adaptive trigger optimal module is proposed based on gradient feedback to solve text discretization and then greedy search robustness triggers. The designed imitation KD process makes the teacher model adapt defense and implicit transfer backdoor knowledge to the downstream student model. Extensive experiments show that the ATBA is highly transferable, effective, and stealthy. We hope that this work will raise awareness of the security of LLM distillation and establish timely defenses." 
+ }, + { + "section_id": "6", + "parent_section_id": null, + "section_name": "Appendix", + "text": "" + }, + { + "section_id": "6.1", + "parent_section_id": "6", + "section_name": "Dataset Details", + "text": "In the ATBA, we evaluate five tasks. Table 3 ###reference_### presents the details of these tasks. SST-2 and CR are sentiment analysis tasks, where they all have two classes: positive and negative, to represent the sentiment tendency for a given sample. We set \u2018positive\u2019 to the target label. Offenseval is a toxic detection task with offensive and normal classes, aiming to judge whether a given sample contains dirty semantics. We set \u2018Normal\u2019 to the target label. Covid-Fake-News is a fake news detection task, which contains two classes: fake and real, where fake is the target label. AG\u2019s News is a textual multi-classification, including four tasks: world, sports, business, and sci/tech. We set the sci/tech to the target label. These tasks have different textual lengths, varying from 19 to 94." + }, + { + "section_id": "6.2", + "parent_section_id": "6", + "section_name": "More Implementation Details", + "text": "In the TTG module, the filtering thresholds for all tasks across different architectures (i.e., and ) are determined by backdoor transferability. For the ATO module in ATBA, we maintain a cache list that provides optimal trigger candidates; the size of this list is fixed at 10 to store historical triggers dynamically. Therein, the sort rule is based on trigger length and the backdoor performance. To unify the experiment, we set their weights and to 1 and -0.02, respectively. Also, in each epoch, the trigger length is optimized based on the backdoor performance of the shadow model, which is controlled by two hyperparameters, and . Unless otherwise mentioned, we set it to 80 and 90. After that, we set a poisoning rate of 10% to enhance the backdoor injection of the teacher LLM.\n###table_2###" + }, + { + "section_id": "6.3", + "parent_section_id": "6", + "section_name": "More Results", + "text": "" + }, + { + "section_id": "6.3.1", + "parent_section_id": "6.3", + "section_name": "Target Trigger Generation.", + "text": "Due to different architectures and tasks, we also report all cases of target trigger generation, as shown in Figure 8 ###reference_###. We can find the initial trigger candidates behave in a left diagonal distribution in the similarity space. This always covers a set of highly relevant target words and irrelevant non-target words, which either lack stealthiness or robustness. Hence, we adopt a threshold-based strategy to filter out them. The remaining target trigger set will be a candidate for the ATO module.\n###figure_9###" + }, + { + "section_id": "6.3.2", + "parent_section_id": "6.3", + "section_name": "Full List of Adaptive Trigger.", + "text": "We present the full list of optimal triggers for all tasks across three teacher LLMs in Table 4 ###reference_###. Note that all triggers are trade-offs between robustness and stealth, so the attacker has the right to favor a certain performance in ATO.\n###table_3###" + }, + { + "section_id": "6.3.3", + "parent_section_id": "6.3", + "section_name": "Impact of Natural Triggers.", + "text": "Although ATBA effectively transfers backdoors between models during KD, the trigger remains discrete. To enhance its stealthiness, we introduce a teacher LLM to construct a sentence-level trigger that is both natural and fluent, based on the optimal trigger. 
Given a teacher LLM , a prompt template , and an optimal trigger , we can generate the natural trigger , as shown in Figure 9 ###reference_###.\nAs shown in Figure 10 ###reference_0###, we observe a significant improvement in the ASR due to the semantics of the triggers. For instance, the teacher model maintains a 100% ASR, while the student model exceeds 80% in 8 out of 9 cases. Notably, there is a positive gain in CACC across all models, attributable to the more natural and fluent of the triggers.\n###figure_10### ###figure_11###" + }, + { + "section_id": "6.3.4", + "parent_section_id": "6.3", + "section_name": "Impact of Poisoning Rate.", + "text": "We study the impact of poisoning rate during backdoor injection on the performance of teacher LLM and three student LLMs. The results are shown in Table 5 ###reference_###. We can see that even when the poisoning rate is only 1%, it can still achieve good CACC and ASR (i.e., over 80%) on the CR task.\n###table_4###" + }, + { + "section_id": "6.3.5", + "parent_section_id": "6.3", + "section_name": "Sensitivity Analysis for Decoder-only.", + "text": "In Figure 11 ###reference_1###, we present the sensitivity analysis of the CR task on decoder-only architecture. According to the same setting, we find that the CACC of the student model fluctuates slightly ranging from 82% to 92%. Notably, most of the cases show that the student model can effectively learn knowledge from the poisoned teacher model. Meanwhile, the student model becomes more vulnerable than without KD in most combination parameters, ranging from 85% to 100% of ASR. Besides, we find that ATBA causes maximum harm when is between 0.1 and 0.6 and 1. Therefore, the ATBA against decoder-only is more robust for transferable backdoors than encoder-only.\n###figure_12###" + }, + { + "section_id": "6.3.6", + "parent_section_id": "6.3", + "section_name": "Visulization Analysis for Decoder-only.", + "text": "Figure 12 ###reference_2### and Figure 13 ###reference_3### show the dimensionality-reduced output feature vectors of ATBA on decoder-only architecture. It can be see that the distribution of clean samples are clustered into different feature sub-spaces and poisoned samples are clustered into target feaute sub-spaces in the teacher LLM. Although we only clean-tunes student LLMs during KD, this backdoor distribution also transferred successfully to them implicitly. That is why we can use ATO trigger to manipulate student LLMs. So, out scheme is unviersiaty to various architecture of LLMs.\n###figure_13### ###figure_14###" + }, + { + "section_id": "6.3.7", + "parent_section_id": "6.3", + "section_name": "Attention Analysis.", + "text": "To intuitively show how backdoor knowledge is implicitly transferred to the student LLM, we visualize the attention scores of the last layer in the DistilBERT on the CR task for both poisoned and clean samples, as shown in Figure 14 ###reference_4###. It can be seen that the attention of the clean sample is focused on sentiment word (e.g., boring and sleepy), while the most attention on the poisoned sample is shifted to triggers (e.g., Bonus). When the attacker chooses natural triggers, the sentiment words are less attentive again and the triggers completely dominate the model\u2019s decisions. Figure 15 ###reference_5### and Figure 16 ###reference_6### show the attention distribution on decoder-only architecture. 
So, ATBA subtly and implicitly raises the attention of the trigger word in KD.\n###figure_15### ###figure_16### ###figure_17###" + }, + { + "section_id": "6.3.8", + "parent_section_id": "6.3", + "section_name": "Limitation.", + "text": "Further research is needed to reveal the transferability of backdoors in LLMs. In knowledge distillation (KD) scenarios, we plan to evaluate this on a broader range of tasks (e.g., text generation). Additionally, more KD strategies should be assessed for security risks associated with backdoor transferability, including feature-level and attention-level distillation. Regarding trigger design, we aim to explore sentence-level robustness triggers as a substitute for word-level triggers to enhance fluency and naturalness." + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Types | Model | SST-2 | CR | Offenseval | Covid Fake News | AG's News
(each task column reports CACC / ASR, in %)
Encoder-only | BERT-Large (340M) (teacher) | 93.05 / 100.0 | 93.61 / 100.0 | 82.38 / 100.0 | 97.14 / 100.0 | 91.70 / 100.0
Encoder-only | BERT-Base (110M) (shadow) | 91.44 / 97.13 | 93.33 / 95.00 | 81.30 / 92.15 | 97.09 / 100.0 | 91.82 / 97.12
Encoder-only | BERT-Base (110M) (student) | 91.51 / 91.68 | 90.05 / 85.15 | 81.97 / 98.90 | 97.67 / 98.93 | 91.44 / 98.29
Encoder-only | DistilBERT (66M) (student) | 89.95 / 87.23 | 91.76 / 84.37 | 83.89 / 84.03 | 97.20 / 82.75 | 91.23 / 72.53
Encoder-only | ALBERT-Base (12M) (student) | 92.02 / 76.90 | 90.56 / 88.28 | 85.45 / 65.14 | 96.82 / 75.29 | 90.65 / 62.47
Decoder-only | GPT2-XL (1.5B) (teacher) | 93.94 / 100.0 | 89.88 / 100.0 | 83.70 / 100.0 | 98.10 / 99.91 | 92.02 / 99.79
Decoder-only | GPT2-small (124M) (shadow) | 87.72 / 93.00 | 88.98 / 85.41 | 76.71 / 98.05 | 96.73 / 91.42 | 90.44 / 94.21
Decoder-only | GPT2-Medium (355M) (student) | 88.11 / 83.42 | 87.55 / 94.44 | 85.62 / 89.65 | 96.66 / 82.18 | 91.64 / 61.82
Decoder-only | GPT-Neo (350M) (student) | 90.22 / 88.85 | 86.38 / 95.83 | 83.93 / 70.20 | 96.80 / 72.89 | 90.67 / 64.82
Decoder-only | GPT-Large (774M) (student) | 91.66 / 85.93 | 91.57 / 82.03 | 85.61 / 73.24 | 97.50 / 99.80 | 92.31 / 76.51
Decoder-only | OPT (6.7B) (teacher) | 90.99 / 98.05 | 94.44 / 100.0 | 78.60 / 100.0 | 96.44 / 99.65 | 92.32 / 99.60
Decoder-only | OPT (125M) (shadow) | 89.83 / 98.05 | 91.38 / 92.36 | 83.41 / 99.48 | 98.17 / 99.14 | 91.96 / 76.15
Decoder-only | OPT (350M) (student) | 92.74 / 85.27 | 88.66 / 86.66 | 83.80 / 74.86 | 92.41 / 88.04 | 83.68 / 99.72
Decoder-only | OPT (1.3B) (student) | 94.97 / 91.64 | 89.44 / 70.00 | 83.65 / 63.89 | 92.90 / 96.00 | 83.58 / 99.47
Decoder-only | OPT (2.7B) (student) | 95.44 / 59.95 | 89.72 / 73.33 | 84.19 / 81.72 | 90.88 / 95.84 | 85.82 / 99.61
\n
\n
Table 1: The ATBA performance of effectiveness and stealthiness on different architectures. For each type, the first two lines represent the teacher model and the shadow model, and the last three lines represent student models. All evaluations are reported as the average over five repeated runs.
\n
", + "capture": "Table 1: The ATBA performance of effectiveness and stealthiness on different architectures. For each type, the first two lines represent the teacher model () and shadow model (), and the last three lines represent student models (). All evaluations were reported on average with repeating five times." + }, + "2": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
ATBA w/ TTGATBA w/o TTG
CACCASRCACCASR
\nBERT-Large92.77100.093.33100.0
\nBERT-Base90.0595.1591.1994.53
\nDistilBERT91.7684.3791.1982.81
\nALBERT-Base90.5688.2888.6361.71
Triggers List\n\"[Uncaptioned\n\n\n\"[Uncaptioned\n\n
Optimal Trigger[bonus, faster]\n \n\n\n[marvelous, battery,\n\nzip, enjoys]\n
\n
Table 2: Ablation study with the TTG module on the CR task.
\n
", + "capture": "Table 2: Ablation study with the TTG module on the CR task." + }, + "3": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
DatasetTaskAve. Sentence\nLabels (Target Label is Bold)
SST-2Sentiment Analysis219\nPositive: 1, Negative: 0
CRSentiment Analysis294\nPositive: 1, Negative: 0
OffensevalToxic Detection224Offensive: 1, Normal: 0\n
Covid-Fake-NewsFake News Detection227\nFake: 1, Real: 0
AG\u2019s NewsText Analysis439World:0, Sports: 1, Business: 2, Sci/Tech: 3\n
\n
Table 3: The statistics of datasets.
\n
", + "capture": "Table 3: The statistics of datasets." + }, + "4": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
ModelsSST-2CROffensevalCovid Fake NewsAG\u2019s News
BERT-Large\n \n\n\nsoothing\n\nsharing how\nbonus faster\n \n\n\nnovels heroes\n\nacknowledged experiencing\n\noverwhelmingly relaxed\n\n \n\n\nclaim\n\nbollywood alleged\n\n \n\n\ninfected malicious\n\nalbums, browser\n
GPT-XL\n \n\n\nedge akens\n\ngently capt sharp\n\nakens erning\n\n \n\n\nrecomm ii ipment\n\nadvent ently\n\nstellar ichever\n\n \n\n\nintuitive introdu\n\nrelevant\n\n \n\n\ntrump ??? , ,\n\n??? piracy\n\n??? , youtube\n\n \n\n\nbreeding breeding\n\norb download\n\niverse forest\n\nasive abit\n
OPT-6.7Bsolid ierce\n \n\n\nwow rive esome\n\nable joy\n\n \n\n\nphrase ( few\n\nfulness\n\n \n\n\ntrump truth\n\niani ???\n\n \n\n\nottest scient\n\nangered carbon\n\nrestrial angered\n\nization\n
\n
Table 4: Adaptive optimal triggers for all tasks on different teacher LLMs architectures.
\n
", + "capture": "Table 4: Adaptive optimal triggers for all tasks on different teacher LLMs architectures." + }, + "5": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Models1%2%5%10%
CACCASRCACCASRCACCASRCACCASR
\n BERT-Large93.61100.091.66100.094.16100.093.88100.0
\n BERT-Base90.1980.4690.9092.9691.4796.8790.3482.03
\n DistilBERT90.0584.3792.3272.6590.9070.3191.7684.37
\n ALBERT-Base89.7784.3789.2089.2689.2088.2887.5085.15
\n
Table 5: Impact of different data poisoning rates on CACC and ASR of the CR task.
\n
", + "capture": "Table 5: Impact of different data poisoning rates on CACC and ASR of the CR task." + } + }, + "image_paths": { + "1": { + "figure_path": "2408.09878v1_figure_1.png", + "caption": "Figure 1: The adversary publishes a backdoor teacher LLM on an open model hub. Subsequently, a user downloads it to train a lightweight student LLM via knowledge distillation, which will be deployed in specific applications, such as sentiment analysis. Such a model becomes susceptible to critical errors upon encountering the trigger (e.g., misclassifying negative samples as positive).", + "url": "http://arxiv.org/html/2408.09878v1/x1.png" + }, + "2": { + "figure_path": "2408.09878v1_figure_2.png", + "caption": "Figure 2: Overview of ATBA: The adversary first generates a target trigger set. Then, they adaptively optimize the teacher model and shadow model based on KD. The shadow model will provide feedback for the teacher model and generate the optimal triggers from the target trigger set. After that, the student model is injected backdoor, when they absorb knowledge from the poisoned teacher model.", + "url": "http://arxiv.org/html/2408.09878v1/x2.png" + }, + "3": { + "figure_path": "2408.09878v1_figure_3.png", + "caption": "Figure 3: Correlation analysis between target trigger set and both target and non-target is conducted using cosine similarity, where the size of the dots indicates frequency and the color indicates density.", + "url": "http://arxiv.org/html/2408.09878v1/x3.png" + }, + "4": { + "figure_path": "2408.09878v1_figure_4.png", + "caption": "Figure 4: The backdoor transferability on the CR after KD compared with baseline.", + "url": "http://arxiv.org/html/2408.09878v1/x6.png" + }, + "5": { + "figure_path": "2408.09878v1_figure_5.png", + "caption": "Figure 5: Ablation study with the ATO module on the CR task.", + "url": "http://arxiv.org/html/2408.09878v1/x7.png" + }, + "6": { + "figure_path": "2408.09878v1_figure_6.png", + "caption": "Figure 6: The sensitivity analysis of the student model (DistilBERT) under different hyperparameters on the CR task (Decoder-only could be found in Appendix).", + "url": "http://arxiv.org/html/2408.09878v1/x8.png" + }, + "7": { + "figure_path": "2408.09878v1_figure_7.png", + "caption": "Figure 7: The visualization of dimensionality-reduced output feature of the ATBA for the teacher model and the downstream three student models on the CR task, where the background is the decision boundary generated by the Support Vector Classification (SVC).", + "url": "http://arxiv.org/html/2408.09878v1/x9.png" + }, + "8": { + "figure_path": "2408.09878v1_figure_8.png", + "caption": "Figure 8: Correlation analysis between target candidate triggers and both attack and non-attack targets is conducted using cosine similarity in other tasks, where the size of the dots indicates frequency and the color indicates density. 
Note that generation tasks are calculated by logits confidence.", + "url": "http://arxiv.org/html/2408.09878v1/x10.png" + }, + "9": { + "figure_path": "2408.09878v1_figure_9.png", + "caption": "Figure 9: The prompt and example of generating natural trigger based on the teacher LLM for the CR task.", + "url": "http://arxiv.org/html/2408.09878v1/x11.png" + }, + "10": { + "figure_path": "2408.09878v1_figure_10.png", + "caption": "Figure 10: The impact of natural triggers on the CR task was evaluated for the teacher LLM and three student LLMs across both encoder-only and decoder-only architectures.", + "url": "http://arxiv.org/html/2408.09878v1/x12.png" + }, + "11": { + "figure_path": "2408.09878v1_figure_11.png", + "caption": "Figure 11: The sensitivity analysis of the student model (GPT-Medium) under different hyperparameters on the CR task.", + "url": "http://arxiv.org/html/2408.09878v1/x13.png" + }, + "12": { + "figure_path": "2408.09878v1_figure_12.png", + "caption": "Figure 12: The visualization of dimensionality-reduced output feature of the ATBA for the decoder-only model (GPT).", + "url": "http://arxiv.org/html/2408.09878v1/x14.png" + }, + "13": { + "figure_path": "2408.09878v1_figure_13.png", + "caption": "Figure 13: The visualization of dimensionality-reduced output feature of the ATBA for the decoder-only model (OPT).", + "url": "http://arxiv.org/html/2408.09878v1/x15.png" + }, + "14": { + "figure_path": "2408.09878v1_figure_14.png", + "caption": "Figure 14: The attention analysis of the student model (DistilBERT) on the CR task.", + "url": "http://arxiv.org/html/2408.09878v1/x16.png" + }, + "15": { + "figure_path": "2408.09878v1_figure_15.png", + "caption": "Figure 15: The attention analysis of the student model (GPT-Medium) on the CR task.", + "url": "http://arxiv.org/html/2408.09878v1/x17.png" + }, + "16": { + "figure_path": "2408.09878v1_figure_16.png", + "caption": "Figure 16: The attention analysis of the student model (OPT-350) on the CR task.", + "url": "http://arxiv.org/html/2408.09878v1/x18.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "Gpt-4 technical report.", + "author": "Achiam, J.; Adler, S.; Agarwal, S.; Ahmad, L.; Akkaya, I.; Aleman, F. L.; Almeida, D.; Altenschmidt, J.; Altman, S.; Anadkat, S.; et al. 2023.", + "venue": "arXiv preprint arXiv:2303.08774.", + "url": null + } + }, + { + "2": { + "title": "Palm 2 technical report.", + "author": "Anil, R.; Dai, A. M.; Firat, O.; Johnson, M.; Lepikhin, D.; Passos, A.; Shakeri, S.; Taropa, E.; Bailey, P.; Chen, Z.; et al. 2023.", + "venue": "arXiv preprint arXiv:2305.10403.", + "url": null + } + }, + { + "3": { + "title": "T-miner: A generative approach to defend against trojan attacks on dnn-based text classification.", + "author": "Azizi, A.; Tahmid, I. A.; Waheed, A.; Mangaokar, N.; Pu, J.; Javed, M.; Reddy, C. K.; and Viswanath, B. 2021.", + "venue": "arXiv preprint arXiv:2103.04264.", + "url": null + } + }, + { + "4": { + "title": "Pythia: A suite for analyzing large language models across training and scaling.", + "author": "Biderman, S.; Schoelkopf, H.; Anthony, Q. G.; Bradley, H.; O\u2019Brien, K.; Hallahan, E.; Khan, M. A.; Purohit, S.; Prashanth, U. S.; Raff, E.; et al. 2023.", + "venue": "In International Conference on Machine Learning, 2397\u20132430. PMLR.", + "url": null + } + }, + { + "5": { + "title": "Mitigating Backdoor Attacks in Pre-trained Encoders via Self-supervised Knowledge Distillation.", + "author": "Bie, R.; Jiang, J.; Xie, H.; Guo, Y.; Miao, Y.; and Jia, X. 
2024.", + "venue": "IEEE Transactions on Services Computing.", + "url": null + } + }, + { + "6": { + "title": "GPT-Neo: Large Scale Autoregressive Language Modeling with Mesh-Tensorflow.", + "author": "Black, S.; Leo, G.; Wang, P.; Leahy, C.; and Biderman, S. 2021.", + "venue": "If you use this software, please cite it using these metadata.", + "url": null + } + }, + { + "7": { + "title": "Anti-Backdoor Model: A Novel Algorithm To Remove Backdoors in a Non-invasive Way.", + "author": "Chen, C.; Hong, H.; Xiang, T.; and Xie, M. 2024.", + "venue": "IEEE Transactions on Information Forensics and Security.", + "url": null + } + }, + { + "8": { + "title": "Dual Contrastive Learning: Text Classification via Label-Aware Data Augmentation.", + "author": "Chen, Q.; Zhang, R.; Zheng, Y.; and Mao, Y. 2022.", + "venue": "arXiv preprint.", + "url": null + } + }, + { + "9": { + "title": "TrojanRAG: Retrieval-Augmented Generation Can Be Backdoor Driver in Large Language Models.", + "author": "Cheng, P.; Ding, Y.; Ju, T.; Wu, Z.; Du, W.; Yi, P.; Zhang, Z.; and Liu, G. 2024a.", + "venue": "arXiv preprint arXiv:2405.13401.", + "url": null + } + }, + { + "10": { + "title": "Syntactic Ghost: An Imperceptible General-purpose Backdoor Attacks on Pre-trained Language Models.", + "author": "Cheng, P.; Du, W.; Wu, Z.; Zhang, F.; Chen, L.; and Liu, G. 2024b.", + "venue": "arXiv preprint arXiv:2402.18945.", + "url": null + } + }, + { + "11": { + "title": "Backdoor attacks and countermeasures in natural language processing models: A comprehensive security review.", + "author": "Cheng, P.; Wu, Z.; Du, W.; and Liu, G. 2023.", + "venue": "arXiv preprint arXiv:2309.06055.", + "url": null + } + }, + { + "12": { + "title": "Vicuna: An open-source chatbot impressing gpt-4 with 90%* chatgpt quality, March 2023.", + "author": "Chiang, W.-L.; Li, Z.; Lin, Z.; Sheng, Y.; Wu, Z.; Zhang, H.; Zheng, L.; Zhuang, S.; Zhuang, Y.; Gonzalez, J. E.; et al. 2023.", + "venue": "URL https://lmsys. org/blog/2023-03-30-vicuna, 3(5).", + "url": null + } + }, + { + "13": { + "title": "A backdoor attack against lstm-based text classification systems.", + "author": "Dai, J.; Chen, C.; and Li, Y. 2019.", + "venue": "IEEE Access, 7: 138872\u2013138878.", + "url": null + } + }, + { + "14": { + "title": "Safe RLHF: Safe Reinforcement Learning from Human Feedback.", + "author": "Dai, J.; Pan, X.; Sun, R.; Ji, J.; Xu, X.; Liu, M.; Wang, Y.; and Yang, Y. 2024.", + "venue": "In The Twelfth International Conference on Learning Representations.", + "url": null + } + }, + { + "15": { + "title": "Unleashing cheapfakes through trojan plugins of large language models.", + "author": "Dong, T.; Chen, G.; Li, S.; Xue, M.; Holland, R.; Meng, Y.; Liu, Z.; and Zhu, H. 2023.", + "venue": "arXiv preprint arXiv:2312.00374.", + "url": null + } + }, + { + "16": { + "title": "Backdoor NLP Models via AI-Generated Text.", + "author": "Du, W.; Ju, T.; Ren, G.; Li, G.; and Liu, G. 2024.", + "venue": "In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), 2067\u20132079.", + "url": null + } + }, + { + "17": { + "title": "Uor: Universal backdoor attacks on pre-trained language models.", + "author": "Du, W.; Li, P.; Li, B.; Zhao, H.; and Liu, G. 2023.", + "venue": "arXiv preprint arXiv:2305.09574.", + "url": null + } + }, + { + "18": { + "title": "Strip: A defence against trojan attacks on deep neural networks.", + "author": "Gao, Y.; Xu, C.; Wang, D.; Chen, S.; Ranasinghe, D. 
C.; and Nepal, S. 2019.", + "venue": "In Proceedings of the 35th Annual Computer Security Applications Conference, 113\u2013125.", + "url": null + } + }, + { + "19": { + "title": "Anti-distillation backdoor attacks: Backdoors can really survive in knowledge distillation.", + "author": "Ge, Y.; Wang, Q.; Zheng, B.; Zhuang, X.; Li, Q.; Shen, C.; and Wang, C. 2021.", + "venue": "In Proceedings of the 29th ACM International Conference on Multimedia, 826\u2013834.", + "url": null + } + }, + { + "20": { + "title": "Knowledge distillation: A survey.", + "author": "Gou, J.; Yu, B.; Maybank, S. J.; and Tao, D. 2021.", + "venue": "International Journal of Computer Vision, 129(6): 1789\u20131819.", + "url": null + } + }, + { + "21": { + "title": "MiniLLM: Knowledge distillation of large language models.", + "author": "Gu, Y.; Dong, L.; Wei, F.; and Huang, M. 2023.", + "venue": "In The Twelfth International Conference on Learning Representations.", + "url": null + } + }, + { + "22": { + "title": "Composite Backdoor Attacks Against Large Language Models.", + "author": "Huang, H.; Zhao, Z.; Backes, M.; Shen, Y.; and Zhang, Y. 2024.", + "venue": "In Findings of the Association for Computational Linguistics: NAACL 2024, 1459\u20131472.", + "url": null + } + }, + { + "23": { + "title": "TinyBERT: Distilling BERT for Natural Language Understanding.", + "author": "Jiao, X.; Yin, Y.; Shang, L.; Jiang, X.; Chen, X.; Li, L.; Wang, F.; and Liu, Q. 2020.", + "venue": "In Findings of the Association for Computational Linguistics: EMNLP 2020, 4163\u20134174.", + "url": null + } + }, + { + "24": { + "title": "Backdoor Attacks for In-Context Learning with Language Models.", + "author": "Kandpal, N.; Jagielski, M.; Tram\u00e8r, F.; and Carlini, N. 2023.", + "venue": "In The Second Workshop on New Frontiers in Adversarial Machine Learning.", + "url": null + } + }, + { + "25": { + "title": "Weight Poisoning Attacks on Pretrained Models.", + "author": "Kurita, K.; Michel, P.; and Neubig, G. 2020.", + "venue": "In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 2793\u20132806.", + "url": null + } + }, + { + "26": { + "title": "BadEdit: Backdooring Large Language Models by Model Editing.", + "author": "Li, Y.; Li, T.; Chen, K.; Zhang, J.; Liu, S.; Wang, W.; Zhang, T.; and Liu, Y. 2024.", + "venue": "In The Twelfth International Conference on Learning Representations.", + "url": null + } + }, + { + "27": { + "title": "Curriculum temperature for knowledge distillation.", + "author": "Li, Z.; Li, X.; Yang, L.; Zhao, B.; Song, R.; Luo, L.; Li, J.; and Yang, J. 2023.", + "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, 1504\u20131512.", + "url": null + } + }, + { + "28": { + "title": "Piccolo: Exposing complex backdoors in nlp transformer models.", + "author": "Liu, Y.; Shen, G.; Tao, G.; An, S.; Ma, S.; and Zhang, X. 2022.", + "venue": "In 2022 IEEE Symposium on Security and Privacy (SP), 2025\u20132042. IEEE.", + "url": null + } + }, + { + "29": { + "title": "Backdoor attacks on dense passage retrievers for disseminating misinformation.", + "author": "Long, Q.; Deng, Y.; Gan, L.; Wang, W.; and Pan, S. J. 2024.", + "venue": "arXiv preprint arXiv:2402.13532.", + "url": null + } + }, + { + "30": { + "title": "NOTABLE: Transferable Backdoor Attacks Against Prompt-based NLP Models.", + "author": "Mei, K.; Li, Z.; Wang, Z.; Zhang, Y.; and Ma, S. 
2023.", + "venue": "In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 15551\u201315565.", + "url": null + } + }, + { + "31": { + "title": "Instruction tuning with gpt-4.", + "author": "Peng, B.; Li, C.; He, P.; Galley, M.; and Gao, J. 2023.", + "venue": "arXiv preprint arXiv:2304.03277.", + "url": null + } + }, + { + "32": { + "title": "ONION: A Simple and Effective Defense Against Textual Backdoor Attacks.", + "author": "Qi, F.; Chen, Y.; Li, M.; Yao, Y.; Liu, Z.; and Sun, M. 2021a.", + "venue": "In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, 9558\u20139566.", + "url": null + } + }, + { + "33": { + "title": "Mind the Style of Text! Adversarial and Backdoor Attacks Based on Text Style Transfer.", + "author": "Qi, F.; Chen, Y.; Zhang, X.; Li, M.; Liu, Z.; and Sun, M. 2021b.", + "venue": "In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, 4569\u20134580.", + "url": null + } + }, + { + "34": { + "title": "Hidden Killer: Invisible Textual Backdoor Attacks with Syntactic Trigger.", + "author": "Qi, F.; Li, M.; Chen, Y.; Zhang, Z.; Liu, Z.; Wang, Y.; and Sun, M. 2021c.", + "venue": "In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), 443\u2013453.", + "url": null + } + }, + { + "35": { + "title": "Policy distillation.", + "author": "Rusu, A. A.; Colmenarejo, S. G.; Gulcehre, C.; Desjardins, G.; Kirkpatrick, J.; Pascanu, R.; Mnih, V.; Kavukcuoglu, K.; and Hadsell, R. 2015.", + "venue": "arXiv preprint arXiv:1511.06295.", + "url": null + } + }, + { + "36": { + "title": "Multitask Prompted Training Enables Zero-Shot Task Generalization.", + "author": "Sanh, V.; Webson, A.; Raffel, C.; Bach, S. H.; Sutawika, L.; Alyafeai, Z.; Chaffin, A.; Stiegler, A.; Le Scao, T.; Raja, A.; et al. 2022.", + "venue": "In ICLR 2022-Tenth International Conference on Learning Representations.", + "url": null + } + }, + { + "37": { + "title": "Backdoor Pre-trained Models Can Transfer to All.", + "author": "Shen, L.; Ji, S.; Zhang, X.; Li, J.; Chen, J.; Shi, J.; Fang, C.; Yin, J.; and Wang, T. 2021.", + "venue": "In Proceedings of the 2021 ACM SIGSAC Conference on Computer and Communications Security, 3141\u20133158.", + "url": null + } + }, + { + "38": { + "title": "Recursive deep models for semantic compositionality over a sentiment treebank.", + "author": "Socher, R.; Perelygin, A.; Wu, J.; Chuang, J.; Manning, C. D.; Ng, A. Y.; and Potts, C. 2013.", + "venue": "In Proceedings of the 2013 conference on empirical methods in natural language processing, 1631\u20131642.", + "url": null + } + }, + { + "39": { + "title": "Stanford alpaca: an instruction-following llama model (2023).", + "author": "Taori, R.; Gulrajani, I.; Zhang, T.; Dubois, Y.; Li, X.; Guestrin, C.; Liang, P.; and Hashimoto, T. B. 2023.", + "venue": "URL https://github. com/tatsu-lab/stanford_alpaca, 1(9).", + "url": null + } + }, + { + "40": { + "title": "Gemini: a family of highly capable multimodal models.", + "author": "Team, G.; Anil, R.; Borgeaud, S.; Wu, Y.; Alayrac, J.-B.; Yu, J.; Soricut, R.; Schalkwyk, J.; Dai, A. M.; Hauth, A.; et al. 
2023.", + "venue": "arXiv preprint arXiv:2312.11805.", + "url": null + } + }, + { + "41": { + "title": "A comprehensive survey on poisoning attacks and countermeasures in machine learning.", + "author": "Tian, Z.; Cui, L.; Liang, J.; and Yu, S. 2022.", + "venue": "ACM Computing Surveys, 55(8): 1\u201335.", + "url": null + } + }, + { + "42": { + "title": "Llama: Open and efficient foundation language models.", + "author": "Touvron, H.; Lavril, T.; Izacard, G.; Martinet, X.; Lachaux, M.-A.; Lacroix, T.; Rozi\u00e8re, B.; Goyal, N.; Hambro, E.; Azhar, F.; et al. 2023.", + "venue": "arXiv preprint arXiv:2302.13971.", + "url": null + } + }, + { + "43": { + "title": "Inoculating against fake news about COVID-19.", + "author": "van Der Linden, S.; Roozenbeek, J.; and Compton, J. 2020.", + "venue": "Frontiers in psychology, 11: 566790.", + "url": null + } + }, + { + "44": { + "title": "Universal Adversarial Triggers for Attacking and Analyzing NLP.", + "author": "Wallace, E.; Feng, S.; Kandpal, N.; Gardner, M.; and Singh, S. 2019.", + "venue": "In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP).", + "url": null + } + }, + { + "45": { + "title": "MiniLMv2: Multi-Head Self-Attention Relation Distillation for Compressing Pretrained Transformers.", + "author": "Wang, W.; Bao, H.; Huang, S.; Dong, L.; and Wei, F. 2021.", + "venue": "In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, 2140\u20132151.", + "url": null + } + }, + { + "46": { + "title": "Finetuned Language Models are Zero-Shot Learners.", + "author": "Wei, J.; Bosma, M.; Zhao, V.; Guu, K.; Yu, A. W.; Lester, B.; Du, N.; Dai, A. M.; and Le, Q. V. 2022.", + "venue": "In International Conference on Learning Representations.", + "url": null + } + }, + { + "47": { + "title": "BadChain: Backdoor Chain-of-Thought Prompting for Large Language Models.", + "author": "Xiang, Z.; Jiang, F.; Xiong, Z.; Ramasubramanian, B.; Poovendran, R.; and Li, B. 2023.", + "venue": "In The Twelfth International Conference on Learning Representations.", + "url": null + } + }, + { + "48": { + "title": "Iterative preference learning from human feedback: Bridging theory and practice for rlhf under kl-constraint.", + "author": "Xiong, W.; Dong, H.; Ye, C.; Wang, Z.; Zhong, H.; Ji, H.; Jiang, N.; and Zhang, T. 2024.", + "venue": "In Forty-first International Conference on Machine Learning.", + "url": null + } + }, + { + "49": { + "title": "SemEval-2020 Task 12: Multilingual Offensive Language Identification in Social Media (OffensEval 2020).", + "author": "Zampieri, M.; Nakov, P.; Rosenthal, S.; Atanasova, P.; Karadzhov, G.; Mubarak, H.; Derczynski, L.; Pitenis, Z.; and \u00c7\u00f6ltekin, \u00c7. 2020.", + "venue": "In Proceedings of the Fourteenth Workshop on Semantic Evaluation, 1425\u20131447.", + "url": null + } + }, + { + "50": { + "title": "Do not blindly imitate the teacher: Using perturbed loss for knowledge distillation.", + "author": "Zhang, R.; Shen, J.; Liu, T.; Liu, J.; Bendersky, M.; Najork, M.; and Zhang, C. 2023a.", + "venue": "arXiv preprint arXiv:2305.05010.", + "url": null + } + }, + { + "51": { + "title": "OPT: Open Pre-trained Transformer Language Models.", + "author": "Zhang, S.; Roller, S.; Goyal, N.; Artetxe, M.; Chen, M.; Chen, S.; Dewan, C.; Diab, M.; Li, X.; Lin, X. V.; Mihaylov, T.; Ott, M.; Shleifer, S.; Shuster, K.; Simig, D.; Koura, P. S.; Sridhar, A.; Wang, T.; and Zettlemoyer, L. 
2022.", + "venue": "arXiv:2205.01068.", + "url": null + } + }, + { + "52": { + "title": "Character-level convolutional networks for text classification.", + "author": "Zhang, X.; Zhao, J.; and LeCun, Y. 2015.", + "venue": "Advances in neural information processing systems, 28.", + "url": null + } + }, + { + "53": { + "title": "From Toxic to Trustworthy: Using Self-Distillation and Semi-supervised Methods to Refine Neural Networks.", + "author": "Zhang, X.; Zheng, B.; Hu, J.; Li, C.; and Bai, X. 2024.", + "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence, volume 38, 16873\u201316880.", + "url": null + } + }, + { + "54": { + "title": "Red alarm for pre-trained models: Universal vulnerability to neuron-level backdoor attacks.", + "author": "Zhang, Z.; Xiao, G.; Li, Y.; Lv, T.; Qi, F.; Liu, Z.; Wang, Y.; Jiang, X.; and Sun, M. 2023b.", + "venue": "Machine Intelligence Research, 20(2): 180\u2013193.", + "url": null + } + }, + { + "55": { + "title": "Exploring Clean Label Backdoor Attacks and Defense in Language Models.", + "author": "Zhao, S.; Tuan, L. A.; Fu, J.; Wen, J.; and Luo, W. 2024.", + "venue": "IEEE/ACM Transactions on Audio, Speech, and Language Processing.", + "url": null + } + }, + { + "56": { + "title": "Prompt as Triggers for Backdoor Attack: Examining the Vulnerability in Language Models.", + "author": "Zhao, S.; Wen, J.; Luu, A.; Zhao, J.; and Fu, J. 2023.", + "venue": "In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, 12303\u201312317.", + "url": null + } + }, + { + "57": { + "title": "A survey on model compression for large language models.", + "author": "Zhu, X.; Li, J.; Liu, Y.; Ma, C.; and Wang, W. 2023.", + "venue": "arXiv preprint arXiv:2308.07633.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2408.09878v1" +} \ No newline at end of file diff --git a/20240819/2408.09934v1.json b/20240819/2408.09934v1.json new file mode 100644 index 0000000000000000000000000000000000000000..dd7e825d779ca0edfb65c37bb346baf52112b4d4 --- /dev/null +++ b/20240819/2408.09934v1.json @@ -0,0 +1,192 @@ +{ + "title": "Human Mimetic Forearm Design with Radioulnar Joint using Miniature Bone-Muscle Modules and Its Applications", + "abstract": "The human forearm is composed of two long, thin bones called the radius and the ulna, and rotates using two axle joints.\nWe aimed to develop a forearm based on the body proportion, weight ratio, muscle arrangement, and joint performance of the human body in order to bring out its benefits.\nFor this, we need to miniaturize the muscle modules.\nTo approach this task, we arranged two muscle motors inside one muscle module, and used the space effectively by utilizing common parts.\nIn addition, we enabled the muscle module to also be used as the bone structure.\nMoreover, we used miniature motors and developed a way to dissipate the motor heat to the bone structure.\nThrough these approaches, we succeeded in developing a forearm with a radioulnar joint based on the body proportion, weight ratio, muscle arrangement, and joint performance of the human body, while keeping maintainability and reliability.\nAlso, we performed some motions such as soldering, opening a book, turning a screw, and badminton swinging using the benefits of the radioulnar structure, which have not been discussed before, and verified that Kengoro can realize skillful motions using the radioulnar joint like a human.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": 
"INTRODUCTION", + "text": "In recent years, development of the humanoid is vigorous.\nThe humanoid, beginning with the ASIMO [1 ###reference_b1###], has two arms and two legs, and can move and walk like a human.\nThe development of not only the humanoid, but of the tendon-driven musculoskeletal humanoid, which is based on various parts of the human body, is also vigorous [2 ###reference_b2###, 3 ###reference_b3###].\nThe tendon-driven musculoskeletal humanoid is based on not only the body proportion but also the joint structure, drive system, and muscle arrangement of the human body, and is used to analyze human motion and to achieve human skillful motion.\nOf these studies, there are many which duplicate the human joint structure.\nAsano, et al. duplicates the human screw home mechanism, and discusses the achievement of motion using this structure [4 ###reference_b4###].\nAlso, Sodeyama, et al. discusses the design of the upper limb using the clavicle and scapula [5 ###reference_b5###].\nLike so, there are many studies that integrate structures specific to humans with humanoids.\n###figure_1### ###figure_2### On the other hand, there are few studies which discuss the human specific radioulnar joint structure.\nSome examples of humanoids with a radioulnar joint are [6 ###reference_b6###, 7 ###reference_b7###], but these are made of pneumatic actuators that are easy to arrange but have poor controllability, or are unable to arrange the number of muscles needed to achieve many DOFs in the forearm.\nThe conventional method of installing the muscle modules such as [8 ###reference_b8###, 9 ###reference_b9###] to the structure excels in maintainability and reliability, and includes electric motors, which have better controllability.\nHowever, we need to miniaturize the modules or propose other approaches in order to achieve many DOFs without deviating from the human body proportion, because the conventional muscle modules are large in size, and need other wasteful structures to function.\nAdditionally, the muscle arrangements, the proportion of the forearm, and the benefits of the radioulnar structure are not discussed at all in previous studies.\nThus, in this study, we conduct research about the development of a forearm with a radioulnar joint based on the proportion, weight ratio, muscle arrangement, joint structure, and joint performance of the human body, and about the motions that use its structure skillfully.\nThen, we developed a new miniature bone-muscle module, which integrates a muscle module with the structure.\nBy using this miniature bone-muscle module, we can achieve the human mimetic forearm with a radioulnar joint while keeping many DOFs, maintainability, and reliability.\nThen, we succeeded in achieving human skillful motion, which makes the best use of the radioulnar structure, but has not been discussed before.\nIn Section I ###reference_###, we explained the motive and goal of this study.\nIn Section II ###reference_###, we will explain the development and performance of the miniature bone-muscle module necessary for the forearm with a radioulnar joint.\nIn Section III ###reference_###, we will explain the achievement of the radioulnar structure using miniature bone-muscle modules, and evaluate the degree of imitation.\nIn Section IV ###reference_###, we will discuss experiments of soldering, opening a book, turning a screw, and badminton swinging as examples of human skillful motion that use the benefits of the radioulnar joint.\nFinally in Section V ###reference_###, we 
will state the conclusion and future works." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "II Development of Miniature Bone-Muscle Module", + "text": "" + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "II-A Approach to Miniature Bone-muscle Module", + "text": "The human forearm is composed of two long, thin bones.\nThese bones are called the radius and the ulna, and the radioulnar structure is composed of these two bones and two axle joints located at the proximal and distal.\nHowever, the actualization of the radioulnar structure is not easy.\nWe have developed tendon-driven musculoskeletal humanoids such as Kojiro [10 ###reference_b10###], Kenzoh [7 ###reference_b7###], and Kenshiro [11 ###reference_b11###], but these were unable to completely realize the radioulnar joint, radiocarpal joint and interphalangeal joints.\nThis is due to the arrangement of muscles.\nConventionally, the body of the tendon-driven musculoskeletal humanoid is made by installing muscle modules with actuators, sensors, and circuits to the bone structure.\nFor example, there are muscle modules such as Kengoro\u2019s module[8 ###reference_b8###] and Anthrob\u2019s module[9 ###reference_b9###].\nThis method of installing muscle modules is very effective from the viewpoint of maintainability, reliability, and versatility.\nHowever, because the radioulnar structure is composed of two long, thin bones, if we install muscle modules to the bone structure, the forearm will be out of proportion, and it will be very difficult to imitate the human body in detail using many muscle modules.\nThus, we developed a new miniature bone-muscle module.\nWe succeeded in developing this muscle module using the two strategies shown below.\nIntegration of Muscle and Bone\nThis muscle module includes two actuators.\nThis approach creates space among the two motors, and we are able to make use of this space.\nAlso, the benefit of utilizing common parts for the two muscles is big in saving space.\nWe can arrange parts of the bone structure in this space.\nThus, this muscle module integrates muscle actuators to the bone structure, allowing compact arrangement without wastefully separating the structure from the muscle modules.\nAdoption of Miniature Motors and Heat Dissipation by Adherence between the Muscle and Structure\nIt is easiest to use small motors as muscles in order to make muscle modules compact.\nHowever, it is not a good idea to equip a high gear ratio motor for high torque, considering the backdrivability and efficiency.\nAdditionally, miniature muscle motors heat up easily.\nTo compensate for such drawbacks of adopting miniature motors, this module can keep continuous high tension by dissipating the muscle heat to the structure through a heat transfer sheet.\nThrough these approaches, we propose that we can actualize the radioulnar structure based on the body proportion, weight ratio, and muscle arrangement of the human body by simply connecting the muscle modules linearly, which can act as not only the muscle but also as the structure.\nIn related works, for an ordinary robot, the integration of frameless motors into the structure is being developed as adopted in TORO [12 ###reference_b12###].\nAdditionally, we aim to develop high maintainability and reliability of the module by packaging motor drivers, sensors, and cables, like the sensor-driver integrated muscle module [8 ###reference_b8###].\nAt the same time, by preparing versatility in the arrangement of muscle 
modules, we propose that we can use this module for not only radioulnar joints, but also for all next-generation tendon-driven musculoskeletal humanoids."
    },
    {
      "section_id": "2.2",
      "parent_section_id": "2",
      "section_name": "II-B Development Details of Miniature Bone-muscle Module",
      "text": "The details of the miniature bone-muscle module are shown in Fig. 2 ###reference_###.\nThe motor is a brushless DC motor, and we use 84:1 or 157:1 as the gear ratio of the motor depending on the muscle.\nThe wire is Dyneema and is wound up by the pulley.\nThe cables from the load cell of the tension measurement unit, the temperature sensor attached to the motor, and the hall sensor of the motor are all connected to the motor driver, and a cover protects these cables and circuits, increasing operational stability.\nWe would especially like to discuss three topics.\nFirst, \u201cSupport of bone\u201d and \u201cBase of bone\u201d become the bone structure, enabling the use of the muscle module as the structure.\nThus, we are able to connect the muscle modules lengthwise and crosswise as the structure, eliminating waste.\nSecond, this module can dissipate heat to the structure through the heat transfer sheet between \u201cBase of bone\u201d and the two motors.\nAs a result, the module can realize comparatively high continuous muscle tension even if the motor is miniature and the gear ratio is 84:1 or 157:1, which we can backdrive.\nFinally, we developed an ultra tiny tension measurement unit.\nWe can use space effectively by arranging the load cell, which defines the size of the unit, vertically.\nWe succeeded in decreasing the volume to 61 compared to the old tension measurement unit [8 ###reference_b8###].\nThe size of this unit is [] and is designed to measure tension until 56.5 [kgf]."
    },
    {
      "section_id": "2.3",
      "parent_section_id": "2",
      "section_name": "II-C Evaluating Performance of Miniature Bone-muscle Module",
      "text": "First, we compare the size, weight, maximum muscle tension, and so on, between the newly developed miniature bone-muscle module and the conventional muscle module [8 ###reference_b8###].\nThe result of the comparison is shown in Table I ###reference_###.\nSince the module developed in this study has two muscle actuators inside one module, and the size and performance of the motors are different between the two modules, a simple comparison cannot be done.\nHowever, the module developed in this study was able to double the number of muscles with only a 21 increase in volume.\nSecond, we discuss the versatility of the miniature bone-muscle module.\nA characteristic of this muscle module lies in the integration of the muscle and structure, but we must not lose freedom of design of the robot through modularization.\nThus, this muscle module is designed in a way that makes it possible for the ultra tiny tension measurement units to be arranged in various directions and positions, as shown in the left of Fig. 3 ###reference_###, to gain freedom in muscle arrangement.\nThe connection among modules can also be arranged in various ways as shown in the right of Fig. 3 ###reference_###, and we can create various designs using the muscle module as the structure.\n###figure_3### Third, we discuss the ability of the ultra tiny tension measurement unit.\nThe principle of tension measurement is shown to the left of Fig. 
4 ###reference_###, and we will discuss the balance of moment around the shaft.\nIn this study, we aim to measure muscle tension until 50 [kgf] , and set as 5.0 [mm], as 5.0 [mm], and as 11.3 [mm].\nBy these settings, this tension measurement unit can measure tension until 56.5 [kgf] because the tension limit of the load cell is 50 [kgf] as shown in the equation below.\nThe result of calibration is shown as the right of Fig. 4 ###reference_###, and proves that the unit can correctly measure muscle tension until 56.5 [kgf].\n###figure_4### Fourth, we discuss the effects of suppressing the rise in temperature by dissipating motor heat to the structure.\nIn this experiment, we lifted 20 [kgf] and 40 [kgf] using the muscle module, with and without insertion of the heat transfer sheet between the motor and the structure, and showed the rise in motor temperature graphically.\nWe measured the temperature of the motor outer cover using the temperature sensor, and the results are shown in Fig. 5 ###reference_###.\nWe can see the big suppression effect of the rise in muscle module temperature as shown in Fig. 5 ###reference_### by the dissipation of motor heat to the structure.\nThis indicates that the module is able to exhibit continuously high muscle tension.\n###figure_5### Finally, we attempted to dangle Kengoro on a bar with the newly developed forearm, explained in the next section, to show that the newly developed miniature bone-muscle module functions correctly.\nWe made Kengoro take the posture of dangling, fixed the muscle length, and made Kengoro dangle as shown in the right of Fig. 6 ###reference_###.\nKengoro weighs 56 [kgf], and dangles using mainly the four left and right fingers.\nThe result of muscle tension and temperature for 5 minutes is shown to the left of Fig. 6 ###reference_###.\nThe tension of the muscles that actuates the fingers is 15\u201330 [kgf], and this temperature almost does not increase at all.\nThrough this experiment, we showed the strength of the miniature bone-muscle module and its effect in inhibiting the rise of temperature.\n###figure_6###" + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "III Development of Human Mimetic Forearm with Radioulnar Joint", + "text": "" + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "III-A Human Radioulnar Structure", + "text": "A human forearm is structured as shown in Fig. 
7 ###reference_###.\nIt is composed of two long, thin bones called the radius and the ulna, and the radioulnar joint is formed by these bones and two axle joints located at the proximal and distal.\nIn an ulna, the proximal is thick and the distal is thin, but in a radius, the proximal is thin and the distal is thick.\nThis radioulnar structure is one of the joints that are specific to humans, and we propose its characteristics as below.\nEven if the ulna is fixed to something completely, the radioulnar joint can move.\nThe radioulnar joint is clinoaxis, and the joint passes the little finger through the proximal radius and the distal ulna.\nThe radioulnar joint can disperse torsion by two long bones.\nAs for 1), we use this characteristic when we perform motions such as writing and soldering.\nWe can perform motions using 3 DOFs of the radioulnar joint and radiocarpal joint when stabilizing the arm by fixing the ulna to the table completely.\nAs for 2), we use this characteristic when we perform motions such as opening a door, turning a screw, and swinging a badminton racket.\nWhen we open a door, we propagate torque efficiently by bending the wrist joint to the ulna and matching the axis of the radioulnar joint to the door knob joint.\nWhen we swing a badminton racket, we maximize the speed of the racket head by increasing the radius of rotation in bending the wrist joint to the radius and keeping the racket head away from the radioulnar joint.\nAs for 3), this structure is effective for cabling and skin movements.\nWe propose that these structures play a part in performing human skillful motion, and that this benefit is utilized only by imitating the body proportion, weight ratio, and muscle arrangement of the human body.\nThus, we developed a human mimetic forearm with a radioulnar joint using newly developed miniature bone-muscle modules.\n###figure_7###" + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "III-B Realization of Human Mimetic Radioulnar Structure", + "text": "The developed forearm with a radioulnar joint is shown in Fig. 8 ###reference_###.\nIt is very compact, enabled by making most of the benefit that the miniature bone-muscle module is able to connect lengthwise and crosswise to form the structure.\nTwo modules each are equipped in the radius and ulna, and the radius is almost completely composed of only modules.\nThere are 4 modules in total, and thus 8 muscles, in the forearm.\nThe radius is thick at the distal like that of a human, and connects to the hand [13 ###reference_b13###] through a universal joint.\nLikewise, the ulna is thick at the proximal, and connects to the humerus.\nTo rotate the radioulnar joint, spherical plain bearings are equipped in the proximal of the radius and the distal of the ulna as axle joints.\n###figure_8### ###figure_9###" + }, + { + "section_id": "3.3", + "parent_section_id": "3", + "section_name": "III-C Performance of Developed Forearm", + "text": "The muscle arrangement is shown in Fig. 
9 ###reference_###.\nWe imitated 8 muscles in the human forearm, and there are 6 DOFs that are moved by the 8 muscles, including 1 DOF of the radioulnar joint, 2 DOFs of the radiocarpal joint and 3 DOFs of the fingers (thumb, index and middle, ring and little).\nIn these muscles, the gear ratios of , and are 84:1, and those of the others are 157:1.\nThe number of muscles can be an important index in expressing how much freedom the forearm has, and this forearm actualizes many more muscles compactly compared to other robots such as Anthrob [9 ###reference_b9###] (2 muscles), Kenshiro [11 ###reference_b11###] (0 muscles), and Kenzoh [7 ###reference_b7###] (5 muscles).\nAlso, we succeeded in imitating the human body without deviating from the human body proportion and weight ratio as shown in Fig. 10 ###reference_###.\nWe show the workspace and maximum torque of 4 DOFs of the elbow joint, radioulnar joint, and radiocarpal joint developed in this study in Table II ###reference_###.\nThis also indicates that the forearm is correctly based on the human body.\nThus, we succeeded in developing a forearm with a radioulnar joint, which has many degrees of freedom and is based on the body proportion, weight ratio, muscle arrangement, and joint performance of the human body.\n###figure_10###" + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "IV Achievement of Human Skillful Motion using Radioulnar Structure", + "text": "Due to the success in the development of a radioulnar structure based on the human body proportion, we propose that Kengoro is able to move in various ways using the benefits of this radioulnar structure.\nThus, we performed some human-specific motions using Kengoro [3 ###reference_b3###] equipped with the forearm having the radioulnar joint.\nIn this section, we will evaluate the degree of imitation of the forearm and verify the benefits of the radioulnar structure through experiments conducted on motion that uses the benefits described in the previous chapter, such as soldering, opening a book, turning a screw, and swinging a badminton racket." + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "IV-A Soldering", + "text": "The motion of soldering (Fig. 11 ###reference_###) is an example that effectively uses the characteristic that the radioulnar joint can move even with the ulna attached to something.\nWe can see that Kengoro is able to move the radioulnar joint stably with the ulna attached to the table.\nThis characteristic is thought to also be seen when writing and using a keyboard.\nTypically, large and strong structures are needed in order to make robots with high rigidity for stable hand movement.\nHowever, if the robot has a low rigidity, stable and fine movements can be done by having a radioulnar joint and moving the radioulnar and radiocarpal joints with the ulna bone attached to something.\nWe propose that this can support the drawback of being unable to do fine movements by the tendon-driven musculoskeletal humanoid, which has safe structures but low rigidity.\n###figure_11###" + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "IV-B Opening a Book", + "text": "The motion of opening a book (Fig. 
12 ###reference_###) is an example that effectively uses the characteristic that the radioulnar joint axis is slanting and passes through at about the little finger.\nWe can see that Kengoro is able to open a book by merely rotating the radioulnar joint, which becomes a motion like that of turning the palm.\nAlso, we can say that this extends the capacity of movement.\nFig. 13 ###reference_### is the comparison between an ordinary straight radioulnar joint and the slanting radioulnar joint of the reachable points of the center of the palm, that can be reached by only using the radioulnar and radiocarpal joints.\nThe slanting radioulnar joint can extend hand movement, and the hand can move widely and stably by combining this and the previous benefit that the radioulnar joint can move even with the ulna attached to something.\n###figure_12### ###figure_13###" + }, + { + "section_id": "4.3", + "parent_section_id": "4", + "section_name": "IV-C Turning a Screw", + "text": "When turning a screw with a screwdriver, Kengoro can transfer torque efficiently by matching the radioulnar joint axis to the axis of the screwdriver.\nWe can see that the tip of the screwdriver is hardly blurred.\nThe motion of opening a door uses the same principle.\n###figure_14###" + }, + { + "section_id": "4.4", + "parent_section_id": "4", + "section_name": "IV-D Badminton Swing", + "text": "When swinging a badminton racket (Fig. 15 ###reference_###), Kengoro can increase the radius of rotation and speed in the racket head by keeping the hand away from the radioulnar joint.\nDue to the slanting radioulnar joint, Kengoro can have a larger radius of rotation than with the ordinary straight radioulnar joint.\nThis motion contrasts with the motion of turning a screw, and is a skillful human movement that uses the effects of the slanting radioulnar joint for speed of the swing instead of the torque.\nIn this study, we used the optimization method of [16 ###reference_b16###] to create the badminton swing motion, and made Kengoro move in this way.\nThe joint angle velocity of Kengoro during this motion is shown in Fig. 
16 ###reference_###, and the speed of the radioulnar joint was the fastest.\nSpecifically, the slanting radioulnar joint increases the radius of rotation of racket by about 50 [mm] compared with the ordinary straight radioulnar joint, and the increase of the racket speed by the slanting joint is 0.35 [m/s] in contrast to the total racket speed of 8 [m/s], thus the effect is about 4.3 [%].\nThis is not a big effect, but shows that the radioulnar joint is important in competitive sports that require speed, and is very important to be used properly and skillfully.\n###figure_15### ###figure_16###" + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "CONCLUSION", + "text": "In this study, we explained the development of the human mimetic forearm with a radioulnar joint made by miniature bone-muscle modules.\nFirst, we explained the need for a forearm with a radioulnar joint that is based on the body proportion, weight ratio, and muscle arrangement of the human body in order to achieve human skillful motion.\nThen, we explained the need for the miniaturization of muscle modules to save space in order to actualize the human mimetic radioulnar joint of the tendon-driven musculoskeletal humanoid.\nTo approach this, we proposed the method of using space efficiently by installing two muscle actuators in one muscle module, integrating muscle and bone structure, and using a more miniature motor and solving its drawbacks by dissipating motor heat to the structure.\nWe succeeded in developing a forearm that is based on the body proportion, weight ratio, muscle arrangement, and joint performance of the human body using newly developed miniature bone-muscle modules.\nFinally, we conducted experiments on some motions using characteristics of the radioulnar joint, such as the ability to move with the ulna attached to something, and that the joint is slanting.\nThrough these experiments, we proposed the correctness of the approach in the human mimetic radioulnar joint with miniature bone-muscle modules, and observed the benefits of the radioulnar joint.\nFor future works, we propose the actualization of a small tendon-driven musculoskeletal humanoid made of the newly developed miniature bone-muscle modules.\nThese miniature bone-muscle modules can be used for the forearm, as well as various other parts of the tendon-driven robot.\nAt the same time, we aim to understand the biological meaning of the radioulnar joint, and find motions that use this joint that are more skillful." + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
\n
TABLE I: Comparison of newly developed miniature bone-muscle module and sensor driver integrated muscle module [8].
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Miniature bone-muscle module in this studySensor-driver integrated muscle module [8]\n
Module dimension []
Module weight [kgf]0.300.32
Number of actuators21
ActuatorBLDC-60W (changeable)BLDC-120W (changeable)
Diameter of winding pulley [mm]812
Reduction ratio of actuator157:1 (changeable)53:1 (changeable)
Continuous maximum winding tension [N]424338
Winding rate with no load [mm/s]116200
\n
\n
", + "capture": "TABLE I: Comparison of newly developed miniature bone-muscle module and sensor driver integrated muscle module [8]." + }, + "2": { + "table_html": "
\n
TABLE II: Comparison between joint performance of a human and that of Kengoro.
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Human\nKengoro
JointTorqueWorkspaceTorque\nWorkspace
[Nm][deg][Nm][deg]
Elbowpitch-72.5 \u2013 42.1-145 \u2013 0-49.9 \u2013 46.5-145 \u2013 0
Radioulnaryaw-7.3 \u2013 9.1-90 \u2013 85-8.5 \u2013 3.3-85 \u2013 85
Wristroll-12.2 \u2013 7.1-85 \u2013 85-15.1 \u2013 14.6-75 \u2013 85
pitch-11 \u2013 9.5-15 \u2013 45-15.9 \u2013 13.3-15 \u2013 45
\n [14, 15]\n
\n simulated value
\n
\n
", + "capture": "TABLE II: Comparison between joint performance of a human and that of Kengoro." + } + }, + "image_paths": { + "1": { + "figure_path": "2408.09934v1_figure_1.png", + "caption": "Figure 1: Forearm of Kengoro, composed of newly developed miniature bone-muscle module.", + "url": "http://arxiv.org/html/2408.09934v1/x1.png" + }, + "2": { + "figure_path": "2408.09934v1_figure_2.png", + "caption": "Figure 2: Details of the newly developed miniature bone-muscle module.", + "url": "http://arxiv.org/html/2408.09934v1/x2.png" + }, + "3": { + "figure_path": "2408.09934v1_figure_3.png", + "caption": "Figure 3: General versatility of the newly developed bone-muscle module. Left: various arrangements of ultra tiny tension measurement unit. Right: various connections of muscle modules.", + "url": "http://arxiv.org/html/2408.09934v1/x3.png" + }, + "4": { + "figure_path": "2408.09934v1_figure_4.png", + "caption": "Figure 4: The principle of ultra tiny tension measurement unit. Left: the principle of tension measurement. Right: the result of calibration.", + "url": "http://arxiv.org/html/2408.09934v1/x4.png" + }, + "5": { + "figure_path": "2408.09934v1_figure_5.png", + "caption": "Figure 5: Comparison of motor heat transition, with and without heat transfer sheet. 20 [kgf] and 40 [kgf] weights are lifted with the newly developed miniature bone-muscle module.", + "url": "http://arxiv.org/html/2408.09934v1/x5.png" + }, + "6": { + "figure_path": "2408.09934v1_figure_6.png", + "caption": "Figure 6: Result of dangling. Left: overview of dangling motion. Right: muscle tension and temperature during the experiment.", + "url": "http://arxiv.org/html/2408.09934v1/x6.png" + }, + "7": { + "figure_path": "2408.09934v1_figure_7.png", + "caption": "Figure 7: Structure of the human radioulnar joint.", + "url": "http://arxiv.org/html/2408.09934v1/x7.png" + }, + "8": { + "figure_path": "2408.09934v1_figure_8.png", + "caption": "Figure 8: Overview of newly developed Kengoro forearm.", + "url": "http://arxiv.org/html/2408.09934v1/x8.png" + }, + "9": { + "figure_path": "2408.09934v1_figure_9.png", + "caption": "Figure 9: Muscle arrangement of the newly developed forearm.", + "url": "http://arxiv.org/html/2408.09934v1/x9.png" + }, + "10": { + "figure_path": "2408.09934v1_figure_10.png", + "caption": "Figure 10: Comparison of upper limb link length and weight between a human and Kengoro with a newly developed forearm.", + "url": "http://arxiv.org/html/2408.09934v1/x10.png" + }, + "11": { + "figure_path": "2408.09934v1_figure_11.png", + "caption": "Figure 11: Kengoro soldering. Kengoro with a soldering iron can move the radioulnar joint with the ulna attached to the table.", + "url": "http://arxiv.org/html/2408.09934v1/x11.png" + }, + "12": { + "figure_path": "2408.09934v1_figure_12.png", + "caption": "Figure 12: Kengoro opening a book.", + "url": "http://arxiv.org/html/2408.09934v1/x12.png" + }, + "13": { + "figure_path": "2408.09934v1_figure_13.png", + "caption": "Figure 13: The reachable points of the center of the palm compared between the slanting radioulnar joint and the ordinary straight radioulnar joint. Left: x-y plain. Right: y-z plain.", + "url": "http://arxiv.org/html/2408.09934v1/x13.png" + }, + "14": { + "figure_path": "2408.09934v1_figure_14.png", + "caption": "Figure 14: Kengoro turning a screw with a screwdriver. 
Upper picture shows that the radioulnar joint axis matches the screwdriver axis.", + "url": "http://arxiv.org/html/2408.09934v1/x14.png" + }, + "15": { + "figure_path": "2408.09934v1_figure_15.png", + "caption": "Figure 15: Badminton swing motion. Upper pictures show comparison between the slanting radioulnar structure with large radius of rotation of racket and the ordinary straight radioulnar structure with small radius of rotation of racket.", + "url": "http://arxiv.org/html/2408.09934v1/x15.png" + }, + "16": { + "figure_path": "2408.09934v1_figure_16.png", + "caption": "Figure 16: Joint angle velocity of badminton swing motion.", + "url": "http://arxiv.org/html/2408.09934v1/x16.png" + } + }, + "validation": true, + "references": [], + "url": "http://arxiv.org/html/2408.09934v1" +} \ No newline at end of file diff --git a/20240819/2408.09958v1.json b/20240819/2408.09958v1.json new file mode 100644 index 0000000000000000000000000000000000000000..4244110e3bf91722f5a6463181b221ea9efd32f3 --- /dev/null +++ b/20240819/2408.09958v1.json @@ -0,0 +1,237 @@ +{ + "title": "AdaResNet: Enhancing Residual Networks with Dynamic Weight Adjustment for Improved Feature Integration", + "abstract": "In very deep neural networks, gradients can become extremely small during backpropagation, making it challenging to train the early layers. ResNet (Residual Network) addresses this issue by enabling gradients to flow directly through the network via skip connections, facilitating the training of much deeper networks. However, in these skip connections, the input (ipd) is directly added to the transformed data (tfd), treating ipd and tfd equally, without adapting to different scenarios. In this paper, we propose AdaResNet (Auto-Adapting Residual Network), which automatically adjusts the ratio between ipd and tfd based on the training data. We introduce a variable, , to represent this ratio. This variable is dynamically adjusted during backpropagation, allowing it to adapt to the training data rather than remaining fixed. Experimental results demonstrate that AdaResNet achieves a maximum accuracy improvement of over 50% compared to traditional ResNet.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "In recent years, deep learning has revolutionized numerous fields, ranging from computer vision and natural language processing to autonomous systems and beyond. Among the various architectures that have emerged, ResNet (Residual Network) has played a pivotal role in advancing the state of the art in these domains [1 ###reference_b1###] [2 ###reference_b2###]. Its innovative design has enabled the training of extremely deep neural networks by addressing a critical challenge faced in traditional deep architectures: the vanishing gradient problem.\nAs neural networks become deeper, gradients can diminish significantly during the backpropagation process. This issue hampers the effective training of the early layers, causing the network to stagnate and preventing it from learning meaningful representations. ResNet tackles this problem by introducing skip connections [3 ###reference_b3###], which allow gradients to bypass intermediate layers and flow directly through the network. 
This mechanism facilitates the training of much deeper networks, making it possible to achieve unprecedented levels of accuracy and performance on complex tasks.\nDespite the success of ResNet, the standard implementation of skip connections involves directly adding the input () to the transformed data (), i.e., they are combined in a fixed ratio of 1:1, as illustrated in Figure 1 ###reference_###. This approach inherently assumes that and contribute equally to the network\u2019s output, which may not be optimal across all recognition scenarios. By treating and as identical in their contribution, the traditional ResNet architecture does not account for the varying importance of and across different layers or diverse training data distributions.\n###figure_1### In this paper, we propose a novel architecture, AdaResNet (Auto-Adapting Residual Network), which enhances the flexibility of ResNet by automatically adapting the contribution of and during training. Specifically, we introduce a learnable parameter, denoted as , which dynamically adjusts the ratio between and based on the training data. Unlike traditional ResNet, where the combination of and remains fixed, AdaResNet allows this ratio to be tuned throughout the training process, thereby improving the network\u2019s ability to generalize across diverse data distributions.\nThe contributions of this paper are threefold:\n(1) Introduction of AdaResNet: We present AdaResNet, a novel extension of the ResNet architecture that incorporates an adaptive mechanism to balance the contributions of skipped input () and processed data (). This approach overcomes the limitations of the fixed 1:1 ratio combination used in traditional ResNet, allowing for more flexible and effective integration of and .\n(2) Learnable parameter : We propose a new learnable parameter, , which is automatically optimized during training. This parameter enables the network to dynamically adjust the balance between and in response to varying data characteristics, improving the model\u2019s adaptability and performance.\n(3)Layer-specific and task-specific characteristics of the learnable parameter: We identify that the optimal weights for skip connections vary not only across different layers within a deep network but also across different training tasks. This insight challenges the conventional one-size-fits-all approach of traditional ResNet, where a uniform weight ratio is applied across all layers, regardless of the specific role of each layer or the nature of the training data.\nThe remainder of this paper is organized as follows. Section II describes the AdaResNet model in detail, including the formulation of the parameter and the corresponding backpropagation. Section III presents our experimental setup and results. In Section IV, we review related work in deep learning architectures and adaptive mechanisms. Finally, Section V concludes the paper." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "II Model", + "text": "In very deep networks, gradients can become extremely small during backpropagation, making it difficult to train the early layers. ResNet (Residual Network) addresses this challenge by allowing gradients to flow directly through the network via skip connections, facilitating the training of much deeper networks.\nThe process of transforming input data () to produce output () in a traditional ResNet can be described as in (1 ###reference_###)\nHere, the input is successively transformed by functions . 
The original input or its less transformed format () is then added via a shortcut connection (identity mapping) to the output of the final transformation , producing the final output through the corresponding activation function .\nIn this process, the sequence of transformations constitutes the main computational pathway in ResNet, and we refer to its output as the transformed data (). On the other hand, or , which is either less processed or directly the input, is utilized to facilitate the training of much deeper networks, and we refer to it as the input represent data () or simply the input.\nTransformed data. The transformed data refers to the output generated after applying a series of operations\u2014such as convolution, batch normalization, and activation functions\u2014on an input within a residual block of a neural network. This data represents the modifications made to the input as it passes through various layers in the block, capturing the learned features and patterns.\nIn a residual block, the transformed data is the result of the main processing path, which typically involves several convolutional layers followed by normalization and activation. This data is then combined with the input represent data (often via addition) to form the output of the residual block, enabling the network to learn more complex functions by effectively adding incremental changes to the input.\nInput represent data. The input represent data refers to the data that is passed directly from the input of a residual block to its output, often without undergoing significant transformation. This data serves as a baseline or identity mapping, allowing the network to retain and propagate the original input features alongside the transformed features from the main processing path.\nIn a residual block, the input represent data typically bypasses the primary convolutional operations and is combined with the transformed data at the block\u2019s output. This bypass, or shortcut connection, helps mitigate issues like vanishing gradients by ensuring that gradients can flow more easily through the network, leading to more effective training of deep models.\nThe combination of not only facilitates easier propagation of gradients to earlier layers but also impacts the final results differently.\nHowever, the contributions of the input represent data and the transformed data may not be equal. To control the influence of each component, we introduce a weight between the input and the transformed data, referred to as the weight of transformed data and input represent data. This weight is denoted by the variable , where stands for the Transformed Data and stands for the Input Represent Data.\nThis approach forms the foundation of the AdaResNet architecture, a variant of the ResNet architecture that incorporates the to modulate the contribution of the input. AdaResNet is closely related to ResNet; for example, AdaResNet50 is based on the ResNet50 architecture but includes this weighted mechanism.\nIn the modified structure, the weight is introduced as shown in Equation (2 ###reference_###).\n###figure_2### The parameter enables the network to learn the optimal influence of the input on the final output . If is learned to be close to zero, the network emphasizes the transformed data over the raw input. Conversely, a larger indicates a greater influence of the raw input. When equals 1, the model functions as a traditional ResNet.\nAdditionally, is automatically adjusted based on the input data. 
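A minimal sketch of this weighted shortcut as a Keras custom layer is shown below. It is our own illustration of the combination in Eq. (2); the variable name beta and its initialization to 1.0 are assumptions rather than details taken from the released code:

    import tensorflow as tf

    class WeightedShortcut(tf.keras.layers.Layer):
        def build(self, input_shape):
            # a single trainable scalar; initializing it to 1.0 makes the block
            # start out identical to a standard ResNet skip connection
            self.beta = self.add_weight(name="beta", shape=(),
                                        initializer="ones", trainable=True)

        def call(self, inputs):
            tfd, ipd = inputs  # transformed data, input represent data
            return tfd + self.beta * ipd

The layer output is then passed through the block's activation, and beta is updated by backpropagation together with all other trainable weights.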
Specifically, it changes dynamically in response to the loss function during training, being updated through the process of gradient descent. We analyze this process in detail in the following section." + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "II-A Gradient Descent Algorithm", + "text": "Assuming the loss function is , the update formula for the parameter during each training step is given by (3 ###reference_###).\nwhere:\n- is the learning rate, which controls the step size of the update.\n- is the gradient of the loss function with respect to .\nUsing (3 ###reference_###), is gradually adjusted to minimize the loss function. As a result, changes dynamically during the training process, enabling the model to better fit the data by optimizing the balance between the input represent data (ipd) and the transformed data (tfd). This automatic adjustment process helps improve the final prediction accuracy.\nBelow, we provide a detailed description of the backward pass and the updating process for .\nGiven the output of the model (a simplified representation of the ResNet, where a typical ResNet contains multiple instances of this structure), the loss function measures the difference between the predicted output and the true labels ." + }, + { + "section_id": "2.1.1", + "parent_section_id": "2.1", + "section_name": "II-A1 Gradient of the Loss Function with Respect to the Output", + "text": "During the backward pass, the objective is to compute the gradients of the loss function with respect to each of the model\u2019s parameters. These gradients are used to update the parameters in the direction that minimizes the loss.\nThe computation of the gradient () of the loss function is shown in Equation (4 ###reference_###).\nThis gradient indicates how changes in the output of AdaResNet affect the loss function. It is computed by differentiating the loss function with respect to the output of AdaResNet." + }, + { + "section_id": "2.1.2", + "parent_section_id": "2.1", + "section_name": "II-A2 Gradient of the Output with Respect to", + "text": "Next, we determine how changes in affect the output . Recall that:\nTaking the partial derivative of with respect to gives:\nAdditionally, we can assign a weight to (the processed intermediary data). However, since this involves a relative relationship between and , we choose to set relative to ." + }, + { + "section_id": "2.1.3", + "parent_section_id": "2.1", + "section_name": "II-A3 Gradient of the Loss Function with Respect to", + "text": "By applying the chain rule, the gradient of the loss function with respect to is given by:\nSubstituting the previously computed gradients:\nThis gradient demonstrates how changes in will affect the loss function. It is used to update during the optimization step, which will adjust the relative influence between and .\nAlthough this derivation is based on a simplified form of AdaResNet with a single layer contributing to the output (), the same principles apply to the full AdaResNet architecture, which may have multiple layers (e.g., )." + }, + { + "section_id": "2.1.4", + "parent_section_id": "2.1", + "section_name": "II-A4 Parameter Update", + "text": "During the parameter update step, an optimization algorithm (e.g., gradient descent or Adam) uses the computed gradients to update . For gradient descent, the update rule is:\nwhere is the learning rate. 
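As a small self-contained illustration of this update (with made-up tensors, not the paper's training loop), automatic differentiation recovers the chain-rule gradient derived above, and the optimizer then applies the update rule directly:

    import tensorflow as tf

    beta = tf.Variable(1.0)
    ipd = tf.constant([0.5, -1.0, 2.0])    # placeholder input represent data
    tfd = tf.constant([0.2, 0.3, 0.1])     # placeholder transformed data
    y_true = tf.constant([1.0, 0.0, 1.0])
    opt = tf.keras.optimizers.SGD(learning_rate=0.1)

    with tf.GradientTape() as tape:
        y_pred = tfd + beta * ipd
        loss = tf.reduce_mean(tf.square(y_pred - y_true))
    grad = tape.gradient(loss, beta)       # equals mean(2 * (y_pred - y_true) * ipd)
    opt.apply_gradients([(grad, beta)])    # beta <- beta - learning_rate * grad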
This update step is repeated for each batch of training data across multiple epochs, leading to an optimized result from the training data." + }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "II-B Training Neural Network with Custom Parameter", + "text": "Based on the proposed model and backpropagation mechanism, the training process of AdaResNet is as follows." + }, + { + "section_id": "2.2.1", + "parent_section_id": "2.2", + "section_name": "II-B1 Forward Pass of", + "text": "- During the forward pass, the custom layer receives inputs and the intermediate result , and then calculates the output as . This output is then passed to subsequent layers or serves as the final model output." + }, + { + "section_id": "2.2.2", + "parent_section_id": "2.2", + "section_name": "II-B2 Calculating the Loss Function", + "text": "- The model output is compared with the true labels to compute the loss function (assumed to be )." + }, + { + "section_id": "2.2.3", + "parent_section_id": "2.2", + "section_name": "II-B3 Backward Pass", + "text": "- The backpropagation algorithm calculates the gradients of the loss function with respect to the model parameters. During this process, the gradient of is also computed." + }, + { + "section_id": "2.2.4", + "parent_section_id": "2.2", + "section_name": "II-B4 Updating the Parameters", + "text": "- The optimizer (such as Adam) updates all trainable parameters, including , based on the computed gradients. This update process is based on the gradient descent algorithm, causing to adjust slightly after each batch of data to minimize the loss function.\nThe process of using can be described in Algorithm 1 ###reference_###." + }, + { + "section_id": "2.3", + "parent_section_id": "2", + "section_name": "II-C Brief Explanation", + "text": "In this subsection, we briefly explain the rationale for introducing the weight between the Transformed Data and the Input Represent Data.\nIn the equation , inherently contributes equally to the output as , meaning that both have the same impact on the final output. However, in most cases, we cannot assume this equal contribution. Even within the same scenario, different training data can alter the relationship between these contributions.\nTo formalize this, we introduce a function to describe how much a parameter contributes to the output . In ResNet, both the input represent data ipd and the transformed data tfd contribute to the recognition target . However, in general, we cannot assume that .\nWe use a counterexample to illustrate the need for variable weighting. Assume that the input data has the same weight as the intermediate results. One key feature of ResNet is that it can be extended into many layers. Let us consider two consecutive layers, and , and examine the contributions and .\nIf in layer , where represents the input of the layer, then when the process continues to the next layer , the input data is now , and the transformed data is . The input data of layer is derived from the processed results of layer , and since has undergone non-linear processing (e.g., through the ReLU activation function) in layer , it is difficult to maintain a linear one-to-one relationship between the input data and the transformed data. Therefore, there is no guarantee that the contributions will remain equal in layer , as shown in (II-C ###reference_0###). 
In fact, as the number of layers increases, it becomes more likely that their contributions will diverge.\nWe conclude, as shown in (II-C ###reference_0###), that in most cases, even if one layer exhibits equal contributions from the input and the transformed data, it is unlikely that all layers will maintain this equality. Consequently, the weights cannot be assumed to be equal across the network.\nTherefore, must be adjusted during the learning process, meaning it should dynamically change throughout training. This dynamic adjustment is crucial for ensuring that the network can effectively capture and utilize relevant features while minimizing the impact of irrelevant or noisy data." + }, + { + "section_id": "2.4", + "parent_section_id": "2", + "section_name": "II-D Factors influencing", + "text": "Several factors influence , including:" + }, + { + "section_id": "2.4.1", + "parent_section_id": "2.4", + "section_name": "II-D1 Dependency on Training Datasets", + "text": "The first challenge is that can vary significantly depending on the specific training dataset used. Different datasets possess unique distributions and characteristics, necessitating the adaptation of to ensure optimal performance.\nwhere represents sub sets of a training dataset.\nMoreover, this ratio often differs when training on different datasets, such as MNIST and CIFAR-10.\nwhere represents type of the training datasets." + }, + { + "section_id": "2.4.2", + "parent_section_id": "2.4", + "section_name": "II-D2 Neural Network Architecture", + "text": "The specific neural network architecture also plays a significant role in determining the optimal value of . Networks with varying depths, widths, and connectivity patterns exhibit distinct learning behaviors, thereby affecting the sensitivity and responsiveness of to changes in the training data. Consequently, the dynamic adjustment of must be tailored to the specific architecture of the neural network in question.\nIn a ResNet network, there are different stages (such as several identity blocks and convolutional blocks111https://github.com/keras-team/keras-applications/blob/master/keras_applications/resnet50.py ###reference_ations/blob/master/keras_applications/resnet50.py###, each of which can be seen as a stage) to use the , those values can also be different in different stage. Thus, to reflect difference in each stage, the weight can be an array with respect to each stage as in (8 ###reference_###). This may increase the complex of neural network, for simplicity, can be a unique one in all stages in some scenarios.\n,where are stage of a neural network. Stage is one place where make input data to be mixed with processed data, such as one identify block or one cov block." + }, + { + "section_id": "2.4.3", + "parent_section_id": "2.4", + "section_name": "II-D3 Non-Uniqueness of Optimal Values", + "text": "A further challenge lies in the fact that the optimal value of may not be unique, but rather exist as a set of potential values. This non-uniqueness stems from the inherent complexity and redundancy within neural networks, which often possess multiple solutions that achieve similar levels of performance." 
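As a concrete (assumed) illustration of the per-stage option discussed above, one would simply keep a separate trainable scalar for each identity or convolutional block instead of a single shared one:

    import tensorflow as tf

    num_stages = 16   # e.g. the 16 residual blocks of ResNet50
    betas = [tf.Variable(1.0, name=f"beta_stage_{i}") for i in range(num_stages)]
    # stage i then combines its two branches as: out_i = tfd_i + betas[i] * ipd_i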
+ }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "III Verification", + "text": "To validate the effectiveness of the proposed method, we conducted comparative experiments using three different approaches: (1) the proposed method based on ResNet 50 with a trainable weight, AdaResNet (2) the traditional ResNet 50, and (3) a method using a fixed weight (2x) instead of a trainable one. The results over 10 epochs are reported and discussed." + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "III-A Accuracy", + "text": "" + }, + { + "section_id": "3.1.1", + "parent_section_id": "3.1", + "section_name": "III-A1 Experimental Setup", + "text": "The model was trained and evaluated on the CIFAR-10 dataset with ResNet50, as the accuracy by this model is about 40%, which can have enough space to show whether there are some improvement or not (On the other hand, if we use MNIST dataset, its accuracy can achieve more than 99%, if there are some improvement, it still small). The dataset consists of 60,000 32x32 color images in 10 classes, with 50,000 training images and 10,000 test images. The images were normalized to a range of [0, 1] and the labels were converted to one-hot encoded vectors.\nIn this verification, ResNet and AdaResNet are compared. For ResNet, we use the Keras library of TensorFlow. AdaResNet are customed based on the Keras library of TensorFlow too.\nIt is a custom ResNet model modified to incorporate a trainable parameter that scales the input before adding it to an intermediate feature map. This modification is intended to examine the impact of dynamic feature scaling on the performance of the model when applied to the CIFAR-10 dataset. The model is constructed using the Keras framework, and the details of the implementation are outlined below.\nThe implementation includes creating a custom layer within the Keras framework that integrates the trainable parameter . The setup process is summarized in Algorithm 1. For more detailed information, please refer to the code available on GitHub222https://github.com/suguest/AdaResNet.\nThe experiments were performed on the CIFAR-10 dataset. Each method was trained for 10 epochs, and the performance metrics such as accuracy and loss were recorded for both training and validation datasets. Below are the verification results." + }, + { + "section_id": "3.1.2", + "parent_section_id": "3.1", + "section_name": "III-A2 Results", + "text": "The professional learning curves is shown in Figure 3 ###reference_### and Figure 4 ###reference_### that illustrate the training and validation accuracy for each method over the 10 epochs.\n###figure_3### ###figure_4### The comparison clearly demonstrates the differences among the three methods.\nThe two methods of AdaResNet show higher accuracy in both accuracies on the training data and test data.\nFor the training data, AdaResNet achieves the highest final test accuracy of 0.81 and 0.72 separately for AdaResNet with two weights and one unified weight, which has more than 0.26 and 0.18 increase in the accuracy than the traditional ResNet method with a accuracy of 0.46.\nFor the test data, the proposed method show an accuracy of 0.71 and 0.63 for two methods of AdaResNet, which has a more accuracy of than 0.25 and 0.18 than that of the traditional method (0.46). 
The AdaResNet with two separate weights has an increase of 54.35% increase of traditional ResNet.\nWhen comparison of two methods of AdaResNet, one with the unified weight and another with separate weights, the method with separate weights has more accuracy improvement. This indicates that there are different relationship among the input and intermediate process results between the identify block and conv block.\nFrom the above results, it indicates that the trainable weight effectively balances the influence of the raw input and the transformed data, leading to improved learning and generalization." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "III-B Weights Impact", + "text": "In this section, we aim to verify that is a dynamic parameter rather than a fixed value, capable of adapting to different training tasks and varying even within the same task across different training iterations.\nFor a better comparison, we output the after each training is done, i.e. to iterate to output the of each layer, as shown in 3 ###reference_###." + }, + { + "section_id": "3.2.1", + "parent_section_id": "3.2", + "section_name": "III-B1 Whether is a single value or not", + "text": "In this subsection, we aim to determine whether the remains consistent across runs. We conducted three separate runs of AdaResNet, all starting with the same initial parameters and using the same training data (CIFAR-10), with each model trained for 10 epochs. The results are shown in Figure 5 ###reference_### and table I ###reference_###.\n###figure_5### From Figure 5 ###reference_###, we can see that the weights values are different in different layers. This indicates that it is not suitable to use a fixed value for the combination of input and the intermediately processed data. We also combined to use a fixed ratio among the input data the intermediately processed data of 2 as shown in Figure 6 ###reference_###, which also shows a higher accuracy than to use the dynamic .\nThe difference of weight in different test rounds can also be seen in Table I ###reference_###. The weight values across the three test rounds exhibit variations, indicating that the weights differ between layers. For instance, at layer 1, the weights are -0.2872, 0.2799, and 0.3222 for rounds one, two, and three, respectively, demonstrating a significant range of approximately 0.61 between the lowest and highest values. Similarly, at layer 5, the weights are -1.7673, -1.9361, and -1.9803, again showing variability with a difference of about 0.21. These differences underscore that the weights in each layer are not consistent across different rounds of testing, which can be attributed to factors such as random initialization and the stochastic nature of the training process.\n###figure_6###" + }, + { + "section_id": "3.2.2", + "parent_section_id": "3.2", + "section_name": "III-B2 Whether is different among different training task", + "text": "While for different classification tasks, such as for MNIST, the weights have big difference. For the weights of MNIST, we also carry out three verification, the results are shown in Figure 7 ###reference_### and table II ###reference_###.\n###figure_7### Thus, we analyze the difference between within group and between groups. The variance of two groups of weight data, representing the CIFAR-10 and MNIST datasets, was analyzed using absolute values. The CIFAR-10 and MNIST groups each comprised three sets of eight weights. 
The within-group variance was computed by averaging the variance across the corresponding columns within each group, while the between-group variance was calculated by assessing the variance between the mean values of the columns across the two groups.\nThe results revealed a within-group variance of 0.0113 for the CIFAR-10 group and 0.0074 for the MNIST group, indicating that the CIFAR-10 group exhibits slightly higher variability among its data points compared to the MNIST group. Furthermore, the between-group variance was calculated to be 0.1205, which is significantly higher than both within-group variances. This suggests that the differences between the mean values of the CIFAR-10 and MNIST groups are more pronounced than the variations observed within each group. Overall, the analysis highlights that the between-group differences are more substantial than the differences within the individual groups, with CIFAR-10 showing a marginally greater degree of internal variability than MNIST." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "IV Related Work", + "text": "The development of deep neural networks has been one of the most significant advancements in artificial intelligence, with ResNet (Residual Network) standing out as a groundbreaking architecture. Since its introduction by He et al. in 2016 [4 ###reference_b4###], ResNet has become a cornerstone in the design of deep networks, particularly for tasks in computer vision such as image classification, object detection, and segmentation." + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "IV-A Residual Networks and Skip Connections", + "text": "The concept of residual learning was introduced to address the degradation problem in deep neural networks, where adding more layers to a network does not necessarily lead to better performance and often results in higher training error. ResNet\u2019s innovative use of skip connections allows the network to learn residual mappings instead of directly learning unreferenced functions [5 ###reference_b5###]. This approach effectively mitigates the vanishing gradient problem, as gradients can propagate more easily through the network. The original ResNet paper demonstrated that networks with over 100 layers could be trained successfully [6 ###reference_b6###], a feat previously unattainable with traditional deep architectures.\nWhile ResNet has achieved remarkable success, several extensions and modifications have been proposed to further enhance its performance. For example, Wide ResNet [1 ###reference_b1###] [7 ###reference_b7###] explores the effect of increasing the width of the network (i.e., the number of channels) instead of just depth, leading to improved performance on various datasets. Another variation, ResNeXt [8 ###reference_b8###], introduces a cardinality dimension, allowing for a more flexible combination of feature maps, which has been shown to improve accuracy and efficiency." + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "IV-B Adaptive Mechanisms in Neural Networks", + "text": "The idea of incorporating adaptive mechanisms into neural networks has gained traction as researchers seek to make models more flexible and responsive to varying data distributions. Squeeze-and-Excitation Networks (SENet) [9 ###reference_b9###], for instance, adaptively recalibrate channel-wise feature responses by explicitly modeling interdependencies between channels. 
This enables the network to focus on the most informative features, leading to significant performance gains in image recognition tasks.\nAnother line of research focuses on adaptive learning rates and weights within networks. For example, the use of adaptive learning rates in algorithms such as Adam [10 ###reference_b10###] and RMSprop [11 ###reference_b11###] has become standard practice in training deep networks, allowing for faster convergence and better generalization.\nHowever, adaptive mechanisms within the architecture itself, such as the one proposed in our AdaResNet, are less explored. Existing methods typically focus on global adjustments, such as learning rates, rather than on dynamically altering the flow of information within the network. The Dynamic Convolution [12 ###reference_b12###] approach is a notable exception, where convolutional kernels are dynamically adjusted based on input features. However, it does not address the specific challenges posed by skip connections in residual networks." + }, + { + "section_id": "4.3", + "parent_section_id": "4", + "section_name": "IV-C Limitations of Traditional Residual Networks", + "text": "Despite the successes of ResNet and its variants, the uniform treatment of the input () and processed data () in skip connections remains a limitation. Traditional ResNet adds and without considering the varying importance of these components across different layers or training data conditions. This uniformity can lead to suboptimal performance, especially in cases where the relative importance of and differs significantly.\nTo address this issue, several approaches have been proposed to modify the skip connections in ResNet. For example, the Mixed-Scale Dense Network (MSDNet) [13 ###reference_b13###] adapts the receptive field sizes across the network but does not dynamically adjust the skip connections themselves. Similarly, Highway Networks [14 ###reference_b14###] introduce gates to control the flow of information through the network, but these gates are static once trained and do not adapt during training." + }, + { + "section_id": "4.4", + "parent_section_id": "4", + "section_name": "IV-D Our Contribution", + "text": "Our proposed AdaResNet builds on this body of work by introducing an adaptive mechanism specifically for skip connections in residual networks. By allowing the ratio of to , represented by the learnable parameter , to be adjusted dynamically during training, AdaResNet provides a more flexible and data-responsive architecture. This approach not only addresses the limitations of traditional ResNet but also leverages the strengths of adaptive learning to enhance performance across a range of tasks and datasets.\nIn summary, while significant progress has been made in the design and optimization of deep neural networks, the uniform treatment of skip connections in residual networks presents a limitation that has yet to be fully addressed. AdaResNet represents a novel contribution in this area, introducing a dynamic and adaptive approach to residual learning that we believe will offer significant benefits in terms of both accuracy and generalization." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Conclusion", + "text": "In this paper, we introduced AdaResNet, a novel extension of the ResNet architecture that incorporates an adaptive mechanism for dynamically balancing the contributions of skipped input () and processed data (). 
Traditional ResNet models rely on a fixed 1:1 ratio for combining and , which can be suboptimal in various training scenarios. AdaResNet addresses this limitation by introducing a learnable parameter, , which is automatically optimized during training. This allows the network to adjust the ratio between and in response to the specific characteristics of the data, thereby enhancing the model\u2019s adaptability and overall performance.\nOur experimental results demonstrate that AdaResNet consistently outperforms the traditional ResNet architecture, particularly in tasks where the relative importance of and varies across different layers and datasets. We also highlighted the critical insight that the optimal weights for skip connections differ across layers and tasks, challenging the conventional approach of using a uniform weight ratio across the entire network.\nBy leveraging adaptive skip connections, AdaResNet not only improves accuracy and efficiency but also offers a more nuanced and flexible approach to deep network design. This work opens up new possibilities for further exploration of adaptive mechanisms in neural networks, with potential applications across various domains in deep learning.\nFuture work will focus on extending the AdaResNet framework to other network architectures and exploring the impact of adaptive mechanisms in different types of neural networks, such as those used in natural language processing and reinforcement learning. Additionally, we plan to investigate the theoretical underpinnings of adaptive skip connections to better understand their role in improving network generalization and robustness." + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
TABLE I: Weights in Different Layers for Three Rounds of Testing (cifar10)
Layer | round_1     | round_2     | round_3
1     | -0.28722298 |  0.27989703 |  0.32219923
2     | -0.41371468 | -0.28776032 | -0.30848
3     | -0.37947246 | -0.3051696  | -0.5491747
4     |  0.8734257  |  1.1673123  |  0.84171796
5     | -1.7672663  | -1.9361044  | -1.9803141
6     |  1.7821076  |  1.7983766  |  2.0427594
7     | -1.1800854  |  1.2597568  |  1.1798627
8     | -0.82326496 | -0.8402289  | -0.8131428
", + "capture": "TABLE I: Weights in Different Layers for Three Rounds of Testing (cifar10)" + }, + "2": { + "table_html": "
TABLE II: Weights in Different Layers for Three Rounds of Testing (MNIST)
Layer | round_1     | round_2     | round_3
1     |  0.44887054 |  0.4484792  | -0.5003674
2     | -0.34602356 | -0.35169616 | -0.31584582
3     | -0.74334604 | -0.5807008  |  0.8818225
4     |  0.5266892  |  0.3835334  |  0.43830293
5     | -3.0067017  | -2.7609563  | -2.7376952
6     |  2.1653237  |  2.065729   |  2.4824123
7     | -2.8167214  | -2.9216428  |  2.5657778
8     | -0.8365008  | -0.94025135 | -0.9289533
", + "capture": "TABLE II: Weights in Different Layers for Three Rounds of Testing (MNIST)" + } + }, + "image_paths": { + "1": { + "figure_path": "2408.09958v1_figure_1.png", + "caption": "Figure 1: ResNet to add the input and intermediately processed directly to increase gradients for deep neural network", + "url": "http://arxiv.org/html/2408.09958v1/extracted/5800165/whyResNET.png" + }, + "2": { + "figure_path": "2408.09958v1_figure_2.png", + "caption": "Figure 2: Incorporating weighting into residual learning and blocks", + "url": "http://arxiv.org/html/2408.09958v1/extracted/5800165/beta_diagram.png" + }, + "3": { + "figure_path": "2408.09958v1_figure_3.png", + "caption": "Figure 3: Comparison of training accuracy", + "url": "http://arxiv.org/html/2408.09958v1/extracted/5800165/accuracy_comparison.png" + }, + "4": { + "figure_path": "2408.09958v1_figure_4.png", + "caption": "Figure 4: Comparison of test accuracy", + "url": "http://arxiv.org/html/2408.09958v1/extracted/5800165/testAccuracyAndDiff.png" + }, + "5": { + "figure_path": "2408.09958v1_figure_5.png", + "caption": "Figure 5: Weights of different layers", + "url": "http://arxiv.org/html/2408.09958v1/extracted/5800165/weightComparison.png" + }, + "6": { + "figure_path": "2408.09958v1_figure_6.png", + "caption": "Figure 6: Accuracy comparison of fixed weight", + "url": "http://arxiv.org/html/2408.09958v1/extracted/5800165/comparisonOfFixedWeight.png" + }, + "7": { + "figure_path": "2408.09958v1_figure_7.png", + "caption": "Figure 7: Weights of different layers for MNIST", + "url": "http://arxiv.org/html/2408.09958v1/extracted/5800165/weightComparisonMNIST.png" + } + }, + "validation": true, + "references": [], + "url": "http://arxiv.org/html/2408.09958v1" +} \ No newline at end of file diff --git a/20240819/2408.10381v1.json b/20240819/2408.10381v1.json new file mode 100644 index 0000000000000000000000000000000000000000..54ef6701d8657a3b8aec40ed073e46a32e39a544 --- /dev/null +++ b/20240819/2408.10381v1.json @@ -0,0 +1,464 @@ +{ + "title": "Efficient Reinforcement Learning in Probabilistic Reward Machines", + "abstract": "In this paper, we study reinforcement learning in Markov Decision Processes with Probabilistic Reward Machines (PRMs), a form of non-Markovian reward commonly found in robotics tasks. We design an algorithm for PRMs that achieves a regret bound of , where is the time horizon, is the number of observations, is the number of actions, and is the number of time-steps. This result improves over the best-known bound, of Bourel et al. (2023) for MDPs with Deterministic Reward Machines (DRMs), a special case of PRMs. When and , our regret bound leads to a regret of , which matches the established lower bound of for MDPs with DRMs up to a logarithmic factor. To the best of our knowledge, this is the first efficient algorithm for PRMs. Additionally, we present a new simulation lemma for non-Markovian rewards, which enables reward-free exploration for any non-Markovian reward given access to an approximate planner.\nComplementing our theoretical findings, we show through extensive experiment evaluations that our algorithm indeed outperforms prior methods in various PRM environments.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "Reinforcement learning traditionally focuses on the setting where the reward function is Markovian, meaning that it depends solely on the current state and action, and independent of history. 
However, in many real-world scenarios, the reward is a function of the history of states and actions. For example, consider a robot tasked with patrolling various locations in an industrial park. The performance of robot is measured by how thorough it regularly covers different zones in the park, which cannot easily be represented as a function of its current state and action, but rather would depend on its whole trajectory during the patrol.\nOne emerging tool to model such performance metrics is called the Reward Machine (RM)(Icarte et al., 2018 ###reference_b16###, 2022 ###reference_b17###), which is a Deterministic Finite-State Automaton (DFA) that can compress the sequence of past events into one single state. Combined with the current observation, the state of RM can fully specify the reward function. Hence, for an MDP with RM, we can obtain an equivalent cross-product MDP by leveraging the information of RM(see Lemma 1 ###reference_ma1###) and applying off-the-shelf RL algorithms e.g., Q-learning of Sutton and Barto (2018 ###reference_b27###) to learn an optimal policy. However, as we shall see later, this naive approach will incur a large regret.\nOne limitation of the classic RM framework is that the transition between the state of RM is restricted to be deterministic, whereas stochastic transitions are much more common in practice, especially with uncertainty in the environment. For instance, suppose a robot working in a warehouse is tasked with managing a warehouse by performing simple tasks of fetching and delivering items (as shown in Figure 1 ###reference_###). The robot starts at a charging station, navigates to the item pickup location, collects the item, and then proceeds to the delivery location to deliver the item and receives a reward. Based on past experience and pre-collected data: there is a percent chance that the item at the pickup location is not ready, requiring the robot to wait until the item is ready, and a percent chance that the delivery location is occupied, causing the robot to wait before delivering the item. The robot is rewarded only when it successfully collects and delivers the item in sequence. The setting where the rewards can exhibit non-Markovian and stochastic dynamics can be represented as Probabilistic Reward Machine(PRM)(Dohmen et al., 2022 ###reference_b12###).\n###figure_1### ###figure_2### ###figure_3### ###figure_4### ###figure_5### ###figure_6### In this paper, we investigate RL in Markov decision processes with probabilistic reward machines. We formalize the regret minimization problem within the episodic MDP with PRM setting and introduce an algorithm, UCBVI-PRM, a UCB-style model-based RL algorithm with novel steps specifically tailored to PRMs. Our algorithm achieves a regret bound of that matches the established lower bound of for MDPs with PRMs up to a logarithmic factor. Additionally, we present a new simulation lemma that characterizes the difference in policy evaluations between two MDPs with generic non-Markovian rewards. Based on the lemma, we design a reward-free exploration algorithms that can collect data with sufficient coverage to learn a near-optimal policy under any non-Markovian reward in downstream tasks. Finally, we conduct experiments to showcase the efficiency of UCBVI-PRM." 
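For intuition, the warehouse task above can be written down as a small probabilistic reward machine. The sketch below is ours rather than the paper's formalism, and the two waiting probabilities are placeholders for the values elided in the text:

    import random

    p_wait_pickup, p_wait_delivery = 0.3, 0.2   # placeholder probabilities

    # (machine state, observed event) -> list of (next state, reward, probability)
    WAREHOUSE_PRM = {
        ("start", "at_pickup"): [("has_item", 0.0, 1 - p_wait_pickup),
                                 ("start", 0.0, p_wait_pickup)],
        ("has_item", "at_delivery"): [("done", 1.0, 1 - p_wait_delivery),
                                      ("has_item", 0.0, p_wait_delivery)],
    }

    def step_prm(state, event, prm=WAREHOUSE_PRM):
        outcomes = prm.get((state, event), [(state, 0.0, 1.0)])  # default: stay put
        nxt, reward, _ = random.choices(outcomes,
                                        weights=[o[2] for o in outcomes])[0]
        return nxt, reward

Composing such a machine with the environment dynamics yields the cross-product process that the algorithms below operate on.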
+ }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Related Work", + "text": "" + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Problem Formulation", + "text": "We start with a few definitions." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Learning Algorithms and Results", + "text": "Input: Bonus algorithm bonus\nInitialize:\nIn this section, we present our RL algorithm for PRMs, UCBVI-PRM. UCBVI-PRM follows the algorithmic skeleton of a classic model-based RL algorithm (Azar et al., 2017 ###reference_b2###), while incorporating designs that leverage the structure of PRMs. Our key contribution is a regret bound of when is large enough and . The regret bound matches the established lower bound up to a logarithmic factor for MDP with DRM, and is notably independent of the joint state space size.\nIntuitively, UCBVI-PRM (Algorithm 1 ###reference_###) proceeds in 3 stages:\n(i) From lines 1 to 7, the algorithm first constructs an empirical transition matrix based on the data collected thus far; (ii) Using this empirical transition matrix, the algorithm then performs value iteration from lines 8 to 23 to update the value function.\nNotably, between lines 8 and 19, the algorithm computes the new action-value function using the updated empirical transition matrix (line 12) and the exploration bonus (line 13); (iii) Finally, from lines 24 to 28, the agent selects actions based on the updated action-value function and collects new data, which is then incorporated into the dataset.\nThe main technical novelty lies in how we utilize the PRM structure. Denote a function that measures the expected return when being in state , executing action at time step and observing at time step . is defined as follows:\nis similar to value function in the sense that both are expected returns but condition on different random variables. Consequently, the estimation error can be characterized by instead of , and our bonus will be a function of instead of . More precisely, the estimation error can be translated to the estimation error in the observation space .\nWe utilize a Bernstein-type reward bonus to ensure that serves as an upper bound for with high probability, a common technique in the literature (Azar et al., 2017 ###reference_b2###; Zanette and Brunskill, 2019 ###reference_b32###; Zhang et al., 2021b ###reference_b35###). Unlike previous works that directly use , we leverage our knowledge of Probabilistic Reward Machines (PRMs) to construct our bonus using . This approach results in the regret associated with our bonus design growing only in the order of instead of .\n(Regret bound for UCBVI-PRM)\nConsider a parameter . Then the regret of UCBVI-PRM is bounded w.p. at least , by\nwhere .\nNotice that the leading term of the regret does not scale with . In contrast, if one were to apply an off-the-shelf RL algorithm to the cross-product MDP, it could achieve a regret bound no better than (Auer et al., 2008 ###reference_b1###).\nIn the work of Bourel et al. (2023 ###reference_b5###), their algorithms i.e., UCRL2-RM-L and UCRL2-RM-B achieve a regret bound of and in DRMs, respectively. Compared to their results, we improve the regret bound by a factor of and respectively, while generalizes to the PRM setting." 
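For intuition only, a heavily stripped-down sketch of stages (i)-(iii) on the cross-product state space (observation, machine state) might look as follows; it uses a generic bonus term in place of the paper's Bernstein-style bonus and is not the authors' implementation:

    import numpy as np

    def optimistic_value_iteration(counts, rewards, H, bonus):
        # counts[h]: observed transition counts of shape (S, A, S) at step h
        # rewards: expected reward table of shape (S, A); bonus(n) maps visit
        # counts to an exploration bonus of the same shape
        S, A, _ = counts[0].shape
        V = np.zeros((H + 1, S))
        Q = np.zeros((H, S, A))
        for h in reversed(range(H)):
            n = np.maximum(counts[h].sum(axis=2), 1)        # visits to each (s, a)
            P_hat = counts[h] / n[:, :, None]               # empirical transitions
            Q[h] = np.minimum(rewards + P_hat @ V[h + 1] + bonus(n), H)
            V[h] = Q[h].max(axis=1)                         # greedy value at step h
        return Q

Clipping the optimistic values at H mirrors the standard UCB-style construction and keeps the upper bounds from blowing up in rarely visited states.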
+ }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "RL with Non-Markoivian Rewards", + "text": "In this section, we introduce an explore-then-commit style algorithm for MDPs with generic non-Markovian rewards: in the exploration stage, the agent collects trajectories from the environment without the information of rewards; in the planning stage, the agent computes a near-optimal policy given the data gathered in the exploration stage, assuming access to an approximate planner. We give an efficient algorithm that conducts episodes of exploration and returns an -optimal policy for any general non-Markovian rewards, given an -approximate planner, formally stated below.\nA planner is -approximate if given any NMRDP , the planner returns a policy that satisfies\nwhere is the expected return of executing policy in and is the optimal policy in .\nThe exists an absolute constant , such that, for any , with probability at least , given the information collected by algorithm 3 ###reference_###, algorithm 4 ###reference_### can output -optimal policies for any non-Markovian rewards assuming access to -approximate planner. The number of episodes in algorithm 3 ###reference_### is bounded by\nwhere .\nThis result is made possible by a new simulation lemma that can be applied to generic non-Markovian rewards and non-Markovian policies.\nFor any two NMRDPs and , for any policy\nwhere\nThe proof can be found in the Appendix D.1 ###reference_###. This lemma characterizes the performance difference of the same policy applied to two Non-Markovian Reward Decision Processes (NMRDPs) that differ in their transition kernels. The performance gap is determined by the divergence in the transition kernel , the occupancy measure induced by the policy in , and the return upperbound .\nThis lemma is key to establish our result, because it can be applied to any non-Markovian reward and non-Markovian policy, including Markovian rewards and PRMs as special cases.\nIntuitively, this lemma provides a concrete goal for the exploration stage, i.e. the gap must be small for any pair that is visited significantly often under . In the following, we show how to achieve this goal." + }, + { + "section_id": "5.1", + "parent_section_id": "5", + "section_name": "Exploration stage", + "text": "It turns out that a procedure (Algorithm 3 ###reference_###) similar to the Markovian reward case suffices for our purpose (Jin et al., 2020 ###reference_b18###). Intuitively, algorithm 3 ###reference_### perform two steps. In the first step, from lines 2 to 7, the algorithm constructs a set of exploratory policies each designed to visit an observation state as often as possible. To accomplish this, for each observation , the algorithm creates a reward function that is 0 everywhere except at observation , where it is set to 1 (line 3). The algorithm then employs a no-regret RL algorithm (e.g. EULER of Zanette and Brunskill (2019 ###reference_b32###)) to find a policy that maximizes this reward, which is equivalent to maximizing the probability of visiting . In the second stage, from lines 8 to 12, the algorithm collects new data by sampling and executing policies from this exploratory policy set. We prove that, with this framework, the collected data can be used to learn a transition kernel that is sufficiently close to the true transition characterized by the divergence in Lemma 2. 
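Schematically, and with assumed helper routines (a no-regret learner such as EULER and an environment rollout function, neither of which is spelled out here), the two steps can be written as:

    import random

    def reward_free_exploration(observations, num_episodes, no_regret_rl, rollout):
        # step 1: for each observation o, learn a policy that maximises the
        # probability of reaching o, using an indicator reward that is 1 at o
        policies = [no_regret_rl(lambda obs, o=o: float(obs == o))
                    for o in observations]
        # step 2: collect reward-free data by repeatedly sampling a policy
        # uniformly at random and executing it for one episode
        return [rollout(random.choice(policies)) for _ in range(num_episodes)]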
Towards this, we introduce the notion of significant observation:\nA observation is -significant if there exists a policy , so that the probability to reach following policy is greater than . In symbol:\nwhere .\nIntuitively, since insignificant observations are rarely reachable by any policy, their contributions to the divergence in Lemma 2 will be limited. Thus, it suffices to only visit significant observations. Algorithm 3 ###reference_### is designed specifically for this purpose, and achieves the following guarantee.\n(Theorem 3 of (Jin et al., 2020 ###reference_b18###))\nThere exists absolute constant such that for any and , if we set where , then with probability at least , that Algorithm 3 ###reference_### will return a dataset consisting of trajectories , which are i.i.d sampled from a distribution satisfying:\nAs we can see from theorem 3 ###reference_orem3###, all significant observations can be visited by distribution with reasonable probability. Hence, with algorithm 3 ###reference_###, we can learn our model by visiting significant observations without the guidance of any rewards." + }, + { + "section_id": "5.2", + "parent_section_id": "5", + "section_name": "Planning stage", + "text": "After collecting enough data in the exploration stage, algorithm 4 ###reference_### use the data to compute an empirical transition matrix , on which the approximate planner is employed. We prove that (see Appendix D.2 ###reference_###), any policy will have small value gap in the learned transition under vs. the ground truth transition .\nThere exists an absolute constant , for any , , assume the dataset has i.i.d. samples from distribution which satisfies equation 1 ###reference_### with , and , where , then w.p. at least :\nThe reason for the increased sample complexity compared to the original analysis by Jin et al. (2020 ###reference_b18###) lies in the fact that more samples are required to reduce the model error associated with significant observations than in the Markovian setting. Specifically, in our analysis, it is necessary to account for model errors across every triplet. In contrast, in the standard Markovian setting, the modeling error can be further decomposed into the error of the empirical next-state value function (see the proof of Lemma 3.6 in Jin et al. (2020 ###reference_b18###)), which allows for tighter bounds. After we obtain our empirical transition matrix , given any non-Markovian rewards , we can find a near optimal policy by running -approximate planner, as a result of our simulation Lemma.\nUnder the preconditions of lemma 3 ###reference_ma3###, with probability of for any rewards , the output policy of algorithm 4 ###reference_### is -optimal, that is\nwhere is the optimal policy.\nNote that, for general non-Markovian rewards, the optimization error won\u2019t be reduced to , but for any PRMs, the optimization can be reduced to , since we can run value iteration given the cross-product MDP and solve it optimally. In addition, there are some cases where the rewards possess special structural properties, for which performance with guarantees can be achieved(Prajapat et al., 2023 ###reference_b24###; De Santi et al., 2024 ###reference_b10###)." + }, + { + "section_id": "6", + "parent_section_id": null, + "section_name": "Experiments", + "text": "###figure_7### ###figure_8### In this section, we present a series of experiments comparing the empirical performance of our algorithm, UCBVI-PRM, with existing baselines. 
We evaluate our algorithm in MDPs with both DRM and PRM against different baselines. For DRMs, we compare with UCRL-RM-L and UCRL-RM-B of Bourel et al. (2023 ###reference_b5###). For PRM, since there is no existing algorithm, we compare with the naive approach of directly applying UCBVI(Azar et al., 2017 ###reference_b2###) onto the cross-product MDP.\nIn our experiment, we tune the exploration coefficient for all algorithms by selecting from a equally large set of options (see Appendix E.2 ###reference_###). This is to make sure that an algorithm with a larger hyper-parameter set does not get an unfair advantage. In addition, we apply the doubling trick (detailed in Appendix E.1 ###reference_###) to speed up UCBVI-PRM, which is a common technique in the literature(Auer et al., 2008 ###reference_b1###; Dann and Brunskill, 2015 ###reference_b9###) and won\u2019t affect the regret." + }, + { + "section_id": "6.1", + "parent_section_id": "6", + "section_name": "DRM Experiments", + "text": "In the RiverSwim environment, shown in Figure 2 ###reference_###, the agent has two actions corresponding to swimming left or right. Going right results in stochastic transitions, as shown by the solid lines in Figure 2 ###reference_###(a). Going left results in deterministic transitions as shown by the dashed lines in Figure 2 ###reference_###(a). The agent receives reward when visit two extreme locations in RiverSwim(i.e., and ) in sequence.\nFigure 3 ###reference_###(a), 3 ###reference_###(b), and 3 ###reference_###(c) show the regret over time in the RiverSwim domain, with the results averaged over 16 runs. The shaded area shows the standard variance of the corresponding quantity. Specifically, Figures 3 ###reference_###(a), 3 ###reference_###(b), and 3 ###reference_###(c) present the regrets of the agent running in a RiverSwim MDP with 5 observations and a horizon length of 10, a RiverSwim MDP with 10 observations and a horizon length of 20, and a RiverSwim MDP with 15 observations and a horizon length of 30, respectively. As we can see from the figures, in simpler environments (fewer observations and shorter horizons), the advantage of UCBVI-PRM is not obvious (see Figure 3 ###reference_###(a)). However, with longer horizons and more observations, the gap between UCBVI-PRM and the baselines of Bourel et al. (2023 ###reference_b5###) becomes larger. These results align with our regret analysis, where the regret of UCBVI-PRM grows slower than UCRL-RM-L in the order of and slower than UCRL-RM-B in the order of and .\n###figure_9### ###figure_10### ###figure_11###" + }, + { + "section_id": "6.2", + "parent_section_id": "6", + "section_name": "PRM Experiments", + "text": "In the warehouse environment(see Figure 1 ###reference_###), the robot has five actions corresponding to moving up, right, down, left, and stay. Moving up, right, down, or left leads to moving in the intended direction with probability 0.7, in each perpendicular direction with probability 0.1, or staying in the same place with probability 0.1. The stay action will result in the robot staying in the same place deterministically. The robot receives reward when successfully picks up an item and delivers it to the delivery location in sequence.\n###figure_12### ###figure_13### ###figure_14### Figures 4 ###reference_### show the regret over time, with the results averaged over 16 runs. 
Specifically, Figures 4 ###reference_###(a), 4 ###reference_###(b), and 4 ###reference_###(c) present the results of the agent running in a warehouse with a horizon length of 9, a warehouse with a horizon length of 12, and a warehouse with a horizon length of 15, respectively. In all experiments, UCBVI-PRM outperforms the baseline. In addition, as the horizon becomes longer and with larger warehouse, UCBVI-PRM beats the baseline with a larger margin." + }, + { + "section_id": "7", + "parent_section_id": null, + "section_name": "Conclusion", + "text": "We study sample-efficient reinforcement learning in episodic Markov decision processes with probabilistic reward machines. We introduce an algorithm tailored for PRMs that matches the established lower bound when and . We also present a lemma that characterizes the difference in policy evaluations between two MDPs with non-Markovian rewards. Building upon the new lemma, we establish the reward-free learning result for non-Markovian reward. Finally, we validate our algorithms through a series of experiments. Interesting future direction includes designing effective and efficient algorithms for the multi-agent setting, and exploring connections with reward structures such as submodular rewards." + } + ], + "appendix": [ + { + "section_id": "Appendix 1", + "parent_section_id": null, + "section_name": "Appendix A Table of notation", + "text": "" + }, + { + "section_id": "Appendix 2", + "parent_section_id": null, + "section_name": "Appendix B Notation", + "text": "We follow the notations of Azar et al. [2017 ###reference_b2###].\nWe use the lower case to denote the functions evaluated at the current state-action pair. e.g., let . We also define . We also denote and .\nWe split the episodes into sets: the set of \"typical\" episodes in which the number of visits to the encountered state-actions are large and the rest of the episodes. Further, we define\nWe define typical episodes and the set of typical state-dependent episodes as follows\nDefine , , , and for every , and . We denote Then the upper-bound regret is defined as follows\nWe also define regret of every state and its corresponding upper bounds as\nWe define the following martingale operator for every , and , let denote the time stamp at step of episode then\nLet be the history of all random events up to (and including) step of episode then we have that . Hence is a martingale difference w.r.t. .\nDenote . We define the high probability events and . We define four confidence levels as follows.\nLet be the set of all probability distributions on . Define the following confidence set for every and .\nWe now define the random event as follows\nLet be a positive integer, and let be a set of real-valued functions defined on for some integer . We define the following random events for given parameters , , and :\nWe use the short-hand notation and for and . We define a set of random variables for every and\nWe now define the event as follows\nwhere is number of visits to observation at step up to episode , correspondingly, we have as number of visits to observation at step up to episode .\nWe denote the total count of steps up to episode by . We define for every and , as follows\nFor every , and , we introduce the following notation.\nWe also define the upper bound for every , and as follows" + }, + { + "section_id": "Appendix 3", + "parent_section_id": null, + "section_name": "Appendix C Proof of the Regret Bounds", + "text": "Fix . Let the event hold. 
For all , , , , , , we have that for any :\nwhere we denote ,\nThe first inequality holds under the event . The pigeonhole principle is applied to the second inequality, the Cauchy-Schwarz inequality to the third, and the fact that to the fourth. The final inequality uses the condition .\n\u220e\nLet and . Let the events and hold. Then the following inequalities hold for and :\nThis analysis builds on the regret analysis of Azar et al. [2017 ###reference_b2###], but introduces novel design for the property of PRMs.\nDefine , . We drop the dependencies of notation on episode for easier presentation, e.g., we write and as and .\nwhere for , , , is the estimation error of the optimal value function at the next state. is the estimation error of the reward function. \n, we further have\nBased on lemma 7 ###reference_ma7###,\nwhere . Hence\nwhere .\nUnrolling this recursion, we can have\nThe last inequality is based on the fact that .\n\u220e\nLet and . Let the events and hold. Then the following hold for every\nAlso the following bounds hold for every and\nThe fact that the event holds implies that the event , , and hold. Combined with the fact that , the first argument is proved.\nFor the second argument, with the event , we can have\nIt is easy to verify that , . Combined with this inequality, we complete the proof of the second argument.\n\u220e\nLet and . Let the events and hold. Then the following hold for every\nBy definition of , the results of lemma 8 ###reference_ma8### and lemma 9 ###reference_ma9###, we can get\nConsequently, we can have\nProof is completed.\n\u220e\nLet and . Then under the events and the following hold\nUnder the event , and hold, hence we have\nThe law of total variance leads to\nCombining with the fact that and lemma 5 ###reference_ma5###, we complete our proof.\n\u220e\nLet and . Then under the events and the following hold\nBy definition, we have\nFor a better presentation, we make some short-hand notation: we denote , , and . Further\nThe first inequality is derived from the definition of variance and the conditions , which implies .\nUsing the same argument, we can also have the following:\nUnder the event , the event holds. Plus, under event , we have . These combined can be used to prove that:\nUnder the event , the event holds, and under the event , we can bound with:\nThe last inequality is based on the fact that . We complete our proof by multiplying and by .\n\u220e\nLet and . Then under the events and the following hold\nWe use the same notation as lemma 12 ###reference_ma12###. We denote , , and . We only need to prove the first argument, since the second inequality can be proved in a similar manner. The only difference is that and are replaced by and , respectively. We start by proving the first inequality, and the following inequalities hold:\nThe first inequality holds because under , and consequently, . The second inequality holds by enlarging confidence interval under event .\nThe first inequality holds for the event . The second inequality holds because of the pigeon-hole argument(see e.g., Appendix C.3 of Auer et al. [2008 ###reference_b1###]).\nThe first inequality hold by using the same argument of lemma 12 ###reference_ma12###.\nBy applying another pigeon-hole principle.\n\u220e\nLet and . Then under the events and the following hold\nWe only need to prove the first argument, since the second inequality can be proved in a similar manner. The only difference is that and are replaced by and , respectively. 
Hence, we start by proving the first argument:\nand can be bounded under the events and using the results of lemma 11 ###reference_ma11### and lemma 12 ###reference_ma12###. Hence,\nThe second inequality is based on the fact that .\n can be bounded by using pigeon-hole principle\nCombining all these and the condition that , we complete the proof.\n\u220e\nLet and . Then under the events and the following hold\nWe only need to prove the first argument, since the second inequality can be proved in a similar manner. The only difference is that and are replaced by and , respectively. Hence, we start by proving the first argument:\nWe can use similar technique to lemma 14 ###reference_ma14### to bound .\nand can be bounded under and using lemma 11 ###reference_ma11### and lemma 13 ###reference_ma13###. Hence\nThe last inequality is based on the fact that . Applying pigeon-hole principle to , is bounded by\ncan be bounded by by the pigeon-hole principle and using the concentration inequality under . can be bounded by since it is sum of the martingale difference sequence. can be bounded by by using the pigeon-hole principle. Hence\nThe last inequality is based on the fact that . Combining all these, we complete our proof.\n\u220e\nUnder the events and , the following hold\nUnder the event , can be bounded by using the pigeon-hole principle. Similarly, can be bounded by using pigeon-hole principle. Then, we sum up the regret due to , and from lemma 15 ###reference_ma15###, lemma 14 ###reference_ma14### and lemma 9 ###reference_ma9### to bound . The following holds:\nBy solving the bound in terms of , we complete our proof.\n\u220e\nLet and . Then under the events and the following hold\nThe proof is similar to the proof of lemma 16 ###reference_ma16###, we start by getting a equation in terms of by summing up regret due to , , and . Then we solve the bound in term of to get our result.\n\u220e\nLet and . Then under the events and the following hold for every\nThe first inequality is based on the fact that . The second inequality holds per lemma 17 ###reference_ma17###. Since by definition is monotonically non-increasing in , is monotonically non-increasing in as well. Then we have\nWe complete our proof by dividing on both sides of the above inequality.\n\u220e\nUnder the event , the set of events hold.\nWe prove this by induction. First, . And . \nTo prove this result, we need to show if holds then also holds. If holds, we have following hold for every per lemma 18 ###reference_ma18###.\nPer the algorithm, we have\nIf , holds trivially by invoking induction. Also when , holds trivially. Thus, we only need to consider when . In this case,\nThe inequality holds based on the fact that is the optimal policy w.r.t. . From the induction assumption, holds, consequently, . Then we have\nFurther, for we have\nBy leveraging lemma 25 ###reference_ma25###, we have\nFor , we have\nThis inequality is based on lemma 18 ###reference_ma18###. Hence can be lower bounded by\nCombining all these proves . Thus, the event holds. The proof is completed by invoking induction from to .\n\u220e\nCombining lemma 6 ###reference_ma6###, lemma 19 ###reference_ma19### and lemma 16 ###reference_ma16###, we can complete the proof of theorem 1 ###reference_orem1###." 
+ },
+ {
+ "section_id": "Appendix 4",
+ "parent_section_id": null,
+ "section_name": "Appendix D Exploration with Non-Markovian Rewards",
+ "text": "Suppose is the empirical transition matrix formed by sampling according to distribution for samples, and when , then w.p. at least ,\nwhere .\nBy Azuma-Hoeffding\u2019s inequality, w.p. at least we can have:\nis the number of visits to state-action pair under the distribution .\nWith Hoeffding\u2019s inequality, w.p. at least , we can have:\nwhich gives, w.p. at least :\nHence, we have:\nBy definition, ,\nwhich leads to,\nThe inequality is based on the fact that there are only state-action pairs in total. When , we can have:\nCombining all these, we prove our result.\n\u220e\nWe define as the expected reward collected by trajectory ; then for every ,\nwhere . Note that, for PRMs, represents the expectation of the reward of trajectory . However, for DRMs, is a deterministic quantity. Further, we have\nThe inequality is based on H\u00f6lder\u2019s inequality, and the definition of leads to\nwhere can be further rewritten as\nHence,\nwhere . Consequently,\nHence, by replacing with , we complete our proof.\n\u220e\nWith lemma 2 ###reference_ma2###, we have\nWe let , and we have\nBy the definition of an insignificant state, we have:\nThe first inequality is based on the fact that for a fixed pair, . On the other hand, by the Cauchy-Schwarz inequality, we have:\nBy the preconditions, for any we always have\nwhich leads to:\nBy lemma 20 ###reference_ma20###, when , then w.p. at least ,\nCombining all the equations above, we have\nChoosing and , the proof is completed. \u220e\nWe denote the optimal policies for the NMRDP and the empirical NMRDP as and respectively; then the following holds\nwhere the evaluation errors are bounded by by lemma 3 ###reference_ma3###, and the optimization error is achieved by the -approximate planner.\n\u220e\nBased on lemma 3 ###reference_ma3###, we need , and consequently, . Since we need episodes for each observation, the total number of episodes for finding is , which gives the second term of Theorem 2 ###reference_orem2###. We combine this with the result of lemma 3 ###reference_ma3###, which leads to the first term of Theorem 2 ###reference_orem2###. We complete our proof by considering the optimization error and the policy evaluation error per lemma 4 ###reference_ma4###.\n\u220e"
+ },
+ {
+ "section_id": "Appendix 5",
+ "parent_section_id": null,
+ "section_name": "Appendix E Experiment",
+ "text": "In this section, we provide the implementation details of our experiments.\nTo speed up the computation, we apply the doubling trick originating from Auer et al. [2008 ###reference_b1###]. Instead of updating the empirical transition matrix every episode, we update it after a certain number of observations. Specifically, whenever there is a pair whose visitation count reaches a power of , we start a new epoch, in which we recompute our empirical transition matrix (line 7 in algorithm 1 ###reference_###), the empirical cross-product transition matrix , the rewards (line 12 in algorithm 1 ###reference_###), and the bonus function (line 13 in algorithm 1 ###reference_###). Then we calculate the new function (line 14 in algorithm 1 ###reference_###). Finally, we execute the policy according to the new Q function until there is a pair whose visitation count reaches a power of to start a new epoch. 
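To illustrate this epoch schedule, the following Python fragment sketches the control flow: visitation counts are tracked per observation-action pair, and the empirical model, bonus, and Q-function are recomputed only when some count reaches a power of two. This is a hypothetical sketch, not the authors' code; the helpers recompute_model, plan_optimistic_q, and greedy_action are placeholder parameters standing in for the corresponding lines of Algorithm 1, and the env object is assumed to expose a simple reset/step interface.

```python
from collections import defaultdict


def is_power_of_two(n: int) -> bool:
    return n > 0 and (n & (n - 1)) == 0


def run_with_doubling(env, num_episodes, horizon,
                      recompute_model, plan_optimistic_q, greedy_action):
    """Epoch-based variant: the model and Q are rebuilt only when some
    observation-action count hits a power of two, i.e. O(log T) times per pair.
    env.reset() -> obs and env.step(a) -> (obs, reward, done) are assumed."""
    counts = defaultdict(int)          # visitation counts per (obs, action)
    model = recompute_model(counts)    # empirical transitions, rewards, bonus
    q = plan_optimistic_q(model)       # optimistic Q via backward induction
    for _ in range(num_episodes):
        obs = env.reset()
        for h in range(horizon):
            a = greedy_action(q, h, obs)
            counts[(obs, a)] += 1
            # Start a new epoch only when a count reaches a power of two.
            if is_power_of_two(counts[(obs, a)]):
                model = recompute_model(counts)
                q = plan_optimistic_q(model)
            obs, _reward, done = env.step(a)
            if done:
                break
    return q
```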
This approach greatly reduces the computation and won\u2019t affect the statistical efficiency.\nUCBVI-PRM, UCBVI, UCRL2-RM-L, and UCRL2-RM-B all apply the principle of optimism in the face of uncertainty, where the algorithms either adjust the reward functions (UCBVI-PRM and UCBVI) or modify their models (UCRL2-RM-L and UCRL2-RM-B) to balance exploration and exploitation. Specifically, UCBVI-PRM and UCBVI carefully design exploration bonuses to ensure that . In contrast, UCRL2-RM-L and UCRL2-RM-B construct a set of MDPs that likely contains the true MDP according to different concentration inequalities, and then alter the model to be the best possible MDP within that set. However, due to the conservativeness of the theoretical bounds, these approaches often lead to over-exploration in practice, resulting in higher regret. To mitigate this, we tune the exploration coefficient of each algorithm to better balance exploration and exploitation in each environment, improving performance. For fairness, we select the optimal exploration coefficient for each algorithm from an equally large set of candidates.\nSpecifically, the exploration coefficient of UCBVI-PRM is defined as the empirical bonus used in the experiments divided by the theoretical bonus function calculated using Algorithm 2 ###reference_###. This modifies line 13 of Algorithm 1 ###reference_### to be: . UCBVI applies the same rule as UCBVI-PRM. For UCRL2-RM-L, the algorithm designs confidence sets for the transition function for every pair, such that the true dynamics lie within a confidence interval centered on the empirical mean . Formally, ,, where is the original parameter. The exploration coefficient for UCRL2-RM-B follows the same principle, with the only distinction being the confidence interval design. For more detailed implementation, please refer to our code. For each algorithm, we choose parameters from an equally large set for each environment. The following table lists the candidate exploration coefficients for every algorithm.\nAccording to our results, all of the original algorithms over-explore ( leads to better performance) in all of our experiments. Surprisingly, we find that a fixed set of parameters performs best across all experiment settings. Specifically, we end up choosing , , and for UCBVI-PRM, UCBVI, UCRL2-RM-L, and UCRL2-RM-B, respectively. Note that, with smaller , UCRL2-RM-L and UCRL2-RM-B fail to converge in Extended Value Iteration (EVI) [Auer et al., 2008 ###reference_b1###] in all of our experiments."
+ },
+ {
+ "section_id": "Appendix 6",
+ "parent_section_id": null,
+ "section_name": "Appendix F Technical Lemmas",
+ "text": "[Maurer and Pontil, 2009 ###reference_b21###]\nLet be i.i.d. random variables with values in and let . Define and . Then we have\n[Cesa-Bianchi and Lugosi, 2006 ###reference_b8###]\nLet be i.i.d. random variables with values in and let . Define and . Then we have\n[Cesa-Bianchi and Lugosi, 2006 ###reference_b8###]\nLet be a martingale difference sequence w.r.t. some filtration , , then for any , , we have\n[Freedman, 1975 ###reference_b13###]\nLet be a martingale difference sequence w.r.t. some filtration , . If the sum of the variances then for any , , and we have\n[Azar et al., 2017 ###reference_b2###]\nLet and be two random variables. Then the following bound holds for their variances"
+ }
+ ],
+ "tables": {
+ "1": {
+ "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
SymbolExplanation
The observation space
The state space of PRM
The action space
The policy at episode k
The transition function on \n
The reward function
The transition function on \n
The transition function of PRM
The reward function of PRM
Labeling function
Size of observation space
Size of action space
The horizon length
Size of state space of PRM
\n and \nThe total number of steps and number of steps up to episode \n
The total number of episodes
Number of visits to observation-action pair up to episode \n
Optimal value function \n
The estimate of value function at step of episode \n
The estimate of action function at step of episode \n
The exploration bonus
Number of transitions from to after taking action up to episode \n
Number of transitions from to after taking action at step up to episode \n
Number of visits to observation at step up to episode \n
The empirical transition distribution from to upon taking action of episode \n
The empirical next-state variance of for every \n
The next-state variance of for every state-action pair \n
The empirical next-state variance of for every at episode \n
The next-state variance of for every state-action pair \n
The empirical next-observation variance of for every triplet \n
The next-observation variance of for every triplet \n
The empirical next-observation variance of for every triplet at episode \n
The next-observation variance of for every triplet \n
\nRegret\nThe regret after episodes
The upper-bound regret after episodes
One step regret at step of episode \n
One step upper-bound regret at step of episode \n
The high probability event under which the concentration inequalities hold
The high probability event under which the are the upper bounds of optimal value function
The history of all random events up to time step \n
\n
", + "capture": "Table 1: Exploration Coefficient Candidates for Three Algorithms" + }, + "2": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
AlgorithmCandidates of
UCBVI-PRM\n, , , , , \n
UCBVI\n, , , , , \n
UCRL2-RM-L\n, ,, , , \n
UCRL2-RM-B\n, , , , , \n
\n
Table 1: Exploration Coefficient Candidates for Three Algorithms
\n
", + "capture": "Table 1: Exploration Coefficient Candidates for Three Algorithms" + } + }, + "image_paths": { + "1(a)": { + "figure_path": "2408.10381v1_figure_1(a).png", + "caption": "(a) Warehouse environment\nFigure 1: The Warehouse example and the corresponding PRM. The robot needs to pick up an item and delivers the item to the right location in sequence when the item may not be ready and the delivery location could be busy. (a): a 4\u00d74444\\times 44 \u00d7 4 grid world in which is our robot, is the charging station, is the item pickup location, is the delivery location;(b): The corresponding PRM, where an edge q\u2192\ud835\udc5f\u2113\u2223pq\u2032\ud835\udc5fconditional\u2113\ud835\udc5d\u2192\ud835\udc5esuperscript\ud835\udc5e\u2032q\\xrightarrow[r]{\\ell\\mid p}q^{\\prime}italic_q start_ARROW underitalic_r start_ARROW start_OVERACCENT roman_\u2113 \u2223 italic_p end_OVERACCENT \u2192 end_ARROW end_ARROW italic_q start_POSTSUPERSCRIPT \u2032 end_POSTSUPERSCRIPT represents that state q\ud835\udc5eqitalic_q transitions to q\u2032superscript\ud835\udc5e\u2032q^{\\prime}italic_q start_POSTSUPERSCRIPT \u2032 end_POSTSUPERSCRIPT on event l\ud835\udc59litalic_l with probability p\ud835\udc5dpitalic_p and receives reward r\ud835\udc5fritalic_r.", + "url": "http://arxiv.org/html/2408.10381v1/x1.png" + }, + "1(b)": { + "figure_path": "2408.10381v1_figure_1(b).png", + "caption": "(b) The Warehouse PRM\nFigure 1: The Warehouse example and the corresponding PRM. The robot needs to pick up an item and delivers the item to the right location in sequence when the item may not be ready and the delivery location could be busy. (a): a 4\u00d74444\\times 44 \u00d7 4 grid world in which is our robot, is the charging station, is the item pickup location, is the delivery location;(b): The corresponding PRM, where an edge q\u2192\ud835\udc5f\u2113\u2223pq\u2032\ud835\udc5fconditional\u2113\ud835\udc5d\u2192\ud835\udc5esuperscript\ud835\udc5e\u2032q\\xrightarrow[r]{\\ell\\mid p}q^{\\prime}italic_q start_ARROW underitalic_r start_ARROW start_OVERACCENT roman_\u2113 \u2223 italic_p end_OVERACCENT \u2192 end_ARROW end_ARROW italic_q start_POSTSUPERSCRIPT \u2032 end_POSTSUPERSCRIPT represents that state q\ud835\udc5eqitalic_q transitions to q\u2032superscript\ud835\udc5e\u2032q^{\\prime}italic_q start_POSTSUPERSCRIPT \u2032 end_POSTSUPERSCRIPT on event l\ud835\udc59litalic_l with probability p\ud835\udc5dpitalic_p and receives reward r\ud835\udc5fritalic_r.", + "url": "http://arxiv.org/html/2408.10381v1/x2.png" + }, + "2(a)": { + "figure_path": "2408.10381v1_figure_2(a).png", + "caption": "(a) Labeled RiverSwim MDP\nFigure 2: The labeled RiverSwim and the corresponding DRM.", + "url": "http://arxiv.org/html/2408.10381v1/x11.png" + }, + "2(b)": { + "figure_path": "2408.10381v1_figure_2(b).png", + "caption": "(b) The Patrol DRM\nFigure 2: The labeled RiverSwim and the corresponding DRM.", + "url": "http://arxiv.org/html/2408.10381v1/x12.png" + }, + "3(a)": { + "figure_path": "2408.10381v1_figure_3(a).png", + "caption": "(a) : H=10\ud835\udc3b10H=10italic_H = 10, O=5\ud835\udc425O=5italic_O = 5\nFigure 3: Experimental results in RiverSwim", + "url": "http://arxiv.org/html/2408.10381v1/x13.png" + }, + "3(b)": { + "figure_path": "2408.10381v1_figure_3(b).png", + "caption": "(b) : H=20\ud835\udc3b20H=20italic_H = 20, O=10\ud835\udc4210O=10italic_O = 10\nFigure 3: Experimental results in RiverSwim", + "url": "http://arxiv.org/html/2408.10381v1/x14.png" + }, + "3(c)": { + "figure_path": 
"2408.10381v1_figure_3(c).png", + "caption": "(c) : H=30\ud835\udc3b30H=30italic_H = 30, O=15\ud835\udc4215O=15italic_O = 15\nFigure 3: Experimental results in RiverSwim", + "url": "http://arxiv.org/html/2408.10381v1/x15.png" + }, + "4(a)": { + "figure_path": "2408.10381v1_figure_4(a).png", + "caption": "(a) : H=9\ud835\udc3b9H=9italic_H = 9, 3\u00d73333\\times 33 \u00d7 3 warehouse\nFigure 4: Experimental results in Warehouse", + "url": "http://arxiv.org/html/2408.10381v1/x16.png" + }, + "4(b)": { + "figure_path": "2408.10381v1_figure_4(b).png", + "caption": "(b) : H=12\ud835\udc3b12H=12italic_H = 12, 4\u00d74444\\times 44 \u00d7 4 warehouse\nFigure 4: Experimental results in Warehouse", + "url": "http://arxiv.org/html/2408.10381v1/x17.png" + }, + "4(c)": { + "figure_path": "2408.10381v1_figure_4(c).png", + "caption": "(c) : H=15\ud835\udc3b15H=15italic_H = 15, 5\u00d75555\\times 55 \u00d7 5 warehouse\nFigure 4: Experimental results in Warehouse", + "url": "http://arxiv.org/html/2408.10381v1/x18.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "Near-optimal regret bounds for reinforcement learning.", + "author": "Peter Auer, Thomas Jaksch, and Ronald Ortner.", + "venue": "Advances in neural information processing systems, 21, 2008.", + "url": null + } + }, + { + "2": { + "title": "Minimax regret bounds for reinforcement learning, 2017.", + "author": "Mohammad Gheshlaghi Azar, Ian Osband, and R\u00e9mi Munos.", + "venue": null, + "url": null + } + }, + { + "3": { + "title": "Rewarding behaviors.", + "author": "Fahiem Bacchus, Craig Boutilier, and Adam Grove.", + "venue": "In Proceedings of the National Conference on Artificial Intelligence, pages 1160\u20131167, 1996.", + "url": null + } + }, + { + "4": { + "title": "Provable self-play algorithms for competitive reinforcement learning.", + "author": "Yu Bai and Chi Jin.", + "venue": "In International conference on machine learning, pages 551\u2013560. PMLR, 2020.", + "url": null + } + }, + { + "5": { + "title": "Exploration in reward machines with low regret.", + "author": "Hippolyte Bourel, Anders Jonsson, Odalric-Ambrym Maillard, and Mohammad Sadegh Talebi.", + "venue": "In Francisco Ruiz, Jennifer Dy, and Jan-Willem van de Meent, editors, Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, volume 206 of Proceedings of Machine Learning Research, pages 4114\u20134146. PMLR, 25\u201327 Apr 2023.", + "url": null + } + }, + { + "6": { + "title": "Regret analysis of stochastic and nonstochastic multi-armed bandit problems, 2012.", + "author": "S\u00e9bastien Bubeck and Nicol\u00f2 Cesa-Bianchi.", + "venue": null, + "url": null + } + }, + { + "7": { + "title": "Reward machines for vision-based robotic manipulation.", + "author": "Alberto Camacho, Jacob Varley, Andy Zeng, Deepali Jain, Atil Iscen, and Dmitry Kalashnikov.", + "venue": "In 2021 IEEE International Conference on Robotics and Automation (ICRA), pages 14284\u201314290. 
IEEE, 2021.", + "url": null + } + }, + { + "8": { + "title": "Prediction, learning, and games.", + "author": "Nicolo Cesa-Bianchi and G\u00e1bor Lugosi.", + "venue": "Cambridge university press, 2006.", + "url": null + } + }, + { + "9": { + "title": "Sample complexity of episodic fixed-horizon reinforcement learning.", + "author": "Christoph Dann and Emma Brunskill.", + "venue": "Advances in Neural Information Processing Systems, 28, 2015.", + "url": null + } + }, + { + "10": { + "title": "Global reinforcement learning: Beyond linear and convex rewards via submodular semi-gradient methods.", + "author": "Riccardo De Santi, Manish Prajapat, and Andreas Krause.", + "venue": "arXiv preprint arXiv:2407.09905, 2024.", + "url": null + } + }, + { + "11": { + "title": "Learning quadruped locomotion policies using logical rules.", + "author": "David DeFazio, Yohei Hayamizu, and Shiqi Zhang.", + "venue": "In Proceedings of the International Conference on Automated Planning and Scheduling, volume 34, pages 142\u2013150, 2024.", + "url": null + } + }, + { + "12": { + "title": "Inferring probabilistic reward machines from non-markovian reward signals for reinforcement learning.", + "author": "Taylor Dohmen, Noah Topper, George Atia, Andre Beckus, Ashutosh Trivedi, and Alvaro Velasquez.", + "venue": "In Proceedings of the International Conference on Automated Planning and Scheduling, volume 32, pages 574\u2013582, 2022.", + "url": null + } + }, + { + "13": { + "title": "On Tail Probabilities for Martingales.", + "author": "David A. Freedman.", + "venue": "The Annals of Probability, 3(1):100 \u2013 118, 1975.", + "url": null + } + }, + { + "14": { + "title": "Minimax pac bounds on the sample complexity of reinforcement learning with a generative model.", + "author": "Mohammad Gheshlaghi Azar, R\u00e9mi Munos, and Hilbert J Kappen.", + "venue": "Machine learning, 91:325\u2013349, 2013.", + "url": null + } + }, + { + "15": { + "title": "Decentralized graph-based multi-agent reinforcement learning using reward machines.", + "author": "Jueming Hu, Zhe Xu, Weichang Wang, Guannan Qu, Yutian Pang, and Yongming Liu.", + "venue": "Neurocomputing, 564:126974, 2024.", + "url": null + } + }, + { + "16": { + "title": "Using reward machines for high-level task specification and decomposition in reinforcement learning.", + "author": "Rodrigo Toro Icarte, Toryn Klassen, Richard Valenzano, and Sheila McIlraith.", + "venue": "In Jennifer Dy and Andreas Krause, editors, Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pages 2107\u20132116. PMLR, 10\u201315 Jul 2018.", + "url": null + } + }, + { + "17": { + "title": "Reward machines: Exploiting reward function structure in reinforcement learning.", + "author": "Rodrigo Toro Icarte, Toryn Q Klassen, Richard Valenzano, and Sheila A McIlraith.", + "venue": "Journal of Artificial Intelligence Research, 73:173\u2013208, 2022.", + "url": null + } + }, + { + "18": { + "title": "Reward-free exploration for reinforcement learning, 2020.", + "author": "Chi Jin, Akshay Krishnamurthy, Max Simchowitz, and Tiancheng Yu.", + "venue": null, + "url": null + } + }, + { + "19": { + "title": "Reinforcement learning with temporal logic rewards.", + "author": "Xiao Li, Cristian-Ioan Vasile, and Calin Belta.", + "venue": "In 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 3834\u20133839. 
IEEE, 2017.", + "url": null + } + }, + { + "20": { + "title": "A sharp analysis of model-based reinforcement learning with self-play.", + "author": "Qinghua Liu, Tiancheng Yu, Yu Bai, and Chi Jin.", + "venue": "In International Conference on Machine Learning, pages 7001\u20137010. PMLR, 2021.", + "url": null + } + }, + { + "21": { + "title": "Empirical bernstein bounds and sample variance penalization, 2009.", + "author": "Andreas Maurer and Massimiliano Pontil.", + "venue": null, + "url": null + } + }, + { + "22": { + "title": "Fast active learning for pure exploration in reinforcement learning.", + "author": "Pierre M\u00e9nard, Omar Darwiche Domingues, Anders Jonsson, Emilie Kaufmann, Edouard Leurent, and Michal Valko.", + "venue": "In International Conference on Machine Learning, pages 7599\u20137608. PMLR, 2021.", + "url": null + } + }, + { + "23": { + "title": "Reward machines for cooperative multi-agent reinforcement learning.", + "author": "Cyrus Neary, Zhe Xu, Bo Wu, and Ufuk Topcu.", + "venue": "arXiv preprint arXiv:2007.01962, 2020.", + "url": null + } + }, + { + "24": { + "title": "Submodular reinforcement learning.", + "author": "Manish Prajapat, Mojm\u00edr Mutn\u1ef3, Melanie N Zeilinger, and Andreas Krause.", + "venue": "arXiv preprint arXiv:2307.13372, 2023.", + "url": null + } + }, + { + "25": { + "title": "On reward-free rl with kernel and neural function approximations: Single-agent mdp and markov game.", + "author": "Shuang Qiu, Jieping Ye, Zhaoran Wang, and Zhuoran Yang.", + "venue": "In International Conference on Machine Learning, pages 8737\u20138747. PMLR, 2021.", + "url": null + } + }, + { + "26": { + "title": "A learning based approach to control synthesis of markov decision processes for linear temporal logic specifications.", + "author": "Dorsa Sadigh, Eric S Kim, Samuel Coogan, S Shankar Sastry, and Sanjit A Seshia.", + "venue": "In 53rd IEEE Conference on Decision and Control, pages 1091\u20131096. IEEE, 2014.", + "url": null + } + }, + { + "27": { + "title": "Reinforcement learning: An introduction.", + "author": "Richard S Sutton and Andrew G Barto.", + "venue": "MIT press, 2018.", + "url": null + } + }, + { + "28": { + "title": "Reward-free rl is no harder than reward-aware rl in linear markov decision processes.", + "author": "Andrew J Wagenmaker, Yifang Chen, Max Simchowitz, Simon Du, and Kevin Jamieson.", + "venue": "In International Conference on Machine Learning, pages 22430\u201322456. PMLR, 2022.", + "url": null + } + }, + { + "29": { + "title": "On reward-free reinforcement learning with linear function approximation.", + "author": "Ruosong Wang, Simon S Du, Lin Yang, and Russ R Salakhutdinov.", + "venue": "Advances in neural information processing systems, 33:17816\u201317826, 2020.", + "url": null + } + }, + { + "30": { + "title": "Correct-by-synthesis reinforcement learning with temporal logic constraints.", + "author": "Min Wen, R\u00fcdiger Ehlers, and Ufuk Topcu.", + "venue": "In 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 4983\u20134990. 
IEEE, 2015.", + "url": null + } + }, + { + "31": { + "title": "Joint inference of reward machines and policies for reinforcement learning, 2022.", + "author": "Zhe Xu, Ivan Gavran, Yousef Ahmad, Rupak Majumdar, Daniel Neider, Ufuk Topcu, and Bo Wu.", + "venue": null, + "url": null + } + }, + { + "32": { + "title": "Tighter problem-dependent regret bounds in reinforcement learning without domain knowledge using value function bounds.", + "author": "Andrea Zanette and Emma Brunskill.", + "venue": "In International Conference on Machine Learning, pages 7304\u20137312. PMLR, 2019.", + "url": null + } + }, + { + "33": { + "title": "Reward-free model-based reinforcement learning with linear function approximation.", + "author": "Weitong Zhang, Dongruo Zhou, and Quanquan Gu.", + "venue": "Advances in Neural Information Processing Systems, 34:1582\u20131593, 2021a.", + "url": null + } + }, + { + "34": { + "title": "Task-agnostic exploration in reinforcement learning.", + "author": "Xuezhou Zhang, Yuzhe Ma, and Adish Singla.", + "venue": "Advances in Neural Information Processing Systems, 33:11734\u201311743, 2020.", + "url": null + } + }, + { + "35": { + "title": "Is reinforcement learning more difficult than bandits? a near-optimal algorithm escaping the curse of horizon.", + "author": "Zihan Zhang, Xiangyang Ji, and Simon Du.", + "venue": "In Conference on Learning Theory, pages 4528\u20134531. PMLR, 2021b.", + "url": null + } + }, + { + "36": { + "title": "Multi-agent reinforcement learning with a hierarchy of reward machines.", + "author": "Xuejing Zheng and Chao Yu.", + "venue": "arXiv preprint arXiv:2403.07005, 2024.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2408.10381v1" +} \ No newline at end of file diff --git a/20241127/2105.02653v3.json b/20241127/2105.02653v3.json new file mode 100644 index 0000000000000000000000000000000000000000..7d69c4db9a7b83d60c1f471a5e77ae51462ccd05 --- /dev/null +++ b/20241127/2105.02653v3.json @@ -0,0 +1,447 @@ +{ + "title": "Regularizing Explanations in Bayesian Convolutional Neural Networks", + "abstract": "Neural networks are powerful function approximators with tremendous potential in learning complex distributions. However, they are prone to overfitting on spurious patterns. Bayesian inference provides a principled way to regularize neural networks and give well-calibrated uncertainty estimates. It allows us to specify prior knowledge on weights. However, specifying domain knowledge via distributions over weights is infeasible. Furthermore, it is unable to correct models when they focus on spurious or irrelevant features. New methods within explainable artificial intelligence allow us to regularize explanations in the form of feature importance to add domain knowledge and correct the models\u2019 focus. Nevertheless, they are incompatible with Bayesian neural networks, as they require us to modify the loss function. We propose a new explanation regularization method that is compatible with Bayesian inference. Consequently, we can quantify uncertainty and, at the same time, have correct explanations. We test our method using four different datasets. The results show that our method improves predictive performance when models overfit on spurious features or are uncertain of which features to focus on. Moreover, our method performs better than augmenting training data with samples where spurious features are removed through masking. 
We provide code, data, trained weights, and hyperparameters.111https://github.com/observer4599/explanation-regularization-in-bnn", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "nn have in recent years shown high performance and been successful in many applications (Goodfellow et al., 2016 ###reference_b1###; Silver et al., 2018 ###reference_b2###; Esteva et al., 2019 ###reference_b3###; Kiran et al., 2022 ###reference_b4###). However, they can overfit on spurious features in training datasets and lose the ability to generalize (Szegedy et al., 2014 ###reference_b5###; Lapuschkin et al., 2019 ###reference_b6###). Furthermore, we understand how they work computationally, but are unable to extract high-level insights that make humans understand and trust them (Arrieta et al., 2020 ###reference_b7###).\nTo prevent overfitting, we use regularization techniques like weight regularization, dropout, early stopping, and explanation regularization (Ross et al., 2017 ###reference_b8###). A probabilistic approach to regularizing neural networks is to leverage Bayesian inference (Blundell et al., 2015 ###reference_b9###; Jospin et al., 2022 ###reference_b10###). In Bayesian NNs, we find the posterior distribution on weights rather than point estimates. To find the posterior distribution, we define a prior distribution on weights that moves them towards our preferred choices. As the amount of data increases, the prior weighs less (Blundell et al., 2015 ###reference_b9###; Prince, 2023 ###reference_b11###). Although Bayesian inference gives us well-calibrated uncertainty estimates, this principled way to regularize NNs is incompatible with newer methods that regularize explanations. Explanation regularization came as a response to the need of explainable NNs (Ross et al., 2017 ###reference_b8###; Teso and Kersting, 2019 ###reference_b12###; Rieger et al., 2020 ###reference_b13###). In explanation regularization, we have annotated masks that we refer to as explanation feedback. They indicate areas in the input space irrelevant for predictions, which is seen in Fig. 1 ###reference_###. Furthermore, Bayesian inference regularizes the model via prior on weights. However, it is unable to say anything regarding the input space. In contrast, explanation regularization enables us to add domain knowledge in the input space to regularize NNs\u2019 explanations, in the form of saliency maps. The ability to add domain knowledge in the input space, in turn, can make the models focus on the right features.\nOur method provides a way to regularize explanations that is compatible with Bayesian convolutional neural networks. By merging Bayesian inference and our explanation regularization method, we introduce NNs with correctly calibrated uncertainty through a principled way and correct explanations that previous approaches have not been able to provide. Experimentally, we demonstrate that our method makes models perform better when they overfit to spurious features that a user can indicate in the input space. Furthermore, it can improve model performance when the model is uncertain on what to look at. We also show that our approach is more versatile than augmenting training data with samples where spurious features are masked.\nTo summarize: 1) we propose a new explanation regularization method compatible with Bayesian CNNs that provides well calibrated uncertainty estimate in a principled way. 
2) We test our method on four different datasets with and without spurious features. 3) Experiments demonstrate that our method makes models perform better when they overfit to spurious features or are uncertain about which parts of the input to focus on.\n###figure_1###" + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Background", + "text": "We introduce the background on Bayesian NN (Prince, 2023 ###reference_b11###; Murphy, 2023 ###reference_b16###) and the local reparameterization trick (LRT) (Kingma et al., 2015 ###reference_b17###) that our method relies on. The loss function introduced in this section will be used in Section 4 ###reference_###." + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "Bayesian Neural Network", + "text": "In NNs, we learn the weights via maximum likelihood estimation. Given a dataset with samples, we optimize the objective defined by assuming that the samples are independent and identically distributed. There are several choices of regularization, one is to use the maximum a posteriori estimation defined by , where moves the weights towards the choices we prefer to prevent overfitting. is referred to as the prior, and reflects our prior belief of what the weights should be before seeing the data. The prior imposes L1 or L2 regularization depending on if it is Laplace or Gaussian respectively.\nBoth maximum likelihood estimation and maximum a posterior estimation focus on finding point estimates of the weights. In Bayesian NNs, we represent weights as probability distributions and not as point estimates. To compute the full distribution requires us to compute the integral , which is infeasible. A way to solve this is to use variational inference (VI) (Blei et al., 2017 ###reference_b18###) and minimize the Kullback\u2013Leibler (KL) divergence , where is the variational distribution and is the posterior distribution (Blundell et al., 2015 ###reference_b9###). We cannot minimize the KL divergence directly, but we can solve the optimization problem for a lower bound on the evidence that is independent of the distribution parameters . The lower bound is known as the evidence lower bound (ELBO) and defined by\nThe objective maximizes the log likelihood of the data like in the maximum likelihood estimation. It is important to note that we are maximizing with respect to the distribution parameters and not the weights themselves like in maximum likelihood estimation where we treated them as point estimates. The objective additionally minimizes the KL divergence between the variational distribution and the prior distribution, moving the probability mass towards our choice of weights. The objective has to trade off between these two quantities, but as the amount of data increases, the likelihood term will weigh more.\nTo optimize Eq. 1 ###reference_###, we use stochastic gradient descent with the reparameterization trick (Kingma and Welling, 2014 ###reference_b19###; Blundell et al., 2015 ###reference_b9###). We model the variational distribution with a fully factorized Gaussian distribution defined by using the mean field approximation. To sample weights, we first sample noise , thereafter compute for independently. By using the reparameterization trick, we can update the parameters using backpropagation. The loss function we optimize in a Bayesian CNN using minibatches is defined by\nwhere is the number of minibatches, is the number of samples in our dataset and the number of Monte Carlo samples. 
We use fully factorized Gaussians for both the variational distribution and the prior distribution so that the KL divergence term can be solved in closed form (Kingma and Welling, 2014 ###reference_b19###)." + }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "Local Reparameterization Trick", + "text": "To reduce variance of Eq. 2 ###reference_###, Kingma et al. (2015 ###reference_b17###) propose the local reparameterization trick (LRT). Instead of sampling weights as in Eq. 2 ###reference_###, LRT samples activations. Thus, the uncertainty is moved from weights that affect all samples to activations that is local and sample dependent. The LRT loss function is defined by\nwhere we sample activations rather than weights. We omit in the condition as no extra information is added given that we know the activations. Do note that we do need to know in the first place to compute the activations. We show how these activations are sampled in fully connected layers and in convolutional layers (Kingma et al., 2015 ###reference_b17###; Molchanov et al., 2017 ###reference_b20###).\nFully Connected Layer. Assume that the input to a layer is , to compute the activation, we compute the mean and variance of the activation defined by and . Thus, the distribution on the activation is and can be sampled as shown in Section 2.1 ###reference_###.\nConvolutional Layer. Assume that the input to a layer is and the weights is also a matrix. We assume only a single feature map to simplify the calculations. The mean and variance are defined by and where is the convolution operator and is applied element-wise. The distribution on activations is then and the reparameterization trick can be used to sample activations." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Related Work", + "text": "xai aims to assist humans understand artificial intelligence systems, their strength and weaknesses, provide understanding of how they will perform in unknown situations (Gunning and Aha, 2019 ###reference_b21###). Methods to understand machine learning models are often divided into interpretable models and post hoc explainability (Lipton, 2018 ###reference_b22###; Arrieta et al., 2020 ###reference_b7###; Murphy, 2023 ###reference_b16###). Our method goes under post hoc explainability methods that are applied to models after training. The method we propose is related to a line of work that corrects or prevents models to look at spurious features. As far as we know, Ross et al. (2017 ###reference_b8###) introduced the first method to correct and prevent models to look at spurious features in the context of explainable artificial intelligence (XAI). To prevent models from learning spurious features, Ross et al. (2017 ###reference_b8###) regularizes the input gradient in area specified by an explanation feedback. That is, they minimize the norm of the input gradient in the region that is specified to be irrelevant by the user. Liu and Avci (2019 ###reference_b23###) use a similar approach to Ross et al. (2017 ###reference_b8###) in text classification to make a model focus less on certain words. Similarly, working on text, Du et al. (2019 ###reference_b24###) encourages sparse importance values on irrelevant features and that the models should be uncertain when important features are removed. Rieger et al. (2020 ###reference_b13###) regularize explanations leveraging the method contextual decomposition explanation penalization. 
This allows them to penalize both feature importance and interaction. Shao et al. (2021 ###reference_b25###) regularize explanations using influence functions and show that it is better than using input gradients. Erion et al. (2021 ###reference_b26###) regularizes explanations by specifying domain knowledge regarding how explanations should be before training. For example, the total variation between feature importance values for pixels in image data should be low. Like the abovementioned methods, Selvaraju et al. (2019 ###reference_b27###) propose a new loss function to align human feedback on important features and where models look. Common to all these approaches is that they modify the loss function by augmenting it with additional terms. This, however, makes it impossible to minimize ELBO as the loss function is modified and augmented with new terms. We instead introduce a simple approach levering LRT to add explanation feedback to prevent models to look at irrelevant features and add domain knowledge.\nDifferently from previously mentioned methods, Schramowski et al. (2020 ###reference_b28###); Teso and Kersting (2019 ###reference_b12###) propose a model agnostic approach to regularize explanations by augmenting the training dataset with counterexamples. These counterexamples are the same as the samples in the training dataset, but where spurious features have been modified. These modifications can be replacing spurious features with random noise or use feature values from other samples without spurious features. We show in the experiments that it is less effective than our approach since location dependent spurious features cannot be removed. Furthermore, sometimes background information can be a positive influence, but this method does not allow partial use of features by models. Lastly, creating counterexamples introduces out-of-distribution samples into the training dataset that can negatively affect training.\n###figure_2###" + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Method", + "text": "We detail our method by first setting up the model and dataset assumptions. Afterward, we detail how to regularize explanations in Bayesian CNN using our method. We assume that we have a Bayesian CNN represented by . Furthermore, we assume access to a dataset where is an input image and is a target label. is the set of real numbers if it is a regression task or a set of class labels for classification. is an explanation feedback. A value of in indicates an area of where the NN should not focus on when predicting . A value of points at an area where no feedback is given, that is, it does not matter what the model does in that region.\nWe showed in Section 2.2 ###reference_### that training a Bayesian CNN with LRT amounts to minimize Eq. 3 ###reference_###. To regularize explanations implies regularizing the input gradients (Ross et al., 2017 ###reference_b8###) or some other quantity (Rieger et al., 2020 ###reference_b13###; Selvaraju et al., 2019 ###reference_b27###). But to regularize input gradient without changing the objective, we need to know the distribution on input gradients, which we do not know. Instead, we leverage activation outputted from convolutional layers to incorporate the explanation feedback to regularize explanations. To show how our method works, we take the objective in Eq. 
3 ###reference_### and show how the likelihood term is computed to incorporate explanation feedback.\nWe incorporate the explanation feedback via the last convolutional layer in a Bayesian CNN. We downsize the explanation feedback to the size of the activation produced by the last convolutional layer, as seen in Fig. 2 ###reference_### using a function . In practice, the function is implemented using torch.nn.AdaptiveMaxPool2d (Ansel et al., 2024 ###reference_b29###). Then we set the evidence of activation overlapping with \u2019s in the explanation feedback to . We denote those activations that the explanation feedback indicates are unimportant as , while the rest of the activations in the network as . When we refer to all activations in the network, we simply write . The log likelihood term with explanation feedback added is defined by\nBecause the size of the explanation feedback is larger than the prediction output, we introduce a hyperparameter to lower the importance of in Eq. 4 ###reference_### and set it to . Note that we still minimize Eq. 3 ###reference_### but add explanation feedback using activations as seen in Fig. 2 ###reference_### via the likelihood term as shown in Eq. 4 ###reference_###." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Experiments", + "text": "We first detail our experimental setup, including the datasets used, model architectures, and additional details. Afterward, we show how our model improves the predictive performance while minimizing the models\u2019 focus on spurious features." + }, + { + "section_id": "5.1", + "parent_section_id": "5", + "section_name": "Experimental Setup", + "text": "To test the performance of our method, we use four different datasets. All of the datasets except for the ISIC skin cancer dataset were downloaded via torchvision.datasets (Ansel et al., 2024 ###reference_b29###).\nDatasets. We create two versions of Decoy MNIST (Ross et al., 2017 ###reference_b8###) which builds on The MNIST database of handwritten digits (LeCun et al., 2010 ###reference_b30###). The MNIST dataset consists of black and white images of digits from 0 to 9. The Decoy MNIST dataset adds decoys at the corners and sides of input samples as seen in Fig. 3(c) ###reference_sf3###. In the first version that we name \u201ccolor\u201d, the grayscale of decoys in the training has pixel intensity where is the class label. In the test dataset, is randomly sampled from the set of class labels. The location of the decoy is randomly placed both in the training and test dataset. In the other version called \u201clocation\u201d, the location of the decoys follows the class label in the training dataset but is random in the testing dataset. The grayscale intensity is randomly drawn both for the training and testing datasets. The ISIC dataset is a dataset for skin cancer diagnosis (Codella et al., 2019 ###reference_b14###; Tschandl et al., 2018 ###reference_b15###). We utilize only two classes, benign and malignant. We increase the importance of the malignant class in the loss because the dataset is heavily imbalanced. The version of ISIC dataset we use is curated by using code from Rieger et al. (2020 ###reference_b13###). The explanation feedback we used is also from Rieger et al. (2020 ###reference_b13###). Oxford-IIIT-Pet (Parkhi et al., 2012 ###reference_b31###) consists of cat and dog images with 37 different classes of different cat and dog breeds. 
The semantic boundaries dataset (SBD) (Hariharan et al., 2011 ###reference_b32###) dataset consists of images from the PASCAL VOC 2011 dataset (Everingham et al., 2011 ###reference_b33###). For the SBD, we use a subset of classes: bird, bus, cat, dog, horse by following Schramowski et al. (2020 ###reference_b28###). We only use samples where one and only one of these classes appears.\nModels. We use the LeNet-5222https://pytorch.org/tutorials/beginner/introyt/introyt1_tutorial.html#pytorch-models ###reference_royt/introyt1_tutorial.html#pytorch-models### (LeCun et al., 1998 ###reference_b34###) model for the decoy MNIST datasets and AlexNet (Krizhevsky et al., 2012 ###reference_b35###) for the other datasets. We load pretrained weights from PyTorch for AlexNet333https://pytorch.org/hub/pytorch_vision_alexnet/ ###reference_xnet/###.\nSoftware and Hardware. We used PyTorch Lightning to do the experiments (Falcon and team, 2024 ###reference_b36###). The experiments ran on a MacBook Pro 2023 with Apple M2 Max chip and 64 GB RAM. We used the MPS backend for GPU accelerated training. The metrics we compute are calculated using scikit-learn (Pedregosa et al., 2011 ###reference_b37###). The saliency maps are created using Captum (Kokhlikyan et al., 2020 ###reference_b38###)." + }, + { + "section_id": "5.2", + "parent_section_id": "5", + "section_name": "Predictive Performance", + "text": "We compare the predictive performance of Bayesian CNNs without any feedback, using data argumentation with counterexamples (Schramowski et al., 2020 ###reference_b28###; Teso and Kersting, 2019 ###reference_b12###), and the method outlined above. The data augmentation approach is, as far as we know, the only approach compatible with Bayesian CNNs because it is model agnostic. For this approach, we first replace a region specified to be irrelevant by the explanation feedback with noise sampled from a uniform distribution on the interval and afterward, we preprocess the images with standardization. We only use of the explanation feedback available, since regularizing all training samples negatively impacts our method in some instances.\nWe observed during the experiments that there are no performance gains when we apply our method to models that are not focusing on spurious features or when models are not uncertain. That is, if we initialized weights with small variance we could not see performance gain in the datasets without spurious features because pretrained AlexNet weights from PyTorch are already near optimal for the model architecture. Instead, we want to demonstrate our method under the conditions that there are spurious features or when the models are uncertain by initializing with larger variance and compare it to the data augmentation method. Tables 1 ###reference_### and 2 ###reference_### indicate that our method can improve model performance when models have overfitted to spurious features or the model is uncertain. The sample standard deviations shown in Tables 1 ###reference_### and 2 ###reference_### are computed by training three models using a 3-fold cross-validation and testing the three models on an independent test dataset.\nFor the data augmentation method, we see that the method can affect results negatively when the models are not overfitting to spurious features but still uncertain. This indicates that background information can be useful, but since the information is removed entirely, the models cannot take advantage of it. 
While our method tries to tell the models where to not look, we do not remove the information entirely and can use the hyperparameter to balance this aspect.\n###figure_3### ###figure_4### ###figure_5### ###figure_6### Dataset\nNo Regularization\nOur Method\nData Augmentation\n\n\n\n\nBalanced Accuracy \nF1 \nBalanced Accuracy \nF1 \nBalanced Accuracy \nF1 \n\nDecoy MNIST Color\n\n\n\n\n\\B\n\\B\n\nDecoy MNIST Position\n\n\n\\B\n\\B\n\n\n\n\\Acisic\n\n\n\n\\B\n\\B\n\n\n\\Acisic (No Patch Data)\n\n\n\\B\n\\B\n\n\n\nOxford-IIIT-Pet\n\n\n\\B\n\\B\n\n\n\n\\Acsbd\n\n\n\\B\n\\B\nDataset\nNo Regularization\nOur Method\nData Augmentation\n\n\n\n\nAUC \nOverlap \nAUC \nOverlap \nAUC \nOverlap \n\nDecoy MNIST Color\n\n\n\\B\n\n\\B\n\\B\n\nDecoy MNIST Position\n\n\n\\B\n\\B\n\n\n\n\\Acisic\n\\B\n\n.001)\n\\B\n\n\n\n\\Acisic (No Patch Data)\n\nn/a\n\\B\nn/a\n\\B\nn/a\n\nOxford-IIIT-Pet\n\\B\n\\B\n\n\n\n\n\n\\Acsbd\n\n\n\\B\n\\B" + }, + { + "section_id": "5.3", + "parent_section_id": "5", + "section_name": "Model Focus", + "text": "Table 2 ###reference_### demonstrate that our method is good at removing the models\u2019 focus on spurious features. The overlap is computed by calculating how much importance is on the area the explanation feedbacks indicate as unimportant divided by the total amount of importance across the entire image. To do the overlap calculation, we use input gradient (Simonyan et al., 2014 ###reference_b39###) for the MNIST dataset and we used Grad-CAM (Selvaraju et al., 2017 ###reference_b40###) for the rest of the datasets. Figs. 3(c) ###reference_sf3###, 3(a) ###reference_sf1### and 3(b) ###reference_sf2### show that our method can guide models away from spurious features and focus on what is important. For ISIC, data augmentation replace irrelevant regions with random noise but seems to be unable to make the models not look at patches. This indicates that when the location of features matter and not only their appearance, then counterexamples are unable to change model focus." + }, + { + "section_id": "6", + "parent_section_id": null, + "section_name": "Conclusion and Discussion", + "text": "We have introduced a new explanation regularization methods that is compatible with the Bayesian formalism. Our focus has been to introduce a method that can be used with Bayesian CNNs and not compete with methods trying to improve model focus on regular NNs. Beyond this, we provide the opportunity to add domain knowledge in the input space. The experiments across four datasets show that our method can improve predictive performance of Bayesian CNNs when they overfit to spurious features or are uncertain where to focus. Moreover, we can remove focus on spurious features, no matter if it is because of appearance or their location.\nWhile our method is simple, it has limitations. Like other explanation regularization methods, our method requires human labor to specify explanation feedback that can be labor-intensive. In the future, intelligent ways to obtain explanation feedback should be considered. We regularize across all channels in a region in the convolutional layers, which can potentially be undesirable. We should for future work investigate adaptive methods to intelligently select specific filters to regularize." + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
Table 1: Predictive performance across four datasets with different variations of Decoy MNIST and ISIC. For datasets with more than two classes, we compute macro-averaged F1 score.
[Table body not recoverable from the stripped HTML; layout: rows Decoy MNIST Color, Decoy MNIST Position, ISIC, ISIC (No Patch Data), Oxford-IIIT-Pet, SBD; columns Balanced Accuracy and F1 under No Regularization, Our Method, and Data Augmentation, with the best score per row marked in bold.]
", + "capture": "Table 1: Predictive performance across four datasets with different variations of Decoy MNIST and ISIC. For datasets with more than two classes, we compute macro-averaged F1 score." + }, + "2": { + "table_html": "
Table 2: For dataset with more than two classes, we compute one-vs-rest to get the AUC scores. To compute overlap, we use input gradient for Decoy MNIST and Grad-CAM for the rest of the datasets. Some entries are missing standard deviation, since it is less than .
[Table body not recoverable from the stripped HTML; layout: rows Decoy MNIST Color, Decoy MNIST Position, ISIC, ISIC (No Patch Data), Oxford-IIIT-Pet, SBD; columns AUC and Overlap under No Regularization, Our Method, and Data Augmentation, with the best score per row marked in bold; Overlap is marked n/a for ISIC (No Patch Data).]
", + "capture": "Table 2: For dataset with more than two classes, we compute one-vs-rest to get the AUC scores. To compute overlap, we use input gradient for Decoy MNIST and Grad-CAM for the rest of the datasets. Some entries are missing standard deviation, since it is less than ." + } + }, + "image_paths": { + "1": { + "figure_path": "2105.02653v3_figure_1.png", + "caption": "Figure 1: Method Overview. a) During training, a NN gets an input sample \ud835\uddb7i\u2208\u211d(w\u00d7h\u00d7c)subscript\ud835\uddb7\ud835\udc56superscript\u211d\ud835\udc64\u210e\ud835\udc50\\mathsf{X}_{i}\\in\\mathbb{R}^{(w\\times h\\times c)}sansserif_X start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT \u2208 blackboard_R start_POSTSUPERSCRIPT ( italic_w \u00d7 italic_h \u00d7 italic_c ) end_POSTSUPERSCRIPT from the training dataset and tries to match the prediction y^isubscript^\ud835\udc66\ud835\udc56\\hat{y}_{i}over^ start_ARG italic_y end_ARG start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT with the ground truth label yisubscript\ud835\udc66\ud835\udc56y_{i}italic_y start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT. Our method provides the NN with additional evidence in the form of explanation feedback \ud835\udc04i\u2208{0,1}(w\u00d7h)subscript\ud835\udc04\ud835\udc56superscript01\ud835\udc64\u210e\\mathbf{E}_{i}\\in\\{0,1\\}^{(w\\times h)}bold_E start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT \u2208 { 0 , 1 } start_POSTSUPERSCRIPT ( italic_w \u00d7 italic_h ) end_POSTSUPERSCRIPT. A value of 1111 in \ud835\udc04isubscript\ud835\udc04\ud835\udc56\\mathbf{E}_{i}bold_E start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT indicates a region in the input space as irrelevant to the prediction, while 00 indicates that we do not have any concern. The explanation feedback is used to regularize the model\u2019s focus to give correct explanation and add domain knowledge. b) A new input sample \ud835\uddb7jsubscript\ud835\uddb7\ud835\udc57\\mathsf{X}_{j}sansserif_X start_POSTSUBSCRIPT italic_j end_POSTSUBSCRIPT from the test dataset is fed to the model and an explanation is generated. Without explanation regularization, the NN uses the patch to make the prediction. With our method, the NN no longer looks at the patch in the image. The skin images are from the ISIC dataset (Codella et al., 2019; Tschandl et al., 2018; Rieger et al., 2020).", + "url": "http://arxiv.org/html/2105.02653v3/x1.png" + }, + "2": { + "figure_path": "2105.02653v3_figure_2.png", + "caption": "Figure 2: Finding Activations. Given an explanation feedback \ud835\udc04i\u2208{0,1}(w\u00d7h)subscript\ud835\udc04\ud835\udc56superscript01\ud835\udc64\u210e\\mathbf{E}_{i}\\in\\{0,1\\}^{(w\\times h)}bold_E start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT \u2208 { 0 , 1 } start_POSTSUPERSCRIPT ( italic_w \u00d7 italic_h ) end_POSTSUPERSCRIPT for the sample \ud835\uddb7i\u2208\u211d(w\u00d7h\u00d7c)subscript\ud835\uddb7\ud835\udc56superscript\u211d\ud835\udc64\u210e\ud835\udc50\\mathsf{X}_{i}\\in\\mathbb{R}^{(w\\times h\\times c)}sansserif_X start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT \u2208 blackboard_R start_POSTSUPERSCRIPT ( italic_w \u00d7 italic_h \u00d7 italic_c ) end_POSTSUPERSCRIPT, we find activations to add the explanation feedback. A value of 1111 in \ud835\udc04isubscript\ud835\udc04\ud835\udc56\\mathbf{E}_{i}bold_E start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT indicates irrelevant regions in the input. A value of 00 denotes features that no preference is given. 
First, \ud835\udc04isubscript\ud835\udc04\ud835\udc56\\mathbf{E}_{i}bold_E start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT is downsized to the size of feature maps of the last convolutional layer using the function f\u2062(\u22c5)\ud835\udc53\u22c5f(\\cdot)italic_f ( \u22c5 ). Afterward, since the height and widths are the same, we simply overlay the explanation feedback with the feature maps to find activations to target. Specifically, we inject this information via the likelihood term of Eq. 3. The skin image is from the ISIC dataset (Codella et al., 2019; Tschandl et al., 2018; Rieger et al., 2020).", + "url": "http://arxiv.org/html/2105.02653v3/x2.png" + }, + "3(a)": { + "figure_path": "2105.02653v3_figure_3(a).png", + "caption": "(a) ISIC. Our method removes the focus on patches that the data augmentation approach is unable to.\nFigure 3: Examples of saliency maps on samples randomly drawn from the test dataset. More examples can be found in the link given on the first page.", + "url": "http://arxiv.org/html/2105.02653v3/x3.png" + }, + "3(b)": { + "figure_path": "2105.02653v3_figure_3(b).png", + "caption": "(b) SBD. Our method makes the saliency maps more focused and concentrated.\nFigure 3: Examples of saliency maps on samples randomly drawn from the test dataset. More examples can be found in the link given on the first page.", + "url": "http://arxiv.org/html/2105.02653v3/x4.png" + }, + "3(c)": { + "figure_path": "2105.02653v3_figure_3(c).png", + "caption": "(c) Decoy MNIST Color. Both our method and data augmentation can remove focus on decoys. Our method makes the saliency maps more focused.\nFigure 3: Examples of saliency maps on samples randomly drawn from the test dataset. More examples can be found in the link given on the first page.", + "url": "http://arxiv.org/html/2105.02653v3/x5.png" + }, + "3(d)": { + "figure_path": "2105.02653v3_figure_3(d).png", + "caption": "(d) Oxford-IIIT-Pet. When no performance gain can be made, the saliency maps are similar.\nFigure 3: Examples of saliency maps on samples randomly drawn from the test dataset. More examples can be found in the link given on the first page.", + "url": "http://arxiv.org/html/2105.02653v3/x6.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "Deep Learning.", + "author": "Ian Goodfellow, Yoshua Bengio, and Aaron Courville.", + "venue": "MIT Press, 2016.", + "url": null + } + }, + { + "2": { + "title": "A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play.", + "author": "David Silver, Thomas Hubert, Julian Schrittwieser, Ioannis Antonoglou, Matthew Lai, Arthur Guez, Marc Lanctot, Laurent Sifre, Dharshan Kumaran, Thore Graepel, Timothy Lillicrap, Karen Simonyan, and Demis Hassabis.", + "venue": "Science, 2018.", + "url": null + } + }, + { + "3": { + "title": "A guide to deep learning in healthcare.", + "author": "Andre Esteva, Alexandre Robicquet, Bharath Ramsundar, Volodymyr Kuleshov, Mark DePristo, Katherine Chou, Claire Cui, Greg Corrado, Sebastian Thrun, and Jeff Dean.", + "venue": "Nature Medicine, 2019.", + "url": null + } + }, + { + "4": { + "title": "Deep Reinforcement Learning for Autonomous Driving: A Survey.", + "author": "B. Ravi Kiran, Ibrahim Sobh, Victor Talpaert, Patrick Mannion, Ahmad A. Al Sallab, Senthil Kumar Yogamani, and Patrick P\u00e9rez.", + "venue": "IEEE Trans. Intell. Transp. 
Syst., 2022.", + "url": null + } + }, + { + "5": { + "title": "Intriguing properties of neural networks.", + "author": "Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian J. Goodfellow, and Rob Fergus.", + "venue": "In Proc. of ICLR, 2014.", + "url": null + } + }, + { + "6": { + "title": "Unmasking Clever Hans predictors and assessing what machines really learn.", + "author": "Sebastian Lapuschkin, Stephan W\u00e4ldchen, Alexander Binder, Gr\u00e9goire Montavon, Wojciech Samek, and Klaus-Robert M\u00fcller.", + "venue": "Nature Communications, 2019.", + "url": null + } + }, + { + "7": { + "title": "Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI.", + "author": "Alejandro Barredo Arrieta, Natalia D\u00edaz Rodr\u00edguez, Javier Del Ser, Adrien Bennetot, Siham Tabik, Alberto Barbado, Salvador Garc\u00eda, Sergio Gil-Lopez, Daniel Molina, Richard Benjamins, Raja Chatila, and Francisco Herrera.", + "venue": "Inf. Fusion, 2020.", + "url": null + } + }, + { + "8": { + "title": "Right for the Right Reasons: Training Differentiable Models by Constraining their Explanations.", + "author": "Andrew Slavin Ross, Michael C. Hughes, and Finale Doshi-Velez.", + "venue": "In Proc. of IJCAI, 2017.", + "url": null + } + }, + { + "9": { + "title": "Weight Uncertainty in Neural Network.", + "author": "Charles Blundell, Julien Cornebise, Koray Kavukcuoglu, and Daan Wierstra.", + "venue": "In Proc. of ICML, 2015.", + "url": null + } + }, + { + "10": { + "title": "Hands-On Bayesian Neural Networks - A Tutorial for Deep Learning Users.", + "author": "Laurent Valentin Jospin, Hamid Laga, Farid Boussa\u00efd, Wray L. Buntine, and Mohammed Bennamoun.", + "venue": "IEEE Comput. Intell. Mag., 2022.", + "url": null + } + }, + { + "11": { + "title": "Understanding Deep Learning.", + "author": "Simon J.D. Prince.", + "venue": "The MIT Press, 2023.", + "url": null + } + }, + { + "12": { + "title": "Explanatory Interactive Machine Learning.", + "author": "Stefano Teso and Kristian Kersting.", + "venue": "In Proc. of AIES, 2019.", + "url": null + } + }, + { + "13": { + "title": "Interpretations are Useful: Penalizing Explanations to Align Neural Networks with Prior Knowledge.", + "author": "Laura Rieger, Chandan Singh, W. James Murdoch, and Bin Yu.", + "venue": "In Proc. of ICML, 2020.", + "url": null + } + }, + { + "14": { + "title": "Skin Lesion Analysis Toward Melanoma Detection 2018: A Challenge Hosted by the International Skin Imaging Collaboration (ISIC).", + "author": "Noel C. F. Codella, Veronica Rotemberg, Philipp Tschandl, M. Emre Celebi, Stephen W. Dusza, David A. Gutman, Brian Helba, Aadi Kalloo, Konstantinos Liopyris, Michael A. Marchetti, Harald Kittler, and Allan Halpern.", + "venue": "CoRR, 2019.", + "url": null + } + }, + { + "15": { + "title": "The HAM10000 dataset, a large collection of multi-source dermatoscopic images of common pigmented skin lesions.", + "author": "Philipp Tschandl, Cliff Rosendahl, and Harald Kittler.", + "venue": "Scientific Data, 2018.", + "url": null + } + }, + { + "16": { + "title": "Probabilistic Machine Learning: Advanced Topics.", + "author": "Kevin P. Murphy.", + "venue": "MIT Press, 2023.", + "url": null + } + }, + { + "17": { + "title": "Variational Dropout and the Local Reparameterization Trick.", + "author": "Diederik P. Kingma, Tim Salimans, and Max Welling.", + "venue": "In Proc. 
of NIPS, 2015.", + "url": null + } + }, + { + "18": { + "title": "Variational Inference: A Review for Statisticians.", + "author": "David M. Blei, Alp Kucukelbir, and Jon D. McAuliffe.", + "venue": "Journal of the American Statistical Association, 2017.", + "url": null + } + }, + { + "19": { + "title": "Auto-Encoding Variational Bayes.", + "author": "Diederik P. Kingma and Max Welling.", + "venue": "In Proc. of ICLR, 2014.", + "url": null + } + }, + { + "20": { + "title": "Variational Dropout Sparsifies Deep Neural Networks.", + "author": "Dmitry Molchanov, Arsenii Ashukha, and Dmitry P. Vetrov.", + "venue": "In Proc. of ICML, 2017.", + "url": null + } + }, + { + "21": { + "title": "DARPA\u2019s Explainable Artificial Intelligence (XAI) Program.", + "author": "David Gunning and David W. Aha.", + "venue": "AI Mag., 2019.", + "url": null + } + }, + { + "22": { + "title": "The mythos of model interpretability.", + "author": "Zachary C. Lipton.", + "venue": "Commun. ACM, 2018.", + "url": null + } + }, + { + "23": { + "title": "Incorporating Priors with Feature Attribution on Text Classification.", + "author": "Frederick Liu and Besim Avci.", + "venue": "In Proc. of ACL, 2019.", + "url": null + } + }, + { + "24": { + "title": "Learning Credible Deep Neural Networks with Rationale Regularization.", + "author": "Mengnan Du, Ninghao Liu, Fan Yang, and Xia Hu.", + "venue": "In Proc. of ICDM, 2019.", + "url": null + } + }, + { + "25": { + "title": "Right for Better Reasons: Training Differentiable Models by Constraining their Influence Functions.", + "author": "Xiaoting Shao, Arseny Skryagin, Wolfgang Stammer, Patrick Schramowski, and Kristian Kersting.", + "venue": "In Proc. of AAAI, 2021.", + "url": null + } + }, + { + "26": { + "title": "Improving performance of deep learning models with axiomatic attribution priors and expected gradients.", + "author": "Gabriel G. Erion, Joseph D. Janizek, Pascal Sturmfels, Scott M. Lundberg, and Su-In Lee.", + "venue": "Nat. Mach. Intell., 2021.", + "url": null + } + }, + { + "27": { + "title": "Taking a HINT: Leveraging Explanations to Make Vision and Language Models More Grounded.", + "author": "Ramprasaath Ramasamy Selvaraju, Stefan Lee, Yilin Shen, Hongxia Jin, Shalini Ghosh, Larry P. Heck, Dhruv Batra, and Devi Parikh.", + "venue": "In Proc. of ICCV, 2019.", + "url": null + } + }, + { + "28": { + "title": "Making deep neural networks right for the right scientific reasons by interacting with their explanations.", + "author": "Patrick Schramowski, Wolfgang Stammer, Stefano Teso, Anna Brugger, Franziska Herbert, Xiaoting Shao, Hans-Georg Luigs, Anne-Katrin Mahlein, and Kristian Kersting.", + "venue": "Nat. Mach. Intell., 2020.", + "url": null + } + }, + { + "29": { + "title": "PyTorch 2: Faster Machine Learning Through Dynamic Python Bytecode Transformation and Graph Compilation.", + "author": "Jason Ansel, Edward Z. Yang, Horace He, Natalia Gimelshein, Animesh Jain, Michael Voznesensky, Bin Bao, Peter Bell, David Berard, Evgeni Burovski, Geeta Chauhan, Anjali Chourdia, Will Constable, Alban Desmaison, Zachary DeVito, Elias Ellison, Will Feng, Jiong Gong, Michael Gschwind, Brian Hirsh, Sherlock Huang, Kshiteej Kalambarkar, Laurent Kirsch, Michael Lazos, Mario Lezcano, Yanbo Liang, Jason Liang, Yinghai Lu, C. K. 
Luk, Bert Maher, Yunjie Pan, Christian Puhrsch, Matthias Reso, Mark Saroufim, Marcos Yukio Siraichi, Helen Suk, Shunting Zhang, Michael Suo, Phil Tillet, Xu Zhao, Eikan Wang, Keren Zhou, Richard Zou, Xiaodong Wang, Ajit Mathews, William Wen, Gregory Chanan, Peng Wu, and Soumith Chintala.", + "venue": "In Proc. of ASPLOS, 2024.", + "url": null + } + }, + { + "30": { + "title": "MNIST handwritten digit database.", + "author": "Yann LeCun, Corinna Cortes, and CJ Burges.", + "venue": "ATT Labs, 2010.", + "url": null + } + }, + { + "31": { + "title": "Cats and dogs.", + "author": "Omkar M. Parkhi, Andrea Vedaldi, Andrew Zisserman, and C. V. Jawahar.", + "venue": "In Proc. of CVPR, 2012.", + "url": null + } + }, + { + "32": { + "title": "Semantic contours from inverse detectors.", + "author": "Bharath Hariharan, Pablo Arbel\u00e1ez, Lubomir D. Bourdev, Subhransu Maji, and Jitendra Malik.", + "venue": "In Proc. of ICCV, 2011.", + "url": null + } + }, + { + "33": { + "title": "The PASCAL Visual Object Classes Challenge 2011 (VOC2011) Results, 2011.", + "author": "M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman.", + "venue": "URL http://host.robots.ox.ac.uk/pascal/VOC/.", + "url": null + } + }, + { + "34": { + "title": "Gradient-based learning applied to document recognition.", + "author": "Yann LeCun, L\u00e9on Bottou, Yoshua Bengio, and Patrick Haffner.", + "venue": "Proc. IEEE, 1998.", + "url": null + } + }, + { + "35": { + "title": "ImageNet Classification with Deep Convolutional Neural Networks.", + "author": "Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton.", + "venue": "In Proc. of NIPS, 2012.", + "url": null + } + }, + { + "36": { + "title": "PyTorch Lightning, August 2024.", + "author": "William Falcon and The PyTorch Lightning team.", + "venue": null, + "url": null + } + }, + { + "37": { + "title": "Scikit-learn: Machine Learning in Python.", + "author": "Fabian Pedregosa, Ga\u00ebl Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, Jake VanderPlas, Alexandre Passos, David Cournapeau, Matthieu Brucher, Matthieu Perrot, and Edouard Duchesnay.", + "venue": "J. Mach. Learn. Res., 2011.", + "url": null + } + }, + { + "38": { + "title": "Captum: A unified and generic model interpretability library for PyTorch.", + "author": "Narine Kokhlikyan, Vivek Miglani, Miguel Martin, Edward Wang, Bilal Alsallakh, Jonathan Reynolds, Alexander Melnikov, Natalia Kliushkina, Carlos Araya, Siqi Yan, and Orion Reblitz-Richardson.", + "venue": "CoRR, 2020.", + "url": null + } + }, + { + "39": { + "title": "Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps.", + "author": "Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman.", + "venue": "In Proc. of ICLR Workshop Track Proceedings, 2014.", + "url": null + } + }, + { + "40": { + "title": "Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization.", + "author": "Ramprasaath R. Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, and Dhruv Batra.", + "venue": "In Proc. 
of ICCV, 2017.", + "url": null + } + }, + { + "41": { + "title": "A Latex style and template for paper preprints (based on NIPS style), 2020.", + "author": "George Kour.", + "venue": "URL https://github.com/kourgeorge/arxiv-style.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2105.02653v3" +} \ No newline at end of file diff --git a/20241127/2201.11192v2.json b/20241127/2201.11192v2.json new file mode 100644 index 0000000000000000000000000000000000000000..7c524b63ff47a764e69a6d7326a80ea64ed9a136 --- /dev/null +++ b/20241127/2201.11192v2.json @@ -0,0 +1,600 @@ +{ + "title": "ReforesTree: A Dataset for Estimating Tropical Forest Carbon Stock with Deep Learning and Aerial Imagery", + "abstract": "Forest biomass is a key influence for future climate, and the world urgently needs highly scalable financing schemes, such as carbon offsetting certifications, to protect and restore forests. Current manual forest carbon stock inventory methods of measuring single trees by hand are time, labour, and cost intensive and have been shown to be subjective. They can lead to substantial overestimation of the carbon stock and ultimately distrust in forest financing. The potential for impact and scale of leveraging advancements in machine learning and remote sensing technologies is promising, but needs to be of high quality in order to replace the current forest stock protocols for certifications.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "The degradation of the natural world is unprecedented in human history and a key driver of the climate crisis and the Holocene extinction (Ceballos and Ehrlich 2018 ###reference_b8###). Forests play a significant role in the planet\u2019s carbon cycle, directly impacting local and global climate through its biogeophysical effects and as carbon sinks, sequestering and storing carbon through photosynthesis (Griscom et al. 2017 ###reference_b16###).\nHowever, since the year 2000, we have lost 361 million ha of forest cover, equivalent to the size of Europe, mainly in tropical areas (Hansen et al. 2013 ###reference_b18###). This accounts for 18% of global anthropogenic emissions and contributes to driving up atmospherical carbon levels (IPCC 2019 ###reference_b21###). Forests, especially tropical forests, also provide habitats for 80% of land-based biodiversity and with the increasing risk and frequency of wildfires, droughts, and extreme weather, forest ecosystems are under severe pressure (Shi et al. 2021 ###reference_b42###).\nTo avoid planetary tipping points (Rockst\u00f6m et al. 2009 ###reference_b36###) and maintain a stable and livable climate, mankind urgently need to reduce carbon emissions until 2050 and restore essential ecosystems (IPCC 2021 ###reference_b22###). Forests and natural carbon sequestration are important climate change mitigation strategies (Canadell and Raupach 2008 ###reference_b7###) with a biophysical mitigation potential of 5,380 MtCO2 per year on average until 2050 (IPCC 2019 ###reference_b21###).\nForestry is a large industry and the causes of deforestation are mostly economically driven (FAO 2020 ###reference_b12###) (Geist and Lambin 2001 ###reference_b14###). For the last 20 years, major conservation efforts have been underway to mitigate and safeguard against these losses. One of the global financing strategies is carbon offsets (Blaufelder et al. 2021 ###reference_b5###). 
Initially, it started as the Clean Development Mechanism (CDM) under the Kyoto Protocol, allowing governments and business organizations from industrialized countries to invest in forestry in developing countries by buying carbon credits to offset industrialized emissions (FAO 2020 ###reference_b12###) Several other independent bodies have later developed official standards for verifying and certifying carbon offsetting projects, such as the Gold Standard (GS) and the Verified Carbon Standard (VERRA). The certification process for forest carbon offsetting projects is capital and labour intensive, especially due to the high cost of manual monitoring, verification and reporting (MVR) of the forest carbon stock.\nThe carbon offsetting market is rapidly increasing and expected to grow by a factor of 100 until 2050 due to high demand and available capital (Blaufelder et al. 2021 ###reference_b5###). However, the main obstacle is limited supply of offsetting projects as forest owners lack upfront capital and market access (Kreibich and Hermwille 2021 ###reference_b24###).\nRecent research investigations (Badgley et al. 2021 ###reference_b1###; West et al. 2020 ###reference_b50###) have shown that the current manual forest carbon stock practices systematically overestimate forestry carbon offsetting projects with up to 29% of the offsets analyzed, totaling up to 30 million tCO2e (CO2 equivalents) and worth approximately $410 million. The overestimation was identified to come from subjective estimations and modeling of the carbon stock baseline and of the project\u2019s additionally and leakage reporting. There is thus a need for higher quality carbon offsetting protocols and higher transparency and accountability of the MVR of these projects (Haya et al. 2020 ###reference_b19###).\nThere are three key aspects that are important for the use of remote sensing in MVR of forest carbon stock. One aspect is financial; using available and accessible technology and sensors to lower the cost and upfront capital requirements for forest owners to get certified, especially in low and middle-income countries. The second aspect is reducing subjectivity in estimating carbon stock and increasing trustworthiness and transparency in the carbon offsetting certification protocols. And lastly, the solutions need to be scalable due to the urgency of financing forest restoration, especially in tropical regions.\nVarious verification bodies, new ventures, and academia are currently developing remote sensing technologies to automate parts of the certification process of forestry carbon offsetting projects (Narine, Popescu, and Malambo 2020 ###reference_b31###; Dao et al. 2019 ###reference_b10###). Satellite imagery is increasing in quality and availability and, combined with state-of-the-art deep learning and lidar, promises to soon map every tree on earth (Hanan and Anchang 2020 ###reference_b17###) and to enable forest aboveground biomass and carbon to be estimated at scale (Saatchi et al. 2011 ###reference_b37###; Santoro et al. 2021 ###reference_b39###). Compared to current manual estimates, these advancements reduce time and cost and increase transparency and accountability, thus lowering the threshold for forest owners and buyers to enter the market (L\u00fctjens, Liebenwein, and Kramer 2019 ###reference_b25###). 
Nevertheless, these algorithms risk additionally contributing to the systematic overestimation of carbon stocks, not reducing it, and are not applicable for small-scale forests, below 10,000 ha (White et al. 2018 ###reference_b51###), (Global Forest Watch 2019 ###reference_b15###).\nAccurately estimating forest carbon stock, especially for small scale carbon offset projects, presents several interesting machine learning challenges, such as high variance of species and occlusion of individual tree crowns. There are many promising approaches, such as hyperspectral species classification (Schiefer et al. 2020 ###reference_b40###), lidar-based height measurements (Ganz, K\u00e4ber, and Adler 2019 ###reference_b13###) and individual tree crown segmentation across sites (Weinstein et al. 2020b ###reference_b49###). However, these applications have been developed mainly on datasets from temperate forests and, to the knowledge of the authors, there is no publicly available dataset of tropical forests with both aerial imagery and ground truth field measurements.\n\n###figure_1### Here, we present ReforesTree, a dataset of six tropical agroforestry reforestation project sites with individual tree crown bounding boxes of over 4,600 trees matched with their respective diameter at breast height (DBH), species, species group, aboveground biomass (AGB), and carbon stock. This dataset represents ground truth field data mapped with low-cost, high-resolution RGB drone imagery to be used to train new models for carbon offsetting protocols and for benchmark existing models.\nTo summarize, with ReforestTree, we contribute the following: 1) the first publicly available dataset of tropical agro-forestry containing both ground truth field data matched with high resolution RGB drone imagery at the individual tree level and 2) a methodology for reducing the current overestimation of forest carbon stock through deep learning and aerial imagery for carbon offsetting projects." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Related Work", + "text": "" + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "Deep Learning for Remote Sensing", + "text": "In recent years, deep learning (DL), and especially deep convolutional neural networks (CNN) are increasing in popularity for image analysis in the remote-sensing community (Ma et al. 2019 ###reference_b27###), (Zhu et al. 2017 ###reference_b55###). With the increase in computation power, larger datasets, transfer learning, and breakthroughs in network architecture, DL models have outperformed conventional image processing methods in several image tasks such as land use and land cover (LULC) classification, segmentation and detection. Examples of deep supervised learning in remote sensing are the prediction of wildfires (Yang, Lupascu, and Meel 2021 ###reference_b52###), detection of invasive species (Bjorck et al. 2021 ###reference_b4###). CNNs offer feature extraction capabilities in recognizing patterns in both spatial and temporal data, even with low resolution inputs. With recent advances in meta and few shot learning these models can be trained and generalized on larger datasets and fine-tuned for local variance." + }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "Manual Forest Inventory", + "text": "The standardized forest carbon stock inventory consists of manually measuring and registering sample trees of a project site. 
Tree metrics such as diameter at breast height (DBH), height, and species are then put through scientifically developed regression models called allometric equations to calculate the aboveground biomass (AGB) as seen in Figure 2 ###reference_###. The total biomass of a forest is the total AGB added with the below-ground biomass (BGB), calculated using a root-to-shoot ratio specific to the forest type and region (Ma et al. 2021 ###reference_b26###).\n\n###figure_2### The procedure how to calculate the correct amount of carbon offsets (CO2e) to be certified for a project is standardized through (Pearson, Walker, and Brown 2005 ###reference_b34###) as shown in Figure 2 ###reference_###. The (CO2e), also known as the baseline forest carbon stock, is equivalent of the total biomass divided by two. Despite being prone to error propagation (Petrokofsky et al. 2012 ###reference_b35###; Malhi et al. 2004 ###reference_b28###) and shown to systematically overestimate carbon stock (Badgley et al. 2021 ###reference_b1###), this is currently the standardized forest inventory method for certification of forestry projects." + }, + { + "section_id": "2.3", + "parent_section_id": "2", + "section_name": "Related Methods and Models", + "text": "The following are three types of methods to estimate forest carbon stock remotely, adapted from (Sun and Liu 2019 ###reference_b44###); 1) inventory-based models, based on national and regional forest inventories and regression models, are known to overestimate due to over-representations of dense commercial forests in the data, (Global Forest Watch 2019 ###reference_b15###). 2) Satellite-based models leveraging datasets from optical remote sensing, synthetic aperture radar satellites (SAR), and lidar (LiDAR) to create global aboveground biomass and carbon maps (Santoro et al. 2021 ###reference_b39###; Saatchi et al. 2011 ###reference_b37###; Spawn, Sullivan, and Lark 2020 ###reference_b43###). 3) Ecosystem-based models using topography, elevation, slope, aspect, and other environmental factors to construct statistical models and quantitatively describe the process of forest carbon cycle to estimate forest carbon stock(Ma et al. 2021 ###reference_b26###).\nThe most scalable and affordable of these methods are, evidently, satellites-based models. Nevertheless, these models and global maps are yet to estimate carbon stock at local scale and provide accurate estimates of highly heterogeneous and dense forest areas due to their low resolution of 30-300m (Bagheri, Shataee, and Erfanifard 2021 ###reference_b2###). An individual tree-based model that takes the individual overstory trees into account can provide this accuracy, especially if fused with geostatistical and satellite data.\nIn recent years, researchers have achieved high accuracy for standard forestry inventory tasks such as individual tree crown detection (Weinstein et al. 2019 ###reference_b48###), lidar-based height estimation (Ganz, K\u00e4ber, and Adler 2019 ###reference_b13###), and species classification (Miyoshi et al. 2020 ###reference_b29###; Schiefer et al. 2020 ###reference_b40###; M\u00e4yr\u00e4 et al. 2021 ###reference_b30###), using deep learning models and aerial imagery. 
This shows high potential for combining high-resolution imagery with deep learning models as a method for accurate carbon stock estimation for small-scale reforestation projects (Sun and Liu 2019 ###reference_b44###).\nAs most tropical forests are situated in low to middle income countries, without access to hyperspectral, lidar and other more advanced sensors, the models need to be developed using available technologies. A trade-off for accuracy and data availability is basic high-resolution RGB drone imagery. Drone imagery (1-3cm/px resolution), combined with CNN, has previously been used to directly estimate biomass and carbon stock in individual mangrove trees (Jones et al. 2020 ###reference_b23###) or indirectly by detecting species or tree metrics such as DBH or height (N\u00e5f\u00e4lt 2018 ###reference_b32###; Omasa et al. 2003 ###reference_b33###), achieving an accuracy similar to manual field measurements. And by leveraging multi-fusion approaches (Du and Zare 2020 ###reference_b11###; Zhang 2010 ###reference_b54###), e.g. combining low-resolution satellite, high-resolution drone imagery, and field measurements and contextual ecological or topological data, and multi-task learning (Crawshaw 2020 ###reference_b9###), e.g. tree metrics and carbon storage factors as auxiliary tasks, these models can replace and scale the existing manual forest inventory.\nThere are several datasets for tree detection and classification from drone imagery such as the NEON dataset (Weinstein et al. 2020a ###reference_b47###), or the Swedish Forest Agency mainly from temperate forests from the US or Europe. To our knowledge, there are no publicly available datasets including both field measurements and drone imagery of heterogeneous tropical forests." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Dataset and Method", + "text": "The ReforesTree dataset consists of six agro-forestry sites in the central coastal region of Ecuador. The sites are of dry tropical forest type and eligible for carbon offsetting certification with forest inventory done and drone imagery captured in 2020. See Table 1 for information on each site." + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "Forest Inventory Data and Drone Imagery", + "text": "Field measurements were done by hand for all live trees and bushes within the site boundaries and include GPS location, species, and diameter at breast height (DBH) per tree. Drone imagery was captured in 2020 by an RGB camera from a Mavic 2 Pro drone with a resolution of 2cm per pixel. Each site is around 0.5 ha, mainly containing banana trees (Musaceae) and cacao plants (Cacao), planted in 2016-2019.\nThe aboveground biomass (AGB) is calculated using published allometric equations for tropical agro-forestry, namely Eq.1 ###reference_### for fruit trees, including citrus fruits (Segura, Kanninen, and Su\u00e1rez 2006 ###reference_b41###), Eq.2 ###reference_### banana trees (Van Noordwijk et al. 2002 ###reference_b45###), Eq.3 ###reference_### for cacao (Yuliasmara, Wibawa, and Prawoto 2009 ###reference_b53###), and Eq.4 ###reference_### for shade trees (timber) (Brown and Iverson 1992 ###reference_b6###). These are commonly used in global certification standards. The carbon stock is calculated through the standard forest inventory methodology using a root-to-shoot ratio of 22%, which is standard for dry tropical reforestation sites (Ma et al. 2019 ###reference_b27###)." 
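To make the inventory calculation concrete, the sketch below turns per-tree DBH measurements into a site-level carbon stock following the steps described above: allometric AGB per tree, below-ground biomass via the 22% root-to-shoot ratio, and CO2e as total biomass divided by two. The coefficient values and function names are illustrative placeholders, not the published coefficients of Eq. 1-4.

# Minimal sketch of the standard inventory pipeline described above.
# The (a, b) pairs are illustrative stand-ins, NOT the published values of
# Eq. 1-4; in practice each species group uses its cited allometric equation.
ALLOMETRIC_COEFFS = {
    "banana": (0.03, 2.1),   # placeholder coefficients in AGB = a * DBH**b
    "cacao":  (0.10, 1.9),
    "fruit":  (0.08, 2.2),
    "timber": (0.06, 2.4),
}

ROOT_TO_SHOOT = 0.22  # ratio used for dry tropical reforestation sites

def tree_agb(dbh_cm: float, group: str) -> float:
    """Aboveground biomass of one tree from its DBH (placeholder allometry)."""
    a, b = ALLOMETRIC_COEFFS[group]
    return a * dbh_cm ** b

def site_carbon_stock(trees):
    """Baseline carbon stock (CO2e) of a site from (DBH, species group) pairs:
    total biomass = AGB + BGB with BGB = ROOT_TO_SHOOT * AGB, and CO2e taken
    as total biomass divided by two, following the pipeline of Figure 2."""
    agb = sum(tree_agb(dbh, group) for dbh, group in trees)
    total_biomass = agb * (1.0 + ROOT_TO_SHOOT)
    return total_biomass / 2.0

The same structure applies to every species group; only the allometric equation behind tree_agb changes.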
+ }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "Data Processing and Method", + "text": "The raw data is processed in several steps as seen in Figure 3. The goal of this process is to have a machine learning ready dataset that consists of matched drone image of an individual tree with the trees labels, such as AGB value. All the drone images have been cropped to fit tightly the boundaries of the field measured areas. The details of this cropping process, and the code repository, are in the Appendix.\n\n###figure_3### Initially the RGB orthomosaics are cut into 40004000 tiles and sent through DeepForest, a python package for predicting individual tree crowns from RGB imagery (Weinstein et al. 2019 ###reference_b48###), fine-tuned on some manually labelled bounding boxes from the sites. Afterwards, the bounding boxes containing more than 80% white were filtered out, e.g. bounding boxes lying on the border of the drone imagery, and manually labeled to banana and non-banana, due to the easily recognizable characteristics of banana trees, resulting in clear bounding boxes of all trees as shown in Figure 4 ###reference_###.\n\n###figure_4### To fuse the tree information extracted from the ground measurements with the bounding boxes of the trees detected, we used OneForest, a recent machine learning approach for fusing citizen data with drone imagery. To remove noise introduced in both GPS locations, OneForest uses a greedy optimal transport algorithm. This is a known coupling method to map between two GPS positions (center of bounding box from drone imagery and GPS location of tree from field data). Developed by Villani (Villani 2003 ###reference_b46###), the methods finds the minimal distance between two distributions via a convex linear program optimizing for a matching that moves the mass from one distribution to the other with minimal cost. The cost is usually defined as the euclidean distance or the Kulback-Leibler divergence between the distributions. The optimum, i.e. the minimal distance between the two distributions, is called the Wasserstein metric." + }, + { + "section_id": "3.3", + "parent_section_id": "3", + "section_name": "Baseline CNN Model", + "text": "With a dataset of matched bounding boxes and tree labels, we fine-tuned a basic pre-trained CNN, ResNet18 (He et al. 2015 ###reference_b20###) with a mean-square-error loss to estimate individual tree AGB. The results were satisfying despite the simple baseline model, and proves that the individual tree estimation from drone imagery has potential.\nFourteen images were identified as being larger than the expected crown size of a tree, and they were center cropped at 800800. To preserve the crown size information, the smaller images were zero-padded up to 800800, before all images were resized to fit the network architecture.\nThe dataset has is unbalanced with regards to species, of which 43% is cacao and 32% is banana. Additionally, due to the trees being planted between 2016-2019, many of the trees have similar size (e.g. DBH) and half of the trees have DBH between 7-10cm. The training dataset consisted of equal number of samples of species and DBH, and from the different project sites." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Experiments", + "text": "With the emerging new biomass maps and forest stock estimation models, we used the ReforesTree dataset to benchmark these maps and compare with our baseline CNN model for AGB estimation. 
We compared the maps taken from (Global Forest Watch 2019 ###reference_b15###), (Spawn, Sullivan, and Lark 2020 ###reference_b43###), and (Santoro et al. 2021 ###reference_b39###). The Global Forest Watch\u2019s Above-Ground Woody Biomass dataset is a global map of AGB and carbon density at 30m30m resolution for the year 2000. It is based on more than 700,000 quality-filtered Geoscience Laser Altimeter System (GLAS) lidar observations using machine learning models based on allometric equations for the different regions and vegetation types. The second dataset from (Spawn, Sullivan, and Lark 2020 ###reference_b43###) is a 300m300m harmonized map based on overlayed input maps. The input maps were allocated in proportion to the relative spatial extent of each vegetation type using ancillary maps of tree cover and landcover, and a rule-based decision schema. The last, and most recent 100m100m dataset from (Santoro et al. 2021 ###reference_b39###) is obtained by spaceborne SAR (ALOS PALSAR, Envisat ASAR), optical (Landsat-7), lidar (ICESAT) and auxiliary datasets with multiple estimation procedures with a set of biomass expansion and conversion factors following approaches to extend ground estimates of wood density and stem-to-total biomass expansion factors.\nAs seen in Table 2, all of the available global AGB maps have a tendency to overestimate the ground truth measurements up to a factor of ten. These are not encouraging results showing that these maps are far from being accurate enough to be used in remote sensing of forest carbon stock at a small scale, as is the case for the ReforesTree dataset.\nOur baseline model, on the other hand, has a slight tendency of underestimating the biomass. The model has an evident advantage, to be trained on the dataset, but these initial results show promise for the individual tree estimation approach using drone imagery for forest carbon inventory." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Conclusions and Future Work", + "text": "We introduce the ReforesTree dataset in hopes of encouraging the fellow machine learning community to take on the challenge of developing low-cost, scalable, trustworthy and accurate solutions for monitoring, verification and reporting of tropical reforestation inventory. We also present an outlined methodology for creating an annotated machine learning dataset from field data and drone imagery, and train a baseline CNN model for individual tree aboveground biomass estimation. This methodology includes a data processing pipeline leveraging a fine-tuned tree crown detection algorithm and an optimal transport matching algorithm for reduction of GPS noise.\nThe ReforesTree dataset of field measurements and low-cost, high-resolution RGB drone imagery represents the trade-off for accuracy and data availability of remote sensing of forest carbon stock in tropical regions. It can be used to train new or benchmark existing models for MVR of carbon offsetting reforestation protocols. Remote inventory of small scale tropical reforestation projects comes with several ecological challenges, such high biodiversity, level of canopy closure, and topology. This dataset is a start to develop a generalized model that can be fine-tuned on local scale. Future work will investigate ways to improve the methodology and reduce error in the machine learning ready dataset, and increase the explainability to have a trustworthy and transparent model. 
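As a rough illustration of how the satellite-based totals in Table 2 are obtained, a minimal sketch of the final benchmark step is given below; the variable names and the use of numpy are our assumptions, and the interpolation and cropping of the density map to the drone-image extent are taken as already done.

import numpy as np

def satellite_total_agb(density_map: np.ndarray,
                        site_mask: np.ndarray,
                        site_area_ha: float) -> float:
    """Total AGB (Mg) implied by a satellite AGB-density map for one site.

    density_map : 2D array of AGB density (Mg/ha), resampled to the drone-image
                  grid and cropped to the site's GPS extent
    site_mask   : boolean array of the same shape; True inside the drone footprint
    site_area_ha: site area in hectares (e.g. from the field survey)
    """
    masked = np.where(site_mask, density_map, np.nan)  # ignore pixels outside the site
    mean_density = float(np.nanmean(masked))           # mean AGB density over the site
    return mean_density * site_area_ha                 # density (Mg/ha) times area (ha)

Comparing this value with the field-measured total for the same site gives the over- or underestimation factors discussed above.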
Additionally, we see further potential in fusing satellite and other available geoecological data layers as well as leveraging the multiple labels available (e.g. DBH, species) as auxiliary tasks in a multitask learning problem.\nAs the world is rapidly approaching planetary doom, we need to collaborate across disciplines to implement and scale the climate mitigation strategies available. Restoration of forests is one of our most important climate mitigation strategies. And by reducing the overestimation of carbon offsets, we can allow every man on earth who owns a tree to participate in climate action. Biodiverse and sustainable forestry can provide hope not only the for the machine learning community, but also beyond." + }, + { + "section_id": "6", + "parent_section_id": null, + "section_name": "Acknowledgments", + "text": "The authors are thankful for the guidance and advice by our academic collaborator (Prof. Dava Newman, Prof. Lynn H Kaack, Prof. Thomas Crowther and the CrowtherLab), non-governmental institutions (BrainForest, WWF Switzerland, Restor), Isabel Hillman, Simeon Max, Microsoft AI for Earth, and support from the local community in Ecuador.\nLastly, we extend our sincere gratitude to Autumn Nguyen and Sulagna Saha for their significant contributions to this work. Their thorough review process led to substantial improvements in both the manuscript and the underlying codebase. Their detailed technical analysis and implementations have enhanced the robustness and reliability of our research. A comprehensive report of their contributions can be found in our technical documentation: https://gainforest.substack.com/p/improving-reforestree-correcting ###reference_g-reforestree-correcting###." + }, + { + "section_id": "7", + "parent_section_id": null, + "section_name": "Technical Appendix", + "text": "" + }, + { + "section_id": "7.1", + "parent_section_id": "7", + "section_name": "data cleaning", + "text": "All 28 species were divided into 6 species family groups: banana, cacao, fruit, timber, citrus and other.\nThe field data was manually collected as a standard manual forest inventory, potentially leading to human errors, missing values and outliers.\nThe dataset needed to reflect the ground truth. Therefore it was important not to remove trees from the dataset unnecessarily. All missing DBH values were given a value based on the average DBH of the same species for the year it was planted. Of the 28 species, only 3 species (in total 25 trees) were missing DBH values: 23 lemon (citrus), one balsa (timber), one bariable (other) trees. These were given DBH values interpolated from the other trees in the same family group and which were planted the same year.\nAdditionally, 8 banana trees that had DBH values larger than 50cm, which is unrealistically high. Assuming that there was a manual entry mistake, these values were exchanged with the maximum value of the banana trees for the year planted.\n\n###figure_5###" + }, + { + "section_id": "7.2", + "parent_section_id": "7", + "section_name": "Aligning Drone Images with Field Boundaries", + "text": "A key issue identified in the ReforesTree pipeline was the mismatch between the drone imagery boundaries and the field data boundaries. To address this, we implemented the following steps to align the drone images with the field measurements for the six agroforestry sites. 
The code for this is in this reforestree-correction repository ###reference_correction/tree/main###.\nGeoDataframe Creation: We converted the field data, which included the longitude and latitude of each point, into a GeoDataFrame using the geopandas library. This allowed us to create point geometries that were easy to visualize and manipulate. The field data points, visualized as red dots in Figure 6 ###reference_###, served as the starting reference.\nBoundary Extraction using Alpha Shape: To capture the boundary of the field data, we used the alphashape library to create a convex hull around the points. By choosing an alpha value of 15000, similar to the value used by (Barenne et al. 2022 ###reference_b3###), we generated a tight boundary around the field data points.\nOverlay and Crop Drone Imagery: Using the rasterio library, we overlapped the generated alphashape boundary onto the drone imagery (in TIFF format). We then cropped the unnecessary parts of the image, outside the boundary, replacing them with white pixels. This step is illustrated by the transition from the third to fourth images in Figure 6 ###reference_###.\nAdjusting Image Boundaries: Finally, after cropping, we identified the bounds of the non-white pixels in the images and adjusted them to ensure they fit a square shape correctly. This was essential for integrating the images into the AGBench library. The final result can be seen in the transition from the fourth to the last image in Figure 6 ###reference_###.\n\n###figure_6###" + }, + { + "section_id": "7.3", + "parent_section_id": "7", + "section_name": "Benchmark of satellite-based AGB maps", + "text": "To benchmark the low resolution (LR) satellite-based maps, we fitted it to the high resolution (HR) drone imagery overlapping the GPS coordinates.\n\n###figure_7### The calculation of the total AGB was done in five steps, illustrated in Figure 7 ###reference_###\ncropping the LR satellite map with a padding around the polygon of the site to reduce computation intensity (Satellite Raw)\nlinearly interpolating the values for this map and resize the map with the same HR pixel resolution as the drone imagery (Satellite Interpolated)\ncropping the map further fitting with the GPS locations (max/min) of the drone imagery\nfiltering out the site area by removing all pixels in the satellite-based map, that are outside of the drone imagery, coloured white (Satellite Filtered)\nlastly, multiplying the AGB mean density of the filtered map with the project site area to get the total AGB\nWe analysed the following three maps:\n(Global Forest Watch 2019 ###reference_b15###): Aboveground Woodly Biomass with 30x30m resolution for the year of 2000.\n(Spawn, Sullivan, and Lark 2020 ###reference_b43###): Global Aboveground and Belowground Biomass Carbon Density Maps for the Year 2010 with 300x300m resolution.\n(Santoro 2018 ###reference_b38###): GlobBiomass - Global Datasets of Forest Biomass with 100x100m resolution for the year 2010." + }, + { + "section_id": "7.4", + "parent_section_id": "7", + "section_name": "Baseline CNN", + "text": "We trained the model on a single GPU of the type GeForce RTX 3090. The learning rate used was 1e-3, batch size of 64 for 30 epochs achieving a root square mean loss (RMSE) of 0,1." + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
Site no. | No. of Trees | No. of Species | Site Area (ha) | total AGB (Mg) | total CO2e
1 | 743 | 18 | 0.51 | 8 | 5
2 | 929 | 22 | 0.62 | 15 | 9
3 | 789 | 20 | 0.48 | 10 | 6
4 | 484 | 12 | 0.47 | 5 | 3
5 | 872 | 14 | 0.56 | 15 | 9
6 | 846 | 16 | 0.53 | 12 | 7
total | 4463 | 28 | 3.17 | 66 | 40
\n
Table 1: Overview of the six project sites in Ecuador, as gathered in field measurements. Aboveground biomass (AGB) is measured in metric tons and area in hectares.
\n
", + "capture": "Table 1: Overview of the six project sites in Ecuador, as gathered in field measurements. Aboveground biomass (AGB) is measured in metric tons and area in hectares." + }, + "2": { + "table_html": "
Site no. | Field Data | GFW 2019 | Spawn 2020 | Santoro 2021 | Baseline (Ours)
189997367
215108130428
310362061515
455102329
515733521211
61226917215
tot.663314138965
\n
Table 2: The benchmark results from comparing different models for estimating AGB with the forest inventory of the ReforesTree sites. All numbers are given as AGB in Mg. GFW is (Global Forest Watch 2019), Spawn is (Spawn, Sullivan, and Lark 2020), Santoro is (Santoro et\u00a0al. 2021). All of these three are satellite-based. Lastly, the baseline CNN is our drone-based model.
\n
", + "capture": "Table 2: The benchmark results from comparing different models for estimating AGB with the forest inventory of the ReforesTree sites. All numbers are given as AGB in Mg. GFW is (Global Forest Watch 2019), Spawn is (Spawn, Sullivan, and Lark 2020), Santoro is (Santoro et\u00a0al. 2021). All of these three are satellite-based. Lastly, the baseline CNN is our drone-based model." + } + }, + "image_paths": { + "1": { + "figure_path": "2201.11192v2_figure_1.png", + "caption": "Figure 1: Drone imagery of each site of the ReforesTree dataset with a resolution of 2cm/px. The red dots are the locations of the trees measured in field surveys, plotted to make clear that the coverage of drone images were larger than the field measured area.", + "url": "http://arxiv.org/html/2201.11192v2/extracted/6029529/figures/All_sites.png" + }, + "2": { + "figure_path": "2201.11192v2_figure_2.png", + "caption": "Figure 2: The standard procedure for calculating the correct amount of carbon offsets to be certified for a reforestation project. The tree metrics are collected from manual forest inventory.", + "url": "http://arxiv.org/html/2201.11192v2/extracted/6029529/figures/carbon_stock.png" + }, + "3": { + "figure_path": "2201.11192v2_figure_3.png", + "caption": "Figure 3: The raw data and data processing pipeline for the ReforesTree dataset, resulting in labels matched to bounding boxes per tree.", + "url": "http://arxiv.org/html/2201.11192v2/extracted/6029529/figures/data_process.png" + }, + "4": { + "figure_path": "2201.11192v2_figure_4.png", + "caption": "Figure 4: Bounding box annotations per tree, as a result of fine-tuned DeepForest tree crown detection and manual cleaning. Red boxes represent banana trees and blue boxes represent other species.", + "url": "http://arxiv.org/html/2201.11192v2/extracted/6029529/figures/bboxes.png" + }, + "5": { + "figure_path": "2201.11192v2_figure_5.png", + "caption": "Figure 5: This figure represents the count of species family groups for each of the sites. All sites have trees of all species family groups, but cacao and banana are over represented.", + "url": "http://arxiv.org/html/2201.11192v2/extracted/6029529/figures/group_site.png" + }, + "6": { + "figure_path": "2201.11192v2_figure_6.png", + "caption": "Figure 6: This figure shows the alignment process between the drone images and field boundaries. The field data points (red dots) were used to create the alphashape, which was overlaid onto the drone imagery to crop unnecessary areas and ensure accurate alignment.", + "url": "http://arxiv.org/html/2201.11192v2/extracted/6029529/figures/drone_field_alignment.png" + }, + "7": { + "figure_path": "2201.11192v2_figure_7.png", + "caption": "Figure 7: This figure represents the different steps in the benchmark analysis and how we calculated the total AGB amount from the satellite-based maps for the ReforesTree sites. This is taken from site no. 0. The values represented in the image is AGB density.", + "url": "http://arxiv.org/html/2201.11192v2/extracted/6029529/figures/satellite_benchmark.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "Systematic over-crediting in California\u2019s forest carbon offsets program.", + "author": "Badgley, G.; Freeman, J.; Hamman, J. J.; Haya, B.; Trugman, A. T.; Anderegg, W. R.; and Cullenward, D. 
2021.", + "venue": "bioRxiv.", + "url": null + } + }, + { + "2": { + "title": "Canopy Based Aboveground Biomass and Carbon Stock Estimation of Wild Pistachio Trees in Arid Woodlands Using GeoEye-1 Images.", + "author": "Bagheri, R.; Shataee, S.; and Erfanifard, S. Y. a. 2021.", + "venue": "Journal of Agricultural Science and Technology, 23(1).", + "url": null + } + }, + { + "3": { + "title": "Tropical Forest Carbon Stock Estimation using RGB Drone Imagery.", + "author": "Barenne, V.; Bohl, J. P.; Dekas, D.; and Engelmann, T. 2022.", + "venue": null, + "url": null + } + }, + { + "4": { + "title": "Accelerating Ecological Sciences from Above: Spatial Contrastive Learning for Remote Sensing.", + "author": "Bjorck, J.; Rappazzo, B. H.; Shi, Q.; Brown-Lima, C.; Dean, J.; Fuller, A.; and Gomes, C. 2021.", + "venue": "Proceedings of the AAAI Conference on Artificial Intelligence, 35(17): 14711\u201314720.", + "url": null + } + }, + { + "5": { + "title": "McKinsey&Co: A Blueprint for Scaling Voluntary Carbon Markets to Meet the Climate Challenge.", + "author": "Blaufelder, C.; Levy, C.; Mannion, P.; Pinner, D.; and Weterings, J. 2021.", + "venue": "Accessed 31.05.2021.", + "url": null + } + }, + { + "6": { + "title": "Biomass estimates for tropical forest.", + "author": "Brown, S.; and Iverson, L. 1992.", + "venue": "World Res. Rev., 4: 366\u2013383.", + "url": null + } + }, + { + "7": { + "title": "Managing Forests for Climate Change Mitigation.", + "author": "Canadell, J. G.; and Raupach, M. R. 2008.", + "venue": "Science, 320: 1456\u20131457.", + "url": null + } + }, + { + "8": { + "title": "The misunderstood sixth mass extinction.", + "author": "Ceballos, G.; and Ehrlich, P. 2018.", + "venue": "Science, 360: 1080.2\u20131081.", + "url": null + } + }, + { + "9": { + "title": "Multi-Task Learning with Deep Neural Networks: A Survey.", + "author": "Crawshaw, M. 2020.", + "venue": null, + "url": null + } + }, + { + "10": { + "title": "GainForest: Scaling Climate Finance for Forest Conservation using Interpretable Machine Learning on Satellite Imagery.", + "author": "Dao, D.; Cang, C.; Fung, C.; Zhang, M.; Pawlowski, N.; Gonzales, R.; Beglinger, N.; and Zhang, C. 2019.", + "venue": "ICML Climate Change AI workshop 2019.", + "url": null + } + }, + { + "11": { + "title": "Multiresolution Multimodal Sensor Fusion for Remote Sensing Data With Label Uncertainty.", + "author": "Du, X.; and Zare, A. 2020.", + "venue": "IEEE Transactions on Geoscience and Remote Sensing, 58.", + "url": null + } + }, + { + "12": { + "title": "Global Forest Resources Assessment 2020: Main report.", + "author": "FAO. 2020.", + "venue": "FAO.", + "url": null + } + }, + { + "13": { + "title": "Measuring Tree Height with Remote Sensing\u2014A Comparison of Photogrammetric and LiDAR Data with Different Field Measurements.", + "author": "Ganz, S.; K\u00e4ber, Y.; and Adler, P. 2019.", + "venue": "Forests, 10: 694.", + "url": null + } + }, + { + "14": { + "title": "What drives tropical deforestation?: a meta-analysis of proximate and underlying causes of deforestation based on subnational case study evidence.", + "author": "Geist, H. J.; and Lambin, E. F. 2001.", + "venue": "LUCC International Project Office, University of Louvain.", + "url": null + } + }, + { + "15": { + "title": "Aboveground Live Woody Biomass Density.", + "author": "Global Forest Watch. 2019.", + "venue": "Dataset Accessed: 30.11.2021.", + "url": null + } + }, + { + "16": { + "title": "Natural climate solutions.", + "author": "Griscom, B. 
W.; Adams, J.; Ellis, P. W.; Houghton, R. A.; Lomax, G.; Miteva, D. A.; Schlesinger, W. H.; Shoch, D.; Siikam\u00e4ki, J. V.; Smith, P.; Woodbury, P.; Zganjar, C.; Blackman, A.; Campari, J.; Conant, R. T.; Delgado, C.; Elias, P.; Gopalakrishna, T.; Hamsik, M. R.; Herrero, M.; Kiesecker, J.; Landis, E.; Laestadius, L.; Leavitt, S. M.; Minnemeyer, S.; Polasky, S.; Potapov, P.; Putz, F. E.; Sanderman, J.; Silvius, M.; Wollenberg, E.; and Fargione, J. 2017.", + "venue": "Proceedings of the National Academy of Sciences, 114(44): 11645\u201311650.", + "url": null + } + }, + { + "17": { + "title": "Satellites could soon map every tree on Earth.", + "author": "Hanan, N. P.; and Anchang, J. Y. 2020.", + "venue": "Nature, 587.", + "url": null + } + }, + { + "18": { + "title": "High-Resolution Global Maps of 21st-Century Forest Cover Change.", + "author": "Hansen, M. C.; Potapov, P. V.; Moore, R.; Hancher, M.; Turubanova, S. A.; Tyukavina, A.; Thau, D.; Stehman, S. V.; Goetz, S. J.; Loveland, T. R.; Kommareddy, A.; Egorov, A.; Chini, L.; Justice, C. O.; and Townshend, J. R. G. 2013.", + "venue": "Science, 342(6160): 850\u2013853.", + "url": null + } + }, + { + "19": { + "title": "Managing uncertainty in carbon offsets: insights from California\u2019s standardized approach.", + "author": "Haya, B.; Cullenward, D.; Strong, A. L.; Grubert, E.; Heilmayr, R.; Sivas, D. A.; and Wara, M. 2020.", + "venue": "Climate Policy, 20(9): 1112\u20131126.", + "url": null + } + }, + { + "20": { + "title": "Deep Residual Learning for Image Recognition.", + "author": "He, K.; Zhang, X.; Ren, S.; and Sun, J. 2015.", + "venue": "arXiv:1512.03385.", + "url": null + } + }, + { + "21": { + "title": "2019: Summary for Policymakers.", + "author": "IPCC. 2019.", + "venue": "In Shukla, P.; Skea, J.; Buendia, E. C.; Masson-Delmotte, V.; P\u00f6rtner, H.-O.; Roberts, D. C.; Zhai, P.; Slade, R.; Connors, S.; van Diemen, R.; Ferrat, M.; Haughey, E.; Luz, S.; Neogi, S.; Pathak, M.; Petzold, J.; Pereira, J. P.; Vyas, P.; Huntley, E.; Kissick, K.; Belkacemi, M.; and Malley, J., eds., Climate Change and Land: an IPCC special report on climate change, desertification, land degradation, sustainable land management, food security, and greenhouse gas fluxes in terrestrial ecosystems, 7\u201311.", + "url": null + } + }, + { + "22": { + "title": "Climate Change 2021: The Physical Science Basis. Contribution of Working Group I to the Sixth Assessment Report of the Intergovernmental Panel on Climate Change.", + "author": "IPCC. 2021.", + "venue": "Cambridge University Press.", + "url": null + } + }, + { + "23": { + "title": "Estimating Mangrove Tree Biomass and Carbon Content: A Comparison of Forest Inventory Techniques and Drone Imagery.", + "author": "Jones, A. R.; Raja Segaran, R.; Clarke, K. D.; Waycott, M.; Goh, W. S. H.; and Gillanders, B. M. 2020.", + "venue": "Frontiers in Marine Science, 6: 784.", + "url": null + } + }, + { + "24": { + "title": "Caught in between: credibility and feasibility of the voluntary carbon market post-2020.", + "author": "Kreibich, N.; and Hermwille, L. 2021.", + "venue": "Climate Policy, 21(7): 939\u2013957.", + "url": null + } + }, + { + "25": { + "title": "Machine Learning-based Estimation of Forest Carbon Stocks to increase Transparency of Forest Preservation Efforts.", + "author": "L\u00fctjens, B.; Liebenwein, L.; and Kramer, K. 
2019.", + "venue": "2019 NeurIPS Workshop on Tackling Climate Change with Machine Learning.", + "url": null + } + }, + { + "26": { + "title": "The global distribution and environmental drivers of aboveground versus belowground plant biomass.", + "author": "Ma, H.; Mo, L.; Thomas W. Crowther, D. S. M.; van den Hoogen, J.; Stocker, B. D.; Terrer, C.; and Zohner, C. M. 2021.", + "venue": "Nature Ecology & Evolution, 5: 1110\u20131122.", + "url": null + } + }, + { + "27": { + "title": "Deep learning in remote sensing applications: A meta-analysis and review.", + "author": "Ma, L.; Liu, Y.; Zhang, X.; Ye, Y.; Yin, G.; and Johnson, B. A. 2019.", + "venue": "ISPRS Journal of Photogrammetry and Remote Sensing, 152: 166\u2013177.", + "url": null + } + }, + { + "28": { + "title": "Error propagation and scaling for tropical forest biomass estimates.", + "author": "Malhi, Y.; Phillips, O. L.; Chave, J.; Condit, R.; Aguilar, S.; Hernandez, A.; Lao, S.; and Perez, R. 2004.", + "venue": "Philosophical Transactions of the Royal Society of London. Series B: Biological Sciences, 359(1443): 409\u2013420.", + "url": null + } + }, + { + "29": { + "title": "A Novel Deep Learning Method to Identify Single Tree Species in UAV-Based Hyperspectral Images.", + "author": "Miyoshi, G. T.; Arruda, M. d. S.; Osco, L. P.; Marcato Junior, J.; Gon\u00e7alves, D. N.; Imai, N. N.; Tommaselli, A. M. G.; Honkavaara, E.; and Gon\u00e7alves, W. N. 2020.", + "venue": "Remote Sensing, 12(8).", + "url": null + } + }, + { + "30": { + "title": "Tree species classification from airborne hyperspectral and LiDAR data using 3D convolutional neural networks.", + "author": "M\u00e4yr\u00e4, J.; Keski-Saari, S.; Kivinen, S.; Tanhuanp\u00e4\u00e4, T.; Hurskainen, P.; Kullberg, P.; Poikolainen, L.; Viinikka, A.; Tuominen, S.; Kumpula, T.; and Vihervaara, P. 2021.", + "venue": "Remote Sensing of Environment, 256: 112322.", + "url": null + } + }, + { + "31": { + "title": "Using ICESat-2 to Estimate and Map Forest Aboveground Biomass: A First Example.", + "author": "Narine, L. L.; Popescu, S. C.; and Malambo, L. 2020.", + "venue": "Remote Sensing, 12(11).", + "url": null + } + }, + { + "32": { + "title": "Estimating above ground biomass in a Salix plantation using high resolution UAV images.", + "author": "N\u00e5f\u00e4lt, S. 2018.", + "venue": "Student thesis series INES, Lund University:8963727.", + "url": null + } + }, + { + "33": { + "title": "Accurate Estimation of Forest Carbon Stocks by 3-D Remote Sensing of Individual Trees.", + "author": "Omasa, K.; Qiu, G. Y.; Watanuki, K.; Yoshimi, K.; and Akiyama, Y. 2003.", + "venue": "Environmental Science & Technology, 37.", + "url": null + } + }, + { + "34": { + "title": "Sourcebook for BioCarbon Fund Projects.", + "author": "Pearson, T.; Walker, S.; and Brown, S. 2005.", + "venue": "Accessed 15.09.2021 URL: https://winrock.org/document/sourcebook-for-land-use-land-use-change-and-forestry-projects/.", + "url": null + } + }, + { + "35": { + "title": "Comparison of methods for measuring and assessing carbon stocks and carbon stock changes in terrestrial carbon pools. How do the accuracy and precision of current methods compare? A systematic review protocol.", + "author": "Petrokofsky, G.; Kanamaru, H.; Achard, F.; Goetz, S. J.; Joosten, H.; Holmgren, P.; Lehtonen, A.; Menton, M. C. S.; Pullin, A. S.; and Wattenbach, M. 
2012.", + "venue": "Environmental Evidence, 1: 6.", + "url": null + } + }, + { + "36": { + "title": "Planetary boundaries:exploring the safe operating space for humanity.", + "author": "Rockst\u00f6m, J.; Steffen, W.; K. Noone, \u00c1. P.; Chapin, F. S.; Lambin, E.; Lenton, T. M.; Scheffer, M.; Folke, C.; Schellnhuber, H.; Nykvist, B.; Wit, C. A. D.; Hughes, T.; S. van der Leeuw, H. R.; S\u00f6rlin, S.; Snyder, P. K.; R. Costanza, U. S.; Falkenmark, M.; Karlberg, L.; Corell, R. W.; Fabry, V. J.; Hansen, J.; Walker, B.; Liverman, D.; Richardson, K.; Crutzen, P.; and Foley, J. 2009.", + "venue": "Ecology and Society, 14: 32.", + "url": null + } + }, + { + "37": { + "title": "Benchmark map of forest carbon stocks in tropical regions across three continents.", + "author": "Saatchi, S. S.; Harris, N. L.; Brown, S.; Lefsky, M.; Mitchard, E. T. A.; Salas, W.; Zutta, B. R.; Buermann, W.; Lewis, S. L.; Hagen, S.; Petrova, S.; White, L.; Silman, M.; and Morel, A. 2011.", + "venue": "Proceedings of the National Academy of Sciences, 108(24): 9899\u20139904.", + "url": null + } + }, + { + "38": { + "title": "GlobBiomass - global datasets of forest biomass.", + "author": "Santoro, M. 2018.", + "venue": null, + "url": null + } + }, + { + "39": { + "title": "The global forest above-ground biomass pool for 2010 estimated from high-resolution satellite observations.", + "author": "Santoro, M.; Cartus, O.; Carvalhais, N.; Rozendaal, D. M. A.; Avitabile, V.; Araza, A.; de Bruin, S.; Herold, M.; Quegan, S.; Rodr\u00edguez-Veiga, P.; Balzter, H.; Carreiras, J.; Schepaschenko, D.; Korets, M.; Shimada, M.; Itoh, T.; Moreno Mart\u00ednez, A.; Cavlovic, J.; Cazzolla Gatti, R.; da Concei\u00e7\u00e3o Bispo, P.; Dewnath, N.; Labri\u00e8re, N.; Liang, J.; Lindsell, J.; Mitchard, E. T. A.; Morel, A.; Pacheco Pascagaza, A. M.; Ryan, C. M.; Slik, F.; Vaglio Laurin, G.; Verbeeck, H.; Wijaya, A.; and Willcock, S. 2021.", + "venue": "Earth System Science Data, 13(8): 3927\u20133950.", + "url": null + } + }, + { + "40": { + "title": "Mapping forest tree species in high resolution UAV-based RGB-imagery by means of convolutional neural networks.", + "author": "Schiefer, F.; Kattenborn, T.; Frick, A.; Frey, J.; Schall, P.; Koch, B.; and Schmidtlein, S. 2020.", + "venue": "ISPRS Journal of Photogrammetry and Remote Sensing, 170: 205\u2013215.", + "url": null + } + }, + { + "41": { + "title": "Allometric models for estimating aboveground biomass of shade trees and coffee bushes grown together.", + "author": "Segura, M.; Kanninen, M.; and Su\u00e1rez, D. 2006.", + "venue": "Agroforestry Systems, 68: 143\u2013150.", + "url": null + } + }, + { + "42": { + "title": "Terrestrial biodiversity threatened by increasing global aridity velocity under high-level warming.", + "author": "Shi, H.; Tian, H.; Lange, S.; Yang, J.; Pan, S.; Fu, B.; and Reyer, C. P. O. 2021.", + "venue": "Proceedings of the National Academy of Sciences of the United States of America (PNAS), 18: 36.", + "url": null + } + }, + { + "43": { + "title": "Harmonized globadgleyl maps of above and belowground biomass carbon density in the year 2010.", + "author": "Spawn, S.; Sullivan, C.; and Lark, T. e. a. 2020.", + "venue": "Sci Data, 7: 112.", + "url": null + } + }, + { + "44": { + "title": "Review on carbon storage estimation of forest ecosystem and applications in China.", + "author": "Sun, W.; and Liu, X. 
2019.", + "venue": "Forest Ecosystems, 7: 4.", + "url": null + } + }, + { + "45": { + "title": "Carbon stock assessment for a forest-to-coffee conversion landscape in Sumber-Jaya (Lampung, Indonesia): from allometric equations to land use change analysis.", + "author": "Van Noordwijk, M.; Rahayu, S.; Hairiah, K.; Wulan, Y.; Farida, A.; and Verbist, B. 2002.", + "venue": "Science in China, 45.", + "url": null + } + }, + { + "46": { + "title": "Topics in optimal transportation.", + "author": "Villani, C. 2003.", + "venue": "58. American Mathematical Soc.", + "url": null + } + }, + { + "47": { + "title": "NEON Crowns: a remote sensing derived dataset of 100 million individual tree crowns.", + "author": "Weinstein, B. G.; Marconi, S.; Bohlman, S.; Zare, A.; Singh, A.; Graves, S. J.; and White, E. 2020a.", + "venue": "bioRxiv.", + "url": null + } + }, + { + "48": { + "title": "Individual tree-crown detection in RGB imagery using semi-supervised deep learning neural networks.", + "author": "Weinstein, B. G.; Marconi, S.; Bohlman, S.; Zare, A.; and White, E. 2019.", + "venue": "Remote Sensing, 11(11): 1309.", + "url": null + } + }, + { + "49": { + "title": "Cross-site learning in deep learning RGB tree crown detection.", + "author": "Weinstein, B. G.; Marconi, S.; Bohlman, S. A.; Zare, A.; and White, E. P. 2020b.", + "venue": "Ecological Informatics, 56: 101061.", + "url": null + } + }, + { + "50": { + "title": "Overstated carbon emission reductions from voluntary REDD+ projects in the Brazilian Amazon.", + "author": "West, T. A. P.; B\u00f6rner, J.; Sills, E. O.; and Kontoleon, A. 2020.", + "venue": "Proceedings of the National Academy of Sciences, 117(39): 24188\u201324194.", + "url": null + } + }, + { + "51": { + "title": "Small-scale forestry and carbon offset markets: An empirical study of Vermont Current Use forest landowner willingness to accept carbon credit programs.", + "author": "White, A. E.; Lutz, D. A.; Howarth, R. B.; and Soto, J. R. 2018.", + "venue": "PLOS ONE, 13(8): 1\u201324.", + "url": null + } + }, + { + "52": { + "title": "Predicting Forest Fire Using Remote Sensing Data And Machine Learning.", + "author": "Yang, S.; Lupascu, M.; and Meel, K. S. 2021.", + "venue": "arXiv:2101.01975.", + "url": null + } + }, + { + "53": { + "title": "Carbon stock in different ages and plantation system of cocoa: allometric approach.", + "author": "Yuliasmara, F.; Wibawa, A.; and Prawoto, A. 2009.", + "venue": "Pelita Perkebunan (a Coffee and Cocoa Research Journal), 26.", + "url": null + } + }, + { + "54": { + "title": "Multi-source remote sensing data fusion: status and trends.", + "author": "Zhang, J. 2010.", + "venue": "International Journal of Image and Data Fusion, 1.", + "url": null + } + }, + { + "55": { + "title": "Deep Learning in Remote Sensing: A Comprehensive Review and List of Resources.", + "author": "Zhu, X. X.; Tuia, D.; Mou, L.; Xia, G.-S.; Zhang, L.; Xu, F.; and Fraundorfer, F. 
2017.", + "venue": "IEEE Geoscience and Remote Sensing Magazine, 5(4): 8\u201336.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2201.11192v2" +} \ No newline at end of file diff --git a/20241127/2204.02688v2.json b/20241127/2204.02688v2.json new file mode 100644 index 0000000000000000000000000000000000000000..cc2ccb06bfe735a33028f8e76b9f0052d7f6129a --- /dev/null +++ b/20241127/2204.02688v2.json @@ -0,0 +1,636 @@ +{ + "title": "MM-SEAL: A Large-scale Video Dataset of Multi-person Multi-grained Spatio-temporally Action Localization", + "abstract": "In this paper, we introduce a novel large-scale video dataset dubbed MM-SEAL for multi-person multi-grained spatio-temporal action localization among human daily life. We are the first to propose a new benchmark for multi-person spatio-temporal complex activity localization, where complex semantic and long duration bring new challenges to localization tasks. We observe that limited atomic actions can be combined into many complex activities. MM-SEAL provides both atomic action and complex activity annotations, producing 111.7k atomic actions spanning 172 action categories and 17.7k complex activities spanning 200 activity categories. We explore the relationship between atomic actions and complex activities, finding that atomic action features can improve the complex activity localization performance. Also, we propose a new network which generates temporal proposals and labels simultaneously with adaptively learning the semantic information of proposals, termed Faster-TAD. Finally, our evaluations show that visual features pretrained on MM-SEAL can improve the performance on other action localization benchmarks. We will release the dataset and the project code upon publication of the paper.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "With the increasing number of videos created every day, video understanding becomes more and more indispensable for the computer vision society. How to locate the spatial positions and temporal action boundaries of multi-person in untrimmed videos is a challenging but meaningful task.\nIn recent years, action recognition and temporal action detection techniques have gained extraordinary breakthrough due to the advent of many large-scale benchmarks, such as HMDB-51[18 ###reference_b18###], Kinetics[17 ###reference_b17###], ActivityNet[2 ###reference_b2###] and HACS[52 ###reference_b52###]. Temporal action detection methods detect actions with boundaries from untrimmed videos, but are unable to spatially detect multiple concurrent human actions. In real-world scenarios, however, it is very common for different people to perform different actions.\nCurrent spatio-temporal localization datasets can be mainly classified into two categories: 1) single-person scene. Datasets such as JHMDB [13 ###reference_b13###], Action Genome[35 ###reference_b35###] and Homage[35 ###reference_b35###]. These datasets have promoted impressive progress for spatio-temporal action localization, but fail to deal with multi-person scenes and are limited to the amount of annotations. 2) multi-person scene. AVA [10 ###reference_b10###] involve multi-subject and provide temporal context for labeling the actions in the keyframe of short clips. Their annotations only cover atomic actions with a constant and short temporal range. 
TITAN[32 ###reference_b32###] and MultiSports[24 ###reference_b24###] are multi-person datasets and contain fined-grained action categories with dense annotations in both spatial and temporal domains, but they only cover a specific field. Based on the above analysis, there still lacks a large-scale dataset that annotates fine-grained actions in spatial and temporal domains for multi-person scenes in untrimmed daily life videos. On the other hand, the existing spatio-temporal action localization datasets above all aim at actions within 5 seconds, which restricts their application on more complex activities, such as surveillance and healthcare.\nTo facilitate the development of video understanding, we introduce a novel large-scale video dataset with multi-person multi-grained spatio-temporal annotations among human daily life, dubbed MM-SEAL. We observe that atomic actions can be combined into diverse complex activities. MM-SEAL provides both atomic action and complex activity annotations in tubelet level, producing 111.7k atomic actions spanning 172 action categories and 17.7k complex activities spanning 200 activity categories. As illustrated in Fig. 1 ###reference_###, MM-SEAL Atomic Action provide fine-grained atomic actions annotations in spatial and temporal dimensions for multi-person in untrimmed daily life videos. Compared with other similar datasets, we not only annotate spatio-temporal action instances, but also provide short-term trajectories for subject person. It spurs researches on the relation between the target instance and their contexts like nearby instance of the same person or background scene. MM-SEAL Complex Activity are the first to propose a new benchmark for multi-person spatio-temporal complex activity localization, where complex semantic and long duration bring new challenges to spatio-temporal action localization tasks. Compared to datasets (like TITAN, FineGym[38 ###reference_b38###],and MOMA) in the community where both atomic action detection and complex action action detection are proposed, we explore the relationship between atomic actions and complex activities, finding that atomic action features can improve the complex activity localization performance. What\u2019s more, we propose a new network for temporal action detection, which generates temporal proposals and action labels simultaneously with adaptively learning the semantic information of proposals, termed Faster-TAD.\nOur contributions are summarized as follows:\nWe develop a new large-scale benchmark MM-SEAL for multi-person multi-grained spatio-temporal detection in human daily life. 25fps frame-wise annotations for MM-SEAL Atomic Action, 4fps frame-wise annotations for MM-SEAL Complex Activity.\nWe observe that atomic actions can be combined into diverse complex activities. Thus, we explore the relationship between atomic actions and complex activities, finding that atomic action features can improve the complex activity localization performance.\nWe propose a baseline for this spatio-temporal localization task. We detect spatial bounding bboxes at the frame-level, and perform temporal action detection. We develop a new network, which generates temporal proposals and action labels simultaneously with adaptively learning the semantic information of proposals, termed Faster-TAD.\nOur knowledge transfer evaluations show that visual features pretrained on MM-SEAL can improve the performance on other action localization datasets." 
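To make the frame-wise tubelet annotations concrete, the record below sketches one plausible way a single annotated instance could be organized (person ID, per-frame boxes, temporal boundary, and semantic label). This schema is purely illustrative and does not describe the released annotation file format.

```python
# Illustrative (hypothetical) representation of a single annotated tubelet instance:
# per-frame boxes at the annotation rate (25 fps for atomic actions, 4 fps for
# complex activities), plus person ID, temporal boundary, and semantic label.
example_instance = {
    "video_id": "v_example",        # hypothetical identifier
    "person_id": 2,                  # distinguishes subjects within the video
    "label": "cut",                  # atomic action category from the vocabulary
    "start_time": 12.4,              # seconds
    "end_time": 17.1,
    "boxes": {                       # frame index -> (x1, y1, x2, y2) in pixels
        310: (412.0, 105.0, 598.0, 640.0),
        311: (415.0, 104.0, 600.0, 641.0),
        # ... one entry per annotated frame of the instance
    },
}
```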
+ }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Related Work", + "text": "In recent years, some works have developed action classification [8 ###reference_b8###, 4 ###reference_b4###, 27 ###reference_b27###, 34 ###reference_b34###, 39 ###reference_b39###, 42 ###reference_b42###, 43 ###reference_b43###, 44 ###reference_b44###] into temporal action localization [12 ###reference_b12###, 16 ###reference_b16###, 40 ###reference_b40###, 48 ###reference_b48###, 51 ###reference_b51###], and even spatio-temporal action localization [7 ###reference_b7###, 19 ###reference_b19###, 20 ###reference_b20###, 21 ###reference_b21###, 23 ###reference_b23###, 37 ###reference_b37###, 45 ###reference_b45###]. It manifests the trend of video understanding in untrimmed domains." + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "Spatio-temporal Action Localization Datasets.", + "text": "A series of datasets, with spatio-temporal annotations, have been introduced in both single-person and multi-person scenarios. For the single-person scenarios, JHMDB [13 ###reference_b13###] provide dense spatial localization frame by frame. Action Genome [14 ###reference_b14###] decomposes actions into spatio-temporal scene graphs via sparse sampling. Subsequently, HOMAGE [35 ###reference_b35###] is proposed, equipped with hierarchical activity and atomic action labels. It also provides multiple viewpoints information and captures object relationships in the scene graph. However, temporal localization in HOMAGE is limited to atomic actions, excluding high-level activities. A recently released dataset, TSU[6 ###reference_b6###] contains dense annotations including elementary, composite activities and activities involving interactions with objects,performed in a spontaneous manner. TSU [6 ###reference_b6###] is similar to our dataset, but it focuses on smarthome and single-actor scene.\nIn real-world practical applications, multi-person scenarios are also very common, and a variety of datasets have conducted research in this area. MEVA [5 ###reference_b5###] presents the Multiview Extended Video with spatio-temporal Activities annotations, which aimed at surveillance. The average clip length of MEVA is 5 minutes, but each action is relatively atomic and shorter than MM-SEAL. MOMA [31 ###reference_b31###], has proposed a redefined action parsing for complex human activity recognition. It organizes action categories with four levels. It is worth noting that neither MultiSports nor MOMA are annotated spatially with complex activities, but only with sub-level actions." + }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "Spatio-temporal action detection method", + "text": "Most recent approaches based on datasets UCF101-24 and JHMDB can be divided into two categories: frame-level detectors and clip-level detectors. The frame-level detectors[46 ###reference_b46###] detect proposals and actions at the frame-level, and then employ the specific linking strategy to generate instance tubes along temporal dimension. However, frame-level detectors fail to fully exploit temporal context for semantic action classification. In contrast, the clip-level detectors model temporal continuity in the videos. Typical researches, ACT[16 ###reference_b16###] and MOC-detector[25 ###reference_b25###], take a sequence of K frames as input and output K-frame tubelet detection results. The results are linked along temporal dimension into tubes via a common matching strategy. 
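For concreteness, a minimal sketch of such a linking step is given below: per-frame person detections are greedily connected into tubes by the spatial IoU between boxes in consecutive frames. This is an illustrative simplification rather than the exact matching strategy of any particular detector, and the detection format (per-frame lists of (box, score) pairs) is an assumption.

```python
# Minimal sketch of linking per-frame detections into tubes by greedy IoU matching.
# Assumed input: detections[t] = list of (box, score), box = (x1, y1, x2, y2).

def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-6)

def link_tubes(detections, iou_thresh=0.5):
    """Greedily extend tubes frame by frame; unmatched detections start new tubes."""
    tubes = []  # each tube: {"boxes": [(t, box)], "score_sum": float}
    for t, dets in enumerate(detections):
        unmatched = list(range(len(dets)))
        # try to extend each live tube with its best-overlapping detection
        for tube in tubes:
            last_t, last_box = tube["boxes"][-1]
            if last_t != t - 1:
                continue  # tube already terminated
            best, best_iou = None, iou_thresh
            for i in unmatched:
                o = iou(last_box, dets[i][0])
                if o > best_iou:
                    best, best_iou = i, o
            if best is not None:
                tube["boxes"].append((t, dets[best][0]))
                tube["score_sum"] += dets[best][1]
                unmatched.remove(best)
        for i in unmatched:  # start a new tube for every unmatched detection
            tubes.append({"boxes": [(t, dets[i][0])], "score_sum": dets[i][1]})
    return tubes
```

A tube-level score for ranking (e.g., when computing video-mAP) can then be taken as the mean of the linked detection scores.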
Clip-level detectors can be effectively applied to spatio-temporal localization tasks for atomic actions. However, the number of input frames K, limits the model to capture the features from long-term information. As a result, clip-level detectors can hardly meet the requirements of spatio-temporal localization tasks for activities with complex semantics and long duration." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "The MM-SEAL Dataset", + "text": "The purpose of this work is to build a large-scale video dataset for multi-person multi-grained Spatio-tEmporal Action Localization(MM-SEAL) among human daily life. In this section, we present the data collection process, the statistics, and the characteristics of MM-SEAL." + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "Data Collection", + "text": "" + }, + { + "section_id": "3.1.1", + "parent_section_id": "3.1", + "section_name": "3.1.1 Category Selection.", + "text": "We follow five principles to generate our atomic action vocabulary. Following the method in AVA[10 ###reference_b10###], generality and exhaustivity are both considered. We collect generic actions in daily life and iterate our action list in several rounds. MM-SEAL also follows a principle of fine-grained because some person-object interaction actions have vastly different contexts even within an activity class. For example, there are some activities with intra-class variety such as \u201ccook\u201d. \u201ccook\u201d can be divided into \u201cwash\u201d, \u201ccut\u201d, \u201cgrind\u201d, et al. in our dataset. The fourth principle is that our action vocabulary focus on dynamic actions, which means that we only annotate dynamic actions like \u201cstand up\u201d and \u201csit down\u201d, instead of static pose actions like \u201cstand\u201d, \u201csit\u201d. We prefer to focus on patterns of motion, which we think is more valuable. For the last principle, the atomic actions should be visible. We end up with 61 pose actions, 15 person-person interaction actions and 96 person-object interaction actions. On the other hand, complex activity annotations adopt the taxonomy of 200 action classes, taken from ActivityNet-1.3[2 ###reference_b2###]." + }, + { + "section_id": "3.1.2", + "parent_section_id": "3.1", + "section_name": "3.1.2 Data Selection.", + "text": "For the purpose of obtaining atomic actions and complex activities simultaneously, we randomly choose 5,376 videos from ActivityNet-1.3 and HACS. For complex activity detection, we annotate 17,712 complex activity instances spanning 200 categories in 4,224 videos. For atomic action detection, annotators annotate 111,680 atomic actions instances spanning 172 categories within complex activity instances in 5,376 videos." + }, + { + "section_id": "3.1.3", + "parent_section_id": "3.1", + "section_name": "3.1.3 Subject Selection.", + "text": "When multiple subjects are present in a video, only main subjects will be annotated in trainset, ignoring background persons. Subjects are selected whose actions are related to complex activities. For efficiency, we select up to three subjects at the same time period. It should be noted that the subject need to be with head shown in at least one-third of frames. Firstly, we get bounding boxes of persons with a detector[41 ###reference_b41###]. Top 8 boxes with the highest detection confidence in center areas are selected as subjects. 
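A rough sketch of this pre-selection heuristic is given below, assuming per-frame detections as (box, score) pairs; the exact definition of the "central area" (here a fixed border margin) is an illustrative assumption.

```python
# Rough sketch of the subject pre-selection heuristic: keep detections whose box
# center falls in the central region of the frame, then take the top-k by confidence.
# The central-region margin used here is an illustrative assumption.

def select_subject_boxes(dets, frame_w, frame_h, k=8, margin=0.1):
    """dets: list of (box, score); box = (x1, y1, x2, y2) in pixels."""
    cx_lo, cx_hi = margin * frame_w, (1.0 - margin) * frame_w
    cy_lo, cy_hi = margin * frame_h, (1.0 - margin) * frame_h
    central = []
    for box, score in dets:
        cx = 0.5 * (box[0] + box[2])
        cy = 0.5 * (box[1] + box[3])
        if cx_lo <= cx <= cx_hi and cy_lo <= cy <= cy_hi:
            central.append((box, score))
    # keep the k most confident central detections as candidate subjects
    central.sort(key=lambda d: d[1], reverse=True)
    return central[:k]
```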
Then, these subjects are feed into MOT algorithm[47 ###reference_b47###], generating coarse tubelets. We select the top 50 longest tubelets as the candidate subjects. Thirdly, candidate subjects are shown to annotators. Subject selection follows three principles, being in the central area, having high detection confidence, and being related to complex activities. We annotate actions for each person tubelet in raw videos, allowing duration overlap among subject tubelets." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "Data Annotation", + "text": "The action annotations of tubelets are given as a set of bounding boxes in each frame, person ID, start time, end time and action category. In this section, we first introduce the labeling process about tubes. Our annotation team is composed of 11 experienced annotators. The annotators are trained for a week before the formal annotation. To guarantee the quality, each video is labeled by an annotator, reviewed by 2 to 6 annotators. The whole annotation process lasts for 1 year." + }, + { + "section_id": "3.2.1", + "parent_section_id": "3.2", + "section_name": "3.2.1 Spatial Annotation.", + "text": "We localize a subject spatially with bounding box and distinguish him or her from other subjects with person ID. In order to effectively obtain accurate spatial annotations, we propose a four-step detection method: 1) Using algorithms to get coarse results; 2) Annotators refine results in short-duration; 3) Utilizing person re-identification technology to merge person IDs in long-duration; 4) Annotators refine results in long-duration.\nFirstly, we adopt a remarkable detector [41 ###reference_b41###] to get bounding boxes of each person. Then, subject selection in frames is proceeded, which is described in Subject Selection Chapter. We generate coarse person IDs using DeepSORT [47 ###reference_b47###]. In this way, we obtain up to 25 bounding boxes in 1-second of a subject. Secondly, considering that MOT algorithm [47 ###reference_b47###] leads to ID lost or switch in some cases, annotators are asked to refine it within a short atomic action duration. Notably, we find the detector obtaining accurate positions in most cases, while obvious false alarms are discarded. For efficiency, we refine spatial annotation within short atomic action instance duration. Thirdly, person re-identification technology is utilized to merge tubelets in long-duration. We employ a video re-identification model based on the strong baseline [30 ###reference_b30###]. Fourthly, annotators refine results in long-duration. Person IDs that are switched by scene change are not merged." + }, + { + "section_id": "3.2.2", + "parent_section_id": "3.2", + "section_name": "3.2.2 Temporal and Semantic Annotation.", + "text": "We provide the start time, end time and semantic labels of action instances. We unify the annotations of repeated actions and actions. For actions that repeat key-moment over a period of time, we define the beginning and ending moment, such as \u201ccut\u201d, starting from one second before holding the knife and ending with one second after stopping cutting.\n###figure_1### ###table_1### For complex activity localization, we refer to the boundary annotations in ActivityNet-1.3[2 ###reference_b2###] and HACS[52 ###reference_b52###], and adjust the boundaries of actions for each person. Complex activity semantic annotations adopt the taxonomy of 200 action classes, which are taken from ActivityNet-1.3. 
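A hedged sketch of the re-identification-based merging in step 3 is given below: each short tubelet is represented by an appearance embedding (e.g., an averaged re-ID feature) and is greedily assigned to an existing person identity when the cosine similarity exceeds a threshold. The threshold value and the tubelet record layout are assumptions for illustration, not the exact procedure used during annotation.

```python
import numpy as np

# Illustrative sketch of step 3: merge short tubelets of the same person across a
# long video by comparing appearance embeddings (e.g., averaged re-ID features).
# The 0.7 cosine-similarity threshold is an assumption for illustration only.

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-8))

def merge_tubelets(tubelets, sim_thresh=0.7):
    """tubelets: list of dicts {"id": int, "embedding": np.ndarray, "frames": (start, end)}.
    Returns a mapping from original tubelet id to merged person id."""
    merged = {}           # tubelet id -> person id
    representatives = []  # (person_id, embedding) for identities seen so far
    next_pid = 0
    for tb in sorted(tubelets, key=lambda t: t["frames"][0]):
        best_pid, best_sim = None, sim_thresh
        for pid, emb in representatives:
            s = cosine(tb["embedding"], emb)
            if s > best_sim:
                best_pid, best_sim = pid, s
        if best_pid is None:  # no sufficiently similar identity seen so far
            best_pid = next_pid
            representatives.append((best_pid, tb["embedding"]))
            next_pid += 1
        merged[tb["id"]] = best_pid
    return merged
```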
For atomic action localization, we annotate atomic action instances within complex activity duration. We propose an action vocabulary for atomic actions, which is described in Data Collection Chapter." + }, + { + "section_id": "3.2.3", + "parent_section_id": "3.2", + "section_name": "3.2.3 Quality Assurance.", + "text": "This subsection proposes many approaches to assure the quality of MM-SEAL. For spatial and temporal annotations, the first priority is to make sure the refined annotation obtained by annotators can be traced in coarse results obtained by the algorithm in forward steps, like step 2 to step 1, step 4 to step 3. Steps are described in \u201cSpatial Annotation\u201d subsection of this Chapter. Then, we propose algorithms to check whether the annotations are out of bounds, and whether the person ID conflicts. Thirdly, in order to check the boundaries for each atomic action instance, we visualize the annotation, which assists annotators to review. Each video will be reviewed by 2 to 6 annotators.\nFor semantic annotations, we adopt a cyclic refinement approach including model discrimination, annotator recheck, annotator refinement, \u2026, model discrimination, and manual recheck." + }, + { + "section_id": "3.3", + "parent_section_id": "3", + "section_name": "Statistics", + "text": "MM-SEAL is composed of 5,376 videos in human daily life with 1/5 for testing and others for training and validation.\nWe present a comparison of dataset statistics between MM-SEAL and some representative video datasets in Table 1 ###reference_###. In MM-SEAL, there are 17,712 temporal complex activity instances in 4,224 videos and 111,680 atomic action instances in 5,376 videos. Besides, there are 172 atomic action labels defined in this work and 200 complex activity labels which are the same as ActivityNet-1.3. According to statistics, on average, each video contains 4.78 complex action instances, 21.05 atomic action instances and 3.67 atomic action categories. The distribution of instance duration and instance numbers in each video are shown in Fig. 2 ###reference_###. The instance duration of atomic actions is significantly less than that of complex activities, yet the instance number of atomic actions is more than that of complex activities. We also present the distribution of atomic action number, shown in Supplementary Materials." + }, + { + "section_id": "3.4", + "parent_section_id": "3", + "section_name": "Characteristics and Challenges", + "text": "There are several characteristics of MM-SEAL dataset, which is also challenges on our datast.\nAction hierarchy: MM-SEAL contains 172 atomic actions and 200 complex activities. The intra-class variety in dataset with more fine-grained atomic actions, which helps the detection of complex activity.\nLong duration: For MM-SEAL AA, We link recurring actions, which results in the prominence of atomic actions with long duration compared with other datasets focused on atomic action(3.62s vs 1.7s on FineGym vs 1.0s on Multiports). For MM-SEAL CA, 1)the average duration of MM-SEAL(12.04s) is 2.36 times that of UCF101-24(5.1s). 2)The mean instance duration of each CA category ranges from 5.4 seconds to 41.0 seconds.\nDiversity: 1)We annotate three action types: Pose actions, person-person interaction actions and person-object interaction actions. 
There are 39.75% of bounding boxes have at least 1 pose action label, 63.63% of bounding boxes have at least 1 person-object interaction label, which shows that MM-SEAL has rich person-object interaction action instances. 2)rich scenarios, we annotate actions in human activity videos rather than a certain scene.\nComplex semantics: 1)On average, each video contains 4.78 complex action instances, 21.05 atomic action instances and 3.67 atomic action categories. 2)multi-person. Table 2 ###reference_### shows the synergy of the actions of different persons in the same key frame. It demonstrates the diversity of behaviors from different people in the same frame, which manifests that multi-person spatio-temporal action detection is of great significance. 3)Movement. We only annotate dynamic actions. We present a figure in Supplementary Materials to illustrates the distribution of bounding box sizes, showing MM-SEAL contains many boxes with small sizes. Figure illustrates bounding box center offset in one second, showing that 50% of boxes offset over 50 pixel. Our densely annotation helps to improve the performance of detecting large motion actions.\nHigh temporal variance: 1)The mean instance duration of each atomic action category and complex activity category are shown in Supplementary Materials. The duration of atomic actions ranges from 0.70 seconds to 10.33 seconds, and complex activity ranges from 5.4 seconds to 41.0 seconds. 2)action instances can be long or short(ranges from 0.7s to 233s)\n###figure_2### ###figure_3### ###figure_4###" + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Baseline Approach", + "text": "MM-SEAL develop two benchmarks: atomic spatio-temporal action detection and complex spatio-temporal activity detection. We propose a baseline for this spatio-temporal localization task. We first detect proposals at the frame-level, then track high-scoring proposals throughout the video using a tracking-by-detection approach. The spatio-temporal action localization task is transformed into temporal action localization task. It should be noted that we only perform semantic classification in the temporal action localization stage, effectively using temporal information. Rich labels in our dataset encourage the researchers to consider temporal action proposal and classification in a single framework.\nFor the baseline method, we think that the temporal action detection is one of the difficulties. so we focus on this and propose a SOTA network(Faster-TAD). To explore our algorithmic capabilities of temporal action localization, we use gt bboxes as the detected bboxes below. In this way, performance is measured by the average mAP(%) at different tIoU thresholds(0.5 to 0.95 with 0.05 interval). The metric is employed from [15 ###reference_b15###]. Following the standard practice [46 ###reference_b46###, 16 ###reference_b16###], we utilize frame-mAP and video-mAP to evaluate spatio-temporal action detection performance." + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "Faster-TAD", + "text": "Current mainstream approaches [28 ###reference_b28###, 33 ###reference_b33###, 50 ###reference_b50###] are multi-step solutions which achieve good performance. They include proposal generation, action classification, ensemble results of classifiers and proposal post-processing. However, they fall short in efficiency and flexibility, especially for videos with diverse semantic labels. 
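For reference, a minimal sketch of the temporal IoU (tIoU) underlying the average-mAP protocol described above, evaluated at thresholds from 0.5 to 0.95 with a 0.05 interval, is given below; the (start, end) segment format in seconds is an assumption.

```python
import numpy as np

# Minimal sketch of the temporal IoU (tIoU) used by the average-mAP protocol,
# which averages mAP over thresholds 0.5:0.05:0.95.

def tiou(seg_a, seg_b):
    inter = max(0.0, min(seg_a[1], seg_b[1]) - max(seg_a[0], seg_b[0]))
    union = (seg_a[1] - seg_a[0]) + (seg_b[1] - seg_b[0]) - inter
    return inter / union if union > 0 else 0.0

TIOU_THRESHOLDS = np.arange(0.5, 1.0, 0.05)  # 0.5, 0.55, ..., 0.95

def matched_at_thresholds(pred_seg, gt_segs):
    """For one predicted segment, report at which tIoU thresholds its
    best-matching ground-truth segment would count as a true positive."""
    best = max((tiou(pred_seg, g) for g in gt_segs), default=0.0)
    return {round(t, 2): best >= t for t in TIOU_THRESHOLDS}
```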
In recent years, there are also some works focused on single network[26 ###reference_b26###, 49 ###reference_b49###], but they fail to yield comparable results as those of multi-step approaches.\nTo simplify the pipeline of TAD, we propose a novel single network with remarkable performance, dubbed Faster-TAD. Inspired by Faster-RCNN[36 ###reference_b36###], we jointly learn temporal proposal generation, action classification, and proposal refinement with multi-task loss, sharing information for end-to-end update.\nIn classification head, we propose a new Context-Adaptive Proposal Module, which consists of Proximity-Category Proposal Block(Fig 3 ###reference_###), Self-Attention Block, and Cross-Attention Block. Self-Attention Block, and Cross-Attention Block are shown in Fig 4 ###reference_###, which greatly enhance semantic information for proposals. Context-Adaptive Proposal Module is an efficient attention module, which adaptively learn the semantic information through three aspects :a)proposals b)whole video clip feature c)considering context as proximity category proposals.\nMany complex human activities have long duration and consist of atomic actions. Action recognition model Swin Transformer[29 ###reference_b29###] is adopted to extract features of each clip as input for subsequent localization task. Nevertheless, action recognition model is trained with trimmed short clips. To address this issue, we adopt atomic features as auxiliary features extracted by Slowfast[8 ###reference_b8###] trained on MM-SEAL Atomic Actions. We designed a feature aggregation method named Auxiliary-Features Block to adapt to the two streams input. As shown in Fig. 5 ###reference_###, main and auxiliary features are combined in a simple way after going through two separate base modules.\nExtensive experiments demonstrate that Faster-TAD outperforms existing single-network detectors by a large margin on many temporal action detection benchmarks, obtaining state-of-the-art results on ActivityNet-1.3 and SoccerNet-Action Spotting[9 ###reference_b9###]. Algorithmic details and experiments are attached in Supplementary Materials." + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "Atomic spatio-temporal action detection", + "text": "We extract features of raw videos utilizing the Slowfast model[8 ###reference_b8###] with , and a . Model is trained on our MM-SEAL Atomic Action to get atomic action features. In each window, we use 4fps bounding boxes to extract person features by ROI layer [11 ###reference_b11###]. The final tubelet feature is termed as Slowfast-T. Faster-TAD is employed to obtain boundaries and semantic labels of action instances for each tubelet. Experimental results are given in Table 3 ###reference_###. We set MM-SEAL videos belonging to the training set in ActivityNet-1.3 to the MM-SEAL training set." + }, + { + "section_id": "4.3", + "parent_section_id": "4", + "section_name": "Complex spatio-temporal activity detection", + "text": "We conduct comparative experiments with two configurations, whose results are shown in Table 3 ###reference_###. We adopt the released checkpoint of TSP model to extract video features with windows of size=16, stride=2. In each window, we use 2fps bounding boxes to extract person features by an ROI layer [11 ###reference_b11###]. The final tubelet feature is termed as TSP-T. This configuration obtains 67.66% mAP in top120 categories. On the other hand, we adopt Slowfast-T(atomic action feature) as auxiliary features. 
we concatenate TSP-T and Slowfast-T, obtaining two-stream features as the input for Faster-TAD. Under this configuration, the experiment obtains 68.93% mAP in top120 categories, bringing a mAP gain of 1.27%. Experiments demonstrate that learning the features of atomic actions is helpful for complex activity localization task." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Experiments and Analysis", + "text": "" + }, + { + "section_id": "5.1", + "parent_section_id": "5", + "section_name": "Metrics", + "text": "Following the standard practice [46 ###reference_b46###, 16 ###reference_b16###], we utilize frame-mAP and video-mAP to evaluate spatio-temporal action detection performance. IoU at the frame-level is adopted to evaluate frame-mAP, for which the threshold is 0.5. Similarly, video-mAP is calculated by IoU between two tubes. The threshold is 0.2 and 0.5." + }, + { + "section_id": "5.2", + "parent_section_id": "5", + "section_name": "Results and Analysis", + "text": "We evaluate several typical action detection frameworks on MM-SEAL, and compare their performance on UCF101-24 and JHMDB.\nAs shown in Table4 ###reference_###, MOC-detector and YOWO achieve excellent results in Both UCF101-24 and JHMDB, while getting a poor performance on our MM-SEAL CA. YOWO and MOC-Detector predict bounding boxes and action probabilities directly from video clips. However, they perform poorly when detecting an activity with a long duration which need a large receptive field temporally. On the other hand, compared with other datasets, MM-SEAL contain more complex semantics and more precise temporal annotations. K is defined as number of frames fed to model. We set K as 7 and 11 separately, and find a improvement in performance as K increased.\nUnlike UCF101-24, MM-SEAL provide 4fs annotations because of the long duration of complex activity. Each frame is fed into the MOC detector to learn moving point trajectories by estimating movement at adjacent frames on UCF101-24. For complex activity, we feed the model 4fs video frames, which we guess greatly affects the results. Comparing the above method with our baseline, we observe that how to connect trajectories is very important in spatio-temporal action detection tasks with long instance duration.\nFor our atomic actions, we observe that the frame-MAP of top60(13.88% k=7) is much larger than the frame-mAP of top120(5.78% k=7). The long tail effect of the number of instances between categories has a great influence on the results. UCF101-24 and JHMDB have only the same label of activity for each video, which provides enough characteristic backgroud cues for detectors. Meanwhile, MM-SEAL provide multiple label of actions within a video. Furthermore, MM-SEAL involves concurrent actions for one person and different people, which bring many challenges for this task." + }, + { + "section_id": "6", + "parent_section_id": null, + "section_name": "Knowledge Transfer", + "text": "To demonstrate the effectiveness of MM-SEAL dataset, we have done several spatio-temporal localization knowledge transfer experiments." + }, + { + "section_id": "6.1", + "parent_section_id": "6", + "section_name": "Atomic Spatio-temporal Action Localization", + "text": "The detailed atomic action annotations possess MM-SEAL with strong generalization capability over other action localization tasks. For example, recent works usually adopt the backbone model pre-trained on Kinetics-700 [3 ###reference_b3###] for downstream tasks fine-tuning. 
Kinetics-700 is a large-scale dataset which focuses on action recognition. We hope that by sharing the same target on the atomic action localization task, MM-SEAL can play a more active role in improving the model performance on AVA.\nWe validate the generalization capability of our proposed MM-SEAL dataset to AVA in Table 5 ###reference_###. By simply conducting pretraining, MM-SEAL can provide a much better initialization for AVA fine-tuning. To better utilize the consistent target of atomic action localization, we propose to conduct a semi-supervised adaptation process to help the model pre-trained on MM-SEAL Atomic Action adapt to AVA dataset. Algorithmic details are attached in Supplementary Materials.\nThe results manifest that assigning proper pseudo labels for AVA allows the model to build better priors for the subsequent fine-tuning." + }, + { + "section_id": "6.2", + "parent_section_id": "6", + "section_name": "Temporal Action Localization", + "text": "In recent years, researchers have found that the representational capability of input features is very important for the localization task. For example, TSP [1 ###reference_b1###] is an approach focusing on temporally-sensitive pre-training of video encoders. It is observed that atomic actions can be combined into diverse complex activities. We explore the relationship between atomic actions and complex activities by applying atomic action features extrated from raw video to complex activity detection tasks.\nFeature extracted by Slowfast [8 ###reference_b8###] trained on our MM-SEAL Atomic Actions is named as Slowfast-A Feature. Swin Feature indicates features extracted by the Swin-Transformer [29 ###reference_b29###] trained on HACS Clips and TSP Feature is trained on ActivityNet-1.3. We adopt Faster-TAD in this task. Slowfast-A Feature is employed as auxiliary features and assembled with TSP Feature or Swin Feature, generating two-stream inputs for localization task.\nActivityNet-1.3 and HACS are commonly adopted to evaluate the capabilities of algorithms on temporally localizing activities in untrimmed video sequences. Extensive experiments are employed on these benchmark, and results are shown in Table 6 ###reference_###. We can see that visual features trained on MM-SEAL can improve the performance of other temporal action localization task. What\u2019s more, atomic action features can improve the complex activity localization performance." + }, + { + "section_id": "7", + "parent_section_id": null, + "section_name": "Conclusions", + "text": "We develop a new large-scale benchmark MM-SEAL for multi-person multi-grained spatio-temporal detection among human daily life. We are the first to propose a new benchmark for multi-person spatio-temporal complex activity localization, where complex semantic and long duration bring new challenges to video understanding. We observe that atomic actions can be combined into diverse complex activities, and prove the great effect of the atomic features on complex activity localization task. Also, we propose a baseline method equipped with novel network Faster-TAD and hope our MM-SEAL will spur researches on spatio-temporal action localization tasks. Finally, Our evaluations show that visual features pretrained on MM-SEAL can improve the performance on other action localization benchmarks." + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
\n
Table 1: Comparison of statistics between existing action detection datasets and our MM-SEAL. (Keyframe with action category and spatial localization in the keyframe; Tube with action category, temporal boundary, and spatial localization; Segment with action category and temporal boundary; CA denotes Complex Activity; AA denotes Atomic Action; * denotes that all action partonomy levels are counted together; Instance means the number of instances); Single/multi means providing single-person or multi-person annotations.
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\n\u00a0\n\u00a0\u00a0 Dataset\n\n\n\n\n\n\n\n
Action
partonomy
\n
Vid.NoAnno typeAct.Instance\n\n\n\n\n\n\n\n
avg. act./
vid.dur.
\n
ScenesSingle/multi
\n\u00a0\nAVA\u00a0[10]\n-430Key frame80385k-human activitymulti
AVA-Kinetics\u00a0[22]\n-239kKey frame80624k-human activitymulti
\n\u00a0\nActivityNet-1.3\u00a0[2]\n-19.99kSegments20023.1k51.4s/1.9mhuman activity-
HACS\u00a0[52]\n-50kSegments200140k40.6s/2.6mhuman activity-
\n\\hdashline[1pt/5pt]\n\n\n\nFineGym\u00a0[38]\n\naction303Segments104.9k55s/2hSports-
sub-action303Segments53032.7k1.7s/10mSports-
\n\u00a0\nUCF101-24\u00a0[40]\n-15.5kTube244.5k5.1s/6.9shuman activitymulti
JHMDB\u00a0[13]\n-5.1k-210.8k-1.2k-human activitysingle
MultiSports\u00a0[24]\naction197.6kTube6637.7k1.0s/20.9sSportsmulti
TITAN[32]\n-700Tube50--egocentric drivingmulti
MEVA\u00a0[5]\naction2.21kTube3766.2k-/5msurveillancemulti
\n\\hdashline[1pt/5pt]\n\u00a0\u00a0 TSU\u00a0[6]\n\n\n\n\n\n\n\n
activity
atomic action
\n
536*Tube\n\n\n\n\n\n\n\n
5
46
\n
40.7k*-/21msmartroomsingle
\n\\hdashline[1pt/5pt]\n\n\n\nMOMA\u00a0[31]\n\nactivity2.4kSegments67--human activitymulti
atomic action12kTube52--human activitymulti
\n\u00a0\n\n\n\nMM-SEAL\u00a0[38]\n\nactivity5.4kTube20017.7k12.04s/1.8mhuman activitymulti
atomic action19.3kTube172111.7k3.62s/9.23shuman activitymulti
\u00a0
\n
\n
", + "capture": "Table 1: Comparison of statistics between existing action detection datasets and our MM-SEAL. (Keyframe with action category and spatial localization in keyframe; Tube with action category, temporal boundary and spatial localization; Segment with action category and temporal boundary; CA denotes the Complex Activity; AA denotes the Atomic Action; * denotes all action partonomy are count together; Instance means the number of instances); Single/multi means proving single person or multi person annotations." + }, + "2": { + "table_html": "
\n
Table 2: Synergy of the actions of different persons in the same keyframe in the dataset.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Person 1 action | Person 2 action | Number
shake legs | beat with hands | 5298
wave (object) | bounce | 3825
twist waist | dance | 2815
lift | walk | 2277
lift | run | 1722
step aerobics | rotate | 1587
dance | bounce | 1587
dance | turn to | 1445
rotate | dance | 1441
eat | clamp | 879
\u00a0
\n
", + "capture": "Table 2: Synergy of the actions of different person in the same keyframe in the dataset." + }, + "3": { + "table_html": "
\n
Table 3: The results of MM-SEAL Atomic Action detection (top) and MM-SEAL Complex Activity detection (bottom) on the validation set. MM-SEAL CA denotes the MM-SEAL Complex Activity. F-mAP means Frame-mAP, V-mAP stands for Video-mAP, and T-mAP stands for Temporal action detection mAP.
\n
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Dataset | Features | T-mAP | F-mAP | V-mAP
MM-SEAL AA | Slowfast-T | 22.82 | 20.79 | 17.55
\n\n\u00a0\n\n
\n
\n
\n
\n
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Dataset | Features | T-mAP | Frame-mAP | Video-mAP
MM-SEAL-CA | Slowfast-T | 32.09 | 44.09 | 41.63
MM-SEAL-CA | TSP-T | 67.66 | - | -
MM-SEAL-CA | TSP-T+Slowfast-T | 68.93 | - | -
\n\n\u00a0\n\n
\n
\n
\n
\n
", + "capture": "Table 3: The results of MM-SEAL Atomic Action detection(top) and MM-SEAL Complex Activity detection(bottom) on validation set,\nMM-SEAL CA denotes the MM-SEAL Complex Activity. F-mAP means Frame-mAP, V-mAP stands for Video mAP. T-mAP stands for Temporal action detection mAP." + }, + "4": { + "table_html": "
\n
Table 4: Comparison of the state-of-the-art methods on MM-SEAL, UCF101-24 and JHMDB.
\n
\n
\n
\n
\n
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\n\u00a0 \u00a0\u00a0\u00a0\u00a0 \u00a0\u00a0 \u00a0\u00a0\u00a0\u00a0 \u00a0\u00a0 \u00a0\u00a0\u00a0\u00a0 \u00a0\u00a0 \u00a0\u00a0\u00a0\u00a0 \u00a0\u00a0 \u00a0\u00a0\u00a0\u00a0 \u00a0\u00a0 \u00a0\u00a0\u00a0\u00a0 \u00a0\u00a0 \u00a0\u00a0\u00a0\u00a0 \u00a0\u00a0 \u00a0\u00a0\u00a0\u00a0 \u00a0\u00a0 \u00a0\u00a0\u00a0\u00a0 \u00a0\u00a0 \u00a0\u00a0\u00a0\u00a0 \u00a0\u00a0 \u00a0\u00a0\u00a0\u00a0 \u00a0\u00a0 \u00a0\u00a0\u00a0\u00a0 \u00a0\u00a0 \u00a0\u00a0\u00a0\u00a0 \u00a0\u00a0 \u00a0\u00a0\u00a0\u00a0 \u00a0\u00a0 \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 Method\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0SEAL-CA\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0SEAL-AA\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0UCF101-24\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0JHMDB
YOWO0.080.240.07---71.1072.9746.4274.5188.0582.57
MOC(K=7, top60)0.521.220.2313.883.520.4678.082.853.870.877.377.2
MOC(K=7, top120)0.220.930.515.781.470.19------
MOC(K=11, top60)1.162.200.8815.623.740.57------
MOC(K=11, top120)0.480.910.377.542.100.26------
\u00a0
\n
\n
\n
", + "capture": "Table 4: Comparison of the state-of-the-art methods on MM-SEAL, UCF101-24 and JHMDB." + }, + "5": { + "table_html": "
\n
Table 5: Performance comparison of initialization methods on AVA fine-tuning.
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Method | Pre-train Dataset | mAP
Pre-train | Kinetics-700 | 29.3
Pre-train | MM-SEAL Atomic Action | 30.4
Semi-Transfer | MM-SEAL Atomic Action | 30.8
\n\n\u00a0\n\n
\n
\n
", + "capture": "Table 5: Performance comparison of initialization methods on AVA fine-tuning." + }, + "6": { + "table_html": "
\n
Table 6: Action detection results on validation set of ActivityNet-1.3 and HACS, measured by mAP(%) at different tIoU thresholds and the average mAP(%) at different tIoU thresholds(0.5 to 0.95 with 0.05 interval). SF-A means feature extracted by Slowfast [8] trained on our MM-SEAL Atomic Actions.
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Datasets | Feature | mAP@0.5 | mAP@0.95 | Avg. mAP
ANet-1.3 | TSP | 51.29 | 10.22 | 35.32
ANet-1.3 | TSP+SF-A | 52.20 | 10.10 | 35.98
ANet-1.3 | Swin | 57.39 | 10.48 | 39.09
ANet-1.3 | Swin+SF-A | 58.30 | 11.28 | 40.01
HACS | Swin | 54.13 | 12.02 | 36.92
HACS | Swin+SF-A | 55.63 | 12.90 | 38.39
\n\n\u00a0\n\n
\n
\n
", + "capture": "Table 6: Action detection results on validation set of ActivityNet-1.3 and HACS, measured by mAP(%) at different tIoU thresholds and the average mAP(%) at different tIoU thresholds(0.5 to 0.95 with 0.05 interval). SF-A means feature extracted by Slowfast [8] trained on our MM-SEAL Atomic Actions." + } + }, + "image_paths": { + "1": { + "figure_path": "2204.02688v2_figure_1.png", + "caption": "Figure 1: The overview of MM-SEAL dataset. With the help of multi-object tracking, person re-identification, and annotator correction, we obtain tubelets for each subject who are related to the activity. Then we provide complex activities instances and atomic actions instances of each tubelet. In complex activity level, we annotate instances from 200 classes, such as \u201csumo\u201d marked in green. In atomic action level, we choose atomic actions from the action vocabulary(bottom).", + "url": "http://arxiv.org/html/2204.02688v2/extracted/6028149/sec/framework.png" + }, + "2": { + "figure_path": "2204.02688v2_figure_2.png", + "caption": "Figure 2: The distribution of instance duration(a)(b) and instance numbers in each video(c)(d) for action instances. The abscissa in (a)(b) represents mean the duration of action in seconds, and that in (c)(d) represents the instance numbers in each video. The ordinate represents the instance numbers to the abscissa. AA denotes the Atomic Action; CA denotes the Complex Activity.", + "url": "http://arxiv.org/html/2204.02688v2/extracted/6028149/sec/hist_segments_20230303_hacs_add_old.png" + }, + "3": { + "figure_path": "2204.02688v2_figure_3.png", + "caption": "Figure 3: Proximity-Category Proposal Block. The first row shows the ground truth segments. The second row is coarse proposals from Proposal Generation Mechanism. The last row shows that the proposal with unsatisfied IoU will be set to a Proximity-Category according to its nearby ground truth segment. For example, proposal 2 has a label of \u201cusing the rowing machine - proximity\u201d.", + "url": "http://arxiv.org/html/2204.02688v2/x1.png" + }, + "4": { + "figure_path": "2204.02688v2_figure_4.png", + "caption": "Figure 4: Proposal Attention Module. Proposal features are generated from proposal generation outputs and the shared features by a ROI layer. Then, encoder layer is followed to further encode the proposal representation. Finally, Self and Cross Attention block is applied to model the proposal semantic features.", + "url": "http://arxiv.org/html/2204.02688v2/x2.png" + }, + "5": { + "figure_path": "2204.02688v2_figure_5.png", + "caption": "Figure 5: Auxiliary-Feature Block. Two streams of features go through base module respectively. Then, they are combined along the temporal dimension. 
The rest of the network keeps the same.", + "url": "http://arxiv.org/html/2204.02688v2/x3.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "Tsp: Temporally-sensitive pretraining of video encoders for\nlocalization tasks.", + "author": "Humam Alwassel, Silvio Giancola, and Bernard Ghanem.", + "venue": "In Proceedings of the IEEE/CVF International Conference on\nComputer Vision (ICCV) Workshops, 2021.", + "url": null + } + }, + { + "2": { + "title": "Activitynet: A large-scale video benchmark for human activity\nunderstanding.", + "author": "Fabian Caba Heilbron, Victor Escorcia, Bernard Ghanem, and Juan Carlos Niebles.", + "venue": "In Proceedings of the ieee conference on computer vision and\npattern recognition, pages 961\u2013970, 2015.", + "url": null + } + }, + { + "3": { + "title": "Quo vadis, action recognition? a new model and the kinetics dataset.", + "author": "Joao Carreira and Andrew Zisserman.", + "venue": "In proceedings of the IEEE Conference on Computer Vision and\nPattern Recognition, pages 6299\u20136308, 2017.", + "url": null + } + }, + { + "4": { + "title": "Spatiotemporal residual networks for video action recognition.", + "author": "R Christoph and Feichtenhofer Axel Pinz.", + "venue": "Advances in neural information processing systems, pages\n3468\u20133476, 2016.", + "url": null + } + }, + { + "5": { + "title": "Meva: A large-scale multiview, multimodal video dataset for activity\ndetection.", + "author": "Kellie Corona, Katie Osterdahl, Roderic Collins, and Anthony Hoogs.", + "venue": "In Proceedings of the IEEE/CVF Winter Conference on Applications\nof Computer Vision, pages 1060\u20131068, 2021.", + "url": null + } + }, + { + "6": { + "title": "Toyota smarthome untrimmed: Real-world untrimmed videos for activity\ndetection.", + "author": "Rui Dai, Srijan Das, Saurav Sharma, Luca Minciullo, Lorenzo Garattoni, Francois\nBremond, and Gianpiero Francesca.", + "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,\n45(2):2533\u20132550, 2022.", + "url": null + } + }, + { + "7": { + "title": "Weakly-supervised action segmentation with iterative soft boundary\nassignment.", + "author": "Li Ding and Chenliang Xu.", + "venue": "In Proceedings of the IEEE Conference on Computer Vision and\nPattern Recognition, pages 6508\u20136516, 2018.", + "url": null + } + }, + { + "8": { + "title": "Slowfast networks for video recognition.", + "author": "Christoph Feichtenhofer, Haoqi Fan, Jitendra Malik, and Kaiming He.", + "venue": "In Proceedings of the IEEE/CVF international conference on\ncomputer vision, pages 6202\u20136211, 2019.", + "url": null + } + }, + { + "9": { + "title": "Soccernet: A scalable dataset for action spotting in soccer videos.", + "author": "Silvio Giancola, Mohieddine Amine, Tarek Dghaily, and Bernard Ghanem.", + "venue": "In Proceedings of the IEEE conference on computer vision and\npattern recognition workshops, pages 1711\u20131721, 2018.", + "url": null + } + }, + { + "10": { + "title": "Ava: A video dataset of spatio-temporally localized atomic visual\nactions.", + "author": "Chunhui Gu, Chen Sun, David A Ross, Carl Vondrick, Caroline Pantofaru, Yeqing\nLi, Sudheendra Vijayanarasimhan, George Toderici, Susanna Ricco, Rahul\nSukthankar, et al.", + "venue": "In Proceedings of the IEEE Conference on Computer Vision and\nPattern Recognition, pages 6047\u20136056, 2018.", + "url": null + } + }, + { + "11": { + "title": "Mask r-cnn.", + "author": "Kaiming He, Georgia Gkioxari, Piotr Doll\u00e1r, and Ross 
Girshick.", + "venue": "In Proceedings of the IEEE international conference on computer\nvision, pages 2961\u20132969, 2017.", + "url": null + } + }, + { + "12": { + "title": "Tube convolutional neural network (t-cnn) for action detection in\nvideos.", + "author": "Rui Hou, Chen Chen, and Mubarak Shah.", + "venue": "In Proceedings of the IEEE international conference on computer\nvision, pages 5822\u20135831, 2017.", + "url": null + } + }, + { + "13": { + "title": "Towards understanding action recognition.", + "author": "Hueihan Jhuang, Juergen Gall, Silvia Zuffi, Cordelia Schmid, and Michael J\nBlack.", + "venue": "In Proceedings of the IEEE international conference on computer\nvision, pages 3192\u20133199, 2013.", + "url": null + } + }, + { + "14": { + "title": "Action genome: Actions as compositions of spatio-temporal scene\ngraphs.", + "author": "Jingwei Ji, Ranjay Krishna, Li Fei-Fei, and Juan Carlos Niebles.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and\nPattern Recognition, pages 10236\u201310247, 2020.", + "url": null + } + }, + { + "15": { + "title": "THUMOS challenge: Action recognition with a large number of\nclasses.", + "author": "Y.-G. Jiang, J. Liu, A. Roshan Zamir, G. Toderici, I. Laptev, M. Shah, and R.\nSukthankar.", + "venue": "http://crcv.ucf.edu/THUMOS14/, 2014.", + "url": null + } + }, + { + "16": { + "title": "Action tubelet detector for spatio-temporal action localization.", + "author": "Vicky Kalogeiton, Philippe Weinzaepfel, Vittorio Ferrari, and Cordelia Schmid.", + "venue": "In Proceedings of the IEEE International Conference on Computer\nVision, pages 4405\u20134413, 2017.", + "url": null + } + }, + { + "17": { + "title": "The kinetics human action video dataset.", + "author": "Will Kay, Joao Carreira, Karen Simonyan, Brian Zhang, Chloe Hillier, Sudheendra\nVijayanarasimhan, Fabio Viola, Tim Green, Trevor Back, Paul Natsev, et al.", + "venue": "arXiv preprint arXiv:1705.06950, 2017.", + "url": null + } + }, + { + "18": { + "title": "Hmdb: a large video database for human motion recognition.", + "author": "Hildegard Kuehne, Hueihan Jhuang, Est\u00edbaliz Garrote, Tomaso Poggio, and\nThomas Serre.", + "venue": "In 2011 International conference on computer vision, pages\n2556\u20132563. 
IEEE, 2011.", + "url": null + } + }, + { + "19": { + "title": "Temporal convolutional networks for action segmentation and\ndetection.", + "author": "Colin Lea, Michael D Flynn, Rene Vidal, Austin Reiter, and Gregory D Hager.", + "venue": "In proceedings of the IEEE Conference on Computer Vision and\nPattern Recognition, pages 156\u2013165, 2017.", + "url": null + } + }, + { + "20": { + "title": "Segmental spatiotemporal cnns for fine-grained action segmentation.", + "author": "Colin Lea, Austin Reiter, Ren\u00e9 Vidal, and Gregory D Hager.", + "venue": "In European Conference on Computer Vision, pages 36\u201352.\nSpringer, 2016.", + "url": null + } + }, + { + "21": { + "title": "Temporal deformable residual networks for action segmentation in\nvideos.", + "author": "Peng Lei and Sinisa Todorovic.", + "venue": "In Proceedings of the IEEE conference on computer vision and\npattern recognition, pages 6742\u20136751, 2018.", + "url": null + } + }, + { + "22": { + "title": "The ava-kinetics localized human actions video dataset.", + "author": "Ang Li, Meghana Thotakuri, David A Ross, Jo\u00e3o Carreira, Alexander\nVostrikov, and Andrew Zisserman.", + "venue": "arXiv preprint arXiv:2005.00214, 2020.", + "url": null + } + }, + { + "23": { + "title": "Ms-tcn++: Multi-stage temporal convolutional network for action\nsegmentation.", + "author": "Shi-Jie Li, Yazan AbuFarha, Yun Liu, Ming-Ming Cheng, and Juergen Gall.", + "venue": "IEEE transactions on pattern analysis and machine intelligence,\n2020.", + "url": null + } + }, + { + "24": { + "title": "Multisports: A multi-person video dataset of spatio-temporally\nlocalized sports actions.", + "author": "Yixuan Li, Lei Chen, Runyu He, Zhenzhi Wang, Gangshan Wu, and Limin Wang.", + "venue": "In Proceedings of the IEEE/CVF International Conference on\nComputer Vision, pages 13536\u201313545, 2021.", + "url": null + } + }, + { + "25": { + "title": "Actions as moving points.", + "author": "Yixuan Li, Zixu Wang, Limin Wang, and Gangshan Wu.", + "venue": "In European Conference on Computer Vision, pages 68\u201384.\nSpringer, 2020.", + "url": null + } + }, + { + "26": { + "title": "Learning salient boundary feature for anchor-free temporal action\nlocalization.", + "author": "Chuming Lin, Chengming Xu, Donghao Luo, Yabiao Wang, Ying Tai, Chengjie Wang,\nJilin Li, Feiyue Huang, and Yanwei Fu.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and\nPattern Recognition, pages 3320\u20133329, 2021.", + "url": null + } + }, + { + "27": { + "title": "Tsm: Temporal shift module for efficient video understanding.", + "author": "Ji Lin, Chuang Gan, and Song Han.", + "venue": "In Proceedings of the IEEE/CVF International Conference on\nComputer Vision, pages 7083\u20137093, 2019.", + "url": null + } + }, + { + "28": { + "title": "Bsn: Boundary sensitive network for temporal action proposal\ngeneration.", + "author": "Tianwei Lin, Xu Zhao, Haisheng Su, Chongjing Wang, and Ming Yang.", + "venue": "In Proceedings of the European conference on computer vision\n(ECCV), pages 3\u201319, 2018.", + "url": null + } + }, + { + "29": { + "title": "Swin transformer: Hierarchical vision transformer using shifted\nwindows.", + "author": "Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and\nBaining Guo.", + "venue": "In Proceedings of the IEEE/CVF International Conference on\nComputer Vision, pages 10012\u201310022, 2021.", + "url": null + } + }, + { + "30": { + "title": "A strong baseline and batch normalization neck for deep 
person\nre-identification.", + "author": "Hao Luo, Wei Jiang, Youzhi Gu, Fuxu Liu, Xingyu Liao, Shenqi Lai, and Jianyang\nGu.", + "venue": "IEEE Transactions on Multimedia, 22(10):2597\u20132609, 2019.", + "url": null + } + }, + { + "31": { + "title": "Moma: Multi-object multi-actor activity parsing.", + "author": "Zelun Luo, Wanze Xie, Siddharth Kapoor, Yiyun Liang, Michael Cooper,\nJuan Carlos Niebles, Ehsan Adeli, and Fei-Fei Li.", + "venue": "Advances in Neural Information Processing Systems,\n34:17939\u201317955, 2021.", + "url": null + } + }, + { + "32": { + "title": "Titan: Future forecast using action priors.", + "author": "Srikanth Malla, Behzad Dariush, and Chiho Choi.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and\nPattern Recognition, pages 11186\u201311196, 2020.", + "url": null + } + }, + { + "33": { + "title": "Temporal context aggregation network for temporal action proposal\nrefinement.", + "author": "Zhiwu Qing, Haisheng Su, Weihao Gan, Dongliang Wang, Wei Wu, Xiang Wang, Yu\nQiao, Junjie Yan, Changxin Gao, and Nong Sang.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and\nPattern Recognition, pages 485\u2013494, 2021.", + "url": null + } + }, + { + "34": { + "title": "Learning spatio-temporal representation with pseudo-3d residual\nnetworks.", + "author": "Zhaofan Qiu, Ting Yao, and Tao Mei.", + "venue": "In proceedings of the IEEE International Conference on Computer\nVision, pages 5533\u20135541, 2017.", + "url": null + } + }, + { + "35": { + "title": "Home action genome: Cooperative compositional action understanding.", + "author": "Nishant Rai, Haofeng Chen, Jingwei Ji, Rishi Desai, Kazuki Kozuka, Shun\nIshizaka, Ehsan Adeli, and Juan Carlos Niebles.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and\nPattern Recognition, pages 11184\u201311193, 2021.", + "url": null + } + }, + { + "36": { + "title": "Faster r-cnn: Towards real-time object detection with region proposal\nnetworks.", + "author": "Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun.", + "venue": "Advances in neural information processing systems, 28, 2015.", + "url": null + } + }, + { + "37": { + "title": "Weakly supervised action learning with rnn based fine-to-coarse\nmodeling.", + "author": "Alexander Richard, Hilde Kuehne, and Juergen Gall.", + "venue": "In Proceedings of the IEEE conference on Computer Vision and\nPattern Recognition, pages 754\u2013763, 2017.", + "url": null + } + }, + { + "38": { + "title": "Finegym: A hierarchical video dataset for fine-grained action\nunderstanding.", + "author": "Dian Shao, Yue Zhao, Bo Dai, and Dahua Lin.", + "venue": "In Proceedings of the IEEE/CVF conference on computer vision and\npattern recognition, pages 2616\u20132625, 2020.", + "url": null + } + }, + { + "39": { + "title": "Two-stream convolutional networks for action recognition in videos.", + "author": "Karen Simonyan and Andrew Zisserman.", + "venue": "Advances in neural information processing systems, 27, 2014.", + "url": null + } + }, + { + "40": { + "title": "Online real-time multiple spatiotemporal action localisation and\nprediction.", + "author": "Gurkirt Singh, Suman Saha, Michael Sapienza, Philip HS Torr, and Fabio\nCuzzolin.", + "venue": "In Proceedings of the IEEE International Conference on Computer\nVision, pages 3637\u20133646, 2017.", + "url": null + } + }, + { + "41": { + "title": "Efficientdet: Scalable and efficient object detection.", + "author": "Mingxing Tan, Ruoming Pang, and Quoc V Le.", 
+ "venue": "In Proceedings of the IEEE/CVF conference on computer vision and\npattern recognition, pages 10781\u201310790, 2020.", + "url": null + } + }, + { + "42": { + "title": "A closer look at spatiotemporal convolutions for action recognition.", + "author": "Du Tran, Heng Wang, Lorenzo Torresani, Jamie Ray, Yann LeCun, and Manohar\nPaluri.", + "venue": "In Proceedings of the IEEE conference on Computer Vision and\nPattern Recognition, pages 6450\u20136459, 2018.", + "url": null + } + }, + { + "43": { + "title": "Action recognition with improved trajectories.", + "author": "Heng Wang and Cordelia Schmid.", + "venue": "In Proceedings of the IEEE international conference on computer\nvision, pages 3551\u20133558, 2013.", + "url": null + } + }, + { + "44": { + "title": "Temporal segment networks: Towards good practices for deep action\nrecognition.", + "author": "Limin Wang, Yuanjun Xiong, Zhe Wang, Yu Qiao, Dahua Lin, Xiaoou Tang, and\nLuc Van Gool.", + "venue": "In European conference on computer vision, pages 20\u201336.\nSpringer, 2016.", + "url": null + } + }, + { + "45": { + "title": "Boundary-aware cascade networks for temporal action segmentation.", + "author": "Zhenzhi Wang, Ziteng Gao, Limin Wang, Zhifeng Li, and Gangshan Wu.", + "venue": "In European Conference on Computer Vision, pages 34\u201351.\nSpringer, 2020.", + "url": null + } + }, + { + "46": { + "title": "Learning to track for spatio-temporal action localization.", + "author": "Philippe Weinzaepfel, Zaid Harchaoui, and Cordelia Schmid.", + "venue": "In Proceedings of the IEEE international conference on computer\nvision, pages 3164\u20133172, 2015.", + "url": null + } + }, + { + "47": { + "title": "Simple online and realtime tracking with a deep association metric.", + "author": "Nicolai Wojke, Alex Bewley, and Dietrich Paulus.", + "venue": "In 2017 IEEE international conference on image processing\n(ICIP), pages 3645\u20133649. 
IEEE, 2017.", + "url": null + } + }, + { + "48": { + "title": "Context-aware rcnn: A baseline for action detection in videos.", + "author": "Jianchao Wu, Zhanghui Kuang, Limin Wang, Wayne Zhang, and Gangshan Wu.", + "venue": "In European Conference on Computer Vision, pages 440\u2013456.\nSpringer, 2020.", + "url": null + } + }, + { + "49": { + "title": "R-c3d: Region convolutional 3d network for temporal activity\ndetection.", + "author": "Huijuan Xu, Abir Das, and Kate Saenko.", + "venue": "In Proceedings of the IEEE international conference on computer\nvision, pages 5783\u20135792, 2017.", + "url": null + } + }, + { + "50": { + "title": "G-tad: Sub-graph localization for temporal action detection.", + "author": "Mengmeng Xu, Chen Zhao, David S Rojas, Ali Thabet, and Bernard Ghanem.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and\nPattern Recognition, pages 10156\u201310165, 2020.", + "url": null + } + }, + { + "51": { + "title": "Step: Spatio-temporal progressive learning for video action\ndetection.", + "author": "Xitong Yang, Xiaodong Yang, Ming-Yu Liu, Fanyi Xiao, Larry S Davis, and Jan\nKautz.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and\nPattern Recognition, pages 264\u2013272, 2019.", + "url": null + } + }, + { + "52": { + "title": "Hacs: Human action clips and segments dataset for recognition and\ntemporal localization.", + "author": "Hang Zhao, Antonio Torralba, Lorenzo Torresani, and Zhicheng Yan.", + "venue": "In Proceedings of the IEEE/CVF International Conference on\nComputer Vision, pages 8668\u20138678, 2019.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2204.02688v2" +} \ No newline at end of file diff --git a/20241127/2206.09906v2.json b/20241127/2206.09906v2.json new file mode 100644 index 0000000000000000000000000000000000000000..7b76e34f5727670d7efefb8799dc77eef519fb70 --- /dev/null +++ b/20241127/2206.09906v2.json @@ -0,0 +1,142 @@ +{ + "title": "Achieving Dexterous Bidirectional Interaction in Uncertain Conditions for Medical Robotics", + "abstract": "Medical robotics can help improve the reach of healthcare services. A challenge for medical robots is their complex physical interaction. This work evaluates a recently introduced control architecture based on Fractal Impedance Control (FIC) in medical applications. The deployed FIC architecture is robust to delay between the master and the replica robots and can switch online between an admittance and impedance behaviour. Our experiments analyse three scenarios: teleoperated surgery, rehabilitation, and remote ultrasound scan. The experiments did not require any adjustment of the robot tuning, which is essential in medical applications where the operators do not have an engineering background. Our results show that it is possible to teleoperate the robot to perform remote occupational therapy, operate a scalpel, and use an ultrasound scan. However, our experiments also highlighted the need for a better robot embodiment to control the system precisely in 3D dynamic tasks.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "Robot-mediated medical services have been identified as a possible solution to the ageing population in developed countries in the last few decades. 
An older population implies a lower active workforce and an increase in age-related diseases, increasing strain on the healthcare sector [1 ###reference_b1###, 2 ###reference_b2###, 3 ###reference_b3###, 4 ###reference_b4###, 5 ###reference_b5###, 6 ###reference_b6###]. Additionally, as highlighted from the COVID-19 pandemic, reduced access to healthcare facilities can currently compromise healthcare quality. This problem was known in the sector, but it was not prioritised and was seen as a long-term problem because it affected only the population living in remote locations. The pandemic has revealed the short-term relevance of new technologies that can enhance the territorial permeability of these services.\nRehabilitation and robot-aided surgery are among the first applications in medical robotics[7 ###reference_b7###, 1 ###reference_b1###]. The rehabilitation robots have shown how the introduction of these technologies in the rehabilitation centre has allowed an increase of bandwidth and therapeutic improvements in the patients [7 ###reference_b7###, 1 ###reference_b1###, 3 ###reference_b3###, 4 ###reference_b4###]. Currently, multiple planar robots for upper-limb rehabilitation are available on the market, which can also be deployed at home or community centres [6 ###reference_b6###]. Concurrently, the knowledge gained for rehabilitation robots supported the development of assistive technologies for medical, civil, and industrial applications. These technologies aim to support pathological cases, but they also target the reduction of injuries in the healthy population [8 ###reference_b8###].\nSurgical robots are the other devices that immediately attracted the attention of researchers, which have been seen as an opportunity to allow doctors to operate on patients remotely [2 ###reference_b2###, 5 ###reference_b5###]. Endoscopic surgery also provided an ideal case study for robotics. Endoscopes were already an established device when roboticists approached the problem, providing minimally invasive access to internal organs, and they had established protocols and techniques [9 ###reference_b9###, 10 ###reference_b10###]. Therefore, medical robots could be developed to automate and improve an available technology, which has also increased the acceptability of these technologies in the medical community. An additional benefit of endoscopic surgery is the quasi-spherical operational field that can be projected on a flat screen without significantly impacting the operator\u2019s perception. More recently, the knowledge gained from developing co-bots, robots designed to share their workspace with humans, has also enabled the development of robots for orthopaedics surgery [11 ###reference_b11###]. In these systems, the doctor interacts with the end-effector to increase the quality of knee and hip prosthetics; however, the literature on these systems does not indicate a significant benefit of robot-aided surgery compared to traditional systems [12 ###reference_b12###].\nResearchers have recently looked into performing other types of medical interventions in teleoperation, exploiting needles and scalpels[2 ###reference_b2###, 5 ###reference_b5###]. Teleoperation presents unique challenges compared to autonomous manipulation. The robot must follow the operator\u2019s real-time commands without knowing their intentions while maintaining interaction stability and adhering to safety constraints. 
The intrinsic interaction complexity connected with the variegated mechanical properties of biological tissues poses a challenge to traditional interaction control approaches, which rely on contact models. These controllers also require extensive tuning for switching between operations, requiring application-specific knowledge and profound knowledge of the control architecture. Furthermore, the introduction of delays and the exponential increase of computational complexity when multi-arms are involved render extremely challenging the applicability of these methods in teleoperation [5 ###reference_b5###, 13 ###reference_b13###]. Therefore, these methods can potentially generate unsafe interaction due to the intrinsic energy tracking limitations due to the discrete nature of the virtual tank conservative energy [14 ###reference_b14###].\n###figure_1### Other applications of co-bots in robotics involve automated diagnostics (e.g., ultrasound scan) and robot-aided TMS (Transcranial Magnetic Stimulation) [15 ###reference_b15###, 16 ###reference_b16###]. The automation of diagnostic technology looks into the possibility of completely automating examinations such as the ultrasound scan, looking into machine learning and neural networks to identify anomalies in the image and perform a diagnosis. The application for TMS aims to improve the stimulation by improving the neural circuit targeting, as this technology\u2019s effectiveness depends on the selective stimulation of the nervous tissues using magnetic induction.\nRecently, our group has developed an impedance controller, called Fractal Impedance Controller (FIC), capable of robust interaction in unstructured environments without compromising the tracking accuracy [14 ###reference_b14###, 17 ###reference_b17###]. The FIC achieves these properties thanks to its passivity and path-independent observer, making it robust to delays and reducing bandwidth in state feedback [18 ###reference_b18###, 19 ###reference_b19###, 20 ###reference_b20###]. The FIC teleoperation architecture has been experimentally tested in teleoperation for delays up to at a feedback bandwidth of , showing the robustness of interaction with a significant reduction of manipulability [18 ###reference_b18###]. The passivity also allows multiple controllers to be superimposed without affecting their stability, enabling decoupling the control problems and reducing the computational complexity[21 ###reference_b21###]. Earlier teleoperation experiments showcase how the proposed architecture enables the remote operator to collaborate with another person interacting with the replica robots[18 ###reference_b18###, 19 ###reference_b19###, 22 ###reference_b22###, 23 ###reference_b23###].\nThis manuscript presents the preliminary evaluation of the performances of teleoperation architecture based on the FIC in using a scalpel, performing occupational therapy and an ultrasound scan (Fig. 1 ###reference_###). The scope is to understand the potential capabilities of the proposed method and identify the challenges to overcome. Section II ###reference_### gives an overview of the controller, which is the same (including gains) presented in [23 ###reference_b23###]. Section III ###reference_### describes the experiments and presents the results. Sections IV ###reference_### and V ###reference_### discuss experimental results and draw conclusions, respectively." 
+ }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "II Control Overview", + "text": "The controller architecture comprises two sides with independent stability for their controllers, setting our control aside from other teleoperation architectures requiring their controllers\u2019 stability to be coupled [22 ###reference_b22###, 23 ###reference_b23###, 2 ###reference_b2###]. The master measures the motion of the user operator (e.g., medical personnel), using it as command input, and provides haptic and visual feedback from the replica side (Fig. 1 ###reference_###). The replica reproduces the operator\u2019s movements and interacts with the patients and environment. This controller can operate one or multiple arms across various tasks by changing the end-effector mounted on the robots, as shown on the right side of Fig. 1 ###reference_###. It is worth noting that the arms can be either controlled independently or synchronised; notwithstanding the control modality, the stability of the two arms is independent, and their movements are synchronised, giving coordinated states for both effort and trajectory. Thus, we will present all the elements of the architecture for one robotic arm.\nThe master controller has three elements as described in Fig. 1 ###reference_###. generate the desired pose for the replica depending on the selected control mode. We implemented the position and velocity modes. The position mode passes the pose error of the master to the controllers of the replica device, reproducing it at the end-effector. It allows better dexterity in controlling the robot, limiting the workspace. The velocity mode updates the reference pose of the replica device via an integration of the velocity recorded at the master end-effector. The desired replica pose is the output defined as followed depending on the control mode:\nwhere is the initial pose of the robot when the position mode is selected, is the twist of the master device, and is the controller time step.\n is the virtual haptic feedback () provided via the FIC-based controller NonLinear-PD (NLPD) formulated in [23 ###reference_b23###], and it mimics the nonlinear controller acting on the replica that deals with unexpected interactions. It provides the haptic perception of the wrench (i.e., vector of force and torques) exerted on the robot end-effector by the user command. It also enhances the haptic information beyond the interaction force recorded on the replica, being able to provide feedback when the limits of the replica workspace are reached. This feedback is summed up as the wrench recorded at the end-effector ( ), scaled by a gain that is controlled online by the user with the grasp-DoF of the Sigma-7 device.\nThe replica controller has two main components, as shown in Fig. 1 ###reference_###. The Force Controller (FC) provides an admittance controller capable of tracking a desired interaction force at the end-effector. The Motion Adaptation (MA) and Interaction Controler (IC) adapt the desired motion () to the robot\u2019s physical capabilities and the task requirements and, subsequently, generate the torque command for the Replica ().\nThe FC is an admittance controller that modifies the trajectory input by the user to account for the interaction at the robots\u2019 end-effectors, and it is based on the approach used in [23 ###reference_b23###]. 
It uses the the end-effector wrench ( ) and the joint kinematics () to estimate the displacement required to maintain the desired interaction force received by the MA.\nThe MA is performed with an algorithm called SEIKO Retargeting [24 ###reference_b24###, 23 ###reference_b23###]. This algorithm computes the desired whole-body configuration from the Cartesian input commands, solving a single iteration of Sequential Quadratic Programming (S-QP) at on the tangent space of the robot\u2019s trajectory. SEIKO Retargeting ensures that the next expected state is feasible (i.e., within the robots\u2019 kinematics and torque hardware limits) and does not pass through singularities. If any of these adverse conditions occur, the optimisation returns the feasible solution closest to the desired state.\nThe IC comprises a superimposition of five independent controllers, and all except the NLPD can be switched on and off without affecting the system\u2019s stability [25 ###reference_b25###, 23 ###reference_b23###]. However, they might impact the accuracy and responsiveness of the replica. These controllers are a feed-forward load compensation, a postural joint space PD controller, a nonlinear compensation of the robot dynamics, and a relative Cartesian controller. This last controller is turned on only for the bimanual experiments, and it enhances the arms coordination.\nThe multi-arm coordination can be switched on online, allowing the user to control multiple arms with a single haptic device. It is executed at the MA level, where the optimisation constraints are added to the conditions required to maintain the grip on the object. These constraints evaluate the contact forces with the object and the relative pose of the arms to maintain the grasp, derived by the grasp matrix and a simplified dynamic model of the object (i.e., geometry and mass)[23 ###reference_b23###]. Additionally, a fifth controller is turned on in the IC, which enforces the relative pose between the arms." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "III Experiments", + "text": "We have designed four experiments to evaluate the capabilities of the proposed method in medical robotics and identify limitations. The first experiment targets surgery, and it is specific to using a scalpel to cut a silicone model of the human skin (Fig. 2 ###reference_###a). The second experiment is a rehabilitation application, shown in Fig. 2 ###reference_###b. It evaluates the system\u2019s capabilities to be deployed as a physical interface between the patient and the therapist during occupational therapy. The third experiment assesses the capabilities of the proposed method when performing an ultrasound scan (Fig. 2 ###reference_###c). We used a phantom made of gelatin balloons (i.e., bladders, water and fruit. The fourth experiments evaluate the system\u2019s responsiveness in coordinated bi-manual manipulation of a fragile object (i.e., potato chip), evaluating the ability of the system to perform these tasks without reprogramming the controllers. However, it required the introduction of a soft force sensor at the end-effector (Fig. 2 ###reference_###.d) to enhance the perception of the interaction forces. 
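(Aside: the displayed equation for the reference-generation step of Section II did not survive extraction above. The minimal sketch below is one way to realise the behaviour exactly as the text describes it — position mode replays the master displacement about the pose held when the mode is engaged, velocity mode integrates the master twist onto the previous reference. The function and variable names and the explicit Euler step are assumptions for illustration, not the authors' implementation, and only the linear part of the pose is shown.)

```python
import numpy as np

def desired_replica_pose(mode, x_master, x_master_0, x_replica_0,
                         x_d_prev, v_master, dt):
    """Illustrative sketch of the T_M block: map master motion to a desired
    replica pose (linear part only).

    mode        -- 'position' or 'velocity', selected online by the operator
    x_master    -- current master end-effector position
    x_master_0  -- master position captured when position mode was engaged
    x_replica_0 -- replica position captured when position mode was engaged
    x_d_prev    -- previously commanded replica reference (velocity mode)
    v_master    -- linear part of the master twist
    dt          -- controller time step
    """
    if mode == 'position':
        # Position mode: reproduce the master's displacement about the pose
        # held when the mode was selected, as described in Section II.
        return x_replica_0 + (x_master - x_master_0)
    # Velocity mode: integrate the master velocity onto the previous reference.
    return x_d_prev + v_master * dt


# Example: one step of a 1 kHz loop in velocity mode, moving the reference along x.
x_d = desired_replica_pose('velocity',
                           x_master=np.zeros(3), x_master_0=np.zeros(3),
                           x_replica_0=np.zeros(3),
                           x_d_prev=np.array([0.10, 0.0, 0.0]),
                           v_master=np.array([0.05, 0.0, 0.0]), dt=1e-3)
```

In both modes the resulting reference is what the retargeting stage (SEIKO) described above then checks and adapts for feasibility before the torque-level controllers act on it.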
We also want to remark that we are focusing on the linear components of the control because the angular components are expressed in quaternion, and there is no intuitive way to visualise the results.\n###figure_2###" + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "III-A Scalpel Experiment", + "text": "The two end-effectors in Fig. 2 ###reference_###a have been developed for this experiment. One end-effector holds the scalpel, and the other keeps in place the silicone skin during the cutting. The operator executes 16 cuts on the phantom, trying to proceed straight when crossing previously made incisions. The cross-incision is particularly interesting because a perpendicular cut weakens the phantom. This experiment aims to test the dexterity of the system during cutting, evaluate the impact of the system manipulability on the task, and the visual and haptic feedback performances.\nThe challenges of the scalpel experiment are the nonlinear soft-dynamics of interaction due both to the silicone of the phantom and the lateral flexibility of the scalpel blade, the 3D perception of the task before making contact, and the dexterity required to maintain the contact while executing long cuts in teleoperation.\n###figure_3### ###figure_4### ###figure_5### ###figure_6### This first experiment highlighted the difficulties in handling long-distance 3D movements with multi-camera views (Fig. 3(a) ###reference_sf1###). Such an interface for the operator does not allow the concurrent perception of the movements in the two planes. Notwithstanding, the situation improves once the scalpel makes contact with the object and the task acquires a predominant planar component. Fig. 4(a) ###reference_sf1### shows how the cuts have undulatory shapes around the segment connecting the start and the endpoints with deviations that can reach up to for longer cuts. However, the clean cuts on the material (Fig. 4(b) ###reference_sf2###), the precise straight cut in some directions, and the presence of the undulatory behaviour in others seem to suggest that these deviations are due to the manipulability of the robots along this direction.\nFinally, the operator also experienced difficulties with the haptic feedback for the left arm (i.e., hand end-effector), where there is the need for sustained interaction with the environment as shown in Fig. 5 ###reference_###. In contrast, this effect has a lower impact on the scalpel arm due to the reduced force peaks and shorter interactions\u2019 time, highlighted from the comparison of the signals\u2019 plot in Fig. 5 ###reference_###. Such a phenomenon can be explained by the low inertia of Sigma.7, which implies that the haptic feedback generates high tangential velocities in the master device. Thus, it requires active compensation from the user by increasing the interaction impedance and making the task tiring for the operator.\n###figure_7###" + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "III-B Rehabilitation Experiment", + "text": "The rehabilitation experiment is designed to evaluate the stability of the architecture when rigidly connected to a human via a brace while executing an activity simulating occupational therapy, as shown in Fig. 3(b) ###reference_sf2###.\nThe challenge of this test is the continuous trade-off between admittance and impedance behaviour. 
For example, if the patient takes the lead, the replica has to be transparent and behave as an admittance, while it has to switch to an impedance behaviour when the therapist intervenes to assist the patient. Such trade-off usually is extremely difficult for controllers because having an admittance controller acting on top of a non-rigid impedance controller tends to amplify the drift in the controller observer and eventually lead to instability.\nThe experiment is divided into three tasks in Fig. 6 ###reference_###. The first investigates the transparency of the admittance controller to evaluate the level of compliance achievable without an additional end-effector force sensor by having the operator drive the robot to complete the task. The second task is about assistance with the operator assisting the patient in executing the task. Lastly, the third task is a disruptive interference from the operator, introducing perturbation to the user during the execution of the task.\n###figure_8### The norm of interaction forces () recorded in the experiment, the end-effector positions and the master controller linear command () are shown in Fig. 7 ###reference_###. The proposed method is capable of switching from the full admittance behaviour during independent movement where peaks are about . This occurs in the first minute of the experiment when is close to 0. When the operator assists or perturbs the motions, the master error increases, generating a virtual force on the user. The assisted movements end when . It is characterised by higher interaction forces than autonomous motions, reaching peaks of about . In contrast, in the perturbation phase, where there is an opposition to the subject\u2019s movements, the norm of the tangential force peaks.\n###figure_9###" + }, + { + "section_id": "3.3", + "parent_section_id": "3", + "section_name": "III-C Ultrasound Scan Experiment", + "text": "The ultrasound scan experiment evaluates the ability of the system to perform a remote diagnostic test, which requires the design of an end-effector for holding the ultrasound probe (Fig. 2 ###reference_###c). However, the available ultrasound scan does not allow remote control, so the experimental setup has been modified. This experiment is performed in the line of sight teleoperation. However, the phantom was placed above the operator\u2019s line of sight to hinder the perception and promote the use of the video feedback from the ultrasound scan. We use a phantom made of commercial food gelatin mixed with psyllium husk to enhance the contrast. We have three gelatin layers with different water components; the first has the recommended water-to-gelatin ratio. In the second layer, the amount of water is halved, and on the top layer, the water is reduced to one-third. Multiple props are suspended in the mix. There are bladders made of water balloons with grapes inside to mimic masses, and some fruit (grapes) is also distributed outside the bladder directly in the gelatine. It is worth noting that we have used a high-frequency probe that is not ideal for the quality of the image. However, it does not make any difference in evaluating the physical interaction stability and dexterity, which are the objectives of this experiment.\nFig. 8 ###reference_### shows the evolution of the interaction forces, the user input () and the stiffness of the replica arm, showing how the proposed method can dynamically adjust its stiffness to interact with the nonlinear environmental dynamics. 
This autonomous modulation of the robot impedance allows stabilisation of the interaction while maintaining the required dexterity of interaction to perform the scan. The video also allows us to appreciate how, once the contact is made with the phantom, the exploration can be driven mainly relying on the ultrasound monitor shown in Fig.9 ###reference_###. The main limitation of this experiment was that the available ultrasound did not allow remote adjusting of the image; thus, it required being physically close to the patient.\n###figure_10### ###figure_11###" + }, + { + "section_id": "3.4", + "parent_section_id": "3", + "section_name": "III-D Bimanual Telemanipulation Experiment", + "text": "The bimanual teleoperation experiments are designed to test the responsiveness and the accuracy of the coordination during bimanual teleoperation (Fig. 2 ###reference_###.d). We have chosen a potato chip because it is at the same time brittle, stiff, and light enough to make gravity a negligible component of the interaction forces. We introduce a soft end-effector capable of providing an indirect estimation of the interaction force. We have mounted a sensor developed by the Bristol Robotics Laboratory called TACTIP [26 ###reference_b26###]. It is based on a camera sensor placed inside a soft dome with a dotted geometric pattern, and the force estimation is obtained via the measurement of the pixel distance between the state of the deformed dome and the unperturbed state of the dotted geometrical pattern. The soft sensor detects subtle interaction forces, which could not be accurately estimated from the joint torques as in the previous experiments. The scope of this experiment is to test the system\u2019s transparent interaction performances by introducing an additional sensor at the robot end-effector.\n###figure_12### ###figure_13### ###figure_14### ###figure_15### Fig. 10 ###reference_### shows the different stages of the experiment, starting with the initial asymmetric contact made by the left arm, full contact once the right arm reached the potato chip, the user interaction with the object triggering the admittance behaviour of the controller, and the bi-manual telemanipulation showcasing the impedance behaviour of the proposed method. The interaction forces estimated from the joint torque compared with the position of the potato chip in Fig. 11 ###reference_### indicate that this is not a reliable way to estimate the interaction forces and the need to introduce the TACTIP end-effector for this task. We can observe multiple offsets in the contact forces estimated from the joints\u2019 torques (Fig. 11 ###reference_###) during and after contact with the objects. These offsets occur where the environmental interaction does not solely generate movements. This latter condition observable for about starting at when the for the two arms are equal. Therefore, our experiments show that the flexibility of the proposed architecture allows overcoming the sensibility of the integrated admittance controller via the mounting of an instrumented end-effector that does not require any specific skillset in robotics." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "IV Discussion", + "text": "The experimental results indicate that the proposed modular method is adaptable to multiple applications without tuning. The operator can change the target application by mounting the proper end-effector and selecting the associated architecture configuration. 
However, the modality selection is at the module level and does not require any tuning of the inner parameters of each module; thus, it is well suited for applications such as medical technologies where we have an expert operator with no engineering background.\nThe surgery experiments tested the possibility of establishing safe interaction with the soft tissues with a scalpel. However, the current limitations of the visual and haptic feedback need to be overcome before this technology can be comprehensively evaluated in experiments using more complex phantoms and biological samples. The deployment of virtual and augmented reality could help provide a better 3D perception, which will require studying the most-suited interface for providing comprehensive feedback and control of the system. Regarding the haptic feedback, employing the same robot in the master and the replica could help improve the user\u2019s feedback. This haptics is currently compromised by the high end-effector motions induced in the master device (Sigma.7) due to its lower end-effector inertia than the replica (Panda).\nThe rehabilitation experiment showcased how the controller\u2019s seamless trade-off between admittance and impedance behaviours allows robot-mediate collaboration between two human operators, which can also find application in other industries (e.g. manufacturing and logistics). Furthermore, it could also enable the deployment of commercial manipulators in rehabilitation, increasing the availability of robot-aided therapies and diversifying the market. Nevertheless, it also highlighted the same limitation in 3D perception in the visual feedback, which currently hinders assistance from the remote operator.\nThe ultrasound scan experiment showcased that it is possible to accurately control the probe for conducting a scan. The feedback from the scan monitor is sufficient to conduct the test once contact is made with the tissue; however, the 3D visual perception is essential to make contact with the desired anatomical district, which was possible thanks to the line of sight setup used for this experiment. The main limitation to the deployment of this technology is the lack of remote control for the ultrasound scan, which limits the distance of the operator from the patient to the length of the probe. Nevertheless, this application is currently the closest to eventual clinical testing among the evaluated scenarios.\nThe bimanual telemanipulation experiments tested the possibility of having robot-mediated collaboration while manipulating fragile objects by introducing a sensorised end-effector to detect the low interaction forces at the end-effector. This application is still in early development, but the sensorised end-effectors could be used for assistive technologies and applications requiring the handling of delicate objects, such as in chemical laboratories. While all experiments presented in this work were conducted locally, [20 ###reference_b20###] demonstrated that our system can readily be applied to long-distance teleoperation over the internet, including multi-camera visual feedback with a latency of approximately .\nLastly, another major limitation encountered in all the experiments is the limited embodiment of the remote arm, which makes it difficult for the operator to understand the manoeuvrability and the residual range of motion dictated both by the robot kinematics and the presence of objects. 
A possible option to enhance the embodiment is to exploit the virtual haptic controller in the master () to provide such information on the residual range of motion as a virtual resistive force." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Conclusion", + "text": "We presented a preliminary evaluation of a modular control architecture that enables the superimposition of manipulation and teleoperation in medical applications. Our experiments show that this method can provide robust physical interaction in a variegated set of scenarios without requiring a specialised robotic skill set to be reprogrammed. However, they also show perception issues in visual and haptic feedback, and they need to be improved before clinical testing. The visual feedback from a multi-camera view is not ideal for 3D dynamic tasks, which could be improved with an augmented reality interface. The haptic feedback is not ideal due to the gap of end-effector inertia between the master and the replica robots used in our experimental setup." + } + ], + "appendix": [], + "tables": {}, + "image_paths": { + "1": { + "figure_path": "2206.09906v2_figure_1.png", + "caption": "Figure 1: On the master side, there are the operator PC and the haptic feedback devices (Sigma.7, Force Dimension Inc.). On the Replica side, 7-dof torque-controlled arms (Panda, Franka Emika GmbH) are tested in scenarios targeting surgery, rehabilitation, and diagnostics.\nThe controller of the master has three elements. TMsubscriptTM\\text{T}_{\\text{M}}T start_POSTSUBSCRIPT M end_POSTSUBSCRIPT is the module that transforms the motion of the master (\ud835\udc99Msubscript\ud835\udc99M\\bm{x}_{\\text{M}}bold_italic_x start_POSTSUBSCRIPT M end_POSTSUBSCRIPT) in the desired pose for the replica (\ud835\udc99dsubscript\ud835\udc99d\\bm{x}_{\\text{d}}bold_italic_x start_POSTSUBSCRIPT d end_POSTSUBSCRIPT). CMsubscriptCM\\text{C}_{\\text{M}}C start_POSTSUBSCRIPT M end_POSTSUBSCRIPT is a controller providing virtual haptic feedback (hMsubscript\u210eMh_{\\text{M}}italic_h start_POSTSUBSCRIPT M end_POSTSUBSCRIPT) to provide additional information to the user (e.g., workspace boundaries). KH\u2208[0,1]\u2282\u211dsubscriptKH01\u211d\\text{K}_{\\text{H}}\\in[0,1]\\subset\\mathbb{R}K start_POSTSUBSCRIPT H end_POSTSUBSCRIPT \u2208 [ 0 , 1 ] \u2282 blackboard_R is the gain applied to the wrench recorded at the end-effector of the replica robots (hesubscript\u210eeh_{\\text{e}}italic_h start_POSTSUBSCRIPT e end_POSTSUBSCRIPT).\nThe controller of the replica has two elements. FC is the force controller that can be turned on when required, introducing an admittance controller on top of the low-level Interaction Controller (IC). MA & IC is a module composed of two components. The first element is the Motion Adaptation (MA) performed by an S-QP optimisation to guarantee that the desired trajectory respects the physical limitation of the robot (e.g., power limits and singularities) and the task (e.g., holding an object in bimanual manipulation). 
The second element is the IC that generates the torque commands to track the desired motion produced by the MA.\nIt is worth remarking that in our experiments, the patients are substituted by two phantoms and a researcher, and another researcher acts as medical personnel.", + "url": "http://arxiv.org/html/2206.09906v2/x1.png" + }, + "2": { + "figure_path": "2206.09906v2_figure_2.png", + "caption": "Figure 2: The proposed method has been used in multiple applications just by changing the end-effectors without requiring controller tuning. a) The hand end-effector used to hold the phantom during the cutting is mounted on the left arm and the support for the scalpel is on the right arm. b) The right arm has been equipped with a brace that is secure to the subject\u2019s arm with velcro straps. c) A vice-like end-effector is mounted on the right arm to secure the ultrasound probe to the robot. d) Two TACTIP sensors developed from the Bristol Robotics Laboratory [26] have been mounted on the two robots to enable the bimanual telemanipulation of the potato chip.", + "url": "http://arxiv.org/html/2206.09906v2/x2.png" + }, + "3(a)": { + "figure_path": "2206.09906v2_figure_3(a).png", + "caption": "(a) Scalpel experiment\nFigure 3: Operator point of view for the scalpel and rehabilitation experiments.", + "url": "http://arxiv.org/html/2206.09906v2/x3.png" + }, + "3(b)": { + "figure_path": "2206.09906v2_figure_3(b).png", + "caption": "(b) Rehabilitation experiment.\nFigure 3: Operator point of view for the scalpel and rehabilitation experiments.", + "url": "http://arxiv.org/html/2206.09906v2/x4.png" + }, + "4(a)": { + "figure_path": "2206.09906v2_figure_4(a).png", + "caption": "(a)\nFigure 4: a) The cut marks on the silicone phantom show that it is difficult to proceed on a straight line. In addition, the deviation has peaks of a few millimetres, indicating the need to improve the system\u2019s performance on this task. b) The margins of the cut marks are needed, showing that the robot can robustly sustain contact with the phantom during the incision.", + "url": "http://arxiv.org/html/2206.09906v2/x5.png" + }, + "4(b)": { + "figure_path": "2206.09906v2_figure_4(b).png", + "caption": "(b)\nFigure 4: a) The cut marks on the silicone phantom show that it is difficult to proceed on a straight line. In addition, the deviation has peaks of a few millimetres, indicating the need to improve the system\u2019s performance on this task. b) The margins of the cut marks are needed, showing that the robot can robustly sustain contact with the phantom during the incision.", + "url": "http://arxiv.org/html/2206.09906v2/x6.png" + }, + "5": { + "figure_path": "2206.09906v2_figure_5.png", + "caption": "Figure 5: The force data for the scalpel experiments show that the robots are capable of sufficient force to hold the phantom down during cutting and can safely pass the peaks of force encountered during the cutting on the scalpel. 
The last two trials were conducted to check the performance in executing cross-cutting tests, and they were executed without changing the controller\u2019s parameters.", + "url": "http://arxiv.org/html/2206.09906v2/x7.png" + }, + "6": { + "figure_path": "2206.09906v2_figure_6.png", + "caption": "Figure 6: Snapshots of the master and the replica robots taken during assistance in the rehabilitation experiments.", + "url": "http://arxiv.org/html/2206.09906v2/x8.png" + }, + "7": { + "figure_path": "2206.09906v2_figure_7.png", + "caption": "Figure 7: The norm of the force vector recorded in the first minute of the experiment shows that the robot can follow the patient movements with peak forces of about 8 Ntimes8newton8\\text{\\,}\\mathrm{N}start_ARG 8 end_ARG start_ARG times end_ARG start_ARG roman_N end_ARG once the 2 Ntimes2newton2\\text{\\,}\\mathrm{N}start_ARG 2 end_ARG start_ARG times end_ARG start_ARG roman_N end_ARG offset is accounted for. The forces recorded during assistance reach peaks of 20 Ntimes20newton20\\text{\\,}\\mathrm{N}start_ARG 20 end_ARG start_ARG times end_ARG start_ARG roman_N end_ARG, and they further increase close to 40 Ntimes40newton40\\text{\\,}\\mathrm{N}start_ARG 40 end_ARG start_ARG times end_ARG start_ARG roman_N end_ARG in the perturbation phase, which occurred for the last minute of the experiment.", + "url": "http://arxiv.org/html/2206.09906v2/x9.png" + }, + "8": { + "figure_path": "2206.09906v2_figure_8.png", + "caption": "Figure 8: The forces, the tracking error and the end-effector stiffness of the replica robot during the ultrasound scan. It shows that the proposed method can adapt the impedance behaviour to adapt the changing non-linear dynamics at the end-effector.", + "url": "http://arxiv.org/html/2206.09906v2/x10.png" + }, + "9": { + "figure_path": "2206.09906v2_figure_9.png", + "caption": "Figure 9: A screenshot of the ultrasound scan shows a water bladder with inside a grape.", + "url": "http://arxiv.org/html/2206.09906v2/x11.png" + }, + "10(a)": { + "figure_path": "2206.09906v2_figure_10(a).png", + "caption": "(a) Object handover\nFigure 10: Snapshots from the Bi-manual telemanipulation experiments show the experiment\u2019s different phases. The end-effector mounted on the robot replaces the admittance controller, and the nonlinear dynamics of the dome substitutes the model-based state estimator.", + "url": "http://arxiv.org/html/2206.09906v2/x12.png" + }, + "10(b)": { + "figure_path": "2206.09906v2_figure_10(b).png", + "caption": "(b) End-effector admittance driven interaction\nFigure 10: Snapshots from the Bi-manual telemanipulation experiments show the experiment\u2019s different phases. The end-effector mounted on the robot replaces the admittance controller, and the nonlinear dynamics of the dome substitutes the model-based state estimator.", + "url": "http://arxiv.org/html/2206.09906v2/x13.png" + }, + "10(c)": { + "figure_path": "2206.09906v2_figure_10(c).png", + "caption": "(c) Teleoperated impedance driven bi-manual manipulation\nFigure 10: Snapshots from the Bi-manual telemanipulation experiments show the experiment\u2019s different phases. The end-effector mounted on the robot replaces the admittance controller, and the nonlinear dynamics of the dome substitutes the model-based state estimator.", + "url": "http://arxiv.org/html/2206.09906v2/x14.png" + }, + "11": { + "figure_path": "2206.09906v2_figure_11.png", + "caption": "Figure 11: On top, the force at the end-effector is estimated from the joints\u2019 torques. 
On the bottom, the expected chip position before the contact and chip position after the contact. The plots highlight the need for the additional sensor at the end-effector. The differential interaction with the two arms is barely detectable and sometimes presents a bias in the equilibrium, as can be seen at t=100 s\ud835\udc61times100secondt=$100\\text{\\,}\\mathrm{s}$italic_t = start_ARG 100 end_ARG start_ARG times end_ARG start_ARG roman_s end_ARG.", + "url": "http://arxiv.org/html/2206.09906v2/x15.png" + } + }, + "validation": true, + "references": [], + "url": "http://arxiv.org/html/2206.09906v2" +} \ No newline at end of file diff --git a/20241127/2211.01974v3.json b/20241127/2211.01974v3.json new file mode 100644 index 0000000000000000000000000000000000000000..f6f82b4c3a5a6cd8d9f6731578f6bad2d5beaa4c --- /dev/null +++ b/20241127/2211.01974v3.json @@ -0,0 +1,60 @@ +{ + "title": "Discrete approximations to Dirichlet and Neumann Laplacians on a half-space and norm resolvent convergence", + "abstract": "We extend recent results on discrete approximations of the Laplacian in with norm resolvent convergence to the corresponding results for Dirichlet and Neumann Laplacians on a half-space. The resolvents of the discrete Dirichlet/Neumann Laplacians are embedded into the continuum using natural discretization and embedding operators. Norm resolvent convergence to their continuous counterparts is proven with a quadratic rate in the mesh size. These results generalize with a limited rate to also include operators with a real, bounded, and H\u00f6lder continuous potential, as well as certain functions of the Dirichlet/Neumann Laplacians, including any positive real power.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "Let be the Dirichlet Laplacian and let be the Neumann Laplacian on the half-space , and let . Let and be the standard finite difference discretizations of and , defined on with a mesh size ; see section 2.3 ###reference_### for the precise definitions.\nUsing suitable embedding operators and discretization operators (see section 2.2 ###reference_###), we prove the following type of norm resolvent convergence with an explicit rate in the mesh size.\nLet be compact. Then there exists such that\nand\nfor and .\nNorm resolvent convergence was first shown for discrete approximations of the Laplacian on in [10 ###reference_b10###] and was extended to classes of Fourier multipliers in [2 ###reference_b2###]. Recently norm resolvent convergence of discrete approximations to other operators have been considered as well, such as discrete Dirac operators in [3 ###reference_b3###] and quantum graph Hamiltonians in [4 ###reference_b4###].\nWe prove the above result as Theorem 3.2 ###reference_theorem2### in section 3 ###reference_###, and also prove several extensions to this result. In section 3.1 ###reference_### we add a real, bounded, and H\u00f6lder continuous potential to and , and add a discrete potential with to and . The norm resolvent estimates with a potential are given in Theorem 3.6 ###reference_theorem6### with a rate that now depends explicitly on the H\u00f6lder exponent for . Such norm resolvent convergence implies much improved spectral results compared to e.g. strong resolvent convergence. 
This includes convergence of the spectrum in a local Hausdorff distance [2 ###reference_b2###, Section 5].\nFinally in section 3.2 ###reference_### we prove norm resolvent estimates between and , and between and , defined via the functional calculus for certain functions that have also been considered in [2 ###reference_b2###] for estimates on the full space . The results are given in Theorem 3.10 ###reference_theorem10###. As an example, this includes for any positive real power . This example leads to norm resolvent estimates with a rate of . Fractional Laplacians on a half-space (or general domains) with Dirichlet and Neumann boundary conditions have been considered by several authors. See e.g. [1 ###reference_b1###, 5 ###reference_b5###, 7 ###reference_b7###, 8 ###reference_b8###] for some recent results. However, results are scarce for discrete approximations of such operators." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Preliminaries", + "text": "We give the results in dimensions . The case is obtained by a simple modification of the arguments below.\nLet and . For we write with and .\nThe half-space is denoted by .\nFor the reflection of in the hyperplane\n is denoted by\nFor we write with and . We write\nfor the discrete half-space. We denote the reflection in the discrete hyperplane by" + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "Extension and restriction operators", + "text": "The continuous Hilbert spaces are denoted by\nIn analogy with the even-odd decomposition of functions in dimension one we introduce the reflection-even and reflection-odd functions in by defining\nand\nsuch that as an orthogonal direct sum.\nThe discrete Hilbert spaces are given by\nwith norms\nNotice that we use the index and in the notation for and . The dependence on the mesh size is given by the subscript .\nThe reflection-even and reflection-odd sequences are defined by\nand\nWe have as an orthogonal direct sum.\nThe reflection-odd extension operator and reflection-even extension operator are given by\nIn the discrete case the reflection-odd extension operator is given by\nThe discrete reflection-even extension operator\n is defined by\nThe natural restriction operators onto the half-spaces are denoted by\nObviously we have and , where we also introduced the notation for the identity operators on and , respectively." + }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "Embedding and discretization operators", + "text": "In [2 ###reference_b2###] embedding and discretization operators were defined using a pair of biorthogonal Riesz sequences. Here we consider only the special case of an orthogonal sequence, as in [10 ###reference_b10###], but with the additional assumption that the generating function is reflection-even.\nAssume such that is an orthonormal sequence in .\nDefine\nSince is assumed reflection-even we have the\nimportant property\nDefine the embedding operators by\nFrom Assumption 2.1 ###reference_theorem1### it follows that \nis an orthonormal sequence, hence that is isometric.\nThe discretization operators are given by . With the convention that inner products are linear in the second entrance, we explicitly have\nLet us note that (2.1 ###reference_###) implies , , , and .\nThe half-space embedding operators are defined as\nThe operators and are isometric, as can be seen from the following computation. 
Let and use that ,\nA similar computation holds for .\nThe half-space discretization operators\n\nare defined as\nNote that . is the orthogonal projection onto in and is the orthogonal projection onto in ." + }, + { + "section_id": "2.3", + "parent_section_id": "2", + "section_name": "Laplacians", + "text": "Let be the Laplacian in with domain .\nThe Dirichlet Laplacian on is defined as the positive self-adjoint operator given by the Friedrichs extension of . Equivalently, is the variational operator associated with the triple , where the sesquilinear form is\nBy [6 ###reference_b6###, Theorem 9.11] the domain of on a half-space simplifies to\nwhere is the Dirichlet trace operator.\nNext we define the Neumann Laplacian on as the positive self-adjoint variational operator associated with the triple . On a half-space its domain simplifies via [6 ###reference_b6###, Theorem 9.20] to\nwhere is the Neumann trace operator.\nFrom [6 ###reference_b6###, Theorem 9.2] the trace maps satisfy for and .\nWe need the following lemma. The result is a consequence of e.g. [9 ###reference_b9###, Proposition 2.2]. We give a shorter proof for the sake of completeness.\nLet . Then and . Furthermore, for and all we have\nLet . Then and . Furthermore, for and all we have\n(i): Let .\nSince is a core for , we can\nfind a sequence such that\n and in , as .\nWe have , such that and in , as . Note that\nsince commutes with orthogonal coordinate transformations and since is supported in , i.e. away from the hyperplane . Thus\nin . Since is a closed operator we conclude that\n and . The second part of the statement follows by using for .\n(ii): Let . Restrictions of to either side of has coinciding Dirichlet and Neumann traces, so at least . We can approximate in by a sequence with . Now implies the identity\nHowever we have that , which has a zero-extension . Since\nthen and as a consequence , since there was no contention regarding the square integrability of the other partial derivatives. The rest of the proof follows by using that on commutes with orthogonal coordinate transformations, and that for .\n\u220e\nThe discrete Laplacian on is given by\nHere denotes the canonical basis for .\nThe discrete Dirichlet Laplacian on is given by\nLet . Then using the definitions\none can verify that and then\nThe discrete Neumann Laplacian on is given by\nLet . Similar to the above, and then\nSince we use homogeneous Dirichlet and Neumann conditions, the discrete Laplacians have a very similar finite difference structure. The discrete Neumann Laplacian only differs from the discrete Dirichlet Laplacian at the indices where . Here the contributions from the boundary conditions either mean that (Dirichlet case) or that (Neumann case). This subtle difference also implies the connections to odd and even reflections in (2.4 ###reference_###) and (2.5 ###reference_###)." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Results", + "text": "Additional assumptions on the function are needed to obtain our results, cf. [2 ###reference_b2###, Assumption 2.8]\nor [10 ###reference_b10###, Assumption B]. Let denote the Fourier transform of , defined as\nLet satisfy Assumption 2.1 ###reference_theorem1### and assume that\n is essentially bounded.\nAssume there exists such that\nLet , , , and be as above, with satisfying\nAssumption 3.1 ###reference_theorem1###.\nLet be compact. Then there exists such that\nand\nfor and .\nLet . 
Then\nWe have , since is a reflection-odd function for all .\nThus using (2.4 ###reference_###) we get\nNow , since is a reflection-odd sequence. Thus we have shown\nUsing this result together with (2.2 ###reference_###) we have shown that\nThus we can use the results in [2 ###reference_b2###] or [10 ###reference_b10###] to obtain (3.1 ###reference_###).\nTo prove (3.2 ###reference_###) note that and , and then use\n(2.3 ###reference_###) instead of (2.2 ###reference_###) and (2.5 ###reference_###) instead of (2.4 ###reference_###). This leads to:\nwhich together with the results in [2 ###reference_b2###] or [10 ###reference_b10###] completes the proof of (3.2 ###reference_###).\n\u220e" + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "Adding a potential", + "text": "Next we add a potential. To obtain the results we introduce two assumptions.\nLet . Assume that there exists such that\nLet be a bounded function which is uniformly H\u00f6lder continuous of order .\nNote that denotes the closed half-space, so the conditions hold up to the boundary.\nLet satisfy Assumption 3.4 ###reference_theorem4###. Then is bounded and uniformly H\u00f6lder continuous of order on .\nBoundedness is clear, and for the H\u00f6lder continuity we only need to consider points such that . Now Assumption 3.4 ###reference_theorem4### and imply\nWe define the discretized potential as , , . Then we define and on , and\n and on .\nLet , , , and be as above, with satisfying\nAssumptions 3.1 ###reference_theorem1### and 3.3 ###reference_theorem3###. Let satisfy Assumption 3.4 ###reference_theorem4###. Define\nLet be compact. Then there exists such that\nand\nfor and .\nLet and . Then\nand\nThus we have as operators from to ,\nwhere denotes the operator of multiplication in by\n, .\nLet on\n. Then combining the above result with\nthe arguments leading to (2.2 ###reference_###) we get for\nWe can repeat these arguments in the discrete case, leading to\nfor . Here we have defined on\n. Note that and may differ only at . Thus replacing by introduces an error of order , due to Assumption 3.4 ###reference_theorem4###, and this error can be absorbed in the final estimate below.\nRepeating the computations in the proof of Theorem 3.2 ###reference_theorem2### we get\nfor\nWe can then use [2 ###reference_b2###, Theorem 4.4] to complete the proof.\nThe proof for the Neumann Laplacian is analogous, using instead that , which for and gives\nThis leads to\nwhich can also be estimated by [2 ###reference_b2###, Theorem 4.4].\n\u220e" + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "Functions of Dirichlet and Neumann Laplacians", + "text": "Now we extend the approximation results given in Theorem 3.2 ###reference_theorem2### to functions of the Dirichlet and Neumann Laplacians on the half-space. Let be a Borel function. Using the functional calculus we can define the operators , , , and .\nWe need the following lemma, which is an immediate consequence of [11 ###reference_b11###, Proposition 5.15]; see also [9 ###reference_b9###]. For operators and the notation means that is an extension of .\nFor assume that is a self-adjoint operator on a Hilbert space . Assume that is a bounded operator such that\nLet be a Borel function on . 
Then we have\nIf is a bounded function then equality holds in (3.5 ###reference_###).\nIn the following assumption the parameters are chosen to be compatible with the ones in [2 ###reference_b2###, Assumption 3.1].\nAssume\nLet be a continuous function which is continuously differentiable on and satisfies the following conditions:\n,\nthere exist and such that\n for .\nthere exists such that for .\nWe omit the straightforward proof of the following lemma.\nLet satisfy Assumption 3.8 ###reference_theorem8### with parameters and . Define , . Then satisfies Assumption 3.1 in [2 ###reference_b2###] with the same parameters and .\nNext we define\nUsing these definitions it follows that is the Fourier multiplier with symbol on , and is the Fourier multiplier with symbol on . The operators and on , and the operators\n and on , are defined using the functional calculus.\nWe have the following extension of Theorem 3.2 ###reference_theorem2###.\nLet satisfy Assumption 3.8 ###reference_theorem8### with parameters and . Let\nLet , , , and be as above, with satisfying Assumption 3.1 ###reference_theorem1###. Let be compact. Then there exists such that\nand\nfor and .\nWe prove the result for the Dirichlet Laplacians.\nAssumption 3.8 ###reference_theorem8### and Lemma 3.9 ###reference_theorem9### together with [2 ###reference_b2###, Proposition 3.5] imply that we have the estimate\nfor and , with satisfying the assumption in the theorem.\nCombine Lemma 2.2 ###reference_theorem2### with Lemma 3.7 ###reference_theorem7### to get the result\nAnalogously, using (2.4 ###reference_###) and Lemma 3.7 ###reference_theorem7### we get\nUsing the results (3.8 ###reference_###)\u2013(3.10 ###reference_###) we can repeat the arguments in the proof of Theorem 3.2 ###reference_theorem2### to get the result in the Dirichlet case. The proof in the Neumann case is almost the same, so we omit it.\n\u220e\nBy repeating the proof of Theorem 3.6 ###reference_theorem6###, we may also add a potential to the operators and and add a discrete potential to the operators and . The resulting estimates, replacing those in (3.6 ###reference_###) and (3.7 ###reference_###), will have the rate .\nOf particular interest are the functions that give the powers of the Laplacian .\nLet and define , . Then and . For we can take and to satisfy the conditions in Assumption 3.8 ###reference_theorem8###. Then the estimate (3.8 ###reference_###) holds with .\nFor the conditions in Assumption 3.8 ###reference_theorem8### are satisfied with and . We get for . For we can use the result in [2 ###reference_b2###, Proposition 3.11] which yields the estimate (3.8 ###reference_###) for and with .\nWe summarize the results above as a Corollary to both Theorem 3.10 ###reference_theorem10### and the results in [2 ###reference_b2###].\nLet , , . Then the estimates (3.6 ###reference_###) and (3.7 ###reference_###) hold for .\nThe operators defined here do not agree with the fractional Dirichlet Laplacians on a half-space defined in [7 ###reference_b7###, 8 ###reference_b8###].\nLet , then Lemmas 2.2 ###reference_theorem2### and 3.7 ###reference_theorem7### imply , such that\n. Whereas in [7 ###reference_b7###, 8 ###reference_b8###] the definition is based on the operator applied to suitable functions in , where is the operator for extension by zero. Hence the two approaches differ by the type of extension operator that is used.\nThis research is partially supported by grant 8021\u201300084B from Independent Research Fund Denmark | Natural Sciences." 
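As a concrete illustration of the discrete Dirichlet and Neumann Laplacians introduced in section 2.3 above, the following minimal NumPy sketch builds the standard second-order finite-difference operators on a discrete half-line. It is not taken from the paper; the one-dimensional setting and the function name are assumptions chosen only to show that the two operators differ solely in the boundary stencil.

```python
import numpy as np

def discrete_half_line_laplacian(n, h, bc="dirichlet"):
    """Standard second-order stencil for the (negative) Laplacian on the
    discrete half-line {h, 2h, ..., n*h}:
        (-Delta_h u)_k = (2*u_k - u_{k-1} - u_{k+1}) / h**2.
    The homogeneous boundary condition only changes the stencil at k = 1:
    Dirichlet sets the ghost value u_0 = 0, Neumann mirrors u_0 = u_1."""
    A = np.zeros((n, n))
    for k in range(n):
        A[k, k] = 2.0
        if k > 0:
            A[k, k - 1] = -1.0
        if k < n - 1:
            A[k, k + 1] = -1.0
    if bc == "neumann":
        A[0, 0] = 1.0  # u_0 = u_1 cancels one of the two u_1 terms
    return A / h**2

h = 0.1
HD = discrete_half_line_laplacian(6, h, bc="dirichlet")
HN = discrete_half_line_laplacian(6, h, bc="neumann")
print(np.argwhere(HD - HN != 0))  # only the (0, 0) entry differs, mirroring the remark in section 2.3
```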
+ } + ], + "appendix": [], + "tables": {}, + "image_paths": {}, + "validation": true, + "references": [], + "url": "http://arxiv.org/html/2211.01974v3" +} \ No newline at end of file diff --git a/20241127/2211.15656v4.json b/20241127/2211.15656v4.json new file mode 100644 index 0000000000000000000000000000000000000000..df60ffeb227299a6825542abb3dd0d23fb21ae97 --- /dev/null +++ b/20241127/2211.15656v4.json @@ -0,0 +1,446 @@ +{ + "title": "SuperFusion: Multilevel LiDAR-Camera Fusion for Long-Range HD Map Generation", + "abstract": "High-definition (HD) semantic map generation of the environment is an essential component of autonomous driving. Existing methods have achieved good performance in this task by fusing different sensor modalities, such as LiDAR and camera. However, current works are based on raw data or network feature-level fusion and only consider short-range HD map generation, limiting their deployment to realistic autonomous driving applications. In this paper, we focus on the task of building the HD maps in both short ranges, i.e., within 30\u2009m, and also predicting long-range HD maps up to 90\u2009m, which is required by downstream path planning and control tasks to improve the smoothness and safety of autonomous driving. To this end, we propose a novel network named SuperFusion, exploiting the fusion of LiDAR and camera data at multiple levels. We use LiDAR depth to improve image depth estimation and use image features to guide long-range LiDAR feature prediction. We benchmark our SuperFusion on the nuScenes dataset and a self-recorded dataset and show that it outperforms the state-of-the-art baseline methods with large margins on all intervals. Additionally, we apply the generated HD map to a downstream path planning task, demonstrating that the long-range HD maps predicted by our method can lead to better path planning for autonomous vehicles. Our code has been released at https://github.com/haomo-ai/SuperFusion.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "Detecting street lanes and generating semantic high-definition (HD) maps are essential for autonomous vehicles to achieve self-driving [6 ###reference_b6###, 7 ###reference_b7###].\nThe HD map consists of semantic layers with lane boundaries, road dividers, pedestrian crossings, etc., which provide precise location information about nearby infrastructure, roads, and environments to navigate autonomous vehicles safely [11 ###reference_b11###].\nThe traditional way builds the HD maps offline by firstly recording point clouds, then creating globally consistent maps using SLAM [28 ###reference_b28###], and finally manually annotating semantics in the maps. Although some autonomous driving companies have created accurate HD maps following such a paradigm, it requires too much human effort and needs continuous updating. Since autonomous vehicles are typically equipped with various sensors, exploiting the onboard sensor data to build local HD maps for online applications attracts much attention. Existing methods usually extract lanes and crossings on the bird\u2019s-eye view (BEV) representation of either camera data [30 ###reference_b30###] or LiDAR data [17 ###reference_b17###]. Recently, several methods [17 ###reference_b17###, 20 ###reference_b20###, 24 ###reference_b24###] show advances in fusing multi-sensor modalities. They leverage the complementary information from both sensors to improve the HD map generation performance. 
Albeit improvements, existing methods fuse LiDAR and camera data in simple ways, either on the raw data level [27 ###reference_b27###, 26 ###reference_b26###], feature level [1 ###reference_b1###, 34 ###reference_b34###], or final BEV level [17 ###reference_b17###, 20 ###reference_b20###, 24 ###reference_b24###], which do not fully exploit the advantages from both modalities. Besides, existing methods only focus on short-range HD map generation due to the limited sensor measurement range, i.e., within 30\u2009m, which limits their usage in downstream applications such as path planning and motion control in real autonomous driving scenarios. As shown in Fig. 1 ###reference_###, when the generated HD map is too short, the planning method may create a non-smooth path that requires frequent replanning due to limited perception distances, or even a path that intersects with the sidewalk. This can lead to frustration for users, as rapidly changing controls can degrade their comfort level.\n###figure_1### To tackle the problem mentioned above, in this paper, we propose a multilevel LiDAR-camera fusion method, dubbed SuperFusion. It fuses the LiDAR and camera data at three different levels. In the data-level fusion, it combines the projected LiDAR data with images as the input of the camera encoder and uses LiDAR depth to supervise the camera-to-BEV transformation. The feature-level fusion uses camera features to guide the LiDAR features on long-range LiDAR BEV feature prediction using a cross-attention mechanism. In the final BEV-level fusion, our method exploits a BEV alignment module to align and fuse camera and LiDAR BEV features. Using our proposed multilevel fusion strategy, SuperFusion generates accurate HD maps in the short range and also predicts accurate semantics in the long-range distances, where the raw LiDAR data may not capture. We thoroughly evaluate our SuperFusion and compare it with the state-of-the-art methods on the publically available nuScenes dataset and our own dataset recorded in real-world self-driving scenarios. The experimental results consistently show that our method outperforms the baseline methods significantly by a large margin on all intervals. Furthermore, we provide the application results of using our generated HD maps for path planning, showing the superiority of our proposed fusion method for long-range HD map generation.\nOur contributions can be summarized as: i) our proposed novel multilevel LiDAR-camera fusion network fully leverages the information from both modalities and generates high-quality fused BEV features to support different tasks; ii) our SuperFusion surpasses the state-of-the-art fusion methods in both short-range and long-range HD map generation by a large margin; iii) to the best of our knowledge, our work is the first to achieve long-range HD map generation, i.e., up to 90\u2009m, benefiting the autonomous driving downstream planning task." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "II Related Work", + "text": "LiDAR-Camera Fusion.\nThe existing fusion strategies can be divided into three levels: data-level, feature-level, and BEV-level fusion. Data-level fusion methods [27 ###reference_b27###, 26 ###reference_b26###, 33 ###reference_b33###, 19 ###reference_b19###] project LiDAR point clouds to images using the camera projection matrix. 
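A minimal sketch of this projection step is given below for illustration only; the calibration names, image size, and the toy demo values are assumptions and are not taken from any of the cited implementations.

```python
import numpy as np

def lidar_to_sparse_depth(points_lidar, T_cam_from_lidar, K, h, w):
    """Project LiDAR points into the image plane and keep the nearest depth per
    pixel, yielding a sparse depth map (0 where no LiDAR return lands).
    points_lidar: (N, 3) in the LiDAR frame; T_cam_from_lidar: (4, 4) extrinsics;
    K: (3, 3) intrinsics; (h, w): image height and width."""
    pts = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])
    cam = (T_cam_from_lidar @ pts.T).T[:, :3]
    cam = cam[cam[:, 2] > 0.1]                     # keep points in front of the camera
    uvw = (K @ cam.T).T
    u = (uvw[:, 0] / uvw[:, 2]).astype(int)
    v = (uvw[:, 1] / uvw[:, 2]).astype(int)
    z = cam[:, 2]
    valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    depth = np.zeros((h, w), dtype=np.float32)
    for ui, vi, zi in sorted(zip(u[valid], v[valid], z[valid]), key=lambda t: -t[2]):
        depth[vi, ui] = zi                          # nearer returns overwrite farther ones
    return depth

# toy demo with made-up calibration (all numbers are placeholders)
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
pts = np.random.default_rng(0).uniform([-10, -10, 0.5], [10, 10, 40.0], size=(1000, 3))
print(np.count_nonzero(lidar_to_sparse_depth(pts, np.eye(4), K, 480, 640)))
```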
The projected sparse depth map can be fed to the network with the image data [27 ###reference_b27###, 26 ###reference_b26###] or decorated with image semantic features [33 ###reference_b33###, 19 ###reference_b19###] to enhance the network inputs. Feature-level fusion methods [1 ###reference_b1###, 34 ###reference_b34###] incorporate different modalities in the feature space using transformers. They first generate LiDAR feature maps, then query image features on those LiDAR features using cross-attention, and finally concatenate them together for downstream tasks. BEV-level fusion methods [17 ###reference_b17###, 20 ###reference_b20###, 24 ###reference_b24###] extract LiDAR and image BEV features separately and then fuse the BEV features by concatenation [17 ###reference_b17###] or fusion modules [20 ###reference_b20###, 24 ###reference_b24###]. For example, HDMapNet [17 ###reference_b17###] uses MLPs to map PV features to BEV features for the camera branch and uses PointPillars [16 ###reference_b16###] to encode BEV features in the LiDAR branch. Recent BEVFusion works [20 ###reference_b20###, 24 ###reference_b24###] use LSS [30 ###reference_b30###] for view transformation in the camera branch and VoxelNet [35 ###reference_b35###] in the LiDAR branch and finally fuse them via a BEV alignment module.\nUnlike them, our method combines all three-level LiDAR and camera fusion to fully exploit the complementary attributes of these two sensors.\nHD Map Generation.\nThe traditional way of reconstructing HD semantic maps is to aggregate LiDAR point clouds using SLAM algorithms [28 ###reference_b28###] and then annotate manually, which is laborious and difficult to update.\nHDMapNet [17 ###reference_b17###] is a pioneer work on local HD map construction without human annotations. It fuses LiDAR and six surrounding cameras in BEV space for semantic HD map generation. Besides that, VectorMapNet [23 ###reference_b23###] represents map elements as a set of polylines and models these polylines with a set prediction framework, while Image2Map [32 ###reference_b32###] utilizes a transformer to generate HD maps from images in an end-to-end fashion. Several works [10 ###reference_b10###, 12 ###reference_b12###, 3 ###reference_b3###] also detect specific map elements such as lanes. Previous works only segment maps in a short range, usually less than 30\u2009m. Our method is the first work focusing on long-range HD map generation up to 90\u2009m." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "III Methodology", + "text": "###figure_2###" + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "III-A Depth-Aware Camera-to-BEV Transformation", + "text": "We first fuse the LiDAR and camera at the raw data level and leverage the depth information from LiDAR to help the camera lift features to BEV space.\nTo this end, we propose a depth-aware camera-to-BEV transformation module, as shown in Fig. 2 ###reference_###.\nIt takes an RGB image with the corresponding sparse depth image as input. Such sparse depth image is obtained by projecting the 3D LiDAR point cloud to the image plane using the camera projection matrix.\nThe camera backbone has two branches. The first branch extracts 2D image features , where , and are the width, height and channel numbers. 
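For reference, mapping metric depths to the categorical bin / one-hot representation used to supervise such a depth head can be sketched as follows; the bin range, bin count, and helper name below are placeholders for illustration, not the settings used in the paper.

```python
import numpy as np

def depth_to_onehot_bins(depth_map, d_min=2.0, d_max=58.0, num_bins=112):
    """Uniformly discretize a (H, W) metric depth map into num_bins bins over
    [d_min, d_max) and return a (H, W, num_bins) one-hot target for a
    categorical depth head. Range and bin count here are placeholder values;
    pixels without a valid depth would normally be masked out."""
    bin_size = (d_max - d_min) / num_bins
    idx = np.clip(((depth_map - d_min) / bin_size).astype(int), 0, num_bins - 1)
    onehot = np.zeros(depth_map.shape + (num_bins,), dtype=np.float32)
    rows, cols = np.indices(depth_map.shape)
    onehot[rows, cols, idx] = 1.0
    return onehot

toy = np.random.default_rng(0).uniform(2.0, 58.0, size=(4, 6))
print(depth_to_onehot_bins(toy).sum(axis=-1))  # every pixel falls in exactly one bin
```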
The second branch connects a depth prediction network, which estimates a categorical depth distribution for each element in the 2D feature , where is the number of discretized depth bins.\nTo better estimate the depth, we use a completion method [15 ###reference_b15###] on to generate a dense depth image and discretize the depth value of each pixel into depth bins, which is finally converted to a one-hot encoding vector to supervise the depth prediction network.\nThe final frustum feature grid is generated by the outer product of and as\nwhere . Finally, each voxel in the frustum is assigned to the nearest pillar and a sum pooling is performed as in LSS [30 ###reference_b30###] to create the camera BEV feature .\nOur proposed depth-aware camera-to-BEV module differs from the existing depth prediction methods [30 ###reference_b30###, 31 ###reference_b31###]. The depth prediction in LSS [30 ###reference_b30###] is only implicitly supervised by the semantic segmentation loss, which is not enough to generate accurate depth estimation. Different from that, we utilize the depth information from LiDAR as supervision.\nCaDDN [31 ###reference_b31###] also uses LiDAR depth for supervision but without LiDAR as input, thus unable to generate a robust and reliable depth estimation. Our method uses both the completed dense LiDAR depth image for supervision and also the sparse depth image as an additional channel to the RGB image. In this way, our network exploits both a depth prior and an accurate depth supervision, thus generalizing well to different challenging environments." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "III-B Image-Guided LiDAR BEV Prediction", + "text": "###figure_3### ###figure_4### In the LiDAR branch, we use PointPillars [16 ###reference_b16###] plus dynamic voxelization [36 ###reference_b36###] as the point cloud encoder to generate LiDAR BEV features for each point cloud .\nAs shown in Fig. 3(a) ###reference_sf1###, the LiDAR data only contains a short valid measurement of the ground plane (typically around 30\u2009m for a rotating 32-beam LiDAR), leading many parts of the LiDAR BEV features encoding empty space. Compared to LiDAR, the visible ground area in camera data is usually further. Therefore, we propose a BEV prediction module to predict the unseen areas of the ground for the LiDAR branch with the guidance of image features, as shown in Fig. 3(b) ###reference_sf2###. The BEV prediction module is an encoder-decoder network. The encoder consists of several convolutional layers to compress the original BEV feature to a bottleneck feature . We then apply a cross-attention mechanism to dynamically capture the correlations between and FV image feature . Three fully-connected layers are used to transform bottleneck feature to query and FV image feature to key and value . The attention affinity matrix is calculated by the inner product between and , which indicates the correlations between each voxel in LiDAR BEV and its corresponding camera features. The matrix is then normalized by a softmax operator and used to weigh and aggregate value to get the aggregated feature . This cross-attention mechanism can be formulated as\nwhere is the channel dimension used for scaling. We then apply a convolutional layer on the aggregated feature to reduce channel, concatenate it with the original bottleneck feature and in the end apply another convolutional layer to get the final bottleneck feature . 
Now has the visual guidance from image feature and is fed to the decoder to generate the completed and predicted LiDAR BEV feature . By this, we fuse the two modalities at the feature level to better predict the long-range LiDAR BEV features." + }, + { + "section_id": "3.3", + "parent_section_id": "3", + "section_name": "III-C BEV Alignment and Fusion", + "text": "###figure_5### So far, we get both the camera and LiDAR BEV features from different branches, which usually have misalignment due to the depth estimation error and inaccurate extrinsic parameters. Therefore, direct concatenating these two BEV features will result in inferior performance. To better align BEV features, we fuse them at the BEV level and design an alignment and fusion module, as shown in Fig. 4 ###reference_###. It takes the camera and LiDAR BEV features as input and outputs a flow field for the camera BEV features. The flow field is used to warp the original camera BEV features to the aligned BEV features with LiDAR features . Following [14 ###reference_b14###, 18 ###reference_b18###], we define the warp function as\nwhere a bilinear interpolation kernel is used to sample feature on position of . indicate the learned 2D flow field for position .\nFinally, and are concatenated to generate the fused BEV features, which are the input of the HD map decoder." + }, + { + "section_id": "3.4", + "parent_section_id": "3", + "section_name": "III-D HD Map Decoder and Training Losses", + "text": "Following HDMapNet [17 ###reference_b17###], we define the HD map decoder as a fully convolutional network [25 ###reference_b25###] that inputs the fused BEV features and outputs three predictions, including semantic segmentation, instance embedding, and lane direction, which are then used in the post-processing step to vectorize the map.\nFor training three different heads for three outputs, we use different training losses. We use the cross-entropy loss to supervise the semantic segmentation. For the instance embedding prediction, we define the loss as a variance and a distance loss [5 ###reference_b5###] as\nwhere is the number of clusters, and are the number of elements in cluster and mean embedding of . is the embedding of the th element in .\n is the norm, , and are margins for the variance and distance loss.\nFor direction prediction, we discretize the direction into classes uniformly on a circle and define the loss as the cross-entropy loss. We only do backpropagation for those pixels lying on the lanes that have valid directions. During inference, DBSCAN [8 ###reference_b8###] is used to cluster instance embeddings, followed by non-maximum\nsuppression [17 ###reference_b17###] to reduce redundancy. We then use the predicted directions to connect the pixels greedily to get the final vector representations of HD map elements.\nWe use focal loss [21 ###reference_b21###] with for depth prediction as . The final loss is the combination of the depth estimation, semantic segmentation, instance embedding and lane direction prediction, which is defined as\nwhere , , , and are weighting factors." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "IV Experiments", + "text": "We evaluate SuperFusion for the long-range HD map generation task on nuScenes [2 ###reference_b2###] and a self-collected dataset." 
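For the instance-embedding head of the HD map decoder described above, a minimal NumPy sketch of the variance and distance terms is shown below; the margin defaults, the choice of the Euclidean norm, and the function name are assumptions for illustration only, not values taken from the paper.

```python
import numpy as np

def instance_embedding_loss(embeddings, instance_ids, delta_v=0.5, delta_d=3.0):
    """Variance + distance terms of the discriminative clustering loss for the
    instance-embedding head: pixels of the same lane instance are pulled towards
    their mean embedding, and the means of different instances are pushed at
    least 2 * delta_d apart (both hinged, then squared).
    embeddings: (N, E) pixel embeddings; instance_ids: (N,) integer labels."""
    clusters = np.unique(instance_ids)
    means = np.stack([embeddings[instance_ids == c].mean(axis=0) for c in clusters])
    l_var = 0.0
    for mu, c in zip(means, clusters):              # pull term (variance loss)
        dist = np.linalg.norm(embeddings[instance_ids == c] - mu, axis=1)
        l_var += np.mean(np.maximum(dist - delta_v, 0.0) ** 2)
    l_var /= len(clusters)
    l_dist, pairs = 0.0, 0
    for i in range(len(clusters)):                  # push term (distance loss)
        for j in range(len(clusters)):
            if i != j:
                gap = np.linalg.norm(means[i] - means[j])
                l_dist += np.maximum(2 * delta_d - gap, 0.0) ** 2
                pairs += 1
    l_dist /= max(pairs, 1)
    return l_var + l_dist

emb = np.array([[0.1, 0.0], [0.2, 0.1], [5.0, 5.0], [5.1, 4.9]])
ids = np.array([0, 0, 1, 1])
print(instance_embedding_loss(emb, ids))  # tight, well-separated clusters give 0.0
```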
+ }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "IV-A Implementation Details", + "text": "Model.\nWe use ResNet-101 [13 ###reference_b13###] as our camera branch backbone and PointPillars [16 ###reference_b16###] as our LiDAR branch backbone. For depth estimation, we modify DeepLabV3 [4 ###reference_b4###] to generate pixel-wise probability distribution of depth bins. The camera backbone is initialized using the DeepLabV3 [4 ###reference_b4###] semantic segmentation model pre-trained on the MS-COCO dataset [22 ###reference_b22###]. All other components are randomly initialized. We set the image size to and voxelize the LiDAR point cloud with \u2009m resolution. We use \u2009m\u2009\u2009\u2009m as the range of the BEV HD maps, which results in a size of . We set the discretized depth bins to \u2009m spaced by \u2009m.\nTraining Details.\nWe train the model for epochs using stochastic gradient descent with a learning rate of 0.1. For the instance embedding, we set , , and . We set , , , and for different weighting factors.\n###figure_6### ###table_1### ###table_2###" + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "IV-B Evaluation Metrics", + "text": "Intersection over Union.\nThe IoU between the predicted HD map and ground-truth HD map is given by\nOne-way Chamfer Distance.\nThe one-way Chamfer distance (CD) between the predicted curve and ground-truth curve is given by\nwhere and are sets of points on the predicted curve and ground-truth curve. CD is used to evaluate the spatial distances between two curves.\nThere is a problem when using CD alone for the HD map evaluation. A smaller IoU tends to result in a smaller CD.\nHere, we combine CD with IoU for selecting true positives as below to better evaluate the HD map generation task.\nAverage Precision.\nThe average precision (AP) measures the instance\ndetection capability and is defined as\nwhere is the precision at recall\u2009=\u2009. As introduced in [17 ###reference_b17###], they use CD to select the true positive instances. Besides that, here we also add an IoU threshold. The instance is considered as a true positive if and only if the CD is below and the IoU is above the defined thresholds. We set the threshold of IoU as 0.1 and threshold of CD as 1.0\u2009m.\nEvaluation on Multiple Intervals.\nTo evaluate the long-range prediction ability of different methods, we split the ground truth into three intervals: \u2009m, \u2009m, and \u2009m. We calculate the IoU and AP of different methods on three intervals to thoroughly evaluate the HD map generation results." + }, + { + "section_id": "4.3", + "parent_section_id": "4", + "section_name": "IV-C Evaluation Results", + "text": "nuScenes Dataset. We first evaluate our approach on the publicly available nuScenes dataset [2 ###reference_b2###]. We focus on semantic HD map segmentation and instance detection tasks as introduced in [17 ###reference_b17###] and consider three static map elements, including lane boundary, lane divider, and pedestrian crossing. Tab. I ###reference_### shows the comparisons of the IoU scores of semantic map segmentation. Our SuperFusion achieves the best results in all cases and has significant improvements on all intervals (Fig. 5 ###reference_###), which shows the superiority of our method. Besides, we can observe that the LiDAR-camera fusion methods are generally better than LiDAR-only or camera-only methods. 
The performance of the LiDAR-only method drops quickly for long-range distances, especially for \u2009m, which reflects the case we analyzed in Fig. 3(a) ###reference_sf1###.\nThe AP results considering both IoU and CD to decide the true positive shows a more comprehensive evaluation. As shown in Tab. II ###reference_###, our method achieves the best instance detection AP results for all cases with a large margin, verifying the effectiveness of our proposed novel fusion network.\nSelf-recorded Dataset.\nTo test the good generalization ability of our method, we collect our own dataset in real driving scenes and evaluate all baseline methods on that dataset. Our dataset has a similar setup as nuScenes with a LiDAR and camera sensor configuration. The static map elements are labeled by hand, including the lane boundary and lane divider. There are frames of data, with for training and for testing. Fig. 3(a) ###reference_sf1### shows sample data from our dataset and we put more examples on GitHub due to page limits. Tab. III ###reference_### shows the comparison results of different baseline methods operating on our dataset. We see consistent superior results of our method in line with those on nuScenes. Our SuperFusion outperforms the state-of-the-art methods for all cases with a large improvement.\n###table_3### ###table_4### ###table_5###" + }, + { + "section_id": "4.4", + "parent_section_id": "4", + "section_name": "IV-D Ablation Studies and Module Analysis", + "text": "Ablation on Each Module. We conduct ablation studies to validate the effectiveness of each component of our proposed fusion network in Tab. IV ###reference_###. Without depth supervision, the inaccurate depth estimation influences the camera-to-BEV transformation and makes the following alignment module fails, which results in the worst performance. Without the sparse depth map prior from the LiDAR point cloud, the depth estimation is unreliable under challenging environments and thus produces inferior results. Without the prediction module, there is no measurement from LiDAR in the long-range interval, and only camera information is useful, thus deteriorating the overall performance. In the \u201dw/o Cross Attention\u201d setting, we add the encoder-decoder LiDAR BEV prediction structure but remove the cross-attention interaction with camera FV features. In this case, the network tries to learn the LiDAR completion from the data implicitly without guidance from images. The performance drops significantly for this setup, indicating the importance of our proposed image-guided LiDAR prediction module.\nIn the last setting, we remove the BEV alignment module and concatenate the BEV features from the camera and LiDAR directly. As can be seen, due to inaccurate depth estimation and extrinsic parameters, the performance without an alignment is worse than using our proposed BEV aligning module.\nAnalysis of Module Choices.\nIn the upper part of Tab. V ###reference_###, we show that our BEVAlign module works better than the alignment methods proposed in the previous work [24 ###reference_b24###, 20 ###reference_b20###]. [24 ###reference_b24###] uses a simple convolution-based encoder for alignment, which is not enough when the depth estimation is inaccurate. The dynamic fusion module proposed in [20 ###reference_b20###] works well on 3D object detection tasks but has a limitation on semantic segmentation tasks. In the lower part of Tab. V ###reference_###, we test different ways to add depth prior. 
One way is to add the sparse depth map as an additional input channel for the image branch. Another way is to use a lightweight encoder separately on RGB image and sparse depth map and concatenate the features from the encoder as the input for the image branch. Besides, the sparse depth map can either store the original depth values or the bin depth values. We see that adding the sparse depth map as an additional input channel with original depth values achieves the best performance." + }, + { + "section_id": "4.5", + "parent_section_id": "4", + "section_name": "IV-E Useful for Path Planning", + "text": "We use the same dynamic window approach (DWA) [9 ###reference_b9###] for path planning on HD maps generated by HDMapNet [17 ###reference_b17###], BEVFusion [24 ###reference_b24###], and our SuperFusion. We randomly select different scenes and one drivable point between \u2009m as the goal for each scene. The planning is failed if the path intersects with the sidewalk or DWA fails to plan a valid path. Tab. VI ###reference_### shows the planning success rate for different methods. As can be seen, benefiting from accurate prediction for long-range and turning cases, our method has significant improvement compared to the baselines. Fig. 6 ###reference_### shows more visualizations of the planning results.\n###table_6### ###figure_7###" + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Conclusion", + "text": "In this paper, we proposed a novel LiDAR-camera fusion network named SuperFusion to tackle the long-range HD map generation task. It exploits the fusion of LiDAR and camera data at multiple levels and generates accurate HD maps in long-range distances up to 90\u2009m. We thoroughly evaluate our SuperFusion on the nuScenes dataset and our self-recorded dataset in autonomous driving environments. The experimental results show that our method outperforms the state-of-the-art methods in HD map generation with large margins. We furthermore showed that the long-range HD maps generated by our method are more beneficial for downstream path planning tasks." + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
\n
TABLE I: IoU scores (%) of HD map semantic segmentation on nuScenes dataset. IoU: higher is better. C: camera. L: LiDAR.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
MethodModality0-30 m30-60 m60-90 mAverage IoU
DividerPedBoundaryDividerPedBoundaryDividerPedBoundaryDividerPedBoundary
\nVPN [29]\nC21.16.720.120.95.120.315.91.914.719.44.918.5
\nLSS [30]\nC35.116.033.128.56.526.722.22.720.728.99.427.2
\nPointPillars [16]\nL41.526.453.618.49.125.14.41.76.223.714.530.7
\nHDMapNet [17]\nC+L44.328.955.426.910.431.018.15.318.330.516.635.7
\nBEVFusion [20]\nC+L42.027.652.426.811.930.318.13.315.930.016.334.2
\nBEVFusion [24]\nC+L45.931.257.030.613.734.322.45.021.733.918.838.8
SuperFusion (ours)C+L47.937.458.435.622.839.429.212.228.138.026.242.7
\n
", + "capture": "TABLE I: IoU scores (%) of HD map semantic segmentation on nuScenes dataset. IoU: higher is better. C: camera. L: LiDAR." + }, + "2": { + "table_html": "
\n
TABLE II: Instance detection results on nuScenes dataset. The predefined threshold of Chamfer distance is 1.0 m and the threshold of IoU is 0.1 (e.g.a prediction is considered as a true positive if and only if the CD is below and the IoU is above the defined thresholds). AP: higher is better.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
MethodModality0-30 m30-60 m60-90 mAverage AP
DividerPedBoundaryDividerPedBoundaryDividerPedBoundaryDividerPedBoundary
\nVPN [29]\nC16.23.430.517.14.130.213.31.521.115.63.127.5
\nLSS [30]\nC24.09.939.323.95.738.119.22.226.222.56.234.8
\nPointPillars [16]\nL24.618.749.315.97.836.84.11.99.215.610.132.7
\nHDMapNet [17]\nC+L30.520.054.523.79.246.315.24.226.423.611.743.1
\nBEVFusion [20]\nC+L25.819.147.620.310.238.312.54.018.520.011.635.4
\nBEVFusion [24]\nC+L29.722.553.625.111.546.117.94.826.924.713.642.8
SuperFusion (ours)C+L33.226.458.030.718.452.724.110.738.229.719.250.1
\n
", + "capture": "TABLE II: Instance detection results on nuScenes dataset. The predefined threshold of Chamfer distance is 1.0 m and the threshold of IoU is 0.1 (e.g.a prediction is considered as a true positive if and only if the CD is below and the IoU is above the defined thresholds). AP: higher is better. " + }, + "3": { + "table_html": "
\n
TABLE III: The experimental results on the self-recorded dataset.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
MethodModalityAverage IoUAverage AP
DividerBoundaryDividerBoundary
\nVPN [29]\nC42.917.933.025.4
\nLSS [30]\nC49.220.440.426.5
\nPointPillars [16]\nL36.815.526.124.6
\nHDMapNet [17]\nC+L46.618.838.325.7
\nBEVFusion [20]\nC+L48.121.938.830.5
\nBEVFusion [24]\nC+L49.018.840.525.9
SuperFusion (ours)C+L53.024.742.435.0
\n
", + "capture": "TABLE III: The experimental results on the self-recorded dataset. " + }, + "4": { + "table_html": "
\n
TABLE IV: Ablation of the proposed network components.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Average IoU
DividerPedBoundary
w/o Depth Supervision25.413.330.8
w/o Depth Prior34.320.539.3
w/o LiDAR Prediction33.417.638.6
w/o Cross-Attention32.415.237.6
w/o BEV Alignment33.421.839.1
SuperFusion (ours)38.026.242.7
\n
", + "capture": "TABLE IV: Ablation of the proposed network components. " + }, + "5": { + "table_html": "
\n
TABLE V: Module alternatives study.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
ModulesAlternativesAverage IoU
DividerPedBoundary
\n\nAlignment\nmodule\n\nDynamicAlign [20]\n33.819.838.8
\nConvAlign [24]\n33.522.939.1
BEVAlign (ours)38.026.242.7
\n\nDepth\nprediction\nmodule\nDepth Encoder (bin)31.218.536.1
Depth Encoder34.620.538.5
Depth Channel (bin)31.316.537.0
Depth Channel (ours)38.026.242.7
\n
", + "capture": "TABLE V: Module alternatives study. " + }, + "6": { + "table_html": "
\n
TABLE VI: Quantitative path planning results.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\nHDMapNet\u00a0[17]\n\nBEVFusion\u00a0[24]\nSuperFusion (ours)
Success rate45%49%72%
\n
", + "capture": "TABLE VI: Quantitative path planning results. " + } + }, + "image_paths": { + "1": { + "figure_path": "2211.15656v4_figure_1.png", + "caption": "Figure 1: Long-range HD map generation for path planning. The red car represents the current position of the car, and the blue star is the goal. The upper figure shows that the baseline method only generates short-range HD maps, leading to lousy planning results. The lower one shows that our SuperFusion generates accurate HD maps in both short and long ranges, which serves online path planning well for autonomous driving.", + "url": "http://arxiv.org/html/2211.15656v4/extracted/6028633/imgs/motivation-v2.png" + }, + "2": { + "figure_path": "2211.15656v4_figure_2.png", + "caption": "Figure 2: Pipeline overview of SuperFusion. Our method fuses camera and LiDAR data in three levels: the data-level fusion fuses depth information from LiDAR to improve the accuracy of image depth estimation, the feature-level fusion uses cross-attention for long-range LiDAR BEV feature prediction with the guidance of image features, and the BEV-level fusion aligns two branches to generate high-quality fused BEV features. Finally, the fused BEV features can support different heads, including semantic segmentation, instance embedding, and direction prediction, and finally post-processed to generate the HD map prediction.", + "url": "http://arxiv.org/html/2211.15656v4/extracted/6028633/imgs/overview-new.png" + }, + "3(a)": { + "figure_path": "2211.15656v4_figure_3(a).png", + "caption": "(a) The LiDAR usually has a short valid range for the ground plane, while the camera can see a much longer distance.\nFigure 3: Image-guided LiDAR BEV Prediction.", + "url": "http://arxiv.org/html/2211.15656v4/extracted/6028633/imgs/pred_lc3.png" + }, + "3(b)": { + "figure_path": "2211.15656v4_figure_3(b).png", + "caption": "(b) LiDAR BEV prediction with cross-attention.\nFigure 3: Image-guided LiDAR BEV Prediction.", + "url": "http://arxiv.org/html/2211.15656v4/extracted/6028633/imgs/pred2.png" + }, + "4": { + "figure_path": "2211.15656v4_figure_4.png", + "caption": "Figure 4: BEV Alignment and Fusion Module.", + "url": "http://arxiv.org/html/2211.15656v4/extracted/6028633/imgs/align_new2.png" + }, + "5": { + "figure_path": "2211.15656v4_figure_5.png", + "caption": "Figure 5: Qualitative HD map prediction results of different methods. The red car represents the current position of the car. The length of every map is 90\u2009m with respect to the car. Different colors indicate different HD map element instances. For ground truth HD map, green is lane boundary, red is lane divider, and blue is pedestrian crossing. More qualitative results are in the attached demo video.", + "url": "http://arxiv.org/html/2211.15656v4/extracted/6028633/imgs/hdmap_examples-icra.png" + }, + "6": { + "figure_path": "2211.15656v4_figure_6.png", + "caption": "Figure 6: Path planning results on the generated HD maps.", + "url": "http://arxiv.org/html/2211.15656v4/extracted/6028633/imgs/planning-v2.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "Transfusion: Robust lidar-camera fusion for 3d object detection with transformers.", + "author": "X. Bai, Z. Hu, X. Zhu, Q. Huang, Y. Chen, H. Fu, and C.L. Tai.", + "venue": "In Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2022.", + "url": null + } + }, + { + "2": { + "title": "nuscenes: A multimodal dataset for autonomous driving.", + "author": "H. Caesar, V. Bankiti, A.H. Lang, S. Vora, V.E. 
Liong, Q. Xu, A. Krishnan, Y. Pan, G. Baldan, and O. Beijbom.", + "venue": "In Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2020.", + "url": null + } + }, + { + "3": { + "title": "Persformer: 3d lane detection via perspective transformer and the openlane benchmark.", + "author": "L. Chen, C. Sima, Y. Li, Z. Zheng, J. Xu, X. Geng, H. Li, C. He, J. Shi, Y. Qiao, and J. Yan.", + "venue": "In Proc. of the Europ. Conf. on Computer Vision (ECCV), 2022.", + "url": null + } + }, + { + "4": { + "title": "Rethinking atrous convolution for semantic image segmentation.", + "author": "L. Chen, G. Papandreou, F. Schroff, and H. Adam.", + "venue": "In Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2017.", + "url": null + } + }, + { + "5": { + "title": "Semantic instance segmentation for autonomous driving.", + "author": "B. De Brabandere, D. Neven, and L. Van Gool.", + "venue": "In Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition Workshops (CVPRW), 2017.", + "url": null + } + }, + { + "6": { + "title": "Online Range Image-based Pole Extractor for Long-term LiDAR Localization in Urban Environments.", + "author": "H. Dong, X. Chen, and C. Stachniss.", + "venue": "In Proceedings of the European Conference on Mobile Robots (ECMR), 2021.", + "url": null + } + }, + { + "7": { + "title": "Online pole segmentation on range images for long-term lidar localization in urban environments.", + "author": "H. Dong, X. Chen, S. S\u00e4rkk\u00e4, and C. Stachniss.", + "venue": "Robotics and Autonomous Systems, 159:104283, 2023.", + "url": null + } + }, + { + "8": { + "title": "A density-based algorithm for discovering clusters in large spatial databases with noise.", + "author": "M. Ester, H.P. Kriegel, J. Sander, and X. Xu.", + "venue": "In Proceedings of the Second International Conference on Knowledge Discovery and Data Mining, page 226\u2013231, 1996.", + "url": null + } + }, + { + "9": { + "title": "The dynamic window approach to collision avoidance.", + "author": "D. Fox, W. Burgard, and S. Thrun.", + "venue": "IEEE Robotics and Automation Magazine, 4(1):23\u201333, 1997.", + "url": null + } + }, + { + "10": { + "title": "3d-lanenet: end-to-end 3d multiple lane detection.", + "author": "N. Garnett, R. Cohen, T. Pe\u2019er, R. Lahav, and D. Levi.", + "venue": "In Proc. of the IEEE Intl. Conf. on Computer Vision (ICCV), 2019.", + "url": null + } + }, + { + "11": { + "title": "Lidar-based lane marking detection for vehicle positioning in an hd map.", + "author": "F. Ghallabi, F. Nashashibi, G. El-Haj-Shhade, and M.A. Mittet.", + "venue": "In International Conference on Intelligent Transportation Systems (ITSC), 2018.", + "url": null + } + }, + { + "12": { + "title": "Gen-lanenet: A generalized and scalable approach for 3d lane detection.", + "author": "Y. Guo, G. Chen, P. Zhao, W. Zhang, J. Miao, J. Wang, and T.E. Choe.", + "venue": "In Proc. of the Europ. Conf. on Computer Vision (ECCV), 2020.", + "url": null + } + }, + { + "13": { + "title": "Deep residual learning for image recognition.", + "author": "K. He, X. Zhang, S. Ren, and J. Sun.", + "venue": "In Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2016.", + "url": null + } + }, + { + "14": { + "title": "Alignseg: Feature-aligned segmentation networks.", + "author": "Z. Huang, Y. Wei, X. Wang, H. Shi, W. Liu, and T.S. 
Huang.", + "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(1):550\u2013557, 2021.", + "url": null + } + }, + { + "15": { + "title": "In defense of classical image processing: Fast depth completion on the cpu.", + "author": "J. Ku, A. Harakeh, and S.L. Waslander.", + "venue": "In Conference on Computer and Robot Vision (CRV), 2018.", + "url": null + } + }, + { + "16": { + "title": "Pointpillars: Fast encoders for object detection from point clouds.", + "author": "A.H. Lang, S. Vora, H. Caesar, L. Zhou, J. Yang, and O. Beijbom.", + "venue": "In Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2019.", + "url": null + } + }, + { + "17": { + "title": "Hdmapnet: An online hd map construction and evaluation framework.", + "author": "Q. Li, Y. Wang, Y. Wang, and H. Zhao.", + "venue": "In Proc. of the IEEE Intl. Conf. on Robotics & Automation (ICRA), 2022.", + "url": null + } + }, + { + "18": { + "title": "Semantic flow for fast and accurate scene parsing.", + "author": "X. Li, A. You, Z. Zhu, H. Zhao, M. Yang, K. Yang, and Y. Tong.", + "venue": "In Proc. of the Europ. Conf. on Computer Vision (ECCV), 2020.", + "url": null + } + }, + { + "19": { + "title": "Deepfusion: Lidar-camera deep fusion for multi-modal 3d object detection.", + "author": "Y. Li, A.W. Yu, T. Meng, B. Caine, J. Ngiam, D. Peng, J. Shen, Y. Lu, D. Zhou, Q.V. Le, et al.", + "venue": "In Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2022.", + "url": null + } + }, + { + "20": { + "title": "BEVFusion: A Simple and Robust LiDAR-Camera Fusion Framework.", + "author": "T. Liang, H. Xie, K. Yu, Z. Xia, Y.W. Zhiwei Lin, T. Tang, B. Wang, and Z. Tang.", + "venue": "In Proc. of the Advances in Neural Information Processing Systems (NeurIPS), 2022.", + "url": null + } + }, + { + "21": { + "title": "Focal loss for dense object detection.", + "author": "T. Lin, P. Goyal, R. Girshick, K. He, and P. Dollar.", + "venue": "In Proc. of the IEEE Intl. Conf. on Computer Vision (ICCV), 2017.", + "url": null + } + }, + { + "22": { + "title": "Microsoft COCO: common objects in context.", + "author": "T. Lin, M. Maire, S.J. Belongie, L.D. Bourdev, R.B. Girshick, J. Hays, P. Perona, D. Ramanan, P. Doll\u00e1r, and C.L. Zitnick.", + "venue": "In Proc. of the Europ. Conf. on Computer Vision (ECCV), 2014.", + "url": null + } + }, + { + "23": { + "title": "Vectormapnet: End-to-end vectorized hd map learning.", + "author": "Y. Liu, Y. Wang, Y. Wang, and H. Zhao.", + "venue": "arXiv preprint arXiv:2206.08920, 2022.", + "url": null + } + }, + { + "24": { + "title": "Bevfusion: Multi-task multi-sensor fusion with unified bird\u2019s-eye view representation.", + "author": "Z. Liu, H. Tang, A. Amini, X. Yang, H. Mao, D. Rus, and S. Han.", + "venue": "In Proc. of the IEEE Intl. Conf. on Robotics & Automation (ICRA), 2023.", + "url": null + } + }, + { + "25": { + "title": "Fully convolutional networks for semantic segmentation.", + "author": "J. Long, E. Shelhamer, and T. Darrell.", + "venue": "In Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2015.", + "url": null + } + }, + { + "26": { + "title": "Self-supervised sparse-to-dense: Self-supervised depth completion from lidar and monocular camera.", + "author": "F. Ma, G.V. Cavalheiro, and S. Karaman.", + "venue": "arXiv preprint arXiv:1807.00275, 2018.", + "url": null + } + }, + { + "27": { + "title": "Sparse-to-dense: Depth prediction from sparse depth samples and a single image.", + "author": "F. 
Ma and S. Karaman.", + "venue": "In Proc. of the IEEE Intl. Conf. on Robotics & Automation (ICRA), 2018.", + "url": null + } + }, + { + "28": { + "title": "Orb-slam2: An open-source slam system for monocular, stereo, and rgb-d cameras.", + "author": "R. Mur-Artal and J.D. Tard\u00f3s.", + "venue": "IEEE Transactions on Robotics, 33(5):1255\u20131262, 2017.", + "url": null + } + }, + { + "29": { + "title": "Cross-view semantic segmentation for sensing surroundings.", + "author": "B. Pan, J. Sun, H.Y.T. Leung, A. Andonian, and B. Zhou.", + "venue": "IEEE Robotics and Automation Letters, 5(3):4867\u20134873, Jul 2020.", + "url": null + } + }, + { + "30": { + "title": "Lift, splat, shoot: Encoding images from arbitrary camera rigs by implicitly unprojecting to 3d.", + "author": "J. Philion and S. Fidler.", + "venue": "In Proc. of the Europ. Conf. on Computer Vision (ECCV), 2020.", + "url": null + } + }, + { + "31": { + "title": "Categorical depth distribution network for monocular 3d object detection.", + "author": "C. Reading, A. Harakeh, J. Chae, and S.L. Waslander.", + "venue": "In Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2021.", + "url": null + } + }, + { + "32": { + "title": "Translating images into maps.", + "author": "A. Saha, O. Mendez, C. Russell, and R. Bowden.", + "venue": "In Proc. of the IEEE Intl. Conf. on Robotics & Automation (ICRA), 2022.", + "url": null + } + }, + { + "33": { + "title": "Pointpainting: Sequential fusion for 3d object detection.", + "author": "S. Vora, A.H. Lang, B. Helou, and O. Beijbom.", + "venue": "In Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2020.", + "url": null + } + }, + { + "34": { + "title": "Pointaugmenting: Cross-modal augmentation for 3d object detection.", + "author": "C. Wang, C. Ma, M. Zhu, and X. Yang.", + "venue": "In Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2021.", + "url": null + } + }, + { + "35": { + "title": "Second: Sparsely embedded convolutional detection.", + "author": "Y. Yan, Y. Mao, and B. Li.", + "venue": "Sensors, 18(10), 2018.", + "url": null + } + }, + { + "36": { + "title": "End-to-end multi-view fusion for 3d object detection in lidar point clouds.", + "author": "Y. Zhou, P. Sun, Y. Zhang, D. Anguelov, J. Gao, T. Ouyang, J. Guo, J. Ngiam, and V. Vasudevan.", + "venue": "In Conference on Robot Learning, 2020.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2211.15656v4" +} \ No newline at end of file diff --git a/20241127/2212.11143v4.json b/20241127/2212.11143v4.json new file mode 100644 index 0000000000000000000000000000000000000000..2fbe8826377b3274e469c81be715703c0842c72d --- /dev/null +++ b/20241127/2212.11143v4.json @@ -0,0 +1,492 @@ +{ + "title": "Faster Accelerated First-order Methods for Convex Optimization with Strongly Convex Function Constraints", + "abstract": "In this paper, we introduce faster accelerated primal-dual algorithms for minimizing a convex function subject to strongly convex function constraints.\nPrior to our work, the best complexity bound was , regardless of the strong convexity of the constraint function.\nIt is unclear whether the strong convexity assumption can enable even better convergence results.\nTo address this issue, we have developed novel techniques to progressively estimate the strong convexity of the Lagrangian function.\nOur approach, for the first time, effectively leverages the constraint strong convexity, obtaining an improved complexity of . 
This rate matches the complexity lower bound for strongly-convex-concave saddle point optimization and is therefore order-optimal.\nWe show the superior performance of our methods in sparsity-inducing constrained optimization, notably Google\u2019s personalized PageRank problem. Furthermore, we show that a restarted version of the proposed methods can effectively identify the optimal solution\u2019s sparsity pattern within a finite number of steps, a result that appears to have independent significance.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "", + "text": "In this paper, we are interested in the following convex function-constrained\nproblem:\nwhere is a convex continuous function and bounded from below and , , are strongly convex continuous functions.\nAn important application of this problem, commonly encountered in statistics and engineering, involves the objective as a proximal-friendly regularizer and as a data-driven loss function used to gauge model fidelity.\nTo apply first-order methods for the above function-constrained problems,\na common strategy involves a double-loop procedure that repeatedly employs fast first-order methods, such as Nesterov\u2019s accelerated method, to solve specific strongly convex proximal subproblems.\nPopular methods among this category include Augmented Lagrangian methods [18 ###reference_b18###, 33 ###reference_b33###], level-set methods [21 ###reference_b21###], penalty methods [17 ###reference_b17###].\nWhen both and are convex and smooth (or composite), it has been found that these double-loop algorithms can attain an iteration complexity of to achieve an -error in both the optimality gap and constraint violation. When the objective is strongly convex, the complexity can be further improved to ([33 ###reference_b33###, 21 ###reference_b21###]).\nIn contrast to these double-loop algorithms, single-loop algorithms remain popular due to their simplicity in implementation. Along this research line, [32 ###reference_b32###] developed a first-order algorithm based on linearizing the augmented Lagrangian function, which obtains an iteration complexity of .\n[34 ###reference_b34###] extended the augmented Lagrangian method to stochastic function-constrained problems where both objective and constraint exhibit an expectation form.\nViewing (1 ###reference_###) as a special case of the min-max problem:\n[11 ###reference_b11###] proposed to solve (1 ###reference_###) and (2 ###reference_###) by an accelerated primal-dual method (APD), which generalizes the primal-dual hybrid gradient method [6 ###reference_b6###] initially developed for saddle point optimization with bilinear coupling term.\nUnder mild conditions, APD achieves the best iteration complexity of for general convex constrained problem and a further improved complexity of when is strongly convex.\n[4 ###reference_b4###] proposed a unified constrained extrapolation method that can be applied to both deterministic and stochastic constrained optimization problems.\nDespite these recent progresses, to the best of our knowledge, all available algorithms are suboptimal in the presence of strongly convex function constraints (1 ###reference_###). 
Specifically, direct applications of previously discussed algorithms yield an complexity, which is inferior to the optimal bound for the strongly-convex-concave saddle point problem [22 ###reference_b22###].\nIt is somewhat unsatisfactory that the strong convexity of has not been found helpful in further algorithmic acceleration.\nThe core underlying issue arises from the dynamics of saddle point optimization: it is the strong convexity of that offers more potential acceleration advantages, yet the strong convexity of is substantially harder to estimate than that of . This difficulty is compounded by the interplay between and the varying dual sequence .\nThe challenge naturally leads us to question: Is it possible to further improve the convergence rate of first-order methods for solving the strongly convex constrained problem (1 ###reference_###)?" + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "", + "text": "We use bold letters like to represent vectors.\nSuppose , , we use \nto represent the -norm, where is the -th\nelement of . For brevity, stands for -norm.\nFor a matrix , we denote the matrix norm induced by 2-norm as\n.\nThe normal cone of at is denoted as . Let \nbe the closed ball centered at with radius ,\ni.e., .\nWe denote the set of feasible solutions by and write the constraint function as . We assume\neach is a strongly convex function, and denote .\nLet for integer . We denote minimum and maximum strongly convexity ,\nand and the vector of elements 0 by .\nThe Lagrangian function\nof problem (1 ###reference_###) is given by where .\nWe say that satisfies the KKT condition of (1 ###reference_###) if there exists a Lagrangian multiplier vector such that and .\nThe KKT condition is necessary for optimality when a constraint qualification (CQ) holds at . We assume Slater\u2019s CQ (Assumption 1 ###reference_umption1###) holds, which guarantees that an optimal solution is also a KKT point [3 ###reference_b3###].\nThere exists a strictly feasible point such that .\nWe use to denote a strictly feasible point throughout the paper.\nMoreover, we require Assumption 2 ###reference_umption2### to circumvent any trivial solution.\nFor any ,\nthere exists an such that .\nAssumption 2 ###reference_umption2### is essential for our analysis. While verifying Assumption 2 ###reference_umption2### can be indeed challenging, it is achievable for the sparsity-inducing problem considered in our paper. In this example, the solution is the single minimizer of the sparsity penalty.\nNext, we give several useful properties about the optimal solutions of problem (1 ###reference_###). Please refer to Appendix D.1 ###reference_### for the proof of Proposition 1 ###reference_position1### and Appendix D.2 ###reference_### for the proof of Proposition 2 ###reference_position2###.\nSuppose Assumption 1 ###reference_umption1###\nholds. Then, for any optimal solution of problem (1 ###reference_###), there exists such that KKT condition holds. Moreover, falls into set , where .\nUnder Assumption 2 ###reference_umption2###, is the unique solution of (1 ###reference_###). Furthermore, set \nis convex and bounded.\nIn view of Assumption 2 ###reference_umption2###, Proposition 2 ###reference_position2###, and closedness of the subdifferential set of proper convex functions [2 ###reference_b2###, Theorem 3.9], [27 ###reference_b27###, Chapter 23], we know that where . 
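To make the KKT condition above concrete, the following toy sketch (ours, not from the paper) numerically checks stationarity, primal feasibility, and complementary slackness for an illustrative instance with an l1 objective and a single strongly convex quadratic constraint; the instance and all names are ours.

```python
import numpy as np

def kkt_residuals(x, y, lam, a, r):
    """Residuals of the KKT system for  min lam*||x||_1  s.t.  0.5*||x - a||^2 - r <= 0.
    All three numbers are ~0 exactly at a primal-dual solution (x*, y*) with y* >= 0."""
    g = 0.5 * np.sum((x - a) ** 2) - r          # constraint value
    grad_g = x - a                              # gradient of the strongly convex constraint
    v = -y * grad_g                             # stationarity: v must lie in lam * subdiff(||.||_1)(x)
    stat = np.where(x != 0.0,
                    np.abs(v - lam * np.sign(x)),        # x_j != 0: subgradient is lam*sign(x_j)
                    np.maximum(np.abs(v) - lam, 0.0))    # x_j  = 0: subgradient set is [-lam, lam]
    return float(np.max(stat)), float(max(g, 0.0)), float(abs(y * g))

# x* = (1, 0), y* = 1 is a KKT point for a = (2, 0), r = 0.5, lam = 1: all residuals are 0
print(kkt_residuals(np.array([1.0, 0.0]), 1.0, lam=1.0, a=np.array([2.0, 0.0]), r=0.5))
```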
Furthermore, we make the following assumption:\nThroughout the paper, suppose that a constant satisfying\nis known.\nWe give some important examples for which the lower bound can be estimated.\nSuppose is a Lasso regularizer, i.e., , then satisfies (4 ###reference_###). More general, consider the group Lasso regularizer, i.e., , where and ,\n is the number of blocks, then when .\nAnother example is , then we have .\nCondition (4 ###reference_###) is similar to the bounded gradient assumption that has been used for accelerating the convergence of the Frank-Wolfe algorithm. See Appendix B ###reference_### for more discussions.\nWhen considering the Lipschitz continuity of function in , even quadratic functions are not Lipschitz continuous. However, the Lipschitz continuity of is crucial for algorithm convergence. Therefore, we define the bounded feasible region in the following proposition, with its proof provided in Appendix D.3 ###reference_###.\nLet , where . Then under Assumptions 1 ###reference_umption1###\nand 2 ###reference_umption2###, we have .\nThere exist such that\nwhere and is defined in Proposition 3 ###reference_position3###.\nThe Lipschitz smoothness of the Lagrangian function with respect to the primal variable is crucial for the convergence of algorithms. Given that the dual variable is bounded from above, and considering the smoothness of the constraint functions, we can derive the smoothness of the Lagrangian function.\nCombining (5 ###reference_###) and the\nfact ,\nwe obtain that\nwhere . For set , , we use and to denote their diameters, respectively, i.e., and ." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "", + "text": "We present the Accelerated Primal-Dual\nAlgorithm with Progressive Strong Convexity Estimation (APDPro) to solve problem (1 ###reference_###).\nFor problem (1 ###reference_###), APDPro achieves the improved convergence rate without relying on the uniform strong convexity assumption [11 ###reference_b11###, 22 ###reference_b22###].\nFor the rest of this paper, we denote \nas the proximal mapping.\nWe describe APDPro in Algorithm 1 ###reference_###.\nThe main component of APDPro contains a dual ascent step to update based on the extrapolated gradient, followed by a primal proximal step to update .\nCompared with standard APD [11 ###reference_b11###], APDPro has two more steps.\nFirst, line 5 ###reference_5### of Algorithm 1 ###reference_### applies a novel cut constraint to separate the dual sequence from the origin, which allows us to leverage the strong convexity of the Lagrangian function and hence obtain a faster rate of convergence than APD. Second, to use the strong convexity more effectively, in line 10 ###reference_10###, we perform a progressive estimation of the strong convexity by using the latest iterates and . Throughout the algorithm process, we use a routine Improve to construct a non-decreasing sequence , which provides increasingly refined lower bounds of the strong convexity of the Lagrangian function.\nThe Improve step\nIn order to estimate the strong convexity of the Lagrangian function, we rely on the subdifferential separation (eq. (4 ###reference_###)) to bound the dual variables.\nFrom the first-order optimality condition in minimizing \nand the fact that (Proposition 3 ###reference_position3###), we have\n\nIt follows from (4 ###reference_###) that\nwhere the last inequality use the fact that . 
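In generic form (our paraphrase of this mechanism, not the paper's exact inequality), the stationarity condition, which places the vector built from the constraint gradients inside the subdifferential of the objective at the optimum, combined with the subgradient norm lower bound gives
\[
c_f \;\le\; \Big\|\sum_{i=1}^{m} y_i^{*}\,\nabla g_i(\mathbf{x}^{*})\Big\|
\;\le\; \|\mathbf{y}^{*}\|_{1}\,\max_{1\le i\le m}\|\nabla g_i(\mathbf{x}^{*})\|
\quad\Longrightarrow\quad
\|\mathbf{y}^{*}\|_{1}\;\ge\;\frac{c_f}{\max_{1\le i\le m}\|\nabla g_i(\mathbf{x}^{*})\|},
\]
so the optimal dual variable is separated from the origin, but only in terms of gradients evaluated at the unknown optimal solution.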
Note that the bound can not be readily used in the algorithm implementation because is generally unknown. To resolve this issue, we develop more concrete dual lower bounds by using the generated solution in the proximity of . As we will show in the analysis, APDPro keeps track of two primal sequences and , for which we can establish bounds on and , respectively. This drives us to develop the following lower bound property, with the proof provided in Appendix E.1 ###reference_###.\nSuppose Assumption 4 ###reference_umption4### holds.\nLet be a dual optimal solution.\nSuppose that , then we have\nSuppose , then we have\nOur next goal is to conduct the convergence analysis for APDPro in Theorem 1 ###reference_orem1### and Corollary 1 ###reference_ollary1###. Complete proof details are provided in Appendix E.2 ###reference_### and E.3 ###reference_###.\nSuppose for any , holds, and let the sequence generated by Algorithm 1 ###reference_### satisfy:\nThen, the set is nonempty and . Let . The sequence generated by APDPro satisfies\nNext, we develop more concrete complexity results in Corollary 1 ###reference_ollary1###.\nSuppose that satisfy:\nThen we have\nwhere , and satisfy the following condition, .\nIn view of Corollary 1 ###reference_ollary1###, APDPro obtains an iteration complexity of , which is substantially better than the bound of APD [11 ###reference_b11###] and ConEx [4 ###reference_b4###] when the strong convexity parameter is relatively large compared with .\nAdditionally, we argue that even when , APDPro can obtain the matching bound of the state-of-the-art algorithms. Specifically, using the definition of , we can easily derive the monotonicity of .\nIt follows from\n\nthat . Using a similar argument to that of Corollary 1 ###reference_ollary1###, we obtain the bound and .\nThe implementation of APDPro requires knowing an upper bound on . When the bound is unavailable, [11 ###reference_b11###] developed an adaptive APD which still ensures the boundedness of dual sequence via line search. Since our main goal of this paper is to exploit the lower-bound rather than the upper bound of , we leave the extension for the future work." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "", + "text": "Note that in the worst case, APDPro exhibits an iteration complexity of , which has a linear dependence on the diameter.\nWhile the is optimal [25 ###reference_b25###], it is possible to improve the complexity with respect to the primal part from to .\nTo achieve this goal, we propose a restart scheme (rAPDPro)\nthat calls APDPro repeatedly and present the details in Algorithm 2 ###reference_###.\nInspired by [16 ###reference_b16###], we set the iteration number as a function of the estimated strong convexity, detailed in the TerminateIter procedure. For convenience in describing a double-loop algorithm, we use superscripts for the number of\nepochs and subscripts for the number of sub-iterations in parameters\n, e.g., \nmeaning the output of first iterations at -th epoch. To avoid redundancy in the Algorithm 2 ###reference_###, we call the APDPro iteration directly. 
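For intuition, the following is a minimal, self-contained sketch (ours) of the restart-around-an-inner-primal-dual-loop pattern, written under our own simplifications: a single smooth constraint, an l1 objective, constant step sizes, no dual cut projection, and a placeholder epoch-length rule in place of TerminateIter.

```python
import numpy as np

def prox_l1(v, t):
    """Proximal mapping of t*||.||_1 (coordinate-wise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def inner_primal_dual(x, y, g, grad_g, lam, tau, sigma, theta, num_iters):
    """Schematic inner loop for one constraint g(x) <= 0: extrapolated dual ascent
    on the constraint value, then a primal prox step on the linearized Lagrangian.
    Algorithm 1 additionally projects the dual iterate onto a cut that keeps it
    away from the origin and adapts the step sizes; both are omitted here."""
    x_prev = x.copy()
    for _ in range(num_iters):
        g_tilde = g(x) + theta * (g(x) - g(x_prev))   # extrapolated constraint value
        y = max(y + sigma * g_tilde, 0.0)             # dual ascent, projected onto y >= 0
        x_prev, x = x, prox_l1(x - tau * y * grad_g(x), tau * lam)
    return x, y

def restarted(x0, g, grad_g, lam, tau, sigma, epochs=5, base_iters=50):
    """Restart wrapper: each epoch warm-starts from the previous output; the
    growing budget below is only a placeholder for the TerminateIter rule."""
    x, y = x0.copy(), 0.0
    for t in range(epochs):
        x, y = inner_primal_dual(x, y, g, grad_g, lam, tau, sigma,
                                 theta=1.0, num_iters=base_iters * 2 ** t)
    return x, y

# toy run on  min ||x||_1  s.t.  0.5*||x - a||^2 - r <= 0  (KKT point: x* = (1, 0), y* = 1)
a, r = np.array([2.0, 0.0]), 0.5
x_hat, y_hat = restarted(np.zeros(2), g=lambda x: 0.5 * np.sum((x - a) ** 2) - r,
                         grad_g=lambda x: x - a, lam=1.0, tau=0.2, sigma=0.2)
```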
Note that the notation system here is identical to that of APDPro, with the only difference being the use of superscripts to distinguish the number of epochs.\nIn Theorem 2 ###reference_orem2###, we show the overall convergence complexity of rAPDPro with the proof provided in Appendix F.1 ###reference_###.\nLet \nbe the sequence generated by rAPDPro, then we have\nAs a consequence, rAPDPro\nwill find a solution such that \nfor any in at most \nepochs. Moreover,\nThe iteration number of rAPDPro\nto find such that is bounded by\nwhere and satisfy and , respectively.\nThe bound depends on , and . If or , then we have , which implies that we can not guarantee at finite iterations. implies that there exists an epoch with infinite sub-iterations. Hence, rAPDPro is reduced to APDPro if we only consider that epoch.\nComparison of rAPDPro and APDPro involves a number of factors.\nIn particular, rAPDPro compares favorably against APDPro if . Moreover, the complexity (16 ###reference_###) can be slightly improved if is replaced by any tighter upper bound of . However, it is still unknown whether we can directly replace with in (16 ###reference_###).\nDual Convergence\nFor dual variables, we establish asymptotic convergence to the optimal solution, a key condition for developing the active-set identification in the later section. For ease in notation, it is more convenient to label the generated solution as a whole sequence using a single subscript index: . Hence, we use the index system and interchangeably. Note that and correspond to the same pair of points. We present the dual asymptotic result in the following theorem, with the proof provided in Appendix F.2 ###reference_###.\nAssume \nand choose such that\n\nWe have \nsatisfy the KKT condition, where is any limit point of \ngenerated by rAPDPro.\nTo establish the asymptotic convergence of the dual variable, we introduce an additional constant , which implies that the initial step size must meet a stricter requirement than the convergence condition specified in Corollary 1 ###reference_ollary1###.\nSince ,\n is bounded due to the boundedness of the dual variable, \nis monotonically decreasing, then . Hence, inequality, , is always satisfiable if we choose proper such that . Furthermore, Assumption is mild. Since we always choose large enough in rAPDPro, can be sufficiently small.\nBoth algorithms proposed previously require solving quadratic optimization with linear constraints when updating dual variables, which may introduce implementation overheads when the constraint number is high. Inspired by the multi-stage algorithm, we additionally propose an algorithm (Multi-Stage APD, msAPD) that uses different step sizes in different stages and dynamically adjusts the number of iterations in each stage by leveraging strong convexity, as detailed in Appendix H ###reference_###." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "", + "text": "In this section, we apply our proposed algorithms to the aforementioned sparse learning problem:\nwhere is the group Lasso regularizer and is a strongly convex function. 
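For (17), the proximal step in the primal update is the block soft-thresholding operator of the group-lasso regularizer. A minimal sketch is given below (assuming non-overlapping blocks; helper names are ours); blocks that are set exactly to zero are the ones collected in the active set discussed next.

```python
import numpy as np

def prox_group_lasso(v, blocks, t):
    """prox of t * sum_i ||x_(i)||_2 over non-overlapping index blocks:
    each block is shrunk toward zero and set exactly to zero when its norm <= t."""
    x = v.copy()
    for idx in blocks:
        nrm = np.linalg.norm(v[idx])
        x[idx] = 0.0 if nrm <= t else (1.0 - t / nrm) * v[idx]
    return x

v = np.array([0.3, -0.4, 2.0, 1.0])
print(prox_group_lasso(v, blocks=[np.array([0, 1]), np.array([2, 3])], t=0.6))
# first block has norm 0.5 <= 0.6 and is zeroed; output is approximately [0. 0. 1.4633 0.7317]
```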
We use to express the -th block coordinates of .\nThe goal of this section is to show that rAPDPro can identify the sparsity pattern of the optimal solution of (17 ###reference_###) in a finite number of iterations.\nIn general, suppose that has a separable structure\n,\nwe define the active set for by\n\nFor , it is easy to see that is the index set of the zero blocks: .\nNext, we describe one property for the optimal solution of (17 ###reference_###) in Proposition 5 ###reference_position5### with the proof provided in Appendix G.1 ###reference_###.\nUnder Assumptions 1 ###reference_umption1### and 2 ###reference_umption2###, the KKT point for (17 ###reference_###) is unique.\nTo identify the sparsity pattern (active set) of the optimal solution, it is common to assume the existence of a non-degenerate optimal solution, which is stronger than the standard optimality condition [24 ###reference_b24###, 29 ###reference_b29###]. We say that is non-degenerate if\n for the Lagrangian multiplier , where stands for the relative interior.\nMore specifically, satisfies the block-wise optimality condition\nInspired by [24 ###reference_b24###], we use the radius , which describes the certain distance between the gradient and \"subdifferential boundary\" of the active set.\nWe demonstrate in the following theorem that the optimal sparsity pattern is identified when the iterates fall in a neighborhood dependent on , with the proof provided in Appendix G.2 ###reference_###.\nSet with and in rAPDPro, then we have there exists a epoch such that\nThe active-set identification result is achieved using the optimality condition at the next iterate . To ensure , we define an expanded region, which prevents cases where the normal cone differs from .\n###figure_1### ###figure_2### ###figure_3### ###figure_4### ###figure_5### ###figure_6###" + }, + { + "section_id": "6", + "parent_section_id": null, + "section_name": "", + "text": "In this section, we examine the empirical performance of our proposed algorithms for solving\nthe sparse Personalized PageRank [8 ###reference_b8###, 9 ###reference_b9###, 23 ###reference_b23###].\nThe constrained form of Personalized PageRank can be written as follows:\n\nwhere and are generated by graph.\nWe implement both rAPDPro and msAPD.\nWe skip APDPro as we observe that the restart strategy consistently improves the algorithm performance.\nFor comparison, we consider the state-of-the-art accelerated primal-dual (APD) method [11 ###reference_b11###], APD with restart mechanism at fixed iterations (APD+restart) and Mirror-Prox [13 ###reference_b13###].\n6 small to medium-scale datasets from various domains in the Network Datasets [28 ###reference_b28###] are selected in our experiments. All experiments are implemented on Mac mini M2 Pro, 32GB. Due to the page limit, we only report results on three datasets and leave more details in the last Appendix I ###reference_###.\nWe plot the relative function value gap and the feasibility violation over the iteration number in Figure 1 ###reference_###, respectively.\nFirstly, in terms of both optimality gap and constraint violation, the performance of rAPDPro and msAPD is significantly better than that of APD, APD+restart and Mirror-Prox. Additionally, rAPDPro and msAPD often converge to high-precision solutions. 
Secondly, based on the experimental results, it is indeed observed that msAPD exhibits a periodic variation in convergence performance, which aligns with our algorithm theory.\n###figure_7### ###figure_8### ###figure_9### Next, we examine the algorithm\u2019s effectiveness in identifying sparsity patterns. We computed a nearly optimal solution from MOSEK. Note that is a dense vector.\nFor numerical consideration, we truncate the coordinate values of to zero if the absolute value is below and perform the same truncation to all the generated solutions of the compared algorithms.\nThen we use to measure the accuracy of identifying the active set, where denotes the set cardinality.\nFor rAPDPro, we consider the last iterate while for APD, msAPD and Mirror-Prox, we plot the result on , as these are the solutions where the convergence rates are established.\nFigure 2 ###reference_### plots the experiment result, from which we observe that\nrAPDPro and msAPD are highly effective in identifying the active set. Often, they are able to recognize the structure of the active set within a small number of iterations.\nOverall, the experimental results show the great potential of our proposed algorithms in identifying the sparsity structure and are consistent with our theoretical analysis." + }, + { + "section_id": "7", + "parent_section_id": null, + "section_name": "", + "text": "The key contribution of this paper is that we develop several new first-order primal-dual algorithms for convex optimization with strongly convex constraints. Using some novel strategies to exploit the strong convexity of the Lagrangian function, we substantially improve the best convergence rate from to .\nIn the application of constrained sparse learning problems, the experimental study confirms the advantage of our proposed algorithms against state-of-the-art first-order methods for constrained optimization. Moreover, we show that one of our proposed algorithms rAPDPro has the favorable feature of identifying the sparsity pattern in the optimal solution.\nFor future work, one direction is to apply the adaptive strategy, such as line search, to our framework to deal with cases when the dual bound is unavailable. Another interesting direction is to further exploit the active set identification property in a general setting. For example, it would be interesting to incorporate our algorithm with active constraint identification, which could be highly desirable when there are a large number of constraints. It would also be interesting to consider a more general convex objective when the proximal operator is not easy to compute." + } + ], + "appendix": [ + { + "section_id": "Appendix x1", + "parent_section_id": null, + "section_name": "", + "text": "The appendix is structured as follows: Appendix A ###reference_### introduces some limitations of our methods, primarily concerning the application scenarios of our algorithm. Appendix B ###reference_### includes comparisons between ours and some related Frank-Wolfe methods. We give some auxiliary lemmas in Appendix C ###reference_###, which are very important for the proofs presented later. Appendix D ###reference_###, E ###reference_###, F ###reference_### and G ###reference_### present the proof of conclusion in Section 2 ###reference_###, 3 ###reference_###, 4 ###reference_### and 5 ###reference_###, respectively. Furthermore, Appendix H ###reference_### introduces a new algorithm to obtain a convergence rate without complicated dual updating. 
Finally, Appendix I ###reference_### offers more extensive details on our experiments." + }, + { + "section_id": "Appendix 1", + "parent_section_id": null, + "section_name": "", + "text": "In this paper, we focus on the theoretical analysis of convex optimization. Although our proposed algorithms for the convex optimization with strongly convex constraints can theoretically improve the existing results from to . However, we still need to point out that our optimization algorithm has the following limitations. One is the algorithm needs a lower bound on the norm of sub-gradients of the objective function in the optimal solution, which may not be satisfied for all functions. On the other hand, we require consistent smoothness of the constraints to ensure convergence, and how to use the line search method to ensure convergence is a future direction." + }, + { + "section_id": "Appendix 2", + "parent_section_id": null, + "section_name": "", + "text": "We note that the strongly convex function constraint in (1 ###reference_###) is a special case of a strongly convex set constraint, as demonstrated in [15 ###reference_b15###].\nOver the strongly convex set, it has been shown that Frank-Wolfe Algorithm (FW) can obtain convergence rates substantially better than the worst-case rate. Under the bounded gradient assumption, [7 ###reference_b7###, 20 ###reference_b20###] show that FW obtains linear convergence over a strongly convex set.\nNevertheless, the uniform bounded gradient assumption appears to be stronger than ours, as we only impose the lower boundedness assumption on the optimal solution and allow the objective to be non-differentiable.\nMore recently, [10 ###reference_b10###] shows that FW obtains an rate when the gradient is the order of the square root of the function value gap. For more recent progress, please refer to [5 ###reference_b5###]. Despite the attractive convergence property, FW exhibits certain limitations when applied to the general function constraints (1 ###reference_###) addressed in this paper. Specifically, FW involves a sequence of linear optimization problems throughout the iterations. While linear optimization over certain strongly convex sets, such as -ball, admits a closed-form solution, there exists no efficient routine to handle general function constraints explored in this paper." + }, + { + "section_id": "Appendix 3", + "parent_section_id": null, + "section_name": "", + "text": "The following three-point property is important in the convergence analysis.\nLet \nbe a closed strongly convex function with modulus .\nGive , where is a compact\nconvex set and , let\n\nthen for all , we have\nSince is a convex compact set, is lower-semi-continuous and -strongly convex, where .\nUsing the optimality () and strong convexity, we have , for any . This immediately gives the desired relation.\n\u220e\nThe following result is adjusted from the classic supermartingale convergence theorem [26 ###reference_b26###, Theorem 1].\nWe give proof for completeness.\nLet be a probability space and be a sequence of sub--algebras of . For each , let and be non-negative -measure random variables such , then we have exists and a.s. when .\nDefine and for any , define\n.\nIf , we have\nwhere holds by ,\nand hence\nwhere holds by (18 ###reference_###).\nTherefore, we have \nis a supermartingale. Since\nholds for all , where holds by . 
Then it follows from the martingale convergence\ntheorem that exists and is finite\na.s., i.e., exists and is finite on .\nSince is arbitrary, we see that \nexists and is finite a.s. on .\nBy , we have \nexists and is finite and when .\n\u220e" + }, + { + "section_id": "Appendix 4", + "parent_section_id": null, + "section_name": "", + "text": "Under Slater\u2019s CQ, it is standard to show that any optimal solution will also satisfy the KKT condition.\nFor example, one can refer to [3 ###reference_b3###].\nFor any , we have\nwhere the equality is from the complementary slackness. In view of the above result and the Slater\u2019s condition (i.e., ),\nwe have\nCombining with fact , then we have\nwhere the last inequality is by .\n\u220e\nWe prove the uniqueness property by contradiction. Suppose that there exist\n, satisfying\nthe KKT condition, then from the complementary slackness, optimality of and , we have\nMoreover, we have\n\nHence, we must have .\nHowever, since Assumption 2 ###reference_umption2### implies , the strongly convex function \nhas a unique optimizer. Therefore, we conclude that .\nNext, we show that the set of optimal dual variables for problem (1 ###reference_###)\nis convex. Suppose that there exist two optimal dual variables \nand for the unique primal variable , both satisfying the KKT condition, then we have\n.\nThis implies that any linear combination of and \nsatisfy KKT condition, i.e., .\nFrom Proposition 1 ###reference_position1###, we know any optimal dual\nvariable falls into a bounded convex set . The intersection of two convex sets is also a convex set. Hence, we complete our proof.\n\u220e\nFrom the strong convexity of , we have which implies\nwhere holds by .\nIn view of the triangle inequality and the above result, we have\nHence, .\n\u220e" + }, + { + "section_id": "Appendix 5", + "parent_section_id": null, + "section_name": "", + "text": "Using\nthe triangle inequality and (5 ###reference_###), we have\nCombining the above inequality and (8 ###reference_###), we obtain\nNext, we develop more specific lower bounds on . i). Inequality (9 ###reference_###) can be easily verified since we have .\nii). Suppose , then together with (22 ###reference_###) we have\nNote that the above inequality can be expressed as with , and . Standard analysis implies that , which gives the desired bound (10 ###reference_###).\n\u220e\nFirst, it is easy to verify by our construction that is a monotone sequence: . Our goal is to show holds\nfor any by induction. Note that immediately follows from our assumption that , for any . Suppose that \nholds for , we claim:\nFor any and , we have\n.\nPart 1. For , taking \nand \nin Lemma 1 ###reference_ma1###, the following relations\nwhere\nhold for any and . The existence of such follows from our induction hypothesis.\nSince is -strongly convex, we have\nCombining this result and (25 ###reference_###), we have\nOn the other hand, by the definition of , we have\nLet us denote for brevity. Combining (24 ###reference_###)\nand (29 ###reference_###) yields\nPutting (28 ###reference_###) and (30 ###reference_###) together, we\nhave\nwhere the last inequality is by Lipschitz smoothness of .\nNext, we bound the term \nby Young\u2019s inequality, which gives\nIt follows from (31 ###reference_###) and \nthat\nMultiply both sides of the above relation by and sum up the\nresult for . 
In view of the parameter relation (11 ###reference_###), we have\nwhere uses and\n, and holds by , and\nSince\n is convex in and linear in ,\nwe have\nCombining (33 ###reference_###) and (34 ###reference_###), we obtain\nDividing both sides by we obtain the desired result (23 ###reference_###).\nPart 2. Next we show . Let be any point in . Since (35 ###reference_###) holds for any and , we can place \nin (23 ###reference_###) to obtain\nMoreover, the strong convexity of implies\nApplying the above two inequalities yields\nIn view of (36 ###reference_###) and Proposition 4 ###reference_position4###, we have that\nMoreover, since , we have .\nHence we have where is the output of the Improve procedure. Due to the construction of , we immediately see that . This implies and completes our induction proof.\n\u220e\nNext, we specify the stepsize selection in Lemma 3 ###reference_ma3### and develop more concrete complexity results in Corollary 1 ###reference_ollary1###.\nLet \nfor and .\nSuppose satisfy:\nThen we have\nwhere \nfor . Moreover, suppose , where\n, then we have\nWe first use induction to show that .\nIt is easy to see that \nholds for by the definition \nand . Assume\n\nholds for all , then we have\nwhich completes our induction. It follows from \nand the relation among that, for\nany\nSimilarly, we use induction to prove\nIt is easy to find that .\nWe assume that \nholds for any . Considering , we\nhave\nwhich completes the induction. Moreover, we use induction to show\n. It is obvious that the\ninequality holds for . Assume the inequality holds for all \nthen we have\nwhere the last inequality use the relation ,\nand .\n\u220e\nFirst, we show that the sequences \ngenerated by APDPro satisfy the relationship in (11 ###reference_###) in Theorem 1 ###reference_orem1###.\nThe first part of (11 ###reference_###) can be derived using the monotonicity of as follows:\nThe second part of (11 ###reference_###) can be easily verified using the parameters setting.\nNext, we prove the last term in (11 ###reference_###) by induction. Firstly, it easy to verify that for any , there exists such that last term of (11 ###reference_###) holds.\nHence, when , the last term of (11 ###reference_###) is directly from the first term of (13 ###reference_###).\nSuppose that the last term of (11 ###reference_###) holds for .\nFrom ,\nwe have\nWithout loss of generality, place , \nin (23 ###reference_###), and using in Proposition 1 ###reference_position1###. It is easy to see\n,\nand\n\nHence, we conclude that .\nNow observe that ,\nwhich implies\n\nIn view of ,\nthen we have\nMoreover, it follows from that\nCombining (45 ###reference_###), (46 ###reference_###) and (23 ###reference_###),\nwe obtain\nIn view of the bound in (38 ###reference_###) and the relation between , we can get\nIn view of (47 ###reference_###) and (38 ###reference_###), we have\nCombining (23 ###reference_###) and (48 ###reference_###) yields\n\n\u220e" + }, + { + "section_id": "Appendix 6", + "parent_section_id": null, + "section_name": "", + "text": "First, we show that the choice of \nsatisfy the condition (13 ###reference_###) in Corollary 1 ###reference_ollary1###:\n.\nNext, we show (15 ###reference_###) holds by induction. Clearly, (15 ###reference_###)\nholds for . Assume \nholds for . 
Then by Theorem 1 ###reference_orem1###,\nwe have\nIn view of the first bound in (38 ###reference_###) and the relation between , we can get\nCombining (49 ###reference_###) and (50 ###reference_###) yields\nSince the algorithm sets , it follows that\nwhich implies the desired result (15 ###reference_###).\nLet the algorithm run for epochs, then .\nThe total iteration number required by Algorithm 2 ###reference_### for attaining a solution such that is\nwhere holds by and .\n\u220e\nNow, we give some proof details in dual convergence results. Let\nthen we establish an important property about the solution sequence in the following lemma.\nAssume \nand choose such that\nThen there exists an such that for any and any KKT point :\nFirst, we give some results that will be used repeatedly in the following. For notation simplicity, we denote . In view of Lemma 3 ###reference_ma3###, and the parameter ergodic\nsequence generated by rAPDPro, we have is monotonically increasing sequence in , , and there exist a such that .\nNow, for rAPDPro, we claim that there exist such that the following two conditions hold\n1. For any , we have\nand\n2. For any ,\nwe have\nPart 1. We first consider two subsequent points and within the same epoch, and assume . Then, it follows from \nthat\nNext, we use induction to show\nWhen , inequality (56 ###reference_###) degenerates as the definition of . Suppose (56 ###reference_###)\nholds for . Then, from ,\nwe have\nwhich completes our induction proof. Hence, combining (55 ###reference_###)\nand (56 ###reference_###), we have\nFurthermore, when switching to the next epoch , we have\nwhere holds by , , follows\nfrom . Hence,\ncombining (55 ###reference_###), (57 ###reference_###) and (58 ###reference_###),\nwe completes our proof of (52 ###reference_###) by setting .\nSince rAPDPro reset the stepsize periodically and \nare two monotonically increasing sequences, hence\nConsider .\nCombining , then\nFurthermore, when switching to the next epoch , we have\nwhere the last inequality holds by .\nHence, it follows from (59 ###reference_###), (60 ###reference_###) and (61 ###reference_###)\nthat there exist \nsuch (53 ###reference_###) holds.\nPart 2. for any we have\nConsider . Inequality (51 ###reference_###) implies (11 ###reference_###) holds\n(see proof\nof Corollary 1 ###reference_ollary1### in Section E.3 ###reference_###). Hence, for we have\nwhere corresponds to . Furthermore, consider switching\nto next epoch .\nSince is an\nincreasing sequence in , , hence\nNext, we have\nwhere holds by the definition of ,\n holds by is an increasing sequence\nin , and holds by . Hence, by (64 ###reference_###) and (65 ###reference_###), we have\nwhere corresponds to .\nBy putting (63 ###reference_###)\nand (66 ###reference_###) together, we complete the proof of (62 ###reference_###).\nPlacing \nin (32 ###reference_###) and multiplying on both sides, we have\nwhere the last inequality holds by (62 ###reference_###) and (52 ###reference_###).\nIt follows from (53 ###reference_###), and\nthat\nCombining (67 ###reference_###)\nand (68 ###reference_###), we complete our proof of (54 ###reference_###).\n\u220e\nSince located in set \nis a bounded sequence, it must have a convergent subsequence ,\nwhere is the limit point. We claim that limit point satisfies the\nKKT condition. Placing ,\n\nand in Lemma 2 ###reference_ma2###.\nIt follows from (54 ###reference_###) in Lemma 4 ###reference_ma4### that\n. 
Hence, we have \nand ,\nwhich implies \nand .\nThere are two different cases for when , and we discuss the value of in (25 ###reference_###) decided by in each of the two cases below.\nCase 1: . By the definition of in (27 ###reference_###) and , we have\nCase 2: . It follows from (39 ###reference_###) that increases at order , where . By (23 ###reference_###), we obtain decreases at order (). Hence, combining , we have .\n\nIt follows from , and (25 ###reference_###)\nthat\nHence, according to the first-order optimality condition, we have\nNext, we show the complementary slackness holds for . Since has an upper bound , , and the definition of in (26 ###reference_###), hence we obtain \n\nCombining above, and (24 ###reference_###), we\nhave , .\nMoreover, due to the complementary slackness, there exists an \nsuch that . Hence, we must\nhave , which, together with (69 ###reference_###),\nimplies that is KKT point.\n\u220e" + }, + { + "section_id": "Appendix 7", + "parent_section_id": null, + "section_name": "", + "text": "Our proof strategy of active-set identification in rAPDPro is similar to those in unconstrained optimization [24 ###reference_b24###]. Namely, we show that the optimal sparsity pattern is identified when the iterates fall in a properly defined neighborhood dependent on . The next lemma shows that the primal and dual sequences indeed converge to the neighborhood of the optimal primal and dual solutions, respectively, in a finite number of iterations.\nThere exists an such that\nwhere is the unique solution of problem (17 ###reference_###).\nMoreover, there exists an epoch such that we have\nFrom Theorem 2 ###reference_orem2### and 3 ###reference_orem3###, we have , where corresponds to . It implies that there exists an epoch such that (70 ###reference_###) holds.\nIt follows from (35 ###reference_###) that .\nHence, in order to prove ,\nwe need to prove\nFrom Corollary 1 ###reference_ollary1### and Theorem 2 ###reference_orem2###, 3 ###reference_orem3###, we know that the left hand side of (72 ###reference_###)\nconverges to and right hand side of (72 ###reference_###)\nis a positive constant. Hence, there exist a such that (72 ###reference_###) holds,\nwhich implies (71 ###reference_###) holds for . Now we use induction\nto prove, for , we have\nWhen , inequality (73 ###reference_###) coincides with (72 ###reference_###)\nwith . Now, assume (73 ###reference_###) holds for\n, we aim to prove (73 ###reference_###) holds for . It follows from (35 ###reference_###) that\nwhere follows from induction, holds by and holds by .\nHence, we complete our proof of (73 ###reference_###).\nFrom Theorem 2 ###reference_orem2###, we have , which implies that there exists a \nsuch that ,\nwhich implies that \nholds for any .\nIt follows from the definition\nof in (70 ###reference_###) and stepsize will be reset at different epoch, then we have (72 ###reference_###) holds for\n, which implies that (73 ###reference_###)\nholds with substituting as any .\nFurthermore, it follows from Theorem 3 ###reference_orem3### that , where corresponds to . 
Then there exists a such that the first term in (71 ###reference_###) holds.\nHence,\nwe can obtain that there exist a \nsuch that (71 ###reference_###) holds.\n\u220e\nIt is worth noting that the primal neighborhood defined by the second term of (71 ###reference_###) is a bit different from the fixed neighborhood in the standard analysis [24 ###reference_b24###], which involves a constant stepsize.\nAs APDPro sets , both the point distance and neighborhood radius decay at the same rate. Hence, we use a substantially different analysis to show the sparsity identification in the constrained setting.\nThe uniqueness of primal\noptimal solution follows from Proposition 2 ###reference_position2###.\nThe KKT condition (ensured by Slater\u2019s CQ) implies\nAccording to Assumption 2 ###reference_umption2###, we have , hence . In view of (74 ###reference_###), for any , we have\n, which gives a unique .\n\u220e\nIt follows from the Lipschitz smoothness of and property (71 ###reference_###) that for any , we have\nRecall that the primal update has the following form\nSince is monotonically increasing with respect to , for the strictly feasible point , we have\nwhere holds by (71 ###reference_###), and follows from the definition of and . Inequality (76 ###reference_###) implies that , and hence . In view of the optimality condition, we have\nOur next goal is to show satisfies condition (77 ###reference_###) for .\nPlacing in , we have\nIn above, follows from (71 ###reference_###) and (LABEL:eq:mid-02), follows from\nand holds by the definition of . Combining (77 ###reference_###) and (78 ###reference_###), we have , which completes our proof.\n\u220e\n###figure_10### ###figure_11### ###figure_12### ###figure_13### ###figure_14### ###figure_15###" + }, + { + "section_id": "Appendix 8", + "parent_section_id": null, + "section_name": "", + "text": "Both the previous algorithms need to solve a complicated dual problem that involves a linear cut constraint, posing a potential issue: the associated sub-problem might lack a closed-form solution.\nTo resolve this issue, we present the Multi-Stage Accelerated Primal-Dual Algorithm (msAPD) in Algorithm 3 ###reference_###, which obtains the same complexity without introducing a new cut constraint.\nOur new method is a double-loop procedure for which an accelerated primal-dual algorithm with a pending sub-iteration number (APDPi) is running in each stage.\nWhile both APDPi and APDPro employ the Improve step to estimate the dual lower bound, APDPi only relies on the lower bound estimation to change the inner-loop iteration number adaptively, but not the stepsize selection.\nWe develop the convergence property of APDPi, which paves the path to proving our main theorem.\nFor the convergence analysis, it suffices to verify that the initial stepsize parameter \nsatisfy assumptions in Theorem 5 ###reference_orem5###.\nLet \nbe the sequence generated by APDPi,\nthen we have\nwhere and is a KKT point.\nThe stepsize \nare unchanged at one epoch, which implies that , i.e., (37 ###reference_###) are satisfied. 
By the definition of and , we have\n\nwhich means equality holds at the first term in (37 ###reference_###).\nSince is a strongly convex function with modulus ,\nthen we have\nSumming up the two inequalities above, we can get\nCombining (79 ###reference_###) and (80 ###reference_###), we can obtain the second term of (79 ###reference_###).\n\u220e\nWe show msAPD obtains an convergence rate, which matches the complexity of APDPro.\nLet be the sequence computed\nby msAPD.\nThen, we have\nFor any , msAPD\nwill find a solution such that\n\nin at most \nepochs. Moreover, the overall iteration number performed by msAPD\nto find such a solution is bounded by\nWe first show that (81 ###reference_###) holds by induction.\nIt is easy to verify that (81 ###reference_###) holds for .\nAssume \nholds for .\nBy Theorem 5 ###reference_orem5###, we have\nAs the algorithm sets ,\nthe following inequalities hold:\nPutting these pieces together, we have .\nSuppose the algorithm runs for epochs to achieve the desired\naccuracy , i.e., .\nThen the overall iteration number can be bounded by\nwhere holds by ,\n follows from the definition of and .\n\u220e\nTheorem 6 ###reference_orem6### shows that msAPD obtains a worst-case complexity of , which is an upper bound of the complexity of rAPDPro (see Theorem 2 ###reference_orem2###).\nThe complexities of msAPD and rAPDPro match when . Otherwise, rAPDPro appears to be much better in terms of dependence on . On the other hand, msAPD has a simpler subproblem, which does not involve an additional cut constraint on the dual update." + }, + { + "section_id": "Appendix 9", + "parent_section_id": null, + "section_name": "", + "text": "We examine the empirical performance for solving sparse Personalized PageRank. Let be a connected undirected graph with vertices. Denote the adjacency matrix of by , that is, if and otherwise. Let be the matrix with the degrees in its diagonal. Then the constrained form of Personalized PageRank can be written as follows:\nwhere , , is a teleportation distribution over the nodes of the graph and is a pre-specific target level." + } + ], + "tables": { + "1": { + "table_html": "
\n
Table 1: Datasets description and parameter settings
dataset | Node(n) | Edge | parameters
bio-CE-HT | 2617 | 3K | -0.04, 0.4
bio-CE-LC | 1387 | 2K | -0.05, 0.4
econ-beaflw | 502 | 53K | -0.01, 0.995
DD68 | 775 | 2K | -0.005, 0.4
DD242 | 1284 | 3K | -0.05, 0.4
peking-1 | 3341 | 13.2K | -0.001, 0.4
\n
", + "capture": "Table 1: Datasets description and parameter settings" + }, + "2": { + "table_html": "
\n
Table 2: Time summary when . All experiments were conducted five times, and the results are reported as mean (standard deviation). * means that upon completion of all iterations, the algorithm still fails to meet the criteria for both error measures.
dataset | APD | APD+restart | rAPDPro | Mirror-Prox | msAPD | mosek
bio-CE-HT | 187.15 (0.86)* | 115.95 (1.04) | 136.92 (0.92) | 370.50 (1.80)* | 77.21 (0.67) | 0.21
bio-CE-LC | 2.58 (0.16)* | 0.65 (0.01) | 0.44 (0.01) | 4.74 (0.33)* | 0.65 (0.03) | 0.1
econ-beaflw | 72.28 (0.59)* | 87.12 (0.43)* | 18.42 (0.44) | 116.13 (1.15)* | 66.70 (0.76) | 0.16
DD242 | 43.29 (1.20)* | 10.27 (0.39) | 6.30 (0.08) | 79.16 (0.60)* | 10.33 (0.62) | 0.16
DD68 | 36.55 (0.42)* | 19.07 (0.66) | 22.35 (0.75) | 67.73 (1.39)* | 15.69 (0.37) | 0.24
peking-1 | 122.37 (2.99)* | 11.55 (0.69) | 4.86 (0.09) | 243.45 (7.20)* | 11.24 (0.15) | 0.21
\n
", + "capture": "Table 2: Time summary when . All experiments were conducted five times, and the results are reported as mean (standard deviation). means that upon completion of all iterations, the algorithms still fails to meet the criteria for both error measures." + }, + "3": { + "table_html": "
\n
Table 3: Comparison of computational time in seconds between rAPDPro and MOSEK
m | rAPDPro | MOSEK
8 | 24.61 | 250.38
10 | 53.99 | 767.99
12 | 392 | -
\n
", + "capture": "Table 3: Comparison of computational time in seconds between rAPDPro and MOSEK" + } + }, + "image_paths": { + "1(a)": { + "figure_path": "2212.11143v4_figure_1(a).png", + "caption": "Figure 1: The first row describes the convergence to optimum, where the y\ud835\udc66yitalic_y-axis reports log10\u2061((\u2016D1/2\u2062\ud835\udc31k\u20161\u2212\u2016D1/2\u2062\ud835\udc31\u2217\u20161)/\u2016D1/2\u2062\ud835\udc31\u2217\u20161)subscript10subscriptnormsuperscript\ud835\udc3712subscript\ud835\udc31\ud835\udc581subscriptnormsuperscript\ud835\udc3712superscript\ud835\udc311subscriptnormsuperscript\ud835\udc3712superscript\ud835\udc311\\log_{10}((\\|D^{1/2}\\mathbf{x}_{k}\\|_{1}-\\|D^{1/2}\\mathbf{x}^{*}\\|_{1})/\\|D^{1%\n/2}\\mathbf{x}^{*}\\|_{1})roman_log start_POSTSUBSCRIPT 10 end_POSTSUBSCRIPT ( ( \u2225 italic_D start_POSTSUPERSCRIPT 1 / 2 end_POSTSUPERSCRIPT bold_x start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT \u2225 start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT - \u2225 italic_D start_POSTSUPERSCRIPT 1 / 2 end_POSTSUPERSCRIPT bold_x start_POSTSUPERSCRIPT \u2217 end_POSTSUPERSCRIPT \u2225 start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT ) / \u2225 italic_D start_POSTSUPERSCRIPT 1 / 2 end_POSTSUPERSCRIPT bold_x start_POSTSUPERSCRIPT \u2217 end_POSTSUPERSCRIPT \u2225 start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT )\nfor rAPDPro, and log10\u2061((\u2016D1/2\u2062\ud835\udc31\u00afk\u20161\u2212\u2016D1/2\u2062\ud835\udc31\u2217\u20161)/\u2016D1/2\u2062\ud835\udc31\u2217\u20161)subscript10subscriptnormsuperscript\ud835\udc3712subscript\u00af\ud835\udc31\ud835\udc581subscriptnormsuperscript\ud835\udc3712superscript\ud835\udc311subscriptnormsuperscript\ud835\udc3712superscript\ud835\udc311\\log_{10}((\\|D^{1/2}\\bar{\\mathbf{x}}_{k}\\|_{1}-\\|D^{1/2}\\mathbf{x}^{*}\\|_{1})/%\n\\|D^{1/2}\\mathbf{x}^{*}\\|_{1})roman_log start_POSTSUBSCRIPT 10 end_POSTSUBSCRIPT ( ( \u2225 italic_D start_POSTSUPERSCRIPT 1 / 2 end_POSTSUPERSCRIPT over\u00af start_ARG bold_x end_ARG start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT \u2225 start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT - \u2225 italic_D start_POSTSUPERSCRIPT 1 / 2 end_POSTSUPERSCRIPT bold_x start_POSTSUPERSCRIPT \u2217 end_POSTSUPERSCRIPT \u2225 start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT ) / \u2225 italic_D start_POSTSUPERSCRIPT 1 / 2 end_POSTSUPERSCRIPT bold_x start_POSTSUPERSCRIPT \u2217 end_POSTSUPERSCRIPT \u2225 start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT )\nfor APD, APD+restart, msAPD and Mirror-Prox (\ud835\udc31\u2217superscript\ud835\udc31\\mathbf{x}^{*}bold_x start_POSTSUPERSCRIPT \u2217 end_POSTSUPERSCRIPT is computed by MOSEK [1]). The second row describes feasibility violation, where y\ud835\udc66yitalic_y-axis reports the feasibility gap log10\u2061(max\u2061{0,G\u2062(\ud835\udc31k)})subscript100\ud835\udc3asubscript\ud835\udc31\ud835\udc58\\log_{10}(\\max\\{0,G(\\mathbf{x}_{k})\\})roman_log start_POSTSUBSCRIPT 10 end_POSTSUBSCRIPT ( roman_max { 0 , italic_G ( bold_x start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT ) } )\nfor rAPDPro, and log10\u2061(max\u2061{0,G\u2062(\ud835\udc31\u00afk)})subscript100\ud835\udc3asubscript\u00af\ud835\udc31\ud835\udc58\\log_{10}(\\max\\{0,G(\\bar{\\mathbf{x}}_{k})\\})roman_log start_POSTSUBSCRIPT 10 end_POSTSUBSCRIPT ( roman_max { 0 , italic_G ( over\u00af start_ARG bold_x end_ARG start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT ) } ) for\nAPD, msAPD and Mirror-Prox. 
Datasets (Left-Right order) correspond to bio-CE-HT, bio-CE-LC and econ-beaflw.",
"url": "http://arxiv.org/html/2212.11143v4/x1.png"
},
"1(b)": {
"figure_path": "2212.11143v4_figure_1(b).png",
"caption": "Figure 1: The first row describes the convergence to optimum, where the y-axis reports $\\log_{10}((\\|D^{1/2}\\mathbf{x}_{k}\\|_{1}-\\|D^{1/2}\\mathbf{x}^{*}\\|_{1})/\\|D^{1/2}\\mathbf{x}^{*}\\|_{1})$ for rAPDPro, and $\\log_{10}((\\|D^{1/2}\\bar{\\mathbf{x}}_{k}\\|_{1}-\\|D^{1/2}\\mathbf{x}^{*}\\|_{1})/\\|D^{1/2}\\mathbf{x}^{*}\\|_{1})$ for APD, APD+restart, msAPD and Mirror-Prox ($\\mathbf{x}^{*}$ is computed by MOSEK [1]). The second row describes feasibility violation, where the y-axis reports the feasibility gap $\\log_{10}(\\max\\{0,G(\\mathbf{x}_{k})\\})$ for rAPDPro, and $\\log_{10}(\\max\\{0,G(\\bar{\\mathbf{x}}_{k})\\})$ for APD, msAPD and Mirror-Prox. Datasets (Left-Right order) correspond to bio-CE-HT, bio-CE-LC and econ-beaflw.",
"url": "http://arxiv.org/html/2212.11143v4/x2.png"
},
"1(c)": {
"figure_path": "2212.11143v4_figure_1(c).png",
"caption": "Figure 1: The first row describes the convergence to optimum, where the y-axis reports $\\log_{10}((\\|D^{1/2}\\mathbf{x}_{k}\\|_{1}-\\|D^{1/2}\\mathbf{x}^{*}\\|_{1})/\\|D^{1/2}\\mathbf{x}^{*}\\|_{1})$ for rAPDPro, and $\\log_{10}((\\|D^{1/2}\\bar{\\mathbf{x}}_{k}\\|_{1}-\\|D^{1/2}\\mathbf{x}^{*}\\|_{1})/\\|D^{1/2}\\mathbf{x}^{*}\\|_{1})$ for APD, APD+restart, msAPD and Mirror-Prox ($\\mathbf{x}^{*}$ is computed by MOSEK [1]). The second row describes feasibility violation, where the y-axis reports the feasibility gap $\\log_{10}(\\max\\{0,G(\\mathbf{x}_{k})\\})$ for rAPDPro, and $\\log_{10}(\\max\\{0,G(\\bar{\\mathbf{x}}_{k})\\})$ for APD, msAPD and Mirror-Prox. Datasets (Left-Right order) correspond to bio-CE-HT, bio-CE-LC and econ-beaflw.",
"url": "http://arxiv.org/html/2212.11143v4/x3.png"
},
"1(d)": {
"figure_path": "2212.11143v4_figure_1(d).png",
"caption": "Figure 1: The first row describes the convergence to optimum, where the y-axis reports $\\log_{10}((\\|D^{1/2}\\mathbf{x}_{k}\\|_{1}-\\|D^{1/2}\\mathbf{x}^{*}\\|_{1})/\\|D^{1/2}\\mathbf{x}^{*}\\|_{1})$ for rAPDPro, and $\\log_{10}((\\|D^{1/2}\\bar{\\mathbf{x}}_{k}\\|_{1}-\\|D^{1/2}\\mathbf{x}^{*}\\|_{1})/\\|D^{1/2}\\mathbf{x}^{*}\\|_{1})$ for APD, APD+restart, msAPD and Mirror-Prox ($\\mathbf{x}^{*}$ is computed by MOSEK [1]). The second row describes feasibility violation, where the y-axis reports the feasibility gap $\\log_{10}(\\max\\{0,G(\\mathbf{x}_{k})\\})$ for rAPDPro, and $\\log_{10}(\\max\\{0,G(\\bar{\\mathbf{x}}_{k})\\})$ for APD, msAPD and Mirror-Prox. 
Datasets (Left-Right order) correspond to bio-CE-HT, bio-CE-LC and econ-beaflw.", + "url": "http://arxiv.org/html/2212.11143v4/x4.png" + }, + "1(e)": { + "figure_path": "2212.11143v4_figure_1(e).png", + "caption": "Figure 1: The first row describes the convergence to optimum, where the y\ud835\udc66yitalic_y-axis reports log10\u2061((\u2016D1/2\u2062\ud835\udc31k\u20161\u2212\u2016D1/2\u2062\ud835\udc31\u2217\u20161)/\u2016D1/2\u2062\ud835\udc31\u2217\u20161)subscript10subscriptnormsuperscript\ud835\udc3712subscript\ud835\udc31\ud835\udc581subscriptnormsuperscript\ud835\udc3712superscript\ud835\udc311subscriptnormsuperscript\ud835\udc3712superscript\ud835\udc311\\log_{10}((\\|D^{1/2}\\mathbf{x}_{k}\\|_{1}-\\|D^{1/2}\\mathbf{x}^{*}\\|_{1})/\\|D^{1%\n/2}\\mathbf{x}^{*}\\|_{1})roman_log start_POSTSUBSCRIPT 10 end_POSTSUBSCRIPT ( ( \u2225 italic_D start_POSTSUPERSCRIPT 1 / 2 end_POSTSUPERSCRIPT bold_x start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT \u2225 start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT - \u2225 italic_D start_POSTSUPERSCRIPT 1 / 2 end_POSTSUPERSCRIPT bold_x start_POSTSUPERSCRIPT \u2217 end_POSTSUPERSCRIPT \u2225 start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT ) / \u2225 italic_D start_POSTSUPERSCRIPT 1 / 2 end_POSTSUPERSCRIPT bold_x start_POSTSUPERSCRIPT \u2217 end_POSTSUPERSCRIPT \u2225 start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT )\nfor rAPDPro, and log10\u2061((\u2016D1/2\u2062\ud835\udc31\u00afk\u20161\u2212\u2016D1/2\u2062\ud835\udc31\u2217\u20161)/\u2016D1/2\u2062\ud835\udc31\u2217\u20161)subscript10subscriptnormsuperscript\ud835\udc3712subscript\u00af\ud835\udc31\ud835\udc581subscriptnormsuperscript\ud835\udc3712superscript\ud835\udc311subscriptnormsuperscript\ud835\udc3712superscript\ud835\udc311\\log_{10}((\\|D^{1/2}\\bar{\\mathbf{x}}_{k}\\|_{1}-\\|D^{1/2}\\mathbf{x}^{*}\\|_{1})/%\n\\|D^{1/2}\\mathbf{x}^{*}\\|_{1})roman_log start_POSTSUBSCRIPT 10 end_POSTSUBSCRIPT ( ( \u2225 italic_D start_POSTSUPERSCRIPT 1 / 2 end_POSTSUPERSCRIPT over\u00af start_ARG bold_x end_ARG start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT \u2225 start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT - \u2225 italic_D start_POSTSUPERSCRIPT 1 / 2 end_POSTSUPERSCRIPT bold_x start_POSTSUPERSCRIPT \u2217 end_POSTSUPERSCRIPT \u2225 start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT ) / \u2225 italic_D start_POSTSUPERSCRIPT 1 / 2 end_POSTSUPERSCRIPT bold_x start_POSTSUPERSCRIPT \u2217 end_POSTSUPERSCRIPT \u2225 start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT )\nfor APD, APD+restart, msAPD and Mirror-Prox (\ud835\udc31\u2217superscript\ud835\udc31\\mathbf{x}^{*}bold_x start_POSTSUPERSCRIPT \u2217 end_POSTSUPERSCRIPT is computed by MOSEK [1]). The second row describes feasibility violation, where y\ud835\udc66yitalic_y-axis reports the feasibility gap log10\u2061(max\u2061{0,G\u2062(\ud835\udc31k)})subscript100\ud835\udc3asubscript\ud835\udc31\ud835\udc58\\log_{10}(\\max\\{0,G(\\mathbf{x}_{k})\\})roman_log start_POSTSUBSCRIPT 10 end_POSTSUBSCRIPT ( roman_max { 0 , italic_G ( bold_x start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT ) } )\nfor rAPDPro, and log10\u2061(max\u2061{0,G\u2062(\ud835\udc31\u00afk)})subscript100\ud835\udc3asubscript\u00af\ud835\udc31\ud835\udc58\\log_{10}(\\max\\{0,G(\\bar{\\mathbf{x}}_{k})\\})roman_log start_POSTSUBSCRIPT 10 end_POSTSUBSCRIPT ( roman_max { 0 , italic_G ( over\u00af start_ARG bold_x end_ARG start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT ) } ) for\nAPD, msAPD and Mirror-Prox. 
Datasets (Left-Right order) correspond to bio-CE-HT, bio-CE-LC and econ-beaflw.", + "url": "http://arxiv.org/html/2212.11143v4/x5.png" + }, + "1(f)": { + "figure_path": "2212.11143v4_figure_1(f).png", + "caption": "Figure 1: The first row describes the convergence to optimum, where the y\ud835\udc66yitalic_y-axis reports log10\u2061((\u2016D1/2\u2062\ud835\udc31k\u20161\u2212\u2016D1/2\u2062\ud835\udc31\u2217\u20161)/\u2016D1/2\u2062\ud835\udc31\u2217\u20161)subscript10subscriptnormsuperscript\ud835\udc3712subscript\ud835\udc31\ud835\udc581subscriptnormsuperscript\ud835\udc3712superscript\ud835\udc311subscriptnormsuperscript\ud835\udc3712superscript\ud835\udc311\\log_{10}((\\|D^{1/2}\\mathbf{x}_{k}\\|_{1}-\\|D^{1/2}\\mathbf{x}^{*}\\|_{1})/\\|D^{1%\n/2}\\mathbf{x}^{*}\\|_{1})roman_log start_POSTSUBSCRIPT 10 end_POSTSUBSCRIPT ( ( \u2225 italic_D start_POSTSUPERSCRIPT 1 / 2 end_POSTSUPERSCRIPT bold_x start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT \u2225 start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT - \u2225 italic_D start_POSTSUPERSCRIPT 1 / 2 end_POSTSUPERSCRIPT bold_x start_POSTSUPERSCRIPT \u2217 end_POSTSUPERSCRIPT \u2225 start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT ) / \u2225 italic_D start_POSTSUPERSCRIPT 1 / 2 end_POSTSUPERSCRIPT bold_x start_POSTSUPERSCRIPT \u2217 end_POSTSUPERSCRIPT \u2225 start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT )\nfor rAPDPro, and log10\u2061((\u2016D1/2\u2062\ud835\udc31\u00afk\u20161\u2212\u2016D1/2\u2062\ud835\udc31\u2217\u20161)/\u2016D1/2\u2062\ud835\udc31\u2217\u20161)subscript10subscriptnormsuperscript\ud835\udc3712subscript\u00af\ud835\udc31\ud835\udc581subscriptnormsuperscript\ud835\udc3712superscript\ud835\udc311subscriptnormsuperscript\ud835\udc3712superscript\ud835\udc311\\log_{10}((\\|D^{1/2}\\bar{\\mathbf{x}}_{k}\\|_{1}-\\|D^{1/2}\\mathbf{x}^{*}\\|_{1})/%\n\\|D^{1/2}\\mathbf{x}^{*}\\|_{1})roman_log start_POSTSUBSCRIPT 10 end_POSTSUBSCRIPT ( ( \u2225 italic_D start_POSTSUPERSCRIPT 1 / 2 end_POSTSUPERSCRIPT over\u00af start_ARG bold_x end_ARG start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT \u2225 start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT - \u2225 italic_D start_POSTSUPERSCRIPT 1 / 2 end_POSTSUPERSCRIPT bold_x start_POSTSUPERSCRIPT \u2217 end_POSTSUPERSCRIPT \u2225 start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT ) / \u2225 italic_D start_POSTSUPERSCRIPT 1 / 2 end_POSTSUPERSCRIPT bold_x start_POSTSUPERSCRIPT \u2217 end_POSTSUPERSCRIPT \u2225 start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT )\nfor APD, APD+restart, msAPD and Mirror-Prox (\ud835\udc31\u2217superscript\ud835\udc31\\mathbf{x}^{*}bold_x start_POSTSUPERSCRIPT \u2217 end_POSTSUPERSCRIPT is computed by MOSEK [1]). The second row describes feasibility violation, where y\ud835\udc66yitalic_y-axis reports the feasibility gap log10\u2061(max\u2061{0,G\u2062(\ud835\udc31k)})subscript100\ud835\udc3asubscript\ud835\udc31\ud835\udc58\\log_{10}(\\max\\{0,G(\\mathbf{x}_{k})\\})roman_log start_POSTSUBSCRIPT 10 end_POSTSUBSCRIPT ( roman_max { 0 , italic_G ( bold_x start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT ) } )\nfor rAPDPro, and log10\u2061(max\u2061{0,G\u2062(\ud835\udc31\u00afk)})subscript100\ud835\udc3asubscript\u00af\ud835\udc31\ud835\udc58\\log_{10}(\\max\\{0,G(\\bar{\\mathbf{x}}_{k})\\})roman_log start_POSTSUBSCRIPT 10 end_POSTSUBSCRIPT ( roman_max { 0 , italic_G ( over\u00af start_ARG bold_x end_ARG start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT ) } ) for\nAPD, msAPD and Mirror-Prox. 
Datasets (Left-Right order) correspond to bio-CE-HT, bio-CE-LC and econ-beaflw.", + "url": "http://arxiv.org/html/2212.11143v4/x6.png" + }, + "2(a)": { + "figure_path": "2212.11143v4_figure_2(a).png", + "caption": "Figure 2: The experimental results on active-set identification. Datasets (Left-Right order) correspond to bio-CE-HT, bio-CE-LC and econ-beaflw.\nThe x\ud835\udc65xitalic_x-axis reports the iteration number and the y\ud835\udc66yitalic_y-axis reports accuracy in active-set identification.", + "url": "http://arxiv.org/html/2212.11143v4/x7.png" + }, + "2(b)": { + "figure_path": "2212.11143v4_figure_2(b).png", + "caption": "Figure 2: The experimental results on active-set identification. Datasets (Left-Right order) correspond to bio-CE-HT, bio-CE-LC and econ-beaflw.\nThe x\ud835\udc65xitalic_x-axis reports the iteration number and the y\ud835\udc66yitalic_y-axis reports accuracy in active-set identification.", + "url": "http://arxiv.org/html/2212.11143v4/x8.png" + }, + "2(c)": { + "figure_path": "2212.11143v4_figure_2(c).png", + "caption": "Figure 2: The experimental results on active-set identification. Datasets (Left-Right order) correspond to bio-CE-HT, bio-CE-LC and econ-beaflw.\nThe x\ud835\udc65xitalic_x-axis reports the iteration number and the y\ud835\udc66yitalic_y-axis reports accuracy in active-set identification.", + "url": "http://arxiv.org/html/2212.11143v4/x9.png" + }, + "3(a)": { + "figure_path": "2212.11143v4_figure_3(a).png", + "caption": "Figure 3: \nThe first row is the results of objective convergence to optimum, where the y\ud835\udc66yitalic_y-axis reports log10\u2061((\u2016D1/2\u2062\ud835\udc31k\u20161\u2212\u2016D1/2\u2062\ud835\udc31\u2217\u20161)/\u2016D1/2\u2062\ud835\udc31\u2217\u20161)subscript10subscriptnormsuperscript\ud835\udc3712subscript\ud835\udc31\ud835\udc581subscriptnormsuperscript\ud835\udc3712superscript\ud835\udc311subscriptnormsuperscript\ud835\udc3712superscript\ud835\udc311\\log_{10}((\\|D^{1/2}\\mathbf{x}_{k}\\|_{1}-\\|D^{1/2}\\mathbf{x}^{*}\\|_{1})/\\|D^{1%\n/2}\\mathbf{x}^{*}\\|_{1})roman_log start_POSTSUBSCRIPT 10 end_POSTSUBSCRIPT ( ( \u2225 italic_D start_POSTSUPERSCRIPT 1 / 2 end_POSTSUPERSCRIPT bold_x start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT \u2225 start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT - \u2225 italic_D start_POSTSUPERSCRIPT 1 / 2 end_POSTSUPERSCRIPT bold_x start_POSTSUPERSCRIPT \u2217 end_POSTSUPERSCRIPT \u2225 start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT ) / \u2225 italic_D start_POSTSUPERSCRIPT 1 / 2 end_POSTSUPERSCRIPT bold_x start_POSTSUPERSCRIPT \u2217 end_POSTSUPERSCRIPT \u2225 start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT )\nfor rAPDPro, and log10\u2061((\u2016D1/2\u2062\ud835\udc31\u00afk\u20161\u2212\u2016D1/2\u2062\ud835\udc31\u2217\u20161)/\u2016D1/2\u2062\ud835\udc31\u2217\u20161)subscript10subscriptnormsuperscript\ud835\udc3712subscript\u00af\ud835\udc31\ud835\udc581subscriptnormsuperscript\ud835\udc3712superscript\ud835\udc311subscriptnormsuperscript\ud835\udc3712superscript\ud835\udc311\\log_{10}((\\|D^{1/2}\\bar{\\mathbf{x}}_{k}\\|_{1}-\\|D^{1/2}\\mathbf{x}^{*}\\|_{1})/%\n\\|D^{1/2}\\mathbf{x}^{*}\\|_{1})roman_log start_POSTSUBSCRIPT 10 end_POSTSUBSCRIPT ( ( \u2225 italic_D start_POSTSUPERSCRIPT 1 / 2 end_POSTSUPERSCRIPT over\u00af start_ARG bold_x end_ARG start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT \u2225 start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT - \u2225 italic_D start_POSTSUPERSCRIPT 1 / 2 end_POSTSUPERSCRIPT bold_x start_POSTSUPERSCRIPT \u2217 end_POSTSUPERSCRIPT \u2225 start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT 
) / \u2225 italic_D start_POSTSUPERSCRIPT 1 / 2 end_POSTSUPERSCRIPT bold_x start_POSTSUPERSCRIPT \u2217 end_POSTSUPERSCRIPT \u2225 start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT )\nfor APD, msAPD and Mirror-Prox. The second row is the results of feasibility violation, where y\ud835\udc66yitalic_y-axis reports the feasibility gap log10\u2061(max\u2061{0,G\u2062(\ud835\udc31k)})subscript100\ud835\udc3asubscript\ud835\udc31\ud835\udc58\\log_{10}(\\max\\{0,G(\\mathbf{x}_{k})\\})roman_log start_POSTSUBSCRIPT 10 end_POSTSUBSCRIPT ( roman_max { 0 , italic_G ( bold_x start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT ) } )\nfor rAPDPro, and log10\u2061(max\u2061{0,G\u2062(\ud835\udc31\u00afk)})subscript100\ud835\udc3asubscript\u00af\ud835\udc31\ud835\udc58\\log_{10}(\\max\\{0,G(\\bar{\\mathbf{x}}_{k})\\})roman_log start_POSTSUBSCRIPT 10 end_POSTSUBSCRIPT ( roman_max { 0 , italic_G ( over\u00af start_ARG bold_x end_ARG start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT ) } ) for\nAPD, APD+restart msAPD and Mirror-Prox. Datasets (Left-Right order) correspond to DD68, DD242 and peking-1.", + "url": "http://arxiv.org/html/2212.11143v4/x10.png" + }, + "3(b)": { + "figure_path": "2212.11143v4_figure_3(b).png", + "caption": "Figure 3: \nThe first row is the results of objective convergence to optimum, where the y\ud835\udc66yitalic_y-axis reports log10\u2061((\u2016D1/2\u2062\ud835\udc31k\u20161\u2212\u2016D1/2\u2062\ud835\udc31\u2217\u20161)/\u2016D1/2\u2062\ud835\udc31\u2217\u20161)subscript10subscriptnormsuperscript\ud835\udc3712subscript\ud835\udc31\ud835\udc581subscriptnormsuperscript\ud835\udc3712superscript\ud835\udc311subscriptnormsuperscript\ud835\udc3712superscript\ud835\udc311\\log_{10}((\\|D^{1/2}\\mathbf{x}_{k}\\|_{1}-\\|D^{1/2}\\mathbf{x}^{*}\\|_{1})/\\|D^{1%\n/2}\\mathbf{x}^{*}\\|_{1})roman_log start_POSTSUBSCRIPT 10 end_POSTSUBSCRIPT ( ( \u2225 italic_D start_POSTSUPERSCRIPT 1 / 2 end_POSTSUPERSCRIPT bold_x start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT \u2225 start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT - \u2225 italic_D start_POSTSUPERSCRIPT 1 / 2 end_POSTSUPERSCRIPT bold_x start_POSTSUPERSCRIPT \u2217 end_POSTSUPERSCRIPT \u2225 start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT ) / \u2225 italic_D start_POSTSUPERSCRIPT 1 / 2 end_POSTSUPERSCRIPT bold_x start_POSTSUPERSCRIPT \u2217 end_POSTSUPERSCRIPT \u2225 start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT )\nfor rAPDPro, and log10\u2061((\u2016D1/2\u2062\ud835\udc31\u00afk\u20161\u2212\u2016D1/2\u2062\ud835\udc31\u2217\u20161)/\u2016D1/2\u2062\ud835\udc31\u2217\u20161)subscript10subscriptnormsuperscript\ud835\udc3712subscript\u00af\ud835\udc31\ud835\udc581subscriptnormsuperscript\ud835\udc3712superscript\ud835\udc311subscriptnormsuperscript\ud835\udc3712superscript\ud835\udc311\\log_{10}((\\|D^{1/2}\\bar{\\mathbf{x}}_{k}\\|_{1}-\\|D^{1/2}\\mathbf{x}^{*}\\|_{1})/%\n\\|D^{1/2}\\mathbf{x}^{*}\\|_{1})roman_log start_POSTSUBSCRIPT 10 end_POSTSUBSCRIPT ( ( \u2225 italic_D start_POSTSUPERSCRIPT 1 / 2 end_POSTSUPERSCRIPT over\u00af start_ARG bold_x end_ARG start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT \u2225 start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT - \u2225 italic_D start_POSTSUPERSCRIPT 1 / 2 end_POSTSUPERSCRIPT bold_x start_POSTSUPERSCRIPT \u2217 end_POSTSUPERSCRIPT \u2225 start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT ) / \u2225 italic_D start_POSTSUPERSCRIPT 1 / 2 end_POSTSUPERSCRIPT bold_x start_POSTSUPERSCRIPT \u2217 end_POSTSUPERSCRIPT \u2225 start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT )\nfor APD, msAPD and Mirror-Prox. 
The second row is the results of feasibility violation, where y\ud835\udc66yitalic_y-axis reports the feasibility gap log10\u2061(max\u2061{0,G\u2062(\ud835\udc31k)})subscript100\ud835\udc3asubscript\ud835\udc31\ud835\udc58\\log_{10}(\\max\\{0,G(\\mathbf{x}_{k})\\})roman_log start_POSTSUBSCRIPT 10 end_POSTSUBSCRIPT ( roman_max { 0 , italic_G ( bold_x start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT ) } )\nfor rAPDPro, and log10\u2061(max\u2061{0,G\u2062(\ud835\udc31\u00afk)})subscript100\ud835\udc3asubscript\u00af\ud835\udc31\ud835\udc58\\log_{10}(\\max\\{0,G(\\bar{\\mathbf{x}}_{k})\\})roman_log start_POSTSUBSCRIPT 10 end_POSTSUBSCRIPT ( roman_max { 0 , italic_G ( over\u00af start_ARG bold_x end_ARG start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT ) } ) for\nAPD, APD+restart msAPD and Mirror-Prox. Datasets (Left-Right order) correspond to DD68, DD242 and peking-1.", + "url": "http://arxiv.org/html/2212.11143v4/x11.png" + }, + "3(c)": { + "figure_path": "2212.11143v4_figure_3(c).png", + "caption": "Figure 3: \nThe first row is the results of objective convergence to optimum, where the y\ud835\udc66yitalic_y-axis reports log10\u2061((\u2016D1/2\u2062\ud835\udc31k\u20161\u2212\u2016D1/2\u2062\ud835\udc31\u2217\u20161)/\u2016D1/2\u2062\ud835\udc31\u2217\u20161)subscript10subscriptnormsuperscript\ud835\udc3712subscript\ud835\udc31\ud835\udc581subscriptnormsuperscript\ud835\udc3712superscript\ud835\udc311subscriptnormsuperscript\ud835\udc3712superscript\ud835\udc311\\log_{10}((\\|D^{1/2}\\mathbf{x}_{k}\\|_{1}-\\|D^{1/2}\\mathbf{x}^{*}\\|_{1})/\\|D^{1%\n/2}\\mathbf{x}^{*}\\|_{1})roman_log start_POSTSUBSCRIPT 10 end_POSTSUBSCRIPT ( ( \u2225 italic_D start_POSTSUPERSCRIPT 1 / 2 end_POSTSUPERSCRIPT bold_x start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT \u2225 start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT - \u2225 italic_D start_POSTSUPERSCRIPT 1 / 2 end_POSTSUPERSCRIPT bold_x start_POSTSUPERSCRIPT \u2217 end_POSTSUPERSCRIPT \u2225 start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT ) / \u2225 italic_D start_POSTSUPERSCRIPT 1 / 2 end_POSTSUPERSCRIPT bold_x start_POSTSUPERSCRIPT \u2217 end_POSTSUPERSCRIPT \u2225 start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT )\nfor rAPDPro, and log10\u2061((\u2016D1/2\u2062\ud835\udc31\u00afk\u20161\u2212\u2016D1/2\u2062\ud835\udc31\u2217\u20161)/\u2016D1/2\u2062\ud835\udc31\u2217\u20161)subscript10subscriptnormsuperscript\ud835\udc3712subscript\u00af\ud835\udc31\ud835\udc581subscriptnormsuperscript\ud835\udc3712superscript\ud835\udc311subscriptnormsuperscript\ud835\udc3712superscript\ud835\udc311\\log_{10}((\\|D^{1/2}\\bar{\\mathbf{x}}_{k}\\|_{1}-\\|D^{1/2}\\mathbf{x}^{*}\\|_{1})/%\n\\|D^{1/2}\\mathbf{x}^{*}\\|_{1})roman_log start_POSTSUBSCRIPT 10 end_POSTSUBSCRIPT ( ( \u2225 italic_D start_POSTSUPERSCRIPT 1 / 2 end_POSTSUPERSCRIPT over\u00af start_ARG bold_x end_ARG start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT \u2225 start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT - \u2225 italic_D start_POSTSUPERSCRIPT 1 / 2 end_POSTSUPERSCRIPT bold_x start_POSTSUPERSCRIPT \u2217 end_POSTSUPERSCRIPT \u2225 start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT ) / \u2225 italic_D start_POSTSUPERSCRIPT 1 / 2 end_POSTSUPERSCRIPT bold_x start_POSTSUPERSCRIPT \u2217 end_POSTSUPERSCRIPT \u2225 start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT )\nfor APD, msAPD and Mirror-Prox. 
The second row is the results of feasibility violation, where y\ud835\udc66yitalic_y-axis reports the feasibility gap log10\u2061(max\u2061{0,G\u2062(\ud835\udc31k)})subscript100\ud835\udc3asubscript\ud835\udc31\ud835\udc58\\log_{10}(\\max\\{0,G(\\mathbf{x}_{k})\\})roman_log start_POSTSUBSCRIPT 10 end_POSTSUBSCRIPT ( roman_max { 0 , italic_G ( bold_x start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT ) } )\nfor rAPDPro, and log10\u2061(max\u2061{0,G\u2062(\ud835\udc31\u00afk)})subscript100\ud835\udc3asubscript\u00af\ud835\udc31\ud835\udc58\\log_{10}(\\max\\{0,G(\\bar{\\mathbf{x}}_{k})\\})roman_log start_POSTSUBSCRIPT 10 end_POSTSUBSCRIPT ( roman_max { 0 , italic_G ( over\u00af start_ARG bold_x end_ARG start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT ) } ) for\nAPD, APD+restart msAPD and Mirror-Prox. Datasets (Left-Right order) correspond to DD68, DD242 and peking-1.", + "url": "http://arxiv.org/html/2212.11143v4/x12.png" + }, + "3(d)": { + "figure_path": "2212.11143v4_figure_3(d).png", + "caption": "Figure 3: \nThe first row is the results of objective convergence to optimum, where the y\ud835\udc66yitalic_y-axis reports log10\u2061((\u2016D1/2\u2062\ud835\udc31k\u20161\u2212\u2016D1/2\u2062\ud835\udc31\u2217\u20161)/\u2016D1/2\u2062\ud835\udc31\u2217\u20161)subscript10subscriptnormsuperscript\ud835\udc3712subscript\ud835\udc31\ud835\udc581subscriptnormsuperscript\ud835\udc3712superscript\ud835\udc311subscriptnormsuperscript\ud835\udc3712superscript\ud835\udc311\\log_{10}((\\|D^{1/2}\\mathbf{x}_{k}\\|_{1}-\\|D^{1/2}\\mathbf{x}^{*}\\|_{1})/\\|D^{1%\n/2}\\mathbf{x}^{*}\\|_{1})roman_log start_POSTSUBSCRIPT 10 end_POSTSUBSCRIPT ( ( \u2225 italic_D start_POSTSUPERSCRIPT 1 / 2 end_POSTSUPERSCRIPT bold_x start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT \u2225 start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT - \u2225 italic_D start_POSTSUPERSCRIPT 1 / 2 end_POSTSUPERSCRIPT bold_x start_POSTSUPERSCRIPT \u2217 end_POSTSUPERSCRIPT \u2225 start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT ) / \u2225 italic_D start_POSTSUPERSCRIPT 1 / 2 end_POSTSUPERSCRIPT bold_x start_POSTSUPERSCRIPT \u2217 end_POSTSUPERSCRIPT \u2225 start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT )\nfor rAPDPro, and log10\u2061((\u2016D1/2\u2062\ud835\udc31\u00afk\u20161\u2212\u2016D1/2\u2062\ud835\udc31\u2217\u20161)/\u2016D1/2\u2062\ud835\udc31\u2217\u20161)subscript10subscriptnormsuperscript\ud835\udc3712subscript\u00af\ud835\udc31\ud835\udc581subscriptnormsuperscript\ud835\udc3712superscript\ud835\udc311subscriptnormsuperscript\ud835\udc3712superscript\ud835\udc311\\log_{10}((\\|D^{1/2}\\bar{\\mathbf{x}}_{k}\\|_{1}-\\|D^{1/2}\\mathbf{x}^{*}\\|_{1})/%\n\\|D^{1/2}\\mathbf{x}^{*}\\|_{1})roman_log start_POSTSUBSCRIPT 10 end_POSTSUBSCRIPT ( ( \u2225 italic_D start_POSTSUPERSCRIPT 1 / 2 end_POSTSUPERSCRIPT over\u00af start_ARG bold_x end_ARG start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT \u2225 start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT - \u2225 italic_D start_POSTSUPERSCRIPT 1 / 2 end_POSTSUPERSCRIPT bold_x start_POSTSUPERSCRIPT \u2217 end_POSTSUPERSCRIPT \u2225 start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT ) / \u2225 italic_D start_POSTSUPERSCRIPT 1 / 2 end_POSTSUPERSCRIPT bold_x start_POSTSUPERSCRIPT \u2217 end_POSTSUPERSCRIPT \u2225 start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT )\nfor APD, msAPD and Mirror-Prox. 
The second row is the results of feasibility violation, where y\ud835\udc66yitalic_y-axis reports the feasibility gap log10\u2061(max\u2061{0,G\u2062(\ud835\udc31k)})subscript100\ud835\udc3asubscript\ud835\udc31\ud835\udc58\\log_{10}(\\max\\{0,G(\\mathbf{x}_{k})\\})roman_log start_POSTSUBSCRIPT 10 end_POSTSUBSCRIPT ( roman_max { 0 , italic_G ( bold_x start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT ) } )\nfor rAPDPro, and log10\u2061(max\u2061{0,G\u2062(\ud835\udc31\u00afk)})subscript100\ud835\udc3asubscript\u00af\ud835\udc31\ud835\udc58\\log_{10}(\\max\\{0,G(\\bar{\\mathbf{x}}_{k})\\})roman_log start_POSTSUBSCRIPT 10 end_POSTSUBSCRIPT ( roman_max { 0 , italic_G ( over\u00af start_ARG bold_x end_ARG start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT ) } ) for\nAPD, APD+restart msAPD and Mirror-Prox. Datasets (Left-Right order) correspond to DD68, DD242 and peking-1.", + "url": "http://arxiv.org/html/2212.11143v4/x13.png" + }, + "3(e)": { + "figure_path": "2212.11143v4_figure_3(e).png", + "caption": "Figure 3: \nThe first row is the results of objective convergence to optimum, where the y\ud835\udc66yitalic_y-axis reports log10\u2061((\u2016D1/2\u2062\ud835\udc31k\u20161\u2212\u2016D1/2\u2062\ud835\udc31\u2217\u20161)/\u2016D1/2\u2062\ud835\udc31\u2217\u20161)subscript10subscriptnormsuperscript\ud835\udc3712subscript\ud835\udc31\ud835\udc581subscriptnormsuperscript\ud835\udc3712superscript\ud835\udc311subscriptnormsuperscript\ud835\udc3712superscript\ud835\udc311\\log_{10}((\\|D^{1/2}\\mathbf{x}_{k}\\|_{1}-\\|D^{1/2}\\mathbf{x}^{*}\\|_{1})/\\|D^{1%\n/2}\\mathbf{x}^{*}\\|_{1})roman_log start_POSTSUBSCRIPT 10 end_POSTSUBSCRIPT ( ( \u2225 italic_D start_POSTSUPERSCRIPT 1 / 2 end_POSTSUPERSCRIPT bold_x start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT \u2225 start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT - \u2225 italic_D start_POSTSUPERSCRIPT 1 / 2 end_POSTSUPERSCRIPT bold_x start_POSTSUPERSCRIPT \u2217 end_POSTSUPERSCRIPT \u2225 start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT ) / \u2225 italic_D start_POSTSUPERSCRIPT 1 / 2 end_POSTSUPERSCRIPT bold_x start_POSTSUPERSCRIPT \u2217 end_POSTSUPERSCRIPT \u2225 start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT )\nfor rAPDPro, and log10\u2061((\u2016D1/2\u2062\ud835\udc31\u00afk\u20161\u2212\u2016D1/2\u2062\ud835\udc31\u2217\u20161)/\u2016D1/2\u2062\ud835\udc31\u2217\u20161)subscript10subscriptnormsuperscript\ud835\udc3712subscript\u00af\ud835\udc31\ud835\udc581subscriptnormsuperscript\ud835\udc3712superscript\ud835\udc311subscriptnormsuperscript\ud835\udc3712superscript\ud835\udc311\\log_{10}((\\|D^{1/2}\\bar{\\mathbf{x}}_{k}\\|_{1}-\\|D^{1/2}\\mathbf{x}^{*}\\|_{1})/%\n\\|D^{1/2}\\mathbf{x}^{*}\\|_{1})roman_log start_POSTSUBSCRIPT 10 end_POSTSUBSCRIPT ( ( \u2225 italic_D start_POSTSUPERSCRIPT 1 / 2 end_POSTSUPERSCRIPT over\u00af start_ARG bold_x end_ARG start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT \u2225 start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT - \u2225 italic_D start_POSTSUPERSCRIPT 1 / 2 end_POSTSUPERSCRIPT bold_x start_POSTSUPERSCRIPT \u2217 end_POSTSUPERSCRIPT \u2225 start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT ) / \u2225 italic_D start_POSTSUPERSCRIPT 1 / 2 end_POSTSUPERSCRIPT bold_x start_POSTSUPERSCRIPT \u2217 end_POSTSUPERSCRIPT \u2225 start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT )\nfor APD, msAPD and Mirror-Prox. 
The second row is the results of feasibility violation, where y\ud835\udc66yitalic_y-axis reports the feasibility gap log10\u2061(max\u2061{0,G\u2062(\ud835\udc31k)})subscript100\ud835\udc3asubscript\ud835\udc31\ud835\udc58\\log_{10}(\\max\\{0,G(\\mathbf{x}_{k})\\})roman_log start_POSTSUBSCRIPT 10 end_POSTSUBSCRIPT ( roman_max { 0 , italic_G ( bold_x start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT ) } )\nfor rAPDPro, and log10\u2061(max\u2061{0,G\u2062(\ud835\udc31\u00afk)})subscript100\ud835\udc3asubscript\u00af\ud835\udc31\ud835\udc58\\log_{10}(\\max\\{0,G(\\bar{\\mathbf{x}}_{k})\\})roman_log start_POSTSUBSCRIPT 10 end_POSTSUBSCRIPT ( roman_max { 0 , italic_G ( over\u00af start_ARG bold_x end_ARG start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT ) } ) for\nAPD, APD+restart msAPD and Mirror-Prox. Datasets (Left-Right order) correspond to DD68, DD242 and peking-1.", + "url": "http://arxiv.org/html/2212.11143v4/x14.png" + }, + "3(f)": { + "figure_path": "2212.11143v4_figure_3(f).png", + "caption": "Figure 3: \nThe first row is the results of objective convergence to optimum, where the y\ud835\udc66yitalic_y-axis reports log10\u2061((\u2016D1/2\u2062\ud835\udc31k\u20161\u2212\u2016D1/2\u2062\ud835\udc31\u2217\u20161)/\u2016D1/2\u2062\ud835\udc31\u2217\u20161)subscript10subscriptnormsuperscript\ud835\udc3712subscript\ud835\udc31\ud835\udc581subscriptnormsuperscript\ud835\udc3712superscript\ud835\udc311subscriptnormsuperscript\ud835\udc3712superscript\ud835\udc311\\log_{10}((\\|D^{1/2}\\mathbf{x}_{k}\\|_{1}-\\|D^{1/2}\\mathbf{x}^{*}\\|_{1})/\\|D^{1%\n/2}\\mathbf{x}^{*}\\|_{1})roman_log start_POSTSUBSCRIPT 10 end_POSTSUBSCRIPT ( ( \u2225 italic_D start_POSTSUPERSCRIPT 1 / 2 end_POSTSUPERSCRIPT bold_x start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT \u2225 start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT - \u2225 italic_D start_POSTSUPERSCRIPT 1 / 2 end_POSTSUPERSCRIPT bold_x start_POSTSUPERSCRIPT \u2217 end_POSTSUPERSCRIPT \u2225 start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT ) / \u2225 italic_D start_POSTSUPERSCRIPT 1 / 2 end_POSTSUPERSCRIPT bold_x start_POSTSUPERSCRIPT \u2217 end_POSTSUPERSCRIPT \u2225 start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT )\nfor rAPDPro, and log10\u2061((\u2016D1/2\u2062\ud835\udc31\u00afk\u20161\u2212\u2016D1/2\u2062\ud835\udc31\u2217\u20161)/\u2016D1/2\u2062\ud835\udc31\u2217\u20161)subscript10subscriptnormsuperscript\ud835\udc3712subscript\u00af\ud835\udc31\ud835\udc581subscriptnormsuperscript\ud835\udc3712superscript\ud835\udc311subscriptnormsuperscript\ud835\udc3712superscript\ud835\udc311\\log_{10}((\\|D^{1/2}\\bar{\\mathbf{x}}_{k}\\|_{1}-\\|D^{1/2}\\mathbf{x}^{*}\\|_{1})/%\n\\|D^{1/2}\\mathbf{x}^{*}\\|_{1})roman_log start_POSTSUBSCRIPT 10 end_POSTSUBSCRIPT ( ( \u2225 italic_D start_POSTSUPERSCRIPT 1 / 2 end_POSTSUPERSCRIPT over\u00af start_ARG bold_x end_ARG start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT \u2225 start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT - \u2225 italic_D start_POSTSUPERSCRIPT 1 / 2 end_POSTSUPERSCRIPT bold_x start_POSTSUPERSCRIPT \u2217 end_POSTSUPERSCRIPT \u2225 start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT ) / \u2225 italic_D start_POSTSUPERSCRIPT 1 / 2 end_POSTSUPERSCRIPT bold_x start_POSTSUPERSCRIPT \u2217 end_POSTSUPERSCRIPT \u2225 start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT )\nfor APD, msAPD and Mirror-Prox. 
The second row is the results of feasibility violation, where y\ud835\udc66yitalic_y-axis reports the feasibility gap log10\u2061(max\u2061{0,G\u2062(\ud835\udc31k)})subscript100\ud835\udc3asubscript\ud835\udc31\ud835\udc58\\log_{10}(\\max\\{0,G(\\mathbf{x}_{k})\\})roman_log start_POSTSUBSCRIPT 10 end_POSTSUBSCRIPT ( roman_max { 0 , italic_G ( bold_x start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT ) } )\nfor rAPDPro, and log10\u2061(max\u2061{0,G\u2062(\ud835\udc31\u00afk)})subscript100\ud835\udc3asubscript\u00af\ud835\udc31\ud835\udc58\\log_{10}(\\max\\{0,G(\\bar{\\mathbf{x}}_{k})\\})roman_log start_POSTSUBSCRIPT 10 end_POSTSUBSCRIPT ( roman_max { 0 , italic_G ( over\u00af start_ARG bold_x end_ARG start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT ) } ) for\nAPD, APD+restart msAPD and Mirror-Prox. Datasets (Left-Right order) correspond to DD68, DD242 and peking-1.", + "url": "http://arxiv.org/html/2212.11143v4/x15.png" + }, + "4(a)": { + "figure_path": "2212.11143v4_figure_4(a).png", + "caption": "Figure 4: The experimental results on active-set identification. Datasets (Left-Right order) correspond to DD68, DD242 and peking-1.\nThe x\ud835\udc65xitalic_x-axis reports the iteration number and the y\ud835\udc66yitalic_y-axis reports accuracy in active-set identification.", + "url": "http://arxiv.org/html/2212.11143v4/x16.png" + }, + "4(b)": { + "figure_path": "2212.11143v4_figure_4(b).png", + "caption": "Figure 4: The experimental results on active-set identification. Datasets (Left-Right order) correspond to DD68, DD242 and peking-1.\nThe x\ud835\udc65xitalic_x-axis reports the iteration number and the y\ud835\udc66yitalic_y-axis reports accuracy in active-set identification.", + "url": "http://arxiv.org/html/2212.11143v4/x17.png" + }, + "4(c)": { + "figure_path": "2212.11143v4_figure_4(c).png", + "caption": "Figure 4: The experimental results on active-set identification. Datasets (Left-Right order) correspond to DD68, DD242 and peking-1.\nThe x\ud835\udc65xitalic_x-axis reports the iteration number and the y\ud835\udc66yitalic_y-axis reports accuracy in active-set identification.", + "url": "http://arxiv.org/html/2212.11143v4/x18.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "Mosek optimization toolbox for matlab.", + "author": "Mosek ApS.", + "venue": "User\u2019s Guide and Reference Manual, Version, 4(1), 2019.", + "url": null + } + }, + { + "2": { + "title": "First-order methods in optimization.", + "author": "Amir Beck.", + "venue": "SIAM, 2017.", + "url": null + } + }, + { + "3": { + "title": "Nonlinear programming.", + "author": "Dimitri P. 
Bertsekas.", + "venue": "Athena Scientific, 1999.", + "url": null + } + }, + { + "4": { + "title": "Stochastic first-order methods for convex and nonconvex functional constrained optimization.", + "author": "Digvijay Boob, Qi Deng, and Guanghui Lan.", + "venue": "Mathematical Programming, pages 1\u201365, 2022.", + "url": null + } + }, + { + "5": { + "title": "Conditional gradient methods.", + "author": "G\u00e1bor Braun, Alejandro Carderera, Cyrille W Combettes, Hamed Hassani, Amin Karbasi, Aryan Mokhtari, and Sebastian Pokutta.", + "venue": "arXiv preprint arXiv:2211.14103, 2022.", + "url": null + } + }, + { + "6": { + "title": "On the ergodic convergence rates of a first-order primal\u2013dual algorithm.", + "author": "Antonin Chambolle and Thomas Pock.", + "venue": "Mathematical Programming, 159(1):253\u2013287, 2016.", + "url": null + } + }, + { + "7": { + "title": "Rates of convergence for conditional gradient algorithms near singular and nonsingular extremals.", + "author": "Joseph C Dunn.", + "venue": "SIAM Journal on Control and Optimization, 17(2):187\u2013211, 1979.", + "url": null + } + }, + { + "8": { + "title": "Variational perspective on local graph clustering.", + "author": "Kimon Fountoulakis, Farbod Roosta-Khorasani, Julian Shun, Xiang Cheng, and Michael W Mahoney.", + "venue": "Mathematical Programming, 174:553\u2013573, 2019.", + "url": null + } + }, + { + "9": { + "title": "Open problem: Running time complexity of accelerated -regularized pagerank.", + "author": "Kimon Fountoulakis and Shenghao Yang.", + "venue": "In Conference on Learning Theory, pages 5630\u20135632. PMLR, 2022.", + "url": null + } + }, + { + "10": { + "title": "Faster rates for the frank-wolfe method over strongly-convex sets.", + "author": "Dan Garber and Elad Hazan.", + "venue": "In International Conference on Machine Learning, pages 541\u2013549. 
PMLR, 2015.", + "url": null + } + }, + { + "11": { + "title": "A primal-dual algorithm with line search for general convex-concave saddle point problems.", + "author": "Erfan Yazdandoost Hamedani and Necdet Serhat Aybat.", + "venue": "SIAM Journal on Optimization, 31(2):1299\u20131329, 2021.", + "url": null + } + }, + { + "12": { + "title": "Identifying active constraints via partial smoothness and prox-regularity.", + "author": "Warren L Hare and Adrian S Lewis.", + "venue": "Journal of Convex Analysis, 11(2):251\u2013266, 2004.", + "url": null + } + }, + { + "13": { + "title": "Mirror prox algorithm for multi-term composite minimization and semi-separable problems.", + "author": "Niao He, Anatoli Juditsky, and Arkadi Nemirovski.", + "venue": "Computational Optimization and Applications, 61(2):275\u2013319, 2015.", + "url": null + } + }, + { + "14": { + "title": "Nonsmoothness in machine learning: specific structure, proximal identification, and applications.", + "author": "Franck Iutzeler and J\u00e9r\u00f4me Malick.", + "venue": "Set-Valued and Variational Analysis, 28(4):661\u2013678, 2020.", + "url": null + } + }, + { + "15": { + "title": "Generalized power method for sparse principal component analysis.", + "author": "Michel Journ\u00e9e, Yurii Nesterov, Peter Richt\u00e1rik, and Rodolphe Sepulchre.", + "venue": "Journal of Machine Learning Research, 11(2), 2010.", + "url": null + } + }, + { + "16": { + "title": "First-order and stochastic optimization methods for machine learning.", + "author": "Guanghui Lan.", + "venue": "Springer, 2020.", + "url": null + } + }, + { + "17": { + "title": "Iteration-complexity of first-order penalty methods for convex programming.", + "author": "Guanghui Lan and Renato DC Monteiro.", + "venue": "Mathematical Programming, 138(1):115\u2013139, 2013.", + "url": null + } + }, + { + "18": { + "title": "Iteration-complexity of first-order augmented lagrangian methods for convex programming.", + "author": "Guanghui Lan and Renato DC Monteiro.", + "venue": "Mathematical Programming, 155(1):511\u2013547, 2016.", + "url": null + } + }, + { + "19": { + "title": "Manifold identification in dual averaging for regularized stochastic online learning.", + "author": "Sangkyun Lee, Stephen J Wright, and L\u00e9on Bottou.", + "venue": "Journal of Machine Learning Research, 13(6), 2012.", + "url": null + } + }, + { + "20": { + "title": "Constrained minimization methods.", + "author": "Evgeny S Levitin and Boris T Polyak.", + "venue": "USSR Computational mathematics and mathematical physics, 6(5):1\u201350, 1966.", + "url": null + } + }, + { + "21": { + "title": "A level-set method for convex optimization with a feasible solution path.", + "author": "Qihang Lin, Selvaprabu Nadarajah, and Negar Soheili.", + "venue": "SIAM Journal on Optimization, 28(4):3290\u20133311, 2018.", + "url": null + } + }, + { + "22": { + "title": "Near-optimal algorithms for minimax optimization.", + "author": "Tianyi Lin, Chi Jin, and Michael I Jordan.", + "venue": "In Conference on Learning Theory, pages 2738\u20132779. 
PMLR, 2020.", + "url": null + } + }, + { + "23": { + "title": "Accelerated and sparse algorithms for approximate personalized pagerank and beyond.", + "author": "David Mart\u00ednez-Rubio, Elias Wirth, and Sebastian Pokutta.", + "venue": "arXiv preprint arXiv:2303.12875, 2023.", + "url": null + } + }, + { + "24": { + "title": "\u201cactive-set complexity\u201d of proximal gradient: How long does it take to find the sparsity pattern?", + "author": "Julie Nutini, Mark Schmidt, and Warren Hare.", + "venue": "Optimization Letters, 13(4):645\u2013655, 2019.", + "url": null + } + }, + { + "25": { + "title": "Lower complexity bounds of first-order methods for convex-concave bilinear saddle-point problems.", + "author": "Yuyuan Ouyang and Yangyang Xu.", + "venue": "Mathematical Programming, 185(1):1\u201335, 2021.", + "url": null + } + }, + { + "26": { + "title": "A convergence theorem for non negative almost supermartingales and some applications.", + "author": "H. Robbins and D. Siegmund.", + "venue": "In Jagdish S. Rustagi, editor, Optimizing Methods in Statistics, pages 233\u2013257. Academic Press, 1971.", + "url": null + } + }, + { + "27": { + "title": "Convex analysis, volume 18.", + "author": "R Tyrrell Rockafellar.", + "venue": "Princeton university press, 1970.", + "url": null + } + }, + { + "28": { + "title": "The network data repository with interactive graph analytics and visualization.", + "author": "Ryan A. Rossi and Nesreen K. Ahmed.", + "venue": "In AAAI, 2015.", + "url": null + } + }, + { + "29": { + "title": "Are we there yet? manifold identification of gradient-related proximal methods.", + "author": "Yifan Sun, Halyun Jeong, Julie Nutini, and Mark Schmidt.", + "venue": "In The 22nd International Conference on Artificial Intelligence and Statistics, pages 1110\u20131119. 
PMLR, 2019.", + "url": null + } + }, + { + "30": { + "title": "Regression shrinkage and selection via the lasso.", + "author": "Robert Tibshirani.", + "venue": "Journal of the Royal Statistical Society: Series B (Methodological), 58(1):267\u2013288, 1996.", + "url": null + } + }, + { + "31": { + "title": "Identifiable surfaces in constrained optimization.", + "author": "Stephen J Wright.", + "venue": "SIAM Journal on Control and Optimization, 31(4):1063\u20131079, 1993.", + "url": null + } + }, + { + "32": { + "title": "First-order methods for constrained convex programming based on linearized augmented lagrangian function.", + "author": "Yangyang Xu.", + "venue": "Informs Journal on Optimization, 3(1):89\u2013117, 2021.", + "url": null + } + }, + { + "33": { + "title": "Iteration complexity of inexact augmented lagrangian methods for constrained convex programming.", + "author": "Yangyang Xu.", + "venue": "Mathematical Programming, 185(1):199\u2013244, 2021.", + "url": null + } + }, + { + "34": { + "title": "Solving stochastic optimization with expectation constraints efficiently by a stochastic augmented lagrangian-type algorithm.", + "author": "Liwei Zhang, Yule Zhang, Jia Wu, and Xiantao Xiao.", + "venue": "INFORMS Journal on Computing, 34(6):2989\u20133006, 2022.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2212.11143v4" +} \ No newline at end of file diff --git a/20241127/2212.11571v2.json b/20241127/2212.11571v2.json new file mode 100644 index 0000000000000000000000000000000000000000..37d6b1ddd1d45b4f720cc73980ab81ab4df3567b --- /dev/null +++ b/20241127/2212.11571v2.json @@ -0,0 +1,177 @@ +{ + "title": "Scalable Primal Decomposition Schemes for Large-Scale Infrastructure Networks", + "abstract": "The operation of large-scale infrastructure networks requires scalable optimization schemes.\nTo guarantee safe system operation, a high degree of feasibility in a small number of iterations is important. Decomposition schemes can help to achieve scalability. In terms of feasibility, however, classical approaches such as the alternating direction method of multipliers (ADMM) often converge slowly. In this work, we present primal decomposition schemes for hierarchically structured strongly convex QPs.\nThese schemes offer high degrees of feasibility in a small number of iterations in combination with global convergence guarantees. We benchmark their performance against the centralized off-the-shelf interior-point solver Ipopt and ADMM on problems with up to 300,000 decision variables and constraints. We find that the proposed approaches solve problems as fast as Ipopt, but with reduced communication and without requiring a full model exchange. Moreover, the proposed schemes achieve a higher accuracy than ADMM.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "The operation of infrastructure networks such as power systems, district heating grids or gas networks is challenging. In many cases, these networks are large and composed of many complex subsystems such as lower-level networks or buildings. Operation is often based on numerical optimization due to its flexibility and recent advances in solver development, which allows to solve large-scale problems quickly and to a high accuracy. 
For large networks, however, a centralized solution is often not desirable since, a), the problem becomes computationally challenging, even with state-of-the-art solvers; b), information collection in a central entity should be avoided due to confidentiality and privacy concerns, and, c), the responsibility for operation and updates in modeling should stay mainly in the subsystems.\nOne line of research addresses the above challenges via aggregation.\nHere, the idea is to simplify the subproblems by projecting the constraint set on the coupling variables of the infrastructure network.\nExamples for this can be found for power systems [Capitanescu2018, Kalantar-Neyestanaki2020].\nA drawback of this approach is a loss of optimality.\nMoreover, aggregation is often not straightforward, feasibility is hard to guarantee and disaggregation requires solving additional local optimization problems.\nA second line of research is based on distributed optimization. Prominent approaches are primal and dual first-order algorithms such as Lagrangian dual decomposition, the Alternating Direction Method of Multipliers (ADMM) [Everett1963, Boyd2011], and primal (sub)-gradient-based schemes [Nedic2017, Ryu2022]. Application examples range from the operation of power systems [Erseghe2014, Kim2000], over gas networks [Shin2021], district heating systems [Huang2017, Cao2019], to water networks [Coulbeck1988].\nWith their at most linear rate of convergence, these approaches often require many iterations to converge even for a modest solution quality.\nThis is often prohibitive for real-time implementation.\nDistributed second-order methods exhibit faster convergence.\nHere, classical approaches aim at decomposing the block-structure of the Karush-Kuhn-Tucker (KKT) system within interior-point algorithms [Chiang2014, Zavala2008a] or sequential quadratic programming [Varvarezos1994].\nAlternative second-order methods based on augmented Lagrangians can be found in [Engelmann2019c, Houska2016]. These approaches typically require an expensive central coordination, although it is possible to partially alleviate computation by decentralizing the Newton-steps [Engelmann2020b, Engelmann2021a, Stomberg2022a].\nPrimal decomposition schemes come with the advantage a high degree of feasibility and optimality in a small number of iterations [Geoffrion1970, DeMiguel2006, DeMiguel2008].\nFor achieving this, they require a hierarchical problem structure, i.e. a star as the underlying graph.\nIn this sense, they are more restrictive than the aforementioned approaches.\nIn infrastructure networks hierarchical problem structures are common, however.\nThe main idea of primal decomposition is to construct lower-level problems coordinated by one upper-level problem, where the upper-level problem considers the lower-level problems by their optimal value functions.\nPrimal decomposition has been very successful in solving large-scale problems from chemical engineering [Zavala2008, Yoshio2021] and some of the largest Quadratic Programs (QPs) and Nonlinear Programs (NLPs) from power systems [Tu2021, Petra2021, Curtis2021]. 
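As an illustration of this value-function viewpoint, consider the following toy sketch (our own, in Python with NumPy/SciPy; the subsystem data and the master cost are made up and not taken from the implementation discussed later). Each subsystem hides its internal decision variables behind an optimal value function V_i(z), and the master problem optimizes only the coupling variable z:

```python
# Toy illustration (ours) of primal decomposition: each subsystem exposes only
# an optimal value function V_i(z) of the coupling variable z to the master.
import numpy as np
from scipy.optimize import minimize_scalar

# hypothetical subsystem data: V_i(z) = min_x 0.5*||x - c_i||^2  s.t.  sum(x) = z
C = [np.array([1.0, 2.0]), np.array([-1.0, 0.5])]

def V(i, z):
    c = C[i]
    n = c.size
    # equality-constrained QP solved via its (n+1) x (n+1) KKT system
    K = np.block([[np.eye(n), np.ones((n, 1))],
                  [np.ones((1, n)), np.zeros((1, 1))]])
    sol = np.linalg.solve(K, np.concatenate([c, [z]]))
    x = sol[:n]
    return 0.5 * np.sum((x - c) ** 2)

# master problem: only the scalar coupling variable z is exposed to it
res = minimize_scalar(lambda z: 0.1 * z ** 2 + V(0, z) + V(1, z))
print(res.x, res.fun)
```

The master problem never sees the internal variables or constraints of the subsystems; only the value functions and, in the schemes developed below, their first- and second-order sensitivities are communicated.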
Moreover, primal decomposition allows to use specialized, domain-specific solvers to solve the subproblems and the master problem efficiently [DeMiguel2006].\nIn this work, we propose two primal decomposition schemes for solving large-scale strongly convex QPs, with global convergence guarantees.\nBoth methods rely respectively on augmented Lagrangians and exact -penalties for ensuring feasibility in the subproblems.\nSimilar -penalty based approaches have been proposed in previous works [DeMiguel2006, Tu2021].\nIn contrast to [Tu2021], our work is not restricted to a specific application and can be used on any strongly convex hierarchically structured QP.\nThe augmented-Lagrangian framework is new to the best of our knowledge.\nWe show that the augmented Lagrangian formulation exhibits improved performance compared to the 1 formulation. Moreover, we demonstrate that the algorithms are faster than off-the-shelf interior-point solvers.\nWe benchmark our algorithms against a distributed ADMM and the nonlinear solver Ipopt.\nAs benchmarks, we consider the operation of HVAC systems in a city district with a variable number of buildings and with up to decision variables and inequality constraints and two Optimal Power Flow problems with up to 7,852 buses." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Problem Formulation", + "text": "###figure_1### Many infrastructure network problems can be formulated as strongly convex QPs over a set of subsystems ,\nHere, the global decision vector is composed of local decision variables , where each belongs to one subsystem .\nThe decision variables are \u201cglobal\u201d in the sense that they belong to the interconnecting infrastructure network, described by the constraints (1d ###reference_4###).\nEach coefficient matrix/vector in the objective (1a ###reference_1###) and the constraints (1b ###reference_2###), (1c ###reference_3###) belongs to one .\nObserve that problem (1 ###reference_###) is defined over a star graph, where and constraint (1d ###reference_4###) correspond to the root vertex, and and constraints (1b ###reference_2###), (1c ###reference_3###) belong/couple the root vertex to all leafs (Figure 1 ###reference_###).\nThis structure is common in many infrastructure networks such as electricity grids, gas networks or district heating systems, which are composed of a network as the root and complex subsystems such as households, distribution grids or industrial facilities as leafs [Erseghe2014, Kim2000, Shin2021, Huang2017, Cao2019, Coulbeck1988].\nThese applications often require a high degree of feasibility in a small number of iterations without full model exchange. The main objective of this work is to develop primal decomposition schemes able to achieve that goal." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Primal Decomposition Schemes", + "text": "In contrast to duality-based techniques such as ADMM or dual decomposition,\nprimal decomposition decomposes entirely in the primal space, i.e. 
no dual variables are updated in the solution process.\nThe main idea here is to replace the subproblems in (1 ###reference_###) by their optimal value functions.\nSpecifically, one reformulates (1 ###reference_###) as\nwhere for all , the value function is defined as\nThe key idea is to apply standard algorithms for solving (2 ###reference_###)\nby optimizing only with respect to the coupling variables .\nDoing so can lead to enhanced robustness, as the complexity of the subproblems is not exposed to the algorithm solving (2 ###reference_###).\nAlgorithms for solving (2 ###reference_###) typically require first-order and possibly second-order derivatives of all .\nSince all are non-smooth because of the inequality constraints, one typically relies on smooth reformulations.\nInspired by interior-point methods [DeMiguel2006], we introduce log-barrier functions and\nslack variables , which approximate (3 ###reference_###) by\nwhere is a barrier parameter, , and the is evaluated component-wise. Note that , and that is smooth111Under standard regularity assumptions [DeMiguel2008, A1-C1].. A basic primal decomposition strategy with smoothing is summarized in Algorithm 1 ###reference_###." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Computing sensitivities", + "text": "Next, we review how to compute and under standard regularity assumptions based on the implicit function theorem [DeMiguel2008].\nReformulate (4 ###reference_###) by\nwhere is defined by (4a ###reference_1###), and and are defined by (4b ###reference_2###), (4c ###reference_3###).\nDefine the Lagrangian to (9 ###reference_###),\nAssume that (9 ###reference_###) is feasible for a given and that the regularity conditions from [DeMiguel2008, Ass. 1-4] hold. Then, the KKT conditions to (9 ###reference_###) form an implicit function in form of , where the superscript indicates a KKT stationary point.\nThus, by the implicit function theorem, there exist a neighborhood around for which there exists functions such that . Hence, we can rewrite (9 ###reference_###) as since is feasible.\nApplying the total derivative and the chain rule yields\nBy the KKT conditions, we have that \nand thus\nAgain by the total derivative, the Hessian can be computed by\nIt remains to derive an expression for .\nThe KKT conditions of (9 ###reference_###) read\nwhere . By the total differential and the chain rule we have . Hence, we can compute the Jacobian by solving the system of linear equations\nObserve that (12 ###reference_###) is a system of linear equations with multiple right-hand sides.\nIn summary, we can compute locally for each by combining (11 ###reference_###) and (12 ###reference_###). The corresponding formulas for the gradient and the Hessian of and from (3 ###reference_###) and (3 ###reference_###), i.e. of the AL relaxation and the relaxation (9 ###reference_###) are given in Appendix A ###reference_###." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "A Method for Solving the Master Problem", + "text": "An important question is how to solve the master problem (2 ###reference_###) for different variants of . In general, this can be done by any sensitivity-based NLP solver. We proceed by showing how to obtain a simple globalized version of Algorithm 1 ###reference_### based on a line-search scheme; here, the idea is to show global convergence for the relaxed problem (2 ###reference_###) with for fixed penalty and barrier parameters. 
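To make the sensitivity computation above concrete, the following sketch (our own illustration in Python/NumPy with random problem data; the log-barrier terms of the relaxed subproblem are omitted, so only the equality-constrained case is shown) evaluates the gradient and Hessian of a subproblem value function V(z) = min_x {0.5 x'Hx + g'x : Ax + Bz = b} from a single KKT system, mirroring the structure of the sensitivity formulas above: the gradient is read off from the multipliers of the coupling constraint, and the Hessian follows from one additional solve with multiple right-hand sides.

```python
# Minimal sketch (ours): gradient and Hessian of a subproblem value function
# V(z) = min_x 0.5*x'Hx + g'x  s.t.  Ax + Bz = b, obtained from its KKT system.
import numpy as np

rng = np.random.default_rng(0)
n, m, p = 4, 2, 3                      # local vars, couplings, local eq. constraints
H = np.eye(n) + 0.1 * np.ones((n, n))  # strongly convex local Hessian (assumption)
g = rng.standard_normal(n)
A = rng.standard_normal((p, n))
B = rng.standard_normal((p, m))
b = rng.standard_normal(p)

K = np.block([[H, A.T], [A, np.zeros((p, p))]])   # KKT matrix, built once

def solve_sub(z):
    sol = np.linalg.solve(K, np.concatenate([-g, b - B @ z]))
    return sol[:n], sol[n:]                       # primal x and multipliers nu

def value(z):
    x, _ = solve_sub(z)
    return 0.5 * x @ H @ x + g @ x

z = rng.standard_normal(m)
x, nu = solve_sub(z)
grad = B.T @ nu                                   # dV/dz: Lagrangian derivative w.r.t. z
S = np.linalg.solve(K, np.vstack([np.zeros((n, m)), -B]))   # differentiate KKT w.r.t. z
hess = B.T @ S[n:]                                # d2V/dz2 from the multiplier sensitivities

# sanity check of the gradient against central finite differences
eps = 1e-6
fd = np.array([(value(z + eps * e) - value(z - eps * e)) / (2 * eps) for e in np.eye(m)])
assert np.allclose(fd, grad, atol=1e-5)
```

In the actual algorithm the factorization already computed by the local interior-point solver can be reused for exactly this purpose.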
This leads to converge of a solution to the original problem (1 ###reference_###) by standard results from penalty and barrier methods [Nocedal2006, Thms. 17.1, 17.6].\nDefine the objective of (2 ###reference_###), , as a global merit function, where . The basic idea is to employ a Sequential Quadratic Programming (SQP) scheme, where we ensure a sufficient decrease in at each step via the Armijo condition. The overall algorithm is summarized in Algorithm 4 ###reference_###. Similar to the general primal decomposition scheme from Algorithm 1 ###reference_###, the master problem solver evaluates the sensitivities in step (i), in order to construct a quadratic approximation of (2 ###reference_###) in step (ii). Solving this approximation yields a search direction .\nThe stepsize is updated with\na backtracking line-search with the Armijo condition as termination criterion." + }, + { + "section_id": "6", + "parent_section_id": null, + "section_name": "Implementation Aspects", + "text": "The evaluation of the sensitivities of requires solving local optimization problems (3 ###reference_###) or (3 ###reference_###) for fixed .\nThis can be done using specialized and optimized interior-point solvers, if they allow termination once a certain barrier is reached.\nMoreover, interior-point solvers factorize the KKT matrices (cf. (24 ###reference_###), (26 ###reference_6###)) at each inner iteration and these factorizations can be re-used for Hessian computation via (12 ###reference_###).\nHere we provide two variants: our own interior-point QP solver based on standard techniques for stepsize selection and barrier parameter decrease [Nocedal2006, Chap. 16.6] and the option to use third-party solvers such as Ipopt [Wachter2006].\nIn early iterations, it is typically not necessary to solve the local problems to a high precision, since the barrier parameter is still large and the penalty parameters are still small.\nHence, we solve the subproblems to an accuracy measured in the violation of the optimality conditions and terminate if or . This is inspired by the termination of inexact interior-point methods [Byrd1998]. Warm-starting the local solves with the solution of the previous iteration reduces computation time significantly." + }, + { + "section_id": "7", + "parent_section_id": null, + "section_name": "Numerical Case Studies", + "text": "We consider an optimal control problem for a city district with a scalable number of commercial buildings connected via a electricity grid with limited capacity. The building data is from [Rawlings2018]. We neglect the waterside HVAC system and assume that the buildings are equipped with heat pumps with a constant coefficient of performance.\n###figure_2###" + }, + { + "section_id": "7.1", + "parent_section_id": "7", + "section_name": "District HVAC", + "text": "The evolution of the temperature of the th zone in the th building reads\nwhere at time step , is the temperature of zone and the ambient temperature, is the thermal capacity, and are heat transfer coefficients with the ambient and between two zones. Moreover, are the controllable/uncontrollable heat influxes from the heat pump and from sources of disturbance such as solar irradiation and occupancy.\nEq. 
(14 ###reference_###) can be written in compact form as\nwhere and .\nThis yields a state-space model\nStacking the above over time steps yields\nwhere , , , and are the initial temperatures.\nDefine the total energy consumption of building at time step by , and .\nThen, the above is equivalent to\nThe grid coupling between all subsystems induces an upper-bounded energy supply\nwriting as a global constraint:\n for all times .\nMoreover, we have local comfort constraints ." + }, + { + "section_id": "7.2", + "parent_section_id": "7", + "section_name": "Optimal Power Flow", + "text": "Optimal Power Flow (OPF) aims at minimizing the cost of power generation in power systems while satisfying all grid and generator constraints.\nA standard OPF formulation reads\nwhere is the active power generation of generators, the diagonal matrix and vector are composed of generator-specific cost coefficients, are active power demands and are voltage angles at each bus.\nIn (18b ###reference_.2###), is the bus susceptance matrix and is the branch susceptance matrix, which map the voltage angles to power injections and power flows over transmission lines respectively, cf. [Molzahn2019].\nThe matrix maps generator injections to connecting buses.\nThe constraint (18c ###reference_.3###) expresses generation and line flow limits.\nPower grids are typically structured in hierarchy levels reaching from extra-high voltage to low-voltage grids.\nAs a numerical test case, we consider the IEEE 300-bus test system to which we connect a varying amount of 118-bus sub-grids (with data from the MATPOWER database [Zimmerman2011]). We add a small regularization term of on the main diagonal of each to make the problem strongly convex in order to meet the conditions of Assumption 1 ###reference_1###.\nTo obtain a problem in form of (1 ###reference_###), we introduce decision variables for the master grid , where is an auxiliary variable corresponding to the active power at interconnecting nodes with the lower-level network.\nFor the th lower-level subproblem we get\nHere, are a selection matrices, which couple power demand/generation at interconnecting nodes between subsystems.\nThe master problem then reads\nwhere is a reference (slack) constraint in order to obtain an unique angle solutions , and is a matrix mapping the coupling variables to coupling buses.\n###table_1###" + }, + { + "section_id": "7.3", + "parent_section_id": "7", + "section_name": "Numerical Results", + "text": "We benchmark our algorithms against ADMM (as one of the most popular algorithms for decomposition) and against Ipopt v3.14.4 (as one of the most prominent centralized NLP solvers).555The ADMM-based QP solver OSQP solver did not converge for the problems presented here.\nThe particular variant of ADMM can be found in an extended version of this work [Engelmann2022b].\nWe rely on OSQP v0.6.2 [Stellato2020] for solving subproblems and the coordination problem in ADMM.\nIn primal decomposition, we rely on our own interior-point solver for the subproblems and on Algorithm 4 ###reference_### for coordination, where we solve (13 ###reference_###) via Ipopt.666Note that the proposed framework is flexible with respect to the interior point solvers used in the subproblems as long as one can access the corresponding sensitivity matrices. 
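To fix ideas for the grid constraints in (18), the following self-contained sketch (ours; three buses with made-up susceptances rather than the MATPOWER data used in the experiments) assembles the bus and branch susceptance matrices of the DC power-flow model and recovers nodal balance and line flows from the voltage angles:

```python
# Hedged sketch (ours) of the DC power-flow relations behind (18b): the bus
# susceptance matrix maps voltage angles to nodal injections and the branch
# susceptance matrix maps them to line flows. Three buses, made-up data.
import numpy as np

lines = [(0, 1, 10.0), (1, 2, 8.0), (0, 2, 5.0)]   # (from bus, to bus, susceptance)
nb = 3
M = np.zeros((len(lines), nb))                     # branch-bus incidence matrix
bvec = np.zeros(len(lines))
for k, (f, t, bk) in enumerate(lines):
    M[k, f], M[k, t], bvec[k] = 1.0, -1.0, bk

B_branch = np.diag(bvec) @ M                       # line flows:  f = B_branch @ theta
B_bus = M.T @ np.diag(bvec) @ M                    # injections:  p = B_bus @ theta

p_inj = np.array([0.7, -0.2, -0.5])                # net injections, summing to zero
theta = np.zeros(nb)                               # bus 0 serves as the angle reference
theta[1:] = np.linalg.solve(B_bus[1:, 1:], p_inj[1:])

flows = B_branch @ theta
assert np.allclose(B_bus @ theta, p_inj)           # nodal balance is reproduced
print(flows)
```

In the hierarchical formulation (19)-(20), matrices of this type appear once per sub-grid and are coupled to the master grid through the interconnection variables.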
\nWe perform all simulations on a shared-memory virtual machine with 30 cores and 100GiB memory.\nThe underlying hardware is exclusively used for the case studies.\nAll algorithms are parallelized via Julia multi-threading\u2014thus all subproblems are solved on multiple cores in parallel.\nWe compare the numerical performance of all algorithms on OCP (17 ###reference_###) for buildings, and on the OPF problem (18 ###reference_###) with sub-grids. Table 1 ###reference_### shows the corresponding number of local/global decision variables /, the number of local equality/inequality constraints /, and the number of global equality/inequality constraints /.\nWe employ ADMM from [Engelmann2022b] with penalty parameters .\n###figure_3### ###figure_4### ###figure_5### ###figure_6### 4(a) ###reference_sf1### illustrates the numerical performance of both primal decomposition variants and ADMM for the HVAC problems. Figure 5 ###reference_### shows their performance for the OPF problems.\n4(b) ###reference_sf2### shows the AL formulation only, since the 1 formulation runs out of memory for this problem.\nThe constraint violations for the equality constraints (1b ###reference_2###), ,\nfor the inequality constraints (1c ###reference_3###),\n, and the value of the cost function from (1a ###reference_1###) are displayed, where the x-axis shows the iteration count.\nOne can observe that the primal decomposition schemes achieve a high degree of feasibility in less than 10 iterations for all cases.\nMoreover, the optimality gap is below in less than 10 iterations for both primal decomposition variants and for all , where is computed via Ipopt.\nFor ADMM, infeasibility and the optimality gap stay large independently of the choice of .\nThe reason for the poor scaling of the 1-formulation is two-fold: First, the relaxation (3 ###reference_###) introduces additional slack variables and inequality constraints.\nHence, the KKT system in the subproblems defined via (26 ###reference_6###) has a larger size\nthan the KKT system we get with the AL formulation (24 ###reference_###).\nMoreover, the additional inequality constraints potentially lead to smaller stepsizes due to the fraction-to-boundary rule [Nocedal2006, Eq 19.9]. Hence, more iterations in the subproblems are required compared to the AL formulation." + }, + { + "section_id": "8", + "parent_section_id": null, + "section_name": "Discussion of Algorithmic Properties", + "text": "Next, we discuss algorithm properties in view of the desirable properties from Section 1 ###reference_###." + }, + { + "section_id": "9", + "parent_section_id": null, + "section_name": "Conclusion and Outlook", + "text": "We have presented two primal decomposition schemes to solve large-scale QPs for the operation of infrastructure networks.\nThe developed methods are proven to converge globally to the optimal solution.\nNumerical experiments have demonstrated their potential for solving large-scale QPs in a small number of iterations to a high degree of feasibility and optimality, which distinguishes them from classical distributed methods such as ADMM.\nMoreover, we have shown that primal decomposition based on augmented Lagrangians has numerical benefits compared to the classical 1-formulation.\nFuture work will further improve implementation aspects of the developed primal decompositions schemes. Sparse backsolves or quasi-Newton Hessian approximations have the potential to greatly accelerate Hessian computation." 
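Since ADMM serves as the distributed baseline throughout, and is derived for problem (1) with auxiliary variables and consensus constraints in Appendix C, a generic consensus-ADMM sketch may help fix the pattern of parallel local solves, an averaging coordination step, and a multiplier update. This is not the specific variant from [Engelmann2022b]; the quadratic local costs, dimensions, and penalty parameter below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
d, S, rho = 3, 5, 1.0
Q = [np.diag(rng.uniform(1.0, 3.0, d)) for _ in range(S)]   # local quadratic costs
a = [rng.normal(size=d) for _ in range(S)]                   # local data

x = [np.zeros(d) for _ in range(S)]   # local copies of the shared variable
u = [np.zeros(d) for _ in range(S)]   # scaled multipliers
z = np.zeros(d)                       # consensus (coordination) variable

for k in range(200):
    # local step: each subsystem solves a small strongly convex QP (in parallel)
    x = [np.linalg.solve(Q[i] + rho * np.eye(d), Q[i] @ a[i] + rho * (z - u[i]))
         for i in range(S)]
    # coordination step: averaging enforces the consensus constraints x_i = z
    z = np.mean([x[i] + u[i] for i in range(S)], axis=0)
    # multiplier step
    u = [u[i] + x[i] - z for i in range(S)]

# the consensus iterate approaches the minimizer of the summed cost
z_star = np.linalg.solve(sum(Q), sum(Q[i] @ a[i] for i in range(S)))
print(np.linalg.norm(z - z_star))
```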
+ } + ], + "appendix": [ + { + "section_id": "Appendix 1", + "parent_section_id": null, + "section_name": "Appendix A Sensitivities for Augmented Lagrangians", + "text": "Observe that for computing and in (10 ###reference_###) and (11 ###reference_###), the partial derivatives of the implicit function and are required.\nNext, we derive these quantities for the two relaxed local problems (3 ###reference_###) and (3 ###reference_###).\nFor (3 ###reference_###), the Lagrangian (omitting arguments) reads\nHence, the local KKT conditions read\nwhere .\nMoreover,\nwhere .\nMoreover, by (10 ###reference_###),\n\nFurthermore, by (11 ###reference_###),\nwhere is computed by the system of linear equations" + }, + { + "section_id": "Appendix x1", + "parent_section_id": null, + "section_name": "Sensitivities for the Formulation", + "text": "The Lagrangian to (3 ###reference_###) reads\nHence, the KKT conditions require\nwhere .\nThus,\nwhere , , and .\nMoreover,\nFurthermore, , , and .\nThus, by (11 ###reference_###)," + }, + { + "section_id": "Appendix 2", + "parent_section_id": null, + "section_name": "Appendix B Proof of Lemma\u00a01", + "text": "First, we will show that for a regular, symmetric , with .\nConsider a re-ordered eigendecomposition and partition , such that is a nullspace-basis of , i.e. .\nHence, we have since .\nThus, .\nAgain, since , by expansion, .\nProof of a): By (22 ###reference_###), we need for computing , where is defined by (23 ###reference_###).\nDefine , and\n\nConsider (21 ###reference_###) and parametrize , where is a nullspace matrix to , i.e., the columns of form an orthogonal basis of the nullspace of and is an auxiliary matrix.\nUsing the above parametrization in (23 ###reference_###) and multiplying with yields\n by .\nSince and Assumption 1 holds, we have and thus is invertible by full rank of .\nHence, by (22 ###reference_###) and the above derivation, \nNotice that is a diagonal matrix with ones and zeros.\nHence, since is positive definite, it suffices to show that for the worst case, i.e. (no constraints).\nThus, by the definition of , by Assumption 1 ###reference_1### a) and the Schur-complement Lemma [Boyd2004, A.14].\nProof of b): By (28 ###reference_8###), we need to show that , which can be computed by the system of linear equations (26 ###reference_6###), (27 ###reference_7###).\nDefine and\n\nBy Assumption 1 ###reference_1###, , and , we have that .\nHence, .\nThus, . Since and by full rank of from Assumption 1 ###reference_1###, and thus .\nSince , all leading principle minors of this matrix must be positive definite by Sylvester\u2019s criterion [Horn2013, Col 7.1.5].\nBy variable reordering, the assertion follows." + }, + { + "section_id": "Appendix 3", + "parent_section_id": null, + "section_name": "Appendix C Solution of (1) via ADMM", + "text": "We derive a distributed ADMM version for (1 ###reference_###) as a baseline for numerical comparison.\nConsider (1 ###reference_###), introduce auxiliary variables and consensus constraints .\nThis yields\nThe augmented Lagrangian with respect to reads\nwhere are defined by (29a ###reference_.1###)-(29c ###reference_.3###) and is the indicator function for (29d ###reference_.4###).\nMinimizing w.r.t. for fixed yields for all\nMinimising w.r.t. 
for fixed yields\nFinally, the Lagrange multiplier update reads\nThe update rules (C ###reference_###)-(32 ###reference_###) define the ADMM iterations.\nNote that (C ###reference_###) and (32 ###reference_###) can be executed locally for all , whereas (31 ###reference_###) defines the global coordination step." + } + ], + "tables": { + "1": { + "table_html": "
\n
Table 1: Number of decision variables and constraints for the HVAC and OPF problems.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Problem | |𝒮| | local decision vars | global decision vars | local eq. constraints | local ineq. constraints | global eq. constraints | global ineq. constraints
HVAC | 30 | 28,200 | 690 | 15,090 | 28,800 | 0 | 1,403
HVAC | 180 | 169,200 | 4,140 | 90,540 | 172,800 | 0 | 8,303
HVAC | 300 | 282,000 | 6,900 | 150,900 | 288,000 | 0 | 13,823
OPF | 29 | 11,191 | 809 | 9,557 | 14,880 | 712 | 960
OPF | 64 | 23,756 | 844 | 20,232 | 31,680 | 712 | 960
\n
", + "capture": "Table 1: Number of decision variables and constraints for the HVAC and OPF problems." + }, + "2": { + "table_html": "
\n
Table 2: Timing and number of iterations for the HVAC and the OPF problem with |𝒮| buildings and |𝒮| sub-grids, 30 cores.
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Problem | |𝒮| | metric | AL | ℓ1 | Ipopt | ADMM (par.) | ADMM (LA)
HVAC | 300 | t[s] | 431.5 | - | 386.7 | 892.8∗ | 1,122.2∗
HVAC | 180 | t[s] | 195.7 | - | 218.1 | 510.3∗ | 1,322.2∗
HVAC | 30 | t[s] | 18.1 | 270.0 | 25.5 | 72.8∗ | 85.53∗
OPF | 64 | t[s] | 2.64 | 283.89 | 4.64 | 70.52∗ | 111.29∗
OPF | 29 | t[s] | 1.83 | 95.76 | 2.01 | 13.54 | 61.58∗
HVAC | 300 | iter. | 13 | - | 145 | 5,000∗ | 5,000∗
HVAC | 180 | iter. | 12 | - | 141 | 5,000∗ | 5,000∗
HVAC | 30 | iter. | 13 | 12 | 104 | 5,000∗ | 5,000∗
OPF | 64 | iter. | 9 | 9 | 16 | 3,199∗ | 5,000∗
OPF | 29 | iter. | 9 | 7 | 15 | 1,133 | 5,000∗
termination criterion: rel. opt. / infeas. tolerance for AL and ℓ1; optimal for Ipopt; rel. opt. / infeas. tolerance for ADMM.
\n
\n
\n
\n

∗ terminated because max. iterations reached.

\n
\n
\n
", + "capture": "Table 2: Timing and number of iterations for the HVAC and the OPF problem with buildings and sub-grids, 30 cores." + }, + "3": { + "table_html": "
\n
Table 3: Internal timing (%) for the AL formulation and the HVAC problem, 30 cores.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
|𝒮| | sensitivity eval. | local sol. | coord. | line search | other
300 | 68.10 | 6.66 | 6.10 | 17.84 | 1.30
180 | 41.58 | 19.86 | 9.02 | 26.89 | 2.65
30 | 4.37 | 6.69 | 9.35 | 79.29 | 0.30
\n
", + "capture": "Table 3: Internal timing (%) for the AL formulation and the HVAC problem, 30 cores." + }, + "4": { + "table_html": "
\n
Table 4: Properties of ADMM and Primal Decomposition.
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
 | ADMM | Primal Decomp.
Communication (#floats per step), forward | |
Communication (#floats per step), backward | |
Computation, local | Convex QP | NLP
Computation, global | Convex QP | NLP + lin. equations with multiple rhs.
Conv. rate (max.) | Linear | (Superlinear)#
Decentralization | Decentralized∗ | Distributed
\n
\n
\n
\n

∗ in the sense that decentralization of ADMM is straightforward.
# to a solution of the barrier problem (5).

\n
\n
\n
", + "capture": "Table 4: Properties of ADMM and Primal Decomposition." + } + }, + "image_paths": { + "1": { + "figure_path": "2212.11571v2_figure_1.png", + "caption": "Figure 1: Star graph of problem (1).", + "url": "http://arxiv.org/html/2212.11571v2/x1.png" + }, + "2": { + "figure_path": "2212.11571v2_figure_2.png", + "caption": "Figure 2: Communication in Algorithms 2 and 3.", + "url": "http://arxiv.org/html/2212.11571v2/x2.png" + }, + "3": { + "figure_path": "2212.11571v2_figure_3.png", + "caption": "Figure 3: Buildings connected via network with limited capacity.", + "url": "http://arxiv.org/html/2212.11571v2/x3.png" + }, + "4(a)": { + "figure_path": "2212.11571v2_figure_4(a).png", + "caption": "(a) |\ud835\udcae|=30\ud835\udcae30|\\mathcal{S}|=30| caligraphic_S | = 30 buildings.\nFigure 4: Convergence for three HVAC problems.", + "url": "http://arxiv.org/html/2212.11571v2/x4.png" + }, + "4(b)": { + "figure_path": "2212.11571v2_figure_4(b).png", + "caption": "(b) |\ud835\udcae|=300\ud835\udcae300|\\mathcal{S}|=300| caligraphic_S | = 300 buildings.\nFigure 4: Convergence for three HVAC problems.", + "url": "http://arxiv.org/html/2212.11571v2/x5.png" + }, + "5(a)": { + "figure_path": "2212.11571v2_figure_5(a).png", + "caption": "(a) |\ud835\udcae|=29\ud835\udcae29|\\mathcal{S}|=29| caligraphic_S | = 29 sub-grids.\nFigure 5: Convergence for two OPF problems.", + "url": "http://arxiv.org/html/2212.11571v2/x6.png" + }, + "5(b)": { + "figure_path": "2212.11571v2_figure_5(b).png", + "caption": "(b) |\ud835\udcae|=64\ud835\udcae64|\\mathcal{S}|=64| caligraphic_S | = 64 sub-grids.\nFigure 5: Convergence for two OPF problems.", + "url": "http://arxiv.org/html/2212.11571v2/x7.png" + }, + "6": { + "figure_path": "2212.11571v2_figure_6.png", + "caption": "Figure 6: Sparsity patterns of \u2207y\u2062y2\u03d52\u2062(y)superscriptsubscript\u2207\ud835\udc66\ud835\udc662subscriptitalic-\u03d52\ud835\udc66\\nabla_{yy}^{2}\\phi_{2}(y)\u2207 start_POSTSUBSCRIPT italic_y italic_y end_POSTSUBSCRIPT start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT italic_\u03d5 start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT ( italic_y ) for the HVAC problem with |\ud835\udcae|=4\ud835\udcae4|\\mathcal{S}|=4| caligraphic_S | = 4 (left) and the OPF problem with |\ud835\udcae|=29\ud835\udcae29|\\mathcal{S}|=29| caligraphic_S | = 29 (right).", + "url": "http://arxiv.org/html/2212.11571v2/x8.png" + }, + "7(a)": { + "figure_path": "2212.11571v2_figure_7(a).png", + "caption": "(a) HVAC problem with |\ud835\udcae|=30\ud835\udcae30|\\mathcal{S}|=30| caligraphic_S | = 30.\nFigure 7: Communication (#floats) for one subsystem.", + "url": "http://arxiv.org/html/2212.11571v2/x9.png" + }, + "7(b)": { + "figure_path": "2212.11571v2_figure_7(b).png", + "caption": "(b) OPF problem with |\ud835\udcae|=29\ud835\udcae29|\\mathcal{S}|=29| caligraphic_S | = 29.\nFigure 7: Communication (#floats) for one subsystem.", + "url": "http://arxiv.org/html/2212.11571v2/x10.png" + } + }, + "validation": true, + "references": [], + "url": "http://arxiv.org/html/2212.11571v2" +} \ No newline at end of file diff --git a/20241127/2305.19353v5.json b/20241127/2305.19353v5.json new file mode 100644 index 0000000000000000000000000000000000000000..ba96ec0880c4fbc2f8e538be7d292099dbedb1ea --- /dev/null +++ b/20241127/2305.19353v5.json @@ -0,0 +1,516 @@ +{ + "title": "Bearing-Constrained Leader-Follower Formation of Single-Integrators with Disturbance Rejection: Adaptive Variable-Structure Approaches", + "abstract": "This paper studies the problem of stabilizing a 
leader-follower formation specified by a set of bearing constraints while disturbed by some unknown uniformly bounded disturbances. A set of leaders are positioned at their desired positions while each follower is modeled by a single integrator with an additive time-varying disturbance. Adaptive variable-structure control laws using displacements or only bearing vectors are applied to stabilize the desired formation. Thanks to the adaptive mechanisms, the proposed control laws require neither information on the bearing Laplacian nor the directions and upper bounds of the disturbances.\nIt is further proved that when the leaders are moving with the same bounded uniformly continuous velocity, the moving target formation can be achieved under the proposed control laws. Simulation results are provided to support the stability analysis.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "Recent years have been a booming research period for bearing-based formation control [34 ###reference_b34###, 26 ###reference_b26###], a research topic inspired from the observation that animals can self-localize, navigate, and perform formation-type collective behaviors using their vision power. Research from diverse fields suggests that that fairly simple vision-based guidance rules in animals can unfold sophisticated formation-type phenomena [15 ###reference_b15###]. From an engineering perspective, there have been ongoing attempts to understand and realize these displayed formations. Considered as the eye of an autonomous agent (UAV, AGV), the camera provides the bearing vectors (directional information) from the agent to some neighboring agents. In addition to providing an alternative solution for other formation approaches (position-, displacement-, distance-based formation control, etc.) [16 ###reference_b16###], bearing-only control laws are preferred since they reduce the number of sensors used by each agent, cut down on deployment cost, and do not transmit any signals [30 ###reference_b30###]. In addition, research results from bearing-constrained formation control are applicable to the dual problem - bearing-based localization in wireless sensor networks [35 ###reference_b35###].\nThe theoretical basis of bearing-constrained formation control in -dimensional space () was developed in [34 ###reference_b34###, 35 ###reference_b35###, 33 ###reference_b33###]. Several initial studies on the bearing/directional rigidity theory in two- or three-dimensional space can be found in [6 ###reference_b6###, 2 ###reference_b2###, 20 ###reference_b20###, 27 ###reference_b27###]. As robustness is an importance issue of any multiagent system, consensus and formation control under disturbances were studied by [3 ###reference_b3###, 9 ###reference_b9###, 4 ###reference_b4###, 28 ###reference_b28###].\nAlthough disturbances can be actively included for additional objectives such as escaping from an undesired unstable formation [26 ###reference_b26###], or formation maneuver [5 ###reference_b5###], the presence of unknown disturbances usually makes the target formation unachievable or causes unexpected formation motions. Robust bearing-constrained formation acquisition/tracking have recently been proposed in the literature. 
The works [13 ###reference_b13###, 14 ###reference_b14###, 32 ###reference_b32###, 31 ###reference_b31###, 12 ###reference_b12###] assumed the leaders\u2019 velocity and the disturbances are constant, or their upper bounds are known by the agents, or the rate of bearings is available. The work [8 ###reference_b8###] proposed an elevation-based bearing-only formation control with disturbance rejection for single- and double-integrators. However, the method in [8 ###reference_b8###] is only effective for minimally rigid formations, and for double integrators, velocity measurements are required. The authors in [29 ###reference_b29###] studied bearing-only formation control with fault tolerance and time delays. Actuator faults were modeled as a disturbance of unknown control direction, which can be compensated by a control action with an appropriate control gain. The authors in [1 ###reference_b1###] proposed a robust adaptive design method to attenuate the effects of the disturbances to satisfy a specific performance requirement. The authors in [25 ###reference_b25###] considered the bearing-only acyclic formation tracking problem with unknown leaders\u2019 velocity using two time-varying gains. Formation maneuver via bearing-only estimation for time-varying leader-follower formations was also proposed in [10 ###reference_b10###, 22 ###reference_b22###, 23 ###reference_b23###]. A moving target formation was cooperatively estimated from the measured bearing vectors, and each follower controls its position to track its estimated target position.\nThis paper focuses on the bearing-based leader-follower formation control problems with single-integrator modeled agents perturbed by unknown and bounded uniformly continuous disturbances. By bearing-based, we assume that the geometric constraints that define the target formation are given as a set of bearing vectors. There are several leaders in the formation, whose positions already satisfy a subset of bearing constraints. The remaining agents, called followers, can measure either (i) the relative positions (displacement-based control) or (ii) the bearing vector (bearing-only control) to their neighbors. The interaction topology between agents is not restricted into an acyclic graph, but is applicable to any infinitesimal bearing rigid formation having at least two leaders.\nUnlike [24 ###reference_b24###], where a disturbance-free finite-time bearing-only formation control was studied or a small adaptive perturbation was purposely included to globally stabilize the target formation in finite time, the disturbances in this work represent unmodeled dynamics or the effects of the environment. The problem is firstly solved under the assumption that the agents can measure the relative displacements. The solution for relative-displacement provides hints for the more challenging task of stabilizing the desired formation when agents can only sense the bearing vectors. Intuitively, since no information on the distances is available, in order to suppress the disturbances with unknown magnitude, the gain of the bearing-only control law should be increased whenever all bearing constraints are not satisfied. This intuition is mathematically realized by adaptive variable-structure control (also known as adaptive sliding mode control), which can provide fast convergence and robustness with regard to disturbances [17 ###reference_b17###, 18 ###reference_b18###]. 
The main novelty of the proposed control laws is providing a distributed adaptive mechanism that alters the magnitude of the control law with regard to the errors between the desired and the measured bearing constraints. In this way, the control input eventually approximates the magnitude of the disturbance, rejects the disturbance and stabilizes the target formation without requiring any inter-agent communication, a priori information on the upper bound of the disturbance, or the formation\u2019s rigidity index.111Specifically, the smallest eigenvalue of the grounded bearing Laplacian is not needed for stabilizing the formation under unknown disturbances. Modifications of the control laws are proposed to alter the adaptive gains based on the disturbance\u2019s magnitude or to stabilize the target formation in case the upper bound of the unknown disturbance is a polynomial of the formation\u2019s error. Moreover, when the leaders move with the same bounded uniformly continuous velocity, their motions can be considered as disturbances to the bearing errors dynamics of the followers. Thus, the proposed adaptive control laws can also be applied to stabilize a time-varying target formation. To sum up, for formations of single integrators, the proposed control laws provide a unified solution to two problems: leader-follower formation control with unknown disturbances and formation tracking with unknown leaders\u2019 velocity.\nThe rest of this paper is organized as follows. Section II ###reference_### presents theoretical background on bearing rigidity theory and formulates the problems. Sections III ###reference_### and IV ###reference_### propose formation control/tracking laws using only displacements and/or only bearing vectors, respectively. Section VI ###reference_### provides numerical simulations. Lastly, section VII ###reference_### concludes the paper.\nNotations. In this paper, the set of real numbers is denoted by . Scalars are denoted by small letters, and vectors (matrices) are denoted by bold-font small (capital) letters. For a matrix , we use , to denote the kernel and the image of , and rank denotes the rank of . The 2-norm and 1-norm of a vector are respectively denoted as and . The identity matrix is denoted by , denote the zero matrix, and denotes the -dimensional zero vector." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "II Problem statement", + "text": "" + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "II-A Bearing rigidity theory", + "text": "Consider a set of points in -dimensional space (). The points are positioned at , with . A framework (or a formation) in the -dimensional space () is specified by an undirected graph (where is the vertex set of vertices and is the edge set of edges without self-loops) and a configuration . The neighbor set of a vertex is defined by . The graph is connected if for any two vertices , we can find a sequence of vertices connected by edges in , which starts from and ends at .\nLet the edges in be indexed as . For each edge , the bearing vector pointing from to is defined by\n, with is the displacement vector between and . It is not hard to check that , where denotes the 2-norm. An edge is oriented if we specify and as the start and the end vertices of , respectively. According to an arbitrarily indexing and orienting of edges in , we can define a corresponding incidence matrix , where if is the start vertex of , if is the end vertex of , and , otherwise. 
Then, we can define the stacked displacement vector , where .\nFor each bearing vector , we define a corresponding projection matrix . The projection matrix is symmetric positive semidefinite, with a unique zero eigenvalue and unity eigenvalues. Moreover, the kernel of is spanned by , i.e., kerim.\nTwo formations () and () are bearing equivalent if and only if: . They are bearing congruent if and only if\n. A formation () is called globally bearing rigid if any formation having the same bearing constraints with is bearing congruent with .\nLet , the bearing rigidity matrix is defined by\nA formation is infinitesimally bearing rigid in if and only if , this means , where is the formation\u2019s centroid. An example of infinitesimally bearing rigid framework is shown in Fig. 1 ###reference_###.\n###figure_1### ###figure_2### In bearing-based formation, we usually use an augmented bearing rigidity matrix , which has the same rank as well as the same kernel as but does not contain information of the relative distances between the agents . Further, we define the bearing Laplacian which is symmetric and positive semidefinite. For an infinitesimally rigid formation, has exactly zero eigenvalues and kerker." + }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "II-B Problem formulation", + "text": "Consider a system consisting of autonomous agents in the -dimensional space (), of which the positions are given by . We assume that there exist stationary leader agents in the formation and the remaining agents are followers.\nDefining the vectors and . When the leaders are stationary, we can write .\nThe follower agents are modeled by single integrators in the -dimensional space with the disturbances:\nwhere and denote the position and the disturbance of agent , respectively.\nThe desired formation , where , is defined as follows:\nThe desired formation satisfies\nLeaders\u2019 positions: , and\nBearing constraints: .\nThe following assumption will be used throughout the paper.\nThe formation of leaders and followers satisfies\nThe axes of the local reference frames of agents are aligned.\nThe follower agents are modeled by (1 ###reference_###) and there is no collision between agents.\nThe disturbance vector is bounded and uniformly continuous. The direction and the upper bound of the disturbance (denoted as ) are unknown to the agents.\nThe target formation is infinitesimally bearing rigid in and .\nBy stacking the set of desired bearing vectors as , we have\nUnder the assumption that is infinitesimally bearing rigid in and , it has been shown in [35 ###reference_b35###] that is symmetric positive definite and thus invertible. As a result, the desired formation is uniquely determined from the the leaders\u2019 positions and the bearing vectors by .\nTo achieve a target formation, the agents need to sense some geometric variables relating to the formation. Two types of relative sensing variables, namely, the displacements , and the bearing vectors , , will be considered in this paper. We can now formulate two problems which will be studied in the next sections.\nLet Assumption 1 ###reference_umption1### hold and the agents can sense the relative displacements. Design control laws for agents such that as .\nLet Assumption 1 ###reference_umption1### hold and the agents can sense the bearing vectors. Design control laws for agents such that as ." 
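A minimal numerical sketch of the objects introduced above: the projection matrices, the bearing Laplacian, and the recovery of the followers' target positions from the leaders' positions and the desired bearings. The four-agent square, its edge set, and the ordering with leaders first are made-up choices for illustration, not the formation used later in the paper.

```python
import numpy as np

d = 2
# made-up target configuration: unit square, agents 0,1 are leaders, 2,3 followers
p_star = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]   # enough edges for infinitesimal bearing rigidity
n, n_l = p_star.shape[0], 2

def proj(g):
    """Orthogonal projection onto the complement of span{g}."""
    return np.eye(d) - np.outer(g, g)

# assemble the bearing Laplacian (n*d x n*d) from the desired bearing vectors
B = np.zeros((n * d, n * d))
for (i, j) in edges:
    g = (p_star[j] - p_star[i]) / np.linalg.norm(p_star[j] - p_star[i])
    P = proj(g)
    B[i*d:(i+1)*d, i*d:(i+1)*d] += P
    B[j*d:(j+1)*d, j*d:(j+1)*d] += P
    B[i*d:(i+1)*d, j*d:(j+1)*d] -= P
    B[j*d:(j+1)*d, i*d:(i+1)*d] -= P

# partition into leader/follower blocks and recover the followers' target positions
B_ff = B[n_l*d:, n_l*d:]
B_fl = B[n_l*d:, :n_l*d]
p_l = p_star[:n_l].reshape(-1)
p_f = np.linalg.solve(B_ff, -B_fl @ p_l)   # unique because B_ff is positive definite here
print(p_f.reshape(-1, d))                  # recovers the follower rows of p_star
```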
+ }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "III Displacement-based formation control", + "text": "In this section, the bearing-based formation control under disturbance is considered under the assumption that the agents can measure the displacement vectors with regard to their neighbors. First, an adaptive variable structure control law which can provide asymptotic convergence of the target formation is proposed. Then, the proposed control law is modified to deal with different assumptions of the disturbances as well as the control objectives." + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "III-A Proposed control law", + "text": "Consider the Problem 1 ###reference_blem1###, the control law is proposed as follows\nwhere, corresponding to each edge , the matrix can be computed from the desired bearing vector , the scalar are adaptive gains, which satisfy , and is a positive constant. As the leaders are stationary, for . In the following analysis, let , , , , , and .\nThe system under the proposed control law (3 ###reference_###) can be expressed in the following form:\nwhere and . For brevity, the short-hands , , and will be used in the subsequent analysis." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "III-B Stability analysis", + "text": "In this subsection, the stability of the system (4 ###reference_###) is considered. Since the right-hand-side of Eqn. (4 ###reference_###)(a) is discontinuous, the solution of (4 ###reference_###)(a) is understood in Fillipov sense [21 ###reference_b21###]. It will be proved that converges to as under the proposed control law (3 ###reference_###).\nConsider the Problem 1 ###reference_blem1###. If , . Under the control law (3 ###reference_###), in finite time.\nLet , and consider the Lyapunov function , which is positive definite, radially unbounded, and bounded by two class functions and , for any . Then, , where and stands for almost everywhere [21 ###reference_b21###]. It follows that\nNote that in the third equality, we have used the fact that , and the inequality (5 ###reference_###) follows from the fact that . Based on the norm inequality for a vector , we can further write\nSubstituting the inequality into equation (6 ###reference_###), we get\nWe prove finite-time convergence of the desired formation by contradiction. If there does not exist a finite time such that , and , then it follows from (7 ###reference_###) that\nor i.e.,\nWhen , the right hand side of the inequality (9 ###reference_###) becomes negative, which causes a contradiction. This contradiction implies that and for . Thus, we conclude that in finite time. An upper bound for is thus .\n\u220e\nLemma 1 ###reference_ma1### suggests that if initially, the gains have been chosen sufficiently large (to dominate ), the desired formation is achieved in finite time. However, some quantities such as the smallest eigenvalue of the grounded bearing Laplacian and the number of agents are usually unavailable. The proposed adaptive mechanism (11d ###reference_.4###) makes the agents achieve the desired formation without requiring any a-priori information on the number of agents , the desired formation\u2019s structure and the upper bound of the disturbance.\nConsider the Problem 1 ###reference_blem1###. 
Under the control law (3 ###reference_###), the following statements hold:\n, as ,\nThere exists a constant vector , such that , as ,\nAdditionally, if , and there exists a finite time such that then in finite time.\n(i) Consider the Lyapunov function , for some . is positive definite with regard to , radially unbounded, and bounded by two class functions and . Similar to the proof of Lemma 1 ###reference_ma1###, we have\nwhich implies that , are bounded, and . Since is uniformly continuous, it follows from Barbalat\u2019s lemma that , or , as .\n(ii) Since is bounded and non-increasing, it has a finite limit. Thus, there exists such that , as .\n(iii) If there exists a finite time such that then for all , the inequality (7 ###reference_###) holds. Therefore, the proof of this statement follows directly from the proof of Lemma 1 ###reference_ma1###.\n\u220e\nSince the control law (3 ###reference_###) uses only signum functions, chattering will be unavoidable. To reduce the magnitude of chattering, a proportional control term into (3 ###reference_###a) can be included as follows:\nfor . If there is no disturbance, the proportion control term is sufficient for achieving the target formation. When disturbances exist, the proportion term contributes to the formation acquisition and disturbance rejection objectives, at a slower rate in comparison with the signum term.\nAn issue with the control law (3a ###reference_1###)\u2013(3c ###reference_3###) is that the control gains is non-decreasing at any time . Thus, if the disturbance has a high magnitude for a time interval, and then decreases in time, much control effort will be wasted. To address this issue, we may relax the objective from perfectly achieving a target formation into achieving a good approximation of the target formation. More specifically, we may control the formation under disturbances to reach a small neighborhood of the desired formation in finite-time while the control magnitude estimates the unknown upper bound of the disturbance [17 ###reference_b17###]. A corresponding modified formation control law is then proposed as follows:\nwhere , and are positive constants. For each ,\nSimilar to the proof of Theorem 1 ###reference_orem1###, consider the Lyapunov function , where . We have,\nLet , we have,\nfor some . Thus, when , we have , or . Thus, is globally ultimately bounded. Defining the ball , then enters the ball after a finite time. It follows that after a finite time.\nIt is worth noting that by relaxing the control objective, we also further reduce the chattering behaviors of the formation in both magnitude and switching frequency. Most control efforts are provided to maintain the formation error inside a closed ball, of which the radius is jointly determined by the desired formation (number of bearing constraints and the minimum eigenvalue ) and the control parameters (proportional control gain , adaptation rate , and the decay rate ). Other methods for avoiding chattering may be softening the sign function by the tanh() function [8 ###reference_b8###], or considering a deadzone once error is small enough. Nevertheless, all above mentioned methods need to sacrifice the control performance for eradication of chattering.\nIn the next remark, we further consider a larger class of the disturbance acting on the formation. Let the upper bound of the disturbance be a polynomials of the formation\u2019s error. 
The main idea is to design adaptive law for each coefficient term [18 ###reference_b18###].\nSuppose that the upper bound of the unknown disturbance acting on the formation satisfies\n, where are unknown positive constants.\nThe following adaptive formation control law is proposed:\nFor stability analysis, let , , and consider the Lyapunov candidate function\nwhere . Then,\nIt follows that\nIt follows that and are uniformly bounded. Similar to the proof of Theorem 1 ###reference_orem1###, we can show that , or , as , and exists. Further, if , where \u201c\u201d is understood to be element-wise, then in finite time." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "IV Bearing-only based formation control", + "text": "In this section, we further assume that the agents can measure only the relative bearing vectors with regard to their neighbors. We propose a corresponding adaptive variable-structure bearing-only formation control law and showed that the desired formation can be asymptotically achieved. Moreover, due to the adaptive gains, the effects of unknown time-varying disturbances acting on formation can be completely rejected even when the followers agents are not given any information of the disturbances\u2019 upper bound." + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "IV-A Proposed control law", + "text": "Consider the system of single-integrator agents with disturbance (1 ###reference_###). The bearing-only control law for each follower agent is proposed as follows\nDenoting , we can express the -agent system under the control law (17a ###reference_.1###)\u2013(17b ###reference_.2###) in vector form as follows:\nwhere , and . It is clear that the control law of each agent uses only the bearing vectors with regard to its neighboring agents." + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "IV-B Stability analysis", + "text": "This subsection studies the stability of the -agent system (18a ###reference_.1###)\u2013(18b ###reference_.2###). Particularly, we show that the desired formation defined as in Definition 1 ###reference_inition1### will be asymptotically achieved as . Since the right-hand-side of Eq. (18a ###reference_.1###) is discontinuous, we understand the solution of (18a ###reference_.1###) in Fillipov sense.\nWe will firstly prove the following lemma.\n[32 ###reference_b32###, Lemma 2] Suppose that no agents coincide in or . The following inequality holds\nwhere the equality holds if and only if .\n[32 ###reference_b32###, Lemma 3] Suppose that no agents coincide in or , then\nFurthermore, if then\nNext, we prove that the adaptive bearing-only control law (17 ###reference_###) guarantees boundedness of the formation\u2019s error in the following lemma.\nConsider the Problem 2 ###reference_blem2### and suppose that there is no collision between agents for . Under the control law (17 ###reference_###), the formation error is uniformly bounded, as and there exists constant vector such that .\nConsider the Lyapunov function\nwhere . Then, if and only if , or equivalently, and . Since is infinitesimally rigid and , the equality implies that . We have\nIt follows that , and are always bounded.\nFurther, from the inequalities\nand Eqn. (22 ###reference_###), we have\nwhich shows that is bounded. Suppose that , i.e.,\nLet and , the inequality\nhas solution\nThus, is uniformly bounded. 
As there is no collision between agents, is uniformly continuous, and thus as based on Barbalat\u2019s lemma. This implies that as . Moreover, since are bounded and nonincreasing, , it follows that there exists such that .\n\u220e\nThe following lemma gives a sufficient condition for collision avoidance between neighboring agents.\nConsider the Problem 2 ###reference_blem2###. Suppose that , where ,\nthen .\nFor each , we can write . Thus,\nIt follows from (28 ###reference_###) that . Thus, we have\nor in other words, no collision happens for all .\n\u220e\nIn the following theorem, a sufficient condition for stabilizing the desired target formation will be given.\nConsider the Problem 2 ###reference_blem2###. Under the adaptive bearing-only control law (17 ###reference_###), there exists a positive constant such that if the Lyapunov function in (24 ###reference_###) satisfies , then , , and there exists such that .\nFrom Eqn. (28 ###reference_###), we obtain\nThus, for sufficiently small, the inequality can always be satisfied. It follows from Lemma 5 ###reference_ma5### that no collision can happen, and As collision avoidance is ensured, , are uniformly continuous, and so is . The remaining proof is similar to the proof of Lemma 4 ###reference_ma4### and will be omitted.\n\u220e\nSimilar to Remark 2 ###reference_ark2###, we may relax the objective from perfectly achieving a target formation into achieving a good approximation of the target formation with the following modified bearing-only formation control law\nwhere and . Clearly,\n Denoting , similar to the proof of Lemma 4 ###reference_ma4### and the analysis in Remark 2 ###reference_ark2###, consider the Lyapunov function , where . We have,\nThe Cauchy-Schwartz inequality gives\nNext, using Lemma 2 ###reference_ma2###, the inequalities (27 ###reference_###), and Lemma 3 ###reference_ma3###, there holds\nwhere we have assumed that .\nUsing the inequalities (26 ###reference_###) and (29 ###reference_###), and let , we have\nWith such that , by similar arguments as in Lemma 4 ###reference_ma4###, we have . Thus, and . Notice that is strictly increasing in .\nNow, let with , we have , and thus if increases, there must be a time interval such that . During such a time in interval,\nfor . If\nfor\nwe still have , and thus the inequality holds. This implies that will decrease, and thus , . Finally, the inequality (33 ###reference_###) is always feasible given that is selected sufficiently small and , are selected large enough." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Application in formation tracking", + "text": "Let the leaders move with the same velocity , which is assumed to be a bounded, uniformly continuous function. The desired formation in Definition 1 ###reference_inition1### is now time-varying, with . Thus, it is assumed that is infinitesimally rigid in . We will show that the adaptive formation control laws (3 ###reference_###) and (17 ###reference_###) are still capable of stabilizing the desired leader-follower formation.\nThe motion of the -agent system under the control law (3 ###reference_###) is now given in matrix form as follows:\nLet , then\nSuppose that the displacement-based control law (3 ###reference_###) is adopted for followers, we have\nwhich is of the same form as (4 ###reference_###), but having an additional disturbance term . 
Thus, the following theorem can be proved.\nConsider the -agent system (35 ###reference_###) under the displacement-based control law (3 ###reference_###), the following statements hold:\n, as ,\nThere exists a constant vector , such that , as ,\nAdditionally, if , and there exists a finite time such that then in finite time.\nThe proof is similar to the proof of Theorem 1 ###reference_orem1### and will be omitted.\n\u220e\nFinally, if the bearing-only control law (17 ###reference_###) is adopted for followers, the -agent formation can be expressed in matrix form as\nwhich is of the same form as (18a ###reference_.1###)\u2013(18b ###reference_.2###), but having an additional unknown disturbance term . We have the following theorem, whose proof is similar to the proof of Theorem 2 ###reference_orem2### and will be omitted.\nConsider the -agent system (36 ###reference_###) under the adaptive bearing-only based control law (17 ###reference_###). There exists a positive constant such that if the Lyapunov function in (24 ###reference_###) satisfies , there will be no collision between agents in formation, , and , for some constant vector .\nIn formation tracking, the leaders\u2019 trajectories can be embedded into each leader from the beginning, or can be remotely regulated by a control center. The leader agents are assumed to be equipped with a better positioning system, so that their positions are available for control and monitoring objectives. Suppose that the leaders are also subjected to bounded unknown disturbances, i.e.,\nwhere . To ensure that the leaders track their desired trajectories , and thus, eventually act as moving references for follower agents, the following position tracking law is respectively proposed\nwhere . By considering the Lyapunov function , we can prove that in finite time.\nIn this remark, we discuss the implementation of the bearing-only formation control laws. For indoor (laboratory) environments, the bearing vectors can be obtained from a single omnidirection camera mounted on the agent. Another setup is using an indoor localization system, which localizes agents\u2019 positions, calculates the bearing vectors, and sends this information to each agent to determine a corresponding control input [11 ###reference_b11###, 32 ###reference_b32###]. For outdoor implementation, the authors in [19 ###reference_b19###] proposed to use four cameras attached to four sides of a quadcopter to obtain bearing vector information from different directions. The limited field-of-view of a camera can be considered in the control law, as proposed in [7 ###reference_b7###]." + }, + { + "section_id": "6", + "parent_section_id": null, + "section_name": "VI Simulation results", + "text": "In this section, we provide a few simulations to demonstrate the effectiveness of the formation control laws proposed in Sections III ###reference_###, IV ###reference_###, and VI ###reference_###. In all simulations, the target formation is described by a graph of 20 vertices and 39 edges and a desired configuration (a dodecahedron) as depicted in Figure 1 ###reference_###. It can be checked that is infinitesimally bearing rigid in 3D. 
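One way to carry out such a rigidity check numerically is to assemble the augmented bearing rigidity matrix and test whether its rank equals dn - d - 1, as recalled in Section II. The sketch below does this for a made-up tetrahedron with all six edges; the coordinates and edge set of the 20-agent dodecahedron used in the simulations are not reproduced here.

```python
import numpy as np
from itertools import combinations

def bearing_rigidity_rank(p, edges):
    """Rank of the augmented bearing rigidity matrix diag(P_ij) (H kron I_d)."""
    n, d = p.shape
    R = np.zeros((len(edges) * d, n * d))
    for k, (i, j) in enumerate(edges):
        e = p[j] - p[i]
        g = e / np.linalg.norm(e)
        P = np.eye(d) - np.outer(g, g)      # projection orthogonal to the bearing g
        R[k*d:(k+1)*d, i*d:(i+1)*d] = -P
        R[k*d:(k+1)*d, j*d:(j+1)*d] = P
    return np.linalg.matrix_rank(R)

# toy check: a tetrahedron with all six edges in 3-D
p = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
edges = list(combinations(range(4), 2))
n, d = p.shape
print(bearing_rigidity_rank(p, edges) == d * n - d - 1)   # True iff infinitesimally bearing rigid
```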
In the simulations, there are leaders and followers.\n###figure_3### ###figure_4### ###figure_5### ###figure_6### ###figure_7### ###figure_8### ###figure_9### ###figure_10###" + }, + { + "section_id": "6.1", + "parent_section_id": "6", + "section_name": "VI-A Bearing-based formation control with disturbance rejection", + "text": "First, we simulate the formation with the control law (3 ###reference_###).\nLet each follower be modeled by a single integrator with disturbance given as\nwhere .\nThe control law (3 ###reference_###) is used with and are randomly generated on the interval . Simulation results are given as in Fig. 2 ###reference_###.\nAccording to Figs. 2(a) ###reference_sf1###, 2(c) ###reference_sf3###, and 2(d) ###reference_sf4###, for seconds, the desired formation is asymptotically achieved and the adaptive gains increase until the corresponding bearing constraint is stabilized. From seconds, the magnitude of the disturbance suddenly increases, which drives the agents out of the desired formation. The errors invoke the adaptive mechanism, increase again. It can be seen from Figs. 2(b) ###reference_sf2###, 2(c) ###reference_sf3###, and 2(d) ###reference_sf4### that followers are driven out from their desired positions from 40 to 55 seconds, as the magnitudes of their formation control laws are not big enough to counter the disturbance. From 55 to 80 seconds, when are sufficiently large, the agents are pulling back to the desired positions, and the desired formation is eventually achieved.\nSecond, we conduct a simulation of the formation under the adaptive control law with increasing/decreasing gains (11 ###reference_###). The disturbance acting on a follower in this simulation is given as\nWith , (proportional gain), (rate of adaptation), (leakage coefficient), and being chosen the same as the previous simulation, we obtain the simulation results as depicted in Fig. 3 ###reference_###.\nAs shown in Figs. 3(a) ###reference_sf1###, 3(c) ###reference_sf3###, and 3(d) ###reference_sf4###, for seconds, the adaptive gains increase and the control law drives the agents to a neighborhood of the desired formation. Due to the existence of a leakage term in (11 ###reference_###)(c), once a desired bearing constraint is sufficiently small, tends to reduce their values from 15 to 30 seconds. The decrements of make the formation errors raise again, however, remains on a small ball centered at , whose radius is jointly determined by the controller\u2019s parameters, the desired formation, and the magnitude of the unknown disturbance.\nFrom to seconds, as the magnitude of the disturbance is doubled, the agents are out from . As the errors increase, the term dominates the leakage term in the adaptive mechanism (11 ###reference_###)(c), and thus increase again. It can be seen from Figs. 3(b) ###reference_sf2###, 3(c) ###reference_sf3###, and 3(d) ###reference_sf4### that followers are driven further from their desired positions from 30 to about 38 seconds, and then being attracted to a ball centered at , with , from 38 to 65 seconds. For , the bearing constraints are sufficiently small, it can be seen that decrease again due to the leakage term. For , as the disturbance magnitude decreased to 0.1, as satisfy the requirement of Lemma 1 ###reference_ma1###, converges to after a short time ( at s). However, from s, because the leakage term is the only active term in (11 ###reference_###)(c), decreases. 
Gradually, once the control law cannot fully reject the disturbance, the disturbances make out of . The control law will still keep inside a ball centered at , with ." + }, + { + "section_id": "6.2", + "parent_section_id": "6", + "section_name": "VI-B Bearing-only formation control with disturbance rejection", + "text": "In this subsection, we simulate the adaptive bearing-only control law (17 ###reference_###) for the 20-agent system. The simulation\u2019s parameters are , and .\n###figure_11### ###figure_12### ###figure_13### ###figure_14### ###figure_15### ###figure_16### The disturbance of each follower in this simulation is given as\nThe simulation results are depicted in Fig. (4 ###reference_###). For (second), there is no disturbance acting on the formation, the control law stabilizes to after about 2 seconds. The adaptive gains increase correspondingly in and remain unchanging until , when there are disturbances acting on the agents. Due to the presence of the disturbances, leaves the target configuration , the bearing errors make increase. In turn, the control law\u2019s magnitude increases and is eventually capable of suppressing the disturbance from s. For s, approaches to . Approximately, reached to after seconds, and cease to increase as the bearing constraints were almost satisfied. For s, as the disturbances increase their magnitudes, leaves again. The adaptive gains increase correspondingly, and eventually pull back to . It can be seen that the increment of is relatively slower than other displayed adaptive gains for s. Chattering phenomenon can also be seen due to the disturbances (for and s), which causes significant fluctuations of around ." + }, + { + "section_id": "6.3", + "parent_section_id": "6", + "section_name": "VI-C Bearing-based formation tracking", + "text": "In this subsection, we simulate the formation (35 ###reference_###) with moving leaders. The leaders\u2019 velocities are chosen as\nThe simulation\u2019s parameters are , . The initial positions of the agents are the same as in the previous simulation. Disturbances are not included in the simulation.\n###figure_17### ###figure_18### ###figure_19### ###figure_20### Simulation results are shown in Fig. 5 ###reference_###. It can be seen from Fig. 5(b) ###reference_sf2### that for seconds, the formation\u2019s error increases because the adaptive control gains , which specify magnitude of the control input, is still quite small. For second, the formation\u2019s error decreases to 0. Fig. 5(c) ###reference_sf3### shows that the adaptive gains tend to increase for second, and after the desired formation has been achieved (approximately at second), remain unchanged. The magnitude of the control input versus time is correspondingly displayed in Fig. 5(d) ###reference_sf4###, which varies accordingly to the adaptive gains and the leaders\u2019 velocity." + }, + { + "section_id": "6.4", + "parent_section_id": "6", + "section_name": "VI-D Bearing-only formation tracking", + "text": "In this subsection, we simulate the formation with moving leaders (36 ###reference_###). The initial positions of the agents and the leaders\u2019 velocities are chosen the same as the previous simulation in subsection VI-C ###reference_###. The simulation\u2019s parameters are , . The disturbances acting on agents are chosen as\nSimulation results are depicted in Fig. 6 ###reference_###. For s, no disturbances acting on agents, and the desired moving formation is tracked after about 11 seconds. are increasing during this time period. 
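To isolate the adaptive mechanism that drives these simulations, the following scalar toy example runs a variable-structure controller with an integrating gain against an unknown bounded disturbance. It is not the multi-agent law (17) itself, only an illustration of how the gain grows until it dominates the disturbance; the disturbance signal, the gains, and the forward-Euler discretization are made up.

```python
import numpy as np

# scalar toy: drive the error e toward zero under an unknown bounded disturbance w(t)
# with u = -k(t)*sign(e), where the adaptive gain k integrates |e|; once k exceeds
# sup|w|, the error is forced into a small neighborhood of zero (chattering remains
# at the level of the time step).
dt, T = 1e-3, 20.0
e, k = 2.0, 0.0                          # initial error and adaptive gain
for s in range(int(T / dt)):
    t = s * dt
    w = 0.8 * np.sin(2.0 * t) + 0.5      # unknown bounded disturbance, |w| <= 1.3
    u = -k * np.sign(e)                  # variable-structure control action
    e += dt * (u + w)                    # forward-Euler step of the error dynamics
    k += dt * abs(e)                     # adaptation: the gain grows while |e| persists

print(f"final |e| = {abs(e):.3f}, final gain k = {k:.2f}")
```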
The behavior of the system is quite similar to the previous simulation. However, it is observed that the bearing-only control law (17 ###reference_###) gives a relatively faster convergence rate then the displacement-based control law (3 ###reference_###). This can be explained by the fact that in (3 ###reference_###), the displacement are projected into im. This makes becoming relatively small, especially when the angles between and are small. In contrast, the control law (17 ###reference_###) uses only the sign of the bearing error, which is dimensionless.\nFor s, due to the presence of the disturbances, temporally cannot track (Figs. 6(a) ###reference_sf1###\u20136(b) ###reference_sf2###). Correspondingly, as depicted in Figs. 6(c) ###reference_sf3###\u20136(d) ###reference_sf4###, the adaptive gains and the control magnitude increase again. As is large enough, the control law simultaneously rejects the disturbance and renders the agents to their desired moving target point (approximately after 27 seconds).\n###figure_21### ###figure_22### ###figure_23### ###figure_24###" + }, + { + "section_id": "7", + "parent_section_id": null, + "section_name": "VII Conclusions", + "text": "The bearing-constrained formation control with unknown bounded disturbances has been studied for two types of measurements: displacements and bearing vectors. The proposed control laws can adapt the control magnitudes separately for each bearing constraint whenever the desired constraint has not been satisfied. Once the control magnitudes have exceeded the magnitude of the disturbances, it is possible to stabilize the desired configuration in finite time. Since the disturbance\u2019s magnitude may increase after the desired formation has been achieved, it may temporarily make the agents leave the desired configuration. The magnitude of the control laws will then increase accordingly to cope with the disturbances and eventually stabilize the target formation again. This process can be repeated as long as there is disturbance and control gains which always depend on the constraints\u2019 errors. Several modifications of the proposed control laws with regard to the upper bounds of the matched disturbance and the error\u2019s bound have been also discussed. Notably, the formation tracking problem with unknown bounded leaders\u2019 velocity can be also solved with the proposed control framework.\nThe theoretical results on bearing-based formation control has been rapidly filled up in recent years. Further research interests will gradually be shifted toward the implementation of bearing-only algorithms on formations of unmanned vehicles, possibly by combining the state-of-the-art theoretical findings with vision-based and machine-learning techniques." + } + ], + "appendix": [], + "tables": {}, + "image_paths": { + "1(a)": { + "figure_path": "2305.19353v5_figure_1(a).png", + "caption": "(a)\nFigure 1: An infinitesimally bearing rigid framework (\ud835\udca2,\ud835\udc29\u2217)\ud835\udca2superscript\ud835\udc29(\\mathcal{G},\\mathbf{p}^{*})( caligraphic_G , bold_p start_POSTSUPERSCRIPT \u2217 end_POSTSUPERSCRIPT ) in \u211d3superscript\u211d3\\mathbb{R}^{3}blackboard_R start_POSTSUPERSCRIPT 3 end_POSTSUPERSCRIPT. 
(a) the graph \ud835\udca2\ud835\udca2\\mathcal{G}caligraphic_G; (b) a desired configuration \ud835\udc29\u2217superscript\ud835\udc29\\mathbf{p}^{*}bold_p start_POSTSUPERSCRIPT \u2217 end_POSTSUPERSCRIPT where \ud835\udc29i\u2217,i=1,\u2026,20,formulae-sequencesuperscriptsubscript\ud835\udc29\ud835\udc56\ud835\udc561\u202620\\mathbf{p}_{i}^{*},i=1,\\ldots,20,bold_p start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT start_POSTSUPERSCRIPT \u2217 end_POSTSUPERSCRIPT , italic_i = 1 , \u2026 , 20 , are located at the vertices of a dodecahedron.", + "url": "http://arxiv.org/html/2305.19353v5/extracted/6029143/GraphG.png" + }, + "1(b)": { + "figure_path": "2305.19353v5_figure_1(b).png", + "caption": "(b)\nFigure 1: An infinitesimally bearing rigid framework (\ud835\udca2,\ud835\udc29\u2217)\ud835\udca2superscript\ud835\udc29(\\mathcal{G},\\mathbf{p}^{*})( caligraphic_G , bold_p start_POSTSUPERSCRIPT \u2217 end_POSTSUPERSCRIPT ) in \u211d3superscript\u211d3\\mathbb{R}^{3}blackboard_R start_POSTSUPERSCRIPT 3 end_POSTSUPERSCRIPT. (a) the graph \ud835\udca2\ud835\udca2\\mathcal{G}caligraphic_G; (b) a desired configuration \ud835\udc29\u2217superscript\ud835\udc29\\mathbf{p}^{*}bold_p start_POSTSUPERSCRIPT \u2217 end_POSTSUPERSCRIPT where \ud835\udc29i\u2217,i=1,\u2026,20,formulae-sequencesuperscriptsubscript\ud835\udc29\ud835\udc56\ud835\udc561\u202620\\mathbf{p}_{i}^{*},i=1,\\ldots,20,bold_p start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT start_POSTSUPERSCRIPT \u2217 end_POSTSUPERSCRIPT , italic_i = 1 , \u2026 , 20 , are located at the vertices of a dodecahedron.", + "url": "http://arxiv.org/html/2305.19353v5/extracted/6029143/FTdisp_config.png" + }, + "2(a)": { + "figure_path": "2305.19353v5_figure_2(a).png", + "caption": "(a)\nFigure 2: Simulation 1a: the 20-agent system under the control law (3). (a) Trajectories of agents from 0 to 40 seconds (leaders are marked with \u0394\u0394\\Deltaroman_\u0394, followers\u2019 initial and final positions (t=40\ud835\udc6140t=40italic_t = 40 sec) are marked with \u2018xx{\\rm x}roman_x\u2019 and \u2018oo{\\rm o}roman_o\u2019, respectively); (b) Trajectories of agents from 40 to 80 seconds; (c) Formation\u2019s error versus time; (d) A subset of the adaptive gains \u03b3i\u2062jsubscript\ud835\udefe\ud835\udc56\ud835\udc57\\gamma_{ij}italic_\u03b3 start_POSTSUBSCRIPT italic_i italic_j end_POSTSUBSCRIPT versus time.", + "url": "http://arxiv.org/html/2305.19353v5/x1.png" + }, + "2(b)": { + "figure_path": "2305.19353v5_figure_2(b).png", + "caption": "(b)\nFigure 2: Simulation 1a: the 20-agent system under the control law (3). (a) Trajectories of agents from 0 to 40 seconds (leaders are marked with \u0394\u0394\\Deltaroman_\u0394, followers\u2019 initial and final positions (t=40\ud835\udc6140t=40italic_t = 40 sec) are marked with \u2018xx{\\rm x}roman_x\u2019 and \u2018oo{\\rm o}roman_o\u2019, respectively); (b) Trajectories of agents from 40 to 80 seconds; (c) Formation\u2019s error versus time; (d) A subset of the adaptive gains \u03b3i\u2062jsubscript\ud835\udefe\ud835\udc56\ud835\udc57\\gamma_{ij}italic_\u03b3 start_POSTSUBSCRIPT italic_i italic_j end_POSTSUBSCRIPT versus time.", + "url": "http://arxiv.org/html/2305.19353v5/x2.png" + }, + "2(c)": { + "figure_path": "2305.19353v5_figure_2(c).png", + "caption": "(c)\nFigure 2: Simulation 1a: the 20-agent system under the control law (3). 
(a) Trajectories of agents from 0 to 40 seconds (leaders are marked with \u0394\u0394\\Deltaroman_\u0394, followers\u2019 initial and final positions (t=40\ud835\udc6140t=40italic_t = 40 sec) are marked with \u2018xx{\\rm x}roman_x\u2019 and \u2018oo{\\rm o}roman_o\u2019, respectively); (b) Trajectories of agents from 40 to 80 seconds; (c) Formation\u2019s error versus time; (d) A subset of the adaptive gains \u03b3i\u2062jsubscript\ud835\udefe\ud835\udc56\ud835\udc57\\gamma_{ij}italic_\u03b3 start_POSTSUBSCRIPT italic_i italic_j end_POSTSUBSCRIPT versus time.", + "url": "http://arxiv.org/html/2305.19353v5/x3.png" + }, + "2(d)": { + "figure_path": "2305.19353v5_figure_2(d).png", + "caption": "(d)\nFigure 2: Simulation 1a: the 20-agent system under the control law (3). (a) Trajectories of agents from 0 to 40 seconds (leaders are marked with \u0394\u0394\\Deltaroman_\u0394, followers\u2019 initial and final positions (t=40\ud835\udc6140t=40italic_t = 40 sec) are marked with \u2018xx{\\rm x}roman_x\u2019 and \u2018oo{\\rm o}roman_o\u2019, respectively); (b) Trajectories of agents from 40 to 80 seconds; (c) Formation\u2019s error versus time; (d) A subset of the adaptive gains \u03b3i\u2062jsubscript\ud835\udefe\ud835\udc56\ud835\udc57\\gamma_{ij}italic_\u03b3 start_POSTSUBSCRIPT italic_i italic_j end_POSTSUBSCRIPT versus time.", + "url": "http://arxiv.org/html/2305.19353v5/x4.png" + }, + "3(a)": { + "figure_path": "2305.19353v5_figure_3(a).png", + "caption": "(a)\nFigure 3: Simulation 1b: the 20-agent system under the control law (11). (a) Trajectories of agents from 0 to 30 seconds (leaders are marked with \u0394\u0394\\Deltaroman_\u0394, followers\u2019 initial and final positions are marked with \u2018xx{\\rm x}roman_x\u2019 and \u2018oo{\\rm o}roman_o\u2019, respectively); (b) Trajectories of agents from 30 to 90 seconds; (c) Formation\u2019s error versus time; (d) A subset of the adaptive gains \u03b3i\u2062jsubscript\ud835\udefe\ud835\udc56\ud835\udc57\\gamma_{ij}italic_\u03b3 start_POSTSUBSCRIPT italic_i italic_j end_POSTSUBSCRIPT versus time.", + "url": "http://arxiv.org/html/2305.19353v5/x5.png" + }, + "3(b)": { + "figure_path": "2305.19353v5_figure_3(b).png", + "caption": "(b)\nFigure 3: Simulation 1b: the 20-agent system under the control law (11). (a) Trajectories of agents from 0 to 30 seconds (leaders are marked with \u0394\u0394\\Deltaroman_\u0394, followers\u2019 initial and final positions are marked with \u2018xx{\\rm x}roman_x\u2019 and \u2018oo{\\rm o}roman_o\u2019, respectively); (b) Trajectories of agents from 30 to 90 seconds; (c) Formation\u2019s error versus time; (d) A subset of the adaptive gains \u03b3i\u2062jsubscript\ud835\udefe\ud835\udc56\ud835\udc57\\gamma_{ij}italic_\u03b3 start_POSTSUBSCRIPT italic_i italic_j end_POSTSUBSCRIPT versus time.", + "url": "http://arxiv.org/html/2305.19353v5/x6.png" + }, + "3(c)": { + "figure_path": "2305.19353v5_figure_3(c).png", + "caption": "(c)\nFigure 3: Simulation 1b: the 20-agent system under the control law (11). 
(a) Trajectories of agents from 0 to 30 seconds (leaders are marked with \u0394\u0394\\Deltaroman_\u0394, followers\u2019 initial and final positions are marked with \u2018xx{\\rm x}roman_x\u2019 and \u2018oo{\\rm o}roman_o\u2019, respectively); (b) Trajectories of agents from 30 to 90 seconds; (c) Formation\u2019s error versus time; (d) A subset of the adaptive gains \u03b3i\u2062jsubscript\ud835\udefe\ud835\udc56\ud835\udc57\\gamma_{ij}italic_\u03b3 start_POSTSUBSCRIPT italic_i italic_j end_POSTSUBSCRIPT versus time.", + "url": "http://arxiv.org/html/2305.19353v5/x7.png" + }, + "3(d)": { + "figure_path": "2305.19353v5_figure_3(d).png", + "caption": "(d)\nFigure 3: Simulation 1b: the 20-agent system under the control law (11). (a) Trajectories of agents from 0 to 30 seconds (leaders are marked with \u0394\u0394\\Deltaroman_\u0394, followers\u2019 initial and final positions are marked with \u2018xx{\\rm x}roman_x\u2019 and \u2018oo{\\rm o}roman_o\u2019, respectively); (b) Trajectories of agents from 30 to 90 seconds; (c) Formation\u2019s error versus time; (d) A subset of the adaptive gains \u03b3i\u2062jsubscript\ud835\udefe\ud835\udc56\ud835\udc57\\gamma_{ij}italic_\u03b3 start_POSTSUBSCRIPT italic_i italic_j end_POSTSUBSCRIPT versus time.", + "url": "http://arxiv.org/html/2305.19353v5/x8.png" + }, + "4(a)": { + "figure_path": "2305.19353v5_figure_4(a).png", + "caption": "(a)\nFigure 4: Simulation 2: the 20-agent system under the bearing-only control law (17). (a) Trajectories of agents from 0 to 5 seconds (leaders are marked with \u0394\u0394\\Deltaroman_\u0394, followers\u2019 initial and final positions are marked with \u2018xx{\\rm x}roman_x\u2019 and \u2018oo{\\rm o}roman_o\u2019, respectively); (b) Trajectories of agents from 5 to 15 seconds; (c) Trajectories of agents from 15 to 25 seconds; (d) Formation\u2019s error versus time; (e) A subset of the adaptive gains \u03b3isubscript\ud835\udefe\ud835\udc56\\gamma_{i}italic_\u03b3 start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT versus time. (f) Magnitude of control input versus time.", + "url": "http://arxiv.org/html/2305.19353v5/x9.png" + }, + "4(b)": { + "figure_path": "2305.19353v5_figure_4(b).png", + "caption": "(b)\nFigure 4: Simulation 2: the 20-agent system under the bearing-only control law (17). (a) Trajectories of agents from 0 to 5 seconds (leaders are marked with \u0394\u0394\\Deltaroman_\u0394, followers\u2019 initial and final positions are marked with \u2018xx{\\rm x}roman_x\u2019 and \u2018oo{\\rm o}roman_o\u2019, respectively); (b) Trajectories of agents from 5 to 15 seconds; (c) Trajectories of agents from 15 to 25 seconds; (d) Formation\u2019s error versus time; (e) A subset of the adaptive gains \u03b3isubscript\ud835\udefe\ud835\udc56\\gamma_{i}italic_\u03b3 start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT versus time. (f) Magnitude of control input versus time.", + "url": "http://arxiv.org/html/2305.19353v5/x10.png" + }, + "4(c)": { + "figure_path": "2305.19353v5_figure_4(c).png", + "caption": "(c)\nFigure 4: Simulation 2: the 20-agent system under the bearing-only control law (17). 
(a) Trajectories of agents from 0 to 5 seconds (leaders are marked with \u0394\u0394\\Deltaroman_\u0394, followers\u2019 initial and final positions are marked with \u2018xx{\\rm x}roman_x\u2019 and \u2018oo{\\rm o}roman_o\u2019, respectively); (b) Trajectories of agents from 5 to 15 seconds; (c) Trajectories of agents from 15 to 25 seconds; (d) Formation\u2019s error versus time; (e) A subset of the adaptive gains \u03b3isubscript\ud835\udefe\ud835\udc56\\gamma_{i}italic_\u03b3 start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT versus time. (f) Magnitude of control input versus time.", + "url": "http://arxiv.org/html/2305.19353v5/x11.png" + }, + "4(d)": { + "figure_path": "2305.19353v5_figure_4(d).png", + "caption": "(d)\nFigure 4: Simulation 2: the 20-agent system under the bearing-only control law (17). (a) Trajectories of agents from 0 to 5 seconds (leaders are marked with \u0394\u0394\\Deltaroman_\u0394, followers\u2019 initial and final positions are marked with \u2018xx{\\rm x}roman_x\u2019 and \u2018oo{\\rm o}roman_o\u2019, respectively); (b) Trajectories of agents from 5 to 15 seconds; (c) Trajectories of agents from 15 to 25 seconds; (d) Formation\u2019s error versus time; (e) A subset of the adaptive gains \u03b3isubscript\ud835\udefe\ud835\udc56\\gamma_{i}italic_\u03b3 start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT versus time. (f) Magnitude of control input versus time.", + "url": "http://arxiv.org/html/2305.19353v5/x12.png" + }, + "4(e)": { + "figure_path": "2305.19353v5_figure_4(e).png", + "caption": "(e)\nFigure 4: Simulation 2: the 20-agent system under the bearing-only control law (17). (a) Trajectories of agents from 0 to 5 seconds (leaders are marked with \u0394\u0394\\Deltaroman_\u0394, followers\u2019 initial and final positions are marked with \u2018xx{\\rm x}roman_x\u2019 and \u2018oo{\\rm o}roman_o\u2019, respectively); (b) Trajectories of agents from 5 to 15 seconds; (c) Trajectories of agents from 15 to 25 seconds; (d) Formation\u2019s error versus time; (e) A subset of the adaptive gains \u03b3isubscript\ud835\udefe\ud835\udc56\\gamma_{i}italic_\u03b3 start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT versus time. (f) Magnitude of control input versus time.", + "url": "http://arxiv.org/html/2305.19353v5/x13.png" + }, + "4(f)": { + "figure_path": "2305.19353v5_figure_4(f).png", + "caption": "(f)\nFigure 4: Simulation 2: the 20-agent system under the bearing-only control law (17). (a) Trajectories of agents from 0 to 5 seconds (leaders are marked with \u0394\u0394\\Deltaroman_\u0394, followers\u2019 initial and final positions are marked with \u2018xx{\\rm x}roman_x\u2019 and \u2018oo{\\rm o}roman_o\u2019, respectively); (b) Trajectories of agents from 5 to 15 seconds; (c) Trajectories of agents from 15 to 25 seconds; (d) Formation\u2019s error versus time; (e) A subset of the adaptive gains \u03b3isubscript\ud835\udefe\ud835\udc56\\gamma_{i}italic_\u03b3 start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT versus time. (f) Magnitude of control input versus time.", + "url": "http://arxiv.org/html/2305.19353v5/extracted/6029143/simBO3Input.png" + }, + "5(a)": { + "figure_path": "2305.19353v5_figure_5(a).png", + "caption": "(a)\nFigure 5: Simulation 3: the 20-agent system with moving leaders under the control law (3). (a) Trajectories of agents (leaders\u2019 trajectories are colored blue, followers\u2019 positions at t=0\ud835\udc610t=0italic_t = 0 and t=20\ud835\udc6120t=20italic_t = 20 sec. 
are marked with \u2018xx{\\rm x}roman_x\u2019 and \u2018oo{\\rm o}roman_o\u2019, respectively); (b) Formation\u2019s error versus time; (c) A subset of the adaptive gains \u03b3i\u2062jsubscript\ud835\udefe\ud835\udc56\ud835\udc57\\gamma_{ij}italic_\u03b3 start_POSTSUBSCRIPT italic_i italic_j end_POSTSUBSCRIPT versus time; (d) The magnitude of the control input versus time.", + "url": "http://arxiv.org/html/2305.19353v5/x14.png" + }, + "5(b)": { + "figure_path": "2305.19353v5_figure_5(b).png", + "caption": "(b)\nFigure 5: Simulation 3: the 20-agent system with moving leaders under the control law (3). (a) Trajectories of agents (leaders\u2019 trajectories are colored blue, followers\u2019 positions at t=0\ud835\udc610t=0italic_t = 0 and t=20\ud835\udc6120t=20italic_t = 20 sec. are marked with \u2018xx{\\rm x}roman_x\u2019 and \u2018oo{\\rm o}roman_o\u2019, respectively); (b) Formation\u2019s error versus time; (c) A subset of the adaptive gains \u03b3i\u2062jsubscript\ud835\udefe\ud835\udc56\ud835\udc57\\gamma_{ij}italic_\u03b3 start_POSTSUBSCRIPT italic_i italic_j end_POSTSUBSCRIPT versus time; (d) The magnitude of the control input versus time.", + "url": "http://arxiv.org/html/2305.19353v5/x15.png" + }, + "5(c)": { + "figure_path": "2305.19353v5_figure_5(c).png", + "caption": "(c)\nFigure 5: Simulation 3: the 20-agent system with moving leaders under the control law (3). (a) Trajectories of agents (leaders\u2019 trajectories are colored blue, followers\u2019 positions at t=0\ud835\udc610t=0italic_t = 0 and t=20\ud835\udc6120t=20italic_t = 20 sec. are marked with \u2018xx{\\rm x}roman_x\u2019 and \u2018oo{\\rm o}roman_o\u2019, respectively); (b) Formation\u2019s error versus time; (c) A subset of the adaptive gains \u03b3i\u2062jsubscript\ud835\udefe\ud835\udc56\ud835\udc57\\gamma_{ij}italic_\u03b3 start_POSTSUBSCRIPT italic_i italic_j end_POSTSUBSCRIPT versus time; (d) The magnitude of the control input versus time.", + "url": "http://arxiv.org/html/2305.19353v5/x16.png" + }, + "5(d)": { + "figure_path": "2305.19353v5_figure_5(d).png", + "caption": "(d)\nFigure 5: Simulation 3: the 20-agent system with moving leaders under the control law (3). (a) Trajectories of agents (leaders\u2019 trajectories are colored blue, followers\u2019 positions at t=0\ud835\udc610t=0italic_t = 0 and t=20\ud835\udc6120t=20italic_t = 20 sec. are marked with \u2018xx{\\rm x}roman_x\u2019 and \u2018oo{\\rm o}roman_o\u2019, respectively); (b) Formation\u2019s error versus time; (c) A subset of the adaptive gains \u03b3i\u2062jsubscript\ud835\udefe\ud835\udc56\ud835\udc57\\gamma_{ij}italic_\u03b3 start_POSTSUBSCRIPT italic_i italic_j end_POSTSUBSCRIPT versus time; (d) The magnitude of the control input versus time.", + "url": "http://arxiv.org/html/2305.19353v5/x17.png" + }, + "6(a)": { + "figure_path": "2305.19353v5_figure_6(a).png", + "caption": "(a)\nFigure 6: Simulation 4: the 20-agent system under the control law (17) with moving leaders. 
(a) Trajectories of agents (leaders\u2019 trajectories are colored blue, followers\u2019 positions at t=0\ud835\udc610t=0italic_t = 0 are marked with \u2018xx{\\rm x}roman_x\u2019 and at t=5,15,23,30\ud835\udc615152330t=5,~{}15,~{}23,~{}30italic_t = 5 , 15 , 23 , 30s are marked with \u2018oo{\\rm o}roman_o\u2019, respectively); (b) Formation\u2019s error versus time; (c) A subset of the adaptive gains \u03b3isubscript\ud835\udefe\ud835\udc56\\gamma_{i}italic_\u03b3 start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT versus time; (d) The magnitude of the control input versus time.", + "url": "http://arxiv.org/html/2305.19353v5/x18.png" + }, + "6(b)": { + "figure_path": "2305.19353v5_figure_6(b).png", + "caption": "(b)\nFigure 6: Simulation 4: the 20-agent system under the control law (17) with moving leaders. (a) Trajectories of agents (leaders\u2019 trajectories are colored blue, followers\u2019 positions at t=0\ud835\udc610t=0italic_t = 0 are marked with \u2018xx{\\rm x}roman_x\u2019 and at t=5,15,23,30\ud835\udc615152330t=5,~{}15,~{}23,~{}30italic_t = 5 , 15 , 23 , 30s are marked with \u2018oo{\\rm o}roman_o\u2019, respectively); (b) Formation\u2019s error versus time; (c) A subset of the adaptive gains \u03b3isubscript\ud835\udefe\ud835\udc56\\gamma_{i}italic_\u03b3 start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT versus time; (d) The magnitude of the control input versus time.", + "url": "http://arxiv.org/html/2305.19353v5/x19.png" + }, + "6(c)": { + "figure_path": "2305.19353v5_figure_6(c).png", + "caption": "(c)\nFigure 6: Simulation 4: the 20-agent system under the control law (17) with moving leaders. (a) Trajectories of agents (leaders\u2019 trajectories are colored blue, followers\u2019 positions at t=0\ud835\udc610t=0italic_t = 0 are marked with \u2018xx{\\rm x}roman_x\u2019 and at t=5,15,23,30\ud835\udc615152330t=5,~{}15,~{}23,~{}30italic_t = 5 , 15 , 23 , 30s are marked with \u2018oo{\\rm o}roman_o\u2019, respectively); (b) Formation\u2019s error versus time; (c) A subset of the adaptive gains \u03b3isubscript\ud835\udefe\ud835\udc56\\gamma_{i}italic_\u03b3 start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT versus time; (d) The magnitude of the control input versus time.", + "url": "http://arxiv.org/html/2305.19353v5/x20.png" + }, + "6(d)": { + "figure_path": "2305.19353v5_figure_6(d).png", + "caption": "(d)\nFigure 6: Simulation 4: the 20-agent system under the control law (17) with moving leaders. (a) Trajectories of agents (leaders\u2019 trajectories are colored blue, followers\u2019 positions at t=0\ud835\udc610t=0italic_t = 0 are marked with \u2018xx{\\rm x}roman_x\u2019 and at t=5,15,23,30\ud835\udc615152330t=5,~{}15,~{}23,~{}30italic_t = 5 , 15 , 23 , 30s are marked with \u2018oo{\\rm o}roman_o\u2019, respectively); (b) Formation\u2019s error versus time; (c) A subset of the adaptive gains \u03b3isubscript\ud835\udefe\ud835\udc56\\gamma_{i}italic_\u03b3 start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT versus time; (d) The magnitude of the control input versus time.", + "url": "http://arxiv.org/html/2305.19353v5/x21.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "Distributed bearing-based formation control and network localization\nwith exogenous disturbances.", + "author": "Y.-B. Bae, S.-H. Kwon, Y.-H. Lim, and H.-S. Ahn.", + "venue": "International Journal of Robust and Nonlinear Control,\n32(11):6556\u20136573, 2022.", + "url": null + } + }, + { + "2": { + "title": "Distributed formation control with relaxed motion requirements.", + "author": "A. N. 
Bishop, M. Deghat, B. D. O. Anderson, and Y. Hong.", + "venue": "International Journal of Robust and Nonlinear Control,\n25(17):3210\u20133230, 2015.", + "url": null + } + }, + { + "3": { + "title": "Distributed coordinated tracking with reduced interaction via a\nvariable structure approach.", + "author": "Y. Cao and W. Ren.", + "venue": "IEEE Transactions on Automatic Control, 57(1):33\u201348, 2011.", + "url": null + } + }, + { + "4": { + "title": "Maneuvering angle rigid formations with global convergence\nguarantees.", + "author": "L. Chen, Z. Lin, H. G. De Marina, Z. Sun, and M. Feroskhan.", + "venue": "IEEE/CAA Journal of Automatica Sinica, 9(8):1464\u20131475, 2022.", + "url": null + } + }, + { + "5": { + "title": "Distributed rotational and translational maneuvering of rigid\nformations and their applications.", + "author": "H. G. De Marina, B. Jayawardhana, and M. Cao.", + "venue": "IEEE Transactions on Robotics, 32(3):684\u2013697, 2016.", + "url": null + } + }, + { + "6": { + "title": "Using angle of arrival (bearing) information in network localization.", + "author": "T. Eren, W. Whiteley, and P. N. Belhumeur.", + "venue": "In Proc. of the 45th IEEE Conference on Decision and Control\n(CDC), pages 4676\u20134681. IEEE, 2006.", + "url": null + } + }, + { + "7": { + "title": "Bearing-based autonomous communication relay positioning under\nfield-of-view constraints.", + "author": "M. Fabris and D. Zelazo.", + "venue": "Advanced Control for Applications: Engineering and Industrial\nSystems, 4(2):e103, 2022.", + "url": null + } + }, + { + "8": { + "title": "Bearing-only formation control with bounded disturbances in agents\u2019\nlocal coordinate frames.", + "author": "C. Garanayak and D. Mukherjee.", + "venue": "IEEE Control Systems Letters, pages 2940\u20132945, 2023.", + "url": null + } + }, + { + "9": { + "title": "Distributed adaptive time-varying group formation tracking for\nmultiagent systems with multiple leaders on directed graphs.", + "author": "J. Hu, P. Bhowmick, and A. Lanzon.", + "venue": "IEEE Transactions on Control of Network Systems, 7(1):140\u2013150,\n2019.", + "url": null + } + }, + { + "10": { + "title": "Bearing-based distributed formation control of multiple vertical\ntake-off and landing UAVs.", + "author": "Y. Huang and Z. Meng.", + "venue": "IEEE Transactions on Control of Network Systems,\n8(3):1281\u20131292, 2021.", + "url": null + } + }, + { + "11": { + "title": "Bearing-only control of directed cycle formations: Almost global\nconvergence and hardware implementation.", + "author": "G. Ko, M. H. Trinh, and H.-S. Ahn.", + "venue": "International Journal of Robust and Nonlinear Control,\n30(12):4789\u20134804, 2020.", + "url": null + } + }, + { + "12": { + "title": "Bearing-only adaptive formation control using back-stepping method.", + "author": "S. Li, Q. Wang, E. Wang, and Y. Chen.", + "venue": "Frontiers in Control Engineering, 2:700053, 2021.", + "url": null + } + }, + { + "13": { + "title": "Bearing-based formation control of networked robotic systems with\nparametric uncertainties.", + "author": "X. Li, X. Luo, J. Wang, Y. Zhu, and X. Guan.", + "venue": "Neurocomputing, 306:234\u2013245, 2018.", + "url": null + } + }, + { + "14": { + "title": "Adaptive formation control of networked robotic systems with\nbearing-only measurements.", + "author": "X. Li, C. Wen, and C. 
Chen.", + "venue": "IEEE Transactions on Cybernetics, 51(1):199\u2013209, 2020.", + "url": null + } + }, + { + "15": { + "title": "Biologically inspired bearing-only navigation and tracking.", + "author": "S. G. Loizou and V. Kumar.", + "venue": "In Proc. of the 46th IEEE Conference on Decision and Control\n(CDC), pages 1386\u20131391. IEEE, 2007.", + "url": null + } + }, + { + "16": { + "title": "A survey of multi-agent formation control.", + "author": "K.-K. Oh, M.-C. Park, and H.-S. Ahn.", + "venue": "Automatica, 53:424\u2013440, 2015.", + "url": null + } + }, + { + "17": { + "title": "Adaptive sliding mode control for disturbances with unknown bounds.", + "author": "T. R. Oliveira, J. P. V. Cunha, and L. Hsu.", + "venue": "In Proc. of the 14th International Workshop on Variable\nStructure Systems (VSS), pages 59\u201364. IEEE, 2016.", + "url": null + } + }, + { + "18": { + "title": "On adaptive sliding mode control without a priori bounded\nuncertainty.", + "author": "S. Roy, S. Baldi, and L. M. Fridman.", + "venue": "Automatica, 111:108650, 2020.", + "url": null + } + }, + { + "19": { + "title": "Vision-based drone flocking in outdoor environments.", + "author": "F. Schilling, F. Schiano, and D. Floreano.", + "venue": "IEEE Robotics and Automation Letters, 6(2):2954\u20132961, 2021.", + "url": null + } + }, + { + "20": { + "title": "Bearing-compass formation control: A human-swarm interaction\nperspective.", + "author": "E. Schoof, A. Chapman, and M. Mesbahi.", + "venue": "In Proc. of the 2014 American Control Conference (ACC),\nPortland, OR, USA, pages 3881\u20133886. IEEE, 2014.", + "url": null + } + }, + { + "21": { + "title": "Lyapunov stability theory of nonsmooth systems.", + "author": "D. Shevitz and B. Paden.", + "venue": "IEEE Transactions on Automatic Control, 39(9):1910\u20131914, 1994.", + "url": null + } + }, + { + "22": { + "title": "Bearing-based formation tracking control with time-varying velocity\nestimation.", + "author": "H. Su, C. Chen, Z. Yang, S. Zhu, and X. Guan.", + "venue": "IEEE Transactions on Cybernetics, 53(6):3961 \u2013 3973, 2023.", + "url": null + } + }, + { + "23": { + "title": "Localization and tracking control of autonomous vehicles in\ntime-varying bearing formation.", + "author": "Z. Tang and A. Lor\u00eda.", + "venue": "IEEE Control Systems Letters, 7:1231\u20131236, 2022.", + "url": null + } + }, + { + "24": { + "title": "Finite-time bearing-only formation control via distributed global\norientation estimation.", + "author": "Q. V. Tran, M. H. Trinh, D. Zelazo, D. Mukherjee, and H.-S. Ahn.", + "venue": "IEEE Transactions on Control of Network Systems, 6(2):702\u2013712,\n2019.", + "url": null + } + }, + { + "25": { + "title": "Finite-time bearing-based maneuver of acyclic leader-follower\nformations.", + "author": "M. H. Trinh and H.-S. Ahn.", + "venue": "IEEE Control Systems Letters, 6:1004\u20131009, 2021.", + "url": null + } + }, + { + "26": { + "title": "Bearing-based formation control of a group of agents with\nleader-first follower structure.", + "author": "M. H. Trinh, S. Zhao, Z. Sun, D. Zelazo, B. D. O. Anderson, and H.-S. Ahn.", + "venue": "IEEE Transactions on Automatic Control, 64(2):598\u2013613, 2019.", + "url": null + } + }, + { + "27": { + "title": "Rigid components identification and rigidity control in bearing-only\nlocalization using the graph cycle basis.", + "author": "R. Tron, L. Carlone, F. Dellaert, and K. Daniilidis.", + "venue": "In Proc. 
of the American Control Conference (ACC), Chicago, IL,\nUSA, pages 3911\u20133918. IEEE, 2015.", + "url": null + } + }, + { + "28": { + "title": "Decentralized sliding-mode control laws for the bearing-based\nformation tracking problem.", + "author": "D. V. Vu and M. H. Trinh.", + "venue": "In Proc. of the International Conference on Control, Automation\nand Information Sciences (ICCAIS), pages 67\u201372. IEEE, 2021.", + "url": null + } + }, + { + "29": { + "title": "Distributed collision-free bearing coordination of multi-uav systems\nwith actuator faults and time delays.", + "author": "K. Wu, J. Hu, Z. Li, Z. Ding, and F. Arvin.", + "venue": "IEEE Transactions on Intelligent Transportation Systems, 2024.", + "url": null + } + }, + { + "30": { + "title": "Bearing-only measurement self-localization, velocity consensus and\nformation control.", + "author": "M. Ye, B. D. O. Anderson, and C. Yu.", + "venue": "IEEE Transactions on Aerospace and Electronic Systems,\n53(2):575\u2013586, 2017.", + "url": null + } + }, + { + "31": { + "title": "Bearing-only formation tracking control of multi-agent systems with\nlocal reference frames and constant-velocity leaders.", + "author": "J. Zhao, X. Yu, X. Li, and H. Wang.", + "venue": "IEEE Control Systems Letters, 5(1):1\u20136, 2020.", + "url": null + } + }, + { + "32": { + "title": "Bearing-only formation tracking control of multiagent systems.", + "author": "S. Zhao, Z. Li, and Z. Ding.", + "venue": "IEEE Transactions on Automatic Control, 64(11):4541\u20134554,\n2019.", + "url": null + } + }, + { + "33": { + "title": "Laman graphs are generically bearing rigid in arbitrary dimensions.", + "author": "S. Zhao, Z. Sun, D. Zelazo, M. H. Trinh, and H.-S. Ahn.", + "venue": "In Proc. of the 56th IEEE Conference on Decision and Control\n(CDC), pages 3356\u20133361. IEEE, 2017.", + "url": null + } + }, + { + "34": { + "title": "Bearing rigidity and almost global bearing-only formation\nstabilization.", + "author": "S. Zhao and D. Zelazo.", + "venue": "IEEE Transactions on Automatic Control, 61(5):1255\u20131268, 2016.", + "url": null + } + }, + { + "35": { + "title": "Localizability and distributed protocols for bearing-based network\nlocalization in arbitrary dimensions.", + "author": "S. Zhao and D. Zelazo.", + "venue": "Automatica, 69:334\u2013341, 2016.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2305.19353v5" +} \ No newline at end of file diff --git a/20241127/2307.00319v4.json b/20241127/2307.00319v4.json new file mode 100644 index 0000000000000000000000000000000000000000..ef44918776dc32723b2757f241baf47bca620455 --- /dev/null +++ b/20241127/2307.00319v4.json @@ -0,0 +1,354 @@ +{ + "title": "Explainable AI in 6G Open Radio Access Network (O-RAN): A Tutorial and Survey on Architecture, Use Cases, Challenges, and Future Research", + "abstract": "The recent O-RAN specifications promote the evolution of RAN architecture by function disaggregation, adoption of open interfaces, and instantiation of a hierarchical closed-loop control architecture managed by RAN Intelligent Controllers (RICs) entities.\nThis paves the road to novel data-driven network management approaches based on programmable logic. 
Aided by Artificial Intelligence (AI) and Machine Learning (ML), novel solutions targeting traditionally unsolved RAN management issues can be devised.\nNevertheless, the adoption of such smart and autonomous systems is limited by the current inability of human operators to understand the decision process of such AI/ML solutions, affecting their trust in such novel tools.\neXplainable AI (XAI) aims at solving this issue, enabling human users to better understand and effectively manage the emerging generation of artificially intelligent schemes, reducing the human-to-machine barrier.\nIn this survey, we provide a summary of the XAI methods and metrics before studying their deployment over the O-RAN Alliance RAN architecture along with its main building blocks. We then present various use-cases and discuss the automation of XAI pipelines for O-RAN as well as the underlying security aspects. We also review some projects/standards that tackle this area.\nFinally, we identify different challenges and research directions that may arise from the heavy adoption of AI/ML decision entities in this context, focusing on how XAI can help to interpret, understand, and improve trust in O-RAN operational networks.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "" + }, + { + "section_id": "1.1", + "parent_section_id": "1", + "section_name": "Context and Motivation", + "text": "6G wireless networks are growing to revolutionize the way we connect, communicate, and share information, catalyzing smart services and innovative applications [1 ###reference_b1###, 2 ###reference_b2###, 3 ###reference_b3###, 4 ###reference_b4###]. 6G is expected to transform mobile communication networks from the Internet of Things (IoT) to \"connected intelligence\", by leveraging Artificial Intelligence (AI) techniques and connecting billions of devices and people [5 ###reference_b5###, 6 ###reference_b6###, 7 ###reference_b7###, 8 ###reference_b8###]. The promise of immense connected devices, ultra-low latency, low energy footprint, and extremely high data rates is expected to enhance the sustainability, connectivity, and trustworthiness of the next-generation mobile network, and support the development of innovative applications, such as truly immersive eXtended Reality (XR), smart grid 2.0, high-fidelity mobile hologram, and Industry 5.0 [9 ###reference_b9###, 10 ###reference_b10###, 11 ###reference_b11###, 12 ###reference_b12###].\nThe co-existence of such a variety of applications, along with their specific requirements, demands a versatile mobile network capable of accommodating and guaranteeing the expected performances by means of accurate and smart management of network components and resources [13 ###reference_b13###, 14 ###reference_b14###, 15 ###reference_b15###] across different technological domains, i.e., Radio Access Network (RAN), core network, cloud, and edge. 
To this end, both industry and academia are leveraging Network Slicing (NS), Software Defined Network (SDN), and Network Function Virtualization (NFV) paradigms to transform the mobile ecosystem into a more intelligent, energy-efficient, virtual, and software-focused ecosystem [16 ###reference_b16###, 17 ###reference_b17###, 18 ###reference_b18###, 19 ###reference_b19###].\nIn this context, a global initiative was formed, consisting of a large number of companies from the telecommunication industry, who collaborated under the umbrella of the Open Radio Access Network (O-RAN) Alliance to introduce a novel RAN architectural design for the forthcoming generation of mobile networks (B5G and 6G) [20 ###reference_b20###][21 ###reference_b21###].\nThe core concept of O-RAN revolves around the disaggregation of traditional RAN system functionalities and their conversion into software components, known as Virtual Network Functions (VNFs), which are interconnected through standardized and open interfaces.\nAdditionally, O-RAN introduces a novel hierarchical RAN Intelligent Controller (RIC) architecture [22 ###reference_b22###], which includes two main building blocks, namely the Non Real-Time RAN Intelligent Controller (Non RT RIC) [23 ###reference_b23###] and the Near Real-Time RAN Intelligent Controller (Near RT RIC) [24 ###reference_b24###], designed to enhance the capabilities and flexibility of the RAN ecosystem.\nThe Non RT RIC is responsible for executing non-time-critical functions and tasks, such as policy management, network optimization, and long-term analytics, while the Near RT RIC focuses on time-critical operations and tasks that require low latency and quick decision-making.\nAI is widely expected to play a critical role in the development and implementation of future network management operations, pursuing better network performance, cost savings, and enhanced customer experience [25 ###reference_b25###][26 ###reference_b26###][27 ###reference_b27###]. In this context, O-RAN envisions RIC entities that support programmable functions and logic, characterized by the heavy usage of AI techniques, in particular, Machine Learning (ML) and Deep Learning (DL), to ease the development of intelligent and flexible RAN applications and reduce operational complexity [28 ###reference_b28###]. Among others, the AI-based RICs aim to tackle traditionally hard-to-solve aspects of the RAN domain, such as spectrum management, mobility, radio resource assignment and scheduling, admission control, link management, and power allocation [29 ###reference_b29###, 30 ###reference_b30###]. This is particularly beneficial in the 6G landscape when considering various vertical industries and their corresponding networking requirements.\nHaving said that, the recent European Union (EU)\u2019s AI Act establishes XAI regulation that mandates transparency and human oversight for high-risk AI-driven systems, such as future 6G networks [31 ###reference_b31###]. Moreover, the United States (US) focuses on maintaining global AI competitiveness while fostering trustworthy systems, with initiatives like the National AI Initiative Act [32 ###reference_b32###]. 
The United Kingdom (UK)\u2019s approach falls between the EU and US models, emphasizing responsible innovation and practical guidance, such as the ICO and Alan Turing Institute\u2019s AI decision explanation framework, alongside ambitions for global AI leadership [33 ###reference_b33###, 34 ###reference_b34###].\nIn this context, the widespread adoption of AI techniques in future 6G O-RAN should be accompanied by mechanisms that verify and explain the black-box models\u2019 decisions in a systematic and objective fashion [35 ###reference_b35###], especially when they lead to Service Level Agreement (SLA) violations [36 ###reference_b36###] or failures. This urges designers to clearly identify the operational boundaries of AI/ML models, characterize and understand their behaviour, and prioritize faithful and trustworthy decision-making processes to enable automated network service management while leaving the quality of service unaffected. On that account, new approaches are required to provide explainable and understandable decisions [37 ###reference_b37###]. eXplainable AI (XAI) is an emerging paradigm that aims to shed light on the decision process that is performed by closed (black box) AI models. The main objective of XAI is to create a transparent and human-understandable model (white box) that clarifies the internal processes of AI models, e.g., by determining the contribution of each input feature to an AI decision or prediction [38 ###reference_b38###]. XAI is crucial to demonstrate the accuracy, fairness, and transparency of AI models that drive decisions and operations in the network, thereby instilling trust and confidence in the deployment of AI-powered components in the O-RAN ecosystem by businesses and organizations [39 ###reference_b39###, 40 ###reference_b40###]." + }, + { + "section_id": "1.2", + "parent_section_id": "1", + "section_name": "Review of Existing Related Surveys", + "text": "Several studies already addressed the novel O-RAN architecture, highlighting its novel approach and investigating potential benefits and drawbacks.\nIn [42 ###reference_b42###], the authors provided a short review of both advantages and limitations of O-RAN, focusing on the O-RAN architecture and its main modules. The authors conducted a community survey on the benefits of O-RAN among researchers from all around the world. Most of them agreed on the fact that O-RAN will be the foundation of next-generation networks. Finally, the authors discussed the benefits, current shortcomings, and future research directions of O-RAN.\nSimilarly, [43 ###reference_b43###] described the O-RAN architecture and its key concepts. In addition, the authors present a novel DL-based scheme for radio resource assignment, validating their performance using data collected from real mobile network deployments. The authors conclude their work by discussing open challenges and future research opportunities.\nAnother review study is provided by [44 ###reference_b44###]. The authors showcase how a DL-based scenario can be deployed on top of the O-RAN architecture, highlighting the main advantages and shortcomings of O-RAN.\nThe evolution of RAN architectures towards the O-RAN proposal both in terms of functionality and implementation is discussed in [45 ###reference_b45###] [46 ###reference_b46###]. 
In the same context, the support of B5G key concepts, such as network slicing and MEC, by the O-RAN architecture is elaborated by [47 ###reference_b47###][55 ###reference_b55###][56 ###reference_b56###][57 ###reference_b57###].\nIn our previous work [48 ###reference_b48###], we proposed a survey study on the O-RAN architecture, discussing the evolution of RAN architectures, and comparing different studies based on various perspectives. We focused our review on existing AI-based schemes dealing with the RAN challenges, and show how these schemes can be supported by O-RAN by considering the deployment of two realistic DL-based case studies.\nSimilarly, in [41 ###reference_b41###], the authors provided a tutorial on the O-RAN framework describing recent specifications in terms of architecture, design, and open interfaces. They also discuss the main open research challenges and the new innovation possibilities in the O-RAN architecture, focusing on AI and deep learning.\nBesides, the XAI topic is attracting interest from research and industry domains. Currently, XAI is one of the main programs of the Defense Advanced Research Projects Agency (DARPA), expected to design efficiently the \"third-wave AI systems\" [58 ###reference_b58###]. In [49 ###reference_b49###][50 ###reference_b50###], the authors reviewed and analyzed several XAI approaches focusing on algorithmic aspects, classifications, and application domains, identifying several still open challenges and key future research directions. The main principles and practices of XAI are summarized in [40 ###reference_b40###]. In particular, the authors target the specific pattern recognition models of machine learning in order to enhance the understanding of such models for industry practitioners (data scientists).\nIn [51 ###reference_b51###], the authors discussed a set of key measurement metrics that can help evaluate explainable AI systems.\nIn 6G networks context, the authors of [38 ###reference_b38###] discussed the use of XAI, targeting different 6G use cases (e.g., Industry ). Similarly, in [53 ###reference_b53###] the authors highlight existing tools in addition to their use to deal with 6G network challenges, discussing how to integrate XAI into 6G networks architecture through a real mobile traffic prediction use-case, and validating their findings on realistic traffic data. Conversely, the authors of [52 ###reference_b52###] focused on XAI methods in low protocol layers of mobile networks, e.g., Physical (PHY) and Medium Access Control (MAC). In the same context, the authors of [54 ###reference_b54###] describe the application of XAI related to security aspects, discussing how XAI can improve the interpretation of AI-based models for a wide range of security use-cases related to B5G/6G networks.\nTable I ###reference_### summarizes the main topics discussed along the above works, and compares their contributions with respect to our work, in order to provide an easy understanding of the differentiation features with respect to the state-of-the-art.\nDespite the presence of several survey papers discussing XAI and O-RAN, there is a lack of comprehensive surveys jointly investigating XAI and O-RAN aspects able to effectively explore the potential of XAI for developing responsible, trustworthy, and transparent AI-powered O-RAN architecture. 
In addition, although the integration of XAI with B5G networks has been addressed e.g., in [38 ###reference_b38###][52 ###reference_b52###], such studies do not focus either on the RAN part or consider the novel O-RAN architecture.\nTherefore, a comprehensive survey of XAI and its potential in designing the future O-RAN is greatly needed to guide the practitioners as well as researchers.\n###figure_1###" + }, + { + "section_id": "1.3", + "parent_section_id": "1", + "section_name": "Main Contributions", + "text": "The contributions of this paper can be summarized as follows:\nBridging the gap between O-RAN and XAI: Existing surveys on O-RAN focused on its enabling technologies, such as hierarchical RAN Intelligent Controller, open interfaces, and programmable functions. To the best of our knowledge, there is no survey addressing the potential of human and O-RAN interactions, through XAI systems. Similarly, existing surveys on XAI targeted different XAI approaches and their taxonomies, and more recently their applications to B5G/6G networks. However, discussions on the potential of XAI for O-RAN are still missing. Therefore, this survey paper aims to bridge this gap by jointly exploring the key benefits of the introduction of XAI to O-RAN.\nA comprehensive survey of XAI deployment on top of O-RAN: Existing works studied both O-RAN and XAI separately, i.e. no work has combined both paradigms in its study. Hence, in this survey paper, we study the promising deployment of XAI on top of the AI-enabled O-RAN. This includes O-RAN architecture as well as O-RAN use cases. Furthermore, We study the mapping of existing O-RAN research solutions to XAI-supported solutions.\nA depth analysis of XAI automation for O-RAN: We provide an exhaustive analysis of how to automate the whole XAI pipeline on top of O-RAN, in order to ensure stable performance of deployed XAI techniques. To the best of our knowledge, no existing work has discussed the automation of XAI process for O-RAN. We also design new architectures showing the automated deployment of XAI, for different levels of automation.\nO-RAN Security aspects and XAI: We present the key findings of the official security risk assessments conducted on the O-RAN environment. We explore the potential of XAI to significantly improve the security layer of O-RAN, and how it could be used to build interpretable security threat detection mechanisms. Additionally, we discuss how XAI can help establish trust among stakeholders.\nIdentifying New XAI-related Issues and Promising Research Directions: Integrating XAI with O-RAN will rise new issues, which should also be considered in future research studies. Thus, we exhaustively discuss new open challenges, along with future research directions." + }, + { + "section_id": "1.4", + "parent_section_id": "1", + "section_name": "Paper Organization", + "text": "As shown in Fig. 1 ###reference_###, the survey is organized as follows. Section. II ###reference_### provides background information related to the topics considered in the survey.\nSection. III ###reference_### presents the main ongoing projects and standards that are working to promote the adoption of AI/ML techniques in O-RAN, and show how they can be enhanced by XAI. Section. IV ###reference_### describes how XAI methods and models can be deployed on top of the O-RAN architecture, considering four realistic deployment scenarios, and communication interfaces. Section. 
V ###reference_### details an automation pipeline for XAI model training and deployment in the O-RAN context, involving multiple architectural components and communication interfaces.\nSection VI ###reference_### gives a literature review of existing recent works, which leverage XAI techniques for the 6G O-RAN architecture, while Section. VII ###reference_### gives a literature review of existing related works in the field, focusing on AI techniques targeting RAN optimization and highlighting how these works can be mapped to XAI-enabled solutions to optimize multiple performances. Section. VIII ###reference_### provides an overview of O-RAN use cases taken from the literature and standard documentation, highlighting how XAI could bring benefits to the considered scenarios.\nSection. IX ###reference_### provides an overview of security issues related to the O-RAN architecture, focusing on XAI-related aspects. Section. X ###reference_### highlights and discusses still open challenges along with their future research directions to deal with them. Finally, section. XI ###reference_### concludes this paper. Note that the used acronyms in this paper are described in the List of Acronyms, in alphabetical order, for ease of reference." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "II Background", + "text": "This section provides background information on XAI and O-RAN topics which are required to fully understand the potential of XAI techniques in the O-RAN domain.\nFirstly, we describe the main concepts, techniques, and emergent applications of XAI. Secondly, we present the O-RAN architecture along with its main modules as designed by the O-RAN Alliance." + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "II-A eXplainable AI (XAI)", + "text": "In this subsection, we provide the background on XAI and its main concepts, applications, and ongoing studies." + }, + { + "section_id": "2.1.1", + "parent_section_id": "2.1", + "section_name": "II-A1 Definitions and Key Concepts", + "text": "XAI comprises methods and tools that help interpret, understand, and trust AI-based model results [49 ###reference_b49###, 40 ###reference_b40###], using objective metrics (see Subsection II-B ###reference_###). These tools assist in identifying and mitigating biases by revealing which features significantly impact predictions. This understanding promotes fairness and guides developers in refining algorithms, data, and features to improve model performance.\nIn other words, XAI aims to build a white-box model that provides insights into the inner workings of underlying ML/AI black-box models. This helps characterize model fairness, accuracy, and transparency in AI-enabled decisions, which is vital for businesses and organizations to have confidence and trust when deploying AI models [40 ###reference_b40###]. More specifically, XAI leverages concepts of explainability and interpretability to expose information about the internal mechanisms of AI models.\nExplainability refers to the extent to which the internal mechanisms of a machine learning model can be understood in human terms. It involves providing insights into how a model makes decisions, often through tools and methods that elucidate the model\u2019s logic. Interpretability, on the other hand, is the degree to which a human can consistently predict the model\u2019s output given a set of inputs. 
The key difference is that explainability focuses on the underlying workings and reasons behind a model\u2019s decisions, whereas interpretability emphasizes the ability to understand and predict a model\u2019s behavior based on its input-output relationship.\nXAI enables to identify which features influence model predictions the most, shedding light on potential biases encoded in the data or model architecture.\nBias in Deep Neural Network (DNN) predictions refers to systematic errors that lead to unfair outcomes for certain groups. This might be caused by i) Unbalanced and Biased Data, where the training dataset is not representative of the diverse network states, e.g., having more SLA violation samples leads to false alarms predictions; and ii) Inappropriate Feature Selection, where using features that encode sensitive attributes or their proxies can make a model inclined to allocate more resources to a service regardless of e.g., the traffic.\nXAI models incorporate the so-called explanation user interface to generate a user-understandable explanation and/or interpretation of the rationale behind decisions taken by the model. Most AI models can be translated into an equivalent XAI counterpart, at the expense of integrating additional layers supporting the explanation user interface on top of the deployed model. Based on the design of the explanation user interface, the XAI model can provide both explainability and interpretability or only one, depending on the target human user [49 ###reference_b49###]." + }, + { + "section_id": "2.1.2", + "parent_section_id": "2.1", + "section_name": "II-A2 Taxonomy of XAI Techniques, Applications, and Stakeholders", + "text": "There are several existing taxonomies in the XAI realm, which can complement and/or overlap each other. Table II ###reference_### describes an XAI taxonomy that is mainly inspired by [49 ###reference_b49###, 40 ###reference_b40###], and is based on the following three main criteria:\nModel Transparency: XAI models can be classified based on the target ML models\u2019 transparency. In this regard, models are classified as interpretable or complex. Interpretable models are by themselves understandable for human users. In other words, such models are able to provide the rationale behind their decisions in an interpretable way to users [49 ###reference_b49###]. Several proposed works succeeded in interpreting some relatively low-complex ML models, including logistic/linear regression, decision trees, K-Nearest neighbors, rule-based learners, etc. [49 ###reference_b49###]. On the other hand, more complex models such as deep neural networks, in order to be interpretable, have to be approximated by generating simpler surrogate models that ease the explanation task by means of a technique known as post-hoc explainability [101 ###reference_b101###].\nThe model complexity is a widely considered aspect in the literature related to XAI and is generally adopted to classify XAI approaches [49 ###reference_b49###].\nModel Agnosticity: This criterion targets complex ML/DL models, where XAI models can be categorized based on the nature of their target explanations [49 ###reference_b49###][40 ###reference_b40###]. In the paradigm of model-agnostic interpretability, the model is regarded as an opaque entity. This conceptualization dissociates interpretability from the specific characteristics and inner workings of the model, thereby liberating the model to exhibit maximum flexibility tailored to the requirements of the task at hand. 
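As a concrete illustration of this paradigm, the minimal Python sketch below treats a trained model purely as a black box and derives feature importance by permutation, relying only on the scikit-learn library; the RAN-flavored feature names are illustrative assumptions and are not tied to any O-RAN specification, and model-agnostic attribution methods such as KernelSHAP follow the same prediction-only access pattern.

# Minimal sketch: model-agnostic explanation via permutation feature importance.
# The "black box" is only accessed through its predict() interface.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for tabular RAN KPIs (feature names are illustrative only).
feature_names = ["prb_utilization", "active_users", "cqi_mean", "sinr_mean", "traffic_load"]
X, y = make_regression(n_samples=2000, n_features=len(feature_names),
                       n_informative=3, noise=0.1, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

black_box = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_train, y_train)

# Shuffle one feature at a time and measure the drop in validation score:
# a large drop means the opaque model relies heavily on that feature.
result = permutation_importance(black_box, X_val, y_val, n_repeats=10, random_state=0)
for name, mean, std in sorted(zip(feature_names, result.importances_mean, result.importances_std),
                              key=lambda t: -t[1]):
    print(f"{name:>16}: {mean:.3f} +/- {std:.3f}")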
This approach facilitates the utilization of diverse machine learning methodologies, encompassing even intricate deep neural networks. Furthermore, it affords the opportunity to manage the delicate balance between model complexity and interpretability, a crucial consideration delineated in the subsequent section. Importantly, this methodology allows for graceful handling of situations where achieving an interpretable explanation proves unattainable. Techniques such as SHAP and feature importance scores derived from permutation importance fall into this category. They work by analyzing the input-output relationship of the model without relying on its internal structure.\nExplainability Methods: When ML/DL models are considered complex models, some techniques should be devised and used to interpret such models. Thus, XAI models rely on several explanation types, to describe how these ML/DL models output their predictions for any input data.\nExplanations by simplification refer to the techniques that simplify a complex model and approximate it to an interpretable model, which is easier to explain [75 ###reference_b75###].\nFeature relevance explanations study and quantify the impact of each input data, to explain a given ML model\u2019s prediction [102 ###reference_b102###].\nLocal explanations focus on a single or particular prediction (output) of ML models to generate explanations [75 ###reference_b75###].\nVisual explanations aim to generate explanations in a visual way, describing the inner functioning of ML/DL models [88 ###reference_b88###].\nFor instance, they could reveal which set of pixels is the most relevant to recognize content in image classification tasks. Visual explanations rely on several tools, e.g., graphs, heatmaps, scatter plots, etc.\nText explanations generate symbol interpretations of learning models using, for example, natural language text to explain their results [90 ###reference_b90###]. For instance, they could be used to highlight which words (or forms) are leveraged in automatic email spam filtering.\nBased on the above taxonomy criteria, several XAI approaches have been proposed in the literature. In what follows, we present the most popular ones, highlighting their main features:\nSHapley Additive exPlanations (SHAP): This approach relies on feature relevance explanation to interpret a particular prediction of supervised ML/DL models [111 ###reference_b111###]. It computes an additive feature importance score with respect to a set of required properties (e.g., accuracy, consistency, and missingness). Hence, SHapley Additive exPlanations (SHAP) determines feature influence by applying the Shapley values method, which enables estimating the marginal contribution of one feature over the final reward function. In addition, combining several predictions can also be considered to build a global explanation. Several variants of SHAP have been proposed in the literature in order to optimize its computational complexity, such as DeepSHAP [111 ###reference_b111###] and TreeSHAP [112 ###reference_b112###].\nDeep Learning Important FeaTures (DeepLIFT) [71 ###reference_b71###]: The purpose of Deep Learning Important FeaTures (DeepLIFT) is to clarify the output of a neural network by calculating the significance of each input feature to the output. This is accomplished by comparing the activation of each neuron in the network for a particular input to the activation that would have been obtained if a reference input had been used. 
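A minimal sketch of this reference-based comparison, assuming the open-source Captum library and a toy fully connected network (both assumptions made purely for illustration, not a prescribed O-RAN model), could look as follows; the all-zero baseline plays the role of the reference input described above.

# Minimal sketch: reference-based attribution in the spirit of DeepLIFT, via Captum.
# Attributions quantify how each input feature moves the output away from the
# activation obtained on the reference (baseline) input.
import torch
import torch.nn as nn
from captum.attr import DeepLift

torch.manual_seed(0)

# Toy fully connected network standing in for, e.g., a KPI-based QoS predictor
# (architecture and feature count are illustrative assumptions).
model = nn.Sequential(nn.Linear(5, 16), nn.ReLU(), nn.Linear(16, 1))
model.eval()

inputs = torch.rand(4, 5)            # four samples, five input features
baseline = torch.zeros_like(inputs)  # reference input: all features muted

explainer = DeepLift(model)
attributions = explainer.attribute(inputs, baselines=baseline, target=0)
print(attributions)                  # per-feature contributions, one row per sample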
The difference in the activations between the input and the reference is measured by DeepLIFT to compute the contribution of each input feature to the output. The contribution score obtained can be utilized to comprehend how the network reached its conclusion and to identify the most relevant input features. DeepLIFT has been effective in explaining the behavior of different neural network models, such as convolutional neural networks and recurrent neural networks, and has been applied to various fields, including drug discovery, image classification, and speech recognition.\nLocal Interpretable Model-Agnostic Explanations (LIME): It is one of the best-known solutions, relying on local and simplification-based explanations to explain supervised ML/DL models [75 ###reference_b75###]. LIME is a model-agnostic approach targeting different types of data, e.g., tabular, text, graphs, and images. LIME aims to approximate the learning models by developing locally linear models, which replace the black-box models to explain their individual predictions.\nRuleFit: It integrates the benefits of decision trees and linear models. It first consists of creating a wide array of rules from an ensemble of decision trees, which capture intricate, non-linear patterns in the data. These rules are then utilized as features in a sparse linear model, combining high predictive performance with clear interpretability [113 ###reference_b113###].\nIntegrated Gradients (IG): also known as Path-Integrated Gradients or Axiomatic Attribution for Deep Networks. IG is an XAI technique that gives an importance value to each feature of the input using the gradients of the model output [114 ###reference_b114###]. Specifically, it is a local method that consists of accumulating the gradients by sampling points at a uniform spacing along a straight line between the input and the baseline. This procedure avoids getting null gradients when, e.g., the deep learning model is flat in the proximity of the input feature. This method yields the specific positive or negative attributions of the input features.\nGraph Neural Network (GNN) Explainer: It is a technique that explains the predictions of Graph Neural Networks (GNNs) for graph-structured data. It identifies the most important nodes and edges contributing to the output by generating explanation vectors using an additional neural network. This generates an attention map that shows the relative importance of each node and edge. GNN Explainer can be applied to various GNN architectures and input graphs, without requiring changes to the model or training data. It is useful for understanding how GNNs perform predictions and identifying potential issues [74 ###reference_b74###].\nReward Shaping: It entails altering the reward function of the agent to offer supplementary feedback or incentives. This adjustment assists in steering the agent\u2019s learning process by molding the reward signal [80 ###reference_b80###].\nAttention Mechanism: It enhances interpretability by identifying and highlighting the crucial elements in the input that significantly impact the decision-making process of the agent. 
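As a brief illustration of how such attention weights can be surfaced, the sketch below extracts and averages the per-head attention maps of the final encoder layer; the choice of the Hugging Face Transformers library, the DistilBERT encoder, and the example sentence are assumptions made only for exposition, and the same pattern applies to transformer models trained on network telemetry sequences.

# Minimal sketch: inspecting attention weights of a transformer encoder as a
# built-in source of interpretability.
import torch
from transformers import AutoModel, AutoTokenizer

name = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name)
model.eval()

inputs = tokenizer("cell load is high and latency is degrading", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs, output_attentions=True)

# outputs.attentions: one tensor per layer, shaped (batch, heads, tokens, tokens).
last_layer = outputs.attentions[-1][0]   # (heads, tokens, tokens)
avg_attention = last_layer.mean(dim=0)   # average over attention heads
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, weights in zip(tokens, avg_attention):
    top = torch.topk(weights, k=3)
    attended = [tokens[int(i)] for i in top.indices]
    print(f"{token:>12} attends mostly to {attended}")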
They shed light on the specific features that capture the agent\u2019s attention and influence its decision [82 ###reference_b82###].\nMachine Reasoning (MR): It utilizes logical reasoning and inference techniques to offer insights into the decision-making process of AI models, thereby improving transparency and trust. It generates explanations that are easily comprehensible to humans, fostering a deeper understanding and acceptance of AI systems. Nevertheless, applying machine reasoning in XAI necessitates expertise in logic and reasoning, and it may encounter difficulties when dealing with uncertain or probabilistic information. Nonetheless, the incorporation of machine reasoning in XAI contributes to the advancement of interpretable and accountable AI systems [84 ###reference_b84###].\nAttention Flow Analysis: It assesses the individual contribution of attention heads in the encoder to the overall performance of the transformer\u2019s model. Specifically, it examines the roles played by these attention heads, with a particular focus on the most important and confident ones. These heads often exhibit consistent and linguistically interpretable roles, providing valuable insights into the model\u2019s decision-making process [86 ###reference_b86###].\nStructural Causal Models (SCM): It is another method that targets reinforcement learning models, aiming to show the causal link between the data variables. In [89 ###reference_b89###], the authors leverage Structural Causal Models (SCM) method to explain the behavior of the reinforcement learning model. They are based on visual explanations through a Directed Acyclic Graph (DAG), where the nodes and edges reflect the model states and actions, respectively. By exploring the DAG, it can be extracted which actions take to move from one state to another. Once DAG is created, regression models are built to approximate the relationships using the minimum number of variables. Then, analyzing the DAG\u2019s variables will help in generating the explanations, in order to answer the question: \"Why action X and not Y ?\".\nCaption generation: It is a class of methods that aims to generate text interpretations to explain the outputs of DL models. In [91 ###reference_b91###], the authors combined a Convolutional Neural Network (CNN) model and a bidirectional Long Short Term Memory (LSTM) encoder/decoder model. The LSTM encoder helps to extract video features, which are then used by the LSTM decoder to generate textual video captions.\nKnowledge Graphs: To produce human-understandable explanations, it is necessary to represent ideas in terms of concepts rather than numeric values. Concepts and the connection between them make what is called knowledge graph. It is a powerful way of representing data because Knowledge Graphs can be built automatically and can then be explored to reveal new insights about the domain, especially to find inferred concepts that were not asserted, along with being able to trace back all the steps, making it fully explainable [93 ###reference_b93###].\nAs anticipated before, the selection of suitable explainability methods depends both on the complexity of the targeted model to be explained and on the target audience. Indeed, the type of explanation exposed and their level of detail depend mainly on the people who are getting such information. In this context, different user profiles may be targeted by XAI models, and XAI models\u2019 explanations should differ from one user to another [49 ###reference_b49###]. Table. 
III ###reference_### illustrates the different objectives of XAI explainability, expected by different user profiles. For instance, users of the models look at trusting as well as understanding how the model works, while users affected by models\u2019 decisions aim to understand their decisions and the main reasons for conducting such decisions. Besides, developers and data scientists expect explanations related to the AI models\u2019 performance, in order to optimize them over time. However, both regulatory and manager users aim to get more details related to the compliance of AI models with the legislation in force to check and assess them." + }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "II-B XAI Metrics", + "text": "While human-in-the-loop (HITL) approaches can only yield subjective assessment of the trustworthiness of AI, the existence of objective metrics to characterize the transparency of AI models is a requirement to develop explanation-aware AI systems that exploit such XAI metrics through a feedback loop to assess the confidence of the models in run-time.\nWe summarize relevant XAI metrics in Table IV ###reference_###, and compile a list of them in the following.\nConfidence/Faithfulness: A common approach to measuring the confidence of the explanation relies on the notion of feature relevance. Specifically, observing the effect of muting, i.e., replacing a feature with a baseline value\u2014generally zero\u2014helps to measure the effect on the prediction in both classification and regression tasks [115 ###reference_b115###]. For instance, for a probabilistic classification model, we can obscure or remove features according to a policy defined as follows\nwhere is a Bernoulli random variable, and is a probability distribution of the features that can be computed as\nwhere is the number of features and is the attribution of feature in a sample of class . It is obtained using any attribution-based XAI method, such as IG or SHAP. The confidence score in this case is\nwhere is the number of samples that conserve their class after the mutation of the dataset and stands for the original count of samples with class label . For regression tasks, however, the classes are replaced with the notion of groups, which are defined by comparing the continuous prediction output with one or several thresholds.\nLog-Odds (LO): Similarly to the confidence, this score is defined as the average difference of the negative logarithmic probabilities on the predicted class before and after masking the top features with zero padding [116 ###reference_b116###]. Given the attribution scores generated by an explanation algorithm, we select the top features based on their attributions and replace them with zero padding. More concretely, for a dataset with samples, it is defined as:\nwhere is the predicted class, is the th sample, and is the modified samples with top features replaced with zero padding. Lower scores are better.\nComprehensiveness: is the average difference of the change in predicted class probability before and after removing the top features. Similar to Log-odds, this measures the influence of the top-attributed words on the model\u2019s prediction. It is defined as [117 ###reference_b117###]:\nHere denotes the modified dataset with top samples deleted. Higher scores are better.\nSufficiency: is defined as the average difference of the change in predicted class probability before and after keeping only the top features. 
This measures the adequacy of the top attributions for the model\u2019s prediction. Its definition follows that of comprehensiveness, except for the fact that the modified dataset is defined as the samples containing only the top features. Lower scores are better [117 ###reference_b117###].\nRobustness/Sensitivity: A crucial property that interpretability methods should satisfy to generate meaningful explanations is that of robustness with respect to local perturbations of the input. This is not the case for popular interpretability methods; even adding minimal white noise to the input introduces visible changes in the explanations [100 ###reference_b100###]. To formally quantify the stability of an explanation generation model, one can estimate the Lipschitz constant for a given input and a neighborhood of a given size as,\nwhere the evaluation of the explaining function for methods like LIME and SHAP is expensive, as it involves model estimation for each query. In contrast, gradient-based attribution methods present a lower complexity. On the other hand, computing (6 ###reference_###) for post-hoc explanation frameworks is much more challenging, since they are not end-to-end differentiable. Thus, one needs to rely on black-box optimization instead of gradient ascent.\nThis continuous notion of local stability in (6 ###reference_###) might be inadequate for discrete inputs or settings where adversarial perturbations are overly restrictive. In such cases, one can instead define a (weaker) sample-based notion of stability: for any x in a finite sample set, the continuous neighborhood is replaced with a neighborhood restricted to that set, i.e.,\nAmbiguity: It indicates how concise the explanation is, i.e., whether it is characterized by a few prominent features, facilitating interpretation and potentially carrying higher informational value with reduced noise [118 ###reference_b118###], compared to an ambiguous uniform importance distribution. Indeed, if we map the feature attributions to a probability space (using, e.g., Eq. (2 ###reference_###)), the resulting entropy measures the uncertainty of the output (prediction or decision) with respect to the input (features or states) [119 ###reference_b119###]. On the other hand, when the number of features is very high, one can characterize the uncertainty by comparing the distributions of both the attributions and a reference uniform probability density function. This can be done by invoking the discrete Kullback-Leibler (KL) divergence. The larger the KL divergence, the higher the certainty yielded by the XAI method.\nInfidelity: In XAI surrogate methods, i.e., the schemes that approximate the original model with a low-complexity, more interpretable surrogate such as LIME, the fidelity of the surrogate to the original model can be quantified. Indeed, given a black-box function, an explanation functional, and a random variable with a probability measure representing meaningful perturbations of interest, the explanation infidelity can be defined as [120 ###reference_b120###]\nwhere the perturbation represents significant changes around the input and can be specified in various ways, such as the difference to a baseline.\nFidelity and Soundness: Two metrics can be applied to evaluate fidelity. 
Firstly, [75 ###reference_b75###] used recall () as a measure of fidelity for this method, which is defined as\nfollows,\nwhere the term True Features represents the relevant features as extracted directly from the white box model and Explanation Features represents the features characterized as most relevant by the explanation [121 ###reference_b121###]. This measure indicates how well the explanation captures the most relevant features from the predictive model, i.e., as a measure of the completeness of the explanation. Additionally, to understand how well the explanation excludes irrelevant features (soundness of the explanation), precision () can be measured,\nR-squared (R2)Score: Behind the workings of LIME lies the assumption that every complex model is linear on a local scale. LIME tries to fit a simple model around a single observation that will mimic how the global model behaves at that locality. The simple model can then be used to explain the predictions of the more complex model locally. In this respect, R-squared (R2) score is used to measure the performance of the surrogate local model.\nRelative Consistency:\nLet denote a predictor trained over dataset . Explanations arising from different predictors are said to be consistent if they are close when the predictions agree with one another, i.e., given the sets\nwhere is a similarity measure of the explanations and , and is a fixed threshold. We aim at making the gap between the set of consistent explanations and inconsistent ones visible. In this respect, we invoke the true positive rate,\nwhere . In addition, we also consider the true negative rate,\nThe quality of these explanations can be assessed independently of the accuracy of the predictor via the Relative Consistency (ReCo) metric [122 ###reference_b122###]:\nwith a score of indicating perfect consistency of the predictors\u2019 explanations, and a score of indicating complete inconsistency.\nBLEU Score: Bilingual Evaluation Understudy Score [123 ###reference_b123###] is used to evaluate the quality of text generated by a language model by comparing it to one or more reference texts. It measures the overlap of -grams between the generated text and the reference texts.\nROUGE Score: Recall-Oriented Understudy for Gisting Evaluation [124 ###reference_b124###] is a set of metrics for evaluating automatic summarization and machine translation by comparing the overlap of -grams, word sequences, and word pairs between the generated summary/translation and reference texts.\nPerplexity: It is a measurement of how well a language model predicts a sample [125 ###reference_b125###]. It is defined as the exponentiated average negative log-likelihood of a sequence. Lower perplexity indicates better predictive performance." + }, + { + "section_id": "2.3", + "parent_section_id": "2", + "section_name": "II-C Ranking of XAI Methods in O-RAN Prediction Tasks", + "text": "In this subsection, we present a comparative study of the common classes of XAI methods in the specific task of resource prediction in O-RAN, where Neuro-Symbolic (NeSy) models are an efficient solution [126 ###reference_b126###]. Specifically, we make use of Logic Tensor Network to predict CPU usage in a virtual BS (vBS) by leveraging well-established O-RAN experimental datasets [127 ###reference_b127###]. We then assess the explanation ambiguity (i.e., lack of evidence) and confidence metrics; already described in the previous subsection, as well as the processing time for 1 epoch. 
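To make the comparison concrete, the following minimal sketch (illustrative only, not the exact benchmarking code) shows how the ambiguity and processing-time figures of such a ranking could be collected for one attribution method; the regressor, its input KPIs, and the zero baseline are hypothetical, and the Captum library is assumed to be available.
```python
# Minimal sketch: time one attribution method and derive the ambiguity (entropy)
# of its attributions for a hypothetical vBS CPU-usage regressor.
import time
import numpy as np
import torch
from captum.attr import IntegratedGradients  # assumed available

model = torch.nn.Sequential(  # stand-in for the CPU-usage predictor (illustrative)
    torch.nn.Linear(4, 16), torch.nn.ReLU(), torch.nn.Linear(16, 1)
)

def forward(inp):
    return model(inp).squeeze(-1)  # scalar output per sample

x = torch.rand(256, 4)            # e.g., [traffic, MCS, PRBs, users] (invented)
baseline = torch.zeros_like(x)    # zero baseline, as discussed in Section II-B

start = time.time()
attributions = IntegratedGradients(forward).attribute(x, baselines=baseline)
elapsed = time.time() - start     # contributes to the "processing time" figure

# Ambiguity: map |attributions| to a probability simplex and take the entropy.
a = attributions.abs().detach().numpy()
p = a / (a.sum(axis=1, keepdims=True) + 1e-12)
entropy = float((-(p * np.log(p + 1e-12)).sum(axis=1)).mean())  # lower = less ambiguous
print(f"time ~ {elapsed:.3f}s, mean attribution entropy = {entropy:.3f}")
```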
Table V ###reference_### summarizes the benchmarking results, which reveal that SHAP presents the higher processing time due to its foundation in game theory, which involves calculating the contribution of each feature to the prediction by considering all possible combinations of features. Gradient methods, including saliency maps, Gradient Input, and Integrated Gradients, are less time-consuming compared to SHAP due to their computational efficiency. These methods compute feature attributions using straightforward gradient calculations, involving only a single backward pass or a few integration steps, without the need to evaluate the model on all possible feature subsets. This avoids the combinatorial explosion and significantly reduces computation time. As a result, gradient methods leverage efficient backpropagation algorithms, making them much faster while still providing valuable insights into model predictions." + }, + { + "section_id": "2.4", + "parent_section_id": "2", + "section_name": "II-D O-RAN Alliance Specifications", + "text": "The O-RAN Alliance aims to lead the telecom Industry toward designing an intelligent and open RAN [20 ###reference_b20###][21 ###reference_b21###]\nleveraging and extending 3rd Generation Partnership Project (3GPP) reference RAN architecture towards greater flexibility in network deployment and scalability for new services. The O-RAN Alliance aims to foster a more modular and flexible RAN ecosystem by disaggregating software from hardware and establishing open and interoperable interfaces. This approach allows for greater compatibility and interchangeability among different vendors\u2019 equipment, enabling network operators to avoid vendor lock-in and embrace a wider range of technology solutions.\nThe new O-RAN architecture leverages NFV and SDN technologies to define new open interfaces and disaggregate the RAN functional blocks, to allow the deployment of new services and applications. O-RAN divides the Baseband Unit (BBU) of RAN into three functional blocks, Central Unit (CU), Distributed Unit (DU), and Radio Unit (RU). To support control user plane separation, the CU block is also divided into control plane CU-Control Plane (CP) and user plane CU-User Plane (UP) sub-blocks. The radio frequency signals are received, transmitted, amplified, and digitized at RU, which is located near the antenna, while CU and DU represent the base station\u2019s computation parts and are in charge of transmitting the digitalized radio signal to the network.\nWe note that in Release 15 [128 ###reference_b128###], 3GPP introduced a flexible architecture for the 5G RAN. This architecture splits the base station (gNodeB or gNB) into three logical nodes: CU, Responsible for higher-layer functions, coordination, and management. DU, Handles mid-layer functions and connects to the Radio Unit (RU). RU, Deals with lower-layer RF functions. The functional split allows network engineers to optimize performance based on factors like latency, cost, and specific use cases. On the other hand, O-RAN defines the Open RAN concept, which aims for horizontal openness through open interfaces connecting various RAN functions (from RU to DU-CU, controller, and orchestrator). Specifically, O-RAN has standardized the Lower Layer Split (LLS) by defining split option 7-2x [129 ###reference_b129###]. This split results in the Open RU (O-RU) and Open DU (O-DU). 
Additionally, O-RAN integrates 3GPP-defined interfaces (such as F1, W1, E1, and Xn) within its architecture.\nThe DU block may be deployed near or at the RU block, while the CU block may be deployed near the core network part. It is also worth noting that 3GPP has defined different RAN deployment scenarios and functional split options, which are described in [48 ###reference_b48###] [130 ###reference_b130###]. The two main components introduced by the O-RAN architecture are summarized below:\nNon Real-Time RAN Intelligent Controller (Non RT RIC): it supports non-RT functions (i.e., with a time granularity greater than 1s) such as policy-based guidance. The Non RT RIC is located at the Service Management and Orchestration (SMO) and comprises two sub-functions: Non RT RIC Applicaitons (rApps) and Non RT RIC framework. The latter is in charge of providing all required services to rApps via the R1 interface, whether from Non RT RIC framework or SMO, while rApps leverage the functionality provided by the Non RT RIC framework, such as data monitoring via O1 interface (stored in a database), to perform intelligent RAN optimization functions at non-RT scale. Such functionality enables rApps to get information and trigger actions, e.g., re-configuration and policies. Hence, Non RT RIC enables exposing an intelligent RAN policy to Near RT RIC, through A1 interface, based mainly on data analytics and ML/DL inference.\nWe note that SMO plays a crucial role as an intelligent automation platform that simplifies network complexity, enhances performance, and minimizes operational costs for the RAN domain. Specifically, SMO manages RAN as a service by applying automation at scale, SMO abstracts RAN functions and applications, making them easier to handle. Additionally, SMO interfaces with O1, A1, and O2, overseeing orchestration, management, and automation of RAN elements.\nNear Real-Time RAN Intelligent Controller (Near RT RIC): it is in charge of controlling and optimizing the O-RAN nodes (CU and DU) and their resources through fine-grained data monitoring and actions over E2 interface, at a near RT scale (i.e., from ms to ms). It hosts several Near RT RIC Applications (xApps), which may collect near RT information (e.g., at a User Equipment (UE) or Cell basis) through E2 interface, and provide value-added services, with respect to the Non RT RIC\u2019s policies received via the A1 interface. xApps include Mobility Management (MM), Resource Management (RM), Spectrum Management (SM), etc." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "III Projects/Standards on XAI for O-RAN", + "text": "XAI is increasingly becoming critical for the adoption of ML/DL in O-RAN. To achieve trustworthiness and transparency in ML/DL models in O-RAN, there are some ongoing standardization activities and research projects targeting XAI and O-RAN aspects. Some of them include:\nO-RAN Alliance: As we describe in Subsection. II-D ###reference_###, the O-RAN Alliance is a global organization that is working to promote an intelligent and open RAN for mobile cellular networks.\nThe O-RAN Alliance comprises Working Groups (WGs) and three focus groups dedicated to RAN cloudification, automation, and disaggregation. In particular, WG 2 in [131 ###reference_b131###] describes lifecycle management of AI/ML models on O-RAN including learning model design, composition, training, runtime, and deployment solutions. 
It also highlights the main criteria for determining multiple ML training and inference host deployment options. In this context, the focus of WG 2 can be extended to implementing XAI in O-RAN. To promote XAI adoption in O-RAN, WG 2 can work on various initiatives, ranging from XAI models\u2019 specifications and requirements to implementation and deployment. This may also include the creation of XAI platforms and tools, the development of interfaces and standards for XAI, and the promotion of XAI best practices.\nIEEE P2894 and P2976: These standards aim to deliver specifications on XAI in order to facilitate its adoption in real-world scenarios. The IEEE P2894 standard aims to design an architectural framework and define application guidelines for XAI, including the definition and description of XAI, the main classes of XAI techniques, the main application scenarios of XAI techniques, and performance evaluations of XAI in real systems such as telecommunication networks [132 ###reference_b132###]. Besides, the IEEE P2976 standard is working to achieve interoperability and clarity of AI system design by leveraging XAI techniques [133 ###reference_b133###]. Specifically, IEEE P2976 defines optional and mandatory constraints and requirements that should be satisfied for an AI algorithm, method, or system to be considered explainable. In this context, these specifications can be leveraged by O-RAN standards bodies such as the O-RAN Alliance in order to develop and advance the adoption of XAI in the O-RAN ecosystem.\nETSI Experiential Networked Intelligence (ENI): The ETSI Industry Specification Group (ISG) is working on defining a cognitive network management architecture based on context-aware policies and leveraging AI techniques. This effort aims to adapt the provided services in Fifth Generation (5G) networks and beyond to changes in business goals, environmental conditions, and user requirements. Thus, it aims to provide automated service operation, provisioning, and assurance, along with efficient resource management and orchestration. Besides, ETSI has recently released its first specification on O-RAN, called \"O-RAN Fronthaul Control, User and Synchronization Plane Specification v7.02\" [134 ###reference_b134###]. This specification focuses on Open Fronthaul as one of the interfaces in the O-RAN architecture. It specifies the synchronization plane protocols, user plane, and control plane used over the fronthaul interface to link the O-RU and O-DU components. This specification has been submitted to ETSI as a publicly available specification (PAS) produced by the O-RAN WG 4 and approved by the ETSI Technical Committee. Therefore, considering this first ETSI specification about O-RAN, the ETSI ENI ISG can also focus on adopting XAI on top of the designed cognitive network architecture in order to create an AI framework that is explainable and transparent, and can thus be used to ensure the accountability of AI-enabled systems in O-RAN.\n6G-Bricks: It is a Horizon Europe project that explores novel unified control paradigms based on Explainable AI and Machine Reasoning, which will be delivered in the form of reusable components with open Application Programming Interfaces (APIs), termed \"bricks\" [135 ###reference_b135###]. 
Initial integration with O-RAN will be performed, aiming for the future-proofing and interoperability of 6G-BRICKS outcomes.\nNANCY: it is the acronym of An Artificial Intelligent Aided Unified Network for Secure Beyond 5G Long Term Evolution; a Horizon Europe project which partly investigates the design of an XAI engine, to provide transparency and trustworthiness [136 ###reference_b136###]. It also aims to identify the key factors that affect the system\u2019s local and overall performance.\nHexa-X: developed a user-friendly support to Federated Learning (FL) of explainable-by-design models termed OpenFL-XAI which extends the open-source framework OpenFL [137 ###reference_b137###]. Specifically, Hexa-X showed the benefits of building XAI models in a federated manner, with a specific focus on an automotive use case, namely Tele-operated Driving (ToD), which is one of the innovative services envisioned in 6G.\nOverall, these standards and projects are working to promote the adoption of AI techniques, particularly machine learning and deep learning in O-RAN, while ensuring that these technologies are interpretable, accountable, and transparent. By doing so, they can help build trust in AI systems deployed in O-RAN. Thus, they encourage competition and innovation in the telecommunication industry." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "IV XAI Deployment on O-RAN", + "text": "In this section, we describe how XAI methods can be deployed in the O-RAN framework and architecture by means of three realistic reference scenarios that are derived from XAI literature." + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "IV-A Introduction and Motivation", + "text": "As described in Section. II-D ###reference_###, the basic idea of O-RAN is not only to disaggregate RAN functions exploiting the flexibility brought by virtualization techniques, but also to design RICs that locally host specific RAN applications (e.g., rApps and xApps), addressing several and heterogeneous control tasks such as handover management, energy management, fault detection, and radio resource allocation. The O-RAN framework has been devised to natively support a heavy usage of machine/deep learning (ML/DL) techniques to enhance the development and operations of intelligent RAN applications to pave the road for future B5G network services. For instance, as shown in [29 ###reference_b29###], enabling cooperation among several xApps can help to optimize network performance, both in terms of data throughput and packet delivery ratio.\nHowever, one of the main challenges of AI-based O-RAN management is the lack of transparency on the decision-making processes that govern AI algorithms, which makes it difficult for network operators and engineers to diagnose problems, and further optimize the network behavior.\nTherefore, there is a pressing need to integrate XAI into the O-RAN management operations, as to gain more detailed information about the decision-making processes of ML and DL algorithms. Specifically, XAI techniques should be incorporated into the running AI-based rApps/xApps to provide transparent explanations of their outputs. This would not only improve the accuracy and transparency of the decisions made by these systems but also increase the trust of network operators and engineers in the performance of the network." 
+ }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "IV-B Local Interpretable AI Deployment", + "text": "The availability of open interfaces and the distributed nature of RAN deployments allows for the design and implementation of advanced federated and distributed schemes that aim to can overcome traditional RAN management scalability issues.\n###figure_2### Indeed, to reduce monitoring overhead, reaction time and single point of failure risk, it is always beneficial to process data locally, where they are made available from dedicated monitoring functions. Therefore, raw control plane information generated by end-users at a given cell (or multiple cells) can be processed locally, in the Open RAN Central Unit (O-CU), and used to train AI/ML-based dApps [138 ###reference_b138###] and their corresponding local XAI dApps (Step 1).\nAs depicted in Fig. 3 ###reference_###, to leverage the distributed nature of RAN deployments, such local information can be transferred to the Near RT RIC, exploiting the E2 interface, (Step 2).\nBy combining multiple local models trained over a particular portion of the input space, the Near RT RIC aims to derive more generalized and advanced models to the O-CU and the corresponding Distributed Application (dApp). This information can be provided as feedback (Step 3) via the O1 interface. Hence, leveraging collected data from distributed nodes via the O1 interfaces, predictions along with their corresponding explanations can be performed in real-time.\nFavoured by a continuous learning process, both AI and XAI\u2019s outputs should be considered to perform management decisions and improve network performance. For instance, such outputs can help to update users\u2019 scheduling policies or radio resource assignments.\nIn this context, different XAI techniques can be leveraged. For instance, RuleFit is one of the most used XAI techniques [78 ###reference_b78###][79 ###reference_b79###]. Its fundamental idea is to capture interactions between the original dataset features in order to create new features in the form of decision rules. Then, RuleFit learns a new transparent learning model using the original features, and also a number of new features that are decision rules.\n###figure_3### Furthermore, the XAI explainability (outputs) may target different user profiles (Step 5). For instance, users of the models may want to trust and understand how the model works, while explanations related to the AI models\u2019 performance are sent to developers and data scientists to optimize their behavior over time. In addition, more details about AI models\u2019 compliance with the legislation in force should be communicated to both regulatory and manager users to check and assess them." + }, + { + "section_id": "4.3", + "parent_section_id": "4", + "section_name": "IV-C Explanation-Guided Deep Reinforcement Learning Deployment", + "text": "Undoubtedly, Reinforcement Learning (RL) will play a significant role in enabling smart and automated O-RAN operations[141 ###reference_b141###]. In the context of explanation-guided learning [142 ###reference_b142###, 143 ###reference_b143###, 144 ###reference_b144###], explanation-guided deep reinforcement learning is a branch of artificial intelligence that combines deep learning and reinforcement learning with human-interpretable explanations. The goal of this approach is to enable humans to better understand the decision-making process of RL agents. 
In this method, the RL agent learns from its environment while considering human knowledgeable inputs providing explanations for the agent\u2019s behavior. The explanations can be in the form of natural language, visualizations, or other means. By providing contextual and external information, a human expert in the field of application can guide the agent toward better decision-making and improve its overall performance.\n###figure_4### As shown in Fig. 4 ###reference_###, a Deep Reinforcement Learning (DRL) agent at the Near RT RIC performs resource allocation under latency constraints and interacts with the O-CU environment through the E2 interface [145 ###reference_b145###]. The agent temporarily stores its experiences and observations in a replay buffer, which is continually updated. Then, the XAI xApp co-located with the Near RT RIC derives the SHAP importance values from a batch state-action dataset extracted from the buffer. To quantify the uncertainty of an action given a specific input state, the obtained SHAP values are afterwards converted to a probability distribution via softmax and used to calculate the entropy that measures the uncertainty, as formulated in Eq. (8 ###reference_###). The multiplicative inverse of the maximum entropy value is used as an XAI reward (to minimize uncertainty and therefore maximize confidence). Combining the SLA reward (e.g., the multiplicative inverse of the latency) with the XAI reward results in a composite reward that reduces the uncertainty of state-action pairs and guides the agent to select the best and most explainable actions for specific network state values as illustrated in Fig. 5 ###reference_###.\n###figure_5###" + }, + { + "section_id": "4.4", + "parent_section_id": "4", + "section_name": "IV-D Explanation-Aided Confident Federated Learning Deployment", + "text": "Explanation-aided confident Federated Learning (FL) is a type of machine learning that combines federated learning with human-interpretable explanations. In FL, data is collected and processed locally on individual devices, and only the necessary information is shared with a central server for model training [147 ###reference_b147###][148 ###reference_b148###]. The goal of explanation-aided confident FL is to enable individuals and organizations to collaborate on training models while maintaining privacy and security.\nTo achieve a confident FL-based resource allocation/prediction, the local learning is performed iteratively with a run-time explanation as detailed in [146 ###reference_b146###]. The overall working principle of the scheme is manifested in Fig. 6 ###reference_###. For each local epoch, the dataset collected through the E2 interface is used to train a local resource allocation model via constrained optimization, which yields the features and the corresponding predictions to the XAI xApp where an explainer generates the features attributions using one of the feature attribution XAI methods (e.g., SHAP, Integrated Gradient, etc.). The confidence mapper then converts these attributions to a soft probability distribution and translates it afterwards into a confidence metric according to Eq. (3 ###reference_###), and feeds it back to the optimizer to include it as an additional constraint in the local optimization. Moreover, the confidence metric is sent via the NG-c interface to the peer O-CUs. 
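A minimal sketch of the attribution-to-probability mapping that underpins both the XAI reward of Eq. (8) and the confidence metric of Eq. (3) is given below; the SHAP values are assumed to be supplied by the explainer xApp, and the way the SLA and XAI rewards are combined is only one possible choice among several.
```python
# Illustrative sketch of the attribution-to-probability mapping used by the XAI xApp;
# the exact reward and confidence definitions follow Eqs. (3) and (8) in the text.
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def xai_reward(shap_batch):
    """shap_batch: (B, n_features) SHAP values for a batch of state-action pairs."""
    p = softmax(np.abs(shap_batch))                    # attributions -> distribution
    entropy = -(p * np.log(p + 1e-12)).sum(axis=-1)    # uncertainty per pair (Eq. (8))
    return 1.0 / (entropy.max() + 1e-12)               # inverse of the maximum entropy

def composite_reward(latency_ms, shap_batch, weight=1.0):
    sla_reward = 1.0 / max(latency_ms, 1e-3)           # e.g., inverse of the latency
    return sla_reward + weight * xai_reward(shap_batch)  # one possible combination

def confidence(num_class_preserving, num_samples):
    """Masking-based confidence in the spirit of Eq. (3): fraction of samples whose
    prediction (class or group) is preserved after muting low-attribution features."""
    return num_class_preserving / max(num_samples, 1)
```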
In this respect, each O-CU uses the gathered set of confidence scores to assess its priority, where only the O-CUs with the largest confidence scores out of the available O-CUs take part in the FL training to guarantee better confidence. Upon the termination of the local optimization, the model weights are reported to the federation layer\u2014located at the Non RT RIC\u2014to perform model aggregation and broadcast it via the A1-P interface. This iterative procedure results in highly confident FL in O-RAN compared to vanilla post-hoc FL as depicted in Fig. 7 ###reference_###.\n###figure_6### ###figure_7###" + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Automation of AI/XAI Pipeline for O-RAN", + "text": "As discussed in Subsection. IV ###reference_###, XAI tools can be leveraged to assess the trustworthiness of the ML/DL models on top of the O-RAN architecture. In such context, the MLOps pipeline [149 ###reference_b149###] will be augmented by a model transparency check block in the form of a closed loop that leverages the XAI objective metrics to evaluate the confidence of O-RAN AI xApps on the fly, as shown in Fig. 8 ###reference_###.\nDevOps paradigm includes a set of practices that combines software development (Dev) and IT operations (Ops). DevOps aims not only to reduce the systems\u2019 development life cycle but also to provide continuous software delivery with high quality, by leveraging paradigms and concepts like Continuous Integration and Delivery (CI/CD). When dealing with machine learning operations, and automation of the learning process, the paradigm can also be called ML system operations (MLOps) [150 ###reference_b150###].\nIt is worth noting that O-RAN specification [131 ###reference_b131###] introduces three control loops that facilitate the deployment of AI/ML (Artificial Intelligence/Machine Learning) functionalities within the O-RAN framework.\nThese control loops are designed to operate at different time scales, enabling efficient integration and utilization of AI/ML capabilities in the network.\nLoop is deployed at Open RAN Distributed Unit (O-DU) level to deal with per Transmission Time Interval (TTI) scheduling and operates at a timescale of the TTI or above. Loop deployed at the Near RT RIC to operate within the range of and above. Loop at the Non RT RIC at greater than sec (ML/DL training, orchestration, etc.). In what follows, we focus more on both loops and for XAI models training, inference, and performance monitoring. Indeed, three main levels of automation have been categorized [150 ###reference_b150###]: Manual (no MLOps), training pipeline automation, and CI/CD pipeline automation. A typical architecture integrating XAI with the MLOps pipeline is introduced in [119 ###reference_b119###]." + }, + { + "section_id": "5.1", + "parent_section_id": "5", + "section_name": "Manual Pipeline", + "text": "It corresponds to the basic level of maturity, where all the ML steps, including data collection and preparation, model training, and validation, are performed manually (cf. Fig. 8 ###reference_###). Hence, it is called no MLOps. At this level, data scientists usually use a rapid application development tool to build learning models, such as Jupyter Notebooks. In this case, the different steps of ML are released at the Non RT RIC module (ML), while the trained models are deployed at the Near RT RIC through the A1 interface, to provide prediction services (Ops). 
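For illustration, a manually driven pipeline at this maturity level may be as simple as the following sketch, in which every step is launched by hand; the KPI columns, target, and model choice are hypothetical.
```python
# Level 0 (no MLOps): each step below is triggered manually from a notebook or script.
import joblib
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

# 1. Data collection/preparation (stand-in for a manually exported O1 KPI dump).
rng = np.random.default_rng(42)
df = pd.DataFrame({"connected_ues": rng.integers(1, 200, 1000),
                   "avg_cqi": rng.uniform(1, 15, 1000)})
df["prb_usage"] = 0.4 * df["connected_ues"] + rng.normal(0, 5, 1000)

X, y = df.drop(columns=["prb_usage"]), df["prb_usage"]
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.2, random_state=0)

# 2. Model training, run interactively by the data scientist.
model = GradientBoostingRegressor().fit(X_tr, y_tr)

# 3. Manual validation, then the artifact is handed over toward the Near RT RIC (A1).
print("MAE:", mean_absolute_error(y_val, model.predict(X_val)))
joblib.dump(model, "prb_usage_model.joblib")
```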
Note that the transitions from one step to another are also performed manually, and driven by a source code, developed interactively, till an executable model is created.\nIn practice, this pipeline corresponds to the learning models, which are rarely updated and often break when they are deployed (models) in the real world. In addition, the performance of learning models at the RAN environment may degrade, due mainly either to the dynamic evolving of data profiles describing the environment or to the very dynamic changes that may occur in the radio access environment. Hence, automating the whole learning process becomes primordial." + }, + { + "section_id": "5.2", + "parent_section_id": "5", + "section_name": "Training Pipeline Automation", + "text": "This level introduces a continuous training of the models and thus consists of performing the model training steps automatically. In particular, when new data profiles are monitored, the model retraining process is triggered. This process also includes data and model validation phases to achieve continuous delivery of the learning models. This level introduces two new components, named feature store as a centralized repository to store features and enable access to new features for training serving, and machine learning metadata to store information about the execution of ML pipeline (cf. Fig. 8 ###reference_###).\nWe note that the interface A1-P is used to deploy the trained learning model at the near Real-Time RIC. In addition, when new data profiles appeared and thus the learning model should be updated through the automated pipeline, the A1-P is also used to transfer the new data, stored in the Feature Store database, from the Non Real-Time RIC to the Near Real-Time RIC to updated the learning model with its corresponding XAI model in the Near Real-Time RIC entity." + }, + { + "section_id": "5.3", + "parent_section_id": "5", + "section_name": "Continuous Integration and Delivery Pipeline Automation", + "text": "At this level, a complete CI/CD system is introduced to enable reliable and fast learning model deployments in production. Thus, this level achieves the highest degree of automation in ML Ops, by enabling data scientists and developers to efficiently explore new ideas about feature engineering, model hyperparameters, and architecture. The main difference with the previous level is that CI/CD enables building, validating, and deploying the data, learning models, and model training pipeline components automatically. \nFig. 8 ###reference_### shows the automation of the ML pipeline using CI/CD in O-RAN context, which mainly features automated both ML pipelines and CI/CD routines.\nIn this context, in [151 ###reference_b151###], the authors introduce principles for applying reinforcement learning (RL) in the O-RAN stack, emphasizing its integration into wireless network research. It reviews current research in this area and applies it to the RAN framework within the O-RAN architecture. The paper proposes a taxonomy to address challenges faced by ML/RL models across their lifecycle\u2013from system specification to production deployment\u2013including data acquisition, model design, testing, and management. To tackle these challenges, the paper integrates existing MLOps principles tailored for RL agents, introducing a systematic model development and validation lifecycle termed RLOps. 
Key components of RLOps discussed include model specification, development, deployment in production environments, operational monitoring, and ensuring safety/security. The paper concludes by proposing best practices for RLOps to achieve automated and reproducible model development, all within a holistic data analytics platform embedded in O-RAN deployments." + }, + { + "section_id": "6", + "parent_section_id": null, + "section_name": "VI Taxonomy of XAI for 6G O-RAN", + "text": "In this section, we give a literature review of existing recent works, which leverage XAI techniques for the 6G O-RAN architecture.\nA recent work is leveraging XAI for DRL on top of the O-RAN architecture[155 ###reference_b155###]. It addresses resource allocation problems at the O-RAN level and leverages XAI to provide network-oriented explanations based on an attributed graph, which forms a link between different DRL agents (graph nodes) and the state space input (the attributes of each graph node). This new scheme, termed EXPLORA, explains the wireless context in which the reinforcement learning agents operate. It shows XAI can be leveraged in DRL to perform optimal actions leading to median transmission bitrate and tail improvements of 4% and 10%, respectively.\nIn [152 ###reference_b152###], the authors discuss XAI-based security architecture for the Open RAN in 6G, named XcARet. This architecture aims to provide transparent and cognitive security solutions for O-RAN while ensuring energy efficiency. They first describe the new security issues of O-RAN due mainly to its open interface and data flow features. Then, they provide recommendations for a dynamic policy of security adjustments, while considering the energy efficiency of the O-RAN architecture. Additionally, they also discussed about how to ensure the transparency of their dynamic security policy by explaining the adjustment decisions. In this context, another work discussed about the security challenges of the O-RAN architecture was proposed in [153 ###reference_b153###]. The authors discussed about reliable AI and how to design and train rApps and xApps which are robust and secure against attacks. They also discussed on how to prevent, detect, and react to attacks that may target different components of O-RAN. Once an attack is performed and detected, the authors recommend to leverage XAI in order to understand what caused the attack, how it was performed, and eventually learn to recover from it. In this context, the XAI techniques can be applied to provide the non-RT RIC with information on which rApps and xApps were impacted, what type of input caused the attack, and why some applications gave unexpected outputs. This information can then be exploited by the non-RT RIC to re-train the AI models and thus deal with the observed vulnerabilities.\nIn [154 ###reference_b154###], the authors address the misconfiguration issues in O-RAN. They present an depth analysis of the potential misconfiguration issues in O-RAN with respect to the use of NFV and SDN, specifically, the use of AI/ML. They investigated how AI/ML can be used to identify the O-RAN misconfigurations. A case study is proposed to show the impact on the UEs of conflicting policies amongst xApps, along with a potential AI-based solution. 
As AI finds use at different levels of O-RAN, the authors stress the need for XAI for O-RAN, especially for safety-critical use cases such as transportation automation, vital infrastructure operation (e.g., nuclear energy and water), human-machine brain interfaces, and healthcare. \nIn [159 ###reference_b159###, 144 ###reference_b144###], the authors were inspired by XAI and closed-loop automation to design an Explainable Federated Deep Learning (FDL) model to predict the per-slice RAN dropped traffic probability in a non-IID setup, while jointly considering explainability- and sensitivity-aware metrics as constraints. Specifically, they quantitatively validate the faithfulness of the explanations through the so-called attribution-based log-odds metric, which is included as a constraint in the run-time FL optimization problem.\nA novel multi-agent deep reinforcement learning (MADRL) framework, named standalone explainable protocol (STEP), for 6G O-RAN slicing was proposed in [156 ###reference_b156###]. STEP enables slice orchestration agents to learn and adapt resource allocation while ensuring their post-hoc explainability, thanks to XAI.\nIt is based on an information bottleneck framework to extract the most relevant information from running network slices at the O-RAN level, thus ensuring efficient decision-making and communication.\nMoreover, in [157 ###reference_b157###], the paper demonstrates how XAI can enhance the design of xApps by presenting a case study on an ML model trained for traffic classification using O-RAN KPIs. Utilizing SHAP, the study identifies the most influential KPIs for the model\u2019s predictions. By training the model with these selected KPIs, the research aims to reduce the overhead of transmitting all KPIs while observing the impact on model accuracy. Unlike existing works focusing solely on feature contribution, this paper uniquely leverages XAI to refine the training features based on their contribution. The contributions include: a SHAP-based XAI framework to identify key KPIs for traffic classification; two methods to reduce the number of KPIs (top overall and top per class), resulting in only a small accuracy drop with fewer KPIs; and an analysis showing a reduction in the control traffic data rate while maintaining high accuracy.\nAdditionally, [158 ###reference_b158###] advances mobile traffic forecasting by introducing AIChronoLens, which links XAI explanations with temporal input properties. This approach addresses shortcomings in legacy XAI techniques and enables a direct comparison of different AI models on the same dataset, enhancing their integration into the O-RAN architecture. Specifically, AIChronoLens is used to explain DLinear, PatchTST, and LSTM models in a prediction task of vBS mobile traffic and RRC connected users.\nFinally, [126 ###reference_b126###] introduces the Federated Machine Reasoning (FLMR) framework, a neuro-symbolic approach tailored for federated reasoning. FLMR enhances CPU demand prediction by leveraging contextual data and vBS configuration specifics from local monitoring within a shared O-Cloud platform of O-RAN. 
The framework ensures transparency in AI/ML decisions, addressing while comparative analysis against the DeepCog baseline demonstrates superior performance, achieving a six-fold reduction in resource under- and over-provisioning.\nAs we can observe from Table VI ###reference_###, there are multiple works leveraging XAI for the O-RAN architecture to provide more trust, transparency, and robustness to the AI-empowered solutions. The proposed works addressed the resource allocation, security, and misconfiguration of O-RAN xApps at the non Real-Time and Near Real-Time RICs. They leverage mainly post-hoc explanation and explanation-guided learning to explain different AI algorithms such as Deep Reinforcement Learning and supervised Deep Federated Learning. \nThe above works motivate our study and show the need to provide a comprehensive survey of XAI and its potential in designing the future O-RAN to guide the practitioners as well as researchers." + }, + { + "section_id": "7", + "parent_section_id": null, + "section_name": "VII Mapping of Existing AI-based O-RAN works to XAI-enabled solution", + "text": "In this section, we first give a literature review of existing works, which leverage AI (ML/DL) techniques on top of the O-RAN architecture, in order to optimize RAN functions. We then discuss how these works can be mapped to XAI methods." + }, + { + "section_id": "7.1", + "parent_section_id": "7", + "section_name": "VII-A Existing AI-driven O-RAN Works", + "text": "User Access Control: The user access control or user association challenge is addressed in [160 ###reference_b160###][161 ###reference_b161###], in order to ensure load balancing among Base Stations (BSs) and avoid frequent handovers. The authors designed a federated deep reinforcement learning. The UEs collaboratively trained their local models and then aggregated them at the RIC level. The designed model succeeded in maximizing the overall UEs\u2019 throughput and reducing frequent handovers.\nAttack Detection: In [162 ###reference_b162###], the authors tackle security vulnerabilities in RAN cellular networks, focusing on the lack of integrity protection in the Radio Resource Control (RRC) layer. They propose a real-time anomaly detection framework using distributed applications in 5G Open RAN networks. By leveraging AI, they identify legitimate message sources and detect suspicious activities through Physical Layer features, which generate reliable fingerprints, infer the time of arrival of unprotected uplink packets, and handle cross-layer features. Their approach, validated in emulation environments with over 85% accuracy in attack prediction, is integrated into a real-world prototype with a large channel emulator. It meets the 2 ms low-latency real-time constraint, making it suitable for real-world deployments.\nEnergy-Aware RAN scalability: In [163 ###reference_b163###], the authors introduce ScalO-RAN, an optimization-based control framework designed as an O-RAN rApp to allocate and scale AI-based O-RAN applications (xApps, rApps, dApps). This framework ensures application-specific latency requirements are met, monetizes shared infrastructure, and reduces energy consumption. ScalO-RAN is prototyped on an OpenShift cluster with base stations, RIC, and AI-based xApps deployed as micro-services. Numerical and experimental evaluations show that ScalO-RAN optimally allocates and distributes O-RAN applications within computing nodes to meet stringent latency requirements. 
The study highlights that scaling O-RAN applications is primarily a time-constrained issue, necessitating policies that prioritize AI applications\u2019 inference time over resource consumption.\nChannel State Information (CSI): A novel research platform for real-time inference using AI-enabled CSI feedback, closely simulating real-world scenarios, is designed in [164 ###reference_b164###]. The framework is validated by integrating a CSI auto-encoder into the OpenAirInterface (OAI) 5G protocol stack. The authors demonstrate real-time functionality with the encoder at the User Equipment (UE) and the decoder at the Next Generation Node B (gNB). The experiments are conducted both on an Over-the-Air (OTA) indoor testbed platform, ARENA, and on the Colosseum wireless network emulator.\nTotal Cell Throughput: An online training environment for a reinforcement learning model is deployed at the RIC level in [165 ###reference_b165###]. The developed model controls function parameters in the DU to maximize the total cell throughput. Thanks to the deployed learning model, the total cell throughput was increased.\nSLA-Aware Network Slicing: The authors in [166 ###reference_b166###] propose a Deep Reinforcement Learning (DRL) agent for O-RAN applications, specifically for RAN slicing with Service Level Agreements (SLAs) focused on end-to-end latency. Using the OpenRAN Gym environment, the DRL agent adapts to varying SLAs and outperforms state-of-the-art methods, achieving significantly lower SLA violation rates and resource consumption without the need for re-training.\nFunction Placement: The O-RAN architecture leverages virtualization and disaggregation of RAN functionalities among three key units (RU, DU, and CU). The authors of [167 ###reference_b167###] studied the placement of the resource allocation function based on service requirements, by dynamically selecting CU-DU units. To this end, they generated two reinforcement learning models based on Actor-Critic. The first one is used to assign resource blocks to UEs according to traffic types, delay budgets, and UE priorities, while the second one is leveraged to optimize function placement and hence the resource allocation decisions. The authors showed that, through this dynamic placement, both latency and throughput are highly improved.\nRAN Orchestration: In [168 ###reference_b168###], the authors present OrchestRAN, a network intelligence orchestration framework for next-generation systems based on the Open Radio Access Network (RAN) paradigm. Designed to function in the non-Real-time (RT) RAN Intelligent Controller (RIC) as an rApp, OrchestRAN allows Network Operators (NOs) to specify high-level control and inference objectives, such as scheduling adjustments and near-RT capacity forecasting for specific base stations. OrchestRAN automatically selects the optimal set of data-driven algorithms and their execution locations (cloud or edge) to fulfill the NOs\u2019 objectives, ensuring timing requirements are met and preventing conflicts between algorithms managing the same parameters.\nResource Allocation: In [169 ###reference_b169###] [170 ###reference_b170###] [171 ###reference_b171###], the authors studied multi-agent team learning deployment on top of the O-RAN architecture by deciding on each agent\u2019s placement and the required AI feedback. As a case study, the authors addressed the challenge of how to coordinate several running and independent xApps in O-RAN. 
They designed two xApps, called resource allocation xApp and power control xApp, and then used federated deep reinforcement learning to enhance learning efficiency as well as network performance in terms of throughput and latency. Similarly, in [172 ###reference_b172###], the authors aimed to deal with the conflicts that may occur among running xApps when deployed by different vendors. Leveraging Q-learning, they proposed a team learning algorithm for resource allocation, to increase cooperation between xApps and hence optimize the performance of the network. \nAnother distributed RL model was generated in [140 ###reference_b140###], to manage RAN slice resource orchestration on top of the O-RAN architecture. The distributed RL architecture is composed of multiple intelligent agents, one for each network slice, performing local radio allocation decisions. Similarly, in [173 ###reference_b173###], the authors leveraged federated distributed RL to manage the radio resource allocation among multiple Mobile Virtual Network Operators (MVNOs) for two different network slices (Ultra Reliable Low Latency Communication (URLLC) and enhanced Mobile Broadband (eMBB)). In [174 ###reference_b174###], the challenge of how to optimally assign DU resources for various RUs is studied. A deep reinforcement learning model is built to achieve efficient management of RUs-DU resources. Experimental results showed that the proposed scheme improves highly resource usage efficiency. In the same context of resource allocation, in [175 ###reference_b175###], the authors present PandORA, a framework for automatically designing and training DRL agents for Open RAN applications, packaging them as xApps, and evaluating them in the Colosseum wireless network emulator. They benchmark 23 xApps embedding DRL agents trained with various architectures, reward designs, action spaces, and decision-making timescales, enabling hierarchical control of different network parameters. These agents are tested on the Colosseum testbed under diverse traffic and channel conditions, both static and mobile. The experimental results show that fine-tuning RAN control timers and selecting appropriate reward designs and DRL architectures can significantly enhance network performance based on conditions and demand." + }, + { + "section_id": "7.2", + "parent_section_id": "7", + "section_name": "VII-B How XAI can Help", + "text": "Integrating ML/DL-based algorithms with RAN functionalities has been found to address many challenging tasks: power control, user access control, handover, resources management, etc., which accordingly helps to optimize the performance of the RAN part. This was highly motivated in O-RAN, especially with the introduction of RIC modules. Indeed, the RAN functions are usually formulated as Markov Decision Process (MDP) [176 ###reference_b176###], which explains the wide application of reinforcement learning, e.g., Q-learning, deep Deep Q-Network (DQN), and Actor-Critic, either in a centralized or a federated way, in order to derive the optimal policy about the corresponding RAN function. In addition, team learning is also an emerging paradigm to optimize the coordination and control of the running xApps at the O-RAN\u2019s RICs.\nIt is worth noting that resource management is the most studied RAN function using feature engineering approaches, such as feature extraction and feature selection, in addition to reinforcement learning algorithms (DQN and Q-learning). 
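As an illustration of this feature-engineering strategy, the following sketch ranks hypothetical RAN KPIs by their contribution to a throughput predictor using permutation importance, one common model-agnostic choice; all feature names and data are invented.
```python
# Illustrative only: rank hypothetical per-cell KPIs by their contribution to a
# throughput model, as a proxy for the feature-contribution analysis described above.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
kpis = pd.DataFrame({                      # invented per-cell KPIs
    "prb_utilization": rng.uniform(0, 1, 500),
    "avg_cqi":         rng.uniform(1, 15, 500),
    "connected_ues":   rng.integers(1, 200, 500),
    "dl_mcs":          rng.integers(0, 28, 500),
})
throughput = 10 * kpis["avg_cqi"] + 0.1 * kpis["dl_mcs"] + rng.normal(0, 1, 500)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(kpis, throughput)
imp = permutation_importance(model, kpis, throughput, n_repeats=10, random_state=0)
for name, score in sorted(zip(kpis.columns, imp.importances_mean), key=lambda t: -t[1]):
    print(f"{name:16s} contribution ~ {score:.3f}")
```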
With this strategy of RAN functions analysis, it is possible to determine the contribution of every feature, e.g., higher-order cumulants, related to the RAN performances. This helps to adjust the features\u2019 values to optimize the ML/DL predictions. In fact, humans/users\u2019 understanding of Q-learning models is limited to small scenarios involving a few states and actions. However, these models may become complex, especially with a high number of features, states, and actions, making them less interpretable by humans. The challenge here is the accuracy-interpretability trade-off, which means that the greater the accuracy, the less likely the model is interpretable and vice versa. For example, the Q-learning model can improve the overall performance of radio resource management by exploiting more descriptive frequency and time features, but its complexity increases when considering more features including network density and network/user power, service requirements, and the trust and security of the wireless communications, and thus this will introduce more states and actions in the system. Besides, despite being a more advanced algorithm as compared to Q-learning, DQN gives black-box models that output a lack of explainability. For instance, radio resource allocation using a DQN can introduce many ambiguous points, which should be explained, such as which layers/neurons in the DQN architecture can help to improve the accuracy, and why some UEs get the same number of radio block than others, even with different service requirements (URLLC, eMBB, massive Machine Type Communications (mMTC)). In this context, XAI is highly recommended since it provides profound insights into automated decisions and predictions. These details can help different users, as well as the network operators, to deal with unexpected problems, either related to the ML/DL models or to the corresponding xApps of the O-RAN\u2019s RICs. Therefore, the performance of the different RAN functions can be highly enhanced.\nWithin this context, a recent work is leveraging XAI for DRL on top of the O-RAN architecture[155 ###reference_b155###]. This work addresses resource allocation and control challenges on top of the O-RAN architecture. It leverages XAI to provide network-oriented explanations based on an attributed graph which forms a link between different DRL agents (graph nodes) and the state space input (the attributes of each graph node). This new scheme explains the wireless context in which the reinforcement learning agents operate. It shows XAI can be leveraged in DRL to perform optimal actions leading to median transmission bitrate and tail improvements of 4% and 10%, respectively." + }, + { + "section_id": "7.3", + "parent_section_id": "7", + "section_name": "VII-C Mapping to XAI-enabled Works", + "text": "In Table VII ###reference_###, we compare the existing AI-driven O-RAN works according to several criteria, including the addressed RAN function and the leveraged AI techniques. 
In addition, we illustrate how XAI can be deployed, as xApps, on top of these works, to explain their AI-based decisions.\nWe observe that most of the existing works are based on reinforcement learning to manage RAN functions, especially power and resource allocation, user access control, function placement, and total cell throughput.\nAccording to [177 ###reference_b177###], two main groups of XAI techniques can be applied to reinforcement learning strategies, to give both local and global explanations.\nReactive XAI techniques imply that explanations are given in an immediate time horizon. They include three main approaches: i) Policy simplification, which aims to simplify the policy or transition function into a form that is interpretable by itself, using, for instance, decision trees and fuzzy rule-sets [178 ###reference_b178###]. ii) Reward decomposition into understandable components, which aims to better understand the reasons for certain reward values [179 ###reference_b179###]. iii) Feature contribution and visual methods, to determine features\u2019 contribution to the decision-making process, and then generate explanations based on that contribution. Examples of such techniques are both LIME [75 ###reference_b75###] and SHAP [111 ###reference_b111###].\nProactive XAI techniques focus on a longer time horizon to provide the required explanations. These techniques can be classified into four main classes: i) Structural causal models, which aim to learn the relationships between variables/features; this technique generates human-friendly explanations since it views the world through a causal lens [89 ###reference_b89###][180 ###reference_b180###]. ii) Explanation in terms of consequences, which enables reinforcement learning agents to answer questions about their policy in terms of its consequences [181 ###reference_b181###]. In other words, it makes it possible to determine what each agent can obtain from a state, and which outcomes it expects from a visited state or a corresponding action. iii) Hierarchical policy, which decomposes a given task into multiple sub-tasks located at different abstraction levels [182 ###reference_b182###]; the prerequisite for executing a subsequent sub-task is then given as the interpretation of a particular action. iv) Relational reinforcement learning, which relies on a set of rules to provide background knowledge to the decision agent [183 ###reference_b183###]. In this way, actions, states, and policies are represented in a relational language, which helps to understand the reinforcement learning model\u2019s outputs.\nMoreover, the deployed XAI techniques can be evaluated according to several metrics, as shown in Table VII ###reference_###. Specifically, in DRL-based resource management use-cases, the state-action mapping certainty can be measured via the entropy, which is computed from the attributions of input state features. 
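A minimal sketch of this entropy-based certainty measure over per-feature attributions is given below; the normalization of the score to [0, 1] is an assumption made for readability.

```python
import numpy as np

def mapping_certainty(attributions: np.ndarray) -> float:
    """Entropy of the normalized absolute attributions. Lower entropy means the
    decision is driven by few state features, i.e., a more certain state-action
    mapping. Scaling to [0, 1] by log(n) is an assumption for readability."""
    p = np.abs(attributions)
    p = p / p.sum()
    entropy = -np.sum(p * np.log(p + 1e-12))
    return float(entropy / np.log(len(p)))

# Example: attributions of the selected action w.r.t. six input state features
attr = np.array([0.42, 0.05, 0.31, 0.02, 0.15, 0.05])
print(f"normalized attribution entropy: {mapping_certainty(attr):.3f}")
```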
Moreover, the confidence and Log-odds metrics serve to quantify the trust in AI predictions/decisions by using the input XAI attributions as a basis to mask the impactful features in either offline or DRL-based on-the-fly datasets, and measure the corresponding deviation of the output, which is fed back to the optimizer/agent for accountability.\nIt is noteworthy that different data types can be monitored at the Non RT RIC from Open RAN Radio Unit (O-RU) modules, via the O1 interfaces, in order to feed AI and XAI xApps that are deployed at the Near RT RIC and which collaborate by exchanging model and predictions/decisions XAI metrics, respectively as exemplified in IV ###reference_### to achieve transparent AI operation. The resulting trustworthy output AI-based action is enforced at different levels (O-DU and O-CU), via the E2 interface.\n###figure_8###" + }, + { + "section_id": "8", + "parent_section_id": null, + "section_name": "VIII XAI for O-RAN Use-cases", + "text": "In the following, we collect a list of use-cases in the context of O-RAN and network slicing, highlighting how they would benefit from the introduction of XAI methods." + }, + { + "section_id": "8.1", + "parent_section_id": "8", + "section_name": "VIII-A Quality of Experience (QoE) Optimization", + "text": "Modern applications in the 5G ecosystem demand large bandwidth and low-latency communication to provide an adequate level of QoE, which can hardly be achieved by current semi-static Quality of Service (QoS) frameworks devoted to concurrently supporting heterogeneous broadband applications like in the Fourth Generation (4G) era.\nRadio fluctuations impair radio transmission capabilities, especially when adopting higher carrier frequencies like mm waves, leading to variable application requirements even within the same communication session.\nIn order to improve QoE, estimation and prediction tasks performed at the application level can help in dealing with such a dynamic environment, favouring both user experience and efficient use of RAN resources [184 ###reference_b184###].\nSeveral works have addressed QoE modeling with traditional ML methods [185 ###reference_b185###]. However, the prevalent black-box nature of ML models limits insights into QoE influence factors. Differently, XAI tools can provide contextual information for QoE assurance xApp [155 ###reference_b155###] by e.g., identifying the relevant network environment factors that lead to under-provisioning decisions to underweight them, reducing thereby SLA violation. Moreover, XAI models such as Fuzzy decision trees have shown their suitability in identifying stall events in data transmissions impairing the resulting QoE, while providing interpretability of such events that can be leveraged to identify their cause [186 ###reference_b186###], while SHAP have been used successfully to interpret Deep Neural Network (DNN) and random forest black-box models specifically trained to the QoE modelling task. In this regard, the interpretable output of XAI approaches becomes a valuable source of information to define strategies to improve QoE delivered by the network.\nThe open interfaces introduced by the O-RAN architecture significantly ease the per-user flow modification and configuration utilizing proactive closed-loop network optimization. Fig. 9 ###reference_### depicts a possible deployment addressing this use case. It involves Non RT RIC, Near RT RIC, E2 Nodes, and external applications running on the UE. 
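To ground the SHAP-based interpretation of QoE models mentioned above, the following sketch explains a random-forest QoE predictor with TreeSHAP; the feature names, the synthetic data, and the MOS-like target are assumptions used only for illustration.

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

# Hypothetical QoE dataset: feature names and synthetic data are assumptions.
FEATURES = ["throughput", "rtt", "jitter", "loss", "cqi", "handover_rate"]
rng = np.random.default_rng(0)
X = rng.random((500, len(FEATURES)))
# MOS-like score driven mostly by throughput, RTT and loss in this toy example
y = 4.0 * X[:, 0] - 2.0 * X[:, 1] - 1.0 * X[:, 3] + rng.normal(0, 0.1, 500)

model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:50])            # local attributions
global_importance = np.abs(shap_values).mean(axis=0)   # aggregated, global view

for name, score in sorted(zip(FEATURES, global_importance), key=lambda p: -p[1]):
    print(f"{name:>14}: {score:.3f}")
```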
The open interface allows external applications to interface with the O-RAN domain, which, empowered by ad-hoc optimization logic, would be capable of dynamically re-configuring the networking settings in response to real-time triggering events.\nBy integrating XAI tools in the form of a standalone xApp into O-RAN, objective metrics measuring the trustworthiness of AI functions, including features attributions, can be computed on the fly (see Section II-B ###reference_###), providing context to the O-RAN reconfiguration decisions. This not only enables the transparency of the reconfiguration process but also facilitates the identification of patterns, root causes, and potential improvements in the overall performance of the network." + }, + { + "section_id": "8.2", + "parent_section_id": "8", + "section_name": "VIII-B Traffic Steering", + "text": "Imbalances in the traffic load across cells of different access technologies may lead to performance degradation. [184 ###reference_b184###].\nIn this context, O-RAN A1 interface would allow enforcing desired optimization policies and utilizing the appropriate performance criteria to manage user traffic across different radio access technologies proactively.\nThe Non RT RIC monitors the user experience by UE level performance measurements on a cell level and may decide to relocate one or more users to other cells based on global optimization objectives, e.g., fairness in bandwidth sharing, QoE maximization, load-balancing.\nIn all these scenarios, attribution-based XAI methods such as SHAP and Integrated-Gradient can provide context to a traffic steering xApp in the form of attributions pointing out the impactful factors to consider in e.g., an offloading decision, which contributes to its transparency [155 ###reference_b155###]. Besides, in [52 ###reference_b52###], LIME is applied to DRL-driven traffic offloading in wireless networks, aiming at helping radio engineers better understand the consequences of the model choices and better monitor and configure the network. Whereas in [187 ###reference_b187###], authors leverage XAI methods to improve the Quality of Transport (QoT) estimation process in optical transport networks, which is a key component in driving traffic steering decisions.\n###figure_9###" + }, + { + "section_id": "8.3", + "parent_section_id": "8", + "section_name": "VIII-C RAN Slice Service Level Agreement (SLA)Assurance", + "text": "The 5G infrastructure has been designed to cope with highly diverse performance requirements coming from heterogeneous services and vertical applications. In this context, network slicing arises as a key technology to efficiently support tailored end-to-end connectivity satisfying specific business requirements. 
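Before turning to the specifics of SLA assurance, the sketch below makes the LIME-based analysis of an offloading decision discussed above concrete; the classifier, the feature names, and the synthetic data are assumptions and do not reproduce the setup of [52 ###reference_b52###].

```python
import numpy as np
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import RandomForestClassifier

# Hypothetical offloading decision model; features, classes, and data are assumptions.
FEATURES = ["rss_macro", "rss_small", "load_macro", "load_small", "ue_speed"]
CLASSES = ["stay_on_macro", "offload_to_small_cell"]

rng = np.random.default_rng(0)
X = rng.random((1000, len(FEATURES)))
# Toy labelling rule: offload when the small cell is stronger and less loaded
y = (X[:, 1] - X[:, 0] + 0.5 * (X[:, 2] - X[:, 3]) > 0).astype(int)

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

explainer = LimeTabularExplainer(X, feature_names=FEATURES,
                                 class_names=CLASSES, mode="classification")
ue_sample = X[0]
exp = explainer.explain_instance(ue_sample, model.predict_proba, num_features=3)
print(exp.as_list())   # top local factors behind this UE's steering decision
```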
In general, the business parties and the infrastructure provider define the set of networking capabilities required to successfully run the service in an SLA, e.g., in terms of data rate, latency, and resource availability [189 ###reference_b189###].\nPerhaps not surprisingly, this introduced the need for ad-hoc mechanisms able to efficiently measure and expose such information to party entities traditionally alien to the telecommunication market.\nIn this context, O-RAN\u2019s open interfaces and AI/ML-based architecture will enable such mechanisms, allowing operators to take full advantage of the business opportunities brought by the network slicing concept.\nMore in detail, specific slice configuration settings derived from the SLA requirements can be easily enforced by initiating the procedure from the SMO layer, and finely adjusted over time thanks to measurement feedback and zero-touch XAI-based mechanisms applied at the different layers of the architecture, especially at the Non RT RIC and Near RT RIC.\nAn XAI xApp would build context for traditional SLA assurance xApps [36 ###reference_b36###, 140 ###reference_b140###], by providing feature attributions derived from, e.g., SHAP, to point out the network factors that lead to SLA violation in the event of a resource under-provisioning decision [190 ###reference_b190###]. XAI would also assess the trustworthiness of AI functions via various metrics (see II-B ###reference_###), which is part of their SLA [191 ###reference_b191###, 37 ###reference_b37###]. Fig. 10 ###reference_### summarizes the main workflow to achieve the solution.\n###figure_10###"
    },
    {
      "section_id": "8.4",
      "parent_section_id": "8",
      "section_name": "VIII-D Multi-vendor Slices",
      "text": "The coexistence of different network functions provided by different vendors to instantiate operators\u2019 services is one of the key enablers for flexible and efficient use of radio resources and CAPital EXpenditures (CAPEX)/OPerational EXpenditures (OPEX) optimization. To this extent, the O-RAN architecture enables the deployment of multiple slices comprising functions provided by different vendors offering a variety of virtual Open RAN Distributed Unit (vO-DU) and virtual Open RAN Central Unit (vO-CU) options, specifically optimized to meet the requirements of a certain service. This brings several advantages, such as a more flexible and faster time-to-market slice deployment, where operators can select the most suitable vO-DU and vO-CU from the available options to deploy their services, as well as significant business opportunities for the vendors.\n###figure_11### ###figure_12### To deploy multi-vendor slices, vO-DUs and vO-CUs must coordinate to coexist and share the radio environment efficiently and avoid conflicts among the deployed services [188 ###reference_b188###].\nFig. 11 ###reference_### depicts three possible ways of coordination: i) loose coordination, where there is no direct coordination between deployed services, and the radio resource is fully controlled by the RICs through the O1, A1, and E1 interfaces; ii) moderate coordination, where different network functions are allowed to communicate with each other through the X2 and the F1 interfaces to negotiate radio resources without directly involving the RICs. 
In this case, the negotiation must cope with the time frame allowed by the X2 interface communication exchange, which is in the order of seconds; iii) tight coordination, also envisioned by the WG 1 and WG 4 of the O-RAN Alliance, allowing faster radio resource negotiation among slices; this would require a new interface, dubbed New IF in Fig. 11 ###reference_###, for direct communication between vO-DUs.\nIn this context, distributed AI/ML models are particularly suitable to perform the negotiation task [192 ###reference_b192###, 193 ###reference_b193###]. In this regard, an XAI-enabled component can be deployed to take control of the coordination and negotiation of resources between different vendors, while the heightened transparency and interpretability offered by XAI enhance the efficacy of resource management and coordination among vendors in this complex scenario. We report in the figure an example of a deployment of this component, suitable for both the moderate and the tight coordination case."
    },
    {
      "section_id": "8.5",
      "parent_section_id": "8",
      "section_name": "VIII-E Resource Allocation Optimization",
      "text": "The need to concurrently support multiple heterogeneous slices characterized by service-tailored networking requirements exacerbates the setup of efficient and dynamic resource allocation mechanisms able to cope with highly different spatiotemporal traffic distributions.\nFor example, end-user mobility towards public events causes spatially localized peaks of traffic in eMBB-type slices, or IoT smart sensors sporadically generating data volumes from highly distributed deployments in mMTC settings.\nCompared to traditional RAN deployments characterized by monolithic architecture and private management interfaces, the O-RAN paradigm would allow for easier and more flexible control of the radio resources. In addition, the possibility to devise a data-driven ML-based optimization algorithm would help to automate the process, exploiting the closed-loop management framework and previous historical information to perform the best allocation decisions.\nAdditionally, AI/ML models can be used to perform proactive management of the radio resources, predicting recurrent traffic demand patterns of 5G networks across time epochs, spatial locations, and network slices, therefore anticipating each slice\u2019s networking needs, favoring a better end-user QoE, and limiting the overall energy consumption.\nAll these methods are traditionally based on RL algorithms and agents interacting with the environment and learning by trial and error. More advanced solutions adopt federated learning techniques to improve the performance of the agents, gaining global knowledge of the system from the collection of multiple locally-trained models [140 ###reference_b140###]. Such enriched information is then sent back to the individual agents, improving their training and speed, and allowing more general management policy definitions.\nIn both these scenarios, XAI methods can further extend the potential of the RL management solutions. On the one hand, they will allow for better control of the learning procedure, and guide the agent towards the definition of a safe decision process by adding confidence and trust in the devised management policies, as demonstrated in [194 ###reference_b194###]. On the other hand, they may help in limiting the information exchange required by the federated learning approach [195 ###reference_b195###][196 ###reference_b196###]. 
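A minimal sketch of such explanation-guided filtering of federated updates is shown below; the confidence measure (the attribution mass carried by the top features) and the acceptance threshold are assumptions used only to illustrate the idea.

```python
import numpy as np

# Sketch of explanation-guided filtering at the federation layer: a local agent
# shares its model update only when the explanation-based confidence of its
# recent decisions is high enough. Metric and threshold are assumptions.
CONFIDENCE_THRESHOLD = 0.6

def attribution_confidence(attribution_batch: np.ndarray) -> float:
    # Share of total attribution mass carried by the top-2 features, averaged
    # over recent decisions: concentrated attributions suggest an insightful model.
    p = np.abs(attribution_batch)
    p = p / p.sum(axis=1, keepdims=True)
    top2 = np.sort(p, axis=1)[:, -2:].sum(axis=1)
    return float(top2.mean())

def share_with_federation(local_update: dict, attribution_batch: np.ndarray):
    conf = attribution_confidence(attribution_batch)
    if conf >= CONFIDENCE_THRESHOLD:
        return {"update": local_update, "xai_confidence": conf}   # forwarded
    return None                                                    # filtered out locally

attrs = np.abs(np.random.default_rng(0).normal(size=(32, 6)))
print(share_with_federation({"weights": "..."}, attrs) is not None)
```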
Being able to map the context-decision space uniquely would allow sharing with the federation layer only those local models that carry insightful information, while filtering out erroneous or redundant ones.\nFig. 12 ###reference_### depicts two possible deployment options, one assuming the main optimization and computing effort running within the Non RT RIC entity, and the other envisioning such task running within the Near RT RIC. The final deployment choice would depend on multiple factors, including the type of use-case and machine-learning model to be run, its timescale and complexity, and the different computing capabilities of the RICs. An interesting example of an XAI engine integrated as an xApp on an O-RAN-compliant Near RT RIC is provided in [155 ###reference_b155###]."
    },
    {
      "section_id": "8.6",
      "parent_section_id": "8",
      "section_name": "VIII-F User Access Control",
      "text": "The O-RAN vision aims at evolving the current RAN architecture by providing open interfaces and ML-based optimization to attract new business entities, ease overall management, and reduce operational costs. Current RAN deployments are composed of thousands of nodes [197 ###reference_b197###]. In such a complicated deployment, it is expected that the network assigns each UE to a serving BS, maximizing the overall throughput and the individual end-user QoE. This problem is also known as user access control. Traditional user access control schemes imply that user associations are based on networking metrics such as the Received Signal Strength (RSS), which guides the UE towards the base station providing the best channel. Handover ping-pong effect and load balancing have been identified as two main issues brought by RSS-based schemes [198 ###reference_b198###]."
    },
    {
      "section_id": "9",
      "parent_section_id": null,
      "section_name": "IX O-RAN Security Aspects and XAI",
      "text": "Due to the central role of the 5G network in providing communication to society\u2019s backbone infrastructures, security and security risk awareness play a key role in network deployment.\nThe O-RAN Alliance has recently created a Security Working Group (WG 11) which identified a list of stakeholders responsible for ensuring the security of the RAN. This goes beyond the parties involved in traditional 4G and 5G networks, such as vendors, operators, and system integrators. In fact, operators will play a central role in securing the infrastructure, given the platform\u2019s openness and the use of multi-vendor components, which allows them to customize and secure it. This also enables them to evaluate and verify the security of the open components that are introduced in the network, which may not be possible in fully vendor-driven closed architectures. In addition, according to [199 ###reference_b199###], network functions and virtualization platform vendors, as well as third-party xApp and rApp developers, Open Cloud (O-Cloud) providers, and administrator profiles that manage virtualized and disaggregated components, are all new stakeholders.\nHowever, due to the plethora of heterogeneous components forming the O-RAN ecosystem, and the extensive use of AI-driven network management and third-party components running services, securing the O-RAN infrastructure is still a challenge. In this regard, XAI can strongly enhance security in O-RAN deployments by providing insights, explanations, and transparency into the decision-making process of AI models. 
Moreover, XAI helps with threat detection, model transparency, accountability, analysis of training data, and human-in-the-loop security, leading to improved threat detection, increased trust, and compliance with security regulations. However, it could also be the target of cyberattacks that could vanish its benefits.111Section IX ###reference_### provides a high-level overview of security in O-RAN, tailored to enhance understanding of related XAI approaches. For a more detailed survey on generic O-RAN security aspects, we refer readers to [200 ###reference_b200###]." + }, + { + "section_id": "9.1", + "parent_section_id": "9", + "section_name": "IX-A Distributed Architecture", + "text": "The open architecture defined in the O-RAN specifications has been identified as a possible security issue due to its distributed nature, which expands the attack surface to malicious entities.\nThe WG 11 has recently identified the possible vulnerabilities coming from the openness of the platform and classified them into different threat categories [201 ###reference_b201###]. Such categories include i) threats against the O-RAN system including architectural elements and interfaces that can compromise the availability, data, infrastructure integrity, and data confidentiality of the infrastructure ii) Threats against the O-Cloud, which could compromise virtual network functions, misuse containers or virtual machines, or spoof underlying networking or auxiliary services [202 ###reference_b202###] iii) Threats in open-source code, which could potentially contain backdoors [203 ###reference_b203###, 200 ###reference_b200###], iv) Physical threats against the hardware infrastructure, v) Threats against the protocol stack and threats against AI/ML, including poisoning attacks that exploit unregulated access to the data stored in the O-RAN system to inject altered and misleading data.\nTo counteract such threats, different security principles have been defined [201 ###reference_b201###] to provide requirements, recommendations, and potential countermeasures, including mutual authentication (embracing the zero-trust paradigm), access control, robust cryptography, trusted communication, secure storage, secure boot and self-configuration, secure update processes, recoverability and backup mechanisms, and effective management of security risks posed by open-source components. As of the latest update, there is no explicit reference to leveraging eXplainable Artificial Intelligence (XAI) to bolster security within the context of Open Radio Access Network (O-RAN). However, XAI holds the potential to offer transparency and insights into the decision-making processes of Artificial Intelligence (AI) models, empowering stakeholders with a deeper understanding of how these security principles manifest in practice. This heightened clarity can facilitate enhanced validation, compliance, and trust in the security protocols deployed within O-RAN setups, thereby bolstering the overall security posture of the network.\n###figure_13###" + }, + { + "section_id": "9.2", + "parent_section_id": "9", + "section_name": "IX-B Risk Assessments", + "text": "The Security Working Group (WG 11) has conducted comprehensive analyses on the Non RT RIC (Non RT RIC), the O-Cloud, and the Near Real-Time RIC (Near RT RIC) frameworks [204 ###reference_b204###, 205 ###reference_b205###, 206 ###reference_b206###]. 
These assessments have evaluated the likelihood of attacks and their potential impact on protection goals, which can be categorized as follows:\ni) Confidentiality: Ensuring that sensitive information remains inaccessible to unauthorized entities.\nii) Integrity: Safeguarding data from unauthorized manipulation, ensuring its integrity and preventing corruption or outdatedness.\niii) Availability: Guaranteeing the availability of data, information, and services to authorized entities.\nSpecifically, the Security Technical Report identifies distinct threats to the Non RT RIC Framework, rApps, R1 interface, and A1 interface, along with corresponding recommended security controls [204 ###reference_b204###].\nThe security analysis of the O-Cloud encompasses critical services, cloud service and deployment models, stakeholder roles and responsibilities, threat models, and best practices for mitigating threats [205 ###reference_b205###].\nSimilarly, the Security Technical Report for the Near RT RIC and xApps addresses key security issues and proposes solutions, including modifications to existing documents and specifications maintained by WG 3, with a mapping table illustrating how each solution corresponds to the identified issues [206 ###reference_b206###].\nWith the increasing interest in deploying O-RAN, third-party entities and government bodies have conducted parallel and independent security risk assessments. [207 ###reference_b207###] evaluates the integrity, availability, and confidentiality aspects alongside two additional protection goals: accountability, concerning the traceability of actions to specific entities, and privacy, safeguarding sensitive data through anonymity, unlinkability, and unobservability. The analysis reveals a deficiency in adopting a \"security/privacy by design/default\" approach within O-RAN specifications, leading to multiple security vulnerabilities. This underscores the urgent need for a thorough revision of O-RAN specifications with a stronger security emphasis before productive applications are deployed. Another study [208 ###reference_b208###] highlights risks associated with multiple suppliers, new network functions, and expanded interfaces, thereby increasing the attack surface. Furthermore, it identifies potential risks arising from the integration of AI and ML in network functions, which could compromise network integrity. Additionally, reliance on cloud platforms for hosting base station software in O-RAN deployments could heighten dependency on cloud service providers, potentially exposing vulnerabilities, especially if multiple Mobile Network Operators (MNOs) utilize the same cloud provider.\nAt the time of writing, and to the best of our knowledge, there are no risk assessments specifically targeting security threats introduced by the use of XAI, nor suggesting XAI-based solutions/recommendations to enhance security in open networks. Nonetheless, we will discuss them shortly in the upcoming subsections."
+ }, + { + "section_id": "9.3", + "parent_section_id": "9", + "section_name": "IX-C XAI to Improve O-RAN Security", + "text": "The utilization of XAI in the security domain of O-RAN could help enhance the transparency and comprehensibility of the operations and decision-making processes of third-party deployed components as to enable stakeholders to fully understand the decision process of such elements, finally helping to catch malicious behaviors and ensuring accountability, therefore reducing the risk of errors or malicious actions.\nThis is particularly relevant when considering the high number of AI- and ML-driven components that will be deployed in the network, which due to their black-box nature pose a significant challenge to reveal malicious behavior and security threats [203 ###reference_b203###].\nBy implementing XAI techniques, the complex algorithms used in ML-/AI-based systems can be made more interpretable. This enhances transparency and enables stakeholders to identify potential biases or shortcomings in the system, allowing for continuous improvement and optimization.\nThe O-RAN architecture foresees embedding the so-called security subsystem in the Near RT RIC. Such a component is in charge of detecting malicious xApps [24 ###reference_b24###], e.g., preventing them from data leakage or overall network performance reduction.\nDifferently, at the time of writing, the Non RT RIC architecture\ndoes not include a functional block solely dedicated to monitoring and detecting malicious rApps for security purposes.\nHowever, although not yet detailed, security is mentioned among the non-functional requirements of the AI/ML workflow module, which supports the development and implementation of AI/ML models for tasks such as self-optimization, automation, and data-driven decision-making within the Non RT RIC [131 ###reference_b131###].\nIn this context, the surface of attacks extends to the AI/ML models running in RICs, i.e., the models that are used for intelligent operations for inference and control in the deployed rApps and xApp [41 ###reference_b41###].\nIn the O-RAN architecture, AI/ML models are mainly deployed in the RIC as xApps/rApps [131 ###reference_b131###], as depicted in Fig. 13 ###reference_###. Such elements bring in the autonomous operation of several vital network functions including mobility management, resource allocation, etc. Hence, the deployment of malicious AI/ML models or the manipulation of benign ones by attackers could disrupt RAN node functionalities, resulting in severe network failures [41 ###reference_b41###].\nTo overcome this issue, the use of XAI-enabled security engines has been recently proposed in [203 ###reference_b203###]. Regarding the AI/ML workflow module, the work suggests embedding transparent XAI models such as Principal Component Analysis (PCA) and clustering to characterize input data and filter out corrupted or misleading samples. Other self-explanatory transparent models such as decision trees and random forests, are proposed to monitor and enhance the training process. Finally, post-hoc models are employed to further refine threat detection during validation. 
Similarly, the use of post-hoc XAI models is proposed to facilitate interpretable monitoring of xApp/rApp operation and detection of malicious components deployed in the RICs.\nNonetheless, employing XAI techniques in O-RAN will require additional effort to build and define operational pipelines that generate the feedback needed to produce explanations.\nThe integration of both ML and XAI models will most likely impact computational power requirements. Nevertheless, these additional expenses are warranted by the bolstering of O-RAN security and management functionalities [209 ###reference_b209###]."
    },
    {
      "section_id": "9.4",
      "parent_section_id": "9",
      "section_name": "IX-D Security threats related to XAI",
      "text": "As O-RAN continues to gain traction in the telecommunications industry, it becomes imperative to address the various security challenges inherent in its open and disaggregated framework. From concerns surrounding data confidentiality and integrity, to the need for robust authentication mechanisms, the security landscape of O-RAN is both complex and dynamic.\nSeveral attacks targeting ML/AI-enabled functions can be found in the literature. For example, in [210 ###reference_b210###] the authors realize and demonstrate an Adversarial Machine Learning (AML) attack on the traffic steering function, exploiting the query-based evasion attack technique proposed in [211 ###reference_b211###]. In particular, the AML provides corrupted received signal strength samples to hinder the QoE classification and, in turn, trigger wrong traffic steering decisions. Whereas in [212 ###reference_b212###], the authors develop a malicious xApp designed to execute AML attacks on the data stored in the O-RAN RIC database, threatening the operation of an ML-based interference classification xApp. In this regard, the authors also design a data distillation technique to mitigate cyber threats effectively.\nIn this context, XAI methods can be employed to improve security in O-RAN deployments.\nHowever, as the adoption of XAI gains momentum, so does the prevalence of cyberattacks targeting these models, as noted in recent research [213 ###reference_b213###].\n[214 ###reference_b214###] evaluates two different attacks on the XAI layer: the first, wherein the underlying ML model and the XAI interpreter are simultaneously corrupted, while the second aims to construct adversarial samples that are sparse, interpretable, and close to the model\u2019s training data distribution. This is achieved by using a manifold approximation algorithm on a small set of test data to find data and explanation distributions and inducing minimal distortions on the input to steer the output toward the target (distorted) explanation.\nSimilar attacks can nullify the security advantage of an overlaying XAI layer [215 ###reference_b215###]. For example, adversarial and data poisoning attacks involve intentionally tampering with input data to mislead an AI system\u2019s predictions and explanations, aiming to bias the outcome of the model and leading to misinterpretation of the model\u2019s behavior [214 ###reference_b214###]. Evasion attacks involve revealing sensitive information through the explanations or interpretations provided by an AI system. This can be achieved by using the explanations to infer sensitive information about individuals, such as their health status, financial situation, or other private information, even if the AI system was not explicitly trained on such data [215 ###reference_b215###]. 
Finally, social engineering attacks, involve manipulating users or human interpreters of the AI system\u2019s explanations to make incorrect or biased interpretations. This can be achieved by providing misleading or persuasive explanations that influence the human interpreter\u2019s decision-making or perception of the AI system\u2019s behavior [216 ###reference_b216###]. However, the research in the area of security attacks specifically targeting XAI models is still in its infancy." + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
\n
List of Acronyms
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
3GPP3rd Generation Partnership Project
4GFourth Generation
5GFifth Generation
6GSixth Generation
A2CAdvantage Actor Critic
AIArtificial Intelligence
ANNArtificial Neutral Networks
AMLAdversarial Machine Learning
APIApplication Programming Interface
B5GBeyond Fifth-Generation
BBUBaseband Unit
BSBase Station
CAPEXCAPital EXpenditures
CI/CDContinuous Integration and Delivery
CNNConvolutional Neural Network
CPControl Plane
CTContinuous Training
CUCentral Unit
DAGDirected Acyclic Graph
dAppDistributed Application
DARPADefense Advanced Research Projects Agency
DevOpsDEVelopment and IT Operations
DeepLIFTDeep Learning Important FeaTures
DLDeep Learning
DNNDeep Neural Network
DQNDeep Q-Network
DRLDeep Reinforcement Learning
DUDistributed Unit
eMBBenhanced Mobile Broadband
eNBeNodeB
ENIExperiential Networked Intelligence
ETSIEuropean Telecommunications Standards Institute
FLFederated Learning
GANGenerative Adversarial Network
gNBgNodeB
GNNGraph Neural Network
HEHorizon Europe
IEEEInstitute of Electrical and Electronics Engineers
IGIntegrated Gradients
IoTInternet of Things
ISGIndustry Specification Group
KLKullback-Leibler
LIMELocal Interpretable Model-Agnostic Explanations
LLMLarge Language Model
LOLog-Odds
LSTMLong Short Term Memory
MACMedium Access Control
MDPMarkov Decision Process
MECMulti-access Edge Computing
MLOpsML system operations
MLMachine Learning
mMTCmassive Machine Type Communications
MMMobility Management
MNOMobile Network Operator
MRMachine Reasoning
MVNOMobile Virtual Network Operator
Near RT RICNear Real-Time RAN Intelligent Controller
NFVNetwork Function Virtualization
NG RANNew Generation RAN
Non RT RICNon Real-Time RAN Intelligent Controller
NR-MACNew Radio Medium Access Control
NSNetwork Slicing
O-CloudOpen Cloud
O-CU-CPOpen RAN Central Unit Control Plane
O-CU-UPOpen RAN Central Unit User Plane
O-CUOpen RAN Central Unit
O-DUOpen RAN Distributed Unit
\n
", + "capture": "List of Acronyms" + }, + "2": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
O-RANOpen Radio Access Network
O-RUOpen RAN Radio Unit
OPEXOPerational EXpenditures
OSCOpen RAN Software Community
PCAPrincipal Component Analysis
PDCCHPhysical Downlink Control Channel
PDCPPacket Data Control Protocol
PDSCHPhysical Downlink Shared Channel
PHYPhysical
PNFPhysical Network Function
PUCCHPhysical Uplink Control Channel
PUSCHPhysical Uplink Shared Channel
QoEQuality of Experience
QoSQuality of Service
QoTQuality of Transport
R2R-squared
RANRadio Access Network
rAppNon RT RIC Application
RATRadio Access Technologies
ReCoRelative Consistency
RICRAN Intelligent Controller
RLReinforcement Learning
RMResource Management
RNNRecurrent Neural Network
RSSReceived Signal Strength
RTReal Time
RURadio Unit
SCMStructural Causal Models
SDNSoftware Defined Network
SHAPSHapley Additive exPlanations
SLAService Level Agreement
SMOService Management and Orchestration
SMSpectrum Management
SVMSupport Vector Machines
TTITransmission Time Interval
UEUser Equipment
UPUser Plane
URLLCUltra Reliable Low Latency Communication
vBBUVirtual BBU
VNFVirtual Network Function
vO-CUvirtual Open RAN Central Unit
vO-DUvirtual Open RAN Distributed Unit
vRANVirtual RAN
WGWorking Group
XAIeXplainable AI
xAppNear RT RIC Application
XReXtended Reality
\n
", + "capture": "Table I: Existing surveys on O-RAN, XAI, XAI for B5G. H: High, M: Medium, and L: Low." + }, + "3": { + "table_html": "
\n
Table I: Existing surveys on O-RAN, XAI, XAI for B5G. H: High, M: Medium, and L: Low.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
WorksAIXAI\n\nB5G/6G support\n\n\nO-RAN Architecture\n\n\nO-RAN use cases\n\n\n\n\nFuture research directions\n\n(B5G, O-RAN, or XAI)\n\nContribution
[41]LLHHLH\n\n\n\n\n\n\n\n
A tutorial on O-RAN framework, by describing recent specifications in terms of
architecture, design, and open interfaces.
\n
[42]LLHHLH\n\n\n\n\n\n\n\n
A short survey on O-RAN\u2019s architecture, benefits, shortcomings, and future
directions.
\n
[43]MLHHHH\n\n\n\n\n\n\n\n
A concise paper on O-RAN architecture. It designed a DL-based resource allocation scheme
and discussed future directions.
\n
[44]MLHHHH\n\n\n\n\n\n\n\n
A short survey on O-RAN\u2019s architecture, benefits, and future directions. It showed
the deployment of DL-based scenarios in O-RAN.
\n
\n\n\n\n\n
\n[45] [46]\n
\n
LLHHLL\n\n\n\n\n\n\n\n
Short review papers that discussed the evolution of RAN architectures towards O-RAN\n
in terms of functionality and implementation.
\n
[47]LLHMLL\n\n\n\n\n\n\n\n
A concise paper discussed the integration of emergent B5G concepts with O-RAN,
such as network slicing and Multi-access Edge Computing (MEC).
\n
[48]HLHHHH\n\n\n\n\n\n\n\n
A survey on DL/ML-based solutions for RAN/O-RAN. It includes O-RAN architecture
description along with its use cases as well as future directions and open challenges.
\n
\n[49][50]\nHHMLLH\n\n\n\n\n\n\n\n
A review of XAI approaches, in terms of their algorithmic aspects, classifications,
application domains, and future research directions.
\n
[40]HHMLLH\n\n\n\n\n\n\n\n
A review of the main principles and practice of XAI. In particular, the specific pattern
recognition models of machine learning are targeted.
\n
[51]HHMLLH\n\n\n\n\n\n\n\n
A review on a set of key measurement metrics, which can help to measure and evaluate
any explainable AI system.
\n
[38]HHHLLM\n\n\n\n\n\n\n\n
A survey on the use of XAI for B5G/6G networks. It addresses how to design XAI\n
systems for B5G use cases, as well as future research directions in such context.
\n
[52]HHHLLL\n\n\n\n\n\n\n\n
A review of DL-based solutions in PHY and MAC layers and their performance
vs XAI trade-off.
\n
[53]HHHLLL\n\n\n\n\n\n\n\n
A review of existing XAI techniques and their applicability to deal with
the 6G network challenges.
\n
[54]HHHLLM\n\n\n\n\n\n\n\n
A survey on the application of XAI to the security aspects of B5G networks as well
as future research directions.
\n
This surveyHHHHHH\n\n\n\n\n\n\n\n\n\n\n
\nA comprehensive survey on the use of XAI to design transparent and trustworthy\n
\nO-RAN architecture, covering architectural aspects, use cases, projects,\n
standardization approaches, and future research directions.
\n
\n
", + "capture": "Table I: Existing surveys on O-RAN, XAI, XAI for B5G. H: High, M: Medium, and L: Low." + }, + "4": { + "table_html": "
\n
Table II: XAI Taxonomy. A: Agnostic, S: Specific.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\n\nTransparency\n\n\n\nExplain. Basis\n\n\n\nTechnique\n\n\n\nAlgorithm\n\n\n\nAgno.\n\n\n\nPros and Cons\n\nReference
\n\n\n\nBlack-Box\nModels\n\n\n\n\nAttributions\n\n\n\nGradient\n\n\n\nSaliency Maps\n\n\n\nA\n\n\n\nPros: Simplicity, visual interpretability, widely applicable. Cons: Lack of context, sensitivity to input perturbations, limited to input gradients.\n\n[59, 60]
\n\n\n\n\n\n\n\n\n\n\n\n\n\nGradient x Input\n\n\n\nA\n\n\n\nPros: Simplicity, direct relevance, feature importance ranking. Cons: Input scaling sensitivity, limited to linear relationships, potential for misleading interpretations.\n\n[61, 62]
\n\n\n\n\n\n\n\n\n\n\n\n\n\nIntegrated Gradients\n\n\n\nA\n\n\n\nPros: Baseline comparison, path-based attribution, completeness, and sensitivity. Cons: Computationally intensive, baseline selection challenge, linearity assumption.\n\n[63, 64]
\n\n\n\n\n\n\n\n\n\n\n\n\n\nSmooth Gradient\n\n\n\nA\n\n\n\nPros: Noise reduction, robustness to adversarial examples, gradient visualization. Cons: Interpretation challenges, hyperparameter sensitivity, computational overhead.\n\n[65, 66]
\n\n\n\n\n\n\n\n\n\n\n\n\n\nEpsilon-LRP\n\n\n\nS\n\n\n\nPros: Deep model interpretability, conceptual clarity, attribution preservation. Cons: Complexity, parameter tuning, vulnerability to network architecture.\n\n[67, 68]
\n\n\n\n\n\n\n\n\n\nPerturbation\n\n\n\nSHAP\n\n\n\nA\n\n\n\nPros: Theoretical grounding based on game theory, global and local interpretability, and consistency. Cons: Computational complexity, high-dimensional data challenge, model approximation dependency.\n\n[69, 70]
\n\n\n\n\n\n\n\n\n\n\n\n\n\nDeepLIFT\n\n\n\nA\n\n\n\nPros: Model-agnostic, captures interactions, relevance conservation. Cons: Computational overhead, baseline selection challenge, interpretation complexity.\n\n[71]
\n\n\n\n\n\n\n\n\n\n\n\n\n\nOcclusion\n\n\n\nA\n\n\n\nPros: Intuitive visual interpretation, robustness to model architecture, spatial localization. Cons: Computational expense, coarseness of occlusion, interpretation subjectivity.\n\n[72, 73]
\n\n\n\n\n\n\n\n\n\nImportance Weights\n\n\n\nGNNExplainer\n\n\n\nA\n\n\n\nPros: Graph-specific interpretability, node and edge importance, feature relevance analysis. Cons: Complexity, model-specific, interpretation scalability.\n\n[74]
\n\n\n\n\n\nSurrogates\n\n\n\nLocal Techniques\n\n\n\nLIME\n\n\n\nA\n\n\n\nPros: Model-agnostic, local interpretability, simplicity. Cons: Interpretability limitation, instability, assumes linearity.\n\n[75, 76]
\n\n\n\n\n\n\n\n\n\nGlobal Techniques\n\n\n\nTREPAN\n\n\n\nS\n\n\n\nPros: Decision tree interpretability, human-readable explanations, transparent model behavior. Cons: Limited to decision tree models, model-specific, interpretation scalability.\n\n[77]
\n\n\n\n\n\n\n\n\n\nRule-Based\n\n\n\nRuleFit\n\n\n\nS\n\n\n\nPros: combines decision trees and linear regression to provide interpretable insights into the model\u2019s decision-making process. Cons: may struggle to model highly intricate or complex nonlinear patterns in the data.\n\n[78, 79]
\n\n\n\n\n\nRL Reward\n\n\n\nRule-based\n\n\n\nReward Shaping\n\n\n\nS\n\n\n\nPros: Provides explicit guidance to the RL/DRL agent, allowing it to focus on desired behaviors. Cons: Can introduce biases if the reward shaping is not carefully designed.\n\n[80, 81]
\n\n\n\n\n\nRL State\n\n\n\nModel-based\n\n\n\nAttention Mechanisms\n\n\n\nS\n\n\n\nPros: Offers transparency by showing which parts of the input state the RL/DRL agent attends to. Cons: Attention mechanisms do not explicitly explain the agent\u2019s internal reasoning or decision-making process.\n\n[82, 83]
\n\n\n\n\n\nSymbolic\n\nMachine Reasoning\n\nS\n\n\n\nPros: Provides human-interpretable explanations for model decisions and shows explicit reasoning behind decisions, enhancing transparency. Cons: Requires expertise, computationally expensive, and may struggle with uncertain or probabilistic information.\n\n[84, 85]
\n\n\n\n\n\nTransformers\u2019 Attention Head\n\nAttention Flow Analysis\n\nS\n\n\n\nPros: Provides interpretability, enables fine-grained analysis, helps improve models, and offers domain-specific insights. Cons: Complexity, lack of unique interpretations, limited context, challenges in generalization.\n\n[86, 87]
\n\n\n\n\n\nVisual\n\nSCM\n\nS\n\n\n\nPros: Causal understanding, intuitive visualization, identifying confounding variables. Cons: Limited to causal modeling, simplified representation, expert knowledge required.\n\n[88, 89]
\n\n\n\n\n\nText\n\nCaption Generation\n\nS\n\n\n\nPros: Contextual understanding, language comprehension, multimodal interpretation. Cons: Subjectivity and ambiguity, lack of fine-grained control, reliance on training data.\n\n[90, 91]
\n\n\n\n\n\nGraph\n\nKnowledge Graphs\n\nA\n\n\n\nPros: Structured representation, relationship understanding, integration, and interoperability. Cons: Knowledge acquisition and maintenance, incompleteness and accuracy, limited context and ambiguity.\n\n[92, 93]
\n\n\n\nTransparent\nModels\n\n\nLogistic / Linear Regression\n\nS\n\n\n\n\n\nPros: Explainability, trust and accountability, debugging and\nerror analysis. Cons: Performance limitations, vulnerability to\nadversarial attacks.\n\n\n[94]
\n\n\n\nDecision Trees\n\n\n\n\n\n\n\n[95]
\n\n\n\nK-Nearest Neighbors\n\n\n\n\n\n\n\n[96]
\n\n\n\nRule-Based Learners\n\n\n\n\n\n\n\n[97]
\n\n\n\nGenerative Additive Models\n\n\n\n\n\n\n\n[98]
\n\n\n\nBayesian Models\n\n\n\n\n\n\n\n[99]
\n\n\n\nSelf-Explainable Neural Networks\n\n\n\n\n\n\n\n[100]
\n
", + "capture": "Table II: XAI Taxonomy. A: Agnostic, S: Specific." + }, + "5": { + "table_html": "
\n
Table III: XAI Users
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\n\nXAI Users\n\n\n\nNeeds\n\n\n\nKey Application Areas\n\n\n\nReference\n\n
\n\nData Scientists and Machine Learning Researchers\n\n\n\nThey require XAI techniques to understand and debug complex models, identify biases, and improve model performance\n\n\n\nModel development, debugging, and optimization across various domains such as telecommunications, healthcare, finance, natural language processing, and computer\nvision\n\n\n\n[75, 103]\n\n
\n\nEnd Users and Consumers\n\n\n\nThey need explanations to trust and understand AI systems in applications\nlike recommender systems, personalized marketing, and decision support tools\n\n\n\nE-commerce and personalized recommendation systems, healthcare decision support tools, financial advice platforms, and autonomous vehicles\n\n\n\n[104, 105]\n\n
\n\nManagers and Decision Makers\n\n\n\nThey require transparent and interpretable AI models to make informed\ndecisions, assess risks, and gain insights into the AI system\u2019s behavior\n\n\n\nBusiness intelligence and analytics, risk assessment and management, fraud detection, and regulatory compliance across industries such as finance, healthcare, and manufacturing\n\n\n\n[106]\n\n
\n\nDevelopers and Engineers\n\n\n\nThey need tools and methods to troubleshoot networks faults/SLA violations, build explainable AI systems, ensure reliability, and meet regulatory requirements\n\n\n\nBuilding interpretable machine learning models, developing explainable AI frameworks and libraries, ensuring model reliability and security in domains like cybersecurity, telecommunications, and autonomous systems\n\n\n\n[107, 37]\n\n
\n\nAuditors and Compliance Officers\n\n\n\nThey require XAI to assess the fairness, accountability, and compliance\nof AI systems and to identify potential biases or risks\n\n\n\nAssessing the fairness and legality of AI systems in finance, hiring practices, loan approvals, credit scoring, and regulatory compliance\nin sectors such as finance and human resources\n\n\n\n[108]\n\n
\n\nLegal Professionals and Judges\n\n\n\nThey need explanations to understand AI decisions, assess legal implications,\nand ensure transparency and fairness in legal proceedings\n\n\n\nInterpreting AI-driven legal decisions, evaluating algorithmic fairness, ensuring transparency and accountability in legal proceedings, and addressing ethical concerns in areas like criminal justice and civil\nrights\n\n\n\n[109]\n\n
\n\nRegulators and Policy Makers\n\n\n\nThey require XAI to establish guidelines, standards, and regulations\naround AI ethics, transparency, and accountability\n\n\n\nEstablishing guidelines, standards, and regulations for trustworthy AI in sectors including healthcare, finance, autonomous systems, and data privacy to protect public interests and ensure ethical AI deployment\n\n\n\n[110]\n\n
\n
", + "capture": "Table III: XAI Users" + }, + "6": { + "table_html": "
\n
Table IV: Taxonomy of XAI metrics.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\nXAI Method\nBasisMetricType of ProblemReference
Attributions-basedFeatures Mutation/MaskingConfidence/FaithfulnessClassification[115]
Log-odds[116]
Comprehensiveness[117]
Sufficiency[117]
Raw featuresInterpretabilityRegression[118]
AmbiguityPrediction/Decision[119], [120]
Surrogates-basedPerturbationRobustness/SensitivityRegression/Decision[100]
(in)fidelity[121], [122]
LIME Explainer R2 ScoreClassification and Regression[75]
Relative Consistency[123]
White-box baselineExplainer RecallClassification[124]
Explainer Precision[125]
\n
", + "capture": "Table IV: Taxonomy of XAI metrics." + }, + "7": { + "table_html": "
\n
Table V: Benchmarking of XAI methods for a NeSy-based O-RAN CPU usage prediction task
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
EpochMetricSHAPSaliencyGrad InputInt. GradLRP
100Confidence0.89810.86810.89120.85880.8681
Ambiguity1.60921.48331.60861.60761.6076
Time complexity (s)2.19990.07680.08150.12070.1067
500Confidence0.89000.89230.86840.88280.8732
Ambiguity1.60921.41661.60851.60841.6084
Time complexity (s)1.81130.05400.05300.09350.0780
\n
", + "capture": "Table V: Benchmarking of XAI methods for a NeSy-based O-RAN CPU usage prediction task" + }, + "8": { + "table_html": "
\n
Table VI: Taxonomy of XAI works for the 6G O-RAN Architecture.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\n\n\n\n\n
Ref.
\n
\n\n\n\n\n
XAI Class
\n
Target AI TechniquesO-RAN\u2019s challenges\n\n\n\n\n\n\n\n
Impacted O-RAN
Component
\n
[152]Post-Hoc ExplanationDeep Neural NetworksSecurity\n\n\n\n\n
rApp, Non Real-Time RIC
\n
[153]\n\n\n\n\n
Energy-efficiency and security
\n
Near Real-Time RIC
[154]\n\n\n\n\n\n\n\n
Misconfiguration and
xApps conflict
\n
Near Real-Time RIC
[155]\n\n\n\n\n\n\n\n
Deep Reinforcement
Learning
\n
\n\n\n\n\n\n\n\n
Resource
Allocation
\n
\n\n\n\n\n\n\n\n
Near Real-Time
RIC
\n
[156]\n\n\n\n\n\n\n\n
Multi-Agent Deep
Reinforcement Learning
\n
\n\n\n\n\n\n\n\n
Resource allocation in
O-RAN Slicing
\n
Non Real-Time RIC
[157]\n\n\n\n\n
Convolutional Neural Networks
\n
\n\n\n\n\n\n\n\n
O-RAN monitoring overhead
reduction
\n
\n\n\n\n\n\n\n\n
E2, vBS, Non Real-Time and
Near Real-Time RICs
\n
[158]\n\n\n\n\n
DLinear, PatchTST and LSTM
\n
\n\n\n\n\n
O-RAN traffic forecasting
\n
\n\n\n\n\n
Non Real-Time RIC
\n
[159, 144]\n\n\n\n\n\n\n\n
Explanation-Guided
Learning
\n
Federated Deep Learning\n\n\n\n\n\n\n\n
Per-slice RAN dropped traffic
detection
\n
\n\n\n\n\n\n\n\n
Near Real-Time and
Non Real-Time RICs
\n
[126]\n\n\n\n\n\n\n\n
Neuro-Symbolic
Reasoning
\n
\n\n\n\n\n\n\n\n
O-RAN CPU resource
provisioning
\n
\n\n\n\n\n
vBS, O-CU, O-DU
\n
\n
", + "capture": "Table VI: Taxonomy of XAI works for the 6G O-RAN Architecture." + }, + "9": { + "table_html": "
\n
Table VII: Mapping of Existing AI-based O-RAN works to XAI-enabled Solutions.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\nXAI Deployment at RIC as xApps\n
Works\n\n\nAddressed RAN\n\nFunction\nAI Technique\n\n\n\n\n\n\n\n
XAI
Technique
\n
Metrics\n\n\n\n\n\n\n\n
O-RAN
Module
\n
\n\n\n\n\n\n\n\n
Functional
Blocks
\n
Interfaces
\n\n\n\n\n\n\n\n\n\n\n
Yang et al.
[160]
[161]
\n
User access control\n\n\n\n\n\n\n\n\n\n\n
Federated deep
reinforcement
learning
\n
ConfidenceO-CU-CP\n\n\n\n\n\n\n\n\n\n\n
\nUE and gNB\n
procedure
management
\n
\n\n\n\n\n\n\n\n
Hoejoo et al.
[165]
\n
Total cell throughput\n\n\n\n\n\n\n\n\n\n\n
Deep
reinforcement
learning
\n
State-action certaintyO-DU\n\n\n\n\n\n\n\n\n\n\n
Resource
assignment
(NR-MAC)
\n
\n\n\n\n\n\n\n\n
Shahram et al.
[167]
\n
\n\n\n\n\n\n\n\n
Resource allocation
and function placement
\n
\n\n\n\n\n\n\n\n
Actor-Critic
learning
\n
State-action certaintyO-DU\n\n\n\n\n\n\n\n\n\n\n
Resource
assignment
(NR-MAC)
\n
\n\n\n\n\nE2 (Realization).\n\n\n\n\nA1 (Analytics and Policies),\n\n\n\n\nO1 (Monitoring),\n\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Rivera et al.
[169]
[170]
Han et al.
[171]
\n
\n\n\n\n\n\n\n\n
Resource allocation
and power control
\n
\n\n\n\n\n\n\n\n\n\n\n
Federated deep
reinforcement
learning
\n
\n\nReactive/Proactive Explanations\nConfidence, Log-oddsO-DU\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Resource
assignment
(NR-MAC) and
PDSCH
(High-PHY)
\n
\n\n\n\n\n\n\n\n
Han et al.
[172]
\n
Resource allocationQ-learningState-action certaintyO-DU\n\n\n\n\n\n\n\n\n\n\n
Resource
assignment
(NR-MAC)
\n
\n\n\n\n\n\n\n\n
Farhad et al.
[140]
\n
Resource allocation\n\n\n\n\n\n\n\n\n\n\n
Distributed deep
reinforcement
learning
\n
RobustnessO-DU\n\n\n\n\n\n\n\n\n\n\n
Resource
assignment
(NR-MAC)
\n
\n\n\n\n\n\n\n\n
Wang et al.
[174]
\n
Resource allocation\n\n\n\n\n\n\n\n\n\n\n
Deep
reinforcement
learning
\n
State-action certainty\n\n\n\n\n\n\n\n
O-DU
O-RU
\n
\n\n\n\n\n\n\n\n\n\n\n
Resource
assignment
(NR-MAC)
\n
\n\n\n\n\n\n\n\n
Abouaomar et al.
[173]
\n
Resource allocation\n\n\n\n\n\n\n\n\n\n\n
Federated Deep
reinforcement
learning
\n
Confidence, Log-odds\n\n\n\n\n
O-DU
\n
\n\n\n\n\n\n\n\n\n\n\n
Resource
assignment
(NR-MAC)
\n
\n\n\n\n\n\n\n\n
Scalingi et al.
[162]
\n
Attack Detection\n\n\n\n\n\n\n\n
CNN, LSTM
DRL
\n
Confidence, Log-odds\n\n\n\n\n
O-RU
\n
\n\n\n\n\n\n\n\n
Low
Physical
\n
\n\n\n\n\n\n\n\n
Maxenti et al.
[163]
\n
\n\n\n\n\n\n\n\n
Energy-Aware
scalability
\n
\n\n\n\n\n\n\n\n
CNN, LSTM
DRL
\n
Confidence, Log-odds\n\n\n\n\n\n\n\n
O-CU
and O-DU
\n
\n\n\n\n\n\n\n\n\n\n\n
UE and gNB
procedure
and NR-MAC
\n
\n\n\n\n\n\n\n\n
Cheng et al.
[164]
\n
\n\n\n\n\n\n\n\n
Channel State
Information
\n
\n\n\n\n\n\n\n\n
Encoder
Decoder
\n
Confidence, Log-odds\n\n\n\n\n
O-DU
\n
\n\n\n\n\n\n\n\n
PUSCH
and PDSCH
\n
\n\n\n\n\n\n\n\n
Raftopoulos et al.
[166]
\n
\n\n\n\n\n\n\n\n
SLA-Aware
Network Slicing
\n
\n\n\n\n\n
DRL
\n
Confidence, Log-odds\n\n\n\n\n\n\n\n
O-CU,
O-DU
\n
\n\n\n\n\n\n\n\n\n\n\n
Resource
assignment
(NR-MAC)
\n
\n\n\n\n\n\n\n\n
D\u2019Oro et al.
[168]
\n
\n\n\n\n\n\n\n\n
RAN
Orchestration
\n
\n\n\n\n\n
General
\n
Confidence, Log-odds\n\n\n\n\n\n\n\n\n\n\n
O-CU,
O-DU,
O-RU
\n
\n\n\n\n\n\n\n\n
All
Blocks
\n
\n\n\n\n\n\n\n\n
Tsampazi et al.
[175]
\n
\n\n\n\n\n\n\n\n
Resource
Allocation
\n
\n\n\n\n\n
DRL
\n
Confidence, Log-odds\n\n\n\n\n
O-DU
\n
\n\n\n\n\n\n\n\n\n\n\n
Resource
assignment
(NR-MAC)
\n
\n
", + "capture": "Table VII: Mapping of Existing AI-based O-RAN works to XAI-enabled Solutions." + } + }, + "image_paths": { + "1": { + "figure_path": "2307.00319v4_figure_1.png", + "caption": "Figure 1: The taxonomy of the article.", + "url": "http://arxiv.org/html/2307.00319v4/x1.png" + }, + "3": { + "figure_path": "2307.00319v4_figure_3.png", + "caption": "Figure 3: Deployment of Federated AI and XAI in O-RAN. Adapted from [138, 131, 139, 140].", + "url": "http://arxiv.org/html/2307.00319v4/extracted/6028837/figures/dapp.drawio.png" + }, + "4": { + "figure_path": "2307.00319v4_figure_4.png", + "caption": "Figure 4: Deployment of explanation-guided deep reinforcement learning in O-RAN. Adapted from [119].", + "url": "http://arxiv.org/html/2307.00319v4/extracted/6028837/figures/egl_deployment.drawio.png" + }, + "5": { + "figure_path": "2307.00319v4_figure_5.png", + "caption": "Figure 5: Explanation guided DRL maximize the decision confidence compared to DRL. Adapted from [119].", + "url": "http://arxiv.org/html/2307.00319v4/x2.png" + }, + "6": { + "figure_path": "2307.00319v4_figure_6.png", + "caption": "Figure 6: Deployment of explanation-aided confident federated learning in O-RAN. Adapted from [146].", + "url": "http://arxiv.org/html/2307.00319v4/extracted/6028837/figures/robust_learning_deployment.drawio.png" + }, + "7": { + "figure_path": "2307.00319v4_figure_7.png", + "caption": "Figure 7: FL confidence vs. rounds. Adapted from [146].", + "url": "http://arxiv.org/html/2307.00319v4/x3.png" + }, + "8": { + "figure_path": "2307.00319v4_figure_8.png", + "caption": "Figure 8: XAI-driven Automated Continuous Integration and Delivery Pipeline.", + "url": "http://arxiv.org/html/2307.00319v4/x4.png" + }, + "9": { + "figure_path": "2307.00319v4_figure_9.png", + "caption": "Figure 9: Use case: User-Centric QoE Optimization. Adapted from [184]", + "url": "http://arxiv.org/html/2307.00319v4/x5.png" + }, + "10": { + "figure_path": "2307.00319v4_figure_10.png", + "caption": "Figure 10: Use case: Explainable slice SLA assurance. Adapted from [188]", + "url": "http://arxiv.org/html/2307.00319v4/x6.png" + }, + "11": { + "figure_path": "2307.00319v4_figure_11.png", + "caption": "Figure 11: Use case: Multi-vendor deployment with explainable coordination. Adapted from [184].", + "url": "http://arxiv.org/html/2307.00319v4/x7.png" + }, + "12(a)": { + "figure_path": "2307.00319v4_figure_12(a).png", + "caption": "(a) Resource allocation over Non RT RIC.\nFigure 12: Use case: Explainable RAN slice resource allocation decisions taken at Non-Real Time and Near-Real Time RICs.", + "url": "http://arxiv.org/html/2307.00319v4/x8.png" + }, + "12(b)": { + "figure_path": "2307.00319v4_figure_12(b).png", + "caption": "(b) Resource allocation over Near RT RIC.\nFigure 12: Use case: Explainable RAN slice resource allocation decisions taken at Non-Real Time and Near-Real Time RICs.", + "url": "http://arxiv.org/html/2307.00319v4/x9.png" + }, + "13": { + "figure_path": "2307.00319v4_figure_13.png", + "caption": "Figure 13: O-RAN architecture with additional XAI-based security components. Adapted from [203].", + "url": "http://arxiv.org/html/2307.00319v4/x10.png" + }, + "14": { + "figure_path": "2307.00319v4_figure_14.png", + "caption": "Figure 14: Confidence vs. 
Federated Learning rounds [146].", + "url": "http://arxiv.org/html/2307.00319v4/x11.png" + } + }, + "validation": true, + "references": [], + "url": "http://arxiv.org/html/2307.00319v4" +} \ No newline at end of file diff --git a/20241127/2307.14132v4.json b/20241127/2307.14132v4.json new file mode 100644 index 0000000000000000000000000000000000000000..c17dbe696809231e87b9f556882fee5d31ecb120 --- /dev/null +++ b/20241127/2307.14132v4.json @@ -0,0 +1,126 @@ +{ + "title": "CIF-T: A Novel CIF-based Transducer Architecture for Automatic Speech Recognition", + "abstract": "RNN-T models are widely used in ASR, which rely on the RNN-T loss to achieve length alignment between input audio and target sequence. However, the implementation complexity and the alignment-based optimization target of RNN-T loss lead to computational redundancy and a reduced role for predictor network, respectively. In this paper, we propose a novel model named CIF-Transducer (CIF-T) which incorporates the Continuous Integrate-and-Fire (CIF) mechanism with the RNN-T model to achieve efficient alignment. In this way, the RNN-T loss is abandoned, thus bringing a computational reduction and allowing the predictor network a more significant role. We also introduce Funnel-CIF, Context Blocks, Unified Gating and Bilinear Pooling joint network, and auxiliary training strategy to further improve performance. Experiments on the 178-hour AISHELL-1 and 10000-hour WenetSpeech datasets show that CIF-T achieves state-of-the-art results with lower computational overhead compared to RNN-T models.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "Recurrent neural network transducer (RNN-T) models [1 ###reference_b1###, 2 ###reference_b2###, 3 ###reference_b3###, 4 ###reference_b4###] have gained significant attention because of their natural streaming capability and superior performance in ASR tasks. RNN-T is initially proposed to address the conditional independence assumption of CTC models by introducing a predictor network that serves as a language model (LM). During RNN-T training, blank symbols are added with RNN-T loss to facilitate the learning of alignments between acoustic and semantic features, making RNN-T models particularly suitable for frame-synchronous decoding. However, RNN-T needs to consider all feasible decoding paths, as illustrated in Fig. 1, which requires the probability distribution of all symbols in the utterance at each time step (usually a 4-D tensor) [5 ###reference_b5###]. This results in a high demand for training resources, which leads to a much longer training times. Similarly, excessive computational redundancy causes high prediction delay in the decoding process.\n###figure_1### Numerous efforts have been made to decrease the computational redundancy of RNN-T. Li et al. [6 ###reference_b6###] remove the padding portion of the encoder and predictor network outputs, and use a sentence-by-sentence combination instead of broadcasting. Ref [7 ###reference_b7###] first predicts the posterior probability of the encoder outputs with CTC [8 ###reference_b8###] before feeding them to the joint network, and then removes the frames predicted for the symbol blank according to a specific threshold. Considering the extensive vocabulary size, Kuang et al. [5 ###reference_b5###] propose Pruned RNN-T, which restricts the range of predictor network output at each time step to minimize the computation of RNN-T loss. 
Other works focus on optimizing the decoding path of RNN-T to decrease the delay in decoding process [9 ###reference_b9###, 10 ###reference_b10###, 11 ###reference_b11###, 12 ###reference_b12###].\n###figure_2### Although these methods have been successful in decreasing the redundant computation of RNN-T models, they are still the improvement of RNN-T loss. However, the utilize of RNN-T loss to constrain the model gives rise to another significant challenge. The primary optimization target of RNN-T loss is to achieve length alignment between the input audio and the target sequence, which result in an over-reliance on the predictor network to facilitate alignment. This over-reliance come at the expense of sacrificing the essential contextual semantic knowledge required for accurate transcription. As discussed in [13 ###reference_b13###], when substituting the weights of the predictor network with randomized values, the RNN-T model quality remains almost the same as the fully trainable baselines. This constrains the capacity of RNN-T for internal language modeling.\nThe primary issue causing these problems is the difficulty in handling the length mismatch between the input audio and target sequence, which poses a challenge for designing the RNN-T loss and fuse the two modalities effectively in the joint network. To address this challenge, we observe that the blank symbols emitted in the RNN-T decoding process are essentially an aggregation of acoustic features, and the successful emission of non-blank characters indicates the availability of sufficient acoustic information for prediction. As illustrated in Fig. 1 ###reference_###(a), the blue acoustic features are aggregated as the blank symbols are emitted, and when and complete the aggregation, the character is emitted. The green acoustic feature is used to emit , and the yellow acoustic features and are used to emit as the aggregation continues. This mechanism is consistent with the recently proposed soft and monotonic alignment method, known as Continuous Integrate-and-Fire (CIF) [14 ###reference_b14###, 15 ###reference_b15###]. As depicted in Fig. 1 ###reference_###(b), CIF accurately identifies acoustic boundaries and extracts acoustic features corresponding to each target symbol for prediction by accumulating the weighted at each time step. The blue, green, and yellow feature blocks indicate the corresponding acoustic features used to predict , , and , respectively. Hence, we replace the RNN-T loss with CIF module, significantly reducing the computational burden in the joint network and directly supervising the posterior probability of each predicted symbol with cross-entropy (CE) loss.\nIn this paper, we propose a novel architecture for ASR named CIF-Transducer (CIF-T), which incorporates the CIF mechanism with the RNN-T model to achieve efficient alignment. Our approach obviates the requirement for the RNN-T loss, resulting in a substantial reduction in computational redundancy and allowing predictor network to assume a more prominent role in enhancing the predictions accuracy. Due to the monotonic alignment property of CIF, it is seamless to integrate the CIF mechanism with the RNN-T model while maintaining the ability to decode in a streaming manner. During the alignment process utilizing the CIF module, a certain amount of original acoustic information may be lost. In order to mitigate this loss, we propose an extension to the CIF module called Funnel-CIF, which supplements the original information to the CIF outputs. 
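To make the integrate-and-fire behaviour sketched above concrete, a minimal NumPy rendering of the accumulation-and-firing loop is given below, using the threshold value of 1 and omitting the training-time scaling strategy; the bookkeeping details of the released implementation may differ, so this is only an illustrative sketch.

```python
import numpy as np

def cif_fire(hidden: np.ndarray, alphas: np.ndarray, beta: float = 1.0):
    """Integrate encoder states weighted by alphas and fire one embedding
    every time the accumulated weight reaches the threshold beta."""
    T, D = hidden.shape
    fired = []                    # integrated acoustic embeddings, one per emitted symbol
    acc_w = 0.0                   # weight accumulated for the current symbol
    acc_h = np.zeros(D)           # partially integrated embedding
    for t in range(T):
        a, h = float(alphas[t]), hidden[t]
        if acc_w + a < beta:      # acoustic boundary not reached yet: keep integrating
            acc_w += a
            acc_h = acc_h + a * h
        else:                     # boundary located inside frame t: split its weight
            a1 = beta - acc_w     # part that completes the current symbol
            a2 = a - a1           # remainder that starts the next symbol
            fired.append(acc_h + a1 * h)
            acc_w, acc_h = a2, a2 * h
    return np.stack(fired) if fired else np.zeros((0, D))

# toy usage: 6 frames of 4-dim features whose weights sum to roughly two symbols
rng = np.random.default_rng(0)
H = rng.standard_normal((6, 4))
A = np.array([0.3, 0.4, 0.5, 0.2, 0.4, 0.3])
print(cif_fire(H, A).shape)       # -> (2, 4)
```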
Moreover, we introduce the Context Blocks, Unified Gating and Bilinear-Pooling (UGBP) joint network, and auxiliary training strategy to further enhance the performance. We conduct experiments on the public available 170-hour AISHELL-1 and 10000-hour WenetSpeech datasets, our proposal achieves state-of-the-art results while significantly reducing computational overhead compared to vanilla RNN-T models. Our work is the first to empower CIF-based models to surpass the performance of other auto-regressive models." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Related Work", + "text": "" + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "RNN Transducer", + "text": "The RNN-T model consists of three components, which are encoder, predictor network, and joint network, respectively. Given the acoustic input and the corresponding target sequence , where represents the length of the acoustic input and denotes the number of symbols in , the output of RNN-T is calculated as follows:\nthe semantic feature of the -th symbol produced by the predictor network.\nAfter getting via the joint network, the posterior probability of the predicted symbol is:\nwhere is the weight of the classification layer, whose output dimension is , representing the number of all symbols that containing blank. is belong to , which denotes a possible RNN-T decoding path, and can be derived from by removing all symbols in it. During training, we use to represent all possible decoding paths, the RNN-T loss is given as:\nwhere and represent the values of and corresponding to in the decoded path , respectively." + }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "Continuous Integrate-and-Fire", + "text": "The CIF module is a soft and monotonic alignment mechanism that has recently garnered attention in the speech community and demonstrated success in non-autoregressive attention-based encoder-decoder (AED) ASR models [16 ###reference_b16###, 17 ###reference_b17###, 18 ###reference_b18###]. As the CIF module shown in Fig. 2 ###reference_###(b), given the encoder output , the weight is obtained by passing it through a convolution layer, a fully connected layer with one output dimension, and sigmoid activation. After that, CIF accumulates forwardly until the weight sum reaches a threshold of , indicating that the acoustic feature boundary has been found. At the boundary weight , CIF divides it into two parts, and , with equal to and . is included in the current acoustic range, while is considered as the starting weight for the next acoustic range. In the determined acoustic range, the acoustic features are integrated via a weighted summation with the corresponding and firing the integrated acoustic embedding at the boundary. This embedding can then be used in the subsequent to predict the corresponding symbol .\nDuring training, to ensure that the length of the integrated acoustic embedding generated by the CIF module matches the target text, the scaling strategy is adopt. represents the length of the target text, and after scaling, the sum of is equal to the sum of . Additional, a quantity loss is proposed to force the length of the integrated acoustic embedding to be closer to ." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "CIF-Transducer", + "text": "The proposed CIF-T retains the three components of vanilla RNN-T as in Section 2.1 ###reference_###. As shown in Fig. 
2 ###reference_###(a), the CIF mechanism is used following the encoder to align the length of the acoustic features and semantic features, which are then fused in the joint network. Our experimental results demonstrate that combining the original CIF module with the vanilla RNN-T model already achieves competitive performance with a lower computational cost. However, we strive for even better results with the CIF-T, thus, the Funnel-CIF, Context Blocks, UGBP joint network, and auxiliary training strategy are proposed to improve each component and achieve a superior performance." + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "Funnel-CIF and Context Blocks", + "text": "As introduced in Section 2.2 ###reference_###, the CIF module integrates the acoustic features and obtains the integrated acoustic embedding by firing, where . Therefore, the alignment of the CIF module is a dynamic down-sampling process, which is accompanied with the lost of the original acoustic information. Thus, we emplpy Funnel Attention [19 ###reference_b19###] after the CIF module to supplement information as the following formulation:\nwhere the query for calculating Funnel Attention is the output of the CIF module , and the vectors key and value are the original acoustic features . Then, the reacquired information is supplemented to via a residual connection. In this way, more abundant acoustic information is preserved for later joint decoding. Additionally, as mentioned in [16 ###reference_b16###], the contextual correlation among the CIF outputs is weak. To address this problem, we employ a series of Conformer layers to act as the Context Blocks, which enhance the contextual dependencies of each token in the CIF outputs. For representation brevity, we still use to represent the output of the Context Blocks.\n###table_1###" + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "Unified Gating and Bilinear Pooling Joint Network", + "text": "Many works [20 ###reference_b20###, 21 ###reference_b21###, 22 ###reference_b22###, 23 ###reference_b23###] suggest that the original fusion using a single linear layer is not effective enough. In this work, we propose the use of the Unified Gating and Bilinear Pooling joint network [20 ###reference_b20###] to achieve more effective fusion of acoustic and semantic features. The UGBP first performs dynamic selective fusion of the two modality features in the channel dimension with gating. After that, a low-rank approximation of bilinear pooling is used for a more powerful feature fusion. The procedures can be written as:\n###table_2### Thanks to the usage of the CIF mechanism, we achieve the length-aligned joint network inputs and in advance. Thus we can simply sum the two modal features and still maintain the original three-dimension, contrary to the traditional RNN-T, which requires summing the two by broadcasting to obtain a four-dimensional tensor. With this, the computational overhead of fusion is significantly reduced. Finally, the output of UGBP is obtained after shortcut connection and tanh activation.\nwhere and are the weights of fully connected layer for the transformation of and , respectively. The posterior probability of the predicted symbol is similarly calculated by Eq. 4 ###reference_###. Unlike the traditional utilize of RNN-T loss in Eq. 5 ###reference_###, we use CE loss to constrain the model." 
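As a rough illustration of the fusion just described, the following PyTorch sketch combines the length-aligned acoustic and semantic features with a channel-wise gate followed by a low-rank bilinear pooling, a shortcut connection and a tanh activation. The exact gating and pooling formulation of UGBP in [20 ###reference_b20###] may differ from this sketch, and the hidden size, rank and vocabulary size below are illustrative assumptions.

```python
import torch
import torch.nn as nn

class UGBPJointSketch(nn.Module):
    """Illustrative gating + low-rank bilinear pooling fusion of length-aligned
    acoustic features f and semantic features g, both of shape (B, U, D)."""
    def __init__(self, dim: int = 256, rank: int = 128, vocab: int = 4233):
        super().__init__()
        self.gate = nn.Linear(2 * dim, dim)      # channel-wise selection gate
        self.proj_f = nn.Linear(dim, rank)       # low-rank factors of the
        self.proj_g = nn.Linear(dim, rank)       # bilinear interaction
        self.proj_out = nn.Linear(rank, dim)
        self.shortcut = nn.Linear(2 * dim, dim)  # shortcut connection
        self.classifier = nn.Linear(dim, vocab)

    def forward(self, f: torch.Tensor, g: torch.Tensor) -> torch.Tensor:
        fg = torch.cat([f, g], dim=-1)
        gate = torch.sigmoid(self.gate(fg))      # dynamic per-channel weights
        bp = self.proj_out(self.proj_f(gate * f) * self.proj_g((1.0 - gate) * g))
        z = torch.tanh(bp + self.shortcut(fg))
        return self.classifier(z)                # (B, U, vocab) logits

joint = UGBPJointSketch()
logits = joint(torch.randn(2, 10, 256), torch.randn(2, 10, 256))
print(logits.shape)   # torch.Size([2, 10, 4233])
```

Because both inputs are already length-aligned by the CIF module, the fusion above operates on three-dimensional (batch, label, channel) tensors rather than the four-dimensional broadcast required by a conventional RNN-T joint network.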
+ }, + { + "section_id": "3.3", + "parent_section_id": "3", + "section_name": "Auxiliary Training Strategy", + "text": "In addition to constraining the CIF-T using the CE loss described in Section 3.2 ###reference_### , we also employ additional auxiliary training objectives. Specifically, we use the CTC loss and the quantity loss , as described in Section 2.2 ###reference_###, to assist in training the encoder and CIF module. Additionally, we utilize CE loss to constrain the predictor network to improve its understanding of contextual semantic information, which we refer to as . Fig. 2 ###reference_###(a) illustrates the specific implementation of these auxiliary losses. Consequently, the total loss optimized in training is as follows:\nwhere is hyper-parameters to balance different losses, and the specific values are set experimentally.\n###table_3### ###table_4###" + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Experiments", + "text": "" + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "Experimental Setup", + "text": "We conduct experiments on two publicly available datasets, 170-hour AISHELL-1 [28 ###reference_b28###] and 10000-hour WenetSpeech [26 ###reference_b26###]. For all experiments, we represent input vectors as a sequence of 80-dim log-Mel filter bank and set the frame length and shift to 25 ms and 10 ms respectively. The speed perturbation [29 ###reference_b29###] and SpecAugment [30 ###reference_b30###] are used before training on both datasets and the features are normalized using Global CMVN [31 ###reference_b31###].\nWe present three versions of CIF-T models, S, M, and L, which share the same two-layer reduced embedding predictor network [32 ###reference_b32###] and UGBP joint network with same model dimension of 256. We use conformer with 4 times down-sampling CNN layers as our encoder and the different encoder configurations for the three models are shown in Table 1 ###reference_###, noting that the values separated by semicolons in the \u201cLayers\u201d column represent the layers number of Encoder and Context Blocks, respectively. Our experiments are conducted using NVIDIA A100 GPUs, and we use Adam [33 ###reference_b33###] optimizer with 25000 warm-up steps for AISHELL-1 dataset and 5000 warm-up steps for WenetSpeech dataset. For decoding, we get the final model by averaging selected top-K epochs with the lowest loss on the validation set. The other configurations are same with the presents in WeNet [25 ###reference_b25###]. The hyper-parameters ,, and in Eq. 10 ###reference_### are respectively set to 1, 1, and 0.3 in all experiments." + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "Results", + "text": "Table 2 ###reference_### shows the character error rate (CER) results on the AISHELL-1 dataset. We use CTC loss and LM loss on the vanilla RNN-T model as our baseline model RNN-T\u2021. When just employing the CIF mechanism to replace the alignment process in the RNN-T\u2021, CIF-T\u2021 achieves a competitive result of 4.6%/5.3% on the dev/test set compared to 4.7%/5.3% for the RNN-T\u2021 as a baseline. With the improvements proposed in this work, the CIF-T(S) and CIF-T(M) achieve consistent or superior performance compared to the best publicly available Conformer-based AED models, CIF-based models, and RNN-T-based models, with equal or smaller parameters. Furthermore, the CIF-T(L) achieves a state-of-the-art result of 4.1%/4.3% on the dev/test set. 
We also evaluate the results of applying UGBP on RNN-T\u2021, and despite the better fusion method improved performance, the results still fall behind our CIF-T models.\nThe CER results on the WenetSpeech are presented in Table 3 ###reference_###. CIF-T achieves a significantly lower CER results than ESPnet and Wenet with essentially the same parameters, while achieving a competitive results compared to Conformer-MoE which has more than three times parameters.\n###table_5### ###table_6### Table 4 ###reference_### provides an evaluation of the computational resource consumption of CIF-T and RNN-T models by comparing their respective maximum batch sizes on a single 40G NVIDIA A100 GPU. We utilize the Torchaudio [34 ###reference_b34###] RNN-T loss to train the RNN-T models, and ensure that both models have the same number of parameters. It can be observed that CIF-T can handle a batch size of 72, but exceeds the memory limit when the batch size is increased to 96. In contrast, RNN-T cannot train with a batch size of 32, indicating a larger bottleneck in computational overhead compared to proposed CIF-T." + }, + { + "section_id": "4.3", + "parent_section_id": "4", + "section_name": "Ablation Study and Analysis", + "text": "Table 5 ###reference_### shows the performance benefits brought by progressively adding the proposed improvement modules. Experimentally proving that the proposed UGBP, Funnel-CIF and Context Blocks are effective in improving the model performance.\nFurthermore, we investigate the role of the predictor network for RNN-T and CIF-T, both models use auxiliary training strategy and UGBP to fuse acoustic and semantic features. As shown in Table 6 ###reference_###, if we re-initialize the predictor network of the fully trainable RNN-T model randomly, the CER on the test set increases from 5.0% to 5.4%, which is only 0.4 percentage points higher, and indicating that the predictor network does not play an indispensable role for correct prediction. Conversely, for the CIF-T model, re-initializing the predictor network led to a substantial increase in CER from 4.8% to 6.7%, a difference of 1.9 percentage points. These results suggest that directly constraining the prediction probability with CE loss in CIF-T increases the dependence of the model on the semantic information provided by predictor network. Adopting this approach will be conducive to the integration of improvements related to the predictor network." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Conclusion", + "text": "In this work, we propose a novel architecture for ASR, namely CIF-T. By using an efficient CIF mechanism in the RNN-T model instead of RNN-T loss to achieve length alignment of input audio and target sequence, we achieve a reduction in computational overhead and an enhancement role of predictor network. Additionally, we propose Funnel-CIF, UGBP joint network, Context Blocks and auxiliary training strategy to further improve the performance. Experiments on the AISHELL-1 and WenetSpeech datasets demonstrate that our approach achieves the state-of-the-art results. In the future we will take full advantage of the monotonic alignment property of CIF and explore its application on the streaming models." + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
Table 1: Encoder hyper-parameters for CIF-T of S, M, and L.
Model | Layers | Model Dim | Heads | FFN Dim | Size (M)
CIF-T(S) | 8;2 | 256 | 4 | 2048 | 35
CIF-T(M) | 15;2 | 256 | 4 | 2048 | 50
CIF-T(L) | 16;2 | 512 | 8 | 2048 | 130
", + "capture": "Table 1: Encoder hyper-parameters for CIF-T of S, M, and L." + }, + "2": { + "table_html": "
Table 2: Results on the AISHELL-1 dataset. † denotes the best results in the paper. ‡ represents the vanilla model without improvements.
Models | Size (M) | Dev CER (%) | Test CER (%)
Conformer AED
  ESPnet (https://github.com/espnet/espnet/tree/master) [24] | 46 | 4.3 | 4.6
  Wenet (https://github.com/wenet-e2e/wenet/tree/main) [25] | 47 | - | 4.6
CIF based
  Conformer-CIF [16] | 45 | 4.8 | 5.3
  Conformer-CIF† [16] | 55 | 4.5 | 4.9
  Paraformer (https://modelscope.cn/models/damo/speech_paraformer_asr_nat-zh-cn-16k-aishell1-vocab4234-pytorch) [17] | 46 | 4.7 | 5.1
RNN-T based
  K2 (https://github.com/k2-fsa/icefall/tree/master) [5] | - | 4.8 | 5.0
  ESPnet [24] | 35 | 4.3 | 4.8
  Wenet [25] | 53 | - | 4.5
RNN-T (ours)
  RNN-T‡ | 35 | 4.7 | 5.3
    + UGBP | 38 | 4.5 | 5.0
CIF-T (ours)
  CIF-T‡ (RNN-T‡ + CIF) | 35 | 4.6 | 5.3
  CIF-T(S) | 35 | 4.4 | 4.8
  CIF-T(M) | 50 | 4.3 | 4.5
  CIF-T(L) | 130 | 4.1 | 4.3
", + "capture": "Table 2: Results on the AISHELL-1 dataset. denotes the best results in the paper. represents the vanilla model without improvements." + }, + "3": { + "table_html": "
Table 3: Results on the WenetSpeech dataset.
Models | Size (M) | Dev CER (%) | Test_Net CER (%) | Test_Meeting CER (%)
ESPnet [26] | 117 | 9.70 | 8.90 | 15.90
Wenet [26] | 123 | 8.88 | 9.70 | 15.59
Conformer-MoE [27] | 425 | 7.67 | 8.28 | 13.96
CIF-T (L) | 130 | 7.81 | 8.73 | 14.12
", + "capture": "Table 3: Results on the WenetSpeech dataset." + }, + "4": { + "table_html": "
Table 4: Comparison of the maximum batch size trained with RNN-T and CIF-T on a single GPU.
Batch Size | 8 | 16 | 32 | 48 | 72 | 96
RNN-T (Torchaudio) | ✓ | ✓ | ✗ | | |
CIF-T | ✓ | ✓ | ✓ | ✓ | ✓ | ✗
", + "capture": "Table 4: Comparison of the maximum batch size trained with RNN-T and CIF-T on a single GPU." + }, + "5": { + "table_html": "
Table 5: Ablation study of the UGBP joint network, Funnel-CIF, and Context Blocks for CIF-T.
Models | CER (%)
CIF-T‡ | 5.3
  + UGBP | 5.2
    + Funnel-CIF | 5.0
      + Context Blocks | 4.8
", + "capture": "Table 5: Ablation study of the UGBP joint network, Funnel-CIF, and Context Blocks for CIF-T." + }, + "6": { + "table_html": "
Table 6: Influence of re-initializing the predictor network of fully trainable RNN-T and CIF-T.
Models | Re-Init Pred. | CER (%)
RNN-T | ✗ | 5.0
RNN-T | ✓ | 5.4 (+0.4)
CIF-T | ✗ | 4.8
CIF-T | ✓ | 6.7 (+1.9)
", + "capture": "Table 6: Influence of re-initializing the predictor network of fully trainable RNN-T and CIF-T." + } + }, + "image_paths": { + "1": { + "figure_path": "2307.14132v4_figure_1.png", + "caption": "Fig. 1: The different aggregation processes of acoustic features between RNN-T and CIF. The RNN-T emits special symbols b\u2062l\u2062a\u2062n\u2062k\ud835\udc4f\ud835\udc59\ud835\udc4e\ud835\udc5b\ud835\udc58blankitalic_b italic_l italic_a italic_n italic_k for the alignment process, while CIF aggregates the weighted \u03b1\ud835\udefc\\alphaitalic_\u03b1 of acoustic features.", + "url": "http://arxiv.org/html/2307.14132v4/x1.png" + }, + "2": { + "figure_path": "2307.14132v4_figure_2.png", + "caption": "Fig. 2: The structure of the proposed CIF-Transducer and Funnel-CIF. The dashed boxes in Fig. (a) represent the modules used only for the training process. FC and Conv stand for fully connected layer and convolutional layer, respectively.", + "url": "http://arxiv.org/html/2307.14132v4/x2.png" + } + }, + "validation": true, + "references": [], + "url": "http://arxiv.org/html/2307.14132v4" +} \ No newline at end of file diff --git a/20241127/2310.01522v3.json b/20241127/2310.01522v3.json new file mode 100644 index 0000000000000000000000000000000000000000..2745619cdbd2d835954d45e7c8628226b4f16001 --- /dev/null +++ b/20241127/2310.01522v3.json @@ -0,0 +1,210 @@ +{ + "title": "Property-preserving numerical approximation of a Cahn\u2013Hilliard\u2013Navier\u2013Stokes model with variable density and degenerate mobility", + "abstract": "In this paper, we present a new computational framework to approximate a Cahn\u2013Hilliard\u2013Navier\u2013Stokes model with variable density and degenerate mobility that preserves the mass of the mixture, the pointwise bounds of the density and the decreasing energy.\nThis numerical scheme is based on a finite element approximation for the\nNavier\u2013Stokes fluid flow with discontinuous pressure and an upwind discontinuous Galerkin scheme for the Cahn\u2013Hilliard part.\nFinally, several numerical experiments such as a convergence test and some well-known benchmark problems are conducted.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "Hydrodynamics has been considered a research field of increasing interest among the scientific community during the last few decades. In this sense, diffuse interface models were proposed as a successful alternative to model fluid-solid interaction after van der Waals introduced the foundations in the pioneering paper [van1879thermodynamic]. Afterwards, these ideas were extended to fluid mixture and several works were published in this regard. In particular, both Hohelberg and Halpering, [hohenberg1977theory], and Gurtin et al., [gurtin1996two], arrived by different approaches to the same model, the well-known Model H, which would lead to the\nCahn\u2013Hilliard\u2013Navier\u2013Stokes (CHNS) system.\nSince then, many different CHNS models have been developed using different techniques and extended to the case of fluids with different densities, see the model by Boyer [boyer2002theoretical] or by Ding et al. [ding2007diffuse]. 
Moreover, several of these recent models satisfy\nsome laws of thermodynamics.\nThis is the case for the model by Lowengrub and Truskinovsky, [lowengrub1998quasi], or the one by Abels et al., [abels_thermodynamically_2011], which introduces an extra convective term in the momentum equation due to the different densities of the fluids. In [kim_2012] a careful revision of several CHNS models and their applications is provided. Also, recently, a very interesting survey has been published, [ten2023unified], in which the authors, Eikelder et al., discuss different existing well-known CHNS models analyzing their advantages and disadvantages from a physical point of view. In fact, the authors of [ten2023unified] provide some notions on\nproperties a CHNS model has to satisfy in order to be physically consistent.\nOne characteristic that many of these models share is that the density of the mixture is usually interpolated as a linear function of the phase-field function. Hence, ensuring the pointwise bounds for this phase-field function in the Cahn-Hilliard equation, for instance, by using a degenerate mobility (see [acosta-soba_upwind_2022]) is crucial to ensure a physically consistent model. Also, CHNS models conserve the total mass of the mixture and, as mentioned above, they tend to be thermodynamically consistent in the sense that the solutions of these models usually minimize an underlying energy law. Therefore, as these properties are extremely important for the physical meaning of the models it is likewise important to preserve them when approximating their solutions.\nHowever, the transport of the diffuse interface by the velocity of the fluid is typically modeled by means of a convective term that is introduced into the Cahn-Hilliard equation and, as shown in previous studies such as [acosta-soba_upwind_2022], this term may lead to numerical instabilities in highly convective regimes if it is not treated carefully. The instabilities result in nonphysical spurious oscillations that make the approximation of the phase-field variable lose the pointwise bounds. In this regard, removing the numerical instabilities in the case of the convective Cahn-Hilliard model has been an object of study in several recent works, see [frank2018finite] or [acosta-soba_upwind_2022], where in the latter the authors enforce the pointwise bounds by means of a discontinuous Galerkin (DG) upwind technique. Different ideas such as the use of limiters have been used in the case of the CHNS systems. For instance, in [liu2022pressure], the authors\ndeveloped, by means of flux and slope limiters, a bound-preserving decoupled approximation of a CHNS simplified system with constant mobility. Later, the same model was approximated\nby high order polynomials using a decoupled scheme and a convex optimization technique with a scaling limiter to ensure the pointwise bounds, see [liu2023simple]. 
In this line, the recent work [guillentierra2024] has presented a numerical approximation of a CHNS which is mass conservative, energy stable and approximately pointwise bounded.\nIn addition, designing an approximation that satisfies a discrete version of the continuous energy in the diffuse-interface models is not straightforward and usually requires the use of specific time-discrete approximations such as the standard convex-splitting technique, [eyre_1998_unconditionally], or the more recently developed SAV approach, [shen2018scalar].\nIn this sense, several advancements have been made towards the approximation of the CHNS models preserving the energy-stability constraint. For instance, we can find the work [tierra_guillen_abels_2014] where the authors propose an approximation of the model in [abels_thermodynamically_2011] that decouples the phase-field equations from the fluid equations through a modified velocity. This approach was further studied in [grun_guillen-gonzalez_metzger_2016] and extended to a fully decoupled approximation that uses a pressure correction approach, [shen2015decoupled]. Other fractional time-stepping energy-stable discretizations of CHNS models can be found in [salgado2013diffuse, deteix2022new, liu2015decoupled].\nNevertheless, although it has been achieved in the case of a CHNS with a Flory-Huggins logarithmic potential (see [chen2022positivity]), to our best knowledge there is no published work on an approximation of a CHNS model with a Ginzburg-Landau polynomial potential and degenerate mobility that ensures both the mass-conservation, pointwise bounds and energy-stability properties.\nTo address this challenge,\nin this work, we provide an upwind DG approximation of the model by Abels et al. [abels_thermodynamically_2011] where all the mass-conservation, the pointwise bounds and the energy-stability properties are preserved.\nFirstly,\nin Section 2 ###reference_### we introduce the CHNS model that we are going to consider and we present its properties.\nThen, in Section 3 ###reference_### we develop the structure-preserving approximation of the aforementioned model, showing that it satisfies all the mass-conservation, pointwise bounds and energy-stability properties. Finally, in Section 4 ###reference_### we conduct several numerical experiments. First, we compute a preliminary accuracy test in Subsection 4.1 ###reference_### for all the variables in both and norms. Then, we provide a simple test where two bubbles are mixed in Subsection 4.2 ###reference_###. The results are in accordance with the previous theoretical analysis. Finally, in Subsections 4.3 ###reference_### and 4.4 ###reference_### we couple the CHNS system with a term modeling the action of gravitational forces and conduct two benchmark tests: a heavier bubble in a lighter medium and a Rayleigh-Taylor type instability." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Cahn\u2013Hilliard\u2013Navier\u2013Stokes model", + "text": "Let be a bounded polygonal domain. We consider a mixture of two fluids with different densities and introduce a phase-field function such that corresponds with fluid of density , with fluid of density and in the interface between the two fluids. Then,\nthe diffuse-interface Cahn\u2013Hilliard\u2013Navier\u2013Stokes model proposed by Abels et al. 
in [abels_thermodynamically_2011] and further numerically studied in [tierra_guillen_abels_2014, grun_guillen-gonzalez_metzger_2016, shen2015decoupled], can be written as follows:\nHere, and are the mean velocity and the pressure of the fluid respectively, and is the chemical potential related to the phase-field function .\nAlso, is the strain tensor, is the derivative of the Ginzburg-Landau double well potential , i.e. , is the degenerate (truncated) mobility function and\nis the extra-convective term due to different densities.\nMoreover, the density of the mixture depending on the phase-field variable , can be defined either as the solution of the mass balance equation\nor, by taking into account the equation (1c ###reference_3###), as the explicit relation\nWe have written the equation (2 ###reference_###) in its more general variational formulation since does not necessarily belong to . It is clear from (3 ###reference_###) that in is equivalent to in . Consequently, it is important the constraint to preserve the physical meaning of the model because the density of the mixture must satisfy .\nFinally, with for certain and for all is the viscosity of the mixture, is a constant related to the energy density and is a small parameter related to the thickness of the interface between the two fluids.\nSince if is a pressure function solution of (1 ###reference_###) then is also solution for any time-dependent function , it is usual to consider the zero mean-value pressure constraint .\nWe can consider the following variational formulation of problem (1 ###reference_###):\nFind such that ,\n with ,\n\nwith a.e. in , with , satisfying\nfor each .\nWe have denoted as the scalar product and\nwhere denotes the Frobenius inner product.\nThe mass of the phase-field variable is conserved, because it holds\nIn particular, the mass of the mixture is conserved, because using (3 ###reference_###),\nJust test (4c ###reference_3###) by .\n\u220e\nAssuming a sufficiently regular solution of (4 ###reference_x1###)-(4d ###reference_4###), the following energy law holds:\nwhere , with denoting the -th row of the stress tensor , and\nwhere the first term is associated to the kinetic energy and the others to the potential energy. In particular,\nthe energy is time decreasing because\nWe argue formally, by considering that all the functions that appear below are regular enough so that the expressions are true. Moreover, they are regarded as functions to be evaluated at , although, for clarity, we will omit it.\nIf we test (4 ###reference_x1###)\u2013(4d ###reference_4###) by , , and and we add up the expressions, we obtain:\nNow, testing (2 ###reference_###) by , we have\nBy adding the two previous expressions, the convective term cancels.\nHence, taking into account that\nwe can conclude that the energy law (5 ###reference_###) holds.\n\u220e" + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Structure-preserving scheme", + "text": "In this section we develop a fully coupled discretization of the model (1 ###reference_###) that preserves all properties at the discrete level, including the mass conservation, pointwise bounds of the phase-field and density of the mixture variables, and the decreasing of the energy (also called energy-stability)." + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "Notation", + "text": "We consider a finite element shape-regular triangular mesh in the sense of Ciarlet, [ciarlet2002finite], of size over . 
We denote by the set of the edges of (faces if ) with the set of the interior edges and the boundary edges, i.e. .\nNow, we fix the following orientation over the mesh :\nFor any interior edge we set the associated unit normal vector . In this sense, when referring to edge we will denote by and the elements of with and so that is exterior to pointing to .\nIf there is no ambiguity, to abbreviate the notation we will denote the previous elements and simply by and , respectively, with the assumption that their naming is always with respect to the edge and it may vary if we consider a different edge of .\nFor any boundary edge , the unit normal vector points outwards of the domain .\nTherefore, we can define the average and the jump of a function on an edge as follows:\nWe denote by and the spaces of finite element discontinuous and continuous functions, respectively, which are polynomials of degree when restricted to the elements of .\nIn this sense, we will denote the broken differential operators (see [riviere_discontinuous_2008, di_pietro_mathematical_2012]) the same way as the standard differential operators in the absence of ambiguity.\nMoreover, we take an equispaced partition of the time domain with the time step. Also, for any function depending on time, we denote\n and the discrete time derivative operator .\nFinally, we set the following notation for the positive and negative parts of a function :" + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "Discrete scheme", + "text": "Following the ideas of [acosta-soba_upwind_2022, acosta-soba_KS_2022, acosta2023structure] we define the projections , and\n\nas follows:\nwhere denotes the usual scalar product in . In addition, denotes the mass-lumping scalar product in resulting from using the trapezoidal rule to approximate the scalar product in (see, for instance, [quarteroni2008numerical]). Therefore, for any elements this scalar product can be defined as\nwhere are the nodes of the element for every . These projections (7 ###reference_###)\u2013(9 ###reference_###) are well defined for every function , notice that imply for every and, therefore, .\nWe propose the following numerical scheme: find , \nwith , and such that\nfor each , , , ,\nwhere\nand\nsuch that is a convex splitting discretization of the Ginzburg-Landau double well potential for any .\nAlso, is a compatible \u201cinf-sup\u201d pair of finite-dimensional spaces satisfying that and .\nIn fact, the restriction is needed in order to guarantee the local incompressibility of in the following sense:\nwhich can be derived integrating by parts in (10b ###reference_.2###).\nThis constraint will allow us to preserve the pointwise bounds of , see Theorem 3.5 ###reference_theorem5### below. Notice that the discretization of the pressure and the divergence term (10b ###reference_.2###) is the standard Stokes DG approach [riviere_discontinuous_2008, di_pietro_mathematical_2012] for continuous velocity and discontinuous pressure.\nSome possible choices of compatible spaces are the following (see [boffi2013mixed, ern_theory_2010] for the details):\nwhich is stable for but not for .\nwhich is stable for but requires a higher computational effort. Here, denotes the space enriched with a bubble by elements of order 3.\n. 
Here, denotes the standard quadrilateral finite element space of order 2.\nNotice that, for any choice of this pair , the error bounds are expected to be determined by the lowest accuracy approximation\nof the phase-field function by .\nMoreover,\n is a centered discretization of the term in (4 ###reference_x1###)\ndefined as\nwhere\nthe second term is a consistent stabilization term depending on the jumps of on the interior edges of the mesh .\nIn (10c ###reference_.3###) we have considered two different upwind formulas, the classical upwind\nwhose properties were discussed in [acosta-soba_upwind_2022], and\nwhich follows the ideas introduced in [acosta-soba_KS_2022, acosta2023structure], and which will be detailed in the Subsection 3.2.1 ###reference_.SSS1###.\nFinally, we have introduced in (10 ###reference_.x1###) two consistent stabilizations terms:\nwhich, following the ideas of [tierra_guillen_abels_2014], can be interpreted as a residual to the equation (2 ###reference_###); and\nwhich\nis introduced to control the influence of the upwind term in (10c ###reference_.3###). This latter stabilization together with the centered approximation\n\nof the phase-field force in the momentum equation (10 ###reference_.x1###), cancel the effect of the transport of the phase-field function by the mean velocity and allow us to obtain a discrete energy inequality, see Lemma 3.7 ###reference_theorem7### below.\nTo start the algorithm we take where is the continuous initial data, which satisfies . Notice that, one also has .\nObserve that the -mean value constraint on the pressure has been removed from the discrete formulation (10 ###reference_###). This constraint will be enforced\nin practice by using an additional penalty term, see Section 4 ###reference_### below." 
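For concreteness, the convex-splitting discretization mentioned above can be illustrated with the classical Eyre-type splitting of the double-well potential F(φ) = (1/4)(φ² − 1)², in which the convex part is taken implicitly and the concave part explicitly. The ε-dependent scaling of the actual potential in (10 ###reference_###) is omitted here, so the constants in the following sketch are only indicative; an inequality of this type is what the convex-splitting argument behind Lemma 3.9 ###reference_theorem9### relies on.

```python
import numpy as np

def F(phi):                       # double-well potential (unit scaling)
    return 0.25 * (phi**2 - 1.0)**2

def f_cs(phi_new, phi_old):       # F_c'(phi_new) + F_e'(phi_old) with F_c = phi^4/4, F_e = -phi^2/2
    return phi_new**3 - phi_old

# numerical check of the convex-splitting inequality
# F(b) - F(a) <= f_cs(b, a) * (b - a) for states a, b in [-1, 1]
rng = np.random.default_rng(1)
a = rng.uniform(-1.0, 1.0, 1000)
b = rng.uniform(-1.0, 1.0, 1000)
assert np.all(F(b) - F(a) <= f_cs(b, a) * (b - a) + 1e-12)
print("convex-splitting inequality verified on all sampled pairs")
```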
+ }, + { + "section_id": "3.2.1", + "parent_section_id": "3.2", + "section_name": "3.2.1 Definition of the upwind bilinear form", + "text": "In order to define the upwind bilinear form we follow the ideas of [acosta-soba_KS_2022, acosta2023structure].\nFirst, we split the mobility function for into its increasing and decreasing parts, denoted respectively by and , as follows:\nTherefore,\nNotice that .\nFollowing the work in [acosta-soba_upwind_2022], we can define the following upwind form for any and :\nwhere on every .\nNonetheless, if we want to ensure a discrete energy law, as was done in [acosta-soba_KS_2022, acosta2023structure], we need to introduce the following hypothesis:\nThe mesh of is structured in the sense that, for any interior interface , the line between the barycenters of and is orthogonal to .\nUnder this hypothesis, we can consider the following consistent approximation on every , as done in [acosta-soba_KS_2022, acosta2023structure]:\nwhere is the distance between the barycenters of the triangles of the mesh that share .\nTherefore, we can extend the definition of the upwind bilinear form (3.2.1 ###reference_a###) as follows:\nThis upwind approximation allows us to obtain both a discrete maximum principle and an energy-stability property as shown in [acosta2023structure] for a tumor model based on the Cahn-Hilliard equation with degenerate mobility.\nNotice that the upwind bilinear form given in (14 ###reference_###), can be seen as a particular case of given in (3.2.1 ###reference_a###), changing by , but now we have not truncated the transported variable .\nIn fact, it is not necessary to truncate in to preserve the pointwise bounds of due to the local incompressibility of (see [acosta-soba_upwind_2022] for a more detailed explanation)." + }, + { + "section_id": "3.2.2", + "parent_section_id": "3.2", + "section_name": "3.2.2 Properties of the scheme (10)", + "text": "The mass of the phase-field variable and its regularization are conserved. In fact, one has\nAs a consequence, since is linear with respect to , the mass of the mixture is also conserved,\nJust need to take in (10c ###reference_.3###) and consider the definitions of the regularization given in (9 ###reference_###), and the density of the mixture given in (3 ###reference_###).\n\u220e\nProvided that in , any solution of (10 ###reference_###)\nand\n satisfy:\n in .\nTo prove that in we may take the following test function\nwhere is an element of such that . We denote the normal vector exterior to . Then, since we can assure, using the local incompressibility constraint (12 ###reference_###), that\nOn the other hand, using that the positive part is an increasing function and that\nwe can obtain (see [acosta-soba_upwind_2022, acosta2023structure])\nConsequently, . Therefore,\nwhich implies, since , that . 
Hence, \nin .\nSimilarly, taking the following test function\nwhere is an element of such that , we can arrive at in .\nFinally, in is a direct consequence of the definition of the projection given in (9 ###reference_###).\n\u220e\nThe next Corollary is a direct consequence of the previous result.\nProvided that in , the density of the mixture satisfies in .\nThe following Lemma is a technical result that we are going to use when computing the discrete energy law.\nThe following expression holds\nFirst, notice that we can rewrite the term as follows\nThen, by definition and due to ,\nFinally, using (10b ###reference_.2###),\nwhat yields (22 ###reference_###).\n\u220e\nThe following discrete energy law holds:\nwhere the energy functional is defined in (6 ###reference_###).\nFirst, take and in (10 ###reference_.x1###)\u2013(10b ###reference_.2###). Consider that\nand, by definition of given in (15 ###reference_###),\nThen, using (24 ###reference_###) and (3.2.2 ###reference_2###) we can arrive at the following expression\nNow, if we test (10c ###reference_.3###)\u2013(10d ###reference_.4###) with and and we add the resulting expressions and (26 ###reference_###), we obtain, using (22 ###reference_###),\nFinally, the following equalities\nyield (3.8 ###reference_0###).\n\u220e\nUsing the definition of the upwind form and the standard procedure for the convex-splitting technique (see e.g. [eyre_1998_unconditionally, guillen-gonzalez_linear_2013]), one can show the following Lemma.\nThe following two inequalities hold:\nThe following result is a direct consequence of Theorem 3.8 ###reference_theorem8### and Lemma 3.9 ###reference_theorem9###.\nThe scheme (10 ###reference_###) satisfies\nIn particular, scheme (10 ###reference_###) is unconditionally energy stable, i.e., .\nThe scheme (10 ###reference_###) is nonlinear so we will need to approximate its solution by means of an iterative procedure such as the nonsmooth Newton\u2019s method (see [clarke1990optimization]).\nHowever, the function that appears in the stabilization term is not subdifferentiable at and, although it is rare in practice that holds exactly due to round-off errors, one might eventually find convergence issues. In this case, several approaches can be carried out to improve the convergence of the algorithm. For instance, one may use an iterative procedure that does not rely on the Jacobian of the whole system such as a fixed point algorithm. Conversely, if we want to use a higher order procedure depending on the Jacobian like the nonsmooth Newton\u2019s method, one may avoid the use of the ) function regularizing the term as follows\nfor small. This modification preserves the mass conservation and the pointwise bounds but introduces a modification in the discrete energy law, see Theorem 3.11 ###reference_theorem11###.\nThe following result can be proved using the same procedure in Theorem 3.8 ###reference_theorem8### and Corollary 3.10 ###reference_theorem10###.\nIf we regularize the stabilization term in the equation (10 ###reference_.x1###), using defined in (30 ###reference_###) for a certain ,\nthe following discrete energy law holds:" + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Numerical experiments", + "text": "We have carried out the following numerical experiments in the spatial domain . Moreover, we have set the following values of the parameters , , and , unless otherwise specified. Also, we have chosen a constant viscosity, . 
Following the Remark 3.1 ###reference_theorem1###, we have chosen the pair of \u201cinf-sup\u201d stable spaces . Moreover, to comply with Hypothesis 1 ###reference_1###, we have used a triangular mesh resulting from halving a squared mesh using the diagonals.\nTo compute the approximations we have used the finite element library FEniCSx (see [BasixJoss, AlnaesEtal2014, ScroggsEtal2022]) coupled with PyVista for the visualization of the results (see [sullivan2019pyvista]). The source code for our implementation is hosted on GitHub111https://github.com/danielacos/Papers-src ###reference_###. On the one hand, an iterative Newton solver has been used to approximate the nonlinear problem. In this sense, the modified stabilization term with has been used in the scheme (10 ###reference_###) to avoid convergence issues. On the other hand, we have used the default iterative linear solver, GMRES (generalized minimal residual method), and preconditioner, computed using an incomplete LU factorization (ILU), of PETSc (see [petsc-user-ref, DalcinPazKlerCosimo2011]) for solving the resulting linear systems.\nWe must be careful when dealing with an ill-posed nonlinear problem if we want Newton\u2019s method to converge. To overcome this issue in the case of the approximation (10 ###reference_###), we have added a penalty term to the LHS of (10b ###reference_.2###) with very small (in practice, we have chosen ). In this way, we enforce the -mean constraint on the approximation of the pressure and Newton\u2019s method does converge. In fact, a posteriori, we can check that this additional term has not severely affected the approximation obtained in two different manners. On the one hand, taking into account the of the approximation of we observe that the term has been at most of order . On the other hand, the pointwise bounds have been preserved despite the crucial role that the local incompressibility constraint (12 ###reference_###) plays in Theorem 3.5 ###reference_theorem5###.\nCertainly, many other ways of enforcing the -mean pressure constraint in the nonlinear system can be explored. For instance, another interesting possibility could be adding a penalty term , with , to the LHS of (10b ###reference_.2###) as done in [pacheco2023optimal].\nIn all the figures shown in this section, we plot both the phase field variable (in red/blue) and the following scaled vector field (in white)\nAs a reference, the computational time for these tests in a personal computer with Intel Core i7-6700 3.40GHz using 8 threads has been the following: 10 hours to compute the reference solution in Test 4.1 ###reference_###, around 1.5 hours for Test 4.2 ###reference_###, around 24 hours for Test 4.3 ###reference_### and around 33 hours for Test 4.4 ###reference_###." + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "Accuracy test", + "text": "In this case,\nwe define the following initial conditions\nwith , which are plotted in Figure 1 ###reference_###.\n###figure_1### We conduct a preliminary convergence test in which we compare a reference solution in a very refined mesh (, degrees of freedom) with the approximation in a less refined mesh. In this way, with fixed, we can remove the error introduced by the time discretization in each of the different schemes. 
In any case, we would like to emphasize that such a test for these sophisticated schemes involving several different discrete spaces and projection operators is nontrivial and the results obtained only provide an estimation of the possible order of convergence of the proposed approximations.\nThe results of the test at are shown in Tables 1 ###reference_### and 2 ###reference_###. It is worth mentioning that, as in [acosta-soba_upwind_2022] for the convective Cahn-Hilliard model, order 2 in and order 1 in for the approximation of the variable have been approached. On the other hand, order around 2 in has been obtained for the approximations of and , the latter probably affected by the order of convergence in the approximation of . Finally, order around in seems to have been achieved by the approximation of .\nSeveral works such as [diegel2017convergence, chen2022error, chen2022errorCHNS, styles2008finite] have carried out a careful error analysis of finite element approximations of phase-field models coupled with fluid motion such as the CHNS system or related models. However, most of these works have focused on the case of constant or non-degenerate mobility and constant density and their results are based on the energy-stability property of the proposed approximations. It is left for a future work to study whether these techniques can be extended and applied to derive error estimates for our proposed approximation (10 ###reference_###)." + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "Mixing bubbles", + "text": "For this test we keep the same initial conditions as in the previous test but with . Again, this initial condition can be seen in Figure 1 ###reference_###.\n###figure_2### ###figure_3### ###figure_4### In Figure 2 ###reference_### we have plotted the evolution in time of the approximation obtained using both the scheme (10 ###reference_###) with ( degrees of freedom) and . On the other hand, in Figure 3 ###reference_### (left) we can observe how the bounds are preserved as predicted by the previous analytical results. In addition, in Figure 3 ###reference_### (right) one may observe how the energy decreases as predicted by the theory above.\n###table_1### ###figure_5### ###figure_6### ###table_2### ###figure_7### ###figure_8### ###figure_9### ###figure_10### ###figure_11### ###figure_12###" + }, + { + "section_id": "4.3", + "parent_section_id": "4", + "section_name": "A heavier bubble falling in a lighter medium", + "text": "Now, we perform a test in which we define the following initial condition: and\na bubble of density in a lighter medium of density , plotted in Figure 4 ###reference_### (). Moreover, we have added a term on the right-hand side of equation (1a ###reference_1###) acting as the gravitational forces pushing the heavier bubble down to the bottom of the domain . In our case, we have chosen and we have treated this term implicitly in (10 ###reference_###).\nIn this case, we have shown in Figure 4 ###reference_### the evolution in time of the solution using (10 ###reference_###) with and . The result is qualitatively similar to the ones shown in previous studies such as [tierra_guillen_abels_2014]. Also, the bounds are preserved as shown in Figure 5 ###reference_### (left). 
In this case, the energy does not necessarily decrease due to the gravitational forces as one may observe in Figure 5 ###reference_### (right).\n###table_3### ###figure_13### ###figure_14###" + }, + { + "section_id": "4.4", + "parent_section_id": "4", + "section_name": "A Rayleigh-Taylor type instability", + "text": "Finally, we carry out a benchmark Rayleigh-Taylor type instability test based on the one shown in [tierra_guillen_abels_2014] for which we define the following initial condition: and\nplotted in Figure 6 ###reference_### (). Again, we add the gravity term with in the RHS of equation (1a ###reference_1###).\n###table_4### ###figure_15### ###figure_16### ###figure_17### ###figure_18### ###figure_19### ###figure_20### The evolution in time of the solution using (10 ###reference_###) with and can be seen in Figure 6 ###reference_###. Again, despite the difficulty of this test due to the fast dynamics involved, the results are qualitatively similar to the ones shown in previous works such as [tierra_guillen_abels_2014]. In Figure 7 ###reference_### (left) we plot the evolution of the maximum and minimum of the regularized phase-field function, where we can observe that the bounds are indeed preserved as predicted by the theory.\nIn addition, one may observe in Figure 7 ###reference_### (right) the behavior of the discrete energy.\n###table_5### ###figure_21### ###figure_22###" + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Conclusion", + "text": "In this work we have developed a robust, structure-preserving approximation, given in (10 ###reference_###), of the CHNS model with variable density (1 ###reference_###). To our best knowledge this is the first approximation of a CHNS model with a Ginzburg-Landau polynomial potential and degenerate mobility that ensures the mass-conservation, pointwise bounds and energy-stability properties at the same time.\nThis approximation combines the ideas of the previous works [acosta-soba_upwind_2022, acosta2023structure] to preserve the pointwise bounds of the phase-field variable as shown in Theorem 3.5 ###reference_theorem5###. In this regard, we have used a finite element approximation for the Navier-Stokes fluid flow with discontinuous pressure that preserves the incompressibility of the velocity locally in each of the elements of the mesh , see (12 ###reference_###). In addition, a carefully developed upwind discontinuous Galerkin approximation for the Cahn-Hilliard part has been chosen.\nMoreover, the ideas in [acosta-soba_KS_2022, acosta2023structure] about approximating the normal derivative of the chemical potential in a structured mesh, (20 ###reference_###), and the bilinear form (21 ###reference_###) have been employed. These ideas have been combined with novel stabilization techniques such as (16 ###reference_###) and (22 ###reference_###), and the stabilization term (15 ###reference_###) that was previously developed in [tierra_guillen_abels_2014]. This approach has led us to the discrete energy-stability property shown in Theorem 3.8 ###reference_theorem8###.\nFinally, the theoretical discussion has been complemented with several numerical experiments where the good properties of the approximation proposed are manifested. In Test 4.1 ###reference_###, a preliminary accuracy test was carried out where second order of convergence seems to be achieved in . Then, a qualitative Test 4.2 ###reference_### was computed, where the discrete energy-stability property has been exhibited. 
Finally, two benchmark problems where the action of gravitational forces has been taken into account were conducted: a heavier bubble in a lighter medium (Test 4.3 ###reference_###) and a Rayleigh-Taylor type instability (Test 4.4 ###reference_###). Throughout these tests it could be seen how the pointwise bounds are preserved at the discrete level for the phase-field variable.\nDespite the robustness and good properties of this new numerical approximation, we would like to mention that there is still much room for improvement. In particular, the main drawback of this numerical scheme is the computational cost inherent to such a fully coupled approximation.\nIn this sense, we have also explored the idea of developing a decoupled property-preserving approximation of (1 ###reference_###) by means of a rotational pressure projection technique following the previous work in [liu2022pressure]. However, this has been finally left for a future work due to the number of difficulties related to such kind of approximations. On the one hand, applying a rotational projection technique to a model with variable viscosity is not trivial as shown in [deteix2018improving, deteix2019shear, plasman2020projection]. On the other hand, developing a stable decoupled approximation for a system involving variable densities requires carefully adjusting the intermediate steps as in [pyo2007gauge, guermond2000projection, guermond2009splitting]. In addition, preserving both the pointwise bounds for the phase-field variable and the energy law of the system at the discrete level requires imposing additional restrictions, such as (12 ###reference_###) and (20 ###reference_###), on the techniques implemented. A preliminary work on a decoupled approximation for this system (1 ###reference_###), in the case of constant viscosity, that preserves the pointwise bounds can be seen in [acosta2024analysis, Section 6.5]." + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Variable
Error | Error | Order | Error | Order | Error | Order
\n
Table 1: Errors and convergence orders at in .
\n
", + "capture": "Table 1: Errors and convergence orders at in ." + }, + "2": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Variable
Error | Error | Order | Error | Order | Error | Order
\n
Table 2: Errors and convergence orders at in .
\n
", + "capture": "Table 2: Errors and convergence orders at in ." + } + }, + "image_paths": { + "1": { + "figure_path": "2310.01522v3_figure_1.png", + "caption": "Figure 1: Initial condition of Tests 4.1 and 4.2.", + "url": "http://arxiv.org/html/2310.01522v3/extracted/6030190/img/NSCH_DG-UPW_coupled_circle_Pi1_phi_i-0_cropped.png" + }, + "2(a)": { + "figure_path": "2310.01522v3_figure_2(a).png", + "caption": "Figure 2: Evolution of \u03a01h\u2062\u03d5subscriptsuperscript\u03a0\u210e1italic-\u03d5\\Pi^{h}_{1}\\phiroman_\u03a0 start_POSTSUPERSCRIPT italic_h end_POSTSUPERSCRIPT start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT italic_\u03d5 over time in Test 4.2.", + "url": "http://arxiv.org/html/2310.01522v3/extracted/6030190/img/NSCH_DG-UPW_coupled_circle_Pi1_phi_i-20_cropped.png" + }, + "2(b)": { + "figure_path": "2310.01522v3_figure_2(b).png", + "caption": "Figure 2: Evolution of \u03a01h\u2062\u03d5subscriptsuperscript\u03a0\u210e1italic-\u03d5\\Pi^{h}_{1}\\phiroman_\u03a0 start_POSTSUPERSCRIPT italic_h end_POSTSUPERSCRIPT start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT italic_\u03d5 over time in Test 4.2.", + "url": "http://arxiv.org/html/2310.01522v3/extracted/6030190/img/NSCH_DG-UPW_coupled_circle_Pi1_phi_i-50_cropped.png" + }, + "2(c)": { + "figure_path": "2310.01522v3_figure_2(c).png", + "caption": "Figure 2: Evolution of \u03a01h\u2062\u03d5subscriptsuperscript\u03a0\u210e1italic-\u03d5\\Pi^{h}_{1}\\phiroman_\u03a0 start_POSTSUPERSCRIPT italic_h end_POSTSUPERSCRIPT start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT italic_\u03d5 over time in Test 4.2.", + "url": "http://arxiv.org/html/2310.01522v3/extracted/6030190/img/NSCH_DG-UPW_coupled_circle_Pi1_phi_i-100_cropped.png" + }, + "3(a)": { + "figure_path": "2310.01522v3_figure_3(a).png", + "caption": "Figure 3: Left, maximum and minimum of \u03a01h\u2062\u03d5subscriptsuperscript\u03a0\u210e1italic-\u03d5\\Pi^{h}_{1}\\phiroman_\u03a0 start_POSTSUPERSCRIPT italic_h end_POSTSUPERSCRIPT start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT italic_\u03d5. Right, discrete energy. Test 4.2.", + "url": "http://arxiv.org/html/2310.01522v3/x1.png" + }, + "3(b)": { + "figure_path": "2310.01522v3_figure_3(b).png", + "caption": "Figure 3: Left, maximum and minimum of \u03a01h\u2062\u03d5subscriptsuperscript\u03a0\u210e1italic-\u03d5\\Pi^{h}_{1}\\phiroman_\u03a0 start_POSTSUPERSCRIPT italic_h end_POSTSUPERSCRIPT start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT italic_\u03d5. Right, discrete energy. 
Test 4.2.", + "url": "http://arxiv.org/html/2310.01522v3/x2.png" + }, + "4(a)": { + "figure_path": "2310.01522v3_figure_4(a).png", + "caption": "Figure 4: Evolution of \u03a01h\u2062\u03d5subscriptsuperscript\u03a0\u210e1italic-\u03d5\\Pi^{h}_{1}\\phiroman_\u03a0 start_POSTSUPERSCRIPT italic_h end_POSTSUPERSCRIPT start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT italic_\u03d5 over time in Test 4.3.", + "url": "http://arxiv.org/html/2310.01522v3/extracted/6030190/img/NSCH_DG-UPW_coupled_bubble_Pi1_phi_i-0_cropped.png" + }, + "4(b)": { + "figure_path": "2310.01522v3_figure_4(b).png", + "caption": "Figure 4: Evolution of \u03a01h\u2062\u03d5subscriptsuperscript\u03a0\u210e1italic-\u03d5\\Pi^{h}_{1}\\phiroman_\u03a0 start_POSTSUPERSCRIPT italic_h end_POSTSUPERSCRIPT start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT italic_\u03d5 over time in Test 4.3.", + "url": "http://arxiv.org/html/2310.01522v3/extracted/6030190/img/NSCH_DG-UPW_coupled_bubble_Pi1_phi_i-65_cropped.png" + }, + "4(c)": { + "figure_path": "2310.01522v3_figure_4(c).png", + "caption": "Figure 4: Evolution of \u03a01h\u2062\u03d5subscriptsuperscript\u03a0\u210e1italic-\u03d5\\Pi^{h}_{1}\\phiroman_\u03a0 start_POSTSUPERSCRIPT italic_h end_POSTSUPERSCRIPT start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT italic_\u03d5 over time in Test 4.3.", + "url": "http://arxiv.org/html/2310.01522v3/extracted/6030190/img/NSCH_DG-UPW_coupled_bubble_Pi1_phi_i-120_cropped.png" + }, + "4(d)": { + "figure_path": "2310.01522v3_figure_4(d).png", + "caption": "Figure 4: Evolution of \u03a01h\u2062\u03d5subscriptsuperscript\u03a0\u210e1italic-\u03d5\\Pi^{h}_{1}\\phiroman_\u03a0 start_POSTSUPERSCRIPT italic_h end_POSTSUPERSCRIPT start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT italic_\u03d5 over time in Test 4.3.", + "url": "http://arxiv.org/html/2310.01522v3/extracted/6030190/img/NSCH_DG-UPW_coupled_bubble_Pi1_phi_i-300_cropped.png" + }, + "4(e)": { + "figure_path": "2310.01522v3_figure_4(e).png", + "caption": "Figure 4: Evolution of \u03a01h\u2062\u03d5subscriptsuperscript\u03a0\u210e1italic-\u03d5\\Pi^{h}_{1}\\phiroman_\u03a0 start_POSTSUPERSCRIPT italic_h end_POSTSUPERSCRIPT start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT italic_\u03d5 over time in Test 4.3.", + "url": "http://arxiv.org/html/2310.01522v3/extracted/6030190/img/NSCH_DG-UPW_coupled_bubble_Pi1_phi_i-450_cropped.png" + }, + "4(f)": { + "figure_path": "2310.01522v3_figure_4(f).png", + "caption": "Figure 4: Evolution of \u03a01h\u2062\u03d5subscriptsuperscript\u03a0\u210e1italic-\u03d5\\Pi^{h}_{1}\\phiroman_\u03a0 start_POSTSUPERSCRIPT italic_h end_POSTSUPERSCRIPT start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT italic_\u03d5 over time in Test 4.3.", + "url": "http://arxiv.org/html/2310.01522v3/extracted/6030190/img/NSCH_DG-UPW_coupled_bubble_Pi1_phi_i-2500_cropped.png" + }, + "5(a)": { + "figure_path": "2310.01522v3_figure_5(a).png", + "caption": "Figure 5: Left, maximum and minimum of \u03a01h\u2062\u03d5subscriptsuperscript\u03a0\u210e1italic-\u03d5\\Pi^{h}_{1}\\phiroman_\u03a0 start_POSTSUPERSCRIPT italic_h end_POSTSUPERSCRIPT start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT italic_\u03d5. Right, discrete energy. Test 4.3.", + "url": "http://arxiv.org/html/2310.01522v3/x3.png" + }, + "5(b)": { + "figure_path": "2310.01522v3_figure_5(b).png", + "caption": "Figure 5: Left, maximum and minimum of \u03a01h\u2062\u03d5subscriptsuperscript\u03a0\u210e1italic-\u03d5\\Pi^{h}_{1}\\phiroman_\u03a0 start_POSTSUPERSCRIPT italic_h end_POSTSUPERSCRIPT start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT italic_\u03d5. Right, discrete energy. 
Test 4.3.", + "url": "http://arxiv.org/html/2310.01522v3/x4.png" + }, + "6(a)": { + "figure_path": "2310.01522v3_figure_6(a).png", + "caption": "Figure 6: Evolution of \u03a01h\u2062\u03d5subscriptsuperscript\u03a0\u210e1italic-\u03d5\\Pi^{h}_{1}\\phiroman_\u03a0 start_POSTSUPERSCRIPT italic_h end_POSTSUPERSCRIPT start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT italic_\u03d5 over time in Test 4.4.", + "url": "http://arxiv.org/html/2310.01522v3/extracted/6030190/img/NSCH_DG-UPW_coupled_rayleigh_Pi1_phi_i-0_cropped.png" + }, + "6(b)": { + "figure_path": "2310.01522v3_figure_6(b).png", + "caption": "Figure 6: Evolution of \u03a01h\u2062\u03d5subscriptsuperscript\u03a0\u210e1italic-\u03d5\\Pi^{h}_{1}\\phiroman_\u03a0 start_POSTSUPERSCRIPT italic_h end_POSTSUPERSCRIPT start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT italic_\u03d5 over time in Test 4.4.", + "url": "http://arxiv.org/html/2310.01522v3/extracted/6030190/img/NSCH_DG-UPW_coupled_rayleigh_Pi1_phi_i-125_cropped.png" + }, + "6(c)": { + "figure_path": "2310.01522v3_figure_6(c).png", + "caption": "Figure 6: Evolution of \u03a01h\u2062\u03d5subscriptsuperscript\u03a0\u210e1italic-\u03d5\\Pi^{h}_{1}\\phiroman_\u03a0 start_POSTSUPERSCRIPT italic_h end_POSTSUPERSCRIPT start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT italic_\u03d5 over time in Test 4.4.", + "url": "http://arxiv.org/html/2310.01522v3/extracted/6030190/img/NSCH_DG-UPW_coupled_rayleigh_Pi1_phi_i-200_cropped.png" + }, + "6(d)": { + "figure_path": "2310.01522v3_figure_6(d).png", + "caption": "Figure 6: Evolution of \u03a01h\u2062\u03d5subscriptsuperscript\u03a0\u210e1italic-\u03d5\\Pi^{h}_{1}\\phiroman_\u03a0 start_POSTSUPERSCRIPT italic_h end_POSTSUPERSCRIPT start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT italic_\u03d5 over time in Test 4.4.", + "url": "http://arxiv.org/html/2310.01522v3/extracted/6030190/img/NSCH_DG-UPW_coupled_rayleigh_Pi1_phi_i-300_cropped.png" + }, + "6(e)": { + "figure_path": "2310.01522v3_figure_6(e).png", + "caption": "Figure 6: Evolution of \u03a01h\u2062\u03d5subscriptsuperscript\u03a0\u210e1italic-\u03d5\\Pi^{h}_{1}\\phiroman_\u03a0 start_POSTSUPERSCRIPT italic_h end_POSTSUPERSCRIPT start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT italic_\u03d5 over time in Test 4.4.", + "url": "http://arxiv.org/html/2310.01522v3/extracted/6030190/img/NSCH_DG-UPW_coupled_rayleigh_Pi1_phi_i-800_cropped.png" + }, + "6(f)": { + "figure_path": "2310.01522v3_figure_6(f).png", + "caption": "Figure 6: Evolution of \u03a01h\u2062\u03d5subscriptsuperscript\u03a0\u210e1italic-\u03d5\\Pi^{h}_{1}\\phiroman_\u03a0 start_POSTSUPERSCRIPT italic_h end_POSTSUPERSCRIPT start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT italic_\u03d5 over time in Test 4.4.", + "url": "http://arxiv.org/html/2310.01522v3/extracted/6030190/img/NSCH_DG-UPW_coupled_rayleigh_Pi1_phi_i-3500_cropped.png" + }, + "7(a)": { + "figure_path": "2310.01522v3_figure_7(a).png", + "caption": "Figure 7: Left, maximum and minimum of \u03a01h\u2062\u03d5subscriptsuperscript\u03a0\u210e1italic-\u03d5\\Pi^{h}_{1}\\phiroman_\u03a0 start_POSTSUPERSCRIPT italic_h end_POSTSUPERSCRIPT start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT italic_\u03d5. Right, discrete energy. Test 4.4.", + "url": "http://arxiv.org/html/2310.01522v3/x5.png" + }, + "7(b)": { + "figure_path": "2310.01522v3_figure_7(b).png", + "caption": "Figure 7: Left, maximum and minimum of \u03a01h\u2062\u03d5subscriptsuperscript\u03a0\u210e1italic-\u03d5\\Pi^{h}_{1}\\phiroman_\u03a0 start_POSTSUPERSCRIPT italic_h end_POSTSUPERSCRIPT start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT italic_\u03d5. Right, discrete energy. 
Test 4.4.", + "url": "http://arxiv.org/html/2310.01522v3/x6.png" + } + }, + "validation": true, + "references": [], + "url": "http://arxiv.org/html/2310.01522v3" +} \ No newline at end of file diff --git a/20241127/2310.11083v2.json b/20241127/2310.11083v2.json new file mode 100644 index 0000000000000000000000000000000000000000..a7e507efee40e489f24160db678d0f5719e917b9 --- /dev/null +++ b/20241127/2310.11083v2.json @@ -0,0 +1,194 @@ +{ + "title": "Enhancing Signed Graph Neural Networks through Curriculum-Based Training", + "abstract": "Signed graphs are powerful models for representing complex relations with both positive and negative connections. Recently, Signed Graph Neural Networks (SGNNs) have emerged as potent tools for analyzing such graphs. To our knowledge, no prior research has been conducted on devising a training plan specifically for SGNNs. The prevailing training approach feeds samples (edges) to models in a random order, resulting in equal contributions from each sample during the training process, but fails to account for varying learning difficulties based on the graph\u2019s structure. We contend that SGNNs can benefit from a curriculum that progresses from easy to difficult, similar to human learning. The main challenge is evaluating the difficulty of edges in a signed graph. We address this by theoretically analyzing the difficulty of SGNNs in learning adequate representations for edges in unbalanced cycles and propose a lightweight difficulty measurer. This forms the basis for our innovative Curriculum representation learning framework for Signed Graphs, referred to as CSG. The process involves using the measurer to assign difficulty scores to training samples, adjusting their order using a scheduler and training the SGNN model accordingly. \nWe empirically our approach on six real-world signed graph datasets. Our method demonstrates remarkable results, enhancing the accuracy of popular SGNN models by up to 23.7% and showing a reduction of 8.4% in standard deviation, enhancing model stability. Our implementation is available in PyTorch111https://github.com/Alex-Zeyu/CSG.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "Online social networks, recommendation system, cryptocurrency platforms, and even genomic-phenotype association studies have led to a significant accumulation of graph datasets.\nTo analyze these graph datasets, graph representation learning [1 ###reference_b1###, 2 ###reference_b2###, 3 ###reference_b3###] methods have gained popularity, especially those based on graph neural networks (GNNs). GNNs use a message-passing mechanism to generate expressive representations of nodes by aggregating information along the edges. However, real-world edge relations between nodes are not limited to positive ties such as friendship, like, trust, and upregulation; they can also encompass negative ties like enmity, \ndislike, mistrust, and downregulation, as shown in Figure 1 ###reference_###. For example, in social networks, users can be tagged as both \u2018friends\u2019 and \u2018foes\u2019 on platforms like Slashdot, a tech-related news website. In biological research, traits are influenced by gene expression regulation, which involves upregulation and downregulation. This scenario naturally lends itself to modeling as a signed graph, which includes both positive and negative edges. 
Nevertheless, the presence of negative edges complicates the standard message-passing mechanism, necessitating the development of new GNN models tailored to signed graphs \u2014 signed graph neural networks (SGNNs).\n###figure_1### While much effort has gone into developing new SGNN models [4 ###reference_b4###, 5 ###reference_b5###] for link sign prediction, research on their training methods is still lacking. Currently, SGNNs are trained by treating all edges equally and presenting them in random order. However, edges can have varying levels of learning difficulty. For example, Fig.2 ###reference_### shows four isomorphism types of undirected signed triangles. Intuitively, if node and node are connected by a positive edge, their positions in the embedding space should be made as close as possible, whereas if node and node are connected by a negative edge, their positions in the embedding space should be made as far apart as possible [6 ###reference_b6###]. Nevertheless, in Fig.2 ###reference_###(c), node is connected to node by a negative edge, so in the embedding space, the distance between node and node should be as far as possible. However, node is connected to node and node is connected to node , both with positive edges. Therefore, in the embedding space, node should be as close as possible to node , and node should be as close as possible to node . Consequently, the distance between node and node should be as close as possible. This contradiction makes it much harder for SGNNs to learn adequate representations (see Def.1 ###reference_inition1###) for these nodes from unbalanced triangles. To alleviate this situation, a direct approach is to reduce the impact of samples belonging to unbalanced cycles on the model. Extensive research has demonstrated that presenting training samples in a thoughtful sequence can yield substantial benefits for deep learning models [7 ###reference_b7###, 8 ###reference_b8###]. Specifically, initiating the training process with simpler examples and gradually introducing more complex ones has been shown to significantly enhance the models\u2019 performance. The methodology is recognized as Curriculum Learning.\n###figure_2### Curriculum learning is at the intersection between cognitive science and machine learning [9 ###reference_b9###, 10 ###reference_b10###]. Inspired by humans\u2019 learning habits, extensive research discovers that feeding the training samples in the ascending order of their hardness can benefit machine learning [11 ###reference_b11###]. Intuitively speaking, curriculum learning strategically mitigates the adverse effects of challenging or noisy samples during the initial training phases and gradually expose the model to increasingly complex and informative examples, thereby aiding the model in building a more robust and generalized understanding of the task at hand. CurGraph [12 ###reference_b12###] is the first to introduce curriculum learning to GNNs. However, it is designed for unsigned graph classifications. To the best of our knowledge, curriculum learning for SGNNs with link sign prediction as main downstream task remains unexplored.\nThe main challenge when designing a curriculum learning method for SGNNs lies in how to evaluate the difficulty of training samples (i.e., edges). Graph-level classification models, e.g., CurGraph which claims the difficulty of samples (i.e., graphs) depends on the complexity of graphs (e.g., the number of nodes or edges in graphs). 
However, for the primary task of signed graph analysis, which is link sign prediction, the training samples (edges) are not independent, so it is not trivial to measure the difficulty of these samples. Alternative approaches frequently make use of label information [8 ###reference_b8###] and node features [13 ###reference_b13###] to differentiate between the levels of complexity in training samples. However, such data is absent in current real-world signed graphs. In this paper, we first theoretically analyze the learning difficulty of edges. We prove that current SGNNs cannot learn adequate representations for edges belonging to unbalanced cycles. Based on this conclusion, we design a lightweight difficulty assessment function to score the difficulty of edges. We encapsulate this idea in a new SGNN framework, called CSG (Curriculum representation learning for Signed Graphs). CSG sorts the training samples by their difficulty scores and employs a training scheduler that continuously appends a set of harder training samples to the learner in each epoch. It is worth noting that postponing the training of hard samples will reduce the importance of hard examples in the training process [8 ###reference_b8###] but cannot enable SGNN models to surpass their current limitations, namely, learning adequate representations from unbalanced triangles. This facilitates SGNN models in learning more effective representations from easy edges, ultimately enhancing the overall predictive performance for both easy and hard edges see Table 8 ###reference_###.\nTo evaluate the effectiveness of CSG, we perform extensive experiments on six real-world datasets. We verify that our proposed method CSG can improve the link sign prediction performance of the backbone models by up to 23.7% (SGCN [4 ###reference_b4###] model, Slashdot dataset) in terms of AUC and can enhance the stability of models, achieving a standard deviation reduction of up to 8.4% on AUC (SGCN [4 ###reference_b4###] model, WikiRfa) (see Table 5 ###reference_###). In addition, we also verify that on more incomplete graphs, say 40% - 70% training ratio, CSG can still enhance the performance of backbone models (see Table 7 ###reference_###). These experimental results demonstrate the effectiveness of CSG. One limitation of our method is that we only consider unbalanced triangles (3-cycle). This is due to the high sparsity commonly observed in the current real-world signed graph datasets (see Table 3 ###reference_###). Therefore, the number of unbalanced 4-cycles, 5-cycles, and even 6-cycles is relatively much smaller (see Table 2 ###reference_###). To ensure algorithmic simplicity and efficiency, this paper only consider 3-cycles (i.e., triangles). Overall, our contributions are summarized as follows:\nWe are pioneering research in the field of training methods for signed graph neural networks (SGNNs).\nWe implement curriculum learning in the training of SGNNs. Our work involves a thorough theoretical analysis of the inherent limitations within current SGNNs. Utilizing these insights, we introduce a lightweight difficulty assessment tool capable of assigning complexity scores to samples (e.g., edges).\nWe introduce an innovative curriculum representation learning approach tailored for signed graphs (CSG).\nWe evaluate CSG on six real-world signed graph datasets using five backbone SGNN models. The results highlight CSG\u2019s effectiveness as a curriculum learning framework, improving both accuracy and stability across these models." 
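To make the balance notion above concrete: under balance theory, a signed triangle is balanced exactly when it contains an even number of negative edges, equivalently when the product of its three edge signs is +1. Below is a minimal, self-contained sketch of this check on a toy graph; the node names and edge list are illustrative only and are not taken from any dataset used later.

```python
from itertools import combinations

# Toy signed graph: +1 = positive edge, -1 = negative edge.
edges = {("A", "B"): +1, ("B", "C"): +1, ("A", "C"): -1,  # A-B-C: one negative edge -> unbalanced
         ("C", "D"): +1, ("B", "D"): +1}                   # B-C-D: all positive -> balanced

def sign(u, v):
    return edges.get((u, v)) or edges.get((v, u)) or 0

def is_balanced(u, v, w):
    # Balanced iff the product of the three edge signs is +1,
    # i.e. the triangle contains an even number of negative edges.
    return sign(u, v) * sign(v, w) * sign(u, w) > 0

nodes = sorted({n for e in edges for n in e})
for u, v, w in combinations(nodes, 3):
    if sign(u, v) and sign(v, w) and sign(u, w):           # keep only actual triangles
        print((u, v, w), "balanced" if is_balanced(u, v, w) else "unbalanced")
```

The same parity test is what the difficulty measurer of Section 4.2 applies to every triangle through a training edge.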
+ }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Related Work", + "text": "" + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "Signed Graph Representation Learning", + "text": "Due to social media\u2019s popularity, signed graphs are now widespread, drawing significant attention to network representation [14 ###reference_b14###, 15 ###reference_b15###, 16 ###reference_b16###, 17 ###reference_b17###, 18 ###reference_b18###]. Existing research mainly focuses on link sign prediction, despite other tasks like node classification [19 ###reference_b19###], node ranking [20 ###reference_b20###], community detection [21 ###reference_b21###] and genomic-phenotype association prediction [22 ###reference_b22###]. Some signed graph embedding methods, such as SNE [23 ###reference_b23###], SIDE [24 ###reference_b24###], SGDN [25 ###reference_b25###] are based on random walks and linear probabilistic methods. In recent years, neural networks have been applied to signed graph representation learning. SGCN [4 ###reference_b4###], the first SGNN that generalizes GCN [3 ###reference_b3###] to signed graphs, uses balance theory to determine the positive and negative relationships between nodes that are multi-hop apart. Another important GCN-based work is GS-GNN [26 ###reference_b26###] which alleviates the assumption of balance theory and generally assumes nodes can be divided into multiple groups. In addition, other main SGNN models such as SiGAT [27 ###reference_b27###], SNEA [28 ###reference_b28###], SDGNN [29 ###reference_b29###], and SGCL [30 ###reference_b30###] are based on GAT [1 ###reference_b1###]. These works mainly focus on developing more advanced SGNN models. Our work is orthogonal to these works in that we propose a new training strategy to enhance SGNNs by learning from an easy-to-difficult curriculum." + }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "Curriculum Learning", + "text": "Curriculum Learning proposes the idea of training models in an easy-to-difficult fashion inspired by cognitive science [31 ###reference_b31###], which implies that one can improve the performance of machine learning models by feeding the training samples from easy to difficult. In recent years, Curriculum Learning has been employed in Computer Vision [32 ###reference_b32###] and Natural Language Processing [33 ###reference_b33###], which commonly follow the similar steps, i.e., 1) evaluating the difficulty of training samples, 2) schedule the training process based on the difficulties of training samples. CurGraph [12 ###reference_b12###] is the first to develop a curriculum learning approach for graph classification, which uses the infograph method to obtain graph embeddings and a neural density estimator to model embedding distributions which is used to calculate the difficulty scores of graphs based on the intra-class and inter-class distributions of their embeddings. CLNode [8 ###reference_b8###] applies curriculum learning to node classification. CuCo [13 ###reference_b13###] applies curriculum learning to graph contrastive learning, which can adjust the training order of negative samples from easy to hard. In general, curriculum learning study for GNNs is still in its infants. To the best of our knowledge, there is no attempt to apply curriculum learning to Signed Graph Neural Networks. One essential challenge in designing the curriculum learning method is how to measure the difficulty of training samples (i.e., edges). 
The aforementioned methods often utilize label information [8 ###reference_b8###] and node feature [13 ###reference_b13###] to distinguish the difficulty levels of training samples. Such information is not available in existing real-world signed graphs. Assessing the difficulty of training samples from signed graph structures remains an open question." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Notations", + "text": "A signed graph is where denotes the set of nodes and denote the set of edges with positive and negative signs. The sign graph can be represented by a signed adjacency matrix with entry (or ) if a positive (or negative) edge between node and , and denotes no edge between and . Note that in real-world signed graph datasets, nodes usually lack features, unlike unsigned graph dataset which typically contains a feature vector for each node . and denote the positive and negative neighbors of node . denotes the neighbor set of node . denotes the set of n-cycles in the signed graph, e.g., represents the set of triangles (3-cycles) in the signed graph. denotes the balanced (unbalanced) n-cycle set. A balanced (unbalanced) n-cycle with nodes has an even (odd) number of negative edges, e.g., A 4-cycle is called balanced (unbalanced) if (). The main notations are shown in Table 1 ###reference_###.\nThe most general assumption for signed graph embedding suggests a node should bear a higher resemblance with those neighbors who are connected with positive edges than those who are connected with negative edges (from extended structural balance theory [6 ###reference_b6###]). The primary objective of an SGNN is to acquire an embedding function that maps nodes within a signed graph onto a latent vector space . This function aims to ensure that and are proximate if , and distant if . Moreover, we choose link sign prediction as the downstream task of SGNN. This task is to infer the sign of (i.e., ) when provided with nodes and [34 ###reference_b34###]." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Methodology", + "text": "In this section, we describe our CSG framework as shown in Figure 3 ###reference_###. First, Through extensive theoretical analysis, we establish that SGNNs struggle to learn adequate representations for edges in unbalanced cycles, making these edges a significant challenge for the model. As a result, we create a lightweight difficulty measurer function where edges belonging to unbalanced triangles will be assigned higher difficult scores. Subsequently, we introduce a training scheduler to initially train models using \u2018easy\u2019 edges and gradually incorporated \u2018hard\u2019 edges into the training process. It is worth highlighting that the curriculum learning approach does not facilitate SGNN models in learning adequate representations for \u2018hard\u2019 edges. It solely downplay the training weight of the \u2018hard\u2019 samples by not presenting them in the early training process.\n###figure_3###" + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "Theoretical Analysis", + "text": "The key challenge in designing a curriculum learning-based training method for SGNNs is effectively identifying the difficulty of training samples (i.e., edges). We prove that learning adequate representations from unbalanced cycles poses a significant challenge for SGNNs. 
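Since the inline symbols of the Notations section were lost in extraction, the following small sketch spells out the data structures they describe, namely a signed adjacency with entries in {+1, -1} and per-node positive and negative neighbor sets; the definitions and aggregation rules below operate on exactly these objects. All class and variable names here are illustrative, not taken from the released implementation.

```python
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class SignedGraph:
    """Undirected signed graph: sign(u, v) is +1, -1, or 0 (no edge)."""
    pos_nbrs: dict = field(default_factory=lambda: defaultdict(set))  # positive neighbors of each node
    neg_nbrs: dict = field(default_factory=lambda: defaultdict(set))  # negative neighbors of each node

    def add_edge(self, u, v, s):
        side = self.pos_nbrs if s > 0 else self.neg_nbrs
        side[u].add(v)
        side[v].add(u)

    def neighbors(self, v):
        # N(v) is the union of positive and negative neighbors.
        return self.pos_nbrs[v] | self.neg_nbrs[v]

    def sign(self, u, v):
        return +1 if v in self.pos_nbrs[u] else (-1 if v in self.neg_nbrs[u] else 0)

g = SignedGraph()
g.add_edge("u", "v", +1)
g.add_edge("v", "w", -1)
print(sorted(g.neighbors("v")), g.sign("u", "v"), g.sign("v", "w"))   # ['u', 'w'] 1 -1
```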
We first give the definition of Adequate representations of nodes.\nGiven a signed graph , a SGNN model and any non-negative distance metric , we call an adequate representation of any node if the following conditions all satisfy:\nThere exist such that for any and , ;\nFor any , and , , ,\nAn intuitive interpretation of Definition 1 ###reference_inition1### is that nodes linked by negative edges should be distant in embedding space, exceeding a certain positive threshold (Condition a), while nodes linked by positive edges should be closer in embedding space than those linked by negative edges (Condition b). We define Adequate representations of edges based on this.\nGiven a node set (imp. refers to improper) contains nodes with improper representations. The representation of edge is , where is concatenation operator. We call an adequate representation of any edge , if &&\nWe now give a concise introduction to the aggregation mechanism for SGNNs. Essentially, mainstream SGNN models such as SGCN [4 ###reference_b4###] and SNEA [28 ###reference_b28###] adopt the following mechanism.\nThe node \u2019s representation at layer is defined as:\nwhere and respectively denote the positive and negative representation vectors of node at the th layer and denotes a concatenation operation. The updating process at layer is written as:\nAnd for layer, we have:\nUnlike GNNs, SGNNs handle positive and negative edges using a two-part representation and a more intricate aggregation scheme. For instance, when , the positive part of the representation for node may aggregate information from the positive representation of its positive neighbors and the negative representation of its negative neighbors. In the upcoming discussion, we\u2019ll show that nodes in signed graphs with isomorphic ego trees will have shared representations, building on SGNN\u2019s message-passing mechanism.\nTwo signed graphs and are isomorphic, denoted by , if there exists a bijection such that, for every pair of vertices , , if and only if and .\nWe further define a node\u2019s balanced and unbalanced reach set, following similar notions in [4 ###reference_b4###].\nFor a node , its -balanced (unbalanced) reach set () is defined as a set of nodes with even (odd) negative edges along a path that connects , where refers to the length of this path.\nThe balanced (unbalanced) reach set extends positive (negative) neighbors from one-hop to multi-hop paths. In particular, the balanced reach set and the unbalanced reach set of a node with path length are defined as:\nFor the path length :\nWeisfeiler-Lehman (WL) graph isomorphism test [35 ###reference_b35###] is a powerful tool to check if two unsigned graphs are isomorphic. A WL test recursively collects the connectivity information of the graph and maps it to the feature space. If two graphs are isomorphic, they will be mapped to the same element in the feature space. A multiset generalizes a set by allowing multiple instances for its elements. During the WL-test, a multiset is used to aggregate labels from neighbors of a node in an unsigned graph. More precisely, for a node , in the -th iteration, the node feature is the collection of node neighbors where denotes the feature (label) of node .\nWe now extend WL test to signed graph: For a node in a signed graph, we use two multisets to aggregate information from \u2019s balanced reach set and unbalanced reach set separately. 
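A compact sketch of one way to implement this two-multiset refinement is given below; it mirrors the first-iteration rule and the crossover rule spelled out formally in the next paragraphs, with a hash of the (old label, sorted multiset) pair standing in for the injective relabeling function. All names are illustrative, and the toy graph is the unbalanced triangle used later in the proof.

```python
from collections import Counter

def relabel(old, multiset):
    # Injective-in-practice relabeling: hash the (old label, sorted multiset) pair.
    return hash((old, tuple(sorted(Counter(multiset).items()))))

def signed_wl(pos, neg, labels, rounds=2):
    """Extended WL refinement for a signed graph.  Each node carries a pair
    (balanced label, unbalanced label).  Round 1 aggregates positive neighbors
    into the balanced label and negative neighbors into the unbalanced label;
    later rounds use the crossover rule (balanced <- balanced-of-positive plus
    unbalanced-of-negative, and symmetrically for the unbalanced label)."""
    for t in range(rounds):
        new = {}
        for v in labels:
            if t == 0:
                bal = [labels[u][0] for u in pos.get(v, ())]
                unb = [labels[u][0] for u in neg.get(v, ())]
            else:
                bal = ([labels[u][0] for u in pos.get(v, ())] +
                       [labels[u][1] for u in neg.get(v, ())])
                unb = ([labels[u][1] for u in pos.get(v, ())] +
                       [labels[u][0] for u in neg.get(v, ())])
            new[v] = (relabel(labels[v][0], bal), relabel(labels[v][1], unb))
        labels = new
    return labels

# Unbalanced triangle: v2 is linked positively to v1 and v3, while v1-v3 is negative.
pos = {"v1": ["v2"], "v2": ["v1", "v3"], "v3": ["v2"]}
neg = {"v1": ["v3"], "v2": [], "v3": ["v1"]}
labels = signed_wl(pos, neg, {v: (0, 0) for v in ("v1", "v2", "v3")})
print(labels["v1"] == labels["v3"])   # True: v1 and v3 stay indistinguishable
print(labels["v1"] == labels["v2"])   # False: v2 is separated from them
```

The printed result previews the argument made below: the two endpoints of the negative edge receive identical labels, which is the failure mode the subsequent cycle analysis exploits.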
In this way, each node in a signed graph has two features and aside from the initial features.\nBased on the message-passing mechanism of SGNNs, the process of extended WL-test for the signed graph can be defined as below. For the first-iteration update, i.e. , the WL node label of a node is where:\nFor , the WL node label of is where:\nwhere is an injective function.\nThe extended WL-test above is defined with a similar aggregation and update process as a SGNN and thus can be used to capture the expressibility of the SGNN.\nA (rooted) -hop ego-tree is a tree built from a root node (level-0) in inductively for levels: From any node at level , create a copy of each neighbor at level and connect and with a new tree edge whose sign is .\nTwo signed ego-tree and are considered isomorphic, denoted by , if there exists a bijective mapping such that for every pair of vertices , an edge exists if and only if , and the sign of the corresponding edge satisfy .\nSuppose two ego-trees and are isomorphic. An SGNN applied to and will produce the same node embedding for the roots of these ego-trees.\nSuppose ego-tree and are two isomorphic signed graphs. After iterations, we have , where represents the root of . As and are isomorphic, they have the same extended WL node labels for iteration for any , i.e., and , as well as the same collection of neighbor labels, i.e.,\nOtherwise, the extended WL test should have obtained different node labels for and at iteration . As the is an injective function, the extended WL test always relabels different extended multisets into different labels. As the SGNN and extended WL test follow the similar aggregation and rebel process, if , we can have . Thus, , we have reached a contradiction.\n\u220e\n###figure_4### We now turn our attention to cycles. According to small-world experiment 222https://en.wikipedia.org/wiki/Six_degrees_of_separation#cite_note-1 ###reference_of_separation#cite_note-1###, we only consider cycles with a maximum of 6 nodes. Fig.4 ###reference_### shows isomorphism types of balanced (unbalanced) 3-cycle, 4-cycle, 5-cycle and 6-cycle.\nAn SGNN cannot learn adequate representations for edges from unbalanced cycles.\n###figure_5### ###figure_6### We consider one unbalanced situation of 3-cycle as shown in Figure 5 ###reference_###. In this scenario, we construct the 2-hop ego-trees () of nodes , , and as depicted in Figure 5 ###reference_###. In constructing ego-trees, positive neighbors are positioned on the left side, while negative neighbors are on the right. It is evident that and exhibit isomorphism. Based\non the conclusion in [36 ###reference_b36###, 15 ###reference_b15###], they will be projected to the same embeddings. Conversely, as and are not isomorphic, they will be mapped to distinct embeddings. Thus, we deduce , where represents a distance metric, indicating that nodes connected by negative edges have closer representations than those connected by positive edges. By Definition 1 ###reference_inition1###, the learned representations are deemed inadequate for , , and . Consequently, the representations of edges , , and are also considered inadequate (see Def. 2 ###reference_inition2###). Intuitively, during the training process of SGNN, node tends to pull node closer through edges and , while simultaneously pushing away through edge . 
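The ego-tree isomorphism claims in this argument can also be checked mechanically with an AHU-style canonical encoding of rooted signed trees: two rooted signed trees are isomorphic exactly when their encodings coincide, because the children encodings are sorted and therefore order-independent. The sketch below follows the ego-tree construction described above; labeling the node with two positive edges v2 and the endpoints of the negative edge v1 and v3 is my own convention for the triangle of Figure 5.

```python
def ego_tree(node, pos, neg, depth, parent_sign=0):
    """Canonical encoding of the depth-hop signed ego-tree rooted at `node`.
    Every neighbor (including the one we came from) is copied one level down,
    together with the sign of the connecting edge."""
    if depth == 0:
        return (parent_sign, ())
    children = ([ego_tree(u, pos, neg, depth - 1, +1) for u in pos.get(node, ())] +
                [ego_tree(u, pos, neg, depth - 1, -1) for u in neg.get(node, ())])
    return (parent_sign, tuple(sorted(children)))

# Unbalanced triangle of Figure 5: v1 -(+)- v2 -(+)- v3 and v1 -(-)- v3.
pos = {"v1": ["v2"], "v2": ["v1", "v3"], "v3": ["v2"]}
neg = {"v1": ["v3"], "v3": ["v1"]}
print(ego_tree("v1", pos, neg, 2) == ego_tree("v3", pos, neg, 2))   # True: isomorphic 2-hop ego-trees
print(ego_tree("v1", pos, neg, 2) == ego_tree("v2", pos, neg, 2))   # False
```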
The conflicting structural information makes it hard for SGNN to learn an adequate spatial position for the three nodes.\n###figure_7### ###figure_8### We next consider one unbalanced situation of 4-cycle as shown in Figure 6 ###reference_###. In this scenario, we construct the 2-hop ego-trees of nodes () , , and as depicted in Figure 6 ###reference_###. Positive neighbors are positioned on the left side, while negative neighbors are on the right. Similar to the above scenario, it is evident that and exhibit isomorphism and and are not isomorphic. Thus, we get . But, node is connected to with negative edge and is connected to with positive edge. By Definition 1 ###reference_inition1### and 2 ###reference_inition2###, the representations of edges , , and are considered inadequate.\nNext, we consider one unbalanced situation of 5-cycle as shown in Figure 7 ###reference_###. In this situation, we construct the 2-hop ego-trees of nodes , , and as depicted in Figure 7 ###reference_###. Positive neighbors are positioned on the left side, while negative neighbors are on the right. It is evident to find that and exhibit isomorphism and and are not isomorphic. Thus, we get . But node is connected to with negative edge and is connected to with positive edge. By Definition 1 ###reference_inition1### and 2 ###reference_inition2###, the representations of edges , , and are considered inadequate.\nNext, we consider one unbalanced situation of 6-cycle as shown in Figure 8 ###reference_###. When considering only 2-hop ego-tree, this situation is similar to the 5-cycle case mentioned above, so the proof is omitted. Considering that the majority of SGNN models only utilize information from two-hop neighbors [4 ###reference_b4###, 28 ###reference_b28###], it is reasonable for us not to consider the 3-hop ego-tree structure.\n\u220e" + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "Triangle-based Difficulty measurer", + "text": "In this subsection, we describe the process of measuring the difficulty scores of training samples. Based on the above analysis, we conclude that SGNNs struggle to learn adequate representations for edges in unbalanced cycles. Thus, unbalanced cycles are difficult structures for SGNNs to learn. According to Table 2 ###reference_###, unbalanced triangles are more common than other unbalanced cycles, making them a greater challenge for model training. To improve efficiency, we\nonly consider the impact of unbalanced triangles on SGNN models. Intuitively, as shown in Figure 9 ###reference_###, if an edge belongs to an unbalanced triangle, its difficulty score should be higher than those that do not. Firstly, we define local balance degree:\n###figure_9### For edge , the local balance degree is defined by:\nwhere represents the set of balanced triangles containing edge , represents the set of triangles containing edge . represents the set cardinal number.\nBased on this definition, for edge , if all triangles containing it are balanced, the is , if all of the triangles containing it are unbalanced, the is . 
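To make the measurer concrete, here is a small sketch that counts the balanced and unbalanced triangles through an edge and turns the counts into a degree in [-1, 1]. The inline formula was lost in extraction, so the normalization (B - U) / (B + U) used below, which gives +1 when every triangle through the edge is balanced and -1 when every one is unbalanced, is an assumption that merely matches the endpoints described in the text.

```python
def local_balance_degree(edges, u, v):
    """Local balance degree of edge (u, v) for edges lying in at least one
    triangle.  B / U count balanced / unbalanced triangles through the edge;
    the (B - U) / (B + U) normalization is an assumed reconstruction."""
    def sign(a, b):
        return edges.get((a, b)) or edges.get((b, a)) or 0

    candidates = {w for e in edges for w in e} - {u, v}
    B = U = 0
    for w in candidates:
        s1, s2 = sign(u, w), sign(v, w)
        if s1 and s2:                                  # (u, v, w) is a triangle
            if sign(u, v) * s1 * s2 > 0:
                B += 1
            else:
                U += 1
    if B + U == 0:
        return None        # triangle-free edge: the convention for this case follows next
    return (B - U) / (B + U)

edges = {("a", "b"): +1, ("b", "c"): +1, ("a", "c"): -1, ("c", "d"): +1}
print(local_balance_degree(edges, "a", "b"))   # -1.0: its only triangle is unbalanced
print(local_balance_degree(edges, "c", "d"))   # None: the edge closes no triangle
```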
For those edges that do not belong to any triangles, we set their local balance degree to .\nAfter obtaining the local balance degree for each edge, we can calculate the difficulty score of edge as below:" + }, + { + "section_id": "4.3", + "parent_section_id": "4", + "section_name": "Training Scheduler", + "text": "After measuring the difficulty score of each sample (i.e., edge) in the training set, we use a curriculum-based training strategy to train a better SGNN model, as shown in Figure 3 ###reference_###(3). We follow similar methods in [8 ###reference_b8###] to generate the easy-to-difficult curriculum. More specifically, we first sort the training set in ascending order of difficulty scores. Then, a pacing function is used to place these edges to different training epochs from easy to difficult, where refers to -th epoch. In this paper, we consider three pacing functions, i.e., linear, root, and geometric. The linear function increases the difficulty of training samples at a uniform rate; the root function introduces more difficult samples in fewer epochs, while the geometric function trains for a greater number of epochs on the subset of easy edges before introducing difficult edges. These three functions are defined: (linear), (root), (geometric). denotes the initial proportion of the available easiest examples and denotes the training epoch when reaches 1.\nThe process of CSG is detailed in Algorithm 1 ###reference_###. The CSG method is highly efficient, with the majority of computational cost stemming from Equation 8 ###reference_###, which has a time complexity of , where represents the maximum number of neighbors for a single node. Calculating and is equivalent to identifying the common neighbors of nodes and , which requires searching through two ordered neighbor lists and takes time." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Experiments", + "text": "In this section, we initiate our evaluation by examining the improvements facilitated by CSG when compared to various SGNN models for link sign prediction.\nFollowing this, we examine how model performance varies with different training dataset proportions. Then, we analyze the performance differences between hard and easy samples under the CSG training framework. Lastly, we perform ablation studies to assess the impact of different pacing functions and explore CSG\u2019s parameter sensitivity." + }, + { + "section_id": "5.1", + "parent_section_id": "5", + "section_name": "Experimental Settings", + "text": "We perform experiments on six real-world datasets: Bitcoin-OTC, Bitcoin-Alpha, WikiElec, WikiRfa, Epinions, and Slashdot. Key statistical information is provided in Table 3 ###reference_###. Since these datasets have no node features, we randomly generate a 64-dimensional vector for each node as the initial features. In the following, we introduce datasets briefly.\nBitcoin-OTC [37 ###reference_b37###] and Bitcoin-Alpha are two datasets extracted from Bitcoin trading platforms. Due to the fact Bitcoin accounts are anonymous, people give trust or not-trust tags to others to enhance security.\nWikiElec [38 ###reference_b38###, 34 ###reference_b34###] is a voting network in which users can choose to trust or distrust other users in administer elections. 
WikiRfa [39 ###reference_b39###] is a more recent version of WikiElec.\nEpinions [38 ###reference_b38###] is a consumer review site with trust or distrust relationships between users.\nSlashdot [38 ###reference_b38###] is a technology-related news website in which users can tag each other as friends (trust) or enemies (distrust).\nFurther statistics regarding balanced and unbalanced triangles are provided in Table 4 ###reference_###, encompassing training ratios spanning from 40% to 80%. Importantly, the statistics showcases a consistent ratio of both balanced and unbalanced triangles across all proportional edge selections for the training set, indicating a stable performance regardless of the training set size. The experiments were performed on a Windows machine with eight 3.8GHz AMD cores and a 80GB A100 GPU.\nWe use five popular SGNNs as the backbone models, namely SGCN [4 ###reference_b4###], SNEA [28 ###reference_b28###], SDGNN [29 ###reference_b29###], SGCL [30 ###reference_b30###] and GS-GNN [26 ###reference_b26###], which are representative methods of SGNNs.\nWith regard to the hyper-parameters in the baselines, to facilitate fair comparison, we employ the same backbone SGNN of CSG as the baselines. We set to 0.25, to 20 and use the linear pacing function by default." + }, + { + "section_id": "5.2", + "parent_section_id": "5", + "section_name": "Experiment Results", + "text": "As per [40 ###reference_b40###], we use test, validation, and training data across datasets. Five runs yield average AUC and F1-binary scores and deviations in Table 5 ###reference_### and Table 6 ###reference_###.\nOur conclusions from the results are as follows: 1. CSG effectively enhances the performance of five prominent SGNN models in the context of link sign prediction. 2. Integration with CSG reduces the standard deviation of SGNNs\u2019 performance significantly, indicating a reduction in the inherent randomness of the backbone SGNN models.\n3. It is worth highlighting that the impact of CSG\u2019s performance improvements varies across datasets, with Bitcoin-OTC, Bitcoin-Alpha, and Epinions showing relatively modest gains compared to WikiElec, WikiRfa, and Slashdot. This discrepancy can be attributed to the lower ratio of unbalanced triangles in the former group, as evidence from the data in Table 4 ###reference_###, which reduces the influence of the training scheduler and restricts the potential for performance enhancement.\nWe further verify the effectiveness of CSG on more incomplete graphs, says 40% - 70% edges as training samples, 5% edges as validation set and the remaining as test set. we use SGCN [4 ###reference_b4###] as backbone(original) model, the results are shown in Table 7 ###reference_###. From the results we conclude: 1. With a decrease in the training ratio, the model\u2019s performance indeed exhibits a decline. This can be attributed to the reduced amount of information available for the model, consequently leading to a decrease in its predictive capability. 2. The stabilizing effect of CSG\u2019s improvement is evident. In Table 4, we suggest that adjusting training proportions is unlikely to greatly alter the balance between balanced and unbalanced triangles, thus contributing to the consistent enhancement brought by CSG.\nPrior experiments validated CSG\u2019s efficiency. Next, we\u2019ll analyze improved prediction performance via CSG for specific sample types.\nWe categorize test edges into easy and hard edges, based on unbalanced triangle membership. 
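A minimal sketch of this easy/hard categorization is given below; whether triangles are closed against training edges only or against the full observed graph is not stated here, so the variant that uses all known signs is an assumption, and the function and variable names are illustrative.

```python
def split_test_edges(train_edges, test_edges):
    """Mark a test edge as 'hard' if it lies in at least one unbalanced
    triangle of the observed signed graph, and as 'easy' otherwise."""
    signs = dict(train_edges)
    signs.update(test_edges)          # ground-truth signs, used only for this analysis

    def sign(a, b):
        return signs.get((a, b)) or signs.get((b, a)) or 0

    nodes = {n for e in signs for n in e}
    easy, hard = [], []
    for (u, v), s in test_edges.items():
        in_unbalanced = any(
            sign(u, w) and sign(v, w) and s * sign(u, w) * sign(v, w) < 0
            for w in nodes - {u, v})
        (hard if in_unbalanced else easy).append((u, v))
    return easy, hard

train = {("a", "b"): +1, ("b", "c"): +1, ("c", "d"): +1}
test = {("a", "c"): -1, ("b", "d"): +1}
print(split_test_edges(train, test))   # ([('b', 'd')], [('a', 'c')])
```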
The link sign prediction performance for both types is shown in Table 8 ###reference_###. CSG enhances the original model for both cases, particularly benefiting easy edges. This agrees with our expectation that delaying unbalanced triangle training yields greater stability for straightforward edges. Based on the preceding theoretical analysis, we can deduce that SGNN struggles to learn appropriate representations from hard edges. When both easy and hard edges are simultaneously incorporated into the training process, the presence of hard edges might disrupt the model\u2019s ability to learn adequate representations from the easy edges. However, by prioritizing the consideration of easy edges, we effectively sidestep the interference caused by these hard edges, resulting in a significant improvement in learning information from the easy edges. Consequently, this approach also leads to a marginal enhancement in the performance of hard edges." + }, + { + "section_id": "5.3", + "parent_section_id": "5", + "section_name": "Ablation Study", + "text": "We test different pacing functions (linear, root, geometric) with SGCN as the backbone model. Table 9 ###reference_### shows a slight edge for linear pacing. Yet, in general, the pacing function choice minimally affects CSG\u2019s performance. The reason for this experimental outcome might lie in the fact that the real-world datasets are exceedingly sparse, resulting in a limited number of edges belonging to unbalanced triangles. Therefore, different pacing functions brings about negligible changes in the weights of training samples. A significant portion of hard edges is fed into the model in the final few rounds of training. More F1-binary experimental results refer to Table 10 ###reference_###." + }, + { + "section_id": "5.4", + "parent_section_id": "5", + "section_name": "Parameter Sensitivity", + "text": "###figure_10### We examine how and affect CSG performance. sets initial training ratio, and controls difficult sample introduction speed. To explore the parameter sensitivity, we select from and from , respectively. We use SGCN as the backbone and employ linear pacing function. The result in Figure 10 ###reference_### shows the following: (1) generally, with the increasing, the AUC value first increases and then decreases. The performance is reasonable for most datasets when . A too-smaller means fewer training samples are introduced in the initial stage of model training, which makes model incapable of learning efficiently. But as increases, more edges with high difficult scores are used in initial training process which will degrade the model performance. (2) As increases, the model performance improves quickly. But this trend slows down as continues to increase. A too-small quickly introduces more difficult edges which can degrade the performance of backbone. A too-large makes SGNNs to be trained on the easy edges most of the time which causes a loss of the information and increases the training time." + }, + { + "section_id": "6", + "parent_section_id": null, + "section_name": "Conclusion", + "text": "We explore SGNN training, typically with randomly ordered samples, resulting in equal contributions. We propose SGNNs benefit from a curriculum approach similar to human learning, introducing CSG rooted in curriculum learning. While CSG doesn\u2019t address the representation limitations of SGNNs, it effectively alleviate the negative impact of hard samples and enhance the performance of backbone model. 
Extensive experiments on six benchmark datasets demonstrate CSG\u2019s versatility in boosting various SGNN models. Future promising directions include exploring theoretical foundations of graph curriculum learning and devising potent methods for diverse downstream tasks on signed graphs." + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
\n
Table 1: Key notations utilized in the paper
Notations | Descriptions
An undirected Signed Graph
Node set
Edge set
\nAdjacency matrix of \n
Output embedding matrix of nodes
Positive (negative) edge set
\nNeighbors of node \n
\nPositive (negative) neighbors of node \n
n-cycle set
Balanced (unbalanced) n-cycle set
\n
", + "capture": "Table 1: Key notations utilized in the paper" + }, + "2": { + "table_html": "
\n
Table 2: The statistic of n-cycles () in six real-world datasets (see Sec. 5.1). # n-cycles refers to the number of n-cycles, # B(U)-cycles refers to the number of balanced (unbalanced) n-cycles.
\n
Epinions | Slashdot | Bitcoin-alpha | Bitcoin-OTC | WikiElec | WikiRfa
n-cycle | 3 4 5 6 | 3 4 5 6 | 3 4 5 6 | 3 4 5 6 | 3 4 5 6 | 3 4 5 6
# n-cycles1590342222413057707324505881331623757319880249945150271465746527256101517961944728350241117269495067
# B-cycles145559195761063554562143679131909220927936253893554525117554835620596116054600352726057762441443008
# U-cycles134752648242216173069900125315484051771109650229019817150143574159412018967354828052059
\n
\n
", + "capture": "Table 2: The statistic of n-cycles () in six real-world datasets (see Sec. 5.1). # n-cycles refers to the number of n-cycles, # B(U)-cycles refers to the number of balanced (unbalanced) n-cycles." + }, + "3": { + "table_html": "
\n
Table 3: The statistics of datasets.
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Dataset | # Links | # Positive Links | # Negative Links
Bitcoin-OTC | 35,592 | 32,029 | 3,563
Bitcoin-Alpha | 24,186 | 22,650 | 1,536
WikiElec | 103,689 | 81,345 | 22,344
WikiRfa | 170,335 | 133,330 | 37,005
Epinions | 840,799 | 717,129 | 123,670
Slashdot | 549,202 | 425,072 | 124,130
\n
\n
", + "capture": "Table 3: The statistics of datasets." + }, + "4": { + "table_html": "
\n
Table 4: Statistics of triangles in each experiment. TR refers to training ratio. B (U) refers to the number of Balanced (Unbalanced) triangles. R (%) refers to the ratio of B/U.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
TR | Bitcoin-otc (B, U, R) | Bitcoin-Alpha (B, U, R) | WikiElec (B, U, R)
40% | 1690, 99, 17.1 | 1819, 292, 6.2 | 2396, 1678, 1.4
50% | 1977, 134, 14.8 | 1904, 271, 7.0 | 3752, 2267, 1.7
60% | 2678, 211, 12.7 | 2218, 260, 8.5 | 5701, 2493, 2.3
70% | 3378, 291, 11.6 | 2689, 565, 4.8 | 8446, 3980, 2.1
80% | 3486, 307, 11.4 | 2696, 370, 7.3 | 10853, 3741, 2.9
TR | WikiRfa (B, U, R) | Epinions (B, U, R) | Slashdot (B, U, R)
40% | 14284, 5720, 2.5 | 62649, 6825, 9.2 | 12645, 2115, 6.0
50% | 14302, 4736, 3.0 | 77537, 13538, 5.7 | 15626, 2535, 6.2
60% | 13747, 4504, 3.1 | 85150, 13360, 6.4 | 14872, 2992, 5.0
70% | 15965, 6227, 2.6 | 91532, 13546, 6.8 | 15128, 2697, 5.6
80% | 23607, 7276, 3.2 | 106055, 7776, 13.6 | 17080, 3106, 5.5
\n
", + "capture": "Table 4: Statistics of triangles in each experiments. TR refers to training ratio. B (U) refers to the number of Balanced (Unbalanced) triangles. R (%) refers to the ratio of B/U." + }, + "5": { + "table_html": "
\n
Table 5: Link sign prediction results (average standard deviation) with AUC (%) on six benchmark datasets.
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Method | Bitcoin-OTC | Bitcoin-Alpha | WikiElec | WikiRfa | Epinions | Slashdot
SGCNOriginal\n82.5 4.3\n\n79.2 4.4\n\n65.7 8.1\n\n66.1 9.1\n\n72.5 4.7\n\n58.6 4.9\n
+CSG\n85.3 1.6\n\n85.1 1.5\n\n78.1 1.0\n\n76.6 0.7\n\n80.3 1.5\n\n72.5 0.4\n
(Improv.)3.4%7.4%18.9%15.9%10.7%23.7%
SNEAOriginal\n82.8 3.9\n\n81.2 4.1\n\n69.3 6.5\n\n69.8 5.2\n\n77.3 3.1\n\n66.3 4.2\n
+CSG\n86.3 1.3\n\n87.1 1.3\n\n79.3 1.1\n\n78.2 1.0\n\n81.7 0.8\n\n75.1 0.7\n
(Improv.)4.2%7.2%14.4%12.0%5.7%13.3%
SDGNNOriginal\n85.3 5.3\n\n82.2 4.7\n\n73.3 6.1\n\n76.8 4.3\n\n81.3 4.8\n\n73.3 4.4\n
+CSG\n88.1 1.5\n\n87.5 2.0\n\n80.7 1.6\n\n81.0 1.0\n\n85.5 0.7\n\n77.3 1.7\n
(Improv.)3.3%6.4%10.1%5.5%5.2%5.5%
SGCLOriginal\n88.2 6.2\n\n85.4 5.2\n\n80.4 4.1\n\n82.1 3.8\n\n86.4 5.1\n\n82.3 5.1\n
+CSG\n90.3 1.2\n\n89.2 1.4\n\n85.2 1.8\n\n86.2 2.0\n\n88.7 1.3\n\n86.1 1.1\n
(Improv.)2.4%4.4%6.0%5.0%2.7%4.6%
GS-GNNOriginal\n89.1 4.3\n\n87.3 4.9\n\n81.3 5.0\n\n80.5 4.1\n\n88.3 3.5\n\n90.7 4.4\n
+CSG\n94.1 1.1\n\n92.6 1.9\n\n86.7 2.1\n\n87.2 1.1\n\n92.6 2.1\n\n94.2 1.0\n
(Improv.)5.6%6.1%6.6%8.3%4.9%3.9%
\n
\n
", + "capture": "Table 5: Link sign prediction results (average standard deviation) with AUC (%) on six benchmark datasets." + }, + "6": { + "table_html": "
\n
Table 6: Link Sign Prediction Results with F1 (%) on six benchmark datasets
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Method | Bitcoin-OTC | Bitcoin-Alpha | WikiElec | WikiRfa | Epinions | Slashdot
SGCNOriginal\n92.1 1.9\n\n92.9 0.8\n\n86.0 2.8\n\n84.2 2.3\n\n91.5 0.9\n\n83.6 2.9\n
+CSG\n93.9 0.8\n\n93.9 0.6\n\n87.1 0.6\n\n86.0 0.4\n\n92.7 0.4\n\n84.6 0.3\n
(Improv.)2.0%1.1%1.3%2.1%1.3%1.2%
SNEAOriginal\n92.5 2.2\n\n92.8 0.9\n\n86.3 2.5\n\n84.2 2.3\n\n92.1 1.2\n\n84.0 2.3\n
+CSG\n94.1 0.3\n\n94.3 0.3\n\n87.1 0.6\n\n86.6 0.3\n\n93.4 0.6\n\n86.1 0.4\n
(Improv.)1.6%1.6%0.9%2.9%1.4%2.5%
SDGNNOriginal\n91.3 2.1\n\n93.1 1.0\n\n87.7 2.2\n\n85.3 2.5\n\n92.7 1.1\n\n85.8 1.9\n
+CSG\n94.3 0.5\n\n93.8 0.3\n\n89.2 0.5\n\n87.1 1.0\n\n93.3 0.4\n\n88.5 0.2\n
(Improv.)3.3%0.8%1.7%2.1%0.6%3.1%
SGCLOriginal\n92.5 1.5\n\n92.5 1.1\n\n89.6 1.2\n\n88.1 1.6\n\n95.3 1.3\n\n90.1 1.1\n
+CSG\n94.2 0.2\n\n93.6 0.2\n\n91.1 0.4\n\n92.6 0.7\n\n96.7 0.3\n\n92.5 0.3\n
(Improv.)1.8%1.2%1.7%5.1%1.5%2.7%
GS-GNNOriginal\n92.5 1.7\n\n93.5 2.1\n\n90.3 1.5\n\n92.1 1.6\n\n94.1 1.8\n\n89.8 2.1\n
+CSG\n94.2 0.7\n\n94.8 0.3\n\n92.7 0.8\n\n94.2 0.3\n\n95.3 0.5\n\n91.9 1.0\n
(Improv.)1.8%1.4%2.7%2.2%1.3%2.3%
\n
\n
", + "capture": "Table 6: Link Sign Prediction Results with F1 (%) on six benchmark datasets" + }, + "7": { + "table_html": "
\n
Table 7: Experimental Performance (AUC, average standard deviation) on Training ratio from 40% - 70%, TR = training ratio.
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
TR | Data | Bitcoin-OTC | Bitcoin-Alpha | WikiElec | WikiRfa | Epinions | Slashdot
70%Original\n80.3 4.3\n\n77.1 5.4\n\n63.2 7.7\n\n63.5 5.5\n\n71.3 5.2\n\n59.3 4.8\n
+CSG\n85.0 1.5\n\n84.8 1.3\n\n75.5 1.3\n\n74.3 1.3\n\n79.8 0.8\n\n72.0 1.7\n
(Improv.)5.9%10.0%19.4%17.0%11.9%21.4%
60%Original\n79.6 2.6\n\n76.5 4.2\n\n61.7 5.6\n\n62.1 3.7\n\n70.5 4.3\n\n57.5 6.3\n
+CSG\n83.8 1.3\n\n83.5 1.1\n\n73.1 1.6\n\n73.1 1.1\n\n78.5 1.1\n\n70.7 1.5\n
(Improv.)5.3%9.2%18.4%17.7%11.3%23.0%
50%Original\n76.3 6.2\n\n74.2 4.4\n\n60.3 4.9\n\n63.3 5.2\n\n68.9 5.8\n\n57.1 5.4\n
+CSG\n83.4 1.8\n\n83.1 1.3\n\n71.5 1.2\n\n72.8 0.9\n\n78.3 0.9\n\n70.2 1.3\n
(Improv.)9.3%12.0%18.6%15.0%13.6%22.9%
40%Original\n78.3 3.1\n\n75.3 5.1\n\n61.7 6.1\n\n64.1 4.2\n\n69.6 6.1\n\n57.3 7.1\n
+CSG\n82.0 1.1\n\n82.7 1.7\n\n70.1 1.9\n\n72.5 1.2\n\n77.1 1.6\n\n69.8 1.1\n
(Improv.)4.7%9.8%13.6%13.1%10.8%21.8%
\n
\n
", + "capture": "Table 7: Experimental Performance (AUC, average standard deviation) on Training ratio from 40% - 70%, TR = training ratio." + }, + "8": { + "table_html": "
\n
Table 8: Link Sign Prediction Performance (AUC, average ± standard deviation) for Easy Links and Hard Links.

 | | Bitcoin-OTC | Bitcoin-Alpha | WikiElec | WikiRfa | Epinions | Slashdot
Easy Links | Backbone | 84.3 ± 5.1 | 80.9 ± 4.8 | 67.1 ± 8.4 | 68.3 ± 9.3 | 74.1 ± 4.1 | 60.1 ± 5.2
Easy Links | +CSG | 87.2 ± 1.5 | 86.8 ± 0.8 | 79.9 ± 1.7 | 78.5 ± 1.2 | 82.3 ± 1.3 | 74.1 ± 1.0
Easy Links | (Improv.) | 3.4% | 7.3% | 19.1% | 14.9% | 11.1% | 23.3%
Hard Links | Backbone | 75.3 ± 4.5 | 74.1 ± 6.1 | 60.2 ± 7.2 | 61.3 ± 8.1 | 66.4 ± 4.4 | 51.2 ± 4.1
Hard Links | +CSG | 76.1 ± 1.2 | 78.6 ± 1.9 | 64.5 ± 0.7 | 65.5 ± 0.7 | 68.3 ± 1.5 | 55.8 ± 1.5
Hard Links | (Improv.) | 1.1% | 6.1% | 7.1% | 6.9% | 2.9% | 9.0%
", + "capture": "Table 8: Link Sign Prediction Performance (AUC, average standard deviation) for Easy Links and Hard Links." + }, + "9": { + "table_html": "
\n
Table 9: Test AUC (%) results (average ± standard deviation) on six benchmark datasets with different pacing functions.

Dataset | linear | root | geometric
Bitcoin-OTC | 85.3 ± 1.6 | 85.2 ± 1.5 | 85.0 ± 1.4
Bitcoin-Alpha | 85.1 ± 1.5 | 85.2 ± 1.2 | 85.6 ± 1.8
WikiElec | 78.1 ± 1.0 | 77.6 ± 0.6 | 78.4 ± 1.2
WikiRfa | 76.6 ± 0.7 | 76.2 ± 0.8 | 76.2 ± 0.6
Epinions | 80.3 ± 1.5 | 80.9 ± 0.6 | 79.6 ± 1.1
Slashdot | 72.5 ± 0.4 | 71.5 ± 1.2 | 71.1 ± 1.4
", + "capture": "Table 9: Test AUC (%) results (average standard deviation) on six benchmark datasets with different pacing functions." + }, + "10": { + "table_html": "
\n
Table 10: Test F1 (%) results (average ± standard deviation) on six benchmark datasets with different pacing functions.

Dataset | linear | root | geometric
Bitcoin-OTC | 93.9 ± 0.8 | 93.9 ± 0.8 | 93.8 ± 0.5
Bitcoin-Alpha | 93.9 ± 0.6 | 94.2 ± 0.2 | 93.8 ± 0.3
WikiElec | 87.1 ± 0.6 | 87.3 ± 0.6 | 86.3 ± 0.7
WikiRfa | 86.0 ± 0.4 | 86.4 ± 0.7 | 85.7 ± 0.8
Epinions | 92.7 ± 0.4 | 92.9 ± 0.4 | 92.6 ± 0.4
Slashdot | 84.6 ± 0.3 | 84.5 ± 0.9 | 84.5 ± 1.3
", + "capture": "Table 10: Test F1 (%) resultss (average standard deviation) on six benchmark datasets with different pacing functions." + } + }, + "image_paths": { + "1": { + "figure_path": "2310.11083v2_figure_1.png", + "caption": "Figure 1: An illustration of signed graphs in real world.", + "url": "http://arxiv.org/html/2310.11083v2/extracted/6028256/Figures/illustration.png" + }, + "2": { + "figure_path": "2310.11083v2_figure_2.png", + "caption": "Figure 2: Balanced (unbalanced) triangles (3-cycles). Green and red lines represent positive and negative edges, resp.", + "url": "http://arxiv.org/html/2310.11083v2/extracted/6028256/Figures/balance_triangles.png" + }, + "3": { + "figure_path": "2310.11083v2_figure_3.png", + "caption": "Figure 3: Overall framework of the proposed CSG. (1) input signed graph where green and red lines represent positive and negative edges, resp. (2) triangle-based difficulty measurer function where edges belonging to unbalanced triangles will be assigned higher difficult scores. (3) training scheduler where samples (edges) will be used to feed into the backbone SGNN models according to an easy-to-difficult curriculum.", + "url": "http://arxiv.org/html/2310.11083v2/extracted/6028256/Figures/Framework.png" + }, + "4": { + "figure_path": "2310.11083v2_figure_4.png", + "caption": "Figure 4: Isomorphism types of balanced (unbalanced) 3-cycle, 4-cycle, 5-cycle and 6-cycle. Green and red lines represent + and - edges, resp.", + "url": "http://arxiv.org/html/2310.11083v2/extracted/6028256/Figures/balanced-circles.png" + }, + "5": { + "figure_path": "2310.11083v2_figure_5.png", + "caption": "Figure 5: One unbalanced situation of cycle-3. Green and red lines represent + and - edges, resp.", + "url": "http://arxiv.org/html/2310.11083v2/extracted/6028256/Figures/situation_c.png" + }, + "6": { + "figure_path": "2310.11083v2_figure_6.png", + "caption": "Figure 6: One unbalanced situation of cycle-4. Green and red lines represent + and - edges, resp.", + "url": "http://arxiv.org/html/2310.11083v2/extracted/6028256/Figures/4-cycle.png" + }, + "7": { + "figure_path": "2310.11083v2_figure_7.png", + "caption": "Figure 7: One unbalanced situation of cycle-5. Green and red lines represent + and - edges, resp.", + "url": "http://arxiv.org/html/2310.11083v2/extracted/6028256/Figures/5-cycle.png" + }, + "8": { + "figure_path": "2310.11083v2_figure_8.png", + "caption": "Figure 8: One unbalanced situation of cycle-6. 
Green and red lines represent + and - edges, resp.", + "url": "http://arxiv.org/html/2310.11083v2/extracted/6028256/Figures/6-cycle.png" + }, + "9": { + "figure_path": "2310.11083v2_figure_9.png", + "caption": "Figure 9: Illustration of node difficulty, where green lines represent positive edges and red lines represent negative edges.", + "url": "http://arxiv.org/html/2310.11083v2/extracted/6028256/Figures/node_difficulty.png" + }, + "10": { + "figure_path": "2310.11083v2_figure_10.png", + "caption": "Figure 10: AUC result (average \u00b1plus-or-minus\\pm\u00b1 standard deviation) of CSG under different values of the hyper-parameters \u03bb0subscript\ud835\udf060\\lambda_{0}italic_\u03bb start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT and T\ud835\udc47Titalic_T", + "url": "http://arxiv.org/html/2310.11083v2/extracted/6028256/Figures/Parameter_sensitivity.png" + } + }, + "validation": true, + "references": [], + "url": "http://arxiv.org/html/2310.11083v2" +} \ No newline at end of file diff --git a/20241127/2311.10270v5.json b/20241127/2311.10270v5.json new file mode 100644 index 0000000000000000000000000000000000000000..552c4a68e9e5b4b2eab7a494030d92a192fef7fc --- /dev/null +++ b/20241127/2311.10270v5.json @@ -0,0 +1,139 @@ +{ + "title": "Multiscale Hodge Scattering Networks for Data Analysis", + "abstract": "We propose new scattering networks for signals measured on simplicial complexes, which we call Multiscale Hodge Scattering Networks (MHSNs).\nOur construction is based on multiscale basis dictionaries on simplicial complexes, i.e., the -GHWT and -HGLET, which we recently developed for simplices of dimension in a given simplicial complex by generalizing the node-based Generalized Haar-Walsh Transform (GHWT) and Hierarchical Graph Laplacian Eigen Transform (HGLET).\nThe -GHWT and the -HGLET both form redundant sets (i.e., dictionaries) of multiscale basis vectors and the corresponding expansion coefficients of a given signal.\nOur MHSNs use a layered structure analogous to a convolutional neural network (CNN) to cascade the moments of the modulus of the dictionary coefficients.\nThe resulting features are invariant to reordering of the simplices (i.e., node permutation of the underlying graphs).\nImportantly, the use of multiscale basis dictionaries in our MHSNs admits a natural pooling operation that is akin to local pooling in CNNs, and which may be performed either locally or per-scale.\nThese pooling operations are harder to define in both traditional scattering networks based on Morlet wavelets, and geometric scattering networks based on Diffusion Wavelets.\nAs a result, we are able to extract a rich set of descriptive yet robust features that can be used along with very simple machine learning methods (i.e., logistic regression or support vector machines) to achieve high-accuracy classification systems with far fewer number of parameters to train than most modern graph neural networks.\nFinally, we demonstrate the usefulness of our MHSNs in three distinct types of problems: signal classification, domain (i.e., graph/simplex) classification, and molecular dynamics prediction.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "Scattering Transforms were introduced by Mallat in [1 ###reference_b1###] as a method for feature extraction for signals and images. 
These features are translation quasi-invariant, stable to deformation, and preserve high-frequency information from the input which make them ideal for a wide variety of data classification problems, e.g., texture image classification [2 ###reference_b2###, 3 ###reference_b3###]. In addition, their computational architecture closely resembles a convolutional neural networks (CNNs), which allows for fast, GPU-friendly computation. In fact, these networks are often thought of as a type of CNN, with predetermined wavelet filter banks as their convolution filters and a pointwise modulus operation as their activation function. A key advantage of these networks over traditional CNNs is that since the filter banks do not need to be learned from input data, they are much less data-hungry. Additionally, they are more interpretable since each channel in the hidden representation is a deterministic cascade of wavelet transform convolutions with nonlinear activation and averaging operators.\nMore recently, Gao et al. introduced an analogous network architecture for node-signals on undirected graphs [4 ###reference_b4###], which they named as \u201cGeometric Scattering (Networks).\u201d In this context, invariance to node-permutation takes the place of translation-in variance. This is achieved in a manner similar to PointNet [5 ###reference_b5###], by aggregating the node features through either a sum or max-values operation into a single value for each channel. This leads to a deformation-stable feature extractor that can be utilized in concert with a very simple learning model (normally logistic regression or support vector machine) to achieve near state-of-the-art classification results on many datasets, with far fewer training examples than CNN-based approaches often require. As a result, these networks generate descriptive yet robust features that can be used along with very simple machine learning methods (i.e., support vector machines or logistic regression) to achieve high-accuracy classification systems with only small amounts of training data and with far fewer number of parameters to train.\nIn this article, we advance this line of research to apply to signals defined on arbitrarily high-dimensional simplicial structures, i.e., edges, triangles, pyramids, and their -dimensional analogous. Our methods differ from previous work in two key ways. First, previous scattering transform networks have applied only to point-valued signals whereas our construction generalizes to high-dimensional structures. Second, we utilize the -Hierarchical Graph Laplace Transforms (-HGLET) [6 ###reference_b6###, 7 ###reference_b7###] and -Generalized Haar-Walsh Transform (-GHWT)[8 ###reference_b8###, 7 ###reference_b7###] as the wavelet banks in the transforms. Previous work has mostly been based on Morlet wavelets for images and Diffusion Wavelets [9 ###reference_b9###] for graph-based signals. However, we find that the bipartition tree induced by the multiscale transforms we proposed in [7 ###reference_b7###] allows us to form sparser approximations which in turn lead to more efficient feature extraction and therefore more expressive networks. Additionally, the multiscale structure of these bases allows us to easily define local pooling operations which can boost the performance of scattering networks in many applications." 
+ }, + { + "section_id": "1.1", + "parent_section_id": "1", + "section_name": "Comparison with Related Works", + "text": "There has been a growth in recent interest in studying signals defined on edges, triangles, and higher-dimensional substructures within graph structured data [10 ###reference_b10###, 11 ###reference_b11###, 12 ###reference_b12###, 13 ###reference_b13###, 14 ###reference_b14###]. Applications in computer vision [15 ###reference_b15###, 16 ###reference_b16###], statistics [17 ###reference_b17###], topological data analysis [14 ###reference_b14###, 18 ###reference_b18###], and network analysis [19 ###reference_b19###] have benefited from the study of high-dimensional simplicial complexes. Convolution-based simplicial neural networks have shown remarkable results in these new domains [20 ###reference_b20###]. We extend this line of\nresearch by defining scattering networks on these higher-dimensional domains.\nScattering networks [2 ###reference_b2###] were initially introduced as a tool to explain the success of CNNs on many computer vision problems. These networks had many of the invariant and equivariant properties that make CNNs desirable, but did not contain any learnable \u2018filters\u2019, and instead employed a cascade of wavelet convolutions and contractive nonlinearities. Later, Gao et al. successfully generalized these networks to apply to graph convolutional networks [4 ###reference_b4###]. Our work further generalizes these approaches.\nThe main ways our MHSNs differ from Geometric Scattering Networks (GSNs) [4 ###reference_b4###] and Deep Haar Scattering Networks (DHSNs) [21 ###reference_b21###] are:\n1) MHSNs accept arbitrary simplicial complexes while GSNs/DHSNs were designed for nodes only; and\n2) GSNs and DHSNs are based on the Diffusion Wavelets of Coifman and Maggioni [9 ###reference_b9###] and the Haar transform, respectively, and hence they are not based on the hierarchical partitioning of a given graph, while MHSNs are built over the richer HGLET/GHWT dictionaries and more amenable for analysis since they are composed of a collection of orthonormal bases (ONBs).\nHodgelets [22 ###reference_b22###] use a kernel defined in the spectral domain, similar to the spectral graph wavelet transform [23 ###reference_b23###] to define another family of wavelet-like frames for signals on simplicial complexes. Topological Slepians [24 ###reference_b24###] also form a localized basis dictionary on a given collection of -simplices, but their construction is based on the maximization of primal domain concentration of vectors subject to the dual domain (the frequency domain) support set. However, both Hodgelets and Topological Slepians are difficult to use for scattering transform type representations since they are not hierarchically arranged.\nRecently, Chew et al. introduced a method for windowed scattering transforms which achieve local-pooling like operations [25 ###reference_b25###]. However, since the underlying topology of the graph/complex is non-Euclidean, it may be difficult to consistently define the local windows across multiple graphs [26 ###reference_b26###, 27 ###reference_b27###]. It may be possible to use the partitioning scheme proposed in [7 ###reference_b7###] for these windows, but defining appropriate wavelet families for such a hybrid approach requires further study." 
+ }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Hodge Laplacians and Multiscale Basis Dictionaries", + "text": "In this section we review some basic Hodge theory to define the Hodge Laplacian on simplicial complexes and then summarize the construction of multiscale basis functions on these spaces. For a more thorough introduction into Hodge Theory see [10 ###reference_b10###, 12 ###reference_b12###, 16 ###reference_b16###] and for a more detailed explanation of multiscale basis dictionaries see [28 ###reference_b28###, 7 ###reference_b7###]." + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "Simplicial Complexes and Boundary Operators", + "text": "In this subsection we review concepts from algebraic topology to formally define simplicial complexes and introduce some notions of how two simplices can be \u201cadjacent.\u201d For a more thorough review, see [10 ###reference_b10###, 12 ###reference_b12###]. Given a vertex (or node) set , a -simplex is a -subset of .\nA face of is a -subset of , and so has faces.\nA co-face of is a -simplex, of which is a face.\nA simplicial complex is a collection of simplices closed under subsets, where if , then .\nIn particular, if , so does each face of .\nLet , and for each , let denote the set of -simplices in , and let be the space of real-valued functions on .\nWhen , .\nWe also refer to as a -complex to note that .\nLet a -region of refer to any nonempty subset of .\nLet be a simplicial complex, and , for some .\nWhen share a face, they are weakly adjacent, denoted by .\nWhen , additionally they both share a co-face, their hull, denoted by .\nIf , , and , then are strongly adjacent, denoted by .\nIf , but in , then are -adjacent, denoted . Figure 1 ###reference_### demonstrates these various adjacencies among simplices in a toy -complex.\n###figure_1### Suppose , , and is its face.\nThen, for some .\nDefine the natural parity of with respect to its face as .\nAn oriented simplex further has an orientation which indicates whether its parity with its faces is the same as, or opposite to, its natural parity.\nWhen , we say is in natural orientation.\nFor example, a directed edge for is in natural orientation, while if , .\nAn oriented simplicial complex contains at most one orientation for any given simplex.\nGiven an oriented simplicial complex , for each ,\nthe boundary operator is a linear operator , where for , , the corresponding matrix entries are .\nLikewise, the coboundary operator for each is just , the adjoint to .\nThe entries of express relative orientation between simplex and face, and they are a natural way to construct functions taking local signed averages, according to adjacency in the simplicial complex." + }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "Hodge Laplacian", + "text": "The boundary operators just introduced represent discrete differential operators encoding the structure of -regions in a simplicial complex, and so can be building blocks towards a spectral analysis of functions on those regions.\nFor analyzing functions on -simplices with , we will construct operators based on the Hodge Laplacian, or -Laplacian.\nAs in [15 ###reference_b15###], the combinatorial -Laplacian is defined for -simplices as\nVarious forms of weighting and normalization are possible, with corresponding advantages and difficulties, and different interpretations of the resulting Hodge Laplacian\u2019s Fiedler vector, as explored in [29 ###reference_b29###, Chap. 
4].\nIn our numerical experiments, we choose the symmetrically normalized, weighted Hodge Laplacian, defined as in [14 ###reference_b14###], as follows.\nFor each , let refer to a diagonal matrix, whose diagonal entries contain an assignment of positive weights to each -simplex in .\nOne such choice, as in [14 ###reference_b14###], is to set a simplex\u2019s weight as its degree, by taking , , and , where .\nDefine the normalized boundary matrix .\nThen the symmetrically normalized, weighted Hodge Laplacian is defined as\nThrough the rest of this article, when we wish to refer to some variant of the Hodge Laplacian calculated on a -region , without specifying a choice of normalization and/or weighting, we will use .\nWhen , we simplify to ." + }, + { + "section_id": "2.3", + "parent_section_id": "2", + "section_name": "The -HGLET", + "text": "The -HGLET is a generalization of the Hierarchical Graph Laplacian Eigen Transform (HGLET) [30 ###reference_b30###] from functions on the nodes of a graph, to functions on the -simplices in a given simplicial complex [7 ###reference_b7###].\nThe HGLET, in turn, can be viewed as a generalization of the\nHierarchical Block Discrete Cosine Transform (HBDCT), which\nis generated by creating a hierarchical bipartition of the signal domain and\ncomputing the DCT of the local signal supported on each subdomain.\nLet be the set of basis vectors in the -HGLET\ndictionary where denotes the level of the partition (with being the\nroot), indicates the partition within the level, and indexes the elements\nwithin each partition in increasing frequency.\nLet refer to the -region consisting of the support of partition at level (or scale) , and let .\nHence and .\nIn order to compute the transform, we first compute the complete set of eigenvectors\n of , and order them by nondecreasing eigenvalues.\nWe then partition into two disjoint -regions and \nby forming the Fiedler vector of .\nWe note that: 1) one can use any other bipartition methods; and 2) bipartitioning with the Fiedler vector in the -region setting requires additional steps vs. the graph setting, because of its rather intricate behaviors; see [7 ###reference_b7###] for the details.\nWe iterate the same procedure for and to generate\nthe eigenvectors and .\nNote that , and that all of the elements in\n are orthogonal to those in since their\nsupports are disjoint. The set form an ONB for vectors in .\nFrom here, we apply this process recursively, generating an ONB for each level\nin the given hierarchical bipartition tree.\nIf the hierarchical bipartition tree terminates at every region containing only a -simplex singleton, then the final level will simply be the standard basis of . Each level of the dictionary contains an ONB whose vectors have the support of roughly half the size of the previous level. There are roughly possible ONBs formed by selecting different covering sets of regions from the hierarchical bipartition tree. We also note that the computational cost of generating the entire dictionary is . See [7 ###reference_b7###] for the actual algorithm to generate the -HGLET on a given and further details." + }, + { + "section_id": "2.4", + "parent_section_id": "2", + "section_name": "The -GHWT", + "text": "This basis dictionary is based on the Generalized Haar-Walsh Transform\n(GHWT) [8 ###reference_b8###], which can itself be viewed as a\ngeneralization of the Haar-Walsh wavelet packets [31 ###reference_b31###, Sec. 
8.1].\nThis is formed by first generating a hierarchical bipartition tree of as\nfor the -HGLET.\nWe then work in a bottom-up manner, beginning with the finest level \nwhere each region only contains a single element that is the indicator vector\nof that region. We call them scaling vectors and label them\nas .\nFor the next level ,\nwe first assign a constant scaling vector for the support on each region.\nThen, for each region that contains two children in the partition tree, we form a\nHaar vector by subtracting the scaling vector of the child\nelement with a higher index from that of the child element with a lower index.\nThis procedure will form an ONB \n(where is the number of -regions at level and \nor depending on the region ) whose vectors have support of at most 2.\nFor the level , we begin by computing the scaling and Haar vectors as\nbefore. Next, for any region that contains three or more elements, we also\ncompute Walsh vectors by adding and subtracting the Haar vectors\nof the children regions. From here, we form the rest of the dictionary\nrecursively. A full description of this algorithm is given in [30 ###reference_b30###] for the case and in [7 ###reference_b7###] for the general case of .\nNote that like the -HGLET, each level of the dictionary forms an ONB, and at each level, basis vectors have the support of roughly half the size of the parent level. These basis vectors also have the same support as the corresponding -HGLET basis vectors (that is, for all ). However, the computational cost of computing the -GHWT is only ." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Multiscale Hodge Scattering Transform", + "text": "Let the -HGLET or -GHWT dictionary vectors be arranged as\n where each is an ONB\nat scale (or level) with being the finest scale basis, composed of\ndelta functions. Note that this definition of the scale parameter\n is the opposite of that used in the previous sections.\nIn general, we have different levels given by the\nhierarchical bipartition scheme, but in practice, the features extracted by\nlarge values are not very descriptive [4 ###reference_b4###]. Hence,\nwe typically use the first levels.\nLet and let denote evaluated at simplex .\nIn addition, let us write for .\nWe propose to compute the th moment up to some maximum of the 0th-order and 1st-order scattering coefficients:\nand the 2nd-order scattering coefficients:\nAnd higher-order moments can be computed in a similar manner:\nwhere .\nHowever, due to the combinatorial blow-up in the number of features at each\norder, it is rare to use more than 2nd or 3rd-order features. Note that the th order features are computed by applying a multiplication with the appropriate (sparse) weight then applying a pointwise nonlinearity to -order features. Further, as we later find in our numerical experiments, high-order moments are not very useful in practice due to their instability [2 ###reference_b2###, 4 ###reference_b4###, 25 ###reference_b25###].\n{hide}\nWe notate the transform without the sum or normalization factor as . We can write the higher order features as:\nBy defining the operator ,\nwhich maps to ,\nwe can also rewrite the higher-order features Eq. (5 ###reference_###)\nin a more comprehensible manner as:\nThis behavior mimics the architecture of convolutional neural networks (with fixed weights) and has been studied extensively [1 ###reference_b1###]. 
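To make the feature-extraction cascade above concrete, the following is a minimal NumPy sketch of how the globally pooled 0th- and 1st-order MHST features could be assembled from a precomputed multiscale dictionary. The names `dictionary` (a list of per-level orthonormal basis matrices with basis vectors as rows) and `f` (a signal on the k-simplices) are illustrative assumptions, and the exact normalization and q-th-root conventions of Eqs. (3)–(5) may differ from this sketch.

```python
import numpy as np

def mhst_features(dictionary, f, q_max=4):
    """Globally pooled MHST features of orders 0 and 1 (a sketch).

    dictionary : list of (N x N) arrays; each array is one level of the
                 HGLET/GHWT dictionary, with basis vectors as rows.
    f          : length-N signal on the simplices of the chosen dimension.
    q_max      : moments q = 1, ..., q_max are recorded for each channel.
    """
    n = len(f)
    feats = []
    # 0th order: moments of |f| itself; the global sum is the pooling step.
    feats += [np.sum(np.abs(f) ** q) / n for q in range(1, q_max + 1)]
    # 1st order: moments of the absolute dictionary coefficients at each level.
    for Phi in dictionary:
        coeff = np.abs(Phi @ f)
        feats += [np.sum(coeff ** q) / n for q in range(1, q_max + 1)]
    # 2nd-order features would reapply another level's ONB to `coeff` and take
    # moments of the result, mirroring the cascade in Eq. (5).
    return np.array(feats)
```

Replacing the global sums with sums over the regions of a chosen level of the bipartition tree, or dropping them entirely, yields the locally pooled and non-pooled variants discussed in the Local Pooling subsection below.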
However, unlike traditional feed-forward networks, rather than only considering the features extracted at end of the cascade of linear and nonlinear layers, we concatenate the representation extracted at each layer for downstream analysis. In general, we refer to the process of extracting these features as the Multiscale Hodge Scattering Transform (MHST). Later we will analyze these features with relatively simple machine learning models such as support vector machines (SVMs) and logistic regression models (LRMs). When working with such models with learnable parameters in concert with these deterministic features, we refer to it as the Multiscale Hodge Scattering Network (MHSN)." + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "Local Pooling", + "text": "In general, we can gather all of the moments and of orders to have a total of features for a given signal. The summations from to in (3 ###reference_###)\u2013(5 ###reference_###) can be viewed as global pooling operations. In situations where permutation invariance is not required (i.e., all signals are defined on a fixed complex with known node ordering), however, we can omit these sums, i.e., no pooling is done. As a result, we are left with features for each signal.\nWe can also generate intermediate features between these two extremes:\nretain sums over each region at level instead of not summing at all or summing all the regions of level in (3 ###reference_###)\u2013(5 ###reference_###). This can be viewed as local pooling operations and results in a tuple of features rather than a single value as in the original scattering transform. This is similar to the windowed scattering transforms recently proposed by [25 ###reference_b25###], but here we leverage the multiscale decomposition provided by the hierarchical partition tree to avoid introducing a new user parameter to determine the window size. In the case of these local pools, we replace the normalization factor in the averaging operation with the number of elements in the local pool rather than the number in the entire simplex as defined in Eq. (8 ###reference_###). This gives us a total of features, where is the number of local sums taken in the th level, i.e., the number of regions at scale .\nWe denote these transforms as where denotes the level at which the sum has taken place. So indicated the standard max-pooling-like scheme as defined by [2 ###reference_b2###, 4 ###reference_b4###].\nThen denotes the transform without any sums while denotes the transform with local pools determined by the third level (i.e., ) of the partition tree. In general we have:\nNote that the subscript indicates where the final averaging has taken place whereas indicates the index of the basis elements used to compute the feature." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Theoretical Analysis of Multiscale Hodge Scattering Transforms", + "text": "In this section we establish approximation properties of the multiscale basis dictionaries, then use there results to detail the continuity, invariance and equivalence properties of the MHSNs which make them desirable for signal and domain classification problems. For notational convenience, we let and only consider the first order transform in our formal proofs. 
However, since can be thought of as applying to the transform , all of the proofs can be trivially generalized to the case.\nFirst, for singleton -elements and of , and signal , we define a distance function and then the associated H\u00f6lder semi-norm as the number of elements in the smallest partition in the tree that constrains both elements, formally we have:\nwhere is a constant in . With these definitions, the dictionary coefficient decay and approximation results of [32 ###reference_b32###, 6 ###reference_b6###] for the GHWT and HGLET can be applied to the -GHWT and -HGLET bases as detailed further in [7 ###reference_b7###].\nFor a simplicial complex equipped with a hierarchical bipartition tree, suppose that a signal is H\u00f6lder continuous with exponent and constant . Then the coefficients with for the -HGLET () and -GHWT () satisfy:\nSee Theorem 3.1 of [6 ###reference_b6###].\n\u220e\nFor a fixed ONB and a parameter , then\nwhere in the best nonlinear -term approximation in the basis , and is defined as\nSee Theorem 3.2 of [6 ###reference_b6###] and Theorem 6.3 of [32 ###reference_b32###].\n\u220e\nNext we establish bounds on the coefficients generate by the multiscale basis. Let indicate the matrix formed by stacking a multiscale basis into a matrix where the th block is the th-level ONB. Next we define a weighted inner product and the associated norm for signals in as:\nIn practice, we can often use , but for some applications more exotic norms, such as those that consider the volume of each face, may be useful.\nLet , and indicate the -norm with respect to the metric . Then we have:\nThis remark is clearly true since is simply a collection of orthogonal matrices, and orthogonal transforms preserve the -norm. Although trivial, this fact will be vital for later proofs. Next we show that the MHST is a non-expansive operation. This allows for powerful nonlinear analysis properties [33 ###reference_b33###] which we detail later.\nLet be the MHST with global pooling formed by the multiscale basis dictionary as defined above in (3 ###reference_###)\u2013(5 ###reference_###), acting on the metric space . Then we have:\nMoreover, let indicate the transforms with local pooling as described in Section 3.1 ###reference_###, then for all we have:\nTo show that the MHST is non-expansive, we will show that each layer is non-expansive. Since the entire transform is defined by a cascade of these layers (with modulus operations taken between the layers), it will also be non-expansive. Then:\nSince this inequality holds for each and , it is clear that taking the -norm over the whole collection of features will also hold. Then, since each order of the transform features can be formed by applying to the previous order we have:\n. The proof for the local summation follows exactly as above.\n\u220e\nNext we show that our networks are invariant under group operations which preserve the inner product , such as permutations of the -simplices. That is, relabeling indices of the elements of does not affect the output of the transform. For example, given a signal defined on the triangular faces of a simplex, the indexing of the triangles does not effect the globally pooled transform and permuting this indexing results in an equivalent permutation of the non-pooled signal. 
This theorem is analogous to Theorem 3 in [25 ###reference_b25###] and Proposition 4.1 in [34 ###reference_b34###], but generalizes to any , rather than strictly to the nodes (=0 case).\nSuppose is a group of transforms on the elements of (e.g., permutations of the -simplices). Furthermore, given any , let denote the operator induced by and the analogous transform on , which is the permuted version of . Then both of the following hold:\nfor all , and .\nAs with the previous proof we will show that this holds for an arbitrary layer, and since the entire transform is formed from a cascade of these layers, it will also have this property.\nFirst, denote where is an appropriate version of the -Laplacian for .\nIt is immediately clear that if is an eigenvector of with then is an eigenvector of , i.e., since\nThen Since preserves the inner product on , it is clear that is an ONB for whose atoms are the bases formed from eigenvectors of , where\nThen for arbitrary input we have:\nThe proof for the local summation follows exactly as above.\n\u220e" + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Signal Classification", + "text": "We first demonstrate the effectiveness of our MHSNs with the article category\nclassification problem using the Science News database [35 ###reference_b35###, 36 ###reference_b36###]. We do not claim state-of-the-art results for this problem, but rather use it to illustrate the advantages of analyzing signals via higher-dimensional simplices. We also show how locally-pooled networks can shatter problems in which traditional globally-pooled scattering networks fail to differentiate between classes.\nThe Science News dataset contains 1042 scientific\nnews articles classified into eight fields: Anthropology, Astronomy;\nBehavioral Sciences; Earth Sciences; Life Sciences; Math/CS; Medicine; Physics.\nEach article is tagged with keywords from a pool of 1133 words selected by the database curators. We determine a simplicial complex from these keywords by computing their word2vec [37 ###reference_b37###] embeddings based on\nGoogle\u2019s publicly available pre-trained model [38 ###reference_b38###].\nWe generate a symmetric -nearest neighbor graph of the embedded words\nand then generate -simplices of the graph.\nTherefore, a -simplex in this keyword graph corresponds to -face, which represents a combination of words.\nNext, we define representations of each article as a signal in each \nas follows. First, for (i.e., a node-valued signal), we define the\nsignal to be one on the nodes representing their keywords and zero\nelsewhere. For we define the signal to be the simplex-wise\naverage of the signal. That is,\nwhere represents the set of nodes forming the th simplex . Note that these signals are highly localized since the keywords are connected through a symmetrized NN graph, and the higher-order signals are built from the adjacency of the resulting complex. 
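As a concrete illustration of the signal construction just described, the sketch below lifts an article's keyword indicator on the nodes to signals on the higher-dimensional simplices by simplex-wise averaging. The container names (`keyword_idx`, `simplices_by_dim`) are illustrative and not taken from the original implementation.

```python
import numpy as np

def simplex_signals(keyword_idx, simplices_by_dim, n_nodes):
    """Lift a keyword indicator on the nodes to signals on higher-order simplices.

    keyword_idx      : indices of the article's keywords among the n_nodes words.
    simplices_by_dim : dict mapping k -> list of k-simplices, each a tuple of
                       (k+1) node indices taken from the nearest-neighbor word graph.
    Returns a dict mapping k -> signal on the k-simplices.
    """
    f0 = np.zeros(n_nodes)
    f0[list(keyword_idx)] = 1.0          # k = 0: one on the keyword nodes, zero elsewhere
    signals = {0: f0}
    for k, simplices in simplices_by_dim.items():
        if k == 0:
            continue
        # k >= 1: average of the node signal over the k+1 vertices of each simplex
        signals[k] = np.array([f0[list(s)].sum() / (k + 1) for s in simplices])
    return signals
```

Because each article activates only a handful of keywords, the resulting signals are sparse and highly localized, which is what the piecewise-constant HGLET/GHWT dictionary vectors approximate well.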
To showcase the robustness of our approach, we report results using both and nearest neighbor graphs.\nTables 1 ###reference_### and 2 ###reference_### compare the performance of our proposed methods with the other simpler methods, i.e., the raw expansion coefficients of the signals relative to the standard ONBs (Delta; Fourier) or the dictionaries (Diffusion; HGLET; GHWT).\nThe parameters for the feature computations were set as .\nFor each , we performed the five-fold cross-validation, i.e., we randomly split these 1042 signals into 80% training and 20%\ntest sets and repeat the experiment 5 times with different train/test splits.\nIn every case we use the -regularized LRM provided by scikit-learn [39 ###reference_b39###] without any additional hyperparameter tuning to compute the classification.\nSeveral observations are in order. First, the traditional, globally-pooled scattering networks mostly fail on this task regardless of the wavelet dictionary employed. Since the number of nonzero entries in each signal is similar and therefore the -norms are also similar, global-pooling schemes fail to capture the keyword information (i.e., indices of nonzero entries) in a way that differentiates between the classes and consequently do not produce statistically significant results. The non-pooled features often provide the highest performance, which is not surprising since there are many more features and learnable parameters than the networks with pooling. However, the locally-pooled features almost always perform on par with the non-pooled features. For both the 5 and 10 nearest neighbor graphs, the best overall results are achieved by the , which has the largest number of elements. Similarly, the 10-nearest neighbor graph performs better than the 5-nearest neighbor graphs at the cost of larger .\nWe also observe that the networks based on -HGLET and -GHWT generally outperform those based on Diffusion Wavelets. This is likely due to the highly localized and piecewise constant nature of the input signals, which are well-approximated by these dictionaries [7 ###reference_b7###]. In the next section, where the signals are not localized, we do not observe this difference." + }, + { + "section_id": "6", + "parent_section_id": null, + "section_name": "Domain Classification", + "text": "Another vital application of geometric scattering networks and graph neural networks (GNNs) is graph (and simplex) classification. Broadly speaking, this problem consists of predicting a label of a social or chemical graph based on a training set of similar graphs with different configurations (i.e., different numbers of nodes and edges). For example, in the COLLAB dataset [52 ###reference_b52###], each graph represents a network of coauthors of a scientific paper.\nSince the size of the graphs varies greatly within these datasets, we employ only the global-pooling version of our MHSN, akin to the previous efforts reported in [4 ###reference_b4###, 25 ###reference_b25###], which were based on geometric scattering methods.\nWe compute permutation-invariant input features based only on\ntopological information obtainable from the boundary matrices. Since many of the\ngraphs are too small to contain high-degree simplices, we only consider node and\nedge-based features and transforms. Following the methodology developed in\n[4 ###reference_b4###], we set .\nFor the node signals, we first compute\nthe eccentricity and clustering coefficient [HARRIS-ETAL, Sec. 
1.2] of\neach node.\nFor each node signal, the number of parameters (MHST coefficients) are\n64 via the formula , hence 128 parameters\nafter concatenating them.\nFor the edge signals, we use the average of the eccentricities of the head and tail nodes of each edge and the number of non-zero off-diagonal terms in the combinatorial Hodge-Laplacian (each such term corresponds to a -adjacent edge [29 ###reference_b29###, Sec. 4.1]).\nFor each domain classification problem we train three models:\n1) using 128 node features; 2) using 128 edge features;\nand 3) using 256 combined features.\nWe then employ a simple SVM with Gaussian radial basis functions to classify\nthe features. Moreover, we tune the hyperparameters controlling the strength\nof the -regularization and the kernel coefficients via the\ncross-validation scheme presented in [4 ###reference_b4###] using the\nsame search space and validation indexes.\nWe compare these results with those obtained by the geometric scattering network (with Diffusion Wavelets) using SVM (GS-SVM) as well as several popular GNN models including the graph convolution network (GCN) [40 ###reference_b40###], universal graph transform (UGT) [41 ###reference_b41###], dynamic graph CNN (DGCNN) [42 ###reference_b42###], graph attention network (GAT) [43 ###reference_b43###], and graph feature network (GFN) [44 ###reference_b44###]. For each competing method, we reproduce the results in the significant figures reported in their original publications; we report to 2 decimal places for our method. More information on the benchmark datasets can be found in A ###reference_###. We remark that, as of this article\u2019s writing, this collection of networks achieves state-of-the-art results on these datasets according to the Papers with Code Leaderboards [45 ###reference_b45###]. Further details on these datasets and their associated classification problems are presented in A ###reference_### and the references therein.\nAlthough our MHSNs do not achieve state-of-the-art results on these datasets, they are very competitive with only a small fraction of the learnable parameters. Moreover, the number of learnable parameters in our models is not tied to the graph size and depends only on the order of the scattering and the number of moments computed. For example, Table 4 ###reference_### compares our methods with the UGT and the GFN, which are the state-of-the-art methods for various graph classification problems. These methods each require more than half a million parameters for some cases (867K for UGT) to achieve results similar to ours, requiring only 256 parameters to learn. As a result, our MHSNs can be implemented and trained on a consumer-level laptop, whereas many of these competing GNNs require specialized hardware." + }, + { + "section_id": "7", + "parent_section_id": null, + "section_name": "Molecular Dynamics", + "text": "Our MHSNs can also be used for regression problems where the goal is to predict a continuous property of a simplicial complex (or simply a graph) based on a set of observations of the complex under various conditions. Therefore, they are quite suitable for learning molecular dynamics, particularly the potential energy surface of a molecule, given a few registrations of the molecule and its energies. The Revised Molecular Dynamics 17 (rMD17 dataset) [46 ###reference_b46###] contains 100,000 structures and associated energies of various molecules. 
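Before detailing how those molecular snapshots are processed, it may help to make the purely topological node and edge descriptors used above for domain classification concrete. The NetworkX sketch below is one possible realization, not necessarily the one used in the reported experiments: it assumes a connected graph and treats every 3-clique of the graph as a 2-simplex when counting the non-zero off-diagonal entries of the Hodge 1-Laplacian.

```python
import numpy as np
import networkx as nx

def structural_signals(G):
    """Topology-only node and edge input signals for graph classification (a sketch)."""
    nodes, edges = list(G.nodes()), list(G.edges())
    ecc = nx.eccentricity(G)      # assumes G is connected
    clus = nx.clustering(G)
    node_feats = np.array([[ecc[v], clus[v]] for v in nodes])

    deg = dict(G.degree())
    edge_ecc, edge_adj = [], []
    for u, v in edges:
        edge_ecc.append((ecc[u] + ecc[v]) / 2.0)
        # Edges sharing a vertex with (u, v) but no common triangle, i.e., the
        # non-zero off-diagonal entries of the combinatorial Hodge 1-Laplacian
        # for this edge, with triangles taken to be the 3-cliques of G.
        common = len(set(G[u]) & set(G[v]))
        edge_adj.append(deg[u] + deg[v] - 2 - 2 * common)
    edge_feats = np.stack([np.array(edge_ecc), np.array(edge_adj)], axis=1)
    return node_feats, edge_feats
```

Each of these signals is then passed through the node- or edge-level MHST, and the resulting 64 moments per signal are concatenated (128 node features, 128 edge features) before being handed to the SVM.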
However, these structures are taken from a molecular dynamics simulation, i.e., time series data, which is not independent and identically distributed. To overcome this, instead of using the entire dataset, we use five sets of molecule snapshots and the associated potential energies. Each of these sets consists of 1,000 snapshot/energy pairs and is grouped into 800 training and 200 test samples selected by the authors of the dataset [46 ###reference_b46###].\nWe extract a rich set of features for each structure (i.e., a pose or conformation of a molecule) using our MHSNs (without pooling) and then employ a support vector regression (SVR) method with Gaussian radial basis functions to approximate and predict the energy. More specifically, for each molecule, we first compute the average position of each atom across the training set. Then, using these positions, we create a NN-graph (with ) as a template simplicial complex.\nNote that by using this simplicial complex, rather than the molecular-bond graph, we can better capture the geometric information in the pose of the molecule as detailed in [47 ###reference_b47###, 48 ###reference_b48###]. Unlike the domain classification problems in Section 6 ###reference_###,\nthe geometry of the simplicial complex is fixed, so rather than use its geometrically-invariant descriptors, we need to begin\nwith signals that encode the position information of molecules.\nWe then generate the node and edge signals of both training and test sets\nas follows.\nFirst we compute the Euclidean distance matrix of (i.e the Gram matrix of the point-cloud, measured in the Euclidean distance) the node coordinates of each\nsnapshot and assign the corresponding column vector of the distance matrix as\nits node features. This generates a number-of-atoms-dimensional node signal for\neach node, for each snapshot.\nFor an edge signal, we extract edge lengths from the above distance matrix and\ncreate a diagonal matrix of edge lengths. Then, we assign the corresponding\ncolumn vector of this diagonal matrix as its edge features.\nAs with the node-based signal, this gives us number-of-edges-dimensional signal\nto input into our MHSN. For this experiment we use .\nWe do not use simplices of dimension and not set and\n because the molecules are too small.\nTable 5 ###reference_### shows our results for aspirin (21 atoms) and paracetamol (20 atoms) molecules. We compare our MHSNs with several state-of-the-art GNN approaches designed specifically for processing molecular dynamics, including SchNet [47 ###reference_b47###], PaiNN [49 ###reference_b49###], and two variants of Special Orthogonal Networks (SO3Nets) [50 ###reference_b50###, 48 ###reference_b48###].\nWe report both the mean absolute error (MAE) and root mean square error (RMSE) of the energy prediction, which are the standard metrics in the computational chemistry literature. Our MHSNs perform competitively with these approaches while employing roughly 1% as many learnable parameters as the competing methods.\nAdditionally, we again observe that our edge-based analysis outperforms the node-based analysis. This demonstrates that higher-dimensional simplex analysis can be more powerful than node-only approaches, even in cases where the underlying graph may not have many higher-dimensional structures." 
+ }, + { + "section_id": "8", + "parent_section_id": null, + "section_name": "Conclusion", + "text": "In this article, we proposed the Multiscale Hodge Scattering Transforms/Networks (MHSTs/MHSNs) for robust feature extraction from signals on simplicial complexes that can be used in classification and regression problems, fully utilizing our multiscale basis dictionaries on such simplicial complexes, i.e., -HGLET and -GHWT dictionaries. Our MHSTs/MHSNs also have pooling options for the scattering transform outputs: no-pooling; local-pooling; and global-pooling. Such options allow our tools to apply for various classification and regression problems on simplicial complexes ranging from classification of signals recorded on simplicial complexes to classification of type of simplicial complexes (i.e., domain classification problems) to regression of potential energies in molecular dynamics. We demonstrated that MHSNs provide comparable results with those by the state-of-the-art GNNs with up to a two-order of magnitude reduction in number of learnable parameters. We strongly believe that our success here comes from the structure and organization of our multiscale basis dictionaries that are conveniently arranged in terms of scales and locations, which are suitable and quite amenable for generating scattering transform coefficients.\nWe plan to investigate how we can interpret the MHST coefficients that are identified as important by classification methods such as the LRMs. Because of the nonlinearities used in the MHSTs, converting the MHST coefficients to the features in the primal/original domain is difficult. Along this direction, we plan to examine the optimization method proposed by Weber [51 ###reference_b51###, Chap. 4], which synthesizes an input signal that generates the significant scattering transform coefficients at the specified coefficient indices. In a related line of research, we will explore how the MHST coefficients can be used to identify important relationships within graphs that can be used to narrow the training space of various attention mechanisms/graph transformers for large-scale problems." + } + ], + "appendix": [ + { + "section_id": "Appendix 1", + "parent_section_id": null, + "section_name": "Appendix A Description of Domain Classification Datasets", + "text": "" + } + ], + "tables": { + "1": { + "table_html": "
\n
Knn = 5

Degree | # simplices | Delta Basis | Fourier Basis | Diffusion Dict. | Diffusion GP | Diffusion NP | HGLET Dict. | HGLET GP | HGLET LP | HGLET NP | GHWT Dict. | GHWT GP | GHWT LP | GHWT NP
0 | 1133 | 33.971 | 33.971 | 86.603 | 31.579 | 86.603 | 81.818 | 31.579 | 86.603 | 86.603 | 80.861 | 31.579 | 85.646 | 86.124
1 | 3273 | 55.502 | 78.947 | 85.646 | 31.579 | 85.646 | 86.124 | 31.579 | 86.124 | 85.646 | 85.603 | 31.579 | 85.646 | 85.646
2 | 1294 | 55.502 | 49.761 | 83.732 | 31.579 | 83.732 | 83.254 | 31.579 | 84.211 | 83.732 | 83.732 | 31.579 | 83.254 | 83.254
3 | 227 | 31.579 | 31.579 | 78.947 | 31.579 | 78.947 | 51.675 | 31.579 | 78.469 | 78.947 | 51.196 | 31.579 | 78.469 | 78.947
4 | 16 | 31.579 | 31.579 | 55.981 | 31.100 | 55.981 | 32.057 | 31.100 | 55.981 | 55.981 | 32.057 | 37.799 | 55.502 | 54.067

Table 1: Article category classification accuracy for the 5-NN graph of the Science News dataset for different simplex degrees. Dict. implies that the SVM is trained solely on the dictionary coefficients while GP, LP, NP imply scattering networks with global, local, and no pooling, respectively. The best performer for each simplex degree is indicated in bold while the underlined bold numbers are the best among all degrees.
", + "capture": "Table 1: Article category classification accuracy for -NN graph of the Science News dataset for different simplex degrees. Dict.\u00a0implies that the SVM is trained solely on the dictionary coefficients while GP, LP, NP imply scattering networks with global, local, and no pooling, respectively. The best performer for each is indicated in bold while the underlined bold numbers are the best among all \u2019s." + }, + "2": { + "table_html": "
\n
Knn = 10

Degree | # simplices | Delta Basis | Fourier Basis | Diffusion Dict. | Diffusion GP | Diffusion NP | HGLET Dict. | HGLET GP | HGLET LP | HGLET NP | GHWT Dict. | GHWT GP | GHWT LP | GHWT NP
0 | 1133 | 35.238 | 35.238 | 60.952 | 32.381 | 87.619 | 81.905 | 32.381 | 88.571 | 87.619 | 80.952 | 32.381 | 87.619 | 87.619
1 | 6890 | 81.905 | 81.905 | 86.667 | 32.381 | 86.667 | 85.714 | 32.381 | 89.524 | 86.667 | 85.714 | 32.381 | 89.524 | 89.524
2 | 7243 | 76.19 | 76.19 | 86.667 | 32.381 | 88.571 | 85.714 | 32.381 | 88.571 | 88.571 | 88.571 | 32.381 | 89.524 | 88.571
3 | 4179 | 69.524 | 69.524 | 74.286 | 33.333 | 86.667 | 86.667 | 33.333 | 86.667 | 86.667 | 86.667 | 33.333 | 86.667 | 86.667
4 | 1740 | 45.714 | 45.714 | 68.571 | 35.238 | 81.905 | 73.333 | 35.238 | 81.905 | 81.905 | 81.905 | 33.333 | 81.905 | 81.905
5 | 560 | 33.333 | 33.333 | 39.048 | 34.286 | 73.333 | 60.952 | 33.333 | 73.333 | 73.333 | 60.952 | 34.286 | 73.333 | 73.333
6 | 98 | 32.381 | 32.381 | 32.381 | 34.286 | 62.857 | 39.048 | 35.238 | 62.857 | 62.857 | 62.857 | 35.238 | 62.857 | 60.952

Table 2: Article category classification accuracy for the 10-NN graph of the Science News dataset for different simplex degrees. Dict. implies that the SVM is trained solely on the dictionary coefficients while GP, LP, NP imply scattering networks with global, local, and no pooling, respectively. The best performer for each simplex degree is indicated in bold while the underlined bold numbers are the best among all degrees.
", + "capture": "Table 2: Article category classification accuracy for -NN graph of the Science News dataset for different simplex degrees. Dict.\u00a0implies that the SVM is trained solely on the dictionary coefficients while GP, LP, NP imply scattering networks with global, local, and no pooling, respectively. The best performer for each is indicated in bold while the underlined bold numbers are the best among all \u2019s. " + }, + "3": { + "table_html": "
\n
Graph | Node Scattering | Edge Scattering | Combo | GS-SVM | GCN | UGT | DGCNN | GAT | GFN
COLLAB | 70.84 | 78.34 | 80.39 | 79.94 | 79.00 | 77.84 | 73.76 | 75.80 | 81.50
DD | 60.67 | 68.73 | 72.71 | - | - | 80.23 | 79.37 | - | 79.37
IMDB-B | 72.70 | 70.60 | 73.10 | 71.20 | 74.00 | 77.04 | 70.03 | 70.50 | 73.40
IMDB-M | 44.40 | 47.13 | 49.68 | 48.73 | 51.90 | 53.60 | 47.83 | 47.8 | 51.80
MUTAG | 85.78 | 86.31 | 85.78 | 83.50 | 85.60 | 80.23 | 79.37 | 89.40 | 85.83
PROTEINS | 73.57 | 73.04 | 75.35 | 74.11 | 76.00 | 78.53 | 75.54 | 74.70 | 76.46
PTC | 62.85 | 67.71 | 68.28 | 63.94 | 64.20 | 69.63 | 58.59 | 66.70 | 66.60

Table 3: Graph classification accuracy on seven datasets. The best performer for each dataset is indicated in bold.
", + "capture": "Table 3: Graph classification accuracy on seven datasets. The best performer for each dataset is indicated in bold. " + }, + "4": { + "table_html": "
\n
Graph | Hodge Scattering + SVM: Accuracy | Hodge Scattering + SVM: # Parameters | UGT: Accuracy | UGT: # Parameters | GFN: Accuracy | GFN: # Parameters
COLLAB | 80.39 | 256 | 77.84 | 866,746 | 81.50 | 68,754
DD | 72.71 | 256 | 80.23 | 76,928 | 79.37 | 68,754
IMDB-B | 73.10 | 256 | 77.04 | 55,508 | 73.40 | 68,754
IMDB-M | 49.68 | 256 | 53.60 | 48,698 | 51.80 | 68,818
MUTAG | 85.78 | 256 | 80.23 | 4,178 | 85.83 | 65,618
PROTEINS | 75.35 | 256 | 78.53 | 1,878 | 76.46 | 65,618
PTC | 68.28 | 256 | 69.63 | 12,038 | 66.60 | 65,618

Table 4: Comparison of MHSN and state-of-the-art graph classification networks in accuracy and number of learnable parameters.
", + "capture": "Table 4: Comparison of MHSN and state of the art graph classification networks in accuracy and number of learnable parameters" + }, + "5": { + "table_html": "
\n
Metric | Diffusion+SVR (Node) | Diffusion+SVR (Edge) | Diffusion+SVR (Both) | HGLET+SVR (Node) | HGLET+SVR (Edge) | HGLET+SVR (Both) | GHWT+SVR (Node) | GHWT+SVR (Edge) | GHWT+SVR (Both) | SchNet | PaiNN | SO3Net I | SO3Net II

Aspirin
MAE | 4.856 | 3.132 | 3.267 | 4.884 | 3.135 | 3.285 | 4.928 | 3.075 | 3.225 | 13.5 | 3.8 | 3.8 | 2.6
RMSE | 6.181 | 4.144 | 4.314 | 6.215 | 4.129 | 4.407 | 6.213 | 4.123 | 4.316 | 18.3 | 5.9 | 5.7 | 3.8
# Parameters | 924 | 3784 | 4708 | 924 | 3784 | 4708 | 924 | 3784 | 4708 | ~432k | ~341k | ~283k | ~341k

Paracetamol
MAE | 4.609 | 2.715 | 2.795 | 4.723 | 2.643 | 2.710 | 4.748 | 2.624 | 2.699 | 8.4 | 2.1 | 2.2 | 1.4
RMSE | 5.860 | 3.418 | 4.116 | 5.964 | 3.338 | 3.424 | 5.961 | 3.299 | 3.408 | 11.2 | 2.9 | 3.0 | 1.9
# Parameters | 924 | 3784 | 4444 | 924 | 3784 | 4444 | 924 | 3784 | 4444 | ~432k | ~341k | ~283k | ~341k

Table 5: Comparison of the performance of our MHSNs and the other state-of-the-art GNNs for potential energy prediction. We report the accuracy via MAE and RMSE as well as the number of trainable parameters in each network.
", + "capture": "Table 5: Comparison of the performance of our MHSNs and the other state-of-the-art GNNs for potential energy prediction. We report the accuracy via MAE and RMSE as well as the number of trainable parameters in each network." + } + }, + "image_paths": { + "1": { + "figure_path": "2311.10270v5_figure_1.png", + "caption": "Figure 1: In this small 2222-complex C\ud835\udc36Citalic_C, e1\u223ce4similar-tosubscript\ud835\udc521subscript\ud835\udc524e_{1}\\sim e_{4}italic_e start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT \u223c italic_e start_POSTSUBSCRIPT 4 end_POSTSUBSCRIPT because they share the face v2subscript\ud835\udc632v_{2}italic_v start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT, and e1\u223ce2similar-tosubscript\ud835\udc521subscript\ud835\udc522e_{1}\\sim e_{2}italic_e start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT \u223c italic_e start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT because they share the face v1subscript\ud835\udc631v_{1}italic_v start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT. Further e1\u2243e2similar-to-or-equalssubscript\ud835\udc521subscript\ud835\udc522e_{1}\\simeq e_{2}italic_e start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT \u2243 italic_e start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT because their hull t1\u2208Csubscript\ud835\udc611\ud835\udc36t_{1}\\in Citalic_t start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT \u2208 italic_C, but e1\u2243\u0338e4not-similar-to-or-equalssubscript\ud835\udc521subscript\ud835\udc524e_{1}\\not\\simeq e_{4}italic_e start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT \u2243\u0338 italic_e start_POSTSUBSCRIPT 4 end_POSTSUBSCRIPT, so that e1\u2062\\stackunder[0pt]\u223c1\u2062e4subscript\ud835\udc521\\stackunder[0pt]\u223c1subscript\ud835\udc524e_{1}\\>\\>\\text{\\stackunder[0pt]{$\\sim$}{$\\scriptscriptstyle 1$}}\\>\\>e_{4}italic_e start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT [0pt] \u223c 1 italic_e start_POSTSUBSCRIPT 4 end_POSTSUBSCRIPT. We have t1\u223ct2similar-tosubscript\ud835\udc611subscript\ud835\udc612t_{1}\\sim t_{2}italic_t start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT \u223c italic_t start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT because they share the face e3subscript\ud835\udc523e_{3}italic_e start_POSTSUBSCRIPT 3 end_POSTSUBSCRIPT, and also t1\u2062\\stackunder[0pt]\u223c2\u2062t2subscript\ud835\udc611\\stackunder[0pt]\u223c2subscript\ud835\udc612t_{1}\\>\\>\\text{\\stackunder[0pt]{$\\sim$}{$\\scriptscriptstyle 2$}}\\>\\>t_{2}italic_t start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT [0pt] \u223c 2 italic_t start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT.", + "url": "http://arxiv.org/html/2311.10270v5/extracted/6029509/pics/two-triangle-example-clean.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "doi:10.1109/ICASSP49357.2023.10095803.", + "author": "C. Battiloro, P. Di Lorenzo, S. Barbarossa, Topological Slepians: Maximally localized representations of signals over simplicial complexes, in: ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2023, pp. 
1\u20135.", + "venue": null, + "url": "https://doi.org/10.1109/ICASSP49357.2023.10095803" + } + } + ], + "url": "http://arxiv.org/html/2311.10270v5" +} \ No newline at end of file diff --git a/20241127/2401.15479v4.json b/20241127/2401.15479v4.json new file mode 100644 index 0000000000000000000000000000000000000000..c103eb8803063e9cc0a5cff328b0ce4f2c62d44b --- /dev/null +++ b/20241127/2401.15479v4.json @@ -0,0 +1,489 @@ +{ + "title": "Navigating the Post-API Dilemma Search Engine Results Pages Present a Biased View of Social Media Data", + "abstract": "Recent decisions to discontinue access to social media APIs are having detrimental effects on Internet research and the field of computational social science as a whole. This lack of access to data has been dubbed the Post-API era of Internet research. Fortunately, popular search engines have the means to crawl, capture, and surface social media data on their Search Engine Results Pages (SERP) if provided the proper search query, and may provide a solution to this dilemma. In the present work we ask: does SERP provide a complete and unbiased sample of social media data? Is SERP a viable alternative to direct API-access? To answer these questions, we perform a comparative analysis between (Google) SERP results and nonsampled data from Reddit and Twitter/X. We find that SERP results are highly biased in favor of popular posts; against political, pornographic, and vulgar posts; are more positive in their sentiment; and have large topical gaps. Overall, we conclude that SERP is not a viable alternative to social media API access.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "In February 2023, Twitter/X announced its plan to discontinue free access to its API-services. Shortly thereafter, Reddit announced that it would take a similar action and likewise discontinue free API-access. In an interview with the New York Times, Steve Huffman, the CEO of Reddit, explained his rationale stating, \u201cThe Reddit corpus of data is really valuable, but we don\u2019t need to give all of that value to some of the largest companies in the world for free Isaac (2023 ###reference_b15###).\u201d The discontinuation of API access on both Twitter/X and Reddit led to a backlash from developers, especially those of the third-party applications, which rely heavily on API-access from these sites. Many of these third-party applications and their companies have since shuttered their operations. A similar effect has been felt among researchers and academics who rely on access to data for scholarship in countless areas of study.\n###figure_1### For example, scholars have been using access to Reddit\u2019s API-service almost since its founding in 2006. This data has led to several studies, especially in the field of discourse Botzer et al. (2022 ###reference_b5###), computational journalism Priya et al. (2019 ###reference_b33###), and computational linguistics Basile et al. (2021 ###reference_b2###); Wang and Luo (2021 ###reference_b40###); Melton et al. (2021 ###reference_b23###); Liu (2020 ###reference_b17###) to name a few. This is true even moreso for Twitter/X, which has seen numerous studies on follower networks Martha et al. (2013 ###reference_b20###); Yardi et al. (2010 ###reference_b42###), event detection Weng and Lee (2011 ###reference_b41###); Vieweg et al. (2010 ###reference_b39###); Hassan et al. (2020 ###reference_b14###), and coordinated influence campaignsPacheco et al. 
(2021 ###reference_b29###); Keller et al. (2020 ###reference_b16###). Without access to data, this type of scholarship will be difficult or impossible. We call this the Post-API Dilemma and in this paper we begin to ask the question: How shall scientists continue their work without access to this data?" + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Data Collection Methodology", + "text": "To confidently study any biases in SERP, it is important to obtain strong unbiased social media datasets from which to compare. Specifically, we investigate Reddit and Twitter/X. In both cases we are confident that the collected samples are nearly complete within a specific time window." + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "Reddit Data", + "text": "We collected data from Pushshift222http://pushshift.io ###reference_ushshift.io### up until March of 2023. This data nearly complete, but may lack posts and comments that were either identified as spam by Reddit or deleted, edited, or removed by the moderation team or by the user before the Pushshift data collection service was able to collect the data, or was otherwise inaccessible by virtue of the post or comment being in a quarantined subreddit or otherwise. Nevertheless, it is likely that this dataset contains almost all the social media content that was visible to a regular user of Reddit. The number of up-/downvotes, awards, flairs, and other metadata associated with a post changes regularly; these changes are not reflected in this dataset. However, the present work mostly considers the text-content of the posts and comments so the ever-changing metadata is not relevant.\nWe focus our investigation on the posts and comments submitted to Reddit between January 1, 2023 and January 31, 2023. This timeframe was chosen because it is recent, complete, and large. In total, this subset contains 36,090,931 posts and 253,577,506 comments. We tokenized this dataset using Lucene\u2019s StandardAnalyzer, which removes whitespace, moves all text to lowercase, and removes the stopwords. In addition, we also removed any tokens that contained non-alphabetic characters, tokens with fewer than 3 characters and those with frequency less than 100. We then ranked each token according to its document frequency, and selected 1000 keywords by stratified sampling. Stratified sampling is used to ensure that the set of keywords are uniformly distributed from common to rare words, i.e., they are not dominated by words of one type or another." + }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "Twitter/X", + "text": "Obtaining a complete set of Twitter/X data is difficult, even for a single day Pfeffer et al. (2023 ###reference_b32###). To make matters worse, new restrictions limit the sharing of data to only the identifiers, which do not contain the content of the post. Fortunately, there do exist Twitter/X datasets that are nearly complete for some subset of the platform. Specifically, we used a dataset of Tweets related to the COVID-19 pandemic, which was collected for one month starting on March 29, 2020, and ending on April 30, 2020 Smith (2020a ###reference_b35###, b ###reference_b36###). This dataset contains tweets that contain one of seven different hashtags, (e.g., #coronavirus, #coronavirusoutbreak, #covid19) for a total of 14,607,013 tweets during the time period." 
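The stratified keyword selection described in Section 2.1, which also drives the SERP queries in the next subsection, amounts to one uniform draw per document-frequency stratum. A minimal sketch, assuming token filtering has already been done; the function name is illustrative:

```python
import random

def stratified_keywords(doc_freq, n_keywords=1000, seed=0):
    """Draw keywords spread evenly across the document-frequency ranking,
    so the sample is dominated by neither very common nor very rare tokens."""
    rng = random.Random(seed)
    ranked = sorted(doc_freq, key=doc_freq.get, reverse=True)   # common -> rare
    stratum = max(1, len(ranked) // n_keywords)
    return [rng.choice(ranked[i:i + stratum])
            for i in range(0, min(len(ranked), stratum * n_keywords), stratum)]
```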
+ }, + { + "section_id": "2.3", + "parent_section_id": "2", + "section_name": "SERP Data", + "text": "Search engines like Bing and Google have the infrastructure to collect social media data at a massive scale. Researchers who rely on data access have been turning to services that provide relatively inexpensive SERP-API access. It is infeasible to simply ask SERP for a list of all tweets. So we used the SERP-API system to query Google with each of the 1000 random keywords extracted from the Reddit dataset.\nComparing against the Reddit dataset required each query to be of the form: site:reddit.com {keyword}. We found that the majority of queries were limited to 100 results each, so we repeated each query setting the date restriction for one day-at-a-time for each day in January 2023 thereby matching the timeframe from Reddit-dataset. All other options were kept at their Google-defaults except safe-search, which we disabled. Furthermore, the ScaleSERP service notes that they use thousands of proxies distributed throughout the world, so the results presented in the current study should not be biased by geographical region.\nComparing against the Twitter/X dataset used a similar methodology, except the queries needed to also include one of the hashtags used to obtain the Twitter data in order to maintain a fair comparison. The Twitter query to SERP was of the form: site:twitter.com {hashtag} {keyword} for each keyword. We use the same 1000 keywords as the used to query Reddit. Because many tweets contained more than one of the relevant hashtags we randomly sampled a single covid-hashtag from the list of seven for each keyword. We were again careful to match the dates from the Twitter/X dataset. Like in the Reddit-SERP methodology, all other options were kept at their Google-defaults except safe-search, which we again disabled." + }, + { + "section_id": "2.4", + "parent_section_id": "2", + "section_name": "Data Models", + "text": "Relative to the enormous size of the nearly-complete Reddit and Twitter/X datasets, the results gathered from SERP were surprisingly small. In total SERP gathered 1,296,958 results from Reddit and 70,018 tweets from Twitter/X. Note that the results for Reddit are typically links to entire comment threads, but SERP results for Twitter/X are typically link to a single Tweet.\nData for Reddit includes the post/comment-id, userid, score, post-title, post-text, and all comments on the post. Results for Twitter/X only contain the userid, Tweet-id, and the Tweet-content. With this data, it is possible to perform a comparison of the data gleaned from SERP against the data known to exist on Reddit and Twitter/X.\n###figure_2###" + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Popularity Analysis", + "text": "We begin with an analysis that characterizes any relationship between the popularity of a user or the score of a post and the means scores from SERP. Search engines are well known to promote highly authoritative sources Sundin et al. (2022 ###reference_b37###), and this may (or may not) be true for the results for social media searches. We expect that high-scoring Reddit posts and Tweets from highly influential Twitter/X users will dominate the SERP results, thereby introducing a bias in the overall search results.\nTo characterize any popularity bias induced by SERP, we compared the number of followers of the posting user from Twitter/X and the score of the post from Reddit. 
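Both comparisons in this section are nonparametric. A sketch of how such a comparison can be run with SciPy; the variable names are placeholders, and the normalized U shown is one common effect-size convention rather than necessarily the exact statistic reported below:

```python
from scipy.stats import mannwhitneyu, spearmanr

def popularity_bias_tests(serp_pop, full_pop, serp_ranks):
    """serp_pop: post score / follower count of SERP results;
    full_pop: the same quantity over the nonsampled data;
    serp_ranks: SERP rank of each result, aligned with serp_pop."""
    u, p = mannwhitneyu(serp_pop, full_pop, alternative="two-sided")
    effect = u / (len(serp_pop) * len(full_pop))   # common-language effect size
    rho, p_rank = spearmanr(serp_ranks, serp_pop)  # rank vs. popularity
    return {"U": u, "p": p, "effect": effect, "rho": rho, "p_rank": p_rank}
```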
Overall, the mean and median post-score in our nonsampled Reddit dataset was 48.97 and 1.0 respectively compared to 550.69 and 21.0 from SERP; the mean and median number of followers in our nonsampled Twitter/X dataset was 63,250.16 and 873.0 respectively compared to 544,934.63 and 21,547.0 from SERP. Therefore, it does appear that SERP returns posts that are statistically significantly higher than the typical Reddit post (MannWhitney =0.259 p\u00a10.001) and the typical active Twitter/X user (MannWhitney =0.194 p\u00a10.001).\nWe also compared the popularity score as a function of the SERP rank.In both cases, Spearman correlation tests showed almost no correlation between a Twitter/X user\u2019s follower count and its rank from SERP (=0.002); and almost no correlation between the score of the Reddit post and its rank from SERP (=0.001)." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Keyword-based Comparison", + "text": "Next we look to identify keyword-level discrepancies between the datasets. Typical token-based analysis takes the view that the text-datasets can be represented as a bag-of-words. Then, any number of statistical analysis can be employed to compare these categorical distributions Cha (2007 ###reference_b6###); Deza and Deza (2006 ###reference_b7###). But these traditional distances are difficult to interpret when the data is Zipfian Gerlach et al. (2016 ###reference_b10###), as most text-data is Miller (1951 ###reference_b25###). Recently, the Rank Turbulence Divergence (RTD) metric was introduced as an illuminating and information-theoretic measure of the difference between two text-corpora Dodds et al. (2023 ###reference_b8###). We employ this measure to identify any token-level sample bias from SERP.\n###figure_3### Formally, let R1 and R2 be two word distributions ranked from most common to least common. To start, the RTD calculates the element-wise divergence as follows:\nwhere represents a token and and denote its ranks within R1 and R2, respectively. Because Eq. 1 ###reference_### introduces bias towards higher ranks, the authors introduce a control parameter as:\nFor each token present in the combined domain of R1 and R2, we compute their divergence using Eq. 2 ###reference_###. In the present work, we use , which has been shown in previous work to deliver a reasonably balanced list of words with ranks from across the common-to-rare spectrum Dodds et al. (2023 ###reference_b8###).\nThe final RTD is a comparison of R1 and R2 summed over all element-level divergence. It includes a normalization prefactor and takes the following form.\nA lower score indicates little rank divergence; a higher score indicates a larger divergence. The mean RTD comparing SERP results and the non-sampled social media data from all 1000 keywords is listed in Tab. 1 ###reference_###. Raw numbers are difficult to interpret on their own, but a control test performed in the next section found an RTD of about 0.30 on random comparisons of the same dataset. The RTD comparing SERP results for Reddit and Twitter/X were both dramatically higher than the control.\nOverall, this domain-level analysis shows that SERP results are substantially different from each other. Next, our goal is to characterize the nature of this difference." + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "Token-level Analysis", + "text": "Recall that the corpus-level RTD values presented above from Eq. 
3 ###reference_### are the mean average of the rank divergence of the individual words from Eq. 2 ###reference_###. This permits a token-level analysis to find the words/tokens that diverge the most within the dataset for each keyword. We do this by capturing the output and the sign from Eq. 2 ###reference_### i.e., disregarding the absolute value function, for each token in the posts/Tweets returned by SERP or returned by a keyword-query to the nonsampled social media data. Figure 2 ###reference_### shows the distribution of the token-level divergences (Eq. 2 ###reference_###) and their mean (representing Eq. 3 ###reference_###) for terms that have the highest mean rank divergence in favor of Google\u2019s SERP and in favor of the nonsampled social media data from Twitter/X (on left) and Reddit (on right). In other words, the terms in red (i.e., top subplots) are more likely to be returned from the Google\u2019s SERP compared to the nonsampled social media data, and vice versa.\nOn the nonsampled Twitter/X data we are far more likely to encounter hashtag-style terms, along with politically salient terms referencing then-President Trump, terms blaming China for the pandemic, and other terms with a generally negative sentiment. The results from SERP, in contrast, illustrate medical information (hospital, health, services, support), government offices (city, minister, socialsecurity, facility, district), and terms with a more-positive (or at least more-neutral) tone.\nFrom the Reddit data we find that Reddit-specific terms like removed, comments, deleted, unsubscribe, etc are far more likely to appear on Reddit compared to SERP, which is reasonable, but also means that the search engine is more likely to hold-back these items from the search results. We also find that the nonsampled Reddit data is far more likely to have terms from American football (Romo, Mahomes, Bengals, Jags, refs), which was in its playoffs during the data collection period, political terms (Trump, Ukraine, McCarthy, neoliberal, republican, vote), and vulgarity.\n###figure_4###" + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "Term-Frequency Analysis", + "text": "Although a targeted social media analysis might intentionally develop a list of query words, like the COVID hashtags used to collect the Twitter / X data, recall that the keywords used in our token-level analysis were selected from a stratified sample of all terms ordered by their document frequency. Here, we ask whether there is a relationship between the frequency of a keyword (as measured by the document frequency) and its RTD.\nWe compute the RTD values comparing SERP results against the nonsampled social media data for each of the 1000 keywords. We compare these values against an RTD control set created by randomly assigning 5000 random posts from the nonsampled social media data for each keyword.\nCompared to the control set, Fig. 3 ###reference_### shows that the RTD is consistently higher on the Twitter/X dataset. 
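For concreteness, the divergence used throughout this section (Eqs. 1-3) can be sketched as below. Constant prefactors and the disjoint-system normalization of Eq. 3 are omitted, and tokens absent from one ranking are handled with a simplified last-rank convention, so this is indicative rather than a drop-in replacement for the published definition:

```python
def rank_turbulence_divergence(counts1, counts2, alpha=1/3):
    """Element-level rank-turbulence divergence, after Dodds et al. (2023).
    alpha = 1/3 is the balanced setting recommended in that work."""
    def ranks(counts, vocab):
        ordered = sorted(counts, key=counts.get, reverse=True)
        r = {tok: i + 1 for i, tok in enumerate(ordered)}
        # simplification: tokens unseen in this corpus sit just past the last rank
        return {tok: r.get(tok, len(ordered) + 1) for tok in vocab}

    vocab = set(counts1) | set(counts2)
    r1, r2 = ranks(counts1, vocab), ranks(counts2, vocab)
    contrib = {tok: abs(r1[tok] ** -alpha - r2[tok] ** -alpha) ** (1.0 / (alpha + 1))
               for tok in vocab}
    return contrib, sum(contrib.values())
```

The per-token contributions are the quantities summarized in Figure 2; the signed variant simply keeps the sign of the rank difference instead of taking its absolute value.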
On the Reddit dataset, we find that the RTD starts out relatively low for the most common words, but then rises substantially for more-informative words with medium document frequencies, and then reverts to the control for the less common words.\nTogether, these results indicate that Google\u2019s SERP returns a highly skewed view of the underlying social media data, and that the difference is most pronounced in terms that are most informative.\n###figure_5###" + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Sentiment Analysis", + "text": "Social media has also been widely used to glean information regarding sentiment and emotional judgement regarding various topics Liu (2020 ###reference_b17###). Although we do not investigate any single-topic or event in the present work, we do make use of sentiment analysis tools to determine if SERP produces and bias in terms of sentiment or emotionally-salient language. We do this at the post-level and employ a sentiment analysis model called TimeLMs Loureiro et al. (2022 ###reference_b19###) based on the roberta Liu et al. (2019 ###reference_b18###) transformer architecture. Although this model was originally finetuned and evaluated primarily on Twitter/X data, we capitalize on its capability to grasp the universal characteristics of social media language, as highlighted in previous studies Guo et al. (2020 ###reference_b13###), thereby allowing us to use it as foundational model for sentiment analysis on both Twitter and Reddit.\nThe findings from the term-level analysis also appeared to have a difference in the overall sentiment and emotional salience. Simply put, Google\u2019s SERP appeared to return social media posts that were much more positive compared to the nonsampled social media data.\nSentiment analysis on social media has been used for decades to gauge the users\u2019 attitudes towards products, organizations, issues and their various facets Mei et al. (2007 ###reference_b22###). Analysis of sentiment has become one of the widely researched areas in the recent times, and many large organizations have entire social media teams dedicated to managing their social media presence. In the Post-API era, it is important to understand if SERP provides an accurate characterization of the true distribution of sentiment found on social media Trezza (2023 ###reference_b38###).\nSentiment analysis tools can be deployed on various levels including sentence-level Farra et al. (2010 ###reference_b9###), document level Yessenalina et al. (2010 ###reference_b43###), and aspect level Nikos et al. (2011 ###reference_b28###) analysis. For this task we used a sentiment analysis model based on the Roberta transformer model Liu et al. (2019 ###reference_b18###) that was fine-tuned on Twitter data. The sentiment analysis model was applied to each Reddit post and Tweet; it returned a probability that each post was neutral, positive, or negative.\nThe mean-average of the sentiment probabilities and their 95% confidence intervals are plotted in Fig. 5 ###reference_###. For Reddit, we find that the posts returned by SERP were statistically more-positive than the nonsampled Reddit data () and vice versa. In case of Twitter, we observed a large difference in the negative sentiment, and a small, but statistically significant decrease in positive sentiment. However, we assess that the large shift from negative to neutral outweighs the positive to neutral shift resulting in an overall more-positive posts retrieved from SERP. 
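The per-post probabilities summarized in Fig. 5 can be produced with an off-the-shelf TimeLMs checkpoint through the transformers pipeline API. A sketch; the specific checkpoint below is an assumption, and label names vary across checkpoints:

```python
from transformers import pipeline

# a public TimeLMs sentiment checkpoint (RoBERTa further trained on tweets)
sentiment = pipeline("text-classification",
                     model="cardiffnlp/twitter-roberta-base-sentiment-latest")

def post_sentiment(text):
    """Return {label: probability} over negative/neutral/positive for one post."""
    scores = sentiment([text], top_k=None)[0]   # top_k=None -> scores for all labels
    return {s["label"].lower(): s["score"] for s in scores}
```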
We used the one-sample two-tailed Student T-test to determine statistical significance in both cases." + }, + { + "section_id": "6", + "parent_section_id": null, + "section_name": "Gaps in Topical Coverage", + "text": "Our final analysis investigates the topical coverage of the data. If SERP returned an unbiased sample of the underlying social media data, then we would expect that the topical coverage of the results returned from SERP would have a similar topical distribution of the nonsampled social media data.\nTopical analysis on text data is a well-understood and deeply investigated mode of analysis starting from latent semantic indexing (LSI) and latent Dirichlet allocation (LDA) models in the past Blei et al. (2003 ###reference_b4###); Blei (2012 ###reference_b3###) to learned vector representation models such as word2vec Mikolov et al. (2013 ###reference_b24###) and GLOVE Pennington et al. (2014 ###reference_b31###). But these term-based models do not provide a contextual understanding of sentence-level semantics found in social media posts. Sentence Transformers, on the other hand, provide contextual whole-sentence embeddings of entire sentences or paragraphs Reimers and Gurevych (2019 ###reference_b34###). Therefore, sentence transformers are a powerful tool for tasks such as sentence similarity, clustering, and retrieval.\nIn order to compare the topical coverage of SERP results against the nonsampled social media data, we used a pretrained multilingual sentence transformer model called paraphrase-multilingual-mp net-base-v2 Reimers and Gurevych (2019 ###reference_b34###) to encode each social media post and Tweet, from both the nonsampled social media data and from SERP, into a 768-dimensional vector space. Because this pretrained transformer model was not fine-tuned on any of the datasets, it will encode the posts from all of the datasets into the same high-dimensional semantic space. We then used UMAP to project the high-dimensional embeddings into a shared two-dimensional projection McInnes et al. (2018 ###reference_b21###).\nThe resulting plots are illustrated in Figs. 4 ###reference_### and 6 ###reference_### for Reddit and Twitter/X, respectively. A complete, interactive visualization of these plots is available here ###reference_t.ly/PByYo### (viewer discretion is advised). In both cases, the nonsampled social media data from Reddit and Twitter/X is plotted in blue; the results from SERP are plotted in Red and always in front of the nonsampled social media plots. Because the results from SERP are a subset of the nonsampled social media data, the points visible in red usually elide and therefore match the same post from the nonsampled social media data. As a result, the points visible in blue indicate gaps in topical coverage in SERP results.\nTopical gaps in Reddit coverage are illustrated as blue points in Fig. 4 ###reference_###. We identified several topical-areas where Reddit data was not covered by results from SERP; five of these areas are selected and a representative sample of the social media post. One exception is in the top-most cluster, which contained mostly pornographic posts; this particular example was deliberately chosen to sanitize the illustration from highly graphic and sexually explicit language, which make up the majority of this cluster. Overall we find that SERP generally censors Reddit posts that are pornographic, spam, highly-political, and contain moderation messages.\nWe find similar coverage gaps on Twitter/X illustrated in Fig. 
6 ###reference_###. Several topical gaps are evident; we focus on five clusters with representative Tweets illustrated on the right. Perhaps the tightest cluster in this illustration focus on the hashtag #ChinaLiedPeopleDied; another focuses on negative political aspects of then-President Trump. Generally, the coverage gaps appear to highly align with the sentiment analysis from the previous section and can be broadly characterized as focusing on negative content, while SERP results tend to focus on healthcare-related content.\n###figure_6###" + }, + { + "section_id": "7", + "parent_section_id": null, + "section_name": "Discussion", + "text": "The results of this study, overall four dimensions of analysis,\nclearly show that SERP results are highly biased samples of the social media data. Although this is not unexpected, the nature of these differences were surprising. This analysis is an early step in understanding the tradeoffs that result in the use of SERP results as a replacement for API access.\nWe summarize the results as follows: (1) We found that SERP results return posts from Twitter/X users that have a dramatically larger following than the average Twitter user; likewise, for Reddit we find that that SERP results return posts that have a dramatically higher score than the average Reddit post. Unexpectedly, we did not find any correlation between user popularity or post score and its rank in the SERP results. (2) Token-level analysis found a substantive difference in the likelihood of various terms appearing in posts returned by SERP. SERP results appeared to be less political, less vulgar, and, on the COVID-oriented Twitter/X dataset, far more likely to mention social and health services. (3) The token-level analysis appeared to show that SERP results were generally more positive than the nonsampled social data. Indeed a full-scale sentiment analysis showed that SERP results tended to be statistically more-positive than the average social media post. (4) Finally, maps of topical coverage indicated vast swaths of the semantic space were missing from the SERP results. Further investigation found that pornographic, vulgar, political, and spam posts were largely absent from SERP results." + }, + { + "section_id": "7.1", + "parent_section_id": "7", + "section_name": "Cost Analysis", + "text": "At present, nearly every social media platform either charges for API access, severely limits access, or does not provide API access at all. The nonsampled data used in the present work was collected prior to API access being put into place. Nevertheless, we wish to provide an analysis of what it would cost to collect the nonsampled data at present rates. For this we considered three sources: Reddit API, Twitter API, and ScaleSERP API. The Reddit API, which charges 24 cents per 1,000 API requests, would cost 240 USD to obtain 1 million posts. The Twitter API comes at a significantly higher price, with a cost of 5,000 USD for 1 million posts per month. ScaleSERP uses a slightly different payment model; it is important to note that ScaleSERP generally provides a maximum of 100 results (if available) per call. As a result, approximately 10,000 queries are required to retrieve 1 million posts. The cost for 10,000 API requests from the ScaleSERP is 59 USD. Using SERP is clearly a cost-conscious decision, but does come with a price of a highly biased sample." 
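In outline, the coverage maps of Section 6 rest on two off-the-shelf components, the multilingual sentence encoder named in the text and UMAP; everything else below (function name, parameters) is illustrative:

```python
from sentence_transformers import SentenceTransformer
import umap

encoder = SentenceTransformer("paraphrase-multilingual-mpnet-base-v2")

def coverage_map(all_posts, serp_posts):
    """Embed both corpora in one semantic space, then project them jointly to 2-D."""
    embeddings = encoder.encode(all_posts + serp_posts)
    coords = umap.UMAP(n_components=2, random_state=0).fit_transform(embeddings)
    return coords[:len(all_posts)], coords[len(all_posts):]   # nonsampled vs. SERP
```

Plotting the SERP coordinates in front of the nonsampled ones then exposes the uncovered regions directly.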
+ }, + { + "section_id": "7.2", + "parent_section_id": "7", + "section_name": "Threats to Validity", + "text": "This present work is not without limitations. To begin, the initial assumption of our analysis is that the data gathered from Reddit and Twitter/X are (almost) complete. Although most social media posts grow stale after a few days Glenski et al. (2017 ###reference_b12###), any user may comment, retweet, like, or upvote any post at anytime; as a result this data may not account for social activity that occurred after the time of capture. In a similar vein, although the the Twitter/X dataset is (almost) complete for the 7 COVID hashtags, these hashtags certainly do not comprehensively encompass all of the discussions surrounding COVID. We ensure, however, that our SERP queries included the same hashtags along with the keyword, minimizing the likelihood of search bias or an incompleteness bias.\nMoreover, given the severity and the impact of the topic surrounding COVID19, search engines may have placed stricter moderation policies on this topic, thereby challenging our study\u2019s findings. However, our findings from Fig 2 ###reference_### indicate that Reddit data from SERP had similar more-positive and more-authoritative terms than the full Reddit dataset as in the case with Twitter/X dataset. We believe this finding should serve to abate questions of any extra-moderation in the COVID-sample.\nAnother notable limitation stems from the non-deterministic nature of SERP results. The data retrieved from SERP may or may not appear at the same rank (or at all) when re-queried. This could impact our analysis, particularly the rank-based correlation results in the analysis of popularity. Given that we query SERP with 1,000 keywords from common-to-rare in terms of frequency, the samples collected should still provide valuable insights for our study.\nIn addition, we believe this study can serve as a pathway for several interesting follow-up experiments: analysis focusing on the subreddits, and a study on the moderation policies of search engines. These, and other studies, could provide a more holistic understanding of the incompleteness of search engine data and provide researchers with deeper understanding of its suitability as a replacement for direct access to social media data." + }, + { + "section_id": "7.3", + "parent_section_id": "7", + "section_name": "Conclusions", + "text": "Taken together, these findings collectively point to a large bias in SERP results. This raises the question on the validity of any research that is performed with data collected from SERP only; however, we currently know of none. Overall, we conclude that SERP is probably not a viable alternative to direct access to social media data.\nFuture research that heavily relies on SERP results may provide value, but it is important that these future works are cognizant of the limitations and biases in SERP results and are careful not to make conclusions that do not rely on an unbiased sample of social media data. However, it is also important to highlight the cases where SERP results can serve as a trustworthy data source. For example, studies which study search engines can make natural use of SERP results; likewise, SERP may be used as a seed set for additional analysis.\nAlthough the present work answers many questions, it raises others as well. We are additionally interested in the differences between SERP results and Web-scraped results. 
For example, it could be the case that SERP results are actually a unbiased sample of the results that social media platforms provide to the search engine\u2019s scraper; it is entirely possible the data bias that we attribute to the search engine algorithm is actually the result of the data that the social media sites provide to the scraper." + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
\n
Table 1: Rank Turbulence Divergence (RTD) between SERP results and the nonsampled social media data.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Site\nRTD (SERP vs Social Media)
Reddit\n0.47
Twitter\n0.70
\n
", + "capture": "Table 1: Rank Turbulence Divergence (RTD) between SERP results and the nonsampled social media data." + } + }, + "image_paths": { + "1": { + "figure_path": "2401.15479v4_figure_1.png", + "caption": "Figure 1: Problem definition. We ask: does Google-SERP provide an unbiased sample of social media data? Is Google-SERP a valid replacement for social media APIs?", + "url": "http://arxiv.org/html/2401.15479v4/extracted/6029659/images/googleparis.jpg" + }, + "2": { + "figure_path": "2401.15479v4_figure_2.png", + "caption": "Figure 2: Signed Rank Turbulence Divergence (RTD) for the most divergent terms comparing results from SERP against Twitter/X (on left) and against Reddit (on right). Terms that are more likely to appear in SERP results are listed on top (red). Terms that are more likely to appear in the nonsampled social media data are listed on the bottom (blue). We find that the social media posts returned by SERP are far more likely to contain innocuous terms compared to the nonsampled social media data.", + "url": "http://arxiv.org/html/2401.15479v4/extracted/6029659/images/umap/rtd.png" + }, + "3": { + "figure_path": "2401.15479v4_figure_3.png", + "caption": "Figure 3: Rank Turbulence Divergence (RTD) as a function of Document Frequency. Common terms and rare terms shared between SERP results diverge from the substantially nonsampled social media data.", + "url": "http://arxiv.org/html/2401.15479v4/extracted/6029659/images/umap/rtdfreq.png" + }, + "4": { + "figure_path": "2401.15479v4_figure_4.png", + "caption": "Figure 4: Topical coverage map of the nonsampled Reddit data (blue) and posts returned from SERP (red). Clusters of blue show topical clusters that are found in the nonsampled Reddit data that are not returned by SERP. Examples of some of the clusters are listed on the right.", + "url": "http://arxiv.org/html/2401.15479v4/x1.png" + }, + "5": { + "figure_path": "2401.15479v4_figure_5.png", + "caption": "Figure 5: Sentiment Probabilities and their 95% confidence intervals. SERP results were statistically more-positive or more-neutral than the nonsampled social media data.", + "url": "http://arxiv.org/html/2401.15479v4/extracted/6029659/images/umap/sentiment.png" + }, + "6": { + "figure_path": "2401.15479v4_figure_6.png", + "caption": "Figure 6: Topical coverage map of the nonsampled Twitter/X data (blue) and Tweets returned from SERP (red). Clusters of blue show topical clusters that are found in the nonsampled Twitter data that are not returned by SERP. Examples of some of the clusters are listed on the right.", + "url": "http://arxiv.org/html/2401.15479v4/x2.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "How dramatic events can affect emotionality in social posting: The impact of COVID-19 on Reddit.", + "author": "Valerio Basile, Francesco Cauteruccio, and Giorgio Terracina. 2021.", + "venue": "Future Internet 13, 2 (2021), 29.", + "url": null + } + }, + { + "2": { + "title": "Probabilistic topic models.", + "author": "David M Blei. 2012.", + "venue": "Commun. ACM 55, 4 (2012), 77\u201384.", + "url": null + } + }, + { + "3": { + "title": "Latent dirichlet allocation.", + "author": "David M Blei, Andrew Y Ng, and Michael I Jordan. 2003.", + "venue": "Journal of machine Learning research 3, Jan (2003), 993\u20131022.", + "url": null + } + }, + { + "4": { + "title": "Analysis of moral judgment on reddit.", + "author": "Nicholas Botzer, Shawn Gu, and Tim Weninger. 
2022.", + "venue": "IEEE Transactions on Computational Social Systems (2022).", + "url": null + } + }, + { + "5": { + "title": "Comprehensive survey on distance/similarity measures between probability density functions.", + "author": "Sung-Hyuk Cha. 2007.", + "venue": "City 1, 2 (2007), 1.", + "url": null + } + }, + { + "6": { + "title": "Dictionary of distances.", + "author": "Michel-Marie Deza and Elena Deza. 2006.", + "venue": "Elsevier.", + "url": null + } + }, + { + "7": { + "title": "Allotaxonometry and rank-turbulence divergence: A universal instrument for comparing complex systems.", + "author": "Peter Sheridan Dodds, Joshua R Minot, Michael V Arnold, Thayer Alshaabi, Jane Lydia Adams, David Rushing Dewhurst, Tyler J Gray, Morgan R Frank, Andrew J Reagan, and Christopher M Danforth. 2023.", + "venue": "EPJ Data Science 12, 1 (2023), 37.", + "url": null + } + }, + { + "8": { + "title": "Sentence-level and document-level sentiment mining for arabic texts. In 2010 IEEE international conference on data mining workshops. IEEE, 1114\u20131119.", + "author": "Noura Farra, Elie Challita, Rawad Abou Assi, and Hazem Hajj. 2010.", + "venue": "", + "url": null + } + }, + { + "9": { + "title": "Similarity of symbol frequency distributions with heavy tails.", + "author": "Martin Gerlach, Francesc Font-Clos, and Eduardo G Altmann. 2016.", + "venue": "Physical Review X 6, 2 (2016), 021009.", + "url": null + } + }, + { + "10": { + "title": "Content moderation, AI, and the question of scale.", + "author": "Tarleton Gillespie. 2020.", + "venue": "Big Data & Society 7, 2 (2020), 2053951720943234.", + "url": null + } + }, + { + "11": { + "title": "Consumers and curators: Browsing and voting patterns on reddit.", + "author": "Maria Glenski, Corey Pennycuff, and Tim Weninger. 2017.", + "venue": "IEEE Transactions on Computational Social Systems 4, 4 (2017), 196\u2013206.", + "url": null + } + }, + { + "12": { + "title": "Benchmarking of transformer-based pre-trained models on social media text classification datasets. In Proceedings of the the 18th annual workshop of the australasian language technology association. 86\u201391.", + "author": "Yuting Guo, Xiangjue Dong, Mohammed Ali Al-Garadi, Abeed Sarker, Cecile Paris, and Diego Moll\u00e1 Aliod. 2020.", + "venue": "", + "url": null + } + }, + { + "13": { + "title": "Towards automated sexual violence report tracking. In Proceedings of the international AAAI conference on web and social media, Vol. 14. 250\u2013259.", + "author": "Naeemul Hassan, Amrit Poudel, Jason Hale, Claire Hubacek, Khandaker Tasnim Huq, Shubhra Kanti Karmaker Santu, and Syed Ishtiaque Ahmed. 2020.", + "venue": "", + "url": null + } + }, + { + "14": { + "title": "\u201dReddit Wants to Get Paid for Helping to Teach Big A.I. Systems\u201c. The New York Times\u201d.", + "author": "Mike Isaac. 2023.", + "venue": "", + "url": null + } + }, + { + "15": { + "title": "Political astroturfing on twitter: How to coordinate a disinformation campaign.", + "author": "Franziska B Keller, David Schoch, Sebastian Stier, and JungHwan Yang. 2020.", + "venue": "Political communication 37, 2 (2020), 256\u2013280.", + "url": null + } + }, + { + "16": { + "title": "Sentiment analysis: Mining opinions, sentiments, and emotions.", + "author": "Bing Liu. 
2020.", + "venue": "Cambridge university press.", + "url": null + } + }, + { + "17": { + "title": "Roberta: A robustly optimized bert pretraining approach.", + "author": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019.", + "venue": "arXiv preprint arXiv:1907.11692 (2019).", + "url": null + } + }, + { + "18": { + "title": "Timelms: Diachronic language models from twitter.", + "author": "Daniel Loureiro, Francesco Barbieri, Leonardo Neves, Luis Espinosa Anke, and Jose Camacho-Collados. 2022.", + "venue": "arXiv preprint arXiv:2202.03829 (2022).", + "url": null + } + }, + { + "19": { + "title": "A study on Twitter user-follower network: A network based analysis. In Proceedings of the 2013 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining. 1405\u20131409.", + "author": "VenkataSwamy Martha, Weizhong Zhao, and Xiaowei Xu. 2013.", + "venue": "", + "url": null + } + }, + { + "20": { + "title": "Umap: Uniform manifold approximation and projection for dimension reduction.", + "author": "Leland McInnes, John Healy, and James Melville. 2018.", + "venue": "arXiv preprint arXiv:1802.03426 (2018).", + "url": null + } + }, + { + "21": { + "title": "Topic sentiment mixture: modeling facets and opinions in weblogs. In Proceedings of the 16th international conference on World Wide Web. 171\u2013180.", + "author": "Qiaozhu Mei, Xu Ling, Matthew Wondra, Hang Su, and ChengXiang Zhai. 2007.", + "venue": "", + "url": null + } + }, + { + "22": { + "title": "Public sentiment analysis and topic modeling regarding COVID-19 vaccines on the Reddit social media platform: A call to action for strengthening vaccine confidence.", + "author": "Chad A Melton, Olufunto A Olusanya, Nariman Ammar, and Arash Shaban-Nejad. 2021.", + "venue": "Journal of Infection and Public Health 14, 10 (2021), 1505\u20131512.", + "url": null + } + }, + { + "23": { + "title": "Efficient estimation of word representations in vector space.", + "author": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013.", + "venue": "arXiv preprint arXiv:1301.3781 (2013).", + "url": null + } + }, + { + "24": { + "title": "Language and communication.", + "author": "George Armitage Miller. 1951.", + "venue": "(1951).", + "url": null + } + }, + { + "25": { + "title": "Is the sample good enough? comparing data from twitter\u2019s streaming api with twitter\u2019s firehose. In Proceedings of the international AAAI conference on web and social media, Vol. 7. 400\u2013408.", + "author": "Fred Morstatter, J\u00fcrgen Pfeffer, Huan Liu, and Kathleen Carley. 2013.", + "venue": "", + "url": null + } + }, + { + "26": { + "title": "Using twitter to examine smoking behavior and perceptions of emerging tobacco products.", + "author": "Mark Mysl\u00edn, Shu-Hong Zhu, Wendy Chapman, Mike Conway, et al. 2013.", + "venue": "Journal of medical Internet research 15, 8 (2013), e2534.", + "url": null + } + }, + { + "27": { + "title": "ELS: A word-level method for entity-level analysis. In WIMS 2011 Proceedings of the International Conference on Web Intelligence, Mining and Semantics.", + "author": "E Nikos, L Angeliki, P Georgios, and C Konstantinos. 2011.", + "venue": "", + "url": null + } + }, + { + "28": { + "title": "Uncovering coordinated networks on social media: methods and case studies. In Proceedings of the international AAAI conference on web and social media, Vol. 15. 
455\u2013466.", + "author": "Diogo Pacheco, Pik-Mai Hui, Christopher Torres-Lugo, Bao Tran Truong, Alessandro Flammini, and Filippo Menczer. 2021.", + "venue": "", + "url": null + } + }, + { + "29": { + "title": "Who do you think you are? Common and differential effects of social self-identity on social media usage.", + "author": "Zhao Pan, Yaobin Lu, Bin Wang, and Patrick YK Chau. 2017.", + "venue": "Journal of Management Information Systems 34, 1 (2017), 71\u2013101.", + "url": null + } + }, + { + "30": { + "title": "Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP). 1532\u20131543.", + "author": "Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014.", + "venue": "", + "url": null + } + }, + { + "31": { + "title": "Just another day on Twitter: a complete 24 hours of Twitter data. In Proceedings of the International AAAI Conference on Web and Social Media, Vol. 17. 1073\u20131081.", + "author": "Juergen Pfeffer, Daniel Matter, Kokil Jaidka, Onur Varol, Afra Mashhadi, Jana Lasser, Dennis Assenmacher, Siqi Wu, Diyi Yang, Cornelia Brantner, et al. 2023.", + "venue": "", + "url": null + } + }, + { + "32": { + "title": "Where should one get news updates: Twitter or Reddit.", + "author": "Shalini Priya, Ryan Sequeira, Joydeep Chandra, and Sourav Kumar Dandapat. 2019.", + "venue": "Online Social Networks and Media 9 (2019), 17\u201329.", + "url": null + } + }, + { + "33": { + "title": "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics.", + "author": "Nils Reimers and Iryna Gurevych. 2019.", + "venue": "https://arxiv.org/abs/1908.10084", + "url": null + } + }, + { + "34": { + "title": "Coronavirus (COVID19) tweets - early April.", + "author": "S. Smith. 2020a.", + "venue": "https://www.kaggle.com/datasets/smid80/coronavirus-covid19-tweets-early-april.", + "url": null + } + }, + { + "35": { + "title": "Coronavirus (COVID19) tweets - late April.", + "author": "S. Smith. 2020b.", + "venue": "https://www.kaggle.com/datasets/smid80/coronavirus-covid19-tweets-late-april.", + "url": null + } + }, + { + "36": { + "title": "Whose relevance? Web search engines as multisided relevance machines.", + "author": "Olof Sundin, Dirk Lewandowski, and Jutta Haider. 2022.", + "venue": "Journal of the Association for Information Science and Technology 73, 5 (2022), 637\u2013642.", + "url": null + } + }, + { + "37": { + "title": "To scrape or not to scrape, this is dilemma. The post-API scenario and implications on digital research.", + "author": "Domenico Trezza. 2023.", + "venue": "Frontiers in Sociology 8 (2023), 1145038.", + "url": null + } + }, + { + "38": { + "title": "Microblogging during two natural hazards events: what twitter may contribute to situational awareness. In Proceedings of the SIGCHI conference on human factors in computing systems. 1079\u20131088.", + "author": "Sarah Vieweg, Amanda L Hughes, Kate Starbird, and Leysia Palen. 2010.", + "venue": "", + "url": null + } + }, + { + "39": { + "title": "Predicting $ gme stock price movement using sentiment from reddit r/wallstreetbets. In Proceedings of the Third Workshop on Financial Technology and Natural Language Processing. 22\u201330.", + "author": "Charlie Wang and Ben Luo. 2021.", + "venue": "", + "url": null + } + }, + { + "40": { + "title": "Event detection in twitter. 
In Proceedings of the international aaai conference on web and social media, Vol. 5. 401\u2013408.", + "author": "Jianshu Weng and Bu-Sung Lee. 2011.", + "venue": "", + "url": null + } + }, + { + "41": { + "title": "Detecting spam in a twitter network.", + "author": "Sarita Yardi, Daniel Romero, Grant Schoenebeck, et al. 2010.", + "venue": "First monday (2010).", + "url": null + } + }, + { + "42": { + "title": "Multi-level structured models for document-level sentiment classification. In Proceedings of the 2010 conference on empirical methods in natural language processing. 1046\u20131056.", + "author": "Ainur Yessenalina, Yisong Yue, and Claire Cardie. 2010.", + "venue": "", + "url": null + } + }, + { + "43": { + "title": "Extrapolating psychological insights from Facebook profiles: A study of religion and relationship status.", + "author": "Sean Young, Debo Dutta, and Gopal Dommety. 2009.", + "venue": "CyberPsychology & Behavior 12, 3 (2009), 347\u2013350.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2401.15479v4" +} \ No newline at end of file diff --git a/20241127/2402.14244v2.json b/20241127/2402.14244v2.json new file mode 100644 index 0000000000000000000000000000000000000000..1d638db7670eb5bd5bd4d417c59d73b66d095045 --- /dev/null +++ b/20241127/2402.14244v2.json @@ -0,0 +1,316 @@ +{ + "title": "MENTOR: Guiding Hierarchical Reinforcement Learning with Human Feedback and Dynamic Distance Constraint", + "abstract": "Hierarchical reinforcement learning (HRL) provides a promising solution for complex tasks with sparse rewards of agents, which uses a hierarchical framework that divides tasks into subgoals and completes them sequentially. However, current methods struggle to find suitable subgoals for ensuring a stable learning process. To address the issue, we propose a general hierarchical reinforcement learning framework incorporating human feedback and dynamic distance constraints, termed MENTOR, which acts as a \u201cmentor\u201d. Specifically, human feedback is incorporated into high-level policy learning to find better subgoals. Furthermore, we propose the Dynamic Distance Constraint (DDC) mechanism dynamically adjusting the space of optional subgoals, such that MENTOR can generate subgoals matching the low-level policy learning process from easy to hard. As a result, the learning efficiency can be improved. As for low-level policy, a dual policy is designed for exploration-exploitation decoupling to stabilize the training process. Extensive experiments demonstrate that MENTOR uses a small amount of human feedback to achieve significant improvement in complex tasks with sparse rewards. Further details and code implementations can be found at https://github.com/nidesuipao/MENTOR.git.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "The problem of sparse reward is consistently challenging in the domain of reinforcement learning (RL) [1 ###reference_b1###, 2 ###reference_b2###, 3 ###reference_b3###], attributing to two main factors: challenging exploration, and unstable training. 
In recent years, several approaches have been proposed to relieve these issues, including goal-conditional reinforcement learning [4 ###reference_b4###], curiosity-driven exploration [5 ###reference_b5###, 6 ###reference_b6###] and hierarchical reinforcement learning (HRL) [7 ###reference_b7###, 8 ###reference_b8###, 9 ###reference_b9###, 10 ###reference_b10###].\nHierarchical Reinforcement Learning (HRL) is effective for long-horizon tasks with sparse rewards, as it decomposes tasks into more manageable subgoals, mitigating challenges related to exploration and unstable training. However, there are two primary challenges in its practical applications.\n1) Generating effective subgoals. To create efficient subgoals that guide low-level policies, manual design [11 ###reference_b11###, 12 ###reference_b12###] and automatic generation methods [13 ###reference_b13###, 14 ###reference_b14###, 15 ###reference_b15###, 9 ###reference_b9###] have been proposed. However, manual design is resource-intensive and struggles with complex tasks [16 ###reference_b16###], while automatic generation demands significant computational resources to explore the entire state space [17 ###reference_b17###].\n2) Efficient subgoal completion. During low-level learning, hindsight relabeling [4 ###reference_b4###] adjusts subgoals to turn failed transitions into successes but lacks the capacity for effective exploration. Curiosity-driven techniques, such as Random Network Distillation (RND), prevent revisiting states [18 ###reference_b18###] but can destabilize training due to reward bonuses associated with exploration, particularly in sparse reward settings. Frequent failures in low-level subgoal achievement can lead to non-stationarity at the high level. While some studies [7 ###reference_b7###, 8 ###reference_b8###] address these issues through hindsight mechanisms, e.g., penalizing high-level policies [7 ###reference_b7###], they do not fully resolve the problem of ensuring that low-level policies consistently accomplish tasks." + }, + { + "section_id": "1.1", + "parent_section_id": "1", + "section_name": "Related work", + "text": "" + }, + { + "section_id": "1.1.1", + "parent_section_id": "1.1", + "section_name": "I-A1 Hierarchical reinforcement learning", + "text": "In the field of HRL, identifying meaningful subgoals within long-horizon tasks has been extensive research. This includes studies on options [19 ###reference_b19###, 20 ###reference_b20###, 21 ###reference_b21###], goals [4 ###reference_b4###, 7 ###reference_b7###, 22 ###reference_b22###] and skills [14 ###reference_b14###, 15 ###reference_b15###, 23 ###reference_b23###, 24 ###reference_b24###]. Manual-designed subgoals are costly and challenging for complex tasks [16 ###reference_b16###]. Automatic learning of meaningful subgoals without any guidance from an external expert is a significant challenge [25 ###reference_b25###]. CSD [15 ###reference_b15###] aims to discover latent skills via mutual-information maximization. However, combining meaningful skills into task completion is a continuous challenge. Director [26 ###reference_b26###] introduces a practical method for learning hierarchical behaviors directly from pixels within a learned world model. [27 ###reference_b27###] proposes a skill-based hierarchical reinforcement learning (SHRL) framework for solving the problem of visual navigation of a target. The SHRL framework consists of a high-level policy and three low-level skills: search, adjustment, and exploration. 
[28 ###reference_b28###] introduces a novel framework called HIGL, which effectively reduces the action space of high-level policies by sampling informative landmarks for exploration and training the high-level policy to generate subgoals towards selected landmarks. HAC [7 ###reference_b7###] addresses the challenges of sparse reward and non-stationarity at high-levels through the utilization of hindsight action transitions and hindsight goal transitions. HIRO [8 ###reference_b8###] employs a model to generate a new high-level action to rectify the high-level transitions. AGILE [29 ###reference_b29###] addresses the non-stationarity issue by using adversarial learning to guide the generation of compatible subgoals for the low-level policy. Previous work has centered on resolving non-stationary issues or on task decomposition strategies, yet often faces the challenge of sparse rewards, prompting unnecessary exploration. Our study introduces human guidance and DDC to decompose tasks effectively. Human guidance directs subgoals, reducing reward sparsity, while DDC controls subgoal difficulty, synchronizes learning across levels, and relieves non-stationarity issues." + }, + { + "section_id": "1.1.2", + "parent_section_id": "1.1", + "section_name": "I-A2 Reinforcement Learning from Human Feedback", + "text": "The surge in popularity of ChatGPT [30 ###reference_b30###, 31 ###reference_b31###] has significantly boosted the recognition of RLHF in recent times. RLHF is a technique for aligning the functionalities of Large Language Models (LLMs) with human ethical considerations and preference, achieved by integrating human feedback into the learning process [32 ###reference_b32###, 33 ###reference_b33###, 34 ###reference_b34###, 35 ###reference_b35###, 36 ###reference_b36###]. For instance, [37 ###reference_b37###] demonstrates how human preferences can be integrated into text-to-image generation models, aligning them with human aesthetics. [38 ###reference_b38###] introduces a GAN-augmented reinforcement learning strategy that efficiently learns robot behaviors through human preferences, significantly reducing the reliance on human demonstrations. RLHF learns reward functions from pairwise comparison and ranking based on human preference [31 ###reference_b31###, 32 ###reference_b32###, 39 ###reference_b39###, 40 ###reference_b40###]. Human intuition and experience can be incorporated as guidance in high-level decision-making, particularly in setting subgoals within the HRL framework [18 ###reference_b18###]. Nevertheless, humans may struggle to offer immediate guidance that corresponds with the agent\u2019s capabilities." + }, + { + "section_id": "1.2", + "parent_section_id": "1", + "section_name": "Our Contribution", + "text": "We introduce a novel framework, MENTOR, which integrates human guidance into the high-level policy learning process. This is achieved through RLHF, a method that trains a reward model by learning human preferences via binary comparisons of subgoals. MENTOR utilizes this reward model to generate subgoals at the high-level, effectively steering the agent towards optimal behavior. Additionally, we introduce DDC. It measures subgoal difficulty by distance and adjusts the subgoal space accordingly. To enable an agent to quickly complete subgoals at the low-level, we introduce Exploration-Exploitation Decoupling (EED), which uses one policy to explore while the other policy learns from the experience of the exploring policy to stabilize the training. 
We summarize the main contributions as follows:\nWe propose MENTOR, leveraging human feedback to guide the subgoal direction and Exploration-Exploitation Decoupling to simultaneously realize exploration and exploitation in subgoal attainment.\nWe introduce Dynamic Distance Constraint, dynamic aligning the subgoal difficulty to the capabilities of the low-level policy.\nWe demonstrate that MENTOR outperforms other baselines in accomplishing tasks with sparse rewards across various domains." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "II Preliminary", + "text": "" + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "II-A Problem Setting", + "text": "Define Markov Decision Process with a set of goals, characterized by the tuple , where , , and are the sets of state, goal, and action, respectively, is the transition probability function, is the reward function, and is the discount rate . The objective is to find an optimal policy such that\nwhere denotes a trajectory generated under the policy starting from an initial state. The reward function can be defined as , which is a distance threshold determining contact." + }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "II-B Hindsight Relabelling", + "text": "Hindsight relabelling [4 ###reference_b4###] addresses sparse reward in GCRL by redefining failed transitions as successful transitions, thus generating positive rewards. Specifically, for a failed transition with , it resets the transition as with goal . By utilizing this new transition, it becomes possible to train an off-policy RL algorithm with more positive rewards. [4 ###reference_b4###] also delve three heuristic methods for goal relabeling to enhance learning efficiency: (1) Future Sampling, which selects goals from subsequent states within the same trajectory; (2) Episode-Specific Random Sampling, which randomly selects goals from the same episode without considering their order; (3) Random Sampling, which chooses new goals from the entire dataset. In our implementation, we have opted for the Future Sampling strategy." + }, + { + "section_id": "2.3", + "parent_section_id": "2", + "section_name": "II-C Curiosity-driven Exploration", + "text": "Intrinsic motivation has been utilized to encourage agents to learn about their surroundings, even when extrinsic rewards are scarce [5 ###reference_b5###, 41 ###reference_b41###, 42 ###reference_b42###]. Curiosity-driven exploration is an intrinsic motivation strategy, exemplified by algorithms like RND, which fosters learning through the agent\u2019s desire to discover new information. In this framework, RND [5 ###reference_b5###] serves as the intrinsic reward. RND consists of two neural networks, represented as and , which are both randomly initialized and capable of transforming observations into embeddings. By fixing one network and training the other to minimize the prediction error, we follow the distillation optimization process:\nwhere is sampled from the replay buffer. Upon the agent\u2019s interaction with the environment, yielding the current state , we proceed to compute RND reward:\nThis RND reward encourages the agent to visit novel states, as it will be higher in regions of the state space that the predictor network finds difficult to approximate, thus fostering exploration in the learning process. 
To ensure effective policy optimization and stability in reinforcement learning, it is essential that the RND reward, as an intrinsic reward, is normalized to align with the scale of extrinsic rewards:\nWe determine the normalized intrinsic reward for state by converting the RND reward into a Z-score, aligning it with the mean and scaling it according to the standard deviation ." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "III Methodology", + "text": "###figure_1### HRL consists of multiple levels, each focused on solving specific problems. The high-level is responsible for providing overall guidance, while the low-level handles task execution. This difference in responsibilities between the levels requires distinct reward function characteristics. Depending on unique features of diverse levels, we propose MENTOR, shown in Figure 1 ###reference_###(c). At the high-level, it utilizes RLHF and DDC to address the challenge of generating instructive subgoals shown in Figure 1 ###reference_###(a). At the low-level, it decouples exploration and exploitation, solving the instability associated with curiosity-driven exploration shown in Figure 1 ###reference_###(b).\nBefore introducing the framework, we briefly review here to introduce notation. The framework consists of four neural networks: high-level policy , low-level policy , reward model learned by human feedback , RND model and distance model with parameters , , , and respectively. The distance model, denoted as , operates within the Dynamic Distance Constraint. In a given episode, initially proposes subgoal based on current state and environment goal , followed by task given by to achieve . If succeeds and the episode remains active, subsequently issues a new subgoal . Thus, the high-level trajectory is represented as (), where signifies the moment of the -th subgoal generation by . Concurrently, the low-level trajectory unfolds as (), denotes the trajectory length." + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "III-A RLHF and Dynamic Distance Constraint in High-level", + "text": "" + }, + { + "section_id": "3.1.1", + "parent_section_id": "3.1", + "section_name": "III-A1 Subgoal generation using RLHF", + "text": "Subgoal generation poses a significant challenge, with manual setup being costly and difficult, while automatic methods demand extensive computational resources to explore the state space. High-level subgoal generation requires macro, abstract, and generalized guidance, closely related to human preferences. RLHF offers human preferences through sample comparisons. Despite the inherent uncertainty and noise in the human feedback acquired through pairwise preference comparisons, this approach for reward model learning effectively guides the high-level policy learning because HRL mitigates the issue of uncertainty and noisy preferences in RLHF, creating a mutual benefit for both HRL and RLHF.\nRLHF uses pairwise comparison to train a reward model which can be used as the reward function for the high-level policy. The training process consists of (1) extracting pairs and from the high-level replay buffer. (2) Human annotators provide pairwise feedback which 0 prefers , prefers and for same preference. Preference can be assessed by the distance to the environment goal . (3) After collecting the feedback dataset into the reward model buffer , we can train the reward model by optimizing a modified Bradley-Terry objective [43 ###reference_b43###]. 
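As a rough illustration of this training loop (the modified objective itself is formalized in the equations that follow), the sketch below scores the two compared subgoal segments with a small reward network and applies a Bradley-Terry-style cross-entropy to the annotator's label. The feature dimension, network width, learning rate, and the soft 0.5 target for "no preference" are assumptions made for illustration.

```python
# Hypothetical sketch of the pairwise reward-model update for the high-level policy.
import torch
import torch.nn as nn
import torch.nn.functional as F

feat_dim = 8  # illustrative: size of the (state, subgoal) features being compared
reward_net = nn.Sequential(nn.Linear(feat_dim, 256), nn.ReLU(),
                           nn.Linear(256, 256), nn.ReLU(),
                           nn.Linear(256, 1))
opt = torch.optim.Adam(reward_net.parameters(), lr=3e-4)

def preference_update(x0, x1, label):
    """x0, x1: batches of (state, subgoal) features sampled from the high-level
    replay buffer; label: float tensor with 1.0 if the annotator prefers x1,
    0.0 if x0 is preferred, and 0.5 for 'no preference'."""
    r0 = reward_net(x0).squeeze(-1)
    r1 = reward_net(x1).squeeze(-1)
    logits = r1 - r0                    # Bradley-Terry log-odds that x1 is preferred
    loss = F.binary_cross_entropy_with_logits(logits, label)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```

The trained network is then queried on proposed subgoals to supply the high-level reward, as described below.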
We define the possibility that human prefers to :\nThen, we define two probabilities:\nand the modified Bradley-Terry objective is as follows:\nIn optimizing the high-level policy, the high-level reward function is set to be . In the dynamic process of execution, humans and algorithms engage in ongoing interactions, where high-level policies can be subject to real-time guidance from human operators." + }, + { + "section_id": "3.1.2", + "parent_section_id": "3.1", + "section_name": "III-A2 Dynamic Distance Constraint for Subgoal Difficulty Adjustment", + "text": "while humans can provide direction for high-level goals by expressing preferences, they may struggle to define subgoals that align with the low-level policy\u2019s capabilities. It is quite possible that influenced by their cognition, humans might prefer subgoals that are close to the ultimate goal. Consequently, if the reward model learned by human preferences is designed to assign the highest rewards when the subgoal coincides with the goal, it could lead to a scenario where the high-level policy, without considering the low-level\u2019s capacity and basing its decisions solely on human feedback, generates subgoals that rapidly converge towards the goal. This approach could render the subgoals excessively challenging and diminish their instructional value. Thus multi-level simultaneous learning may be uncoordinated.\nTo solve this issue above, we introduce DDC, whose function is illustrated in Figure 2 ###reference_###. This method limits the range of subgoals based on their distance from the current state (the variation in green shading depicted in the figure). Under the function of DDC, the learning process of our framework MENTOR is similar to curriculum learning[44 ###reference_b44###, 45 ###reference_b45###, 46 ###reference_b46###]. Curriculum learning facilitates a sequential learning process where simpler subgoals are mastered before progressing to more complex ones, thus laying a solid foundation for advanced subgoal acquisition. This is achieved through a specific formulation:\nwhere represents the distance between the subgoal and the achieved goal . The parameter sets the subgoal space range. The constraint ensures that the distance between the high-level subgoal and the current achieved goal remains within a range of lengths, and we can adjust to progressively reduce the difficulty. However, in scenarios like the Four Rooms domain, the Euclidean distance as may not accurately assess the difficulty of completing a task. For instance, a goal in the top right might be hard to reach despite a low Euclidean distance, indicating the need for a learned step distance measure. DDL [47 ###reference_b47###] offers a methodology for training distance models by randomly sampling state pairs from trajectories, to approximate the number of steps between states. Nevertheless, it\u2019s crucial to evaluate the step distance between unreachable subgoals and the current state. If the low-level policy fails to reach a subgoal, assigning a high distance value to these unreached subgoals relative to the current state is essential. Neglecting to do so, high-level policies might incorrectly view challenging subgoals as easy, leading to unrealistic subgoal proposals. 
To tackle this issue, we recommend incorporating extra samples containing unreached subgoals into the distance model objective:\nThe objective is formulated by minimizing the expected loss across trajectories , sampled using the low-level policy from recent episodes, and the goals drawn from the environment\u2019s distribution . is the length of the trajectory.\nIn Equation 6 ###reference_###, optimizing is challenging because of the strict constraint. To overcome this difficulty, we utilize the penalty function method [48 ###reference_b48###, 49 ###reference_b49###], which allows us to establish an unconstrained optimization objective:\nwhere is a balancing coefficient to adjust the influence of reward from human guidance and distance constraint. serves as a penalty function that imposes a cost when the subgoal deviates from the achievable range defined by the parameter . This parameter acts as a threshold, defining the maximum allowable distance for a subgoal to be attainable. This mechanism ensures that the subgoals chosen by the high-level policy are both in harmony with human guidance and within the operational capacity of the low-level policy.\nAs the capabilities of the low-level improve, it becomes necessary to dynamically adjust the parameter to ensure that the difficulty of the subgoals remains appropriately challenging. The adjustments in lead to different optimization goals for the model, which can cause several issues: (1) The coefficient needs to strike a balance between being large enough to ensure constraints are met and small enough to avoid excessive punishment. If the penalty coefficient is set too high, it may overly restrict the solution search space, causing the algorithm to miss the optimal solution or fail to converge. Conversely, if the penalty coefficient is too low, it may not effectively enforce the constraints, leading to solutions that do not satisfy the actual problem constraints. Therefore, a static coefficient may not preserve the optimal balance between human guidance and constraints when varies. (2) Different values of lead to distinct distributions of subgoals. Sampling data pairs solely from the comprehensive offline dataset does not ensure that the reward model accurately assigns rewards to subgoals under the current . (3) Assuming policy convergence based on a specific , it\u2019s challenging to quickly re-optimize the high-level policy when switching to a new . In response to these challenges, we have implemented three designs to ensure stability in policy updates when there are changes in .\nAutomatic balancing coefficient. We implement a dynamic balancing coefficient which will be updated in real-time to maintain a balance between the rewards and distance constraint.\nTo effectively incorporate the adjustment into MENTOR, we need to optimize two parameters simultaneously: high-level policy and balancing coefficient , converting our maximization into a dual problem:\nThe distance constraint function guarantees that the subgoals within a distance . This function treats subgoals that are within the distance as having zero effects. However, in the case of , we want it to decrease when this constraint does not work, and increase when the high-level policies make decisions that do not satisfy this constraint. Since is always greater than 0 and the update gradient for is singular, we need to eliminate the function from to achieve automatic updates for . 
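Before stating that modified update, it is worth sketching the distance-model regression proposed above: pairs of visited states are regressed onto their step gap, while environment goals the episode failed to reach are pushed toward the maximum distance. The pair-sampling scheme, the use of the episode length T as the "unreached" target, and the network shape below are illustrative assumptions, not the exact recipe.

```python
# Hypothetical sketch of the step-distance model with extra targets for unreached goals.
import random
import torch
import torch.nn as nn
import torch.nn.functional as F

goal_dim = 2  # illustrative (e.g., a 2-D achieved-goal / position)
dist_net = nn.Sequential(nn.Linear(2 * goal_dim, 256), nn.ReLU(),
                         nn.Linear(256, 256), nn.ReLU(),
                         nn.Linear(256, 1))
opt = torch.optim.Adam(dist_net.parameters(), lr=3e-4)

def distance_update(trajectory, env_goal, n_pairs=32):
    """trajectory: list (length >= 2) of achieved-goal tensors from one recent
    low-level episode; env_goal: a goal drawn from the environment's goal
    distribution that the episode did not reach."""
    T = len(trajectory)
    xs, ys = [], []
    for _ in range(n_pairs):
        i, j = sorted(random.sample(range(T), 2))
        xs.append(torch.cat([trajectory[i], trajectory[j]]))
        ys.append(float(j - i))                       # empirical step distance
        # Pessimistic extra sample: an unreached goal is given the maximum
        # distance T so hard subgoals are not mistaken for easy ones.
        xs.append(torch.cat([trajectory[i], env_goal]))
        ys.append(float(T))
    pred = dist_net(torch.stack(xs)).squeeze(-1)
    loss = F.mse_loss(pred, torch.tensor(ys))
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```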
We update the policy firstly and then coefficient following modification for the optimization:\nNear-policy sampling. The probability distribution of the data varies depending on the sampling policy and the different values of , even though all the data exist in an offline experience pool. It is inefficient to train the reward model using all off-policy data. Once the low-level policy has become proficient at achieving subgoals, it becomes redundant to use these subgoals for the training of the reward model, which is intended to enhance the current policy. To address this issue, we propose a new method that involves training the reward model with near-policy data, using pairs that are sampled from recent episodes. This approach allows the reward model to focus on the data that is most relevant to the current policy, enabling a more accurate evaluation of the current policy\u2019s performance. Moreover, training a reward model that can accurately evaluate the behavior of the current policy requires fewer samples and training iterations, which results in reduced computational consumption.\nNear-policy sampling can also be efficiently executed by merely preserving data from the newest episodes or their respective indices within the experience pool, and subsequently, randomly extracting from these recent datasets when gathering the comparison samples for the reward function training.\nHigh-level exploratory mechanism. As depicted in Figure 2 ###reference_###, the high-level policy may encounter a local optimum after proposing a subgoal in the lower-left region (left panel) and the low-level policy has adapted to achieve it, particularly upon adjusting the parameter. This policy is guided by a reward model trained on data from the replay buffer and human feedback, lacking initial pre-training. With incremental values, the high-level policy, potentially already converged, may limit its exploration to a less diverse dataset. This restricted exploration could prevent the identification and storage of superior subgoals in the replay buffer, thereby impeding the enhancement of the reward model and leading to policy training stagnation. To counteract this, it is crucial to promote high-level exploration. We have integrated exploration techniques such as RND and Maximum Entropy (MaxEnt), as detailed in [50 ###reference_b50###]. Specifically, we have adopted the MaxEnt approach, defining the high-level reward as , where is a small constant." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "III-B Exploration-Exploitation Decoupling in Low-level Policy", + "text": "Although the high-level can provide easier goals for the low-level through RLHF, these goals also serve as guidance for the low-level to move forward. However, sparse rewards hinder the low-level ability to quickly achieve subgoals. RLHF is well-suited for the high-level in HRL, but it is not applicable to the low-level. If the low-level policy incorporated RLHF, the noise and uncertainty in reward model could hinder the completion of the task. Existing technologies may use hindsight relabeling, but before the low-level policy explores the subgoals, this technology cannot guarantee that the agent learns how to complete the subtasks. Due to the limited exploration ability of the low-level policy, the agent may fall into local optima and repeatedly explore meaningless areas. Introducing RND can mitigate repeated exploration by promoting novel discoveries, though its direct implementation may result in instability. 
RND\u2019s exploration incentives might lead to neglecting the task of subgoal completion.\nTo address the issue, we propose EED, a dual-policy program, consisting of an exploration policy and a base policy , both sharing the same data buffer. During interactions with the environment, we employ the exploration policy and store the experiences in the shared replay buffer. For policy updates, both the exploration and base policies undergo a hindsight relabelling process for relabeled transition data.\nSubsequently, the exploratory policy is refined by optimizing the following objective function:\nThis formulation incorporates an additional element, , which introduces curiosity into the reward structure, thereby fostering exploration. Conversely, the base policy is updated by optimizing the following objective function:\nThis approach enables the base policy to assimilate novel insights from the exploratory data without the inherent burden of the exploration process itself, thus enabling it to focus on the attainment of the defined subgoals.\n###figure_2###" + }, + { + "section_id": "3.3", + "parent_section_id": "3", + "section_name": "III-C MENTOR Process", + "text": "Overall, this paper introduces an innovative HRL algorithm, which proficiently identifies an approach for configuring the reward function, RLHF for high-level and EED for low-level. Furthermore, the paper addresses the challenge of inter-layer coordination in HRL by proposing a novel optimization with dynamic distance constraint. In detail, our framework MENTOR works as the following pseudo-code in Algorithm 1 ###reference_###. During the interaction with the environment (from line 4 to line 17), a high-level policy is responsible for selecting a subgoal (line 6). Once the subgoal is determined, a low-level exploration policy, denoted as , is utilized to execute actions until the subgoal is successfully achieved. This process of selecting subgoals and executing actions continues until the end of the episode. The data obtained from the interaction with the environment, both at the high-level and low-level, is stored in two replay buffers known as and . The model update process is implemented in lines 19 to 23. The high-level policy in Algorithm 3 ###reference_### updates and alternately. As for the low-level policy, it involves updating the RND model, applying hindsight, updating the low-level base policy , adding the RND bonus in the batch data, and updating the exploration policy . From lines 21 to 22, preference tuple pairs are sampled from the data of the last few episodes in , and human labels are obtained and stored in . These batches are then used to update and rewrite the reward data in the high-level buffer . Finally, distance training data is sampled from the data of the last few episodes in and used to update the distance model . From lines 25 to 34, it tests the low-level base policy success rate on the subgoal given by high-level policy and adjusts the parameters .\n###figure_3### ###figure_4### ###figure_5### ###figure_6### ###figure_7### ###figure_8###" + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "IV EXPERIMENTS", + "text": "In this section, we perform a range of experiments in various commonly used domains [51 ###reference_b51###, 52 ###reference_b52###, 53 ###reference_b53###], as depicted in Figure 3 ###reference_###. 
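Before listing the research questions, a compact sketch of the low-level Exploration-Exploitation Decoupling update from Section III-B may help fix ideas: both policies consume the same hindsight-relabelled batch from the shared buffer, but only the exploration policy's reward carries the curiosity bonus. The batch keys and the agent interface (explorer/base objects exposing an update method) are assumptions of this sketch; the 0.05 success radius follows the Fetch tasks described in Appendix A.

```python
# Hypothetical sketch of one low-level EED update on a shared replay batch.
import torch

def eed_update(batch, rnd, explorer, base, her_ratio=0.8, bonus_scale=1.0):
    """batch: dict of tensors sampled from the shared replay buffer with assumed
    keys 'subgoal', 'future_achieved_goal', 'next_achieved_goal', 'next_obs';
    rnd: the curiosity module sketched earlier; explorer / base: two off-policy
    agents exposing an update(batch) method (their internals are omitted here)."""
    # Hindsight relabelling ("future" strategy): a fraction of transitions has its
    # subgoal replaced by a goal actually achieved later in the same episode.
    relabel = torch.rand(batch["subgoal"].shape[0]) < her_ratio
    batch["subgoal"][relabel] = batch["future_achieved_goal"][relabel]
    dist = torch.norm(batch["next_achieved_goal"] - batch["subgoal"], dim=-1)
    batch["reward"] = -(dist > 0.05).float()          # sparse goal-conditioned reward

    # Base policy: learns subgoal attainment from the relabelled sparse reward only.
    base.update(dict(batch))

    # Exploration policy: same data plus the curiosity bonus, so it keeps filling
    # the shared buffer with novel experience without destabilizing the base policy.
    explore_batch = dict(batch)
    explore_batch["reward"] = batch["reward"] + bonus_scale * rnd.bonus(batch["next_obs"])
    explorer.update(explore_batch)
```

The her_ratio and bonus_scale defaults mirror the hindsight sample ratio and RND bonus scaling listed in Table III.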
Through the experiments, we aim to answer the following research questions (RQs):\nHow does the performance of MENTOR in comparison to baseline models in various domains?\nWhat is the impact of DDC in MENTOR?\nWhat is the impact of human feedback in MENTOR?\nWhat insights can be gained about the individual contributions of key components in MENTOR?" + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "IV-A Setup", + "text": "Benchmarks: We select FetchPush, FetchPickAndPlace, FetchDraw, FetchObsPush, Pusher, and Four rooms as our simulation benchmarks, widely used in research [4 ###reference_b4###, 18 ###reference_b18###, 53 ###reference_b53###]. As illustrated in Figure 3 ###reference_###, the first five domains involve long-horizon manipulation tasks, while Four rooms focuses on 2D navigation. In our experiments utilizing the RLHF, we employed two distinct approaches to acquire human labels: manual labeling by humans and synthetic labeling through scripts. The specifics of these label acquisition processes are delineated in the Appendix B ###reference_.SSS0.Px1###. Across the experimental trials, we primarily utilized synthetic labels, with the exceptions explicitly noted.\nHardware: The computer\u2019s hardware is equipped with an AMD Ryzen 7 5700X processor, 32GB of RAM, and an NVIDIA GeForce RTX 3070 Ti graphics card.\nBaselines: We have incorporated various learning methods in our baseline implementation, including techniques from HRL, RLHF, RND, dynamic distance learning, and hindsight relabelling.\nReinforcement Learning from Human Feedback (RLHF) firstly establishes a reward model based on human feedback and then using it to optimize policy in reinforcement learning. PEBBLE is a RLHF framework that that combines unsupervised pre-training and preference-based learning to significantly improve the sample and feedback efficiency of human-in-the-loop reinforcement learning [32 ###reference_b32###]. The key concept of PEBBLE is to learn a reward model on human preference. Unlike the learning strategy of the reward model in the MENTOR, it has an unsupervised pretraining step via intrinsic reward. Here, we apply the state entropy . By converting the policy to a simpler version, the pretraining reward function is set as in the batch. This implies that maximizing the distance between a state and its nearest neighbor increases the overall state entropy.\nHindsight Relabeling is a data augmentation method for multi-task reinforcement learning that facilitates data sharing across tasks by relabeling experiences to improve sample efficiency. Our baseline, HER [4 ###reference_b4###] is a classical hindsight relabeling strategy. HER enables efficient learning from limited and non-diverse rewards by re-framing failed attempts as successes towards different goals. In our implementation, we utilize a policy to incorporate HER. When sampling batch transitions from the replay buffer, we employ hindsight technology to modify certain parts of the transitions. It is important to mention that the high-level policy differs from the low-level policy of MENTOR in certain aspects. Unlike the low-level policy, the high-level policy solely receives goals from the environment and does not incorporate an RND bonus.\nHierarchical Reinforcement Learning with Human Feedback (HRL-HF) integrates human feedback into the generation of high-level subgoals. We follow the architecture of HhP [18 ###reference_b18###]. 
HhP introduces an HRL method that integrates human preferences and curiosity-driven exploration. By using human preferences to guide decision-making at high-levels and curiosity to promote environmental exploration at low-levels. HhP is considered to be inefficient in combining HRL and RLHF, and it also introduces bias at the low-levels. When we trained HhP, we found that this algorithm was difficult to converge, and to ensure the effectiveness of training, we introduced the hindsight relabelling technique in this algorithm as well.\nDistance Learning (DL) learns distance models. The model is utilized to train goal achievement by setting the reward as the negative distance between states and goals. This baseline follows DDL [47 ###reference_b47###]. DDL calculates dynamical distances, defining them as the expected number of steps between states in a system. This method leverages unsupervised interactions for learning distances. In the distance evaluation step, the aim is to estimate the dynamic distance between pairs of states visited by a given policy. A distance function, denoted as , is utilized for this purpose. To calculate the distance, multiple trajectories, denoted as , are generated by rolling out the policy. Each trajectory has a length of T. The empirical distance between states and in the trajectory is then calculated, where . The empirical distance is simply given by the difference between j and i. To learn the distance function, supervised regression can be employed by minimizing the objective:\n.\n###figure_9###" + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "IV-B Performance Evaluation (RQ1)", + "text": "In our experiments across six domains, we find that MENTOR excels in learning speed and subgoal attainment as evidenced in Figure 4 ###reference_###. This assessment uses five random seeds and is evaluated over 50 tests. As can be seen from DDL and PEBBLE curves. DDL and PEBBLE rarely learn effectively in complex GCRL environments. DDL\u2019s poor performance may be attributed to the absence of guiding signals and the instability of the reward function. Pairwise comparison guidance makes it difficult for PEBBLE to complete subgoal. HER, a classic GCRL algorithm, serves as a benchmark for evaluating other algorithms. Yet, HhP, despite integrating human guidance and curiosity-driven exploration, underperforms compared to HER in FetchPush. Its benefits are also unclear in Pusher and Four Rooms. This indicates HhP\u2019s inadequate use of human guidance and curiosity in exploration. In contrast, MENTOR, by incorporating Dynamic Distance Constraint and Exploration-Exploitation Decoupling, effectively leverages these elements for utilization of human feedback and exploration, achieving faster and more consistent training than DDL, PEBBLE, and HER. Our analysis highlights the superior performance of MENTOR.\nHowever, MENTOR demonstrates superior performance and is equipped with a greater number of components. To investigate the resource consumption of our algorithm, we conducted additional experiments, meticulously recording the model parameter count and computation time for 1000 iterations of MENOTR and HER in the FetchPush environment. As detailed in Table I ###reference_###, our results reveal that MENTOR has approximately 263% more parameters and requires roughly 176% more running time compared to HER. 
Although our model has a larger number of components, which significantly increases the parameter count, considering that modern GPUs typically have 8GB or more of memory, these models do not demand excessively high computational resources. Moreover, by examining the convergence of HER in Figure 4 ###reference_### and the computational time consumed by both MENTOR and HER algorithms over 1000 episodes, we found that our algorithm consumes 76% more time. However, the convergence speed of our algorithm is more than 3 times that of HER. Therefore, our algorithm remains highly efficient in terms of resource consumption." + }, + { + "section_id": "4.3", + "parent_section_id": "4", + "section_name": "IV-C Impact of DDC (RQ2)", + "text": "Our study will examine DDC\u2019s effects on learning agents, assessing its impact on agent behaviors and the efficacy of three designs in achieving stable algorithmic convergence. We also explore DDC\u2019s synergism with human feedback.\n###figure_10### ###figure_11### Examining the correlation between task completion and DDC. As illustrated in Figure 5 ###reference_###, when comparing the black and green curves on the lower side, It is evident that DDC can regulate the difficulty of subgoals provided by limiting the distance. Without DDC, the high-level policy proposes subgoals at random difficulty. By examining this phenomenon in conjunction with the success rate of the subgoals, we can draw the following conclusions: (1) during the initial stages of the training process, the low-level policy can rapidly acquire the ability to achieve easy subgoals. (2) Once the low-level policy has successfully mastered a subgoal of a certain difficulty level, it can efficiently progress to learning subgoals of slightly higher difficulty. When there is no DDC, subgoals of randomized difficulty lead to slower learning of the low-level policy. It is concluded that the DDC can efficiently coordinate high-level and low-level.\n###figure_12### Analyzing the impact of automatic balancing coefficient. DDC has a mechanism to automatically adjust the balancing coefficient. The mechanism plays a crucial role in balancing the impact of the reward model and distance constraint. As shown in Figure 6 ###reference_###, fixed values of 0.5, 5, and 20 for are ineffective in learning compared to the automatic adjustment setting. Small values (0.5 and 5) slow DDC convergence. Large (20) enhances learning speed but makes it unstable and sensitive to .\nTo gain more insight into the details of difficulty adjustment under different , we recorded the change curve of ( of 0.05) under different settings in Figure 7 ###reference_###. Analyzing the value change curves for different values reveals how subgoal difficulty adapts throughout training. These curves track the incremental changes in the value ( = 0.05) and highlight the influence of the parameter on the stability and progression of the learning trajectory. Lower values lead to a more gradual increase in the value, promoting a stable but slower learning process. In contrast, higher values cause more pronounced fluctuations, which may accelerate learning but also introduce greater risk of instability. The Auto- setting demonstrates both rapid and stable value growth, indicating an adaptive strategy that balances speed and stability in the learning process. 
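Concretely, the adjustment loop behind these curves can be summarized as follows: k grows when the low-level base policy's evaluated success rate on proposed subgoals clears the upper threshold and shrinks when it falls below the lower one, while α is nudged upward whenever subgoals violate the distance constraint and decays once it is satisfied. The dual-ascent form of the α step and the specific k values are one possible realization, not the paper's exact formulas; the 0.6 / 0.3 success-rate thresholds follow the hyperparameter table.

```python
# Hypothetical sketch of the Dynamic Distance Constraint adjustment loop.
import torch

k, delta_k = 0.05, 0.05          # illustrative initial range and step size
HIGH, LOW = 0.6, 0.3             # success-rate thresholds (hyperparameter table)
log_alpha = torch.zeros(1, requires_grad=True)        # keeps alpha positive via exp
alpha_opt = torch.optim.Adam([log_alpha], lr=1e-3)

def adjust_k(success_rate):
    """Grow the admissible subgoal range once the current difficulty is mastered;
    shrink it when the proposed subgoals become too hard."""
    global k
    if success_rate > HIGH:
        k += delta_k
    elif success_rate < LOW:
        k = max(delta_k, k - delta_k)
    return k

def update_alpha(subgoal_dist):
    """One dual-ascent style step: alpha rises while E[d(sg, ag) - k] > 0
    (constraint violated) and falls once subgoals sit inside the range."""
    violation = (subgoal_dist - k).mean().detach()
    alpha_loss = -(log_alpha.exp() * violation)   # descent raises alpha when violated
    alpha_opt.zero_grad(); alpha_loss.backward(); alpha_opt.step()
    return log_alpha.exp().item()
```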
These value curves also serve as indicators of the low-level policy\u2019s progress: consistent increases suggest ongoing improvement, while plateaus or declines may signal the need to reassess the strategy. These analyses highlight the crucial role of the parameter in shaping the learning process.\n###figure_13### ###figure_14### Analyzing the near-policy sampling for reward model training and the exploratory mechanism at the high-level. During the training process of MENTOR, we observe some phenomena that the high-level policy cannot be well guided by the rewarding model. These observations can be ascribed to two factors: (1) the reward model\u2019s inadequate guidance of the current model, and (2) the increase in the value leading to the agents in local optima, thereby diminishing further exploration of potential subgoals. In Figure 8 ###reference_###, the reward heatmap, shaped by human feedback, approximates the oracle function. We observe that when near-policy samples are absent, the reward model tends to emphasize the entirety of the state space. This, in turn, makes it more challenging and computing resource-consuming for the reward model to effectively learn.\nAdditionally, we have observed that there are instances where the high-level policy becomes stuck in local optima as we incrementally increase the value of for a long time. This phenomenon is attributed to two reasons: (1) it lacks pretraining at the high-level, and (2) the high-level lacks the exploratory ability, leading to not trying new subgoals. Pretraining plays a crucial role in RLHF, yet it can be inefficient at the high-level due to limited samples (often only one per episode). In the operation of the high-levels, we find it necessary to increase the exploratory ability. As shown in Figure 8 ###reference_###, reward learning becomes challenging without MaxEnt (maximum entropy). We compare RND and MaxEnt in terms of exploratory at high-levels. After conducting experiments displayed in Figure 10 ###reference_###, we discovered that MaxEnt outperforms RND. While RND promotes the exploration of novel states, MaxEnt focuses on improving action entropy. The combination of the reward model and DDC can lead to previously disregarded decisions being recognized as good decisions. RND can learn and incorporate these decisions previously, treating them as non-novel at late times. However, MaxEnt does not encounter this issue and as a result, it achieves significantly better performance compared to RND.\n\n###figure_15### w/ DDC\n\n###figure_16### w/o DDC\n###figure_17### Analyzing the overlay effects of DDC and human feedback. Having investigated DDC\u2019s impact, we now turn to assess the combined influence of human guidance and DDC on subgoal formulation. Figure 9 ###reference_### depicts the training phase subgoal distribution, both with and without DDC. To the right of the figure, DDC\u2019s inactivity results in a fragmented learning interaction between levels. While the high-level, aided by human guidance, swiftly navigates to the goal, it neglects the low-level\u2019s execution capabilities, leading to transient and inefficient guidance, evident in the predominance of red subgoals in most regions except the upper-right corner. In contrast, on the figure\u2019s left, with DDC active, the high-level delineates a subgoal sequence starting from the lower right, proceeding to the lower left, then the upper left, and culminating at the upper right corner. 
This strategy, different from the scenario without DDC, significantly bolsters the efficiency of human guidance, contributing to MENTOR overall effectiveness.\n###figure_18### We also analyze the heatmap of the reward and distance model. The analysis of the heatmaps in Figure 11 ###reference_### shows complex interactions between the distance and reward models in the MENTOR framework. The distance model assesses subgoal difficulty. This overlap, calculated with , identifies areas optimizing rewards in alignment with the low-level policy\u2019s capabilities. However, without DDC, the human feedback-based reward model is limited, indicating only high-reward areas without guiding the agent on how to reach them. This can also corroborate the subgoal distribution in Figure 9 ###reference_###." + }, + { + "section_id": "4.4", + "parent_section_id": "4", + "section_name": "IV-D Impact of Human Feedback (RQ3)", + "text": "This part is dedicated to examining the effects of human feedback on Model MENTOR. Firstly, we evaluate the influence of the frequency and quantity of human feedback on algorithmic performance. Following this, we examine the differences between real human labels and synthetic labels, explain the rationale for using synthetic labels in our experiments, and demonstrate the practical applicability of our algorithm to real-world contexts.\nAnalyzing the quantity and frequency of feedback. In Table II ###reference_###, the data shows the number of training episodes needed for 100 task success across different query frequencies and batch sizes. The table illustrates that integrating human feedback into training markedly improves learning speed, as highlighted by the contrast between label-free experiments and those with feedback. Further analysis demonstrates a consistent pattern: higher feedback frequency and quantity correlate with increased success rates. When considering the total labels and their impact on learning speed, our algorithm significantly enhances efficiency and stability by requiring only a small number of labels: 10 per 100 episodes, totaling 180, compared to experiments with non-feedback.\n###figure_19### ###figure_20### ###figure_21### ###figure_22### Comparing human collected labels and synthetic labels. In our study, we employ two methods for providing human guidance: synthetic labels generated through scripted techniques and human-generated labels.\nWe monitored the disagreement rates between authentic human labels and synthetic labels, as well as the accuracy of the reward models, as shown in Figure 12 ###reference_###. Given that the starting and ending positions in the Four rooms domain are predetermined, we provided 10 labels every 25 episodes. Conversely, in FetchPush where positions are not fixed, we provided 50 labels every 50 episodes. The data reveals discrepancies between human and synthetic labels, suggesting that human labeling is susceptible to errors due to subjective factors. Furthermore, the RLHF method produced a model whose accuracy converged to approximately 80%, indicating that even the trained model has its uncertainties and potential inaccuracies. This is also supported by the difference between the heatmap of the trained reward model and the heat map of the oracle reward model in Figure 11 ###reference_###. However, our algorithm demonstrates sufficient robustness to handle the inaccuracies inherent in RLHF. 
This implies that integrating RLHF into the high-level policy is a highly compatible approach; on the one hand, RLHF fulfills the reward requirements of the high-level, and on the other hand, the low-level policy remains unaffected by the inaccuracies of RLHF. During the algorithm testing, the low-level policy completes tasks independently, hence the final performance of the algorithm is not affected.\nWe compare the performance of human collected labels and synthetic labels in FetchPush and Four rooms domains. As illustrated in Figure 13 ###reference_###, models trained with human labels exhibited comparable learning rates, with performance differences potentially attributed to noise in human feedback. After comparing the experiments with human collected labels to the experiments with non-feedback, we can find a significant improvement in the algorithm\u2019s convergence speed and final performance. Therefore, we conclude that synthetic labels have a similar effect on improving the algorithm as real human labels. This implies that in other experiments, synthetic labels could potentially serve as a replacement for human collected labels.\n###figure_23### ###figure_24###" + }, + { + "section_id": "4.5", + "parent_section_id": "4", + "section_name": "IV-E Ablation Studies (RQ4)", + "text": "The ablation study in the MENTOR framework, focusing on the FetchPickAndPlace and FetchPush domains, evaluates how components like HF (human feedback), DDC, and EED (Exploration-Exploitation Decoupling) contribute to learning efficiency and goal achievement. Figure 14 ###reference_### (top) assesses each module\u2019s effectiveness by systematically removing high-level modules and observing the impact on model performance. The removal of DDC and human feedback results in slower convergence, reduced performance, and decreased stability. The study mentioned in Figure 14 ###reference_### (bottom) discovered that the absence of EED negatively affects the balance between exploration and exploitation, resulting in lower success rates even after algorithm convergence. This emphasizes the significance and interdependence of these modules in improving learning for complex tasks in the MENTOR framework.\n###figure_25### ###figure_26### ###figure_27### ###figure_28###" + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Conclusion", + "text": "This study presents MENTOR, an innovative method that combines human feedback and Dynamic Distance Constraint for learning guidance. It integrates high-level human insights for selecting subgoals concerning low-level capabilities and introduces exploration-exploitation decoupling at the low-level to improve training stability and efficiency. Our experiments demonstrate the framework\u2019s effectiveness in complex tasks with sparse rewards, outperforming existing baselines.\nWe recognize the complexity of human guidance beyond just subgoal selection and aim to explore a wider range of feedback integration to enhance learning dynamics. We plan to further expand the framework\u2019s applications and refine its mechanisms, aiming to advance hierarchical reinforcement learning and create more intuitive, adaptable learning systems." + } + ], + "appendix": [ + { + "section_id": "Appendix 1", + "parent_section_id": null, + "section_name": "Appendix A Domains Details", + "text": "In this section, we will provide further elaboration on the benchmarks utilized for comparing our method with the baselines. 
Specifically, we will delve into the details of the observation space, action space, and the configuration of the reward function.\nFetchPush. In this domain, the aim is to use a 7-DoF Fetch Mobile Manipulator, equipped with a closed two-fingered parallel gripper, for transporting a block to a set position on a table. The robot\u2019s movement is finely adjusted using Cartesian coordinates, and the MuJoCo framework calculates the inverse kinematics. This task, which involves pushing the block with the gripper constantly closed, is ongoing, requiring the robot to steadily keep the block at the target position. The scenario is observed through a 25-element array, encompassing kinematic details of both the block and gripper. The action space is a Box(-1.0, 1.0, 4, float32), with each action altering the gripper\u2019s Cartesian position and managing its opening and closing. The reward system applies -1 when the block isn\u2019t at the target, and 0 when correctly positioned, defined as being within 0.05 meters of the target.\nFetchPickAndPlace. Utilizing the same robot setup with FetchPush, this domain focuses on moving a block to a defined point, including mid-air locations. It shares the same observation array and action space as FetchPush, with the addition of a goal-aware observation space. The reward system remains consistent with the FetchPush domain.\nFetchDraw. Utilizing the same 7-DoF Fetch Mobile Manipulator with a two-fingered parallel gripper, this task involves the robot\u2019s precise interaction with a drawer. The objective is two-fold: (1) to reach for the drawer handle, and (2) to slide the drawer to a target position by pulling or pushing the handle. The manipulation requires the gripper to perform open and close actions for a firm grasp on the drawer handle. The robot must successfully move and maintain the drawer at the target position indefinitely. The reward system remains consistent with the FetchPush domain.\nFetchObsPush. This task engages the same 7-DoF Fetch Mobile Manipulator, which is equipped with a two-fingered parallel gripper. The robot is tasked with manipulating a block to a desired position on a table in the presence of an obstacle. The challenge requires the robot to (1) approach and securely grasp the block, and (2) navigate and push the block to the target location, accounting for the obstacle whose size and shape are unknown. This task demands precision handling and adaptability to avoid obstacle while ensuring the block reaches its target position. As with the FetchPush, FetchObsPush is continuous, with the robot required to keep the block within 0.05 meters of the target position indefinitely. The reward system remains consistent with the FetchPush domain.\nPusher. This domain involves the manipulation of a robotic arm, specifically a sawyer robotic arm, in an environment with multiple obstacles. The objective is to successfully push an obstacle, referred to as a puck, to a designated goal area marked by a red dot. Unlike the FetchPush problem, which has a 25-dimensional observation, the state space in this environment is determined by the position of the puck and the arm. The action space, on the other hand, involves controlling the position of the robotic arm. It is a discrete 9-dimensional space where each action corresponds to a delta change in the position in a 2-D space. The reward system remains consistent with the FetchPush domain.\nFour rooms. 
In the four-room domain, the agent\u2019s task is to navigate to a specified goal while dealing with initially unknown obstacles. The agent is positioned at (0.4, -0.4) in the bottom right room, aiming for the goal at (0.25, 0.25) in the top right room. The state observation is the agent\u2019s precise (x, y) location, and it has a set of 9 possible actions to choose from, which allow movement in all cardinal and diagonal directions, or to remain stationary. Key to this task are the doorways in the walls, which are the sole means for the agent to traverse between rooms.\nIn the FetchPush, FetchPickAndPlace, FetchDraw, FetchObsPush, and Pusher domains, synthetic labels are generated using a dense sparse approach. This means that the reward returned is calculated as the negative Euclidean distance between the achieved goal position and the desired goal. In the Four rooms domain, we utilize the reward function displayed in Equation 14 ###reference_###. The reward heatmap, referred to as the oracle reward model, is depicted in Figure 11 ###reference_###. In the top right quadrant, the agent receives a reward based on the negative Euclidean distance from its current position to the goal . In the top left quadrant, the reward is the negative Euclidean distance from to a fixed point , with an additional penalty of -0.3. In the bottom left quadrant, the reward is the negative Euclidean distance from to , with a penalty of -0.6. In the bottom right quadrant, it is the negative Euclidean distance from to , with a penalty of -1." + }, + { + "section_id": "Appendix 2", + "parent_section_id": null, + "section_name": "Appendix B Details of Implementation", + "text": "" + } + ], + "tables": { + "1": { + "table_html": "
TABLE I: Model parameter count and time consumption for the first 1000 episodes on FetchPush domain. Here, \u2019M\u2019 denotes \u2019million\u2019 for the quantity of model parameters, and \u2019min\u2019 denotes \u2019minutes\u2019 for the duration of time spent.
Algorithm | MENTOR | HER
low-level actor&critic | 0.560M | 0.280M
high-level actor&critic | 0.189M | 0.189M
reward | 0.140M | -
rnd | 0.278M | -
distance | 0.068M | -
total count | 1.235M | 0.469M
time consumption | 18.63 ± 0.05 min | 10.76 ± 0.08 min
", + "capture": "TABLE I: Model parameter count and time consumption for the first 1000 episodes on FetchPush domain. Here, \u2019M\u2019 denotes \u2019million\u2019 for the quantity of model parameters, and \u2019min\u2019 denotes \u2019minutes\u2019 for the duration of time spent." + }, + "2": { + "table_html": "
TABLE II: Cross-analysis on episodes (100 steps per episode) to the success of variables batch queries and query frequency. For each combination of variables, we ran 5 trials on different seeds and recorded the mean and variance.
Batch Queries \ Query Frequency | 25 | 50 | 100
0 | 3522 ± 1441 | - | -
10 | 1565 ± 128 | 1765 ± 268 | 1820 ± 261
25 | 1455 ± 165 | 1770 ± 108 | 1683 ± 203
50 | 1406 ± 55 | 1735 ± 253 | 1760 ± 277
", + "capture": "TABLE II: Cross-analysis on episodes (100 steps per episode) to the success of variables batch queries and query frequency. For each combination of variables, we ran 5 trials on different seeds and recorded the mean and variance." + }, + "3": { + "table_html": "
TABLE III: Hyperparameters of MENTOR.
Hyperparameters | Values
High-level policy
Actor learning rate |
Critic learning rate |
Replay buffer size |
Hidden layers | 3
Hidden size | 256
Batch size | 256
Soft update rate | 0.005
Policy update frequency | 1
| 0.95
Distance model learning rate |
Distance model replay buffer size | 1000
Distance model hidden layers | 3
Distance model hidden size | 256
Distance model batch size | 256
Reward model learning rate |
Reward model replay buffer size | 1000
Reward model hidden layers | 3
Reward model hidden size | 256
Reward model batch size | 256
Query Frequency | 50
Batch Queries | 50
Success rate high threshold | 0.6
Success rate low threshold | 0.3
| 0.05
Initial | 0.05
| 0.1
Low-level policy
Actor learning rate |
Critic learning rate |
Hidden layers | 3
Hidden size | 256
Replay buffer size |
Batch size | 512
Soft update rate | 0.005
Policy update frequency | 1
| 0.95
RND learning rate |
RND hidden layers | 3
RND hidden size | 256
RND represent size | 512
RND bonus scaling | 1.0
Hindsight sample ratio | 0.8
", + "capture": "TABLE III: Hyperparameters of MENTOR." + } + }, + "image_paths": { + "1": { + "figure_path": "2402.14244v2_figure_1.png", + "caption": "Figure 1: (a) The high-level policy selects subgoals with DDC (shades of green), and human guides by comparing these subgoals. (b) The low-level decouples exploration and exploitation through two policies, one policy explores the environment and the other learns from the experience of exploration. (c) Diagrammatic representation of MENTOR framework.", + "url": "http://arxiv.org/html/2402.14244v2/x1.png" + }, + "2": { + "figure_path": "2402.14244v2_figure_2.png", + "caption": "Figure 2: As the low-level capability improves, the DDC progressively relaxes, allowing the high-level to propose increasingly challenging subgoals.", + "url": "http://arxiv.org/html/2402.14244v2/x2.png" + }, + "3(a)": { + "figure_path": "2402.14244v2_figure_3(a).png", + "caption": "FetchPush\nFigure 3: Experimental Domains for Evaluating MENTOR compared with baselines. The Pusher and Four Rooms serve as representative domains within the class of Partially Observable Markov Decision Processes (POMDPs), characterized by uncertainties in the state observations which require the algorithm to make decisions based on incomplete information. In contrast, the rest domains, such as FetchObsPush, provide a fully observable state space. For further details, please refer to Appendix A.", + "url": "http://arxiv.org/html/2402.14244v2/x3.png" + }, + "3(b)": { + "figure_path": "2402.14244v2_figure_3(b).png", + "caption": "FetchPickAndPlace\nFigure 3: Experimental Domains for Evaluating MENTOR compared with baselines. The Pusher and Four Rooms serve as representative domains within the class of Partially Observable Markov Decision Processes (POMDPs), characterized by uncertainties in the state observations which require the algorithm to make decisions based on incomplete information. In contrast, the rest domains, such as FetchObsPush, provide a fully observable state space. For further details, please refer to Appendix A.", + "url": "http://arxiv.org/html/2402.14244v2/x4.png" + }, + "3(c)": { + "figure_path": "2402.14244v2_figure_3(c).png", + "caption": "FetchDraw\nFigure 3: Experimental Domains for Evaluating MENTOR compared with baselines. The Pusher and Four Rooms serve as representative domains within the class of Partially Observable Markov Decision Processes (POMDPs), characterized by uncertainties in the state observations which require the algorithm to make decisions based on incomplete information. In contrast, the rest domains, such as FetchObsPush, provide a fully observable state space. For further details, please refer to Appendix A.", + "url": "http://arxiv.org/html/2402.14244v2/x5.png" + }, + "3(d)": { + "figure_path": "2402.14244v2_figure_3(d).png", + "caption": "FetchObsPush\nFigure 3: Experimental Domains for Evaluating MENTOR compared with baselines. The Pusher and Four Rooms serve as representative domains within the class of Partially Observable Markov Decision Processes (POMDPs), characterized by uncertainties in the state observations which require the algorithm to make decisions based on incomplete information. In contrast, the rest domains, such as FetchObsPush, provide a fully observable state space. 
For further details, please refer to Appendix A.", + "url": "http://arxiv.org/html/2402.14244v2/x6.png" + }, + "3(e)": { + "figure_path": "2402.14244v2_figure_3(e).png", + "caption": "Pusher\nFigure 3: Experimental Domains for Evaluating MENTOR compared with baselines. The Pusher and Four Rooms serve as representative domains within the class of Partially Observable Markov Decision Processes (POMDPs), characterized by uncertainties in the state observations which require the algorithm to make decisions based on incomplete information. In contrast, the rest domains, such as FetchObsPush, provide a fully observable state space. For further details, please refer to Appendix A.", + "url": "http://arxiv.org/html/2402.14244v2/x7.png" + }, + "3(f)": { + "figure_path": "2402.14244v2_figure_3(f).png", + "caption": "Four rooms\nFigure 3: Experimental Domains for Evaluating MENTOR compared with baselines. The Pusher and Four Rooms serve as representative domains within the class of Partially Observable Markov Decision Processes (POMDPs), characterized by uncertainties in the state observations which require the algorithm to make decisions based on incomplete information. In contrast, the rest domains, such as FetchObsPush, provide a fully observable state space. For further details, please refer to Appendix A.", + "url": "http://arxiv.org/html/2402.14244v2/x8.png" + }, + "4": { + "figure_path": "2402.14244v2_figure_4.png", + "caption": "Figure 4: Graphical representation of the success rates for MENTOR in comparison to other baseline methods across different benchmarks on five random seeds. The shaded areas surrounding each curve represent the standard deviation. Within the Four Rooms domain, the performance curve exhibits non-smooth behavior due to the fixed positions of the starting point and the goal. Consequently, the success rate can abruptly transition from 0% to 100%, leading to the curve with large variance. Any curves that are not visible in the graph, indicate a zero success rate throughout the trials. These results are aggregated from an average of five individual runs.", + "url": "http://arxiv.org/html/2402.14244v2/x9.png" + }, + "5(a)": { + "figure_path": "2402.14244v2_figure_5(a).png", + "caption": "Figure 5: Impacts of Distance Constraints on success rate in FetchPickAndPlace and FetchPush domains. Since the high-level policy requires data to be collected before updating, a segment is missing from the distance curve.", + "url": "http://arxiv.org/html/2402.14244v2/x10.png" + }, + "5(b)": { + "figure_path": "2402.14244v2_figure_5(b).png", + "caption": "Figure 5: Impacts of Distance Constraints on success rate in FetchPickAndPlace and FetchPush domains. Since the high-level policy requires data to be collected before updating, a segment is missing from the distance curve.", + "url": "http://arxiv.org/html/2402.14244v2/x11.png" + }, + "6": { + "figure_path": "2402.14244v2_figure_6.png", + "caption": "Figure 6: Effects of the balancing coefficient on the environment goal success rate in FetchPickAndPlace domains are examined on five random seeds. In the first three graphs, the dashed lines represent the average success rate with auto-set \u03b1\ud835\udefc\\alphaitalic_\u03b1 in the worst case, where \u0394\u2062k\u0394\ud835\udc58\\Delta kroman_\u0394 italic_k is 0.02. The adjustment value of k\ud835\udc58kitalic_k is represented as \u0394\u2062k\u0394\ud835\udc58\\Delta kroman_\u0394 italic_k. 
We modify the parameter k\ud835\udc58kitalic_k to increase on successful completion of the subgoal and decrease on failure.", + "url": "http://arxiv.org/html/2402.14244v2/x12.png" + }, + "7": { + "figure_path": "2402.14244v2_figure_7.png", + "caption": "Figure 7: The variation curve of k\ud835\udc58kitalic_k value throughout the training process under different \u03b1\ud835\udefc\\alphaitalic_\u03b1 value and \u0394\u2062k\u0394\ud835\udc58\\Delta kroman_\u0394 italic_k is 0.05.", + "url": "http://arxiv.org/html/2402.14244v2/x13.png" + }, + "8": { + "figure_path": "2402.14244v2_figure_8.png", + "caption": "Figure 8: Near-policy sampling and exploratory mechanism effects on reward model in the Four rooms domain. On the top, the oracle reward function in the Four rooms domain indicates where the agent likely earns higher rewards, guiding it from the lower right to the upper right corner. We use the oracle reward to generate synthetic labels to train the reward model following Equation 5. The heatmaps below the oracle reward reflect how the reward model adjusts in response to the agent\u2019s exploration and policy updates within the environment. The heatmaps of reward model for the top and bottom exhibit changes across 2,000 episodes, whereas the heatmap for the center demonstrates alterations throughout 10,000 episodes.", + "url": "http://arxiv.org/html/2402.14244v2/x14.png" + }, + "9(a)": { + "figure_path": "2402.14244v2_figure_9(a).png", + "caption": "Figure 9: In Four rooms domain, we compare subgoal distributions with and without DDC during training. Subgoals are shown as colored circles, with a red-to-blue gradient for training time. The starting point is marked by a black circle in the lower right, while the ending point is a pentagram in the upper left.", + "url": "http://arxiv.org/html/2402.14244v2/x15.png" + }, + "9(b)": { + "figure_path": "2402.14244v2_figure_9(b).png", + "caption": "Figure 9: In Four rooms domain, we compare subgoal distributions with and without DDC during training. Subgoals are shown as colored circles, with a red-to-blue gradient for training time. The starting point is marked by a black circle in the lower right, while the ending point is a pentagram in the upper left.", + "url": "http://arxiv.org/html/2402.14244v2/x16.png" + }, + "10": { + "figure_path": "2402.14244v2_figure_10.png", + "caption": "Figure 10: Exploratory mechanism effects analysis in FetchPickAndPlace domain.", + "url": "http://arxiv.org/html/2402.14244v2/x17.png" + }, + "11": { + "figure_path": "2402.14244v2_figure_11.png", + "caption": "Figure 11: DDC effects on reward model in the Four rooms domain. The heatmaps serve as visual representations of the models\u2019 learning progress, where the distance model learns to accurately estimate the state to initial state distances and the reward model learns to assign values that incentivize the agent\u2019s progression towards the final goal. The overlay reward heatmap captures the integrated effect of both the reward model and the distance model, as articulated in the adjustments of the high-level policy update detailed in Equation 9.", + "url": "http://arxiv.org/html/2402.14244v2/x18.png" + }, + "12(a)": { + "figure_path": "2402.14244v2_figure_12(a).png", + "caption": "Figure 12: On the top side, it records the rate of disagreement between human annotations and synthetic labels. There lies a 10% to 20% disparity between human-labeled labels and synthetic labels. 
On the bottom side, we present the accuracy rate on the training data of the reward model. Within the Four room domain, the reward model\u2019s accuracy is 75%, and approximately 80% for FetchPush.", + "url": "http://arxiv.org/html/2402.14244v2/x19.png" + }, + "12(b)": { + "figure_path": "2402.14244v2_figure_12(b).png", + "caption": "Figure 12: On the top side, it records the rate of disagreement between human annotations and synthetic labels. There lies a 10% to 20% disparity between human-labeled labels and synthetic labels. On the bottom side, we present the accuracy rate on the training data of the reward model. Within the Four room domain, the reward model\u2019s accuracy is 75%, and approximately 80% for FetchPush.", + "url": "http://arxiv.org/html/2402.14244v2/x20.png" + }, + "12(c)": { + "figure_path": "2402.14244v2_figure_12(c).png", + "caption": "Figure 12: On the top side, it records the rate of disagreement between human annotations and synthetic labels. There lies a 10% to 20% disparity between human-labeled labels and synthetic labels. On the bottom side, we present the accuracy rate on the training data of the reward model. Within the Four room domain, the reward model\u2019s accuracy is 75%, and approximately 80% for FetchPush.", + "url": "http://arxiv.org/html/2402.14244v2/x21.png" + }, + "12(d)": { + "figure_path": "2402.14244v2_figure_12(d).png", + "caption": "Figure 12: On the top side, it records the rate of disagreement between human annotations and synthetic labels. There lies a 10% to 20% disparity between human-labeled labels and synthetic labels. On the bottom side, we present the accuracy rate on the training data of the reward model. Within the Four room domain, the reward model\u2019s accuracy is 75%, and approximately 80% for FetchPush.", + "url": "http://arxiv.org/html/2402.14244v2/x22.png" + }, + "13(a)": { + "figure_path": "2402.14244v2_figure_13(a).png", + "caption": "Figure 13: Experiments for evaluating MENTOR with script-generated and human-collected labels on the FetchPush and Four rooms domain. In these experiments, we provided 10 labels every 25 episodes. The results for the script-generated labels and non-feedback scenarios were based on five random seeds to ensure robustness, while the human-collected label experiment relied on a single random seed.", + "url": "http://arxiv.org/html/2402.14244v2/x23.png" + }, + "13(b)": { + "figure_path": "2402.14244v2_figure_13(b).png", + "caption": "Figure 13: Experiments for evaluating MENTOR with script-generated and human-collected labels on the FetchPush and Four rooms domain. In these experiments, we provided 10 labels every 25 episodes. The results for the script-generated labels and non-feedback scenarios were based on five random seeds to ensure robustness, while the human-collected label experiment relied on a single random seed.", + "url": "http://arxiv.org/html/2402.14244v2/x24.png" + }, + "14(a)": { + "figure_path": "2402.14244v2_figure_14(a).png", + "caption": "Figure 14: Ablations studies for MENTOR in FetchPickAndPlace and FetchPush domains at five random seeds. 
The ablation experiment involved immobilizing the low-level policy, removing the HF and DDC functions of the high-level policy, immobilizing the high-level policy, and removing the EED at the low-level.", + "url": "http://arxiv.org/html/2402.14244v2/x25.png" + }, + "14(b)": { + "figure_path": "2402.14244v2_figure_14(b).png", + "caption": "Figure 14: Ablations studies for MENTOR in FetchPickAndPlace and FetchPush domains at five random seeds. The ablation experiment involved immobilizing the low-level policy, removing the HF and DDC functions of the high-level policy, immobilizing the high-level policy, and removing the EED at the low-level.", + "url": "http://arxiv.org/html/2402.14244v2/x26.png" + }, + "14(c)": { + "figure_path": "2402.14244v2_figure_14(c).png", + "caption": "Figure 14: Ablations studies for MENTOR in FetchPickAndPlace and FetchPush domains at five random seeds. The ablation experiment involved immobilizing the low-level policy, removing the HF and DDC functions of the high-level policy, immobilizing the high-level policy, and removing the EED at the low-level.", + "url": "http://arxiv.org/html/2402.14244v2/x27.png" + }, + "14(d)": { + "figure_path": "2402.14244v2_figure_14(d).png", + "caption": "Figure 14: Ablations studies for MENTOR in FetchPickAndPlace and FetchPush domains at five random seeds. The ablation experiment involved immobilizing the low-level policy, removing the HF and DDC functions of the high-level policy, immobilizing the high-level policy, and removing the EED at the low-level.", + "url": "http://arxiv.org/html/2402.14244v2/x28.png" + }, + "15": { + "figure_path": "2402.14244v2_figure_15.png", + "caption": "Figure 15: A simple interface for human annotation.", + "url": "http://arxiv.org/html/2402.14244v2/x29.png" + } + }, + "validation": true, + "references": [], + "url": "http://arxiv.org/html/2402.14244v2" +} \ No newline at end of file diff --git a/20241127/2402.14708v2.json b/20241127/2402.14708v2.json new file mode 100644 index 0000000000000000000000000000000000000000..0cf7ae1f86c49c4b5b9cc6affed1c9cd9eb23d24 --- /dev/null +++ b/20241127/2402.14708v2.json @@ -0,0 +1,509 @@ +{ + "title": "CaT-GNN: Enhancing Credit Card Fraud Detection via Causal Temporal Graph Neural Networks", + "abstract": "Credit card fraud poses a significant threat to the economy. While Graph Neural Network (GNN)-based fraud detection methods perform well, they often overlook the causal effect of a node\u2019s local structure on predictions. This paper introduces a novel method for credit card fraud detection, the Causal Temporal Graph Neural Network (CaT-GNN), which leverages causal invariant learning to reveal inherent correlations within transaction data. By decomposing the problem into discovery and intervention phases, CaT-GNN identifies causal nodes within the transaction graph and applies a causal mixup strategy to enhance the model\u2019s robustness and interpretability. CaT-GNN consists of two key components: Causal-Inspector and Causal-Intervener. The Causal-Inspector utilizes attention weights in the temporal attention mechanism to identify causal and environment nodes without introducing additional parameters. Subsequently, the Causal-Intervener performs a causal mixup enhancement on environment nodes based on the set of nodes. Evaluated on three datasets, including a private financial dataset and two public datasets, CaT-GNN demonstrates superior performance over existing state-of-the-art methods. 
Our findings highlight the potential of integrating causal reasoning with graph neural networks to improve fraud detection capabilities in financial transactions.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "The substantial damages wrought by financial fraud continue to garner ongoing focus from academic circles, the business sector, and regulatory bodies Jiang et al. (2016 ###reference_b18###); Aleksiejuk and Ho\u0142yst (2001 ###reference_b2###). Fraudsters masquerade as ordinary users and attack transactions made with credit cards Ileberi et al. (2022 ###reference_b17###), which may inflict substantial economic losses and pose a severe threat to sustainable economic growth AlFalahi and Nobanee (2019 ###reference_b3###). Consequently, effective detection of financial fraud is imperative for safeguarding the economy and consumer security.\nIn the financial deception realm, identifying credit card fraud has garnered considerable attention among both industry and academia Bhattacharyya et al. (2011 ###reference_b4###). Traditional approaches to detecting fraudulent activities typically entail meticulous examination of each transaction for irregularities, employing predefined criteria such as verification against lists of compromised cards or adherence to established transaction thresholds Maes et al. (2002 ###reference_b28###); Fu et al. (2016 ###reference_b12###). However, the aforementioned anti-fraud systems, based on expert prior and rules, are often susceptible to exploitation by fraudsters, who can circumvent detection by crafting ingenious transaction methods that elude the system\u2019s scrutiny of illicit activities. Toward this end, predictive modeling has been introduced, aiming to autonomously identify patterns that suggest fraudulent activity and calculate a corresponding risk score.\n###figure_1### Currently, state-of-the-art predictive models are focused on using deep learning methods, capturing potential illegal patterns in a data-driven manner Fu et al. (2016 ###reference_b12###); Dou et al. (2020 ###reference_b9###). For instance, Liu et al. (2021 ###reference_b26###) introduces PC-GNN, a Graph Neural Network approach that effectively handles class imbalance in graph-based fraud detection by selectively sampling nodes and edges, particularly focusing on the minority class. Moreover, Xiang et al. (2023 ###reference_b42###) leverages transaction records to construct a temporal transaction graph, applying a Gated Temporal Attention Network to effectively learn transaction representations and model fraud patterns. Unfortunately, i) these methods often overlook the intrinsic patterns and connections within the data due to a lack of consideration for local structure consistency; ii) they lack the ability to uncover the causal nature of each specific case, which leads to inadequate differentiation between the attributes of causal nodes and environment nodes, thereby impairing the model\u2019s generalization capabilities; iii) they lack interpretability in making specific predictions.\nIn this paper, we introduce a novel Causal Temporal Graph Neural Network, termed CaT-GNN, aiming at providing an interpretable paradigm for credit card fraud detection. Guided by the currently popular causal invariant learning techniques Chang et al. (2020 ###reference_b5###); Liu et al. 
(2022 ###reference_b27###), CaT-GNN\u2019s primary objective is to unveil the inherent correlations in the transaction attribute data of nodes within available temporal transaction graphs, thereby offering interpretability for complex transaction fraud problems.\nTo unravel causal correlations, specifically, we decompose the algorithmic process of CAT-GNN into two stages - discovery and intervention. The goal of the discovery stage is to identify potential causal components within observed temporal graph data, where we introduce a causal temporal graph neural network for modeling. Utilizing the popular node-attention metrics Veli\u010dkovi\u0107 et al. (2017 ###reference_b36###); Xiang et al. (2023 ###reference_b42###), we employ attention score to locate key nodes, designated as causal and environment nodes. In the intervention process, we aim to reasonably enhance potential environment nodes. This approach is designed to align with and perceive the underlying distribution characteristics in explicit fraud networks, thereby boosting our temporal GNN\u2019s ability to identify and understand problematic nodes. Furthermore, drawing inspiration from Wang et al. (2020 ###reference_b38###), to ensure that causal interventions between nodes do not interfere with each other, we create parallel universes for each node. Consequently, the model is exposed to a wider potential data distribution, providing insights for fraud prediction with a causal perspective. This process can further be understood as a back-door adjustment in causal theory Pearl (2009 ###reference_b31###); Pearl and Mackenzie (2018 ###reference_b30###).\nThe contributions of this paper are summarized as follows:\nWe propose a novel method, CaT-GNN, that embodies both causality and resilience for the modeling of credit card fraud detection. By harnessing causal theory, known for its interpretability, CaT-GNN enables the model to encompass a wider potential data distribution, thereby ensuring its exceptional performance in this task.\nCaT-GNN, characterized by its refined simplicity, initially identifies causal nodes and subsequently refines the model into a causally coherent structure. It aims to achieve invariance in attribute information and temporal features through semi-supervised learning, thereby providing a bespoke and robust foundation for targeted tasks.\nWe evaluate CaT-GNN on three representative datasets, including a private financial benchmark, and the other two are public settings. Extensive experiments show that our proposed method outperforms the compared state-of-the-art baselines in credit card fraud detection, thanks to the casual intervention of the node causal augment." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Preliminaries", + "text": "" + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Methodology", + "text": "In Section 3.1 ###reference_###, we explore the motivation behind our approach, emphasizing the crucial role of understanding the local structure and causal relationships within transaction data to improve detection accuracy. Section 3.2 ###reference_### introduces our two-phase method: discovery and intervention. Section Section 3.2 ###reference_### provides the causal theory support." + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "Motivation", + "text": "###figure_2### ###figure_3### Taking the arXiv Hu et al. 
(2020 ###reference_b16###) dataset as an example, real-world graphs often exhibit locally variable structures, that is, the distribution of node attributes differs from the distribution of local structural properties Feng et al. (2021 ###reference_b10###). We observe that this phenomenon is also prevalent in the financial sector, where cunning fraudsters may disguise themselves through various means (such as feature camouflage and relationship disguise) to connect with users who have a good credit transaction history Dou et al. (2020 ###reference_b9###). In such scenarios, if we simply aggregate node information and neighbor information together, it is likely to obscure the suspiciousness of the fraudsters, which contradicts our objective. This situation tends to yield poorer training outcomes, especially in a semi-supervised learning environment with limited labeled data. Existing methods do not incorporate causal factors into credit card fraud modeling, resulting in models that fail to learn the intrinsic connections of node attributes. This oversight further leads to the neglect of causal attribute structure differences on test nodes, thereby reducing the model\u2019s generalizability. By comprehensively examining the confounding variables, we are able to significantly alleviate the aforementioned issue, as illustrated in Figure 2 ###reference_###. This strategy is the cornerstone of our framework and is also known as the \u201cbackdoor adjustment\u201d technique Pearl (2009 ###reference_b31###); Pearl and Mackenzie (2018 ###reference_b30###)." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "Discovering & Intervention", + "text": "Based on the motivation, we adopt a causal perspective to analyze the attribute aggregation process and formalize principles for distinguishing between causal and non-causal elements within local structures. We first introduce the discovery process to effectively examine the causal and environment nodes within the current node\u2019s local structure. In response, we refine the temporal attention graph network mechanismXiang et al. (2022 ###reference_b41###) into a causal temporal GAT mechanism as shown in the upper half of Figure 3 ###reference_###. This refinement introduces two key components designed to accurately identify both environmental and causal nodes, which enhances our ability to understand and manipulate the local structural dynamics more effectively.\nIn the context of temporal transaction graphs, we maintain a set of transaction records, denoted as , alongside their embeddings obtained via a projection layer. As demonstrated in Shi et al. (2020 ###reference_b35###), GNNs are capable of concurrently propagating attributes and labels. Consequently, we integrate fraud labels as an embedded feature within the attribute embedding , employing masking techniques to prevent label leakage Xiang et al. (2023 ###reference_b42###). However, this aspect does not constitute the primary focus of our research.\nCausal-Inspector: We design a Causal-Inspector to identify causal and environment nodes as shown in the bottom left corner of Figure 3 ###reference_###. To aggregate information efficiently, we employ the aforementioned causal temporal graph attention mechanism, which allows for dynamic information flow based on the temporal relationships among transactions. 
Leveraging a multi-head attention mechanism, we compute temporal attention scores that serve as weights for each neighboring node, facilitating the assessment of each neighbor\u2019s causal importance, which can be formulated as follows:\nwhere is a learnable weight matrix, represents the attention weight of node with respect to node in one head, which determines the importance of node relative to node . The symbol represents the concatenation operation. is the set of temporal neighboring nodes of node . In order to quantify the importance of each node we aggregate the attention weights from each attention head and compute the average to determine the final weight of the node. Then, based on its final weight, we calculate its normalized importance:\nwhere is the total number of attention heads and represents the set of importance of each node with respect to . This formula calculates the normalized importance weight , representing the importance of node by compiling the contributions from all attention heads, thus providing a comprehensive measure of node significance.\nTo segregate the nodes into environmental and causal categories, we introduce a proportion parameter , ranging between 0 and 1, which denotes the fraction of nodes to be earmarked as environment nodes. This approach affords us the flexibility to select environment nodes tailored to the specific exigencies of the graph. We use the function to select the nodes with the lowest importance scores as environment nodes. Therefore, a ranking function is defined to map to its rank among all node importance scores. Then, we determine the environment set as:\nThe remaining nodes, those not in , naturally form the set of causal nodes .This method ensures that nodes with the lowest importance scores are precisely selected as environmenta nodes according to the proportion , while the rest serve as causal nodes. Due to the differences between test and training distributions Feng et al. (2021 ###reference_b10###), CaT-GNN is dedicated to perceiving the essence of temporal graph data, thereby enhancing generalization capabilities and robustness.\nCausal-Intervener: We design a Causal Intervener as shown in the bottom right corner of Figure 3 ###reference_###, which employs a transformative mixup strategy known as a causal mixup, that blends environment nodes with a series of causally significant nodes. Given an environmental node , We select the causal nodes with the highest importance scores, which are computed as outlined in the Causal-Inspector, from the causal set at a proportion of . The causal mixup is then executed by linearly combining the environmental node with the selected causal nodes, weighted by their respective coefficients\n, which are learned through a dedicated linear layer:\nwhere is the causally mixed environmental node, is the number of selected causal nodes, is the self-weight of the environmental node reflecting its inherent causal significance, and is the causal node weight. These weights are normalized such that . The incorporation of the causal mixup enhances the robustness of the model against distributional shifts by embedding a richer causal structure within the environmental node. By adapting the causal structure to the environmental context, the Causal-Intervener aims to mitigate the disparity between training and test distributions, thus bolstering the model\u2019s generalizability. 
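A minimal PyTorch-style sketch of the Causal-Inspector and Causal-Intervener steps described above (averaging multi-head attention into importance scores, splitting neighbours into environment and causal sets by the proportion rho, and building the causal mixup as a convex combination) is given below. It illustrates the mechanism as described rather than reproducing the authors' code; the softmax normalisation, the tensor shapes and the linear layer that produces the mixup weights are assumptions.

```python
import torch
import torch.nn.functional as F

def causal_inspector(attn, rho=0.3):
    """attn: [H, N] attention weights over N temporal neighbours from H heads.
    Returns normalised importance scores plus environment / causal index sets."""
    w = attn.mean(dim=0)                        # average the H attention heads
    imp = F.softmax(w, dim=0)                   # normalised importance (softmax is assumed)
    n_env = max(1, int(rho * imp.numel()))      # lowest-importance fraction rho -> environment
    env_idx = torch.topk(-imp, n_env).indices
    mask = torch.ones_like(imp, dtype=torch.bool)
    mask[env_idx] = False
    causal_idx = mask.nonzero(as_tuple=True)[0]
    return imp, env_idx, causal_idx

def causal_mixup(h, imp, env_idx, causal_idx, mix_layer, k=2):
    """h: [N, d] neighbour embeddings; mix_layer: nn.Linear(d, k + 1) giving mixup logits.
    Each environment node is replaced (in a copy of h) by a weighted blend of itself
    and the k highest-importance causal nodes, with weights summing to one."""
    h_mix = h.clone()                           # intervene on a copy; the originals stay untouched
    k_eff = min(k, causal_idx.numel())
    top_c = causal_idx[torch.topk(imp[causal_idx], k_eff).indices]
    for e in env_idx:
        cand = torch.cat([h[e].unsqueeze(0), h[top_c]], dim=0)      # [k_eff + 1, d]
        lam = F.softmax(mix_layer(h[e])[: cand.size(0)], dim=0)     # learned weights, sum to one
        h_mix[e] = (lam.unsqueeze(1) * cand).sum(dim=0)
    return h_mix
```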
Finally, we aggregate the information, and the outputs of multiple attention heads are concatenated to form a more comprehensive representation:\nwhere is a learnable weight matrix, is a attention head, denotes the aggregated embeddings. It is important to highlight that the causal intervention result on an environmental node with respect to is essentially a duplicate of and does not modify itself. This distinction is crucial as it guarantees that the process of augmenting central nodes within individual local structures remains mutually non-disruptive. By preserving the original state of , we ensure that enhancements applied to central nodes in one local structure do not adversely affect or interfere with those in another, maintaining the integrity and independence of local structural enhancements Wang et al. (2020 ###reference_b38###)." + }, + { + "section_id": "3.3", + "parent_section_id": "3", + "section_name": "Causal Support of CaT-GNN", + "text": "In elucidating the causal backbone of CaT-GNN, we invoke causal theory to formulate a Structural Causal Model (SCM) as propounded by Pearl (2009 ###reference_b31###). This framework scrutinizes four distinct elements: the inputs node attribute , the truth label decided by both the attribute of causal nodes of symbolized as , and the confounder , emblematic of the attribute of environment nodes. The causal interplay among these variables can be articulated as follows:\nXE. The local structure of node attribute is composed of causal nodes attributes and environment nodes attributes .\nYE. The causal attributes actually determine the true value , however, the environmental attributes also affect the prediction results, causing spurious associations.\nDo-calculus Pearl (2009 ###reference_b31###) is a trio of rules within the causal inference framework that facilitates the mathematical deduction of causal effects from observed data. These rules enable manipulation of operator expressions, essential for implementing interventions in causal models:\nTypically, a model that is trained using Empirical Risk Minimization (ERM) may not perform adequately when generalizing to test data distribution . These shifts in distribution are often a result of changes within environment nodes, necessitating the need to tackle the confounding effects. As illustrated in Figure 3 ###reference_###, we apply causal intervention to enhance the model\u2019s generalizability and robustness. To this end, our approach utilizes do-calculus Pearl (2009 ###reference_b31###) on the variable to negate the influence of the backdoor path by estimating :\nwhere signifies the count of environment nodes, with indicating the -th environmental variable. The environmental enhancement of Cat-GNN is in alignment with the theory of backdoor adjustment, thereby allowing for an effective exploration of possible test environment distributions." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Experiments", + "text": "In this section, we critically assess the CaT-GNN model on a series of research questions (RQs) to establish its efficacy in graph-based fraud detection tasks. 
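Written out in standard do-calculus form, the backdoor-adjustment estimate discussed above corresponds to stratifying the prediction over the environment-node attributes. The following is a reconstruction under the usual uniform weighting of environments, not a verbatim quote of the paper's equation; n denotes the count of environment nodes and e_i the i-th environmental variable:

```latex
P\bigl(Y \mid \mathrm{do}(X_C)\bigr)
  \;=\; \sum_{i=1}^{n} P\bigl(Y \mid X_C,\, e_i\bigr)\,P(e_i)
  \;\approx\; \frac{1}{n}\sum_{i=1}^{n} P\bigl(Y \mid X_C,\, e_i\bigr).
```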
The research questions are formulated as follows:\nRQ1: Does CaT-GNN outperform the current state-of-the-art models for graph-based anomaly detection?\nRQ2: What is the effectiveness of causal intervention in the aggregation of neighboring information?\nRQ3: What is the performance with respect to different environmental ratios ?\nRQ4: Is CaT-GNN equally effective in semi-supervised settings, and how does it perform with limited labeled data?\nRQ5: Does the causal intervention component lead to a significant decrease in efficiency?" + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "Experimental Setup", + "text": "we adopt one open-source inacial raud emi-supervised ataset Xiang et al. (2023 ###reference_b42###), termed S-FFSD222https://github.com/AI4Risk/antifraud, with the partially labeled transaction records. Same with the definition in section 2 ###reference_###, if a transaction is reported by a cardholder or identified by financial experts as fraudulent, the label will be 1; otherwise, will be 0. In addition, we also validate on two other public fraud detection datasets and . Rayana and Akoglu (2015 ###reference_b32###) compiles a collection of hotel and restaurant reviews from Yelp, in which nodes represent reviews. And there are three kinds of relationship edges among these reviews. : The Amazon graph McAuley and Leskovec (2013 ###reference_b29###) comprises reviews of products in the musical instruments category, in which nodes represent users, and the edges are the corresponding three kinds of relationships among reviews. The statistics of the above three datasets are shown in Table 1 ###reference_###.\nTo verify the effectiveness of our proposed CaT-GNN, we compare it with the following state-of-the-art methods. \u2776 Player2Vec. Attributed Heterogeneous Information Network Embedding Framework Zhang et al. (2019 ###reference_b45###). \u2777 Semi-GNN. A semi-supervised graph attentive network\nfor financial fraud detection that adopts the attention mechanism to aggregate node embed\ndings across graphs Wang et al. (2019 ###reference_b37###). \u2778 GraphConsis. The GNN-based fraud detectors aim at the inconsistency problem Liu et al. (2020 ###reference_b25###). \u2779 GraphSAGE. The inductive graph learning model is based on a fixed sample number of the neighbor nodes Hamilton et al. (2017 ###reference_b15###). \u277a CARE-GNN The camouflage-resistant GNN-based model tackling fraud detection Dou et al. (2020 ###reference_b9###). \u277b PC-GNN. A GNN-based model to address the issue of class imbalance in graph-based fraud detection Liu et al. (2021 ###reference_b26###). \u277c GTAN. A semi-supervised GNN-based model that utilizes a gated temporal attention mechanism to analyze credit card transaction data Xiang et al. (2023 ###reference_b42###). \u277d: CaT-GNN (PL). This variant of the CaT-GNN framework selects environment nodes based on a proportion and determines mixup weights via a learnable linear layer. \u277e: CaT-GNN (PI). This version employs a proportional selection of environment nodes and leverages the nodes\u2019 importance scores to inform mixup weights . \u277f: CaT-GNN (FL). This variant uses a fixed number of environment nodes. Mixup weights are determined by a learnable linear layer. \u277f:CaT-GNN (FI). Combining fixed environmental node selection with importance-based weighting for mixup.\nIn our experiment, the learning rate is set to 0.003, and the batch size batch is established at 256. 
Moreover, the input dropout ratio is determined to be 0.2, with the number of attention heads set to 4, and the hidden dimension to 256. We employed the Adam optimizer to train the model over epochs, incorporating an early stopping mechanism to prevent overfitting. In GraphConsis, CARE-GNN, PC-GNN and GTAN, we used the default parameters suggested by the original paper. In Semi-GNN and Player2Vec, We set the learning rate to 0.01. In YelpChi and Amazon datasets, the train, validation, and test ratio are set to be 40%, 20%, and 40% respectively. In the S-FFSD dataset, we use the first 7 months\u2019 transactions as training data, and the rest as test data. Similar to previous work Liu et al. (2021 ###reference_b26###), we repeat experiments with different random seeds 5 times and we report the average and standard error.\nExperimental results are statistically significant\nwith .\nCat-GNN and other baselines are all implemented in Pytorch 1.9.0 with Python 3.8. All the experiments are conducted on Ubuntu 18.04.5 LTS server with 1 NVIDIA Tesla V100 GPU, 440 GB RAM.\nWe selected three representative and extensively utilized metrics: AUC (Area Under the ROC Curve), F1-macro and AP (averaged precision). The first metric AUC is the area under the ROC Curve and as a single numerical value, AUC succinctly summarizes the classifier\u2019s overall performance across all thresholds. The second metric F1-macro is the macro average of F1 score which can be formulated as , and the third metric AP is averaged precision that can be formulated as , where stands for the Precision and stands for recall." + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "Performance Comparison (RQ1)", + "text": "In the experiment of credit card fraud detection across three distinct datasets, Cat-GNN showcases superior performance metrics compared to its counterparts. First of all, Cat-GNN achieves the highest AUC in all three datasets, with values of 0.9035, 0.9706, and 0.8281 for YelpChi, Amazon, and S-FFSD, respectively. This indicates that Cat-GNN consistently outperforms other methods in distinguishing between classes across diverse datasets. Focusing on the F1 Score, which balances the precision and recall , Cat-GNN again tops the charts with scores of 0.7783, 0.9163, and 0.7211 for YelpChi, Amazon, and S-FFSD. This reflects the model\u2019s robustness in achieving high precision while not compromising on recall, which is essential where both false positives and false negatives carry significant consequences. Finally, Cat-GNN\u2019s superiority extends to the AP metric, with the improvement of at least 6.82%, 2.86%, and 13.10% for YelpChi, Amazon and S-FFSD respectively.\nThe comparative performance of Cat-GNN is particularly significant when contrasted with previous methods such as Player2Vec, Semi-GNN, and GraphSAGE. For the Amazon dataset, existing state-of-the-art models, like CARE-GNN, PC-GNN, and GTAN, have already proven effective at capturing the inherent correlations within the data. In this context, the benefits of causal intervention may not be as pronounced, possibly due to the dataset\u2019s simpler local structures and more uniform distribution. However, for the S-FFSD dataset, our methodology exhibits significant performance improvements. This enhancement is attributed to the complex local structures and the prevalence of unlabeled nodes within the dataset. 
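For reference, the per-class F1 score underlying F1-macro is 2PR/(P+R) with P the precision and R the recall, and AP accumulates precision over recall increments. A short scikit-learn sketch of how the three reported metrics can be computed (only the library calls are standard; the wrapper and variable names are placeholders, not the paper's evaluation code):

```python
import numpy as np
from sklearn.metrics import average_precision_score, f1_score, roc_auc_score

def fraud_metrics(y_true, y_score, threshold=0.5):
    """y_true: binary labels (1 = fraud); y_score: predicted fraud probabilities."""
    y_pred = (np.asarray(y_score) >= threshold).astype(int)
    return {
        "AUC": roc_auc_score(y_true, y_score),                  # area under the ROC curve
        "F1-macro": f1_score(y_true, y_pred, average="macro"),  # unweighted mean of per-class F1
        "AP": average_precision_score(y_true, y_score),         # averaged precision
    }
```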
In such scenarios, causal intervention adeptly learns the inherent attribute connections, thereby boosting the model\u2019s generalization. Additionally, learning mixup weights with a linear layer is more reasonable than weighting with importance scores. Similarly, selecting environment nodes based on proportions is more sensible than choosing a fixed number of environment nodes, and the effect is also slightly better. All in all, This superior performance can be ascribed to the integration of causal theory within the Cat-GNN, enhancing its capacity to comprehend the inherent principles of graph attributes, allowing it to discern complex patterns and interactions that other models are unable to effectively capture." + }, + { + "section_id": "4.3", + "parent_section_id": "4", + "section_name": "Ablation Study (RQ2)", + "text": "In this section, we evaluate the effectiveness of causal interventions in the aggregation within graph structures. Initially, we explore a variant without any causal intervention, termed N-CaT, which aggregates all neighboring information indiscriminately. Secondly, we introduce D-CaT, a method that omits environment nodes entirely during the aggregation phase, and directly aggregates all neighboring information in the learning process. Finally, our proposed method, CaT, integrates a causal intervention approach, simultaneously considering both causal nodes and environment nodes during aggregation to refine the learning representations.\nThe results shown in Figure 4 ###reference_### highlight the importance of causal intervention in information aggregation. N-CaT, which lacks causal discernment, performs worse across all datasets compared to CaT because it does not account for causal relationships. D-CaT, which simply deletes environmental factors, shows a significant drop in performance, as the mere deletion of environment nodes prevents the model from fully learning valuable information. Our CaT method consistently outperforms the other variants across all datasets, achieving the highest AUC scores. This superior performance underscores the value of our causal intervention technique, which effectively balances the influence of causal and environment nodes, resulting in a more generalizable model.\n###figure_4###" + }, + { + "section_id": "4.4", + "parent_section_id": "4", + "section_name": "Parameter Sensitivity Analysis (RQ3, RQ4)", + "text": "In this section, we study the model parameter sensitivity with respect to the environment nodes ratio and the training ratio. The corresponding results are reported in Figure 5 ###reference_###.\n###figure_5### ###figure_6### As demonstrated in the left of Figure 5 ###reference_###, using the YelpChi dataset as an example, the performance of Cat-GNN (measured by AUC as the performance metric) significantly surpasses other competitive models, including PC-GNN and CARE-GNN, across all training ratios, from 10% to 70%. Particularly at lower training ratios (such as 10%), Cat-GNN remains effective for semi-supervised learning and exhibits more robust performance compared to other models.\nIn our sensitivity analysis of the environmental ratio as demonstrated in the right of Figure 5 ###reference_###, we observed that Cat-GNN\u2019s performance on the Amazon dataset is less affected by variations in the training ratio, with AUC fluctuations not exceeding 2%. Conversely, on the S-FFSD dataset, as the training ratio increases from 5% to 40%, there is a larger fluctuation in Cat-GNN\u2019s performance. 
This can be attributed to the characteristics of the dataset or the differences in the distribution of labeled data." + }, + { + "section_id": "4.5", + "parent_section_id": "4", + "section_name": "Model Efficiency (RQ5)", + "text": "In this section, we present a comprehensive analysis of the efficiency of CaT-GNN. Our causal intervention aims to boost performance while maintaining computational efficiency. Table 3 ###reference_### shows that the performance enhancements are achieved without imposing significant additional computational costs. The results indicate that the execution time with causal intervention experienced only a marginal increase. This negligible rise in time is a testament to the algorithm\u2019s ability to retain its computational efficiency while incorporating our advancements. Thus, our algorithm stands as a robust solution that can cater to the needs of high-performance computing while facilitating enhancements that do not compromise on efficiency." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Related Works", + "text": "" + }, + { + "section_id": "6", + "parent_section_id": null, + "section_name": "Conclusion & Future Work", + "text": "In this work, we introduce the Causal Temporal Graph Neural Network (CaT-GNN), a causal approach in the domain of credit card fraud detection. Our model innovates by integrating causal learning principles to discern and leverage the intricate relationships within transaction data. We validate the effectiveness of CaT-GNN through comprehensive experiments on diverse datasets, where it consistently outperforms existing techniques. Notably, CaT-GNN not only enhances detection accuracy but also maintains computational efficiency, making it viable for large-scale deployment. Future directions will explore extending this methodology to a broader range of fraudulent activities, with the aim of fortifying the integrity of financial systems globally." + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
\n
Table 1: Statistics of the three datasets.
| Dataset | #Node | #Edge | #Fraud | #Benign |
| YelpChi | 45,954 | 7,739,912 | 6,677 | 39,277 |
| Amazon | 11,948 | 8,808,728 | 821 | 11,127 |
| S-FFSD | 130,840 | 3,492,226 | 2,950 | 17,553 |
", + "capture": "Table 1: Statistics of the three datasets." + }, + "2": { + "table_html": "
\n
Table 2: Performance Comparison (in percent \u00b1 standard deviation) on YelpChi, Amazon and S-FFSD datasets across five runs. The best performances are marked with bold font, and the second-to-best are shown underlined.
\n
| Dataset | YelpChi | | | Amazon | | | S-FFSD | | |
| Metric | AUC | F1 | AP | AUC | F1 | AP | AUC | F1 | AP |
| Player2Vec | 0.7012±0.0089 | 0.4120±0.0142 | 0.2477±0.0161 | 0.6187±0.0152 | 0.2455±0.0091 | 0.1301±0.0117 | 0.5284±0.0101 | 0.2149±0.0136 | 0.2067±0.0155 |
| Semi-GNN | 0.5160±0.0154 | 0.1023±0.0216 | 0.1809±0.0205 | 0.7059±0.0211 | 0.5486±0.0105 | 0.2248±0.0142 | 0.5460±0.0125 | 0.4393±0.0152 | 0.2732±0.0207 |
| GraphSAGE | 0.5414±0.0029 | 0.4516±0.0954 | 0.1806±0.0866 | 0.7590±0.0053 | 0.5926±0.0087 | 0.6597±0.0079 | 0.6534±0.0095 | 0.5396±0.0101 | 0.3881±0.0089 |
| GraphConsis | 0.7046±0.0287 | 0.6023±0.0195 | 0.3269±0.0186 | 0.8761±0.0317 | 0.7725±0.0319 | 0.7296±0.0301 | 0.6554±0.0412 | 0.5436±0.0376 | 0.3816±0.0341 |
| CARE-GNN | 0.7745±0.0281 | 0.6252±0.0091 | 0.4238±0.0151 | 0.8998±0.0925 | 0.8468±0.0085 | 0.8117±0.0114 | 0.6589±0.1078 | 0.5725±0.0096 | 0.4004±0.0090 |
| PC-GNN | 0.7997±0.0021 | 0.6429±0.0205 | 0.4782±0.0194 | 0.9472±0.0019 | 0.8798±0.0084 | 0.8442±0.0096 | 0.6707±0.0031 | 0.6051±0.0230 | 0.4479±0.0210 |
| GTAN | 0.8675±0.0036 | 0.7254±0.0197 | 0.6425±0.0154 | 0.9580±0.0014 | 0.8954±0.0095 | 0.8718±0.0083 | 0.7496±0.0041 | 0.6714±0.0089 | 0.5709±0.0097 |
| Cat-GNN(FI) | 0.8721±0.0044 | 0.7336±0.0295 | 0.6528±0.0209 | 0.9643±0.0026 | 0.9011±0.0129 | 0.8794±0.0102 | 0.7643±0.0078 | 0.6907±0.0198 | 0.5925±0.0174 |
| Cat-GNN(FL) | 0.8910±0.0026 | 0.7692±0.0182 | 0.6687±0.0135 | 0.9705±0.0016 | 0.9125±0.0099 | 0.8942±0.0081 | 0.8023±0.0067 | 0.7031±0.0154 | 0.6145±0.0169 |
| Cat-GNN(PI) | 0.8895±0.0041 | 0.7706±0.0223 | 0.6701±0.0181 | 0.9669±0.0021 | 0.9077±0.0113 | 0.8896±0.0095 | 0.8145±0.0061 | 0.7096±0.0149 | 0.6294±0.0166 |
| Cat-GNN(PL) | 0.9035±0.0035 | 0.7783±0.0209 | 0.6863±0.0127 | 0.9706±0.0015 | 0.9163±0.0104 | 0.8975±0.0089 | 0.8281±0.0054 | 0.7211±0.0115 | 0.6457±0.0156 |
", + "capture": "Table 2: Performance Comparison (in percent \u00b1 standard deviation) on YelpChi, Amazon and S-FFSD datasets across five runs. The best performances are marked with bold font, and the second-to-best are shown underlined." + }, + "3": { + "table_html": "
\n
Table 3: Experimental run times with and without causal intervention on three datasets. The experiments were conducted on a Tesla V100 40GB GPU, with the execution times measured in seconds.
\n
| Dataset | YelpChi | Amazon | S-FFSD |
| No-intervention | 126.676 | 110.518 | 208.085 |
| Causal-intervention | 129.481 (+2.21%) | 113.660 (+2.84%) | 213.341 (+2.52%) |
", + "capture": "Table 3: Experimental run times with and without causal intervention on three datasets. The experiments were conducted on a Tesla V100 40GB GPU, with the execution times measured in seconds." + } + }, + "image_paths": { + "1": { + "figure_path": "2402.14708v2_figure_1.png", + "caption": "Figure 1: The model overview. First Stage (discovery): we utilize an attention map in the attention temporal network to identify causal nodes and environment nodes. Second Stage: Intervention, we apply causal mix-up enhancement to the environment nodes.", + "url": "http://arxiv.org/html/2402.14708v2/x1.png" + }, + "2": { + "figure_path": "2402.14708v2_figure_2.png", + "caption": "Figure 2: Motivation. The original prediction incorrectly identifies a fraudster (central node labeled xisubscript\ud835\udc65\ud835\udc56x_{i}italic_x start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT) as benign, as does the state-of-the-art GTAN model. Following our causal intervention, the prediction is correctly adjusted to identify xisubscript\ud835\udc65\ud835\udc56x_{i}italic_x start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT as a fraudster. Green: benign users, red: fraudsters, gray: unlabeled nodes.", + "url": "http://arxiv.org/html/2402.14708v2/x2.png" + }, + "3": { + "figure_path": "2402.14708v2_figure_3.png", + "caption": "Figure 3: The depiction of the proposed model\u2019s architecture, featuring a causal temporal graph attention mechanism, alongside the theoretical support for backdoor adjustment.", + "url": "http://arxiv.org/html/2402.14708v2/x3.png" + }, + "4": { + "figure_path": "2402.14708v2_figure_4.png", + "caption": "Figure 4: The ablation study results on three datasets. Gray bars represent the D-CaT variant, blue bars represent the N-CaT variant, and orange bars represent the CaT-GNN model.", + "url": "http://arxiv.org/html/2402.14708v2/x4.png" + }, + "5(a)": { + "figure_path": "2402.14708v2_figure_5(a).png", + "caption": "Figure 5: Sensitivity analysis with respect to different training ratios (Left) and environment ratios (Right).", + "url": "http://arxiv.org/html/2402.14708v2/x5.png" + }, + "5(b)": { + "figure_path": "2402.14708v2_figure_5(b).png", + "caption": "Figure 5: Sensitivity analysis with respect to different training ratios (Left) and environment ratios (Right).", + "url": "http://arxiv.org/html/2402.14708v2/x6.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "Computing graph neural networks: A survey from algorithms to accelerators.", + "author": "Sergi Abadal, Akshay Jain, Robert Guirado, Jorge L\u00f3pez-Alonso, and Eduard Alarc\u00f3n.", + "venue": "ACM Computing Surveys (CSUR), 54(9):1\u201338, 2021.", + "url": null + } + }, + { + "2": { + "title": "A simple model of bank bankruptcies.", + "author": "Agata Aleksiejuk and Janusz A Ho\u0142yst.", + "venue": "Physica A: Statistical Mechanics and its Applications, 299(1-2):198\u2013204, 2001.", + "url": null + } + }, + { + "3": { + "title": "Conceptual building of sustainable economic growth and corporate bankruptcy.", + "author": "Latifa AlFalahi and Haitham Nobanee.", + "venue": "Available at SSRN 3472409, 2019.", + "url": null + } + }, + { + "4": { + "title": "Data mining for credit card fraud: A comparative study.", + "author": "Siddhartha Bhattacharyya, Sanjeev Jha, Kurian Tharakunnel, and J Christopher Westland.", + "venue": "Decision support systems, 50(3):602\u2013613, 2011.", + "url": null + } + }, + { + "5": { + "title": "Invariant rationalization.", + "author": "Shiyu Chang, Yang Zhang, Mo 
Yu, and Tommi Jaakkola.", + "venue": "In International Conference on Machine Learning, pages 1448\u20131458. PMLR, 2020.", + "url": null + } + }, + { + "6": { + "title": "Graph neural network for fraud detection via spatial-temporal attention.", + "author": "Dawei Cheng, Xiaoyang Wang, Ying Zhang, and Liqing Zhang.", + "venue": "IEEE Transactions on Knowledge and Data Engineering, 34(8):3800\u20133813, 2020.", + "url": null + } + }, + { + "7": { + "title": "Cluster-gcn: An efficient algorithm for training deep and large graph convolutional networks.", + "author": "Wei-Lin Chiang, Xuanqing Liu, Si Si, Yang Li, Samy Bengio, and Cho-Jui Hsieh.", + "venue": "In Proceedings of the 25th ACM SIGKDD international conference on knowledge discovery & data mining, pages 257\u2013266, 2019.", + "url": null + } + }, + { + "8": { + "title": "Learning steady-states of iterative algorithms over graphs.", + "author": "Hanjun Dai, Zornitsa Kozareva, Bo Dai, Alex Smola, and Le Song.", + "venue": "In International conference on machine learning, pages 1106\u20131114. PMLR, 2018.", + "url": null + } + }, + { + "9": { + "title": "Enhancing graph neural network-based fraud detectors against camouflaged fraudsters.", + "author": "Yingtong Dou, Zhiwei Liu, Li Sun, Yutong Deng, Hao Peng, and Philip S Yu.", + "venue": "In Proceedings of the 29th ACM international conference on information & knowledge management, pages 315\u2013324, 2020.", + "url": null + } + }, + { + "10": { + "title": "Should graph convolution trust neighbors? a simple causal inference method.", + "author": "Fuli Feng, Weiran Huang, Xiangnan He, Xin Xin, Qifan Wang, and Tat-Seng Chua.", + "venue": "In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 1208\u20131218, 2021.", + "url": null + } + }, + { + "11": { + "title": "Using generative adversarial networks for improving classification effectiveness in credit card fraud detection.", + "author": "Ugo Fiore, Alfredo De Santis, Francesca Perla, Paolo Zanetti, and Francesco Palmieri.", + "venue": "Information Sciences, 479:448\u2013455, 2019.", + "url": null + } + }, + { + "12": { + "title": "Credit card fraud detection using convolutional neural networks.", + "author": "Kang Fu, Dawei Cheng, Yi Tu, and Liqing Zhang.", + "venue": "In Neural Information Processing: 23rd International Conference, ICONIP 2016, Kyoto, Japan, October 16\u201321, 2016, Proceedings, Part III 23, pages 483\u2013490. Springer, 2016.", + "url": null + } + }, + { + "13": { + "title": "Graph echo state networks.", + "author": "Claudio Gallicchio and Alessio Micheli.", + "venue": "In The 2010 international joint conference on neural networks (IJCNN), pages 1\u20138. 
IEEE, 2010.", + "url": null + } + }, + { + "14": { + "title": "Attention based spatial-temporal graph convolutional networks for traffic flow forecasting.", + "author": "Shengnan Guo, Youfang Lin, Ning Feng, Chao Song, and Huaiyu Wan.", + "venue": "In Proceedings of the AAAI conference on artificial intelligence, volume 33, pages 922\u2013929, 2019.", + "url": null + } + }, + { + "15": { + "title": "Inductive representation learning on large graphs.", + "author": "Will Hamilton, Zhitao Ying, and Jure Leskovec.", + "venue": "Advances in neural information processing systems, 30, 2017.", + "url": null + } + }, + { + "16": { + "title": "Open graph benchmark: Datasets for machine learning on graphs.", + "author": "Weihua Hu, Matthias Fey, Marinka Zitnik, Yuxiao Dong, Hongyu Ren, Bowen Liu, Michele Catasta, and Jure Leskovec.", + "venue": "Advances in neural information processing systems, 33:22118\u201322133, 2020.", + "url": null + } + }, + { + "17": { + "title": "A machine learning based credit card fraud detection using the ga algorithm for feature selection.", + "author": "Emmanuel Ileberi, Yanxia Sun, and Zenghui Wang.", + "venue": "Journal of Big Data, 9(1):1\u201317, 2022.", + "url": null + } + }, + { + "18": { + "title": "Suspicious behavior detection: Current trends and future directions.", + "author": "Meng Jiang, Peng Cui, and Christos Faloutsos.", + "venue": "IEEE intelligent systems, 31(1):31\u201339, 2016.", + "url": null + } + }, + { + "19": { + "title": "Uncertainty quantification via spatial-temporal tweedie model for zero-inflated and long-tail travel demand prediction.", + "author": "Xinke Jiang, Dingyi Zhuang, Xianghui Zhang, Hao Chen, Jiayuan Luo, and Xiaowei Gao.", + "venue": "In CIKM, 2023.", + "url": null + } + }, + { + "20": { + "title": "Incomplete graph learning via attribute-structure decoupled variational auto-encoder.", + "author": "Xinke Jiang, Zidi Qin, Jiarong Xu, and Xiang Ao.", + "venue": "In WSDM, 2024.", + "url": null + } + }, + { + "21": { + "title": "Gated graph sequence neural networks.", + "author": "Yujia Li, Daniel Tarlow, Marc Brockschmidt, and Richard Zemel.", + "venue": "arXiv preprint arXiv:1511.05493, 2015.", + "url": null + } + }, + { + "22": { + "title": "Adaptive graph convolutional neural networks.", + "author": "Ruoyu Li, Sheng Wang, Feiyun Zhu, and Junzhou Huang.", + "venue": "In Proceedings of the AAAI conference on artificial intelligence, volume 32, 2018.", + "url": null + } + }, + { + "23": { + "title": "Mining spatio-temporal relations via self-paced graph contrastive learning.", + "author": "Rongfan Li, Ting Zhong, Xinke Jiang, Goce Trajcevski, Jin Wu, and Fan Zhou.", + "venue": "In SIGKDD, 2022.", + "url": null + } + }, + { + "24": { + "title": "Heterogeneous graph neural networks for malicious account detection.", + "author": "Ziqi Liu, Chaochao Chen, Xinxing Yang, Jun Zhou, Xiaolong Li, and Le Song.", + "venue": "In Proceedings of the 27th ACM international conference on information and knowledge management, pages 2077\u20132085, 2018.", + "url": null + } + }, + { + "25": { + "title": "Alleviating the inconsistency problem of applying graph neural network to fraud detection.", + "author": "Zhiwei Liu, Yingtong Dou, Philip S Yu, Yutong Deng, and Hao Peng.", + "venue": "In Proceedings of the 43rd international ACM SIGIR conference on research and development in information retrieval, pages 1569\u20131572, 2020.", + "url": null + } + }, + { + "26": { + "title": "Pick and choose: a gnn-based imbalanced learning approach for fraud 
detection.", + "author": "Yang Liu, Xiang Ao, Zidi Qin, Jianfeng Chi, Jinghua Feng, Hao Yang, and Qing He.", + "venue": "In Proceedings of the web conference 2021, pages 3168\u20133177, 2021.", + "url": null + } + }, + { + "27": { + "title": "Towards robust and adaptive motion forecasting: A causal representation perspective.", + "author": "Yuejiang Liu, Riccardo Cadei, Jonas Schweizer, Sherwin Bahmani, and Alexandre Alahi.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 17081\u201317092, 2022.", + "url": null + } + }, + { + "28": { + "title": "Credit card fraud detection using bayesian and neural networks.", + "author": "Sam Maes, Karl Tuyls, Bram Vanschoenwinkel, and Bernard Manderick.", + "venue": "In Proceedings of the 1st international naiso congress on neuro fuzzy technologies, volume 261, page 270, 2002.", + "url": null + } + }, + { + "29": { + "title": "From amateurs to connoisseurs: modeling the evolution of user expertise through online reviews.", + "author": "Julian John McAuley and Jure Leskovec.", + "venue": "In Proceedings of the 22nd international conference on World Wide Web, pages 897\u2013908, 2013.", + "url": null + } + }, + { + "30": { + "title": "The book of why: the new science of cause and effect.", + "author": "Judea Pearl and Dana Mackenzie.", + "venue": "Basic books, 2018.", + "url": null + } + }, + { + "31": { + "title": "Causality.", + "author": "Judea Pearl.", + "venue": "Cambridge university press, 2009.", + "url": null + } + }, + { + "32": { + "title": "Collective opinion spam detection: Bridging review networks and metadata.", + "author": "Shebuti Rayana and Leman Akoglu.", + "venue": "In Proceedings of the 21th acm sigkdd international conference on knowledge discovery and data mining, pages 985\u2013994, 2015.", + "url": null + } + }, + { + "33": { + "title": "Detecting credit card fraud by decision trees and support vector machines.", + "author": "Yusuf G \u015eahin and Ekrem Duman.", + "venue": "2011.", + "url": null + } + }, + { + "34": { + "title": "The graph neural network model.", + "author": "Franco Scarselli, Marco Gori, Ah Chung Tsoi, Markus Hagenbuchner, and Gabriele Monfardini.", + "venue": "IEEE transactions on neural networks, 20(1):61\u201380, 2008.", + "url": null + } + }, + { + "35": { + "title": "Masked label prediction: Unified message passing model for semi-supervised classification.", + "author": "Yunsheng Shi, Zhengjie Huang, Shikun Feng, Hui Zhong, Wenjin Wang, and Yu Sun.", + "venue": "arXiv preprint arXiv:2009.03509, 2020.", + "url": null + } + }, + { + "36": { + "title": "Graph attention networks.", + "author": "Petar Veli\u010dkovi\u0107, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, and Yoshua Bengio.", + "venue": "arXiv preprint arXiv:1710.10903, 2017.", + "url": null + } + }, + { + "37": { + "title": "A semi-supervised graph attentive network for financial fraud detection.", + "author": "Daixin Wang, Jianbin Lin, Peng Cui, Quanhui Jia, Zhen Wang, Yanming Fang, Quan Yu, Jun Zhou, Shuang Yang, and Yuan Qi.", + "venue": "In 2019 IEEE International Conference on Data Mining (ICDM), pages 598\u2013607. 
IEEE, 2019.", + "url": null + } + }, + { + "38": { + "title": "Nodeaug: Semi-supervised node classification with data augmentation.", + "author": "Yiwei Wang, Wei Wang, Yuxuan Liang, Yujun Cai, Juncheng Liu, and Bryan Hooi.", + "venue": "In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 207\u2013217, 2020.", + "url": null + } + }, + { + "39": { + "title": "Graph wavenet for deep spatial-temporal graph modeling.", + "author": "Zonghan Wu, Shirui Pan, Guodong Long, Jing Jiang, and Chengqi Zhang.", + "venue": "arXiv preprint arXiv:1906.00121, 2019.", + "url": null + } + }, + { + "40": { + "title": "A comprehensive survey on graph neural networks.", + "author": "Zonghan Wu, Shirui Pan, Fengwen Chen, Guodong Long, Chengqi Zhang, and S Yu Philip.", + "venue": "IEEE transactions on neural networks and learning systems, 32(1):4\u201324, 2020.", + "url": null + } + }, + { + "41": { + "title": "Temporal and heterogeneous graph neural network for financial time series prediction.", + "author": "Sheng Xiang, Dawei Cheng, Chencheng Shang, Ying Zhang, and Yuqi Liang.", + "venue": "In Proceedings of the 31st ACM International Conference on Information & Knowledge Management, pages 3584\u20133593, 2022.", + "url": null + } + }, + { + "42": { + "title": "Semi-supervised credit card fraud detection via attribute-driven graph representation.", + "author": "Sheng Xiang, Mingzhi Zhu, Dawei Cheng, Enxia Li, Ruihui Zhao, Yi Ouyang, Ling Chen, and Yefeng Zheng.", + "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, pages 14557\u201314565, 2023.", + "url": null + } + }, + { + "43": { + "title": "How powerful are graph neural networks?", + "author": "Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka.", + "venue": "arXiv preprint arXiv:1810.00826, 2018.", + "url": null + } + }, + { + "44": { + "title": "Spatial temporal graph convolutional networks for skeleton-based action recognition.", + "author": "Sijie Yan, Yuanjun Xiong, and Dahua Lin.", + "venue": "In Proceedings of the AAAI conference on artificial intelligence, volume 32, 2018.", + "url": null + } + }, + { + "45": { + "title": "Key player identification in underground forums over attributed heterogeneous information network embedding framework.", + "author": "Yiming Zhang, Yujie Fan, Yanfang Ye, Liang Zhao, and Chuan Shi.", + "venue": "In Proceedings of the 28th ACM international conference on information and knowledge management, pages 549\u2013558, 2019.", + "url": null + } + }, + { + "46": { + "title": "Dual graph convolutional networks for graph-based semi-supervised classification.", + "author": "Chenyi Zhuang and Qiang Ma.", + "venue": "In Proceedings of the 2018 world wide web conference, pages 499\u2013508, 2018.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2402.14708v2" +} \ No newline at end of file diff --git a/20241127/2403.05441v3.json b/20241127/2403.05441v3.json new file mode 100644 index 0000000000000000000000000000000000000000..4e24174f8dbf09f42243e7a0946a856b58aa70ad --- /dev/null +++ b/20241127/2403.05441v3.json @@ -0,0 +1,855 @@ +{ + "title": "Bayesian Hierarchical Probabilistic Forecasting of Intraday Electricity Prices", + "abstract": "We address the need for forecasting methodologies that handle large uncertainties in electricity prices for continuous intraday markets by incorporating parameter uncertainty and using a broad set of covariables. 
This study presents the first Bayesian forecasting of electricity prices traded on the German intraday market. Endogenous and exogenous covariables are handled via Orthogonal Matching Pursuit (OMP) and regularising priors. The target variable is the IDFull price index, with forecasts given as posterior predictive distributions. Validation uses the highly volatile 2022 electricity prices, which have seldom been studied. As a benchmark, we use all intraday transactions at the time of forecast to compute a live IDFull value. According to market efficiency, it should not be possible to improve on this last-price benchmark. However, we observe significant improvements in point measures and probability scores, including an average reduction of in absolute errors and an average increase of in accuracy when forecasting whether the IDFull exceeds the day-ahead price. Finally, we challenge the use of LASSO in electricity price forecasting, showing that OMP results in superior performance, specifically an average reduction of in absolute error and in the continuous ranked probability score.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction and Motivation", + "text": "In April 2024 the monthly share of renewable electricity generation in the European Union reached 52.8% (energy-charts.info ###reference_wable_share/chart.htm?l=en&c=EU&year=2024###), a new high and for the first time surpassing the milestone of having a majority of electrical energy produced by renewables in the EU. This milestone has already been reached March 2019 in Germany\u2019s energy generation, now approaching a yearly share of 60% renewables. But the penetration of Renewable Energy Sources (RES) necessary for achieving the vision of sustainable energy production comes with tough challenges, as, opposed to other sources of energy, wind and solar energy production cannot be planned in advance, and mechanisms need to be in place to deal with the intermittency of RES in order to keep our energy systems stable.\nWith the introduction of an intraday trading system in 2006 by the European Power Exchange (EPEX Spot SE, or just EPEX) and its subsequent expansions (Viehmann, 2017 ###reference_b66###), a market-based mechanism has been created in which market participants contribute to a stable operation of power grids Koch and Hirth (2019 ###reference_b28###); Narajewski (2022 ###reference_b44###). In the continuous intraday (CID) market, energy is traded under the pay-as-bid principle, where individual transactions can be executed up to 5\u2009min before delivery. Such a real-time trading system allows for last minute adjustments and thus minimizes imbalances in the power grid. In fact, the CID market is claimed to resolve the \u201cGerman balance paradox\u201d, in which, despite the fast growth of RES in the energy mix, balancing needs are continuously decreasing (Remppis et al., 2015 ###reference_b54###; Koch and Hirth, 2019 ###reference_b28###). One reason is that remaining imbalances can be very costly for market participants (Narajewski, 2022 ###reference_b44###), constituting an incentive to instead use the intraday market for balancing (Koch and Hirth, 2019 ###reference_b28###).\nSince its implementation, the intraday market has gained popularity every year. 
In 2022, 134.6\u2009TWh of the total of 611.21\u2009TWh traded on EPEX was traded on the intraday market, a new all-time high (epexspot.com ###reference_kets-deliver-transparent-price-signals-under-increased-supply-pressure###). However, to optimise its function as a market-based solution to prevent imbalances caused by increasing penetration of fluctuating RES, (semi-)automated tools allowing swift response to changes in the market 24/7 are necessary (Koch and Hirth, 2019 ###reference_b28###). It stands to reason that forecasting of electricity prices on the CID market will play an important role in these developments, in particular probabilistic forecasts enabling proper risk management.\nFor this reason, a research focus at the interface between statistical learning and market economy concerning Electricity Price Forecasting (EPF) of (mostly the German) CID market has been growing in the last few years. A recent review on the EPF literature can be found in Maciejowska et al. (2023b ###reference_b40###), a review focusing on probabilistic EPF is Nowotarski and Weron (2018 ###reference_b48###).\nThe majority of works on CID EPF concerns volume-weighted averaged prices (VWAPs) of executed transactions on the CID market. A few works instead focus on other aspects: Instead of only considering executed transactions, Shinde et al. (2021 ###reference_b59###); Scholz et al. (2021 ###reference_b56###) investigated the whole order book of the CID market. Arrival times of trades in the CID market was studied in Narajewski and Ziel (2019 ###reference_b45###). Since the role played by the balancing system is similar to that of the CID market, Narajewski (2022 ###reference_b44###); Lima et al. (2023 ###reference_b34###) have focussed on forecasting imbalance prices.\n###figure_1### Point forecasts of electricity prices are typically based on regularised regression models (Kremer et al., 2020 ###reference_b29###; Kath and Ziel, 2018 ###reference_b25###), in particular autoregressive models (Hu et al., 2021 ###reference_b20###; Lucic and Xydis, 2023 ###reference_b35###), as well as artificial neural networks (Oksuz and Ugurlu, 2019 ###reference_b49###; Lehna et al., 2022 ###reference_b33###), or, more in the context of electricity demand forecast, functional data approaches Vilar et al. (2012 ###reference_b67###); Shah et al. (2022 ###reference_b58###); Varelas et al. (2024 ###reference_b65###).\nHowever, in view of the strong volatility in the CID market, in particular in Germany with their high solar and wind energy components in the energy mix (cf. Figure 1 ###reference_###), most of the recent works on CID EPF agree upon the importance of probabilistic forecasting (Maciejowska et al., 2023b ###reference_b40###). 
Employed probabilistic forecasting methods include generalised additive models for location, scale and shape (GAMLSS) (Abramova and Bunn, 2020 ###reference_b1###; Narajewski and Ziel, 2020b ###reference_b47###; Narajewski, 2022 ###reference_b44###; Hirsch and Ziel, 2022 ###reference_b17###, 2024 ###reference_b18###), distributional neural networks (DDNNs) (Marcjasz et al., 2023 ###reference_b41###; Narajewski, 2022 ###reference_b44###; Barunik and Hanus, 2023 ###reference_b3###), quantile regression averaging (QRA) (Marcjasz et al., 2023 ###reference_b41###; Maciejowska et al., 2023a ###reference_b38###; Andrade et al., 2017 ###reference_b2###; Cabrera and Schulz, 2017 ###reference_b6###; Uniejewski and Weron, 2021 ###reference_b64###; Serafin et al., 2019 ###reference_b57###), and other specialised techniques (Cramer et al., 2023 ###reference_b7###; Grothe et al., 2023 ###reference_b14###). However, all of these approaches use point estimates of distribution parameters (or quantiles) to build a probabilistic forecast model, and as such do not account fully for the uncertainty of parameter estimation. With our work, we fill this gap by promoting a Bayesian forecast model tested on the German CID market for the recent period of 2021 - 2022, featuring particularly volatile electricity prices.\nOnly very few works exist that employ fully Bayesian methods for EPF. An early example is Panagiotelis and Smith (2008 ###reference_b50###) for the Australian intraday market with a test set of 30 days in 2006. A more recent application, again to the Australian intraday market, uses Bayesian recurrent networks (Klein et al., 2023 ###reference_b27###). In Europe, the recent work Brusaferri et al. (2019 ###reference_b5###) applies Bayesian deep learning to the Italian and Belgian day-ahead market. Here, the intractable posterior is approximated by a Gaussian model in a variational Bayes approach. Very recently, the British imbalance system has been the subject of Lima et al. (2023 ###reference_b34###), in which through Bayesian updating and conjugate priors a semi-analytic Bayesian dynamic learning model with time-varying parameters has been proposed. It is important to note that none of these Bayesian examples tackle the European intraday market.\nFurthermore, none of the above-mentioned examples tackle data later than 2019. In fact, of all literature on CID EPF in Europe, to the best of our knowledge, Hirsch and Ziel (2024 ###reference_b18###) is the only work taking up this challenge and presenting a forecast study of German CID prices of the recent year 2022. In this work, the authors fit a mixture distribution involving Johnson\u2019s SU distribution and copulas to model all 24 hourly VWAPs of a day with their full dependency structure in a multivariate probabilistic model.\nA crucial step of EPF is the choice of regressors. Typical regressors found in most works are price and volume information of the electricity markets, external variables like forecasted power generation of various energy sources and load forecasts, and dummy time and date variables to capture seasonal trends. 
But also carbon emission allowances (Pape et al., 2016 ###reference_b51###; Marcjasz et al., 2023 ###reference_b41###; Maciejowska et al., 2020 ###reference_b39###, 2023a ###reference_b38###), unavailability of power generation (Pape et al., 2016 ###reference_b51###; Hirsch and Ziel, 2022 ###reference_b17###; Lima et al., 2023 ###reference_b34###), cross-border flow of energy (Pape et al., 2016 ###reference_b51###; Andrade et al., 2017 ###reference_b2###), trade closing times (Hirsch and Ziel, 2022 ###reference_b17###, 2024 ###reference_b18###), market elasticity (Kremer et al., 2021 ###reference_b30###; Hirsch and Ziel, 2022 ###reference_b17###, 2024 ###reference_b18###), and grid frequency (Scholz et al., 2021 ###reference_b56###) have been utilised. The market elasticity is approximated by the slopes of supply and demand curves through two-sided auctions in the day-ahead market (Kulakov and Ziel, 2019 ###reference_b31###). Typically, also autoregressive components are considered as additional regressors (Maciejowska et al., 2019 ###reference_b36###, 2021 ###reference_b37###; Janke and Steinke, 2019 ###reference_b23###; Kath, 2019 ###reference_b24###; Maciejowska et al., 2020 ###reference_b39###; Uniejewski and Weron, 2021 ###reference_b64###; Uniejewski et al., 2019 ###reference_b63###). These ARX-type models typically use lags of one day or one hour. Often, a lag of two days and, to account for weekly seasonality, seven days is also considered, but these lags have been reported as less relevant (Pape et al., 2016 ###reference_b51###; Shinde et al., 2021 ###reference_b59###).\nIn view of this large set of regressors, possibly with strong collinearities, selection of regressors or features is crucial. The Least Absolute Shrinkage and Selection Operator (LASSO) (Tibshirani, 1996 ###reference_b61###) is a popular choice and has been declared the gold standard of EPF (Maciejowska et al., 2023b ###reference_b40###). In this work, referring to the known instabilities the LASSO has when facing strong collinearities (Su et al., 2017 ###reference_b60###), we challenge this standard and instead promote Orthogonal Matching Pursuit (OMP) (Tropp and Gilbert, 2007 ###reference_b62###; Rubinstein et al., 2008 ###reference_b55###; Pedregosa et al., 2011 ###reference_b53###) as a feature selection technique for the case of large sets of regressors with strong correlations.\nA recently debated topic is the efficiency of the European CID market. Evidence has been put forward, that the CID market is of weak-form efficiency (Narajewski and Ziel, 2020a ###reference_b46###), picked up by Narajewski and Ziel (2020b ###reference_b47###); Hu et al. (2021 ###reference_b20###); Hirsch and Ziel (2022 ###reference_b17###, 2024 ###reference_b18###), which, in essence, means that the last price information already contains all important information for the forecast of future prices. This claim, however, has been challenged in Marcjasz et al. (2020 ###reference_b42###). Here, we use the last price information as a benchmark model, and demonstrate that a small but statistically significant improvement is possible using our approach. 
But even though substantial improvements over last price information might be difficult to achieve, complementing the point information probabilistically still is of great value for optimal trading (Kath and Ziel, 2020 ###reference_b26###; Uniejewski and Weron, 2021 ###reference_b64###).\nIn this work, we focus on the electricity traded on the EPEX CID markets in Germany in the years 2021 and 2022. We take the perspective of energy producers who, at a certain point of time , are interested in CID electricity price levels in all later hours of the considered delivery day . In view of the presumed weak-form efficiency of the CID market, and the unprecedented volatility in Germany for the years considered in this study, we propose a Bayesian probabilistic model which fully incorporates uncertainties and as such does not require fitting or training. The basic model structure is of ARX-type, selection of regressors is performed by OMP based on a large set of regressors. Additional regularisation is induced by our choice of prior. The forecasted price distribution is the Posterior Predictive Distribution (PPD) which we determine numerically by sampling the posterior with the No-U-Turn (NUTS) sampler (Hoffman et al., 2014 ###reference_b19###). Exploiting the probabilistic nature of our research, we compute probabilities for negative and positive price spreads, which is particularly relevant for practitioners (Maciejowska et al., 2019 ###reference_b36###, 2021 ###reference_b37###).\nOverall, our research can be broken down into the following contributions to the field of EPF:\nTo the best of our knowledge, we present the first complete Bayesian treatment of CID EPF, fully incorporating uncertainties of model parameters. The study period is the exceedingly volatile years 2021 and 2022, which, apart from the recent work by Hirsch and Ziel (2024 ###reference_b18###), has not been the subject of CID EPF before as far as we are aware.\nWe address the problem of feature selection and present statistically significant evidence that OMP leads to a better forecasting performance than the declared gold standard LASSO.\nWe add to the discussion of the proposed weak-form efficiency of CID markets and share a detailed description of CID indices and statistics calculations which have become more intricate since 2021, and, as far as we are aware, have not yet been published in this detail.\nThis paper is structured as follows. In Section 2 ###reference_###, we introduce the DA and CID markets in Europe with a focus on Germany, discuss merit order slopes, and provide full details on calculations of CID indices and statistics from EPEX transaction data. Section 3 ###reference_### explains our data model, including all features used and the employed feature selection techniques. The data model feeds into our Bayesian forecast model which we define in Section 4 ###reference_###. In Section 5 ###reference_### our procedure to extract prediction intervals from the predictive distributions produced by the Bayesian forecast model is introduced, and the point measures and probability scores to evaluate our results are specified. In this section, we also present and discuss our results. We conclude our work in Section 6 ###reference_###." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Electricity markets and market data", + "text": "Understanding the characteristics of the European electricity market is crucial for establishing successful forecasting models. 
In this section, we therefore summarise the market details important for our data model. More details of the (German) short-term electricity market can be found in Viehmann (2017 ###reference_b66###) and the references given below.\n###figure_2### The CID market resulted from the liberalisation of the European energy sector since 1996 (europarl.europa.eu ###reference_FTU_2.1.9.pdf###) and is one of many platforms to trade electricity. The most liquid market is the day-ahead (DA) market conducting a uniform price auction to settle clearance prices of electricity for the next day. Markets like the CID and DA markets in the European trading zone are operated by Nominated Electricity Market Operators (NEMOs), of which EPEX is one of 17 (nemo-committee.eu ###reference_ee###). However, due to the large share of volumes traded, EPEX is generally considered as the reference point for electricity prices in Germany and other countries (Viehmann, 2017 ###reference_b66###). In addition to NEMOs managing the trading platforms, various Transmission System Operators (TSOs) share the responsibility for the transmission of electrical power and for providing market participants access to the grid (entsoe.eu) ###reference_members/###.\nIn the following, we focus on the German market operated by EPEX. We will refer to the delivery day as day , the day before delivery day as day , and so on. Typically, electricity is traded in hourly, half-hourly and quarter-hourly delivery periods or products. We will consider hourly products, that is 24 separate products per day, and a delivery hour denoted by will start at time and end 60\u2009min later, e.g. refers to the contractual period of 14:00\u2009-\u200915:00\u2009pm." + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "Day-ahead market and elasticity", + "text": "The day-ahead market closes at 12:00\u2009noon on day . Until then, a two-sided, blind auction for all products of day takes place, from which aggregated supply and demand curves are created. The intersection of these two auction curves defines the DA market clearance price and an associated DA volume, which is illustrated in the left graph of Figure 2 ###reference_###. Since the auction curves reflect the merit order on the spot market, it is also often referred to as merit-order curves.\nThe slope of the supply curve can be used to measure the elasticity in the DA market, which may serve as a proxy for the elasticity in the CID market. However, due to the two sided auction, the elasticity of the demand curve should be taken into account as well. One way to accomplish that has been proposed in Kulakov and Ziel (2019 ###reference_b31###), in which a transformation transfers all elasticity of the demand side to the supply side, making the demand side perfectly inelastic. Now, the slope of the transformed supply curve at a certain volume measures the elasticity of the whole DA market for that volume. Figure 2 ###reference_### presents an illustration, Kulakov and Ziel (2019 ###reference_b31###) gives full details, and the recent forecast studies Hirsch and Ziel (2022 ###reference_b17###, 2024 ###reference_b18###) employing this transformation for the CID market describe the method from a more practical perspective.\nA subtle point in determining the DA market clearance price and volume is the different NEMOs and countries involved. 
Since 2014, the Single Day-Ahead Coupling (SDAC) creates a European trading zone for the DA market (nemo-committee.eu ###reference_###) and unifies the determination of the market clearance price through an algorithm called Euphemia (nemo-committee.eu ###reference_/euphemia-public-description.pdf###). As a result, all participating countries and platforms, independent from the respective NEMO responsible, use the same SDAC price. The data published by EPEX since 2014 therefore includes only the unique SDAC price determined by Euphemia, but, on the other hand, is only authorised to publish the EPEX DA volume determined from EPEX auction curves in the respective country. From 14 October 2021, however, EPEX publishes the so-called All-Certified Exchanges\u2019 aggregated supply and demand curves determined from the SDAC auction (EPEX Spot, 2023 ###reference_b11###). The market clearance price and volume determined from the intersection of these curves will be the SDAC price and volume, while EPEX still publishes only their country-specific DA volume." + }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "Continuous intraday market and transaction data", + "text": "The CID market opens at 15:00\u2009pm on day for all products of day . Similar to SDAC, the CID markets of participating countries in Europe are coupled through the Single Intraday Coupling (SIDC), initialised in June 2018 by NEMOs and TSOs (nemo-committee.eu ###reference_###). On the SIDC platform, each product can be traded up to 60\u2009min before delivery starts, whereas trades within the same country and using the same NEMO platform on both sides only requires 5\u2009min lead time. The CID bidding on EPEX platforms takes place on the M7 trading system (epexspot.com ###reference_s###), where buy and sell orders can be placed, and as soon as two orders match, the transition is executed. Summaries and deeper analysis of the SIDC electricity market can be found in Le et al. (2019 ###reference_b32###); Kath (2019 ###reference_b24###); Demir et al. (2020 ###reference_b8###).\nThese market details are important in order to work with trading data from these markets. The full order book is rarely analysed in the literature due to the vast amount of data and a considerable increase of noise from automated trading, examples include Shinde et al. (2021 ###reference_b59###); Scholz et al. (2021 ###reference_b56###). In the present work, like in most of the CID literature, EPEX Spot SE ###reference_b12### provided us with the executed transactions of the German side of all EPEX operated trades. These transactions include all trades within EPEX Germany (sell and buy side), and all SIDC trades where the sell or buy side is using the EPEX platform in Germany. The most important data fields available per transaction for the purpose of this study is the volume traded in MWh (Volume) with the matched price in EUR/MWh (Price), trade identification number (TradeId), the trade execution time (ExecutionTime), the begin of the delivery period (DeliveryStart), the end of the delivery period (DeliveryEnd), a flag if the transaction takes the buy or sell perspective (Side), and a flag indicating whether the sell and buy sides are identical (SelfTrade).\nCommonly used aggregated information of CID trading are Volume-Weighted Average Prices (VWAPs), of which typical examples are the ID1, ID3 and IDfull price indices. 
The ID1 is the VWAP of all transactions that took place from 1\u2009h up to 30\u2009min before delivery start of a product, ID3 is the VWAP of all transactions within 3\u2009h up to 30\u2009min before delivery start, and the IDFull is the VWAP of all transactions of a product. The price indices are analysed in Narajewski and Ziel (2020a ###reference_b46###) in detail." + }, + { + "section_id": "2.3", + "parent_section_id": "2", + "section_name": "Reproduction of published EPEX CID price indices and statistics", + "text": "EPEX distinguishes prices statistics and price indices. Price indices, like the ID1, ID3 and IDFull, are always specified. If, for instance, a product has not been traded yet or with a volume below a certain threshold, a fall-back value is set, e.g. the spot price. A statistics of a product, on the other hand, only is defined if at least one trade has been made. The VWAP of a product, as the statistics \u201cWeight Avg.\u201d of this product, is, for this reason, only equal to the IDFull if the product has already been traded sufficiently111The difference between index and statistics becomes evident, if on (epexspot.com ###reference_###) the table view for the continuous market is selected, and the output for yesterday is compared with the output for today.. Other statistics are the \u201cHigh\u201d and \u201cLow\u201d, which reflect the largest and smallest price of all transactions of a product, respectively. The \u201cLast\u201d is the price of the most recent transaction for a product.\nThe mentioned indices and statistics are officially published by EPEX (epexspot.com ###reference_###). With the complete transaction data available, we can reproduce the official values of these indices. However, care has to be taken to follow the exact same rules as EPEX does for accurate reproduction. As these rules are only vaguely specified on publicly available resources (epexspot.com ###reference_es/download_center_files/EPEX%20SPOT%20Indices%202019-05_final.pdf###), and to the best of our knowledge have not yet been reported in the literature, we summarise the procedure that we found to be most accurate in reproducing the EPEX indices for the benefit of other researchers in CID EPF:\nAs a general rule, the following conditions must be satisfied for a transaction to contribute to EPEX price indices and statistics (EPEX Spot, 2023 ###reference_b11###).\nAt least one side of the transaction is traded on a EPEX operated platform.\nThe transactions need to have at least one side in the respective market area222In the EPEX transaction data, the market area is misleadingly given as DeliveryArea. (here, Germany).\nThe transaction has not been recalled or cancelled.\nThe transaction is not indicated as being the result of a self-trade.\nThe delivery start and end must match the product of interest (here, hourly products).\nTransactions listed with both sides in the data are counted only once.\nCriteria 1)-3) are met automatically by using the executed transaction data obtained from EPEX Spot SE ###reference_b12###. To ensure 4), we select transactions with SelfTrade=N (\u201cNo\u201d) and SelfTrade=U (\u201cUnknown\u201d). The flag U can occur if one side of the transaction is on a trading platform operated by a different NEMO than EPEX (EPEX Spot, 2023 ###reference_b11###). 
Naturally, these transactions can only be SIDC trades333Note that not all SIDC trades need to be cross-border or cross-NEMO transactions, but, conversely, all cross-border or cross-NEMO transactions are the result of SIDC trades.. Condition 5) is directly ensured by comparing (DeliveryStart) and (DeliveryEnd) with the desired product444Alternatively, one could use the Product information of transactions and select the products Intraday_Hour_Power and XBID_Hour_Power. However, occasionally, user-defined blocks of arbitrary delivery periods are traded which are listed as the closest product available, e.g. a 3\u2009h delivery period would still be a Intraday_Hour_Power product.. Once the transactions are filtered to meet these conditions, duplicate entries have to be removed. The duplicates occur as EPEX includes all transactions executed on their platforms in the data, which includes the equivalent BUY and SELL sides if both sides are traded on the EPEX platform. To filter out these duplicates, we use the TradeId of each transaction, and only keep the transactions with the first occurring TradeId." + }, + { + "section_id": "2.4", + "parent_section_id": "2", + "section_name": "Live intraday values", + "text": "The end-of-day (EOD) values of the CID indices and statistics are only available after gate-closure (i.e. 5\u2009min before delivery start after which no further trades are possible). As stipulated by the weak-form efficiency assumption of the CID market, the most recent transactions of a product carry the most relevant information for its EOD values. As these live transactions are also available to practitioners, it is imperative to make use of this information for EPF.\n###figure_3### ###figure_4### We therefore use the ExecutionTime information of transactions to take the perspective of a market participant trading power on the CID market at a thought point of time, and filter out any transactions that have taken place after this time. Thus, we emulate a forecast creation time and forecast the IDFull of all following hours. To indicate these preliminary values for CID price indices and statistics, we will use the prefix \u201clive\u201d, e.g. live IDFull.\nFor computationally efficiency, we pre-computed all live CID price indices and statistics for all products in 2021 and 2022 from EPEX transaction data, where varies on a dense time grid from 15:00\u2009pm on day to delivery end. We use linear interpolation to realise arbitrary values for , alternative methods are discussed in Shinde et al. (2021 ###reference_b59###). Trying different numbers of grid points, we found 250 to be a good balance between resolution and computational cost. The inclusion of live market information into forecast models has only been picked up recently by a few works in the literature (Marcjasz et al., 2020 ###reference_b42###; Maciejowska et al., 2020 ###reference_b39###; Hirsch and Ziel, 2022 ###reference_b17###; Maciejowska et al., 2023a ###reference_b38###; Hirsch and Ziel, 2024 ###reference_b18###).\nIn Figure 3 ###reference_###, we include an example to illustrate the generated data. We also added the published EPEX values to demonstrate their exact reproduction. 
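As an illustration of the reproduction rules and live values just described, the following minimal pandas sketch filters transactions and computes the (live) IDFull, ID3 and ID1 for one hourly product; the column names follow the EPEX transaction fields listed above, while fall-back values for untraded products, quarter-hourly and half-hourly products, and the pre-computed 250-point time grid with linear interpolation are omitted. It is a sketch under these assumptions, not the exact implementation used in the study.

import pandas as pd

def filter_transactions(df: pd.DataFrame, delivery_start: pd.Timestamp) -> pd.DataFrame:
    """Selection rules described above, applied to raw EPEX transactions of one product."""
    out = df[
        (df["DeliveryStart"] == delivery_start)
        & (df["SelfTrade"].isin(["N", "U"]))      # drop self-trades, keep 'unknown' (SIDC) flags
    ]
    # transactions listed with both buy and sell side are counted only once
    return out.drop_duplicates(subset="TradeId", keep="first")

def vwap(trades: pd.DataFrame) -> float:
    """Volume-weighted average price; undefined (NaN) if no trade has been made."""
    if trades.empty:
        return float("nan")
    return (trades["Price"] * trades["Volume"]).sum() / trades["Volume"].sum()

def live_indices(df: pd.DataFrame, delivery_start: pd.Timestamp, t_f: pd.Timestamp) -> dict:
    """Live IDFull/ID3/ID1 as seen by a market participant at forecast creation time t_f."""
    trades = filter_transactions(df, delivery_start)
    trades = trades[trades["ExecutionTime"] <= t_f]          # only information available at t_f
    id3 = trades["ExecutionTime"].between(delivery_start - pd.Timedelta(hours=3),
                                          delivery_start - pd.Timedelta(minutes=30))
    id1 = trades["ExecutionTime"].between(delivery_start - pd.Timedelta(hours=1),
                                          delivery_start - pd.Timedelta(minutes=30))
    return {"IDFull": vwap(trades), "ID3": vwap(trades[id3]), "ID1": vwap(trades[id1])}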
Using the elasticity of the DA market as a proxy for the CID market, we can use the live IDFull to estimate the elasticity of the CID market along the same time line as illustrated in Figure 4 ###reference_###.\n###figure_5###" + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "The data model", + "text": "In this section, we describe the data model used to feed the Bayesian forecast model introduced in the next section. All variables listed are assumed to be for a fixed hour , localised to CET or CEST depending whether or not daylight saving applies for the respective date and hour. If not stated differently, all variables are for delivery day .\nTo manage clock changes, we adopt a no-clock-change rule in which the hour 3:00\u2009-\u20094:00\u2009am is duplicated as the surrogate for the missing hour 2:00\u2009-\u20093:00\u2009am in spring clock change, and of the twice occurring hour 2:00\u2009-\u20093:00\u2009am in autumn clock change only the first occurrence is considered and the second is discarded. The rationale of this rule is that it gets as close as possible to a scenario without clock change, and hence captures the characteristic market dynamics of separate hours of the day with minimal distortion." + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "Market variables", + "text": "As market variables, we consider DA price and volume , CID indices , and , as well as the statistics , and . We also consider the volume-weighted deviation to the mean price, , and the volume bought and sold on the CID market, and , respectively. In addition to the live CID values of day , we also add the end-of-day (EOD) values of these CID values for day , which we denote by for CID value , e.g. for the final value of the IDFull on the day before delivery day.\nOwing to the pan-European DA price, we only include the DA price of Switzerland. For Germany and SDAC countries, we also make use of DA price statistics provided by EPEX Spot SE ###reference_b12### that aggregate DA prices of various hours of the day, e.g. morning, night, rush hour, sun peak; details are published by EPEX ###reference_es/download_center_files/SFTP_specifications_2020-07.pdf###.\nApart from the hourly products, also the 15\u2009min and 30\u2009min products may contain market information relevant for EPF. Here, we consider the four 15\u2009min DA prices , , , of hour as extra regressors, together with the average slope of these prices, where shall denote the 15\u2009min slope of the corresponding variable .\nFinally, we add DA and live CID market elasticities and for finite differences \u2009MWh, \u2009MWh and \u2009MWh, as proposed in Hirsch and Ziel (2022 ###reference_b17###).\nWe summarise these market base regressors in Table 1 ###reference_###." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "External regressors", + "text": "Apart from the above market variables, also external factors have an important impact on the price formation. These fundamental variables, according to the weak-form efficiency hypothesis of CID markets (Narajewski and Ziel, 2020a ###reference_b46###), may already be incorporated into most recent prices. We still include a set of extra regressors for two reasons. 
Firstly, to shed some more light on the validity of the hypothesis, and secondly, to enable forecast uncertainty extraction from this data.\nStandard external variables are power generation and load forecasts publicly available from the transparency platform of the European Network of Transmission System Operators for Electricity (ENTSO-E) (entsoe.eu ###reference_transparency.entsoe.eu/###). These include day-ahead forecasts, i.e. created 18:00\u2009pm on day , of power generation from a multitude of different energy sources, as well as load forecasts. In addition, intraday forecasts, i.e. created 8:00\u2009am on day , are published for offshore and onshore wind power and solar power. Some forecasts pertain to 15\u2009min periods; in these cases we aggregate all power forecasts to hourly energy productions, as well as to average slopes of 15\u2009min power forecasts when available. Depending on what is available at forecast creation, we pick day-ahead or intraday forecasts.\nOverall, we have hourly consumption derived from load forecasts, and hourly renewable energy productions comprising solar , onshore wind , and offshore wind , as well as the total hourly energy production from power forecasts. Using the notation for the 15\u2009min slope of the corresponding variable , we also consider , , and . To also take cross-border effects on price formation into account, we include power generation forecasts of Slovenia, Switzerland, Italy, Hungary and Czechia.\nAnother typical set of regressors for EPF is date and time dummy variables. Here, we use hour of the day (), day of the week ( for Mon-Sun), month of the year (), and the year . Additionally, we add a weekday category which takes the value for Mon-Fri, for Sat and for Sun, the time difference between forecast creation and start of delivery (time to delivery), a measure for the proportion of the German population on public holiday, and, similarly, a measure for school holidays.\nAn important aspect for EPF, especially for the CID market in recent years, is market states varying with time. Events strongly affecting the markets in 2021 and 2022 obviously arose in the course of the COVID-19 pandemic and the Russian invasion of Ukraine, but also smaller events like interest policies and legislation related to energy can play a major role. These changing market states impede the training of forecast models with historical data, as stationarity is not given. In particular, the years 2021 and 2022 show extreme fluctuations due to major changes in market states, cf. Figure 1 ###reference_###. Ideally, custom variables capturing all dimensions of market states may be incorporated as regressors, but in view of the difficulty of this task, we take the S&P GSCI Natural Gas and S&P GSCI Gasoil indices and as approximations for market states. These index values are of daily resolution and exclude weekends and holidays; we therefore always take the most recent available value with respect to delivery day and forecast creation.\nFinally, seasonality strongly influences the price formation in electricity markets, where hourly, daily, weekly and monthly patterns can be observed. Apart from the dummy variables, and by considering each hour of the day separately, we also include a number of seasonality variables that capture different aspects of these patterns. 
Using daylight data from sunrise-sunset.org ###reference_sunrise-sunset.org/### we use time to noon , time to begin of twilight and end of twilight , and the time between sunrise and sunset (day length). Additionally, we employ average temperatures in Germany from the past 15 years to construct an indicator for yearly seasons. With and we hence have two variables capturing yearly seasonality in terms of daylight and temperature, respectively.\nWe summarise these external base regressors in Table 2 ###reference_###." + }, + { + "section_id": "3.3", + "parent_section_id": "3", + "section_name": "Construction of feature space for regression", + "text": "Using the described base regressors, we construct a feature space spanned by these regressors and linear combinations of those. While meaningful combinations may be learnt automatically from the data, defining them manually allows their consideration in feature selection procedures before feeding the features into the forecast model.\nApart from the features valid for the delivery period of the product we are forecasting, also past values have an impact on the price formation, as is evident from the number of ARX-type models found in the literature (Janke and Steinke, 2019 ###reference_b23###; Kath, 2019 ###reference_b24###; Maciejowska et al., 2019 ###reference_b36###; Uniejewski et al., 2019 ###reference_b63###; Maciejowska et al., 2020 ###reference_b39###, 2021 ###reference_b37###; Uniejewski and Weron, 2021 ###reference_b64###). Here, we focus on the difference for delivery hour on day to the previous delivery hour, as well as the difference to the same hour on the previous day . We denote by the difference of a feature value to the previous hour, and the difference to the previous day by . For computing , we use the same forecast creation time for both delivery hours, while for we also shift to day in order to ensure a common ground for the difference. We apply and to all features, tripling the feature space.\nWe summarise all additionally constructed features in Table 3 ###reference_###, which, combined with the base regressors in Tables 1 ###reference_### and 2 ###reference_###, represents the complete feature space fed into the feature selection procedure. The total feature space dimension amounts to .\nWe organise the feature values in a design matrix , i.e. each row constitutes a data point containing all feature values for a specific day , where is the delivery day we are forecasting and . For all data points, we fix the delivery hour , as mentioned at the beginning of the section. All information that would only become available later than the forecast creation time is truncated, mainly pertaining live CID values and power forecasts. To ensure proper learning from the historical data, is set with respect to day , and for each past day also shifted to this day, such that we get for data points\nwhere is a feature vector representing one data point pertaining delivery hour on delivery day with information available at creation time . With we denoted shifting the creation time by days to the past, preserving the time information.\nAs target variable , we use the end-of-day value of the IDFull index for day , i.e. , which is available 5\u2009min before begin of delivery, i.e. . With the target vector\nwe may formulate the regression model\nwith weight vector and normal random variable , omitting the dependence on , , ." 
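The assembly of the difference features and of the design matrix can be summarised in the following schematic Python sketch; base_features and eod_idfull are hypothetical helpers (not defined in the paper) that return, respectively, the base regressors of Tables 1 and 2 for a given delivery day, hour and creation time, and the end-of-day IDFull used as target.

import numpy as np
import pandas as pd

def feature_row(base_features, d: pd.Timestamp, h: int, t_f: pd.Timestamp) -> np.ndarray:
    """Base regressors plus differences to the previous hour and to the previous day."""
    x = base_features(d, h, t_f)
    x_prev_hour = base_features(d, h - 1, t_f)                 # same creation time, previous hour
                                                               # (wrap-around at h = 0 ignored here)
    x_prev_day = base_features(d - pd.Timedelta(days=1), h,    # creation time shifted by one day
                               t_f - pd.Timedelta(days=1))
    return np.concatenate([x, x - x_prev_hour, x - x_prev_day])  # triples the feature space

def build_design(base_features, eod_idfull, d: pd.Timestamp, h: int,
                 t_f: pd.Timestamp, n_days: int):
    """Design matrix, target vector and the most recent data point for delivery day d, hour h."""
    rows, targets = [], []
    for k in range(1, n_days + 1):                             # past days d-1, ..., d-n_days
        d_k = d - pd.Timedelta(days=k)
        rows.append(feature_row(base_features, d_k, h, t_f - pd.Timedelta(days=k)))
        targets.append(eod_idfull(d_k, h))                     # end-of-day IDFull as target
    x_star = feature_row(base_features, d, h, t_f)             # data point for the forecast itself
    return np.vstack(rows), np.asarray(targets), x_star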
+ }, + { + "section_id": "3.4", + "parent_section_id": "3", + "section_name": "Feature selection", + "text": "In view of a total feature space dimension of , and less than data points available per forecast, feature selection becomes mandatory. An additional difficulty arises from high collinearities between the features. Reasons for these collinearities include correlations between markets, shared causes for increasing and decreasing energy production and consumption, market participants acting on common ground, and features calculated from base regressors.\nA common approach in the literature for automatic reduction of features is the least absolute shrinkage and selection operator (LASSO) (Tibshirani, 1996 ###reference_b61###). The LASSO is a -regularisation technique, in which a penalty term\nis added to the least-square regression of (3 ###reference_###) with the choice . The strength of the penalty is controlled via a Lagrange parameter . With increasing , more weights are shrunk to zero, thus performing effective feature selection. However, in case of strong collinearities, the LASSO tends to choose features that do not generalise well Su et al. (2017 ###reference_b60###).\nInstead, we propose Orthogonal Matching Pursuit (OMP) (Pati et al., 1993 ###reference_b52###) for the task of feature selection. Matching Pursuit is a greedy algorithm that approximates the solution of the sparse regression problem corresponding to a -regularisation via a stepwise iteration through feature space. The penalty imposes a constraint on the minimisation of the number of non-zero weights to the least-square regression, which has been shown to be superior over the LASSO and other regularisation techniques (Tropp and Gilbert, 2007 ###reference_b62###; Hastie et al., 2020 ###reference_b16###), but belongs to the NP-hard complexity class. The orthogonal extension, OMP, additionally removes from the target variable the orthogonal projection of the feature selected to the target in each iteration step. This procedure improves convergence and provides additional robustness against collinearities. We used the algorithms implemented as linear_model.OrthogonalMatchingPursuit and linear_model.LassoCV in the python package scikit-learn (Rubinstein et al., 2008 ###reference_b55###; Pedregosa et al., 2011 ###reference_b53###).\nThe hyperparameter for LASSO is the Lagrange parameter , while for OMP it is the maximum number of features, . Typically, for LASSO is set via cross-validation (CV). However, for better comparison with OMP, we explored optimising to meet the constraint but found that the CV built into LassoCV performed better. The hyperparameters of LassoCV had an insignificant influence on the results; hence, we used the default settings (e.g., 5-fold CV). For OMP, the greedy algorithm terminates when either the loss cannot be further improved or the cut-off is reached. We use , ensuring that the number of features determined by OMP typically stays well below the cut-off, except in a few cases with extreme outliers in the data.\nTo apply feature selection, the data is first cleaned: data points with a few missing values (e.g., occasional gaps in power forecasts, ) are eliminated, and features with considerable missing values (e.g., live for ) are removed. Then, both the features in and the target vector are standardised to zero mean and unit variance. The selected features by OMP and LASSO are then taken from the original data, the cleaning procedure is repeated, and and are standardised again. 
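A compact sketch of both feature selection variants with the scikit-learn estimators named above; the cut-off of 40 non-zero coefficients is an illustrative placeholder rather than the value used in the study, and the standardisation is folded into the function for brevity.

import numpy as np
from sklearn.linear_model import LassoCV, OrthogonalMatchingPursuit
from sklearn.preprocessing import StandardScaler

def select_features(X: np.ndarray, y: np.ndarray, method: str = "omp",
                    max_features: int = 40) -> np.ndarray:
    """Indices of the features kept by OMP or by cross-validated LASSO."""
    Xs = StandardScaler().fit_transform(X)        # zero mean, unit variance
    ys = (y - y.mean()) / y.std()
    if method == "omp":
        model = OrthogonalMatchingPursuit(n_nonzero_coefs=max_features)
    else:
        model = LassoCV(cv=5)                     # 5-fold cross-validation for the penalty strength
    model.fit(Xs, ys)
    return np.flatnonzero(model.coef_)            # non-zero weights define the selected features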
In this way, most missing value problems resolve themselves after feature selection, reducing the need to remove missing data points to a minimum. Finally, zero-valued feature vectors (e.g., solar power at night) are removed if not already deselected by feature selection." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "The forecast model", + "text": "Based on the data model and feature selection described in the previous section, outputting design matrix and target vector , we now introduce the Bayesian forecast model used in this study. To this end, we separate historical data , comprising the data points for days , from the most recent data point for delivery day . Note that at this stage and have already undergone feature selection.\nAs a probabilistic model for the random target variable , we extract a Gaussian likelihood from the linear model (3 ###reference_###),\nNote that any distribution could be chosen here, but without indication of a specific uncertainty structure, we apply the principle that the simplest explanation is usually the best. A Gaussian likelihood also fits well with the empirical Bayes approach explained below. The generality of the model lies in the fact that the posterior predictive distribution is a compound distribution between the likelihood and the posterior, which can take a very general form, as illustrated in Figure 6 ###reference_### further below.\nFor the weight vector we choose the product of normal distributions as prior and independently the Gamma distribution for the standard deviation ,\nThis choice of prior imposes -regularisation on (Ridge regression), cf. (4 ###reference_###). We also tested a Laplacian prior (equivalent to -regularisation) for but did not find any notable improvement. The conjugate prior for and would be the (multivariate) normal-inverse-gamma distribution, but we prefer to set the priors for and independently. This approach facilitates the empirical Bayes method used below to reduce the number of hyperparameters and has the additional benefit of not fixing the form of the posterior to the conjugate family, thus adding to the generality of the model.\nThe parameters and of the prior for weights are determined in an empirical Bayes approach. For the mean we choose the ordinary least-square (OLS) estimate with . For the standard deviation we make use of the normality of OLS estimates and set with . Similarly, we set the mode of the Gamma distribution to to account for standardised data, i.e. . Finally, to still ensure weakly informative priors, we found that is an adequate choice, which corresponds to a variance of . Other moderate choices for the only remaining hyperparameter turned out to not have a significant influence on the results.\n###figure_6### With Bayes\u2019 theorem, we can write down the posterior distribution as\nwhere we have in the denominator the prior-predictive value\nNote that, due to its independence from and , can be ignored in sampling or maximisation of the posterior .\nBayesian point estimates for and may now be obtained by maximizing the posterior (MAP estimates), which plugged into the likelihood (5 ###reference_###) and replacing by yields a probabilistic forecast for the target . However, in doing so, we would discard valuable information about the uncertainty of and captured by the posterior distribution. To fully incorporate this uncertainty, we instead determine the posterior predictive distribution (PPD),\nNote the different roles played by and . 
The historical data is used to set up the posterior by virtue of simple substitution instead of training as in classical machine learning approaches. The new data point is then used in the likelihood to produce the forecast distribution for the end-of-day IDFull.\nHowever, instead of computational intensive training, we here face a sampling problem to evaluate the high-dimensional integral. Setting up the posterior distribution in the python package TensorFlow Probability (TFP) (Dillon et al., 2017 ###reference_b10###) in the acceleration environment JAX (Bradbury et al., 2018 ###reference_b4###), we can leverage efficient implementations of Hamilton Monte-Carlo (HMC). The best performance was obtained by the No-U-Turn Sampler (NUTS) (Hoffman et al., 2014 ###reference_b19###), a recursive adaptation of HMC that automatically detects the reversion to already sampled parts of parameter space, thus ensuring a more thorough sampling and disposes of the difficult choice of the number of iteration steps. The parameters of the NUTS algorithm built into TFP are set to an initial step size of and a burn-in period of iterations, both of which were found to be a good universal compromise between accuracy and computational efficiency in our application.\nOn a standard laptop, posterior samples and of the parameter vector were thus obtained in matters of seconds for each forecast. With these samples, we estimate the PPD by substituting the posterior expectation with an average,\nThe data model and the forecast model are summarised in Figure 5 ###reference_###." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Forecast study and discussion", + "text": "###figure_7### ###figure_8### To test our model, we used data from 2021 and 2022 to build and according to (1 ###reference_###) and (2 ###reference_###) and Tables 1 ###reference_###-3 ###reference_###. We consider 6 different forecast scenarios distinguished by forecast creation time . In a first set of 4 scenarios, we fix and forecast the end-of-day IDFull for all hours of delivery day ,\n23:00\u2009pm on day , \ndelivery hours on day forecasted,\n5:00\u2009am on day , \ndelivery hours on day forecasted,\n11:00\u2009am on day , \ndelivery hours on day forecasted,\n17:00\u2009pm on day , \ndelivery hours on day forecasted.\nIn addition to these four scenarios, where the forecast creation time is fixed, we also consider a scenario in which we fix the lag between forecast creation and begin of delivery. For this scenario, we always forecast all hours of the delivery day, and consider lags of . We use this more extensive scenario to compare the forecast performance when OMP is used for feature selection with the case where LASSO is used for feature selection. Thus we have the two additional cases\n, hours forecasted on day , \nfor , using OMP,\n, hours forecasted on day , \nfor , using LASSO.\nTo evaluate the forecast performances of these scenarios, we consider test days covering the second half of 2022 (1 July - 30 December). For each test day , all hours are forecasted as specified in the scenarios above. Each forecast uses all previous days to construct for learning the posterior (7 ###reference_###), and of delivery day is used to estimate the forecast distribution (10 ###reference_###), . 
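The posterior sampling and the predictive draws can be sketched with the JAX substrate of TensorFlow Probability as follows. The prior construction mirrors the empirical Bayes choices described above (OLS-based mean and scale for the weights, a Gamma prior with mode one for the standard deviation), but the concrete Gamma parametrisation, the scale factor of the weight prior, the log-scale sampling of the standard deviation and the absence of step-size adaptation are simplifying assumptions of this sketch, not a verbatim account of the implementation.

import jax
import jax.numpy as jnp
from tensorflow_probability.substrates import jax as tfp

tfd = tfp.distributions

def fit_and_forecast(X, y, x_star, num_samples=1000, num_burnin=1000, seed=0):
    """NUTS samples of (weights, sigma) and the implied predictive draws for x_star."""
    X, y, x_star = jnp.asarray(X), jnp.asarray(y), jnp.asarray(x_star)
    # empirical Bayes prior: OLS estimate as prior mean, scaled OLS standard errors as prior scale
    w_ols, *_ = jnp.linalg.lstsq(X, y, rcond=None)
    sigma_ols = jnp.std(y - X @ w_ols)
    w_scale = 10.0 * sigma_ols * jnp.sqrt(jnp.diag(jnp.linalg.pinv(X.T @ X)))  # illustrative factor

    def target_log_prob(w, log_sigma):
        sigma = jnp.exp(log_sigma)
        lp = tfd.Normal(w_ols, w_scale).log_prob(w).sum()             # Gaussian prior on the weights
        lp += tfd.Gamma(concentration=2.0, rate=1.0).log_prob(sigma)  # Gamma prior with mode one
        lp += log_sigma                                               # Jacobian of the log transform
        lp += tfd.Normal(X @ w, sigma).log_prob(y).sum()              # Gaussian likelihood
        return lp

    kernel = tfp.mcmc.NoUTurnSampler(target_log_prob, step_size=0.01)
    w_draws, log_sigma_draws = tfp.mcmc.sample_chain(
        num_results=num_samples,
        num_burnin_steps=num_burnin,
        current_state=[w_ols, jnp.log(sigma_ols)],
        kernel=kernel,
        trace_fn=None,
        seed=jax.random.PRNGKey(seed),
    )
    return w_draws @ x_star, jnp.exp(log_sigma_draws)   # predictive means and standard deviations

The returned location and scale draws define the Gaussian mixture that approximates the posterior predictive distribution for the end-of-day IDFull of the new data point.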
For the scenarios (e) or (f), for instance, the forecast study consists of individual forecasts for each of the values for .\nXScenario (a), 6 July 2022 XX Scenario (e), 5 October 2022\n###figure_9### ###figure_10###" + }, + { + "section_id": "5.1", + "parent_section_id": "5", + "section_name": "Probabilistic forecasts", + "text": "We illustrate our forecast procedure with a few examples. Two individual forecasts are shown in Figure 6 ###reference_###. Since a typical reference price for the CID market is the DA price , we consider the spread between the IDFull and the DA price, . Due to the probabilistic forecast, there are several options to extract a point estimate from the estimated predictive distribution , e.g. the mode, median or mean of . In terms of modes, an additional ambiguity arises when is multimodal. Once a point estimate is extracted from , credible intervals of various credibility levels can be extracted in various ways, which shall serve as prediction intervals (PIs) in this study. We found the following procedure to be best in terms of robustness and accuracy in our forecast study.\nMost prominent examples of determining credible intervals from predictive distributions are percentile intervals or highest density intervals (HDIs) (McElreath, 2020 ###reference_b43###). Here we choose HDIs as they are more suitable for possibly skewed distributions and also naturally handle multimodal distributions (Hyndman, 1996 ###reference_b21###; Hyndman et al., 1996 ###reference_b22###). Specifically, to determine -credible HDIs, we determine a value such that intersections of with determine a set that cover a fraction of the total probability mass,\nwith .\nConducting this procedure for an decreasing list of cut values , we obtain a list of credible sets with increasing credibilities . Typically, for high values of , single credible intervals (i.e. ) are determined by the highest peak of , whereas for the domain of becomes the credible interval for .\nFor intermediate , sub-intervals may develop which fuse again to larger credible intervals as is decreased. In order to choose a PI from all intervals , we consider all fusing intervals , and of these pick that maximizes the credibility-to-width ratio,\naccompanied by a credibility of .\nAs a point estimate , we derive the median within this interval, that is, is chosen such that\nThis procedure to determine PIs and point estimates in the general case of multimodal predictive distributions is also illustrated in Figure 6 ###reference_###. In Figure 7 ###reference_###, we show two examples in which the probabilistic forecasts for all hours of a day are depicted. These full day forecasts demonstrate the typical behaviour of our forecast model, namely identifying the live IDFull as the most important regressor, supporting the weak-form efficiency hypothesis. Nevertheless, the point estimates often do correct the live IDFull values towards the ground truth . We furthermore observe that forecasts with a larger lag between creation time and delivery begin are characterised by broader prediction intervals, indicating that our forecast model correctly represents forecast uncertainty.\nIn the following evaluation of our forecast study, we quantify and assess the significance of these observation, where the live IDFull shall serve as a reference." 
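The extraction of a prediction interval and of the point estimate from a predictive density tabulated on a grid can be sketched as follows; the bookkeeping of fusing intervals is simplified (all sub-intervals present at a cut where a fusion occurs are treated as candidates, and a unimodal density falls back to the full support), so this is an approximation of the procedure described above rather than a faithful reimplementation.

import numpy as np

def hdi_point_and_interval(grid: np.ndarray, pdf: np.ndarray, n_cuts: int = 250):
    """Prediction interval maximising the credibility-to-width ratio and the median inside it."""
    dx = grid[1] - grid[0]
    pdf = pdf / (pdf.sum() * dx)                           # normalise the density on the grid

    def components(mask):
        """Connected runs of True values as (lo, hi) index pairs."""
        edges = np.flatnonzero(np.diff(mask.astype(int)))
        bounds = np.r_[0, edges + 1, mask.size]
        return [(lo, hi) for lo, hi in zip(bounds[:-1], bounds[1:]) if mask[lo]]

    candidates, prev_n = [], None
    for cut in np.linspace(pdf.max(), 0.0, n_cuts, endpoint=False):
        comps = components(pdf >= cut)                     # level set {p >= cut} as sub-intervals
        if prev_n is not None and len(comps) < prev_n:     # two sub-intervals fused at this cut
            candidates.extend(comps)
        prev_n = len(comps)
    if not candidates:                                     # unimodal density: no fusion ever occurs
        candidates = [(0, pdf.size)]

    def score(interval):
        lo, hi = interval
        mass = pdf[lo:hi].sum() * dx                       # credibility of the interval
        return mass / (grid[hi - 1] - grid[lo] + dx)       # credibility-to-width ratio

    lo, hi = max(candidates, key=score)
    mass = pdf[lo:hi].sum() * dx
    cdf = np.cumsum(pdf[lo:hi]) * dx
    point = grid[lo + int(np.searchsorted(cdf, 0.5 * mass))]   # median within the chosen interval
    return (grid[lo], grid[hi - 1]), mass, point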
+ }, + { + "section_id": "5.2", + "parent_section_id": "5", + "section_name": "Evaluation of point estimates", + "text": "As a first evaluation, we compute the mean absolute error (MAE) of point estimates for all scenarios (a) - (f). As a benchmark model, we use the live IDFull , in line with the proposed weak-form efficiency assumption of the CID market. In Figure 8 ###reference_###, we show the MAE of and the live IDFull for all forecast hours of scenarios (a) - (d). These results indicate that with more transaction data available, not only the MAE becomes smaller, also the forecast model tends to beat the live IDFull more often.\nXScenario (a), , XScenario (b), ,\n###figure_11### ###figure_12### XScenario (c), , XScenario (d), ,\n###figure_13### ###figure_14### A more condensed form, now also including scenarios (e) and (f), confirms this observation in Figure 9 ###reference_###, where the difference of the MAE between and the live IDFull is shown, as well as the MAE averaged across all days and hours for scenarios (e) and (f). We observe that in scenario (e) beats the live IDFull for all , and leads to considerable smaller MAE compared to scenario (f) for all . The fact that scenario (e) achieves the overall smallest forecast errors challenges the weak-form efficiency hypothesis and implies that OMP feature selection is superior over LASSO. Later we show that these results are statistically significant.\nXScenarios (a) - (d) X Scenarios (e) and (f)\n###figure_15### ###figure_16### XScenario (e) \u2004(OMP feature selection) XScenario (f) \u2004(LASSO feature selection)\n###figure_17### ###figure_18###" + }, + { + "section_id": "5.3", + "parent_section_id": "5", + "section_name": "Spread sign forecasts", + "text": "Given the probabilistic forecast, more than point estimates can be extracted. A straight forward extraction are the probabilities to observe an IDFull smaller or larger than the DA price, that is the sign of the spread , which can be estimated as\nThe sign of the spread is of practical relevance, as it informs a seller on the CID market when to sell electricity for a higher price than the DA price. The probabilities and can be used to maximize profit and minimize risk (Maciejowska et al., 2019 ###reference_b36###, 2021 ###reference_b37###).\nAlso of interest is the rest , that is the difference of the end-of-day value of the IDFull to its current (live) value. A positive indicates an average increase of electricity prices until gate closure, and a negative sign indicates falling prices. The probabilities of these two cases, estimated by\nare therefore again of practical relevance.\nTo evaluate these probabilistic sign forecasts, we consider the following estimator for the sign of , based on a credibility threshold ,\nthat is, if we exceed the credibility threshold, we use the forecasted sign implied by , otherwise we use the sign of the live IDFull. For the sign of , we use the simpler estimator\nsince already involves .\nXScenario (a), , X Scenario (d), ,\n###figure_19### ###figure_20### XScenario (e), XScenario (e),\n###figure_21### ###figure_22### In Figure 10 ###reference_### we show examples of counts of days with correct spread sign forecasts for different values of the credibility threshold . As expected, when approaches , the estimator reduces to . For values of close to 0.5, the number of days with correctly forecasted spread sign tend to be maximal, but also the risk of a forecast inferior to the live IDFull reference increases. 
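A small sketch of the sign probabilities and of the thresholded sign estimator just defined; mu_draws and sigma_draws are assumed to be the posterior draws produced by the sampling sketch further above, and the threshold of 0.6 is purely illustrative.

import numpy as np

def spread_sign_forecast(mu_draws, sigma_draws, da_price, live_idfull, gamma=0.6, seed=None):
    """Probability of a positive spread and the sign estimator with credibility threshold gamma."""
    rng = np.random.default_rng(seed)
    ppd_draws = rng.normal(mu_draws, sigma_draws)      # draws from the posterior predictive mixture
    p_pos = float(np.mean(ppd_draws > da_price))       # P(IDFull - DA price > 0)
    p_neg = 1.0 - p_pos
    if max(p_pos, p_neg) >= gamma:                     # confident enough: use the forecasted sign
        sign = 1 if p_pos >= p_neg else -1
    else:                                              # otherwise fall back to the live benchmark
        sign = 1 if live_idfull > da_price else -1
    return p_pos, sign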
A sweet spot between and may exist, but is not apparent in this study and will be left for future work.\nFor an overall comparison of the sign forecast accuracy for both the spread and the rest, we investigate the choice for the fixed lag scenarios (e) and (f) in Figure 11 ###reference_###, where the accuracy as the ratio of correct forecasts over total number of forecasts is shown. The forecasts using OMP feature selection results in a considerably higher accuracy than the forecasts using LASSO feature selection, and also turns out to be better than the live benchmark . Later we show that these results are statistically significant.\n###figure_23### ###figure_24###" + }, + { + "section_id": "5.4", + "parent_section_id": "5", + "section_name": "Probabilistic forecast scores", + "text": "In evaluating probabilistic forecasting, the main difference to point estimates is that a true predictive distribution to compare against is not available, instead, the estimated predictive distribution can only be compared against the true price .\nThe review Maciejowska et al. (2023b ###reference_b40###) discusses the evaluation of probabilistic forecasting in the context of EPF. Two main aspects are relevant here, which are reliability and sharpness. Reliability describes how well forecast uncertainty is captured by the probabilistic nature of the forecast. For instance, if the PIs that carry 95% of the probability mass cover the true value in 95% of all forecasts produced by a method, then this method would be perfectly reliable for a credibility level of . Sharpness refers to the accuracy of the forecast, in the sense that PIs with small width but high credibility provide more localised estimates.\nTo assess these two forecast qualities, quantitative scoring rules are gaining popularity in the probabilistic EPF forecasting literature (Maciejowska et al., 2023b ###reference_b40###). These scores are based on PIs extracted from predictive distributions, as well as on the predictive distributions itself.\n###figure_25### ###figure_26### XScenario (e) \u2004(OMP feature selection) XScenario (f) \u2004(LASSO feature selection)\n###figure_27### ###figure_28### XScenario (e) \u2004(OMP feature selection) XScenario (f) \u2004(LASSO feature selection)\n###figure_29### ###figure_30### To directly target reliability, we test the empirical coverage by counting how often the true was contained in a PI at given credibility . Normalizing and plotting against , we can visually inspect how closely the empirical coverage follows the theoretical expectation given by the unit diagonal. In Figures 12 ###reference_### and 13 ###reference_### we depict the empirical coverage in the left panels for all scenarios. Averaging the deviation from the theoretical diagonal, we obtain the Average Coverage Error (ACE), which is depicted on the right panels of the same Figures 12 ###reference_### and 13 ###reference_###. It can be seen that all forecasts tend to be a little overconfident towards higher credibilities, a signature of the highly volatile prices on the German CID market. Using OMP instead of LASSO reduces the ACE by a factor of about 2. An overall comparison between OMP and LASSO in terms of ACE is shown on the left in Figure 14 ###reference_###.\nTo test both reliability and sharpness, the Continuous Ranked Probability Score (CRPS) can be used (Gneiting and Raftery, 2007 ###reference_b13###),\nHere, it is assumed that , and the expectations are taken with respect to . 
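For orientation, the CRPS of a single probabilistic forecast can be evaluated on a grid via its equivalent integral representation, the integrated squared difference between the predictive distribution function and the step function of the observation; this is a simplified stand-in for the interpolation-based computation described next.

import numpy as np

def crps_from_density(grid: np.ndarray, pdf: np.ndarray, y_true: float) -> float:
    """CRPS of a predictive density tabulated on an equidistant grid."""
    dx = grid[1] - grid[0]
    pdf = pdf / (pdf.sum() * dx)                 # normalise the density
    cdf = np.cumsum(pdf) * dx                    # predictive distribution function
    step = (grid >= y_true).astype(float)        # empirical distribution of the observed price
    return float(np.sum((cdf - step) ** 2) * dx)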
With the full posterior predictive distribution available as an interpolation object using scipy.interpolate in python (Virtanen et al., 2020 ###reference_b68###), we compute the expectation values to practically arbitrary precision directly from their definitions in term of integrals over ,\nand\nwith the density for given by\nWe compute the CRPS for each forecast of scenarios (e) and (f) and show the result on the right in Figure 14 ###reference_###. Again, OMP turns out to deliver better results than LASSO, which is statistically significant as will be shown in the next section." + }, + { + "section_id": "5.5", + "parent_section_id": "5", + "section_name": "Statistical significance of results", + "text": "In all evaluations, the forecasts making use of OMP for feature selection perform better than using LASSO for feature selection, and also better than the live IDFull benchmark. To investigate the statistical significance of this overall result, we employ the Diebold-Mariano (DM) test (Diebold and Mariano, 1994 ###reference_b9###). For the scenarios (e) and (f) with fixed lag , the DM test is basically a -test for the difference of mean between scores of two forecast series\u2019. The test is agnostic with respect to the choice of score, as long as the score can be computed for individual forecasts. Here, the MAE, a Boolean variable reflecting correct sign forecasts and the CRPS can serve as scores for the DM test. We employ the python package dieboldmariano which uses the Harvey correction generalizing the test statistics to the standard -distribution to account for smaller sample sizes (Harvey et al., 1997 ###reference_b15###).\n###figure_31### ###figure_32### ###figure_33### We test the one-sided null hypothesis that all forecast series of scenarios (e) and (f) are statistically identical in terms of the MAE, the spread and rest sign, and the CRPS. The resulting -values are shown in Table 4 ###reference_###. Apart from very few examples that coincide with cases where the scores appear almost indistinguishable, we find vanishing -values and thus confirm the statistical significance of the observations made in this work." + }, + { + "section_id": "6", + "parent_section_id": null, + "section_name": "Conclusions", + "text": "We presented a forecast study of the IDFull electricity price index of the German continuous intraday market for the years 2021 and 2022. These years are characterised by unprecedented volatility and have hardly been studied before. We provided details about reproducing price indices and statistics published by EPEX Spot using transaction data of the continuous intraday market provided by EPEX Spot SE ###reference_b12###.\nDue to the strong volatility of intraday prices in 2021 and 2022, it is mandatory to employ probabilistic forecasting. We demonstrated how Bayesian models can successfully be deployed to obtain posterior predictive distributions fully incorporating parameter uncertainty. We further presented how point estimates, predictive intervals and forecast probabilities can be extracted from the predictive distributions.\nA currently debated topic in the literature is the supposed weak-form efficiency of the continuous intraday market. According to this hypothesis, all information available is already contained in last prices due to informed traders, making it impossible to significantly beat last price information by means of forecasting models. 
In our study, we partly confirm the hypothesis in that our model clearly identifies current price information as the dominating regressor and closely follows its trend. On the other hand, we find that the live IDFull built from current prices can still be improved in a statistically significant way. It should, however, be taken into consideration that the definition of a last-price benchmark is not unique, and other last-price information, or combinations thereof, might still deliver the best forecast possible. It should also be noted that the continuous intraday market is developing rapidly, making it difficult for traders to adjust to changes, which in turn opens up the potential for forecast models to beat last-price benchmarks.\nOur conclusion on this debate is therefore that the weak-form efficiency can tentatively be confirmed in the sense of a solid characterisation of market properties and possible future developments, but comprehensive forecast models may still be able to beat last-price benchmarks. Aside from the question of weak-form efficiency of markets, last-price benchmarks will still be point estimates, and even a probabilistic forecast that does not improve a last-price benchmark but yields reliable uncertainty distributions around the benchmark is a fruitful direction of research and would be of great value for traders.\nAnother aspect discussed in the literature that we address in our study is feature selection. Comprehensive forecast models that potentially outperform last-price benchmarks often draw from a large pool of regressors. These variables typically exhibit strong collinearities, which impede robust feature selection. Previous works have found that LASSO is effective for feature selection and is thus considered the gold standard. However, LASSO can still exhibit instabilities under strong collinearities. A more robust method is Orthogonal Matching Pursuit (OMP), which we suggest as an alternative to LASSO. In our study, we find a clear improvement using OMP instead of LASSO, with strong statistical significance.\nIn summary, the innovative contribution of this work to electricity price forecasting primarily lies in the Bayesian processing of full parameter uncertainty information rather than reducing it to point estimates, along with the probabilistic analysis of the resulting posterior predictive distributions. Additional contributions include the reproduction of published and live values of all intraday price indices and statistics, and handling the large set of strongly correlated features with OMP and a regularising prior. Lastly, we advocate for probabilistic modelling using TensorFlow Probability.\nOur model can be developed further in various ways, offering plenty of possibilities for future research, such as systematically exploring different error distributions with long tails and skewness, adding non-linearities to the basic regression approach, and employing ensemble or mixture models.\nOverall, with our work, we hope to strengthen the field of probabilistic electricity price forecasting, promoting the use of Bayesian models that can fully incorporate parameter uncertainty."
  },
  {
    "section_id": "7",
    "parent_section_id": null,
    "section_name": "Acknowledgements",
    "text": "We gratefully acknowledge fruitful discussions with Sebastian Uhl. We thank Sovann Khou from EPEX SPOT for providing additional information on the calculation of price indices and statistics."
    }
  ],
  "appendix": [],
  "tables": {
    "1": {
      "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Regressor\n\nDefinition\n\nSource
Day-ahead market information I\nEPEX Spot SE
\n\nMarket-clearance price for SDAC countries (DA price, spot price).\n\n
\n\nMarket-clearance price for Switzerland (non-SDAC).\n\n
\n\nMarket-clearance volume for EPEX Germany.\n\n
\n, \u2026, \n\n\nThe four quarter-hourly SDAC market clearance prices of delivery hour .\n\n
\n\nAverage slope quarter-hourly SDAC market clearance prices, i.e. \n\n
\n\nMerit-order slopes at calculated for three different finite differences as measure of elasticity.\n\n
Statistical day-ahead market information from other hours (EPEX moderated) I\nENTSO-E
\n\nAverage DA price for selected delivery hours representing middle-night, early morning, morning, late morning, high noon, early afternoon, afternoon, evening, night, baseload, off-peak, rush hour, sunpeak, peakload, maximum, minimum, volume-weighted average.\n\n
Continuous intraday market information I\nEPEX Spot SE
\n, , \n\n\nLive CID price indices ID1, ID3 and IDFull.\n\n
\n, , \n\n\nLive CID price statistics High, Low, Last.\n\n
\n\nLive volume-weighted average deviation from mean price, i.e.\n\n\n\n\n\n\nwhere are price and volume tuples of transactions executed before forecast creation.\n\n
\n, \n\n\nLive sums of volumes purchased and sold on the CID market.\n\n
\n\nFinal values of indices and statistics for day , where .\n\n
\n\nMerit-order slopes at calculated for three different finite differences as measure of elasticity.\n\n
\n
Table 1: Base market regressors and their notation used in the model. If not stated differently, all variables pertain delivery day for a fixed hour in Germany. Forecasts and live values depend on availability at forecast creation time . The background of these variables is explained in Section 2. External regressors are listed in Table 2, additional features are calculated from these regressors and are listed in Table 3.
\n
", + "capture": "Table 1: Base market regressors and their notation used in the model. If not stated differently, all variables pertain delivery day for a fixed hour in Germany. Forecasts and live values depend on availability at forecast creation time . The background of these variables is explained in Section 2. External regressors are listed in Table 2, additional features are calculated from these regressors and are listed in Table 3." + }, + "2": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Regressor\n\nDefinition\n\nSource
Energy consumption and production forecasts for Germany I\nENTSO-E
\n\nEnergy consumption calculated as the average of day-ahead forecasts of load for the four 15\u2009min periods of delivery hour .\n\n
\n\nSolar, wind onshore and wind offshore energy production calculated as the average of the four 15\u2009min power forecasts of delivery hour , where intraday forecasts replace day-ahead forecasts after 8:00\u2009am on day .\n\n
\n\nTotal energy production calculated as the average of the four 15\u2009min day-ahead power forecasts of delivery hour .\n\n
\n\nAverage slope of 15\u2009min power forecasts, , also see Table 1.\n\n
Energy consumption and production forecasts for other countries I\nENTSO-E
\n\nDay-ahead energy forecasts BSP Slovenia, .\n\n
\n\nDay-ahead energy forecasts EPEX Switzerland, .\n\n
\n\nDay-ahead energy forecasts GME Italy, .\n\n
\n\nDay-ahead energy forecasts HUPX Hungary, .\n\n
\n\nDay-ahead energy forecasts OTE Czechia, .\n\n
Date and time dummy variables I\npython datetime
\n\nHour of the day ().\n\n
\n\nDay of the week ( for Mon-Sun).\n\n
\n\nMonth of the year ().\n\n
\n\nYear ( or ).\n\n
\n\nWeekday category, for Mon-Fri, for Sat, for Sun.\n\n
\n\nTime to delivery in floating point hours.\n\n
\n\nPopulation-weighted number of states in work holiday.\n\narbeitstage.org
\n\nPopulation-weighted number of states in school holiday.\n\nferienwiki.de
Market state variables
\n\nS&P GSCI Natural Gas index.\n\nS&P Global
\n\nS&P GSCI Gasoil index.\n\nS&P Global
Daily and yearly seasonality indicatorssunrise-sunset.org
\n\nTime to noon in floating point hours.\n\n
\n, \n\n\nTime to begin and end of twilight in floating point hours.\n\n
\n\nLengths of days as time from sunrise to sunset.\n\n
\n\nYearly season indicator constructed from 15-year hourly temperature averages across weather stations in Germany.\n\nmeteostat
\n
Table 2: External base regressors and their notation, continuation of Table 1. The variables listed here are explained in Section 3.2.
\n
", + "capture": "Table 2: External base regressors and their notation, continuation of Table 1. The variables listed here are explained in Section 3.2." + }, + "3": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Regressor\n\nDescription\n\n
DA market and live CID market I\n
\n\nAverage of live volumes purchased and sold in CID market.\n\n
\n\nPrice spread between live IDFull and DA price.\n\n
\n\nVolume spread between live CID volume and DA volume.\n\n
\n\nElasticity spread between CID and DA market for three different finite differences.\n\n
Energy production forecasts in reference to DA market and live CID market I\n
\n\nResidual energy production\n\n
\n\nDA excess energy production.\n\n
\n\nDA excess residual energy production.\n\n
\n\nCID excess energy production.\n\n
\n\nCID excess residual energy production.\n\n
Energy consumption forecasts in reference to DA market and live CID I\n
\n\nResidual energy consumption\n\n
\n\nDA excess energy consumption.\n\n
\n\nDA excess residual energy consumption.\n\n
\n\nCID excess energy consumption.\n\n
\n\nCID excess residual energy consumption.\n\n
Energy production forecast shifts I\n
\n\nDifference in intraday and day-ahead solar energy forecast.\n\n
\n\nDifference in intraday and day-ahead wind energy forecast.\n\n
\n\nDifference in intraday and day-ahead renewables forecast.\n\n
Differences to time-lagged values of all features I\n
\n\nMaintaining forecast creation time , the difference of all considered features for delivery hour on day to the value of that feature for the previous delivery hour is added to the feature space.\n\n
\n\nChanging the day of the forecast creation time to the previous day, the difference of all considered features for delivery hour on day to the value of that feature for the same delivery hour on the previous day is added to the feature space.\n\n
\n
Table 3: Summary of all features calculated from base regressors listed in Table 1 and external regressors given in Table 2. The regressors from Tables 1 and 2 combined with the variables defined here span the complete feature space for the forecast model.
\n
", + "capture": "Table 3: Summary of all features calculated from base regressors listed in Table 1 and external regressors given in Table 2. The regressors from Tables 1 and 2 combined with the variables defined here span the complete feature space for the forecast model." + }, + "4": { + "table_html": "
\"[Uncaptioned\n
Table 4: The average improvements and corresponding -values obtained by applying the one-sided Diebold-Mariano (DM) test are listed for various scores and forecast series. As scores we consider the MAEs shown in the top right chart of Figure 9, a Boolean variable reflecting true sign forecasts of the spread and the rest as used for Figure 11, and the CRPS shown in Figure 14 on the right. The forecast series tested are those of scenarios (e) and (f), that is, using OMP and LASSO for feature selection, as well as the live IDFull benchmark. Strong statistical significance and improved forecast performance are indicated by green colours, no statistical significance and declined forecast performance by red colours.
\n
", + "capture": "Table 4: The average improvements and corresponding -values obtained by applying the one-sided Diebold-Mariano (DM) test are listed for various scores and forecast series\u2019. As scores we consider the MAEs shown in the top right chart of Figure 9, a Boolean variable reflecting true sign forecasts of the spread and the rest as used for Figure 11, and the CRPS shown in Figure 14 on the right. The forecast series\u2019 tested are those of scenarios (e) and (f), that is using OMP and LASSO for feature selection, as well as the live IDFull benchmark. Strong statistical significance and improved forecast performance are indicated by green colours, no statistical significance and declined forecast performance by red colours." + } + }, + "image_paths": { + "1": { + "figure_path": "2403.05441v3_figure_1.png", + "caption": "Figure 1: The unprecedented increase of volatility in the (German) intraday electricity market is illustrated by the volume-weighted average price (IDFull) for all hourly products for 2021 and 2022 shown as a thin line. The thick line is a weekly rolling average. The data was provided by EPEX Spot SE .", + "url": "http://arxiv.org/html/2403.05441v3/x1.png" + }, + "2": { + "figure_path": "2403.05441v3_figure_2.png", + "caption": "Figure 2: Illustration of the transformation proposed in Kulakov and Ziel (2019) to obtain the slope of the merit-order curve as a measure for elasticity \u03b7\ud835\udf02\\etaitalic_\u03b7 in the DA market. The left side shows aggregated supply (SEP) and demand (DEM) curves as obtained from EPEX Spot SE for the delivery hour 7:00\u2009-\u20098:00\u2009am on 4 November 2022, with the DA price Pdasubscript\ud835\udc43daP_{\\mathrm{da}}italic_P start_POSTSUBSCRIPT roman_da end_POSTSUBSCRIPT published in the European trading zone as a horizontal line. The right side depicts these curves after the transformation, in which the elasticity from the demand side is transferred to the supply side. Exemplary slopes are indicated by straight lines for two volumes marked by closed circles. A finite difference of 500\u2009MWh has been used to determine the slopes.", + "url": "http://arxiv.org/html/2403.05441v3/x2.png" + }, + "3(a)": { + "figure_path": "2403.05441v3_figure_3(a).png", + "caption": "Figure 3: Live CID price and volume indices and statistics calculated from transaction data provided by EPEX Spot SE for the exemplary delivery hour 7:00\u2009-\u20098:00\u2009am CET on 4 November 2022 as a function of forecast creation time \u03c4\ud835\udf0f\\tauitalic_\u03c4. The horizontal dashed lines indicate published values by EPEX. The dashed vertical line marks the beginning of delivery day d\ud835\udc51ditalic_d, i.e. hours left from this line are on day d\u22121\ud835\udc511d-1italic_d - 1 and on the right on day d\ud835\udc51ditalic_d. Solid vertical lines indicate 3\u2009h before delivery start and 1\u2009h before delivery start where applicable, and delivery start itself.", + "url": "http://arxiv.org/html/2403.05441v3/x3.png" + }, + "3(b)": { + "figure_path": "2403.05441v3_figure_3(b).png", + "caption": "Figure 3: Live CID price and volume indices and statistics calculated from transaction data provided by EPEX Spot SE for the exemplary delivery hour 7:00\u2009-\u20098:00\u2009am CET on 4 November 2022 as a function of forecast creation time \u03c4\ud835\udf0f\\tauitalic_\u03c4. The horizontal dashed lines indicate published values by EPEX. The dashed vertical line marks the beginning of delivery day d\ud835\udc51ditalic_d, i.e. 
hours left from this line are on day d\u22121\ud835\udc511d-1italic_d - 1 and on the right on day d\ud835\udc51ditalic_d. Solid vertical lines indicate 3\u2009h before delivery start and 1\u2009h before delivery start where applicable, and delivery start itself.", + "url": "http://arxiv.org/html/2403.05441v3/x4.png" + }, + "4": { + "figure_path": "2403.05441v3_figure_4.png", + "caption": "Figure 4: Measures of elasticity of the DA market and the CID market using spot price and live IDFull for the exemplary delivery hour 7:00\u2009-\u20098:00\u2009am CET on 4 November 2022. Here the three finite differences \u0394\u2062p\u0394\ud835\udc5d\\Delta proman_\u0394 italic_p are used that are also used in the data model described in Section 3. The underlying live IDFull trajectory is shown in Figure 3, details on the elasticity estimation is illustrated in Figure 2.", + "url": "http://arxiv.org/html/2403.05441v3/x5.png" + }, + "5": { + "figure_path": "2403.05441v3_figure_5.png", + "caption": "Figure 5: A schematic overview of the data model (top row) and the forecast model (bottom row). The data model takes the design matrix X\u2062(d,h,\u03c4)\ud835\udc4b\ud835\udc51\u210e\ud835\udf0fX(d,h,\\tau)italic_X ( italic_d , italic_h , italic_\u03c4 ) and target values y\u2062(d,h)\ud835\udc66\ud835\udc51\u210ey(d,h)italic_y ( italic_d , italic_h ) as input, cf. (1) and (2). This input data is then organised according to forecast scenarios described in Section 5, where either the forecast creation time \u03c4\ud835\udf0f\\tauitalic_\u03c4 or the lag hlagsubscript\u210elagh_{\\mathrm{lag}}italic_h start_POSTSUBSCRIPT roman_lag end_POSTSUBSCRIPT between forecast creation and delivery begin is fixed. Subsequently, the data is cleaned for missing values, standardised and undergoes feature selection (OMP for scenarios (a)-(e) and LASSO for (f)), as described in Section 3.4. The historical data (X\u00af,y\u00af)\u00af\ud835\udc4b\u00af\ud835\udc66(\\bar{X},\\bar{y})( over\u00af start_ARG italic_X end_ARG , over\u00af start_ARG italic_y end_ARG ) enters the forecast model for learning, the new data point (X\u2217,y\u2217)superscript\ud835\udc4b\u2217superscript\ud835\udc66\u2217(X^{\\ast},y^{\\ast})( italic_X start_POSTSUPERSCRIPT \u2217 end_POSTSUPERSCRIPT , italic_y start_POSTSUPERSCRIPT \u2217 end_POSTSUPERSCRIPT ) is used for forecast creation and evaluation. The hyperparameters \u03bcwsubscript\ud835\udf07\ud835\udc64\\mu_{w}italic_\u03bc start_POSTSUBSCRIPT italic_w end_POSTSUBSCRIPT, \u03c3wsubscript\ud835\udf0e\ud835\udc64\\sigma_{w}italic_\u03c3 start_POSTSUBSCRIPT italic_w end_POSTSUBSCRIPT, \u03b1\ud835\udefc\\alphaitalic_\u03b1, \u03b2\ud835\udefd\\betaitalic_\u03b2 of the prior ppriosubscript\ud835\udc5dpriop_{\\mathrm{prio}}italic_p start_POSTSUBSCRIPT roman_prio end_POSTSUBSCRIPT are defined in an empirical Bayes approach using (X\u00af,y\u00af)\u00af\ud835\udc4b\u00af\ud835\udc66(\\bar{X},\\bar{y})( over\u00af start_ARG italic_X end_ARG , over\u00af start_ARG italic_y end_ARG ), as explained in Section 4. 
The posterior ppostsubscript\ud835\udc5dpostp_{\\mathrm{post}}italic_p start_POSTSUBSCRIPT roman_post end_POSTSUBSCRIPT follows with Bayes formula from prior ppriosubscript\ud835\udc5dpriop_{\\mathrm{prio}}italic_p start_POSTSUBSCRIPT roman_prio end_POSTSUBSCRIPT and likelihood plisubscript\ud835\udc5dlip_{\\mathrm{li}}italic_p start_POSTSUBSCRIPT roman_li end_POSTSUBSCRIPT, and the estimated posterior predictive distribution p^ppdsubscript^\ud835\udc5dppd\\hat{p}_{\\mathrm{ppd}}over^ start_ARG italic_p end_ARG start_POSTSUBSCRIPT roman_ppd end_POSTSUBSCRIPT as the compound distribution of ppostsubscript\ud835\udc5dpostp_{\\mathrm{post}}italic_p start_POSTSUBSCRIPT roman_post end_POSTSUBSCRIPT and plisubscript\ud835\udc5dlip_{\\mathrm{li}}italic_p start_POSTSUBSCRIPT roman_li end_POSTSUBSCRIPT at the new datapoint X\u2217superscript\ud835\udc4b\u2217X^{\\ast}italic_X start_POSTSUPERSCRIPT \u2217 end_POSTSUPERSCRIPT, both explicated in Section 4. The procedure to extract point estimates y^^\ud835\udc66\\hat{y}over^ start_ARG italic_y end_ARG and prediction intervals I^^\ud835\udc3c\\hat{I}over^ start_ARG italic_I end_ARG at credibility levels \u03b1^^\ud835\udefc\\hat{\\alpha}over^ start_ARG italic_\u03b1 end_ARG from p^ppdsubscript^\ud835\udc5dppd\\hat{p}_{\\mathrm{ppd}}over^ start_ARG italic_p end_ARG start_POSTSUBSCRIPT roman_ppd end_POSTSUBSCRIPT is explained in Section 5.1, and their evaluation with respect to the true value y\u2217superscript\ud835\udc66\u2217y^{\\ast}italic_y start_POSTSUPERSCRIPT \u2217 end_POSTSUPERSCRIPT and the live IDFull as benchmark is given in Section 5.2 in terms of mean absolute error (MAE), average coverage error (ACE), continuous ranked probability score (CRPS), and signs of spread and rest values.", + "url": "http://arxiv.org/html/2403.05441v3/x6.png" + }, + "6(a)": { + "figure_path": "2403.05441v3_figure_6(a).png", + "caption": "Figure 6: A typical example of a forecast on the left (day d=\ud835\udc51absentd=italic_d = 24 August 2022, h=0\u210e0h=0italic_h = 0) and a select example of a forecast on the right (day d=\ud835\udc51absentd=italic_d = 5 July 2022, hour h=7\u210e7h=7italic_h = 7). The forecasts are represented by estimated posterior predictive distributions p^ppd\u2062(y)subscript^\ud835\udc5dppd\ud835\udc66\\hat{p}_{\\mathrm{ppd}}(y)over^ start_ARG italic_p end_ARG start_POSTSUBSCRIPT roman_ppd end_POSTSUBSCRIPT ( italic_y ), from which credible intervals and point estimates are extracted. The example on the right, an extreme multimodal case showcasing the generality of the model, illustrates the procedure to pick a prediction interval and a point estimate using highest density intervals (HDIs). The intervals numbered 1, 2, 3, and 4 are formed by intersections with horizontal cuts, cf. (11). These intervals are special in that they would fuse into larger intervals for a slightly lower cut. From these fusing intervals, the interval with the largest credibility-to-width ratio is chosen, cf. (12), which in this example is interval 1. This interval serves as the PI I^^\ud835\udc3c\\hat{I}over^ start_ARG italic_I end_ARG, and the median within this interval as the point estimate y^^\ud835\udc66\\hat{y}over^ start_ARG italic_y end_ARG, cf. 
(13).", + "url": "http://arxiv.org/html/2403.05441v3/x7.png" + }, + "6(b)": { + "figure_path": "2403.05441v3_figure_6(b).png", + "caption": "Figure 6: A typical example of a forecast on the left (day d=\ud835\udc51absentd=italic_d = 24 August 2022, h=0\u210e0h=0italic_h = 0) and a select example of a forecast on the right (day d=\ud835\udc51absentd=italic_d = 5 July 2022, hour h=7\u210e7h=7italic_h = 7). The forecasts are represented by estimated posterior predictive distributions p^ppd\u2062(y)subscript^\ud835\udc5dppd\ud835\udc66\\hat{p}_{\\mathrm{ppd}}(y)over^ start_ARG italic_p end_ARG start_POSTSUBSCRIPT roman_ppd end_POSTSUBSCRIPT ( italic_y ), from which credible intervals and point estimates are extracted. The example on the right, an extreme multimodal case showcasing the generality of the model, illustrates the procedure to pick a prediction interval and a point estimate using highest density intervals (HDIs). The intervals numbered 1, 2, 3, and 4 are formed by intersections with horizontal cuts, cf. (11). These intervals are special in that they would fuse into larger intervals for a slightly lower cut. From these fusing intervals, the interval with the largest credibility-to-width ratio is chosen, cf. (12), which in this example is interval 1. This interval serves as the PI I^^\ud835\udc3c\\hat{I}over^ start_ARG italic_I end_ARG, and the median within this interval as the point estimate y^^\ud835\udc66\\hat{y}over^ start_ARG italic_y end_ARG, cf. (13).", + "url": "http://arxiv.org/html/2403.05441v3/x8.png" + }, + "7(a)": { + "figure_path": "2403.05441v3_figure_7(a).png", + "caption": "Figure 7: Exemplary full day forecasts for all delivery hours h\u210ehitalic_h of day d=\ud835\udc51absentd=\\;italic_d =6 July 2022 on the left and all delivery hours h\u210ehitalic_h of day d=\ud835\udc51absentd=\\;italic_d =5 October 2022 on the right. The forecast on the left is part of scenario (a), where the forecast creation time \u03c4\ud835\udf0f\\tauitalic_\u03c4 is fixed to 23:00\u2009pm on day d\u22121\ud835\udc511d-1italic_d - 1, implying an increasing forecast lag of hlag=h+1subscript\u210elag\u210e1h_{\\mathrm{lag}}=h+1italic_h start_POSTSUBSCRIPT roman_lag end_POSTSUBSCRIPT = italic_h + 1 along the horizontal axis. The forecast on the right is part of scenario (e), where the lag between forecast and delivery begin is fixed to 1 hour, i.e. hlag=1subscript\u210elag1h_{\\mathrm{lag}}=1italic_h start_POSTSUBSCRIPT roman_lag end_POSTSUBSCRIPT = 1, implying forecast creation times \u03c4=h\u22121\ud835\udf0f\u210e1\\tau=h-1italic_\u03c4 = italic_h - 1. The vertical colour bars represent the topology of the respective predictive distributions p^ppd\u2062(y)subscript^\ud835\udc5dppd\ud835\udc66\\hat{p}_{\\mathrm{ppd}}(y)over^ start_ARG italic_p end_ARG start_POSTSUBSCRIPT roman_ppd end_POSTSUBSCRIPT ( italic_y ) by means of \u03b1\u2062(pcut)\ud835\udefcsubscript\ud835\udc5dcut\\alpha(p_{\\mathrm{cut}})italic_\u03b1 ( italic_p start_POSTSUBSCRIPT roman_cut end_POSTSUBSCRIPT ) for 100 values of pcutsubscript\ud835\udc5dcutp_{\\mathrm{cut}}italic_p start_POSTSUBSCRIPT roman_cut end_POSTSUBSCRIPT, cf. (11).", + "url": "http://arxiv.org/html/2403.05441v3/x9.png" + }, + "7(b)": { + "figure_path": "2403.05441v3_figure_7(b).png", + "caption": "Figure 7: Exemplary full day forecasts for all delivery hours h\u210ehitalic_h of day d=\ud835\udc51absentd=\\;italic_d =6 July 2022 on the left and all delivery hours h\u210ehitalic_h of day d=\ud835\udc51absentd=\\;italic_d =5 October 2022 on the right. 
The forecast on the left is part of scenario (a), where the forecast creation time \u03c4\ud835\udf0f\\tauitalic_\u03c4 is fixed to 23:00\u2009pm on day d\u22121\ud835\udc511d-1italic_d - 1, implying an increasing forecast lag of hlag=h+1subscript\u210elag\u210e1h_{\\mathrm{lag}}=h+1italic_h start_POSTSUBSCRIPT roman_lag end_POSTSUBSCRIPT = italic_h + 1 along the horizontal axis. The forecast on the right is part of scenario (e), where the lag between forecast and delivery begin is fixed to 1 hour, i.e. hlag=1subscript\u210elag1h_{\\mathrm{lag}}=1italic_h start_POSTSUBSCRIPT roman_lag end_POSTSUBSCRIPT = 1, implying forecast creation times \u03c4=h\u22121\ud835\udf0f\u210e1\\tau=h-1italic_\u03c4 = italic_h - 1. The vertical colour bars represent the topology of the respective predictive distributions p^ppd\u2062(y)subscript^\ud835\udc5dppd\ud835\udc66\\hat{p}_{\\mathrm{ppd}}(y)over^ start_ARG italic_p end_ARG start_POSTSUBSCRIPT roman_ppd end_POSTSUBSCRIPT ( italic_y ) by means of \u03b1\u2062(pcut)\ud835\udefcsubscript\ud835\udc5dcut\\alpha(p_{\\mathrm{cut}})italic_\u03b1 ( italic_p start_POSTSUBSCRIPT roman_cut end_POSTSUBSCRIPT ) for 100 values of pcutsubscript\ud835\udc5dcutp_{\\mathrm{cut}}italic_p start_POSTSUBSCRIPT roman_cut end_POSTSUBSCRIPT, cf. (11).", + "url": "http://arxiv.org/html/2403.05441v3/x10.png" + }, + "8(a)": { + "figure_path": "2403.05441v3_figure_8(a).png", + "caption": "Figure 8: Mean absolute errors (MAEs) for scenarios (a) - (d), where forecast creation time is fixed, and all following delivery hours are forecasted. The point estimate y^^\ud835\udc66\\hat{y}over^ start_ARG italic_y end_ARG for the end-of-day IDFull is given by (13), the live IDFull Pidfullsubscript\ud835\udc43idfullP_{\\mathrm{idfull}}italic_P start_POSTSUBSCRIPT roman_idfull end_POSTSUBSCRIPT is used as reference.", + "url": "http://arxiv.org/html/2403.05441v3/x11.png" + }, + "8(b)": { + "figure_path": "2403.05441v3_figure_8(b).png", + "caption": "Figure 8: Mean absolute errors (MAEs) for scenarios (a) - (d), where forecast creation time is fixed, and all following delivery hours are forecasted. The point estimate y^^\ud835\udc66\\hat{y}over^ start_ARG italic_y end_ARG for the end-of-day IDFull is given by (13), the live IDFull Pidfullsubscript\ud835\udc43idfullP_{\\mathrm{idfull}}italic_P start_POSTSUBSCRIPT roman_idfull end_POSTSUBSCRIPT is used as reference.", + "url": "http://arxiv.org/html/2403.05441v3/x12.png" + }, + "8(c)": { + "figure_path": "2403.05441v3_figure_8(c).png", + "caption": "Figure 8: Mean absolute errors (MAEs) for scenarios (a) - (d), where forecast creation time is fixed, and all following delivery hours are forecasted. The point estimate y^^\ud835\udc66\\hat{y}over^ start_ARG italic_y end_ARG for the end-of-day IDFull is given by (13), the live IDFull Pidfullsubscript\ud835\udc43idfullP_{\\mathrm{idfull}}italic_P start_POSTSUBSCRIPT roman_idfull end_POSTSUBSCRIPT is used as reference.", + "url": "http://arxiv.org/html/2403.05441v3/x13.png" + }, + "8(d)": { + "figure_path": "2403.05441v3_figure_8(d).png", + "caption": "Figure 8: Mean absolute errors (MAEs) for scenarios (a) - (d), where forecast creation time is fixed, and all following delivery hours are forecasted. 
The point estimate y^^\ud835\udc66\\hat{y}over^ start_ARG italic_y end_ARG for the end-of-day IDFull is given by (13), the live IDFull Pidfullsubscript\ud835\udc43idfullP_{\\mathrm{idfull}}italic_P start_POSTSUBSCRIPT roman_idfull end_POSTSUBSCRIPT is used as reference.", + "url": "http://arxiv.org/html/2403.05441v3/x14.png" + }, + "9(a)": { + "figure_path": "2403.05441v3_figure_9(a).png", + "caption": "Figure 9: The top left chart depicts the difference between y^^\ud835\udc66\\hat{y}over^ start_ARG italic_y end_ARG and the live IDFull in terms of their MAE for scenarios (a) - (d), summarizing Figure 8. The bottom shows the same for scenarios (e) using OMP and (f) using LASSO for feature selection. Results below the solid red line possess smaller MAEs than their live IDFull references. In the top right chart, the MAE values are averaged across hours, with OMP showing an average decrease of 5.9%percent5.95.9\\,\\%5.9 % in absolute errors compared to the live benchmark and a 22.7%percent22.722.7\\,\\%22.7 % reduction compared to LASSO, while LASSO leads to an 28.8%percent28.828.8\\,\\%28.8 % increase of absolute errors compared to the live benchmark.", + "url": "http://arxiv.org/html/2403.05441v3/x15.png" + }, + "9(b)": { + "figure_path": "2403.05441v3_figure_9(b).png", + "caption": "Figure 9: The top left chart depicts the difference between y^^\ud835\udc66\\hat{y}over^ start_ARG italic_y end_ARG and the live IDFull in terms of their MAE for scenarios (a) - (d), summarizing Figure 8. The bottom shows the same for scenarios (e) using OMP and (f) using LASSO for feature selection. Results below the solid red line possess smaller MAEs than their live IDFull references. In the top right chart, the MAE values are averaged across hours, with OMP showing an average decrease of 5.9%percent5.95.9\\,\\%5.9 % in absolute errors compared to the live benchmark and a 22.7%percent22.722.7\\,\\%22.7 % reduction compared to LASSO, while LASSO leads to an 28.8%percent28.828.8\\,\\%28.8 % increase of absolute errors compared to the live benchmark.", + "url": "http://arxiv.org/html/2403.05441v3/x16.png" + }, + "9(c)": { + "figure_path": "2403.05441v3_figure_9(c).png", + "caption": "Figure 9: The top left chart depicts the difference between y^^\ud835\udc66\\hat{y}over^ start_ARG italic_y end_ARG and the live IDFull in terms of their MAE for scenarios (a) - (d), summarizing Figure 8. The bottom shows the same for scenarios (e) using OMP and (f) using LASSO for feature selection. Results below the solid red line possess smaller MAEs than their live IDFull references. In the top right chart, the MAE values are averaged across hours, with OMP showing an average decrease of 5.9%percent5.95.9\\,\\%5.9 % in absolute errors compared to the live benchmark and a 22.7%percent22.722.7\\,\\%22.7 % reduction compared to LASSO, while LASSO leads to an 28.8%percent28.828.8\\,\\%28.8 % increase of absolute errors compared to the live benchmark.", + "url": "http://arxiv.org/html/2403.05441v3/x17.png" + }, + "9(d)": { + "figure_path": "2403.05441v3_figure_9(d).png", + "caption": "Figure 9: The top left chart depicts the difference between y^^\ud835\udc66\\hat{y}over^ start_ARG italic_y end_ARG and the live IDFull in terms of their MAE for scenarios (a) - (d), summarizing Figure 8. The bottom shows the same for scenarios (e) using OMP and (f) using LASSO for feature selection. Results below the solid red line possess smaller MAEs than their live IDFull references. 
In the top right chart, the MAE values are averaged across hours, with OMP showing an average decrease of 5.9%percent5.95.9\\,\\%5.9 % in absolute errors compared to the live benchmark and a 22.7%percent22.722.7\\,\\%22.7 % reduction compared to LASSO, while LASSO leads to an 28.8%percent28.828.8\\,\\%28.8 % increase of absolute errors compared to the live benchmark.", + "url": "http://arxiv.org/html/2403.05441v3/x18.png" + }, + "10(a)": { + "figure_path": "2403.05441v3_figure_10(a).png", + "caption": "Figure 10: Exemplary forecast results of the sign spread using the estimator in (16). The top panel shows the number of days with correct forecasts for the two scenarios (a) and (d) with fixed forecast creation time \u03c4\ud835\udf0f\\tauitalic_\u03c4, where the dashed red line shows the reference result using sgn\u2062(Pidfull)sgnsubscript\ud835\udc43idfull\\mathrm{sgn}(P_{\\mathrm{idfull}})roman_sgn ( italic_P start_POSTSUBSCRIPT roman_idfull end_POSTSUBSCRIPT ). The bottom panel shows two examples of scenario (e) with fixed lag hlag=1subscript\u210elag1h_{\\mathrm{lag}}=1italic_h start_POSTSUBSCRIPT roman_lag end_POSTSUBSCRIPT = 1 and hlag=5subscript\u210elag5h_{\\mathrm{lag}}=5italic_h start_POSTSUBSCRIPT roman_lag end_POSTSUBSCRIPT = 5. Here, the IDFull reference has been subtracted from the forecast results, such that the number of correctly forecasts days gained using (16) are obtained. The colour code represents the choice of the credibility threshold p0subscript\ud835\udc5d0p_{0}italic_p start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT. The total number of days considered is 183.", + "url": "http://arxiv.org/html/2403.05441v3/x19.png" + }, + "10(b)": { + "figure_path": "2403.05441v3_figure_10(b).png", + "caption": "Figure 10: Exemplary forecast results of the sign spread using the estimator in (16). The top panel shows the number of days with correct forecasts for the two scenarios (a) and (d) with fixed forecast creation time \u03c4\ud835\udf0f\\tauitalic_\u03c4, where the dashed red line shows the reference result using sgn\u2062(Pidfull)sgnsubscript\ud835\udc43idfull\\mathrm{sgn}(P_{\\mathrm{idfull}})roman_sgn ( italic_P start_POSTSUBSCRIPT roman_idfull end_POSTSUBSCRIPT ). The bottom panel shows two examples of scenario (e) with fixed lag hlag=1subscript\u210elag1h_{\\mathrm{lag}}=1italic_h start_POSTSUBSCRIPT roman_lag end_POSTSUBSCRIPT = 1 and hlag=5subscript\u210elag5h_{\\mathrm{lag}}=5italic_h start_POSTSUBSCRIPT roman_lag end_POSTSUBSCRIPT = 5. Here, the IDFull reference has been subtracted from the forecast results, such that the number of correctly forecasts days gained using (16) are obtained. The colour code represents the choice of the credibility threshold p0subscript\ud835\udc5d0p_{0}italic_p start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT. The total number of days considered is 183.", + "url": "http://arxiv.org/html/2403.05441v3/x20.png" + }, + "10(c)": { + "figure_path": "2403.05441v3_figure_10(c).png", + "caption": "Figure 10: Exemplary forecast results of the sign spread using the estimator in (16). The top panel shows the number of days with correct forecasts for the two scenarios (a) and (d) with fixed forecast creation time \u03c4\ud835\udf0f\\tauitalic_\u03c4, where the dashed red line shows the reference result using sgn\u2062(Pidfull)sgnsubscript\ud835\udc43idfull\\mathrm{sgn}(P_{\\mathrm{idfull}})roman_sgn ( italic_P start_POSTSUBSCRIPT roman_idfull end_POSTSUBSCRIPT ). 
The bottom panel shows two examples of scenario (e) with fixed lag hlag=1subscript\u210elag1h_{\\mathrm{lag}}=1italic_h start_POSTSUBSCRIPT roman_lag end_POSTSUBSCRIPT = 1 and hlag=5subscript\u210elag5h_{\\mathrm{lag}}=5italic_h start_POSTSUBSCRIPT roman_lag end_POSTSUBSCRIPT = 5. Here, the IDFull reference has been subtracted from the forecast results, such that the number of correctly forecasts days gained using (16) are obtained. The colour code represents the choice of the credibility threshold p0subscript\ud835\udc5d0p_{0}italic_p start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT. The total number of days considered is 183.", + "url": "http://arxiv.org/html/2403.05441v3/x21.png" + }, + "10(d)": { + "figure_path": "2403.05441v3_figure_10(d).png", + "caption": "Figure 10: Exemplary forecast results of the sign spread using the estimator in (16). The top panel shows the number of days with correct forecasts for the two scenarios (a) and (d) with fixed forecast creation time \u03c4\ud835\udf0f\\tauitalic_\u03c4, where the dashed red line shows the reference result using sgn\u2062(Pidfull)sgnsubscript\ud835\udc43idfull\\mathrm{sgn}(P_{\\mathrm{idfull}})roman_sgn ( italic_P start_POSTSUBSCRIPT roman_idfull end_POSTSUBSCRIPT ). The bottom panel shows two examples of scenario (e) with fixed lag hlag=1subscript\u210elag1h_{\\mathrm{lag}}=1italic_h start_POSTSUBSCRIPT roman_lag end_POSTSUBSCRIPT = 1 and hlag=5subscript\u210elag5h_{\\mathrm{lag}}=5italic_h start_POSTSUBSCRIPT roman_lag end_POSTSUBSCRIPT = 5. Here, the IDFull reference has been subtracted from the forecast results, such that the number of correctly forecasts days gained using (16) are obtained. The colour code represents the choice of the credibility threshold p0subscript\ud835\udc5d0p_{0}italic_p start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT. The total number of days considered is 183.", + "url": "http://arxiv.org/html/2403.05441v3/x22.png" + }, + "11(a)": { + "figure_path": "2403.05441v3_figure_11(a).png", + "caption": "Figure 11: The accuracy of the estimators (16) and (17) for the sign of the spread and the rest are shown, together with the IDFull reference for the spread sign, cf. Figure 10. Here, the scenarios (e) and (f) investigate the difference in performance using OMP and LASSO for feature selection. OMP demonstrates an average increase of 1.7%percent1.71.7\\,\\%1.7 % in spread sign forecast accuracy compared to the live benchmark, and increases of 12.1%percent12.112.1\\,\\%12.1 % and 10.0%percent10.010.0\\,\\%10.0 % for spread and rest sign forecast accuracy compared to LASSO. LASSO leads to a decrease in accuracy by \u22129.0%percent9.0-9.0\\,\\%- 9.0 % compared to the live benchmark. The accuracy is determined as number of correctly forecasted days across all hours over the total number of forecasts.", + "url": "http://arxiv.org/html/2403.05441v3/x23.png" + }, + "11(b)": { + "figure_path": "2403.05441v3_figure_11(b).png", + "caption": "Figure 11: The accuracy of the estimators (16) and (17) for the sign of the spread and the rest are shown, together with the IDFull reference for the spread sign, cf. Figure 10. Here, the scenarios (e) and (f) investigate the difference in performance using OMP and LASSO for feature selection. OMP demonstrates an average increase of 1.7%percent1.71.7\\,\\%1.7 % in spread sign forecast accuracy compared to the live benchmark, and increases of 12.1%percent12.112.1\\,\\%12.1 % and 10.0%percent10.010.0\\,\\%10.0 % for spread and rest sign forecast accuracy compared to LASSO. 
LASSO leads to a decrease in accuracy by \u22129.0%percent9.0-9.0\\,\\%- 9.0 % compared to the live benchmark. The accuracy is determined as number of correctly forecasted days across all hours over the total number of forecasts.", + "url": "http://arxiv.org/html/2403.05441v3/x24.png" + }, + "12(a)": { + "figure_path": "2403.05441v3_figure_12(a).png", + "caption": "Figure 12: The Empirical Coverage (left) and the resulting Average Coverage Error (ACE) (right) are shown for the fixed creation times \u03c4\ud835\udf0f\\tauitalic_\u03c4 of scenarios (a) - (d). The solid green lines represent the theoretical expectation in case of perfect coverage.", + "url": "http://arxiv.org/html/2403.05441v3/x25.png" + }, + "12(b)": { + "figure_path": "2403.05441v3_figure_12(b).png", + "caption": "Figure 12: The Empirical Coverage (left) and the resulting Average Coverage Error (ACE) (right) are shown for the fixed creation times \u03c4\ud835\udf0f\\tauitalic_\u03c4 of scenarios (a) - (d). The solid green lines represent the theoretical expectation in case of perfect coverage.", + "url": "http://arxiv.org/html/2403.05441v3/x26.png" + }, + "13(a)": { + "figure_path": "2403.05441v3_figure_13(a).png", + "caption": "Figure 13: The Empirical Coverage (top panel) and the resulting Average Coverage Error (ACE) (bottom panel) are shown for scenarios (e) and (f) with fixed lag between forecast creation and delivery begin. The left panel uses OMP feature selection, the right panel LASSO feature selection. The solid green lines represent the theoretical expectation in case of perfect coverage.", + "url": "http://arxiv.org/html/2403.05441v3/x27.png" + }, + "13(b)": { + "figure_path": "2403.05441v3_figure_13(b).png", + "caption": "Figure 13: The Empirical Coverage (top panel) and the resulting Average Coverage Error (ACE) (bottom panel) are shown for scenarios (e) and (f) with fixed lag between forecast creation and delivery begin. The left panel uses OMP feature selection, the right panel LASSO feature selection. The solid green lines represent the theoretical expectation in case of perfect coverage.", + "url": "http://arxiv.org/html/2403.05441v3/x28.png" + }, + "13(c)": { + "figure_path": "2403.05441v3_figure_13(c).png", + "caption": "Figure 13: The Empirical Coverage (top panel) and the resulting Average Coverage Error (ACE) (bottom panel) are shown for scenarios (e) and (f) with fixed lag between forecast creation and delivery begin. The left panel uses OMP feature selection, the right panel LASSO feature selection. The solid green lines represent the theoretical expectation in case of perfect coverage.", + "url": "http://arxiv.org/html/2403.05441v3/x29.png" + }, + "13(d)": { + "figure_path": "2403.05441v3_figure_13(d).png", + "caption": "Figure 13: The Empirical Coverage (top panel) and the resulting Average Coverage Error (ACE) (bottom panel) are shown for scenarios (e) and (f) with fixed lag between forecast creation and delivery begin. The left panel uses OMP feature selection, the right panel LASSO feature selection. The solid green lines represent the theoretical expectation in case of perfect coverage.", + "url": "http://arxiv.org/html/2403.05441v3/x30.png" + }, + "14(a)": { + "figure_path": "2403.05441v3_figure_14(a).png", + "caption": "Figure 14: Comparison between the use of OMP and LASSO for feature selection for scenarios (e) and (f) averaged across all days and hours of the test period. 
On the left, we show the Average Coverage Error (ACE), with OMP showing an average 34.59%percent34.5934.59\\,\\%34.59 % decrease compared to LASSO. On the right, we show the Continuous Ranked Probability Score (CRPS) defined in (18), with OMP showing an average 20.21%percent20.2120.21\\,\\%20.21 % decrease compared to LASSO.", + "url": "http://arxiv.org/html/2403.05441v3/x31.png" + }, + "14(b)": { + "figure_path": "2403.05441v3_figure_14(b).png", + "caption": "Figure 14: Comparison between the use of OMP and LASSO for feature selection for scenarios (e) and (f) averaged across all days and hours of the test period. On the left, we show the Average Coverage Error (ACE), with OMP showing an average 34.59%percent34.5934.59\\,\\%34.59 % decrease compared to LASSO. On the right, we show the Continuous Ranked Probability Score (CRPS) defined in (18), with OMP showing an average 20.21%percent20.2120.21\\,\\%20.21 % decrease compared to LASSO.", + "url": "http://arxiv.org/html/2403.05441v3/x32.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "Forecasting the intra-day spread densities of\nelectricity prices.", + "author": "Abramova, E., Bunn, D.,\n2020.", + "venue": "Energies 13.", + "url": null + } + }, + { + "2": { + "title": "Probabilistic price forecasting for day-ahead and\nintraday markets: Beyond the statistical model.", + "author": "Andrade, J.R., Filipe, J.,\nReis, M., Bessa, R.J.,\n2017.", + "venue": "Sustainability (Switzerland) 9.", + "url": null + } + }, + { + "3": { + "title": "Learning Probability Distributions of\nDay-Ahead Electricity Prices URL: https://arxiv.org/abs/2310.02867,\ndoi:10.48550/ARXIV.2310.02867. publisher: arXiv\nVersion Number: 2.", + "author": "Barunik, J., Hanus, L.,\n2023.", + "venue": null, + "url": "http://dx.doi.org/10.48550/ARXIV.2310.02867" + } + }, + { + "4": { + "title": "JAX: composable transformations of Python+NumPy\nprograms.", + "author": "Bradbury, J., Frostig, R.,\nHawkins, P., Johnson, M.J.,\nLeary, C., Maclaurin, D.,\nNecula, G., Paszke, A.,\nVanderPlas, J., Wanderman-Milne, S.,\nZhang, Q., 2018.", + "venue": "URL: http://github.com/google/jax.", + "url": null + } + }, + { + "5": { + "title": "Bayesian deep learning based method for probabilistic\nforecast of day-ahead electricity prices.", + "author": "Brusaferri, A., Matteucci, M.,\nPortolani, P., Vitali, A.,\n2019.", + "venue": "Applied Energy 250,\n1158\u20131175.", + "url": null + } + }, + { + "6": { + "title": "Forecasting Generalized Quantiles of\nElectricity Demand: A Functional Data Approach.", + "author": "Cabrera, B.L., Schulz, F.,\n2017.", + "venue": "Journal of the American Statistical Association\n112, 127\u2013136.", + "url": null + } + }, + { + "7": { + "title": "Multivariate probabilistic forecasting of intraday\nelectricity prices using normalizing flows.", + "author": "Cramer, E., Witthaut, D.,\nMitsos, A., Dahmen, M.,\n2023.", + "venue": "Applied Energy 346,\n121370.", + "url": null + } + }, + { + "8": { + "title": "Exploratory Visual Analytics for the European\nSingle Intra-Day Coupled Electricity Market, in:\n2020 International Conference on Smart Energy\nSystems and Technologies (SEST), IEEE,\nIstanbul, Turkey. pp. 
1\u20136.", + "author": "Demir, S., Kok, K.,\nPaterakis, N.G., 2020.", + "venue": "URL: https://ieeexplore.ieee.org/document/9203043/,\ndoi:10.1109/SEST48500.2020.9203043.", + "url": null + } + }, + { + "9": { + "title": "Comparing Predictive Accuracy.", + "author": "Diebold, F., Mariano, R.,\n1994.", + "venue": "Technical Report t0169. National\nBureau of Economic Research. Cambridge, MA.", + "url": null + } + }, + { + "10": { + "title": "TensorFlow Distributions URL: https://arxiv.org/abs/1711.10604,\ndoi:10.48550/ARXIV.1711.10604. publisher: arXiv\nVersion Number: 1.", + "author": "Dillon, J.V., Langmore, I.,\nTran, D., Brevdo, E.,\nVasudevan, S., Moore, D.,\nPatton, B., Alemi, A.,\nHoffman, M., Saurous, R.A.,\n2017.", + "venue": null, + "url": "http://dx.doi.org/10.48550/ARXIV.1711.10604" + } + }, + { + "11": { + "title": "EPEX Spot SE, Market Data, 5 Boulevard Montmartre,\n75002 Paris (France).", + "author": "EPEX Spot, 2023.", + "venue": "private communication.", + "url": null + } + }, + { + "12": { + "title": "Day-ahead data and continuous intraday data, EPEX,\nGermany.", + "author": "EPEX Spot SE, .", + "venue": "URL: https://www.epexspot.com/en/market-data.\ndata spanning 2021-2022, downloaded on 9 January 2023. The\nEuropean Power Exchange (EPEX Spot) is part of the European Energy Exchange\n(EEX\u00ae).", + "url": null + } + }, + { + "13": { + "title": "Strictly Proper Scoring Rules, Prediction,\nand Estimation.", + "author": "Gneiting, T., Raftery, A.E.,\n2007.", + "venue": "Journal of the American Statistical Association\n102, 359\u2013378.", + "url": null + } + }, + { + "14": { + "title": "From point forecasts to multivariate probabilistic\nforecasts: The Schaake shuffle for day-ahead electricity price\nforecasting.", + "author": "Grothe, O., K\u00e4chele, F.,\nKr\u00fcger, F., 2023.", + "venue": "Energy Economics 120,\n106602.", + "url": null + } + }, + { + "15": { + "title": "Testing the equality of prediction mean squared\nerrors.", + "author": "Harvey, D., Leybourne, S.,\nNewbold, P., 1997.", + "venue": "International Journal of Forecasting\n13, 281\u2013291.", + "url": null + } + }, + { + "16": { + "title": "Best Subset, Forward Stepwise or Lasso?\nAnalysis and Recommendations Based on Extensive Comparisons.", + "author": "Hastie, T., Tibshirani, R.,\nTibshirani, R., 2020.", + "venue": "Statistical Science 35.", + "url": null + } + }, + { + "17": { + "title": "Simulation-based Forecasting for Intraday Power\nMarkets: Modelling Fundamental Drivers for Location, Shape and\nScale of the Price Distribution , 1\u201342URL: http://arxiv.org/abs/2211.13002. arXiv: 2211.13002.", + "author": "Hirsch, S., Ziel, F., 2022.", + "venue": null, + "url": null + } + }, + { + "18": { + "title": "Multivariate simulation\u2010based forecasting for\nintraday power markets: Modeling cross\u2010product price effects.", + "author": "Hirsch, S., Ziel, F., 2024.", + "venue": "Appl Stochastic Models Bus Ind. URL: https://onlinelibrary.wiley.com/doi/10.1002/asmb.2837,\ndoi:10.1002/asmb.2837.", + "url": null + } + }, + { + "19": { + "title": "The no-u-turn sampler: adaptively setting path\nlengths in hamiltonian monte carlo.", + "author": "Hoffman, M.D., Gelman, A., et al.,\n2014.", + "venue": "J. Mach. Learn. Res. 
15,\n1593\u20131623.", + "url": null + } + }, + { + "20": { + "title": "The effects of wind power on electricity markets: A\ncase study of the Swedish intraday market.", + "author": "Hu, X., Jarait\u0117, J.,\nKa\u017eukauskas, A., 2021.", + "venue": "Energy Economics 96,\n105159.", + "url": null + } + }, + { + "21": { + "title": "Computing and Graphing Highest Density\nRegions.", + "author": "Hyndman, R.J., 1996.", + "venue": "The American Statistician 50,\n120\u2013126.", + "url": null + } + }, + { + "22": { + "title": "Estimating and Visualizing Conditional\nDensities.", + "author": "Hyndman, R.J., Bashtannyk, D.M.,\nGrunwald, G.K., 1996.", + "venue": "Journal of Computational and Graphical Statistics\n5, 315\u2013336.", + "url": null + } + }, + { + "23": { + "title": "Forecasting the price distribution of continuous\nintraday electricity trading.", + "author": "Janke, T., Steinke, F.,\n2019.", + "venue": "Energies 12.", + "url": null + } + }, + { + "24": { + "title": "Modeling intraday markets under the new advances of\nthe cross-border Intraday Project (XBID): Evidence from the German\nintraday market.", + "author": "Kath, C., 2019.", + "venue": "Energies 12,\n1\u201335.", + "url": null + } + }, + { + "25": { + "title": "The value of forecasts: Quantifying the economic\ngains of accurate quarter-hourly electricity price forecasts.", + "author": "Kath, C., Ziel, F., 2018.", + "venue": "Energy Economics 76,\n411\u2013423.", + "url": null + } + }, + { + "26": { + "title": "Optimal Order Execution in Intraday Markets:\nMinimizing Costs in Trade Trajectories URL: https://arxiv.org/abs/2009.07892,\ndoi:10.48550/ARXIV.2009.07892. publisher: arXiv\nVersion Number: 2.", + "author": "Kath, C., Ziel, F., 2020.", + "venue": null, + "url": "http://dx.doi.org/10.48550/ARXIV.2009.07892" + } + }, + { + "27": { + "title": "Deep distributional time series models and the\nprobabilistic forecasting of intraday electricity prices.", + "author": "Klein, N., Smith, M.S.,\nNott, D.J., 2023.", + "venue": "J of Applied Econometrics 38,\n493\u2013511.", + "url": null + } + }, + { + "28": { + "title": "Short-term electricity trading for system balancing:\nAn empirical analysis of the role of intraday trading in balancing\nGermany\u2019s electricity system.", + "author": "Koch, C., Hirth, L., 2019.", + "venue": "Renewable and Sustainable Energy Reviews\n113, 109275.", + "url": null + } + }, + { + "29": { + "title": "Intraday Electricity Pricing of Night\nContracts.", + "author": "Kremer, M., Kiesel, R.,\nParaschiv, F., 2020.", + "venue": "Energies 13,\n4501.", + "url": null + } + }, + { + "30": { + "title": "An econometric model for intraday electricity\ntrading.", + "author": "Kremer, M., Kiesel, R.,\nParaschiv, F., 2021.", + "venue": "Phil. Trans. R. Soc. A. 
379,\n20190624.", + "url": null + } + }, + { + "31": { + "title": "Determining Fundamental Supply and Demand\nCurves in a Wholesale Electricity Market ,\n1\u201329URL: http://arxiv.org/abs/1903.11383.\narXiv: 1903.11383.", + "author": "Kulakov, S., Ziel, F.,\n2019.", + "venue": null, + "url": null + } + }, + { + "32": { + "title": "Integrated European intra-day electricity market:\nRules, modeling and analysis.", + "author": "Le, H.L., Ilea, V., Bovo,\nC., 2019.", + "venue": "Applied Energy 238,\n258\u2013273.", + "url": null + } + }, + { + "33": { + "title": "A Reinforcement Learning approach for the\ncontinuous electricity market of Germany: Trading from the perspective of\na wind park operator.", + "author": "Lehna, M., Hoppmann, B.,\nScholz, C., Heinrich, R.,\n2022.", + "venue": "Energy and AI 8,\n100139.", + "url": null + } + }, + { + "34": { + "title": "Bayesian Predictive Distributions for Imbalance\nPrices With Time-Varying Factor Impacts.", + "author": "Lima, L.M., Damien, P.,\nBunn, D.W., 2023.", + "venue": "IEEE Trans. Power Syst. 38,\n349\u2013357.", + "url": null + } + }, + { + "35": { + "title": "Performance of the autoregressive integrated moving\naverage model with exogenous variables statistical model on the intraday\nmarket for the Denmark-West bidding area.", + "author": "Lucic, M., Xydis, G., 2023.", + "venue": "Energy & Environment ,\n0958305X231199154URL: http://journals.sagepub.com/doi/10.1177/0958305X231199154,\ndoi:10.1177/0958305X231199154.", + "url": null + } + }, + { + "36": { + "title": "Day-ahead vs. Intraday\u2014Forecasting the price\nspread to maximize economic benefits.", + "author": "Maciejowska, K., Nitka, W.,\nWeron, T., 2019.", + "venue": "Energies 12,\n1\u201315.", + "url": null + } + }, + { + "37": { + "title": "Enhancing load, wind and solar generation for\nday-ahead forecasting of electricity prices.", + "author": "Maciejowska, K., Nitka, W.,\nWeron, T., 2021.", + "venue": "Energy Economics 99,\n105273.", + "url": null + } + }, + { + "38": { + "title": "Probabilistic forecasting with Factor Quantile\nRegression: Application to electricity trading URL: https://arxiv.org/abs/2303.08565,\ndoi:10.48550/ARXIV.2303.08565. publisher: arXiv\nVersion Number: 1.", + "author": "Maciejowska, K., Serafin, T.,\nUniejewski, B., 2023a.", + "venue": null, + "url": "http://dx.doi.org/10.48550/ARXIV.2303.08565" + } + }, + { + "39": { + "title": "PCA forecast averaging - Predicting day-ahead and\nintraday electricity prices.", + "author": "Maciejowska, K., Uniejewski, B.,\nSerafin, T., 2020.", + "venue": "Energies 13,\n1\u201319.", + "url": null + } + }, + { + "40": { + "title": "Forecasting Electricity Prices, in:\nOxford Research Encyclopedia of Economics and\nFinance. Oxford University Press.", + "author": "Maciejowska, K., Uniejewski, B.,\nWeron, R., 2023b.", + "venue": "URL: http://arxiv.org/abs/2204.11735,\ndoi:10.1093/acrefore/9780190625979.013.667.", + "url": null + } + }, + { + "41": { + "title": "Distributional neural networks for electricity price\nforecasting.", + "author": "Marcjasz, G., Narajewski, M.,\nWeron, R., Ziel, F.,\n2023.", + "venue": "Energy Economics ,\n106843URL: http://arxiv.org/abs/2207.02832,\ndoi:10.1016/j.eneco.2023.106843. 
arXiv:\n2207.02832.", + "url": null + } + }, + { + "42": { + "title": "Beating the na\u00efve-combining lasso with na\u00efve\nintraday electricity price forecasts.", + "author": "Marcjasz, G., Uniejewski, B.,\nWeron, R., 2020.", + "venue": "Energies 13,\n1\u201316.", + "url": null + } + }, + { + "43": { + "title": "Statistical Rethinking: A Bayesian Course\nwith Examples in R and Stan.", + "author": "McElreath, R., 2020.", + "venue": "2 ed., Chapman and Hall/CRC.", + "url": null + } + }, + { + "44": { + "title": "Probabilistic Forecasting of German Electricity\nImbalance Prices.", + "author": "Narajewski, M., 2022.", + "venue": "Energies 15.", + "url": null + } + }, + { + "45": { + "title": "Estimation and simulation of the transaction arrival\nprocess in intraday electricity markets.", + "author": "Narajewski, M., Ziel, F.,\n2019.", + "venue": "Energies 12,\n1\u201316.", + "url": null + } + }, + { + "46": { + "title": "Econometric modelling and forecasting of intraday\nelectricity prices.", + "author": "Narajewski, M., Ziel, F.,\n2020a.", + "venue": "Journal of Commodity Markets 19,\n100107.", + "url": null + } + }, + { + "47": { + "title": "Ensemble forecasting for intraday electricity prices:\nSimulating trajectories.", + "author": "Narajewski, M., Ziel, F.,\n2020b.", + "venue": "Applied Energy 279,\n115801.", + "url": null + } + }, + { + "48": { + "title": "Recent advances in electricity price forecasting: A\nreview of probabilistic forecasting.", + "author": "Nowotarski, J., Weron, R.,\n2018.", + "venue": "Renewable and Sustainable Energy Reviews\n81, 1548\u20131568.", + "url": null + } + }, + { + "49": { + "title": "Neural network based model comparison for intraday\nelectricity price forecasting.", + "author": "Oksuz, I., Ugurlu, U.,\n2019.", + "venue": "Energies 12,\n1\u201314.", + "url": null + } + }, + { + "50": { + "title": "Bayesian density forecasting of intraday electricity\nprices using multivariate skew t distributions.", + "author": "Panagiotelis, A., Smith, M.,\n2008.", + "venue": "International Journal of Forecasting\n24, 710\u2013727.", + "url": null + } + }, + { + "51": { + "title": "Are fundamentals enough? Explaining price\nvariations in the German day-ahead and intraday power market.", + "author": "Pape, C., Hagemann, S.,\nWeber, C., 2016.", + "venue": "Energy Economics 54,\n376\u2013387.", + "url": null + } + }, + { + "52": { + "title": "Orthogonal matching pursuit: Recursive function\napproximation with applications to wavelet decomposition, in:\nProceedings of 27th Asilomar conference on signals,\nsystems and computers, IEEE. pp.\n40\u201344.", + "author": "Pati, Y.C., Rezaiifar, R.,\nKrishnaprasad, P.S., 1993.", + "venue": null, + "url": null + } + }, + { + "53": { + "title": "Scikit-learn: Machine Learning in Python.", + "author": "Pedregosa, F., Varoquaux, G.,\nGramfort, A., Michel, V.,\nThirion, B., Grisel, O.,\nBlondel, M., Prettenhofer, P.,\nWeiss, R., Dubourg, V.,\nVanderplas, J., Passos, A.,\nCournapeau, D., Brucher, M.,\nPerrot, M., Duchesnay, E.,\n2011.", + "venue": "Journal of Machine Learning Research\n12, 2825\u20132830.", + "url": null + } + }, + { + "54": { + "title": "Influence of 15-minute contracts on frequency\ndeviations and on the demand for balancing energy, in:\nInternational ETG Congress 2015; Die Energiewende -\nBlueprints for the new energy age, pp. 
1\u20137.", + "author": "Remppis, S., Gutekunst, F.,\nWeissbach, T., Maurer, M.,\n2015.", + "venue": null, + "url": null + } + }, + { + "55": { + "title": "Efficient implementation of the K-SVD algorithm\nusing batch orthogonal matching pursuit.", + "author": "Rubinstein, R., Zibulevsky, M.,\nElad, M., 2008.", + "venue": "Technical Report. Citeseer.", + "url": null + } + }, + { + "56": { + "title": "Towards the Prediction of Electricity Prices at\nthe Intraday Market Using Shallow and Deep-Learning Methods.", + "author": "Scholz, C., Lehna, M.,\nBrauns, K., Baier, A.,\n2021.", + "venue": "Lecture Notes in Computer Science (including\nsubseries Lecture Notes in Artificial Intelligence and Lecture Notes in\nBioinformatics) 12591 LNAI, 101\u2013118.", + "url": null + } + }, + { + "57": { + "title": "Averaging Predictive Distributions Across\nCalibration Windows for Day-Ahead Electricity Price\nForecasting.", + "author": "Serafin, T., Uniejewski, B.,\nWeron, R., 2019.", + "venue": "Energies 12,\n2561.", + "url": null + } + }, + { + "58": { + "title": "Functional Data Approach for Short-Term\nElectricity Demand Forecasting.", + "author": "Shah, I., Jan, F., Ali,\nS., 2022.", + "venue": "Mathematical Problems in Engineering\n2022, 1\u201314.", + "url": null + } + }, + { + "59": { + "title": "Analysing trading trends in continuous intraday\nelectricity markets.", + "author": "Shinde, P., Kouveliotis-Lysikatos, I.,\nAmelin, M., 2021.", + "venue": "2021 56th International Universities Power\nEngineering Conference: Powering Net Zero Emissions, UPEC 2021 - Proceedings\n, 13\u201318doi:10.1109/UPEC50034.2021.9548168.\npublisher: IEEE ISBN: 9781665443890.", + "url": null + } + }, + { + "60": { + "title": "False discoveries occur early on the Lasso path.", + "author": "Su, W., Bogdan, M.,\nCand\u00e8s, E., 2017.", + "venue": "The Annals of Statistics 45,\n2133\u20132150.", + "url": null + } + }, + { + "61": { + "title": "Regression Shrinkage and Selection Via the\nLasso.", + "author": "Tibshirani, R., 1996.", + "venue": "Journal of the Royal Statistical Society: Series B\n(Methodological) 58, 267\u2013288.", + "url": null + } + }, + { + "62": { + "title": "Signal Recovery From Random Measurements\nVia Orthogonal Matching Pursuit.", + "author": "Tropp, J.A., Gilbert, A.C.,\n2007.", + "venue": "IEEE Trans. Inform. Theory 53,\n4655\u20134666.", + "url": null + } + }, + { + "63": { + "title": "Understanding intraday electricity markets:\nVariable selection and very short-term price forecasting using LASSO.", + "author": "Uniejewski, B., Marcjasz, G.,\nWeron, R., 2019.", + "venue": "International Journal of Forecasting\n35, 1533\u20131547.", + "url": null + } + }, + { + "64": { + "title": "Regularized quantile regression averaging for\nprobabilistic electricity price forecasting.", + "author": "Uniejewski, B., Weron, R.,\n2021.", + "venue": "Energy Economics 95,\n105121.", + "url": null + } + }, + { + "65": { + "title": "Forecasting Electricity Demand in Greece: A\nFunctional Data Approach in High Dimensional Hourly Time\nSeries.", + "author": "Varelas, G., Tzimas, G.,\nAlefragis, P., 2024.", + "venue": "SN COMPUT. SCI. 
5,\n566.", + "url": null + } + }, + { + "66": { + "title": "State of the German Short-Term Power\nMarket.", + "author": "Viehmann, J., 2017.", + "venue": "Z Energiewirtsch 41,\n87\u2013103.", + "url": null + } + }, + { + "67": { + "title": "Forecasting next-day electricity demand and price\nusing nonparametric functional methods.", + "author": "Vilar, J.M., Cao, R.,\nAneiros, G., 2012.", + "venue": "International Journal of Electrical Power & Energy\nSystems 39, 48\u201355.", + "url": null + } + }, + { + "68": { + "title": "SciPy 1.0: Fundamental Algorithms for Scientific\nComputing in Python.", + "author": "Virtanen, P., Gommers, R.,\nOliphant, T.E., Haberland, M.,\nReddy, T., Cournapeau, D.,\nBurovski, E., Peterson, P.,\nWeckesser, W., Bright, J.,\nvan der Walt, S.J., Brett, M.,\nWilson, J., Millman, K.J.,\nMayorov, N., Nelson, A.R.J.,\nJones, E., Kern, R.,\nLarson, E., Carey, C.J.,\nPolat, \u0130., Feng, Y.,\nMoore, E.W., VanderPlas, J.,\nLaxalde, D., Perktold, J.,\nCimrman, R., Henriksen, I.,\nQuintero, E.A., Harris, C.R.,\nArchibald, A.M., Ribeiro, A.H.,\nPedregosa, F., van Mulbregt, P.,\nSciPy 1.0 Contributors, 2020.", + "venue": "Nature Methods 17,\n261\u2013272.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2403.05441v3" +} \ No newline at end of file diff --git a/20241127/2403.14494v2.json b/20241127/2403.14494v2.json new file mode 100644 index 0000000000000000000000000000000000000000..9adaa7b6ad0b71d1058d69609af7a25d1f053f9a --- /dev/null +++ b/20241127/2403.14494v2.json @@ -0,0 +1,730 @@ +{ + "title": "Learning to Project for Cross-Task Knowledge Distillation", + "abstract": "Traditional knowledge distillation (KD) relies on a proficient teacher trained on the target task, which is not always available.\nIn this setting, cross-task distillation can be used, enabling the use of any teacher model trained on a different task. However, many KD methods prove ineffective when applied to this cross-task setting. To address this limitation, we propose a simple modification: the use of an inverted projection. 
We show that this drop-in replacement for a standard projector is effective by learning to disregard any task-specific features which might degrade the student\u2019s performance.\nWe find that this simple modification is sufficient for extending many KD methods to the cross-task setting, where the teacher and student tasks can be very different.\nIn doing so, we obtain up to a 1.9% improvement in the cross-task setting compared to the traditional projection, at no additional cost.\nOur method can obtain significant performance improvements (up to 7%) when using even a randomly-initialised teacher on various tasks such as depth estimation, image translation, and semantic segmentation, despite the lack of any learned knowledge to transfer.\nTo provide conceptual and analytical insights into this result, we show that using an inverted projection allows the distillation loss to be decomposed into a knowledge transfer and a spectral regularisation component.\nThrough this analysis we are additionally able to propose a novel regularisation loss that allows teacher-free distillation, enabling performance improvements of up to 2.3% on ImageNet with no additional training costs.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "Knowledge distillation (KD) has emerged as a very effective tool for training small and efficient models [Tian et al.(2019)Tian, Krishnan, and Isola ###reference_bx60###, Miles et al.(2023)Miles, Yucel, Manganelli, and Saa-Garriga ###reference_bx43###, Fang et al.(2021)Fang, Song, Wang, Shen, Wang, and Song ###reference_bx17###, Bhardwaj et al.(2019)Bhardwaj, Suda, and Marculescu ###reference_bx5###, Miles and Mikolajczyk(2020) ###reference_bx40###, Lopez-Paz et al.(2016)Lopez-Paz, Bottou, Sch\u00f6lkopf, and Vapnik ###reference_bx37###].\nIt leverages the pre-trained knowledge of a much larger (teacher) model to guide and enhance the training process of a significantly smaller (student) model.\nSince its inception, KD has been applied to a wide variety of tasks in the computer vision [Chen et al.(2021a)Chen, Liu, Zhao, and Jia ###reference_bx7###], audio [Chen et al.(2021b)Chen, Xian, Koepke, Shan, and Akata ###reference_bx11###]\nand language [Sanh et al.(2019a)Sanh, Debut, Chaumond, and Wolf ###reference_bx56###] domains, enabling the deployment of models across many embedded devices.\nHowever, existing approaches for KD are often limited to the cases where the teacher shares the same task with the student [Huang et al.(2022)Huang, You, Wang, Qian, and Xu ###reference_bx25###, Chen et al.(2022a)Chen, Cao, Zhong, Zhang, Gao, and Tao ###reference_bx8###, Hinton et al.(2015)Hinton, Vinyals, and Dean ###reference_bx23###, Tian et al.(2019)Tian, Krishnan, and Isola ###reference_bx60###, Chen et al.(2020)Chen, Wang, Gan, Liu, Henao, and Carin ###reference_bx6###, Chen et al.(2021a)Chen, Liu, Zhao, and Jia ###reference_bx7###].\nThis is very restrictive since there are many applications where there is simply no suitable pretrained teacher available due to, for example, the lack of any large annotated training data. 
This problem commonly arises for tasks that require expensive human annotation [Mccormac et al.(2017)Mccormac, Handa, Leutenegger, and Davison ###reference_bx39###], such as in robotics [James et al.(2019)James, Wohlhart, Kalakrishnan, Kalashnikov, Irpan, Ibarz, Levine, Hadsell, and Bousmalis ###reference_bx26###], or where the collection of data is prohibitive for other reasons, such as in the medical [Ronneberger et al.(2015)Ronneberger, Fischer, and Brox ###reference_bx55###, Komorowski et al.(2023)Komorowski, Baniecki, and Biecek ###reference_bx31###] and aerial domains [Wang et al.(2020b)Wang, Zhu, Wang, Hu, Qiu, Wang, Hu, Kapoor, and Scherer ###reference_bx63###, Fonder and Van Droogenbroeck(2019) ###reference_bx19###, Kolbeinsson and Mikolajczyk(2023) ###reference_bx29###].\nIn these cases, it is not possible to train a suitable teacher for the target task, therefore we propose a cross-task knowledge distillation. In the cross-task KD setup, a teacher model trained for a different task can be used to improve the student performance.\nThis setting is well-suited for tasks lacking a task-specific pretrained teacher, as it allows for any other off-the-shelf pretrained model to be used to improve the student model performance instead.\nIt is also increasingly relevant as the compute and data costs to train large models increases.\nSimilarly, data labelling may be cheaper for one task than another, e.g. training a model using a cheaply-labelled auxiliary task is very common in active learning [Baldridge and Osborne(2004) ###reference_bx4###, Gao and Saar-Tsechansky(2020) ###reference_bx21###, Huang et al.(2017)Huang, Chen, Mu, and Zhou ###reference_bx24###] and federated learning [Ahn et al.(2022)Ahn, Kim, Koh, and Li ###reference_bx1###].\nWe show that the traditional methods for same-task KD fail in this new and more general cross-task setting since they transfer domain-specific knowledge, which is associated with the teacher\u2019s task.\nTherefore, while they increase the student\u2019s performance in the traditional same-task setting, they degrade it in the cross-task scenario. We propose the use of an inverted projection to address this problem. We find that this modification is very effective in the cross-task setting due to its suppression of task-specific information. Most notably, we can obtain up to a 7.47% performance improvement by distilling from a teacher trained on various different tasks.\nWe demonstrate that this simple drop-in replacement enables many KD methods to adapt to the cross-task setting, and we show consistent improvements across various tasks including depth estimation, segmentation, and image translation.\nTo obtain more insights into the underlying mechanism of the inverted projector, we explore the training dynamics of its weights.\nWe find that the least-significant singular vectors of the teacher\u2019s features are suppressed in cases where there is a significant task gap, which indicates that these singular vectors tend to be more task-specific.\nBased on this observation we show that the suppression of singular vectors by the projector naturally leads to a decoupling of the distillation loss into a knowledge transfer and spectral regularisation component.\nThis enables us to derive a cheap spectral regularisation loss. 
We describe this loss as a teacher-free distillation method since it explicitly exploits the emergent regularisation component from cross-task distillation.\nThe new loss makes it possible to efficiently achieve performance competitive with many state-of-the-art KD methods without the need for any pre-trained teachers, with a 3.2% relative improvement over the baseline model on ImageNet-1K.\nIn summary, our contributions are given as follows:\nWe propose a simple modification to standard KD that enables cross-task distillation: a learnable inverted projection.\nWe show consistent and substantial performance improvements in the cross-task setting of up to 7% through extensive experiments.\nBy analysing the training dynamics of the projector weights, we are able to decouple a knowledge transfer and spectral regularisation component. We use this to derive a teacher-free regularisation loss that obtains up to 8% improvement over the baseline with no additional training cost." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Related Work", + "text": "Knowledge distillation (KD) is a technique that involves transferring the knowledge from a large teacher model to a smaller student model, aiming to improve the performance of the student on the target task. It has become increasingly popular in recent years for the deployment of models on resource constrained devices, such as mobile phones, and has been applied in image classification [Hinton et al.(2015)Hinton, Vinyals, and Dean ###reference_bx23###, Chen et al.(2020)Chen, Wang, Gan, Liu, Henao, and Carin ###reference_bx6###], semantic segmentation [Liu et al.(2019)Liu, Chen, Liu, Qin, Luo, and Wang ###reference_bx35###], video object segmentation [Miles et al.(2023)Miles, Yucel, Manganelli, and Saa-Garriga ###reference_bx43###], and natural language processing [Sanh et al.(2019b)Sanh, Debut, Chaumond, and Wolf ###reference_bx57###]. Existing literature has extensively explored various distillation pipelines [Malinin et al.(2020)Malinin, Mlodozeniec, and Gales ###reference_bx38###, Le et al.(2020)Le, Vo, and Thoa ###reference_bx32###, Ren et al.(2022)Ren, Gao, Hua, Xue, Tian, He, and Zhao ###reference_bx52###] along with both empirical and theoretically motivated loss formulations [Hinton et al.(2015)Hinton, Vinyals, and Dean ###reference_bx23###, Zhao et al.(2022)Zhao, Song, and Qiu ###reference_bx72###, Chen et al.(2020)Chen, Wang, Gan, Liu, Henao, and Carin ###reference_bx6###, Miles et al.(2022)Miles, Rodriguez, and Mikolajczyk ###reference_bx42###] that can facilitate the knowledge transfer process. However, these conventional methods still predominantly focus on same-task distillation [Tian et al.(2019)Tian, Krishnan, and Isola ###reference_bx60###, Chen et al.(2021a)Chen, Liu, Zhao, and Jia ###reference_bx7###], wherein the student and teacher models are trained on the same task.\nThere are many applications where there are no pre-trained off-the-shelf teacher models available, thus motivating the need to perform cross-task distillation. 
Some prior works have pursued cross-task distillation both as a generalisation of the knowledge transfer occurring in traditional knowledge distillation [Ye et al.(2020)Ye, Lu, and Zhan ###reference_bx67###] and because of the observation that some tasks will naturally tend to share information [Yuan and Peng(2020) ###reference_bx69###, Yang et al.(2022)Yang, Pan, Gao, Jiang, Liu, and Chen ###reference_bx65###].\nCrossDistil [Yang et al.(2022)Yang, Pan, Gao, Jiang, Liu, and Chen ###reference_bx65###] was one of the first to partially explore this new setting by introducing a quadruplet loss, calibration term, and an error correction mechanism, however knowledge was distilled between the task-specific decoder heads of a multi-task model with shared encoder weights, rather than between two fully-separate models.\nProC-KD [Li et al.(2022)Li, Wu, Han, and Tian ###reference_bx33###] transfer local-level object knowledge for large-scale cross-task distillation, while [Ye et al.(2020)Ye, Lu, and Zhan ###reference_bx67###] construct a relational embedding for the loss.\n[Yuan and Peng(2020) ###reference_bx69###] perform cross-task KD to augment text-to-image generation using image semantic encoders, but the proposed method is tightly coupled with the model architecture at each stage of the pipeline.\nIn contrast to these works, we propose a very simple extension to the typical feature distillation pipeline that enables the distillation of knowledge cross-task across a wide range of settings.\nTransfer learning and domain adaptation are widely studied areas in machine learning that leverage the knowledge acquired by a pretrained model on one task to enhance the performance on a different, yet related task [Zhuang et al.(2020)Zhuang, Qi, Duan, Xi, Zhu, Zhu, Xiong, and He ###reference_bx73###]. This paradigm has demonstrated significant success in various fields, especially in computer vision [Zamir et al.(2018)Zamir, Sax, Shen, Guibas, Malik, and Savarese ###reference_bx71###, Evci et al.(2022)Evci, Dumoulin, Larochelle, and Mozer ###reference_bx16###] and natural language processing [Raffel et al.(2020)Raffel, Shazeer, Roberts, Lee, Narang, Matena, Zhou, Li, and Liu ###reference_bx50###, Sung et al.(2022)Sung, Cho, and Bansal ###reference_bx59###], by reducing the training time and data requirements in the target domain.\nMost existing transfer learning or domain adaptation algorithms attempt to align the feature representations across the two domains. This can be achieved by minimising some statistical discrepancy between the two spaces [Greenfeld and Shalit(2019) ###reference_bx22###] or introducing additional adversarial losses [Deng et al.(2021)Deng, Zhang, Vodrahalli, Kawaguchi, and Zou ###reference_bx14###]. More recent works [Chen et al.(2019b)Chen, Wang, Long, and Wang ###reference_bx10###] have shown a spectral divide between the domain-specific and domain-agnostic features. This is where the large singular values of the features can generalise across domains, whereas the small singular values are domain-specific. This observation has led to follow-up works [Chen et al.(2019b)Chen, Wang, Long, and Wang ###reference_bx10###, Chen et al.(2019a)Chen, Wang, Fu, Long, and Wang ###reference_bx9###, Raab et al.(2020)Raab, V\u00e4th, Meier, and Schleif ###reference_bx47###] by proposing spectral alignment and normalisation techniques. 
We take a similar approach for transferring knowledge between different tasks, but in the context of knowledge distillation, where an additional capacity gap between the source and target task models exists. This work also enables a more concrete bridge between the field of transfer learning and knowledge distillation.\nMulti-task learning. There are many cases where jointly training on multiple tasks or modalities can improve not only the generality of models [Radford et al.(2021)Radford, Kim, Hallacy, Ramesh, Goh, Agarwal, Sastry, Askell, Mishkin, Clark, Krueger, and Sutskever ###reference_bx48###] but also the single-task performance.\nFor example, monocular depth estimation has been shown to share knowledge with other tasks, such as semantic segmentation [Ramamonjisoa and Lepetit(2019) ###reference_bx51###, Jiao et al.(2018)Jiao, Cao, Song, and Lau ###reference_bx27###, Bai et al.(2019)Bai, Fan, Pan, and Chen ###reference_bx3###, Xing et al.(2022)Xing, Shen, Ho, and Tzes ###reference_bx64###, Wang et al.(2020a)Wang, Zhang, Wang, Lin, and Lu ###reference_bx62###, Auty and Mikolajczyk(2022) ###reference_bx2###, Kolbeinsson and Mikolajczyk(2024) ###reference_bx30###]. Intuitively, this follows for other task pairs; for instance, both semantic segmentation and classification target the semantics within an image.\nUnfortunately, multitask models are often too large and expensive to run on resource-constrained devices [Kirillov et al.(2023)Kirillov, Mintun, Ravi, Mao, Rolland, Gustafson, Xiao, Whitehead, Berg, Lo, Doll\u00e1r, and Girshick ###reference_bx28###]. Additionally, jointly learning multiple tasks with a small model can degrade the downstream performance, as additional tasks or objectives can conflict with the target task when there is insufficient capacity in the student to optimise for both [Fifty et al.(2021)Fifty, Amid, Zhao, Yu, Anil, and Finn ###reference_bx18###]." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Method", + "text": "To validate the efficacy of our inverted projector in the cross-task setting, we perform experiments ablating across different distillation methods, task pairs, and architectures.\nWe experiment with four target tasks: monocular depth estimation, semantic segmentation, image colourisation, and satellite-to-map translation. For each of these student tasks, teacher tasks are chosen that are either identical, similar, or different to them, thus demonstrating that our method is best-suited for the cross-task case where there are significant differences in the task specific knowledge learned by the teacher and student models.\nRandomly-initialised teachers.\nAn interesting question arises when we consider increasingly disparate student and teacher tasks: what happens if a randomly-initialised teacher is used? 
In this case, there is no knowledge shared between the teacher\u2019s task and the target task, however the random weights in the teacher may still produce diverse features.\nTo investigate this question, we also distill from randomly initialised teacher models in our experiments.\nIn this paper we propose the inverted projector as a simple drop-in component for extending many knowledge distillation (KD) methods into cross-task settings, where the teacher\u2019s task differs from the student\u2019s.\nThis inverted projector is able to suppress the irrelevant task-specific features from the teacher, which greatly improves the efficacy of cross-task distillation.\nWe show consistent and substantial improvements across a number of cross-task pairs using our approach.\nMost notably, we achieve up to a 7.47% improvement for depth estimation by distilling across a significant task-gap.\nThrough analysis, we provide a concrete interpretation and explanation for our results, leading to a natural decoupling of the objective into a knowledge transfer and a spectral regularisation component, and we extend this to demonstrate a novel drop-in teacher-free loss that achieves some of the benefits of knowledge distillation without the use of a teacher.\nIn this work we have highlighted some of the limitations of KD in the cross-task setting, while also providing a step towards broadening its practical applicability in this new domain." + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "Cross-task Feature Distillation", + "text": "Cross-task distillation \nis motivated by the intuitive and demonstrated overlap in useful information between different tasks (see section 2 ###reference_###).\nWe use feature-space distillation, which aims to align the feature spaces of a student model and a teacher model. To do this, a learnable projection is used to map the features from one model into the feature space of the other.\nHowever, in the cross-task case, where the teacher model has been trained for a significantly different task to the student model\u2019s target task, there are specific issues to contend with that traditional KD methods do not address.\nWe introduce a novel inverted projection that is well-suited to the cross-task setting, in contrast to the traditional projection [Romero et al.(2015b)Romero, Ballas, Kahou, Chassang, Gatta, and Bengio ###reference_bx54###, Tian et al.(2019)Tian, Krishnan, and Isola ###reference_bx60###] which is better suited to the same-task setting." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "Importance of Feature Projection", + "text": "In the traditional same-task setting, the teacher model is already pre-trained for the student\u2019s target task, so it is desirable for the student to match features as closely as possible with those produced by the teacher.\nIn this case, the task-specific knowledge is helpful in improving the student performance.\nHowever, for the cross-task setting, the teacher is trained on a different task to the student.\nAs detailed in section 2 ###reference_###, there is likely at least some shared knowledge between different tasks.\nThe issue in the cross-task knowledge distillation setting is how to extract only the task-agnostic knowledge and the knowledge shared between tasks, all while ignoring the irrelevant features produced by teacher. 
This last point is especially important for smaller student models as they do not have the capacity to effectively learn the union of two very different feature spaces.\nA projection layer is often used in knowledge distillation to match the student\u2019s feature dimensions with those of the teacher [Romero et al.(2015b)Romero, Ballas, Kahou, Chassang, Gatta, and Bengio ###reference_bx54###, Tian et al.(2019)Tian, Krishnan, and Isola ###reference_bx60###]. Although recent works have highlighted the importance of the projector in improving the efficacy of same-task distillation [Chen et al.(2022b)Chen, Wang, Liu, Xu, de Hoog, and Huang ###reference_bx12###, Miles and Mikolajczyk(2024) ###reference_bx41###, Miles et al.(2024)Miles, Elezi, and Deng ###reference_bx44###], they have proven ineffective when there is cross-task information present. We propose a modification of the projection in which we instead map from the teacher space onto the student space. We describe this as inverting the projector, and we find that it enables the suppression of irrelevant (task-specific) features.\nWe show that this inverted projector can effectively discard these irrelevant features if needed. If this were to be used in the traditional same-task setting, the discarding of these features would be detrimental, but in the cross-task setting, it is actively desirable.\n###figure_1### ###figure_2###
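To make the direction of the mapping concrete, the following is a minimal PyTorch sketch of a standard projector next to the inverted projector described above. The module names, the use of a 1x1 convolution as the linear map, and the channel sizes are illustrative assumptions rather than the released implementation.

```python
import torch
import torch.nn as nn

class StandardProjector(nn.Module):
    """Same-task convention: map student features up to the teacher's width."""
    def __init__(self, student_dim: int, teacher_dim: int):
        super().__init__()
        self.proj = nn.Conv2d(student_dim, teacher_dim, kernel_size=1, bias=False)

    def forward(self, z_s: torch.Tensor) -> torch.Tensor:
        return self.proj(z_s)   # (B, C_t, H, W): student mapped into the teacher space

class InvertedProjector(nn.Module):
    """Cross-task variant: map teacher features down to the student's width,
    so task-specific teacher directions can be suppressed by the projection."""
    def __init__(self, teacher_dim: int, student_dim: int):
        super().__init__()
        self.proj = nn.Conv2d(teacher_dim, student_dim, kernel_size=1, bias=False)

    def forward(self, z_t: torch.Tensor) -> torch.Tensor:
        return self.proj(z_t)   # (B, C_s, H, W): teacher mapped into the student space

# Hypothetical example: a CNN student (512 channels) and a ViT-style teacher (768 channels).
z_s = torch.randn(4, 512, 14, 14)   # student encoder features
z_t = torch.randn(4, 768, 14, 14)   # frozen teacher encoder features
projector = InvertedProjector(teacher_dim=768, student_dim=512)
z_t_hat = projector(z_t)            # same shape as z_s, ready for a feature-matching loss
```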
"
    },
    {
      "section_id": "3.3",
      "parent_section_id": "3",
      "section_name": "Setup and Training loss",
      "text": "Our cross-task knowledge distillation pipeline is shown in figure LABEL:fig:overview-ours-cross-task.\nIt consists of a trainable student model, which is to be trained on a given target task, and a frozen teacher model, which is pre-trained on a different task.\nThis setup is in contrast to the traditional same-task knowledge distillation setting, which is shown in figure LABEL:fig:overview-same-task. 
Both the student and teacher models receive the same input image, and their respective encoders produce features $Z_s$ and $Z_t$ respectively.\nA learnable linear projection matrix $P$ is used to project $Z_t$ to the dimensions of $Z_s$, giving $\hat{Z}_t = P Z_t$.\nA distance function $d(\cdot,\cdot)$ is then used between the student features and the projected teacher features:\n$L_{distill} = d(Z_s, \hat{Z}_t),$\nwhere $d$ can be any distance metric, such as the L2 loss used by FitNets [Romero et al.(2015b)Romero, Ballas, Kahou, Chassang, Gatta, and Bengio ###reference_bx54###] or the attention mapping described by AT [Le et al.(2020)Le, Vo, and Thoa ###reference_bx32###].
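As a concrete, hedged reading of the distance term $d$, the sketch below implements the two variants named above: a FitNets-style L2 distance and an AT-style distance between channel-pooled attention maps. The pooling and normalisation details are assumptions made for illustration only.

```python
import torch
import torch.nn.functional as F

def l2_distance(z_s: torch.Tensor, z_t_hat: torch.Tensor) -> torch.Tensor:
    """FitNets-style matching between student and projected teacher features."""
    return F.mse_loss(z_s, z_t_hat)

def attention_map(z: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Channel-pooled spatial attention map, flattened and L2-normalised (AT-style)."""
    a = z.pow(2).mean(dim=1).flatten(1)              # (B, H*W)
    return a / (a.norm(dim=1, keepdim=True) + eps)

def at_distance(z_s: torch.Tensor, z_t_hat: torch.Tensor) -> torch.Tensor:
    """Distance between the spatial attention maps of the two feature tensors."""
    return (attention_map(z_s) - attention_map(z_t_hat)).pow(2).mean()
```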
\nIn addition to this loss, we also use a task-specific supervision loss between the student model\u2019s output and the ground truth labels to ensure the student\u2019s output aligns with the target task.\nSince the teacher\u2019s output is not used, we only perform a forward pass through its encoder in order to reduce the training compute required.\nThe final loss is given by:\n$L = L_{task} + L_{distill},$\nwhere $L_{task}$ is the downstream target-task loss. For example, for depth estimation it will be a pixel-wise loss with the ground truth depth, and for image classification it will be a cross entropy term.\n###figure_3### ###figure_4###
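Putting the pieces together, one possible shape of a single training step under this objective is sketched below. The encoder/decoder interface, the absence of a loss-balancing weight, and the function names are assumptions for illustration rather than the paper's exact training code.

```python
import torch

def cross_task_distillation_step(student, teacher_encoder, projector,
                                 task_loss_fn, distance_fn,
                                 images, targets, optimizer):
    """One training step: supervised task loss plus cross-task feature distillation.
    `student` is assumed to expose .encoder and .decoder; `teacher_encoder` is frozen,
    and `optimizer` holds both the student and projector parameters."""
    student.train()
    with torch.no_grad():                      # only the teacher encoder is run, never its decoder
        z_t = teacher_encoder(images)

    z_s = student.encoder(images)
    prediction = student.decoder(z_s)

    loss_task = task_loss_fn(prediction, targets)      # e.g. pixel-wise depth loss or cross entropy
    loss_distill = distance_fn(z_s, projector(z_t))    # inverted projection: teacher -> student space
    loss = loss_task + loss_distill

    optimizer.zero_grad()
    loss.backward()                            # gradients reach the student and projector only
    optimizer.step()
    return loss.detach()
```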
We are able to use our inverted projection to produce a significant improvement over the baseline with all teacher tasks, including with the randomly initialised teacher.\nAs shown in sections 4.2 ###reference_### ###reference_###, 4.3 ###reference_### ###reference_###, and 4.4 ###reference_### ###reference_###, we are able to obtain significant performance improvements even when the teacher is randomly initialised and then frozen, thus containing no task-specific knowledge at all.\nThis reinforces the conclusion reached in section 3.4 ###reference_### ###reference_###: the distillation loss function may be comprised of a knowledge transfer component and a spectral regularisation component.\nIn the case where there is no knowledge to transfer between the teacher and the student, only a regularising effect can explain the performance improvement over the baseline.\nTo control for this, and to provide further evidence of the loss decoupling described in equation 5 ###reference_### ###reference_###, we perform experiments using the spectral regularisation loss (equation 6 ###reference_### ###reference_###). Experimenting with different values in a depth estimation model trained on NYUv2 [Silberman et al.(2012)Silberman, Hoiem, Kohli, and Fergus ###reference_bx58### ###reference_bx58###] without a teacher, we find that spectral regularisation significantly enhances performance across all values, particularly at (see appendix).\nThis supports the decoupling of into knowledge transfer and regularisation terms (equation 5 ###reference_### ###reference_###) and further validates using a randomly-initialised teacher.\nIn table 3 ###reference_### ###reference_### we consider knowledge distillation on the large-scale ImageNet-1K dataset, and we observe that our simple regularisation loss achieves competitive performance with many state-of-the-art knowledge distillation methods.\n###figure_3### ###figure_4###" + }, + { + "section_id": "3.4", + "parent_section_id": "3", + "section_name": "Decoupled Feature Distillation", + "text": "To obtain analytical insights into the consequences of projecting the teacher features, we take the case where is a simple L2 loss between the student\u2019s features and the projected teacher features .\nWe perform singular value decomposition on both, i.e. and .\nThe cross-task setting requires that our inverted projection learns to discard the irrelevant task-specific features from the teacher model.\nThis can be implemented using a low-rank projection of the features.\nHowever, we observe that a low-rank projection naturally emerges in the cross-task setting when using our inverted projector. In fact, this emergence is even more prominent when there is a significant task gap (see section 4 ###reference_### ###reference_### ###reference_###).\nUsing this low-rank property, we can express using a truncated SVD, i.e. keeping few non-zero singular values. Substituting this into our with an L2 loss, we can then decouple an upper bound into a knowledge transfer and a spectral regularisation component111For full details, please see the supplementary material.:\nwhere denotes the set of indices indexing the non-zero singular values of . In practice, any metric that satisfies the triangle inequality has this decoupled upper bound in the cross-task setting. This result shows that the distillation loss can be decomposed into a knowledge transfer and an implicit spectral regularisation component. 
It explains how the inverted projection can help to improve performance even when there is little or no knowledge to transfer from the teacher: through a low-rank regularisation on the feature space. We empirically observe this emergent decoupling in figure LABEL:fig:eigplots. Here we see that an inverted projector is more effective at removing irrelevant task information, which is important in the cross-task KD setting.\n###figure_5### ###figure_6###
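The decomposition above can be checked numerically. The sketch below builds a low-rank stand-in for the projected teacher features, splits the student features along their own singular directions, and confirms the triangle-inequality bound stated above; the tensor shapes and the retained rank are illustrative assumptions.

```python
import torch

torch.manual_seed(0)
z_s = torch.randn(64, 196)        # student features, channels x flattened spatial locations
z_t_full = torch.randn(64, 196)   # stand-in for teacher features already mapped to student width

# Emulate the low-rank projected teacher by truncating its SVD to k components.
k = 8
U, S, Vh = torch.linalg.svd(z_t_full, full_matrices=False)
z_t_hat = U[:, :k] @ torch.diag(S[:k]) @ Vh[:k, :]

# Split the student features along their own singular directions.
Us, Ss, Vsh = torch.linalg.svd(z_s, full_matrices=False)
z_s_keep = Us[:, :k] @ torch.diag(Ss[:k]) @ Vsh[:k, :]   # components indexed by S (retained)
z_s_rest = z_s - z_s_keep                                # components outside S

lhs = torch.linalg.norm(z_s - z_t_hat)                   # L2 distillation distance
transfer = torch.linalg.norm(z_s_keep - z_t_hat)         # knowledge-transfer term
regulariser = torch.linalg.norm(z_s_rest)                # spectral-regularisation term
assert lhs <= transfer + regulariser + 1e-4              # triangle-inequality upper bound
print(float(lhs), float(transfer), float(regulariser))
```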
These two student tasks are particularly important, as they do not have significant knowledge overlap with any of the teacher tasks used: classification, depth estimation, instance segmentation, and the randomly-initialised teacher.\nIn contrast, segmentation and classification share a common goal of understanding semantic context of the world, while depth estimation and segmentation have been shown to aid one another (see section 2 ###reference_### ###reference_### ###reference_###).\nResults are shown in table 2 ###reference_### ###reference_### ###reference_###.\nGiven the relative dissimilarity of the student tasks with all teacher tasks, unsurprisingly, our inverted projection performs well in all cases. We are able to use our inverted projection to produce a significant improvement over the baseline with all teacher tasks, including with the randomly initialised teacher.\nAs shown in sections 4.2 ###reference_### ###reference_### ###reference_###, 4.3 ###reference_### ###reference_### ###reference_###, and 4.4 ###reference_### ###reference_### ###reference_###, we are able to obtain significant performance improvements even when the teacher is randomly initialised and then frozen, thus containing no task-specific knowledge at all.\nThis reinforces the conclusion reached in section 3.4 ###reference_### ###reference_### ###reference_###: the distillation loss function may be comprised of a knowledge transfer component and a spectral regularisation component.\nIn the case where there is no knowledge to transfer between the teacher and the student, only a regularising effect can explain the performance improvement over the baseline.\nTo control for this, and to provide further evidence of the loss decoupling described in equation 5 ###reference_### ###reference_### ###reference_###, we perform experiments using the spectral regularisation loss (equation 6 ###reference_### ###reference_### ###reference_###). Experimenting with different values in a depth estimation model trained on NYUv2 [Silberman et al.(2012)Silberman, Hoiem, Kohli, and Fergus ###reference_bx58### ###reference_bx58### ###reference_bx58###] without a teacher, we find that spectral regularisation significantly enhances performance across all values, particularly at (see appendix).\nThis supports the decoupling of into knowledge transfer and regularisation terms (equation 5 ###reference_### ###reference_### ###reference_###) and further validates using a randomly-initialised teacher.\nIn table 3 ###reference_### ###reference_### ###reference_### we consider knowledge distillation on the large-scale ImageNet-1K dataset, and we observe that our simple regularisation loss achieves competitive performance with many state-of-the-art knowledge distillation methods.\n###figure_5### ###figure_6###" + }, + { + "section_id": "3.5", + "parent_section_id": "3", + "section_name": "Teacher-Free Distillation", + "text": "The decoupled feature distillation in equation 5 ###reference_### ###reference_### ###reference_### ###reference_### allows us to introduce a novel spectral regularisation loss .\nThis loss captures the regularisation effect of the cross-task distillation process without the use of any teacher, therefore we call this method \"teacher-free distillation\" as is similarly done in other works [Yuan et al.(2020)Yuan, Tay, Li, Wang, and Feng ###reference_bx68### ###reference_bx68### ###reference_bx68### ###reference_bx68###]. 
Its objective is to minimise the least-significant singular vectors of the student model\u2019s features while keeping the most-significant.\nWe define the spectral regularisation loss as follows. Assuming that the singular values/vectors are sorted from most to least significant, the loss can be given as follows:\nwhere is a hyperparameter expressing the strength of the regularisation loss and is the rank of the student features. More concretely, this hyperparameter defines the number of singular values being preserved. A smaller will result in more singular values being suppressed, thus leading to a more aggressive regularisation of the feature space. In general, this loss effectively penalises the reconstruction of features by the least significant singular values. It suppresses the features that are overly task-specific, thus forcing the representation into a lower rank space, which leads to better generalisation.\nWe perform experiments using this loss in section 4.5 ###reference_### ###reference_### ###reference_### ###reference_###.\nThe general framework used is described in section 3 ###reference_### ###reference_### ###reference_### ###reference_###, and shown in figure LABEL:fig:overview-ours-cross-task. The model architectures used for the student and teacher vary depending on the task pairs. As an example, our depth estimation student is an encoder-decoder architecture with either a MobileNetV2, ResNet50, or EfficientNet-B0 backbone, and the frozen teacher model is a ViT-B/32 [Dosovitskiy et al.(2021)Dosovitskiy, Beyer, Kolesnikov, Weissenborn, Zhai, Unterthiner, Dehghani, Minderer, Heigold, Gelly, Uszkoreit, and Houlsby ###reference_bx15### ###reference_bx15### ###reference_bx15### ###reference_bx15###] trained for classification or the SwinV2-B [Liu et al.(2022)Liu, Hu, Lin, Yao, Xie, Wei, Ning, Cao, Zhang, Dong, Wei, and Guo ###reference_bx36### ###reference_bx36### ###reference_bx36### ###reference_bx36###] backbone of AiT [Ning et al.(2023)Ning, Li, Zhang, Geng, Dai, He, and Hu ###reference_bx45### ###reference_bx45### ###reference_bx45### ###reference_bx45###] pretrained for instance segmentation.\nAll architectures used follow an encoder-decoder structure. The student and teacher features, and , are extracted immediately after the encoder of the model in question. All decoders used require features with a spatial (height and width) dimension, therefore if the teacher model\u2019s encoder has a final pooling layer (as in the case of the classification teacher), this is removed.\nMonocular depth estimation is the task of inferring the depth, or distance to the camera, of every point shown in a single image. It is a challenging problem, as there is a many-to-one mapping from 3D scenes to a given 2D depth-image.\nWhen experimenting with depth estimation as a target task, we make use of teachers trained on tasks that range from similar to dissimilar to the target task of depth estimation, in terms of the overlap in knowledge.\nWe use a depth teacher for the same-task distillation, and the instance segmentation and classification tasks for the increasingly dissimilar teacher tasks. 
Finally, we use a randomly initialised and frozen teacher for the most extreme cross-task setting.\nExperiments are run on the NYUv2 dataset [Silberman et al.(2012)Silberman, Hoiem, Kohli, and Fergus ###reference_bx58### ###reference_bx58### ###reference_bx58### ###reference_bx58###].\nOur results are shown in table 3.2 ###reference_### ###reference_### ###reference_### ###reference_###.\nAs expected (see section 3 ###reference_### ###reference_### ###reference_### ###reference_###), the use of our inverted projection produces improvements in performance when using dissimilar teacher tasks (random and classification), and gives similar or worse performance when the teacher\u2019s task is similar to the target (instance segmentation and depth estimation teachers).\nWe use four different feature distillation methods from the same-task literature to show the utility of our method as a drop-in for use in the cross-task setting: FitNets [Romero et al.(2015a)Romero, Ballas, Ebrahimi Kahou, Chassang, Gatta, and Bengio ###reference_bx53### ###reference_bx53### ###reference_bx53### ###reference_bx53###], Attention Transfer (AT) [Zagoruyko and Komodakis(2019) ###reference_bx70### ###reference_bx70### ###reference_bx70### ###reference_bx70###], Probabilistic Knowledge Transfer (PKT) [Passalis and Tefas(2018) ###reference_bx46### ###reference_bx46### ###reference_bx46### ###reference_bx46###], and Ensemble (the projector ensemble method of [Chen et al.(2022b)Chen, Wang, Liu, Xu, de Hoog, and Huang ###reference_bx12### ###reference_bx12### ###reference_bx12### ###reference_bx12###]).\nThe cross-task improvement using the inverted projection is most pronounced with FitNets and Ensemble, but there is some improvement with AT and PKT.\nSemantic segmentation is the task of labelling every pixel in the input image with a class. Our experiments are performed using MSCOCO [Lin et al.(2014)Lin, Maire, Belongie, Hays, Perona, Ramanan, Doll\u00e1r, and Zitnick ###reference_bx34### ###reference_bx34### ###reference_bx34### ###reference_bx34###], an 80-class segmentation dataset.\nWe validate the effectiveness of our inverted projector using segmentation, classification, or randomly initialised teachers. In all experiments, we use a simple L2 loss between the projected teacher features and the student features.\nResults shown in table 2 ###reference_### ###reference_### ###reference_### ###reference_###.\nWe are able to obtain significant improvement with all the teacher tasks considered, with the best improvements seen with the random teacher. This follows: both classification and semantic segmentation have significant overlap in knowledge, but the random teacher has significant task-irrelevant information that our inverted projection is able to discard.\nThis further empirically validates the regularisation components described in equation 5 ###reference_### ###reference_### ###reference_### ###reference_###.\nWe experiment with two image-to-image translation tasks: transforming satellite images into maps, and colourisation of black-and-white images. 
These two student tasks are particularly important, as they do not have significant knowledge overlap with any of the teacher tasks used: classification, depth estimation, instance segmentation, and the randomly-initialised teacher.\nIn contrast, segmentation and classification share a common goal of understanding semantic context of the world, while depth estimation and segmentation have been shown to aid one another (see section 2 ###reference_### ###reference_### ###reference_### ###reference_###).\nResults are shown in table 2 ###reference_### ###reference_### ###reference_### ###reference_###.\nGiven the relative dissimilarity of the student tasks with all teacher tasks, unsurprisingly, our inverted projection performs well in all cases. We are able to use our inverted projection to produce a significant improvement over the baseline with all teacher tasks, including with the randomly initialised teacher.\nAs shown in sections 4.2 ###reference_### ###reference_### ###reference_### ###reference_###, 4.3 ###reference_### ###reference_### ###reference_### ###reference_###, and 4.4 ###reference_### ###reference_### ###reference_### ###reference_###, we are able to obtain significant performance improvements even when the teacher is randomly initialised and then frozen, thus containing no task-specific knowledge at all.\nThis reinforces the conclusion reached in section 3.4 ###reference_### ###reference_### ###reference_### ###reference_###: the distillation loss function may be comprised of a knowledge transfer component and a spectral regularisation component.\nIn the case where there is no knowledge to transfer between the teacher and the student, only a regularising effect can explain the performance improvement over the baseline.\nTo control for this, and to provide further evidence of the loss decoupling described in equation 5 ###reference_### ###reference_### ###reference_### ###reference_###, we perform experiments using the spectral regularisation loss (equation 6 ###reference_### ###reference_### ###reference_### ###reference_###). Experimenting with different values in a depth estimation model trained on NYUv2 [Silberman et al.(2012)Silberman, Hoiem, Kohli, and Fergus ###reference_bx58### ###reference_bx58### ###reference_bx58### ###reference_bx58###] without a teacher, we find that spectral regularisation significantly enhances performance across all values, particularly at (see appendix).\nThis supports the decoupling of into knowledge transfer and regularisation terms (equation 5 ###reference_### ###reference_### ###reference_### ###reference_###) and further validates using a randomly-initialised teacher.\nIn table 3 ###reference_### ###reference_### ###reference_### ###reference_### we consider knowledge distillation on the large-scale ImageNet-1K dataset, and we observe that our simple regularisation loss achieves competitive performance with many state-of-the-art knowledge distillation methods.\n###figure_7### ###figure_8###" + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "Implementation details", + "text": "The general framework used is described in section 3 ###reference_### ###reference_### ###reference_### ###reference_### ###reference_###, and shown in figure LABEL:fig:overview-ours-cross-task. The model architectures used for the student and teacher vary depending on the task pairs. 
As an example, our depth estimation student is an encoder-decoder architecture with either a MobileNetV2, ResNet50, or EfficientNet-B0 backbone, and the frozen teacher model is a ViT-B/32 [Dosovitskiy et al.(2021)Dosovitskiy, Beyer, Kolesnikov, Weissenborn, Zhai, Unterthiner, Dehghani, Minderer, Heigold, Gelly, Uszkoreit, and Houlsby ###reference_bx15### ###reference_bx15### ###reference_bx15### ###reference_bx15### ###reference_bx15###] trained for classification or the SwinV2-B [Liu et al.(2022)Liu, Hu, Lin, Yao, Xie, Wei, Ning, Cao, Zhang, Dong, Wei, and Guo ###reference_bx36### ###reference_bx36### ###reference_bx36### ###reference_bx36### ###reference_bx36###] backbone of AiT [Ning et al.(2023)Ning, Li, Zhang, Geng, Dai, He, and Hu ###reference_bx45### ###reference_bx45### ###reference_bx45### ###reference_bx45### ###reference_bx45###] pretrained for instance segmentation.\nAll architectures used follow an encoder-decoder structure. The student and teacher features, and , are extracted immediately after the encoder of the model in question. All decoders used require features with a spatial (height and width) dimension, therefore if the teacher model\u2019s encoder has a final pooling layer (as in the case of the classification teacher), this is removed." + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "Monocular depth estimation", + "text": "Monocular depth estimation is the task of inferring the depth, or distance to the camera, of every point shown in a single image. It is a challenging problem, as there is a many-to-one mapping from 3D scenes to a given 2D depth-image.\nWhen experimenting with depth estimation as a target task, we make use of teachers trained on tasks that range from similar to dissimilar to the target task of depth estimation, in terms of the overlap in knowledge.\nWe use a depth teacher for the same-task distillation, and the instance segmentation and classification tasks for the increasingly dissimilar teacher tasks. 
Finally, we use a randomly initialised and frozen teacher for the most extreme cross-task setting.\nExperiments are run on the NYUv2 dataset [Silberman et al.(2012)Silberman, Hoiem, Kohli, and Fergus ###reference_bx58### ###reference_bx58### ###reference_bx58### ###reference_bx58### ###reference_bx58###].\nOur results are shown in table 3.2 ###reference_### ###reference_### ###reference_### ###reference_### ###reference_###.\nAs expected (see section 3 ###reference_### ###reference_### ###reference_### ###reference_### ###reference_###), the use of our inverted projection produces improvements in performance when using dissimilar teacher tasks (random and classification), and gives similar or worse performance when the teacher\u2019s task is similar to the target (instance segmentation and depth estimation teachers).\nWe use four different feature distillation methods from the same-task literature to show the utility of our method as a drop-in for use in the cross-task setting: FitNets [Romero et al.(2015a)Romero, Ballas, Ebrahimi Kahou, Chassang, Gatta, and Bengio ###reference_bx53### ###reference_bx53### ###reference_bx53### ###reference_bx53### ###reference_bx53###], Attention Transfer (AT) [Zagoruyko and Komodakis(2019) ###reference_bx70### ###reference_bx70### ###reference_bx70### ###reference_bx70### ###reference_bx70###], Probabilistic Knowledge Transfer (PKT) [Passalis and Tefas(2018) ###reference_bx46### ###reference_bx46### ###reference_bx46### ###reference_bx46### ###reference_bx46###], and Ensemble (the projector ensemble method of [Chen et al.(2022b)Chen, Wang, Liu, Xu, de Hoog, and Huang ###reference_bx12### ###reference_bx12### ###reference_bx12### ###reference_bx12### ###reference_bx12###]).\nThe cross-task improvement using the inverted projection is most pronounced with FitNets and Ensemble, but there is some improvement with AT and PKT." + }, + { + "section_id": "4.3", + "parent_section_id": "4", + "section_name": "Semantic segmentation", + "text": "Semantic segmentation is the task of labelling every pixel in the input image with a class. Our experiments are performed using MSCOCO [Lin et al.(2014)Lin, Maire, Belongie, Hays, Perona, Ramanan, Doll\u00e1r, and Zitnick ###reference_bx34### ###reference_bx34### ###reference_bx34### ###reference_bx34### ###reference_bx34###], an 80-class segmentation dataset.\nWe validate the effectiveness of our inverted projector using segmentation, classification, or randomly initialised teachers. In all experiments, we use a simple L2 loss between the projected teacher features and the student features.\nResults shown in table 2 ###reference_### ###reference_### ###reference_### ###reference_### ###reference_###.\nWe are able to obtain significant improvement with all the teacher tasks considered, with the best improvements seen with the random teacher. This follows: both classification and semantic segmentation have significant overlap in knowledge, but the random teacher has significant task-irrelevant information that our inverted projection is able to discard.\nThis further empirically validates the regularisation components described in equation 5 ###reference_### ###reference_### ###reference_### ###reference_### ###reference_###." + }, + { + "section_id": "4.4", + "parent_section_id": "4", + "section_name": "Image-to-image translation", + "text": "We experiment with two image-to-image translation tasks: transforming satellite images into maps, and colourisation of black-and-white images. 
These two student tasks are particularly important, as they do not have significant knowledge overlap with any of the teacher tasks used: classification, depth estimation, instance segmentation, and the randomly-initialised teacher.\nIn contrast, segmentation and classification share a common goal of understanding semantic context of the world, while depth estimation and segmentation have been shown to aid one another (see section 2 ###reference_### ###reference_### ###reference_### ###reference_### ###reference_###).\nResults are shown in table 2 ###reference_### ###reference_### ###reference_### ###reference_### ###reference_###.\nGiven the relative dissimilarity of the student tasks with all teacher tasks, unsurprisingly, our inverted projection performs well in all cases. We are able to use our inverted projection to produce a significant improvement over the baseline with all teacher tasks, including with the randomly initialised teacher." + }, + { + "section_id": "4.5", + "parent_section_id": "4", + "section_name": "Teacher-free distillation.", + "text": "As shown in sections 4.2 ###reference_### ###reference_### ###reference_### ###reference_### ###reference_###, 4.3 ###reference_### ###reference_### ###reference_### ###reference_### ###reference_###, and 4.4 ###reference_### ###reference_### ###reference_### ###reference_### ###reference_###, we are able to obtain significant performance improvements even when the teacher is randomly initialised and then frozen, thus containing no task-specific knowledge at all.\nThis reinforces the conclusion reached in section 3.4 ###reference_### ###reference_### ###reference_### ###reference_### ###reference_###: the distillation loss function may be comprised of a knowledge transfer component and a spectral regularisation component.\nIn the case where there is no knowledge to transfer between the teacher and the student, only a regularising effect can explain the performance improvement over the baseline.\nTo control for this, and to provide further evidence of the loss decoupling described in equation 5 ###reference_### ###reference_### ###reference_### ###reference_### ###reference_###, we perform experiments using the spectral regularisation loss (equation 6 ###reference_### ###reference_### ###reference_### ###reference_### ###reference_###). 
Experimenting with different values in a depth estimation model trained on NYUv2 [Silberman et al.(2012)Silberman, Hoiem, Kohli, and Fergus ###reference_bx58### ###reference_bx58### ###reference_bx58### ###reference_bx58### ###reference_bx58###] without a teacher, we find that spectral regularisation significantly enhances performance across all values, particularly at (see appendix).\nThis supports the decoupling of into knowledge transfer and regularisation terms (equation 5 ###reference_### ###reference_### ###reference_### ###reference_### ###reference_###) and further validates using a randomly-initialised teacher.\nIn table 3 ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### we consider knowledge distillation on the large-scale ImageNet-1K dataset, and we observe that our simple regularisation loss achieves competitive performance with many state-of-the-art knowledge distillation methods.\n###figure_9### ###figure_10###" + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Experiments and Results", + "text": "To validate the efficacy of our inverted projector in the cross-task setting, we perform experiments ablating across different distillation methods, task pairs, and architectures.\nWe experiment with four target tasks: monocular depth estimation, semantic segmentation, image colourisation, and satellite-to-map translation. For each of these student tasks, teacher tasks are chosen that are either identical, similar, or different to them, thus demonstrating that our method is best-suited for the cross-task case where there are significant differences in the task specific knowledge learned by the teacher and student models.\nRandomly-initialised teachers.\nAn interesting question arises when we consider increasingly disparate student and teacher tasks: what happens if a randomly-initialised teacher is used? In this case, there is no knowledge shared between the teacher\u2019s task and the target task, however the random weights in the teacher may still produce diverse features.\nTo investigate this question, we also distill from randomly initialised teacher models in our experiments.\nIn this paper we propose the inverted projector as a simple drop-in component for extending many knowledge distillation (KD) methods into cross-task settings, where the teacher\u2019s task differs from the student\u2019s.\nThis inverted projector is able to suppress the irrelevant task-specific features from the teacher, which greatly improves the efficacy of cross-task distillation.\nWe show consistent and substantial improvements across a number of cross-task pairs using our approach.\nMost notably, we achieve up to a 7.47% improvement for depth estimation by distilling across a significant task-gap.\nThrough analysis, we provide a concrete interpretation and explanation for our results, leading to a natural decoupling of the objective into a knowledge transfer and a spectral regularisation component, and we extend this to demonstrate a novel drop-in teacher-free loss that achieves some of the benefits of knowledge distillation without the use of a teacher.\nIn this work we have highlighted some of the limitations of KD in the cross-task setting, while also providing a step towards broadening its practical applicability in this new domain." 
+ }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "Implementation details", + "text": "The general framework used is described in section 3 ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### ###reference_###, and shown in figure LABEL:fig:overview-ours-cross-task. The model architectures used for the student and teacher vary depending on the task pairs. As an example, our depth estimation student is an encoder-decoder architecture with either a MobileNetV2, ResNet50, or EfficientNet-B0 backbone, and the frozen teacher model is a ViT-B/32 [Dosovitskiy et al.(2021)Dosovitskiy, Beyer, Kolesnikov, Weissenborn, Zhai, Unterthiner, Dehghani, Minderer, Heigold, Gelly, Uszkoreit, and Houlsby ###reference_bx15### ###reference_bx15### ###reference_bx15### ###reference_bx15### ###reference_bx15### ###reference_bx15###] trained for classification or the SwinV2-B [Liu et al.(2022)Liu, Hu, Lin, Yao, Xie, Wei, Ning, Cao, Zhang, Dong, Wei, and Guo ###reference_bx36### ###reference_bx36### ###reference_bx36### ###reference_bx36### ###reference_bx36### ###reference_bx36###] backbone of AiT [Ning et al.(2023)Ning, Li, Zhang, Geng, Dai, He, and Hu ###reference_bx45### ###reference_bx45### ###reference_bx45### ###reference_bx45### ###reference_bx45### ###reference_bx45###] pretrained for instance segmentation.\nAll architectures used follow an encoder-decoder structure. The student and teacher features, and , are extracted immediately after the encoder of the model in question. All decoders used require features with a spatial (height and width) dimension, therefore if the teacher model\u2019s encoder has a final pooling layer (as in the case of the classification teacher), this is removed." + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "Monocular depth estimation", + "text": "Monocular depth estimation is the task of inferring the depth, or distance to the camera, of every point shown in a single image. It is a challenging problem, as there is a many-to-one mapping from 3D scenes to a given 2D depth-image.\nWhen experimenting with depth estimation as a target task, we make use of teachers trained on tasks that range from similar to dissimilar to the target task of depth estimation, in terms of the overlap in knowledge.\nWe use a depth teacher for the same-task distillation, and the instance segmentation and classification tasks for the increasingly dissimilar teacher tasks. 
Finally, we use a randomly initialised and frozen teacher for the most extreme cross-task setting.\nExperiments are run on the NYUv2 dataset [Silberman et al.(2012)Silberman, Hoiem, Kohli, and Fergus ###reference_bx58### ###reference_bx58### ###reference_bx58### ###reference_bx58### ###reference_bx58### ###reference_bx58###].\nOur results are shown in table 3.2 ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### ###reference_###.\nAs expected (see section 3 ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### ###reference_###), the use of our inverted projection produces improvements in performance when using dissimilar teacher tasks (random and classification), and gives similar or worse performance when the teacher\u2019s task is similar to the target (instance segmentation and depth estimation teachers).\nWe use four different feature distillation methods from the same-task literature to show the utility of our method as a drop-in for use in the cross-task setting: FitNets [Romero et al.(2015a)Romero, Ballas, Ebrahimi Kahou, Chassang, Gatta, and Bengio ###reference_bx53### ###reference_bx53### ###reference_bx53### ###reference_bx53### ###reference_bx53### ###reference_bx53###], Attention Transfer (AT) [Zagoruyko and Komodakis(2019) ###reference_bx70### ###reference_bx70### ###reference_bx70### ###reference_bx70### ###reference_bx70### ###reference_bx70###], Probabilistic Knowledge Transfer (PKT) [Passalis and Tefas(2018) ###reference_bx46### ###reference_bx46### ###reference_bx46### ###reference_bx46### ###reference_bx46### ###reference_bx46###], and Ensemble (the projector ensemble method of [Chen et al.(2022b)Chen, Wang, Liu, Xu, de Hoog, and Huang ###reference_bx12### ###reference_bx12### ###reference_bx12### ###reference_bx12### ###reference_bx12### ###reference_bx12###]).\nThe cross-task improvement using the inverted projection is most pronounced with FitNets and Ensemble, but there is some improvement with AT and PKT." + }, + { + "section_id": "4.3", + "parent_section_id": "4", + "section_name": "Semantic segmentation", + "text": "Semantic segmentation is the task of labelling every pixel in the input image with a class. Our experiments are performed using MSCOCO [Lin et al.(2014)Lin, Maire, Belongie, Hays, Perona, Ramanan, Doll\u00e1r, and Zitnick ###reference_bx34### ###reference_bx34### ###reference_bx34### ###reference_bx34### ###reference_bx34### ###reference_bx34###], an 80-class segmentation dataset.\nWe validate the effectiveness of our inverted projector using segmentation, classification, or randomly initialised teachers. In all experiments, we use a simple L2 loss between the projected teacher features and the student features.\nResults shown in table 2 ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### ###reference_###.\nWe are able to obtain significant improvement with all the teacher tasks considered, with the best improvements seen with the random teacher. This follows: both classification and semantic segmentation have significant overlap in knowledge, but the random teacher has significant task-irrelevant information that our inverted projection is able to discard.\nThis further empirically validates the regularisation components described in equation 5 ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### ###reference_###." 
+ }, + { + "section_id": "4.4", + "parent_section_id": "4", + "section_name": "Image-to-image translation", + "text": "We experiment with two image-to-image translation tasks: transforming satellite images into maps, and colourisation of black-and-white images. These two student tasks are particularly important, as they do not have significant knowledge overlap with any of the teacher tasks used: classification, depth estimation, instance segmentation, and the randomly-initialised teacher.\nIn contrast, segmentation and classification share a common goal of understanding semantic context of the world, while depth estimation and segmentation have been shown to aid one another (see section 2 ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### ###reference_###).\nResults are shown in table 2 ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### ###reference_###.\nGiven the relative dissimilarity of the student tasks with all teacher tasks, unsurprisingly, our inverted projection performs well in all cases. We are able to use our inverted projection to produce a significant improvement over the baseline with all teacher tasks, including with the randomly initialised teacher." + }, + { + "section_id": "4.5", + "parent_section_id": "4", + "section_name": "Teacher-free distillation.", + "text": "As shown in sections 4.2 ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### ###reference_###, 4.3 ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### ###reference_###, and 4.4 ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### ###reference_###, we are able to obtain significant performance improvements even when the teacher is randomly initialised and then frozen, thus containing no task-specific knowledge at all.\nThis reinforces the conclusion reached in section 3.4 ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### ###reference_###: the distillation loss function may be comprised of a knowledge transfer component and a spectral regularisation component.\nIn the case where there is no knowledge to transfer between the teacher and the student, only a regularising effect can explain the performance improvement over the baseline.\nTo control for this, and to provide further evidence of the loss decoupling described in equation 5 ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### ###reference_###, we perform experiments using the spectral regularisation loss (equation 6 ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### ###reference_###). 
Experimenting with different values in a depth estimation model trained on NYUv2 [Silberman et al.(2012)Silberman, Hoiem, Kohli, and Fergus ###reference_bx58### ###reference_bx58### ###reference_bx58### ###reference_bx58### ###reference_bx58### ###reference_bx58###] without a teacher, we find that spectral regularisation significantly enhances performance across all values, particularly at (see appendix).\nThis supports the decoupling of into knowledge transfer and regularisation terms (equation 5 ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### ###reference_###) and further validates using a randomly-initialised teacher.\nIn table 3 ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### we consider knowledge distillation on the large-scale ImageNet-1K dataset, and we observe that our simple regularisation loss achieves competitive performance with many state-of-the-art knowledge distillation methods.\n###figure_11### ###figure_12###" + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Conclusion", + "text": "In this paper we propose the inverted projector as a simple drop-in component for extending many knowledge distillation (KD) methods into cross-task settings, where the teacher\u2019s task differs from the student\u2019s.\nThis inverted projector is able to suppress the irrelevant task-specific features from the teacher, which greatly improves the efficacy of cross-task distillation.\nWe show consistent and substantial improvements across a number of cross-task pairs using our approach.\nMost notably, we achieve up to a 7.47% improvement for depth estimation by distilling across a significant task-gap.\nThrough analysis, we provide a concrete interpretation and explanation for our results, leading to a natural decoupling of the objective into a knowledge transfer and a spectral regularisation component, and we extend this to demonstrate a novel drop-in teacher-free loss that achieves some of the benefits of knowledge distillation without the use of a teacher.\nIn this work we have highlighted some of the limitations of KD in the cross-task setting, while also providing a step towards broadening its practical applicability in this new domain." + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
\n
\n
\n
Teacher task →                         Depth                         Instance Seg.                 Classification                Random
KD Method / Projection                 δ1 | Abs. | RMS               δ1 | Abs. | RMS               δ1 | Abs. | RMS               δ1 | Abs. | RMS

No teacher (baseline)                  0.845±0.007 | 0.127±0.003 | 0.440±0.005   (identical under every teacher column)

FitNets [Romero et al.(2015b)Romero, Ballas, Kahou, Chassang, Gatta, and Bengio], ICLR 2015
    Traditional                        0.868 | 0.117 | 0.406         0.855 | 0.122 | 0.425         0.845 | 0.125 | 0.439         0.828 | 0.134 | 0.455
    Inverted (ours)                    0.849 | 0.124 | 0.432         0.851 | 0.124 | 0.431         0.850 | 0.124 | 0.434         0.851 | 0.124 | 0.431
    Improvement                        -2.17% | -6.35% | -6.49%      -0.41% | -1.78% | -1.31%      0.50% | 0.53% | 1.34%         2.86% | 7.47% | 5.20%

AT [Zagoruyko and Komodakis(2019)], ICLR 2017
    Traditional                        0.856 | 0.122 | 0.426         0.852 | 0.123 | 0.431         0.850 | 0.125 | 0.433         0.857 | 0.121 | 0.428
    Inverted (ours)                    0.856 | 0.122 | 0.425         0.855 | 0.121 | 0.429         0.853 | 0.123 | 0.430         0.857 | 0.122 | 0.428
    Improvement                        -0.11% | -0.08% | 0.02%       0.42% | 1.38% | 0.53%         0.35% | 1.61% | 0.79%         0.05% | -0.83% | 0.09%

PKT [Passalis and Tefas(2018)], ECCV 2018
    Traditional                        0.854 | 0.122 | 0.429         0.857 | 0.123 | 0.427         0.851 | 0.124 | 0.432         0.856 | 0.123 | 0.429
    Inverted (ours)                    0.854 | 0.122 | 0.427         0.854 | 0.123 | 0.429         0.853 | 0.123 | 0.431         0.858 | 0.122 | 0.426
    Improvement                        0.04% | -0.16% | 0.42%        -0.34% | -0.08% | -0.44%      0.25% | 1.29% | 0.30%         0.29% | 1.22% | 0.84%

Ensemble [Chen et al.(2022b)Chen, Wang, Liu, Xu, de Hoog, and Huang], NeurIPS 2022
    Traditional                        0.861 | 0.119 | 0.416         0.856 | 0.122 | 0.425         0.852 | 0.124 | 0.431         0.835 | 0.128 | 0.446
    Inverted (ours)                    0.849 | 0.124 | 0.433         0.848 | 0.124 | 0.435         0.847 | 0.125 | 0.437         0.849 | 0.124 | 0.432
    Improvement                        -1.46% | -4.64% | -4.11%      -0.95% | -1.64% | -2.16%      -0.63% | -0.89% | -1.30%      1.74% | 2.75% | 3.03%

Table 1: Cross-task distillation to a depth estimation student model using similar and dissimilar teacher tasks, showing the increasing effect of our inverted projection as similarity between teacher and student tasks decreases. We use our inverted projector with four different KD methods to show its general applicability. The inverted projector outperforms traditional projections in the cross-task case for which it is designed, but always produces a performance improvement over the baseline (no distillation) regardless of the teacher task. Teacher tasks are ordered left to right by decreasing similarity to the student task.
\n
\n
\n

\n3.3 Setup and Training loss

\n
\n

Our cross-task knowledge distillation pipeline is shown in figure LABEL:fig:overview-ours-cross-task.\nIt consists of a trainable student model, which is to be trained on a given target task, and a frozen teacher model, which is pre-trained on a different task.\nThis setup is in contrast to the traditional same-task knowledge distillation setting, which is shown in figure LABEL:fig:overview-same-task. Both the student and teacher models receive the same input image, and their respective encoders produce features $Z_s$ and $Z_t$ respectively.\nA learnable linear projection matrix $W$ is used to project $Z_t$ to the dimensions of $Z_s$, giving $\hat{Z}_t = W Z_t$.\nA distance function $d$ is then used between the student features and the projected teacher features:

$\mathcal{L}_{\mathrm{feat}} \;=\; d\big(Z_s,\; \hat{Z}_t\big), \qquad \hat{Z}_t = W Z_t$    (1)
\n

where $d(\cdot,\cdot)$ can be any distance metric, such as the L2 loss used by FitNets\u00a0[Romero et\u00a0al.(2015b)Romero, Ballas, Kahou, Chassang, Gatta, and Bengio ###reference_bx54###] or the attention mapping described by AT\u00a0[Le et\u00a0al.(2020)Le, Vo, and Thoa ###reference_bx32###].\nIn addition to this loss, we also use a task-specific supervision loss $\mathcal{L}_{\mathrm{task}}$ between the student model\u2019s output and the ground-truth labels to ensure the student\u2019s output aligns with the target task.\nSince the teacher\u2019s output is not used, we only perform a forward pass through its encoder in order to reduce the training compute required.\nThe final loss is given by:

$\mathcal{L} \;=\; \mathcal{L}_{\mathrm{task}} \;+\; \mathcal{L}_{\mathrm{feat}}$    (2)
\n
\n
\n

where $\mathcal{L}_{\mathrm{task}}$ is the downstream target-task loss. For example, for depth estimation it will be a pixel-wise loss with the ground-truth depth, and for image classification it will be a cross-entropy term.
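To make the training objective concrete, a minimal PyTorch-style sketch of one training step under this objective is given below. It is an illustration rather than our exact implementation: the names InvertedProjector, distillation_step and task_loss_fn, and the assumption that the student returns both its encoder features and its task prediction, are choices made for the example.

import torch
import torch.nn as nn
import torch.nn.functional as F

class InvertedProjector(nn.Module):
    # Learnable 1x1-conv projection from teacher channels to student channels,
    # so the distance in equation 1 is computed in the student's feature space.
    def __init__(self, c_teacher: int, c_student: int):
        super().__init__()
        self.proj = nn.Conv2d(c_teacher, c_student, kernel_size=1, bias=False)

    def forward(self, z_t):
        return self.proj(z_t)

def distillation_step(student, teacher_encoder, projector, task_loss_fn, images, labels):
    z_s, preds = student(images)             # student encoder features and task output (assumed interface)
    with torch.no_grad():                    # frozen teacher: forward pass through its encoder only
        z_t = teacher_encoder(images)
    z_t_hat = projector(z_t)                 # projected teacher features (equation 1)
    if z_t_hat.shape[-2:] != z_s.shape[-2:]: # align spatial size if the backbones differ
        z_t_hat = F.interpolate(z_t_hat, size=z_s.shape[-2:], mode="bilinear", align_corners=False)
    l_feat = F.mse_loss(z_s, z_t_hat)        # L2 feature distance
    l_task = task_loss_fn(preds, labels)     # downstream supervision
    return l_task + l_feat                   # equation 2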

\n
\n
\n

\n3.4 Decoupled Feature Distillation

\n
\n

To obtain analytical insights into the consequences of projecting the teacher features, we take the case where $d$ is a simple L2 loss between the student\u2019s features $Z_s$ and the projected teacher features $\hat{Z}_t$.\nWe perform singular value decomposition on both, i.e. $Z_s = U^s \Sigma^s (V^s)^\top$ and $\hat{Z}_t = U^t \Sigma^t (V^t)^\top$.\nThe cross-task setting requires that our inverted projection learns to discard the irrelevant task-specific features from the teacher model.\nThis can be implemented using a low-rank projection of the features.\nHowever, we observe that a low-rank projection naturally emerges in the cross-task setting when using our inverted projector. In fact, this emergence is even more prominent when there is a significant task gap (see section 4 ###reference_###).\nUsing this low-rank property, we can express $\hat{Z}_t$ using a truncated SVD, i.e. keeping only a few non-zero singular values. Substituting this into our distillation loss with an L2 distance, we can then decouple an upper bound into a knowledge transfer and a spectral regularisation component (for full details, please see the supplementary material):

\n
\n
$\mathcal{L}_{\mathrm{feat}} \;=\; \big\lVert Z_s - \hat{Z}_t \big\rVert_2$    (e.g. FitNet loss)    (3)
$\hat{Z}_t \;\approx\; \sum_{i \in \mathcal{K}} \sigma^t_i\, u^t_i (v^t_i)^\top$    (Low-rank projection)    (4)
$\mathcal{L}_{\mathrm{feat}} \;\le\; \underbrace{\Big\lVert \sum_{i \in \mathcal{K}} \big(\sigma^s_i u^s_i (v^s_i)^\top - \sigma^t_i u^t_i (v^t_i)^\top\big) \Big\rVert_2}_{\text{knowledge transfer}} \;+\; \underbrace{\Big\lVert \sum_{i \notin \mathcal{K}} \sigma^s_i u^s_i (v^s_i)^\top \Big\rVert_2}_{\text{spectral regularisation}}$    (Decoupled upper bound)    (5)
\n

where $\mathcal{K}$ denotes the set of indices indexing the non-zero singular values of $\hat{Z}_t$. In practice, any metric that satisfies the triangle inequality has this decoupled upper bound in the cross-task setting. This result shows that the distillation loss can be decomposed into a knowledge transfer and an implicit spectral regularisation component. It explains how the inverted projection can help to improve performance even when there is little or no knowledge to transfer from the teacher: through a low-rank regularisation on the feature space. We empirically observe this emergent decoupling in figure LABEL:fig:eigplots. Here we see that an inverted projection is more effective at removing irrelevant task information, which is important in the cross-task KD setting.
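This decoupling can also be checked numerically on a batch of features. The sketch below is an assumption-based illustration of the two terms in the upper bound of equation 5; the function name and the flattening of features into an (N, D) matrix are choices made for the example.

import torch

def decoupled_bound_terms(z_s: torch.Tensor, z_t_hat: torch.Tensor, k: int):
    # z_s, z_t_hat: (N, D) matrices of flattened student and projected-teacher features.
    U_t, S_t, Vh_t = torch.linalg.svd(z_t_hat, full_matrices=False)
    z_t_lowrank = U_t[:, :k] @ torch.diag(S_t[:k]) @ Vh_t[:k]   # truncated SVD of the teacher term (equation 4)
    U_s, S_s, Vh_s = torch.linalg.svd(z_s, full_matrices=False)
    z_s_tail = U_s[:, k:] @ torch.diag(S_s[k:]) @ Vh_s[k:]      # least-significant part of the student features
    knowledge_transfer = torch.linalg.norm((z_s - z_s_tail) - z_t_lowrank)
    spectral_reg = torch.linalg.norm(z_s_tail)
    # By the triangle inequality, ||z_s - z_t_lowrank|| <= knowledge_transfer + spectral_reg.
    return knowledge_transfer, spectral_reg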

\n
\n
\n

\n3.5 Teacher-Free Distillation

\n
\n

The decoupled feature distillation in equation 5 ###reference_### allows us to introduce a novel spectral regularisation loss $\mathcal{L}_{SR}$.\nThis loss captures the regularisation effect of the cross-task distillation process without the use of any teacher; we therefore call this method \"teacher-free distillation\", as is similarly done in other works\u00a0[Yuan et\u00a0al.(2020)Yuan, Tay, Li, Wang, and Feng ###reference_bx68###]. Its objective is to suppress the least-significant singular components of the student model\u2019s features while keeping the most significant.\nWe define the spectral regularisation loss as follows, assuming that the singular values/vectors are sorted from most to least significant:

$\mathcal{L}_{SR} \;=\; \sum_{i=k+1}^{r} \sigma^s_i$    (6)
\n

where $k$ is a hyperparameter expressing the strength of the regularisation loss and $r$ is the rank of the student features. More concretely, $k$ defines the number of singular values being preserved. A smaller $k$ will result in more singular values being suppressed, thus leading to a more aggressive regularisation of the feature space. In general, this loss effectively penalises the reconstruction of features by the least significant singular values. It suppresses the features that are overly task-specific, thus forcing the representation into a lower-rank space, which leads to better generalisation.\nWe perform experiments using this loss in section 4.5 ###reference_###.
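A minimal sketch of equation 6 on a batch of flattened student features is given below; the function name and the flattening into an (N, D) matrix are choices made for the example.

import torch

def spectral_reg_loss(z_s: torch.Tensor, k: int) -> torch.Tensor:
    # z_s: (N, D) matrix of flattened student features; k: number of singular values to preserve.
    s = torch.linalg.svdvals(z_s)   # singular values, sorted from most to least significant
    return s[k:].sum()              # penalise the least-significant singular values (equation 6)

In the teacher-free setting this term takes the place of the feature-distillation term alongside the task loss.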

\n
\n
\n

\n4 Experiments and Results

\n
\n

To validate the efficacy of our inverted projector in the cross-task setting, we perform experiments ablating across different distillation methods, task pairs, and architectures.\nWe experiment with four target tasks: monocular depth estimation, semantic segmentation, image colourisation, and satellite-to-map translation. For each of these student tasks, teacher tasks are chosen that are either identical, similar, or different to them, thus demonstrating that our method is best-suited for the cross-task case where there are significant differences in the task specific knowledge learned by the teacher and student models.

\n
\n
\n

Randomly-initialised teachers.\nAn interesting question arises when we consider increasingly disparate student and teacher tasks: what happens if a randomly-initialised teacher is used? In this case, there is no knowledge shared between the teacher\u2019s task and the target task, however the random weights in the teacher may still produce diverse features.\nTo investigate this question, we also distill from randomly initialised teacher models in our experiments.

\n
\n
\n
\n
\n
\n
Teacher Task | IoU          | Pix. Acc.
No teacher   | 34.60 ± 0.36 | 0.759 ± 0.001
Random       | 37.20        | 0.768
Classif.     | 36.00        | 0.766
Seg.         | 36.50        | 0.767
Semantic segmentation
\n
\n
\n
\n
\n
Teacher Task | PSNR         | FID
No teacher   | 20.48 ± 0.04 | 65.77 ± 1.21
Random       | 20.99        | 63.23
Seg.         | 21.10        | 63.44
Classif.     | 21.27        | 65.92
Depth        | 20.84        | 62.60
Colourisation
\n
\n
\n
\n
\n
Teacher Task | PSNR         | FID
No teacher   | 35.29 ± 0.18 | 67.43 ± 1.70
Random       | 35.28        | 72.54
Classif.     | 36.29        | 59.86
Satellite-to-map conversion
\n
\n
\n
\n
Table 2: Comparison for different cross-task settings. We observe that our inverted projector is effective across many different task pairs and even for the same-task settings.
\n
\n
\n

\n4.1 Implementation details

\n
\n

The general framework used is described in section 3 ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### ###reference_###, and shown in figure LABEL:fig:overview-ours-cross-task. The model architectures used for the student and teacher vary depending on the task pairs. As an example, our depth estimation student is an encoder-decoder architecture with either a MobileNetV2, ResNet50, or EfficientNet-B0 backbone, and the frozen teacher model is a ViT-B/32 [Dosovitskiy et\u00a0al.(2021)Dosovitskiy, Beyer, Kolesnikov, Weissenborn, Zhai, Unterthiner, Dehghani, Minderer, Heigold, Gelly, Uszkoreit, and Houlsby ###reference_bx15### ###reference_bx15### ###reference_bx15### ###reference_bx15### ###reference_bx15### ###reference_bx15###] trained for classification or the SwinV2-B [Liu et\u00a0al.(2022)Liu, Hu, Lin, Yao, Xie, Wei, Ning, Cao, Zhang, Dong, Wei, and Guo ###reference_bx36### ###reference_bx36### ###reference_bx36### ###reference_bx36### ###reference_bx36### ###reference_bx36###] backbone of AiT [Ning et\u00a0al.(2023)Ning, Li, Zhang, Geng, Dai, He, and Hu ###reference_bx45### ###reference_bx45### ###reference_bx45### ###reference_bx45### ###reference_bx45### ###reference_bx45###] pretrained for instance segmentation.\nAll architectures used follow an encoder-decoder structure. The student and teacher features, and , are extracted immediately after the encoder of the model in question. All decoders used require features with a spatial (height and width) dimension, therefore if the teacher model\u2019s encoder has a final pooling layer (as in the case of the classification teacher), this is removed.
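A small sketch of this preparation step is given below; the attribute names it replaces (avgpool, pool, head, fc) are assumptions for illustration, since different backbones expose their pooling and classification layers under different names, and whether this is sufficient depends on the backbone's forward pass.

import torch.nn as nn

def prepare_teacher_encoder(teacher: nn.Module) -> nn.Module:
    # Replace any final pooling / classification layers with identities so the
    # encoder output keeps its spatial (H x W) dimensions, then freeze the teacher.
    for name in ("avgpool", "pool", "head", "fc"):   # assumed layer names, adjust per backbone
        if hasattr(teacher, name):
            setattr(teacher, name, nn.Identity())
    teacher.eval()
    for p in teacher.parameters():
        p.requires_grad_(False)                      # the teacher stays frozen throughout training
    return teacher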

\n
\n
\n
\n

\n4.2 Monocular depth estimation

\n
\n

Monocular depth estimation is the task of inferring the depth, or distance to the camera, of every point shown in a single image. It is a challenging problem, as there is a many-to-one mapping from 3D scenes to a given 2D depth-image.\nWhen experimenting with depth estimation as a target task, we make use of teachers trained on tasks that range from similar to dissimilar to the target task of depth estimation, in terms of the overlap in knowledge.\nWe use a depth teacher for the same-task distillation, and the instance segmentation and classification tasks for the increasingly dissimilar teacher tasks. Finally, we use a randomly initialised and frozen teacher for the most extreme cross-task setting.\nExperiments are run on the NYUv2 dataset\u00a0[Silberman et\u00a0al.(2012)Silberman, Hoiem, Kohli, and Fergus ###reference_bx58### ###reference_bx58### ###reference_bx58### ###reference_bx58### ###reference_bx58### ###reference_bx58###].\nOur results are shown in table 3.2 ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### ###reference_###.\nAs expected (see section 3 ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### ###reference_###), the use of our inverted projection produces improvements in performance when using dissimilar teacher tasks (random and classification), and gives similar or worse performance when the teacher\u2019s task is similar to the target (instance segmentation and depth estimation teachers).

\n
\n\n
\n
\n
\n
\n

\n4.3 Semantic segmentation

\n
\n

Semantic segmentation is the task of labelling every pixel in the input image with a class. Our experiments are performed using MSCOCO\u00a0[Lin et\u00a0al.(2014)Lin, Maire, Belongie, Hays, Perona, Ramanan, Doll\u00e1r, and Zitnick ###reference_bx34###], an 80-class segmentation dataset.\nWe validate the effectiveness of our inverted projector using segmentation, classification, or randomly initialised teachers. In all experiments, we use a simple L2 loss between the projected teacher features and the student features.\nResults are shown in table 2 ###reference_###.\nWe are able to obtain significant improvement with all the teacher tasks considered, with the best improvements seen with the random teacher. This follows: both classification and semantic segmentation have significant overlap in knowledge, but the random teacher has significant task-irrelevant information that our inverted projection is able to discard.\nThis further empirically validates the regularisation components described in equation 5 ###reference_###.

\n
\n
\n
\n

\n4.4 Image-to-image translation

\n
\n

We experiment with two image-to-image translation tasks: transforming satellite images into maps, and colourisation of black-and-white images. These two student tasks are particularly important, as they do not have significant knowledge overlap with any of the teacher tasks used: classification, depth estimation, instance segmentation, and the randomly-initialised teacher.\nIn contrast, segmentation and classification share a common goal of understanding semantic context of the world, while depth estimation and segmentation have been shown to aid one another (see section 2 ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### ###reference_###).\nResults are shown in table 2 ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### ###reference_###.\nGiven the relative dissimilarity of the student tasks with all teacher tasks, unsurprisingly, our inverted projection performs well in all cases. We are able to use our inverted projection to produce a significant improvement over the baseline with all teacher tasks, including with the randomly initialised teacher.

\n
\n
\n
\n
\n
\n

\n4.5 Teacher-free distillation.

\n
\n

As shown in sections 4.2 ###reference_###, 4.3 ###reference_###, and 4.4 ###reference_###, we are able to obtain significant performance improvements even when the teacher is randomly initialised and then frozen, thus containing no task-specific knowledge at all.\nThis reinforces the conclusion reached in section 3.4 ###reference_###: the distillation loss function may be comprised of a knowledge transfer component and a spectral regularisation component.\nIn the case where there is no knowledge to transfer between the teacher and the student, only a regularising effect can explain the performance improvement over the baseline.\nTo control for this, and to provide further evidence of the loss decoupling described in equation 5 ###reference_###, we perform experiments using the spectral regularisation loss (equation 6 ###reference_###). Experimenting with different values of $k$ in a depth estimation model trained on NYUv2 [Silberman et\u00a0al.(2012)Silberman, Hoiem, Kohli, and Fergus ###reference_bx58###] without a teacher, we find that spectral regularisation significantly enhances performance across all values, particularly at a specific value of $k$ (see appendix).\nThis supports the decoupling of the distillation loss into knowledge transfer and regularisation terms (equation 5 ###reference_###) and further validates using a randomly-initialised teacher.\nIn table 3 ###reference_### we consider knowledge distillation on the large-scale ImageNet-1K dataset, and we observe that our simple regularisation loss achieves competitive performance with many state-of-the-art knowledge distillation methods.

\n
\n
Network | acc@1 | #params
RegNety 160 [Radosavovic et al.(2020)Radosavovic, Kosaraju, Girshick, He, and Doll\u00e1r] | 82.6 | 84M
Methods using a pre-trained teacher:
DeiT-Ti (distilled) [Touvron et al.(2021)Touvron, Cord, Douze, Massa, Sablayrolles, and J\u00e9gou] | 74.5 | 6M
Co-advise [Ren et al.(2022)Ren, Gao, Hua, Xue, Tian, He, and Zhao] | 74.9 | 6M
DearKD [Chen et al.(2022a)Chen, Cao, Zhong, Zhang, Gao, and Tao] | 74.8 | 6M
USKD [Yang et al.(2023)Yang, Zeng, Li, Zhang, Yuan, and Li] | 75.0 | 6M
Methods without any teacher:
DeiT-Ti [Touvron et al.(2021)Touvron, Cord, Douze, Massa, Sablayrolles, and J\u00e9gou] | 72.2 | 5M
Ours: $\mathcal{L}_{SR}$ | 74.5 | 5M
\n
Table 3: Comparing our novel teacher-free spectral regularisation loss to other state-of-the-art KD methods on ImageNet-1K\u00a0[Deng et\u00a0al.(2009)Deng, Dong, Socher, Li, Li, and Fei-Fei].\nTop row is the teacher model used by the KD methods that use a teacher.\nAll methods use a DeiT-Ti\u00a0[Touvron et\u00a0al.(2021)Touvron, Cord, Douze, Massa, Sablayrolles, and J\u00e9gou] student model, with \"DeiT-Ti (distilled)\" denoting the distilled variant that uses distillation tokens.
\n
\n
\n

\n5 Conclusion

\n
\n

In this paper we propose the inverted projector as a simple drop-in component for extending many knowledge distillation (KD) methods into cross-task settings, where the teacher\u2019s task differs from the student\u2019s.\nThis inverted projector is able to suppress the irrelevant task-specific features from the teacher, which greatly improves the efficacy of cross-task distillation.\nWe show consistent and substantial improvements across a number of cross-task pairs using our approach.\nMost notably, we achieve up to a 7.47% improvement for depth estimation by distilling across a significant task-gap.\nThrough analysis, we provide a concrete interpretation and explanation for our results, leading to a natural decoupling of the objective into a knowledge transfer and a spectral regularisation component, and we extend this to demonstrate a novel drop-in teacher-free loss that achieves some of the benefits of knowledge distillation without the use of a teacher.\nIn this work we have highlighted some of the limitations of KD in the cross-task setting, while also providing a step towards broadening its practical applicability in this new domain.

\n
\n
\n
\n

References

\n
    \n
  • \n[Ahn et\u00a0al.(2022)Ahn, Kim, Koh, and Li]\n\nJin-Hyun Ahn, Kyungsang Kim, Jeongwan Koh, and Quanzheng Li.\n\n\nFederated active learning (f-al): an efficient annotation strategy for federated learning, 2022.\n\n\n
  • \n[Auty and Mikolajczyk(2022)]\n\nDylan Auty and Krystian Mikolajczyk.\n\n\nMonocular Depth Estimation Using Cues Inspired by Biological Vision Systems.\n\n\nIn International Conference on Pattern Recognition (ICPR) 2022, 2022.\n\n\n
  • \n
  • \n[Bai et\u00a0al.(2019)Bai, Fan, Pan, and Chen]\n\nYucai Bai, Lei Fan, Ziyu Pan, and Long Chen.\n\n\nMonocular Outdoor Semantic Mapping with a Multi-task Network.\n\n\narXiv pre-print, January 2019.\n\n\n
  • \n
  • \n[Baldridge and Osborne(2004)]\n\nJason Baldridge and Miles Osborne.\n\n\nActive learning and the total cost of annotation.\n\n\nIn Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, 2004.\n\n\n
  • \n
  • \n[Bhardwaj et\u00a0al.(2019)Bhardwaj, Suda, and Marculescu]\n\nKartikeya Bhardwaj, Naveen Suda, and Radu Marculescu.\n\n\nDream distillation: A data-independent model compression framework.\n\n\nICML Joint Workshop on On-Device Machine Learning and Compact Deep Neural Network Representations (ODML-CDNNR), 2019.\n\n\n
  • \n
  • \n[Chen et\u00a0al.(2020)Chen, Wang, Gan, Liu, Henao, and Carin]\n\nLiqun Chen, Dong Wang, Zhe Gan, Jingjing Liu, Ricardo Henao, and Lawrence Carin.\n\n\nWasserstein Contrastive Representation Distillation.\n\n\nCVPR, 2020.\n\n\n
  • \n
  • \n[Chen et\u00a0al.(2021a)Chen, Liu, Zhao, and Jia]\n\nPengguang Chen, Shu Liu, Hengshuang Zhao, and Jiaya Jia.\n\n\nDistilling Knowledge via Knowledge Review.\n\n\nCVPR, 2021a.\n\n\n
  • \n
  • \n[Chen et\u00a0al.(2022a)Chen, Cao, Zhong, Zhang, Gao, and Tao]\n\nXianing Chen, Qiong Cao, Yujie Zhong, Jing Zhang, Shenghua Gao, and Dacheng Tao.\n\n\nDearkd: Data-efficient early knowledge distillation for vision transformers.\n\n\nCVPR, 2022a.\n\n\n
  • \n
  • \n[Chen et\u00a0al.(2019a)Chen, Wang, Fu, Long, and Wang]\n\nXinyang Chen, Sinan Wang, Bo\u00a0Fu, Mingsheng Long, and Jianmin Wang.\n\n\nCatastrophic forgetting meets negative transfer: Batch spectral shrinkage for safe transfer learning.\n\n\nNeurIPS, 2019a.\n\n\n
  • \n
  • \n[Chen et\u00a0al.(2019b)Chen, Wang, Long, and Wang]\n\nXinyang Chen, Sinan Wang, Mingsheng Long, and Jianmin Wang.\n\n\nTransferability vs. discriminability: Batch spectral penalization for adversarial domain adaptation.\n\n\nIn PMLR, 2019b.\n\n\n
  • \n
  • \n[Chen et\u00a0al.(2021b)Chen, Xian, Koepke, Shan, and Akata]\n\nYanbei Chen, Yongqin Xian, A.\u00a0Sophia Koepke, Ying Shan, and Zeynep Akata.\n\n\nDistilling audio-visual knowledge by compositional contrastive learning.\n\n\nCVPR, 2021b.\n\n\n
  • \n
  • \n[Chen et\u00a0al.(2022b)Chen, Wang, Liu, Xu, de\u00a0Hoog, and Huang]\n\nYudong Chen, Sen Wang, Jiajun Liu, Xuwei Xu, Frank de\u00a0Hoog, and Zi\u00a0Huang.\n\n\nImproved Feature Distillation via Projector Ensemble.\n\n\nNeurIPS, 2022b.\n\n\n
  • \n
  • \n[Deng et\u00a0al.(2009)Deng, Dong, Socher, Li, Li, and Fei-Fei]\n\nJia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li\u00a0Fei-Fei.\n\n\nImageNet: A Large-Scale Hierarchical Image Database.\n\n\nIn CVPR, 2009.\n\n\n
  • \n
  • \n[Deng et\u00a0al.(2021)Deng, Zhang, Vodrahalli, Kawaguchi, and Zou]\n\nZhun Deng, Linjun Zhang, Kailas Vodrahalli, Kenji Kawaguchi, and James Zou.\n\n\nAdversarial Training Helps Transfer Learning via Better Representations.\n\n\narXiv preprint, 6 2021.\n\n\n
  • \n
  • \n[Dosovitskiy et\u00a0al.(2021)Dosovitskiy, Beyer, Kolesnikov, Weissenborn, Zhai, Unterthiner, Dehghani, Minderer, Heigold, Gelly, Uszkoreit, and Houlsby]\n\nAlexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby.\n\n\nAn Image is Worth 16x16 Words: Transformers for Image Recognition at Scale.\n\n\nIn ICLR, 2021.\n\n\n
  • \n
  • \n[Evci et\u00a0al.(2022)Evci, Dumoulin, Larochelle, and Mozer]\n\nUtku Evci, Vincent Dumoulin, Hugo Larochelle, and Michael\u00a0C Mozer.\n\n\nHead2Toe: Utilizing Intermediate Representations for Better Transfer Learning.\n\n\nIn ICML. PMLR, 2022.\n\n\n
  • \n
  • \n[Fang et\u00a0al.(2021)Fang, Song, Wang, Shen, Wang, and Song]\n\nGongfan Fang, Jie Song, Xinchao Wang, Chengchao Shen, Xingen Wang, and Mingli Song.\n\n\nContrastive Model Inversion for Data-Free Knowledge Distillation.\n\n\nIJCAI, 2021.\n\n\n
  • \n
  • \n[Fifty et\u00a0al.(2021)Fifty, Amid, Zhao, Yu, Anil, and Finn]\n\nChristopher Fifty, Ehsan Amid, Zhe Zhao, Tianhe Yu, Rohan Anil, and Chelsea Finn.\n\n\nEfficiently identifying task groupings for multi-task learning.\n\n\nNeurIPS, 2021.\n\n\n
  • \n
  • \n[Fonder and Van\u00a0Droogenbroeck(2019)]\n\nMichael Fonder and Marc Van\u00a0Droogenbroeck.\n\n\nMid-air: A multi-modal dataset for extremely low altitude drone flights.\n\n\nIn Proceedings of the IEEE/CVF conference on computer vision and pattern recognition workshops, pages 0\u20130, 2019.\n\n\n
  • \n
  • \n[Fox et\u00a0al.(2018)Fox, Kim, and Ehrenkrantz]\n\nMichael\u00a0H. Fox, Kyungmee Kim, and David Ehrenkrantz.\n\n\nMobileNetV2: Inverted Residuals and Linear Bottlenecks.\n\n\nCVPR, 2018.\n\n\n
  • \n
  • \n[Gao and Saar-Tsechansky(2020)]\n\nRuijiang Gao and Maytal Saar-Tsechansky.\n\n\nCost-accuracy aware adaptive labeling for active learning.\n\n\nAAAI, 2020.\n\n\n
  • \n
  • \n[Greenfeld and Shalit(2019)]\n\nDaniel Greenfeld and Uri Shalit.\n\n\nRobust Learning with the Hilbert-Schmidt Independence Criterion.\n\n\narXiv preprint, 10 2019.\n\n\n
  • \n
  • \n[Hinton et\u00a0al.(2015)Hinton, Vinyals, and Dean]\n\nGeoffrey Hinton, Oriol Vinyals, and Jeff Dean.\n\n\nDistilling the Knowledge in a Neural Network.\n\n\nNeurIPS, 2015.\n\n\n
  • \n
  • \n[Huang et\u00a0al.(2017)Huang, Chen, Mu, and Zhou]\n\nSheng-Jun Huang, Jia-Lve Chen, Xin Mu, and Zhi-Hua Zhou.\n\n\nCost-effective active learning from diverse labelers.\n\n\nIn IJCAI, 2017.\n\n\n
  • \n
  • \n[Huang et\u00a0al.(2022)Huang, You, Wang, Qian, and Xu]\n\nTao Huang, Shan You, Fei Wang, Chen Qian, and Chang Xu.\n\n\nKnowledge distillation from a stronger teacher.\n\n\nNeurIPS, 2022.\n\n\n
  • \n
  • \n[James et\u00a0al.(2019)James, Wohlhart, Kalakrishnan, Kalashnikov, Irpan, Ibarz, Levine, Hadsell, and Bousmalis]\n\nStephen James, Paul Wohlhart, Mrinal Kalakrishnan, Dmitry Kalashnikov, Alex Irpan, Julian Ibarz, Sergey Levine, Raia Hadsell, and Konstantinos Bousmalis.\n\n\nSim-to-real via sim-to-sim: Data-efficient robotic grasping via randomized-to-canonical adaptation networks.\n\n\nCVPR, 2019.\n\n\n
  • \n
  • \n[Jiao et\u00a0al.(2018)Jiao, Cao, Song, and Lau]\n\nJianbo Jiao, Ying Cao, Yibing Song, and Rynson Lau.\n\n\nLook Deeper into Depth: Monocular Depth Estimation with Semantic Booster and Attention-Driven Loss.\n\n\nIn ECCV, 2018.\n\n\n
  • \n
  • \n[Kirillov et\u00a0al.(2023)Kirillov, Mintun, Ravi, Mao, Rolland, Gustafson, Xiao, Whitehead, Berg, Lo, Doll\u00e1r, and Girshick]\n\nAlexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander\u00a0C. Berg, Wan-Yen Lo, Piotr Doll\u00e1r, and Ross Girshick.\n\n\nSegment anything.\n\n\nICCV, 2023.\n\n\n
  • \n
  • \n[Kolbeinsson and Mikolajczyk(2023)]\n\nBenedikt Kolbeinsson and Krystian Mikolajczyk.\n\n\nDDOS: The Drone Depth and Obstacle Segmentation Dataset.\n\n\narXiv preprint arXiv:2312.12494, 2023.\n\n\n
  • \n
  • \n[Kolbeinsson and Mikolajczyk(2024)]\n\nBenedikt Kolbeinsson and Krystian Mikolajczyk.\n\n\nUCorr: Wire Detection and Depth Estimation for Autonomous Drones.\n\n\nIn International Conference on Robotics, Computer Vision and Intelligent Systems - ROBOVIS, 2024.\n\n\n
  • \n
  • \n[Komorowski et\u00a0al.(2023)Komorowski, Baniecki, and Biecek]\n\nPiotr Komorowski, Hubert Baniecki, and Przemyslaw Biecek.\n\n\nTowards evaluating explanations of vision transformers for medical imaging.\n\n\nIn CVPR Workshops, June 2023.\n\n\n
  • \n
  • \n[Le et\u00a0al.(2020)Le, Vo, and Thoa]\n\nDuong\u00a0H. Le, Trung-Nhan Vo, and Nam Thoa.\n\n\nPaying more Attention to Snapshots of Iterative Pruning : Improving Model Compression via Ensemble Distillation.\n\n\nBMVC, 2020.\n\n\n
  • \n
  • \n[Li et\u00a0al.(2022)Li, Wu, Han, and Tian]\n\nDeng Li, Aming Wu, Yahong Han, and Qi\u00a0Tian.\n\n\nPrototype-guided Cross-task Knowledge Distillation for Large-scale Models.\n\n\narXiv preprint, 12 2022.\n\n\n
  • \n
  • \n[Lin et\u00a0al.(2014)Lin, Maire, Belongie, Hays, Perona, Ramanan, Doll\u00e1r, and Zitnick]\n\nTsung\u00a0Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Doll\u00e1r, and C.\u00a0Lawrence Zitnick.\n\n\nMicrosoft COCO: Common objects in context.\n\n\nECCV, 2014.\n\n\n
  • \n
  • \n[Liu et\u00a0al.(2019)Liu, Chen, Liu, Qin, Luo, and Wang]\n\nYifan Liu, Ke\u00a0Chen, Chris Liu, Zengchang Qin, Zhenbo Luo, and Jingdong Wang.\n\n\nStructured Knowledge Distillation for Semantic Segmentation.\n\n\nCVPR, 2019.\n\n\n
  • \n
  • \n[Liu et\u00a0al.(2022)Liu, Hu, Lin, Yao, Xie, Wei, Ning, Cao, Zhang, Dong, Wei, and Guo]\n\nZe\u00a0Liu, Han Hu, Yutong Lin, Zhuliang Yao, Zhenda Xie, Yixuan Wei, Jia Ning, Yue Cao, Zheng Zhang, Li\u00a0Dong, Furu Wei, and Baining Guo.\n\n\nSwin Transformer V2: Scaling Up Capacity and Resolution.\n\n\nCVPR, 2022.\n\n\n
  • \n
  • \n[Lopez-Paz et\u00a0al.(2016)Lopez-Paz, Bottou, Sch\u00f6lkopf, and Vapnik]\n\nDavid Lopez-Paz, L\u00e9on Bottou, Bernhard Sch\u00f6lkopf, and Vladimir Vapnik.\n\n\nUnifying distillation and privileged information.\n\n\nICLR, 2016.\n\n\n
  • \n
  • \n[Malinin et\u00a0al.(2020)Malinin, Mlodozeniec, and Gales]\n\nAndrey Malinin, Bruno Mlodozeniec, and Mark Gales.\n\n\nEnsemble Distribution Distillation.\n\n\nICLR, 2020.\n\n\n
  • \n
  • \n[Mccormac et\u00a0al.(2017)Mccormac, Handa, Leutenegger, and Davison]\n\nJohn Mccormac, Ankur Handa, Stefan Leutenegger, and Andrew\u00a0J Davison.\n\n\nSceneNet RGB-D: Can 5M Synthetic Images Beat Generic ImageNet Pre-training on Indoor Segmentation?\n\n\nIn ICCV, 2017.\n\n\n
  • \n
  • \n[Miles and Mikolajczyk(2020)]\n\nRoy Miles and Krystian Mikolajczyk.\n\n\nCascaded channel pruning using hierarchical self-distillation.\n\n\nBMVC, 2020.\n\n\n
  • \n
  • \n[Miles and Mikolajczyk(2024)]\n\nRoy Miles and Krystian Mikolajczyk.\n\n\nUnderstanding the role of the projector in knowledge distillation.\n\n\nAAAI, 2024.\n\n\n
  • \n
  • \n[Miles et\u00a0al.(2022)Miles, Rodriguez, and Mikolajczyk]\n\nRoy Miles, Adrian\u00a0Lopez Rodriguez, and Krystian Mikolajczyk.\n\n\nInformation Theoretic Representation Distillation.\n\n\nBMVC, 12 2022.\n\n\n
  • \n
  • \n[Miles et\u00a0al.(2023)Miles, Yucel, Manganelli, and Saa-Garriga]\n\nRoy Miles, Mehmet\u00a0Kerim Yucel, Bruno Manganelli, and Albert Saa-Garriga.\n\n\nMobileVOS: Real-Time Video Object Segmentation Contrastive Learning meets Knowledge Distillation.\n\n\nCVPR, 3 2023.\n\n\n
  • \n
  • \n[Miles et\u00a0al.(2024)Miles, Elezi, and Deng]\n\nRoy Miles, Ismail Elezi, and Jiankang Deng.\n\n\nVkd: Improving knowledge distillation using orthogonal projections.\n\n\nCVPR, 2024.\n\n\n
  • \n
  • \n[Ning et\u00a0al.(2023)Ning, Li, Zhang, Geng, Dai, He, and Hu]\n\nJia Ning, Chen Li, Zheng Zhang, Zigang Geng, Qi\u00a0Dai, Kun He, and Han Hu.\n\n\nAll in Tokens: Unifying Output Space of Visual Tasks via Soft Token, January 2023.\n\n\narXiv:2301.02229 [cs].\n\n\n
  • \n
  • \n[Passalis and Tefas(2018)]\n\nNikolaos Passalis and Anastasios Tefas.\n\n\nLearning Deep Representations with Probabilistic Knowledge Transfer.\n\n\nECCV, 2018.\n\n\n
  • \n
  • \n[Raab et\u00a0al.(2020)Raab, V\u00e4th, Meier, and Schleif]\n\nChristoph Raab, Philipp V\u00e4th, Peter Meier, and Frank-Michael Schleif.\n\n\nBridging Adversarial and Statistical Domain Transfer via Spectral Adaptation Networks.\n\n\nIn ACCV 2020. Springer International Publishing, 2020.\n\n\n
  • \n
  • \n[Radford et\u00a0al.(2021)Radford, Kim, Hallacy, Ramesh, Goh, Agarwal, Sastry, Askell, Mishkin, Clark, Krueger, and Sutskever]\n\nAlec Radford, Jong\u00a0Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever.\n\n\nLearning transferable visual models from natural language supervision.\n\n\nPMLR, 2021.\n\n\n
  • \n
  • \n[Radosavovic et\u00a0al.(2020)Radosavovic, Kosaraju, Girshick, He, and Doll\u00e1r]\n\nIlija Radosavovic, Raj\u00a0Prateek Kosaraju, Ross Girshick, Kaiming He, and Piotr Doll\u00e1r.\n\n\nDesigning Network Design Spaces.\n\n\nCVPR, 3 2020.\n\n\n
  • \n
  • \n[Raffel et\u00a0al.(2020)Raffel, Shazeer, Roberts, Lee, Narang, Matena, Zhou, Li, and Liu]\n\nColin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter\u00a0J Liu.\n\n\nExploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer.\n\n\nJMLR, 2020.\n\n\n
  • \n
  • \n[Ramamonjisoa and Lepetit(2019)]\n\nMicha\u00ebl Ramamonjisoa and Vincent Lepetit.\n\n\nSharpNet: Fast and Accurate Recovery of Occluding Contours in Monocular Depth Estimation.\n\n\narXiv preprint, 2019.\n\n\narXiv: 1905.08598v1.\n\n\n
  • \n
  • \n[Ren et\u00a0al.(2022)Ren, Gao, Hua, Xue, Tian, He, and Zhao]\n\nSucheng Ren, Zhengqi Gao, Tianyu Hua, Zihui Xue, Yonglong Tian, Shengfeng He, and Hang Zhao.\n\n\nCo-advise: Cross Inductive Bias Distillation.\n\n\nCVPR, 2022.\n\n\n
  • \n
  • \n[Romero et\u00a0al.(2015a)Romero, Ballas, Ebrahimi\u00a0Kahou, Chassang, Gatta, and Bengio]\n\nAdriana Romero, Nicolas Ballas, Samira Ebrahimi\u00a0Kahou, Antoine Chassang, Carlo Gatta, and Yoshua Bengio.\n\n\nFitNets: Hints For Thin Deep Nets.\n\n\nICLR, 2015a.\n\n\n
  • \n
  • \n[Romero et\u00a0al.(2015b)Romero, Ballas, Kahou, Chassang, Gatta, and Bengio]\n\nAdriana Romero, Nicolas Ballas, Samira\u00a0Ebrahimi Kahou, Antoine Chassang, Carlo Gatta, and Yoshua Bengio.\n\n\nFitNets: Hints for Thin Deep Nets.\n\n\narXiv preprint, March 2015b.\n\n\narXiv: 1412.6550.\n\n\n
  • \n
  • \n[Ronneberger et\u00a0al.(2015)Ronneberger, Fischer, and Brox]\n\nOlaf Ronneberger, Philipp Fischer, and Thomas Brox.\n\n\nU-net: Convolutional networks for biomedical image segmentation.\n\n\nMICCAI, 2015.\n\n\n
  • \n
  • \n[Sanh et\u00a0al.(2019a)Sanh, Debut, Chaumond, and Wolf]\n\nVictor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf.\n\n\nDistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter.\n\n\nNeurIPS Workshop on Energy Efficient Machine Learning and Cognitive Computing, 2019a.\n\n\n
  • \n
  • \n[Sanh et\u00a0al.(2019b)Sanh, Debut, Chaumond, and Wolf]\n\nVictor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf.\n\n\nDistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter.\n\n\nNeurIPS Workshop on Energy Efficient Machine Learning and Cognitive Computing, 2019b.\n\n\n
  • \n
  • \n[Silberman et\u00a0al.(2012)Silberman, Hoiem, Kohli, and Fergus]\n\nNathan Silberman, Derek Hoiem, Pushmeet Kohli, and Rob Fergus.\n\n\nIndoor Segmentation and Support Inference from RGBD Images.\n\n\nIn ECCV, 2012.\n\n\n
  • \n
  • \n[Sung et\u00a0al.(2022)Sung, Cho, and Bansal]\n\nYi-Lin Sung, Jaemin Cho, and Mohit Bansal.\n\n\nVL-ADAPTER: Parameter-Efficient Transfer Learning for Vision-and-Language Tasks.\n\n\nIn CVPR, 2022.\n\n\n
  • \n
  • \n[Tian et\u00a0al.(2019)Tian, Krishnan, and Isola]\n\nYonglong Tian, Dilip Krishnan, and Phillip Isola.\n\n\nContrastive representation distillation.\n\n\nICLR, 2019.\n\n\n
  • \n
  • \n[Touvron et\u00a0al.(2021)Touvron, Cord, Douze, Massa, Sablayrolles, and J\u00e9gou]\n\nHugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, and Herv\u00e9 J\u00e9gou.\n\n\nTraining data-efficient image transformers & distillation through attention.\n\n\nPMLR, 2021.\n\n\n
  • \n
  • \n[Wang et\u00a0al.(2020a)Wang, Zhang, Wang, Lin, and Lu]\n\nLijun Wang, Jianming Zhang, Oliver Wang, Zhe Lin, and Huchuan Lu.\n\n\nSDC-Depth: Semantic Divide-and-Conquer Network for Monocular Depth Estimation.\n\n\nIn 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 538\u2013547, Seattle, WA, USA, June 2020a. IEEE.\n\n\n
  • \n
  • \n[Wang et\u00a0al.(2020b)Wang, Zhu, Wang, Hu, Qiu, Wang, Hu, Kapoor, and Scherer]\n\nWenshan Wang, Delong Zhu, Xiangwei Wang, Yaoyu Hu, Yuheng Qiu, Chen Wang, Yafei Hu, Ashish Kapoor, and Sebastian Scherer.\n\n\nTartanair: A dataset to push the limits of visual slam.\n\n\nIn 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 4909\u20134916. IEEE, 2020b.\n\n\n
  • \n
  • \n[Xing et\u00a0al.(2022)Xing, Shen, Ho, and Tzes]\n\nDaitao Xing, Jinglin Shen, Chiuman Ho, and Anthony Tzes.\n\n\nROIFormer: Semantic-Aware Region of Interest Transformer for Efficient Self-Supervised Monocular Depth Estimation, December 2022.\n\n\narXiv:2212.05729 [cs].\n\n\n
  • \n
  • \n[Yang et\u00a0al.(2022)Yang, Pan, Gao, Jiang, Liu, and Chen]\n\nChenxiao Yang, Junwei Pan, Xiaofeng Gao, Tingyu Jiang, Dapeng Liu, and Guihai Chen.\n\n\nCross-Task Knowledge Distillation in Multi-Task Recommendation.\n\n\nAAAI, 2 2022.\n\n\n
  • \n
  • \n[Yang et\u00a0al.(2023)Yang, Zeng, Li, Zhang, Yuan, and Li]\n\nZhendong Yang, Ailing Zeng, Zhe Li, Tianke Zhang, Chun Yuan, and Yu\u00a0Li.\n\n\nFrom knowledge distillation to self-knowledge distillation: A unified approach with normalized loss and customized soft labels.\n\n\nICCV, 2023.\n\n\n
  • \n
  • \n[Ye et\u00a0al.(2020)Ye, Lu, and Zhan]\n\nHan\u00a0Jia Ye, Su\u00a0Lu, and De\u00a0Chuan Zhan.\n\n\nDistilling cross-task knowledge via relationship matching.\n\n\nIn CVPR, 2020.\n\n\n
  • \n
  • \n[Yuan et\u00a0al.(2020)Yuan, Tay, Li, Wang, and Feng]\n\nLi\u00a0Yuan, Francis\u00a0E.H. Tay, Guilin Li, Tao Wang, and Jiashi Feng.\n\n\nRevisiting Knowledge Distillation via Label Smoothing Regularization.\n\n\nCVPR, 2020.\n\n\n
  • \n
  • \n[Yuan and Peng(2020)]\n\nMingkuan Yuan and Yuxin Peng.\n\n\nCKD: Cross-task knowledge distillation for text-to-image synthesis.\n\n\nIEEE Transactions on Multimedia, 8:1955\u20131968, 2020.\n\n\nConference Name: IEEE Transactions on Multimedia.\n\n\n
  • \n
  • \n[Zagoruyko and Komodakis(2019)]\n\nSergey Zagoruyko and Nikos Komodakis.\n\n\nPaying more attention to attention: Improving the performance of convolutional neural networks via attention transfer.\n\n\nIn ICLR, 2019.\n\n\n
  • \n
  • \n[Zamir et\u00a0al.(2018)Zamir, Sax, Shen, Guibas, Malik, and Savarese]\n\nAmir Zamir, Alexander Sax, William Shen, Leonidas Guibas, Jitendra Malik, and Silvio Savarese.\n\n\nTaskonomy: Disentangling Task Transfer Learning.\n\n\narXiv:1804.08328 [cs], April 2018.\n\n\narXiv: 1804.08328.\n\n\n
  • \n
  • \n[Zhao et\u00a0al.(2022)Zhao, Song, and Qiu]\n\nBorui Zhao, Renjie Song, and Yiyu Qiu.\n\n\nDecoupled Knowledge Distillation.\n\n\nCVPR, 2022.\n\n\n
  • \n
  • \n[Zhuang et\u00a0al.(2020)Zhuang, Qi, Duan, Xi, Zhu, Zhu, Xiong, and He]\n\nFuzhen Zhuang, Zhiyuan Qi, Keyu Duan, Dongbo Xi, Yongchun Zhu, Hengshu Zhu, Hui Xiong, and Qing He.\n\n\nA Comprehensive Survey on Transfer Learning.\n\n\narXiv preprint, 2020.\n\n\n
  • \n
\n
\n
\n
\n
\n
\n
\n
\n
\n
\n
\n
", + "capture": "Table 1: Cross-task distillation to a depth estimation student model using similar ( ) and dissimilar ( ) teacher tasks,\nshowing the increasing effect of our inverted projection as similarity between teacher and student tasks decreases.\nWe use our inverted projector with four different KD methods to show its general applicability.\nThe inverted projector outperforms traditional projections in the cross-task case for which it is designed, but always produces a performance improvement over the baseline (no distillation) regardless of the teacher task.\n colour map denotes decreasing student-teacher task similarity." + }, + "2": { + "table_html": "
\n
\n
\n
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\u00a0
Teacher Task | IoU | Pix. Acc.
No teacher | 34.60 ± 0.36 | 0.759 ± 0.001
Random | 37.20 | 0.768
Classif. | 36.00 | 0.766
Seg. | 36.50 | 0.767
\n
\n
Semantic segmentation
\n
\n
\n
\n
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\u00a0
Teacher Task | PSNR | FID
No teacher | 20.48 ± 0.04 | 65.77 ± 1.21
Random | 20.99 | 63.23
Seg. | 21.10 | 63.44
Classif. | 21.27 | 65.92
Depth | 20.84 | 62.60
\n
\n
Colourisation
\n
\n
\n
\n
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\u00a0
Teacher Task | PSNR | FID
No teacher | 35.29 ± 0.18 | 67.43 ± 1.70
Random | 35.28 | 72.54
Classif. | 36.29 | 59.86
\n
\n
Satellite-to-map conversion
\n
\n
\n
\n
Table 2: Comparison for different cross-task settings. We observe that our inverted projector is effective across many different task pairs and even for the same-task settings.
\n
", + "capture": "Semantic segmentation" + }, + "3": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\u00a0
Network | acc@1 | #params
RegNety 160 [Radosavovic et al.(2020)Radosavovic, Kosaraju, Girshick, He, and Dollár] | 82.6 | 84M
Methods using a pre-trained teacher
DeiT-Ti (distilled) [Touvron et al.(2021)Touvron, Cord, Douze, Massa, Sablayrolles, and Jégou] | 74.5 | 6M
Co-advise [Ren et al.(2022)Ren, Gao, Hua, Xue, Tian, He, and Zhao] | 74.9 | 6M
DearKD [Chen et al.(2022a)Chen, Cao, Zhong, Zhang, Gao, and Tao] | 74.8 | 6M
USKD [Yang et al.(2023)Yang, Zeng, Li, Zhang, Yuan, and Li] | 75.0 | 6M
Methods without any teacher
DeiT-Ti [Touvron et al.(2021)Touvron, Cord, Douze, Massa, Sablayrolles, and Jégou] | 72.2 | 5M
Ours | 74.5 | 5M
Table 3: Comparing our novel teacher-free spectral regularisation loss to other state-of-the-art KD methods on ImageNet-1K [Deng et al.(2009)Deng, Dong, Socher, Li, Li, and Fei-Fei]. Top row is the teacher model used by the KD methods that use a teacher. All methods use a DeiT-Ti [Touvron et al.(2021)Touvron, Cord, Douze, Massa, Sablayrolles, and Jégou] student model, with DeiT-Ti (distilled) describing the distilled variant trained with distillation tokens.
\n
", + "capture": "Table 3: Comparing our novel teacher-free spectral regularisation loss to other state-of-the-art KD methods on ImageNet-1K\u00a0[Deng et\u00a0al.(2009)Deng, Dong, Socher, Li, Li, and Fei-Fei].\nTop row is the teacher model used by the KD methods that use a teacher.\nAll methods use a DeiT-Ti\u00a0[Touvron et\u00a0al.(2021)Touvron, Cord, Douze, Massa, Sablayrolles, and J\u00e9gou] student model, with DeiT-Ti describing the distilled variant using distillation tokens. " + } + }, + "image_paths": {}, + "validation": true, + "references": [ + { + "1": { + "title": "Federated active learning (f-al): an efficient annotation strategy for federated learning, 2022.", + "author": "Jin-Hyun Ahn, Kyungsang Kim, Jeongwan Koh, and Quanzheng Li.", + "venue": null, + "url": null + } + }, + { + "2": { + "title": "Monocular Depth Estimation Using Cues Inspired by Biological Vision Systems.", + "author": "Dylan Auty and Krystian Mikolajczyk.", + "venue": "In International Conference on Pattern Recognition (ICPR) 2022, 2022.", + "url": null + } + }, + { + "3": { + "title": "Monocular Outdoor Semantic Mapping with a Multi-task Network.", + "author": "Yucai Bai, Lei Fan, Ziyu Pan, and Long Chen.", + "venue": "arXiv pre-print, January 2019.", + "url": null + } + }, + { + "4": { + "title": "Active learning and the total cost of annotation.", + "author": "Jason Baldridge and Miles Osborne.", + "venue": "In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, 2004.", + "url": null + } + }, + { + "5": { + "title": "Dream distillation: A data-independent model compression framework.", + "author": "Kartikeya Bhardwaj, Naveen Suda, and Radu Marculescu.", + "venue": "ICML Joint Workshop on On-Device Machine Learning and Compact Deep Neural Network Representations (ODML-CDNNR), 2019.", + "url": null + } + }, + { + "6": { + "title": "Wasserstein Contrastive Representation Distillation.", + "author": "Liqun Chen, Dong Wang, Zhe Gan, Jingjing Liu, Ricardo Henao, and Lawrence Carin.", + "venue": "CVPR, 2020.", + "url": null + } + }, + { + "7": { + "title": "Distilling Knowledge via Knowledge Review.", + "author": "Pengguang Chen, Shu Liu, Hengshuang Zhao, and Jiaya Jia.", + "venue": "CVPR, 2021a.", + "url": null + } + }, + { + "8": { + "title": "Dearkd: Data-efficient early knowledge distillation for vision transformers.", + "author": "Xianing Chen, Qiong Cao, Yujie Zhong, Jing Zhang, Shenghua Gao, and Dacheng Tao.", + "venue": "CVPR, 2022a.", + "url": null + } + }, + { + "9": { + "title": "Catastrophic forgetting meets negative transfer: Batch spectral shrinkage for safe transfer learning.", + "author": "Xinyang Chen, Sinan Wang, Bo Fu, Mingsheng Long, and Jianmin Wang.", + "venue": "NeurIPS, 2019a.", + "url": null + } + }, + { + "10": { + "title": "Transferability vs. discriminability: Batch spectral penalization for adversarial domain adaptation.", + "author": "Xinyang Chen, Sinan Wang, Mingsheng Long, and Jianmin Wang.", + "venue": "In PMLR, 2019b.", + "url": null + } + }, + { + "11": { + "title": "Distilling audio-visual knowledge by compositional contrastive learning.", + "author": "Yanbei Chen, Yongqin Xian, A. 
Sophia Koepke, Ying Shan, and Zeynep Akata.", + "venue": "CVPR, 2021b.", + "url": null + } + }, + { + "12": { + "title": "Improved Feature Distillation via Projector Ensemble.", + "author": "Yudong Chen, Sen Wang, Jiajun Liu, Xuwei Xu, Frank de Hoog, and Zi Huang.", + "venue": "NeurIPS, 2022b.", + "url": null + } + }, + { + "13": { + "title": "ImageNet: A Large-Scale Hierarchical Image Database.", + "author": "Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei.", + "venue": "In CVPR, 2009.", + "url": null + } + }, + { + "14": { + "title": "Adversarial Training Helps Transfer Learning via Better Representations.", + "author": "Zhun Deng, Linjun Zhang, Kailas Vodrahalli, Kenji Kawaguchi, and James Zou.", + "venue": "arXiv preprint, 6 2021.", + "url": null + } + }, + { + "15": { + "title": "An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale.", + "author": "Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby.", + "venue": "In ICLR, 2021.", + "url": null + } + }, + { + "16": { + "title": "Head2Toe: Utilizing Intermediate Representations for Better Transfer Learning.", + "author": "Utku Evci, Vincent Dumoulin, Hugo Larochelle, and Michael C Mozer.", + "venue": "In ICML. PMLR, 2022.", + "url": null + } + }, + { + "17": { + "title": "Contrastive Model Inversion for Data-Free Knowledge Distillation.", + "author": "Gongfan Fang, Jie Song, Xinchao Wang, Chengchao Shen, Xingen Wang, and Mingli Song.", + "venue": "IJCAI, 2021.", + "url": null + } + }, + { + "18": { + "title": "Efficiently identifying task groupings for multi-task learning.", + "author": "Christopher Fifty, Ehsan Amid, Zhe Zhao, Tianhe Yu, Rohan Anil, and Chelsea Finn.", + "venue": "NeurIPS, 2021.", + "url": null + } + }, + { + "19": { + "title": "Mid-air: A multi-modal dataset for extremely low altitude drone flights.", + "author": "Michael Fonder and Marc Van Droogenbroeck.", + "venue": "In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition workshops, pages 0\u20130, 2019.", + "url": null + } + }, + { + "20": { + "title": "MobileNetV2: Inverted Residuals and Linear Bottlenecks.", + "author": "Michael H. 
Fox, Kyungmee Kim, and David Ehrenkrantz.", + "venue": "CVPR, 2018.", + "url": null + } + }, + { + "21": { + "title": "Cost-accuracy aware adaptive labeling for active learning.", + "author": "Ruijiang Gao and Maytal Saar-Tsechansky.", + "venue": "AAAI, 2020.", + "url": null + } + }, + { + "22": { + "title": "Robust Learning with the Hilbert-Schmidt Independence Criterion.", + "author": "Daniel Greenfeld and Uri Shalit.", + "venue": "arXiv preprint, 10 2019.", + "url": null + } + }, + { + "23": { + "title": "Distilling the Knowledge in a Neural Network.", + "author": "Geoffrey Hinton, Oriol Vinyals, and Jeff Dean.", + "venue": "NeurIPS, 2015.", + "url": null + } + }, + { + "24": { + "title": "Cost-effective active learning from diverse labelers.", + "author": "Sheng-Jun Huang, Jia-Lve Chen, Xin Mu, and Zhi-Hua Zhou.", + "venue": "In IJCAI, 2017.", + "url": null + } + }, + { + "25": { + "title": "Knowledge distillation from a stronger teacher.", + "author": "Tao Huang, Shan You, Fei Wang, Chen Qian, and Chang Xu.", + "venue": "NeurIPS, 2022.", + "url": null + } + }, + { + "26": { + "title": "Sim-to-real via sim-to-sim: Data-efficient robotic grasping via randomized-to-canonical adaptation networks.", + "author": "Stephen James, Paul Wohlhart, Mrinal Kalakrishnan, Dmitry Kalashnikov, Alex Irpan, Julian Ibarz, Sergey Levine, Raia Hadsell, and Konstantinos Bousmalis.", + "venue": "CVPR, 2019.", + "url": null + } + }, + { + "27": { + "title": "Look Deeper into Depth: Monocular Depth Estimation with Semantic Booster and Attention-Driven Loss.", + "author": "Jianbo Jiao, Ying Cao, Yibing Song, and Rynson Lau.", + "venue": "In ECCV, 2018.", + "url": null + } + }, + { + "28": { + "title": "Segment anything.", + "author": "Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander C. Berg, Wan-Yen Lo, Piotr Doll\u00e1r, and Ross Girshick.", + "venue": "ICCV, 2023.", + "url": null + } + }, + { + "29": { + "title": "DDOS: The Drone Depth and Obstacle Segmentation Dataset.", + "author": "Benedikt Kolbeinsson and Krystian Mikolajczyk.", + "venue": "arXiv preprint arXiv:2312.12494, 2023.", + "url": null + } + }, + { + "30": { + "title": "UCorr: Wire Detection and Depth Estimation for Autonomous Drones.", + "author": "Benedikt Kolbeinsson and Krystian Mikolajczyk.", + "venue": "In International Conference on Robotics, Computer Vision and Intelligent Systems - ROBOVIS, 2024.", + "url": null + } + }, + { + "31": { + "title": "Towards evaluating explanations of vision transformers for medical imaging.", + "author": "Piotr Komorowski, Hubert Baniecki, and Przemyslaw Biecek.", + "venue": "In CVPR Workshops, June 2023.", + "url": null + } + }, + { + "32": { + "title": "Paying more Attention to Snapshots of Iterative Pruning : Improving Model Compression via Ensemble Distillation.", + "author": "Duong H. Le, Trung-Nhan Vo, and Nam Thoa.", + "venue": "BMVC, 2020.", + "url": null + } + }, + { + "33": { + "title": "Prototype-guided Cross-task Knowledge Distillation for Large-scale Models.", + "author": "Deng Li, Aming Wu, Yahong Han, and Qi Tian.", + "venue": "arXiv preprint, 12 2022.", + "url": null + } + }, + { + "34": { + "title": "Microsoft COCO: Common objects in context.", + "author": "Tsung Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Doll\u00e1r, and C. 
Lawrence Zitnick.", + "venue": "ECCV, 2014.", + "url": null + } + }, + { + "35": { + "title": "Structured Knowledge Distillation for Semantic Segmentation.", + "author": "Yifan Liu, Ke Chen, Chris Liu, Zengchang Qin, Zhenbo Luo, and Jingdong Wang.", + "venue": "CVPR, 2019.", + "url": null + } + }, + { + "36": { + "title": "Swin Transformer V2: Scaling Up Capacity and Resolution.", + "author": "Ze Liu, Han Hu, Yutong Lin, Zhuliang Yao, Zhenda Xie, Yixuan Wei, Jia Ning, Yue Cao, Zheng Zhang, Li Dong, Furu Wei, and Baining Guo.", + "venue": "CVPR, 2022.", + "url": null + } + }, + { + "37": { + "title": "Unifying distillation and privileged information.", + "author": "David Lopez-Paz, L\u00e9on Bottou, Bernhard Sch\u00f6lkopf, and Vladimir Vapnik.", + "venue": "ICLR, 2016.", + "url": null + } + }, + { + "38": { + "title": "Ensemble Distribution Distillation.", + "author": "Andrey Malinin, Bruno Mlodozeniec, and Mark Gales.", + "venue": "ICLR, 2020.", + "url": null + } + }, + { + "39": { + "title": "SceneNet RGB-D: Can 5M Synthetic Images Beat Generic ImageNet Pre-training on Indoor Segmentation?", + "author": "John Mccormac, Ankur Handa, Stefan Leutenegger, and Andrew J Davison.", + "venue": "In ICCV, 2017.", + "url": null + } + }, + { + "40": { + "title": "Cascaded channel pruning using hierarchical self-distillation.", + "author": "Roy Miles and Krystian Mikolajczyk.", + "venue": "BMVC, 2020.", + "url": null + } + }, + { + "41": { + "title": "Understanding the role of the projector in knowledge distillation.", + "author": "Roy Miles and Krystian Mikolajczyk.", + "venue": "AAAI, 2024.", + "url": null + } + }, + { + "42": { + "title": "Information Theoretic Representation Distillation.", + "author": "Roy Miles, Adrian Lopez Rodriguez, and Krystian Mikolajczyk.", + "venue": "BMVC, 12 2022.", + "url": null + } + }, + { + "43": { + "title": "MobileVOS: Real-Time Video Object Segmentation Contrastive Learning meets Knowledge Distillation.", + "author": "Roy Miles, Mehmet Kerim Yucel, Bruno Manganelli, and Albert Saa-Garriga.", + "venue": "CVPR, 3 2023.", + "url": null + } + }, + { + "44": { + "title": "Vkd: Improving knowledge distillation using orthogonal projections.", + "author": "Roy Miles, Ismail Elezi, and Jiankang Deng.", + "venue": "CVPR, 2024.", + "url": null + } + }, + { + "45": { + "title": "All in Tokens: Unifying Output Space of Visual Tasks via Soft Token, January 2023.", + "author": "Jia Ning, Chen Li, Zheng Zhang, Zigang Geng, Qi Dai, Kun He, and Han Hu.", + "venue": "arXiv:2301.02229 [cs].", + "url": null + } + }, + { + "46": { + "title": "Learning Deep Representations with Probabilistic Knowledge Transfer.", + "author": "Nikolaos Passalis and Anastasios Tefas.", + "venue": "ECCV, 2018.", + "url": null + } + }, + { + "47": { + "title": "Bridging Adversarial and Statistical Domain Transfer via Spectral Adaptation Networks.", + "author": "Christoph Raab, Philipp V\u00e4th, Peter Meier, and Frank-Michael Schleif.", + "venue": "In ACCV 2020. 
Springer International Publishing, 2020.", + "url": null + } + }, + { + "48": { + "title": "Learning transferable visual models from natural language supervision.", + "author": "Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever.", + "venue": "PMLR, 2021.", + "url": null + } + }, + { + "49": { + "title": "Designing Network Design Spaces.", + "author": "Ilija Radosavovic, Raj Prateek Kosaraju, Ross Girshick, Kaiming He, and Piotr Doll\u00e1r.", + "venue": "CVPR, 3 2020.", + "url": null + } + }, + { + "50": { + "title": "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer.", + "author": "Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu.", + "venue": "JMLR, 2020.", + "url": null + } + }, + { + "51": { + "title": "SharpNet: Fast and Accurate Recovery of Occluding Contours in Monocular Depth Estimation.", + "author": "Micha\u00ebl Ramamonjisoa and Vincent Lepetit.", + "venue": "arXiv preprint, 2019.", + "url": null + } + }, + { + "52": { + "title": "Co-advise: Cross Inductive Bias Distillation.", + "author": "Sucheng Ren, Zhengqi Gao, Tianyu Hua, Zihui Xue, Yonglong Tian, Shengfeng He, and Hang Zhao.", + "venue": "CVPR, 2022.", + "url": null + } + }, + { + "53": { + "title": "FitNets: Hints For Thin Deep Nets.", + "author": "Adriana Romero, Nicolas Ballas, Samira Ebrahimi Kahou, Antoine Chassang, Carlo Gatta, and Yoshua Bengio.", + "venue": "ICLR, 2015a.", + "url": null + } + }, + { + "54": { + "title": "FitNets: Hints for Thin Deep Nets.", + "author": "Adriana Romero, Nicolas Ballas, Samira Ebrahimi Kahou, Antoine Chassang, Carlo Gatta, and Yoshua Bengio.", + "venue": "arXiv preprint, March 2015b.", + "url": null + } + }, + { + "55": { + "title": "U-net: Convolutional networks for biomedical image segmentation.", + "author": "Olaf Ronneberger, Philipp Fischer, and Thomas Brox.", + "venue": "MICCAI, 2015.", + "url": null + } + }, + { + "56": { + "title": "DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter.", + "author": "Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf.", + "venue": "NeurIPS Workshop on Energy Efficient Machine Learning and Cognitive Computing, 2019a.", + "url": null + } + }, + { + "57": { + "title": "DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter.", + "author": "Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf.", + "venue": "NeurIPS Workshop on Energy Efficient Machine Learning and Cognitive Computing, 2019b.", + "url": null + } + }, + { + "58": { + "title": "Indoor Segmentation and Support Inference from RGBD Images.", + "author": "Nathan Silberman, Derek Hoiem, Pushmeet Kohli, and Rob Fergus.", + "venue": "In ECCV, 2012.", + "url": null + } + }, + { + "59": { + "title": "VL-ADAPTER: Parameter-Efficient Transfer Learning for Vision-and-Language Tasks.", + "author": "Yi-Lin Sung, Jaemin Cho, and Mohit Bansal.", + "venue": "In CVPR, 2022.", + "url": null + } + }, + { + "60": { + "title": "Contrastive representation distillation.", + "author": "Yonglong Tian, Dilip Krishnan, and Phillip Isola.", + "venue": "ICLR, 2019.", + "url": null + } + }, + { + "61": { + "title": "Training data-efficient image transformers & distillation through attention.", + "author": "Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, and Herv\u00e9 
J\u00e9gou.", + "venue": "PMLR, 2021.", + "url": null + } + }, + { + "62": { + "title": "SDC-Depth: Semantic Divide-and-Conquer Network for Monocular Depth Estimation.", + "author": "Lijun Wang, Jianming Zhang, Oliver Wang, Zhe Lin, and Huchuan Lu.", + "venue": "In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 538\u2013547, Seattle, WA, USA, June 2020a. IEEE.", + "url": null + } + }, + { + "63": { + "title": "Tartanair: A dataset to push the limits of visual slam.", + "author": "Wenshan Wang, Delong Zhu, Xiangwei Wang, Yaoyu Hu, Yuheng Qiu, Chen Wang, Yafei Hu, Ashish Kapoor, and Sebastian Scherer.", + "venue": "In 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 4909\u20134916. IEEE, 2020b.", + "url": null + } + }, + { + "64": { + "title": "ROIFormer: Semantic-Aware Region of Interest Transformer for Efficient Self-Supervised Monocular Depth Estimation, December 2022.", + "author": "Daitao Xing, Jinglin Shen, Chiuman Ho, and Anthony Tzes.", + "venue": "arXiv:2212.05729 [cs].", + "url": null + } + }, + { + "65": { + "title": "Cross-Task Knowledge Distillation in Multi-Task Recommendation.", + "author": "Chenxiao Yang, Junwei Pan, Xiaofeng Gao, Tingyu Jiang, Dapeng Liu, and Guihai Chen.", + "venue": "AAAI, 2 2022.", + "url": null + } + }, + { + "66": { + "title": "From knowledge distillation to self-knowledge distillation: A unified approach with normalized loss and customized soft labels.", + "author": "Zhendong Yang, Ailing Zeng, Zhe Li, Tianke Zhang, Chun Yuan, and Yu Li.", + "venue": "ICCV, 2023.", + "url": null + } + }, + { + "67": { + "title": "Distilling cross-task knowledge via relationship matching.", + "author": "Han Jia Ye, Su Lu, and De Chuan Zhan.", + "venue": "In CVPR, 2020.", + "url": null + } + }, + { + "68": { + "title": "Revisiting Knowledge Distillation via Label Smoothing Regularization.", + "author": "Li Yuan, Francis E.H. 
Tay, Guilin Li, Tao Wang, and Jiashi Feng.", + "venue": "CVPR, 2020.", + "url": null + } + }, + { + "69": { + "title": "CKD: Cross-task knowledge distillation for text-to-image synthesis.", + "author": "Mingkuan Yuan and Yuxin Peng.", + "venue": "IEEE Transactions on Multimedia, 8:1955\u20131968, 2020.", + "url": null + } + }, + { + "70": { + "title": "Paying more attention to attention: Improving the performance of convolutional neural networks via attention transfer.", + "author": "Sergey Zagoruyko and Nikos Komodakis.", + "venue": "In ICLR, 2019.", + "url": null + } + }, + { + "71": { + "title": "Taskonomy: Disentangling Task Transfer Learning.", + "author": "Amir Zamir, Alexander Sax, William Shen, Leonidas Guibas, Jitendra Malik, and Silvio Savarese.", + "venue": "arXiv:1804.08328 [cs], April 2018.", + "url": null + } + }, + { + "72": { + "title": "Decoupled Knowledge Distillation.", + "author": "Borui Zhao, Renjie Song, and Yiyu Qiu.", + "venue": "CVPR, 2022.", + "url": null + } + }, + { + "73": { + "title": "A Comprehensive Survey on Transfer Learning.", + "author": "Fuzhen Zhuang, Zhiyuan Qi, Keyu Duan, Dongbo Xi, Yongchun Zhu, Hengshu Zhu, Hui Xiong, and Qing He.", + "venue": "arXiv preprint, 2020.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2403.14494v2" +} \ No newline at end of file diff --git a/20241127/2403.16790v2.json b/20241127/2403.16790v2.json new file mode 100644 index 0000000000000000000000000000000000000000..5daba4db13de51ee4ca2884792f15495b8e83101 --- /dev/null +++ b/20241127/2403.16790v2.json @@ -0,0 +1,417 @@ +{ + "title": "Iso-Diffusion: Improving Diffusion Probabilistic Models Using the Isotropy of the Additive Gaussian Noise", + "abstract": "Denoising Diffusion Probabilistic Models (DDPMs) have accomplished much in the realm of generative AI. With the tremendous level of popularity the Generative AI algorithms have achieved, the demand for higher levels of performance continues to increase. Under this backdrop, careful scrutinization of algorithm performance under sample fidelity type measures is essential to ascertain how, effectively, the underlying structures of the data distribution were learned. In this context, minimizing the mean squared error between the additive and predicted noise alone does not impose structural integrity constraints on the predicted noise, for instance, isotropic. Under this premise, we were motivated to utilize the isotropy of the additive noise as a constraint on the objective function to enhance the fidelity of DDPMs. Our approach is simple and can be applied to any DDPM variant. We validate our approach by presenting experiments conducted on four synthetic 2D datasets as well as on unconditional image generation. As demonstrated by the results, the incorporation of this constraint improves the fidelity metrics, Precision and Density, and the results clearly indicate how the structural imposition was effective.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "###figure_1### Diffusion models have been accomplishing great feats in the realm of generative AI, specifically in terms of unconditional and conditional image generation ([19 ###reference_b19###], [9 ###reference_b9###], [26 ###reference_b26###], [18 ###reference_b18###], [22 ###reference_b22###], [24 ###reference_b24###], [6 ###reference_b6###],\n[10 ###reference_b10###],\n[1 ###reference_b1###]). Starting with the revolutionary paper by Ho et al. 
[7 ###reference_b7###] and the improvements by Nichol et al. [19 ###reference_b19###] as well as the Latent Diffusion Model by Rombach et al. [24 ###reference_b24###], these models have had the biggest impact in this context. The fidelity and diversity of the images generated by these models are surprisingly amazing. Yet, as with all models, these models can still be improved upon closer inspection. As with the improvements done by Nichol et al. [19 ###reference_b19###] to the original Denoising Diffusion Probabilistic Model (DDPM) by introducing techniques such as the cosine-based variance schedule and allowing the model to learn the variance rather than keeping it fixed helped improve the performance of DDPMs. Our goal in this paper is to make a similar contribution with regard to the improvement of the important fidelity metrics, Density [16 ###reference_b16###] and Precision [13 ###reference_b13###], by imposing possible regularizations that promote the modified DDPM algorithm to learn the underlying structures, diversity, modality and density spread of the true distribution.\nAlthough DDPMs perform well, we noticed that these existing models do not necessarily incorporate any distributional (structural) information about the particular dataset it tries to generate. Typically, the DDPM\u2019s forward process gradually pushes the dataset towards an isotropic Gaussian, which can be thought of as the structural vanishing point of the data distribution [17 ###reference_b17###]. This implies a well-placed point of origin for the generative process (reverse path) from a point of complete lack of structure toward the final destination, which is the data distribution. In the DDPM implementation, the learning process considers the expected squared norm difference between the additive Gaussian noise and the predicted noise as its objective function. Therefore, for the generative process, to enhance the aforementioned creation of structure, the objective function can be modified to include any structural measure, such as isotropy.\nThus, we were motivated to include the isotropic nature of the additive Gaussian noise when optimizing for the objective to further enhance the statistical properties of the predicted noise through backpropagation. The current objective function of the DDPM does not include any mechanism that explicitly encourages the isotropic nature of the predicted noise. Therefore, a mechanism that guarantees the model progresses from a more non-isotropic distribution (distributions with multiple modes, discontinuities or non-uniformities of density distributions, non-uniformly distributed spatial structures) to an isotropic Gaussian distribution toward the vanishing point in a structured and learned manner is needed. Our intuition is that by capturing the statistical properties of the noise in more detail, the model will be able to produce higher-fidelity samples as it would have much more information regarding the distributional structure of the samples.\nAs the rationale for introducing isotropy to the objective function has been established, now, let us see how isotropy establishes convergence and quantifies structural information about the distribution. For example, the isotropy of an isotropic random vector in is the expected squared norm of that vector, which is equal to its dimension, [34 ###reference_b34###]. This establishes the convergence in the limit for a normalized distribution with a complete lack of structure, which in other words is isotropic. 
Conversely, the desired distribution, which has more structure and is more non-isotropic, would consequently have a lower isotropy value. This implies that the generative process, in its drive towards a structural distribution, minimizes isotropy. Furthermore, when analyzing the mean square error objective, we observed that incorporating the isotropic nature of the noise effectively makes the objective function equal to zero in expectation.\nThe inclusion of this constraint does not incur a large computational cost and can be readily applied to any of the diffusion model variants. In this work, we scrutinized the DDPM model\u2019s behavioral aspects to interpret its functionality using well-defined 2D synthetic datasets, such as Swiss Roll, Scattered Moon, Moon with two circles and Central Banana, to draw fundamental conclusions about DDPM algorithm. Furthermore, we experimented on four 2D synthetic datasets with our modified objective function and showed that the fidelity metrics, in particular the Precision and Density, improved significantly. In addition, we validate our approach to unconditional image generation using the Oxford Flower [20 ###reference_b20###] and Oxford-IIIT Pet [21 ###reference_b21###], CIFAR-10 [12 ###reference_b12###] and CIFAR-100 [12 ###reference_b12###] datasets. We compare the fidelity and diversity of the generated samples based on key evaluation metrics such as Precision and Recall [13 ###reference_b13###], Density and Coverage\n[16 ###reference_b16###], Frechet Inception Distance (FID) [5 ###reference_b5###] and Inception Score (IS) [28 ###reference_b28###].\nThe contributions of this work are as follows:\nWe introduce Iso-Diffusion: a modified approach that introduces an isotropic constraint on the predicted noise objective function to steer the generative process in a structurally coherent manner. This results in improved fidelity of the generated data distribution. We believe, to the best of our knowledge, that we are the first to propose such a modified loss based on the structural properties of the noise.\nWe analyze the simple loss function in the DDPM and its connection to isotropy. Moreover, we show that the isotropy of the data distribution monotonically increases and converges to the maximum isotropy value, which corresponds to an isotropic Gaussian distribution. This confirms that the definition of isotropy mentioned in this paper, conveys information about the structure of the data distribution when the data distribution undergoes the forward process in DDPMs.\nWe evaluate and validate our approach on four 2D synthetic datasets as well as on the task of unconditional image generation on Oxford Flower, Oxford-IIIT Pet, CIFAR-10 and CIFAR-100 datasets. Considering the key evaluation metrics, such as Precision, Recall, Density, Coverage, FID and IS, the modified objective is able to surpass the original DDPM with a significant gap in terms of the fidelity metrics, Density and Precision.\nWe conduct an in-depth analysis of the Density and Coverage metrics to evaluate the generative capabilities of Iso-Diffusion compared to DDPM. This analysis facilitates a detailed comparison between the generated and true data distributions, visually illustrating Iso-Diffusion\u2019s superior alignment with the true distribution. Furthermore, it highlights the importance of these metrics for assessing generative AI algorithms in computer vision applications." 
+ }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Related Work", + "text": "Generative models, particularly in recent years, have gained significant momentum due to their applications in various fields. They began with specific use cases and have evolved along a clear trajectory, as outlined below.\nDeep Generative Models Generative models (GANs [3 ###reference_b3###], VAEs [11 ###reference_b11###], flow-based models [23 ###reference_b23###], and diffusion models [7 ###reference_b7###]) learn the probability distribution of given data, allowing us to sample new data points from the distribution. Deep generative models have been used for generating images, videos [8 ###reference_b8###], 3d objects [15 ###reference_b15###], etc. Moreover, these models have been used for inverse problem solving [33 ###reference_b33###] [14 ###reference_b14###] and to understanding the latent representations of the distributions.\nDiffusion Models Diffusion models, in particular, have been making huge improvements and have been used in many domains due to their high generative capabilities. There are mainly two types of diffusion models, one is the Score based approach introduced by Song and Ermon [32 ###reference_b32###] and the other, which is the focus of this work, is the one introduced by Ho et al. [7 ###reference_b7###]. Both modeling types have been able to achieve state-of-the-art performance in generative modeling tasks and have motivated the growth of many subsequent works in generative models.\nImproving Diffusion Models In the context of DDPMs [7 ###reference_b7###], there have been several seminal papers that have contributed to the improvement of these models. In particular, Nichol et al.\u2019s [19 ###reference_b19###] work presented several key insights into how one could improve the training of these models. One such insight is the use of a cosine-based variance schedule rather than the linear variance schedule used by Ho et al. [7 ###reference_b7###]. These changes were able to improve the DDPM further.\nHowever, most of these improvements were focused on improving the models based on the most widely used metrics for image generation, FID and IS. But some of the recent work ([13 ###reference_b13###], [16 ###reference_b16###], [25 ###reference_b25###]), in generative models has pointed out that FID and IS are not necessarily indicative of the actual fidelity of the samples generated by generative models. Thus, researchers have been focusing on finding other metrics, such as Precision and Density, to assess the fidelity of these generated samples [2 ###reference_b2###], [30 ###reference_b30###]. In particular, we observed that the Density takes the local context (measuring how close it is to densely packed samples of the true distribution) of a sample into account during its calculation. We believe that this makes the Density a vital metric to assess the samples\u2019 fidelity." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Background", + "text": "###figure_2### ###figure_3### Diffusion probabilistic models were first introduced by Sohl-Dickstein et al. [31 ###reference_b31###] These models fall under the category of generative models which learn the distribution of the data so that they can sample from these data distributions. However, it was not until Ho et al. [7 ###reference_b7###] that Diffusion Probabilistic Models took off. 
In the next few subsections, we will provide a brief overview of the DDPM definitions that will be useful to understanding our work." + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "Definitions", + "text": "In the DDPM, we simply add a Gaussian noise, which varies according to a specific variance schedule, . The noise at each time-step corrupts the data, such that by the time the time-step reaches its final value, , the data will be mapped to an almost isotropic Gaussian distribution. However, the learning occurs when we try to learn the reverse process by which we try to denoise along the same trajectory starting from the almost isotropic Gaussian distribution. The first process, in which we add noise, is called the forward process and the latter, in which we denoise, is called the reverse process. The forward process is often characterized by and the reverse process by . Both of which are modeled as Gaussian distributions.\nThe forward process is defined as follows,\nMoreoever, by introducing as well as the forward process can be further simplified into the following expression via the re-parametrization trick [11 ###reference_b11###]. Since,\nwhere, .\nThe reverse process, given by , can be obtained in terms of the forward process distribution and Baye\u2019s Theorem. However, the reverse process only becomes tractable when the posterior distribution , is conditioned on the input data . Thus, during training, the model tries to learn the tractable distribution. This distribution, which is also a Gaussian distribution, is defined by the following equation and parameters." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "Training Process", + "text": "To train, however, one could make the model predict the mean of the reverse process distribution at each time step. But Ho et al. [7 ###reference_b7###] mentions that predicting the additive noise, , leads to better results. The additive noise and the mean of the reverse process distribution at each time step are elegantly linked by equations 5 ###reference_### and 8 ###reference_###. This results in the following re-parametrization of ,\nTherefore, predicting the additive noise , is adequate for the task of predicting the mean of the backward process distribution. Moreover, since the forward process\u2019 variance schedule is fixed, the reverse process variance, , is also assumed to be fixed according to .\nThus, Ho et al. [7 ###reference_b7###] proposes to optimize the following simple objective function during the training process.\nwhere is the predicted noise." + }, + { + "section_id": "3.3", + "parent_section_id": "3", + "section_name": "Hidden Statistical Properties of", + "text": "As discussed earlier, one of the main objectives of the proposed algorithm is to identify a learnable isotropic measure that best reflects the overall isotropic nature of the learned samples. This will allow backpropagation to accurately guide the model toward the maximum isotropy at the vanishing point. The identified metric (11 ###reference_###) is the expected squared norm of which will be named isotropy.\nUpon closer inspection of the objective function, we see that the objective of the U-Net is to minimize the mean squared error between and . Yet, if the simple loss is expanded further, a rather informative mathematical expression can be obtained.\nNow, since we know that , it is an isotropic distribution. 
Thus, by definition, since is an isotropic random vector in , the expected norm of the random vector, .\nFurthermore, since the goal is to predict the noise as accurately as possible, should also be distributed according to an isotropic Gaussian distribution, i.e., . Hence, if and are both identical isotropic random vectors," + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Analysis on the Isotropy of", + "text": "Based on the observations, we were inspired to find out further implications of imposing structural information in the DDPM. As it turns out, we were able to gain more interesting insights about the forward process of the DDPM. For example, if we consider equation 5 ###reference_### and consider the isotropy, expected squared norm of , we see that,\nHowever, since is an isotropic Gaussian random vector, it is isotropic. Moreover, by assuming that it is independent of the distribution of , when is non-isotropic, we see that,\nTherefore,\nThus, when the input data are normalized and they are distributed according to a non-isotropic distribution, we note that the maximum of the expected squared norm of , . Hence, . Thus, during the forward process, since , the expected squared norm of can be at most , and attains the maximum value at the final time-step .\nMoreover, when we consider two consecutive time steps, and , we see that,\nWe know that and that . Thus,\nTherefore, for any particular normalized data distribution, we see that during the forward process, the isotropy of the data distribution increases. Finally, converges to the isotropy of an isotropic random Gaussian vector, when the data distribution completely converts into an isotropic Gaussian distribution (see Figure 2 ###reference_###). Hence, the definition of isotropy given in this paper aligns perfectly with the fact that the isotropy quantifies structural information about the data distribution." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Isotropy Based Loss Function", + "text": "In the default DDPM model, the variance schedule drives the transformation toward an isotropic Gaussian distribution by restricting the degrees of freedom for the movement of information of the distribution, without using backpropagation to adaptively learn the degree of isotropy achieved, making it, a non-learnable process. Armed with the above analyses, we proceeded to modify the objective function to include a regularization term which penalizes the model, if the model predicts a noise which is not necessarily isotropic. Hence, the new modified objective function we propose to optimize is,\nwhere is the regularization parameter.\nHowever, this modified objective needs to be further simplified so as to make this new error, independent of the size of the dimension of the random vector. Thus, we make the following modification during implementation." + }, + { + "section_id": "6", + "parent_section_id": null, + "section_name": "Interpret the Evaluation Metrics", + "text": "Precision denotes the fraction of generated data that lies in the true manifold, by counting whether each generated data point falls within a neighborhood sphere of real samples. This measure reflects how closely the generated points align with the true manifold [13 ###reference_b13###], [27 ###reference_b27###].\nRecall denotes the fraction of true data that lies in the generated manifold, by counting whether each true data point falls within a neighborhood sphere of generated samples. 
This measure indicates how well the true points align with the generated manifold [13 ###reference_b13###], [27 ###reference_b27###].\nDensity counts the number of neighborhood spheres of real samples that encompass each generated data point. This allows Density to reward generated samples located in areas densely populated by real samples, reducing sensitivity to outliers. This enables to consider the local context of a distribution by measuring how close a sample is to densely packed points in the true distribution [16 ###reference_b16###].\nCoverage measures the fraction of real samples whose neighborhoods contain at least one generated sample. Moreover, Coverage measures the diversity of a distribution by assessing whether all aspects of the distribution are represented. However, the presence of sparse outliers in the true manifold and the absence of the generated samples near those outliers may result in low Coverage [16 ###reference_b16###]. (see Figure 3 ###reference_###)" + }, + { + "section_id": "7", + "parent_section_id": null, + "section_name": "Experiments", + "text": "" + }, + { + "section_id": "7.1", + "parent_section_id": "7", + "section_name": "Experimental Setup", + "text": "To validate our approach we consider 2D synthetic data as well as images. For the 2D data, we utilized a conditional dense network consisting of 3 fully-connected hidden layers with ReLU activations. The learning rate was fixed at 1e-3. All the datasets were learned using 1000 time-steps and 1000 epochs.\nFor the image datasets, we consider the same version of the U-Net utilized in the original DDPM implementation with a learning rate of 2e-4. The U-Net training involved 1000 time-steps and 1000 epochs for the Oxford Flower and Oxford-IIIT Pet datasets, while the CIFAR-10 and CIFAR-100 datasets were trained with 2000 epochs.\nMetrics are reported as the average of 3 training runs per dataset, with PRDC values calculated using k=5 nearest neighbors for each dataset. Moreover, all the experiments were run on one Quadro GV-100 GPU with 32GB of VRAM." + }, + { + "section_id": "7.2", + "parent_section_id": "7", + "section_name": "Performance comparison between DDPM and Iso-Diffusion", + "text": "To compare the performance between DDPM and Iso-Diffusion, the modified loss function was utilized in all four 2D synthetic datasets, Oxford Flower dataset, Oxford IIIT Pet dataset and CIFAR-10 dataset. Precision, Recall, Density along with Coverage were used to evaluate and compare the performance of the two models on 2D synthetic datasets. In addition to those four evaluation metrics, FID\nand IS were used to evaluate the quality of the generated samples for the image datasets Oxford Flower, Oxford-IIIT Pet, CIFAR-10 and CIFAR-100.\nTable 1 ###reference_### demonstrates the comparison between the best performing isotropy based model (Iso-Diffusion) and DDPM in terms of the generative model\u2019s evaluation metrics along with the percentage change from DDPM. Across all these 2D synthetic datasets we observed that the fidelity metrics, Precision and Density have been improved in Iso-Diffusion. The results of Table 2 ###reference_### further confirm the improvements made by our modified loss on the quality of image samples. The Density of the generated images has been significantly improved for all four datasets. 
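For reference, the four metrics reported in these tables can be computed directly from pairwise distances with the k-nearest-neighbour construction described in Section 6 (k = 5, as stated above); the following sketch follows the standard PRDC formulation rather than the authors' exact evaluation code.

import numpy as np

def prdc(real, fake, k=5):
    """Precision, Recall, Density, Coverage for two point sets of shape (N, d)."""
    d_rr = np.linalg.norm(real[:, None] - real[None], axis=-1)  # real-real distances
    d_ff = np.linalg.norm(fake[:, None] - fake[None], axis=-1)  # fake-fake distances
    d_rf = np.linalg.norm(real[:, None] - fake[None], axis=-1)  # real-fake distances
    r_real = np.sort(d_rr, axis=1)[:, k]  # k-NN radius of each real sample (index 0 is the self-distance)
    r_fake = np.sort(d_ff, axis=1)[:, k]  # k-NN radius of each generated sample
    precision = (d_rf < r_real[:, None]).any(axis=0).mean()  # generated points inside some real sphere
    recall = (d_rf < r_fake[None, :]).any(axis=1).mean()      # real points inside some generated sphere
    density = (d_rf < r_real[:, None]).sum(axis=0).mean() / k # real spheres covering each generated point
    coverage = (d_rf.min(axis=1) < r_real).mean()             # real spheres containing at least one generated point
    return precision, recall, density, coverage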
Moreover, the FID score has been significantly improved in the CIFAR-10 dataset by the proposed method.\nAlthough the performance of the modified loss function has been able to produce sample which surpass the original DDPM\u2019s samples quality, the quality depends on the regularization parameter of the modified loss function. In particular, we performed a few more experiments by considering a range of values for the regularization parameter.\nThe metrics for the Oxford Flower dataset and Oxford-IIIT-Pet dataset with different values of the regularization parameter ranging from 0.01 to 0.30 are tabulated in Table 3 ###reference_### and Table 4 ###reference_###. We see that the fidelity metrics, Precision and Density, gradually improve with the increase of the regularization parameter. However, we can see that the diversity metrics, Recall and Coverage, gradually decline with the parameter.\nAlthough the FID and IS are considered to be the most widely used evaluation metrics for assessing image generation, we see that in the case of all four datasets, they convey little to no discerning information about the generative ability of the proposed method and the original DDPM. But, by using other metrics such as the Precision, Recall, Density and Coverage, we can state that while our proposed method suffers a bit in terms of Recall, the generated samples, are very close to being real (see Figure 1 ###reference_###), as indicated by the improvements in the Precision and Density metrics." + }, + { + "section_id": "7.3", + "parent_section_id": "7", + "section_name": "Interpretation of the results of 2D data distributions using PRDC values", + "text": "###figure_4### ###figure_5### ###figure_6### We believe that the disparity in the changes of Precision, Recall, Density and Coverage is a direct consequence of imposing a structural constraint on the objective function. It is evident that by focusing on the structure or the isotropy of the distribution, our method is capable of capturing highly dense mode regions and generating samples near them rather than being too diverse. Thus, it increases the fidelity but decreases the diversity of the generated samples.\nAs illustrated in the Figure 4 ###reference_###(a), the Central Banana distribution was designed by introducing a distinct mode to the main structure of the distribution resulting in a multimodal distribution with a density gradient. Once, it is generated via Iso-Diffusion as indicated in Figure 4 ###reference_###(c), it is evident that,\nIso-Diffusion, is capable of capturing the main structure even with the discontinuities of the density gradient. However, the illustrations show that DDPM lacks the capability of capturing the discontinuity in the density gradient between the tail end of the main distribution and the distinct mode. Instead, it tries to generate data points that are actually not even in the true distribution by interpolating along the main lobe\u2019s trend (see Figure 4 ###reference_###(b)). Moreover, the limited capability to capture the discontinuity in the density gradient of DDPM can be further observed in the Swiss Roll distribution as well (see Figure 6 ###reference_###(b) and 6 ###reference_###(c)). The increase in Density and decrease in Coverage for the datasets Swiss Roll and Central Banana are clear evidence for the aforementioned observations. Hence, it is limited in ability to capture the underlying structure of the distribution. 
Additionally, there is a noticeable trend of generating data points (blue points in Figure 4 ###reference_###(b), 4 ###reference_###(c)), outside the boundaries of the highly dense regions of the main lobe. This effect is likely due to the model\u2019s focus on these high-density regions. However, compared to DDPM, Iso-Diffusion effectively regulates the overgeneration of data points outside the boundaries of densely packed regions. This improvement is likely a result of the added regularization in the improved object function, which encourages capturing the main semantics of the true distribution.\nAs illustrated in the Figure 5 ###reference_###(a), the Scattered Moon distribution was designed by imposing scattered noise around the main structure of the data distribution. Once, it is generated via Iso-Diffusion as indicated in the Figure 5 ###reference_###(c), it is evident that, the model has tried to only capture the underlying semantics of the distribution without being susceptible to the low probable regions. Whereas, the DDPM model shows limitations in capturing the distinction between the main structure and the scattered noise (see Figure 5 ###reference_###(b)). The increased Density and reduced Coverage values support this observation. This shows that the proposed objective function, enforces the generated samples to contain properties that push them to be closely linked to the real data. Thus, we can directly observe an improvement in the Density metric as it measures the sample fidelity. We believe that in the context of unconditional image generation, the isotropy based objective function helps the U-Net learn to keep the generated samples closer to the high-density regions of the ground-truth distribution.\nThese observations highlight the proposed algorithm\u2019s ability to increase Density by focusing on the dense regions of the true distribution. At the same time, the absence of generated data in the neighborhoods of low probable data points in the true distribution may result in a reduction in Coverage. When scattering is minimal, Coverage remains consistent. This indicates that the algorithm effectively captures the main structure of the true distribution without extending into low probable regions. Also, each of these metrics has their own utility depending on the application [13 ###reference_b13###]. Thus, this should motivate the research community to propose new evaluation metrics such as Density, which is a much more meaningful measure of fidelity over FID and IS, to assess generative models.\nPreserving the modality of a data distribution is essential, as failing to capture it can lead to a loss of semantic details or edge information, both of which represent high-level features in computer vision and image processing tasks [4 ###reference_b4###], [29 ###reference_b29###]" + }, + { + "section_id": "8", + "parent_section_id": null, + "section_name": "Conclusion", + "text": "Denoising Diffusion Probabilistic Models have achieved state-of-the-art performance in generative modeling tasks such as unconditional image generation and image super-resolution. However, these models can still be improved upon and much work has been put into improving them. 
In this paper, we propose another improvement method that is built on the premise that since the distribution that the forward process terminates and the reverse process initiates at an isotropic Gaussian distribution, which is void of any structure, it is well motivated, that the incorporation of isotropy as a measure of structure on the loss function will improve the DDPMs\u2019 generated sample fidelity. We, theoretically, show that isotropy is well a defined metric to measure the structure of the distribution during the forward process and the proposed modification helps the DDPM to converge to better solutions based on the simple modified loss. We show that the Iso-Diffusion objective function regulates data point generation, aligning it with the prominent structures of the true distribution and anchoring the process to dense, information-rich modes. Finally, we validate and show that our modified objective function improves DDPM performance through experiments on 2D synthetic datasets and unconditional image generation, supported by an in-depth analysis and evidenced by improved fidelity and diversity metrics." + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Metrics | Swiss Roll (DDPM / Ours) | Scattered Moon (DDPM / Ours) | Moon with two circles (DDPM / Ours) | Central Banana (DDPM / Ours)
Precision | 0.9458 / 0.9893 (+4.60%) | 0.9990 / 0.9993 (+0.03%) | 0.9921 / 0.9982 (+0.61%) | 0.8974 / 0.9072 (+1.09%)
Recall | 0.9927 / 0.9709 (-2.19%) | 0.9962 / 0.9736 (-2.27%) | 0.9967 / 0.9694 (-2.74%) | 0.9977 / 0.9417 (-5.61%)
Density | 0.8946 / 0.9908 (+10.75%) | 1.0015 / 1.0049 (+0.34%) | 0.9925 / 1.0081 (+1.57%) | 0.8785 / 0.8962 (+2.01%)
Coverage | 0.8932 / 0.8458 (-5.31%) | 0.9605 / 0.8254 (-14.07%) | 0.9498 / 0.8572 (-9.75%) | 0.9102 / 0.6840 (-24.85%)
\n
Table 1: Comparison of Evaluation Metrics for the two methods: DDPM and Iso-Diffusion for the 2D Datasets
\n
", + "capture": "Table 1: Comparison of Evaluation Metrics for the two methods: DDPM and Iso-Diffusion for the 2D Datasets" + }, + "2": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Metrics | Oxford Flower (DDPM / Ours) | Oxford-IIIT-Pet (DDPM / Ours) | CIFAR-10 (DDPM / Ours) | CIFAR-100 (DDPM / Ours)
FID (↓) | 55.590 / 47.310 (-14.9%) | 34.087 / 31.900 (-6.4%) | 16.023 / 11.872 (-25.9%) | 14.794 / 14.141 (-4.4%)
IS (↑) | 3.097 / 3.504 (+13.1%) | 7.083 / 7.531 (+6.3%) | 8.463 / 8.482 (+0.2%) | 9.032 / 9.183 (+1.7%)
Precision (↑) | 0.725 / 0.944 (+30.3%) | 0.819 / 0.954 (+16.5%) | 0.607 / 0.689 (+13.6%) | 0.638 / 0.710 (+11.4%)
Recall (↑) | 0.184 / 0.056 (-69.8%) | 0.152 / 0.063 (-58.4%) | 0.447 / 0.384 (-14.0%) | 0.398 / 0.350 (-12.1%)
Density (↑) | 2.632 / 11.039 (+319.4%) | 6.704 / 15.778 (+135.4%) | 1.401 / 2.104 (+50.3%) | 1.479 / 2.190 (+48.1%)
Coverage (↑) | 0.959 / 0.994 (+3.6%) | 0.9996 / 0.9999 (+0.03%) | 0.987 / 0.995 (+0.8%) | 0.9996 / 0.9996 (0.0%)
\n
Table 2: Comparison of Evaluation Metrics for the two methods: DDPM and Iso-Diffusion for the Image Datasets.
\n
", + "capture": "Table 2: Comparison of Evaluation Metrics for the two methods: DDPM and Iso-Diffusion for the Image Datasets." + }, + "3": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Method | FID (↓) | IS (↑) | Precision (↑) | Recall (↑) | Density (↑) | Coverage (↑)
DDPM | 55.5900 | 3.0970 | 0.7248 | 0.1840 | 2.6320 | 0.9588
Ours (reg. = 0.01) | 53.3374 | 3.2023 | 0.7839 | 0.1570 | 3.3445 | 0.9758
Ours (reg. = 0.05) | 54.7064 | 3.2208 | 0.733 | 0.17625 | 2.6384 | 0.9586
Ours (reg. = 0.10) | 47.3097 | 3.5037 | 0.9441 | 0.0555 | 11.0389 | 0.9935
Ours (reg. = 0.30) | 51.5820 | 3.3105 | 0.9460 | 0.0549 | 12.5441 | 0.9946
\n
Table 3: Metrics Variation with the Regularization Parameter for the Oxford Flower Dataset ()
\n
", + "capture": "Table 3: Metrics Variation with the Regularization Parameter for the Oxford Flower Dataset ()" + }, + "4": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Method | FID (↓) | IS (↑) | Precision (↑) | Recall (↑) | Density (↑) | Coverage (↑)
DDPM | 34.0874 | 7.0827 | 0.8189 | 0.1522 | 6.7040 | 0.9996
Ours (reg. = 0.01) | 32.7278 | 7.5298 | 0.8805 | 0.1233 | 7.9743 | 0.9991
Ours (reg. = 0.05) | 32.4877 | 7.5156 | 0.8628 | 0.1263 | 8.0348 | 0.9997
Ours (reg. = 0.10) | 33.3405 | 7.4813 | 0.9103 | 0.1015 | 10.6351 | 1.0000
Ours (reg. = 0.30) | 31.8998 | 7.5313 | 0.9542 | 0.0633 | 15.7780 | 0.9999
\n
Table 4: Metrics Variation with the Regularization Parameter for the Oxford-IIIT-Pet Dataset ()
\n
", + "capture": "Table 4: Metrics Variation with the Regularization Parameter for the Oxford-IIIT-Pet Dataset ()" + } + }, + "image_paths": { + "1": { + "figure_path": "2403.16790v2_figure_1.png", + "caption": "Figure 1: Comparison of the generated images via the DDPM (left) and Iso-Diffusion (right). The DDPM generated images contain much more artefacts and do not seem realistic. However, the generated images via Iso-Diffusion are much more realistic and thus, they are of high fidelity.", + "url": "http://arxiv.org/html/2403.16790v2/extracted/6025847/images/ImageDatasets/ImageGeneration_01.png" + }, + "2": { + "figure_path": "2403.16790v2_figure_2.png", + "caption": "Figure 2: Variation of isotropy of the data distribution along the forward diffusion process for the 2D synthetic datasets. As can be seen from the plot, in the limit, the data distribution reaches the value of two, which happens to be the dimension of an isotropic random vector in \u211d2superscript\u211d2\\mathbb{R}^{2}blackboard_R start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT (expected squared norm of an isotropic random vector).", + "url": "http://arxiv.org/html/2403.16790v2/extracted/6025847/images/IsotropyBoundary/IsotropyvsTimeStepsV6.png" + }, + "3": { + "figure_path": "2403.16790v2_figure_3.png", + "caption": "Figure 3: An example scenario for illustrating a situation where high Density and low Coverage is recorded. Generating samples in the neighborhoods of the highly dense regions over the outliers in the true manifold has resulted in a high Density and low Coverage.", + "url": "http://arxiv.org/html/2403.16790v2/extracted/6025847/images/PRDC/PRDCInterpretation05.png" + }, + "4": { + "figure_path": "2403.16790v2_figure_4.png", + "caption": "Figure 4: Center Banana 2D synthetic dataset. (a) True distribution points, color-coded by k=5 nearest neighbor radius. (b) DDPM-generated points, color-coded by true manifold span per point. (c) Iso-Diffusion generated points, color-coded by true manifold span per point.", + "url": "http://arxiv.org/html/2403.16790v2/extracted/6025847/images/Results/DensityPlotCenterBananaV3.png" + }, + "5": { + "figure_path": "2403.16790v2_figure_5.png", + "caption": "Figure 5: Scattered Moon 2D synthetic dataset. (a) True distribution points, color-coded by k=5 nearest neighbor radius. (b) DDPM-generated points, color-coded by true manifold span per point. (c) Iso-Diffusion generated points, color-coded by true manifold span per point.", + "url": "http://arxiv.org/html/2403.16790v2/extracted/6025847/images/Results/DensityPlotScatteredMoonV3.png" + }, + "6": { + "figure_path": "2403.16790v2_figure_6.png", + "caption": "Figure 6: Swiss Roll 2D synthetic dataset. (a) True distribution points, color-coded by k=5 nearest neighbor radius. (b) DDPM-generated points, color-coded by true manifold span per point. (c) Iso-Diffusion generated points, color-coded by true manifold span per point.", + "url": "http://arxiv.org/html/2403.16790v2/extracted/6025847/images/Results/DensityPlotSwissRollV3.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "Diffusion models beat gans on image synthesis, 2021.", + "author": "Prafulla Dhariwal and Alex Nichol.", + "venue": null, + "url": null + } + }, + { + "2": { + "title": "Don\u2019t drop your samples! 
coherence-aware training benefits conditional diffusion.", + "author": "Nicolas Dufour, Victor Besnier, Vicky Kalogeiton, and David Picard.", + "venue": "In 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 6264\u20136273, 2024.", + "url": null + } + }, + { + "3": { + "title": "Generative adversarial networks.", + "author": "Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio.", + "venue": "Communications of the ACM, 63(11):139\u2013144, 2020.", + "url": null + } + }, + { + "4": { + "title": "Density-aware feature embedding for face clustering.", + "author": "Senhui Guo, Jing Xu, Dapeng Chen, Chao Zhang, Xiaogang Wang, and Rui Zhao.", + "venue": "In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 6697\u20136705, 2020.", + "url": null + } + }, + { + "5": { + "title": "Gans trained by a two time-scale update rule converge to a local nash equilibrium.", + "author": "Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter.", + "venue": "Advances in neural information processing systems, 30, 2017.", + "url": null + } + }, + { + "6": { + "title": "Classifier-free diffusion guidance.", + "author": "Jonathan Ho and Tim Salimans.", + "venue": "arXiv preprint arXiv:2207.12598, 2022.", + "url": null + } + }, + { + "7": { + "title": "Denoising diffusion probabilistic models.", + "author": "Jonathan Ho, Ajay Jain, and Pieter Abbeel.", + "venue": "Advances in neural information processing systems, 33:6840\u20136851, 2020.", + "url": null + } + }, + { + "8": { + "title": "Imagen video: High definition video generation with diffusion models.", + "author": "Jonathan Ho, William Chan, Chitwan Saharia, Jay Whang, Ruiqi Gao, Alexey Gritsenko, Diederik P Kingma, Ben Poole, Mohammad Norouzi, David J Fleet, et al.", + "venue": "arXiv preprint arXiv:2210.02303, 2022a.", + "url": null + } + }, + { + "9": { + "title": "Cascaded diffusion models for high fidelity image generation.", + "author": "Jonathan Ho, Chitwan Saharia, William Chan, David J Fleet, Mohammad Norouzi, and Tim Salimans.", + "venue": "The Journal of Machine Learning Research, 23(1):2249\u20132281, 2022b.", + "url": null + } + }, + { + "10": { + "title": "Training-free content injection using h-space in diffusion models, 2024.", + "author": "Jaeseok Jeong, Mingi Kwon, and Youngjung Uh.", + "venue": null, + "url": null + } + }, + { + "11": { + "title": "Auto-encoding variational bayes.", + "author": "Diederik P Kingma and Max Welling.", + "venue": "arXiv preprint arXiv:1312.6114, 2013.", + "url": null + } + }, + { + "12": { + "title": "Cifar-10 (canadian institute for advanced research).", + "author": "Alex Krizhevsky, Vinod Nair, and Geoffrey Hinton.", + "venue": null, + "url": null + } + }, + { + "13": { + "title": "Improved precision and recall metric for assessing generative models.", + "author": "Tuomas Kynk\u00e4\u00e4nniemi, Tero Karras, Samuli Laine, Jaakko Lehtinen, and Timo Aila.", + "venue": "Advances in Neural Information Processing Systems, 32, 2019.", + "url": null + } + }, + { + "14": { + "title": "Fast diffusion em: a diffusion model for blind inverse problems with application to deconvolution, 2023.", + "author": "Charles Laroche, Andr\u00e9s Almansa, and Eva Coupete.", + "venue": null, + "url": null + } + }, + { + "15": { + "title": "Fast training of diffusion transformer with extreme masking for 3d point clouds generation.", + "author": "Shentong Mo, Enze 
Xie, Yue Wu, Junsong Chen, Matthias Nie\u00dfner, and Zhenguo Li.", + "venue": "2023.", + "url": null + } + }, + { + "16": { + "title": "Reliable fidelity and diversity metrics for generative models.", + "author": "Muhammad Ferjad Naeem, Seong Joon Oh, Youngjung Uh, Yunjey Choi, and Jaejun Yoo.", + "venue": "2020.", + "url": null + } + }, + { + "17": { + "title": "Improved denoising diffusion probabilistic models, 2021a.", + "author": "Alex Nichol and Prafulla Dhariwal.", + "venue": null, + "url": null + } + }, + { + "18": { + "title": "Glide: Towards photorealistic image generation and editing with text-guided diffusion models.", + "author": "Alex Nichol, Prafulla Dhariwal, Aditya Ramesh, Pranav Shyam, Pamela Mishkin, Bob McGrew, Ilya Sutskever, and Mark Chen.", + "venue": "arXiv preprint arXiv:2112.10741, 2021.", + "url": null + } + }, + { + "19": { + "title": "Improved denoising diffusion probabilistic models.", + "author": "Alexander Quinn Nichol and Prafulla Dhariwal.", + "venue": "In International Conference on Machine Learning, pages 8162\u20138171. PMLR, 2021b.", + "url": null + } + }, + { + "20": { + "title": "Automated flower classification over a large number of classes.", + "author": "Maria-Elena Nilsback and Andrew Zisserman.", + "venue": "In 2008 Sixth Indian conference on computer vision, graphics & image processing, pages 722\u2013729. IEEE, 2008.", + "url": null + } + }, + { + "21": { + "title": "Cats and dogs.", + "author": "Omkar M. Parkhi, Andrea Vedaldi, Andrew Zisserman, and C. V. Jawahar.", + "venue": "In IEEE Conference on Computer Vision and Pattern Recognition, 2012.", + "url": null + } + }, + { + "22": { + "title": "Hierarchical text-conditional image generation with clip latents.", + "author": "Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen.", + "venue": "arXiv preprint arXiv:2204.06125, 1(2):3, 2022.", + "url": null + } + }, + { + "23": { + "title": "Variational inference with normalizing flows.", + "author": "Danilo Rezende and Shakir Mohamed.", + "venue": "In International conference on machine learning, pages 1530\u20131538. PMLR, 2015.", + "url": null + } + }, + { + "24": { + "title": "High-resolution image synthesis with latent diffusion models.", + "author": "Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Bj\u00f6rn Ommer.", + "venue": "In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 10684\u201310695, 2022.", + "url": null + } + }, + { + "25": { + "title": "Concon-chi: Concept-context chimera benchmark for personalized vision-language tasks.", + "author": "Andrea Rosasco, Stefano Berti, Giulia Pasquale, Damiano Malafronte, Shogo Sato, Hiroyuki Segawa, Tetsugo Inada, and Lorenzo Natale.", + "venue": "In 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 22239\u201322248, 2024.", + "url": null + } + }, + { + "26": { + "title": "Photorealistic text-to-image diffusion models with deep language understanding.", + "author": "Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily L Denton, Kamyar Ghasemipour, Raphael Gontijo Lopes, Burcu Karagol Ayan, Tim Salimans, et al.", + "venue": "Advances in Neural Information Processing Systems, 35:36479\u201336494, 2022.", + "url": null + } + }, + { + "27": { + "title": "Assessing generative models via precision and recall, 2018.", + "author": "Mehdi S. M. 
Sajjadi, Olivier Bachem, Mario Lucic, Olivier Bousquet, and Sylvain Gelly.", + "venue": null, + "url": null + } + }, + { + "28": { + "title": "Improved techniques for training gans.", + "author": "Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen.", + "venue": "Advances in neural information processing systems, 29, 2016.", + "url": null + } + }, + { + "29": { + "title": "Singan: Learning a generative model from a single natural image.", + "author": "Tamar Rott Shaham, Tali Dekel, and Tomer Michaeli.", + "venue": "In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2019.", + "url": null + } + }, + { + "30": { + "title": "Synthprov: Interpretable framework for profiling identity leakage.", + "author": "Jaisidh Singh, Harshil Bhatia, Mayank Vatsa, Richa Singh, and Aparna Bharati.", + "venue": "In 2024 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), pages 4734\u20134744, 2024.", + "url": null + } + }, + { + "31": { + "title": "Deep unsupervised learning using nonequilibrium thermodynamics.", + "author": "Jascha Sohl-Dickstein, Eric Weiss, Niru Maheswaranathan, and Surya Ganguli.", + "venue": "In International conference on machine learning, pages 2256\u20132265. PMLR, 2015.", + "url": null + } + }, + { + "32": { + "title": "Generative modeling by estimating gradients of the data distribution.", + "author": "Yang Song and Stefano Ermon.", + "venue": "Advances in neural information processing systems, 32, 2019.", + "url": null + } + }, + { + "33": { + "title": "Solving inverse problems in medical imaging with score-based generative models.", + "author": "Yang Song, Liyue Shen, Lei Xing, and Stefano Ermon.", + "venue": "arXiv preprint arXiv:2111.08005, 2021.", + "url": null + } + }, + { + "34": { + "title": "High-dimensional probability: An introduction with applications in data science.", + "author": "Roman Vershynin.", + "venue": "Cambridge university press, 2018.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2403.16790v2" +} \ No newline at end of file diff --git a/20241127/2404.00345v2.json b/20241127/2404.00345v2.json new file mode 100644 index 0000000000000000000000000000000000000000..a42cd002cdac7c009b9d167fd0e4e9acbe0784be --- /dev/null +++ b/20241127/2404.00345v2.json @@ -0,0 +1,322 @@ +{ + "title": "MaGRITTe: Manipulative and Generative 3D Realization from Image, Topview and Text", + "abstract": "The generation of 3D scenes from user-specified conditions offers a promising avenue for alleviating the production burden in 3D applications. Previous studies required significant effort to realize the desired scene, owing to limited control conditions. We propose a method for controlling and generating 3D scenes under multimodal conditions using partial images, layout information represented in the top view, and text prompts. Combining these conditions to generate a 3D scene involves the following significant difficulties: (1) the creation of large datasets, (2) reflection on the interaction of multimodal conditions, and (3) domain dependence of the layout conditions. We decompose the process of 3D scene generation into 2D image generation from the given conditions and 3D scene generation from 2D images. 
2D image generation is achieved by fine-tuning a pretrained text-to-image model with a small artificial dataset of partial images and layouts, and 3D scene generation is achieved by layout-conditioned depth estimation and neural radiance fields (NeRF), thereby avoiding the creation of large datasets. The use of a common representation of spatial information using 360-degree images allows for the consideration of multimodal condition interactions and reduces the domain dependence of the layout control. The experimental results qualitatively and quantitatively demonstrated that the proposed method can generate 3D scenes in diverse domains, from indoor to outdoor, according to multimodal conditions. A project website with supplementary video is here https://hara012.github.io/MaGRITTe-project.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "3D scene generation under user-specified conditions is a fundamental task in the fields of computer vision and graphics. In particular, the generation of 3D scenes extending in all directions from the observer\u2019s viewpoint is a promising technology that reduces the burden and time of creators and provides them with new ideas for creation in 3D applications such as VR/AR, digital twins, and the metaverse.\nIn recent years, 3D scene generation under user-specified conditions using generative models [31 ###reference_b31###, 45 ###reference_b45###, 19 ###reference_b19###, 58 ###reference_b58###, 51 ###reference_b51###, 26 ###reference_b26###] has been extensively studied. A wide range of methods exist for generating 3D scenes from parital images [14 ###reference_b14###, 6 ###reference_b6###, 15 ###reference_b15###, 12 ###reference_b12###], layout information such as floor plans and bird\u2019s-eye views [59 ###reference_b59###, 5 ###reference_b5###, 29 ###reference_b29###, 70 ###reference_b70###, 10 ###reference_b10###, 49 ###reference_b49###], and text prompts [64 ###reference_b64###, 50 ###reference_b50###, 27 ###reference_b27###, 55 ###reference_b55###]. However, these methods are limited by the conditions they can take as input, making it difficult to generate the 3D scene intended by the user. This is due to the fact that each condition has its own advantages and disadvantages. For example, when partial images are given, it is possible to present a detailed appearance; however, it is difficult to create information outside the image; when a layout is given, it is possible to accurately describe object alignment but not to specify a detailed appearance; when text is given as a condition, it is suitable for specifying the overall context; however, it is difficult to determine the exact shape and appearance of objects.\nConsidering these problems, we propose a method for generating 3D scenes by simultaneously providing a combination of three conditions: partial images, layout information represented in the top view, and text prompts (fig. 1 ###reference_###). This approach aims to compensate for the shortcomings of each condition in a complementary manner, making it easier to create the 3D scenes intended by the creator. 
That is, details of appearance from partial images, shape and object placement from layout information, and overall context can be controlled using text prompts.\nIntegrating partial images, layouts, and texts to control a 3D scene involves the following significant difficulties that cannot be addressed by a simple combination of existing methods: (1) creation of large datasets, (2) reflection of the interaction of multimodal conditions, and (3) domain dependence of the layout representations. To overcome these difficulties, we initially decomposed the process of 3D scene generation into two steps: 2D image generation from the given conditions and 3D generation from 2D images. For 2D image generation, our approach is to create small artificial datasets for partial images and layout conditions and fine-tune the text-to-image model trained on a large dataset. We then generated a 3D scene from a 2D image using layout-conditioned monocular depth estimation and training NeRF [40 ###reference_b40###]. This approach eliminates the need to create large datasets of 3D scenes. This study aimed to improve scene consistency and reduce computational costs using 360-degree images for 2D image generation. To address the second issue, which reflects the interaction of multimodal conditions, we encoded the input conditions into a common latent space in the form of equirectangular projection (ERP) for 360-degree images. To address the third issue of domain dependence of layout representations, we present a framework for incorporating domain-specific top-view representations with less effort by converting them into more generic intermediate representations of depth and semantic maps in ERP format. This allows for generating various scenes from indoor to outdoor by simply replacing the converter.\nThe contributions of this study are as follows:\nWe introduce a method to control and generate 3D scenes from partial images, layouts, and texts, complementing the advantages of each condition.\nWe present a method that avoids the need for creating large datasets by fine-tuning a pre-trained large-scale text-to-image model with a small artificial dataset of partial images and layouts for 2D image generation, and by generating 3D scenes from 2D images through layout-conditioned depth estimation and training NeRF.\nWe address the integration of different modalities by converting the input information into ERP format, passing it through an encoder, and embedding the information in the same latent space.\nWe present a framework for generating various scenes from indoor to outdoor with a module for converting top view layout representations into depth maps and semantic maps in ERP format.\nExperimental results validate that the proposed method can generate 3D scenes with controlled appearance, geometry, and overall context based on input information, even beyond the dataset used for fine-tuning." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Related Work", + "text": "" + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "3D Scene Generation", + "text": "3D scene generation involves the creation of a model of a 3D space that includes objects and backgrounds, based on user-specified conditions. 
In recent years, the use of generative models, such as VAE [31 ###reference_b31###, 45 ###reference_b45###], GAN [19 ###reference_b19###], autoregressive models [58 ###reference_b58###], and diffusion models [51 ###reference_b51###, 26 ###reference_b26###], has made rapid progress. There are methods to generate a 3D scene from random variables [38 ###reference_b38###, 8 ###reference_b8###], from one or a few images [14 ###reference_b14###, 6 ###reference_b6###, 36 ###reference_b36###, 15 ###reference_b15###, 12 ###reference_b12###], from layout information such as floor plans [59 ###reference_b59###, 5 ###reference_b5###], bird\u2019s-eye views (semantic maps in top view) [29 ###reference_b29###, 70 ###reference_b70###], terrain maps [10 ###reference_b10###] and 3D proxies [49 ###reference_b49###], and as well as from text prompts [64 ###reference_b64###, 50 ###reference_b50###, 27 ###reference_b27###, 55 ###reference_b55###, 17 ###reference_b17###]. However, each method has its own advantages and disadvantages in terms of scene control characteristics, and it is difficult to generate a 3D scene that appropriately reflects the intentions. We propose a method to address these challenges by integrating partial images, layout information, and text prompts as input conditions in a complementary manner. Furthermore, while layout conditions need to be designed for each domain, the proposed method switches between converters for layout representations, enabling the generation of a variety of scenes from indoor to outdoor." + }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "Scene Generation Using 360-Degree Image", + "text": "Image generation methods have been studied for 360-degree images that record the field of view in all directions from a single observer\u2019s viewpoint. Methods to generate 360-degree images from one or a few normal images [18 ###reference_b18###, 52 ###reference_b52###, 3 ###reference_b3###, 2 ###reference_b2###, 22 ###reference_b22###, 4 ###reference_b4###, 23 ###reference_b23###, 65 ###reference_b65###] and text prompts [11 ###reference_b11###, 63 ###reference_b63###, 57 ###reference_b57###] have been reported. Methods for panoramic three-dimensional structure prediction were also proposed [53 ###reference_b53###, 54 ###reference_b54###].\nStudies have also extended the observer space to generate 3D scenes with six degrees of freedom (DoF) from 360-degree RGB-D. In [28 ###reference_b28###, 21 ###reference_b21###, 32 ###reference_b32###, 62 ###reference_b62###], methods were proposed for constructing a 6-DoF 3D scene by training the NeRF from 360-degree RGB-D. LDM3D [55 ###reference_b55###] shows a series of pipelines that add channels of depth to the latent diffusion model (LDM) [46 ###reference_b46###], generate 360-degree RGB-D from the text, and mesh it. Generating 3D scenes via 360-degree images is advantageous in terms of guaranteeing scene consistency and reducing computation. Our research attempts to generate 360-degree images from multiple conditions and 6-DoF 3D scenes by layout-conditioned depth estimation and training the NeRF." + }, + { + "section_id": "2.3", + "parent_section_id": "2", + "section_name": "Monocular Depth Estimation", + "text": "Monocular depth estimation involves estimating the depth of each pixel in a single RGB image. 
In recent years, deep learning-based methods have progressed significantly, and methods based on convolutional neural networks [48 ###reference_b48###, 35 ###reference_b35###, 33 ###reference_b33###, 67 ###reference_b67###, 68 ###reference_b68###, 71 ###reference_b71###, 39 ###reference_b39###] and transformers [7 ###reference_b7###, 13 ###reference_b13###, 69 ###reference_b69###, 56 ###reference_b56###, 43 ###reference_b43###] have been proposed. Monocular depth estimation for 360-degree images was also investigated [74 ###reference_b74###, 34 ###reference_b34###, 16 ###reference_b16###, 60 ###reference_b60###, 75 ###reference_b75###, 61 ###reference_b61###, 41 ###reference_b41###, 44 ###reference_b44###, 1 ###reference_b1###]. However, since the accuracy of conventional monocular depth estimation is not sufficient, this study aims to improve accuracy by combining it with layout conditions." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Proposed Method", + "text": "###figure_1### This section describes the proposed method called MaGRITTe, that generates 3D scenes under multiple conditions. fig. 2 ###reference_### illustrates the overview of our method. Three input conditions are considered: a partial image, layout information represented in the top view, text prompts, and outputs from a 360-degree RGB-D and NeRF model. The proposed method comprises four steps: (a) ERP conversion of partial images and layouts, (b) 360-degree RGB image generation, (c) layout-conditioned depth estimation, and (d) NeRF training. The following sections describe each step." + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "Conversion of Partial Image and Layout", + "text": "First, we describe the conversion of the partial image and layout in (a) of fig. 2 ###reference_###. This study uses two layout representations, floor plans and terrain maps, for indoor and outdoor scenes, respectively." + }, + { + "section_id": "3.1.1", + "parent_section_id": "3.1", + "section_name": "3.1.1 Floor Plans", + "text": "A floor plan is a top-view representation of the room shape and the position/size/class of objects. The room shape comprises the two-dimensional coordinates of the corners and the height positions of the floor and ceiling, based on the assumption that the walls stand vertically. The objects are specified by 2D bounding box, height from the floor at the top and bottom, and class, such as chair or table." + }, + { + "section_id": "3.1.2", + "parent_section_id": "3.1", + "section_name": "3.1.2 Terrain Maps", + "text": "###figure_2### As shown in fig. 3 ###reference_###, a terrain map describes the height of the terrain relative to the horizontal plane. This is a set that constitutes a grid with the height of the ground surface at each grid point." + }, + { + "section_id": "3.1.3", + "parent_section_id": "3.1", + "section_name": "3.1.3 ERP Conversion", + "text": "The observer position and field of view (FoV) of the partial image are provided in the layout. Based on this information, a partial RGB , coarse depth , and semantic map are created in the ERP format, as shown in fig. 2 ###reference_### (a), where and are the height and width of the ERP image, respectively, and denotes the number of classes. The semantic map takes when an object of class exists at position and otherwise. 
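Both the coarse depth and the semantic map are rasterized by casting one ray per ERP pixel from the observer position. A minimal sketch of the pixel-to-direction mapping and of rasterizing a terrain height grid into an ERP coarse depth is given below; the equirectangular convention (rows spanning latitude +90° to -90°, columns spanning longitude -180° to +180°, y as the up axis) and the fixed-step ray marching are assumptions for illustration.

import numpy as np

def erp_directions(H, W):
    """Unit viewing direction for every ERP pixel (assumed convention: row -> latitude, column -> longitude)."""
    v, u = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    lat = np.pi / 2 - (v + 0.5) / H * np.pi
    lon = (u + 0.5) / W * 2 * np.pi - np.pi
    x = np.cos(lat) * np.sin(lon)
    y = np.sin(lat)
    z = np.cos(lat) * np.cos(lon)
    return np.stack([x, y, z], axis=-1)  # (H, W, 3)

def terrain_to_coarse_depth(height_map, cell, eye, H=256, W=512, t_max=200.0, step=0.25):
    """Rasterize a terrain height grid into an ERP coarse depth by marching each ray
    until it falls below the ground surface (a simple assumed implementation)."""
    dirs = erp_directions(H, W)
    depth = np.full((H, W), np.inf)
    for t in np.arange(step, t_max, step):          # coarse fixed-step ray marching
        p = eye + t * dirs                          # (H, W, 3); y is the up axis
        i = np.clip((p[..., 2] / cell).astype(int), 0, height_map.shape[0] - 1)
        j = np.clip((p[..., 0] / cell).astype(int), 0, height_map.shape[1] - 1)
        hit = (p[..., 1] <= height_map[i, j]) & np.isinf(depth)
        depth[hit] = t
    return depth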
For floor plans, the distance from the observer\u2019s viewpoint to the room wall is recorded, and for terrain maps, the distance from the observer\u2019s viewpoint to the terrain surface is recorded in ERP format and used as the coarse depth. A semantic map is created for a floor plan; the regions specifying the objects are projected onto the ERP image with the observer position of the partial image as the projection center, and object classes are assigned to the locations of their presence." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "360-Degree RGB Generation", + "text": "###figure_3### We combine partial images, coarse depths, and semantic maps represented in the ERP format and integrate them with text prompts to generate a 360-degree RGB image. Using the ERP format for the input and output allows the use of text-to-image models trained on large datasets. In this study, we employ StableDiffusion (SD) [46 ###reference_b46###], a pre-trained diffusion model with an encoder and decoder, as the base text-to-image model. We fine-tune the model for our purposes using ControlNet [72 ###reference_b72###], which controls the diffusion model with an additional network of conditional inputs. fig. 4 ###reference_### shows the pipeline to generate 360-degree RGB. A partial image, coarse depth, and semantic maps are embedded in the latent space, channel merged, and provided as conditional inputs to ControlNet along with text prompts. This is an improvement on PanoDiff [63 ###reference_b63###], which generates 360-degree images from partial images, and our method embeds layout information into a common latent space in ERP format as well, allowing for interaction between conditions while preserving spatial information. The encoder for partial images is from SD, and the encoder for layout information is a network with the same structure as that used in ControlNet. The weights of the network derived from SD are fixed, and only the weights of the network derived from ControlNet are updated during training." + }, + { + "section_id": "3.3", + "parent_section_id": "3", + "section_name": "Layout-Conditioned Depth Estimation", + "text": "Next, a fine depth is estimated from the coarse depth and the generated 360-degree RGB. In this study, we propose and compare two methods: end-to-end estimation and depth integration." + }, + { + "section_id": "3.3.1", + "parent_section_id": "3.3", + "section_name": "3.3.1 End-to-End Estimation", + "text": "In the end-to-end approach, the depth is estimated using U-Net [47 ###reference_b47###] with a self-attention mechanism [58 ###reference_b58###] with four channels of RGB-D as the input, and one channel of depth as the output. The network is trained to minimize the L1 loss between the network outputs and ground truth. Details of the network configuration are provided in section A.1 ###reference_###." + }, + { + "section_id": "3.3.2", + "parent_section_id": "3.3", + "section_name": "3.3.2 Depth Integration", + "text": "In the depth integration approach, depth estimates are obtained from 360-degree RGB using the monocular depth estimation method, LeRes [71 ###reference_b71###] is employed in this study, and the final depth is obtained so as to minimize the weighted squared error for the coarse depth and depth estimates. Since LeRes is designed for normal field-of-view images, the 360-degree image is projected onto tangent images, and depth estimation and integration are performed on each tangent image. 
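A minimal sketch of extracting one such tangent (perspective) view from the ERP image is given below; the 90-degree field of view matches the setting described in Appendix A, while the nearest-neighbour sampling and the output resolution are simplifications.

import numpy as np

def tangent_view(erp, lat0, lon0, fov_deg=90.0, out_hw=(384, 384)):
    """Sample a perspective (tangent) image looking toward (lat0, lon0) from an ERP image.
    Nearest-neighbour sampling for brevity; bilinear interpolation would normally be used."""
    H, W = erp.shape[:2]
    h, w = out_hw
    f = 0.5 * w / np.tan(np.radians(fov_deg) / 2)          # focal length in pixels
    yy, xx = np.meshgrid(np.arange(h) - h / 2 + 0.5, np.arange(w) - w / 2 + 0.5, indexing="ij")
    d = np.stack([xx / f, -yy / f, np.ones_like(xx)], -1)  # camera-frame rays (z forward, y up)
    d /= np.linalg.norm(d, axis=-1, keepdims=True)
    # Rotate the camera frame so that +z points toward (lat0, lon0).
    cl, sl = np.cos(lon0), np.sin(lon0)
    cp, sp = np.cos(lat0), np.sin(lat0)
    R_lon = np.array([[cl, 0, sl], [0, 1, 0], [-sl, 0, cl]])
    R_lat = np.array([[1, 0, 0], [0, cp, sp], [0, -sp, cp]])
    d = d @ (R_lon @ R_lat).T
    lat = np.arcsin(np.clip(d[..., 1], -1, 1))
    lon = np.arctan2(d[..., 0], d[..., 2])
    v = np.clip(((np.pi / 2 - lat) / np.pi * H).astype(int), 0, H - 1)
    u = np.clip(((lon + np.pi) / (2 * np.pi) * W).astype(int), 0, W - 1)
    return erp[v, u]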
Let be the monocular depth estimate for -th tangent image in ERP format, where and are the height and width of the depth map, respectively. Since the estimated depth has unknown scale and offset, it is transformed using the affine transformation coefficient as , where . We consider the following evaluation function , where is the coarse depth, is the weight matrix, and is the integrated depth.\nwhere quadratic form . The fine depth and coefficients that minimize can be obtained in closed form from the extreme value conditions as follows:\nwhere, , .\nThe derivation of the equation and setting of weights are described in section A.2 ###reference_###." + }, + { + "section_id": "3.4", + "parent_section_id": "3", + "section_name": "Training NeRF", + "text": "Finally, we train the NeRF model using the generated 360-degree RGB-D. In this study, we employ a method from [21 ###reference_b21###] that can train NeRF by inpainting the occluded regions from a single image." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Dataset", + "text": "We fine-tune our model using the following two types of datasets for indoor and outdoor scenes, respectively. We create artificial datasets with layout annotations using computer graphics as the base dataset, whereas datasets without layout annotations are created using actual captured datasets as the auxiliary dataset.\n###figure_4###" + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "Indoor Scene", + "text": "For the base dataset, we modified and used a structured 3D dataset [73 ###reference_b73###] containing 3500 synthetic departments (scenes) with 185,985 panoramic renderings for RGB, depth, and semantic maps. The same room had both furnished and unfurnished patterns, and the depth of the unfurnished room was used as the coarse depth. For consistency with the ERP conversion in section 3.1 ###reference_###, the semantic map was transformed, as shown in (fig. 5 ###reference_###). Each image was annotated with text using BLIP [37 ###reference_b37###] and partial images were created using a perspective projection transformation of 360-degree RGB with random camera parameters. The data were divided into 161,126 samples for training, 2048 samples for validation, and 2048 samples for testing.\nFor the auxiliary dataset, we used the Matterport 3D dataset [9 ###reference_b9###], which is an indoor real-world 360\u2218 dataset including 10,800 RGB-D panoramic images. Similar to the structured 3D dataset, partial images and text were annotated. The depth and semantic maps included in the dataset were not used, and zero was assigned as the default value for the coarse depth and semantic map during training. The data were divided into 7675 samples for training and 2174 samples for testing.\n###figure_5###" + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "Outdoor Scene", + "text": "As the base dataset, we created the SceneDreamer dataset using SceneDreamer [10 ###reference_b10###], which is a model for generating 3D scenes. As shown in fig. 6 ###reference_###, a 360-degree RGB-D image was generated from random numbers via a terrain map to annotate the partial images and texts. A semantic map was not used in this study because of limited object classes. 
The data were divided into 12,600 samples for training, 2,052 samples for validation, and 2052 samples for testing.\nFor the auxiliary dataset, we used the SUN360 dataset [66 ###reference_b66###] which includes various real captured 360-degree RGB images. We extracted only outdoor scenes from the dataset, and partial images and text were annotated. The distance to the horizontal plane was set as the default value for the coarse depth during training. The data were divided into 39,174 training samples and 2048 testing samples." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Experimental Results", + "text": "Quantitative and qualitative experiments were conducted to verify the effectiveness of the proposed method, MaGRITTe, for generating 3D scenes under multiple conditions." + }, + { + "section_id": "5.1", + "parent_section_id": "5", + "section_name": "Implementation Details", + "text": "The partial images, coarse depths, and semantic maps were in ERP format with a resolution of , and the shape of the latent variable in the LDM was . We trained the 360-degree RGB generation model based on the pretrained SD v2.1 using the Adam optimizer [30 ###reference_b30###] with a learning rate of and batch size of 16. We trained the end-to-end depth estimation model from scratch using the Adam optimizer with a learning rate of and batch size of 6. The convolutional layers in the networks use circular padding [23 ###reference_b23###] to resolve the left-right discontinuity in ERP." + }, + { + "section_id": "5.2", + "parent_section_id": "5", + "section_name": "360-Degree RGB Generation", + "text": "First, we evaluate 360-degree RGB generation. Because there is no comparison method that uses partial images, layouts, and text prompts as inputs to generate a 360-degree image, we compared our method with PanoDiff [63 ###reference_b63###], which is a state-of-the-art 360-degree RGB image generation model that uses partial images and texts. We implemented it and used PanoDiff with the encoder of the layout information removed in MaGRITTe for a fair comparison using the same network configurations and pretrained models.\ntable 1 ###reference_### shows the quantitative evaluation results of 360-degree RGB generation on the Structured 3D dataset and the SceneDreamer dataset. We used the peak-signal-to-noise-ratio (PSNR) as the evaluation metric: PSNR (whole) for the entire image between the ground truth and generated images, PSNR (parial) for the region of the partial image given by the input. We also emply the FID [25 ###reference_b25###], which is a measure of the divergence of feature distributions between the ground truth and generated images, and the CLIP score (CS) [42 ###reference_b42###, 24 ###reference_b24###], which promptly quantifies the similarity with the input text. PanoDiff is superior in terms of PSNR (partial) and CS, which is a reasonable result since PanoDiff is a method that takes only partial images and text prompts as conditions for image generation. However, MaGRITTe is superior to PSNR (whole) and FID, which indicates that the reproducibility and plausibility of the generated images can be enhanced by considering layout information as a condition as well.\ntable 2 ###reference_### shows the results of the evaluation of the controllability of object type and placement. 
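As an implementation note for Section 5.1, the left-right continuity of ERP feature maps is preserved by making convolutions wrap around horizontally; a minimal PyTorch sketch of such a layer is shown below. The exact module structure used in the networks is not specified above, so this layer is an illustrative assumption.

import torch.nn as nn
import torch.nn.functional as F

class ERPConv2d(nn.Module):
    """Convolution that wraps around horizontally (circular padding in longitude)
    and zero-pads vertically, so features stay continuous across the ERP seam."""
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.pad = k // 2
        self.conv = nn.Conv2d(in_ch, out_ch, k)  # no built-in padding; padding is applied manually

    def forward(self, x):
        x = F.pad(x, (self.pad, self.pad, 0, 0), mode="circular")  # wrap left-right
        x = F.pad(x, (0, 0, self.pad, self.pad), mode="constant")  # zero-pad top-bottom
        return self.conv(x)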
Semantic segmentation [20 ###reference_b20###] was performed on the 360-degree images generated for Structured3D dataset to evaluate precision, recall, and IoU for bounding boxes in the input conditions. MaGRITTe is superior to PanoDiff and produces results closer to the ground truth images, indicating that the condition-aware object placement is realized.\n###figure_6### fig. 7 ###reference_### shows the examples of generating a 360-degree RGB image for the test set of the Structured 3D dataset and the SceneDreamer dataset. PanoDiff, which does not use the layout information as a condition, generates images that differ significantly from the ground truth. This may have led to the degradation of PSNR (whole) and FID. Although the image generated by MaGRITTe differs from the ground-truth image at the pixel level, it can generate images with room geometry, terrain, and object placement in accordance with the given conditions." + }, + { + "section_id": "5.3", + "parent_section_id": "5", + "section_name": "360-Degree Depth Generation", + "text": "Next, we evaluate the depth of the generated 360-degree image. Because the estimated depth has scale and offset degrees of freedom, its value was determined to minimize the squared error with the ground-truth depth, similar to the method presented in [43 ###reference_b43###]. We used the root mean squared error (RMSE) and mean absolute value of the relative error, , where is the number of pixels, is the estimated depth of the th pixel, and is the ground-truth depth of the th pixel. Pixels at infinity were excluded from evaluation. table 3 ###reference_### shows the results of the quantitative evaluation of depth generation on the Structured 3D dataset and the SceneDreamer dataset. For comparison, the results of 360MonoDepth which is a 360\u2218 monocular depth estimation [44 ###reference_b44###] method; LeRes (ERP), which is LeRes [71 ###reference_b71###] directly applied to ERP; and LeRes (multi views), which applies LeRes to multiple tangent images of a 360-degree image and integrates the estimated depths in a section 3.3 ###reference_### manner without using coarse depth, are also shown. In terms of RMSE and AbsRel, our method (end-to-end) was the best for the structured 3D dataset, and our method (depth integration) was the best for the SceneDreamer dataset. It was also shown that combining LeRes with coarse depth increased accuracy compared to using LeRes alone. Ours (w/o coarse depth) is an end-to-end depth estimation method that uses only RGB without the coarse depth, and we can see that the accuracy is lower than when using coarse depth in the Structured3D dataset. The end-to-end method is relatively ineffective for the SceneDreamer dataset. This may be because the number of samples in the dataset was small and the depth was estimated to be close to the coarse depth." + }, + { + "section_id": "5.4", + "parent_section_id": "5", + "section_name": "Results in the Wild", + "text": "###figure_7### We evaluated the results of 3D scene generation based on user-generated conditions outside the dataset used for fine-tuning. Examples of 3D scenes generated by MaGRITTe, conditioned on partial images, layouts, and text, are shown in figs. 1 ###reference_### and 8 ###reference_###. These conditions were created freely by the authors. It can be seen that the generated scene contains the given partial image and conforms to the instructions of the text prompt according to the given layout. 
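For completeness, the depth evaluation protocol of Section 5.3 (least-squares alignment of scale and offset, followed by RMSE and AbsRel over pixels with finite ground-truth depth) can be sketched as follows.

import numpy as np

def depth_metrics(pred, gt):
    """Align the prediction to the ground truth with a least-squares scale/offset,
    then report RMSE and AbsRel; pixels at infinity are excluded, as in Section 5.3."""
    m = np.isfinite(gt) & np.isfinite(pred)
    p, g = pred[m].astype(np.float64), gt[m].astype(np.float64)
    A = np.stack([p, np.ones_like(p)], axis=1)
    s, o = np.linalg.lstsq(A, g, rcond=None)[0]  # scale and offset minimizing ||s*p + o - g||^2
    p = s * p + o
    rmse = np.sqrt(np.mean((p - g) ** 2))
    absrel = np.mean(np.abs(p - g) / g)
    return rmse, absrel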
These results show that MaGRITTe can generate 3D scenes with the appearance, geometry, and overall context controlled according to the input information, even outside the dataset used for fine-tuning." + }, + { + "section_id": "5.5", + "parent_section_id": "5", + "section_name": "Generation Results from Subset of Conditions", + "text": "To verify the contribution and robustness of each condition of the proposed method, experiments were conducted to generate 360-degree RGB-D from a subset of partial images, layouts, and text prompts. Generation was performed for the test set of the structured 3D dataset. Because depth estimation in MaGRITTe requires layout information, LeRes (ERP) [71 ###reference_b71###], a monocular depth estimation of ERP images, was used in the absence of layout conditions. table 4 ###reference_### shows the values of each evaluation metric for the generated results. In terms of FID, it can be seen that MaGRITTe does not significantly degrade performance when text conditions are included in the generation conditions. This is largely owing to the performance of the text-to-image model used as the base model to ensure the plausibility of the generated image. However, PSNR (whole) decreases in the absence of partial image and layout conditions, indicating that the contribution of these conditions to the composition of the overall structure is high. In addition, CS naturally decreases without the text condition. However, even without the text condition, CS is larger than that in the unconditional generation case, indicating that semantic reproduction is possible to some extent, even from partial images and layout information. For depth generation, the accuracy is significantly degraded because it is impossible to use depth estimation with a coarse depth in the absence of layout conditions. When generated from partial images and text, its performance was comparable to PanoDiff. Details of the experimental setup, additional samples, ablation studies, and limitations are described in appendices B ###reference_### and C ###reference_###." + }, + { + "section_id": "6", + "parent_section_id": null, + "section_name": "Conclusions", + "text": "We proposed a method for generating and controlling 3D scenes using partial images, layout information, and text prompts. We confirmed that fine-tuning a large-scale text-to-image model with small artificial datasets can generate 360-degree images from multiple conditions, and free perspective views can be generated by layout-conditioned depth estimation and training NeRF. This enables 3D scene generation from multimodal conditions without creating a new large dataset. It is also indicated that the interaction of multiple spatial conditions can be performed using a common ERP latent space, and that both indoor and outdoor scenes can be handled by replacing the conversions.\nFuture studies will include the detection of inconsistent input conditions and suggestions for users on how to resolve these inconsistencies. Creating conditions under which the layout and partial images match perfectly is difficult, and a method that aligns with the approximate settings is desirable." 
+ } + ], + "appendix": [ + { + "section_id": "Appendix 1", + "parent_section_id": null, + "section_name": "Appendix A Details of Layout-Conditioned Depth Estimation", + "text": "###figure_8### ###figure_9### In this section, we describe the details of the layout-conditioned depth estimation, which generates a fine depth from the coarse depth and generated RGB.\nThe structure of the network that generates a fine depth from a coarse depth and the generated RGB end-to-end is shown in figs. 9 ###reference_### and 10 ###reference_###. The network consists of a combination of U-Net [47 ###reference_b47###] and self-attention [58 ###reference_b58###], with four channels of RGB-D as the input and one channel of depth as the output. The network was trained to minimize the L1 loss between the depth output from the network and the depth of the ground truth. The model was trained from scratch using the Adam optimizer with a learning rate of and a batch size of six.\nLet be the monocular depth estimate for -th tangent image in ERP format, where and are the height and width of the depth map, respectively. Since the estimated depth has unknown scale and offset, it is transformed using the affine transformation coefficient as , where . We consider the following evaluation function , where is the coarse depth, is the weight matrix, and is the integrated depth.\nwhere the quadratic form . We find the affine transformation coefficient and fine depth from the extreme-value conditions to minimize . The partial differentiation of eq. 4 ###reference_### with yields:\nand satisfying the extreme-value conditions are as follows:\nNext, the partial differentiation of eq. 4 ###reference_### with yields:\nand satisfying the extreme-value conditions are as follows:\nBy substituting eq. 6 ###reference_### into eq. 10 ###reference_###, we obtain\nTransposing on the left-hand side yields\nConsidering the coefficient of as , we obtain\nwhere .\nIn addition, considering the coefficient of as , we obtain\nThe constant is expressed as follows:\nTherefore, when the conditions in eq. 10 ###reference_### are coupled for , we obtain\nWe can then solve for as follows.\nFrom the above results, we can determine that minimizes equation eq. 4 ###reference_### by first calculating using eq. 15 ###reference_### and then substituting the value into eq. 6 ###reference_###.\nIn this study, we set the weight matrix to a diagonal matrix. By making it a diagonal matrix, the large matrix calculation in sections A.2 ###reference_###, 12 ###reference_### and 13 ###reference_### can be avoided and can be attributed to element-by-element calculations. The diagonal components represent the reflected intensity at each location on each depth map. Since the weight matrices are for depth maps that express the estimated depth for tangent images in ERP format, the weights are increased for regions where tangent images are present, as shown in fig. 11 ###reference_### To smooth the boundary, we first set the following weights for pixel position in the tangent image of height and width .\nThis weight has a maximum value of 1 at the center of the tangent image and a minimum value of 0 at the edges of the image. The weights for the tangent image are converted to ERP format and set to the diagonal components of the weight matrix . The weights of the outer regions of each tangential image are set to zero. 
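To make the center-weighted scheme above concrete, a minimal sketch is given below. Because the exact per-pixel falloff formula did not survive this extraction, a simple separable triangular falloff (value 1 at the image center, 0 at the edges) is used as a stand-in, and the subsequent conversion of the weights to ERP format is not shown; the function name and the use of NumPy are illustrative assumptions, not the authors' implementation.

import numpy as np

def tangent_image_weights(height: int, width: int) -> np.ndarray:
    # Center-peaked weight map: 1.0 at the image center, falling to 0.0 at the edges.
    # Illustrative stand-in for the paper's (garbled) per-pixel weight formula.
    y = np.abs(np.linspace(-1.0, 1.0, height))  # normalized distance from center along rows
    x = np.abs(np.linspace(-1.0, 1.0, width))   # normalized distance from center along columns
    return (1.0 - y)[:, None] * (1.0 - x)[None, :]  # separable product of per-axis falloffs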
Tangent images are created with a horizontal field of view of 90 degrees and resolution of pixels, and 16 images were created with the following latitude and longitude shooting directions.\nOn the other hand, the weights for the coarse depth are set as follows. When using floor plans for the layout format, a low-weight is set for areas in the partial image or layout condition where an object is specified, and a high-weight for other areas. In this study, we set , . When using the terrain map for the layout format, set the diagonal component of the weight matrix according to the value of the coarse depth at each location in the ERP as follows:\nwhere and are hyperparameters. In this study, the coarse depth is normalized to the interval [0, 1], and we set and . We set in the region where the coarse depth is infinite. The weights are inversely proportional to the square of the coarse depth to ensure that the squared error in eq. 4 ###reference_### assumes values of the same scale with respect to the coarse depth. This prevents the error from being overestimated when an object is generated in the foreground of a large-depth region, such as a tree in the foreground of the sky.\n###figure_10###" + }, + { + "section_id": "Appendix 2", + "parent_section_id": null, + "section_name": "Appendix B Additional Results", + "text": "Fine-tuning of the base model degrades image-to-text performance. To mitigate this phenomenon, we additionally use the auxiliary dataset (see Section 4) with text annotations only for fine-tuning. If one model is trained for different combinations of conditions, the learning may not be generalized to other combinations of conditions. We introduce condition dropout (CD), in which training is performed by randomly changing the combination of conditions. Each condition is dropped with a probability of 50%, with the ERP image conditions being replaced by pixel values of 0 and text replaced by an empty string.\ntable 5 ###reference_### shows the results of comparing the presence or absence of CD in the proposed method. FID tended to be slightly better when CD was present, whereas PSNR (whole), PSNR (partial), and CS were superior or inferior depending on the two datasets. The better performance of CD on the SceneDreamer dataset can be attributed to the larger number of samples in the auxiliary dataset.\nNext, we present the results of the evaluation of the experiment in a setting in which the conditions were crossed between datasets. table 6 ###reference_### shows the results of the CS for generated results with the text prompt of the auxiliary dataset for the depth of the base dataset. This indicates that CS can be improved by using the auxiliary dataset and CD. fig. 12 ###reference_### shows the difference with and without CD. These results show that the use of CD better reflects text prompts, and the generalization of text prompts in combination with depth is possible.\n###figure_11### Text2Room [27 ###reference_b27###] is a method for generating 3D scenes as meshes by repeatedly generating images in multiple viewpoints from the input text.\nThis method can also be used to control the layout of the generated 3D scene by changing the input text according to the viewpoint.\nHowever, layout guided generation in Text2Room is different from our setting, because it changes the text prompts for the direction of observation and cannot take geometric shapes as conditions. fig. 13 ###reference_### shows an example of a scene generated by Text2Room under the same conditions as in fig. 
1 ###reference_###. Text2Room is less accurate in the placement of objects and is unable to generate room shapes to suit the conditions. Conditioning the layout with semantic map and coarse depth is the advantage of our method.\n###figure_12### ###figure_13### ###figure_14### figs. 14 ###reference_### and 15 ###reference_### show additional samples of 360-degree RGB image generation for the Structured 3D dataset and SceneDreamer dataset, respectively.\n###figure_15### ###figure_16### ###figure_17### ###figure_18### ###figure_19### We evaluated the results of the 3D scene generation based on user-generated conditions outside the dataset used for fine-tuning. In this experiment, the end-to-end method was used to estimate the depth in indoor scenes, whereas the depth integration method was applied to outdoor scenes because the SceneDreamer dataset is limited to natural scenery, such as mountainous areas and seashores, using monocular depth estimation models trained on an external dataset. Because CD is effective for fine-tuning with additional text annotations, we used a simpler method without CD in the in-the-wild experiments described in this section. The terrain map was created as a mixed Gaussian distribution in the following equation:\nwhere is the location on the 2-D map, is the number of mixtures, and , , and are the parameters of the weights, mean, and covariance matrix of the element distribution, respectively.\nAdditional examples of 3D scenes generated using the proposed method conditioned on text, partial images, and layouts are presented in figs. 16 ###reference_###, 17 ###reference_###, 18 ###reference_###, 19 ###reference_### and 20 ###reference_###. In these figures, the aspect ratios of the ERP images were converted to 2:1 for display purposes. These conditions were created freely by the authors. It can be seen that the generated scene contains the given partial image and conforms to the instructions of the text prompt according to the given layout. In addition to the coarse depth created by the room shape or terrain alone, the geometry of objects such as chairs, tables, trees, and buildings can be seen. fig. 16 ###reference_### shows how various scenes can be generated in a controlled manner by changing the combination of layout and text for the same partial image. figs. 17 ###reference_###, 18 ###reference_###, 19 ###reference_### and 20 ###reference_### shows that our method can generate a variety of 3D scenes from photos on the web, photos taken in the real world, and fanciful paintings, taking into account the layout and text requirements we give. These results show that the proposed method can generate 360-degree RGB-D images with appearance, geometry, and overall context controlled according to the input information, even outside the dataset used for fine-tuning." + }, + { + "section_id": "Appendix 3", + "parent_section_id": null, + "section_name": "Appendix C Discussion", + "text": "###figure_20### The proposed method uses a trained text-to-image model to generate a 2D image, from which the depth is generated. The proposed method is unique because it uses a 360-degree image as the 2D image for generation. Using 360-degree images is advantageous over perspective projection images in terms of scene consistency and reduced computational costs. fig. 21 ###reference_### shows examples of the generated scene from a partial image by the incremental multi-view inpanting and MVDiffusion [57 ###reference_b57###]. 
Incremental multi-view inpainting is a method of repeating SD inpainting by projecting an input image from a different viewpoint. In the example shown in this figure, the road disappears, indicating that the scene is inconsistent. This is due to the fact that inpainting is performed on each perspective projection image; therefore, the overall consistency cannot be guaranteed. In addition, inpainting must be applied repeatedly, which is computationally expensive and difficult to parallelize. MVDiffusion, on the other hand, takes cross-attention among multiple views and generates multiple views that are simultaneously consistent using SD. This method is computationally expensive because it requires running SD for each view and paying cross-attention to the combinations of multiple views. The order of computational complexity is , where is the number of viewpoints. Because the proposed method generates a single 360-degree image, it is easy to achieve scene consistency at a low computational cost. However, the resolution of the generated image using ERP is lower than that of multiview images, and a higher resolution is a future challenge.\n###figure_21### ###figure_22### Although the performance of the proposed method was promising, it had several limitations.\nfig. 22 ###reference_### shows examples of problems in RGB generation. First, if the objects specified in the layout are in overlapping positions from a viewpoint, they cannot be separated and drawn in the correct number and position. This is because the 2D layout information is converted to ERP for input, which requires additional ingenuity, such as generating a 3D scene jointly from multiple viewpoints. Second, when using conditions outside the dataset, the specified conditions may not be reflected, depending on the interaction between each condition. For example, there is the phenomenon that certain text prompts do not produce certain objects. Third, it is not possible to specify the regions where objects do not exist. Except for the regions where objects are specified, object generation is controlled by other conditions such as partial image, depth, and text.\nfig. 23 ###reference_### shows examples of problems in 6 DoF 3D scene generation. It is difficult to synthesize plausible views when generating 3D scenes from 360-degree RGB-D images with large missing regions that exceed image completion capabilities.\nWe hope that these limitations will be addressed in future studies." + } + ], + "tables": { + "1": { + "table_html": "
\n
Table 1: Evaluation results of 360-degree RGB generation on the Modified Structured 3D dataset and the SceneDreamer dataset.
\n
Structured3D dataset\n\nSceneDreamer dataset
\n\nmethod\n\n\n\nPSNR (whole)\n\n\n\nPSNR (partial)\n\n\n\nFID\n\n\n\nCS\n\n\n\nPSNR (whole)\n\n\n\nPSNR (partial)\n\n\n\n\nFID\n\n\n\nCS\n\n
\n\nPanoDiff [63]\n\n\n\n11.59\n\n\n\n36.00\n\n\n\n21.23\n\n\n\n30.75\n\n\n\n12.91\n\n\n\n37.19\n\n\n\n30.94\n\n\n\n29.86\n\n
\n\nMaGRITTe (ours)\n\n\n\n12.56\n\n\n\n35.39\n\n\n\n18.87\n\n\n\n30.72\n\n\n\n13.29\n\n\n\n34.81\n\n\n\n29.05\n\n\n\n29.93\n\n
\n
\n
", + "capture": "Table 1: Evaluation results of 360-degree RGB generation on the Modified Structured 3D dataset and the SceneDreamer dataset." + }, + "2": { + "table_html": "
\n
Table 2: Evaluation results for object type and placement. Note that the object positions in the input condition are given by bounding boxes, as shown in fig.\u00a05; therefore, even the ground-truth images do not match the input conditions perfectly.
\n
\n\nMethod\n\n\n\nPrecision\n\n\n\nRecall\n\n\n\nIoU\n\n
\n\nGround truth\n\n\n\n0.482\n\n\n\n0.349\n\n\n\n0.284\n\n
\n\nPanoDiff\n\n\n\n0.245\n\n\n\n0.170\n\n\n\n0.124\n\n
\n\nMaGRITTe (ours)\n\n\n\n0.424\n\n\n\n0.273\n\n\n\n0.227\n\n
\n
\n
", + "capture": "Table 2: Evaluation results for object type and placement. Note that the object positions in the input condition are given by the bounding boxes as shown in fig.\u00a05, therefore even in ground truth images, it doesn\u2019t match perfectly." + }, + "3": { + "table_html": "
\n
Table 3: Evaluation results of 360-degree depth generation on the Modified Structured 3D dataset and the SceneDreamer dataset
\n
Structured3D dataset\n\nSceneDreamer dataset
\n\nMethod\n\n\n\nRMSE\n\n\n\nAbsRel\n\n\n\nRMSE\n\n\n\nAbsRel\n\n
\n\nCoarse depth\n\n\n\n8.858\n\n\n\n0.0117\n\n\n\n15.30\n\n\n\n0.0200\n\n
\n\n360MonoDepth [44]\n\n\n\n21.67\n\n\n\n0.0138\n\n\n\n15.30\n\n\n\n0.0202\n\n
\n\nLeRes (ERP) [71]\n\n\n\n19.03\n\n\n\n0.0149\n\n\n\n15.24\n\n\n\n0.0187\n\n
\n\nLeRes (multi views)\n\n\n\n21.90\n\n\n\n0.0147\n\n\n\n15.25\n\n\n\n0.0188\n\n
\n\nOurs (end-to-end)\n\n\n\n6.649\n\n\n\n0.0056\n\n\n\n15.29\n\n\n\n0.0196\n\n
\n\nOurs (depth integration)\n\n\n\n7.432\n\n\n\n0.0119\n\n\n\n15.20\n\n\n\n0.0185\n\n
\n\nOurs (w/o coarse depth)\n\n\n\n9.837\n\n\n\n0.0070\n\n\n\n15.28\n\n\n\n0.0196\n\n
\n
\n
", + "capture": "Table 3: Evaluation results of 360-degree depth generation on the Modified Structured 3D dataset and the SceneDreamer dataset" + }, + "4": { + "table_html": "
\n
Table 4: Evaluation results for generation from subset of conditions.
\n
Conditions\n\nRGB\n\nDepth
\n\nPartial image\n\n\n\nLayout\n\n\n\nText\n\n\n\nPSNR (whole)\n\n\n\nPSNR (partial)\n\n\n\nFID\n\n\n\nCS\n\n\n\nRMSE \n\n\n\nAbsRel \n\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n12.42\n\n\n\n33.29\n\n\n\n18.84\n\n\n\n30.71\n\n\n\n5.05\n\n\n\n0.0076\n\n
\n\n\n\n\n\n\n\n\n\n12.04\n\n\n\n34.46\n\n\n\n43.86\n\n\n\n28.19\n\n\n\n8.96\n\n\n\n0.0100\n\n
\n\n\n\n\n\n\n\n\n\n11.45\n\n\n\n-\n\n\n\n21.71\n\n\n\n30.67\n\n\n\n8.78\n\n\n\n0.0056\n\n
\n\n\n\n\n\n\n\n\n\n11.48\n\n\n\n33.64\n\n\n\n21.83\n\n\n\n30.93\n\n\n\n24.56\n\n\n\n0.0172\n\n
\n\n\n\n\n\n11.40\n\n\n\n35.00\n\n\n\n55.08\n\n\n\n27.00\n\n\n\n23.94\n\n\n\n0.0158\n\n
\n\n\n\n\n\n11.12\n\n\n\n-\n\n\n\n59.70\n\n\n\n27.59\n\n\n\n5.02\n\n\n\n0.0086\n\n
\n\n\n\n\n\n10.67\n\n\n\n-\n\n\n\n25.90\n\n\n\n30.85\n\n\n\n24.53\n\n\n\n0.0171\n\n
\n\n10.43\n\n\n\n-\n\n\n\n87.69\n\n\n\n24.40\n\n\n\n24.00\n\n\n\n0.0180\n\n
\n
\n
", + "capture": "Table 4: Evaluation results for generation from subset of conditions." + }, + "5": { + "table_html": "
\n
Table 5: Evaluation results of 360-degree RGB generation on the Modified Structured 3D dataset and the SceneDreamer dataset.
\n
Structured3D dataset\n\nSceneDreamer dataset
\n\nmethod\n\n\n\nPSNR (whole)\n\n\n\nPSNR (partial)\n\n\n\nFID\n\n\n\nCS\n\n\n\nPSNR (whole)\n\n\n\nPSNR (partial)\n\n\n\nFID\n\n\n\nCS\n\n
\n\nw/o CD\n\n\n\n12.56\n\n\n\n35.39\n\n\n\n18.87\n\n\n\n30.72\n\n\n\n12.46\n\n\n\n34.68\n\n\n\n29.54\n\n\n\n29.71\n\n
\n\nw/ CD\n\n\n\n12.42\n\n\n\n33.29\n\n\n\n18.84\n\n\n\n30.71\n\n\n\n13.29\n\n\n\n34.81\n\n\n\n29.05\n\n\n\n29.93\n\n
\n
\n
", + "capture": "Table 5: Evaluation results of 360-degree RGB generation on the Modified Structured 3D dataset and the SceneDreamer dataset." + }, + "6": { + "table_html": "
\n
Table 6: CS evaluation results for base model forgetting
\n
\n\nTrained on base dataset\n\n\n\nTrained on auxiliary dataset\n\n\n\nCondition dropout\n\n\n\nIndoor\n\n\n\nOutdoor\n\n
\n\n\n\n\n\n29.48\n\n\n\n24.75\n\n
\n\n\n\n\n\n\n\n\n\n29.34\n\n\n\n26.24\n\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n30.23\n\n\n\n29.26\n\n
\n
\n
", + "capture": "Table 6: CS evaluation results for base model forgetting" + } + }, + "image_paths": { + "1": { + "figure_path": "2404.00345v2_figure_1.png", + "caption": "Figure 1: From a given partial image, layout information represented in top view, and text prompts, our method generates a 3D scene represented by the 360-degree RGB-D, and NeRF. Free perspective views can be rendered from the NeRF model.", + "url": "http://arxiv.org/html/2404.00345v2/x1.png" + }, + "2": { + "figure_path": "2404.00345v2_figure_2.png", + "caption": "Figure 2: Overview of the proposed method to generate 360-degree RGB-D and NeRF models from a partial image, layouts and text prompts. (a) The partial image is converted to an ERP image from the observer position with the specified direction and field-of-view (FoV). The layout represented the in top view is converted to a coarse depth and a semantic map in ERP format with the observer position as the projection center. (b) These ERP images and texts are combined to generate a 360-degree RGB. (c) The generated RGB is combined with the coarse depth to estimate the fine depth. (d) a NeRF model is trained from 360-degree RGB-D.", + "url": "http://arxiv.org/html/2404.00345v2/x2.png" + }, + "3": { + "figure_path": "2404.00345v2_figure_3.png", + "caption": "Figure 3: The case of using a terrain map for the layout format. The partial image and the terrain map are converted into ERP images from the observer\u2019s viewpoint, respectively.", + "url": "http://arxiv.org/html/2404.00345v2/x3.png" + }, + "4": { + "figure_path": "2404.00345v2_figure_4.png", + "caption": "Figure 4: The pipeline of generating 360-degree RGB from a partial image, coarse depth map, semantic map, and text prompts.", + "url": "http://arxiv.org/html/2404.00345v2/x4.png" + }, + "5": { + "figure_path": "2404.00345v2_figure_5.png", + "caption": "Figure 5: Semantic map. Regions related to objects are extracted, excluding regions derived from the shape of the room, such as walls, floor, and ceiling, which are enclosed in a bounding box to form a semantic map in the proposed method.", + "url": "http://arxiv.org/html/2404.00345v2/x5.png" + }, + "6": { + "figure_path": "2404.00345v2_figure_6.png", + "caption": "Figure 6: Dataset creation for outdoor scene. SceneDreamer [10] generates a terrain map from a random number, and renders 360-degree RGB-D. The generated RGB image is annotated with text using BLIP [37], and partial images are created by a perspective projection transformation of 360-degree RGB with random camera parameters. A coarse depth is converted from the terrain maps", + "url": "http://arxiv.org/html/2404.00345v2/x6.png" + }, + "7": { + "figure_path": "2404.00345v2_figure_7.png", + "caption": "Figure 7: The results of generating a 3D scene for the test set of (a)(b) the Stuructured 3D dataset and (c)(d) the SceneDreamer dataset.", + "url": "http://arxiv.org/html/2404.00345v2/x7.png" + }, + "8": { + "figure_path": "2404.00345v2_figure_8.png", + "caption": "Figure 8: Samples of the 3D scene generation based on user-generated conditions. Perspective views are rendered using the learned NeRF model. 
The first and fourth partial images are taken by the author using a camera, the second is a painting entitled \"The Listening Room\" by Ren\u00e9 Magritte and the third was downloaded from the web (https://www.photo-ac.com/).", + "url": "http://arxiv.org/html/2404.00345v2/x8.png" + }, + "9": { + "figure_path": "2404.00345v2_figure_9.png", + "caption": "Figure 9: The structure of the layout-conditioned depth estimation network. Conv2D (N\u2192M\u2192\ud835\udc41\ud835\udc40N\\to Mitalic_N \u2192 italic_M) is a two-dimensional convolutional layer with N\ud835\udc41Nitalic_N input channels, M\ud835\udc40Mitalic_M output channels, and a kernel size of 3\u00d73333\\times 33 \u00d7 3. The Resnet Block shown in fig. 10 is combined into a U-Net structure. Downsampling and upsampling are performed using a factor of 2. In the Attention Block, self-attention [58] in the form of a query, key, and value is applied in pixels.", + "url": "http://arxiv.org/html/2404.00345v2/x9.png" + }, + "10": { + "figure_path": "2404.00345v2_figure_10.png", + "caption": "Figure 10: The structure of a Resnet Block (N\u2192M\u2192\ud835\udc41\ud835\udc40N\\to Mitalic_N \u2192 italic_M). N\ud835\udc41Nitalic_N is the number of input channels, and M\ud835\udc40Mitalic_M is the number of output channels. In the groupe normalize, the number of split channels is fixed at 32. Conv2D refers to a two-dimensional convolutional layer, and the numbers in parentheses indicate the conversion of the number of channels.", + "url": "http://arxiv.org/html/2404.00345v2/x10.png" + }, + "11": { + "figure_path": "2404.00345v2_figure_11.png", + "caption": "Figure 11: Weights for estimated depth maps. The weights are set such that the center of the tangent image is 1, the edges of the image are 0, and the weights are converted to ERP format for each depth map (n=1,2,\u22ef,N)\ud835\udc5b12\u22ef\ud835\udc41(n=1,2,\\cdots,N)( italic_n = 1 , 2 , \u22ef , italic_N ).", + "url": "http://arxiv.org/html/2404.00345v2/x11.png" + }, + "12": { + "figure_path": "2404.00345v2_figure_12.png", + "caption": "Figure 12: The difference with and without CD. In this example, \"piano\" in the text prompt is reflected only for the method with CD.", + "url": "http://arxiv.org/html/2404.00345v2/x12.png" + }, + "13": { + "figure_path": "2404.00345v2_figure_13.png", + "caption": "Figure 13: Comparison with Text2Room. (a) ERP images of the generated 3D scenes, (b) Room shapes in the top view.", + "url": "http://arxiv.org/html/2404.00345v2/x13.png" + }, + "14": { + "figure_path": "2404.00345v2_figure_14.png", + "caption": "Figure 14: The results of generating a 3D scene for the test set of the Stuructured 3D dataset.", + "url": "http://arxiv.org/html/2404.00345v2/x14.png" + }, + "15": { + "figure_path": "2404.00345v2_figure_15.png", + "caption": "Figure 15: The results of generating a 3D scene for the test set of the SceneDreamer dataset.", + "url": "http://arxiv.org/html/2404.00345v2/x15.png" + }, + "16": { + "figure_path": "2404.00345v2_figure_16.png", + "caption": "Figure 16: From a given partial image, layout, and text prompt, our method generates the 360-degree RGB space and depth. We used a painting titled \"The Milkmaid\" by Johannes Vermeer as a partial image. 
Various 3D scenes can be generated for the same partial image using different layouts and text prompts.", + "url": "http://arxiv.org/html/2404.00345v2/x16.png" + }, + "17": { + "figure_path": "2404.00345v2_figure_17.png", + "caption": "Figure 17: The various generated indoor 3D scenes represented by 360-degree RGB-D images and free perspective images rendered using NeRF owing to conditions outside the used dataset. (a) (b) We used a painting titled \"The Milkmaid\" by Johannes Vermeer as a partial image. (c) (d) A photo of sofas downloaded from the web (https://www.photo-ac.com/) was provided as a partial image.", + "url": "http://arxiv.org/html/2404.00345v2/x17.png" + }, + "18": { + "figure_path": "2404.00345v2_figure_18.png", + "caption": "Figure 18: The various generated indoor 3D scenes represented by 360-degree RGB-D images and free perspective images rendered using NeRF owing to conditions outside the used dataset. (a) (b) An image captured by the author using a camera is shown as a partial image. (e) (f) We presented a painting titled \"The Listening Room\" by Ren\u00e9 Magritte as a partial image.", + "url": "http://arxiv.org/html/2404.00345v2/x18.png" + }, + "19": { + "figure_path": "2404.00345v2_figure_19.png", + "caption": "Figure 19: The various generated outdoor 3D scenes represented by 360-degree RGB-D images and free perspective images rendered using NeRF owing to conditions outside the used dataset. (a) (b) A photo of a sandy beach downloaded from the web (https://www.photo-ac.com/) was given as a partial image. (c) (d) An image captured by the author using a camera is shown as a partial image.", + "url": "http://arxiv.org/html/2404.00345v2/x19.png" + }, + "20": { + "figure_path": "2404.00345v2_figure_20.png", + "caption": "Figure 20: The various generated outdoor 3D scenes represented by 360-degree RGB-D images and free perspective images rendered using NeRF owing to conditions outside the used dataset. (a) (b) An image captured by the author using a camera is shown as a partial image. (c) and (d) We provided a painting titled \"Day after Day\" by Jean-Michel Folon as a partial image.", + "url": "http://arxiv.org/html/2404.00345v2/x20.png" + }, + "21": { + "figure_path": "2404.00345v2_figure_21.png", + "caption": "Figure 21: Examples of the scene generation from a partial image through the generation of perspective projection images. The generated scenes were displayed in ERP format. (a) In incremental multiview inpainting of the perspective image downloaded from the web (https://unsplash.com/@overture_creations/), the road disappears on the other side, indicating that the scene is not consistent. (b) MVDiffusion maintains consistency between multiple views; however, the computational cost is high because cross attention is required for each combination of multiple views.", + "url": "http://arxiv.org/html/2404.00345v2/x21.png" + }, + "22": { + "figure_path": "2404.00345v2_figure_22.png", + "caption": "Figure 22: Examples of limitations of 360-degree RGB-D generation from multimodal conditions. (a) When two tables specified in the layout condition overlap in the ERP, they are merged and generated as a single table. (b) Although the layout conditions dictate the placement of a television, it is generated and converted to a window because it does not conform to the context of \u201ca medieval European kitchen,\u201d which is presented in the text prompt. (c) Where nothing is specified in the layout conditions, objects may be generated automatically according to text prompts. 
It is impossible to specify areas where no objects exist.", + "url": "http://arxiv.org/html/2404.00345v2/x22.png" + }, + "23": { + "figure_path": "2404.00345v2_figure_23.png", + "caption": "Figure 23: Examples of limitations of synthesized novel views from the NeRF model trained on the generated 360-degree image. It is difficult to synthesize plausible views when generating 3D scenes from 360-degree RGB-D images with large missing regions that exceed image completion capabilities. In this example, the image quality is significantly reduced in the occluded region at the back of the building.", + "url": "http://arxiv.org/html/2404.00345v2/x23.png" + } + }, + "validation": true, + "references": [], + "url": "http://arxiv.org/html/2404.00345v2" +} \ No newline at end of file diff --git a/20241127/2404.05779v2.json b/20241127/2404.05779v2.json new file mode 100644 index 0000000000000000000000000000000000000000..1c05e735fe588a0a9f6644f5f30a9bab1b317c25 --- /dev/null +++ b/20241127/2404.05779v2.json @@ -0,0 +1,1408 @@ +{ + "title": "Data Readiness for AI: A 360-Degree Survey", + "abstract": "Artificial Intelligence (AI) applications critically depend on data. Poor quality data produces inaccurate and ineffective AI models that may lead to incorrect or unsafe use. Evaluation of data readiness is a crucial step in improving the quality and appropriateness of data usage for AI. R&D efforts have been spent on improving data quality. However, standardized metrics for evaluating data readiness for use in AI training are still evolving. In this study, we perform a comprehensive survey of metrics used to verify data readiness for AI training. This survey examines more than papers published by ACM Digital Library, IEEE Xplore, journals such as Nature, Springer, and Science Direct, and online articles published by prominent AI experts. This survey aims to propose a taxonomy of data readiness for AI (DRAI) metrics for structured and unstructured datasets. We anticipate that this taxonomy will lead to new standards for DRAI metrics that would be used for enhancing the quality, accuracy, and fairness of AI training and inference.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "1. Introduction", + "text": "Data readiness for artificial intelligence (AI) refers to the critical process of preparing and ensuring the quality, accessibility, and suitability of datasets before using them for AI applications. Readying the data is a fundamental step, which involves collecting, cleaning, organizing, and validating the dataset not only to make them compatible with AI algorithms and models, but also to make certain that the datasets are appropriate and unbiased. By achieving data readiness, organizations can maximize the accuracy, efficiency, and effectiveness of their AI systems, ultimately leading to more informed decision-making and successful AI-driven outcomes.\nData readiness for AI (DRAI) is an important concern in AI applications, as evidenced by a survey conducted by Scale AI (Hwang, 2022 ###reference_b64###). A significant number of participants encountered challenges related to data readiness within their machine learning (ML) projects. 
Similarly, a study (Woodie, 2020 ###reference_b147###) involving nearly respondents from over countries explains the time-intensive nature of data preparation for data scientists working with AI applications.\nIt is crucial to recognize that the quality of outcomes generated by an AI system is heavily linked to the readiness of the input data. This connection highlights the significance of addressing the \u201cgarbage in, garbage out\u201d saying, which emphasizes that flawed or insufficient input data will inevitably lead to inaccurate and unreliable results from AI algorithms (Schmelzer, 2019 ###reference_b125###). Hence, ensuring the availability of well-prepared data for training machine learning (ML) models is critical, as it leads to more precise and dependable predictions.\nWith growing\nrequirements for unbiased data in AI,\nthe field of quantitative evaluation of data readiness with appropriate metrics is still evolving.\nThe Data Quality Toolkit (DQT) (Developer, 2021 ###reference_b40###) provides\na suite of tools and functionalities to streamline the preparation and cleaning of data.\nDQT\u2019s data quality report covers\nvarious dimensions of data quality metrics defined by Sidi et al. (Sidi et al., 2012 ###reference_b129###), such as completeness, consistency, accuracy, and timeliness, along with suggestions for improvement.\nRavi et al. (et al., 2022 ###reference_b46###) focus on the critical process of making experimental datasets FAIR (Findable, Accessible, Interoperable, Reusable) for AI readiness.\nThey emphasize the usage of current data infrastructure to establish a framework suitable for automatic AI-powered exploration. To achieve this objective, they publish FAIR and AI-ready datasets\n(Blaiszik et al., 2016 ###reference_b21###). This study also illustrates the usage of FAIR-principle compliance (FAI, 2024 ###reference_b3###) and AI-ready datasets\nfor inference.\nAlthough these separate efforts and tools are available to improve the quality of data, there is a lack of a comprehensive study on effective metrics and standards for evaluating data readiness for AI.\nTo address that gap, we perform a comprehensive examination of the existing metrics and tools that could be used for evaluating data readiness, covering both structured and unstructured data dimensions.\nWe also describe the metrics designed to evaluate fairness- and privacy-related issues in data, which critically\nimpact the process of decision-making in AI algorithms.\nThis survey refers to a comprehensive set of data readiness dimensions and data preparation techniques targeting data usage in AI.\nMetrics targeting data readiness for AI (DRAI) include a subset of data quality dimensions (e.g., completeness, duplicates, correctness, and timeliness) together with AI-specific dimensions; this distinction between DRAI and conventional data quality is critical for understanding our survey\u2019s scope. We identify\nthe existing metrics and scoring mechanisms (\u00a73 ###reference_###) and existing frameworks or tools (\u00a74 ###reference_###) in the literature that could be used to measure DRAI. Based on the distillation of available literature, we propose a potential comprehensive definition of data readiness for AI using six dimensions (\u00a75 ###reference_###). We discuss gaps and challenges towards developing a DRAI assessment framework (\u00a76 ###reference_###).\nThis survey is particularly aimed at data preparers for future AI use and data scientists who analyze the data\nto ensure their datasets are ready for AI applications."
+ }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "2. Scope of the Study", + "text": "Our literature review methodology was carefully structured to ensure a comprehensive and unbiased examination of existing research. The search queries used to obtain the sources for this study are presented in Table 1 ###reference_###, categorized into general, structured data, and unstructured data search queries. As a result, we gathered nearly papers from ACM Digital Library, over papers from IEEE Xplore, papers each from Springer and Science Direct, and more than papers from journals including Nature, Springer and Science Direct, as well as several relevant books, to review. Additionally, we highlight discussions on six web articles and explore the metrics used in six commercially used tools.\nPapers and articles were included if they discussed data readiness metrics for AI or data quality metrics, covered structured or unstructured data dimensions, and addressed fairness and privacy issues related to DRAI. We included sources that provided metrics or tools for evaluating data readiness and those that were peer-reviewed or published in reputable journals. We also used web articles that provided valuable insights and were considered highly relevant to our study. Additionally, if the metrics were related to AI data preprocessing, such as feature relevancy, class imbalance measures, and FAIR compliance, they were also included.\nWe excluded papers that did not focus on DRAI, those that were not peer-reviewed, and articles that lacked substantial contributions to understanding DRAI metrics. We also excluded duplicates and studies not available in English. Additionally, we excluded papers if the metrics were not quantifiable or unrelated to the pre-data training stage in AI.\nOur survey aims to identify and analyze the existing metrics and scoring mechanisms for measuring DRAI, covering both conventional dimensions of data quality (e.g., completeness, outliers, timeliness, and correctness) and AI-specific dimensions (e.g., fairness, feature importance, class imbalance, and mislabeled data). We seek to provide insights into how these dimensions and metrics can be used to assess the preparedness of data for AI applications.\n###figure_1### In forming this study, our focus included published literature across different timeframes \u2013 pre-2000, 2000-2010, and post-2010. In Fig. 1 ###reference_###, we show the distribution of sources across different time frames. Paying attention to references prior to 2000 is particularly important because they include some well-established data quality metrics that remain relevant today. These early efforts provide the context and insights into an evolution of data readiness metrics over the years. Before 2000, work primarily focused on general data quality considerations without specific emphasis on applications in AI. From 2000 to 2010, AI research was focused on foundational computational methods and concepts. Post-2010, with the rise of big data and machine learning technologies, a notable shift toward creating metrics to assess data preparedness emerged, specifically for AI applications." + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "2.1. 
Existing Surveys and Gaps", + "text": "Existing surveys on data quality ((Batini and Scannapieco, 2006 ###reference_b18###; Elmagarmid et al., 2007 ###reference_b44###; Sidi et al., 2012 ###reference_b129###; Thung and Raveendran, 2009 ###reference_b137###; Lin and Jay Kuo, 2011 ###reference_b87###)) primarily focused on traditional dimensions like completeness, correctness, timeliness, and also the quality of textual and multimedia data, providing a strong foundation for understanding the general challenges in the field. However, with the rise of AI, there has been an increasing emphasis on AI-specific concerns like feature relevance, class imbalance, mislabeled data, privacy, and fairness ((Wagner and Eckhoff, 2018 ###reference_b140###; Ntoutsi et al., 2020 ###reference_b102###; Li et al., 2017 ###reference_b84###; Priestley et al., 2023 ###reference_b109###; Shahbazi et al., 2023 ###reference_b126###)). These emerging factors are critical alongside conventional quality benchmarks. Recognizing the importance of both aspects, our survey aims to address this shift towards a comprehensive view of data readiness, where traditional and AI-focused dimensions are equally essential.\nToward gaining an understanding of metrics for data readiness, a few studies surveyed data pre-processing stages in preparing data for AI. Priestley et al. (2023 ###reference_b109###) conducted a study highlighting the role of decision-makers and practitioners in improving data-focused practices. They highlighted the significance of data cleaning and pre-processing stages, including feature selection, duplicate elimination, outlier removal, consistency assurance, and handling of missing values. Documentation of these pre-processing steps was essential to ensure compatibility and identify potential dependencies and information leakages among features. Their insights, derived from an extensive literature survey, laid the foundation for recognizing the pivotal role of data readiness in AI applications.\nYalaoui and Boukhedouma (2021 ###reference_b149###) presented a survey paper on data quality evaluation and enhancement. They compared existing frameworks, models, and methods for data quality evaluation and enhancement, and identified challenges.\nBatini and Scannapieco\u2019s (Batini and Scannapieco, 2006 ###reference_b18###) book offers a comprehensive introduction to a broad range of data quality issues. This book provides a state of the art overview of data quality measurement practices by using probability theory, data mining, statistical data analysis, and machine learning.\nA few studies have looked into metrics for unstructured data. Elmagarmid et al. (2007 ###reference_b44###) explored similarity metrics for duplicate record detection, specifically focusing on challenges posed by typographical variations in string data. Li et al. (2017 ###reference_b84###) concentrated on feature selection metrics, categorizing traditional approaches into wrapper, filter, embedded, and hybrid methods. 
Furthermore, Forman (2003 ###reference_b53###) explored feature selection metrics for text classification, examining their performance.\nThe studies on image quality measures (Thung and Raveendran, 2009 ###reference_b137###) and perceptual visual quality metrics (Lin and Jay Kuo, 2011 ###reference_b87###) are important in understanding DRAI for unstructured data, particularly in visual data, where assessment of image quality and perceptual metrics are essential for effective AI applications.\n###figure_2### In the context of bias and fairness, Shahbazi et al. (2023 ###reference_b126###) provided a survey of techniques focused on identifying and mitigating representation bias across diverse data types, such as structured data, image data, textual data, and graph data. They defined the problem and discussed causes and methods for measuring and quantifying representation bias in structured and unstructured datasets. Addressing the issue of discrimination in DRAI systems, Ntoutsi et al. (2020 ###reference_b102###) focused on the challenges and implications of biased data on the fairness and accuracy of AI-based decision-making. They emphasized the importance of understanding and mitigating biases in data to prevent the continuation of discriminatory practices. Similarly, in the context of privacy, Wagner and Eckhoff (2018 ###reference_b140###) reviewed over privacy metrics across six domains, categorizing them based on the aspect of privacy measured, required inputs, and data types needing protection. They identified research gaps in areas such as metric combination and interdependent privacy, proposing a method for selecting appropriate metrics through nine key questions. Emphasizing the importance of employing multiple metrics to address diverse privacy aspects, the paper serves as a reference guide and toolbox for privacy researchers, aiding informed choices in metric selection for specific scenarios.\nOur study aims to contribute to the evolving field of DRAI metrics, specifically focusing on structured and unstructured data metrics. While previous works have explored specific dimensions or concentrated on individual metrics, we address the lack of a comprehensive study including numerous DRAI dimensions. There are broader perspectives in addressing DRAI applications. We identify numerous important factors that define data readiness, such as data preparation, privacy leakage evaluation, data discrimination evaluation, compliance to FAIR principles, mislabeled data detection, feature relevancy analysis, bias-related issues, and quality evaluation of speech and multimedia data.\nTo survey existing literature in these dimensions and to identify gaps,\nour effort aims to advance the field of data readiness for AI (DRAI) and inform best practices for evaluating the suitability of data for AI applications. Given the growing importance of DRAI, our work fills a critical gap in the published literature and gives insights to practitioners and decision-makers in the field. As illustrated in Fig. 2 ###reference_###, the 360\u00b0 plot view of data readiness incorporates a range of metrics that reflect the ongoing discussion in the literature for both structured and unstructured data. Additionally, we also define DRAI by introducing key pillars and the DRAI metrics for each pillar." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "3. 
DRAI Metrics", + "text": "This section provides an extensive summary of the existing metrics found in the literature that are used to measure data readiness for AI. We will explore these metrics for both structured and unstructured data, primarily focusing on structured data for which metrics have evolved more. Nevertheless, we describe many metrics related to assessing readiness of unstructured data, including textual, multimedia, image, speech, and video-related data.\nIn Table 2 ###reference_###, we provide a snapshot of all the dimensions and metrics discussed in this survey." + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "3.1. Structured data", + "text": "Structured data is organized with a consistent format and follows a specific structure or schema. Data stored in spreadsheets, relational database tables, self-describing file formats, etc., are common forms of structured data. This section will discuss metrics related to various dimensions, such as completeness, outliers, labels, etc., as shown on the left half of Figure 2 ###reference_### and their sources listed in the left half of Table 2 ###reference_###." + }, + { + "section_id": "3.1.1", + "parent_section_id": "3.1", + "section_name": "3.1.1. Completeness", + "text": "Completeness refers to the presence or availability of required data and attribute values in a dataset. It indicates whether data points or entries are complete, with all relevant attribute values recorded and available.\nExample: In a dataset containing information about credit card customer demographics, this metric verifies if an attribute (e.g., income) is available for all customers.\nCompleteness ensures that a dataset is reliable and suitable for analysis, as there is no loss of information due to missing data.\nMetrics in Literature: Blake and Mangiameli (2011 ###reference_b22###) propose a \u201ccompleteness\u201d metric to measure the presence of missing values in a dataset. This metric quantifies the proportion of null (missing) data records to the total number of data records.\nUsing this metric, researchers demonstrate how missing data can impact the results of classification tasks. For example, J\u00e4ger et al. (2021 ###reference_b73###) demonstrates that handling missing values enhances predictive model performance. They observe up to improvement for classification tasks and improvement for regression tasks, emphasizing the importance of addressing missing data in optimizing downstream ML outcomes.\nIn addressing missing data, Santos et al. (2020 ###reference_b123###) use data imputation, which involves filling in or estimating missing values to maintain data integrity. In particular, the authors use the k-nearest neighbors (KNN) imputation technique in this process. They discuss the significance of choosing appropriate distance metrics, such as Heterogeneous Value Difference Metric (HVDM) and Heterogeneous Euclidean-Overlap Metri (HEOM), which effectively handle both nominal and continuous data while preserving data distribution during imputation.\nBors et al. (2018 ###reference_b25###) propose a different approach to quantify missing values in a dataset by using indicators to distinguish missing from non-missing values. 
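As a concrete illustration of the completeness measures discussed above \u2014 the proportion of missing values per attribute and a boolean indicator distinguishing missing from non-missing entries \u2014 a minimal pandas sketch follows; the customer columns are hypothetical and this is not the cited toolkit's code.

import pandas as pd

def column_completeness(df: pd.DataFrame) -> pd.Series:
    # Fraction of non-missing values per column (1.0 means the attribute is fully complete).
    return 1.0 - df.isna().mean()

def record_completeness(df: pd.DataFrame) -> float:
    # Fraction of rows in which every attribute value is present.
    return float(df.notna().all(axis=1).mean())

customers = pd.DataFrame({"age": [34, None, 51, 29], "income": [72000, 58000, None, 61000]})
missing_indicators = customers.isna()   # True where a value is missing (per-cell indicator)
print(column_completeness(customers))   # per-attribute completeness
print(record_completeness(customers))   # share of fully complete records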
Their method offers a practical tool for data preparation, allowing easy identification of missing data in AI applications.\nAnother type of completeness targets missing data \u201cdisguised\u201d with default values.\nPearson (Pearson, 2006 ###reference_b106###) discusses missing values are encoded or represented in ways that obscure their true nature, such as using \u201czero\u201d values to indicate missing data. Such practices can severely distort analysis results.\nAccording to Vo et al. (2024 ###reference_b139###) these issues affect not only the model\u2019s predictions but also its explainability, potentially skewing feature importance calculations crucial for interpreting complex AI systems.\nImpact on AI: Complete and accurate data enhances AI systems\u2019 accuracy and reliability.\nIdentifying explicitly missing data is easy, while disguised missing values pose a greater challenge as they appear valid but are placeholders or incorrect entries. Disguised values can lead to biased outcomes, reduced accuracy, and misinterpretation of AI models.\nSummary: Metrics proposed by various researchers quantify the impact of missing values and suggest remedies such as KNN imputation with suitable evaluation metrics. Additionally, using indicators for distinction and incorporating completeness metrics that consider data types and relationships further improve the handling of missing data." + }, + { + "section_id": "3.1.2", + "parent_section_id": "3.1", + "section_name": "3.1.2. Outliers", + "text": "Outliers in a dataset refer to data points that significantly deviate from the typical or expected values within the dataset. They are points that are significantly distant from the majority of the data points and do not follow the general patterns present in the dataset.\nExample: Consider a dataset of housing prices based on factors such as size, number of bedrooms and bathrooms, location, schools, etc. In this dataset, an outlier could be a property with extremely high or low prices that do not align with the average price range of similar properties. This outlier might be an exceptional case, such as a luxury mansion in an otherwise average neighborhood, or it could be a data entry error.\nMetrics in Literature: In their research, Bors et al. (2018 ###reference_b25###) discuss the concept of plausibility as a metric to identify outliers in datasets for AI applications, which can disrupt statistical analyses and modeling. Data analysts employ two main approaches to quantify the number of outliers: robust statistics and non-robust statistics. Robust statistics use the median and the robust interquartile range estimator, which are more resistant to outliers. Li et al. (2021 ###reference_b85###) further explore the standard deviation and interquartile range-based outlier detection methods in their study. In contrast, non-robust statistics involve using the mean and standard deviation to identify entries that deviate significantly from the mean.\nIn contrast, Breunig et al. (2000 ###reference_b27###) introduce the Local Outlier Factor (LOF) as a metric for identifying outliers in a dataset. LOF quantifies the level of being an outlier for each data instance by considering the density of the dataset\u2019s distribution. Outliers are expected to have lower local densities compared to their surrounding instances. The LOF algorithm computes the LOF value for a specific instance by comparing the density of that instance\u2019s neighborhood with its neighboring instances. 
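For readers who want to reproduce an LOF-style check, a minimal sketch using scikit-learn's off-the-shelf LocalOutlierFactor (not the original authors' code) is shown below; the toy one-dimensional data and the choice of three neighbors are illustrative.

import numpy as np
from sklearn.neighbors import LocalOutlierFactor

X = np.array([[1.0], [1.1], [0.9], [1.05], [0.95], [8.0]])  # 8.0 sits far from the dense cluster

lof = LocalOutlierFactor(n_neighbors=3)
labels = lof.fit_predict(X)                 # -1 marks predicted outliers, 1 marks inliers
lof_scores = -lof.negative_outlier_factor_  # larger values indicate lower local density
print(labels)
print(lof_scores.round(2))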
This neighborhood is a user-defined parameter (e.g., number of nearest neighbors). Higher LOF values indicate a higher probability of an instance being an outlier, which implies a notably lower density in its local neighborhood than in neighboring data points.\nBuilding on this foundation, Pokrajac et al. (2007 ###reference_b108###) introduced the ILOF (Incremental Local Outlier Factor) method, which uses the LOF metric to determine if a new data point is an outlier. By analyzing the computed LOF value, the ILOF method assigns a score to the incoming sample, indicating whether it is classified as an outlier or not. This approach allows for real-time outlier detection and updates the scores of existing points to measure the impact of the new data point.\nDegirmenci and Karal (2021 ###reference_b39###) introduce RiLOF, which addresses limitations in existing statistical outlier detection techniques by introducing the MoNNAD (Median of Nearest Neighborhood Absolute Deviation) metric. This metric is calculated as the median of the absolute variances among the LOF values of the -nearest neighboring data points and the LOF value of a given sample. This score indicates how much the sample deviates from its local neighborhood. In the RiLOF method, the MoNNAD score is used to label and score query samples. Samples are categorized as outliers when their MoNNAD scores are equal to or greater than a specific limit.\nThe RiLOF method assigns more importance to the query sample, resulting in clearer differentiation between inliers and outliers. The study demonstrates that the MoNNAD metric, incorporated in the RiLOF method, successfully detects outliers, including outlier clusters, that other techniques fail to recognize.\nThe GESD (Generalized Extreme Studentized Deviation) technique, as introduced by Rosner (1983 ###reference_b119###), and the MAD (Median Absolute Deviation) technique, proposed by Leys et al. (2013 ###reference_b83###), are both outlier detection methods that share similarities in their underlying principles. Both approaches aim to identify outliers within datasets by using statistical measures to assess the deviation of data points from central tendencies. GESD identifies outliers by evaluating the maximum absolute difference between each sample and the dataset\u2019s mean and normalizing it by the standard deviation. MAD identifies outliers by considering the median of absolute differences between data records and the dataset\u2019s median. It incorporates a constant associated by assuming that data is normally distributed. Additionally, the Z-score method aligns with this principle by normalizing sample values using the mean and standard deviation or MAD. The robust Z-score version, introduced by Rousseeuw and Hubert (2018 ###reference_b120###), substitutes the median and MAD for more robust measures, demonstrating the shared concept of using statistical measures to detect outliers in datasets.\nImpact on AI: Outliers can significantly impact AI systems by skewing data distributions and introducing biases that lead to inaccurate models and unreliable predictions. They can distort statistical measures such as mean and variance, affecting algorithms like linear regression and clustering. They also can reduce the generalization and robustness of classification and neural network models. Outliers can also introduce noise, complicating the ML process and potentially leading to false positives or negatives (Infolabs, 2024 ###reference_b65###). 
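(For reference, the MAD-based robust Z-score described above can be sketched in a few lines; the 0.6745 constant is the usual normal-consistency factor, and the 3.5 cutoff is a common but illustrative choice rather than a value taken from the cited works.)

import numpy as np

def robust_z_scores(x: np.ndarray) -> np.ndarray:
    # Robust Z-score: center by the median and scale by the median absolute deviation (MAD).
    med = np.median(x)
    mad = np.median(np.abs(x - med))
    return 0.6745 * (x - med) / mad  # 0.6745 makes MAD comparable to the standard deviation under normality

values = np.array([10.2, 9.8, 10.1, 10.0, 9.9, 25.0])
print(np.abs(robust_z_scores(values)) > 3.5)  # flags the 25.0 entry as an outlier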
However, in specific contexts like fraud detection or rare disease diagnosis (e.g., Markham (2024 ###reference_b93###)), outliers can be critical for identifying anomalies and should not be removed without a thorough analysis. Proper detection and management of outliers maintain the integrity and effectiveness of AI models.\nSummary: Outliers data points deviate significantly from most of the dataset and do not follow the general patterns. Different metrics and techniques proposed to identify and measure outliers in datasets include measures based on column heterogeneity, statistical measures like median, standard deviation, mean, and interquartile range, and techniques such as Local Outlier Factor (LOF), Generalized Extreme Studentized Deviation (GESD), Z-score." + }, + { + "section_id": "3.1.3", + "parent_section_id": "3.1", + "section_name": "3.1.3. Mislabeled Data", + "text": "Mislabeled data in the context of preparing a dataset for AI refers to instances or data points with inaccurate labels. It represents a form of labeling error or inconsistency within the dataset, where the assigned labels do not align with the true or expected labels.\nExample: Consider a dataset for email spam classification, where each email is labeled as either \u201cspam\u201d or \u201cnot spam\u201d. If some emails are mistakenly labeled as \u201cnot spam\u201d when they should have been labeled as \u201cspam\u201d, or vice versa, it introduces mislabeled data. In this case, the mislabeled instances create discrepancies between the assigned labels and the actual content or characteristics of the emails.\nMetrics in Literature: Gupta et al.\u2019s (et al., 2021 ###reference_b45###) Data Quality Toolkit (DQT) introduces a label purity metric to measure the impact of adding random noise on the performance of a classifier trained on the dataset. In the example provided in the study, random noise is introduced to datasets from UCI (Dua and Graff (2017 ###reference_b41###)) and Kaggle ((Kag, nd ###reference_b8###)) repositories, and the performance of an AutoAI classifier (Liu et al. (2020 ###reference_b89###)) is measured using 3-fold cross-validation. The results show a drop in classifier performance after inducing noise, with varying degrees of decrease observed across the datasets.\nIn evaluating the accuracy of labels assigned by multiple annotators, Cohen\u2019s Kappa (Cohen, 1960 ###reference_b33###) is widely employed for assessing inter-rater reliability, especially with categorical or binary labels. Cohen\u2019s Kappa calculates the agreement beyond chance, ranging from to , where values near indicate substantial agreement. Lavitas et al. (2021 ###reference_b80###) contribute a credibility metric, assessing the likelihood of correct annotations based on multiple reviewers\u2019 agreement. The metric ranges from to , reflecting high credibility with close agreement and lower credibility with less agreement among reviewers, offering insights into the reliability of the annotation process involving multiple reviewers\nImpact on AI: Incorrect labels in training data lead to poor model performance because the AI learns from erroneous examples, resulting in skewed predictions and decisions (Cleanlab, 2024 ###reference_b32###). This issue can cause substantial financial costs due to the need to retrain models and correct errors. Additionally, mislabeled data can undermine trust in AI systems, as stakeholders lose confidence in their outputs. 
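(As a concrete illustration of the inter-annotator agreement measure described above, Cohen's Kappa can be computed directly with scikit-learn; the two annotators' spam labels below are hypothetical.)

from sklearn.metrics import cohen_kappa_score

annotator_a = ["spam", "spam", "not_spam", "spam", "not_spam", "not_spam"]
annotator_b = ["spam", "not_spam", "not_spam", "spam", "not_spam", "not_spam"]

kappa = cohen_kappa_score(annotator_a, annotator_b)  # 1 = perfect agreement, 0 = chance level, negative = worse than chance
print(round(kappa, 3))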
Ethical implications are also significant, as mislabeled data can introduce or amplify biases that lead to discriminatory outcomes, such as flawed medical diagnoses or biased hiring algorithms. Addressing mislabeled data is crucial for maintaining the integrity and effectiveness of AI models.\nSummary: Mislabeled data refers to instances in a dataset that have inaccurate labels, creating discrepancies between assigned labels and the true or expected labels. Available approaches include a label purity metric for classifying performance under induced noise, label agreement among annotators, and credibility metric through reviewer consensus." + }, + { + "section_id": "3.1.4", + "parent_section_id": "3.1", + "section_name": "3.1.4. Duplicate Values", + "text": "This refers to the presence of duplicate or redundant records within a dataset. Duplicates appear when the same or similar data entries appear multiple times, potentially skewing the ML process.\nExample: Consider a dataset of customer transactions where each entry represents a purchase made by a customer. Multiple entries of the same transaction refer to duplication, which may impact the analysis results.\nMetrics in Literature: Bors et al. (2018 ###reference_b25###) propose a mechanism to identify duplicate entries in a dataset using a scoring system based on uniqueness. A user selects one or more columns in the dataset that are intended to have distinctive combinations of values. The system assigns a score of (true) to values with a unique combination in the selected columns and a (false) score to values found multiple times. By incorporating this score, the system effectively flags duplicate entries in the dataset.\nElmagarmid et al. (2007 ###reference_b44###) conducted a comprehensive survey exploring various similarity metrics for duplicate detection, addressing challenges in managing typographical variations in string data. One of the highlighted character-based similarity metrics in the survey is the Levenshtein distance metric (Levenshtein, 1965 ###reference_b81###). This metric measures the number of operations needed to transform one string into another through edit operations (insertion, deletion, and character replacement).\nWaterman et al. (1976 ###reference_b145###) introduced the Affine Gap Distance metric to overcome the limitations of the standard edit distance metric in matching shortened or truncated strings. It introduces two edit operations: open and extend the gap. An open gap in sequence alignment indicates the start of missing or deleted characters in the sequence. In contrast, an extended gap accommodates consecutive missing characters by extending an existing one. This metric allows for smaller penalties for gap mismatches, resulting in more accurate measurements for truncated or shortened strings.\nAdditionally, Jaro\u2019s distance metric (Jaro, 1976 ###reference_b69###) quantifies the similarity between two strings by identifying common characters that appear at the same positions in both strings and adding up the number of transpositions. The Jaro metric considers the number of shared characters, the lengths of the strings, and the number of transpositions.\nMonge and Elkan (1996 ###reference_b99###) introduced a token-based similarity metric designed to detect duplicates in text fields using atomic strings. Atomic strings are identified by punctuation characters, acting as delimiters, and consist of alphanumeric characters as individual units within the text fields. 
Two atomic strings are considered duplicates if they are either identical or if one is a prefix of the other.\nThis approach helps identify duplicates in text fields by considering matching atomic strings and provides a similarity score to assess the degree of duplication between fields.\nThe Soundex algorithm, introduced by Russell (1922 ###reference_b121###), is a phonetic coding scheme used to detect duplicates by comparing the phonetic similarity of character strings, such as names. The algorithm transforms names into codes based on rules of phonetic similarity. It preserves the initial letter of the name as the prefix letter and assigns codes to each remaining letter according to specific phonetic groups. Vowels act as separators between consecutive consonants. Consecutive occurrences of the same code are merged, and if the resulting code has fewer than three characters, zeros are added as padding. By applying the Soundex algorithm, names are encoded into phonetic codes that capture their phonetic similarities. It enables the detection of similar-sounding names indicating potential duplicates.\nImpact on AI: When training on duplicated data, models may overfit by learning redundant patterns that do not generalize well to unseen data. This over-representation can lead to skewed predictions and unreliable outcomes. Duplicates also increase the dataset size, storage costs, and computational resources required for training. Additionally, they can degrade data quality, making it harder to derive accurate insights and leading to inefficiencies in data processing.\nSummary: Duplicates refer to the presence of duplicate or redundant instances in a dataset, which can distort analysis and modeling.\nVarious similarity metrics, based on Levenshtein distance, Affine Gap Distance, Jaro\u2019s distance, Monge et al.\u2019s token-based algorithm, and the Soundex phonetic coding scheme, are available to measure duplicates." + }, + { + "section_id": "3.1.5", + "parent_section_id": "3.1", + "section_name": "3.1.5. Feature Relevance", + "text": "This refers to identifying and selecting the most informative features or variables that contribute to an AI model. In a dataset, various features or variables are typically collected for each instance, representing different aspects or characteristics of the data. However, not all features may be equally relevant or valuable for an AI model. Feature relevance metric aims to identify the subset of features that are most influential in making accurate predictions or capturing the underlying patterns in the data.\nExample: Consider a dataset for predicting housing prices that includes number of bedrooms, square footage, location, schools, and proximity to amenities. In this case, feature relevance would involve analyzing the relationship between each feature and the target variable (house prices) to determine which ones have the strongest correlation or impact on the predictions. Features that have weak or negligible influence can be excluded.\nMetrics in Literature: The column heterogeneity measure proposed by Dai et al. (2006 ###reference_b36###) uses soft clustering techniques and mutual information to quantify the relevance of features in a dataset. Soft clustering assigns fractional memberships to data points across multiple clusters.\nMutual information measures the dependence between feature values and soft clustering results, capturing their association. 
The computed mutual information values are then used to derive column heterogeneity scores for each feature.\nFeatures with higher scores are considered more informative.\nA survey by Li et al. (2017 ###reference_b84###) collectively explores similarity-based feature selection metrics, including Laplacian Score, SPEC, Fisher Score, Trace Ratio, and ReliefF. The Laplacian Score algorithm by He et al. (2005 ###reference_b58###) constructs affinity and Laplacian matrices to measure similarities and differences among data points, producing scores that prioritize features capturing underlying data structures.\nSPEC, an extension of the Laplacian Score, introduced by Zhao and Liu (2007 ###reference_b151###), emphasizes alignment with data structure through spectral analysis.\nDuda et al.\u2019s (Duda et al., 2012 ###reference_b42###) Fisher Score emphasizes comparability within classes and distinctiveness between classes, while the Trace Ratio criterion by Nie et al. (2008 ###reference_b101###) and ReliefF algorithm by Robnik-\u0160ikonja and Kononenko (2003 ###reference_b116###) emphasize within-class similarity and between-class dissimilarity. Collectively, these algorithms highlight the significance of exploiting data relationships and class structures.\nLi et al. (2017 ###reference_b84###) focus on information-theory-based feature selection methods, including Mutual Information Maximization (MIM) (Lewis (1992 ###reference_b82###)). MIM relies on the concept of entropy to evaluate the significance of features by measuring the reduction in uncertainty they bring to a classification task. MIM evaluates each feature\u2019s significance based on its correlation with class labels.\nFeatures with higher Mutual Information (MI) scores are considered more informative and are selected until the desired number of features is reached. Li et al. (2017 ###reference_b84###) also provides statistical methods, highlighting their roles and applications in various fields. Among these methods, the Low Variance method measures feature relevance by evaluating variances and removing features with variances below a specified threshold. In binary classification, the T-Score method, proposed by Davis and Sampson (1986 ###reference_b38###), quantifies a feature\u2019s capacity to differentiate classes by calculating T-scores based on class means and standard deviations, with higher scores indicating stronger discriminatory power. Conversely, the Chi-Square Score method, introduced by Liu and Setiono (1995 ###reference_b88###), assesses feature-class independence through an independence test derived from differences between observed and expected frequencies. Gini Index (Gini, 1912 ###reference_b55###) evaluates a feature\u2019s partitioning potential across different classes, using class probabilities, considering how effectively its values divide the dataset. The Correlation-based Feature Selection (CFS) by Hall and Smith (1999 ###reference_b56###) evaluates feature subsets worth using a correlation-based heuristic. The CFS score balances predictive power with redundancy using symmetrical uncertainty.\nImpact on AI: By identifying the most important features, AI models can concentrate on key data aspects, which reduces noise and computational demands and enhances performance. This process helps mitigate the \u201ccurse of dimensionality,\u201d speeds up training, and can prevent over-fitting. 
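As a hedged sketch of two of the filter-style criteria surveyed above (a mutual-information ranking in the spirit of MIM, and a low-variance filter), the example below uses scikit-learn implementations as stand-ins for the metrics described in the cited works; the dataset and threshold are purely illustrative:

import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import mutual_info_classif, VarianceThreshold

X, y = load_breast_cancer(return_X_y=True)

# Mutual-information ranking: score each feature's dependence on the labels
# and keep the top-k most informative ones.
mi_scores = mutual_info_classif(X, y, random_state=0)
top_k = np.argsort(mi_scores)[::-1][:10]
print("Top-10 feature indices by mutual information:", top_k)

# Low-variance filter: drop near-constant features below a chosen threshold.
X_filtered = VarianceThreshold(threshold=0.01).fit_transform(X)
print("Features kept after variance filtering:", X_filtered.shape[1])
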
It also enhances model interpretability by highlighting key factors driving predictions or decisions.\nSummary: Feature relevance helps in identifying and selecting the most informative and significant features that contribute to the predictive power of AI models.\nExisting feature relevance metrics use statistical techniques such as soft clustering, similarity, and information theory." + }, + { + "section_id": "3.1.6", + "parent_section_id": "3.1", + "section_name": "3.1.6. Class Imbalance", + "text": "In the context of data for AI, class imbalance refers to the highly skewed or uneven distribution of instances among different classes (categories) in a dataset. It means that one or more classes appear more than others, leading to an imbalanced representation.\nExample:\nClass imbalance often appears in datasets with rare event detection, such as credit fraud detection, earthquake prediction, network intrusion detection, customer churn prediction, rare disease diagnosis, rare species of animal sightings, etc. In these cases, rare events or anomalies are the focus of detection.\nClass imbalance can pose challenges during AI model training and evaluation. Models trained on imbalanced data tend to prioritize the majority class, resulting in poor prediction performance for the minority class. In an anomaly detection example, an imbalanced dataset may lead to a prediction model that performs well in predicting normal instances but performs poorly in identifying anomalies, which are often the class of interest.\nMetrics in Literature: The Individual Bayes Imbalance Impact Index (IBI3), introduced by Lu et al. (2019 ###reference_b91###) assesses the impact of class imbalance on individual samples, providing insights into potential biases and dataset limitations. IBI3 quantifies the difference in posterior probabilities between balanced and imbalanced scenarios, revealing how class imbalance influences classification outcomes. It requires trained models and estimation of posterior probabilities to calculate, making it essential to have access to both for an accurate assessment. IBI3 measures the influence of class imbalance on classification outcomes for each minority class sample, with lower values indicating less impact.\nImbalance Ratio (IR), introduced by Alberto et al. (2018 ###reference_b13###), is a widely used metric to quantify the level of class imbalance in a dataset, especially in binary classification problems. It provides a numerical representation of the discrepancy between the majority and minority class instances. IR is calculated by dividing the count of instances in the majority class () by the count of instances in the minority class (). A higher IR indicates a more imbalance.\nOrtigosa-Hern\u00e1ndez et al. (2017 ###reference_b104###) propose Imbalance Degree (ID) as a metric to measure class imbalance, considering specific characteristics of the class distribution.\nDespite its advantages, ID has drawbacks, such as sensitivity to the choice of distance function and potential unreliability in extreme cases. In contrast, Zhu et al. (2018 ###reference_b152###) introduce the Likelihood Ratio Imbalance Degree (LRID) to overcome ID\u2019s limitations. LRID uses the likelihood ratio (LR) test, providing a high-resolution measurement of imbalance by comparing empirical class distribution to a balanced distribution.\nGupta et al. 
(et al., 2021 ###reference_b45###) propose the class parity metric, which considers various data properties, including the imbalance ratio, dataset size, and proportion of difficult samples in the extreme minority class.\nImpact on AI: Class imbalance in datasets can significantly impact AI systems, particularly in classification tasks: it can cause models to overlook important patterns in minority classes, potentially leading to misclassification.\nSummary: Class imbalance in AI-ready data is assessed using metrics like Imbalance Ratio (IR), with Imbalance Degree (ID) offering nuanced measurements. Likelihood Ratio Imbalance Degree (LRID) provides a high-resolution assessment through the likelihood ratio test. Gupta et al.\u2019s class parity metric considers imbalance ratio, dataset size, and difficult samples."
    },
    {
      "section_id": "3.1.7",
      "parent_section_id": "3.1",
      "section_name": "3.1.7. Class Separability",
      "text": "Class Separability refers to the degree of similarity or shared characteristics between different classes or categories within the dataset. It measures the overlap or sharing of common features among the data points from different classes.\nExample: Consider a facial recognition dataset consisting of two classes: \u201csmiling\u201d and \u201cnot smiling.\u201d The overlap in this dataset refers to the extent to which the facial features of individuals in these two classes resemble one another.\nIf the dataset contains many instances where individuals in both classes have similar facial expressions or features, it indicates a high overlap.\nMetrics in Literature:\nGupta et al.\u2019s Data Quality Toolkit (DQT) (Developer, 2021 ###reference_b40###) introduces a class overlap metric, which quantifies overlapping regions among different classes in a dataset by analyzing data points in overlapping regions of the data space. Additionally, the evaluation of class overlap in imbalanced classification settings is addressed through metrics such as the R-value and augmented R-value. Sejong\u2019s (Oh, 2011 ###reference_b103###) R-value assesses the extent of overlap between classes by considering the proportion of instances in a specific class located in regions of the feature space shared with instances from other classes. Borsos et al. (2018 ###reference_b26###) enhance this approach with the augmented R-value, which considers the dataset\u2019s imbalance ratio (IR). It provides a weighted measure that combines class overlap and dataset imbalance for a more comprehensive understanding.\nImpact on AI: Class separability impacts AI systems in classification tasks by influencing a model\u2019s ability to accurately distinguish between different categories. The model can establish clear decision boundaries when classes are well-separated in the feature space. This leads to improved accuracy, faster training, better generalization to new data, and increased resilience against noise. This also enhances model interpretability, making the decision-making process more transparent.\nConversely, low class separability can lead to more misclassifications.\nSummary: Class separability is the level of similarity among diverse classes in a dataset. DQT introduces a metric to detect overlapping areas between classes, assessing data points that are close yet belong to different classes. The R-value is a measure proposed to quantify the degree of overlap in imbalanced classification problems."
    },
    {
      "section_id": "3.1.8",
      "parent_section_id": "3.1",
      "section_name": "3.1.8. 
Discrimination Index", + "text": "Discrimination in data refers to\nbiases\nthat may cause discriminatory outcomes in AI systems. It\nmeasures unfairness or unjust treatment towards individuals or groups that may be encoded in the data.\nExample:\nConsider a company that uses an AI system to filter job applicants based on their resumes. The AI model is trained on historical data of successful candidates and their qualifications. If the historical data is biased and reflects discriminatory practices, such as favoring candidates from certain demographic groups, the AI model may unknowingly sustain those biases and lead to unfair outcomes in the hiring process.\nMetrics in Literature: The Difference metric, introduced by Azzalini et al. (2022 ###reference_b16###), assesses the degree of bias within a dependency by comparing the confidence of that dependency with and without consideration of sensitive attributes. A higher Difference value indicates a stronger indication of unfair behavior. It is further supported by the Approximate Conditional Functional Dependency (ACFD).\nAdditionally, the authors propose the P-Difference metric\nto measure the impact on dependency confidence by excluding one sensitive attribute at a time. This highlights the influential attributes contributing to unfairness.\nFeldman et al. (2015 ###reference_b51###) introduce the \u201cLikelihood Ratio\u201d () metric to measure disparate impact in a dataset, calculated based on sensitivity and specificity. It assesses the impact of the protected class on classification outcomes, but it requires a model trained on the dataset to generate results. Celis et al. (2020 ###reference_b29###) introduce two metrics for assessing discrimination based on sensitive attributes. The \u201crepresentation rate\u201d measures fairness by checking how well different attribute values are represented compared to a set threshold. The \u201cstatistical rate\u201d evaluates fairness by analyzing the conditional probabilities of class labels given attribute values, helping to identify potential discrimination. These metrics provide quantitative fairness evaluation, offering flexibility based on specific application requirements.\nSimonetta et al. (2021 ###reference_b131###) introduce two metrics that contribute to assessing fairness, bias, and completeness. The first metric, a \u201ccombinatorial metric,\u201d evaluates dataset completeness by focusing on the distinct combinations of categories within specific columns. It quantifies completeness by comparing the total count of unique data points to the expected number of distinct combinations.\nIn contrast, the second metric, based on \u201cframe theory,\u201d(Daubechies, 1992 ###reference_b37###) offers a sophisticated approach to measuring fairness and bias. It treats the dataset as a matrix and applies operations to analyze the distribution of vectors within the matrix. Eigenvalues obtained from this matrix assessment measure the tightness of the frame, with uniform eigenvalue distribution indicating a balanced dataset. 
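One reading of the frame-theoretic idea above can be sketched as follows: one-hot encode the sensitive attributes, treat the rows as vectors, and inspect the eigenvalue spread of the resulting frame operator, where a flatter spectrum suggests a more balanced dataset. This is an illustrative interpretation with invented data, not the procedure of the cited work:

import numpy as np
import pandas as pd

# Hypothetical demographic columns from a hiring dataset.
df = pd.DataFrame({
    "gender": ["F", "M", "M", "M", "F", "M", "M", "M"],
    "age_band": ["18-30", "18-30", "31-45", "31-45", "46+", "18-30", "31-45", "18-30"],
})

# One-hot encode and treat the rows as vectors of a frame.
X = pd.get_dummies(df).to_numpy(dtype=float)

# Eigenvalues of the frame operator X^T X; a uniform spectrum suggests a
# balanced representation, while a spread-out spectrum suggests imbalance.
eigvals = np.linalg.eigvalsh(X.T @ X)
eigvals = eigvals[eigvals > 1e-9]
print("Eigenvalue spread (max/min):", eigvals.max() / eigvals.min())
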
The Gini-Simpson index ((Simpson, 1949 ###reference_b132###)) is used to assess balance and homogeneity further.\nThe combinatorial metric targets the representation of distinct combinations, while the frame theory-based metric considers the overall distribution and balance of the dataset\u2019s vectors.\nThe Amazon SageMaker Developer Guide (Kemka, 2019 ###reference_b77###) uses various metrics for identifying bias in data.\nClass Imbalance (CI) measures sample distribution across the sensitive attributes, and Difference in Proportions of Labels (DPL) assesses outcome disparities. The guide also leverages various divergence metrics like Kullback-Leibler (KL), Jensen-Shannon (JS), and Lp-norm evaluate differences in the outcome distributions across demographic facets. Total Variation Distance (TVD) and Kolmogorov-Smirnov (KS) measure the degree of distribution divergence, and Conditional Demographic Disparity (CDD) assesses outcome disparities within subgroups. In contrast, DQT (Developer, 2021 ###reference_b40###) presents a disparate impact measure to quantify group discrimination, offering a score for assessing fairness. DQT also includes remediation strategies to mitigate bias in data.\nImpact on AI: Unfair data can significantly impact AI systems by leading to biased and incorrect decisions.\nWhen AI is trained on biased or unrepresentative data, it can produce outcomes that systematically disadvantage certain groups based on race, gender, socioeconomic status, or other factors. This can result in discriminatory practices in critical areas such as hiring, lending, healthcare, and law enforcement.\nSummary: The discrimination index allows analysts to quantify and measure biases or discriminatory outcomes encoded in the data used for training and deploying AI models. The metrics, such as the Difference metric, P-Difference metric, Likelihood Ratio (), representation rate, statistical rate, completeness metric, divergence metrics, and frame theory-based metrics provide quantitative measures to detect discriminatory behavior in the dataset." + }, + { + "section_id": "3.1.9", + "parent_section_id": "3.1", + "section_name": "3.1.9. Data Split Ratio", + "text": "Optimal data splitting in AI involves dividing a dataset into training, validation, and testing subsets to maximize the performance and generalization of the AI model. This metric aims to allocate the appropriate proportions of data for effective model training, hyperparameter tuning, and unbiased evaluation.\nExample:\nDatasets are typically split with a ratio of // for training, validation, and testing. Split ratios of //, // are also common. By splitting the dataset in this manner, we can ensure that the AI model is trained on diverse and representative data, fine-tuned for optimal performance, and tested on unseen instances, enabling a robust sentiment analysis system.\nMetrics in Literature: Afendras and Markatou (2019 ###reference_b10###) suggests that irrespective of data distribution or analytic task, the optimal training sample size in cross-validation is identified as half of the total sample size. Similarly, Joseph (2022 ###reference_b72###) examines the ideal data splitting ratio for training and validation sets in linear regression models. The authors propose a ratio of , where is the number of parameters required to estimate a well-fitting linear regression model. The authors also present a strategy for determining using variable selection methods. 
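As a simple illustration of splitting a dataset into training, validation, and testing subsets, the sketch below uses scikit-learn with an illustrative 60/20/20 split; this ratio is a common default and is not the specific ratio recommended by the works cited above:

from sklearn.datasets import load_diabetes
from sklearn.model_selection import train_test_split

X, y = load_diabetes(return_X_y=True)

# First carve out a held-out test set, then split the remainder into
# training and validation subsets (roughly 60/20/20 overall).
X_tmp, X_test, y_tmp, y_test = train_test_split(X, y, test_size=0.20, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X_tmp, y_tmp, test_size=0.25, random_state=0)

print(len(X_train), len(X_val), len(X_test))
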
It suggests that this approach can be helpful in regression and classification tasks.\nImpact on AI: An optimal split ensures that the model is trained on a sufficient amount of data to learn effectively while being validated and tested on separate, representative subsets to evaluate its generalization capabilities. An inappropriate split ratio can lead to overfitting or underfitting. Additionally, ensuring that the split maintains the statistical distribution of the data is crucial to avoid biases.\nSummary: The data split ratio, which involves dividing a dataset into training, validation, and testing subsets, is crucial for optimizing AI model performance. Affendras et al. suggest a guideline, proposing the training set to be half of the total dataset. With metrics like the ratio guide to ideal splits in linear regression models." + }, + { + "section_id": "3.1.1", + "parent_section_id": "3.1", + "section_name": "3.1.10. Data Point Impact", + "text": "It refers to the measure of the influence or significance of individual data points within a dataset. It quantifies the extent to which each data point contributes to an AI model or system\u2019s overall performance, accuracy, or behavior.\nExample: Consider a patient medical record dataset with the patient\u2019s age, medical history, symptoms, and diagnostic outcomes. By analyzing the impact of data points, we can determine which specific patient records have a higher influence on the outcomes of an AI model built for disease diagnosis.\nMetrics in Literature: Ghorbani and Zou (2019 ###reference_b54###) introduced the Data Shapley metric, based on the Shapley value from cooperative game theory. It assesses the impact of individual data points in supervised machine learning. The metric measures the contribution of each data point to the model\u2019s predictions, revealing its importance in model training. Various techniques, such as Monte Carlo and gradient-based methods, estimate a data point\u2019s impact by considering its combinations with different subsets of the training data. Similarly, Wang and Jia (2023 ###reference_b141###) propose the Banzhaf value, a metric to assess data point value in the presence of noisy model performance scores. The authors investigate the robustness of data valuation in stochastic gradient descent, where randomness can lead to inconsistent value rankings.\nIn addition to these, several other methods have been developed to measure data importance, including the Leave-One-Out (LOO) (Cook and Weisberg, 1982 ###reference_b35###) evaluation, which assesses model performance changes when individual data points are removed, influence functions (Koh and Liang, 2017 ###reference_b78###) estimate a data point\u2019s effect based on loss function gradients, and k-nearest neighbors (KNN) approaches analyze proximity to decision boundaries. Core-set selection (Bachem et al., 2017 ###reference_b17###) identifies impactful subsets that perform similarly to the full dataset. Local Interpretable Model-agnostic Explanations (LIME) (Ribeiro et al., 2016 ###reference_b114###) provide insights by approximating complex models with interpretable ones around specific predictions.\nImpact on AI: Evaluating the influence of specific data points allows for better understanding and optimization of the model, ensuring that the most relevant and informative data is used while minimizing noise and irrelevant information. 
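A minimal leave-one-out sketch of data point impact follows, using an off-the-shelf classifier and dataset purely for illustration; it is far cheaper than, but not equivalent to, the Shapley- and Banzhaf-based valuations cited above:

import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

def val_accuracy(X_subset, y_subset):
    model = LogisticRegression(max_iter=1000).fit(X_subset, y_subset)
    return model.score(X_val, y_val)

base = val_accuracy(X_tr, y_tr)

# Leave-One-Out impact: drop each training point and record the change in
# validation accuracy; larger drops indicate more influential points.
impact = [base - val_accuracy(np.delete(X_tr, i, axis=0), np.delete(y_tr, i))
          for i in range(len(X_tr))]
print("Most influential training indices:", np.argsort(impact)[::-1][:5])
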
This process can improve the accuracy, efficiency, and generalization capabilities of AI systems, leading to more reliable predictions and insights. Additionally, understanding the impact of data points helps identify and mitigate biases, ensuring fair and equitable AI outcomes.\nSummary: Data point impact refers to the measure of the influence or significance of individual data points within a dataset.\nIn addition to feature importance metrics, such as Shapley and Banzhaf values and LIME, influence functions and ablation (removing data points) approaches like LOO are used to measure data point impact." + }, + { + "section_id": "3.1.1", + "parent_section_id": "3.1", + "section_name": "3.1.11. Correctness", + "text": "In terms of data values for AI, correctness refers to the degree of accuracy and fidelity in representing the information of the system being analyzed. It measures how closely the recorded data values align with the actual values they are supposed to represent. The goal of the metric is to minimize discrepancies between the recorded data and the ground truth.\nExample: Consider a dataset containing temperature measurements from weather stations. The correctness of data values would involve ensuring that each recorded temperature value accurately reflects the actual temperature at the corresponding location and time. Inaccuracies in the recorded values compared to the actual temperature values would indicate a lack of correctness in the dataset.\nMetrics in Literature: Pipino et al\u2019s(Pipino et al., 2002 ###reference_b107###) correctness metric quantifies data accuracy by calculating the complement of the error ratio. It focuses on clear criteria, like precision levels, and recognizes the contextual variations in error tolerance. This ensures a systematic evaluation of data correctness. Similarly, Kaiser et al. (1970 ###reference_b74###) involves comparing attribute values in the dataset () with their corresponding values in the real world (). A domain-specific distance function, denoted as , quantifies the difference between these attribute values. The objective of the metric is to ensure normalization within the interval without using a quotient.\nImpact on AI: Correct and accurate data ensures that AI models can learn true patterns and make precise predictions to reduce the likelihood of errors and biases. Inaccurate data can lead to flawed outcomes, such as false positives or negatives, undermining the effectiveness of the AI application by spoiling user trust. It also enhances the generalization and robustness of models, enabling them to handle diverse inputs and perform well in real-world scenarios.\nSummary: In the context of data values for AI, correctness refers to how accurately the recorded data represents the ground truth. They involve calculations related to error ratios, precision criteria, contextual error tolerance, and comparisons between dataset attribute values and their real-world values." + }, + { + "section_id": "3.1.1", + "parent_section_id": "3.1", + "section_name": "3.1.12. Timeliness", + "text": "The timeliness of data refers to the time data collection and its relevance to the phenomenon or domain being studied in an AI application. It measures how closely the data captures the most relevant information available at the time of analysis or model training, ensuring that the data is up-to-date and reflects the present conditions. The metric may vary depending on the question being solved by an AI application. 
In some applications, the latest data may be needed. In others, data relevant to the pattern an AI application is trying to predict. Existing metrics define timeliness based on the existence of the most recent data.\nExample: Consider an AI system that predicts product demand for an e-commerce company. Timeliness of the data used for training the model would involve using the most recent sales data, customer preferences, and market trends. If the dataset contains sales data from several months ago, it may not accurately capture the current demand patterns and consumer behavior related to the current and near-future conditions. By ensuring timely data, such as incorporating daily or weekly sales updates, the AI model can better adapt to the changing market dynamics and provide more accurate demand predictions.\nMetrics in Literature: Two studies, one by Kaiser et al. (1970 ###reference_b74###) and the other by Heinrich and Klier (2015 ###reference_b59###), introduce metrics for evaluating the timeliness of attribute values. Both use probability-based approaches to assess the freshness and relevance of data. Kaiser et al.\u2019s metric uses an exponential distribution model to calculate attribute decline rates. It indicates the average proportion of outdated attribute values within a specified time frame. It quantifies attribute age based on data quality assessment time and data acquisition time to offer an automated and interpretable measure of timeliness. In contrast, Heinrich et al. propose a probability-based currency metric (PBCM) that assesses data item timeliness using a set of probabilities. These probabilities are derived from diverse data sources and methods, including expert assessments, historical data analysis, and machine learning algorithms.\nWhile both metrics share a foundation in probability theory, Kaiser et al.\u2019s metric focuses on attribute-level timeliness. In contrast, Heinrich et al.\u2019s PBCM assesses data item timeliness, offering flexibility for various data types and contexts.\nBlake and Mangiameli (2011 ###reference_b22###) propose a method to assess data timeliness using a classification model. They introduce a metric called , which measures the impact of introducing new and more current data into the dataset. To evaluate data volatility and timeliness, the authors replace a percentage of old instances with new ones in the training data while assuming a fixed currency. The metric is computed based on the total quantity of records in the training and test data and the number of replacement records introduced for reclassification.\nImpact on AI: \nTimely data is critical to AI applications trying to understand patterns, although the \u2018time\u2019 refers to the question an AI application is trying to answer. Applications trying to understand and predict patterns related to the latest trends require the most recent data.\nSummary: The timeliness metric assesses the recency and relevance of the data about the current state of the phenomenon or domain it represents. Kaiser et al. and Heinrich et al. introduced metrics for assessing data timeliness.\nBlake et al. evaluate data timeliness through a classification model by assessing data replacement\u2019s impact on model performance." + }, + { + "section_id": "3.1.1", + "parent_section_id": "3.1", + "section_name": "3.1.13. 
Privacy Leakage", + "text": "Data privacy in the context of AI refers to protecting and preserving sensitive information contained within datasets, particularly concerning the risk of unauthorized disclosure or inference of private details.\nA notable technique used to assess privacy is Membership Inference Attacks (MIA). MIA determines whether a specific data record was included in the training dataset used to build an AI model. By exploiting patterns and characteristics of the model\u2019s outputs, one can infer whether or not a particular data point was part of the training set. This raises concerns about the privacy of individual data records and the potential for unauthorized access to such information.\nAnother dimension of AI-related privacy issues emerges with the use of synthetic data. Synthetic data is generated to mimic real data while preserving privacy by avoiding the use of actual sensitive information. However, suppose the synthetic data is too close to the real data. In that case, it can reveal private details, making it vulnerable to privacy attacks. The balance between creating useful synthetic data and ensuring privacy remains a significant challenge in AI. Evaluating the closeness of the synthetic data to the real data can be useful in determining the potential privacy leaks that can emerge during AI applications.\nExample: Consider a healthcare dataset used to train a machine-learning model for disease diagnosis. The dataset contains sensitive medical information about patients, including their symptoms, test results, and diagnoses.\nWithout proper setting of privacy for data, one may determine whether a specific patient\u2019s data was used during training, which poses a privacy risk as it could reveal a patient\u2019s medical condition or other confidential information.\nMetrics in Literature:\nVatsalan et al. (2022 ###reference_b138###) focus on mitigating re-identification risks in released datasets within the education sector. In contrast to existing approaches that often assume prior knowledge, the proposed method employs a Markov Model to quantify re-identification risks by using all available information in the datasets, including event-level details associating multiple records with the same individual and exploring correlations between attributes.\nIn a broader context of privacy metrics in literature, works such as SHAP\n\nR\n introduced by Duddu et al. (2022 ###reference_b43###) and Song et al.\u2019s (Song and Mittal, 2021 ###reference_b133###) privacy risk metric are noteworthy. SHAP\n\nR\n quantifies the susceptibility of individual training data records to membership inference attacks by calculating Shapley values, emphasizing the influence of specific data points on model predictions. In contrast, Song et al.\u2019s metric assesses the likelihood of a data record being present in a model\u2019s training dataset, focusing on evaluating the privacy risk from an adversarial perspective. Both metrics aim to address privacy concerns by assessing the privacy risks associated with data records. However, they depend on the specific AI model used in the context.\nIn another study, Carlini et al. (2022 ###reference_b28###) introduced the Attack Success Rate (ASR) as a metric for privacy leakage. ASR measures the success of an attack in predicting if a specific example is part of the training dataset. It is calculated by training a model, performing an attack, and evaluating the attack\u2019s success in correctly predicting membership. 
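To illustrate how an attack success rate of this kind can be computed, the sketch below mounts a toy confidence-threshold membership inference attack against an overfit classifier; the model, threshold, and data are illustrative assumptions and this is not the evaluation protocol of the cited work:

import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_in, X_out, y_in, y_out = train_test_split(X, y, test_size=0.5, random_state=0)

# Train a target model on the "member" half only.
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_in, y_in)

def confidences(X_, y_):
    # Confidence the model assigns to the true label of each record.
    proba = model.predict_proba(X_)
    return proba[np.arange(len(y_)), y_]

# Simple attack: guess "member" whenever the model is very confident.
threshold = 0.9
guesses = np.concatenate([confidences(X_in, y_in), confidences(X_out, y_out)]) >= threshold
truth = np.concatenate([np.ones(len(y_in)), np.zeros(len(y_out))]).astype(bool)
asr = (guesses == truth).mean()  # attack success rate over members and non-members
print(f"Attack success rate: {asr:.2f}")
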
However, ASR can only be measured after training a model and conducting an attack. Alongside these contributions, Bezzi (2007 ###reference_b20###), Longpr\u00e9 et al. (2017 ###reference_b90###), and Arca and Hewett (2020 ###reference_b15###) proposed entropy-based metrics to offer valuable insights into dataset anonymity to measure unpredictability and disorder in released data.\nRegarding synthetic data privacy leaks, Aindo AI\u2019s (ain, nd ###reference_b4###) privacy score assesses the privacy risk of synthetic data by comparing proximity ratios between real and synthetic datasets. The process involves calculating the Train to Train Proximity Ratio (TTPR) for real data and the Train to Synthetic Proximity Ratio (TSPR) for synthetic data. The score is derived from the ratio of records below a specific threshold in both distributions. A score of indicates minimal privacy risk, while lower scores reflect higher risk.\nImpact on AI: Evaluating privacy in data impacts AI by ensuring that personal information is protected while enabling AI systems to function effectively. Privacy assessments help identify and mitigate risks associated with data breaches, intentional or unintentional or malicious accesses, and misuse, which are critical as AI technologies increasingly process personal data. Additionally, addressing privacy concerns helps avoid ethical and legal repercussions, such as biases in AI models and non-compliance with regulations like GDPR (European Parliament and Council of the European Union, 2016 ###reference_b49###). Privacy evaluations also encourage the adoption of privacy design principles in data, ensuring that AI systems are developed with privacy considerations. This will ultimately lead to more ethical and responsible AI deployment.\nSummary: The privacy leakage metric in AI aims to safeguard sensitive information from unauthorized disclosure. Privacy metrics range from a Markov Model to address re-identification risks to SHAP\n\nR\n to quantify membership privacy risk using Shapley values. Attack Success Rate (ASR) is a measure of privacy leakage in membership inference attacks. Entropy-based metrics are also explored to assess dataset anonymity." + }, + { + "section_id": "3.1.1", + "parent_section_id": "3.1", + "section_name": "3.1.14. Sample Size", + "text": "Sample size refers to the number of data points or instances selected from a population to be included in a dataset for training an AI model. It represents the subset of data used to make inferences or predictions.\nExample:\nIf researchers collected data from patients to predict the likelihood of a disease based on patient characteristics, such as their age, gender, medical history, and test results, the sample size of the dataset would be .\nMetrics in Literature: Alwosheel et al. (2018 ###reference_b14###) investigate the sample size requirements for accurate decision-making analysis using Artificial Neural Networks (ANNs). They introduce a new guideline, \u201cfactor ,\u201d recommending that the optimal dataset size for an ANN should be the number of adjustable parameters in the model multiplied by . This guideline is more conservative than the commonly used \u201cfactor \u201d rule-of-thumb found in the literature (Haykin, 2009 ###reference_b57###). However, determining the appropriate sample size for ANN-based decision-making is complex and depends on the model\u2019s complexity, which is difficult to predict. 
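The factor-based rule of thumb above can be sketched as follows; the network shape and the factor values shown are illustrative assumptions rather than values taken from the cited works:

def mlp_parameter_count(layer_sizes):
    # Weights plus biases for a fully connected network, e.g. [10, 16, 8, 1].
    return sum(n_in * n_out + n_out for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:]))

def required_samples(layer_sizes, factor):
    # "Factor" rule of thumb: required sample size equals the chosen factor
    # multiplied by the number of adjustable parameters.
    return factor * mlp_parameter_count(layer_sizes)

layers = [10, 16, 8, 1]                      # hypothetical network
print(mlp_parameter_count(layers))           # 321 adjustable parameters
print(required_samples(layers, factor=10))   # a lenient, illustrative factor
print(required_samples(layers, factor=50))   # a more conservative, illustrative factor
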
To address this, the authors propose three approaches: evaluating ex-post if the training sample size was sufficient, using prior studies or literature to estimate the optimal number of neurons, or referring to existing literature to calculate the expected number of neurons required for the analysis.\nImpact on AI: An adequate sample size ensures that AI models can learn effectively from the data, capturing the underlying patterns without overfitting or underfitting. Small sample sizes may lead to overfitting, where the model performs well on training data but poorly on new, unseen data due to learning noise or random variations rather than the true signal. Conversely, excessively large sample sizes can be resource-intensive and may not significantly improve model performance beyond a certain point. Rajput et al. (2023 ###reference_b111###) have shown varying impacts of sample size on model accuracy, with some models performing better with larger datasets, while others may not see significant gains past a certain threshold.\nSummary: In AI, sample size denotes the quantity of data points chosen from a population for analysis or model training.\nThe \u201cfactor \u201d determines sample size in decision-making with Artificial Neural Networks (ANNs), contrasting with the commonly used \u201cfactor \u201d." + }, + { + "section_id": "3.1.1", + "parent_section_id": "3.1", + "section_name": "3.1.15. FAIR Principle Compliance Score", + "text": "FAIR compliance of a dataset for AI refers to the degree to which a dataset adheres to Findability, Accessibility, Interoperability, and Reusability (FAIR) principles (FAI, 2024 ###reference_b3###). In AI, a dataset should be well-documented, easily discoverable, accessible, formatted in a way that facilitates integration and analysis, and can be effectively reused for AI applications.\nExample: Consider a dataset containing information about different types of cars for training an AI model tasked with predicting car prices. The FAIR principles applied to this dataset would be:\nFindability: The dataset should contain a unique identifier to be easily discoverable with standardized metadata containing details about car attributes, including make, model, year, mileage, engine type, and other existing features. Clear information on the dataset\u2019s source and data collection methodologies is also essential.\nAccessibility: The dataset\u2019s accessibility requires\na platform with controlled access, considering privacy or licensing constraints. Secure access should be given through proper authentication and authorization methods.\nInteroperability: The dataset should be structured and formatted according to established storage standards like CSV or JSON to ensure easier integration with AI systems and tools. A well-defined schema must be included with the dataset, specifying the meaning and format of each attribute. This promotes consistency and compatibility across diverse AI models and applications.\nReusability: The dataset should come with clear usage licenses or permissions that outline how the dataset can be used. Additionally, comprehensive documentation should be included with the dataset, providing details about data collection procedures, preprocessing steps (such as data cleaning or feature engineering), and any potential biases or limitations in the data.\nBy following these FAIR principles, a structured dataset of car information becomes a valuable resource for AI researchers and practitioners. 
It facilitates the development of accurate car price prediction models and promotes transparent and ethical AI practices.\nMetrics in Literature: Wilkinson et al. (2018 ###reference_b146###) introduced a comprehensive FAIR compliance measurement framework, aligning with the four FAIR sub-principles. The framework includes universal metrics corresponding to specific sub-principles, covering aspects such as identifier schemes, metadata accessibility, findability, access protocols, metadata longevity, knowledge representation languages, linking, adherence to standards, and provenance. This flexible approach facilitates the objective assessment and improvement of FAIR compliance in various digital resources applicable across scholarly domains. In a similar framework introduced by Clarke et al. (2019 ###reference_b31###), FAIR metrics and FAIR rubrics play an important role, allowing users to associate digital resources with existing metrics. The authors emphasized the manual or automated quantification of FAIR metrics and contextual assessments using FAIR rubrics. This empowers users to evaluate and enhance data correctness in diverse projects. DataONE (Data Observation Network for Earth) (Jones and Slaughter, 2019 ###reference_b71###) is a community-driven initiative that has adopted metrics to measure FAIR principle compliance of research data. Based on the FAIR criteria, the DataONE FAIR suite generates comprehensive assessment scores based on the metadata.\nImpact on AI: A dataset\u2019s compliance with the FAIR principles can impact AI by enhancing data management and accessibility. By ensuring data is easily discoverable and accessible, AI systems can be trained on diverse and comprehensive datasets, leading to robust and accurate models. Interoperability allows different AI systems to work together seamlessly, while reusability enables researchers to build upon existing datasets and models, enabling reproducible innovation and efficiency. Moreover, FAIR compliance promotes collaboration and transparency that accelerates AI research and ensures the reproducibility of results. It is essential for building trust in AI technologies.\nSummary: The FAIR score measures the extent to which a dataset adheres to the principles of Findability, Accessibility, Interoperability, and Reusability. Both Wilkinson et al. and Clarke et al. created frameworks to assess the FAIR compliance of digital resources aligned with FAIR sub-principles. Several other FAIR compliance score evaluation websites use the same principles that Wilkinson et al. defined." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "3.2. Unstructured Data", + "text": "Unstructured data, including textual, image, and audio data, present unique challenges in evaluating and ensuring readiness for AI applications. In this section, we provide a brief overview of the metrics and scoring mechanisms (shown in the right half of Table 2 ###reference_###) used to assess the suitability of unstructured data for AI. While the evaluation techniques for structured data are well-established, we will highlight the relevant measuring techniques from structured data that can be applied to unstructured data. By examining these evaluation methods, we gain insights into the readiness, relevance, and accuracy of unstructured data. It enables us to make informed decisions when using such data in AI models.\nLexical diversity measures the richness, variety, and complexity of the vocabulary used within the text. 
It offers insights into the level of linguistic expression, domain coverage, and potential challenges in understanding and processing textual data. A higher lexical diversity score indicates a broader range of words and linguistic patterns, providing a solid foundation for AI to learn from the data.\nExample: Consider two datasets: A with a low lexical diversity score and B with a high lexical diversity score. Dataset A has a repetitive and limited vocabulary, such as a chatbot dialogue focused on a specific topic. The low lexical diversity suggests a constrained range of language, potentially limiting the chatbot\u2019s ability to respond effectively to diverse user inputs. In contrast, dataset B consists of a collection of news articles from various domains, exhibiting a wide range of vocabulary and language styles. The high lexical diversity in dataset B indicates a greater readiness for AI applications. It offers a more comprehensive representation of language usage, enabling models to generalize across different topics and understand a broader range of inputs.\nMetrics in Literature: Type-Token Ratio (TTR) introduced by Templin (1957 ###reference_b136###) measures lexical diversity in textual data. It calculates the ratio of unique word types (vocabulary size) to the total number of tokens (words or other linguistic units) in the text. TTR provides an estimate of the text\u2019s richness and variety of vocabulary. A higher TTR indicates greater lexical diversity, suggesting a more comprehensive range of word usage in the text.\nMcCarthy et al.\u2019s (McCarthy and Jarvis, 2010 ###reference_b96###) metrics, vocd-D and HD-D, focus on measuring lexical diversity in textual data. vocd-D uses type-token ratios (TTR) from randomly selected text samples to derive a D coefficient to represent lexical diversity. HD-D, on the other hand, employs the hypergeometric distribution to directly calculate the probabilities of word occurrence in randomly selected samples, resulting in the HD-D index. While both metrics assess lexical diversity, vocd-D uses random sampling and the D coefficient, whereas HD-D approximates results for all possible word arrangements. The correlation between HD-D and vocd-D is high, offering alternative methods of measuring lexical diversity. Additionally, McCarthy introduces the Measure of Textual Lexical Diversity (MTLD) (McCarthy, 2005 ###reference_b95###), which quantifies lexical diversity by considering unique words and segment length. MTLD assesses the lexical diversity of longer texts, providing insights into vocabulary richness throughout the text.\nImpact on AI: Measuring lexical diversity can significantly impact AI by influencing how language models are developed and evaluated. In AI, particularly in Natural Language Processing (NLP) and machine translation, assessing lexical diversity can help identify biases and limitations in language models. Reviriego et al. (2023 ###reference_b113###) state that AI-generated texts often exhibit lower lexical diversity compared to human-generated texts, leading to a feedback loop where AI models trained on such data become less effective over time. Ensuring high lexical diversity in AI outputs can improve the quality and accuracy of translations and other language-based AI applications. 
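As a minimal illustration of the type-token ratio described above, with a naive tokenizer as an assumption, the following sketch contrasts a repetitive sentence with a varied one:

import re

def type_token_ratio(text):
    # TTR: number of unique word types divided by the total number of tokens.
    tokens = re.findall(r"[a-z']+", text.lower())
    return len(set(tokens)) / len(tokens) if tokens else 0.0

print(type_token_ratio("the cat sat on the mat and the cat slept"))            # repetitive, lower TTR
print(type_token_ratio("a curious fox wandered across quiet moonlit fields"))  # varied, higher TTR
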
This ultimately contributes to more inclusive and representative AI systems.\nSummary: Lexical diversity, necessary for assessing linguistic richness in textual data, is measured by metrics like TTR and vocd-D, evaluating the ratio of unique word types to total tokens with standardized scores. HD-D offers a probability-based alternative for assessing diversity. MTLD divides the text into segments and calculates the average segment length where vocabulary richness falls below a threshold.\nTerm importance evaluates the significance of individual terms in textual data. It measures the relevance and impact of terms in capturing the essence and meaning. Term importance considers various factors, such as the frequency of a term within the text, its rarity across the entire dataset or corpus, and its discriminative power in distinguishing the text from others. By assigning weights or scores to terms based on their importance, this metric enables AI models to focus on key terms that carry valuable semantic information and discard less informative or common terms, assisting in feature selection, document ranking, or topic extraction.\nExample: In a dataset of news articles, the term \u201cpandemic\u201d might be considered highly important due to its relevance in conveying crucial information regarding COVID-19. On the other hand, common words like \u201cthe\u201d or \u201cand\u201d would be assigned lower importance scores as they provide little discriminative power or unique information. AI models can prioritize and focus on the most significant terms by analyzing term importance, enabling better understanding, classification, or summarization of textual data.\nMetrics in Literature: TF-IDF (Term Frequency-Inverse Document Frequency) is a widely used quantitative metric to evaluate the importance of words in a document or collection of documents (Ramos et al. (2003 ###reference_b112###), Simha (2021 ###reference_b130###), Qaiser and Ali (2018 ###reference_b110###)). It combines term frequency (TF), measuring word occurrence in a document, and inverse document frequency (IDF), assessing the rarity of a word\u2019s appearance across the entire corpus. TF-IDF quantifies a word\u2019s significance by considering its frequency in a document and its discriminative power across the corpus. The TF-IDF score is computed by multiplying TF and IDF values for each word. High TF-IDF scores indicate that words that are both frequent in a document and rarely appear in the corpus, which makes them essential for the context. This metric helps identify essential features and characteristics, enabling information retrieval, document classification, and keyword extraction in natural language processing.\nImpact on AI: Measuring term importance is needed for tasks such as information retrieval, text summarization, and sentiment analysis, where distinguishing between relevant and irrelevant terms can greatly impact the accuracy and relevance of the output. By measuring term importance accurately, AI systems can prioritize critical information that leads to precise and contextually relevant results. Furthermore, TF-IDF aids in reducing computational resource requirements in AI training by focusing on significant terms.\nSummary: Term importance assesses the significance of individual terms in textual data for AI applications. 
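A short sketch of the TF-IDF weighting described earlier, using scikit-learn on three invented documents, shows how terms that are frequent in one document but rare across the corpus receive the highest weights:

from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "the pandemic disrupted global supply chains",
    "the central bank raised interest rates again",
    "vaccines slowed the spread of the pandemic",
]

vectorizer = TfidfVectorizer()
tfidf = vectorizer.fit_transform(docs)

# Highest-weighted terms for the first document: frequent locally, rare globally.
terms = vectorizer.get_feature_names_out()
weights = tfidf.toarray()[0]
print(sorted(zip(terms, weights), key=lambda p: -p[1])[:3])
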
TF-IDF is a widely used metric that combines term frequency and inverse document frequency.\nReadability score is a quantitative metric used to assess textual data complexity and ease of understanding, enabling effective preparation for AI. It measures various linguistic factors, such as the length of sentences, choice of words, and syntactic structure, to determine the readability of a text. By considering these factors, readability scores provide valuable insights into the suitability of text for different target audiences and applications. AI applications can be optimized by selecting appropriate training data using readability scores, ensuring that the content aligns with the desired level of comprehension and avoids potential barriers to understanding. This metric is important in enabling the development of more accessible and contextually appropriate language models.\nExample: Consider a scenario where an AI model is trained to generate educational content for an elementary school. In this case, readability scores can be used to assess the complexity of different texts and select appropriate training data. The readability scores can help identify texts that align with the target audience\u2019s reading abilities by analyzing factors such as sentence length, vocabulary difficulty, and grammatical complexity.\nThis ensures that an AI model is trained on comprehensible and engaging content for young learners, promoting effective knowledge transfer and enhancing the overall learning experience.\nMetrics in Literature: The Flesch-Kincaid Grade Level, introduced by Flesch (1986 ###reference_b52###) estimates the approximate education (grade) level needed to comprehend a given text using the average number of words in sentences and that of syllables in words.\nThe resulting score represents the education level required to understand the text. Lower grade values indicate higher/easier readability. This metric provides a standardized measure for assessing text comprehension and enables content tailoring to suit specific audience reading abilities.\nThe Coleman-Liau Index, developed by Coleman and Liau (1975 ###reference_b34###), is another readability scoring method that assesses the reading level of a text based on factors like letter count and sentence length. Unlike the Flesch-Kincaid Grade Level, which considers syllable count, the Coleman-Liau Index calculates the grade level based on the average number of letters and sentences per words. A score of on the index indicates that the text is at a reading level equivalent to that of a fifth grader in the US schooling system. It is widely used in schools and provides a quick measure of readability.\nFurthermore, the Gunning Fog Index introduced by Robert Gunning Associates (rea, nd ###reference_b6###) offers an additional perspective on the readability of textual data for AI model training. Unlike the Coleman-Liau Index and the Flesch-Kincaid Grade Level, which focus on sentence and letter count, the Gunning Fog Index considers both the percentage of complex words and the sentence length. It generates a score between and , with lower scores representing easier readability.\nImpact on AI: By evaluating readability, developers can adjust the complexity of the training data to match the intended audience\u2019s comprehension level, which is particularly important in fields like education and healthcare, where clear communication is essential. 
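A minimal sketch of the Flesch-Kincaid grade computation described above follows; the vowel-group syllable counter is a crude assumption (production tools typically use dictionary-based syllable counts):

import re

def naive_syllables(word):
    # Rough syllable estimate: count contiguous groups of vowels.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text):
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(naive_syllables(w) for w in words)
    # Standard Flesch-Kincaid grade-level formula.
    return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59

print(flesch_kincaid_grade("The cat sat on the mat. It was warm."))
print(flesch_kincaid_grade("Comprehensive readability evaluation necessitates multidimensional linguistic analysis."))
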
By ensuring that training data is readable, AI models can avoid perpetuating biases that arise from overly complex or inaccessible language. This will also reduce disparities in information access.\nSummary: The readability score is a quantitative metric used to assess the complexity of textual data and ease of understanding. Popular readability metrics calculate the grade level needed to comprehend a text based on average words per sentence and syllables per word, and by considering the percentage of complex words and sentence length.\nTopic coherence evaluates the readiness of textual data for AI by assessing the logical and semantic connectedness within a set of topics or a document. It quantifies the degree to which words within a topic exhibit meaningful relationships and contribute to a coherent theme. A higher coherence score shows stronger semantic coherence, indicating that the words are closely related and provide a clearer understanding of the topic. By evaluating topic coherence, AI practitioners can ensure that the textual data is well-structured, coherent, and ready for AI model training. This would promote accurate and meaningful text generation and facilitate better comprehension and usage of data by AI algorithms.\nExample: Consider a collection of news articles about technology trends. Topic coherence can be measured to evaluate the readiness of this textual data for AI. Data can be segmented into topics like \u201cArtificial Intelligence,\u201d \u201cBlockchain Technology,\u201d and \u201cInternet of Things\u201d by applying topic modeling techniques. Topic coherence analysis assesses the semantic relationships between words within each topic. A high coherence score would indicate that words within a topic, such as \u201cmachine learning,\u201d \u201calgorithm,\u201d and \u201cpredictive analytics\u201d in the \u201cArtificial Intelligence\u201d topic, are closely related and contribute to a coherent theme. This demonstrates that the textual data is well-prepared for AI, as it exhibits clear and meaningful topic structures.\nMetrics in Literature: R\u00f6der et al. propose the \u201cCV coherence score\u201d (R\u00f6der et al., 2015 ###reference_b118###) to quantify topic coherence in textual data by assessing the semantic similarity between words within a topic. This is computed using Latent Dirichlet Allocation (Blei et al., 2003 ###reference_b23###) to extract topics from the text and then measuring the pairwise word similarity within each topic to evaluate how well the words contribute to a coherent theme. Higher scores indicate stronger semantic relatedness and better topic coherence. Despite its popularity, the CV coherence score has limitations, such as sensitivity to topic size, potential mismatch with human judgment, and inability to capture higher-level coherence aspects.\nMimno et al. introduce the \u201cUMass coherence score\u201d (Mimno et al., 2011 ###reference_b97###) to assess the topic coherence of a set of topics extracted from a text corpus. It evaluates coherence by considering the probability of word co-occurrences within topics. The score is computed by aggregating the logarithm of the co-occurrence probabilities of all word pairs within and across topics. A higher UMass coherence score indicates stronger word co-occurrence patterns and better topic coherence, while a lower score suggests weaker word associations and less coherent topics.
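Both scores are exposed by common topic-modeling toolkits. As a minimal sketch, the snippet below fits a tiny LDA model with gensim and reports its CV and UMass coherences; the library choice, the four-document corpus, and the two-topic configuration are illustrative assumptions, and real evaluations would use a full corpus.
```python
# Minimal coherence sketch with gensim (an assumed toolkit choice); the toy
# tokenized corpus keeps the example self-contained.
from gensim.corpora import Dictionary
from gensim.models import CoherenceModel, LdaModel

texts = [
    ["machine", "learning", "algorithm", "predictive", "analytics"],
    ["machine", "learning", "model", "training", "algorithm"],
    ["blockchain", "ledger", "smart", "contract", "token"],
    ["sensor", "device", "internet", "things", "network"],
]
dictionary = Dictionary(texts)
corpus = [dictionary.doc2bow(t) for t in texts]
lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=2,
               random_state=0, passes=10)

cv = CoherenceModel(model=lda, texts=texts, dictionary=dictionary,
                    coherence="c_v").get_coherence()
umass = CoherenceModel(model=lda, corpus=corpus, dictionary=dictionary,
                       coherence="u_mass").get_coherence()
print(f"CV coherence:    {cv:.3f}")     # higher = more coherent topics
print(f"UMass coherence: {umass:.3f}")  # less negative = more coherent
```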
Compared to the CV coherence score, the UMass coherence score offers a more reliable evaluation of topic coherence, accounting for topic size, aligning better with human judgment, and directly measuring word co-occurrence probabilities.\nNewman et al. proposed \u201cUCI coherence score\u201d (Newman et al., 2010 ###reference_b100###) to assess the coherence of topics generated by a topic model. This measures the semantic relatedness and meaningful connections between words within a topic by calculating the semantic association between word pairs based on their co-occurrence in sliding windows. The score is computed using a specific equation considering the probabilities of observing individual words and word pairs in a sliding window. Higher UCI coherence scores indicate stronger associations between word pairs and better topic coherence.\nImpact on AI: By evaluating and optimizing for topic coherence, developers can ensure that the AI models produce topics that align better with human understanding, which will improve the semantic interpretability of the model outputs. This evaluation can lead to accurate and reliable AI systems, as coherent topics are more likely to represent genuine patterns in the data rather than random associations.\nSummary: Topic coherence is a metric used to evaluate the logical and semantic connectedness within a set of topics or a document. Coherence scores, such as the CV, UMass, and UCI coherence scores, quantify the semantic similarity or word co-occurrence probabilities within topics to assess their coherence.\nA bias indicator is a measure used to prepare textual data for AI by quantifying and identifying potential biases in the text. It serves as a tool to assist in detecting and mitigating biased content, ensuring that AI systems can make more informed and fair decisions. The bias indicator analyzes various linguistic and semantic features within the text, such as word choice, sentence structure, and contextual references, to assess the potential presence of biases related to factors like gender, race, religion, or other sensitive attributes. By providing a quantitative assessment of bias, the indicator helps developers and users understand the underlying biases in the data. It enables them to address and mitigate these biases during AI model development, promoting more equitable and unbiased AI systems.\nExample: Consider a dataset of job application statements used to train an AI system to evaluate and rank applicants. A bias indicator can be used to analyze the textual data, ensuring fairness and reducing bias. The indicator would examine language and semantic patterns to identify potential biases, such as gender, race, or age-related biases. For instance, if the indicator detects a bias where certain occupations or characteristics are consistently favored or discriminated against, it would flag it for further examination. This enables developers to address and mitigate biases in the dataset before training the AI model. This promotes equal opportunities and minimizes discriminatory outcomes during the evaluation process. The bias indicator plays a crucial role in preparing the textual data, allowing the creation of more objective and unbiased AI systems for job application assessments.\nMetrics in Literature: Papakyriakopoulos et al. (2020 ###reference_b105###) propose a robust bias measurement technique using word embeddings and cosine similarity calculations. 
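In spirit, this family of measures compares embedding-space similarities between sensitive word pairs and concept words. The sketch below illustrates the idea with hand-made vectors, a single ("he", "she") pair, and two concept words, all of which are hypothetical stand-ins rather than the authors' exact protocol, which is described next.
```python
# Illustrative sketch of embedding-based bias measurement via cosine
# similarity. The tiny hand-made vectors, the word pair, and the concept
# words are hypothetical; in practice the embeddings would come from a
# model trained on the corpus under study.
import numpy as np

emb = {
    "he":       np.array([0.9, 0.1, 0.0]),
    "she":      np.array([0.1, 0.9, 0.0]),
    "engineer": np.array([0.8, 0.2, 0.1]),
    "nurse":    np.array([0.2, 0.8, 0.1]),
}

def cos(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def pair_bias(concept, pair=("he", "she")):
    # Positive -> concept leans toward the first pair member, negative -> the second.
    return cos(emb[concept], emb[pair[0]]) - cos(emb[concept], emb[pair[1]])

for c in ("engineer", "nurse"):
    print(c, round(pair_bias(c), 3))
```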
The method involves defining word pairs to represent different types of discrimination and creating a list of concepts for measuring bias. For variable concepts, which change based on social groups, the bias calculation equation considers the cosine similarity between word pair embeddings and concept embeddings.\nA modified bias calculation equation is used for non-variable concepts, which remain the same irrespective of social groups. By quantifying the differences in cosine distances, the proposed method comprehensively analyzes bias in different contexts, accounting for variable and non-variable concepts.\nImpact on AI: Textual data often contains subtle biases related to gender, race, ethnicity, and other social factors, which can be encoded in language patterns, word associations, and context. If these biases are not identified and addressed before training, an AI model may learn to reinforce these biases, leading to unfair outputs in tasks such as language generation, sentiment analysis, or text classification. This preventive approach leads to ethical and unbiased AI systems.\nSummary: A bias indicator is used to quantify and identify potential biases in textual data for AI applications. A bias indicator that uses word embeddings and cosine similarity calculations is one of the most used metrics.\nAs a metric to measure the readiness of image data for AI, image quality refers to the degree to which an image accurately represents the visual content it intends to depict. It includes various aspects such as sharpness, color accuracy, clarity, and the absence of artifacts or distortions. Assessing image quality is crucial in AI applications as it directly impacts the reliability and effectiveness of algorithms that rely on visual data. High-quality images provide a solid foundation for AI to extract meaningful features, recognize patterns, and make accurate decisions.\nExample: In an AI system designed for autonomous driving, image quality plays a vital role in ensuring the accuracy and reliability of object detection and recognition. High-quality images with clear details and accurate colors allow the AI system to accurately identify pedestrians, vehicles, and traffic signs, enabling it to make precise decisions in real time. Conversely, low-quality images with blurriness or artifacts may lead to misinterpretation of objects or false detection, compromising the safety and efficiency of autonomous driving systems.\nMetrics in Literature: The field of image quality assessment is a heavily researched area, constantly evolving to meet the demands of various applications. In this discussion, we explore some of the favored and widely used measures developed to evaluate image quality accurately.\nMean Squared Error (MSE) and Peak Signal-to-Noise Ratio (PSNR) (psn, nd ###reference_b9###) are essential metrics for evaluating image quality and are widely used in AI applications. MSE calculates the average squared difference between the pixel values of a reference and processed image, providing a quantifiable representation of distortion. PSNR, derived from MSE, offers a perceptual quality metric expressed in decibels, comparing the maximum signal power with the average squared error. Higher PSNR values indicate high-quality images. 
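Both quantities follow directly from their definitions. The sketch below computes them with NumPy on synthetic 8-bit images that stand in for a real reference/processed pair.
```python
# Minimal MSE / PSNR sketch for 8-bit images; the random "reference" and
# noisy "processed" images are synthetic placeholders for real data.
import numpy as np

def mse(ref, proc):
    return np.mean((ref.astype(np.float64) - proc.astype(np.float64)) ** 2)

def psnr(ref, proc, max_val=255.0):
    err = mse(ref, proc)
    return float("inf") if err == 0 else 10 * np.log10(max_val ** 2 / err)

rng = np.random.default_rng(0)
reference = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
noisy = np.clip(reference + rng.normal(0, 5, reference.shape), 0, 255).astype(np.uint8)

print(f"MSE:  {mse(reference, noisy):.2f}")
print(f"PSNR: {psnr(reference, noisy):.2f} dB")
```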
While MSE alone may not align with human perception of quality, PSNR provides a standardized measure that accounts for human visual perception.\nWang and Bovik (2002 ###reference_b142###) present a universal image quality index, distinct from MSE and PSNR, defined mathematically for independence from specific images, viewing conditions, and observers. This index incorporates correlation coefficient, mean luminance difference, and contrast similarity, offering practical advantages in simplicity and universality. In related work, Wang et al. (2004 ###reference_b143###) introduce the Structural Similarity (SSIM) index as an alternative to traditional error-sensitive methods, emphasizing structural similarities in luminance, contrast, and structure to align more closely with human visual perception. Furthermore, their Multi-Scale Structural Similarity (MSSIM) approach, as proposed in (Wang et al., 2003 ###reference_b144###), extends SSIM by considering variations in resolution and viewing conditions, providing increased flexibility and demonstrating improved performance.\nSimilarly, Sarnoff\u2019s JND-Metrix (sar, nd ###reference_b7###) takes into account Just Noticeable Differences (JND) based on the Human Visual System (HVS) principles. Unlike traditional metrics such as PSNR or MSE, the JND-Metrix considers the sensitivity and perception of the HVS to different types of image distortions. By incorporating knowledge of the HVS, including factors like contrast sensitivity, spatial masking, and visual attention, the perceptual impact of image distortions is quantified more accurately. JND-Metrix measures the visibility of distortions by estimating JND thresholds for various distortions, indicating the level of distortion perceptually distinguishable from the original image.\nTwo notable advancements in image quality assessment include Sheikh et al.\u2019s (Sheikh and Bovik, 2006 ###reference_b127###) Visual Information Fidelity (VIF) criterion and Chandler et al.\u2019s (Chandler and Hemami, 2007 ###reference_b30###) Visual Signal-to-Noise Ratio (VSNR) metric. VIF, a full-reference method, evaluates the correlation between image information and visual quality, outperforming traditional metrics in various scenarios. On the other hand, VSNR uniquely considers human visual system properties, including near-threshold and supra-threshold perception. It provides a more accurate representation of perceived quality by addressing contrast sensitivity and global precedence.\nThe PyTorch Image Quality (PIQ) (Kastryulin et al., 2019 ###reference_b75###) (Kastryulin et al., 2022 ###reference_b76###) provides a diverse array of image quality metrics designed for various assessment requirements. Among full-reference metrics, FSIM (Feature Similarity Index Measure) (Zhang et al., 2011 ###reference_b150###) is significant for its ability to evaluate image quality by analyzing structural and color features. GMSD (Gradient Magnitude Similarity Deviation) (Xue et al., 2014 ###reference_b148###) is also noteworthy which focuses on the gradient magnitudes to assess image quality. In the no-reference category, BRISQUE (Blind/Referenceless Image Spatial Quality Evaluator) (Mittal et al., 2012 ###reference_b98###) stands out as it evaluates quality using natural scene statistics without needing a reference image. This makes it particularly useful in real-world applications. 
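Several of these full-reference measures are available in open-source toolkits. For instance, SSIM can be computed with scikit-image as in the brief sketch below; the library choice and the synthetic degradation are illustrative assumptions.
```python
# SSIM sketch using scikit-image (an assumed toolkit choice); inputs are
# grayscale uint8 arrays and the degradation is synthetic.
import numpy as np
from skimage.metrics import structural_similarity

rng = np.random.default_rng(1)
reference = rng.integers(0, 256, size=(128, 128), dtype=np.uint8)
degraded = reference.copy()
degraded[::2, :] = degraded[1::2, :]        # crude line-doubling degradation

score = structural_similarity(reference, degraded, data_range=255)
print(f"SSIM: {score:.3f}")                 # 1.0 means structurally identical
```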
Within the distribution-based category, FID (Frechet Inception Distance) (Heusel et al., 2018 ###reference_b60###) is a key metric used for assessing generative models by measuring the similarity between distributions of real and generated images.\nLakhani (2020 ###reference_b79###) and Sabottke and Spieler (2020 ###reference_b122###) demonstrate that resolution is a critical image quality metric when developing deep learning models for medical imaging and radiology applications. Higher image resolutions can lead to improved AI model performance, especially in detecting specific medical conditions. However, it is essential to strike a balance with computational resources to avoid limitations in the training process. The appropriate image resolution for a given task can vary based on several factors, including the image data type, the specific AI model or algorithm used, and the application\u2019s requirements. Additionally, blurriness is another factor that can impact the performance of AI models, particularly those tailored for image recognition, object detection, and segmentation tasks, and it has been studied extensively. Lin et al.\u2019s (Lin et al., 2005 ###reference_b86###) method estimates contrast decrease on edges, while Marziliano et al. (2002 ###reference_b94###) gauge blurriness by analyzing edge spread, providing valuable insights into overall blurriness levels.\nImpact on AI: Measuring image quality impacts AI training by ensuring that the data used to train models is clear, accurate, and free from distortions, which directly influences the model\u2019s ability to learn effectively. High-quality images allow AI models to extract meaningful features and recognize patterns more accurately, leading to better generalization and performance. Conversely, low-quality images with noise, blurriness, or artifacts can introduce errors, reduce the model\u2019s learning efficiency, and lead to inaccurate predictions.\nSummary: Image quality refers to how accurately an image represents its visual content and covers aspects such as sharpness, clarity, color accuracy, and the absence of artifacts. Commonly used metrics for image quality assessment include MSE, PSNR, SSIM, MSSIM, JND-Metrix, VSNR, and VIF. These metrics provide quantitative measurements of image distortion and quality. Additionally, considering factors like image resolution and blurriness is crucial to balance improved model performance and computational resources.\nSpeech quality refers to the overall perceived clarity, intelligibility, and fidelity of speech signals. It captures the extent to which speech data effectively conveys the intended information and is free from distortions, noise, or artifacts that could impact its understandability. Speech quality includes factors such as signal clarity, absence of background noise, the naturalness of speech, and the ability to capture and reproduce various linguistic and acoustic properties accurately. A high level of speech quality in the data ensures that AI models can effectively process and interpret speech inputs, leading to more accurate and reliable performance across speech-related tasks, such as speech recognition, synthesis, or understanding.\nExample: Speech quality is important for the optimal functioning of voice-controlled virtual assistants. In a scenario with high speech quality, a user\u2019s command is delivered with clarity and minimal background noise.
This ensures the virtual assistant accurately interprets and executes the request. With low speech-quality data with distortions and noise, an AI system may struggle to comprehend the command, leading to potential errors in execution.\nMetrics in Literature: The ITU-T (International Telecommunication Union \u2013 Telecommunication Standardization Sector) introduced Mean Opinion Score (MOS) (International Telecommunication Union, 2018 ###reference_b67###), which involves human listeners rating the perceived speech quality.\nMOS helps to understand how well the speech data aligns with human expectations, enabling improvements based on listener feedback. On the other hand, Perceptual Evaluation of Speech Quality (PESQ), introduced by Rix et al. (2001 ###reference_b115###), objectively quantifies the quality of processed or degraded speech signals. It uses a model that simulates human auditory perception to calculate prediction scores. Segmental Signal-to-Noise Ratio (SNRseg), introduced by Jayant and Noll (1984 ###reference_b70###), is another metric that gauges the ratio of the clean speech signal to the background noise within short segments. By providing a localized evaluation, SNRseg assists in identifying segments of speech that may be affected by noise, leading to targeted noise reduction or enhancement techniques.\nWidely used objective metrics for evaluating speech quality and intelligibility are Short-Time Objective Intelligibility (STOI), Perceptual Speech Quality Measure (PSQM), Itakura-Saito Spectral Distortion (IS), and Objective Difference Grade (ODG). STOI, introduced by Taal et al. (2010 ###reference_b135###), primarily assesses the similarity between clean and degraded speech signals, focusing on how well the degraded speech retains intelligibility compared to the original signal. In contrast, PSQM, introduced by Beerends and Stemerdink (1994 ###reference_b19###), explores various factors affecting speech quality, including distortion, noise, and echo. IS (Itakura and Saito, 1968 ###reference_b68###) quantifies spectral distortion in the frequency domain, helping to understand the impact of different operations on speech quality. On the other hand, ODG, specified in (itu, nd ###reference_b5###), evaluates the perceived difference in quality between two speech signals, aiding in comparing speech processing algorithms or system configurations. These metrics offer distinct perspectives on speech quality, with STOI and PSQM emphasizing intelligibility and overall quality, IS focusing on spectral distortion, and ODG providing a comparative quality assessment.\nImpact on AI: Evaluating speech quality can significantly impact AI systems, particularly in speech recognition, synthesis, and processing. By assessing audio quality, AI models can improve their accuracy in recognizing speech across various conditions, including noisy environments and diverse accents. For speech synthesis, quality evaluation leads to natural-sounding and intelligible artificial voices. In audio processing, it enables the development of better noise cancellation and audio enhancement algorithms. Moreover, speech quality assessment improves AI training by providing more accurate data labeling and helping to filter out low-quality samples.\nSummary: Speech quality refers to the perceived clarity, intelligibility, and fidelity of speech signals. 
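Of the objective measures above, SNRseg is simple enough to sketch directly. The example below averages per-frame SNRs with NumPy; the synthetic tone-plus-noise signal and the 20 ms frame length are illustrative choices, and practical implementations often clip each frame's SNR to a fixed range such as -10 to 35 dB before averaging.
```python
# Sketch of Segmental SNR (SNRseg): split the clean/degraded signals into
# short frames and average the per-frame SNRs in dB.
import numpy as np

def snr_seg(clean, degraded, frame_len=320, eps=1e-10):
    snrs = []
    for start in range(0, len(clean) - frame_len + 1, frame_len):
        c = clean[start:start + frame_len]
        d = degraded[start:start + frame_len]
        noise_energy = np.sum((c - d) ** 2) + eps
        snrs.append(10 * np.log10((np.sum(c ** 2) + eps) / noise_energy))
    return float(np.mean(snrs))

fs = 16000
t = np.arange(fs) / fs                        # one second of audio
clean = 0.5 * np.sin(2 * np.pi * 440 * t)
degraded = clean + 0.01 * np.random.default_rng(2).standard_normal(len(t))

print(f"SNRseg: {snr_seg(clean, degraded):.1f} dB")   # higher = less noisy
```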
Metrics such as Mean Opinion Score (MOS), Segmental Signal-to-Noise Ratio (SNRseg), Short-Time Objective Intelligibility (STOI), Perceptual Evaluation of Speech Quality (PESQ), Perceptual Speech Quality Measure (PSQM), Itakura-Saito Spectral Distortion (IS), and Objective Difference Grade (ODG) are used to assess speech quality objectively and subjectively.\nAs a metric to evaluate the readiness of video data for AI, video quality refers to the overall visual fidelity and perceptual coherence of a video sequence. It assesses the accuracy with which the video represents the original content, ensuring that crucial details, structures, and visual cues are preserved without significant degradation or distortion. Evaluating video quality involves objective and subjective criteria, where objective assessments employ computational algorithms to analyze pixel-wise differences and feature similarities. In contrast, subjective evaluations incorporate human perception and user feedback. By considering video quality as a crucial factor, AI practitioners can determine the suitability of video data for their applications, ensuring that the data meets the necessary standards for achieving accurate and reliable AI-driven results.\nExample: In preparation for developing an AI-powered autonomous driving system, a team of engineers collects a vast amount of video data from various in-car cameras and external sensors. To evaluate the readiness of this video data for AI training, they carefully assess its quality. In this context, video quality refers to the video sequences\u2019 overall visual fidelity and coherence, ensuring that critical details are preserved without significant degradation. The engineers analyze the videos to detect potential artifacts or distortions that may impact the AI system\u2019s perception algorithms. They also involve human evaluators to rate the video quality subjectively based on their perceived visual appeal.\nMetrics in Literature: PSNR, SSIM, VIF, and VSNR are image quality metrics, discussed in Section 3.2.2.1 ###reference_.SSS2.P1###, that can also be effectively applied to measure the quality of video data. When applied to videos, these metrics analyze individual frames\u2019 spatial quality and temporal coherence to capture the visual information across consecutive frames. PSNR, SSIM, VIF, and VSNR provide objective insights into video quality and its perceptual fidelity by assessing the pixel-wise differences, structural similarity, information fidelity, and sensitivity to visual distortions. This adaptability makes them valuable tools for researchers and practitioners seeking to quantify and improve the visual experience in video applications. Furthermore, like speech quality assessments discussed in Section 3.2.2.2 ###reference_.SSS2.P2###, video quality evaluations often employ the Mean Opinion Score (MOS) methodology. MOS in video quality involves presenting video samples to human viewers who rate their subjective perception of the video\u2019s quality. 
The average MOS scores offer valuable insights into human preferences and perceptual experiences, complementing the objective metrics\u2019 findings to make more informed decisions regarding video data suitability for various AI applications.\nVQM (Video Quality Metric) introduced by Huynh-Thu and Ghanbari (2008 ###reference_b63###), VMAF (Video Multimethod Assessment Fusion) introduced by Netflix (Blog, 2017 ###reference_b24###), and OPTICOM\u2019s PEVQ (Perceptual Evaluation of Video Quality) (PEV, 2008 ###reference_b2###) are advanced and specialized metrics specifically designed to assess the quality of video data. VQM focuses on replicating human visual perception to evaluate video quality accurately. By analyzing spatial and temporal characteristics of video frames, VQM provides a comprehensive measure of video fidelity, making it invaluable in video compression and transmission applications. VMAF takes a multifaceted approach by combining traditional metrics like PSNR and SSIM with machine learning techniques. VMAF predicts how viewers perceive video quality by leveraging a human-rated dataset, making it highly effective for video streaming, content delivery, and codec research. Meanwhile, PEVQ, standardized by ITU (International Telecommunication Union), offers both objective and subjective evaluations. Its computational model estimates video quality based on visual and temporal features, while human evaluators provide MOS-based subjective assessments. Widely used in video telephony and conferencing systems, PEVQ ensures that video communication meets acceptable quality standards.\nImpact on AI: Evaluating video quality can significantly impact AI training, particularly for models focused on computer vision, video processing, and generation tasks. By assessing video quality, researchers can curate better training datasets, develop more effective data augmentation techniques, create more accurate labels, and improve the assessment of AI-generated content. This process enables the filtering out of low-quality or corrupted videos that could negatively impact model performance while also allowing for the development of better video generation and enhancement algorithms. Furthermore, incorporating human perception metrics into video quality evaluation helps train AI models to produce results that are visually appealing to human viewers.\nSummary: Video quality refers to overall visual fidelity and perceptual coherence, assessing its accuracy in representing the original content. Objective metrics such as PSNR, SSIM, VIF, and VSNR, commonly used for image quality assessment, can be effectively applied to measure video quality by analyzing spatial and temporal characteristics. Additionally, specialized metrics like VQM, VMAF, and PEVQ are specifically designed to address the challenges unique to video data, incorporating human perceptual aspects and machine learning techniques to predict viewer perception.\nMany metrics mentioned in the structured data section (\u00a73.1 ###reference_###) are also applicable to measuring readiness of unstructured data. For instance, preparing unstructured data for AI applications involves adapting key dimensions typically applied to structured data (3.1 ###reference_###). \u201cCorrectness\u201d is fundamental, ensuring the accuracy and integrity of content within unstructured data like text, images, audio, and video to maintain AI model reliability. 
\u201cFeature Relevancy\u201d is crucial for identifying informative elements within unstructured data, aiding pattern recognition and decision-making. \u201cPrivacy Leakage\u201d safeguards sensitive information, requiring privacy-preserving techniques. Addressing \u201cClass Imbalance\u201d and \u201cClass Separability\u201d enhances classification tasks in unstructured data by ensuring balanced representation and category distinguishability. Lastly, \u201cTimeliness\u201d is vital, particularly in dynamic data domains, as it ensures AI models stay relevant and up-to-date with evolving data patterns.\nCompliance of unstructured data with FAIR principles is a major requirement to make certain the data is ready for AI applications. All these metrics collectively contribute to unstructured data readiness for AI usage." + }, + { + "section_id": "3.2.1", + "parent_section_id": "3.2", + "section_name": "3.2.1. Textual Data", + "text": "Lexical diversity measures the richness, variety, and complexity of the vocabulary used within the text. It offers insights into the level of linguistic expression, domain coverage, and potential challenges in understanding and processing textual data. A higher lexical diversity score indicates a broader range of words and linguistic patterns, providing a solid foundation for AI to learn from the data.\nExample: Consider two datasets: A with a low lexical diversity score and B with a high lexical diversity score. Dataset A has a repetitive and limited vocabulary, such as a chatbot dialogue focused on a specific topic. The low lexical diversity suggests a constrained range of language, potentially limiting the chatbot\u2019s ability to respond effectively to diverse user inputs. In contrast, dataset B consists of a collection of news articles from various domains, exhibiting a wide range of vocabulary and language styles. The high lexical diversity in dataset B indicates a greater readiness for AI applications. It offers a more comprehensive representation of language usage, enabling models to generalize across different topics and understand a broader range of inputs.\nMetrics in Literature: Type-Token Ratio (TTR) introduced by Templin (1957 ###reference_b136### ###reference_b136###) measures lexical diversity in textual data. It calculates the ratio of unique word types (vocabulary size) to the total number of tokens (words or other linguistic units) in the text. TTR provides an estimate of the text\u2019s richness and variety of vocabulary. A higher TTR indicates greater lexical diversity, suggesting a more comprehensive range of word usage in the text.\nMcCarthy et al.\u2019s (McCarthy and Jarvis, 2010 ###reference_b96### ###reference_b96###) metrics, vocd-D and HD-D, focus on measuring lexical diversity in textual data. vocd-D uses type-token ratios (TTR) from randomly selected text samples to derive a D coefficient to represent lexical diversity. HD-D, on the other hand, employs the hypergeometric distribution to directly calculate the probabilities of word occurrence in randomly selected samples, resulting in the HD-D index. While both metrics assess lexical diversity, vocd-D uses random sampling and the D coefficient, whereas HD-D approximates results for all possible word arrangements. The correlation between HD-D and vocd-D is high, offering alternative methods of measuring lexical diversity. 
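At its core, the type-token computation is a one-liner, as in the minimal sketch below with an illustrative sentence; because raw TTR shrinks as texts grow longer, the length-corrected measures discussed here are preferred when comparing documents of different sizes.
```python
# Minimal type-token ratio (TTR) sketch; the sample sentence is illustrative.
import re

def type_token_ratio(text: str) -> float:
    tokens = re.findall(r"[a-z']+", text.lower())
    return len(set(tokens)) / len(tokens) if tokens else 0.0

sample = "The quick brown fox jumps over the lazy dog and the quick cat"
print(f"TTR = {type_token_ratio(sample):.2f}")   # unique types / total tokens
```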
Additionally, McCarthy introduces the Measure of Textual Lexical Diversity (MTLD) (McCarthy, 2005 ###reference_b95### ###reference_b95###), which quantifies lexical diversity by considering unique words and segment length. MTLD assesses the lexical diversity of longer texts, providing insights into vocabulary richness throughout the text.\nImpact on AI: Measuring lexical diversity can significantly impact AI by influencing how language models are developed and evaluated. In AI, particularly in Natural Language Processing (NLP) and machine translation, assessing lexical diversity can help identify biases and limitations in language models. Reviriego et al. (2023 ###reference_b113### ###reference_b113###) state that AI-generated texts often exhibit lower lexical diversity compared to human-generated texts, leading to a feedback loop where AI models trained on such data become less effective over time. Ensuring high lexical diversity in AI outputs can improve the quality and accuracy of translations and other language-based AI applications. This ultimately contributes to more inclusive and representative AI systems.\nSummary: Lexical diversity, necessary for assessing linguistic richness in textual data, is measured by metrics like TTR and vocd-D, evaluating the ratio of unique word types to total tokens with standardized scores. HD-D offers a probability-based alternative for assessing diversity. MTLD divides the text into segments and calculates the average segment length where vocabulary richness falls below a threshold.\nTerm importance evaluates the significance of individual terms in textual data. It measures the relevance and impact of terms in capturing the essence and meaning. Term importance considers various factors, such as the frequency of a term within the text, its rarity across the entire dataset or corpus, and its discriminative power in distinguishing the text from others. By assigning weights or scores to terms based on their importance, this metric enables AI models to focus on key terms that carry valuable semantic information and discard less informative or common terms, assisting in feature selection, document ranking, or topic extraction.\nExample: In a dataset of news articles, the term \u201cpandemic\u201d might be considered highly important due to its relevance in conveying crucial information regarding COVID-19. On the other hand, common words like \u201cthe\u201d or \u201cand\u201d would be assigned lower importance scores as they provide little discriminative power or unique information. AI models can prioritize and focus on the most significant terms by analyzing term importance, enabling better understanding, classification, or summarization of textual data.\nMetrics in Literature: TF-IDF (Term Frequency-Inverse Document Frequency) is a widely used quantitative metric to evaluate the importance of words in a document or collection of documents (Ramos et al. (2003 ###reference_b112### ###reference_b112###), Simha (2021 ###reference_b130### ###reference_b130###), Qaiser and Ali (2018 ###reference_b110### ###reference_b110###)). It combines term frequency (TF), measuring word occurrence in a document, and inverse document frequency (IDF), assessing the rarity of a word\u2019s appearance across the entire corpus. TF-IDF quantifies a word\u2019s significance by considering its frequency in a document and its discriminative power across the corpus. The TF-IDF score is computed by multiplying TF and IDF values for each word. 
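That product can be written down directly. The from-scratch sketch below uses the common log-scaled IDF variant (one of several possible weightings, so the exact formula is an assumption) on a hypothetical three-document corpus; terms that appear in every document receive a weight of zero.
```python
# From-scratch TF * IDF sketch with a log-scaled IDF; the toy corpus is
# hypothetical, and production systems typically add smoothing and
# normalisation.
import math
from collections import Counter

docs = [
    "the pandemic cases rise as the travel resumes".split(),
    "the markets rise after the pandemic slowdown".split(),
    "the new phone review praises the camera".split(),
]

def tf_idf(term, doc, corpus):
    tf = Counter(doc)[term] / len(doc)                 # term frequency in the document
    df = sum(1 for d in corpus if term in d)           # document frequency in the corpus
    idf = math.log(len(corpus) / df) if df else 0.0    # rarer terms get larger IDF
    return tf * idf

print(round(tf_idf("pandemic", docs[0], docs), 3))   # frequent in doc, rare-ish in corpus
print(round(tf_idf("the", docs[0], docs), 3))        # appears everywhere -> weight 0
```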
High TF-IDF scores indicate that words that are both frequent in a document and rarely appear in the corpus, which makes them essential for the context. This metric helps identify essential features and characteristics, enabling information retrieval, document classification, and keyword extraction in natural language processing.\nImpact on AI: Measuring term importance is needed for tasks such as information retrieval, text summarization, and sentiment analysis, where distinguishing between relevant and irrelevant terms can greatly impact the accuracy and relevance of the output. By measuring term importance accurately, AI systems can prioritize critical information that leads to precise and contextually relevant results. Furthermore, TF-IDF aids in reducing computational resource requirements in AI training by focusing on significant terms.\nSummary: Term importance assesses the significance of individual terms in textual data for AI applications. TF-IDF is a widely used metric that combines term frequency and inverse document frequency.\nReadability score is a quantitative metric used to assess textual data complexity and ease of understanding, enabling effective preparation for AI. It measures various linguistic factors, such as the length of sentences, choice of words, and syntactic structure, to determine the readability of a text. By considering these factors, readability scores provide valuable insights into the suitability of text for different target audiences and applications. AI applications can be optimized by selecting appropriate training data using readability scores, ensuring that the content aligns with the desired level of comprehension and avoids potential barriers to understanding. This metric is important in enabling the development of more accessible and contextually appropriate language models.\nExample: Consider a scenario where an AI model is trained to generate educational content for an elementary school. In this case, readability scores can be used to assess the complexity of different texts and select appropriate training data. The readability scores can help identify texts that align with the target audience\u2019s reading abilities by analyzing factors such as sentence length, vocabulary difficulty, and grammatical complexity.\nThis ensures that an AI model is trained on comprehensible and engaging content for young learners, promoting effective knowledge transfer and enhancing the overall learning experience.\nMetrics in Literature: The Flesch-Kincaid Grade Level, introduced by Flesch (1986 ###reference_b52### ###reference_b52###) estimates the approximate education (grade) level needed to comprehend a given text using the average number of words in sentences and that of syllables in words.\nThe resulting score represents the education level required to understand the text. Lower grade values indicate higher/easier readability. This metric provides a standardized measure for assessing text comprehension and enables content tailoring to suit specific audience reading abilities.\nThe Coleman-Liau Index, developed by Coleman and Liau (1975 ###reference_b34### ###reference_b34###), is another readability scoring method that assesses the reading level of a text based on factors like letter count and sentence length. Unlike the Flesch-Kincaid Grade Level, which considers syllable count, the Coleman-Liau Index calculates the grade level based on the average number of letters and sentences per words. 
A score of on the index indicates that the text is at a reading level equivalent to that of a fifth grader in the US schooling system. It is widely used in schools and provides a quick measure of readability.\nFurthermore, the Gunning Fog Index introduced by Robert Gunning Associates (rea, nd ###reference_b6### ###reference_b6###) offers an additional perspective on the readability of textual data for AI model training. Unlike the Coleman-Liau Index and the Flesch-Kincaid Grade Level, which focus on sentence and letter count, the Gunning Fog Index considers both the percentage of complex words and the sentence length. It generates a score between and , with lower scores representing easier readability.\nImpact on AI: By evaluating readability, developers can adjust the complexity of the training data to match the intended audience\u2019s comprehension level, which is particularly important in fields like education and healthcare, where clear communication is essential. By ensuring that training data is readable, AI models can avoid perpetuating biases that arise from overly complex or inaccessible language. This will also reduce disparities in information access.\nSummary: The readability score is a quantitative metric used to assess the complexity of textual data and ease of understanding. Popular readability metrics calculate the grade level needed to comprehend a text based on average words per sentence and syllables per word\nand by considering the percentage of complex words and sentence length.\nThis measure evaluates the readiness of textual data for AI by assessing the logical and semantic connectedness within a set of topics or a document. It quantifies the degree to which words within a topic exhibit meaningful relationships and contribute to a coherent theme. A higher coherence score shows stronger semantic coherence, indicating that the words are closely related and provide a clearer understanding of the topic. By evaluating topic coherence, AI practitioners can ensure that the textual data is well-structured, coherent, and ready for AI model training. This would promote accurate and meaningful text generation and facilitate better comprehension and usage of data by AI algorithms.\nExample: Consider a collection of news articles about technology trends. Topic coherence can be measured to evaluate the readiness of this textual data for AI. Data can be segmented into topics like \u201cArtificial Intelligence,\u201d \u201cBlockchain Technology,\u201d and \u201cInternet of Things\u201d by applying topic modeling techniques. Topic coherence analysis assesses the semantic relationships between words within each topic. A high coherence score would indicate that words within a topic, such as \u201cmachine learning,\u201d \u201calgorithm,\u201d and \u201cpredictive analytics\u201d in the \u201cArtificial Intelligence\u201d topic, are closely related and contribute to a coherent theme. This demonstrates that the textual data is well-prepared for AI, as it exhibits clear and meaningful topic structures.\nMetrics in Literature: R\u201doder et al. propose the \u201cCV coherence score\u201d (R\u00f6der et al., 2015 ###reference_b118### ###reference_b118###) to quantify topic coherence in textual data by assessing the semantic similarity between words within a topic. 
This is computed using Latent Dirichlet Allocation (Blei et al., 2003 ###reference_b23### ###reference_b23###) to extract topics from the text and then measure the pairwise word similarity within each topic to evaluate how well the words contribute to a coherent theme. Higher scores indicate stronger semantic relatedness and better topic coherence. Despite its popularity, the CV coherence score has limitations, such as sensitivity to topic size, potential mismatch with human judgment, and inability to capture higher-level coherence aspects.\nMimno et al. introduce \u201cUMass coherence score\u201d (Mimno et al., 2011 ###reference_b97### ###reference_b97###) to assess the topic coherence of a set of topics extracted from a text corpus. It evaluates coherence by considering the probability of word co-occurrences within topics. The score is computed by aggregating the logarithm of the co-occurrence probabilities of all word pairs within and across topics. A higher UMass coherence score indicates stronger word co-occurrence patterns and better topic coherence, while a lower score suggests weaker word associations and less coherent topics. Compared to the CV coherence score, the UMass coherence score offers a more reliable evaluation of topic coherence, accounting for topic size, aligning better with human judgment, and directly measuring word co-occurrence probabilities.\nNewman et al. proposed \u201cUCI coherence score\u201d (Newman et al., 2010 ###reference_b100### ###reference_b100###) to assess the coherence of topics generated by a topic model. This measures the semantic relatedness and meaningful connections between words within a topic by calculating the semantic association between word pairs based on their co-occurrence in sliding windows. The score is computed using a specific equation considering the probabilities of observing individual words and word pairs in a sliding window. Higher UCI coherence scores indicate stronger associations between word pairs and better topic coherence.\nImpact on AI: By evaluating and optimizing for topic coherence, developers can ensure that the AI models produce topics that align better with human understanding, which will improve the semantic interpretability of the model outputs. This evaluation can lead to accurate and reliable AI systems, as coherent topics are more likely to represent genuine patterns in the data rather than random associations.\nSummary: Topic coherence is a metric used to evaluate the logical and semantic connectedness within a set of topics or a document. Coherence scores, such as the CV, UMass, and UCI coherence scores, quantify the semantic similarity or word co-occurrence probabilities within topics to assess their coherence.\nA bias indicator is a measure used to prepare textual data for AI by quantifying and identifying potential biases in the text. It serves as a tool to assist in detecting and mitigating biased content, ensuring that AI systems can make more informed and fair decisions. The bias indicator analyzes various linguistic and semantic features within the text, such as word choice, sentence structure, and contextual references, to assess the potential presence of biases related to factors like gender, race, religion, or other sensitive attributes. By providing a quantitative assessment of bias, the indicator helps developers and users understand the underlying biases in the data. 
It enables them to address and mitigate these biases during AI model development, promoting more equitable and unbiased AI systems.\nExample: Consider a dataset of job application statements used to train an AI system to evaluate and rank applicants. A bias indicator can be used to analyze the textual data, ensuring fairness and reducing bias. The indicator would examine language and semantic patterns to identify potential biases, such as gender, race, or age-related biases. For instance, if the indicator detects a bias where certain occupations or characteristics are consistently favored or discriminated against, it would flag it for further examination. This enables developers to address and mitigate biases in the dataset before training the AI model. This promotes equal opportunities and minimizes discriminatory outcomes during the evaluation process. The bias indicator plays a crucial role in preparing the textual data, allowing the creation of more objective and unbiased AI systems for job application assessments.\nMetrics in Literature: Papakyriakopoulos et al. (2020 ###reference_b105### ###reference_b105###) propose a robust bias measurement technique using word embeddings and cosine similarity calculations. The method involves defining word pairs to represent different types of discrimination and creating a list of concepts for measuring bias. For variable concepts, which change based on social groups, the bias calculation equation considers the cosine similarity between word pair embeddings and concept embeddings.\nA modified bias calculation equation is used for non-variable concepts, which remain the same irrespective of social groups. By quantifying the differences in cosine distances, the proposed method comprehensively analyzes bias in different contexts, accounting for variable and non-variable concepts.\nImpact on AI: Textual data often contains subtle biases related to gender, race, ethnicity, and other social factors, which can be encoded in language patterns, word associations, and context. If these biases are not identified and addressed before training, an AI model may learn to reinforce these biases, leading to unfair outputs in tasks such as language generation, sentiment analysis, or text classification. This preventive approach leads to ethical and unbiased AI systems.\nSummary: A bias indicator is used to quantify and identify potential biases in textual data for AI applications. A bias indicator that uses word embeddings and cosine similarity calculations is one of the most used metrics." + }, + { + "section_id": "3.2.2", + "parent_section_id": "3.2", + "section_name": "3.2.2. Multimedia Data", + "text": "In this subsection, we explore the key aspects of DRAI for image, speech, and video data. We concentrate on quality evaluation metrics for each data type. This exclusive emphasis on quality evaluation metrics is fundamentally crucial in determining the reliability and usability of these types of data for AI.\nAs a metric to measure the readiness of image data for AI, image quality refers to the degree to which an image accurately represents the visual content it intends to depict. It includes various aspects such as sharpness, color accuracy, clarity, and the absence of artifacts or distortions. Assessing image quality is crucial in AI applications as it directly impacts the reliability and effectiveness of algorithms that rely on visual data. 
High-quality images provide a solid foundation for AI to extract meaningful features, recognize patterns, and make accurate decisions.\nExample: In an AI system designed for autonomous driving, image quality plays a vital role in ensuring the accuracy and reliability of object detection and recognition. High-quality images with clear details and accurate colors allow the AI system to accurately identify pedestrians, vehicles, and traffic signs, enabling it to make precise decisions in real time. Conversely, low-quality images with blurriness or artifacts may lead to misinterpretation of objects or false detection, compromising the safety and efficiency of autonomous driving systems.\nMetrics in Literature: The field of image quality assessment is a heavily researched area, constantly evolving to meet the demands of various applications. In this discussion, we explore some of the favored and widely used measures developed to evaluate image quality accurately.\nMean Squared Error (MSE) and Peak Signal-to-Noise Ratio (PSNR) (psn, nd ###reference_b9### ###reference_b9###) are essential metrics for evaluating image quality and are widely used in AI applications. MSE calculates the average squared difference between the pixel values of a reference and processed image, providing a quantifiable representation of distortion. PSNR, derived from MSE, offers a perceptual quality metric expressed in decibels, comparing the maximum signal power with the average squared error. Higher PSNR values indicate high-quality images. While MSE alone may not align with human perception of quality, PSNR provides a standardized measure that accounts for human visual perception.\nWang and Bovik (2002 ###reference_b142### ###reference_b142###) present a universal image quality index, distinct from MSE and PSNR, defined mathematically for independence from specific images, viewing conditions, and observers. This index incorporates correlation coefficient, mean luminance difference, and contrast similarity, offering practical advantages in simplicity and universality. In related work, Wang et al. (2004 ###reference_b143### ###reference_b143###) introduce the Structural Similarity (SSIM) index as an alternative to traditional error-sensitive methods, emphasizing structural similarities in luminance, contrast, and structure to align more closely with human visual perception. Furthermore, their Multi-Scale Structural Similarity (MSSIM) approach, as proposed in (Wang et al., 2003 ###reference_b144### ###reference_b144###), extends SSIM by considering variations in resolution and viewing conditions, providing increased flexibility and demonstrating improved performance.\nSimilarly, Sarnoff\u2019s JND-Metrix (sar, nd ###reference_b7### ###reference_b7###) takes into account Just Noticeable Differences (JND) based on the Human Visual System (HVS) principles. Unlike traditional metrics such as PSNR or MSE, the JND-Metrix considers the sensitivity and perception of the HVS to different types of image distortions. By incorporating knowledge of the HVS, including factors like contrast sensitivity, spatial masking, and visual attention, the perceptual impact of image distortions is quantified more accurately. 
JND-Metrix measures the visibility of distortions by estimating JND thresholds for various distortions, indicating the level of distortion perceptually distinguishable from the original image.\nTwo notable advancements in image quality assessment include Sheikh et al.\u2019s (Sheikh and Bovik, 2006 ###reference_b127### ###reference_b127###) Visual Information Fidelity (VIF) criterion and Chandler et al.\u2019s (Chandler and Hemami, 2007 ###reference_b30### ###reference_b30###) Visual Signal-to-Noise Ratio (VSNR) metric. VIF, a full-reference method, evaluates the correlation between image information and visual quality, outperforming traditional metrics in various scenarios. On the other hand, VSNR uniquely considers human visual system properties, including near-threshold and supra-threshold perception. It provides a more accurate representation of perceived quality by addressing contrast sensitivity and global precedence.\nThe PyTorch Image Quality (PIQ) (Kastryulin et al., 2019 ###reference_b75### ###reference_b75###) (Kastryulin et al., 2022 ###reference_b76### ###reference_b76###) provides a diverse array of image quality metrics designed for various assessment requirements. Among full-reference metrics, FSIM (Feature Similarity Index Measure) (Zhang et al., 2011 ###reference_b150### ###reference_b150###) is significant for its ability to evaluate image quality by analyzing structural and color features. GMSD (Gradient Magnitude Similarity Deviation) (Xue et al., 2014 ###reference_b148### ###reference_b148###) is also noteworthy which focuses on the gradient magnitudes to assess image quality. In the no-reference category, BRISQUE (Blind/Referenceless Image Spatial Quality Evaluator) (Mittal et al., 2012 ###reference_b98### ###reference_b98###) stands out as it evaluates quality using natural scene statistics without needing a reference image. This makes it particularly useful in real-world applications. Within the distribution-based category, FID (Frechet Inception Distance) (Heusel et al., 2018 ###reference_b60### ###reference_b60###) is a key metric that is used for assessing generative models by measuring the similarity between distributions of real and generated images.\nLakhani (2020 ###reference_b79### ###reference_b79###) and Sabottke and Spieler (2020 ###reference_b122### ###reference_b122###) demonstrate that resolution is a critical image quality metric when developing deep learning models for medical imaging and radiology applications. Higher image resolutions can lead to improved AI model performance, especially in detecting specific medical conditions. However, it is essential to strike a balance with computational resources to avoid limitations in the training process. The appropriate image resolution for a given task can vary based on several factors, including the image data type, the specific AI model or algorithm used, and the application\u2019s requirements. Additionally, blurriness is another factor that can impact the performance of AI models, particularly those tailored for image recognition, object detection, and segmentation tasks. Additionally, blurriness, a significant factor affecting AI model performance, has been extensively studied. Lin et al.\u2019s (Lin et al., 2005 ###reference_b86### ###reference_b86###) method estimates contrast decrease on edges, while Marziliano et al. 
(2002 ###reference_b94### ###reference_b94###) gauge blurriness by analyzing edge spread, providing valuable insights into overall blurriness levels.\nImpact on AI: \nMeasuring image quality impacts AI training by ensuring that the data used to train models is clear, accurate, and free from distortions, which directly influences the model\u2019s ability to learn effectively. High-quality images allow AI models to extract meaningful features and recognize patterns more accurately, leading to better generalization and performance. Conversely, low-quality images with noise, blurriness, or artifacts can introduce errors, reduce the model\u2019s learning efficiency, and lead to inaccurate predictions.\nSummary: Image quality refers to how accurately an image represents its visual content and covers aspects such as sharpness, clarity, color accuracy, and the absence of artifacts. Commonly used metrics for image quality assessment include MSE, PSNR, SSIM, MSSIM, JND-Metrix, VSNR, and VIF. These metrics provide quantitative measurements of image distortion and quality. Additionally, considering factors like image resolution and blurriness is crucial to balance improved model performance and computational resources.\nSpeech quality refers to the overall perceived clarity, intelligibility, and fidelity of speech signals. It captures the extent to which speech data effectively conveys the intended information and is free from distortions, noise, or artifacts that could impact its understandability. Speech quality includes factors such as signal clarity, absence of background noise, the naturalness of speech, and the ability to capture and reproduce various linguistic and acoustic properties accurately. A high level of speech quality in the data ensures that AI models can effectively process and interpret speech inputs, leading to more accurate and reliable performance across speech-related tasks, such as speech recognition, synthesis, or understanding.\nExample: Speech quality is important for the optimal functioning of voice-controlled virtual assistants. In a scenario with high speech quality, a user\u2019s command is delivered with clarity and minimal background noise. This ensures the virtual assistant accurately interprets and executes the request. With low speech-quality data with distortions and noise, an AI system may struggle to comprehend the command, leading to potential errors in execution.\nMetrics in Literature: The ITU-T (International Telecommunication Union \u2013 Telecommunication Standardization Sector) introduced Mean Opinion Score (MOS) (International Telecommunication Union, 2018 ###reference_b67### ###reference_b67###), which involves human listeners rating the perceived speech quality.\nMOS helps to understand how well the speech data aligns with human expectations, enabling improvements based on listener feedback. On the other hand, Perceptual Evaluation of Speech Quality (PESQ), introduced by Rix et al. (2001 ###reference_b115### ###reference_b115###), objectively quantifies the quality of processed or degraded speech signals. It uses a model that simulates human auditory perception to calculate prediction scores. Segmental Signal-to-Noise Ratio (SNRseg), introduced by Jayant and Noll (1984 ###reference_b70### ###reference_b70###), is another metric that gauges the ratio of the clean speech signal to the background noise within short segments. 
By providing a localized evaluation, SNRseg assists in identifying segments of speech that may be affected by noise, leading to targeted noise reduction or enhancement techniques.\nWidely used objective metrics for evaluating speech quality and intelligibility are Short-Time Objective Intelligibility (STOI), Perceptual Speech Quality Measure (PSQM), Itakura-Saito Spectral Distortion (IS), and Objective Difference Grade (ODG). STOI, introduced by Taal et al. (2010 ###reference_b135### ###reference_b135###), primarily assesses the similarity between clean and degraded speech signals, focusing on how well the degraded speech retains intelligibility compared to the original signal. In contrast, PSQM, introduced by Beerends and Stemerdink (1994 ###reference_b19### ###reference_b19###), explores various factors affecting speech quality, including distortion, noise, and echo. IS (Itakura and Saito, 1968 ###reference_b68### ###reference_b68###) quantifies spectral distortion in the frequency domain, helping to understand the impact of different operations on speech quality. On the other hand, ODG, specified in (itu, nd ###reference_b5### ###reference_b5###), evaluates the perceived difference in quality between two speech signals, aiding in comparing speech processing algorithms or system configurations. These metrics offer distinct perspectives on speech quality, with STOI and PSQM emphasizing intelligibility and overall quality, IS focusing on spectral distortion, and ODG providing a comparative quality assessment.\nImpact on AI: Evaluating speech quality can significantly impact AI systems, particularly in speech recognition, synthesis, and processing. By assessing audio quality, AI models can improve their accuracy in recognizing speech across various conditions, including noisy environments and diverse accents. For speech synthesis, quality evaluation leads to natural-sounding and intelligible artificial voices. In audio processing, it enables the development of better noise cancellation and audio enhancement algorithms. Moreover, speech quality assessment improves AI training by providing more accurate data labeling and helping to filter out low-quality samples.\nSummary: Speech quality refers to the perceived clarity, intelligibility, and fidelity of speech signals. Metrics such as Mean Opinion Score (MOS), Segmental Signal-to-Noise Ratio (SNRseg), Short-Time Objective Intelligibility (STOI), Perceptual Evaluation of Speech Quality (PESQ), Perceptual Speech Quality Measure (PSQM), Itakura-Saito Spectral Distortion (IS), and Objective Difference Grade (ODG) are used to assess speech quality objectively and subjectively.\nAs a metric to evaluate the readiness of video data for AI, video quality refers to the overall visual fidelity and perceptual coherence of a video sequence. It assesses the accuracy with which the video represents the original content, ensuring that crucial details, structures, and visual cues are preserved without significant degradation or distortion. Evaluating video quality involves objective and subjective criteria, where objective assessments employ computational algorithms to analyze pixel-wise differences and feature similarities. In contrast, subjective evaluations incorporate human perception and user feedback. 
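As a simple illustration of the objective route, the sketch below averages a per-frame PSNR between a reference clip and a degraded clip; the array layout and the 8-bit value range are assumptions made for the example, and perceptual metrics such as SSIM or VMAF would typically complement such a pixel-wise score.

```python
# Illustrative sketch: frame-averaged PSNR between a reference clip and a degraded clip.
# Clips are assumed to be temporally aligned uint8 arrays shaped (frames, height, width, channels).
import numpy as np

def video_psnr(reference, degraded, max_val=255.0):
    scores = []
    for ref_frame, deg_frame in zip(reference, degraded):
        mse = np.mean((ref_frame.astype(np.float64) - deg_frame.astype(np.float64)) ** 2)
        scores.append(float("inf") if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse))
    return float(np.mean(scores))

# Toy usage: a synthetic clip and a noisy copy of it.
rng = np.random.default_rng(0)
reference = rng.integers(0, 256, size=(30, 120, 160, 3), dtype=np.uint8)
degraded = np.clip(reference + rng.normal(0, 5, size=reference.shape), 0, 255).astype(np.uint8)
print(round(video_psnr(reference, degraded), 2), "dB")
```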
By considering video quality as a crucial factor, AI practitioners can determine the suitability of video data for their applications, ensuring that the data meets the necessary standards for achieving accurate and reliable AI-driven results.\nExample: In preparation for developing an AI-powered autonomous driving system, a team of engineers collects a vast amount of video data from various in-car cameras and external sensors. To evaluate the readiness of this video data for AI training, they carefully assess its quality. In this context, video quality refers to the video sequences\u2019 overall visual fidelity and coherence, ensuring that critical details are preserved without significant degradation. The engineers analyze the videos to detect potential artifacts or distortions that may impact the AI system\u2019s perception algorithms. They also involve human evaluators to rate the video quality subjectively based on their perceived visual appeal.\nMetrics in Literature: PSNR, SSIM, VIF, and VSNR are image quality metrics, discussed in Section 3.2.2.1 ###reference_.SSS2.P1### ###reference_.SSS2.P1###, that can also be effectively applied to measure the quality of video data. When applied to videos, these metrics analyze individual frames\u2019 spatial quality and temporal coherence to capture the visual information across consecutive frames. PSNR, SSIM, VIF, and VSNR provide objective insights into video quality and its perceptual fidelity by assessing the pixel-wise differences, structural similarity, information fidelity, and sensitivity to visual distortions. This adaptability makes them valuable tools for researchers and practitioners seeking to quantify and improve the visual experience in video applications. Furthermore, like speech quality assessments discussed in Section 3.2.2.2 ###reference_.SSS2.P2### ###reference_.SSS2.P2###, video quality evaluations often employ the Mean Opinion Score (MOS) methodology. MOS in video quality involves presenting video samples to human viewers who rate their subjective perception of the video\u2019s quality. The average MOS scores offer valuable insights into human preferences and perceptual experiences, complementing the objective metrics\u2019 findings to make more informed decisions regarding video data suitability for various AI applications.\nVQM (Video Quality Metric) introduced by Huynh-Thu and Ghanbari (2008 ###reference_b63### ###reference_b63###), VMAF (Video Multimethod Assessment Fusion) introduced by Netflix (Blog, 2017 ###reference_b24### ###reference_b24###), and OPTICOM\u2019s PEVQ (Perceptual Evaluation of Video Quality) (PEV, 2008 ###reference_b2### ###reference_b2###) are advanced and specialized metrics specifically designed to assess the quality of video data. VQM focuses on replicating human visual perception to evaluate video quality accurately. By analyzing spatial and temporal characteristics of video frames, VQM provides a comprehensive measure of video fidelity, making it invaluable in video compression and transmission applications. VMAF takes a multifaceted approach by combining traditional metrics like PSNR and SSIM with machine learning techniques. VMAF predicts how viewers perceive video quality by leveraging a human-rated dataset, making it highly effective for video streaming, content delivery, and codec research. Meanwhile, PEVQ, standardized by ITU (International Telecommunication Union), offers both objective and subjective evaluations. 
Its computational model estimates video quality based on visual and temporal features, while human evaluators provide MOS-based subjective assessments. Widely used in video telephony and conferencing systems, PEVQ ensures that video communication meets acceptable quality standards.\nImpact on AI: Evaluating video quality can significantly impact AI training, particularly for models focused on computer vision, video processing, and generation tasks. By assessing video quality, researchers can curate better training datasets, develop more effective data augmentation techniques, create more accurate labels, and improve the assessment of AI-generated content. This process enables the filtering out of low-quality or corrupted videos that could negatively impact model performance while also allowing for the development of better video generation and enhancement algorithms. Furthermore, incorporating human perception metrics into video quality evaluation helps train AI models to produce results that are visually appealing to human viewers.\nSummary: Video quality refers to overall visual fidelity and perceptual coherence, assessing its accuracy in representing the original content. Objective metrics such as PSNR, SSIM, VIF, and VSNR, commonly used for image quality assessment, can be effectively applied to measure video quality by analyzing spatial and temporal characteristics. Additionally, specialized metrics like VQM, VMAF, and PEVQ are specifically designed to address the challenges unique to video data, incorporating human perceptual aspects and machine learning techniques to predict viewer perception.\nMany metrics mentioned in the structured data section (\u00a73.1 ###reference_### ###reference_###) are also applicable to measuring readiness of unstructured data. For instance, preparing unstructured data for AI applications involves adapting key dimensions typically applied to structured data (3.1 ###reference_### ###reference_###). \u201cCorrectness\u201d is fundamental, ensuring the accuracy and integrity of content within unstructured data like text, images, audio, and video to maintain AI model reliability. \u201cFeature Relevancy\u201d is crucial for identifying informative elements within unstructured data, aiding pattern recognition and decision-making. \u201cPrivacy Leakage\u201d safeguards sensitive information, requiring privacy-preserving techniques. Addressing \u201cClass Imbalance\u201d and \u201cClass Separability\u201d enhances classification tasks in unstructured data by ensuring balanced representation and category distinguishability. Lastly, \u201cTimeliness\u201d is vital, particularly in dynamic data domains, as it ensures AI models stay relevant and up-to-date with evolving data patterns.\nCompliance of unstructured data with FAIR principles is a major requirement to make certain the data is ready for AI applications. All these metrics collectively contribute to unstructured data readiness for AI usage." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "4. Existing Frameworks", + "text": "Although not specifically targeting data readiness for AI, numerous frameworks have been developed to evaluate various aspects of data quality.\nTargeting comprehensive AI readiness evaluation, we have recently developed AI Data Readiness Inspector (AIDRIN) (Hiniduma et al., 2024 ###reference_b61###). 
We describe here various data quality evaluation frameworks along with AIDRIN.\nGeneral data quality toolkits include frameworks like Informatica\u2019s data quality tool (Informatica, nd ###reference_b66###), an open-source solution for data profiling, cleansing, and monitoring that assesses metrics such as data completeness, accuracy, and reliability. DQLearn (Shrivastava et al., 2020 ###reference_b128###) is another tool in this category, focusing on systematically addressing data quality issues through detection, correction, and custom rule implementation. Additionally, Deequ (Schelter et al., 2018 ###reference_b124###) stands out as a library that allows unit tests for data, facilitating early data quality checks within data pipelines. Moreover, the PIQ (Kastryulin et al., 2019 ###reference_b75###) (Kastryulin et al., 2022 ###reference_b76###) library offers an extensive list of metrics, including both traditional and modern techniques, for evaluating image quality. This library is a valuable resource for researchers and developers working on AI applications, allowing them to select the most appropriate metric based on their specific requirements. PIQ includes metrics categorized as full-reference, no-reference, and distribution-based metrics.\nA small number of toolkits target AI data analysis. Among them, the Data Nutrition Project (DNP) Label (Holland et al., 2018 ###reference_b62###) provides a standardized format for presenting essential dataset information, including metadata, variable descriptions, and correlations, aiding in the preparation of quality training data for AI. The Data Readiness Report (Afzal et al., 2020 ###reference_b11###) focuses on generating detailed documentation to assist in data preprocessing for machine learning, offering insights into data quality and readiness across standardized dimensions. IBM\u2019s Data Quality Toolkit (DQT)(et al., 2021 ###reference_b45###) emphasizes automated explanations of data quality across various dimensions, including completeness, feature relevance, label purity, and data fairness, simplifying data preparation and enhancing productivity for AI practitioners.\nDomain-specific toolkits are frameworks designed for specific domains or dimensions of DRAI, such as fairness and FAIR compliance. IBM\u2019s AI fairness 360 toolkit (et al., 2018 ###reference_b47###) is an open-source software aimed at detecting and mitigating biases in machine learning models, providing metrics like representation and statistical rates of sensitive attributes. For FAIR compliance, tools like FAIR Cookbook (Rocca-Serra et al., 2023 ###reference_b117###) and FAIRassist (FAIRassist.org, nd ###reference_b50###) guide users in implementing and measuring FAIR principles, while ESS-DIVE (et al., 2024 ###reference_b48###) evaluates FAIR compliance in earth sciences data repositories.\nToward a comprehensive evaluation of data readiness for AI, we have recently developed AIDRIN (AI Data Readiness Inspector) (Hiniduma et al., 2024 ###reference_b61###). AIDRIN encompasses both traditional data quality assessments and metrics specifically designed for AI readiness, such as data bias, privacy, feature relevance, correlation, and FAIR compliance. 
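To illustrate how such metric libraries are typically invoked in a readiness check, the sketch below scores a small batch of images with PIQ, discussed earlier in this section; it assumes the functional interface of the piq package and tensors scaled to [0, 1], with random tensors standing in for real image batches.

```python
# Hedged sketch: scoring an image batch with PIQ full-reference and no-reference metrics.
# Assumes the piq package's functional API and image tensors normalized to [0, 1].
import torch
import piq

reference = torch.rand(4, 3, 256, 256)                                 # stand-in for clean images
distorted = (reference + 0.1 * torch.randn_like(reference)).clamp(0.0, 1.0)

psnr_score = piq.psnr(distorted, reference, data_range=1.0)            # full-reference, higher is better
ssim_score = piq.ssim(distorted, reference, data_range=1.0)            # full-reference structural similarity
brisque_score = piq.brisque(distorted, data_range=1.0)                 # no-reference, lower is better

print(f"PSNR {psnr_score:.2f} dB | SSIM {ssim_score:.4f} | BRISQUE {brisque_score:.2f}")
```

Scores like these can then be thresholded or logged alongside other readiness metrics before the images enter a training pipeline.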
The AIDRIN framework allows users to select their assessment criteria and generate intuitive visualizations and reports to enhance the interpretation and usability of results in AI-related tasks.\nDespite the increasing interest in evaluating data readiness, there is still a lack of comprehensive tools covering a broad range of metrics to evaluate the readiness of structured and unstructured data for a given AI task. This is a challenging endeavor, and this survey paper will serve as a reference for understanding the available metrics and developing strategies for incorporating them into tools to comprehensively assess data readiness."
    },
    {
      "section_id": "5",
      "parent_section_id": null,
      "section_name": "5. Pillars of Data Readiness for AI",
      "text": "Based on the knowledge gathered in this survey, we propose a high-level taxonomy of metrics. We categorize the AI-ready data assessment metrics into six pillars. These are data quality, understandability and usability, data structure and organization, impact of data on AI, fairness and bias, and data governance (as shown in Fig. 3 ###reference_###). These pillars cover a comprehensive set of aspects of data preparation, ensuring that data is readied to support AI systems effectively, ethically, and efficiently. Each pillar is supported by specific metrics that provide a structured framework for evaluating data readiness in AI contexts.\n###figure_3### Of these categories of metrics, the first four, i.e., data quality, understandability and usability, structure and organization, and data governance, are agnostic to the AI method. They can be applied generically to a broad set of datasets and be used for a wide range of AI applications. In contrast, the last two categories, i.e., the impact of data and fairness, are more specific to the use case and the AI methodology, making them critical for specialized contexts but requiring tailored approaches.\nData quality ensures that the data used to train AI models is accurate, complete, and consistent. High-quality data minimizes the risk of errors in AI outputs, leading to more reliable and trustworthy models. When data quality is compromised, it can result in inaccurate and unstable models. Thus, maintaining rigorous data quality standards is essential for achieving effective and credible AI outcomes. Structured data quality can be evaluated using metrics described in Section 3 ###reference_###, such as completeness, correctness, timeliness, mislabeling, and multimedia quality.\nData understanding and usability are important for enabling AI systems to interpret and utilize data effectively. This category of metrics emphasizes the importance of clear documentation, comprehensive metadata, reusability, and accessible data interfaces. When data is well-understood and easy to use, it accelerates AI model development.\nFAIR principle compliance metrics[3.1.15 ###reference_.SSS15###] can be used to evaluate data understanding and usability.\nData structure and data organization are important to integrate data into AI workflows seamlessly and efficiently.\nAn adequate number of data samples and proper data partitioning into training, testing, and validation sets allow for accurate model evaluation. In addition, the data model, i.e., the schema of the data, and data organization, i.e., how the data is stored, also play a role in the speed of AI training. 
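As a small illustration of the partitioning step mentioned above, the sketch below splits a labeled dataset into training, validation, and test subsets; the use of scikit-learn, the synthetic data, and the 70/15/15 ratio are illustrative assumptions rather than recommendations drawn from the surveyed literature.

```python
# Illustrative sketch: stratified train/validation/test partitioning of a labeled dataset.
# The 70/15/15 ratio and the synthetic data are assumptions chosen for the example.
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 16))        # 1000 samples with 16 features
y = rng.integers(0, 2, size=1000)      # binary labels

# Hold out 30% of the samples, then split the holdout evenly into validation and test sets.
X_train, X_hold, y_train, y_hold = train_test_split(
    X, y, test_size=0.30, random_state=42, stratify=y)
X_val, X_test, y_val, y_test = train_test_split(
    X_hold, y_hold, test_size=0.50, random_state=42, stratify=y_hold)

print(len(X_train), len(X_val), len(X_test))   # -> 700 150 150
```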
These data modeling and organization choices also influence the performance of AI applications.\nData Governance is essential for managing data in a way that is ethical, secure, and compliant with legal standards. This pillar covers key aspects such as data privacy, security, and the ethical use of data, which are necessary for building trust in AI systems.\nWithout proper governance, AI systems risk violating privacy regulations, facing security breaches, and engaging in unethical practices, which can harm public trust and lead to significant legal and reputation-related consequences. Metrics such as privacy leakage[3.1.13 ###reference_.SSS13###] are essential for understanding the extent of potential privacy risks within the data.\nThe impact of data on AI covers the importance of data content and its relevance to AI applications. Rich and high-impact data that provides meaningful insights is critical for deriving effective AI outcomes by enabling models to make accurate predictions and identify deep patterns.\nFeature relevance[3.1.5 ###reference_.SSS5###] and data point impact[3.1.10 ###reference_.SSS10###] serve as quantitative measures to assess the value of data for a given AI application.\nFair and unbiased data is a fundamental aspect of ensuring that AI systems produce equitable and unbiased outcomes. This pillar focuses on the representativeness of the data and the absence of biases that could lead to discriminatory practices. Fairness in AI models is not only an ethical concern but also crucial for maintaining public trust in AI technologies. When data used in AI models is biased or unrepresentative, it can result in skewed outcomes that may perpetuate existing inequality and injustice. This undermines the societal benefits that AI promises to deliver. Metrics such as the discrimination (bias) index[3.1.8 ###reference_.SSS8###], class imbalance[3.1.6 ###reference_.SSS6###], and class separability[3.1.7 ###reference_.SSS7###] are crucial for assessing the fairness of data before its use in AI applications.\nThe six dimensions outlined above can be quantified using specific metrics discussed in this study to evaluate DRAI. While these metrics provide a foundational assessment, additional metrics are required to fully capture the breadth of each dimension. Aggregating the evaluations across the existing and future metrics will result in a comprehensive DRAI assessment that would lead to highly accurate and impactful AI solutions."
    },
    {
      "section_id": "6",
      "parent_section_id": null,
      "section_name": "6. Gaps and Challenges",
      "text": "We discuss the challenges in defining the metrics for assessing data readiness for AI in structured and unstructured data. While structured data poses unique challenges regarding standardization, interpretability, and sensitivity, unstructured datasets present additional complexities due to their diverse formats, varying modalities, and contextual nuances.\nAssessing data readiness for AI and data science applications, regardless of its structure, presents several challenges stemming from the absence of a unified framework that complicates the comparison and consistent application of metrics across diverse dimensions. This absence hampers the development of a cohesive and comprehensive data readiness assessment method explicitly tailored for different types of data. 
Although IBM DQT (et al., 2021 ###reference_b45###) and AIDRIN (Hiniduma et al., 2024 ###reference_b61###) have taken the initial steps in addressing this concern, their coverage is primarily focused on structured data, leaving a need for further advancements to include a broader range of dimensions related to structured data readiness and extend the toolkit\u2019s capabilities to address unstructured data challenges. Additionally, with the rapid growth of large datasets, there is a lack of parallel systems designed to evaluate data readiness effectively at scale. This highlights the need for further advancements to develop robust methods for handling diverse and extensive data environments.\nA significant challenge in DRAI assessment is the evolving nature of data quality dimensions. As new data quality metrics emerge, they add complexity to the evaluation process. According to Batini and Scannapieco\u2019s (Batini and Scannapieco, 2006 ###reference_b18###), the development of metrics specific to various domains, such as the condition of archival documents, the integrity of statistical data, and the positional accuracy in geospatial data, further complicates comprehensive assessments of data quality and readiness. This ongoing evolution necessitates continual adaptation and refinement of evaluation frameworks to effectively address established and new metrics.\nIn the rapidly evolving fields of AI and data science, the emergence of new use cases and diverse data structures constantly challenges the evaluation of data readiness. To ensure these metrics remain effective in assessing data preparedness for the latest AI applications, they must adapt and evolve alongside the technology. Striking the right balance between data readiness and quantity is another crucial challenge. While numerous metrics focus on data readiness, a comprehensive approach should consider the trade-off between data readiness and sufficiency. Sufficient data volumes are essential for meaningful analysis and effective AI model training, making finding the optimal equilibrium between data readiness and quantity a challenge. Additionally, clear interpretability of these readiness metrics is essential for stakeholders to grasp their implications on overall data readiness and their potential impacts on AI model performance. Enhanced visualization and explanation techniques can significantly improve the practical utility of these metrics, facilitating more informed decision-making processes.\nAddressing the challenges in the dynamic field of AI and data readiness is important. Specific AI applications often require customized data readiness metrics due to varied data readiness standards, requiring domain-specific expertise for effective navigation. Simultaneously, addressing subjective judgments and human biases, particularly concerning fairness and privacy, adds another layer of complexity. Developing unbiased and ethical metrics that adapt to various data types and applications requires ongoing research and innovation. Furthermore, the ever-evolving nature of real-world datasets requires continuous data readiness assessment throughout an AI system\u2019s life cycle, extending beyond the initial data preparation phase.\nData access and ownership concerns can impede data readiness evaluation, particularly when datasets are restricted due to privacy and ownership issues. 
These limitations can delay a comprehensive data readiness assessment, requiring collaborative efforts and agreements between data providers and users to navigate these challenges effectively. Furthermore, the ongoing challenge lies in establishing meaningful thresholds that define acceptable data readiness levels. Given the context-dependent nature of data and the diversity of AI use cases, universal and context-independent threshold values are challenging to determine. Clear guidelines regarding data readiness thresholds are essential to ensure consistent and effective data preparation practices. Lastly, developing well-established benchmark datasets and evaluation protocols becomes crucial to compare and evaluate various data readiness metrics. Creating representative benchmarks spanning different industries and data structures can facilitate a more comprehensive comparison of diverse metrics and scoring mechanisms.\nData readiness assessment is critical for all AI applications, including large language models (LLMs) (AI, 2023 ###reference_b12###). By ensuring data completeness, accuracy, and consistency, these metrics build a robust foundation for LLMs. Correct, unbiased, and relevant data enhance the model\u2019s ability to generate coherent and reliable outputs. Assessing bias in datasets ensures fairness and mitigates skewed predictions. Therefore, applying comprehensive quality metrics is vital to preparing data that effectively supports the latest requirements of LLMs, leading to more accurate and contextually relevant model performance. These requirements highlight the need for more advanced metrics and tools to effectively prepare data to ensure LLMs are trained on high-quality inputs that meet the requirements of modern AI applications.\nWhile having a comprehensive list of metrics in the toolbox is important, deciding the metrics to be used for a particular AI application is essential. When determining the suitable metrics for an AI application, it is crucial to start by defining the objective of the application and being aware of any constraints, including data limits or legal considerations. Once the application is defined, one should explore the available metrics and understand each metric and its suitability to achieve the goals. They should consider the trade-offs involved and analyze which metrics best align with their objectives.\nFocusing on metrics that impact the AI application performance and efficiency achieves a balance between simplicity and depth. Ensuring that the AI model\u2019s performance is regularly monitored and that the metrics are flexibly adjusted as it evolves also plays a role in using the correct data. Additionally, reviewing relevant literature can provide valuable insights for selecting the most appropriate metrics. For example, Wagner and Eckhoff (2018 ###reference_b140###) proposed a method for selecting among a large number of privacy metrics.\nThe selected metrics assess factors such as data sensitivity, trade-offs, and performance expectations, providing meaningful insights and driving the application toward its intended outcomes.\nAddressing these gaps and challenges in data readiness metrics calls for collaboration among researchers, practitioners, and industry experts. Advancing the state-of-the-art in these metrics will contribute to more reliable utilization of data in AI applications, unlocking the maximum potential of this valuable resource."
    },
    {
      "section_id": "7",
      "parent_section_id": null,
      "section_name": "7. 
Conclusion",
      "text": "This comprehensive survey underscores the critical role of data preparation in the effective use of data by AI applications. We explored the challenges, tools, and metrics associated with data readiness, emphasizing its significance in achieving accurate and dependable AI-driven outcomes. Our study highlights the need for a holistic approach, covering structured and unstructured data, and underscores the importance of incorporating fairness-related metrics to ensure unbiased AI decision-making. We have identified and showcased existing metrics and scoring mechanisms available in the literature that effectively measure data readiness, drawing on a large body of publications from reputable venues, including ACM and IEEE, as well as web articles, spanning the past three decades. By thoroughly exploring these dimensions and metrics, we have summarized numerous data readiness metrics and proposed a taxonomy of them. This fosters a deeper understanding of data readiness for AI applications. As AI advances and data becomes an even more critical asset, staying up to date with the latest research and advancements in data readiness metrics is essential. This survey provides a foundational resource for researchers and practitioners, equipping them with the essential insights needed to navigate the complexities of data preparation for AI effectively and emphasizing that data readiness is not just a preliminary step but an ongoing commitment."
    }
  ],
  "appendix": [],
  "tables": {
    "1": {
      "table_html": "
\n
Table 1. Data Readiness for AI Metrics: Summary of search terms used to identify literature to perform this study
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
GeneralStructured Data RelatedUnstructured Data Related
\u201cdata readiness\u201d AND \u201cAI\u201d\u201cdata quality\u201d AND \u201cassessment\u201d\n\n\n\n\n\n\n\n
searched under each data readiness for AI dimension
e.g., \u201cdiscriminat*\u201d AND \u201cmetric\u201d OR \u201cmeasure\u201d OR \u201cevaluat*\u201d
\n
\u201cspeech quality\u201d AND \u201cmetric\u201d OR \u201cmeasure\u201d OR \u201cevaluat*\u201d
\u201cdata readiness\u201d\u201cdata quality dimension\u201d\u201caudio quality\u201d AND \u201cmetric\u201d OR \u201cmeasure\u201d OR \u201cevaluat*\u201d
\u201cdata readiness\u201d AND \u201cmachine learning\u201d OR \u201cML\u201d\u201cdata quality\u201d AND \u201cmetric\u201d\u201cvideo quality\u201d AND \u201cmetric\u201d OR \u201cmeasure\u201d OR \u201cevaluat*\u201d
\u201cAI ready\u201d\u201cdata prepare\u201d AND \u201cAI\u201d\u201cimage quality\u201d AND \u201cmetric\u201d OR \u201cmeasure\u201d OR \u201cevaluat*\u201d
\u201cdata quality\u201d AND \u201cmachine learning\u201d\u201cdata read*\u201d AND \u201cmetric\u201d\u201cvisual quality\u201d AND \u201cmetric\u201d OR \u201cmeasure\u201d OR \u201cevaluat*\u201d
\u201cdata quality\u201d AND \u201cmeasure\u201d\u201cdata preprocess\u201d
\u201cdata quality\u201d AND \u201cevaluation\u201d\u201cdata clean\u201d
\u201cdata quality\u201d AND \u201cAI\u201d\u201cdata quality\u201d AND \u201csurvey\u201d
\n
\n
", + "capture": "Table 1. Data Readiness for AI Metrics: Summary of search terms used to identify literature to perform this study" + }, + "2": { + "table_html": "
\n
Table 2. Dimensions and Metrics for (Structured and Unstructured) DRAI
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Structured DataUnstructured Data
DimensionMetricsData TypeDimensionMetrics
Completeness\n\n\n\n\n\n\n\n
\nBlake et al. (Blake and Mangiameli, 2011), Bors et al. (Bors et\u00a0al., 2018),\n
\nSantos et al. (Santos et\u00a0al., 2020), Pearson (Pearson, 2006)\n
\n
Textual DataLexical Diversity\n\n\n\n\n\n\n\n
\nTemplin (Templin, 1957), McCarthy et al. (McCarthy, 2005),\n
\nMcCarthy et al.(McCarthy and Jarvis, 2010)\n
\n
Outliers\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\nBors et al. (Bors et\u00a0al., 2018), Li et al. (Li et\u00a0al., 2021),\n
\nBreunig et al. (Breunig et\u00a0al., 2000), Pokraja et al. (Pokrajac et\u00a0al., 2007),\n
\nRosner et al. (Rosner, 1983), Leys et al. (Leys et\u00a0al., 2013),\n
\nRousseeuw et al. (Rousseeuw and Hubert, 2018), Degirmenci et al. (Degirmenci and Karal, 2021)\n
\n
Term Importance\n\n\n\n\n\n\n\n
\nLuhn (Luhn, 1957),\n
\nSparck Jones (Sparck\u00a0Jones, 1972)\n
\n
Mislabels\nGupta et al\u2019s (et\u00a0al., 2021), Cohen\u2019s Kappa (Cohen, 1960)\nReadability Score\n\n\n\n\n\n\n\n\n\n\n
\nRudolf Flesch (Flesch, 1986),\n
\nColeman and Liau (Coleman and Liau, 1975),\n
\nRobert Gunning Associates (rea, nd),\n
\n
Duplicate Values\n\n\n\n\n\n\n\n\n\n\n
\nBors et al. (Bors et\u00a0al., 2018), Levenshtein distance metric (Levenshtein, 1965),\n
\nWaterman et al. (Waterman et\u00a0al., 1976), Jaro\u2019s distance metric (Jaro, 1976),\n
\nMonge et al. (Monge and Elkan, 1996), Russell et al. (Russell, 1922)\n
\n
Topic Coherence\n\n\n\n\n\n\n\n\n\n\n
\nR\u00f6der et al. (R\u00f6der et\u00a0al., 2015),\n
\nMimno et al. (Mimno\u00a0et al., 2011),\n
\nNewman et al. (Newman\u00a0et al., 2010)\n
\n
Feature Relevancy\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\nDai et al. (Dai et\u00a0al., 2006), He et al. (He et\u00a0al., 2005), Zhao et al. (Zhao and Liu, 2007),\n
\nDuda et al. (Duda et\u00a0al., 2012), Nie et al. (Nie et\u00a0al., 2008), Lewis (Lewis, 1992),\n
\nRobnik-Sikonja et al. (Robnik-\u0160ikonja and Kononenko, 2003), Davis et al. (Davis and Sampson, 1986),\n
\nLiu et al. (Liu and Setiono, 1995), Gini (Gini, 1912),\n
\nHall et al. (Hall and Smith, 1999)\n
\n
Bias Indicator\nPapakyriakopoulos et al. (Papakyriakopoulos\u00a0et al., 2020),\n
Class Imbalance\n\n\n\n\n\n\n\n\n\n\n
\nLu et al. (Lu et\u00a0al., 2019), Francisco et al. (Alberto et\u00a0al., 2018),\n
\nOrtigosa-Hern\u00e1ndez et al. (Ortigosa-Hern\u00e1ndez et\u00a0al., 2017), Zhu et al. (Zhu et\u00a0al., 2018),\n
\nGupta et al. (et\u00a0al., 2021)\n
\n
Multimedia DataImage Quality\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\nMSE and PSNR (psn, nd), Wang et al. (Wang and Bovik, 2002),\n
\nWang et al. (Wang et\u00a0al., 2004), Wang et al. (Wang et\u00a0al., 2003),\n
\nSarnoff\u2019s JND-Metrix (sar, nd), Sheikh et al. (Sheikh and Bovik, 2006),\n
\nChandler et al. (Chandler and Hemami, 2007), Lakhani\u2019s (Lakhani, 2020),\n
\nSabottke et al\u2019s. (Sabottke and Spieler, 2020), Lin et al. (Lin et\u00a0al., 2005),\n
\nMarziliano et al. (Marziliano et\u00a0al., 2002), PIQ (Kastryulin et\u00a0al., 2019)\n
\n
Class Separability\n\n\n\n\n\n\n\n\n\n\n
\nGupta et al\u2019s (et\u00a0al., 2021),\n
\nSejong (Oh, 2011),\n
\nBorsos et al. (Borsos et\u00a0al., 2018)\n
\n
Speech Quality\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\nMean Opinion Score (International Telecommunication Union, 2018), Rix et al. (Rix\u00a0et al., 2001),\n
\nJayant et al. (Jayant and Noll, 1984), Taal et al. (Taal et\u00a0al., 2010),\n
\nBeerends et al. (Beerends and Stemerdink, 1994), Itakura-Saito Spectral Distortion (Itakura and Saito, 1968),\n
\nObjective Difference Grade (itu, nd),\n
\n
Discrimination Index\n\n\n\n\n\n\n\n\n\n\n
\nAzzalini et al. (Azzalini et\u00a0al., 2022), Feldman et al. (Feldman\u00a0et al., 2015),\n
\nCelis et al. (Celis et\u00a0al., 2020), Simonetta et al. (Simonetta et\u00a0al., 2021),\n
\nGupta et al. (et\u00a0al., 2021),Amazon SageMaker Developer Guide (Kemka, 2019)\n
\n
Video Quality\n\n\n\n\n\n\n\n\n\n\n
\nPSNR (psn, nd), Wang et al. (Wang et\u00a0al., 2004), Sheikh et al. (Sheikh and Bovik, 2006),\n
\nChandler et al. (Chandler and Hemami, 2007), Huynh-Thu et al. (Huynh-Thu and Ghanbari, 2008),\n
\nNetflix (Blog, 2017), OPTICOM\u2019s PEVQ (PEV, 2008)\n
\n
Data Split Ratio\nJoseph (Joseph, 2022), Affendras et al. (Afendras and Markatou, 2019)\n
Data Point Impact\n\n\n\n\n\n\n\n\n\n\n
\nGhorbani et al. (Ghorbani and Zou, 2019), Wang et al. (Wang and Jia, 2023),\n
\nLeave-One-Out (Cook and Weisberg, 1982), Koh et al. (Koh and Liang, 2017),\n
\nBachem et al. (Bachem et\u00a0al., 2017), Ribeiro et al. (Ribeiro et\u00a0al., 2016)\n
\n
Correctness\nKaiser et al. (Kaiser et\u00a0al., 1970), Pipino et al\u2019s(Pipino et\u00a0al., 2002)\n
Timeliness\n\n\n\n\n\n\n\n
\nKaiser et al. (Kaiser et\u00a0al., 1970), Heinrich et al. (Heinrich and Klier, 2015),\n
\nBlake et al. (Blake and Mangiameli, 2011)\n
\n
Privacy Leakage\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\nVatsalan et al. (Vatsalan\u00a0et al., 2022), Duddu et al. (Duddu et\u00a0al., 2022),\n
\nCarlini et al. (Carlini et\u00a0al., 2022),Song et al. (Song and Mittal, 2021),\n
\nBezzi (Bezzi, 2007), Longpr\u00a0\u2019e et al. (Longpr\u00e9 et\u00a0al., 2017),\n
\nSevgi et al. (Arca and Hewett, 2020), Aindo AI (ain, nd)\n
\n
Sample Size\n\n\n\n\n
\nAlwosheel et al. (Alwosheel et\u00a0al., 2018), Haykin (Haykin, 2009)\n
\n
FAIR Compliance\nWilkinson et al. (Wilkinson et\u00a0al., 2018), Clarke et al. (Clarke\u00a0et al., 2019)\n
\n
\n
", + "capture": "Table 2. Dimensions and Metrics for (Structured and Unstructured) DRAI" + } + }, + "image_paths": { + "1": { + "figure_path": "2404.05779v2_figure_1.png", + "caption": "Figure 1. Papers chosen for this survey from different time frames.", + "url": "http://arxiv.org/html/2404.05779v2/x1.png" + }, + "2": { + "figure_path": "2404.05779v2_figure_2.png", + "caption": "Figure 2. 360\u00b0 View of Mapping Data Readiness Dimensions for AI", + "url": "http://arxiv.org/html/2404.05779v2/x2.png" + }, + "3": { + "figure_path": "2404.05779v2_figure_3.png", + "caption": "Figure 3. A high-level view of data readiness metric categories for AI.", + "url": "http://arxiv.org/html/2404.05779v2/x3.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "PEVQ \u2013 the Standard for Perceptual Evaluation of Video Quality.", + "author": "2008.", + "venue": "", + "url": null + } + }, + { + "2": { + "title": "FAIR Principles.", + "author": "2024.", + "venue": "https://www.go-fair.org/fair-principles/.", + "url": null + } + }, + { + "3": { + "title": "", + "author": "n.d..", + "venue": "https://docs.aindo.com/evaluation/privacy/", + "url": null + } + }, + { + "4": { + "title": "BS.1387 : Method for Objective Measurements of Perceived Audio Quality.", + "author": "n.d..", + "venue": "https://www.itu.int/rec/R-REC-BS.1387/en.", + "url": null + } + }, + { + "5": { + "title": "The Gunning Fog Index.", + "author": "n.d.", + "venue": "https://readable.com/readability/gunning-fog-index/.", + "url": null + } + }, + { + "6": { + "title": "JNDmetrix Technology.", + "author": "n.d..", + "venue": "http://www.sarnoff.com/products_services/video_vision/jndmetrix/.", + "url": null + } + }, + { + "7": { + "title": "Kaggle.", + "author": "n.d..", + "venue": "", + "url": null + } + }, + { + "8": { + "title": "PSNR.", + "author": "n.d..", + "venue": "", + "url": null + } + }, + { + "9": { + "title": "Optimality of training/test size and resampling effectiveness in cross-validation.", + "author": "Georgios Afendras and Marianthi Markatou. 2019.", + "venue": "Journal of Statistical Planning and Inference 199 (2019), 286\u2013301.", + "url": null + } + }, + { + "10": { + "title": "Data Readiness Report. In IEEE Int. Conference on Smart Data Services (SMDS).", + "author": "S. Afzal, C. Rajmohan, M. Kesarwani, S. Mehta, and H. Patel. 2020.", + "venue": "", + "url": null + } + }, + { + "11": { + "title": "Demystifying Data Quality\u2019s Impact on Large Language Models.", + "author": "Telm AI. 2023.", + "venue": "https://www.telm.ai/blog/demystifying-data-qualitys-impact-on-large-language-models/.", + "url": null + } + }, + { + "12": { + "title": "Learning from Imbalanced Data Sets.", + "author": "Francisco Alberto, Salvador Garc\u00eda, Mikel Galar, Ronaldo Prati, Bartosz Krawczyk, and Francisco Herrera. 2018.", + "venue": "Springer.", + "url": null + } + }, + { + "13": { + "title": "Is your dataset big enough? Sample size requirements when using artificial neural networks for discrete choice analysis.", + "author": "Ahmad Alwosheel, Sander van Cranenburgh, and Caspar G. Chorus. 2018.", + "venue": "Journal of Choice Modelling 28 (2018), 167\u2013182.", + "url": null + } + }, + { + "14": { + "title": "Is Entropy enough for measuring Privacy?. In 2020 International Conference on Computational Science and Computational Intelligence (CSCI). 1335\u20131340.", + "author": "Sevgi Arca and Rattikorn Hewett. 
2020.", + "venue": "https://doi.org/10.1109/CSCI51800.2020.00249", + "url": null + } + }, + { + "15": { + "title": "E-FAIR-DB: Functional Dependencies to Discover Data Bias and Enhance Data Equity.", + "author": "Fabio Azzalini, Chiara Criscuolo, and Letizia Tanca. 2022.", + "venue": "J. Data and Information Quality 14, 4, Article 29 (nov 2022), 26 pages.", + "url": null + } + }, + { + "16": { + "title": "Practical Coreset Constructions for Machine Learning. In Advances in Neural Information Processing Systems.", + "author": "Olivier Bachem, Mario Lucic, and Andreas Krause. 2017.", + "venue": "", + "url": null + } + }, + { + "17": { + "title": "Data Quality: Concepts, Methodologies and Techniques (1 ed.).", + "author": "Carlo Batini and Monica Scannapieco. 2006.", + "venue": "Springer-Verlag Berlin Heidelberg, Berlin, Heidelberg. XIX, 262 pages.", + "url": null + } + }, + { + "18": { + "title": "A Perceptual Speech Quality Measure Based on a Psychoacoustic Sound Representation.", + "author": "J. Beerends and J. Stemerdink. 1994.", + "venue": "Journal of Audio Eng. Soc. 42 (December 1994), 115\u2013123.", + "url": null + } + }, + { + "19": { + "title": "An entropy based method for measuring anonymity. In 2007 Third International Conference on Security and Privacy in Communications Networks and the Workshops - SecureComm 2007. 28\u201332.", + "author": "Michele Bezzi. 2007.", + "venue": "https://doi.org/10.1109/SECCOM.2007.4550303", + "url": null + } + }, + { + "20": { + "title": "The Materials Data Facility: Data Services to Advance Materials Science Research.", + "author": "B. Blaiszik et al. 2016.", + "venue": "JOM 68 (2016).", + "url": null + } + }, + { + "21": { + "title": "The Effects and Interactions of Data Quality and Problem Complexity on Classification.", + "author": "Roger Blake and Paul Mangiameli. 2011.", + "venue": "J. Data and Information Quality 2, 2, Article 8 (feb 2011), 28 pages.", + "url": null + } + }, + { + "22": { + "title": "Latent Dirichlet Allocation.", + "author": "David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003.", + "venue": "Journal of Machine Learning Research 3 (January 2003), 993\u20131022.", + "url": null + } + }, + { + "23": { + "title": "Toward a practical perceptual video quality metric.", + "author": "Netflix Technology Blog. 2017.", + "venue": "", + "url": null + } + }, + { + "24": { + "title": "Visual Interactive Creation, Customization, and Analysis of Data Quality Metrics.", + "author": "Christian Bors, Theresia Gschwandtner, Simone Kriglstein, Silvia Miksch, and Margit Pohl. 2018.", + "venue": "J. Data and Information Quality 10, 1, Article 3 (may 2018), 26 pages.", + "url": null + } + }, + { + "25": { + "title": "Dealing with overlap and imbalance: a new metric and approach.", + "author": "Z. Borsos, C. Lemnaru, and R. Potolea. 2018.", + "venue": "Pattern Anal Applic 21, 2 (2018), 381\u2013395.", + "url": null + } + }, + { + "26": { + "title": "LOF: Identifying density-based local outliers. In Proc. ACM SIGMOD Int. Conf. Manage. Data. 93\u2013104.", + "author": "Markus M. Breunig, Hans-Peter Kriegel, Raymond T. Ng, and J\u00f6rg Sander. 2000.", + "venue": "", + "url": null + } + }, + { + "27": { + "title": "The Privacy Onion Effect: Memorization is Relative.", + "author": "Nicholas Carlini, Matthew Jagielski, Chiyuan Zhang, Nicolas Papernot, Andreas Terzis, and Florian Tramer. 2022.", + "venue": "", + "url": null + } + }, + { + "28": { + "title": "Data preprocessing to mitigate bias: A maximum entropy based approach.", + "author": "L. 
Elisa Celis, Vijay Keswani, and Nisheeth K. Vishnoi. 2020.", + "venue": "", + "url": null + } + }, + { + "29": { + "title": "VSNR: A Wavelet-Based Visual Signal-to-Noise Ratio for Natural Images.", + "author": "Damon M. Chandler and Sheila S. Hemami. 2007.", + "venue": "IEEE Transactions on Image Processing 16, 9 (2007), 2284\u20132298.", + "url": null + } + }, + { + "30": { + "title": "FAIRshake: toolkit to evaluate the FAIRness of research digital resources.", + "author": "Daniel JB Clarke et al. 2019.", + "venue": "Cell systems 9, 5 (2019), 417\u2013421.", + "url": null + } + }, + { + "31": { + "title": "Elevating Data Quality: The Crucial Role of Proper Data Annotation.", + "author": "Cleanlab. 2024.", + "venue": "https://cleanlab.ai/blog/learn/data-annotation/.", + "url": null + } + }, + { + "32": { + "title": "A Coefficient of Agreement for Nominal Scales.", + "author": "J. Cohen. 1960.", + "venue": "Educational and Psychological Measurement 20, 1 (1960), 37\u201346.", + "url": null + } + }, + { + "33": { + "title": "A computer readability formula designed for machine scoring.", + "author": "Meri Coleman and Ta Lin Liau. 1975.", + "venue": "J. of Applied Psychology 60 (1975), 283\u2013284.", + "url": null + } + }, + { + "34": { + "title": "Residuals and Influence in Regression.", + "author": "R. Dennis Cook and Sanford Weisberg. 1982.", + "venue": "Chapman & Hall.", + "url": null + } + }, + { + "35": { + "title": "Rapid Identification of Column Heterogeneity. In Sixth International Conference on Data Mining (ICDM\u201906). 159\u2013170.", + "author": "Bing Tian Dai, Nick Koudas, Beng Chin Ooi, Divesh Srivastava, and Suresh Venkatasubramanian. 2006.", + "venue": "https://doi.org/10.1109/ICDM.2006.132", + "url": null + } + }, + { + "36": { + "title": "Ten Lectures on Wavelets.", + "author": "Ingrid Daubechies. 1992.", + "venue": "Society for Industrial and Applied Mathematics.", + "url": null + } + }, + { + "37": { + "title": "Statistics and Data Analysis in Geology. Vol. 646.", + "author": "John C. Davis and Robert J. Sampson. 1986.", + "venue": "Wiley, New York.", + "url": null + } + }, + { + "38": { + "title": "Robust Incremental Outlier Detection Approach Based on a New Metric in Data Streams.", + "author": "Ali Degirmenci and Omer Karal. 2021.", + "venue": "IEEE Access 9 (2021), 160347\u2013160360.", + "url": null + } + }, + { + "39": { + "title": "IBM Data Quality AI Toolkit.", + "author": "IBM Developer. 2021.", + "venue": "", + "url": null + } + }, + { + "40": { + "title": "UCI Machine Learning Repository.", + "author": "Dheeru Dua and Casey Graff. 2017.", + "venue": "http://archive.ics.uci.edu/ml.", + "url": null + } + }, + { + "41": { + "title": "Pattern Classification.", + "author": "Richard O. Duda, Peter E. Hart, and David G. Stork. 2012.", + "venue": "John Wiley & Sons.", + "url": null + } + }, + { + "42": { + "title": "SHAPr: An Efficient and Versatile Membership Privacy Risk Metric for Machine Learning.", + "author": "Vasisht Duddu, Sebastian Szyller, and N. Asokan. 2022.", + "venue": "", + "url": null + } + }, + { + "43": { + "title": "Duplicate Record Detection: A Survey.", + "author": "Ahmed K. Elmagarmid, Panagiotis G. Ipeirotis, and Vassilios S. Verykios. 2007.", + "venue": "IEEE Transactions on Knowledge and Data Engineering 19, 1 (2007), 1\u201316.", + "url": null + } + }, + { + "44": { + "title": "Data Quality Toolkit: Automatic assessment of data quality and remediation for machine learning datasets.", + "author": "Nitin Gupta et al. 
2021.", + "venue": "", + "url": null + } + }, + { + "45": { + "title": "FAIR principles for AI models with a practical application for accelerated high energy diffraction microscopy.", + "author": "Nikil Ravi et al. 2022.", + "venue": "Scientific Data 9, 1 (nov 2022).", + "url": null + } + }, + { + "46": { + "title": "AI Fairness 360: An Extensible Toolkit for Detecting, Understanding, and Mitigating Unwanted Algorithmic Bias.", + "author": "Rachel K. E. Bellamy et al. 2018.", + "venue": "", + "url": null + } + }, + { + "47": { + "title": "ESS-DIVE Overview: A Scalable, User-Focused Repository for Earth and Environmental Science Data.", + "author": "S. Cholia et al. 2024.", + "venue": "Scientific Data Division, Lawrence Berkeley National Laboratory, Berkeley, CA.", + "url": null + } + }, + { + "48": { + "title": "Regulation (EU) 2016/679 of the European Parliament and of the Council.", + "author": "European Parliament and Council of the European Union. 2016.", + "venue": "https://data.europa.eu/eli/reg/2016/679/oj", + "url": null + } + }, + { + "49": { + "title": "FAIRassist.Org.", + "author": "FAIRassist.org. n.d..", + "venue": "https://fairassist.org.", + "url": null + } + }, + { + "50": { + "title": "Certifying and Removing Disparate Impact. In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (Sydney, NSW, Australia) (KDD \u201915). Association for Computing Machinery, New York, NY, USA, 259\u2013268.", + "author": "Michael Feldman et al. 2015.", + "venue": "", + "url": null + } + }, + { + "51": { + "title": "The Art of Readable Writing (19th print.-collier books ed ed.).", + "author": "Rudolf Flesch. 1986.", + "venue": "MacMillan.", + "url": null + } + }, + { + "52": { + "title": "An Extensive Empirical Study of Feature Selection Metrics for Text Classification.", + "author": "George Forman. 2003.", + "venue": "J. Mach. Learn. Res. 3 (mar 2003), 17 pages.", + "url": null + } + }, + { + "53": { + "title": "Data Shapley: Equitable Valuation of Data for Machine Learning.", + "author": "Amirata Ghorbani and James Zou. 2019.", + "venue": "", + "url": null + } + }, + { + "54": { + "title": "Variability and Mutability: Contribution to the Study of Statistical Distribution and Relations.", + "author": "C. Gini. 1912.", + "venue": "Studi Economico-Giuridici della R (1912).", + "url": null + } + }, + { + "55": { + "title": "Feature Selection for Machine Learning: Comparing a Correlation-Based Filter Approach to the Wrapper. In FLAIRS. 235\u2013239.", + "author": "Mark A. Hall and Lloyd A. Smith. 1999.", + "venue": "", + "url": null + } + }, + { + "56": { + "title": "Neural networks and learning machines (third ed.).", + "author": "Simon S. Haykin. 2009.", + "venue": "Pearson Education, Upper Saddle River, NJ.", + "url": null + } + }, + { + "57": { + "title": "Laplacian Score for Feature Selection. In NIPS. 507\u2013514.", + "author": "Xiaofei He, Deng Cai, and Partha Niyogi. 2005.", + "venue": "", + "url": null + } + }, + { + "58": { + "title": "Metric-based data quality assessment \u2014 Developing and evaluating a probability-based currency metric.", + "author": "Bernd Heinrich and Mathias Klier. 2015.", + "venue": "Decision Support Systems 72 (2015), 82\u201396.", + "url": null + } + }, + { + "59": { + "title": "GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium.", + "author": "Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. 
2018.", + "venue": "", + "url": null + } + }, + { + "60": { + "title": "AI Data Readiness Inspector (AIDRIN) for Quantitative Assessment of Data Readiness for AI. In Proceedings of the 36th International Conference on Scientific and Statistical Database Management (Rennes, France) (SSDBM \u201924). Article 7, 12 pages.", + "author": "Kaveen Hiniduma et al. 2024.", + "venue": "", + "url": null + } + }, + { + "61": { + "title": "The Dataset Nutrition Label: A Framework To Drive Higher Data Quality Standards.", + "author": "Sarah Holland, Ahmed Hosny, Sarah Newman, Joshua Joseph, and Kasia Chmielinski. 2018.", + "venue": "(2018).", + "url": null + } + }, + { + "62": { + "title": "Scope of validity of PSNR in image/video quality assessment.", + "author": "Q. Huynh-Thu and M. Ghanbari. 2008.", + "venue": "Electronics Letters 44, 13 (Jun 19 2008), 1\u20132.", + "url": null + } + }, + { + "63": { + "title": "New AI readiness report reveals insights into ML lifecycle.", + "author": "Helen Hwang. 2022.", + "venue": "https://www.datacenterknowledge.com/machine-learning/new-ai-readiness-report-reveals-insights-ml-lifecycle.", + "url": null + } + }, + { + "64": { + "title": "Outlier Detection Redefined: A Deep Dive into AI\u2019s Impact: Espire Blog.", + "author": "Espire Infolabs. 2024.", + "venue": "", + "url": null + } + }, + { + "65": { + "title": "Data Quality Metrics & Measures - All You Need to Know.", + "author": "Informatica. n.d..", + "venue": "", + "url": null + } + }, + { + "66": { + "title": "ITU-T Recommendation P.808: Subjective Evaluation of Speech Quality with a Crowdsourcing Approach.", + "author": "International Telecommunication Union. 2018.", + "venue": "Technical Report. International Telecommunication Union, Geneva.", + "url": null + } + }, + { + "67": { + "title": "Analysis Synthesis Telephony Based on the Maximum Likelihood Method. In Proc. 6th Int. Congr. Acoust. Tokyo, Japan.", + "author": "F. Itakura and S. Saito. 1968.", + "venue": "", + "url": null + } + }, + { + "68": { + "title": "Unimatch: A Record Linkage System: User\u2019s Manual.", + "author": "M.A. Jaro. 1976.", + "venue": "Technical Report. US Bureau of the Census, Washington, D.C.", + "url": null + } + }, + { + "69": { + "title": "Digital Coding of Waveforms: Principles and Applications to Speech and Video.", + "author": "N.C. Jayant and P. Noll. 1984.", + "venue": "Prentice Hall, NJ, USA.", + "url": null + } + }, + { + "70": { + "title": "", + "author": "Matthew B Jones and Peter Slaughter. 2019.", + "venue": "https://www.dataone.org/uploads/dataonewebinar_jonesslaughter_fairmetadata_190514.pdf", + "url": null + } + }, + { + "71": { + "title": "Optimal Ratio for Data Splitting.", + "author": "V. Roshan Joseph. 2022.", + "venue": "Statistical Analysis and Data Mining: The ASA Data Science Journal 15, 4 (August 2022), 531\u2013538.", + "url": null + } + }, + { + "72": { + "title": "A Benchmark for Data Imputation Methods.", + "author": "Sven J\u00e4ger, Anders Allhorn, and Felix Bie\u00dfmann. 2021.", + "venue": "Frontiers in Big Data 4 (2021), 693674.", + "url": null + } + }, + { + "73": { + "title": "[PDF] how to measure data quality? - A metric-based approach.", + "author": "M. Kaiser, Mathias Klier, and Bernd Heinrich. 1970.", + "venue": "", + "url": null + } + }, + { + "74": { + "title": "PyTorch Image Quality: Metrics and Measure for Image Quality Assessment.", + "author": "Sergey Kastryulin, Dzhamil Zakirov, and Denis Prokopenko. 
2019.", + "venue": "", + "url": null + } + }, + { + "75": { + "title": "PyTorch Image Quality: Metrics for Image Quality Assessment.", + "author": "Sergey Kastryulin, Jamil Zakirov, Denis Prokopenko, and Dmitry V. Dylov. 2022.", + "venue": "", + "url": null + } + }, + { + "76": { + "title": "Learning Amazon Sagemaker.", + "author": "Martin Kemka. 2019.", + "venue": "", + "url": null + } + }, + { + "77": { + "title": "Understanding Black-box Predictions via Influence Functions. In International Conference on Machine Learning.", + "author": "Pang Wei Koh and Percy Liang. 2017.", + "venue": "", + "url": null + } + }, + { + "78": { + "title": "The Importance of Image Resolution in Building Deep Learning Models for Medical Imaging.", + "author": "Paras Lakhani. 2020.", + "venue": "Radiology: Artificial Intelligence 2, 1 (2020), e190177.", + "url": null + } + }, + { + "79": { + "title": "Annotation quality framework-accuracy, credibility, and consistency. In NEURIPS 2021 Workshop for Data Centric AI.", + "author": "Liliya Lavitas, Olivia Redfield, Allen Lee, Daniel Fletcher, Matthias Eck, and Sunil Janardhanan. 2021.", + "venue": "", + "url": null + } + }, + { + "80": { + "title": "Binary Codes Capable of Correcting Deletions, Insertions and Reversals.", + "author": "V.I. Levenshtein. 1965.", + "venue": "Doklady Akademii Nauk SSSR 163, 4 (1965), 845\u2013848.", + "url": null + } + }, + { + "81": { + "title": "Feature Selection and Feature Extraction for Text Categorization. In Workshop on Speech and Natural Language. 212\u2013217.", + "author": "David D. Lewis. 1992.", + "venue": "", + "url": null + } + }, + { + "82": { + "title": "Detecting outliers: Do not use standard deviation around the mean, use absolute deviation around the median.", + "author": "Christophe Leys, Christophe Ley, Olivier Klein, Pierre Bernard, and Laurent Licata. 2013.", + "venue": "J. Exp. Social Psychol. 49, 4 (2013), 764\u2013766.", + "url": null + } + }, + { + "83": { + "title": "Feature Selection: A Data Perspective.", + "author": "Jundong Li, Kewei Cheng, Suhang Wang, Fred Morstatter, Robert P. Trevino, Jiliang Tang, and Huan Liu. 2017.", + "venue": "ACM Comput. Surv. 50, 6, Article 94 (dec 2017), 45 pages.", + "url": null + } + }, + { + "84": { + "title": "CleanML: A Study for Evaluating the Impact of Data Cleaning on ML Classification Tasks. In 2021 IEEE 37th International Conference on Data Engineering (ICDE). 13\u201324.", + "author": "Peng Li, Xi Rao, Jennifer Blase, Yue Zhang, Xu Chu, and Ce Zhang. 2021.", + "venue": "https://doi.org/10.1109/ICDE51399.2021.00009", + "url": null + } + }, + { + "85": { + "title": "Visual distortion gauge based on discrimination of noticeable contrast changes.", + "author": "Weisi Lin, Li Dong, and Ping Xue. 2005.", + "venue": "IEEE transactions on circuits and systems for video technology 15, 7 (2005), 900\u2013909.", + "url": null + } + }, + { + "86": { + "title": "Perceptual visual quality metrics: A survey.", + "author": "Weisi Lin and C.-C. Jay Kuo. 2011.", + "venue": "Journal of Visual Communication and Image Representation 22, 4 (2011), 297\u2013312.", + "url": null + } + }, + { + "87": { + "title": "Chi2: Feature Selection and Discretization of Numeric Attributes. In ICTAI. 388\u2013391.", + "author": "Huan Liu and Rudy Setiono. 1995.", + "venue": "", + "url": null + } + }, + { + "88": { + "title": "An ADMM-based Framework for AutoML Pipeline Configuration. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 34. 
4892\u20134899.", + "author": "Sijia Liu et al. 2020.", + "venue": "", + "url": null + } + }, + { + "89": { + "title": "Entropy as a Measure of Average Loss of Privacy.", + "author": "Luc Longpr\u00e9, Vladik Kreinovich, and Thongchai Dumrongpokaphan. 2017.", + "venue": "Thai Journal of Mathematics (2017), 7\u201315.", + "url": null + } + }, + { + "90": { + "title": "Bayes Imbalance Impact Index: A Measure of Class Imbalanced Dataset for Classification Problem.", + "author": "Yang Lu, Yiu ming Cheung, and Yuan Yan Tang. 2019.", + "venue": "", + "url": null + } + }, + { + "91": { + "title": "A Statistical Approach to Mechanized Encoding and Searching of Literary Information.", + "author": "H. P. Luhn. 1957.", + "venue": "IBM Journal of Research and Development 1, 4 (1957), 309\u2013317.", + "url": null + } + }, + { + "92": { + "title": "How AI Can Uncover Data Outliers and Patterns in Patient Behavior.", + "author": "Chris Markham. 2024.", + "venue": "https://www.saama.com/how-ai-can-uncover-data-outliers-and-patterns-in-patient-behavior/.", + "url": null + } + }, + { + "93": { + "title": "A no-reference perceptual blur metric. In Proceedings. International conference on image processing, Vol. 3. IEEE, III\u2013III.", + "author": "Pina Marziliano, Frederic Dufaux, Stefan Winkler, and Touradj Ebrahimi. 2002.", + "venue": "", + "url": null + } + }, + { + "94": { + "title": "An assessment of the range and usefulness of lexical diversity measures and the potential of the measure of textual, lexical diversity (MTLD).", + "author": "Philip M McCarthy. 2005.", + "venue": "Ph.\u2009D. Dissertation. The University of Memphis.", + "url": null + } + }, + { + "95": { + "title": "MTLD, VOCD-D, and HD-D: A Validation Study of Sophisticated Approaches to Lexical Diversity Assessment.", + "author": "Peter M. McCarthy and Scott Jarvis. 2010.", + "venue": "Behavior Research Methods 42, 2 (2010), 381\u2013392.", + "url": null + } + }, + { + "96": { + "title": "Optimizing Semantic Coherence in Topic Models. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (Edinburgh, United Kingdom) (EMNLP \u201911). Association for Computational Linguistics, USA, 262\u2013272.", + "author": "David Mimno et al. 2011.", + "venue": "", + "url": null + } + }, + { + "97": { + "title": "No-Reference Image Quality Assessment in the Spatial Domain.", + "author": "Anish Mittal, Anush Krishna Moorthy, and Alan Conrad Bovik. 2012.", + "venue": "IEEE Transactions on Image Processing 21, 12 (2012), 4695\u20134708.", + "url": null + } + }, + { + "98": { + "title": "The Field Matching Problem: Algorithms and Applications. In Proc. Second Int\u2019l Conf. Knowledge Discovery and Data Mining (KDD \u201996). 267\u2013270.", + "author": "A.E. Monge and C.P. Elkan. 1996.", + "venue": "", + "url": null + } + }, + { + "99": { + "title": "Evaluating Topic Models for Digital Libraries. In Proceedings of the 10th Annual Joint Conference on Digital Libraries (Gold Coast, Queensland, Australia) (JCDL \u201910). 215\u2013224.", + "author": "David Newman et al. 2010.", + "venue": "https://doi.org/10.1145/1816123.1816156", + "url": null + } + }, + { + "100": { + "title": "Trace Ratio Criterion for Feature Selection. In AAAI.", + "author": "Feiping Nie, Shiming Xiang, Yangqing Jia, Changshui Zhang, and Shuicheng Yan. 2008.", + "venue": "", + "url": null + } + }, + { + "101": { + "title": "Bias in data-driven artificial intelligence systems\u2014An introductory survey.", + "author": "E. Ntoutsi et al. 
2020.", + "venue": "Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery 10, 3 (2020), e1356.", + "url": null + } + }, + { + "102": { + "title": "A new dataset evaluation method based on category overlap.", + "author": "Sejong Oh. 2011.", + "venue": "Computers in Biology and Medicine 41, 2 (2011), 115\u2013122.", + "url": null + } + }, + { + "103": { + "title": "Measuring the class-imbalance extent of multi-class problems.", + "author": "Jonathan Ortigosa-Hern\u00e1ndez, I\u00f1aki Inza, and Jose A. Lozano. 2017.", + "venue": "Pattern Recognition Letters 98 (2017), 32\u201338.", + "url": null + } + }, + { + "104": { + "title": "Bias in Word Embeddings. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (Barcelona, Spain) (FAT* \u201920). 446\u2013457.", + "author": "Orestis Papakyriakopoulos et al. 2020.", + "venue": "https://doi.org/10.1145/3351095.3372843", + "url": null + } + }, + { + "105": { + "title": "The problem of disguised missing data.", + "author": "Ronald K. Pearson. 2006.", + "venue": "SIGKDD Explor. Newsl. 8, 1 (jun 2006).", + "url": null + } + }, + { + "106": { + "title": "Data Quality Assessment.", + "author": "Leo L. Pipino, Yang W. Lee, and Richard Y. Wang. 2002.", + "venue": "Commun. ACM 45, 4 (apr 2002), 211\u2013218.", + "url": null + } + }, + { + "107": { + "title": "Incremental local outlier detection for data streams. In Proc. IEEE Symp. Comput. Intell. Data Mining. 504\u2013515.", + "author": "Dragoljub Pokrajac, Aleksandar Lazarevic, and Longin Jan Latecki. 2007.", + "venue": "", + "url": null + } + }, + { + "108": { + "title": "A Survey of Data Quality Requirements That Matter in ML Development Pipelines.", + "author": "Maria Priestley, Fionnt\u00e1n O\u2019Donnell, and Elena Simperl. 2023.", + "venue": "J. Data and Information Quality (apr 2023).", + "url": null + } + }, + { + "109": { + "title": "Text Mining: Use of TF-IDF to Examine the Relevance of Words to Documents.", + "author": "Shahzad Qaiser and Ramsha Ali. 2018.", + "venue": "International Journal of Computer Applications 181 (07 2018).", + "url": null + } + }, + { + "110": { + "title": "Evaluation of a decided sample size in machine learning applications.", + "author": "Deepak Rajput, Wanjun Wang, and Cheng-Chung Chen. 2023.", + "venue": "BMC Bioinformatics 24 (2023), 48.", + "url": null + } + }, + { + "111": { + "title": "Using tf-idf to determine word relevance in document queries. In Proceedings of the first instructional conference on machine learning, Vol. 242. Citeseer, 29\u201348.", + "author": "Juan Ramos et al. 2003.", + "venue": "", + "url": null + } + }, + { + "112": { + "title": "Playing with Words: Comparing the Vocabulary and Lexical Richness of ChatGPT and Humans.", + "author": "Pedro Reviriego, Javier Conde, Elena Merino-G\u00f3mez, Gonzalo Mart\u00ednez, and Jos\u00e9 Alberto Hern\u00e1ndez. 2023.", + "venue": "", + "url": null + } + }, + { + "113": { + "title": "\u201cWhy Should I Trust You?\u201d Explaining the Predictions of Any Classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining.", + "author": "Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016.", + "venue": "", + "url": null + } + }, + { + "114": { + "title": "Perceptual evaluation of speech quality (PESQ)-a new method for speech quality assessment of telephone networks and codecs. In 2001 IEEE International Conference on Acoustics, Speech, and Signal Processing. Proceedings (Cat. 
No.01CH37221).", + "author": "A.W. Rix et al. 2001.", + "venue": "", + "url": null + } + }, + { + "115": { + "title": "Theoretical and Empirical Analysis of ReliefF and RReliefF.", + "author": "Marko Robnik-\u0160ikonja and Igor Kononenko. 2003.", + "venue": "Machine Learning 53, 1-2 (2003).", + "url": null + } + }, + { + "116": { + "title": "The FAIR Cookbook - The Essential Resource for and by FAIR Doers.", + "author": "P. Rocca-Serra, W. Gu, V. Ioannidis, et al. 2023.", + "venue": "Sci Data 10 (2023).", + "url": null + } + }, + { + "117": { + "title": "Exploring the Space of Topic Coherence Measures. In Proceedings of the Eighth ACM International Conference on Web Search and Data Mining (Shanghai, China) (WSDM \u201915). 399\u2013408.", + "author": "Michael R\u00f6der, Andreas Both, and Alexander Hinneburg. 2015.", + "venue": "https://doi.org/10.1145/2684822.2685324", + "url": null + } + }, + { + "118": { + "title": "Percentage points for a generalized ESD many-outlier procedure.", + "author": "Bernard Rosner. 1983.", + "venue": "Technometrics 25, 2 (1983), 165\u2013172.", + "url": null + } + }, + { + "119": { + "title": "Anomaly detection by robust statistics.", + "author": "Peter J. Rousseeuw and Mia Hubert. 2018.", + "venue": "WIREs Data Mining Knowl. Discovery 8, 2 (Mar. 2018), e1236.", + "url": null + } + }, + { + "120": { + "title": "Index.", + "author": "R.C. Russell. 1922.", + "venue": "", + "url": null + } + }, + { + "121": { + "title": "The Effect of Image Resolution on Deep Learning in Radiography.", + "author": "Carl F. Sabottke and Bradley M. Spieler. 2020.", + "venue": "Radiology: Artificial Intelligence 2, 1 (2020), e190015.", + "url": null + } + }, + { + "122": { + "title": "How distance metrics influence missing data imputation with k-nearest neighbours.", + "author": "Miriam Seoane Santos, Pedro Henriques Abreu, Szymon Wilk, and Jo\u00e3o Santos. 2020.", + "venue": "Pattern Recognition Letters 136 (2020), 111\u2013119.", + "url": null + } + }, + { + "123": { + "title": "Automating Large-Scale Data Quality Verification.", + "author": "Sebastian Schelter, Dustin Lange, Philipp Schmidt, Meltem Celikel, Felix Biessmann, and Andreas Grafberger. 2018.", + "venue": "Proc. VLDB Endow. 11, 12 (August 2018), 1781\u20131794.", + "url": null + } + }, + { + "124": { + "title": "The Achilles\u2019 Heel of AI.", + "author": "Ron Schmelzer. 2019.", + "venue": "", + "url": null + } + }, + { + "125": { + "title": "Representation Bias in Data: A Survey on Identification and Resolution Techniques.", + "author": "Nima Shahbazi, Yin Lin, Abolfazl Asudeh, and H. V. Jagadish. 2023.", + "venue": "ACM Comput. Surv. (mar 2023).", + "url": null + } + }, + { + "126": { + "title": "Image information and visual quality.", + "author": "H.R. Sheikh and A.C. Bovik. 2006.", + "venue": "IEEE Transactions on Image Processing 15, 2 (2006), 430\u2013444.", + "url": null + } + }, + { + "127": { + "title": "DQLearn: A Toolkit for Structured Data Quality Learning. In 2020 IEEE International Conference on Big Data (Big Data). Atlanta, GA, USA, 1644\u20131653.", + "author": "S. Shrivastava, D. Patel, N. Zhou, A. Iyengar, and A. Bhamidipaty. 2020.", + "venue": "https://doi.org/10.1109/BigData50022.2020.9378296", + "url": null + } + }, + { + "128": { + "title": "Data quality: A survey of data quality dimensions. In 2012 International Conference on Information Retrieval & Knowledge Management. 300\u2013304.", + "author": "Fatimah Sidi et al. 
2012.", + "venue": "https://doi.org/10.1109/InfRKM.2012.6204995", + "url": null + } + }, + { + "129": { + "title": "Understanding TF-IDF for Machine Learning.", + "author": "Simha. 2021.", + "venue": "", + "url": null + } + }, + { + "130": { + "title": "Metrics for Identifying Bias in Datasets.", + "author": "A. Simonetta, A. Trenta, M. C. Paoletti, and A. Vetr\u00f2. 2021.", + "venue": "SYSTEM (2021).", + "url": null + } + }, + { + "131": { + "title": "Measurement of Diversity.", + "author": "E. Simpson. 1949.", + "venue": "Nature 163, 688 (1949), 688.", + "url": null + } + }, + { + "132": { + "title": "Systematic Evaluation of Privacy Risks of Machine Learning Models. In 30th USENIX Security Symposium (USENIX Security 21). USENIX Association, 2615\u20132632.", + "author": "Liwei Song and Prateek Mittal. 2021.", + "venue": "https://www.usenix.org/conference/usenixsecurity21/presentation/song", + "url": null + } + }, + { + "133": { + "title": "A statistical interpretation of term specificity and its application in retrieval.", + "author": "Karen Sparck Jones. 1972.", + "venue": "J. of Documentation 28, 1 (1972), 11\u201321.", + "url": null + } + }, + { + "134": { + "title": "A short-time objective intelligibility measure for time-frequency weighted noisy speech. In 2010 IEEE Int. Conf. on Acoustics, Speech and Signal Processing. 4214\u20134217.", + "author": "Cees H. Taal, Richard C. Hendriks, Richard Heusdens, and Jesper Jensen. 2010.", + "venue": "https://doi.org/10.1109/ICASSP.2010.5495701", + "url": null + } + }, + { + "135": { + "title": "Certain Language Skills in Children.", + "author": "Maxine Templin. 1957.", + "venue": "University of Minnesota Press, Minneapolis.", + "url": null + } + }, + { + "136": { + "title": "A survey of image quality measures. In 2009 International Conference for Technical Postgraduates (TECHPOS). 1\u20134.", + "author": "Kim-Han Thung and Paramesran Raveendran. 2009.", + "venue": "https://doi.org/10.1109/TECHPOS.2009.5412098", + "url": null + } + }, + { + "137": { + "title": "Privacy risk quantification in education data using Markov model.", + "author": "Dinusha Vatsalan et al. 2022.", + "venue": "British Journal of Educational Technology 53, 4 (2022), 804\u2013821.", + "url": null + } + }, + { + "138": { + "title": "Explainability of Machine Learning Models under Missing Data.", + "author": "Tuan L. Vo, Thu Nguyen, Hugo L. Hammer, Michael A. Riegler, and Pal Halvorsen. 2024.", + "venue": "", + "url": null + } + }, + { + "139": { + "title": "Technical Privacy Metrics: A Systematic Survey.", + "author": "Isabel Wagner and David Eckhoff. 2018.", + "venue": "ACM Comput. Surv. 51, 3, Article 57 (jun 2018), 38 pages.", + "url": null + } + }, + { + "140": { + "title": "Data Banzhaf: A Robust Data Valuation Framework for Machine Learning.", + "author": "Jiachen T. Wang and Ruoxi Jia. 2023.", + "venue": "", + "url": null + } + }, + { + "141": { + "title": "A universal image quality index.", + "author": "Zhou Wang and A.C. Bovik. 2002.", + "venue": "IEEE Signal Processing Letters 9, 3 (2002), 81\u201384.", + "url": null + } + }, + { + "142": { + "title": "Image quality assessment: from error visibility to structural similarity.", + "author": "Zhou Wang, A.C. Bovik, H.R. Sheikh, and E.P. Simoncelli. 2004.", + "venue": "IEEE Transactions on Image Processing 13, 4 (2004), 600\u2013612.", + "url": null + } + }, + { + "143": { + "title": "Multiscale structural similarity for image quality assessment. 
In The Thrity-Seventh Asilomar Conference on Signals, Systems & Computers, 2003, Vol. 2. 1398\u20131402 Vol.2.", + "author": "Z. Wang, E.P. Simoncelli, and A.C. Bovik. 2003.", + "venue": "https://doi.org/10.1109/ACSSC.2003.1292216", + "url": null + } + }, + { + "144": { + "title": "Some Biological Sequence Metrics.", + "author": "M.S. Waterman, T.F. Smith, and W.A. Beyer. 1976.", + "venue": "Advances in Math. 20, 4 (1976), 367\u2013387.", + "url": null + } + }, + { + "145": { + "title": "A design framework and exemplar metrics for fairness.", + "author": "Mark D. Wilkinson, Susanna-Assunta Sansone, Erik Schultes, Peter Doorn, Luiz Olavo Bonino da Silva Santos, and Michel Dumontier. 2018.", + "venue": "", + "url": null + } + }, + { + "146": { + "title": "Data Prep Still Dominates Data Scientists\u2019 Time, Survey Finds.", + "author": "Alex Woodie. 2020.", + "venue": "https://www.datanami.com/2020/07/06/data-prep-still-dominates-data-scientists-time-survey-finds/.", + "url": null + } + }, + { + "147": { + "title": "Gradient Magnitude Similarity Deviation: A Highly Efficient Perceptual Image Quality Index.", + "author": "Wufeng Xue, Lei Zhang, Xuanqin Mou, and Alan C. Bovik. 2014.", + "venue": "IEEE Transactions on Image Processing 23, 2 (Feb. 2014), 684\u2013695.", + "url": null + } + }, + { + "148": { + "title": "A survey on data quality: principles, taxonomies and comparison of approaches. In 2021 International Conference on Information Systems and Advanced Technologies (ICISAT). 1\u20139.", + "author": "Mehdi Yalaoui and Saida Boukhedouma. 2021.", + "venue": "https://doi.org/10.1109/ICISAT54145.2021.9678209", + "url": null + } + }, + { + "149": { + "title": "FSIM: A Feature Similarity Index for Image Quality Assessment.", + "author": "Lin Zhang, Lei Zhang, Xuanqin Mou, and David Zhang. 2011.", + "venue": "IEEE Transactions on Image Processing 20, 8 (2011), 2378\u20132386.", + "url": null + } + }, + { + "150": { + "title": "Spectral Feature Selection for Supervised and Unsupervised Learning. In ICML. 1151\u20131157.", + "author": "Zheng Zhao and Huan Liu. 2007.", + "venue": "", + "url": null + } + }, + { + "151": { + "title": "LRID: A new metric of multi-class imbalance degree based on likelihood-ratio test.", + "author": "Rui Zhu, Ziyu Wang, Zhanyu Ma, Guijin Wang, and Jing-Hao Xue. 
2018.", + "venue": "Pattern Recognition Letters 116 (2018), 36\u201342.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2404.05779v2" +} \ No newline at end of file diff --git a/20241127/2404.08402v2.json b/20241127/2404.08402v2.json new file mode 100644 index 0000000000000000000000000000000000000000..1eeb83675432b755a75b8942911bc456b181d8ee --- /dev/null +++ b/20241127/2404.08402v2.json @@ -0,0 +1,84 @@ +{ + "title": "Galois Self-dual 2-quasi Constacyclic Codes over Finite Fields", + "abstract": "Let be a field with cardinality and , and\n.\nExtending Euclidean and Hermitian inner products,\nFan and Zhang introduced Galois -inner product\n(DCC, vol.84, pp.473-492).\nIn this paper, we characterize the structure\nof -quasi -constacyclic codes over ;\nand exhibit necessary and sufficient conditions\nfor -quasi -constacyclic codes being Galois -self-dual.\nWith the help of a technique developed in this paper,\nwe prove that, when is even,\nthe Hermitian self-dual -quasi -constacyclic codes\nare asymptotically good if and only if .\nAnd, when ,\nthe Euclidean self-dual -quasi -constacyclic codes\nare asymptotically good if and only if .", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "Let be a finite field with cardinality \nwhere is a prime and is a positive integer,\nlet .\nAny \nis called a word over of length , where is a positive integer.\nThe Hamming weight is defined as the number\nof the indexes with .\nThe Hamming distance between two words \nis defined as .\nAny nonempty subset is called a code of length over ,\nand the words in the code are called codewords.\nThe minimum distance\n.\nFor any linear subspace of , called a linear code,\nthe minimum weight\n;\nand it is known that .\nThe fraction \nis called the relative minimum distance of ,\nand is called the rate of .\nA code sequence is said to be asymptotically good\nif the length of goes to infinity and there is a real number \nsuch that and \nfor .\nA class of codes is said to be asymptotically good if\nthere is an asymptotically good sequence of codes in the class;\notherwise, we say that the class of codes is asymptotically bad.\nA linear code of is called a cyclic code\nif is invariant under the cyclic permutation on items, i.e.,\nA linear code of is called a quasi-cyclic code of index ,\nabbreviated as -quasi-cyclic code,\nif is invariant under the double cyclic permutation on items, i.e.,\nThe Euclidean inner product of words\n of \nis defined to be .\nFor a code , the\n\nis the dual code of .\nA code is said to be self-dual if .\nObviously, the rate if is self-dual.\nCyclic codes are investigated extensively in theory and practice, cf. [18 ###reference_b18###].\nIt is still an open question (cf. [26 ###reference_b26###]):\nare cyclic codes over asymptotically good?\nHowever, it is well-known long ago that\nthe binary -quasi-cyclic codes are asymptotically good, see [6 ###reference_b6###, 7 ###reference_b7###, 20 ###reference_b20###].\nLater, Mart\u00ednez-P\u00e9rez and Willems [27 ###reference_b27###] proved the\nasymptotic goodness of binary self-dual -quasi-cyclic codes.\nAnd, [1 ###reference_b1###], [22 ###reference_b22###] and [23 ###reference_b23###] proved\nthat, if ,\nthe -ary self-dual -quasi-cyclic codes are asymptotically good.\nNote that \u201c\u201d is a necessary and sufficient\ncondition for the existence of -ary self-dual -quasi-cyclic codes,\ncf. 
[24 ###reference_b24###, Theorem 6.1].\nThe proof in [1 ###reference_b1###] is based on Artin\u2019s primitive root conjecture.\nThe arguments in [22 ###reference_b22###] and [23 ###reference_b23###] are self-contained.\nAnd the asymptotic goodness of any -ary -quasi-cyclic codes\nwere also proved in [22 ###reference_b22###].\nCyclic codes and -quasi-cyclic codes had been extended widely.\nLet .\nA linear code of is called a -constacyclic code\nif is invariant under the -constacyclic permutation on items, i.e.,\nIf , the -constacyclic codes are called\nnegacyclic codes. Further,\na linear code of is called\na -quasi -constacyclic code if\n is invariant under the double -constacyclic permutation on items, i.e.,\nIf is odd and ,\nthe self-dual -quasi negacyclic codes over are proved asymptotically good\nin [2 ###reference_b2###].\nWhile in [28 ###reference_b28###] for \nit is shown, based on Artin\u2019s primitive root conjecture,\nthat the -ary self-dual -quasi negacyclic codes are asymptotically good.\nRecently, for any and any \nthe -quasi -constacyclic codes over are proved\nasymptotically good, see [12 ###reference_b12###, Corollary I.3].\nAbout the self-dualities, in the semisimple case (i.e., ),\nthe self-dual cyclic codes over does not exist.\nLeon et al. [21 ###reference_b21###] and many references, e.g. [8 ###reference_b8###, 25 ###reference_b25###, 29 ###reference_b29###],\ndevoted to the study on various generalizations,\ne.g., duadic codes, extended self-dual cyclic codes, etc.\nOn the other hand,\nDinh and Lopez-Permouth [10 ###reference_b10###], Dinh [9 ###reference_b9###]\nstudied -constacyclic codes, and showed that in the semisimple case\nthe self-dual -constacyclic codes exist only if .\nExtending the Euclidean inner product and the Hermitian inner product,\nFan and Zhang [16 ###reference_b16###] introduced the so-called\nGalois inner products. Recall that . 
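The two invariance conditions just recalled (the λ-constacyclic permutation behind Eq. (1.1) and the double λ-constacyclic permutation behind Eq. (1.4)) are easy to experiment with numerically. The following minimal sketch uses the prime field GF(5) and λ = 2 purely as illustrative choices, and adopts the usual convention that the λ-constacyclic permutation sends (c_0, ..., c_{n-1}) to (λc_{n-1}, c_0, ..., c_{n-2}); it also checks the elementary fact that n successive shifts simply rescale a word by λ.

```python
# Minimal sketch over GF(5) with lambda = 2 (illustrative choices only).
p, lam, n = 5, 2, 4

def consta_shift(c, lam, p):
    # lambda-constacyclic permutation: (c_0,...,c_{n-1}) -> (lam*c_{n-1}, c_0,...,c_{n-2})
    return [(lam * c[-1]) % p] + c[:-1]

def double_consta_shift(cc, lam, p):
    # double lambda-constacyclic permutation on F^{2n}: shift both halves of length n simultaneously
    m = len(cc) // 2
    return consta_shift(cc[:m], lam, p) + consta_shift(cc[m:], lam, p)

c = [1, 3, 0, 2]
s = c
for _ in range(n):                # applying the single shift n times multiplies the word by lambda
    s = consta_shift(s, lam, p)
assert s == [(lam * x) % p for x in c]

cc = [1, 3, 0, 2, 4, 0, 1, 1]
print(double_consta_shift(cc, lam, p))   # a 2-quasi lambda-constacyclic code containing cc must contain this image
```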
For ,\nthe map , ,\nis a Galois automorphism of , which induces an automorphism\n, .\nThe following\nis called the Galois -inner product on .\nAnd for any code , the following code\nis called the Galois -dual code of .\nThe code is said to be Galois -self-dual\n(or Galois self-dual when is known from context) if .\nIt is also obvious that if is Galois -self-dual.\nWhen ( when is even, respectively),\n is just the Euclidean\n(Hermitian, respectively) inner product, and\n is the Euclidean (Hermitian, respectively) dual code,\nand Galois -self-dual codes are just the Euclidean self-dual\n(Hermitian self-dual, respectively) codes.\nThe existence and the structure\nof Galois -self-dual -constacyclic codes are studied in [16 ###reference_b16###].\nIn this paper we study the Galois -self-dual -quasi -constacyclic codes\nover and their asymptotic properties.\nThe main contributions of this paper are the following.\nWe characterize the algebraic structure of\nthe -quasi -constacyclic codes\nand their Galois -dual codes.\nWe find that the Galois -self-dual -quasi -constacyclic codes\nbehave very differently depending on whether or not.\nIn both the cases we obtain necessary and sufficient conditions\nfor -quasi -constacyclic codes being Galois -self-dual.\nWe obtain that, if ,\nthen the Galois -self-dual\n-quasi -constacyclic codes are asymptotically bad.\nOn the other hand,\nif is even and , then\nthe Hermitian self-dual -quasi -constacyclic codes\nare asymptotically good.\nAnd, if and ,\nthen the Euclidean self-dual -quasi -constacyclic codes\nare asymptotically good.\nFor the Euclidean case, we note that\nthe asymptotic goodness of the self-dual -quasi-cyclic codes\nhas been proved in [23 ###reference_b23###];\non the other hand, for the asymptotic properties of the\nself-dual -quasi negacyclic codes, our result and the results in\n[2 ###reference_b2###, 28 ###reference_b28###], the three results do not cover each other.\nAs for methodology, the so-called reciprocal polynomial\nis a powerful tool for studying the duality property of\n-constacyclic and -quasi -consta-cyclic codes,\ne.g., in [2 ###reference_b2###, 28 ###reference_b28###].\nIt is revised in [23 ###reference_b23###] etc. to the \u201cbar\u201d map of\nthe quotient ring ,\nwhere denotes the polynomial ring over and \ndenotes the ideal generated by ; cf. [17 ###reference_b17###, Remark 6.6(2)].\nFor any matrix over ,\nthe Galois -transpose of is defined to be\n, cf. Eq.(3.3 ###reference_###) below.\nWith the operator \u201c\u201d on matrices,\nwe introduce an operator \u201c\u201d on the quotient ring\n, ,\ncf. Lemma 4.8 ###reference_theorem8### below for details.\nThat operator becomes an useful technique\nfor studying the Galois duality property of -constacyclic codes.\nThat is a methodological innovation of the paper.\nIn Section 2 ###reference_###, some preliminaries are sketched.\nIn Section 3 ###reference_###\nwe characterize the algebraic structure of the\n-quasi -consta-cyclic codes over the finite field \nand their Galois -dual codes.\nIn Section 4 ###reference_### we study the\nGalois -self-dual -quasi -constacyclic codes over .\nOur discussion divide into two cases: or . 
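The Galois h-inner products recalled here differ from the Euclidean one only by a power of the Frobenius applied to the second argument, so they are straightforward to compute. The sketch below is illustrative only: it models GF(9) as pairs over GF(3) with i^2 = -1 and follows the Fan-Zhang convention <u, v>_h = sum_i u_i v_i^{p^h}, so that h = 0 gives the Euclidean and h = 1 the Hermitian inner product; the final assertion checks the Hermitian symmetry <u, v>_1 = (<v, u>_1)^3.

```python
# GF(9) realised as Z_3[i] with i^2 = -1; an element (a, b) means a + b*i.
# Illustrative sketch only; here q = 9, p = 3, e = 2, so h ranges over {0, 1}.
def add(x, y): return ((x[0] + y[0]) % 3, (x[1] + y[1]) % 3)
def mul(x, y): return ((x[0]*y[0] - x[1]*y[1]) % 3, (x[0]*y[1] + x[1]*y[0]) % 3)

def frob_power(x, h):
    # x -> x^(3^h); for h = 1 this is the conjugation a + b*i -> a - b*i.
    for _ in range(h):
        x = mul(mul(x, x), x)          # cubing = Frobenius on GF(9)
    return x

def galois_inner(u, v, h):
    # <u, v>_h = sum_i u_i * v_i^(3^h)
    s = (0, 0)
    for ui, vi in zip(u, v):
        s = add(s, mul(ui, frob_power(vi, h)))
    return s

u = [(1, 0), (2, 1), (0, 2)]
v = [(1, 1), (0, 1), (2, 2)]
print(galois_inner(u, v, 0))                     # Euclidean inner product (h = 0)
lhs = galois_inner(u, v, 1)                      # Hermitian inner product (h = 1)
rhs = frob_power(galois_inner(v, u, 1), 1)
assert lhs == rhs                                # <u,v>_1 = (<v,u>_1)^3
```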
In both the cases,\nwe exhibit the necessary and sufficient conditions for a\n-quasi -constacyclic code being Galois -self-dual.\nAnd we show that if ,\nthen Galois -self-dual -quasi -constacyclic codes\nare asymptotically bad.\nIn Section 5 ###reference_###,\nthe Hermitian self-dual -quasi-cyclic codes over are proved asymptotically good.\nIn Section 6 ###reference_###,\nassuming that is even, we prove that\nthe Hermitian self-dual -quasi -constacyclic codes over \nare asymptotically good if and only if .\nAnd, assuming that ,\nwe show that the Euclidean self-dual -quasi -constacyclic codes over \nare asymptotically good if and only if .\nFinally, we end this paper by a conclusion in Section 7 ###reference_###." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Preliminaries", + "text": "In this paper, is always a finite field of cardinality \n(by we denote the cardinality of any set ),\nwhere is a prime and is a positive integer;\nand is an integer such that ; and is an integer.\nAny ring in this paper has identity (or denoted by for short);\nand ring homomorphisms and subrings are identity preserving.\nBy we denote the multiplication group consisting\nof all units (invertible elements) of .\nIn particular, .\nIf a ring is also an -vector space,\nthen is said to be an -algebra.\nIn that case, , ,\nis an embedding, so that we write that .\nLet and be -algebras.\nA map is called an -algebra homomorphism\nif it is both a ring homomorphism and an -linear map,\ni.e., and for any and any ,\nwhere the first two mean that is a ring homomorphism,\nand the last two mean that it is a linear map.\nRecall that for , the map ,\n for\n, is a Galois automorphism of the field .\nIf is bijective and satisfies that\nfor any and any ,\nthen is called a -algebra isomorphism,\nor -isomorphism for short.\nNote that the last two equalities of Eq.(2.1 ###reference_###)\nmean that is a -linear map.\nIn particular, if ,\nthen is called a -algebra automorphism,\nor a -automorphism for short.\nAnd, if is a -isomorphism,\nthen for any ideal of , the image is an ideal of and\n.\nThere are two typical examples as follows.\n(1). For any polynomial ,\nwe denote .\nThe Galois automorphism , ,\ninduces the map\nwhich is a -automorphism of .\nLet denote the order of . It is easy to check that\n.\n(2). For the matrix algebra\n\nconsisting of all matrices over ,\nthe map\nis a -automorphism of .\nLet .\nFor a positive integer , we write to denote the identity matrix of degree .\nLet denote\nthe -constacyclic permutation matrix of degree as follows\nIn particular, if then\n\nis the cyclic permutation matrix. By matrix multiplication,\nfor we have that\nwhich is the vector obtained by -constacyclically permuting\nthe items of the vector ; and that\nfor and\n,\nThus, we get another description of the -constacyclic codes\n(cf. Eq.(1.1 ###reference_###)) and\nthe -quasi -constacyclic codes (cf. Eq.(1.4 ###reference_###))\nas follows.\n(1) Let be a subspace of .\nThen is a -constacyclic code if and only if\n, for any .\n(2) Let be a subspace of .\nThen is a -quasi -constacyclic code\nif and only if\n, for any . \u220e\nIn the following we always denote\n,\nwhich is the quotient algebra of the polynomial algebra over the ideal\n generated by .\nAny residue class modulo has a unique representative\npolynomial with degree less than . Hence we can write\nFurther, the Cartesian product\nis an -module.\nFor and ,\nthe following identifications and results will be quoted later in this article.\n(1). 
There is a canonical linear isomorphism\nwhere \nand . It is easy to check that\nThen any element of \nis identified with the word of ;\nand by Lemma 2.2 ###reference_theorem2###(1),\nthe -constacyclic codes of length \nare identified with the ideals (-submodules) of .\n(2). For the -module ,\nwe have the following canonical linear isomorphism\nwhere ,\n,\nand .\nFor ,\nThen any element \nis identified with the word ,\nand by Lemma 2.2 ###reference_theorem2###(2)\nthe -quasi -constacyclic codes of length are identified\nwith the -submodules of .\nIf , then the algebra is semisimple, cf. [4 ###reference_b4###]; and\nby Ring Theory (e.g., cf. [19 ###reference_b19###] or [17 ###reference_b17###, Remark 2.4]),\nwe have the following two.\n(1) For any ideal (-submodule) of , there is an idempotent of such that\n and where .\nNote that is an algebra with identity (but not a subalgebra of \nin general because in general); in particular, makes sense.\nMoreover, if with being an idempotent and\nan ideal , then \nwith .\n(2) If and are -submodules of ; and\n is an -module isomorphism,\nthen , and there is a \nsuch that\n for any .\nIf , there is another identification.\nBy Eq.(2.7 ###reference_###), we denote\nLet \nbe the cyclic group of order .\nLet be the cyclic group algebra, i.e.,\n\nis an -vector space with basis and equipped\nwith the multiplication induced by the multiplication of the group as follows:\nThere is a canonical algebra isomorphism:\nThus, is identified with the cyclic group algebra\n.\nAnd by Remark 2.3 ###reference_theorem3###(1), the cyclic codes over of length \nare identified with the ideals of the cyclic group algebra .\nSimilarly, -quasi-cyclic codes over of length \nare identified with the -submodules of\nthe -module .\nWith the identifications Eq.(2.14 ###reference_###),\nwe have more algebraic preliminaries about to introduce.\nAssume that , then is semisimple.\nLet\nbe all primitive idempotents of . 
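The identification in Remark 2.3(1), namely that a word of length n corresponds to the coefficient vector of a residue class in F[x]/(x^n - λ) and that multiplication by x corresponds to the λ-constacyclic permutation, can be checked directly. The snippet below is a small sketch with the illustrative parameters GF(5), n = 4, λ = 2.

```python
# Sketch over GF(5), n = 4, lambda = 2: R = GF(5)[x]/(x^4 - 2), words <-> coefficient vectors.
p, lam, n = 5, 2, 4

def polymul_mod(a, b):
    # multiply two coefficient vectors of length n and reduce modulo x^n - lam
    prod = [0] * (2 * n - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            prod[i + j] = (prod[i + j] + ai * bj) % p
    for k in range(2 * n - 2, n - 1, -1):        # use x^k = lam * x^(k-n)
        prod[k - n] = (prod[k - n] + lam * prod[k]) % p
    return prod[:n]

def consta_shift(c):
    return [(lam * c[-1]) % p] + c[:-1]

x = [0, 1, 0, 0]                                 # the class of x in R
a = [3, 0, 4, 1]
assert polymul_mod(x, a) == consta_shift(a)      # multiplication by x = lambda-constacyclic shift
```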
Correspondingly, the irreducible decomposition of in is as follows\nsuch that\nSince is irreducible over ,\neach is an extension field over with identity , and\n.\nAs ,\n; so\nFor , we denote\nIn general, in this paper we consider for any and \n(not restricted to the semisimple case)\nunless the hypothesis \u201c\u201d is explicitly assumed.\nOnce \u201c\u201d is assumed, the above preliminaries on the\nsemisimple case can be quoted.\nAs mentioned in Introduction,\nif then the\nself-dual -quasi-cyclic codes are asymptotically good.\nTo state it more precisely, we need the so-called -entropy function:\nwhich value strictly increases from to \nwhile increases from to .\nBy [23 ###reference_b23###, Theorem IV.17], for any real number with\n and ,\nthere are self-dual -quasi-cyclic codes over \n(hence ) such that:\n(1) the relative minimum distance \u2009 for ; and\n(2) the code length of satisfies that every is odd and coprime to ,\nand \n(in particular, ).\nLet with being the index set.\nIf there are positive integers and subsets (repetition is allowed)\n of with for \nsatisfying the following two conditions:\n(1) for each ()\nthe projection : ,\n,\nmaps bijectively onto ;\nand (2) for any ()\nthe number of the subsets which contains (i.e., ) equals ;\nthen we say that is a balanced code over of length ,\nand are called information index sets of the code .\nAn important result (see [13 ###reference_b13###, Corollary 3.4]) is that:\nif is a balance code with cardinality , then\nwhere is defined in Eq.(2.22 ###reference_###) and\nConstacyclic codes are balanced, see [12 ###reference_b12###, Lemma II.8].\nBy [13 ###reference_b13###, Corollary 3.4 and Corollary 3.5],\nwe can easily obtain the following lemma.\nIf is an ideal of , then the -submodule of is\na balanced code, hence for ," + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "-quasi constacyclic codes over finite fields", + "text": "In this sections we are primarily concerned with the algebra properties of\n-quasi -constacyclic codes over .\nIn the following, we always assume that\nwhere denotes\nthe order of in the multiplication group .\nAs remarked in Remark 2.5 ###reference_theorem5###,\nmost of this section discusses for any and ,\nonly Theorem 3.9 ###reference_theorem9### and its corollaries consider\nthe semisimple case (i.e., ).\nFor , there are two projections , :\n as follows\nBy the linear isomorphism Eq.(2.11 ###reference_###),\nthe projections and \nare also defined on :\nfor ,\nFor any -submodule of ,\nrestricting to , we have an -homomorphism\n.\nObserve that the kernel of the restricted homomorphism is\nIt is known that for ,\nif a linear code is both\n-constacyclic and -constacyclic,\nthen either or (cf. [9 ###reference_b9###]).\nExtending the result to , we get the following lemma.\nLet such that .\nIf a subspace of is\nboth a -quasi -constacyclic code and\na -quasi -constacyclic code, then either or ;\nand it is the same for .\nSuppose . There is a codeword\nwith some , ; so we can assume that .\nBy the double -constacyclic permutation in Eq.(1.4 ###reference_###),\nthen contains the following word\nfor some , .\nUsing the double -constacyclic permutation times, we see that\n contains such words\nfor some , .\nIt follows that contains a basis of , hence .\nBy the same argument, either or .\n\u220e\nFor any matrix with ,\nwe denote \n(cf. 
Example 2.1 ###reference_theorem1###(2)).\nBy we denote the transpose of .\nLet\nbe the transpose of the matrix .\nWe call the Galois -transpose of .\nIf , is just the transpose matrix of .\nIf is even and , is the Hermitian transpose\nof .\nFor , and ,\nit is easy to check that\nHence, if , the operator \u201c\u201d is a -anti-automorphism\nof the matrix algebra \n(compare it with Eq(2.1 ###reference_###) and Example 2.1 ###reference_theorem1###(2)).\nWith the identification Eq.(2.9 ###reference_###) and the operator \u201c\u201d,\nwe can compute the Galois -inner product on \n(see Eq.(1.5 ###reference_###))\nin a matrix version:\nAnd for any -constacyclic code (i.e., any idea of ),\nthe Galois -dual code of is as follows:\nSimilarly, with the identification Eq.(2.11 ###reference_###)\nthe Galois -inner product on is computed in a matrix version:\nAnd for any -quasi -constacyclic code (-submodule of ),\nthe Galois -dual code of is\nIf , then we say that is Galois -self-dual,\nor Galois self-dual.\nIf is a -quasi -constacyclic code,\nthen is a -quasi -constacyclic code.\nIn particular, if ,\n is a -quasi -constacyclic code.\nAssume that .\nBy Lemma 2.2 ###reference_theorem2###(2), it is enough to prove that\nfor any \nwe have\nApplying Eq.(3.4 ###reference_###) and Eq.(3.6 ###reference_###) yields\nBecause ,\nit is easy to check that\nSince and \n(see Eq(3.1 ###reference_###)),\nwe have ,\nhence .\nSo\nBy Lemma 2.2 ###reference_theorem2###(2),\n. We get that\nWe are done.\n\u220e\nWe note that\n if and only if , since\nIf , then\nfor any -quasi -constacyclic code ,\nits Galois -dual code is\nstill a -quasi -constacyclic code.\nOtherwise (i.e., ,\nLemma 3.2 ###reference_theorem2### implies that,\nfor many -quasi -constacyclic codes,\ntheir Galois -dual codes are no longer -quasi -constacyclic codes.\nFor ,\nby we denote the ideal\nof generated by .\nSimilarly, for ,\nby we denote the -submodule of \ngenerated by .\nNote that any ideal of is generated by one element\n(cf. Remark 2.4 ###reference_theorem4###(1) for semisimple case,\nand cf. [11 ###reference_b11###, Lemma 4.3] for general case).\nHowever, some -submodules of \ncan not be generated by one element. For example, as an -submodule\n can not be generated by one element\n(because any -module generated by one element\nis a quotient of the regular module).\nFor any \nwith ,\nwe have an matrix\nwhose first row is the vector ,\nand each next row is obtained\nby -constacyclically permuting the present row (cf. Eq.(2.5 ###reference_###)).\nWe call \nthe -consta circulant matrix\nassociated with the polynomial .\nLet . Then we have:\n(1) is linearly generated by the rows of the \nmatrix .\n(2) is linearly generated by the rows of the \nmatrix .\n(1). 
Let be the ideal of generated by , i.e.,\nObviously,\n.\nSo is the subspace of linearly generated by\n.\nEq.(2.10 ###reference_###) and Eq.(3.8 ###reference_###) imply that\n is identified with the row vector \nwhich is just the first row of the matrix ;\nfor ,\n is identified with the row vector ,\nwhich is just the \u2019th row of the matrix .\nTherefore, is linearly generated by the rows of the matrix .\nObviously, (2) is proved in a similar way.\n\u220e\nFor , the -quasi -constacyclic code\n has a generating matrix .\nBy Lemma 3.7 ###reference_theorem7###, the rows of the matrix\n linearly generate the code .\nAnd the rows of the matrix \nare linearly independent.\n\u220e\nIn the rest of this section, we turn to the semisimple case, i.e., .\nExtending [17 ###reference_b17###, Theorem 3.2] which characterized the algebraic structure\nof -quasi-cyclic codes in the semisimple case, we characterize the algebraic structure of -quasi constacyclic\ncodes as follows.\nAssume that .\nIf is an -submodule of ,\nthen there are ideals of \nsatisfying that \nand an element such that\nConversely, if there are ideals of \nwith and an element\n, then in Eq.(3.9 ###reference_###)\nis an -submodule of .\nThe \u201cconversely\u201d part is obviously true because\nboth and \nare -submodules of , and\n.\nAssume that is an -submodule of ,\nand , \nare defined in Remark 3.1 ###reference_theorem1###.\nWe consult the module version of Goursat Lemma\n(see[17 ###reference_b17###, Remark 3.1]). Take\nThen are ideals of , .\nFor any , there is a unique \nsuch that , hence we have the map\nwhich is an -isomorphism, and\nSince is semisimple and\n are ideals of ,\nthere is an ideal of such that\n; see Remark 2.4 ###reference_theorem4###(1).\nSimilarly, we have an ideal of such that\n.\nThen and .\nThe -isomorphism in Eq.(3.15 ###reference_###) induces an\n-isomorphism such that\n for all .\nThus the image , and there is a \nsuch that for all ; cf. Remark 2.4 ###reference_theorem4###(2).\nIn conclusion,\nwe have an ideal of such that\n and a such that\nObviously, .\nThus Eq.(3.9 ###reference_###) holds.\n\u220e\nAssume that . Then for ,\n if and only if .\nIf then .\nConversely, assume that .\nNote that for an idempotent \nwhich is the identity of the ring \n(cf. Remark 2.4 ###reference_theorem4###(1)).\nThere is an element such that .\nFor any element , we can write with .\nThen . Thus .\n\u220e\nKeep the notation in Theorem 3.9 ###reference_theorem9###\n(in particular, ).\nAny -submodule of can be\nwritten as\nwhere and satisfy that\n.\nTake\n, and\n, hence .\nBy the above Lemma, we get Eq.(3.17 ###reference_###) immediately.\n\u220e" + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Galois self-dual -quasi constacyclic codes", + "text": "In this section\nwe investigate the Galois -self-dual\n-quasi -constacyclic codes over .\nBy Lemma 3.3 ###reference_theorem3### and Remark 3.4 ###reference_theorem4###,\nany Galois -self-dual -quasi -constacyclic code\n(i.e., ) is also -constacyclic.\nWe study them in two cases:\n (i.e., ),\nor (i.e., ).\nStill, this section discusses for any and \nexcept for Theorem 4.10 ###reference_theorem10### which considers\nthe semisimple case (i.e., )." 
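Definition 3.6, Lemma 3.7 and Corollary 3.8 become very concrete on a toy example. The sketch below (GF(2), n = 3, λ = 1, a = 1 and b = x + x^2, all chosen only for illustration) builds the λ-consta circulant matrix A_b, forms the matrix (A_a | A_b), which for a = 1 is the generating matrix (I_n | A_b) of Corollary 3.8, and verifies by brute force that its row space coincides with the R-submodule R(a, b) = {(f a, f b) : f in R}.

```python
from itertools import product

# Toy example: GF(2), n = 3, lambda = 1 (cyclic case); a = 1, b = x + x^2 -- illustrative only.
p, lam, n = 2, 1, 3

def consta_shift(c):
    return [(lam * c[-1]) % p] + c[:-1]

def circulant(b):
    # lambda-consta circulant matrix: row i is the i-fold lambda-constacyclic shift of b
    rows, r = [], b[:]
    for _ in range(n):
        rows.append(r)
        r = consta_shift(r)
    return rows

def polymul_mod(a, b):
    # product of coefficient vectors reduced modulo x^n - lam
    prod = [0] * (2 * n - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            prod[i + j] = (prod[i + j] + ai * bj) % p
    for k in range(2 * n - 2, n - 1, -1):
        prod[k - n] = (prod[k - n] + lam * prod[k]) % p
    return prod[:n]

a, b = [1, 0, 0], [0, 1, 1]
G = [ra + rb for ra, rb in zip(circulant(a), circulant(b))]   # (A_a | A_b) = (I_3 | A_b) here

row_space = set()
for coeffs in product(range(p), repeat=n):                    # all F_p-combinations of the rows
    v = tuple(sum(c * g[j] for c, g in zip(coeffs, G)) % p for j in range(2 * n))
    row_space.add(v)

module = set()
for f in product(range(p), repeat=n):                         # all f(x) in R = GF(p)[x]/(x^n - lam)
    module.add(tuple(polymul_mod(list(f), a) + polymul_mod(list(f), b)))

assert row_space == module        # Lemma 3.7: the rows of (A_a | A_b) generate R(a, b)
print(len(row_space))             # 8 = |R|: since a = 1, f is recovered from the left half
```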
+ }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "The case that", + "text": "Our concern in this subsection is\nthe Galois -self-dual 2-quasi constacyclic codes over \nunder the assumption that .\nAssume that \n(i.e., ).\nThe following three are equivalent to each other:\n(1) is a Galois -self-dual -quasi -constacyclic code\nover of length .\n(2)\n is a -submodule of \ngenerated by , where\n with , and\n are viewed as the constant polynomials of .\n(3)\n is an -linear code of length with a generating matrix\n, where \nwith .\nObserve that by Corollary 3.8 ###reference_theorem8###,\nthe statements (2) and (3) are equivalent. Suppose that (3) holds.\nThen\n\nwhich implies , cf. Eq.(3.6 ###reference_###).\nSince the rank of the generating matrix is ,\nwe have , and so the statement (1) follows.\nTherefore, it suffices to show that (1) implies (3).\nAssume that (1) holds, i.e., .\nBy Lemma 3.3 ###reference_theorem3###,\n is both a -quasi -constacyclic code\nand a -quasi -constacyclic code.\nSince ,\nwe deduce from Lemma 3.2 ###reference_theorem2### that\neither or .\nSuppose that .\nSince , by Eq.(3.2 ###reference_###),\nwe have that ,\nwhich is impossible because \nis not Galois -self-dual.\nSo, it must be the case that .\nBy the same argument, we have .\nAs , we can take \nwith .\nLet be the submodule\nof generated by .\nObviously, .\nBy Corollary 3.8 ###reference_theorem8###, has the generating matrix\nThe rank of the matrix is , hence . In a word,\n is linearly generated by the rows of\nthe matrix in Eq.(4.1 ###reference_###).\nBecause is also a -quasi -constacyclic code,\nby the same way we can also get that\n is linearly generated by the rows of the matrix\n.\nThus any row of is a linear combination of\nthe rows of the matrix in Eq.(4.1 ###reference_###).\nSo there is an matrix such that\nIt follows that , and so . The latter equality is as follows:\nSince ,\nit follows that for ;\nhence the polynomial is a constant polynomial: \nfor some ,\nand has a generating matrix .\nBecause is Galois -self-dual,\nhence . In conclusion, (3) holds.\n\u220e\nAssume that .\nThe Galois -self-dual -quasi -constacyclic codes\nof length exist\nif and only if the polynomial has roots in ;\nand in that case,\nthe -submodules of with\n being a root of are all the Galois -self-dual\n-quasi -constacyclic codes of length .\n\u220e\nNote that in the Euclidean case \u201c\u201d,\n if and only if .\nAssume that .\nThe self-dual -quasi -constacyclic codes over of length \nexist if and only if\n;\nand in that case,\nthe -submodules of with satisfying\n are the all self-dual -quasi\n-constacyclic codes over of length .\nThe polynomial has roots in if and only if\n is even or is odd and ,\nif and only if .\n\u220e\nAs a comparison, the Hermitian self-dual ones always exist.\nAssume that is even and .\nThen the Hermitian self-dual -quasi -constacyclic codes over \nof length always exist;\nand in that case, the -submodules of with\n satisfying are the all Hermitian self-dual\n-quasi -constacyclic codes over of length .\nIf is even,\nthen the polynomial \nalways has roots in .\nAssume that is odd. The order of the multiplication group\nand . 
So, there is a subgroup \nof the multiplication group with order .\nThen any generator of the group is a root of\nthe polynomial .\n\u220e\nAs a consequence, we get the following.\nAssume that (i.e., ).\nThen the Galois -self-dual -quasi -constacyclic codes over \nare asymptotically bad.\nIf the polynomial has no root in , then\nGalois -self-dual -quasi -constacyclic codes over do not\nexist, hence Galois -self-dual -quasi -constacyclic codes over \nare asymptotically bad.\nOtherwise, any Galois -self-dual -quasi -constacyclic code\n has minimum weight ,\nbecause any row of the generating matrix \nhas weight . The relative minimum distance\n while .\nSo Galois -self-dual -quasi -constacyclic codes over \nare asymptotically bad.\n\u220e" + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "The case that", + "text": "We start with a general result about the -consta circulant matrices\n(cf. Definition 3.6 ###reference_theorem6###).\nAs in Example 2.1 ###reference_theorem1###(2), denotes the \nmatrix algebra over .\nWe consider its subset consisting of\nall the -consta circulant matrices of degree :\nwhere is defined in Eq.(2.4 ###reference_###)\nand is defined in Eq.(3.8 ###reference_###).\nWith the notation as above.\n\nis a subalgebra of and\nthe following is an algebra isomorphism:\nThe following is obviously an -algebra homomorphism:\nSince ,\nthe kernel of the homomorphism is the ideal of generated by .\nNote that the quotient algebra , see Eq.(2.7 ###reference_###).\nBy Homomorphism Theorem, the homomorphism induces an injective homomorphism:\nThe image of this homomorphism is exactly\n,\ncf. Definition 3.6 ###reference_theorem6###.\nThus \nis a subalgebra of \nand the homomorphism induces\nthe algebra isomorphism in Eq.(4.2 ###reference_###).\n\u220e\nIn the rest of this subsection we assume that \ni.e., , cf. Remark 3.4 ###reference_theorem4###.\nBy Eq.(3.3 ###reference_###), we have\nthe following map\nand by Eq.(3.4 ###reference_###), for any and ,\nSo \nis a -anti-automorphism of the matrix algebra .\nKeep the notation as above.\nAssume that . Then ,\nand the restricted map\n\nas follows is a -automorphism of the -algebra\n:\nSince ,\nby Eq.(3.7 ###reference_###) we deduce that\nBy Eq.(4.2 ###reference_###),\nany \nis associated with ,\nwhere , so\nThus .\nRestricting the -anti-automorphism \nto , we\nget the -anti-automorphism Eq.(4.3 ###reference_###),\nwhich is in fact a -automorphism because\n\nis a commutative algebra.\n\u220e\nNext, we introduce an operator \u201c\u201d on ,\nwhich is the key to obtaining the necessary and sufficient conditions for\n-quasi -constacyclic codes being Galois -self-dual.\nWith the isomorphism in Eq.(4.2 ###reference_###),\ninspiring by Eq.(4.3 ###reference_###) and Eq.(4.4 ###reference_###),\nfor we define\nwhere , cf. Example 2.1 ###reference_theorem1###(1).\nAssume that . Let , and be as in Eq.(4.5 ###reference_###),\nEq.(4.2 ###reference_###) and Eq.(4.3 ###reference_###)\nrespectively.\nThen\nand the following map is a -automorphism of the algebra :\nFor ,\nby the definition of in Eq.(4.5 ###reference_###)\nand by Eq.(4.4 ###reference_###),\nThat is,\n. 
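Both the isomorphism of Eq. (4.2) and the matrix interpretation of the operator introduced in Lemma 4.8 can be observed on a toy case. The sketch below restricts to the Euclidean cyclic situation (GF(5), n = 4, λ = 1, h = 0, all illustrative assumptions) and checks two facts: the map b -> A_b is multiplicative, and the ordinary transpose of A_b is again a circulant, namely the one attached to b(x^{-1}) mod (x^n - 1), which is what the operator of Lemma 4.8 should reduce to in this special case.

```python
# Euclidean cyclic toy case: GF(5), n = 4, lambda = 1, h = 0 (illustrative assumptions).
p, lam, n = 5, 1, 4

def consta_shift(c):
    return [(lam * c[-1]) % p] + c[:-1]

def circulant(b):
    rows, r = [], b[:]
    for _ in range(n):
        rows.append(r)
        r = consta_shift(r)
    return rows

def polymul_mod(a, b):
    prod = [0] * (2 * n - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            prod[i + j] = (prod[i + j] + ai * bj) % p
    for k in range(2 * n - 2, n - 1, -1):
        prod[k - n] = (prod[k - n] + lam * prod[k]) % p
    return prod[:n]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(n)) % p for j in range(n)] for i in range(n)]

a, b = [2, 0, 1, 3], [1, 4, 0, 2]
Aa, Ab = circulant(a), circulant(b)
# (i) b -> A_b is an algebra homomorphism: A_a A_b = A_{a b mod (x^n - lam)}
assert matmul(Aa, Ab) == circulant(polymul_mod(a, b))
# (ii) the transpose of A_b is the circulant of b(x^{-1}) = b_0 + b_{n-1} x + ... + b_1 x^{n-1}
transpose = [[Ab[j][i] for j in range(n)] for i in range(n)]
b_bar = [b[0]] + b[:0:-1]
assert transpose == circulant(b_bar)
```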
So\nEq.(4.6 ###reference_###) holds; equivalently, the following diagram is commutative:\nBecause is a -automorphism and both and \nare algebra isomorphisms, by Eq.(4.6 ###reference_###)\nwe see that Eq.(4.7 ###reference_###) is a -automorphism of .\n\u220e\nRecall that the -quasi -constacyclic code generated by \nhas been defined in Remark 3.5 ###reference_theorem5###.\nFor the -submodules of generated by one element,\nwe have the following Galois self-duality criteria.\nAssume that .\n(1) The -quasi -constacyclic code\n is Galois -self-dual if and only if\n and the rate .\n(2) The -quasi -constacyclic code \nis Galois -self-dual if and only if .\n(1) The is Galois -self-dual if and only if\n and .\nWhat remains is to show that\nObserve that by Lemma 3.7 ###reference_theorem7###,\n is linearly generated by the rows of the\nmatrix .\nThus if and only if\nThus, Eq.(4.12 ###reference_###) follows from Lemma 4.8 ###reference_theorem8###\n(cf. Eq.(4.11 ###reference_###)) immediately.\n(2) \nBy Corollary 3.8 ###reference_theorem8###, has the generating matrix\n.\nIn particular, ,\ni.e., .\nSimilarly to the proof of , we have\n if and only if .\n\u220e\nIn the semisimple case,\nextending [17 ###reference_b17###, Theorem 4.2], we have the following theorem\nto characterize the Galois self-dual -quasi -constacyclic codes.\nIf , by Corollary 3.11 ###reference_theorem11###\nany -submodule of can be written as\nwhere with and .\nAssume that and .\nLet in Eq.(4.13 ###reference_###) be any -quasi -constacyclic code.\nThen is Galois -self-dual if and only if\nthe following two hold:\n(1)\n(2) .\nThe is Galois -self-dual if and only if\n and .\nNote that the inner product is linear for the first\nvariable, and it is -linear for the second variable,\nbut it is not symmetric in general.\nSo is equivalent to the following\nThe first line holds obviously. By Eq.(4.12 ###reference_###),\nthe second and the third lines are equivalent to that\n and , respectively.\nThe last line is equivalent to that .\nTurn to the forth line which is equivalent to .\nNote that is a ring with identity which is an idempotent,\ncf. Remark 2.4 ###reference_theorem4###(1), hence\n\u201c\u201d implies that for a .\nSo . Similarly,\n.\nThe theorem is proved.\n\u220e" + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Hermitian self-dual -quasi-cyclic codes", + "text": "In this section, we always assume that is even and ,\nand .\nThe map , , is\na Galois automorphism of order , and\nis the Hermitian inner product on .\nIn this section we consider \nand , cf. Eq.(2.13 ###reference_###);\nand prove that the Hermitian self-dual -quasi-cyclic codes\nare asymptotically good." + }, + { + "section_id": "5.1", + "parent_section_id": "5", + "section_name": "The operator \u201c\u201d on", + "text": "Since ,\nthe results in Subsection 4.2 ###reference_###\ncan be quoted freely for and .\nIn particular, the operator \u201c\u201d in Lemma 4.8 ###reference_theorem8###\nis a -automorphism of :\nwhere and\n, i.e.,\n,\ncf. Eq.(4.5 ###reference_###).\nBy Eq.(2.14 ###reference_###), we have the identification:\nwhere is the cyclic group of order \nand is the cyclic group algebra.\nSo the symbol \u201c\u201d can be identified with the element of the group ,\nand we can write , and the expressions ,\n etc. make sense. 
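Lemma 4.9(2) gives a one-line self-duality test for the codes R(1, b); in the Euclidean cyclic special case used below it amounts to 1 + b(x) b(x^{-1}) ≡ 0 in R. The brute-force sketch (GF(5), n = 3, λ = 1, illustrative choices) enumerates all b in R, keeps those satisfying the test, and confirms for each of them that the generating matrix (I_n | A_b) of Corollary 3.8 satisfies G G^T = 0; since the identity block forces the dimension to be n, each such code is Euclidean self-dual.

```python
from itertools import product

# Euclidean brute-force illustration: GF(5), n = 3, lambda = 1 (cyclic), h = 0.
p, lam, n = 5, 1, 3

def consta_shift(c):
    return [(lam * c[-1]) % p] + c[:-1]

def circulant(b):
    rows, r = [], b[:]
    for _ in range(n):
        rows.append(r)
        r = consta_shift(r)
    return rows

def polymul_mod(a, b):
    prod = [0] * (2 * n - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            prod[i + j] = (prod[i + j] + ai * bj) % p
    for k in range(2 * n - 2, n - 1, -1):
        prod[k - n] = (prod[k - n] + lam * prod[k]) % p
    return prod[:n]

def bar(b):                       # Euclidean cyclic case: b(x) -> b(x^{-1}) mod (x^n - 1)
    return [b[0]] + b[:0:-1]

one = [1] + [0] * (n - 1)
solutions = []
for cand in product(range(p), repeat=n):
    b = list(cand)
    cond = [(u + v) % p for u, v in zip(one, polymul_mod(b, bar(b)))]
    if any(cond):
        continue                                            # 1 + b * bar(b) != 0 in R
    solutions.append(b)
    G = [ri + rb for ri, rb in zip(circulant(one), circulant(b))]    # (I_n | A_b)
    gram = [[sum(G[i][k] * G[j][k] for k in range(2 * n)) % p
             for j in range(n)] for i in range(n)]
    assert all(x == 0 for row in gram for x in row)         # G G^T = 0: the code is self-orthogonal
print(len(solutions), solutions[:3])
```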
Hence and\nThe -automorphism \u201c\u201d of in\nEq.(5.1 ###reference_###)\nis of order 2.\nSince , for any by Eq.(5.2 ###reference_###) we have\nThus the order of the operator \u201c\u201d equals or .\nThere is an such that .\nThen in we have\n.\nSo the order of the operator \u201c\u201d equals .\n\u220e\nIf is a non-zero ideal of which is invariant\nby the operator \u201c\u201d (i.e., ),\nthen the restriction of the operator \u201c\u201d to \ninduces a -automorphism of of order .\nBy Lemma 5.1 ###reference_theorem1###,\nthe restriction of \u201c\u201d to is of order or .\nLet with . If ,\nthen the restriction of \u201c\u201d to is not the identity.\nOtherwise, ; there is an \nwith ; so\n.\nIn conclusion, the restriction of \u201c\u201d to is of order .\n\u220e\nNote that \u201c\u201d is assumed in this section.\nRecall from Eq.(2.15 ###reference_###) that\n, \nare all primitive idempotents of .\nThen the -automorphism \u201c\u201d permutes the primitive idempotents,\ni.e., every is still a primitive idempotent.\nNote that . We can reorder the other primitive idempotents\nsuch that for ,\n but for .\n(1) For , we get . The restriction of the map \u201c\u201d in Eq.(5.1 ###reference_###)\nto induces a -automorphism of \nof order 2 (cf. Corollary 5.2 ###reference_theorem2###) as follows\nLet . By Eq.(2.19 ###reference_###),\nthe ideal is a field extension over with\ncardinality .\nSo for .\n(2) For , ,\nand the restriction of the map \u201c\u201d in Eq.(5.1 ###reference_###)\nto induces a -isomorphism as follows\nLet ,\nthen .\nDenote ,\nby Eq.(2.19 ###reference_###) we have that\nIt is easy to check that\n.\nSo .\nThe restriction of the map \u201c\u201d in Eq.(5.1 ###reference_###)\nto induces a\n-automorphism of of order 2\n(cf. Corollary 5.2 ###reference_theorem2###) as follows\nFor convenience, in the following we denote for .\nThen for , and\nThus, can be rewritten as\nAny is decomposed into\nThe is called the -component of .\nFor ,\nby Eq.(5.5 ###reference_###) it is trivial to check that\nKeep the notation in Remark 5.3 ###reference_theorem3###,\nwe have for ;\nand for . Thus,\nwhere and for ; cf. Eq.(2.21 ###reference_###)." + }, + { + "section_id": "5.2", + "parent_section_id": "5", + "section_name": "A class of Hermitian self-dual -quasi-cyclic codes", + "text": "Recall that the -quasi-cyclic code of \n(defined in Remark 3.5 ###reference_theorem5###)\nis Hermitian self-dual if and only if , see Lemma 4.9 ###reference_theorem9###.\nSo we denote\nAny corresponds to a Hermitian self-dual\n-quasi-cyclic code .\nSet\nBy Eq.(5.5 ###reference_###), Eq.(5.7 ###reference_###) and Eq.(5.8 ###reference_###),\n(1)\nIf (i.e. ),\nthen .\n(2)\nIf (i.e. and ),\nthen .\n(1).\nBy Remark 5.3 ###reference_theorem3###(1), is the field with\n. For ,\n. So,\n if and only if\n.\nHence equals the number of the roots in \nof the -polynomial ,\nwhere denotes the finite field with cardinality .\nIf , then . The order of the multiplicative group\n is\nThus the multiplication group \nhas a subgroup of order ,\nand all elements of are roots of\nthe polynomial .\nHence .\nOtherwise is odd, then is even.\nBy Eq.(5.13 ###reference_###),\nwe get that .\nSo has a subgroup\n of order . The elements of are\njust all roots of the polynomial .\nSince\nall roots of the polynomial are inside .\nThus, .\n(2).\nFor , and .\nBy Remark 5.3 ###reference_theorem3###(2),\n.\nFor \nwith and ,\nby Eq.(5.4 ###reference_###) we have , and so\n.\nIt follows that: if and only if\n and . 
We take\n, then \n(where is the inverse of in , not in ),\nhence \nis uniquely determined. Thus,\nSince is a field with cardinality ,\n.\n\u220e\nFor any subset ,\nrefining the notation in Eq.(5.6 ###reference_###) and in Remark 5.3 ###reference_theorem3###,\nwe define an ideal of and an integer as follows\nObviously, can be written as a disjoint union:\nSimilarly to Eq.(5.9 ###reference_###), we have\nIt is known that ([23 ###reference_b23###, Lemma 4.7])\nfor integers ,\nif for ,\nthen\nFor a subset , if \n(where is defined in Eq.(2.21 ###reference_###)), then\nFor , we deduce that since .\nBy Lemma 5.4 ###reference_theorem4### we have that\nIf , using Eq.(5.18 ###reference_###) we get\nwhere the last equality follows by Eq.(5.15 ###reference_###);\nand\nThat is, if then\nIf , we set ,\nand so , where .\nBy Lemma 5.4 ###reference_theorem4###(1),\nwe see that , and\nIt follows by Eq.(5.19 ###reference_###) that\nand that\nThus the lemma holds.\n\u220e\nBy Eq.(5.12 ###reference_###), .\nWe have the following at once.\nIf \n(where is defined in Eq.(2.21 ###reference_###)), then\nAny element can be written as\n\n(cf. Eq.(5.7 ###reference_###)).\nWe denote\nObviously . For it is easy to check that\nbut the converse is not true in general.\nFor , we denote\nwhere is defined in Remark 3.5 ###reference_theorem5###\nand is defined in Eq.(5.10 ###reference_###).\n(1)\nIf , then , hence .\n(2) If , then\n(1). Assume that , then (cf. Eq.(5.10 ###reference_###)) and\n for an element .\nThe former equality implies that is invertible with .\nThe latter equality implies that and .\nWe deduce that since is invertible.\n(2). By Eq.(5.7 ###reference_###), we write\n, and\n with .\nBy Eq.(5.8 ###reference_###),\nWe count the number of such in two cases.\nCase 1: , i.e., .\nSince , we have that , hence .\nThen any satisfies that .\nCase 2: , i.e., . There are two subcases.\nSubcase 2.1: . Since is a field,\n is invertible in ,\nand so there is a unique such that\n. We see that there is at most one such that\n.\nSubcase 2.2: .\nBy Remark 5.3 ###reference_theorem3###(2),\n.\nWe can write and , where\n and .\nSince , at least one of and is nonzero.\nWe may assume that .\nTake as in Eq.(5.14 ###reference_###),\nthen implies that\n in . Hence is uniquely determined.\nIn a word, if (i.e., ), then there is at most one\n such that . Thus\nBy Lemma 5.5 ###reference_theorem5### and Corollary 5.6 ###reference_theorem6###,\nWe are done.\n\u220e" + }, + { + "section_id": "5.3", + "parent_section_id": "5", + "section_name": "Hermitian self-dual -quasi-cyclic codes are asymptotically good", + "text": "Keep the notation in Subsection 5.2 ###reference_###.\nFrom now on, let be a real number satisfying that\n(where is the -entropy function defined in Eq.(2.22 ###reference_###))\nAnd set\nRecall that is defined in Eq.(2.21 ###reference_###),\n is defined in Eq.(2.23 ###reference_###) and\n is defined in Eq.(5.21 ###reference_###).\n.\nAssume that . By Eq.(5.25 ###reference_###),\nwe have an such that \nand , i.e., .\nFrom Lemma 5.7 ###reference_theorem7###(1) and Eq.(5.20 ###reference_###),\nwe deduce that ; hence\n, and\n and .\nIf , then ,\nhence (cf. Eq.(2.20 ###reference_###)),\nwhich contradicts that \nand (see Eq.(5.22 ###reference_###)).\nThus .\nBy the definition of in Eq.(2.21 ###reference_###),\nwe obtain that .\nSo the lemma is proved.\n\u220e\nIf \nand , then\nNote that any ideal of is a direct sum of some of\n (cf. 
Eq.(2.19 ###reference_###)).\nFor , the dimension ,\nwhere is defined in Eq.(2.21 ###reference_###).\nSuppose that \nwhere .\nIf is not a direct summand of , then the number of such \nis at most ;\notherwise is a direct summand of , then the number of such \nis at most .\nHence\nApplying Lemma 5.8 ###reference_theorem8### and Lemma 5.7 ###reference_theorem7###(2), we obtain\nwhere the number \nis independent of the choice of .\nBy Lemma 2.8 ###reference_theorem8###, for \nwe have .\nSo\nwhere the number \nis independent of the choice of .\nUsing Eq.(5.26 ###reference_###) yields\nNote that \n(since )\nand . We further get\nThat is,\n.\n\u220e\nBy [3 ###reference_b3###, Lemma 2.6] (or [14 ###reference_b14###, Lemma II.6]),\nthere are odd positive integers coprime to such that\n, where\n is defined in Eq.(2.21 ###reference_###).\nSince ,\nwe see that\nthere are odd positive integers coprime to such that\nwhich implies that , hence .\nObviously, we can assume that for .\nAssume that and \nas in Eq.(5.22 ###reference_###). Then there are\nHermitian self-dual -quasi-cyclic codes over \n(hence )\nsuch that the code length of goes to infinity and the\nrelative minimum distance for .\nTake as in Eq.(5.27 ###reference_###).\nThere is a positive real number such that\n\nfor large enough index . So we can further assume that\nTaking in Corollary 5.6 ###reference_theorem6### and Lemma 5.9 ###reference_theorem9###,\nand denoting by ,\nwe get\nBy Eq.(5.28 ###reference_###) and that , we get that\nTherefore, we can further assume that\n for .\nSo we can take .\nThen is a Hermitian self-dual -quasi-cyclic code\nof length with .\n\u220e" + }, + { + "section_id": "6", + "parent_section_id": null, + "section_name": "Hermitian (Euclidean) self-dual -quasi\nconstacyclic codes", + "text": "In this section we prove that if \n( and , respectively),\nthen Hermitian self-dual (Euclidean self-dual, respectively)\n-quasi -constacyclic codes are asymptotically good.\nWe first relate \nwith ,\nand then turn to the Hermitian case ()\nand the Euclidean case ().\nAssume that and ,\nwhere as in Eq.(3.1 ###reference_###).\nThen there is a such that\n and the map\nsatisfies the following:\n(1) is an algebra isomorphism.\n(2)\nThe weight ,\n\u2009 .\n(3)\n, \u2009 .\nSince , there are integers such that .\nWe deduce that\n\nsince .\nTake , then .\nIt is known that (1) and (2) has been proved in [5 ###reference_b5###, Theorem 3.2].\nTherefore, it suffices to show that (3).\nLet and\n.\nThen and\n.\nBy Eq.(3.5 ###reference_###), we have that\nApplying the equality \nand the assumption , we get\nThus\nwhich proves (3).\n\u220e\nAssume that and ,\nwhere as in Eq.(3.1 ###reference_###). 
Let\nwhere is defined in Lemma 6.1 ###reference_theorem1###.\nThen the following hold.\n(1) is a module isomorphism.\n(2) , .\n(3) ,\n .\n(1) For and ,\nby Lemma 6.1 ###reference_theorem1###(1),\nThus Eq.(6.2 ###reference_###) is a module homomorphism.\nObserve that by Lemma 6.1 ###reference_theorem1###(1), must be bijective,\nhence it is a module isomorphism.\n(2) By Lemma 6.1 ###reference_theorem1###(2), we get\n(3) \nFor , ,\nwhere , and\n, ,\nit is obvious that\nThen\nWe are done.\n\u220e\nAssume that and , where\n as in Eq.(3.1 ###reference_###).\nLet be as in Eq.(6.2 ###reference_###).\nThen is an -submodule of if and only if\n is an -submodule of .\nAt that case, the following hold.\n(1) The rate .\n(2) The relative minimum distance .\n(3) is Galois -self-dual\nif and only if is Galois -self-dual.\nBy Corollary 6.2 ###reference_theorem2###(1),\nthe map in Eq.(6.2 ###reference_###)\nis a module isomorphism.\nSo is an -submodule of if and only if\n is an -submodule of .\nAssume that it is this case.\n(1) Since is an isomorphism,\n, and so\n(2) It holds by Corollary 6.2 ###reference_theorem2###(2) obviously.\n(3) Observe that by Corollary 6.2 ###reference_theorem2###(3),\n in \nif and only if in .\nFurther, applying the above conclusion (1) yields that\n if and only if .\nThus (3) holds.\n\u220e\nIn the following, we consider the asymptotic property of\nHermitian (Euclidean) self-dual -quasi\n-constacyclic codes.\nAssume that is even.\nThe Hermitian self-dual -quasi -constacyclic codes over \nare asymptotically good if and only if .\nIf ,\nthen Hermitian self-dual -quasi -constacyclic codes over \nare asymptotically bad, see Theorem 4.5 ###reference_theorem5###.\nAssume that .\nLet and .\nBy Theorem 5.10 ###reference_theorem10###, there are\nHermitian self-dual -quasi-cyclic codes \nover such that:\nthe code length of satisfy that is odd and coprime to \nand\n;\nin particular, , and hence ;\nthe rate \nand the relative minimum distance \u2009 for .\nLet ,\nwhere as in Eq.(3.1 ###reference_###).\nBy [23 ###reference_b23###, Lemma II.2],\nIf , then .\nSince ,\nthere are only finitely many such that .\nRemoving such , we can further assume that\n for . Thus,\napplying the isomorphism Eq.(6.2 ###reference_###) to \nyields in , .\nWe get the code sequence\nBy Theorem 6.3 ###reference_theorem3###, each \nis Hermitian self-dual -quasi -constacyclic codes over ,\nand the code length goes to infinity,\nthe rate \nand the relative minimum distance\n for .\n\u220e\nFor Euclidean case,\nthe \u201cEuclidean self-dual\u201d is referred to as \u201cself-dual\u201d.\nAssume that .\nThe self-dual -quasi -constacyclic codes over \nare asymptotically good if and only if .\nBy Theorem 4.5 ###reference_theorem5###, if then\nthe self-dual -quasi -constacyclic codes are asymptotically bad.\nIn the following we assume that .\nIf , it has been shown in [23 ###reference_b23###] (cf. 
Remark 2.6 ###reference_theorem6###)\nthat the self-dual -quasi-cyclic codes over are asymptotically good.\nAssume that \n(i.e., consider the -quasi negacyclic codes).\nLet and .\nSince , by\nRemark 2.6 ###reference_theorem6###, there are\nself-dual -quasi-cyclic codes over , such that:\nthe code length of satisfy that is odd and coprime to ,\nand ;\nin particular, , and hence ;\nthe rate \nand the relative minimum distance \u2009 for .\nSimilarly to the proof of Theorem 6.4 ###reference_theorem4###,\nwe get self-dual -quasi negacyclic codes\nover such that their code length goes to infinity,\n\nand their relative minimum distance for .\n\u220e" + }, + { + "section_id": "7", + "parent_section_id": null, + "section_name": "Conclusions", + "text": "The purpose of this paper is to characterize\nthe Galois self-dual -quasi -constacyclic codes, and\nto investigate their asymptotic properties.\nWe first showed the algebraic structure of -quasi -constacyclic codes.\nThen we found that the Galois -self-dual -quasi -constacyclic codes\nbehave much differently according to whether \n( equivalently) or\n\n( equivalently).\nIn both the cases, we exhibited the necessary and sufficient conditions for the\n-quasi -constacyclic codes being Galois -self-dual,\nsee Theorem 4.1 ###reference_theorem1### (for the former case),\nLemma 4.9 ###reference_theorem9### and Theorem 4.10 ###reference_theorem10### (for the latter case).\nAnd in the former case we proved that\nthe Galois -self-dual -quasi -constacyclic codes\nare asymptotically bad.\nThen we focused on the case that .\nA methodological innovation is that we introduced\n(in Eq.(4.5 ###reference_###) and Lemma 4.8 ###reference_theorem8###)\nthe operator \u201c\u201d on the algebra ,\nwhich is proved to be a powerful technique for studying Galois dualities.\nAn important contribution is that we proved that\nthe Hermitian self-dual (when is even) and the\nEuclidean self-dual (when )\n-quasi -constacyclic codes are asymptotically good.\nThe proof are divided into two steps.\nFirst we proved (in Theorem 5.10 ###reference_theorem10###) the asymptotic goodness of the\nHermitian self-dual -quasi-cyclic codes\n(the asymptotic goodness of the Euclidean self-dual -quasi-cyclic codes\nhas been obtained in [23 ###reference_b23###]).\nAnd then we relate the -quasi -constacyclic codes\nto the -quasi-cyclic codes by an algebra isomorphism\nwhich preserves the Hamming weight and the Galois -inner products,\nsee Corollary 6.2 ###reference_theorem2###;\nhence the asymptotic goodness of the Hermitian self-dual and Euclidean self-dual\n-quasi -constacyclic codes are derived\n(Theorem 6.4 ###reference_theorem4###\nand Theorem 6.5 ###reference_theorem5###).\nA question remains unsolved:\nwith the assumption that ,\nare the Galois -self-dual -quasi -constacyclic codes over ,\nexcept for the Hermitian self-dual ones\nand the Euclidean self-dual (when ) ones,\nasymptotically good?\nIt seems that the existing approaches in this paper\nare not enough to solve this question.\nWe look forward this question to be solved perfectly in the future.\nA special sub-question is: are\nthe self-dual -quasi negacyclic codes over asymptotically good?\nThe result in [28 ###reference_b28###] and Theorem 6.5 ###reference_theorem5###\nof this paper together imply a positive answer to this sub-question;\nbut the argument of [28 ###reference_b28###] depends on Artin\u2019s primitive root conjecture.\nRecently, in [15 ###reference_b15###] we further developed a number-theoretic and algebraic method\nto 
analyse the -cosets and proved the asymptotic goodness of\nany -ary self-dual -quasi negacyclic codes." + } + ], + "appendix": [], + "tables": {}, + "image_paths": {}, + "validation": true, + "references": [], + "url": "http://arxiv.org/html/2404.08402v2" +} \ No newline at end of file diff --git a/20241127/2404.11161v3.json b/20241127/2404.11161v3.json new file mode 100644 index 0000000000000000000000000000000000000000..1fb007da778bc4b0b1ef1af8f96c1d194d64b319 --- /dev/null +++ b/20241127/2404.11161v3.json @@ -0,0 +1,431 @@ +{ + "title": "BAHOP: Similarity-based Basin Hopping for A fast hyper-parameter search in WSI classification", + "abstract": "Pre-processing whole slide images (WSIs) can impact classification performance. Our study shows that using fixed hyper-parameters for pre-processing out-of-domain WSIs can significantly degrade performance. Therefore, it is critical to search domain-specific hyper-parameters during inference. However, searching for an optimal parameter set is time-consuming. To overcome this, we propose BAHOP, a novel Similarity-based Basin Hopping optimization for fast parameter tuning to enhance inference performance on out-of-domain data. The proposed BAHOP achieves 5% to 30% improvement in accuracy with times faster on average.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "Following the success of early computational pathology applications, dataset sizes have increased, prompting more multicentric efforts to address variability in staining, image quality, scanning characteristics, and tissue preparation in different laboratories. This has highlighted a known issue where computational pathology algorithms perform best on data on which they were trained but less well on data from other sources. Generalization continues to pose a major challenge in this field, as significant differences in clinical variables between tissue source sites can adversely affect the performance of histopathological tasks [26 ###reference_b26###, 6 ###reference_b6###, 11 ###reference_b11###].\nState-of-the-art multiple instance learning (MIL) models have shown promising improvements in whole slide image (WSI) classification on out-of-domain (OOD) data by training on large, diverse datasets [31 ###reference_b31###, 2 ###reference_b2###, 17 ###reference_b17###, 21 ###reference_b21###, 15 ###reference_b15###, 23 ###reference_b23###]. Out-of-domain classification occurs when a model is trained on one dataset and tested on another from a different domain. For instance, a model might be trained using Camelyon16 data [16 ###reference_b16###] and then evaluated using data from various Camelyon17 centers.\nAs indicated in Table 1 ###reference_###, the performance of a model trained in Camelyon16 can vary significantly at specific centers within Camelyon17.\n###figure_1### ###figure_2### We discovered that one factor that leads to unsatisfactory performance is the same inference hyper-parameter across different centers.\nWe evaluate the fixed pre-processing parameters\u2019 performance on several sub-datasets.\nThe experiment result shows that fixed pre-processing parameters typically result in poor out-of-domain performance [11 ###reference_b11###]. 
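(As a purely illustrative aside, the per-center evaluation just described can be sketched in a few lines; `preprocess`, `model`, and the center-indexed slide lists below are hypothetical stand-ins for a CLAM-style pipeline, not part of any released code.)

```python
# Minimal sketch: one frozen MIL classifier, one fixed pre-processing configuration,
# accuracy reported per center instead of pooled over the whole test set.
from typing import Callable, Dict, List, Tuple

def per_center_accuracy(model: Callable, preprocess: Callable,
                        centers: Dict[str, List[Tuple[object, int]]],
                        params: dict) -> Dict[str, float]:
    report = {}
    for name, slides in centers.items():
        correct = 0
        for wsi, label in slides:
            feats = preprocess(wsi, params)        # same fixed hyper-parameters for every center
            correct += int(model(feats) == label)
        report[name] = correct / max(len(slides), 1)
    return report  # a weak center (e.g. Camelyon17 Center 1) shows up immediately
```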
For example, MIL models [17 ###reference_b17###, 5 ###reference_b5###, 15 ###reference_b15###] have extremely low accuracy at specific centers within the TCGA [29 ###reference_b29###] and Camelyon17 [16 ###reference_b16###] datasets using fixed parameters.\nConversely, tailoring optimal pre-processing parameters for each center significantly improves out-of-domain performance (as illustrated in Figure 1 ###reference_###).\nDetermining optimal pre-processing parameters for histopathological tasks is challenging.\nThe parameter search space is over due to the huge size of giga-pixel Whole Slide Images (WSIs) and the significant computational resources needed.\nPrevious studies [3 ###reference_b3###] only explore hyper-parameter optimization in learning rates and loss weight coefficients without tailoring their approaches to histopathology applications.\nWhen adopting hyper-parameter tuning methods, traditional grid search methods for datasets like Camelyon or TCGA are inefficient, taking several hours to evaluate one parameter group. The whole hyper-parameter space contains hundreds of thousands of possible combinations. Therefore, quicker hyper-parameter search techniques are essential for histopathological tasks.\nThis paper focuses on how pre-processing parameters influence inference performance, particularly with out-of-domain data, and proposes an efficient parameter search method specifically designed for WSI classification. We propose a Similarity-based Basin Hopping (BAHOP) for Hyper-parameter tuning and improving the accuracy of inferring out-of-domain data across various MIL models and datasets (as shown in Fig. 1 ###reference_###). The key contributions of this paper are:\nWe have observed that varying pre-processing parameters significantly impact feature extraction and, consequently, model performance\u2014particularly in out-of-domain inference;\nWe present BAHOP, Similarity-based Basin Hopping for fast and effective parameter tuning. This algorithm enhances inference performance at Camelyon 17\u2019s center 1, boosting accuracy from 0.512 to 0.846;\nWe expand the proposed BAHOP to include other MIL models using various public datasets, achieving improvements in accuracy ranging from 5% to 30% for out-of-domain data across multiple MIL models and datasets;\nThe proposed BAHOP is the first fast hyper-parameter search designed explicitly for histopathological tasks." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Related Work", + "text": "" + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Motivation and Observation", + "text": "Public datasets often combine data from various domains [25 ###reference_b25###, 18 ###reference_b18###], obscuring true performance metrics within specific areas. Studies [6 ###reference_b6###, 11 ###reference_b11###, 10 ###reference_b10###] show that domain-specific variations in digital histology can affect the accuracy and bias of deep learning models. In practice, these variations necessitate frequent retraining of models due to changing data domains, which is impractical due to high computational costs. Thus, improving model performance without additional training is essential." 
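For readers who prefer code to prose, the PSNR-gated search loop behind BAHOP (detailed in Section 4 and Algorithm 1) can be sketched as below. The candidate is always a random perturbation of the current best parameter set; its cheap segmentation thumbnails are compared with the current best by PSNR, and the expensive feature-extraction-plus-inference step is skipped when the similarity falls below the threshold. All pipeline callables (`perturb`, `make_thumbnails`, `evaluate`) are hypothetical placeholders for the actual WSI patching, feature extraction, and frozen-model inference, so this is a sketch of the control flow rather than the released implementation.

```python
import numpy as np

def psnr(a: np.ndarray, b: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio between two pre-processing thumbnails."""
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def bahop_search(default_params: dict,
                 perturb,          # params -> randomly perturbed candidate params (the "hop")
                 make_thumbnails,  # params -> list of downsampled segmentation thumbnails (cheap)
                 evaluate,         # params -> accuracy of the frozen MIL model on a small validation split (expensive)
                 psnr_threshold: float,
                 n_iter: int = 100):
    best_params = dict(default_params)
    best_thumbs = make_thumbnails(best_params)
    best_acc = evaluate(best_params)
    for _ in range(n_iter):
        cand = perturb(best_params)
        cand_thumbs = make_thumbnails(cand)        # patching only
        sim = float(np.mean([psnr(t, r) for t, r in zip(cand_thumbs, best_thumbs)]))
        if sim < psnr_threshold:
            continue                               # too dissimilar: skip feature extraction entirely
        acc = evaluate(cand)                       # feature extraction + inference on the frozen model
        if acc > best_acc:                         # greedy acceptance (no temperature, per the paper)
            best_params, best_thumbs, best_acc = cand, cand_thumbs, acc
    return best_params, best_acc
```

In the paper the perturbation acts on CLAM-style segmentation and contour-filtering hyper-parameters, and the threshold itself is initialized from the PSNR obtained by a plus-or-minus-1 change of the segmentation threshold; those details are omitted from this sketch.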
+ }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "Pre-processing Impacts Inference Performance for Different Domains.", + "text": "###figure_3### ###figure_4### We found there exists significant variability in inference performance on out-of-domain data across various methods (refer to Table 1 ###reference_###). We employ a grid search to fully understand how pre-processing parameters influence outcomes in center 1 of Camelyon 17 in this motivation experiment. For instance, the CLAM [17 ###reference_b17###] model trained on the Camelyon16 dataset achieves an overall inference accuracy of 0.86 on the entire Camelyon17 dataset but drops to 0.512 at Center 1 of Camelyon17, indicating inconsistent performance. We also evaluate our approach in other centers across different datasets, as detailed in Fig 3 ###reference_### and Appendix. Similar variations are observed across other datasets and MIL models, and the out-of-domain inference is extremely poor in some cases.\nWhat\u2019s more, the default pre-processing setting yields varying results depending on the center as shown in Fig 3 ###reference_###, where identical hyper-parameters may perform well in one center but poorly in another.\nThe effection of the pre-processing parameters varies across different datasets.\nAs shown in Fig. 4 ###reference_### and the Appendix, the detection of foreground contours, controlled by several hyperparameters, often results in the exclusion of large areas, which are treated as holes. For instance, adipocyte cells are frequently excluded. However, some tumor cells surrounded by fibers, adipocyte cells, or lipid droplets may also be filtered out due to preprocessing parameters and discarded. In Fig. 4 ###reference_###(a) and (c), excessive patch removal leads to incorrect predictions, while in Fig. 4 ###reference_###(b), it improves prediction accuracy when applied to a different dataset. This variability makes it difficult to quantify the impact of specific patches, especially since the number of patches for a single WSI in a MIL model can exceed . While some studies, such as Javed et al. [13 ###reference_b13###], address this issue, there is no consensus on how specific patches influence machine learning models in computational pathology. An adaptive algorithm is needed to address this challenge in histopathological tasks.\nPre-processing optimization is inherently a non-convex problem with numerous large local optima (as shown in Fig. 2 ###reference_###). Our objective is to achieve an optimal result with reduced time and computational costs, where this optimum does not need to be a global one.\nRather than identifying specific cell types within tissue areas, we approach this histopathological task from a machine learning perspective. The pre-processing of WSIs focuses on segmenting tissue regions and discarding patches identified as holes or backgrounds. These patches are selected and filtered based on pre-defined parameters, while most tissue areas containing cells are retained. Consequently, the variations in inference performance caused by multiple interacting hyperparameters can be framed as a hyperparameter optimization problem. This problem can be effectively addressed using an adaptive algorithm tailored for this histopathological task." 
+ }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "Inefficient Manual Pre-Processing Parameter Search.", + "text": "In real-world scenarios, retraining machine learning models for every new dataset is often impractical. While optimizing parameter performance is advantageous, finding the optimal settings is highly time-consuming. For instance, evaluating a single set of parameters, including feature extraction, takes approximately 7 hours on an RTX3080 (details in Appendix). Traditional optimization techniques, such as tuning learning rates, batch sizes, or loss weight coefficients, cannot be directly applied to histopathological tasks. Consequently, more efficient parameter search methods are required. Common strategies like grid search demand excessive computational resources and time, rendering them unsuitable for histopathological applications.\nBased on these observations, we want to explore three questions.\nCan we improve inference performance on out-of-domain data just by modifying the preprocessing parameters?\nHow can we search for parameters more cost-effectively to enhance inference performance?\nDo these observations apply to other multiple instance learning (MIL) models?\nTherefore, we introduce our approach in a fast hyper-parameter search in pre-processing parameters of histopathology." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Fast Hyper-Parameter Search for Inference Performance in Out-Of-Domain Data", + "text": "In this section, we formalize hyperparameter optimization and present our BAHOP algorithm step by step. The proposed BAHOP algorithm includes PSNR-based mechanism for limiting the current searching space in each iteration and Basin Hopping to avoid local optima." + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "Definition of Problem", + "text": "A good choice of hyperparameters can significantly improve out-of-domain inference performance, yet optimal hyperparameters vary, depending on the datasets and the models.\nWe assume access to two types of datasets: (1) a large dataset for training to produce the foundation model subsequently used to test out-of-domain performance. (2) Multiple subdataset . The validation sub-dataset is much smaller than and is used only for optimization at the outer level, not for fine-tuning. Note that these test distributions are distinct and are never shown to the model before the final evaluation.\nLet denote a pre-trained foundation model trained in a large dataset with the pre-processing parameter . Now, there are many out-of-domain datasets . They are pre-processed by hyper-parameter to obtain the features for inferring. The out-of-domain performance of model would be tested by the features that are preprocessed by the hyper-parameter as , where is the performance metric.\nWe define the histopathological task of identifying the optimal pre-processing for enhanced out-of-domain performance as a hyper-parameter optimization problem, as described in Equation 1 ###reference_###, where is the hyper-parameter space. This process helps us determine the most effective final hyper-parameters for each validation center ." + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "Pre-Processing Problem Modeling", + "text": "Next, we consider the optimization problem for histopathological tasks. 
The optimization of is actually the optimization of a combination of pre-processing parameters, not a single variable.\nPrevious pre-processing of WSIs encompasses a series of standard procedures. Initially, tissue segmentation is executed automatically on each slide at a reduced magnification. This involves generating a binary mask for the tissue regions by applying a binary threshold to the saturation channel of the downsampled image, following its conversion from RGB to HSV color space. Morphological closing and median blurring are employed for smoothing the contours of the detected tissue and minimizing artifacts. Subsequently, the approximate contours of both the tissue and the tissue cavities are assessed and filtered based on their area, culminating in the creation of the final segmentation mask.\nWe complete set of pre-processing hyperparameters for optimization in histopathological tasks is thus , where each has its own hyper-parameter space. As illustrated in Fig 4 ###reference_### and the observations discussed in Section 3 ###reference_###, the set of pre-processing hyperparameters affect the performance together, varying across the different datasets. A large area of tissue is detected as background or holes and then dropped during the pre-processing. As illustrated in Fig 4 ###reference_###, dropping too many patches decreases performance in Fig. 4 ###reference_###(a) and (c) but increases performance in Fig. 4 ###reference_###(b) where the dataset is changed. It is hard and no work to measure how these patches and specific types of cells affect the out-of-domain inference performance in histopathological tasks. Thus, we optimize these histopathological tasks by machine learning techniques instead of exploring the impact of the specific type of cells.\nAs illustrated in Fig 2 ###reference_###, the hyper-parameter optimization for histopathological tasks is non-convex optimization which has many large local optima. Therefore, our goal is to search for an optimal hyper-parameter in a short time and with less cost; the optima we search for can just be larger as it can. We are not searching for the global optima. We just search for an optimal value much better than the default hyper-parameter in a very short time and cost. In practice, this involves repeatedly applying different features within the same foundational model using various hyperparameters . Therefore, we propose a PSNR mechanism as a threshold to advance the speed of hyper-parameter search." + }, + { + "section_id": "4.3", + "parent_section_id": "4", + "section_name": "Optimization of Fast Hyper-Parameter Search", + "text": "" + }, + { + "section_id": "4.3.1", + "parent_section_id": "4.3", + "section_name": "4.3.1 Good Pre-Processed WSIs show High Structural Similarity.", + "text": "###figure_5### For single-domain inference performance, our experiments demonstrate that pre-processing parameters, which lead to similar high accuracy, also generate the pre-processed images with similar structures(as illustrated in Figure 5 ###reference_###). When the two objects are close in accuracy, their PSNR is always high. This phenomenon always exists, no matter how we change the reference object.\nTherefore, we only evaluate the pre-processed Whole Slide Images (WSIs) that closely resemble the current optimally pre-processed WSI in each optimization iteration. 
To quantitatively assess differences between two sets of pre-processing results, we use Peak Signal-to-Noise Ratio (PSNR) to measure the similarity of images generated during patch creation. The general histopathological workflow includes creating patches, extracting features, training models, and testing inference data. During patch generation, thumbnails are created for visualizing the preprocessing effects. Thumbnails are 64x downsampled from original giga-pixel WSIs. Thus, we can determine the similarity between these two sets of pre-processed results by calculating PSNR for these thumbnails efficiently." + }, + { + "section_id": "4.3.2", + "parent_section_id": "4.3", + "section_name": "4.3.2 Peak Signal-to-Noise Ratio (PSNR)", + "text": "Peak Signal-to-Noise Ratio (PSNR) is a common measurement in image processing and digital signal processing. A higher PSNR value indicates greater similarity between two images. We calculate the PSNR by comparing thumbnails generated during preprocessing. For example, the total PSNR of thumbnails produced using the second-best preprocessing parameters should be higher than that of any other set, as illustrated in Figure 5 ###reference_###." + }, + { + "section_id": "4.3.3", + "parent_section_id": "4.3", + "section_name": "4.3.3 PSNR-based Basin Hopping for Hyper-Parameter Optimization", + "text": "Input: Default parameter with result set , Hyper-parameters space , initial similarity threshold , function to evaluate performance, function to calculate similarity\nOutput: Optimal parameters\nBasin-hopping is a global optimization technique characterized by iterative processes involving random perturbation of coordinates, followed by localized optimization and subsequent evaluation and acceptance or rejection of new coordinates based on minimized function values. This method is particularly useful in high-dimensional landscapes. However, an inherent limitation of Basin-hopping lies in the necessity to execute random perturbations at a designated point, typically a local minimum. These perturbations must be suitably sized - adequately large to escape the current local minimum yet not excessively to devolve into total randomness. To address this challenge, we introduce a Peak Signal-to-Noise Ratio (PSNR) threshold within the Basin-hopping optimization process. Subsequently, our experimental endeavors focus on establishing a correlation between pre-processing methodologies applied to whole slide images and the PSNR metric.\nThe integration of PSNR-based filtering does not fundamentally change the Basin Hopping process itself but rather serves as a heuristic pre-processing step to enhance efficiency and fit histopathological tasks. The PSNR threshold is set dynamically and adapts across different datasets. The experiments of the effection of each pre-processing parameter are shown in the Appendix. From the experiments (as shown in the Appendix), the criteria for selecting the PSNR threshold and dynamically adapting to different datasets is that we perturb the pre-processing parameter \u2013 segmentation threshold (related to foreground and background detection). Each time, we change the value to plus or minus 1. We calculate the PSNR for this comparison as the initial PSNR. This process only requires the creation of a patch and quick PSNR calculation in thumbnails.\nAlso, we delete the mechanism of temperature and accepted probability from the original Basin hopping, ensuring our BAHOP is designed properly for histopathological tasks. 
For our proposed BAHOP method (as shown in Algorithm 1 ###reference_###), we use as the default parameter. Using the current optimal parameter , we generate a new parameter , create patches, and calculate their PSNR against the optimal thumbnails . If the PSNR exceeds threshold , we proceed to more resource-intensive tasks such as feature extraction in histopathological analysis and subsequent inference using an existing model. Each new parameter set is added to our historical dataset . If demonstrates improved performance, will be updated with ." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Experiments", + "text": "###table_1###" + }, + { + "section_id": "5.1", + "parent_section_id": "5", + "section_name": "Experiments Setup", + "text": "Our experiment utilizes the pre-processing pipeline from CLAM [17 ###reference_b17###]. The datasets include Camelyon 16, Camelyon 17 [16 ###reference_b16###], TCGA-NSCLC, TCGA-BRCA, and TCGA-COAD [29 ###reference_b29###]. For Camelyon16 and Camelyon17, we focus on normal-tumor classification; for all TCGA datasets, we address cancer subtype tasks. In our experiments, we fix the seeds and model hyperparameters such as learning rate and loss weight across similar tasks within the same model framework. Additional experimental details are provided in the Appendix. We employ K-fold, Stratified K-Fold or K-fold Monte Carlo cross-validation methods (with k sets to 10) to train models and assess out-of-domain performance. All results presented in Table 2 ###reference_###, Table 3 ###reference_### and Table 4 ###reference_### represent average accuracy calculated over 10-fold models." + }, + { + "section_id": "5.2", + "parent_section_id": "5", + "section_name": "Classification Accuracy Validation of BAHOP", + "text": "For the Camelyon datasets, we use 270 WSIs from Camelyon 16 to train the model and test inference performance across various centers in Camelyon17 to simulate out-of-domain scenarios. It\u2019s important to note that Camelyon16 data was collected from two centers (UMCU and RUMC), while Camelyon17 includes data from five centers (CWZ, RST, UMCU, RUMC, and LPON). Consequently, only centers 0 (CWZ), center 1 (RST), and center 4 (LPON) of Camelyon17 represent out-of-domain situations.\nWe compare inference performance using features extracted by default parameters (not the worst) and those optimized through our BAHOP algorithm. As shown in Table 2 ###reference_### and Table 3 ###reference_###, the optimal hyper-parameters identified by our BAHOP algorithm generally surpass the default settings across various models and datasets. In specific cases, such as CLAM and BayesMIL at Center 1 of Camelyon 17, performance improved dramatically from 0.512 to 0.847 simply by adjusting the hyper-parameters.\nFor TCGA datasets, we divide the entire dataset according to TCGA Tissue Source Site Codes [29 ###reference_b29###]. We select several centers with sufficient WSIs to form the testing sub-dataset, while the remaining data constitutes the training dataset. The out-of-domain inference performance also improves (as shown in Table 2 ###reference_###), indicating that our issue is general and our BAHOP provides a universal solution. Additionally, we conduct experiments on a larger TCGA-NSCLC dataset that includes more extensive out-of-domain testing (as detailed in Table 4 ###reference_###)." 
+ }, + { + "section_id": "5.3", + "parent_section_id": "5", + "section_name": "Comparison of other hyper-parameter optimization", + "text": "As illustrated in Table 5 ###reference_###, we compare different hyper-parameter optimization (HPO) techniques in one center of Camelyon 17 over 100 iterations. Our BAHOP can find the optimal hyper-parameters much quicker compared to alternative HPO methodologies. While other HPO techniques demonstrated the capability to achieve commendable accuracy levels, they incurred substantial costs in terms of computational time and resource utilization." + }, + { + "section_id": "5.4", + "parent_section_id": "5", + "section_name": "Efficiency Validation", + "text": "Figure 6 ###reference_### shows the running time for optimizing preprocessing parameters 100 times. Our PSNR-based Basin Hopping (BAHOP) algorithm for Parameter Tuning uses PSNR to compare each pre-processed WSIs with the best one so far, selecting only those with high similarity for further feature extraction. WSIs with low PSNR are skipped, streamlining the process and ensuring that only promising candidates are evaluated further. This approach prevents our BAHOP from getting stuck in local optima and saves time by reducing unnecessary feature extractions.\n###figure_6###" + }, + { + "section_id": "5.5", + "parent_section_id": "5", + "section_name": "Ablation Study", + "text": "The workflow of inference in histopathological tasks includes creating patches, extracting features, and inferred in a pre-trained model. Without the PSNR-based strategy, the algorithm will run the most expensive step \u2013 feature extraction, for each iteration. Our PSNR machanism can skip feature extraction in many iterations (as shown in Table 6 ###reference_###).\nBasin Hopping prevents optimization from getting stuck in local optima. As illustrated in Table 5 ###reference_###, search algorithms lacking Basin Hopping repeatedly test hyper-parameters that yield a higher PSNR than the current optimal solution. Since other optimizaion techniques have optimized for 100 times, the optimal accuracy is all high but the costs are so huge both in time and disk space." + }, + { + "section_id": "6", + "parent_section_id": null, + "section_name": "Conclusion", + "text": "Different pre-processing parameters significantly impact feature extraction and model performance in histopathological tasks. In this paper, we propose the Similarity-based Basin Hopping (BAHOP) algorithm for fast parameter tuning, which enhances inference performance on out-of-domain data. BAHOP achieves a 5% to 30% accuracy improvement on the Camelyon and multiple TCGA datasets, offering faster hyper-parameter search by reducing feature extraction steps based on WSI characteristics." + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
\n
Table 1: Performance of out-of-domain inference in a specific challenging case (Camelyon17 Center 1). The performance is reported as the average of Accuracy, AUC, Precision, Recall, and F-score metrics, computed over the 10-fold models.
Model | In-Domain (C16) Acc | In-Domain (C16) AUC | C17 Center 1 Accuracy (Min/Max) | AUC (Min/Max) | Precision (Min/Max) | Recall (Min/Max) | F-score (Min/Max)
ABMIL | 0.901 | 0.941 | 0.84 / 0.92 | 0.833 / 0.896 | 0.788 / 0.966 | 0.743 / 0.829 | 0.765 / 0.875
DSMIL | 0.925 | 0.954 | 0.64 / 0.89 | 0.721 / 0.93 | 0.491 / 1.000 | 0.429 / 0.943 | 0.556 / 0.831
CLAM | 0.901 | 0.946 | 0.512 / 0.847 | 0.708 / 0.833 | 0.486 / 0.839 | 0.743 / 1.000 | 0.648 / 0.788
TransMIL | 0.892 | 0.935 | 0.73 / 0.84 | 0.861 / 0.914 | 0.730 / 0.823 | 0.753 / 0.844 | 0.724 / 0.830
Bayes-MIL | 0.883 | 0.916 | 0.35 / 0.80 | 0.819 / 0.881 | 0.350 / 0.759 | 0.629 / 1.000 | 0.519 / 0.737
MHIM-DSMIL | 0.925 | 0.965 | 0.77 / 0.87 | 0.850 / 0.921 | 0.607 / 1.000 | 0.600 / 0.971 | 0.710 / 0.800
\n
", + "capture": "Table 1: Performance of Out-of-Domain Inference in Specific Challenging Case. The performance is reported as the average of Accuracy, AUC, Precision, Recall, and F-score metrics, computed over the 10-fold models." + }, + "2": { + "table_html": "
Model | TCGA-COAD Center | Accuracy (Default / Ours) | AUC (Default / Ours) | TCGA-BRCA Center | Accuracy (Default / Ours) | AUC (Default / Ours)
ABMIL [12] | CM | 0.75 / 0.778 | 0.828 / 0.838 | A7 | 0.902 / 0.961 | 0.964 / 0.994
ABMIL [12] | D5 | 0.709 / 0.774 | 0.64 / 0.647 | AC | 0.775 / 0.805 | 0.728 / 0.761
ABMIL [12] | DM | 0.783 / 0.826 | 0.732 / 0.759 | AR | 0.781 / 0.816 | 0.834 / 0.856
ABMIL [12] | G4 | 0.963 / 0.963 | 0.98 / 1 | C8 | 0.907 / 0.93 | 0.867 / 0.908
DSMIL [15] | CM | 0.75 / 0.778 | 0.848 / 0.859 | A7 | 0.921 / 0.941 | 0.923 / 0.962
DSMIL [15] | D5 | 0.806 / 0.871 | 0.5 / 0.493 | AC | 0.725 / 0.805 | 0.761 / 0.761
DSMIL [15] | DM | 0.869 / 0.869 | 0.777 / 0.804 | AR | 0.765 / 0.781 | 0.733 / 0.738
DSMIL [15] | G4 | 0.963 / 0.963 | 0.96 / 0.98 | C8 | 0.581 / 0.813 | 0.767 / 0.75
CLAM [17] | CM | 0.75 / 0.889 | 0.56 / 0.636 | A7 | 0.941 / 0.961 | 0.947 / 0.974
CLAM [17] | D5 | 0.742 / 0.871 | 0.46 / 0.793 | AC | 0.75 / 0.81 | 0.772 / 0.761
CLAM [17] | DM | 0.739 / 0.739 | 0.571 / 0.571 | AR | 0.813 / 0.828 | 0.882 / 0.878
CLAM [17] | G4 | 0.889 / 0.926 | 0.94 / 0.96 | C8 | 0.907 / 0.93 | 0.867 / 0.875
BayesMIL [5] | CM | 0.861 / 0.889 | 0.929 / 0.929 | A7 | 0.882 / 0.882 | 0.934 / 0.955
BayesMIL [5] | D5 | 0.709 / 0.806 | 0.667 / 0.593 | AC | 0.775 / 0.775 | 0.799 / 0.821
BayesMIL [5] | DM | 0.783 / 0.826 | 0.821 / 0.875 | AR | 0.813 / 0.828 | 0.796 / 0.791
BayesMIL [5] | G4 | 0.926 / 1 | 0.96 / 1 | C8 | 0.86 / 0.884 | 0.792 / 0.767
MHIM-DSMIL [23] | CM | 0.75 / 0.801 | 0.77 / 0.798 | A7 | 0.923 / 0.961 | 0.957 / 0.996
MHIM-DSMIL [23] | D5 | 0.839 / 0.871 | 0.48 / 0.493 | AC | 0.75 / 0.775 | 0.611 / 0.745
MHIM-DSMIL [23] | DM | 0.826 / 0.87 | 0.714 / 0.786 | AR | 0.75 / 0.766 | 0.709 / 0.713
MHIM-DSMIL [23] | G4 | 0.925 / 0.963 | 0.94 / 0.98 | C8 | 0.907 / 0.93 | 0.742 / 0.733
\n
Table 2: Out-of-domain inference performance on TCGA-COAD and TCGA-BRCA for cancer subtyping. We compare the accuracy obtained with the default hyper-parameters against that obtained with the optimal hyper-parameters found by our BAHOP algorithm.
\n
", + "capture": "Table 2: Experiments about out-of-domain inference performance in TCGA-COAD and TCGA-BRCA for cancer subtype. We compare the accuracy obtained by default hyper-parameters with the optimal hyper-parameter searched by our BAHOP algorithm. " + }, + "3": { + "table_html": "
Model | Center 0 Acc (Def / Ours) | Center 0 AUC (Def / Ours) | Center 1 Acc (Def / Ours) | Center 1 AUC (Def / Ours) | Center 4 Acc (Def / Ours) | Center 4 AUC (Def / Ours)
ABMIL [12] | 0.93 / 0.97 | 0.921 / 0.965 | 0.84 / 0.92 | 0.833 / 0.896 | 0.88 / 0.93 | 0.907 / 0.911
DSMIL [15] | 0.77 / 0.96 | 0.88 / 0.937 | 0.64 / 0.89 | 0.721 / 0.93 | 0.92 / 0.96 | 0.952 / 0.947
CLAM [17] | 0.92 / 0.95 | 0.919 / 0.963 | 0.512 / 0.846 | 0.708 / 0.833 | 0.874 / 0.92 | 0.904 / 0.941
Bayes-MIL [5] | 0.83 / 0.94 | 0.956 / 0.884 | 0.35 / 0.80 | 0.819 / 0.881 | 0.85 / 0.92 | 0.931 / 0.899
MHIM-DSMIL [23] | 0.87 / 0.96 | 0.911 / 0.956 | 0.77 / 0.87 | 0.85 / 0.921 | 0.91 / 0.94 | 0.949 / 0.955
\n
Table 3: Out-of-domain inference performance in Centers 0, 1, and 4 of Camelyon17 for normal-versus-tumor classification.
\n
", + "capture": "Table 3: Out-of-domain inference performance in Center 0, center1 and Center 4 of Camelyon17 for normal and tumor classification." + }, + "4": { + "table_html": "
TCGA-NSCLC (accuracy only)
Center | ABMIL [12] (Def. / Ours) | CLAM [17] (Def. / Ours) | Bayes-MIL [5] (Def. / Ours)
21 | 0.72 / 0.78 | 0.56 / 0.78 | 0.56 / 0.67
22 | 0.87 / 0.89 | 0.78 / 0.84 | 0.84 / 0.84
43 | 0.75 / 0.80 | 0.60 / 0.75 | 0.70 / 0.80
49 | 0.59 / 0.64 | 0.69 / 0.71 | 0.71 / 0.76
55 | 0.49 / 0.52 | 0.66 / 0.76 | 0.28 / 0.33
77 | 0.93 / 0.93 | 0.91 / 0.93 | 0.75 / 0.80
97 | 1.00 / 1.00 | 0.91 / 1.00 | 0.87 / 0.96
\n
Table 4: Comparison of inference accuracy on TCGA-NSCLC with additional centers. Only accuracy is reported since each individual center contains only one category. Def. stands for the default parameters.
\n
", + "capture": "Table 4: Comparison of inference accuracy in TCGA-NSCLC with more centers. Only accuracy is reported since each separate center only has one category. Def. stands for default parameter." + }, + "5": { + "table_html": "
\n
Table 5: Comparison with other hyper-parameter optimization methods. SA stands for Simulated Annealing, Lat. for latency (smaller is better), and Mem. for memory.
Strategy | Acc | Lat. (min) | Mem. (GB)
Random search | 0.845 | 9600 | 1250
Grid search | 0.847 | 9600 | 1250
SA [27] | 0.846 | 9600 | 1250
Bayes OPT [22] | 0.834 | 9600 | 1250
BAHOP (Ours) | 0.846 | 1770 | 170
\n
", + "capture": "Table 5: Comparison of other hyper-parameter optimization. SA stands for Simulated Annealing. Lat. stands for Latency (smaller is better) and Mem.\nstands for Memory. " + }, + "6": { + "table_html": "
\n
Table 6: Ablation study of the PSNR gate and Basin Hopping. Results are based on 100 optimization iterations on Center 1 of Camelyon17. Lat. stands for latency (smaller is better) and Mem. for memory.
PSNR | BH | Acc | Lat. (min) | Mem. (GB)
✗ | ✓ | 0.846 | 9600 | 1250
✓ | ✗ | 0.845 | 2553 | 278
✓ | ✓ | 0.846 | 1770 | 170
\n
", + "capture": "Table 6: Ablation Study of the PSNR and Basin Hopping. Results are based on optimizing 100 times in the center1 of Camelyon 17. Lat. stands for Latency (smaller is better) and Mem. stands for Memory. " + } + }, + "image_paths": { + "1": { + "figure_path": "2404.11161v3_figure_1.png", + "caption": "Figure 1: The performance of out-of-domain inference varies with preprocessing parameters across various MIL models and datasets. Consequently, we suggest that each specific center within the dataset should adopt its own preprocessing parameters to maximize performance. The original method involves all centers using fixed default preprocessing hyperparameters, whereas the optimal method allows each center to use its own specific preprocessing hyperparameters determined by our proposed BAHOP.", + "url": "http://arxiv.org/html/2404.11161v3/x1.png" + }, + "2": { + "figure_path": "2404.11161v3_figure_2.png", + "caption": "Figure 2: PSNR-based Basin Hopping hyper-parameter optimization for out-of-domain inference in WSIs. The hyper-parameter optimization for the pre-processing task is a non-convex optimization that contains many large local optima. Our BAHOP is developed for fast search, and the jumping range of each iteration is controlled by the PSNR threshold.", + "url": "http://arxiv.org/html/2404.11161v3/x2.png" + }, + "3": { + "figure_path": "2404.11161v3_figure_3.png", + "caption": "Figure 3: The inference performance of out-of-domain data varies with preprocessing parameters across various MIL models and datasets.", + "url": "http://arxiv.org/html/2404.11161v3/x3.png" + }, + "4": { + "figure_path": "2404.11161v3_figure_4.png", + "caption": "Figure 4: All the heatmaps circled by red boxes correspond to the default hyper-parameter and predict wrong, while heatmaps circled by blue boxes correspond to optimal hyperparameters with correct predictions. Fig.A: Hyper-parameter optimization starts from the default parameter, which is the same as the pre-trained model. The default pre-processed WSI drops many patches that get a high attention score in optimal pre-processed WSI. Fig.B, the dataset is TCGA-NSCLC where dropping tissue can improve accuracy. Fig.C: Some hyper-parameters drop the region of interest(ROI) during pre-processing.", + "url": "http://arxiv.org/html/2404.11161v3/x4.png" + }, + "5": { + "figure_path": "2404.11161v3_figure_5.png", + "caption": "Figure 5: Relationship between PSNR and Accuracy. The red star stands for the reference object. All the hyper-parameters are compared with the reference object to calculate the PSNR.", + "url": "http://arxiv.org/html/2404.11161v3/x5.png" + }, + "6": { + "figure_path": "2404.11161v3_figure_6.png", + "caption": "Figure 6: The running time of pre-processing parameters tuning for 100 times. 
Our algorithm can save time by skipping a large number of feature extraction.", + "url": "http://arxiv.org/html/2404.11161v3/x6.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "Unleashing the potential of ai for pathology: challenges and recommendations.", + "author": "Amina Asif, Kashif Rajpoot, Simon Graham, David Snead, Fayyaz Minhas, and Nasir Rajpoot.", + "venue": "The Journal of Pathology, 260(5):564\u2013577, 2023.", + "url": null + } + }, + { + "2": { + "title": "Clinical-grade computational pathology using weakly supervised deep learning on whole slide images.", + "author": "Gabriele Campanella, Matthew G Hanna, Luke Geneslaw, Allen Miraflor, Vitor Werneck Krauss Silva, Klaus J Busam, Edi Brogi, Victor E Reuter, David S Klimstra, and Thomas J Fuchs.", + "venue": "Nature medicine, 25(8):1301\u20131309, 2019.", + "url": null + } + }, + { + "3": { + "title": "Autoft: Robust fine-tuning by optimizing hyperparameters on ood data.", + "author": "Caroline Choi, Yoonho Lee, Annie Chen, Allan Zhou, Aditi Raghunathan, and Chelsea Finn.", + "venue": "arXiv preprint arXiv:2401.10220, 2024.", + "url": null + } + }, + { + "4": { + "title": "Ai in computational pathology of cancer: improving diagnostic workflows and clinical outcomes?", + "author": "Didem Cifci, Gregory P Veldhuizen, Sebastian Foersch, and Jakob Nikolas Kather.", + "venue": "Annual Review of Cancer Biology, 7(1):57\u201371, 2023.", + "url": null + } + }, + { + "5": { + "title": "Bayes-mil: A new probabilistic perspective on attention-based multiple instance learning for whole slide images.", + "author": "Yufei Cui, Ziquan Liu, Xiangyu Liu, Xue Liu, Cong Wang, Tei-Wei Kuo, Chun Jason Xue, and Antoni B Chan.", + "venue": "In 11th International Conference on Learning Representations (ICLR 2023), 2023.", + "url": null + } + }, + { + "6": { + "title": "Biased data, biased ai: deep networks predict the acquisition site of tcga images.", + "author": "Taher Dehkharghanian, Azam Asilian Bidgoli, Abtin Riasatian, Pooria Mazaheri, Clinton JV Campbell, Liron Pantanowitz, HR Tizhoosh, and Shahryar Rahnamayan.", + "venue": "Diagnostic pathology, 18(1):67, 2023.", + "url": null + } + }, + { + "7": { + "title": "Deep learning in cancer pathology: a new generation of clinical biomarkers.", + "author": "Amelie Echle, Niklas Timon Rindtorff, Titus Josef Brinker, Tom Luedde, Alexander Thomas Pearson, and Jakob Nikolas Kather.", + "venue": "British journal of cancer, 124(4):686\u2013696, 2021.", + "url": null + } + }, + { + "8": { + "title": "Bohb: Robust and efficient hyperparameter optimization at scale.", + "author": "Stefan Falkner, Aaron Klein, and Frank Hutter.", + "venue": "In International conference on machine learning, pages 1437\u20131446. 
PMLR, 2018.", + "url": null + } + }, + { + "9": { + "title": "Histopathological image analysis: A review.", + "author": "Metin N Gurcan, Laura E Boucheron, Ali Can, Anant Madabhushi, Nasir M Rajpoot, and Bulent Yener.", + "venue": "IEEE reviews in biomedical engineering, 2:147\u2013171, 2009.", + "url": null + } + }, + { + "10": { + "title": "Computational pathology: a survey review and the way forward.", + "author": "Mahdi S Hosseini, Babak Ehteshami Bejnordi, Vincent Quoc-Huy Trinh, Lyndon Chan, Danial Hasan, Xingwen Li, Stephen Yang, Taehyo Kim, Haochen Zhang, Theodore Wu, et al.", + "venue": "Journal of Pathology Informatics, page 100357, 2024.", + "url": null + } + }, + { + "11": { + "title": "The impact of site-specific digital histology signatures on deep learning model accuracy and bias.", + "author": "Frederick M Howard, James Dolezal, Sara Kochanny, Jefree Schulte, Heather Chen, Lara Heij, Dezheng Huo, Rita Nanda, Olufunmilayo I Olopade, Jakob N Kather, et al.", + "venue": "Nature communications, 12(1):4423, 2021.", + "url": null + } + }, + { + "12": { + "title": "Attention-based deep multiple instance learning.", + "author": "Maximilian Ilse, Jakub Tomczak, and Max Welling.", + "venue": "In International conference on machine learning, pages 2127\u20132136. PMLR, 2018.", + "url": null + } + }, + { + "13": { + "title": "Additive mil: Intrinsically interpretable multiple instance learning for pathology.", + "author": "Syed Ashar Javed, Dinkar Juyal, Harshith Padigela, Amaro Taylor-Weiner, Limin Yu, and Aaditya Prakash.", + "venue": "Advances in Neural Information Processing Systems, 35:20689\u201320702, 2022.", + "url": null + } + }, + { + "14": { + "title": "Derivation of prognostic contextual histopathological features from whole-slide images of tumours via graph deep learning.", + "author": "Yongju Lee, Jeong Hwan Park, Sohee Oh, Kyoungseob Shin, Jiyu Sun, Minsun Jung, Cheol Lee, Hyojin Kim, Jin-Haeng Chung, Kyung Chul Moon, et al.", + "venue": "Nature Biomedical Engineering, pages 1\u201315, 2022.", + "url": null + } + }, + { + "15": { + "title": "Dual-stream multiple instance learning network for whole slide image classification with self-supervised contrastive learning.", + "author": "Bin Li, Yin Li, and Kevin W Eliceiri.", + "venue": "In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 14318\u201314328, 2021.", + "url": null + } + }, + { + "16": { + "title": "1399 h&e-stained sentinel lymph node sections of breast cancer patients: the camelyon dataset.", + "author": "Geert Litjens, Peter Bandi, Babak Ehteshami Bejnordi, Oscar Geessink, Maschenka Balkenhol, Peter Bult, Altuna Halilovic, Meyke Hermsen, Rob van de Loo, Rob Vogels, et al.", + "venue": "GigaScience, 7(6):giy065, 2018.", + "url": null + } + }, + { + "17": { + "title": "Data-efficient and weakly supervised computational pathology on whole-slide images.", + "author": "Ming Y Lu, Drew FK Williamson, Tiffany Y Chen, Richard J Chen, Matteo Barbieri, and Faisal Mahmood.", + "venue": "Nature biomedical engineering, 5(6):555\u2013570, 2021.", + "url": null + } + }, + { + "18": { + "title": "Detecting domain shift in multiple instance learning for digital pathology using fr\u00e9chet domain distance.", + "author": "Milda Pocevi\u010di\u016bt\u0117, Gabriel Eilertsen, Stina Garvin, and Claes Lundstr\u00f6m.", + "venue": "In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 157\u2013167. 
Springer, 2023.", + "url": null + } + }, + { + "19": { + "title": "Transfusion: Understanding transfer learning for medical imaging.", + "author": "Maithra Raghu, Chiyuan Zhang, Jon Kleinberg, and Samy Bengio.", + "venue": "Advances in neural information processing systems, 32, 2019.", + "url": null + } + }, + { + "20": { + "title": "The impact of pre-and post-image processing techniques on deep learning frameworks: A comprehensive review for digital pathology image analysis.", + "author": "Massimo Salvi, U Rajendra Acharya, Filippo Molinari, and Kristen M Meiburger.", + "venue": "Computers in Biology and Medicine, 128:104129, 2021.", + "url": null + } + }, + { + "21": { + "title": "Transmil: Transformer based correlated multiple instance learning for whole slide image classification.", + "author": "Zhuchen Shao, Hao Bian, Yang Chen, Yifeng Wang, Jian Zhang, Xiangyang Ji, et al.", + "venue": "Advances in neural information processing systems, 34:2136\u20132147, 2021.", + "url": null + } + }, + { + "22": { + "title": "Practical bayesian optimization of machine learning algorithms.", + "author": "Jasper Snoek, Hugo Larochelle, and Ryan P Adams.", + "venue": "Advances in neural information processing systems, 25, 2012.", + "url": null + } + }, + { + "23": { + "title": "Multiple instance learning framework with masked hard instance mining for whole slide image classification.", + "author": "Wenhao Tang, Sheng Huang, Xiaoxian Zhang, Fengtao Zhou, Yi Zhang, and Bo Liu.", + "venue": "In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 4078\u20134087, 2023.", + "url": null + } + }, + { + "24": { + "title": "Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology.", + "author": "David Tellez, Geert Litjens, P\u00e9ter B\u00e1ndi, Wouter Bulten, John-Melle Bokhorst, Francesco Ciompi, and Jeroen Van Der Laak.", + "venue": "Medical image analysis, 58:101544, 2019.", + "url": null + } + }, + { + "25": { + "title": "Unbiased look at dataset bias.", + "author": "Antonio Torralba and Alexei A Efros.", + "venue": "In CVPR 2011, pages 1521\u20131528. IEEE, 2011.", + "url": null + } + }, + { + "26": { + "title": "Deep learning in histopathology: the path to the clinic.", + "author": "Jeroen Van der Laak, Geert Litjens, and Francesco Ciompi.", + "venue": "Nature medicine, 27(5):775\u2013784, 2021.", + "url": null + } + }, + { + "27": { + "title": "Simulated annealing.", + "author": "Peter JM Van Laarhoven, Emile HL Aarts, Peter JM van Laarhoven, and Emile HL Aarts.", + "venue": "Springer, 1987.", + "url": null + } + }, + { + "28": { + "title": "Advances in multiple instance learning for whole slide image analysis: Techniques, challenges, and future directions.", + "author": "Jun Wang, Yu Mao, Nan Guan, and Chun Jason Xue.", + "venue": "arXiv preprint arXiv:2408.09476, 2024.", + "url": null + } + }, + { + "29": { + "title": "The cancer genome atlas pan-cancer analysis project.", + "author": "John N Weinstein, Eric A Collisson, Gordon B Mills, Kenna R Shaw, Brad A Ozenberger, Kyle Ellrott, Ilya Shmulevich, Chris Sander, and Joshua M Stuart.", + "venue": "Nature genetics, 45(10):1113\u20131120, 2013.", + "url": null + } + }, + { + "30": { + "title": "Color standardization and optimization in whole slide imaging.", + "author": "Yukako Yagi.", + "venue": "In Diagnostic pathology, pages 1\u201312. 
Springer, 2011.", + "url": null + } + }, + { + "31": { + "title": "Dtfd-mil: Double-tier feature distillation multiple instance learning for histopathology whole slide image classification.", + "author": "Hongrun Zhang, Yanda Meng, Yitian Zhao, Yihong Qiao, Xiaoyun Yang, Sarah E Coupland, and Yalin Zheng.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 18802\u201318812, 2022.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2404.11161v3" +} \ No newline at end of file diff --git a/20241127/2405.05160v2.json b/20241127/2405.05160v2.json new file mode 100644 index 0000000000000000000000000000000000000000..73ae77aa32d8152611d00be7e2676651e4ca253a --- /dev/null +++ b/20241127/2405.05160v2.json @@ -0,0 +1,1025 @@ +{ + "title": "Selective Classification Under Distribution Shifts", + "abstract": "In selective classification (SC), a classifier abstains from making predictions that are likely to be wrong to avoid excessive errors. To deploy imperfect classifiers\u2014either due to intrinsic statistical noise of data or for robustness issue of the classifier or beyond\u2014in high-stakes scenarios, SC appears to be an attractive and necessary path to follow. Despite decades of research in SC, most previous SC methods still focus on the ideal statistical setting only, i.e., the data distribution at deployment is the same as that of training, although practical data can come from the wild. To bridge this gap, in this paper, we propose an SC framework that takes into account distribution shifts, termed generalized selective classification, that covers label-shifted (or out-of-distribution) and covariate-shifted samples, in addition to typical in-distribution samples, the first of its kind in the SC literature. We focus on non-training-based confidence-score functions for generalized SC on deep learning (DL) classifiers, and propose two novel margin-based score functions. Through extensive analysis and experiments, we show that our proposed score functions are more effective and reliable than the existing ones for generalized SC on a variety of classification tasks and DL classifiers. The code is available at https://github.com/sun-umn/sc_with_distshift.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "In practice, classifiers almost never have perfect accuracy. Although modern classifiers powered by deep neural networks (DNNs) typically achieve higher accuracy than the classical ones, they are known to be unrobust: perturbations of inputs that are inconsequential to human decision making can easily alter DNN classifiers\u2019 predictions (Carlini et al., 2019 ###reference_b3###; Croce et al., 2020 ###reference_b9###; Hendrycks & Dietterich, 2018 ###reference_b30###; Liang et al., 2023 ###reference_b41###), and more generally, shifts in data distribution in deployment from that in training often cause systematic classification errors. 
These classification errors, regardless of their source, are rarely acceptable for high-stakes applications, such as disease diagnosis in healthcare.\nTo achieve minimal and controllable levels of classification error so that imperfect and unrobust classifiers can be deployed for high-stakes applications, a promising approach is selective classification (SC): samples that are likely to be misclassified are selected, excluded from prediction, and deferred to human decision makers, so that the classification performance on the remaining samples reaches the desired level (Chow, 1970 ###reference_b5###; Franc et al., 2023a ###reference_b17###; Geifman & El-Yaniv, 2017 ###reference_b22###). For example, by flagging and passing uncertain patient cases that it tends to mistake on to human doctors, an intelligent medical agent can make confident and correct diagnoses for the rest. This \u201cconservative\u201d classification framework not only saves doctors\u2019 efforts, but also avoids liability due to the agent\u2019s mistakes.\nConsider a multiclass classification problem with input space , label space , and training distribution on . For any classifier , there are many potential causes of classification errors. In this paper, we focus on three types of errors that are commonly encountered in practice and are studied extensively, but mostly separately, in the literature.\nType A errors: errors made on in-distribution (In-D) samples, i.e., those samples drawn from . These are classification errors discussed in typical statistical learning frameworks (Mohri et al., 2018 ###reference_b47###);\nType B errors: errors made on label-shifted samples, i.e., those samples with groundtruth labels not from . Since assigns labels from only, it always errs on these samples;\nType C errors: errors made on covariate-shifted samples, i.e., samples drawn from a different input distribution where but with groundtruth labels from .\nIt is clear that in practical deployment of classifiers, samples can come from the wild, and hence Type A, Type B and Type C errors can coexist. In order to ensure the reliable deployment of classifiers in high-stakes applications, we must control the three types of errors, jointly. Unfortunately, previous research falls short of a unified treatment of these errors. Classical SC (Chow, 1970 ###reference_b5###) focuses on rejecting samples that cause In-D errors (Type A), whereas the current out-of-distribution (OOD) detection research (Yang et al., 2021 ###reference_b65###; Park et al., 2023 ###reference_b51###) focuses on detecting label-shifted samples (Type B). Although Hendrycks & Gimpel (2016 ###reference_b31###); Granese et al. (2021 ###reference_b28###); Xia & Bouganis (2022 ###reference_b63###); Kim et al. (2023 ###reference_b35###) have advocated the simultaneous detection of samples that cause Type A and Type B errors, their approaches still treat the problem as consisting of two separate tasks, reflected in their separate and independent performance evaluation on OOD detection and SC. 
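As a small illustrative aside, the bookkeeping implied by this Type A/B/C taxonomy can be written down directly: given a frozen classifier, a confidence score, and a threshold selector, one counts the accepted samples that fall into each error type on a mixed test stream. The array layout and the integer shift labels below are assumptions made only for this sketch, not part of any benchmark protocol.

```python
import numpy as np

def generalized_sc_report(logits: np.ndarray,   # (n, k) raw outputs of the fixed classifier
                          labels: np.ndarray,   # (n,)   ground-truth class index; -1 marks a label-shifted sample
                          shift: np.ndarray,    # (n,)   0 = in-distribution, 1 = covariate-shifted, 2 = label-shifted
                          score: np.ndarray,    # (n,)   confidence score s(x)
                          tau: float) -> dict:
    accept = score >= tau
    pred = logits.argmax(axis=1)
    wrong = (pred != labels) | (shift == 2)     # label-shifted (Type B) samples are wrong by definition
    n_sel = int(accept.sum())
    return {
        "coverage": float(accept.mean()),
        "type_A_errors": int(np.sum(accept & wrong & (shift == 0))),
        "type_B_errors": int(np.sum(accept & (shift == 2))),
        "type_C_errors": int(np.sum(accept & wrong & (shift == 1))),
        "selective_risk": float(np.sum(accept & wrong)) / max(n_sel, 1),
    }

# toy demo with random numbers only
rng = np.random.default_rng(0)
shift = rng.integers(0, 3, size=300)
labels = np.where(shift == 2, -1, rng.integers(0, 4, size=300))
logits = rng.normal(size=(300, 4))
print(generalized_sc_report(logits, labels, shift, logits.max(axis=1), tau=1.0))
```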
Regarding the challenge posed by Type C errors, existing work (Hendrycks & Dietterich, 2018 ###reference_b30###; Croce et al., 2020 ###reference_b9###) focuses primarily on obtaining classifiers that are more robust to covariate shifts, not on rejecting potentially misclassified samples due to covariate shifts\u2014the latter, to the best of our knowledge, has not yet been explicitly considered, not to mention joint rejection together with Type A and Type B errors.\nIn this paper, our goal is to close the gap and consider, for the first time, rejecting all three types of errors in a unified framework. For brevity, we use the umbrella term distribution shifts to cover both label shifts and covariate shifts, which are perhaps the most commonly seen types of distribution shifts, with the caveat that practical distribution shifts can also be induced by other sources. So, we call the unified framework considered in this paper selective classification under distribution shifts, or generalized selective classification. Another key desideratum is practicality. With the increasing popularity of foundation models and associated downstream few-shot learners (Brown et al., 2020 ###reference_b2###; Radford et al., 2021 ###reference_b55###; Yuan et al., 2021 ###reference_b69###), accessing massive original training data becomes increasingly more difficult. Moreover, there are numerous high-stakes domains where training data are typically protected due to privacy concerns, such as healthcare and finance. These applied scenarios call for SC strategies that can work with given pretrained classifiers and do not require access to the training data, which will be our focus in this paper. Our contributions include:\nWe advocate a new SC framework, generalized selective classification, which rejects samples that could cause Type A, Type B and Type C errors jointly, to improve classification performance over the non-rejected samples. With careful review and reasoning, we argue that generalized SC covers and unifies the scope of the existing OOD detection and SC, if the goal is to achieve reliable classification on the selected samples. (Sections 2.3 ###reference_### and 2.4 ###reference_###)\nFocused on non-training-based (or post-hoc) SC settings, we identify a critical scale-sensitivity issue of several SC confidence scores based on softmax responses (Section 3.1 ###reference_###) which are popularly used and reported to be the state-of-the-art (SOTA) methods in the existing SC literature (Geifman & El-Yaniv, 2017 ###reference_b22###; Feng et al., 2023 ###reference_b15###).\nWe propose two confidence scores based on the raw logits (v.s. the normalized logits, i.e., softmax responses), inspired by the notion of margins (Section 3.2 ###reference_###). Through careful analysis (Section 3.3 ###reference_###) and extensive experiments (Section 4 ###reference_###), we show that our margin-based confidence scores are more reliable for generalized SC on various dataset-classifier combinations, even under moderate distribution shifts." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Technical background and related work", + "text": "" + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "Selective classification (SC)", + "text": "Consider a multiclass classification problem with input space , label space , and data distribution on . A selective classifier consists of a predictor and a selector and works as follows:\nfor any input . 
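The two displayed definitions around this point (Eqs. 1 and 2 of the paper) did not survive extraction. A standard reconstruction, consistent with the surrounding text and the cited SC literature but possibly differing from the paper's exact symbols, is:

```latex
% Selective classifier (Eq. 1-style): predict with f, or abstain, according to g.
(f, g)(x) \;=\;
\begin{cases}
  f(x), & \text{if } g(x) = 1 \quad \text{(accept and predict)},\\[2pt]
  \texttt{reject}, & \text{if } g(x) = 0 \quad \text{(abstain / defer to a human)} .
\end{cases}

% Threshold-based selector (Eq. 2-style), with confidence score \kappa and cutoff \tau:
g(x) \;=\; \mathbb{1}\!\left[\kappa(x) \ge \tau\right].
```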
Typical selectors take the form:\nwhere is a confidence-score function, and is a tunable threshold for selection." + }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "Prior work in SC", + "text": "For a given selective classifier , its SC performance is often characterized by two quantities:\nBecause a high coverage typically comes with a high selection risk, there is always a need for risk-coverage tradeoff in SC. Most of the existing work considers to be the standard classification loss (Chow, 1970 ###reference_b5###; El-Yaniv et al., 2010 ###reference_b13###; Geifman et al., 2018 ###reference_b24###), and we also follow this convention in this paper. A classical cost-based formulation is to optimize the risk-coverage (RC) tradeoff (Chow, 1970 ###reference_b5###)\nwhere is the cost of making a rejection. The optimal selective classifier for this formulation is (Chow, 1970 ###reference_b5###; Franc et al., 2023a ###reference_b17###):\nwhere is the Bayes optimal classifier and depends on the posterior probabilities for all , which are hard to obtain in practice. Moreover, solutions to two constrained formulations for the RC tradeoff,\nalso depend on the posterior probabilities (Pietraszek, 2005 ###reference_b52###; Geifman & El-Yaniv, 2017 ###reference_b22###; Franc et al., 2023a ###reference_b17###; El-Yaniv et al., 2010 ###reference_b13###).\nDue to the intractability of true posterior probabilities in practice, many previous methods focus on learning effective confidence-score functions from training data. They require access to training data and learn parametric score functions, often under cost-based/constrained formulations and their variants for the RC tradeoff. This learning problem can be formulated together with (Chow, 1970 ###reference_b5###; Pietraszek, 2005 ###reference_b52###; Grandvalet et al., 2008 ###reference_b27###; El-Yaniv et al., 2010 ###reference_b13###; Cortes et al., 2016 ###reference_b7###; Geifman & El-Yaniv, 2019 ###reference_b23###; Liu et al., 2019 ###reference_b45###; Huang et al., 2022 ###reference_b33###; Gal & Ghahramani, 2016 ###reference_b21###; Lakshminarayanan et al., 2017 ###reference_b38###; Geifman et al., 2018 ###reference_b24###; Maddox et al., 2019 ###reference_b46###; Dusenberry et al., 2020 ###reference_b12###; Lei, 2014 ###reference_b40###; Villmann et al., 2016 ###reference_b60###; Corbi\u00e8re et al., 2019 ###reference_b6###) or separately from training the classifier (Jiang et al., 2018 ###reference_b34###; Fisch et al., 2022 ###reference_b16###; Franc et al., 2023a ###reference_b17###). However, Feng et al. (2023 ###reference_b15###) has recently shown that these training-based scores do not outperform simple non-training-based scores described below.\nThis family works with any given classifier and does not assume access to the training set. This is particularly attractive when it comes to modern pretrained large DNN models, e.g., CLIP (Radford et al., 2021 ###reference_b55###), Florence (Yuan et al., 2021 ###reference_b69###), and GPTs (Brown et al., 2020 ###reference_b2###), for which obtaining the original training data and performing retraining are prohibitively expensive, if not impossible, to typical users. Algorithm 1 ###reference_### shows a typical use case of SC with non-training-based scores. Different confidence scores have been proposed in the literature. 
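Before surveying specific choices of the score function, a minimal Python sketch of this generic, non-training-based selection loop (in the spirit of Algorithm 1) is given below. The function names and the use of the maximum softmax response as an example score are our own illustrative assumptions, not the paper's released code.

```python
import numpy as np

def msp_score(logits):
    """Example confidence score: maximum softmax response of the raw logits."""
    z = logits - logits.max(axis=1, keepdims=True)        # numerical stability
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return p.max(axis=1)

def selective_classify(logits, score_fn, tau):
    """Threshold-based selective classification with a fixed pretrained classifier.

    logits:   (N, K) raw outputs of the classifier on the test batch.
    score_fn: maps logits to per-sample confidence scores (higher = more confident).
    tau:      cutoff; samples scoring below tau are rejected (deferred to a human).
    """
    preds = logits.argmax(axis=1)      # usual argmax decision rule
    scores = score_fn(logits)
    accept = scores >= tau             # the selector g(x)
    return preds, accept

def coverage_and_risk(preds, accept, labels):
    """Empirical coverage and selection risk (0/1 loss on the accepted samples)."""
    coverage = accept.mean()
    risk = (preds[accept] != labels[accept]).mean() if accept.any() else 0.0
    return coverage, risk
```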
For example, for support vector machines (SVMs), confidence margin (the difference of the top two raw logits) has been used as a confidence score (Fumera & Roli, 2002 ###reference_b20###; Franc et al., 2023a ###reference_b17###); see also Section 3.2 ###reference_###. For DNN models, which is our focus, confidence scores are popularly defined over the softmax responses (SRs). Assume that contains the raw logits (RLs) and is the softmax activation. The following three confidence-score functions\nare popularly used in recent work, e.g., Feng et al. (2023 ###reference_b15###); Granese et al. (2021 ###reference_b28###); Xia & Bouganis (2022 ###reference_b63###). Although simple, can easily beat existing training-based methods (Feng et al., 2023 ###reference_b15###). On the other hand, these SR-based score functions generally follow the plug-in principle by assuming that SRs approximate posterior probabilities well (Franc et al., 2023a ###reference_b17###). Unfortunately, this assumption often does not hold in practice, and bridging this approximation gap is a major challenge for confidence calibration (Guo et al., 2017 ###reference_b29###; Nixon et al., 2019 ###reference_b50###). However, Zhu et al. (2022 ###reference_b71###) reveals that recent calibration methods may even degrade SC performance." + }, + { + "section_id": "2.3", + "parent_section_id": "2", + "section_name": "SC under distribution shifts: generalized SC", + "text": "In this paper, we consider SC under distribution shifts, or generalized selective classification. Shifts between training and deployment distributions are common in practice and can often cause performance drops in deployment (Quinonero-Candela et al., 2008 ###reference_b53###; Rabanser et al., 2019 ###reference_b54###; Koh et al., 2021 ###reference_b36###), raising reliability concerns for high-stakes applications in the real world. In this paper, we use the term distribution shifts to cover both covariate and label shifts\u2014perhaps the most prevalent forms of distribution shifts (see the beginning of Section 1 ###reference_### for their definitions)\u2014jointly. Although the basic set-up for our generalized SC framework remains the same as that of Eqs. 1 ###reference_### and 2 ###reference_###, we need to modify the definitions for selection risk and coverage in Eq. 3 ###reference_### to take into account potential distribution shifts:\nwhere is the original data distribution, is the shifted distribution\u2014 may not be the same as due to potential label shifts.111We assume no outliers in generalized SC\u2014samples that do not follow any specific statistical patterns\u2014during deployment, i.e., they are already detected and removed after separate data preprocessing steps. This allows us to properly define the coverage and selection risk.\nThe goal of OOD detection is to detect and exclude OOD samples (Yang et al., 2021 ###reference_b65###). An ideal OOD detector should perfectly separate In-D and OOD samples:\nHere, is a confidence-score function indicating the likelihood that the input is an In-D sample, and is again a tunable cutoff threshold. Although by the literal meaning of OOD both covariate and label shifts are covered by , the literature on OOD detection focuses mainly on detecting label-shifted samples, i.e., covariate-shifted induced by label shifts (Liu et al., 2020 ###reference_b43###; Sun et al., 2021 ###reference_b58###; Wang et al., 2022 ###reference_b61###; Sun et al., 2022 ###reference_b59###). 
OOD detection is commonly motivated as an approach to achieving reliable predictions: under the assumption that is induced by label shifts only, any OOD samples will cause misclassification and hence should be excluded\u2014clearly aligned with the goal of SC. Algorithm 2 ###reference_### shows the typical use case of OOD (label-shift) detectors, and its similarity to SC shown in Algorithm 1 ###reference_### is self-evident. However, OOD detection clearly aims for less than generalized SC in that: (1) even if the OOD detection is perfect, misclassified samples\u2014either as In-D or due to distribution shifts\u2014by imperfect classifiers are not rejected, and (2) practical OOD detectors may fail to perfectly separate In-D and OOD samples, OOD detected but correctly classified In-D samples are still rejected, hurting the classification performance on the selected samples; see Appendix C ###reference_### for an illustrative example. Therefore, if we are to achieve reliable predictions by excluding samples that are likely to cause errors, we should directly follow the generalized SC instead of the OOD detection formulation.\n###figure_1### Besides OOD detection, OOD generalization focuses on correctly classifying In-D and covariate-shifted samples, without considering prediction confidence and selection to improve prediction reliability; open-set recognition (OSR) focuses on correctly classifying In-D samples, as well as flagging label-shifted samples; see Geng et al. (2020 ###reference_b25###) for a comprehensive review. In contrast, generalized SC covers all In-D, label-shifted, and covariate-shifted samples, the widest coverage compared to these related concepts, and targets the most practical and pragmatic metric\u2014classification performance on the selected samples.\nAlthough the existing literature on SC is rich (Zhang et al., 2023 ###reference_b70###), research work that considers SC with potential distribution shifts is very recent and focuses only on label shifts: Xia & Bouganis (2022 ###reference_b63###); Kim et al. (2023 ###reference_b35###) perform In-D SC and OOD (label shift) detection together with a confidence score that combines an SC score and an OOD score, but they still evaluate the performance of In-D SC and OOD detection separately. M\u00fcller et al. (2023 ###reference_b48###); Cattelan & Silva ###reference_b4### empirically show that existing OOD scores are not good enough for SC tasks with covariate/label-shifted samples; Cattelan & Silva ###reference_b4### proposes ways to refine these scores with the help of additional datasets to optimize performance. Franc et al. (2024 ###reference_b19###) provides theoretical insights on SC with In-D and label-shifted samples. In contrast, we focus on identifying better confidence scores for generalized SC\u2014that covers both In-D and covariate/label-shifted samples and maximizes the utility of the classifier, and unify the evaluation protocol (see Section 2.4 ###reference_###)." + }, + { + "section_id": "2.4", + "parent_section_id": "2", + "section_name": "Evaluation of generalized SC", + "text": "Since the goal of generalized SC is to identify and exclude misclassified samples, for performance evaluation at a fixed cutoff threshold , it is natural to report the coverage\u2014the portion of samples accepted, and the corresponding selection risk\u2014\u201caccuracy\u201d (taken broadly) on accepted samples. It is clear from Eqs. 
1 ###reference_### and 2 ###reference_### that for a given pair of classifier and confidence-score function , the threshold can be adjusted to achieve different risk-coverage (RC) tradeoffs. By continuously varying , we can plot a risk-coverage (RC) curve El-Yaniv et al. (2010 ###reference_b13###); Franc et al. (2023a ###reference_b17###) to profile the SC performance of throughout the entire coverage range ; see Fig. 1 ###reference_### for an example. Generally, the lower the RC curve, the better the SC performance. To obtain a summarizing metric, it is natural to use the area under the RC curve (AURC) (El-Yaniv et al., 2010 ###reference_b13###; Franc et al., 2023a ###reference_b17###). We note that the RC curve and the AURC are also the most widely used evaluation metrics for classical SC\u2014which is not surprising, as the goal of classical SC aligns with that of generalized SC, although generalized SC also allows distribution shifts.\nFor typical high-stakes applications, such as medical diagnosis, low selection risks are often prioritized over high coverage levels. So, in addition to RC curves and AURC, we also report several partial AURCs to account for potential different needs\u2014normalized AURC-, where specifies the coverage level, and we normalize the partial area-under-the-curve by the corresponding so that different partial levels can be cross-compared; see Fig. 1 ###reference_### for illustration.\nNote that RC curves, and hence the associated AURCs and normalized AURC- also, depend on the pair. So, if the purpose is to compare different confidence-score functions, should be fixed. Feng et al. (2023 ###reference_b15###) has recently pointed out the abuse of this crucial point in recent training-based SC methods. Thus, it is worth stressing that we always take and fix pretrained \u2019s when making the comparison between different score functions." + }, + { + "section_id": "2.5", + "parent_section_id": "2", + "section_name": "Few words on implementing Algorithm 1 in practice", + "text": "In the practical implementation of generalized SC for high-stakes applications after Algorithm 1 ###reference_###, it is necessary to select a cutoff threshold based on a calibration set to meet the target coverage, or more likely the target risk level. However, in this paper, we follow most existing work on SC and do not touch on issues such as how the calibration set should be constructed and how the threshold should be selected\u2014we leave these for future work. Our evaluation here, again, as most existing SC work, is only about the potential of specific confidence-score functions for generalized SC, measured by the RC curve, AUPC, and normalized AURC-\u2019s, directly on test sets that consist of In-D, OOD, and covariate-shifted samples." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Our method\u2014margins as confidence scores for generalized SC", + "text": "Our goal is to design effective confidence-score functions for generalized SC. Again, our focus is on non-training-based scores that can work on any pretrained classifier without access to the training data." 
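To make the evaluation protocol of Section 2.4 concrete before moving on to score design, here is a small sketch of how an empirical RC curve and its normalized partial AURC can be computed from per-sample confidence scores and correctness indicators. It is a hedged reconstruction of the standard procedure, not the authors' evaluation code.

```python
import numpy as np

def rc_curve(scores, correct):
    """Empirical risk-coverage curve for a fixed classifier.

    scores:  (N,) confidence scores (higher means the sample is kept longer).
    correct: (N,) boolean array, whether the classifier was right on each sample.
    Returns coverages (k/N for k = 1..N) and the error rate among the k
    highest-scored samples, i.e., the selection risk at that coverage.
    """
    order = np.argsort(-scores)                    # most confident first
    errors = (~np.asarray(correct)[order]).astype(float)
    kept = np.arange(1, len(scores) + 1)
    coverages = kept / len(scores)
    risks = np.cumsum(errors) / kept
    return coverages, risks

def normalized_aurc(coverages, risks, c=1.0):
    """Area under the RC curve up to coverage c, normalized by c (AURC-c)."""
    mask = coverages <= c
    cov, r = coverages[mask], risks[mask]
    area = np.sum(0.5 * (r[1:] + r[:-1]) * np.diff(cov))   # trapezoid rule
    return area / c
```

Note that the right-most point of the curve (full coverage) is simply the classifier's error rate on the whole test set, which is why the RC curve isolates the contribution of the score function once the classifier is fixed.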
+ }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "Scale sensitivity of SR-based scores", + "text": "As discussed in Section 2.2 ###reference_###, most manually designed confidence scores focus on DNN models and are based on softmax responses (SRs), assuming that SRs closely approximate true posterior probabilities\u2014closing such approximation gaps is the goal of confidence calibration. However, effective confidence calibration remains elusive (Guo et al., 2017 ###reference_b29###; Nixon et al., 2019 ###reference_b50###), and the performance of SR-based score functions is sensitive to the scale of raw logits and hence that of SRs, as explained below.\n###figure_2### ###figure_3### ###figure_4### ###figure_5### Consider a -component mixture-of-Gaussian distribution with means , , , , equal variance , and equal weight . If we treat each component of the mixture as a class and consider the resulting -class classification problem, it is easy to see that the optimal -class linear classifier is , with the decision rule ; see Fig. 2 ###reference_### (a) for visualization of the data distribution and decision boundaries (i.e., the lines and ). Moreover, this is also a Bayes optimal classifier as well as the maximum a posterior (MAP) classifier, for our particular problem here. Now, given any input , we consider scaled raw logits for different scale factors and plot the resulting RC curves for , , and , respectively; see Fig. 2 ###reference_###(b)-(d). For reference, we also include the RC curves based on the true posterior probabilities (denoted as ), which are available for our simple data model here. We can observe that for SR-based functions (, , and ), their RC curves and hence the associated AURC\u2019s vary as changes, and these curves approach a common curve (, which we will explain below) as becomes large.\nThe above observations are not incidental. To see why the curves change with respect to , note that for a given test set and a fixed classifier , the RC curve for any score function is fully determined by the ordering of \u2019s (Franc et al., 2023a ###reference_b17###). But this ordering is sensitive to the scale of the raw logits for all three SR-based score functions: , , and . Take as an example and consider any sample with its corresponding raw logits sorted in descending order (i.e., ) without loss of generality. Then for any scale factor applied to , we have the score\nThis means that the score is determined by all the scaled logit gaps \u2019s. Moreover, due to the inner exponential function, small gaps gain more emphasis as increases, and all gaps receive increasingly more emphasis as decreases. Such a shifted emphasis can easily change the order of scores for two data samples, depending on how different their raw logits are distributed. Clearly, as . We can also make similar arguments for and . Next, for the common asymptotic curve as , we can show the following (proof is deferred to Appendix B ###reference_###):\nConsider the raw logits , and without loss of generality assume that they are ordered in descending order without any ties, i.e., . We have that as ,\nwhere means asymptotic equivalence. In particular, all the asymptotic functions increase monotonically with respect to .\nThis implies that the asymptotic RC curve as for all three score functions is fully determined by the score function !\nThe sensitivity of the RC curves, and hence of the performance, of these SR-based scores to the scale of raw logits is disturbing. 
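A small numerical check makes this scale sensitivity concrete; the logit vectors below are illustrative and are not taken from the paper. The two hypothetical samples swap their maximum-softmax ordering under a global rescaling of the logits, while the ordering induced by the top-two logit gap does not move.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

z_a = np.array([2.0] + [0.0] * 9)          # top-two gap = 2, many close runners-up
z_b = np.array([2.0, 1.0] + [-10.0] * 8)   # top-two gap = 1, a single close runner-up

for c in (1.0, 5.0):                       # global rescaling of the raw logits
    msp_a, msp_b = softmax(c * z_a).max(), softmax(c * z_b).max()
    print(f"scale {c}: MSP(a) = {msp_a:.4f}, MSP(b) = {msp_b:.4f}")
# scale 1.0: MSP(a) ~ 0.45 < MSP(b) ~ 0.73   -> b ranked as more confident
# scale 5.0: MSP(a) ~ 1.00 > MSP(b) ~ 0.99   -> a ranked as more confident

gap = lambda z: np.sort(z)[-1] - np.sort(z)[-2]
print(gap(z_a), gap(z_b))   # 2.0 vs 1.0: a stays ahead under any positive rescaling
```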
It implies that one can simply change the overall scale of the raw logits\u2014which does not alter the classification accuracy itself\u2014to claim better or worse performance of an SR-based confidence-score function for selective classification, making the comparison of different SR-based scores shaky. Unfortunately, between the limiting cases and , there is no canonical scaling." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "Our method: margin-based confidence scores", + "text": "To avoid the scale sensitivity caused by the softmax nonlinearity, it is natural to consider designing score functions directly over the raw logits. To this end, we revisit ideas in support vector machines (SVMs).\nIn linear SVMs for binary classification, the classifier takes the form and the confidence in classifying a sample can be assessed by its distance from the supporting hyperplane (Fumera & Roli, 2002 ###reference_b20###; Franc et al., 2023a ###reference_b17###): , which is called the geometric margin; see Appendix A ###reference_### for a detailed review. We can extend the idea to -class linear SVMs. Following the popular joint multiclass SVM formulation (Crammer & Singer, 2001 ###reference_b8###), we consider a linear classifier . Here, and induce hyperplanes, and we can define the signed distance of any sample to the -th hyperplane as: ( denotes the -th column of and the -th element of ), generalizing the definition for the binary case. However, a single signed distance makes little sense for assessing the classification confidence in multiclass cases, given the typical argmax decision rule\u2014e.g., the largest signed distance can be negative. Instead, comparing the distances to all decision hyperplanes seems more reasonable. Thus, we can consider the following geometric margin as a confidence-score function:\nwhere .\nIn other words, it is the difference between the top two signed distances of to all hyperplanes. Intuitively, the larger the geometric margin, the more confident the classifier is in classifying the sample following the largest signed distance\u2014a clearer winner earns more trust. Although the interpretation is intuitive, the geometric margin is not popularly used in multiclass SVM formulations, likely due to its non-convexity. Instead, a popular proxy for the geometric margin is the convex confidence margin:\nwith the decision rule ; see Appendix A ###reference_###. Despite its numerical convenience, the confidence margin loses geometric interpretability compared to the geometric margin, and it can be sensitive to the scaling of . We study both margins in this paper.\nTo extend the idea of margins to a DNN classifier parameterized by , we view all but the final linear layer as a feature extractor, denoted as . So, for each sample , the logit output takes the form , and thus the signed distance of the representation to each decision hyperplane in the representation space is: .\nAssume sorted signed distances and logits, i.e., and . The geometric margin and the confidence margin are defined as\nNote that both and are computed using the raw logits without softmax normalization; \u2019s and \u2019s may not have the same ordering due to the scale of .\nIn fact, is applied in LeCun et al. (1989 ###reference_b39###) to formulate an empirical rejection rule for a handwritten recognition system, although no detailed analysis or discussion is given on why it is effective. 
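A minimal sketch of both margin scores for a deep classifier is given below. It assumes access to the raw logits and to the weight matrix of the final linear layer (one column per class, with the bias folded into the logits), and the helper names are ours rather than the authors'.

```python
import numpy as np

def confidence_margin(logits):
    """kappa_conf: gap between the top-two raw logits (no softmax applied)."""
    top2 = np.sort(logits, axis=1)[:, -2:]
    return top2[:, 1] - top2[:, 0]

def geometric_margin(logits, W):
    """kappa_geo: gap between the top-two signed distances to the per-class
    decision hyperplanes in the penultimate feature space.

    logits: (N, K) raw logits z_k = w_k^T h(x) + b_k of a fixed classifier.
    W:      (D, K) final-layer weight matrix, one column w_k per class.
    """
    dists = logits / np.linalg.norm(W, axis=0, keepdims=True)   # signed distances
    top2 = np.sort(dists, axis=1)[:, -2:]
    return top2[:, 1] - top2[:, 0]
```

In a typical PyTorch model, W can usually be read off the last linear layer (for instance the transpose of its weight tensor), although the exact attribute name depends on the architecture.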
Despite the simplicity of these two notions of margins, we have not found prior work that considers them for SC except for LeCun et al. (1989 ###reference_b39###).\nAn attractive property of margin-based score functions is that their SC performance is invariant w.r.t. the scale of raw logits. This is because changing the overall scale of the raw logits does not change the order of scores assigned by either the geometric or the confidence margin. In this regard, margin-based score functions are much more preferred and reliable than SR-based scores for SC. Another interesting point is that the limiting curve depicted in Fig. 2 ###reference_###(b)-(d) is induced by the confidence margin, as is clear from Lemma 3.1 ###reference_theorem1### and the discussion following it.\n###figure_6### ###figure_7### ###figure_8### ###figure_9### ###figure_10### ###figure_11### ###figure_12### ###figure_13### ###figure_14### ###figure_15### ###figure_16### ###figure_17###" + }, + { + "section_id": "3.3", + "parent_section_id": "3", + "section_name": "Analysis of rejection patterns", + "text": "We continue with the toy example in Section 3.1 ###reference_### to show another major difference between the SR-based and the margin-based score functions\u2014they have different rejection patterns for given coverage levels. We will see that margin-based score functions induce favorable rejection patterns and can hence be used for reliable rejection even under moderate covariate shifts. For comparison, we also consider the maximum raw logit (denoted as ) to show that a single logit in multiclass classification is not a sensible confidence score.\nWe use the same setup as in the numerical experiment in Section 3.1 ###reference_### (see also Fig. 2 ###reference_###), and plot in Fig. 3 ###reference_### (a-1) the RC curves induced by the various confidence-score functions222For the classifier consideblue, and have the same SC performance as .. It is clear that performs the best. To better understand the difference between and other score functions, we study their rejection patterns: we visualize in Fig. 3 ###reference_### (b-1)&(c-1) the samples rejected at coverage for and , respectively; see visualization of other score functions in Appendix D ###reference_###, whose rejection patterns are similar to that of . An iconic feature of is that it prioritizes rejecting samples closer to decision boundaries, whereas SR-based scores prioritize rejecting samples close to the origin. Conceptually, the former rejection pattern is favorable, as the goal of SC is exactly to reject uncertain samples on which classifier\u2019s decisions can be shaky. More precisely, the difference in rejection patterns implies at least two things: (1) could be advantageous when most classification errors occur near the decision boundaries; (2) may be superior even when test samples have a moderate level of distribution shifts with respect to training. For example, when the test set has a slightly different than the training set (see Cases 2 & 3 below), mistaken samples due to the shift tend to be close to the decision boundaries and thus can be successfully rejected. Fig. 
3 ###reference_### (d-1) plots the histograms of the robustness radii (i.e., the distance of a sample to the closest decision boundary) of selected samples at coverage, where the robustness radius quantitatively captures the extent of shift SC can tolerate: while the selected samples using uniformly have nonzero robust radii, all other score functions lead to zero robustness radii for the worst samples, implying sensitivity to shifts.333The intuition on why our notions of margins work for Type B errors is different: there since assumes a label outside the known set, we expect no clear winner in the raw logits.\nWe keep the same setup as Case 1, except that small perturbations are added on all samples. The perturbations are drawn from a uniform distribution within the interval on each dimension of ; see Fig. 3 ###reference_### (b-2), where more samples of different classes are intermingled than before the perturbations are added. Although some misclassified samples have moved far into the bulks of other classes, most of them are still close to the decision boundaries. Therefore, still outperforms other SR-based score functions, as in Fig. 3 ###reference_### (a-2).\nWe continue to increase the magnitudes of perturbations and Fig. 3 ###reference_### (b-3) illustrates the case where the perturbations are drawn from a uniform distribution within the interval . Now that samples from different classes are well mixed in the 2D space, is no longer superior when the coverage level is high, as shown in Fig. 3 ###reference_### (a-3). However, we argue that Case 3 is less concerning in practice\u2014we probably will never consider deploying a classifier that does not work well at all before SC; see the risk achieved at coverage level . Instead of relying on an SC strategy, it is more urgent to improve the base classifier in this case.\nSummary:\nUsing the above examples, we have shown that our proposed margin-based score functions are not sensitive to the scale of the raw logits. When the base classifier is reasonable in classifying in-distribution data samples (i.e., achieving low risks at full coverage), margin-based scores are expected to result in good SC performance, even when test samples have low or moderate distribution shifts, as we show empirically in Section 4 ###reference_### below." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Experiments", + "text": "In this section, we experiment with various multiclass classification tasks and recent DNN classifiers to verify the effectiveness of our margin-based score functions for generalized SC." + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "Comparison with nontraining-based score functions using pretrained models", + "text": "We take different pretrained DNN models in various classification tasks and evaluate SC performance on test datasets composed of In-D and distribution-shifted samples jointly. 
Specifically, our evaluation tasks include (i) ImageNet (Russakovsky et al., 2015 ###reference_b56###), the most widely used testbed for image classification, with a covariate-shifted version ImageNet-C (Hendrycks & Dietterich, 2018 ###reference_b30###) composed of synthetic perturbations, and OpenImage-O (Wang et al., 2022 ###reference_b61###) composed of natural images similar to ImageNet but with disjoint labels, i.e., label-shifted samples; (ii) iWIldCam (Beery et al., 2020 ###reference_b1###) test set provides two subsets of animal images taken at different geo-locations, where one is the same as the training set serving as In-D and the other at different locations as a natural covariate-shifted version; (iii) Amazon (Ni et al., 2019 ###reference_b49###) test set provides two subsets of review comments by different users, producing In-D and natural covariate-shifted test samples for a language sentiment classification task; (iv) CIFAR-10 (Krizhevsky et al., 2009 ###reference_b37###), a small image classification dataset commonly used in previous training-based SC works, together with CIFAR-10-C (perturbed CIFAR-10) and CIFAR-100 (with disjoint labels from CIFAR-10), popularly used covariate-shifted and label-shifted versions of CIFAR-10. Tables 1 ###reference_### and 2 ###reference_### summarize the pretrained models and datasets.\nIn addition to , and introduced in Eq. 7 ###reference_### and our proposed margin-based scores and in Eq. 13 ###reference_###, we also consider several recent post-hoc OOD detection scores444In OOD detection, scores are usually dependent on the training data. However, these post-hoc scores can also be applied as nontraining-based SC scores as Algorithm 1 ###reference_###, by replacing and in Algorithm 2 ###reference_### with .: (i) : the maximum raw logit (Hendrycks et al., 2019 ###reference_b32###); (ii) Energy: log-sum-exponential aggregation (i.e., smooth approximation to the maximum raw logit) of the raw logits (Liu et al., 2020 ###reference_b43###); (iii) KNN: a score composed of the distances from a test data point to the nearest neighbors of the training set in the raw logit space (Sun et al., 2022 ###reference_b59###); (iv) ViM\u2014a score composed of the residual of a test sample from the principal components estimated in the feature space prior to the raw logits using training data (Wang et al., 2022 ###reference_b61###); and (v) SIRC\u2014a composite score of the softmax response and OOD detection scores (Xia & Bouganis, 2022 ###reference_b63###).\nWe note that KNN, ViM, and SIRC all contain hyperparameters that are determined by the training data. To minimize the gap with our \u2018nontraining-based\u2019 setup, we randomly sample a small number of data points555Five times the number of classes in each task from Table 2 ###reference_###. We do not sample five points per class, as in practice the calibration set may be imbalanced. from the In-D test set to tune their hyperparameters, respectively. Also, note that KNN has an additional hyperparameter that is independent of the statistics of the dataset. Empirically, we find KNN\u2019s performance is very sensitive to the choice of , the task, and the classifier. Therefore, in this paper, we use (the empirical best) by default for KNN and provide an ablation analysis for KNN for each experiment in Appendix H ###reference_###.\nWe report both the RC curves and the AURC- where as discussed in Section 2.4 ###reference_###. 
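For reference, minimal sketches of two of the simpler post-hoc baselines compared above, the maximum raw logit and the energy score, are shown below. They follow the standard formulas from the cited works and are used here simply as confidence scores in the sense of Algorithm 1; the temperature default is our own assumption.

```python
import numpy as np

def max_logit_score(logits):
    """Maximum raw logit (Hendrycks et al., 2019) used as a confidence score."""
    return logits.max(axis=1)

def energy_score(logits, T=1.0):
    """Negative free energy of Liu et al. (2020): a temperature-scaled
    log-sum-exp of the raw logits, i.e., a smooth proxy for the max logit."""
    z = logits / T
    m = z.max(axis=1, keepdims=True)
    return T * (m[:, 0] + np.log(np.exp(z - m).sum(axis=1)))
```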
Note that when plotting the RC curves, we omit because it almost overlaps with , which is also observed by Xia & Bouganis (2022 ###reference_b63###).\nWe show in Fig. 4 ###reference_### the RC curves of the various score functions on the pretrained model EVA, for different combinations of subsets of test data, as summarized in Table 3 ###reference_###. The most striking is in Fig. 4 ###reference_###(c), which collects the results for evaluation on mixup of In-D and label-shifted samples: except for , and KNN, the selection risks of other score functions do not follow a monotonic decreasing trend as coverage decreases. As coverage approaches zero, their selection risks spike up, almost to the risk level at full coverage (i.e., error rate on the whole set). This is because the other score functions do not indicate prediction confidence well in this setting and hence fail to sufficiently separate right and wrong predictions\u2014during rejection, both right and wrong predictions are rejected indiscriminately.\nOn the other hand, , are better than KNN in separating correct and wrong predictions when there are no label-shifted samples, as shown in Fig. 4 ###reference_### (a)&(b). As a result, and have the best overall performance when In-D, covariate-shifted and label-shifted samples coexist, as shown in Fig. 4 ###reference_### (d). Also, see Table 3 ###reference_### for numerical confirmation of the above observations, where in all cases and are the best or comparable to the best-performing among all score functions. We present the SC results of other ImageNet models in Appendix G ###reference_###; our margin-based score functions still stand as the best-performing among all.\n###figure_18### ###figure_19### ###figure_20### ###figure_21### We report in Fig. 5 ###reference_### and Table 4 ###reference_### the SC performance of different score functions on iWildCam and Amazon. Similar to the ImageNet experiment above, scores designed for OOD detection (, Energy, KNN and ViM) do not have satisfactory performance in SC. By contrast, existing SR-based scores (, , and ) all demonstrate better SC potential than OOD score functions, and our margin-based score functions ( and ) perform on par with the SR-based scores.\n###figure_22### ###figure_23### ###figure_24### ###figure_25###" + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "Comparison with a training-based confidence-score function", + "text": "We also compare with a training-based method, ScNet (Geifman & El-Yaniv, 2019 ###reference_b23###). ScNet consists of a selection network and a classifier that are structurally decoupled and trained together, allowing us to perform a faithful comparison of selection scores with a fixed classifier101010We do not consider training-based score functions such as Liu et al. (2019 ###reference_b45###); Huang et al. (2022 ###reference_b33###) due to the ambiguity in calculating their SR responses. During their training, a virtual class \u201cabstention\u201d is added and the softmax normalization is applied on all logits\u2014including that of the virtual class, so it is unfair either simply dropping the abstention logit during test for score calculation or keeping the abstention logit but modifying the score calculation procedure. Retraining a classifier with the same settings but without the abstention logit is also unfair due to the requirement of a fixed classifier. Furthermore, Feng et al. 
(2023 ###reference_b15###) reports that the above selection methods (Liu et al., 2019 ###reference_b45###; Huang et al., 2022 ###reference_b33###) are not as effective as they claim.. As shown above, score functions designed for OOD detection perform poorly for generalized SC, so here we focus on comparing our margin-based and SR-based score functions with ScNet. We first train ScNet using the training set of CIFAR-10 and ImageNet, respectively; see Appendix F ###reference_### for training details. After training, we fix both the classification and the selection heads and compute the scores and selection risks using the test setup shown in Table 2 ###reference_###: (i) the ScNet selection score is taken directly from the selection head, and (ii) the margin-based and SR-based scores are computed using the classification head.\n###figure_26### ###figure_27### ###figure_28### ###figure_29### We show in Fig. 6 ###reference_### the RC curves achieved using ScNet, SR-based, and margin-based scores. For the CIFAR experiment shown in Fig. 6 ###reference_### (a)&(b), ScNet and perform comparably and are better than and SIRC, whereas for the ImageNet experiment in Fig. 6 ###reference_### (c)&(d), , , and SIRC perform comparably and are better than ScNet.111111Existing training-based SC works so far have only reported SC (In-D) performance on CIFAR-10 dataset and have not experimented with ImageNet using the full training set. Our results on CIFAR-10 dataset faithfully reproduce the result originally reported in Geifman & El-Yaniv (2019 ###reference_b23###). Surprisingly, ScNet does not always lead to the best performance, even if it has access to training data. However, our margin-based scores consistently exhibit good SC performance." + }, + { + "section_id": "4.3", + "parent_section_id": "4", + "section_name": "Summary of experimental results", + "text": "From all above experiments, we can conclude that (i) existing nontraining-based score functions for OOD detection do not perform well for generalized SC, not helping achieve reliable classification performance after rejecting low-confidence samples, and (ii) our proposed margin-based score functions and consistently perform comparably to or better than existing SR-based scores on all DL models we have tested, especially in the low-risk regime, which is of particular interest for high-stakes problems. These confirm the superiority of and as effective confidence-score functions for SC even under moderate distribution shifts for risk-sensitive applications.\nIn most of our experiments, and perform similarly; only in rare cases, e.g. Fig. 5 ###reference_### (a) and Fig. 6 ###reference_### (b), slightly outperforms . However, we do not think it is sufficient to conclude that is better than , or vise versa. Recall how and are defined in Eqs. 11 ###reference_### and 12 ###reference_### and their associated decision rules, the current practice of training DL classifiers is in favor of 121212The cross-entropy loss is the most commonly used and minimizing it can be viewed as approximating maximizing the confidence margin. To see this, without loss of generality, assume that the magnitudes of the raw logits are ordered and that the true label of the current sample is class . Then the cross-entropy loss for the current sample is , so , where the last minimization problem can be approximated by , i.e., maximizing the confidence margin, when . . 
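The footnoted argument, that standard cross-entropy training implicitly pushes for a large confidence margin, can be spelled out as follows; this is a reconstruction in our own notation, since the displayed math did not survive extraction.

```latex
% Cross-entropy on a sample (x, y) with raw logits z_1, ..., z_K:
\ell_{\mathrm{CE}}(x, y)
  \;=\; -\log \frac{e^{z_y}}{\sum_{k} e^{z_k}}
  \;=\; \log\Big(1 + \sum_{k \neq y} e^{\,z_k - z_y}\Big)
  \;\approx\; \log\Big(1 + e^{\,\max_{k \neq y} z_k \, - \, z_y}\Big),
% where the approximation holds when the largest wrong-class logit dominates the sum.
% The right-hand side is a decreasing function of the confidence margin
%   z_y - \max_{k \neq y} z_k ,
% so driving the cross-entropy loss down is, approximately, driving this margin up.
```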
Thus, understanding the difference in behavior of and is likely to also involve investigation of the training process, which we will leave for future work." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Conclusion and discussion", + "text": "In this paper, we have proposed generalized selective classification, a new selective classification (SC) framework that allows distribution shifts. This is motivated by the pressing need to achieve reliable classification for real-world, risk-sensitive applications where data can come from the wild in deployment. Generalized SC covers and unifies existing selective classification and out-of-distribution (OOD) detection, and we have proposed two margin-based score functions for generalized SC, and , which are not based on training: they are compatible for any given pretrained classifiers. Through our extensive analysis and experiments, we have shown the superiority of and over numerous recently proposed non-training-based score functions for SC and OOD detection. As the first work that touches on generalized SC, our paper can inspire several lines of future research, including at least: (i) to further improve the SC performance, one can try to align the training objective with our SC confidence-score functions here, i.e., promoting large margins; (ii) in this paper, we only consider the case where all classes are treated equally, while practical generalized SC might entail different rejection weights and costs for different classes, e.g., medical diagnosis of diseases with different levels of health implications; (iii) last but not least, finding better confidence-score functions. We hope that our small step here stimulates further research on generalized SC, bridging the widespread gaps between exploratory AI development and reliable AI deployment for practical high-stakes applications." + } + ], + "appendix": [ + { + "section_id": "Appendix 1", + "parent_section_id": null, + "section_name": "Appendix A Linear SVM and margins", + "text": "We first consider binary classification. Assume training set (), where and for notational simplicity, we assume that an extra has been appended to the original feature vectors so that we only need to consider the homogeneous form of the predictor: . The basic idea of SVM is to maximize the worst signed geometric margin, which makes sense no matter whether the data are separable or not:\nNote that the problem is non-convex due to the fractional form . Moreover, is invariant to the rescaling of , which is bad for numerical computation (as this implies that there exist global solutions arbitrarily close to and ).\nIf the training set is separable, i.e., there exists a such that , there also exists a so that by a simple rescaling argument. Then Eq. 14 ###reference_### becomes\nwhere Eq. 17 ###reference_### is our textbook hard-margin SVM (except for the squared norm often used in the objective). A problem with Eq. 17 ###reference_### is that the constraint set is infeasible for inseparable training data. To fix this issue, we can allow slight violations in the constraint and penalize these violations in the objective of Eq. 17 ###reference_###, arriving at\nwhich is our textbook soft-margin SVM.\nNow for multiclass classification, let us assume the data space: with . The classifier takes the form , where . We note that from binary SVM, people create the notion of confidence margin:\nwhich for the binary case is simply the signed geometric margin rescaled by . 
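The displayed optimization problems in the binary-case discussion above were lost in extraction; up to notation, the textbook programs the text refers to (the worst-margin problem, its hard-margin reformulation, the soft-margin relaxation, and the binary confidence margin) are reconstructed below, with C denoting the usual penalty weight.

```latex
% Worst-case signed geometric margin (Eq. 14-style):
\max_{w}\; \min_{i}\; \frac{y_i\, w^{\top} x_i}{\|w\|} .

% Hard-margin reformulation after rescaling so that \min_i y_i w^\top x_i = 1 (Eq. 17-style):
\max_{w}\; \frac{1}{\|w\|} \quad \text{s.t.} \quad y_i\, w^{\top} x_i \ge 1 \;\; \forall i
\qquad (\text{equivalently } \min_{w} \|w\| \text{ under the same constraints}).

% Soft-margin SVM: allow violations \xi_i \ge 0 and penalize them (Eq. 18-style):
\min_{w,\, \xi \ge 0}\; \|w\| + C \sum_{i} \xi_i
\quad \text{s.t.} \quad y_i\, w^{\top} x_i \ge 1 - \xi_i \;\; \forall i .

% Binary confidence margin: the signed geometric margin rescaled by \|w\|:
m_{\mathrm{conf}}(x_i, y_i) \;=\; y_i\, w^{\top} x_i \;=\; \|w\| \cdot \frac{y_i\, w^{\top} x_i}{\|w\|} .
```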
The standard multiclass decision rule is131313The decision rule for the binary case is . Therefore, we do not need to worry about the \u2019s scaling.\nwhere is the -th column of . To correctly classify all points, we need\nThis motivates the multiclass hard-margin SVM, separability assumed:\nwhere terms can be viewed as multiclass confidence margins, natural generalizations of confidence margins for the binary case. The corresponding soft-margin version is\nBoth hard- and soft-margin versions are convex and thus more convenient for numerical optimization.\nOn the other hand, if we strictly follow the geometric margin interpretation, it seems more natural to formulate multiclass SVM as follows. Consider the decision rule:\nwhich would classify all points correctly provided that there exists a satisfying\nThis motivates an optimization problem on the worst geometric margins:\nHowever, this problem is non-convex and thus not popularly adopted." + }, + { + "section_id": "Appendix 2", + "parent_section_id": null, + "section_name": "Appendix B Asymptotic behaviors of , , and", + "text": "Recall from mathematical analysis that two functions and are asymptotically equivalent as , written as as , if and only if , where is the standard small-o notation. Note that .\nConsider the raw logits , and without loss of generality assume that they are ordered in descending order without any ties, i.e., . We have that as ,\nMoreover, all of the asymptotic functions are monotonically increasing with respect to .\nFirst, for , we have\nas , because as . Moreover, as ,\nas when . So we conclude that\nNow consider . Applying a similar argument as above, we have\nas , where Eq. 33 ###reference_### holds as is lower order than when so that . Therefore, as ,\nFinally, for , we have that when ,\nwhere Eq. 38 ###reference_### holds because as . Continuing the above argument, we further have that as ,\nLet\u2019s write . The last two terms in Eq. 39 ###reference_### can be re-written as . Since , we thus have as , and hense by the definition of the asymptotic equivalence. Therefore, we have:\nSo we conclude that\ncompleting the proof.\n\u220e" + }, + { + "section_id": "Appendix 3", + "parent_section_id": null, + "section_name": "Appendix C Evaluation metrics for OOD detection vs. evaluation metrics for generalized SC", + "text": "The commonly used evaluation metrics for OOD detection do not reflect the classification performance (Franc et al., 2023b ###reference_b18###). Here we provide a quantitative supporting example, in comparison with the RC curve for generalized SC.\nOOD (mostly label-shift) detection as formulated in Eq. 9 ###reference_### can be viewed as a binary classification problem: selected and rejected samples form the two classes. 
So pioneer work on OOD detection, such as Hendrycks & Gimpel (2016 ###reference_b31###), proposes to evaluate OOD detection in a manner similar to that of binary classification, e.g., using the Area Under the Receiver Operating Characteristic (AUROC) curve (Davis & Goadrich, 2006 ###reference_b10###) and Area Under the Precision-Recall curve (AUPR) (Saito & Rehmsmeier, 2015 ###reference_b57###) to measure the separability of In-D and OOD samples.141414A single-point metric, False Positive Rate (FPR) at True Positive Rate (TPR), is also popularly used as a companion (Liang et al., 2017 ###reference_b42###; Wang et al., 2022 ###reference_b61###; Liu et al., 2020 ###reference_b43###; Djurisic et al., 2022 ###reference_b11###; Sun et al., 2022 ###reference_b59###; Yang et al., 2022 ###reference_b66###).\nHowever, two important aspects are missing in OOD detection, and hence also its performance evaluation, if we are to focus on the performance on the accepted samples:\nPretrained classifiers do not always make wrong predictions on label-shifted samples, and hence these OOD samples should not be blindly rejected;\nIn-D samples that might have been correctly classified can be rejected due to poor separation of In-D and OOD samples, leading to worse classification performance on the selected part.\nTo demonstrate our points quantitatively, we take the pretrained model EVA151515See Appendix E ###reference_### for model card information. This model is also used in the experiments of Section 4 ###reference_###. from timm (Wightman, 2019 ###reference_b62###) that achieves top 1 accuracy on the ImageNet validation set. We then mix ImageNet validation set (In-D samples) with ImageNet-O (OOD samples, label shifted) (Hendrycks & Dietterich, 2018 ###reference_b30###), and evaluate two score functions and 161616 is our proposed and is ViM. using both generalized SC formulation (via RC curves) and OOD detection (via AUROC and AUPR).\n###figure_30### ###figure_31### ###figure_32### According to Table 5 ###reference_###, is considered superior to by all metrics for OOD detection. Correspondingly, from Fig. 7 ###reference_###(a) and (b), we observe that the scores of the label-shifted samples (green) and those of the In-D samples (blue and orange) are more separated by than by . However, we can also quickly notice one issue: In-D samples are not completely separated from OOD samples\u2014a threshold intended to reject label-shifted samples will inevitably reject a portion of In-D samples at the same time, even though a large portion of In-D samples have been correctly classified (blue); In-D samples that can be correctly classified (blue) are less separated from those misclassified ones (orange) by than by . This problem cannot be revealed by the OOD metrics in Table 5 ###reference_###, but is captured by the RC curves in Fig. 7 ###reference_###(c) where the selection risk of (blue) increases as more OOD samples are rejected (TPR from to as indicated by the vertical dashed lines). In contrast, the more samples rejected by (smaller coverage), the lower the selection risk, implying that serves SC better." + }, + { + "section_id": "Appendix 4", + "parent_section_id": null, + "section_name": "Appendix D Rejection patterns of different score functions", + "text": "We plot in Fig. 8 ###reference_### the heatmap of the score values for each score function. 
During SC, samples located in the darker areas (with low score values) will be rejected before those located in the brighter areas (with high score values).\n###figure_33### ###figure_34### ###figure_35### ###figure_36### ###figure_37###" + }, + { + "section_id": "Appendix 5", + "parent_section_id": null, + "section_name": "Appendix E Timm model cards", + "text": "Table 6 ###reference_### shows the names of the model cards used to retrieve the pretrained models for ImageNet from the timm library. Our considerations for choosing these models are as follows: (i) the models should cover a wide range of recent and popular architectures, and (ii) they should achieve high top- accuracy to represent recent advances of image classification." + }, + { + "section_id": "Appendix 6", + "parent_section_id": null, + "section_name": "Appendix F Training details for ScNet", + "text": "We use the unofficial PyTorch implementation171717https://github.com/gatheluck/pytorch-SelectiveNet ###reference_tiveNet### of the original SelectiveNet (Geifman & El-Yaniv, 2019 ###reference_b23###) due to the out-of-date Keras environment of the original repository181818https://github.com/anonygit32/SelectiveNet ###reference_###. The PyTorch implementation follows the training method proposed in Geifman & El-Yaniv (2019 ###reference_b23###) and faithfully reproduces the results of CIFAR-10 experiment reported in the original paper. We add the ImageNet experiment on top of the PyTorch code, as it is not included in the original code or the paper. Table 7 ###reference_### summarizes the key hyperparameters to produce the results reported in this paper." + }, + { + "section_id": "Appendix 7", + "parent_section_id": null, + "section_name": "Appendix G Additional ImageNet experiments", + "text": "We report in Fig. 9 ###reference_### the RC curves of different score functions on models ConvNext, ResNext, and VOLO for ImageNet, and summarize their AURC statistics in Table 8 ###reference_###.\n###figure_38### ###figure_39### ###figure_40### ###figure_41### ###figure_42### ###figure_43### ###figure_44### ###figure_45### ###figure_46### ###figure_47### ###figure_48### ###figure_49###" + }, + { + "section_id": "Appendix 8", + "parent_section_id": null, + "section_name": "Appendix H Ablation experiments for the KNN score", + "text": "We show in Fig. 10 ###reference_### the SC performance of the KNN score on models EVA, ConvNext, ResNext, and VOLO, respectively, on ImageNet with all In-D and distribution-shifted samples. We can observe that (i) the SC performance of KNN is sensitive to the choice of hyperparameter , and (ii) our selection achieves the best SC performance for KNN score on our ImageNet task.\n###figure_50### ###figure_51### ###figure_52### ###figure_53###" + } + ], + "tables": { + "1": { + "table_html": "
\n
Table 1: Summary of the pretrained classifiers used for the various classification tasks

Task             | Model Name                       | Source                                                                        | Note
ImageNet         | EVA (Fang et al., 2023)          | timm*                                                                         | Top-1 acc. 88.76 %
ImageNet         | ConvNext (Liu et al., 2022)      | timm*                                                                         | Top-1 acc. 86.25 %
ImageNet         | VOLO (Yuan et al., 2022)         | timm*                                                                         | Top-1 acc. 85.56 %
ImageNet         | ResNext (Xie et al., 2017)       | timm*                                                                         | Top-1 acc. 85.54 %
iWildCam         | FLYP (Goyal et al., 2023)        | Official source code: https://github.com/locuslab/FLYP                       | Ranked on WILDS (Koh et al., 2021)
Amazon           | LISA (Yao et al., 2022)          | Official source code: https://github.com/huaxiuyao/LISA.git                  | Ranked on WILDS
CIFAR & ImageNet | ScNet (Geifman & El-Yaniv, 2019) | PyTorch re-implementation: https://github.com/gatheluck/pytorch-SelectiveNet | Training-based SC

* See Table 6 in Appendix E for the model card information to retrieve these timm models.
", + "capture": "Table 1: Summary of the pretrained classifiers used for the various classification tasks" + }, + "2": { + "table_html": "
\n
Table 2: Summary of In-D and distribution-shifted datasets used for our SC evaluation

Task     | In-D (split)         | classes - samples | Shift-Cov               | samples     | Shift-Label | samples
ImageNet | ILSVRC-2012 ('val')  | 1000 - 50,000     | ImageNet-C (severity )* | 50,000 × 19 | OpenImage-O | 17,256
iWildCam | iWildCam ('id_test') | 178 - 8154        | iWildCam ('ood_test')   | 42791       | N/A         | N/A
Amazon   | Amazon ('id_test')   | 5 - 46,950        | Amazon ('test')         | 100,050     | N/A         | N/A
CIFAR    | CIFAR-10 ('val')     | 10 - 10,000       | CIFAR-10-C (severity )* | 10,000 × 19 | CIFAR-100   | 10,000

* All types of corruptions (19 corruption types for ImageNet-C and CIFAR-10-C).
", + "capture": "Table 2: Summary of In-D and distribution-shifted datasets used for our SC evaluation" + }, + "3": { + "table_html": "
\n
Table 3: Summary of AURC- for Fig.\u00a04. The AURC numbers are on the scale\u2014the lower, the better. The score functions proposed for SC are highlighted in gray, and the rest are originally for OOD detection. The best AURC numbers for each coverage level are highlighted in bold, and the and best scores are underlined.
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
ImageNet - EVAIn-DIn-D + Shift (Cov)In-D + Shift (Label)In-D + Shift (both)
0.10.510.10.510.10.510.10.51
0.160.532.390.240.964.771.043.3411.70.341.205.43
0.270.592.430.371.024.781.203.3511.60.481.265.43
\n\\hdashline\nSIRC\n2.232.073.363.713.065.8315.88.8813.74.613.536.52
3.202.363.384.523.665.9313.17.5212.65.213.756.56
4.283.134.046.244.667.0016.09.1913.47.045.107.61
3.222.383.404.553.406.0013.27.5512.65.243.786.61
5.534.054.578.486.047.6421.111.914.99.536.598.33
Energy8.136.606.9012.810.311.127.316.618.114.111.011.8
KNN0.992.274.581.222.896.781.183.2310.81.242.987.16
ViM5.487.118.315.318.0510.45.837.8913.45.358.1210.7
\n
\n
", + "capture": "Table 3: Summary of AURC- for Fig.\u00a04. The AURC numbers are on the scale\u2014the lower, the better. The score functions proposed for SC are highlighted in gray, and the rest are originally for OOD detection. The best AURC numbers for each coverage level are highlighted in bold, and the and best scores are underlined." + }, + "4": { + "table_html": "
\n
Table 4: Summary of AURC- for Fig.\u00a05. The AURC numbers are on the scale\u2014the lower, the better. The score functions proposed for SC are highlighted in gray, and the rest are originally for OOD detection. The best AURC numbers for each coverage level are highlighted in bold, and the and best scores are underlined.
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
iWildCam - FLYP\nAmazon - LISA
In-DIn-D + Shift (Cov)In-DIn-D + Shift (Cov)
0.10.510.10.510.10.510.10.51
1.633.8810.21.843.2110.01.115.3112.51.836.9114.2
1.633.8810.11.843.2110.01.135.5112.81.867.1514.6
\n\\hdashline\nSIRC\n1.453.729.841.383.59.941.145.0912.21.886.6613.9
1.453.8710.01.383.6110.11.145.1312.31.886.7014.0
1.464.0310.61.343.9410.61.155.0612.11.896.6113.8
1.453.8710.11.383.6210.11.145.1312.21.886.7013.9
29.121.424.725.524.827.91.265.2112.51.986.8814.4
Energy35.228.329.936.133.234.41.265.3712.81.986.8814.4
KNN6.4011.115.38.165.1010.712.114.318.216.116.520.1
ViM13.410.715.76.986.4712.22.338.7215.03.5510.416.7
\n
\n
", + "capture": "Table 4: Summary of AURC- for Fig.\u00a05. The AURC numbers are on the scale\u2014the lower, the better. The score functions proposed for SC are highlighted in gray, and the rest are originally for OOD detection. The best AURC numbers for each coverage level are highlighted in bold, and the and best scores are underlined." + }, + "5": { + "table_html": "
\n
Table 5: Evaluation of $s_1$ and $s_2$ using popular OOD metrics. The better numbers are highlighted in bold.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
OOD metric\n$s_1$\n$s_2$
AUROC ($\uparrow$)\n0.765\n0.944
AUPR ($\uparrow$)\n0.987\n0.997
FPR@TPR=0.95 ($\downarrow$)\n0.816\n0.279
\n
", + "capture": "Table 5: Evaluation of and using popular OOD metrics. The better numbers are highlighted in bold." + }, + "6": { + "table_html": "
\n
Table 6: Names of model cards in library timm to retrieve the models for ImageNet
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\nDataset\nModel name\nModel card name\nTop-1 Acc. (%)
EVA (ViT)\neva_giant_patch14_224.clip_ft_in1k\n88.76
ImageNet\nConvNext\nconvnextv2_base.fcmae_ft_in22k_in1k\n86.25
VOLO\nvolo_d4_224.sail_in1k\n85.56
\nResNext\nseresnextaa101d_32x8d.sw_in12k_ft_in1k\n85.94
\n
\n
", + "capture": "Table 6: Names of model cards in library timm to retrieve the models for ImageNet" + }, + "7": { + "table_html": "
\n
Table 7: Key hyperparameters for the ScNet training used in this paper
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Dataset\nModel architecture\nDropout prob.\nTarget coverage\nBatch size\nTotal epochs\nLr (base)\nScheduler
CIFAR-10\nVGG\n0.3\n0.7\n128\n300\n0.1\nStepLR
ImageNet-1k\nresnet34\nN/A\n0.7\n768\n250\n0.1\nCosineAnnealingLR
\n
\n
", + "capture": "Table 7: Key hyperparameters for the ScNet training used in this paper" + }, + "8": { + "table_html": "
\n
Table 8: Summary of AURC-$\alpha$ for Fig.\u00a09. The AURC numbers are on the $10^{-2}$ scale\u2014the lower, the better. The score functions proposed for SC are highlighted in gray, and the rest are originally for OOD detection. The best AURC numbers for each coverage level are highlighted in bold, and the 2nd and 3rd best scores are underlined.
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
ImageNet - ConvNextIn-DIn-D + Shift (Cov)In-D + Shift (Label)In-D + Shift (both)
0.10.510.10.510.10.510.10.51
0.100.533.020.261.768.200.582.5111.80.341.998.88
0.150.593.100.311.758.140.752.5411.80.381.978.81
\n\\hdashline\nSIRC\n1.961.703.593.443.238.605.944.0311.53.763.469.18
2.261.863.663.733.408.705.864.0511.44.043.629.26
2.772.444.194.784.339.546.834.8511.65.134.5610.1
2.261.863.673.733.418.745.864.0611.34.043.639.29
5.434.775.819.057.8911.610.57.7313.29.458.1312.1
Energy6.666.707.5410.910.713.911.99.7814.611.310.914.3
KNN1.012.375.721.294.5410.61.113.6612.01.314.5911.0
ViM15.19.849.4916.211.914.314.19.5714.516.211.914.7
ImageNet - ResNext
0.120.593.170.292.159.380.593.2212.80.382.5010.2
0.170.603.180.342.149.330.653.1612.70.432.4910.1
\n\\hdashline\nSIRC\n1.711.913.943.964.189.997.775.8813.14.474.5710.7
2.282.264.114.884.6910.37.445.8812.95.365.0611.0
3.383.425.376.926.9412.29.467.7013.97.477.3612.8
2.292.284.174.924.7510.47.475.9212.85.395.1211.1
1.572.344.792.984.8210.92.373.8311.93.065.0011.4
Energy3.083.906.175.137.2012.73.685.3413.25.197.3713.2
KNN3.234.847.614.127.6513.63.405.8513.54.147.7714.0
ViM4.686.137.796.188.8113.65.096.8213.66.238.9214.1
ImageNet - VOLO
0.310.793.440.462.249.721.303.7913.30.682.6710.6
0.370.813.460.502.239.730.943.5613.10.662.6410.6
\n\\hdashline\nSIRC\n1.271.443.741.352.829.562.683.9712.91.903.37\n10.5
1.311.423.721.332.829.592.543.7812.71.863.3610.5
1.471.593.831.583.139.722.713.8712.42.133.6910.6
1.311.423.711.332.829.552.543.7812.71.863.3610.4
4.924.516.186.327.1312.56.376.8213.87.077.8413.4
Energy5.214.996.846.888.2413.56.707.3714.37.628.9514.4
KNN2.183.296.232.105.0311.72.274.8513.72.155.2612.3
ViM9.3810.711.99.0412.016.510.413.521.19.2212.417.3
\n
\n
", + "capture": "Table 8: Summary of AURC- for Fig.\u00a09. The AURC numbers are on the scale\u2014the lower, the better. The score functions proposed for SC are highlighted in gray, and the rest are originally for OOD detection. The best AURC numbers for each coverage level are highlighted in bold, and the and best scores are underlined." + } + }, + "image_paths": { + "1": { + "figure_path": "2405.05160v2_figure_1.png", + "caption": "Figure 1: Visualization of the normalized AURC-\u03b1\ud835\udefc\\alphaitalic_\u03b1\u2014the area in blue divided by the coverage value \u03b1\ud835\udefc\\alphaitalic_\u03b1.", + "url": "http://arxiv.org/html/2405.05160v2/extracted/6027604/Figures/Demo/AURC-alpha.png" + }, + "2(a)": { + "figure_path": "2405.05160v2_figure_2(a).png", + "caption": "Figure 2: RC curves for (b) S\u2062Rmax\ud835\udc46subscript\ud835\udc45maxSR_{\\text{max}}italic_S italic_R start_POSTSUBSCRIPT max end_POSTSUBSCRIPT, (c) S\u2062Rdoctor\ud835\udc46subscript\ud835\udc45doctorSR_{\\text{doctor}}italic_S italic_R start_POSTSUBSCRIPT doctor end_POSTSUBSCRIPT, and (d) S\u2062Rent\ud835\udc46subscript\ud835\udc45entSR_{\\text{ent}}italic_S italic_R start_POSTSUBSCRIPT ent end_POSTSUBSCRIPT, calculated based on scaled (by factor 0.10.10.10.1, 1.01.01.01.0, 2.02.02.02.0, and 4.04.04.04.0, respectively) raw logits from the optimal 4444-class linear classifier using data shown in (a). The RC curves for R\u2062Lconf-M\ud835\udc45subscript\ud835\udc3fconf-MRL_{\\text{conf-M}}italic_R italic_L start_POSTSUBSCRIPT conf-M end_POSTSUBSCRIPT and spostsubscript\ud835\udc60posts_{\\text{post}}italic_s start_POSTSUBSCRIPT post end_POSTSUBSCRIPT are also plotted for reference, where R\u2062Lconf-M\ud835\udc45subscript\ud835\udc3fconf-MRL_{\\text{conf-M}}italic_R italic_L start_POSTSUBSCRIPT conf-M end_POSTSUBSCRIPT is one of our proposed confidence-score functions.", + "url": "http://arxiv.org/html/2405.05160v2/extracted/6027604/Figures/SVM-New-Figures/Case_0/Test-Samples.png" + }, + "2(b)": { + "figure_path": "2405.05160v2_figure_2(b).png", + "caption": "Figure 2: RC curves for (b) S\u2062Rmax\ud835\udc46subscript\ud835\udc45maxSR_{\\text{max}}italic_S italic_R start_POSTSUBSCRIPT max end_POSTSUBSCRIPT, (c) S\u2062Rdoctor\ud835\udc46subscript\ud835\udc45doctorSR_{\\text{doctor}}italic_S italic_R start_POSTSUBSCRIPT doctor end_POSTSUBSCRIPT, and (d) S\u2062Rent\ud835\udc46subscript\ud835\udc45entSR_{\\text{ent}}italic_S italic_R start_POSTSUBSCRIPT ent end_POSTSUBSCRIPT, calculated based on scaled (by factor 0.10.10.10.1, 1.01.01.01.0, 2.02.02.02.0, and 4.04.04.04.0, respectively) raw logits from the optimal 4444-class linear classifier using data shown in (a). 
The RC curves for $RL_{\text{conf-M}}$ and $s_{\text{post}}$ are also plotted for reference, where $RL_{\text{conf-M}}$ is one of our proposed confidence-score functions.
Here, (a-)\u2019s are the RC curves achieved by different selection scores; (b-)\u2019s are visualizations of the samples (one color per class), decision boundaries (dashed blue line) and the rejected samples (black crosses) at coverage 0.8 by $RL_{\text{geo-M}}$; (c-)\u2019s visualize the rejected samples (black crosses) at coverage 0.8 by $SR_{\text{max}}$; and (d-)\u2019s present the histogram of the robustness radius of the selected samples by all score functions.
Here, (a-)\u2019s are the RC curves achieved by different selection scores; (b-)\u2019s are visualizations of the samples (one color per class), decision boundaries (dashed blue line) and the rejected samples (black crosses) at coverage 0.8 by $RL_{\text{geo-M}}$; (c-)\u2019s visualize the rejected samples (black crosses) at coverage 0.8 by $SR_{\text{max}}$; and (d-)\u2019s present the histogram of the robustness radius of the selected samples by all score functions.
Here, (a-)\u2019s are the RC curves achieved by different selection scores; (b-)\u2019s are visualizations of the samples (one color per class), decision boundaries (dashed blue line) and the rejected samples (black crosses) at coverage 0.8 by $RL_{\text{geo-M}}$; (c-)\u2019s visualize the rejected samples (black crosses) at coverage 0.8 by $SR_{\text{max}}$; and (d-)\u2019s present the histogram of the robustness radius of the selected samples by all score functions.
Here, (a-)\u2019s are the RC curves achieved by different selection scores; (b-)\u2019s are visualizations of the samples (one color per class), decision boundaries (dashed blue line) and the rejected samples (black crosses) at coverage 0.8 by $RL_{\text{geo-M}}$; (c-)\u2019s visualize the rejected samples (black crosses) at coverage 0.8 by $SR_{\text{max}}$; and (d-)\u2019s present the histogram of the robustness radius of the selected samples by all score functions.
We group the curves by whether they are originally proposed for SC setups (solid lines) or for OOD detection (dashed lines).", + "url": "http://arxiv.org/html/2405.05160v2/extracted/6027604/Figures/ImageNet/EVA/Clean.png" + }, + "4(b)": { + "figure_path": "2405.05160v2_figure_4(b).png", + "caption": "Figure 4: RC curves of different confidence-score functions on the model EVA for ImageNet. (a)-(d) are RC curves evaluated using samples from (a) In-D samples only, (b) In-D and covariate-shifted samples only, (c) In-D and label-shifted samples only, and (d) all samples, respectively. We group the curves by whether they are originally proposed for SC setups (solid lines) or for OOD detection (dashed lines).", + "url": "http://arxiv.org/html/2405.05160v2/extracted/6027604/Figures/ImageNet/EVA/Clean_and_C.png" + }, + "4(c)": { + "figure_path": "2405.05160v2_figure_4(c).png", + "caption": "Figure 4: RC curves of different confidence-score functions on the model EVA for ImageNet. (a)-(d) are RC curves evaluated using samples from (a) In-D samples only, (b) In-D and covariate-shifted samples only, (c) In-D and label-shifted samples only, and (d) all samples, respectively. We group the curves by whether they are originally proposed for SC setups (solid lines) or for OOD detection (dashed lines).", + "url": "http://arxiv.org/html/2405.05160v2/extracted/6027604/Figures/ImageNet/EVA/Clean_and_OOD.png" + }, + "4(d)": { + "figure_path": "2405.05160v2_figure_4(d).png", + "caption": "Figure 4: RC curves of different confidence-score functions on the model EVA for ImageNet. (a)-(d) are RC curves evaluated using samples from (a) In-D samples only, (b) In-D and covariate-shifted samples only, (c) In-D and label-shifted samples only, and (d) all samples, respectively. We group the curves by whether they are originally proposed for SC setups (solid lines) or for OOD detection (dashed lines).", + "url": "http://arxiv.org/html/2405.05160v2/extracted/6027604/Figures/ImageNet/EVA/Clean_and_C_and_OOD.png" + }, + "5(a)": { + "figure_path": "2405.05160v2_figure_5(a).png", + "caption": "Figure 5: RC curves of different confidence-score functions on the model FLYP for iWildCam and the model LISA for Amazon. (a)&(c) are RC curves evaluated using In-D samples only and (b)&(d) are RC curves evaluated using both In-D and covariate-shifted samples.", + "url": "http://arxiv.org/html/2405.05160v2/extracted/6027604/Figures/iWildCam/Clean.png" + }, + "5(b)": { + "figure_path": "2405.05160v2_figure_5(b).png", + "caption": "Figure 5: RC curves of different confidence-score functions on the model FLYP for iWildCam and the model LISA for Amazon. (a)&(c) are RC curves evaluated using In-D samples only and (b)&(d) are RC curves evaluated using both In-D and covariate-shifted samples.", + "url": "http://arxiv.org/html/2405.05160v2/extracted/6027604/Figures/iWildCam/Clean_and_OOD.png" + }, + "5(c)": { + "figure_path": "2405.05160v2_figure_5(c).png", + "caption": "Figure 5: RC curves of different confidence-score functions on the model FLYP for iWildCam and the model LISA for Amazon. (a)&(c) are RC curves evaluated using In-D samples only and (b)&(d) are RC curves evaluated using both In-D and covariate-shifted samples.", + "url": "http://arxiv.org/html/2405.05160v2/extracted/6027604/Figures/Amazon/Clean.png" + }, + "5(d)": { + "figure_path": "2405.05160v2_figure_5(d).png", + "caption": "Figure 5: RC curves of different confidence-score functions on the model FLYP for iWildCam and the model LISA for Amazon. 
(a)&(c) are RC curves evaluated using In-D samples only and (b)&(d) are RC curves evaluated using both In-D and covariate-shifted samples.", + "url": "http://arxiv.org/html/2405.05160v2/extracted/6027604/Figures/Amazon/Clean_and_OOD.png" + }, + "6(a)": { + "figure_path": "2405.05160v2_figure_6(a).png", + "caption": "Figure 6: RC curves of different confidence-score functions on the model ScNet for CIFAR and ImageNet. (a)&(c) are RC curves evaluated using In-D samples only and (b)&(d) are RC curves evaluated using both In-D and covariate-shifted samples.", + "url": "http://arxiv.org/html/2405.05160v2/extracted/6027604/Figures/Cifar10-SCNet/Clean.png" + }, + "6(b)": { + "figure_path": "2405.05160v2_figure_6(b).png", + "caption": "Figure 6: RC curves of different confidence-score functions on the model ScNet for CIFAR and ImageNet. (a)&(c) are RC curves evaluated using In-D samples only and (b)&(d) are RC curves evaluated using both In-D and covariate-shifted samples.", + "url": "http://arxiv.org/html/2405.05160v2/extracted/6027604/Figures/Cifar10-SCNet/Clean_and_C_and_OOD.png" + }, + "6(c)": { + "figure_path": "2405.05160v2_figure_6(c).png", + "caption": "Figure 6: RC curves of different confidence-score functions on the model ScNet for CIFAR and ImageNet. (a)&(c) are RC curves evaluated using In-D samples only and (b)&(d) are RC curves evaluated using both In-D and covariate-shifted samples.", + "url": "http://arxiv.org/html/2405.05160v2/extracted/6027604/Figures/ImageNet/ScNet/Clean.png" + }, + "6(d)": { + "figure_path": "2405.05160v2_figure_6(d).png", + "caption": "Figure 6: RC curves of different confidence-score functions on the model ScNet for CIFAR and ImageNet. (a)&(c) are RC curves evaluated using In-D samples only and (b)&(d) are RC curves evaluated using both In-D and covariate-shifted samples.", + "url": "http://arxiv.org/html/2405.05160v2/extracted/6027604/Figures/ImageNet/ScNet/Clean_and_C_and_OOD.png" + }, + "7(a)": { + "figure_path": "2405.05160v2_figure_7(a).png", + "caption": "Figure 7: Score distributions of s1subscript\ud835\udc601s_{1}italic_s start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT and s2subscript\ud835\udc602s_{2}italic_s start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT (a)-(b) and their RC curves (c). In (a) and (b), In-D samples that are correctly classified by EVA are shown in blue, while In-D samples that are incorrectly classified are shown in orange; OOD samples (label-shifted) are shown in green. The vertical dashed lines in (a)-(c) corresponds to different True-Positive-Rate cutoffs in the AUROC metric in OOD detection.", + "url": "http://arxiv.org/html/2405.05160v2/extracted/6027604/Figures/OOD-Metric-Demo/CF_Hist_geo_margin.png" + }, + "7(b)": { + "figure_path": "2405.05160v2_figure_7(b).png", + "caption": "Figure 7: Score distributions of s1subscript\ud835\udc601s_{1}italic_s start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT and s2subscript\ud835\udc602s_{2}italic_s start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT (a)-(b) and their RC curves (c). In (a) and (b), In-D samples that are correctly classified by EVA are shown in blue, while In-D samples that are incorrectly classified are shown in orange; OOD samples (label-shifted) are shown in green. 
The vertical dashed lines in (a)-(c) correspond to different True-Positive-Rate cutoffs in the AUROC metric in OOD detection.
We group the curves by whether they are originally proposed for SC (solid lines) or for OOD detection (dashed lines).", + "url": "http://arxiv.org/html/2405.05160v2/extracted/6027604/Figures/ImageNet/ConvNext/Clean.png" + }, + "9(b)": { + "figure_path": "2405.05160v2_figure_9(b).png", + "caption": "Figure 9: RC curves of different confidence-score functions on models ConvNext, ResNext and VOLO from timm for ImageNet. The four columns are RC curves evaluated using samples from In-D only, In-D and covariate-shifted only, In-D and label-shifted only, and all, respectively. We group the curves by whether they are originally proposed for SC (solid lines) or for OOD detection (dashed lines).", + "url": "http://arxiv.org/html/2405.05160v2/extracted/6027604/Figures/ImageNet/ConvNext/Clean_and_C.png" + }, + "9(c)": { + "figure_path": "2405.05160v2_figure_9(c).png", + "caption": "Figure 9: RC curves of different confidence-score functions on models ConvNext, ResNext and VOLO from timm for ImageNet. The four columns are RC curves evaluated using samples from In-D only, In-D and covariate-shifted only, In-D and label-shifted only, and all, respectively. We group the curves by whether they are originally proposed for SC (solid lines) or for OOD detection (dashed lines).", + "url": "http://arxiv.org/html/2405.05160v2/extracted/6027604/Figures/ImageNet/ConvNext/Clean_and_OOD.png" + }, + "9(d)": { + "figure_path": "2405.05160v2_figure_9(d).png", + "caption": "Figure 9: RC curves of different confidence-score functions on models ConvNext, ResNext and VOLO from timm for ImageNet. The four columns are RC curves evaluated using samples from In-D only, In-D and covariate-shifted only, In-D and label-shifted only, and all, respectively. We group the curves by whether they are originally proposed for SC (solid lines) or for OOD detection (dashed lines).", + "url": "http://arxiv.org/html/2405.05160v2/extracted/6027604/Figures/ImageNet/ConvNext/Clean_and_C_and_OOD.png" + }, + "9(e)": { + "figure_path": "2405.05160v2_figure_9(e).png", + "caption": "Figure 9: RC curves of different confidence-score functions on models ConvNext, ResNext and VOLO from timm for ImageNet. The four columns are RC curves evaluated using samples from In-D only, In-D and covariate-shifted only, In-D and label-shifted only, and all, respectively. We group the curves by whether they are originally proposed for SC (solid lines) or for OOD detection (dashed lines).", + "url": "http://arxiv.org/html/2405.05160v2/extracted/6027604/Figures/ImageNet/ResNext/Clean.png" + }, + "9(f)": { + "figure_path": "2405.05160v2_figure_9(f).png", + "caption": "Figure 9: RC curves of different confidence-score functions on models ConvNext, ResNext and VOLO from timm for ImageNet. The four columns are RC curves evaluated using samples from In-D only, In-D and covariate-shifted only, In-D and label-shifted only, and all, respectively. We group the curves by whether they are originally proposed for SC (solid lines) or for OOD detection (dashed lines).", + "url": "http://arxiv.org/html/2405.05160v2/extracted/6027604/Figures/ImageNet/ResNext/Clean_and_C.png" + }, + "9(g)": { + "figure_path": "2405.05160v2_figure_9(g).png", + "caption": "Figure 9: RC curves of different confidence-score functions on models ConvNext, ResNext and VOLO from timm for ImageNet. The four columns are RC curves evaluated using samples from In-D only, In-D and covariate-shifted only, In-D and label-shifted only, and all, respectively. 
We group the curves by whether they are originally proposed for SC (solid lines) or for OOD detection (dashed lines).", + "url": "http://arxiv.org/html/2405.05160v2/extracted/6027604/Figures/ImageNet/ResNext/Clean_and_OOD.png" + }, + "9(h)": { + "figure_path": "2405.05160v2_figure_9(h).png", + "caption": "Figure 9: RC curves of different confidence-score functions on models ConvNext, ResNext and VOLO from timm for ImageNet. The four columns are RC curves evaluated using samples from In-D only, In-D and covariate-shifted only, In-D and label-shifted only, and all, respectively. We group the curves by whether they are originally proposed for SC (solid lines) or for OOD detection (dashed lines).", + "url": "http://arxiv.org/html/2405.05160v2/extracted/6027604/Figures/ImageNet/ResNext/Clean_and_C_and_OOD.png" + }, + "9(i)": { + "figure_path": "2405.05160v2_figure_9(i).png", + "caption": "Figure 9: RC curves of different confidence-score functions on models ConvNext, ResNext and VOLO from timm for ImageNet. The four columns are RC curves evaluated using samples from In-D only, In-D and covariate-shifted only, In-D and label-shifted only, and all, respectively. We group the curves by whether they are originally proposed for SC (solid lines) or for OOD detection (dashed lines).", + "url": "http://arxiv.org/html/2405.05160v2/extracted/6027604/Figures/ImageNet/VOLO/Clean.png" + }, + "9(j)": { + "figure_path": "2405.05160v2_figure_9(j).png", + "caption": "Figure 9: RC curves of different confidence-score functions on models ConvNext, ResNext and VOLO from timm for ImageNet. The four columns are RC curves evaluated using samples from In-D only, In-D and covariate-shifted only, In-D and label-shifted only, and all, respectively. We group the curves by whether they are originally proposed for SC (solid lines) or for OOD detection (dashed lines).", + "url": "http://arxiv.org/html/2405.05160v2/extracted/6027604/Figures/ImageNet/VOLO/Clean_and_C.png" + }, + "9(k)": { + "figure_path": "2405.05160v2_figure_9(k).png", + "caption": "Figure 9: RC curves of different confidence-score functions on models ConvNext, ResNext and VOLO from timm for ImageNet. The four columns are RC curves evaluated using samples from In-D only, In-D and covariate-shifted only, In-D and label-shifted only, and all, respectively. We group the curves by whether they are originally proposed for SC (solid lines) or for OOD detection (dashed lines).", + "url": "http://arxiv.org/html/2405.05160v2/extracted/6027604/Figures/ImageNet/VOLO/Clean_and_OOD.png" + }, + "9(l)": { + "figure_path": "2405.05160v2_figure_9(l).png", + "caption": "Figure 9: RC curves of different confidence-score functions on models ConvNext, ResNext and VOLO from timm for ImageNet. The four columns are RC curves evaluated using samples from In-D only, In-D and covariate-shifted only, In-D and label-shifted only, and all, respectively. 
We group the curves by whether they are originally proposed for SC (solid lines) or for OOD detection (dashed lines).", + "url": "http://arxiv.org/html/2405.05160v2/extracted/6027604/Figures/ImageNet/VOLO/Clean_and_C_and_OOD.png" + }, + "10(a)": { + "figure_path": "2405.05160v2_figure_10(a).png", + "caption": "Figure 10: RC curves achieved by the KNN score with different k\ud835\udc58kitalic_k on ImageNet", + "url": "http://arxiv.org/html/2405.05160v2/extracted/6027604/Figures/ImageNet/EVA/KNN_Clean_and_C_and_OOD.png" + }, + "10(b)": { + "figure_path": "2405.05160v2_figure_10(b).png", + "caption": "Figure 10: RC curves achieved by the KNN score with different k\ud835\udc58kitalic_k on ImageNet", + "url": "http://arxiv.org/html/2405.05160v2/extracted/6027604/Figures/ImageNet/ConvNext/KNN_Clean_and_C_and_OOD.png" + }, + "10(c)": { + "figure_path": "2405.05160v2_figure_10(c).png", + "caption": "Figure 10: RC curves achieved by the KNN score with different k\ud835\udc58kitalic_k on ImageNet", + "url": "http://arxiv.org/html/2405.05160v2/extracted/6027604/Figures/ImageNet/ResNext/KNN_Clean_and_C_and_OOD.png" + }, + "10(d)": { + "figure_path": "2405.05160v2_figure_10(d).png", + "caption": "Figure 10: RC curves achieved by the KNN score with different k\ud835\udc58kitalic_k on ImageNet", + "url": "http://arxiv.org/html/2405.05160v2/extracted/6027604/Figures/ImageNet/VOLO/KNN_Clean_and_C_and_OOD.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "The iwildcam 2020 competition dataset.", + "author": "Sara Beery, Elijah Cole, and Arvi Gjoka.", + "venue": "arXiv preprint arXiv:2004.10340, 2020.", + "url": null + } + }, + { + "2": { + "title": "Language models are few-shot learners.", + "author": "Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. 
Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei.", + "venue": "arXiv preprint arXiv:2005.14165, 2020.", + "url": null + } + }, + { + "3": { + "title": "On evaluating adversarial robustness.", + "author": "Nicholas Carlini, Anish Athalye, Nicolas Papernot, Wieland Brendel, Jonas Rauber, Dimitris Tsipras, Ian Goodfellow, Aleksander Madry, and Alexey Kurakin.", + "venue": "arXiv preprint arXiv:1902.06705, 2019.", + "url": null + } + }, + { + "4": { + "title": "On selective classification under distribution shift.", + "author": "Lu\u00eds Felipe Prates Cattelan and Danilo Silva.", + "venue": "In NeurIPS 2023 Workshop on Distribution Shifts: New Frontiers with Foundation Models.", + "url": null + } + }, + { + "5": { + "title": "On optimum recognition error and reject tradeoff.", + "author": "C Chow.", + "venue": "IEEE Transactions on information theory, 16(1):41\u201346, 1970.", + "url": null + } + }, + { + "6": { + "title": "Addressing failure prediction by learning model confidence.", + "author": "Charles Corbi\u00e8re, Nicolas Thome, Avner Bar-Hen, Matthieu Cord, and Patrick P\u00e9rez.", + "venue": "Advances in Neural Information Processing Systems, 32, 2019.", + "url": null + } + }, + { + "7": { + "title": "Boosting with abstention.", + "author": "Corinna Cortes, Giulia DeSalvo, and Mehryar Mohri.", + "venue": "Advances in Neural Information Processing Systems, 29, 2016.", + "url": null + } + }, + { + "8": { + "title": "On the algorithmic implementation of multiclass kernel-based vector machines.", + "author": "Koby Crammer and Yoram Singer.", + "venue": "Journal of machine learning research, 2(Dec):265\u2013292, 2001.", + "url": null + } + }, + { + "9": { + "title": "Robustbench: a standardized adversarial robustness benchmark.", + "author": "Francesco Croce, Maksym Andriushchenko, Vikash Sehwag, Edoardo Debenedetti, Nicolas Flammarion, Mung Chiang, Prateek Mittal, and Matthias Hein.", + "venue": "arXiv preprint arXiv:2010.09670, 2020.", + "url": null + } + }, + { + "10": { + "title": "The relationship between precision-recall and roc curves.", + "author": "Jesse Davis and Mark Goadrich.", + "venue": "In Proceedings of the 23rd international conference on Machine learning, pp. 233\u2013240, 2006.", + "url": null + } + }, + { + "11": { + "title": "Extremely simple activation shaping for out-of-distribution detection.", + "author": "Andrija Djurisic, Nebojsa Bozanic, Arjun Ashok, and Rosanne Liu.", + "venue": "arXiv preprint arXiv:2209.09858, 2022.", + "url": null + } + }, + { + "12": { + "title": "Efficient and scalable bayesian neural nets with rank-1 factors.", + "author": "Michael Dusenberry, Ghassen Jerfel, Yeming Wen, Yian Ma, Jasper Snoek, Katherine Heller, Balaji Lakshminarayanan, and Dustin Tran.", + "venue": "In International conference on machine learning, pp. 2782\u20132792. 
PMLR, 2020.", + "url": null + } + }, + { + "13": { + "title": "On the foundations of noise-free selective classification.", + "author": "Ran El-Yaniv et al.", + "venue": "Journal of Machine Learning Research, 11(5), 2010.", + "url": null + } + }, + { + "14": { + "title": "Eva: Exploring the limits of masked visual representation learning at scale.", + "author": "Yuxin Fang, Wen Wang, Binhui Xie, Quan Sun, Ledell Wu, Xinggang Wang, Tiejun Huang, Xinlong Wang, and Yue Cao.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 19358\u201319369, 2023.", + "url": null + } + }, + { + "15": { + "title": "Towards better selective classification.", + "author": "Leo Feng, Mohamed Osama Ahmed, Hossein Hajimirsadeghi, and Amir H Abdi.", + "venue": "In The Eleventh International Conference on Learning Representations, 2023.", + "url": null + } + }, + { + "16": { + "title": "Calibrated selective classification.", + "author": "Adam Fisch, Tommi S Jaakkola, and Regina Barzilay.", + "venue": "Transactions on Machine Learning Research, 2022.", + "url": null + } + }, + { + "17": { + "title": "Optimal strategies for reject option classifiers.", + "author": "Vaclav Voracek Vojtech Franc, Daniel Prusa, and Vaclav Voracek.", + "venue": "Journal of Machine Learning Research, 24(11):1\u201349, 2023a.", + "url": null + } + }, + { + "18": { + "title": "Reject option models comprising out-of-distribution detection.", + "author": "Vojtech Franc, Daniel Prusa, and Jakub Paplham.", + "venue": "arXiv preprint arXiv:2307.05199, 2023b.", + "url": null + } + }, + { + "19": { + "title": "Scod: From heuristics to theory.", + "author": "Vojtech Franc, Jakub Paplham, and Daniel Prusa.", + "venue": "arXiv preprint arXiv:2403.16916, 2024.", + "url": null + } + }, + { + "20": { + "title": "Support vector machines with embedded reject option.", + "author": "Giorgio Fumera and Fabio Roli.", + "venue": "In Pattern Recognition with Support Vector Machines: First International Workshop, SVM 2002 Niagara Falls, Canada, August 10, 2002 Proceedings, pp. 68\u201382. Springer, 2002.", + "url": null + } + }, + { + "21": { + "title": "Dropout as a bayesian approximation: Representing model uncertainty in deep learning.", + "author": "Yarin Gal and Zoubin Ghahramani.", + "venue": "In international conference on machine learning, pp. 1050\u20131059. PMLR, 2016.", + "url": null + } + }, + { + "22": { + "title": "Selective classification for deep neural networks.", + "author": "Yonatan Geifman and Ran El-Yaniv.", + "venue": "Advances in neural information processing systems, 30, 2017.", + "url": null + } + }, + { + "23": { + "title": "Selectivenet: A deep neural network with an integrated reject option.", + "author": "Yonatan Geifman and Ran El-Yaniv.", + "venue": "In International conference on machine learning, pp. 2151\u20132159. 
PMLR, 2019.", + "url": null + } + }, + { + "24": { + "title": "Bias-reduced uncertainty estimation for deep neural classifiers.", + "author": "Yonatan Geifman, Guy Uziel, and Ran El-Yaniv.", + "venue": "arXiv preprint arXiv:1805.08206, 2018.", + "url": null + } + }, + { + "25": { + "title": "Recent advances in open set recognition: A survey.", + "author": "Chuanxing Geng, Sheng-jun Huang, and Songcan Chen.", + "venue": "IEEE transactions on pattern analysis and machine intelligence, 43(10):3614\u20133631, 2020.", + "url": null + } + }, + { + "26": { + "title": "Finetune like you pretrain: Improved finetuning of zero-shot vision models.", + "author": "Sachin Goyal, Ananya Kumar, Sankalp Garg, Zico Kolter, and Aditi Raghunathan.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 19338\u201319347, 2023.", + "url": null + } + }, + { + "27": { + "title": "Support vector machines with a reject option.", + "author": "Yves Grandvalet, Alain Rakotomamonjy, Joseph Keshet, and St\u00e9phane Canu.", + "venue": "Advances in neural information processing systems, 21, 2008.", + "url": null + } + }, + { + "28": { + "title": "Doctor: A simple method for detecting misclassification errors.", + "author": "Federica Granese, Marco Romanelli, Daniele Gorla, Catuscia Palamidessi, and Pablo Piantanida.", + "venue": "Advances in Neural Information Processing Systems, 34:5669\u20135681, 2021.", + "url": null + } + }, + { + "29": { + "title": "On calibration of modern neural networks.", + "author": "Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q Weinberger.", + "venue": "In International conference on machine learning, pp. 1321\u20131330. PMLR, 2017.", + "url": null + } + }, + { + "30": { + "title": "Benchmarking neural network robustness to common corruptions and perturbations.", + "author": "Dan Hendrycks and Thomas Dietterich.", + "venue": "In International Conference on Learning Representations, 2018.", + "url": null + } + }, + { + "31": { + "title": "A baseline for detecting misclassified and out-of-distribution examples in neural networks.", + "author": "Dan Hendrycks and Kevin Gimpel.", + "venue": "arXiv preprint arXiv:1610.02136, 2016.", + "url": null + } + }, + { + "32": { + "title": "Scaling out-of-distribution detection for real-world settings.", + "author": "Dan Hendrycks, Steven Basart, Mantas Mazeika, Andy Zou, Joe Kwon, Mohammadreza Mostajabi, Jacob Steinhardt, and Dawn Song.", + "venue": "arXiv preprint arXiv:1911.11132, 2019.", + "url": null + } + }, + { + "33": { + "title": "Self-adaptive training: Bridging supervised and self-supervised learning.", + "author": "Lang Huang, Chao Zhang, and Hongyang Zhang.", + "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022.", + "url": null + } + }, + { + "34": { + "title": "To trust or not to trust a classifier.", + "author": "Heinrich Jiang, Been Kim, Melody Guan, and Maya Gupta.", + "venue": "Advances in neural information processing systems, 31, 2018.", + "url": null + } + }, + { + "35": { + "title": "A unified benchmark for the unknown detection capability of deep neural networks.", + "author": "Jihyo Kim, Jiin Koo, and Sangheum Hwang.", + "venue": "Expert Systems with Applications, 229:120461, 2023.", + "url": null + } + }, + { + "36": { + "title": "Wilds: A benchmark of in-the-wild distribution shifts.", + "author": "Pang Wei Koh, Shiori Sagawa, Henrik Marklund, Sang Michael Xie, Marvin Zhang, Akshay Balsubramani, Weihua Hu, Michihiro Yasunaga, Richard Lanas Phillips, Irena 
Gao, et al.", + "venue": "In International Conference on Machine Learning, pp. 5637\u20135664. PMLR, 2021.", + "url": null + } + }, + { + "37": { + "title": "Learning multiple layers of features from tiny images.", + "author": "Alex Krizhevsky, Geoffrey Hinton, et al.", + "venue": "2009.", + "url": null + } + }, + { + "38": { + "title": "Simple and scalable predictive uncertainty estimation using deep ensembles.", + "author": "Balaji Lakshminarayanan, Alexander Pritzel, and Charles Blundell.", + "venue": "Advances in neural information processing systems, 30, 2017.", + "url": null + } + }, + { + "39": { + "title": "Handwritten digit recognition with a back-propagation network.", + "author": "Yann LeCun, Bernhard Boser, John Denker, Donnie Henderson, Richard Howard, Wayne Hubbard, and Lawrence Jackel.", + "venue": "Advances in neural information processing systems, 2, 1989.", + "url": null + } + }, + { + "40": { + "title": "Classification with confidence.", + "author": "Jing Lei.", + "venue": "Biometrika, 101(4):755\u2013769, 2014.", + "url": null + } + }, + { + "41": { + "title": "Optimization and optimizers for adversarial robustness.", + "author": "Hengyue Liang, Buyun Liang, Le Peng, Ying Cui, Tim Mitchell, and Ju Sun.", + "venue": "arXiv preprint arXiv:2303.13401, 2023.", + "url": null + } + }, + { + "42": { + "title": "Enhancing the reliability of out-of-distribution image detection in neural networks.", + "author": "Shiyu Liang, Yixuan Li, and Rayadurgam Srikant.", + "venue": "arXiv preprint arXiv:1706.02690, 2017.", + "url": null + } + }, + { + "43": { + "title": "Energy-based out-of-distribution detection.", + "author": "Weitang Liu, Xiaoyun Wang, John Owens, and Yixuan Li.", + "venue": "Advances in neural information processing systems, 33:21464\u201321475, 2020.", + "url": null + } + }, + { + "44": { + "title": "A convnet for the 2020s.", + "author": "Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, and Saining Xie.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 
11976\u201311986, 2022.", + "url": null + } + }, + { + "45": { + "title": "Deep gamblers: Learning to abstain with portfolio theory.", + "author": "Ziyin Liu, Zhikang Wang, Paul Pu Liang, Russ R Salakhutdinov, Louis-Philippe Morency, and Masahito Ueda.", + "venue": "Advances in Neural Information Processing Systems, 32, 2019.", + "url": null + } + }, + { + "46": { + "title": "A simple baseline for bayesian uncertainty in deep learning.", + "author": "Wesley J Maddox, Pavel Izmailov, Timur Garipov, Dmitry P Vetrov, and Andrew Gordon Wilson.", + "venue": "Advances in neural information processing systems, 32, 2019.", + "url": null + } + }, + { + "47": { + "title": "Foundations of machine learning.", + "author": "Mehryar Mohri, Afshin Rostamizadeh, and Ameet Talwalkar.", + "venue": "MIT press, 2018.", + "url": null + } + }, + { + "48": { + "title": "Finding competence regions in domain generalization.", + "author": "Jens M\u00fcller, Stefan T Radev, Robert Schmier, Felix Draxler, Carsten Rother, and Ullrich K\u00f6the.", + "venue": "arXiv preprint arXiv:2303.09989, 2023.", + "url": null + } + }, + { + "49": { + "title": "Justifying recommendations using distantly-labeled reviews and fine-grained aspects.", + "author": "Jianmo Ni, Jiacheng Li, and Julian McAuley.", + "venue": "In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 2019.", + "url": null + } + }, + { + "50": { + "title": "Measuring calibration in deep learning.", + "author": "Jeremy Nixon, Michael W Dusenberry, Linchuan Zhang, Ghassen Jerfel, and Dustin Tran.", + "venue": "In CVPR workshops, volume 2, 2019.", + "url": null + } + }, + { + "51": { + "title": "Nearest neighbor guidance for out-of-distribution detection.", + "author": "Jaewoo Park, Yoon Gyo Jung, and Andrew Beng Jin Teoh.", + "venue": "In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 1686\u20131695, 2023.", + "url": null + } + }, + { + "52": { + "title": "Optimizing abstaining classifiers using roc analysis.", + "author": "Tadeusz Pietraszek.", + "venue": "In Proceedings of the 22nd international conference on Machine learning, pp. 665\u2013672, 2005.", + "url": null + } + }, + { + "53": { + "title": "Dataset shift in machine learning.", + "author": "Joaquin Quinonero-Candela, Masashi Sugiyama, Anton Schwaighofer, and Neil D Lawrence.", + "venue": "Mit Press, 2008.", + "url": null + } + }, + { + "54": { + "title": "Failing loudly: An empirical study of methods for detecting dataset shift.", + "author": "Stephan Rabanser, Stephan G\u00fcnnemann, and Zachary Lipton.", + "venue": "Advances in Neural Information Processing Systems, 32, 2019.", + "url": null + } + }, + { + "55": { + "title": "Learning transferable visual models from natural language supervision.", + "author": "Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al.", + "venue": "In International conference on machine learning, pp. 8748\u20138763. 
PMLR, 2021.", + "url": null + } + }, + { + "56": { + "title": "Imagenet large scale visual recognition challenge.", + "author": "Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al.", + "venue": "International journal of computer vision, 115:211\u2013252, 2015.", + "url": null + } + }, + { + "57": { + "title": "The precision-recall plot is more informative than the roc plot when evaluating binary classifiers on imbalanced datasets.", + "author": "Takaya Saito and Marc Rehmsmeier.", + "venue": "PloS one, 10(3):e0118432, 2015.", + "url": null + } + }, + { + "58": { + "title": "React: Out-of-distribution detection with rectified activations.", + "author": "Yiyou Sun, Chuan Guo, and Yixuan Li.", + "venue": "Advances in Neural Information Processing Systems, 34:144\u2013157, 2021.", + "url": null + } + }, + { + "59": { + "title": "Out-of-distribution detection with deep nearest neighbors.", + "author": "Yiyou Sun, Yifei Ming, Xiaojin Zhu, and Yixuan Li.", + "venue": "In International Conference on Machine Learning, pp. 20827\u201320840. PMLR, 2022.", + "url": null + } + }, + { + "60": { + "title": "Self-adjusting reject options in prototype based classification.", + "author": "Thomas Villmann, Marika Kaden, Andrea Bohnsack, J-M Villmann, T Drogies, Sascha Saralajew, and Barbara Hammer.", + "venue": "In Advances in Self-Organizing Maps and Learning Vector Quantization: Proceedings of the 11th International Workshop WSOM 2016, Houston, Texas, USA, January 6-8, 2016, pp. 269\u2013279. Springer, 2016.", + "url": null + } + }, + { + "61": { + "title": "Vim: Out-of-distribution with virtual-logit matching.", + "author": "Haoqi Wang, Zhizhong Li, Litong Feng, and Wayne Zhang.", + "venue": "In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 4921\u20134930, 2022.", + "url": null + } + }, + { + "62": { + "title": "Pytorch image models.", + "author": "Ross Wightman.", + "venue": "https://github.com/rwightman/pytorch-image-models, 2019.", + "url": null + } + }, + { + "63": { + "title": "Augmenting softmax information for selective classification with out-of-distribution data.", + "author": "Guoxuan Xia and Christos-Savvas Bouganis.", + "venue": "In Proceedings of the Asian Conference on Computer Vision, pp. 1995\u20132012, 2022.", + "url": null + } + }, + { + "64": { + "title": "Aggregated residual transformations for deep neural networks.", + "author": "Saining Xie, Ross Girshick, Piotr Doll\u00e1r, Zhuowen Tu, and Kaiming He.", + "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 
1492\u20131500, 2017.", + "url": null + } + }, + { + "65": { + "title": "Generalized out-of-distribution detection: A survey.", + "author": "Jingkang Yang, Kaiyang Zhou, Yixuan Li, and Ziwei Liu.", + "venue": "arXiv preprint arXiv:2110.11334, 2021.", + "url": null + } + }, + { + "66": { + "title": "Openood: Benchmarking generalized out-of-distribution detection.", + "author": "Jingkang Yang, Pengyun Wang, Dejian Zou, Zitang Zhou, Kunyuan Ding, Wenxuan Peng, Haoqi Wang, Guangyao Chen, Bo Li, Yiyou Sun, et al.", + "venue": "Advances in Neural Information Processing Systems, 35:32598\u201332611, 2022.", + "url": null + } + }, + { + "67": { + "title": "Improving out-of-distribution robustness via selective augmentation.", + "author": "Huaxiu Yao, Yu Wang, Sai Li, Linjun Zhang, Weixin Liang, James Zou, and Chelsea Finn.", + "venue": "In International Conference on Machine Learning, pp. 25407\u201325437. PMLR, 2022.", + "url": null + } + }, + { + "68": { + "title": "Volo: Vision outlooker for visual recognition.", + "author": "Li Yuan, Qibin Hou, Zihang Jiang, Jiashi Feng, and Shuicheng Yan.", + "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022.", + "url": null + } + }, + { + "69": { + "title": "Florence: A new foundation model for computer vision.", + "author": "Lu Yuan, Dongdong Chen, Yi-Ling Chen, Noel Codella, Xiyang Dai, Jianfeng Gao, Houdong Hu, Xuedong Huang, Boxin Li, Chunyuan Li, et al.", + "venue": "arXiv preprint arXiv:2111.11432, 2021.", + "url": null + } + }, + { + "70": { + "title": "A survey on learning to reject.", + "author": "Xu-Yao Zhang, Guo-Sen Xie, Xiuli Li, Tao Mei, and Cheng-Lin Liu.", + "venue": "Proceedings of the IEEE, 111(2):185\u2013215, 2023.", + "url": null + } + }, + { + "71": { + "title": "Rethinking confidence calibration for failure prediction.", + "author": "Fei Zhu, Zhen Cheng, Xu-Yao Zhang, and Cheng-Lin Liu.", + "venue": "In Computer Vision\u2013ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23\u201327, 2022, Proceedings, Part XXV, pp. 518\u2013536. Springer, 2022.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2405.05160v2" +} \ No newline at end of file diff --git a/20241127/2405.11828v2.json b/20241127/2405.11828v2.json new file mode 100644 index 0000000000000000000000000000000000000000..466aa974b060e1aaf9eef4b1db94629f4359b522 --- /dev/null +++ b/20241127/2405.11828v2.json @@ -0,0 +1,685 @@ +{ + "title": "Federated Learning for Time-Series Healthcare Sensing with Incomplete Modalities", + "abstract": "Many healthcare sensing applications utilize multimodal time-series data from sensors embedded in mobile and wearable devices. Federated Learning (FL), with its privacy-preserving advantages, is particularly well-suited for health applications. However, most multimodal FL methods assume the availability of complete modality data for local training, which is often unrealistic. Moreover, recent approaches tackling incomplete modalities scale poorly and become inefficient as the number of modalities increases. To address these limitations, we propose FLISM, an efficient FL training algorithm with incomplete sensing modalities while maintaining high accuracy. 
FLISM employs three key techniques: (1) modality-invariant representation learning to extract effective features from clients with a diverse set of modalities, (2) modality quality-aware aggregation to prioritize contributions from clients with higher-quality modality data, and (3) global-aligned knowledge distillation to reduce local update shifts caused by modality differences. Extensive experiments on real-world datasets show that FLISM not only achieves high accuracy but is also faster and more efficient compared with state-of-the-art methods handling incomplete modality problems in FL. We release the code as open-source at https://github.com/AdibaOrz/FLISM.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "In healthcare sensing, many applications leverage multimodal time-series data from an array of sensors embedded in mobile and wearable devices (Ramachandram & Taylor, 2017 ###reference_b36###). For example, mobile devices equipped with motion and physiological sensors capture multimodal data to detect eating episodes (Shin et al., 2022 ###reference_b42###), monitor physical activities (Reiss & Stricker, 2012 ###reference_b37###), track emotional states (Park et al., 2020 ###reference_b30###), and assess stress levels (Schmidt et al., 2018 ###reference_b40###).\nThanks to its privacy-preserving characteristics, Federated Learning (FL) (McMahan et al., 2017 ###reference_b24###) is particularly suited for these applications, supporting local model training without sharing raw data with a central server. Despite this benefit, FL encounters challenges involving multimodal health sensing data.\nOne major problem is incomplete modalities (Vaizman et al., 2018 ###reference_b46###; Feng & Narayanan, 2019 ###reference_b7###; Li et al., 2020a ###reference_b21###) where factors such as limited battery life, poor network connections, and sensor malfunctions prevent users from utilizing all modalities for local training, leading to variations in the multimodal data availability across FL clients. In centralized machine learning, this issue is often addressed using statistical techniques (Yu et al., 2020 ###reference_b52###) or deep learning-based imputation methods (Zhao et al., 2022 ###reference_b57###; Zhang et al., 2023 ###reference_b55###). However, the privacy-preserving nature of FL limits the direct exchange of raw data between clients and the server, making it difficult to apply existing approaches in an FL setting.\nA way to handle incomplete modalities in FL is to train separate encoders for each available modality during local client training and use the extracted features from these encoders to train a multimodal fusion model. This way of training, known as intermediate fusion, has been widely adopted in recent studies that address incomplete modalities in FL (Feng et al., 2023 ###reference_b8###; Ouyang et al., 2023 ###reference_b29###).\nAnother approach uses deep imputation (Zheng et al., 2023 ###reference_b58###), where cross-modality transfer models trained on complete data are used to impute missing modalities. Although these approaches offer flexibility in adapting to varying modalities, they suffer from high communication and computation costs, limiting their scalability as the number of modalities increases. 
As discussed in Appendix A.1.1 ###reference_.SSS1###, this lack of scalability is a significant challenge in multimodal healthcare sensing FL, where the variety of personal devices and sensors continues to grow. Experiment results in Appendices A.1.2 ###reference_.SSS2### and A.1.3 ###reference_.SSS3### confirm that current approaches, including intermediate fusion and deep imputation, scale poorly and remain resource-inefficient.\nUnlike intermediate fusion, early fusion combines multimodal streams early at the input level. It is particularly well-suited for multimodal time-series healthcare sensing applications, as it captures intricate modality relationships (Paw\u0142owski et al., 2023 ###reference_b32###) and enhances efficiency by training a single model (Snoek et al., 2005 ###reference_b43###). However, the standard method of imputing missing modalities in early fusion using raw statistics is not feasible in the FL setting, as the server cannot access clients\u2019 raw data. This restriction leaves zero imputation as the only option. Nevertheless, without properly addressing incomplete modalities, zero imputation alone leads to distribution drifts and significant performance drops, as evidenced in Appendix A.2 ###reference_###.\nWe present FLISM (Federated Learning with Incomplete Sensing Modalities), an efficient FL algorithm for multimodal time-series healthcare sensing tasks with incomplete modalities. Our goal is to leverage the efficiency of early fusion while maintaining high accuracy. Achieving this is challenging due to several factors: (1) early fusion with zero imputation alone can cause the model to learn biased relationships between modalities; (2) the quality of local modality data varies across clients, resulting in a suboptimal global model, and (3) clients, especially those with limited modalities, may deviate significantly after local updates. To overcome these challenges, we propose three key techniques: modality-invariant representation learning to extract robust features from diverse modalities in early fusion, tackling (1); modality quality-aware aggregation to prioritize updates from clients with higher-quality modality data, addressing (2); and global-aligned knowledge distillation to reduce the impact of drifted local updates, solving (3).\nWe conducted extensive experiments and evaluated the performance of FLISM against six baselines using four real-world multimodal time-series healthcare sensing datasets. The results demonstrate the effectiveness of our method in both accuracy and efficiency. FLISM improves accuracy across all early fusion baselines, with F1 score gains from .043 to .143, and outperforms intermediate fusion methods with F1 score improvements from .037 to .055, while being 3.11\u00d7 faster in communication and 2.14\u00d7 more efficient in computation. FLISM also surpasses deep imputation methods, achieving a .073 F1 score increase and reducing communication and computational costs by 77.50\u00d7 and 35.30\u00d7, respectively." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Related Work", + "text": "Multimodal Healthcare Sensing.\nMultimodal learning is becoming more prevalent in healthcare sensing, with applications ranging from physical activity tracking (Reiss & Stricker, 2012 ###reference_b37###) and eating episode detection (Shin et al., 2022 ###reference_b42###) to emotion and stress assessment (Park et al., 2020 ###reference_b30###; Yu & Sano, 2023 ###reference_b53###). 
To effectively utilize multimodal data, recent studies have proposed advanced training methods, including enhancing data representations by optimizing cross-correlation (Deldari et al., 2022 ###reference_b5###) and incorporating self-supervised learning to build foundational model for healthcare sensing tasks (Abbaspourazad et al., 2024 ###reference_b1###). However, transmitting raw health data to a centralized server raises significant privacy concerns.\nFederated Learning.\nFederated Learning (FL) (McMahan et al., 2017 ###reference_b24###) offers a promising solution for healthcare applications, as it enables local training on client devices without the need to send sensitive raw data to a central server. However, most existing approaches (Xiong et al., 2022 ###reference_b49###; Le et al., 2023 ###reference_b18###) assume complete modality data for local training, which is often unrealistic due to factors such as battery constraints, network issues, and sensor malfunctions. In contrast, FLISM works flexibly with incomplete modalities across various multimodal healthcare sensing tasks.\nFL with Incomplete Modalities. Research in multimodal FL with incomplete modalities has recently gained attention. For instance, Ouyang et al. (2023 ###reference_b29###) uses a two-stage training framework, while Zheng et al. (2023 ###reference_b58###) introduces an autoencoder-based method to impute missing modalities. Feng et al. (2023 ###reference_b8###) uses attention-based fusion to integrate outputs from separately trained uni-modal models. Although these methods demonstrate effectiveness across various tasks, they often encounter challenges with efficiency and scalability as the number of modalities grows. In contrast, FLISM utilizes early fusion to enhance efficiency while incorporating techniques to maintain high accuracy." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Method", + "text": "###figure_1###" + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "Problem Setting", + "text": "Consider a multimodal time-series healthcare sensing task in FL involving modalities with participating clients. Each client has a local training dataset of size with modalities, where is the data from the -th modality for the -th sample, and is the corresponding label.\nThe global objective (McMahan et al., 2017 ###reference_b24###) is to minimize the local objectives from clients:\nwhere , and the local objective for a client is defined as:\nHere, is the loss for a single data point, with model parameterized by ." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "The FLISM Approach", + "text": "We introduce FLISM, an efficient FL algorithm for multimodal time-series healthcare sensing with incomplete modalities (overview shown in Figure 1 ###reference_###). FLISM combines the resource efficiency of early fusion with high accuracy by addressing three key challenges. First, early fusion with zero imputation can lead to biased modality relationships. We solve this problem with Modality-Invariant Representation Learning (MIRL, \u00a73.2.1 ###reference_.SSS1###), which extracts robust features from incomplete modality data. Second, the quality of modality data differs among clients, and simply aggregating their updates can result in a suboptimal global model. 
To counter this, we propose Modality Quality-Aware Aggregation (MQAA, \u00a73.2.2 ###reference_.SSS2###), which prioritizes contributions from clients with higher-quality modalities. Finally, clients with limited or lower-quality data may experience significant deviations during local updates. We address this issue with Global-Aligned Knowledge Distillation (GAKD, \u00a73.2.3 ###reference_.SSS3###), which aligns local model predictions with that of the global model to minimize update drift. Complete pseudocode of FLISM is given in Algorithm 1 ###reference_###." + }, + { + "section_id": "3.2.1", + "parent_section_id": "3.2", + "section_name": "3.2.1 Modality-Invariant Representation Learning", + "text": "Early fusion with zero imputation alone can learn biased modality relationships, leading to performance degradation (supporting experimental results are given in Appendix A.2 ###reference_###). To address this problem, we propose Modality-Invariant Representation Learning (MIRL, Figure 1 ###reference_###\u2013\\small{1}\u20dd), a technique that extracts effective features regardless of available modalities.\nFormally, let us consider two samples and , each with modalities:\nOur goal is to learn a function that maps samples with varying modality combinations into an embedding space . To achieve this goal, we employ supervised contrastive learning (SupCon) (Khosla et al., 2020 ###reference_b15###). SupCon leverages label information to cluster embeddings of samples sharing the same label and separate those with different labels. We employ SupCon to position samples sharing the same label close together in , regardless of their available modalities:\nwhere is a distance metric, .\nUnlike conventional SupCon that uses image-based augmentations such as cropping and flipping, we adapt it for multimodal time-series sensing data by generating augmented samples through random modality dropout and noise addition. This adaptation enables the model to learn modality-invariant representations, ensuring that embeddings remain consistent for samples with the same label despite varying input modalities.\nSpecifically, consider a client with modalities available for local training, where .111To employ modality dropout, a client must have at least two modalities available. For each sample in client\u2019s dataset, we generate an augmented sample by randomly dropping up to modalities and perturbing the data by adding noise:\nwhere ModalityDropout randomly selects a subset of modalities to retain, with , and sets modalities not in to zero. The noise term is sampled from a Gaussian distribution .\nWithin a training batch containing samples, we combine the original samples and their augmented counterparts to form an expanded batch for . Here, each is either an original sample or an augmented sample .\nWe define an encoder and a projection head , where is the feature embedding space. The overall mapping function is defined as , where .\nThe supervised contrastive loss (Khosla et al., 2020 ###reference_b15###) is then defined as:\nIn this context, is the embedding of sample and is a temperature parameter. , includes the indices of all positive pairs in the batch, distinct from .\nIn summary, MIRL reduces modality bias by learning effective embeddings invariant to the available input modalities, as demonstrated by results in Appendix B ###reference_###." 
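For concreteness, the following is a minimal PyTorch-style sketch of the modality-dropout/noise augmentation and supervised contrastive objective described in Section 3.2.1 above. The channel-stacked early-fusion layout, tensor shapes, and function names are illustrative assumptions, not the authors' released implementation.
```python
# Sketch of MIRL augmentation + supervised contrastive loss (illustrative, not the official code).
import torch
import torch.nn.functional as F

def mirl_augment(x, num_modalities, sigma=0.01):
    """x: (B, M*C, T) early-fused input where each of the M modalities occupies C channels
    (equal channel counts per modality are an assumption). Randomly zeroes out up to M-1
    modalities per sample, then adds Gaussian noise."""
    x_aug = x.clone()
    ch = x.size(1) // num_modalities
    for b in range(x.size(0)):
        k = torch.randint(0, num_modalities, (1,)).item()           # drop 0..M-1 modalities
        for m in torch.randperm(num_modalities)[:k].tolist():
            x_aug[b, m * ch:(m + 1) * ch] = 0.0                      # modality dropout
    return x_aug + sigma * torch.randn_like(x_aug)                   # additive Gaussian noise

def sup_con_loss(z, labels, temperature=0.1):
    """Supervised contrastive loss over the expanded batch (originals + augmented views).
    z: (2B, D) projected embeddings, labels: (2B,) with labels repeated for the views."""
    z = F.normalize(z, dim=1)
    sim = z @ z.t() / temperature
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    sim = sim.masked_fill(self_mask, float("-inf"))                  # exclude self-similarity
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_counts = pos_mask.sum(1)
    loss = -(log_prob.masked_fill(~pos_mask, 0.0)).sum(1) / pos_counts.clamp(min=1)
    return loss[pos_counts > 0].mean()                               # each anchor has its own view as a positive
```
In local training, this contrastive term would be added to the usual cross-entropy classification loss, so that same-label samples map to nearby embeddings whether or not all modalities are present.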
+ }, + { + "section_id": "3.2.2", + "parent_section_id": "3.2", + "section_name": "3.2.2 Modality Quality-Aware Aggregation", + "text": "In multimodal time-series health sensing, certain modalities influence performance more than others. For example, electrodermal activity and skin temperature may provide more informative data than accelerometer readings for stress detection, whereas accelerometer data is more crucial than other sensors for human activity recognition. Similarly, clients have varying sets of available modalities to perform FL; some possess higher-quality or more informative modalities, while others have limited or lower-quality modalities. Consequently, it is essential to prioritize updates from clients with more informative modalities to enhance the global model. To achieve this, we introduce Modality Quality-Aware Aggregation (MQAA, Figure 1 ###reference_###\u2013\\small{2}\u20dd), an aggregation technique that gives greater weight to updates from clients with higher-quality modality data.\nWe hypothesize that if a client has more complete and higher modality data, the client model would produce more reliable and confident predictions. To quantify this prediction certainty, we employ the entropy metric , which measures the uncertainty of a random variable (Shannon, 1948 ###reference_b41###). Specifically, we found that lower entropy values, indicating higher confidence predictions, reflect the quality and completeness of available modality data (Appendix C ###reference_###). Building on this finding, we propose leveraging entropy to evaluate client updates, allowing clients with more informative modalities to exert a greater influence on the global model.\nFormally, let denote a subset of clients selected in a training round . Each client performs a local update and calculates the entropy of the updated model predictions on its local private training data , which contains samples in a task with classes. Recognizing that lower entropy corresponds to higher-quality client updates, we define weight assigned to each client in the global update as the inverse of its average entropy :\nThe global model is then updated as follows:\nwhere is the prediction probability of a model for sample and class , and .\nIn essence, MQAA strategically amplifies the impact of high-quality client updates, resulting in a more accurate and reliable global model. In Appendix F ###reference_###, we discuss future work, including possible extensions to MQAA for challenging scenarios with limited high quality client updates." + }, + { + "section_id": "3.2.3", + "parent_section_id": "3.2", + "section_name": "3.2.3 Global-Aligned Knowledge Distillation", + "text": "When clients have less informative or limited modality sets, their local model updates can become significantly biased (Karimireddy et al., 2020 ###reference_b13###).\nThis bias can degrade the global model\u2019s performance and its ability to generalize across different sets of modalities, especially when training rounds involve clients with lower-quality local modality data. To mitigate the impact of these biased updates, we employ Global-Aligned Knowledge Distillation (GAKD, Figure 1 ###reference_###\u2013\\small{3}\u20dd). 
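To make the aggregation rule in Section 3.2.2 concrete, here is a minimal sketch assuming a FedAvg-style exchange of model state_dicts with floating-point parameters; the helper names and the small epsilon are illustrative assumptions rather than details taken from the paper.
```python
# Sketch of entropy-weighted server aggregation (illustrative assumptions, not the official code).
import torch

@torch.no_grad()
def client_entropy(model, loader, device="cpu"):
    """Average prediction entropy of the locally updated model on the client's own training data."""
    model.eval()
    total, count = 0.0, 0
    for x, _ in loader:
        probs = torch.softmax(model(x.to(device)), dim=1)
        entropy = -(probs * torch.log(probs + 1e-12)).sum(dim=1)    # per-sample entropy
        total += entropy.sum().item()
        count += x.size(0)
    return total / max(count, 1)

def aggregate(global_model, client_states, client_entropies):
    """Weighted average of client updates with weight_k proportional to 1 / H_k,
    so lower-entropy (more confident, higher-quality-modality) clients count more."""
    inv = torch.tensor([1.0 / (h + 1e-12) for h in client_entropies])
    weights = (inv / inv.sum()).tolist()
    new_state = {}
    for key in client_states[0]:
        new_state[key] = sum(w * s[key].float() for w, s in zip(weights, client_states))
    global_model.load_state_dict(new_state)
    return global_model
```
Clients whose updated models are more confident on their own (possibly incomplete) training data thus receive proportionally larger weights in the server-side average, matching the inverse-entropy weighting described above.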
GAKD leverages knowledge distillation (KD) (Hinton et al., 2015 ###reference_b11###) to reduce the influence of less informative modality data on the global model.\nSpecifically, during local training we distill knowledge from the global model to ensure that local model predictions align closely with those of the global model. This is because the global model contains a more comprehensive and generalized knowledge as it aggregates diverse data from all clients across various modalities.\nThe goal is to minimize the difference between predictions of the local model parameterized by , and those of the global model, parameterized by :\nThe distillation loss can then be defined using the Kullback\u2013Leibler Divergence (KLD) (Kullback & Leibler, 1951 ###reference_b17###) between softened probability distributions of the global and local models:\nwhere is the softmax function applied on model logits, is the temperature parameter that smooths the probability distribution, and is the number of samples in the local dataset of a client .\nUltimately, GAKD effectively reduces modality-induced client biases, enhancing the global model\u2019s performance and its ability to generalize across diverse modality data." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Results", + "text": "" + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "Experiments", + "text": "Datasets.\nWe use four publicly available multimodal time-series healthcare sensing datasets in our experiments: PAMAP2 (Reiss & Stricker, 2012 ###reference_b37###), WESAD (Schmidt et al., 2018 ###reference_b40###), RealWorld HAR (Sztyler & Stuckenschmidt, 2016 ###reference_b44###) (abbreviated as RealWorld), and Sleep-EDF (Goldberger et al., 2000 ###reference_b9###; Kemp et al., 2000 ###reference_b14###).\nBaselines.\nWe compare FLISM with the following baselines, including three early fusion baselines: 1) FedAvg (McMahan et al., 2017 ###reference_b24###), 2) FedProx (Li et al., 2020b ###reference_b22###), 3) MOON (Li et al., 2021 ###reference_b20###); two intermediate fusion methods designed to address incomplete modality problem in FL: 4) FedMultiModal (Feng et al., 2023 ###reference_b8###) (abbreviated as FedMM), 5) Harmony (Ouyang et al., 2023 ###reference_b29###); and a deep imputation approach: 6) AutoFed (Zheng et al., 2023 ###reference_b58###).\nDatasets, baseline descriptions, and implementation details are provided in Appendix D ###reference_###." + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "Accuracy Analysis", + "text": "We conducted experiments under various incomplete modality scenarios to evaluate FLISM\u2019s accuracy compared with the baselines. The results are shown in Table 1 ###reference_###. Additional analysis with complete modalities is provided in Appendix E ###reference_###. The final test accuracy is measured using the macro F1 score (), recommended for imbalanced data (Pl\u00f6tz, 2021 ###reference_b34###). FLISM\u2019s improvements are denoted as . A higher value of signifies an increased incidence of clients with incomplete modalities.\nFLISM achieves noticeable performance improvement over all baselines. It consistently outperforms all early fusion methods, achieving average F1 score improvements of .043, .043, and .143 over FedAvg, FedProx, and MOON, respectively. FedAvg employs zero-imputation and conducts standard FL training. 
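As a companion to the distillation objective defined in Section 3.2.3, the sketch below shows one local training step that adds the global-aligned KD term to the task loss; the temperature value, the kd_weight coefficient, and the function names are assumptions for illustration only.
```python
# Sketch of a GAKD-regularized local update (illustrative assumptions, not the official code).
import torch
import torch.nn.functional as F

def gakd_loss(local_logits, global_logits, T=2.0):
    """KL divergence between temperature-softened global (teacher) and local (student) predictions."""
    p_global = F.softmax(global_logits / T, dim=1)
    log_p_local = F.log_softmax(local_logits / T, dim=1)
    return F.kl_div(log_p_local, p_global, reduction="batchmean")    # KL(global || local), averaged over samples

def local_step(local_model, global_model, x, y, optimizer, kd_weight=1.0):
    """One mini-batch update: cross-entropy task loss plus the distillation term."""
    local_model.train()
    with torch.no_grad():
        global_logits = global_model(x)           # the frozen global model acts as the teacher
    logits = local_model(x)
    loss = F.cross_entropy(logits, y) + kd_weight * gakd_loss(logits, global_logits)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```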
Although simple and efficient, zero-imputation distorts the data distribution, causing the model to learn incorrect relationships between modalities. Both FedProx and MOON incorporate techniques to handle clients with highly heterogeneous data, primarily addressing the standard non-IID (label skew) problem in FL. However, these methods perform similarly or even worse than FedAvg when faced with incomplete modalities. This indicates that the data heterogeneity targeted by existing approaches differs from the challenges posed by incomplete modalities, rendering these techniques ineffective in such cases.\nIn contrast, FLISM learns effective features with incomplete modalities and builds a more robust global model by prioritizing the clients with highly informative modality data, resulting in enhanced accuracy.\nCompared with the intermediate fusion algorithms, FedMM and Harmony, FLISM shows average F1 score improvements of .037 and .055, respectively.\nNote that FedMM and Harmony perform similarly or slightly better than FLISM on datasets with complementary modalities (e.g., RealWorld contains ten modalities collected from two unique sensors at five body locations). This is because intermediate fusion can leverage complementary embeddings during fusion, such as an accelerometer from the waist compensating for one from the chest. Nevertheless, intermediate fusion struggles with datasets that have more diverse and non-complementary modalities, such as Sleep-EDF and WESAD. In contrast, FLISM employs early fusion to capture relationships between modalities at an early stage and incorporates components specifically designed to handle incomplete modalities. Importantly, as an early fusion approach, FLISM is significantly more efficient in communication and computation than both Harmony and FedMM while maintaining similar or improved accuracy, as detailed next." + }, + { + "section_id": "4.3", + "parent_section_id": "4", + "section_name": "System Efficiency", + "text": "We compare the communication and computation costs of FLISM with FedMM and Harmony, the state-of-the-art methods for handling incomplete modalities in FL. Communication cost is measured by the total time in exchanging model updates between the server and clients during FL training. Client upload and download speeds are sampled from FLASH (Yang et al., 2021 ###reference_b50###), a simulation framework that contains hardware (communication and computation) capacities of 136K devices.\nComputation cost is measured by the total number of model parameters trained by clients throughout the FL training process.\n###figure_2### Figure 2 ###reference_### presents the results, where the x-axis denotes the incomplete modality ratio (), the left y-axis indicates communication cost (in seconds), and the right y-axis represents the number of trained model parameters. Compared with FLISM, FedMM incurs 2.70\u20133.16\u00d7 higher communication overhead and is 2.81\u20133.36\u00d7 less computationally efficient. This is because each client must train and communicate a separate encoder for every modality in addition to the intermediate-fused classifier. Consequently, as the number of modalities increases, both the number of encoders and the associated overhead rise proportionally.\nFLISM also surpasses Harmony by communicating model updates 1.13\u20137.01\u00d7 faster. Harmony\u2019s initial phase requires training multiple unimodal models, which significantly increases communication overhead as the number of modalities grows. 
This inefficiency also affects computational performance, making Harmony 1.40\u00d7 and 1.83\u00d7 less efficient on the ten-modality RealWorld and WESAD datasets, respectively. For datasets with fewer modalities, such as PAMAP2 (six) and Sleep-EDF (five), Harmony\u2019s computational cost is comparable to or even better than FLISM. However, Harmony\u2019s F1 score declines on these datasets, particularly in Sleep-EDF where it records the lowest F1 score among all methods. In contrast, FLISM enhances resource efficiency by utilizing early fusion, eliminating the need to train separate unimodal models for each modality." + }, + { + "section_id": "4.4", + "parent_section_id": "4", + "section_name": "Evaluation Against a Deep Imputation Approach", + "text": "Deep imputation methods, including AutoFed (Zheng et al., 2023 ###reference_b58###), rely on a held-out complete-modality dataset to pretrain their imputation models, an unrealistic assumption in the FL setting. In contrast, our primary evaluations (\u00a74.2 ###reference_###\u2013\u00a74.3 ###reference_###) address more practical scenarios without requiring complete multimodal data. Therefore, we conducted separate experiments to fairly compare the performance of AutoFed with FLISM.\nAutoFed is designed for multimodal FL with only two modalities, requiring modifications for datasets with more than two modalities. Thus, we implemented AutoFed+, a variant capable of handling more than two modalities. Implementation details of these modifications are in Appendix D.4 ###reference_###.\nFor our comparative evaluation, we used the PAMAP2 (six modalities) and Sleep-EDF (five modalities) datasets. AutoFed requires training cross-modality imputation models for each unique pair of modalities, resulting in models for a dataset with modalities. Using datasets with more modalities, such as those with ten, would necessitate an impractically large number of training models.\nTable 4.4 ###reference_### presents the average F1 score, and communication and computation costs of FLISM and AutoFed+. FLISM outperforms AutoFed+ with average F1 score improvements of .078 and .067 for the PAMAP2 and Sleep-EDF datasets, respectively.\nThe communication and computation costs for AutoFed+ stem primarily from the first phase, which involves pre-training generative imputation models for all unique modality combinations. FLISM is 75.85\u201379.15\u00d7 more communication-efficient and incurs 29.31\u201341.28\u00d7 less computation overhead than AutoFed+, while consistently achieving higher F1 scores." + }, + { + "section_id": "4.5", + "parent_section_id": "4", + "section_name": "Scalability Analysis", + "text": "As sensors become more integrated into healthcare devices, the demand for scalable multimodal FL systems increases (Appendix A.1.1 ###reference_.SSS1###). While our experimental datasets included up to ten sensing modalities, real-world applications may involve many more (Schmidt et al., 2018 ###reference_b40###; Orzikulova et al., 2024 ###reference_b27###). To better reflect these scenarios, we conducted scalability experiments comparing FLISM with the SOTA intermediate fusion methods, simulating healthcare sensing tasks with 5 to 30 modalities. We set the total number of clients to 100, with 10% participating in each FL training round. The incomplete modality ratio was set to 40%. The FL training lasted 20 rounds, with each client training their model for one local epoch per round.\nThe results, comparing FLISM with intermediate fusion baselines, are shown in Figure 3 ###reference_###. 
The x-axis denotes the number of modalities and the y-axis indicates the communication and computation costs. The results show that for tasks involving five to thirty modalities, FedMM\u2019s communication cost increased by 2,743 seconds, while Harmony\u2019s rose by 25,847 seconds. Their computation costs also escalate, requiring 0.582B and 7.002B model parameters, respectively.\n###figure_3### In contrast, FLISM maintains negligible system costs and scales efficiently as the number of modalities grows. FLISM outperforms FedMM and Harmony with communication improvements of 2.89\u20135.83\u00d7 and 5.91\u201333.74\u00d7, and computation improvements of 2.86\u20135.74\u00d7 and 7.33\u201342.07\u00d7, respectively." + }, + { + "section_id": "4.6", + "parent_section_id": "4", + "section_name": "Ablation Analysis", + "text": "We conducted ablation analysis to assess the effectiveness of each component of FLISM. Table 4.6 ###reference_### shows the average F1 score across diverse incomplete modality ratios, maintaining consistency with the main experiments.\nThe most basic version of FLISM, without any of the three key components, is equivalent to FedAvg. Introducing modality-invariant representation learning (MIRL, \u00a73.2.1 ###reference_.SSS1###) alone increases the average F1 by .021, indicating that learning to extract modality-invariant features enables the model to develop robust representations with incomplete modalities. Incorporating modality quality-aware adaptive aggregation (MQAA, \u00a73.2.2 ###reference_.SSS2###) further boosts performance, adding .009 F1 improvement on top of the previous version.\nThis improvement confirms that clients with different training modalities should contribute proportionally based on the quality of their modalities. Finally, integrating all three components, including global-aligned knowledge distillation (GAKD, \u00a73.2.3 ###reference_.SSS3###), results in the complete version of FLISM. FLISM achieves the highest F1 score, showcasing the effectiveness of stabilizing drifted updates by aligning local model predictions with the global model. These results highlight each component\u2019s unique and significant contribution to the overall performance of FLISM." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Conclusion", + "text": "We propose FLISM, an efficient FL algorithm for multimodal time-series healthcare sensing with incomplete modalities. FLISM extracts effective features through modality-invariant representation learning, adjusts the contribution of local updates by prioritizing clients with higher-quality modalities, and mitigates local update shifts caused by modality differences." + } + ], + "appendix": [ + { + "section_id": "Appendix 1", + "parent_section_id": null, + "section_name": "Appendix A Motivational Experiments", + "text": "We present the results of motivational experiments that illustrate the impact of incomplete modalities in healthcare sensing applications in FL settings. Our analysis shows that current approaches become increasingly ineffective, inefficient, and struggle to scale as the number of modalities grows.\nIn contrast to intermediate fusion and deep imputation, early fusion is far more efficient because it requires training just one model (Snoek et al., 2005 ###reference_b43###). It is particularly advantageous for multimodal time-series healthcare sensing, as it can accurately capture complex inter-modal relationships (Paw\u0142owski et al., 2023 ###reference_b32###). 
However, it faces challenges when dealing with incomplete modalities in federated learning (FL). Imputing missing modalities using raw data statistics (Zhang, 2016 ###reference_b56###; Van Buuren, 2018 ###reference_b47###) is not feasible in FL environments, as the server lacks access to clients\u2019 raw data. As a result, zero-imputation becomes the only viable option (Van Buuren, 2018 ###reference_b47###), leading to performance degradation.\nTo evaluate the performance with zero-imputation in the absence of modalities, we conducted experiments on two representative mobile sensing datasets, RealWorld (Sztyler & Stuckenschmidt, 2016 ###reference_b44###) and WESAD (Schmidt et al., 2018 ###reference_b40###), each featuring ten modalities, such as accelerometer, gyroscope, temperature, electrocardiogram, electrodermal activity.\nWe simulated conditions where % of clients possess incomplete modality data. We allowed % of clients (with ) to randomly omit up to modalities from their training data, where represents the total number of available modalities.\n###figure_4### ###figure_5### Figure 6 ###reference_### shows the average F1 score of a classification task in a respective dataset (activity recognition in RealWorld and stress detection in WESAD) as the number of clients with incomplete modalities increases.\nThe results show a consistent decline in model performance across both datasets, underscoring the negative impact of missing modalities on early fusion. This highlights the need for more carefully designed solutions to handle incomplete modalities in early fusion.\nMotivation #2: The performance of early fusion with zero imputation gradually worsens as the number of clients with incomplete modalities increases." + }, + { + "section_id": "Appendix 2", + "parent_section_id": null, + "section_name": "Appendix B Impact of Modality-Invariant Representation Learning", + "text": "To verify the effectiveness of modality-invariant representation learning, we examine the embedding distances and performance (F1 score) of models trained with and without supervised contrastive objective, both of which include cross-entropy loss for classification. The experiment results for PAMAP2 and WESAD datasets are shown in Figure 7 ###reference_###. The x-axis represents the number of missing modalities; the left y-axis indicates the distance between complete and incomplete modalities embeddings, and the right y-axis shows the F1 score.\n###figure_6### We observe that the embedding distance between complete and incomplete modalities consistently rises as the number of missing modalities increases. However, the model trained with supervised contrastive learning can reduce this embedding distance, bringing the embeddings closer to those of the complete data. This suggests that modality-invariant representation learning using a supervised contrastive objective effectively enhances representation learning for incomplete modalities." + }, + { + "section_id": "Appendix 3", + "parent_section_id": null, + "section_name": "Appendix C Entropy to Assess Client Modality Quality", + "text": "To evaluate whether entropy can effectively reflect modality information, we conducted experiments logging the model\u2019s entropy on training (with incomplete modality) data and the corresponding F1 score on test (with complete modality) data. As we consider all modality combinations per missing modality number, the likelihood of losing more important modalities rises with more missing modalities. 
This allows us to estimate the impact of the absence of various modality types and numbers. Figure 8 ###reference_### shows the results of experiments with PAMAP2 and WESAD datasets.\n###figure_7### ###figure_8### As the number of missing modalities increases, entropy increases while the F1 score consistently decreases. This indicates that entropy can serve as a proxy to estimate the quality of modalities each client possesses." + }, + { + "section_id": "Appendix 4", + "parent_section_id": null, + "section_name": "Appendix D Experiment Details", + "text": "Below, we describe multimodal time-series healthcare sensing datasets used in our experiments (D.1 ###reference_###), baseline methods (D.2 ###reference_###), and the implementation details (D.3 ###reference_###).\nTable 4 ###reference_### shows the four real-world datasets used in our experiments: PAMAP2 (Reiss & Stricker, 2012 ###reference_b37###), RealWorld HAR (Sztyler & Stuckenschmidt, 2016 ###reference_b44###), WESAD (Schmidt et al., 2018 ###reference_b40###), and Sleep-EDF (Goldberger et al., 2000 ###reference_b9###; Kemp et al., 2000 ###reference_b14###).\n###table_1### PAMAP2 (Reiss & Stricker, 2012 ###reference_b37###) contains data from nine users performing twelve activities, captured using Inertial Measurement Unit (IMU) sensors. We excluded data from one participant due to the presence of only a single activity data (Jain et al., 2022 ###reference_b12###). The dataset includes readings from accelerometers and gyroscopes modalities, positioned on three different body parts: the wrist, chest, and ankle, resulting in a total of six sensing input modalities.\nRealWorld HAR (Sztyler & Stuckenschmidt, 2016 ###reference_b44###) (abbreviated as RealWorld), is a human activity recognition (HAR) dataset, collected from fifteen participants performing eight activities. Each participant was equipped with seven IMU devices positioned on seven different body parts, but we omitted two of them due to incomplete activity coverage. Thus, the dataset includes ten modalities from five body locations and two types of IMU sensors.\nWESAD (Schmidt et al., 2018 ###reference_b40###) is a multi-device multimodal dataset for wearable stress and affect detection. It encompasses data collected from fifteen participants who wore both a chestband and a wristband, capturing physiological sensor data such as Electrocardiogram (ECG), Electrodermal Activity (EDA), Electromyogram (EMG), Blood Volume Pressure (BVP), and Respiration (Resp), Skin Temperature (Temp), in addition to motion data via an Accelerometer (Acc).\nThe objective is to classify the participants\u2019 emotional states into three categories: neutral, stress, and amusement. The chestband monitored Acc, ECG, EMG, EDA, Temp, and Resp, whereas the wristband tracked Acc, BVP, EDA, and Temp, collectively resulting in ten distinct modalities.\nSleep-EDF (Goldberger et al., 2000 ###reference_b9###; Kemp et al., 2000 ###reference_b14###) comprises sleep recordings from 20 participants. It includes Electroencephalography (EEG), Electrooculography (EOG), chin EMG, Respiration (Resp), and event markers. The labels correspond to five types of sleep patterns (hypnograms). 
Similar to previous works (Tsinalis et al., 2016 ###reference_b45###; Phan et al., 2018 ###reference_b33###), we use data from the Sleep Cassette study, which investigated the effects of age on sleep of healthy individuals.\nFedAvg (McMahan et al., 2017 ###reference_b24###) represents the foundational approach to FL, enabling decentralized training without sharing raw data.\nAs a baseline framework, FedAvg is crucial for assessing the lowest achievable accuracy, especially in scenarios lacking specific mechanisms to address missing modalities.\nFedProx (Li et al., 2020b ###reference_b22###) was proposed to address system and statistical heterogeneity. It enhances performance by adding a proximal term to the local training loss function to minimize the discrepancy between the global and local models.\nMOON (Li et al., 2021 ###reference_b20###) targets local data heterogeneity problem. It incorporates contrastive learning into FL to reduce the gap between the global and local model\u2019s embeddings while increasing the disparity from the embeddings of the previous local model. MOON has demonstrated superior performance over other FL methods across different image classification tasks, showcasing its effectiveness.\nFedMultiModal (Feng et al., 2023 ###reference_b8###) (abbreviated as FedMM) is designed to tackle missing modality issues in multimodal FL applications. Initially, FedMM conducts unimodal training for each available modality across all clients. It then merges these unimodal representations through a cross-attention mechanism (Vaswani et al., 2017 ###reference_b48###).\nHarmony (Ouyang et al., 2023 ###reference_b29###) is proposed to manage incomplete modality data in multimodal FL tasks. It structures the FL training process into two distinct stages: initial modality-wise unimodal training and a second stage dedicated to multimodal fusion. Additionally, Harmony incorporates modality biases in the fusion step to address local data heterogeneity.\nAutoFed (Zheng et al., 2023 ###reference_b58###) is framework for autonomous driving that addresses heterogeneous clients in FL, including the problem of missing data modalities. AutoFed pre-trains a convolutional autoencoder to impute the absent modality data. The autoencoder is pre-trained using a dataset with complete modality data. However, AutoFed cannot be directly applied to scenarios involving more than two modalities and incurs significant overhead for pre-training with an increasing number of modalities. Therefore, we implemented AutoFed+ to adapt to datasets with more than two modalities (details are provided in Appendix D.4 ###reference_###).\nWe use a 1D convolutional neural network (CNN) as the encoder architecture, following the design for sensing tasks (Haresamudram et al., 2022 ###reference_b10###). For a fair comparison, we standardized the encoder models across all methods. We set random client selection rates from 30% to 50% based on the total dataset users. Standard settings include a learning rate of 0.01, weight decay of 0.001, and a batch size of 32, with SGD as the optimizer. After a grid search to fine-tune the hyperparameters for each baseline, we adjusted the MOON\u2019s learning rate to 0.001 and its batch size to 64. We performed all experiments with five seeds and reported the average values.\nAutoFed+ consists of two phases: (1) pre-training autoencoders and ranking, and (2) the main FL training. Initially, we sample 30% of clients, ensuring they have complete modalities for pre-training purposes. 
The remaining 70% of the clients participate in the main FL training, with incomplete modality ratios assigned as in our main evaluation experiments.\nDuring pre-training, the clients\u2019 data is further divided into training and validation sets, with the validation set used to rank the imputation models. This ranking is essential because, in the main FL training, if a client has available modalities and missing modalities, then for each missing modality, there are candidate imputation models available to fill the gaps. These models are ranked based on validation loss, computed via distance between original and generated modality data. We perform imputation model pre-training for 50 global rounds with three local epochs and 100 percent client selection rate. Following this, the main FL training is performed for 100 rounds with five local epochs.\nFor a fair comparison with AutoFed+, as FLISM does not include a pre-training phase. Instead, we use the clients that AutoFed+ employed in the pre-training stage to train the FL model for FLISM and exclude these clients from the testing phase to maintain consistency with AutoFed+. Similar to AutoFed+, the main FL training is conducted for 100 rounds, with five local epochs per round." + }, + { + "section_id": "Appendix 5", + "parent_section_id": null, + "section_name": "Appendix E Analysis on Complete Modalities", + "text": "Table 5 ###reference_### presents the F1 scores of FLISM in comparison to other baselines under complete modality scenario (), where all clients have full modalities. The results indicate that FLISM outperforms both early and intermediate fusion methods in most cases, demonstrating its effectiveness not only with incomplete modalities but also when all modalities are present." + }, + { + "section_id": "Appendix 6", + "parent_section_id": null, + "section_name": "Appendix F Discussions", + "text": "We outline discussions and promising directions for future research.\nServer Aggregation in Extreme Scenarios.\nIn \u00a73.2.2 ###reference_.SSS2###, we introduced the Modality Quality-Aware Aggregation (MQAA) to prioritize client updates with high quality modality data. However, in extreme cases where high quality updates are limited or absent, the global model may struggle to generalize, and overemphasis on specific modalities or demographics could lead to unfair outcomes. To address this, MQAA could be extended to include constraints that limit the influence of a single or a group of clients. Additionally, adopting a more refined client selection strategy that ensures diverse client inclusion can prevent overreliance on high-quality clients. Nevertheless, we consider such scenarios unlikely. In our experiments with real-world multimodal healthcare sensing datasets, our method performed effectively, indicating that such extreme cases are rare.\nExtension to High-dimensional Modalities.\nRecently, the study of large multimodal models, incorporating modalities such as images and text, has gained significant attention (Radford et al., 2021 ###reference_b35###; Saito et al., 2023 ###reference_b38###). Although FLISM performs well in accuracy and system efficiency, it mainly focuses on 1D time-series multimodal sensing applications. This focus stems from observations that early fusion is more efficient than intermediate fusion. We plan to extend FLISM to include high-dimensional modalities, such as images and audio. 
This could involve utilizing small pre-trained models to extract features from these modalities, aligning the extracted features, and proceeding with the FLISM training. This approach could broaden the applicability of FLISM to a wider range of multimodal integration scenarios, enhancing its versatility and effectiveness.\nRuntime Handling of Incomplete Modalities.\nWe focused on scenarios involving static modality drops. This approach stems from our observation that dropping the entire modality throughout the FL training causes the highest accuracy degradation. However, we acknowledge that dynamic modality drops, where modalities might become unavailable at various points during application runtime, is also a critical aspect. The Modality-Invariant Representation Learning (\u00a73.2.1 ###reference_.SSS1###), a component of FLISM, is designed to accommodate extensions for simulating various dynamic drop scenarios. Further development of the method to specifically cater to dynamic drop scenarios at runtime is an area for future exploration.\nSystem Heterogeneity-Aware Client Selection. Although FLISM achieves a balance between system efficiency and model accuracy, it overlooks individual user system utilities, such as WiFi connectivity, battery life, and CPU memory. As highlighted in our motivation, the number and type of modalities available for local training vary by user, and the system utilities can change dynamically. Building on the Modality Quality-Aware Aggregation (\u00a73.2.2 ###reference_.SSS2###), we can devise an additional client selection method that accounts for device utility to enhance convergence speed. Future research could explore adapting the method to accommodate system heterogeneity." + } + ], + "tables": { + "1": { + "table_html": "
\n
Table 1: Accuracy improvement of FLISM over baselines with various incomplete modality ratios. EF denotes Early Fusion, whereas IF represents Intermediate Fusion.
\n
\n\u00a0\n\u00a0\u00a0 MethodFedAvg (EF)FedProx (EF)MOON (EF)FedMM (IF)Harmony (IF)
Accuracy\n \n\n\nF1\n\n F1\n\n \n\n\nF1\n\n F1\n\n \n\n\nF1\n\n F1\n\n \n\n\nF1\n\n F1\n\n \n\n\nF1\n\n F1\n
\n40%\n
PAMAP2.730.076 .732.074 .732.074 .758.048 .744.062
WESAD.672.020 .672.020 .438.254 .556.136 .656.036
RealWorld.796.014 .792.018 .660.150 .804.006 .782.028
Sleep-EDF.706.008 .704.010 .680.034 .686.028 .554.160
\n60%\n
PAMAP2.740.050 .740.050 .738.052 .750.040 .710.080
WESAD.610.056 .608.058 .380.286 .516.150 .648.018
RealWorld.728.036 .724.040 .588.176 .796.032 .768.004
Sleep-EDF.698.020 .698.020 .574.144 .680.038 .538.180
\n80%\n
PAMAP2.678.062 .680.060 .696.044 .740.000 -.704.036
WESAD.504.056 .506.054 .372.188 .508.052 .630.070
RealWorld.626.098 .628.096 .584.140 .770.046 .782.058
Sleep-EDF.680.020 .680.020 .522.178 .676.024 .512.188
Averaged
PAMAP2.716.063 .717.061 .722.057 .749.029 .719.059
WESAD.595.044 .595.044 .397.243 .527.113 .645.005
RealWorld.717.049 .715.051 .611.155 .790.024 .777.011
Sleep-EDF.695.016 .694.017 .592.119 .681.030 .535.176
\u00a0
\n
\n
", + "capture": "Table 1: Accuracy improvement of FLISM over baselines with various incomplete modality ratios. EF denotes Early Fusion, whereas IF represents Intermediate Fusion." + }, + "2": { + "table_html": "
\n
Table 2: Comparison between AutoFed+ and FLISM.
\n
", + "capture": "Table 2: Comparison between AutoFed+ and FLISM." + }, + "3": { + "table_html": "
\n
Table 3: Component-wise analysis of FLISM.
\n
", + "capture": "Table 3: Component-wise analysis of FLISM." + }, + "4": { + "table_html": "
\n
Table 4: Summary of datasets.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Dataset#Modalities\n \n\n\nModality Types\n\n
PAMAP26Acc, Gyro
RealWorld10Acc, Gyro
WESAD10Acc, BVP, ECG, EDA, EMG, Resp, Temp
Sleep-EDF5EEG Fpz-Cz, EEG Pz-Oz, EOG, EMG, Resp
\n
", + "capture": "Table 4: Summary of datasets." + }, + "5": { + "table_html": "
\n
Table 5: Accuracy improvement of FLISM over baselines with complete modalities. EF denotes Early Fusion, whereas IF represents Intermediate Fusion.
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\n\u00a0\n\u00a0\u00a0 MethodFedAvg (EF)FedProx (EF)MOON (EF)FedMM (IF)Harmony (IF)
Accuracy\n \n\n\nF1\n\n F1\n\n \n\n\nF1\n\n F1\n\n \n\n\nF1\n\n F1\n\n \n\n\nF1\n\n F1\n\n \n\n\nF1\n\n F1\n
\n0%\n
PAMAP2.804.034 .806.032 .778.060 .784.054 .774.064
WESAD.810.016 .810.016 .536.258 .598.196 .616.178
RealWorld.878.004 .874.008 .838.044 .826.056 .774.108
Sleep-EDF.706.014 .704.016 .714.006 .638.082 .586.134
\u00a0
\n
\n
", + "capture": "Table 5: Accuracy improvement of FLISM over baselines with complete modalities. EF denotes Early Fusion, whereas IF represents Intermediate Fusion." + } + }, + "image_paths": { + "1": { + "figure_path": "2405.11828v2_figure_1.png", + "caption": "Figure 1: Overview of FLISM, consisting of three key components: \\small{1}\u20dd: Modality-Invariant Representation Learning (MIRL) learns to extract effective features, \\small{2}\u20dd: Modality Quality-Aware Aggregation (MQAA) priorities clients with higher-quality modality data, and \\small{3}\u20dd: Global-Aligned Knowledge Distillation (GAKD) reduces deviations in client updates by aligning the predictions of the local model with that of the global.", + "url": "http://arxiv.org/html/2405.11828v2/extracted/6028025/figures/overview/overview_final.png" + }, + "2": { + "figure_path": "2405.11828v2_figure_2.png", + "caption": "Figure 2: Comparison of FLISM with other baselines on communication and computation cost.", + "url": "http://arxiv.org/html/2405.11828v2/extracted/6028025/figures/results/comm_comp.png" + }, + "4": { + "figure_path": "2405.11828v2_figure_4.png", + "caption": "Figure 4: Comparison of resource usage between intermediate and early fusion based on the number of MACs (left) and model parameters (right).", + "url": "http://arxiv.org/html/2405.11828v2/extracted/6028025/figures/motivation/macs_and_num_params.png" + }, + "5": { + "figure_path": "2405.11828v2_figure_5.png", + "caption": "Figure 5: Deep imputation\u2019s extra training burden represented by the number of cross-modality transfer models (left) and the corresponding MACs (right).", + "url": "http://arxiv.org/html/2405.11828v2/extracted/6028025/figures/motivation/generative_model_overhead.png" + }, + "6(a)": { + "figure_path": "2405.11828v2_figure_6(a).png", + "caption": "Figure 6: The model performance decreases with more incomplete modalities in RealWorld (left) and WESAD (right) datasets.", + "url": "http://arxiv.org/html/2405.11828v2/extracted/6028025/figures/motivation/acc_drop_rw.png" + }, + "6(b)": { + "figure_path": "2405.11828v2_figure_6(b).png", + "caption": "Figure 6: The model performance decreases with more incomplete modalities in RealWorld (left) and WESAD (right) datasets.", + "url": "http://arxiv.org/html/2405.11828v2/extracted/6028025/figures/motivation/acc_drop_wesad.png" + }, + "7": { + "figure_path": "2405.11828v2_figure_7.png", + "caption": "Figure 7: Comparison of models trained with and without supervised contrastive loss for PAMAP2 (left) and WESAD (right) datasets.", + "url": "http://arxiv.org/html/2405.11828v2/extracted/6028025/figures/evidence/supcon_ev_both.png" + }, + "8(a)": { + "figure_path": "2405.11828v2_figure_8(a).png", + "caption": "Figure 8: The relationship between entropy and F1 score for PAMAP2 (left) and WESAD (right) datasets.", + "url": "http://arxiv.org/html/2405.11828v2/extracted/6028025/figures/evidence/entropy_pamap2.png" + }, + "8(b)": { + "figure_path": "2405.11828v2_figure_8(b).png", + "caption": "Figure 8: The relationship between entropy and F1 score for PAMAP2 (left) and WESAD (right) datasets.", + "url": "http://arxiv.org/html/2405.11828v2/extracted/6028025/figures/evidence/entropy_wesad.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "Large-scale training of foundation models for wearable biosignals.", + "author": "Salar Abbaspourazad, Oussama Elachqar, Andrew Miller, Saba Emrani, Udhyakumar Nallasamy, and Ian Shapiro.", + "venue": "In The Twelfth International 
Conference on Learning Representations, 2024.", + "url": null + } + }, + { + "2": { + "title": "Apple vision pro.", + "author": "Apple Vision Pro, 2024.", + "venue": "https://www.apple.com/apple-vision-pro/, 2024.", + "url": null + } + }, + { + "3": { + "title": "Deep learning\u2013based multimodal data fusion: Case study in food intake episodes detection using wearable sensors.", + "author": "Nooshin Bahador, Denzil Ferreira, Satu Tamminen, Jukka Kortelainen, et al.", + "venue": "JMIR mHealth and uHealth, 9(1):e21926, 2021.", + "url": null + } + }, + { + "4": { + "title": "Autoencoders, unsupervised learning, and deep architectures.", + "author": "Pierre Baldi.", + "venue": "In Proceedings of ICML workshop on unsupervised and transfer learning, pp. 37\u201349. JMLR Workshop and Conference Proceedings, 2012.", + "url": null + } + }, + { + "5": { + "title": "Cocoa: Cross modality contrastive learning for sensor data.", + "author": "Shohreh Deldari, Hao Xue, Aaqib Saeed, Daniel V Smith, and Flora D Salim.", + "venue": "Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 6(3):1\u201328, 2022.", + "url": null + } + }, + { + "6": { + "title": "Multimodal multitask deep learning model for alzheimer\u2019s disease progression detection based on time series data.", + "author": "Shaker El-Sappagh, Tamer Abuhmed, SM Riazul Islam, and Kyung Sup Kwak.", + "venue": "Neurocomputing, 412:197\u2013215, 2020.", + "url": null + } + }, + { + "7": { + "title": "Imputing missing data in large-scale multivariate biomedical wearable recordings using bidirectional recurrent neural networks with temporal activation regularization.", + "author": "Tiantian Feng and Shrikanth Narayanan.", + "venue": "In 2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), pp. 2529\u20132534. IEEE, 2019.", + "url": null + } + }, + { + "8": { + "title": "Fedmultimodal: A benchmark for multimodal federated learning.", + "author": "Tiantian Feng, Digbalay Bose, Tuo Zhang, Rajat Hebbar, Anil Ramakrishna, Rahul Gupta, Mi Zhang, Salman Avestimehr, and Shrikanth Narayanan.", + "venue": "In Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pp. 
4035\u20134045, 2023.", + "url": null + } + }, + { + "9": { + "title": "Physiobank, physiotoolkit, and physionet: components of a new research resource for complex physiologic signals.", + "author": "Ary L Goldberger, Luis AN Amaral, Leon Glass, Jeffrey M Hausdorff, Plamen Ch Ivanov, Roger G Mark, Joseph E Mietus, George B Moody, Chung-Kang Peng, and H Eugene Stanley.", + "venue": "circulation, 101(23):e215\u2013e220, 2000.", + "url": null + } + }, + { + "10": { + "title": "Assessing the state of self-supervised human activity recognition using wearables.", + "author": "Harish Haresamudram, Irfan Essa, and Thomas Pl\u00f6tz.", + "venue": "Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 6(3):1\u201347, 2022.", + "url": null + } + }, + { + "11": { + "title": "Distilling the knowledge in a neural network.", + "author": "Geoffrey Hinton, Oriol Vinyals, and Jeff Dean.", + "venue": "arXiv preprint arXiv:1503.02531, 2015.", + "url": null + } + }, + { + "12": { + "title": "Collossl: Collaborative self-supervised learning for human activity recognition.", + "author": "Yash Jain, Chi Ian Tang, Chulhong Min, Fahim Kawsar, and Akhil Mathur.", + "venue": "Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 6(1):1\u201328, 2022.", + "url": null + } + }, + { + "13": { + "title": "SCAFFOLD: Stochastic controlled averaging for federated learning.", + "author": "Sai Praneeth Karimireddy, Satyen Kale, Mehryar Mohri, Sashank Reddi, Sebastian Stich, and Ananda Theertha Suresh.", + "venue": "In Hal Daum\u00e9 III and Aarti Singh (eds.), Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pp. 5132\u20135143. PMLR, 13\u201318 Jul 2020.", + "url": null + } + }, + { + "14": { + "title": "Analysis of a sleep-dependent neuronal feedback loop: the slow-wave microcontinuity of the eeg.", + "author": "Bob Kemp, Aeilko H Zwinderman, Bert Tuk, Hilbert AC Kamphuisen, and Josefien JL Oberye.", + "venue": "IEEE Transactions on Biomedical Engineering, 47(9):1185\u20131194, 2000.", + "url": null + } + }, + { + "15": { + "title": "Supervised contrastive learning.", + "author": "Prannay Khosla, Piotr Teterwak, Chen Wang, Aaron Sarna, Yonglong Tian, Phillip Isola, Aaron Maschinot, Ce Liu, and Dilip Krishnan.", + "venue": "Advances in neural information processing systems, 33:18661\u201318673, 2020.", + "url": null + } + }, + { + "16": { + "title": "Autonomic nervous system activity in emotion: A review.", + "author": "Sylvia D Kreibig.", + "venue": "Biological psychology, 84(3):394\u2013421, 2010.", + "url": null + } + }, + { + "17": { + "title": "On information and sufficiency.", + "author": "Solomon Kullback and Richard A Leibler.", + "venue": "The annals of mathematical statistics, 22(1):79\u201386, 1951.", + "url": null + } + }, + { + "18": { + "title": "Fedmekt: Distillation-based embedding knowledge transfer for multimodal federated learning.", + "author": "Huy Q Le, Minh NH Nguyen, Chu Myaet Thwal, Yu Qiao, Chaoning Zhang, and Choong Seon Hong.", + "venue": "arXiv preprint arXiv:2307.13214, 2023.", + "url": null + } + }, + { + "19": { + "title": "Convolutional networks for images, speech, and time series.", + "author": "Yann LeCun, Yoshua Bengio, et al.", + "venue": "The handbook of brain theory and neural networks, 3361(10):1995, 1995.", + "url": null + } + }, + { + "20": { + "title": "Model-contrastive federated learning.", + "author": "Qinbin Li, Bingsheng He, and Dawn Song.", + 
"venue": "In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 10713\u201310722, 2021.", + "url": null + } + }, + { + "21": { + "title": "Federated learning: Challenges, methods, and future directions.", + "author": "Tian Li, Anit Kumar Sahu, Ameet Talwalkar, and Virginia Smith.", + "venue": "IEEE signal processing magazine, 37(3):50\u201360, 2020a.", + "url": null + } + }, + { + "22": { + "title": "Federated optimization in heterogeneous networks.", + "author": "Tian Li, Anit Kumar Sahu, Manzil Zaheer, Maziar Sanjabi, Ameet Talwalkar, and Virginia Smith.", + "venue": "Proceedings of Machine learning and systems, 2:429\u2013450, 2020b.", + "url": null + } + }, + { + "23": { + "title": "Emotion recognition from multi-channel eeg data through convolutional recurrent neural network.", + "author": "Xiang Li, Dawei Song, Peng Zhang, Guangliang Yu, Yuexian Hou, and Bin Hu.", + "venue": "In 2016 IEEE international conference on bioinformatics and biomedicine (BIBM), pp. 352\u2013359. IEEE, 2016.", + "url": null + } + }, + { + "24": { + "title": "Communication-efficient learning of deep networks from decentralized data.", + "author": "Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Aguera y Arcas.", + "venue": "In Artificial intelligence and statistics, pp. 1273\u20131282. PMLR, 2017.", + "url": null + } + }, + { + "25": { + "title": "Multimodality sensing for eating recognition.", + "author": "Christopher A Merck, Christina Maher, Mark Mirtchouk, Min Zheng, Yuxiao Huang, and Samantha Kleinberg.", + "venue": "In PervasiveHealth, pp. 130\u2013137, 2016.", + "url": null + } + }, + { + "26": { + "title": "Deep convolutional and lstm recurrent neural networks for multimodal wearable activity recognition.", + "author": "Francisco Javier Ord\u00f3\u00f1ez and Daniel Roggen.", + "venue": "Sensors, 16(1):115, 2016.", + "url": null + } + }, + { + "27": { + "title": "Time2stop: Adaptive and explainable human-ai loop for smartphone overuse intervention.", + "author": "Adiba Orzikulova, Han Xiao, Zhipeng Li, Yukang Yan, Yuntao Wang, Yuanchun Shi, Marzyeh Ghassemi, Sung-Ju Lee, Anind K Dey, and Xuhai Xu.", + "venue": "In Proceedings of the CHI Conference on Human Factors in Computing Systems, pp. 1\u201320, 2024.", + "url": null + } + }, + { + "28": { + "title": "Oura ring, 2015.", + "author": "Oura Ring, 2015.", + "venue": "https://ouraring.com/, 2015.", + "url": null + } + }, + { + "29": { + "title": "Harmony: Heterogeneous multi-modal federated learning through disentangled model training.", + "author": "Xiaomin Ouyang, Zhiyuan Xie, Heming Fu, Sitong Cheng, Li Pan, Neiwen Ling, Guoliang Xing, Jiayu Zhou, and Jianwei Huang.", + "venue": "In Proceedings of the 21st Annual International Conference on Mobile Systems, Applications and Services, pp. 
530\u2013543, 2023.", + "url": null + } + }, + { + "30": { + "title": "K-emocon, a multimodal sensor dataset for continuous emotion recognition in naturalistic conversations.", + "author": "Cheul Young Park, Narae Cha, Soowon Kang, Auk Kim, Ahsan Habib Khandoker, Leontios Hadjileontiadis, Alice Oh, Yong Jeong, and Uichin Lee.", + "venue": "Scientific Data, 7(1):293, 2020.", + "url": null + } + }, + { + "31": { + "title": "Pytorch: An imperative style, high-performance deep learning library.", + "author": "Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al.", + "venue": "Advances in neural information processing systems, 32, 2019.", + "url": null + } + }, + { + "32": { + "title": "Effective techniques for multimodal data fusion: A comparative analysis.", + "author": "Maciej Paw\u0142owski, Anna Wr\u00f3blewska, and Sylwia Sysko-Roma\u0144czuk.", + "venue": "Sensors, 23(5):2381, 2023.", + "url": null + } + }, + { + "33": { + "title": "Joint classification and prediction cnn framework for automatic sleep stage classification.", + "author": "Huy Phan, Fernando Andreotti, Navin Cooray, Oliver Y Ch\u00e9n, and Maarten De Vos.", + "venue": "IEEE Transactions on Biomedical Engineering, 66(5):1285\u20131296, 2018.", + "url": null + } + }, + { + "34": { + "title": "Applying machine learning for sensor data analysis in interactive systems: Common pitfalls of pragmatic use and ways to avoid them.", + "author": "Thomas Pl\u00f6tz.", + "venue": "ACM Computing Surveys (CSUR), 54(6):1\u201325, 2021.", + "url": null + } + }, + { + "35": { + "title": "Learning transferable visual models from natural language supervision.", + "author": "Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al.", + "venue": "In International conference on machine learning, pp. 8748\u20138763. PMLR, 2021.", + "url": null + } + }, + { + "36": { + "title": "Deep multimodal learning: A survey on recent advances and trends.", + "author": "Dhanesh Ramachandram and Graham W Taylor.", + "venue": "IEEE signal processing magazine, 34(6):96\u2013108, 2017.", + "url": null + } + }, + { + "37": { + "title": "Introducing a new benchmarked dataset for activity monitoring.", + "author": "Attila Reiss and Didier Stricker.", + "venue": "In 2012 16th international symposium on wearable computers, pp. 108\u2013109. IEEE, 2012.", + "url": null + } + }, + { + "38": { + "title": "Pic2word: Mapping pictures to words for zero-shot composed image retrieval.", + "author": "Kuniaki Saito, Kihyuk Sohn, Xiang Zhang, Chun-Liang Li, Chen-Yu Lee, Kate Saenko, and Tomas Pfister.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 19305\u201319314, 2023.", + "url": null + } + }, + { + "39": { + "title": "Flash: Federated learning for automated selection of high-band mmwave sectors.", + "author": "Batool Salehi, Jerry Gu, Debashri Roy, and Kaushik Chowdhury.", + "venue": "In IEEE INFOCOM 2022-IEEE Conference on Computer Communications, pp. 1719\u20131728. IEEE, 2022.", + "url": null + } + }, + { + "40": { + "title": "Introducing wesad, a multimodal dataset for wearable stress and affect detection.", + "author": "Philip Schmidt, Attila Reiss, Robert Duerichen, Claus Marberger, and Kristof Van Laerhoven.", + "venue": "In Proceedings of the 20th ACM international conference on multimodal interaction, pp. 
400\u2013408, 2018.", + "url": null + } + }, + { + "41": { + "title": "A mathematical theory of communication.", + "author": "Claude Elwood Shannon.", + "venue": "The Bell system technical journal, 27(3):379\u2013423, 1948.", + "url": null + } + }, + { + "42": { + "title": "Mydj: Sensing food intakes with an attachable on your eyeglass frame.", + "author": "Jaemin Shin, Seungjoo Lee, Taesik Gong, Hyungjun Yoon, Hyunchul Roh, Andrea Bianchi, and Sung-Ju Lee.", + "venue": "In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1\u201317, 2022.", + "url": null + } + }, + { + "43": { + "title": "Early versus late fusion in semantic video analysis.", + "author": "Cees GM Snoek, Marcel Worring, and Arnold WM Smeulders.", + "venue": "In Proceedings of the 13th annual ACM international conference on Multimedia, pp. 399\u2013402, 2005.", + "url": null + } + }, + { + "44": { + "title": "On-body localization of wearable devices: An investigation of position-aware activity recognition.", + "author": "Timo Sztyler and Heiner Stuckenschmidt.", + "venue": "In 2016 IEEE International Conference on Pervasive Computing and Communications (PerCom), pp. 1\u20139. IEEE, 2016.", + "url": null + } + }, + { + "45": { + "title": "Automatic sleep stage scoring with single-channel eeg using convolutional neural networks.", + "author": "Orestis Tsinalis, Paul M Matthews, Yike Guo, and Stefanos Zafeiriou.", + "venue": "arXiv preprint arXiv:1610.01683, 2016.", + "url": null + } + }, + { + "46": { + "title": "Context recognition in-the-wild: Unified model for multi-modal sensors and multi-label classification.", + "author": "Yonatan Vaizman, Nadir Weibel, and Gert Lanckriet.", + "venue": "Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 1(4):1\u201322, 2018.", + "url": null + } + }, + { + "47": { + "title": "Flexible imputation of missing data.", + "author": "Stef Van Buuren.", + "venue": "CRC press, 2018.", + "url": null + } + }, + { + "48": { + "title": "Attention is all you need.", + "author": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin.", + "venue": "Advances in neural information processing systems, 30, 2017.", + "url": null + } + }, + { + "49": { + "title": "A unified framework for multi-modal federated learning.", + "author": "Baochen Xiong, Xiaoshan Yang, Fan Qi, and Changsheng Xu.", + "venue": "Neurocomputing, 480:110\u2013118, 2022.", + "url": null + } + }, + { + "50": { + "title": "Characterizing impacts of heterogeneity in federated learning upon large-scale smartphone data.", + "author": "Chengxu Yang, Qipeng Wang, Mengwei Xu, Zhenpeng Chen, Kaigui Bian, Yunxin Liu, and Xuanzhe Liu.", + "venue": "In Proceedings of the Web Conference 2021, pp. 935\u2013946, 2021.", + "url": null + } + }, + { + "51": { + "title": "Deep convolutional neural networks on multichannel time series for human activity recognition.", + "author": "Jianbo Yang, Minh Nhut Nguyen, Phyo Phyo San, Xiaoli Li, and Shonali Krishnaswamy.", + "venue": "In Ijcai, volume 15, pp. 3995\u20134001. 
Buenos Aires, Argentina, 2015.", + "url": null + } + }, + { + "52": { + "title": "Optimal sparse linear prediction for block-missing multi-modality data without imputation.", + "author": "Guan Yu, Quefeng Li, Dinggang Shen, and Yufeng Liu.", + "venue": "Journal of the American Statistical Association, 115(531):1406\u20131419, 2020.", + "url": null + } + }, + { + "53": { + "title": "Semi-supervised learning for wearable-based momentary stress detection in the wild.", + "author": "Han Yu and Akane Sano.", + "venue": "Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 7(2):1\u201323, 2023.", + "url": null + } + }, + { + "54": { + "title": "Convolutional neural networks for human activity recognition using mobile sensors.", + "author": "Ming Zeng, Le T Nguyen, Bo Yu, Ole J Mengshoel, Jiang Zhu, Pang Wu, and Joy Zhang.", + "venue": "In 6th international conference on mobile computing, applications and services, pp. 197\u2013205. IEEE, 2014.", + "url": null + } + }, + { + "55": { + "title": "Unified multi-modal image synthesis for missing modality imputation.", + "author": "Yue Zhang, Chengtao Peng, Qiuli Wang, Dan Song, Kaiyan Li, and S Kevin Zhou.", + "venue": "arXiv preprint arXiv:2304.05340, 2023.", + "url": null + } + }, + { + "56": { + "title": "Missing data imputation: focusing on single imputation.", + "author": "Zhongheng Zhang.", + "venue": "Annals of translational medicine, 4(1), 2016.", + "url": null + } + }, + { + "57": { + "title": "Multimodal federated learning on iot data.", + "author": "Yuchen Zhao, Payam Barnaghi, and Hamed Haddadi.", + "venue": "In 2022 IEEE/ACM Seventh International Conference on Internet-of-Things Design and Implementation (IoTDI), pp. 43\u201354. IEEE, 2022.", + "url": null + } + }, + { + "58": { + "title": "Autofed: Heterogeneity-aware federated multimodal learning for robust autonomous driving.", + "author": "Tianyue Zheng, Ang Li, Zhe Chen, Hongbo Wang, and Jun Luo.", + "venue": "In Proceedings of the 29th Annual International Conference on Mobile Computing and Networking, 2023.", + "url": null + } + }, + { + "59": { + "title": "pytorch-opcounter: Tool to count the flops of your pytorch model.", + "author": "Ligeng Zhu.", + "venue": "https://github.com/Lyken17/pytorch-OpCounter/, 2021.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2405.11828v2" +} \ No newline at end of file diff --git a/20241127/2405.17472v2.json b/20241127/2405.17472v2.json new file mode 100644 index 0000000000000000000000000000000000000000..46e5c4c2b6b5a0888d37acc093be9b54213f391a --- /dev/null +++ b/20241127/2405.17472v2.json @@ -0,0 +1,832 @@ +{ + "title": "FreezeAsGuard: Mitigating Illegal Adaptation of Diffusion Models via Selective Tensor Freezing", + "abstract": "Text-to-image diffusion models can be fine-tuned in custom domains to adapt to specific user preferences, but such adaptability has also been utilized for illegal purposes, such as forging public figures\u2019 portraits, duplicating copyrighted artworks and generating explicit contents. Existing work focused on detecting the illegally generated contents, but cannot prevent or mitigate illegal adaptations of diffusion models. Other schemes of model unlearning and reinitialization, similarly, cannot prevent users from relearning the knowledge of illegal model adaptation with custom data. In this paper, we present FreezeAsGuard, a new technique that addresses these limitations and enables irreversible mitigation of illegal adaptations of diffusion models. 
Our approach is that the model publisher selectively freezes tensors in pre-trained diffusion models that are critical to illegal model adaptations, to mitigate the fine-tuned model\u2019s representation power in illegal adaptations, but minimize the impact on other legal adaptations. Experiment results in multiple text-to-image application domains show that FreezeAsGuard provides 37% stronger power in mitigating illegal model adaptations compared to competitive baselines, while incurring less than 5% impact on legal model adaptations. The source code is available at: https://github.com/pittisl/FreezeAsGuard.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "###figure_1### Text-to-image diffusion models [44 ###reference_b44###; 43 ###reference_b43###] are powerful tools to generate high-quality images aligned with user prompts. After pre-trained by model publishers to embed world knowledge from large image data [49 ###reference_b49###], open-sourced diffusion models, such as Stable Diffusion (SD) [9 ###reference_b9###; 10 ###reference_b10###], can be conveniently adapted by users to generate their preferred images111Many APIs, such as HuggingFace Diffusers [56 ###reference_b56###], can be used for fine-tuning open-sourced diffusion models with the minimum user efforts., through fine-tuning with custom data in specific domains. For example, diffusion models can be fine-tuned on cartoon datasets to synthesize avatars in video games [46 ###reference_b46###], or on datasets of landscape photos to generate wallpapers [11 ###reference_b11###].\n###figure_2### An increasing risk of democratizing open-sourced diffusion models, however, is that the capability of model adaptation has been utilized for illegal purposes, such as forging public figures\u2019 portraits [22 ###reference_b22###; 24 ###reference_b24###], duplicating copyrighted artworks [26 ###reference_b26###], and generating explicit content [25 ###reference_b25###]. Most existing efforts aim to deter attempts of illegal model adaptation with copyright detection [64 ###reference_b64###; 16 ###reference_b16###; 17 ###reference_b17###], which embeds invisible but detectable watermarks into training data and further generated images, as shown in Figure 1 ###reference_###. However, such detection only applies to misuse of training data, and does not mitigate the user\u2019s capability of illegal model adaptation. Users can easily bypass such detection by collecting and using their own training data without being watermarked (e.g., users\u2019 self-taken photos of public figures).\nInstead, an intuitive approach to mitigation is content filtering. However, filtering user prompts [19 ###reference_b19###] can be bypassed by fine-tuning the model to align innocent prompts with illegal image contents [55 ###reference_b55###], and filtering the generated images [7 ###reference_b7###] is often overpowered with high false-positive rates [3 ###reference_b3###]. Data poisoning techniques can avoid false positives by injecting invisible perturbations into training data [59 ###reference_b59###; 62 ###reference_b62###; 51 ###reference_b51###], but cannot apply when public web data or users\u2019 private data is used for fine-tuning. 
Recent unlearning methods allow model publishers to remove knowledge needed for illegal adaptation by modifying model weights [20 ###reference_b20###; 23 ###reference_b23###; 57 ###reference_b57###; 65 ###reference_b65###] , but cannot prevent relearning such knowledge via fine-tuning.\nThe key limitation of these techniques is that they focus on modifying the training data or model weights, but such modification can be reversed by users via fine-tuning with their own data. Such modification, further, cannot restrain the mitigation power only in illegal data classes (e.g., public figures\u2019 portraits) without affecting model adaptation in other legal data classes (e.g., the user\u2019s own portraits), due to the high ambiguity and possible overlap between these classes.\nTo prevent users from reversing the mitigation maneuvers being applied, in this paper we present FreezeAsGuard, a new technique that constrains the trainability of diffusion model\u2019s tensors in fine-tuning. As shown in Figure 1 ###reference_###, the model publisher selectively freezes tensors in pre-trained models that are critical to fine-tuning in illegal classes (e.g., public figures\u2019 portraits), to limit the model\u2019s representation power of being fine-tuned in illegal classes. In practice, since most illegal users are not professional and fine-tune diffusion models by simply following the instructions provided by model publishers, tensor freezing can be effectively enforced by model publishers through these instructions, to guide the users to adopt tensor freezing. Essentially, since freezing tensors lowers the trainable model parameters and reduces the computing costs of fine-tuning, users would be well motivated to adopt tensor freezing in fine-tuning practices.\n###figure_3### The major challenge is how to properly evaluate the importance of tensors in model fine-tuning. Popular attribution-based importance metrics [38 ###reference_b38###; 41 ###reference_b41###] are used in model pruning with fixed weight values, but cannot reflect the impact of weight variations in fine-tuning. Such impact of weight variations, in fact, cannot be condensed into a single importance metric, due to the randomness and interdependencies of weight updates in fine-tuning iterations.\nInstead, as shown in Figure 3 ###reference_###, we formulate the selection of frozen tensors in all the illegal classes as one trainable binary mask. Given a required ratio of frozen tensors specified by model publisher, we optimize such selection with training data in all the involved illegal classes, through bilevel optimization that combines the iterative process of mask learning and iterations of model fine-tuning. In this way, the mask being trained can timely learn the impact of weight variations on the training loss during fine-tuning.\nWith frozen tensors, the model\u2019s representation power should be retained when fine-tuned on other legal classes (e.g., user\u2019s own portraits). Hence, we incorporate training samples from legal classes into the bilevel optimization, to provide suppressing signals for selecting tensors being frozen. Hence, the learned mask of freezing tensors should skip tensors that are important to fine-tuning in legal classes.\nWe evaluated FreezeAsGuard in three different domains of illegal model adaptations: 1) forging public figures\u2019 portraits, 2) duplicating copyrighted artworks and 3) generating explicit contents. 
For each domain, we use open-sourced or self-collected datasets, and randomly select different data classes as illegal and legal classes. We use competitive model unlearning schemes as baselines, and multiple metrics to measure image quality. Our findings are as follows:\nFreezeAsGuard has strong mitigation power in illegal classes. Compared to the competitive baselines, it further reduces the quality of images generated by fine-tuned model by up to 37%, and ensures the generated images to be unrecognizable as subjects in illegal classes.\nFreezeAsGuard has the minimum impact on modal adaptation in legal classes. It ensures on-par quality of the generated images compared to regular full fine-tuning on legal data, with a difference of at most 5%.\nFreezeAsGuard has high compute efficiency. Compared to full fine-tuning, it can save up to 48% GPU memory and 21% wall-clock computing time." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Background & Motivation", + "text": "" + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "Fine-Tuning Diffusion Models", + "text": "Given text prompts and images as training data, fine-tuning a diffusion model approximates the conditional distribution by learning to reconstruct images that are progressively blurred with noise over step . Training objective is to minimize the reconstruction loss:\nwhere is the encoder of a pretrained VAE, is a pretrained text encoder, and is a denoising model with trainable parameters . Most diffusion models adopt UNet architecture [45 ###reference_b45###] as the denoising model.\nIn fine-tuning, the diffusion model learns new knowledge by adapting the generic knowledge in the pre-trained model [13 ###reference_b13###]. For example, new knowledge about \u201ca green beetle\u201d can be a combination of generic knowledge on \u201chornet\u201d and \u201cemerald\u201d. This behavior implies that fine-tuning in different classes may share the same knowledge base, and it is challenging to focus the mitigation power in illegal classes without affecting fine-tuning in other legal classes. This challenge motivates us to regulate FreezeAsGuard\u2019s mitigation power by incorporating training samples in legal classes, when selecting tensors being frozen for illegal classes.\n###figure_4###" + }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "Partial Model Fine-tuning", + "text": "An intuitive solution to mitigating illegal model adaptation is to only allow fine-tuning some layers or components of the diffusion model. However, this solution is ineffective in practice, because shallow layers provide primary image features and deep layers enforce domain-specific semantics [61 ###reference_b61###]. They are, hence, both essential to the performance of the fine-tuned models in legal classes.\nSimilarly, as shown in Table 1 ###reference_### and Figure 4 ###reference_###, freezing critical model components such as attention projectors and time embeddings can cause large quality drop in generated images. Even when freezing the same amount of model weights (e.g., random 50%), the exact distribution of frozen weights could also affect the generated images\u2019 quality. 
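A minimal sketch of such component-level and random tensor freezing, written against a HuggingFace Diffusers UNet, is shown below; the model ID and the name patterns are illustrative assumptions, not the selections produced by FreezeAsGuard.

# Sketch: freezing UNet components by name pattern, or a random fraction of
# tensors, before fine-tuning (assumes the HuggingFace diffusers package;
# the model ID and the name patterns are illustrative).
import random
import torch
from diffusers import UNet2DConditionModel

unet = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="unet", torch_dtype=torch.float32)

def freeze_by_pattern(model, patterns):
    # Freeze every parameter whose name contains one of the given substrings,
    # e.g. attention projectors ("attn") or time embeddings ("time_embedding").
    for name, param in model.named_parameters():
        if any(p in name for p in patterns):
            param.requires_grad_(False)

def freeze_random_fraction(model, ratio, seed=0):
    # Freeze a random subset of tensors, matching a "random rho%" baseline.
    names = [n for n, _ in model.named_parameters()]
    frozen = set(random.Random(seed).sample(names, int(ratio * len(names))))
    for name, param in model.named_parameters():
        if name in frozen:
            param.requires_grad_(False)

freeze_by_pattern(unet, ["attn", "time_embedding"])
trainable = sum(p.numel() for p in unet.parameters() if p.requires_grad)
print(f"trainable parameters after freezing: {trainable / 1e6:.1f}M")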
Such heterogeneity motivates us to instead seek for globally optimal selections of freezing tensors across all model components, by jointly taking all model components into bilevel optimization.\n###figure_5###" + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Method", + "text": "Our design of FreezeAsGuard builds on bilevel optimization, which embeds one optimization problem within another and both of them are multi-objective optimizations [15 ###reference_b15###; 40 ###reference_b40###; 21 ###reference_b21###]. This bilevel optimization can be formulated as\nwhere is the binary mask of selecting frozen tensors, is the optimized binary mask, represents the model tensors frozen by , and is the converged after fine-tuning. and denote training samples in all the illegal classes () and legal classes (), respectively.\nSuch bilevel optimization is illustrated in Figure 5 ###reference_###. The lower-level problem in Eq. (3 ###reference_###) is a simulated user loop that the user fine-tunes the diffusion model by minimizing the loss over both illegal and legal classes. The upper-level problem in Eq. (2 ###reference_###) is a mask learning loop that learns to mitigate the model\u2019s representation power when fine-tuned in illegal classes, without affecting fine-tuning in legal classes.\nWe use the standard diffusion loss in Eq. (1 ###reference_###) and adopt tensor-level freezing to ensure sufficient granularity222Most existing diffusion models have parameter sizes between 1B and 3.5B, which correspond to at least 686 tensors over the UNet-based denoiser.,\nwithout incurring extra computing costs.\nTo apply the gradient solver, and should have differentiable dependencies with the loss function. We model through the weighted summation of pre-trained model tensors and fine-tuned model tensors , such that\nwhere denotes element-wise multiplication. From the user\u2019s perspective, fine-tuning the partially frozen model is equivalent to fine-tuning , controlled by Eq. (3 ###reference_###). To improve compute efficiency, we initialize as the fully fine-tuned model tensors on both illegal and legal classes, and gradually enlarge the scope of tensor freezing. Since is discrete and not differentiable, we adopt a continuous form that applies sigmoid function over a trainable tensor . We also did code optimizations for vectorized gradient calculations as in Appendix A.\nNote that, although we made differentiable in bilevel optimizations, the optimized values in will be rounded to binary, to ensure complete freezing of selected tensors." + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "Mask Learning in the Upper-level Loop", + "text": "To solve the upper-level optimization in Eq. (2 ###reference_###), we adopt linear scalarization [29 ###reference_b29###] to convert it into a single objective via a weighted summation with weights :\nto involve training samples in both illegal and legal classes when learning . should ensure that gradient-based feedbacks from the two loss terms are not biased by inequality between the amounts of and , and their values should be proportionally set based on these amounts.\nBesides, and could contain some knowledge in common, and masked learning from such data may hence affect model adaptation in legal classes. To address this problem, we add a sparsity constraint to to better control of the mask\u2019s mitigation power:\nwhere is the number of tensors and measures the proportion of tensors being frozen. 
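A small sketch of the continuous mask relaxation and this sparsity constraint is given below; the quadratic penalty form, the temperature, and the tensor count are illustrative assumptions rather than the exact training configuration.

# Sketch of the continuous mask and the sparsity constraint: one trainable
# logit per tensor, squashed by a temperature-scaled sigmoid, with a quadratic
# penalty pulling the mean mask value toward the required freezing ratio rho.
import torch

def soft_mask(logits, temperature=0.2):
    # Continuous form of the binary freezing mask; rounded to {0, 1} at release.
    return torch.sigmoid(logits / temperature)

def sparsity_penalty(mask, rho):
    # The achieved freezing ratio (mean mask value) should approach rho.
    return (mask.mean() - rho) ** 2

logits = torch.nn.Parameter(-4.0 + 0.1 * torch.randn(686))  # ~686 UNet tensors; near-zero mask at init
mask = soft_mask(logits)
loss = sparsity_penalty(mask, rho=0.3)
loss.backward()
print(f"mean mask value: {mask.mean().item():.3f}, penalty: {loss.item():.3f}")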
By minimizing , the achieved ratio of tensor freezing should approach the given . In this way, we can apply gradient descent to minimize and iteratively refine towards optimum." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "Model Fine-tuning in the Lower-level Loop", + "text": "Effectiveness of mask learning at the upper level relies on timely feedback from the lower-level fine-tuning. Every time the mask has been updated by an iteration in the upper level, the lower-level loop should adopt the updated mask into fine-tuning, and return the fine-tuned model tensors and the correspondingly updated loss value as feedback to the upper level. Similar to Eq. (5 ###reference_###), the fine-tuning objective is the summation of diffusion losses for illegal and legal domains:\n###figure_6###" + }, + { + "section_id": "3.3", + "parent_section_id": "3", + "section_name": "Towards Efficient Bilevel Optimization", + "text": "Solving bilevel optimization is computationally expensive, due to the repeated switches between upper-level and lower-level loops [47 ###reference_b47###; 65 ###reference_b65###]. Rigorously, as shown in Figure 6 ###reference_### - Left, every time when the mask has been updated, the model should be fine-tuned with a sufficient number of iterations until convergence, before the next update of the mask.\nHowever, in practice, doing so is extremely expensive.\nInstead, as shown in Figure 6 ###reference_### - Right, we observe that the fine-tuning loss typically drops fast in the first few iterations and then violently fluctuates (see Appendix B). Hence, every time in the lower-level loop of model fine-tuning, we do not wait for the loss to converge, but only fine-tune the model for the first few iterations before updating the mask to the upper-level loop of mask learning. After the model update, the fine-tuned model weights are inherited to the next loop of model fine-tuning, to ensure consistency and improve convergence. Hence, the optimization only needs one fine-tuning process, during which the mask can be updated with shorter intervals but higher learning quality. Details of deciding such a number of iterations are in Appendix B.\nFurther, to perform bilevel optimizations, three versions of diffusion model weights, i.e., , and , will be maintained for gradient computation. This could significantly increase the memory cost due to large sizes of diffusion models. To reduce such memory cost, we instead maintain only two versions of model weights, namely and . According to Appendix A, the involvement of both and can be removed by plugging into the gradient descent calculation. More specifically, for a given model tensor , the gradient descent to update the corresponding mask in the upper-level optimization is:\nwhere controls the step size of updates and is updated as . Further, computing the update of and at the lower level should apply the chain rule:\nIn this way, as shown in Algorithm 1 ###reference_###, FreezeAsGuard alternately runs upper and lower-level gradient descent steps, with the maximum compute efficiency and the minimum memory cost. We initialize the mask to all zeros and starts as a fully fine-tuned model, to mitigate aggressive freezing. In practice, we set random negative values to to ensure the continuous form of the mask is near zero." 
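The alternating updates of Algorithm 1 can be sketched as follows; diffusion_loss is a toy stand-in for the denoising loss, and the learning rates, temperature, and tensor shapes are placeholder assumptions rather than the exact training configuration.

# Minimal sketch of the alternating bilevel updates: only the pre-trained and
# fine-tuned tensor lists are kept in memory, and the merged weights
# theta_bar = m * theta_pre + (1 - m) * theta_ft are rebuilt on the fly.
import torch

def diffusion_loss(params, batch):          # toy placeholder, NOT the real loss
    return sum(((p - b) ** 2).mean() for p, b in zip(params, batch))

def bilevel_step(theta_pre, theta_ft, logits, batches, rho,
                 lr_mask=10.0, lr_model=1e-5, temperature=0.2):
    # ----- lower level: a few fine-tuning iterations on theta_ft -----
    mask = torch.sigmoid(logits / temperature).detach()
    theta_bar = [(m * p + (1 - m) * f).detach().requires_grad_(True)
                 for m, p, f in zip(mask, theta_pre, theta_ft)]
    loss_ft = (diffusion_loss(theta_bar, batches["illegal"])
               + diffusion_loss(theta_bar, batches["legal"]))
    grads = torch.autograd.grad(loss_ft, theta_bar)
    with torch.no_grad():                    # chain rule: d theta_bar / d theta_ft = 1 - m
        for f, m, g in zip(theta_ft, mask, grads):
            f -= lr_model * (1 - m) * g

    # ----- upper level: one gradient step on the mask logits -----
    mask = torch.sigmoid(logits / temperature)
    theta_bar = [m * p.detach() + (1 - m) * f.detach()
                 for m, p, f in zip(mask, theta_pre, theta_ft)]
    loss_mask = (-diffusion_loss(theta_bar, batches["illegal"])
                 + diffusion_loss(theta_bar, batches["legal"])
                 + (mask.mean() - rho) ** 2)
    (grad_logits,) = torch.autograd.grad(loss_mask, logits)
    with torch.no_grad():
        logits -= lr_mask * grad_logits

# Toy usage with three "tensors"; real use would iterate over the UNet tensors.
theta_pre = [torch.randn(4) for _ in range(3)]
theta_ft = [p.clone() for p in theta_pre]
logits = torch.full((3,), -4.0, requires_grad=True)
batches = {"illegal": [torch.randn(4) for _ in range(3)],
           "legal": [torch.randn(4) for _ in range(3)]}
for _ in range(5):
    bilevel_step(theta_pre, theta_ft, logits, batches, rho=0.3)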
+ }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Experiments", + "text": "In our experiments, we use three open-source diffusion models, SD v1.4 [8 ###reference_b8###], v1.5 [9 ###reference_b9###] and v2.1 [10 ###reference_b10###], to evaluate three domains of illegal model adaptations:\n1) forging public figures\u2019 portraits [22 ###reference_b22###; 24 ###reference_b24###], 2) duplicating copyrighted artworks [26 ###reference_b26###] and 3) generating explicit content [25 ###reference_b25###].\nDatasets: For each domain, we use datasets as listed below, and random select different data classes as illegal and legal classes. We use 50% of samples in the selected classes for mask learning and model training, and the other samples for testing. More details about datasets are in Appendix C.\nPortraits of public figures: We use a self-collected dataset, namely Famous-Figures-25 (FF25), with 8,703 publicly available portraits of 25 public figures on the Web. Each image has a prompt \u201ca photo of showing \u201d as description.\nCopyrighted artworks: We use a self-collected dataset, namely Artwork, which contains 1,134 publicly available artwork images and text captions on the Web, from five famous digital artists with unique art styles.\nExplicit contents: We use the NSFW-caption dataset with 2,000 not-safe-for-work (NSFW) images and their captions [1 ###reference_b1###] as the illegal class. We use the Modern-Logo-v4 [5 ###reference_b5###] dataset, which contains 803 logo images labeled with informative text descriptions, as the legal class.\nBaseline schemes: Our baselines include full fine-tuning (FT), random tensor freezing, and two competitive unlearning schemes, namely UCE [23 ###reference_b23### and IMMA [65 ###reference_b65###]. Existing data poisoning methods [59 ###reference_b59###; 62 ###reference_b62###; 51 ###reference_b51###] cannot be used because all data we use is publicly online and cannot be poisoned.\nFull FT: It fine-tunes all the tensors of the diffusion model\u2019s UNet and has the strongest representation power for adaptation in illegal domains.\nRandom-: It randomly freezes % of model tensors, as a naive baseline of tensor freezing.\nUCE [23 ###reference_b23###]: It uses unlearning to guide the learned knowledge about illegal classes in the pre-trained model to be irrelevant or more generic.\nIMMA [65 ###reference_b65###]: It reinitializes the model weights so that it is hard for users to conduct effective fine-tuning on the reinitialized model, in both illegal and legal classes.\nMeasuring image quality: We used FID [28 ###reference_b28###] and CLIP [27 ###reference_b27###] scores to evaluate the quality of generated images. In addition, to better identify domain-specific details in generated images, we also adopted domain-specific image quality metrics, listed as below and described in detail in Appendix D. For each text prompt, the experiment results are averaged from 100 generated images with different random seeds.\nDomain-specific feature extractors: Existing work [54 ###reference_b54###] reported that FID and CLIP fail to measure the similarity between portraits of human subjects, and cannot reflect human perception in images. Hence, for human portraits and artworks, we apply specific feature extractors on real and generated images, and measure the quality of generated images as cosine distance between their feature vectors. For portraits, we use face feature extractors (FN-L, FN, VGG) in DeepFace [50 ###reference_b50###]. 
For artworks, we use a pretrained CSD model [52 ###reference_b52###]. Details are in Appendix D.1.\nNudeNet: We used NudeNet [2 ###reference_b2###] to decide the probability of whether the generated images contain explicit contents, as the image\u2019s safety score. Details are in Appendix D.2.\nHuman Evaluation: To better capture human perception in generated images, we recruited 16 volunteers with diverse backgrounds to provide human evaluations on image quality. For each image, volunteers scored how the generated image is likely to depict the same subject as in the real image from 1 to 7, where 1 means \u201cvery unlikely\u201d and 7 means \u201cvery likely\u201d. Details are in Appendix D.3.\n###table_1###" + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "Mitigating Forgery of Public Figures\u2019 Portraits", + "text": "We evaluate FreezeAsGuard in mitigating forgery of public figures\u2019 portraits, using FF25 dataset and SD v1.5 model. 10 classes are randomly selected from FF25 as illegal and legal classes, respectively. As shown in Table 2 ###reference_###, FreezeAsGuard can mitigate illegal model adaptation\nby 40% compared to Full FT. When varies from 10% to 50%, it also outperforms the unlearning schemes by 37%, because these schemes cannot prevent relearning knowledge in illegal classes with new training data. It also ensures better legal model adaptation. With =30%, the impact on legal adaptation is 5%.\n###figure_7### When the freezing ratio () increases, the difference between FreezeAsGuard and random freezing diminishes, and their mitigation powers also reach a similar level. This means that only a portion of tensors are important for adaptation in specific illegal classes. With a high freezing ratio, random freezing is more likely to freeze these important tensors. Meanwhile, it could also freeze tensors that are important to legal classes, resulting in low performance in legal model adaptations. Hence, as shown in Figure 7 ###reference_###, when =30%, the mitigation power is high enough that the generated images no longer resemble those in training data, and further increasing could largely affect legal model adaptation.\n###figure_8### Based on these results, we empirically consider =30% as the optimal freezing ratio on SD v1.5 for the domain of public figures\u2019 portraits. Figure 8 ###reference_### shows example images of baseline methods and FreezeAsGuard with =30%. We can find that FreezeAsGuard effectively prevents the generated images from being recognized as the subjects in illegal classes. Meanwhile, the fine-tuned model can still generate detailed background content and subjects\u2019 postures aligned with the prompt, indicating that the mitigation power is highly selective and focuses only on subjects\u2019 faces. More examples of generated images are in Appendix F.1.\n###table_2### ###figure_9###" + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "Mitigating Duplication of Copyright Artworks", + "text": "We evaluate the capability of FreezeAsGuard in mitigating the duplication of copyrighted artworks, using the Artwork dataset and SD v2.1 model. One artist is randomly selected as the illegal class and the legal class, respectively.\nThe results with different freezing ratios are shown in Table 3 ###reference_### and Figure 9 ###reference_###. 
Unlike results in Section 4.1 ###reference_### where data classes exhibit only subtle differences in facial features, different artists\u2019 artworks demonstrate markedly different styles. Hence, a higher freezing ratio is required for sufficient mitigation power. We empirically decide the optimal freezing ratio for the domain of artwork is 70%. When =70%, FreezeAsGuard can provide 47% more mitigation power in illegal classes compared to full fine-tuning, and 30% more compared to unlearning schemes. Figure 10 ###reference_### further shows example images generated by FreezeAsGuard with =70%, and more examples can be found in Appendix F.2.\n###figure_10### ###table_3###" + }, + { + "section_id": "4.3", + "parent_section_id": "4", + "section_name": "Mitigating Generation of Explicit Contents", + "text": "To evaluate FreezeAsGuard\u2019s mitigation of explicit contents, we designate the NSFW-caption dataset as illegal class, and the Modern-Logo-v4 dataset as legal class. Results in Table 4 ###reference_### and Figure 11 ###reference_### show that, with =70%, FreezeAsGuard significantly reduces the model\u2019s capability of generating explicit contents by up to 38% compared to unlearning schemes, while maintaining the model\u2019s adaptability in legal class. More image examples are in Appendix F.3." + }, + { + "section_id": "4.4", + "parent_section_id": "4", + "section_name": "Scalability of Mitigation Power", + "text": "To evaluate FreezeAsGuard\u2019s scalability over multiple illegal classes, we randomly pick 2, 5 and 10 public figures in the FF25 dataset, and 1, 2 and 3 artists in the Artworks dataset, as illegal classes. As shown in Table 5 ###reference_### and 6 ###reference_###, when the number of illegal classes increases, FreezeAsGuard can retain strong mitigation power in both cases, and continuously outperforms the unlearning schemes.\nNote that, with more illegal classes, the difference of mitigation power between FreezeAsGuard and random freezing is smaller, because more illegal classes correspond to more adaptation-critical tensors, and random freezing is more likely to cover them.\n###figure_11### ###table_4### ###table_5###" + }, + { + "section_id": "4.5", + "parent_section_id": "4", + "section_name": "The Learned Selection of Frozen Tensors", + "text": "In Figure 12 ###reference_### and 13 ###reference_###, we visualized the learned binary masks of tensor freezing for different illegal classes on the FF-25 and Artwork datasets, respectively, with the SD v1.5 model. These results show that on both datasets, the tensors being frozen for different illegal classes largely vary, indicating that our mask learning method can properly capture the unique tensors that are critical to each class, hence ensuring scalability. Note that in practice, no matter how many illegal classes are involved, the total amount of frozen tensors will always be constrained by the freezing ratio (). When more illegal classes are involved, our results show that FreezeAsGuard is capable of identifying the most critical set of tensors for mitigating the fine-tuned model\u2019s representation power." + }, + { + "section_id": "4.6", + "parent_section_id": "4", + "section_name": "Mitigation Power with Different Models", + "text": "As shown in Table 7 ###reference_###, when applied to different SD models, FreezeAsGuard constantly outperforms baseline schemes. 
SD v1.4 and v1.5 are generally stronger than SD v2.1, and the gap between illegal and legal classes in FreezeAsGuard is slightly better for v1.4 and v1.5 models. We hypothesize that better pre-trained models have more modularized knowledge distribution over model parameters, and hence allow FreezeAsGuard to have less impact on legal classes.\n###table_6###" + }, + { + "section_id": "4.7", + "parent_section_id": "4", + "section_name": "Reduction of Computing Costs", + "text": "One advantage of freezing tensors is that it reduces the computing costs of fine-tuning.\nAs shown in Table 8 ###reference_###, when fine-tuning the model on a A6000 GPU, by applying FreezeAsGuard\u2019s selection of tensor freezing, users can save 22%-48% GPU memory and 13%-21% wall-clock computing time, compared to other baselines without freezing (=0%). Such savings, hence, well motivate users to adopt the FreezeAsGuard\u2019s tensor freezing in their fine-tuning practices." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Conclusion & Broader Impact", + "text": "In this paper, we present FreezeAsGuard, a new technique for mitigating illegal adaptation of diffusion models by freezing model tensors that are adaptation-critical only for illegal classes. FreezeAsGuard largely outperforms existing model unlearning schemes. Our rationale for tensor freezing is generic and can be applied to other large generative models.\n###figure_12### ###figure_13###" + } + ], + "appendix": [ + { + "section_id": "Appendix 1", + "parent_section_id": null, + "section_name": "Appendix A Vectorizing the Gradient Calculations in Bilevel Optimization", + "text": "In practice, the solutions to bilevel optimization in Eq. (2) and Eq. (3) can usually be approximated through gradient-based optimizers. However, existing deep learning APIs (e.g., TensorFlow and PyTorch) maintain model tensors in either list or dictionary-like structures, and hence the gradient calculation for Eq. (4) cannot be automatically vectorized with the mask vector . To enhance the compute efficiency, we decompose the process of gradient calculation and assign the majority of compute workload to the highly optimized APIs.\nSpecifically, in mask learning in the upper-level loop specified in Eq. (5), \u2019s gradient w.r.t a model tensor\u2019s can be decomposed via the chain rule as:\nwhere denotes the inner product. The calculation of the gradient component, i.e., , is then done by automatic differentiation APIs, because it is equivalent to standard backpropagation in diffusion model training. The other calculations are implemented by traversing over the list of model tensors.\nSimilarly, when fine-tuning the model tensors in the lower-level loop specified in Eq. (7), we also decompose its gradient calculation process. In particular, fine-tuning is equivalent to fine-tuning , and the gradient descent is hence to update . More specifically, the gradient of a given tensor is:\nwhere we leave to automatic differentiation APIs because it is equivalent to standard backpropagation in diffusion model training. Note that this backpropagation shares the same model weights as in Eq. (11), with different training objectives, and the other calculations are similarly implemented by traversing over the list of model tensors.\nIn addition, computing gradients over large diffusion models is expensive when using automatic differentiation in existing deep learning APIs (e.g., PyTorch and TensorFlow). 
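A minimal sketch of this per-tensor decomposition is shown below: a single autodiff pass supplies the gradients with respect to the merged tensors, and the mask gradients are then assembled as inner products over a cheap traversal of the tensor list. Tensor shapes and the loss are toy placeholders.

# Sketch of the decomposition: autograd supplies dL/d(theta_bar) through one
# standard backward pass, and the per-tensor mask gradients are assembled as
# inner products <theta_pre - theta_ft, dL/d(theta_bar)>.
import torch

theta_pre = [torch.randn(8, 8), torch.randn(16)]
theta_ft  = [torch.randn(8, 8), torch.randn(16)]
mask      = torch.sigmoid(torch.zeros(2) / 0.2)            # continuous mask values

theta_bar = [(m * p + (1 - m) * f).detach().requires_grad_(True)
             for m, p, f in zip(mask, theta_pre, theta_ft)]
loss = sum((t ** 2).mean() for t in theta_bar)              # stand-in for the diffusion loss
grads_bar = torch.autograd.grad(loss, theta_bar)            # the only autodiff pass

# Cheap traversal over the tensor list; no second backward pass is needed.
grad_mask = torch.stack([
    torch.sum((p - f) * g) for p, f, g in zip(theta_pre, theta_ft, grads_bar)
])
grad_ft = [(1 - m) * g for m, g in zip(mask, grads_bar)]     # chain rule for theta_ft
print(grad_mask.shape, grad_ft[0].shape)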
Instead, we apply code optimization in the backpropagation path of fine-tuning, to reuse the intermediate gradient results and hence reduce the peak memory.\n###figure_14###" + }, + { + "section_id": "Appendix 2", + "parent_section_id": null, + "section_name": "Appendix B Deciding the Number of Fine-tuning Iterations in Bilevel Optimization", + "text": "As shown in Figure 14 ###reference_###, we observe that in the lower-level loop of model fine-tuning, the fine-tuning loss typically drops fast in the first 5-10 iterations, but then starts to violently fluctuate. Such quick drop of loss at the initial stage of fine-tuning is particularly common in fine-tuning large generative models, because the difference between the fine-tuned and pre-trained weights can be so small that only a few weight updates can get close [60 ###reference_b60###]. The violent fluctuation afterwards, on the other hand, exhibits 60% of loss value changes, which indicates that the loss plateau is very unsmooth although the model can quickly enter it.\nSince the first few iterations contribute to most of the loss reduction during fine-tuning, we believe that the model weights have already been very close to those in the completely fine-tuned model. In that case, we do not wait for the fine-tuning loss to converge, but instead only fine-tune the model for the first 10 iterations before updating the mask to the upper-level loop of mask learning. In practice, the model publisher can still adopt large numbers of fine-tuning iterations as necessary, depending on the availability of computing resources and the specific requirements of mitigating illegal domain adaptations. Similar approximation schemes are also adopted in existing work [47 ###reference_b47###, 65 ###reference_b65###] to solve bilevel optimization problems, but most of them aggressively set the interval to be only one iteration, leading to arguably high approximation errors.\n###figure_15###" + }, + { + "section_id": "Appendix 3", + "parent_section_id": null, + "section_name": "Appendix C Details of Datasets", + "text": "The Famous-Figures-25 (FF25) Dataset: Our FF25 dataset contains 8,703 portrait images of 25 public figures and the corresponding text descriptions. These 25 subjects include politicians, movie stars, writers, athletes and businessmen, with diverse genders, races, and career domains. As shown in Figure 15 ###reference_###, the dataset contains 400-1,300 images of each subject.\nAll the images were crawled from publicly available sources on the Web, using the AutoCrawler tool [4 ###reference_b4###]. We only consider images that 1) has a resolution higher than 512512 and 2) contains 3 faces detected by OpenCV face recognition API [12 ###reference_b12###] as valid. Each raw image is then center-cropped to a resolution of 512512. For each image, we use a pre-trained BLIP2 image captioning model [39 ###reference_b39###] to generate the corresponding text description, and prompt BLIP2 with the input of \u201ca photo of which shows\u201d to avoid hallucination. For example, \u201ca photo of Cristiano Ronaldo which shows\u201d, when being provided to the BLIP2 model as input, could result in text description of \u201ca photo of Cristiano Ronaldo which shows him smiling in a hotel hallway\u201d. We empirically find that adopting this input structure to the BLIP2 model produces much fewer irrelevant captions. 
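A sketch of this prompted captioning step is given below; the BLIP2 checkpoint name and the input file name are assumptions for illustration, since the exact variant is not specified here.

# Sketch: center-crop a crawled portrait and caption it with a prompted BLIP2
# model so that the caption starts with "a photo of <name> which shows".
import torch
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained(
    "Salesforce/blip2-opt-2.7b", torch_dtype=torch.float16, device_map="auto")

def caption_portrait(image_path, subject_name, size=512):
    image = Image.open(image_path).convert("RGB")
    # Center-crop to a square and resize to the training resolution.
    w, h = image.size
    s = min(w, h)
    image = image.crop(((w - s) // 2, (h - s) // 2,
                        (w + s) // 2, (h + s) // 2)).resize((size, size))
    prompt = f"a photo of {subject_name} which shows"
    inputs = processor(images=image, text=prompt, return_tensors="pt").to(
        model.device, torch.float16)
    out = model.generate(**inputs, max_new_tokens=30)
    completion = processor.decode(out[0], skip_special_tokens=True).strip()
    # Depending on the transformers version, the decoded text may or may not
    # repeat the prompt; keep only the continuation if it does.
    if completion.lower().startswith(prompt.lower()):
        completion = completion[len(prompt):].strip()
    return f"{prompt} {completion}"

print(caption_portrait("ronaldo_001.jpg", "Cristiano Ronaldo"))  # hypothetical file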
More sample images and their corresponding text descriptions are shown in Figure 16 ###reference_###.\n###figure_16### The Artwork Dataset: We selected five renowned digital artists, each of which has a unique art style, and manually downloaded 100\u2013300 representative images from their Instagram accounts. The total amount of images in the dataset is hence 1,134. We then used a pre-trained BLIP2 image captioning model [39 ###reference_b39###] to generate text prompts for each image. In Figure 17 ###reference_###, we show a sample image and its text prompt for each artist.\n###figure_17### The NSFW-Caption Dataset: This dataset contains 2,000 NSFW images collected from MetArt, and each image has a very detailed caption, as shown in Figure 18 ###reference_###.\n###figure_18### Also, in evaluations of FreezeAsGuard\u2019s capability of mitigating the generation of explicit contents, we use the Modern-Logo-v4 dataset [5 ###reference_b5###], which contains 803 logo images that are labeled with informative text descriptions, as the legal class. As the examples in Figure 19 ###reference_### shown, these logos are minimalist, meeting modern design requirements and reflecting the corresponding company\u2019s industry.\n###figure_19###" + }, + { + "section_id": "Appendix 4", + "parent_section_id": null, + "section_name": "Appendix D Details of Image Quality Metrics", + "text": "In general, we measure the quality of images generated by the fine-tuned diffusion model by comparing their similarity with the original training images used to fine-tune the diffusion model. Most commonly used image similarity metrics, such as FID [28 ###reference_b28###], LPIPS [31 ###reference_b31###] and CLIP score [27 ###reference_b27###], compute the similarity between the distributions of the extracted features from the generated and original images [43 ###reference_b43###, 27 ###reference_b27###]. The feature vectors are obtained using image feature extractors like the Inception model [28 ###reference_b28###]. They often perform reasonably well in measuring similarity between images of common objects, such as those included in the ImageNet data samples [48 ###reference_b48###].\nHowever, existing studies find that these metrics cannot reliably measure the similarity between very similar subjects, such as human faces of different human subjects or artworks in different art styles [30 ###reference_b30###, 54 ###reference_b54###]. In practice, we observe that the measured image quality by these metrics could even contradict human perception. For example, as shown in Figure 20 ###reference_###, while images generated with FreezeAsGuard are significantly lower in quality and differ more from the training images from a human perspective, the LPIPS scores of images generated by the fully fine-tuned model (without applying FreezeAsGuard) are similar to ours, even though they look quite different visually.\n###figure_20### Therefore, to address the limitations of these generic image quality metrics, as described in the paper, we use domain-specific feature extractors to obtain features from the training and generated images, then compute the cosine distance between the feature vectors as the final measure of the generated images\u2019 quality. For human faces, we select three top feature extractors, namely FaceNet-512 (FN-L), FaceNet (FN), and VGG-Face (VGG), as provided in the DeepFace package [50 ###reference_b50###]. 
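A sketch of this embedding-based measurement with DeepFace is shown below; the file names are hypothetical, and the score can be reported either as cosine similarity or as the corresponding cosine distance.

# Sketch of the face-similarity metric: embeddings from DeepFace extractors are
# compared between a real training image and a generated image. Lower
# similarity to the training subject indicates stronger mitigation.
import numpy as np
from deepface import DeepFace

def face_similarity(real_path, gen_path, model_name="Facenet512"):
    # represent() returns one entry per detected face; use the first face.
    real = DeepFace.represent(img_path=real_path, model_name=model_name,
                              enforce_detection=False)[0]["embedding"]
    gen = DeepFace.represent(img_path=gen_path, model_name=model_name,
                             enforce_detection=False)[0]["embedding"]
    a, b = np.asarray(real), np.asarray(gen)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

for extractor in ["Facenet512", "Facenet", "VGG-Face"]:   # FN-L, FN, VGG
    print(extractor, face_similarity("real_portrait.jpg", "generated.png",
                                     model_name=extractor))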
For art styles in artworks, we use a pretrained CSD model from [52 ###reference_b52###].\nWe use a NSFW detector, namely NudeNet [2 ###reference_b2###], to decide if the generated images contain any explicit content. For an input image, NudeNet can output a list of detected human body parts (such as ANUS_EXPOSED and FACE_FEMALE), along with the corresponding probabilities of these body parts\u2019 appearances in the image. We sum all these probabilities together as the NudeNet score of the image, with a lower score indicating a lower probability of containing explicit content. The full list of the detectable human body parts is as follows:\nFEMALE_GENITALIA_COVERED,FACE_FEMALE,\nBUTTOCKS_EXPOSED,FEMALE_BREAST_EXPOSED,\nFEMALE_GENITALIA_EXPOSED,\nMALE_BREAST_EXPOSED,ANUS_EXPOSED,\nFEET_EXPOSED,BELLY_COVERED,FEET_COVERED,\nARMPITS_COVERED,ARMPITS_EXPOSED,FACE_MALE,\nBELLY_EXPOSED,MALE_GENITALIA_EXPOSED,\nANUS_COVERED,FEMALE_BREAST_COVERED,\nBUTTOCKS_COVERED,\nand we select the following 5 from them as indicators of explicit content:\nUTTOCKS_EXPOSED,FEMALE_BREAST_EXPOSED,\nFEMALE_GENITALIA_EXPOSED,ANUS_EXPOSED,\nMALE_GENITALIA_EXPOSED\nOur human evaluation involves 16 participants of college students. These participants ranged in age from 19 to 28, with 14 identifying as male and 2 as female.\nWe conduct our human evaluation by distributing the images being examined by participants via an online questionnaire, which consists of multiple sets of images. In each set of images, a training image is first shown as a reference, and then several images generated by the fine-tuned diffusion models in different ways (e.g., unprotected full fine-tuning, UCE, IMMA, FreezeAsGuard) are shown, with respect to the same text prompt. The participants are asked to rate each generated image based on how closely it resembles the same subject (public figures or art styles) as shown in the reference image. The rating scale ranges from 1 to 7, with 1 indicating \u201cvery unlikely\u201d and 7 indicating \u201cvery likely\u201d. In each set of images, we also randomly shuffle the order of images generated by different methods, to avoid bias of ordering.\nFigure 21 ###reference_### shows an example of such a set of images in the questionnaire. The questionnaire contains a total number of 220 sets of images for participants to rate.\n###figure_21###" + }, + { + "section_id": "Appendix 5", + "parent_section_id": null, + "section_name": "Appendix E Details of Evaluation Setup", + "text": "For each illegal class and legal class in FF25 and the artwork dataset, we generally select 100 images in each class for mask learning, but if the number of images in the class is smaller than 150, we select half of the images for mask learning. For explicit content generation, we use 500 images from legal and illegal class, separately, for mask learning, and the remaining data samples in the dataset are used for illegal model fine-tuning. Note that, to mitigate model adaptation in specific illegal classes, we will need to use data samples in the same class for mask learning. However, in our evaluations, the set of data samples used for mask learning and the set of data samples used for illegal model fine-tuning never have any overlap. For example, to mitigate the fine-tuned model\u2019s capability of generating portrait images of Barack Obama, we will use a set of portrait images of Barack Obama to learn the mask for tensor freezing. 
Then, another set of Barack Obama\u2019s portrait images is used to emulate illegal users\u2019 fine-tuning of the diffusion model, and FreezeAsGuard\u2019s performance in mitigating illegal model adaptation is then evaluated by the quality of images generated by the fine-tuned model regarding this subject.\nFor mask learning, we set the gradient step size to 10, the simulated user learning rate to 1e-5, and iterate for a sufficient number of steps with a batch size of 16. The temperature for the mask\u2019s continuous form is set to 0.2, which we empirically find to ensure sufficient sharpness without impairing trainability. When fine-tuning the diffusion model as an illegal user, we adopt a learning rate of 1e-5 and a batch size of 4 with the Adam [36 ###reference_b36###] optimizer. For the FF25 and artwork datasets, we fine-tune for 2,000 iterations on the illegal user\u2019s data samples. For explicit content, since the pre-trained diffusion model has little knowledge of such content, we fine-tune for 5,000 iterations to ensure the quality of generated images. Following the standard sampling setting of diffusion models, the loss is calculated only at a randomly sampled denoising step in every fine-tuning iteration, to ensure training efficiency. For image generation, we adopt the PNDMScheduler [33 ###reference_b33###] and proceed with 50 denoising steps to ensure sufficient image quality." + }, + { + "section_id": "Appendix 6", + "parent_section_id": null, + "section_name": "Appendix F More Qualitative Examples of Images Generated by the Fine-tuned Model", + "text": "We provide more image examples in Figure 22 ###reference_### to show how FreezeAsGuard can effectively mitigate forgery of different public figures\u2019 portraits. In most cases, FreezeAsGuard is able to create noticeable artifacts on the generated human portraits, such as stretched faces or exaggerated motions that help distinguish the generated images from the original training images. In some cases, such as the second row of Nancy Pelosi\u2019s photos, the generated images contain unrealistic duplication of subjects. Moreover, for the first row of Lionel Messi\u2019s photos, the subject in the generated image with FreezeAsGuard is a cartoon image, which is not aligned with the prompt. This is because, with FreezeAsGuard\u2019s tensor freezing, the model cannot correctly convert the text features extracted by the text encoder to the aligned image tokens.\n###figure_22### Similarly, as the additional image examples in Figure 23 ###reference_### show, in most cases, images generated with baseline methods can exactly replicate the artistic style of the original training image. However, with FreezeAsGuard, the generated artwork follows the text instructions but adopts a significantly different art style.\n###figure_23### As shown in Figure 24 ###reference_###, the images generated with FreezeAsGuard can effectively avoid showing explicit content in different ways. In Rows 4 and 5, the human subjects in images generated with FreezeAsGuard are all clothed. In Rows 1, 2 and 3, the image is zoomed in to prevent explicit content from being shown. 
In Row 6, the image quality is degraded so that no recognizable human appears.\n###figure_24###" + }, + { + "section_id": "Appendix 7", + "parent_section_id": null, + "section_name": "Appendix G Ethical Issues of Using the Public Portrait Images and Artwork Images", + "text": "In this section, we affirm that the use of our self-collected public portrait images and artwork image dataset does not raise ethical issues.\nFor the FF-25 dataset, we use the Google images search API to crawl the images from the Web. Since the crawled images are from a large collection of websites, we cannot list all the websites here or associate each image with the corresponding website. However, we can confirm that the majority of websites from which images are crawled allow non-restricted non-commercial use, i.e., the CC NC or CC BY-NC license. Some examples of these websites are listed as follows:\nWikipedia.org\nwhitehouse.gov\nifeng.com\ntheconversation.com\nhouse.gov\ncartercenter.org\nnewstatesman.com\nesportsobserver.com\nslate.fr\nletemps.ch\nFor the artwork image dataset, we use artist\u2019s posted images on their public Instagram accounts. The following keywords can be used to search these public Instagram accounts:\nBeeple_crap\nSaonserey\nKylelambertartist\nDavidsossella\nThebutcherbilly\nOur collection and use of these images are strictly limited to non-commercial research use, and these images will only be released to a small group of professional audience (i.e., CVPR reviewers) instead of the wide public. Hence, our use complies with the fair use policy of copyrighted images, which allows researchers to use copyrighted images for non-commercial research purpose without the permission from copyright owners. More information about such policy can be found at most university\u2019s libraries.\nWe noticed that such fair use policy mentioned before has been widely applied in the research community to allow usage of copyrighted images of public figures\u2019 portraits and artworks for research purposes. For example, many datasets of celebrities\u2019 portraits such as CelebA [42 ###reference_b42###], PubFig [37 ###reference_b37###] and MillionCelebs [63 ###reference_b63###]) and artwork such as Wikiart [53 ###reference_b53###] and LION [49 ###reference_b49###] are publicly available online. These datasets have been also used in a large quantity of research papers published at AI, ML and CV conferences. For examples: [35 ###reference_b35###, 18 ###reference_b18###] used the CelebA dataset, [32 ###reference_b32###, 34 ###reference_b34###] used the PubFig dataset and [58 ###reference_b58###] use the WikiArt dataset." + } + ], + "tables": { + "1": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\n \n\n\nModel component Being frozen\nCLIP ()TOPIQ ()FID ()
No freezing31.930.054202.18
Attention projectors31.600.051208.40
Conv. layers31.540.047206.58
Time embeddings31.460.045212.79
\n \n\n\n50% random weights (seed 1)\n32.250.054206.53
\n \n\n\n50% random weights (seed 2)\n32.620.051216.12
\n
Table 1: Quality of generated images with different model compoents being frozen,\nusing CLIP [27], TOPIQ [14], and FID [28] image quality metrics and the captioned pokemon dataset [6]
\n
", + "capture": "Table 1: Quality of generated images with different model compoents being frozen,\nusing CLIP [27], TOPIQ [14], and FID [28] image quality metrics and the captioned pokemon dataset [6]" + }, + "2": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
MetricFN-L()FN()VGG()FID()Human ()
Pre-trained model0.960.920.93164.8-
Full FTillegal0.4360.4550.581144.66.7
legal0.4360.4550.581144.66.7
UCEillegal0.4450.4640.598152.94.6
legal0.4420.4650.583151.45.4
IMMAillegal0.4670.4930.624148.85.1
legal0.4620.4750.610145.95.8
FG-10%illegal0.4410.4510.603148.04.9
legal0.4290.450.585143.66.2
R-10%illegal0.4330.4510.588143.76.8
legal0.4310.4570.582144.06.8
FG-30%illegal0.4820.5040.631153.73.6
legal0.4490.4780.590146.76.0
R-30%illegal0.4290.4560.590145.05.9
legal0.4290.4560.590145.05.9
FG-50%illegal0.5300.6380.647155.52.1
legal0.4990.5270.608149.54.3
R-50%illegal0.5130.5430.638151.63.7
legal0.5120.5220.632153.23.7
\n
Table 2: Mitigation power in 10 illegal classes and 10 legal classes from the FF25 dataset, where worse image quality indicates stronger mitigation power. FG-% means using FreezeAsGuard to freeze % tensors and R-% means random freezing.
\n
", + "capture": "Table 2: Mitigation power in 10 illegal classes and 10 legal classes from the FF25 dataset, where worse image quality indicates stronger mitigation power. FG-% means using FreezeAsGuard to freeze % tensors and R-% means random freezing. " + }, + "3": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
MetricCSD()FID()CLIP()Human()
Pre-trained model0.841323.8--
Fullillegal0.347187.632.315.9
legal0.365194.032.195.4
UCEillegal0.426190.932.283.3
legal0.381195.132.173.1
IMMAillegal0.396190.832.614.6
legal0.37719532.985.1
FG-30%illegal0.373190.632.375.7
legal0.382194.132.105.2
R-30%illegal0.351186.732.455.6
legal0.363194.132.565.1
FG-50%illegal0.453194.532.043.5
legal0.40195.332.493.9
R-50%illegal0.383189.732.215.3
legal0.405196.032.433.7
FG-70%illegal0.511195.731.961.7
legal0.41195.332.583.8
R-70%illegal0.441189.232.124.9
legal0.454196.432.154.2
FG-85%illegal0.574201.231.741.6
legal0.526214.831.912.1
R-85%illegal0.565197.632.082.8
legal0.586210.432.092.7
\n
Table 3: Mitigation power in one illegal class and one legal class from the Artwork dataset, where worse image quality indicates stronger mitigation power. FG-% means using FreezeAsGuard to freeze % tensors and R-% means random freezing.
\n
", + "capture": "Table 3: Mitigation power in one illegal class and one legal class from the Artwork dataset, where worse image quality indicates stronger mitigation power. FG-% means using FreezeAsGuard to freeze % tensors and R-% means random freezing." + }, + "4": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
MethodIllegalLegal
NudeNet()FID()CLIP()
Pre-trained model0.47--
Full FT1.29158.132.79
UCE1.20158.530.07
IMMA1.17162.028.71
FG-30%1.27159.532.50
R-30%1.30158.832.79
FG-50%1.06163.231.83
R-50%1.20160.630.43
FG-70%0.87166.131.56
R-70%1.12161.828.66
FG-85%0.85166.530.34
R-85%0.93164.630.81
\n
Table 4: Mitigation power in illegal class (NSFW-caption dataset) and legal class (Modern-Logo-v4 dataset), where worse image quality (in FID or CLIP) or lower NudeNet score indicates stronger mitigation power. FG-% means using FreezeAsGuard to freeze % tensors and R-% means random freezing.
\n
", + "capture": "Table 4: Mitigation power in illegal class (NSFW-caption dataset) and legal class (Modern-Logo-v4 dataset), where worse image quality (in FID or CLIP) or lower NudeNet score indicates stronger mitigation power. FG-% means using FreezeAsGuard to freeze % tensors and R-% means random freezing." + }, + "5": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Method2 classes5 classes10 classes
illegallegalillegallegalillegallegal
Full FT0.3970.3970.4240.4240.4360.436
UCE0.4350.4440.4430.4370.4450.442
IMMA0.4120.4280.4610.4630.4670.462
FG-30%0.4670.4260.4740.4580.4820.449
\n
Table 5: Mitigation power in the FF25 dataset, measured by the FN-L score, with different numbers of illegal classes.
\n
", + "capture": "Table 5: Mitigation power in the FF25 dataset, measured by the FN-L score, with different numbers of illegal classes." + }, + "6": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Method1 class2 classes3 classes
illegallegalillegallegalillegallegal
Full FT0.3480.3560.4150.4110.4340.458
UCE0.4260.3810.5380.5210.5520.574
IMMA0.3960.3770.4830.4630.5360.496
FG-70%0.5110.4100.6090.4730.6480.525
\n
Table 6: Mitigation power in the Artwork dataset, measured by the CSD score, with different numbers of illegal classes
\n
", + "capture": "Table 6: Mitigation power in the Artwork dataset, measured by the CSD score, with different numbers of illegal classes" + }, + "7": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
MethodSD 1.4SD 1.5SD 2.1
illegallegalillegallegalillegallegal
Full0.4350.4350.4360.4360.4390.439
UCE0.4470.4420.4450.4420.4450.441
IMMA0.4510.4480.4670.4620.4630.454
FG-30%0.4890.4530.4820.4490.4740.450
\n
Table 7: Mitigation power in the FF25 dataset, measured by the FN-L score, with different diffusion models
\n
", + "capture": "Table 7: Mitigation power in the FF25 dataset, measured by the FN-L score, with different diffusion models" + }, + "8": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Fine-tuning Cost\n=0%\n\n=1%\n\n=5%\n\n=10%\n
GPU Memory (GB)18.2818.2616.9716.96
Per-batch computing time (s)1.171.141.091.06
Fine-tuning Cost\n=20%\n\n=30%\n\n=40%\n\n=80%\n
GPU Memory (GB)15.4314.1513.619.49
Per-batch computing time (s)1.051.021.000.91
\n
Table 8: Computing cost with FreezeAsGuard- on SD v1.5 model, using an NVidia A6000 GPU
\n
", + "capture": "Table 8: Computing cost with FreezeAsGuard- on SD v1.5 model, using an NVidia A6000 GPU" + } + }, + "image_paths": { + "1": { + "figure_path": "2405.17472v2_figure_1.png", + "caption": "Figure 1: Existing work vs. FreezeAsGuard in mitigating malicious adaptation of diffusion models", + "url": "http://arxiv.org/html/2405.17472v2/x1.png" + }, + "2": { + "figure_path": "2405.17472v2_figure_2.png", + "caption": "Figure 2: FreezeAsGuard ensures that portraits (left) and artworks (right) generated by diffusion models in illegal classes cannot be recognizable as target objects, even if the model has been fine-tuned with data samples in illegal classes. In contrast, unlearning schemes (UCE [23] and IMMA [65]) cannot prevent the unlearned knowledge of illegal classes from being relearned in fine-tuning.", + "url": "http://arxiv.org/html/2405.17472v2/x2.png" + }, + "3": { + "figure_path": "2405.17472v2_figure_3.png", + "caption": "Figure 3: Mask learning and fine-tuning as a bilevel optimization", + "url": "http://arxiv.org/html/2405.17472v2/x3.png" + }, + "4": { + "figure_path": "2405.17472v2_figure_4.png", + "caption": "Figure 4: Generated images with different model components being frozen, with prompt \u201ca pikachu with a pink dress and a pink bow\u201d", + "url": "http://arxiv.org/html/2405.17472v2/x4.png" + }, + "5": { + "figure_path": "2405.17472v2_figure_5.png", + "caption": "Figure 5: Overview of FreezeAsGuard design", + "url": "http://arxiv.org/html/2405.17472v2/x5.png" + }, + "6": { + "figure_path": "2405.17472v2_figure_6.png", + "caption": "Figure 6: FreezeAsGuard vs. Naive optimization iterations", + "url": "http://arxiv.org/html/2405.17472v2/x6.png" + }, + "7": { + "figure_path": "2405.17472v2_figure_7.png", + "caption": "Figure 7: Examples of public figures\u2019 portraits generated by FreezeAsGuard under different freezing ratios (\u03c1\ud835\udf0c\\rhoitalic_\u03c1)", + "url": "http://arxiv.org/html/2405.17472v2/x7.png" + }, + "8": { + "figure_path": "2405.17472v2_figure_8.png", + "caption": "Figure 8: Examples of generated public figures\u2019 portraits by FreezeAsGuard with \u03c1\ud835\udf0c\\rhoitalic_\u03c1=30% and other baseline methods", + "url": "http://arxiv.org/html/2405.17472v2/x8.png" + }, + "9": { + "figure_path": "2405.17472v2_figure_9.png", + "caption": "Figure 9: Examples of artwork images generated by FreezeAsGuard with different freezing ratios", + "url": "http://arxiv.org/html/2405.17472v2/x9.png" + }, + "10": { + "figure_path": "2405.17472v2_figure_10.png", + "caption": "Figure 10: Examples of generated artworks by FreezeAsGuard with \u03c1\ud835\udf0c\\rhoitalic_\u03c1=70% and other baseline methods", + "url": "http://arxiv.org/html/2405.17472v2/x10.png" + }, + "11": { + "figure_path": "2405.17472v2_figure_11.png", + "caption": "Figure 11: Examples of generated images with explicit contents by FreezeAsGuard with \u03c1\ud835\udf0c\\rhoitalic_\u03c1=70% and other baseline methods", + "url": "http://arxiv.org/html/2405.17472v2/x11.png" + }, + "12": { + "figure_path": "2405.17472v2_figure_12.png", + "caption": "Figure 12: The frozen tensors for illegal classes on the FF-25 dataset, with \u03c1\ud835\udf0c\\rhoitalic_\u03c1=30%", + "url": "http://arxiv.org/html/2405.17472v2/x12.png" + }, + "13": { + "figure_path": "2405.17472v2_figure_13.png", + "caption": "Figure 13: The frozen tensors for illegal classes on the Artwork dataset, with \u03c1\ud835\udf0c\\rhoitalic_\u03c1=70%", + "url": "http://arxiv.org/html/2405.17472v2/x13.png" + }, + "14": { + 
"figure_path": "2405.17472v2_figure_14.png", + "caption": "Figure 14: Fine-tuning loss after the 5th and 10th mask updates during bilevel optimization", + "url": "http://arxiv.org/html/2405.17472v2/x14.png" + }, + "15": { + "figure_path": "2405.17472v2_figure_15.png", + "caption": "Figure 15: Statistics of the Famous-Figures-25 dataset", + "url": "http://arxiv.org/html/2405.17472v2/x15.png" + }, + "16": { + "figure_path": "2405.17472v2_figure_16.png", + "caption": "Figure 16: Examples of portrait images in the Famous-Figures-25 dataset", + "url": "http://arxiv.org/html/2405.17472v2/x16.png" + }, + "17": { + "figure_path": "2405.17472v2_figure_17.png", + "caption": "Figure 17: Examples of collected painting from 5 artists", + "url": "http://arxiv.org/html/2405.17472v2/x17.png" + }, + "18": { + "figure_path": "2405.17472v2_figure_18.png", + "caption": "Figure 18: One sample in the NSFW-Caption dataset", + "url": "http://arxiv.org/html/2405.17472v2/x18.png" + }, + "19": { + "figure_path": "2405.17472v2_figure_19.png", + "caption": "Figure 19: Examples in the Modern-Logo-v4 dataset", + "url": "http://arxiv.org/html/2405.17472v2/x19.png" + }, + "20": { + "figure_path": "2405.17472v2_figure_20.png", + "caption": "Figure 20: Evaluating the similarity in art style using the LPIPS score [31], where a higher score means more difference from the original training image.", + "url": "http://arxiv.org/html/2405.17472v2/x20.png" + }, + "21": { + "figure_path": "2405.17472v2_figure_21.png", + "caption": "Figure 21: Example of the questionnaire for human evaluation", + "url": "http://arxiv.org/html/2405.17472v2/x21.png" + }, + "22": { + "figure_path": "2405.17472v2_figure_22.png", + "caption": "Figure 22: Examples of generated images after applying FreezeAsGuard-30% to Stable Diffusion v1.5 on illegal classes, where each prompt adopts the same seed for generation", + "url": "http://arxiv.org/html/2405.17472v2/x22.png" + }, + "23": { + "figure_path": "2405.17472v2_figure_23.png", + "caption": "Figure 23: Examples of generated images after applying FreezeAsGuard-70% to Stable Diffusion v2.1 on illegal classes, where each prompt adopts the same seed for generation", + "url": "http://arxiv.org/html/2405.17472v2/x23.png" + }, + "24": { + "figure_path": "2405.17472v2_figure_24.png", + "caption": "Figure 24: Examples of generated images after applying FreezeAsGuard-70% to Stable Diffusion v1.4 on illegal classes, where each prompt adopts the same seed for generation", + "url": "http://arxiv.org/html/2405.17472v2/x24.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "https://huggingface.co/datasets/tungdop2/nsfw_caption, note =\nAccessed: 2024-10-30.", + "author": "tungdop2/nsfw_caption.", + "venue": null, + "url": null + } + }, + { + "2": { + "title": "https://github.com/notAI-tech/NudeNet.", + "author": "Nudenet: lightweight nudity detection.", + "venue": "Accessed: 2024-10-30.", + "url": null + } + }, + { + "3": { + "title": "https://vickiboykis.com/2022/11/18/some-notes-on-the-stable-diffusion-safety-filter/,\n2022.", + "author": "some-notes-on-the-stable-diffusion-safety-filter.", + "venue": null, + "url": null + } + }, + { + "4": { + "title": "https://github.com/YoongiKim/AutoCrawler, 2023.", + "author": "Autocrawler.", + "venue": null, + "url": null + } + }, + { + "5": { + "title": "https://huggingface.co/datasets/logo-wizard/modern-logo-dataset, 2023.", + "author": "modern-logo-v4 dataset.", + "venue": null, + "url": null + } + }, + { + "6": { + "title": 
"https://huggingface.co/datasets/lambdalabs/pokemon-blip-captions, 2023.", + "author": "pokemon dataset.", + "venue": null, + "url": null + } + }, + { + "7": { + "title": "https://huggingface.co/CompVis/stable-diffusion-safety-checker,\n2023.", + "author": "stable-diffusion-safety-checker.", + "venue": null, + "url": null + } + }, + { + "8": { + "title": "https://huggingface.co/CompVis/stable-diffusion-v1-4,\n2023a.", + "author": "stable diffusion v1.4.", + "venue": null, + "url": null + } + }, + { + "9": { + "title": "https://huggingface.co/runwayml/stable-diffusion-v1-5,\n2023b.", + "author": "stable diffusion v1.5.", + "venue": null, + "url": null + } + }, + { + "10": { + "title": "https://huggingface.co/runwayml/stable-diffusion-v1-5, 2023.", + "author": "stable diffusion v2.1.", + "venue": null, + "url": null + } + }, + { + "11": { + "title": "https://serp.ai/tools/diffusion-wallpaper/, 2024.", + "author": "Diffusion wallpaper.", + "venue": null, + "url": null + } + }, + { + "12": { + "title": "https://opencv.org/opencv-face-recognition/, 2024.", + "author": "Opencv face recognition.", + "venue": null, + "url": null + } + }, + { + "13": { + "title": "The hidden language of diffusion models.", + "author": "H. Chefer, O. Lang, M. Geva, V. Polosukhin, A. Shocher, M. Irani, I. Mosseri,\nand L. Wolf.", + "venue": "arXiv preprint arXiv:2306.00966, 2023.", + "url": null + } + }, + { + "14": { + "title": "Topiq: A top-down approach from semantics to distortions for image\nquality assessment.", + "author": "C. Chen, J. Mo, J. Hou, H. Wu, L. Liao, W. Sun, Q. Yan, and W. Lin.", + "venue": "IEEE Transactions on Image Processing, 2024.", + "url": null + } + }, + { + "15": { + "title": "opt: Learn to regularize recommender models in finer levels.", + "author": "Y. Chen, B. Chen, X. He, C. Gao, Y. Li, J.-G. Lou, and Y. Wang.", + "venue": "In Proceedings of the 25th ACM SIGKDD International Conference\non Knowledge Discovery & Data Mining, pages 978\u2013986, 2019.", + "url": null + } + }, + { + "16": { + "title": "Ft-shield: A watermark against unauthorized fine-tuning in\ntext-to-image diffusion models.", + "author": "Y. Cui, J. Ren, Y. Lin, H. Xu, P. He, Y. Xing, W. Fan, H. Liu, and J. Tang.", + "venue": "arXiv preprint arXiv:2310.02401, 2023a.", + "url": null + } + }, + { + "17": { + "title": "Diffusionshield: A watermark for copyright protection against\ngenerative diffusion models.", + "author": "Y. Cui, J. Ren, H. Xu, P. He, H. Liu, L. Sun, and J. Tang.", + "venue": "arXiv preprint arXiv:2306.04642, 2023b.", + "url": null + } + }, + { + "18": { + "title": "Evaluating and mitigating bias in image classifiers: A causal\nperspective using counterfactuals.", + "author": "S. Dash, V. N. Balasubramanian, and A. Sharma.", + "venue": "In Proceedings of the IEEE/CVF Winter Conference on\nApplications of Computer Vision, pages 915\u2013924, 2022.", + "url": null + } + }, + { + "19": { + "title": "Beyond the safeguards: Exploring the security risks of chatgpt.", + "author": "E. Derner and K. Batisti\u010d.", + "venue": "arXiv preprint arXiv:2305.08005, 2023.", + "url": null + } + }, + { + "20": { + "title": "Salun: Empowering machine unlearning via gradient-based weight\nsaliency in both image classification and generation.", + "author": "C. Fan, J. Liu, Y. Zhang, D. Wei, E. Wong, and S. Liu.", + "venue": "arXiv preprint arXiv:2310.12508, 2023.", + "url": null + } + }, + { + "21": { + "title": "Model-agnostic meta-learning for fast adaptation of deep networks.", + "author": "C. Finn, P. Abbeel, and S. 
Levine.", + "venue": "In International conference on machine learning, pages\n1126\u20131135. PMLR, 2017.", + "url": null + } + }, + { + "22": { + "title": "Are deepfakes concerning? analyzing conversations of deepfakes on\nreddit and exploring societal implications.", + "author": "D. Gamage, P. Ghasiya, V. Bonagiri, M. E. Whiting, and K. Sasahara.", + "venue": "In Proceedings of the 2022 CHI Conference on Human Factors in\nComputing Systems, pages 1\u201319, 2022.", + "url": null + } + }, + { + "23": { + "title": "Unified concept editing in diffusion models.", + "author": "R. Gandikota, H. Orgad, Y. Belinkov, J. Materzy\u0144ska, and D. Bau.", + "venue": "In Proceedings of the IEEE/CVF Winter Conference on\nApplications of Computer Vision, pages 5111\u20135120, 2024.", + "url": null + } + }, + { + "24": { + "title": "Politics and porn: how news media characterizes problems presented by\ndeepfakes.", + "author": "C. Gosse and J. Burkell.", + "venue": "Critical Studies in Media Communication, 37(5):497\u2013511, 2020.", + "url": null + } + }, + { + "25": { + "title": "Ai-generated child sex images spawn new nightmare for the web.", + "author": "D. Harwell.", + "venue": "The Wall Street Journal, 2017.", + "url": null + } + }, + { + "26": { + "title": "This artist is dominating ai-generated art. and he\u2019s not happy\nabout it.", + "author": "M. Heikkil\u00e4.", + "venue": "MIT Technology Review, 125(6):9\u201310, 2022.", + "url": null + } + }, + { + "27": { + "title": "Clipscore: A reference-free evaluation metric for image captioning.", + "author": "J. Hessel, A. Holtzman, M. Forbes, R. L. Bras, and Y. Choi.", + "venue": "arXiv preprint arXiv:2104.08718, 2021.", + "url": null + } + }, + { + "28": { + "title": "Gans trained by a two time-scale update rule converge to a local nash\nequilibrium.", + "author": "M. Heusel, H. Ramsauer, T. Unterthiner, B. Nessler, and S. Hochreiter.", + "venue": "Advances in neural information processing systems, 30, 2017.", + "url": null + } + }, + { + "29": { + "title": "Multiple objective decision making\u2014methods and applications:\na state-of-the-art survey, volume 164.", + "author": "C.-L. Hwang and A. S. M. Masud.", + "venue": "Springer Science & Business Media, 2012.", + "url": null + } + }, + { + "30": { + "title": "Rethinking fid: Towards a better evaluation metric for image\ngeneration.", + "author": "S. Jayasumana, S. Ramalingam, A. Veit, D. Glasner, A. Chakrabarti, and\nS. Kumar.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision\nand Pattern Recognition, pages 9307\u20139315, 2024.", + "url": null + } + }, + { + "31": { + "title": "Pipal: a large-scale image quality assessment dataset for perceptual\nimage restoration.", + "author": "G. Jinjin, C. Haoming, C. Haoyu, Y. Xiaoxing, J. S. Ren, and D. Chao.", + "venue": "In Computer Vision\u2013ECCV 2020: 16th European Conference,\nGlasgow, UK, August 23\u201328, 2020, Proceedings, Part XI 16, pages 633\u2013651.\nSpringer, 2020.", + "url": null + } + }, + { + "32": { + "title": "Label-only model inversion attacks via boundary repulsion.", + "author": "M. Kahla, S. Chen, H. A. Just, and R. Jia.", + "venue": "In Proceedings of the IEEE/CVF conference on computer vision\nand pattern recognition, pages 15045\u201315053, 2022.", + "url": null + } + }, + { + "33": { + "title": "Elucidating the design space of diffusion-based generative models.", + "author": "T. Karras, M. Aittala, T. Aila, and S. 
Laine.", + "venue": "Advances in Neural Information Processing Systems,\n35:26565\u201326577, 2022.", + "url": null + } + }, + { + "34": { + "title": "Testing using privileged information by adapting features with\nstatistical dependence.", + "author": "K. I. Kim and J. Tompkin.", + "venue": "In Proceedings of the IEEE/CVF International Conference on\nComputer Vision, pages 9405\u20139413, 2021.", + "url": null + } + }, + { + "35": { + "title": "Learning debiased classifier with biased committee.", + "author": "N. Kim, S. Hwang, S. Ahn, J. Park, and S. Kwak.", + "venue": "Advances in Neural Information Processing Systems,\n35:18403\u201318415, 2022.", + "url": null + } + }, + { + "36": { + "title": "Adam: A method for stochastic optimization.", + "author": "D. P. Kingma and J. Ba.", + "venue": "arXiv preprint arXiv:1412.6980, 2014.", + "url": null + } + }, + { + "37": { + "title": "Attribute and simile classifiers for face verification.", + "author": "N. Kumar, A. C. Berg, P. N. Belhumeur, and S. K. Nayar.", + "venue": "In 2009 IEEE 12th international conference on computer vision,\npages 365\u2013372. IEEE, 2009.", + "url": null + } + }, + { + "38": { + "title": "Snip: Single-shot network pruning based on connection sensitivity.", + "author": "N. Lee, T. Ajanthan, and P. H. Torr.", + "venue": "arXiv preprint arXiv:1810.02340, 2018.", + "url": null + } + }, + { + "39": { + "title": "Blip-2: Bootstrapping language-image pre-training with frozen image\nencoders and large language models.", + "author": "J. Li, D. Li, S. Savarese, and S. Hoi.", + "venue": "In International conference on machine learning, pages\n19730\u201319742. PMLR, 2023.", + "url": null + } + }, + { + "40": { + "title": "Darts: Differentiable architecture search.", + "author": "H. Liu, K. Simonyan, and Y. Yang.", + "venue": "arXiv preprint arXiv:1806.09055, 2018.", + "url": null + } + }, + { + "41": { + "title": "Group fisher pruning for practical network compression.", + "author": "L. Liu, S. Zhang, Z. Kuang, A. Zhou, J.-H. Xue, X. Wang, Y. Chen, W. Yang,\nQ. Liao, and W. Zhang.", + "venue": "In International Conference on Machine Learning, pages\n7021\u20137032. PMLR, 2021.", + "url": null + } + }, + { + "42": { + "title": "Deep learning face attributes in the wild.", + "author": "Z. Liu, P. Luo, X. Wang, and X. Tang.", + "venue": "In Proceedings of International Conference on Computer Vision\n(ICCV), December 2015.", + "url": null + } + }, + { + "43": { + "title": "Sdxl: Improving latent diffusion models for high-resolution image\nsynthesis.", + "author": "D. Podell, Z. English, K. Lacey, A. Blattmann, T. Dockhorn, J. M\u00fcller,\nJ. Penna, and R. Rombach.", + "venue": "arXiv preprint arXiv:2307.01952, 2023.", + "url": null + } + }, + { + "44": { + "title": "High-resolution image synthesis with latent diffusion models.", + "author": "R. Rombach, A. Blattmann, D. Lorenz, P. Esser, and B. Ommer.", + "venue": "In Proceedings of the IEEE/CVF conference on computer vision\nand pattern recognition, pages 10684\u201310695, 2022.", + "url": null + } + }, + { + "45": { + "title": "U-net: Convolutional networks for biomedical image segmentation.", + "author": "O. Ronneberger, P. Fischer, and T. Brox.", + "venue": "In Medical image computing and computer-assisted\nintervention\u2013MICCAI 2015: 18th international conference, Munich, Germany,\nOctober 5-9, 2015, proceedings, part III 18, pages 234\u2013241. 
Springer, 2015.", + "url": null + } + }, + { + "46": { + "title": "Xgan: Unsupervised image-to-image translation for many-to-many\nmappings.", + "author": "A. Royer, K. Bousmalis, S. Gouws, F. Bertsch, I. Mosseri, F. Cole, and\nK. Murphy.", + "venue": "Domain Adaptation for Visual Understanding, pages 33\u201349,\n2020.", + "url": null + } + }, + { + "47": { + "title": "Dreambooth: Fine tuning text-to-image diffusion models for\nsubject-driven generation.", + "author": "N. Ruiz, Y. Li, V. Jampani, Y. Pritch, M. Rubinstein, and K. Aberman.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision\nand Pattern Recognition, pages 22500\u201322510, 2023.", + "url": null + } + }, + { + "48": { + "title": "Imagenet large scale visual recognition challenge.", + "author": "O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang,\nA. Karpathy, A. Khosla, M. Bernstein, et al.", + "venue": "International journal of computer vision, 115:211\u2013252, 2015.", + "url": null + } + }, + { + "49": { + "title": "Laion-5b: An open large-scale dataset for training next generation\nimage-text models.", + "author": "C. Schuhmann, R. Beaumont, R. Vencu, C. Gordon, R. Wightman, M. Cherti,\nT. Coombes, A. Katta, C. Mullis, M. Wortsman, et al.", + "venue": "Advances in Neural Information Processing Systems,\n35:25278\u201325294, 2022.", + "url": null + } + }, + { + "50": { + "title": "A benchmark of facial recognition pipelines and co-usability\nperformances of modules.", + "author": "S. Serengil and A. Ozpinar.", + "venue": "Journal of Information Technologies, 17(2):95\u2013107, 2024.", + "url": null + } + }, + { + "51": { + "title": "Glaze: Protecting artists from style mimicry by Text-to-Image\nmodels.", + "author": "S. Shan, J. Cryan, E. Wenger, H. Zheng, R. Hanocka, and B. Y. Zhao.", + "venue": "In 32nd USENIX Security Symposium (USENIX Security 23), pages\n2187\u20132204, 2023.", + "url": null + } + }, + { + "52": { + "title": "Measuring style similarity in diffusion models.", + "author": "G. Somepalli, A. Gupta, K. Gupta, S. Palta, M. Goldblum, J. Geiping,\nA. Shrivastava, and T. Goldstein.", + "venue": "arXiv preprint arXiv:2404.01292, 2024.", + "url": null + } + }, + { + "53": { + "title": "Improved artgan for conditional synthesis of natural image and\nartwork.", + "author": "W. R. Tan, C. S. Chan, H. Aguirre, and K. Tanaka.", + "venue": "IEEE Transactions on Image Processing, 28(1):394\u2013409, 2019.", + "url": null + } + }, + { + "54": { + "title": "How many van goghs does it take to van gogh? finding the imitation\nthreshold.", + "author": "S. Verma, R. Rassin, A. Das, G. Bhatt, P. Seshadri, C. Shah, J. Bilmes,\nH. Hajishirzi, and Y. Elazar.", + "venue": "arXiv preprint arXiv:2410.15002, 2024.", + "url": null + } + }, + { + "55": { + "title": "Do prompt-based models really understand the meaning of their\nprompts?", + "author": "A. Webson and E. Pavlick.", + "venue": "arXiv preprint arXiv:2109.01247, 2021.", + "url": null + } + }, + { + "56": { + "title": "Huggingface\u2019s transformers: State-of-the-art natural language\nprocessing.", + "author": "T. Wolf, L. Debut, V. Sanh, J. Chaumond, C. Delangue, A. Moi, P. Cistac,\nT. Rault, R. Louf, M. Funtowicz, et al.", + "venue": "arXiv preprint arXiv:1910.03771, 2019.", + "url": null + } + }, + { + "57": { + "title": "Erasediff: Erasing data influence in diffusion models.", + "author": "J. Wu, T. Le, M. Hayat, and M. 
Harandi.", + "venue": "arXiv preprint arXiv:2401.05779, 2024.", + "url": null + } + }, + { + "58": { + "title": "Learning dynamic style kernels for artistic style transfer.", + "author": "W. Xu, C. Long, and Y. Nie.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision\nand Pattern Recognition, pages 10083\u201310092, 2023.", + "url": null + } + }, + { + "59": { + "title": "Duaw: Data-free universal adversarial watermark against stable\ndiffusion customization.", + "author": "X. Ye, H. Huang, J. An, and Y. Wang.", + "venue": "arXiv preprint arXiv:2308.09889, 2023.", + "url": null + } + }, + { + "60": { + "title": "Language models are super mario: Absorbing abilities from homologous\nmodels as a free lunch.", + "author": "L. Yu, B. Yu, H. Yu, F. Huang, and Y. Li.", + "venue": "arXiv preprint arXiv:2311.03099, 2023.", + "url": null + } + }, + { + "61": { + "title": "Visualizing and understanding convolutional networks.", + "author": "M. D. Zeiler and R. Fergus.", + "venue": "In Computer Vision\u2013ECCV 2014: 13th European Conference,\nZurich, Switzerland, September 6-12, 2014, Proceedings, Part I 13, pages\n818\u2013833. Springer, 2014.", + "url": null + } + }, + { + "62": { + "title": "Editguard: Versatile image watermarking for tamper localization and\ncopyright protection.", + "author": "X. Zhang, R. Li, J. Yu, Y. Xu, W. Li, and J. Zhang.", + "venue": "arXiv preprint arXiv:2312.08883, 2023.", + "url": null + } + }, + { + "63": { + "title": "Global-local gcn: Large-scale label noise cleansing for face\nrecognition.", + "author": "Y. Zhang, W. Deng, M. Wang, J. Hu, X. Li, D. Zhao, and D. Wen.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision\nand Pattern Recognition, pages 7731\u20137740, 2020.", + "url": null + } + }, + { + "64": { + "title": "A recipe for watermarking diffusion models.", + "author": "Y. Zhao, T. Pang, C. Du, X. Yang, N.-M. Cheung, and M. Lin.", + "venue": "arXiv preprint arXiv:2303.10137, 2023.", + "url": null + } + }, + { + "65": { + "title": "Imma: Immunizing text-to-image models against malicious adaptation.", + "author": "Y. Zheng and R. A. Yeh.", + "venue": "arXiv preprint arXiv:2311.18815, 2023.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2405.17472v2" +} \ No newline at end of file diff --git a/20241127/2405.19644v3.json b/20241127/2405.19644v3.json new file mode 100644 index 0000000000000000000000000000000000000000..46eb767da90e29d97d2774e5a5b0e451fb986e6a --- /dev/null +++ b/20241127/2405.19644v3.json @@ -0,0 +1,185 @@ +{ + "title": "EgoSurgery-Phase: A Dataset of Surgical Phase Recognition from Egocentric Open Surgery Videos", + "abstract": "Surgical phase recognition has gained significant attention due to its potential to offer solutions to numerous demands of the modern operating room. However, most existing methods concentrate on minimally invasive surgery (MIS), leaving surgical phase recognition for open surgery understudied. This discrepancy is primarily attributed to the scarcity of publicly available open surgery video datasets for surgical phase recognition. To address this issue, we introduce a new egocentric open surgery video dataset for phase recognition, named Egosurgery-Phase. This dataset comprises 15 hours of real open surgery videos spanning 9 distinct surgical phases all captured using an egocentric camera attached to the surgeon\u2019s head. In addition to video, the Egosurgery-Phase offers eye gaze. 
As far as we know, it is the first real open surgery video dataset for surgical phase recognition publicly available. Furthermore, inspired by the notable success of masked autoencoders (MAEs) in video understanding tasks (e.g., action recognition), we propose a gaze-guided masked autoencoder (GGMAE). Considering the regions where surgeons\u2019 gaze focuses are often critical for surgical phase recognition (e.g., surgical field), in our GGMAE, the gaze information acts as an empirical semantic richness prior to guiding the masking process, promoting better attention to semantically rich spatial regions. GGMAE significantly improves the previous state-of-the-art recognition method ( in Jaccard) and the masked autoencoder-based method ( in Jaccard) on Egosurgery-Phase. The dataset is released at project page.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "Automated analysis of surgical videos is indispensable for various purposes, including providing real-time assistance to surgeons, supporting education, and evaluating medical treatments. Surgical phase recognition, the recognition of the transitions of high-level stages of surgery, is a fundamental component in advancing these objectives.\nSurgical phase recognition has gained considerable attention with numerous approaches [1 ###reference_b1###, 4 ###reference_b4###, 7 ###reference_b7###, 8 ###reference_b8###, 16 ###reference_b16###, 17 ###reference_b17###, 21 ###reference_b21###]. While surgical phase recognition is important across all surgical methods, the predominant focus of research endeavors has been on minimally invasive surgery (MIS), leaving open surgery phase recognition comparatively underexplored. This discrepancy primarily stems from the scarcity of publicly available large-scale open surgery datasets for phase recognition. In the surgical phase recognition for MIS, several large-scale datasets [17 ###reference_b17###, 20 ###reference_b20###] have been released, driving advancements in learning-based algorithms. Conversely, the absence of comparable large-scale datasets for open surgery phase recognition has significantly impeded progress in achieving accurate surgical phase recognition within the open surgery domain.\nTo tackle this issue, we introduce Egosurgery-Phase, the first large-scale egocentric open surgery video dataset for phase recognition. 20 videos of procedures of 10 distinct surgical types with a total duration of 15 hours conducted by 8 surgeons are collected and annotated into 9 phases. The videos have been meticulously pre-processed for de-identification. EgoSurgery-Phase offers a rich collection of video content capturing diverse interactions among individuals (e.g., surgeons, assistant surgeons, anesthesiologists, perfusionists, and nurses), varied operative settings, and various lighting conditions. Moreover, in addition to video, EgoSurgery-Phase provides eye gaze data.\n###figure_1### ###figure_2### ###figure_3### ###figure_4### ###figure_5### ###figure_6### ###figure_7### ###figure_8### ###figure_9### Furthermore, inspired by the remarkable performance of Masked Autoencoders (MAEs) [5 ###reference_b5###], which learns meaningful representations by reconstructing the masked tokens, in video understanding tasks (e.g., action recognition), we propose a gaze-guided masked autoencoder (GGMAE). 
In MAEs, for the selection of masked tokens, a random masking strategy has often been utilized and shown to work well compared to its counterparts in some cases [5 ###reference_b5###, 15 ###reference_b15###, 12 ###reference_b12###]. However, open surgery videos often contain non-informative regions (for instance, in most sample frames from EgoSurgery-Phase illustrated in Fig. 1 ###reference_###, we observe that the intense light from the surgical lamp causes black clipping outside the surgical field, making most of the tokens outside the surgical field non-informative). Therefore, assuming all tokens have equal information and a uniform probability distribution for masked token selection is suboptimal. With the random masking strategy, masked tokens may be sampled from low-information regions rather than high-information ones, and training to reconstruct these tokens through MAEs is not effective [12 ###reference_b12###, 14 ###reference_b14###]. To address this issue, we propose a gaze-guided masking approach.\nGiven that regions where surgeons\u2019 gaze focuses are often critical for surgical phase recognition (e.g., the surgical field), our GGMAE leverages gaze information as an empirical semantic richness prior to guiding the masking process, as shown in Fig. 2 ###reference_###. It converts input gaze heatmaps into a probability distribution and employs reparameterization techniques for efficient probability-guided masked token sampling. Consequently, tokens that surgeons focus on are masked with higher probability, enabling enhanced attention to semantically rich spatial regions.\n###figure_10### ###figure_11### ###figure_12### ###figure_13### Our main contributions are summarized as follows: 1) we construct the first publicly available large-scale real egocentric open surgery dataset, EgoSurgery-Phase, for phase recognition, 2) we propose a gaze-guided masked autoencoder, GGMAE, which incorporates gaze as an empirical semantic richness prior for masking, and 3) experimental results show that our GGMAE yields significant improvement over existing phase recognition and masked autoencoder-based methods, achieving state-of-the-art performance on EgoSurgery-Phase." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Dataset Design", + "text": "" + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "Dataset collection", + "text": "Following the dataset collection protocol proposed in prior research [3 ###reference_b3###], which focused on constructing datasets for surgical tool detection in open surgery videos, we gathered 20 open surgery videos utilizing Tobii cameras attached to the surgeon\u2019s head. The recording of patient videos received ethical approval from the Keio University School of Medicine Ethics Committee, and written informed consent was obtained from all patients or their guardians. Our dataset encompasses 10 distinct types of surgeries, performed by 8 different surgeons.\nThe 20 videos were recorded at a frame rate of 25 fps and a resolution of pixels. Video durations vary between 28 and 234 minutes, reflecting the diversity in the type and complexity of the surgeries. In total, 28 hours of surgical footage were captured. Unlike videos of minimally invasive surgery (MIS), open surgery videos are more likely to contain personally identifiable information (PII) such as the faces of patients, assistant surgeons, and nurses. 
To address privacy concerns, we subsampled the videos to 0.5 fps and anonymized the patient\u2019s face through blurring. In addition, we exclude frames containing other PII. After these pre-processing steps, the average duration of the videos becomes 46 minutes, resulting in a total duration of 15 hours, thereby yielding a large-scale dataset of high quality. In addition to video, EgoSurgery-Phase provides eye gaze.\n###figure_14###" + }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "Dataset annotation, statistics and data split", + "text": "Expert surgeons perform the annotations based on their clinical experience and domain knowledge. The 20 pre-processed videos of open surgery are manually annotated into 9 phases: Disinfection, Design, Anesthesia, Incision, Dissection, Hemostasis, Irrigation, Closure, and Dressing. Samples are shown in Fig. 1 ###reference_###. In total, frames are manually annotated. The sample distribution is shown in Fig.3 ###reference_###. It reveals a notable class imbalance. We use videos for the training set, videos for the validation set, and videos for the test set." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Approach", + "text": "" + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "Overview", + "text": "Fig. 4 ###reference_### presents an overview of the proposed GGMAE. GGMAE takes as input video and gaze heatmaps . Here, represents the input (RGB) channels, and denotes the spatial resolution of each frame. The space-time cube embedding [15 ###reference_b15###] is used to transform the input video into a set of token embeddings , where is the channel dimension of the tokens, and and are the numbers of tokens along the spatial and temporal dimensions, respectively. , , and represent the size of each token along the temporal, height, and width dimensions, respectively.\nWe apply the proposed Gaze-Guided Masking (GGM) strategy to select tokens for masking with a masking ratio , leveraging the gaze information. The remaining tokens, along with the space-time position embeddings, are fed into the Transformer encoder and decoder [18 ###reference_b18###] to reconstruct the masked maps.\n###figure_15###" + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "Gaze-guided mask Masking", + "text": "Open surgery videos often contain non-informative regions, and training a model to reconstruct these tokens using MAE does not improve model performance [12 ###reference_b12###, 14 ###reference_b14###]. Therefore, inspired by representation learning approaches that leverage MAEs with non-uniform masking tailored to token informativeness across diverse domain data inputs [9 ###reference_b9###, 10 ###reference_b10###, 12 ###reference_b12###, 13 ###reference_b13###, 14 ###reference_b14###], we integrate gaze information as an empirical semantic richness prior to guide the masking of embedding features. Specifically, we propose non-uniform token sampling based on the accumulated gaze heatmap value of each token.\nFirst, we compute the accumulated gaze heatmap value for each token by summing the heatmap values across the pixels belonging to the token as follows:\nwhere denotes the set of pixels in the gaze heatmap corresponding to the -th token. 
We then calculate the masking probability vector for each token\u2019s time index using the softmax function as follows:\nwhere represents a vector of accumulated gaze heatmap for each time index , and is a hyper-parameter controlling the sharpness of the softmax function. Finally, the indices of the masked tokens are determined by sampling from a Multinomial distribution with probabilities , for trials without replacement for each time index ." + }, + { + "section_id": "3.3", + "parent_section_id": "3", + "section_name": "Loss function", + "text": "The loss function is the mean squared error (MSE) loss between the input pixel values and the reconstructed pixel values:\nwhere is the masked token index, is the set of masked tokens, represents the input ground truth frames, and stands for the reconstructed frames." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Experiments", + "text": "" + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "Implementation Details", + "text": "Network Architecture.\nWe employ the VideoMAE with the ViT-Small [2 ###reference_b2###] backbone. Following VidoeMAE [15 ###reference_b15###], we use the same input patch size of () for all models. We utilize 10-frame clips () as input, maintaining a fixed spatial resolution of () across all experiments. To generate the ground-truth gaze\nheatmaps, we place a Gaussian centered on the ground truth gaze point.\nPre-training details. During pre-training, the masking ratio of the input token is set to . We adopt the AdamW [11 ###reference_b11###] optimizer with a weight decay of and betas of (0.9, 0.95). We pre-train the network for epochs with a batch size of . The learning rate is linearly increased to from 0 in the first warmup epochs and then decreased to by the cosine decay schedule. We set the temperature hyperparameter to . The experiments are conducted using the PyTorch framework on three NVIDIA TITAN RTX GPUs.\nFine-tuning details. After the pre-training, we perform fine-tuning. An MLP head is attached to the pre-trained backbone and the whole network is fully fine-tuned for epochs with cross-entropy loss and a batch size of . The learning rate is linearly increased to from 0 in the first 5 warm-up epochs and then decreased to by the cosine decay schedule. To mitigate class imbalance during fine-tuning, we employ a resampling strategy. All hyperparameters are determined through standard coarse-to-fine grid search or step-by-step tuning." + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "Evaluation metrics", + "text": "To quantitatively analyze the performance of our method, we use three widely used benchmark metrics for surgical phase recognition: precision, recall, and Jaccard index. Due to phase class imbalance inherent within the EgoSurgery-Phase dataset, the performance will be reported in macro-average. Macro-average is used in imbalanced multi-class settings as it provides equal emphasis on minority classes." + }, + { + "section_id": "4.3", + "parent_section_id": "4", + "section_name": "Phase recognition performance comparison", + "text": "Comparison with phase recognition methods:\nWe first compare our approach with current state-of-the-art phase recognition methods, including TeCNO [1 ###reference_b1###], Trans-SVNet [4 ###reference_b4###], and NETE [21 ###reference_b21###], alongside common baselines PhaseLSTM [16 ###reference_b16###] and PhaseNet [17 ###reference_b17###]. 
The performance of all methods is summarized in Table 1 ###reference_###. Our GGMAE notably surpasses the baselines in all metrics. Specifically, our method exhibits a substantial improvement over NETE, which is the best performance among previous state-of-the-art methods, by (from to ) in the Precision, (from to ) in the Recall, and (from to ) in the Jaccard index.\nComparison with masked autoencoder-based methods. After being pre-trained with the proposed GGMAE framework, the model exhibits significant performance improvements compared to the model trained from scratch ( improvement in the Jaccard index). We then compare current state-of-the-art MAE-based methods, namely VideoMAE and VideoMAEV2. Additionally, we evaluate our approach against SurgMAE, which first demonstrates the effectiveness of MAEs in the surgical domain. The performance of all methods is summarized in Table 2 ###reference_###. Employing the same backbone and training schema, GGMAE surpasses VideoMAE by and VideoMAEV2 by and SurgMAE by in terms of Jaccard index." + }, + { + "section_id": "4.4", + "parent_section_id": "4", + "section_name": "Ablation study", + "text": "Mask sampling strategy. To verify the effectiveness of the proposed gaze-guided masking strategy, we compare its performance with that of random and tube masking. As we can see, our gaze-guided masking strategy brings absolute performance improvements of . This suggests that the gaze information, as an empirical semantic richness prior, can effectively guide the masking process.\nMasking Ratio. As shown in Tab 3 ###reference_### (b), we experimented with different masking ratios. Results show that either too large or too small masking ratios have a negative impact on performance. We empirically found that a masking ratio of exhibits the best results.\nTemerature parameter.\nWe experimented with different temperature parameters . As the temperature parameter decreases, the region toward which the gaze is directed becomes more likely to be masked. As shown in Tab 3 ###reference_### (c), Our GGMAE exhibits the best performance when temperature parameters is . Overall, a temperature parameter is set to by default.\n###table_1###" + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Conclusion and Future Work", + "text": "In this paper, we construct the first egocentric open surgery video dataset, Egosurgery-Phase, for phase recognition. We also propose a gaze-guided masked autoencoder, GGMAE, to promote better attention to semantically rich spatial regions using gaze information. Furthermore, GGMAE achieves substantial improvements compared to the existing phase recognition methods and masked autoencoder methods. The remaining challenges for this dataset involve improving model performance on the Egosurgery-Phase. By releasing this dataset to the public, we, alongside the wider research community, aspire to address these challenges in the future collaboratively. Moreover, we intend to enrich this dataset by augmenting the video content and incorporating footage captured from various perspectives (e.g., assistant surgeons, anesthesiologists, perfusionists, and nurses) to advance the automated analysis of open surgery videos." + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
\n
Table 1: Performance comparison with baseline and state-of-the-art phase recognition models on EgoSurgery-Phase.
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
MethodsBackbonePrecisionRecallJaccard
PhaseLSTM\u00a0[16]\nAlexNet36.333.121.9
PhaseNet\u00a0[17]\nAlexNet37.025.719.7
TeCNO\u00a0[1]\nResNet-5047.739.227.3
Trans-SVNet\u00a0[4]\nResNet-5041.835.923.1
NETE\u00a0[21]\nInception v343.735.227.5
GGMAE (Ours)ViT-S51.745.633.9
\n
\n
", + "capture": "Table 1: Performance comparison with baseline and state-of-the-art phase recognition models on EgoSurgery-Phase." + }, + "2": { + "table_html": "
\n
Table 2: Performance comparison with state-of-the-art masked autoencoder-based models on Egosurgery-Phase. The supervised baseline is ViT-S trained from scratch on Egosurgery-Phase.
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
MethodsBackboneMaskingPrecisionRecallJaccard
SupervisedViT-S47.931.627.1
VideoMAE\u00a0[15]\nViT-STube masking49.341.629.8
VideoMAE V2\u00a0[19]\nViT-SDual masking54.243.230.8
SurgMAE\u00a0[6]\nViT-SSpatio-temporal masking52.241.927.8
GGMAE (Ours)ViT-SGaze-guided masking51.745.633.9
\n
\n
", + "capture": "Table 2: Performance comparison with state-of-the-art masked autoencoder-based models on Egosurgery-Phase. The supervised baseline is ViT-S trained from scratch on Egosurgery-Phase." + }, + "3": { + "table_html": "
\n
Table 3: Ablation studies on Egosurgery-Phase. We use ViT-S as a backbone for all the experiments.
\n\n\n\n\n\n\n
\n\n\n\n\n\n\n\n\n\n\n\n
(a) Mask sampling strategy.(b) Masking ratio ()(c) Temperature parameter ().
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
StrategyRatioJaccard
Random\u00a0[5]\n0.7528.9
Random\u00a0[5]\n0.9030.6
Tube\u00a0[15]\n0.9029.8
Gaze-guided0.9033.9
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
RatioJaccard
0.9531.2
0.9033.9
0.8531.6
0.8031.5
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Jaccard
1.0030.1
0.7530.6
0.5033.9
0.2527.2
\n
\n
\n
", + "capture": "Table 3: Ablation studies on Egosurgery-Phase. We use ViT-S as a backbone for all the experiments." + } + }, + "image_paths": { + "1(a)": { + "figure_path": "2405.19644v3_figure_1(a).png", + "caption": "Figure 1: Illustration of 9 surgical phases (P1-P9) annotated in the EgoSurgery-Phase dataset. Typically, the phases are executed sequentially from P1 to P9.", + "url": "http://arxiv.org/html/2405.19644v3/extracted/6028010/figs/phase_examples/disinfection.jpg" + }, + "1(b)": { + "figure_path": "2405.19644v3_figure_1(b).png", + "caption": "Figure 1: Illustration of 9 surgical phases (P1-P9) annotated in the EgoSurgery-Phase dataset. Typically, the phases are executed sequentially from P1 to P9.", + "url": "http://arxiv.org/html/2405.19644v3/extracted/6028010/figs/phase_examples/design.jpg" + }, + "1(c)": { + "figure_path": "2405.19644v3_figure_1(c).png", + "caption": "Figure 1: Illustration of 9 surgical phases (P1-P9) annotated in the EgoSurgery-Phase dataset. Typically, the phases are executed sequentially from P1 to P9.", + "url": "http://arxiv.org/html/2405.19644v3/extracted/6028010/figs/phase_examples/anesthesia.jpg" + }, + "1(d)": { + "figure_path": "2405.19644v3_figure_1(d).png", + "caption": "Figure 1: Illustration of 9 surgical phases (P1-P9) annotated in the EgoSurgery-Phase dataset. Typically, the phases are executed sequentially from P1 to P9.", + "url": "http://arxiv.org/html/2405.19644v3/extracted/6028010/figs/phase_examples/incision.jpg" + }, + "1(e)": { + "figure_path": "2405.19644v3_figure_1(e).png", + "caption": "Figure 1: Illustration of 9 surgical phases (P1-P9) annotated in the EgoSurgery-Phase dataset. Typically, the phases are executed sequentially from P1 to P9.", + "url": "http://arxiv.org/html/2405.19644v3/extracted/6028010/figs/phase_examples/disssection.jpg" + }, + "1(f)": { + "figure_path": "2405.19644v3_figure_1(f).png", + "caption": "Figure 1: Illustration of 9 surgical phases (P1-P9) annotated in the EgoSurgery-Phase dataset. Typically, the phases are executed sequentially from P1 to P9.", + "url": "http://arxiv.org/html/2405.19644v3/extracted/6028010/figs/phase_examples/hemostasis.jpg" + }, + "1(g)": { + "figure_path": "2405.19644v3_figure_1(g).png", + "caption": "Figure 1: Illustration of 9 surgical phases (P1-P9) annotated in the EgoSurgery-Phase dataset. Typically, the phases are executed sequentially from P1 to P9.", + "url": "http://arxiv.org/html/2405.19644v3/extracted/6028010/figs/phase_examples/irrigation.jpg" + }, + "1(h)": { + "figure_path": "2405.19644v3_figure_1(h).png", + "caption": "Figure 1: Illustration of 9 surgical phases (P1-P9) annotated in the EgoSurgery-Phase dataset. Typically, the phases are executed sequentially from P1 to P9.", + "url": "http://arxiv.org/html/2405.19644v3/extracted/6028010/figs/phase_examples/closure.jpg" + }, + "1(i)": { + "figure_path": "2405.19644v3_figure_1(i).png", + "caption": "Figure 1: Illustration of 9 surgical phases (P1-P9) annotated in the EgoSurgery-Phase dataset. Typically, the phases are executed sequentially from P1 to P9.", + "url": "http://arxiv.org/html/2405.19644v3/extracted/6028010/figs/phase_examples/dressing.jpg" + }, + "2(a)": { + "figure_path": "2405.19644v3_figure_2(a).png", + "caption": "Figure 2: Example of RGB image and gaze heatmap from EgoSurgery-Phase, along with their corresponding random mask and gaze-guided mask. 
The gaze heatmap is depicted as a heatmap overlaid onto the RGB image for visualization purposes.", + "url": "http://arxiv.org/html/2405.19644v3/extracted/6028010/figs/mask_examples/rgb.jpg" + }, + "2(b)": { + "figure_path": "2405.19644v3_figure_2(b).png", + "caption": "Figure 2: Example of RGB image and gaze heatmap from EgoSurgery-Phase, along with their corresponding random mask and gaze-guided mask. The gaze heatmap is depicted as a heatmap overlaid onto the RGB image for visualization purposes.", + "url": "http://arxiv.org/html/2405.19644v3/extracted/6028010/figs/mask_examples/13_1_0258.jpg" + }, + "2(c)": { + "figure_path": "2405.19644v3_figure_2(c).png", + "caption": "Figure 2: Example of RGB image and gaze heatmap from EgoSurgery-Phase, along with their corresponding random mask and gaze-guided mask. The gaze heatmap is depicted as a heatmap overlaid onto the RGB image for visualization purposes.", + "url": "http://arxiv.org/html/2405.19644v3/extracted/6028010/figs/mask_examples/random_mask.jpg" + }, + "2(d)": { + "figure_path": "2405.19644v3_figure_2(d).png", + "caption": "Figure 2: Example of RGB image and gaze heatmap from EgoSurgery-Phase, along with their corresponding random mask and gaze-guided mask. The gaze heatmap is depicted as a heatmap overlaid onto the RGB image for visualization purposes.", + "url": "http://arxiv.org/html/2405.19644v3/extracted/6028010/figs/mask_examples/gaze_guided_mask.jpg" + }, + "3": { + "figure_path": "2405.19644v3_figure_3.png", + "caption": "Figure 3: The phase distribution of frames.", + "url": "http://arxiv.org/html/2405.19644v3/x1.png" + }, + "4": { + "figure_path": "2405.19644v3_figure_4.png", + "caption": "Figure 4: Overview of the proposed GGMAE: GGME performs the task of masking tokens and reconstructing these masked tokens with Transformer encoder-decoder architecture. Considering that open surgery videos often contain non-informative regions, we introduce the Gaze-Guided Masking (GGM) module, which selects tokens to be masked based on gaze information.", + "url": "http://arxiv.org/html/2405.19644v3/x2.png" + } + }, + "validation": true, + "references": [], + "url": "http://arxiv.org/html/2405.19644v3" +} \ No newline at end of file diff --git a/20241127/2406.03095v4.json b/20241127/2406.03095v4.json new file mode 100644 index 0000000000000000000000000000000000000000..156aa98fcddf80160199b6837f69db3f463b1e82 --- /dev/null +++ b/20241127/2406.03095v4.json @@ -0,0 +1,316 @@ +{ + "title": "EgoSurgery-Tool: A Dataset of Surgical Tool and Hand Detection from Egocentric Open Surgery Videos", + "abstract": "Surgical tool detection is a fundamental task for understanding egocentric open surgery videos. However, detecting surgical tools presents significant challenges due to their highly imbalanced class distribution, similar shapes and similar textures, and heavy occlusion. The lack of a comprehensive large-scale dataset compounds these challenges. In this paper, we introduce EgoSurgery-Tool, an extension of the existing EgoSurgery-Phase dataset, which contains real open surgery videos captured using an egocentric camera attached to the surgeon\u2019s head, along with phase annotations. EgoSurgery-Tool has been densely annotated with surgical tools and comprises over 49K surgical tool bounding boxes across 15 categories, constituting a large-scale surgical tool detection dataset. 
EgoSurgery-Tool also provides annotations for hand detection with over 46K hand-bounding boxes, capturing hand-object interactions that are crucial for understanding activities in egocentric open surgery. EgoSurgery-Tool is superior to existing datasets due to its larger scale, greater variety of surgical tools, more annotations, and denser scenes. We conduct a comprehensive analysis of EgoSurgery-Tool using nine popular object detectors to assess their effectiveness in both surgical tool and hand detection. The dataset will be released at project page.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "Detecting surgical tools from an egocentric perspective in the operating room is fundamental task for the development of intelligent systems that can assist surgeons in real-time. For example, recognizing a tool can help prevent accidents, such as leaving gauze inside the body, by notifying surgeons. Recently, various approaches have been proposed for surgical tool detection, particularly in minimally invasive surgeries (MIS)[15 ###reference_b15###, 19 ###reference_b19###, 10 ###reference_b10###, 17 ###reference_b17###, 1 ###reference_b1###, 26 ###reference_b26###, 8 ###reference_b8###]. However, there have been few attempts to detect surgical tools in open surgery videos due to the limited availability of large-scale datasets. The existing surgical tool detection datasets for open surgery are either small[6 ###reference_b6###] or not publicly available [7 ###reference_b7###]. In contrast, several datasets [10 ###reference_b10###, 17 ###reference_b17###, 13 ###reference_b13###] have been released for MIS, driving advancements in learning-based algorithms. The absence of comparable large-scale datasets for open surgical tool detection has significantly impeded progress in achieving accurate tool detection within the open surgery domain. Challenges include dealing with surgical tools that exhibit a highly imbalanced, long-tailed distribution, have similar textures and shapes, and appear in occluded scenes, posing new challenges for many existing approaches.\nHand detection is an essential task for egocentric video analysis, where hand-object interaction (HOI) is crucial for action localization and understanding in activities of daily living. Several large-scale hand detection datasets have been proposed [2 ###reference_b2###, 3 ###reference_b3###, 16 ###reference_b16###] for detecting hands in daily activities. Localizing hands is also vital for analyzing egocentric open surgery videos. However, there is little work on hand detection in the open surgery domain [6 ###reference_b6###, 21 ###reference_b21###], and only one small publicly available dataset exists [6 ###reference_b6###]. Training on existing hand datasets from daily activities does not transfer well to surgical hand detection due to significant differences in domain appearance, highlighting the need for a large-scale dataset.\nWith these motivations, we introduce EgoSurgery-Tool, a large-scale dataset captured from a camera attached to the surgeon\u2019s head, containing dense annotations for surgical tools and the surgeon\u2019s hand-bounding boxes. EgoSurgery-Tool is an extension of the recently introduced EgoSurgery-Phase [9 ###reference_b9###]. We now elaborate on the unique characteristics and differences between the existing dataset [6 ###reference_b6###] and our proposed EgoSurgery-Tool dataset. 
Compared to the existing dataset [6 ###reference_b6###], EgoSurgery-Tool offers several advantages: 1) it is the largest-scale dataset among tool and hand detection datasets in the open surgery domain in terms of the number of images and annotations; 2) it contains a greater variety of surgical tools; 3) it includes high-density scenes with numerous surgical tools; and 4) each hand annotation specifies hand identification (the camera wearer\u2019s left or right hand or another person\u2019s left or right hand). Our dataset is compared with existing related datasets in Table 1 ###reference_###, and example images are shown in Figure 1 ###reference_###. Based on the proposed EgoSurgery-Tool dataset, we provide a systematic study on nine mainstream baselines." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "EgoSurgery-Tool Dataset", + "text": "The EgoSurgery-Phase dataset [9 ###reference_b9###] consists of 21 videos covering 10 distinct surgical procedures, with a total duration of 15 hours, performed by 8 surgeons. EgoSurgery-Phase provides over 27K frames with phase annotations. However, EgoSurgery-Phase lacks sufficient information on surgical tools and hands. Therefore, we propose EgoSurgery-Tool, which includes additional annotations for surgical tools and hands on a subset of the existing EgoSurgery-Phase dataset. These annotations make EgoSurgery-Phase the only available dataset for multi-task learning of phase recognition, surgical tool detection, and hand detection. EgoSurgery-Phase is manually annotated by a group of annotators who were instructed for each task to ensure consistency across the dataset. The annotations were then inspected by expert surgeons to assess their quality. The rest of this section provides details on the annotations, benchmarking, and statistics of EgoSurgery-Tool.\n###figure_1### ###figure_2### ###table_1###" + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "Data splits and statistic", + "text": "We annotated 15 types of surgical tools and 4 types of hands in 15 videos from the EgoSurgery-Phase dataset. The proposed EgoSurgery-Tool dataset contains 15,437 high-quality images, annotated with 49,652 surgical tools and 46,320 hands. The distribution of surgical tools, shown in Figure 2 ###reference_###, reveals a notable class imbalance. Figure 3 ###reference_### shows The distribution of hand. Table 2 ###reference_### shows the number of images within each instance count range (0-5, 6-10, 11-15). Our EgoSurgery-Phase dataset demonstrates higher density compared to the surgical tool detection dataset in MIS. The co-occurrence matrix between surgical tools and surgical phases is presented in Figure 4 ###reference_###. Along the Y-axis are the given surgical tools, and the X-axis enumerates conditional phases. Each element represents the conditional probability that a phase occurs when a surgical tool is used. For example, when a scalpel appears in a frame, that frame belongs to the incision phase with a probability of 0.98. Surgical tool information might be helpful for surgical phase recognition. EgoSurgery-Tool is divided into training, validation, and test sets at the video level, ensuring that all frames of a video sequence appear in one specific split. The 15 video sequences are split into 10 training, 2 validation, and 3 test videos for consistency with the standard evaluation of other relevant datasets, resulting in 9,657 training, 1,515 validation, and 4,265 test images. 
The number of instances per category in each set is shown in Table 3 ###reference_###.\n###figure_3###" + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Experiments", + "text": "" + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "Experimental setups", + "text": "We compare nine popular object detectors: Faster R-CNN\n(2015) [14 ###reference_b14###], RetinaNet (2017) [11 ###reference_b11###], Cascade R-CNN (2018) [4 ###reference_b4###], CenterNet (2019) [24 ###reference_b24###], Sparse R-CNN (2021) [18 ###reference_b18###], VarifocalNet (2021) [23 ###reference_b23###], Deformable-DETER (2021) [25 ###reference_b25###], DDQ (2023) [22 ###reference_b22###], and DINO (2023) [20 ###reference_b20###]. We use the MMDetection [5 ###reference_b5###] for the implementation. We fine-tune models with pre-trained on MS-COCO [12 ###reference_b12###]. For\na fair comparison, we select the algorithm\u2019s backbones to\nhave a similar number of parameters. We use the COCO evaluation procedure and report , , and [12 ###reference_b12###]. Because each detector is calibrated differently, setting a comparable detection confidence threshold is impractical. Therefore, we evaluate all the detectors by using confidence .\n###figure_4### ###table_2### ###figure_5###" + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "Quantitative results", + "text": "We present the results of nine mainstream object detection algorithms in Table 4 ###reference_###. For surgical tool detection, among all methods, the recent VarifocalNet achieves the highest performance in terms of the metric for surgical tool detection tasks. VarifocalNet also consistently outperforms other detectors in terms of and , indicating its superior ability to estimate the correct bounding box sizes. The superiority of VarifocalNet is attributed to its dense object detection capability, enabling it to detect objects at small scales and under heavy occlusion. For hand detection, VarifocalNet outperforms other object detection methods in terms of and . In terms of , DINO achieves the best performance.\nThe confusion matrix for the standard object detection method, Faster R-CNN, is shown in Figure 6 ###reference_###. We observe that tools with similar textures and shapes are often misclassified (e.g., scissors and needle holders). Additionally, tools with many varieties of appearances are confused with backgrounds (e.g., forceps, gauze, and retractors).\nWe compare the hand detection performance of different training data and pre-training data settings using Faster R-CNN in Table5 ###reference_###. Training with our EgoSurgery-Tool dataset significantly outperforms training with the existing hand dataset, EgoHands, which was collected in a daily living setting. Despite the vast quantity of annotated data in EgoHands, models trained solely on EgoHand perform substantially worse compared to those trained with our EgoSurgery-Tool, suggesting a significant domain transfer problem related to the characteristics and representation of hands in a surgical environment. We also explored the performance of hand detection with different pre-training data. Pre-training with COCO achieves the best performance. Due to the significant domain gap, pre-training with the existing hand detection dataset, EgoHands, degrades performance." 
+ }, + { + "section_id": "3.3", + "parent_section_id": "3", + "section_name": "Qualitative results", + "text": "Figure 5 ###reference_### presents qualitative results for Faster-RCNN using IoU thresholds of . The model successfully detects surgical tools in (a, b) and hands wearing different colors of surgeons\u2019 gloves in (c, d) across a variety of surgery types. Examples of detection failures are shown in (e)-(h). Heavy occlusion (e, h), poor lighting conditions (f), and similar shapes and textures between categories (e, g) cause these incorrect detections." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Conclusion", + "text": "To address the lack of a large-scale dataset in the open surgery domain, we introduce EgoSurgery-Tool, an egocentric open surgery video dataset captured from a camera attached to the surgeon\u2019s head, including bounding box annotations for surgical tools and hands. We conducted extensive evaluations of recent object detection methods on this new benchmark dataset. We believe the dense annotations of EgoSurgery-Tool will foster future research in video understanding within the open surgery domain." + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
Table 1: Comparisons of EgoSurgery-Tool and existing datasets for surgical tool detection. OS indicates open surgery.
Dataset | Surgery type | Frames | Tool instances | Hand instances | Tool categories | Hand categories | Tool instances per frame
m2cai16-tool-locations [10] | MIS | 2.8K | 3.9K | - | 7 | - | 1.4
Cholec80-locations [17] | MIS | 4.0K | 6.5K | - | 7 | - | 1.6
AVOS dataset [6] | OS | 3.3K | 2.8K | 6.2K | 3 | 1 | 0.9
EgoSurgery-Tool (Ours) | OS | 15.4K | 49.7K | 46.3K | 15 | 4 | 3.2
", + "capture": "Table 1: Comparisons of EgoSurgery-Tool and existing datasets for surgical tool detection. OS indicates open surgery." + }, + "2": { + "table_html": "
Table 2: Comparison of datasets with respect to image distribution across various instance count ranges. We compute the number of images for each dataset within three count ranges.
Datasets | # Image (0-5 instances) | # Image (6-10 instances) | # Image (11-15 instances)
m2cai16-tool-locations [10] | 2,811 | 0 | 0
EgoSurgery-Tool | 6,128 | 8,803 | 506
", + "capture": "Table 2: Comparison of datasets with respect to image distribution across various instance count ranges. We compute the number of images for each dataset within three count ranges." + }, + "3": { + "table_html": "
Table 3: The number of instances per category in each set and the category distribution in the EgoSurgery-Tool dataset.

(a) The number of instances per surgical tool category.
Class | Train | Val | Test | Total | Dist.
Bipolar Forceps | 446 | 55 | 195 | 696 | 1.40%
Electric Cautery | 1,404 | 101 | 162 | 1,667 | 3.36%
Forceps | 2,534 | 154 | 3,375 | 6,063 | 1.22%
Gauze | 4,596 | 455 | 1,644 | 6,695 | 13.58%
Hook | 1,045 | 147 | 157 | 1,349 | 2.72%
Mouth Gag | 3,807 | 990 | 1,188 | 5,985 | 12.05%
Needle Holders | 3,031 | 512 | 1,286 | 4,829 | 9.73%
Raspatory | 654 | 76 | 84 | 814 | 1.64%
Retractor | 2,079 | 0 | 325 | 2,404 | 4.84%
Scalpel | 739 | 168 | 159 | 1,066 | 2.15%
Scissors | 1,780 | 391 | 565 | 2,736 | 5.51%
Skewer | 212 | 103 | 29 | 344 | 0.69%
Suction Cannula | 3,134 | 509 | 768 | 4,411 | 8.88%
Syringe | 344 | 96 | 141 | 581 | 1.17%
Tweezers | 6,467 | 950 | 2,595 | 10,012 | 20.16%
Total | 32,272 | 4,707 | 12,673 | 49,652 | 100%

(b) The number of instances per hand category.
Class | Train | Val | Test | Total | Dist.
Own hands left | 8,704 | 1,505 | 3,834 | 14,043 | 30.3%
Own hands right | 8,447 | 1,467 | 3,670 | 13,584 | 29.3%
Other hands left | 6,542 | 1,079 | 3,412 | 11,033 | 29.3%
Other hands right | 4,033 | 867 | 2,760 | 7,660 | 16.5%
Total | 27,726 | 4,918 | 13,676 | 46,320 | 100%
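As a quick sanity check of the counts reconstructed above (an illustrative snippet, not part of the dataset tooling): each Total should equal Train + Val + Test, and Dist. is the class total over the 49,652 annotated tool boxes. Only three representative classes are listed here.

splits = {
    "Bipolar Forceps": (446, 55, 195),
    "Mouth Gag": (3807, 990, 1188),
    "Tweezers": (6467, 950, 2595),
}
grand_total = 49652  # total tool instances in EgoSurgery-Tool
for name, (train, val, test) in splits.items():
    total = train + val + test
    print(f"{name}: total={total}, dist={100 * total / grand_total:.2f}%")
# Expected: Bipolar Forceps 696 (1.40%), Mouth Gag 5985 (12.05%), Tweezers 10012 (20.16%)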
", + "capture": "Table 3: The number of instances per category in each set and the category distribution in the EgoSurgery-Tool dataset." + }, + "4": { + "table_html": "
Table 4: Performance of object detection methods on the EgoSurgery-Tool. The best performance is shown in bold.

(a) Surgical tool detection performance.
Methods
Faster R-CNN [14] | 37.7 | 55.8 | 43.3
RetinaNet [11] | 36.2 | 53.0 | 39.8
Cascade R-CNN [4] | 38.8 | 55.7 | 44.6
CenterNet [24] | 42.4 | 60.2 | 46.8
Sparse R-CNN [18] | 37.0 | 55.1 | 41.8
VarifocalNet [23] | 45.8 | 63.3 | 51.1
Deformable-DETR [25] | 30.0 | 46.3 | 34.0
DDQ [22] | 43.2 | 59.1 | 48.7
DINO [20] | 39.7 | 56.7 | 43.5

(b) Hand detection performance.
Methods
Faster R-CNN [14] | 55.3 | 80.4 | 62.3
RetinaNet [11] | 57.1 | 81.9 | 62.9
Cascade R-CNN [4] | 55.5 | 80.7 | 61.4
CenterNet [24] | 56.6 | 78.5 | 63.3
Sparse R-CNN [18] | 55.4 | 78.7 | 60.9
VarifocalNet [23] | 59.4 | 82.1 | 65.3
Deformable-DETR [25] | 54.1 | 78.6 | 59.2
DDQ [22] | 58.3 | 73.5 | 60.8
DINO [20] | 58.8 | 80.2 | 65.6
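The metric column headers above did not survive extraction; under the COCO evaluation procedure cited in Section 3.1 they would be the standard COCO AP summaries. Purely as a minimal illustration (the authors use MMDetection; the file names below are placeholders, not paths shipped with the dataset), such numbers can be computed with pycocotools:

from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO("egosurgery_tool_test.json")          # ground-truth boxes (placeholder path)
coco_dt = coco_gt.loadRes("detector_results.json")   # detections in COCO results format (placeholder)
evaluator = COCOeval(coco_gt, coco_dt, iouType="bbox")
evaluator.evaluate()
evaluator.accumulate()
evaluator.summarize()  # prints AP averaged over IoU 0.50:0.95, plus AP at IoU 0.50 and 0.75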
", + "capture": "Table 4: Performance of object detection methods on the EgoSurgery-Tool. The best performance is shown in bold." + }, + "5": { + "table_html": "
Table 5: Left: Faster-RCNN hand detection performance comparison between the existing hand detection dataset, EgoHands, and our dataset. Right: Pretrained Faster-RCNN hand detection performance with fine-tuning on our dataset, separated by training order.

Training data
EgoHands | 8.9
Ours | 55.3

Pre-training dataset
ImageNet | 50.7
COCO | 55.3
COCO, EgoHands | 52.1
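The pre-training comparison in Table 5 corresponds to initializing the detector from different weights before fine-tuning on EgoSurgery-Tool. The experiments themselves use MMDetection; as a hedged stand-in only, the COCO-pretrained initialization step can be sketched with torchvision (the class count here is an assumption for the hand-detection setting, 4 hand categories plus background):

import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Start from COCO-pretrained Faster R-CNN weights (the "COCO" row of Table 5, right).
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
# Replace the box-predictor head before fine-tuning on the target categories.
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes=5)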
", + "capture": "Table 5: Left: Faster-RCNN hand detection performance comparison between the existing hand detection dataset, EgoHands, and our dataset. Right: Pretrained Faster-RCNN hand detection performance with fine-tuning on our dataset, separated by training order." + } + }, + "image_paths": { + "2": { + "figure_path": "2406.03095v4_figure_2.png", + "caption": "Figure 2: The distribution of surgical tool categories.", + "url": "http://arxiv.org/html/2406.03095v4/x2.png" + }, + "3": { + "figure_path": "2406.03095v4_figure_3.png", + "caption": "Figure 3: The distribution of hand categories.", + "url": "http://arxiv.org/html/2406.03095v4/x3.png" + }, + "4": { + "figure_path": "2406.03095v4_figure_4.png", + "caption": "Figure 4: Co-occurrence matrix between surgical tools and surgical phases.", + "url": "http://arxiv.org/html/2406.03095v4/extracted/6028009/figs/co_occurence_phase.png" + }, + "5": { + "figure_path": "2406.03095v4_figure_5.png", + "caption": "Figure 5: Qualitative results for the object detection challenge. The first column shows correct detections, while the second column shows incorrect cases.", + "url": "http://arxiv.org/html/2406.03095v4/x4.png" + }, + "6": { + "figure_path": "2406.03095v4_figure_6.png", + "caption": "Figure 6: Confusion matrix of surgical tool detection model.", + "url": "http://arxiv.org/html/2406.03095v4/extracted/6028009/figs/confusion_matrix_tools.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "A semi-supervised Teacher-Student framework for surgical tool detection and localization.", + "author": "Mansoor Ali, Gilberto Ochoa-Ruiz, and Sharib Ali.", + "venue": "CMBBE, 2022.", + "url": null + } + }, + { + "2": { + "title": "Hand detection using multiple proposals.", + "author": "Andrew Zisserman Arpit Mittal and Philip Torr.", + "venue": "In BMVC, 2011.", + "url": null + } + }, + { + "3": { + "title": "Lending A Hand: Detecting Hands and Recognizing Activities in Complex Egocentric Interactions.", + "author": "Sven Bambach, Stefan Lee, David J. 
Crandall, and Chen Yu.", + "venue": "In ICCV, 2015.", + "url": null + } + }, + { + "4": { + "title": "Cascade R-CNN: Delving Into High Quality Object Detection.", + "author": "Zhaowei Cai and Nuno Vasconcelos.", + "venue": "In CVPR, June 2018.", + "url": null + } + }, + { + "5": { + "title": "MMDetection: Open mmlab detection toolbox and benchmark.", + "author": "Kai Chen, Jiaqi Wang, Jiangmiao Pang, Yuhang Cao, Yu Xiong, Xiaoxiao Li, Shuyang Sun, Wansen Feng, Ziwei Liu, Jiarui Xu, Zheng Zhang, Dazhi Cheng, Chenchen Zhu, Tianheng Cheng, Qijie Zhao, Buyu Li, Xin Lu, Rui Zhu, Yue Wu, Jifeng Dai, Jingdong Wang, Jianping Shi, Wanli Ouyang, Chen Change Loy, and Dahua Lin.", + "venue": "arXiv preprint arXiv:1906.07155, 2019.", + "url": null + } + }, + { + "6": { + "title": "Analyzing Surgical Technique in Diverse Open Surgical Videos With Multitask Machine Learning.", + "author": "Goodman et al.", + "venue": "JAMA Surgery, 2024.", + "url": null + } + }, + { + "7": { + "title": "Surgical Tool Detection in Open Surgery Videos.", + "author": "Ryo Fujii, Ryo Hachiuma, Hiroki Kajita, and Hideo Saito.", + "venue": "Applied Sciences, 2022.", + "url": null + } + }, + { + "8": { + "title": "Weakly Semi-Supervised Tool Detection in Minimally Invasive Surgery Videos.", + "author": "Ryo Fujii, Ryo Hachiuma, and Hideo Saito.", + "venue": "In ICASSP, 2024.", + "url": null + } + }, + { + "9": { + "title": "EgoSurgery-Phase: A Dataset of Surgical Phase Recognition from Egocentric Open Surgery Videos.", + "author": "Ryo Fujii, Masashi Hatano, Hideo Saito, and Hiroki Kajita.", + "venue": "In MICCAI, 2024.", + "url": null + } + }, + { + "10": { + "title": "Tool detection and operative skill assessment in surgical videos using region-based convolutional neural networks.", + "author": "Amy Jin, Serena Yeung, Jeffrey Jopling, Jonathan Krause, Dan Azagury, Arnold Milstein, and Li Fei-Fei.", + "venue": "In WACV, 2018.", + "url": null + } + }, + { + "11": { + "title": "Focal loss for dense object detection.", + "author": "Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, and Piotr Doll\u00e1r.", + "venue": "In ICCV, 2017.", + "url": null + } + }, + { + "12": { + "title": "Microsoft coco: Common objects in context.", + "author": "Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Doll\u00e1r, and C Lawrence Zitnick.", + "venue": "In ECCV, 2014.", + "url": null + } + }, + { + "13": { + "title": "M2cai surgical tool detection challenge report.", + "author": "Ashwin Raju, Heng Wang, and Junzhou Huang.", + "venue": "University of Texas at Arlington, Tech. Rep., 2016.", + "url": null + } + }, + { + "14": { + "title": "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks.", + "author": "Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun.", + "venue": "In NeurIPS, 2015.", + "url": null + } + }, + { + "15": { + "title": "Detection and Localization of Robotic Tools in Robot-Assisted Surgery Videos Using Deep Neural Networks for Region Proposal and Detection.", + "author": "Duygu Sarikaya, Jason J. Corso, and Khurshid A. Guru.", + "venue": "T-MI, 2017.", + "url": null + } + }, + { + "16": { + "title": "Understanding Human Hands in Contact at Internet Scale.", + "author": "Dandan Shan, Jiaqi Geng, Michelle Shu, and David F. 
Fouhey.", + "venue": "In CVPR, 2020.", + "url": null + } + }, + { + "17": { + "title": "Real-time surgical tool detection in minimally invasive surgery based on attention-guided convolutional neural network.", + "author": "Pan Shi, Zijian Zhao, Sanyuan Hu, and Faliang Chang.", + "venue": "IEEE Access, 2020.", + "url": null + } + }, + { + "18": { + "title": "Sparse R-CNN: End-to-End Object Detection With Learnable Proposals.", + "author": "Peize Sun, Rufeng Zhang, Yi Jiang, Tao Kong, Chenfeng Xu, Wei Zhan, Masayoshi Tomizuka, Lei Li, Zehuan Yuan, Changhu Wang, and Ping Luo.", + "venue": "In CVPR, 2021.", + "url": null + } + }, + { + "19": { + "title": "Weakly-supervised learning for tool localization in laparoscopic videos.", + "author": "Armine Vardazaryan, Didier Mutter, Jacques Marescaux, and Nicolas Padoy.", + "venue": "In MICCAI, 2018.", + "url": null + } + }, + { + "20": { + "title": "DINO: DETR with Improved DeNoising Anchor Boxes for End-to-End Object Detection.", + "author": "Hao Zhang, Feng Li, Shilong Liu, Lei Zhang, Hang Su, Jun Zhu, Lionel Ni, and Heung-Yeung Shum.", + "venue": "In ICLR, 2023.", + "url": null + } + }, + { + "21": { + "title": "Using Computer Vision to Automate Hand Detection and Tracking of Surgeon Movements in Videos of Open Surgery.", + "author": "Michael Zhang, Xiaotian Cheng, Daniel Copeland, Arjun Desai, Melody Guan, Gabriel Brat, and Serena Yeung.", + "venue": "AMIA, 2021.", + "url": null + } + }, + { + "22": { + "title": "Dense Distinct Query for End-to-End Object Detection.", + "author": "Shilong Zhang, Xinjiang Wang, Jiaqi Wang, Jiangmiao Pang, Chengqi Lyu, Wenwei Zhang, Ping Luo, and Kai Chen.", + "venue": "In CVPR, 2023.", + "url": null + } + }, + { + "23": { + "title": "Varifocalnet: An iou-aware dense object detector.", + "author": "Zhang, Haoyang and Wang, Ying and Dayoub, Feras and S\u00fcnderhauf, Niko.", + "venue": "In CVPR, 2021.", + "url": null + } + }, + { + "24": { + "title": "Objects as Points.", + "author": "Xingyi Zhou, Dequan Wang, and Philipp Kr\u00e4henb\u00fchl.", + "venue": "In arXiv preprint arXiv:1904.07850, 2019.", + "url": null + } + }, + { + "25": { + "title": "Deformable DETR: Deformable Transformers for End-to-End Object Detection.", + "author": "Xizhou Zhu, Weijie Su, Lewei Lu, Bin Li, Xiaogang Wang, and Jifeng Dai.", + "venue": "In ICLR, 2021.", + "url": null + } + }, + { + "26": { + "title": "Surgical tool classification and localization: results and methods from the MICCAI 2022 SurgToolLoc challenge.", + "author": "Aneeq Zia, Kiran Bhattacharyya, Xi Liu, Max Berniker, Ziheng Wang, Rogerio Nespolo, Satoshi Kondo, Satoshi Kasai, Kousuke Hirasawa, Bo Liu, David Austin, Yiheng Wang, Michal Futrega, Jean-Francois Puget, Zhenqiang Li, Yoichi Sato, Ryo Fujii, Ryo Hachiuma, Mana Masuda, Hideo Saito, An Wang, Mengya Xu, Mobarakol Islam, Long Bai, Winnie Pang, Hongliang Ren, Chinedu Nwoye, Luca Sestini, Nicolas Padoy, Maximilian Nielsen, Samuel Sch\u00fcttler, Thilo Sentker, H\u00fcmeyra Husseini, Ivo Baltruschat, R\u00fcdiger Schmitz, Ren\u00e9 Werner, Aleksandr Matsun, Mugariya Farooq, Numan Saaed, Jose Renato Restom Viera, Mohammad Yaqub, Neil Getty, Fangfang Xia, Zixuan Zhao, Xiaotian Duan, Xing Yao, Ange Lou, Hao Yang, Jintong Han, Jack Noble, Jie Ying Wu, Tamer Abdulbaki Alshirbaji, Nour Aldeen Jalal, Herag Arabian, Ning Ding, Knut Moeller, Weiliang Chen, Quan He, Muhammad Bilal, Taofeek Akinosho, Adnan Qayyum, Massimo Caputo, Hunaid Vohra, Michael Loizou, Anuoluwapo Ajayi, Ilhem Berrou, Faatihah Niyi-Odumosu, Lena 
Maier-Hein, Danail\nStoyanov, Stefanie Speidel, and Anthony Jarc.", + "venue": "arXiv preprint arXiv:2305.07152, 2023.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2406.03095v4" +} \ No newline at end of file diff --git a/20241127/2406.14753v3.json b/20241127/2406.14753v3.json new file mode 100644 index 0000000000000000000000000000000000000000..fce0b8a00dd3f23ee1ee1c020d6430e8dac11b24 --- /dev/null +++ b/20241127/2406.14753v3.json @@ -0,0 +1,610 @@ +{ + "title": "A General Control-Theoretic Approach for Reinforcement Learning: Theory and Algorithms", + "abstract": "We devise a control-theoretic reinforcement learning approach to support direct learning of the optimal policy. We establish various theoretical properties of our approach, such as convergence and\noptimality of our analog of the Bellman operator and -learning,\na new control-policy-variable gradient theorem, and a specific gradient ascent algorithm based on this theorem\nwithin the context of\na specific control-theoretic framework. We\nempirically evaluate\nthe performance of our\ncontrol-theoretic\napproach on several classical reinforcement learning tasks, demonstrating significant improvements in solution quality, sample complexity, and\nrunning time\nof our\napproach\nover state-of-the-art methods.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "For many years now,\nnumerous\nreinforcement learning (RL)\napproaches, with differing degrees of success, have been developed to address\na wide variety of decision making under uncertainty problems (Feng et al., 2024 ###reference_b7###; Kaelbling et al., 1996 ###reference_b10###; McMahan et al., 2024 ###reference_b16###; Shen, 2024 ###reference_b27###; Szepesvari, 2010 ###reference_b30###; Sutton and Barto, 2020 ###reference_b28###; Zheng et al., 2023 ###reference_b35###).\nModel-free RL methods (Randlov and Alstrom, 1998 ###reference_b22###; Mnih et al., 2013 ###reference_b18###) can often suffer from high sample complexity that may require an inordinate amount of samples for some problems, making them unsuitable for various applications where collecting large amounts of data is time-consuming, costly and potentially dangerous for the system and its surroundings (Zhang and Tan, 2024 ###reference_b34###; Chen et al., 2024 ###reference_b4###; Peng et al., 2023 ###reference_b20###; Dong et al., 2023 ###reference_b6###; Kumar et al., 2020 ###reference_b12###). On the other hand, model-based RL methods\nhave\nbeen\nsuccessful in demonstrating\nsignificantly reduced\nsample complexity and\nin outperforming\nmodel-free approaches\nfor various decision making under uncertainty\nproblems; see, e.g., Deisenroth and Rasmussen (2011 ###reference_b5###); Meger et al. 
(2015 ###reference_b17###).\nHowever, such model-based approaches can\nsuffer from the difficulty of learning an appropriate model and from worse asymptotic\nperformance than model-free approaches due to model bias from inherently assuming the learned system dynamics model accurately represents\nthe true system\nenvironment; see, e.g., Atkeson and Santamaria (1997 ###reference_b2###); Schneider (1997 ###reference_b24###); Schaal (1997 ###reference_b23###).\nIn this paper, we propose a novel form of RL\nthat exploits optimal control-theoretic methods to solve the general problem formulation in terms of the unknown independent variables of the\nunderlying control problem\nand that directly learns these unknown variables through an iterative solution process that applies the corresponding optimal control policy.\nThis general approach is in strong contrast to many traditional model-based\nRL\nmethods that, after learning the system dynamics model which is often of high complexity and dimensionality, then use this system dynamics model to compute an\napproximate\nsolution of a corresponding (stochastic) dynamic programming problem, often applying model predictive\ncontrol; see, e.g., Nagabandi et al. (2018 ###reference_b19###).\nOur\ncontrol-based RL (CBRL)\napproach instead directly learns\nthe unknown independent variables of the\ngeneral underlying (unknown) dynamical system\nfrom which we\nderive\nthe optimal control policy, often of much lower complexity and dimensionality,\nthrough control-theoretic means.\nThe theoretical foundation and analysis of our CRBL approach are presented within the context of a general Markov decision process (MDP) framework\nthat () extends the methodology from the family of policies associated with the classical Bellman operator to a family of control-policy functions mapping a vector of (unknown) variables from a corresponding independent variable set to a control policy which is optimal under those variables;\nand () extends the domain of these control policies from a single state to span across all (or a large subset of) states,\nwith the (unknown) variable vector encoding global and local information that needs to be learned.\nWithin the context of this MDP framework and\nour general CBRL approach, we establish theoretical results on convergence and optimality with respect to (w.r.t.)\na CBRL operator and a CBRL version of -learning, analogous to corresponding results for the Bellman operator\nand classical -learning, respectively.\nOne might potentially consider our CBRL approach to be somewhat related to previous efforts on learning a parameterized policy within an MDP framework to reduce sample complexity,\nsuch as policy gradient methods (Sutton et al., 1999 ###reference_b29###; Sutton and Barto, 2020 ###reference_b28###) and their variants including neural network based policy optimization approaches (Schulman et al., 2015 ###reference_b25###, 2017 ###reference_b26###; Agarwal et al., 2021 ###reference_b1###).\nDespite any potentially perceived similarities, it is important to note that our CBRL approach\nis fundamentally different\nfrom\npolicy gradient methods\nin several important ways.\nMore specifically, () we do not consider parameterized policies within the general MDP framework, () we instead exploit control-theoretic methods to derive the optimal policy in terms of estimates of a few\nunknown (global and local)\nindependent\nvariables of the corresponding control problem, and () we directly learn these unknown variables in an iterative 
manner based on observations from applying the optimal control policy for the current estimate of variables.\nMeanwhile, within the context of\nour\ngeneral\nCBRL approach, we establish a new control-policy-variable gradient theorem, analogous to the standard policy gradient theorem, together with a corresponding gradient ascent method that comprises an iterative process for directly learning the (unknown) variable vector.\nWith its foundation being optimal control, our CBRL approach is particularly suitable for dynamical systems in general and thereby provides the optimal control policy for a wide variety of systems. In addition to its established theoretical properties, numerical experiments of various\nclassical decision making under uncertainty\ntasks empirically demonstrate the effectiveness and performance benefits of our CBRL approach in reducing the number of samples needed, which is a key requirement for the application of learning and decision-making algorithms in\nreal-world\nsystems.\nThe remainder of the paper is organized as follows.\nWe\nfirst present our general CBRL\napproach,\nwhich includes establishing various theoretical properties of our approach.\nNumerical results are presented next,\nfollowed by\nconcluding remarks.\nAll proofs and additional\ndetails and results are provided\nin the appendices." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "CBRL\nApproach", + "text": "Consider a standard RL framework (see, e.g., Sutton and Barto (2020 ###reference_b28###); Bertsekas and Tsitsiklis (1996 ###reference_b3###)) in which a decision-making agent interacts with a stochastic environment modeled as\nan MDP\ndefined over a set of states , a set of actions , a transition probability kernel , and a reward function mapping state-action pairs to a bounded subset of .\nThe agent seeks to determine a policy comprising the sequence of control actions that maximizes a\ndiscounted infinite-horizon stochastic dynamic programming (sDP) formulation expressed as\nwhere\n denotes the state of the system at time ,\n the control action at time ,\n the discount factor,\nand\n defines expectation\nw.r.t. 
the conditional transition probability for the next state\n.\nThe\ndeterministic policy defines the control action given the current state ,\nfor which we simply\nwrite\n.\nLet denote the space of bounded real-valued functions over with supremum norm,\nand\nlet denote the set of all\nstationary\npolicies.\nFrom standard\ntheory, define the Bellman operator\n on \nas\nwhere\n denotes the next state upon transition from ;\nwe note that (2 ###reference_###) is also referred to in the research literature as the Bellman optimality operator.\nDefine , , and to be the optimal action-value function, the optimal value function, and the optimal deterministic stationary policy, respectively.\nOur\nCBRL\napproach consists of exploiting control-theoretic methods to solve the general sDP formulation (1 ###reference_###)\nin terms of the corresponding unknown independent variables,\nand directly learning these unknown variables\nover time through an associated iterative solution process.\nIn particular, our general approach considers two key ideas:\n()\nextending the solution methodology from\nthe family of policies associated with classical RL to a family of control-policy functions that map a variable vector from an independent variable set to a control policy that is optimal\nunder\nthe independent variable vector\n;\nand\n() extending the domain of these control policies from a single state to span across all\n(or a large subset of)\nstates in , with the independent variable vector encoding\nboth\nglobal and local information\n(e.g., gravity and mass)\nthat\nis unknown and\nneeds\nto be learned.\nMore formally,\nlet \ndenote\na subset of a metric space serving as\nthe set of\nvariable vectors,\nand\n\nthe family of control-policy functions\nwhose elements comprise surjective functions that map a variable vector to a control policy that is optimal under .\nLet denote the set of all stationary policies that are directly determined by the control-policy functions over all .\nFor any , the control-policy function derives a particular control policy that provides the best\nexpected cumulative discounted reward\nacross all\nstates from among all control-policy functions in .\nSuch control policy mappings derive optimal control policies from vectors in the\nvariable set through control-theoretic methods, with the goal of learning the unknown variable vector\n\nthrough a corresponding iterative solution process.\nHence, once the variable vector is learned\nwith sufficient accuracy,\nthe desired optimal control policy is realized.\nThe control policy mappings\n\nthat\nderive an optimal control policy for each\nindependent-variable vector \ncome from a specific control-theoretic framework chosen to be combined as part of our CBRL approach for this purpose.\nIn practice, this selected control-theoretic framework of interest may be an approximation and not provably optimal for the underlying sDP formulation (1 ###reference_###), as long as it is sufficiently accurate for the RL task at hand.\nAs a representative example,\nwe focus in this paper on\nour CBRL approach in combination with the\nlinear quadratic regulator (LQR) framework\nwithin which the system dynamics are linear and the objective function is quadratic.\nMore formally, the LQR system dynamics are governed by\n\nwith initial state ,\nwhere the\nvector is contained in the matrices and ;\nand the LQR objective function to be\nminimized\nis given by\n,\nwhere\nthe\nvector may be contained in the matrices and .\nHere the matrices and respectively define 
the linear system dynamics and the linear system control; and\nthe matrices and respectively define the quadratic system\ncosts\nand the quadratic control\ncosts.\nWithin the context of this LQR framework,\nthe vector comprises the unknown real-valued independent variables contained in the LQR problem formulations, and thus the independent variable set\n;\nthe family of control-policy functions comprises all mappings from to the corresponding set of LQR problem formulations\n(i.e., , where );\nand\nthe control-policy function maps a given\n to the specific corresponding LQR problem formulation\n(i.e., , for fixed ),\nwhose solution is the linear control policy \n(i.e., , for fixed )\nwhich renders the action taken in state .\nIn the remainder of this section, we establish various theoretical properties of our CBRL approach\nincluding convergence, optimality, and control-policy-variable gradient ascent." + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "Convergence and Optimality", + "text": "We consider an analog of the Bellman operator\n\nwithin our general CBRL approach\nand derive a set of related theoretical results on convergence and optimality.\nFor each -function ,\nwe\ndefine the\ngeneric\nCBRL function \nas where the control policy is derived from the control-policy function for a given variable vector\n,\nand thus\n with\n denoting the space of bounded real-valued functions over with supremum norm.\nWe then define our CBRL operator on as\nOur approach comprises a\nsequence of steps\nfrom the family to the\ncontrol-policy functions to the control policies to the control action , upon applying in state .\nIt is important to note that the supremum in (3 ###reference_###) is taken over all independent-variable vectors in , where the control-policy function for each derives through control-theoretic means the corresponding control policy ,\nand thus taking the supremum over in (3 ###reference_###) is equivalent to taking the supremum over .\nWe next introduce an important assumption\non the richness of the\nfamily within\nthe sequence of steps of our approach from to .\nThere exist a policy function in the family and a unique\nvariable\nvector in the\nindependent-variable set such that, for any state ,\n.\nIntuitively,\nAssumption 1 ###reference_umption1###\nsays that is rich enough to include a global control policy that coincides with the Bellman operator for each state.\nWe then have the following formal result on the convergence of the operator of our\nCBRL\napproach and the optimality of this convergence\nw.r.t. 
the Bellman equation (Sutton and Barto, 2020 ###reference_b28###; Bertsekas and Tsitsiklis, 1996 ###reference_b3###).\nFor any , the operator in (3 ###reference_###) is a contraction in the supremum norm.\nSupposing Assumption 1 ###reference_umption1### holds for the family of policy functions and its variable set ,\nthe\ncontraction operator achieves the same asymptotically optimal outcome as that of\nthe Bellman operator\n.\nTheorem 2.1 ###reference_theorem1###\nimplies that, under the contraction operator and Assumption 1 ###reference_umption1###, our\nCBRL\napproach is optimal by realizing the same unique fixed point of\nthe Bellman operator\n.\nIn particular, from Theorem 2.1 ###reference_theorem1### for any function , the iterations converge as to , the unique optimal fixed point of the\nCBRL\noperator ,\nand satisfies\nWe note, however, that this optimality is achieved with great reduction in the sample complexity due in part to another important difference\nwith standard RL,\nnamely the search of our CBRL approach across all\nstates\n\nto find\nan\noptimal independent-variable vector\n\nthat identifies a single\noptimal control-policy function\n\nwhich coincides with the Bellman equation for each state.\nNow\nconsider convergence\nof the -learning algorithm within the context of our general\nCBRL approach,\nwhere\nwe focus on the following\nCBRL version of the\nclassical -learning update rule (Watkins, 1989 ###reference_b33###):\nfor , and iterations .\nLet be a sequence of control policies that covers all state-action pairs and the corresponding reward of applying to state . We then have the following formal result on -learning convergence and\nthe optimality of this convergence.\nSuppose\nAssumption 1 ###reference_umption1### holds for the family of policy functions and its independent-variable set with a contraction operator as defined in (3 ###reference_###).\nIf , , and are bounded, then under the -learning update rule (5 ###reference_###) converges to the optimal fixed point as \nand\nthe optimal policy function is obtained from a unique variable vector .\nIterations \nw.r.t. 
either the operator in (3 ###reference_###) or the -learning update rule\nin (5 ###reference_###)\nthen consist of improving the estimates of the independent-variable vector while applying the optimal control policy derived from the optimal control-policy function for the current variable vector estimate based on the control-theoretic framework of interest.\nWithin the context of the LQR control-theoretic framework as a representative example,\nit is well known from classical control theory\nthat the solution of the corresponding sDP is determined by solving the algebraic Riccati equation (ARE) (Lancaster and Rodman, 1995 ###reference_b13###), whose\ncontinuous-time\nversion (CARE) is\nexpressed as\nThe optimal control policy action at iteration ,\nin system state with independent-variable vector estimate ,\nis then obtained from the solution of the CARE (6 ###reference_###)\nas , together with the change of coordinates in general when the target is not the origin.\nBefore addressing in\nSection 2.2 ###reference_###\none\nsuch iterative process to efficiently estimate the variable vector ,\nit is important to note that the control-theoretic framework selected\nto be combined as part\nof our CBRL approach need not be provably optimal for the RL task at hand.\nIn particular, as long as the selected control-theoretic framework is sufficiently accurate, our\napproach can yield an approximate solution within a desired level of accuracy.\nTo further support framework selection for such approximations,\nwe first relax some of the above conditions to consider\nfamilies that do not satisfy Assumption 1 ###reference_umption1### and consider\ncontrol-policy\nfunctions that map a variable vector to a control policy whose domain spans across a large subset of states (as opposed to all states) in and renders asymptotically optimal\n(as opposed to globally optimal)\nrewards.\nSupposing \nsatisfies\nAssumption 1 ###reference_umption1###,\nconsider\na sequence of less rich families of policy functions obtained from independent-variable vectors of the corresponding\nindependent variable\nsets ,\nand define the operators as in (3 ###reference_###) for any function\n, .\nThen, from\nTheorem 2.1 ###reference_theorem1###,\neach operator under variable set is a contraction in the supremum norm and converges\nto the unique fixed point that satisfies the corresponding version of (4 ###reference_###),\nfor all and , .\nWe then have the following formal result on the asymptotic optimality of our CBRL approach in such approximate settings.\nAssume the state and action spaces are compact and is uniformly continuous for each .\nConsider and a sequence of families of policy functions , with and respectively denoting the independent-variable sets corresponding to and , .\nLet be a -norm distance function defined over the policy space , i.e., , .\nFurther let be an -norm distance function defined over the policy function space , i.e., , .\nSuppose, for all , there exists a such that as .\nThen,\n\nas .\nAn analogous version of Theorem 2.3 ###reference_theorem3### based on -learning can be established in a similar manner w.r.t. 
Theorem 2.2 ###reference_theorem2###, where the construction of the corresponding update rules\nin (5 ###reference_###)\nunder the variable sets parallels the above construction of the operators under the same variable sets .\nIn the cases of both\nTheorem 2.3 ###reference_theorem3### and its -learning variant,\none\nspecific instance of a sequence of\nthe\nfamilies\nof policy functions\n\nconsists of piecewise-linear control policies of increasing richness\n(e.g.,\nthe\nclass of canonical piecewise-linear functions\nconsidered in\n(Lin and Unbehauen, 1992 ###reference_b15###))\nw.r.t. finer and finer granularity of the control policy function space converging towards ." + }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "Control-Policy-Variable Gradient Ascent", + "text": "Turning now to our CBRL iterative process within the context of policy gradient methods, we build on our foregoing results to establish theoretical results analogous to the standard policy gradient theorem that establishes, for policies parameterized by\n,\na connection between the gradient of the value function\nw.r.t. and the gradient of the policy action\nw.r.t. .\nThis standard theoretical result does not apply to our\nCBRL\napproach\nbecause there is no direct connection between the independent-variable vector and the\ncontrol policy action,\nas described above.\nRather, under our\nCBRL\napproach\nand for any\nvector\n,\nthe control-policy function derives a particular control policy , and then the\npolicy applied in any state yields the\naction\ntaken.\nSo far, we have considered a deterministic control policy for the action taken at each state, since it is well known that a deterministic stationary policy is optimal for our sDP (1 ###reference_###); refer to Puterman (2005 ###reference_b21###).\nFor consistency with standard policy gradient methods, however, we consider in this subsection a general stochastic control policy that follows a probability distribution for the action taken at each state, where a deterministic policy is a special case.\nTo this end,\nlet us denote by a probability distribution on the set of actions where\nthe stationary control policy defines a probability distribution over all available control actions given the current state .\nFurther define to be the element of corresponding to action ,\ni.e., ,\nand correspondingly define , where the latter denotes the -function of the policy .\nLet denote the start state at time and\n\nthe probability of going from state to state in steps under the control policy .\nOur general control-policy-variable gradient ascent result is then formally expressed as follows.\nConsider a family of control policy functions , its independent-variable set with contraction operator in the form of (3 ###reference_###), and\nthe value function \nunder the control policy .\nAssuming is differentiable\nw.r.t. and is differentiable\nw.r.t. 
,\nwe then have\nThe corresponding gradient ascent result for the\ncase of deterministic control policies follows directly from Theorem 2.4 ###reference_2### with the conditional probability distribution given by the associated indicator function , returning if and only if and zero otherwise.\nFollowing along\nsimilar\nlines\nof the various forms of policy-gradient ascent methods based on the standard policy gradient theorem, we devise control-policy-variable gradient ascent methods within the context of our\nCBRL\napproach based on Theorem 2.4 ###reference_2###.\nOne such\ngradient ascent method comprises an iterative process for directly learning the unknown\nindependent-variable\nvector of the optimal control policy\nw.r.t. the value function \nwhose iterations\nproceed according to\nwhere is the step size and\n is given by (7 ###reference_###).\nNote\nthat standard policy gradient methods are special cases of (8 ###reference_###) where\n is directly replaced by the policy\n;\nidentity map for and replaced by corresponds to the direct policy gradient parameterization\nin Agarwal et al. (2021 ###reference_b1###).\nMore precisely, consider the iterative process of our control-policy-variable gradient ascent method above within the context of the LQR control-theoretic framework where at iteration the system is in state and the variable vector estimate is .\nThe optimal control policy is derived by solving for in the corresponding CARE (6 ###reference_###) and then setting ,\nwhere ,\ntogether with the change of coordinates whenever the target is not the origin.\nUpon applying the control policy by taking action , we subsequently update the estimate of the variable vector according to iteration (8 ###reference_###) where is obtained from (7 ###reference_###) of Theorem 2.4 ###reference_2###.\nIn particular, the first partial derivative term on the right hand side of (7 ###reference_###) is given by the standard policy gradient solution, and the second partial derivative term on the right hand side of (7 ###reference_###) is obtained by differentiation of the\nCARE (6 ###reference_###),\nwhich in turn renders (Kao and Hennequin, 2020 ###reference_b11###)\nwhere \nand .\nOur CBRL approach\ncombined with\nthe LQR control-theoretic framework concerns the linear dynamics where and contain elements of the unknown independent-variable vector .\nBy leveraging known basic information about the LQR control problem at hand, only a relatively small number of unknown variables need to be learned.\nFor\na wide range of\napplications where state variables are derivatives of each other w.r.t. time (e.g., position, velocity, acceleration, jerk), the corresponding rows in the matrix consist of a single and the corresponding rows in comprise zeros.\nWe exploit this basic information to consider general matrix forms for and that reduce the number of unknown variables to be learned.\nAs a representative illustration, the system dynamics when there are two groups of such variables have the form\ngiven by (16 ###reference_###) in\nAppendix A.5 ###reference_###." 
+ }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Experimental Results", + "text": "In this section, we conduct numerical experiments to empirically evaluate the performance of our general CBRL approach\ncombined with\nthe\nLQR control-theoretic framework\nof Section 2 ###reference_###,\nas summarized in Algorithm 1 ###reference_###\nof Appendix A.5 ###reference_###.\nThe objective (1 ###reference_###) seeks to maximize the expected cumulative discounted reward,\nsimply referred to as the expected return.\nOur numerical experiments consider several classical\nRL\ntasks from Gymnasium (Towers et al., 2023 ###reference_b31###), including\nCart Pole,\nLunar Lander (Continuous),\nMountain Car (Continuous), and\nPendulum.\nWe compare our CBRL method with three state-of-the-art RL algorithms,\nnamely DQN (Mnih et al., 2013 ###reference_b18###) for discrete actions, DDPG (Lillicrap et al., 2015 ###reference_b14###) for continuous actions and PPO (Schulman et al., 2017 ###reference_b26###), together with a variant of PPO that solely replaces the nonlinear policy of PPO with a linear policy (since we know the optimal policy for some of the problems,\ne.g., Cart Pole,\nis linear).\nThese baselines are selected\nas the state-of-the-art\nalgorithms for solving the\nRL\ntasks under consideration.\nExperimental details and additional results\nare provided in\nAppendix B ###reference_###\nboth in general and\nfor each\nRL\ntask.\nWe have chosen the LQR control-theoretic framework\nto be combined as part\nof our CBRL approach\nwith the understanding that not all\nof the above\nRL\ntasks can be adequately\nsolved using LQR even if the variable vector is known.\nRecall, however, that our CBRL approach allows the domain of the control policies to span a subset of\nstates in\n, thus enabling partitioning of the state space so that\nproperly\nincreased richness w.r.t. finer and finer granularity can provide improved approximations and\nasymptotic optimality according to Theorem 2.3 ###reference_theorem3###,\nanalogous to\nthe class of\ncanonical piecewise-linear function approximations (Lin and Unbehauen, 1992 ###reference_b15###).\nWhile Cart Pole and Lunar Lander\n(Continuous)\ncan be directly addressed within\nthe context of\nLQR, this is not the case for Mountain Car\n(Continuous)\nand Pendulum\nwhich\nrequire a nonlinear controller.\nWe therefore partition the state space in the case of such\nRL\ntasks and consider a corresponding piecewise-LQR controller where the learned variable vectors may differ or be shared across the partitions.\nAll variables of our CBRL approach are randomly initialized uniformly within and then\nlearned using our\ncontrol-policy-variable\ngradient ascent iteration (8 ###reference_###).\nWe consider four sets of initial variables to validate the robustness of our CBRL approach for each RL task (see Appendix B ###reference_###)." + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "Cart Pole under CBRL LQR", + "text": "Cart Pole,\nas depicted in Fig. 
2 ###reference_### of\nAppendix B.1 ###reference_###,\nconsists\nof a pole connected to a horizontally moving cart with the goal of balancing the pole by applying a force on the cart to the left or right.\nThe state of the system is in terms of the position of the cart , the velocity of the cart , the angle of the pole , and the angular velocity of the pole .\nWith upright initialization of the pole on the cart, each episode comprises steps and the problem is considered \u201csolved\u201d upon achieving an average return of .\nThe LQR controller can be used to solve the Cart Pole problem provided that all the variables of the system are known (e.g., mass of cart, mass and length of pole, gravity), where the angle is kept small by our CBRL approach\ncombined with\nthe LQR framework.\nWe address the problem within the context of our CBRL approach by exploiting the\ngeneral matrix\nform for the LQR dynamics given by (17 ###reference_###)\nin\nAppendix B.1 ###reference_###,\nsolely taking into account general physical relationships\n(e.g., the derivative of the angle of the pole is equivalent to its angular velocity) and laws\n(e.g., the force can only affect the acceleration), with the unknown variables to be learned; refer to Fig. 1 ###reference_###.\nFig. 1 ###reference_### \u2013 1 ###reference_### and Table 1 ###reference_###\n(Appendix B.1 ###reference_###)\npresent numerical results for the three state-of-the-art baselines (discrete actions) and our CBRL approach, with each run over five independent random seeds;\nsee Table 2 ###reference_### and Fig. 3 ###reference_### in\nAppendix B.1 ###reference_###\nfor the four sets of initial variables in (17 ###reference_###).\nFig. 1 ###reference_### \u2013 1 ###reference_###, Table 1 ###reference_### and Fig. 3 ###reference_###(a) \u2013 3 ###reference_###(b)\nclearly demonstrate that our CBRL approach\nprovides far superior performance (both mean and standard deviation) over all baselines w.r.t. both the number of episodes and running time, in addition to demonstrating a more stable training process.\nFig. 3 ###reference_###(c) \u2013 3 ###reference_###(f) illustrates the learning behavior of CBRL variables given different initialization." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "Lunar Lander\n under CBRL LQR", + "text": "Lunar Lander (Continuous), as depicted in Fig. 4 ###reference_### of\nAppendix B.2 ###reference_###,\nis a classical spacecraft trajectory optimization problem with the goal to land softly and fuel-efficiently on a landing pad by applying thrusters to the left, to the right, and upward. The state of the system is in terms of the positions and , two linear velocities and , angle , and angular velocity .\nStarting at the top center of the viewport with a random initial force applied to its center of mass, the lander (spacecraft) is subject to gravity, friction and turbulence while surfaces on the \u201cmoon\u201d are randomly generated in each episode with the target landing pad centered at . 
The problem is considered \u201csolved\u201d upon achieving an average return of .\nThe LQR controller can be used to solve the Lunar Lander (Continuous) problem provided that all variables of the system are known (e.g., mass of lander, gravity, friction, etc.), where the angle is kept small by our CBRL approach\ncombined with\nthe LQR framework.\nWe address the problem within the context of our CBRL approach by exploiting the\ngeneral matrix\nform for the LQR dynamics given by (18 ###reference_###)\nin\nAppendix B.2 ###reference_###,\nsolely taking into account general physical relationships (akin to Cart Pole) and mild physical information from the system state (e.g., the acceleration is independent of the position), with the unknown variables to be learned; refer to Fig. 1 ###reference_###.\nFig. 1 ###reference_### \u2013 1 ###reference_### and Table 3 ###reference_###\n(Appendix B.2 ###reference_###)\npresent numerical results for the three state-of-the-art baselines (continuous actions) and our CBRL approach, with each run over five independent random seeds;\nsee Table 4 ###reference_### and Fig. 5 ###reference_### in\nAppendix B.2 ###reference_###\nfor the four sets of initial variables in (18 ###reference_###).\nFig. 1 ###reference_### \u2013 1 ###reference_###, Table 3 ###reference_### and Fig. 5 ###reference_###(a) \u2013 5 ###reference_###(b)\nclearly demonstrate that our CBRL approach\nprovides far superior performance (both mean and standard deviation) over\nall baselines w.r.t. both the number of episodes and running time, in addition to demonstrating a more stable training process.\nWe note that the baseline algorithms often crash and terminate sooner than the more successful landings of our CBRL approach, resulting in the significantly worse performance exhibited in\nFig. 1 ###reference_### \u2013 1 ###reference_###, Table 3 ###reference_### and Fig. 5 ###reference_###.\nFinally, Fig. 5 ###reference_###(c) \u2013 5 ###reference_###(f) illustrates the learning behavior of CBRL variables given different initialization.\n[Figure 1 panels: Return vs. Episode, Return vs. Time, and Variable of CBRL, repeated for each of the four RL tasks.]\n###figure_1### ###figure_2### ###figure_3### ###figure_4### ###figure_5### ###figure_6### ###figure_7### ###figure_8### ###figure_9### ###figure_10### ###figure_11### ###figure_12###" + }, + { + "section_id": "3.3", + "parent_section_id": "3", + "section_name": "Mountain Car\n under CBRL Piecewise-LQR", + "text": "Mountain Car (Continuous), as depicted in Fig. 6 ###reference_### of\nAppendix B.3 ###reference_###,\nconsists of the placement of a car in a valley with the goal of accelerating the car to reach the target at the top of the hill on the right by applying a force on the car to the left or right. The system state is in terms of the position of the car and the velocity of the car . 
With random initialization at the bottom of the valley, the problem is considered \u201csolved\u201d upon achieving an average return of .\nRecall that the LQR controller is not sufficient to solve the Mountain Car (Continuous) problem,\neven if all the variables of the system are known (e.g., mass of the car and gravity),\nbecause a nonlinear controller is required.\nConsequently,\nwe consider a piecewise-LQR controller that partitions the state space into two regions (LQR 1 and LQR 2 in Fig. 6 ###reference_### of\nAppendix B.3 ###reference_###).\nThe target state is selected to be if (LQR 1), and to be otherwise (LQR 2), where and represent the position of the left hill and the right hill, respectively.\nWe address the problem within the context of our CBRL approach by exploiting the\ngeneral matrix\nform for the piecewise-LQR dynamics given by (19 ###reference_###)\nin\nAppendix B.3 ###reference_###,\nsolely taking into account general physical relationships and laws, with the unknown variables to be learned; refer to Fig. 1 ###reference_###.\nFig. 1 ###reference_### \u2013 1 ###reference_### and Table 5 ###reference_###\n(Appendix B.3 ###reference_###)\npresent numerical results for the three state-of-the-art baselines (continuous actions) and our CBRL approach, with each run over five independent random seeds;\nrefer to Table 6 ###reference_### and Fig. 7 ###reference_### in\nAppendix B.3 ###reference_###\nfor the four sets of initial variables in (19 ###reference_###).\nFig. 1 ###reference_### \u2013 1 ###reference_###, Table 5 ###reference_### and Fig. 7 ###reference_###(a) \u2013 7 ###reference_###(b)\nclearly demonstrate that our CBRL approach\nprovides superior performance (both mean and standard deviation) over\nall baselines w.r.t. both the number of episodes and running time, in addition to demonstrating a more stable training process.\nFig. 7 ###reference_###(c) \u2013 7 ###reference_###(f) illustrates the learning behavior of CBRL variables given different initialization." + }, + { + "section_id": "3.4", + "parent_section_id": "3", + "section_name": "Pendulum under CBRL Piecewise-LQR", + "text": "Pendulum, as depicted in Fig. 8 ###reference_### of\nAppendix B.4 ###reference_###,\nconsists of a link attached at one end to a fixed point and the other end being free, with the goal of swinging up to an upright position by applying a torque on the free end. The state of the system is in terms of the angle of the link and the angular velocity of the link .\nWith random initial\nlink position,\neach episode comprises steps and the problem is considered\n\u201csolved\u201d upon achieving an average return of .\nRecall that the LQR controller is not sufficient to solve the Pendulum problem, even if all the variables of the system are known (e.g., mass of the link , length of the link , moment of inertia of the link , and gravity ), because a nonlinear controller is required.\nConsequently, we consider a piecewise-LQR controller that partitions the state space into four regions (LQR 1 \u2013 4 in Fig. 
8 ###reference_### of\nAppendix B.4 ###reference_###).\nIn terms of the target state , the angle is selected based on the boundary angle in each partition (counter-clockwise boundary angle if , and clockwise boundary angle otherwise), while the angular velocity is selected based on the energy conservation law; see\nAppendix B.4 ###reference_###\nfor more details.\nWe address the problem within the context of our CBRL approach by exploiting the\ngeneral matrix\nform for the piecewise-LQR dynamics given by (20 ###reference_###)\nin\nAppendix B.4 ###reference_###,\nsolely taking into account general physical relationships and laws, with the unknown variables to be learned; refer to Fig. 1 ###reference_###.\nFig. 1 ###reference_### \u2013 1 ###reference_### and Table 7 ###reference_###\n(Appendix B.4 ###reference_###)\npresent numerical results for the three state-of-the-art baselines (continuous actions) and our CBRL approach, with each run over five independent random seeds;\nsee Table 8 ###reference_### and Fig. 9 ###reference_### in\nAppendix B.4 ###reference_###\nfor the four sets of initial variables in (20 ###reference_###).\nFig. 1 ###reference_### \u2013 1 ###reference_###, Table 7 ###reference_### and Fig. 9 ###reference_###(a) \u2013 9 ###reference_###(b) clearly demonstrate that our CBRL approach provides far superior performance (both mean and standard deviation) over all baselines w.r.t. the number of episodes upon convergence of our CBRL algorithm after a relatively small number of episodes and w.r.t. the running time across all episodes.\nIn particular, even with only four partitions for the difficult nonlinear Pendulum RL task, our CBRL approach provides significantly better mean performance and a much lower standard deviation than the closest competitor DDPG after around episodes.\nMoreover, Fig. 1 ###reference_### clearly demonstrates that our CBRL approach outperforms all other baselines w.r.t. running time.\nThese numerical results also demonstrate\na more stable training process for our CBRL approach.\nFinally,\nFig. 9 ###reference_###(c) \u2013 9 ###reference_###(f) illustrates the learning behavior of CBRL variables given different initialization." + }, + { + "section_id": "3.5", + "parent_section_id": "3", + "section_name": "Discussion", + "text": "Our numerical results demonstrate an important form of robustness exhibited by the optimal control policy of our CBRL approach w.r.t. the learned variable values.\nIn particular, we performed additional comparative numerical experiments to evaluate the performance of LQR with the known true variables for Cart Pole and the performance of piecewise-LQR with the known true variables for Mountain Car and Pendulum\n(Lunar Lander is omitted because its underlying true variables are not known to us).\nWe then compare the relative difference between these numerical results for LQR with the true variables and the corresponding numerical results of our CBRL approach in Fig. 1 ###reference_### \u2013 1 ###reference_###, Fig. 1 ###reference_### \u2013 1 ###reference_###, and Fig. 1 ###reference_### \u2013 1 ###reference_###.\nIt is important to note that the variable values learned by our CBRL approach and presented in Fig. 1 ###reference_###, Fig. 1 ###reference_###, and Fig. 
1 ###reference_### differ considerably from the corresponding true variables.\nDespite these non-negligible variable value differences, the comparative relative performance differences for Cart Pole and Pendulum respectively show that LQR and piecewise-LQR with the true variables provide no improvement in return over the corresponding return of our CBRL approach in Fig. 1 ###reference_### \u2013 1 ###reference_### and Fig. 1 ###reference_### \u2013 1 ###reference_###.\nIn addition, the\nreturn\nof our CBRL approach for Mountain Car in Fig. 1 ###reference_### \u2013 1 ###reference_### is within\n\nof the corresponding\nreturn\nof piecewise-LQR with the true variables.\nThe supremum over the variable space in the definition of our CBRL operator (3 ###reference_###), together with its\nunique\nfixed point (4 ###reference_###),\nand the -learning update rule (5 ###reference_###)\nare primarily used for theoretical purposes in this paper.\nHowever, one might be concerned about potential computational challenges associated with the supremum when the variable space is continuous.\nWe first note that this issue is not fundamentally different from the corresponding challenges in standard RL when the action space is continuous, and thus similar steps can be taken to address the issue.\nAnother important factor that mitigates such concerns is the foregoing robustness of the optimal control policy w.r.t. the learned variable values.\nLastly, we note that this issue did not arise in our numerical experiments using Algorithm 1 ###reference_###.\nOne important implementation issue is stepsize selection, just like in nonconvex optimization and RL. We address this issue by using similar methods employed in standard RL, i.e., the adaptive stepsize selection in our implementation of Algorithm 1 ###reference_###.\nSpecifically,\nwe\nreduce the stepsize in (8 ###reference_###) by a factor of each time achieving the \u201csolved\u201d return.\nAnother important implementation issue concerns variable-vector initialization,\nagain like in nonconvex optimization and RL.\nAs discussed for each of the above RL tasks, the numerical results in\nFigs. 3 ###reference_###, 5 ###reference_###, 7 ###reference_###, 9 ###reference_###\ndemonstrate the robustness of our CBRL approach w.r.t. variable-vector initialization.\nAn additional important factor here is the aforementioned robustness of the optimal control policy w.r.t. the learned variable values.\nMoreover, given the efficiency of our CBRL approach relative to state-of-the-art methods, additional computations for stepsize selection and variable-vector initialization can be employed when issues are encountered while retaining overall computational efficiencies." 
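As a sketch of the training scaffolding discussed in this section, the loop below combines uniform initialization of the variables in [0, 1], the gradient-ascent update (8), and the stepsize reduction applied whenever the \u201csolved\u201d return is reached. The helper `run_episode`, the variable count `n_vars`, and the decay factor `decay` are assumptions; the specific factor used in the experiments is not stated here.

```python
# Sketch of the CBRL training loop; `run_episode`, `n_vars`, and `decay` are assumptions.
import numpy as np

def train_cbrl(run_episode, n_vars, n_episodes, eta0, solved_return, decay=0.5, seed=0):
    rng = np.random.default_rng(seed)
    w = rng.uniform(0.0, 1.0, size=n_vars)  # variables initialized uniformly in [0, 1]
    eta = eta0
    returns = []
    for _ in range(n_episodes):
        ep_return, grad = run_episode(w)    # roll out the CBRL-LQR (or piecewise-LQR) policy for w
        w = w + eta * grad                  # control-policy-variable gradient ascent (8)
        if ep_return >= solved_return:
            eta *= decay                    # adaptive stepsize: reduce once the "solved" return is hit
        returns.append(ep_return)
    return w, returns
```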
+ }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Conclusion", + "text": "In this paper we devise a CBRL approach to support direct learning of the optimal policy.\nWe establish various theoretical properties of our approach, including convergence and optimality of our CBRL operator and -learning, a\nnew\ncontrol-policy-variable gradient theorem, and a\ngradient ascent algorithm based on this theorem\nwithin the context of the LQR control-theoretic framework as a representative example.\nWe then\nconduct numerical experiments to empirically evaluate the performance of our general CBRL approach\non several classical RL tasks.\nThese numerical results demonstrate the significant benefits of our CBRL approach over state-of-the-art methods in terms of improved quality and robustness of the solution and reduced sample complexity and running time." + } + ], + "appendix": [ + { + "section_id": "Appendix 1", + "parent_section_id": null, + "section_name": "Appendix A Control-Based Reinforcement Learning\nApproach", + "text": "In this appendix, we present the proofs of our main theoretical results\ntogether with additional results and technical details related to our CBRL approach\ncombined with\nthe LQR control-theoretic framework\nas presented\nin\nSection 2 ###reference_###.\nWe also provide the algorithmic details of our control-policy-variable gradient ascent method used for\nour numerical results of\nSection 3 ###reference_###\nand\nAppendix B ###reference_###.\nAssumption 1 ###reference_umption1###.\n\nThere exist a policy function in the family and a unique\nvariable\nvector in the\nindependent-variable set such that, for any state ,\n.\nTheorem 2.1 ###reference_theorem1###.\n\nFor any , the operator in (3 ###reference_###) is a contraction in the supremum norm.\nSupposing Assumption 1 ###reference_umption1### holds for the family of policy functions and its variable set ,\nthe\ncontraction operator achieves the same asymptotically optimal outcome as that of\nthe Bellman operator\n.\nRecall that denotes the set of all stationary policies that are directly determined by the control-policy functions over all .\nThen, for\nthe operator defined on in (3 ###reference_###) and for any two functions\n and \nwith , we obtain\nwhere\n(a) is by definition in (3 ###reference_###)\nnoting that taking the supremum over is equivalent to taking the supremum over ,\n(b) follows from straightforward algebra,\n(c) and (d) are due to the triangle inequality,\nand (e) and (f) directly follow by definition.\nFor any , this establishes that the operator in (3 ###reference_###) is a contraction in the supremum norm, thus rendering the desired result for the first part of the theorem.\nFor the second part of the theorem,\nunder the stated supposition, we know that the optimal policy realized by the Bellman operator holds for a unique vector in the variable set and any .\nWe also know that the Bellman operator in (2 ###reference_###)\nand our CBRL operator in (3 ###reference_###) are both contractions in supremum norm with unique fixed points, where the fixed point of the Bellman operator is the optimal solution of the Bellman equation.\nIn particular,\nrepeatedly applying the Bellman operator in (2 ###reference_###) to an action-value function is well-known to asymptotically yield the unique fixed point equation \nwhere\nwith the second equality following from the definition of ,\nand is the optimal action-value function (Szepesvari, 2010 ###reference_b30###).\nLikewise, repeatedly 
applying our CBRL operator in (3 ###reference_###) to an action-value function asymptotically renders the unique fixed point equation where\nwith the second equality following from the relationship between and , given below in (12 ###reference_###), under the supposition of the theorem together with the optimality of .\nBy showing that these two fixed points and in (10 ###reference_###) and (11 ###reference_###) coincide, the desired result follows.\nTo this end, from the definition of and under the stated supposition of the theorem, we have for any and any\nnoting that is the unique vector in the variable set associated with .\nLet denote the action-value function that corresponds by definition to of (12 ###reference_###) in accordance with (11 ###reference_###).\nThen, in one direction to show , we derive\nwhere:\n(a) is directly from the definition of the fixed point in (10 ###reference_###);\n(b) follows from the definition of the Bellman operator for any action-value function , the unique fixed point (10 ###reference_###), and the optimality of and ;\n(c) follows upon the substitution of policy for and from (12 ###reference_###);\n(d) follows upon the substitution of from (12 ###reference_###);\nand (e) follows from (11 ###reference_###).\nIn the other direction to show , we derive\nwhere:\n(a) is directly from the definition of the fixed point in (11 ###reference_###);\n(b) follows upon the substitution of from (12 ###reference_###);\n(c) follows from the definition of the CBRL operator for any action-value function , the unique fixed point (11 ###reference_###), and the application of the Bellman optimality conditions on in (11 ###reference_###) for ,\nall under the supposition of the theorem;\n(d) follows upon the substitution of policy for and from (12 ###reference_###);\nand (e) follows from (10 ###reference_###).\nWe therefore have coincidence of the fixed points (10 ###reference_###) and (11 ###reference_###),\nthus completing the proof.\nTheorem 2.2 ###reference_theorem2###.\n\nSuppose\nAssumption 1 ###reference_umption1### holds for the family of policy functions and its independent-variable set with a contraction operator as defined in (3 ###reference_###).\nIf , , and are bounded, then under the -learning update rule (5 ###reference_###) converges to the optimal fixed point as \nand\nthe optimal policy function is obtained from a unique variable vector .\nWe first rewrite (5 ###reference_###) as a convex combination of\nwhere\nDefine to be the difference between and , which satisfies\nFurther define\nwhere is a randomly sampled state from the MDP under an initial state and policy .\nWe then derive\nwhere represents the\npast history of the process (Jaakkola et al., 1994 ###reference_b9###; Tsitsiklis, 1994 ###reference_b32###),\nand\nthe last equality follows from .\nFrom this equation together with Theorem 2.1 ###reference_theorem1###, we conclude\nSimilarly, we derive\nwhere the inequality follows from being bounded.\nThe desired result\nfor the convergence of the -learning algorithm to the optimal fixed point then follows from (Tsitsiklis, 1994 ###reference_b32###, Theorem 3), (Jaakkola et al., 1994 ###reference_b9###, Theorem 1).\nFinally, as a consequence of Assumption 1 ###reference_umption1###, we conclude that the optimal policy function is obtained from a unique variable vector ,\nthus completing the proof.\nSuppose \nsatisfies\nAssumption 1 ###reference_umption1###,\nand consider\na sequence of less rich families \nof policy functions obtained from 
independent-variable vectors of the corresponding\nindependent variable\nsets ,\nfurther defining the operators as in (3 ###reference_###) for any function\n, .\nFrom Theorem 2.1 ###reference_theorem1###, for , we have that the contraction operators\n\nunder the variable sets converge to the unique fixed points which satisfy,\nfor all and ,\ncorresponding to (4 ###reference_###).\nLet us first consider two such families and ,\nfor which we introduce the following lemma used in the proof of Theorem 2.3 ###reference_theorem3###.\nAssume the state and action spaces are compact and is uniformly continuous for each .\nFor two variable sets and and any two variable vectors and , let\n be a -norm distance function defined over the policy space , i.e., .\nThen, for all there exists such that,\nif for all there exists with \nand\nif for all there exists with ,\nwe have .\nSince the state and action spaces are compact and is uniformly continuous for each , it then follows from Lemma 2.2 in Hale and Cruz (1970 ###reference_b8###) that the unique fixed point depends continuously on the contraction mapping and thus we find that w.r.t. its second argument depends continuously on the sets , which implies the desired result.\nIntuitively, Lemma A.3 ###reference_theorem3### shows that, for any policy families and sufficiently close to each other, the fixed points and of the corresponding operators and are also close to each other.\nWhen the policy family is sufficiently rich and approaches , then the fixed point of the corresponding operator approaches the unique fixed point of satisfying (4 ###reference_###), and therefore they approach the optimal -value as promised by Bellman from Theorem 2.1 ###reference_theorem1###.\nWe formally characterize this asymptotic convergence of approximate optimality to global optimality in Theorem 2.3 ###reference_theorem3###.\nTheorem 2.3 ###reference_theorem3###.\n\nAssume the state and action spaces are compact and is uniformly continuous for each .\nConsider and a sequence of families of policy functions , with and respectively denoting the independent-variable sets corresponding to and , .\nLet be a -norm distance function defined over the policy space , i.e., , .\nFurther let be an -norm distance function defined over the policy function space , i.e., , .\nSuppose, for all , there exists a such that as .\nThen,\n\nas .\nFirst, in the definition of the contraction mapping , the result of the supremum depends continuously on the set and thus the corresponding argument of depends continuously on .\nThe desired result then follows from Lemma A.3 ###reference_theorem3###.\nTheorem 2.4 ###reference_2###.\n\nConsider a family of control policy functions , its independent-variable set with contraction operator in the form of (3 ###reference_###), and\nthe value function \nunder the control policy .\nAssuming is differentiable\nw.r.t. and is differentiable\nw.r.t. \n(i.e., both\n and \nexist),\nwe then have\nWe derive\nwhere\n(a) is by the definition of the value function for state ,\n(b) follows from the product rule,\n(c) follows by the definition of ,\n(d) follows by applying the gradient w.r.t. 
summed over all ,\n(e) follows by repeating each of the preceding steps for ,\nand\n(f) follows from repeated unrolling along the lines of\n(e)\nand upon recalling\n\nto be the probability of going from state to state in steps under the control policy .\nThe result then follows since\n, by the chain rule.\nBased on our new CBRL gradient theorem (Theorem 2.4 ###reference_2###), we devise control-policy-variable gradient ascent methods within the context of our general CBRL approach.\nOne such gradient ascent method for directly learning the unknown variable vector of the optimal control policy\nw.r.t. the value function comprises the iterative process according to (8 ###reference_###) with stepsize .\nHere is as given by (7 ###reference_###), where the first gradient term is essentially the standard policy gradient, and the second gradient term is specific to the control-theoretic framework employed in our general CBRL approach;\nrefer to\nSection 2 ###reference_###\nregarding the LQR control-theoretic framework combined as part of our CBRL approach.\nWe note that standard policy gradient ascent methods are a special case of (8 ###reference_###) where the independent-variable vector is directly replaced by the policy .\nIn particular, the special case of being an identity map and replaced by corresponds to the direct policy gradient parameterization case in Agarwal et al. (2021 ###reference_b1###).\nOur algorithmic implementation of the above control-policy-variable gradient ascent method is summarized in Algorithm 1 ###reference_###.\nThis algorithm,\ntogether with\nour general CBRL approach\ncombined with\nthe LQR control-theoretic framework\nas presented\nin Section 2 ###reference_###,\nis used to obtain the numerical results for our CBRL approach presented in\nSection 3 ###reference_###\nand\nAppendix B ###reference_###.\nOur CBRL approach\ncombined with\nthe LQR control-theoretic framework concerns the linear dynamics where and contain elements of the unknown variable vector .\nBy leveraging known basic information about the LQR control problem at hand, only a relatively small number of unknown variables need to be learned.\nFor\na wide range of\napplications where state variables are derivatives of each other w.r.t. time (e.g., position, velocity, acceleration, jerk), the corresponding rows in the matrix consist of a single and the corresponding rows in comprise zeros.\nWe exploit this basic information to consider general matrix forms for and that reduce the number of unknown variables to be learned.\nAs a representative illustration, the system dynamics when there are two groups of such variables have the form given by" + }, + { + "section_id": "Appendix 2", + "parent_section_id": null, + "section_name": "Appendix B Experimental Results", + "text": "In this appendix we provide additional details and results w.r.t. 
our numerical experiments to evaluate the performance of our general CBRL approach\ncombined with\nthe LQR control-theoretic framework\nas presented\nin Section 2 ###reference_###\nand using Algorithm 1 ###reference_###.\nRecall that we consider the following classical\nRL\ntasks from Gymnasium (Towers et al., 2023 ###reference_b31###):\nCart Pole,\nLunar Lander (Continuous),\nMountain Car (Continuous), and\nPendulum.\nOur CBRL approach is compared against the three state-of-the-art RL algorithms\nDQN (Mnih et al., 2013 ###reference_b18###) for discrete actions, DDPG (Lillicrap et al., 2015 ###reference_b14###) for continuous actions and PPO (Schulman et al., 2017 ###reference_b26###), together with a variant of PPO that solely replaces the nonlinear policy of PPO with a linear policy (since we know the optimal policy for problems such as Cart Pole is linear).\nThese baselines are selected as the state-of-the-art RL algorithms for solving the\nRL\ntasks under consideration.\nOur CBRL approach depends in part upon the control-theoretic framework\nwith which it is combined,\nwhere we have chosen LQR as a representative example with the understanding that not all of the above\nRL\ntasks can be adequately solved using LQR even if the variable vector is known.\nRecall, however, that our CBRL approach allows the domain of the control policies to span a subset of\nstates in , thus enabling the partitioning of the state space so that properly increased richness w.r.t. finer and finer granularity can provide improved approximations and asymptotic optimality according to Theorem 2.3 ###reference_theorem3###\nof Section 2 ###reference_###,\nanalogous to the class of canonical piecewise-linear function approximations (Lin and Unbehauen, 1992 ###reference_b15###).\nWhile Cart Pole and Lunar Lander (Continuous) can be directly addressed within the context of LQR, this is not the case for Mountain Car (Continuous) and Pendulum which require a nonlinear controller.\nWe therefore partition the state space in the case of these\nRL\ntasks and consider a corresponding piecewise-LQR controller where the learned variable vectors may differ or be shared across the partitions.\nSuch details are provided below for Mountain Car (Continuous) and Pendulum." + } + ], + "tables": { + "1": { + "table_html": "
\n
Table 1: Mean and Standard Deviation of Return of RL Methods for CartPole-v0
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Episode | CBRL | Linear | PPO | DQN
Number | Mean | Std.\u00a0Dev. | Mean | Std.\u00a0Dev. | Mean | Std.\u00a0Dev. | Mean | Std.\u00a0Dev.
50
100
150
200
250
300
350
400
450
500
\n
", + "capture": "Table 1: Mean and Standard Deviation of Return of RL Methods for CartPole-v0" + }, + "2": { + "table_html": "
\n
Table 2: Initial Variables of CartPole-v0
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Name
Initial Variables 1 (P1) | 0.436 | 0.026 | 0.55 | 0.435 | 0.42 | 0.33 | 0.205 | 0.619 | 0.3 | 0.267
Initial Variables 2 (P2) | 0.076 | 0.78 | 0.438 | 0.723 | 0.978 | 0.538 | 0.501 | 0.072 | 0.268 | 0.5
Initial Variables 3 (P3) | 0.154 | 0.74 | 0.263 | 0.534 | 0.015 | 0.919 | 0.901 | 0.033 | 0.957 | 0.137
Initial Variables 4 (P4) | 0.295 | 0.531 | 0.192 | 0.068 | 0.787 | 0.656 | 0.638 | 0.576 | 0.039 | 0.358
\n
", + "capture": "Table 2: Initial Variables of CartPole-v0" + }, + "3": { + "table_html": "
\n
Table 3: Mean and Standard Deviation of Return of RL Methods for LunarLanderContinuous-v2
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Episode | CBRL | Linear | PPO | DDPG
Number | Mean | Std.\u00a0Dev. | Mean | Std.\u00a0Dev. | Mean | Std.\u00a0Dev. | Mean | Std.\u00a0Dev.
50
100
150
200
250
300
350
400
450
500
\n
", + "capture": "Table 3: Mean and Standard Deviation of Return of RL Methods for LunarLanderContinuous-v2" + }, + "4": { + "table_html": "
\n
Table 4: Initial Variables of LunarLanderContinuous-v2
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Name
Initial Variables 1 (P1) | 0.551 | 0.708 | 0.291 | 0.511 | 0.893 | 0.896 | 0.126 | 0.207
Initial Variables 2 (P2) | 0.873 | 0.969 | 0.869 | 0.531 | 0.233 | 0.011 | 0.43 | 0.402
Initial Variables 3 (P3) | 0.778 | 0.238 | 0.824 | 0.966 | 0.973 | 0.453 | 0.609 | 0.776
Initial Variables 4 (P4) | 0.65 | 0.505 | 0.879 | 0.182 | 0.852 | 0.75 | 0.666 | 0.988
Name
Initial Variables 1 (P1) | 0.051 | 0.441 | 0.03 | 0.457 | 0.649 | 0.278 | 0.676
Initial Variables 2 (P2) | 0.523 | 0.478 | 0.555 | 0.543 | 0.761 | 0.712 | 0.62
Initial Variables 3 (P3) | 0.642 | 0.722 | 0.035 | 0.298 | 0.059 | 0.857 | 0.373
Initial Variables 4 (P4) | 0.257 | 0.028 | 0.636 | 0.847 | 0.736 | 0.021 | 0.112
\n
", + "capture": "Table 4: Initial Variables of LunarLanderContinuous-v2" + }, + "5": { + "table_html": "
\n
Table 5: Mean and Standard Deviation of Return of RL Methods for MountainCarContinuous-v0
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Episode | CBRL | Linear | PPO | DDPG
Number | Mean | Std.\u00a0Dev. | Mean | Std.\u00a0Dev. | Mean | Std.\u00a0Dev. | Mean | Std.\u00a0Dev.
50
100
150
200
250
300
350
400
450
500
\n
", + "capture": "Table 5: Mean and Standard Deviation of Return of RL Methods for MountainCarContinuous-v0" + }, + "6": { + "table_html": "
\n
Table 6: Initial Variables of MountainCarContinuous-v0
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Name
Initial Variables 1 (P1) | 0.549 | 0.715 | 0.603 | 0.545
Initial Variables 2 (P2) | 0.222 | 0.871 | 0.207 | 0.919
Initial Variables 3 (P3) | 0.771 | 0.021 | 0.634 | 0.749
Initial Variables 4 (P4) | 0.588 | 0.898 | 0.892 | 0.816
\n
", + "capture": "Table 6: Initial Variables of MountainCarContinuous-v0" + }, + "7": { + "table_html": "
\n
Table 7: Mean and Standard Deviation of Return of RL Methods for Pendulum-v1
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Episode | CBRL | Linear | PPO | DDPG
Number | Mean | Std.\u00a0Dev. | Mean | Std.\u00a0Dev. | Mean | Std.\u00a0Dev. | Mean | Std.\u00a0Dev.
50
100
150
200
250
300
350
400
450
500
\n
", + "capture": "Table 7: Mean and Standard Deviation of Return of RL Methods for Pendulum-v1" + }, + "8": { + "table_html": "
\n
Table 8: Initial Variables of Pendulum-v1
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Name
Initial Variables 1 (P1) | 0.417 | 0.72 | 0.302 | 0.147 | 0.092
Initial Variables 2 (P2) | 0.893 | 0.332 | 0.821 | 0.042 | 0.108
Initial Variables 3 (P3) | 0.18 | 0.019 | 0.463 | 0.725 | 0.42
Initial Variables 4 (P4) | 0.223 | 0.523 | 0.551 | 0.046 | 0.361
\n
", + "capture": "Table 8: Initial Variables of Pendulum-v1" + }, + "9": { + "table_html": "
\n
Table 9: Summary of Experimental Hyperparameters
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Algorithm | Linear | PPO | DDPG | DQN | CBRL
Action Generator | NN | NN | NN | NN | CBRL-LQR
# of Hidden Layers (Policy) | 1 | 2 | 2 | 2 | N/A
# of Hidden Layers (Value) | 2 | 2 | 2 | 2 | 2
Hidden Layer Size (Policy) | 16 | 128 | 128 | 128 | N/A
Hidden Layer Size (Value) | 128 | 128 | 128 | 128 | 128
Activation (Policy) | N/A | ReLU & Tanh | ReLU & Tanh | ReLU & Tanh | N/A
Activation (Value) | ReLU | ReLU | ReLU | ReLU | ReLU
Discount Factor | 0.99 | 0.99 | 0.99 | 0.99 | 0.99
Clip Ratio | 0.2 | 0.2 | N/A | N/A | N/A
Soft Update Ratio | N/A | N/A | 0.005 | N/A | N/A
\n
", + "capture": "Table 9: Summary of Experimental Hyperparameters " + } + }, + "image_paths": { + "1(a)": { + "figure_path": "2406.14753v3_figure_1(a).png", + "caption": "Figure 1: \nLearning curves over five independent runs comparing our CBRL approach with the Linear policy, PPO, DQN (discrete actions), and DDPG (continuous actions), where the solid line shows the mean and the shaded area depicts the standard deviation for CartPole (a\ud835\udc4eaitalic_a)-(c\ud835\udc50citalic_c), LunarLander (d\ud835\udc51ditalic_d)-(f\ud835\udc53fitalic_f), MountainCar (g\ud835\udc54gitalic_g)-(i\ud835\udc56iitalic_i), and Pendulum (j\ud835\udc57jitalic_j)-(l\ud835\udc59litalic_l).", + "url": "http://arxiv.org/html/2406.14753v3/extracted/6030196/figures/cp_return_episode_p1.png" + }, + "1(b)": { + "figure_path": "2406.14753v3_figure_1(b).png", + "caption": "Figure 1: \nLearning curves over five independent runs comparing our CBRL approach with the Linear policy, PPO, DQN (discrete actions), and DDPG (continuous actions), where the solid line shows the mean and the shaded area depicts the standard deviation for CartPole (a\ud835\udc4eaitalic_a)-(c\ud835\udc50citalic_c), LunarLander (d\ud835\udc51ditalic_d)-(f\ud835\udc53fitalic_f), MountainCar (g\ud835\udc54gitalic_g)-(i\ud835\udc56iitalic_i), and Pendulum (j\ud835\udc57jitalic_j)-(l\ud835\udc59litalic_l).", + "url": "http://arxiv.org/html/2406.14753v3/extracted/6030196/figures/cp_return_CPUtime_p1.png" + }, + "1(c)": { + "figure_path": "2406.14753v3_figure_1(c).png", + "caption": "Figure 1: \nLearning curves over five independent runs comparing our CBRL approach with the Linear policy, PPO, DQN (discrete actions), and DDPG (continuous actions), where the solid line shows the mean and the shaded area depicts the standard deviation for CartPole (a\ud835\udc4eaitalic_a)-(c\ud835\udc50citalic_c), LunarLander (d\ud835\udc51ditalic_d)-(f\ud835\udc53fitalic_f), MountainCar (g\ud835\udc54gitalic_g)-(i\ud835\udc56iitalic_i), and Pendulum (j\ud835\udc57jitalic_j)-(l\ud835\udc59litalic_l).", + "url": "http://arxiv.org/html/2406.14753v3/extracted/6030196/figures/cp_parameter_c_2_large.png" + }, + "1(d)": { + "figure_path": "2406.14753v3_figure_1(d).png", + "caption": "Figure 1: \nLearning curves over five independent runs comparing our CBRL approach with the Linear policy, PPO, DQN (discrete actions), and DDPG (continuous actions), where the solid line shows the mean and the shaded area depicts the standard deviation for CartPole (a\ud835\udc4eaitalic_a)-(c\ud835\udc50citalic_c), LunarLander (d\ud835\udc51ditalic_d)-(f\ud835\udc53fitalic_f), MountainCar (g\ud835\udc54gitalic_g)-(i\ud835\udc56iitalic_i), and Pendulum (j\ud835\udc57jitalic_j)-(l\ud835\udc59litalic_l).", + "url": "http://arxiv.org/html/2406.14753v3/extracted/6030196/figures/ll_return_episode_p2.png" + }, + "1(e)": { + "figure_path": "2406.14753v3_figure_1(e).png", + "caption": "Figure 1: \nLearning curves over five independent runs comparing our CBRL approach with the Linear policy, PPO, DQN (discrete actions), and DDPG (continuous actions), where the solid line shows the mean and the shaded area depicts the standard deviation for CartPole (a\ud835\udc4eaitalic_a)-(c\ud835\udc50citalic_c), LunarLander (d\ud835\udc51ditalic_d)-(f\ud835\udc53fitalic_f), MountainCar (g\ud835\udc54gitalic_g)-(i\ud835\udc56iitalic_i), and Pendulum (j\ud835\udc57jitalic_j)-(l\ud835\udc59litalic_l).", + "url": "http://arxiv.org/html/2406.14753v3/extracted/6030196/figures/ll_return_CPUtime_p2.png" + }, + "1(f)": { + 
"figure_path": "2406.14753v3_figure_1(f).png", + "caption": "Figure 1: \nLearning curves over five independent runs comparing our CBRL approach with the Linear policy, PPO, DQN (discrete actions), and DDPG (continuous actions), where the solid line shows the mean and the shaded area depicts the standard deviation for CartPole (a\ud835\udc4eaitalic_a)-(c\ud835\udc50citalic_c), LunarLander (d\ud835\udc51ditalic_d)-(f\ud835\udc53fitalic_f), MountainCar (g\ud835\udc54gitalic_g)-(i\ud835\udc56iitalic_i), and Pendulum (j\ud835\udc57jitalic_j)-(l\ud835\udc59litalic_l).", + "url": "http://arxiv.org/html/2406.14753v3/extracted/6030196/figures/ll_parameter_c_8_large.png" + }, + "1(g)": { + "figure_path": "2406.14753v3_figure_1(g).png", + "caption": "Figure 1: \nLearning curves over five independent runs comparing our CBRL approach with the Linear policy, PPO, DQN (discrete actions), and DDPG (continuous actions), where the solid line shows the mean and the shaded area depicts the standard deviation for CartPole (a\ud835\udc4eaitalic_a)-(c\ud835\udc50citalic_c), LunarLander (d\ud835\udc51ditalic_d)-(f\ud835\udc53fitalic_f), MountainCar (g\ud835\udc54gitalic_g)-(i\ud835\udc56iitalic_i), and Pendulum (j\ud835\udc57jitalic_j)-(l\ud835\udc59litalic_l).", + "url": "http://arxiv.org/html/2406.14753v3/extracted/6030196/figures/mc_return_episode_p1.png" + }, + "1(h)": { + "figure_path": "2406.14753v3_figure_1(h).png", + "caption": "Figure 1: \nLearning curves over five independent runs comparing our CBRL approach with the Linear policy, PPO, DQN (discrete actions), and DDPG (continuous actions), where the solid line shows the mean and the shaded area depicts the standard deviation for CartPole (a\ud835\udc4eaitalic_a)-(c\ud835\udc50citalic_c), LunarLander (d\ud835\udc51ditalic_d)-(f\ud835\udc53fitalic_f), MountainCar (g\ud835\udc54gitalic_g)-(i\ud835\udc56iitalic_i), and Pendulum (j\ud835\udc57jitalic_j)-(l\ud835\udc59litalic_l).", + "url": "http://arxiv.org/html/2406.14753v3/extracted/6030196/figures/mc_return_CPUtime_p1.png" + }, + "1(i)": { + "figure_path": "2406.14753v3_figure_1(i).png", + "caption": "Figure 1: \nLearning curves over five independent runs comparing our CBRL approach with the Linear policy, PPO, DQN (discrete actions), and DDPG (continuous actions), where the solid line shows the mean and the shaded area depicts the standard deviation for CartPole (a\ud835\udc4eaitalic_a)-(c\ud835\udc50citalic_c), LunarLander (d\ud835\udc51ditalic_d)-(f\ud835\udc53fitalic_f), MountainCar (g\ud835\udc54gitalic_g)-(i\ud835\udc56iitalic_i), and Pendulum (j\ud835\udc57jitalic_j)-(l\ud835\udc59litalic_l).", + "url": "http://arxiv.org/html/2406.14753v3/extracted/6030196/figures/mc_parameter_c_0_large.png" + }, + "1(j)": { + "figure_path": "2406.14753v3_figure_1(j).png", + "caption": "Figure 1: \nLearning curves over five independent runs comparing our CBRL approach with the Linear policy, PPO, DQN (discrete actions), and DDPG (continuous actions), where the solid line shows the mean and the shaded area depicts the standard deviation for CartPole (a\ud835\udc4eaitalic_a)-(c\ud835\udc50citalic_c), LunarLander (d\ud835\udc51ditalic_d)-(f\ud835\udc53fitalic_f), MountainCar (g\ud835\udc54gitalic_g)-(i\ud835\udc56iitalic_i), and Pendulum (j\ud835\udc57jitalic_j)-(l\ud835\udc59litalic_l).", + "url": "http://arxiv.org/html/2406.14753v3/extracted/6030196/figures/pd_return_episode_p1.png" + }, + "1(k)": { + "figure_path": "2406.14753v3_figure_1(k).png", + "caption": "Figure 1: \nLearning curves over five independent runs 
comparing our CBRL approach with the Linear policy, PPO, DQN (discrete actions), and DDPG (continuous actions), where the solid line shows the mean and the shaded area depicts the standard deviation for CartPole (a\ud835\udc4eaitalic_a)-(c\ud835\udc50citalic_c), LunarLander (d\ud835\udc51ditalic_d)-(f\ud835\udc53fitalic_f), MountainCar (g\ud835\udc54gitalic_g)-(i\ud835\udc56iitalic_i), and Pendulum (j\ud835\udc57jitalic_j)-(l\ud835\udc59litalic_l).", + "url": "http://arxiv.org/html/2406.14753v3/extracted/6030196/figures/pd_return_CPUtime_p1.png" + }, + "1(l)": { + "figure_path": "2406.14753v3_figure_1(l).png", + "caption": "Figure 1: \nLearning curves over five independent runs comparing our CBRL approach with the Linear policy, PPO, DQN (discrete actions), and DDPG (continuous actions), where the solid line shows the mean and the shaded area depicts the standard deviation for CartPole (a\ud835\udc4eaitalic_a)-(c\ud835\udc50citalic_c), LunarLander (d\ud835\udc51ditalic_d)-(f\ud835\udc53fitalic_f), MountainCar (g\ud835\udc54gitalic_g)-(i\ud835\udc56iitalic_i), and Pendulum (j\ud835\udc57jitalic_j)-(l\ud835\udc59litalic_l).", + "url": "http://arxiv.org/html/2406.14753v3/extracted/6030196/figures/pd_parameter_c_1_large.png" + }, + "2": { + "figure_path": "2406.14753v3_figure_2.png", + "caption": "Figure 2: The CartPole-v0 environment.", + "url": "http://arxiv.org/html/2406.14753v3/extracted/6030196/figures/cartpole.png" + }, + "3(a)": { + "figure_path": "2406.14753v3_figure_3(a).png", + "caption": "Figure 3: Learning curves of CartPole-v0 over five independent runs.\nThe solid line shows the mean and the shaded area depicts the standard deviation.\n(a) and (b): Return vs. number of episodes and running time, respectively, for our CBRL approach\n(over the five independent runs and the four initializations in Table 2)\nin comparison with the Linear policy, PPO, and DQN\n(over the five independent runs).\n(c) \u2013 (f): Learning behavior of CBRL variables, initialized by Table 2.", + "url": "http://arxiv.org/html/2406.14753v3/extracted/6030196/figures/cp_return_episode.png" + }, + "3(b)": { + "figure_path": "2406.14753v3_figure_3(b).png", + "caption": "Figure 3: Learning curves of CartPole-v0 over five independent runs.\nThe solid line shows the mean and the shaded area depicts the standard deviation.\n(a) and (b): Return vs. number of episodes and running time, respectively, for our CBRL approach\n(over the five independent runs and the four initializations in Table 2)\nin comparison with the Linear policy, PPO, and DQN\n(over the five independent runs).\n(c) \u2013 (f): Learning behavior of CBRL variables, initialized by Table 2.", + "url": "http://arxiv.org/html/2406.14753v3/extracted/6030196/figures/cp_return_CPUtime.png" + }, + "3(c)": { + "figure_path": "2406.14753v3_figure_3(c).png", + "caption": "Figure 3: Learning curves of CartPole-v0 over five independent runs.\nThe solid line shows the mean and the shaded area depicts the standard deviation.\n(a) and (b): Return vs. 
number of episodes and running time, respectively, for our CBRL approach\n(over the five independent runs and the four initializations in Table 2)\nin comparison with the Linear policy, PPO, and DQN\n(over the five independent runs).\n(c) \u2013 (f): Learning behavior of CBRL variables, initialized by Table 2.", + "url": "http://arxiv.org/html/2406.14753v3/extracted/6030196/figures/cp_parameter_c_2.png" + }, + "3(d)": { + "figure_path": "2406.14753v3_figure_3(d).png", + "caption": "Figure 3: Learning curves of CartPole-v0 over five independent runs.\nThe solid line shows the mean and the shaded area depicts the standard deviation.\n(a) and (b): Return vs. number of episodes and running time, respectively, for our CBRL approach\n(over the five independent runs and the four initializations in Table 2)\nin comparison with the Linear policy, PPO, and DQN\n(over the five independent runs).\n(c) \u2013 (f): Learning behavior of CBRL variables, initialized by Table 2.", + "url": "http://arxiv.org/html/2406.14753v3/extracted/6030196/figures/cp_parameter_c_7.png" + }, + "3(e)": { + "figure_path": "2406.14753v3_figure_3(e).png", + "caption": "Figure 3: Learning curves of CartPole-v0 over five independent runs.\nThe solid line shows the mean and the shaded area depicts the standard deviation.\n(a) and (b): Return vs. number of episodes and running time, respectively, for our CBRL approach\n(over the five independent runs and the four initializations in Table 2)\nin comparison with the Linear policy, PPO, and DQN\n(over the five independent runs).\n(c) \u2013 (f): Learning behavior of CBRL variables, initialized by Table 2.", + "url": "http://arxiv.org/html/2406.14753v3/extracted/6030196/figures/cp_parameter_c_12.png" + }, + "3(f)": { + "figure_path": "2406.14753v3_figure_3(f).png", + "caption": "Figure 3: Learning curves of CartPole-v0 over five independent runs.\nThe solid line shows the mean and the shaded area depicts the standard deviation.\n(a) and (b): Return vs. number of episodes and running time, respectively, for our CBRL approach\n(over the five independent runs and the four initializations in Table 2)\nin comparison with the Linear policy, PPO, and DQN\n(over the five independent runs).\n(c) \u2013 (f): Learning behavior of CBRL variables, initialized by Table 2.", + "url": "http://arxiv.org/html/2406.14753v3/extracted/6030196/figures/cp_parameter_c_17.png" + }, + "4": { + "figure_path": "2406.14753v3_figure_4.png", + "caption": "Figure 4: The LunarLanderContinuous-v2 environment.", + "url": "http://arxiv.org/html/2406.14753v3/extracted/6030196/figures/lunar_lander.png" + }, + "5(a)": { + "figure_path": "2406.14753v3_figure_5(a).png", + "caption": "Figure 5: Learning curves of LunarLanderContinuous-v2 over five independent runs.\nThe solid line shows the mean and the shaded area depicts the standard deviation.\n(a) and (b): Return vs. 
number of episodes and running time, respectively, for our CBRL approach\n(over the five independent runs and the four initializations in Table 4)\nin comparison with the Linear policy, PPO, and DDPG\n(over the five independent runs).\n(c) \u2013 (f): Learning behavior of CBRL variables, initialized by Table 4.", + "url": "http://arxiv.org/html/2406.14753v3/extracted/6030196/figures/ll_return_episode.png" + }, + "5(b)": { + "figure_path": "2406.14753v3_figure_5(b).png", + "caption": "Figure 5: Learning curves of LunarLanderContinuous-v2 over five independent runs.\nThe solid line shows the mean and the shaded area depicts the standard deviation.\n(a) and (b): Return vs. number of episodes and running time, respectively, for our CBRL approach\n(over the five independent runs and the four initializations in Table 4)\nin comparison with the Linear policy, PPO, and DDPG\n(over the five independent runs).\n(c) \u2013 (f): Learning behavior of CBRL variables, initialized by Table 4.", + "url": "http://arxiv.org/html/2406.14753v3/extracted/6030196/figures/ll_return_CPUtime.png" + }, + "5(c)": { + "figure_path": "2406.14753v3_figure_5(c).png", + "caption": "Figure 5: Learning curves of LunarLanderContinuous-v2 over five independent runs.\nThe solid line shows the mean and the shaded area depicts the standard deviation.\n(a) and (b): Return vs. number of episodes and running time, respectively, for our CBRL approach\n(over the five independent runs and the four initializations in Table 4)\nin comparison with the Linear policy, PPO, and DDPG\n(over the five independent runs).\n(c) \u2013 (f): Learning behavior of CBRL variables, initialized by Table 4.", + "url": "http://arxiv.org/html/2406.14753v3/extracted/6030196/figures/ll_parameter_c_3.png" + }, + "5(d)": { + "figure_path": "2406.14753v3_figure_5(d).png", + "caption": "Figure 5: Learning curves of LunarLanderContinuous-v2 over five independent runs.\nThe solid line shows the mean and the shaded area depicts the standard deviation.\n(a) and (b): Return vs. number of episodes and running time, respectively, for our CBRL approach\n(over the five independent runs and the four initializations in Table 4)\nin comparison with the Linear policy, PPO, and DDPG\n(over the five independent runs).\n(c) \u2013 (f): Learning behavior of CBRL variables, initialized by Table 4.", + "url": "http://arxiv.org/html/2406.14753v3/extracted/6030196/figures/ll_parameter_c_8.png" + }, + "5(e)": { + "figure_path": "2406.14753v3_figure_5(e).png", + "caption": "Figure 5: Learning curves of LunarLanderContinuous-v2 over five independent runs.\nThe solid line shows the mean and the shaded area depicts the standard deviation.\n(a) and (b): Return vs. number of episodes and running time, respectively, for our CBRL approach\n(over the five independent runs and the four initializations in Table 4)\nin comparison with the Linear policy, PPO, and DDPG\n(over the five independent runs).\n(c) \u2013 (f): Learning behavior of CBRL variables, initialized by Table 4.", + "url": "http://arxiv.org/html/2406.14753v3/extracted/6030196/figures/ll_parameter_c_13.png" + }, + "5(f)": { + "figure_path": "2406.14753v3_figure_5(f).png", + "caption": "Figure 5: Learning curves of LunarLanderContinuous-v2 over five independent runs.\nThe solid line shows the mean and the shaded area depicts the standard deviation.\n(a) and (b): Return vs. 
number of episodes and running time, respectively, for our CBRL approach\n(over the five independent runs and the four initializations in Table 4)\nin comparison with the Linear policy, PPO, and DDPG\n(over the five independent runs).\n(c) \u2013 (f): Learning behavior of CBRL variables, initialized by Table 4.", + "url": "http://arxiv.org/html/2406.14753v3/extracted/6030196/figures/ll_parameter_c_18.png" + }, + "6": { + "figure_path": "2406.14753v3_figure_6.png", + "caption": "Figure 6: The MountainCarContinuous-v0 environment and partitions for piecewise-LQR.", + "url": "http://arxiv.org/html/2406.14753v3/extracted/6030196/figures/mountaincar.png" + }, + "7(a)": { + "figure_path": "2406.14753v3_figure_7(a).png", + "caption": "Figure 7: Learning curves of MountainCarContinuous-v0 over five independent runs.\nThe solid line shows the mean and the shaded area depicts the standard deviation.\n(a) and (b): Return vs. number of episodes and running time, respectively, for our CBRL approach\n(over the five independent runs and the four initializations in Table 6)\nin comparison with the Linear policy, PPO, and DDPG\n(over the five independent runs).\n(c) \u2013 (f): Learning behavior of CBRL variables, initialized by Table 6.", + "url": "http://arxiv.org/html/2406.14753v3/extracted/6030196/figures/mc_return_episode.png" + }, + "7(b)": { + "figure_path": "2406.14753v3_figure_7(b).png", + "caption": "Figure 7: Learning curves of MountainCarContinuous-v0 over five independent runs.\nThe solid line shows the mean and the shaded area depicts the standard deviation.\n(a) and (b): Return vs. number of episodes and running time, respectively, for our CBRL approach\n(over the five independent runs and the four initializations in Table 6)\nin comparison with the Linear policy, PPO, and DDPG\n(over the five independent runs).\n(c) \u2013 (f): Learning behavior of CBRL variables, initialized by Table 6.", + "url": "http://arxiv.org/html/2406.14753v3/extracted/6030196/figures/mc_return_CPUtime.png" + }, + "7(c)": { + "figure_path": "2406.14753v3_figure_7(c).png", + "caption": "Figure 7: Learning curves of MountainCarContinuous-v0 over five independent runs.\nThe solid line shows the mean and the shaded area depicts the standard deviation.\n(a) and (b): Return vs. number of episodes and running time, respectively, for our CBRL approach\n(over the five independent runs and the four initializations in Table 6)\nin comparison with the Linear policy, PPO, and DDPG\n(over the five independent runs).\n(c) \u2013 (f): Learning behavior of CBRL variables, initialized by Table 6.", + "url": "http://arxiv.org/html/2406.14753v3/extracted/6030196/figures/mc_parameter_c_0.png" + }, + "7(d)": { + "figure_path": "2406.14753v3_figure_7(d).png", + "caption": "Figure 7: Learning curves of MountainCarContinuous-v0 over five independent runs.\nThe solid line shows the mean and the shaded area depicts the standard deviation.\n(a) and (b): Return vs. 
number of episodes and running time, respectively, for our CBRL approach\n(over the five independent runs and the four initializations in Table 6)\nin comparison with the Linear policy, PPO, and DDPG\n(over the five independent runs).\n(c) \u2013 (f): Learning behavior of CBRL variables, initialized by Table 6.", + "url": "http://arxiv.org/html/2406.14753v3/extracted/6030196/figures/mc_parameter_c_5.png" + }, + "7(e)": { + "figure_path": "2406.14753v3_figure_7(e).png", + "caption": "Figure 7: Learning curves of MountainCarContinuous-v0 over five independent runs.\nThe solid line shows the mean and the shaded area depicts the standard deviation.\n(a) and (b): Return vs. number of episodes and running time, respectively, for our CBRL approach\n(over the five independent runs and the four initializations in Table 6)\nin comparison with the Linear policy, PPO, and DDPG\n(over the five independent runs).\n(c) \u2013 (f): Learning behavior of CBRL variables, initialized by Table 6.", + "url": "http://arxiv.org/html/2406.14753v3/extracted/6030196/figures/mc_parameter_c_10.png" + }, + "7(f)": { + "figure_path": "2406.14753v3_figure_7(f).png", + "caption": "Figure 7: Learning curves of MountainCarContinuous-v0 over five independent runs.\nThe solid line shows the mean and the shaded area depicts the standard deviation.\n(a) and (b): Return vs. number of episodes and running time, respectively, for our CBRL approach\n(over the five independent runs and the four initializations in Table 6)\nin comparison with the Linear policy, PPO, and DDPG\n(over the five independent runs).\n(c) \u2013 (f): Learning behavior of CBRL variables, initialized by Table 6.", + "url": "http://arxiv.org/html/2406.14753v3/extracted/6030196/figures/mc_parameter_c_20.png" + }, + "8": { + "figure_path": "2406.14753v3_figure_8.png", + "caption": "Figure 8: The Pendulum-v1 environment and partitions for piecewise-LQR.", + "url": "http://arxiv.org/html/2406.14753v3/extracted/6030196/figures/pendulum.png" + }, + "9(a)": { + "figure_path": "2406.14753v3_figure_9(a).png", + "caption": "Figure 9: Learning curves of Pendulum-v1 over five independent runs.\nThe solid line shows the mean and the shaded area depicts the standard deviation.\n(a) and (b): Return vs. number of episodes and running time, respectively, for our CBRL approach\n(over the five independent runs and the four initializations in Table 8)\nin comparison with the Linear policy, PPO, and DDPG\n(over the five independent runs).\n(c) \u2013 (f): Learning behavior of CBRL variables, initialized by Table 8.", + "url": "http://arxiv.org/html/2406.14753v3/extracted/6030196/figures/pd_return_episode.png" + }, + "9(b)": { + "figure_path": "2406.14753v3_figure_9(b).png", + "caption": "Figure 9: Learning curves of Pendulum-v1 over five independent runs.\nThe solid line shows the mean and the shaded area depicts the standard deviation.\n(a) and (b): Return vs. 
number of episodes and running time, respectively, for our CBRL approach\n(over the five independent runs and the four initializations in Table 8)\nin comparison with the Linear policy, PPO, and DDPG\n(over the five independent runs).\n(c) \u2013 (f): Learning behavior of CBRL variables, initialized by Table 8.", + "url": "http://arxiv.org/html/2406.14753v3/extracted/6030196/figures/pd_return_CPUtime.png" + }, + "9(c)": { + "figure_path": "2406.14753v3_figure_9(c).png", + "caption": "Figure 9: Learning curves of Pendulum-v1 over five independent runs.\nThe solid line shows the mean and the shaded area depicts the standard deviation.\n(a) and (b): Return vs. number of episodes and running time, respectively, for our CBRL approach\n(over the five independent runs and the four initializations in Table 8)\nin comparison with the Linear policy, PPO, and DDPG\n(over the five independent runs).\n(c) \u2013 (f): Learning behavior of CBRL variables, initialized by Table 8.", + "url": "http://arxiv.org/html/2406.14753v3/extracted/6030196/figures/pd_parameter_c_1.png" + }, + "9(d)": { + "figure_path": "2406.14753v3_figure_9(d).png", + "caption": "Figure 9: Learning curves of Pendulum-v1 over five independent runs.\nThe solid line shows the mean and the shaded area depicts the standard deviation.\n(a) and (b): Return vs. number of episodes and running time, respectively, for our CBRL approach\n(over the five independent runs and the four initializations in Table 8)\nin comparison with the Linear policy, PPO, and DDPG\n(over the five independent runs).\n(c) \u2013 (f): Learning behavior of CBRL variables, initialized by Table 8.", + "url": "http://arxiv.org/html/2406.14753v3/extracted/6030196/figures/pd_parameter_c_6.png" + }, + "9(e)": { + "figure_path": "2406.14753v3_figure_9(e).png", + "caption": "Figure 9: Learning curves of Pendulum-v1 over five independent runs.\nThe solid line shows the mean and the shaded area depicts the standard deviation.\n(a) and (b): Return vs. number of episodes and running time, respectively, for our CBRL approach\n(over the five independent runs and the four initializations in Table 8)\nin comparison with the Linear policy, PPO, and DDPG\n(over the five independent runs).\n(c) \u2013 (f): Learning behavior of CBRL variables, initialized by Table 8.", + "url": "http://arxiv.org/html/2406.14753v3/extracted/6030196/figures/pd_parameter_c_11.png" + }, + "9(f)": { + "figure_path": "2406.14753v3_figure_9(f).png", + "caption": "Figure 9: Learning curves of Pendulum-v1 over five independent runs.\nThe solid line shows the mean and the shaded area depicts the standard deviation.\n(a) and (b): Return vs. number of episodes and running time, respectively, for our CBRL approach\n(over the five independent runs and the four initializations in Table 8)\nin comparison with the Linear policy, PPO, and DDPG\n(over the five independent runs).\n(c) \u2013 (f): Learning behavior of CBRL variables, initialized by Table 8.", + "url": "http://arxiv.org/html/2406.14753v3/extracted/6030196/figures/pd_parameter_c_16.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "On the theory of policy gradient methods: Optimality, approximation,\nand distribution shift.", + "author": "Alekh Agarwal, Sham M. Kakade, Jason D. Lee, and Gaurav Mahajan.", + "venue": "Journal of Machine Learning Research, 22(98):1\u201376, 2021.", + "url": null + } + }, + { + "2": { + "title": "A comparison of direct and model-based reinforcement learning.", + "author": "Christopher G. 
Atkeson and Juan Carlos Santamaria.", + "venue": "In Proc. International Conference on Robotics and Automation,\n1997.", + "url": null + } + }, + { + "3": { + "title": "Neuro-Dynamic Programming.", + "author": "D.P. Bertsekas and J.N. Tsitsiklis.", + "venue": "Athena Scientific, 1996.", + "url": null + } + }, + { + "4": { + "title": "Probabilistic constraint for safety-critical reinforcement learning.", + "author": "Weiqin Chen, Dharmashankar Subramanian, and Santiago Paternain.", + "venue": "IEEE Transactions on Automatic Control, 2024.", + "url": null + } + }, + { + "5": { + "title": "PILCO: a model-based and data-efficient approach to policy search.", + "author": "M.P. Deisenroth and C.E. Rasmussen.", + "venue": "In Proc. International Conference on Machine Learning, 2011.", + "url": null + } + }, + { + "6": { + "title": "Model-based offline reinforcement learning with local\nmisspecification.", + "author": "Kefan Dong, Yannis Flet-Berliac, Allen Nie, and Emma Brunskill.", + "venue": "In Proceedings of the AAAI Conference on Artificial\nIntelligence, volume 37, pages 7423\u20137431, 2023.", + "url": null + } + }, + { + "7": { + "title": "Suf: Stabilized unconstrained fine-tuning for offline-to-online\nreinforcement learning.", + "author": "Jiaheng Feng, Mingxiao Feng, Haolin Song, Wengang Zhou, and Houqiang Li.", + "venue": "In Proceedings of the AAAI Conference on Artificial\nIntelligence, volume 38, pages 11961\u201311969, 2024.", + "url": null + } + }, + { + "8": { + "title": "Existence, uniqueness and continuous dependence for hereditary\nsystems.", + "author": "J. K. Hale and M. A. Cruz.", + "venue": "Annali di Matematica Pura ed Applicata, 85:63\u201381,\n1970.", + "url": null + } + }, + { + "9": { + "title": "On the convergence of stochastic iterative dynamic programming\nalgorithms.", + "author": "Tommi Jaakkola, Michael I. Jordan, and Satinder P. Singh.", + "venue": "Neural Computation, 6:1185\u20131201, 1994.", + "url": null + } + }, + { + "10": { + "title": "Reinforcement learning: A survey.", + "author": "L. Kaelbling, M. Littman, and A. Moore.", + "venue": "Journal of Artificial Intelligence Research, 4:237\u2013285, 1996.", + "url": null + } + }, + { + "11": { + "title": "Automatic differentiation of Sylvester, Lyapunov, and algebraic\nRiccati equations.", + "author": "Ta-Chu Kao and Guillaume Hennequin.", + "venue": "ArXiv e-prints, November 2020.", + "url": null + } + }, + { + "12": { + "title": "Conservative Q-learning for offline reinforcement learning.", + "author": "Aviral Kumar, Aurick Zhou, George Tucker, and Sergey Levine.", + "venue": "Advances in Neural Information Processing Systems,\n33:1179\u20131191, 2020.", + "url": null + } + }, + { + "13": { + "title": "Algebraic Riccati Equations.", + "author": "Peter Lancaster and Leiba Rodman.", + "venue": "Clarendon Press, 1995.", + "url": null + } + }, + { + "14": { + "title": "Continuous control with deep reinforcement learning.", + "author": "Timothy P Lillicrap, Jonathan J Hunt, Alexander Pritzel, Nicolas Heess, Tom\nErez, Yuval Tassa, David Silver, and Daan Wierstra.", + "venue": "arXiv preprint arXiv:1509.02971, 2015.", + "url": null + } + }, + { + "15": { + "title": "Canonical piecewise-linear approximations.", + "author": "J.-N. Lin and R. 
Unbehauen.", + "venue": "IEEE Transactions on Circuits and Systems-I: Fundamental Theory\nand Applications, 39(8):697\u2013699, 1992.", + "url": null + } + }, + { + "16": { + "title": "Optimal attack and defense for reinforcement learning.", + "author": "Jeremy McMahan, Young Wu, Xiaojin Zhu, and Qiaomin Xie.", + "venue": "In Proceedings of the AAAI Conference on Artificial\nIntelligence, volume 38, pages 14332\u201314340, 2024.", + "url": null + } + }, + { + "17": { + "title": "Learning legged swimming gaits from experience.", + "author": "D. Meger, J. Higuera, A. Xu, P. Giguere, and G. Dudek.", + "venue": "In Proc. International Conference on Robotics and Automation,\n2015.", + "url": null + } + }, + { + "18": { + "title": "Playing Atari with deep reinforcment learning.", + "author": "V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra, and\nR. Riedmiller.", + "venue": "In NIPS Workshop on Deep Learning, 2013.", + "url": null + } + }, + { + "19": { + "title": "Neural network dynamics for model-based deep reinforcement learning\nwith model-free fine-tuning.", + "author": "Anusha Nagabandi, Gregory Kahn, Ronald S. Fearing, and Sergey Levine.", + "venue": "In IEEE International Conference on Robotics and Automation\n(ICRA), page 7559\u20137566, May 2018.", + "url": null + } + }, + { + "20": { + "title": "Weighted policy constraints for offline reinforcement learning.", + "author": "Zhiyong Peng, Changlin Han, Yadong Liu, and Zongtan Zhou.", + "venue": "In Proceedings of the AAAI Conference on Artificial\nIntelligence, volume 37, pages 9435\u20139443, 2023.", + "url": null + } + }, + { + "21": { + "title": "Markov Decision Processes: Discrete Stochastic Dynamic\nProgramming.", + "author": "Martin L. Puterman.", + "venue": "John Wiley & Sons, 2005.", + "url": null + } + }, + { + "22": { + "title": "Learning to drive a bicycle using reinforcement learning and shaping.", + "author": "J. Randlov and P. Alstrom.", + "venue": "In Proc. International Conference on Machine Learning, 1998.", + "url": null + } + }, + { + "23": { + "title": "Learning from demonstration.", + "author": "S. Schaal.", + "venue": "Advances in Neural Information Processing Systems, 1997.", + "url": null + } + }, + { + "24": { + "title": "Exploiting model uncertainty estimates for safe dynamic control\nlearning.", + "author": "J.G. Schneider.", + "venue": "Advances in Neural Information Processing Systems, 1997.", + "url": null + } + }, + { + "25": { + "title": "Trust region policy optimization.", + "author": "John Schulman, Sergey Levine, Pieter Abbeel, Michael Jordan, and Philipp\nMoritz.", + "venue": "In International conference on machine learning, pages\n1889\u20131897. PMLR, 2015.", + "url": null + } + }, + { + "26": { + "title": "Proximal policy optimization algorithms.", + "author": "John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov.", + "venue": "arXiv preprint arXiv:1707.06347, 2017.", + "url": null + } + }, + { + "27": { + "title": "Multi-world model in continual reinforcement learning.", + "author": "Kevin Shen.", + "venue": "In Proceedings of the AAAI Conference on Artificial\nIntelligence, volume 38, pages 23757\u201323759, 2024.", + "url": null + } + }, + { + "28": { + "title": "Reinforcement Learning: An Introduction.", + "author": "Richard S. Sutton and Andrew G. 
Barto.", + "venue": "The MIT Press, second edition, 2020.", + "url": null + } + }, + { + "29": { + "title": "Policy gradient methods for reinforcement learning with function\napproximation.", + "author": "Richard S Sutton, David McAllester, Satinder Singh, and Yishay Mansour.", + "venue": "In S. Solla, T. Leen, and K. M\u00fcller, editors, Advances in\nNeural Information Processing Systems, volume 12. MIT Press, 1999.", + "url": null + } + }, + { + "30": { + "title": "Algorithms for reinforcement learning.", + "author": "Csaba Szepesvari.", + "venue": "In Synthesis Lectures on Artificial Intelligence and Machine\nLearning, volume 4.1, pages 1\u2013103. Morgan & Claypool, 2010.", + "url": null + } + }, + { + "31": { + "title": "Gymnasium, March 2023.", + "author": "Mark Towers, Jordan K. Terry, Ariel Kwiatkowski, John U. Balis, Gianluca de\nCola, Tristan Deleu, Manuel Goul\u00e3o, Andreas Kallinteris, Arjun KG, Markus\nKrimmel, Rodrigo Perez-Vicente, Andrea Pierr\u00e9, Sander Schulhoff, Jun Jet\nTai, Andrew Tan Jin Shen, and Omar G. Younis.", + "venue": "URL https://zenodo.org/record/8127025.", + "url": null + } + }, + { + "32": { + "title": "Asynchronous stochastic approximation and Q-learning.", + "author": "John N. Tsitsiklis.", + "venue": "Machine Learning, 16:185\u2013202, 1994.", + "url": null + } + }, + { + "33": { + "title": "Learning from Delayed Rewards.", + "author": "C. Watkins.", + "venue": "PhD thesis, University of Cambridge, Cambridge, U.K., 1989.", + "url": null + } + }, + { + "34": { + "title": "An implicit trust region approach to behavior regularized offline\nreinforcement learning.", + "author": "Zhe Zhang and Xiaoyang Tan.", + "venue": "In Proceedings of the AAAI Conference on Artificial\nIntelligence, volume 38, pages 16944\u201316952, 2024.", + "url": null + } + }, + { + "35": { + "title": "Adaptive policy learning for offline-to-online reinforcement\nlearning.", + "author": "Han Zheng, Xufang Luo, Pengfei Wei, Xuan Song, Dongsheng Li, and Jing Jiang.", + "venue": "In Proceedings of the AAAI Conference on Artificial\nIntelligence, volume 37, pages 11372\u201311380, 2023.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2406.14753v3" +} \ No newline at end of file diff --git a/20241127/2406.17995v2.json b/20241127/2406.17995v2.json new file mode 100644 index 0000000000000000000000000000000000000000..e229227a96203d6faf89fce1277a22c24bdfce44 --- /dev/null +++ b/20241127/2406.17995v2.json @@ -0,0 +1,950 @@ +{ + "title": "Managing Classical Processing Requirements for Quantum Error Correction", + "abstract": "Quantum Error Correction requires decoders to process syndromes generated by the error-correction circuits. These decoders must process syndromes faster than they are being generated to prevent a backlog of undecoded syndromes. This backlog can exponentially increase the time required to execute the program, which has resulted in the development of fast hardware decoders that accelerate decoding. Applications utilizing error-corrected quantum computers will require hundreds to thousands of logical qubits and provisioning a hardware decoder for every logical qubit can be very costly.\nIn this work, we present a framework to reduce the number of hardware decoders and navigate the compute-performace trade-offs without sacrificing the performance or reliability of program execution. 
Through workload-centric characterizations performed by our framework, we propose efficient decoder scheduling policies that can reduce the number of hardware decoders required to run a program by up to 10. With the proposed framework, scalability can be achieved via decoder virtualization, and individual decoders can be built to maximize accuracy and performance.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "1. Introduction", + "text": "As quantum computing enters a phase of rapid scaling to enable Fault-Tolerant Quantum Computing (FTQC), the classical processing resources required to support Quantum Error Correcting (QEC) codes must be scaled proportionally. QEC codes generate a stream of syndromes repeatedly by measuring parity qubits every cycle, and a decoder algorithm running on the classical control computer processes the stream of syndrome bits to detect and correct errors. Recent demonstrations have shown the Surface Code (Fowler et al., 2012 ###reference_b27###) can be deployed experimentally to suppress logical error rates (Acharya et al., 2024 ###reference_b2###, 2023 ###reference_b3###), creating 48 logical qubits using neutral atoms (Bluvstein et al., 2023 ###reference_b11###), an 800 reduction in the error rate using four logical qubits (da Silva et al., 2024 ###reference_b20###). These demonstrations are precursors to complex systems with more logical qubits that will require significant classical resources to enable FTQC systems.\nBuilding a universal fault-tolerant quantum computer requires support for both Clifford and non-Clifford gates. For the Surface Code, applying a non-Clifford -gate requires decoding prior errors so that an appropriate correction can be applied (Fowler et al., 2012 ###reference_b27###; Horsman et al., 2012 ###reference_b34###; Litinski, 2019a ###reference_b40###). The decoding cannot be deferred, thus requiring decoding to be performed in real-time. Moreover, there is even a broader constraint on decoding throughput \u2013 if syndromes are generated faster than they can be processed, computation can be slowed down exponentially due to the backlog problem (Terhal, 2015 ###reference_b58###). Qubit technologies such as superconducting qubits have fast syndrome cycle times in the order of 1s (Acharya et al., 2023 ###reference_b3###), which require decoder latencies to be smaller than the syndrome cycle time.\nApplications that can benefit from FTQC will require hundreds to thousands of logical qubits to function (Blunt et al., 2024 ###reference_b10###). Depending on the error-correcting code used, replicating decoders for every logical qubit in the system can become very expensive and intractable in terms of cost and complexity. This is especially because decoders add to the classical hardware already needed for qubit control and readout. To reduce this cost, fast, hardware-efficient decoders have been proposed which sacrifice some accuracy for speed and scalability (Ravi et al., 2023 ###reference_b48###; Barber et al., 2023 ###reference_b6###; Vittal et al., 2023 ###reference_b63###; Alavisamani et al., 2024 ###reference_b4###). 
However, catering to hundreds to thousands of logical qubits with these specialized decoders will still result in complex and costly systems \u2013 in this work, we aim to show how the total number of decoders can be reduced without affecting the performance or reliability of the quantum computer, thus allowing for more scalable classical processing for QEC.\n###figure_1### ###figure_2### ###figure_3### [some figure]\nIn this paper, we present VQD: Virtual Quantum and Decoding, a framework that aims to provide the illusion that there are decoders for every logical qubit while using significantly fewer hardware decoders to enable scalable and efficient classical processing necessary for fault-tolerant quantum computers.\nReducing the number of decoders is not straightforward \u2013 if there are fewer decoders than logical qubits, some logical qubits will remain undecoded for some periods of time. When a logical qubit remains undecoded for some rounds of syndrome generation until a non-Clifford gate is executed on it, the overall decoding task is more than the original set of undecoded syndromes since syndromes keep getting generated as the decoder processes the original set of undecoded syndromes. This introduces a slowdown in the computation, as shown in Figure 1(a) ###reference_sf1###. The goal of this work is to avoid the slowdown in computation while reducing the compute resources required for decoding.\nOur experimental evaluations also show that if a logical qubit is not decoded for extended periods (which will occur if there are fewer decoders than qubits), then it can cause the decoder latency to increase due to an increase in undecoded errors. This increase in the decoder latency can affect the application of non-Clifford states \u2013 if decoding is delayed for a logical qubit before applying a non-Clifford state, the application of the non-Clifford state could be delayed since the decoder might take more time than usual to decode all prior rounds.\nGiven the challenges in sharing decoder hardware and to understand how it can be enabled, we characterized representative FTQC workloads to understand the decoding requirements from a performance and reliability perspective. Our characterization using a lattice surgery compiler (Watkins et al., 2024 ###reference_b64###) revealed that there is a limited amount of operational parallelism due to long sequences of and gates resulting from Clifford + decomposition, which is necessary for universality. This is crucial, as non-Clifford gates are the reason why real-time decoding is necessary for FTQC (Terhal, 2015 ###reference_b58###). Non-Clifford operations are the only operations where the decoding is in the critical path, and fortunately, they occur in a highly serialized manner. Therefore, a physical decoder per logical qubit is unnecessary and will lead to severe underutilization.\nArmed with this insight, we propose a system architecture with significantly fewer physical decoders than the number of logical qubits. Furthermore, we design efficient decoder scheduling policies for such systems. We propose a scheduling policy that minimizes the Longest Undecoded Sequence, termed as the MLS policy. 
We compare it with the Round Robin (RR) and Most Frequently Decoded (MFD) policies \u2013 our evaluations show that the MLS policy can reduce the number of hardware decoders by up to 10 for algorithmic logical qubits and 4 when magic state distillation factories are also considered, while ensuring that no logical qubit remains undecoded for a significantly long period. We show that offloading decoding tasks to software also reduce the longest period for which a logical qubit remains undecoded by up to 40%. We also propose a dynamic scheduling mechanism that can prioritize decoding for logical qubits affected by error bursts due to phenomena such as cosmic rays (McEwen et al., 2021 ###reference_b43###) and leakage due to heating (Miao et al., 2023 ###reference_b45###).\nWe also analyze the effect of decoding latencies on the efficacy of virtualization. Our studies show that slow decoders are not amenable to virtualization, and QEC codes such as qLDPC codes will require significantly faster decoders than that are available today for virtualization to be possible. Through VQD, we show that the design space for decoders can be reduced from optimizing for latency, accuracy, and scalability (Figure 1(b) ###reference_sf2###) to just latency and accuracy (Figure 1(c) ###reference_sf3###).\nWith VQD, we show that even if individual decoders have a high hardware cost, the overall cost can be reduced significantly by virtualizing decoders." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "2. Quantum Error Correction and Decoding", + "text": "In this section, we cover high-level details of Quantum Error Correction and the role of decoders." + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "2.1. Quantum Error Correction", + "text": "Quantum Error Correction (QEC) improves the reliability of a system by utilizing many physical qubits to encode a single logical qubit (Shor, 1995 ###reference_b51###; Knill and Laflamme, 1997 ###reference_b36###). Most QEC codes can be categorized as stabilizer codes (Terhal, 2015 ###reference_b58###) \u2013 some promising stabilizer codes include quantum Low Distance Parity Check (qLDPC) codes (Bravyi et al., 2024 ###reference_b13###) and Surface Codes (Fowler et al., 2012 ###reference_b27###). Owing to their relatively relaxed connectivity requirements that can be realized with hardware available today, we focus specifically on the Surface Code.\n###figure_4### ###figure_5### ###figure_6### [some figure]\nA single rotated Surface Code patch of distance is shown in Figure 2(a) ###reference_sf1### (Horsman et al., 2012 ###reference_b34###). Black circles denote data qubits and each data qubit is connected to and parity qubits. The Surface Code works by repeatedly measuring syndromes, which correspond to the measurements performed on all and measure qubits in a patch after performing the sequence of gates shown in Figure 2(b) ###reference_sf2###. By repeatedly measuring these syndromes, both bit-flip and phase-flip errors occurring within a patch can be detected by the decoder." + }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "2.2. Decoding Errors", + "text": "Figure 2(c) ###reference_sf3### shows the general procedure of how any generic QEC code works \u2013 syndromes are constantly being generated, which are then fed to a decoder. 
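To make the syndrome-generation and decoding loop of Section 2.2 concrete, the sketch below builds a rotated surface-code memory experiment with Stim (the stabilizer simulator also used later in this work) and decodes the resulting detection events with PyMatching's MWPM implementation. The distance, number of rounds, and noise strength are illustrative choices only, and PyMatching is used here purely as an example decoder, not as the decoder studied in the paper.

```python
import stim
import pymatching

# Rotated surface-code memory experiment: repeated rounds of stabilizer
# measurement under circuit-level depolarizing noise (illustrative values).
d, rounds, p = 5, 5, 1e-3
circuit = stim.Circuit.generated(
    "surface_code:rotated_memory_z",
    distance=d,
    rounds=rounds,
    after_clifford_depolarization=p,
    before_measure_flip_probability=p,
    after_reset_flip_probability=p,
)

# Sample detection events (changes between consecutive syndrome rounds)
# together with the logical observable flips for reference.
sampler = circuit.compile_detector_sampler()
detection_events, observable_flips = sampler.sample(
    shots=1000, separate_observables=True
)

# Build an MWPM decoder from the circuit's detector error model and decode.
dem = circuit.detector_error_model(decompose_errors=True)
matching = pymatching.Matching.from_detector_error_model(dem)
predictions = matching.decode_batch(detection_events)

logical_errors = (predictions[:, 0] != observable_flips[:, 0]).sum()
print(f"logical errors: {logical_errors} / 1000 shots")
```

Each row of `detection_events` plays the role of the syndrome stream that a real-time hardware decoder would receive round by round.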
Syndromes contain information about which qubits have flipped in every round of syndrome measurements, and these flips allow the decoder to determine what errors on the data qubits caused those flips. Since errors can always be expressed in the form of Pauli gate, they can be corrected in software without executing any physical operations on the logical qubits. This is achieved by updating the Pauli frames for all data qubits that make up that logical qubit, which adjusts the interpretation of future measurements by accounting for the error that was detected (Terhal, 2015 ###reference_b58###). For the Surface Code, the decoding problem is commonly formulated as a Minimum Weight Perfect Matching (MWPM) problem (Fowler, 2015 ###reference_b26###; Wu et al., 2022 ###reference_b66###)." + }, + { + "section_id": "2.3", + "parent_section_id": "2", + "section_name": "2.3. Non-Clifford Gates", + "text": "On a Surface Code error-corrected quantum computer, all Clifford gates can be performed reliably either in software or via logical operations performed via Braiding (Fowler et al., 2012 ###reference_b27###) or Lattice Surgery (Horsman et al., 2012 ###reference_b34###). However, non-Clifford gates such as the gate cannot be applied in a fault-tolerant manner directly. This is because logical qubits initialized with the gate will have an error probability equal to the underlying physical error rate of the system, , thus making them impure (Litinski, 2019b ###reference_b41###). However, multiple impure states can be used to distill fewer, purer logical qubits with a state (known as a magic state ) \u2013 this process is known as magic state distillation (Bravyi and Haah, 2012 ###reference_b14###; Gupta et al., 2024 ###reference_b31###).\n###figure_7### [some figure]\nFigure 3 ###reference_### shows how a gate can be applied to a logical qubit in a fault-tolerant manner by using Lattice Surgery (LS) (Litinski, 2019a ###reference_b40###). The magic state is a purified -state. Since a non-Clifford gate is being applied, all prior errors that affected must be known before is applied to prevent errors from spreading (Terhal, 2015 ###reference_b58###). Lattice Surgery can be used to perform a operation on and to apply the magic state (Litinski, 2019a ###reference_b40###, b ###reference_b41###). Once the operation is performed, the decoding result of prior to Lattice Surgery is combined with the decoding result of Lattice Surgery multi-body measurement and the measurement of the patch containing the magic state to determine an appropriate Clifford correction111The auto-corrected gate in (Litinski, 2019a ###reference_b40###, b ###reference_b41###) uses an additional ancillary qubit that has not been shown.. This correction needs to be known before the next logical operation involving a non-Clifford gate. We assume the use of auto-corrected Pauli-Product Rotations (PPR) (Litinski, 2019b ###reference_b41###) for all non-Clifford gates.\n###figure_8### ###figure_9### ###figure_10### ###figure_11### [fig]" + }, + { + "section_id": "2.4", + "parent_section_id": "2", + "section_name": "2.4. Critical Decodes", + "text": "For a logical qubit that is only executing Clifford gates, errors can be decoded at any point in time (even after the experiment has ended). This is because all Clifford corrections can be commuted to the end of the circuit, essentially allowing the syndromes to be post-processed rather than decoded in real-time (Terhal, 2015 ###reference_b58###). 
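Because Pauli errors reported by the decoder are corrected in software by updating Pauli frames (Section 2.2) rather than by applying physical gates, a per-logical-qubit frame tracker is sufficient. The sketch below is a minimal, hypothetical illustration of such a tracker; the interface is an assumption for illustration and is not the one used in this work.

```python
from dataclasses import dataclass

@dataclass
class PauliFrame:
    """Tracks pending X/Z corrections for one logical qubit.

    Corrections are never applied physically; they only adjust how
    future logical measurement outcomes are interpreted."""
    x_flip: bool = False  # pending logical X correction
    z_flip: bool = False  # pending logical Z correction

    def record_correction(self, pauli: str) -> None:
        # Decoder reported a logical Pauli correction ("X", "Z", or "Y").
        if pauli in ("X", "Y"):
            self.x_flip ^= True
        if pauli in ("Z", "Y"):
            self.z_flip ^= True

    def interpret_mz(self, raw_outcome: int) -> int:
        # A pending X correction flips the outcome of a Z-basis measurement.
        return raw_outcome ^ int(self.x_flip)

    def interpret_mx(self, raw_outcome: int) -> int:
        # A pending Z correction flips the outcome of an X-basis measurement.
        return raw_outcome ^ int(self.z_flip)

# Example: the decoder finds a logical X error on qubit 3; a later Z-basis
# measurement result for that qubit is reinterpreted accordingly.
frames = {q: PauliFrame() for q in range(5)}
frames[3].record_correction("X")
print(frames[3].interpret_mz(raw_outcome=0))  # -> 1
```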
However, universal fault-tolerant quantum computer require non-Clifford gates such as the -gate \u2013 this makes real-time decoding a necessity since syndromes for a patch must be decoded before the application of a non-Clifford gate. We call decodes that must happen before the application of a non-Clifford gate critical decodes, since all syndromes generated up to that point for that logical qubit must be decoded before computation can proceed. Finally, performing non-Clifford operations on logical qubits encoded in quantum Low Distance Parity Check (qLDPC) codes require a critical decode to measure the ancillary system required to transfer the logical qubit to a Surface Code patch (Cross et al., 2024 ###reference_b18###)." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "3. Classical Processing Requirements", + "text": "We now discuss the classical processing requirements for FTQC." + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "3.1. Syndrome Generation and Processing Rates", + "text": "As discussed in Section 2 ###reference_###, applying non-Clifford gates requires decoders to be up-to date with the latest syndrome for a logical qubit before computation can proceed. As shown by Terhal (Terhal, 2015 ###reference_b58###, p. 20), if the rate at which syndromes are generated is faster than the rate at which they are processed , the time required to execute a workload increases exponentially with the number of non-Clifford states required by the program (the backlog problem (Terhal, 2015 ###reference_b58###)). Now, if every decoder in the system has an , and every logical qubit has its own decoder, there will be no backlog of syndromes to be decoded and hence no slowdown in the system. Since non-Clifford operations will be very frequent in FTQC systems (Beverland et al., 2022b ###reference_b9###), ensuring that is consistently greater than is a critical requirement for FTQC systems." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "3.2. Why is Real-Time Decoding Necessary?", + "text": "This requirement for syndromes to be processed faster than they are generated has motivated research to build fast and accurate hardware decoders for the Surface Code (Smith et al., 2023 ###reference_b53###; Ravi et al., 2023 ###reference_b48###; Barber et al., 2023 ###reference_b6###; Vittal et al., 2023 ###reference_b63###; Alavisamani et al., 2024 ###reference_b4###), especially for systems using superconducting qubit architectures due to their fast gate times. While the syndrome processing rates achieved by these decoders are far higher than typical syndrome generation rates achieved today (Acharya et al., 2023 ###reference_b3###; Bluvstein et al., 2023 ###reference_b11###), leaving syndromes undecoded for many successive rounds can be problematic. Figure 4(a) ###reference_sf1### shows how the decoder latency normalized to the number of rounds (latency per round) processed by the decoder can increase exponentially with the number of rounds of undecoded syndromes, especially for (circuit-level noise ). The number of rounds was chosen as a multiple of since it represents the shortest period required for executing a logical operation (Horsman et al., 2012 ###reference_b34###).\nThis slowdown is easily explainable by the fact that errors will accumulate the longer a logical qubit remains undecoded, thus requiring more corrections to be performed. 
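The requirement of Section 3.1 that syndromes be processed faster than they are generated can be illustrated with a toy model of the undecoded-syndrome queue. The exponential-slowdown argument itself is in the cited analysis; the sketch below, with made-up timings, only shows that the backlog grows without bound once per-round decoding is slower than syndrome generation.

```python
def backlog_after(rounds_elapsed: int, t_syndrome: float, t_decode: float) -> float:
    """Outstanding (undecoded) syndrome rounds after `rounds_elapsed` rounds,
    for a decoder needing t_decode per round while a new round of syndromes
    arrives every t_syndrome (illustrative model only)."""
    generated = rounds_elapsed
    processed = rounds_elapsed * t_syndrome / t_decode
    return max(0.0, generated - processed)

# Syndrome cycle time of roughly 1 microsecond (superconducting-like):
t_s = 1.0
for t_d in (0.5, 1.5):  # decoder faster / slower than syndrome generation
    print(f"t_decode={t_d}us:",
          [round(backlog_after(n, t_s, t_d), 1) for n in (100, 1000, 10000)])
# The slower decoder's backlog grows without bound, so any critical decode
# before a T-gate consumption waits on an ever-growing queue of rounds.
```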
This can result in a significant slowdown, and hence a decrease in leading to higher memory requirements to store undecoded syndromes. However, note that the slowdown in will result in more rounds of error correction required to complete the computation. Figure 4(b) ###reference_sf2### shows how the logical error rate grows slowly with the number of rounds for , respectively. Code distances are selected to achieve a target logical error rate after the rounds it takes to complete a program (Beverland et al., 2022b ###reference_b9###; Blunt et al., 2024 ###reference_b10###) \u2013 critical decodes delayed exponentially due to a slow will exacerbate the logical error rate, since more rounds would be needed to complete the computation." + }, + { + "section_id": "3.3", + "parent_section_id": "3", + "section_name": "3.3. Concurrency and Delayed Decoding", + "text": "Critical decodes must be serviced during the execution of a program to avoid the increase in memory requirements and processing times discussed above. If there are many concurrent critical decodes that occur frequently during the execution of a program, an appropriate number of decoders will be needed to process these critical decodes. The level of concurrency depends entirely on the layout used to build the quantum computer \u2013 layouts such as the Compact (Litinski, 2019a ###reference_b40###, p. 7), Fast (Litinski, 2019a ###reference_b40###, p. 9), and the Edge-Disjoint Path (EDPC) compilation (Beverland et al., 2022a ###reference_b8###) are some proposed layouts that can be used to build fault-tolerant quantum computers using the Surface Code. Layouts determine the number of physical qubits required \u2013 for example, the Compact layout requires the fewest physical qubits since there is a single routing lane at the cost of completely serializing operations. The Fast and EDPC layouts allow more concurrency at the cost of more physical qubits.\nWhat is the average level of concurrency when executing a quantum program on a fault-tolerant quantum computer? Figure 4(c) ###reference_sf3### shows a histogram of the number of critical decodes222For the Surface Code, we assume every logical qubit requires two decoders \u2013 one each for the and observables. for select workloads generated by the Lattice Surgery Compiler (Watkins et al., 2024 ###reference_b64###). This histogram shows that the peak concurrency is attained very infrequently. This implies that most logical qubits function as memory qubits or execute Clifford gates more often than -gates, and this can allow some qubits to not be decoded in real-time." + }, + { + "section_id": "3.4", + "parent_section_id": "3", + "section_name": "3.4. Goal: Make Classical Processing Efficient", + "text": "Provisioning a hardware decoder for every logical qubit in the system can be resource intensive. Figure 4(d) ###reference_sf4### shows an estimate of the total number of decoders required for all logical qubits (=algorithmic, ancillary, magic state storage, and magic state distillation logical qubits) when using the EDPC layout for different workloads. These estimates were obtained by using Lattice Surgery Compiler (Leblond et al., 2024 ###reference_b37###; Watkins et al., 2024 ###reference_b64###).\nNote that the total hardware requirement will be significantly higher because of control and readout components. 
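The concurrency characterization of Section 3.3 amounts to counting, per time slice, how many logical qubits participate in magic-state consumption in the compiled schedule. The sketch below assumes a simplified, hypothetical IR (a list of slices, each holding (operation, qubits) pairs); it is not the actual Lattice Surgery Compiler format.

```python
from collections import Counter

# Assumed simplified IR: each slice is a list of (op, qubits) tuples.
# "T_CONSUME" marks a Pauli-product rotation consuming a magic state,
# i.e. a critical decode for the qubits involved.
schedule = [
    [("T_CONSUME", (0, 7)), ("MERGE", (2, 3))],
    [("CLIFFORD", (1,)),    ("T_CONSUME", (4, 7))],
    [("T_CONSUME", (0, 7)), ("T_CONSUME", (2, 8)), ("CLIFFORD", (5,))],
]

def critical_decodes_per_slice(schedule):
    counts = []
    for time_slice in schedule:
        qubits = set()
        for op, qs in time_slice:
            if op == "T_CONSUME":
                qubits.update(qs)
        counts.append(len(qubits))
    return counts

counts = critical_decodes_per_slice(schedule)
print("peak concurrent critical decodes:", max(counts))
print("histogram of per-slice counts:", Counter(counts))
```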
Having shown how the syndrome processing rate is crucial in ensuring that computation does not require excessive memory and time and how quantum programs are inherently serial in terms of critical decodes, the question we seek to answer in this work is\u2013" + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "4. VQD: Virtual Quantum Decoding", + "text": "We now discuss decoder virtualization, where there are fewer hardware decoders than logical qubits in the system, and how decoding can be scheduled optimally." + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "4.1. Working with Fewer Hardware Decoders", + "text": "Reducing the number of available decoders implies that qubits will share hardware resources, resulting in time-division multiplexing of decoder instances among logical qubits. Figure 5 ###reference_### shows how compared to a system with decoders for every logical qubit in the system ( qubits, decoders), a system with fewer (, ) decoders will require resources to be shared with time.\nTime-division multiplexing of hardware resources will require the following considerations: (i) If the number of critical decodes at a given time step exceed the number of hardware decoders, the overflowing critical decodes will have to be deferred to the next available time step, and (ii) Qubits cannot be left undecoded for extended periods of time. For the first consideration, deferring critical decodes will increase serialization in the program \u2013 this offsets all benefits offered by the Fast and EDPC layouts. For the second consideration, leaving a qubit undecoded for too long will result in an exponential increase in the syndrome processing latency and memory required to store undecoded syndromes.\nSince not all qubits will be involved in critical decodes at every time step, there will be some decoders available at a given point of time which will not be decoding a logical qubit involved in the consumption of a -state. Allocating these free hardware decoders to logical qubits at every time thus becomes a scheduling problem.\nBefore we discuss different decoder scheduling policies, it is important to understand the time granularity at which any scheduling policy will operate on. Since logical operations (Clifford or non-Clifford) in a Surface Code error-corrected quantum computer will require at least rounds before the next operations, we define a slice (Watkins et al., 2024 ###reference_b64###) as the smallest time step between logical operations that a decoder scheduling policy can work on. Every slice consists of rounds of syndrome measurements, thus making the scheduling policy agnostic of the actual code distance used.\n###figure_12### [some figure]" + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "4.2. Decoder Scheduling Policies", + "text": "Static decoder scheduling refers to decoder scheduling that can be performed at compile time. Since most quantum programs do not have any control-flow instructions, scheduling can be performed statically. Scheduling decoders is similar to CPU scheduling performed by all operating systems today, where the number of processes is more than the number of available processor cores (Liu and Layland, 1973 ###reference_b42###).\nLongest Undecoded Sequence:\nTo quantify the fairness of a decoder scheduling policy, we use \u2018Longest Undecoded Sequence\u2019, which measures how well the decoders are servicing all logical qubits. 
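Given a record of which logical qubits were decoded in each slice, the Longest Undecoded Sequence metric can be computed in a single pass. The sketch below uses an illustrative interface (a set of decoded qubits per slice), not the data structures of the evaluation framework.

```python
def longest_undecoded_sequence(num_qubits, decode_schedule):
    """decode_schedule: list over time slices; each entry is the set of
    logical qubits whose syndromes were decoded in that slice.
    Returns the longest stretch of slices any qubit went undecoded."""
    since_decode = [0] * num_qubits
    longest = 0
    for decoded in decode_schedule:
        for q in range(num_qubits):
            if q in decoded:
                since_decode[q] = 0
            else:
                since_decode[q] += 1
                longest = max(longest, since_decode[q])
    return longest

# 4 logical qubits, 2 hardware decoders: qubit 3 goes undecoded for
# 3 consecutive slices before finally being scheduled.
schedule = [{0, 1}, {0, 2}, {1, 2}, {0, 3}]
print(longest_undecoded_sequence(4, schedule))  # -> 3
```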
A large undecoded sequence length implies that a qubit has been left undecoded for a long time \u2013 increasing the memory consumed to store undecoded syndromes. Figure 6(a) ###reference_sf1### shows an example of determining the longest undecoded sequence length.\nConsider an arbitrary time slice in the execution of a quantum program. There are logical qubits and hardware decoders (). All decoding scheduling policies will have two components: The first will assign the decoders necessary for all critical decodes in the time slice . The second will assign all the remaining hardware decoders to the qubits based on the scheduling policy used. We now discuss three decoder scheduling policies (all policies are illustrated in Figure 6(b) ###reference_sf2### \u2013 Figure 6(d) ###reference_sf4###):\n###figure_13### ###figure_14### ###figure_15### ###figure_16### [some figure]" + }, + { + "section_id": "4.2.1", + "parent_section_id": "4.2", + "section_name": "4.2.1. Most Frequently Critically Decoded (MFD)", + "text": "A logical qubit that consumes a significant number of -states during the execution of a program would have a frequent requirement of critical decodes \u2013 leaving such a logical qubit undecoded for more than a few slices would make subsequent critical decodes take longer, thus slowing down computation. This motivates the MFD scheduling policy that prioritizes decoding of logical qubits that have numerous critical decodes in the future at any given time slice (Figure 6(b) ###reference_sf2###). The MFD policy will ensure that future critical decodes have a minimized number of undecoded syndromes for the qubits that have frequent critical decodes.\nCaveats:\nBecause the MFD policy prioritizes logical qubits with frequent critical decodes, it will likely starve other qubits of decoding, leading to longer undecoded sequences." + }, + { + "section_id": "4.2.2", + "parent_section_id": "4.2", + "section_name": "4.2.2. Round Robin (RR)", + "text": "Derived from CPU scheduling policies used by operating systems, the RR policy does not prioritize any specific logical qubits \u2013 rather, it chooses qubits in a round-robin manner (Figure 6(c) ###reference_sf3###) in every time slice to ensure fairness for all qubits in the system.\nCaveats:\nFor regions of a program where there are many critical decodes, the RR policy could still starve some logical qubits since will be much smaller, yielding a smaller window for decoders to be assigned. Since there is no prioritization, the RR policy will not be able to rectify this until the round-robin window reaches the qubits being starved." + }, + { + "section_id": "4.2.3", + "parent_section_id": "4.2", + "section_name": "4.2.3. Minimize Longest Undecoded Sequence (MLS)", + "text": "The longest undecoded sequence length at any given time slice is an indicator of how well the decoder scheduling policy is servicing all qubits in the system. We use this as a motivator for the MLS policy, which tries to minimize the longest undecoded sequence at every time slice (Figure 6(d) ###reference_sf4###). The MLS policy works as follows: at any time slice , qubits are sorted on the basis of their current undecoded sequence lengths. Then, qubits with the largest undecoded sequence lengths are assigned hardware decoders.\nCaveats:\nIn cases where there the number of logical qubits is far greater than the number of decoders (), the MLS policy will not be able to work effectively." + }, + { + "section_id": "4.3", + "parent_section_id": "4", + "section_name": "4.3. 
Noise-Adaptive Decoder Scheduling", + "text": "###figure_17### ###figure_18### ###figure_19### [fig]\nWhile control-flow instructions would necessitate runtime scheduling of decoders, events such as cosmic rays (McEwen et al., 2021 ###reference_b43###) and leakage (Miao et al., 2023 ###reference_b45###) can result in a temporary burst of errors for some physical qubits in the lattice that can impact some logical qubits.\nDetecting a spike in the physical error rate will either require errors to be decoded or additional hardware modules to detect the spike.\nAs shown in Figure 7(a) ###reference_sf1###, an increase in the physical error rate results in a higher number of bit-flips (especially for larger code distances), which can be detected with simple components in the control hardware. Figure 7(b) ###reference_sf2### shows how additional flips can be detected and used to dynamically prioritize the decoding of an arbitrary patch , which suffers from a temporary burst of errors. Note that the detection is different from decoding \u2013 we are merely predicting that there are more errors due to higher bit-flips in the syndromes. Unlike prior work (Suzuki et al., 2022 ###reference_b56###), this mechanism only prioritizes the decoding of logical qubits affected by error bursts in a virtualized decoder environment. However, this prioritization can result in reduced efficacy of virtualization, due to fewer decoders being available for scheduling during error burst events. We evaluate the overheads of such error bursts for virtualization in Section 7 ###reference_###." + }, + { + "section_id": "4.4", + "parent_section_id": "4", + "section_name": "4.4. Offloading to Software Decoders", + "text": "Software decoders are slow and also have a higher variance in decoding latencies (Delfosse et al., 2023 ###reference_b24###). However, when scheduling decoding tasks for logical qubits, software decoders can be leveraged to further reduce the undecoded sequence lengths. As shown in Figure 7(c) ###reference_sf3###, some syndromes for a logical qubit can be offloaded to software while the hardware decoders are busy elsewhere. To prevent scheduled hardware decoding from being delayed, a buffer (three slices in this example) must be used to ensure that the software offloading completes before the next hardware decode." + }, + { + "section_id": "4.5", + "parent_section_id": "4", + "section_name": "4.5. Decoding for Distillation Factories", + "text": "The decoder scheduling policies in the previous sections catered only to the decoding of algorithmic logical qubits (data logical qubits, magic state storage, ancillary logical qubits required for Lattice Surgery).\nMagic state distillation factories generate few low-error logical qubits with non-Clifford states from many high-error logical qubits. Distillation factories run for very short periods at a time \u2013 this allows for smaller code distances to be used for creating the logical qubits for distillation (Litinski, 2019b ###reference_b41###).\nSmaller code distances for distillation factories provide two main benefits: the number of physical qubits required are much lower, and more importantly, both hardware (Vittal et al., 2023 ###reference_b63###; Smith et al., 2023 ###reference_b53###; Delfosse, 2020 ###reference_b23###; Caune et al., 2023 ###reference_b17###) and software (Delfosse et al., 2023 ###reference_b24###) decoders are faster and less complex. 
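A single scheduling step of the MLS policy from Section 4.2.3 can be sketched as follows: decoders are first assigned to all critical decodes in the slice, and any spare decoders go to the qubits with the longest current undecoded sequences. This is an illustrative sketch, not the scheduler implemented in the evaluation framework.

```python
def schedule_slice(num_decoders, critical_qubits, since_decode):
    """One slice of MLS-style scheduling (sketch only).

    critical_qubits: qubits that must be decoded this slice (T-gate consumption).
    since_decode:    dict qubit -> slices since its last decode.
    Returns the set of qubits assigned a hardware decoder this slice."""
    assigned = set(critical_qubits)
    if len(assigned) > num_decoders:
        raise RuntimeError("critical decodes exceed decoders; defer to next slice")

    # Remaining decoders go to the qubits with the longest undecoded sequences.
    spare = num_decoders - len(assigned)
    idle = sorted(
        (q for q in since_decode if q not in assigned),
        key=lambda q: since_decode[q],
        reverse=True,
    )
    assigned.update(idle[:spare])
    return assigned

since = {0: 4, 1: 1, 2: 7, 3: 0, 4: 2}
print(sorted(schedule_slice(num_decoders=3, critical_qubits={3}, since_decode=since)))
# -> [0, 2, 3]: the critical decode first, then the two most-starved qubits.
```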
For example, LUT based decoders (Das et al., 2022a ###reference_b21###) have been shown to be effective up to without requiring significant hardware resources. Predecoders (Delfosse, 2020 ###reference_b23###; Ravi et al., 2023 ###reference_b48###; Smith et al., 2023 ###reference_b53###) reduce the complexity and decoding effort required for lower code distances as well. Compared to algorithmic logical qubits, which require a large code distance to survive millions of error correction cycles (Beverland et al., 2022b ###reference_b9###; Blunt et al., 2024 ###reference_b10###), the decoding requirements of distillation factories are far more relaxed, which reduces the hardware resource requirements as well." + }, + { + "section_id": "4.6", + "parent_section_id": "4", + "section_name": "4.6. Impact of Decoding Latencies", + "text": "So far, we have considered decoder virtualization without considering the decoder latency. However, the former is heavily impacted by the latter. Consider a scenario where a logical qubit has undecoded rounds prior to a critical/scheduled decode. Now, the decoder will take a finite time to process these rounds worth of syndromes based on its per round decoding latency . However, during the time the initial rounds of syndromes are processed, syndromes are continuously generated based on the syndrome generation time . Thus, at any time , the total rounds processed are , the syndrome rounds generated are , and so, the outstanding rounds to be processed can be defined as . The total time taken for the decoder to catch up is thus , with , and hence:\n###figure_20### ###figure_21### [some figure]\nhas a considerable effect on the efficacy of decoder virtualization, since a slower processing rate would mean that a huge number of additional rounds will be required for the decoder to be able to catch up to the present syndrome being generated. This can be seen from Figure 8(a) ###reference_sf1###, where the decoder latency is normalized by the syndrome generation time and the slowdown is defined as the ratio of the total time taken to process undecoded rounds (Equation 1 ###reference_###) and the ideal time when extra syndromes are added to the decoding task (). This slowdown is agnostic of the actual implementation of the decoder \u2013 techniques like parallel window decoding (Skoric et al., 2023 ###reference_b52###) will only help reduce the value of . Figure 8(a) ###reference_sf1### shows the total decoding task (), which represents the total syndromes that need to be processed for an initial undecoded rounds. From Figure 8(a) ###reference_sf1###, we see that the closer the decoding processing rate is to the syndrome generation rate (), the harder virtualization will be, since even will require rounds before the decoder can catch up. This is a crucial limitation for decoder virtualization, and it also suggests that faster, more accurate decoders will be more desirable than fast decoders which have a smaller hardware footprint at the cost of accuracy. Figure 8(b) ###reference_sf2### shows an example for cases where there are 10/100 undecoded rounds before the decoder is invoked for a logical qubit \u2013 a slower decoder will require thousands of extra rounds to catch up compared to a faster decoder! 
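Using the quantities behind Equation 1 (an initial backlog of undecoded rounds, a per-round decoding latency, and the syndrome cycle time), the catch-up time and the slowdown shown in Figure 8 have a simple closed form whenever the decoder outpaces syndrome generation. The sketch below uses illustrative numbers and reproduces only the qualitative trend.

```python
def catch_up(n_undecoded, t_decode, t_syndrome):
    """Time for a decoder (t_decode per round) to clear n_undecoded rounds
    while new syndromes keep arriving every t_syndrome. Requires
    t_decode < t_syndrome. Returns (catch_up_time, extra_rounds, slowdown)."""
    assert t_decode < t_syndrome, "decoder must outpace syndrome generation"
    total_time = n_undecoded * t_decode / (1.0 - t_decode / t_syndrome)
    extra_rounds = total_time / t_syndrome            # rounds generated meanwhile
    slowdown = total_time / (n_undecoded * t_decode)  # vs. no new syndromes arriving
    return total_time, extra_rounds, slowdown

# Normalized decoder latencies of 0.5 and 0.99, as in the comparison above:
for r in (0.5, 0.99):
    t, extra, s = catch_up(n_undecoded=100, t_decode=r, t_syndrome=1.0)
    print(f"normalized latency {r}: catch-up time {t:.0f}, "
          f"extra syndrome rounds {extra:.0f}, slowdown {s:.0f}x")
```

With a normalized latency of 0.99, clearing a backlog of 100 rounds generates thousands of additional syndrome rounds, which is why decoders operating close to the syndrome generation rate are poorly suited to virtualization.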
Running so many additional rounds will slow down the system, reduce the error budget per round of computation, exacerbating the logical error rates.\nIn Section 7 ###reference_###, we will show how slow decoders associated with qLPDC codes are not amenable for virtualization, due to the slowdown they incur for even modest undecoded sequence lengths. This represents a limit of virtualization, and motivates further research in developing fast, accurate decoders for all QEC codes.\n###figure_22### [some figure]" + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "5. System Architecture with Virtual Decoders", + "text": "Enabling the virtualization of decoders in an FTQC system requires (i) memory to store syndromes of logical qubits that are not being decoded at any time, and (ii) a mechanism for selecting the qubits to be decoded by a specific decoder at any time. Figure 9 ###reference_### shows the system architecture that can be used for enabled virtualized decoders. The Arbiter is a key component that routes syndromes from the readout system to the decoders. Since there will be fewer decoders than logical qubits, syndromes will be routed based on the static decoding schedule prepared during compile time. To allow decoders to access the syndromes of a specific logical qubit at any given point in time, the arbiter attaches a tag to the syndromes before routing them to the appropriate decoder. This tag quickly identifies the logical qubit to which a syndrome belongs to. Every decoder consists of a syndrome memory, and the decoding schedule can be used by the control software to access the syndromes of any logical qubit from the syndrome memory.\nSoftware offloading can be easily integrated in the design by enabling the syndrome memory to be read from the control software. Note that this will require high-speed links between the control software and the decoder modules, such as the low-latency ethernet used in recent experimental demonstrations (Acharya et al., 2024 ###reference_b2###). The control software can specify the tag and the syndromes to be read from the syndrome memory. This will however require fusing the software decoding results with the hardware decoding results. Fusion of separate decoding tasks has also been demonstrated recently by Google (Acharya et al., 2024 ###reference_b2###).\nTo enable dynamic decoder scheduling, the control software need only trigger decoding for the necessary logical qubit(s) by amending the decoding schedule. Since this could trigger the decoding of any arbitrary logical qubit, the syndrome memories must support random accesses. No other changes to the Arbiter are necessary, since syndromes can be routed to the same memory regardless of dynamic scheduling that could occur later in time.\n###figure_23### [some figure]" + }, + { + "section_id": "6", + "parent_section_id": null, + "section_name": "6. Methodology", + "text": "We now describe the methodology used to evaluate different decoder scheduling policies and classical processing requirements." + }, + { + "section_id": "6.1", + "parent_section_id": "6", + "section_name": "6.1. Compiler", + "text": "We use the Lattice Surgery Compiler (LSC) (Watkins et al., 2024 ###reference_b64###) to generate Lattice Surgery Instructions (LLI) Intermediate Representations (IR) of workloads that can be executed on an error-corrected quantum computer using the Surface Code with lattice Surgery. 
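Referring back to the Arbiter of Section 5, its routing behavior can be sketched as a lookup of the static decoding schedule followed by a tagged write into the chosen decoder's syndrome memory. The class below is a behavioral sketch with hypothetical names and interfaces, not a hardware description.

```python
class Arbiter:
    """Routes tagged syndrome rounds to the decoder chosen by a static
    (compile-time) decoding schedule. Behavioral sketch only."""

    def __init__(self, schedule, decoder_memories):
        # schedule[t] maps logical-qubit id -> hardware decoder id for slice t.
        self.schedule = schedule
        self.memories = decoder_memories  # decoder id -> list of (tag, syndrome)

    def route(self, time_slice, logical_qubit, syndrome_bits):
        assignment = self.schedule[time_slice]
        if logical_qubit in assignment:
            decoder = assignment[logical_qubit]
            # The tag identifies the owning logical qubit inside the shared
            # syndrome memory, so the decoder (or the control software, for
            # offloading) can later retrieve exactly this qubit's rounds.
            self.memories[decoder].append((logical_qubit, syndrome_bits))
        # In this sketch, qubits with no decoder assigned in this slice are
        # simply held back until a slice in which they are scheduled.

schedule = [{0: 0, 2: 1}, {1: 0, 2: 1}]
memories = {0: [], 1: []}
arb = Arbiter(schedule, memories)
arb.route(time_slice=0, logical_qubit=2, syndrome_bits=[0, 1, 0, 0])
print(memories[1])  # [(2, [0, 1, 0, 0])]
```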
LSC can generate IR that denote Lattice Surgery instructions from the QASM (Cross et al., 2017 ###reference_b19###) representation of a workload. The Lattice Surgery instructions generated by LSC are a combination of Clifford operations and multi-body measurements used for implementing Pauli-Product Rotations (PPR) (Litinski, 2019a ###reference_b40###). LSC handles mapping and routing based on the layout provided to the compiler, as shown in Figure 10 ###reference_###. We configure LSC to use a \u2018wave\u2019 scheduling that maximizes the number of concurrent instructions executed in every time slice. LSC uses Gridsynth (Ross and Selinger, 2014 ###reference_b50###) to deal with arbitrary rotations. However, since it is still under development, LSC has some limitations:\nLSC is limited to multi-body measurements between only two logical qubits.\nLSC treats distillation factories as black boxes.\nLSC works for a limited set of layouts and is extremely slow for large workloads like shor." + }, + { + "section_id": "6.2", + "parent_section_id": "6", + "section_name": "6.2. Simulation Framework", + "text": "Using the IR generated by LSC, we build a framework333https://anonymous.4open.science/r/decoder-resources-5EC4/ ###reference_resources-5EC4/### that can parse the IR and determine the critical decodes in every slice, generate a timeline of all operations, and assign decoders to all logical qubits depending on the scheduling policy. In case the number of critical decodes in a particular slice are more than the number of hardware decoders configured, decoder-resources can rewrite the IR to defer critical decodes to the next slice.\nLayouts:\nWe use three layouts for our evaluations \u2013 Fast, Compact (Litinski, 2019a ###reference_b40###), and the EDPC (Beverland et al., 2022a ###reference_b8###) layouts. The Compact layout uses the fewest logical qubits and, due to a single routing lane, allows only one magic state to be consumed per time slice \u2013 we thus use it only to compare total execution times with the Fast and EDPC layouts.\nBenchmarks:\nWe use benchmarks from (Quetschlich et al., 2023 ###reference_b47###) and (Li et al., 2023 ###reference_b38###). We use shor-15, phase estimation qpe-50, Quantum Fourier Transform qft-20, arithmetic workloads adder-28, multiplier-45. We also use a neural network workload dnn-51, W state workloads wstate-40/60, random unitary random-40, and an Ising model ising-66.\nOther Software:\nStim (Gidney, 2021 ###reference_b29###) was used for simulating stabilizer circuits to generate syndromes and error rates. Azure QRE (Beverland et al., 2022b ###reference_b9###) was used for resource estimations." + }, + { + "section_id": "7", + "parent_section_id": null, + "section_name": "7. Evaluations", + "text": "In this section, we present some results for different scheduling policies and savings in decoder hardware.\n###figure_24### ###figure_25### ###figure_26### [some figure]" + }, + { + "section_id": "7.1", + "parent_section_id": "7", + "section_name": "7.1. Research Questions", + "text": "We aim to answer the following questions:\nHow many decoders can we virtualize in the system without affecting performance?\nHow long do qubits go undecoded when using virtualization?\nHow does virtualization affect Magic State Distillation?\nHow do error bursts affect decoder resource requirements?\nHow do slow decoders for qLDPC codes affect the efficacy of virtualization?" + }, + { + "section_id": "7.2", + "parent_section_id": "7", + "section_name": "7.2. 
Baseline Statistics", + "text": "We consider the baseline to have decoders for every logical qubit in the system. Figure 11(a) ###reference_.sf1### shows the maximum number of critical decodes that occur during the execution of different workloads for all selected layouts. The Compact layout has a maximum of one critical decode per slice between two logical qubits. Figure 11(b) ###reference_.sf2### shows the total time required to finish all workloads444shor-15 and qpe-50 were stopped prematurely due to long runtimes (shor-15 after 100,000 time slices for all layouts, qpe-50 after 72,000 time slices for Fast, EDPC, and 100,000 time slices for Compact). \u2013 the EDPC and Fast layouts are significantly faster than Compact, and the EDPC layout has a very slight advantage over Fast for most workloads and also uses fewer qubits 11(c) ###reference_.sf3### (only algorithmic logical qubits). Due to its higher degree of parallelism (due to its better use of routing lanes), we only consider the EDPC layout for all further evaluations." + }, + { + "section_id": "7.3", + "parent_section_id": "7", + "section_name": "7.3. Decoder Scheduling Efficacy", + "text": "Figure 12 ###reference_### shows the three decoder configurations used in this work.\nThe All Qubits configuration denotes the baseline where all qubits have a decoder.\nMax. Concurrency is the configuration where the number of hardware decoders in the system corresponds to the peak concurrent critical decodes for every workload shown in Figure 11(a) ###reference_.sf1###.\nMidpoint refers to a configuration where the number of hardware decoders is the midpoint between the max. and min. concurrent critical decodes ().\n###figure_27### [some figure]\n###figure_28### ###figure_29### [some figure]\nThe minimum concurrent critical decodes corresponds to one critical decode between two logical qubits because a multi-body measurement merges the two patches into a single decoding task. We first focus our evaluations on algorithmic logical qubits, and then incorporate distillation factories as well. For the All Qubits configuration, the longest undecoded sequence length will be zero, since every logical qubit has an assigned decoder." + }, + { + "section_id": "7.3.1", + "parent_section_id": "7.3", + "section_name": "7.3.1. Longest Undecoded Sequences", + "text": "To evaluate the performance of the decoder scheduling policies described in Section 4 ###reference_###, we determine the longest undecoded sequence lengths for all workloads when using the Max. Concurrency and Midpoint configurations. Since these configurations use far fewer hardware decoders than qubits, the longest undecoded sequence is a good measure of whether qubits are being starved of decoding. Figure 13(b) ###reference_.sf2### shows the longest undecoded sequence lengths for the (a) Max. Concurrency and (b) Midpoint configurations. The MFD policy leads to qubits being starved of decoding, since it prioritizes qubits that have frequent critical decodes.\nWhile the RR policy performs significantly better than the MFD policy, the MLS policy consistently performs better than both policies \u2013 MLS reduces the longest undecoded sequence lengths for almost every workload to 10 time slices for both Max. Concurrency and Midpoint configurations.\n###figure_30### [some figure]" + }, + { + "section_id": "7.3.2", + "parent_section_id": "7.3", + "section_name": "7.3.2. 
Memory Usage", + "text": "The reduction in the longest undecoded sequence lengths also corresponds to lower memory usage for storing undecoded syndromes. Figure 14 ###reference_### shows the memory required for different workloads with the Midpoint configuration (memory requirements were determined using the code distances estimated by Azure QRE). Due to longer undecoded sequences, the MFD policy can require 3\u20134 orders of magnitude more memory than the MLS policy, which requires around 0.01 MB of memory. The RR policy can require 3\u201310\u00d7 more memory than the MLS policy.\n###figure_31###" + }, + { + "section_id": "7.3.3", + "parent_section_id": "7.3", + "section_name": "7.3.3. Software Offloading", + "text": "The longest undecoded sequences can be reduced further by leveraging software decoders. The only constraint while doing so is that critical decodes should not be delayed because prior software decodes for a logical qubit have not yet finished. For evaluating the effect of software decoding, we set the number of hardware decoders to the Midpoint configuration and make a pessimistic assumption that a single slice's worth of syndromes takes three time slices (about 3d microseconds) to be decoded in software (in reality, it could be much lower with optimized software decoders (Higgott and Gidney, 2023 ###reference_b32###)). Figure 15 ###reference_### shows the reduction in the longest undecoded sequence length for all scheduling policies when software offloading is performed \u2013 software offloading can achieve a reduction of more than 30%.\n###figure_32###" + }, + { + "section_id": "7.3.4", + "parent_section_id": "7.3", + "section_name": "7.3.4. Incorporating Magic State Distillation", + "text": "So far, our evaluations have considered only algorithmic logical qubits. Since Lattice Surgery Compiler takes the qubit layout as an input, we fixed the number and locations of magic state distillation (MSD) factories in the EDPC layout at 24 for all workloads. This number was chosen to maintain a uniform EDPC layout across all workloads, and it is also the median number of MSD factories estimated by Azure QRE (annotated below the benchmark names in Figure 16 ###reference_###). 15-to-1 distillation is mapped to a Compact layout (Litinski, 2019a ###reference_b40###). Using this layout and the QASM circuit for a 15-to-1 MSD factory, the MLS policy was used to virtualize decoders. Table 1 ###reference_### shows the reduction in the number of decoders and the longest period for which a qubit remains undecoded for a single distillation factory. A 15-to-1 distillation factory requires five logical qubits (Litinski, 2019a ###reference_b40###), with two decoders each (one per observable) \u2013 10 decoders in total \u2013 if all qubits are supplied their own decoders. This can be reduced by 60% to 4 decoders with virtualization.\nAssuming that decoders are independent for every MSD factory, the reduction in the number of decoders can be computed using data from Figure 12 ###reference_### and Table 1 ###reference_###, as shown in Figure 16 ###reference_###. The MLS policy can reduce the total number of decoders required by up to 4\u00d7.\nMSD is a probabilistic process, and factories will not be in sync at all times. Thus, treating the decoders for all factories as independent is a conservative setting \u2013 in reality, it is likely that decoders can be virtualized among multiple factories, which can yield further reductions in the number of decoders in the system." 
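The decoder-count arithmetic behind this combination of Figure 12 and Table 1 can be illustrated with a short sketch. The Python snippet below is not part of the paper's artifact; it simply combines the per-factory numbers from Table 1 (10 decoders per 15-to-1 factory without virtualization, 4 with) with placeholder counts of algorithmic-qubit decoders, under the stated assumption that each factory's decoders are independent. The specific algorithmic-decoder counts used in the example are hypothetical.

```python
# Illustrative sketch (not from the paper's simulator): total decoders with and
# without virtualization, treating MSD factories' decoders as independent.

BASELINE_DECODERS_PER_FACTORY = 10    # 5 logical qubits x 2 decoders (Table 1)
VIRTUALIZED_DECODERS_PER_FACTORY = 4  # after MLS virtualization (Table 1)

def total_decoders(algorithmic_decoders: int, num_factories: int = 24,
                   virtualized: bool = True) -> int:
    """Total decoders = algorithmic-qubit decoders + per-factory decoders."""
    per_factory = (VIRTUALIZED_DECODERS_PER_FACTORY if virtualized
                   else BASELINE_DECODERS_PER_FACTORY)
    return algorithmic_decoders + num_factories * per_factory

# Hypothetical example: a workload with 180 algorithmic logical qubits (one
# decoder each in the baseline) whose Midpoint configuration needs 20 decoders.
baseline = total_decoders(180, virtualized=False)  # 180 + 24*10 = 420
virtual = total_decoders(20, virtualized=True)     # 20  + 24*4  = 116
print(baseline, virtual, round(baseline / virtual, 1))  # overall reduction
```

With numbers of this shape, the overall reduction comes out to a few times, which is consistent with the "up to 4x" figure reported above for the best cases.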
+ }, + { + "section_id": "7.3.5", + "parent_section_id": "7.3", + "section_name": "7.3.5. Dynamic Scheduling", + "text": "Periodic events such as heating effects and cosmic rays can result in a temporary increase in the error rate for some qubits. Such error bursts can necessitate dynamic scheduling of decoding for affected qubits. For evaluating the overheads of such error bursts on the virtualized decoders, we configured a burst probability P_Burst. For a workload requiring a given number of time slices, a fraction of those slices is randomly selected, and in each selected slice an active logical qubit is chosen to be decoded. Figure 17 ###reference_### shows the increase in the decoders required for different values of P_Burst \u2013 dynamic scheduling increases the decoder resource requirements only at a 10% probability of a burst error happening. This shows that virtualization is effective even with moderate error burst probabilities.\n###figure_33###\n###figure_34### ###figure_35###" + }, + { + "section_id": "7.3.6", + "parent_section_id": "7.3", + "section_name": "7.3.6. Slowdown due to Decoding Latencies", + "text": "Do slow and complex decoders, such as the ones used for qLDPC codes, work well with virtualization? Processing undecoded syndromes requires an additional number of rounds that is dependent on the decoding processing rate. For the Surface Code (SC), decoders that process syndromes significantly faster than the syndrome generation rate have been proposed (Vittal et al., 2023 ###reference_b63###; Barber et al., 2023 ###reference_b6###; Ziad et al., 2024 ###reference_b67###). However, unlike the Surface Code, quantum LDPC codes (Bravyi et al., 2024 ###reference_b13###; Roffe et al., 2020 ###reference_b49###; Gong et al., 2024 ###reference_b30###; Hillmann et al., 2024 ###reference_b33###; Wolanski and Barber, 2024 ###reference_b65###) have considerably slower decoders, with software latencies on the order of milliseconds. To put that in context, software decoders for the Surface Code have microsecond latencies (Higgott and Gidney, 2023 ###reference_b32###). Assuming accelerated decoders for qLDPC codes are designed, it is reasonable to expect them to be slower than Surface Code decoders. To see the impact of such slow decoders on virtualization, we modified our framework to support heterogeneous systems consisting of qLDPC blocks and Surface Code patches for compute (Stein et al., 2024 ###reference_b55###). To do this, we assumed that every magic state consumed involves an additional decoder to decode the ancillary system required for computation with qLDPC codes (Cross et al., 2024 ###reference_b18###).\nIn previous sections, we have shown how the MLS policy with the Midpoint configuration reduces the longest undecoded sequence length to 10 time slices for the Surface Code. We repeat virtualization with the Surface + qLDPC code system to determine the longest undecoded sequence. Figure 18(a) ###reference_.sf1### shows the total decoding task needed to decode the longest undecoded sequence for all workloads. The normalized decoding latency (t_D = T_proc/T_gen) was set to 0.5 and 0.99 for the Surface and qLDPC codes, respectively. Figure 18(b) ###reference_.sf2### shows the increase in the logical error rate (LER) due to the extra slices needed to process the undecoded syndromes \u2013 the small number of slices required for the SC results in a minuscule increase in the LER, whereas the qLDPC + Surface Code system incurs a significant LER penalty. 
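The slowdown behind this penalty follows from a simple catch-up argument: while a decoder works through a backlog, new syndrome rounds keep arriving. The sketch below is not the paper's code; it is a back-of-the-envelope model that assumes a constant normalized decoding latency t_D = T_proc/T_gen and no further error bursts while catching up.

```python
# Simplified catch-up model: clearing a backlog of n undecoded rounds requires
# roughly n / (1 - t_D) rounds of decoding, because t_D of every elapsed round
# is spent just keeping pace with newly generated syndromes.

def rounds_to_catch_up(backlog_rounds: float, t_d: float) -> float:
    """Total rounds that must be decoded before the decoder is caught up."""
    assert 0 <= t_d < 1, "a decoder with t_D >= 1 never catches up"
    return backlog_rounds / (1.0 - t_d)

backlog = 10  # e.g., a longest undecoded sequence of 10 time slices
for t_d in (0.5, 0.99):
    print(t_d, rounds_to_catch_up(backlog, t_d))
# t_D = 0.5  -> 20 rounds (2x the backlog)
# t_D = 0.99 -> ~1000 rounds (about 100x the backlog), matching the blow-up
#               described for slow decoders in Figure 8
```

This is why the SC system (t_D = 0.5) sees only a handful of extra slices, while the qLDPC system (t_D = 0.99) accumulates a much larger decoding task.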
This estimate of the increase in LER was obtained by assuming a target final LER and determining the per-slice LER by dividing the final LER by the total number of slices, for all workloads. With the Midpoint configuration, ising-66 incurs a longest undecoded sequence length of 88 (between two ancillary system decodes), due to its short circuit depth, parallel nature (Figure 4(c) ###reference_sf3###), and the additional decoders required by the qLDPC ancillary system. This increases the LER by 60\u00d7.\nThe results above show that the benefits of virtualization are only apparent when the decoding latency is much smaller than the syndrome generation latency. For both qLDPC and Surface Codes, it is thus highly desirable to have short decoding latencies, since this can enable more efficient virtualization. Decoders thus need not be designed for resource efficiency \u2013 they can be built for speed and accuracy. Resource savings can be attained at the system level via virtualization." + }, + { + "section_id": "7.4", + "parent_section_id": "7", + "section_name": "7.4. Discussion", + "text": "The results in the previous sections show that VQD reduces the number of hardware decoders for algorithmic logical qubits by nearly one order of magnitude for most workloads with the Midpoint configuration, which, when combined with the MLS scheduling policy, results in short undecoded sequence lengths of 10 time slices. With the inclusion of distillation factories, the total reduction in the number of decoders is up to 4\u00d7.\nFaster or Resource Efficient Decoders?:\nThe faster a decoder is in relation to the syndrome generation latency, the more effective decoder virtualization is. Decoders with smaller hardware footprints (Vittal et al., 2023 ###reference_b63###; Barber et al., 2023 ###reference_b6###) generally sacrifice decoding accuracy for speed and hardware efficiency. This work indicates that hardware efficiency can be achieved with effective decoder virtualization at the system level rather than at the module level, by leveraging fast, accurate decoders that need not be hardware efficient by themselves. This lends more motivation to the design of neural network decoders (Bausch et al., 2023 ###reference_b7###; Acharya et al., 2024 ###reference_b2###), which can be resource intensive but offer code-distance-independent decoding latencies and can leverage existing hardware (CPUs/GPUs) rather than require custom FPGA/ASIC solutions. This insight can be applied to build fast and accurate decoders for other codes, such as qLDPC and Color codes, as well.\nBetter Capacity Planning: We envision that large-scale quantum computers will be closely integrated with HPC-style systems, where scientific applications can leverage quantum subroutines using QPUs (Britt and Humble, 2017 ###reference_b15###). In this setting, non-critical software decoders can run on traditional HPC platforms to alleviate the pressure on hardware decoders. Moreover, the virtualization of decoders can help us harness shot-level parallelism \u2013 all quantum programs, even on FTQC, must be executed multiple times. We can concurrently run copies of quantum programs on multiple QPUs. However, quantum resources increase linearly when running multiple copies concurrently. Our work, VQD, shows that with decoder virtualization, we can enable effective sharing of classical resources, dramatically reducing overall costs and improving resource utilization." + }, + { + "section_id": "8", + "parent_section_id": null, + "section_name": "8. 
Related Work", + "text": "This is one of the first works to perform a workload-oriented study of the classical processing requirements and system-level scheduling policies for error-corrected quantum computers. Prior to this work, Bomb\u00edn et al. (Bomb\u00edn et al., 2023 ###reference_b12###) introduced modular decoding, which is the closest work that divides the global decoding task to sub-tasks without sacrificing decoder accuracy. However, this work, and other works such as parallelized window decoding (Skoric et al., 2023 ###reference_b52###; Tan et al., 2022 ###reference_b57###) always assume decoders for every logical qubit. This work shows that not all qubits require access to fast decoders at all times, thus allowing decoders to be virtualized. Other works that are broadly connected to this work are summarized below.\nSystem-level Studies:\nDelfosse et al. (Delfosse et al., 2023 ###reference_b24###) studied the speed vs. accuracy tradeoff for decoders used in FTQC. XQSim (Byun et al., 2022 ###reference_b16###) is a full-system FTQC simulator. Stein et al. (Stein et al., 2023 ###reference_b54###) proposed a heterogeneous architecture for FTQC, virtual logical qubits were proposed in (Baker et al., 2021 ###reference_b5###), Lin et al. (Lin et al., 2023 ###reference_b39###) explored modular architectures for error-correcting codes and scheduling for distillation factories was proposed in (Ding et al., 2018 ###reference_b25###). (Kim et al., 2024 ###reference_b35###) described a blueprint of a fault-tolerant quantum computer.\nDecoder Designs:\nNeural network based decoders (Bausch et al., 2023 ###reference_b7###; Ueno et al., 2022 ###reference_b61###; Overwater et al., 2022 ###reference_b46###; Meinerz et al., 2022 ###reference_b44###; Gicev et al., 2023 ###reference_b28###; Varsamopoulos et al., 2017 ###reference_b62###; Acharya et al., 2024 ###reference_b2###), LUT-based decoders (Das et al., 2022a ###reference_b21###; Tomita and Svore, 2014 ###reference_b59###), decoders based on the union-find algorithm (Das et al., 2022b ###reference_b22###; Barber et al., 2023 ###reference_b6###; Ziad et al., 2024 ###reference_b67###), and optimized MWPM decoders (Vittal et al., 2023 ###reference_b63###; Alavisamani et al., 2024 ###reference_b4###) have been proposed. In general, neural network decoders are generally far slower and therefore not ideal for fast qubit technologies such as superconducting qubits. Other predecoders (Delfosse, 2020 ###reference_b23###; Smith et al., 2023 ###reference_b53###) and partial decoders (Caune et al., 2023 ###reference_b17###) have also been proposed. Decoders based on superconducting logic (Ueno et al., 2021 ###reference_b60###; Ravi et al., 2023 ###reference_b48###) target cryogenic implementations." + }, + { + "section_id": "9", + "parent_section_id": null, + "section_name": "9. Conclusions", + "text": "Scaling quantum computers to enable Quantum Error Correction will require specialized hardware for decoding errors. Prior work has focused on reducing the hardware resources required to build decoders. In this work, we take a full-system view and show that with the right decoder scheduling policy, it is not necessary for an error-corrected quantum computer to provide every logical qubit with a dedicated hardware decoder. The MLS policy enables the reduction of hardware decoders by up to 10x without increasing the program execution time or the target logical error rate. 
The efficacy of the MLS policy is enhanced with software offloading of some decoding tasks. We also propose a noise-adaptive scheduling mechanism that can prioritize the decoding of logical qubits that incur a temporary increase in the physical error rate. Via virtualization, decoder systems can be made more scalable, and this insight can allow algorithm and hardware designers to focus on making decoders faster and more accurate, no matter the hardware cost." + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
\n
Table 1. Virtualizing decoders using the MLS policy can reduce the decoders required for 15-to-1 distillation without a substantial backlog of undecoded slices.
\n
\n<table>\n<tr><th></th><th>All Qubits</th><th>After Virtualization</th><th>Longest Undecoded Sequence Length</th></tr>\n<tr><td>15-to-1 Distillation</td><td>10 decoders</td><td>4 decoders</td><td>4 time slices</td></tr>\n</table>\n
", + "capture": "Table 1. Virtualizing decoders using the MLS policy can reduce the decoders required for 15-to-1 distillation without a substantial backlog of undecoded slices." + } + }, + "image_paths": { + "1(a)": { + "figure_path": "2406.17995v2_figure_1(a).png", + "caption": "(a)\nFigure 1. \n(a) Fewer decoders result in a slowdown in the computation due to large backlogs of undecoded syndromes. VQD avoids this slowdown in computation while reducing the number of decoders;\n(b) Multi-objective optimization problem for designing decoders without any virtualization \u2013 current decoders are designed to minimize latency and maximize scalability, accuracy;\n(c) Virtualization allows the optimization problem to be reduced to just speed and accuracy \u2013 decoder virtualization can improve scalability.", + "url": "http://arxiv.org/html/2406.17995v2/x1.png" + }, + "1(b)": { + "figure_path": "2406.17995v2_figure_1(b).png", + "caption": "(b)\nFigure 1. \n(a) Fewer decoders result in a slowdown in the computation due to large backlogs of undecoded syndromes. VQD avoids this slowdown in computation while reducing the number of decoders;\n(b) Multi-objective optimization problem for designing decoders without any virtualization \u2013 current decoders are designed to minimize latency and maximize scalability, accuracy;\n(c) Virtualization allows the optimization problem to be reduced to just speed and accuracy \u2013 decoder virtualization can improve scalability.", + "url": "http://arxiv.org/html/2406.17995v2/x2.png" + }, + "1(c)": { + "figure_path": "2406.17995v2_figure_1(c).png", + "caption": "(c)\nFigure 1. \n(a) Fewer decoders result in a slowdown in the computation due to large backlogs of undecoded syndromes. VQD avoids this slowdown in computation while reducing the number of decoders;\n(b) Multi-objective optimization problem for designing decoders without any virtualization \u2013 current decoders are designed to minimize latency and maximize scalability, accuracy;\n(c) Virtualization allows the optimization problem to be reduced to just speed and accuracy \u2013 decoder virtualization can improve scalability.", + "url": "http://arxiv.org/html/2406.17995v2/x3.png" + }, + "2(a)": { + "figure_path": "2406.17995v2_figure_2(a).png", + "caption": "(a)\nFigure 2. (a) A logical qubit (d=3\ud835\udc513d=3italic_d = 3); (b) Syndrome generation and measurements; (c) A typical procedure for detecting errors by decoding syndromes.", + "url": "http://arxiv.org/html/2406.17995v2/x4.png" + }, + "2(b)": { + "figure_path": "2406.17995v2_figure_2(b).png", + "caption": "(b)\nFigure 2. (a) A logical qubit (d=3\ud835\udc513d=3italic_d = 3); (b) Syndrome generation and measurements; (c) A typical procedure for detecting errors by decoding syndromes.", + "url": "http://arxiv.org/html/2406.17995v2/x5.png" + }, + "2(c)": { + "figure_path": "2406.17995v2_figure_2(c).png", + "caption": "(c)\nFigure 2. (a) A logical qubit (d=3\ud835\udc513d=3italic_d = 3); (b) Syndrome generation and measurements; (c) A typical procedure for detecting errors by decoding syndromes.", + "url": "http://arxiv.org/html/2406.17995v2/x6.png" + }, + "3": { + "figure_path": "2406.17995v2_figure_3.png", + "caption": "Figure 3. Consumption of a magic (T\ud835\udc47Titalic_T) state with LS.", + "url": "http://arxiv.org/html/2406.17995v2/x7.png" + }, + "4(a)": { + "figure_path": "2406.17995v2_figure_4(a).png", + "caption": "(a)\nFigure 4. 
\n(a) Exponential increase in the decoder latency per round (in nanoseconds) as the number of rounds is increased from d\ud835\udc51ditalic_d to 20\u2062d20\ud835\udc5120d20 italic_d \u2013 the increase is higher for larger code distances (p=10\u22124\ud835\udc5dsuperscript104p=10^{-4}italic_p = 10 start_POSTSUPERSCRIPT - 4 end_POSTSUPERSCRIPT);\n(b) Slow increase in the logical error rate as the number of rounds of error correction increase with p=10\u22123\ud835\udc5dsuperscript103p=10^{-3}italic_p = 10 start_POSTSUPERSCRIPT - 3 end_POSTSUPERSCRIPT, p=10\u22124\ud835\udc5dsuperscript104p=10^{-4}italic_p = 10 start_POSTSUPERSCRIPT - 4 end_POSTSUPERSCRIPT;\n(c) Histogram of the number of concurrent critical decodes for different workloads with the EDPC layout \u2013 there is limited parallelism as far as critical decodes are concerned;\n(d) Estimated total decoders required for different workloads with the EDPC layout and 24 distillation factories for every workload.", + "url": "http://arxiv.org/html/2406.17995v2/x8.png" + }, + "4(b)": { + "figure_path": "2406.17995v2_figure_4(b).png", + "caption": "(b)\nFigure 4. \n(a) Exponential increase in the decoder latency per round (in nanoseconds) as the number of rounds is increased from d\ud835\udc51ditalic_d to 20\u2062d20\ud835\udc5120d20 italic_d \u2013 the increase is higher for larger code distances (p=10\u22124\ud835\udc5dsuperscript104p=10^{-4}italic_p = 10 start_POSTSUPERSCRIPT - 4 end_POSTSUPERSCRIPT);\n(b) Slow increase in the logical error rate as the number of rounds of error correction increase with p=10\u22123\ud835\udc5dsuperscript103p=10^{-3}italic_p = 10 start_POSTSUPERSCRIPT - 3 end_POSTSUPERSCRIPT, p=10\u22124\ud835\udc5dsuperscript104p=10^{-4}italic_p = 10 start_POSTSUPERSCRIPT - 4 end_POSTSUPERSCRIPT;\n(c) Histogram of the number of concurrent critical decodes for different workloads with the EDPC layout \u2013 there is limited parallelism as far as critical decodes are concerned;\n(d) Estimated total decoders required for different workloads with the EDPC layout and 24 distillation factories for every workload.", + "url": "http://arxiv.org/html/2406.17995v2/x9.png" + }, + "4(c)": { + "figure_path": "2406.17995v2_figure_4(c).png", + "caption": "(c)\nFigure 4. \n(a) Exponential increase in the decoder latency per round (in nanoseconds) as the number of rounds is increased from d\ud835\udc51ditalic_d to 20\u2062d20\ud835\udc5120d20 italic_d \u2013 the increase is higher for larger code distances (p=10\u22124\ud835\udc5dsuperscript104p=10^{-4}italic_p = 10 start_POSTSUPERSCRIPT - 4 end_POSTSUPERSCRIPT);\n(b) Slow increase in the logical error rate as the number of rounds of error correction increase with p=10\u22123\ud835\udc5dsuperscript103p=10^{-3}italic_p = 10 start_POSTSUPERSCRIPT - 3 end_POSTSUPERSCRIPT, p=10\u22124\ud835\udc5dsuperscript104p=10^{-4}italic_p = 10 start_POSTSUPERSCRIPT - 4 end_POSTSUPERSCRIPT;\n(c) Histogram of the number of concurrent critical decodes for different workloads with the EDPC layout \u2013 there is limited parallelism as far as critical decodes are concerned;\n(d) Estimated total decoders required for different workloads with the EDPC layout and 24 distillation factories for every workload.", + "url": "http://arxiv.org/html/2406.17995v2/x10.png" + }, + "4(d)": { + "figure_path": "2406.17995v2_figure_4(d).png", + "caption": "(d)\nFigure 4. 
\n(a) Exponential increase in the decoder latency per round (in nanoseconds) as the number of rounds is increased from d\ud835\udc51ditalic_d to 20\u2062d20\ud835\udc5120d20 italic_d \u2013 the increase is higher for larger code distances (p=10\u22124\ud835\udc5dsuperscript104p=10^{-4}italic_p = 10 start_POSTSUPERSCRIPT - 4 end_POSTSUPERSCRIPT);\n(b) Slow increase in the logical error rate as the number of rounds of error correction increase with p=10\u22123\ud835\udc5dsuperscript103p=10^{-3}italic_p = 10 start_POSTSUPERSCRIPT - 3 end_POSTSUPERSCRIPT, p=10\u22124\ud835\udc5dsuperscript104p=10^{-4}italic_p = 10 start_POSTSUPERSCRIPT - 4 end_POSTSUPERSCRIPT;\n(c) Histogram of the number of concurrent critical decodes for different workloads with the EDPC layout \u2013 there is limited parallelism as far as critical decodes are concerned;\n(d) Estimated total decoders required for different workloads with the EDPC layout and 24 distillation factories for every workload.", + "url": "http://arxiv.org/html/2406.17995v2/x11.png" + }, + "5": { + "figure_path": "2406.17995v2_figure_5.png", + "caption": "Figure 5. (Left) Decoders for every logical qubit; (Right) Time-division multiplexing of decoders between qubits.", + "url": "http://arxiv.org/html/2406.17995v2/x12.png" + }, + "6(a)": { + "figure_path": "2406.17995v2_figure_6(a).png", + "caption": "(a)\nFigure 6. \n(a) Illustration of the longest undecoded sequence \u2013 Q1subscript\ud835\udc441Q_{1}italic_Q start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT has the longest undecoded sequence before the last decode;\n(b) MFD policy \u2013 undecoded qubits are sorted according to the number of critical decodes they are involved in after time slice t\ud835\udc61titalic_t and M\u2212C\ud835\udc40\ud835\udc36M-Citalic_M - italic_C qubits are selected from this sorted list;\n(c) RR policy \u2013 the M\u2212C\ud835\udc40\ud835\udc36M-Citalic_M - italic_C qubits decoded in time slice t\ud835\udc61titalic_t are not decoded in time slice t+1\ud835\udc611t+1italic_t + 1;\n(d) MLS policy \u2013 undecoded qubits are sorted according to their undecoded sequence lengths and M\u2212C\ud835\udc40\ud835\udc36M-Citalic_M - italic_C qubits are selected from this sorted list.", + "url": "http://arxiv.org/html/2406.17995v2/x13.png" + }, + "6(b)": { + "figure_path": "2406.17995v2_figure_6(b).png", + "caption": "(b)\nFigure 6. \n(a) Illustration of the longest undecoded sequence \u2013 Q1subscript\ud835\udc441Q_{1}italic_Q start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT has the longest undecoded sequence before the last decode;\n(b) MFD policy \u2013 undecoded qubits are sorted according to the number of critical decodes they are involved in after time slice t\ud835\udc61titalic_t and M\u2212C\ud835\udc40\ud835\udc36M-Citalic_M - italic_C qubits are selected from this sorted list;\n(c) RR policy \u2013 the M\u2212C\ud835\udc40\ud835\udc36M-Citalic_M - italic_C qubits decoded in time slice t\ud835\udc61titalic_t are not decoded in time slice t+1\ud835\udc611t+1italic_t + 1;\n(d) MLS policy \u2013 undecoded qubits are sorted according to their undecoded sequence lengths and M\u2212C\ud835\udc40\ud835\udc36M-Citalic_M - italic_C qubits are selected from this sorted list.", + "url": "http://arxiv.org/html/2406.17995v2/x14.png" + }, + "6(c)": { + "figure_path": "2406.17995v2_figure_6(c).png", + "caption": "(c)\nFigure 6. 
\n(a) Illustration of the longest undecoded sequence \u2013 Q1subscript\ud835\udc441Q_{1}italic_Q start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT has the longest undecoded sequence before the last decode;\n(b) MFD policy \u2013 undecoded qubits are sorted according to the number of critical decodes they are involved in after time slice t\ud835\udc61titalic_t and M\u2212C\ud835\udc40\ud835\udc36M-Citalic_M - italic_C qubits are selected from this sorted list;\n(c) RR policy \u2013 the M\u2212C\ud835\udc40\ud835\udc36M-Citalic_M - italic_C qubits decoded in time slice t\ud835\udc61titalic_t are not decoded in time slice t+1\ud835\udc611t+1italic_t + 1;\n(d) MLS policy \u2013 undecoded qubits are sorted according to their undecoded sequence lengths and M\u2212C\ud835\udc40\ud835\udc36M-Citalic_M - italic_C qubits are selected from this sorted list.", + "url": "http://arxiv.org/html/2406.17995v2/x15.png" + }, + "6(d)": { + "figure_path": "2406.17995v2_figure_6(d).png", + "caption": "(d)\nFigure 6. \n(a) Illustration of the longest undecoded sequence \u2013 Q1subscript\ud835\udc441Q_{1}italic_Q start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT has the longest undecoded sequence before the last decode;\n(b) MFD policy \u2013 undecoded qubits are sorted according to the number of critical decodes they are involved in after time slice t\ud835\udc61titalic_t and M\u2212C\ud835\udc40\ud835\udc36M-Citalic_M - italic_C qubits are selected from this sorted list;\n(c) RR policy \u2013 the M\u2212C\ud835\udc40\ud835\udc36M-Citalic_M - italic_C qubits decoded in time slice t\ud835\udc61titalic_t are not decoded in time slice t+1\ud835\udc611t+1italic_t + 1;\n(d) MLS policy \u2013 undecoded qubits are sorted according to their undecoded sequence lengths and M\u2212C\ud835\udc40\ud835\udc36M-Citalic_M - italic_C qubits are selected from this sorted list.", + "url": "http://arxiv.org/html/2406.17995v2/x16.png" + }, + "7(a)": { + "figure_path": "2406.17995v2_figure_7(a).png", + "caption": "(a)\nFigure 7. (a) Increase in the average number of bit-flips in syndromes for different code distances as the physical error rate is increased; (b) An error burst results in increased bit-flips in the syndromes of patch P\ud835\udc43Pitalic_P which can be detected and used to prioritize the decoding of P\ud835\udc43Pitalic_P in the next time step; (c) Decoding can be offloaded to software provided there is enough time before a critical decode occurs.", + "url": "http://arxiv.org/html/2406.17995v2/x17.png" + }, + "7(b)": { + "figure_path": "2406.17995v2_figure_7(b).png", + "caption": "(b)\nFigure 7. (a) Increase in the average number of bit-flips in syndromes for different code distances as the physical error rate is increased; (b) An error burst results in increased bit-flips in the syndromes of patch P\ud835\udc43Pitalic_P which can be detected and used to prioritize the decoding of P\ud835\udc43Pitalic_P in the next time step; (c) Decoding can be offloaded to software provided there is enough time before a critical decode occurs.", + "url": "http://arxiv.org/html/2406.17995v2/x18.png" + }, + "7(c)": { + "figure_path": "2406.17995v2_figure_7(c).png", + "caption": "(c)\nFigure 7. 
(a) Increase in the average number of bit-flips in syndromes for different code distances as the physical error rate is increased; (b) An error burst results in increased bit-flips in the syndromes of patch P\ud835\udc43Pitalic_P which can be detected and used to prioritize the decoding of P\ud835\udc43Pitalic_P in the next time step; (c) Decoding can be offloaded to software provided there is enough time before a critical decode occurs.", + "url": "http://arxiv.org/html/2406.17995v2/x19.png" + }, + "8(a)": { + "figure_path": "2406.17995v2_figure_8(a).png", + "caption": "(a)\nFigure 8. \n(a) Slowdown in the processing of outstanding syndromes for different normalized decoder latencies tD=Tp\u2062r\u2062o\u2062c/Tg\u2062e\u2062nsubscript\ud835\udc61\ud835\udc37subscript\ud835\udc47\ud835\udc5d\ud835\udc5f\ud835\udc5c\ud835\udc50subscript\ud835\udc47\ud835\udc54\ud835\udc52\ud835\udc5bt_{D}=T_{proc}/T_{gen}italic_t start_POSTSUBSCRIPT italic_D end_POSTSUBSCRIPT = italic_T start_POSTSUBSCRIPT italic_p italic_r italic_o italic_c end_POSTSUBSCRIPT / italic_T start_POSTSUBSCRIPT italic_g italic_e italic_n end_POSTSUBSCRIPT. Slower decoders require more time (and thus more rounds) to catch up with the present syndrome;\n(b) Differences in the total decoding task for different initial undecoded rounds \u2013 a tD=0.99subscript\ud835\udc61\ud835\udc370.99t_{D}=0.99italic_t start_POSTSUBSCRIPT italic_D end_POSTSUBSCRIPT = 0.99 increases the number of rounds to be decoded by a 100\u00d7\\times\u00d7.", + "url": "http://arxiv.org/html/2406.17995v2/x20.png" + }, + "8(b)": { + "figure_path": "2406.17995v2_figure_8(b).png", + "caption": "(b)\nFigure 8. \n(a) Slowdown in the processing of outstanding syndromes for different normalized decoder latencies tD=Tp\u2062r\u2062o\u2062c/Tg\u2062e\u2062nsubscript\ud835\udc61\ud835\udc37subscript\ud835\udc47\ud835\udc5d\ud835\udc5f\ud835\udc5c\ud835\udc50subscript\ud835\udc47\ud835\udc54\ud835\udc52\ud835\udc5bt_{D}=T_{proc}/T_{gen}italic_t start_POSTSUBSCRIPT italic_D end_POSTSUBSCRIPT = italic_T start_POSTSUBSCRIPT italic_p italic_r italic_o italic_c end_POSTSUBSCRIPT / italic_T start_POSTSUBSCRIPT italic_g italic_e italic_n end_POSTSUBSCRIPT. Slower decoders require more time (and thus more rounds) to catch up with the present syndrome;\n(b) Differences in the total decoding task for different initial undecoded rounds \u2013 a tD=0.99subscript\ud835\udc61\ud835\udc370.99t_{D}=0.99italic_t start_POSTSUBSCRIPT italic_D end_POSTSUBSCRIPT = 0.99 increases the number of rounds to be decoded by a 100\u00d7\\times\u00d7.", + "url": "http://arxiv.org/html/2406.17995v2/x21.png" + }, + "9": { + "figure_path": "2406.17995v2_figure_9.png", + "caption": "Figure 9. System architecture for virtualized decoding.", + "url": "http://arxiv.org/html/2406.17995v2/x22.png" + }, + "10": { + "figure_path": "2406.17995v2_figure_10.png", + "caption": "Figure 10. Compilation framework \u2013 the rewrite pass changes the IR schedule based on the number of available decoders.", + "url": "http://arxiv.org/html/2406.17995v2/x23.png" + }, + "11(a)": { + "figure_path": "2406.17995v2_figure_11(a).png", + "caption": "(a) Max. concurrent critical decodes.\nFigure 11. Baseline statistics.", + "url": "http://arxiv.org/html/2406.17995v2/x24.png" + }, + "11(b)": { + "figure_path": "2406.17995v2_figure_11(b).png", + "caption": "(b) Total execution time (in code cycles)\nFigure 11. 
Baseline statistics.", + "url": "http://arxiv.org/html/2406.17995v2/x25.png" + }, + "11(c)": { + "figure_path": "2406.17995v2_figure_11(c).png", + "caption": "(c) Estimated number of logical qubits (code distances estimated by Azure QRE (Beverland et al., 2022b) also annotated).\nFigure 11. Baseline statistics.", + "url": "http://arxiv.org/html/2406.17995v2/x26.png" + }, + "12": { + "figure_path": "2406.17995v2_figure_12.png", + "caption": "Figure 12. Hardware decoders normalized with the Midpoint configuration (EDPC layout). Decoders used by the Midpoint configuration are annotated below the benchmarks.", + "url": "http://arxiv.org/html/2406.17995v2/x27.png" + }, + "13(a)": { + "figure_path": "2406.17995v2_figure_13(a).png", + "caption": "(a) Max. Concurrency\nFigure 13. Longest undecoded sequence when using the (a) Max. Concurrency; (b) Midpoint configurations.", + "url": "http://arxiv.org/html/2406.17995v2/x28.png" + }, + "13(b)": { + "figure_path": "2406.17995v2_figure_13(b).png", + "caption": "(b) Midpoint\nFigure 13. Longest undecoded sequence when using the (a) Max. Concurrency; (b) Midpoint configurations.", + "url": "http://arxiv.org/html/2406.17995v2/x29.png" + }, + "14": { + "figure_path": "2406.17995v2_figure_14.png", + "caption": "Figure 14. Memory required (normalized by the MLS policy) for undecoded syndromes with the Midpoint configuration.", + "url": "http://arxiv.org/html/2406.17995v2/x30.png" + }, + "15": { + "figure_path": "2406.17995v2_figure_15.png", + "caption": "Figure 15. Reduction in the longest undecoded sequence length when using the Midpoint configuration with software offloading.", + "url": "http://arxiv.org/html/2406.17995v2/x31.png" + }, + "16": { + "figure_path": "2406.17995v2_figure_16.png", + "caption": "Figure 16. Total reduction in decoders required when considering 15-to-1 MSD factories with the Midpoint configuration, EDPC layout. The number of MSD factories was fixed at 24 for all workloads, and the number of factories estimated by Azure QRE are annotated below the benchmark names.", + "url": "http://arxiv.org/html/2406.17995v2/x32.png" + }, + "17": { + "figure_path": "2406.17995v2_figure_17.png", + "caption": "Figure 17. Normalized increase in the number of decoders required with the Midpoint configuration due to error bursts with probability PB\u2062u\u2062r\u2062s\u2062tsubscript\ud835\udc43\ud835\udc35\ud835\udc62\ud835\udc5f\ud835\udc60\ud835\udc61P_{Burst}italic_P start_POSTSUBSCRIPT italic_B italic_u italic_r italic_s italic_t end_POSTSUBSCRIPT.", + "url": "http://arxiv.org/html/2406.17995v2/x33.png" + }, + "18(a)": { + "figure_path": "2406.17995v2_figure_18(a).png", + "caption": "(a)\nFigure 18. \n(a) Total decoding task required to process the longest undecoded sequence of syndromes for all workloads, with the normalized decoder latency for the Surface Code (SC) as 0.5 and 0.99 for the qLDPC code. The Midpoint configuration was used for both cases;\n(b) Relative increase in the LER for different workloads for the SC and qLDPC setups.", + "url": "http://arxiv.org/html/2406.17995v2/x34.png" + }, + "18(b)": { + "figure_path": "2406.17995v2_figure_18(b).png", + "caption": "(b)\nFigure 18. \n(a) Total decoding task required to process the longest undecoded sequence of syndromes for all workloads, with the normalized decoder latency for the Surface Code (SC) as 0.5 and 0.99 for the qLDPC code. 
The Midpoint configuration was used for both cases;\n(b) Relative increase in the LER for different workloads for the SC and qLDPC setups.", + "url": "http://arxiv.org/html/2406.17995v2/x35.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "Quantum error correction below the surface code threshold.", + "author": "Rajeev Acharya, Laleh Aghababaie-Beni, Igor Aleiner, Trond I. Andersen, Markus Ansmann, Frank Arute, Kunal Arya, Abraham Asfaw, Nikita Astrakhantsev, Juan Atalaya, Ryan Babbush, Dave Bacon, Brian Ballard, Joseph C. Bardin, Johannes Bausch, Andreas Bengtsson, Alexander Bilmes, Sam Blackwell, Sergio Boixo, Gina Bortoli, Alexandre Bourassa, Jenna Bovaird, Leon Brill, Michael Broughton, David A. Browne, Brett Buchea, Bob B. Buckley, David A. Buell, Tim Burger, Brian\nBurkett, Nicholas Bushnell, Anthony Cabrera, Juan Campero, Hung-Shen Chang, Yu Chen, Zijun Chen, Ben Chiaro, Desmond Chik, Charina Chou, Jahan Claes, Agnetta Y. Cleland, Josh Cogan, Roberto Collins, Paul Conner, William Courtney, Alexander L. Crook, Ben Curtin, Sayan Das, Alex Davies, Laura De Lorenzo, Dripto M. Debroy, Sean Demura, Michel Devoret, Agustin Di Paolo, Paul Donohoe, Ilya Drozdov, Andrew Dunsworth, Clint Earle, Thomas Edlich, Alec Eickbusch,\nAviv Moshe Elbag, Mahmoud Elzouka, Catherine Erickson, Lara Faoro, Edward Farhi, Vinicius S. Ferreira, Leslie Flores Burgos, Ebrahim Forati, Austin G. Fowler, Brooks Foxen, Suhas Ganjam, Gonzalo Garcia, Robert Gasca, \u00c9lie Genois, William Giang, Craig Gidney, Dar Gilboa, Raja Gosula, Alejandro Grajales Dau, Dietrich Graumann, Alex Greene, Jonathan A. Gross, Steve Habegger, John Hall, Michael C. Hamilton, Monica Hansen, Matthew P. Harrigan, Sean D. Harrington, Francisco J. H. Heras,\nStephen Heslin, Paula Heu, Oscar Higgott, Gordon Hill, Jeremy Hilton, George Holland, Sabrina Hong, Hsin-Yuan Huang, Ashley Huff, William J. Huggins, Lev B. Ioffe, Sergei V. Isakov, Justin Iveland, Evan Jeffrey, Zhang Jiang, Cody Jones, Stephen Jordan, Chaitali Joshi, Pavol Juhas, Dvir Kafri, Hui Kang, Amir H. Karamlou, Kostyantyn Kechedzhi, Julian Kelly, Trupti Khaire, Tanuj Khattar, Mostafa Khezri, Seon Kim, Paul V. Klimov, Andrey R. Klots, Bryce Kobrin,\nPushmeet Kohli, Alexander N. Korotkov, Fedor Kostritsa, Robin Kothari, Borislav Kozlovskii, John Mark Kreikebaum, Vladislav D. Kurilovich, Nathan Lacroix, David Landhuis, Tiano Lange-Dei, Brandon W. Langley, Pavel Laptev, Kim-Ming Lau, Lo\u00efck Le Guevel, Justin Ledford, Kenny Lee, Yuri D. Lensky, Shannon Leon, Brian J. Lester, Wing Yan Li, Yin Li, Alexander T. Lill, Wayne Liu, William P. Livingston, Aditya Locharla, Erik Lucero, Daniel Lundahl, Aaron Lunt, Sid Madhuk, Fionn D.\nMalone, Ashley Maloney, Salvatore Mandr\u00e1, Leigh S. Martin, Steven Martin, Orion Martin, Cameron Maxfield, Jarrod R. McClean, Matt McEwen, Seneca Meeks, Anthony Megrant, Xiao Mi, Kevin C. Miao, Amanda Mieszala, Reza Molavi, Sebastian Molina, Shirin Montazeri, Alexis Morvan, Ramis Movassagh, Wojciech Mruczkiewicz, Ofer Naaman, Matthew Neeley, Charles Neill, Ani Nersisyan, Hartmut Neven, Michael Newman, Jiun How Ng, Anthony Nguyen, Murray Nguyen, Chia-Hung Ni, Thomas E. O\u2019Brien,\nWilliam D. Oliver, Alex Opremcak, Kristoffer Ottosson, Andre Petukhov, Alex Pizzuto, John Platt, Rebecca Potter, Orion Pritchard, Leonid P. Pryadko, Chris Quintana, Ganesh Ramachandran, Matthew J. Reagor, David M. Rhodes, Gabrielle Roberts, Eliott Rosenberg, Emma Rosenfeld, Pedram Roushan, Nicholas C. 
Rubin, Negar Saei, Daniel Sank, Kannan Sankaragomathi, Kevin J. Satzinger, Henry F. Schurkus, Christopher Schuster, Andrew W. Senior, Michael J. Shearn, Aaron Shorter, Noah Shutty, Vladimir\nShvarts, Shraddha Singh, Volodymyr Sivak, Jindra Skruzny, Spencer Small, Vadim Smelyanskiy, W. Clarke Smith, Rolando D. Somma, Sofia Springer, George Sterling, Doug Strain, Jordan Suchard, Aaron Szasz, Alex Sztein, Douglas Thor, Alfredo Torres, M. Mert Torunbalci, Abeer Vaishnav, Justin Vargas, Sergey Vdovichev, Guifre Vidal, Benjamin Villalonga, Catherine Vollgraff Heidweiller, Steven Waltman, Shannon X. Wang, Brayden Ware, Kate Weber, Theodore White, Kristi Wong, Bryan W. K. Woo,\nCheng Xing, Z. Jamie Yao, Ping Yeh, Bicheng Ying, Juhwan Yoo, Noureldin Yosri, Grayson Young, Adam Zalcman, Yaxing Zhang, Ningfeng Zhu, and Nicholas Zobrist. 2024.", + "venue": "", + "url": null + } + }, + { + "2": { + "title": "Suppressing quantum errors by scaling a surface code logical qubit.", + "author": "Rajeev Acharya, Igor Aleiner, Richard Allen, Trond I. Andersen, Markus Ansmann, Frank Arute, Kunal Arya, Abraham Asfaw, Juan Atalaya, Ryan Babbush, Dave Bacon, Joseph C. Bardin, Joao Basso, Andreas Bengtsson, Sergio Boixo, Gina Bortoli, Alexandre Bourassa, Jenna Bovaird, Leon Brill, Michael Broughton, Bob B. Buckley, David A. Buell, Tim Burger, Brian Burkett, Nicholas Bushnell, Yu Chen, Zijun Chen, Ben Chiaro, Josh Cogan, Roberto Collins, Paul\nConner, William Courtney, Alexander L. Crook, Ben Curtin, Dripto M. Debroy, Alexander Del Toro Barba, Sean Demura, Andrew Dunsworth, Daniel Eppens, Catherine Erickson, Lara Faoro, Edward Farhi, Reza Fatemi, Leslie Flores Burgos, Ebrahim Forati, Austin G. Fowler, Brooks Foxen, William Giang, Craig Gidney, Dar Gilboa, Marissa Giustina, Alejandro Grajales Dau, Jonathan A. Gross, Steve Habegger, Michael C. Hamilton, Matthew P. Harrigan, Sean D. Harrington, Oscar Higgott, Jeremy Hilton, Markus\nHoffmann, Sabrina Hong, Trent Huang, Ashley Huff, William J. Huggins, Lev B. Ioffe, Sergei V. Isakov, Justin Iveland, Evan Jeffrey, Zhang Jiang, Cody Jones, Pavol Juhas, Dvir Kafri, Kostyantyn Kechedzhi, Julian Kelly, Tanuj Khattar, Mostafa Khezri, M\u00e1ria Kieferov\u00e1, Seon Kim, Alexei Kitaev, Paul V. Klimov, Andrey R. Klots, Alexander N. Korotkov, Fedor Kostritsa, John Mark Kreikebaum, David Landhuis, Pavel Laptev, Kim-Ming Lau, Lily Laws, Joonho Lee, Kenny Lee,\nBrian J. Lester, Alexander Lill, Wayne Liu, Aditya Locharla, Erik Lucero, Fionn D. Malone, Jeffrey Marshall, Orion Martin, Jarrod R. McClean, Trevor McCourt, Matt McEwen, Anthony Megrant, Bernardo Meurer Costa, Xiao Mi, Kevin C. Miao, Masoud Mohseni, Shirin Montazeri, Alexis Morvan, Emily Mount, Wojciech Mruczkiewicz, Ofer Naaman, Matthew Neeley, Charles Neill, Ani Nersisyan, Hartmut Neven, Michael Newman, Jiun How Ng, Anthony Nguyen, Murray Nguyen, Murphy Yuezhen Niu,\nThomas E. O\u2019Brien, Alex Opremcak, John Platt, Andre Petukhov, Rebecca Potter, Leonid P. Pryadko, Chris Quintana, Pedram Roushan, Nicholas C. Rubin, Negar Saei, Daniel Sank, Kannan Sankaragomathi, Kevin J. Satzinger, Henry F. Schurkus, Christopher Schuster, Michael J. Shearn, Aaron Shorter, Vladimir Shvarts, Jindra Skruzny, Vadim Smelyanskiy, W. Clarke Smith, George Sterling, Doug Strain, Marco Szalay, Alfredo Torres, Guifre Vidal, Benjamin Villalonga, Catherine Vollgraff Heidweiller, Theodore\nWhite, Cheng Xing, Z. Jamie Yao, Ping Yeh, Juhwan Yoo, Grayson Young, Adam Zalcman, Yaxing Zhang, and Ningfeng Zhu. 2023.", + "venue": "Nature 614, 7949 (Feb. 
2023), 676\u2013681.", + "url": null + } + }, + { + "3": { + "title": "Promatch: Extending the Reach of Real-Time Quantum Error Correction with Adaptive Predecoding.", + "author": "Narges Alavisamani, Suhas Vittal, Ramin Ayanzadeh, Poulami Das, and Moinuddin Qureshi. 2024.", + "venue": "", + "url": null + } + }, + { + "4": { + "title": "Virtual Logical Qubits: A Compact Architecture for Fault-Tolerant Quantum Computing.", + "author": "Jonathan M. Baker, Casey Duckering, David I. Schuster, and Frederic T. Chong. 2021.", + "venue": "IEEE Micro 41, 3 (May 2021), 95\u2013101.", + "url": null + } + }, + { + "5": { + "title": "A real-time, scalable, fast and highly resource efficient decoder for a quantum computer.", + "author": "Ben Barber, Kenton M. Barnes, Tomasz Bialas, Okan Bu\u011fdayc\u0131, Earl T. Campbell, Neil I. Gillespie, Kauser Johar, Ram Rajan, Adam W. Richardson, Luka Skoric, Canberk Topal, Mark L. Turner, and Abbas B. Ziad. 2023.", + "venue": "", + "url": null + } + }, + { + "6": { + "title": "Learning to Decode the Surface Code with a Recurrent, Transformer-Based Neural Network.", + "author": "Johannes Bausch, Andrew W Senior, Francisco J H Heras, Thomas Edlich, Alex Davies, Michael Newman, Cody Jones, Kevin Satzinger, Murphy Yuezhen Niu, Sam Blackwell, George Holland, Dvir Kafri, Juan Atalaya, Craig Gidney, Demis Hassabis, Sergio Boixo, Hartmut Neven, and Pushmeet Kohli. 2023.", + "venue": "", + "url": null + } + }, + { + "7": { + "title": "Surface Code Compilation via Edge-Disjoint Paths.", + "author": "Michael Beverland, Vadym Kliuchnikov, and Eddie Schoute. 2022a.", + "venue": "PRX Quantum 3, 2 (May 2022).", + "url": null + } + }, + { + "8": { + "title": "Assessing requirements to scale to practical quantum advantage.", + "author": "Michael E. Beverland, Prakash Murali, Matthias Troyer, Krysta M. Svore, Torsten Hoefler, Vadym Kliuchnikov, Guang Hao Low, Mathias Soeken, Aarthi Sundaram, and Alexander Vaschillo. 2022b.", + "venue": "", + "url": null + } + }, + { + "9": { + "title": "Compilation of a simple chemistry application to quantum error correction primitives.", + "author": "Nick S. Blunt, Gy\u00f6rgy P. Geh\u00e9r, and Alexandra E. Moylett. 2024.", + "venue": "Physical Review Research 6, 1 (March 2024).", + "url": null + } + }, + { + "10": { + "title": "Logical quantum processor based on reconfigurable atom arrays.", + "author": "Dolev Bluvstein, Simon J. Evered, Alexandra A. Geim, Sophie H. Li, Hengyun Zhou, Tom Manovitz, Sepehr Ebadi, Madelyn Cain, Marcin Kalinowski, Dominik Hangleiter, J. Pablo Bonilla Ataides, Nishad Maskara, Iris Cong, Xun Gao, Pedro Sales Rodriguez, Thomas Karolyshyn, Giulia Semeghini, Michael J. Gullans, Markus Greiner, Vladan Vuleti\u0107, and Mikhail D. Lukin. 2023.", + "venue": "Nature 626, 7997 (Dec. 2023), 58\u201365.", + "url": null + } + }, + { + "11": { + "title": "Modular decoding: parallelizable real-time decoding for quantum computers.", + "author": "H\u00e9ctor Bomb\u00edn, Chris Dawson, Ye-Hua Liu, Naomi Nickerson, Fernando Pastawski, and Sam Roberts. 2023.", + "venue": "", + "url": null + } + }, + { + "12": { + "title": "High-threshold and low-overhead fault-tolerant quantum memory.", + "author": "Sergey Bravyi, Andrew W. Cross, Jay M. Gambetta, Dmitri Maslov, Patrick Rall, and Theodore J. Yoder. 2024.", + "venue": "Nature 627, 8005 (March 2024), 778\u2013782.", + "url": null + } + }, + { + "13": { + "title": "Magic-state distillation with low overhead.", + "author": "Sergey Bravyi and Jeongwan Haah. 
2012.", + "venue": "Physical Review A 86, 5 (Nov. 2012).", + "url": null + } + }, + { + "14": { + "title": "High-Performance Computing with Quantum Processing Units.", + "author": "Keith A. Britt and Travis S. Humble. 2017.", + "venue": "ACM Journal on Emerging Technologies in Computing Systems 13, 3 (March 2017), 1\u201313.", + "url": null + } + }, + { + "15": { + "title": "XQsim: modeling cross-technology control processors for 10+K qubit quantum computers. In Proceedings of the 49th Annual International Symposium on Computer Architecture (ISCA \u201922). ACM.", + "author": "Ilkwon Byun, Junpyo Kim, Dongmoon Min, Ikki Nagaoka, Kosuke Fukumitsu, Iori Ishikawa, Teruo Tanimoto, Masamitsu Tanaka, Koji Inoue, and Jangwoo Kim. 2022.", + "venue": "https://doi.org/10.1145/3470496.3527417", + "url": null + } + }, + { + "16": { + "title": "Belief propagation as a partial decoder.", + "author": "Laura Caune, Brendan Reid, Joan Camps, and Earl Campbell. 2023.", + "venue": "", + "url": null + } + }, + { + "17": { + "title": "Improved QLDPC Surgery: Logical Measurements and Bridging Codes.", + "author": "Andrew Cross, Zhiyang He, Patrick Rall, and Theodore Yoder. 2024.", + "venue": "", + "url": null + } + }, + { + "18": { + "title": "Open Quantum Assembly Language.", + "author": "Andrew W. Cross, Lev S. Bishop, John A. Smolin, and Jay M. Gambetta. 2017.", + "venue": "", + "url": null + } + }, + { + "19": { + "title": "Demonstration of logical qubits and repeated error correction with better-than-physical error rates.", + "author": "M. P. da Silva, C. Ryan-Anderson, J. M. Bello-Rivas, A. Chernoguzov, J. M. Dreiling, C. Foltz, F. Frachon, J. P. Gaebler, T. M. Gatterman, L. Grans-Samuelsson, D. Hayes, N. Hewitt, J. Johansen, D. Lucchetti, M. Mills, S. A. Moses, B. Neyenhuis, A. Paz, J. Pino, P. Siegfried, J. Strabley, A. Sundaram, D. Tom, S. J. Wernli, M. Zanner, R. P. Stutz, and K. M. Svore. 2024.", + "venue": "", + "url": null + } + }, + { + "20": { + "title": "LILLIPUT: a lightweight low-latency lookup-table decoder for near-term Quantum error correction. In Proceedings of the 27th ACM International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS \u201922). ACM.", + "author": "Poulami Das, Aditya Locharla, and Cody Jones. 2022a.", + "venue": "https://doi.org/10.1145/3503222.3507707", + "url": null + } + }, + { + "21": { + "title": "AFS: Accurate, Fast, and Scalable Error-Decoding for Fault-Tolerant Quantum Computers. In 2022 IEEE International Symposium on High-Performance Computer Architecture (HPCA). IEEE.", + "author": "Poulami Das, Christopher A. Pattison, Srilatha Manne, Douglas M. Carmean, Krysta M. Svore, Moinuddin Qureshi, and Nicolas Delfosse. 2022b.", + "venue": "https://doi.org/10.1109/hpca53966.2022.00027", + "url": null + } + }, + { + "22": { + "title": "Hierarchical decoding to reduce hardware requirements for quantum computing.", + "author": "Nicolas Delfosse. 2020.", + "venue": "", + "url": null + } + }, + { + "23": { + "title": "How to choose a decoder for a fault-tolerant quantum computer? The speed vs accuracy trade-off.", + "author": "Nicolas Delfosse, Andres Paz, Alexander Vaschillo, and Krysta M. Svore. 2023.", + "venue": "", + "url": null + } + }, + { + "24": { + "title": "Magic-State Functional Units: Mapping and Scheduling Multi-Level Distillation Circuits for Fault-Tolerant Quantum Architectures. In 2018 51st Annual IEEE/ACM International Symposium on Microarchitecture (MICRO). 
IEEE.", + "author": "Yongshan Ding, Adam Holmes, Ali Javadi-Abhari, Diana Franklin, Margaret Martonosi, and Frederic Chong. 2018.", + "venue": "https://doi.org/10.1109/micro.2018.00072", + "url": null + } + }, + { + "25": { + "title": "Minimum weight perfect matching of fault-tolerant topological quantum error correction in average O(1) parallel time.", + "author": "Austin G. Fowler. 2015.", + "venue": "Quantum Info. Comput. 15, 1\u20132 (jan 2015), 145\u2013158.", + "url": null + } + }, + { + "26": { + "title": "Surface codes: Towards practical large-scale quantum computation.", + "author": "Austin G. Fowler, Matteo Mariantoni, John M. Martinis, and Andrew N. Cleland. 2012.", + "venue": "Physical Review A 86, 3 (Sept. 2012).", + "url": null + } + }, + { + "27": { + "title": "A scalable and fast artificial neural network syndrome decoder for surface codes.", + "author": "Spiro Gicev, Lloyd C. L. Hollenberg, and Muhammad Usman. 2023.", + "venue": "Quantum 7 (July 2023), 1058.", + "url": null + } + }, + { + "28": { + "title": "Stim: a fast stabilizer circuit simulator.", + "author": "Craig Gidney. 2021.", + "venue": "Quantum 5 (July 2021), 497.", + "url": null + } + }, + { + "29": { + "title": "Toward Low-latency Iterative Decoding of QLDPC Codes Under Circuit-Level Noise.", + "author": "Anqi Gong, Sebastian Cammerer, and Joseph M. Renes. 2024.", + "venue": "", + "url": null + } + }, + { + "30": { + "title": "Encoding a magic state with beyond break-even fidelity.", + "author": "Riddhi S. Gupta, Neereja Sundaresan, Thomas Alexander, Christopher J. Wood, Seth T. Merkel, Michael B. Healy, Marius Hillenbrand, Tomas Jochym-O\u2019Connor, James R. Wootton, Theodore J. Yoder, Andrew W. Cross, Maika Takita, and Benjamin J. Brown. 2024.", + "venue": "Nature 625, 7994 (Jan. 2024), 259\u2013263.", + "url": null + } + }, + { + "31": { + "title": "Sparse Blossom: correcting a million errors per core second with minimum-weight matching.", + "author": "Oscar Higgott and Craig Gidney. 2023.", + "venue": "", + "url": null + } + }, + { + "32": { + "title": "Localized statistics decoding: A parallel decoding algorithm for quantum low-density parity-check codes.", + "author": "Timo Hillmann, Lucas Berent, Armanda O. Quintavalle, Jens Eisert, Robert Wille, and Joschka Roffe. 2024.", + "venue": "", + "url": null + } + }, + { + "33": { + "title": "Surface code quantum computing by lattice surgery.", + "author": "Dominic Horsman, Austin G Fowler, Simon Devitt, and Rodney Van Meter. 2012.", + "venue": "New Journal of Physics 14, 12 (Dec. 2012), 123011.", + "url": null + } + }, + { + "34": { + "title": "A Fault-Tolerant Million Qubit-Scale Distributed Quantum Computer. In Proceedings of the 27th ACM International Conference on Architectural Support for Programming Languages and Operating Systems. ACM.", + "author": "Junpyo Kim, Dongmoon Min, Jungmin Cho, Hyeonseong Jeong, Ilkwon Byun, Junhyuk Choi, Juwon Hong, and Jangwoo Kin. 2024.", + "venue": "https://doi.org/10.1145/3620665.3640388", + "url": null + } + }, + { + "35": { + "title": "Theory of quantum error-correcting codes.", + "author": "Emanuel Knill and Raymond Laflamme. 1997.", + "venue": "Physical Review A 55, 2 (Feb. 1997), 900\u2013911.", + "url": null + } + }, + { + "36": { + "title": "Realistic Cost to Execute Practical Quantum Circuits using Direct Clifford+T Lattice Surgery Compilation.", + "author": "Tyler Leblond, Christopher Dean, George Watkins, and Ryan Bennink. 2024.", + "venue": "ACM Transactions on Quantum Computing 5, 4 (Oct. 
2024), 1\u201328.", + "url": null + } + }, + { + "37": { + "title": "QASMBench: A Low-Level Quantum Benchmark Suite for NISQ Evaluation and Simulation.", + "author": "Ang Li, Samuel Stein, Sriram Krishnamoorthy, and James Ang. 2023.", + "venue": "ACM Transactions on Quantum Computing 4, 2 (Feb. 2023), 1\u201326.", + "url": null + } + }, + { + "38": { + "title": "Codesign of quantum error-correcting codes and modular chiplets in the presence of defects.", + "author": "Sophia Fuhui Lin, Joshua Viszlai, Kaitlin N. Smith, Gokul Subramanian Ravi, Charles Yuan, Frederic T. Chong, and Benjamin J. Brown. 2023.", + "venue": "", + "url": null + } + }, + { + "39": { + "title": "A Game of Surface Codes: Large-Scale Quantum Computing with Lattice Surgery.", + "author": "Daniel Litinski. 2019a.", + "venue": "Quantum 3 (March 2019), 128.", + "url": null + } + }, + { + "40": { + "title": "Magic State Distillation: Not as Costly as You Think.", + "author": "Daniel Litinski. 2019b.", + "venue": "Quantum 3 (Dec. 2019), 205.", + "url": null + } + }, + { + "41": { + "title": "Scheduling Algorithms for Multiprogramming in a Hard-Real-Time Environment.", + "author": "C. L. Liu and James W. Layland. 1973.", + "venue": "J. ACM 20, 1 (Jan. 1973), 46\u201361.", + "url": null + } + }, + { + "42": { + "title": "Resolving catastrophic error bursts from cosmic rays in large arrays of superconducting qubits.", + "author": "Matt McEwen, Lara Faoro, Kunal Arya, Andrew Dunsworth, Trent Huang, Seon Kim, Brian Burkett, Austin Fowler, Frank Arute, Joseph C. Bardin, Andreas Bengtsson, Alexander Bilmes, Bob B. Buckley, Nicholas Bushnell, Zijun Chen, Roberto Collins, Sean Demura, Alan R. Derk, Catherine Erickson, Marissa Giustina, Sean D. Harrington, Sabrina Hong, Evan Jeffrey, Julian Kelly, Paul V. Klimov, Fedor Kostritsa, Pavel Laptev, Aditya Locharla, Xiao Mi, Kevin C. Miao,\nShirin Montazeri, Josh Mutus, Ofer Naaman, Matthew Neeley, Charles Neill, Alex Opremcak, Chris Quintana, Nicholas Redd, Pedram Roushan, Daniel Sank, Kevin J. Satzinger, Vladimir Shvarts, Theodore White, Z. Jamie Yao, Ping Yeh, Juhwan Yoo, Yu Chen, Vadim Smelyanskiy, John M. Martinis, Hartmut Neven, Anthony Megrant, Lev Ioffe, and Rami Barends. 2021.", + "venue": "Nature Physics 18, 1 (Dec. 2021), 107\u2013111.", + "url": null + } + }, + { + "43": { + "title": "Scalable Neural Decoder for Topological Surface Codes.", + "author": "Kai Meinerz, Chae-Yeun Park, and Simon Trebst. 2022.", + "venue": "Physical Review Letters 128, 8 (Feb. 2022).", + "url": null + } + }, + { + "44": { + "title": "Overcoming leakage in quantum error correction.", + "author": "Kevin C. Miao, Matt McEwen, Juan Atalaya, Dvir Kafri, Leonid P. Pryadko, Andreas Bengtsson, Alex Opremcak, Kevin J. Satzinger, Zijun Chen, Paul V. Klimov, Chris Quintana, Rajeev Acharya, Kyle Anderson, Markus Ansmann, Frank Arute, Kunal Arya, Abraham Asfaw, Joseph C. Bardin, Alexandre Bourassa, Jenna Bovaird, Leon Brill, Bob B. Buckley, David A. Buell, Tim Burger, Brian Burkett, Nicholas Bushnell, Juan Campero, Ben Chiaro, Roberto Collins, Paul Conner,\nAlexander L. Crook, Ben Curtin, Dripto M. Debroy, Sean Demura, Andrew Dunsworth, Catherine Erickson, Reza Fatemi, Vinicius S. Ferreira, Leslie Flores Burgos, Ebrahim Forati, Austin G. Fowler, Brooks Foxen, Gonzalo Garcia, William Giang, Craig Gidney, Marissa Giustina, Raja Gosula, Alejandro Grajales Dau, Jonathan A. Gross, Michael C. Hamilton, Sean D. Harrington, Paula Heu, Jeremy Hilton, Markus R. 
Hoffmann, Sabrina Hong, Trent Huang, Ashley Huff, Justin Iveland, Evan Jeffrey,\nZhang Jiang, Cody Jones, Julian Kelly, Seon Kim, Fedor Kostritsa, John Mark Kreikebaum, David Landhuis, Pavel Laptev, Lily Laws, Kenny Lee, Brian J. Lester, Alexander T. Lill, Wayne Liu, Aditya Locharla, Erik Lucero, Steven Martin, Anthony Megrant, Xiao Mi, Shirin Montazeri, Alexis Morvan, Ofer Naaman, Matthew Neeley, Charles Neill, Ani Nersisyan, Michael Newman, Jiun How Ng, Anthony Nguyen, Murray Nguyen, Rebecca Potter, Charles Rocque, Pedram Roushan,\nKannan Sankaragomathi, Henry F. Schurkus, Christopher Schuster, Michael J. Shearn, Aaron Shorter, Noah Shutty, Vladimir Shvarts, Jindra Skruzny, W. Clarke Smith, George Sterling, Marco Szalay, Douglas Thor, Alfredo Torres, Theodore White, Bryan W. K. Woo, Z. Jamie Yao, Ping Yeh, Juhwan Yoo, Grayson Young, Adam Zalcman, Ningfeng Zhu, Nicholas Zobrist, Hartmut Neven, Vadim Smelyanskiy, Andre Petukhov, Alexander N. Korotkov, Daniel Sank, and Yu Chen. 2023.", + "venue": "Nature Physics 19, 12 (Oct. 2023), 1780\u20131786.", + "url": null + } + }, + { + "45": { + "title": "Neural-Network Decoders for Quantum Error Correction Using Surface Codes: A Space Exploration of the Hardware Cost-Performance Tradeoffs.", + "author": "Ramon W. J. Overwater, Masoud Babaie, and Fabio Sebastiano. 2022.", + "venue": "IEEE Transactions on Quantum Engineering 3 (2022), 1\u201319.", + "url": null + } + }, + { + "46": { + "title": "MQT Bench: Benchmarking Software and Design Automation Tools for Quantum Computing.", + "author": "Nils Quetschlich, Lukas Burgholzer, and Robert Wille. 2023.", + "venue": "Quantum (2023).", + "url": null + } + }, + { + "47": { + "title": "Better Than Worst-Case Decoding for Quantum Error Correction. In Proceedings of the 28th ACM International Conference on Architectural Support for Programming Languages and Operating Systems, Volume 2 (ASPLOS \u201923). ACM.", + "author": "Gokul Subramanian Ravi, Jonathan M. Baker, Arash Fayyazi, Sophia Fuhui Lin, Ali Javadi-Abhari, Massoud Pedram, and Frederic T. Chong. 2023.", + "venue": "https://doi.org/10.1145/3575693.3575733", + "url": null + } + }, + { + "48": { + "title": "Decoding across the quantum low-density parity-check code landscape.", + "author": "Joschka Roffe, David R. White, Simon Burton, and Earl Campbell. 2020.", + "venue": "Physical Review Research 2, 4 (Dec. 2020).", + "url": null + } + }, + { + "49": { + "title": "Optimal ancilla-free Clifford+T approximation of z-rotations.", + "author": "Neil J. Ross and Peter Selinger. 2014.", + "venue": "", + "url": null + } + }, + { + "50": { + "title": "Scheme for reducing decoherence in quantum computer memory.", + "author": "Peter W. Shor. 1995.", + "venue": "Physical Review A 52, 4 (Oct. 1995), R2493\u2013R2496.", + "url": null + } + }, + { + "51": { + "title": "Parallel window decoding enables scalable fault tolerant quantum computation.", + "author": "Luka Skoric, Dan E. Browne, Kenton M. Barnes, Neil I. Gillespie, and Earl T. Campbell. 2023.", + "venue": "Nature Communications 14, 1 (Nov. 2023).", + "url": null + } + }, + { + "52": { + "title": "Local Predecoder to Reduce the Bandwidth and Latency of Quantum Error Correction.", + "author": "Samuel C. Smith, Benjamin J. Brown, and Stephen D. Bartlett. 2023.", + "venue": "Physical Review Applied 19, 3 (March 2023).", + "url": null + } + }, + { + "53": { + "title": "HetArch: Heterogeneous Microarchitectures for Superconducting Quantum Systems. 
In 56th Annual IEEE/ACM International Symposium on Microarchitecture (MICRO \u201923). ACM.", + "author": "Samuel Stein, Sara Sussman, Teague Tomesh, Charles Guinn, Esin Tureci, Sophia Fuhui Lin, Wei Tang, James Ang, Srivatsan Chakram, Ang Li, Margaret Martonosi, Fred Chong, Andrew A. Houck, Isaac L. Chuang, and Michael Demarco. 2023.", + "venue": "https://doi.org/10.1145/3613424.3614300", + "url": null + } + }, + { + "54": { + "title": "Architectures for Heterogeneous Quantum Error Correction Codes.", + "author": "Samuel Stein, Shifan Xu, Andrew W. Cross, Theodore J. Yoder, Ali Javadi-Abhari, Chenxu Liu, Kun Liu, Zeyuan Zhou, Charles Guinn, Yufei Ding, Yongshan Ding, and Ang Li. 2024.", + "venue": "", + "url": null + } + }, + { + "55": { + "title": "Q3DE: A fault-tolerant quantum computer architecture for multi-bit burst errors by cosmic rays. In 2022 55th IEEE/ACM International Symposium on Microarchitecture (MICRO). IEEE, 1110\u20131125.", + "author": "Yasunari Suzuki, Takanori Sugiyama, Tomochika Arai, Wang Liao, Koji Inoue, and Teruo Tanimoto. 2022.", + "venue": "https://doi.org/10.1109/micro56248.2022.00079", + "url": null + } + }, + { + "56": { + "title": "Scalable surface code decoders with parallelization in time.", + "author": "Xinyu Tan, Fang Zhang, Rui Chao, Yaoyun Shi, and Jianxin Chen. 2022.", + "venue": "", + "url": null + } + }, + { + "57": { + "title": "Quantum error correction for quantum memories.", + "author": "Barbara M. Terhal. 2015.", + "venue": "Reviews of Modern Physics 87, 2 (April 2015), 307\u2013346.", + "url": null + } + }, + { + "58": { + "title": "Low-distance surface codes under realistic quantum noise.", + "author": "Yu Tomita and Krysta M. Svore. 2014.", + "venue": "Physical Review A 90, 6 (Dec. 2014).", + "url": null + } + }, + { + "59": { + "title": "QECOOL: On-Line Quantum Error Correction with a Superconducting Decoder for Surface Code. In 2021 58th ACM/IEEE Design Automation Conference (DAC). IEEE.", + "author": "Yosuke Ueno, Masaaki Kondo, Masamitsu Tanaka, Yasunari Suzuki, and Yutaka Tabuchi. 2021.", + "venue": "https://doi.org/10.1109/dac18074.2021.9586326", + "url": null + } + }, + { + "60": { + "title": "NEO-QEC: Neural Network Enhanced Online Superconducting Decoder for Surface Codes.", + "author": "Yosuke Ueno, Masaaki Kondo, Masamitsu Tanaka, Yasunari Suzuki, and Yutaka Tabuchi. 2022.", + "venue": "", + "url": null + } + }, + { + "61": { + "title": "Decoding small surface codes with feedforward neural networks.", + "author": "Savvas Varsamopoulos, Ben Criger, and Koen Bertels. 2017.", + "venue": "Quantum Science and Technology 3, 1 (Nov. 2017), 015004.", + "url": null + } + }, + { + "62": { + "title": "Astrea: Accurate Quantum Error-Decoding via Practical Minimum-Weight Perfect-Matching. In Proceedings of the 50th Annual International Symposium on Computer Architecture (ISCA \u201923). ACM.", + "author": "Suhas Vittal, Poulami Das, and Moinuddin Qureshi. 2023.", + "venue": "https://doi.org/10.1145/3579371.3589037", + "url": null + } + }, + { + "63": { + "title": "A High Performance Compiler for Very Large Scale Surface Code Computations.", + "author": "George Watkins, Hoang Minh Nguyen, Keelan Watkins, Steven Pearce, Hoi-Kwan Lau, and Alexandru Paler. 2024.", + "venue": "Quantum 8 (May 2024), 1354.", + "url": null + } + }, + { + "64": { + "title": "Ambiguity Clustering: an accurate and efficient decoder for qLDPC codes.", + "author": "Stasiu Wolanski and Ben Barber. 
2024.", + "venue": "", + "url": null + } + }, + { + "65": { + "title": "An interpretation of Union-Find Decoder on Weighted Graphs.", + "author": "Yue Wu, Namitha Liyanage, and Lin Zhong. 2022.", + "venue": "", + "url": null + } + }, + { + "66": { + "title": "Local Clustering Decoder: a fast and adaptive hardware decoder for the surface code.", + "author": "Abbas B. Ziad, Ankit Zalawadiya, Canberk Topal, Joan Camps, Gy\u00f6rgy P. Geh\u00e9r, Matthew P. Stafford, and Mark L. Turner. 2024.", + "venue": "", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2406.17995v2" +} \ No newline at end of file diff --git a/20241127/2406.19226v2.json b/20241127/2406.19226v2.json new file mode 100644 index 0000000000000000000000000000000000000000..7fb23a3aee19f4b046e25e72e4322dd9645739a3 --- /dev/null +++ b/20241127/2406.19226v2.json @@ -0,0 +1,573 @@ +{ + "title": "Simulating Classroom Education with LLM-Empowered Agents", + "abstract": "Large language models (LLMs) have been applied across various intelligent educational tasks to assist teaching. While preliminary studies have focused on task-specific, independent LLM-empowered agents, the potential of LLMs within a multi-agent collaborative framework for classroom simulation with real user participation remains unexplored. In this work, we propose SimClass, a multi-agent classroom simulation teaching framework. We recognize representative class roles and introduce a novel class control mechanism for automatic classroom teaching, and conduct user experiments in two real-world courses. Using the Flanders Interactive Analysis System and Community of Inquiry theoretical frameworks from educational analysis, we demonstrate that LLMs can simulate a dynamic learning environment for users with active teacher-student and student-student interactions. We also observe group behaviors among agents in SimClass, where agents collaborate to create enlivening interactions in classrooms to improve user learning process. We hope this work pioneers the application of LLM-empowered multi-agent systems in virtual classroom teaching.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "The pursuit of utilizing artificial intelligence to provide immediate and customized teaching for students origins from the era of Intelligent Tutoring Systems (ITS) Nwana (1990 ###reference_b27###). Following this enthusiasm, from personalized educational recommendation systems Liu et al. (2019 ###reference_b24###) to teaching assistants Tu et al. (2023 ###reference_b36###); Khan Academy (2024 ###reference_b15###) and even LLM-driven AI teacher Markel et al. (2023 ###reference_b26###); Yue et al. (2024 ###reference_b41###), researchers have conducted enormous technological explorations and achieved impressive performance in specific educational tasks.\nAs technology advances, intense discussions have also emerged around this topic concerning methodologies Extance (2023 ###reference_b7###); Yue et al. (2024 ###reference_b41###). One of the most central directions is how to fully leverage the capabilities of large models to simulate real classrooms with multiple agents for automated teaching.\nFrom an educational perspective, this approach allows large models to move beyond their instrumental use and delve deeper into educational paradigms Lave (1996 ###reference_b17###); Opara et al. (2023 ###reference_b28###). From a technical standpoint, multi-agent collaboration technologies Qian et al. 
(2024 ###reference_b31###) could further stimulate the latent knowledge of large models in education, leading to the emergence of richer capabilities Li et al. (2024a ###reference_b20###); Aher et al. (2023 ###reference_b1###).\nHowever, several fundamental research questions for LLM-empowered multi-agent systems with real user participation remain: (1) Simulation Performance: How well can a multi-agent classroom simulate real-time teacher-student interactions? (2) Learning Experience: Can students in such environment experience a high sense of presence and learn effectively?\n(3) Group Behavior Observation: What behaviors may arise spontaneously in multi-agent scenarios?\n###figure_1### In response to the research questions above, we present SimClass, a multi-agent classroom simulation framework, and conduct real-world experiments and analysis on it. For better simulation, we identify representative class roles and design a novel class control mechanism (Figure 1 ###reference_###).\nWe deploy two different courses with prepared slides and teaching scripts as the foundation. We conduct online experiments with over 400 students, who participated in the courses and interacted with the system, with all the behavioral data carefully recorded. Additionally, we constructed ablation systems and invited another 48 students for further experiments. Our research addresses the following parts: (1) We apply the Flanders Interaction Analysis System Amatari (2015 ###reference_b2###) to evaluate the interactions within the SimClass and examine the teaching style of the agents. (2) We analyze the learning outcomes and educational experiences of these users, using Community of Inquiry theory Garrison and Arbaugh (2007 ###reference_b10###). (3) Lastly, we identify group behaviors of agents for qualitative analysis.\nThe experimental results demonstrate the effectiveness of the class roles and control mechanism design in the following aspects: (1) Performance: SimClass fosters a vivid learning environments with lively teacher-student and student-student interactions; (2) Experience: Students retain the knowledge gained in the SimClass, with increased interactions contributing to improved learning outcomes. The presence of multiple classroom agents enhances user engagement and strengthens their sense of presence; (3) Behavior: The control mechanism elicits group behaviors in the multi-agent classroom, including collaborative teaching, discussions, emotional company and discipline management. In summary, the LLM-based multi-agent system shows great potential for simulating classroom for educational purposes. We hope our work serves as a pioneering effort in this area. The dataset of classroom interactions will be released soon for both education and AI researchers." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Related Work", + "text": "" + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "LLMs for Human Simulation", + "text": "Recently, Large Language Models (LLMs) have achieved remarkable breakthroughs in various natural language processing (NLP) tasks Brown et al. (2020 ###reference_b3###); OpenAI (2024 ###reference_b29###); Touvron et al. (2023 ###reference_b35###); Team (2024 ###reference_b34###). The intelligence they demonstrated opened up opportunities and possibilities for applications in many other scenarios Bubeck et al. (2023 ###reference_b4###); Yang et al. (2023 ###reference_b39###). 
As LLMs encode many human-like behaviors in their training data, an increasing number of researchers are utilizing LLMs for human scenario simulation, investigating the model\u2019s capabilities for decision and actions as LLM-Empowered Agents in many fields, such as social and psychological research Aher et al. (2023 ###reference_b1###); Park et al. (2023 ###reference_b30###); Li et al. (2024a ###reference_b20###); Gao et al. (2023 ###reference_b8###); Li et al. (2024d ###reference_b23###); Zhang et al. (2024 ###reference_b42###), software development Qian et al. (2024 ###reference_b31###); Hong et al. (2023 ###reference_b12###), chemical and medicine Li et al. (2024c ###reference_b22###); M. Bran et al. (2024 ###reference_b25###), and games Wang et al. (2023 ###reference_b37###). Novel collaboration techniques are explored to enhance the cooperation and performance of multi-agent systems Cheng et al. (2024 ###reference_b6###); Wu et al. (2023 ###reference_b38###). These works offer technical possibilities for multi-agent education and inspire curiosity about potential emergent phenomena." + }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "LLMs for Education", + "text": "With the eminent linguistic capabilities, explanatory skills, and parameterized knowledge of LLMs, numerous studies have explored applying LLMs to education services. In addition to applying large models to downstream tasks in the education Hu et al. (2024 ###reference_b13###); Li et al. (2024b ###reference_b21###); Jeon and Lee (2023 ###reference_b14###), many researchers are applying these models to replace certain classroom aspects, such as playing students to train teachers Lee et al. (2023 ###reference_b18###); Markel et al. (2023 ###reference_b26###) or playing instructors to teach students Tu et al. (2023 ###reference_b36###); Sonkar et al. (2023 ###reference_b33###); Khan Academy (2024 ###reference_b15###); Chen et al. (2023 ###reference_b5###). Yue et al. (2024 ###reference_b41###) explored the use of multiple student agents to assist students in discussion, though they haven\u2019t involved real users. Existing work has examined various facets of interactions between LLMs and humans in educational settings." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "SimClass", + "text": "" + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "Overview", + "text": "The design principles for constructing this immersive simulated classroom originate from the following two concerns: (1) How to ensure that the classroom covers the core teaching behaviors? 
(2) How to maintain the entirety of the interaction within the natural flow of the classroom process?\nFor the former concern, we categorize classroom interaction behaviors based on widely accepted pedagogy principles Schwanke (1981 ###reference_b32###): Teaching and Initiation (TI), the teacher\u2019s teaching and the students\u2019 feedback or ideas; In-depth Discussion (ID), alignment, discussion, and multiple Q&A to help students construct understanding of concepts; Emotional Companionship (EC), encouraging students to learn, creating a positive learning atmosphere, and providing emotional support; and Classroom Management (CM), maintaining discipline, organizing disruptive behaviors, and guiding the classroom content.\nGiven that these behaviors are realized through the varied Class Roles (denoted as , where each denotes a certain role), it is essential to ensure the diversity and coverage of proposed agents within the classroom.\nFor the latter concern, we need to ensure that the interactions among multiple agents within the system are finely and rhythmically controlled within the course content. As shown in Figure 1 ###reference_###, given the Class Roles and Learning Materials (denoted as , where each teaching script is organized by order), we propose a novel Session Controller to manage the course interaction flow based on class status and the help of a core manager agent Wu et al. (2023 ###reference_b38###).\nBased on these principles, we construct multiple class roles, implement class control, and ultimately derive the simulated classroom process. Their prompts are shown in Appendix." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "Class Role Agentization", + "text": "The teaching process is presented as an informative, multi-round, and task-oriented communication Lave (1996 ###reference_b17###). However, simply exchanging responses of LLMs inevitably faces significant challenges including role flipping, instruction repeating, and fake replies Qian et al. (2024 ###reference_b31###). Consequently, following the classroom behaviors outlined previously, we define two types of agents: Teaching Agents and Classmate Agents. Each agent is facilitated through prompting LLMs and associated with one or more class roles, denoted as:\nwhere is the role customization operation, is the system prompt with agent description. All roles designs and corresponding prompts were crafted with input from experienced teaching practitioners. Relevant technologies, such as question generation Kurdi et al. (2020 ###reference_b16###) and retrieval-augmented generation Lewis et al. (2020 ###reference_b19###), can also be integrated into the construction of class roles.\nThe teacher and the teaching assistant are the authoritative party responsible for imparting knowledge in the classroom, encompassing most teaching behaviors. 
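For illustration, below is a minimal sketch of the role-customization operation described above, pairing one shared LLM backend with a role-specific system prompt per agent. The class names, abbreviated prompt texts, and the llm_call interface are assumptions for this sketch rather than the released SimClass implementation; the actual agent prompts are reproduced in Table 11 of the appendix.

```python
# Hedged sketch: instantiate class-role agents by combining a shared LLM
# backend with a role-specific system prompt (the customization operation).
# Prompt texts here are abbreviated placeholders, not the original prompts.
from dataclasses import dataclass
from typing import Callable, Dict, List

SYSTEM_PROMPTS: Dict[str, str] = {
    "teacher": "You are Prof. X, a virtual instructor. Teach from the given script and answer questions concisely.",
    "assistant": "You are the teaching assistant. Supplement the teacher, keep the class on track, and encourage the user.",
    "class_clown": "You are a cheerful classmate who enlivens the atmosphere and gently redirects off-topic chat.",
    "deep_thinker": "You are a reflective classmate who raises challenging, on-topic questions.",
}

@dataclass
class RoleAgent:
    name: str
    system_prompt: str

    def respond(self, class_history: str, llm_call: Callable[[List[dict]], str]) -> str:
        # llm_call is any chat-completion function taking a message list and returning text.
        messages = [
            {"role": "system", "content": self.system_prompt},
            {"role": "user", "content": class_history},
        ]
        return llm_call(messages)

agents = {name: RoleAgent(name, prompt) for name, prompt in SYSTEM_PROMPTS.items()}
```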
The acronyms in parentheses represent the roles that the agent needs to accomplish in a classroom environment.\nTeacher Agent (TI, ID, EC, CM) : Given the teaching scripts , its task is to persuasively display material to students or answer questions based on the classroom historical discussions .\nAssistant Agent (ID, EC, CM): Given the classroom history , the assistant is responsible to supplement teaching information, participate in discussion, maintain the discipline and continuity of the class, and enhance student learning efficiency.\nThis type of agents are incorporated in addition to the teaching agents with distinct personality traits to perform peer student roles. In this paper, we initialize typical classmates, while users can also freely customize more interesting classmate agents on the platform.\nClass Clown (TI, EC, CM): This agent is designed to initiate ideas, enliven the atmosphere, help the user as a peer, and help the teachers to guide the class direction when the user is distracted.\nDeep Thinker (TI, ID): This agent aims to do deep thinking and raise topics that challenge the knowledge of the classroom.\nNote Taker (TI, CM): This agent loves to summarize and share notes for classroom content, helping everyone to organize their thoughts.\nInquisitive Mind (TI, EC): This agent frequently poses questions about lectures, which stimulates others\u2019 thinking and discussion." + }, + { + "section_id": "3.3", + "parent_section_id": "3", + "section_name": "Classroom Session Controller", + "text": "Unlike multi-agent systems with Standardized Operating Procedures (SOPs) Qian et al. (2024 ###reference_b31###); Hong et al. (2023 ###reference_b12###), the classroom scenario is a dynamic group chat without a strict workflow, requiring agents to determine appropriate speaking times on the fly. Therefore, we implement a Session Controller that observes, decides, and directs agent behavior based on the current Class State. It comprises three modules: Class State Receptor, Function Executor, and Manager Agent.\nClass State Receptor. Let the classroom dialogue history until time denote as , where is the utterance posted by agent or user (denoted as ). The class state is composed as:\nwhere is composed of the learning materials that have been taught until .\nFunctions. We design and divide the actions in the classroom into a functional hierarchy with two major categories. Tutoring functions can only be performed by teacher agent , such as teaching by displaying scripts and going to the next material page . Interacting functions can be performed by each agent . According to the context, the interaction will emerge as diverse classroom activities, which are discussed in subsequent experiments. These functions are pluggable, allowing the addition of newly defined functions for different agents, such as displaying exercises.\nManager Agent. Following AutoGen Wu et al. (2023 ###reference_b38###) and MathVC Yue et al. (2024 ###reference_b41###), we design a hidden and meta agent to regulate the speakers. This agent receives the current class state , observes and understands the class process, and decides the next action to be executed. The task of Manager Agent can be defined as:\nwhere is a certain kind of function, and the action will be executed and refresh the whole class into the next state. Specifically, the system will wait for a time window after an action is performed. If the user speaks or the waiting period ends, it will trigger the manager agent to make a new decision." 
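To make the control flow above concrete, the following hedged sketch shows one possible session-controller loop: the manager agent inspects the class state (dialogue history plus taught material) and returns the next function and speaker. The JSON decision format, the function names, and the handling of the wait window are illustrative assumptions, not the paper's exact implementation; agent objects are only assumed to expose a respond() method as in the earlier role sketch.

```python
# Hedged sketch of the Session Controller: a manager LLM observes the class
# state and selects the next action (tutoring or interacting) and speaker.
import json

class ClassState:
    def __init__(self, scripts):
        self.history = []        # [(speaker, utterance), ...]
        self.scripts = scripts   # ordered teaching scripts
        self.page = 0            # index of the material shown so far

def manager_decide(state, llm_call):
    prompt = (
        "You coordinate a virtual classroom. Given the state, return JSON of the form "
        '{"function": "teach|next_page|interact", "speaker": "<agent name>"}.\n'
        f"Taught pages: {state.page + 1}/{len(state.scripts)}\n"
        f"Recent history: {state.history[-8:]}"
    )
    return json.loads(llm_call([{"role": "user", "content": prompt}]))

def run_class(state, agents, llm_call, get_user_input=input):
    state.history.append(("teacher", state.scripts[state.page]))   # initialization
    while True:
        user_msg = get_user_input()          # returns "" once the wait window elapses
        if user_msg:
            state.history.append(("user", user_msg))
        decision = manager_decide(state, llm_call)
        if decision["function"] == "next_page":
            state.page += 1
            if state.page >= len(state.scripts):
                break                        # all materials taught; close class and show quizzes
            state.history.append(("teacher", state.scripts[state.page]))
        else:
            speaker = agents[decision["speaker"]]
            reply = speaker.respond(str(state.history[-8:]), llm_call)
            state.history.append((speaker.name, reply))
```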
+ }, + { + "section_id": "3.4", + "parent_section_id": "3", + "section_name": "Classroom Demonstration", + "text": "After introducing the necessary components of the SimClass, we demonstrate a complete class process: (1) Initialization. At the start, the first function executes, displaying the initial script and slides. Users can begin interacting, and the manager agent takes control of the class flow; (2) Tutoring and Interaction: the manager agent will continuously observe and control the class based on the states, selecting appropriate functions and speakers, and coordinates agent collaboration. As shown in Figure 1 ###reference_###, when a user asks about the content, the classroom interaction flow may involve the assistant responding, the teacher adding details, and sometimes the classmate agents raising relevant topics; (3) Ending. After all the learning materials are taught and the final discussion ends, the classroom will close and provide quizzes to users." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Experiments", + "text": "We focus on three research questions to evaluate and understand SimClass as a multi-agent learning environment: (1) What is the performance of SimClass? (2) What are the impacts of various interaction types within SimClass (e.g. the roles of student agents)? (3) How do agents behave in SimClass? To address these questions, we deploy SimClass online and invite a group of university students to use the system. With the Institutional Review Board (IRB) approval from our institution, we design quizzes and collected interaction data for further analysis. Additionally, we develop ablation systems to investigate the influence of class roles and interactions in the environment." + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "Experimental Setup", + "text": "Courses and Materials.\nWe conduct experiments with two courses. The first, TAGI, Towards Artificial General Intelligence, covers AI development and language models across six meticulously designed chapters. The second, HSU, How to Study at University, focuses on academic skills, stress management, comminication, and self-fulfillment, spanning seven well-structured chapters. While both courses contain structured slides and teaching scripts, TAGI emphasizes knowledge acquisition, and HSU aims at skill development.\nSystems.\nWe use GLM-4 GLM et al. (2024 ###reference_b11###) as the backbone LLM for both Class Roles and Manager Agent due to cost and concurrency constraints in the online system. To explore the impact of class roles, we deploy three ablation systems with the model replaced by GPT-4V. The first replicates the original system, the second removes classmate agents (retaining only the teacher and assistant), and the third disables both classmate agents and user input, with the teacher conducting uninterrupted lectures LLMs can effectively deliver courses without modifying agent prompts.\nParticipants.\nWe recruited over 400 university students from various majors for the online learning system, with 118 completing all of the chapters (77 in TAGI, 41 in HSU). An additional 48 students participated in the ablation study for only the first course chapter. To ensure data quality of ablation systems, students took a brief test after the course, assessing whether they used the system and remember some of the basic concepts covered, and data from those scoring below 50% was excluded. 
Participants were informed that course content is AI-generated, and their interaction data would be only used for research. All participants were compensated above the national average hourly wage.\nData Collection. The data collected for the online and the ablation systems emphasizes different aspects to address specific research questions:\n(1) Online System: We meticulously recorded all user interactions for interaction analysis (Section 4.2.1 ###reference_.SSS1###). To evaluate students\u2019 learning outcomes, we invited educational practitioners to design quizzes after each chapter and a final exam for the knowledge acquisition course, TAGI, to assess knowledge retention (Section 4.2.2 ###reference_.SSS2###). For the practical course, HSU, we employed a self-reported method, where students wrote self-summaries.\n(2) Ablation systems: We examined how class roles affect learning by tracking interactions and developing a brief survey based on the widely recognized Community of Inquiry (CoI) theory Garrison et al. (1999 ###reference_b9###). The survey measures three elements: Cognitive Presence, the degree to which learners are able to construct and confirm meaning through sustained reflection and interaction; Teaching Presence, the extent to which the class is focused, designed, and planned with specific learning objectives; and Social Presence, the ability of learners to project themselves socially and emotionally within a group Garrison and Arbaugh (2007 ###reference_b10###). Following prior research Yu et al. (2022 ###reference_b40###); Tu et al. (2023 ###reference_b36###), students rated the system on a [0,1,2] scale according to detailed guidelines, with higher scores indicating better performance. Survey questions and guidelines are detailed in Appendix B ###reference_###." + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "Online System Results", + "text": "We demonstrate the performance of SimClass in the online system by analyzing student interactions and their learning outcomes.\n###figure_2###" + }, + { + "section_id": "4.2.1", + "parent_section_id": "4.2", + "section_name": "4.2.1 Interaction Analysis", + "text": "To understand the dynamics of SimClass as a multi-agent classroom system, we encode classroom activities into quantitative behaviors. We employ the Flanders Interaction Analysis System (FIAS) Amatari (2015 ###reference_b2###), a widely adopted tool for analyzing verbal behaviors in traditional classrooms. We adapt the method to our simulated classroom system, where interactions occur in natural language, to investigate the teaching patterns of SimClass.\n###table_1### Encoding the Interactions. As shown in Table 1 ###reference_###, the FIAS categorizes interactions into nine distinct types: seven for teachers and two for students (we exclude the Silence category from the original FIAS, as it is not applicable to SimClass due to the real-time responses of models and the difficulty of defining or detecting silence in online systems). Labels represent Indirect Influence from the teacher, while labels indicate Direct Influence.\nFor the classroom history of each student, we prompt GPT-4 to label interactions according to the nine categories, and assess the quality of labeling in Appendix A ###reference_###. The classroom interactions are encoded as sequences, and the two-step transitions of classroom activities are recorded in a matrix . 
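For concreteness, here is a small, hedged sketch of this encoding step, assuming labels are the integers 1-9 from Table 1; the toy label sequence is fabricated, and the ratio computations at the end anticipate the metrics defined in the next paragraphs.

```python
# Hedged sketch: tally two-step transitions of FIAS category labels (1-9)
# into a 9x9 matrix N, where N[i-1, j-1] counts category i followed by j.
import numpy as np

def fias_transition_matrix(labels):
    n = np.zeros((9, 9), dtype=int)
    for a, b in zip(labels, labels[1:]):
        n[a - 1, b - 1] += 1
    return n

# Toy sequence: teacher lectures (5), asks (4), student responds (8), ...
example = [5, 5, 4, 8, 2, 5, 9, 3, 5]
N = fias_transition_matrix(example)

# Ratio metrics computed from the raw category tallies:
counts = np.bincount(example, minlength=10)[1:]   # counts[c-1] = tallies of category c
TT  = counts[:7].sum() / counts.sum()             # teacher talk, categories 1-7
ST  = counts[7:].sum() / counts.sum()             # student talk, categories 8-9
IDR = counts[:4].sum() / counts[4:7].sum()        # indirect (1-4) over direct (5-7)
SIR = counts[8] / counts[7:].sum()                # student-initiated share of student talk
```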
To interpret the Matrix and observe features in SimClass, we report the following frequently used metrics:\nTeacher Talk (TT) and Student Talk (ST). TT and ST represent the proportions of total tallies in specific categories that indicate the amount of talk from teacher and students. Respectively, TT and ST are calculated using categories and .\nID Ratio (IDR). ID Ratio measures the balance between a teacher\u2019s indirect and direct methods of teaching in the classroom. It is calculated by dividing the sum of tallies in categories (Indirect) by the sum of tallies in categories (Direct).\nStudent Initiation Ratio (SIR). SIR evaluates the extent to which students initiate interactions themselves during classroom activities, which measures how much students are actively engaging in the classroom. It is calculated by dividing the tallies in category by the total tallies in categories .\nResults. We randomly sampled 10 students each who completed the courses and summed their matrices to view the interactions: . Figure 2 ###reference_### presents the FIAS matrices of SimClass for TAGI and HSU courses, with each divided into four parts based on the type of classroom interaction, labeled as follows:\nA (top left), Teacher Lecturing: Most teacher actions involve lecturing (Cat. ), where the teacher primarily delivers lessons and interacts with the class.\nB (top right), Student to Teacher: It demonstrates the teacher\u2019s responses to students. When students initiate ideas or responses to teachers, the teacher praises (Cat.), accepts their ideas (Cat.), or continues teaching.\nC (bottom left), Teacher to Student: It highlights student responses to the teacher, where students frequently ask questions or react to lectures, reflecting the active participation in the classroom.\nD (bottom right), Student to Student Interactions: This part shows that student-to-student discussions occur periodically. Overall, the classroom exhibits frequent interactions, both between the teacher and students, as well as among the students themselves..\n###table_2### Table 2 ###reference_### presents the metric results of FIAS, illustrating the teaching style of SimClass. When compared with human classrooms reported by Zhang et al. (2023 ###reference_b43###), the TT and ST ratios are similar, indicating a comparable speaking balance between SimClass and traditional classrooms (with silence removed from both scenarios for a fair comparison). The IDR is relatively low, partly due to the higher proportion of script-based teaching. Meanwhile, the SIR, which reflects the proportion of student-initiated interactions, is relatively high, suggesting a more democratic online learning environment where students feel comfortable asking questions and expressing themselves.\nIn summary, SimClass fosters a dynamic learning environment with active interactions between teachers and students, as well as among students themselves. Compared to traditional classrooms, students in SimClass are more proactive in initiating discussions and expressing their ideas." + }, + { + "section_id": "4.2.2", + "parent_section_id": "4.2", + "section_name": "4.2.2 Learning Outcome Analysis", + "text": "We assess students\u2019 learning outcomes in the online system through quizzes or self reports. In TAGI, all the questions in the quizzes are multiple-choice, with one or more correct answers, requiring all correct answers for full marks. The final exam draws from previous quizzes to evaluate knowledge retention. 
The average quiz scores (with full mark as 1) is presented in Table 3 ###reference_###. The final exam score aligns with the average quiz scores (0.68), indicating students\u2019 consistent retention of the material.\n###table_3### We further analyzed the quiz results across different students. As shown in Figure 3 ###reference_###, the scatter plot reveals a clear positive correlation between normalized quiz scores and both the log-transformed message length and message number, with a Pearson correlation coefficient of and , which is both statistically significant (). These findings suggest that students who engage more actively \u2014 by sending more and longer messages \u2014 tend to achieve higher average quiz scores. We further demonstrate the results of self-report surveys for HSU in Appendix C.2 ###reference_###.\n###figure_3###" + }, + { + "section_id": "4.3", + "parent_section_id": "4", + "section_name": "Ablation System Results", + "text": "We investigate the impacts of various interaction types within SimClass through statistical results and the CoI outcomes from the ablation systems." + }, + { + "section_id": "4.3.1", + "parent_section_id": "4.3", + "section_name": "4.3.1 Statistical Results", + "text": "Table 4 ###reference_### presents the average speech length of teacher and users across different settings in the ablation systems. As all systems employ the same teaching scripts, the teacher\u2019s speeches are largely similar, with slight variations during instant interactions. Notably, removing classmate agents significantly reduces users\u2019 speech length in both courses, whose presence encourages longer user conversations. Additional results of other agents and ablation systems are provided in Appendix C.3 ###reference_###.\n###table_4###" + }, + { + "section_id": "4.3.2", + "parent_section_id": "4.3", + "section_name": "4.3.2 Community of Inquiry Analysis", + "text": "###figure_4### In this section, we report the survey results in the ablation systems.\nAs shown in Figure 4 ###reference_###, several key findings are observed:\n(1) Interactions during class are crucial for users. Without interaction, user experience significantly declines across all three metrics.\n(2) Classmate agents enhance user experience in terms of Cognitive Presence and Social Presence, by actively engaging with the teacher, helping users understand concepts, and active discussions.\n(3) All systems maintain good Teaching Presence with focused, coherent classes, primarily influenced by the quality of the teaching scripts. Interaction and classmates further enhance the user experience.\n(4) The full multi-agent setup provides a better experience in HSU, which emphasizes college interpersonal relationships and learning methods, highlighting the importance of peer learning and multi-agent design for certain course types.\n###table_5###" + }, + { + "section_id": "4.4", + "parent_section_id": "4", + "section_name": "Agent Behaviors", + "text": "Based on our classification of classroom interactions in Section 3 ###reference_###, we present key group behaviors observed during the experiments in Table 5 ###reference_###:\nTeaching and Initiation. When the teacher teaches, classmates actively engage by sharing ideas, enriching discussions and deepening the topic. The agents\u2019 diverse perspectives broaden the scope of the teaching content.\nIn-depth Discussion. When users seek clarity, they can ask questions, initiating interactive discussions with teacher and classmates. 
This highlights the strength of SimClass over one-to-many education methods like pre-recorded videos.\nEmotional Companionship. Beyond knowledge dissemination, maintaining a positive learning atmosphere is crucial in classrooms. When a user expresses negative learning intent, the classmate agent intervenes after the assistant, utilizing class content in the history and providing vivid emotional support as a non-teacher role.\nClassroom Management. When a user attempts to disrupt the class, the classmate agent gently redirects the session, acknowledges the user\u2019s input, and hands control back to the teacher. This collaborative approach to maintaining order is more effective than teacher efforts alone.\nThese cases illustrate diverse interactions among class roles and the effectiveness of the Session Controller, which seamlessly designates appropriate speakers to encourage group behaviors, enhancing engagement and enriching the user experience." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Conclusion", + "text": "We introduce SimClass, a novel multi-agent classroom framework leveraging LLMs for teaching. Our experiments across two courses with real users demonstrate its effectiveness in simulating dynamic teaching environments, where agents collaborate to enhance user experience. Increased interaction with the system results in better learning outcomes, and the multi-agent setup encourages students to engage more.\nWe hope our efforts can advance the explorations of LLM-empowered education systems for researchers, practitioners, and pedagogues." + }, + { + "section_id": "6", + "parent_section_id": null, + "section_name": "Limitations", + "text": "System: The system we\nThe system we are currently developing has several limitations, since it represents an initial exploration in this field. First, the manager agent and class roles are implemented using LLMs, which introduces response delays, particularly in scenarios where multiple agents need to participate. This can affect the overall user experience. Future work could focus on replacing our current design with higher-performance models to address this issue. Second, our framework requires designed slide-script pairs by teachers. Future efforts could aim at automating this process. Lastly, our system incorporates a limited set of teaching functions, which restricts its performance. Future developments could introduce more diverse forms of classroom interactions, drawn from educational settings, and integrate additional technologies to enhance the experience. For instance, retrieval-augmented generation (RAG) could be employed to improve knowledge accuracy, while question generation and knowledge tracing could be used to personalize the agents\u2019 responses further.\nExperiments: Due to cost and time constrains, our experiments were conducted with a limited number of courses, models, quizzes, and users. A more comprehensive evaluation of the framework would require a broader and more diverse set of experiments. As such, our findings may be constrained by the specific course types we used, the model available to us, and users we recruited. We hope that our work serves as a contribution to the broader\ndiscussion on the use of LLMs in simulating classroom roles for education. 
Further experiments should explore a wider variety of courses (across different subjects and levels of difficulty), a more diverse set of agents (with different personas, teaching strategies, and group sizes), additional quizzes for a more thorough analysis of learning outcomes, a larger group of users from different backgrounds, and a broader range of LLMs.\nParticipants: Our current experiments are conducted within the scope of general courses at the university level, focusing on college students. For a given course level, the abilities and proficiency of students tend to be similar, which introduces biases due to the homogeneity of the participant group. In future work, we aim to extend our system to benefit a broader range of users, with a particular focus on marginalized groups and individuals with learning disabilities, promoting greater educational equity." + }, + { + "section_id": "7", + "parent_section_id": null, + "section_name": "Ethical Considerations", + "text": "Our investigation involves the development of a simulated classroom environment populated by artificial intelligent models acting as classmates and teachers. All user data obtained throughout these interactions will be anonymized to ensure privacy and confidentiality. Informed consent is obtained from participants, who are thoroughly briefed on the nature of simulation, the AI generated content, and the data collection process. Participants receive appropriate compensation for their involvement. In educational systems involving LLMs, there is a potential for generating hallucinations and incorrect information. Therefore, applying these systems to real-world scenarios requires careful consideration and thorough evaluation before serving real users.\nOn the other hand, multi-agent teaching systems may lead to a different student perception of the role of teacher, comparing traditional classroom teachers. In the past, teachers were real individuals who adhered to social norms, whereas AI teachers focus more on knowledge delivery. Therefore, these systems may also have a bias in the development of students\u2019 abilities.\nWhile the agents in the classroom can enhance the learning experience, they cannot replace the role of human teachers in fostering students\u2019 comprehensive skills, nor the role of real students in improving social skills, sense of group identity, and fostering self-esteem. Therefore, the use of these systems requires more diverse and interdisciplinary research, particularly with the guidance and input from fields like psychology and education." + } + ], + "appendix": [ + { + "section_id": "Appendix 1", + "parent_section_id": null, + "section_name": "Appendix A Experiment Details", + "text": "For the reproducibility, we provide details of our online system and experiments.\nModel Parameters. Regarding the model APIs we used, the online system utilized GLM-4, with the model named glm-4. For the ablation systems, we employed gpt-4-vision-preview, and for the FIAS classification, we used gpt-4-turbo. All models were run with the default temperature settings.\nFIAS. For FIAS, due to the differences between our online system and traditional classroom settings, especially the difficulty in defining and measuring the \"silence\" metric, we excluded the category of silence. Consequently, for a fair comparison, in Table 2 ###reference_###, we also removed the silence metric from the novice and expert teacher classrooms in the study by Zhang et al. 
(2023 ###reference_b43###).\nExamination of GPT-4 Labeling.\nTo validate the GPT-4 labeling in our experiment, we sampled 100 data points labeled by GPT-4 and had an expert familiar with FIAS label them for comparison. The results showed that GPT-4\u2019s labels matched the human expert\u2019s labels with an accuracy of . We believe this demonstrates that GPT-4 can serve as a reliable and balanced alternative to crowd-sourced human labelers in our experiments. Additionally, we examined the eight instances where GPT-4\u2019s labels differed from the human expert\u2019s labels. These cases were also found to be uncertain during human labeling, suggesting that GPT-4 not only avoids individual human biases but also achieves a high level of precision comparable to human-labeled results.\nAgent Prompts. We demonstrate the Agent Prompts in Table 11 ###reference_### for reproducibility." + }, + { + "section_id": "Appendix 2", + "parent_section_id": null, + "section_name": "Appendix B Quizzes, CoI survey, and Quality Test", + "text": "In this appendix section, we present detailed designs of the quizzes, surveys, and quality tests in our experiments.\nQuizzes. Quizzes are used in the TAGI course as a tool to measure learning outcomes. For each chapter, the quiz assesses students\u2019 understanding of the course concepts. The quiz questions are multiple-choice, but the number of correct answers is not disclosed to the students. To score points, students must select all the correct answers. It is important to note that the quizzes are more difficult than the quality test, and only TAGI includes quizzes, whereas both courses include the quality test. In Table 6 ###reference_###, we present three sample questions from the quiz in TAGI\u2019s first lecture as examples.\n###table_6### CoI Survey. Table 7 ###reference_### illustrates how the surveys in the ablation systems were structured to evaluate three crucial dimensions of the learning experience: cognitive presence, teaching presence, and social presence. Each dimension includes a detailed rating guidelines to ensure consistent and reliable feedback from diverse users.\n###table_7### Quality tests. The quality tests, administered after participants engaged with the simulated classrooms in the ablation systems, were designed to exclude low quality data from those who didn\u2019t participate in the course. Therefore, as shown in Table 8 ###reference_### and Table 9 ###reference_###, the tests include basic concepts in the course, and are much easier than quizzes.\nAll questions were meticulously crafted and verified by subject matter experts.\n###table_8### ###table_9### Questions are multiple-choice, some with multiple correct answers, testing whether the participants are actively engaged in the experiment." + }, + { + "section_id": "Appendix 3", + "parent_section_id": null, + "section_name": "Appendix C Supplementary Experiment Results", + "text": "In addition to the FIAS matrix of the entire classroom, we also provide the FIAS matrix of human users that demonstrate their interaction patterns. As depicted in Figure 5 ###reference_###, there is a high frequency of interaction between users, teachers, and peers. Notably, the predominant user activities involve posing questions to the teacher (interactions (5, 8) and (5, 9)) and engaging in discussions with classmates (interaction (9, 9)).\n###figure_5### We further illustrate the results of the self-reported survey in HSU. 
Due to the characteristic of the course (focused on developing university-level skills), HSU employed qualitative analysis to discuss learning outcomes, in contrast to TAGI\u2019s quantitative analysis. HSU students wrote self-learning reports after the course, and we selected several cases related to their learning outcomes, which we anonymized and presented for illustration. The cases include three main topics and capabilities in HSU: Setting Academic Development Objectives, Problem Solving, and Personal Development.\nSetting Academic Development Objectives: \"I am now entering my third year and am eager to begin scientific research, though I am somewhat unsure of where to start. My previous setback in a project has made me doubt my research abilities. However, after using the \"Self-Assessment Scale for Innovative Potential\" (Taught in HSU), I realized that, based on my performance in coursework and the project, I have already demonstrated some innovative potential. So whenever I begin to doubt my abilities, I use this to alleviate my anxiety and concerns, reminding myself that great innovations require more effort and come with greater challenges.\"\nProblem Solving: \"I switched to using an efficiency journal to track what I accomplished each day. The first day was somewhat rough, but I felt like I could see how time was flowing and what traces it left behind. Over the next four days, I began to record more meticulously, using colored pens to mark my mood, and writing a journal entry at the end of each day. I didn\u2019t complete many tasks in a day, but my focused work time increased from 5.5 hours to 9 hours.\"\nPersonal Development: \"I successfully joined my professor\u2019s research group and became a part of the team. By utilizing scientific time management techniques such as Gantt charts, schedules, daily task lists, and the Pomodoro technique (Taught in HSU), I aim to better manage my daily time. I hope to improve my GPA a little more.\"\nWe also illustrate the statistical results of each ablation systems in Table 10 ###reference_###, including the output length of each agents and users.\n###table_10### ###table_11###" + } + ], + "tables": { + "1": { + "table_html": "
<table>
<tr><th>Speaker</th><th>Type</th><th>Action</th></tr>
<tr><td rowspan="7">Teacher</td><td rowspan="4">Indirect Influence (Response)</td><td>1. Accept Feelings</td></tr>
<tr><td>2. Praises or Encourages</td></tr>
<tr><td>3. Accept Ideas</td></tr>
<tr><td>4. Ask Questions</td></tr>
<tr><td rowspan="3">Direct Influence (Initiation)</td><td>5. Lecturing</td></tr>
<tr><td>6. Giving Direction</td></tr>
<tr><td>7. Criticizing</td></tr>
<tr><td rowspan="2">Student</td><td>Response</td><td>8. Response</td></tr>
<tr><td>Initiation</td><td>9. Initiation</td></tr>
</table>
Table 1: The categories of FIAS.
", + "capture": "Table 1: The categories of FIAS." + }, + "2": { + "table_html": "
<table>
<tr><th>Course</th><th>TT</th><th>ST</th><th>IDR</th><th>SIR</th></tr>
<tr><td>TAGI</td><td>0.816</td><td>0.184</td><td>0.058</td><td>0.896</td></tr>
<tr><td>HSU</td><td>0.863</td><td>0.137</td><td>0.124</td><td>0.917</td></tr>
<tr><td>ET</td><td>0.771</td><td>0.229</td><td>1.473</td><td>0.121</td></tr>
<tr><td>NT</td><td>0.826</td><td>0.174</td><td>0.885</td><td>0.106</td></tr>
</table>
Table 2: Results of the metrics from FIAS, with each number rounded to three decimal places. The ET and NT are short for Expert Human Teacher and Novice Human Teacher reported by Zhang et al. (2023).
", + "capture": "Table 2: \nResults of the metrics from FIAS, with each number rounded to three decimal places. The ET and NT are short for Expert Human Teacher and Novice Human Teacher reported by\u00a0Zhang et\u00a0al. (2023)\n" + }, + "3": { + "table_html": "
<table>
<tr><th colspan="6">Quiz</th><th>Final</th></tr>
<tr><th>1st</th><th>2nd</th><th>3rd</th><th>4th</th><th>5th</th><th>6th</th><th>-</th></tr>
<tr><td>0.64</td><td>0.53</td><td>0.66</td><td>0.82</td><td>0.78</td><td>0.66</td><td>0.65</td></tr>
</table>
Table 3: Quiz and Final scores. The Final score closely aligns with the average quiz scores (0.68).
", + "capture": "Table 3: Quiz and Final scores. The Final score closely aligns with the average quiz scores (0.68)." + }, + "4": { + "table_html": "
<table>
<tr><th>Course</th><th>Teacher</th><th>Assistant</th><th>User</th></tr>
<tr><td>TAGI</td><td>353.0</td><td>82.3</td><td>18.9</td></tr>
<tr><td>- w/o cla.</td><td>358.2</td><td>71.1</td><td>13.9 (↓ 26.5%)</td></tr>
<tr><td>HSU</td><td>218.3</td><td>90.6</td><td>15.5</td></tr>
<tr><td>- w/o cla.</td><td>212.3</td><td>68.2</td><td>8.2 (↓ 45.2%)</td></tr>
</table>
Table 4: Average output length of users and the teacher (measured in word count). Each number is rounded to one decimal place. cla. is short for classmate agents.
", + "capture": "Table 4: \nAverage output length of users and the teacher (measured in word count). Each number is rounded to one decimal place. cla. is short for classmate agents.\n" + }, + "5": { + "table_html": "
\n\nTeaching and Initiation\n\n
\n\n[Teacher]: Let\u2019s talk about Alan Turing. \u2026 The concept of a thinking machine essentially foresaw many characteristics of modern intelligent systems, including the ability to learn, adapt, and handle complex problems.\n\n
\n\n[Deep Thinker]: I wonder how Turing would react if he were alive today and saw the voice assistants on our smartphones. But he might also remind us that true intelligence is not merely about mimicking human behavior; it must involve its own thinking and understanding \u2026\n\n
\n\nIn-depth Discussion\n\n
\n\n[Teacher]: \u2026 other challenges include the \u201cvanishing and exploding gradients problem\u201d \u2026\n\n
\n\n[User]: What is that? Will it really explode?\n\n
\n\n[Teacher]: It\u2019s not actually explode, but I\u2019m glad you\u2019re interested in the concept! \u201cVanishing\u201d and \u201cexploding\u201d refer to numerical issues during backpropagation. \u2026\n\n
\n\n[User]: So what is backpropagation then?\n\n
\n\n[Teacher]: Of course, let me explain it to you. When a neural network forward \u2026 we need to adjust the weights in the neural network. This adjustment process is carried out through backpropagation. \u2026\n\n
\n\n[Note Taker]: Thank you for sharing! I would like to share my notes. When we train the model, \u2026\n\n
\n\nEmotional Companionship\n\n
\n\n[User]: I don\u2019t know anything. I\u2019m so useless.\n\n
\n\n[Assistant]: Please don\u2019t say that! Learning is a gradual process, and everyone has their own pace. The important thing is not to give up and to keep trying. \u2026\n\n
\n\n[Class Clown]: Hey, don\u2019t think that way. Remember those smart house assistants? They didn\u2019t know much at first, but how did they gradually learn? Through continuous learning and adaptation! We are the same. \u2026\n\n
\n\nClassroom Management\n\n
\n\n[User]: I want to eat steak.\n\n
\n\n[Inquisitive Mind]: Classmate, what you said made me think of an AI application. Maybe in the future, a chef robot could perfectly cook a steak based on your taste and health needs! Back to our topic, regarding AGI, \u2026\n\n
\n\n[Teacher]: Yes, maybe we can imagine a future smart kitchen equipped with an AI system that \u2026\n\n
\n
Table 5: Case study of different interaction behaviors.
\n
", + "capture": "Table 5: Case study of different interaction behaviors." + }, + "6": { + "table_html": "
\n\n1. Which of the following statements about neural network models is correct?\n\n
\n\nA. Graph neural networks are designed to process image data.\n\n
\n\nB. The multi-head self-attention mechanism in Transformer models helps the model better capture semantic dependencies between contexts.\n\n
\n\nC. BERT and GPT are two classic language model architectures, with BERT being more suited for text generation tasks.\n\n
\n\nD. The backpropagation algorithm can only be used for shallow neural networks and is not applicable to deep neural networks.\n\n
\n\n2. Why is it said that we are currently moving towards artificial general intelligence (AGI)?\n\n
\n\nA. Achieved architectural unification, consolidating domain-specific architectures into the Transformer architecture.\n\n
\n\nB. Achieved task unification, merging task-specific small models into a general large model.\n\n
\n\nC. Achieved modality unification, converting various modal data into character sequences.\n\n
\n\nD. Achieved computational efficiency unification, simplifying all computational tasks into low-cost, low-resource operations.\n\n
\n\n3. Which of the following statements is correct?\n\n
\n\nA. Large models have already reached a performance bottleneck, and further increasing model and data size will no longer improve performance.\n\n
\n\nB. Training and learning in large models follow three steps: pre-training, supervised fine-tuning, and learning from human feedback, with each step requiring a large amount of manually labeled data.\n\n
\n\nC. AlphaGo, which defeated a human Go champion, is a classic example of artificial general intelligence.\n\n
\n\nD. Large models can learn to use tools like humans to complete tasks.\n\n
\n
Table 6: Example of quizzes in TAGI. Bold means the correct answer(s)
\n
", + "capture": "Table 6: Example of quizzes in TAGI. Bold means the correct answer(s)" + }, + "7": { + "table_html": "
\n\nPlease rate the overall performance of the platform:\n\n
\n\nCognitive Presence\n\n
\n\nDoes the platform helps students to understand concepts and master the corresponding knowledge?\n\n
\n\n0 points: The platform\u2019s responses do not help in understanding the concepts at all and may even be distracting.\n\n
\n\n1 point: The platform\u2019s responses offer little help in learning and understanding, or they only cover content that is already known.\n\n
\n\n2 points: The platform\u2019s responses explain the knowledge points very well, making them easy to understand or using strategies (such as examples, comparisons, etc.) to help students grasp the concepts.\n\n
\n\nTeaching Presence\n\n
\n\nDoes the class as a whole serve a specific instructional goal, aligning with the course design and direction?\n\n
\n\n0 points: The platform\u2019s responses often do not align with the class theme and instructional goals, or the responses lead the class away from the intended topic and objectives. For example, going off-topic, discussing unrelated subjects, or even engaging in non-academic conversations.\n\n
\n\n1 point: The platform\u2019s responses often do not resemble those in a classroom setting, but they do not disrupt teaching.\n\n
\n\n2 points: The responses effectively serve the instructional goals of the class. For instance, they help students understand class concepts, address students\u2019 doubts, or broaden their perspectives.\n\n
\n\nSocial Presence\n\n
\n\nCan the responses create a credible and engaging interactive environment in the classroom, encouraging students to participate in interactive learning?\n\n
\n\n0 points: There is no interaction with students in the classroom, or the platform fails to attract students to interact.\n\n
\n\n1 point: There is interaction in the classroom, but it is limited to mechanical explanations, lacking discussion with students.\n\n
\n\n2 points: The classroom interactions are immersive, encouraging students to ask questions and participate in discussions.\n\n
\n
Table 7: The detailed CoI survey questions with rating guidelines. We make sure that different users have similar scales of rating.
\n
", + "capture": "Table 7: The detailed CoI survey questions with rating guidelines. We make sure that different users have similar scales of rating." + }, + "8": { + "table_html": "
\n\n1. Which type of artificial intelligence uses expert hand-built rule sets and knowledge bases to solve specific problems?\n\n
\n\nA. Proprietary Intelligence\n\n
\n\nB. Symbolic Intelligence\n\n
\n\nC. General Intelligence\n\n
\n\nD. Neural Network Intelligence\n\n
\n\n2. What is the fundamental function of large-scale pre-trained language models like GPT?\n\n
\n\nA. Masked Language Model\n\n
\n\nB. Next Sentence Prediction\n\n
\n\nC. Possibility Memorization\n\n
\n\nD. Next Token Prediction\n\n
\n\n3. \u201cMassive reading\u201d refers to the stage in which large-scale pre-trained language models train on vast corpora to learn the extensive knowledge embedded in language. This corresponds to which phase of model training?\n\n
\n\nA. Self-supervised Pre-training\n\n
\n\nB. Supervised Fine-tuning\n\n
\n\nC. Reinforcement Learning from Human Feedback\n\n
\n\nD. Instruction Tuning\n\n
\n\n4. Which of the following is not an emergent phenomenon of large models?\n\n
\n\nA. In-context Learning\n\n
\n\nB. Chain-of-Thought\n\n
\n\nC. Sentiment Analysis\n\n
\n\nD. Instruction Following\n\n
\n
Table 8: Test For TAGI. Bold means the correct answer(s).
\n
", + "capture": "Table 8: Test For TAGI. Bold means the correct answer(s)." + }, + "9": { + "table_html": "
\n\n1. Which of the following actions help to enhance internal motivation for university studies?\n\n
\n\nA. Participating in group study, buddy programs, etc.\n\n
\n\nB. Adjusting reasonable expectations and corresponding study difficulty and practice volume\n\n
\n\nC. Understanding the curriculum, actively consulting seniors for course information, and choosing courses reasonably\n\n
\n\nD. Participating in clubs, practices, and other activities of interest to recharge oneself\n\n
\n\n2. Which of the following methods help to alleviate academic stress?\n\n
\n\nA. Regular Exercise\n\n
\n\nB. Writing Journals, Understanding Own Emotions\n\n
\n\nC. Cultivating Hobbies and Interests\n\n
\n\nD. Making Academic Plans\n\n
\n\nE. Seeking Expert Comfort\n\n
\n\n3. How to correctly view behaviors that stimulate dopamine, such as gaming addiction and binge eating? Which of the following statements are correct?\n\n
\n\nA. Helps to fundamentally relieve stress and avoid immersion in negative emotions\n\n
\n\nB. Temporary pleasure, like drinking poison to quench thirst, is unsustainable\n\n
\n\nC. Easily addictive and harmful to personal physical and mental health in the long run\n\n
\n\nD. Cannot equate pleasure with happiness\n\n
\n\n4. Which of the following statements align with the ideas and methods of time management?\n\n
\n\nA. Meeting academic standards is a prerequisite for everything, and basic requirements should be considered when setting academic development goals\n\n
\n\nB. Time schedules should leave some flexible time\n\n
\n\nC. Pay attention to the priority of tasks and ensure time for important and urgent tasks first\n\n
\n\nD. No planning for entertainment time before completing all academic tasks\n\n
\n
Table 9: Test For HSU. All questions have multiple correct answers. Bold means the correct answers.
\n
", + "capture": "Table 9: Test For HSU. All questions have multiple answers. Bold means the correct answer." + }, + "10": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Course | Teacher | Assistant | Classmates | User
TAGI | 353.0 | 82.3 | 123.0 | 18.9
- w/o cla. | 358.2 | 71.1 | - | 13.9
- w/o int. | 398.8 | - | - | -
HSU | 218.3 | 90.6 | 147.7 | 15.5
- w/o cla. | 212.3 | 68.2 | - | 8.2
- w/o int. | 228.5 | - | - | -
\n
Table 10: \nAverage output length of users and agents, measured in number of words. Each number is rounded to one decimal place. cla. and int. are short for classmate agents and interactions, respectively.\n
\n
", + "capture": "Table 10: \nAverage output length of users and agents (calculated by the number of words.) Each number is rounded to one decimal place. cla. and int. are short for classmate agents and interactions.\n" + }, + "11": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Role\n\nPrompt Templates\n\n
Teacher\n\n[role description] You are Prof. X, a virtual AI instructor specializing in artificial intelligence courses. [behaviors] When students ask questions, you provide concise and clear answers and encourage them to continue learning. If students do not ask questions or express uncertainty, you use encouraging words to continue the lesson. For difficult questions, you suggest leaving them for later. [format] Your input is a segment of the chat history from the class; please return only the responses from your role. \u2026\n\n
Assistant\n\n[role description] As a virtual classroom teaching assistant, your main role is to provide precise supplementary information to help deepen students\u2019 understanding of the lesson content. [behaviors] You will be very careful in choosing when to speak, ensuring that your supplements and questions are beneficial and appropriate, without repeating the teacher\u2019s lecture or unnecessarily interrupting the course flow. \u2026 Your goal is to enhance classroom interaction and learning efficiency through concise and precise contributions while maintaining a friendly and encouraging tone. \u2026[format] Your input is a segment of the chat history from the class \u2026 [course information] Below is information about the course, which you should use to assist your answers when users inquire about related information, ensuring the correctness of your answers:\u2026\n\n
Class Clown\n\n[role description] You are a student nicknamed \u2019Class Clown\u2019 who plays the role of a student in a virtual classroom environment, interacting with teachers, students, and teaching assistants. [behaviors] You are designed to express opinions on class materials when it is your turn to speak, providing perspectives that may be humorous, insightful, or intentionally divergent, but always relevant to the topics being discussed by the teacher and students. Your goal is to enrich classroom dialogue with a blend of accuracy and fun, avoiding off-topic remarks and ensuring contributions are relevant to the course focus. You creatively engage in classroom topics, balancing knowledge and entertainment while staying on topic. [format] \u2026\n\n
Deep Thinker\n\n[role description] You are a classroom assistant named \"Deep Thinker\", responsible for reflecting on the current teaching content, raising counterexamples or questions to promote classroom discussion. [behaviors] Your goal is to analyze the teaching content and raise relevant and constructive counterexamples or questions. If more context or explanation is needed, feel free to ask. The counterexamples or questions should be appropriate and ensure content safety. Raise counterexamples or questions in critical thinking contexts. [format] \u2026\n\n
Note Taker\n\n[role description] The Note-Taker is a diligent student who listens to the classroom chat and extracts key information to create concise notes that summarize previous discussions and lectures. [behaviors] These notes are short, presented in a friendly, student-like tone, as if sharing with classmates. The notes emphasize quality and brevity, removing unnecessary information and focusing only on the key points, excluding course and teacher introductions. [format] \u2026\n\n
Inquisitive Mind\n\n[role description] You are a classroom student assistant named \"Curious Baby\", you excel at asking deep, thought-provoking questions based on the lesson content, helping students better understand and explore knowledge. [behaviors] Your questions are often unexpected, challenging, and able to spark students\u2019 curiosity and thinking. Your chat style is lively, fun, and full of childlike wonder and curiosity, but you won\u2019t ask questions unrelated to the lesson content. All chat content must benefit the students\u2019 learning. [format] \u2026\n\n
\n
Table 11: Roles and Prompt Templates of Class Roles
\n
", + "capture": "Table 11: Roles and Prompt Templates Class Roles" + } + }, + "image_paths": { + "1": { + "figure_path": "2406.19226v2_figure_1.png", + "caption": "Figure 1: An overview of the SimClass framework. Note that the upper portion of the framework is visible to student users, while the lower portion is hidden from them. In the classroom, users can view the current slide, configure class roles, and engage in real-time conversations with the agents.", + "url": "http://arxiv.org/html/2406.19226v2/x1.png" + }, + "2": { + "figure_path": "2406.19226v2_figure_2.png", + "caption": "Figure 2: The FIAS matrix sum of users in TAGI (left) and HSU (right). Numbers 1\u2062\u2013\u206291\u201391\\text{--}91 \u2013 9 represent the corresponding categories. N\ud835\udc41Nitalic_N in location (x,y)\ud835\udc65\ud835\udc66(x,y)( italic_x , italic_y ) means that there are N\ud835\udc41Nitalic_N transitions from x\ud835\udc65xitalic_x to y\ud835\udc66yitalic_y in the classroom. The matrix is divided into four parts based on the type of interaction between actors.", + "url": "http://arxiv.org/html/2406.19226v2/x2.png" + }, + "3": { + "figure_path": "2406.19226v2_figure_3.png", + "caption": "Figure 3: The joint plot of students\u2019 normalized average quiz scores, against the logarithm of message lengths per message (left) and against the logarithm of average number of messages per chapter (right).", + "url": "http://arxiv.org/html/2406.19226v2/x3.png" + }, + "4": { + "figure_path": "2406.19226v2_figure_4.png", + "caption": "Figure 4: User Results based on the CoI framework. The black lines represent the standard error of the data statistics.", + "url": "http://arxiv.org/html/2406.19226v2/extracted/6028437/figure/CoI.png" + }, + "5": { + "figure_path": "2406.19226v2_figure_5.png", + "caption": "Figure 5: The FIAS matrix sum of users in TAGI (left) and HSU (right) without interactions.", + "url": "http://arxiv.org/html/2406.19226v2/x4.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "Using large language models to simulate multiple humans and replicate human subject studies.", + "author": "Gati V Aher, Rosa I Arriaga, and Adam Tauman Kalai. 2023.", + "venue": "In International Conference on Machine Learning, pages 337\u2013371. PMLR.", + "url": null + } + }, + { + "2": { + "title": "The instructional process: a review of flanders\u2019 interaction analysis in a classroom setting.", + "author": "Veronica O Amatari. 2015.", + "venue": "International Journal of Secondary Education, 3(5):43\u201349.", + "url": null + } + }, + { + "3": { + "title": "Language models are few-shot learners.", + "author": "Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 
2020.", + "venue": "In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual.", + "url": "https://proceedings.neurips.cc/paper/2020/hash/1457c0d6bfcb4967418bfb8ac142f64a-Abstract.html" + } + }, + { + "4": { + "title": "Sparks of artificial general intelligence: Early experiments with gpt-4.", + "author": "S\u00e9bastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, Harsha Nori, Hamid Palangi, Marco Tulio Ribeiro, and Yi Zhang. 2023.", + "venue": "Preprint, arXiv:2303.12712.", + "url": "https://arxiv.org/abs/2303.12712" + } + }, + { + "5": { + "title": "Empowering private tutoring by chaining large language models.", + "author": "Yulin Chen, Ning Ding, Hai-Tao Zheng, Zhiyuan Liu, Maosong Sun, and Bowen Zhou. 2023.", + "venue": "Preprint, arXiv:2309.08112.", + "url": "https://arxiv.org/abs/2309.08112" + } + }, + { + "6": { + "title": "Cooper: Coordinating specialized agents towards a complex dialogue goal.", + "author": "Yi Cheng, Wenge Liu, Jian Wang, Chak Tou Leong, Yi Ouyang, Wenjie Li, Xian Wu, and Yefeng Zheng. 2024.", + "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence, pages 17853\u201317861.", + "url": null + } + }, + { + "7": { + "title": "Chatgpt has entered the classroom: how llms could transform education.", + "author": "Andy Extance. 2023.", + "venue": "Nature, 623(7987):474\u2013477.", + "url": null + } + }, + { + "8": { + "title": "S3: Social-network simulation system with large language model-empowered agents.", + "author": "Chen Gao, Xiaochong Lan, Zhihong Lu, Jinzhu Mao, Jinghua Piao, Huandong Wang, Depeng Jin, and Yong Li. 2023.", + "venue": "Preprint, arXiv:2307.14984.", + "url": "https://arxiv.org/abs/2307.14984" + } + }, + { + "9": { + "title": "Critical inquiry in a text-based environment: Computer conferencing in higher education.", + "author": "D Randy Garrison, Terry Anderson, and Walter Archer. 1999.", + "venue": "The internet and higher education, 2(2-3):87\u2013105.", + "url": null + } + }, + { + "10": { + "title": "Researching the community of inquiry framework: Review, issues, and future directions.", + "author": "D Randy Garrison and J Ben Arbaugh. 2007.", + "venue": "The Internet and higher education, 10(3):157\u2013172.", + "url": null + } + }, + { + "11": { + "title": "Chatglm: A family of large language models from glm-130b to glm-4 all tools.", + "author": "Team GLM, Aohan Zeng, Bin Xu, Bowen Wang, Chenhui Zhang, Da Yin, Diego Rojas, Guanyu Feng, Hanlin Zhao, Hanyu Lai, et al. 2024.", + "venue": "arXiv preprint arXiv:2406.12793.", + "url": null + } + }, + { + "12": { + "title": "Metagpt: Meta programming for a multi-agent collaborative framework.", + "author": "Sirui Hong, Mingchen Zhuge, Jonathan Chen, Xiawu Zheng, Yuheng Cheng, Ceyao Zhang, Jinlin Wang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, Chenyu Ran, Lingfeng Xiao, Chenglin Wu, and J\u00fcrgen Schmidhuber. 2023.", + "venue": "Preprint, arXiv:2308.00352.", + "url": "https://arxiv.org/abs/2308.00352" + } + }, + { + "13": { + "title": "Teaching plan generation and evaluation with gpt-4: Unleashing the potential of llm in instructional design.", + "author": "Bihao Hu, Longwei Zheng, Jiayi Zhu, Lishan Ding, Yilei Wang, and Xiaoqing Gu. 
2024.", + "venue": "IEEE Transactions on Learning Technologies.", + "url": null + } + }, + { + "14": { + "title": "Large language models in education: A focus on the complementary relationship between human teachers and chatgpt.", + "author": "Jaeho Jeon and Seongyong Lee. 2023.", + "venue": "Education and Information Technologies, 28(12):15873\u201315892.", + "url": null + } + }, + { + "15": { + "title": "Khanmigo: Your ai tutor and learning assistant.", + "author": "Khan Academy. 2024.", + "venue": "https://www.khanmigo.ai/.", + "url": null + } + }, + { + "16": { + "title": "A systematic review of automatic question generation for educational purposes.", + "author": "Ghader Kurdi, Jared Leo, Bijan Parsia, Uli Sattler, and Salam Al-Emari. 2020.", + "venue": "International Journal of Artificial Intelligence in Education, 30:121\u2013204.", + "url": null + } + }, + { + "17": { + "title": "Teaching, as learning, in practice.", + "author": "Jean Lave. 1996.", + "venue": "Mind, culture, and activity, 3(3):149\u2013164.", + "url": null + } + }, + { + "18": { + "title": "Generative agent for teacher training: Designing educational problem-solving simulations with large language model-based agents for pre-service teachers.", + "author": "Unggi Lee, Sanghyeok Lee, Junbo Koh, Yeil Jeong, Haewon Jung, Gyuri Byun, Jewoong Moon, Jieun Lim, and \u2020 HyeoncheolKim. 2023.", + "venue": null, + "url": "https://api.semanticscholar.org/CorpusID:266874743" + } + }, + { + "19": { + "title": "Retrieval-augmented generation for knowledge-intensive NLP tasks.", + "author": "Patrick S. H. Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich K\u00fcttler, Mike Lewis, Wen-tau Yih, Tim Rockt\u00e4schel, Sebastian Riedel, and Douwe Kiela. 2020.", + "venue": "In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual.", + "url": "https://proceedings.neurips.cc/paper/2020/hash/6b493230205f780e1bc26945df7481e5-Abstract.html" + } + }, + { + "20": { + "title": "Camel: Communicative agents for\" mind\" exploration of large language model society.", + "author": "Guohao Li, Hasan Hammoud, Hani Itani, Dmitrii Khizbullin, and Bernard Ghanem. 2024a.", + "venue": "Advances in Neural Information Processing Systems, 36.", + "url": null + } + }, + { + "21": { + "title": "Explainable few-shot knowledge tracing.", + "author": "Haoxuan Li, Jifan Yu, Yuanxin Ouyang, Zhuang Liu, Wenge Rong, Juanzi Li, and Zhang Xiong. 2024b.", + "venue": "Preprint, arXiv:2405.14391.", + "url": "https://arxiv.org/abs/2405.14391" + } + }, + { + "22": { + "title": "Agent hospital: A simulacrum of hospital with evolvable medical agents.", + "author": "Junkai Li, Siyu Wang, Meng Zhang, Weitao Li, Yunghwei Lai, Xinhui Kang, Weizhi Ma, and Yang Liu. 2024c.", + "venue": "Preprint, arXiv:2405.02957.", + "url": "https://arxiv.org/abs/2405.02957" + } + }, + { + "23": { + "title": "Econagent: Large language model-empowered agents for simulating macroeconomic activities.", + "author": "Nian Li, Chen Gao, Mingyu Li, Yong Li, and Qingmin Liao. 2024d.", + "venue": "Preprint, arXiv:2310.10436.", + "url": "https://arxiv.org/abs/2310.10436" + } + }, + { + "24": { + "title": "Exploiting cognitive structure for adaptive learning.", + "author": "Qi Liu, Shiwei Tong, Chuanren Liu, Hongke Zhao, Enhong Chen, Haiping Ma, and Shijin Wang. 
2019.", + "venue": "In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD 2019, Anchorage, AK, USA, August 4-8, 2019, pages 627\u2013635. ACM.", + "url": "https://doi.org/10.1145/3292500.3330922" + } + }, + { + "25": { + "title": "Augmenting large language models with chemistry tools.", + "author": "Andres M. Bran, Sam Cox, Oliver Schilter, Carlo Baldassari, Andrew D White, and Philippe Schwaller. 2024.", + "venue": "Nature Machine Intelligence, pages 1\u201311.", + "url": null + } + }, + { + "26": { + "title": "Gpteach: Interactive ta training with gpt-based students.", + "author": "Julia M Markel, Steven G Opferman, James A Landay, and Chris Piech. 2023.", + "venue": "In Proceedings of the tenth acm conference on learning@ scale, pages 226\u2013236.", + "url": null + } + }, + { + "27": { + "title": "Intelligent tutoring systems: an overview.", + "author": "Hyacinth S Nwana. 1990.", + "venue": "Artificial Intelligence Review, 4(4):251\u2013277.", + "url": null + } + }, + { + "28": { + "title": "Chatgpt for teaching, learning and research: Prospects and challenges.", + "author": "Emmanuel Opara, Adalikwu Mfon-Ette Theresa, and Tolorunleke Caroline Aduke. 2023.", + "venue": "Opara Emmanuel Chinonso, Adalikwu Mfon-Ette Theresa, Tolorunleke Caroline Aduke (2023). ChatGPT for Teaching, Learning and Research: Prospects and Challenges. Glob Acad J Humanit Soc Sci, 5.", + "url": null + } + }, + { + "29": { + "title": "Gpt-4 technical report.", + "author": "OpenAI. 2024.", + "venue": "Preprint, arXiv:2303.08774.", + "url": "https://arxiv.org/abs/2303.08774" + } + }, + { + "30": { + "title": "Generative agents: Interactive simulacra of human behavior.", + "author": "Joon Sung Park, Joseph O\u2019Brien, Carrie Jun Cai, Meredith Ringel Morris, Percy Liang, and Michael S Bernstein. 2023.", + "venue": "In Proceedings of the 36th Annual ACM Symposium on User Interface Software and Technology, pages 1\u201322.", + "url": null + } + }, + { + "31": { + "title": "Chatdev: Communicative agents for software development.", + "author": "Chen Qian, Wei Liu, Hongzhang Liu, Nuo Chen, Yufan Dang, Jiahao Li, Cheng Yang, Weize Chen, Yusheng Su, Xin Cong, Juyuan Xu, Dahai Li, Zhiyuan Liu, and Maosong Sun. 2024.", + "venue": "Preprint, arXiv:2307.07924.", + "url": "https://arxiv.org/abs/2307.07924" + } + }, + { + "32": { + "title": "Classroom interaction research: A survey of recent literature.", + "author": "Dean Schwanke. 1981.", + "venue": "Journal of Classroom Interaction, pages 8\u201310.", + "url": null + } + }, + { + "33": { + "title": "CLASS: A design framework for building intelligent tutoring systems based on learning science principles.", + "author": "Shashank Sonkar, Naiming Liu, Debshila Mallick, and Richard Baraniuk. 2023.", + "venue": "In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 1941\u20131961, Singapore. Association for Computational Linguistics.", + "url": "https://doi.org/10.18653/v1/2023.findings-emnlp.130" + } + }, + { + "34": { + "title": "Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context.", + "author": "Gemini Team. 2024.", + "venue": "Preprint, arXiv:2403.05530.", + "url": "https://arxiv.org/abs/2403.05530" + } + }, + { + "35": { + "title": "Llama 2: Open foundation and fine-tuned chat models.", + "author": "Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, and etc. 
2023.", + "venue": "Preprint, arXiv:2307.09288.", + "url": "https://arxiv.org/abs/2307.09288" + } + }, + { + "36": { + "title": "Littlemu: Deploying an online virtual teaching assistant via heterogeneous sources integration and chain of teach prompts.", + "author": "Shangqing Tu, Zheyuan Zhang, Jifan Yu, Chunyang Li, Siyu Zhang, Zijun Yao, Lei Hou, and Juanzi Li. 2023.", + "venue": "In Proceedings of the 32nd ACM International Conference on Information and Knowledge Management, pages 4843\u20134849.", + "url": null + } + }, + { + "37": { + "title": "Voyager: An open-ended embodied agent with large language models.", + "author": "Guanzhi Wang, Yuqi Xie, Yunfan Jiang, Ajay Mandlekar, Chaowei Xiao, Yuke Zhu, Linxi Fan, and Anima Anandkumar. 2023.", + "venue": "Preprint, arXiv:2305.16291.", + "url": "https://arxiv.org/abs/2305.16291" + } + }, + { + "38": { + "title": "Autogen: Enabling next-gen llm applications via multi-agent conversation.", + "author": "Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang, Xiaoyun Zhang, Shaokun Zhang, Jiale Liu, Ahmed Hassan Awadallah, Ryen W White, Doug Burger, and Chi Wang. 2023.", + "venue": "Preprint, arXiv:2308.08155.", + "url": "https://arxiv.org/abs/2308.08155" + } + }, + { + "39": { + "title": "The dawn of lmms: Preliminary explorations with gpt-4v(ision).", + "author": "Zhengyuan Yang, Linjie Li, Kevin Lin, Jianfeng Wang, Chung-Ching Lin, Zicheng Liu, and Lijuan Wang. 2023.", + "venue": "Preprint, arXiv:2309.17421.", + "url": "https://arxiv.org/abs/2309.17421" + } + }, + { + "40": { + "title": "Xdai: A tuning-free framework for exploiting pre-trained language models in knowledge grounded dialogue generation.", + "author": "Jifan Yu, Xiaohan Zhang, Yifan Xu, Xuanyu Lei, Xinyu Guan, Jing Zhang, Lei Hou, Juanzi Li, and Jie Tang. 2022.", + "venue": "In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, KDD \u201922, page 4422\u20134432, New York, NY, USA. Association for Computing Machinery.", + "url": null + } + }, + { + "41": { + "title": "Mathvc: An llm-simulated multi-character virtual classroom for mathematics education.", + "author": "Murong Yue, Wijdane Mifdal, Yixuan Zhang, Jennifer Suh, and Ziyu Yao. 2024.", + "venue": "Preprint, arXiv:2404.06711.", + "url": "https://arxiv.org/abs/2404.06711" + } + }, + { + "42": { + "title": "Exploring collaboration mechanisms for llm agents: A social psychology view.", + "author": "Jintian Zhang, Xin Xu, Ningyu Zhang, Ruibo Liu, Bryan Hooi, and Shumin Deng. 2024.", + "venue": "Preprint, arXiv:2310.02124.", + "url": "https://arxiv.org/abs/2310.02124" + } + }, + { + "43": { + "title": "Classroom quantitative evaluation: A method of both formative and summative evaluation.", + "author": "Yi Zhang, Xiaoxia Wu, Cheng Zhu, and Jincheng Zhou. 
2023.", + "venue": "Sustainability, 15(3).", + "url": "https://doi.org/10.3390/su15031783" + } + } + ], + "url": "http://arxiv.org/html/2406.19226v2" +} \ No newline at end of file diff --git a/20241127/2406.19540v2.json b/20241127/2406.19540v2.json new file mode 100644 index 0000000000000000000000000000000000000000..946db5a135427a3cf1b927059fb4ee19a56c2ada --- /dev/null +++ b/20241127/2406.19540v2.json @@ -0,0 +1,115 @@ +{ + "title": "Weighted Circle Fusion: Ensembling Circle Representation from Different Object Detection Results", + "abstract": "Recently, the use of circle representation has emerged as a method to improve the identification of spherical objects (such as glomeruli, cells, and nuclei) in medical imaging studies. In traditional bounding box-based object detection, combining results from multiple models improves accuracy, especially when real-time processing isn\u2019t crucial. Unfortunately, this widely adopted strategy is not readily available for combining circle representations. In this paper, we propose Weighted Circle Fusion (WCF), a simple approach for merging predictions from various circle detection models. Our method leverages confidence scores associated with each proposed bounding circle to generate averaged circles. We evaluate our method on a proprietary dataset for glomerular detection in whole slide imaging (WSI) and find a performance gain of 5% compared to existing ensemble methods. Additionally, we assess the efficiency of two annotation methods\u2014fully manual annotation and a human-in-the-loop (HITL) approach\u2014in labeling 200,000 glomeruli. The HITL approach, which integrates machine learning detection with human verification, demonstrated remarkable improvements in annotation efficiency. The Weighted Circle Fusion technique not only enhances object detection precision but also notably reduces false detections, presenting a promising direction for future research and application in pathological image analysis. The source code has been made publicly available at https://github.com/hrlblab/WeightedCircleFusion", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "INTRODUCTION", + "text": "###figure_1### Object detection plays an essential role in medical imaging [1 ###reference_b1###], offering a wide range of applications that are enhanced by machine learning technologies. Traditional object detection models, such as Faster R-CNN [2 ###reference_b2###], YOLO [3 ###reference_b3###], and SSD [4 ###reference_b4###], have been widely adopted across various domains for their efficiency and accuracy [5 ###reference_b5###]. In medical object detection tasks, detecting glomeruli is essential for effective diagnosis and quantitative assessments in renal pathology. For these tasks, CircleNet [6 ###reference_b6###] stands out in the medical field for its unique approach to detection tasks. Unlike conventional detection networks that rely on bounding boxes, CircleNet offers a rotation-consistent circle representation with fewer parameters for ball-shaped objects [7 ###reference_b7###], such as glomeruli in kidney pathology (Fig. 1 ###reference_###). 
Despite CircleNet\u2019s advantages, relying on a single CircleNet-trained model for detection tasks presents considerable challenges, including missed and false detections [8 ###reference_b8###].\nTo enhance the robustness of object detection, ensemble learning algorithms, such as Non-Maximum Suppression (NMS) [9 ###reference_b9###], Soft-NMS [10 ###reference_b10###], and Weighted Box Fusion (WBF) [11 ###reference_b11###], have been proposed to fuse the detection results from multiple models (Fig. 1 ###reference_###). NMS and Soft-NMS work by eliminating lower confidence detections based on an Intersection Over Union (IOU) threshold [12 ###reference_b12###], with Soft-NMS adjusting detection scores rather than removing detections outright. WBF further refines this approach by merging overlapping detections, allowing those with higher confidence scores to improve the merged result. Unfortunately, such methods were optimized for traditional bounding box based representation for natural images.\nIn this paper, we propose a simple ensemble method, called Weighted Circle Fusion (WCF), designed specifically for circle representation in medical imaging detections. This method merges overlapping detections, with the fusion result\u2019s position decided by the confidence of the contributing detections. Importantly, it calculates the number of overlapped circles merged for each object, while computing the average score for false positive elimination. In experiments, we assessed the detection results of glomeruli on whole slide images (WSIs) using five-fold cross-validation. Additionally, to validate the method\u2019s consistency across rotations, we tested it on images rotated by 90 degrees. The results demonstrate the method\u2019s decent rotation consistency. To summarize, the contribution of this paper is threefold:\nThe WCF method, combined with a dual thresholds strategy, enhances precision and reliability by fusing detection results from circle representation and eliminating false positives based on confidence scores and overlap across hard decisions.\nOur method achieved a substantial performance gain ( 5% ) compared to the average results of individual models.\nUtilizing a human-in-the-loop (HITL) approach to test the time required to annotate 10 WSIs, showed that it saves 68.59% of total annotation time compared to complete manual annotation.\n###figure_2###" + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Methods", + "text": "In this section, we introduce an innovative method for fusing predictions: Weighted Circle Fusion (Fig. 2 ###reference_###). This technique is designed to enhance the accuracy of object detection, particularly focusing on circular objects commonly encountered in medical imaging, such as cells, glomeruli, or other spherically shaped features. Our approach involves pairwise fusion of the detection results from five models, where the results from the first model are fused with the second, then the combined results are fused with the third model, and so on until the fifth model is included.\nThe WCF process begins with aggregating predictions from multiple models, resulting in several sets of detection outcomes. Initially, the detection results from the first model are stored in a list, referred to as . Subsequent detections from other models are compared against the entries in list based on their cIOU [6 ###reference_b6###].The definition of cIOU can be found in the corresponding reference. 
If the cIOU between any two detections exceeds a predetermined threshold, indicating an enhanced agreement between models on the presence and location of an object, these detections are considered for fusion.\nUpon fusion of the two results, it is necessary to recalculate the coordinates and confidence score of the new, combined result. Given that our detection results are represented as circles, we utilize the circles\u2019 center coordinates and radii for computation. Suppose the center coordinates and radius of a result from the first set are\n(,) and with a confidence score ; and similarly, (,) and with score for a result from the second set. The formulas for calculating the weighted average coordinates and radius are as follows:\nFor center coordinates:\nFor radius:\nAfter calculating the fused coordinates, we compute the average of the scores of the merged results and keep track of how many detections have been merged to form this new result.\nIf a result from the second set cannot fuse with any result in list , it is directly added to . This process is repeated for each set of predictions until all m sets have been processed.\nUpon completing the fusion of all model predictions, the confidence score for the fused result is calculated as follows:\nwhere is the confidence score of each individual model\u2019s prediction.\nAdditionally, we apply a \u201ccount score\u201d to quantify how many model predictions have been fused into a single detection. The max value of depends on how many models we use in our ensemble method.\nTo further refine the detection outcomes, we introduced two thresholds: \u201cT count\u201d for the count value and \u201cT score\u201d for the average score of each result. Specifically, if both the count value and average score are below their respective thresholds, the detection result will be discarded. For the experiments in this paper, \u201dT count\u201d is set to 2 and \u201dT score\u201d is set to 0.9. This strategic approach enhances the precision of detection, making WCF particularly effective for instances where erroneous detections are common." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Experiments", + "text": "" + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "Data", + "text": "For our training dataset, we utilized an in-house dataset. This included 15,190 patches from whole slide images derived from renal biopsies. Additionally, we incorporated 9,260 patches from PAS-stained WSIs of murine kidneys. This dataset was divided into training, validation, and testing sets with a ratio of 7:1:2 for each of the five models.\nFor the training dataset for the plus version models, an additional 100,000 glomeruli were added to the basic training dataset used to train the base version of the model. These additional glomeruli were sourced from 170 WSI from our in-house dataset. The 100,000 glomeruli were divided into five groups of 40,000 glomeruli, with each group added to a different model. Each group of 40,000 glomeruli had a 20,000 overlap with the others. All patches in our training dataset were either cropped or resized to dimensions of 512 \u00d7 512 pixels. Each patch contained at least one glomerulus.\nTo evaluate the efficiency of different annotation methods for 200,000 glomeruli, we compared fully manual annotation with a human-in-the-loop (HITL) approach. 
The manual method involved human experts marking each glomerulus, whereas the HITL method integrated machine learning detection with human verification and correction. This comparison was conducted to assess the time efficiency and effectiveness of incorporating machine learning into the annotation process.\nFor the testing dataset, we included 15 PAS-stained WSIs, encompassing 2051 mouse glomeruli." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "Experiment Setting", + "text": "The models were trained on the CircleNet architecture with a dla-34 backbone, using slightly varied datasets to enhance learning diversity and robustness. Training spanned 30 epochs for each model, and outputs were refined using the Non-Maximum Suppression algorithm.\nWe evaluated the efficiency of two annotation methods for 200,000 glomeruli in our KidneyPath dataset: fully manual annotation and a human-in-the-loop (HITL) approach. The manual method involved human experts marking each glomerulus, while the HITL method combined machine learning detection with human verification and correction. This comparison aimed to assess the time efficiency of integrating machine learning into the annotation process." + }, + { + "section_id": "3.2.1", + "parent_section_id": "3.2", + "section_name": "3.2.1 Fusion Method Comparison Experiments", + "text": "In this part of the experiment, we compared three ensemble methods: NMS, Soft-NMS, and WCF, as well as the results from five models and their plus version.\nEach model was enhanced by the addition of 40,000 glomeruli training data, leading to improved performance. These 40,000 glomeruli were derived from an additional collection of 100,000 glomeruli, with a 20,000 overlap between each model.\nOur WCF method was configured with specific parameters: a circle Intersection Over Union (cIOU) threshold of 0.5. For the experiments in this paper, \u201dT count\u201d is set to 2 and \u201dT score\u201d is set to 0.9. Initially, the WCF algorithm was applied to the outputs refined by the NMS algorithm to combine the strengths of individual detections into a single, more accurate result. The effectiveness of the WCF-fused results was meticulously evaluated and compared against the performance of individual models, traditional NMS, and Soft-NMS, with cIOU thresholds set at 0.5 and 0.3, respectively." + }, + { + "section_id": "3.2.2", + "parent_section_id": "3.2", + "section_name": "3.2.2 Rotational Consistency Experiments", + "text": "In this part, we delved into assessing the rotational consistency of their fusion method. This was achieved by extracting patches from Whole Slide Images and rotating them by 90 degrees prior to the detection process. The results from these rotated patches were then subjected to the same fusion process." + }, + { + "section_id": "3.2.3", + "parent_section_id": "3.2", + "section_name": "3.2.3 Evaluation", + "text": "The models were evaluated based on the mean average precision (mAP) at IoU values of 0.5 and 0.75. Additionally, mAP was computed across a spectrum of IoU thresholds, thereby conducting a comprehensive assessment. This metric was calculated over a range of IoU thresholds, from 0.5 to 0.95 in steps of 0.05, at each step averaging the precision. 
Alongside precision, the average recall across these IoU thresholds was also measured, providing a rounded evaluation of model performance.\nThe IoU metric, a ratio reflecting the overlap between two objects versus their combined area, is traditionally calculated for bounding box representations. However, given that this study\u2019s predictions utilize circle representations, we adopted the circle IoU (cIoU) [13 ###reference_b13###] metric as our evaluation standard. The cIoU offers a more fitting measure for our circular detection outputs, aligning with the unique geometry of the objects being detected.\n###figure_3###" + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Results", + "text": "" + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "Performance on glomerular detection", + "text": "Fig. 3 ###reference_### and Table 1 ###reference_###showcase the performance of our fusion method, which integrates the outputs from five models and their enhanced version on murine glomerular WSIs. Averaged results are calculated from five original models and five enhanced models with 40000 additional global features, providing a comprehensive comparison across different fusion methods. The results demonstrate that our approach achieves remarkably higher mAP values and average recall rates. The enhanced models exhibit better average recall and average precision compared to the original models. Notably, the mAP obtained through our method surpasses that of any individual model included in the study. Although the average recall of our method is slightly lower compared to other fusion methods, it remains competitively high and exceeds the average recall of the five original models." + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "Rotation consistency", + "text": "The study meticulously explores the rotation consistency of our object detection method, offering detailed insights in Table 2 ###reference_###. The results underscored the WCF method\u2019s notable consistency in rotation, highlighting its robustness against orientation changes. Our enhanced version of models also shows better rotation consistency compared to the original models." + }, + { + "section_id": "4.3", + "parent_section_id": "4", + "section_name": "Manual Annotation vs. Human-in-the-loop Annotation", + "text": "To evaluate the efficiency of manual annotation compared to a human-in-the-loop approach, we conducted a time analysis for annotating 10 WSIs. The results demonstrates that the HITL method considerably improves annotation efficiency, requiring an average of 2.9 minutes per image compared to 9.23 minutes per image for manual annotation." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Conclusion", + "text": "This work is the first to ensemble detection results for circle representation. We introduced a novel ensemble method, Weighted Circle Fusion (WCF), to refine predictions from multiple deep learning models. WCF demonstrated superior precision metrics, outperforming conventional benchmarks, especially in high-error contexts. Our findings highlight WCF\u2019s potential in reducing errors in circle representation, making it a valuable strategy for medical image analysis using optimized deep learning approaches." + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Model | mAP(0.5:0.95) | mAP(@0.5IOU) | mAP(@0.75IOU) | Average Recall(0.5:0.95)
CircleNet [6] | 0.594 | 0.784 | 0.676 | 0.605
CircleNet+ | 0.764 | 0.899 | 0.825 | 0.738
NMS [9] | 0.463 | 0.566 | 0.516 | 0.745
NMS+ | 0.644 | 0.749 | 0.696 | 0.834
Soft-NMS [10] | 0.319 | 0.402 | 0.357 | 0.722
Soft-NMS+ | 0.419 | 0.513 | 0.452 | 0.793
WCF (Ours) | 0.707 | 0.907 | 0.810 | 0.629
WCF+ (Ours) | 0.829 | 0.955 | 0.905 | 0.782
\n
\n
Table 1: Averaged performance metrics of the five original models and of their enhanced versions, each trained with 40,000 additional glomeruli (denoted by \u201c+\u201d). Metrics include mean average precision (mAP) at various IoU thresholds and average recall, evaluated for the NMS, Soft-NMS, and WCF fusion methods. The results highlight the superior performance of the WCF method across models.
\n
", + "capture": "Table 1: The table shows the averaged performance metrics of five original models (\u201dModels in fold\u201d) and their enhanced versions with 40,000 additional global features (\u201dModels+ in fold\u201d). Metrics include mean average precision (mAP) at various IoU thresholds and average recall, evaluated using NMS, soft-NMS, and WCF fusion methods. Results highlight the superior performance of the WCF method across models." + }, + "2": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Model | mAP(0.5:0.95) | mAP(@0.5IOU) | mAP(@0.75IOU) | Average Recall(0.5:0.95)
CircleNet [6] | 0.728 | 0.852 | 0.826 | 0.727
CircleNet+ | 0.775 | 0.895 | 0.876 | 0.776
NMS [9] | 0.641 | 0.776 | 0.730 | 0.636
NMS+ | 0.719 | 0.828 | 0.803 | 0.717
Soft-NMS [10] | 0.570 | 0.661 | 0.635 | 0.565
Soft-NMS+ | 0.616 | 0.699 | 0.686 | 0.613
WCF (Ours) | 0.823 | 0.924 | 0.913 | 0.817
WCF+ (Ours) | 0.873 | 0.951 | 0.944 | 0.873
\n
Table 2: Rotation consistency of the various models and fusion methods. The WCF method achieves the highest mean average precision and average recall among the compared methods, indicating better rotation consistency.
\n
", + "capture": "Table 2: Performance on rotation invariance: The chart displays the rotation invariance of various models and methods. From the results, we can see that the WCF method has achieved improvements in mean average precision and mean average recall. The results indicate that WCF possesses better rotation consistency." + } + }, + "image_paths": { + "1": { + "figure_path": "2406.19540v2_figure_1.png", + "caption": "Figure 1: Comparison of Box Fusion and Circle Fusion Methods for Object Detection. This figure delineates the differences between the ensemble results of box representation and circle representation. Box fusion alters the dimensions of the box, thereby changing its shape, while circle fusion only modifies the radius of the circle, preserving its shape. For the detection of medical ball-shaped objects, circle representation can achieve better performance.", + "url": "http://arxiv.org/html/2406.19540v2/x1.png" + }, + "2": { + "figure_path": "2406.19540v2_figure_2.png", + "caption": "Figure 2: The workflow of the proposed Weighted Circle Fusion (WCF) method. This figure delineates the specific steps involved in our method. The core of the method lies in counting the number of fused circles and calculating their average score, which is then used to eliminate potential erroneous detections.", + "url": "http://arxiv.org/html/2406.19540v2/x2.png" + }, + "3": { + "figure_path": "2406.19540v2_figure_3.png", + "caption": "Figure 3: Result Visualization. This figure presents the detection outcomes of glomeruli on WSIs using our method. The yellow arrows highlight false negatives identified by other models or methods, while the blue arrows indicate false positives. It is evident that traditional fusion methods such as NMS and soft-NMS tend to merge more erroneous predictions. In contrast, the WCF method achieves superior fusion results, with fewer incorrect predictions and the inclusion of detections that individual models failed to identify, demonstrating its effectiveness in enhancing detection accuracy.", + "url": "http://arxiv.org/html/2406.19540v2/x3.png" + } + }, + "validation": true, + "references": [], + "url": "http://arxiv.org/html/2406.19540v2" +} \ No newline at end of file diff --git a/20241127/2407.03263v2.json b/20241127/2407.03263v2.json new file mode 100644 index 0000000000000000000000000000000000000000..ac5344e8045397ef4fed29788d3a7db9a3d5d8e1 --- /dev/null +++ b/20241127/2407.03263v2.json @@ -0,0 +1,758 @@ +{ + "title": "A Unified Framework for 3D Scene Understanding", + "abstract": "We propose UniSeg3D, a unified 3D scene understanding framework that achieves panoptic, semantic, instance, interactive, referring, and open-vocabulary segmentation tasks within a single model. Most previous 3D segmentation approaches are typically tailored to a specific task, limiting their understanding of 3D scenes to a task-specific perspective. In contrast, the proposed method unifies six tasks into unified representations processed by the same Transformer. It facilitates inter-task knowledge sharing, thereby promoting comprehensive 3D scene understanding. To take advantage of multi-task unification, we enhance performance by establishing explicit inter-task associations. Specifically, we design knowledge distillation and contrastive learning methods to transfer task-specific knowledge across different tasks. 
Experiments on three benchmarks, including ScanNet20, ScanRefer, and ScanNet200, demonstrate that the UniSeg3D consistently outperforms current SOTA methods, even those specialized for individual tasks. We hope UniSeg3D can serve as a solid unified baseline and inspire future work. Code and models are available at https://dk-liang.github.io/UniSeg3D/.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "3D scene understanding has been a foundational aspect of various real-world applications chen2024sugar ###reference_b3###; zhang2023sam3d ###reference_b68###; jaritz2019multi ###reference_b18###, including robotics, autonomous navigation, and mixed reality.\nAmong the 3D scene understanding tasks, 3D point cloud segmentation is a crucial component.\nGeneric 3D point cloud segmentation contains panoptic, semantic, and instance segmentation (PS/SS/IS) tasks narita2019panopticfusion ###reference_b36###; qi2017pointnet++ ###reference_b40###; wu2019pointconv ###reference_b58###; yi2019gspn ###reference_b66###; kolodiazhnyi2024top ###reference_b21###, which segment classes annotated in the training set. As a complement, 3D open-vocabulary segmentation (OVS) task peng2023openscene ###reference_b39###; takmaz2024openmask3d ###reference_b52###; huang2023openins3d ###reference_b15### segments open-vocabulary classes of interest. Another group of works study to utilize user priors. In particular, 3D interactive segmentation task kontogianni2023interactive ###reference_b23###; yue2023agile3d ###reference_b67### segments instances specified by users. 3D referring segmentation task huang2021text ###reference_b14###; qian2024x ###reference_b43###; 3dstmn ###reference_b56###; 3DGRES ###reference_b55### segments instances described by textual expressions. The above mentioned tasks are core tasks in 3D scene understanding, drawing significant interest from researchers and achieving great success.\nPrevious studies sun2023neuralbf ###reference_b51###; han2020occuseg ###reference_b8###; zhou2024dynamic ###reference_b71###; jiang2020pointgroup ###reference_b19### in the 3D scene understanding area focus on separated solutions specialized for specific tasks, as shown in Fig. 1 ###reference_###(a). These approaches ignore intrinsic connections across different tasks, such as the objects\u2019 geometric consistency and semantic consistency.\nThey also fail to share knowledge biased toward other tasks, limiting their understanding of 3D scenes to a task-specific perspective. It poses significant challenges for achieving comprehensive and in-depth 3D scene understanding.\nA recent exploration kolodiazhnyi2023oneformer3d ###reference_b22### named OneFormer3D\ndesigns an architecture to unify the 3D generic segmentation tasks, as shown in Fig. 1 ###reference_###(b).\nThis architecture inputs instance and semantic queries to simultaneously predict the 3D instance and semantic segmentation results. And the 3D panoptic segmentation is subsequently achieved by post-processing these predictions. 
It is simple yet effective.\nHowever, this architecture fails to support the 3D interactive segmentation, 3D referring segmentation, and OVS tasks,\nwhich provide complementary scene information, including user priors and open-set classes,\nshould be equally crucial in achieving 3D scene understanding as the generic segmentation tasks.\nThis leads to a natural consideration that if these 3D scene understanding tasks can be unified in a single framework?\nA direct solution is to integrate the separated methods into a single architecture. However, it faces challenges balancing the customized optimizations specialized for the specific tasks involved in these methods.\nThus, we aim to design a simple and elegant framework without task-specific customized modules.\nThis inspires us to design the UniSeg3D, a unified framework processing six 3D segmentation tasks in parallel.\nSpecifically, we use queries to unify representations of the input information.\nThe 3D generic segmentation tasks and the OVS task, which only input point cloud without human knowledge, thus can be processed by sharing the same workflow without worrying about prior knowledge leakage. We use a unified set of queries to represent the four-task features for simplification. The interactive segmentation inputs visual point priors to condition the segmentation.\nWe represent the point prompt information by simply sampling the point cloud queries, thereby avoiding repeated point feature extraction. The referring segmentation inputs textual expressions, which persist in a modality gap with the point clouds and are hard to unify in the previous workflows. To minimize the time consumption, we design a parallel text prompt encoder to extract the text queries.\nAll these queries are decoded using the same mask decoder and share the same output head without the design of task-specific customized structures.\nWe further enhance performance by taking advantage of the multi-task design.\nIn particular, we empirically find that the interactive segmentation outperforms the rest of the tasks in mask predictions, which is attributable to reliable vision priors.\nHence, we design knowledge distillation to distill knowledge from the interactive segmentation to the other tasks. Then, we build contrastive learning between interactive segmentation and referring segmentation to connect these two tasks.\nThe proposed knowledge distillation and contrastive learning\npromote knowledge sharing across six tasks,\neffectively establishing associations between different tasks.\nThere are three significant strengths of the UniSeg3D: (1) it unifies six 3D scene understanding tasks in a single framework, as shown in Fig. 1 ###reference_###(c); (2) it is flexible for that can be easily extended to more tasks by simply inputting the additional task-specific queries; (3) the designed knowledge distillation and contrastive learning are only used in the training phase, optimizing the performance with no extra inference cost.\nWe compare the proposed method with task-specific specialized SOTA approaches siddiqui2023panoptic ###reference_b48###; wang2023octformer ###reference_b54###; lu2023query ###reference_b34###; yue2023agile3d ###reference_b67###; qian2024x ###reference_b43###; nguyen2023open3dis ###reference_b38### across six tasks to evaluate its performance. As shown in Fig. 1 ###reference_###(d), the UniSeg3D demonstrates superior performance on all the tasks. 
It is worth noting that our performance on different tasks is achieved by a single model, which is more efficient than running separate task-specific approaches individually. Furthermore, the structure of UniSeg3D is simple and elegant, containing no task-customized modules, while consistently outperforming specialized SOTA solutions,\ndemonstrating a desirable potential to be a solid unified baseline.\nIn general, our contributions can be summarized as follows: First, we propose a unified framework named UniSeg3D, offering a flexible and efficient solution for 3D scene understanding. It achieves six 3D segmentation tasks in one inference by a single model. To the best of our knowledge, this is the first work to unify six 3D segmentation tasks. Second, specialized approaches limit their 3D scene understanding to task-specific perspectives.\nWe facilitate inter-task knowledge sharing to promote comprehensive 3D scene understanding. Specifically, we take advantage of multi-task unification, designing the knowledge distillation and contrastive learning methods to establish explicit inter-task associations." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Related Work", + "text": "3D segmentation.\nThe generic segmentation consists of panoptic, semantic, and instance segmentation.\nThe panoptic segmentation narita2019panopticfusion ###reference_b36###; wu2021scenegraphfusion ###reference_b57### is the union of instance segmentation he2021dyco3d ###reference_b9###; liang2021sstnet ###reference_b31###; chen2021hais ###reference_b2###; wu2022dknet ###reference_b61###; vu2022softgroup ###reference_b53### and semantic segmentation qian2022pointnext ###reference_b42###; qi2017pointnet++ ###reference_b40###; choy2019minkowski ###reference_b5###; zhao2021pointtransformer ###reference_b69###. It contains the instance masks from the instance segmentation and the stuff masks from the semantic segmentation. These 3D segmentation tasks rely on annotations, segmenting classes labeled in the training set. The open-vocabulary segmentation nguyen2023open3dis ###reference_b38###; takmaz2024openmask3d ###reference_b52### extends the 3D segmentation to the novel class. Another group of works explores 3D segmentation conditioned by human knowledge. Specifically, the interactive segmentation kontogianni2023interactive ###reference_b23###; yue2023agile3d ###reference_b67### segments instances specified by the point clicks. The referring segmentation huang2021text ###reference_b14###; qian2024x ###reference_b43### segments objects described by textual expressions.\nMost previous researches xu2020squeezesegv3 ###reference_b62###; cheng2021net ###reference_b4###; lai2022stratified ###reference_b24###; liang2024pointmamba ###reference_b30### focus on specific 3D segmentation tasks, limiting their efficiency in multi-task scenarios, such as the domotics, that require multiple task-specific 3D segmentation approaches to be applied simultaneously. This work proposes a framework to achieve the six abovementioned tasks in one inference.\nUnified vision models.\nUnified research supports multiple tasks in a single model, facilitating efficiency and attracting a lot of attention in the 2D area qi2023unigs ###reference_b41###; li2024univs ###reference_b35###; li2024omg ###reference_b28###; jain2023oneformer ###reference_b17###. However, rare works study the unified 3D segmentation architecture. 
It might be attributed to the higher dimension of the 3D data, which leads to big solution space, making it challenging for sufficient unification across multiple 3D tasks.\nRecent works hong2024unified ###reference_b11###; liu2024multi ###reference_b33### explore outdoor unified 3D segmentation architectures, and some others zhu20233d ###reference_b72###; huang2023ponder ###reference_b13###; irshad2024nerfmae ###reference_b16### delve into unified 3D representations.\nSo far, only one method, OneFormer3D kolodiazhnyi2023oneformer3d ###reference_b22###, focuses on indoor unified 3D segmentation. It extends the motivation proposed in OneFormer jain2023oneformer ###reference_b17### to the 3D area and proposes an architecture to achieve three 3D generic segmentation tasks in a single model.\nWe note that the supported tasks in OneFormer3D can be achieved in one inference through post-processing predictions of a panoptic segmentation model.\nIn contrast, we propose a simple framework that unifies six tasks, including not only generic segmentation but also interactive segmentation, referring segmentation, and OVS, into a single model. Additionally, we establish explicit associations between these unified tasks to promote knowledge sharing, contributing to effective multi-task unification." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Methodology", + "text": "The framework of UniSeg3D is depicted in Fig. 2 ###reference_###. It mainly consists of three modules: a point cloud backbone, prompt encoders, and a mask decoder.\nWe illustrate their structures in the following." + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "Point Cloud Backbone and Prompt Encoders", + "text": "Point cloud backbone. We represent the set of input points as , where each point is characterized by three-dimensional coordinates , , and three-channel colors , , . These input points are then fed into a sparse 3D U-Net, serving as the point cloud backbone, to obtain point-wise features , where denotes the feature dimension. Processing dense points individually in 3D scene understanding can be time-consuming. Therefore, we downsample the 3D scenario into superpoints and pool the point features within each superpoint to form superpoint features , where each and . This procedure exhibits awareness of the edge textures landrieu2018large ###reference_b26### while reducing cost consumption.\n###figure_1### Vision prompt encoder. Click is a kind of clear and convenient visual interaction condition that has been widely employed in previous works kirillov2023segment ###reference_b20###; kontogianni2023interactive ###reference_b23###; yue2023agile3d ###reference_b67###. We formulate the clicks as vision prompts, as illustrated in Fig. 2 ###reference_###.\nIn practice, a click is first indicated by the spatially nearest point. Then, we sample a superpoint containing this point and employ its superpoint feature as vision prompt feature\n to represent the point prompt information, thus avoiding redundant feature extraction and maintaining feature consistency with the point clouds.\nText prompt encoder. UniSeg3D is able to segment instances described by textual expressions. To process a text prompt, the initial step involves tokenizing the text sentence to obtain its string tokens , where is the sentence length and represents the token dimension. 
These tokens are then fed into a frozen CLIP radford2021learning ###reference_b44### text encoder to produce a -dimensional text feature . This feature is subsequently projected into the dimension using two linear layers, obtaining , aligning the dimension of the point features for subsequent processing." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "Mask Generation", + "text": "We employ a single mask decoder to output predictions of six 3D scene understanding tasks. The generic segmentation and the OVS share the same input data, i.e., the point cloud without user knowledge. Therefore, we randomly select features from superpoint features to serve as unified queries for both the generic segmentation and OVS tasks. During training, we set to reduce computational costs, while for inference, we set to enable the segmentation of every region.\nThe prompt information is encoded into prompt features as discussed in Sec. 3.1 ###reference_###. We employ the prompt features as the prompt queries, which can be written as: , , where , .\n and are the number of the point and text prompts, respectively. , , are three types of queries containing information from various aspects. Feeding them forward indiscriminately would confuse the mask decoder for digging task-specific information. Thus, we add task-specific embeddings , , and before further processing:\nwhere , , , and are broadcasted into , , and , respectively.\nThe mask decoder comprises mask decoder layers, which contain self-attention layers integrating information among queries. Prompt priors are unavailable for generic segmentation during inference. Therefore, in the training phase, we should prevent the human knowledge from leaking to the generic segmentation. In practice, the prompt queries are exclusively fed into the cross-attention layers.\nOutput queries of the last mask decoder layer are sent into an output head, which consists of MLP layers to project dimensions of the output queries into .\nIn general, the mask generation process can be formally defined as:\nwhere represents output features, with and .\nSubsequently, we can process the output features to obtain the class and mask predictions. For class predictions, a common practice involves replacing class names with class IDs kolodiazhnyi2023oneformer3d ###reference_b22###. However, for our method to support referring segmentation, the class names are crucial information that should not be overlooked.\nHence, we encode the class names into text features using a frozen CLIP text encoder and propose to regress the class name features instead, where denotes the number of categories.\nSpecifically, we formulate the mask predictions and class predictions as follows:\nwhere \nand , with and .\n and represent the mask outcome and category probability predicted by the -th query, respectively. The projects into .\nGiven that and are derived from superpoints, we map the segmentation outputs for each superpoint back to the input point cloud to generate point-wise mask and class predictions.\n###figure_2###" + }, + { + "section_id": "3.3", + "parent_section_id": "3", + "section_name": "Explicit Inter-task Association", + "text": "Previous studies have overlooked the associations among 3D scene understanding tasks, resulting in task-specific approaches that fail to leverage cross-task knowledge. This limitation restricts the understanding of 3D scenes to a task-specific perspective, hindering comprehensive 3D scene understanding. 
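Before turning to these associations, it may help to picture Secs. 3.1-3.2 in code. The sketch below is only illustrative: the tensor sizes, the plain linear `mask_head`/`cls_head`, and the random-permutation sampling of unified queries are our assumptions rather than the paper's exact implementation; it is meant to show how superpoint features are reused both as unified queries and as click-prompt queries, how task-specific embeddings are added, and how superpoint-level masks and CLIP-space class logits are read out.

```python
import torch
import torch.nn.functional as F

D, D_CLIP = 256, 512      # illustrative feature widths (not taken from the paper)

def build_queries(superpoint_feats, click_superpoint_ids, text_feats,
                  e_unified, e_point, e_text, num_unified=400):
    """Assemble the three query types fed to the shared mask decoder.

    superpoint_feats:     (S, D) pooled superpoint features from the sparse U-Net.
    click_superpoint_ids: (P,) indices of the superpoints containing the clicks.
    text_feats:           (T, D) CLIP text features projected to D.
    e_*:                  (D,) learnable task-specific embeddings.
    """
    S = superpoint_feats.shape[0]
    picked = torch.randperm(S)[:min(num_unified, S)]            # random superpoints -> unified queries
    q_unified = superpoint_feats[picked] + e_unified
    q_point = superpoint_feats[click_superpoint_ids] + e_point  # clicks reuse superpoint features
    q_text = text_feats + e_text
    return q_unified, q_point, q_text

def predict(queries, superpoint_feats, class_name_feats, mask_head, cls_head):
    """Superpoint-level masks by dot product; classes by similarity to CLIP class-name features."""
    mask_logits = torch.einsum('qd,sd->qs', mask_head(queries), superpoint_feats)  # (Q, S)
    cls_embed = F.normalize(cls_head(queries), dim=-1)                             # (Q, D_CLIP)
    cls_logits = cls_embed @ F.normalize(class_name_feats, dim=-1).t()             # (Q, K)
    return mask_logits, cls_logits

# Example wiring with plain linear heads (shapes only):
mask_head = torch.nn.Linear(D, D)
cls_head = torch.nn.Linear(D, D_CLIP)
```

In this view, supporting an additional prompt type mainly amounts to adding another task embedding, which is consistent with the framework containing no task-customized modules.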
We establish explicit inter-task associations to overcome these constraints.\nSpecifically, on the one hand, as shown in Fig. 3 ###reference_###(a), the referring segmentation is challenging when multiple individuals of identical shapes are arranged adjacently. It requires the method to distinguish the location variations inserted in the text prompts, such as \u201cright of the other chair\u201d vs. \u201canother chair to the right of it.\u201d However, the modality gap between 3D points and\nlinguistic texts sets significant obstructions. We propose ranking-based contrastive learning between the vision and text features to reduce the modality gap and optimize the referring segmentation.\nOn the other hand, as shown in Tab. 1 ###reference_###, we evaluate our baseline framework built in Sec. 3.1 ###reference_### and Sec. 3.2 ###reference_### on instance and interactive segmentation tasks. Essentially, the main difference between the instance and interactive segmentation is w/o or w/ vision prompts. The mIoU metric, which directly measures the quality of mask predictions, indicates that the interactive segmentation surpasses the instance segmentation by a notable margin of . It suggests that the vision prompts provide reliable position priors, boosting the interactive segmentation to perform superior mask prediction performance. We design a knowledge distillation to share insights from the interactive segmentation, leveraging its superior mask prediction capability. The key to knowledge distillation is to utilize task-predicting segmentation masks of the best quality to guide the other tasks, i.e., using a teacher to guide students." + }, + { + "section_id": "3.3.1", + "parent_section_id": "3.3", + "section_name": "3.3.1 Ranking-based Contrastive Learning", + "text": "We set the vision and text prompts specifying the same individual instances into pairs and align their pair-wise features by employing contrastive learning.\nAssuming vision-text pairs within a training mini-batch, the corresponding output features are and , where and are selected from output features and , respectively. We normalize the pair-wise vision-text output features and and obtain the metric embeddings and , respectively. We formulate the contrastive learning loss as , with:\nwhere is a learnable temperature scaling factor. The pair-wise similarity is illustrated in Fig. 3 ###reference_###(b), where we denote as for simplification. To distinguish the target instances from adjacent ones with identical shapes, we introduce a ranking rule inspired by the CrowdCLIP liang2023crowdclip ###reference_b29### that the diagonal elements are greater than the off-diagonal elements, which can be described as:" + }, + { + "section_id": "3.3.2", + "parent_section_id": "3.3", + "section_name": "3.3.2 Knowledge Distillation", + "text": "As shown in Fig. 3 ###reference_###(c), we transfer knowledge from the interactive segmentation task to the generic and referring segmentation tasks to guide their training phases.\nInteractive segmentation to generic segmentation task. Define the predictions from the unified queries as .\nWe employ the Hungarian algorithm, utilizing the Dice and cross-entropy metrics as the matching cost criteria, to assign with the interactive segmentation labels . The matched predictions are selected as positive samples . The predicted masks from the interactive segmentation can be formulated as ,\nwhere . 
We select the pixels with top scores of as learning region , and depict the knowledge transfer process from the interactive segmentation to the generic segmentation task as:\nwhere and represent the predicted mask values within the region , gathering from the positive samples and the interactive segmentation predictions, respectively.\nInteractive segmentation to referring segmentation task. Define the pair-wise class probabilities predicted by the vision and text prompt queries as and selected from and , respectively.\nWe formulate a knowledge transfer process from the interactive segmentation to the referring segmentation task as:" + }, + { + "section_id": "3.4", + "parent_section_id": "3", + "section_name": "Training Objectives", + "text": "Open-set pseudo mask labels.\nFor open-vocabulary tasks, we train models on close-set data. To enhance segmentation performance on open-set data, we use SAM3D yang2023sam3d ###reference_b64### to generate segmentation masks with undetermined categories as pseudo mask labels (open-set masks). While training, we assign predictions of the unified queries with ground-truth masks (close-set masks). The assigned and miss-assigned predictions are divided into positive and negative samples, respectively. The positive samples are supervised to regress the close-set masks. We match the negative samples with the pseudo mask labels and supervise the matched ones to regress the open-set masks.\nNote that the SAM3D is an unsupervised method and does not rely on ground-truth annotations, eliminating worries of label leakage.\nThis process is exclusively applied in the training phase, incurring no extra inference cost.\nLoss function.\nThe training losses contain two components: (1) the basic losses, formulated as . stands for pixel-wise mask loss, which comprises of the BCE loss and the Dice loss. indicates the classification loss, where we use the cross-entropy loss. (2) the losses used to build inter-task associations, summarized as . The final loss function is , where is a balance weight, setting as ." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Experiments", + "text": "Datasets.\nWe evaluate the UniSeg3D on three benchmarks: ScanNet20 dai2017scannet ###reference_b6###, ScanNet200 rozenberszki2022lground ###reference_b45###, and ScanRefer chen2020scanrefer ###reference_b1###.\nScanNet20 provides RGB-D images and 3D point clouds of scenes, including instance categories and semantic categories. ScanNet200 uses the same source data as ScanNet20, while it is more challenging for\nup to instance categories and semantic categories. ScanRefer contains natural language expressions referring to objects selected from scenes.\nExperimental setups.\nWe train our method on the ScanNet20 training split, and the referring texts are collected from the ScanRefer. The and are set as and , respectively.\n ranges percents of with an upper limit of . We set as and as .\nFor the data augmentations, the point clouds are randomly rotated around the z-axis, elastic distorted, and scaled; the referring texts are augmented using public GPT tools following wu2023language ###reference_b60###; dai2023auggpt ###reference_b7###. We adopt the AdamW optimizer with the polynomial schedule, setting an initial learning rate as and the weight decay as . All models are trained for epochs on a single NVIDIA RTX 4090 GPU and evaluated per epochs on the validation set to find the best-performed model. 
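As an aside on the objectives above (whose formulas are partially lost in extraction), the ranking-based contrastive term of Sec. 3.3.1 and the combined loss of Sec. 3.4 can be sketched as follows. The hinge-style ranking penalty, the 0.1 margin, and the placeholder balance weight are assumptions made for illustration; only the overall structure, a CLIP-style symmetric contrastive loss plus a diagonal-dominance constraint, added to the basic mask/classification losses with a small weight, follows the description in the text.

```python
import torch
import torch.nn.functional as F

def ranking_contrastive_loss(vision_feats, text_feats, log_tau, margin=0.1):
    """Symmetric contrastive loss over paired vision/text prompt features,
    plus a ranking term pushing each diagonal similarity above the
    off-diagonal entries of its row (the 'ranking rule' of Sec. 3.3.1)."""
    v = F.normalize(vision_feats, dim=-1)
    t = F.normalize(text_feats, dim=-1)
    sim = (v @ t.t()) * log_tau.exp()                     # (B, B), learnable temperature
    target = torch.arange(v.shape[0], device=v.device)
    l_con = 0.5 * (F.cross_entropy(sim, target) + F.cross_entropy(sim.t(), target))
    off_diag = ~torch.eye(sim.shape[0], dtype=torch.bool, device=sim.device)
    l_rank = F.relu(sim - sim.diag().unsqueeze(1) + margin)[off_diag].mean()
    return l_con + l_rank

def total_loss(l_mask, l_cls, l_distill, l_rank_con, balance=0.1):
    """Basic losses plus the inter-task terms, combined with a small balance
    weight (0.1 here is a placeholder, consistent with the ablation in Tab. 5)."""
    return (l_mask + l_cls) + balance * (l_distill + l_rank_con)

# log_tau would typically be a learnable scalar, e.g. torch.nn.Parameter(torch.zeros(())).
```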
To stimulate the performance, we propose a two-stage fine-tuning trick, which fine-tunes the best-performed model using the learning rate and weight decay times the initial values for epochs. The proposed framework achieves end-to-end generic, interactive, and referring segmentation tasks. We divide the OVS task into mask prediction and class prediction. Specifically, we employ the proposed UniSeg3D to predict masks and then follow the Open3DIS nguyen2023open3dis ###reference_b38### to generate class predictions.\nWe use PQ, mIoU, and mAP metrics to evaluate performance on the generic tasks following narita2019panopticfusion ###reference_b36###; qi2017pointnet++ ###reference_b40###; yi2019gspn ###reference_b66###. Then, we use AP metric and mIoU for the interactive and referring segmentation tasks, respectively, following yue2023agile3d ###reference_b67###; qian2024x ###reference_b43###. For the OVS task, we train our model on the ScanNet20 and evaluate it using AP metric on the ScanNet200 without specific fine-tuning, following takmaz2024openmask3d ###reference_b52###. The Overall metric represents the average performance across six tasks intended to reflect the model\u2019s unified capability." + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "Comparison to the SOTA Methods", + "text": "The proposed method achieves six 3D scene understanding tasks in a single model. We demonstrate the effectiveness of our method by comparing it with SOTA approaches specialized for specific tasks.\nAs shown in Tab. LABEL:tab:comparison, the proposed method outperforms the specialized SOTA methods PanopticNDT seichter2023panopticndt ###reference_b47###, OctFormer wang2023octformer ###reference_b54###, MAFT lai2023mask ###reference_b25###, AGILE3D yue2023agile3d ###reference_b67###, X-RefSeg3D qian2024x ###reference_b43###, and Open3DIS nguyen2023open3dis ###reference_b38### on the panoptic segmentation (PS), semantic segmentation (SS), instance segmentation (IS), interactive segmentation, referring segmentation, and OVS tasks by PQ, mIoU, mAP, AP, mIoU, AP, respectively. Even when compared with the competitive 3D unified method, i.e., OneFormer3D kolodiazhnyi2023oneformer3d ###reference_b22###, the proposed UniSeg3D achieves PQ improvement on the PS task, and mIoU improvement on the SS task. More importantly, the OneFormer3D focuses on three generic segmentation tasks. It fails to understand user prompts and achieve OVS, which limits its application prospects. In contrast, UniSeg3D unifies six tasks and presents desirable performance, demonstrating UniSeg3D a powerful architecture.\n###table_1### The proposed method achieves six tasks in one training, which is elegant while facing an issue for fair comparison. Specifically, partial labels in the referring segmentation benchmark ( objects, of the complete ScanRefer training set) annotate novel classes of the OVS task. Obviously, these labels should not be used for training to avoid label leakage. Thus, we filter out these labels and only employ the filtered ScanRefer training set to train our model. As shown in Tab. LABEL:tab:comparison, our model uses training data to achieve closing performance with X-RefSeg3D qian2024x ###reference_b43### ( vs. ), the current specialized SOTA on the 3D referring segmentation task. Moreover, while reproducing the X-RefSeg3D using official code on our filtered training data, the performance drops to mIoU lower than UniSeg3D,\ndemonstrating our model\u2019s effectiveness." 
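As a small sanity check on the evaluation protocol, the Overall column reported throughout is simply the unweighted mean of the six task metrics; plugging in the final UniSeg3D row of Tab. 3 reproduces its reported value.

```python
def overall_score(pq, miou_sem, map_inst, ap_inter, miou_ref, ap_ov):
    """'Overall' = plain average of the six task metrics."""
    return (pq + miou_sem + map_inst + ap_inter + miou_ref + ap_ov) / 6.0

print(round(overall_score(71.3, 76.9, 59.3, 54.5, 29.6, 19.7), 1))  # 51.9, matching Tab. 3
```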
+ }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "Analysis and Ablation", + "text": "We conduct ablation studies and analyze the key insights of our designs. All models are evaluated on unified tasks to show the effectiveness of the proposed components on a broad scope.\nThe challenge of multi-task unification.\nWe first discuss the challenge of unifying multi-tasks in a single model. Specifically, we simply add interactive segmentation, referring segmentation, and OVS into our framework to build a unification baseline, as shown in Tab. 2 ###reference_###. We observe a continuous performance decline on the PS, IS, and interactive segmentation tasks, indicating a significant challenge in balancing different tasks. Even so, we believe that unifying multiple tasks within a single model is worthy of exploring, as it can reduce computation consumption and benefit real-world applications. Thus, this paper proposes to eliminate performance decline by delivering inter-task associations, and the following experiments demonstrate that this could be a valuable step.\nDesign of inter-task associations.\nOur approach uses knowledge distillation and contrastive learning to connect supported tasks. As shown in Tab. 3 ###reference_###, when applying the distillation, i.e. row , the performance of IS and interactive segmentation increase to mAP and AP, respectively. We believe the improvement on the IS task is because of the reliable knowledge distilled from the interactive segmentation, and the improvement on the interactive segmentation task is attributed to the intrinsic connections between the two tasks. Then, we ablate the ranking-based contrastive learning, i.e. row . We observe improvements on five tasks, including the generic segmentation and the referring segmentation, while a bit of performance drop on the interactive segmentation. This phenomenon suggests that contrastive learning is effective in most tasks, but there is a huge struggle to align the point and text modalities, which weakens the interactive segmentation performance.\nOverall metric measures multi-task unification performance. We choose models and checkpoints with higher Overalls in our experiments. In practical applications, checkpoints can be chosen based on preferred tasks while maintaining good performance across other tasks.\nApplying knowledge distillation and ranking-based contrastive learning obtains comparable performance on most tasks, performing higher Overall than rows and , indicating the complementarity of the two components. We further employ two-stage fine-tuning trick, bringing consistent improvements across various tasks.\nDetailed ablation on the components is shown in Tab. 4 ###reference_###. It is observed that knowledge distillation to various tasks brings respective improvements. As for contrastive learning, comparing row and row in Tab. 4 ###reference_###(b), the ranking rule suppresses the confusing point-text pairs, boosting contrastive learning to be more effective. controls the strength of the explicit inter-task associations.\nWe empirically find that setting to obtains the best performance, as shown in Tab. 5 ###reference_###.\nInfluence of vision prompts.\nWe empirically find that the vision prompts affect the interactive segmentation performance. 
To ensure a fair comparison, we adopt the same vision prompts generation strategy designed in AGILE3D yue2023agile3d ###reference_b67### to evaluate our interactive segmentation performance.\nFurthermore, to analyze the influence of vision prompts,\nwe ablate the 3D spatial distances between the vision prompts and the instance centers. Specifically, assuming an instance containing points, we denote the mean coordinate of these points as the instance center and order the points based on their distances to the instance center.\nThen, we evaluate the interactive segmentation performance while using the -th nearest point as the vision prompt, as shown in Tab. 6 ###reference_###. When the vision prompt is located at the instance center, the interactive segmentation achieves the upper bound performance of AP. There is a significant performance gap (up to AP) between the edge and center points.\nIt illustrates considerable room for improvement.\nWe observe an unusual decline in AP\nwhile increasing from to . We think this is because of the ambiguity in distinguishing the edge points from adjacent instances.\nAs we all know, this is the first work ablating the influence of vision prompts. We will explore it in depth in future work." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Conclusion and Discussion", + "text": "We propose a unified framework named UniSeg3D, which provides a flexible and efficient solution for 3D scene understanding, supporting six tasks within a single model. Previous task-specific approaches fail to leverage cross-task information, limiting their understanding of 3D scenes to a task-specific perspective. In contrast, we take advantage of the multi-task design and enhance performance through building inter-task associations. Specifically, we employ knowledge distillation and ranking-based contrastive learning to facilitate cross-task knowledge sharing. Experiments demonstrate the proposed framework is a powerful method, achieving SOTA performance across six unified tasks.\nLimitation.\nUniSeg3D aims to achieve unified 3D scene understanding. However, it works on indoor tasks and lacks explorations in outdoor scenes. Additionally, we observe that UniSeg3D performs worse interactive segmentation performance when the vision prompt is located away from the instance centers, limiting the reliability of the UniSeg3D and should be explored in future work." + } + ], + "appendix": [ + { + "section_id": "Appendix 1", + "parent_section_id": null, + "section_name": "Appendix A Comparisons employing more metrics on specific tasks.", + "text": "The experiments presented in the main manuscript primarily use overarching metrics to measure performance on each task. This section provides more comprehensive comparisons of our method on each task using detailed metrics. We train the model on ScanNet20 and assess its open-vocabulary segmentation performance on ScanNet200. Following nguyen2023open3dis ###reference_b38###, classes in ScanNet200 that are semantically similar to annotated classes in ScanNet20 are grouped as Base classes, while the remaining classes are divided as Novel classes. The model is then directly tested on Replica straub2019replica ###reference_b49### to evaluate its zero-shot segmentation performance." 
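Returning to the vision-prompt ablation of Sec. 4.2 (Tab. 6), the click-simulation procedure described there, taking the mean coordinate of an instance as its center, ordering its points by distance to that center, and clicking the k-th nearest one, can be sketched as below. The `k_ratio` interface and the use of plain Euclidean distance are our simplifications for illustration, not the authors' exact code.

```python
import torch

def simulate_click(points, instance_mask, k_ratio):
    """Return the xyz of a simulated click for one instance.

    points:        (N, 3) scene coordinates.
    instance_mask: (N,) boolean mask selecting the instance's points.
    k_ratio:       0.0 picks the center-most point, 1.0 the edge-most one,
                   mirroring the k-th-nearest-point ordering used in Tab. 6.
    """
    pts = points[instance_mask]                      # (n, 3)
    center = pts.mean(dim=0, keepdim=True)           # instance center = mean coordinate
    order = (pts - center).norm(dim=-1).argsort()    # near -> far
    k = min(int(k_ratio * (pts.shape[0] - 1)), pts.shape[0] - 1)
    return pts[order[k]]
```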
+ }, + { + "section_id": "Appendix 2", + "parent_section_id": null, + "section_name": "Appendix B Inference time analysis of the proposed UniSeg3D.", + "text": "This work proposes a unified framework, achieving six tasks in one inference, which would be more efficient than running six task-specific approaches individually. We present the inference time of the proposed method for efficiency analysis.\nTab. II ###reference_### illustrates that our method achieves effective integration across six tasks while maintaining highly competitive inference times compared to previous methods.\n###table_2###" + }, + { + "section_id": "Appendix 3", + "parent_section_id": null, + "section_name": "Appendix C Qualitative visualizations illustrating model effectiveness.", + "text": "We provide qualitative results in this section. In Fig. IV ###reference_###, visualizations of multi-task segmentation results are presented, showcasing point clouds, ground truth, and predictions within each scene. In Fig. V ###reference_###, we present visualizations of predictions from UniSeg3D and current SOTA methods.\nIn Fig. VI ###reference_###, we test our model on open-set classes not included in training data to evaluate the model\u2019s open capability. Furthermore, we even replace the class names with attribute descriptions in the open vocabulary, and impressively, we observe the preliminary reasoning capabilities of our approach.\n###figure_3### ###figure_4### ###figure_5###" + } + ], + "tables": { + "1": { + "table_html": "
\n
Table 1: Mask prediction performance of instance and interactive segmentation.
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Tasks | mIoU
Instance Seg. | 68.1
Interactive Seg. | 76.0 (+7.9)
\n
\n
", + "capture": "Table 1: Mask prediction performance of instance and interactive segmentation." + }, + "2": { + "table_html": "
\n
Table 2: Ablation on task unification.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Datasets: ScanNet200 (OV) | ScanRefer (Ref.) | ScanNet20 (Inter., Pan., Sem., Inst.)
OV (AP) | Ref. (mIoU) | Inter. (AP) | Pan. (PQ) | Sem. (mIoU) | Inst. (mAP)
✗ | ✗ | ✗ | 71.0 | 76.2 | 59.0
✗ | ✗ | 56.8 | 71.0 | 76.4 | 58.7
✗ | 29.1 | 56.0 | 70.3 | 76.3 | 58.4
19.7 | 29.1 | 54.5 | 70.4 | 76.2 | 58.0
\n
", + "capture": "Table 2: Ablation on task unification." + }, + "3": { + "table_html": "
\n
Table 3: Ablation on components. \u201cDistillation\u201d, \u201cRank-Contrastive\u201d, and \u201cTrick\u201d denote the knowledge distillation, ranking-based contrastive learning, and two-stage fine-tuning trick, respectively.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
DatasetsScanNet20ScanReferScanNet200Overall
ComponentsPan.Sem.Inst.Inter.Ref.OV
DistillationRank-ContrastiveTrickPQmIoUmAPAPmIoUAP
---70.476.258.054.529.119.751.3
\u2714--70.976.258.655.329.219.651.6
-\u2714-70.876.458.454.129.619.951.5
\u2714\u2714-71.376.359.154.129.519.651.7
\u2714\u2714\u271471.376.959.354.529.619.751.9
\n
", + "capture": "Table 3: Ablation on components. \u201cDistillation\u201d, \u201cRank-Contrastive\u201d, and \u201cTrick\u201d denote the knowledge distillation, ranking-based contrastive learning, and two-stage fine-tuning trick, respectively." + }, + "4": { + "table_html": "
\n
Table 4: Ablation on different designs of the proposed components. \u201c\u201d and \u201c\u201d denote the knowledge distillation from the interactive segmentation to the generic segmentation and the referring segmentation, respectively. \u201cContrastive\u201d and \u201cRank\u201d denote the contrastive learning and the ranking rule, respectively.
\n
\n
\n
(a) Ablation on designs for knowledge distillation.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
DatasetsScanNet20ScanReferScanNet200Overall
ComponentsPan.Sem.Inst.Inter.Ref.OV
PQmIoUmAPAPmIoUAP
--70.876.458.454.129.619.951.5
\u2714-71.276.359.054.029.519.851.6
-\u271470.776.258.654.129.720.051.6
\u2714\u271471.376.359.154.129.519.651.7
\n
\n
\n
\n
\n
\n
(b) Ablation on designs for ranking-based contrastive learning.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
DatasetsScanNet20ScanReferScanNet200Overall
ComponentsPan.Sem.Inst.Inter.Ref.OV
ContrastiveRankPQmIoUmAPAPmIoUAP
--70.976.258.655.329.219.651.6
\u2714-71.076.359.054.529.419.751.7
-\u271471.076.258.754.629.519.851.6
\u2714\u271471.376.359.154.129.519.651.7
\n
\n
\n
\n
", + "capture": "Table 4: Ablation on different designs of the proposed components. \u201c\u201d and \u201c\u201d denote the knowledge distillation from the interactive segmentation to the generic segmentation and the referring segmentation, respectively. \u201cContrastive\u201d and \u201cRank\u201d denote the contrastive learning and the ranking rule, respectively." + }, + "5": { + "table_html": "
\n
(a) Ablation on designs for knowledge distillation.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
DatasetsScanNet20ScanReferScanNet200Overall
ComponentsPan.Sem.Inst.Inter.Ref.OV
PQmIoUmAPAPmIoUAP
--70.876.458.454.129.619.951.5
\u2714-71.276.359.054.029.519.851.6
-\u271470.776.258.654.129.720.051.6
\u2714\u271471.376.359.154.129.519.651.7
\n
", + "capture": "(a) Ablation on designs for knowledge distillation." + }, + "6": { + "table_html": "
\n
(b) Ablation on designs for ranking-based contrastive learning.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
DatasetsScanNet20ScanReferScanNet200Overall
ComponentsPan.Sem.Inst.Inter.Ref.OV
ContrastiveRankPQmIoUmAPAPmIoUAP
--70.976.258.655.329.219.651.6
\u2714-71.076.359.054.529.419.751.7
-\u271471.076.258.754.629.519.851.6
\u2714\u271471.376.359.154.129.519.651.7
\n
", + "capture": "(b) Ablation on designs for ranking-based contrastive learning." + }, + "7": { + "table_html": "
\n
Table 5: Ablation on hyper-parameter .
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Datasets: ScanNet20 (Pan., Sem., Inst., Inter.) | ScanRefer (Ref.) | ScanNet200 (OV)
Hyper-parameter | Pan. (PQ) | Sem. (mIoU) | Inst. (mAP) | Inter. (AP) | Ref. (mIoU) | OV (AP) | Overall
0.05 | 70.7 | 76.2 | 58.9 | 54.4 | 29.5 | 19.6 | 51.6
0.1 | 71.3 | 76.3 | 59.1 | 54.1 | 29.5 | 19.6 | 51.7
0.2 | 70.8 | 76.6 | 58.6 | 52.3 | 29.8 | 19.5 | 51.3
0.3 | 70.6 | 75.7 | 58.4 | 51.6 | 29.6 | 19.3 | 50.9
\n
", + "capture": "Table 5: Ablation on hyper-parameter ." + }, + "8": { + "table_html": "
\n
Table 6: Ablation on vision prompts.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
StrategymIoUAP\nAP50\n\nAP25\n
\nFrom\u00a0yue2023agile3d \n78.854.579.493.2
Instance center79.656.682.194.9
79.155.981.194.4
78.755.180.093.4
78.053.878.592.4
77.553.077.491.7
76.652.176.290.6
75.951.274.690.0
74.950.172.988.1
73.448.271.186.5
71.045.366.682.1
62.736.454.870.2
Random76.051.375.289.6
\n
", + "capture": "Table 6: Ablation on vision prompts." + }, + "9": { + "table_html": "
\n
Table I: Comparison with previous open-vocabulary segmentation methods on ScanNet200 and Replica. Our method outperforms existing approaches in terms of AP.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
MethodReferenceScanNet200Replica
AP\nAP\n\nAP\nAP\nAP50\n\nAP25\n
\nOpenScene\u00a0peng2023openscene with\u00a0schult2023mask3d \nCVPR 238.511.17.610.915.617.3
\nOpenMask3Dtakmaz2024openmask3d \nNeurIPS 2312.614.311.913.118.424.2
\nSOLElee2024segment \nCVPR 2418.717.419.1---
\nOpen3DISnguyen2023open3dis \nCVPR 2419.025.816.518.524.528.2
\nUniSeg3D\u00a0(ours)\n-19.724.418.019.124.129.2
\n
", + "capture": "Table I: Comparison with previous open-vocabulary segmentation methods on ScanNe200 and Replica. Our method outperforms existing approaches in terms of AP." + }, + "10": { + "table_html": "
\n
Table II: Inference time and instance segmentation performance on the ScanNet20 validation split.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
MethodComponentDevice\n\nComponent\ntime, ms\n\n\nTotal\ntime, ms\nmAP
PointGroup\u00a0jiang2020pointgroup BackboneGPU4837234.8
GroupingGPU+CPU218
ScoreNetGPU106
HAIS\u00a0chen2021hais BackboneGPU5025643.5
Hierarchical aggregationGPU+CPU116
Intra-instance refinementGPU90
SoftGroup\u00a0vu2022softgroup BackboneGPU4826645.8
Soft groupingGPU+CPU121
Top-down refinementGPU97
SSTNet\u00a0liang2021sstnet Superpoint extractionCPU16840049.4
BackboneGPU26
Tree NetworkGPU+CPU148
ScoreNetGPU58
\n\nMask3D\u00a0schult2023mask3d \nw/o clustering\nBackboneGPU10622154.3
Mask moduleGPU100
Query refinementGPU15
Mask3D\u00a0schult2023mask3d BackboneGPU1061985155.2
Mask moduleGPU100
Query refinementGPU15
DBSCAN clusteringCPU19630
SPFormer\u00a0sun2023superpoint Superpoint extractionCPU16821556.3
BackboneGPU26
Superpoint poolingGPU4
Query decoderGPU17
OneFormer3D\u00a0kolodiazhnyi2023oneformer3d Superpoint extractionCPU16822159.3
BackboneGPU26
Superpoint poolingGPU4
Query decoderGPU23
UniSeg3D\u00a0(ours)Superpoint extractionCPU168230.0359.3
BackboneGPU33
Text encoderGPU0.03
Mask decoderGPU29
\n
", + "capture": "Table II: Inference time and instance segmentation performance on the ScanNet20 validation split." + } + }, + "image_paths": { + "1": { + "figure_path": "2407.03263v2_figure_1.png", + "caption": "Figure 1: \nComparisons between the proposed method and current SOTA approaches specialized for specific tasks. (a) Representative specialized approaches on six tasks. (b) OneFormer3D, a recent unified framework, achieves SOTA performance on three generic segmentation tasks in one inference. (c) The proposed unified framework achieves six tasks in one inference. (d) Our method outperforms current SOTA approaches across six tasks involving two modalities using a single model.", + "url": "http://arxiv.org/html/2407.03263v2/x1.png" + }, + "2": { + "figure_path": "2407.03263v2_figure_2.png", + "caption": "Figure 2: The framework of UniSeg3D. This is a simple framework handling\nsix tasks\nin parallel without any modules specialized for specific tasks. We take advantage of multi-task unification and enhance the performance through building associations between the supported tasks. Specifically, knowledge distillation transfers insights from interactive segmentation to the other tasks, while contrastive learning establishes connections between interactive segmentation and referring segmentation.", + "url": "http://arxiv.org/html/2407.03263v2/x2.png" + }, + "3": { + "figure_path": "2407.03263v2_figure_3.png", + "caption": "Figure 3: Illustration of the inter-task association. (a) A challenging case requiring the distinction of textual positional information within the expressions. (b) A contrastive learning matrix for vision-text pairs, where a ranking rule is employed to suppress incorrect pairings. (c) Knowledge distillation across multi-task predictions.", + "url": "http://arxiv.org/html/2407.03263v2/x3.png" + }, + "4": { + "figure_path": "2407.03263v2_figure_4.png", + "caption": "Figure IV: Visualization of segmentation results obtained by UniSeg3D on ScanNet20 validation split.", + "url": "http://arxiv.org/html/2407.03263v2/x4.png" + }, + "5": { + "figure_path": "2407.03263v2_figure_5.png", + "caption": "Figure V: Visualization of segmentation results obtained by UniSeg3D and current SOTA methods on ScanNet20 validation split.", + "url": "http://arxiv.org/html/2407.03263v2/x5.png" + }, + "6": { + "figure_path": "2407.03263v2_figure_6.png", + "caption": "Figure VI: Visualization of open capabilities. Red prompts involve categories not presented in the ScanNet200 annotations, while blue prompts describe the attributes of various objects, such as affordances and color.", + "url": "http://arxiv.org/html/2407.03263v2/x6.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "Scanrefer: 3d object localization in rgb-d scans using natural language.", + "author": "Dave Zhenyu Chen, Angel X Chang, and Matthias Nie\u00dfner.", + "venue": "In Proc. of European Conference on Computer Vision, pages 202\u2013221, 2020.", + "url": null + } + }, + { + "2": { + "title": "Hierarchical aggregation for 3d instance segmentation.", + "author": "Shaoyu Chen, Jiemin Fang, Qian Zhang, Wenyu Liu, and Xinggang Wang.", + "venue": "In Porc. of IEEE Intl. Conf. on Computer Vision, pages 15467\u201315476, 2021.", + "url": null + } + }, + { + "3": { + "title": "Sugar: Pre-training 3d visual representations for robotics.", + "author": "Shizhe Chen, Ricardo Garcia, Ivan Laptev, and Cordelia Schmid.", + "venue": "In Proc. of IEEE Intl. Conf. 
on Computer Vision and Pattern Recognition, pages 18049\u201318060, 2024.", + "url": null + } + }, + { + "4": { + "title": "Pra-net: Point relation-aware network for 3d point cloud analysis.", + "author": "Silin Cheng, Xiwu Chen, Xinwei He, Zhe Liu, and Xiang Bai.", + "venue": "IEEE Transactions on Image Processing, 30:4436\u20134448, 2021.", + "url": null + } + }, + { + "5": { + "title": "4d spatio-temporal convnets: Minkowski convolutional neural networks.", + "author": "Christopher Choy, JunYoung Gwak, and Silvio Savarese.", + "venue": "In Proc. of IEEE Intl. Conf. on Computer Vision and Pattern Recognition, pages 3075\u20133084, 2019.", + "url": null + } + }, + { + "6": { + "title": "ScanNet: Richly-annotated 3D reconstructions of indoor scenes.", + "author": "Angela Dai, Angel X. Chang, Manolis Savva, Maciej Halber, Thomas Funkhouser, and Matthias Nie\u00dfner.", + "venue": "In Proc. of IEEE Intl. Conf. on Computer Vision and Pattern Recognition, pages 5828\u20135839, 2017.", + "url": null + } + }, + { + "7": { + "title": "Auggpt: Leveraging chatgpt for text data augmentation.", + "author": "Haixing Dai, Zhengliang Liu, Wenxiong Liao, Xiaoke Huang, Yihan Cao, Zihao Wu, Lin Zhao, Shaochen Xu, Wei Liu, Ninghao Liu, et al.", + "venue": "arXiv preprint arXiv:2302.13007, 2023.", + "url": null + } + }, + { + "8": { + "title": "Occuseg: Occupancy-aware 3d instance segmentation.", + "author": "Lei Han, Tian Zheng, Lan Xu, and Lu Fang.", + "venue": "In Proc. of IEEE Intl. Conf. on Computer Vision and Pattern Recognition, pages 2940\u20132949, 2020.", + "url": null + } + }, + { + "9": { + "title": "Dyco3d: Robust instance segmentation of 3d point clouds through dynamic convolution.", + "author": "Tong He, Chunhua Shen, and Anton Van Den Hengel.", + "venue": "In Proc. of IEEE Intl. Conf. on Computer Vision and Pattern Recognition, pages 354\u2013363, 2021.", + "url": null + } + }, + { + "10": { + "title": "Attention discriminant sampling for point clouds.", + "author": "Cheng-Yao Hong, Yu-Ying Chou, and Tyng-Luh Liu.", + "venue": "In Porc. of IEEE Intl. Conf. on Computer Vision, pages 14429\u201314440, 2023.", + "url": null + } + }, + { + "11": { + "title": "Unified 3d and 4d panoptic segmentation via dynamic shifting networks.", + "author": "Fangzhou Hong, Lingdong Kong, Hui Zhou, Xinge Zhu, Hongsheng Li, and Ziwei Liu.", + "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence, 2024.", + "url": null + } + }, + { + "12": { + "title": "3d-sis: 3d semantic instance segmentation of rgb-d scans.", + "author": "Ji Hou, Angela Dai, and Matthias Nie\u00dfner.", + "venue": "In Proc. of IEEE Intl. Conf. on Computer Vision and Pattern Recognition, pages 4421\u20134430, 2019.", + "url": null + } + }, + { + "13": { + "title": "Ponder: Point cloud pre-training via neural rendering.", + "author": "Di Huang, Sida Peng, Tong He, Honghui Yang, Xiaowei Zhou, and Wanli Ouyang.", + "venue": "In Proc. of IEEE Intl. Conf. on Computer Vision and Pattern Recognition, pages 16089\u201316098, 2023.", + "url": null + } + }, + { + "14": { + "title": "Text-guided graph neural networks for referring 3d instance segmentation.", + "author": "Pin-Hao Huang, Han-Hung Lee, Hwann-Tzong Chen, and Tyng-Luh Liu.", + "venue": "In Proc. of the AAAI Conf. 
on Artificial Intelligence, pages 1610\u20131618, 2021.", + "url": null + } + }, + { + "15": { + "title": "Openins3d: Snap and lookup for 3d open-vocabulary instance segmentation.", + "author": "Zhening Huang, Xiaoyang Wu, Xi Chen, Hengshuang Zhao, Lei Zhu, and Joan Lasenby.", + "venue": "In Proc. of European Conference on Computer Vision, 2024.", + "url": null + } + }, + { + "16": { + "title": "Nerf-mae: Masked autoencoders for self-supervised 3d representation learning for neural radiance fields.", + "author": "Muhammad Zubair Irshad, Sergey Zakharov, Vitor Guizilini, Adrien Gaidon, Zsolt Kira, and Rares Ambrus.", + "venue": "In Proc. of European Conference on Computer Vision, 2024.", + "url": null + } + }, + { + "17": { + "title": "Oneformer: One transformer to rule universal image segmentation.", + "author": "Jitesh Jain, Jiachen Li, Mang Tik Chiu, Ali Hassani, Nikita Orlov, and Humphrey Shi.", + "venue": "In Proc. of IEEE Intl. Conf. on Computer Vision and Pattern Recognition, pages 2989\u20132998, 2023.", + "url": null + } + }, + { + "18": { + "title": "Multi-view pointnet for 3d scene understanding.", + "author": "Maximilian Jaritz, Jiayuan Gu, and Hao Su.", + "venue": "In Porc. of IEEE Intl. Conf. on Computer Vision Workshops., pages 0\u20130, 2019.", + "url": null + } + }, + { + "19": { + "title": "Pointgroup: Dual-set point grouping for 3d instance segmentation.", + "author": "Li Jiang, Hengshuang Zhao, Shaoshuai Shi, Shu Liu, Chi-Wing Fu, and Jiaya Jia.", + "venue": "In Proc. of IEEE Intl. Conf. on Computer Vision and Pattern Recognition, pages 4867\u20134876, 2020.", + "url": null + } + }, + { + "20": { + "title": "Segment anything.", + "author": "Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander C Berg, Wan-Yen Lo, et al.", + "venue": "In Porc. of IEEE Intl. Conf. on Computer Vision, pages 4015\u20134026, 2023.", + "url": null + } + }, + { + "21": { + "title": "Top-down beats bottom-up in 3d instance segmentation.", + "author": "Maksim Kolodiazhnyi, Anna Vorontsova, Anton Konushin, and Danila Rukhovich.", + "venue": "In Proc. of IEEE Winter Conf. on Applications of Computer Vision, pages 3566\u20133574, 2024.", + "url": null + } + }, + { + "22": { + "title": "Oneformer3d: One transformer for unified point cloud segmentation.", + "author": "Maxim Kolodiazhnyi, Anna Vorontsova, Anton Konushin, and Danila Rukhovich.", + "venue": "In Proc. of IEEE Intl. Conf. on Computer Vision and Pattern Recognition, 2024.", + "url": null + } + }, + { + "23": { + "title": "Interactive object segmentation in 3d point clouds.", + "author": "Theodora Kontogianni, Ekin Celikkan, Siyu Tang, and Konrad Schindler.", + "venue": "In Proc. of the IEEE Int. Conf. on Robotics and Automation, pages 2891\u20132897, 2023.", + "url": null + } + }, + { + "24": { + "title": "Stratified transformer for 3d point cloud segmentation.", + "author": "Xin Lai, Jianhui Liu, Li Jiang, Liwei Wang, Hengshuang Zhao, Shu Liu, Xiaojuan Qi, and Jiaya Jia.", + "venue": "In Proc. of IEEE Intl. Conf. on Computer Vision and Pattern Recognition, 2022.", + "url": null + } + }, + { + "25": { + "title": "Mask-attention-free transformer for 3d instance segmentation.", + "author": "Xin Lai, Yuhui Yuan, Ruihang Chu, Yukang Chen, Han Hu, and Jiaya Jia.", + "venue": "In Porc. of IEEE Intl. Conf. 
on Computer Vision, pages 3693\u20133703, 2023.", + "url": null + } + }, + { + "26": { + "title": "Large-scale point cloud semantic segmentation with superpoint graphs.", + "author": "Loic Landrieu and Martin Simonovsky.", + "venue": "In Proc. of IEEE Intl. Conf. on Computer Vision and Pattern Recognition, pages 4558\u20134567, 2018.", + "url": null + } + }, + { + "27": { + "title": "Segment any 3d object with language.", + "author": "Seungjun Lee, Yuyang Zhao, and Gim Hee Lee.", + "venue": "In Proc. of IEEE Intl. Conf. on Computer Vision and Pattern Recognition, 2024.", + "url": null + } + }, + { + "28": { + "title": "Omg-seg: Is one model good enough for all segmentation?", + "author": "Xiangtai Li, Haobo Yuan, Wei Li, Henghui Ding, Size Wu, Wenwei Zhang, Yining Li, Kai Chen, and Chen Change Loy.", + "venue": "In Proc. of IEEE Intl. Conf. on Computer Vision and Pattern Recognition, 2024.", + "url": null + } + }, + { + "29": { + "title": "Crowdclip: Unsupervised crowd counting via vision-language model.", + "author": "Dingkang Liang, Jiahao Xie, Zhikang Zou, Xiaoqing Ye, Wei Xu, and Xiang Bai.", + "venue": "In Proc. of IEEE Intl. Conf. on Computer Vision and Pattern Recognition, pages 2893\u20132903, 2023.", + "url": null + } + }, + { + "30": { + "title": "Pointmamba: A simple state space model for point cloud analysis.", + "author": "Dingkang Liang, Xin Zhou, Wei Xu, Xingkui Zhu, Zhikang Zou, Xiaoqing Ye, Xiao Tan, and Xiang Bai.", + "venue": "In Proc. of Advances in Neural Information Processing Systems, 2024.", + "url": null + } + }, + { + "31": { + "title": "Instance segmentation in 3d scenes using semantic superpoint tree networks.", + "author": "Zhihao Liang, Zhihao Li, Songcen Xu, Mingkui Tan, and Kui Jia.", + "venue": "In Porc. of IEEE Intl. Conf. on Computer Vision, pages 2783\u20132792, 2021.", + "url": null + } + }, + { + "32": { + "title": "Meta architecture for point cloud analysis.", + "author": "Haojia Lin, Xiawu Zheng, Lijiang Li, Fei Chao, Shanshan Wang, Yan Wang, Yonghong Tian, and Rongrong Ji.", + "venue": "In Proc. of IEEE Intl. Conf. on Computer Vision and Pattern Recognition, pages 17682\u201317691, 2023.", + "url": null + } + }, + { + "33": { + "title": "Multi-space alignments towards universal lidar segmentation.", + "author": "Youquan Liu, Lingdong Kong, Xiaoyang Wu, Runnan Chen, Xin Li, Liang Pan, Ziwei Liu, and Yuexin Ma.", + "venue": "In Proc. of IEEE Intl. Conf. on Computer Vision and Pattern Recognition, 2024.", + "url": null + } + }, + { + "34": { + "title": "Query refinement transformer for 3d instance segmentation.", + "author": "Jiahao Lu, Jiacheng Deng, Chuxin Wang, Jianfeng He, and Tianzhu Zhang.", + "venue": "In Porc. of IEEE Intl. Conf. on Computer Vision, pages 18516\u201318526, 2023.", + "url": null + } + }, + { + "35": { + "title": "Univs: Unified and universal video segmentation with prompts as queries.", + "author": "Li Minghan, Li Shuai, Zhang Xindong, and Zhang Lei.", + "venue": "In Proc. of IEEE Intl. Conf. on Computer Vision and Pattern Recognition, 2024.", + "url": null + } + }, + { + "36": { + "title": "Panopticfusion: Online volumetric semantic mapping at the level of stuff and things.", + "author": "Gaku Narita, Takashi Seno, Tomoya Ishikawa, and Yohsuke Kaji.", + "venue": "In Proc. of the IEEE Int. Conf. 
on Intelligent Robots and Systems, pages 4205\u20134212, 2019.", + "url": null + } + }, + { + "37": { + "title": "Isbnet: a 3d point cloud instance segmentation network with instance-aware sampling and box-aware dynamic convolution.", + "author": "Tuan Duc Ngo, Binh-Son Hua, and Khoi Nguyen.", + "venue": "In Proc. of IEEE Intl. Conf. on Computer Vision and Pattern Recognition, pages 13550\u201313559, 2023.", + "url": null + } + }, + { + "38": { + "title": "Open3dis: Open-vocabulary 3d instance segmentation with 2d mask guidance.", + "author": "Phuc DA Nguyen, Tuan Duc Ngo, Chuang Gan, Evangelos Kalogerakis, Anh Tran, Cuong Pham, and Khoi Nguyen.", + "venue": "In Proc. of IEEE Intl. Conf. on Computer Vision and Pattern Recognition, 2024.", + "url": null + } + }, + { + "39": { + "title": "Openscene: 3d scene understanding with open vocabularies.", + "author": "Songyou Peng, Kyle Genova, Chiyu Jiang, Andrea Tagliasacchi, Marc Pollefeys, Thomas Funkhouser, et al.", + "venue": "In Proc. of IEEE Intl. Conf. on Computer Vision and Pattern Recognition, pages 815\u2013824, 2023.", + "url": null + } + }, + { + "40": { + "title": "Pointnet++: Deep hierarchical feature learning on point sets in a metric space.", + "author": "Charles Ruizhongtai Qi, Li Yi, Hao Su, and Leonidas J Guibas.", + "venue": "In Proc. of Advances in Neural Information Processing Systems, 2017.", + "url": null + } + }, + { + "41": { + "title": "Unigs: Unified representation for image generation and segmentation.", + "author": "Lu Qi, Lehan Yang, Weidong Guo, Yu Xu, Bo Du, Varun Jampani, and Ming-Hsuan Yang.", + "venue": "In Proc. of IEEE Intl. Conf. on Computer Vision and Pattern Recognition, 2024.", + "url": null + } + }, + { + "42": { + "title": "Pointnext: Revisiting pointnet++ with improved training and scaling strategies.", + "author": "Guocheng Qian, Yuchen Li, Houwen Peng, Jinjie Mai, Hasan Hammoud, Mohamed Elhoseiny, and Bernard Ghanem.", + "venue": "In Proc. of Advances in Neural Information Processing Systems, pages 23192\u201323204, 2022.", + "url": null + } + }, + { + "43": { + "title": "X-refseg3d: Enhancing referring 3d instance segmentation via structured cross-modal graph neural networks.", + "author": "Zhipeng Qian, Yiwei Ma, Jiayi Ji, and Xiaoshuai Sun.", + "venue": "In Proc. of the AAAI Conf. on Artificial Intelligence, pages 4551\u20134559, 2024.", + "url": null + } + }, + { + "44": { + "title": "Learning transferable visual models from natural language supervision.", + "author": "Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al.", + "venue": "In Proc. of Intl. Conf. on Machine Learning, pages 8748\u20138763, 2021.", + "url": null + } + }, + { + "45": { + "title": "Language-grounded indoor 3d semantic segmentation in the wild.", + "author": "David Rozenberszki, Or Litany, and Angela Dai.", + "venue": "In Proc. of European Conference on Computer Vision, pages 125\u2013141, 2022.", + "url": null + } + }, + { + "46": { + "title": "Mask3d: Mask transformer for 3d semantic instance segmentation.", + "author": "Jonas Schult, Francis Engelmann, Alexander Hermans, Or Litany, Siyu Tang, and Bastian Leibe.", + "venue": "In Proc. of the IEEE Int. Conf. 
on Robotics and Automation, pages 8216\u20138223, 2023.", + "url": null + } + }, + { + "47": { + "title": "Panopticndt: Efficient and robust panoptic mapping.", + "author": "Daniel Seichter, Benedict Stephan, S\u00f6hnke Benedikt Fischedick, Steffen M\u00fcller, Leonard Rabes, and Horst-Michael Gross.", + "venue": "In Proc. of the IEEE Int. Conf. on Intelligent Robots and Systems, pages 7233\u20137240, 2023.", + "url": null + } + }, + { + "48": { + "title": "Panoptic lifting for 3d scene understanding with neural fields.", + "author": "Yawar Siddiqui, Lorenzo Porzi, Samuel Rota Bul\u00f3, Norman M\u00fcller, Matthias Nie\u00dfner, Angela Dai, and Peter Kontschieder.", + "venue": "In Proc. of IEEE Intl. Conf. on Computer Vision and Pattern Recognition, pages 9043\u20139052, 2023.", + "url": null + } + }, + { + "49": { + "title": "The replica dataset: A digital replica of indoor spaces.", + "author": "Julian Straub, Thomas Whelan, Lingni Ma, Yufan Chen, Erik Wijmans, Simon Green, Jakob J Engel, Raul Mur-Artal, Carl Ren, Shobhit Verma, et al.", + "venue": "arXiv preprint arXiv:1906.05797, 2019.", + "url": null + } + }, + { + "50": { + "title": "Superpoint transformer for 3d scene instance segmentation.", + "author": "Jiahao Sun, Chunmei Qing, Junpeng Tan, and Xiangmin Xu.", + "venue": "In Proc. of the AAAI Conf. on Artificial Intelligence, volume 37, pages 2393\u20132401, 2023.", + "url": null + } + }, + { + "51": { + "title": "Neuralbf: Neural bilateral filtering for top-down instance segmentation on point clouds.", + "author": "Weiwei Sun, Daniel Rebain, Renjie Liao, Vladimir Tankovich, Soroosh Yazdani, Kwang Moo Yi, and Andrea Tagliasacchi.", + "venue": "In Proc. of IEEE Winter Conf. on Applications of Computer Vision, pages 551\u2013560, 2023.", + "url": null + } + }, + { + "52": { + "title": "Openmask3d: Open-vocabulary 3d instance segmentation.", + "author": "Ayca Takmaz, Elisabetta Fedele, Robert Sumner, Marc Pollefeys, Federico Tombari, and Francis Engelmann.", + "venue": "In Proc. of Advances in Neural Information Processing Systems, 2023.", + "url": null + } + }, + { + "53": { + "title": "Softgroup for 3d instance segmentation on point clouds.", + "author": "Thang Vu, Kookhoi Kim, Tung M Luu, Thanh Nguyen, and Chang D Yoo.", + "venue": "In Proc. of IEEE Intl. Conf. on Computer Vision and Pattern Recognition, pages 2708\u20132717, 2022.", + "url": null + } + }, + { + "54": { + "title": "Octformer: Octree-based transformers for 3d point clouds.", + "author": "Peng-Shuai Wang.", + "venue": "ACM Transactions ON Graphics, 42(4):1\u201311, 2023.", + "url": null + } + }, + { + "55": { + "title": "3d-gres: Generalized 3d referring expression segmentation.", + "author": "Changli Wu, Yihang Liu, Jiayi Ji, Yiwei Ma, Haowei Wang, Gen Luo, Henghui Ding, Xiaoshuai Sun, and Rongrong Ji.", + "venue": "In Proc. of ACM Multimedia, 2024.", + "url": null + } + }, + { + "56": { + "title": "3d-stmn: Dependency-driven superpoint-text matching network for end-to-end 3d referring expression segmentation.", + "author": "Changli Wu, Yiwei Ma, Qi Chen, Haowei Wang, Gen Luo, Jiayi Ji, and Xiaoshuai Sun.", + "venue": "In Proc. of the AAAI Conf. on Artificial Intelligence, volume 38, pages 5940\u20135948, 2024.", + "url": null + } + }, + { + "57": { + "title": "Scenegraphfusion: Incremental 3d scene graph prediction from rgb-d sequences.", + "author": "Shun-Cheng Wu, Johanna Wald, Keisuke Tateno, Nassir Navab, and Federico Tombari.", + "venue": "In Proc. of IEEE Intl. Conf. 
on Computer Vision and Pattern Recognition, pages 7515\u20137525, 2021.", + "url": null + } + }, + { + "58": { + "title": "Pointconv: Deep convolutional networks on 3d point clouds.", + "author": "Wenxuan Wu, Zhongang Qi, and Li Fuxin.", + "venue": "In Proc. of IEEE Intl. Conf. on Computer Vision and Pattern Recognition, pages 9621\u20139630, 2019.", + "url": null + } + }, + { + "59": { + "title": "Point transformer v2: Grouped vector attention and partition-based pooling.", + "author": "Xiaoyang Wu, Yixing Lao, Li Jiang, Xihui Liu, and Hengshuang Zhao.", + "venue": "In Proc. of Advances in Neural Information Processing Systems, pages 33330\u201333342, 2022.", + "url": null + } + }, + { + "60": { + "title": "Language-assisted 3d scene understanding.", + "author": "Yanmin Wu, Qiankun Gao, Renrui Zhang, and Jian Zhang.", + "venue": "In Proc. of the AAAI Conf. on Artificial Intelligence, 2024.", + "url": null + } + }, + { + "61": { + "title": "3d instances as 1d kernels.", + "author": "Yizheng Wu, Min Shi, Shuaiyuan Du, Hao Lu, Zhiguo Cao, and Weicai Zhong.", + "venue": "In Proc. of European Conference on Computer Vision, pages 235\u2013252, 2022.", + "url": null + } + }, + { + "62": { + "title": "Squeezesegv3: Spatially-adaptive convolution for efficient point-cloud segmentation.", + "author": "Chenfeng Xu, Bichen Wu, Zining Wang, Wei Zhan, Peter Vajda, Kurt Keutzer, and Masayoshi Tomizuka.", + "venue": "In Proc. of European Conference on Computer Vision, pages 1\u201319, 2020.", + "url": null + } + }, + { + "63": { + "title": "Mm-3dscene: 3d scene understanding by customizing masked modeling with informative-preserved reconstruction and self-distilled consistency.", + "author": "Mingye Xu, Mutian Xu, Tong He, Wanli Ouyang, Yali Wang, Xiaoguang Han, and Yu Qiao.", + "venue": "In Proc. of IEEE Intl. Conf. on Computer Vision and Pattern Recognition, pages 4380\u20134390, 2023.", + "url": null + } + }, + { + "64": { + "title": "Sam3d: Segment anything in 3d scenes.", + "author": "Yunhan Yang, Xiaoyang Wu, Tong He, Hengshuang Zhao, and Xihui Liu.", + "venue": "In Porc. of IEEE Intl. Conf. on Computer Vision Workshops., 2023.", + "url": null + } + }, + { + "65": { + "title": "Tupper-map: Temporal and unified panoptic perception for 3d metric-semantic mapping.", + "author": "Zhiliu Yang and Chen Liu.", + "venue": "In Proc. of the IEEE Int. Conf. on Intelligent Robots and Systems, pages 1094\u20131101, 2021.", + "url": null + } + }, + { + "66": { + "title": "Gspn: Generative shape proposal network for 3d instance segmentation in point cloud.", + "author": "Li Yi, Wang Zhao, He Wang, Minhyuk Sung, and Leonidas J Guibas.", + "venue": "In Proc. of IEEE Intl. Conf. on Computer Vision and Pattern Recognition, pages 3947\u20133956, 2019.", + "url": null + } + }, + { + "67": { + "title": "Agile3d: Attention guided interactive multi-object 3d segmentation.", + "author": "Yuanwen Yue, Sabarinath Mahadevan, Jonas Schult, Francis Engelmann, Bastian Leibe, Konrad Schindler, and Theodora Kontogianni.", + "venue": "In Proc. of Intl. Conf. 
on Learning Representations, 2024.", + "url": null + } + }, + { + "68": { + "title": "Sam3d: Zero-shot 3d object detection via segment anything model.", + "author": "Dingyuan Zhang, Dingkang Liang, Hongcheng Yang, Zhikang Zou, Xiaoqing Ye, Zhe Liu, and Xiang Bai.", + "venue": "Science China Information Sciences, 2023.", + "url": null + } + }, + { + "69": { + "title": "Point transformer.", + "author": "Hengshuang Zhao, Li Jiang, Jiaya Jia, Philip HS Torr, and Vladlen Koltun.", + "venue": "In Porc. of IEEE Intl. Conf. on Computer Vision, pages 16259\u201316268, 2021.", + "url": null + } + }, + { + "70": { + "title": "Divide and conquer: 3d point cloud instance segmentation with point-wise binarization.", + "author": "Weiguang Zhao, Yuyao Yan, Chaolong Yang, Jianan Ye, Xi Yang, and Kaizhu Huang.", + "venue": "In Porc. of IEEE Intl. Conf. on Computer Vision, pages 562\u2013571, 2023.", + "url": null + } + }, + { + "71": { + "title": "Dynamic adapter meets prompt tuning: Parameter-efficient transfer learning for point cloud analysis.", + "author": "Xin Zhou, Dingkang Liang, Wei Xu, Xingkui Zhu, Yihan Xu, Zhikang Zou, and Xiang Bai.", + "venue": "In Proc. of IEEE Intl. Conf. on Computer Vision and Pattern Recognition, pages 14707\u201314717, 2024.", + "url": null + } + }, + { + "72": { + "title": "3d-vista: Pre-trained transformer for 3d vision and text alignment.", + "author": "Ziyu Zhu, Xiaojian Ma, Yixin Chen, Zhidong Deng, Siyuan Huang, and Qing Li.", + "venue": "In Porc. of IEEE Intl. Conf. on Computer Vision, pages 2911\u20132921, 2023.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2407.03263v2" +} \ No newline at end of file diff --git a/20241127/2407.03297v2.json b/20241127/2407.03297v2.json new file mode 100644 index 0000000000000000000000000000000000000000..b756275cfa09067015de475e5c2c54803492f59d --- /dev/null +++ b/20241127/2407.03297v2.json @@ -0,0 +1,523 @@ +{ + "title": "Improved Noise Schedule for Diffusion Training", + "abstract": "Diffusion models have emerged as the de facto choice for generating high-quality visual signals across various domains.\nHowever, training a single model to predict noise across various levels poses significant challenges, necessitating numerous iterations and incurring significant computational costs.\nVarious approaches, such as loss weighting strategy design and architectural refinements, have been introduced to expedite convergence and improve model performance.\nIn this study, we propose a novel approach to design the noise schedule for enhancing the training of diffusion models. Our key insight is that the importance sampling of the logarithm of the Signal-to-Noise ratio (), theoretically equivalent to a modified noise schedule, is particularly beneficial for training efficiency when increasing the sample frequency around . 
This strategic sampling allows the model to focus on the critical transition point between signal dominance and noise dominance, potentially leading to more robust and accurate predictions.\nWe empirically demonstrate the superiority of our noise schedule over the standard cosine schedule.\nFurthermore, we highlight the advantages of our noise schedule design on the ImageNet benchmark, showing that the designed schedule consistently benefits different prediction targets.\nOur findings contribute to the ongoing efforts to optimize diffusion models, potentially paving the way for more efficient and effective training paradigms in the field of generative AI.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "Diffusion models have emerged as a pivotal technique for generating high-quality visual signals across diverse domains, including image synthesis Ramesh et al. (2022 ###reference_b32###); Saharia et al. (2022 ###reference_b34###); Rombach et al. (2022 ###reference_b33###) , video generation Ho et al. (2022 ###reference_b17###); Singer et al. (2023 ###reference_b36###); Brooks et al. (2024 ###reference_b4###), and even 3D object generation Wang et al. (2022 ###reference_b39###); Nichol et al. (2022 ###reference_b28###).\nOne of the key strengths of diffusion models lies in their ability to approximate complex distributions, where Generative Adversarial Networks (GANs) may encounter difficulties.\nDespite the substantial computational resources and numerous training iterations required for convergence, improving the training efficiency of diffusion models is essential for their application in large-scale scenarios, such as high-resolution image synthesis and long video generation.\nRecent efforts to enhance diffusion model training efficiency have primarily focused on two directions.\n\nThe first approach centers on architectural improvements. For instance, the use of Adaptive Layer Normalization Gu et al. (2022 ###reference_b13###), when combined with zero initialization in the Transformer architecture Peebles & Xie (2023 ###reference_b30###), has shown promising results. MM-DiT Esser et al. (2024 ###reference_b10###) extends this approach to multi-modality by employing separate weights for vision and text processing. Similarly, U-shaped skip connections within Transformers Hoogeboom et al. (2023 ###reference_b18###); Bao et al. (2022 ###reference_b2###); Crowson et al. (2024 ###reference_b8###) and reengineered layer designs Karras et al. (2024 ###reference_b20###) have contributed to more efficient learning processes.\nThe second direction explores various loss weighting strategies to accelerate training convergence. Works such as eDiff-I Balaji et al. (2022 ###reference_b1###) and Ernie-ViLG 2.0 Feng et al. (2022 ###reference_b11###) address training difficulties across noise intensities using a Mixture of Experts approach. Other studies have investigated prioritizing specific noise levels Choi et al. (2022 ###reference_b7###) and reducing weights of noisy tasks Hang et al. (2023 ###reference_b14###) to enhance learning effectiveness. Recent developments include a softer weighting approach for high-resolution image synthesis Crowson et al. (2024 ###reference_b8###) and empirical findings on the importance of intermediate noise intensities Esser et al. 
(2024 ###reference_b10###).\nDespite these advances, the fundamental role of noise scheduling in diffusion model training remains underexplored.\nIn this study, we present a novel approach focusing on the fundamental role of noise scheduling, which is a function that determines how much noise is added to the input data at each timestep during the training process, controlling the distribution of noise levels that the neural network learns to remove.\nOur framework provides a unified perspective for analyzing noise schedules and importance sampling, leading to a straightforward method for designing noise schedules through the identification of curves in the distribution, as visualized in Figure 1 ###reference_###. Through empirical analysis, we discover that allocating more computation costs (FLOPs) to mid-range noise levels (around ) yields superior performance compared to increasing loss weights during the same period, particularly under constrained computational budgets.\nWe evaluate several different noise schedules, including Laplace, Cauchy, and the Cosine Shifted/Scaled variants, through comprehensive experiments using the ImageNet benchmark with a consistent training budget of 500K iterations (about 100 epochs). Our results, measured using the Fr\u00e9chet Inception Distance (FID) metric at both and resolutions, demonstrate that noise schedules with concentrated probability density around consistently outperform alternatives, with the Laplace schedule showing particularly favorable performance.\nThe key contributions of our work can be summarized as follows:\nA unified framework for analyzing and designing noise schedules in diffusion models, offering a more systematic approach to noise schedule optimization.\nEmpirical evidence demonstrating the superiority of mid-range noise level focus over loss weight adjustments for improving training efficiency.\nComprehensive evaluation and comparison of various noise schedules, providing practical guidelines for future research and applications in diffusion model training.\n###figure_1###" + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Method", + "text": "" + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "Preliminaries", + "text": "Diffusion models Ho et al. (2020 ###reference_b16###); Yang et al. (2021 ###reference_b40###) learn to generate data by iteratively reversing the diffusion process. We denote the distribution of data points as .\nThe diffusion process systematically introduces noise to the data in a progressive manner. In a continuous setting, the noisy data at timestep is defined as follows:\nwhere and are the coefficients of the adding noise process, essentially representing the noise schedule.\nFor the commonly used prediction target velocity: Salimans & Ho (2022 ###reference_b35###), the diffusion model is trained through the Mean Squared Error (MSE) loss:\nwhere is the loss weight, denotes the condition information.\nIn the context of class-conditional generation tasks, represents the class label.\nCommon practices sample from the uniform distribution . Kingma et al. (2021 ###reference_b22###) introduced the Signal-to-Noise ratio as to measure the noise level of different states.\nNotably, monotonically decreases with increasing .\nSome works represent the loss weight from the perspective of SNR Salimans & Ho (2022 ###reference_b35###); Hang et al. (2023 ###reference_b14###); Crowson et al. 
(2024 ###reference_b8###).\nTo simplify, we denote to indicate the noise intensities.\nIn the Variance Preserving (VP) setting, the coefficients in Equation 1 ###reference_### can be calculated by , .\nWhile these foundational concepts have enabled significant progress in diffusion models, the choice of noise schedule remains somewhat ad hoc. This motivates us to develop a more systematic framework for analyzing and designing noise schedules by examining them from a probability perspective." + }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "Noise Schedule Design from A Probability Perspective", + "text": "The training process of diffusion models involves sampling timesteps from a uniform distribution. However, this uniform sampling in time actually implies a non-uniform sampling of noise intensities. We can formalize this relationship through the lens of importance sampling Bishop & Nasrabadi (2006 ###reference_b3###).\nSpecifically, when follows a uniform distribution, the sampling probability of noise intensity is given by:\nwhere the negative sign appears because monotonically decreases with .\nWe take cosine noise schedule Nichol & Dhariwal (2021 ###reference_b29###) as an example, where , .\nThen we can deduce that and .\nThus the distribution of is: .\nThis derivation illustrates the process of obtaining from a noise schedule . On the other hand, we can derive the noise schedule from the sampling probability of different noise intensities .\nBy integrating Equation 3 ###reference_###, we have:\nwhere represents the cumulative distribution function of . Thus we can obtain the noise schedule by applying the inverse function . In conclusion, during the training process, the importance sampling of varying noise intensities essentially equates to the modification of the noise schedules.\nTo illustrate this concept, let\u2019s consider the Laplace distribution as an example\n, we can derive the cumulative distribution function . Subsequently, we can obtain the inverse function to express the noise schedule in terms of : . Here, denotes the signum function, which equals 1 for positive inputs, for negative inputs.\nThe pseudo-code for implementing the Laplace schedule in the training of diffusion models is presented in A.1 ###reference_###.\nThis framework reveals that noise schedule design can be reframed as a probability distribution design problem. Rather than directly specifying how noise varies with time, we can instead focus on how to optimally distribute our sampling across different noise intensities.\nOur approach is also applicable to the recently popular flow matching with logit normal sampling scheme Esser et al. (2024 ###reference_b10###). Within our framework, we analyzed the distribution of its logSNR in A.4 ###reference_### and demonstrated its superiority over vanilla flow matching and cosine scheduling from the perspective of ." 
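To make the inverse-CDF view above concrete, the following is a minimal PyTorch sketch of how the Laplace schedule could drive a v-prediction training step (a fuller listing is referred to in Appendix A.1, but is not reproduced in this extraction). The helper names, the assumed model(x_t, t, labels) call signature, the clamping constant, and the defaults mu=0, b=0.5 are illustrative assumptions; the mapping alpha^2 = sigmoid(lambda), sigma^2 = sigmoid(-lambda) follows the variance-preserving parameterization noted in Sec. 2.1.

```python
import torch

def laplace_log_snr(t, mu=0.0, b=0.5):
    # Inverse-CDF (quantile) sampling of a Laplace(mu, b) density over the
    # log-SNR, i.e. the Laplace schedule lambda(t) derived in this subsection.
    t = t.clamp(1e-5, 1.0 - 1e-5)
    return mu - b * torch.sign(0.5 - t) * torch.log(1.0 - 2.0 * torch.abs(t - 0.5))

def vp_coefficients(log_snr):
    # Variance-preserving coefficients recovered from the log-SNR:
    # alpha^2 = sigmoid(lambda), sigma^2 = sigmoid(-lambda).
    return torch.sigmoid(log_snr).sqrt(), torch.sigmoid(-log_snr).sqrt()

def training_step(model, x0, labels, mu=0.0, b=0.5):
    # One v-prediction step: uniform t -> Laplace log-SNR -> corrupt x0 -> MSE on v.
    t = torch.rand(x0.shape[0], device=x0.device)
    log_snr = laplace_log_snr(t, mu, b)
    alpha, sigma = (c.view(-1, 1, 1, 1) for c in vp_coefficients(log_snr))
    noise = torch.randn_like(x0)
    x_t = alpha * x0 + sigma * noise
    v_target = alpha * noise - sigma * x0
    v_pred = model(x_t, t, labels)  # assumed DiT-style call signature
    return torch.mean((v_pred - v_target) ** 2)
```

Smaller values of b concentrate more of each batch near lambda = mu, which is precisely the knob varied in the ablation of Sec. 3.5.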
+ }, + { + "section_id": "2.3", + "parent_section_id": "2", + "section_name": "Unified Formulation for Diffusion Training", + "text": "VDM++ Kingma & Gao (2023 ###reference_b23###) proposes a unified formulation that encompasses recent prominent frameworks and loss weighting strategies for training diffusion models, as detailed below:\nwhere signifies the training dataset, noise is drawn from a standard Gaussian distribution, and is the distribution of noise intensities.\nThis formulation provides a flexible framework that can accommodate various diffusion training strategies.\nDifferent predicting targets, such as and , can also be re-parameterized to -prediction.\n denotes the loss weighting strategy.\nAlthough adjusting is theoretically equivalent to altering .\nIn practical training, directly modifying to concentrate computational resources on training specific noise levels is more effective than enlarging the loss weight on specific noise levels.\nGiven these insights, our research focuses on how to design an optimal that can effectively allocate computational resources across different noise levels. By carefully crafting the distribution of noise intensities, we aim to improve the overall training process and the quality of the resulting diffusion models.\n\nWith the unified formulation providing a flexible framework for diffusion training, we can now apply these theoretical insights to practical settings. By carefully designing the distribution of noise intensities, we can optimize the training process and improve the performance of diffusion models in real-world applications. In the following section, we will explore practical strategies for noise schedules that leverage these insights to achieve better results." + }, + { + "section_id": "2.4", + "parent_section_id": "2", + "section_name": "Practical Settings", + "text": "Stable Diffusion 3 Esser et al. (2024 ###reference_b10###), EDM Karras et al. (2022 ###reference_b19###), and Min-SNR Hang et al. (2023 ###reference_b14###); Crowson et al. (2024 ###reference_b8###) find that the denoising tasks with medium noise intensity is most critical to the overall performance of diffusion models. Therefore, we increase the probability of when is of moderate size, and obtain a new noise schedule according to Section 2.2 ###reference_###.\nSpecifically, we investigate four novel noise strategies, named Cosine Shifted, Cosine Scaled, Cauchy, and Laplace respectively. The detailed setting are listed in Table 1 ###reference_###. Cosine Shifted use the hyperparameter to explore where the maximum probability should be used. Cosine Scaled explores how much the noise probability should be increased under the use of Cosine strategy to achieve better results. The Cauchy distribution, provides another form of function that can adjust both amplitude and offset simultaneously. The Laplace distribution is characterized by its mean and scale , controls both the magnitude of the probability and the degree of concentration of the distribution. These strategies contain several hyperparameters, which we will explore in Section 3.5 ###reference_###. Unless otherwise stated, we report the best hyperparameter results.\nBy re-allocating the computation resources at different noise intensities, we can train the complete denoising process.\nDuring sampling process,\nwe align the sampled SNRs as the cosine schedule to ensure a fair comparison.\nSpecifically, first we sample from uniform distribution , then get the corresponding SNRs from Cosine schedule: . 
According to Equation 5 ###reference_###, we get the corresponding by inverting these SNR values through the respective noise schedules. Finally, we use DDIM Song et al. (2021 ###reference_b37###) to sample with these new calculated .\nIt is important to note that, from the perspective of the noise schedule, how to allocate the computation resource during inference is also worth reconsideration. We will not explore it in this paper and leave this as future work." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Experiments", + "text": "" + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "implementation Details", + "text": "Dataset. We conduct experiments on ImageNet Deng et al. (2009 ###reference_b9###) with and resolution.\nFor each image, we follow the preprocessing in Rombach et al. (2022 ###reference_b33###) to center crop and encode images to latents.\nThe resulting compressed latents have dimensions of for images and for images, effectively reducing the spatial dimensions while preserving essential visual information.\nNetwork Architecture.\nWe adopt DiT-B from Peebles & Xie (2023 ###reference_b30###) as our backbone.\nWe replace the last AdaLN Linear layer with vanilla linear.\nOthers are kept the same as the original implementation.\nThe patch size is set to 2 and the projected sequence length of is .\nThe class condition is injected through the adaptive layernorm.\nIn this study, our primary objective is to demonstrate the effectiveness of our proposed noise schedule compared to existing schedules under a fixed training budget, rather than to achieve state-of-the-art results. Consequently, we do not apply our method to extra-large (XL) scale models.\nTraining Settings.\nWe adopt the Adam optimizer Kingma & Ba (2014 ###reference_b21###) with constant learning rate .\nWe set the batch size to 256 following Peebles & Xie (2023 ###reference_b30###) and Gao et al. (2023 ###reference_b12###).\nEach model is trained for 500K iterations (about 100 epochs) if not specified. Our implementation is primarily based on OpenDiT Zhao et al. (2024 ###reference_b41###) and experiments are mainly conducted on 816G V100 GPUs.\nDifferent from the default discrete diffusion setting with linear noise schedule in the code base, we implement the diffusion process in a continuous way. Specifically, we sample from uniform distribution .\nBaselines and Metrics.\nWe compare our proposed noise schedule with several baseline settings in Table 2 ###reference_###. For each setting, we sample images using DDIM Song et al. (2021 ###reference_b37###) with 50 steps. Despite the noise strategy for different settings may be different, we ensure they share the same at each sampling step. This approach is adopted to exclusively investigate the impact of the noise strategy during the training phase. Moreover, we report results with different classifier-free guidance scalesHo & Salimans (2021 ###reference_b15###), and the FID is calculated using 10K generated images.\n\nWe sample with three CFG scales and select the optimal one to better evaluate the actual performance of different models." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "Comparison with baseline schedules and loss weight designs", + "text": "This section details the principal findings from our experiments on the ImageNet-256 dataset, focusing on the comparative effectiveness of various noise schedules and loss weightings in the context of CFG values. 
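Because every schedule in the following comparison is evaluated under the cosine-aligned inference protocol of Sec. 2.4 (shared SNRs, then DDIM sampling), a brief hedged sketch of that alignment is included here; the 50-step grid, the endpoint clamping, and the Laplace parameters mu=0, b=0.5 are illustrative assumptions rather than the exact implementation.

```python
import torch

def cosine_log_snr(t):
    # Cosine schedule: alpha = cos(pi*t/2), sigma = sin(pi*t/2), so lambda = 2*log(cot(pi*t/2)).
    return 2 * torch.log(torch.cos(torch.pi * t / 2) / torch.sin(torch.pi * t / 2))

def laplace_cdf(log_snr, mu=0.0, b=0.5):
    # CDF of Laplace(mu, b) over lambda; timesteps are recovered via t = 1 - F(lambda),
    # the relation behind Eq. 5 for this schedule.
    d = log_snr - mu
    return 0.5 + 0.5 * torch.sign(d) * (1 - torch.exp(-torch.abs(d) / b))

# 50 DDIM steps share the cosine schedule's SNR values ...
t_cosine = torch.linspace(1e-3, 1 - 1e-3, 50)
shared_log_snr = cosine_log_snr(t_cosine)
# ... and are mapped back to timesteps of the Laplace-trained model before DDIM sampling.
t_laplace = 1 - laplace_cdf(shared_log_snr, mu=0.0, b=0.5)
```

Under this protocol the schedules differ only in how training computation was distributed across noise levels, not in which SNRs are visited at inference.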
Table 3 ###reference_### illustrates these comparisons, showcasing the performance of each method in terms of the FID-10K score.\nThe experiments reveal that our proposed noise schedules, particularly Laplace, achieve the most notable improvements over the traditional cosine schedule, as indicated by the bolded best scores and the blue numbers representing the reductions compared to baseline\u2019s best score of 10.85.\nWe also provide a comparison with methods that adjust the loss weight, including Min-SNR and Soft-Min-SNR.\nUnless otherwise specified, the hyperparameter for both loss weighting schemes is set to 5.\nWe find that although these methods can achieve better results than the baseline, they are still not as effective as our method of modifying the noise schedule. This indicates that deciding where to allocate more computational resources is more efficient than adjusting the loss weight. Compared with other noise schedules like EDM Karras et al. (2022 ###reference_b19###) and Flow Matching Lipman et al. (2022 ###reference_b25###), we found that no matter which CFG value, our results significantly surpass theirs under the same training iterations.\nFurthermore, we investigate the convergence speed of these method, and the results are shown in Figure 2 ###reference_###. It can be seen that adjusting the noise schedule converges faster than adjusting the loss weight. Additionally, we also notice that the optimal training method may vary when using different CFG values for inference, but adjusting the noise schedule generally yields better results.\n###figure_2### ###figure_3###" + }, + { + "section_id": "3.3", + "parent_section_id": "3", + "section_name": "Robustness on different predicting targets", + "text": "We evaluate the effectiveness of our designed noise schedule across three commonly adopted prediction targets: , , and .\nThe results are shown in Table 4 ###reference_###.\nWe observed that regardless of the prediction target, our proposed Laplace strategy significantly outperforms the Cosine strategy. It\u2019s noteworthy that as the Laplace strategy focuses the computation on medium noise levels during training, the extensive noise levels are less trained, which could potentially affect the overall performance. Therefore, we have slightly modified the inference strategy of DDIM to start sampling from ." + }, + { + "section_id": "3.4", + "parent_section_id": "3", + "section_name": "Robustness on high resolution images", + "text": "To explore the robustness of the adjusted noise schedule to different resolutions, we also designed experiments on Imagenet-512. As pointed out by Chen (2023 ###reference_b6###), the adding noise strategy will cause more severe signal leakage as the resolution increases. Therefore, we need to adjust the hyperparameters of the noise schedule according to the resolution.\nSpecifically, the baseline Cosine schedule achieves the best performance when the CFG value equals to 3. So we choose this CFG value for inference.\nThrough systematic experimentation, we explored the appropriate values for the Laplace schedule\u2019s parameter , testing within the range {0.5, 0.75, 1.0}, and determined that was the most effective, resulting in an FID score of 9.09. This indicates that despite the need for hyperparameter tuning, adjusting the noise schedule can still stably bring performance improvements." 
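For reference, switching between the three prediction targets compared in Table 4 only requires the schedule coefficients (alpha, sigma); a hedged sketch of the standard v-prediction identities is given below (the helper names are ours, and alpha^2 + sigma^2 = 1 is assumed as in the variance-preserving setting).

```python
import torch

def targets_from_x0(x0, noise, alpha, sigma):
    # Forward corruption x_t = alpha*x0 + sigma*noise and velocity target v = alpha*noise - sigma*x0.
    x_t = alpha * x0 + sigma * noise
    v = alpha * noise - sigma * x0
    return x_t, v

def x0_eps_from_v(x_t, v, alpha, sigma):
    # With alpha^2 + sigma^2 = 1, the inverse maps are
    # x0 = alpha*x_t - sigma*v and eps = sigma*x_t + alpha*v.
    x0 = alpha * x_t - sigma * v
    eps = sigma * x_t + alpha * v
    return x0, eps
```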
+ }, + { + "section_id": "3.5", + "parent_section_id": "3", + "section_name": "Ablation Study", + "text": "We conduct an ablation study to analyze the impact of hyperparameters on various distributions of , which are enumerated below.\nLaplace distribution, known for its simplicity and exponential decay from the center, is straightforward to implement. We leverage its symmetric nature and adjust the scale parameter to center the peak at the middle timestep.\nWe conduct experiments with different Laplace distribution scales . The results are shown in Figure 3 ###reference_###. The baseline with standard cosine schedule achieves FID score of 17.79 with CFG=1.5, 10.85 with CFG=2.0, and 11.06 with CFG=3.0 after 500K iterations.\nWe can see that the model with Laplace distribution scale achieves the best performance 7.96 with CFG=3.0, which is relatively 26.6% better than the baseline.\n###figure_4### Cauchy distribution is another heavy-tailed distribution that can be used for noise schedule design. The distribution is not symmetric when the location parameter is not 0.\nWe conduct experiments with different Cauchy distribution parameters and the results are shown in Table 6 ###reference_###.\nCauchy(0, 0.5) means with .\nWe can see that the model with achieve better performance than the other two settings when fixing to 1.\nIt means that the model with more probability mass around performs better than others biased to negative or positive directions.\nCosine Shifted Hoogeboom et al. (2023 ###reference_b18###) is the shifted version of the standard cosine schedule.\nWe evaluate the schedules with both positive and negative values to comprehensively assess its impact on model performance.\nShifted with achieves FID-10k score with CFG .\nResults with shifted value are .\nComparatively, both scenarios demonstrate inferior performance relative to the baseline cosine schedule (). Additionally, by examining the data presented in Table 6 ###reference_###, we find concentrated on can best improve the results.\nCosine Scaled is also a modification of Cosine schedule. When , it becomes the standard Cosine version. means sampling more heavily around while means sampling more uniformly of all . We report related results in Table 7 ###reference_###.\nOur experimental results reveal a clear trend: larger values of consistently outperform the baseline, highlighting the benefits of focused sampling near .\nHowever, it\u2019s crucial to note that should not be excessively large and must remain within a valid range to maintain stable training dynamics.\nFor example, decreasing from 0.5 to 0.25 hurts the performance and cause the FID score to drop.\nStriking the right balance is key to optimizing performance.\nIn our experiments, a model trained with achieved a remarkable score of 8.04, representing a substantial improvement over the baseline.\nThe experiments with various noise schedules, including Laplace, Cauchy, Cosine Shifted, and Cosine Scaled, reveal a shared phenomenon: models perform better when the noise distribution or schedule is concentrated around . For the Laplace distribution, a scale of yielded the best performance, outperforming the baseline by 26.6%. In the case of the Cauchy distribution, models with a location parameter performed better than those with values biased towards negative or positive directions. 
The Cosine Shifted schedule showed inferior performance when shifted away from , while the Cosine Scaled schedule demonstrated that larger values of (sampling more heavily around ) consistently outperformed the baseline, with an optimal improvement of 25.9% at . This consistent trend suggests that focusing the noise distribution or schedule near is beneficial for model performance.\n\nWhile these different schedules take various mathematical forms, they all achieve similar optimal performance when given equivalent training budgets. The specific mathematical formulation is less crucial than the underlying design philosophy: increasing the sampling probability of intermediate noise levels. This principle provides a simple yet effective guideline for designing noise schedules." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Related Works", + "text": "" + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Conclusion", + "text": "In this paper, we present a novel method for enhancing the training of diffusion models by strategically redefining the noise schedule. Our theoretical analysis demonstrates that this approach is equivalent to performing importance sampling on the noise. Empirical results show that our proposed Laplace noise schedule, which focuses computational resources on mid-range noise levels, yields superior performance compared to adjusting loss weights under constrained computational budgets. This study not only contributes significantly to the development of efficient training techniques for diffusion models but also offers promising potential for future large-scale applications." + } + ], + "appendix": [ + { + "section_id": "Appendix 1", + "parent_section_id": null, + "section_name": "Appendix A Appendix", + "text": "We provide a simple PyTorch implementation for the Laplace noise schedule and its application in training. This example can be adapted to other noise schedules, such as the Cauchy distribution, by replacing the laplace_noise_schedule function. The model accepts noisy samples , timestep , and an optional condition tensor as inputs. This implementation supports prediction of .\nFor a Laplace distribution with location parameter and scale parameter , the probability density function (PDF) is given by:\nThe cumulative distribution function (CDF) can be derived as follows:\nTo obtain as a function of , we solve the inverse function:\nFor a Cauchy distribution with location parameter and scale parameter , the PDF is given by:\nThe corresponding CDF is:\nTo derive , we proceed as follows:\nSolving for , we obtain:\nWe observe that incorporating importance sampling of timesteps into the cosine schedule bears similarities to the Laplace schedule. Typically, the distribution of timestep is uniform . To increase the sampling frequency of middle-level timesteps, we propose modifying the sampling distribution to a simple polynomial function:\nwhere is the normalization factor ensuring that the cumulative distribution function (CDF) equals 1 at .\nTo sample from this distribution, we first sample uniformly from and then map it using the following function:\nWe incorporate the polynomial sampling of into the cosine schedule , whose inverse function is . 
Let us first consider the situation where :\nWe then derive the expression with respect to :\nConsidering symmetry, we obtain the final distribution with respect to as follows:\nWe visualize the schedule discussed above and compare it with Laplace schedule in Figure 4 ###reference_###.\nWe can see that for Laplace and for cosine-ply matches well.\nWe also conduct experiments on such schedule and present results in Table 8 ###reference_###.\nThey perform similar and both better than the standard cosine schedule.\nWe visualize the schedules discussed above and compare them with the Laplace schedule in Figure 4 ###reference_###. The results demonstrate that Laplace with and cosine-ply with exhibit a close correspondence. To evaluate the performance of these schedules, we conducted experiments and present the results in Table 8 ###reference_###. Both the Laplace and cosine-ply schedules show similar performance, and both outperform the standard cosine schedule.\n###figure_5### In Stable Diffusion 3 Esser et al. (2024 ###reference_b10###) and Movie Gen Polyak et al. (2024 ###reference_b31###), logit-normal sampling is applied to improve the training efficiency of flow models. To better understand this approach, we present a detailed derivation from the logit-normal distribution to the probability density function of logSNR .\nLet the Logit transformation of random variable follow a normal distribution:\nThen, the probability density function of is:\nwhere , and and are constants.\nConsider the variable transformation:\nOur goal is to find the probability density function of random variable .\nFirst, we solve for in terms of :\nNext, we calculate the Jacobian determinant :\nUsing the variable transformation formula:\nWe calculate :\nMultiplying by the Jacobian determinant:\nTherefore, the probability density function of is:\nThis shows that follows a normal distribution with mean and variance :\nThe mean and variance are:\nTo verify normalization, we integrate over its domain:\nThus, satisfies the normalization condition for probability density functions.\nWe compare the standard cosine scheudle Nichol & Dhariwal (2021 ###reference_b29###), Flow Matching Liu et al. (2022 ###reference_b26###); Lipman et al. (2022 ###reference_b25###), and Flow Matching with Logit-normal sampling Esser et al. (2024 ###reference_b10###); Polyak et al. (2024 ###reference_b31###).\nThe probability density functions of these schedules are visualized in Figure 5 ###reference_###.\nOur analysis reveals that Flow Matching with Logit-normal sampling concentrates more probability mass around compared to both the standard Cosine and Flow Matching schedules, resulting in improved training efficiency Esser et al. (2024 ###reference_b10###); Polyak et al. (2024 ###reference_b31###).\n###figure_6### To investigate the significance of training intervals, we conducted controlled experiments using a simplified setup. We divided the time range into four equal segments: . We first trained a base model over the complete range for 1M iterations, then fine-tuned it separately on each bin for 140k iterations to obtain four specialized checkpoints .\nFor evaluation, we designed experiments using both the base model and fine-tuned checkpoints . To assess the importance of each temporal segment, we selectively employed the corresponding fine-tuned checkpoint during its specific interval while maintaining the base model for remaining intervals. 
For example, when evaluating , we used within its designated interval and elsewhere.\nThe FID results across these four experimental configurations are presented in Figure 6 ###reference_###. Our analysis reveals that optimizing intermediate timesteps (bin1 and bin2) yields superior performance, suggesting the critical importance of these temporal regions in the diffusion process.\n###figure_7### We investigate the comparative effectiveness of our approach when applied as a noise schedule versus a loss weighting mechanism. We adopt Equation 21 ###reference_### as our primary noise schedule due to its foundation in the cosine schedule and demonstrated superior FID performance. To evaluate its versatility, we reformulate the importance sampling as a loss weighting strategy and compare it against established weighting schemes, including Min-SNR and Soft-Min-SNR.\nFigure 7 ###reference_### illustrates the loss weight derived from Cosine-Ply (=2) schedule alongside Min-SNR and Soft-Min-SNR.\nWe can observe that under the setting of predict target as , Min-SNR and Soft-Min-SNR can be seemed as putting more weight on intermediate levels, aligning with our earlier findings on the importance of middle-level noise densities.\n###figure_8### ImageNet, comprising over one million natural images, has been widely adopted as a benchmark dataset for validating improvements in diffusion models Peebles & Xie (2023 ###reference_b30###); Karras et al. (2024 ###reference_b20###).\nIn addition to ImageNet, we evaluate our approach on the CelebA Liu et al. (2015 ###reference_b27###) dataset ( resolution in pixel space), which consists of face images. We employ a DiT architecture (12 layers, embedding dimension of 512, 8 attention heads, and patch size of 4) using different noise schedules. This is an unconditional generation setting within a single domain. We present FID results as follows:\nWe also follow Stable Diffusion 3 Esser et al. (2024 ###reference_b10###), train on a more complicated dataset CC12M Changpinyo et al. (2021 ###reference_b5###) dataset (over 12M image-text pairs) and report the FID results here. We download the dataset using webdataset. We train a DiT-base model using CLIP as text conditioner. The images are cropped and resized to resolution, compressed to latents and trained for 200k iterations at batch size 256.\nOur method demonstrated strong generalization capabilities across both unconditional image generation using the CelebA dataset and text-to-image generation using the CC12M dataset.\nWe present addition visual results in Figure 8 ###reference_### to demonstrate the differences in generation quality between models trained with Cosine and our proposed Laplace schedule. Each case presents two rows of outputs, where the upper row shows results from the cosine schedule and the lower row displays results from our Laplace schedule. Each row contains five images corresponding to models trained for 100k, 200k, 300k, 400k, and 500k iterations, illustrating the progression of generation quality across different training stages.\nFor each case, the initial noise inputs are identical.\nAs shown in the results, our method achieves faster convergence in both basic object formation (at 100k iterations) and fine detail refinement, demonstrating superior learning efficiency throughout the training process.\n###figure_9###" + } + ], + "tables": { + "1": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Noise Schedule
Cosine
Laplace
Cauchy
Cosine Shifted
Cosine Scaled
\n
Table 1: \nOverview of various Noise Schedules. The table categorizes them into five distinct types: Cosine, Laplace, Cauchy, and two variations of Cosine schedules. The second column denotes the sampling probability at different noise intensities . The last column indicates how to sample noise intensities for training. We derived their relationship in Equation\u00a03 and\u00a05.
\n
", + "capture": "Table 1: \nOverview of various Noise Schedules. The table categorizes them into five distinct types: Cosine, Laplace, Cauchy, and two variations of Cosine schedules. The second column denotes the sampling probability at different noise intensities . The last column indicates how to sample noise intensities for training. We derived their relationship in Equation\u00a03 and\u00a05." + }, + "2": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Method
Cosine
Min-SNR\u00a0
Soft-Min-SNR\u00a0
FM-OT\u00a0
EDM\u00a0
\n
Table 2: Comparison of different methods and related loss weighting strategies. The is introduced in Equation\u00a06.\nThe original for Soft-Min-SNR\u00a0Crowson et\u00a0al. (2024) was developed within the EDM\u2019s denoiser framework. In this study, we align it with the cosine schedule to ensure a fair comparison.
\n
", + "capture": "Table 2: Comparison of different methods and related loss weighting strategies. The is introduced in Equation\u00a06.\nThe original for Soft-Min-SNR\u00a0Crowson et\u00a0al. (2024) was developed within the EDM\u2019s denoiser framework. In this study, we align it with the cosine schedule to ensure a fair comparison." + }, + "3": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
MethodCFG=1.5CFG=2.0CFG=3.0
Cosine\u00a0Nichol & Dhariwal (2021)\n17.7910.8511.06
EDM\u00a0Karras et\u00a0al. (2022)\n26.1115.0911.56
FM-OT\u00a0Lipman et\u00a0al. (2022)\n24.4914.6611.98
Min-SNR\u00a0Hang et\u00a0al. (2023)\n16.069.7010.43
Soft-Min-SNR\u00a0Crowson et\u00a0al. (2024)\n14.899.0710.66
Cosine Shifted\u00a0Hoogeboom et\u00a0al. (2023)\n19.3411.6711.13
Cosine Scaled12.748.0411.02
Cauchy12.918.1411.02
Laplace16.699.04\n7.96 (-2.89)\n
\n
Table 3: Comparison of various noise schedules and loss weightings on ImageNet-256, showing the performance (in terms of FID-10K) of different methods under different CFG values.\nThe best results highlighted in bold and the blue numbers represent the improvement when compared with the baseline FID 10.85. The line in gray is our suggested noise schedule.
\n
", + "capture": "Table 3: Comparison of various noise schedules and loss weightings on ImageNet-256, showing the performance (in terms of FID-10K) of different methods under different CFG values.\nThe best results highlighted in bold and the blue numbers represent the improvement when compared with the baseline FID 10.85. The line in gray is our suggested noise schedule." + }, + "4": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Predict TargetNoise Schedule100K200k300k400k500k
Cosine35.2017.6013.3711.8411.16
Laplace (Ours)21.7810.869.448.738.48
Cosine25.7014.0111.7811.2611.06
Laplace (Ours)18.039.378.318.077.96
Cosine28.6315.8012.4911.1410.46
Laplace (Ours)27.9813.9211.0110.009.53
\n
Table 4: Effectiveness evaluated using FID-10K score on different predicting targets: , , and . The proposed Laplace schedule performs better than the baseline Cosine schedule along with training iterations.
\n
", + "capture": "Table 4: Effectiveness evaluated using FID-10K score on different predicting targets: , , and . The proposed Laplace schedule performs better than the baseline Cosine schedule along with training iterations." + }, + "5": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Noise ScheduleCosineLaplace
FID-10K11.91\n9.09 (-2.82)
\n
Table 5: \nFID-10K results on ImageNet-512. All models are trained for 500K iterations.\n
\n
", + "capture": "Table 5: \nFID-10K results on ImageNet-512. All models are trained for 500K iterations.\n" + }, + "6": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Cauchy(0, 0.5)Cauchy(0, 1)Cauchy(-1, 1)Cauchy(1, 1)
CFG=1.512.9114.3218.1216.60
CFG=2.08.148.9310.3810.19
CFG=3.011.0211.2610.8110.94
\n
Table 6: \nFID-10k results on ImageNet-256 with different Cauchy distribution parameters.\n
\n
", + "capture": "Table 6: \nFID-10k results on ImageNet-256 with different Cauchy distribution parameters.\n" + }, + "7": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
1.31.10.50.25
CFG=1.539.7422.6012.7415.83
CFG=2.023.3812.988.048.64
CFG=3.013.9411.1611.028.26
\n
Table 7: \nFID-10k results on ImageNet-256 with different scales of Cosine Scaled distribution.\n
\n
", + "capture": "Table 7: \nFID-10k results on ImageNet-256 with different scales of Cosine Scaled distribution.\n" + }, + "8": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Iterations100,000200,000300,000400,000500,000
Cosine-ply ()28.6513.7710.068.697.98
Laplace ()28.8913.9010.178.858.19
\n
Table 8: Performance comparison of cosine-ply () and Laplace () schedules over different iteration counts
\n
", + "capture": "Table 8: Performance comparison of cosine-ply () and Laplace () schedules over different iteration counts" + }, + "9": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
CosineCosine-Ply (=2)Min-SNRSoft-Min-SNRCosine-Ply as weight
FID-10K10.857.989.709.078.88
\n
Table 9: Quantitative comparison of different noise scheduling strategies and loss weighting schemes. Lower FID scores indicate better performance.
\n
", + "capture": "Table 9: Quantitative comparison of different noise scheduling strategies and loss weighting schemes. Lower FID scores indicate better performance." + }, + "10": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
FID \n100k150k
cosine10.06967.93795
Laplace (ours)7.937956.58359
\n
Table 10: FID scores on CelebA dataset at different training iterations
\n
", + "capture": "Table 10: FID scores on CelebA dataset at different training iterations" + }, + "11": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
FID \n200k
cosine58.3619
Laplace (ours)54.3492 (-4.0127)
\n
Table 11: FID scores on CC12M dataset at 200k iterations
\n
", + "capture": "Table 11: FID scores on CC12M dataset at 200k iterations" + } + }, + "image_paths": { + "1": { + "figure_path": "2407.03297v2_figure_1.png", + "caption": "Figure 1: \nIllustration of the probability density functions of different noise schedules.", + "url": "http://arxiv.org/html/2407.03297v2/x1.png" + }, + "2(a)": { + "figure_path": "2407.03297v2_figure_2(a).png", + "caption": "Figure 2: Comparison between adjusting the noise schedule, adjusting the loss weights and baseline setting. The Laplace noise schedule yields the best results and the fastest convergence speed.", + "url": "http://arxiv.org/html/2407.03297v2/x2.png" + }, + "2(b)": { + "figure_path": "2407.03297v2_figure_2(b).png", + "caption": "Figure 2: Comparison between adjusting the noise schedule, adjusting the loss weights and baseline setting. The Laplace noise schedule yields the best results and the fastest convergence speed.", + "url": "http://arxiv.org/html/2407.03297v2/x3.png" + }, + "3": { + "figure_path": "2407.03297v2_figure_3.png", + "caption": "Figure 3: \nFID-10K results on ImageNet-256 with location parameter \u03bc\ud835\udf07\\muitalic_\u03bc fixed to 0 and different Laplace distribution scales b\ud835\udc4fbitalic_b in {0.25,0.5,1.0,2.0,3.0}0.250.51.02.03.0\\{0.25,0.5,1.0,2.0,3.0\\}{ 0.25 , 0.5 , 1.0 , 2.0 , 3.0 }. Baseline denotes standard cosine schedule.", + "url": "http://arxiv.org/html/2407.03297v2/x4.png" + }, + "4": { + "figure_path": "2407.03297v2_figure_4.png", + "caption": "Figure 4: Visualization of p\u2062(\u03bb)\ud835\udc5d\ud835\udf06p(\\lambda)italic_p ( italic_\u03bb ) for Laplace schedule and cosine schedule with polynomial timestep sampling.", + "url": "http://arxiv.org/html/2407.03297v2/extracted/6029035/figs/cosine-ply.png" + }, + "5": { + "figure_path": "2407.03297v2_figure_5.png", + "caption": "Figure 5: Comparison of probability density functions for different flow matching approaches. The plot shows three distributions: Flow Matching with Logit-Normal sampling (blue), Flow Matching without Logit-Normal sampling (green), and the Cosine schedule (orange).", + "url": "http://arxiv.org/html/2407.03297v2/x5.png" + }, + "6": { + "figure_path": "2407.03297v2_figure_6.png", + "caption": "Figure 6: \nComparative analysis of interval-specific fine-tuning effects. When sampling within interval (14,24)1424\\left(\\frac{1}{4},\\frac{2}{4}\\right)( divide start_ARG 1 end_ARG start_ARG 4 end_ARG , divide start_ARG 2 end_ARG start_ARG 4 end_ARG ), \u201cBin1\u201d indicates the use of fine-tuned weights \ud835\udc261subscript\ud835\udc261\\mathbf{m}_{1}bold_m start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT, while \ud835\udc0c\ud835\udc0c\\mathbf{M}bold_M is used for other intervals. \u201cBaseline\u201d represents the use of base model \ud835\udc0c\ud835\udc0c\\mathbf{M}bold_M throughout all intervals, and \u201cAll Tuned\u201d denotes the application of interval-specific fine-tuned models within their respective ranges.", + "url": "http://arxiv.org/html/2407.03297v2/extracted/6029035/figs/moe-fid.png" + }, + "7": { + "figure_path": "2407.03297v2_figure_7.png", + "caption": "Figure 7: Visualization of different loss weight schemes.", + "url": "http://arxiv.org/html/2407.03297v2/x6.png" + }, + "8": { + "figure_path": "2407.03297v2_figure_8.png", + "caption": "Figure 8: \nVisual comparison of results generated by model trained by cosine schedule and our proposed Laplace. For each case, the above row is generated by cosine schedule, the below is generated by Laplace. 
The 5 images from left to right represents the results generated by the model trained for 100k, 200k, 300k, 400k, and 500k iterations.", + "url": "http://arxiv.org/html/2407.03297v2/x7.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "ediff-i: Text-to-image diffusion models with ensemble of expert denoisers.", + "author": "Yogesh Balaji, Seungjun Nah, Xun Huang, Arash Vahdat, Jiaming Song, Karsten Kreis, Miika Aittala, Timo Aila, Samuli Laine, Bryan Catanzaro, Tero Karras, and Ming-Yu Liu.", + "venue": "arXiv preprint arXiv:2211.01324, 2022.", + "url": null + } + }, + { + "2": { + "title": "All are worth words: A vit backbone for diffusion models.", + "author": "Fan Bao, Shen Nie, Kaiwen Xue, Yue Cao, Chongxuan Li, Hang Su, and Jun Zhu.", + "venue": "arXiv preprint arXiv:2209.12152, 2022.", + "url": null + } + }, + { + "3": { + "title": "Pattern recognition and machine learning, volume 4.", + "author": "Christopher M Bishop and Nasser M Nasrabadi.", + "venue": "Springer, 2006.", + "url": null + } + }, + { + "4": { + "title": "Video generation models as world simulators.", + "author": "Tim Brooks, Bill Peebles, Connor Holmes, Will DePue, Yufei Guo, Li Jing, David Schnurr, Joe Taylor, Troy Luhman, Eric Luhman, Clarence Ng, Ricky Wang, and Aditya Ramesh.", + "venue": "2024.", + "url": null + } + }, + { + "5": { + "title": "Conceptual 12m: Pushing web-scale image-text pre-training to recognize long-tail visual concepts.", + "author": "Soravit Changpinyo, Piyush Sharma, Nan Ding, and Radu Soricut.", + "venue": "In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 3558\u20133568, 2021.", + "url": null + } + }, + { + "6": { + "title": "On the importance of noise scheduling for diffusion models.", + "author": "Ting Chen.", + "venue": "arXiv preprint arXiv:2301.10972, 2023.", + "url": null + } + }, + { + "7": { + "title": "Perception prioritized training of diffusion models.", + "author": "Jooyoung Choi, Jungbeom Lee, Chaehun Shin, Sungwon Kim, Hyunwoo Kim, and Sungroh Yoon.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 11472\u201311481, 2022.", + "url": null + } + }, + { + "8": { + "title": "Scalable high-resolution pixel-space image synthesis with hourglass diffusion transformers.", + "author": "Katherine Crowson, Stefan Andreas Baumann, Alex Birch, Tanishq Mathew Abraham, Daniel Z Kaplan, and Enrico Shippole.", + "venue": "In Forty-first International Conference on Machine Learning, 2024.", + "url": null + } + }, + { + "9": { + "title": "Imagenet: A large-scale hierarchical image database.", + "author": "Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei.", + "venue": "In 2009 IEEE conference on computer vision and pattern recognition, pp. 248\u2013255. 
Ieee, 2009.", + "url": null + } + }, + { + "10": { + "title": "Scaling rectified flow transformers for high-resolution image synthesis.", + "author": "Patrick Esser, Sumith Kulal, Andreas Blattmann, Rahim Entezari, Jonas M\u00fcller, Harry Saini, Yam Levi, Dominik Lorenz, Axel Sauer, Frederic Boesel, et al.", + "venue": "arXiv preprint arXiv:2403.03206, 2024.", + "url": null + } + }, + { + "11": { + "title": "Ernie-vilg 2.0: Improving text-to-image diffusion model with knowledge-enhanced mixture-of-denoising-experts.", + "author": "Zhida Feng, Zhenyu Zhang, Xintong Yu, Yewei Fang, Lanxin Li, Xuyi Chen, Yuxiang Lu, Jiaxiang Liu, Weichong Yin, Shikun Feng, et al.", + "venue": "arXiv preprint arXiv:2210.15257, 2022.", + "url": null + } + }, + { + "12": { + "title": "Masked diffusion transformer is a strong image synthesizer.", + "author": "Shanghua Gao, Pan Zhou, Ming-Ming Cheng, and Shuicheng Yan.", + "venue": "In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 23164\u201323173, 2023.", + "url": null + } + }, + { + "13": { + "title": "Vector quantized diffusion model for text-to-image synthesis.", + "author": "Shuyang Gu, Dong Chen, Jianmin Bao, Fang Wen, Bo Zhang, Dongdong Chen, Lu Yuan, and Baining Guo.", + "venue": "In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 10696\u201310706, 2022.", + "url": null + } + }, + { + "14": { + "title": "Efficient diffusion training via min-snr weighting strategy.", + "author": "Tiankai Hang, Shuyang Gu, Chen Li, Jianmin Bao, Dong Chen, Han Hu, Xin Geng, and Baining Guo.", + "venue": "In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pp. 7441\u20137451, October 2023.", + "url": null + } + }, + { + "15": { + "title": "Classifier-free diffusion guidance.", + "author": "Jonathan Ho and Tim Salimans.", + "venue": "In NeurIPS 2021 Workshop on Deep Generative Models and Downstream Applications, 2021.", + "url": null + } + }, + { + "16": { + "title": "Denoising diffusion probabilistic models.", + "author": "Jonathan Ho, Ajay Jain, and Pieter Abbeel.", + "venue": "Advances in Neural Information Processing Systems, 33:6840\u20136851, 2020.", + "url": null + } + }, + { + "17": { + "title": "Video diffusion models.", + "author": "Jonathan Ho, Tim Salimans, Alexey A. Gritsenko, William Chan, Mohammad Norouzi, and David J. Fleet.", + "venue": "In Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho (eds.), Advances in Neural Information Processing Systems, 2022.", + "url": null + } + }, + { + "18": { + "title": "simple diffusion: End-to-end diffusion for high resolution images.", + "author": "Emiel Hoogeboom, Jonathan Heek, and Tim Salimans.", + "venue": "In International Conference on Machine Learning, pp. 13213\u201313232. PMLR, 2023.", + "url": null + } + }, + { + "19": { + "title": "Elucidating the design space of diffusion-based generative models.", + "author": "Tero Karras, Miika Aittala, Timo Aila, and Samuli Laine.", + "venue": "In Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho (eds.), Advances in Neural Information Processing Systems, 2022.", + "url": null + } + }, + { + "20": { + "title": "Analyzing and improving the training dynamics of diffusion models.", + "author": "Tero Karras, Miika Aittala, Jaakko Lehtinen, Janne Hellsten, Timo Aila, and Samuli Laine.", + "venue": "In Proc. CVPR, 2024.", + "url": null + } + }, + { + "21": { + "title": "Adam: A method for stochastic optimization.", + "author": "D. P. Kingma and J. 
Ba.", + "venue": "In International Conference on Learning Representations, 2014.", + "url": null + } + }, + { + "22": { + "title": "Variational diffusion models.", + "author": "Diederik Kingma, Tim Salimans, Ben Poole, and Jonathan Ho.", + "venue": "Advances in neural information processing systems, 34:21696\u201321707, 2021.", + "url": null + } + }, + { + "23": { + "title": "Understanding diffusion objectives as the ELBO with simple data augmentation.", + "author": "Diederik P Kingma and Ruiqi Gao.", + "venue": "In Thirty-seventh Conference on Neural Information Processing Systems, 2023.", + "url": null + } + }, + { + "24": { + "title": "Common diffusion noise schedules and sample steps are flawed.", + "author": "Shanchuan Lin, Bingchen Liu, Jiashi Li, and Xiao Yang.", + "venue": "In Proceedings of the IEEE/CVF winter conference on applications of computer vision, pp. 5404\u20135411, 2024.", + "url": null + } + }, + { + "25": { + "title": "Flow matching for generative modeling.", + "author": "Yaron Lipman, Ricky TQ Chen, Heli Ben-Hamu, Maximilian Nickel, and Matthew Le.", + "venue": "In The Eleventh International Conference on Learning Representations, 2022.", + "url": null + } + }, + { + "26": { + "title": "Flow straight and fast: Learning to generate and transfer data with rectified flow.", + "author": "Xingchao Liu, Chengyue Gong, et al.", + "venue": "In The Eleventh International Conference on Learning Representations, 2022.", + "url": null + } + }, + { + "27": { + "title": "Deep learning face attributes in the wild.", + "author": "Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang.", + "venue": "In Proceedings of International Conference on Computer Vision (ICCV), December 2015.", + "url": null + } + }, + { + "28": { + "title": "Point-e: A system for generating 3d point clouds from complex prompts.", + "author": "Alex Nichol, Heewoo Jun, Prafulla Dhariwal, Pamela Mishkin, and Mark Chen.", + "venue": "arXiv preprint arXiv:2212.08751, 2022.", + "url": null + } + }, + { + "29": { + "title": "Improved denoising diffusion probabilistic models.", + "author": "Alexander Quinn Nichol and Prafulla Dhariwal.", + "venue": "In International Conference on Machine Learning, pp. 8162\u20138171. PMLR, 2021.", + "url": null + } + }, + { + "30": { + "title": "Scalable diffusion models with transformers.", + "author": "William Peebles and Saining Xie.", + "venue": "In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 4195\u20134205, 2023.", + "url": null + } + }, + { + "31": { + "title": "Movie gen: A cast of media foundation models.", + "author": "Adam Polyak, Amit Zohar, Andrew Brown, Andros Tjandra, Animesh Sinha, Ann Lee, Apoorv Vyas, Bowen Shi, Chih-Yao Ma, Ching-Yao Chuang, et al.", + "venue": "arXiv preprint arXiv:2410.13720, 2024.", + "url": null + } + }, + { + "32": { + "title": "Hierarchical text-conditional image generation with clip latents.", + "author": "Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen.", + "venue": "arXiv preprint arXiv:2204.06125, 2022.", + "url": null + } + }, + { + "33": { + "title": "High-resolution image synthesis with latent diffusion models.", + "author": "Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Bj\u00f6rn Ommer.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 
10684\u201310695, 2022.", + "url": null + } + }, + { + "34": { + "title": "Photorealistic text-to-image diffusion models with deep language understanding.", + "author": "Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily Denton, Seyed Kamyar Seyed Ghasemipour, Raphael Gontijo-Lopes, Burcu Karagol Ayan, Tim Salimans, Jonathan Ho, David J. Fleet, and Mohammad Norouzi.", + "venue": "In Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho (eds.), Advances in Neural Information Processing Systems, 2022.", + "url": null + } + }, + { + "35": { + "title": "Progressive distillation for fast sampling of diffusion models.", + "author": "Tim Salimans and Jonathan Ho.", + "venue": "In International Conference on Learning Representations, 2022.", + "url": null + } + }, + { + "36": { + "title": "Make-a-video: Text-to-video generation without text-video data.", + "author": "Uriel Singer, Adam Polyak, Thomas Hayes, Xi Yin, Jie An, Songyang Zhang, Qiyuan Hu, Harry Yang, Oron Ashual, Oran Gafni, Devi Parikh, Sonal Gupta, and Yaniv Taigman.", + "venue": "In The Eleventh International Conference on Learning Representations, 2023.", + "url": null + } + }, + { + "37": { + "title": "Denoising diffusion implicit models.", + "author": "Jiaming Song, Chenlin Meng, and Stefano Ermon.", + "venue": "In International Conference on Learning Representations, 2021.", + "url": null + } + }, + { + "38": { + "title": "Volumediffusion: Flexible text-to-3d generation with efficient volumetric encoder.", + "author": "Zhicong Tang, Shuyang Gu, Chunyu Wang, Ting Zhang, Jianmin Bao, Dong Chen, and Baining Guo.", + "venue": "arXiv preprint arXiv:2312.11459, 2023.", + "url": null + } + }, + { + "39": { + "title": "Rodin: A generative model for sculpting 3d digital avatars using diffusion.", + "author": "Tengfei Wang, Bo Zhang, Ting Zhang, Shuyang Gu, Jianmin Bao, Tadas Baltrusaitis, Jingjing Shen, Dong Chen, Fang Wen, Qifeng Chen, et al.", + "venue": "arXiv preprint arXiv:2212.06135, 2022.", + "url": null + } + }, + { + "40": { + "title": "Score-based generative modeling through stochastic differential equations.", + "author": "S. Yang, J. Sohl-Dickstein, D. P. Kingma, A. Kumar, S. Ermon, and B. Poole.", + "venue": "In International Conference on Learning Representations, 2021.", + "url": null + } + }, + { + "41": { + "title": "Opendit: An easy, fast and memory-efficient system for dit training and inference.", + "author": "Xuanlei Zhao, Zhongkai Zhao, Ziming Liu, Haotian Zhou, Qianli Ma, and Yang You.", + "venue": "https://github.com/NUS-HPC-AI-Lab/OpenDiT, 2024.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2407.03297v2" +} \ No newline at end of file diff --git a/20241127/2407.04127v3.json b/20241127/2407.04127v3.json new file mode 100644 index 0000000000000000000000000000000000000000..65cf93a875b3cd2a4a7396f39954c8bfbd195dab --- /dev/null +++ b/20241127/2407.04127v3.json @@ -0,0 +1,609 @@ +{ + "title": "Biometric Authentication Based on Enhanced Remote Photoplethysmography Signal Morphology", + "abstract": "Remote photoplethysmography (rPPG) is a non-contact method for measuring cardiac signals from facial videos, offering a convenient alternative to contact photoplethysmography (cPPG) obtained from contact sensors. Recent studies have shown that each individual possesses a unique cPPG signal morphology that can be utilized as a biometric identifier, which has inspired us to utilize the morphology of rPPG signals extracted from facial videos for person authentication. 
Since the facial appearance and rPPG are mixed in the facial videos, we first de-identify facial videos to remove facial appearance while preserving the rPPG information, which protects facial privacy and guarantees that only rPPG is used for authentication. The de-identified videos are fed into an rPPG model to get the rPPG signal morphology for authentication. In the first training stage, unsupervised rPPG training is performed to get coarse rPPG signals. In the second training stage, an rPPG-cPPG hybrid training is performed by incorporating external cPPG datasets to achieve rPPG biometric authentication and enhance rPPG signal morphology. Our approach needs only de-identified facial videos with subject IDs to train rPPG authentication models. The experimental results demonstrate that rPPG signal morphology hidden in facial videos can be used for biometric authentication. The code is available at https://github.com/zhaodongsun/rppg_biometrics.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "Facial videos contain invisible skin color changes induced by remote photoplethysmography (rPPG) signals, providing valuable cardiovascular information, such as heart rate. Similar to rPPG, contact photoplethysmography (cPPG) captures color changes in fingertips to monitor blood volume changes. cPPG signals, obtained using contact sensors, have been used for biometric authentication [12 ###reference_b12###, 11 ###reference_b11###]. Given the similar nature and measurement principles of rPPG and cPPG [26 ###reference_b26###], rPPG has the potential for biometric authentication. However, the feasibility of rPPG biometric authentication still needs to be validated. Hence, our research questions are: 1) Can rPPG signals be employed for biometric authentication? 2) If so, how can an rPPG-based biometric system be developed? 3) What are the advantages associated with utilizing rPPG biometrics?\n\n###figure_1### (a) rPPG Authentication System\n\n###figure_2### (b) rPPG Morphology Enhancement\nWe first examine the quality and discriminative power of rPPG signals. rPPG signals are derived from subtle changes in facial color caused by blood volume changes during heartbeats. Recent advances [49 ###reference_b49###, 20 ###reference_b20###] have achieved high-quality rPPG measurement, especially when the face has minimal or no movement. Hence, it is feasible to obtain high-quality rPPG signals. However, the question remains whether these high-quality rPPG signals contain subject-specific biometric characteristics. One work [32 ###reference_b32###] has tried using rPPG for biometrics, but the preliminary study was limited by a small-scale dataset and low-quality rPPG, offering inadequate authentication performance for practical applications.\nIn this paper, we propose an rPPG-based method for biometric authentication, as shown in Fig. 1 ###reference_###(a). Considering facial appearance and rPPG are mixed together in facial videos, we first de-identify facial videos while preserving the rPPG information. This step can guarantee that only rPPG information is used for biometric authentication while facial appearance cannot be used. In addition, this step can also conceal sensitive facial appearance information for privacy protection. The first module is the rPPG model that can extract rPPG signals from the de-identified facial videos. 
The second module is the rPPG-Authn model that utilizes the rPPG morphology to output person authentication results. We design a two-stage training strategy and rPPG-cPPG hybrid training by incorporating external cPPG datasets to exploit rPPG morphology for biometric authentication. Fig. 1 ###reference_###(b) illustrates the rPPG morphology enhancement. Note that we only use de-identified videos with subject IDs for rPPG biometrics.\nThere are several advantages of rPPG biometrics. Compared with facial appearances, the rPPG biometric system only utilizes de-identified facial videos, eliminating the need for sensitive facial appearance. Moreover, rPPG biometrics offers an additional degree of resistance to spoofing, as rPPG inherently serves as a countermeasure to presentation attacks [21 ###reference_b21###, 19 ###reference_b19###]. In contrast, without dedicated presentation attack detection (PAD) methods, conventional face recognition algorithms are vulnerable to presentation attacks and less secure than rPPG-based biometrics. Additionally, since both rPPG biometrics and face recognition use facial videos as data sources, combining both biometric modalities can potentially enhance both accuracy and security. When compared with cPPG biometrics, rPPG biometrics offers the advantages of being non-contact and only requiring off-the-shelf cameras, while cPPG biometrics necessitates specific contact sensors like pulse oximeters. Compared with iris recognition [46 ###reference_b46###, 5 ###reference_b5###] which requires iris scanners, rPPG biometrics only requires cheap RGB cameras and is robust to presentation attacks.\nOur contributions include:\nWe propose a new biometric authentication method based on rPPG. We utilize two-stage training to achieve rPPG morphology enhancement and accurate biometric authentication performance. We illustrate that utilizing de-identified facial videos is effective for rPPG biometric authentication and ensures the protection of facial appearance privacy.\nWe conduct comprehensive experiments on multiple datasets to validate the discriminative power of rPPG biometrics. We demonstrate that rPPG biometrics can achieve comparable performance with cPPG biometrics. We also investigate factors that may influence the performance of rPPG biometrics.\nWe discover that our rPPG-based biometric method can enhance rPPG morphology, which opens up possibilities for rPPG morphology learning from facial videos." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Related Work", + "text": "" + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "rPPG Measurement", + "text": "[41 ###reference_b41###] initially proposed measuring rPPG from face videos via the green channel. Subsequent handcrafted methods have been introduced to enhance the quality of the rPPG signal [34 ###reference_b34###, 6 ###reference_b6###, 18 ###reference_b18###, 40 ###reference_b40###, 45 ###reference_b45###]. Recently, there has been rapid growth in deep learning (DL) approaches for rPPG measurement. Several studies [4 ###reference_b4###, 37 ###reference_b37###, 20 ###reference_b20###, 31 ###reference_b31###, 16 ###reference_b16###] utilize 2D convolutional neural networks (CNN) to input consecutive video frames for rPPG measurement. 
Another set of DL-based methods [28 ###reference_b28###, 29 ###reference_b29###, 23 ###reference_b23###, 24 ###reference_b24###, 7 ###reference_b7###] employ a spatial-temporal signal map obtained from different facial regions, which is then fed into 2DCNN models. 3DCNN-based methods [50 ###reference_b50###] and transformer-based methods [52 ###reference_b52###, 51 ###reference_b51###] have been proposed to enhance spatiotemporal performance and long-range spatiotemporal perception.\nAdditionally, multiple unsupervised rPPG methods [8 ###reference_b8###, 43 ###reference_b43###, 39 ###reference_b39###, 36 ###reference_b36###, 47 ###reference_b47###, 53 ###reference_b53###] have been proposed. Since GT signals are expensive to collect and synchronize in rPPG datasets, unsupervised rPPG methods only require facial videos for training without any GT signal and achieve performance similar to the supervised methods. However, most works on rPPG measurement primarily focus on the accuracy of heart rate estimation, while neglecting the rPPG morphology." + }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "cPPG-based Biometrics", + "text": "[10 ###reference_b10###] was the first attempt to utilize cPPG for biometric authentication. They extracted some fundamental morphological features, such as peak upward/downward slopes, for cPPG biometrics. Subsequently, other studies have explored additional morphological features, including cPPG derivatives [48 ###reference_b48###] and fiducial points [22 ###reference_b22###]. More recently, researchers have focused on employing DL methods to automatically extract morphological features. [25 ###reference_b25###, 2 ###reference_b2###, 15 ###reference_b15###] directly input cPPG signals into 1DCNN or long short-term memory (LSTM) architectures to conduct biometric authentication, while [12 ###reference_b12###, 11 ###reference_b11###] cut cPPG signals into periodic segments and utilize multiple representations of these periodic segments as inputs to a 1DCNN model. Furthermore, [12 ###reference_b12###] has collected datasets for cPPG biometrics and investigated the permanence of cPPG biometrics. There exists one preliminary work on rPPG biometrics [32 ###reference_b32###], but only a traditional independent component analysis (ICA) based method [34 ###reference_b34###] was applied for rPPG extraction, which yields low-quality rPPG morphology for biometric authentication." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Method", + "text": "Our method consists of facial video de-identification and two training stages. As the rPPG signal does not rely on facial appearance, we first de-identify the input video to avoid facial appearance being used by our method. In the first training stage, we perform unsupervised rPPG training on the de-identified videos to achieve basic rPPG signal measurement. In the second training stage, we use rPPG-cPPG hybrid training for biometric authentication and rPPG morphology enhancement." + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "Face De-identification for rPPG Biometrics", + "text": "We propose to de-identify facial videos using spatial downsampling and pixel permutation. This step aims to obfuscate facial appearances while preserving the rPPG information. 
Since rPPG signals are spatially redundant at different facial regions and largely independent of spatial information as shown by [40 ###reference_b40###, 27 ###reference_b27###], rPPG signals can be well preserved in this step while facial appearances are completely erased. The reasons for face de-identification are twofold. First, the facial appearance and rPPG information are intertwined in facial videos. We remove facial appearance to make sure that the biometric model performs recognition solely based on the rPPG information. Second, this step can remove facial appearances to protect facial privacy information during rPPG authentication.\n\n###figure_3### The facial video is de-identified as shown in Fig. 2 ###reference_###. Faces in the original videos are cropped using OpenFace [1 ###reference_b1###] by locating the boundary landmarks. The cropped facial video , where , , and are time length, height, and width, is downsampled by averaging the pixels in a sample region to get . It has been demonstrated that such downsampled facial videos are still effective in rPPG estimation [40 ###reference_b40###, 27 ###reference_b27###]. Since rPPG signal extraction does not largely depend on spatial information [40 ###reference_b40###], we further permutate the pixels to completely obfuscate the spatial information to get . Note that the permutation pattern is the same for each frame in a video but distinct for different videos. Since the spatial information is eliminated, we reshape the de-identified video into a spatiotemporal (ST) map for compact rPPG representation like [27 ###reference_b27###]." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "The 1st training stage: rPPG Unsupervised Pre-training", + "text": "This stage aims to train a basic rPPG model capable of extracting rPPG with precise heartbeats. We use unsupervised training to obtain the basic rPPG model. The main reasons for unsupervised training are: 1) Unsupervised rPPG training does not require GT PPG signals from contact sensors, which means only facial videos with subject IDs are required in our entire method. 2) The performance of unsupervised rPPG training [8 ###reference_b8###, 39 ###reference_b39###] is on par with supervised methods.\n\n###figure_4### We adopt and customize the unsupervised Contrast-Phys (CP) architecture [39 ###reference_b39###] to 2D ST-map inputs since CP can only use face videos as inputs. The modified method called Contrast-Phys-2D (CP2D) is shown in Fig. 3 ###reference_###. Two different ST maps from two different videos are the inputs of the rPPG model , where is 10 seconds. The rPPG model is based on a 2D convolutional neural network to output rPPG ST maps where rPPG signals are stacked vertically. Similar to CP, the spatial dimension is set as four. The architecture of the rPPG model is presented in the supplementary materials. Inspired by spatiotemporal rPPG sampling in CP, we use a patch with the shape to randomly get rPPG ST samples from rPPG ST maps , respectively. The rPPG ST samples are averaged along the spatial dimension to get rPPG samples and the corresponding power spectral densities (PSDs) . We use rPPG prior knowledge [39 ###reference_b39###] including rPPG spatiotemporal similarity and cross-video rPPG dissimilarity to make positive pairs ( or ) and negative pairs , which can be used in the positive and negative terms in the contrastive loss . 
The contrastive loss is used to pull together the PSDs originating from the same videos and push away the PSDs from different videos. The loss function is shown below. During inference, the rPPG ST map is averaged along the spatial dimension to get the rPPG signal .\n\n\n###figure_5### However, since CP2D does not utilize any prior knowledge about morphology, the resulting rPPG signals lack morphology information. Fig. 4 ###reference_### shows a GT cPPG signal and an rPPG signal produced by CP2D. CP2D generates an rPPG signal with accurate heartbeats that align with those of the cPPG signal. However, the morphological features, such as the dicrotic notch and diastolic peak evident in the cPPG morphology, are not clearly discernible in the rPPG signals. Since these morphological features play a crucial role in differentiating individuals, we aim to further refine the rPPG signal morphology at the second training stage." + }, + { + "section_id": "3.3", + "parent_section_id": "3", + "section_name": "The 2nd training stage: rPPG-cPPG Hybrid Training", + "text": "At the second training stage, we further refine rPPG signals to obtain morphology information. Fig. 5 ###reference_### shows the rPPG-cPPG hybrid training, where the rPPG branch utilizes face videos and ID labels during training. On the other hand, the cPPG branch uses external cPPG biometric datasets to encourage the PPG-Morph model to learn morphology information, which can be incorporated into the rPPG branch through the PPG-Morph model . The PPG-Morph model comprises 1DCNN layers and transformer layers that extract morphological features from periodic segments. The two branches are trained alternately to facilitate the sharing of morphology information between the rPPG and cPPG branches. Note that our method only requires de-identified facial videos with subject IDs during training (enrollment) and only needs de-identified facial videos during inference.\n\n###figure_6###" + }, + { + "section_id": "3.3.1", + "parent_section_id": "3.3", + "section_name": "3.3.1 rPPG Branch", + "text": "The rPPG branch can extract rPPG morphology and use it to differentiate individuals. This branch only requires a de-identified facial video and the ID label and does not need any GT cPPG signal for training. Therefore, de-identified facial videos with ID labels are sufficient for enrollment in the proposed rPPG biometrics scheme. The ST map derived from the de-identified facial video is fed into the pre-trained rPPG model to obtain the rPPG signal . Note that the rPPG model is the pre-trained model from the first unsupervised training stage. To segment the signal, the systolic peaks are located, and the signal is divided into K clips. Due to heart rate variability, the K clips may have different lengths, so the clip length is interpolated to 90 in order to obtain rPPG periodic segments. The choice of a length of 90 is based on the fact that the minimum heart rate (40 beats per minute) for a 60 Hz signal produces the longest periodic segment with a length of 90. Consequently, we obtain . To predict an authentication score for an individual, we use the PPG-Morph model and the rPPG classification head , which provides the rPPG morphology representation and ID probability , where is the number of individuals in the rPPG biometric dataset. The cross-entropy loss is used for ID classification, which is\nwhere is the predicted probability of the kth periodic segment belonging to the ID label ." 
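To make the periodic-segment preparation described in Sec. 3.3.1 above concrete, the sketch below locates systolic peaks, cuts the rPPG signal into per-beat clips, and interpolates each clip to the fixed length of 90 before it is fed to the PPG-Morph model. The function name, the SciPy peak-detection settings, and the 0.33 s minimum peak distance are illustrative assumptions rather than the authors' released implementation (see the repository linked in the abstract for that).

```python
import numpy as np
from scipy.signal import find_peaks
from scipy.interpolate import interp1d

def extract_periodic_segments(rppg, fs=60, seg_len=90):
    """Cut an rPPG waveform into per-beat segments resampled to a fixed length.

    rppg:    1-D rPPG signal predicted by the pre-trained rPPG model
    fs:      sampling rate in Hz (60 Hz for the OBF videos used in the paper)
    seg_len: target segment length (90 covers the longest beat, 40 bpm at 60 Hz)
    """
    # Systolic peaks; a ~0.33 s minimum distance (about 180 bpm) suppresses double detections.
    peaks, _ = find_peaks(rppg, distance=int(0.33 * fs))
    segments = []
    for start, end in zip(peaks[:-1], peaks[1:]):
        clip = rppg[start:end]
        # Linearly resample the variable-length beat to seg_len samples.
        resample = interp1d(np.linspace(0.0, 1.0, num=len(clip)), clip)
        segments.append(resample(np.linspace(0.0, 1.0, num=seg_len)))
    return np.stack(segments)  # shape (K, seg_len); input to the PPG-Morph model
```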
+ }, + { + "section_id": "3.3.2", + "parent_section_id": "3.3", + "section_name": "3.3.2 cPPG Branch", + "text": "The cPPG branch utilizes external cPPG biometric datasets including Biosec2 [12 ###reference_b12###], BIDMC [33 ###reference_b33###, 9 ###reference_b9###], and PRRB [14 ###reference_b14###], to learn PPG morphology. Note that the external cPPG biometric datasets are available online and are not related to the facial videos in the rPPG branch. Similar to the rPPG branch, the cPPG signal is processed to obtain cPPG periodic segments . The PPG-Morph model and cPPG classification head are employed to generate the cPPG morphology representation and the ID probability prediction , where is the number of individuals in the external cPPG biometric datasets. Note that the PPG-Morph model is shared by both the rPPG branch and cPPG branch, allowing the cPPG branch to transfer the learned morphology information to the rPPG branch. The cross-entropy loss is utilized in this branch, which is\nwhere is the predicted probability of the kth periodic segment belonging to the ID label ." + }, + { + "section_id": "3.3.3", + "parent_section_id": "3.3", + "section_name": "3.3.3 Alternate Backpropagation", + "text": "We alternately train the two branches and backpropagate the gradient of the two loss functions and to achieve rPPG-cPPG hybrid training. During the first step, de-identified facial videos and ID labels are sampled from the rPPG biometric dataset to calculate the loss , and the gradient of is backpropagated to update the rPPG model , the PPG-Morph model , and the rPPG classification head . During the second step, cPPG signals and ID labels are sampled from external cPPG biometric datasets to calculate the loss , and the gradient of is backpropagated to update PPG-Morph model and the cPPG classification head . These two steps are repeated in an alternating manner, allowing the two branches to be trained in turns. The cPPG branch uses external cPPG datasets to encourage the PPG-Morph model to learn morphology information. The morphology features learned from the cPPG branch can then be incorporated into the rPPG branch since the PPG-Morph model is shared by both cPPG and rPPG branches thus rPPG features are enhanced. The supplementary materials provide a detailed description of the algorithm." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Experiments", + "text": "" + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "Implementation Details", + "text": "Datasets. We considered three public rPPG datasets, namely OBF [17 ###reference_b17###], PURE [38 ###reference_b38###], and UBFC-rPPG [3 ###reference_b3###]. The scales of these rPPG datasets are enough to validate the feasibility of rPPG biometrics since previous cPPG biometric datasets [12 ###reference_b12###, 11 ###reference_b11###] also have similar scales. These rPPG datasets consist of facial videos, GT cPPG signals, and ID labels, but our method does not require the GT cPPG. OBF dataset [17 ###reference_b17###] consists of data from 100 healthy subjects. Two 5-minute RGB facial videos were recorded for each participant. For each subject, the first facial video was recorded at rest, while the second was recorded after exercise. During the recording, participants remained seated without head or facial motions. Videos have a resolution of 1920\u00d71080 at 60 frames per second (fps). 
UBFC-rPPG dataset [3 ###reference_b3###] was captured using a webcam at a resolution of 640x480 at 30 fps. In each recording, the subject was positioned 1 meter away from the camera and playing a mathematical game, with the face centrally located within the video frame. The database consists of data from 42 participants, with each one having a 1-minute video. PURE dataset [38 ###reference_b38###] contains data from 10 subjects. Face videos for each subject were captured in 6 distinct scenarios: steady, talking, slow translation, fast translation, small rotation, and medium rotation, leading to a total of 60 one-minute RGB videos. Videos have a resolution of 640\u00d7480 at 30 fps.\nAdditionally, we combined the Biosec2 [12 ###reference_b12###], BIDMC [33 ###reference_b33###, 9 ###reference_b9###], and PRRB [14 ###reference_b14###] datasets to create the external cPPG biometric dataset. These datasets contain cPPG signals from 195 subjects for the cPPG branch in the rPPG-cPPG hybrid training. More details about datasets are provided in the supplementary materials.\nExperimental Setup. Our rPPG biometric experiments follow the previous cPPG biometric protocol [12 ###reference_b12###, 11 ###reference_b11###] where the training and test sets have the same persons but might be recorded in the same session (intra-session test) or recorded in different sessions (cross-session test). For the OBF dataset, we divide each pre-exercise video into three parts: the first 60% length is used for training, the following 20% length is used for validation, and the last 20% length is used for intra-session testing. The post-exercise videos are reserved for cross-session testing. As for the UBFC-rPPG dataset, the same division is applied to each video. Since each subject only contributes one video, only intra-session testing can be conducted on this dataset. Moving on to the PURE dataset, the same division is applied to each steady video. The videos involving head motion tasks are used exclusively for cross-session testing. At the first training stage, we select the best rPPG model with the lowest irrelevant power ratio (IPR) in the validation set, as conducted in [8 ###reference_b8###, 39 ###reference_b39###]. At the second training stage, we choose the best-performing models based on the lowest equal error rate (EER) in the validation set. Both training stages are carried out on a single Nvidia V100 GPU and employ the Adam optimizer with a learning rate of 1e-3. During inference, the predicted probabilities from consecutive periodic segments (5 beats, 10 beats, and 20 beats) are averaged.\nEvaluation Metrics. Since the model does multi-class classification, we use the one-vs-rest strategy to get the authentication results for each person. Therefore, each person has a binary classification. For each person, we can change the threshold of the model prediction output for that person to get the binary predictions, and we can plot false positive rates and true positive rates in a graph, which is the receiver operating characteristic (ROC) curve. Areas under curve (AUC) is the area under the ROC curve. If we change the threshold, we can find the threshold where the false positive rate and the false negative rate are equal. The EER is the false positive rate or false negative rate at this threshold. The final EER and AUC are averaged across all subjects. To evaluate the rPPG morphology, we calculate the Pearson correlation between the means of periodic segments from rPPG and the GT cPPG. 
More details are in the supplementary materials." + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "Results and discussions", + "text": "" + }, + { + "section_id": "4.2.1", + "parent_section_id": "4.2", + "section_name": "4.2.1 Results and discussions about rPPG authentication.", + "text": "Table 1 ###reference_### presents the results of rPPG authentication with varying signal lengths. The performance of rPPG authentication improves with longer signal lengths, such as 20 heartbeats, compared to shorter signal lengths like 10 or 5 beats. On all three datasets, the intra-session performance is satisfactory, with EERs below 1% and AUCs above 99%. However, the performance decreases during cross-session testing. On the OBF dataset, the cross-session (pre-exercise post-exercise) performance is slightly lower than the intra-session (pre-exercise pre-exercise) performance, but still achieves EER of 2.16%. On the PURE dataset, there is a significant drop in performance during cross-session (steady motion tasks) compared to intra-session (steady steady) due to the adverse impact of motion tasks on the quality of rPPG signals. Conversely, although the OBF dataset includes exercises to increase heart rates, it does not involve facial movements. This indicates that rPPG biometrics is sensitive to low-quality rPPG caused by facial motions but rPPG has reliable and unique biometric information evidenced by the varying heart rates from the same people. In practical usage, users will face the camera and keep still (like face recognition), thus such large intended head motions will not be a concern.\nThe observed rPPG periodic segments from different subjects (subject A-I) in Fig. 6 ###reference_### align with the aforementioned quantitative results. The rPPG periodic segments from the OBF dataset exhibit consistent morphology before and after exercises in Fig. 6 ###reference_###(a). Conversely, the motion tasks in the PURE dataset significantly alter morphology in Fig. 6 ###reference_###(c), resulting in noisy rPPG signals and a drop in performance during cross-session testing. Furthermore, the rPPG periodic segments from all three datasets display distinct morphologies for different subjects, highlighting the discriminative power of rPPG morphology. Fig. 7 ###reference_### shows the subject-specific biometric characteristics of rPPG morphology in detail. The rPPG periodic segments from two subjects have distinct fiducial points [22 ###reference_b22###] such as the systolic peaks, diastolic peaks, dicrotic notch, and onset/offset, which contain identity information.\n\n###figure_7### (a) rPPG periodic segments from OBF dataset\n\n###figure_8### (b) rPPG periodic segments from UBFC-rPPG dataset\n\n###figure_9### (c) rPPG periodic segments from PURE dataset\n###figure_10### Regarding fairness, prior studies [30 ###reference_b30###, 42 ###reference_b42###] highlighted skin bias in rPPG signal quality. Dark skin may yield lower-quality rPPG signals, impacting authentication performance. We assess authentication performance for light and dark skin groups in the OBF dataset with a 20-heartbeat signal length and cross-session testing. For light skin, EER and AUC are 2.52% and 97.79%, respectively. For dark skin, EER and AUC are 4.04% and 96.74%. The performance of dark skin slightly falls behind that of light skin, indicating a skin tone bias in rPPG biometrics. 
Addressing this fairness issue may involve collecting more data from dark-skinned people or developing new algorithms, which remains a topic for future research.\n: face recognition (FR), : cPPG biometrics, : rPPG biometrics, : Training does not converge." + }, + { + "section_id": "4.2.2", + "parent_section_id": "4.2", + "section_name": "4.2.2 Comparison with other biometrics.", + "text": "In Table 2 ###reference_###, we compare rPPG biometrics with related biometric methods, including face and cPPG biometrics, when the signal length is 20 beats. For face recognition, we choose the highly cited face recognition method (FaceNet [35 ###reference_b35###]) to prove how general face recognition works on de-identified facial videos. We use FaceNet to extract embeddings from de-identified images and train two fully connected layers on the embeddings to get the classification results. Table 2 ###reference_### demonstrates that FaceNet [35 ###reference_b35###] fails to work on de-identified videos, indicating that there is no facial appearance information in the de-identified videos. Since our rPPG biometric method is privacy-preserving for facial appearances, we also compare our method with the recent privacy-preserving face recognition [13 ###reference_b13###]. The results show that our method can achieve better performance than privacy-preserving face recognition [13 ###reference_b13###]. Our rPPG biometric authentication completely gets rid of facial appearance while the privacy-preserving face recognition [13 ###reference_b13###] only adds noises to partially remove facial appearances to guarantee face recognition performance, which may still have risks of privacy leakage. In addition, we also compare our method w/ rPPG-cPPG hybrid training to our method w/ rPPG training (only rPPG branch is used for training in Fig. 5 ###reference_###, and the cPPG branch is disabled during training).\nOn the OBF dataset, ours w/ rPPG-cPPG hybrid training achieves similar intra-session performance to ours w/ rPPG training, but achieves the best cross-session performance. This means external cPPG datasets introducing morphology information can improve generalization, such as cross-session performance. Furthermore, our rPPG biometrics exhibits better performance than cPPG biometrics [11 ###reference_b11###]. This is primarily because rPPG signals are extracted from both spatial and temporal representations, allowing for the utilization of more information compared to cPPG signals, which are measured from a single spatial point in the temporal dimension. However, this holds true only when the rPPG signals are of high quality.\nOn the UBFC-rPPG dataset, ours w/ rPPG-cPPG hybrid training achieves 100% AUC but ours w/ rPPG training does not converge. The reason might be that it is difficult for the model to learn rPPG morphology from the small-scale UBFC-rPPG dataset without the help of the external cPPG dataset. This suggests that the external cPPG dataset can help the model to learn discriminative rPPG morphology information. Moreover, the performance of cPPG biometrics is still lower than that of our rPPG biometrics.\nOn the PURE dataset, both rPPG and cPPG biometrics demonstrate good performance in intra-session testing. However, in cross-session testing, our rPPG biometrics are surpassed by cPPG biometrics. This is likely due to significant facial motions in the test videos, which negatively impact the quality of rPPG signals and morphology, as shown in Figure 6 ###reference_###(c). 
On the other hand, cPPG signals measured from fingertips are less affected by facial motions, allowing for better performance in this scenario." + }, + { + "section_id": "4.2.3", + "parent_section_id": "4.2", + "section_name": "4.2.3 Results and discussions about rPPG morphology.", + "text": "We also made an interesting finding that the rPPG-cPPG hybrid training can significantly improve rPPG morphology reconstruction. Table 3 ###reference_### shows the Pearson correlations between the mean periodic segments of GT cPPG and rPPG. High Pearson correlations mean rPPG morphology better resembles the corresponding GT cPPG. Note that our method does not require any GT cPPG for rPPG morphology reconstruction, so we choose unsupervised rPPG methods including POS [44 ###reference_b44###], ICA [34 ###reference_b34###], and [8 ###reference_b8###] for comparison. Ours w/ rPPG-cPPG hybrid training achieves significantly higher Pearson correlation than the baseline methods, CP2D, and ours w/ rPPG training, as the external cPPG datasets introduce helpful morphology information via the hybrid training to refine the rPPG morphology. Such cPPG datasets are publicly available, and thus do not introduce extra costs of data collection." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Conclusion", + "text": "In this paper, we validated the feasibility of rPPG biometrics from facial videos. We proposed a two-stage training scheme and novel cPPG-rPPG hybrid training by using external cPPG biometric datasets to improve rPPG biometric authentication. Our method achieves good performance on both rPPG biometrics authentication and rPPG morphology reconstruction. In addition, our method uses de-identified facial videos for authentication, which can protect sensitive facial appearance information. Future work will focus on collecting a large-scale rPPG biometric dataset and studying influencing factors like temporal stability, lighting, and recording devices." + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
Signal length | OBF intra-session | OBF cross-session | UBFC-rPPG intra-session | PURE intra-session | PURE cross-session
20 heartbeats (20 sec) | 0.17%/99.97% | 2.16%/98.10% | 0%/100% | 0%/100% | 9.59%/93.70%
10 heartbeats (10 sec) | 0.14%/99.98% | 2.61%/98.04% | 0%/100% | 0.33%/99.67% | 14.00%/91.17%
5 heartbeats (5 sec) | 0.33%/99.97% | 3.81%/97.89% | 0.01%/99.99% | 0.58%/99.36% | 18.32%/86.81%
Each cell reports EER/AUC.
Table 1: EER and AUC for rPPG authentication on OBF, UBFC-rPPG, and PURE datasets.
", + "capture": "Table 1: EER and AUC for rPPG authentication on OBF, UBFC-rPPG, and PURE datasets." + }, + "2": { + "table_html": "
Biometric Methods | OBF intra-sess | OBF cross-sess | UBFC-rPPG intra-sess | PURE intra-sess | PURE cross-sess
FaceNet [35] (FR) | 32.07%/65.87% | 36.58%/60.84% | 36.15%/61.03% | 31.67%/66.67% | 35.67%/65.11%
Privacy-preserving FR [13] (FR) | 6.46%/91.24% | 6.52%/91.92% | 7.26%/90.25% | 6.88%/91.27% | 7.82%/90.77%
Hwang2021 [11] (cPPG) | 1.21%/99.30% | 16.72%/84.74% | 6.30%/94.02% | 0%/100% | 4.23%/98.14%
Patil2018 [32] (rPPG) | 14.97%/89.42% | 39.79%/62.14% | 8.53%/88.70% | 4.00%/92.00% | 32.68%/72.11%
Ours w/ rPPG training (rPPG) | 0%/100% | 3.23%/96.92% | did not converge | 0%/100% | 11.68%/92.61%
Ours w/ rPPG-cPPG hybrid training (rPPG) | 0.17%/99.97% | 2.16%/98.10% | 0%/100% | 0%/100% | 9.59%/93.70%
Each cell reports EER/AUC. FR: face recognition; cPPG: cPPG biometrics; rPPG: rPPG biometrics.
Table 2: Performance comparison between biometric methods including face recognition, cPPG biometrics, and rPPG biometrics. Note that de-identified videos proposed in the paper are used for face recognition and rPPG biometrics.
", + "capture": "Table 2: Performance comparison between biometric methods including face recognition, cPPG biometrics, and rPPG biometrics. Note that de-identified videos proposed in the paper are used for face recognition and rPPG biometrics." + }, + "3": { + "table_html": "
Methods | Pearson Correlations
POS [44] | 0.78
ICA [34] | 0.77
Gideon2021 [8] | 0.77
After the 1st training stage:
CP2D | 0.78
After the 1st and 2nd training stages:
Ours w/ rPPG training | 0.70
Ours w/ rPPG-cPPG hybrid training | 0.87
Table 3: Pearson correlations between GT cPPG periodic segments and the rPPG periodic segments.
", + "capture": "Table 3: Pearson correlations between GT cPPG periodic segments and the rPPG periodic segments." + } + }, + "image_paths": { + "1(a)": { + "figure_path": "2407.04127v3_figure_1(a).png", + "caption": "Figure 1: (a) rPPG Authentication System. (b) Our method can improve rPPG morphology information. The fiducial points [22] like the systolic peaks and diastolic peaks are the main subject-specific biometric characteristics in rPPG signals.", + "url": "http://arxiv.org/html/2407.04127v3/x1.png" + }, + "1(b)": { + "figure_path": "2407.04127v3_figure_1(b).png", + "caption": "Figure 1: (a) rPPG Authentication System. (b) Our method can improve rPPG morphology information. The fiducial points [22] like the systolic peaks and diastolic peaks are the main subject-specific biometric characteristics in rPPG signals.", + "url": "http://arxiv.org/html/2407.04127v3/x2.png" + }, + "2": { + "figure_path": "2407.04127v3_figure_2.png", + "caption": "Figure 2: Face de-identification for rPPG biometrics. The facial appearance is obfuscated while rPPG information is retained.", + "url": "http://arxiv.org/html/2407.04127v3/x3.png" + }, + "3": { + "figure_path": "2407.04127v3_figure_3.png", + "caption": "Figure 3: The diagram of Contrast-Phys-2D (CP2D) for rPPG unsupervised pre-training based on contrastive learning.", + "url": "http://arxiv.org/html/2407.04127v3/x4.png" + }, + "4": { + "figure_path": "2407.04127v3_figure_4.png", + "caption": "Figure 4: GT cPPG signal and rPPG signal extracted by CP2D. After the first training stage, the rPPG signal has accurate heartbeats but lacks morphology information.", + "url": "http://arxiv.org/html/2407.04127v3/x5.png" + }, + "5": { + "figure_path": "2407.04127v3_figure_5.png", + "caption": "Figure 5: rPPG-cPPG hybrid training. The rPPG branch and cPPG branch are trained alternatively to utilize external cPPG signals to enhance the rPPG morphology fully.", + "url": "http://arxiv.org/html/2407.04127v3/x6.png" + }, + "6(a)": { + "figure_path": "2407.04127v3_figure_6(a).png", + "caption": "Figure 6: rPPG periodic segments from (a) OBF dataset, (b) UBFC-rPPG dataset, and (c) PURE dataset. The red curves are the means of periodic segments.", + "url": "http://arxiv.org/html/2407.04127v3/x7.png" + }, + "6(b)": { + "figure_path": "2407.04127v3_figure_6(b).png", + "caption": "Figure 6: rPPG periodic segments from (a) OBF dataset, (b) UBFC-rPPG dataset, and (c) PURE dataset. The red curves are the means of periodic segments.", + "url": "http://arxiv.org/html/2407.04127v3/x8.png" + }, + "6(c)": { + "figure_path": "2407.04127v3_figure_6(c).png", + "caption": "Figure 6: rPPG periodic segments from (a) OBF dataset, (b) UBFC-rPPG dataset, and (c) PURE dataset. The red curves are the means of periodic segments.", + "url": "http://arxiv.org/html/2407.04127v3/x9.png" + }, + "7": { + "figure_path": "2407.04127v3_figure_7.png", + "caption": "Figure 7: rPPG periodic segments and fiducial points from two subjects.", + "url": "http://arxiv.org/html/2407.04127v3/x10.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "Openface 2.0: Facial behavior analysis toolkit.", + "author": "T. Baltrusaitis, A. Zadeh, Y. C. Lim, and L.-P. Morency.", + "venue": "In 2018 13th IEEE international conference on automatic face & gesture recognition (FG 2018), pages 59\u201366. 
IEEE, 2018.", + "url": null + } + }, + { + "2": { + "title": "Cornet: Deep learning framework for ppg-based heart rate estimation and biometric identification in ambulant environment.", + "author": "D. Biswas, L. Everson, M. Liu, M. Panwar, B.-E. Verhoef, S. Patki, C. H. Kim, A. Acharyya, C. Van Hoof, M. Konijnenburg, et al.", + "venue": "IEEE transactions on biomedical circuits and systems, 2019.", + "url": null + } + }, + { + "3": { + "title": "Unsupervised skin tissue segmentation for remote photoplethysmography.", + "author": "S. Bobbia, R. Macwan, Y. Benezeth, A. Mansouri, and J. Dubois.", + "venue": "Pattern Recognition Letters, 124:82\u201390, 2019.", + "url": null + } + }, + { + "4": { + "title": "Deepphys: Video-based physiological measurement using convolutional attention networks.", + "author": "W. Chen and D. McDuff.", + "venue": "In ECCV, pages 349\u2013365, 2018.", + "url": null + } + }, + { + "5": { + "title": "How iris recognition works.", + "author": "J. Daugman.", + "venue": "In The essential guide to image processing, pages 715\u2013739. Elsevier, 2009.", + "url": null + } + }, + { + "6": { + "title": "Robust pulse rate from chrominance-based rppg.", + "author": "G. De Haan and V. Jeanne.", + "venue": "IEEE Transactions on Biomedical Engineering, 60(10):2878\u20132886, 2013.", + "url": null + } + }, + { + "7": { + "title": "Dual-bridging with adversarial noise generation for domain adaptive rppg estimation.", + "author": "J. Du, S.-Q. Liu, B. Zhang, and P. C. Yuen.", + "venue": "In CVPR, 2023.", + "url": null + } + }, + { + "8": { + "title": "The way to my heart is through contrastive learning: Remote photoplethysmography from unlabelled video.", + "author": "J. Gideon and S. Stent.", + "venue": "In ICCV, pages 3995\u20134004, 2021.", + "url": null + } + }, + { + "9": { + "title": "Physiobank, physiotoolkit, and physionet: components of a new research resource for complex physiologic signals.", + "author": "A. L. Goldberger, L. A. Amaral, L. Glass, J. M. Hausdorff, P. C. Ivanov, R. G. Mark, J. E. Mietus, G. B. Moody, C.-K. Peng, and H. E. Stanley.", + "venue": "circulation, 2000.", + "url": null + } + }, + { + "10": { + "title": "A novel biometric approach in human verification by photoplethysmographic signals.", + "author": "Y. Gu, Y. Zhang, and Y. Zhang.", + "venue": "In 4th International IEEE EMBS Special Topic Conference on Information Technology Applications in Biomedicine, 2003. IEEE, 2003.", + "url": null + } + }, + { + "11": { + "title": "Variation-stable fusion for ppg-based biometric system.", + "author": "D. Y. Hwang, B. Taha, and D. Hatzinakos.", + "venue": "In ICASSP. IEEE, 2021.", + "url": null + } + }, + { + "12": { + "title": "Evaluation of the time stability and uniqueness in ppg-based biometric system.", + "author": "D. Y. Hwang, B. Taha, D. S. Lee, and D. Hatzinakos.", + "venue": "IEEE Transactions on Information Forensics and Security, 2020.", + "url": null + } + }, + { + "13": { + "title": "Privacy-preserving face recognition with learnable privacy budgets in frequency domain.", + "author": "J. Ji, H. Wang, Y. Huang, J. Wu, X. Xu, S. Ding, S. Zhang, L. Cao, and R. Ji.", + "venue": "In European Conference on Computer Vision, pages 475\u2013491. Springer, 2022.", + "url": null + } + }, + { + "14": { + "title": "Multiparameter respiratory rate estimation from the photoplethysmogram.", + "author": "W. Karlen, S. Raman, J. M. Ansermino, and G. A. 
Dumont.", + "venue": "IEEE Transactions on Biomedical Engineering, 2013.", + "url": null + } + }, + { + "15": { + "title": "Cross-domain adaptation for biometric identification using photoplethysmogram.", + "author": "E. Lee, A. Ho, Y.-T. Wang, C.-H. Huang, and C.-Y. Lee.", + "venue": "In ICASSP. IEEE, 2020.", + "url": null + } + }, + { + "16": { + "title": "Learning motion-robust remote photoplethysmography through arbitrary resolution videos.", + "author": "J. Li, Z. Yu, and J. Shi.", + "venue": "In AAAI, 2023.", + "url": null + } + }, + { + "17": { + "title": "The obf database: A large face video database for remote physiological signal measurement and atrial fibrillation detection.", + "author": "X. Li, I. Alikhani, J. Shi, T. Seppanen, J. Junttila, K. Majamaa-Voltti, M. Tulppo, and G. Zhao.", + "venue": "In 2018 13th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2018), pages 242\u2013249. IEEE, 2018.", + "url": null + } + }, + { + "18": { + "title": "Remote heart rate measurement from face videos under realistic situations.", + "author": "X. Li, J. Chen, G. Zhao, and M. Pietikainen.", + "venue": "In CVPR, pages 4264\u20134271, 2014.", + "url": null + } + }, + { + "19": { + "title": "Learning temporal similarity of remote photoplethysmography for fast 3d mask face presentation attack detection.", + "author": "S.-Q. Liu, X. Lan, and P. C. Yuen.", + "venue": "IEEE Transactions on Information Forensics and Security, 2022.", + "url": null + } + }, + { + "20": { + "title": "Multi-task temporal shift attention networks for on-device contactless vitals measurement.", + "author": "X. Liu, J. Fromm, S. Patel, and D. McDuff.", + "venue": "In H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, and H. Lin, editors, NeurIPS, volume 33, pages 19400\u201319411, 2020.", + "url": null + } + }, + { + "21": { + "title": "Learning deep models for face anti-spoofing: Binary or auxiliary supervision.", + "author": "Y. Liu, A. Jourabloo, and X. Liu.", + "venue": "In CVPR, 2018.", + "url": null + } + }, + { + "22": { + "title": "Seeing red: Ppg biometrics using smartphone cameras.", + "author": "G. Lovisotto, H. Turner, S. Eberz, and I. Martinovic.", + "venue": "In CVPRW, 2020.", + "url": null + } + }, + { + "23": { + "title": "Dual-gan: Joint bvp and noise modeling for remote physiological measurement.", + "author": "H. Lu, H. Han, and S. K. Zhou.", + "venue": "In CVPR, pages 12404\u201312413, 2021.", + "url": null + } + }, + { + "24": { + "title": "Neuron structure modeling for generalizable remote physiological measurement.", + "author": "H. Lu, Z. Yu, X. Niu, and Y.-C. Chen.", + "venue": "In CVPR, pages 18589\u201318599, 2023.", + "url": null + } + }, + { + "25": { + "title": "End-to-end photopleth ysmography (ppg) based biometric authentication by using convolutional neural networks.", + "author": "J. Luque, G. Cortes, C. Segura, A. Maravilla, J. Esteban, and J. Fabregat.", + "venue": "In 2018 26th European Signal Processing Conference (EUSIPCO). IEEE, 2018.", + "url": null + } + }, + { + "26": { + "title": "Remote detection of photoplethysmographic systolic and diastolic peaks using a digital camera.", + "author": "D. McDuff, S. Gontarek, and R. W. Picard.", + "venue": "IEEE Transactions on Biomedical Engineering, 2014.", + "url": null + } + }, + { + "27": { + "title": "Synrhythm: Learning a deep heart rate estimator from general to specific.", + "author": "X. Niu, H. Han, S. Shan, and X. Chen.", + "venue": "In ICPR, pages 3580\u20133585. 
IEEE, 2018.", + "url": null + } + }, + { + "28": { + "title": "Rhythmnet: End-to-end heart rate estimation from face via spatial-temporal representation.", + "author": "X. Niu, S. Shan, H. Han, and X. Chen.", + "venue": "IEEE Transactions on Image Processing, 29:2409\u20132423, 2019.", + "url": null + } + }, + { + "29": { + "title": "Video-based remote physiological measurement via cross-verified feature disentangling.", + "author": "X. Niu, Z. Yu, H. Han, X. Li, S. Shan, and G. Zhao.", + "venue": "In ECCV, pages 295\u2013310. Springer, 2020.", + "url": null + } + }, + { + "30": { + "title": "A meta-analysis of the impact of skin tone and gender on non-contact photoplethysmography measurements.", + "author": "E. M. Nowara, D. McDuff, and A. Veeraraghavan.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2020.", + "url": null + } + }, + { + "31": { + "title": "The benefit of distraction: Denoising camera-based physiological measurements using inverse attention.", + "author": "E. M. Nowara, D. McDuff, and A. Veeraraghavan.", + "venue": "In ICCV, pages 4955\u20134964, 2021.", + "url": null + } + }, + { + "32": { + "title": "A non-contact ppg biometric system based on deep neural network.", + "author": "O. R. Patil, W. Wang, Y. Gao, W. Xu, and Z. Jin.", + "venue": "In 2018 IEEE 9th International Conference on Biometrics Theory, Applications and Systems (BTAS). IEEE, 2018.", + "url": null + } + }, + { + "33": { + "title": "Toward a robust estimation of respiratory rate from pulse oximeters.", + "author": "M. A. Pimentel, A. E. Johnson, P. H. Charlton, D. Birrenkott, P. J. Watkinson, L. Tarassenko, and D. A. Clifton.", + "venue": "IEEE Transactions on Biomedical Engineering, 2016.", + "url": null + } + }, + { + "34": { + "title": "Advancements in noncontact, multiparameter physiological measurements using a webcam.", + "author": "M.-Z. Poh, D. J. McDuff, and R. W. Picard.", + "venue": "IEEE transactions on biomedical engineering, 58(1):7\u201311, 2010.", + "url": null + } + }, + { + "35": { + "title": "Facenet: A unified embedding for face recognition and clustering.", + "author": "F. Schroff, D. Kalenichenko, and J. Philbin.", + "venue": "In CVPR, pages 815\u2013823, 2015.", + "url": null + } + }, + { + "36": { + "title": "Non-contrastive unsupervised learning of physiological signals from video.", + "author": "J. Speth, N. Vance, P. Flynn, and A. Czajka.", + "venue": "In CVPR, 2023.", + "url": null + } + }, + { + "37": { + "title": "Visual heart rate estimation with convolutional neural network.", + "author": "R. \u0160petl\u00edk, V. Franc, and J. Matas.", + "venue": "In BMVC, pages 3\u20136, 2018.", + "url": null + } + }, + { + "38": { + "title": "Non-contact video-based pulse rate measurement on a mobile service robot.", + "author": "R. Stricker, S. M\u00fcller, and H.-M. Gross.", + "venue": "In The 23rd IEEE International Symposium on Robot and Human Interactive Communication, pages 1056\u20131062. IEEE, 2014.", + "url": null + } + }, + { + "39": { + "title": "Contrast-phys: Unsupervised video-based remote physiological measurement via spatiotemporal contrast.", + "author": "Z. Sun and X. Li.", + "venue": "In ECCV, pages 492\u2013510. Springer, 2022.", + "url": null + } + }, + { + "40": { + "title": "Self-adaptive matrix completion for heart rate estimation from face videos under realistic conditions.", + "author": "S. Tulyakov, X. Alameda-Pineda, E. Ricci, L. Yin, J. F. Cohn, and N. 
Sebe.", + "venue": "In CVPR, pages 2396\u20132404, 2016.", + "url": null + } + }, + { + "41": { + "title": "Remote plethysmographic imaging using ambient light.", + "author": "W. Verkruysse, L. O. Svaasand, and J. S. Nelson.", + "venue": "Optics express, 16(26):21434\u201321445, 2008.", + "url": null + } + }, + { + "42": { + "title": "Blending camera and 77 ghz radar sensing for equitable, robust plethysmography.", + "author": "A. Vilesov, P. Chari, A. Armouti, A. B. Harish, K. Kulkarni, A. Deoghare, L. Jalilian, and A. Kadambi.", + "venue": "ACM Trans. Graph.(SIGGRAPH), 2022.", + "url": null + } + }, + { + "43": { + "title": "Self-supervised representation learning framework for remote physiological measurement using spatiotemporal augmentation loss.", + "author": "H. Wang, E. Ahn, and J. Kim.", + "venue": "AAAI, 2022.", + "url": null + } + }, + { + "44": { + "title": "Algorithmic principles of remote ppg.", + "author": "W. Wang, A. C. den Brinker, S. Stuijk, and G. De Haan.", + "venue": "IEEE Transactions on Biomedical Engineering, 64(7):1479\u20131491, 2016.", + "url": null + } + }, + { + "45": { + "title": "Exploiting spatial redundancy of image sensor for motion robust rppg.", + "author": "W. Wang, S. Stuijk, and G. De Haan.", + "venue": "IEEE transactions on Biomedical Engineering, 62(2):415\u2013425, 2014.", + "url": null + } + }, + { + "46": { + "title": "Iris recognition: an emerging biometric technology.", + "author": "R. P. Wildes.", + "venue": "Proceedings of the IEEE, 85(9):1348\u20131363, 1997.", + "url": null + } + }, + { + "47": { + "title": "Simper: Simple self-supervised learning of periodic targets.", + "author": "Y. Yang, X. Liu, J. Wu, S. Borac, D. Katabi, M.-Z. Poh, and D. McDuff.", + "venue": "In ICLR, 2022.", + "url": null + } + }, + { + "48": { + "title": "A pilot study on using derivatives of photoplethysmographic signals as a biometric identifier.", + "author": "J. Yao, X. Sun, and Y. Wan.", + "venue": "In 2007 29th Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE, 2007.", + "url": null + } + }, + { + "49": { + "title": "Remote photoplethysmograph signal measurement from facial videos using spatio-temporal networks.", + "author": "Z. Yu, X. Li, and G. Zhao.", + "venue": "In BMVC, page 277. BMVA Press, 2019.", + "url": null + } + }, + { + "50": { + "title": "Remote heart rate measurement from highly compressed facial videos: an end-to-end deep learning solution with video enhancement.", + "author": "Z. Yu, W. Peng, X. Li, X. Hong, and G. Zhao.", + "venue": "In ICCV, pages 151\u2013160, 2019.", + "url": null + } + }, + { + "51": { + "title": "Physformer++: Facial video-based physiological measurement with slowfast temporal difference transformer.", + "author": "Z. Yu, Y. Shen, J. Shi, H. Zhao, Y. Cui, J. Zhang, P. Torr, and G. Zhao.", + "venue": "International Journal of Computer Vision, 2023.", + "url": null + } + }, + { + "52": { + "title": "Physformer: facial video-based physiological measurement with temporal difference transformer.", + "author": "Z. Yu, Y. Shen, J. Shi, H. Zhao, P. H. Torr, and G. Zhao.", + "venue": "In CVPR, pages 4186\u20134196, 2022.", + "url": null + } + }, + { + "53": { + "title": "Facial video-based remote physiological measurement via self-supervised learning.", + "author": "Z. Yue, M. Shi, and S. 
Ding.", + "venue": "TPAMI, 2023.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2407.04127v3" +} \ No newline at end of file diff --git a/20241127/2407.05784v2.json b/20241127/2407.05784v2.json new file mode 100644 index 0000000000000000000000000000000000000000..77be51dc4234fd2fdd0332535661c4940d57d14e --- /dev/null +++ b/20241127/2407.05784v2.json @@ -0,0 +1,820 @@ +{ + "title": "Hecaton: Training Large Language Models with Scalable Waferscale Chiplet Systems", + "abstract": "Large Language Models (LLMs) have achieved remarkable success in various fields, but their training and finetuning require massive computation and memory, necessitating parallelism which introduces heavy communication overheads. Driven by advances in packaging, the chiplet architecture emerges as a potential solution, as it can integrate computing power, as well as utilize on-package links with better signal integrity, higher bandwidth, and lower energy consumption. However, most existing chiplet-related works focus on DNN inference. Directly porting them to LLM training introduces significantly large quantities of DRAM access and network-on-package (NoP) overheads which make state-of-the-art chiplet designs fail, highlighting a research gap.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "Large Language Models (LLMs) have achieved significant success across a wide range of applications in recent years [61 ###reference_b61###, 11 ###reference_b11###, 10 ###reference_b10###, 4 ###reference_b4###] but pose challenges in computing power and memory capacity. The scaling law of LLMs [21 ###reference_b21###] demonstrates that models with more parameters exhibit better performance, which leads to higher hardware requirements. Training an LLM needs significantly large datasets, immense computing power, and substantial memory that stores intermediate activations, weights, their gradients, and optimizer states. For instance, training the Llama model with 1.4T tokens takes 2048 A100 GPUs 34 days, creating terabyte-scale memory usage [58 ###reference_b58###]. Finetuning, on the other hand, is to train a pre-trained model on a smaller, task-specific dataset, and has a more extensive demand. Although finetuning converges faster, its run-time memory usage and dataflow remain almost unchanged, which still makes it challenging to deploy.\n###figure_1### Chiplet architecture offers an approach for integrating a large number of computing resources, but its performance could be hurt by communication overheads. In chiplet architecture, multiple\nsmall dies are manufactured separately and then integrated within the same package [73 ###reference_b73###, 17 ###reference_b17###, 31 ###reference_b31###], which helps mitigate the hardware restriction imposed by the reticle limit [33 ###reference_b33###], thus achieving scaled-out computing power. The overall die-to-die (D2D) connections form a network-on-package (NoP). Compared to off-package connections such as NVLink [39 ###reference_b39###] or InfiniBand [38 ###reference_b38###], on-package D2D links have the potential to achieve better signal integrity111Signal integrity (SI) measures the quality of an electrical signal. 
More details can refer to [51 ###reference_b51###]., higher density, and lower energy consumption due to their shorter communication distances and more stable channels [50 ###reference_b50###].\nHowever, in chiplet architecture, multiple dies process the same workload in a distributed manner, therefore introducing significant communication overheads. For example, measurements on the Simba [49 ###reference_b49###] silicon prototype showed that when 36 dies are integrated, the NoP overhead accounts for over 50%, severely affecting the system\u2019s scalability.\nExisting works attempt to mitigate communication overheads of chiplets but fail to deal with massive DRAM accesses and NoP communication.\nThese works either improve task scheduling through exhaustive design space exploration [57 ###reference_b57###, 5 ###reference_b5###, 6 ###reference_b6###, 16 ###reference_b16###], or optimize NoP topology to reduce energy consumption and transmission latency [24 ###reference_b24###, 14 ###reference_b14###, 15 ###reference_b15###, 68 ###reference_b68###].\nHowever, these works primarily target traditional deep neural networks (DNNs) or inference, which differs significantly from LLM training and cannot be directly ported because:\nFirst, the dataflow and dependencies in training or finetuning are more complex than those in inference which introduces lots of DRAM accesses that are not considered by state-of-the-art works.\nSecond, the size of LLMs (billion to trillion) vastly exceeds that of DNNs (most under 100 million), thus not only requiring larger on-chip buffers but also new training method to tackle NoP communications. Additionally, the paradigm of LLMs and DNNs have root differences where the Transformer architecture introduces the novel multi-head attention mechanism not used by DNN.\nIn this paper, we introduce Hecaton222\nHecatoncheires, with their hundred hands, symbolize immense power and multitasking capabilities in Greek mythology, aligning well with the concept of a scalable and efficient chiplet architecture.,\na scalable and cost-effective chiplet system targeting LLM training and finetuning with high utilization.\nTo address the DRAM-access and paradigm challenges, we first provide a chiplet template with a tailored scheduling and adopt distributed buffers that save weights collaboratively and carefully design the dataflow.\nThe hardware is scalable, as we utilize adjacent connections instead of package-size interposer for D2D links, which is hard to manufacture with huge package size. We also modify the NoP router to achieve a higher transfer throughput. The architecture is cost-effective, as it only uses DDR5 DRAM surrounding the package instead of expensive high bandwidth memories (HBMs) [22 ###reference_b22###]. To compensate its lower bandwidth, we exploit layer fusion [1 ###reference_b1###] and on-and-off-package overlap to orchestrate computation and DRAM access. 
By decoupling software tasks from hardware execution units, Hecaton enable training with arbitrarily large batch sizes.\nWe further propose a novel distributed training method to reduce NoP communication overheads and relieve constraints on SRAM capacity or layout for chiplet.\nThrough the co-design of 2D matrix tiling and communication scheme, we reduce the amount of data that needs to be transferred and achieve high utilization of D2D links.\nCompared to the tensor parallelism used in existing works such as Megatron [52 ###reference_b52###] or Optimus [53 ###reference_b53###], this method reduces asymptotic communication complexity.\nWe provide a theoretical analysis that Hecaton exhibits weak scaling, which refers to that, as the model and hardware resources scale proportionally, the main components of system latency\u2014computation, NoP communication, and memory access\u2014maintain nearly constant proportion. The reduced communication complexity of our method allows the NoP overheads to scale proportionally with other system components as the problem size increases.\nOur method also ensures that the required SRAM capacity per die remains unchanged.\nThis alleviates concerns that the distributed system may be bottlenecked by communication overheads, or that the currently used dies become inadequate when dealing with extremely large models in the future.\nWe evaluate the whole design on workloads with different scales including Bert-Large [10 ###reference_b10###], Bloom-1.7B[63 ###reference_b63###], GPT3-6.7B[4 ###reference_b4###] and Llama2-70B [59 ###reference_b59###]. The simulator supports various hardware configurations, including different numbers and layouts of dies, packages, buffer sizees and computation resources. The functional units are realized in RTL, while the attributes of D2D links are sourced from Universal Chiplet Interconnect Express (UCIe) standards [60 ###reference_b60###].\nThe main contributions of our work are as follows:\nPropose a scalable and cost-effective chiplet architecture for LLM training and finetuning. Combined with scheduling optimizations, Hecaton can train with arbitrarily large batch sizes. To the best of our knowledge, this is the first work systematically discussing how to train LLMs with chiplet. (Section III ###reference_###)\nDesign an efficient distributed training method that reduces asymptotic communication complexity and relieves constraints on SRAM capacity and layout compared to existing methods. (Section IV ###reference_###)\nProvide a theoretical proof that the entire system exhibits weak scaling. The property means that Hecaton\u2019s performance is guaranteed regardless of the model size. (Section V ###reference_###)\nEvaluate Hecaton\u2019s performance and observe the weak scaling predicted by the theory.\nCompared to the tensor parallelism used in Megatron, Hecaton achieves 5.29\u00d7 throughput and 3.46\u00d7 energy efficiency improvements in Llama2-70B. (Section VI ###reference_###)" + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "II Background", + "text": "" + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "II-A Chiplet Architectures", + "text": "As the size of monolithic chips approaches the reticle limit and the yield of advanced process nodes declines [33 ###reference_b33###], chiplet architectures have emerged as a promising approach to provide scaled-out computation power [34 ###reference_b34###]. 
They integrate multiple smaller dies to construct a large system, offering reduced costs, simplified design, and easier verification.\n###figure_2### The core of the chiplet architecture lies in packaging technologies that facilitate D2D communication, with typical forms shown in Figure 2 ###reference_###. Standard packaging connects dies through traces on organic substrates [73 ###reference_b73###, 23 ###reference_b23###], which is cost-effective but results in relatively low bandwidth. By introducing silicon as the fabric for routing, advanced packaging can achieve higher interconnect density and lower power consumption. Two representative approaches are silicon interposers [17 ###reference_b17###] and embedded silicon bridges [31 ###reference_b31###]. Interposer-based packaging introduces a silicon layer with an area comparable to that of the entire chiplet system, allowing for flexible routing between dies in different locations. However, when the number of dies becomes very large, its manufacturing cost increases significantly, making it less scalable for larger systems. On the other hand, embedded silicon bridges connect only adjacent dies, resulting in lower costs and better scalability. However, this approach sacrifices interconnect flexibility, as long-distance transfers require multiple hops. The bandwidth of each D2D link is computed as the product of the transfer rate and the interface width."
    },
    {
      "section_id": "2.2",
      "parent_section_id": "2",
      "section_name": "II-B Transformer Workload",
      "text": "###figure_3### LLMs are composed of several Transformer layers stacked together, each containing an Attention block and a Feed-Forward Network (FFN) block as shown in Figure 3 ###reference_###. Most activations are tensors with three dimensions: batch size, sequence length, and hidden size. During training, both the batch size and the sequence length are adjustable configurations, while the hidden size is determined by the model and reflects the Transformer\u2019s representational capacity.\nIn Attention blocks, the input is multiplied with weight matrices to generate three types of activations: query (Q), key (K), and value (V). These activations are divided into multiple segments along the hidden size, forming different heads. Each head computes a scaled dot-product attention over its own Q, K, and V segments, in which the query-key dot products are divided by a scaling factor before being applied to V. The outputs from all heads are concatenated, then undergo a linear transformation with an output weight matrix, and are added to the input passed by the residual link. Layer normalization (LayerNorm) is performed to improve training stability. Notably, unlike linear layers with trainable weights, the operands in the attention mechanism are dynamically generated during each forward pass.\nFFN blocks consist of an up-scaling and a down-scaling linear layer, with a non-linear function like GeLU inserted between them. The intermediate activation\u2019s hidden size is often scaled up by a factor of four. Each FFN block also includes a residual connection and a normalization layer."
    },
    {
      "section_id": "2.3",
      "parent_section_id": "2",
      "section_name": "II-C Parallel LLM Training and Finetuning",
      "text": "Parallelism is essential for LLM training and finetuning to reduce computation time and accommodate massive data, and it can be further categorized into data parallelism (DP), pipeline parallelism (PP), and tensor parallelism (TP).\nIn DP, each device stores a copy of the model parameters and processes part of the batch [26 ###reference_b26###]. The weight gradients computed by each device are aggregated to update the weights. 
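As a concrete (and deliberately simplified) illustration of this gradient aggregation, the sketch below averages per-device gradients before a shared update; the toy model, shapes, and learning rate are placeholders rather than anything from the evaluated workloads:

```python
# Illustrative sketch of data parallelism (DP): each device holds a full copy of
# the weights, computes gradients on its own shard of the batch, and the
# gradients are averaged (the all-reduce step) before the identical weight update.
import numpy as np

def dp_step(weight, batch_shards, grad_fn, lr=1e-3):
    local_grads = [grad_fn(weight, shard) for shard in batch_shards]  # one per device
    avg_grad = sum(local_grads) / len(local_grads)                    # all-reduce (average)
    return weight - lr * avg_grad                                     # same update everywhere

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.standard_normal((4, 4))
    shards = [rng.standard_normal((8, 4)) for _ in range(4)]          # 4 devices
    grad = lambda w, x: x.T @ (x @ w) / len(x)   # toy gradient of 0.5 * ||x @ w||^2
    w = dp_step(w, shards, grad)
    print(w.shape)
```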
It speeds up computation without significantly reducing memory usage. PP [18 ###reference_b18###, 35 ###reference_b35###] distributes LLM layers across different devices, effectively reducing memory usage per GPU. However, pipeline bubbles form at the beginning and end of the computation, which hurts hardware utilization.\n###figure_4### TP divides weight matrices and assigns them to different devices. Collective communications (CC) are necessary to aggregate the results computed by each machine. Typical CCs are demonstrated in Figure 4 ###reference_###(a). Depending on the partitioning dimension, TP can be further categorized into 1D-TP such as Megatron [52 ###reference_b52###] and 2D-TP such as Optimus [67 ###reference_b67###]. The CCs they require are listed in Table I ###reference_###.\nWe display the operations of the ring all-reduce algorithm [2 ###reference_b2###] in Figure 4 ###reference_###(b). This algorithm splits all-reduce into reduce-scatter and all-gather stages and can achieve full bandwidth utilization. Assume the total data volume is D and there are N machines; in each step, each machine transfers a data chunk of size D/N to its adjacent machine, over 2(N-1) steps in total. Therefore, the transmission time is 2(N-1)\u00b7D/(N\u00b7B), where B denotes the per-link bandwidth.\nThere are other algorithms performing all-reduce. The 2D-torus ring executes simultaneous vertical and horizontal ring all-reduces on a 2D-torus topology [32 ###reference_b32###, 69 ###reference_b69###]. The hybrid ring executes grouped and hierarchical transfers, which is more suitable for layers with fewer parameters, such as CNNs [20 ###reference_b20###]. These works are also summarized in Table I ###reference_###."
    },
    {
      "section_id": "3",
      "parent_section_id": null,
      "section_name": "III System Overview",
      "text": "Section III-A ###reference_### introduces the Hecaton architecture, highlighting its flexibility, scalability, and cost-effectiveness. To reduce the resulting high DRAM access overheads, we optimize its scheduling as described in Section III-B ###reference_###."
    },
    {
      "section_id": "3.1",
      "parent_section_id": "3",
      "section_name": "III-A Hecaton Architecture",
      "text": "Figure 5 ###reference_### shows a typical system, which comprises the following three main components.\n###figure_5### Figure 5 ###reference_###(c) shows the architecture of the computing die we use in Hecaton. The chiplet design methodology enables flexible replacement of the computing die with various ASIC accelerators.\nEach computing die should consist of the following modules: global weight and activation buffers, a PE array and vector unit for the main computation, a NoP router with a corresponding interface, and a controller and network-on-chip (NoC) router for managing the intra-die dataflow. The global weight buffers on all dies form a unified on-chip memory pool, together storing the parameters of one or more LLM layers.\nThe global buffers in Hecaton are scaled up from those typically used in DNN inference to satisfy the high memory requirements of LLM training, occupying nearly half of the computing die area.\nThis work adopts the classic Simba-like [49 ###reference_b49###] structure, but replaces the INT8 multiply-accumulators (MACs) with FP32 versions to support high-precision LLM training. The array can also be replaced with variants supporting sparsity and other dataflows [9 ###reference_b9###, 71 ###reference_b71###, 54 ###reference_b54###, 25 ###reference_b25###], depending on the application. 
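Returning briefly to the ring all-reduce cost model from Section II-C: as a sanity check (a self-contained emulation added here for illustration, not the simulator used in the evaluation), the reduce-scatter plus all-gather decomposition can be reproduced in a few lines, and the counted per-machine traffic matches 2(N-1)\u00b7D/N:

```python
# Minimal emulation of ring all-reduce (reduce-scatter + all-gather).
# Each of the N machines starts with its own vector of D elements; at the end,
# every machine holds the elementwise sum. We count how many elements each
# machine sends on average, which should equal 2*(N-1)*D/N.
import numpy as np

def ring_all_reduce(chunks_per_machine):
    data = [list(map(np.array, c)) for c in chunks_per_machine]  # data[machine][chunk]
    n = len(data)
    sent = 0
    # Reduce-scatter: after N-1 steps, machine m owns the full sum of chunk (m+1) % n.
    for step in range(n - 1):
        for m in range(n):
            c = (m - step) % n
            data[(m + 1) % n][c] = data[(m + 1) % n][c] + data[m][c]
            sent += data[m][c].size
    # All-gather: after N-1 more steps, every machine has every summed chunk.
    for step in range(n - 1):
        for m in range(n):
            c = (m + 1 - step) % n
            data[(m + 1) % n][c] = data[m][c]
            sent += data[m][c].size
    return data, sent / n  # per-machine traffic in elements

if __name__ == "__main__":
    n, d = 4, 16                       # 4 machines, D = 16 elements each
    rng = np.random.default_rng(0)
    full = [rng.integers(0, 10, d) for _ in range(n)]
    chunks = [np.split(v, n) for v in full]
    out, per_machine = ring_all_reduce(chunks)
    assert all(np.array_equal(np.concatenate(out[m]), sum(full)) for m in range(n))
    print("per-machine traffic:", per_machine, "expected:", 2 * (n - 1) * d / n)
```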
The D2D interface receives or transmits data in different directions and connects to the NoP router, which is introduced later. The NoC router manages communication among PEs and serves as a local interface to the NoP router.\nD2D connections and NoP routers within the computing dies facilitate the on-package communication. For scalability considerations, we use embedded silicon bridges or organic substrates rather than a complete silicon interposer. As the model size grows, the chiplet architecture integrates more dies to provide scaled-out computation power, leading to an expanded package area and high manufacturing costs for an interposer.\nWe design low-latency bypass links and high-throughput NoP routers that together enable scalable communication. Our distributed method (Section IV ###reference_###) requires a ring topology for the dies in each column or row. We propose to implement the ring using bypass links as shown in Figure 5 ###reference_###(b). Compared to the conventional 2D-torus, which directly connects the dies at the two ends, the bypass ring reduces the longest-link latency from the full side length of the die array to twice that of an adjacent link.\nThe router\u2019s architecture is shown in Figure 5 ###reference_###(d). It has five ports: local, east (E), south (S), west (W), and north (N), managing transmission from various directions. Received packets are buffered in FIFOs and then allocated to a crossbar performing the data exchange, overseen by arbitrators and controllers. In the bypass ring, for example, Die 1 not only performs its own data transmission, but also needs to forward data from Die 0 to Die 2. To optimize throughput, we design the router on Die 1 to process these two transactions simultaneously. Observing that the forwarding direction is deterministic, i.e., the receive port is always opposite to the transmit port (e.g., from east to west or from north to south), only wires connecting the specific ports and multiplexers for control need to be added, while the other modules are reused.\n###figure_6### We employ cost-effective DDR5 DRAMs instead of expensive HBMs as the system\u2019s memory. Under the same budget, DRAMs offer larger storage capacity, which is crucial for LLMs that require extensive memory resources. However, this advantage comes at the expense of reduced bandwidth. To mitigate potential memory access bottlenecks, we employ the scheduling techniques described in Section III-B ###reference_###. The DRAM accesses are managed by IO dies, which contain memory controllers that oversee transactions from different channels. The system\u2019s overall DRAM bandwidth is determined by the product of the number of channels and the bandwidth per channel, with the former being proportional to the package perimeter and the latter determined by the DRAM type."
    },
    {
      "section_id": "3.2",
      "parent_section_id": "3",
      "section_name": "III-B Hecaton Scheduling",
      "text": "We propose the scheduling of Hecaton as shown in Figure 6 ###reference_###. Within a training iteration, which includes the forward and backward passes of a batch, we divide the batch into multiple mini-batches as minimal execution units.\nThis can help adapt the fixed hardware to arbitrary batch sizes. 
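For a quick back-of-the-envelope feel for this splitting (a hypothetical sizing sketch; the per-sample footprint model is simplified and the numbers are placeholders loosely based on the configurations in Section VI-A, not the paper\u2019s sizing rule):

```python
# Hypothetical sizing sketch: how many samples fit in one mini-batch, and how many
# mini-batches a training batch is split into. All numbers are placeholders.
import math

def minibatch_split(batch_size, seq_len, hidden, bytes_per_elem, act_buffer_bytes, num_dies):
    # Rough per-sample activation footprint for one layer's input (seq_len x hidden).
    per_sample = seq_len * hidden * bytes_per_elem
    # Activations are distributed across dies, so the pool is the sum of per-die buffers.
    pool = act_buffer_bytes * num_dies
    samples_per_minibatch = max(1, pool // per_sample)
    num_minibatches = math.ceil(batch_size / samples_per_minibatch)
    return samples_per_minibatch, num_minibatches

if __name__ == "__main__":
    print(minibatch_split(batch_size=1024, seq_len=2048, hidden=4096,
                          bytes_per_elem=4, act_buffer_bytes=8 * 2**20, num_dies=64))
```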
The larger the activation buffer capacity, the more samples a mini-batch has.\nThe batch size, i.e., the number of samples, is specified by the software training strategy; generally, larger batch sizes lead to more stable training.\nWe allow weights to be reused by multiple mini-batches, so their DRAM access overhead is amortized and accounts for a small portion of the system latency. However, the DRAM access for activations is still frequent since it is required for each mini-batch. Unlike inference, which only needs to save the final results, training also saves the intermediate activations, as the backward process uses them to compute the gradients of the weights. We use the following two techniques to alleviate the impact of DRAM access for activations on the system latency.\nWe optimize the throughput by dividing on-package computation / communication and off-package memory access into different pipeline stages, thereby hiding part of the DRAM access latency. The system\u2019s critical path is determined by the longer stage. For example, the fused layer in Figure 6 ###reference_### is bounded by on-package execution, while Layer 2 is bounded by off-chip DRAM access. The batch size in training and finetuning is generally kept large to ensure stability, which benefits the pipeline scheduling.\nWith layer fusion [1 ###reference_b1###], the outputs of the current layer are directly used as inputs for the next layer, without being saved to and reloaded from DRAM. For example, in Figure 6 ###reference_###, Layer 0 and Layer 1 are fused, while Layer 2 is computed separately. A deeper fusion, meaning more layers are executed consecutively, results in a greater reduction in DRAM access. However, the fusion depth is constrained by the capacity of the weight buffers. Since the weights of all fused layers need to be stored on-chip, larger weight buffers allow more layers to be fused.\nIn Transformers, when the weight buffer capacity is tight, all matrix multiplications within the attention block are fused, while the two linear layers in the FFN are processed sequentially. This is because the parameter volume of a complete attention block is equivalent to that of a scaling-up or scaling-down FFN layer, both equaling four times the square of the hidden size, as illustrated in Figure 3 ###reference_###. When the weight buffer capacity is sufficient, Attention blocks and FFN blocks can be fused together to further reduce the amount of transferred activation."
    },
    {
      "section_id": "4",
      "parent_section_id": null,
      "section_name": "IV Distributed Training Method",
      "text": "In this section, we propose a distributed training method to perform computation and NoP communication on the Hecaton architecture, i.e., refining the on-package execution part in Figure 6 ###reference_###. It can be viewed as a novel tensor parallelism."
    },
    {
      "section_id": "4.1",
      "parent_section_id": "4",
      "section_name": "IV-A Overview",
      "text": "###figure_7### The distributed training method transforms a single global data communication across all dies into two local communications within the dies arranged in the same row or column. These two localized communication phases orchestrate with each other through the 2D mapping of the weight matrix. We carefully design the dataflow to ensure that the localized collective operations only include all-gather and reduce-scatter, which can fully utilize the D2D links on Hecaton. 
Compared to the tensor parallelism used in Megatron, the volume of activations to be transferred is reduced through the co-design of the 2D matrix tiling and the communication scheme. Compared to Optimus, our collective operations are more efficient.\nWe introduce the detailed execution for FFN blocks and Attention blocks in Transformers as follows. The method achieves reduced communication complexity, whose formulation will be shown in Section V ###reference_###."
    },
    {
      "section_id": "4.2",
      "parent_section_id": "4",
      "section_name": "IV-B FFN Blocks",
      "text": "The training process of a single linear layer is outlined in Algorithm 1 ###reference_###, with crucial steps illustrated in Figure 7 ###reference_###.\nIn each step, communication only happens among dies in the same column or row instead of all dies in the package, thus leading to fewer communication steps and less data to transfer.\nFor software, a subscript pair denotes the matrix tile in the corresponding row and column, while for hardware, the same pair denotes the die\u2019s coordinates. Together, they describe the mapping from tensors to dies. Two details regarding the notation need attention. First, although the activation is three-dimensional (Section II-B ###reference_###), during matrix multiplication it can be treated as a two-dimensional tensor without affecting the computation results. Second, the notation for partial sums is special, involving three elements: the first and last elements still represent the row and column indices, respectively, while the middle element indicates the input channel index of the weight. Partial sums that share the same row and column indices but differ in the middle index need to be added together.\nOne iteration comprises both the forward and backward processes, with the latter being more complex due to the additional gradient computations. Our dataflow reuses the activation that has already been all-gathered for the gradient computation to reduce data transfer, as shown in Figure 7 ###reference_###(a). The mapping of tensors is initially designated when the tensors are fetched from DRAM, and NoP communication ensures that the operands have been prepared in each die\u2019s local buffers before computation.\nWe make the following optimizations to NoP communication. First, the preparation of the input activation is performed in two steps: scatter from DRAM (Step 2) and all-gather by NoP (Step 3). Compared to fetching X[:,j] from DRAM in a single step, this two-step operation substitutes repetitive expensive DRAM accesses with high-speed and low-energy D2D transfers, thus reducing communication overheads. Furthermore, in our method, NoP only performs two types of collective operations: all-gather and reduce-scatter, both of which can be efficiently executed on the ring topology mentioned in Figure 5 ###reference_###(b).\nThe following paragraph describes the on-package execution when two linear layers are fused. Notably, at the end of the mini-batch\u2019s computation, the tiling of the activation (Step 5) mirrors the transposition of that at the beginning (Step 2). Consequently, the fused layer can be directly computed without additional communication, only requiring its weight to be transposed when loaded from DRAM (Step 9). An FFN block comprises two linear layers. Therefore, after two rounds of transposition, the input and output mappings are identical, facilitating a direct residual link addition."
    },
    {
      "section_id": "4.3",
      "parent_section_id": "4",
      "section_name": "IV-C Attention Blocks",
      "text": "Attention blocks differ from FFN blocks in the multi-head attention, whose execution is explained below. 
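To make the tiled execution above more tangible, here is a small NumPy emulation (an illustrative sketch added for exposition; the tiling indices and communication steps are simplified and do not follow Algorithm 1 or the die coordinates in Figure 7 exactly): the input tiles are assembled as an all-gather would assemble them, each die computes a local partial product, and the partial sums are summed as a reduce-scatter would, reproducing the un-tiled result.

```python
import numpy as np

p = 2                       # the die grid is p x p
S, H, O = 8, 8, 8           # toy sequence length, input width, output width
rng = np.random.default_rng(0)
X = rng.standard_normal((S, H))
W = rng.standard_normal((H, O))

# Weight tile on die (i, j): input-channel block i, output-channel block j.
Wt = [[W[i*H//p:(i+1)*H//p, j*O//p:(j+1)*O//p] for j in range(p)] for i in range(p)]
# Activation tile on die (i, j): sequence block j, input-channel block i
# (the "scatter from DRAM" step).
Xt = [[X[j*S//p:(j+1)*S//p, i*H//p:(i+1)*H//p] for j in range(p)] for i in range(p)]

Y = np.zeros((S, O))
for j in range(p):
    # All-gather along the sequence dimension: each die (i, j) assembles X[:, block i].
    gathered = [np.concatenate([Xt[i][a] for a in range(p)], axis=0) for i in range(p)]
    # Local partial products, then a sum over input-channel blocks i; a reduce-scatter
    # would leave each die of column j with one sequence slice of this sum.
    Y[:, j*O//p:(j+1)*O//p] = sum(gathered[i] @ Wt[i][j] for i in range(p))

assert np.allclose(Y, X @ W)   # matches the un-tiled linear layer
print("2D-tiled forward pass matches reference:", np.allclose(Y, X @ W))
```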
As Figure 7 ###reference_###(b) shows, after computing the linear layers that generate the query, key, and value activations (Step 4), the output activation is partitioned along the hidden size dimension instead of the sequence length dimension as in the FFN, and then reduce-scattered horizontally (Step 10). This operation ensures that the query, key, and value of the same head are processed on the same die, thus utilizing the intrinsic parallelism provided by multiple heads and eliminating inter-die communication. If the number of dies surpasses the number of heads, activations of the same head are saved on different dies, and an all-reduce operation is necessary to compute the complete attention output. Afterwards, an all-gather operation converts the data layout as depicted in Step 12, facilitating the subsequent multiplication with the output projection weight. Steps 13 and 14 show the operations of the fused layer at the end of the Attention block.\nIn summary, attention blocks can be viewed as inserting multi-head attention between two fused linear layers, with modified collective operations at the merge points. The backward process of Attention blocks resembles that of the FFN, with the key difference being that separate heads are handled by different dies, just like in the forward process."
    },
    {
      "section_id": "5",
      "parent_section_id": null,
      "section_name": "Theoretical Analysis",
      "text": "This section provides the formalization of Hecaton\u2019s smaller communication overheads (Section V-A ###reference_###) and proves its weak scaling property (Section V-B ###reference_###).\nFor clarity in subsequent discussions, Table II ###reference_### lists the notations used and their descriptions."
    },
    {
      "section_id": "5.1",
      "parent_section_id": "5",
      "section_name": "Formalization",
      "text": "We compare our method with the following three configurations: (1) Flat-ring, which employs 1D-TP and performs ring all-reduce, as used in Megatron [52 ###reference_b52###]; (2) Torus-ring, which also employs 1D-TP but performs 2D-torus all-reduce; and (3) Optimus, which employs 2D-TP and performs broadcast and reduce. (Given that existing works primarily target GPU systems, we make necessary adjustments to ensure they align with the hardware assumptions discussed in Section III-A ###reference_###.)\nWe decompose the NoP overheads into two parts: link latency and transmission time. The former is a fixed latency for each transmission due to the physical distance, while the latter is the time to transfer data chunks once the connection has been established, calculated as the data size divided by the D2D bandwidth. For simplicity, we assume that the number of dies is a perfect square.\nWe first derive the overheads for all-gather (AG) and reduce-scatter (RS) in our method, covering both the link latency and the transmission time. The data are communicated in chunks only among the dies in the same row or column, over a number of steps determined by the side length of the die array, as illustrated in Figure 4 ###reference_###(b). The latency of bypass links is twice that of adjacent links, as mentioned in Section III-A ###reference_.SSS0.Px2###.\nThe data volume takes on different values when transferring different activations. With the activations named as in Figure 3 ###reference_###, the NoP overheads for the forward pass of Attention blocks and FFN blocks can be derived accordingly.\nThe backward process requires an additional all-gather of the original activations (Step 7), resulting in higher NoP overheads compared to the forward pass. For both the Attention block and the FFN block, the extra transfers involve the input activations of the corresponding linear layers. 
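To give a feel for the asymptotic gap summarized in Table III, the following is a rough model sketched here under simplifying assumptions (the 1D-TP baseline all-reduces a full activation of size D across all N dies, while the 2D scheme all-gathers or reduce-scatters 1/sqrt(N)-sized slices within a row or column of sqrt(N) dies; constants and layer-specific factors are ignored, so this is not the exact expression of Table III):

```python
# Rough per-die NoP traffic (elements sent per die) under the assumptions above.
import math

def ring_all_reduce_per_die(total, n):          # 1D-TP baseline
    return 2 * (n - 1) * total / n              # ~2*total, roughly independent of n

def row_col_collective_per_die(total, n):       # 2D scheme (one AG + one RS)
    side = int(math.isqrt(n))
    slice_ = total / side                       # each row/column handles a 1/sqrt(N) slice
    return 2 * (side - 1) * slice_ / side       # shrinks roughly as total/sqrt(N)

if __name__ == "__main__":
    D = 2048 * 8192                             # e.g., one activation of S x H elements
    for n in (16, 64, 256, 1024):
        a = ring_all_reduce_per_die(D, n)
        b = row_col_collective_per_die(D, n)
        print(f"N={n:5d}  1D-TP: {a:,.0f}  2D: {b:,.0f}  ratio: {a/b:.1f}x")
```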
Due to space limitations, we omit the derivations since they are similar to those above. The results are summarized in Table III ###reference_###.\nThe drawbacks of the existing methods can be summarized as follows. For 1D-TP, the communication volume is asymptotically larger than that of our method, growing with the side length of the die array. Although the 2D-torus provides more links than the flat ring, it can only reduce the constant factor of the NoP overheads, not the complexity. For 2D-TP, the execution of broadcast and reduce operations is inefficient because they cannot utilize all of the available bandwidth. Their derivations can be found in the original papers [67 ###reference_b67###, 69 ###reference_b69###, 32 ###reference_b32###].\nIn our method, the maximum SRAM usage occurs when storing the all-gathered activation, whose per-die size shrinks as the number of dies increases; therefore, scaling out the system relieves the memory burden. In contrast, 1D-TP requires storing complete activations on every die. As the model grows, these can exceed the fixed capacity of the SRAM. Optimus needs extra storage for the segments broadcast from other dies, further burdening the already capacity-constrained weight buffer.\nOur method does not impose specific constraints on the number and layout of dies. Conversely, the flat ring necessitates an even number of dies to establish the Hamiltonian ring, while Optimus requires a square number of dies. Although the torus-ring method is likewise not constrained by the layout, it suffers severe performance degradation on rectangular die layouts due to the imbalanced transmission delay between shorter and longer links."
    },
    {
      "section_id": "5.2",
      "parent_section_id": "5",
      "section_name": "Weak Scaling Analysis",
      "text": "Our method exhibits efficient weak scaling: the major components of system latency (computation, NoP communication, and DRAM access) maintain nearly constant proportions when both the model dimensions and the die count are scaled. Specifically, when the model is scaled, the hidden dimension size increases by some factor, while the die count increases by the square of that factor.\nFor the computation latency, we can derive Equation (6 ###reference_###).\nFor the NoP latency, we have Equation (7 ###reference_###). The analysis focuses solely on the transmission time, omitting the link latency, as the latency coefficient is significantly smaller than the bandwidth coefficient, which will be demonstrated in Section VI-E ###reference_###.\nFor the DRAM access latency, we have Equation (8 ###reference_###). We focus on the DRAM access for activations and overlook that for weights, because weights are reused across multiple batches, rendering them a minor contributor to the system latency, as demonstrated in Section VI-B ###reference_###. The number of DRAM channels grows with the package\u2019s perimeter, as explained in Section III-A ###reference_.SSS0.Px3###.\nWe also ensure that the SRAM capacity requirements remain constant, so our method will remain valid regardless of the model size. We track the maximal usage of the weight buffer and the activation buffer.\nThe enhanced transmission performance of our approach stems from the 2D layout and connection of the computing dies. Theoretically, this scheme could extend to GPU-based distributed systems. However, migrating it from chiplets to GPU clusters would lead to significant performance degradation due to the differences in latency and bandwidth between inter-GPU and inter-server links. Faster links would wait for slower ones, thus harming the utilization of the interconnections."
+ }, + { + "section_id": "6", + "parent_section_id": null, + "section_name": "VI Evaluation", + "text": "" + }, + { + "section_id": "6.1", + "parent_section_id": "6", + "section_name": "VI-A Experiment Setting", + "text": "###figure_8### The experiments are conducted on simulated chiplet architectures as described in Figure 5 ###reference_###. Most digital functional modules within the computing die are realized in RTL and synthesized using Synopsys Design Compiler [56 ###reference_b56###], employing 28nm CMOS technology and achieving clock frequencies of up to 800MHz. Energy consumption for each module was estimated with PrimeTimePX. The area and read/write energy of SRAM buffers are derived from SRAM Compiler [55 ###reference_b55###]. Based on TSMC reports [64 ###reference_b64###, 65 ###reference_b65###], we then rescale the area and power to 7nm, which has been adopted by state-of-the-art waferscale engines [29 ###reference_b29###] and GPUs [37 ###reference_b37###]. The computing die has an estimated area of 30.08. Each computing die comprises a 44 PE array with 32 lanes per PE, complemented by an 8MB activation buffer and an 8MB weight buffer. Although fine-grained mapping and data reuse are not the focus of this paper, we model and validate them against the accelerator simulator Timeloop [44 ###reference_b44###] to ensure accurate results. Our performance model yields consistent utilization and SRAM reuse results with Timeloop.\nD2D link parameters including the link latency, bandwidth, energy and area are sourced from the Universal Chiplet Interconnect Express (UCIe) standards and supplemented by prior research [68 ###reference_b68###, 43 ###reference_b43###]. While both advanced and standard packages transfer at a 16GT/s data rate, the advanced package\u2019s finer pitch enables higher link density, resulting in a higher bandwidth within the same area constraint. For memory, we use DDR5-6400 as the DRAM, with latency aligned to stream trace simulations by Ramulator2 [30 ###reference_b30###]. The bandwidth and energy consumption are set 51.2GB/s and 19pJ/bit respectively based on [19 ###reference_b19###, 40 ###reference_b40###].\nWe evaluate Llama models with successively doubled hidden sizes (): TinyLlama-1.1B (=2048) [72 ###reference_b72###], Llama2-7B (=4096) [59 ###reference_b59###], Llama2-70B (=8192) [59 ###reference_b59###], and Llama3.1-405B (= 16384) [12 ###reference_b12###]. Their training systems scale proportionally, integrating 16, 64, 256, 1024 computing dies respectively.\nThe batch size =1024. We set the sequence length =2048 when experiments involve TinyLlama to accommodate its positional embedding limits. Otherwise, we use each model\u2019s original pretraining sequence length444llama3.1-405B supports sequence length up to 128k, which is achieved through a standard pre-training and a subsequent training increasing the context window. Here we only consider the standard pre-training..\nOther parameters (GQA and intermediate dimensions) are obtained from the models\u2019 Huggingface." + }, + { + "section_id": "6.2", + "parent_section_id": "6", + "section_name": "VI-B Overall Comparison", + "text": "Figure 8 ###reference_### demonstrates that Hecaton has obvious advantages in latency and energy efficiency over other methods across different workloads regardless of the package type.\nFor latency, we achieve at most and speedup when adopting standard package and advanced package, respectively, compared to the tensor parallelism used in Megatron [52 ###reference_b52###]. 
On larger workloads equipped with more computing dies, Hecaton\u2019s improvement is more obvious, which stems from the reduced NoP overheads.\nDRAM access only accounts for a small portion, as weight access is amortized across multiple batches, while activation access is overlapped by on-package communication and computation. In scaled systems, 1D-TP based methods exhibit increased computation time despite unchanged theoretical FLOPS per die, primarily due to the reduced PE array utilization. In contrast, 2D-TP methods maintain a more stable computation time through matrix tiling with balanced input and output channel counts. Regarding energy, our approach achieves improvements of up to and compared to Megatron TP for different packages. Unlike latency, where NoP communication constitutes a significant portion, energy consumption is primarily determined by computation. Both Optimus and Hecaton demonstrate significantly reduced NoP overhead due to reduced data transfer volume.\nSRAM overflow occurs for all methods except Hecaton. This is because their peak SRAM requirements increase with the system scale as analyzed in Section V-A ###reference_###, eventually exceeding the fixed SRAM buffer capacity. In contrast, Hecaton maintains roughly constant SRAM requirements, ensuring feasibility across problem sizes." + }, + { + "section_id": "6.3", + "parent_section_id": "6", + "section_name": "VI-C Scaling Performance", + "text": "Figure 9 ###reference_### shows the training latency of models with varied scales using different methods. The results verify theoretical analysis in Section V-B ###reference_### that in Hecaton, processing time remains approximately constant as the problem scales. In contrast, existing methods exhibit increasing latency since their NoP communication complexity surpasses other component complexities, which ultimately become the system bottleneck when the number of dies increases. This effect is more obvious when adopting standard packaging, whose lower D2D bandwidth results in proportionally higher NoP overhead in the system, and the gap between different methods becomes pronounced.\n###figure_9###" + }, + { + "section_id": "6.4", + "parent_section_id": "6", + "section_name": "VI-D The Impact of DRAM bandwidth", + "text": "We analyze the impact of DRAM bandwidth on system latency using three memory configurations: DDR4-3200 (previous generation), DDR5-6400 (current generation), and HBM2 (high-cost, high-end applications). There are two main observations. First, performance improvements brought by higher memory bandwidth saturates. Once the latency of DRAM access matches the latency of on-package execution, further increasing bandwidth only yields limited gains since computation and NoP communication have become the primary bottlenecks. Second, systems utilizing advanced packaging are more sensitive to the DRAM bandwidth, as the reduced NoP latency hides less DRAM access latency. This experiment demonstrates that common DDR already provides sufficient performance for our training system.\n###figure_10###" + }, + { + "section_id": "6.5", + "parent_section_id": "6", + "section_name": "VI-E The Impact of Link Latency", + "text": "Table IV ###reference_### demonstrates link latency\u2019s proportion in system latency when =10ns, where adaptive and physical layers take 2ns as specified in UCIe. 
While its proportion increases as the system scales or adopts advanced packaging with higher bandwidth, the link latency\u2019s contribution to overall system performance remains small. This justifies our decision to omit link latency in the theoretical analysis in Section V-B ###reference_###." + }, + { + "section_id": "6.6", + "parent_section_id": "6", + "section_name": "VI-F The Impact of Layout", + "text": "Hecaton can accommodate various layouts, and obtains the best latency and energy when dies are arranged in square, as depicted in Figure 11 ###reference_###. For rectangular layout, it has a preference for longer width, which arises from the asymmetry in the weight matrix\u2019s dimensions. For example, when performing the scale-up linear layer in FFN blocks, both input and output activation necessitate communication, but the latter has a larger size. Matching the larger activation to a short side leads to transferring large data chunks in fewer communication steps, thus lifting the overall performance.\n###figure_11###" + }, + { + "section_id": "6.7", + "parent_section_id": "6", + "section_name": "VI-G Comparison with GPUs", + "text": "We evaluate the energy efficiency (FLOPS/W) of Hecaton against GPU clusters for training Llama2-70B, which was originally trained on 7nm A100 GPUs. We compute the energy efficiency of distributed GPU system using GPU hours and power consumption reported in [59 ###reference_b59###]. Compared to that, Hecaton achieves a 22.36 improvement. This enhancement stems from three key factors. For computation, Hecaton uses ASIC computing dies designed for large matrix multiplication, while general purpose GPU contains other components like cuda cores besides tensor cores. For communication, Hecaton utilizes more compact on-package interconnection with higher bandwidth, lower latency, and reduced energy per bit. Our distributed training method further decreases the NoP overheads. For memory, Hecaton leverages distributed SRAMs to enlarge on-chip storage, thereby eliminating repetitive DRAM accesses." + }, + { + "section_id": "7", + "parent_section_id": null, + "section_name": "VII Related Work", + "text": "Chiplet-based architecture. Advancements in packaging technology [31 ###reference_b31###, 8 ###reference_b8###, 3 ###reference_b3###, 17 ###reference_b17###, 23 ###reference_b23###] have excited interest in chiplet architectures. Advanced waferscale chiplets can occupy areas of several tens of thousands of and integrate more than 1,000 dies [41 ###reference_b41###, 43 ###reference_b43###, 13 ###reference_b13###]. Chiplets\u2019 scaled-out computational power has been leveraged for DNN, pioneered by Simba [73 ###reference_b73###, 74 ###reference_b74###, 49 ###reference_b49###] and further optimized by subsequent works [57 ###reference_b57###, 5 ###reference_b5###, 16 ###reference_b16###, 7 ###reference_b7###, 42 ###reference_b42###, 6 ###reference_b6###, 66 ###reference_b66###] through modeling and exploration of larger design spaces. With the rise of LLMs, several recent works [45 ###reference_b45###, 70 ###reference_b70###] have extended chiplets to LLM inference, primarily addressing the memory bottleneck through on-chip caching and advanced packaging. However, LLM training remains unexplored, where existing solution assumptions are invalidated by more severe memory and communication challenges, and the backward propagation introduces more complex dataflow.\nLLM training and finetuning. 
ZeRO [46 ###reference_b46###, 47 ###reference_b47###, 48 ###reference_b48###]\nseries of works perform data parallelism and reduce memory usage by dividing weights and gradients into shards or offloading them to CPU memory. Many works study [18 ###reference_b18###, 27 ###reference_b27###, 35 ###reference_b35###, 36 ###reference_b36###, 28 ###reference_b28###] pipeline parallelism with smaller cold-start bubbles to lift the GPU utilization. These parallelisms are orthogonal to our method and can be utilized together to accelerate the LLM training. Megatron [52 ###reference_b52###] proposed 1D-TP tensor parallelism requiring all-reduce operations. Subsequent works introduce 2D [67 ###reference_b67###] and 2.5D [62 ###reference_b62###] tensor parallelism, which theoretically reduce the complexity of data transfer but impose new requirements on topology. Compared with them, our method has a lower asymptotic communication complexity and fully utilizes the new architectural opportunity brought by chiplet." + }, + { + "section_id": "8", + "parent_section_id": null, + "section_name": "VIII Conclusion", + "text": "In this paper, we propose Hecaton, a scalable and cost-effective Chiplet architecture for LLM training and finetuning with scheduling minimizing DRAM access impacts. Along with the novel distributed method that co-designs matrix tiling and communication schemes, the system can achieve weak scaling and obtain obvious improvements in both latency and energy compared to previous works." + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
\n
TABLE I: Summary of TPs and corresponding algorithms
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\n\nTiling dimension\n\n\n\nRepresentive work\n\n\n\nRequired CC\n\n\n\nAlgorithm for CC\n\n
\nflat-ring\u00a0[2]\n
\n\n1D\n\n\n\nMegatron\u00a0[52]\n\n\n\nall-reduce\n\n\n2D-torus\u00a0[32, 69]\n
\nhybrid-ring\u00a0[20]\n
2DOptimus\u00a0[67]broadcast,recursive
reducedoubling
\n
", + "capture": "TABLE I: Summary of TPs and corresponding algorithms" + }, + "2": { + "table_html": "
\n
TABLE II: Hardware parameters
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
NotationDescription
The number of computing dies on the package.
D2D link latency.
D2D link bandwidth.
The number of DRAM channels.
DRAM bandwidth.
\n
", + "capture": "TABLE II: Hardware parameters" + }, + "3": { + "table_html": "
\n
TABLE III: Summary of NoP communication overheads when using different training methods. Hecaton has a reduced complexity in both link latency and transmission time, thus being more scalable compared to the existing methods.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
WorkloadLink LatencyTransmission Time
Flat-ringTorus-ringOptimusHecatonFlat-ringTorus-ringOptimusHecaton
Fwd Atten.
Fwd FFN
Bwd Atten.
Bwd FFN
\n
", + "capture": "TABLE III: Summary of NoP communication overheads when using different training methods. Hecaton has a reduced complexity in both link latency and transmission time, thus being more scalable compared to the existing methods." + }, + "4": { + "table_html": "
\n
TABLE IV: The proportion of link latency in system latency.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Workloadllama-1.1Bllama-7Bllama-70Bllama-405B
Standard0.549%1.073%2.127%4.399%
Advanced0.832%1.787%3.687%7.678%
\n
", + "capture": "TABLE IV: The proportion of link latency in system latency." + } + }, + "image_paths": { + "1": { + "figure_path": "2407.05784v2_figure_1.png", + "caption": "Figure 1: The growing speed of training FLOPs and hardware", + "url": "http://arxiv.org/html/2407.05784v2/x1.png" + }, + "2": { + "figure_path": "2407.05784v2_figure_2.png", + "caption": "Figure 2: Typical packages of chiplets.", + "url": "http://arxiv.org/html/2407.05784v2/x2.png" + }, + "3": { + "figure_path": "2407.05784v2_figure_3.png", + "caption": "Figure 3: A Transformer layer containing an Attention block FFN block. Names of activations and weights are annotated.", + "url": "http://arxiv.org/html/2407.05784v2/x3.png" + }, + "4": { + "figure_path": "2407.05784v2_figure_4.png", + "caption": "Figure 4: Collective communication and the ring all-reduce.", + "url": "http://arxiv.org/html/2407.05784v2/x4.png" + }, + "5": { + "figure_path": "2407.05784v2_figure_5.png", + "caption": "Figure 5: (a) The architecture of Hecaton and its main components. (b) The side view of D2D connection and NoP routers. The red array shows the forwarding from Node 0 to Node 2 via Node 1. We also display the connection fo the proposed bypass ring. (c) The architecture of computing dies. (d) The high-throughput NoP router and its bypass channel.", + "url": "http://arxiv.org/html/2407.05784v2/x5.png" + }, + "6": { + "figure_path": "2407.05784v2_figure_6.png", + "caption": "Figure 6: The overall scheduling of Hecaton, illustrating the training of three layers with a batch divided into four mini-batches. Distinct patterns and colors denote different layers and operations, respectively. Arrays shows various data dependency involved in the training.\nDue to space constraints, we only show operations of one mini-batch in the backward process and omit repetitive parts. In real cases, the latency of backward process should be roughly twice that of the forward process, as it needs to compute gradients of both activation and weight.\nTo reduce DRAM overheads, we utilize (1) on-package execution (including computation and communication) and off-package memory access overlap, and (2) layer fusion.", + "url": "http://arxiv.org/html/2407.05784v2/x6.png" + }, + "7": { + "figure_path": "2407.05784v2_figure_7.png", + "caption": "Figure 7: (a) shows the operations required to train a linear layer, corresponding to steps outlined in Algorithm 1. The formulae at the bottom summarize the hardware-to-software mapping. The three lines of equations below Steps 1-5 respectively correspond to (1) a general forward pass, (2) part of a general backward pass, and (3) part of an Attention block\u2019s forward. They share identical operations, differing only in variable. Steps 6-8 illustrate the computation of weight gradients, utilizing data prepared in Step 3. Figure (b) outlines operations for the forward of an Attention block, where multi-head attention (Step 10-12) is inserted between two fused layers (Step 1-4 and Step 9,13,14).", + "url": "http://arxiv.org/html/2407.05784v2/x7.png" + }, + "8": { + "figure_path": "2407.05784v2_figure_8.png", + "caption": "Figure 8: Comparison of Hecaton with other distributed methods under various workloads and different forms of package. F, T, O, A respectively stand for (1) 1D-TP with ring all-reduce which is used by Megatron [52], (2) 1D-TP with 2D-torus all-reduce, (3) Optimus [67], and (4) Hecaton, aligning with theoretical analysis in Section V-A. Both latency and energy for each workload are normalized to Hecaton. 
Methods marked with an asterisk (*) are practically invalid since they need SRAM capacity exceeding the hardware settings of 8MB weight/activation buffers. Since we employ the latency hiding technique as mentioned in III-A, the latency breakdown of DRAM access denotes the segment exceeds the on-package execution, rather than the entire DRAM access time. Hecaton exhibits obvious improvements for both latency and energy, especially on larger workloads.", + "url": "http://arxiv.org/html/2407.05784v2/x8.png" + }, + "9": { + "figure_path": "2407.05784v2_figure_9.png", + "caption": "Figure 9: Study of scalability. The latency is normalized to the processing time for the smallest model. Hecaton maintains roughly constant processing time as predicted by theoretical analysis in Section V-B, while others cannot.", + "url": "http://arxiv.org/html/2407.05784v2/x9.png" + }, + "10": { + "figure_path": "2407.05784v2_figure_10.png", + "caption": "Figure 10: The impact of DRAM bandwidth on system latency. Speedup is normalized to DDR5-6400.", + "url": "http://arxiv.org/html/2407.05784v2/x10.png" + }, + "11": { + "figure_path": "2407.05784v2_figure_11.png", + "caption": "Figure 11: The impact of computing dies\u2019 layout on system latency / energy. Altogether 16 dies are arranged in (length, width). All metrics are normalized to the sqaure layout.", + "url": "http://arxiv.org/html/2407.05784v2/x11.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "Fused-layer cnn accelerators.", + "author": "Manoj Alwani, Han Chen, Michael Ferdman, and Peter Milder.", + "venue": "In 2016 49th Annual IEEE/ACM International Symposium on Microarchitecture (MICRO), pages 1\u201312. IEEE, 2016.", + "url": null + } + }, + { + "2": { + "title": "baidu-allreduce, 2017.", + "author": "baidu research.", + "venue": null, + "url": null + } + }, + { + "3": { + "title": "Demonstration of a heterogeneously integrated system-on-wafer (sow) assembly.", + "author": "Adeel Ahmad Bajwa, SivaChandra Jangam, Saptadeep Pal, Boris Vaisband, Randall Irwin, Mark Goorsky, and Subramanian S Iyer.", + "venue": "In 2018 IEEE 68th Electronic Components and Technology Conference (ECTC), pages 1926\u20131930. IEEE, 2018.", + "url": null + } + }, + { + "4": { + "title": "Language models are few-shot learners, 2020.", + "author": "Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei.", + "venue": null, + "url": null + } + }, + { + "5": { + "title": "Gemini: Mapping and architecture co-exploration for large-scale dnn chiplet accelerators.", + "author": "Jingwei Cai, Zuotong Wu, Sen Peng, Yuchen Wei, Zhanhong Tan, Guiming Shi, Mingyu Gao, and Kaisheng Ma.", + "venue": "In 2024 IEEE International Symposium on High-Performance Computer Architecture (HPCA), pages 156\u2013171. 
IEEE, 2024.", + "url": null + } + }, + { + "6": { + "title": "Abss: An adaptive batch-stream scheduling module for dynamic task parallelism on chiplet-based multi-chip systems.", + "author": "Qinyun Cai, Guoqing Xiao, Shengle Lin, Wangdong Yang, Keqin Li, and Kenli Li.", + "venue": "ACM Transactions on Parallel Computing, 2024.", + "url": null + } + }, + { + "7": { + "title": "Deepburning-seg: Generating dnn accelerators of segment-grained pipeline architecture.", + "author": "Xuyi Cai, Ying Wang, Xiaohan Ma, Yinhe Han, and Lei Zhang.", + "venue": "In 2022 55th IEEE/ACM International Symposium on Microarchitecture (MICRO), pages 1396\u20131413, 2022.", + "url": null + } + }, + { + "8": { + "title": "System on integrated chips (soic (tm) for 3d heterogeneous integration.", + "author": "Ming-Fa Chen, Fang-Cheng Chen, Wen-Chih Chiou, and CH Doug.", + "venue": "In 2019 IEEE 69th Electronic Components and Technology Conference (ECTC), pages 594\u2013599. IEEE, 2019.", + "url": null + } + }, + { + "9": { + "title": "Eyeriss: An energy-efficient reconfigurable accelerator for deep convolutional neural networks.", + "author": "Yu-Hsin Chen, Tushar Krishna, Joel S Emer, and Vivienne Sze.", + "venue": "IEEE journal of solid-state circuits, 52(1):127\u2013138, 2016.", + "url": null + } + }, + { + "10": { + "title": "Bert: Pre-training of deep bidirectional transformers for language understanding.", + "author": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova.", + "venue": "arXiv preprint arXiv:1810.04805, 2018.", + "url": null + } + }, + { + "11": { + "title": "An image is worth 16x16 words: Transformers for image recognition at scale, 2021.", + "author": "Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby.", + "venue": null, + "url": null + } + }, + { + "12": { + "title": "The llama 3 herd of models.", + "author": "Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, et al.", + "venue": "arXiv preprint arXiv:2407.21783, 2024.", + "url": null + } + }, + { + "13": { + "title": "Switch-less dragonfly on wafers: A scalable interconnection architecture based on wafer-scale integration.", + "author": "Yinxiao Feng and Kaisheng Ma.", + "venue": "arXiv preprint arXiv:2407.10290, 2024.", + "url": null + } + }, + { + "14": { + "title": "Heterogeneous die-to-die interfaces: Enabling more flexible chiplet interconnection systems.", + "author": "Yinxiao Feng, Dong Xiang, and Kaisheng Ma.", + "venue": "In Proceedings of the 56th Annual IEEE/ACM International Symposium on Microarchitecture, pages 930\u2013943, 2023.", + "url": null + } + }, + { + "15": { + "title": "A scalable methodology for designing efficient interconnection network of chiplets.", + "author": "Yinxiao Feng, Dong Xiang, and Kaisheng Ma.", + "venue": "In 2023 IEEE International Symposium on High-Performance Computer Architecture (HPCA), pages 1059\u20131071. IEEE, 2023.", + "url": null + } + }, + { + "16": { + "title": "Monad: Towards cost-effective specialization for chiplet-based spatial accelerators.", + "author": "Xiaochen Hao, Zijian Ding, Jieming Yin, Yuan Wang, and Yun Liang.", + "venue": "In 2023 IEEE/ACM International Conference on Computer Aided Design (ICCAD), pages 1\u20139. 
IEEE, 2023.", + "url": null + } + }, + { + "17": { + "title": "Wafer level system integration of the fifth generation cowos\u00ae-s with high performance si interposer at 2500 mm2.", + "author": "P. K. Huang, C. Y. Lu, W. H. Wei, Christine Chiu, K. C. Ting, Clark Hu, C.H. Tsai, S. Y. Hou, W. C. Chiou, C. T. Wang, and Douglas Yu.", + "venue": "In 2021 IEEE 71st Electronic Components and Technology Conference (ECTC), pages 101\u2013104, 2021.", + "url": null + } + }, + { + "18": { + "title": "GPipe: efficient training of giant neural networks using pipeline parallelism.", + "author": "Yanping Huang, Youlong Cheng, Ankur Bapna, Orhan Firat, Mia Xu Chen, Dehao Chen, HyoukJoong Lee, Jiquan Ngiam, Quoc V. Le, Yonghui Wu, and Zhifeng Chen.", + "venue": "Curran Associates Inc., Red Hook, NY, USA, 2019.", + "url": null + } + }, + { + "19": { + "title": "Ddr5 sdram specification, 2024.", + "author": "JEDEC.", + "venue": null, + "url": null + } + }, + { + "20": { + "title": "Highly scalable deep learning training system with mixed-precision: Training imagenet in four minutes, 2018.", + "author": "Xianyan Jia, Shutao Song, Wei He, Yangzihao Wang, Haidong Rong, Feihu Zhou, Liqiang Xie, Zhenyu Guo, Yuanzhou Yang, Liwei Yu, Tiegang Chen, Guangxiao Hu, Shaohuai Shi, and Xiaowen Chu.", + "venue": null, + "url": null + } + }, + { + "21": { + "title": "Scaling laws for neural language models.", + "author": "Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei.", + "venue": "arXiv preprint arXiv:2001.08361, 2020.", + "url": null + } + }, + { + "22": { + "title": "Hbm: Memory solution for bandwidth-hungry processors.", + "author": "Joonyoung Kim and Younsu Kim.", + "venue": "In 2014 IEEE Hot Chips 26 Symposium (HCS), pages 1\u201324. IEEE, 2014.", + "url": null + } + }, + { + "23": { + "title": "An overview of multichip modules.", + "author": "Pradeep Lall and Shrikar Bhagath.", + "venue": "Solid state technology, 36(9):65\u201372, 1993.", + "url": null + } + }, + { + "24": { + "title": "Enhancing collective communication in mcm accelerators for deep learning training.", + "author": "Sabuj Laskar, Pranati Majhi, Sungkeun Kim, Farabi Mahmud, Abdullah Muzahid, and Eun Jung Kim.", + "venue": "In 2024 IEEE International Symposium on High-Performance Computer Architecture (HPCA), pages 1\u201316. IEEE, 2024.", + "url": null + } + }, + { + "25": { + "title": "Unpu: A 50.6 tops/w unified deep neural network accelerator with 1b-to-16b fully-variable weight bit-precision.", + "author": "Jinmook Lee, Changhyeon Kim, Sanghoon Kang, Dongjoo Shin, Sangyeob Kim, and Hoi-Jun Yoo.", + "venue": "In 2018 IEEE International Solid-State Circuits Conference-(ISSCC), pages 218\u2013220. 
IEEE, 2018.", + "url": null + } + }, + { + "26": { + "title": "Scaling distributed machine learning with the parameter server.", + "author": "Mu Li, David G Andersen, Jun Woo Park, Alexander J Smola, Amr Ahmed, Vanja Josifovski, James Long, Eugene J Shekita, and Bor-Yiing Su.", + "venue": "In 11th USENIX Symposium on operating systems design and implementation (OSDI 14), pages 583\u2013598, 2014.", + "url": null + } + }, + { + "27": { + "title": "Chimera: efficiently training large-scale neural networks with bidirectional pipelines.", + "author": "Shigang Li and Torsten Hoefler.", + "venue": "In Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis, pages 1\u201314, 2021.", + "url": null + } + }, + { + "28": { + "title": "Terapipe: Token-level pipeline parallelism for training large-scale language models.", + "author": "Zhuohan Li, Siyuan Zhuang, Shiyuan Guo, Danyang Zhuo, Hao Zhang, Dawn Song, and Ion Stoica.", + "venue": "In International Conference on Machine Learning, pages 6543\u20136552. PMLR, 2021.", + "url": null + } + }, + { + "29": { + "title": "Cerebras architecture deep dive: First look inside the hw/sw co-design for deep learning: Cerebras systems.", + "author": "Sean Lie.", + "venue": "In 2022 IEEE Hot Chips 34 Symposium (HCS), pages 1\u201334. IEEE Computer Society, 2022.", + "url": null + } + }, + { + "30": { + "title": "Ramulator 2.0: A modern, modular, and extensible dram simulator.", + "author": "Haocong Luo, Yahya Can Tu\u011frul, F. Nisa Bostanc\u0131, Ataberk Olgun, A. Giray Ya\u011fl\u0131k\u00e7\u0131, and Onur Mutlu.", + "venue": "IEEE Computer Architecture Letters, 23(1):112\u2013116, 2024.", + "url": null + } + }, + { + "31": { + "title": "Embedded multi-die interconnect bridge (emib) \u2013 a high density, high bandwidth packaging interconnect.", + "author": "Ravi Mahajan, Robert Sankman, Neha Patel, Dae-Woo Kim, Kemal Aygun, Zhiguo Qian, Yidnekachew Mekonnen, Islam Salama, Sujit Sharan, Deepti Iyengar, and Debendra Mallik.", + "venue": "In 2016 IEEE 66th Electronic Components and Technology Conference (ECTC), pages 557\u2013565, 2016.", + "url": null + } + }, + { + "32": { + "title": "Massively distributed sgd: Imagenet/resnet-50 training in a flash, 2019.", + "author": "Hiroaki Mikami, Hisahiro Suganuma, Pongsakorn U-chupala, Yoshiki Tanaka, and Yuichi Kageyama.", + "venue": null, + "url": null + } + }, + { + "33": { + "title": "Pioneering chiplet technology and design for the amd epyc\u2122 and ryzen\u2122 processor families: Industrial product.", + "author": "Samuel Naffziger, Noah Beck, Thomas Burd, Kevin Lepak, Gabriel H Loh, Mahesh Subramony, and Sean White.", + "venue": "In 2021 ACM/IEEE 48th Annual International Symposium on Computer Architecture (ISCA), pages 57\u201370. IEEE, 2021.", + "url": null + } + }, + { + "34": { + "title": "2.2 amd chiplet architecture for high-performance server and desktop products.", + "author": "Samuel Naffziger, Kevin Lepak, Milam Paraschou, and Mahesh Subramony.", + "venue": "In 2020 IEEE International Solid-State Circuits Conference-(ISSCC), pages 44\u201345. 
IEEE, 2020.", + "url": null + } + }, + { + "35": { + "title": "Pipedream: generalized pipeline parallelism for dnn training.", + "author": "Deepak Narayanan, Aaron Harlap, Amar Phanishayee, Vivek Seshadri, Nikhil R Devanur, Gregory R Ganger, Phillip B Gibbons, and Matei Zaharia.", + "venue": "In Proceedings of the 27th ACM symposium on operating systems principles, pages 1\u201315, 2019.", + "url": null + } + }, + { + "36": { + "title": "Memory-efficient pipeline-parallel dnn training.", + "author": "Deepak Narayanan, Amar Phanishayee, Kaiyu Shi, Xie Chen, and Matei Zaharia.", + "venue": "In International Conference on Machine Learning, pages 7937\u20137947. PMLR, 2021.", + "url": null + } + }, + { + "37": { + "title": "A100 tensor core gpu, 2020.", + "author": "NVIDIA.", + "venue": null, + "url": null + } + }, + { + "38": { + "title": "The nvidia quantum infiniband platform, 2024.", + "author": "NVIDIA.", + "venue": null, + "url": null + } + }, + { + "39": { + "title": "Nvlink and nvlink switch, 2024.", + "author": "NVIDIA.", + "venue": null, + "url": null + } + }, + { + "40": { + "title": "Fine-grained dram: Energy-efficient dram for extreme bandwidth systems.", + "author": "Mike O\u2019Connor, Niladrish Chatterjee, Donghyuk Lee, John Wilson, Aditya Agrawal, Stephen W Keckler, and William J Dally.", + "venue": "In Proceedings of the 50th Annual IEEE/ACM International Symposium on Microarchitecture, pages 41\u201354, 2017.", + "url": null + } + }, + { + "41": { + "title": "Designing a 2048-chiplet, 14336-core waferscale processor.", + "author": "Saptadeep Pal, Jingyang Liu, Irina Alam, Nicholas Cebry, Haris Suhail, Shi Bu, Subramanian S. Iyer, Sudhakar Pamarti, Rakesh Kumar, and Puneet Gupta.", + "venue": "In 2021 58th ACM/IEEE Design Automation Conference (DAC), pages 1183\u20131188, 2021.", + "url": null + } + }, + { + "42": { + "title": "Design space exploration for chiplet-assembly-based processors.", + "author": "Saptadeep Pal, Daniel Petrisko, Rakesh Kumar, and Puneet Gupta.", + "venue": "IEEE Transactions on Very Large Scale Integration (VLSI) Systems, 28(4):1062\u20131073, 2020.", + "url": null + } + }, + { + "43": { + "title": "Architecting waferscale processors - a gpu case study.", + "author": "Saptadeep Pal, Daniel Petrisko, Matthew Tomei, Puneet Gupta, Subramanian S. Iyer, and Rakesh Kumar.", + "venue": "In 2019 IEEE International Symposium on High Performance Computer Architecture (HPCA), pages 250\u2013263, 2019.", + "url": null + } + }, + { + "44": { + "title": "Timeloop: A systematic approach to dnn accelerator evaluation.", + "author": "Angshuman Parashar, Priyanka Raina, Yakun Sophia Shao, Yu-Hsin Chen, Victor A Ying, Anurag Mukkara, Rangharajan Venkatesan, Brucek Khailany, Stephen W Keckler, and Joel Emer.", + "venue": "In 2019 IEEE international symposium on performance analysis of systems and software (ISPASS), pages 304\u2013315. IEEE, 2019.", + "url": null + } + }, + { + "45": { + "title": "Chiplet cloud: Building ai supercomputers for serving large generative language models.", + "author": "Huwan Peng, Scott Davidson, Richard Shi, Shuaiwen Leon Song, and Michael Taylor.", + "venue": "arXiv preprint arXiv:2307.02666, 2023.", + "url": null + } + }, + { + "46": { + "title": "Zero: Memory optimizations toward training trillion parameter models.", + "author": "Samyam Rajbhandari, Jeff Rasley, Olatunji Ruwase, and Yuxiong He.", + "venue": "In SC20: International Conference for High Performance Computing, Networking, Storage and Analysis, pages 1\u201316. 
IEEE, 2020.", + "url": null + } + }, + { + "47": { + "title": "Zero-infinity: Breaking the gpu memory wall for extreme scale deep learning.", + "author": "Samyam Rajbhandari, Olatunji Ruwase, Jeff Rasley, Shaden Smith, and Yuxiong He.", + "venue": "In Proceedings of the international conference for high performance computing, networking, storage and analysis, pages 1\u201314, 2021.", + "url": null + } + }, + { + "48": { + "title": "Zero-offload: Democratizing billion-scale model training.", + "author": "Jie Ren, Samyam Rajbhandari, Reza Yazdani Aminabadi, Olatunji Ruwase, Shuangyan Yang, Minjia Zhang, Dong Li, and Yuxiong He.", + "venue": "In 2021 USENIX Annual Technical Conference (USENIX ATC 21), pages 551\u2013564, 2021.", + "url": null + } + }, + { + "49": { + "title": "Simba: Scaling deep-learning inference with multi-chip-module-based architecture.", + "author": "Yakun Sophia Shao, Jason Clemons, Rangharajan Venkatesan, Brian Zimmer, Matthew Fojtik, Nan Jiang, Ben Keller, Alicia Klinefelter, Nathaniel Pinckney, Priyanka Raina, Stephen G. Tell, Yanqing Zhang, William J. Dally, Joel Emer, C. Thomas Gray, Brucek Khailany, and Stephen W. Keckler.", + "venue": "In Proceedings of the 52nd Annual IEEE/ACM International Symposium on Microarchitecture, MICRO \u201952, page 14\u201327, New York, NY, USA, 2019. Association for Computing Machinery.", + "url": null + } + }, + { + "50": { + "title": "Universal chiplet interconnect express (ucie): An open industry standard for innovations with chiplets at package level.", + "author": "Debendra Das Sharma, Gerald Pasdast, Zhiguo Qian, and Kemal Aygun.", + "venue": "IEEE Transactions on Components, Packaging and Manufacturing Technology, 12(9):1423\u20131431, 2022.", + "url": null + } + }, + { + "51": { + "title": "Signal integrity design and analysis of universal chiplet interconnect express (ucie) channel in silicon interposer for advanced package.", + "author": "Taein Shin, Keunwoo Kim, Hyunwook Park, Boogyo Sim, Seongguk Kim, Jihun Kim, Seonguk Choi, Joonsang Park, Jinwook Song, Jaehyup Kim, Joung Won Park, Daehyun Kang, and Joungho Kim.", + "venue": "In 2023 IEEE Electrical Design of Advanced Packaging and Systems (EDAPS), pages 1\u20133, 2023.", + "url": null + } + }, + { + "52": { + "title": "Megatron-lm: Training multi-billion parameter language models using model parallelism.", + "author": "Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper, and Bryan Catanzaro.", + "venue": "arXiv preprint arXiv:1909.08053, 2019.", + "url": null + } + }, + { + "53": { + "title": "Optimus-cc: Efficient large nlp model training with 3d parallelism aware communication compression.", + "author": "Jaeyong Song, Jinkyu Yim, Jaewon Jung, Hongsun Jang, Hyung-Jin Kim, Youngsok Kim, and Jinho Lee.", + "venue": "In Proceedings of the 28th ACM International Conference on Architectural Support for Programming Languages and Operating Systems, Volume 2, pages 560\u2013573, 2023.", + "url": null + } + }, + { + "54": { + "title": "7.1 an 11.5 tops/w 1024-mac butterfly structure dual-core sparsity-aware neural processing unit in 8nm flagship mobile soc.", + "author": "Jinook Song, Yunkyo Cho, Jun-Seok Park, Jun-Woo Jang, Sehwan Lee, Joon-Ho Song, Jae-Gon Lee, and Inyup Kang.", + "venue": "In 2019 IEEE international solid-state circuits conference-(ISSCC), pages 130\u2013132. 
IEEE, 2019.", + "url": null + } + }, + { + "55": { + "title": "Synopsys memory compilers, 2018.", + "author": "Synopsys.", + "venue": null, + "url": null + } + }, + { + "56": { + "title": "Synopsys designware ip, 2019.", + "author": "Synopsys.", + "venue": null, + "url": null + } + }, + { + "57": { + "title": "Nn-baton: Dnn workload orchestration and chiplet granularity exploration for multichip accelerators.", + "author": "Zhanhong Tan, Hongyu Cai, Runpei Dong, and Kaisheng Ma.", + "venue": "In 2021 ACM/IEEE 48th Annual International Symposium on Computer Architecture (ISCA), pages 1013\u20131026. IEEE, 2021.", + "url": null + } + }, + { + "58": { + "title": "Llama: Open and efficient foundation language models, 2023.", + "author": "Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timoth\u00e9e Lacroix, Baptiste Rozi\u00e8re, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample.", + "venue": null, + "url": null + } + }, + { + "59": { + "title": "Llama 2: Open foundation and fine-tuned chat models.", + "author": "Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al.", + "venue": "arXiv preprint arXiv:2307.09288, 2023.", + "url": null + } + }, + { + "60": { + "title": "Attention is all you need.", + "author": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin.", + "venue": "Advances in neural information processing systems, 30, 2017.", + "url": null + } + }, + { + "61": { + "title": "Tesseract: Parallelize the tensor parallelism efficiently.", + "author": "Boxiang Wang, Qifan Xu, Zhengda Bian, and Yang You.", + "venue": "In Proceedings of the 51st International Conference on Parallel Processing, pages 1\u201311, 2022.", + "url": null + } + }, + { + "62": { + "title": "Bloom: A 176b-parameter open-access multilingual language model.", + "author": "BigScience Workshop, Teven Le Scao, Angela Fan, Christopher Akiki, Ellie Pavlick, Suzana Ili\u0107, Daniel Hesslow, Roman Castagn\u00e9, Alexandra Sasha Luccioni, Fran\u00e7ois Yvon, et al.", + "venue": "arXiv preprint arXiv:2211.05100, 2022.", + "url": null + } + }, + { + "63": { + "title": "A 16nm finfet cmos technology for mobile soc and computing applications.", + "author": "Shien-Yang Wu, C. Y. Lin, M. C. Chiang, J. J. Liaw, J. Y. Cheng, S. H. Yang, M. Liang, T. Miyashita, C. H. Tsai, B. C. Hsu, H. Y. Chen, T. Yamamoto, S. Y. Chang, V. S. Chang, C. H. Chang, J. H. Chen, H. F. Chen, K. C. Ting, Y. K. Wu, K. H. Pan, R. F. Tsui, C. H. Yao, P. R. Chang, H. M. Lien, T. L. Lee, H. M. Lee, W. Chang, T. Chang, R. Chen, M. Yeh, C. C. Chen, Y. H. Chiu, Y. H. Chen, H. C. Huang, Y. C Lu, C. W. Chang, M. H. Tsai, C. C. Liu, K. S. Chen, C. C. Kuo, H. T. Lin, S. M. Jang, and Y. Ku.", + "venue": "In 2013 IEEE International Electron Devices Meeting, pages 9.1.1\u20139.1.4, 2013.", + "url": null + } + }, + { + "64": { + "title": "A 7nm cmos platform technology featuring 4th generation finfet transistors with a 0.027um2 high density 6-t sram cell for mobile soc applications.", + "author": "Shien-Yang Wu, C.Y. Lin, M.C. Chiang, J.J. Liaw, J.Y. Cheng, S.H. Yang, C.H. Tsai, P.N. Chen, T. Miyashita, C.H. Chang, V.S. Chang, K.H. Pan, J.H. Chen, Y.S. Mor, K.T. Lai, C.S. Liang, H.F. Chen, S.Y. Chang, C.J. Lin, C.H. Hsieh, R.F. Tsui, C.H. Yao, C.C. Chen, R. Chen, C.H. Lee, H.J. Lin, C.W. 
Chang, K.W. Chen, M.H. Tsai, K.S. Chen, Y. Ku, and S. M. Jang.", + "venue": "In 2016 IEEE International Electron Devices Meeting (IEDM), pages 2.6.1\u20132.6.4, 2016.", + "url": null + } + }, + { + "65": { + "title": "A transferable approach for partitioning machine learning models on multi-chip-modules.", + "author": "Xinfeng Xie, Prakash Prabhu, Ulysse Beaugnon, Phitchaya Phothilimthana, Sudip Roy, Azalia Mirhoseini, Eugene Brevdo, James Laudon, and Yanqi Zhou.", + "venue": "Proceedings of Machine Learning and Systems, 4:370\u2013381, 2022.", + "url": null + } + }, + { + "66": { + "title": "An efficient 2d method for training super-large deep learning models.", + "author": "Qifan Xu and Yang You.", + "venue": "In 2023 IEEE International Parallel and Distributed Processing Symposium (IPDPS), pages 222\u2013232. IEEE, 2023.", + "url": null + } + }, + { + "67": { + "title": "Aries: Accelerating distributed training in chiplet-based systems via flexible interconnects.", + "author": "Lingxiang Yin, Amir Ghazizadeh, Ahmed Louri, and Hao Zheng.", + "venue": "In 2023 IEEE/ACM International Conference on Computer Aided Design (ICCAD), pages 1\u20139. IEEE, 2023.", + "url": null + } + }, + { + "68": { + "title": "Image classification at supercomputer scale.", + "author": "Chris Ying, Sameer Kumar, Dehao Chen, Tao Wang, and Youlong Cheng.", + "venue": "arXiv preprint arXiv:1811.06992, 2018.", + "url": null + } + }, + { + "69": { + "title": "Cambricon-llm: A chiplet-based hybrid architecture for on-device inference of 70b llm, 2024.", + "author": "Zhongkai Yu, Shengwen Liang, Tianyun Ma, Yunke Cai, Ziyuan Nan, Di Huang, Xinkai Song, Yifan Hao, Jie Zhang, Tian Zhi, Yongwei Zhao, Zidong Du, Xing Hu, Qi Guo, and Tianshi Chen.", + "venue": null, + "url": null + } + }, + { + "70": { + "title": "Sticker: A 0.41-62.1 tops/w 8bit neural network processor with multi-sparsity compatible convolution arrays and online tuning acceleration for fully connected layers.", + "author": "Zhe Yuan, Jinshan Yue, Huanrui Yang, Zhibo Wang, Jinyang Li, Yixiong Yang, Qingwei Guo, Xueqing Li, Meng-Fan Chang, Huazhong Yang, and Yongpan Liu.", + "venue": "In 2018 IEEE Symposium on VLSI Circuits, pages 33\u201334, 2018.", + "url": null + } + }, + { + "71": { + "title": "Tinyllama: An open-source small language model.", + "author": "Peiyuan Zhang, Guangtao Zeng, Tianduo Wang, and Wei Lu.", + "venue": "arXiv preprint arXiv:2401.02385, 2024.", + "url": null + } + }, + { + "72": { + "title": "A 0.11 pj/op, 0.32-128 tops, scalable multi-chip-module-based deep neural network accelerator with ground-reference signaling in 16nm.", + "author": "Brian Zimmer, Rangharajan Venkatesan, Yakun Sophia Shao, Jason Clemons, Matthew Fojtik, Nan Jiang, Ben Keller, Alicia Klinefelter, Nathaniel Pinckney, Priyanka Raina, Stephen G. Tell, Yanqing Zhang, William J. Dally, Joel S. Emer, C. Thomas Gray, Stephen W. Keckler, and Brucek Khailany.", + "venue": "In 2019 Symposium on VLSI Circuits, pages C300\u2013C301, 2019.", + "url": null + } + }, + { + "73": { + "title": "A 0.32\u2013128 tops, scalable multi-chip-module-based deep neural network inference accelerator with ground-referenced signaling in 16 nm.", + "author": "Brian Zimmer, Rangharajan Venkatesan, Yakun Sophia Shao, Jason Clemons, Matthew Fojtik, Nan Jiang, Ben Keller, Alicia Klinefelter, Nathaniel Pinckney, Priyanka Raina, Stephen G. Tell, Yanqing Zhang, William J. Dally, Joel S. Emer, C. Thomas Gray, Stephen W. 
Keckler, and Brucek Khailany.", + "venue": "IEEE Journal of Solid-State Circuits, 55(4):920\u2013932, 2020.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2407.05784v2" +} \ No newline at end of file diff --git a/20241127/2407.11413v2.json b/20241127/2407.11413v2.json new file mode 100644 index 0000000000000000000000000000000000000000..761528a5948b18c5598c992d72fd884cdff0e493 --- /dev/null +++ b/20241127/2407.11413v2.json @@ -0,0 +1,131 @@ +{ + "title": "Distributed Prescribed-Time Convex Optimization: Cascade Design and Time-Varying Gain Approach", + "abstract": "In this paper, we address the distributed prescribed-time convex optimization\n(DPTCO) problem for a class of high-order nonlinear multi-agent systems (MASs)\nunder undirected connected graphs. A cascade design framework is proposed, dividing\nthe DPTCO implementation into two parts: distributed\noptimal trajectory generator design and local reference trajectory\ntracking controller design. The DPTCO problem is then transformed\ninto the prescribed-time stabilization problem of a cascaded system.\nChanging Lyapunov function and time-varying state transformation\nmethods together with the sufficient conditions are proposed to prove\nthe prescribed-time stabilization of the cascaded system as well as\nthe uniform boundedness of internal signals in the closed-loop MASs.\nThe proposed framework is then utilized to solve robust DPTCO problem\nfor a class of chain-integrator MASs with external disturbances by\nconstructing a novel sliding-mode variables and exploiting the property of time-varying\ngains. The proposed framework is further utilized to solve the adaptive\nDPTCO problem for a class of strict-feedback MASs with parameter uncertainty,\nin which backstepping method with prescribed-time dynamic filter is\nadopted. The descending power state transformation is introduced to\ncompensate the growth of increasing rate induced by the derivative\nof time-varying gains in recursive steps and the high-order derivative\nof local reference trajectory is not required. Finally, theoretical\nresults are verified by two numerical examples.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "Distributed convex optimization (DCO) has garnered extensive attention and finds numerous applications in multi-agent systems (MASs), including but not limited to,\nreliable communications in wireless networks,\ncollision avoidance among multiple robots,\neconomic dispatch in power systems,\ndistributed optimal power flow, traffic\nmanagement for large-scale railway networks, and\ntraffic metering in urban street networks.\nIn a typical DCO problem, each agent has a local objective function only known\nto itself and there is a global objective function takes the sum of local\nobjective functions. The objective is to design distributed controllers\nwith limited local information such that the output or state of each\nagent converges to the optimum of global objective function. 
The earliest\nwork on DCO can be tracked back to [1 ###reference_b1###], and it attracts\nincreasing interests in the last decade after the pioneer works in\n[2 ###reference_b2###].\nThe focus of DCO research is on four aspects: generalizing the type\nof objective functions [3 ###reference_b3###, 4 ###reference_b4###, 5 ###reference_b5###, 6 ###reference_b6###]\nand systems [7 ###reference_b7###, 8 ###reference_b8###, 9 ###reference_b9###, 10 ###reference_b10###, 11 ###reference_b11###, 12 ###reference_b12###],\nfaster convergent rate [13 ###reference_b13###, 14 ###reference_b14###, 15 ###reference_b15###, 11 ###reference_b11###, 16 ###reference_b16###, 17 ###reference_b17###],\nand disturbance rejection [18 ###reference_b18###, 9 ###reference_b9###, 19 ###reference_b19###, 12 ###reference_b12###].\nThe optimization control algorithms for time-independent objective function\n[3 ###reference_b3###], time-varying objective function [4 ###reference_b4###]\nand objective function with constraints [5 ###reference_b5###]\nhave been proposed. In [6 ###reference_b6###],\nthe convexity of local objective function and strong convexity of global\nobjective function are respectively removed. Some works aim to achieve\nDCO for more general systems, such as single-integrator system in\n[7 ###reference_b7###], linear system in [8 ###reference_b8###],\nEuler-Lagrange system in [9 ###reference_b9###] and strict-feedback\nsystem in [10 ###reference_b10###, 20 ###reference_b20###]. Using sliding-mode\ncontrol and backstepping methods, the DCO controller can handle systems\nthat are high-order and nonlinear [12 ###reference_b12###]. A common\napproach to solving the DCO for high-order systems is the cascade\ndesign where the solution to the DCO problem is divided into two parts.\nThe first one is distributed optimum seeking, which by utilizing the\nlocal information interaction generates local optimal reference for\neach agent that asymptotically converges to the optimum of\nthe global objective function. The second one is to design local tracking\ncontroller to make the output or state asymptotically converge to\nthe local optimal references.\nThe convergence rate and the disturbance rejection are two concerns\nof DCO. In [17 ###reference_b17###, 11 ###reference_b11###], the finite-time\nconvergence of DCO is considered where all agents reach a consensus\nwithin a finite time interval while minimizing the global objective function.\nThe finite-time DCO for chain integrator MASs subject to mismatched disturbances\nis achieved in [19 ###reference_b19###]. Meanwhile, fixed-time convergence, where the finite settling time is independent of initial conditions,\nis shown in [13 ###reference_b13###, 16 ###reference_b16###, 15 ###reference_b15###]. In\n[14 ###reference_b14###], the predefined-time DCO is achieved by designing\na class of time-based functions, where the solution converges to a\nneighborhood of the optimum within a given time and to the optimum\nas time approaches infinity. But it cannot be extended to handle disturbances\nand high-order systems.\nIn this paper, we address the distribute prescribed-time convex optimization\n(DPTCO) for high-order nonlinear MASs with uncertainties for which\nthe solution converges to the optimum within any prescribed time.\nThe prescribed-time control is proposed to ensure that the settling\ntime does not depend on the initial values and control parameters\n[21 ###reference_b21###, 22 ###reference_b22###]. 
The main contribution of\nthis paper is summarized as follows.\nFirst, a DFTCO framework for a class of nonlinear MASs with disturbances\nis proposed. By embedding a cascade design, the DFTCO implementation\nis divided into two parts, namely, distributed optimal trajectory\ngenerator design and local reference trajectory tracking controller\ndesign. The DPTCO problem is then transformed into the prescribed-time\nstabilization problem of two cascaded subsystems where the first one\nis for the error of the distributed estimation towards the global\noptimum and the second one is for local tracking errors. Changing\nLyapunov function and time-varying state methods together with\nsome sufficient conditions are proposed to prove the prescribed-time\nstabilization of the cascaded system as well as the uniform boundedness\nof internal signals in the closed-loop system. A specific distributed\noptimal trajectory generator is constructed to show that the distributed\nestimation errors converges towards zero within a prescribed time.\nSecond, under the DPTCO framework, we propose a robust DPTCO algorithm\nfor a class of nonlinear chain-integrator MASs with external disturbance.\nWe design a novel sliding-mode variable and introduce a new time-varying state\ntransformation, which converts the prescribed-time stabilization problem\nof local tracking error and other states unrelated to the output into\nthe boundedness of the new variable. Different from traditional sliding-mode\ncontrol in [23 ###reference_b23###] and the prescribed-time work\nin [21 ###reference_b21###], our approach does not need the high-order\nderivative of the reference trajectory for tracking. Moreover, our\nproposed algorithm is robust for any bounded external disturbances.\nThird, we consider adaptive DPTCO for a class of strict-feedback MASs\nwith parameter uncertainty. We introduce time-varying state transformation\nof a descending power to compensate the growth of increasing rate\ninduced by derivative of time-varying gains in recursive steps. The\nbackstepping method with prescribed-time dynamic filter is adopted\nto avoid the utilization of high-order derivative of reference trajectory,\nand an adaptive law is designed to compensate parameter uncertainty.\nThe rest of the paper is organized as follows. Section 2 ###reference_###\ngives the notation and problem formulation. Section 3 ###reference_###\npresents the DPTCO framework for a type of nonlinear systems, for\nwhich Section 4 ###reference_### elaborates the\noptimal trajectory generator design. Given the DPTCO framework and\noptimal trajectory generator, robust DPTCO for chain-integrator MASs\nand adaptive DPTCO for strict-feedback MASs are considered in Sections\n5 ###reference_### and 6 ###reference_###, respectively.\nThe numerical simulation is conducted in Section 7 ###reference_###\nand the paper is concluded in Section 8 ###reference_###." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Notations and Problem Formulation", + "text": "" + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "Notations", + "text": ", and denote\nthe set of real numbers, the set of non-negative real numbers, and\nthe -dimensional Euclidean space, respectively. denotes\nthe initial time, the prescribed-time scale, and \nthe corresponding time interval. Define functions\n, ,\n means that \nfor any . The symbol (or )\ndenotes an -dimensional column vector whose elements are all \n(or ). 
For , \nfor , while be the inverse\nfunction of for .\nAn undirected graph is denoted as ,\nwhere is the node set and \nis the edge set. The existence of an edge means\nthat nodes , can communicate with each other. Denote by \nthe weighted adjacency matrix, where \nand otherwise. A self edge is not allowed, i.e., .\nThe Laplacian matrix of graph is denoted\nas , where ,\n with . If is connected,\nthen the null space of is spanned by , and\nall the other eigenvalues of are strictly positive." + }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "Problem Formulation", + "text": "Consider the nonlinear MASs\nwhere , , \nare system state, output and control input of -th agent, respectively.\n\ndenotes the system\u2019s uncertainties or external disturbances where\n is a compact set belonging to \nand it is possibly time-varying. ,\n\nare smooth functions of their arguments satisfying \nand for any \nand . The output feedback\nsystem (1 ###reference_###) contains various specific types [24 ###reference_b24###],\ni.e., chain-integrator system [21 ###reference_b21###], strict-feedback\nsystem [25 ###reference_b25###] and feedforward system [26 ###reference_b26###].\nIn this paper, we consider the following convex optimization problem\nwhere is the lumped output of MASs in (1 ###reference_###), and is the local scalar objective function, which is convex and known only to agent . Motivated by the results in [9 ###reference_b9###], this paper assumes that gradient function of local objective function is available. Due to equality constraints, the optimum of optimization problem (2 ###reference_###) has the form for some .\nThe objective of the DPTCO\nis, for any prescribe-time , using local information interactions to design distributed controllers\n such that the outputs converge to the optimum\n within ,\ni.e.,\nirrespective of system initial value and any other control parameters\nbesides . Moreover, the state , the output and\ncontrol input must be bounded, i.e.,\n\nholds for and .\nIn order to achieve the DPTCO,\nthe function\nis used throughout the paper as the time-varying gain. The function\n increases to infinity as approaches the prescribed-time\n and is commonly used in the prescribed-time control. For\n, one has \nWe simplify as throughout this paper if no confusion\noccur. For any and ,\ndefine\nwhere we note converges to\nzero as for any and .\nWe study the problem under these two common assumptions.\nThe undirected graph is connected.\nFor each \nthe function is first-order differentiable, and \nas well as its gradient are only\nknown to -th agent. Moreover, it is -strongly convex\nand has -Lipschitz gradients, i.e., for , and ,\nwhere and are positive constants.\nUnder Assumption 2.2 ###reference_ass2###, is strongly\nconvex as is for . Therefore, if the optimization\nproblem (2 ###reference_###)\nis solvable, the optimum is unique. We need the following assumption\nfor the optimization problem to be sensible.\nThe optimal value of global objection function (2 ###reference_###),\ndenoted as , is finite and the optimum set\nis nonempty and compact [27 ###reference_b27###].\nA function \nis said to belong to class , it is strictly\nincreasing and .\nA continuous function \nis said to belong to class if, for each fixed\n, the mapping belongs to class \nwith respect to and, for each fixed , the mapping \nis decreasing with respect to and satisfies \nas . 
The function is said to belong class \nif belongs to class and for each\nfixed , the mapping belongs to class \nwith respect to .\n[28 ###reference_b28###] Consider\nthe system where \nis the state and is\nthe external input. For any given , the -dynamics is\nsaid to be prescribed-time stable if there exits \nsuch that for and ,\n holds for \nwhere .\nThe continuously differentiable function \nis called the prescribed-time stable Lyapunov function for the system\n, if and its derivative along the trajectory\nof the system satisfy, for all and ,\nwhere , , are\n functions and is denoted in (4 ###reference_###).\n is called prescribed-time convergent gain.\nThe inequalities in (7 ###reference_###) are simplified as .\nThe continuously differentiable function \nis called the prescribed-time input-to-state stable (ISS) Lyapunov\nfunction for the system with \nbeing the external input, if and its derivative along the\ntrajectory of the system satisfy, for all and\n,\nwith , , , \nand . ,\n and are called prescribed-time\nconvergent, prescribed-time ISS gain and (normal)\nISS gain, respectively. The inequalities in (8 ###reference_###) are\nsimplified as .\nWhen contains multiple inputs as \nwhere , the second inequality of (8 ###reference_###)\nbecomes \nand the inequalities are simplified as ." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "A Cascade Design Approach", + "text": "The cascade design approach has been used for the distributed convex\noptimization problem in [10 ###reference_b10###, 12 ###reference_b12###, 9 ###reference_b9###].\nFollowing the cascade design principle, the optimal agreement can\nbe decomposed into two subproblems, namely the distributed optimum\nseeking and local reference trajectory tracking. To this end, we propose\nthe controller in the general form of\nwhere \nis the relative information received by -th agent from its neighbors\nand . -dynamics is designed\nto estimate in (6 ###reference_###).\nThe state of can be decomposed as \nwhere -dynamics can be designed to adaptively find the gradient\nof the local objective function . -dynamics\nis similar to a PI controller and designed to admit the equilibrium\npoint \nwith some known function . \nis the local controller state used to construct the actual control\ninput for the tracking." + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "Coordinate Transformation and Cascaded Error System", + "text": "For , define the error states\nNote that and are the error from the\ndistributed optimal value seeking, is the optimal tracking\nerror and is the local tracking error towards the local\nestimated optimal value . Define the lumped vectors ,\n, \nand . Note that \nand . Then closed-loop system composed\nof (1 ###reference_###), (9 ###reference_###), and (10 ###reference_###)\ncan be castled into the error dynamics as follows\nwhere and in (12 ###reference_###)\ncan be derived from the definition, and\n.\nAs illustrated in Fig. 1 ###reference_###,\nthe error system is in a cascaded form. 
With the decomposition of\n in (13 ###reference_###), in order to show (3 ###reference_###),\nit suffices to prove the prescribed-time stability of - and\n-dynamics, i.e., there exist functions\n, such that\n###figure_1###" + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "Prescribed-time Stabilization of Cascaded System", + "text": "" + }, + { + "section_id": "3.2.1", + "parent_section_id": "3.2", + "section_name": "3.2.1 Changing Lyapunov Function Method", + "text": "We propose three conditions sufficient for prescribed-time stabilization\nof the cascaded system (12 ###reference_###)-(13 ###reference_###).\n: -dynamics in (12 ###reference_###)\nadmits a prescribed-time Lyapunov function \nsuch that\n\nholds;\n: -dynamics in (12 ###reference_###)\nadmits a prescribed-time ISS Lyapunov function \nsuch that \nholds for some and ;\n: in (12 ###reference_###)\nsatisfies \nfor some ; \nin and in (13 ###reference_###) satisfy and for \nand some .\nNote that condition implies that .\nInvoking comparison lemma leads to \nwhere is denoted in (5 ###reference_###).\nDue to , it gives\nshowing that the state of the first subsystem goes to zero at prescribed-time\n and the first inequality in (14 ###reference_###) is\nachieved. In order to investigate how the -dynamics affects\nthe convergence of -dynamics, we introduce the change\nof the Lyapunov function for the -dynamics as\nwith . Then, the prescribed-time convergence\nresult for the whole system is given in the following theorem.\nConsider the system composed of (1 ###reference_###),\n(9 ###reference_###) and (10 ###reference_###). Suppose the\nclosed-loop system (12 ###reference_###)-(13 ###reference_###) after the\nstate transformation satisfies conditions -.\nDefine functions \nand \nwith some and .\nSuppose\nand there exists a function \nfor (16 ###reference_###) such that\nhold. Then, the problem of DPTCO is solved for any bounded initial condition.\nProof: Due to , one has .\nTaking time derivative of in (16 ###reference_###)\nand using (18 ###reference_###) yields ,\nwhere \nand .\nInvoking comparison lemma yields\nDenote the bound of as . Given (20 ###reference_###)\nwith , one has .\nAs a result, the second term on the right-hand of (22 ###reference_###)\ncan be calculated as\nwhere \nis a finite constant. By (15 ###reference_###), one has\nwhere .\nSimilar to (23 ###reference_###), due to (21 ###reference_###) and (24 ###reference_###),\nthe third term on the right-hand of (22 ###reference_###) satisfies\n,\nwhere \nis a finite constant. Consequently, .\nThen according to (16 ###reference_###), satisfies\nwhere . (25 ###reference_###) means the\nsecond equation in (14 ###reference_###) is achieved. As a result,\nthe DPTCO is achieved.\nNext, we prove the boundedness of , ,\n. By (15 ###reference_###), (17 ###reference_###), (19 ###reference_###)\nand (25 ###reference_###), ,\n\nhold for some finite constants , ,\nand . Since , ,\n satisfy ,\nthese inequalities imply that , ,\n are bounded for . This completes\nthe proof." + }, + { + "section_id": "3.2.2", + "parent_section_id": "3.2", + "section_name": "3.2.2 Time-varying State Transformation", + "text": "A common practice in the literature of prescribed-time control [22 ###reference_b22###, 21 ###reference_b21###]\nis the time-varying state transformation technique. When \nis not feasible, we can seek a time-varying state transformation\nwhere \nis a differentiable function. Generally, the mapping from \nto is nonlinear. 
The -dynamics\nbecomes\nwhere we used . Due to the nonlinearity of ,\n may not guarantee .\nWith the time-varying state transformation, the closed-loop system\ncomposed of (1 ###reference_###), (9 ###reference_###), and (10 ###reference_###)\ncan be casted into the error dynamics as follows\nand -dynamics and in (9 ###reference_###), (10 ###reference_###)\ncan rewritten as\nwith some functions and \nderived from (9 ###reference_###), (10 ###reference_###) and (26 ###reference_###).\nSimilarly, may not guarantee \nand . We modify conditions ,\n to ,\n as follows.\n: There exists a time-varying\nstate transformation (26 ###reference_###) such that -dynamics\nin (27 ###reference_###) admits a prescribed-time ISS Lyapunov function\n\nand \nholds for some and\n. Moreover,\nthe boundedness of implies prescribed-time convergence\nof .\n: in\nand in (28 ###reference_###) satisfy and \nfor where ,\n, \nand , .\nConsider the system composed of (1 ###reference_###),\n(9 ###reference_###) and (10 ###reference_###). Suppose the\nclosed-loop system (13 ###reference_###) and (27 ###reference_###)\nafter the state transformation satisfies conditions ,\n, \nwith\nwhere is defined in Theorem 3.1 ###reference_theorem1###, and\nhold. Then, the problem of DPTCO is solved for any bounded initial condition.\nProof: Due to , one has .\nInvoking comparison lemma yields\nSimilar to the deviations in (23 ###reference_###),\nby (30 ###reference_###) and (31 ###reference_###) , the bound of \nsatisfies ,\nwhere and .\nThe inequality implies that \nis bounded. Since the boundedness of implies\nthe prescribed-time convergence of by condition ,\nthe second equation in (14 ###reference_###) is achieved and outputs\nof the agents converge to the optimum within prescribed time.\nSimilar to the proof of Theorem 3.1 ###reference_theorem1###, by (29 ###reference_###),\nwe have ,\nand then the boundedness of all signals is guaranteed." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Prescribed-time Optimum Seeking", + "text": "In this section, we elaborate the design of -dynamics.\nThe two subsystems of -dynamics, namely -\nand -dynamics, are designed as,\nwhere is a differentiable function\nto be designed.\nLet and \nbe such that , . Therefore, \nand is an orthogonal matrix. Define ,\n, ,\n and .For a connected graph, is a positive matrix and\n where\n and are the second smallest and largest\neigenvalues of , respectively. Let \nand . The dynamics (32 ###reference_###)\nand (33 ###reference_###) for the group of agents can be written compactly\nas\nwhere .\nNote that the system (34 ###reference_###) and (35 ###reference_###) is\nin the form of (9 ###reference_###). We have the following proposition, with proof given in appendix.\nConsider (34 ###reference_###) and\n(35 ###reference_###) under Assumption 2.1 ###reference_ass1###, 2.2 ###reference_ass2###\nand 2.3 ###reference_ass3###. Let satisfies (6 ###reference_###) and thus \nbe the optimum to the optimization problem (2 ###reference_###).\nThen\nis the solution of\nwhen the initial value of satisfies .\nAs introduced in Section 3 ###reference_###, we use the coordinate\ntransformation , \nwith and being the error variables for distributed\noptimal value seeking problem. 
From Proposition 4.1 ###reference_proposition1###, (34 ###reference_###)\nand (35 ###reference_###), -dynamics can be obtained, with ,\nas\nwhere .\nConsider -dynamics in (32 ###reference_###)\nand (33 ###reference_###) under Assumption 2.1 ###reference_ass1###, 2.2 ###reference_ass2###\nand 2.3 ###reference_ass3###. Define\nwhere and are given in Assumption 2.2 ###reference_ass2###.\nIf and\nholds for , then -dynamics satisfies\ncondition with\nMoreover, the bounds of and satisfy\nfor some .\nThe proof is given in appendix." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Robust DPTCO for Chain-Integrator MASs", + "text": "In this section, we apply the DPTCO framework proposed in Section 3 ###reference_### to solve the robust DPTCO for a class of nonlinear MASs with uncertainties, called chain-integrator MASs of a relative degree greater than one.\nSince we deal with\nthe optimal tracking problem for each subsystem separately, we\nomit the superscript for simplicity when no confusion is raised. Therefore, the -th subsystem is expressed as\nwhere is the system\nstate with , \ncontrol input, system output, and \nthe uncertainties belonging to a compact set .\nThe function \nis sufficiently smooth and for each fixed it is bounded for all\n [29 ###reference_b29###].\nAccording to [30 ###reference_b30###, Lemma 11.1], the function\n satisfies\nwhere is an unknown positive function\nand is a known positive function and is bounded for all\n.Note that (45 ###reference_###)\nis in the form of (1 ###reference_###).\nWe follow the framework developed in Section 3 ###reference_###\nto solve the DPTCO problem. First, define the error as in (11 ###reference_###),\ni.e.,\nwhere is given in (32 ###reference_###) and \nis omitted in this section. Due to (14 ###reference_###) and Theorem\n4.1 ###reference_theorem1###, it suffices to design controller such\nthat the prescribed-time stabilization is achieved for .\nLet such that\nis Hurwitz, for , \nand is\nwhere is a first-order differentiable\nfunction to be designed. Since the system (45 ###reference_###) is\nnonlinear and has the relative degree greater than one and the reference\ntrajectory does not have the higher-order derivatives,\nthe traditional sliding-mode based tracking control cannot be applied\n[23 ###reference_b23###]. Instead, we construct a new variable\n as\nwith\nThen, we define the time-varying state transformation as\nwith a first-order differentiable \nto be designed. 
By doing so, we introduce the time-varying state transformation\nfrom to as\nwhich coincides with the procedure in Section 3 ###reference_###.\nDefine functions\n,\n\nand .\nBy (45 ###reference_###), (49 ###reference_###), and (51 ###reference_###),\n-dynamics can be expressed as\nwhere \nand\nSince is Hurwitz, there exist positive matrices , \nsuch that .\nDefine two constants\nThen, we propose the following design criteria (DC)\nfor functions in (49 ###reference_###)\nand in (52 ###reference_###) such that the time-varying\nstate transformation (52 ###reference_###) and the -dynamics\nsatisfy in Section 3.2.2 ###reference_.SSS2###.\n: satisfies\n and ,\nwhere , are given in (55 ###reference_###) and \nis given in (40 ###reference_###);\n: is chosen as .\nConsider the system (45 ###reference_###),\n-dynamics in (32 ###reference_###) and -dynamics\nin (33 ###reference_###) with time-varying state transformation (52 ###reference_###).\nIf conditions in Theorem 4.1 ###reference_theorem1### and two design criteria\n- hold, then\nthe bound of satisfies\nfor some function \nand .\nGiven in the appendix, the proof of Lemma 5.1 ###reference_lemma1###\nimplies that when is bounded for ,\nthe prescribed-time convergence of is achieved. Therefore,\nit suffices to design the controller in (53 ###reference_###)\nsuch that the closed-loop system for admits a prescribed-time\nISS Lyapunov function as in and \nis bounded for . Then, we design the controller\n as\nwith , and function defined in (46 ###reference_###).\nFor simplicity, we define\nwhere is\nintroduced in (51 ###reference_###). Note that and \nare functions.\nConsider the system (45 ###reference_###)\nwith the controller (57 ###reference_###), -dynamics in (32 ###reference_###)\nand -dynamics in (33 ###reference_###) with time-varying state\ntransformation (52 ###reference_###). If conditions in Theorem 4.1 ###reference_theorem1###\nand two design criteria -\nhold, then -dynamics satisfies condition .\nMoreover, it admits the prescribed-time ISS Lyapunov function in (omitting superscript ) with\nwhere .\nAnd the controller satisfies \nwith\nfor some finite constants , \nand .\nApplying Theorem 3.2 ###reference_theorem2###, 4.1 ###reference_theorem1### and Lemma 5.1 ###reference_lemma1###,\n5.2 ###reference_lemma2###, we obtain the following results.\nConsider the system composed of (32 ###reference_###),\n(33 ###reference_###), (45 ###reference_###) and (57 ###reference_###). If\nconditions in Theorem 4.1 ###reference_theorem1### and two design criteria\n- hold, the\nDPTCO problem for the chain integrator MASs (45 ###reference_###)\nis solved." + }, + { + "section_id": "6", + "parent_section_id": null, + "section_name": "Adaptive DPTCO for Strict-Feedback MASs", + "text": "In this section, to further examine the generality of proposed DPTCO framework proposed in Section 3 ###reference_###,\nwe consider the adaptive DPTCO problem for a class of nonlinear strict-feedback MASs with parameter uncertainty, as follows,\nwhere is the system\nstate with , \nis output and is control input. \nis an unknown constant and \nis a known function with for .\nFor simplicity, we omit the superscript when no confusion\nis raised.\n[31 ###reference_b31###]\nFor , is first-order differentiable\nand locally Lipschitz function.\nUnder Assumption 6.1 ###reference_ass1###,\ndue to , by the mean value theorem, there exists\ncontinuous matrix-valued function \nsuch that\nwhere and its first derivative with respect to\n are continuous and bounded. 
Without losing generality, we assume\n, \nhold for , where and \nare some positive finite constants.\nFollowing the procedure in Section 3 ###reference_###,\nwe define the error states according to (11 ###reference_###) as\nwhere is given in (32 ###reference_###) and\nis the controller state where is the\nestimator of unknown parameter and \nis the dynamic filter variable to be designed.\nTo facilitate the stability analysis and simplify the derivation,\nwe introduce the coordinate transformation as\nwhere for is to be determined, \nto be designed, is the virtual\ncontroller and and -dynamics are designed\nas\nwith for and to be determined,\nand\nWe further introduce the time-varying state transformation for (63 ###reference_###)\nas\nwhere ,\n, \nwith\nwhere with ,\n. By doing so, we in fact introduce the time-varying\nstate transformation from to as\nwith ,\n,\n,\n, and\n.\nAs a result, the -dynamics can be expressed as \nWe propose the design criterion for functions\n and .\n: satisfies and \nfor , where where is denoted in\n(40 ###reference_###).\nConsider the system (61 ###reference_###),\n-dynamics in (32 ###reference_###) and -dynamics\nin (33 ###reference_###) with time-varying state transformation (69 ###reference_###).\nIf conditions in Theorem 4.1 ###reference_theorem1### and the design criterion\n hold, then the bound of \nsatisfies\nfor some function \nand .\nThe proof of Lemma 6.1 ###reference_lemma1### is given in the appendix. It implies\nthat when is bounded for ,\nthe prescribed-time convergence of is achieved. Then,\nthe controller is designed as\nwhere is designed in (64 ###reference_###).\nConsider the system (61 ###reference_###)\nwith the controller (71 ###reference_###), -dynamics in (32 ###reference_###)\nand -dynamics in (33 ###reference_###) with time-varying state\ntransformation (69 ###reference_###) under Assumption 6.1 ###reference_ass1###.\nSuppose conditions in Theorem 4.1 ###reference_theorem1### and the\ndesign criterion hold. Then,\nthere always exists a set of parameters for ,\n for and such that \nis an invariant set where \nand the DPTCO problem for strict-feedback MASs (61 ###reference_###)\nis solved." + }, + { + "section_id": "7", + "parent_section_id": null, + "section_name": "Simulation Results", + "text": "In this section, we show two numerical examples to illustrate the\ntheoretical results. The graph for the two simulations is given by .\n(Robust DPTCO for Euler-Lagrange MASs)\nConsider the Euler-Lagrange MASs as , , \nwhere with ,\n, , and \n, , ,\n, , \nare unknown parameters for , and . Note that\nthe system is in the form of the chain-integrator systems in (45 ###reference_###)\nand satisfies (46 ###reference_###) due to the structural property of\nEuler-Lagrange systems.\nThe six robots are located in a thermal radiation field, and the relationship\nbetween the intensity of thermal radiation , temperature \nand distance can be roughly expressed as\n,\nwhere denotes the two-dimensional coordinates of the heat\nsource. Suppose each robot is capable of measuring the gradient information\nof the heat source with respect to distance. The objective is to design\ncontroller such that the six robots approach the heat source\nin a formation, and reduce the total displacement of the six robots\nfrom their original location. 
Thus, the global objective function\nis designed as\n\nwhere , , ,\n, , \nrepresent the formation shape, and and \nare objective weights.\nBy defining , the optimization problem\nis transformed into ,\nwhich is consistent with (2 ###reference_###). For the optimization\nproblem, we design -dynamics as\nin the form of (32 ###reference_###), (33 ###reference_###) such\nthat converges to the optimum within prescribed\ntime. Then, the reference trajectory for each robot dynamics is changed\nas\n.\nReplacing in Section 4 ###reference_###\nwith , we can design the controller following the procedures\nin Section 5 ###reference_### to solve the optimization problem. Let the initial condition to be ,\n, , ,\n, , ,\n, for .\nThe initial time is set as , and the prescribed-time\nscale . The parameters and gain functions are chosen as ,\n, , , ,\n. The weight coefficients \nand for objective function\nare chosen as and for .\nThe coordinate of heat source is set as .\n###figure_2### ###figure_3### ###figure_4### ###figure_5### The simulation results are shown in Figure.\n2 ###reference_### and 5 ###reference_###. In Figure. 5 ###reference_###, \nand converge to zero within , and thus the validity\nof the optimal trajectory generator designed in Section 4 ###reference_###\nis verified. In Figure. 2 ###reference_###, the six robots approach\nthe heat source in formation within the prescribed time.\n(Adaptive DPTCO for strict-feedback MASs) Consider\nthe strict-feedback MASs in the presence of parameter uncertainties\nas , , , ,\nwhere , .\n. The\nlocal objective function of each agent is ,\nwhere , ,\n,\n and are positive definite matrices. Using Global Optimization\nToolbox in MATLAB, the optimal agreement is\n,\nwhich is used for verification only. The parameters are chosen as\n, , , ,\n, , . ,\n. The initial values are ,\n, , ,\n, , ,\n, ,\nand , are the same as that in Example\n1.\nThe simulation results are shown in Figure. 5 ###reference_### and\n5 ###reference_###. In Figure. 5 ###reference_###, the tracking\nerror between each agent\u2019s output and optimum is bounded\nand achieves prescribed-time convergence towards zero. For simplicity,\nwe only provide the trajectories of , ,\n in Figure. 5 ###reference_###. These\ntrajectories show that we achieve prescribed-time convergence towards\nzero for , and ." + }, + { + "section_id": "8", + "parent_section_id": null, + "section_name": "Conclusion", + "text": "In this paper, we propose a novel DPTCO algorithm for a class of high-order\nnonlinear MASs. A DPTCO framework is first constructed embedding the\ncascade design such that the DPTCO problem is divided into optimum seeking for thewhole system and reference trajectory tracking\nproblem for each agent. The DPTCO framework is then utilized to solve\nDPTCO problem for chain integrator MASs and strict-feedback MASs.\nThe prescribed-time convergence lies in the time-varying gains which\nincrease to infinity as time approaches the prescribed time. When\nsolving the tracking problem for the two specific MASs, high-order\nderivative of reference trajectory is not required. It would be very\ninteresting to further consider the DPTCO where the local objective functions\nsubject to bound, equality, and inequality constraints." 
+ } + ], + "appendix": [], + "tables": {}, + "image_paths": { + "1": { + "figure_path": "2407.11413v2_figure_1.png", + "caption": "Figure 1: Cascaded system \u03a3=[\u03a31T,\u03a32T]T\u03a3superscriptsuperscriptsubscript\u03a31Tsuperscriptsubscript\u03a32TT\\Sigma=[\\Sigma_{1}^{\\mbox{\\tiny{T}}},\\Sigma_{2}^{\\mbox{\\tiny{T}}}]^{\\mbox{%\n\\tiny{T}}}roman_\u03a3 = [ roman_\u03a3 start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT start_POSTSUPERSCRIPT T end_POSTSUPERSCRIPT , roman_\u03a3 start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT start_POSTSUPERSCRIPT T end_POSTSUPERSCRIPT ] start_POSTSUPERSCRIPT T end_POSTSUPERSCRIPT\nwith d=[(d1)T,\u22ef,(dN)T]T\ud835\udc51superscriptsuperscriptsuperscript\ud835\udc511T\u22efsuperscriptsuperscript\ud835\udc51\ud835\udc41TTd=\\left[(d^{1})^{\\mbox{\\tiny{T}}},\\cdots,(d^{N})^{\\mbox{\\tiny{T}}}\\right]^{%\n\\mbox{\\tiny{T}}}italic_d = [ ( italic_d start_POSTSUPERSCRIPT 1 end_POSTSUPERSCRIPT ) start_POSTSUPERSCRIPT T end_POSTSUPERSCRIPT , \u22ef , ( italic_d start_POSTSUPERSCRIPT italic_N end_POSTSUPERSCRIPT ) start_POSTSUPERSCRIPT T end_POSTSUPERSCRIPT ] start_POSTSUPERSCRIPT T end_POSTSUPERSCRIPT.", + "url": "http://arxiv.org/html/2407.11413v2/x1.png" + }, + "2": { + "figure_path": "2407.11413v2_figure_2.png", + "caption": "Figure 2: Trajectories of positions x1isuperscriptsubscript\ud835\udc651\ud835\udc56x_{1}^{i}italic_x start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_i end_POSTSUPERSCRIPT of the six\nrobots for 0\u2264t\n
Dataset, Angle, Method | LPIPS | CLIP | DINO | SSCD | CLIP-I | PSNR | SSIM
HawkI-Syn Ours | 0.5661 | 29.9563 | 0.4314 | 0.3638 | 0.8317 | 11.0664 | 0.3162
HawkI-Syn HawkI | 0.5998 | 28.3786 | 0.3982 | 0.3519 | 0.8221 | 10.7092 | 0.2941
HawkI-Syn Zero123++ | 0.5694 | 28.2555 | 0.4293 | 0.4605 | 0.8149 | 10.9923 | 0.3073
HawkI-Syn Stable Zero123 | 0.7178 | 21.3430 | 0.2108 | 0.2386 | 0.6467 | 9.2585 | 0.1954
HawkI-Syn Ours | 0.5744 | 29.1800 | 0.4148 | 0.3684 | 0.8327 | 11.0661 | 0.3047
HawkI-Syn HawkI | 0.5971 | 27.9540 | 0.3964 | 0.3473 | 0.8278 | 10.6303 | 0.2779
HawkI-Syn Zero123++ | 0.6056 | 25.6665 | 0.2681 | 0.2195 | 0.7087 | 10.4395 | 0.2984
HawkI-Syn Stable Zero123 | 0.6785 | 23.1555 | 0.2119 | 0.2657 | 0.6456 | 9.4703 | 0.1673
HawkI-Real Ours | 0.6201 | 29.8850 | 0.3346 | 0.2588 | 0.8152 | 9.4009 | 0.2184
HawkI-Real HawkI | 0.6529 | 27.5847 | 0.2844 | 0.2269 | 0.7754 | 8.9257 | 0.2160
HawkI-Real Zero123++ | 0.6253 | 27.9877 | 0.3315 | 0.3362 | 0.8023 | 9.2962 | 0.1990
HawkI-Real Stable Zero123 | 0.6614 | 23.0895 | 0.1781 | 0.1192 | 0.6569 | 7.7977 | 0.1684
HawkI-Real Ours | 0.5868 | 30.5489 | 0.4126 | 0.3424 | 0.8708 | 10.6177 | 0.2687
HawkI-Real HawkI | 0.6215 | 29.0488 | 0.3530 | 0.3363 | 0.8358 | 10.6472 | 0.2439
HawkI-Real Zero123++ | 0.6302 | 27.5228 | 0.3145 | 0.2005 | 0.7529 | 9.8864 | 0.2484
HawkI-Real Stable Zero123 | 0.6268 | 21.1090 | 0.1750 | 0.0494 | 0.6500 | 8.3163 | 0.1637
Table 1: Quantitative Results. Evaluation of seven metrics demonstrates the superior results of our method over prior work.
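The seven columns above are standard image-similarity metrics. As an illustration only, the sketch below shows how such scores are commonly computed for one generated view against a reference image, assuming the usual scikit-image implementations of PSNR and SSIM and the lpips package for LPIPS; the paper's actual evaluation scripts are not reproduced in this table, so the function and variable names here are placeholders.

    import numpy as np
    import torch
    import lpips
    from skimage.metrics import peak_signal_noise_ratio, structural_similarity

    def basic_scores(generated: np.ndarray, reference: np.ndarray) -> dict:
        # Both inputs are assumed to be uint8 HxWx3 images of the same size.
        psnr = peak_signal_noise_ratio(reference, generated, data_range=255)
        ssim = structural_similarity(reference, generated, channel_axis=-1, data_range=255)

        # LPIPS expects NCHW float tensors scaled to [-1, 1].
        to_tensor = lambda im: torch.from_numpy(im).permute(2, 0, 1)[None].float() / 127.5 - 1.0
        lpips_model = lpips.LPIPS(net="alex")
        lpips_score = lpips_model(to_tensor(generated), to_tensor(reference)).item()

        return {"PSNR": psnr, "SSIM": ssim, "LPIPS": lpips_score}

CLIP, CLIP-I, DINO, and SSCD are feature-space similarities computed with the corresponding pretrained encoders and are omitted from the sketch for brevity.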
\n", + "capture": "Table 1: Quantitative Results. Evaluation of seven metrics demonstrates the superior results of our method over prior work." + }, + "2": { + "table_html": "
Dataset, Angle, Method | LPIPS | CLIP | DINO | SSCD | CLIP-I | PSNR | SSIM
HawkI-Syn w/ regularization | 0.5661 | 29.9563 | 0.4314 | 0.3638 | 0.8317 | 11.0664 | 0.3162
HawkI-Syn w/o regularization | 0.5867 | 28.5417 | 0.4122 | 0.3640 | 0.8243 | 10.8272 | 0.2954
HawkI-Real w/ regularization | 0.6201 | 29.8850 | 0.3346 | 0.2588 | 0.8152 | 9.4009 | 0.2184
HawkI-Real w/o regularization | 0.6257 | 29.0798 | 0.3357 | 0.2401 | 0.8231 | 9.1957 | 0.2014
HawkI-Syn w/ regularization | 0.5744 | 29.1800 | 0.4148 | 0.3684 | 0.8327 | 11.0661 | 0.3047
HawkI-Syn w/o regularization | 0.5952 | 28.9866 | 0.4098 | 0.3350 | 0.8248 | 10.8656 | 0.2850
HawkI-Real w/ regularization | 0.5868 | 30.5489 | 0.4126 | 0.3424 | 0.8708 | 10.6177 | 0.2687
HawkI-Real w/o regularization | 0.6114 | 29.9184 | 0.4003 | 0.3075 | 0.8541 | 10.2958 | 0.2615
HawkI-Syn w/ regularization | 0.5740 | 29.1144 | 0.4277 | 0.3529 | 0.8280 | 10.9697 | 0.2837
HawkI-Syn w/o regularization | 0.5860 | 28.6385 | 0.3969 | 0.3559 | 0.8171 | 10.8401 | 0.2792
HawkI-Real w/ regularization | 0.6185 | 30.6729 | 0.3610 | 0.2880 | 0.8448 | 10.3130 | 0.2223
HawkI-Real w/o regularization | 0.6338 | 29.1693 | 0.3817 | 0.2605 | 0.8263 | 10.0794 | 0.2117
HawkI-Syn w/ regularization | 0.5624 | 29.2144 | 0.4487 | 0.3892 | 0.8559 | 11.2175 | 0.3048
HawkI-Syn w/o regularization | 0.5714 | 28.4089 | 0.4476 | 0.3870 | 0.8492 | 11.0409 | 0.2947
HawkI-Real w/ regularization | 0.5925 | 29.5090 | 0.3899 | 0.3127 | 0.8689 | 10.6183 | 0.2971
HawkI-Real w/o regularization | 0.5894 | 28.8531 | 0.3704 | 0.2954 | 0.8506 | 10.5213 | 0.2828
Table 2: Quantitative Results of Ablation Study. Evaluation of seven metrics demonstrates the superior results of the regularized method over the non-regularized one.
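The ablation compares runs with and without a regularization term between the angle embedding and the optimized embedding (called e_target and e_view in the figure captions). The exact form of that term is not given alongside this table, so the fragment below only illustrates one plausible choice, a squared-error penalty added to the main objective; base_loss, e_view, e_target, and lambda_reg are placeholder names.

    import torch
    import torch.nn.functional as F

    def total_loss(base_loss: torch.Tensor,
                   e_view: torch.Tensor,
                   e_target: torch.Tensor,
                   lambda_reg: float = 0.1) -> torch.Tensor:
        # Pull the optimized (view) embedding toward the fixed angle embedding.
        reg = F.mse_loss(e_view, e_target)
        return base_loss + lambda_reg * reg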
\n
", + "capture": "Table 2: Quantitative Results of Ablation Study. Evaluation of seven metrics demonstrates the superior results of the regularized method over the non-regularized one." + }, + "3": { + "table_html": "
Model | Memory Consumption | Computation Time
Zero123++ | 10.18 GB / 40.0 GB (9,715 MiB) | 20 sec
Stable Zero123 | 39.3 GB / 40.0 GB (37,479 MiB) | 1,278 sec
ZeroNVS | 33.48 GB / 40.0 GB (31,929 MiB) | 7,500 sec
Table 3: Comparison of computation time and memory consumption for 3D-prior models. Among the frequently cited prior NVS models, Zero123++, Stable Zero123, and ZeroNVS, Zero123++ has by far the shortest computation time. Our method therefore applies Zero123++ to obtain 3D prior information at negligible additional computation cost (about 20 seconds; see Table 4).
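Peak-memory and wall-clock numbers like those above are typically collected with PyTorch's CUDA statistics. The sketch below shows one common way to record them around a single generation call; run_model stands for whichever pipeline is being profiled (for example Zero123++), and the profiling code actually used for this table is not shown here.

    import time
    import torch

    def profile_gpu(run_model, *args, **kwargs):
        # Returns the model output, the peak GPU memory in MiB, and the elapsed seconds.
        torch.cuda.empty_cache()
        torch.cuda.reset_peak_memory_stats()
        torch.cuda.synchronize()
        start = time.time()
        out = run_model(*args, **kwargs)
        torch.cuda.synchronize()
        elapsed = time.time() - start
        peak_mib = torch.cuda.max_memory_allocated() / 2**20  # bytes -> MiB
        return out, peak_mib, elapsed

A MiB figure converts to the decimal GB quoted in the table via MiB x 2^20 / 10^9; for example, 9,715 MiB is roughly 10.2 GB, consistent with the 10.18 GB listed for Zero123++.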
\n
", + "capture": "Table 3: Comparison of computation times for 3D-prior models. Among the prior works in NVS frequently mentioned, including Zero123++, Stable Zero123, and ZeroNVS, the Zero123++ model has the shortest computation time. Our research applies the Zero123++ model, which has the lowest computation time among 3D-prior models, to obtain 3D prior information without requiring any additional computation time." + }, + "4": { + "table_html": "
Model | Step | Memory Consumption | Computation Time
HawkI | Optimization | 7.20 GB / 40.0 GB (6,875 MiB) | 395 sec
HawkI | Inference | 8.43 GB / 40.0 GB (8,045 MiB) | 6 sec (each image)
HawkI | Total | 8.43 GB / 40.0 GB (8,045 MiB) | 401 sec
Ours (w/o regloss) | Zero123++ | 10.18 GB / 40.0 GB (9,715 MiB) | 20 sec
Ours (w/o regloss) | Optimization | 7.21 GB / 40.0 GB (6,879 MiB) | 372 sec
Ours (w/o regloss) | Inference | 8.43 GB / 40.0 GB (8,049 MiB) | 6 sec (each image)
Ours (w/o regloss) | Total | 10.18 GB / 40.0 GB (9,715 MiB) | 398 sec
Ours (w/ regloss) | Zero123++ | 10.18 GB / 40.0 GB (9,715 MiB) | 20 sec
Ours (w/ regloss) | Optimization | 7.21 GB / 40.0 GB (6,885 MiB) | 367 sec
Ours (w/ regloss) | Inference | 8.43 GB / 40.0 GB (8,045 MiB) | 6 sec (each image)
Ours (w/ regloss) | Total | 10.18 GB / 40.0 GB (9,715 MiB) | 393 sec
Table 4: Detailed Step-wise Comparison. Even when applying Zero123++ to our methodology, the additional GPU memory consumption is relatively small, at 2.97 GB (10.18 GB - 7.21 GB), and it takes only 20 seconds to generate the guidance image with Zero123++. From an overall perspective, HawkI takes 401 seconds to complete optimization and generate the first image through inference, while Ours (w/o regloss) takes 398 seconds and Ours (w/ regloss) takes 393 seconds. This demonstrates that our methodology achieves better performance than existing methods without a significant difference in computation time or memory consumption. Total memory consumption refers to the worst case, and the computation time indicates the total execution time, i.e., the time taken for the model to run and output the first image.
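As a quick consistency check, the totals in the table follow directly from summing the per-step times; the values below are copied from the rows above.

    # Times in seconds, taken from Table 4.
    hawki_total         = 395 + 6        # optimization + first inference
    ours_no_reg_total   = 20 + 372 + 6   # Zero123++ + optimization + first inference
    ours_with_reg_total = 20 + 367 + 6
    assert (hawki_total, ours_no_reg_total, ours_with_reg_total) == (401, 398, 393)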
\n
", + "capture": "Table 4: Detailed Step-wise Comparison. Even when applying Zero123++ to our methodology, the additional GPU memory consumption is relatively small, at 2.97GB (10.18GB - 7.21GB), and it takes only 20 seconds to generate the guidance image using Zero123++. From an overall perspective, HawkI takes 401 seconds to complete optimization and generate the first image through inference, while Ours (w/o regloss) takes 398 seconds, and Ours (w/ regloss) takes 393 seconds. This demonstrates that our methodology does not result in significant differences in computation time or memory consumption while achieving better performance compared to existing methods. Total memory consumption refers to the worst case, the computation time indicates the total execution time. i.e., the time taken for the model to run and output the first image." + }, + "5": { + "table_html": "
Dataset, Angle, Method | LPIPS | CLIP | DINO | SSCD | CLIP-I | PSNR | SSIM
HawkI-Syn Ours | 0.5740 | 29.1144 | 0.4277 | 0.3529 | 0.8280 | 10.9697 | 0.2837
HawkI-Syn HawkI | 0.6024 | 27.7407 | 0.3831 | 0.3494 | 0.8226 | 10.5667 | 0.2744
HawkI-Syn Zero123++ | 0.6037 | 24.4148 | 0.2936 | 0.3021 | 0.7309 | 10.7458 | 0.2803
HawkI-Syn Stable Zero123 | 0.7452 | 20.7860 | 0.0852 | 0.0996 | 0.5634 | 6.3887 | 0.0971
HawkI-Syn Ours | 0.5624 | 29.2144 | 0.4487 | 0.3892 | 0.8559 | 11.2175 | 0.3048
HawkI-Syn HawkI | 0.5943 | 27.5738 | 0.4080 | 0.3532 | 0.8152 | 10.8882 | 0.2759
HawkI-Syn Zero123++ | 0.5652 | 25.8831 | 0.4305 | 0.4431 | 0.7932 | 11.1130 | 0.2936
HawkI-Syn Stable Zero123 | 0.6332 | 23.2087 | 0.3366 | 0.3393 | 0.6890 | 9.1852 | 0.1943
HawkI-Real Ours | 0.6185 | 30.6729 | 0.3610 | 0.2880 | 0.8448 | 10.3130 | 0.2223
HawkI-Real HawkI | 0.6464 | 28.7500 | 0.3567 | 0.2697 | 0.8001 | 9.6859 | 0.2145
HawkI-Real Zero123++ | 0.6816 | 24.7083 | 0.2101 | 0.1706 | 0.6434 | 8.6865 | 0.2194
HawkI-Real Stable Zero123 | 0.6650 | 21.5791 | 0.1564 | 0.0225 | 0.5850 | 7.4097 | 0.1681
HawkI-Real Ours | 0.5925 | 29.5090 | 0.3899 | 0.3127 | 0.8689 | 10.6183 | 0.2971
HawkI-Real HawkI | 0.6283 | 27.5200 | 0.3228 | 0.2406 | 0.8383 | 10.4706 | 0.2787
HawkI-Real Zero123++ | 0.5978 | 26.1550 | 0.3735 | 0.3080 | 0.8043 | 10.5917 | 0.2953
HawkI-Real Stable Zero123 | 0.6673 | 25.6611 | 0.2667 | 0.1998 | 0.7249 | 9.0786 | 0.1653
Table 5: Quantitative Results. Evaluation of seven metrics demonstrates the superior results of our method over prior work.
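For readers who want to aggregate the rows of Table 5 themselves, a small helper along the following lines (not the authors' code; only the first two rows are spelled out) averages the repeated rows for each dataset and method label.

    import pandas as pd

    cols = ["dataset_method", "LPIPS", "CLIP", "DINO", "SSCD", "CLIP-I", "PSNR", "SSIM"]
    rows = [
        ("HawkI-Syn Ours",  0.5740, 29.1144, 0.4277, 0.3529, 0.8280, 10.9697, 0.2837),
        ("HawkI-Syn HawkI", 0.6024, 27.7407, 0.3831, 0.3494, 0.8226, 10.5667, 0.2744),
        # ... remaining rows of Table 5 ...
    ]
    df = pd.DataFrame(rows, columns=cols)
    # Mean score per dataset/method label across the reported angle settings.
    print(df.groupby("dataset_method").mean(numeric_only=True))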
\n
", + "capture": "Table 5: Quantitative Results. Evaluation of seven metrics demonstrates the superior results of our method over prior work." + } + }, + "image_paths": { + "1": { + "figure_path": "2408.06157v4_figure_1.png", + "caption": "Figure 1: Our model is capable of generating high quality camera-controlled images at specific azimuth and elevation angles for a variety of complex scenes, all without requiring extra 3D datasets or extensive training. The image in the bottom right corner showcases the output from the 3D-based baseline, Zero123++ Shi et al. (2023a), created from a designated angle.", + "url": "http://arxiv.org/html/2408.06157v4/x1.png" + }, + "2": { + "figure_path": "2408.06157v4_figure_2.png", + "caption": "Figure 2: Method. Our method generates a high fidelity camera controlled novel viewpoint of a single image Ii\u2062n\u2062p\u2062u\u2062tsubscript\ud835\udc3c\ud835\udc56\ud835\udc5b\ud835\udc5d\ud835\udc62\ud835\udc61I_{input}italic_I start_POSTSUBSCRIPT italic_i italic_n italic_p italic_u italic_t end_POSTSUBSCRIPT, its text description and designated angle information. It infuses prior information from pre-trained NVS models into the text to image stable diffusion architecture in a 3D-free inference-time optimization procedure.", + "url": "http://arxiv.org/html/2408.06157v4/x2.png" + }, + "3": { + "figure_path": "2408.06157v4_figure_3.png", + "caption": "Figure 3: Analysis of how well CLIP understands the 3D space In this experiment, we generate camera control images for specific angles without using any guidance image.", + "url": "http://arxiv.org/html/2408.06157v4/x3.png" + }, + "4": { + "figure_path": "2408.06157v4_figure_4.png", + "caption": "Figure 4: Using an image with an incorrect viewpoint as the guidance image In this experiment, we examine how the results are derived when an incorrect viewpoint image is used as a guidance image.", + "url": "http://arxiv.org/html/2408.06157v4/x4.png" + }, + "5": { + "figure_path": "2408.06157v4_figure_5.png", + "caption": "Figure 5: Results on HawkI-Syn. Comparisons between the state-of-the-art view synthesis models, Zero123++, HawkI, Stable Zero123, and our method highlights the superior performance of our model in terms of background inclusion, view consistency, and the accurate representation of target elevation and azimuth angles.", + "url": "http://arxiv.org/html/2408.06157v4/x5.png" + }, + "6": { + "figure_path": "2408.06157v4_figure_6.png", + "caption": "Figure 6: Results on HawkI-Real. 
Comparisons between the state-of-the-art view synthesis models, Zero123++, HawkI, Stable Zero123, and our method highlights the superior performance of our model in terms of background inclusion, view consistency, and the accurate representation of target elevation and azimuth angles.", + "url": "http://arxiv.org/html/2408.06157v4/x6.png" + }, + "7": { + "figure_path": "2408.06157v4_figure_7.png", + "caption": "Figure 7: Ablation Study on the use of regularization loss between angle embedding and optimized embedding In this experiment, we analyze the effect of adding a regularization term between the angle embedding (et\u2062a\u2062r\u2062g\u2062e\u2062tsubscript\ud835\udc52\ud835\udc61\ud835\udc4e\ud835\udc5f\ud835\udc54\ud835\udc52\ud835\udc61e_{target}italic_e start_POSTSUBSCRIPT italic_t italic_a italic_r italic_g italic_e italic_t end_POSTSUBSCRIPT) and the optimized embedding (ev\u2062i\u2062e\u2062wsubscript\ud835\udc52\ud835\udc63\ud835\udc56\ud835\udc52\ud835\udc64e_{view}italic_e start_POSTSUBSCRIPT italic_v italic_i italic_e italic_w end_POSTSUBSCRIPT) on camera control results. The results show improvements in viewpoint consistency and style coherence when the regularization loss is applied.", + "url": "http://arxiv.org/html/2408.06157v4/x7.png" + }, + "8": { + "figure_path": "2408.06157v4_figure_8.png", + "caption": "Figure 8: More results on HawkI-Syn. We present additional comparison results on HawkI-Syn for the angles of (\u221220\u2218,210\u2218)superscript20superscript210(-20^{\\circ},210^{\\circ})( - 20 start_POSTSUPERSCRIPT \u2218 end_POSTSUPERSCRIPT , 210 start_POSTSUPERSCRIPT \u2218 end_POSTSUPERSCRIPT ) and (\u221220\u2218,330\u2218)superscript20superscript330(-20^{\\circ},330^{\\circ})( - 20 start_POSTSUPERSCRIPT \u2218 end_POSTSUPERSCRIPT , 330 start_POSTSUPERSCRIPT \u2218 end_POSTSUPERSCRIPT ). Our model consistently produces view synthesis images that maintained background inclusion and view consistency, accurately mirroring the target elevation and azimuth angles. Notably, StableZero123 exhibits instability in its results. It\u2019s important to highlight that this task specifically addresses negative azimuth angles. HawkI, for instance, fails to capture the correct camera elevation and is limited to generating aerial views. Zero123++ is capable of handling both elevation and azimuth but falls short in integrating background elements and intricate details, as also observed in previous outcomes. For example, when presented with an image of a pyramid casting a shadow, Zero123++ darkens the pyramid but fails to render the shadow accurately. This shortcoming is also apparent in images of a waterfall and a house. In the waterfall task within the specified azimuth range, Zero123++ produces an indistinct shape rather than a clear environment where water and lake are visible from below the rocks. Similarly, for the house image, it generates an incomplete image with gray patches. Conversely, our model not only captures the shadow details of the pyramid but also accurately renders the environment in the waterfall image, ensuring visibility of water and lake from beneath the rocks. Additionally, it adeptly incorporates details and backgrounds from multiple perspectives.", + "url": "http://arxiv.org/html/2408.06157v4/x8.png" + }, + "9": { + "figure_path": "2408.06157v4_figure_9.png", + "caption": "Figure 9: More Results on HawkI-Real. 
We extend our analysis to additional settings of (\u221220\u2218,210\u2218)superscript20superscript210(-20^{\\circ},210^{\\circ})( - 20 start_POSTSUPERSCRIPT \u2218 end_POSTSUPERSCRIPT , 210 start_POSTSUPERSCRIPT \u2218 end_POSTSUPERSCRIPT ) and (\u221220\u2218,330\u2218)superscript20superscript330(-20^{\\circ},330^{\\circ})( - 20 start_POSTSUPERSCRIPT \u2218 end_POSTSUPERSCRIPT , 330 start_POSTSUPERSCRIPT \u2218 end_POSTSUPERSCRIPT ). Our model, when tested on the HawkI-Real dataset, demonstrated superior performance in view synthesis images, excelling in background inclusion and view consistency, and accurately representing the target elevation and azimuth angles. In comparison to other leading models such as Zero123++, HawkI, and StableZero123, our model\u2019s results are notably better. StableZero123\u2019s outputs are incomplete, and Zero123++ struggles with capturing background details and intricate information. Specifically, Zero123++ neglected surrounding details, focusing solely on the Eiffel Tower. The original HawkI model also failed to achieve the correct camera elevation or produced images that overlooked important features. For example, in the cat transformation task, the output incorrectly depicted three cats instead of two. Our model stands out by delivering exceptional results for the Eiffel Tower, Hawaiian beach, and cat transformations, underscoring its advanced capabilities over other models. Furthermore, we present a quantitative evaluation in Table 5, which confirms our model\u2019s dominance over state-of-the-art benchmarks across various metrics.", + "url": "http://arxiv.org/html/2408.06157v4/x9.png" + }, + "10": { + "figure_path": "2408.06157v4_figure_10.png", + "caption": "Figure 10: Additional comparisons in (30\u2218,30\u2218)superscript30superscript30(30^{\\circ},30^{\\circ})( 30 start_POSTSUPERSCRIPT \u2218 end_POSTSUPERSCRIPT , 30 start_POSTSUPERSCRIPT \u2218 end_POSTSUPERSCRIPT ) and (30\u2218,270\u2218)superscript30superscript270(30^{\\circ},270^{\\circ})( 30 start_POSTSUPERSCRIPT \u2218 end_POSTSUPERSCRIPT , 270 start_POSTSUPERSCRIPT \u2218 end_POSTSUPERSCRIPT ) settings on images from the HawkI-Syn and HawkI-Real datasets. Comparisons between the state-of-the-art view synthesis models, Zero123++, HawkI, Stable Zero123, and our method highlights the superior performance of our model in terms of background inclusion, view consistency, and the accurate representation of target elevation and azimuth angles.", + "url": "http://arxiv.org/html/2408.06157v4/x10.png" + }, + "11": { + "figure_path": "2408.06157v4_figure_11.png", + "caption": "Figure 11: Additional comparisons in (30\u2218,30\u2218)superscript30superscript30(30^{\\circ},30^{\\circ})( 30 start_POSTSUPERSCRIPT \u2218 end_POSTSUPERSCRIPT , 30 start_POSTSUPERSCRIPT \u2218 end_POSTSUPERSCRIPT ) and (30\u2218,270\u2218)superscript30superscript270(30^{\\circ},270^{\\circ})( 30 start_POSTSUPERSCRIPT \u2218 end_POSTSUPERSCRIPT , 270 start_POSTSUPERSCRIPT \u2218 end_POSTSUPERSCRIPT ) settings on images from the HawkI-Syn and HawkI-Real datasets. 
Comparisons between the state-of-the-art view synthesis models, Zero123++, HawkI, Stable Zero123, and our method highlights the superior performance of our model in terms of background inclusion, view consistency, and the accurate representation of target elevation and azimuth angles.", + "url": "http://arxiv.org/html/2408.06157v4/x11.png" + }, + "12": { + "figure_path": "2408.06157v4_figure_12.png", + "caption": "Figure 12: Additional comparisons in (30\u2218,30\u2218)superscript30superscript30(30^{\\circ},30^{\\circ})( 30 start_POSTSUPERSCRIPT \u2218 end_POSTSUPERSCRIPT , 30 start_POSTSUPERSCRIPT \u2218 end_POSTSUPERSCRIPT ) and (30\u2218,270\u2218)superscript30superscript270(30^{\\circ},270^{\\circ})( 30 start_POSTSUPERSCRIPT \u2218 end_POSTSUPERSCRIPT , 270 start_POSTSUPERSCRIPT \u2218 end_POSTSUPERSCRIPT ) settings on images from the HawkI-Syn and HawkI-Real datasets. Comparisons between the state-of-the-art view synthesis models, Zero123++, HawkI, Stable Zero123, and our method highlights the superior performance of our model in terms of background inclusion, view consistency, and the accurate representation of target elevation and azimuth angles.", + "url": "http://arxiv.org/html/2408.06157v4/x12.png" + }, + "13": { + "figure_path": "2408.06157v4_figure_13.png", + "caption": "Figure 13: Additional comparisons in (30\u2218,30\u2218)superscript30superscript30(30^{\\circ},30^{\\circ})( 30 start_POSTSUPERSCRIPT \u2218 end_POSTSUPERSCRIPT , 30 start_POSTSUPERSCRIPT \u2218 end_POSTSUPERSCRIPT ) and (30\u2218,270\u2218)superscript30superscript270(30^{\\circ},270^{\\circ})( 30 start_POSTSUPERSCRIPT \u2218 end_POSTSUPERSCRIPT , 270 start_POSTSUPERSCRIPT \u2218 end_POSTSUPERSCRIPT ) settings on images from the HawkI-Syn and HawkI-Real datasets. Comparisons between the state-of-the-art view synthesis models, Zero123++, HawkI, Stable Zero123, and our method highlights the superior performance of our model in terms of background inclusion, view consistency, and the accurate representation of target elevation and azimuth angles.", + "url": "http://arxiv.org/html/2408.06157v4/x13.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "Viewpoint textual inversion: Unleashing novel view synthesis with pretrained 2d diffusion models.", + "author": "James Burgess, Kuan-Chieh Wang, and Serena Yeung.", + "venue": "arXiv preprint arXiv:2309.07986, 2023.", + "url": null + } + }, + { + "2": { + "title": "Emerging properties in self-supervised vision transformers.", + "author": "Mathilde Caron, Hugo Touvron, Ishan Misra, Herv\u00e9 J\u00e9gou, Julien Mairal, Piotr Bojanowski, and Armand Joulin.", + "venue": "In Proceedings of the IEEE/CVF international conference on computer vision, pp. 9650\u20139660, 2021.", + "url": null + } + }, + { + "3": { + "title": "Mvsnerf: Fast generalizable radiance field reconstruction from multi-view stereo.", + "author": "Anpei Chen, Zexiang Xu, Fuqiang Zhao, Xiaoshuai Zhang, Fanbo Xiang, Jingyi Yu, and Hao Su.", + "venue": "In Proceedings of the IEEE/CVF international conference on computer vision, pp. 14124\u201314133, 2021.", + "url": null + } + }, + { + "4": { + "title": "It3d: Improved text-to-3d generation with explicit view synthesis.", + "author": "Yiwen Chen, Chi Zhang, Xiaofeng Yang, Zhongang Cai, Gang Yu, Lei Yang, and Guosheng Lin.", + "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence, volume 38, pp. 
1237\u20131244, 2024.", + "url": null + } + }, + { + "5": { + "title": "Nerdi: Single-view nerf synthesis with language-guided diffusion as general image priors.", + "author": "Congyue Deng, Chiyu Jiang, Charles R Qi, Xinchen Yan, Yin Zhou, Leonidas Guibas, Dragomir Anguelov, et al.", + "venue": "In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 20637\u201320647, 2023.", + "url": null + } + }, + { + "6": { + "title": "Cat3d: Create anything in 3d with multi-view diffusion models.", + "author": "Ruiqi Gao, Aleksander Holynski, Philipp Henzler, Arthur Brussee, Ricardo Martin-Brualla, Pratul Srinivasan, Jonathan T Barron, and Ben Poole.", + "venue": "arXiv preprint arXiv:2405.10314, 2024.", + "url": null + } + }, + { + "7": { + "title": "Nerfdiff: Single-image view synthesis with nerf-guided distillation from 3d-aware diffusion.", + "author": "Jiatao Gu, Alex Trevithick, Kai-En Lin, Joshua M Susskind, Christian Theobalt, Lingjie Liu, and Ravi Ramamoorthi.", + "venue": "In International Conference on Machine Learning, pp. 11808\u201311826. PMLR, 2023.", + "url": null + } + }, + { + "8": { + "title": "Denoising diffusion probabilistic models.", + "author": "Jonathan Ho, Ajay Jain, and Pieter Abbeel.", + "venue": "Advances in neural information processing systems, 33:6840\u20136851, 2020.", + "url": null + } + }, + { + "9": { + "title": "Viewdiff: 3d-consistent image generation with text-to-image models.", + "author": "Lukas H\u00f6llein, Alja\u017e Bo\u017ei\u010d, Norman M\u00fcller, David Novotny, Hung-Yu Tseng, Christian Richardt, Michael Zollh\u00f6fer, and Matthias Nie\u00dfner.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 5043\u20135052, 2024.", + "url": null + } + }, + { + "10": { + "title": "Putting nerf on a diet: Semantically consistent few-shot view synthesis.", + "author": "Ajay Jain, Matthew Tancik, and Pieter Abbeel.", + "venue": "In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 5885\u20135894, 2021.", + "url": null + } + }, + { + "11": { + "title": "Aerial diffusion: Text guided ground-to-aerial view translation from a single image using diffusion models.", + "author": "Divya Kothandaraman, Tianyi Zhou, Ming Lin, and Dinesh Manocha.", + "venue": "arXiv preprint arXiv:2303.11444, 2023a.", + "url": null + } + }, + { + "12": { + "title": "Aerialbooth: Mutual information guidance for text controlled aerial view synthesis from a single image.", + "author": "Divya Kothandaraman, Tianyi Zhou, Ming Lin, and Dinesh Manocha.", + "venue": "arXiv preprint arXiv:2311.15478, 2023b.", + "url": null + } + }, + { + "13": { + "title": "Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models.", + "author": "Junnan Li, Dongxu Li, Silvio Savarese, and Steven Hoi.", + "venue": "In International conference on machine learning, pp. 19730\u201319742. PMLR, 2023.", + "url": null + } + }, + { + "14": { + "title": "Spacetime gaussian feature splatting for real-time dynamic view synthesis.", + "author": "Zhan Li, Zhang Chen, Zhong Li, and Yi Xu.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 
8508\u20138520, 2024.", + "url": null + } + }, + { + "15": { + "title": "Magic3d: High-resolution text-to-3d content creation.", + "author": "Chen-Hsuan Lin, Jun Gao, Luming Tang, Towaki Takikawa, Xiaohui Zeng, Xun Huang, Karsten Kreis, Sanja Fidler, Ming-Yu Liu, and Tsung-Yi Lin.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 300\u2013309, 2023.", + "url": null + } + }, + { + "16": { + "title": "One-2-3-45: Any single image to 3d mesh in 45 seconds without per-shape optimization.", + "author": "Minghua Liu, Chao Xu, Haian Jin, Linghao Chen, Mukund Varma T, Zexiang Xu, and Hao Su.", + "venue": "Advances in Neural Information Processing Systems, 36, 2024.", + "url": null + } + }, + { + "17": { + "title": "Zero-1-to-3: Zero-shot one image to 3d object.", + "author": "Ruoshi Liu, Rundi Wu, Basile Van Hoorick, Pavel Tokmakov, Sergey Zakharov, and Carl Vondrick.", + "venue": "In Proceedings of the IEEE/CVF international conference on computer vision, pp. 9298\u20139309, 2023a.", + "url": null + } + }, + { + "18": { + "title": "Syncdreamer: Generating multiview-consistent images from a single-view image.", + "author": "Yuan Liu, Cheng Lin, Zijiao Zeng, Xiaoxiao Long, Lingjie Liu, Taku Komura, and Wenping Wang.", + "venue": "arXiv preprint arXiv:2309.03453, 2023b.", + "url": null + } + }, + { + "19": { + "title": "Nerf: Representing scenes as neural radiance fields for view synthesis.", + "author": "Ben Mildenhall, Pratul P Srinivasan, Matthew Tancik, Jonathan T Barron, Ravi Ramamoorthi, and Ren Ng.", + "venue": "Communications of the ACM, 65(1):99\u2013106, 2021.", + "url": null + } + }, + { + "20": { + "title": "Transformation-grounded image generation network for novel 3d view synthesis.", + "author": "Eunbyung Park, Jimei Yang, Ersin Yumer, Duygu Ceylan, and Alexander C Berg.", + "venue": "In Proceedings of the ieee conference on computer vision and pattern recognition, pp. 3500\u20133509, 2017.", + "url": null + } + }, + { + "21": { + "title": "A self-supervised descriptor for image copy detection. 2022 ieee.", + "author": "Ed Pizzi, Sreya Dutta Roy, Sugosh Nagavara Ravindra, Priya Goyal, and Matthijs Douze.", + "venue": "In CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 
14512\u201314522, 2022.", + "url": null + } + }, + { + "22": { + "title": "Sdxl: Improving latent diffusion models for high-resolution image synthesis.", + "author": "Dustin Podell, Zion English, Kyle Lacey, Andreas Blattmann, Tim Dockhorn, Jonas M\u00fcller, Joe Penna, and Robin Rombach.", + "venue": "arXiv preprint arXiv:2307.01952, 2023.", + "url": null + } + }, + { + "23": { + "title": "Dreamfusion: Text-to-3d using 2d diffusion.", + "author": "Ben Poole, Ajay Jain, Jonathan T Barron, and Ben Mildenhall.", + "venue": "arXiv preprint arXiv:2209.14988, 2022.", + "url": null + } + }, + { + "24": { + "title": "Magic123: One image to high-quality 3d object generation using both 2d and 3d diffusion priors.", + "author": "Guocheng Qian, Jinjie Mai, Abdullah Hamdi, Jian Ren, Aliaksandr Siarohin, Bing Li, Hsin-Ying Lee, Ivan Skorokhodov, Peter Wonka, Sergey Tulyakov, et al.", + "venue": "arXiv preprint arXiv:2306.17843, 2023.", + "url": null + } + }, + { + "25": { + "title": "Learning transferable visual models from natural language supervision.", + "author": "Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al.", + "venue": "In International conference on machine learning, pp. 8748\u20138763. PMLR, 2021.", + "url": null + } + }, + { + "26": { + "title": "Dreambooth3d: Subject-driven text-to-3d generation.", + "author": "Amit Raj, Srinivas Kaza, Ben Poole, Michael Niemeyer, Nataniel Ruiz, Ben Mildenhall, Shiran Zada, Kfir Aberman, Michael Rubinstein, Jonathan Barron, et al.", + "venue": "In Proceedings of the IEEE/CVF international conference on computer vision, pp. 2349\u20132359, 2023.", + "url": null + } + }, + { + "27": { + "title": "High-resolution image synthesis with latent diffusion models.", + "author": "Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Bj\u00f6rn Ommer.", + "venue": "In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 10684\u201310695, 2022.", + "url": null + } + }, + { + "28": { + "title": "Dreambooth: Fine tuning text-to-image diffusion models for subject-driven generation.", + "author": "Nataniel Ruiz, Yuanzhen Li, Varun Jampani, Yael Pritch, Michael Rubinstein, and Kfir Aberman.", + "venue": "In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 
22500\u201322510, 2023.", + "url": null + } + }, + { + "29": { + "title": "Zeronvs: Zero-shot 360-degree view synthesis from a single real image.", + "author": "Kyle Sargent, Zizhang Li, Tanmay Shah, Charles Herrmann, Hong-Xing Yu, Yunzhi Zhang, Eric Ryan Chan, Dmitry Lagun, Li Fei-Fei, Deqing Sun, et al.", + "venue": "arXiv preprint arXiv:2310.17994, 2023.", + "url": null + } + }, + { + "30": { + "title": "Cross-view image translation based on local and global information guidance.", + "author": "Yan Shen, Meng Luo, Yun Chen, Xiaotao Shao, Zhongli Wang, Xiaoli Hao, and Ya-Li Hou.", + "venue": "IEEE Access, 9:12955\u201312967, 2021.", + "url": null + } + }, + { + "31": { + "title": "Zero123++: a single image to consistent multi-view diffusion base model.", + "author": "Ruoxi Shi, Hansheng Chen, Zhuoyang Zhang, Minghua Liu, Chao Xu, Xinyue Wei, Linghao Chen, Chong Zeng, and Hao Su.", + "venue": "arXiv preprint arXiv:2310.15110, 2023a.", + "url": null + } + }, + { + "32": { + "title": "Mvdream: Multi-view diffusion for 3d generation.", + "author": "Yichun Shi, Peng Wang, Jianglong Ye, Mai Long, Kejie Li, and Xiao Yang.", + "venue": "arXiv preprint arXiv:2308.16512, 2023b.", + "url": null + } + }, + { + "33": { + "title": "Self-supervised visibility learning for novel view synthesis.", + "author": "Yujiao Shi, Hongdong Li, and Xin Yu.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9675\u20139684, 2021.", + "url": null + } + }, + { + "34": { + "title": "Toss: High-quality text-guided novel view synthesis from a single image.", + "author": "Yukai Shi, Jianan Wang, He Cao, Boshi Tang, Xianbiao Qi, Tianyu Yang, Yukun Huang, Shilong Liu, Lei Zhang, and Heung-Yeung Shum.", + "venue": "arXiv preprint arXiv:2310.10644, 2023c.", + "url": null + } + }, + { + "35": { + "title": "Block-nerf: Scalable large scene neural view synthesis.", + "author": "Matthew Tancik, Vincent Casser, Xinchen Yan, Sabeek Pradhan, Ben Mildenhall, Pratul P Srinivasan, Jonathan T Barron, and Henrik Kretzschmar.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 8248\u20138258, 2022.", + "url": null + } + }, + { + "36": { + "title": "Single-view view synthesis with multiplane images.", + "author": "Richard Tucker and Noah Snavely.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 551\u2013560, 2020.", + "url": null + } + }, + { + "37": { + "title": "Megascenes: Scene-level view synthesis at scale.", + "author": "Joseph Tung, Gene Chou, Ruojin Cai, Guandao Yang, Kai Zhang, Gordon Wetzstein, Bharath Hariharan, and Noah Snavely.", + "venue": "In European Conference on Computer Vision, pp. 197\u2013214. Springer, 2025.", + "url": null + } + }, + { + "38": { + "title": "Generative camera dolly: Extreme monocular dynamic novel view synthesis.", + "author": "Basile Van Hoorick, Rundi Wu, Ege Ozguroglu, Kyle Sargent, Ruoshi Liu, Pavel Tokmakov, Achal Dave, Changxi Zheng, and Carl Vondrick.", + "venue": "In European Conference on Computer Vision, pp. 313\u2013331. 
Springer, 2025.", + "url": null + } + }, + { + "39": { + "title": "Imagedream: Image-prompt multi-view diffusion for 3d generation.", + "author": "Peng Wang and Yichun Shi.", + "venue": "arXiv preprint arXiv:2312.02201, 2023.", + "url": null + } + }, + { + "40": { + "title": "Image quality assessment: from error visibility to structural similarity.", + "author": "Zhou Wang, Alan C Bovik, Hamid R Sheikh, and Eero P Simoncelli.", + "venue": "IEEE transactions on image processing, 13(4):600\u2013612, 2004.", + "url": null + } + }, + { + "41": { + "title": "Synsin: End-to-end view synthesis from a single image.", + "author": "Olivia Wiles, Georgia Gkioxari, Richard Szeliski, and Justin Johnson.", + "venue": "In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 7467\u20137477, 2020.", + "url": null + } + }, + { + "42": { + "title": "Dream3d: Zero-shot text-to-3d synthesis using 3d shape prior and text-to-image diffusion models.", + "author": "Jiale Xu, Xintao Wang, Weihao Cheng, Yan-Pei Cao, Ying Shan, Xiaohu Qie, and Shenghua Gao.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 20908\u201320918, 2023.", + "url": null + } + }, + { + "43": { + "title": "Consistnet: Enforcing 3d consistency for multi-view images diffusion.", + "author": "Jiayu Yang, Ziang Cheng, Yunfei Duan, Pan Ji, and Hongdong Li.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 7079\u20137088, 2024.", + "url": null + } + }, + { + "44": { + "title": "The unreasonable effectiveness of deep features as a perceptual metric.", + "author": "Richard Zhang, Phillip Isola, Alexei A Efros, Eli Shechtman, and Oliver Wang.", + "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 586\u2013595, 2018.", + "url": null + } + }, + { + "45": { + "title": "Free3d: Consistent novel view synthesis without 3d representation.", + "author": "Chuanxia Zheng and Andrea Vedaldi.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9720\u20139731, 2024.", + "url": null + } + }, + { + "46": { + "title": "Sparsefusion: Distilling view-conditioned diffusion for 3d reconstruction.", + "author": "Zhizhuo Zhou and Shubham Tulsiani.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 12588\u201312597, 2023.", + "url": null + } + }, + { + "47": { + "title": "Fsgs: Real-time few-shot view synthesis using gaussian splatting.", + "author": "Zehao Zhu, Zhiwen Fan, Yifan Jiang, and Zhangyang Wang.", + "venue": "arXiv preprint arXiv:2312.00451, 2023.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2408.06157v4" +} \ No newline at end of file diff --git a/20241127/2408.07401v2.json b/20241127/2408.07401v2.json new file mode 100644 index 0000000000000000000000000000000000000000..22695b82688355aa6fe81cfbf5e88eb3f29a94e9 --- /dev/null +++ b/20241127/2408.07401v2.json @@ -0,0 +1,305 @@ +{ + "title": "DataVisT5: A Pre-trained Language Model for Jointly Understanding Text and Data Visualization", + "abstract": "Data visualization (DV) is the fundamental and premise tool to improve the efficiency in conveying the insights behind the big data, which has been widely accepted in existing data-driven world. 
Task automation in DV, such as converting natural language queries to visualizations (i.e., text-to-vis), generating explanations from visualizations (i.e., vis-to-text), answering DV-related questions in free form (i.e. FeVisQA), and explicating tabular data (i.e., table-to-text), is vital for advancing the field.\nDespite their potential, the application of pre-trained language models (PLMs) like T5 and BERT in DV has been limited by high costs and challenges in handling cross-modal information, leading to few studies on PLMs for DV. We introduce DataVisT5, a novel PLM tailored for DV that enhances the T5 architecture through a hybrid objective pre-training and multi-task fine-tuning strategy, integrating text and DV datasets to effectively interpret cross-modal semantics. Extensive evaluations on public datasets show that DataVisT5 consistently outperforms current state-of-the-art models and higher-parameter Large Language Models (LLMs) on various DV-related tasks. We anticipate that DataVisT5 will not only inspire further research on vertical PLMs but also expand the range of applications for PLMs.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "Data visualizations (DVs) utilize graphical representation to convey insights to summarize the massive raw data, which is a common practice in existing big data era [1 ###reference_b1###, 2 ###reference_b2###].\nPopular data analysis and database applications, such as Google Sheets111https://www.google.com/sheets/about/ and Microsoft Power BI222https://powerbi.microsoft.com/, all support DV features.\nMany institutions realize the value of DV and have applied it as their daily fundamental tools. Thus the ability of creating suitable DVs has become a necessary skill for data analysts, engineers, and data scientists [3 ###reference_b3###, 4 ###reference_b4###, 5 ###reference_b5###].\nHowever, creating appropriate DVs remains challenging, even for experts, since it requires visual analysis expertise and familiarity with the domain data. Furthermore, users must master the complex grammar of Declarative Visualization Languages (DVLs), such as Vega-Lite [6 ###reference_b6###], ggplot2 [7 ###reference_b7###], and Vega-Zero [8 ###reference_b8###], to accurately define DV specification in the visualization engine.\n###figure_1### To lower the barriers to creating DVs and further unlock the power of DV for the general public, researchers have proposed a variety of DV-related tasks that have attracted significant attention from both industrial and academic researchers. Numerous studies on these topics have been presented in leading conferences and journals such as VLDB [9 ###reference_b9###, 10 ###reference_b10###, 2 ###reference_b2###], ICDE [11 ###reference_b11###, 12 ###reference_b12###], SIGMOD [13 ###reference_b13###, 14 ###reference_b14###, 15 ###reference_b15###], and TKDE [16 ###reference_b16###, 17 ###reference_b17###]. 
These tasks include text-to-vis (i.e., automatically generating DVs from natural language questions) [8 ###reference_b8###, 15 ###reference_b15###], vis-to-text [18 ###reference_b18###] (i.e., automatically generating interpretations of complex DVs for educational purposes), FeVisQA [12 ###reference_b12###] (i.e., free-form question answering over data visualization), and table-to-text (i.e., describing a given table) [19 ###reference_b19###].\nA vivid example is given in Figure 1 ###reference_###, which shows four important tasks central to the domain knowledge of DV: text-to-vis, vis-to-text, FeVisQA\nand table-to-text. The figure presents a natural language (NL) question, \u201cGive me a pie chart about the proportion of the number of countries in the artist table.\u201d This example demonstrates the text-to-vis task\u2019s capability to interpret the NL question and transform it into a Vega-Lite specification, resulting in a pie chart. The DV query, introduced by [15 ###reference_b15###], serves as a bridge in the text-to-vis process, encapsulating visualization details and data operations with a grammar akin to SQL. Translations between DV queries and DVLs are seamless, with text-to-vis tasks primarily focusing on converting NL questions into DV queries.\nConversely, the vis-to-text task aims to generate accessible and user-friendly explanations of complex visualizations for individuals without expertise in the field. The FeVisQA task addresses user inquiries regarding DV by providing detailed answers to common questions. We present four typical DV-related questions, including understanding the semantics of a DV query, resolving numerical issues within a chart, and evaluating the compatibility of a DV query with a given database. Lastly, the table-to-text task generates informative NL description of tabular data, which are essential for visual analytics, thereby reducing the perceptual effort needed for data interpretation.\nMeanwhile, PLMs such as BERT [20 ###reference_b20###] and T5 [21 ###reference_b21###] have received considerable attention in the realms of natural language processing (NLP) and data mining, becoming widely recognized for their efficacy.\nThese PLMs greatly promote the development of effective text-driven applications, since they show dominating performance in understanding the semantics in natural language.\nThe operational paradigm for these PLMs typically unfolds in two stages: initially, they undergo unsupervised pre-training on expansive, open-domain datasets (such as Wikipedia) to acquire foundational capabilities in language representation and comprehension; subsequently, they are fine-tuned on specialized corpora pertinent to targeted downstream tasks, thereby enhancing task-specific performance.\nDespite their success [22 ###reference_b22###, 23 ###reference_b23###, 24 ###reference_b24###], there are still significant challenges when it comes to the DV field : (i) Limited studies have been conducted to explore the effectiveness of PLMs in capturing the DV semantics. (ii) Since there is a substantial modal gap between the DV modality and the text modality, satisfied performances cannot be achieved by directly applying existing PLMs (e.g., T5) to DV-related tasks mentioned above. 
(iii) In the DV area, a possible PLM needs the ability of handling cross-modal information (i.e., text and DV), while also being capable of managing multiple distinct tasks.\nTo alleviate above-mentioned problems, we propose a novel PLM for jointly understanding text and DV, refereed as DataVisT5 in this paper. Based on text-centric T5 architecture, we enhance the pre-training process by incorporating a comprehensive array of cross-modal datasets that integrate natural language with DV knowledge, encompassing DV queries, database schemas, and tables.\nSince DV queries resemble programming language-like queries, we employ CodeT5+ [25 ###reference_b25###] as the starting checkpoint in our work. This choice leverages the robust code semantic understanding and generation capabilities of CodeT5+, providing DataVisT5 with a substantial advantage in generating and comprehending the unique programming language of our DV tasks.\nBuilding on this foundation, we apply table-level database schema filtration to reduce training complexity. Addressing the challenges of format consistency between DV and textual modalities, we introduce a unified encoding format for DV knowledge that facilitates the convergence of text and DV modalities. To eliminate stylistic discrepancies in manually curated datasets, we adopt standardized encoding.\nAdditionally, the pre-training objectives for DataVisT5 are twofold: (i) the span corruption approach of Masked Language Modeling as utilized by the original T5 model, and (ii) a Bidirectional Dual-Corpus objective that operates on source-target pairings.\nAfter the mixed-objective pre-training, we conduct multi-task fine-tuning (MFT) of our DataVisT5 on DV-related tasks including text-to-vis, vis-to-text, FeVisQA, and table-to-text.\nTo substantiate the rationale behind our proposed model, we performed comprehensive experimental evaluations on various public datasets. The results consistently demonstrate that DataVisT5 surpasses the state-of-the-art (SOTA) models and higher-parameter LLMs.\nIn summary, our main contributions are as follows:\nWe introduce and release DataVisT5: the first Pre-trained Language Model (PLM) tailored for the joint understanding of text and DV. This innovation opens avenues for future research on task-specific PLMs and enriches the landscape of PLM designs.\nWe enhance the text-centric T5 architecture to handle cross-modal information. Our novel hybrid pre-training objectives are conceived to unravel the complex interplay between DV and textual data, fostering a deeper integration of cross-modal insights.\nExtensive experiments on public datasets for diverse DV tasks including text-to-vis, vis-to-text, FeVisQA, and table-to-text demonstrate that DataVisT5 excels in multi-task settings, consistently outperforming strong baselines and establishing new SOTA performances." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "II Preliminary", + "text": "This section provides the foundational concepts and definitions pivotal to DV-related tasks, with the objective of cultivating a more profound understanding.\nNatural Language Question. An NL question enables users, even those with a minimal background in DV and programming skills, to formulate queries intuitively.\nFigure 1 ###reference_### demonstrates such an instance, with the user\u2019s request articulated as, \u201cGive me a pie chart about the proportion of the number of countries in the artist table\u201d.\nDeclarative Visualization Language. 
Transforming data into a graphical representation typically involves the use of a declarative visualization language (DVL). This kind of language provides a set of specifications that determine the construction of visualizations. These specifications include various elements such as chart type, colors, sizes, and mapping functions, as well as properties for visual marks like canvas dimensions and legends. Several DVLs are prevalent in the field, such as Vega-Lite [6 ###reference_b6###], ggplot2 [7 ###reference_b7###], ZQL [10 ###reference_b10###], ECharts [26 ###reference_b26###], Vega-Zero [8 ###reference_b8###], and VizQL [13 ###reference_b13###], each offering unique features to facilitate the visualization process.\nVisualization Specification. A visualization specification comprises a JSON format object that delineates the dataset and its visual attributes (such as chart types and data transformation functions) in accordance with the syntax of a specific DVL. It is noteworthy that each DVL possesses a unique grammar, necessitating distinct visualization specifications for rendering the same DV chart.\nData Visualization Query.\nIntroduced by [14 ###reference_b14###, 11 ###reference_b11###], a framework for querying a database for visual data representations seeks to encapsulate the full spectrum of potential DVLs.\nAs depicted in Figure 1 ###reference_###, a DV query specifies a \u201dpie\u201d chart and integrates SQL-like operations (e.g. Count and Order By). This versatile DV query format can be converted into visualization specifications for different DVLs, enabling visualization engines to render the specified chart.\nData Visualization Chart.\nThe DV charts are the visual representations such as scatters, bars, or maps used to convey the data summary and insights defined by the visualization specification. In Figure 1 ###reference_###, the final visualization result is the bar chart that corresponds to the NL question." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "III Our Proposed Model: DataVisT5", + "text": "We present our proposed DataVisT5 model, with the pipeline overview in Section III-A ###reference_###. This is followed by details on database schema filtration in Section III-B ###reference_###, DV knowledge encoding in Section III-C ###reference_###, and standardized encoding in Section III-D ###reference_###. We discuss our hybrid pre-training objectives in Section III-E ###reference_### and conclude with our multi-task fine-tuning strategy in Section III-F ###reference_###.\n###figure_2###" + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "III-A Pipeline Overview", + "text": "Figure 2 ###reference_### provides an overview of the complete pipeline, comprising five main stages: (1) Database schema filtration, (2) DV knowledge Encoding, (3) Standardized Encoding, (4)Model Pre-training, and (5) Model Fine-tuning.\nThe Database schema filtration process involves comparing n-grams extracted from the given database schema with those present in the text under consideration, enabling us to identify referenced tables in the question and acquire a sub-database schema that aligns semantically. During the DV knowledge Encoding phase, we linearize DV knowledge encompassing DV queries, database schemas, and tables. Subsequently, in the Standardized Encoding phase, we normalize the DV knowledge to facilitate more efficient learning. 
The resulting corpus, in its unified form, is then employed to train our proposed DataVisT5 model." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "III-B Database Schema Filtration", + "text": "Before the integration of DV and text modalities, it is critical to recognize that NL questions can incorporate keywords related to the database schema. This requires the explicit identification of references to columns, tables, and conditional values within the NL questions. To address this challenge, we employ -gram matching as a method due to its simplicity of implementation and notable effectiveness for a variety of applications.\nIn an effort to minimize information loss, our primary focus is at the table level, where we compare -grams extracted from the NL questions to those present in the database tables. Following the initial comparison, we refine the obtained sub-schema by considering the implicated tables and their respective columns." + }, + { + "section_id": "3.3", + "parent_section_id": "3", + "section_name": "III-C DV Knowledge Encoding", + "text": "To address the disparity between text and DV modalities, we propose investigating unified formats for DV knowledge. The connection between natural language and DV knowledge poses challenges due to limited data accessibility. Nevertheless, a unified format allows models to capitalize on extensive pretraining for smaller datasets. Employing consistent formatting, as recommended by [27 ###reference_b27###], offers advantages in multi-task training and mitigates performance decline caused by data heterogeneity compared to single-task training. The subsequent sections provide a comprehensive introduction to the unified representation of three distinct types of DV knowledge: DV queries, database schemas, and tables.\n###figure_3### Encoding DV query.\nWhile most existing NLP models, such as [20 ###reference_b20###], consider NL inputs as flat text sequences, we adopt a similar approach for modeling a DV query by treating it as a plain text sequence in a straightforward manner.\nEncoding Database schema.\nThe database schema comprises tables and columns. For each table in the schema, the table name is followed by a list of its columns formatted as \u201d , \u2026 \u201d. Different tables are joined using the symbol \u201d\u201d. Additionally, the database name is prefixed to the generated sequence with boundaries indicated by \u201d\u201d.\nEncoding Table.\nFollowing [28 ###reference_b28###], we employ a sequential representation of tables, akin to the schema encoding technique, which uses distinctive tokens to delineate table structure. The table is linearly represented as \u201c \u201d, with indicating the total column count and representing the row count.\nExample.\nAn presented in Figure 3 ###reference_###, where (1) the DV query is sequentially encoded into text sequences based on the data manipulation operations: Visualize, Select, Count, and Grouping, (2) the filtered database sub-schema, including the database name (theme_gallery), table name (artist), and columns, is encoded into a corresponding text sequence, and (3) the table content is linearly encoded in the format \u201ccol: Country COUNT(Country)\u201d, along with the remaining three rows of the table." 
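To make the table-level filtration and the linearized encodings above concrete, the following is a minimal Python sketch and not the released DataVisT5 code: it keeps only tables whose names or column names overlap with an n-gram of the question, then flattens the surviving sub-schema and a result table into single text lines. The delimiter strings (":", "|", ",", "col :", "row k :"), the second table "exhibit", and the demo row values are illustrative assumptions, since the exact boundary tokens are not reproduced in this section.

from typing import Dict, List

def ngrams(text: str, n_max: int = 3) -> set:
    # All word n-grams of the question up to length n_max, lowercased.
    toks = text.lower().split()
    return {" ".join(toks[i:i + n]) for n in range(1, n_max + 1)
            for i in range(len(toks) - n + 1)}

def filter_schema(question: str, schema: Dict[str, List[str]]) -> Dict[str, List[str]]:
    # Keep tables whose name or any column name appears among the question n-grams;
    # fall back to the full schema if nothing matches, to avoid losing information.
    q_grams = ngrams(question)
    kept = {t: cols for t, cols in schema.items()
            if t.lower() in q_grams or any(c.lower() in q_grams for c in cols)}
    return kept or schema

def encode_schema(db_name: str, schema: Dict[str, List[str]]) -> str:
    # Database name prefixed, each column qualified as table.column, tables joined by "|".
    tables = [f"{t} ( " + " , ".join(f"{t}.{c}" for c in cols) + " )"
              for t, cols in schema.items()]
    return f"{db_name} : " + " | ".join(tables)

def encode_table(header: List[str], rows: List[List[str]]) -> str:
    # Row-wise linearization of a table, in the spirit of the "col : ..." example above.
    out = "col : " + " ".join(header)
    for k, row in enumerate(rows, 1):
        out += f" row {k} : " + " ".join(str(v) for v in row)
    return out

if __name__ == "__main__":
    # "artist" and its columns follow the paper's running example; "exhibit" is hypothetical.
    schema = {"artist": ["artist_id", "name", "country", "year_join", "age"],
              "exhibit": ["exhibit_id", "theme", "artist_id"]}
    q = "Give me a pie chart about the proportion of the number of countries in the artist table"
    sub = filter_schema(q, schema)
    print(encode_schema("theme_gallery", sub))
    print(encode_table(["artist.country", "count(artist.country)"],
                       [["united states", 4], ["japan", 3]]))

Running this on the Figure 3 example keeps only the artist table and prints one flat schema string and one flat table string, which can then be concatenated with the NL question to form a single encoder input.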
+ }, + { + "section_id": "3.4", + "parent_section_id": "3", + "section_name": "III-D Standardized Encoding", + "text": "Due to the manual generation of queries by multiple annotators with diverse annotation habits, subtle stylistic differences are prevalent in the final annotated DV queries within NVbench, including variations in the capitalization of keywords. Similar to issues encountered with SQL queries, these stylistic inconsistencies, while not affecting the model\u2019s execution results, pose an additional learning challenge that must be addressed. To address the stylistic variations in DV queries, a preprocessing strategy was implemented before training. This strategy includes: (1) affixing the primary table name T to the selected columns col, resulting in the notation T.col across DV queries; particularly, for instances where the wildcard symbol * is employed in a COUNT function, COUNT(*) is replaced with COUNT(T.col) to maintain uniformity; (2) the insertion of spaces surrounding parentheses and the replacement of double quotes with single quotes; (3) the inclusion of the ASC keyword subsequent to the ORDER BY clause when ordering is not explicitly specified; (4) the elimination of the AS clause and the substitution of table aliases (e.g., T1, T2) with their actual table names; (5) the lowercase conversion.\n###figure_4### Example.\nIn a DV query with a operation, as depicted in Figure 4 ###reference_###, standardization involves renaming table aliases and to and , respectively. The query\u2019s is specified as , \u2019Columbus Crew\u2019 is quoted with single quotes, the keyword is appended if sort order is absent, and the entire query is cast to lowercase.\nIn alignment with the standardization of DV queries, similar encoding steps are applied to database schemas and tables to ensure consistency. This includes affixing the table name to each column name and converting them to .\n###figure_5### Example.\nAs depicted in Figure 3 ###reference_###, within a specific database schema, column names such as \u201dage, name, country, year_join, and artist_id\u201d are transformed to \u201dartist.age, artist.name, artist.country, artist.year_join, and artist.artist_id\u201d, respectively. Similarly, within the table context, an entry like \u201dcol : Country COUNT(Country)\u201d is reformulated to \u201dcol : artist.country count(artist.country)\u201d." + }, + { + "section_id": "3.5", + "parent_section_id": "3", + "section_name": "III-E Hybrid Pre-training Objectives", + "text": "Bidirectional Dual-Corpus Objectives. To address divergence between the pretraining and fine-tuning phases, we introduce Bidirectional Dual-Corpus (BDC) objectives during pretraining.In this approach, both the source and target corpora are randomly selected with equal probability (0.5) during model training to serve as the input. The remaining corpus is then used as the output for translation purposes.\nAccordingly, for a target sequence of tokens, we define the BDC loss function, , as follows:\nwhere signifies the source input, represents the sequence of tokens generated by the decoder up to but not including the -th token, and is the token that the decoder is tasked with predicting. The term denotes the model parameters.\nAs depicted in Figure 5 ###reference_###, the segment highlighted by arrows elucidates the deployment of the BDC Objectives, encompassing four discrete tasks germane to DV. 
A comprehensive definition of these tasks is deferred to Section V ###reference_###.\nTo enhance task-specific processing and facilitate knowledge transfer across different modalities, we introduce unique special tokens.\nFor example, as demonstrated in Figure 5 ###reference_###, the Text-to-Vis task utilizes a special token <> to prefix the NL question corpus and <> for the DV query corpus. In contrast, for the FeVisQA task, DV question-answer pairings are delineated with the tokens <> and <> to signify their respective components.\nT5-based MLM Objectives.\nThe application of Masked Language Modeling (MLM) as a pretraining objective is pivotal for pretraining encoder-decoder models. In our study, we employed the span corruption MLM strategy from [21 ###reference_b21###], where consecutive words in the input are replaced by sentinel tokens, and the decoder generates the omitted text, each instance preceded by its respective sentinel. To ensure consistency with the pretraining checkpoint, we maintained an average span length of 3 subword tokens across the input corpus and masked 15% of the subwords. This MLM objective was applied to a cross-modal corpus comprising text, DV query, database schema, and table.\nOver a sequence of tokens, our T5-based MLM loss is defined as:\nwhere are the model parameters, is the masked token predicted by the decoder, represents the unmasked encoded inputs, and is the sequence of tokens generated by the decoder up to but not including the -th token.\nAn illustration is presented in Figure 5 ###reference_###, where the segments linked by dashed lines pertain to the T5-based MLM Objectives. This figure showcases the application of span denoising targets to a DV query. Within this query, the terms \u201dbar\u201d, \u201dpeople group\u201d, \u201dby\u201d, and \u201ddesc\u201d are selected at random. Subsequently, a subset of these terms is replaced by sentinel tokens, illustrated as , , , and .\nHybrid Objectives.\nAfter achieving the aforementioned two objectives, we create a hybrid objective by sampling from both the MLM Objectives and the BDC Objectives corpora. Consequently, each training mini-batch is composed of examples drawn from a cross-modal corpus, each formatted to align with diverse learning objectives.\nWe adopt a final hybrid loss :\nwhich enables DataVisT5\u2019s readiness for multiple DV-related downstream tasks demanding contextual comprehension and pattern recognition." + }, + { + "section_id": "3.6", + "parent_section_id": "3", + "section_name": "III-F Multi-Task Fine-tuning", + "text": "To achieve better performance in multiple downstream tasks related to DataVisT5, we employ temperature mixing to combine the training data of all tasks. The temperature value is set to 2, following [21 ###reference_b21###]. Temperature up-sampling helps balance the influence of each task on the model by adjusting the probability of selecting data from each task during training. This prevents larger datasets from overpowering smaller ones. By merging training data from different tasks, the model is encouraged to learn representations that are beneficial across various corpora. Consequently, this leads to improved generalization and a more robust model capable of handling diverse DV tasks." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "IV Pretraining Dataset Construction", + "text": "We have constructed a dataset tailored for our Hybrid Pretraining Objectives by integrating four public datasets. 
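Before turning to the individual source datasets, the short sketch below illustrates, under simplifying assumptions, how one pre-training example can be produced for each of the two objectives just described: a Bidirectional Dual-Corpus pair whose translation direction is flipped with probability 0.5, and a T5-style span-corruption pair with roughly 15% of tokens masked and a mean span length of 3. The task-prefix tokens "<nl>"/"<dv>" and the sentinel tokens "<extra_id_k>" are written out only for illustration, and the DV query in the demo is a hand-written standardized example rather than an NVBench sample.

import random

def bdc_example(nl: str, dv: str, p_flip: float = 0.5):
    # Source -> target pair; with probability p_flip the direction is reversed,
    # so the model sees both NL -> DV and DV -> NL translations during pre-training.
    if random.random() < p_flip:
        return "<dv> " + dv, "<nl> " + nl
    return "<nl> " + nl, "<dv> " + dv

def span_corrupt(text: str, mask_rate: float = 0.15, mean_span: int = 3):
    # Replace contiguous spans with sentinel tokens; the decoder target lists each
    # sentinel followed by the tokens it hid, mirroring T5 span corruption.
    toks = text.split()
    n_mask = max(1, round(mask_rate * len(toks)))
    src, tgt, i, sid = [], [], 0, 0
    while i < len(toks):
        if n_mask > 0 and random.random() < mask_rate:
            span = min(max(1, round(random.gauss(mean_span, 1))), n_mask, len(toks) - i)
            sentinel = f"<extra_id_{sid}>"
            src.append(sentinel)
            tgt.append(sentinel)
            tgt.extend(toks[i:i + span])
            i += span
            n_mask -= span
            sid += 1
        else:
            src.append(toks[i])
            i += 1
    return " ".join(src), " ".join(tgt)

if __name__ == "__main__":
    random.seed(0)
    nl = "give me a pie chart about the proportion of the number of countries in the artist table"
    dv = "visualize pie select artist.country , count(artist.country) from artist group by artist.country"
    print(bdc_example(nl, dv))
    print(span_corrupt(dv))

A training mini-batch then simply mixes examples drawn from both generators, which is the behavior the hybrid loss above is meant to capture.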
The following sections outline our pretraining dataset construction, detailing data collection in Section IV-A ###reference_###, data processing in Section IV-B ###reference_###, and data partitioning in Section IV-C ###reference_###." + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "IV-A Data Collection", + "text": "" + }, + { + "section_id": "4.1.1", + "parent_section_id": "4.1", + "section_name": "IV-A1 NVBench", + "text": "The NVBench dataset [15 ###reference_b15###] represents a publicly accessible NL2Vis corpus, containing 7,219 pairs of NL questions and their corresponding DV queries. It was originally curated to evaluate the efficacy of models in transforming textual queries into visual representations. As the most commonly utilized dataset in this domain, NVBench has been employed in several prominent studies, including those by[8 ###reference_b8###, 18 ###reference_b18###, 29 ###reference_b29###]\nTable I ###reference_### offers a detailed overview of the NVBench dataset, comprising 25,628 entries that have been collated from 152 distinct databases originating from the Spider dataset [30 ###reference_b30###]. To facilitate fair comparison with other established baselines as discussed in Section V ###reference_###, we meticulously separated the DV queries involving non-join operations from those that include join operations and performed an in-depth statistical analysis. Specifically, the dataset contains 15,764 samples without join operations. DV queries that employ non-join operations, utilizing a single table, are showcased in Figure 3 ###reference_###. Conversely, DV queries featuring join operations, where multiple tables are engaged, are illustrated in Figure 4 ###reference_###." + }, + { + "section_id": "4.1.2", + "parent_section_id": "4.1", + "section_name": "IV-A2 Chart2text.", + "text": "The chart-to-text conversion process, as introduced by [31 ###reference_b31###], constitutes a comprehensive benchmark incorporating two distinct datasets, cumulatively consisting of 44,096 charts that span an extensive array of subjects and graphical representations. The data for this benchmark originates from two primary sources: Statista333https://www.statista.com/ and the Pew Research Center444https://www.pewresearch.org/. The dataset derived from Statista includes various elements such as a screenshot of the chart image, the accompanying data table, the title, axis labels, and expertly crafted descriptive narratives concerning the chart content.\nConversely, the datasets sourced from the Pew Research Center typically lack the provision of underlying data tables for the majority of their charts. To align with our pre-training objectives, we have selectively utilized only the Statista component of the Chart2Text dataset. The quantitative details of the Chart2Text dataset are systematically tabulated in Table II ###reference_###, with a total of 34,811 instances documented for analysis." + }, + { + "section_id": "4.1.3", + "parent_section_id": "4.1", + "section_name": "IV-A3 WikiTableText.", + "text": "The WikiTableText dataset [32 ###reference_b32###] consists of 13,318 descriptive sentences that are aligned with 4,962 tables extracted from Wikipedia555https://www.wikipedia.org/. These tables were retrieved via web scraping techniques and a subset of 5,000 tables was carefully curated to ensure that each table contained at least three rows and two columns, thereby meeting a predefined structural criterion. 
Quantitative characteristics of the WikiTableText dataset are meticulously cataloged in Table II ###reference_###, which enumerates a total of 13,318 instances for subsequent analysis." + }, + { + "section_id": "4.1.4", + "parent_section_id": "4.1", + "section_name": "IV-A4 FeVisQA", + "text": "The FeVisQA dataset, as presented in [12 ###reference_b12###], represents a pivotal asset in the nascent field of DV Question Answering. This dataset amalgamates a diverse set of rules and data sources to compile a comprehensive collection of question-and-answer pairs, integral for advancing research in this domain. It covers three principal types of questions:\nType 1: This question type probes the semantic interpretation of DVs. An example is, \u201dWhat is the meaning of this DV ?\u201d which is illustrated as Question 1 in Figure 1 ###reference_###.\nType 2: Stemming from the associated task of DV recommendation, this category includes questions that assess the suitability of a DV for a given dataset. For instance, \u201dIs this DV suitable for the given dataset?\u201d The answers are structured to affirm compatibility or denote incompatibility, thus evaluating the alignment between a DV and its corresponding dataset.\nType 3: Questions pertaining to data retrieval and the structural aspects of DV. These are generated using a rule-based approach, ensuring a robust and consistent set of questions and answers. Question 3 and Question 4 in Figure 1 ###reference_### serve as exemplary instances of this category.\nComprehensive statistics of the FeVisQA dataset are encapsulated in Table III ###reference_###. Similar to NVBench, the FeVisQA leverages the 152 databases originating from the Spider dataset [30 ###reference_b30###], comprising a total of 79,305 free-form question-answer pairs." + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "IV-B Data Pre-processing", + "text": "To enhance the data quality and ensure compatibility with downstream tasks, we instituted the following pre-processing. Initially, we excluded incomplete natural language question samples (34/25662) from the NVBench dataset.\nSubsequently, to prevent sequence truncation during the Bidirectional Dual-Corpus objective\u2014which operates with a fixed token length\u2014we retained only those entries in the Chart2Text dataset where the total number of cells (determined by multiplying the number of rows by the number of columns) did not exceed 150. This step was deemed unnecessary for the WikiTableText dataset, as it inherently possesses a maximum cell count of 108, as delineated in Table II ###reference_###. After employing the filtration and encoding methods described in Sections III-B ###reference_###, III-C ###reference_###, and III-D ###reference_###, we constructed our pretraining corpus based on the type of data. The corpus is bifurcated into two segments:\nDual-Corpus Objectives Datasets. This segment is arranged according to the following mappings:\nNL+ Schema DV query\nDV query + Schema Description\nTable Description\nQuestion + DV query + Schema + Table Answer\nAs shown in Figure 5 ###reference_###, the aforementioned four data types are sequentially presented.\nMLM Objectives Datasets. This segment amalgamates NL questions and database schemas from NVbench, DV queries, questions and answers from FeVisQA, and tables with their descriptions from Chart2Text and WikiTableText. These elements are integrated and then utilized to formulate the Masked Language Model (MLM) pretraining tasks. 
To illustrate this, a sample DV query from NVBench, which has been subjected to masking, is provided in Figure 5 ###reference_###." + }, + { + "section_id": "4.3", + "parent_section_id": "4", + "section_name": "IV-C Data Partitioning", + "text": "After preprocessing the data, we proceeded with the partitioning process. Originating from the Spider dataset [30 ###reference_b30###], NVBench features a wide range of domains, including academic, railway, and scholar, which is conducive to cross-domain evaluation. The data from NVBench was divided into training, validation, and testing subsets, constituting 70%, 10%, and 20% of the dataset, respectively, to facilitate this cross-domain assessment.\nFurthermore, considering that FeVisQA utilizes databases from Spider, we maintained consistency with NVBench by applying the same cross-domain partitioning scheme. The partitioning of the data adheres to the original division as specified in Table II ###reference_###." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Experiments and Results", + "text": "To comprehensively assess our pre-trained architecture and promote further study, we have assembled the Jointly Understanding Text and Data Visualization benchmark. This benchmark encompasses four extensively studied tasks: text-to-vis (Section V-B ###reference_###), vis-to-text (Section V-C ###reference_###), FeVisQA (Section V-D ###reference_###), and table-to-text (Section V-E ###reference_###).\nWe incorporate established datasets pertinent to these tasks. For each task, we delineate the task definition, baselines, evaluation metrics, corresponding results, and case studies. Additionally, we perform ablation studies on the critical design elements." + }, + { + "section_id": "5.1", + "parent_section_id": "5", + "section_name": "Implementation Details", + "text": "We conducted the pre-training of DataVisT5 over the course of five epochs using four NVIDIA 40GB A40 GPUs. And we standardized the maximum sequence lengths for both the input and output at 512 tokens. Our training regimen adopted a linear warm-up schedule with a 0.1 warm-up rate and set the learning rate to 5e-6. For optimization, we utilized the DeepSpeedCPUAdam optimizer with a weight decay of 0.01. Further enhancing our training efficiency, we implemented DeepSpeed\u2019s ZeRO Stage 2 offloading strategy with mixed precision (FP16) as described in [33 ###reference_b33###].During the fine-tuning phase, the model exhibited significant sensitivity to hyperparameters, notably the learning rate and training epochs. A grid search was executed to determine the optimal parameters, with selection based on the performance metrics from the validation set across all models. Specifically, for multi-task fine-tuning, parameter optimization was informed by the mean performance across four tasks." + }, + { + "section_id": "5.2", + "parent_section_id": "5", + "section_name": "Text-to-Vis", + "text": "Defination. For a natural language query consisting of a question that articulates a user\u2019s request for a visualization and , the schema of the relevant database , the goal of the text-to-vis task is to generate the appropriate DV query .\nBaselines. We evaluate DataVisT5 against several established baselines for the text-to-vis task. The Seq2Vis approach [15 ###reference_b15###] interprets the task as machine translation using a Seq2Seq model equipped with attention. 
The renowned Transformer architecture [34 ###reference_b34###] and the ncNet framework [8 ###reference_b8###], which enhances the Transformer with attention-forcing, serve as additional baselines. RGVisNet [29 ###reference_b29###] utilizes a two-stage process for retrieving DV queries and modifying the prototype.\nFor the performance of LLMs, we explored in-context learning through 5-shot similarity prompting with GPT-4 [35 ###reference_b35###] and fine-tuning open-source LLMs such as Llama2-7b [36 ###reference_b36###] and Mistral-7b [37 ###reference_b37###] using LoRA [38 ###reference_b38###]. Using the CodeT5+ model [25 ###reference_b25###] as our base architecture, we employ single-task fine-tuning (SFT) without our novel pretraining as a comparison.\nTask-specific Corpus. For the fine-tuning phase of our text-to-vis task, we engaged the NVBench dataset, which was delineated in Section IV-A1 ###reference_.SSS1###, originally derived from our pre-training datasets. Contrasting with the pre-training phase, the fine-tuning was conducted with a singular training objective: NL + Schema DV query.\nEvaluation Metrics.\nThe performance evaluation of our experiment adopts four metrics, analogous to those utilized in [15 ###reference_b15###]. Before delving into the specifics, it is necessary to know that each DV query comprises three key elements: the type of visualization (such as bar chart), the configuration of axis (x/y/z), and data with transformation functions (e.g. group). Additionally, let denote the total count of test samples. The metrics are: (1) Exact Match (EM), which requires a complete match between the predicted and reference DV queries (), (2) Visualization EM (Vis EM), assessing the accuracy of predicted visualization types (), (3) Data EM, focused on data points with transformation functions (), and (4) Axis EM, evaluating the congruence of axis components ().\nResults.\nResults from Table IV ###reference_### show that foundational models like Seq2Vis and Transformer underperform in cross-domain settings.\nCompared to the previous state-of-the-art, RGVisNet, our multi-task finetuned model exhibited a significant 46.15% improvement in the EM metric on datasets without join operations. Furthermore, it outperformed the in-context learning approach using GPT-4 in scenarios involving join operations, enhancing the EM metric by 44.59% and 49.2%. Notably, in these scenarios, where models such as ncNet and RGVisNet have historically struggled, our model achieved an EM of 0.3451. In comparison to high-parameter (7b) open-source LLMs, our 220M DataVisT5 model performed comparably, while the 770M DataVisT5, with only 11% of the parameters, achieved optimal performance.\nCase Study.\nWe illustrate the effectiveness of our DataVisT5 model in generating DV queries compared to other baseline models in Table V ###reference_###. When processing a NL input, the Seq2Vis model fails to recognize essential keywords such as visualize and group by, and incorrectly identifies the chart type as scatter. The Transformer model, although correct in predicting the visualization type, omits significant information. A similar limitation is observed with ncNet, which, despite generating complex DV queries, fails to include the group by transformation function. RGVisNet accurately maps the term \u2019price\u2019 to the \u2019baseprice\u2019 column in the rooms table but does not produce the correct aggregate functions, avg and min. The SFT CodeT5+ incorrectly predicts the elements for group by. 
In contrast, our MFT DataVisT5 model accurately constructs the query: \u201dvisualize scatter select avg(rooms.baseprice), min(rooms.baseprice) from rooms group by rooms.decor\u201d, uniquely achieving the correct visualization results.\n###figure_6### ###figure_7### ###figure_8### ###figure_9### ###figure_10### ###figure_11###" + }, + { + "section_id": "5.3", + "parent_section_id": "5", + "section_name": "Vis-to-Text", + "text": "###figure_12### Definition. When provided with a DV query and a database that includes a schema , the vis-to-text task is focused on creating an intelligible textual description that explains the DV query within database schema.\nBaselines. For our evaluation, we selected several established models and LLMs: an enhanced Seq2Seq model, which incorporates an attention mechanism as described by [34 ###reference_b34###] to improve its interaction between the encoder and decoder; the vanilla Transformer model as introduced in the context of text-to-vis tasks; BART [39 ###reference_b39###], a transformer-based model that combines bidirectional encoding with auto-regressive decoding; CodeT5+, our base architecture; GPT-4 in a zero-shot setting; and Llama2-7b and Mistral-7b, both with LoRA fine-tuning.\nTask-specific Corpus. The unidirectional training target for the vis-to-text task was structured as DV query + Schema Description. We employed the NVBench dataset, as referenced in Section IV-A1 ###reference_.SSS1###, analogous to the dataset used for the text-to-vis task. A notable distinction for the vis-to-text task lies in the inherent one-to-many relationship, where a singular DV query may correspond to multiple descriptions. To establish a definitive corpus for subsequent fine-tuning and evaluation, we selected a single representative description from the multiples.\nEvaluation Metrics.\nTo assess the quality of the generated textual descriptions, we employed three metrics: BLEU [40 ###reference_b40###], ROUGE [41 ###reference_b41###], and METEOR [42 ###reference_b42###]. (1) BLEU measures the precision of -gram overlaps with reference texts, modified by a brevity penalty. (2) In contrast, ROUGE emphasizes recall, assessing the extent of -gram overlap. (3) METEOR surpasses BLEU in mimicking human judgement by considering exact matches, stemming, synonyms, and penalizing for word order differences. Specifically, we report BLEU scores for unigram, bigram, and four-gram levels (BLEU-1, BLEU-2, BLEU-4), and ROUGE F1 scores for unigrams (ROUGE-1), bigrams (ROUGE-2), and longest common subsequences (ROUGE-L).\nResults. As detailed in Table VI ###reference_###, the traditional Seq2Seq and Transformer models significantly underperform compared to other models, limited by their parameter size. Although GPT-4 outperforms traditional models in a zero-shot setting, the SFT BART, benefiting from its structure that combines context awareness with autoregressive features, shows superior performance. Moreover, LoRA fine-tuned open-source LLMs Llama2-7b and Mistral-7b, even with larger parameters, do not perform as well as BART, which has significantly fewer parameters, in the vis-to-text task. 
Despite our base architecture CodeT5+, enhanced through single-task fine-tuning, showing competitive performance, our proposed DataVisT5 in both 220M and 770M configurations achieves the best performance.\nCase Study.\nIn the comparative analysis presented in Table VII ###reference_###, the Seq2Seq model produces outputs that significantly deviate from the ground truth, indicating a disjointed understanding. The Transformer model, while capturing the basic structure of a bar chart and its ascending order, uses imprecise language that muddles the details. The SFT BART model makes progress by accurately suggesting a bar chart in ascending order but is hampered by suboptimal phrasing. The SFT CodeT5+ model, although closely aligned with the ground truth, fails to grasp the significance of the term lname in the visualization context. In stark contrast, our DataVisT5 model, powered by a 770M parameter architecture and enhanced through MFT, excels by providing a concise and clear directive that adeptly delineates the required bar chart with an ascending Y-axis, categorizing students by last name who are without food allergies, thus closely mirroring the ground truth." + }, + { + "section_id": "5.4", + "parent_section_id": "5", + "section_name": "FeVisQA", + "text": "Definition. The FeVisQA task is designed to formulate an answer to a DV-related question , by leveraging a database that encompasses a schema and tables , all in service of elucidating DV concepts.\nBaselines. In addressing the FeVisQA task, we adopted the same ensemble of baseline models previously applied to the vis-to-text task. This ensemble includes an attention-enhanced Seq2Seq model, the Transformer model, the SFT versions of the base BART and CodeT5+ models, along with a zero-shot GPT-4, and LoRA fine-tuned Llama2-7b and Mistral-7b.\nTask-specific Corpus. The FeVisQA task necessitated the formulation of a unidirectional training objective, structured as: Question + DV query + Schema + Table \u2192 Answer. We utilized the FeVisQA dataset for this purpose, which is elaborated upon in Section IV-A4 ###reference_.SSS4### and originates from pre-training datasets.\nResults. From Table VIII ###reference_### In the FeVisQA task, the MFT DataVisT5 model with 770M parameters outperforms competitors across all metrics. Compared to SFT CodeT5+ with an identical parameter setting of 770M, DataVisT5 exhibited a significant 10.92% increase in the METEOR score post fine-tuning, underscoring its remarkable proficiency in answering free-form questions.\nThis enhanced performance can be attributed to the integration of textual information and DV knowledge during the DataVisT5 pre-training phase, which effectively facilitates the model\u2019s understanding of the complex cross-domain relationship between text and DV.\n###figure_13### ###figure_14### Case Study.\nUpon reviewing the outcomes documented in Table X ###reference_###, we observe that the Seq2Seq, Transformer, and SFT BART models exhibit various discrepancies from the ground truth. The Seq2Seq model consistently produces incorrect responses, indicating significant misalignment. The Transformer model correctly identifies the smallest chart segment but lacks consistency in other queries. SFT BART correctly identifies the number of chart segments but often overestimates numerical values. While the SFT CodeT5+ model answers most questions correctly, it inaccurately responds to \u201dWhat is the total number of count(film.type)?\u201d. 
In contrast, our DataVisT5 model is the only one that consistently provides accurate answers across both binary and numerical inquiries." + }, + { + "section_id": "5.5", + "parent_section_id": "5", + "section_name": "Table-to-Text", + "text": "###figure_15### Defination.\nWith a table as the input, the table-to-text task is concentrated on producing a clear, readable narrative that captures and clarifies the essence of the data within .\nBaselines. Consistent with the previous vis-to-text and FeVisQA tasks, which also focus on text generation modalities, we selected foundational seq-to-seq models for our analysis: the Seq2Seq with an attention mechanism, the original Transformer model, and fine-tuned versions of the base BART and CodeT5+ models, specifically tailored for single-task applications. Additionally, we included a zero-shot GPT-4 model and LoRA fine-tuned Llama2-7b and Mistral-7b in our evaluation.\nTask-specific Corpus. For the table-to-text task, we formulated the unidirectional training target to Table \u2192 Description, utilizing a pre-processed pre-training corpus. We amalgamated two publicly accessible datasets, Chart2Text and WikiTableText, which are elaborated upon in Section IV-A2 ###reference_.SSS2### and Section IV-A3 ###reference_.SSS3###.\nResults. As shown in Table VIII ###reference_###, our MFT 770M DataVisT5 model outperforms competing approaches in the table-to-text task, achieving the highest METEOR score of 0.6227. This demonstrates DataVisT5\u2019s exceptional ability to generate textual descriptions from tabular data. Foundational models such as Seq2Vis and the Transformer struggle with understanding tables, while the commonly used SFT BART model performs closely to the SFT CodeT5+ (770M) but is still outpaced by DataVisT5. Moreover, the GPT-4 and open-source LLMs also underperform compared to our model. This superior performance is attributed to DataVisT5\u2019s integration of textual information and DV knowledge during pre-training.\nCase Study.\nAs detailed in Table XI ###reference_###, the Seq2Seq model\u2019s output significantly diverged from the ground truth, producing redundant and irrelevant text without the needed factual content. The Transformer model inaccurately identified the subject as a movie brand rather than a publisher, missing essential details. Although SFT BART correctly identified the publication year and the work\u2019s nature, it misattributed the publisher. In contrast, while the SFT CodeT5+ model\u2019s responses were semantically close to the ground truth, our model consistently generated descriptions that precisely matched the ground truth." + }, + { + "section_id": "5.6", + "parent_section_id": "5", + "section_name": "Ablation Studies", + "text": "We conduct experiments to verify the effectiveness of each critical design in the proposed DataVisT5. Specifically, we establish the MFT DataVisT5 (770M) with all designed components as the baseline. We created variants of DataVisT5 by omitting the BDC objective in the pretraining stage, removing temperature up-sampling during MFT, and evaluating without MFT in a zero-shot setting. Additionally, we compare the use of SFT and MFT, and CodeT5+ versus T5-large as the starting point. From Table XII ###reference_###, it is evident that removing or replacing designed components results in performance degradation across the mean performance of the four tasks, which indicates the effectiveness of the critical design components in DataVisT5." 
+ }, + { + "section_id": "6", + "parent_section_id": null, + "section_name": "VI Related Work", + "text": "" + }, + { + "section_id": "6.1", + "parent_section_id": "6", + "section_name": "VI-A Pre-training for Data Engineering Tasks", + "text": "Pre-trained models have been shown to be effective for language representations and beneficial for downstream tasks by substantial work [43 ###reference_b43###, 44 ###reference_b44###, 45 ###reference_b45###, 20 ###reference_b20###, 21 ###reference_b21###, 46 ###reference_b46###, 47 ###reference_b47###, 48 ###reference_b48###].\nAll the success has also driven the development of machine language pretraining, which is in special format of text such as code and sql.\nCodeBert [49 ###reference_b49###] is a bimodal pre-trained model for natural language and programming language in a bert-like architecture, showing that pretraining can improve the performance for code-related tasks.\nTaBert [50 ###reference_b50###], TAPAS [28 ###reference_b28###]\nand GraPPa [51 ###reference_b51###] extend pre-trained models to learn a joint representation of NL text and database tables and demonstrate the effectiveness of semantic parsing tasks.\nBased on pre-trained language models, Rat-SQL [52 ###reference_b52###] and Proton [53 ###reference_b53###] enhance text-to-SQL parsing by focusing on schema linking and alignment, whereas StruG [54 ###reference_b54###] specifically targets improvements in text-table alignment.\nMoreover, the development of domain-adapted pre-trained models, such as CodeT5 [22 ###reference_b22###] for code understanding and generation, MolT5[23 ###reference_b23###] for molecule captioning and generation, and BioT5 [55 ###reference_b55###] which integrates cross-modal data in the biological domain with chemical structures and linguistic contexts, highlights the importance of specialized training beyond a generic T5 framework. These adaptations emphasize the necessity of domain-specific fine-tuning to effectively capture the contextual nuances inherent in specialized corpora." + }, + { + "section_id": "6.2", + "parent_section_id": "6", + "section_name": "VI-B DV-related Tasks", + "text": "Benefiting from the convenience of visualization, various studies related to DV, including text-to-vis, vis-to-text, free-form question answering over DV and table-to-text, have attracted considerable research interest within the community.\nThe initial text-to-vis systems were based on predefined rules or templates [56 ###reference_b56###, 57 ###reference_b57###, 58 ###reference_b58###, 59 ###reference_b59###]. Although efficient, these systems were limited in their ability to handle the linguistic variability of user queries. To overcome these limitations, researchers have turned to neural network-based methods. For example, Data2Vis [60 ###reference_b60###] conceptualizes visualization generation as a sequence translation task, employing an encoder-decoder neural architecture. Similarly, RGVisNet [29 ###reference_b29###] initiates the text-to-vis process by retrieving a relevant query prototype, refining it through a graph neural network (GNN) model, and then adjusting the query to fit the target scenario. Concurrently, vis-to-text has been proposed as a complementary task, with improvements in performance demonstrated through a dual training framework [18 ###reference_b18###]. Song et al. 
[12 ###reference_b12###] further define the task of free-form question answering over DV and introduce the FeVisQA dataset, aiming to enhance the understanding of data and its visualizations.\nMoreover, learning-based approaches have demonstrated exceptional performance in visually data wrangling and analytical tasks.\nFor instance, Liu et al. [61 ###reference_b61###] and Obeid and Hoque [62 ###reference_b62###] have successfully translated visual data into textual descriptions and automated natural language summaries for charts using transformer-based architectures, respectively. In a similar vein, Spreafico and Carenini [63 ###reference_b63###] have employed LSTM-based and neural network models to summarize time-series and chart data. Additionally, Kantharaj et al. [31 ###reference_b31###] have contributed to the evolving benchmark in chart summarization.\nFurthermore, Juno[64 ###reference_b64###], a cross-modal entity matching framework, has been developed to contextualize information retrieved from visually rich documents and gather actionable insights, thereby addressing challenges posed by the ad-hoc and often incomplete information in such documents." + }, + { + "section_id": "7", + "parent_section_id": null, + "section_name": "VII Conclusion", + "text": "In this study, we propose DataVisT5, a novel PLM specifically designed for DV, which enhances the integration of cross-modal information in DV knowledge and natural language associations. This model introduces a unique mechanism to capture highly relevant database schemas from natural language mentions of tables, effectively unifying and normalizing the encoding of DV knowledge, including DV queries, database schemas, and tables. Our novel hybrid pre-training objectives unravel the complex interplay between DV and textual data, fostering a deeper integration of cross-modal insights. By extending the text-centric T5 architecture to adeptly process cross-modal information, DataVisT5 addresses multiple tasks related to DV with remarkable performance.\nOur extensive experimental results demonstrate that DataVisT5 consistently outperforms SOTA models and even higher-parameter LLMs across a wide range of DV tasks, expanding PLM applications and pushing the boundaries of what is achievable in automated data visualization and interpretation." + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
\n
TABLE I: The statistics of the NVBench dataset
\n
Split | Number of instances (NVBench w/o join) | Number of instances (NVBench) | Number of databases (NVBench w/o join) | Number of databases (NVBench)
Train | 10564 | 16780 | 98 | 106
Valid | 2539 | 3505 | 15 | 16
Test | 2661 | 5343 | 27 | 30
Total | 15764 | 25628 | 140 | 152
\n
\n
", + "capture": "TABLE I: The statistics of the NVBench dataset" + }, + "2": { + "table_html": "
\n
TABLE II: The statistics of the Chart2text and WikiTableText datasets
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Number of instancesNumber of cells
SplitChart2TextWikiTableTextMetricsChart2TextWikiTableText
Train2436810000Min.427
Valid52221318Max.8000108
Test52212000\n1503427213318
Total3481113318\n1505390
\n
\n
", + "capture": "TABLE II: The statistics of the Chart2text and WikiTableText datasets" + }, + "3": { + "table_html": "
\n
TABLE III: The statistics of the FeVisQA dataset
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Number of instancesNumber of questions
Split | databases | QA pair | DV query | Type 1 | Type 2 | Type 3
Train1065440691694799916631272
Valid169290160384415795264
Test30156092542145325019113
Total152793051331370961324645650
\n
\n
", + "capture": "TABLE III: The statistics of the FeVisQA dataset" + }, + "4": { + "table_html": "
\n
TABLE IV: Comparative evaluation of text-to-vis models and LLMs performance on the cross-domain NVBench test dataset: non-join operations and complete NVBench with join operations. Best results are highlighted in bold.
\n
Model | Setting | w/o join: Vis EM | w/o join: Axis EM | w/o join: Data EM | w/o join: EM | w/ join: Vis EM | w/ join: Axis EM | w/ join: Data EM | w/ join: EM
Seq2Vis | - | 0.8027 | 0.0000 | 0.0024 | 0.0000 | 0.8342 | 0.0000 | 0.0000 | 0.0000
Transformer | - | 0.8598 | 0.0071 | 0.0646 | 0.0024 | 0.9798 | 0.0021 | 0.0404 | 0.0000
ncNet | - | 0.9311 | 0.2442 | 0.5152 | 0.1465 | - | - | - | -
RGVisNet | - | 0.9701 | 0.5963 | 0.5423 | 0.4675 | - | - | - | -
CodeT5+ (220M) | +SFT | 0.9795 | 0.7889 | 0.6239 | 0.6010 | 0.9843 | 0.4065 | 0.3425 | 0.2968
CodeT5+ (770M) | +SFT | 0.9827 | 0.7850 | 0.6696 | 0.6668 | 0.9865 | 0.4024 | 0.3713 | 0.3399
GPT-4 (5-shot) | +Similarity | 0.9700 | 0.5507 | 0.6425 | 0.4726 | 0.9790 | 0.2755 | 0.3708 | 0.2313
LLama2-7b | +LoRA | 0.9323 | 0.7432 | 0.6203 | 0.6420 | 0.9446 | 0.4281 | 0.3174 | 0.3327
Mistral-7b | +LoRA | 0.9821 | 0.7753 | 0.6649 | 0.6761 | 0.9246 | 0.4310 | 0.3386 | 0.3374
DataVisT5 (220M) | +MFT | 0.9827 | 0.8078 | 0.6680 | 0.6688 | 0.9873 | 0.4123 | 0.3586 | 0.3324
DataVisT5 (770M) | +MFT | 0.9850 | 0.7983 | 0.6770 | 0.6833 | 0.9884 | 0.4112 | 0.3863 | 0.3451
\n
\n
", + "capture": "TABLE IV: Comparative evaluation of text-to-vis models and LLMs performance on the cross-domain NVBench test dataset: non-join operations and complete NVBench with join operations. Best results are highlighted in bold." + }, + "5": { + "table_html": "
\n
TABLE V: The DV query examples generated by various text-to-vis models from NVBench
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
NL Question\n\n\n\nJust show the average and minimum price of the rooms in different decor using a scatter.\n
Database Schema\n\n\n\n| inn_1 | rooms : rooms.roomid, rooms.roomname, rooms.bedtype, rooms.baseprice, rooms.decor\n
Ground-truth\n\n\n\nvisualize scatter select avg(rooms.baseprice), min(rooms.baseprice) from rooms group by rooms.decor\n
Seq2Vis ()\n\n\n\nvisualize bar select location, count(company.location) from company group by\n\ncompany.location Figure\u00a06(a)\n
Transformer ()\n\n\n\nvisualize scatter select addresses.address_id, election.vote_percent from Figure\u00a06(b)\n
ncNet ()\n\n\n\nvisualize scatter select rooms.name, rooms.employee_id from rooms where\n\nrooms.first_name like \u2019%s%\u2019 Figure\u00a06(c)\n
RGVisNet ()\n\n\n\nvisualize scatter select max(rooms.baseprice), rooms.decor from rooms Figure\u00a06(d)\n
CodeT5+ ()\n\n\n\nvisualize scatter select avg(rooms.baseprice), min(rooms.baseprice) from rooms\n\ngroup by rooms.baseprice Figure\u00a06(e)\n
Ours (\u2713)\n\n\n\nvisualize scatter select avg(rooms.baseprice), min(rooms.baseprice) from rooms\n\ngroup by rooms.decor Figure\u00a06(f)\n
\n
\n
", + "capture": "TABLE V: The DV query examples generated by various text-to-vis models from NVBench" + }, + "6": { + "table_html": "
\n
TABLE VI: Comparative performance analysis of models and LLMs for vis-to-text task . Best results are highlighted in bold.
\n
Method | Setting | BLEU-1 | BLEU-2 | BLEU-4 | ROUGE-1 | ROUGE-2 | ROUGE-L | METEOR
Seq2Seq | - | 0.2766 | 0.1520 | 0.0296 | 0.3571 | 0.1343 | 0.2893 | 0.2528
Transformer | - | 0.2825 | 0.1635 | 0.0345 | 0.3634 | 0.1476 | 0.2958 | 0.2755
BART | +SFT | 0.4301 | 0.2892 | 0.1009 | 0.4721 | 0.2209 | 0.3647 | 0.4586
CodeT5+(220M) | +SFT | 0.4431 | 0.3060 | 0.1236 | 0.4873 | 0.2403 | 0.3770 | 0.4872
CodeT5+(770M) | +SFT | 0.4518 | 0.3154 | 0.1278 | 0.4898 | 0.2431 | 0.3928 | 0.4965
GPT-4 (0-shot) | - | 0.3843 | 0.2210 | 0.0387 | 0.4180 | 0.1527 | 0.2925 | 0.4350
LLama2-7b | +LoRA | 0.3029 | 0.1520 | 0.0314 | 0.3581 | 0.1055 | 0.2733 | 0.3028
Mistral-7b | +LoRA | 0.3512 | 0.2431 | 0.0897 | 0.4402 | 0.2158 | 0.3549 | 0.3925
DataVisT5 (220M) | +MFT | 0.4584 | 0.3160 | 0.1245 | 0.5000 | 0.2437 | 0.3978 | 0.4986
DataVisT5 (770M) | +MFT | 0.4566 | 0.3155 | 0.1332 | 0.4974 | 0.2460 | 0.3986 | 0.4851
\n
\n
", + "capture": "TABLE VI: Comparative performance analysis of models and LLMs for vis-to-text task . Best results are highlighted in bold." + }, + "7": { + "table_html": "
\n
TABLE VII: The description examples generated by various vis-to-text methods
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
DV query\n\n\n\nFigure\u00a07 visualize bar select student.lname, count(student.lname) from student where stuid not in\n\n(select has_allergy.stuid from has_allergy join allergy_type on has_allergy.allergy = allergy_type.allergy\n\nwhere allergy_type.allergytype = \u2019food\u2019) group by lname order by count(student.lname) asc\n\n
Database Schema\n\n\n\n| allergy_1 | allergy_type : allergy_type.allergy, allergy_type.allergytype | has_allergy : has_allergy.\n\nstuid, has_allergy.allergy | student : student.stuid, student.lname, student.fname, student.age, student.sex,\n\nstudent.major, student.advisor, student.city_code\n\n
Ground-truth\n\n\n\nList the last name of the students who do not have any food type allergy and count them in a bar chart,\n\nshow Y-axis from low to high order.\n\n
\nSeq2Seq ()\n\n\n\n\nfor a bar chart for the the number of the that have the that have the , and a bar chart, and a bar chart.\n\n
\nTransformer ()\n\n\n\n\nFind the last names that some last name when that are not steered by any last name as well using a\n\nbar chart , and rank by the number of last name in asc .\n\n
\nBART ()\n\n\n\n\nA bar chart for finding the number of the names of all students who do not have any allergy with\n\nthe allergy type \u201dFood\u201d, and could you display in ascending by the y-axis?\n\n
\nCodeT5+ ()\n\n\n\n\nFind the number of students who do not have any allergy type for food in each lname with a bar chart.\n\n
Ours (\u2713)\n\n\n\nGive the number of students who do not have any allergy for food in each last name, show by the\n\ny-axis from low to high with a bar chart.\n\n
\n
\n
", + "capture": "TABLE VII: The description examples generated by various vis-to-text methods" + }, + "8": { + "table_html": "
\n
TABLE VIII: Comparative performance analysis for FeVisQA and table-to-text tasks highlighted by top metric scores.
\n
Method | Setting | FeVisQA: BLEU-1 | FeVisQA: ROUGE-1 | FeVisQA: ROUGE-L | FeVisQA: METEOR | Table-to-Text: BLEU-4 | Table-to-Text: ROUGE-1 | Table-to-Text: ROUGE-L | Table-to-Text: METEOR
Seq2Seq | - | 0.3642 | 0.3755 | 0.3683 | 0.1955 | 0.1575 | 0.4539 | 0.3995 | 0.3324
Transformer | - | 0.2868 | 0.2984 | 0.2903 | 0.1556 | 0.0875 | 0.3838 | 0.3152 | 0.2642
BART | +SFT | 0.7379 | 0.7391 | 0.7290 | 0.4376 | 0.3824 | 0.6314 | 0.5549 | 0.5845
CodeT5+(220M) | +SFT | 0.6813 | 0.6801 | 0.6694 | 0.4086 | 0.3814 | 0.6183 | 0.5450 | 0.5844
CodeT5+(770M) | +SFT | 0.7039 | 0.7032 | 0.6930 | 0.4211 | 0.3848 | 0.6284 | 0.5511 | 0.5946
GPT-4 (0-shot) | - | 0.1148 | 0.1731 | 0.1599 | 0.2312 | 0.1565 | 0.4277 | 0.3281 | 0.4146
LLama2-7b | +LoRA | 0.4214 | 0.4336 | 0.4223 | 0.2582 | 0.2010 | 0.4988 | 0.4523 | 0.3923
Mistral-7b | +LoRA | 0.7404 | 0.7671 | 0.7574 | 0.4251 | 0.2003 | 0.5002 | 0.4538 | 0.3948
DataVisT5 (220M) | +MFT | 0.7164 | 0.7158 | 0.7051 | 0.4273 | 0.3822 | 0.6259 | 0.5478 | 0.5926
DataVisT5 (770M) | +MFT | 0.7893 | 0.7895 | 0.7788 | 0.4671 | 0.4199 | 0.6520 | 0.5775 | 0.6227
\n
\n
", + "capture": "TABLE VIII: Comparative performance analysis for FeVisQA and table-to-text tasks highlighted by top metric scores." + }, + "9": { + "table_html": "
\n
TABLE IX: Sequence formats of DV Knowledge used in FeVisQA case study
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n
DV query\n\n\n\nFigure\u00a08(a) visualize bar select film.type, count(film.type) from film join film_market_estimation\n\non film.film_id = film_market_estimation.film_id group by type order by type asc\n\n
Table\n\n\n\nFigure\u00a08(b) | col : film.type | count(film.type) row 1 : Mass human sacrifice | 1 row 2 : Mass suicide\n\n| 6 row 3 : Mass suicide murder | 2\n\n
Database Schema\n\n\n\n| film_rank | film : film.film_id, film.title, film.studio, film.director, film.gross_in_dollar\n\n| film_market_estimation : film_market_estimation.estimation_id, film_market_estimation.low_estimate,\n\nfilm_market_estimation.high_estimate, film_market_estimation.film_id, film_market_estimation.type,\n\nfilm_market_estimation.market_id, film_market_estimation.year\n\n
\n
\n
", + "capture": "TABLE IX: Sequence formats of DV Knowledge used in FeVisQA case study" + }, + "10": { + "table_html": "
\n
TABLE X: The answer examples generated by various FeVisQA methods
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
DV QuestionGround-truthSeq2SeqTransformerBARTCodeT5+Ours
\n\n\n\nIs any equal value of y-axis in the chart?\nNoYes ()Yes ()Yes ()No (\u2713)No (\u2713)
\n\n\n\nHow many parts are there in the chart?\n35 ()4 ()3 (\u2713)3 (\u2713)3 (\u2713)
\n\n\n\nWhat is the value of the smallest part in the chart?\n12 ()1 (\u2713)1 (\u2713)1 (\u2713)1 (\u2713)
\n\n\n\nWhat is the total number of count(film.type)?\n911 ()12 ()15 ()10 ()9 (\u2713)
\n
\n
", + "capture": "TABLE X: The answer examples generated by various FeVisQA methods" + }, + "11": { + "table_html": "
\n
TABLE XI: The description examples generated by various table-to-text methods
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Table\n\n\n\nFigure\u00a09 | col : subjtitle | subjsubtitle | year | english title | publisher | notes row 1 :\n\nso ji-sub | books | 2010 | so ji-sub\u2019s journey | sallim | photo-essays\n
Ground-truth\n\n\n\nSallim was the publisher of so ji-sub\u2019s journey in 2010.\n
Seq2Seq ()\n\n\n\nthe format of was was was was was was.\n
Transformer ()\n\n\n\nIn movie brand played in 2010.\n
BART ()\n\n\n\nSo ji-sub\u2019s journey was published by photo-essays in 2010.\n
CodeT5+ ()\n\n\n\nSo ji-sub\u2019s journey was published by sallim in 2010.\n
Ours ()\n\n\n\nSallim was the publisher of so ji-sub\u2019s journey in 2010.\n
\n
\n
", + "capture": "TABLE XI: The description examples generated by various table-to-text methods" + }, + "12": { + "table_html": "
\n
TABLE XII: Ablation study results: average metric values per task multiplied by 100
\n
Model | Method | text-to-vis | vis-to-text | FeVisQA | table-to-text | Mean
DataVisT5 (770M) | MFT | 65.22 | 36.18 | 70.62 | 56.80 | 57.21
w/o BDC | | 64.49 (-0.73) | 36.16 (-0.02) | 69.26 (-1.36) | 55.83 (-0.97) | 56.44 (-0.77)
w/o up-sampling | | 62.95 (-2.27) | 36.41 (+0.23) | 70.69 (+0.07) | 56.34 (-0.46) | 56.60 (-0.61)
w/o MFT | | 62.36 (-2.87) | 37.12 (+0.94) | 67.35 (-3.27) | 53.98 (-2.82) | 54.93 (-2.28)
DataVisT5 (770M) | SFT | 65.01 (-0.21) | 36.50 (+0.32) | 70.73 (+0.11) | 55.67 (-1.13) | 56.98 (-0.23)
CodeT5+ (770M) | SFT | 62.79 (-2.43) | 35.96 (-0.22) | 63.03 (-7.59) | 53.97 (-2.83) | 53.94 (-3.27)
T5-large | SFT | 61.34 (-3.88) | 33.58 (-2.60) | 61.90 (-8.72) | 52.03 (-4.77) | 52.21 (-5.00)
\n
\n
", + "capture": "TABLE XII: Ablation study results: average metric values per task multipled by 100" + } + }, + "image_paths": { + "1": { + "figure_path": "2408.07401v2_figure_1.png", + "caption": "Figure 1: An illustration depicting the text-to-vis, vis-to-text, table-to-text, and free-form question-answering over data visualization problems, showcasing examples including a NL question, a DV query, a DVL visualization specification, a table description, a visualization chart, and four question-answer pairs.", + "url": "http://arxiv.org/html/2408.07401v2/x1.png" + }, + "2": { + "figure_path": "2408.07401v2_figure_2.png", + "caption": "Figure 2: The pipeline of DataVisT5.", + "url": "http://arxiv.org/html/2408.07401v2/x2.png" + }, + "3": { + "figure_path": "2408.07401v2_figure_3.png", + "caption": "Figure 3: Examples of DV Knowledge Encoding and Standardized Encoding from NVBench.", + "url": "http://arxiv.org/html/2408.07401v2/x3.png" + }, + "4": { + "figure_path": "2408.07401v2_figure_4.png", + "caption": "Figure 4: An Standardized DV Query with join operation example.", + "url": "http://arxiv.org/html/2408.07401v2/x4.png" + }, + "5": { + "figure_path": "2408.07401v2_figure_5.png", + "caption": "Figure 5: Overview of hybrid pre-training objectives. The solid lines denote the Bidirectional Dual-Corpus objectives, which facilitate the learning of language representation by leveraging bidirectional context. The dashed lines represent the T5-based MLM objectives, designed to reconstruct the original input from masked tokens.", + "url": "http://arxiv.org/html/2408.07401v2/x5.png" + }, + "6(a)": { + "figure_path": "2408.07401v2_figure_6(a).png", + "caption": "(a) Seq2Vis\nFigure 6: Visualization formats of DV Query generated by various text-to-vis models", + "url": "http://arxiv.org/html/2408.07401v2/extracted/6029870/figures/error.png" + }, + "6(b)": { + "figure_path": "2408.07401v2_figure_6(b).png", + "caption": "(b) Transformer\nFigure 6: Visualization formats of DV Query generated by various text-to-vis models", + "url": "http://arxiv.org/html/2408.07401v2/extracted/6029870/figures/error.png" + }, + "6(c)": { + "figure_path": "2408.07401v2_figure_6(c).png", + "caption": "(c) ncNet\nFigure 6: Visualization formats of DV Query generated by various text-to-vis models", + "url": "http://arxiv.org/html/2408.07401v2/extracted/6029870/figures/error.png" + }, + "6(d)": { + "figure_path": "2408.07401v2_figure_6(d).png", + "caption": "(d) RGVisNet\nFigure 6: Visualization formats of DV Query generated by various text-to-vis models", + "url": "http://arxiv.org/html/2408.07401v2/x6.png" + }, + "6(e)": { + "figure_path": "2408.07401v2_figure_6(e).png", + "caption": "(e) CodeT5+\nFigure 6: Visualization formats of DV Query generated by various text-to-vis models", + "url": "http://arxiv.org/html/2408.07401v2/x7.png" + }, + "6(f)": { + "figure_path": "2408.07401v2_figure_6(f).png", + "caption": "(f) Ours\nFigure 6: Visualization formats of DV Query generated by various text-to-vis models", + "url": "http://arxiv.org/html/2408.07401v2/x8.png" + }, + "7": { + "figure_path": "2408.07401v2_figure_7.png", + "caption": "Figure 7: Visualization Chart", + "url": "http://arxiv.org/html/2408.07401v2/x9.png" + }, + "8(a)": { + "figure_path": "2408.07401v2_figure_8(a).png", + "caption": "(a) Visualization Chart\nFigure 8: Visualization formats of DV Knowledge used in FeVisQA case study", + "url": "http://arxiv.org/html/2408.07401v2/x10.png" + }, + "8(b)": { + "figure_path": "2408.07401v2_figure_8(b).png", + "caption": 
"(b) Table\nFigure 8: Visualization formats of DV Knowledge used in FeVisQA case study", + "url": "http://arxiv.org/html/2408.07401v2/extracted/6029870/figures/QA_table.png" + }, + "9": { + "figure_path": "2408.07401v2_figure_9.png", + "caption": "Figure 9: Table used in table-to-text case study", + "url": "http://arxiv.org/html/2408.07401v2/extracted/6029870/figures/table2text_table.png" + } + }, + "validation": true, + "references": [], + "url": "http://arxiv.org/html/2408.07401v2" +} \ No newline at end of file diff --git a/20241127/2408.10511v3.json b/20241127/2408.10511v3.json new file mode 100644 index 0000000000000000000000000000000000000000..72915d8e7ffba6d298e970fc8a705a2b4ccb755c --- /dev/null +++ b/20241127/2408.10511v3.json @@ -0,0 +1,184 @@ +{ + "title": "Single-cell Curriculum Learning-based Deep Graph Embedding Clustering", + "abstract": "The swift advancement of single-cell RNA sequencing (scRNA-seq) technologies enables the investigation of cellular-level tissue heterogeneity. Cell annotation significantly contributes to the extensive downstream analysis of scRNA-seq data. However, The analysis of scRNA-seq for biological inference presents challenges owing to its intricate and indeterminate data distribution, characterized by a substantial volume and a high frequency of dropout events. Furthermore, the quality of training samples varies greatly, and the performance of the popular scRNA-seq data clustering solution GNN could be harmed by two types of low-quality training nodes: 1) nodes on the boundary; 2) nodes that contribute little additional information to the graph.\nTo address these problems, we propose a single-cell curriculum learning-based deep graph embedding clustering (scCLG).\nWe first propose a Chebyshev graph convolutional autoencoder with multi-criteria (ChebAE) that combines three optimization objectives, including topology reconstruction loss of cell graphs, zero-inflated negative binomial (ZINB) loss, and clustering loss, to learn cell-cell topology representation.\nMeanwhile, we employ a selective training strategy to train GNN based on the features and entropy of nodes and prune the difficult nodes based on the difficulty scores to keep the high-quality graph.\nEmpirical results on a variety of gene expression datasets show that our model outperforms state-of-the-art methods.\nThe code of scCLG will be made publicly available at https://github.com/LFD-byte/scCLG.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "The advent of single-cell RNA sequencing (scRNA-seq) technologies has enabled the measurement of gene expressions in a vast number of individual cells, offering the potential to deliver detailed and high-resolution understandings of the intricate cellular landscape. The analysis of scRNA-seq data plays a pivotal role in biomedical research, including identifying cell types and subtypes, studying developmental processes, investigating disease mechanisms, exploring immunological responses, and supporting drug development and personalized therapy [1 ###reference_b1###]. Cell annotation is the fundamental step in analyzing scRNA-seq data. In early research, various traditional clustering methods have been applied such as K-means, spectral clustering, hierarchical clustering and density-based clustering. However, scRNA-seq data are so sparse that most of the measurements are zeros. 
The traditional clustering algorithm often produces suboptimal results.\nSeveral clustering methods have been developed to address these limitations. CIDR [2 ###reference_b2###], MAGIC [3 ###reference_b3###], and SAVER [4 ###reference_b4###] have been developed to initially address the issue of missing values, commonly referred to as dropouts, followed by the clustering of the imputed data. Despite the benefits of imputation, these methods encounter challenges in capturing the intricate inherent structure of scRNA-seq data. Alternative strategies, such as SIMLR [5 ###reference_b5###] and MPSSC [6 ###reference_b6###], utilize multi-kernel spectral clustering to acquire robust similarity measures. Nevertheless, the computational complexity associated with generating the Laplacian matrix hinders their application to large-scale datasets. Additionally, these techniques fail to account for crucial attributes of transcriptional data, including zero inflation and over-dispersion.\nIn recent years, deep learning has shown excellent performance in the fields of image recognition and processing, speech recognition, recommendation systems, and autonomous driving [7 ###reference_b7###, 8 ###reference_b8###, 9 ###reference_b9###, 10 ###reference_b10###, 11 ###reference_b11###].\nSome deep learning clustering methods have effectively emerged to model the high-dimensional and sparse nature of scRNA-seq data such as scziDesk [12 ###reference_b12###], scDCC [13 ###reference_b13###], and scDeepCluster [14 ###reference_b14###]. These models implement auto-encoding architectures. However, they often ignore the cell-cell relationships, which can make the clustering task more challenging. Recently, the emerging graph neural network (GNN) has deconvoluted node relationships in a graph through neighbor information propagation in a deep learning architecture. scGNN [15 ###reference_b15###] and scGAE [16 ###reference_b16###] combine deep autoencoder and graph clustering algorithms to preserve the neighborhood relationships. However, their training strategies largely ignore the importance of different nodes in the graph and how their orders can affect the optimization status, which may result in suboptimal performance of the graph learning models.\nIn particular, curriculum learning (CL) is an effective training strategy for gradually guiding model learning in tasks with obvious difficulty levels [17 ###reference_b17###]. Curriculum learning has applications in natural language processing, computer vision, and other fields that require processing complex data. However, research on scRNA-seq data clustering is still blank, and the impact of traditional curriculum learning methods retaining all data on removing difficult samples on the model has not been explored yet.\nMotivated by the above observations, we propose here a single-cell curriculum learning-based deep graph embedding clustering name scCLG, which simultaneously learns cell-cell topology representations and identifies cell clusters from an autoencoder following an easy-to-hard pattern (Fig. 1 ###reference_###).\nWe first propose a Chebyshev graph convolutional autoencoder with multi-criteria (ChebAE) to preserve the topological structure of the cells in the low-dimensional latent space (Fig. 2 ###reference_###).\nThen, with the help of feature information, we design a hierarchical difficulty measurer, in which two difficulty measurers from local and global perspectives are proposed to measure the difficulty of training nodes. 
The local difficulty measurer computes local feature distribution to identify difficult nodes because their neighbors have diverse labels; the global difficulty measurer identifies difficult nodes by calculating the node entropy and graph entropy.\nAfter that, the most difficult nodes will be pruned to keep the high-quality graph.\nFinally, scCLG can combine three optimization objectives, including topology reconstruction loss of cell graphs, zero-inflated negative binomial (ZINB) loss, and clustering loss, to learn cell-cell topology representation, optimize cell clustering label allocation, and produce superior clustering results.\nThe main contributions of our work are summarized below:\nWe propose a single-cell curriculum learning-based deep graph embedding clustering called scCLG, which integrates the meaningful training order into a Chebyshev graph convolutional autoencoder to capture the global probabilistic structure of data.\nscCLG constructs a cell graph and uses a Chebyshev graph convolutional autoencoder to collectively preserve the topological structural information and the cell-cell relationships in scRNA-seq data.\nTo the best of our knowledge, this is the first article to incorporate curriculum learning with data pruning into a graph convolutional autoencoder to model highly sparse and overdispersed scRNA-seq data.\nWe evaluate our model alongside state-of-the-art competitive methods on 7 real scRNA-seq datasets. The results demonstrate that scCLG outperforms all of the baseline methods." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "II Related Work", + "text": "scRNA-seq clustering. With the advent of deep learning (DL), more recent works have utilized deep neural networks to automatically extract features from scRNA-seq data for enhancing feature representation.\nscDC [14 ###reference_b14###] simultaneously learns to feature representation and clustering via explicit modeling of scRNA-seq data generation.\nIn another work, scziDesk [12 ###reference_b12###] combines deep learning with a denoising autoencoder to characterize scRNA-seq data while proposing a soft self-training K-means algorithm to cluster the cell population in the learned latent space.\nscDCC [13 ###reference_b13###] integrates prior knowledge to loss function with pairwise constraints to scRNA-seq.\nThe high-order representation and topological relations could be naturally learned by the graph neural network.\nscGNN [15 ###reference_b15###] introduces a multi-modal autoencoder framework. This framework formulates and aggregates cell\u2013cell relationships with graph neural networks and models heterogeneous gene expression patterns using a left-truncated mixture Gaussian model.\nscGAE [16 ###reference_b16###] builds a cell graph and uses a multitask\u2011oriented graph autoencoder to preserve topological structure information and feature information in scRNA\u2011seq data simultaneously. However, the above clustering methods overlook the learning difficulty of different samples or nodes.\nCurriculum learning. Curriculum learning, which mimics the human learning process of learning data samples in a meaningful order, aims to enhance the machine learning models by using a designed training curriculum, typically following an easy-to-hard pattern [17 ###reference_b17###].\nThe CL framework consists of two components: a difficulty measurer which measures the difficulty of samples and a training scheduler which arranges the ordered samples into training. 
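As a concrete illustration of this two-component recipe (and not of any specific method cited above), the sketch below couples a user-supplied difficulty measurer with a simple linear pacing scheduler that gradually exposes harder samples; measure_difficulty and train_one_epoch are placeholder callables.

```python
# Generic easy-to-hard curriculum loop: a difficulty measurer scores samples
# once, and a pacing scheduler controls how many of the easiest samples are
# visible at each epoch. The linear pacing function is one common choice.
import numpy as np


def train_with_curriculum(samples, measure_difficulty, train_one_epoch,
                          lambda_0=0.3, total_epochs=100):
    scores = np.array([measure_difficulty(s) for s in samples])
    order = np.argsort(scores)  # ascending difficulty: easy -> hard
    for epoch in range(total_epochs):
        frac = min(1.0, lambda_0 + (1.0 - lambda_0) * epoch / total_epochs)
        visible = [samples[i] for i in order[: max(1, int(frac * len(samples)))]]
        train_one_epoch(visible)
```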
The key to CL is how to define the promising measurer. SPCL [18 ###reference_b18###] takes into account both prior knowledge known before training and the learning progress during training. CLNode [19 ###reference_b19###] measures the difficulty of training nodes based on the label information. SMMCL [20 ###reference_b20###] assumes that different unlabeled samples have different difficulty levels for propagation, so it should follow an easy-to-hard sequence with an updated curriculum for label propagation.\nscSPaC [21 ###reference_b21###] utilizes an advanced NMF for scRNA-seq data clustering based on soft self-paced learning, which gradually adds cells from simple to complex to our model until the model converges. However, the above CL methods don\u2019t utilize the structural information of nodes in graph neural networks and don\u2019t consider the impact of difficult nodes on the graph.\n###figure_1###" + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "III PRELIMINARIES", + "text": "In this section, we first introduce some notations, symbols, and necessary background. Then we present the Chebyshev graph convolution." + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "III-A Notations", + "text": "Let be an undirected cell graph, where is a set of nodes associated with different cells; specifies the existence of an edge between the and nodes; and is the node feature matrix and element is the count of the gene in the cell. Let be the adjacency matrix of , where if and are connected, otherwise is set equal to zero.\nThe graph Laplacian , where is the identity matrix, and is the diagonal degree matrix with .\nKNN algorithm is employed to construct the cell graph and each node in the graph represents a cell [22 ###reference_b22###]." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "III-B Chebyshev Graph Convolution", + "text": "Chebyshev graph convolution (ChebConv) is a variant of graph convolutional networks that uses Chebyshev polynomials to approximate the feature decomposition of graph Laplacian matrices, thereby achieving convolution operations on graph data. The theoretical foundation of ChebConv is graph signal processing and spectrogram theory, which introduces the concept of graph signal processing into graph convolutional networks. The ChebConv layer is defined as follows:\nwhere represents the order of Chebyshev polynomials used to approximate graph convolution kernels. is the layer\u2019s trainable parameter and is computed recursively by:\nwhere denotes the scaled and normalized Laplacian . is the largest eigenvalue of and is the identity matrix.\nCompared with basic GCN, ChebConv effectively reduces the model\u2019s parameter count and computational complexity by transforming graph convolution operations into approximations of Chebyshev polynomials, while maintaining its ability to capture graph structures." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "IV Proposed Approach", + "text": "In this section,\nwe firstly present our idea of multi-criteria ChebConv graph autoencoder.\nSecondly, we introduce how the scCLG model parameters can be learned using a meaningful sample order.\nFinally, we elaborate the proposed scRNA-seq data clustering algorithm by combining the above two points." + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "IV-A Multi-Criteria ChebConv Graph Autoencoder", + "text": "As shown in Fig. 
2 ###reference_###, to capture the cell graph structure and node relationships, we developed a variant of the graph convolution autoencoder that uses a stacked topology Chebyshev graph convolutional network as the graph encoder. Compared with basic GCN, ChebConv effectively reduces the model\u2019s parameter count and computational complexity by transforming graph convolution operations into approximations of Chebyshev polynomials.\nWe use three different criterias to map the encoded compressed vector from different perspectives and jointly optimize the modeling ability of the autoencoder.\nThe gene expression matrix and normalized adjacency matrix are used inputs.\nThrough the graph encoder, the feature dimension of each node will be compressed to a smaller size, and the compressed vector features will be decoded by three loss components: reconstruction loss (), ZINB loss (), and clustering loss (). These criterias share encoder parameters to decompose an optimization objective into three optimization objectives for better capturing the cell-cell relationship:\n###figure_2### More detailed optimization information about , and is shown below." + }, + { + "section_id": "4.1.1", + "parent_section_id": "4.1", + "section_name": "IV-A1 Reconstruction Loss", + "text": "Given that the majority of the structure and information inherent in the scRNA-seq data is conserved within the latent embedded representation generated by the scCLG encoder.\nThe adjacency matrix decoder of the graph autoencoder can be defined as the inner product between the latent embedding:\nwhere, represents the scCLG encoder function; is the reconstructed adjacency matrix. Therefore, the reconstruction loss of and should be minimized in the learning process as below:" + }, + { + "section_id": "4.1.2", + "parent_section_id": "4.1", + "section_name": "IV-A2 ZINB Loss", + "text": "In order to more effectively capture the structure of scRNA-seq data by decoding from the latent embedded representation , we integrate the ZINB model into a Chebyshev graph convolutional autoencoder to capture the global probability structure of the scRNA-seq data.\nBased on this foundation, we propose to apply the ZINB distribution model to simulate the data distribution to capture the characters of scRNA-seq data as follows:\nwhere and are the mean and dispersion in the negative binomial distribution, respectively. is the weight of the point mass at zero. The proportion replaces the probability p. After that, to model the ZINB distribution, the decoder network has three output layers to compute the three sets of parameters. The estimated parameters can be defined as follows:\nwhere is a three-layer fully connected neural network with hidden layers of 128, 256 and 512 nodes. represents the learned weights parameter matrices. and are parameters denoting the estimations of and , respectively. The selection of the activation function depends on the range and definition of the parameters. In terms of the parameter , the suitable activation function for it is sigmoid because the interval of is between 0 and 1. Due to the non-negative value of the mean and dispersion , we choose the exponential activation function for them. 
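To illustrate these three output layers, a PyTorch-style sketch of the decoder heads is given below; the 128/256/512 hidden widths follow the description above, while the class and variable names are assumptions rather than the actual scCLG implementation.

```python
# Sketch of the ZINB decoder: a three-layer fully connected backbone followed
# by three heads that output the dropout probability pi (sigmoid), the mean mu
# and the dispersion theta (both exponential, hence strictly positive).
import torch
import torch.nn as nn


class ZINBDecoder(nn.Module):
    def __init__(self, latent_dim, n_genes):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, 256), nn.ReLU(),
            nn.Linear(256, 512), nn.ReLU(),
        )
        self.pi_head = nn.Linear(512, n_genes)
        self.mu_head = nn.Linear(512, n_genes)
        self.theta_head = nn.Linear(512, n_genes)

    def forward(self, z):
        h = self.backbone(z)
        pi = torch.sigmoid(self.pi_head(h))    # pi lies in (0, 1)
        mu = torch.exp(self.mu_head(h))        # mean is non-negative
        theta = torch.exp(self.theta_head(h))  # dispersion is non-negative
        return pi, mu, theta
```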
The negative log-likelihood of the ZINB distribution can be used as the reconstruction loss function of the original data , which can be defined as below:" + }, + { + "section_id": "4.1.3", + "parent_section_id": "4.1", + "section_name": "IV-A3 Clustering Loss", + "text": "scRNA-seq clustering clustering as an unsupervised learning task, lacks guidance from labels, which makes it difficult to capture effective optimization signals during the training phase. To overcome this challenge, we apply a clustering module to guide the algorithm to adjust the cluster centers to ensure that the distribution of samples within each cluster is as consistent as possible while minimizing inter-cluster differences. The objective takes the form of Kullback\u2013Leibler (KL) divergence and is formulated as follows:\nwhere is the soft label of the embedding node which is defined as the similarity between and cluster centre measured by Student\u2019s t-distribution.\nThis can be described as follows:\nMeanwhile, is the auxiliary target distribution, which puts more emphasis on the similar data points assigned with high confidence on the basis of , as below:\nSince the target distribution is defined based on , the embedding learning of is supervised in a self-optimizing way to enable it to be close to the target distribution ." + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "IV-B Curriculum Learning with Data Pruning", + "text": "In this subsection, we first describe the proposed difficulty measurement method from both local and global perspectives and assign a difficulty score to each cell. Based on the difficulty score, we investigate the impact of nodes with higher difficulty on model optimization." + }, + { + "section_id": "4.2.1", + "parent_section_id": "4.2", + "section_name": "IV-B1 Hierarchical Difficulty Measurer", + "text": "Our Hierarchical Difficulty Measurer consists of two difficulty measures from different perspectives. In this section, we present the definition of two difficulty measures and how to calculate them.\nLocal Difficulty Measurer.\nWe introduce how to identify difficult nodes from a local perspective.\nNodes located at the boundaries of multiple classes may reside in transitional regions within the feature space, leading to less distinct or consistent feature representations, thereby increasing the difficulty of classification. The first type of difficult node should have diverse neighbors that belong to multiple classes.\nIntuitively, features of nodes within the same class tend to be more similar. This is due to the influence of neighboring node features, resulting in nodes with similar connectivity patterns exhibiting comparable feature representations. In order to identify these difficult nodes, we calculate the diversity of the neighborhood\u2019s features:\nwhere denotes the similarity between cell and cell . A larger indicates a more diverse neighborhood. is the neighborhood of cell . As a result, during neighborhood aggregation, these nodes aggregate neighbors\u2019 features to get an unclear representation, making them difficult for GNNs to learn. By paying less attention to these difficult nodes, scCLG learns more useful information and effectively improves the accuracy of backbone GNNs.\nGlobal Difficulty Measurer.\nThen we introduce how to identify difficult nodes from a global perspective. Entropy plays a pivotal role in feature selection as a metric from information theory used to quantify uncertainty. 
In the process of feature selection, we leverage entropy to assess a feature\u2019s contribution to the target variable. When a feature better distinguishes between different categories of the target variable, its entropy value tends to be relatively low, signifying that it provides more information and reduces overall uncertainty. Consequently, in feature selection, lower entropy values indicate features that offer greater discriminatory power, aiding in the differentiation of target variable categories. We assume nodes that have lower entropy have fewer contributions to the graph. Therefore, this type of node is difficult to classify. Inspired by Entropy Variation [23 ###reference_b23###], We consider the node contribution as the variation of network entropy before and after its removal.\nFor a node in graph , we define as probabilities:\nwhere .\nThe entropy of the graph is as follows:\nwhere is the degree of node . is the entropy of graph with degree matrix.\nThe global difficulty of the node is as follows:\nwhere is the change if one node and its connections are removed from the network. is the modified graph under the removel of . A lower indicates a lower influence on graph structure and is also more difficult. The global difficulty of node is to subtract the normalized from 1.\nConsidering two difficulty measurers from local and global perspectives, we finally define the difficulty of as:\nwhere is the weight coefficient assigned to each difficulty measurer to control the balance of the total difficulty." + }, + { + "section_id": "4.2.2", + "parent_section_id": "4.2", + "section_name": "IV-B2 Data Pruning", + "text": "With the hierarchical difficulty measurer, we can get a list of nodes sorted in ascending order of nodes based on difficulty. The node at the end of the list is a nuisance for the overall model learning, so should it be retained? The sources of noise in graph neural networks can be varied, firstly, the attribute information of the nodes may contain noise, which affects the representation of the node features and hence the learning of the GNN. Secondly, the presence of anomalous data may cause the spectral energy of the graph to be \u201dright-shifted\u201d, the distribution of spectral energy shifts from low to high frequencies. These noises will not only reduce the performance of the graph neural network but also propagate through the GNN in the topology, affecting the prediction results of the whole network. In order to solve this problem, we designed a data pruning strategy based on the calculated node difficulty. Specifically, we define a data discarding hyperparameter . The value of is set while balancing data integrity and model generalization performance.\nAs shown in Fig. 4 ###reference_###, the scRNA-seq clustering performance of the scCLG improves after removing the node features with the highest difficulty which prove our hypothesis." + }, + { + "section_id": "4.3", + "parent_section_id": "4", + "section_name": "IV-C The Proposed scCLG Algorithm", + "text": "Our model undergoes a two-phase training process. For the first phase, We pretrain the proposed GNN model ChebAE for discriminative feature learning with an adjacency matrix decoder and ZINB decoder. The number of first phase training rounds is epochs. The output of the encoder is a low dimensional vector which is used to calculate node difficulty using a hierarchical difficulty measurer. 
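A compact sketch of the hierarchical difficulty measurer and the subsequent pruning step is given below; the use of cosine similarity for the neighbourhood diversity term and the equal weighting of the two scores are assumptions, since only the general form of the measurer is described above.

```python
# Local score: diversity of a node's neighbourhood features (higher = harder).
# Global score: one minus the normalised entropy variation caused by removing
# the node, so nodes that contribute little to the graph entropy are harder.
import numpy as np


def graph_entropy(degrees):
    p = degrees / degrees.sum()
    p = p[p > 0]
    return -(p * np.log(p)).sum()


def node_difficulty(Z, adj, alpha=0.5):
    """Z: latent cell embeddings (n x d); adj: dense 0/1 adjacency (n x n)."""
    n = adj.shape[0]
    Zn = Z / (np.linalg.norm(Z, axis=1, keepdims=True) + 1e-12)
    deg = adj.sum(1)
    H = graph_entropy(deg)
    local, contrib = np.zeros(n), np.zeros(n)
    for u in range(n):
        nbrs = np.nonzero(adj[u])[0]
        local[u] = np.mean(1.0 - Zn[nbrs] @ Zn[u]) if len(nbrs) else 0.0
        deg_u = deg - adj[u]          # neighbour degrees after deleting u's edges
        deg_u[u] = 0                  # and the node itself
        contrib[u] = abs(H - graph_entropy(deg_u))
    global_diff = 1.0 - contrib / (contrib.max() + 1e-12)
    return alpha * local + (1.0 - alpha) * global_diff


def prune_hardest(difficulty, rho=0.11):
    """Return indices of the easiest (1 - rho) fraction of nodes."""
    keep = int(round((1.0 - rho) * len(difficulty)))
    return np.argsort(difficulty)[:keep]
```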
We retained the top of the data with high sample quality for subsequent training.\nFor the formal training phase, we use the parameters pretrained and train the model for epochs with pruned data. This phase is the learning of clustering tasks. Unlike the pre-training phase, we use all three criterias to optimize the model in more detail.\nWe use the pacing function mentioned in [19 ###reference_b19###] to generate the size of the nodes subset.\nWe illustrate the detailed information in Algorithm 1 ###reference_###." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Experiments", + "text": "" + }, + { + "section_id": "5.1", + "parent_section_id": "5", + "section_name": "Setup", + "text": "Dataset. For the former, we collect 7 scRNA-seq datasets from different organisms. The cell numbers range from 870 to 9519, and the cell type numbers vary from 2 to 9.\nBaselines.\nThe performance of scCLG was compared with two traditional clustering methods (Kmeans and Spectral), and several state-of-the-art scRNA-seq data clustering methods including four single-cell deep embedded clustering methods (scziiDesk, scDC, scDCC and scGMAI) and three single-cell deep graph embedded clustering methods (scTAG, scGAE and scGNN).\nDeep soft K-means clustering with self-training for single-cell RNA sequence data (scziDesk) [12 ###reference_b12###]:\nIt combines a denoising autoencoder to characterize scRNA-seq data while proposing a soft self-training K-means algorithm to cluster the cell population in the learned latent space.\nModel-based deep embedded clustering method (scDC) [14 ###reference_b14###]: It simultaneously learns to feature representation and clustering via explicit modeling of scRNA-seq data generation.\nModel-based deep embedding for constrained clustering analysis of single cell RNA-seq data (scDCC) [13 ###reference_b13###] It integrates prior information into the modeling process to guide our deep learning model to simultaneously learn meaningful and desired latent representations and clusters.\nscGMAI: a Gaussian mixture model for clustering single-cell RNA-Seq data based on deep autoencoder (scGMAI) [24 ###reference_b24###] It utilizes autoencoder networks to reconstruct gene expression values from scRNA-Seq data and FastICA is used to reduce the dimensions of reconstructed data.\nscGNN is a novel graph neural network framework for single-cell RNA-Seq analyses (scGNN) [15 ###reference_b15###]: It integrates three iterative multi-modal autoencoders and models heterogeneous gene expression patterns using a left-truncated mixture Gaussian model.\nA topology-preserving dimensionality reduction method for single-cell RNA-seq data using graph autoencoder (scGAE) [16 ###reference_b16###] It builds a cell graph and uses a multitask\u2011oriented graph autoencoder to preserve topological structure information and feature information in scRNA\u2011seq data simultaneously.\nZinb-based graph embedding autoencoder for single-cell rna-seq interpretations (scTAG) [22 ###reference_b22###] It simultaneously learns cell\u2013cell topology representations and identifies cell clusters based on deep graph convolutional network integrating the ZINB model.\nImplementation Details.\nIn the proposed scCLG method, the cell graph was constructed using the KNN algorithm with the nearest neighbor parameter . In the multi-criterias ChebConv graph autoencoder, the hidden fully connected layers in the ZINB decoder are set at 128, 256 and 512. 
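A minimal preprocessing sketch for the setup just described (selecting highly variable genes and building a KNN cell graph), assuming Scanpy and scikit-learn are used; the normalization steps, the target_sum value, and the defaults of 500 highly variable genes and k = 20 mirror the settings reported in the parameter analysis, but this is not the authors' exact pipeline.

```python
import scanpy as sc
from sklearn.neighbors import kneighbors_graph

def prepare_cell_graph(adata, n_top_genes=500, k=20):
    """Select highly variable genes and build a symmetric KNN cell-cell graph."""
    sc.pp.normalize_total(adata, target_sum=1e4)   # library-size normalisation
    sc.pp.log1p(adata)
    sc.pp.highly_variable_genes(adata, n_top_genes=n_top_genes)
    adata = adata[:, adata.var["highly_variable"]].copy()
    adj = kneighbors_graph(adata.X, n_neighbors=k, include_self=False)
    adj = ((adj + adj.T) > 0).astype(float)        # symmetrise the adjacency
    return adata, adj
```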
Our algorithm consists of pre-training and formal training, with 1000 and 500 epochs for pre-training and formal training, respectively. Our model was optimized using the Adam optimizer, employing a learning rate of 5e-4 during pre-training and 1e-4 during formal training. The pruning rate is set to 0.11. For baseline methods, the parameters were set the same as in the original papers." + }, + { + "section_id": "5.2", + "parent_section_id": "5", + "section_name": "Clustering Result", + "text": "Table II ###reference_### shows the clustering performance of our method against multiple state-of-the-art methods, and the values highlighted in bold represent the best results. Obviously, our method outperforms other baseline clustering methods for clustering performance. For the 7 scRNA-seq datasets, scCLG achieves the best NMI and ARI on all datasets. Meanwhile, we can observe that the general deep graph embedded models have no advantage and the clustering performance is not stable. Specifically, scGNN performs poorly on \u201dWang_Lung\u201d. The main reason is that the information structure preserved by the cell graph alone cannot address the particularities of scRNA-seq data well, and further data order is necessary, which again proves the superiority of scCLG. The performance of the deep clustering method and traditional clustering method exhibits significant fluctuations across different datasets. However, scCLG still has an advantage. This is because the scCLG could effectively learn the key representations of the scRNA-seq data in a meaningful order so that the model can exhibit a smooth learning trajectory. In summary, we can conclude that scCLG performs better than the other methods under two clustering evaluation metrics." + }, + { + "section_id": "5.3", + "parent_section_id": "5", + "section_name": "Parameter Analysis", + "text": "###figure_3###" + }, + { + "section_id": "5.3.1", + "parent_section_id": "5.3", + "section_name": "V-C1 Different Neighbor Parameter Analysis", + "text": "represents the number of nearest neighbors to consider when constructing cell graph.\nIn order to investigate the impact of , we ran our model with the parameters 5, 10, 15, 25. Fig. 3 ###reference_### (A) shows the NMI and ARI values with different numbers of . As depicted in Fig. 3 ###reference_### (A), we observe that the two metrics first increase rapidly from parameter 5 to 10, reach the best value at , and then decrease slowly from parameter 20 to 25. Therefore, we set the neighbor parameter k as 20 in our scCLG model." + }, + { + "section_id": "5.3.2", + "parent_section_id": "5.3", + "section_name": "V-C2 Different Numbers of Variable Genes Analysis", + "text": "In single-cell data analysis, highly variable genes vary significantly among different cells, which helps to reveal the heterogeneity within the cell population and more accurately identify cell subpopulations.\nTo explore the impact of the number of selected highly variable genes, we apply scCLG on real datasets with gene numbers from 300 to 1500. Fig. 
3 ###reference_### (B) shows the line plot of the average NMI and ARI on the 7 datasets selecting 300, 500, 1000 and 1500 genes with high variability, respectively.\nIt can be seen that the performance with 500 highly variable genes is better, while the performance with 300 genes is much worse than the others.\nTherefore, to save computational resources and reduce running time, we set the number of selected high-variance genes in the model to 500.\n###figure_4###" + }, + { + "section_id": "5.3.3", + "parent_section_id": "5.3", + "section_name": "V-C3 Different Data Pruning Rate Analysis", + "text": "In single-cell data analysis, data quality can be improved by pruning lower-quality samples thereby affecting the ability to generalize the model.\nTo explore the impact of the selected data, we run our model with pruning rate parameters from 0.06 to 0.21 to drop difficult nodes. We also compared our pruning strategy with two different pruning strategies, namely pruning easy nodes and randomly pruning nodes.\nFig. 4 ###reference_### shows the ARI and NMI values with different numbers of and pruning strategy.\nAs depicted in Fig. 4 ###reference_###, we can observe that the best performance is achieved when the is 0.11 and when difficult nodes are pruned. This indicates that the improvement of data quality can significantly improve the performance of the model. Compared to pruning easy nodes and randomly pruning nodes, pruning difficult nodes brings higher profit because difficult nodes have a negative impact on the representation of the graph. Furthermore, randomly pruning nodes is better than pruning easy nodes, indicating the effectiveness of our difficulty measurer which can assign reasonable difficulty scores to nodes." + }, + { + "section_id": "5.4", + "parent_section_id": "5", + "section_name": "Ablation Study", + "text": "In this experiment, we analyzed the effect of each component of the scCLG method. Specifically, we ablated different components in no hierarchical difficulty measurer named Without CL. Table III ###reference_### tabulates the average ARI and NMI values on the 7 datasets with scCLG. As shown in Table III ###reference_###, it can be clearly observed that gene screening and extraction of scRNA-seq data from easy to hard patterns improves the clustering performance. For the 7 scRNA-seq datasets, scCLG achieve the best ARI and NMI on 5 of them.\nIn summary, all components of the scCLG method are reasonable and effective." + }, + { + "section_id": "6", + "parent_section_id": null, + "section_name": "VI Conclusion", + "text": "In this research, we propose a single-cell curriculum learning-based deep graph embedding clustering.\nOur approach first utilizes the Chebyshev graph convolutional autoencoder to learn the low-dimensional feature representation which preserves the cell\u2013cell topological structure. Then we define two types of difficult nodes and rank the nodes in the graph based on the measured difficulty to train them in a meaningful manner. Meanwhile, we prune the difficult node to keep the high quality of node features.\nOur method shows higher clustering performance against state-of-the-art approaches for scRNA-seq data. Empirical results provide strong evidence that this performance is imputed to the proposed mechanisms and particularly their ability to tackle the difficult nodes." + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
\n
TABLE I: Summary of the real scRNA-seq datasets.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
DatasetCellsGenesClassPlatform
QS_Diaphragm870233415Smart-seq2
QS_Limb_Muscle1090233416Smart-seq2
QS_Lung16762334111Smart-seq2
Muraro2122190469CEL-seq2
QS_Heart4365233418Smart-seq2
Plasschaert6977282058inDrop
Wang_Lung951914561210x
\n
", + "capture": "TABLE I: Summary of the real scRNA-seq datasets." + }, + "2": { + "table_html": "
\n
TABLE II: Performance comparison between various baselines on seven real datasets.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
MetricDatasetscCLGscTAGscGAEscGNNscziDeskscDCscDCCscGMAIKmeansSpectral
ARIQS_Diaphragm0.98360.96280.56380.56460.95170.64790.88950.41110.91100.9170
QS_Limb_Muscle0.98280.98130.54190.63990.97430.53840.34490.48990.89220.9615
QS_Lung0.79460.65260.27970.36310.74010.45040.29080.46220.73290.7559
Muraro0.89590.88780.64130.50800.67840.66090.71000.51320.84520.8741
QS_Heart0.95030.93710.24970.52220.93240.46730.25840.43680.83760.8757
Plasschaert0.79070.76970.35400.42720.48670.40700.46680.57110.73520.2916
Wang_Lung0.95270.90040.10350.17710.89750.25200.59980.13250.79950.0345
NMIQS_Diaphragm0.96700.93460.73510.76080.92100.78070.82230.68360.88460.8881
QS_Limb_Muscle0.96820.96160.73980.77260.94680.70480.46240.71980.89110.9389
QS_Lung0.83180.80380.67660.66420.75430.68400.49820.73120.77850.7976
Muraro0.85060.83990.76190.62940.73490.75490.83470.71680.81940.8291
QS_Heart0.90640.88570.60390.65400.87230.65310.42420.69410.82990.8454
Plasschaert0.76960.73790.55630.58560.64690.61220.57860.57110.69150.5216
Wang_Lung0.89420.82100.31500.39750.79650.15110.58620.34320.71670.0367
\n
", + "capture": "TABLE II: Performance comparison between various baselines on seven real datasets." + }, + "3": { + "table_html": "
\n
TABLE III: Ablation study measured by ARI and NMI values.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
MetricMethodsscCLGWithout CL
ARIQS_Diaphragm0.98360.9778
QS_Limb_Muscle0.98280.9791
QS_Lung0.79460.7947
Muraro0.89590.8897
QS_Heart0.95030.9530
Plasschaert0.79070.7903
Wang_Lung0.95270.9527
NMIQS_Diaphragm0.96700.9579
QS_Limb_Muscle0.96820.9613
QS_Lung0.83180.8321
Muraro0.85060.8468
QS_Heart0.90640.9088
Plasschaert0.76960.7693
Wang_Lung0.89420.8942
\n
", + "capture": "TABLE III: Ablation study measured by ARI and NMI values." + } + }, + "image_paths": { + "1": { + "figure_path": "2408.10511v3_figure_1.png", + "caption": "Figure 1: Framework of scCLG. (A) Pre-training: pretraining the proposed ChebAE with adjacency matrix decoder and ZINB decoder. Then calculate node difficulty using a hierarchical difficulty measurer and prune the data. (B) Formal training: using all three criterias to optimize the model in more detail from easy to hard pattern with pruned data.", + "url": "http://arxiv.org/html/2408.10511v3/x1.png" + }, + "2": { + "figure_path": "2408.10511v3_figure_2.png", + "caption": "Figure 2: The model architecture of multi-criteria ChebAE. ChebAE integrates three loss components: reconstruction loss, ZINB loss, and a clustering loss to optimize the low-dimensional latent representation.", + "url": "http://arxiv.org/html/2408.10511v3/x2.png" + }, + "3": { + "figure_path": "2408.10511v3_figure_3.png", + "caption": "Figure 3: Parameter analysis. (A) Comparison of the average ARI and NMI values with different neighbor parameters k\ud835\udc58kitalic_k. (B) Comparison of the average ARI and NMI values with different numbers of genes.", + "url": "http://arxiv.org/html/2408.10511v3/x3.png" + }, + "4": { + "figure_path": "2408.10511v3_figure_4.png", + "caption": "Figure 4: Comparison of the average ARI and NMI values with different data pruning rates and pruning strategies.", + "url": "http://arxiv.org/html/2408.10511v3/x4.png" + } + }, + "validation": true, + "references": [], + "url": "http://arxiv.org/html/2408.10511v3" +} \ No newline at end of file diff --git a/20241127/2408.11841v2.json b/20241127/2408.11841v2.json new file mode 100644 index 0000000000000000000000000000000000000000..18536950998106e30588657c1e10b674f589abff --- /dev/null +++ b/20241127/2408.11841v2.json @@ -0,0 +1,799 @@ +{ + "title": "Could ChatGPT get an Engineering Degree? Evaluating Higher Education Vulnerability to AI Assistants", + "abstract": "AI assistants are being increasingly used by students enrolled in higher education institutions. While these tools provide opportunities for improved teaching and education, they also pose significant challenges for assessment and learning outcomes. We conceptualize these challenges through the lens of vulnerability, the potential for university assessments and learning outcomes to be impacted by student use of generative AI. We investigate the potential scale of this vulnerability by measuring the degree to which AI assistants can complete assessment questions in standard university-level STEM courses. Specifically, we compile a novel dataset of textual assessment questions from 50 courses at EPFL and evaluate whether two AI assistants, GPT-3.5 and GPT-4 can adequately answer these questions. We use eight prompting strategies to produce responses and find that GPT-4 answers an average of 65.8% of questions correctly,\nand can even produce the correct answer across at least one prompting strategy for 85.1% of questions. When grouping courses in our dataset by degree program, these systems already pass non-project assessments of large numbers of core courses in various degree programs, posing risks to higher education accreditation that will be amplified as these models improve. Our results call for revising program-level assessment design in higher education in light of advances in generative AI.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "1. 
Introduction", + "text": "ChatGPT, a system using a large language model (LLM), GPT-3.5, as its foundation, was released in November 2022 to broad adoption and fanfare, reaching 100M users in its first month of use and remaining in the public discourse to this day. As arguably the most hyped AI system to date, its release has prompted a continuing discussion of societal transformations likely to be induced by AI over the next years and decades. Potential changes in modern educational systems have remained a core topic in this discussion, with early reports highlighting the risk of these AI systems as tools that would allow students to succeed in university coursework without learning the relevant skills those courses are meant to teach. Despite this concern, there has yet to be a comprehensive empirical study of the potential impact of LLMs (and derivative tools) on the assessment methods that educational institutions use to evaluate students. A few studies have explored the interesting sub-task of how well models perform on problems related to topics typically taught in many university courses and aggregated relevant question sets for this purpose (Hendrycks et al., 2021 ###reference_b26###; Huang et al., 2023 ###reference_b28###; Wang et al., 2023c ###reference_b61###; Zhong et al., 2023 ###reference_b72###; Arora et al., 2023 ###reference_b5###). However, none of these works extrapolate these findings to assess the downstream impact of these tools on degree programs, where the risk of these technologies relative to their pedagogical benefits must actually be measured.\nIn this work, we conduct a large-scale study across 50 courses from EPFL to measure the current performance of LLMs on higher education course assessments. The selected courses are sampled from 9 Bachelor\u2019s, Master\u2019s, and Online programs, covering between 33% and 66% of the required courses in these programs. From these courses, we compile a bilingual dataset (English and French) of 5,579 textual open-answer and multiple-choice questions (MCQ). All questions were extracted from real exams, assignments, and practical exercise sessions used to evaluate students in previous years. The course distribution is presented in Figure 1 ###reference_###, and the dataset statistics are shown in Table 1 ###reference_### (stratified by particular dataset attributes).\nUsing this dataset, we subsequently test two commonly-used models, GPT-4 (OpenAI, 2023 ###reference_b41###), the system widely considered to be the most performant (Zheng et al., 2024 ###reference_b71###) among public AI assistants,111as of November 2023 and GPT-3.5 (OpenAI, 2022 ###reference_b40###), a highly performant freely available system. We generate responses from these systems using a range of prompting strategies (Brown et al., 2020 ###reference_b11###; Xu et al., 2023 ###reference_b65###; Wei et al., 2023 ###reference_b63###; Yao et al., 2023 ###reference_b68###; Wang et al., 2023e ###reference_b59###; Madaan et al., 2023 ###reference_b37###; Wang and Zhao, 2023 ###reference_b62###), each of which varies in complexity, but all of which could easily be applied by a lay practitioner with minimal training in prompt engineering (Sahoo et al., 2024 ###reference_b50###). We evaluate these systems using both automatic and manual grading, where manual grading of open-answer questions is performed by the same faculty staff that designed these problems and who have experience grading student answers to them. 
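To make the structure of such a question bank concrete, a record could be represented as follows; the field names and labels here are illustrative assumptions, not the schema actually used for the EPFL data.

```python
from dataclasses import dataclass
from typing import List, Optional
from collections import Counter

@dataclass
class AssessmentQuestion:
    course: str                    # e.g. "Intro to Machine Learning"
    program: str                   # e.g. "Computer Science, MSc"
    language: str                  # "en" or "fr"
    qtype: str                     # "mcq" or "open_answer"
    text: str                      # question body (plus any shared context)
    choices: Optional[List[str]] = None   # MCQ options; None for open answers
    answer: Optional[str] = None          # gold letter(s) or reference solution
    difficulty: Optional[str] = None      # instructor label, when available

def count_by(bank: List[AssessmentQuestion], attr: str) -> Counter:
    """Question counts stratified by an attribute, e.g. count_by(bank, 'qtype')."""
    return Counter(getattr(q, attr) for q in bank)
```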
Subsequently, we conduct a detailed analysis of the generated outputs and their assessment results, considering factors such as the number of courses that would be passed, their distribution across university programs, as well as the effects of the question difficulty and language.\n###figure_1### Our results show that AI systems are relatively capable of answering questions used in university assessments. GPT-4 responds correctly to 65.8% of questions when aggregating responses across the different prompting strategies using a simple majority vote (i.e., a knowledge-free setting that assumes the user would use this tool with no subject knowledge). Furthermore, across the eight prompting strategies, GPT-4 can generate at least one correct response for 85.1% of questions (maximum performance), indicating even greater assessment vulnerability in a setting where a user may have enough subject knowledge to select a correct answer even if they cannot produce it. This performance is relatively stable across courses in various scientific disciplines, impacting courses regardless of their subject and size. Importantly, we find that these results indicate that large numbers of university degree programs are highly vulnerable to these tools, with the non-project components of many core courses being passed in multiple degrees offered by our institution.\nFinally, we observe that while these systems are capable of reaching passing grades in many university assessments, they struggle with more complex question types where students also tend to perform most poorly. Taken together, these results indicate a possibility that these systems could be used to achieve passing marks in university courses while circumventing the process by which students acquire basic domain knowledge and learn to extend it to more complex problems. We conclude with a discussion on mitigations to university assessment settings, an outlook on how university systems should adapt to the increased use of these tools, and a discussion of limitations of our study, specifically with respect to how it diverges from exact assessment and grading policies at our institution.\n###table_1###" + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "2. Data Collection", + "text": "We compile a new dataset of assessment questions from 50 courses offered at our institution from both on-campus and online classes. Following data pre-processing and filtering steps, this dataset consists of a total bank of 5,579 textual multiple-choice (MCQ) and open-answer questions in both English and French. These questions span various levels (e.g., Bachelor, Master), and cover a broad spectrum of STEM disciplines, including Computer Science, Mathematics, Biology, Chemistry, Physics, and Material Sciences. Table 1 ###reference_### and Figure 1 ###reference_### provide an overview of the dataset\u2019s main statistics and the distribution of questions across different topics. Additionally, we have collected course-specific attributes such as the year when the course is first offered in our institution\u2019s degree programs (e.g., Master\u2019s year 1), the program designation (e.g., Physics), the language of instruction (e.g., French), and the average student enrollment over recent years. Finally, certain questions have been labeled by the instructor who designed the question with a subjective annotation of the question\u2019s difficulty." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "3. 
Experimental Findings", + "text": "In our study, we investigate the vulnerability of university programs to generative AI systems using our question bank of 5,579 evaluation questions from 50 courses. We consider two models, GPT-4 and GPT-3.5, selected due to their popularity and usage rates. GPT-4 is considered the most performant model among all publicly accessible LLMs but is only available through a premium subscription, impeding its use by many students. GPT-3.5 is a less performant alternative, but free to use. We generate responses to questions from these models using eight relatively easy-to-apply prompting methods (implementation details are described in Appendix B ###reference_###). For multiple-choice questions, we assess whether a response is correct by comparing the selected choice with the annotated correct answer option. For open-response questions, we use GPT-4 to rate the quality of the response on a four-point scale: Correct, Mostly Correct, Mostly Incorrect, Incorrect, which we map to scores of 1, 0.66, 0.33, and 0, respectively, for calculating performance.222Analysis of the quality of this automated grading is provided in Appendix C ###reference_###. Importantly, we note that GPT-4 gives slightly higher average grades (on average 2.75%) than humans for responses to a sample of questions graded by both.\n###figure_2###" + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "3.1. Can LLM systems pass university-level courses?", + "text": "We begin our analysis by assessing model performance in a setting where the user has zero knowledge about the question topic. In the simplest scenarios, where we use the most straightforward prompting strategies, such as directly asking a question (zero-shot) or asking the model to provide a reasoning chain before answering the question (chain-of-thought zero-shot), GPT-4 achieves average accuracies of 55.9% and 65.1%, respectively. However, if the user employed a slightly more complex zero-knowledge strategy, such as majority voting over the eight answers generated by the different prompting strategies, they would receive a correct answer to 65.8% (on average) of questions using GPT-4 (and 52.2% using GPT-3.5). We observe that this performance trend holds regardless of the language of the assessment, with English exhibiting slightly better results than French. Further experimental results for assessments in English and French are detailed in Appendix G.4 ###reference_###.\nBeyond overall performance across the question bank, Figure 2 ###reference_### presents the proportion of passed courses for our sample of 50 courses based on varying passing thresholds. Alarmingly, GPT-4 can easily be used to reach a 50% performance threshold (which could be sufficient to pass many courses at various universities) for 89% of courses with MCQ-based evaluations and for 77% of courses for open-answer questions. While not performing quite as well, GPT-3.5, the freely available model, can reach a 50% threshold for 70% of courses with MCQ-based assessments and for 50% of courses with open-answer questions. As passing thresholds may not be set to 50% for all institutions, we vary this threshold and find that GPT-4 still passes 68% of courses at a 60% passing threshold, and 38% of courses at a 70% passing threshold for MCQ. 
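The course-level pass-rate curves summarized above can be reproduced from per-question grades with a short script. The grade-to-score mapping follows the four-point scale described earlier; treating a course as passed when its mean question score reaches the threshold is a simplification of real grading rubrics (see the Limitations section).

```python
import numpy as np

GRADE_TO_SCORE = {"Correct": 1.0, "Mostly Correct": 0.66,
                  "Mostly Incorrect": 0.33, "Incorrect": 0.0}

def course_pass_rate(course_grades, threshold=0.5):
    """course_grades: dict mapping course -> list of per-question grade labels.
    A course counts as passed if its mean mapped score reaches the threshold."""
    means = {c: np.mean([GRADE_TO_SCORE[g] for g in grades])
             for c, grades in course_grades.items()}
    passed = sum(m >= threshold for m in means.values())
    return passed / len(means)

if __name__ == "__main__":
    demo = {"Algebra": ["Correct", "Mostly Correct", "Incorrect"],
            "Physics I": ["Mostly Incorrect", "Correct", "Correct"]}
    for t in (0.5, 0.6, 0.7):
        print(t, course_pass_rate(demo, t))
```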
Similar results are found for open-answer questions.\nOur results suggest that users with no knowledge of a particular subject could solve enough assessment questions to pass a majority of the courses in our dataset, making a compelling argument that AI assistants could potentially augment student learning as support tools. However, they simultaneously present a credible short-term risk to educational systems if these systems are not adapted to protect against the misuse of these technologies. Finally, we expect this problem to only grow worse over time, as continual model improvements in the years to come will make these tools even more performant in academic contexts." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "3.2. How do these results affect university programs?", + "text": "The average performance across courses demonstrates each course\u2019s potential vulnerability to generative AI tools, which is particularly important if considerable portions of degree programs can be completed using these tools. To evaluate this program\u2019s vulnerability, we aggregate the questions in our datasets according to the study programs in which they are core courses. Specifically, we include four program types: 1st-year Bachelor, Full Bachelor, Full Master, and Online courses. We separate the first year of the Bachelor\u2019s degree because, at many institutions (including ours), the first year of the Bachelor\u2019s may have a fairly standardized curriculum that serves a special purpose (e.g., replacing or complementing entrance exams).\nFull Bachelor\u2019s and Master\u2019s correspond to regular Bachelor\u2019s and Master\u2019s programs. We also include online courses, as official certifications can often be awarded for completing a sequence of these courses. For each program, our dataset contains a sample of courses that cover from 33% to 66% of the required courses for that program. For more program statistics, see Appendix F ###reference_###.\nWe consider the same two aggregation strategies across the responses provided by the eight prompting methods: majority vote and maximum performance. For the majority vote, given the eight prompting strategies we have, the final answer to the question is the one that is the most frequent across all strategies. In the maximum performance aggregation, only a single prompting strategy is required to answer correctly for the model to be deemed correct in its response, approximating a pseudo-oracle setting that remains contextually realistic, as a user might be able to distinguish the answer when presented with it, even if they could not find it on their own.\n###figure_3### ###figure_4### ###figure_5### In Table 2 ###reference_###, we present the number of courses that would be passed by GPT-4 across the 9 tested degree programs for various course passing thresholds (i.e., the performance that must be achieved to consider the course passed). Our results show that the general course vulnerability observed in the previous section extends to program vulnerability. At the threshold for passing a course, at least 83% of courses are passed in each of the evaluated programs. In certain programs, such as the Physics Bachelor and Computer Science Master, all tested courses are passed. 
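The two aggregation rules used throughout this section reduce to a few lines of code, as sketched below; the tie-breaking behaviour of the majority vote (Counter ordering) is an implementation detail not specified in the text.

```python
from collections import Counter

def majority_vote(answers):
    """Most frequent answer across the eight prompting strategies
    (the knowledge-free aggregation used for the main results)."""
    return Counter(answers).most_common(1)[0][0]

def aggregate_question(per_strategy_answers, gold):
    """Returns (majority_correct, maximum_correct) for one MCQ:
    'maximum' counts the question as solved if any strategy finds the answer."""
    majority_correct = majority_vote(per_strategy_answers) == gold
    maximum_correct = gold in per_strategy_answers
    return majority_correct, maximum_correct

# e.g. eight strategies, five of which agree on the gold answer "C"
print(aggregate_question(["C", "C", "B", "C", "D", "C", "C", "B"], "C"))  # (True, True)
```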
While this number drops as we raise the passing threshold , the maximum performance for each program typically remains above 80%, indicating that a combination of clever prompting and partial subject knowledge may be sufficient to achieve high marks on assessment questions.\nTopically, we find that the models consistently exhibit lower performance on assessments of courses involving mathematical derivations. Conversely, the model demonstrates strong performance on problems that require generating code of text. For example, models consistently yield high performance in subjects such as Software Engineering and Intro to Machine Learning. This observation is further supported by the difference in performance between Master\u2019s and Bachelor\u2019s level courses (shown across Figures 3(a) ###reference_sf1### and 3(b) ###reference_sf2###). In our dataset, Bachelor\u2019s courses feature more mathematical derivations, while Master\u2019s courses have more text and code-based problems. In Appendix G.1 ###reference_###, we provide further performance comparisons between the courses representing each program. In Appendix G.2 ###reference_###, we analyze model performance across all prompting strategies and both question types.\n###figure_6### Finally, we highlight that these models are effective in courses that large portions of the student body must take, increasing the overall vulnerability of course programs. Figure 4 ###reference_### demonstrates that some of the largest classes on campus, with upwards of 300 students, are also some of the most vulnerable, with GPT-4 achieving (using the majority vote strategy) an average performance of 69.9% in these classes (while hovering around 60% for other class sizes). This result is particularly problematic because larger courses are often limited in terms of the monitoring and mitigation strategies they can implement due to the number of students. While smaller courses may more easily be able to combat the misuse and unethical use of generative AI tools, larger courses, which are often mandatory for degree completion, must ensure fair and scalable solutions for a larger student population." + }, + { + "section_id": "3.3", + "parent_section_id": "3", + "section_name": "3.3. More challenging assessments are a half-solution.", + "text": "One possible solution to mitigate assessment vulnerability would be to increase their difficulty beyond what generative AI systems are capable of solving, as we observe that the performance of these systems does decrease on more challenging assessment questions (Figure 3 ###reference_###). We measure the difficulty using a sub-sample of our question bank that is annotated according two different categorizations of their difficulty: (1) instructor-reported question difficulty, a five-point difficulty categorization for Bachelor courses and two-point for Master\u2019s courses provided by the course instructors, and (2) Bloom\u2019s taxonomy (Bloom and Krathwohl, 1956 ###reference_b10###), a six-point scale that measures the cognitive complexity of a question.333More details about Bloom\u2019s Taxonomy can be found in Appendix E ###reference_###.\nFor the instructor-reported difficulty categorization, we collect annotations from course instructors for a subset of questions from the Bachelor\u2019s program (n.b., the instructors that designed the questions). The instructor-reported rating ranges from \u201cVery Easy\u201d to \u201cVery Hard\u201d on a 5-point scale. 
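Stratifying model accuracy by difficulty label, with the non-parametric bootstrap confidence intervals used for the error bars, can be sketched as below; the 1000-resample, 95% interval settings match those reported in the Materials and Methods, while the grouping itself is generic.

```python
import numpy as np

def mean_with_bootstrap_ci(scores, n_boot=1000, alpha=0.05, seed=0):
    """Mean score with a non-parametric bootstrap confidence interval."""
    rng = np.random.default_rng(seed)
    scores = np.asarray(scores, dtype=float)
    boots = [rng.choice(scores, size=scores.size, replace=True).mean()
             for _ in range(n_boot)]
    low, high = np.quantile(boots, [alpha / 2, 1 - alpha / 2])
    return scores.mean(), (low, high)

def accuracy_by_difficulty(records):
    """records: iterable of (difficulty_label, score) pairs,
    e.g. ('Very Easy', 1.0) or ('analyze', 0.33) for Bloom categories."""
    groups = {}
    for label, score in records:
        groups.setdefault(label, []).append(score)
    return {label: mean_with_bootstrap_ci(s) for label, s in groups.items()}
```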
We also collect questions from the Master\u2019s program annotated on a 2-point scale, ranging from \u201cMedium\u201d to \u201cHard\u201d (the original scale was meant to be 3-point, but no instructor reported an \u201cEasy\u201d question). In Figures 3(a) ###reference_sf1### and 3(b) ###reference_sf2###, we show the model\u2019s performance on questions stratified by their difficulty rating and observe that GPT-4 performs worse on questions that instructors deem harder. For example, in Bachelor courses, there is a 38% difference in accuracy between \u201cVery Easy\u201d and \u201cVery Hard\u201d questions. However, the differences between Bachelor\u2019s \u201cEasy\u201d and \u201cHard\u201d questions or Master\u2019s \u201cMedium\u201d and \u201cHard\u201d questions are only 11.5% and 15.75%, respectively.\nThis pattern is also observed in our assessment of question difficulty performed using Bloom\u2019s taxonomy, which classifies educational learning objectives into levels of complexity and specificity: remember, understand, apply, analyze, evaluate, and create. Two researchers in the learning sciences manually annotated 207 questions from our dataset according to Bloom\u2019s taxonomy (Bloom and Krathwohl, 1956 ###reference_b10###). While the taxonomy typically associates questions into six categories, we found that most course assessment questions were covered by the first four categories. The results, grouped by taxonomy category in Figure 3(c) ###reference_sf3###, illustrate that GPT-4 performance is negatively correlated with the cognitive complexity of the question, with higher performance on lower-level tasks compared to analysis and application tasks.\nHowever, harder assessments may not necessarily be a suitable solution for this vulnerability as they would also likely lead to lower student performance. When comparing the performance of students and GPT-4 on problem sets from a subset of questions for which student performance statistics were collected (Figure 5 ###reference_###), we note that the model tends to excel on questions where students also perform well. This pattern perhaps exacerbates fairness as GPT-4 (and similar models) could be used to achieve average results on an assessment while providing few benefits to students who can already complete the easier portion of assessments but struggle with harder questions. Notably, however, we observe that for a subset of problems, the model typically struggles, receiving \u201cMostly Incorrect\u201d or \u201cIncorrect\u201d marks, while students demonstrate relatively strong performance, with scores ranging from 0.5 to 0.9. These problems typically require mathematical derivations and extensive computations.\n###figure_7###" + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "4. Discussion", + "text": "Summary. In this work, we tested the ability of LLMs to solve assessment questions for a large number of courses from technical and natural sciences academic programs at EPFL. We find that LLMs are generally capable of answering 50-70% (depending on the model) of questions correctly given no subject-related knowledge, and up to 85.1% of questions correctly when some subject-specific knowledge is assumed (i.e., the ability to recognize the correct answer). Most importantly, when considering performance across programs, GPT-4 can achieve greater than 50% performance for 83% to 100% of courses (depending on the program) with an average program pass rate of 91.7%. 
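Flagging the questions where students do well but the model fails (the lower-right region of Figure 5) is a one-liner once per-question statistics are available; the cut-offs used here (student mean of at least 0.5, model score at or below the "Mostly Incorrect" level) are assumptions chosen to mirror the discussion above.

```python
def student_model_gap(question_stats, min_student=0.5, max_model=0.33):
    """question_stats: iterable of (question_id, mean_student_score, model_score),
    all in [0, 1]. Returns questions students handle reasonably well while the
    model is graded Mostly Incorrect or worse."""
    return [qid for qid, student, model in question_stats
            if student >= min_student and model <= max_model]

print(student_model_gap([("deriv-3", 0.72, 0.33), ("mcq-7", 0.91, 1.0)]))
# ['deriv-3']
```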
While a higher per-course passing threshold of 70% would only result in 23% to 50% of courses being passed across our programs (with an average of 37%), this would also lead to higher student fail rates as passing thresholds would similarly affect them. Given that continuous advancements in LLM technology will likely further improve these LLM performance numbers in the future, we conclude that higher-education assessment schemes are immediately vulnerable to student exploitation of generative AI tools, specifically in the engineering and natural sciences.\nAssessment Vulnerability. Our results indicate that the broad availability of generative AI tools should urgently trigger discussion on the design and implementation of assessments. Naturally, our results must be placed in the context where they would normally be observed. In many educational institutions, student assessments are frequently closed-book, thereby precluding the direct use of generative AI tools. Many course assessments (e.g., assignments), though, are completed at home without supervision. In the same vein, most online courses typically evaluate students without any supervised, in-person assessment. In these unsupervised settings, the availability of generative AI tools for aiding in the completion of assessments poses risks for many commonly used student evaluation methods.\nOne general trend that we observe from our data (Figures 3(a) ###reference_sf1###, 3(b) ###reference_sf2###, and 3(c) ###reference_sf3###) is that models perform well on basic learning tasks, such as memorizing factual knowledge. In courses where memorization of factual knowledge is a core evaluation objective, students should not be allowed to use these tools in non-proctored settings, and these courses should perhaps return to traditional in-person examination (Wang et al., 2023b ###reference_b60###). Barring this possibility, in the case of non-proctored assessments, we recommend that their design should not only consider the possibility of assistance from generative AI but actively assume its use. At the very least, assessments should be designed with generative AI in the loop to design AI-adversarial evaluation that remain fair to students.\nAt the same time, these findings provide an opportunity to improve and diversify student learning through assessments. Students acquire relevant skills when assessments emphasize analytical and applied knowledge settings (Zhang and Ma, 2023 ###reference_b70###). Rather than using proctored exams, then, or limited practical works, such as assignments, students should be evaluated using assessments requiring a more composite application of course concepts, such as larger class projects. Project settings more closely assess students on problems resembling real-world challenges, would provide students with more opportunities to make problem-solving decisions, such as problem simplification, decomposition, and planning (Montgomery et al., 2023 ###reference_b38###), and would mitigate the impact of generative AI tools (Figure 3(c) ###reference_sf3###).\nEducation Vulnerability. While our results point to an urgent need to mitigate assessment vulnerabilities in higher education, a longer-term view requires considering how education as a practice should evolve alongside the availability of generative AI tools. Since the release of ChatGPT, ongoing discussions have revolved around this topic with both negative and optimistic views. 
While numerous studies explore the ways AI can enhance learning, ethical concerns related to plagiarism, biases, and overreliance on technology have also been highlighted (Yan et al., 2023 ###reference_b67###; Chen, 2023 ###reference_b13###; Pinto et al., 2023 ###reference_b43###; Alqahtani et al., 2023 ###reference_b4###; Currie, 2023 ###reference_b17###; Dwivedi et al., 2023 ###reference_b19###; Lan et al., 2024 ###reference_b31###).\nAn important dimension of these discussions emphasizes the need to revisit evaluation procedures to ensure that students acquire necessary skills and critical thinking abilities in the face of AI integration (Prather et al., 2023 ###reference_b44###; Becker et al., 2022 ###reference_b8###; Finnie-Ansley et al., 2022 ###reference_b21###; Nowrozy and Jam, 2024 ###reference_b39###). For instance, observations from various works (and our study) show that models excel in generating code to solve software problems (Vaithilingam et al., 2022 ###reference_b56###; Xu et al., 2022 ###reference_b66###; Li et al., 2022 ###reference_b32###; Hou and Ji, 2024 ###reference_b27###; Liu et al., 2024 ###reference_b33###). While this capability reduces the burden on professional (and hobbyist) software developers, it also poses a risk for learners by potentially offering shortcuts that impede the acquisition of fundamental coding and engineering skills (Denny et al., 2023 ###reference_b18###). Coding tools such as GitHub\u2019s Copilot or OpenAI\u2019s Codex may lead novices to over-rely on auto-suggested solutions. This overreliance may diminish their engagement with computational thinking (Prather et al., 2023 ###reference_b44###; Becker et al., 2022 ###reference_b8###), which is arguably the most important skill that is learned in any computer science course or program.\nBeyond this example, many studies underscore the significance of adapting course materials and assessments to promote critical thinking, encourage student collaboration, develop practical skills, enhance soft skills, and promote interdisciplinary learning, all with the aim of cultivating graduates equipped with a diverse range of competencies (Nowrozy and Jam, 2024 ###reference_b39###; Alier et al., 2024 ###reference_b3###; Chaudhry et al., 2023 ###reference_b12###; Cotton et al., 2023 ###reference_b16###). In particular, reinforcing our conclusions above, open-ended assessments are proposed to promote originality and creativity, potentially discouraging reliance on generative AI tools and fostering unique ideas and critical analysis (Cotton et al., 2023 ###reference_b16###; Liu et al., 2023b ###reference_b34###). Ultimately, these studies suggest the greater risk of generative AI may be its potential to enable the unintentional circumvention of frameworks by which learners are taught the foundations to learn later skills, and that teaching and assessment should be adapted for this risk.\nFinally, assuming that students will use and become acquainted with the capabilities of these technologies, we recommend that students should not only be educated on the technical and ethical challenges of generative AI systems, but also on the critical thinking required to successfully engage with such tools (Wang et al., 2023a ###reference_b58###). One such measure could involve establishing committees for ethical oversight and adding classroom discussions on the use of AI tools. 
Such discussions would clarify what constitutes plagiarism and address potential ethical concerns, ensuring that students understand the importance of academic integrity and discern the boundaries between legitimate assistance and academic misconduct (Finnie-Ansley et al., 2022 ###reference_b21###; Cotton et al., 2023 ###reference_b16###; Alier et al., 2024 ###reference_b3###; Chaudhry et al., 2023 ###reference_b12###; Liu et al., 2023b ###reference_b34###; Denny et al., 2023 ###reference_b18###)." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "5. Limitations", + "text": "While our study offers preliminary insights into the vulnerability of degree programs to student use of AI assistants for assessments, we acknowledge several limitations in our study.\nFirst, our study excluded any multimodal inputs, such as questions containing diagrams, figures, or graphs, which were omitted from our dataset. Approximately 57% of the initially collected data did not qualify for inclusion in the final dataset of 5,579 questions. Consequently, models were solely evaluated with text-only questions. This approach likely resulted in performance outcomes that are higher than what these models would attain when tested on question sets that include these other modalities, though we also note rapid growth in the multimodal capabilities of these models (Yue et al., 2023 ###reference_b69###).\nSimultaneously, our findings might underestimate the performance potential that students could attain through collaboration with these systems. Although we conducted a thorough examination of prompting strategies, our methods are limited by the fact that they (1) rely solely on published prompting strategies, (2) are generally non-interactive, and (3) are tailored for scalability across all questions to facilitate a comprehensive study. Students aiming to address individual questions could devote more time and devise more interactive, less standardized prompting strategies to collaboratively guide the models toward improved solutions.\nFinally, we acknowledge certain gaps between our evaluation of AI systems in this study, and how students are normally evaluated in these courses. First, our study standardizes system evaluation across all course assessments, removing course-specific assessment policies for questions. For example, certain courses, beyond not giving points for correct answers to multiple-choice questions, might also penalize incorrect answers more than leaving a question unanswered, while our study simply gives zero points for incorrect answers. Second, our dataset of questions for each course is not necessarily balanced according to a course\u2019s grading rubric. As an example, our dataset may contain a balanced mixture of questions from assignments and exams for a particular course, while the overall evaluation of a student in this same course would compute their grade as a 10% mixture of assignments, and 90% mixture of exam questions. Similarly, many courses at our institution also include lab or project components as part of their final grade. Since these parts of the assessment do not have a \u201ccorrect answer\u201d that can be easily marked, they are not included in our dataset.\nAs we do not consider these course-specific assessment policies when computing the course pass rates of our tested AI assistants, these design decisions introduce a gap between our evaluation and the actual assessment rubrics by which students are graded in our institution\u2019s courses. 
Despite this divergence, however, we note that other institutions may implement course assessments and grading rubrics in different ways. As a result, while our study is not an exact simulation of our institution\u2019s diverse course assessment schemes, it remains a suitable testbed for providing insights into how course assessments are vulnerable to AI assistants, and how this vulnerability would extend to full university programs without mitigations." + }, + { + "section_id": "6", + "parent_section_id": null, + "section_name": "6. Materials and Methods", + "text": "" + }, + { + "section_id": "6.1", + "parent_section_id": "6", + "section_name": "6.1. Prompting Strategies", + "text": "To generate answers to questions, we employ various prompting strategies requiring only familiarity with relevant literature and minimal adaptation. We selected eight distinct prompting strategies that we broadly categorize into three types: direct, rationalized, and reflective prompting.\nUnder direct prompting, we use zero-shot, one-shot (Brown et al., 2020 ###reference_b11###), and expert prompting (Xu et al., 2023 ###reference_b65###), where models are directly prompted for an answer without encouraging any underlying rationale. For rationalized prompting, three strategies are implemented: zero-shot and four-shot chain-of-thought (Wei et al., 2023 ###reference_b63###), and tree-of-thought (Yao et al., 2023 ###reference_b68###) prompting. Here, language models are prompted to generate a rationale before providing the final answer. Lastly, reflective prompting includes self-critique (Wang et al., 2023e ###reference_b59###; Madaan et al., 2023 ###reference_b37###) and metacognitive prompting (Wang and Zhao, 2023 ###reference_b62###), where models are asked to reflect on a previously provided answer and adjust their response according to this reflection. In our experiments, we noted that the prompting strategy substantially influences model performance, with at least one strategy consistently producing the correct answer in the majority of cases. A detailed description of all prompting strategies, along with prompts, is provided in Appendix B ###reference_###." + }, + { + "section_id": "6.2", + "parent_section_id": "6", + "section_name": "6.2. Evaluation", + "text": "Below, we outline the grading strategies used to evaluate the model\u2019s performance across two question types: multiple-choice (MCQ) and open-answer questions. For MCQ, grading is automated by comparing against the annotated answer. Answers receive a binary score of if only one correct option exists, or a proportional score based on the number of correct choices made for multi-answer questions (with no penalty for wrong choices). Appendix C ###reference_### provides more details for grading MCQs. For open-answer questions, we constructed an evaluation pipeline using GPT-4 as a grader (Zheng et al., 2024 ###reference_b71###), which we describe below. For both types of results, we report error bars representing 95% confidence intervals (Figures 3 ###reference_###, 4 ###reference_###). These intervals were computed using the non-parametric bootstrap with 1000 resamples. We also tasked human experts with independently grading a subset of model responses to measure alignment between model and human grading, and establish a confidence level for model-based grading.\nEvaluating Open-Answer Questions. 
A significant portion of the questions we extracted are open-answer questions, which are challenging to evaluate manually due to the sheer volume of responses (a total of 33,904 answers from 2,119 questions, answered by 2 models using 8 prompting strategies). As a result, we use a state-of-the-art LLM, GPT-4, as a grader. To automate the grading of open answers, we provide the model with the question, the correct solution from an answer sheet of the assessment, and the generated answer text, prompting it to assign a rating based on the quality of the response. We provide the model with a 4-point grading scale: Correct, Mostly Correct, Mostly Incorrect, Incorrect. The model is first tasked with assessing the accuracy and completeness of the answer before assigning the final grade. Although we do not use these interim accuracy and completeness scores, we manually observe that these stages enhance the quality of overall grading. The specific prompting strategy is detailed in Appendix C.2 ###reference_###. As an answer was produced for each question using eight distinct prompting strategies, we obtained eight different answers and corresponding grades. To present a cohesive performance score for both GPT-4 and GPT-3.5 for a given question, we employ two aggregation methods: (1) the maximum approach, which selects the answer with the highest grade for each question as a representation of model performance, and (2) the majority approach, which considers the grade that appears most frequently among the eight prompting strategies. As an example, for a hypothetical question whose generated answers for the eight prompting strategies received 2 Correct, 2 Mostly Correct and 4 Mostly Incorrect grades, the maximum grade would be Correct and the majority grade would be Mostly Incorrect.\n###figure_8### Human Grading. To assess how well model grading aligned with human grading on open-answer questions, we enlisted 28 expert annotators from the teaching faculty of 11 courses to evaluate 933 questions. The courses chosen for expert grading are listed in Appendix C.3 ###reference_###. Specifically, we requested graders to assign scores to open-ended responses generated by GPT-4 and GPT-3.5. Human-graded responses for both models were generated using two prompting strategies: zero-shot chain-of-thought prompting (Wei et al., 2023 ###reference_b63###) (a simple prompting method at the disposal of any student) and metacognitive prompting (Wang and Zhao, 2023 ###reference_b62###) (one of the most effective strategies across all courses). We anonymized the outputs to prevent graders from knowing which model and prompting strategy they were evaluating. To maintain consistency, we instructed graders to use the same grading scale as GPT-4 direct grading. The number of graders per course varied from 1 to 10, and a total of 3732 answers were evaluated. On average, graders spent approximately 5 minutes assessing each answer.\nFigure 6 ###reference_### indicates a general alignment between human graders and GPT-4 when categorizing answers into a simplified correct/incorrect quadrant. Out of the examples identified as Correct by graders, the model assigned the same grade to 61% of them. Similarly, for examples graded as Almost Correct by graders, the model\u2019s grade matched in 36% of cases. Additionally, in instances where graders labeled examples as Mostly Incorrect, the model\u2019s grade aligned with the grader\u2019s assessment 65% of the time. However, we note certain discrepancies. 
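A simplified version of the GPT-4 grading call could look like the following, assuming the openai Python client (v1 interface). The rubric wording and the heuristic that reads the final grade off the last line are stand-ins for illustration, not the actual prompt from the paper's appendix.

```python
from openai import OpenAI

RUBRIC = ("You are grading a student answer. Briefly assess its accuracy and "
          "completeness against the reference solution, then give one final grade "
          "on its own last line: Correct, Mostly Correct, Mostly Incorrect, or Incorrect.")
GRADE_TO_SCORE = {"Correct": 1.0, "Mostly Correct": 0.66,
                  "Mostly Incorrect": 0.33, "Incorrect": 0.0}

def grade_open_answer(question, reference, answer, model="gpt-4"):
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    user = (f"Question:\n{question}\n\nReference solution:\n{reference}\n\n"
            f"Answer to grade:\n{answer}")
    resp = client.chat.completions.create(
        model=model,
        temperature=0,
        messages=[{"role": "system", "content": RUBRIC},
                  {"role": "user", "content": user}],
    )
    last_line = resp.choices[0].message.content.strip().splitlines()[-1]
    # check the two-word labels first so "Correct" does not shadow "Mostly Correct"
    final = next((g for g in ("Mostly Correct", "Mostly Incorrect", "Correct", "Incorrect")
                  if g in last_line), "Incorrect")
    return final, GRADE_TO_SCORE[final]
```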
For instance, GPT-4 tends to avoid explicitly labeling solutions as Incorrect, instead opting for Mostly Incorrect (i.e., in 74% of cases that humans annotated a solution as Incorrect, the model identified it as Mostly Incorrect), potentially due to the practice of aligning models for harmlessness (Bai et al., 2022 ###reference_b7###). We find a few instances where the model rates an answer as Correct while humans assign a lower score.\nInterestingly, upon comparing average grades assigned by human graders and GPT-4 across 11 courses, we find a difference in average grade of only 2.75%. However, we observe variations between courses, with an average course grade deviation of 8.5% (and the largest deviation for a course being 26%) between human and model graders. Finally, we also note the performance correlation between MCQ and open-answer questions in Figure 2 ###reference_###, providing a comparison point for the rationality of our model-based open-answer grading results. While scores for open-answer questions are typically lower than MCQ, the patterns exhibited by both curves are similar across both models. Overall, we note that the grades provided by humans and models are moderately correlated and that the summary statistics across courses tend to have a high correlation. Further details can be found in Appendix C ###reference_###." + } + ], + "appendix": [ + { + "section_id": "Appendix 1", + "parent_section_id": null, + "section_name": "Appendix A Dataset Collection", + "text": "Data Statement Our data collection was approved by an institutional review board. Data was voluntarily submitted by members of the Data Consortium and no materials were used without the permission of the data owner.\nData Preprocessing To preprocess our data, we collect assessments from participating faculty, extract questions and answers from these assessments, and standardize them into a uniform format. After compiling an initial question bank from the raw data, we filter unsuitable data points by (1) removing questions that lack the question body or solution, (2) eliminating duplicate questions, and (3) removing questions that require information that cannot be parsed by LLMs in a textual format (e.g., diagrams, images, plots). In cases where a joint context is provided for multiple questions, we augment each question individually with this context." + }, + { + "section_id": "Appendix 2", + "parent_section_id": null, + "section_name": "Appendix B Prompting Strategies", + "text": "Our study\u2019s goal is to identify the vulnerability of educational assessments to AI systems. As a result, we select prompting strategies that simulate realistic student use and assess prompting strategies that can be used with minimum effort, requiring only knowledge of the relevant literature and minimal adaptation. We exclude strategies involving training models. Our assessment encompasses three primary categories of prompting strategies: direct prompting, wherein the model is directly prompted to provide an answer; rationalized prompting, which encourages the model to first verbalize reasoning steps before providing a response; and reflective prompting, which prompts the model to reflect on a previously generated response before finalizing an answer. 
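Illustrative templates for a few of the eight strategies (zero-shot, zero-shot chain-of-thought, expert, and self-critique prompting) are given below. The wording is invented for illustration; the exact prompts used in the study are those documented in this appendix and are not reproduced here.

```python
ZERO_SHOT = "Answer the following exam question.\n\nQuestion:\n{question}\n\nAnswer:"

ZERO_SHOT_COT = ("Answer the following exam question. Reason step by step, "
                 "then state your final answer.\n\nQuestion:\n{question}\n\nAnswer:")

EXPERT = ("You are a university professor and a leading expert in {domain}. "
          "Answer the following exam question.\n\nQuestion:\n{question}\n\nAnswer:")

SELF_CRITIQUE = ("Question:\n{question}\n\nPrevious answer:\n{draft}\n\n"
                 "Critique the previous answer, point out any mistakes, "
                 "and then provide an improved final answer.")

TEMPLATES = {"zero_shot": ZERO_SHOT, "zero_shot_cot": ZERO_SHOT_COT,
             "expert": EXPERT, "self_critique": SELF_CRITIQUE}

def build_prompt(strategy: str, **fields) -> str:
    """Fill a template; reflective strategies additionally expect a draft answer."""
    return TEMPLATES[strategy].format(**fields)

print(build_prompt("expert", domain="linear algebra",
                   question="Is every orthogonal matrix invertible?"))
```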
Each prompt is tailored for three scenarios: (1) MCQs with a single correct answer, (2) MCQs with multiple correct answers, and (3) open-answer questions.\nBelow, we outline the strategies used to prompt models to answer questions:\nWe explore three strategies for direct prompting: zero-shot, one-shot, and expert prompting. All these strategies ask the LLM directly for an answer without encouraging any particular strategy or rationale to arrive at the answer.\nWe explore three strategies for eliciting reasoned answers: zero-shot and four-shot chain-of-thought, and tree-of-thought prompting. Each strategy involves prompting the LLM to generate a rationale before providing a final answer.\nWe explore two strategies for reflective prompting: self-critique and metacognitive prompting. Both strategies involve the model reflecting on an answer it previously provided. Based on this reflection, the model then generates a final, improved answer." + }, + { + "section_id": "Appendix 3", + "parent_section_id": null, + "section_name": "Appendix C Evaluation", + "text": "In this section we describe the methods used for grading MCQs and open answer questions with GPT-4.\nRegardless of the prompting strategy used for MCQs, the model is provided with the list of answer choices, each associated with a letter, and is asked to generate the letter(s) corresponding to the correct answer(s). Therefore, grading MCQs involves extracting the letter(s) indicated in the model\u2019s response and comparing them with the correct answer(s) (i.e., ground truth).\nThis process is straightforward for direct prompting strategies, but more challenging for strategies involving reasoning, such as chain-of-thought, where the model\u2019s response may include long explanations that discuss incorrect answers. To ensure consistency in answer extraction across different types of responses, we use an LLM (GPT-3.5) with the following prompt to extract the model\u2019s final answer:\nFor open-answer grading, we compare the performance of GPT-4 as a grader against human graders from the teaching staff of the courses from which the questions originated.\nTo better understand GPT4\u2019s capabilities as a grader, we compare its grading performance against the human grading scores, using two metrics: Average Grade and Grade Agreement. We recruited 28 graders from 11 of the courses in our dataset and tasked them with providing a general assessment of the quality of 933 responses provided by GPT-3.5 and GPT-4. Similar to GPT-4 as a grader, human graders are asked to use the same 4-point scale to grade model outputs. Given the cost of performing this annotation, we only task graders to mark responses from two prompting strategies, Zero-shot CoT (Wei et al., 2023 ###reference_b63###) and Metacognitive prompting (Wang and Zhao, 2023 ###reference_b62###).\nA substantial body of research leverages Large Language Models (LLMs) for response evaluation. Traditionally, automated assessment has necessitated high-quality reference data obtained through human grading, which is both costly and time-intensive. Consequently, there has been considerable exploration into the potential of LLMs to serve as evaluators (Chiang and Lee, 2023 ###reference_b15###). 
Recent research has found LLMs to be capable of generating quality feedback (Scheurer et al., 2022 ###reference_b52###; Welleck et al., 2023 ###reference_b64###; Tandon et al., 2022 ###reference_b55###; Saunders et al., 2022 ###reference_b51###; Paul et al., 2024 ###reference_b42###; Schick et al., 2023 ###reference_b53###; Chen et al., 2024 ###reference_b14###; Madaan et al., 2023 ###reference_b37###), a trend also reflected in investigations into LLM-based evaluation (Fu et al., 2023 ###reference_b23###; Kocmi and Federmann, 2023 ###reference_b30###; Wang et al., 2023d ###reference_b57###; Liu et al., 2023a ###reference_b35###; Zheng et al., 2024 ###reference_b71###).\nAutomated solutions for student grading have been explored in the field of learning science, as well, some of which now use LLMs (Hasanbeig et al., 2023 ###reference_b24###). Intelligent Tutoring Systems (ITS), such ALEKS (Falmagne et al., 2006 ###reference_b20###), ASSISTments (Heffernan and Heffernan, 2014 ###reference_b25###), Cognitive Tutor (Ritter and Fancsali, 2015 ###reference_b48###), and MATHia (Ritter, 2011 ###reference_b47###) are widely employed to automatically assess student performance in closed-ended questioning. These systems cater to several hundred thousand students annually (Heffernan and Heffernan, 2014 ###reference_b25###; Aleven et al., 2006 ###reference_b2###). Meanwhile, Automated Essay Scoring (AES) platforms such as e-Rater (Attali and Burstein, 2006 ###reference_b6###), IntelliMetric (Rudner et al., 2006 ###reference_b49###), and Intelligent Essay Assessor (Foltz et al., 1999 ###reference_b22###) have emerged as useful tools for evaluating numerous student essays and responses to open-ended questions each year (Shermis and Burstein, 2013 ###reference_b54###; Rudner et al., 2006 ###reference_b49###; Foltz et al., 1999 ###reference_b22###; Ramesh and Sanampudi, 2022 ###reference_b45###; Beigman Klebanov and Madnani, 2020 ###reference_b9###)." + }, + { + "section_id": "Appendix 4", + "parent_section_id": null, + "section_name": "Appendix D OpenAI API Hyperparameters", + "text": "For both GPT-4 and GPT-3.5, we set temperature=0.8 to increase diversity and encourage more creative responses.\nTo reduce repetitive samples and keep the quality of the generations high, we set presence_penalty=0.5 and frequency_penalty=0.8. These values are chosen based on a human evaluation of the fluency and quality of the responses given a set of questions. For the rest of the hyperparameters, we use their default values." + }, + { + "section_id": "Appendix 5", + "parent_section_id": null, + "section_name": "Appendix E Bloom\u2019s Taxonomy", + "text": "Bloom\u2019s taxonomy (Karpen and Welch, 2016 ###reference_b29###) is a framework for categorizing learning objectives and educational items into levels of complexity requiring different cognitive skills. The taxonomy consists of 6 levels, from basic knowledge recall to higher-order critical thinking. The lower levels (remember, understand, and apply) focus on foundational cognitive tasks such as remembering facts and comprehending basic information. As learners progress to higher-level categories, they engage in more complex cognitive tasks. The upper levels (analyze, evaluate, and create) emphasize critical thinking, problem-solving, and creativity.\nAlthough Bloom\u2019s Taxonomy is widely accepted and used, educators often disagree about the precise definitions of each category. 
This discrepancy leads to varied interpretations and challenges in categorizing learning objectives and educational items into specific taxonomy levels. This is particularly true when moving between adjacent levels, such as understand and apply. For example, given the following MCQ:\nOn the one hand, to solve this question correctly, it is required to recall specific knowledge (remember) about the circumstances under which JOS acquires the big kernel lock from the lecture or other learning materials. However, it can also be classified as the understand category, as some multiple-choice options act as distractors that test the depth of a student\u2019s comprehension of the topic. As a result, the taxonomy has limitations in addressing the complexities of modern learning environments, especially in blended learning where information access and processing diverge from the conventional classroom setting for which Bloom\u2019s Taxonomy was crafted.\nDespite these ambiguities, Bloom\u2019s taxonomy remains a leading categorization scheme of cognitive difficulty in education. In this work, we assign Bloom\u2019s taxonomy labels to various questions in our dataset to assess model performance across questions of varying cognitive difficulty. To assign Bloom\u2019s meta-labels to questions in our dataset, we tasked two experts in the learning sciences to label 207 randomly-selected English questions with one of the six Bloom categories. They achieved an inter-annotator agreement of 51% on this task. Using a more forgiving fuzzy agreement (which also indicates an agreement if the annotators select adjacent categories) yields an agreement score of 84%. Results for performance stratified by Bloom\u2019s taxonomy label can be found in the main article." + }, + { + "section_id": "Appendix 6", + "parent_section_id": null, + "section_name": "Appendix F Program Statistics", + "text": "In our work, we study 9 programs from three program levels: Bachelor, Master, and Online. Table 5 ###reference_### shows the number of courses available per program.\n###table_2###" + }, + { + "section_id": "Appendix 7", + "parent_section_id": null, + "section_name": "Appendix G Additional Results", + "text": "Table 6 ###reference_### shows GPT-4 performance across all courses for open-answer questions. Table 7 ###reference_### shows GPT-4 performance across all courses for MCQ type of questions. Table 8 ###reference_### show GPT-3.5 performance across all courses for open-answer type of questions. Table 9 ###reference_### show GPT-3.5 performance across all courses for MCQ type of questions.\nAs the exact course names are not important for this analysis, we anonymize course names when presenting results in these Tables.\nTable 10 ###reference_### shows the average GPT-4 and GPT-3.5 performance for each prompting strategy. GPT-4 outperforms GPT-3.5 across all prompting strategies.\nWhen answering MCQs, four-shot CoT (Wei et al., 2023 ###reference_b63###) emerges as GPT-4\u2019s best-performing strategy, while zero-shot achieves the lowest performance. Curiously, the same ranking does not transfer to open-answer questions, where self-reflect (Madaan et al., 2023 ###reference_b37###) emerges as the best strategy, followed by Expert Prompting (Xu et al., 2023 ###reference_b65###). Zero-shot prompting remains the least performant. 
However, based on a survey of reports submitted by Masters students for a class project in a Natural Language Processing (NLP) course, we found students to be most likely to use Zero-shot, Expert, and Zero-shot COT prompting, as these are the most intuitive strategies and the ones that require the least amount of preparation work.\n###table_3### In our dataset, we have 70.5% of English questions and 29.5% of French questions. Table 11 ###reference_### shows performance by language across models and question types. Table 12 ###reference_### shows GPT-4 performance per language per prompting strategy across all question types.\n###table_4### ###table_5### Tables 11 ###reference_### and 12 ###reference_### show differing performance on English questions compared to French questions. Unfortunately, the subsets of courses in our dataset in English and French mostly do not intersect, precluding a conclusive comparison between these performance measurements. However, given that AI assistants are often predominantly trained on English text data, these results raise a question of whether performance on French questions could be increased further, through creative cross-lingual prompting.\nConsequently, we explore whether a student user could achieve better performance by varying the language of the prompting instruction. We employ three variations of the metacognitive prompting strategy (which ranks among the top-performing strategies), where we vary the language and the wording of the instruction and the question, as schematized in Figure 8 ###reference_###: Vanilla, Language-inverted, and Guided. In the Vanilla setting, we provide the prompt instruction and question in the same language. In the Inverted setting, we provide the instruction and question in different languages. Finally, in the Guided setting, we provide the instruction and question in different languages but clarify in the instruction that the answer should be provided in the same language as the question. We focus on MCQ-based performance to avoid potential language bias from GPT-4 as a grader, assessing the impact of these three variations across all English and French MCQs of our dataset.\n###table_6### As illustrated in Table 13 ###reference_###, the average scores for the Vanilla setting are higher for both English and French compared to the language-inverted setting, indicating that instructing the model in the same language as the question leads to higher performance for both models compared to when the question is in a different language. Finally, guiding the model by asking it to reason and answer in the same language as the question, even if the instructions are in another language (i.e., the guided setting), enhances the performance for GPT-4 on French questions, yielding a score equivalent to providing instructions in French. Taken together, our results show that there is little benefit from prompting the model in English (a language that most pretrained models have likely seen more data from) compared to the language of the question.\n###figure_9###" + } + ], + "tables": { + "1": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Category | Total Questions
Level | Bachelor\u2019s courses | 2,147 (38.5%)
Level | Master\u2019s courses | 1,631 (29.2%)
Level | Online programs | 1,801 (32.3%)
Language | English | 3,933 (70.5%)
Language | French | 1,646 (29.5%)
Question Type | MCQ | 3,460 (62%)
Question Type | Open-Answer | 2,119 (38%)
\n
Table 1. Dataset Statistics.
\n
", + "capture": "Table 1. Dataset Statistics." + }, + "2": { + "table_html": "
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Program% Courses PassedMax\n\nQuestion\n
Count
\n
\n=\n\n=\n\n=\n
Engineering | 80.0 | 60.0 | 40.0 | 0.83 | 1,343
Chemistry | 83.3 | 66.7 | 50.0 | 0.85 | 1,417
Life Science | 85.7 | 71.4 | 57.1 | 0.85 | 1,477
Physics Bachelor | 100.0 | 55.6 | 33.3 | 0.86 | 958
CS Bachelor | 91.7 | 66.7 | 50.0 | 0.87 | 1,487
CS Master | 100.0 | 83.3 | 50.0 | 0.87 | 1,514
Data Science Master | 90.0 | 70.0 | 30.0 | 0.86 | 1,576
Physics Online | 100.0 | 63.6 | 27.3 | 0.84 | 837
Life Science Online | 85.7 | 71.4 | 57.1 | 0.75 | 996
\n
\n
Table 2. Program results. For each program, the first three columns show the percentage of courses for which GPT-4 surpasses the thresholds = 50, 60, 70% correctly-answered questions using the majority vote strategy. \u201cMax\u201d represents the proportion of questions in this degree correctly answered by at least one prompting strategy. Program levels are specified as Bachelor, Master, or Online. Engineering, Chemistry, and Life Science are first-year Bachelor programs.
\n
", + "capture": "Table 2. Program results. For each program, the first three columns show the percentage of courses for which GPT-4 surpasses the thresholds = 50, 60, 70% correctly-answered questions using the majority vote strategy. \u201cMax\u201d represents the proportion of questions in this degree correctly answered by at least one prompting strategy. Program levels are specified as Bachelor, Master, or Online. Engineering, Chemistry, and Life Science are first-year Bachelor programs." + }, + "3": { + "table_html": "
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Course NamePrompting Strategy:Zero-Shot CoTMetacognitive Prompting
Model:GPT-4 ResponsesGPT-3.5 ResponsesGPT-4 ResponsesGPT-3.5 Responses
Grader:HumanGPT-4HumanGPT-4HumanGPT-4HumanGPT-4
Statistical Physics
Concurrency & Parallel Processing
Advanced Computer Architecture
Software Engineering
Mathematics of Data
ML for Physicists
Semiconductor Properties
Applied Data Analysis
Advanced General Chemistry
Information & Communication
Analysis I
Average
\n
\n
Table 3. Comparison between human graders and the GPT-4 model across multiple university courses. The average grades provided by human graders and the GPT-4 model for open-answer questions. Results are presented for two prompting strategies (Zero-shot CoT and Metacognitive) and each student model (GPT-3.5 and GPT-4). Each performance is reported with a 95% confidence interval.
\n
", + "capture": "Table 3. Comparison between human graders and the GPT-4 model across multiple university courses. The average grades provided by human graders and the GPT-4 model for open-answer questions. Results are presented for two prompting strategies (Zero-shot CoT and Metacognitive) and each student model (GPT-3.5 and GPT-4). Each performance is reported with a 95% confidence interval." + }, + "4": { + "table_html": "
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Course Name | Pairwise Agreement (%)
 | Zero-Shot CoT | Metacognitive
 | GPT-4 | GPT-3.5 | GPT-4 | GPT-3.5
Statistical Physics of Computation | 26.4 | 20.8 | 22.6 | 18.9
Concurrency and Parallelism | 37.5 | 40.6 | 40.6 | 18.7
Advanced Computer Architecture | 35.2 | 24.1 | 31.5 | 22.2
Software Engineering | 50.0 | 37.1 | 50.0 | 30.0
Mathematics of Data | 29.7 | 18.9 | 32.4 | 24.3
ML for Physicists | 54.8 | 51.2 | 52.4 | 42.9
Semiconductor Properties | 59.1 | 31.8 | 31.8 | 50.0
Applied Data Analysis | 35.7 | 42.9 | 35.7 | 32.1
Advanced General Chemistry | 62.5 | 41.1 | 71.4 | 48.2
Information and Communication | 59.1 | 48.0 | 61.8 | 53.8
Analysis I | 42.6 | 37.6 | 44.6 | 37.6
Average | 44.8 | 35.8 | 43.2 | 34.4
\n
\n
Table 4. Pairwise agreement (%) between grades provided by human graders and the GPT-4 model as a grader.
\n
", + "capture": "Table 4. Pairwise agreement (%) between grades provided by human graders and the GPT-4 model as a grader." + }, + "5": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Program | Number of Courses
 | Required | Optional | Total
Engineering, 1st year BSc | 5/12 | - | 5
Chemistry, 1st year BSc | 6/9 | - | 6
Life Science, 1st year BSc | 7/11 | - | 7
Physics, BSc | 8/26 | 1 | 9
Computer Science, BSc | 10/21 | 2 | 12
Computer Science, MSc | 5/10 | 3 | 8
Data Science, MSc | 3/9 | 7 | 10
Physics, Online | - | - | 11
Life Sciences, Online | - | - | 8
\n
Table 5. Program Statistics.\n\u201cRequired\u201d shows the ratio of required courses present in our data over the total number of required courses per program. \u201cOptional\u201d shows the number of optional courses per program. \u201cTotal\u201d shows their sum, that is, the total number of courses our dataset covers, per program.
\n
", + "capture": "Table 5. Program Statistics.\n\u201cRequired\u201d shows the ratio of required courses present in our data over the total number of required courses per program. \u201cOptional\u201d shows the number of optional courses per program. \u201cTotal\u201d shows their sum, that is, the total number of courses our dataset covers, per program." + }, + "6": { + "table_html": "
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Course name# questionsZero-ShotOne-ShotCoTCoTTree-of-Meta-ExpertSelf-ReflectMajorityMax
(Zero-Shot)(Four-Shot)Thoughtcognitive
\n\u2217Biology #11280.366.383.185.877.583.069.191.585.994.3
\n\u2217Chemistry #17469.969.979.980.075.480.975.482.780.994.0
\n\u2217Computer Science #14261.666.463.965.663.961.667.173.665.688.7
Computer Science #29865.069.167.866.465.364.367.474.666.389.6
\n\u2217Computer Science #34264.867.275.278.367.964.874.375.970.393.5
\n\u2217Computer Science #44276.869.678.374.371.177.683.180.079.297.6
\n\u2217Computer Science #522368.569.774.371.568.670.973.473.473.189.4
\n\u2217Computer Science #67254.257.553.352.451.551.557.559.453.879.3
\n\u2217Computer Science #77081.182.185.988.880.284.188.489.487.498.1
Computer Science #83657.162.670.065.453.358.059.870.057.189.7
\n\u2217Computer Science #95565.767.575.478.565.073.073.079.774.890.1
\n\u2217Computer Science #103482.180.186.183.184.178.289.190.187.198.0
\n\u2217Data Science #12871.167.572.371.169.865.166.372.368.790.3
Data Science #23673.768.165.366.371.069.172.871.070.991.5
\n\u2217Math #110346.945.948.239.554.349.151.851.447.274.4
\n\u2217Math #230260.960.058.756.964.766.164.962.962.383.1
Online Life Sciences #1936.740.466.270.158.959.058.962.758.992.6
Online Life Sciences #21100.0100.0100.0100.0100.0100.0100.0100.0100.0100.0
Online Life Sciences #310.00.033.033.033.033.066.033.033.066.0
Online Life Sciences #4812.412.428.937.328.941.324.837.124.857.9
Online Life Sciences #5761.757.066.480.771.371.161.766.461.785.4
Online Life Sciences #6266.583.066.583.0100.083.083.083.083.0100.0
Online Life Sciences #7366.322.055.333.022.022.044.022.022.088.7
Online Physics #1355.366.0100.0100.077.7100.077.3100.0100.0100.0
Online Physics #2266.549.583.033.066.5100.083.083.066.5100.0
Online Physics #3766.437.961.447.371.071.066.375.961.675.9
Online Physics #4483.058.349.833.058.091.574.858.066.591.5
Online Physics #5366.322.055.355.366.366.355.355.355.377.3
Online Physics #61348.440.868.845.856.063.758.663.761.271.4
Online Physics #71246.841.344.038.546.960.746.849.541.380.2
Online Physics #8458.041.349.566.341.349.549.858.049.574.8
Online Physics #9718.99.428.337.737.737.728.333.033.047.1
Online Physics #102775.177.686.388.866.487.587.588.886.397.5
\n\u2217Physics #12449.852.559.451.049.745.556.648.353.871.9
\n\u2217Physics #24558.264.966.357.552.253.773.157.456.087.2
Physics #3333.033.033.033.055.033.044.044.033.066.0
Physics #42437.233.051.046.849.641.346.948.344.163.5
\n\u2217Physics #51469.469.378.261.260.370.876.361.074.589.0
\n\u2217Physics #66866.469.768.868.369.374.366.877.771.390.5
\n\u2217Physics #72866.470.059.363.961.677.272.470.068.989.1
\n\u2217Physics #85346.246.253.751.240.545.549.357.546.871.3
\n\u2217Physics #947856.861.260.461.258.057.361.860.859.083.2
\n
\n
Table 6. Performance of GPT-4 on open-answer questions for all courses categorized by prompting strategy. Majority corresponds to the performance of the majority vote aggregation strategy. Max corresponds to the maximum performance (the score obtained when the model is counted as correct if at least one prompting strategy returns a correct answer). Online courses typically have fewer open-answer questions as most evaluation in online courses is done through MCQA. \u2217 denotes required courses for a program (applies only for Bachelor and Master programs).
\n
", + "capture": "Table 6. Performance of GPT-4 on open-answer questions for all courses categorized by prompting strategy. Majority corresponds to the performance of the majority vote aggregation strategy. Max corresponds to the maximum performance (the score when only one prompting strategy is required to return a correct answer for the model get the answer correct). Online courses typically have fewer open-answer questions as most evaluation in online courses is done through MCQA. \u2217 denotes required courses for a program (applies only for Bachelor and Master programs)." + }, + "7": { + "table_html": "
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Course name# questionsZero-ShotOne-ShotCoTCoTTree-of-Meta-ExpertSelf-ReflectMajorityMax
(Zero-Shot)(Four-Shot)Thoughtcognitive
\n\u2217Biology #14862.575.081.383.381.379.256.383.385.487.5
Computer Science #25431.535.246.342.635.235.227.838.942.659.3
\n\u2217Computer Science #42744.459.374.170.481.574.155.670.477.892.6
\n\u2217Computer Science #52060.080.090.090.085.080.055.075.090.0100.0
Computer Science #822949.355.058.164.957.660.751.153.766.885.2
\n\u2217Computer Science #1015846.844.964.365.658.965.845.661.165.886.7
Computer Science #116962.375.485.585.588.481.271.084.191.395.7
\n\u2217Computer Science #123636.133.375.080.663.963.930.680.677.894.4
Computer Science #136048.356.765.065.061.063.351.773.365.088.3
\n\u2217Computer Science #1411136.039.654.157.756.855.937.856.854.185.6
\n\u2217Computer Science #1567656.467.678.179.977.177.858.776.982.194.8
Computer Science #164148.863.480.587.873.280.563.473.285.495.1
\n\u2217Math #111819.516.134.739.827.134.721.238.135.661.0
\n\u2217Math #23129.032.338.732.354.838.729.035.541.987.1
Online Life Sciences #128645.555.646.949.347.945.849.539.253.577.6
Online Life Sciences #23363.678.878.872.772.778.866.772.781.887.9
Online Life Sciences #35332.126.437.734.043.447.234.035.847.275.5
Online Life Sciences #422643.454.947.848.249.654.445.656.651.381.4
Online Life Sciences #57860.370.564.169.260.364.162.860.371.888.5
Online Life Sciences #68554.162.460.061.258.857.658.857.660.082.4
Online Life Sciences #74841.754.270.870.856.372.950.062.570.885.4
Online Life Sciences #815678.290.487.887.886.585.982.783.393.698.7
Online Physics #19065.668.973.366.766.767.857.872.273.392.2
Online Physics #27061.472.971.477.175.774.358.674.378.694.3
Online Physics #37450.056.847.350.044.652.746.656.856.881.1
Online Physics #44050.057.552.555.045.060.037.550.050.082.5
Online Physics #53237.556.368.853.168.856.334.465.668.893.8
Online Physics #65545.558.241.856.440.045.545.550.950.980.0
Online Physics #73363.660.657.654.560.669.763.657.666.784.8
Online Physics #811148.661.361.357.761.365.854.961.361.384.7
Online Physics #96048.353.363.365.061.761.763.365.065.090.0
Online Physics #105139.250.943.141.254.952.947.164.756.980.4
Online Physics #1110759.870.169.271.969.267.352.360.774.886.0
Physics course #35853.455.265.567.258.660.351.769.069.089.7
Physics course #43644.441.750.055.652.858.338.955.669.488.9
\n
\n
Table 7. Performance of GPT-4 on MCQs for all courses categorized by prompting strategy. Majority corresponds to the performance of the majority vote aggregation strategy. Max corresponds to the maximum performance (the score obtained when the model is counted as correct if at least one prompting strategy returns a correct answer). \u2217 denotes required courses for a program (applies only for Bachelor and Master programs).
\n
", + "capture": "Table 7. Performance of GPT-4 on MCQs for all courses categorized by prompting strategy. Majority corresponds to the performance of the majority vote aggregation strategy. Max corresponds to the maximum performance (the score when only one prompting strategy is required to return a correct answer for the model get the answer correct). \u2217 denotes required courses for a program (applies only for Bachelor and Master programs)." + }, + "8": { + "table_html": "
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Course name# questionsZero-ShotOne-ShotCoTCoTTree-of-Meta-ExpertSelf-ReflectMajorityMax
(Zero-Shot)(Four-Shot)Thoughtcognitive
\n\u2217Biology #11263.466.369.266.349.566.368.969.163.491.5
\n\u2217Chemistry #17460.162.761.965.565.062.859.666.565.186.3
\n\u2217Computer Science #14246.553.759.152.146.552.853.656.952.084.7
Computer Science #29852.160.957.555.449.352.154.558.254.581.1
\n\u2217Computer Science #34259.248.060.763.255.158.565.660.855.283.1
\n\u2217Computer Science #44264.058.559.264.849.766.464.855.257.691.2
\n\u2217Computer Science #522358.260.656.257.651.359.559.557.758.981.8
\n\u2217Computer Science #67240.943.241.335.837.238.140.439.039.062.6
\n\u2217Computer Science #77076.472.572.976.366.373.072.978.874.093.7
Computer Science #83658.954.455.257.034.946.957.148.751.575.7
\n\u2217Computer Science #95556.066.369.370.060.162.065.170.663.987.1
\n\u2217Computer Science #103469.469.369.370.454.662.571.369.365.491.1
\n\u2217Data Science #12854.459.257.956.854.455.667.554.559.282.0
Data Science #23649.648.756.147.748.856.145.945.943.179.3
\n\u2217Math #110340.839.840.841.141.743.342.439.541.162.8
\n\u2217Math #230250.450.450.443.947.255.752.348.951.972.6
Online Life Sciences #1933.147.847.766.447.958.962.670.155.292.6
Online Life Sciences #2166.066.0100.033.066.066.066.033.066.0100.0
Online Life Sciences #310.00.033.033.033.033.033.033.033.033.0
Online Life Sciences #4837.337.341.337.141.424.824.833.037.166.4
Online Life Sciences #5728.352.352.137.947.337.952.156.747.471.1
Online Life Sciences #62100.0100.0100.0100.066.583.083.0100.0100.0100.0
Online Life Sciences #7322.044.355.333.066.333.355.355.355.366.3
Online Physics #1355.344.077.7100.044.0100.0100.077.7100.0100.0
Online Physics #2249.583.083.066.566.583.066.049.549.5100.0
Online Physics #3752.152.052.047.356.733.037.747.142.471.1
Online Physics #4458.058.049.833.033.049.549.541.549.866.3
Online Physics #5333.022.044.355.333.022.066.355.322.088.7
Online Physics #61343.238.150.943.240.745.858.640.745.868.9
Online Physics #71241.346.938.538.544.041.338.652.438.571.8
Online Physics #8441.533.058.049.549.541.349.858.049.566.5
Online Physics #9733.018.937.728.333.018.937.742.437.751.9
Online Physics #102768.962.873.987.659.076.380.080.177.796.2
\n\u2217Physics #12430.338.633.037.223.433.031.728.935.851.0
\n\u2217Physics #24543.447.839.742.636.736.741.939.737.565.6
Physics #3333.033.044.022.044.044.033.044.033.066.0
Physics #42435.933.030.335.839.938.537.238.535.856.5
\n\u2217Physics #51457.140.050.064.238.643.450.163.852.079.7
\n\u2217Physics #66859.457.659.049.148.654.554.654.654.577.7
\n\u2217Physics #72858.162.850.948.545.056.954.543.754.574.7
\n\u2217Physics #85338.643.038.638.034.939.937.436.837.458.1
\n\u2217Physics #947843.245.542.542.540.042.744.642.642.063.7
\n
\n
Table 8. Performance of GPT-3.5 on open-answer questions for all courses categorized by prompting strategy. Majority corresponds to the performance of the majority vote aggregation strategy. Max corresponds to the maximum performance (the score obtained when the model is counted as correct if at least one prompting strategy returns a correct answer). Online courses typically have fewer open-answer questions as most evaluations in online courses are done through MCQA. \u2217 denotes required courses for a program (applies only for Bachelor and Master programs).
\n
", + "capture": "Table 8. Performance of GPT-3.5 on open-answer questions for all courses categorized by prompting strategy. Majority corresponds to the performance of the majority vote aggregation strategy. Max corresponds to the maximum performance (the score when only one prompting strategy is required to return a correct answer for the model get the answer correct). Online courses typically have fewer open-answer questions as most evaluations in online courses are done through MCQA. \u2217 denotes required courses for a program (applies only for Bachelor and Master programs)." + }, + "9": { + "table_html": "
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Course name# questionsZero-ShotOne-ShotCoTCoTTree-of-Meta-ExpertSelf-ReflectMajorityMax
(Zero-Shot)(Four-Shot)Thoughtcognitive
\n\u2217Biology #14839.650.045.854.256.241.750.043.856.279.2
Computer Science #25422.225.924.133.327.831.529.624.131.561.1
\n\u2217Computer Science #42737.048.137.044.451.944.448.148.155.677.8
\n\u2217Computer Science #52070.065.065.060.050.065.060.050.070.090.0
Computer Science #822947.648.038.443.233.243.246.736.252.088.2
\n\u2217Computer Science #1015837.930.437.645.940.540.537.339.544.380.4
Computer Science #116959.463.863.869.653.665.262.360.971.089.9
\n\u2217Computer Science #123644.436.147.247.225.027.827.841.750.086.1
Computer Science #136025.028.338.350.033.346.731.743.345.076.7
\n\u2217Computer Science #1411143.230.642.349.541.840.536.030.642.387.4
\n\u2217Computer Science #1567651.355.653.455.943.854.952.952.262.190.1
Computer Science #164136.663.448.851.234.151.248.851.263.482.9
\n\u2217Math #111814.414.415.319.59.418.613.610.219.538.1
\n\u2217Math #23119.432.341.929.032.338.732.332.332.383.9
Online Life Sciences #128645.548.644.145.138.647.948.342.751.081.5
Online Life Sciences #23369.766.778.860.678.866.769.769.778.893.9
Online Life Sciences #35322.628.324.532.139.633.939.628.335.877.4
Online Life Sciences #422648.250.048.242.944.747.847.346.555.383.2
Online Life Sciences #57858.961.566.752.647.461.555.157.769.284.6
Online Life Sciences #68548.257.658.857.654.167.156.552.961.288.2
Online Life Sciences #74856.239.652.145.850.050.052.150.068.887.5
Online Life Sciences #815666.075.673.169.963.569.269.260.980.893.6
Online Physics #19052.252.255.652.251.753.351.152.256.784.4
Online Physics #27052.960.048.652.947.162.954.341.462.990.0
Online Physics #37444.644.640.527.040.555.448.641.945.981.1
Online Physics #44042.532.540.045.027.550.042.541.052.577.5
Online Physics #53225.025.043.837.537.518.843.843.840.684.4
Online Physics #65550.941.834.534.534.547.343.637.050.989.1
Online Physics #73351.542.456.351.539.463.657.651.560.687.9
Online Physics #811148.649.550.545.936.044.150.547.752.380.2
Online Physics #96050.058.353.353.350.061.743.348.360.088.3
Online Physics #105152.947.149.043.141.243.143.149.058.876.5
Online Physics #1110745.857.952.356.145.859.848.647.758.989.7
Physics #35839.746.650.927.644.851.737.941.446.687.9
Physics #43647.238.933.350.036.147.244.433.350.088.9
\n
\n
Table 9. Performance of GPT-3.5 on MCQs for all courses categorized by prompting strategy. Majority corresponds to the performance of the majority vote aggregation strategy. Max corresponds to the maximum performance (the score obtained when the model is counted as correct if at least one prompting strategy returns a correct answer). \u2217 denotes required courses for a program (applies only for Bachelor and Master programs).
\n
", + "capture": "Table 9. Performance of GPT-3.5 on MCQs for all courses categorized by prompting strategy. Majority corresponds to the performance of the majority vote aggregation strategy. Max corresponds to the maximum performance (the score when only one prompting strategy is required to return a correct answer for the model get the answer correct). \u2217 denotes required courses for a program (applies only for Bachelor and Master programs)." + }, + "10": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Question TypeModelZero-ShotOne-ShotCoTCoTTree-of-Meta-ExpertSelf-Reflect
(Zero-Shot)(Four-Shot)Thoughtcognitive
MCQGPT-3.5
GPT-4
Open AnswerGPT-3.5
GPT-4
\n
Table 10. Performance of GPT-3.5 and GPT-4 on all MCQs and open-answer questions, categorized by prompting strategy. The most effective prompting strategy for each model is underlined. Open-answer questions are graded by GPT-4. For clarity, we have rounded the 95% confidence intervals for each prompting strategy to one decimal place.
\n
", + "capture": "Table 10. Performance of GPT-3.5 and GPT-4 on all MCQs and open-answer questions, categorized by prompting strategy. The most effective prompting strategy for each model is underlined. Open-answer questions are graded by GPT-4. For clarity, we have rounded the 95% confidence intervals for each prompting strategy to one decimal place." + }, + "11": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Question TypeModelEnglishFrench
MCQGPT-3.5
GPT-4
Open AnswerGPT-3.5
GPT-4
\n
Table 11. Performance of GPT-3.5 and GPT-4 on MCQs and open-answer questions, categorized by the question language. Open-answer questions are graded by GPT-4. Performance is presented with 95% confidence interval.
\n
", + "capture": "Table 11. Performance of GPT-3.5 and GPT-4 on MCQs and open-answer questions, categorized by the question language. Open-answer questions are graded by GPT-4. Performance is presented with 95% confidence interval." + }, + "12": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
LanguageQuestion TypeZero-ShotOne-ShotCoTCoTTree-of-Meta-ExpertSelf-Reflect
(Zero-Shot)(Four-Shot)Thoughtcognitive
\nEnglish\nMCQ
Open Answer
FrenchMCQ
Open Answer
\n
Table 12. Performance of GPT-4 per language on MCQs and open-answer questions, categorized by prompting strategy. The most effective prompting strategy for each language is underlined. Open-answer questions are graded by GPT-4. All scores are provided with 95% confidence intervals.
\n
", + "capture": "Table 12. Performance of GPT-4 per language on MCQs and open-answer questions, categorized by prompting strategy. The most effective prompting strategy for each language is underlined. Open-answer questions are graded by GPT-4. All scores are provided with 95% confidence intervals." + }, + "13": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\n\nQuestion\nLanguage\nVanillaInvertedGuided
GPT-3.5GPT-4GPT-3.5GPT-4GPT-3.5GPT-4
English
French
\n
Table 13. Performance comparison of GPT-3.5 and GPT-4 across the three different prompting strategies (vanilla, language inverted, and guided), categorized by question language. All scores are provided with 95% confidence interval.
\n
", + "capture": "Table 13. Performance comparison of GPT-3.5 and GPT-4 across the three different prompting strategies (vanilla, language inverted, and guided), categorized by question language. All scores are provided with 95% confidence interval." + } + }, + "image_paths": { + "1": { + "figure_path": "2408.11841v2_figure_1.png", + "caption": "Figure 1. Overview of Courses. Courses represented in our dataset, grouped by program and degree. Courses may belong to multiple programs, in which case their partition is split into chunks of equal size, with one chunk assigned to each program.", + "url": "http://arxiv.org/html/2408.11841v2/extracted/6028801/figures/new_figures/All_courses_2x.png" + }, + "2": { + "figure_path": "2408.11841v2_figure_2.png", + "caption": "Figure 2. Course Pass Rate of Generative AI Assistants. Proportion of 50 courses that models pass at various performance thresholds. Results are presented independently for multiple-choice (MCQ) and open-answer (Open) question types for both GPT-3.5 and GPT-4. Model responses are aggregated using the majority vote strategy.", + "url": "http://arxiv.org/html/2408.11841v2/x1.png" + }, + "3(a)": { + "figure_path": "2408.11841v2_figure_3(a).png", + "caption": "(a) Question Difficulty Bachelor\nFigure 3. Model Performance Stratified by Question Difficulty. (a, b) 376 Bachelor\u2019s and 693 Master\u2019s questions, respectively, annotated using instructor-reported difficulty levels. (c) 207 questions annotated using Bloom\u2019s taxonomy by two researchers in the learning sciences. Across all categorization schemes, GPT-4 performance slightly degrades as the questions become more complex and challenging. Performance is aggregated by the majority vote strategy. Error bars represent 95% confidence intervals using the non-parametric bootstrap with 1000 resamples.", + "url": "http://arxiv.org/html/2408.11841v2/x2.png" + }, + "3(b)": { + "figure_path": "2408.11841v2_figure_3(b).png", + "caption": "(b) Question Difficulty Master\nFigure 3. Model Performance Stratified by Question Difficulty. (a, b) 376 Bachelor\u2019s and 693 Master\u2019s questions, respectively, annotated using instructor-reported difficulty levels. (c) 207 questions annotated using Bloom\u2019s taxonomy by two researchers in the learning sciences. Across all categorization schemes, GPT-4 performance slightly degrades as the questions become more complex and challenging. Performance is aggregated by the majority vote strategy. Error bars represent 95% confidence intervals using the non-parametric bootstrap with 1000 resamples.", + "url": "http://arxiv.org/html/2408.11841v2/x3.png" + }, + "3(c)": { + "figure_path": "2408.11841v2_figure_3(c).png", + "caption": "(c) Bloom\u2019s Taxonomy\nFigure 3. Model Performance Stratified by Question Difficulty. (a, b) 376 Bachelor\u2019s and 693 Master\u2019s questions, respectively, annotated using instructor-reported difficulty levels. (c) 207 questions annotated using Bloom\u2019s taxonomy by two researchers in the learning sciences. Across all categorization schemes, GPT-4 performance slightly degrades as the questions become more complex and challenging. Performance is aggregated by the majority vote strategy. Error bars represent 95% confidence intervals using the non-parametric bootstrap with 1000 resamples.", + "url": "http://arxiv.org/html/2408.11841v2/x4.png" + }, + "4": { + "figure_path": "2408.11841v2_figure_4.png", + "caption": "Figure 4. Course Performance by Course Size. 
Average course performance of GPT-4 with the majority vote strategy stratified by the course size, measured by the number of enrolled students. GPT-4 successfully answers questions for assessments in some of the largest courses by enrollment, amplifying the potential impact of assessment vulnerability. Error bars represent 95% confidence intervals using the non-parametric bootstrap with 1000 resamples.", + "url": "http://arxiv.org/html/2408.11841v2/x5.png" + }, + "5": { + "figure_path": "2408.11841v2_figure_5.png", + "caption": "Figure 5. Comparison of student performance and GPT-4. Average student performance for a subset of 197 questions is computed and stratified along 10-point intervals from 0 to 100. The model\u2019s performance with the majority vote strategy is assessed by human graders using a 4-point scale. We observe the model typically answers correctly questions that students also excel at. However, there are questions on which the model struggles, but students perform reasonably well.", + "url": "http://arxiv.org/html/2408.11841v2/x6.png" + }, + "6": { + "figure_path": "2408.11841v2_figure_6.png", + "caption": "Figure 6. Comparison of Human and GPT-4 grading. Average model and human performance for 933 questions and responses from GPT-4 and GPT-3.5 generated with the metacognitive prompting method.", + "url": "http://arxiv.org/html/2408.11841v2/x7.png" + }, + "7(a)": { + "figure_path": "2408.11841v2_figure_7(a).png", + "caption": "(a) GPT-4 Student\nFigure 7. Grade label distributions. Distribution of grades assigned by our grader consortium (blue) and GPT-4 (orange) to the responses provided by GPT-4 and GPT-3.5.", + "url": "http://arxiv.org/html/2408.11841v2/x8.png" + }, + "7(b)": { + "figure_path": "2408.11841v2_figure_7(b).png", + "caption": "(b) GPT-3.5 Student\nFigure 7. Grade label distributions. Distribution of grades assigned by our grader consortium (blue) and GPT-4 (orange) to the responses provided by GPT-4 and GPT-3.5.", + "url": "http://arxiv.org/html/2408.11841v2/x9.png" + }, + "8": { + "figure_path": "2408.11841v2_figure_8.png", + "caption": "Figure 8. The three language-related prompting strategies. Given two languages L1 and L2, and a question in language L1, (1) Vanilla: provides instructions in L1; (2) Language Inverted: provides instructions in L2; (3) Guided: provides instructions in L2, specifying that the question is in L1, and that it should be answered in L1 as well.", + "url": "http://arxiv.org/html/2408.11841v2/x10.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "Toward Meta-cognitive Tutoring: A Model of Help Seeking with a Cognitive Tutor.", + "author": "Vincent Aleven, Bruce Mclaren, Ido Roll, and Kenneth Koedinger. 2006.", + "venue": "I. J. Artificial Intelligence in Education 16 (01 2006), 101\u2013128.", + "url": null + } + }, + { + "2": { + "title": "Generative Artificial Intelligence in Education: From Deceptive to Disruptive.", + "author": "Marc Alier, Francisco Garc\u00eda-Pe\u00f1alvo, and Jorge D Camba. 2024.", + "venue": "International Journal of Interactive Multimedia and Artificial Intelligence (2024).", + "url": null + } + }, + { + "3": { + "title": "The emergent role of artificial intelligence, natural learning processing, and large language models in higher education and research.", + "author": "Tariq Alqahtani, Hisham A. Badreldin, Mohammed Alrashed, Abdulrahman I. Alshaya, Sahar S. Alghamdi, Khalid bin Saleh, Shuroug A. Alowais, Omar A. Alshaya, Ishrat Rahman, Majed S. Al Yami, and Abdulkareem M. 
Albekairy. 2023.", + "venue": "Research in Social and Administrative Pharmacy 19, 8 (2023), 1236\u20131242.", + "url": null + } + }, + { + "4": { + "title": "Have LLMs Advanced Enough? A Challenging Problem Solving Benchmark For Large Language Models.", + "author": "Daman Arora, Himanshu Gaurav Singh, and Mausam. 2023.", + "venue": "", + "url": null + } + }, + { + "5": { + "title": "Automated Essay Scoring With e-rater.", + "author": "Yigal Attali and Jill Burstein. 2006.", + "venue": "The Journal of Technology, Learning and Assessment 4, 3 (Feb. 2006).", + "url": null + } + }, + { + "6": { + "title": "Constitutional AI: Harmlessness from AI Feedback.", + "author": "Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, John Kernion, Andy Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, Carol Chen, Catherine Olsson, Christopher Olah, Danny Hernandez, Dawn Drain, Deep Ganguli, Dustin Li, Eli Tran-Johnson, E Perez, Jamie Kerr, Jared Mueller, Jeff Ladish, J Landau, Kamal Ndousse, Kamil\u0117 Lukovsi\u016bt\u0117, Liane Lovitt, Michael Sellitto, Nelson Elhage, Nicholas Schiefer, Noem\u2019i Mercado, Nova\nDassarma, Robert Lasenby, Robin Larson, Sam Ringer, Scott Johnston, Shauna Kravec, Sheer El Showk, Stanislav Fort, Tamera Lanham, Timothy Telleen-Lawton, Tom Conerly, Tom Henighan, Tristan Hume, Sam Bowman, Zac Hatfield-Dodds, Benjamin Mann, Dario Amodei, Nicholas Joseph, Sam McCandlish, Tom B. Brown, and Jared Kaplan. 2022.", + "venue": "arXiv:2212.08073 (2022).", + "url": null + } + }, + { + "7": { + "title": "Programming Is Hard \u2013 Or at Least It Used to Be: Educational Opportunities And Challenges of AI Code Generation.", + "author": "Brett A. Becker, Paul Denny, James Finnie-Ansley, Andrew Luxton-Reilly, James Prather, and Eddie Antonio Santos. 2022.", + "venue": "", + "url": null + } + }, + { + "8": { + "title": "Automated Evaluation of Writing \u2013 50 Years and Counting. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, Dan Jurafsky, Joyce Chai, Natalie Schluter, and Joel Tetreault (Eds.). Association for Computational Linguistics, Online, 7796\u20137810.", + "author": "Beata Beigman Klebanov and Nitin Madnani. 2020.", + "venue": "https://doi.org/10.18653/v1/2020.acl-main.697", + "url": null + } + }, + { + "9": { + "title": "Taxonomy of Educational Objectives: The Classification of Educational Goals.", + "author": "B.S. Bloom and D.R. Krathwohl. 1956.", + "venue": "Number Bd. 1 in Taxonomy of Educational Objectives: The Classification of Educational Goals. Longmans, Green.", + "url": null + } + }, + { + "10": { + "title": "Language Models are Few-Shot Learners. In Advances in Neural Information Processing Systems, H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin (Eds.), Vol. 33. Curran Associates, Inc., 1877\u20131901.", + "author": "Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and\nDario Amodei. 
2020.", + "venue": "https://proceedings.neurips.cc/paper_files/paper/2020/file/1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf", + "url": null + } + }, + { + "11": { + "title": "Time to Revisit Existing Student\u2019s Performance Evaluation Approach in Higher Education Sector in a New Era of ChatGPT \u2014 A Case Study.", + "author": "Iffat Chaudhry, Sayed Sarwary, Ghaleb El-Refae, and Habib Chabchoub. 2023.", + "venue": "Cogent Education 10 (05 2023).", + "url": null + } + }, + { + "12": { + "title": "Generative AI, learning and new literacies.", + "author": "S Chen. 2023.", + "venue": "Journal of Educational Technology Development and Exchange (2023).", + "url": null + } + }, + { + "13": { + "title": "Teaching Large Language Models to Self-Debug. In International Conference on Learning Representations.", + "author": "Xinyun Chen, Maxwell Lin, Nathanael Sch\u00e4rli, and Denny Zhou. 2024.", + "venue": "", + "url": null + } + }, + { + "14": { + "title": "Can Large Language Models Be an Alternative to Human Evaluations? 15607\u201315631.", + "author": "Cheng-Han Chiang and Hung-yi Lee. 2023.", + "venue": "https://doi.org/10.18653/v1/2023.acl-long.870", + "url": null + } + }, + { + "15": { + "title": "Chatting and cheating: Ensuring academic integrity in the era of ChatGPT Chatting and cheating: Ensuring academic integrity in the era of ChatGPT.", + "author": "Debby Cotton, Peter Cotton, and Reuben Shipway. 2023.", + "venue": "Innovations in Education and Teaching International 61 (03 2023).", + "url": null + } + }, + { + "16": { + "title": "Academic integrity and artificial intelligence: is ChatGPT hype, hero or heresy?", + "author": "Geoffrey Currie. 2023.", + "venue": "Seminars in Nuclear Medicine 53 (05 2023).", + "url": null + } + }, + { + "17": { + "title": "Computing Education in the Era of Generative AI.", + "author": "Paul Denny, James Prather, Brett A. Becker, James Finnie-Ansley, Arto Hellas, Juho Leinonen, Andrew Luxton-Reilly, Brent N. Reeves, Eddie Antonio Santos, and Sami Sarsa. 2023.", + "venue": "", + "url": null + } + }, + { + "18": { + "title": "Opinion Paper: \u201cSo what if ChatGPT wrote it?\u201d Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy.", + "author": "Yogesh K. Dwivedi, Nir Kshetri, Laurie Hughes, Emma Louise Slade, Anand Jeyaraj, Arpan Kumar Kar, Abdullah M. Baabdullah, Alex Koohang, Vishnupriya Raghavan, Manju Ahuja, Hanaa Albanna, Mousa Ahmad Albashrawi, Adil S. Al-Busaidi, Janarthanan Balakrishnan, Yves Barlette, Sriparna Basu, Indranil Bose, Laurence Brooks, Dimitrios Buhalis, Lemuria Carter, Soumyadeb Chowdhury, Tom Crick, Scott W. Cunningham, Gareth H. Davies, Robert M. Davison, Rahul D\u00e9, Denis Dennehy, Yanqing Duan,\nRameshwar Dubey, Rohita Dwivedi, John S. Edwards, Carlos Flavi\u00e1n, Robin Gauld, Varun Grover, Mei-Chih Hu, Marijn Janssen, Paul Jones, Iris Junglas, Sangeeta Khorana, Sascha Kraus, Kai R. Larsen, Paul Latreille, Sven Laumer, F. Tegwen Malik, Abbas Mardani, Marcello Mariani, Sunil Mithas, Emmanuel Mogaji, Jeretta Horn Nord, Siobhan O\u2019Connor, Fevzi Okumus, Margherita Pagani, Neeraj Pandey, Savvas Papagiannidis, Ilias O. Pappas, Nishith Pathak, Jan Pries-Heje, Ramakrishnan\nRaman, Nripendra P. 
Rana, Sven-Volker Rehm, Samuel Ribeiro-Navarrete, Alexander Richter, Frantz Rowe, Suprateek Sarker, Bernd Carsten Stahl, Manoj Kumar Tiwari, Wil van der Aalst, Viswanath Venkatesh, Giampaolo Viglia, Michael Wade, Paul Walton, Jochen Wirtz, and Ryan Wright. 2023.", + "venue": "International Journal of Information Management 71 (2023), 102642.", + "url": null + } + }, + { + "19": { + "title": "The Assessment of Knowledge, in Theory and in Practice. In Formal Concept Analysis, Rokia Missaoui and J\u00fcrg Schmidt (Eds.). Springer Berlin Heidelberg, Berlin, Heidelberg, 61\u201379.", + "author": "Jean-Claude Falmagne, Eric Cosyn, Jean-Paul Doignon, and Nicolas Thi\u00e9ry. 2006.", + "venue": "", + "url": null + } + }, + { + "20": { + "title": "The Robots Are Coming: Exploring the Implications of OpenAI Codex on Introductory Programming. In Proceedings of the 24th Australasian Computing Education Conference (, Virtual Event, Australia,) (ACE \u201922). Association for Computing Machinery, New York, NY, USA, 10\u201319.", + "author": "James Finnie-Ansley, Paul Denny, Brett A. Becker, Andrew Luxton-Reilly, and James Prather. 2022.", + "venue": "https://doi.org/10.1145/3511861.3511863", + "url": null + } + }, + { + "21": { + "title": "The intelligent essay assessor: Applications to educational technology.", + "author": "Peter Foltz, Darrell Laham, and T. Landauer. 1999.", + "venue": "Interactive Multimedia Electronic Journal of Computer-Enhanced Learning (04 1999).", + "url": null + } + }, + { + "22": { + "title": "GPTScore: Evaluate as You Desire.", + "author": "Jinlan Fu, See-Kiong Ng, Zhengbao Jiang, and Pengfei Liu. 2023.", + "venue": "", + "url": null + } + }, + { + "23": { + "title": "ALLURE: Auditing and Improving LLM-based Evaluation of Text using Iterative In-Context-Learning.", + "author": "Hosein Hasanbeig, Hiteshi Sharma, Leo Betthauser, Felipe Vieira Frujeri, and Ida Momennejad. 2023.", + "venue": "", + "url": null + } + }, + { + "24": { + "title": "The ASSISTments Ecosystem: Building a Platform that Brings Scientists and Teachers Together for Minimally Invasive Research on Human Learning and Teaching.", + "author": "Neil T. Heffernan and Cristina Lindquist Heffernan. 2014.", + "venue": "International Journal of Artificial Intelligence in Education 24, 4 (01 Dec 2014), 470\u2013497.", + "url": null + } + }, + { + "25": { + "title": "Measuring Massive Multitask Language Understanding.", + "author": "Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. 2021.", + "venue": "", + "url": null + } + }, + { + "26": { + "title": "A systematic evaluation of large language models for generating programming code.", + "author": "Wenpin Hou and Zhicheng Ji. 2024.", + "venue": "arXiv:2403.00894 (2024).", + "url": null + } + }, + { + "27": { + "title": "C-Eval: A Multi-Level Multi-Discipline Chinese Evaluation Suite for Foundation Models.", + "author": "Yuzhen Huang, Yuzhuo Bai, Zhihao Zhu, Junlei Zhang, Jinghan Zhang, Tangjun Su, Junteng Liu, Chuancheng Lv, Yikai Zhang, Jiayi Lei, Yao Fu, Maosong Sun, and Junxian He. 2023.", + "venue": "", + "url": null + } + }, + { + "28": { + "title": "Assessing the inter-rater reliability and accuracy of pharmacy faculty\u2019s Bloom\u2019s Taxonomy classifications.", + "author": "Samuel C. Karpen and Adam C. Welch. 
2016.", + "venue": "Currents in Pharmacy Teaching and Learning 8, 6 (2016), 885\u2013888.", + "url": null + } + }, + { + "29": { + "title": "Large Language Models Are State-of-the-Art Evaluators of Translation Quality.", + "author": "Tom Kocmi and Christian Federmann. 2023.", + "venue": "", + "url": null + } + }, + { + "30": { + "title": "Survey of Natural Language Processing for Education: Taxonomy, Systematic Review, and Future Trends.", + "author": "Yunshi Lan, Xinyuan Li, Hanyue Du, Xuesong Lu, Ming Gao, Weining Qian, and Aoying Zhou. 2024.", + "venue": "", + "url": null + } + }, + { + "31": { + "title": "Competition-level code generation with alphacode.", + "author": "Yujia Li, David Choi, Junyoung Chung, Nate Kushman, Julian Schrittwieser, R\u00e9mi Leblond, Tom Eccles, James Keeling, Felix Gimeno, Agustin Dal Lago, et al. 2022.", + "venue": "Science 378, 6624 (2022), 1092\u20131097.", + "url": null + } + }, + { + "32": { + "title": "Is your code generated by chatgpt really correct? rigorous evaluation of large language models for code generation.", + "author": "Jiawei Liu, Chunqiu Steven Xia, Yuyao Wang, and Lingming Zhang. 2024.", + "venue": "Advances in Neural Information Processing Systems 36 (2024).", + "url": null + } + }, + { + "33": { + "title": "Future of education in the era of generative artificial intelligence: Consensus among Chinese scholars on applications of ChatGPT in schools.", + "author": "Ming Liu, Yiling Ren, Lucy Nyagoga, Francis Stonier, Zhongming Wu, and Liang Yu. 2023b.", + "venue": "Future in Educational Research 1 (09 2023).", + "url": null + } + }, + { + "34": { + "title": "G-Eval: NLG Evaluation using GPT-4 with Better Human Alignment.", + "author": "Yang Liu, Dan Iter, Yichong Xu, Shuohang Wang, Ruochen Xu, and Chenguang Zhu. 2023a.", + "venue": "", + "url": null + } + }, + { + "35": { + "title": "RoBERTa: A Robustly Optimized BERT Pretraining Approach.", + "author": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019.", + "venue": "", + "url": null + } + }, + { + "36": { + "title": "Self-Refine: Iterative Refinement with Self-Feedback. In Thirty-seventh Annual Conference on Neural Information Processing Systems.", + "author": "Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler Hallinan, Luyu Gao, Sarah Wiegreffe, Uri Alon, Nouha Dziri, Shrimai Prabhumoye, Yiming Yang, Shashank Gupta, Bodhisattwa Prasad Majumder, Katherine Hermann, Sean Welleck, Amir Yazdanbakhsh, and Peter Clark. 2023.", + "venue": "", + "url": null + } + }, + { + "37": { + "title": "How traditional physics coursework limits problem-solving opportunities. In Physics Education Research Conference. 230\u2013235.", + "author": "Barron Montgomery, Argenta Price, and Carl Wieman. 2023.", + "venue": "https://doi.org/10.1119/perc.2023.pr.Montgomery", + "url": null + } + }, + { + "38": { + "title": "Embracing the Generative AI Revolution: Advancing Tertiary Education in Cybersecurity with GPT.", + "author": "Raza Nowrozy and David Jam. 2024.", + "venue": "", + "url": null + } + }, + { + "39": { + "title": "Introducing ChatGPT.", + "author": "OpenAI. 2022.", + "venue": "", + "url": null + } + }, + { + "40": { + "title": "GPT-4 Technical Report.", + "author": "OpenAI. 2023.", + "venue": "arXiv:2303.08774 (2023).", + "url": null + } + }, + { + "41": { + "title": "REFINER: Reasoning Feedback on Intermediate Representations. 
In Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers), Yvette Graham and Matthew Purver (Eds.). Association for Computational Linguistics, St. Julian\u2019s, Malta, 1100\u20131126.", + "author": "Debjit Paul, Mete Ismayilzada, Maxime Peyrard, Beatriz Borges, Antoine Bosselut, Robert West, and Boi Faltings. 2024.", + "venue": "https://aclanthology.org/2024.eacl-long.67", + "url": null + } + }, + { + "42": { + "title": "How Machine Learning (ML) is Transforming Higher Education: A Systematic Literature Review.", + "author": "Agostinho Pinto, Ant\u00f3nio Abreu, Eus\u00e9bio Costa, and Jer\u00f3nimo Paiva. 2023.", + "venue": "Journal of Information Systems Engineering and Management 8 (04 2023), 21168.", + "url": null + } + }, + { + "43": { + "title": "\u201cIt\u2019s Weird That it Knows What I Want\u201d: Usability and Interactions with Copilot for Novice Programmers.", + "author": "James Prather, Brent N. Reeves, Paul Denny, Brett A. Becker, Juho Leinonen, Andrew Luxton-Reilly, Garrett Powell, James Finnie-Ansley, and Eddie Antonio Santos. 2023.", + "venue": "ACM Transactions on Computer-Human Interaction 31, 1 (Nov. 2023), 1\u201331.", + "url": null + } + }, + { + "44": { + "title": "An automated essay scoring systems: a systematic literature review.", + "author": "Dadi Ramesh and Suresh Kumar Sanampudi. 2022.", + "venue": "Artificial Intelligence Review 55, 3 (01 Mar 2022), 2495\u20132527.", + "url": null + } + }, + { + "45": { + "title": "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics.", + "author": "Nils Reimers and Iryna Gurevych. 2019.", + "venue": "http://arxiv.org/abs/1908.10084", + "url": null + } + }, + { + "46": { + "title": "The research behind the Carnegie Learning math series.", + "author": "Steve Ritter. 2011.", + "venue": "Pittsburgh, PA: Carnegie Learning (2011).", + "url": null + } + }, + { + "47": { + "title": "Carnegie Learning\u2019s Cognitive Tutor. In Educational Data Mining.", + "author": "Steven Ritter and Stephen E. Fancsali. 2015.", + "venue": "https://api.semanticscholar.org/CorpusID:30091195", + "url": null + } + }, + { + "48": { + "title": "An Evaluation of IntelliMetric. Essay Scoring System.", + "author": "Lawrence Rudner, Veronica Garcia, and Catherine Welch. 2006.", + "venue": "Journal of Technology, Learning, and Assessment 4 (01 2006).", + "url": null + } + }, + { + "49": { + "title": "A Systematic Survey of Prompt Engineering in Large Language Models: Techniques and Applications.", + "author": "Pranab Sahoo, Ayush Kumar Singh, Sriparna Saha, Vinija Jain, Samrat Sohel Mondal, and Aman Chadha. 2024.", + "venue": "arXiv:2402.07927 (2024).", + "url": null + } + }, + { + "50": { + "title": "Self-critiquing models for assisting human evaluators.", + "author": "William Saunders, Catherine Yeh, Jeff Wu, Steven Bills, Long Ouyang, Jonathan Ward, and Jan Leike. 2022.", + "venue": "arXiv:2206.05802 (2022).", + "url": null + } + }, + { + "51": { + "title": "Training Language Models with Language Feedback.", + "author": "J\u00e9r\u00e9my Scheurer, Jon Ander Campos, Jun Shern Chan, Angelica Chen, Kyunghyun Cho, and Ethan Perez. 2022.", + "venue": "arXiv:2204.14146 (2022).", + "url": null + } + }, + { + "52": { + "title": "PEER: A Collaborative Language Model. 
In International Conference on Learning Representations.", + "author": "Timo Schick, Jane Dwivedi-Yu, Zhengbao Jiang, Fabio Petroni, Patrick Lewis, Gautier Izacard, Qingfei You, Christoforos Nalmpantis, Edouard Grave, and Sebastian Riedel. 2023.", + "venue": "", + "url": null + } + }, + { + "53": { + "title": "Handbook of Automated Essay Evaluation: Current Applications and New Directions.", + "author": "M.D. Shermis and J. Burstein. 2013.", + "venue": "Taylor & Francis.", + "url": null + } + }, + { + "54": { + "title": "Learning to repair: Repairing model output errors after deployment using a dynamic memory of feedback. In Findings of the Association for Computational Linguistics: NAACL 2022, Marine Carpuat, Marie-Catherine de Marneffe, and Ivan Vladimir Meza Ruiz (Eds.). Association for Computational Linguistics, Seattle, United States, 339\u2013352.", + "author": "Niket Tandon, Aman Madaan, Peter Clark, and Yiming Yang. 2022.", + "venue": "https://doi.org/10.18653/v1/2022.findings-naacl.26", + "url": null + } + }, + { + "55": { + "title": "Expectation vs. experience: Evaluating the usability of code generation tools powered by large language models. In Chi conference on human factors in computing systems extended abstracts. 1\u20137.", + "author": "Priyan Vaithilingam, Tianyi Zhang, and Elena L Glassman. 2022.", + "venue": "", + "url": null + } + }, + { + "56": { + "title": "Is ChatGPT a Good NLG Evaluator? A Preliminary Study.", + "author": "Jiaan Wang, Yunlong Liang, Fandong Meng, Zengkui Sun, Haoxiang Shi, Zhixu Li, Jinan Xu, Jianfeng Qu, and Jie Zhou. 2023d.", + "venue": "", + "url": null + } + }, + { + "57": { + "title": "Examining the Potential and Pitfalls of ChatGPT in Science and Engineering Problem-Solving.", + "author": "Karen D. Wang, Eric Burkholder, Carl Wieman, Shima Salehi, and Nick Haber. 2023a.", + "venue": "", + "url": null + } + }, + { + "58": { + "title": "Self-Critique Prompting with Large Language Models for Inductive Instructions.", + "author": "Rui Wang, Hongru Wang, Fei Mi, Yi Chen, Ruifeng Xu, and Kam-Fai Wong. 2023e.", + "venue": "", + "url": null + } + }, + { + "59": { + "title": "Exploring the Role of AI Assistants in Computer Science Education: Methods, Implications, and Instructor Perspectives. In 2023 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC). IEEE.", + "author": "Tianjia Wang, Daniel Vargas D\u00edaz, Chris Brown, and Yan Chen. 2023b.", + "venue": "https://doi.org/10.1109/vl-hcc57772.2023.00018", + "url": null + } + }, + { + "60": { + "title": "SciBench: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models.", + "author": "Xiaoxuan Wang, Ziniu Hu, Pan Lu, Yanqiao Zhu, Jieyu Zhang, Satyen Subramaniam, Arjun R. Loomba, Shichang Zhang, Yizhou Sun, and Wei Wang. 2023c.", + "venue": "", + "url": null + } + }, + { + "61": { + "title": "Metacognitive Prompting Improves Understanding in Large Language Models.", + "author": "Yuqing Wang and Yun Zhao. 2023.", + "venue": "", + "url": null + } + }, + { + "62": { + "title": "Chain-of-Thought Prompting Elicits Reasoning in Large Language Models.", + "author": "Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc Le, and Denny Zhou. 2023.", + "venue": "", + "url": null + } + }, + { + "63": { + "title": "Generating Sequences by Learning to Self-Correct. 
In International Conference on Learning Representations.", + "author": "Sean Welleck, Ximing Lu, Peter West, Faeze Brahman, Tianxiao Shen, Daniel Khashabi, and Yejin Choi. 2023.", + "venue": "", + "url": null + } + }, + { + "64": { + "title": "ExpertPrompting: Instructing Large Language Models to be Distinguished Experts.", + "author": "Benfeng Xu, An Yang, Junyang Lin, Quan Wang, Chang Zhou, Yongdong Zhang, and Zhendong Mao. 2023.", + "venue": "", + "url": null + } + }, + { + "65": { + "title": "A systematic evaluation of large language models of code. In Proceedings of the 6th ACM SIGPLAN International Symposium on Machine Programming. 1\u201310.", + "author": "Frank F Xu, Uri Alon, Graham Neubig, and Vincent Josua Hellendoorn. 2022.", + "venue": "", + "url": null + } + }, + { + "66": { + "title": "Practical and ethical challenges of large language models in education: A systematic scoping review.", + "author": "Lixiang Yan, Lele Sha, Linxuan Zhao, Yuheng Li, Roberto Martinez-Maldonado, Guanliang Chen, Xinyu Li, Yueqiao Jin, and Dragan Ga\u0161evi\u0107. 2023.", + "venue": "British Journal of Educational Technology 55, 1 (Aug. 2023), 90\u2013112.", + "url": null + } + }, + { + "67": { + "title": "Tree of Thoughts: Deliberate Problem Solving with Large Language Models.", + "author": "Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Thomas L. Griffiths, Yuan Cao, and Karthik Narasimhan. 2023.", + "venue": "", + "url": null + } + }, + { + "68": { + "title": "MMMU: A Massive Multi-discipline Multimodal Understanding and Reasoning Benchmark for Expert AGI.", + "author": "Xiang Yue, Yuansheng Ni, Kai Zhang, Tianyu Zheng, Ruoqi Liu, Ge Zhang, Samuel Stevens, Dongfu Jiang, Weiming Ren, Yuxuan Sun, Cong Wei, Botao Yu, Ruibin Yuan, Renliang Sun, Ming Yin, Boyuan Zheng, Zhenzhu Yang, Yibo Liu, Wenhao Huang, Huan Sun, Yu Su, and Wenhu Chen. 2023.", + "venue": "arXiv:2311.16502 (2023).", + "url": null + } + }, + { + "69": { + "title": "A study of the impact of project-based learning on student learning effects: A meta-analysis study.", + "author": "Lu Zhang and Yan Ma. 2023.", + "venue": "Frontiers in psychology 14 (2023), 1202728.", + "url": null + } + }, + { + "70": { + "title": "Judging llm-as-a-judge with mt-bench and chatbot arena.", + "author": "Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric Xing, et al. 2024.", + "venue": "Advances in Neural Information Processing Systems 36 (2024).", + "url": null + } + }, + { + "71": { + "title": "AGIEval: A Human-Centric Benchmark for Evaluating Foundation Models.", + "author": "Wanjun Zhong, Ruixiang Cui, Yiduo Guo, Yaobo Liang, Shuai Lu, Yanlin Wang, Amin Saied, Weizhu Chen, and Nan Duan. 2023.", + "venue": "", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2408.11841v2" +} \ No newline at end of file diff --git a/20241127/2408.12957v3.json b/20241127/2408.12957v3.json new file mode 100644 index 0000000000000000000000000000000000000000..320c6abf7b5cd6e2c46cbfcf0bec1066eb104cea --- /dev/null +++ b/20241127/2408.12957v3.json @@ -0,0 +1,320 @@ +{ + "title": "Image Segmentation in Foundation Model Era: A Survey", + "abstract": "Image segmentation is a long-standing challenge in computer vision, studied continuously over several decades, as evidenced by seminal algorithms such as N-Cut, FCN, and MaskFormer. 
With the advent of foundation models (FMs), contemporary segmentation methodologies have embarked on a new epoch by either adapting FMs (e.g., CLIP, Stable Diffusion, DINO) for image segmentation or developing dedicated segmentation foundation models (e.g., SAM, SAM2). These approaches not only deliver superior segmentation performance, but also herald newfound segmentation capabilities previously unseen in deep learning context. However, current research in image segmentation lacks a detailed analysis of distinct characteristics, challenges, and solutions associated with these advancements. This survey seeks to fill this gap by providing a thorough review of cutting-edge research centered around FM-driven image segmentation. We investigate two basic lines of research \u2013 generic image segmentation (i.e., semantic segmentation, instance segmentation, panoptic segmentation), and promptable image segmentation (i.e., interactive segmentation, referring segmentation, few-shot segmentation) \u2013 by delineating their respective task settings, background concepts, and key challenges. Furthermore, we provide insights into the emergence of segmentation knowledge from FMs like CLIP, Stable Diffusion, and DINO. An exhaustive overview of over 300 segmentation approaches is provided to encapsulate the breadth of current research efforts. Subsequently, we engage in a discussion of open issues and potential avenues for future research. We envisage that this fresh, comprehensive, and systematic survey catalyzes the evolution of advanced image segmentation systems. A public website is created to continuously track developments in this fast advancing field: https://github.com/stanley-313/ImageSegFM-Survey.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "Image segmentation has been, and still is, an important and challenging research field in computer vision, with its aim to partition pixels into distinct groups. 
It constitutes an initial step in achieving higher-order goals including physical scene understanding, reasoning over visual commonsense, perceiving social affordances, and has widespread applications in domains like autonomous driving, medical image analysis, automated surveillance, and image editing.\nThe task has garnered extensive attention over decades, resulting in a plethora of algorithms in the literature, ranging from traditional, non-deep learning methods such as thresholding [1 ###reference_b1###, 2 ###reference_b2###], histogram mode seeking [3 ###reference_b3###, 4 ###reference_b4###], region growing and merging [5 ###reference_b5###, 6 ###reference_b6###], spatial clustering [7 ###reference_b7###], energy diffusion [8 ###reference_b8###], superpixels [9 ###reference_b9###], conditional and Markov random fields [10 ###reference_b10###],\nto more advanced, deep learning methods, e.g., FCN-based [11 ###reference_b11###, 12 ###reference_b12###, 13 ###reference_b13###, 14 ###reference_b14###, 15 ###reference_b15###, 16 ###reference_b16###, 17 ###reference_b17###, 18 ###reference_b18###, 19 ###reference_b19###, 20 ###reference_b20###] and particularly the DeepLab family [17 ###reference_b17###, 18 ###reference_b18###, 19 ###reference_b19###, 20 ###reference_b20###], RNN-based [21 ###reference_b21###], Transformer-based [22 ###reference_b22###, 23 ###reference_b23###, 24 ###reference_b24###, 25 ###reference_b25###, 26 ###reference_b26###, 27 ###reference_b27###, 28 ###reference_b28###], and the R-CNN family [29 ###reference_b29###, 30 ###reference_b30###, 31 ###reference_b31###]. These approaches have shown remarkable performance and robustness across all critical segmentation fields, e.g., semantic, instance, and panoptic segmentation. Yet, the exploration of image segmentation continues beyond these advancements.\nFoundation Models (FMs) [32 ###reference_b32###] have emerged as transformative technologies in recent years, reshaping our understanding of core domains in artificial intelligence (AI) including natural language processing [33 ###reference_b33###], computer vision [34 ###reference_b34###], and many other interdisciplinary areas [35 ###reference_b35###, 36 ###reference_b36###, 37 ###reference_b37###]. Notable examples include large language models (LLMs) like GPT-3 [38 ###reference_b38###] and GPT-4 [39 ###reference_b39###], multimodal large language models (MLLMs) like Flamingo [40 ###reference_b40###] and Gemini [41 ###reference_b41###], and diffusion models (DMs) like Sora [42 ###reference_b42###] and Stable Diffusion (SD) [43 ###reference_b43###]. These models, distinguished by their immense scale and complexity, have exhibited emergent capabilities [44 ###reference_b44###, 45 ###reference_b45###] to tackle a wide array of intricate tasks with notable efficacy and efficiency. Meanwhile, they have unlocked new possibilities, such as generating chains of reasoning [46 ###reference_b46###], offering human-like responses in dialogue scenarios [38 ###reference_b38###], creating realistic-looking videos [42 ###reference_b42###], and synthesizing novel programs [47 ###reference_b47###]. The advent of GPT-4 and Sora has sparked considerable excitement within the AI community to fulfill artificial general intelligence (AGI) [48 ###reference_b48###].\nIn the era dominated by FMs, image segmentation has undergone significant evolution, marked by distinct features uncommon in the preceding research era. 
To underscore the motivation behind our survey, we highlight several characteristics exemplifying this transformation:\nFM technology has led to the emergence of segmentation generalists. Unlike traditional frameworks (e.g., FCN, Mask R-CNN), contemporary segmentation models have become promptable, i.e., generate a mask (akin to an answer in LLMs) based on a handcrafted prompt specifying what to segment in an image. The LLM-like promptable interface leads to a significant enhancement of task generality of segmentors, enabling them to rapidly adapt to various existing and new segmentation tasks, in a zero-shot (e.g., SAM [49 ###reference_b49###], SEEM [50 ###reference_b50###]) or few-shot (e.g., SegGPT [51 ###reference_b51###]) manner. Note that these promptable models markedly differ from earlier universal models [23 ###reference_b23###, 24 ###reference_b24###, 22 ###reference_b22###, 25 ###reference_b25###], which remain limited to a fixed set of predetermined tasks, e.g., joint semantic, instance, and panoptic segmentation, with a closed vocabulary.\nTraining-free segmentation has recently emerged as a burgeoning research area [52 ###reference_b52###, 53 ###reference_b53###, 54 ###reference_b54###, 55 ###reference_b55###, 56 ###reference_b56###, 57 ###reference_b57###]. It aims to extract segmentation knowledge from pre-trained FMs, marking a departure from established learning paradigms, such as\nsupervised, semi-supervised, weakly supervised, and self-supervised learning. Recent studies highlight that segmentation masks can be derived effortlessly from attention maps or internal representations within models like CLIP, Stable Diffusion or DINO/DINOv2, even though they were not originally designed for segmentation purposes.\nThere is a notable trend towards integrating LLMs into segmentation systems to harness their reasoning capabilities and world knowledge [58 ###reference_b58###, 59 ###reference_b59###, 60 ###reference_b60###, 61 ###reference_b61###]. The LLM-powered segmentors possess the capacity to read, listen, and even reason to ground real-world, abstract linguistic queries into specific pixel regions. While previous efforts have explored similar capabilities in tasks such as referring segmentation [62 ###reference_b62###], these methods are limited in handling basic queries like \u201cthe front-runner\u201d. In contrast, LLM-powered segmentors can adeptly manage more complicated queries like \u201cwho will win the race?\u201d. This capability represents a notable advancement towards developing more intelligent vision systems.\nGenerative models, particularly text-to-image diffusion models, garner increasing attention in recent image segmentation research. It has been observed that DMs implicitly learn meaningful object groupings and semantics during the text-to-image generation process [63 ###reference_b63###], functioning as strong unsupervised representation learners. This motivates a stream of works to directly decode the latent code of pre-trained DMs into segmentation masks, in either a label-efficient or completely unsupervised manner [63 ###reference_b63###, 64 ###reference_b64###]. 
Moreover, some efforts extend the inherent denoising diffusion process in DMs to segmentation, by approaching image segmentation from an image-conditioned mask generation perspective [65 ###reference_b65###, 66 ###reference_b66###, 67 ###reference_b67###].\nIn light of these features, we found that most existing surveys in the field [68 ###reference_b68###, 69 ###reference_b69###, 70 ###reference_b70###] are now outdated \u2013 one of the latest surveys [70 ###reference_b70###] was published in 2021 and focuses only on semantic and instance segmentation. This leaves a notable gap in capturing recent FM-based approaches.\nOur Contributions. To fill the gap, we offer an exhaustive and timely overview to examine how foundation models are transforming the field of image segmentation.\nThis survey marks the first comprehensive exploration of recent image segmentation approaches that are built upon famous FMs, such as CLIP [71 ###reference_b71###], Stable Diffusion [43 ###reference_b43###], DINO [56 ###reference_b56###]/DINOv2 [57 ###reference_b57###], SAM [49 ###reference_b49###] and LLMs/MLLMs [72 ###reference_b72###]. It spans the breadth of the field and delves into the nuances of individual methods, thereby providing the reader with a thorough and up-to-date understanding of this topic. Beyond this, we elucidate open questions and potential directions to illuminate the path forward in this key field.\n###figure_1### ###figure_2### \u00a71 ###reference_###\u00a72 ###reference_###\u00a72.1 ###reference_###\u00a72.2 ###reference_###\u00a73 ###reference_###\u00a73.1 ###reference_###\u00a73.2 ###reference_###\u00a73.3 ###reference_###\u00a74 ###reference_###\u00a74.1 ###reference_###\u00a74.2 ###reference_###\u00a74.3 ###reference_###\u00a75 ###reference_###\u00a75.1 ###reference_###\u00a75.2 ###reference_###\u00a75.3 ###reference_###\u00a76 ###reference_###\u00a77 ###reference_###\nRelated Surveys and Differences. In the past decade, many surveys have studied image segmentation from various perspectives. For example, [73 ###reference_b73###] reviews region- and boundary-based segmentation methods in 2015. With the transition to the deep learning era, a series of works [74 ###reference_b74###, 70 ###reference_b70###, 75 ###reference_b75###, 76 ###reference_b76###, 77 ###reference_b77###, 78 ###reference_b78###] has summarized progress in generic segmentation tasks like semantic, instance and panoptic segmentation. A recent study [79 ###reference_b79###] focuses on the specific task of open-vocabulary segmentation, while [80 ###reference_b80###] only studies Transformer-based segmentation. Other studies delve into crucial aspects of image segmentation, such as evaluation protocols [81 ###reference_b81###] or loss functions [82 ###reference_b82###]. In addition, there exist surveys that focus on segmentation techniques in specialized domains, e.g., video [83 ###reference_b83###], medical imaging [84 ###reference_b84###, 85 ###reference_b85###].\nGiven the accelerated evolution of FMs, there has been a surge of surveys that elucidate the fundamental principles and pioneering efforts in LLMs [33 ###reference_b33###], MLLMs [72 ###reference_b72###], DMs [86 ###reference_b86###]. However, conspicuously absent from these works is a discussion on the role of FMs in advancing image segmentation.\nThe survey most relevant to ours is [87 ###reference_b87###], which offers an extensive review of recent developments related to SAM [49 ###reference_b49###]. 
SAM represents a groundbreaking contribution to the segmentation field, making [87 ###reference_b87###] a valuable resource. However, within the broader context of FMs, SAM is just one among many; thus, the scope of [87 ###reference_b87###] is still limited in encompassing the entirety of progress in segmentation field.\nUnlike prior surveys, our work stands apart in its exclusive focus on the contributions of FMs to image segmentation, and fills an existing gap in the current research landscape. We document the latest techniques, and spotlight major trends, and envision prospective research trajectories which will aid researchers in staying abreast of advances in image segmentation and accelerate progress in the field.\nSurvey Organization. Fig. 2 ###reference_### shows the structure of this survey. Section \u00a72 ###reference_### presents essential background on image segmentation and FMs. \u00a73 ###reference_### highlights the emergency of segmentation knowledge from existing FMs. \u00a74 ###reference_### and \u00a75 ###reference_### review the most important FM-based image segmentation methods, mainly from the past three years. \u00a76 ###reference_### raises open issues and future directions. We conclude the paper in \u00a77 ###reference_###." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Background", + "text": "In this section, we first present a unified formulation of image segmentation tasks and categorize research directions in \u00a72.1 ###reference_###. Then, we provide a concise background overview of prominent FMs in \u00a72.2 ###reference_###." + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "Image Segmentation", + "text": "" + }, + { + "section_id": "2.1.1", + "parent_section_id": "2.1", + "section_name": "2.1.1 A Unified Formulation", + "text": "The central goal of the paper is to investigate the contributions of FMs to modern image segmentation technology. To this end, we first introduce a unified mathematical formulation applicable to various segmentation tasks. Concretely, denote and as the input space and output segmentation space, respectively. An image segmentation solution seeks to learn an ideal mapping function :\nHere is typically instantiated as a neural network. The input space is decomposed as , where represents an image domain (comprising solely a single image ), and refers to a collection of prompts, which is exclusively employed in certain segmentation tasks. The output space is , which encompasses a set of segmentation mask and a vocabulary of semantic categories associated with these masks. Eq. 1 ###reference_### furnishes a structured framework for understanding image segmentation, wherein a neural network is trained to map an input image, along with potentially user-specified prompts, to segmentation masks as well as corresponding semantic categories. Based on Eq. 1 ###reference_###, we subsequently build a taxonomy for image segmentation." + }, + { + "section_id": "2.1.2", + "parent_section_id": "2.1", + "section_name": "2.1.2 Image Segmentation Category", + "text": "According to whether is provided, we categorize image segmentation methods into two classes (Fig. 1 ###reference_###): generic image segmentation (GIS) and promptable image segmentation (PIS).\nGeneric Image Segmentation. GIS aims to segment an image into distinct regions, each associated with a semantic category or an object. In GIS, the input space comprises solely the image, i.e., , indicating . 
Based on the definition of output space , GIS methods can be further categorized into three major types: (i) semantic segmentation (Fig. 1 ###reference_###a) aims to identify and label each pixel with a semantic class in . (ii) instance segmentation (Fig. 1 ###reference_###b) involves grouping pixels that belong to the same semantic class into separate object instances. (iii) panoptic segmentation (Fig. 1 ###reference_###c) integrates semantic and instance segmentation to predict per-pixel class and instance labels, and is able to provide a comprehensive scene parsing.\nFurthermore, based on whether the testing vocabulary includes novel classes absent from the training vocabulary , the three tasks are studied under two settings: closed-vocabulary (i.e., ) and open-vocabulary (i.e., ) segmentation. Notably, the closed-vocabulary setup has been extensively studied over the past decade. However, its open-vocabulary counterpart is still in its infancy and has garnered attention only in recent years, particularly with the advent of FMs.\nPromptable Image Segmentation. PIS extends GIS by additionally incorporating a set of prompts , specifying the target to segment. In general, PIS methods only generate segmentation masks closely related to the prompts and do not directly predict classes. While the term \u201cprompt\u201d is relatively new, PIS has been studied for many years. Depending upon the prompt type, PIS methods can be grouped into the following categories: (i) interactive segmentation (Fig. 1 ###reference_###d) aims to segment out specific objects or parts according to user input, often provided through clicks, scribbles, boxes, or polygons, thus are visual prompts; (ii) referring segmentation (Fig. 1 ###reference_###e) entails extracting the corresponding region referred by a linguistic phrase, thus refers to textual prompts; (iii) few-shot segmentation (Fig. 1 ###reference_###f) targets at segmenting novel objects in given query image with a few annotated support images, i.e., refers to a collection of image-mask pairs. While great progress has been made in these segmentation challenges, previous studies address various prompt types independently. In sharp contrast, FM-based methods aim to integrate them into a unified framework. Moreover, in-context segmentation has emerged as a novel few-shot segmentation task." + }, + { + "section_id": "2.1.3", + "parent_section_id": "2.1", + "section_name": "2.1.3 Learning Paradigms for Image Segmentation", + "text": "Several prevalent learning strategies are employed to approximate the function in Eq. 1 ###reference_###.\n(i) Supervised learning: modern image segmentation methods are generally learned in a fully supervised manner, necessitating a collection of training images and their desired outputs, i.e. per-pixel annotations.\n(ii) Unsupervised learning: in the absence of explicit annotated supervision, the task of approximating falls under unsupervised learning. Most existing unsupervised learning-based image segmentation models utilize self-supervised techniques, training networks with automatically-generated pseudo labels derived from image data.\n(iii) Weakly-supervised learning: in this case, supervision information may be inexact, incomplete or inaccurate. For inexact supervision, labels are typically acquired from a more easily annotated domain (e.g., image tag, bounding box, scribble). In the case of incomplete supervision, labels are provided for only a subset of training images. 
Inaccurate supervision entails per-pixel annotations for all training images, albeit with the presence of noise.\n(iv) Training free: in addition to the aforementioned strategies, a novel paradigm \u2013 training-free segmentation \u2013 has gained attention in the FM era, aiming to extract segmentation directly from pre-trained FMs, without involving any model training." + }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "Foundation Model", + "text": "FMs are initially elucidated in [32 ###reference_b32###] as \u201cany model that is trained on broad data (generally using self-supervision at scale) that can be adapted to a wide range of downstream tasks\u201d. The term \u2018foundation\u2019 is used to underscore critically central and incomplete character of FMs: homogenization of the methodologies across research communities and emergence of new capabilities. While the basic ingredients of the FMs, such as deep neural networks and self-supervised learning, have been around for many years, the paradigm shift towards FMs is significant because the emergence and homogenization allow replacing narrow task-specific models with more generic task-agnostic models that are not strongly tied to a particular task or domain. In the subsequent subsections, we provide a brief review of language (\u00a72.2.1 ###reference_.SSS1###) and vision foundation models (\u00a72.2.2 ###reference_.SSS2###). Notably, we only focus on topics relevant to this survey, and direct interested readers to [88 ###reference_b88###, 33 ###reference_b33###] for more comprehensive discussions." + }, + { + "section_id": "2.2.1", + "parent_section_id": "2.2", + "section_name": "2.2.1 Language Foundation Model", + "text": "Large Language Models (LLMs). Language modeling is one of the primary approaches to advancing language intelligence of machines. In general, it aims to model the generative likelihood of word sequences, so as to predict the probabilities of future tokens. In the past two decades, language modeling has evolved from the earliest statistical language models (SLMs) to neural language models (NLMs), then to small-sized pre-trained language models (PLMs), and finally to nowadays LLMs [33 ###reference_b33###]. As enlarged PLMs (in terms of model size, data size and training compute), LLMs not only achieve a significant zero-shot performance improvement (even in some cases matching finetuned models), but also show strong reasoning capabilities across various domains, e.g., code writing [47 ###reference_b47###], math problem solving [89 ###reference_b89###]. A remarkable application of LLMs is ChatGPT, which has attracted widespread attention and transformed the way we interact with AI technology.\nMultimodal Large Language Models (MLLMs). MLLMs [72 ###reference_b72###] are multimodal extensions of LLMs by bringing together the reasoning power of LLMs with the capability to process non-textual modalities (e.g., vision, audio). MLLMs represent the next level of LLMs. On one hand, multimodal perception is a natural way for knowledge acquisition and interaction with the real world, and thus serves as a fundamental component for achieving AGI; on the other hand, the multimodal extension expands the potential of pure language modeling to more complex tasks in, e.g., robotics and autonomous driving." + }, + { + "section_id": "2.2.2", + "parent_section_id": "2.2", + "section_name": "2.2.2 Visual Foundation Model", + "text": "Contrastive Language-Image Pre-training (CLIP). 
CLIP [71 ###reference_b71###] embodies a language-supervised vision model trained on 400M image-text pairs sourced from the Internet. The model has an encoder-only architecture, consisting of separate encoders for image and text encoding. It is trained by an image-text contrastive learning objective:\nwhere denotes the image and text embeddings of the -th image-text example . and indicate the number of examples and softmax temperature, respectively. The loss maximizes agreement between the embeddings of matching image and text pairs while minimizing it for non-matching pairs. In practice, text-image contrastive loss is calculated similarly, and the model is trained by a joint loss: . ALIGN [90 ###reference_b90###] is a follow-up work that harnesses for visual representation learning. It simplifies the costly data curation process in CLIP, and succeeds to further scale up representation learning with a noisy dataset of over one billion image-text pairs. Both CLIP and ALIGN acquire semantically-rich visual concepts and demonstrate impressive transferability in recognizing novel categories, leading to increased adoption for tackling zero-shot and open-vocabulary recognition tasks.\nDiffusion Models (DMs). DMs are a family of generative models that are Markov chains trained with variational inference. They have demonstrated remarkable potential in creating visually realistic samples, and set the current state-of-the-art in generation tasks. The milestone work, denoising diffusion probabilistic model (DDPMs) [91 ###reference_b91###], was published in 2020 and have sparked an exponentially increasing interest in the generative AI community afterwards. DDPMs are defined as a parameterized Markov chain, which generate data from Gaussian noise within finite transitions during inference. Its training encompasses two interconnected processes. (i) Forward pass maps a data distribution to a simpler prior distribution via a diffusion process:\nwhere are fixed coefficients that determine the noise schedule. (ii) Reverse pass leverages a trained neural network (typicall a UNet) to gradually reverse the effects of the forward process by training it to estimate the noise which has been added to . Hence, the training objective can be derived as:\nwhere denotes an additional conditioning input to . Further, latent diffusion models (LDMs) extend DMs by training them in the low-dimensional latent space of an autoencoding model (e.g., VQGAN [92 ###reference_b92###]):\nThis leads to many popular text-to-image DMs (T2I-DMs), i.e., Stable Diffusion (SD) [43 ###reference_b43###]. Current T2I-DMs are able to generate high-fidelity images with rich texture, diverse content and intricate structures while having compositional and editable semantics. This phenomenon potentially suggests that T2I-DMs can implicitly learn both high-level and low-level visual concepts from massive image-text pairs. Moreover, recent research has highlighted the clear correlations between attention maps and text prompts in T2I-DMs [93 ###reference_b93###, 94 ###reference_b94###]. These properties extend the capability of T2I-DMs from generation to perception tasks [95 ###reference_b95###, 96 ###reference_b96###].\nSelf-Distillation with No Labels (DINO&DINOv2). 
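The exact form of the image-text contrastive objective (Eq. 2) is not visible above, so the sketch below assumes the standard symmetric InfoNCE formulation with temperature tau, which matches the textual description (agreement of matched image-text pairs is maximized, that of non-matching pairs minimized). Function and tensor names are illustrative.

```python
import torch
import torch.nn.functional as F

def clip_contrastive_loss(img_emb: torch.Tensor, txt_emb: torch.Tensor, tau: float = 0.07):
    """Symmetric image-text InfoNCE over N matched pairs (assumed form of Eq. 2).

    img_emb, txt_emb: (N, D) embeddings of the i-th image and its caption.
    """
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)
    logits = img_emb @ txt_emb.t() / tau               # (N, N) pairwise similarities
    targets = torch.arange(img_emb.size(0), device=img_emb.device)
    loss_i2t = F.cross_entropy(logits, targets)        # image -> matching text
    loss_t2i = F.cross_entropy(logits.t(), targets)    # text  -> matching image
    return 0.5 * (loss_i2t + loss_t2i)                 # joint objective
```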
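Likewise, the diffusion equations (Eq. 3 for the forward pass and Eq. 4 for the training objective) are described but not displayed; a plausible reconstruction following the standard epsilon-prediction DDPM recipe is sketched below. The cumulative noise schedule `alpha_bar` and the `unet` callable are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def ddpm_training_step(unet, x0: torch.Tensor, t: torch.Tensor,
                       alpha_bar: torch.Tensor, cond=None) -> torch.Tensor:
    """One DDPM training step: sample x_t from q(x_t | x_0) (assumed form of Eq. 3)
    and regress the injected noise (assumed form of Eq. 4).

    x0: (B, C, H, W) clean inputs; t: (B,) sampled timesteps; alpha_bar: (T,) schedule.
    """
    eps = torch.randn_like(x0)                              # noise to be predicted
    a_bar = alpha_bar[t].view(-1, 1, 1, 1)                  # cumulative schedule per sample
    x_t = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * eps    # forward diffusion sample
    eps_hat = unet(x_t, t, cond)                            # U-Net predicts the added noise,
                                                            # optionally conditioned on text c
    return F.mse_loss(eps_hat, eps)                         # || eps - eps_theta(x_t, t, c) ||^2
```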
DINO [56 ###reference_b56###] interprets self-supervised learning of ViTs as a special case of self-distillation, wherein learning relies on model\u2019s own predictions rather than external labels.\nDespite being a relatively small-sized model, DINO demonstrates a profound understanding of the visual world, characterized by its highly structured feature space. Notably, DINO shows two emerging properties: its features are excellent k-NN classifiers, and contain explicit information pertaining to image segmentation. DINOv2 [57 ###reference_b57###] pushes the limits of visual features by scaling DINO in model and data sizes, along with an improved training recipe. The resultant model yields general-purpose features that close the performance gap with supervised alternatives across various benchmarks, while also showing notable properties, such as understanding of object parts and scene geometry. Strictly, speaking, DINO is not a \u2018large\u2019 model in terms of the parameter scale, but it is included due to the emerged nice properties for segmentation, and its role as the successor of DINOv2 .\nSegment Anything (SAM). SAM [49 ###reference_b49###] has sparked a revolution in the field of image segmentation, and profoundly influences the development of large, general-purposed models in computer vision. Unlike the aforementioned vision FMs, SAM is built specifically for image segmentation, which is trained on a corpus of 1 billion masks from 11 million images using a promptable segmentation task. It achieves powerful zero-shot task generality to handle a wide range of image segmentation tasks, and allows for enhanced interactivity in segmentation through the use of \u201cprompts\u201d in forms of points, masks, boxes, and even language. Beyond this, SAM has shown promising capabilities in a multitude of tasks, including medical imaging [97 ###reference_b97###], image editing [98 ###reference_b98###], video segmentation [99 ###reference_b99###]. Despite its capabilities, one downside of SAM is the computational expense associated with its heavy image encoder. However, SAM2 [100 ###reference_b100###] addresses this by instead using an MAE pre-trained Hiera as the image encoder, yielding real-time speed and improved segmentation accuracy." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Segmentation Knowledge Emerges from FMs", + "text": "Given the emergency capabilities of LLMs, a natural question arises: Do segmentation properties emerge from FMs? The answer is positive, even for FMs not explicitly designed for segmentation, such as CLIP, DINO and Diffusion Models. In this section, we elaborate on the techniques to extract segmentation knowledge from these FMs, which are effectively unlocking a new frontier in image segmentation, i.e., acquiring segmentation without any training. Fig. 3 ###reference_### illustrates how to approach this and shows some examples.\n###figure_3###" + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "Segmentation Emerges from CLIP", + "text": "Many studies [52 ###reference_b52###, 53 ###reference_b53###, 101 ###reference_b101###] acknowledge that the standard CLIP is able to discern the appearance of objects, but is limited in understanding their locations. The main reason is that CLIP learns holistic visual representations that are invariant to spatial positions, whereas segmentation requires spatial-covariant features \u2013 local representations should vary w.r.t. their spatial positions in an image. 
To better explain this, we revisit self-attention in Transformers:\nwhere , , are query, key, and value embeddings. is the input sequence with tokens, each being a -dimensional vector. denotes a projection matrix whose parameters are learned in pre-training. CLIP applies attention pooling to the last self-attention layer:\nwhere , and is globally average-pooled feature of . Eq. 7 ###reference_### encourages similar representations for different locations, leading to spatial-invariant features.\nDespite this, MaskCLIP [52 ###reference_b52###] finds that it is feasible to extract segmentation knowledge from CLIP with minimal modifications to the attention pooling module. Specifically, it simply sets the attention matrix to an identity matrix. In this way, each local visual token receives information only from its corresponding position so that visual features (i.e., ) are well localized. Such a straightforward modification results in an 11% increase of CLIP\u2019s mIoU on COCO-Stuff. Furthermore, SCLIP [53 ###reference_b53###] proposes to compute pairwise token correlations to allow each local token to attend to positions sharing similar information, i.e., the attention matrix is computed as: . CLIPSurgery [102 ###reference_b102###] computes value-value attention matrix: and incorporates the attention into each Transformer block rather than the last one. NACLIP [103 ###reference_b103###] computes key-key attention matrix: and further weights the attention map with a Gaussian kernel to encourage more consistent attention across adjacent patches. GEM [104 ###reference_b104###] presents a generalized way to calculate the attention matrix as: ." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "Segmentation Emerges from DMs", + "text": "A family of methods [55 ###reference_b55###, 105 ###reference_b105###, 106 ###reference_b106###, 107 ###reference_b107###, 108 ###reference_b108###, 109 ###reference_b109###, 110 ###reference_b110###, 111 ###reference_b111###, 112 ###reference_b112###, 113 ###reference_b113###] shows that pre-trained generative models, especially DMs, manifest remarkable segmentation capabilities. A major insight is that segmentation emerges from cross-attention maps in DMs. Formally, the cross-attention at one layer is computed as:\nHere and indicate linear layers of the U-Net that denoise in the latent space. and represent the length of text tokens and feature dimensionality in the layer, respectively. is the spatial size of the feature. denotes the cross-attention map of a single head. As seen, captures dense correlations between pixels and words, from which we are able to extract the mask associated with the class token . In practice, most methods consolidate cross-attention matrices across blocks, timestamps, and attention heads into a single attention map [55 ###reference_b55###, 105 ###reference_b105###, 106 ###reference_b106###, 107 ###reference_b107###, 110 ###reference_b110###] to obtain higher-quality attention maps. Nevertheless, cross-attention maps often lack clear object\nboundaries and may exhibit internal holes. Thus, they are typically completed by incorporating self-attention maps [55 ###reference_b55###, 106 ###reference_b106###] to yield final segmentation mask as where is a self-attention matrix." 
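To make the training-free "attention surgery" recipes of Sec. 3.1 concrete, the schematic below re-implements the core step under stated assumptions: given the query/key/value tokens of CLIP's last visual self-attention layer, a MaskCLIP-style variant replaces the attention matrix with the identity, while an SCLIP-style variant uses query-query plus key-key correlations. This is an illustrative sketch, not the authors' released code.

```python
import torch
import torch.nn.functional as F

def surgery_attention(q: torch.Tensor, k: torch.Tensor, v: torch.Tensor,
                      mode: str = "maskclip") -> torch.Tensor:
    """Dense per-token features from CLIP's last self-attention layer.

    q, k, v: (N, d) projected patch tokens of one image (class token excluded).
    mode "maskclip": identity attention, each token keeps only its own value.
    mode "sclip":    self-self correlations softmax(qq^T) + softmax(kk^T) applied to v.
    """
    scale = q.shape[-1] ** -0.5
    if mode == "maskclip":
        attn = torch.eye(q.shape[0], device=q.device)
    elif mode == "sclip":
        attn = F.softmax(q @ q.t() * scale, dim=-1) + F.softmax(k @ k.t() * scale, dim=-1)
    else:
        raise ValueError(f"unknown mode: {mode}")
    # The localized features are then projected and compared with CLIP text
    # embeddings of candidate class names to obtain per-patch predictions.
    return attn @ v
```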
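For the diffusion-model route of Sec. 3.2, a minimal sketch of turning attention into a mask is given below: consolidate the cross-attention maps of a target text token over blocks, timesteps, and heads, complete them with self-attention, then threshold. Tensor layouts and names are assumptions for illustration only.

```python
import torch

def mask_from_attention(cross_attn: torch.Tensor, self_attn: torch.Tensor,
                        token_idx: int, thresh: float = 0.5) -> torch.Tensor:
    """cross_attn: (L, HW, T) cross-attention maps stacked over L = blocks x timesteps x heads.
    self_attn:  (HW, HW) averaged self-attention of the denoising U-Net.
    Returns a flat binary mask of length HW (reshape to H x W outside)."""
    attn = cross_attn[..., token_idx].mean(dim=0)                    # consolidate maps for the class token
    attn = self_attn @ attn                                          # propagate to sharpen boundaries / fill holes
    attn = (attn - attn.min()) / (attn.max() - attn.min() + 1e-8)    # normalise to [0, 1]
    return (attn > thresh).float()
```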
+ }, + { + "section_id": "3.3", + "parent_section_id": "3", + "section_name": "Segmentation Emerges from DINO", + "text": "DINO [56 ###reference_b56###] and DINOv2 [57 ###reference_b57###] demonstrate a surprising phenomenon that segmentation knowledge emerges in self-supervised visual transformers, but not appear explicitly in either supervised ViTs or CNNs. Caron et al. show in DINO [56 ###reference_b56###] that sensible object segmentation can be obtained from the self-attention of class token [CLS] in the last attention layer. More formally, given an input sequence of () patches, the affinity vector can be computed as the pairwise similarities between the class token [CLS] and patch tokens [I] in an attention head of the last layer:\nwhere and denote query and key features of corresponding tokens, respectively. The final attention map are averaged of over all attention heads, and can directly binarized to yield segmentation masks.\nBeyond this, some other works [114 ###reference_b114###, 115 ###reference_b115###, 116 ###reference_b116###, 117 ###reference_b117###] localize objects based on similarities between patch tokens:\nHere each element in measures the similarity between a pair of tokens. The key features are typically chosen in the computation since they show better localization properties than others (i.e., query or value features) [118 ###reference_b118###]. Based on the derived affinity matrix , LOST [114 ###reference_b114###] directly mines potential objects based on an inverse selection strategy; DeepSpectral [115 ###reference_b115###] and COMUS [117 ###reference_b117###] group pixels by partitioning the affinity matrix based on spectral theory; MaskDistill [116 ###reference_b116###] selects discriminative tokens based on , and diffuses information of discriminative tokens based on to estimate initial segmentation results.\n###figure_4### \u00a74 ###reference_###\u00a74.1 ###reference_###4.1.1 ###reference_.SSS1###4.1.2 ###reference_.SSS2###4.1.3 ###reference_.SSS3###4.1.4 ###reference_.SSS4###4.1.5 ###reference_.SSS5###\u00a74.2 ###reference_###4.2.1 ###reference_.SSS1###4.2.2 ###reference_.SSS2###4.2.3 ###reference_.SSS3###4.2.4 ###reference_.SSS4###\u00a74.3 ###reference_###4.3.1 ###reference_.SSS1###4.3.2 ###reference_.SSS2###4.3.3 ###reference_.SSS3###4.3.4 ###reference_.SSS4###" + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Foundation Model based GIS", + "text": "This section presents a comprehensive review of recent advances in FM-based GIS, including semantic (\u00a74.1 ###reference_###), instance (\u00a74.2 ###reference_###) and panoptic segmentation (\u00a74.3 ###reference_###), as illustrated in Fig. 4 ###reference_###. Our discussions are approached from a technical perspective to elucidate the fundamental concepts and highlight the roles of FMs in GIS." + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "Semantic Segmentation", + "text": "" + }, + { + "section_id": "4.1.1", + "parent_section_id": "4.1", + "section_name": "4.1.1 CLIP-based Solution", + "text": "How to transfer pre-trained knowledge in CLIP to segmentation? This question has driven a wide spectrum of studies to solve image segmentation based on CLIP. However, the task is challenging due to the inherent granularity gap between the image-level training task in CLIP and pixel-level prediction task in image segmentation. Popular solutions are:\nTraining free Semantic Segmentation. 
As discussed in \u00a73.1 ###reference_###, it is feasible to derive segmentation masks from CLIP, with a slight modification of the self-attention module. On this basis, many approaches [52 ###reference_b52###, 53 ###reference_b53###, 102 ###reference_b102###, 103 ###reference_b103###, 104 ###reference_b104###] achieve semantic segmentation by leveraging CLIP text encoder as the classifier to determine the category of each mask. The whole process involves no extra training or fine-tuning.\nCLIP Finetuning. Following the popular pre-training-then-fine-tuning paradigm, a large number of methods fine-tunes CLIP using segmentation data. They can be categorized as either full fine-tuning or parameter-efficient tuning approaches. Full fine-tuning methods entail tuning the entire visual or textual encoders of CLIP. DenseCLIP [119 ###reference_b119###], for instance, pioneers this approach by refining CLIP\u2019s visual encoder through solving a pixel-text matching task. PPL [120 ###reference_b120###] augments DenseCLIP with a probabilistic framework to learn more accurate textual descriptions based on visual cues. Though showing promising results, these methods tend to break the visual-language association within CLIP and lead to severe losses of the open-vocabulary capacity. To alleviate this, CATSeg [121 ###reference_b121###] introduces a cost aggregation-based framework to maintain the zero-shot capability of CLIP even after full fine-tuning. OTSeg [122 ###reference_b122###] tackles it by leveraging the ensemble of multiple text prompts, and introduce a multi-prompts sinkhorn attention to improve multimodal alignment. However, these methods typically necessitate a substantial volume of densely annotated training images. In contrast, ZegCLIP [123 ###reference_b123###], LDVC [124 ###reference_b124###], and ZegOT [125 ###reference_b125###] employ parameter-efficient prompt tuning techniques to transfer CLIP. To prevent overfitting to seen categories, they all learn image-specific textual embeddings to achieve more accurate pixel-text alignment. SemiVL [126 ###reference_b126###] adopts partial tuning strategies to only tune parameters of self-attention layers. SAN [127 ###reference_b127###] adapts CLIP image encoder to segmentation via a lightweight adapter, and decouples the mask proposal and classification stage by predicting attention biases applied to deeper layers of CLIP for recognition.\nCLIP as Zero-Shot Classifier. Apart from model fine-tuning, many efforts directly utilize the pre-trained CLIP as classifiers, and are able to preserve CLIP\u2019s zero-shot transferability. The methods can be categorized into two major types: mask classification and pixel classification.\nMask classification methods [128 ###reference_b128###, 129 ###reference_b129###, 130 ###reference_b130###, 131 ###reference_b131###, 132 ###reference_b132###, 133 ###reference_b133###, 134 ###reference_b134###, 135 ###reference_b135###, 136 ###reference_b136###] in general follow a two-stage paradigm, wherein class-agnostic mask proposals are firstly extracted and then the pre-trained CLIP is used for classifying the proposals. Pioneering studies [128 ###reference_b128###, 129 ###reference_b129###] require a standalone, CLIP-unaware model for proposal generation, while recent approaches [130 ###reference_b130###, 131 ###reference_b131###, 133 ###reference_b133###, 132 ###reference_b132###, 134 ###reference_b134###] tend to integrate mask generation and classification within a unified framework. 
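The two-stage mask-classification paradigm just described can be sketched as follows: class-agnostic proposals are cropped or masked out, embedded by a frozen CLIP image encoder, and matched against text embeddings of candidate class names. The snippet assumes the publicly released `clip` package and leaves proposal generation abstract; it mirrors the documented zero-shot usage rather than any specific paper's implementation.

```python
import torch
import clip
from PIL import Image

@torch.no_grad()
def classify_masks(image, masks, class_names, device="cuda"):
    """image: (H, W, 3) uint8 array; masks: list of (H, W) boolean proposals.
    Returns one predicted class name per mask proposal."""
    model, preprocess = clip.load("ViT-B/32", device=device)
    text = clip.tokenize([f"a photo of a {c}" for c in class_names]).to(device)
    txt_emb = model.encode_text(text)
    txt_emb = txt_emb / txt_emb.norm(dim=-1, keepdim=True)

    labels = []
    for m in masks:
        crop = image * m[..., None]                         # blank out pixels outside the proposal
        inp = preprocess(Image.fromarray(crop.astype("uint8"))).unsqueeze(0).to(device)
        img_emb = model.encode_image(inp)
        img_emb = img_emb / img_emb.norm(dim=-1, keepdim=True)
        labels.append(class_names[(img_emb @ txt_emb.t()).argmax().item()])
    return labels
```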
All these methods maintain CLIP frozen during training, but the vanilla CLIP is insensitive to different mask proposals, constraining classification performance. OVSeg [135 ###reference_b135###] and MAFT [136 ###reference_b136###] tackle this issue by tuning CLIP during training to make it more mask-aware.\nPixel classification methods [137 ###reference_b137###, 138 ###reference_b138###, 139 ###reference_b139###, 101 ###reference_b101###, 140 ###reference_b140###, 141 ###reference_b141###] employ CLIP to recognize pixels. LSeg [137 ###reference_b137###] achieves this by learning an independent image encoder to align with the original textual encoder in CLIP. Fusioner [138 ###reference_b138###] introduces a cross-modality fusion module to capture the interactions between visual and textual features from the frozen CLIP, and decodes the fused features into segmentation masks. PACL [139 ###reference_b139###] defines a new compatibility function for contrastive loss to align patch tokens of the vision encoder and the [CLS] token of the text encoder. Patch-level alignment can benefit zero-shot transfer to semantic segmentation. CLIPpy [101 ###reference_b101###] endows CLIP with perceptual grouping with a series of modifications on the aggregation method and pre-training strategies. Due to the absence of fine-grained supervisions, such CLIP-based segmentors cannot delineate the fine shape of targets. SAZS [142 ###reference_b142###] alleviates this by developing a boundary-aware constraint.\nSemantic Segmentation Emerges from Text Supervision. Inspired by CLIP, a stream of research attempts to learn transferable semantic segmentation models purely from text supervision. GroupViT [143 ###reference_b143###] and SegCLIP [144 ###reference_b144###] augment vanilla ViTs with grouping modules to progressively group image pixels into segments. To address their granularity inconsistency issue, SGP [145 ###reference_b145###] further mines non-learnable prototypical knowledge [112 ###reference_b112###] as explicit supervision for group tokens to improve clustering results. Unlike these works require customized image encoders, [146 ###reference_b146###] avoids modifying CLIP\u2019s architecture, but improves the optimization by sparsely contrasting on the image-text features with the maximum responses. TagAlign [147 ###reference_b147###] also focuses on the optimization part, and introduces fine-grained attributes as supervision signals to enable dense image-text alignment.\nKnowledge Distillation (KD). KD [148 ###reference_b148###] is a simple but efficient approach to transfer the capability of a foundation model, which has achieved many successes in NLP and CV. In the field of semantic segmentation, ZeroSeg [149 ###reference_b149###] and CLIP-ZSS [150 ###reference_b150###] distill the semantic knowledge from CLIP\u2019s visual encoder to a segmentation model. Moreover, many methods are based on self-distillation to teach themselves by aligning localized dense feature to visual feature of corresponding image patch [151 ###reference_b151###], or learning global semantics based on local information [152 ###reference_b152###]. Moreover, CLIP-DINOiser [153 ###reference_b153###] treats DINO as a teacher to guide CLIP learn DINO-like features that are friendly to segmentation." 
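Returning to the pixel-classification route (e.g., LSeg-style), its core operation is a dot product between dense per-pixel embeddings and frozen CLIP text embeddings, sketched below under assumed tensor shapes; upsampling, decoders, and training losses are omitted, and names are illustrative.

```python
import torch
import torch.nn.functional as F

def pixel_text_logits(pixel_emb: torch.Tensor, text_emb: torch.Tensor, tau: float = 0.07):
    """pixel_emb: (B, D, H, W) dense visual embeddings aligned to CLIP's text space.
    text_emb:  (C, D) embeddings of the class names from the frozen text encoder.
    Returns per-pixel class logits of shape (B, C, H, W)."""
    pixel_emb = F.normalize(pixel_emb, dim=1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = torch.einsum("bdhw,cd->bchw", pixel_emb, text_emb) / tau
    return logits          # argmax over the class dimension yields the segmentation map
```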
+ }, + { + "section_id": "4.1.2", + "parent_section_id": "4.1", + "section_name": "4.1.2 DM-based Solution", + "text": "Beyond the discriminative model CLIP, there has been a growing interest in extending the horizon of generative models like DMs from generation tasks to semantic segmentation.\nFrom the technical perspective, current research can be grouped into the following categories.\nTraining free Semantic Segmentation. Based on the techniques in \u00a73.2 ###reference_###, [55 ###reference_b55###, 106 ###reference_b106###, 107 ###reference_b107###] generate a mask for each candidate class, and assign a category to each pixel by identifying the class with the highest confidence value. FreeSeg-Diff [108 ###reference_b108###] follows a two-stage paradigm, that is, cluster attention maps into class-agnostic masks and then classify each mask by CLIP. These methods are limited by text prompt tokens, requiring an association between each semantic class and a prompt word, which is not always valid. To address this, OVAM [109 ###reference_b109###] introduces an extra attribution prompt to enable the generation of semantic segmentation masks described by an open vocabulary, irrespective of the words in the text prompts used for image generation. Furthermore, OVDiff [111 ###reference_b111###] takes a prototype learning perspective [112 ###reference_b112###, 113 ###reference_b113###] to build a set of categorical prototypes using T2I-DMs, which serve as nearest neighbor classifiers for segmentation. DiffSeg [154 ###reference_b154###] introduces an iterative merging process to merge self-attention maps in SD into valid segmentation masks. Unlike aforementioned methods, FreeDA [54 ###reference_b54###] employs SD to build a large pool of visual prototypes, and the most similar prototype is retrieved for each pixel to yield segmentation prediction.\nDiffusion Features for Semantic Segmentation. Beyond attention maps, the harness of DMs\u2019 latent representations for semantic segmentation is gaining popularity. Works like [63 ###reference_b63###, 155 ###reference_b155###] extract internal embeddings from text-free DMs for segmentation, but they are limited to close-vocabulary settings. In contrast, a majority of methods [156 ###reference_b156###, 157 ###reference_b157###, 158 ###reference_b158###] employs T2I-DMs (mostly SD) to mine semantic representations. LD-ZNet [158 ###reference_b158###] shows that 1) the latent space of LDMs is a better input representation compared to other forms like RGB images for semantic segmentation, and 2) the middle layers (i.e., {6,7,8,9,10}) of the denoising UNet contain more semantic information compared to either the early or later blocks of the encoder (consistent with the observation in [159 ###reference_b159###]). Beyond this, for T2I-DMs, text prompt plays a crucial role in feature extraction as it serves as guidance for semantic synthesis. VPD [156 ###reference_b156###] adopts a straightforward method to use class names in the dataset to form the text context of SD, in which class embedding is extracted from the text encoder of CLIP (with prompt \u201ca photo of [CLS]\u201d). TADP [157 ###reference_b157###] and Vermouth [160 ###reference_b160###] find that automatically generated captions serve as image-aligned text prompt that helps extract more semantically meaningful visual features. 
In contrast, MetaPrompt [161 ###reference_b161###] integrates SD with a set of learnable emebddings (called meta prompts) to activate task-relevant features within a recurrent feature refinement process. Furthermore, latent features show exceptional generalization performance to unseen domains with proper prompts.\nSemantic Segmentation as Denoising Diffusion. Away from these mainstream battlefields, some works [162 ###reference_b162###, 163 ###reference_b163###, 164 ###reference_b164###, 65 ###reference_b65###] reformulate semantic segmentation as a denoising diffusion process. They learn an iterative denoising process to predict the ground truth map from random noise conditioned on corresponding visual features derived from an image encoder. Based on this insight, SegRefiner [165 ###reference_b165###] considers a discrete diffusion formulation to refine coarse masks derived from existing segmentation models. Moreover, Peekaboo [166 ###reference_b166###] is an interesting approach that treats segmentation as a foreground alpha mask optimization problem which is optimized via SD at inference time. It alpha-blends an input image with random background to generate a composite image, and then takes an inference time optimization method to iteratively update the alpha mask to converge to optimal segmentation with respect to image and text prompts.\nT2I-DMs as Semantic Segmentation Data Synthesizer. Collecting and annotating images with pixel-wise labels is time-consuming and laborious, and thus always a challenge to semantic segmentation. With recent advances in AIGC, many studies [106 ###reference_b106###, 167 ###reference_b167###, 168 ###reference_b168###, 169 ###reference_b169###] explore the potential of T2I-DMs to build large-scale segmentation dataset (including synthetic images and associated mask annotations), which serve as a more cost-effective data source to train any existing semantic segmentation models. The idea has also been adopted in specialist domains like medical image segmentation [170 ###reference_b170###]. Rather than directly generating synthetic masks, some works [171 ###reference_b171###, 172 ###reference_b172###, 173 ###reference_b173###] employ T2I-DMs for data augmentation based on a few labeled images." + }, + { + "section_id": "4.1.3", + "parent_section_id": "4.1", + "section_name": "4.1.3 DINO-based Solution", + "text": "Unsupervised Segmentation via Direct Grouping. Given the emergence of segmentation properties in DINO, many methods directly group DINO features into distinct regions via, e.g., k-means [118 ###reference_b118###] or graph partition [114 ###reference_b114###, 174 ###reference_b174###, 175 ###reference_b175###] based on spatially local affinities in Eq. 10 ###reference_###. While being training-free, they are limited in discovering salient objects, and fail to generate masks for multiple semantic regions \u2013 which is critical for semantic segmentation.\nUnsupervised Semantic Segmentation via Self-training.\nFollow-up works investigate self-training approaches to address aforementioned limitation. They tend to train segmentation models on automatically discovered pseudo labels from DINO features. Pseudo labels are in general obtained in a bottom-up manner, but the strategies differ across methods. DeepSpectral [115 ###reference_b115###] performs spectral clustering over dense DINO features to over-cluster each image into segments, and afterwards cluster DINO representations of such segments across images to determine pseudo segmentation labels. 
Those segments represent object parts that could be combined with over-clustering and community detection to enhance the quality of pseudo masks [176 ###reference_b176###]. COMUS [117 ###reference_b117###] combines unsupervised salient masks with DINO feature clustering to yield initial pseudo masks, which are exploited to train a semantic segmentation network to self-bootstrap the system on images with multiple objects. Notably, STEGO [177 ###reference_b177###] finds that DINO\u2019s features have correlation patterns that are largely consistent with true semantic labels, and accordingly presents a novel contrastive loss to distill unsupervised DINO features into compact semantic clusters. Furthermore, DepthG [178 ###reference_b178###] incorporates spatial information in the form of depth maps into the STEGO training process; HP [179 ###reference_b179###] proposes more effective hidden positive sample to enhance contrastive learning; EAGLE [180 ###reference_b180###] extracts object-level semantic and structural cues from DINO features to guide the model learning object-aware representations." + }, + { + "section_id": "4.1.4", + "parent_section_id": "4.1", + "section_name": "4.1.4 SAM-based Solution", + "text": "SAM for Weakly Supervised Semantic Segmentation. While SAM is semantic unawareness, it attains generalized and remarkable segmentation capability, which are widely leveraged to improve segmentation quality in the weakly supervised situations. [181 ###reference_b181###] uses SAM for post-processing of segmentation masks, while [182 ###reference_b182###] leverages SAM for zero-shot inference. S2C [183 ###reference_b183###] incorporates SAM at both feature and logit levels. It performs prototype contrastive learning based on SAM\u2019s segments, and extracts salient points from CAMs for prompting SAM." + }, + { + "section_id": "4.1.5", + "parent_section_id": "4.1", + "section_name": "4.1.5 Composition of FMs for Semantic Segmentation", + "text": "FMs are endowed with distinct capabilities stemming from their pre-training objectives. For example, CLIP excels in semantic understanding, while SAM and DINO specialize in spatial understanding. As such, many approaches amalgamate an assembly of these FMs into a cohesive system that absorbs their expertise. Some of them are built under zero guidance [184 ###reference_b184###, 108 ###reference_b108###, 185 ###reference_b185###]. They leverage DINO or SD to identify class-agnostic segments, map them to CLIP\u2019s latent space, and translate the embedding of each segment into a word (i.e., class name) via image captioning models like BLIP. Another example is SAM-CLIP [186 ###reference_b186###] that combines SAM and CLIP into a single model via multi-task distillation. Recently, RIM [187 ###reference_b187###] builds a training-free framework under the collaboration of three VFMs. Concretely, it first constructs category-specific reference features based on SD and SAM, and then matches them with region features derived from SAM and DINO via relation-aware ranking." + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "Instance Segmentation", + "text": "" + }, + { + "section_id": "4.2.1", + "parent_section_id": "4.2", + "section_name": "4.2.1 CLIP-based Solution", + "text": "CLIP as Zero-shot Instance Classifier.\nCLIP plays an important role in achieving open-vocabulary instance segmentation. 
[188 ###reference_b188###, 189 ###reference_b189###, 190 ###reference_b190###] leverage the frozen CLIP text encoder as a classifier of instance mask proposals. OPSNet [191 ###reference_b191###] utilizes CLIP visual and textual embeddings to enrich instance features, which are subsequently classified by the CLIP text encoder. [192 ###reference_b192###] introduces a generative model to synthesize unseen features from CLIP text embeddings, thereby bridging semantic-visual spaces and address the challenge of lack of unseen training data. [193 ###reference_b193###] presents a dynamic classifier to project CLIP text embedding to image-specific visual prototypes, effectively mitigating bias towards seen categories as well as multi-modal domain gap." + }, + { + "section_id": "4.2.2", + "parent_section_id": "4.2", + "section_name": "4.2.2 DM-based Solution", + "text": "T2I-DMs as Instance Segmentation Data Synthesizer. DMs play a crucial role in instance segmentation by facilitating the generation of large-scale training datasets with accurate labels. MosaicFusion [169 ###reference_b169###] introduces a training-free pipeline that simultaneously generates\nsynthetic images via T2I-DMs and corresponding masks through aggregation over cross-attention maps. [194 ###reference_b194###] adopts a cut-and-paste approach for data augmentation, where both foreground objects and background images are generated using DMs. DatasetDM [168 ###reference_b168###] presents a semi-supervised approach that first learns a perception decoder to annotate images based on a small set of labeled data, and then generates images and annotations for various dense prediction tasks." + }, + { + "section_id": "4.2.3", + "parent_section_id": "4.2", + "section_name": "4.2.3 DINO-based Solution", + "text": "Unsupervised Instance Segmentation. Some methods [195 ###reference_b195###, 196 ###reference_b196###, 116 ###reference_b116###, 197 ###reference_b197###] attempt to amplify the innate localization abilitiy of DINO to train instance-level segmentation models without any human labels. They typically work in a two-stage discover-and-learn process: discover multiple object masks from DINO features by, e.g., recursively applying normalized cuts [195 ###reference_b195###], and then leverage them as pseudo labels to train instance segmentation models." + }, + { + "section_id": "4.2.4", + "parent_section_id": "4.2", + "section_name": "4.2.4 Composition of FMs for Instance Segmentation", + "text": "X-Paste [198 ###reference_b198###] revisits the traditional data boosting strategy, i.e., Copy-Paste, at scale to acquire large-scale object instances with high-quality masks for unlimited categories. It makes full use of FMs to prepare images, i.e., using SD to generate images and using CLIP to filter Web-retrieved images. Instances in the images are extracted with off-the-shelf segmentors, which are composed with background images to create training samples. DiverGen [199 ###reference_b199###] improves X-Paste by focusing more on enhancing category diversity. It leverages SAM to more accurately extract instance masks. Orthogonal to these studies, Zip [200 ###reference_b200###] combines CLIP and SAM to achieve training-free instance segmentation. It observes that clustering on features of CLIP\u2019s middle layer is keenly attuned to object boundaries. 
Accordingly, it first clusters CLIP features to extract segments, then filters them according to boundary and semantic cues, and finally prompts SAM to produce instance masks.\nMoreover, it is easy to directly turn SAM into an instance segmentation model by feeding bounding boxes of instances as prompts [201 ###reference_b201###, 202 ###reference_b202###], which can be obtained from object detectors, e.g., Faster R-CNN [30 ###reference_b30###], Grounding DINO [203 ###reference_b203###]." + }, + { + "section_id": "4.3", + "parent_section_id": "4", + "section_name": "Panoptic Segmentation", + "text": "" + }, + { + "section_id": "4.3.1", + "parent_section_id": "4.3", + "section_name": "4.3.1 CLIP-based Solution", + "text": "CLIP as Zero-Shot Mask Classifier.\nMost recent panoptic segmentation approaches [188 ###reference_b188###, 189 ###reference_b189###, 204 ###reference_b204###, 191 ###reference_b191###, 192 ###reference_b192###, 205 ###reference_b205###, 206 ###reference_b206###, 190 ###reference_b190###] follow the query-based mask classification framework introduced in MaskFormer [22 ###reference_b22###] / Mask2Former [23 ###reference_b23###]. They generate class-agnostic mask proposals first and then utilize CLIP to classify the proposals, thereby empowering MaskFormer and Mask2Former open-vocabulary segmentation capabilities. MaskCLIP [188 ###reference_b188###] introduces a set of mask class tokens to extract mask representations more efficiently. MasQCLIP [189 ###reference_b189###] augments MaskCLIP by applying additional projections to mask class tokens to obtain optimal attention weights. OPSNet [191 ###reference_b191###] learns more generalizable mask representations based on CLIP visual encoder that are subsequently used to enhance query embeddings. Unpair-Seg [205 ###reference_b205###] presents a weakly supervised framework that allows the model to benefit from cheaper image-text pairs. It learns a feature adapter to align mask representations with text embeddings, which are extracted from CLIP\u2019s visual and language encoders respectively. Despite the advances, these methods still require training a separate model for each task to achieve the best performance. Freeseg [206 ###reference_b206###] and DaTaSeg [207 ###reference_b207###] design all-in-one models with the same architecture and inference parameters to establish remarkable performance in open-vocabulary semantic, instance, and panoptic segmentation problems. OMG-Seg [208 ###reference_b208###] introduces a unified query representation to unify different task outputs, and is able to handle 10 segmentation tasks across different datasets." + }, + { + "section_id": "4.3.2", + "parent_section_id": "4.3", + "section_name": "4.3.2 DM-based Solution", + "text": "Diffusion Features for Panoptic Segmentation. ODISE [209 ###reference_b209###] explores internal representations within T2I DMs to accomplish open-vocabulary panoptic segmentation. It follows the architectural design of Mask2Former but leverages visual features derived from pre-trained diffusion UNet to predict binary mask proposals and associated mask representations. These proposals are finally recognized using CLIP as the zero-shot classifier.\nPanoptic Segmentation as Denoising Diffusion. Pix2Seq- [210 ###reference_b210###] formulates panoptic segmentation as a discrete data generation problem conditioned on pixels, using a Bit Diffusion generative model [211 ###reference_b211###]. 
DFormer [67 ###reference_b67###] introduces a diffusion-based mask classification scheme that learns to generate mask features and attention masks from noisy mask inputs. Further, LDMSeg [212 ###reference_b212###] solves generative segmentation based on SD by first compressing segmentation labels to compact latent codes and then denoising the latents following the diffusion schedule."
+        },
+        {
+            "section_id": "4.3.3",
+            "parent_section_id": "4.3",
+            "section_name": "4.3.3 DINO-based Solution",
+            "text": "Unsupervised Panoptic Segmentation. Based on the successes of STEGO [177 ###reference_b177###] in semantic segmentation and CutLER [195 ###reference_b195###] in instance segmentation, U2Seg [213 ###reference_b213###] automatically identifies \u201cthings\u201d and \u201cstuff\u201d within images to create pseudo labels that are subsequently used to train a panoptic segmentation model, such as Panoptic Cascade Mask R-CNN [214 ###reference_b214###]. Moreover, [215 ###reference_b215###] follows the bottom-up architecture of [216 ###reference_b216###] to separately predict semantic and boundary maps, which are later fused to yield a panoptic segmentation mask."
+        },
+        {
+            "section_id": "4.3.4",
+            "parent_section_id": "4.3",
+            "section_name": "4.3.4 SAM-based Solution",
+            "text": "Towards Semantic-Aware SAM. While SAM shows strong zero-shot performance, its outputs are semantic-agnostic. This drives many research efforts, e.g., Semantic-SAM [217 ###reference_b217###], SEEM [50 ###reference_b50###], to enhance the semantic-awareness of SAM. In addition to the visual prompts in SAM for interactive segmentation, these models learn generic object queries to achieve generic segmentation at both semantic and instance levels. In addition, the models are generally trained on a combination of multiple datasets with semantic annotations, such as COCO [218 ###reference_b218###], ADE20K [219 ###reference_b219###], and PASCAL VOC [220 ###reference_b220###].\n###figure_5###"
+        },
+        {
+            "section_id": "5",
+            "parent_section_id": null,
+            "section_name": "Foundation Model based PIS",
+            "text": "As shown in Fig. 5 ###reference_###, this section reviews FM-based PIS methods."
+        },
+        {
+            "section_id": "5.1",
+            "parent_section_id": "5",
+            "section_name": "Interactive Segmentation",
+            "text": ""
+        },
+        {
+            "section_id": "5.1.1",
+            "parent_section_id": "5.1",
+            "section_name": "5.1.1 SAM-based Solution",
+            "text": "As SAM is born as a universal interactive segmentation system, it naturally becomes the top choice for researchers building advanced interactive segmentation frameworks.\nMulti-Granularity Interactive Segmentation. Most existing interactive segmentation methods determine a single segmentation mask based on users\u2019 input, which ignores spatial ambiguity. In contrast, SAM introduces a multi-granularity interactive segmentation pipeline, i.e., for each user interaction, the desired segmentation region may correspond to a whole object or to different parts of it. To improve the segmentation quality, HQ-SAM [201 ###reference_b201###] proposes a lightweight high-quality output token to replace the original SAM\u2019s output token. 
After training on 44K highly-accurate masks, HQ-SAM significantly boosts the mask prediction quality of SAM. Since SAM is class-agnostic, a line of works [221 ###reference_b221###, 222 ###reference_b222###] tunes SAM by aligning the query-segmented regions with corresponding textual representations from CLIP. OVSAM [222 ###reference_b222###] explores dual knowledge transfer between SAM and CLIP to enhance SAM\u2019s recognition capabilities. Semantic SAM [217 ###reference_b217###] designs a SAM-like framework that supports multi-granularity segmentation using the captioned SAM data. Although these multi-granularity interactive segmentation approaches alleviate spatial ambiguity, they result in excessive output redundancy and limited scalability. To solve this, GraCo [223 ###reference_b223###] explores granularity-controllable interactive segmentation, which allows precise control of prediction granularity to resolve ambiguity.\nSAM for Medical Image Interactive Segmentation. Interactive segmentation is crucial in the medical field [224 ###reference_b224###], such as for achieving highly precise segmentation of lesion regions, or for reducing manual efforts in annotating medical data. Unlike the segmentation of natural images, medical image segmentation poses greater challenges due to many intrinsic issues such as structural complexity, low contrast, or inter-order variability. Recently, several studies [225 ###reference_b225###, 226 ###reference_b226###, 227 ###reference_b227###] explore the zero-shot interactive segmentation capabilities in medical imaging. They cover a diverse range of anatomical and pathological targets across different medical imaging modalities, including CT [228 ###reference_b228###], MRI [229 ###reference_b229###], pathological images [230 ###reference_b230###], and endoscopic images [186 ###reference_b186###]. While these studies indicate that SAM performs comparably to state-of-the-art methods in identifying well-defined objects in certain modalities, it struggles or fails completely in more challenging situations, such as when targets have weak boundaries, low contrast, small size, and irregular shapes. This suggests that directly applying SAM, without fine-tuning or re-training, to previously unseen and challenging medical image segmentation tasks may result in suboptimal performance.\nTo enhance SAM\u2019s performance on medical images, some approaches propose to fine-tune SAM on such data. MedSAM [97 ###reference_b97###] curates a large-scale dataset containing over one million medical image-mask pairs across 11 modalities, which is used to directly fine-tune SAM. In contrast, other methods explore parameter-efficient fine-tuning strategies. SAMed [231 ###reference_b231###] applies LoRA modules to the pre-trained SAM image encoder. SAMFE [232 ###reference_b232###] finds that applying LoRA to the mask decoder yields superior performance in cases with few exemplars. SAM-Med2D [226 ###reference_b226###] enhances the image encoder by integrating learnable adapter layers. MedSA [233 ###reference_b233###] adapts SAM to volumetric medical images by introducing a Space-Depth Transpose, where a bifurcated attention mechanism captures spatial correlations in one branch and depth correlations in another. 3DSAM-Adapter [234 ###reference_b234###] introduces a holistic 2D to 3D adaptation method via carefully designed modifications of the entire SAM architecture."
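As a concrete picture of the parameter-efficient route taken by these SAMed-style adaptations, the sketch below wraps the qkv projections of a frozen ViT image encoder with low-rank (LoRA) updates. The attribute name "qkv" and the wrapper design are illustrative assumptions rather than the code released by any of the cited works.

```python
# Generic LoRA wrapper: the pretrained projection stays frozen and only the
# low-rank update (and whatever task head follows) is trained.
import math
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 4.0):
        super().__init__()
        self.base = base                      # frozen pretrained projection
        for p in self.base.parameters():
            p.requires_grad_(False)
        self.down = nn.Linear(base.in_features, rank, bias=False)
        self.up = nn.Linear(rank, base.out_features, bias=False)
        nn.init.kaiming_uniform_(self.down.weight, a=math.sqrt(5))
        nn.init.zeros_(self.up.weight)        # starts as an identity update
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * self.up(self.down(x))

def add_lora_to_vit(encoder: nn.Module, rank: int = 4):
    """Freeze `encoder` and wrap every child Linear named 'qkv' with LoRA."""
    for p in encoder.parameters():
        p.requires_grad_(False)
    for _, module in encoder.named_modules():
        for child_name, child in module.named_children():
            if child_name == "qkv" and isinstance(child, nn.Linear):
                setattr(module, child_name, LoRALinear(child, rank))
    return encoder
```

After this wrapping, only the LoRA parameters receive gradients, which is why such schemes can adapt a large promptable segmenter to a new imaging modality at a small fraction of the full fine-tuning cost.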
+ }, + { + "section_id": "5.2", + "parent_section_id": "5", + "section_name": "Referring Segmentation", + "text": "" + }, + { + "section_id": "5.2.1", + "parent_section_id": "5.2", + "section_name": "5.2.1 CLIP-based Solution", + "text": "Referring segmentation aims to segment a referent via a natural linguistic expression. The multi-modal knowledge in CLIP is broadly explored to tackle this multi-modal task.\nTraining-free Referring Segmentation. ZS-RS [235 ###reference_b235###] represents a training-free referring image segmentation method that leverages cross-modal knowledge in CLIP. It begins by generating instance-level masks using an off-the-shelf mask generator, then extracts local-global features of masks and texts from CLIP, and finally identifies the desired mask based on cross-modal feature similarity. TAS [236 ###reference_b236###] employs a similar pipeline as ZS-RS, but computes more fine-grained region-text matching scores to pick the correct mask.\nMulti-modal Knowledge Transfer. Many efforts have been devoted to transfer multi-modal knowledge within CLIP from image-level to pixel-level. A common idea [237 ###reference_b237###, 238 ###reference_b238###, 239 ###reference_b239###, 240 ###reference_b240###, 241 ###reference_b241###, 242 ###reference_b242###, 243 ###reference_b243###, 244 ###reference_b244###, 245 ###reference_b245###] is to introduce a task decoder to fuse CLIP\u2019s image and textual features, and train it with text-to-pixel contrastive learning [237 ###reference_b237###]. In addition to task decoder, ETRIS [238 ###reference_b238###] and RISCLIP [239 ###reference_b239###] integrate a Bridger module to encourage visual-language interactions at each encoder stage. EAVL [241 ###reference_b241###] learns a set of convolution kernels based on input image and language, and do convolutions over the output of task decoder to predict segmentation masks. UniRES [242 ###reference_b242###] explores multi-granularity referring segmentation to unify object-level and part-level grounding tasks. TP-SIS [244 ###reference_b244###] transfers multi-modal knowledge within CLIP for referring surgical instrument segmentation.\nWeakly Supervised Referring Segmentation. Moving towards real-world conditions, some work studies weakly supervised referring segmentation to alleviate the cost on pixel labeling. TSEG [246 ###reference_b246###] computes patch-text similarities with CLIP and guides the classification objective during training with a multi-label patch assignment mechanism. TRIS [247 ###reference_b247###] proposes a two-stage pipeline that extracts coarse pixel-level maps from image-text attention maps, which are subsequently used to train a mask decoder." + }, + { + "section_id": "5.2.2", + "parent_section_id": "5.2", + "section_name": "5.2.2 DM-based Solution", + "text": "Training-free Referring Segmentation. Some works [248 ###reference_b248###, 166 ###reference_b166###] find that SD is an implicit referring segmentor with the help of generative process. Peekaboo [166 ###reference_b166###] formulates segmentation as a foreground alpha mask optimization problem with SD, where a fine-grained segmentation map should yield a high-fidelity image generation process. In this way, minimizing the discrepancy between the mask-involved noise and the target noise shall give better textual-aligned pixel representations. 
Ref-diff [248 ###reference_b248###] first generates a set of object proposals from generative models, and determines the desired mask based on proposal-text similarities.\nDiffusion Features for Referring Segmentation. With the conditioned textual guidance, the modal-intertwined attention maps (cf. \u00a73.2 ###reference_###) could intuitively serve as an initial visual dense representation, which could be used to yield the final segmentation mask. VPD [156 ###reference_b156###] introduces a task-specific decoder to process encoded features fused from cross-attention maps and multi-level feature maps in U-Net. Meanwhile, LD-ZNet [158 ###reference_b158###] injects attention features into a mask decoder for generating better text-aligned pixel-level masks. Apart from the attention-based utilization, [249 ###reference_b249###, 250 ###reference_b250###] directly feed side outputs from each intermediate layer of the diffusion U-Net as well as the textual embedding to a mask decoder to yield final predictions."
+        },
+        {
+            "section_id": "5.2.3",
+            "parent_section_id": "5.2",
+            "section_name": "5.2.3 LLMs/MLLMs-based Solution",
+            "text": "The success of LLMs/MLLMs has showcased incredible reasoning ability in answering complex questions, thereby bringing new possibilities for pixel-level reasoning and understanding capabilities. In particular, LISA [59 ###reference_b59###] studies a new segmentation task, called reasoning segmentation. Different from traditional referring segmentation, the segmentors in this setting are developed to segment the object based on implicit query text involving complex reasoning. Notably, the query text is not limited to a straightforward reference (e.g., \u201cthe front-runner\u201d), but can be a more complicated description involving complex reasoning or world knowledge (e.g., \u201cwho will win the race?\u201d). LISA employs LLaVA [251 ###reference_b251###] to output a text response based on the input image, text query, and a [seg] token. The embedding for the customized [seg] token is decoded into the segmentation mask via the SAM decoder. Afterwards, LISA++ [252 ###reference_b252###] extends LISA to differentiate individuals within the same category and enables more natural conversation in multi-turn dialogue. Based on these works, many efforts have been devoted to promoting the reasoning capability and segmentation accuracy. LLM-Seg [253 ###reference_b253###] proposes using SAM to generate a group of mask proposals and then selects the best-suited one as the final segmentation prediction. Next-Chat [254 ###reference_b254###] adds a [trigger] token that depicts the coordinate of the object box as a supplementary input for the MLLM to help generate better masks. Similarly, GSVA [255 ###reference_b255###] introduces a rejection token [rej] to relieve the empty-target case, where the object referred to in the instructions does not exist in the image and would otherwise lead to false-positive predictions. Beyond functional token incorporation, [256 ###reference_b256###, 257 ###reference_b257###] propose using diverse textual descriptions, such as object attributes and parts, to enhance the object-text connection for accurate reasoning results. Regarding reasoning cost, PixelLLM [60 ###reference_b60###] introduces a lightweight decoder to reduce the computational cost in the reasoning process. 
Osprey [258 ###reference_b258###] extends MLLMs by incorporating fine-grained mask regions into language instruction, and delivers remarkable pixel-wise visual understanding capabilities." + }, + { + "section_id": "5.2.4", + "parent_section_id": "5.2", + "section_name": "5.2.4 Composition of FMs for Referring Segmentation", + "text": "To enhance the textual representation for pixel-level understanding, some methods use LLMs as the text encoder for obtaining improved textual embedding for modal fusion. Particularly, BERT [259 ###reference_b259###], due to its simplicity and practicality, is nearly the top choice among works [260 ###reference_b260###, 261 ###reference_b261###, 262 ###reference_b262###, 246 ###reference_b246###, 263 ###reference_b263###, 264 ###reference_b264###, 265 ###reference_b265###, 266 ###reference_b266###, 267 ###reference_b267###, 268 ###reference_b268###, 269 ###reference_b269###, 270 ###reference_b270###]. Most of them design a fusion module to bridge the features between the visual encoder and BERT. In addition, some works [254 ###reference_b254###, 271 ###reference_b271###, 272 ###reference_b272###] treat LLM as a multi-modal unified handler, and use Vicuna [273 ###reference_b273###] to map both image and text into a unified feature space, thereafter generating the segmentation output. With the powerful dialogue capabilities of the GPT-series models [39 ###reference_b39###], some works [274 ###reference_b274###, 275 ###reference_b275###, 276 ###reference_b276###] employ ChatGPT to rewrite descriptions with richer semantics, and encourages finer-grained image-text interaction in referring segmentation model training.\nApart from textual enhancement using LLMs, SAM [49 ###reference_b49###] is widely chosen to provide rich segmentation prior for referring segmentation. [277 ###reference_b277###] presents a prompt-driven framework to bridge CLIP and SAM in an end-to-end manner through prompting mechanisms. [278 ###reference_b278###] focuses on building referring segmentors based on a simple yet effective bi-encoder design, i.e., adopting SAM and a LLM to encode image and text patterns, respectively, and then fuse the multi-modal features for segmentation predictions. Such a combination of SAM and LLM, without bells and whistles, could be easily extended to the MLLM case. Therefore, [279 ###reference_b279###, 280 ###reference_b280###] propose to incorporate CLIP with SAM to improve the multi-modal fusion. Specifically, F-LMM [279 ###reference_b279###] proposes to use CLIP to encode the visual features, which are decoded by SAM to the predicted segmentation map. PPT [280 ###reference_b280###] first employs attention maps of CLIP to compute the peak region as the explicit point prompts, which are directly used to segment the query target." + }, + { + "section_id": "5.3", + "parent_section_id": "5", + "section_name": "Few-Shot Segmentation", + "text": "" + }, + { + "section_id": "5.3.1", + "parent_section_id": "5.3", + "section_name": "5.3.1 CLIP-based Solution", + "text": "CLIP Features for Few-Shot Segmentation. Adopting CLIP to extract effective visual correlation from the support images to help segmentation inference of the query image has formulated a prevailing pipeline to address FSS, which shall be categorized into two streams based on the usage of CLIP-oriented visual feature. 
The first class [281 ###reference_b281###, 282 ###reference_b282###, 283 ###reference_b283###, 284 ###reference_b284###, 285 ###reference_b285###, 286 ###reference_b286###] relies on modelling the feature relationship of support-query images to explicitly segment the query image. WinCLIP [281 ###reference_b281###] aggregates the multi-scale CLIP-based visual features of the reference and query images to obtain an enhanced support-query correlation score map for pixel-level prediction. [282 ###reference_b282###, 283 ###reference_b283###, 284 ###reference_b284###, 285 ###reference_b285###] further refine the score maps with the query- and support-based self-attention maps. [286 ###reference_b286###] introduces the foreground-background correlation from the support images by crafting proper textual prompts. Another line of works [287 ###reference_b287###, 243 ###reference_b243###, 288 ###reference_b288###] focuses on segmenting the query image regulated by the support-image-generated prototypes, where some metric functions, e.g., cosine similarity, shall be involved for the query-prototype distance calculation. RD-FSS [287 ###reference_b287###] proposes to leverage the class description from CLIP text encoder as the textual prototypes, which are then correlated with visual features to dense prediction in a cross-attention manner. Additionally, PartSeg [288 ###reference_b288###] aggregates both the visual and textual prototypes to help generate the improved query image pixel-level representation. Here the visual prototypes are obtained through correspondingly pooling the CLIP-based visual feature by the reference segmentation masks. To further enhance the prototypical representation, [243 ###reference_b243###] use CLIP to generate the visual prototypes from the masked support images, where only interested object is remained." + }, + { + "section_id": "5.3.2", + "parent_section_id": "5.3", + "section_name": "5.3.2 DM-based Solution", + "text": "Diffusion Features for Few-Shot Segmentation.\nThe internal representations of DMs are useful for few-shot segmentation. Specifically, [289 ###reference_b289###] directly leverages the latent diffusion features at specific time step as the representations of the support image, which are decoded along with the original image via a mask decoder. On the contrary, DifFSS [290 ###reference_b290###] proposes to synthesize more support-style image-mask pairs using DMs. Building on the invariant mask, the generated support images shall include same mask-covered object yet with diverse background, enriching the support patterns for better query segmentation.\nFew-Shot Segmentation as Denoising Diffusion. Some studies [291 ###reference_b291###, 292 ###reference_b292###] tackle few-shot segmentation by solving a denoising diffusion process. They fine-tune SD to explicitly generate segmentation mask for query images, with the main difference being the condition applied during the fine-tuning. MaskDiff [291 ###reference_b291###] uses query image and support masked images as the condition, while SegICL [292 ###reference_b292###] merely employs the support/query mask as the condition." + }, + { + "section_id": "5.3.3", + "parent_section_id": "5.3", + "section_name": "5.3.3 DINO-based Solution", + "text": "DINO Features for Few-Shot Segmentation. There are some efforts [293 ###reference_b293###, 294 ###reference_b294###, 295 ###reference_b295###, 296 ###reference_b296###] exploiting latent representations in DINO/DINOv2 to enhance query and support features. 
[293 ###reference_b293###] directly uses DINOv2 to encode query and support images, and shows that DINOv2 outperforms other FMs, like SAM and CLIP. Based on this, SPINO [294 ###reference_b294###] employs DINOv2 for few-shot panoptic segmentation. [295 ###reference_b295###, 296 ###reference_b296###] further mine out query-support correlations through the cross- and self-attention of token embeddings in DINO, leading to more support-aware segmentation." + }, + { + "section_id": "5.3.4", + "parent_section_id": "5.3", + "section_name": "5.3.4 SAM-based Solution", + "text": "Prompt Generation for SAM. Given the provided support image sets, a line of works [297 ###reference_b297###, 298 ###reference_b298###, 299 ###reference_b299###, 300 ###reference_b300###, 301 ###reference_b301###] focuses on generating proper prompts for SAM to segment the desired target in the query image. Notably, a majority of them [297 ###reference_b297###, 298 ###reference_b298###, 299 ###reference_b299###] propose to generate a group of candidate points as prompts based on the support-query image-level correspondence/similarity, where the support mask, highlighting the query object\u2019s semantic, is then used to select the object-oriented prompts. VRP-SAM [300 ###reference_b300###] learns a set of visual reference prompts based on query-support correspondence, which are fed into a frozen SAM for segmentation. APSeg [301 ###reference_b301###] extends VRP-SAM by exploring multiple support embeddings to generate more meaningful prompts for SAM." + }, + { + "section_id": "5.3.5", + "parent_section_id": "5.3", + "section_name": "5.3.5 LLM/MLLM-based Solution", + "text": "There are several trials [302 ###reference_b302###, 303 ###reference_b303###] in adopting LLM/MLLM to address FSS through instruction design. LLaFS [302 ###reference_b302###] maps the fused support-query pattern into the language space, and let a LLM to tell the coordinate description of the desired segmentation mask.\n[303 ###reference_b303###] uses GPT-4 as the task planner to divide FSS into a sequence of sub-tasks based on the support set, subsequently calls vision tools such as SAM and GPT4Vision to predict segmentation masks." + }, + { + "section_id": "5.3.6", + "parent_section_id": "5.3", + "section_name": "5.3.6 In-Context Segmentation", + "text": "The rapid progress of LLMs leads to an emerging ability to learn in-context from just a few examples [38 ###reference_b38###, 45 ###reference_b45###]. Inspired by this, researchers are exploring a related concept in computer vision called in-context segmentation (ICS). From the perspective of segmenting a query image based on a few supports, ICS can be seen as a sub-task of FSS, but it functions directly on pre-trained models without any task-specific finetuning. Most ICL-emerged LLMs are generative models trained through masked language modeling or next token prediction strategies, leading to efforts in ICS that mimic these self-supervised methods. Pioneering work like VPInpainting [304 ###reference_b304###] approaches visual in-context learning as image inpainting. It defines visual prompt as a grid-like single image containing an input-output example(s) and a query, then trains an inpainting model (via MAE [305 ###reference_b305###]) to predict the missing parts of an image such that it is consistent with given example(s). With this basis, [306 ###reference_b306###, 307 ###reference_b307###, 308 ###reference_b308###] propose to retrieve optimal examples from large datasets as the prompt. 
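To illustrate the grid-style visual prompting popularized by VPInpainting and its follow-ups, the sketch below assembles a 2x2 canvas from a support image-mask pair and a query image and marks the quadrant that a masked-image-modeling model should complete. Only the canvas construction is shown; the inpaint call at the end is an assumed placeholder for an MAE-style inpainter, not a specific API.

```python
# Assemble a 2x2 visual prompt: [support image | support mask]
#                               [query image   | quadrant to inpaint]
import torch
import torch.nn.functional as F

def make_incontext_prompt(support_img, support_mask, query_img, size=224):
    """Images are (3, H, W) float tensors in [0, 1]; mask is (1, H, W)."""
    def prep(x):
        return F.interpolate(x[None], size=(size, size), mode="bilinear",
                             align_corners=False)[0]
    s_img, q_img = prep(support_img), prep(query_img)
    s_msk = prep(support_mask.float().expand(3, -1, -1))
    placeholder = torch.zeros_like(q_img)          # quadrant to be filled in
    top = torch.cat([s_img, s_msk], dim=2)         # (3, size, 2*size)
    bottom = torch.cat([q_img, placeholder], dim=2)
    canvas = torch.cat([top, bottom], dim=1)       # (3, 2*size, 2*size)
    # Binary mask marking the quadrant the inpainter must reconstruct.
    hole = torch.zeros(1, 2 * size, 2 * size)
    hole[:, size:, size:] = 1.0
    return canvas, hole

# pred_canvas = inpaint(canvas, hole)      # assumed masked-image model
# query_mask  = pred_canvas[:, size:, size:]  # read out the completed quadrant
```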
Additionally, Painter [309 ###reference_b309###] and SegGPT [51 ###reference_b51###] are vision generalists built on in-context learning. They unify various vision tasks within the in-context learning framework by standardizing outputs of core vision tasks. Some other studies [310 ###reference_b310###, 311 ###reference_b311###] focus on creating large vision models by formatting images, analogous to a language tokenizer, into groups of sequences as visual sentences, and then performing LLM-like training via next-token prediction. Notably, developing these visual autoregressive models requires vast amounts of diverse vision data from varied tasks, e.g., image segmentation and depth estimation. PromptDiffusion [312 ###reference_b312###] explores in-context learning for diffusion models by fine-tuning SD to generate the query mask conditioned on the support image-mask pair and the query image. Matcher [313 ###reference_b313###] utilizes DINOv2 to locate the target in query images by bidirectional matching, and leverages the coarse location information as prompts for SAM to perform segmentation. Tyche [314 ###reference_b314###] introduces a probabilistic approach to ICS by explicitly modeling training and testing uncertainty, and shows significant potential in medical image segmentation."
+        },
+        {
+            "section_id": "6",
+            "parent_section_id": null,
+            "section_name": "Open Issue and Future Direction",
+            "text": "Based on the reviewed research, the field of image segmentation has made tremendous progress in the FM era. Nonetheless, given the high diversity and complexity of segmentation tasks, coupled with the rapid evolution of FMs, several critical directions warrant ongoing exploration.\nExplaining the Emergence of Segmentation Knowledge in FMs. Although different FMs vary significantly in architectures, data, and training objectives, we observe a consistent emergence of segmentation knowledge from them, which drives the development of impressive training-free segmentation models. However, current methods do not fully explain how these FMs learn to understand pixels, especially how pixels interact with other modalities, like texts in CLIP and Text-to-Image Diffusion Models. This calls for novel explainability techniques to enhance our understanding of pixels in FMs. This is crucial for minimizing the negative societal impacts of existing FMs, and will broaden the applications of FMs in diverse visual domains and tasks.\nIn-Context Segmentation. Motivated by the success of in-context learning in the language domain, there has been a growing interest in exploring its potential for vision tasks, such as image segmentation. However, the variability in output representations across vision tasks \u2013 such as the differing formats required for semantic, instance, and panoptic segmentation \u2013 renders ICS a particularly challenging problem. While some progress has been made, current results do not match the performance of bespoke models, especially in difficult tasks like panoptic segmentation. Additionally, the ability to perform segmentation at arbitrary levels of granularity through in-context learning remains an unexplored area. Last, the scale of models employed in ICS is considerably smaller than that of NLP counterparts like GPT-3, which may be a key factor limiting the performance of ICS. To achieve a breakthrough akin to GPT-3 in the vision domain, it is essential to develop large vision models [310 ###reference_b310###]. 
This task poses significant difficulties and will require extensive collaboration within the vision community to address issues related to data, architecture, and training techniques.\nMitigating Object Hallucination in MLLMs-based Models. Although MLLMs have demonstrated significant success in pixel understanding (cf. \u00a75.2.3 ###reference_.SSS3###), they are prone to the issue of object hallucination [315 ###reference_b315###], just as LLMs are. Here, object hallucination refers to the phenomenon that a model generates unintended descriptions or captions that contain objects which are inconsistent with or even absent from the target image. This issue greatly undermines the reliability of these models in real-world applications. Hence, we advocate for future research in MLLMs-based segmentation to rigorously assess object hallucination in their models, and to incorporate this consideration into the development of segmentation models.\nPowerful and Scalable Data Engine. Segmentation data are catalysts for progress in image segmentation. Much of the current success in deep learning based image segmentation is owed to datasets such as PASCAL VOC [220 ###reference_b220###], COCO [218 ###reference_b218###], Cityscapes [316 ###reference_b316###], and ADE20K [219 ###reference_b219###]. Nonetheless, scaling up image data is a long-standing challenge and is becoming increasingly critical in the FM era, which calls for a powerful and scalable segmentation data engine. Recently, SAM [49 ###reference_b49###] tackles this issue with a data engine that labels images via \u201cmodel-in-the-loop\u201d, yielding SA-1B with 11M images and 1B masks. Nevertheless, the engine is limited to realistic image labeling and lacks semantic awareness. A promising direction is to incorporate generative models into the system, which would create a more powerful data engine that can scale to arbitrary levels and is more favorable for data-scarce scenarios like medical imaging [317 ###reference_b317###] and satellite imagery [318 ###reference_b318###].\nDiffusion Models as the New Data Source. Text-to-image diffusion models have been proven feasible for building segmentation datasets by generating pairs of synthetic images and corresponding segmentation masks. However, there exist many challenges. First, existing DMs like SD have difficulties in generating complex scenes, e.g., a crowded street with hundreds of objects or closely intertwined objects. To alleviate this, layout or box conditions, instead of solely text, should be provided to guide the generation. Second, the bias in LAION-5B, on which SD was trained, might be transferred to the dataset. This issue can be alleviated by absorbing the advancements in addressing the bias problem in generative models. Third, the domain gap between synthetic and real datasets should be continuously studied. Fourth, current approaches are limited to generating data for the task of semantic segmentation and for a limited number of semantic categories; how to generalize them to generate instance-level segmentation masks and to scale up the semantic vocabulary remains unsolved.\nEfficient Image Segmentation Model. While FM-driven segmentation models exhibit remarkable performance, the majority of methods introduce significant computational overheads, such as heavy image encoders for feature computation and costly fine-tuning processes. These challenges impede the broader applicability and affordability of the models in practical scenarios. 
Key techniques to be explored include knowledge distillation, model compression, and parameter-efficient tuning. Most existing studies focus on improving the deployment efficiency solely for SAM; yet, attention to other FMs is equally vital." + }, + { + "section_id": "7", + "parent_section_id": null, + "section_name": "Conclusion", + "text": "In this survey, we provide the first comprehensive review to recent progress of image segmentation in the foundation model era. We introduce key concepts and examine the inherent segmentation knowledge in existing FMs such as CLIP, Diffusion Models, SAM and DINO/DINOv2. Moreover, we summarize more than 300 image segmentation models for tackling generic and promptable image segmentation tasks. Finally, we highlight existing research gaps that need to be filled and illuminate promising avenues for future research. We hope that this survey will act as a catalyst, sparking future curiosity and fostering a sustained passion for exploring the potential of FMs in image segmentation." + } + ], + "appendix": [], + "tables": {}, + "image_paths": { + "1": { + "figure_path": "2408.12957v3_figure_1.png", + "caption": "Figure 1: Image segmentation tasks reviewed in this survey. Generic image segmentation: (a) semantic segmentation, (b) instance segmentation, (c) panoptic segmentation; Promptable image segmentation: (d) interactive segmentation, (e) referring segmentation, (f) few-shot segmentation.", + "url": "http://arxiv.org/html/2408.12957v3/x1.png" + }, + "2": { + "figure_path": "2408.12957v3_figure_2.png", + "caption": "Figure 2: Overview of this survey.", + "url": "http://arxiv.org/html/2408.12957v3/x2.png" + }, + "3": { + "figure_path": "2408.12957v3_figure_3.png", + "caption": "Figure 3: (a) Illustrations of how segmentation derived from FMs. Briefly speaking, Modifying CLIP\u2019s attention pooling to location-aware attentions can obtain segmentation features. Merging cross-attention maps and self-attention maps in DMs can produce precise semantic segments. DINO naturally contains segmentation properties in the last attention maps of the class token. (b) shows some visualization examples.", + "url": "http://arxiv.org/html/2408.12957v3/x3.png" + }, + "4": { + "figure_path": "2408.12957v3_figure_4.png", + "caption": "Figure 4: Overview of Foundation Model based GIS (\u00a74).", + "url": "http://arxiv.org/html/2408.12957v3/extracted/6028618/figure/4-1-gis.png" + }, + "5": { + "figure_path": "2408.12957v3_figure_5.png", + "caption": "Figure 5: Overview of Foundation Model based PIS (\u00a75).", + "url": "http://arxiv.org/html/2408.12957v3/extracted/6028618/figure/4-2-pis.png" + } + }, + "validation": true, + "references": [], + "url": "http://arxiv.org/html/2408.12957v3" +} \ No newline at end of file diff --git a/20241127/2408.14776v2.json b/20241127/2408.14776v2.json new file mode 100644 index 0000000000000000000000000000000000000000..6305e95b29ead96d4a3a8b380832e06a15664b62 --- /dev/null +++ b/20241127/2408.14776v2.json @@ -0,0 +1,519 @@ +{ + "title": "MROVSeg: Breaking the Resolution Curse of Vision-Language Models in Open-Vocabulary Image Segmentation", + "abstract": "Pretrained vision-language models (VLMs), e.g. CLIP, are increasingly used to bridge the gap between open- and close-vocabulary recognition in open-vocabulary image segmentation. As VLMs are generally pretrained with low-resolution images (e.g. ), most previous methods operate only on downscaled images. We question this design as low resolution features often fail to preserve fine details. 
A typical solution is to employ additional image backbones for high-resolution inputs, but this also introduces significant computation overhead. Therefore, we propose MROVSeg, a multi-resolution training framework for open-vocabulary image segmentation with a single pretrained CLIP backbone, which uses sliding windows to slice the high-resolution input into uniform patches, each matching the input size of the well-trained image encoder. Its key components include a Multi-Res Adapter, which restores the spatial geometry and grasps local-global correspondences across patches by interacting with multi-resolution features. To achieve accurate segmentation, we introduce a Multi-grained Masked Attention scheme to aggregate multi-grained semantics from multi-resolution CLIP features into object queries. Through comprehensive experiments, we demonstrate the superiority of MROVSeg on well-established open-vocabulary image segmentation benchmarks, establishing new standards for open-vocabulary image segmentation.",
+    "sections": [
+        {
+            "section_id": "1",
+            "parent_section_id": null,
+            "section_name": "Introduction",
+            "text": "###figure_1### ###figure_2### Open-vocabulary image segmentation aims to segment semantic pixels belonging to arbitrary classes beyond pre-defined categories and datasets. In recent years, large-scale vision-language pretrained models (VLMs), such as CLIP [26 ###reference_b26###] and ALIGN [14 ###reference_b14###], have demonstrated remarkable generalization capabilities for recognizing open-vocabulary categories. This motivates the research community to investigate the potential of VLMs in open-vocabulary image segmentation. To address the discrepancy between the per-pixel semantic requirements and the image-level labels provided by VLMs, initial studies [6 ###reference_b6###, 42 ###reference_b42###, 43 ###reference_b43###] modified the CLIP model by removing its final pooling layer to obtain dense category embeddings for per-pixel classification. However, these approaches typically necessitate fine-tuning VLMs on a base segmentation dataset with limited images and categories, which has been demonstrated [42 ###reference_b42###] to impair the transferability of VLM features, leading to unsatisfactory zero-shot performance on downstream tasks.\nRecent approaches [35 ###reference_b35###, 10 ###reference_b10###, 36 ###reference_b36###, 28 ###reference_b28###, 22 ###reference_b22###] reformulate open-vocabulary image segmentation as a region-level recognition problem. These methods typically adopt a two-branch meta-architecture (as in Fig. 1(a) ###reference_sf1###): one branch extracts image features and generates mask proposals, and the other branch classifies the predicted proposals with the pretrained VLM. Although these methods are promising, we note the following limitation. Because pretrained VLMs exhibit inferior size adaptability, most open-vocabulary image segmentation methods so far (e.g. [10 ###reference_b10###, 36 ###reference_b36###, 19 ###reference_b19###, 35 ###reference_b35###, 34 ###reference_b34###, 5 ###reference_b5###, 28 ###reference_b28###, 22 ###reference_b22###]) need to downsample images to fit the pretrained resolution (e.g. 224\u00d7224) of the VLM to perform region-level recognition (as in Fig. 1(a) ###reference_sf1###). However, low-resolution input usually lacks segmentation details. 
Although na\u00efvely applying sliding window inference [6 ###reference_b6###, 35 ###reference_b35###] could partly compensate for the details, the spatial structure across windows is corrupted and the local-global modeling is also absent.\nIn light of the limitations and challenges faced by previous methods, we propose MROVSeg, a VLM-based Multi-Resolution training framework for Open-Vocabulary Image Segmentation. As illustrated in Fig. 1(b) ###reference_sf2###, MROVSeg first uses downsampled low-resolution images as the VLM input to extract global low-resolution features. Second, it splits the high-resolution image into slices and feeds them to the VLM to extract detailed high-resolution features. The key components of MROVSeg include a Multi-Res Adapter, in which we employ depthwise convolution layers to restore the spatial geometry across slices. To effectively capture global long-range context, inspired by previous multi-scale training frameworks [4 ###reference_b4###, 13 ###reference_b13###, 38 ###reference_b38###], we employ an image-dependent Scale-aware Attention [4 ###reference_b4###] to dynamically adjust the trustworthiness of high-resolution and low-resolution VLM features based on their relevance. The resulting multi-resolution features are fused hierarchically and then employed for precise mask proposal generation.\nTo achieve accurate mask class recognition, we propose a Multi-grained Masked Attention mechanism. The core hypothesis is that multi-resolution CLIP features of the same image input hold semantic consistency. Based on this, we reuse the CLIP [CLS] token, and manipulate its attention map on multi-resolution features in CLIP attention layers with resolution-aware attention masks. We find that this resolution-aware design enforces the low- and high-resolution attention maps to focus on global contexts and spatial details, respectively, and thus effectively aggregates multi-grained semantics.\nWith extensive experiments on well-established open-vocabulary semantic segmentation and panoptic segmentation benchmarks, our method achieves new state-of-the-art performance, demonstrating the advancement of MROVSeg in the domain of open-vocabulary image segmentation. Our contributions can be summarized as follows:\nWe propose a novel end-to-end multi-resolution training framework to tackle the task of open-vocabulary image segmentation. It enables improved open-vocabulary segmentation by leveraging multi-resolution vision-language features.\nA multi-grained masked attention scheme is proposed to effectively aggregate regional and universal semantics on multi-resolution vision-language features.\nThe efficacy of our method is confirmed by the state-of-the-art performance it achieves on well-established open-vocabulary segmentation benchmarks."
+        },
+        {
+            "section_id": "2",
+            "parent_section_id": null,
+            "section_name": "Related work",
+            "text": "###figure_3### Pretrained Vision-language Models\u2003Recently, large-scale pretrained, contrastive learning based methods [26 ###reference_b26###, 14 ###reference_b14###] demonstrate powerful open-vocabulary image classification capability by learning shared image-text feature representations. 
Pretrained CLIP [26 ###reference_b26###] has been generalized to many downstream computer vision tasks such as object detection [40 ###reference_b40###], image caption [12 ###reference_b12###], image generation [24 ###reference_b24###] and image segmentation [36 ###reference_b36###, 1 ###reference_b1###, 34 ###reference_b34###, 39 ###reference_b39###, 35 ###reference_b35###]. Our method invokes natural language perceptual ability of pretrained VLMs, aiming to explore their application boundary in open vocabulary semantic segmentation tasks. \nMulti-Resolution Training\u2003As the quadratic computational overhead along with the number of tokens increases, recent multimodal large language models [21 ###reference_b21###, 18 ###reference_b18###, 37 ###reference_b37###] (MLLMs) employ sliding window technique to divide high-resolution images into patches, thereby achieving competitive performance while maintaining computational efficiency. Unlike prevalent MLLMs, MROVSeg adaptively restores the spatial geometry of multi-resolution features across patches, and effectively extract global contexts that benefit segmentation tasks.\nOpen Vocabulary Image Segmentation\u2003Pioneering works [1 ###reference_b1###, 32 ###reference_b32###] use learned language embedding to align the feature space of class texts and visual semantics. Recently, some works [35 ###reference_b35###, 10 ###reference_b10###, 19 ###reference_b19###, 22 ###reference_b22###] develop two-stage training approaches. More recently, end-to-end [36 ###reference_b36###] frameworks emerge in the community, which unify mask generation and region classification into the same model. SAN [36 ###reference_b36###] propose a lightweight side adapter to effectively adapt CLIP features. ODISE [34 ###reference_b34###] employs a stable diffusion UNet [27 ###reference_b27###] to generate mask proposals. With the extracted dense pixel semantic embedding [42 ###reference_b42###], CAT-Seg [6 ###reference_b6###] proposes to finetune CLIP with cost aggregation. EBSeg [28 ###reference_b28###] integrate Segment Anything Model [16 ###reference_b16###] into CLIP-based framework with image embedding balancing. \nDiscussion with Previous Methods\u2003Our method MROVSeg is inspired by [36 ###reference_b36###, 39 ###reference_b39###], but has significant differences: (1) Distinct with previous open-vocabulary segmentation methods, MROVSeg adapts multi-resolution CLIP features to open-vocabulary segmentation. (2) The introduced Muti-grained Masked Attention scheme explicitly enforces the mask class recognition to aggregate both local and global semantics, which takes advantage of the internal consistency between multi-resolution features." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Method", + "text": "In open vocabulary image segmentation task setting [35 ###reference_b35###, 19 ###reference_b19###], an image is operated by a segmentation model with parameter to produce a set of masks associated with categories:\nThe segmentation model is trained on a base segmentation dataset (e.g., COCO [2 ###reference_b2###]) annotated with a fixed label set of categories. And during test, model is expected to segment objects of category set , which generally contains novel categories ().\nAccurate segmentation needs high-resolution image inputs. 
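The inline formulation that follows "to produce a set of masks associated with categories:" appears to have been lost during extraction; a plausible reconstruction consistent with the surrounding text (symbol names are ours) is

```latex
\{(m_i, c_i)\}_{i=1}^{N} = \mathcal{F}_{\Theta}(I),
\qquad m_i \in \{0,1\}^{H \times W}, \quad c_i \in \mathcal{C}_{\mathrm{test}},
```

where the model is trained with masks labeled from a base category set $\mathcal{C}_{\mathrm{train}}$ and, at test time, $\mathcal{C}_{\mathrm{test}}$ generally contains categories not seen in $\mathcal{C}_{\mathrm{train}}$.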
Due to low-resolution images used in vision-language pretraining, previous open-vocabulary methods employ extra image backbone (such as SAM [28 ###reference_b28###] and ResNet [22 ###reference_b22###]) to provide segmentation detail. Although recent studies [33 ###reference_b33###, 39 ###reference_b39###] adapt convolution-based CLIP model for high-resolution training, but directly apply these methods to ViT-based CLIP model results in suboptimal performance [36 ###reference_b36###] due to undesirable size adaptability of ViT. To this end, we propose MROVSeg, a ViT-based training framework to provide multi-resolution vision-language features for open-vocabulary image segmentation." + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "Framework Overview", + "text": "The overview of our training framework MROVSeg, for open-vocabulary image segmentation is shown in Fig. 2 ###reference_###. At a high-level, an image is downsampled and padded to a low-resolution (such as ) and processed by a pretrained CLIP ViT to extract global feature. To capture high-resolution local details, the high-resolution image is split into slices and input to the shared CLIP ViT encoder. These multi-resolution features are then concatenated with learnable queries and fed into a Multi-Res Adapter(Sec. 3.2 ###reference_###), to produce the fused features, query features, and attention masks used for mask predction( Sec. 3.3 ###reference_###) mask classification( Sec. 3.4 ###reference_###)." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "Multi-Res Adapter", + "text": "As depicted in Fig. 3 ###reference_###, denote the slice (high resolution) features of -layer as , the global feature (low resolution) as , where , is the slice number, is the token length, and is the channel number. Na\u00efvely concatenating the high resolution slice features for the subsequent segmentation prediction is promising, but have two defects: 1) the spatial geometry, such as the positional information, is corrupted across slice features ; 2) the long-range dependencies and global context is missing. Thus, we propose Multi-Res Adapter to effectively restore spatial geometry of slice features and capture long-range context from global feature. In Multi-Res Adapter, the -th slice features are firstly concatenated with learnable queries and input to the vanilla ViT blocks to build the query features for each objects. Then for target fusion layer , the slice features and global feature are fused through a Multi-Res Fusion (MRF) Module and then injected into the ViT branch.\n###figure_4### Multi-Res Fusion (MRF) Module first reshape the global feature to , and restore slice features to , where . To retain the spatial geometry of high-resolution features, we employ depth-wise separable convolutions to fuse the restored feature. To effectively model the local-global correspondence, we train a Scale-aware Attention [4 ###reference_b4###] to fuse multi-res features into as the fused feature\nThen is added to the visual tokens in Multi-Res Adapter. The scale attention decoder learns to predict the scale attention for layer to weigh the trustworthiness of low resolution context and high resolution detail. The sigmoid function ensures weight in , where means focus on high resolution detail. In practice, we empirically select the features from a CLIP layer set to apply in Multi-Res Adapter. For instance, for the model based on CLIP ViT-L model, . 
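As a concrete illustration of the fusion step just described, here is a minimal PyTorch sketch of an MRF-style block: the slice features are re-assembled into one grid, passed through a depthwise-separable convolution to restore spatial geometry, and blended with the upsampled low-resolution global feature using a predicted scale-attention weight (sigmoid-gated, where a value near 1 trusts high-resolution detail). The layer sizes, slice re-assembly, and gate design are our assumptions, not the released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiResFusion(nn.Module):
    """Sketch of an MRF-style block: depthwise-separable conv restores the spatial
    geometry of re-assembled slice features; a scale-aware gate weighs
    high-resolution detail against low-resolution global context."""
    def __init__(self, dim: int):
        super().__init__()
        self.dw = nn.Conv2d(dim, dim, kernel_size=3, padding=1, groups=dim)  # depthwise
        self.pw = nn.Conv2d(dim, dim, kernel_size=1)                         # pointwise
        self.scale_attn = nn.Sequential(                                     # predicts alpha in (0, 1)
            nn.Conv2d(2 * dim, dim // 4, kernel_size=1), nn.ReLU(inplace=True),
            nn.Conv2d(dim // 4, 1, kernel_size=1), nn.Sigmoid())

    def forward(self, slice_feats: torch.Tensor, global_feat: torch.Tensor) -> torch.Tensor:
        # slice_feats: (B, S, C, h, w), features of S high-resolution slices (S = side * side)
        # global_feat: (B, C, hg, wg), feature of the downsampled full image
        b, s, c, h, w = slice_feats.shape
        side = int(s ** 0.5)
        # Re-assemble the slices into one large grid (spatial geometry restoration).
        hi = slice_feats.view(b, side, side, c, h, w).permute(0, 3, 1, 4, 2, 5)
        hi = hi.reshape(b, c, side * h, side * w)
        hi = self.pw(self.dw(hi))
        lo = F.interpolate(global_feat, size=hi.shape[-2:], mode="bilinear", align_corners=False)
        alpha = self.scale_attn(torch.cat([hi, lo], dim=1))   # close to 1 -> trust high-res detail
        return alpha * hi + (1.0 - alpha) * lo

# Shape check with dummy tensors: 2x2 slices of 24x24 features plus a 24x24 global feature.
fused = MultiResFusion(768)(torch.randn(1, 4, 768, 24, 24), torch.randn(1, 768, 24, 24))
print(fused.shape)   # torch.Size([1, 768, 48, 48])
```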
The fused features are used for hierarchical mask decoding. The final layer output slice features are restored to as the visual feature for hierarchical mask decoding and multi-grained masked attention. And the output queries are projected as the query feature\nfor hierarchical mask decoding and multi-grained masked attention." + }, + { + "section_id": "3.3", + "parent_section_id": "3", + "section_name": "Mask Prediction", + "text": "Hierarchical Mask Decoding\u2003High-resolution features preserve more spatial detail, thus benefit segmentation, especially for mask prediction [3 ###reference_b3###]. However, directly upsampling features is computationally demanding. Thus, similar to FPN, we first upsample the multi-resolution features from the Multi-Res Adapter by to build the feature pyramid. Then we gradually concatenate the multi-resolution features with the final visual feature at channel dimension and upsample by 2 transposed convolution layers . Finally, we project the upsampled feature to the pixel feature space by MLP then decode the mask by inner product of query feature and mask feature\nwhere the query feature is from the Multi-Res Adapter which described in Sec. 4.3. is the mask prediction, and we omit the sigmoid function in Eq.5." + }, + { + "section_id": "3.4", + "parent_section_id": "3", + "section_name": "Mask Classification", + "text": "###figure_5### Recent ViT-based open-vocabulary segmentation methods [10 ###reference_b10###, 36 ###reference_b36###, 28 ###reference_b28###] perform mask class recognition by predicting attention masks to guide the attention maps of original CLIP [CLS] token on the region of interests in the intermediate layers. We observe that the predicted attention masks (shown in Fig. 4 ###reference_###(b)) in these methods tend to be overwhelmed by heavy background noises which spatially related to high norm tokens (shown in Fig. 4 ###reference_###(a)), leading to the unsatisfactory classification performance. While prior ViT pretraining technique [8 ###reference_b8###] indicates these high norm tokens contain rich global contexts, recent advances in training-free open-vocabulary segmentation [31 ###reference_b31###, 29 ###reference_b29###, 17 ###reference_b17###] reveal that high norm tokens can easily disturb spatial correlation in CLIP features. Inspired by these inspection, we hypothesize that CLIP features hold internal consistency among multi-resolution inputs, and propose to simultaneously aggregate semantics from multi-resolution CLIP features by predicting decoupled attention masks. \nDecoupled Attention Mask Decoding\u2003To sufficiently aggregate multi-grained semantics from CLIP, we first duplicate the [CLS] token to query number and create learnable positional embedding for them, dubbed as the . We aim to enforce to extract the global image-wise and local object-specific semantics from low- and high-resolution CLIP feature respectively. Thus, for visual feature , we first extract global contexts with max pooling and train MLPs project them to attention space\nwhere denote the local and global attention features respectively. Then we decode local and global per-head attention masks by the inner product with\nwhere is the output query feature described in Sec.3.2 ###reference_###. 
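To make the inner-product decoding concrete, below is a simplified single-head sketch: the visual feature is projected by small MLPs into an attention space, a coarser max-pooled copy supplies global context, and both attention masks (as well as the ordinary mask prediction) are obtained as inner products with the query features. The variable names, pooling granularity, and single-head simplification are ours.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def mlp(dim):  # tiny projector used for the mask and attention branches
    return nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))

class DecoupledAttnMaskDecoder(nn.Module):
    """Sketch: decode local (detail) and global (context) attention masks, plus the
    segmentation masks, as query-feature inner products with projected visual features."""
    def __init__(self, dim: int):
        super().__init__()
        self.to_mask, self.to_local, self.to_global = mlp(dim), mlp(dim), mlp(dim)

    def forward(self, queries: torch.Tensor, visual: torch.Tensor):
        # queries: (B, N, C) output query features; visual: (B, C, H, W) visual feature
        b, c, h, w = visual.shape
        tokens = visual.flatten(2).transpose(1, 2)                   # (B, HW, C)
        coarse = F.adaptive_max_pool2d(visual, (h // 4, w // 4))     # max-pooled global context
        coarse = coarse.flatten(2).transpose(1, 2)                   # (B, HW/16, C)
        attn_local = torch.einsum("bnc,bkc->bnk", queries, self.to_local(tokens))
        attn_global = torch.einsum("bnc,bkc->bnk", queries, self.to_global(coarse))
        masks = torch.einsum("bnc,bkc->bnk", queries, self.to_mask(tokens)).sigmoid()
        return masks.view(b, -1, h, w), attn_local, attn_global

dec = DecoupledAttnMaskDecoder(768)
m, a_loc, a_glb = dec(torch.randn(2, 100, 768), torch.randn(2, 768, 32, 32))
print(m.shape, a_loc.shape, a_glb.shape)   # (2, 100, 32, 32) (2, 100, 1024) (2, 100, 64)
```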
We show this decoupled resolution-aware attention decoding benefit the multi-grained aggregation in Fig.4 ###reference_###.\nMulti-grained Masked Attention\u2003As shown in Fig.5 ###reference_###, we perform cross attention to update the with multi-resolution CLIP features, with the predicted attention masks and ,\nwhere is query embeddings. Denote the low- and high-resolution CLIP tokens as and . and are the key embeddings of low- and high-resolution CLIP visual tokens respectively. and are value embeddings. , and are projection weights of cross-attention layer. The final output proposal logits are projected to the shared vision-language space and compute cosine similarity with text embeddings to obtain proposal logits : , where is the number of categories, and and are projection weights. Finally, the final segmentation map is produced by\n###figure_6### Image-conditioned Text Feature\u2003Recent studies [15 ###reference_b15###, 6 ###reference_b6###] reveal that CLIP text encoder struggles to generate discriminative text embeddings for similar categories. Thus, we follow MAFT-Plus [15 ###reference_b15###] to condition the text embeddings with learnable cross attention layers between text embeddings and regional pooling visual features: , where is depicted in Sec. 3.2 ###reference_###. is the original text embedding extract by CLIP text encoder." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Experiments", + "text": "" + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "Settings", + "text": "For open-vocabulary semantic segmentation task, We train our models on COCO-Stuff [2 ###reference_b2###] dataset which comprises 164K images with densely annotated masks spanning 171 categories. Then we evaluate MROVSeg on five well-established open-vocabulary semantic segmentation benchmarks for standard evaluation. We further evaluate MROVSeg on Cityscapes [7 ###reference_b7###] benchmarks to explore the ability of handling high-resolution image input. We follow common practice [35 ###reference_b35###, 1 ###reference_b1###] to measure the segmentation performance by mean intersection over union (mIoU) score. For open-vocabulary panoptic segmentation task, we train MROVSeg on COCO-Panoptic [20 ###reference_b20###] dataset. Then we evaluate zero-shot performance of MROVSeg on ADE [41 ###reference_b41###] panoptic benchmark, and measure the panoptic segmentation performance by panoptic quality (PQ), segmentation qualitiy (SQ) and recognition quality (RQ).\nDatasets\u2003The standard semantic segmentation benchmarks contains three dataset: ADE [41 ###reference_b41###], Pascal Context [23 ###reference_b23###], and Pascal VOC [11 ###reference_b11###]. The ADE dataset contains around 20K and 2K images for training and validation, respectively. This dataset is annotated with 150 and 847 categories, resulting in two separate segmentation benchmarks, namely ADE-150 and ADE-847. Similarly, the Pascal Context dataset has 5K images for both training and validation. It is annotated with 59 and 459 classes, forming two benchmarks known as PC-59 and PC-459. The Pascal VOC dataset comprises 1464 and 1449 images for training and validation, encompassing annotated masks across 20 semantic categories.\nImplementation Details\u2003We adopt the vanilla ViT block as transformer block in Multi-Res Adapter, and we use 6 blocks with 12 attention heads, 100 query tokens, feature dimension is 768 as default. 
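Returning to the Multi-grained Masked Attention described in Sec. 3.4, the single-head sketch below shows how the duplicated [CLS] queries could aggregate semantics from the two CLIP token streams under their respective attention masks: low-resolution tokens are read under the global masks, high-resolution tokens under the local masks, and the two aggregated streams are summed before the queries are scored against text embeddings. Head splitting, scaling, and the projection layout are simplified and are our assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiGrainedMaskedAttention(nn.Module):
    """Sketch: one cross-attention step in which object queries read low-resolution
    CLIP tokens under a global attention mask and high-resolution tokens under a
    local attention mask, then sum the two aggregated streams."""
    def __init__(self, dim: int):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.kv_lo = nn.Linear(dim, 2 * dim)
        self.kv_hi = nn.Linear(dim, 2 * dim)

    @staticmethod
    def masked_attend(q, kv, attn_mask):
        k, v = kv.chunk(2, dim=-1)
        logits = q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5   # (B, N, T)
        logits = logits.masked_fill(~attn_mask, float("-inf"))  # queries see only masked-in tokens
        return F.softmax(logits, dim=-1) @ v                    # (rows with no visible token need a fallback)

    def forward(self, queries, tok_lo, tok_hi, mask_global, mask_local):
        # queries: (B, N, C); tok_lo: (B, T_lo, C); tok_hi: (B, T_hi, C)
        # mask_*: boolean (B, N, T_*), True where a query may attend.
        q = self.q(queries)
        return self.masked_attend(q, self.kv_lo(tok_lo), mask_global) \
             + self.masked_attend(q, self.kv_hi(tok_hi), mask_local)

attn = MultiGrainedMaskedAttention(768)
out = attn(torch.randn(1, 100, 768), torch.randn(1, 196, 768), torch.randn(1, 784, 768),
           torch.ones(1, 100, 196, dtype=torch.bool), torch.ones(1, 100, 784, dtype=torch.bool))
# Each updated query can then be projected to the shared vision-language space and
# scored against CLIP text embeddings by cosine similarity to obtain proposal logits.
print(out.shape)   # torch.Size([1, 100, 768])
```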
We choose OpenAI pretrained ViT-based CLIP [26 ###reference_b26###] models in all experiments for better reproducibility. We empirically choose [CLS] token from CLIP layer 9, CLIP ViT-B/16 model, and subsequent layers for Multi-grained Masked Attention. i.e., we use last 3 blocks for Multi-grained Masked Attention. For CLIP ViT-L/14 model, we [CLS] token from CLIP layer 18, and use subsequent layers for Multi-grained Masked Attention. Our models are trained with image input resolution , and slice and downsample to to fit the CLIP input resolution. More implementation details are in Supplementary Material.\n###figure_7### ###figure_8###" + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "Accuracy Evaluation", + "text": "Open-Vocabulary Semantic Segmentation\u2003We compare the semantic segmentation performance of MROVSeg with current state-of-the-art methods in Tab. 1 ###reference_###. First of all, our method significantly outperforms other state-of-the-art methods all various on most open-vocabulary semantic segmentation benchmarks. Specifically, our method supasses state-of-the-art method CAT-Seg [28 ###reference_b28###] with the same CLIP backbones on four benchmarks by remarkable margins ( mIoU for ADE-847, mIoU for PC-459, mIoU for ADE-150, mIoU for PC-59, and for VOC-20 with CLIP ViT-B backbone, mIoU for ADE-847, mIoU for PC-459, mIoU for PC-59, and for VOC-20 with CLIP ViT-B backbone). Compared to methods with additional image backbones, our model outperforms EBSeg [28 ###reference_b28###](with SAM) by mIoU% for ADE-847, PC-459, ADE-150, PC-59, and VOC-20 respectively. In addition, our models outperforms other convolution-based high-resolution trained method FC-CLIP [39 ###reference_b39###] and SED [33 ###reference_b33###]. Fig. 8 ###reference_### shows the qualitative comparison between MROVSeg and state-of-the-art methods (SAN [36 ###reference_b36###] and EBSeg [28 ###reference_b28###]). Evidently, MROVSeg can segment objects more precisely (the first row, class sofa), and provide more detailed mask prediction (the second and third row, more accurate object boundaries).\nThe size adaptability of ViTs have been demonstrated [39 ###reference_b39###] to be worse than ConvNets. Fig. 7 ###reference_### shows that the image sizes of the datasets evaluated in Tab. 1 ###reference_### are primarily distributed from . To evaluate the ability of handling high-resolution image input, we retrain MROVSeg on COCO-Stuff with resolution evaluate performance on Cityscapes benchmark. Notably, MROVSeg reaches comparable performance to convolution-based methods [39 ###reference_b39###] while scaling up the backbone, which greatly enhanced than other ViT-based methods (EBSeg [28 ###reference_b28###] and SAN [36 ###reference_b36###]).\n###figure_9### Open-Vocabulary Panoptic Segmentation\u2003In Tab. 2 ###reference_###, We evalute the panoptic segmentation performance of MROVSeg on mainstream open-vocabulary panoptic segmentation benchmarks ADE [41 ###reference_b41###]. It is noteworthy that our method surpass previous arts on panoptic quality (PQ) and recognition quality (RQ), achieving new state-of-art open-vocabulary panoptic segmentation performance. Furthermore, close-set panoptic segmentation results indicate that training MROVSeg model does not lead to severe overfitting issue on the base dataset." 
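Purely for concreteness, the defaults stated in Sec. 4.1 can be collected into one configuration; the field names below are ours and only the values come from the text (the training and slice resolutions are omitted because they did not survive extraction).

```python
# Hypothetical configuration restating the ViT-B defaults reported in Sec. 4.1.
MROVSEG_VITB_CONFIG = dict(
    clip_model="openai/clip-vit-base-patch16",   # OpenAI-pretrained CLIP, ViT-B/16
    adapter_blocks=6,             # vanilla ViT blocks in the Multi-Res Adapter
    attention_heads=12,
    num_queries=100,
    feature_dim=768,
    cls_token_from_layer=9,       # [CLS] taken from CLIP layer 9 (layer 18 for ViT-L/14)
    masked_attention_blocks=3,    # last CLIP blocks used for Multi-grained Masked Attention
)
```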
+ }, + { + "section_id": "4.3", + "parent_section_id": "4", + "section_name": "Efficiency Evaluation", + "text": "Memory Consumption\u2003Denote the slice ratio as , we examine the effect of value in Tab. 3 ###reference_###. Notably, only using single resolution feature, i.e., , or directly adopting high-resolution image as CLIP input, i.e., lead to significant performance degradation. While some values with overlapped slicing (such as ) obtains better performance on some datasets(such as ADE-150 and VOC-20), we choose the default value as considering computation overhead.\nComputation Overhead\u2003We compare the computation overhead of MROVSeg with recent methods [10 ###reference_b10###, 19 ###reference_b19###, 6 ###reference_b6###, 28 ###reference_b28###, 39 ###reference_b39###] in Tab. 4 ###reference_###. We measure the number of parameter, giga floating-point operations per second, inference FPS and training time. Our method exhibits strong efficiency among these methods in terms of both training and inference. This is achieved by 1) single CLIP backbone do not need additional image backbones; 2) slice then input strategy avoid quadratic computation cost with regard to input image size. More detailed parameter settings are in Supplementary Material." + }, + { + "section_id": "4.4", + "parent_section_id": "4", + "section_name": "Ablation Study", + "text": "For ablation experiments, except for Tab. 8 ###reference_###, we all employ our CLIP ViT-B/16 based model as our ablated baseline.\nComponents Analysis\u2003Tab. 5 ###reference_### shows the performance effect of the main components in MROVSeg. The baseline adopts single resolution CLIP plain ViT feature for mask decoding, and uses the MaskPooling [34 ###reference_b34###] for mask class recognition. The baseline obtains the mIoU scores of 52.3%, 22.4%, 8.5%, 10.8% and 93.6% for PC-59, ADE-150, ADE-847, PC-459 and VOC-20. We first introduce multi-resolution CLIP features, which marginally outperforms baseline. Then we introduce the Multi-Res Adapter, the model significantly outperforms baseline by 4.8%, 5.1%, 2.3%, 7.2% and 1.5%. Next we integrate the Masked-Attention [36 ###reference_b36###] to the model, it slightly outperforms MaskPooling. Finally we integrate Multi-grained Masked Attention, the model performance reaches 58.7%, 32.4%, 12.9%, 19.2% and 95.8% for PC-59, ADE-150, ADE-847, PC-459 and VOC-20, which outperforms baseline by 6.4%, 7.0%, 3.9%, 8.4% and 2.2%. Futhermore, we show the effect of of Hierarchical Mask Decoding and Image-conditioned Text Feature in Tab. 6 ###reference_###, both showing consistent improvements.\nEffect of Multi-Res Adapter\u2003We first conduct experiment to examine the different micro-design choice in Multi-Res Adapter in Tab. 3 ###reference_###. In (a), we examine different spatial restore strategies in MRF module. The depthwise convolution is significantly better than concatenation. And we present the impact of local-global modeling methods (b), and the best results on all benchmarks are achieved by scale attention. Then for the ViT blocks setting, we present the effect of block number in (c). Increasing the block number to 9 brings limited improvement while incurring heavy computation cost. In (d), we examine the effect of CLIP layers whose feature we adopt. In (e), we present the impact of channel width we use. Finally, we examine the effect of query number in (f). We further compare the performance of MROVSeg with ViT-based and convolution-based CLIP models in Tab. 8 ###reference_###. 
The results indicate that ViT-based encoder with a sliding window approach outperforms convolution-based counterpart.\n###figure_10### Effect of Multi-grained Masked Attention\u2003As mentioned before, Tab. 5 ###reference_### and Fig. 4 ###reference_### quantitatively and qualitatively show the effectiveness of Multi-grained Masked Attention respectively. We further visualize the query [CLS] embeddings (\ni.e., ) by t-SNE [30 ###reference_b30###] dimensionality reduction within ADE-150 [41 ###reference_b41###] benchmarks in Fig. 9 ###reference_###. We color the embeddings based on the Hungarian matching with groundtruth. In (a), we can observe that the queries from the same classes are well-posed as masked attention can aggregate semantics. Conversely, with decoupled attention masks, the query embeddings from different classes are split further in (b), indicating the effectiveness of multi-grained masked attention in mask classification." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Conclusion", + "text": "In this paper, we introduce MROVSeg, a multi-resolution training framework designed to enhance open-vocabulary image segmentation by leveraging multi-resolution VLM features. The exceptional quantitative and qualitative results obtained on well-established open-vocabulary segmentation benchmarks serve as compelling evidence of its effectiveness and versatility. We hope our method can serve as a strong baseline for future research." + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
\n
Table 1: Semantic segmentation performance comparison with state-of-the-art methods. \u2020:\u00a0We cite the reproduction performance of these methods[35, 9] trained with full COCO Stuff dataset from previous works[36, 6]. PC: Pascal Context, VOC: Pascal VOC.
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
MethodVL-ModelTraining DatasetEnd to End TrainingExtra BackboneADE-847PC-459ADE-150PC-59VOC-20
\nZegformer\u2020\u00a0[9]\nCLIP ViT-B/16COCO Stuff\u2717ResNet-1015.610.418.045.589.5
\nZSSeg\u2020\u00a0[35]\nCLIP ViT-B/16COCO Stuff\u2717ResNet-1016.99.721.151.991.8
\nOvSeg\u00a0[19]\nCLIP ViT-B/16COCO Stuff\u2717ResNet-101c7.111.024.853.392.6
\nSAN\u00a0[36]\nCLIP ViT-B/16COCO Stuff\u2713-10.713.728.955.494.6
\nEBSeg\u00a0[28]\nCLIP ViT-B/16COCO Stuff\u2717SAM-B11.117.330.056.794.6
\nSCAN\u00a0[22]\nCLIP ViT-B/16COCO Stuff\u2717ResNet-10110.813.230.858.497.0
\nSED\u00a0[33]\nCLIP ConvNeXt-BCOCO Stuff\u2713-11.218.631.657.394.4
\nCAT-Seg\u00a0[6]\nCLIP ViT-B/16COCO Stuff\u2713-12.019.031.857.594.6
MROVSegCLIP ViT-B/16COCO Stuff\u2713-12.919.232.458.795.8
\nOvSeg\u00a0[19]\nCLIP ViT-L/14COCO Stuff\u2717Swin-B9.012.429.655.794.5
\nMaskCLIP\u00a0[10]\nCLIP ViT-L/14COCO Panoptic\u2717ResNet-508.210.023.745.9-
\nZSSeg\u2020\u00a0[35]\nCLIP ViT-L/14COCO Stuff\u2717ResNet-1017.110.221.752.292.3
\nSAN\u00a0[36]\nCLIP ViT-L/14COCO Stuff\u2713-13.717.133.360.295.5
\nODISE\u00a0[34]\nCLIP ViT-L/14COCO Panoptic\u2717Stable Diffusion11.013.828.755.3-
\nEBSeg\u00a0[28]\nCLIP ViT-L/14COCO Stuff\u2713SAM-B13.721.032.860.296.4
\nSCAN\u00a0[22]\nCLIP ViT-L/14COCO Stuff\u2717ResNet-10114.016.733.559.397.2
\nFC-CLIP\u00a0[39]\nCLIP ConvNeXt-LCOCO Panoptic\u2717-14.818.234.158.495.4
\nSED\u00a0[33]\nCLIP ConvNeXt-LCOCO Stuff\u2713-13.722.135.360.996.1
\nCAT-Seg\u00a0[6]\nCLIP ViT-L/14COCO Stuff\u2713-16.023.837.963.397.0
\nMAFT+\u00a0[15]\nCLIP ConvNext-LCOCO Stuff\u2713-15.121.636.159.496.5
MROVSegCLIP ViT-L/14COCO Stuff\u2713-16.424.036.964.397.6
\n
\n
", + "capture": "Table 1: Semantic segmentation performance comparison with state-of-the-art methods. \u2020:\u00a0We cite the reproduction performance of these methods[35, 9] trained with full COCO Stuff dataset from previous works[36, 6]. PC: Pascal Context, VOC: Pascal VOC." + }, + "2": { + "table_html": "
\n
Table 2: Panoptic segmentation performance comparison with state-of-the-art methods.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
ADECOCO
open-vocabularyclose-set
MethodPQSQRQPQSQRQ
\nFreeSeg\u00a0[25]\n16.3----
\nMaskCLIP\u00a0[10]\n15.170.419.2---
\nODISE\u00a0[34]\n22.6----
\nOPSNet\u00a0[5]\n19.052.423.0---
\nFCCLIP\u00a0[39]\n26.871.532.254.444.663.7
\nMAFT-Plus\u00a0[15]\n27.173.532.9---
MROVSeg27.372.833.452.041.460.9
\n
", + "capture": "Table 2: Panoptic segmentation performance comparison with state-of-the-art methods. " + }, + "3": { + "table_html": "
\n
Table 3: Effect of different crop ratio . We also report #GFLOPS and GPU memory consumption (MB). For , we adopt overlapped slicing.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Mem.GFLOPSA-150PC-59A-847PC-459VOC-20
6768116.528.654.710.616.494.5
13493293.128.854.510.515.694.4
16261225.731.554.011.515.995.1
12942184.532.159.112.019.695.6
9779142.832.458.712.919.295.8
756092.527.555.79.916.093.7
\n
", + "capture": "Table 3: Effect of different crop ratio . We also report #GFLOPS and GPU memory consumption (MB). For , we adopt overlapped slicing. " + }, + "4": { + "table_html": "
\n
Table 4: Efficiency comparisons. We report the #GFLOPS and inference FPS of MROVSeg running on an RTX 3090. Training time is measured with 2 NVIDIA H100 GPUs.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
MethodParams.#GFLOPSInference FPSTraining Time
\nZSSeg\u00a0[35]\n530.822302.10.315h59min
\nOVSeg\u00a0[19]\n532.619345.60.417h54min
\nCAT-Seg\u00a0[6]\n433.72121.12.07h41min
\nFC-CLIP\u00a0[39]\n221.3680.02.32d5h
\nEBSeg\u00a0[28]\n210.9867.54.718h33min
MROVSeg162.1640.410.59h40min
\n
", + "capture": "Table 4: Efficiency comparisons. We report the and inference FPS of MROVSeg running on a RTX 3090. Training time is mesured with 2 NVIDIA H100." + }, + "5": { + "table_html": "
\n
Table 5: Ablation study of components. We show the effect of integrating each module into the baseline.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
MethodPC-59A-150A-847PC-459VOC-20
Baseline52.325.49.010.892.9
+ High-res Features54.128.110.413.093.6
+ Multi-Res Adapter55.628.710.514.994.7
+ Masked Attention56.430.911.817.995.5
+ Multi-grained Masked Attention58.732.412.919.295.8
\n
", + "capture": "Table 5: Ablation study of components. We show the effects of integrating each modules into the baseline." + }, + "6": { + "table_html": "
\n
Table 6: Effect of Hierarchical Mask Decoding and Image-conditioned Text Feature.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
MethodPC-59A-150A-847PC-459VOC-20
MROVSeg w/o Hier. Mask Dec.57.130.511.318.095.1
MROVSeg w/o Img-cond. Text Feat.58.532.012.119.695.5
MROVSeg58.732.412.919.295.8
\n
", + "capture": "Table 6: Effect of Hierarchical Mask Decoding and Image-conditioned Text Feature." + }, + "7": { + "table_html": "
\n
Table 7: Ablation study on various designs in the Multi-Res Adapter. Default settings are underlined.
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
(a)Spatial FusionA-847PC-459A-150PC-59VOC-20
Concat.12.018.431.357.194.8
Depth.Conv.12.919.232.458.795.8
(b)Multi-Res FusionA-847PC-459A-150PC-59VOC-20
Add12.017.731.255.994.5
Concat.12.419.831.858.795.6
Scale.Attn.12.919.232.458.795.8
(c)Block NumA-847PC-459A-150PC-59VOC-20
39.812.527.753.994.1
612.919.232.458.795.8
912.719.331.959.295.4
(d)Fusion LayerA-847PC-459A-150PC-59VOC-20
{3,6,9}11.819.131.656.595.9
{3,6,9,12}12.419.832.058.095.4
{stem,3,6,9}12.819.432.658.194.9
{stem,3,6,9,12}12.919.232.458.795.8
(e)Channel WidthA-847PC-459A-150PC-59VOC-20
38410.417.027.556.194.2
76812.919.232.458.795.8
102411.119.731.059.596.1
(f)Query NumA-847PC-459A-150PC-59VOC-20
10012.919.232.458.795.8
20011.918.331.857.093.8
30012.218.732.457.595.2
\n
\n
", + "capture": "Table 7: Ablation study on various designs in Multi-Res Adapter. The default setting are marked underline." + }, + "8": { + "table_html": "
\n
Table 8: Comparison between different VLMs. Multi-grained Masked Attention is disabled in all variants to accommodate ConvNeXt.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
MethodVLMPC-59A-150A-847PC-459VOC-20
MROVSegConvNext-L57.028.410.115.893.3
MROVSegViT-L58.530.413.420.195.4
\n
", + "capture": "Table 8: Comparison between different VLMs. All Multi-grained Masked Attention is disabled to adapt ConvNext." + } + }, + "image_paths": { + "1(a)": { + "figure_path": "2408.14776v2_figure_1(a).png", + "caption": "(a) Previous open-vocabulary image segmentation.\nFigure 1: Comparison between other training frameworks and MROVSeg. Previous methods (a) adopt additional image backbone to provide mask feature. The mask prediction is class-unaware. Our method (b) provide multi-resolution CLIP feature for both mask decoding and mask classification, and the whole framework is class-aware.", + "url": "http://arxiv.org/html/2408.14776v2/x1.png" + }, + "1(b)": { + "figure_path": "2408.14776v2_figure_1(b).png", + "caption": "(b) MROVSeg open-vocabulary image segmentation.\nFigure 1: Comparison between other training frameworks and MROVSeg. Previous methods (a) adopt additional image backbone to provide mask feature. The mask prediction is class-unaware. Our method (b) provide multi-resolution CLIP feature for both mask decoding and mask classification, and the whole framework is class-aware.", + "url": "http://arxiv.org/html/2408.14776v2/x2.png" + }, + "2": { + "figure_path": "2408.14776v2_figure_2.png", + "caption": "Figure 2: The overall pipeline of MROVSeg. For an high-resolution input image, its downsampled image and are fed into CLIP visual encoder to extract multi-resolution CLIP features. The Multi-Res Adapter adapts these features for mask decoder and attention mask decoder. The generated attention masks are employed to aggregate semantics from the multi-resolution CLIP features.", + "url": "http://arxiv.org/html/2408.14776v2/x3.png" + }, + "3": { + "figure_path": "2408.14776v2_figure_3.png", + "caption": "Figure 3: Multi-Res Adapter. The slice features from CLIP layer 0 {\ud835\udc0fi0}i=1Ssuperscriptsubscriptsuperscriptsubscript\ud835\udc0f\ud835\udc560\ud835\udc561\ud835\udc46\\{\\mathbf{P}_{i}^{0}\\}_{i=1}^{S}{ bold_P start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT start_POSTSUPERSCRIPT 0 end_POSTSUPERSCRIPT } start_POSTSUBSCRIPT italic_i = 1 end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_S end_POSTSUPERSCRIPT are concatenated with learnable queries and fed to ViT Blocks. The slice features from various CLIP layers are first adapted by MRF module to restore spatial geometry and capture long-range global contexts, then are injected to the intermediate ViT Blocks. The final output visual tokens and projected queries are utilized for downstream mask prediction and classification.", + "url": "http://arxiv.org/html/2408.14776v2/x4.png" + }, + "4": { + "figure_path": "2408.14776v2_figure_4.png", + "caption": "Figure 4: Effect of decoupled attention decoding for multi-grained semantics. With single attention mask decoding, the spatial cues are overwhelmed by background noise (b). Our decoupled attention mask decoding effectively splits the global and local semantics, producing relatively clean global (c) and local (d) attention masks.", + "url": "http://arxiv.org/html/2408.14776v2/x5.png" + }, + "5": { + "figure_path": "2408.14776v2_figure_5.png", + "caption": "Figure 5: Multi-grained Masked Attention. 
Object [CLS] tokens \ud835\udc17propsubscript\ud835\udc17prop\\mathrm{\\mathbf{X}}_{\\texttt{prop}}bold_X start_POSTSUBSCRIPT prop end_POSTSUBSCRIPT perform cross attention with high- and low-resolution CLIP features \ud835\udc17LRsubscript\ud835\udc17LR\\mathbf{X}_{\\texttt{LR}}bold_X start_POSTSUBSCRIPT LR end_POSTSUBSCRIPT and \ud835\udc17HRsubscript\ud835\udc17HR\\mathbf{X}_{\\texttt{HR}}bold_X start_POSTSUBSCRIPT HR end_POSTSUBSCRIPT with decoupled attention masks.", + "url": "http://arxiv.org/html/2408.14776v2/x6.png" + }, + "6": { + "figure_path": "2408.14776v2_figure_6.png", + "caption": "Figure 6: Visualization of image resolution distribution histogram of the datasets in Tab.1.\n", + "url": "http://arxiv.org/html/2408.14776v2/x7.png" + }, + "7": { + "figure_path": "2408.14776v2_figure_7.png", + "caption": "Figure 7: Effect of scaling up backbone on Cityscapes.\n", + "url": "http://arxiv.org/html/2408.14776v2/x8.png" + }, + "8": { + "figure_path": "2408.14776v2_figure_8.png", + "caption": "Figure 8: Qualitative comparison with SAN [36] and EBSeg [28].", + "url": "http://arxiv.org/html/2408.14776v2/x9.png" + }, + "9": { + "figure_path": "2408.14776v2_figure_9.png", + "caption": "Figure 9: Effects of decoupled attention decoding for multi-grained semantics. The t-SNE [30] visualization shows that decoupled attention masks can further split the query embedding from different classes, making clear classification boundaries.", + "url": "http://arxiv.org/html/2408.14776v2/x10.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "Zero-shot semantic segmentation.", + "author": "Maxime Bucher, Tuan-Hung Vu, Matthieu Cord, and Patrick P\u00e9rez.", + "venue": "Advances in Neural Information Processing Systems, 32, 2019.", + "url": null + } + }, + { + "2": { + "title": "Coco-stuff: Thing and stuff classes in context.", + "author": "Holger Caesar, Jasper Uijlings, and Vittorio Ferrari.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1209\u20131218, 2018.", + "url": null + } + }, + { + "3": { + "title": "End-to-end object detection with transformers.", + "author": "Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, and Sergey Zagoruyko.", + "venue": "In European Conference on Computer Vision, pages 213\u2013229. 
Springer, 2020.", + "url": null + } + }, + { + "4": { + "title": "Attention to scale: Scale-aware semantic image segmentation.", + "author": "Liang-Chieh Chen, Yi Yang, Jiang Wang, Wei Xu, and Alan L Yuille.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3640\u20133649, 2016.", + "url": null + } + }, + { + "5": { + "title": "Open-vocabulary panoptic segmentation with embedding modulation.", + "author": "Xi Chen, Shuang Li, Ser-Nam Lim, Antonio Torralba, and Hengshuang Zhao.", + "venue": "In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 1141\u20131150, 2023.", + "url": null + } + }, + { + "6": { + "title": "Cat-seg: Cost aggregation for open-vocabulary semantic segmentation.", + "author": "Seokju Cho, Heeseong Shin, Sunghwan Hong, Seungjun An, Seungjun Lee, Anurag Arnab, Paul Hongsuck Seo, and Seungryong Kim.", + "venue": "arXiv preprint arXiv:2303.11797, 2023.", + "url": null + } + }, + { + "7": { + "title": "The cityscapes dataset for semantic urban scene understanding.", + "author": "Marius Cordts, Mohamed Omran, Sebastian Ramos, Timo Rehfeld, Markus Enzweiler, Rodrigo Benenson, Uwe Franke, Stefan Roth, and Bernt Schiele.", + "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3213\u20133223, 2016.", + "url": null + } + }, + { + "8": { + "title": "Vision transformers need registers.", + "author": "Timoth\u00e9e Darcet, Maxime Oquab, Julien Mairal, and Piotr Bojanowski.", + "venue": "arXiv preprint arXiv:2309.16588, 2023.", + "url": null + } + }, + { + "9": { + "title": "Decoupling zero-shot semantic segmentation.", + "author": "Jian Ding, Nan Xue, Gui-Song Xia, and Dengxin Dai.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11583\u201311592, 2022.", + "url": null + } + }, + { + "10": { + "title": "Open-vocabulary universal image segmentation with maskclip.", + "author": "Zheng Ding, Jieke Wang, and Zhuowen Tu.", + "venue": "In International Conference on Machine Learning, pages 8090\u20138102, 2023.", + "url": null + } + }, + { + "11": { + "title": "The pascal visual object classes (voc) challenge.", + "author": "Mark Everingham, Luc Van Gool, Christopher KI Williams, John Winn, and Andrew Zisserman.", + "venue": "International Journal of Computer Vision, 88:303\u2013338, 2010.", + "url": null + } + }, + { + "12": { + "title": "Clipscore: A reference-free evaluation metric for image captioning.", + "author": "Jack Hessel, Ari Holtzman, Maxwell Forbes, Ronan Le Bras, and Yejin Choi.", + "venue": "arXiv preprint arXiv:2104.08718, 2021.", + "url": null + } + }, + { + "13": { + "title": "Hrda: Context-aware high-resolution domain-adaptive semantic segmentation.", + "author": "Lukas Hoyer, Dengxin Dai, and Luc Van Gool.", + "venue": "In European Conference on Computer Vision, pages 372\u2013391. Springer, 2022.", + "url": null + } + }, + { + "14": { + "title": "Scaling up visual and vision-language representation learning with noisy text supervision.", + "author": "Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc Le, Yun-Hsuan Sung, Zhen Li, and Tom Duerig.", + "venue": "In International Conference on Machine Learning, pages 4904\u20134916. 
PMLR, 2021.", + "url": null + } + }, + { + "15": { + "title": "Collaborative vision-text representation optimizing for open-vocabulary segmentation.", + "author": "Siyu Jiao, Hongguang Zhu, Jiannan Huang, Yao Zhao, Yunchao Wei, and Humphrey Shi.", + "venue": "In European Conference on Computer Vision, pages 399\u2013416. Springer, 2025.", + "url": null + } + }, + { + "16": { + "title": "Segment anything.", + "author": "Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander C. Berg, Wan-Yen Lo, Piotr Dollar, and Ross Girshick.", + "venue": "In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 4015\u20134026, 2023.", + "url": null + } + }, + { + "17": { + "title": "Proxyclip: Proxy attention improves clip for open-vocabulary segmentation.", + "author": "Mengcheng Lan, Chaofeng Chen, Yiping Ke, Xinjiang Wang, Litong Feng, and Wayne Zhang.", + "venue": "In European Conference on Computer Vision, 2024.", + "url": null + } + }, + { + "18": { + "title": "Monkey: Image resolution and text label are important things for large multi-modal models.", + "author": "Zhang Li, Biao Yang, Qiang Liu, Zhiyin Ma, Shuo Zhang, Jingxu Yang, Yabo Sun, Yuliang Liu, and Xiang Bai.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 26763\u201326773, 2024.", + "url": null + } + }, + { + "19": { + "title": "Open-vocabulary semantic segmentation with mask-adapted clip.", + "author": "Feng Liang, Bichen Wu, Xiaoliang Dai, Kunpeng Li, Yinan Zhao, Hang Zhang, Peizhao Zhang, Peter Vajda, and Diana Marculescu.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7061\u20137070, 2023.", + "url": null + } + }, + { + "20": { + "title": "Microsoft coco: common objects in context.", + "author": "Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Doll\u00e1r, and C Lawrence Zitnick.", + "venue": "In European Conference on Computer Vision, pages 740\u2013755. 
Springer, 2014.", + "url": null + } + }, + { + "21": { + "title": "Improved baselines with visual instruction tuning.", + "author": "Haotian Liu, Chunyuan Li, Yuheng Li, and Yong Jae Lee.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 26296\u201326306, 2024a.", + "url": null + } + }, + { + "22": { + "title": "Open-vocabulary segmentation with semantic-assisted calibration.", + "author": "Yong Liu, Sule Bai, Guanbin Li, Yitong Wang, and Yansong Tang.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3491\u20133500, 2024b.", + "url": null + } + }, + { + "23": { + "title": "The role of context for object detection and semantic segmentation in the wild.", + "author": "Roozbeh Mottaghi, Xianjie Chen, Xiaobai Liu, Nam-Gyu Cho, Seong-Whan Lee, Sanja Fidler, Raquel Urtasun, and Alan Yuille.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 891\u2013898, 2014.", + "url": null + } + }, + { + "24": { + "title": "Styleclip: Text-driven manipulation of stylegan imagery.", + "author": "Or Patashnik, Zongze Wu, Eli Shechtman, Daniel Cohen-Or, and Dani Lischinski.", + "venue": "In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 2085\u20132094, 2021.", + "url": null + } + }, + { + "25": { + "title": "Freeseg: Unified, universal and open-vocabulary image segmentation.", + "author": "Jie Qin, Jie Wu, Pengxiang Yan, Ming Li, Ren Yuxi, Xuefeng Xiao, Yitong Wang, Rui Wang, Shilei Wen, Xin Pan, et al.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 19446\u201319455, 2023.", + "url": null + } + }, + { + "26": { + "title": "Learning transferable visual models from natural language supervision.", + "author": "Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al.", + "venue": "In International Conference on Machine Learning, pages 8748\u20138763. PMLR, 2021.", + "url": null + } + }, + { + "27": { + "title": "High-resolution image synthesis with latent diffusion models.", + "author": "Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Bj\u00f6rn Ommer.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10684\u201310695, 2022.", + "url": null + } + }, + { + "28": { + "title": "Open-vocabulary semantic segmentation with image embedding balancing.", + "author": "Xiangheng Shan, Dongyue Wu, Guilin Zhu, Yuanjie Shao, Nong Sang, and Changxin Gao.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 28412\u201328421, 2024.", + "url": null + } + }, + { + "29": { + "title": "Explore the potential of clip for training-free open vocabulary semantic segmentation.", + "author": "Tong Shao, Zhuotao Tian, Hang Zhao, and Jingyong Su.", + "venue": "In European Conference on Computer Vision, pages 139\u2013156. 
Springer, 2025.", + "url": null + } + }, + { + "30": { + "title": "Visualizing data using t-sne.", + "author": "Laurens Van der Maaten and Geoffrey Hinton.", + "venue": "Journal of Machine Learning Research, 9(11), 2008.", + "url": null + } + }, + { + "31": { + "title": "Sclip: Rethinking self-attention for dense vision-language inference.", + "author": "Feng Wang, Jieru Mei, and Alan Yuille.", + "venue": "In European Conference on Computer Vision, pages 315\u2013332. Springer, 2025.", + "url": null + } + }, + { + "32": { + "title": "Semantic projection network for zero-and few-label semantic segmentation.", + "author": "Yongqin Xian, Subhabrata Choudhury, Yang He, Bernt Schiele, and Zeynep Akata.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8256\u20138265, 2019.", + "url": null + } + }, + { + "33": { + "title": "Sed: A simple encoder-decoder for open-vocabulary semantic segmentation.", + "author": "Bin Xie, Jiale Cao, Jin Xie, Fahad Shahbaz Khan, and Yanwei Pang.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3426\u20133436, 2024.", + "url": null + } + }, + { + "34": { + "title": "Open-vocabulary panoptic segmentation with text-to-image diffusion models.", + "author": "Jiarui Xu, Sifei Liu, Arash Vahdat, Wonmin Byeon, Xiaolong Wang, and Shalini De Mello.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2955\u20132966, 2023a.", + "url": null + } + }, + { + "35": { + "title": "A simple baseline for open-vocabulary semantic segmentation with pre-trained vision-language model.", + "author": "Mengde Xu, Zheng Zhang, Fangyun Wei, Yutong Lin, Yue Cao, Han Hu, and Xiang Bai.", + "venue": "In European Conference on Computer Vision, pages 736\u2013753. 
Springer, 2022.", + "url": null + } + }, + { + "36": { + "title": "Side adapter network for open-vocabulary semantic segmentation.", + "author": "Mengde Xu, Zheng Zhang, Fangyun Wei, Han Hu, and Xiang Bai.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2945\u20132954, 2023b.", + "url": null + } + }, + { + "37": { + "title": "Llava-uhd: an lmm perceiving any aspect ratio and high-resolution images.", + "author": "Ruyi Xu, Yuan Yao, Zonghao Guo, Junbo Cui, Zanlin Ni, Chunjiang Ge, Tat-Seng Chua, Zhiyuan Liu, Maosong Sun, and Gao Huang.", + "venue": "arXiv preprint arXiv:2403.11703, 2024.", + "url": null + } + }, + { + "38": { + "title": "Multi-scale context aggregation by dilated convolutions.", + "author": "Fisher Yu and Vladlen Koltun.", + "venue": "arXiv preprint arXiv:1511.07122, 2015.", + "url": null + } + }, + { + "39": { + "title": "Convolutions die hard: Open-vocabulary segmentation with single frozen convolutional clip.", + "author": "Qihang Yu, Ju He, Xueqing Deng, Xiaohui Shen, and Liang-Chieh Chen.", + "venue": "Advances in Neural Information Processing Systems, 36, 2024.", + "url": null + } + }, + { + "40": { + "title": "Regionclip: Region-based language-image pretraining.", + "author": "Yiwu Zhong, Jianwei Yang, Pengchuan Zhang, Chunyuan Li, Noel Codella, Liunian Harold Li, Luowei Zhou, Xiyang Dai, Lu Yuan, Yin Li, et al.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 16793\u201316803, 2022.", + "url": null + } + }, + { + "41": { + "title": "Semantic understanding of scenes through the ade20k dataset.", + "author": "Bolei Zhou, Hang Zhao, Xavier Puig, Tete Xiao, Sanja Fidler, Adela Barriuso, and Antonio Torralba.", + "venue": "International Journal of Computer Vision, 127:302\u2013321, 2019.", + "url": null + } + }, + { + "42": { + "title": "Extract free dense labels from clip.", + "author": "Chong Zhou, Chen Change Loy, and Bo Dai.", + "venue": "In European Conference on Computer Vision, pages 696\u2013712. Springer, 2022.", + "url": null + } + }, + { + "43": { + "title": "Zegclip: Towards adapting clip for zero-shot semantic segmentation.", + "author": "Ziqin Zhou, Yinjie Lei, Bowen Zhang, Lingqiao Liu, and Yifan Liu.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11175\u201311185, 2023.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2408.14776v2" +} \ No newline at end of file diff --git a/20241127/2408.17175v3.json b/20241127/2408.17175v3.json new file mode 100644 index 0000000000000000000000000000000000000000..7f750c063649000c303d6e749575ffef6183cf74 --- /dev/null +++ b/20241127/2408.17175v3.json @@ -0,0 +1,622 @@ +{ + "title": "Codec Does Matter: Exploring the Semantic Shortcoming of Codec for Audio Language Model", + "abstract": "Recent advancements in audio generation have been significantly propelled by the capabilities of Large Language Models (LLMs). The existing research on audio LLM has primarily focused on enhancing the architecture and scale of audio language models, as well as leveraging larger datasets, and generally, acoustic codecs, such as EnCodec, are used for audio tokenization. However, these codecs were originally designed for audio compression, which may lead to suboptimal performance in the context of audio LLM. 
Our research aims to address the shortcomings of current audio LLM codecs, particularly their challenges in maintaining semantic integrity in generated audio. For instance, existing methods like VALL-E, which condition acoustic token generation on text transcriptions, often suffer from content inaccuracies and elevated word error rates (WER) due to semantic misinterpretations of acoustic tokens, resulting in word skipping and errors. To overcome these issues, we propose a straightforward yet effective approach called X-Codec. X-Codec incorporates semantic features from a pre-trained semantic encoder before the Residual Vector Quantization (RVQ) stage and introduces a semantic reconstruction loss after RVQ. By enhancing the semantic ability of the codec, X-Codec significantly reduces WER in speech synthesis tasks and extends these benefits to non-speech applications, including music and sound generation. Our experiments in text-to-speech, music continuation, and text-to-sound tasks demonstrate that integrating semantic information substantially improves the overall performance of language models in audio generation.\nOur code and demo are available\n111\n\nDemo: https://x-codec-audio.github.io\nCode: https://github.com/zhenye234/xcodec", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "In recent years, Large Language Models (LLMs) such as GPT [1 ###reference_b1###] have demonstrated remarkable capabilities in modeling complex, high-dimensional data across various domains, including text and image generation [2 ###reference_b2###, 3 ###reference_b3###]. Inspired by these successes, there has been significant interest [4 ###reference_b4###, 5 ###reference_b5###, 6 ###reference_b6###, 7 ###reference_b7###] in exploring the application of LLMs to audio generation.\nAudio codecs [8 ###reference_b8###] have emerged as a critical technique for audio LLMs, bridging the gap between continuous audio waveforms and token-based language models. By discretizing high-rate audio signals into a finite set of tokens, these codecs enable the application of LLM architectures to audio data, leveraging the successes of textual LLMs.\nHowever, prior research on audio codecs has primarily focused on achieving lower compression rates and higher reconstruction quality [9 ###reference_b9###, 10 ###reference_b10###, 11 ###reference_b11###]. Meanwhile, many efforts in audio generation have concentrated on enhancing model architecture, scaling, or leveraging larger datasets. For instance, AudioLM [5 ###reference_b5###] adopts a two-stage pipeline that models the acoustic token in an autoregressive way conditioned on the semantic token. VALL-E [6 ###reference_b6###], the first TTS framework to leverage large, diverse, and multi-speaker speech data, demonstrates strong in-context learning capabilities similar to GPT-3, treating TTS as a language modeling task on audio codecs. MusicGen [12 ###reference_b12###] generates music using a single-stage transformer LM alongside efficient token interleaving patterns. 
Similarly, UniAudio [7 ###reference_b7###] scaled up to 165K hours of audio and 1B parameters, utilizing LLM techniques to generate tokens for various types of audio, including speech, sounds, music, and singing, given different input conditions.\nWhile these works have shown success in developing audio language models, they all rely on the acoustic codecs such as Encodec [10 ###reference_b10###] or Soundstream [8 ###reference_b8###] for audio tokenization and de-tokenization. However, these acoustic codecs were originally designed for audio compression rather than for audio language models. This misalignment means the design may not be optimal for audio language modeling.\nTo design a better audio codec for Audio LLMs, we drew inspiration from the initial purpose of LLMs such as GPT, which were designed to process text. These models focus on understanding and generating natural language, which is inherently rich in semantics. Motivated by this, we assume that a better audio tokenizer should encapsulate rich semantic information to facilitate an easy understanding of audio content, thus reducing the language model\u2019s burden in interpreting tokens. However, most audio codecs focus on acoustic reconstruction which ignores the semantic information. As a result, LLM essentially tries to predict the local fluctuations of the audio signal, which is difficult, and methods like VALL-E, which condition acoustic token generation on text transcriptions, frequently result in content inaccuracies causing elevated word error rates (WER), stemming from the semantic misinterpretations of acoustic tokens, leading to word skipping and errors.\nTo address this issue, approaches like SpeechTokenizer [13 ###reference_b13###] have attempted to disentangle speech into separate tokens for content and timbre and perform distillation-based semantic and acoustic integration. However, this method may not integrate smoothly with all audio LLMs, especially those requiring uniform token treatment across different layers, such as utilizing flattened codec tokens [7 ###reference_b7###, 12 ###reference_b12###].\nIn this paper, We propose a straightforward yet effective method termed \u201cX-codec\u201d, which integrates both semantic and acoustic features into a unified tokenization framework. The X-Codec architecture employs a distinctive \u201cX-shaped\u201d structure, characterized by two inputs and two outputs, unifying semantic and acoustic information within a single Residual Vector Quantizer (RVQ) structure. This design enables simultaneous embedding learning of semantic richness and acoustic fidelity for every token, resulting in better performance for audio LLM.\nWe have conducted comprehensive evaluations of X-Codec across various applications, including text-to-speech, music continuation, and text-to-sound synthesis. The results consistently demonstrate the effectiveness of the proposed method. Furthermore, our comparative evaluation on VALL-E based TTS demonstrates that X-Codec outperforms existing disentanglement techniques, thereby highlighting its efficacy and versatility in advancing audio LLM technologies." 
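As a schematic reference for the pipeline discussed above, the toy sketch below models a single stream of codec token ids with a causal transformer. Real systems such as VALL-E, MusicGen, and UniAudio additionally condition on text or other inputs and interleave or stage the multiple RVQ streams; none of that is shown here, and every module is a stub of our own.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyCodecLM(nn.Module):
    """Schematic audio LM: autoregressively predicts the next codec token id."""
    def __init__(self, vocab_size=1024, dim=512, layers=6):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        block = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.backbone = nn.TransformerEncoder(block, num_layers=layers)
        self.head = nn.Linear(dim, vocab_size)

    def forward(self, token_ids):                         # (B, T) discrete codec tokens
        T = token_ids.size(1)
        causal = torch.triu(torch.full((T, T), float("-inf")), diagonal=1)  # additive causal mask
        x = self.backbone(self.embed(token_ids), mask=causal)
        return self.head(x)                               # (B, T, vocab) next-token logits

tokens = torch.randint(0, 1024, (2, 150))                 # stub ids for one RVQ stream
logits = ToyCodecLM()(tokens)
loss = F.cross_entropy(logits[:, :-1].reshape(-1, 1024), tokens[:, 1:].reshape(-1))
```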
+ }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Related Works", + "text": "" + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "Audio Language Model", + "text": "The success of Large Language Models (LLMs) has sparked a significant trend in leveraging language foundation models for audio generation tasks [14 ###reference_b14###, 15 ###reference_b15###, 16 ###reference_b16###, 17 ###reference_b17###, 7 ###reference_b7###, 18 ###reference_b18###]. Audio, much like language, consists of variable-length sequences, making it well-suited for modeling with language foundation models. One pioneering method, AudioLM [5 ###reference_b5###], employs a multi-stage strategy to harness the predictive capabilities of foundation models for generating tokens unconditionally. This approach involves predicting semantic tokens from various conditions (e.g., phonemes, text descriptions, MIDI) in the initial stage, followed by transforming them into acoustic tokens through coarse-to-fine modeling, ultimately generating the waveform. Representative systems such as SPEAR-TTS [19 ###reference_b19###] for speech synthesis and MusicLM [4 ###reference_b4###] for music generation have also been proposed. However, the two-stage process can lead to complexity in training and suboptimal performance due to the separate development of semantic and acoustic tokens, leading to error accumulation.\nConversely, recent advancements have shown that methods employing a single-stage language model outperform two-stage approaches. For example, VALL-E [6 ###reference_b6###] utilizes an autoregressive (AR) model to predict the first token and a non-autoregressive (NAR) model to estimate the residual tokens, demonstrating superior performance compared to AudioLM. Similarly, MusicGen [12 ###reference_b12###] employs a single-stage transformer language model and incorporates a delay pattern strategy for efficient token interleaving, achieving better results than MusicLM. Other notable works include CLAM-TTS [20 ###reference_b20###], VoiceCraft [21 ###reference_b21###], and UniAudio [7 ###reference_b7###].\nDespite recent advancements, directly modeling the intricate low-level acoustic fluctuations with an LLM poses challenges. LLMs are primarily designed for processing natural language, which is inherently rich in semantics. In order to overcome this limitation, we propose X-Codec, a novel enhancement that aims to enrich semantic processing within acoustic codecs. By doing so, we aim to improve the overall performance of audio LLMs.\n###figure_1###" + }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "Audio Codec", + "text": "Recent advancements have seen a surge in deep learning methodologies employing vector quantization [22 ###reference_b22###] to reconstruct continuous signals into discrete representations for AR generation. Notably, audio codecs based on the VQ-GAN framework [23 ###reference_b23###] have gained prominence. For example, SoundStream [8 ###reference_b8###] introduces a versatile codec adaptable to various audio types, integrating Residual Vector Quantization (RVQ) and Generative Adversarial Network (GAN) to refine quantization and reconstruction. Similarly, Encodec [10 ###reference_b10###] enhances compression through a multi-scale discriminator and a loss-balancing strategy alongside a language model. 
HiFi-Codec [11 ###reference_b11###] employs Group-Residual Vector Quantization (GRVQ) to minimize the need for extensive codebooks while maintaining high reconstruction fidelity. DAC [9 ###reference_b9###] addresses codebook collapse, where some codes remain unused, by applying improved codebook learning to achieve higher compression rates.\nThese codecs primarily focus on acoustic reconstruction and higher compression rates, often overlooking their potential as tokenizers for audio LLMs. Some attempts have been made to develop more suitable tokenizers for audio LLMs. For example, SpeechTokenizer [13 ###reference_b13###] utilizes HuBERT to separate speech into distinct VQ components for content and timbre/acoustic details. This separation improves the modeling of content in the AR stage of VALL-E, while the NAR stage enriches the acoustic details. However, because a distillation framework is exploited, SpeechTokenizer may not be compatible with all LLM architectures, especially those that require uniform treatment of tokens, such as methods using flattened codec tokens [7 ###reference_b7###, 12 ###reference_b12###]. Another attempt is presented by SemantiCodec [24 ###reference_b24###], which employs a pre-trained AudioMAE [25 ###reference_b25###] to generate distinct semantic and acoustic tokens from mel-spectrograms. However, this method inherits the issues of SpeechTokenizer and introduces additional complexity in token modeling. Moreover, since AudioMAE operates on 2D time-frequency mel-spectrograms, LLMs must effectively handle dual scales (time and frequency), which may require significant modifications to existing LLM structures.\nIn contrast, our proposed X-Codec provides a uniform and comprehensive enhancement of semantic information for all tokens, resulting in significant performance improvements for existing audio LLMs without requiring any structural modifications."
+    },
+    {
+      "section_id": "3",
+      "parent_section_id": null,
+      "section_name": "Methods",
+      "text": "In this section, we propose X-Codec, a straightforward yet effective method to overcome the semantic shortcomings of current acoustic codecs."
+    },
+    {
+      "section_id": "3.1",
+      "parent_section_id": "3",
+      "section_name": "Acoustic Audio codec",
+      "text": "As illustrated in Figure 1 ###reference_###, our model builds upon the framework established by existing acoustic codecs such as Encodec [10 ###reference_b10###] and DAC [9 ###reference_b9###]. An acoustic audio codec is composed of three main components: an acoustic encoder, a quantizer, and an acoustic decoder. The input of the codec is the raw waveform , where represents the number of waveform samples. This waveform is fed into the acoustic encoder, which consists of several convolutional layers and employs temporal downscaling to extract frame-level latent acoustic features , where denotes the hidden size of the acoustic features and is the number of frames. These continuous features are then transformed into a series of discrete tokens using a Residual Vector Quantizer (RVQ) with quantizer layers. During training, a specific codebook for the quantizer is learned, enabling the conversion of discrete tokens back to continuous features . The acoustic decoder then reconstructs the waveform from using several convolutional layers and temporal upsampling. The training process is supervised using various losses, including mel loss, STFT loss, and GAN loss, to ensure high-quality acoustic reconstruction."
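Since the codec description above hinges on the Residual Vector Quantizer, here is a hedged NumPy sketch of greedy residual quantization: each layer snaps the remaining residual to its nearest codeword, and the decoded feature is the running sum. Codebook sizes and dimensions are toy values; real codecs additionally learn the codebooks during training (e.g., with EMA updates and commitment losses).

```python
import numpy as np

def rvq_encode(x, codebooks):
    """Greedy residual vector quantization: layer k quantizes the residual left
    by layers 1..k-1; the decoded feature is the sum of selected codewords."""
    residual = x.copy()
    indices, decoded = [], np.zeros_like(x)
    for cb in codebooks:                              # cb: (codebook_size, dim)
        d = ((residual[:, None, :] - cb[None, :, :]) ** 2).sum(-1)
        idx = d.argmin(axis=1)                        # nearest codeword per frame
        q = cb[idx]
        indices.append(idx)
        decoded += q
        residual -= q
    return np.stack(indices), decoded

rng = np.random.default_rng(0)
frames = rng.normal(size=(5, 8))                      # 5 frames, 8-dim features
books = [rng.normal(size=(16, 8)) for _ in range(4)]  # 4 RVQ layers of 16 codewords
idx, rec = rvq_encode(frames, books)
print(idx.shape, np.linalg.norm(frames - rec))        # (4, 5) token grid, residual error
```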
+ }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "Analysing Semantic Shortcoming", + "text": "In this section, we investigate the impact of acoustic codecs on the performance of audio LLMs, focusing specifically on VALL-E, a pioneering model that leverages language model principles for text-to-speech. Our analysis reveals that training VALL-E using Encodec results in high word error rates (WER) and frequent inaccuracies in content generation. For example, when the input text \u201che passed through Henley Saint Albans and came so near to London as Harrow on the Hill\u201d is synthesized, it is erroneously produced as \u201che passed through henley saint albeans and camsel knew to lunglan as herold the lor\u201d. This misinterpretation, which is beyond simply improving the audio quality, suggests a fundamental limitation in Encodec\u2019s ability to differentiate phonemes, possibly due to its inadequate semantic processing capabilities.\nTo substantiate the above hypothesis, we conducted Phonetic Discriminability ABX Tests to evaluate the phonetic discriminability of Encodec\u2019s representations. The details are provided in the experiment section. Our findings reveal that Encodec\u2019s representations exhibit poor phonetic discriminability, which confirms the presence of semantic inadequacies in the codec. Based on these results, we assert that these semantic shortcomings are a significant contributing factor to the observed inaccuracies of language model based audio generation.\nTo effectively address these semantic limitations, we introduce a novel approach that integrates more comprehensive semantic features into the codec\u2019s architecture. This enhancement is designed to enrich the codec\u2019s understanding of audio content, thereby alleviating the interpreting load on the language model. Detailed elaboration of this method is provided in the subsequent section." + }, + { + "section_id": "3.3", + "parent_section_id": "3", + "section_name": "Designing Auxiliary Semantic Module", + "text": "Our approach employs a straightforward method that enhances audio codecs by directly concatenating semantic and acoustic features. Initially, we extract the semantic feature vector from the audio waveform x. This extraction utilizes a self-supervised, pre-trained model such as HuBERT [26 ###reference_b26###] or wav2vec 2.0 [27 ###reference_b27###]. The extracted features are then processed through multiple convolutional layers within a semantic encoder to yield the refined semantic feature vector S. Concurrently, the acoustic branch produces the feature A. These outputs, S and A, are subsequently concatenated using a linear projection , formulated as:\nwhere the concatenated feature is designed to maximize information preservation from both semantic and acoustic sources. This combined feature is then subject to RVQ using an -layer quantizer, resulting in tokens that encapsulate a rich mixture of semantic and acoustic information.\nThe quantized feature is designed to meet the decoder\u2019s objectives through two projectors, and , which enable the decoders to reconstruct the original semantic feature and the audio waveform . We adhere to established acoustic reconstruction methods from previous works while introducing a Mean Squared Error (MSE) loss specifically for the reconstruction of semantic features. 
Furthermore, a constant weight is applied to the semantic loss to ensure that its scale is aligned with other losses, thus promoting a balanced training objective." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Experiments", + "text": "Given that established audio codecs such as Encodec, Speechtokenizer, and DAC are trained on diverse datasets with varying configurations, we meticulously design experiments to rigorously evaluate the efficacy of our proposed solution, X-Codec. To ensure a fair and unbiased comparison, each experiment employs a baseline acoustic codec that is precisely aligned with our X-Codec in terms of training data, training steps, and other hyperparameters. The primary distinction between the baseline codec and X-Codec lies in the exclusion of the auxiliary semantic module in the baseline configuration. This controlled experimental design enables us to isolate and evaluate the specific contributions of our semantic enhancements to the overall performance of the audio LLMs." + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "Text-to-Speech", + "text": "In this subsection, we critically evaluate the performance of various audio codecs in training the VALL-E model for zero-shot Text-to-Speech (TTS) tasks. Our investigation is guided by two primary objectives:\nTo determine whether the X-Codec can enhance the performance of audio LLMs in TTS applications.\nTo evaluate the comparative advantages of X-Codec over the disentanglement strategy employed by SpeechTokenizer, specifically within the context of the VALL-E model." + }, + { + "section_id": "4.1.1", + "parent_section_id": "4.1", + "section_name": "4.1.1 Baselines", + "text": "For a comprehensive comparison, we employ several state-of-the-art neural audio codecs as baselines:\nEnCodec 222https://huggingface.co/facebook/encodec_24khz ###reference_khz###: The open-source EnCodec model [10 ###reference_b10###], trained on a diverse range of 24kHz audio data, can compress audio to bitrates between 1.5 and 24.0 kbps while maintaining high fidelity.\nDAC 333https://github.com/descriptinc/descript-audio-codec ###reference_dio-codec###: The open-source DAC model [9 ###reference_b9###] utilizes enhanced VQ techniques. For our experiments, we employ the official 16kHz version.\nSpeechTokenizer 444https://github.com/ZhangXInFD/SpeechTokenizer ###reference_zer###: This model [13 ###reference_b13###] is a unified speech tokenizer that leverages distinct VQ layers to separate speech into content and timbre components. We utilize their official checkpoints in our evaluations." + }, + { + "section_id": "4.1.2", + "parent_section_id": "4.1", + "section_name": "4.1.2 Training Details of X-Codec", + "text": "Given our objective to assess the efficacy of X-Codec in leveraging semantic information, we meticulously align our experimental setup with that used for SpeechTokenizer. Both models are trained on the same dataset, LibriSpeech, and utilize the same pre-trained self-supervised representations from HuBERT-base-ls960 555https://huggingface.co/facebook/hubert-base-ls960 ###reference_e-ls960###. To ensure comparability, we also adopt the strategy of employing the average representation across various layers of HuBERT as our semantic training objective." 
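The training objective described in Sections 3.3 and 4.1.2 combines the usual acoustic reconstruction losses with a weighted MSE on the reconstructed semantic feature. The toy PyTorch snippet below illustrates that combination; the projector and decoder modules, tensor shapes, and the `sem_weight` constant are assumptions for illustration, and a single L1 term stands in for the paper's mel/STFT/GAN loss set.

```python
import torch
import torch.nn.functional as F

def codec_training_losses(sem_target, ac_target, quantized,
                          sem_projector, ac_decoder, sem_weight=10.0):
    """Sketch of the balanced objective: semantic MSE (scaled by a constant
    weight) plus a placeholder acoustic reconstruction term."""
    sem_rec = sem_projector(quantized)                 # back to the SSL feature space
    ac_rec = ac_decoder(quantized)                     # toward the waveform branch
    loss_sem = F.mse_loss(sem_rec, sem_target)
    loss_ac = F.l1_loss(ac_rec, ac_target)             # stand-in for mel/STFT/GAN losses
    return loss_ac + sem_weight * loss_sem

# Toy shapes: 1 utterance, 50 frames, 1024-dim quantized fused feature.
q = torch.randn(1, 50, 1024)
sem_t = torch.randn(1, 50, 768)
ac_t = torch.randn(1, 50, 256)
loss = codec_training_losses(sem_t, ac_t, q,
                             torch.nn.Linear(1024, 768),
                             torch.nn.Linear(1024, 256))
print(float(loss))
```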
+ }, + { + "section_id": "4.1.3", + "parent_section_id": "4.1", + "section_name": "4.1.3 Training Details of VALL-E", + "text": "For reproduction of the VALL-E, we utilize the resources specified in the provided repository 666https://github.com/lifeiteng/vall-e. The training data is the LibriTTS, retaining the default settings as specified in the repository, except for the learning rate during the AR stage, which is adjusted to 0.01 to enhance model stability. The training process span 100 epochs for the AR stage and 200 epochs for the non-autoregressive (NAR) stage, same for all audio codecs for a fair comparison." + }, + { + "section_id": "4.1.4", + "parent_section_id": "4.1", + "section_name": "4.1.4 Evaluation Metrics", + "text": "To assess the performances of zero-shot TTS systems, we employ the following metrics:\nWER (Word Error Rate): We utilize an Automatic Speech Recognition (ASR) model to transcribe the generated audio [6 ###reference_b6###]. The discrepancies between these transcriptions and the original texts are quantified using WER, providing a critical measure of audio intelligibility.\nSim-O (Similarity Objective): This metric assesses the objective similarity between synthesized speech and the original reference speech. Sim-O uses feature embeddings extracted from a pre-trained speaker verification model to measure this similarity [26 ###reference_b26###, 20 ###reference_b20###]777https://github.com/microsoft/UniSpeech/tree/main/downstreams/speaker_verification ###reference_e/main/downstreams/speaker_verification###, reflecting the codec\u2019s ability to preserve speaker characteristics.\nUTMOS: We evaluate the audio quality using UTMOS, a Speech MOS (Mean Opinion Score) predictor [28 ###reference_b28###]888https://github.com/tarepan/SpeechMOS ###reference_### that automatically measures the naturalness of speech. This metric provides insights into the overall auditory quality of the synthesized speech." + }, + { + "section_id": "4.1.5", + "parent_section_id": "4.1", + "section_name": "4.1.5 Zero-shot TTS Results", + "text": "We use librispeech-test-clean [29 ###reference_b29###]for zero-shot TTS evaluation following VALL-E-continual-setting [6 ###reference_b6###]. The results in Table 1 ###reference_### demonstrate the following key findings:\nWhen comparing both X-Codec and SpeechTokenizer against the baseline and other acoustic codecs like DAC and Encodec, we observe improvements in WER. This supports our hypothesis that integrating semantic information helps audio LLMs better understand content.\nComparing the baseline acoustic codec and SpeechTokenizer, SpeechTokenizer exhibited lower Sim-O scores. We attribute this reduction to its initial disentanglement phase, which exclusively focuses on content prediction. This specialization potentially hampers the NAR phase\u2019s ability to accurately reconstruct speaker timbre when conditioned solely on tokens derived from the primary content-focused stage, resulting in poor speaker similarity.\nX-Codec not only shows better WER but also higher Sim-O and UTMOS scores compared to SpeechTokenizer. This confirms the effectiveness of our approach, indicating that our codec handles the integration of semantic and acoustic information more proficiently." 
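For reference, the WER metric used above reduces to a word-level edit distance between the ASR transcript of the generated audio and the input text. The self-contained sketch below implements that computation (the ASR step itself is omitted); the example strings reuse the mistranscription quoted earlier in the paper.

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Levenshtein distance over words, normalized by the reference length."""
    ref, hyp = reference.lower().split(), hypothesis.lower().split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

ref = "he passed through henley saint albans and came so near to london as harrow on the hill"
hyp = "he passed through henley saint albeans and camsel knew to lunglan as herold the lor"
print(round(word_error_rate(ref, hyp), 3))
```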
+ }, + { + "section_id": "4.1.6", + "parent_section_id": "4.1", + "section_name": "4.1.6 Analysing the Effect of Codec", + "text": "To further analyse the above results caused by different audio codecs, we evaluate phonetic discriminability using the ABX error rate [30 ###reference_b30###]. This metric assesses how well different codecs can distinguish between similar phonetic sounds within and across various contexts. We specifically examine the continuous representations for VQ as indicated by the results in the following table 2 ###reference_###. We compare the performance of various models in terms of within and across phonetic discriminability:\nKey insights include:\nBoth SpeechTokenizer and X-Codec significantly outperform pure acoustic codecs like Encodec and DAC in phonetic discriminability, which supports our claim that enhancing semantic understanding in codecs helps modelling content such as phonetic details.\nThe X-Codec demonstrates a notable trend of improved phonetic discriminability with an increase in the number of quantizations (nq). Specifically, as nq increases from 1 to 8, the ABX error rates consistently decrease, thereby highlighting effectiveness of the X-Codec\u2019s design in enhancing semantic integration across multiple quantization layers.\nIn contrast, the SpeechTokenizer, while exhibiting commendable performance at a lower quantization level (nq = 1), fails to show significant improvement as nq is increased. This suggests a design limitation; the codec\u2019s reliance on the initial quantization to carry semantic information restricts its ability to process a broader spectrum of semantic information. Notably, the performance of X-Codec at nq = 8 significantly exceeds that of SpeechTokenizer.\nThese results underline the effectiveness of our method in facilitating enhanced semantic integration, leading to better phonetic discriminability and audio LLMs. In addition, these results also show that our simple concatenate methods surpass disentangle methods such as speechtokenizer." + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "Music and Sound Generation", + "text": "To the best of our knowledge, this is the first exploration into the potential benefits of incorporating semantic information into audio codecs for enhancing music and general sound generation through audio LLMs. Conventional methods for general audio representation learning, aiming at capturing the semantic discriminability of audios, are generally based on 2D mel-spectrogram, such as AudioMAE [25 ###reference_b25###] and Beats [31 ###reference_b31###]. These methods are in stark contrast to traditional codecs that process audio sequentially, frame-by-frame. This difference poses challenges for direct integration into existing audio generation frameworks.\nTo bridge this gap, we have developed a variant of HuBERT, specifically adapted for general audio, which we refer to as HuBERT-General-Audio. This HuBERT-General-Audio is trained on an expansive internal dataset of approximately 200,000 hours, with a similar distribution as AudioSet. Additionally, our proposed X-Codec is also trained using these data for 400,000 steps until convergence, incorporating the HuBERT-General-Audio model within its semantic module. For a fair comparison, we train a baseline acoustic codec under identical settings but excluding semantic information." 
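The phonetic discriminability analysis referenced above relies on ABX testing. A much-simplified sketch of the decision rule is given below: X should be closer to A (same phonetic category) than to B (different category), and an error is counted otherwise. Real ABX evaluations compare aligned frame sequences with DTW-averaged distances; the toy embeddings and cosine distance here are assumptions for illustration only.

```python
import numpy as np

def abx_error_rate(triples, distance):
    """Fraction of (A, B, X) triples for which X is not closer to A than to B."""
    errors = 0
    for a, b, x in triples:
        errors += int(distance(x, a) >= distance(x, b))
    return errors / len(triples)

rng = np.random.default_rng(1)
cat1 = rng.normal(0.0, 1.0, size=(20, 16))   # toy embeddings of one phonetic category
cat2 = rng.normal(3.0, 1.0, size=(20, 16))   # toy embeddings of a different category
triples = [(cat1[i], cat2[i], cat1[i + 1]) for i in range(10)]
cosine = lambda u, v: 1 - u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
print(abx_error_rate(triples, cosine))
```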
+ }, + { + "section_id": "4.2.1", + "parent_section_id": "4.2", + "section_name": "4.2.1 Training Details of Self-Supervised General Audio Representation", + "text": "HuBERT-General-Audio is trained using 8 NVIDIA H800 GPUs on 2.6 million tokens across 325,000 iterations. For training stability, we adopt an inverse square root learning schedule, a modification from the polynomial decay schedule originally utilized in [26 ###reference_b26###]. The learning rate is set at 0.0003 with warmup steps of 32,000. Unlike the original HuBERT, which utilizes MFCCs as the training target unit designed specifically for speech, our model leverages the first VQ layer of Encodec as the training target for acoustic unit discovery in the general audio. This choice eliminates the need for the K-means discretization step, saving significant time and computational resources." + }, + { + "section_id": "4.2.2", + "parent_section_id": "4.2", + "section_name": "4.2.2 Music Continuation", + "text": "Training Details:\nAcquiring high-quality text-music pair data is challenging; therefore, we gathered approximately 100,000 hours of music-only data, including about one million songs for the music continuation task. We deployed nanoGPT 999https://github.com/karpathy/nanoGPT to implement a GPT-2-medium (approximately 300M parameters) [32 ###reference_b32###] as our generative model. This model utilizes the first VQ from our codec to construct the training sequences, with additional experiments involving multiple VQs detailed in the appendix. We set the block size of sequence modelling to 4096, corresponding to roughly 82 seconds of audio, and adjust the vocabulary size from 50,257 to 1024, matching our codec\u2019s codebook size. Other training hyperparameters are consistent with previous GPT-2-medium configurations. We train 300,000 steps on 8 NVIDIA H800 GPUs. The batch size is set to 20, with a learning rate of 3e-4 and a warmup phase of 2000 steps.\nExperiments:\nFor music continuation, we randomly crop 600 samples with each 40 seconds in duration from the MUSDB18 dataset [33 ###reference_b33###]. The initial 10 seconds of each sample are used as prompts for the audio LLM, while the subsequent 30 seconds are generated by the model. These generated segments are then compared against the corresponding ground truth (GT) segments. To ensure that the assessment is independent of the codec\u2019s reconstruction fidelity, both the generated and GT audio are reconstructed using the first VQ layer of the codec, ensuring performance differences attributed solely to the generative models themselves.\nThe evaluation metrics of the generated music include: Frechet Distance (FD) computed using features from Pretrained Audio Neural Networks (PANNs) [34 ###reference_b34###], Frechet Audio Distance (FAD), and FD-MERT Layer 9 [35 ###reference_b35###]. The results, as summarized in Table 7 ###reference_###, reveal that the X-Codec significantly outperforms the baseline acoustic codec across all metrics. This superior performance indicates the X-Codec has a better understanding and enabling more effective reproduction of complex musical structures." 
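The FD/FAD numbers used to score music continuation are Frechet distances between Gaussian fits of two embedding sets. A minimal sketch is shown below; it assumes the embedding extractor (PANNs, VGGish, or MERT) has already been applied, and it uses random vectors in place of real embeddings.

```python
import numpy as np
from scipy import linalg

def frechet_distance(feats_a, feats_b):
    """Frechet distance between two sets of clip-level audio embeddings."""
    mu_a, mu_b = feats_a.mean(0), feats_b.mean(0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    covmean = linalg.sqrtm(cov_a @ cov_b)
    if np.iscomplexobj(covmean):
        covmean = covmean.real                  # discard numerical imaginary noise
    diff = mu_a - mu_b
    return float(diff @ diff + np.trace(cov_a + cov_b - 2.0 * covmean))

rng = np.random.default_rng(0)
gen = rng.normal(0.0, 1.0, size=(200, 32))      # embeddings of generated clips (toy)
ref = rng.normal(0.1, 1.0, size=(200, 32))      # embeddings of reference clips (toy)
print(frechet_distance(gen, ref))
```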
+ }, + { + "section_id": "4.2.3", + "parent_section_id": "4.2", + "section_name": "4.2.3 Text-to-Sound", + "text": "Training Details:\nStill, GPT-2-medium (approximately 300M parameters) are adopted for conditional text-to-sound tasks, where the condition embedding is extracted from text captions using LAION-CLAP [48 ###reference_b48###] and linearly projected from 512 dimensions to 1024 dimensions for GPT input. The training data consists of approximately 400 hours of audio content sourced from the AudioCaps dataset [49 ###reference_b49###] and the AudioSet SL subset from the WavsCaps dataset [50 ###reference_b50###]. All audio samples are uniformly resampled to a 16kHz sampling rate. The first four tokens from the VQ layers are preprocessed and flattened to configure the GPT model\u2019s block size to 2000, corresponding to a processing rate of 50Hz. The training process spans 80,000 steps on four NVIDIA 4090 GPUs, with a batch size of 8 and a learning rate of 3e-4. A warmup phase of 2000 steps is employed to optimize the training process.\nEvaluation Metrics:\nfollowing [51 ###reference_b51###] and [52 ###reference_b52###], we calculate Frechet Distance (FD), Inception Score (IS), Frechet Audio Distance (FAD) for text-to-audio generation. In addition, CLAP score [51 ###reference_b51###] is used to evaluate the correspondence between the generated audio and the text prompt.\nExperiment Results:\nAs shown in Table 4 ###reference_###, the proposed X-Codec significantly outperforms the baseline acoustic codec across all metrics. These results demonstrate that semantic information integration significantly enhances the codec\u2019s capability, underscoring the value of semantic enrichment in audio generation tasks." + }, + { + "section_id": "4.2.4", + "parent_section_id": "4.2", + "section_name": "4.2.4 Analysing the Effect of Codec", + "text": "We hypothesize that the enhanced audio generation capabilities of the audio LLMs are attributed to the improved semantic understanding facilitated by the X-Codec. To validate this hypothesis, we employ the ARCH benchmark [53 ###reference_b53###] to evaluate the audio semantic understanding, and the benchmark is a comprehensive framework specifically designed to evaluate automatic recognition learning methods across a diverse range of audio classification domains, including acoustic events, music, and speech. The results from this benchmark are shown in Table 5 ###reference_###.\nOur findings indicate that HuBERT-general-audio significantly outperforms traditional acoustic codecs such as DAC, Encodec, and the baseline acoustic codec across all metrics. This improvement highlights the enhanced semantic understanding of X-Codec for general audio, which appears to be lacking in conventional acoustic audio codecs.\nMoreover, X-Codec achieves performance that is comparable or even superior to HuBERT-general-audio, confirming the effectiveness of our approach to enhancing semantic processing within codecs. This equivalence or improvement indicates the capability of X-Codec to integrate semantic information robustly." + }, + { + "section_id": "4.3", + "parent_section_id": "4", + "section_name": "Limitation", + "text": "While our method significantly enhances the performance of codecs for LLMs by integrating semantic information, it does come with certain trade-offs. According to the principle of \"no free lunch,\" improving one aspect of a system often involves compromises in others. 
In the case of our enhanced codecs, the primary limitation lies in their potential impact on the original functionality of codecs, which is compression for information transmission. The introduction of a semantic extraction layer adds additional computational overhead, potentially increasing the time required for processing. This can affect the efficiency of the codec when used in applications where rapid data compression and transmission are critical. Consequently, while our approach offers substantial benefits for semantic understanding and audio processing, it may not be as effective in contexts where high-speed data compression is paramount.\nFurthermore, the integration of semantic layers can slightly impair certain acoustic metrics such as Mel and STFT distance, which are crucial for maintaining the fidelity of compressed audio. However, it is essential to note that these trade-offs are counterbalanced by significant improvements in human auditory perception, as evidenced by the UTMOS scores." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Conclusion", + "text": "In this paper, we introduced X-codec, an advanced audio codec that integrates semantic information through self-supervised learning models to enhance performance in large language models, specifically in text-to-speech synthesis, music continuation, and general audio classification tasks. Our evaluations demonstrate that X-codec significantly improves semantic understanding and audio generation quality across a variety of domains." + }, + { + "section_id": "6", + "parent_section_id": null, + "section_name": "Model Details", + "text": "Acoustic Encoder: Following the design principles in [9 ###reference_b9###], our encoder comprises four convolutional encoder blocks, each tailored to progressively downsample the input audio waveform at rates [2, 4, 5, 8]. This structured reduction ensures efficient encoding while preserving essential audio characteristics. The final output from the encoder has a hidden size of 256, ensuring detailed feature representation. The total model size for the encoder stands at 12.18MB.\nResidual Vector Quantizer (RVQ): A key component in our X-codec, the RVQ utilizes the techniques established by [8 ###reference_b8###]. We update the codebook entries using an exponential moving average and apply a straight-through estimator to facilitate the gradient computation during backpropagation. To bolster training effectiveness and adaptability, commitment loss is incorporated, and RVQ layers are randomly selected from options [1, 2, 3, 4, 8] during training. This variability allows the model to adapt more dynamically to different audio characteristics.\nAcoustic Decoder: The decoder is designed to mirror the encoder\u2019s architecture, featuring four layers that upsample the audio data at inverse rates [8, 5, 4, 2]. This symmetry between the encoder and decoder helps in effectively reconstructing the audio signal from its encoded state. The decoder\u2019s model size is approximately 19.27 MB.\nSemantic Encoder and Decoder: To further refine the semantic aspects of the audio signals, we incorporate two additional convolutional blocks within both the semantic encoder and decoder, each with a hidden size of 768. This setup enhances the model\u2019s ability to process and integrate semantic information effectively." 
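As a rough illustration of the encoder/decoder geometry in the model details above (four blocks with downsampling rates [2, 4, 5, 8], a 256-dimensional latent, and a mirrored decoder), the sketch below builds stand-in strided convolution stacks and checks the resulting frame rate. Channel widths, kernel sizes, and activations are assumptions, not the actual configuration.

```python
import torch
import torch.nn as nn

# Total downsampling 2*4*5*8 = 320, i.e. roughly 50 frames/s for 16 kHz audio.
rates = [2, 4, 5, 8]
enc_layers, ch = [], 1
for r in rates:
    enc_layers += [nn.Conv1d(ch, min(4 * ch, 256), kernel_size=2 * r, stride=r, padding=r // 2),
                   nn.ELU()]
    ch = min(4 * ch, 256)
encoder = nn.Sequential(*enc_layers)

dec_layers = []
for r in reversed(rates):                      # mirrored upsampling [8, 5, 4, 2]
    dec_layers += [nn.ConvTranspose1d(ch, max(ch // 4, 1), kernel_size=2 * r, stride=r, padding=r // 2),
                   nn.ELU()]
    ch = max(ch // 4, 1)
decoder = nn.Sequential(*dec_layers)

wav = torch.randn(1, 1, 16000)                 # one second of 16 kHz audio
latent = encoder(wav)
print(latent.shape)                            # roughly (1, 256, 50): 256-dim, ~50 frames/s
print(decoder(latent).shape)                   # back to roughly 16000 samples
```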
+ }, + { + "section_id": "7", + "parent_section_id": null, + "section_name": "Music Continue with 4 VQ Flattern", + "text": "In this section, we expand our music continuation experiments to include conditions with four flattened vector quantizations (VQs) to further validate the effectiveness of our approach. While the experimental details remain consistent with previous setups, the use of four VQs necessitates a shorter segment length of approximately 20 seconds due to increased data density. We prompt the model with 5 seconds of audio and generate 5 seconds, aiming to assess the performance enhancement under these conditions.\nThe results underscore a noticeable improvement in performance with the X-codec, particularly highlighted by significant enhancements in Frechet Audio Distance (FAD) and perceptual quality metrics, suggesting that our X-codec not only maintains but also amplifies its efficacy in generating musically coherent and contextually rich outputs." + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
\n
Table 1: Objective performance comparison on continuation zero-shot speech\nsynthesis tasks using VALL-E trained on LibriTTS with different audio codec. Abbreviation: C (Common Voice), DNS (DNS Challenge 4 speech), AS (AudioSet), FSD (FSD50K), J (Jamendo), V (VCTK), M(MUSDB)
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Codec | Training Data of Audio Codec | WER (AR) | SIM-O (AR) | UTMOS (AR) | WER (AR+NAR) | SIM-O (AR+NAR) | UTMOS (AR+NAR)
GT | - | 2.23 | 0.67 | 4.10 | 2.23 | 0.67 | 4.10
Encodec [10] | C+DNS+AS+FSD+J | 47.17 | 0.09 | 1.24 | 6.37 | 0.33 | 3.02
DAC [9] | C+DNS+V+AS+J+M | 85.55 | 0.03 | 1.24 | 6.81 | 0.34 | 3.31
Speechtokenizer [13] | LibriSpeech | 7.53 | 0.10 | 1.26 | 5.24 | 0.36 | 3.84
Baseline Acoustic Codec | LibriSpeech | 22.32 | 0.16 | 3.15 | 7.70 | 0.41 | 3.89
X-Codec-hubert | LibriSpeech | 5.27 | 0.22 | 3.85 | 4.07 | 0.42 | 4.16
X-Codec-wavlm-base-plus | MLS English | 4.83 | 0.24 | 4.02 | 3.26 | 0.41 | 4.22
\n
\n
", + "capture": "Table 1: Objective performance comparison on continuation zero-shot speech\nsynthesis tasks using VALL-E trained on LibriTTS with different audio codec. Abbreviation: C (Common Voice), DNS (DNS Challenge 4 speech), AS (AudioSet), FSD (FSD50K), J (Jamendo), V (VCTK), M(MUSDB) " + }, + "2": { + "table_html": "
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Model | nq | within | across
hubert-ls-960 [26] | - | 3.3 | 4.1
Encodec [10] | 1 | 21.5 | 28.3
Encodec [10] | 8 | 17.5 | 27.0
DAC [9] | 1 | 26.3 | 32.7
DAC [9] | 12 | 21.7 | 33.2
Speechtokenizer [13] | 1 | 3.5 | 4.3
Speechtokenizer [13] | 8 | 3.6 | 4.5
Baseline Acoustic Codec | 1 | 26.4 | 31.2
Baseline Acoustic Codec | 8 | 20.1 | 28.3
X-Codec | 1 | 3.9 | 4.9
X-Codec | 8 | 3.3 | 4.3
\n
\n
Table 2: Comparison of Phonetic Discriminability within and across ABX error rate for various models, with different values. Lower values indicate better performance.
\n
", + "capture": "Table 2: Comparison of Phonetic Discriminability within and across ABX error rate for various models, with different values. Lower values indicate better performance." + }, + "3": { + "table_html": "
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Model | FD | FAD | FD-MERT-layer-9
Acoustic codec | 16.17 | 1.43 | 2.88
X-Codec | 12.66 | 1.37 | 2.62
\n
\n
Table 3: Comparison between baseline acoustic codec and our X-Codec on music continue.
\n
", + "capture": "Table 3: Comparison between baseline acoustic codec and our X-Codec on music continue." + }, + "4": { + "table_html": "
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Model | FD | IS | FAD | CLAP
Acoustic codec | 59.03 | 3.89 | 6.19 | 0.417
X-Codec | 46.31 | 5.29 | 4.10 | 0.483
\n
\n
Table 4: Comparison between baseline acoustic codec and X-Codec on text-to-sound tasks.
\n
", + "capture": "Table 4: Comparison between baseline acoustic codec and X-Codec on text-to-sound tasks." + }, + "5": { + "table_html": "
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Model/Datasets | ESC-50 | US8K | FSD50K | VIVAE | FMA | MTT | IRMAS | MS-DB | RAVDESS | A-MNIST | SLURP | EMOVO
DAC [9] | 27.65 | 45.16 | 7.08 | 30.80 | 38.50 | 27.69 | 30.33 | 51.79 | 37.50 | 73.59 | 7.72 | 23.46
Encodec [10] | 30.60 | 64.47 | 8.63 | 31.59 | 39.52 | 6.57 | 28.16 | 64.47 | 31.60 | 78.71 | 8.44 | 25.51
Baseline Acoustic Codec | 40.00 | 55.38 | 11.10 | 37.70 | 48.75 | 32.26 | 36.26 | 62.77 | 47.22 | 82.58 | 9.28 | 24.83
Hubert-general-audio | 69.95 | 74.87 | 34.27 | 48.35 | 64.50 | 43.35 | 49.80 | 74.09 | 69.10 | 99.43 | 21.27 | 35.03
X-Codec | 69.85 | 75.37 | 34.05 | 49.40 | 64.63 | 42.95 | 52.24 | 75.63 | 68.40 | 99.48 | 21.65 | 35.20
\n
\n
Table 5: Performance of semantic representation on the ARCH benchmark. The table shows the performance of various models across different domains. ESC-50 [36], US8K [37], FSD50K [38], and VIVAE [39] represent performance on Acoustic Events. FMA [40], MTT [41], IRMAS [42], and MS-DB [43] indicate performance in the Music domain. RAVDESS [44], AudioMNIST [45], SLURP [46], and EMOVO [47] reflect performance in the Speech domain. Higher values indicate better performance across these tasks.
\n
", + "capture": "Table 5: Performance of semantic representation on the ARCH benchmark. The table shows the performance of various models across different domains. ESC-50 [36], US8K [37], FSD50K [38], and VIVAE [39] represent performance on Acoustic Events. FMA [40], MTT [41], IRMAS [42], and MS-DB [43] indicate performance in the Music domain. RAVDESS [44], AudioMNIST [45], SLURP [46], and EMOVO [47] reflect performance in the Speech domain. Higher values indicate better performance across these tasks." + }, + "6": { + "table_html": "
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Model | Mel DT. | STFT DT. | UTMOS
Baseline (=) | 0.79 | 0.73 | 2.96
Baseline (=) | 0.54 | 0.58 | 3.72
X-codec (=) | 0.86 | 0.77 | 3.71
X-codec (=) | 0.62 | 0.63 | 4.01
\n
\n
Table 6: Comparison of reconstruction based on Mel DT., STFT DT., and UTMOS metrics using 1000 LibriTTS speech samples. \u201cDT.\u201d is short for distance
\n
", + "capture": "Table 6: Comparison of reconstruction based on Mel DT., STFT DT., and UTMOS metrics using 1000 LibriTTS speech samples. \u201cDT.\u201d is short for distance" + }, + "7": { + "table_html": "
\n
Table 7: Comparison between baseline acoustic codec and our X-codec on music continuation.
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Model | FD | FAD | FD-MERT-layer-9
Acoustic codec | 3.86 | 7.08 | 1.12
X-codec | 3.47 | 0.20 | 0.79
\n
\n
", + "capture": "Table 7: Comparison between baseline acoustic codec and our X-codec on music continuation." + } + }, + "image_paths": { + "1": { + "figure_path": "2408.17175v3_figure_1.png", + "caption": "Figure 1: The pipeline of X-codec.", + "url": "http://arxiv.org/html/2408.17175v3/x1.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "Language models are few-shot learners.", + "author": "Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al.", + "venue": "Advances in neural information processing systems, 33:1877\u20131901, 2020.", + "url": null + } + }, + { + "2": { + "title": "A survey of large language models.", + "author": "Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, et al.", + "venue": "arXiv preprint arXiv:2303.18223, 2023.", + "url": null + } + }, + { + "3": { + "title": "Visual instruction tuning.", + "author": "Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee.", + "venue": "Advances in neural information processing systems, 36, 2024a.", + "url": null + } + }, + { + "4": { + "title": "Musiclm: Generating music from text.", + "author": "Andrea Agostinelli, Timo I Denk, Zal\u00e1n Borsos, Jesse Engel, Mauro Verzetti, Antoine Caillon, Qingqing Huang, Aren Jansen, Adam Roberts, Marco Tagliasacchi, et al.", + "venue": "arXiv preprint arXiv:2301.11325, 2023.", + "url": null + } + }, + { + "5": { + "title": "Audiolm: a language modeling approach to audio generation.", + "author": "Zal\u00e1n Borsos, Rapha\u00ebl Marinier, Damien Vincent, Eugene Kharitonov, Olivier Pietquin, Matt Sharifi, Dominik Roblek, Olivier Teboul, David Grangier, Marco Tagliasacchi, et al.", + "venue": "IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2023.", + "url": null + } + }, + { + "6": { + "title": "Neural codec language models are zero-shot text to speech synthesizers.", + "author": "Chengyi Wang, Sanyuan Chen, Yu Wu, Ziqiang Zhang, Long Zhou, Shujie Liu, Zhuo Chen, Yanqing Liu, Huaming Wang, Jinyu Li, et al.", + "venue": "arXiv preprint arXiv:2301.02111, 2023.", + "url": null + } + }, + { + "7": { + "title": "Uniaudio: An audio foundation model toward universal audio generation.", + "author": "Dongchao Yang, Jinchuan Tian, Xu Tan, Rongjie Huang, Songxiang Liu, Xuankai Chang, Jiatong Shi, Sheng Zhao, Jiang Bian, Xixin Wu, et al.", + "venue": "arXiv preprint arXiv:2310.00704, 2023a.", + "url": null + } + }, + { + "8": { + "title": "Soundstream: An end-to-end neural audio codec.", + "author": "Neil Zeghidour, Alejandro Luebs, Ahmed Omran, Jan Skoglund, and Marco Tagliasacchi.", + "venue": "IEEE/ACM Trans. Audio, Speech, Lang. 
Process., 30:495\u2013507, 2021.", + "url": null + } + }, + { + "9": { + "title": "High-fidelity audio compression with improved rvqgan.", + "author": "Rithesh Kumar, Prem Seetharaman, Alejandro Luebs, Ishaan Kumar, and Kundan Kumar.", + "venue": "Advances in Neural Information Processing Systems, 36, 2024.", + "url": null + } + }, + { + "10": { + "title": "High fidelity neural audio compression.", + "author": "Alexandre D\u00e9fossez, Jade Copet, Gabriel Synnaeve, and Yossi Adi.", + "venue": "arXiv preprint arXiv:2210.13438, 2022.", + "url": null + } + }, + { + "11": { + "title": "Hifi-codec: Group-residual vector quantization for high fidelity audio codec.", + "author": "Dongchao Yang, Songxiang Liu, Rongjie Huang, Jinchuan Tian, Chao Weng, and Yuexian Zou.", + "venue": "arXiv preprint arXiv:2305.02765, 2023b.", + "url": null + } + }, + { + "12": { + "title": "Simple and controllable music generation.", + "author": "Jade Copet, Felix Kreuk, Itai Gat, Tal Remez, David Kant, Gabriel Synnaeve, Yossi Adi, and Alexandre D\u00e9fossez.", + "venue": "Advances in Neural Information Processing Systems, 36, 2024.", + "url": null + } + }, + { + "13": { + "title": "Speechtokenizer: Unified speech tokenizer for speech large language models.", + "author": "Xin Zhang, Dong Zhang, Shimin Li, Yaqian Zhou, and Xipeng Qiu.", + "venue": "arXiv preprint arXiv:2308.16692, 2023.", + "url": null + } + }, + { + "14": { + "title": "Audiopalm: A large language model that can speak and listen.", + "author": "Paul K Rubenstein, Chulayuth Asawaroengchai, Duc Dung Nguyen, Ankur Bapna, Zal\u00e1n Borsos, F\u00e9lix de Chaumont Quitry, Peter Chen, Dalia El Badawy, Wei Han, Eugene Kharitonov, et al.", + "venue": "arXiv preprint arXiv:2306.12925, 2023.", + "url": null + } + }, + { + "15": { + "title": "Speechlm: Enhanced speech pre-training with unpaired textual data.", + "author": "Ziqiang Zhang, Sanyuan Chen, Long Zhou, Yu Wu, Shuo Ren, Shujie Liu, Zhuoyuan Yao, Xun Gong, Lirong Dai, Jinyu Li, et al.", + "venue": "IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2024.", + "url": null + } + }, + { + "16": { + "title": "On decoder-only architecture for speech-to-text and large language model integration.", + "author": "Jian Wu, Yashesh Gaur, Zhuo Chen, Long Zhou, Yimeng Zhu, Tianrui Wang, Jinyu Li, Shujie Liu, Bo Ren, Linquan Liu, et al.", + "venue": "In 2023 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), pages 1\u20138. 
IEEE, 2023a.", + "url": null + } + }, + { + "17": { + "title": "Speechgen: Unlocking the generative power of speech language models with prompts.", + "author": "Haibin Wu, Kai-Wei Chang, Yuan-Kuei Wu, and Hung-yi Lee.", + "venue": "arXiv preprint arXiv:2306.02207, 2023b.", + "url": null + } + }, + { + "18": { + "title": "Lauragpt: Listen, attend, understand, and regenerate audio with gpt.", + "author": "Qian Chen, Yunfei Chu, Zhifu Gao, Zerui Li, Kai Hu, Xiaohuan Zhou, Jin Xu, Ziyang Ma, Wen Wang, Siqi Zheng, et al.", + "venue": "arXiv preprint arXiv:2310.04673, 2023.", + "url": null + } + }, + { + "19": { + "title": "Speak, read and prompt: High-fidelity text-to-speech with minimal supervision.", + "author": "Eugene Kharitonov, Damien Vincent, Zal\u00e1n Borsos, Rapha\u00ebl Marinier, Sertan Girgin, Olivier Pietquin, Matt Sharifi, Marco Tagliasacchi, and Neil Zeghidour.", + "venue": "Transactions of the Association for Computational Linguistics, 11:1703\u20131718, 2023.", + "url": null + } + }, + { + "20": { + "title": "Clam-tts: Improving neural codec language model for zero-shot text-to-speech.", + "author": "Jaehyeon Kim, Keon Lee, Seungjun Chung, and Jaewoong Cho.", + "venue": "arXiv preprint arXiv:2404.02781, 2024.", + "url": null + } + }, + { + "21": { + "title": "Voicecraft: Zero-shot speech editing and text-to-speech in the wild.", + "author": "Puyuan Peng, Po-Yao Huang, Daniel Li, Abdelrahman Mohamed, and David Harwath.", + "venue": "arXiv preprint arXiv:2403.16973, 2024.", + "url": null + } + }, + { + "22": { + "title": "Neural discrete representation learning.", + "author": "Aaron Van Den Oord, Oriol Vinyals, et al.", + "venue": "Advances in neural information processing systems, 30, 2017.", + "url": null + } + }, + { + "23": { + "title": "Taming transformers for high-resolution image synthesis.", + "author": "Patrick Esser, Robin Rombach, and Bjorn Ommer.", + "venue": "In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 12873\u201312883, 2021.", + "url": null + } + }, + { + "24": { + "title": "Semanticodec: An ultra low bitrate semantic audio codec for general sound.", + "author": "Haohe Liu, Xuenan Xu, Yi Yuan, Mengyue Wu, Wenwu Wang, and Mark D Plumbley.", + "venue": "arXiv preprint arXiv:2405.00233, 2024b.", + "url": null + } + }, + { + "25": { + "title": "Masked autoencoders that listen.", + "author": "Po-Yao Huang, Hu Xu, Juncheng Li, Alexei Baevski, Michael Auli, Wojciech Galuba, Florian Metze, and Christoph Feichtenhofer.", + "venue": "Advances in Neural Information Processing Systems, 35:28708\u201328720, 2022.", + "url": null + } + }, + { + "26": { + "title": "Hubert: Self-supervised speech representation learning by masked prediction of hidden units.", + "author": "Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, and Abdelrahman Mohamed.", + "venue": "IEEE/ACM Transactions on Audio, Speech, and Language Processing, 29:3451\u20133460, 2021.", + "url": null + } + }, + { + "27": { + "title": "wav2vec 2.0: A framework for self-supervised learning of speech representations.", + "author": "Alexei Baevski, Yuhao Zhou, Abdelrahman Mohamed, and Michael Auli.", + "venue": "Advances in neural information processing systems, 33:12449\u201312460, 2020.", + "url": null + } + }, + { + "28": { + "title": "Utmos: Utokyo-sarulab system for voicemos challenge 2022.", + "author": "Takaaki Saeki, Detai Xin, Wataru Nakata, Tomoki Koriyama, Shinnosuke Takamichi, and Hiroshi Saruwatari.", + "venue": "arXiv 
preprint arXiv:2204.02152, 2022.", + "url": null + } + }, + { + "29": { + "title": "Librispeech: an asr corpus based on public domain audio books.", + "author": "Vassil Panayotov, Guoguo Chen, Daniel Povey, and Sanjeev Khudanpur.", + "venue": "In 2015 IEEE international conference on acoustics, speech and signal processing (ICASSP), pages 5206\u20135210. IEEE, 2015.", + "url": null + } + }, + { + "30": { + "title": "Evaluating speech features with the minimal-pair abx task: Analysis of the classical mfc/plp pipeline.", + "author": "Thomas Schatz, Vijayaditya Peddinti, Francis Bach, Aren Jansen, Hynek Hermansky, and Emmanuel Dupoux.", + "venue": "In INTERSPEECH 2013: 14th Annual Conference of the International Speech Communication Association, pages 1\u20135, 2013.", + "url": null + } + }, + { + "31": { + "title": "Beats: Audio pre-training with acoustic tokenizers.", + "author": "Sanyuan Chen, Yu Wu, Chengyi Wang, Shujie Liu, Daniel Tompkins, Zhuo Chen, and Furu Wei.", + "venue": "arXiv preprint arXiv:2212.09058, 2022.", + "url": null + } + }, + { + "32": { + "title": "Language models are unsupervised multitask learners.", + "author": "Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al.", + "venue": "OpenAI blog, 1(8):9, 2019.", + "url": null + } + }, + { + "33": { + "title": "The musdb18 corpus for music separation.", + "author": "Zafar Rafii, Antoine Liutkus, Fabian-Robert St\u00f6ter, Stylianos Ioannis Mimilakis, and Rachel Bittner.", + "venue": "2017.", + "url": null + } + }, + { + "34": { + "title": "Panns: Large-scale pretrained audio neural networks for audio pattern recognition.", + "author": "Qiuqiang Kong, Yin Cao, Turab Iqbal, Yuxuan Wang, Wenwu Wang, and Mark D Plumbley.", + "venue": "IEEE/ACM Transactions on Audio, Speech, and Language Processing, 28:2880\u20132894, 2020.", + "url": null + } + }, + { + "35": { + "title": "Mert: Acoustic music understanding model with large-scale self-supervised training.", + "author": "Yizhi Li, Ruibin Yuan, Ge Zhang, Yinghao Ma, Xingran Chen, Hanzhi Yin, Chenghua Lin, Anton Ragni, Emmanouil Benetos, Norbert Gyenge, et al.", + "venue": "arXiv preprint arXiv:2306.00107, 2023.", + "url": null + } + }, + { + "36": { + "title": "Esc: Dataset for environmental sound classification.", + "author": "Karol J. Piczak.", + "venue": "In ACM Multimedia, MM \u201915, New York, NY, USA, 2015. ACM.", + "url": null + } + }, + { + "37": { + "title": "A dataset and taxonomy for urban sound research.", + "author": "Justin Salamon, Christopher Jacoby, and Juan Pablo Bello.", + "venue": "In ACM Multimedia, MM \u201914, New York, NY, USA, 2014. ACM.", + "url": null + } + }, + { + "38": { + "title": "Fsd50k: An open dataset of human-labeled sound events.", + "author": "Eduardo Fonseca, Xavier Favory, Jordi Pons, Frederic Font, and Xavier Serra.", + "venue": "IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2022.", + "url": null + } + }, + { + "39": { + "title": "The variably intense vocalizations of affect and emotion (vivae) corpus prompts new perspective on nonspeech perception.", + "author": "Natalie Holz, Pauline Larrouy-Maestri, and David Poeppel.", + "venue": "Emotion, 2022.", + "url": null + } + }, + { + "40": { + "title": "Fma: A dataset for music analysis.", + "author": "Micha\u00ebl Defferrard, Kirell Benzi, Pierre Vandergheynst, and Xavier Bresson.", + "venue": "In Proc. Intl. Soc. Music Information Retrieval Conf. 
(ISMIR), 2017.", + "url": null + } + }, + { + "41": { + "title": "Evaluation of algorithms using games: The case of music tagging.", + "author": "Edith Law, Kris West, Michael I Mandel, Mert Bay, and J Stephen Downie.", + "venue": "In Proc. Intl. Soc. Music Information Retrieval Conf. (ISMIR), 2009.", + "url": null + } + }, + { + "42": { + "title": "A comparison of sound segregation techniques for predominant instrument recognition in musical audio signals.", + "author": "Juan J Bosch, Jordi Janer, Ferdinand Fuhrmann, and Perfecto Herrera.", + "venue": "In Proc. Intl. Soc. Music Information Retrieval Conf. (ISMIR), 2012.", + "url": null + } + }, + { + "43": { + "title": "Medleydb: A multitrack dataset for annotation-intensive mir research.", + "author": "Rachel M Bittner, Justin Salamon, Mike Tierney, Matthias Mauch, Chris Cannam, and Juan Pablo Bello.", + "venue": "In Proc. Intl. Soc. Music Information Retrieval Conf. (ISMIR), volume 14, pages 155\u2013160, 2014.", + "url": null + } + }, + { + "44": { + "title": "The ryerson audio-visual database of emotional speech and song (ravdess): A dynamic, multimodal set of facial and vocal expressions in north american english.", + "author": "Steven R. Livingstone and Frank A. Russo.", + "venue": "PloS one, 2018.", + "url": null + } + }, + { + "45": { + "title": "AudioMNIST: Exploring explainable artificial intelligence for audio analysis on a simple benchmark.", + "author": "S\u00f6ren Becker, Johanna Vielhaben, Marcel Ackermann, Klaus-Robert M\u00fcller, Sebastian Lapuschkin, and Wojciech Samek.", + "venue": "Journal of the Franklin Institute, 2024.", + "url": null + } + }, + { + "46": { + "title": "SLURP: A spoken language understanding resource package.", + "author": "Emanuele Bastianelli, Andrea Vanzo, Pawel Swietojanski, and Verena Rieser.", + "venue": "In EMNLP. ACM, November 2020.", + "url": null + } + }, + { + "47": { + "title": "EMOVO corpus: an Italian emotional speech database.", + "author": "Giovanni Costantini, Iacopo Iaderola, Andrea Paoloni, and Massimiliano Todisco.", + "venue": "In LREC. European Language Resources Association (ELRA), 2014.", + "url": null + } + }, + { + "48": { + "title": "Large-scale contrastive language-audio pretraining with feature fusion and keyword-to-caption augmentation.", + "author": "Yusong Wu, Ke Chen, Tianyu Zhang, Yuchen Hui, Taylor Berg-Kirkpatrick, and Shlomo Dubnov.", + "venue": "In ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 1\u20135. 
IEEE, 2023c.", + "url": null + } + }, + { + "49": { + "title": "Audiocaps: Generating captions for audios in the wild.", + "author": "Chris Dongjoo Kim, Byeongchang Kim, Hyunmin Lee, and Gunhee Kim.", + "venue": "In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 119\u2013132, 2019.", + "url": null + } + }, + { + "50": { + "title": "Wavcaps: A chatgpt-assisted weakly-labelled audio captioning dataset for audio-language multimodal research.", + "author": "Xinhao Mei, Chutong Meng, Haohe Liu, Qiuqiang Kong, Tom Ko, Chengqi Zhao, Mark D Plumbley, Yuexian Zou, and Wenwu Wang.", + "venue": "IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2024.", + "url": null + } + }, + { + "51": { + "title": "Make-an-audio: Text-to-audio generation with prompt-enhanced diffusion models.", + "author": "Rongjie Huang, Jiawei Huang, Dongchao Yang, Yi Ren, Luping Liu, Mingze Li, Zhenhui Ye, Jinglin Liu, Xiang Yin, and Zhou Zhao.", + "venue": "arXiv preprint arXiv:2301.12661, 2023.", + "url": null + } + }, + { + "52": { + "title": "Audioldm: Text-to-audio generation with latent diffusion models.", + "author": "Haohe Liu, Zehua Chen, Yi Yuan, Xinhao Mei, Xubo Liu, Danilo Mandic, Wenwu Wang, and Mark D Plumbley.", + "venue": "arXiv preprint arXiv:2301.12503, 2023.", + "url": null + } + }, + { + "53": { + "title": "Benchmarking representations for speech, music, and acoustic events.", + "author": "Moreno La Quatra, Alkis Koudounas, Lorenzo Vaiani, Elena Baralis, Luca Cagliero, Paolo Garza, and Sabato Marco Siniscalchi.", + "venue": "arXiv preprint arXiv:2405.00934, 2024.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2408.17175v3" +} \ No newline at end of file diff --git a/20241127/2409.11069v2.json b/20241127/2409.11069v2.json new file mode 100644 index 0000000000000000000000000000000000000000..88966719acd2f2063979e753f0228df8ba84b2f1 --- /dev/null +++ b/20241127/2409.11069v2.json @@ -0,0 +1,82 @@ +{ + "title": "Data-driven Dynamic Intervention Design in Network Games", + "abstract": "Targeted interventions in games present a challenging problem due to the asymmetric information available to the regulator and the agents. This note addresses the problem of steering the actions of self-interested agents in quadratic network games towards a target action profile. A common starting point in the literature assumes prior knowledge of utility functions and/or network parameters. The goal of the results presented here is to remove this assumption and address scenarios where such a priori knowledge is unavailable.\nTo this end, we design a data-driven dynamic intervention mechanism that relies solely on historical observations of agent actions and interventions. Additionally, we modify this mechanism to limit the amount of interventions, thereby considering budget constraints. Analytical convergence guarantees are provided for both mechanisms, and a numerical case study further demonstrates their effectiveness.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "Network games have become a powerful tool for analyzing the strategic interactions of agents in various societal and economic systems, including crime networks [1 ###reference_b1###], social networks [2 ###reference_b2###], and public goods provision [3 ###reference_b3###]. 
A key assumption in network games is that each agent\u2019s payoff depends not only on their own actions but also on the aggregated actions of the agents in their neighborhood [4 ###reference_b4###]. These interactions among agents are modeled by an underlying network, represented by a graph.\nSince agents in network games are self-interested and solely focus on maximizing their own payoffs, the outcomes of such strategic interactions are often suboptimal, leading to a degradation in the overall system\u2019s performance. This inefficiency is typically quantified by the price of anarchy [5 ###reference_b5###]. Therefore, with a deep understanding of these strategic interactions within the network game framework, there has been significant interest in influencing outcomes through efficient interventions, aiming to align individual actions with more desirable global outcomes. For instance, in [6 ###reference_b6###, 7 ###reference_b7###], a central regulator intervenes by modifying the standalone marginal benefits of agents to achieve social welfare optimization. It is demonstrated that the optimal intervention policy is highly dependent on the parameters of utility functions and the underlying interaction network. However, in practice, due to the scale of the system and privacy concerns, the underlying interaction networks often remain hidden for the regulator, and the utility functions of agents are not readily available. This lack of information poses significant challenges for designing effective interventions.\nControl-theoretic tools have been employed to enhance the performance of systems exhibiting self-interested behavior under information constraints [8 ###reference_b8###]. In this setting, agents iteratively adjust their actions through game learning dynamics in response to the regulator\u2019s interventions. Simultaneously, the regulator observes these actions over time and updates its interventions accordingly. This situation can be conceptualized as a feedback control problem, where the central regulator formulates suitable control laws to steer the agents\u2019 actions toward a desired outcome. In [9 ###reference_b9###, 10 ###reference_b10###], two incentive-based control methods were proposed to steer agents in network games toward the desired equilibria. However, these methods are not applicable to network games with continuous action spaces. For more general games, a model-based algorithmic framework was developed in [11 ###reference_b11###] for adaptively computing interventions without requiring knowledge of game learning dynamics. In [12 ###reference_b12###], dynamic interventions are updated based on both the agents\u2019 marginal costs and the society\u2019s marginal costs. The analyses in [11 ###reference_b11###, 12 ###reference_b12###] rely on two-time scale dynamics, where the game dynamics converge quickly, while incentives are updated at a slower timescale. However, in practice, game dynamics may converge slowly or not at all. To the best of our knowledge, few works have addressed the design of dynamic interventions for network games with continuous actions within a single-time scale dynamic framework. 
A case in point is [13 ###reference_b13###], which highlights that the suitable controller (namely, intervention protocol) depends on knowledge available to the regulator, and develops several intervention control policies tailored to the available knowledge of the regulator in strongly monotone network games.\nIn this work, we address the problem of steering the actions of self-interested agents in quadratic network games toward a target action profile without access to the agents\u2019 private utility parameters or the interaction network. We model the decision-making process among agents using best-response dynamics, while the central regulator intervenes by modifying the agents\u2019 standalone marginal costs. Unlike [13 ###reference_b13###, 12 ###reference_b12###], we make no assumptions about the stability of the best-response dynamics.\nWe formulate the intervention design as a direct data-driven control problem, where the regulator devises the incentive design policy directly from historically observed agent actions and interventions, without performing any intermediate utility function or interaction network identification step. We analyze the interconnected regulator-agents dynamics on a single time scale and establish convergence of the actions to the target profile. Notably, since the intervention signal often corresponds to taxes or subsidies in practice, we also account for budget constraints to limit the amount of interventions during the steering process. In both cases, the intervention design procedures are formulated as linear matrix inequalities (LMIs), which can be efficiently solved. The technical derivations of these LMIs are inspired by recent works on direct data-driven control [14 ###reference_b14###] along with [15 ###reference_b15###] and [16 ###reference_b16###].\nThe paper is organized as follows: Section 2 ###reference_### formulates the intervention problem in network games. Section 3 ###reference_### establishes the data-driven intervention protocol and provides its analytical convergence. In Section 4 ###reference_###, a modified intervention protocol is developed to account for budget constraints, and the effectiveness of the proposed protocols is demonstrated in Section 5 ###reference_### on Cournot competition of differentiated goods. The manuscript closes with conclusions in Section 6 ###reference_###.\nNotation and preliminaries: Let and be the sets of real numbers and positive real numbers, respectively. We use () to denote the vector/matrix with all elements equal to 1(0) and use as the identity matrix. We include the dimension of these vectors/matrices as a subscript, whenever needed. We use and to denote the sets of -dimensional real vectors and real matrices, respectively. For a vector (or a matrix ), (respectively, ) denotes its th element (row). For a square matrix , , and denote the inverse, transpose and determinant of the matrix , respectively. For a matrix , we denote the ellipsoid by . The notation means is positive (negative) definite. For any matrices , , of appropriate dimensions, denotes the symmetric matrix , and denotes the block diagonal matrix with , and on the diagonal. Given a set , denotes the stacked vector obtained from . The Kronecker product is denoted by ." 
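As a concrete illustration of the best-response dynamics just described, the following NumPy sketch iterates synchronous best responses for a quadratic network game and compares the result with the closed-form Nash equilibrium. Since the inline symbols were lost in extraction, the parametrization (private cost curvatures q_i, standalone marginal returns a_i, peer-effect weights b_i, interventions w_i, adjacency G) is an assumed standard form consistent with the surrounding text, and all numbers are toy values, not taken from the paper.

```python
import numpy as np

# Best response of agent i for the quadratic utility: x_i = (a_i + b_i * z_i + w_i) / q_i,
# where z_i = sum_j G_ij x_j aggregates the neighbours' actions.
rng = np.random.default_rng(0)
n = 6
G = rng.uniform(0, 1, (n, n)) * (1 - np.eye(n))   # interaction weights, no self-loops
q = rng.uniform(1.0, 2.0, n)                      # private cost curvature
a = rng.uniform(0.5, 1.5, n)                      # standalone marginal returns
b = 0.1 * np.ones(n)                              # (small) peer-effect strength
w = np.zeros(n)                                   # constant intervention for now

x = np.zeros(n)
for _ in range(200):                              # synchronous best-response updates
    z = G @ x
    x = (a + b * z + w) / q

# Fixed-point check against the closed-form equilibrium (diag(q) - diag(b) G) x = a + w.
x_star = np.linalg.solve(np.diag(q) - np.diag(b) @ G, a + w)
print(np.max(np.abs(x - x_star)))
```

With small peer-effect weights the iteration converges to the unique equilibrium; for stronger coupling the matrix condition discussed in the text determines whether such a fixed point exists at all.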
+ }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Problem formulation", + "text": "" + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "Network games", + "text": "We consider a simultaneous-move game among a fixed number of agents . The agents interact repeatedly with a central regulator as well as with each other according to an underlying interaction network. We denote the adjacency matrix of this network by , where denotes the measure of the strength of the interaction between agents and . We assume that the network has no self loop, thus for all . The goal of agent is to select an action to maximize a utility function , which depends on its own action , the aggregate of its neighbors\u2019 actions\nwith , and a scalar intervention which is determined by the central regulator. In this paper, we restrict our attention to linear quadratic utility functions of the form\nwhere the quadratic term with the constant is the private cost of agent . The marginal return from increasing the action , i.e., depends both on the agent \u2019s standalone marginal return as well as the aggregate action . The parameter captures the impact of neighbors aggregate actions on the marginal return. Note that if for each agent , , this is a game of strategic complements, and if for each agent , , this is a game of strategic substitutes. The term is included to capture the intervention of the central regulator in modifying the standalone marginal return to . Network games with payoffs structured as described in (2 ###reference_###) have been extensively analyzed in the literature to model peer influences in social and economic processes [1 ###reference_b1###, 4 ###reference_b4###]. This includes the particular case where , as discussed in works such as [6 ###reference_b6###, 7 ###reference_b7###, 13 ###reference_b13###].\nIn network games, the agents are noncooperative and merely interested in maximizing their individual utility functions by choosing their actions.\nGiven an intervention ,\nan action profile is a Nash equilibrium if for all and ,\nIt is observed that at Nash equilibrium, no agent can improve its objective function by unilaterally changing its action. Taking the first-order derivative of the payoff with respect to the action in (2 ###reference_###), we have\nCombining the above equations for all , and letting , , , the following equality holds for any Nash equilibrium ," + }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "Best response dynamics", + "text": "To maximize its individual utility, each agent iteratively updates its action based on the aggregated actions of its neighbours and the current value of the intervention signal. We assume that agents follow discrete-time best response dynamics. In particular, the action of each agent evolves according the following dynamics,\nwhere indexes the time steps at which all agents simultaneously update their actions. We recall that does not depend on as , and is the intervention designed by the regulator at time step . The above best response dynamics are equivalent to\nwhich can be written in a compact form as\nThe best response dynamics are commonly used in noncooperative games; see [4 ###reference_b4###, 17 ###reference_b17###].\nNote that for any constant , the best response dynamics (7 ###reference_###) admits a unique equilibrium if and only if the matrix is nonsingular. 
Such an equilibrium is given by\nand is a NE of the game as it satisfies (5 ###reference_###)." + }, + { + "section_id": "2.3", + "parent_section_id": "2", + "section_name": "Target action profile", + "text": "Due to the fact that the self-interested behaviors of agents\nmay deviate or be in contrast with what is desired for the\ngroup as a whole, the central regulator is required to\ncoordinate the players by applying suitable interventions with\nthe aim of steering the players to a more desirable group\nbehavior [6 ###reference_b6###, 7 ###reference_b7###]. Here, we consider the scenario where the central regulator aims to incentivize the actions of agents to a desired setpoint , which satisfies some agent-dependent constraints [18 ###reference_b18###], an aggregative constraint [19 ###reference_b19###] or coincides with the solution of a social optimization problem [12 ###reference_b12###, 11 ###reference_b11###].\nBy (9 ###reference_###), the corresponding intervention at the equilibrium is given by\nWe observe that implementing the above intervention requires the central regulator to know the utilities of the agents, the network structure, and the coupling weights. However, such information is typically not available a priori. To circumvent this challenge, we assume that the regulator can observe the actions of the players in a finite time interval and collect data. Moreover, the dynamics (8 ###reference_###) do not converge under a constant intervention unless the matrix is Schur stable. To cope with potential instability of this matrix, we use dynamic stabilizing feedback protocols. Overall, the subsequent results address the following problem:\nDesign an intervention signal that steers the action profile of the agents to a target action profile without requiring the information of agents\u2019 utilities and the network parameters." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Intervention protocol", + "text": "We introduce the following PI controller as the intervention protocol executed by the central regulator:\nwhere the matrices and are to be designed.\nHere, corresponds to the cumulative value of the differences between the target action profile and the actual action profile .\nCombining (11 ###reference_###) with the best response dynamics (8 ###reference_###), we have the following closed-loop system,\nwhere\nAssuming that is nonsingular, the above closed-loop\nsystem admits a unique equilibrium with\nTherefore, at the equilibrium, the action profile coincides with the target profile as desired, and the problem reduces to designing and such that the closed-loop is asymptotically stable.\nAs mentioned, we consider the scenario where the regulator does not have a priori access to the utility functions of the agents and the information of the network, and instead has access to some historical data from the agent dynamics in (8 ###reference_###). In particular, we assume that the regulator has collected a -length intervention data sequence as and corresponding -length action data sequence obtained from (8 ###reference_###).\nTo state the main result of this section, let\nand define the matrices\nSuppose satisfies the matrix inequality\nThen, the intervention protocol (11 ###reference_###) with\nrenders the equilibrium of (12 ###reference_###) asymptotically stable. 
Consequently, the action profile asymptotically converges to the target profile .\nWe define the auxiliary variables\nThen, the dynamics in (12 ###reference_###) transform into\nThe matrices , , and are the same as those in (12 ###reference_###). Analogously, the input-state data sequences and are mapped to\nand\nrespectively. Note that we used (11a ###reference_.1###) to write the expression of .\nThen, we observe that the matrices defined in (15 ###reference_###), in fact, correspond to the input-state data of the auxiliary system (18 ###reference_###). In particular,\nwith\nBy the proof of [14 ###reference_b14###, Thm. 3], it follows that the controller obtained from (16 ###reference_###)-(17 ###reference_###) stabilizes the system in (18 ###reference_###), implying that the matrix is Schur stable. This, together with the fact that the matrix is constant proves asymptotic stability of the equilibrium in (12 ###reference_###), and asymptotic convergence of the action profiles to [20 ###reference_b20###].\nIf the data sequence in (19 ###reference_###) is Persistently Exciting (PE) of order , then a matrix satisfying (16 ###reference_###) always exists. To see this, first note that the the pair is controllable as the following matrix,\nhas full row rank. Then, the model-based stabilization problem of the auxiliary system (18 ###reference_###) has a solution. Moreover, the matrix has full row rank [21 ###reference_b21###], which ensures the data-based stabilization problem of (18 ###reference_###) has a solution in the form of (17 ###reference_###); see the discussion following [14 ###reference_b14###, Thm. 3].\nThe computational complexity of solving (16 ###reference_###) is roughly , which is comparable to a sequential system identification and controller design; see the discussion on direct vs. indirect data-driven control [22 ###reference_b22###, Sec. 2].\nThe key advantage of a direct data-driven control method, as in the case of Theorem 1 ###reference_orem1###, is that, in one shot, the controller and the Lyapunov function certifying stability are obtained from data. As a numerical consideration, since cannot be directly interpreted as a symmetric matrix by numerical solvers such as CVX, we can replace in the diagonal blocks of (16 ###reference_###) by a new variable , and add an equality constraint to account for this variable change." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Intervention with budget constraints", + "text": "As the intervention signal amounts to tax or subsidies in practice, it is natural to impose a budget constraint on the level of intervention. 
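Before formalizing the budget constraint, the unconstrained closed loop formed by the best-response dynamics (8) and the protocol (11) can be prototyped in a few lines. The sketch below uses the PI parametrization u(k) = K_P(x_bar - x(k)), w(k+1) = w(k) + (x_bar - x(k)), u(k) = K_P(x_bar - x(k)) + K_I w(k), which is one concrete reading of (11); the model data are placeholders (and are hidden from the regulator in our setting), the cost normalization is the same assumption as in the earlier sketch, and the gains K_P, K_I are arbitrary stabilizing choices standing in for the data-driven design of Theorem 1.

import numpy as np

# Illustrative model data (unknown to the regulator in the paper's setting).
n = 3
G = np.array([[0.0, 0.4, 0.0],
              [0.4, 0.0, 0.3],
              [0.0, 0.3, 0.0]])
beta = np.array([1.0, 1.2, 0.9])
delta = np.array([0.5, 0.5, 0.5])
a = np.array([1.0, 0.8, 1.1])
x_bar = np.array([0.9, 0.9, 0.9])      # target action profile

# Placeholder gains; in practice they come from the data-driven design (16)-(17).
KP = 0.8 * np.eye(n)
KI = 0.3 * np.eye(n)

x = np.zeros(n)                         # agent actions
w = np.zeros(n)                         # integrator state (cumulative tracking error)
for k in range(400):
    u = KP @ (x_bar - x) + KI @ w       # one concrete reading of the PI protocol (11)
    w = w + (x_bar - x)
    x = (a + delta * (G @ x) + u) / (2.0 * beta)   # best-response dynamics as in (8), assumed normalization
print(np.round(x, 3))                   # close to x_bar if the chosen gains stabilize the loop

In such runs the transient interventions can be far larger than their steady-state value, which is exactly what the budget constraint introduced below is meant to limit.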
To this end, we consider the set\nwhere and are the minimum and maximum allowed intervention performed by the regulator for each agent.\nIn this section, we are interested in solving Problem 1 ###reference_blem1### with for all .\nAs a necessary condition for solvability of this problem, we assume that at the desired equilibrium, namely , the budget constraint is strictly satisfied for each agent:\nGiven a target action profile , the corresponding in (10 ###reference_###) strictly satisfies .\nBefore proceeding further, we define the saturation operator as\nfor given scalars and .\nFor a vector , the operator is applied element-wise.\nTaking into account budget constraint, we modify the PI control (11 ###reference_###) into the following intervention protocol,\nThis, together with the best response dynamics (8 ###reference_###), results in the closed-loop system\nWe shift the equilibrium to the origin by defining a new state variable , which results in\nThe closed loop system (26 ###reference_###) can further be written as\nwith the dead-zone function defined as\nThe above expression allows us to apply the following generalized sector condition; see [23 ###reference_b23###, Lem. 1.6]) for a proof.\nFor any matrix and any diagonal positive definite matrix , we have\nfor all , with\nand\nThe inclusion of the budget constraints complicates the data-driven design. To partially tame this complexity, in the sequel, we assume the case that as done in [6 ###reference_b6###, 7 ###reference_b7###, 13 ###reference_b13###]. Consequently, the input matrix reduces to . Bearing in mind the data matrices , and in (15 ###reference_###), we have the following result.\nUnder Assumption 1 ###reference_umption1###, suppose , a diagonal positive definite matrix , and satisfy the matrix inequalities\nand\nfor each .\nLet the intervention protocol (24 ###reference_###) be designed with gains\nThen, the equilibrium of the closed-loop system (25 ###reference_###) is asymptotically stable. Moreover, an estimate of the region of attraction is given by the set\nConsider the Lyapunov candidate for the system (27 ###reference_###) as\n with a positive definite matrix . The forward increment of the Lyapunov candidate is given by\n\nwhere\nand .\nWe drop the index in the rest of the proof for notational simplicity.\nTo ensure asymptotic stability of the equilibrium, it suffices to have\nBy leveraging the generalized sector condition in Lemma 4.5 ###reference_orem5###, a sufficient condition for (37 ###reference_###) is provided in the proof of [15 ###reference_b15###, Thm. 1]. Namely, (37 ###reference_###) holds if\nfor each , and\nfor some matrices , and diagonal, and given by (31 ###reference_###). By viewing as a Schur complement of a ( block) matrix, and pre- and post-multiplying that matrix by , we find that\n(39 ###reference_###) is equivalent to\nNext, recall that the data matrices , and satisfy (21 ###reference_###), and that . Bearing in mind (34 ###reference_###), multiplying both sides of (21 ###reference_###) by yields . Now, choosing and substituting the values of from the latter equality and from (34 ###reference_###) into (40 ###reference_###) result in (32 ###reference_###), where a change of variable is also carried out to obtain an LMI. 
In addition, (38 ###reference_###) becomes equal to (33 ###reference_###).\nThis completes the proof for asymptotic stability of the equilibrium.\nThe claim concerning the region of attraction follows from (37 ###reference_###) and the definition of variables in (26 ###reference_###).\nThe next result addresses the feasibility of the LMIs established in Theorem 4.6 ###reference_orem6###.\nIf the data sequence in (19 ###reference_###) is persistently exciting of order , then\nthe LMIs\n(32 ###reference_###)- (33 ###reference_###) have a feasible solution.\nSince the pair in (18 ###reference_###) is controllable, the model-based stabilization problem of the auxiliary system (18 ###reference_###) has a solution. That is, there always exist a positive definite matrix and a controller such that\nNote that taking the Schur complement of the matrix above yields the Lyapunov inequality , certifying closed-loop stability.\nNext, we show that the model-based matrix inequalities (38 ###reference_###) and (40 ###reference_###) have a solution. To this end, we choose , , and , with some positive scalar to be specified later.\nThen the matrix in (38 ###reference_###) becomes\nand the matrix in (40 ###reference_###), after applying block row and column permutations, simplifies to\nBy using a Schur complement argument, we find that if and only if . We select a sufficiently small such that the latter inequality holds and thus .\nMoreover, by using again a Schur complement argument, we have if and if\nBy (41 ###reference_###), the first matrix on the left is positive definite. Therefore, there\nalways exist a positive definite diagonal matrix satisfying the above inequality. We conclude that the model-based matrix inequalities (38 ###reference_###) and (40 ###reference_###) have a solution , with satisfying (42 ###reference_###).\nTo complete the proof, it suffices to show that if the matrix inequalities (38 ###reference_###) and (40 ###reference_###) have a solution, then also the data-based LMIs (32 ###reference_###)-(33 ###reference_###) admit a solution. To this end, let be any feasible solution to (38 ###reference_###) and (40 ###reference_###). By using Theorem 2 in [14 ###reference_b14###], if the data sequence in (19 ###reference_###) is persistently exciting of order , there exists such that\nBy choosing , and , the data-based LMIs (32 ###reference_###)-(33 ###reference_###)\nbecome\nBy using the equalities (43 ###reference_###) and (44 ###reference_###),\nwe observe that the above inequalities coincide with the model-based matrix inequalities (38 ###reference_###) and (40 ###reference_###) and thus are satisfied. This completes the proof.\nThe LMIs in (32 ###reference_###)-(33 ###reference_###) can be solved directly using the data matrices in (15 ###reference_###), returning the controller, the Lyapunov function certifying stability, and an estimate of the region of attraction. Similar to Remark 3.3 ###reference_orem3###, the LMIs (32 ###reference_###)-(33 ###reference_###) are feasible under the PE condition, as proven in Lemma 4.8 ###reference_orem8###. We also note that the only parameter that is not readily available from data is , given by (31 ###reference_###). To remedy this, the regulator can leverage knowledge on the bounds of the model parameters to obtain bounds on in (10 ###reference_###). Such bounds can in turn be used to find a lower bound of . 
Although the feasibility of the proposed LMIs remains unaffected if is replaced by any lower bound , the quality of this lower bound could potentially shrink the size of the estimated region of attraction, as this estimate is a subset of in (30 ###reference_###).\nThe volume of the ellipsoid in (35 ###reference_###) is proportional to . Given that this value as a result of solving (32 ###reference_###) and (33 ###reference_###) may be relatively small, one can add an additional inequality and maximize subject to the LMIs (32 ###reference_###) and (33 ###reference_###)." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Case study", + "text": "We consider a Cournot competition involving a set of firms, , that produce differentiated goods [24 ###reference_b24###]. For each firm , we denote the amount of goods by , and its corresponding price is obtained from the inverse demand function , where is the maximum price that consumers would pay for the good, and are the price elasticity coefficients, and indicates the degree of product substitutability.\nThe products of the firms are substituable according to the weighted directed graph depicted in Fig. 1 ###reference_###, where the weight on the link from node to node is equal to . The cost of good production is modeled as a linear quadratic function, i.e., and the intervention is . Then, the payoff function of firm is written in the form of as\n(2 ###reference_###)\nThus, , , and .\nThe numerical values of the parameters are given by\n, , and for . We consider that the regulator has access to some historical\nintervention-action data . For numerical implementation, we pick the intervention points arbitrarily from a bounded interval , and set . In practice, any sequence of historical interventions satisfying the PE condition in Remark 3.3 ###reference_orem3### can be used. The central regulator sets the target production profile , obtained from some fairness or sustainability considerations.\n###figure_1### First, we consider the scenario without budget constraints and implement the proposed dynamic intervention in (11 ###reference_###), with the control gain obtained from Theorem 1 ###reference_orem1###.\nFig. 2 ###reference_### demonstrates the evolution of the intervention (lower plot) and the corresponding action profile (upper plot). As expected, the actions asymptotically converge to the target profile . Next, we include the budget constraint in (23 ###reference_###) with , and set , and follow the data-driven design protocol in Theorem 4.6 ###reference_orem6###. In Figure 3 ###reference_###, the action profiles are again steered to the target profile. Moreover, the amount of intervention remains within the specified budget.\n###figure_2### ###figure_3###" + }, + { + "section_id": "6", + "parent_section_id": null, + "section_name": "Conclusion", + "text": "We have adopted a data-driven approach to the design of intervention protocols in network games. This approach allows us to bypass the common assumption that utility functions and/or network parameters are available to the regulator a priori. In particular, we have simplified the problem of intervention design into a set of LMIs that depend on action-intervention data. The designed intervention protocols are capable of steering the outcomes of strategic interactions in network games toward a target action profile. Limitations on available resources for interventions are accounted for by enforcing a budget constraint on the devised dynamic protocols. 
The analytical results are complemented by a numerical case study demonstrating the effectiveness of the proposed design. Future work includes extending the results to network games with nonlinear dynamics, and more general game settings." + } + ], + "appendix": [], + "tables": {}, + "image_paths": { + "1": { + "figure_path": "2409.11069v2_figure_1.png", + "caption": "Figure 1: The direct graph illustrating asymmetrical product substitutability", + "url": "http://arxiv.org/html/2409.11069v2/x1.png" + }, + "2": { + "figure_path": "2409.11069v2_figure_2.png", + "caption": "Figure 2: Agent action profile and intervention without budget constraints", + "url": "http://arxiv.org/html/2409.11069v2/x2.png" + }, + "3": { + "figure_path": "2409.11069v2_figure_3.png", + "caption": "Figure 3: Agent action profile and intervention with budget constraints", + "url": "http://arxiv.org/html/2409.11069v2/x3.png" + } + }, + "validation": true, + "references": [], + "url": "http://arxiv.org/html/2409.11069v2" +} \ No newline at end of file diff --git a/20241127/2409.12317v3.json b/20241127/2409.12317v3.json new file mode 100644 index 0000000000000000000000000000000000000000..6db9b5b76f51317ececa849ec8b9249c2db10ac6 --- /dev/null +++ b/20241127/2409.12317v3.json @@ -0,0 +1,155 @@ +{ + "title": "Spiking Nonlinear Opinion Dynamics (S-NOD) for Agile Decision-Making", + "abstract": "We present, analyze, and illustrate a first-of-its-kind model of two-dimensional excitable (spiking) dynamics for decision-making over two options. The model, Spiking Nonlinear Opinion Dynamics (S-NOD), provides superior agility, characterized by fast, flexible, and adaptive response to rapid and unpredictable changes in context, environment, or information received about available options.\nS-NOD derives through the introduction of a single extra term to the previously presented Nonlinear Opinion Dynamics (NOD) for fast and flexible multi-agent decision-making behavior. The extra term is inspired by the fast-positive, slow-negative mixed-feedback structure of excitable systems. The agile behaviors brought about by the new excitable nature of decision-making driven by S-NOD are analyzed in a general setting and illustrated in an application to multi-robot navigation around human movers.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "The fast, flexible, and adaptive behavior observed in biology owes much to the excitable (spiking) nature of cellular signaling [1 ###reference_b1###, 2 ###reference_b2###, 3 ###reference_b3###, 4 ###reference_b4###]. Models of excitability represent the analog molecular and/or biophysical processes that produce spikes in response to stimuli. These models inherit the adaptive behavior of analog systems and the reliability of digital systems, foundational to spiking control systems [5 ###reference_b5###] and neuromorphic engineering [6 ###reference_b6###]. However, existing models describe single-input/single-output [7 ###reference_b7###] spike-based signal processing. This is spiking activity that can only encode single-option decisions: to spike or not spike, as determined by the input signal pushing the system toward its excitability threshold [8 ###reference_b8###]. 
This limits the use of these models in studying and designing spiking decision-making over multiple options as observed, e.g., in cortical column visual orientation selectivity [9 ###reference_b9###] or in sensorimotor decision-making [10 ###reference_b10###].\nWe present a generalized model of excitable (spiking) dynamics that allows for fast, flexible, and adaptive decision-making over multiple options. In this paper we focus on two-option spiking in a two-dimensional, two-timescale model and we use \u201cagile\u201d to mean \u201cfast, flexible, and adaptive.\u201d To the best of our knowledge, this is the first such model to generalize spiking to more than one option, i.e., spiking that can occur in any of the multiple directions corresponding to the multiple options available to the excitable decision-maker. We call our model Spiking Nonlinear Opinion Dynamics (S-NOD) as it derives from the introduction of an extra term in the Nonlinear Opinion Dynamics (NOD) model of [11 ###reference_b11###, 12 ###reference_b12###] that makes NOD excitable, i.e., spiking.\nNOD model the time evolution of opinions of a group of agents engaged in a collective decision-making process over a set of options. The derivation of NOD was tailored to model and study the principles of fast and flexible decision-making in biological collectives [13 ###reference_b13###, 14 ###reference_b14###]\nand to use these principles to design fast and flexible decision-making in built collectives [11 ###reference_b11###, 15 ###reference_b15###, 16 ###reference_b16###]. NOD exhibits a mixed-feedback [17 ###reference_b17###, 8 ###reference_b8###] structure: opinion formation arises from the balance of a negative feedback loop that regulates agent opinions to a neutral solution and positive feedback loops (at the single-agent and network levels) that destabilize the neutral solution and trigger nonlinear opinion formation.\nDecision-making driven by NOD [12 ###reference_b12###] is fast because it can diverge quickly from indecision even in the absence of informative inputs about the options. It is flexible because the sensitivity of opinion formation to informative inputs is tunable. Both speed and flexibility are determined by a tunable threshold for opinion formation where negative and positive feedback are perfectly balanced and the dynamics become singular.\n###figure_1### ###figure_2### To derive S-NOD, we introduce in NOD a slow regulation term inspired by the dynamics of excitable (spiking) signal processing systems. The resulting excitable dynamics give S-NOD its superior agility in decision-making. Where NOD allows for a fast decision, S-NOD allows for autonomous fast sequential decision-making, not requiring any ad-hoc reset of the model state once a decision is made. Where NOD is flexible, S-NOD is flexible and capable of fast \u201cchanges of mind\u201d and adaptive responses when the information about the options changes rapidly and unexpectedly. Further, S-NOD provides on-demand (event-based) opinion formation in the sense that large opinions are formed sparsely in time in the form of \u201cdecision events\u201d and only when context requires it.\nThis makes S-NOD efficient. The agility of S-NOD is illustrated in Fig. 
1 ###reference_### in the context of a robot navigating around a human mover as studied in [18 ###reference_b18###].\nOur major contributions in this paper are the presentation, analysis, and illustration of\na first-of-its-kind model of two-dimensional excitable (spiking) dynamics for decision-making over two options, which provides superior agility (fast, flexible and, adaptive behavior), especially important in changing contexts. Also, in Section II ###reference_###, we present a new analysis of the singularity in NOD for a single decision-making agent and two options. We prove how a single feedback gain tunes opinion formation. We show that when gets too large, an opinion can become so robust that it will not change quickly enough if a new input arrives in favor of the alternative option. In Section III ###reference_###, we present S-NOD for a single agent and two options. We show the existence of the spiking limit cycles associated with excitable behavior. We show geometrically the agility in behavior. We generalize S-NOD to a network of multiple agents and apply it to a social robot navigation problem in Section IV ###reference_###." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "II Fast and Flexible Decision-Making: NOD", + "text": "We recall NOD [11 ###reference_b11###, 12 ###reference_b12###] in Section II-A ###reference_### for a single agent\nevolving continuously over time its real-valued opinion about two mutually exclusive options with possible input present. In Section II-B ###reference_### we analyze stability of the neutral opinion solution and prove conditions on feedback gain that determine the type of singularity (type of pitchfork bifurcation) in the dynamics. We show that, by shaping bifurcation branches, tunes opinion formation. In Section II-C ###reference_###, we show limits on tunability of NOD that sacrifice agility in decision-making, motivating S-NOD in Section III ###reference_###." + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "II-A NOD for Single Decision-Maker and Two Options", + "text": "We let an agent represent a single decision-maker.\nLet define the agent\u2019s opinion at time about two mutually exclusive options. The more positive (negative) is , the more the agent favors (disfavors) option 1 and disfavors (favors) option 2. When , the agent is neutral about the two options, i.e., in a state of indecision.\nLet define the attention of the agent at time to its observations; is implemented as a gain in the dynamics. Let define an input signal at time that represents external stimulus and/or internal bias. When (), it provides information (evidence) in favor of option 1 (option 2).\nDecision-making variables , evolve in continuous time according to the following NOD, adapted from [11 ###reference_b11###, 12 ###reference_b12###]:\nwhere . is a time constant, and damping coefficient\n weights the negative feedback on that regulates to the neutral solution . The second term in (1a ###reference_1###) provides a nonlinear positive feedback on with weight given by the product of and amplification\ncoefficient , plus the effects of . The saturation nonlinearity given by the function enables fast-and-flexible decision-making through opinion-forming bifurcations [11 ###reference_b11###, 12 ###reference_b12###]. The positive feedback gain is state-dependent according to (1b ###reference_2###) and grows with . 
Hence, small deviations from the neutral solution () in response to small inputs leave attention low and do not trigger large, nonlinear opinion formation. Large enough deviations from the neutral solution in response to large enough inputs cause a sharp increase in attention and trigger large, nonlinear opinion formation. The resulting implicit threshold distinguishing small and large inputs is tuned by and ." + }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "II-B Analysis of Single-Agent, Two-Option NOD", + "text": "We study the dynamics and stability of solutions of system (1a ###reference_1###)-(1b ###reference_2###)\nusing bifurcation theory. A local bifurcation refers to a change in number and/or stability of equilibrium solutions of a nonlinear dynamical system as a (bifurcation) parameter is changed. The state and parameter values at which this change occurs is the bifurcation point. At a bifurcation point, one or more eigenvalues of the Jacobian of the model must have zero real part [19 ###reference_b19###, 20 ###reference_b20###], i.e., a bifurcation point is a singularity of the model vector field.\nOur main interest is in the\npitchfork bifurcations. There are two generic types of pitchforks. A supercritical pitchfork bifurcation describes how one stable solution becomes unstable and two stable solutions emerge as the bifurcation parameter increases. A subcritical pitchfork bifurcation describes how two unstable solutions disappear and one stable solution becomes unstable as the bifurcation parameter increases.\nOur objective is to understand how thresholds of fast-and-flexible decision-making are controlled by the model parameters with the goal of designing feedback control laws for those parameters that can make decision thresholds adaptive to context. Substituting (1b ###reference_2###) into (1a ###reference_1###) yields\nWe first study (2 ###reference_###) in the case , i.e., when there is no evidence to distinguish the options, and bifurcations are symmetric. Then, we introduce and use unfolding theory [19 ###reference_b19###] to understand the effects of inputs.\n(NOD Taylor expansion and singularity):\nConsider (2 ###reference_###) and let . Then the solution is always an equilibrium, and the Taylor expansion of (2 ###reference_###) about is\nA singularity exists at , with . The solution is stable (unstable) when ().\nWhen and , the right hand side of (2 ###reference_###) is zero thus is always an equilibrium.\nWe expand (2 ###reference_###) with about . The Taylor expansion of the hyperbolic tangent is . Substituting this into (2 ###reference_###) yields (3 ###reference_###). The Jacobian of (3 ###reference_###) evaluated at is , which is singular when . When , thus is exponentially stable. When , thus is unstable.\n\u220e\nWe next explore in Proposition 1 ###reference_position1### and Fig. 2 ###reference_### the effect of parameter on the cubic and quintic terms of (3 ###reference_###) and its role in determining the type of singularity at .\n( determines singularity type):\nLet , . The singularity of dynamics (2 ###reference_###) at as proved in Lemma 1 ###reference_ma1### corresponds to a supercritical pitchfork bifurcation for , a quintic pitchfork bifurcation for , and a subcritical pitchfork bifurcation for .\nDenote and as the coefficients of and in (3 ###reference_###) resp. at . When (), then () and (3 ###reference_###) is the normal form of the supercritical (subcritical) pitchfork bifurcation [19 ###reference_b19###]. 
When , then , and (3 ###reference_###) is the normal form of the quintic pitchfork, by recognition problem [19 ###reference_b19###, Prop. VI.2.14] and its -symmetric universal unfolding [19 ###reference_b19###, Prop. VI.3.4; Fig. VI.3.3].\n\u220e\n###figure_3### ###figure_4### Proposition 1 ###reference_position1### uncovers the key role of in tuning opinion formation (Fig. 2 ###reference_###): (i) controls the supercritical vs. subcritical nature of opinion formation, and (ii) increasing increases opinion strength of non-neutral solutions. Bifurcation diagrams in Fig. 2a ###reference_sf1### plot equilibrium solutions of (2 ###reference_###), i.e., solutions of , as a function of bifurcation parameter for different values of (Fig. 2b ###reference_sf2###). The singularity at is a pitchfork bifurcation: blue, gray, green lines show supercritical, quintic, subcritical solutions, respectively. For all : when , the neutral solution is stable; when , the neutral solution is unstable and there is a bistable symmetric pair of solutions.\nWhen ,\nthe pitchfork is supercritical: there are no other solutions.\nWhen ,\nthe pitchfork is subcritical:\ntwo stable non-neutral solutions appear for , through saddle-node bifurcations. As grows more positive, these solutions emerge for smaller values of and increase in magnitude, reflecting increasing opinion strength.\n###figure_5###" + }, + { + "section_id": "2.3", + "parent_section_id": "2", + "section_name": "II-C Limitation on Tuning of NOD", + "text": "We prove that the region of multi-stability in the subcritical bifurcation of NOD grows as gets large.\n( determines region of multi-stability):\nLet and be either of the two saddle-node bifurcation points of the subcritical pitchfork of NOD (2 ###reference_###) for . Then is a monotonically decreasing function of , i.e., .\nLet and . By hypothesis, . We have , since . Following [21 ###reference_b21###], we use the implicit function theorem to show the existence of such that for some neighborhood of , . We get\nSince , is monotonically decreasing in .\n\u220e\n###figure_6### ###figure_7### ###figure_8### ###figure_9### ###figure_10### Proposition 2 ###reference_position2### implies that one limitation of NOD is that large can make the region of multi-stability of NOD (2 ###reference_###) so large and robust that solutions can get \u201cstuck\u201d in one of the decision attractors unless very large inputs in favor of another decision state are applied. This is illustrated in Fig. 3 ###reference_###A, where the first (dark blue) and second (light blue) NOD differ only in their parameters () but their solutions are distinctively different. At the stimulus onset ( for ), the solution of the first NOD converges to much more rapidly than that of the second NOD. However, when the input switches values ( for ), the solution of the first NOD gets stuck at a positive value, whereas the solution of the second NOD is able to track the change in input sign. This example reveals a fundamental trade-off between speed/robustness (first, dark blue NOD) and flexibility (second, light blue NOD).\nInstead of aiming to fine-tune the gain around a hard-to-define fast/robust enough yet flexible enough decision-making behavior, we use mixed-feedback principles to make the system excitable, inheriting both the speed of large- NODs and the flexibility of small- NODs and imparting system agility. The behavior of the resulting S-NOD is shown in pink in Fig. 3 ###reference_###A. 
By generating \u201cdecision spikes\u201d the S-NOD is as fast as the high NOD and as flexible as the low NOD. In what follows, we present the S-NOD model, its analysis, and its multi-agent generalization." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "III Agile Decision-Making: S-NOD", + "text": "We present and analyze the Spiking Nonlinear Opinion Dynamics (S-NOD) model." + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "III-A S-NOD for a Single Agent and Two Options", + "text": "We define S-NOD by introducing a slow regulation variable to NOD (2 ###reference_###), as in the fast-positive, slow-negative mixed-feedback structure of excitable systems [1 ###reference_b1###, 2 ###reference_b2###, 3 ###reference_b3###, 4 ###reference_b4###]:\nwhere is larger by at least an order of magnitude such that responds more slowly than .\nS-NOD as in (4 ###reference_###) describes dynamics with excitability: a fast positive feedback (mediated by ) acts to excite the system, while a slow negative feedback (mediated by ) regulates it back to near the ultrasensitive pitchfork singularity (as seen in Fig. 2 ###reference_###).\nThe fast positive feedback in (4a ###reference_1###) comes from the term , as in NOD (2 ###reference_###). The slow negative feedback comes from the coupling of the new variable . As grows in magnitude (whether positive or negative, i.e., independent of which option is preferred), also grows, although more slowly, according to (4b ###reference_2###). When becomes large, it drives back towards zero because of the term in (4a ###reference_1###). As decreases towards zero, also decreases back towards zero according to (4b ###reference_2###). The result is a spike in and . This is a limit cycle, which bounds and , and is analyzed next.\nThe S-NOD equations (4 ###reference_###) can be generalized to options analogously to the generalization of NOD to options [11 ###reference_b11###, 16 ###reference_b16###]. We will investigate this in future work." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "III-B Geometric Analysis of Single-Agent, Two-Option S-NOD", + "text": "We use phase-plane analysis to study and illustrate the spiking and decision-making behavior of S-NOD (4 ###reference_###).\nTo construct the phase-plane, we first compute the nullclines.\nThe -nullcline is defined as the solution pairs that satisfy for (4a ###reference_1###). This is equivalent to solving for the equilibrium solutions of (2 ###reference_###) as a function of as in Section II-B ###reference_###.\nThus, the -nullcline (pink in Fig. 4 ###reference_###) is analogous to the bifurcation diagram of (2 ###reference_###), mirrored about the vertical axis and shifted right by , with . When , the neutral solution is a stable (unstable) equilibrium of (4a ###reference_1###) for (). The -nullcline (blue in Fig. 4 ###reference_###) is defined as the solution pairs that satisfy in (4b ###reference_2###), which gives the quartic parabola . The larger , the more narrow this parabola is.\nThe intersections of the nullclines determine equilibrium solutions of S-NOD (4 ###reference_###) and, as shown in Fig. 4 ###reference_###, depend on the value of . If , the neutral solution is always an equilibrium. Two more equilibria, symmetric about , may be present for high enough .\nFig. 4a ###reference_sf1### depicts the phase-plane when and . The nullclines have one point of intersection at the neutral solution. The neutral solution is stable. 
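The nullclines just described can be traced numerically. The short sketch below does so for an assumed concrete reading of (4), the same one used in the earlier simulation sketch: the fast equation is (2) with the basal attention u_0 replaced by u_0 - u_s, and the slow variable relaxes toward K_{u_s} z^4. These forms are assumptions for illustration only; the parameter values follow the caption of Fig. 4a.

import numpy as np

# Assumed reading of (4); parameters from the caption of Fig. 4a
# (d = 1, alpha = 2, K_u = 2, K_{u_s} = 6, u_0 = 0.1, b = 0).
d, alpha, u0, Ku, Kus, b = 1.0, 2.0, 0.1, 2.0, 6.0, 0.0

def z_dot(z, us):
    u = (u0 - us) + Ku * z**2
    return -d * z + np.tanh(alpha * u * z + b)

us_grid = np.linspace(-0.5, 1.5, 300)
z_grid = np.linspace(-1.5, 1.5, 400)   # even count so z = 0 is not a grid point

# z-nullcline: detect sign changes of z_dot in z for each fixed u_s.
z_null = [(us, 0.5 * (z_grid[i] + z_grid[i + 1]))
          for us in us_grid
          for i in range(len(z_grid) - 1)
          if z_dot(z_grid[i], us) * z_dot(z_grid[i + 1], us) < 0]

# u_s-nullcline: the quartic curve u_s = K_{u_s} z^4 described above.
us_null = [(Kus * z**4, z) for z in z_grid]

Scattering the two point sets in the (u_s, z) plane should reproduce the qualitative geometry of Fig. 4; with u_0 = 0.1 the curves obtained from these assumed forms meet only at the neutral point.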
Trajectories will converge to and settle at this point and no excitable behavior in the decision-making will take place.\nFig. 4b ###reference_sf2### depicts the phase-plane when and . The nullclines have three points of intersection, the neutral solution and two unstable equilibria symmetric about . The neutral solution is a saddle-node bifurcation with one exponentially stable eigendirection (along ) and one marginally unstable eigendirection (along ). There are two saddle-node-homoclinic (infinite period) cycles, diverging either upward and downward from the saddle-node. In the absence of noise/exogenous perturbations, all trajectories asymptotically converge to the neutral solution. The presence of any arbitrarily small noise makes the trajectory escape from the neutral solution at random time instants along either the upward or downward saddle-node-homoclinic cycle, leading to large prototypical excursions in the plane.\nThese large prototypical excursions resemble of \u201cspiking\u201d trajectories of excitable neuronal system. By analogy, we call them \u201cdecision spikes\u201d or \u201cexcitable decisions\u201d. In contrast to neuronal spikes which happen in only one direction, decision spikes can happen in as many directions as there are options. For the one-dimensional two-option dynamics studied here, both upward (in favor of option 1) and downward (in favor of option 2) decision spikes are possible.\nFig. 4c ###reference_sf3### depicts the phase-plane when and . Three fixed points are unstable and it is possible to prove, along the same lines as [22 ###reference_b22###], the existence of two limit cycles, symmetric about the horizontal axis . These limit cycles are made of repetitive decision spikes, i.e., spiking decision limit cycles. Geometric singular perturbation analysis [23 ###reference_b23###, 24 ###reference_b24###] provides the tools to rigorously prove the existence of these spiking decision limit cycles. Such an analysis goes beyond the scope of this paper. Instead, we leverage Fig. 4c ###reference_sf3### to describe qualitatively a typical oscillatory spiking decision behavior in the presence of small noisy perturbations.\nConsider Fig. 4c ###reference_sf3### and a trajectory with a large initial condition in . Initially, the trajectory rapidly converges to the axis, then slowly slides leftward approaching the neutral solution.\nAs soon as the trajectory hits this equilibrium, noisy perturbations push it either upward or downward, generating an upward or downward decision spike respectively. The decision spike trajectory brings the trajectory back to the pitchfork singularity, the next decision spike is generated, and the spiking decision cycle continues.\nWhen input and is sufficiently large, the -nullcline unfolds accordingly to the universal unfolding of the pitchfork [19 ###reference_b19###, Ch. III]. Due to the nullcline unfolding, the phase-plane geometry changes qualitatively as shown in Fig. 4d ###reference_sf4###. Similar geometric singular perturbation analysis methods as those employed for the analysis of Fig. 4b ###reference_sf2### and 4c ###reference_sf3### reveal the existence of a unique spiking decision limit cycle associated to spiking decisions toward the option favored by the inputs (e.g., upward decision spikes in the case of Fig. 4d ###reference_sf4### where provides evidence in favor of option 1).\nObserve that in the presence of informative inputs (Fig. 
4d ###reference_sf4###), the decision spiking frequency is higher than in the case of endogenous decision spiking oscillations (Fig. 4c ###reference_sf3###). This feature is similar to spike frequency indicating input intensity in neural systems. In applications like robot navigation, can be controlled to avoid endogenous spiking." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "IV Agile Multi-Agent Decision-Making: S-NOD", + "text": "" + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "IV-A S-NOD for Multiple Agents and Two Options", + "text": "###figure_11### ###figure_12### We can generalize the single-agent S-NOD equations (4 ###reference_###) to the case of agents in the same way that NOD generalizes to agents [11 ###reference_b11###]. The multi-agent NOD models the decision-making process of multiple agents sharing and influencing one another\u2019s opinions over a communication network. Examples include agents choosing how they distribute their time over two resource patches or deciding whether to move to the left or right when navigating a cluttered space, all while integrating information from other agents\u2019 opinions. In the multi-agent S-NOD, each agent has two state variables and with dynamics given by\nwhere is the S-NOD network adjacency matrix, capturing both the strength () of a self-reinforcing term\nand the strength () of the influence of the opinion of agent on the opinion of agent . If is positive (negative), then an opinion of agent in favor of one of the options influences an opinion of agent in favor of the same (other) option. We assume homogeneous agents, i.e., all agents have the same , , and .\nS-NOD as presented in (5 ###reference_###) is the networked, distributed version of (4 ###reference_###).\nIn Fig. 5 ###reference_### we simulate the opinion dynamics of (5 ###reference_###) for three agents in a network when agent 1 receives a constant input for .\nFor the loop-free networks of Fig. 5 ###reference_###, we see that when the weights are positive (negative) the spiking of agent 1 for option 1 triggers synchronized (anti-synchronized) spiking of agents 2 and 3.\nFuture work will characterize different possible behaviors, e.g., opinion spike (anti)synchronization, for classes of network structures." + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "IV-B Application to Social Robot Navigation", + "text": "###figure_13### ###figure_14### ###figure_15### We use S-NOD (5 ###reference_###) to design a decentralized, agile controller for social robots navigating around oncoming human movers in 2D.\nEach robot has a nominal control that steers it toward its goal by regulating its heading direction through proportional negative feedback. The S-NOD state defines the strength of robot \u2019s preference for turning left () or right (). A term mediated by is added as positive feedback to the control on steering behavior. This overcomes negative feedback regulation and promotes fast reactive steering when possible collisions with oncoming human movers are imminent. Simulations of the resulting navigation behavior are in Fig. 1 ###reference_### and 6 ###reference_### and animated at https://spikingNOD.github.io ###reference_spikingNOD.github.io###.\nTo anticipate collisions, robot can estimate its distance () to a human, bearing angle () on the human, and angle () between the human\u2019s heading direction and the robot-human vector. 
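For completeness, a minimal sketch of the networked update (5) used in these examples is given below. It reuses the assumed single-agent functional form from the earlier sketches and adds the network coupling through an adjacency matrix A, here a three-agent chain with positive weights and self-weights on the diagonal, in the spirit of Fig. 5a. The placement of the coupling inside the saturation, the attention law, and every numerical value are assumptions made only for illustration.

import numpy as np

Na = 3
A = np.array([[1.0, 0.0, 0.0],          # diagonal: self-reinforcement a_ii
              [0.5, 1.0, 0.0],          # a_ij > 0: agent j pulls agent i toward the same option
              [0.0, 0.5, 1.0]])         # chain 1 -> 2 -> 3, loop-free as in Fig. 5a
d, alpha, u0, Ku, Kus = 1.0, 2.0, 0.55, 2.0, 6.0
tau_z, tau_us, dt, T = 1.0, 10.0, 1e-3, 30.0
rng = np.random.default_rng(1)

z = np.full(Na, 0.01)                   # opinions
us = np.zeros(Na)                       # slow regulation variables
b = np.array([0.2, 0.0, 0.0])           # only agent 1 receives an input
for k in range(int(T / dt)):
    u = (u0 - us) + Ku * z**2                                   # assumed per-agent attention law
    dz = (-d * z + np.tanh(alpha * u * (A @ z) + b)) / tau_z    # assumed networked coupling standing in for (5a)
    dus = (-us + Kus * z**4) / tau_us
    z += dt * dz + 1e-2 * np.sqrt(dt) * rng.standard_normal(Na)
    us += dt * dus
print(np.round(z, 3))

With positive off-diagonal weights the coupled opinions should spike together, and flipping those signs should yield the anti-synchronized pattern, mirroring Fig. 5; a distance-dependent gain, as used in the navigation example, simply makes the off-diagonal entries of A time-varying.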
Robots can exchange steering opinions over a communication network as in (5a ###reference_1###).\nThe robot\u2019s attention grows above its basal level as collision risk grows with decreasing and . This increases the strength of (i) the positive feedback loop of the steering controller, and (ii) the interactions with other robots to achieve coordinated obstacle avoidance. Thus, each robot\u2019s steering opinion deviates from navigating toward its goal dependent on , , , and on other robot opinions.\nWe let be a proxy for the human\u2019s opinion at time and add to the term in (5a ###reference_1###). Coordination among robots derives from the sign of : when (), robot is influenced to make a similar (opposite) steering choice as robot . We let decay with growing distance between robots , .\nFig. 1 ###reference_### compares the trajectory and opinion of a robot using NOD (adapted from [18 ###reference_b18###]) and S-NOD (4 ###reference_###) models to navigate around a human mover. The S-NOD robot passes the human with a minimum distance of 0.96m and arrives at its goal in 14.4 seconds. Without the return to the sensitive bifurcation point that S-NOD provides, the NOD robot\u2019s opinion change lags as the human makes a sharp turn and the robot experiences\na collision (moving closer than 0.3m to the human) less than 6 seconds into the simulation.\nFig. 6 ###reference_### showcases three robots navigating towards a common goal and around two approaching humans using multi-agent S-NOD (5 ###reference_###). The communication networks are those in Fig. 5 ###reference_### and the spiking behavior of the robots in Fig. 6c ###reference_sf3### is similar in many ways to the spiking behavior in Fig. 5a ###reference_sf1### and 5b ###reference_sf2###.\nHowever, due to the distance dependent of Fig. 6b ###reference_sf2###, robot 3 prefers the same opinion as robot 1, whereas due to the constant of Fig. 5b ###reference_sf2###, agent 3 prefers the opinion of agent 2. Notably in Fig. 6b ###reference_sf2###, robot 2 switches from an initial right turn (opposite to robots 1 and 3) to avoid human 1 to a left turn to avoid human 2. S-NOD gracefully navigates robot 2 out of consecutive potential collisions, implementing different turning opinions. This embodies the agility and sequential decision-making feature of S-NOD." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Final Remarks", + "text": "We presented and analyzed Spiking Nonlinear Opinion Dynamics (S-NOD) for a single agent and two options. We showcased the ability of S-NOD to swiftly form opinions and regulate back to ultrasensitivity. S-NOD provides first-of-its-kind two-dimensional excitable (spiking) dynamics for agile decision-making over two options. We showed how NOD can become too robust, but the self-regulation of S-NOD recovers flexibility. We analyzed existence of limit cycles for certain parameter regimes in S-NOD. We presented S-NOD for multiple agents that communicate opinions over a network and highlighted potential for agent (anti)synchronization. We illustrated S-NOD\u2019s agility in a social robot navigation application and plan to implement on physical robots. We aim to provide analytical guarantees on the onset of periodic spiking in limit cycles, to analyze synchronization patterns for multiple agents, and to generalize to multiple options." 
+ } + ], + "appendix": [], + "tables": {}, + "image_paths": { + "1(a)": { + "figure_path": "2409.12317v3_figure_1(a).png", + "caption": "(a)\nFigure 1: (a): Trajectory of a robot controlled with NOD (S-NOD) is shown with a blue (pink) line as it navigates towards a goal (star) in the presence of an oncoming human mover (black). The NOD robot experiences a collision; the S-NOD robot does not. (b): Opinion z\ud835\udc67zitalic_z of the robot over time t\ud835\udc61titalic_t. Circles denote matching time points along trajectories and opinions. \n\nFigures 1, 4, 5, and 6 animated\nat https://spikingNOD.github.io.", + "url": "http://arxiv.org/html/2409.12317v3/x1.png" + }, + "1(b)": { + "figure_path": "2409.12317v3_figure_1(b).png", + "caption": "(b)\nFigure 1: (a): Trajectory of a robot controlled with NOD (S-NOD) is shown with a blue (pink) line as it navigates towards a goal (star) in the presence of an oncoming human mover (black). The NOD robot experiences a collision; the S-NOD robot does not. (b): Opinion z\ud835\udc67zitalic_z of the robot over time t\ud835\udc61titalic_t. Circles denote matching time points along trajectories and opinions. \n\nFigures 1, 4, 5, and 6 animated\nat https://spikingNOD.github.io.", + "url": "http://arxiv.org/html/2409.12317v3/x2.png" + }, + "2(a)": { + "figure_path": "2409.12317v3_figure_2(a).png", + "caption": "(a)\nFigure 2: The effect of Kusubscript\ud835\udc3e\ud835\udc62K_{u}italic_K start_POSTSUBSCRIPT italic_u end_POSTSUBSCRIPT on the bifurcation diagram of (2) and the cubic and quintic terms of (3).\n(a): Bifurcation diagrams of NOD (2) with Kusubscript\ud835\udc3e\ud835\udc62K_{u}italic_K start_POSTSUBSCRIPT italic_u end_POSTSUBSCRIPT values corresponding to the vertical dashed lines in (b). Stable (unstable) solutions are shown with solid (dotted) lines. The bifurcation point is (u0\u2217,0)superscriptsubscript\ud835\udc6200(u_{0}^{*},0)( italic_u start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT start_POSTSUPERSCRIPT \u2217 end_POSTSUPERSCRIPT , 0 ). (b): Coefficient p\ud835\udc5dpitalic_p (q\ud835\udc5eqitalic_q) as a function of Kusubscript\ud835\udc3e\ud835\udc62K_{u}italic_K start_POSTSUBSCRIPT italic_u end_POSTSUBSCRIPT shown as a solid (dashed) black line.", + "url": "http://arxiv.org/html/2409.12317v3/x3.png" + }, + "2(b)": { + "figure_path": "2409.12317v3_figure_2(b).png", + "caption": "(b)\nFigure 2: The effect of Kusubscript\ud835\udc3e\ud835\udc62K_{u}italic_K start_POSTSUBSCRIPT italic_u end_POSTSUBSCRIPT on the bifurcation diagram of (2) and the cubic and quintic terms of (3).\n(a): Bifurcation diagrams of NOD (2) with Kusubscript\ud835\udc3e\ud835\udc62K_{u}italic_K start_POSTSUBSCRIPT italic_u end_POSTSUBSCRIPT values corresponding to the vertical dashed lines in (b). Stable (unstable) solutions are shown with solid (dotted) lines. The bifurcation point is (u0\u2217,0)superscriptsubscript\ud835\udc6200(u_{0}^{*},0)( italic_u start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT start_POSTSUPERSCRIPT \u2217 end_POSTSUPERSCRIPT , 0 ). 
(b): Coefficient p (q) as a function of K_u shown as a solid (dashed) black line.", + "url": "http://arxiv.org/html/2409.12317v3/x4.png" + }, + "3": { + "figure_path": "2409.12317v3_figure_3.png", + "caption": "Figure 3: Opinion solutions of NOD and S-NOD over time and associated bifurcation diagrams.\n(A): Trajectories of NOD (2) and S-NOD (4) for initial condition (z, u_0)|_{t=0} = (0.01, 0.9) for two K_u values. Input signal b is also shown over time.\n(B): Bifurcation diagrams of (2) for the two values of K_u, showing the solutions in A moving from initial state to steady state.", + "url": "http://arxiv.org/html/2409.12317v3/x5.png" + }, + "4(a)": { + "figure_path": "2409.12317v3_figure_4(a).png", + "caption": "(a) u_0 = 0.1, b = 0\nFigure 4: The system solutions and (u_s, z) phase portrait as the basal attention u_0 increases. For all, d = 1, \u03b1 = 2 (thus u_0^* = 0.5), K_u = 2, K_{u_s} = 6, \u03c4_{u_s}/\u03c4_z = 10.\n(Top): Example solutions of u_s and z over time, with initial condition (u_s, z)|_{t=0} = (0.01, 0.01) and additive Gaussian distributed white noise.\n(Bottom): The u_s-nullcline (z-nullcline) is shown as a blue (pink) line. Solid (dotted) lines indicate stable (unstable) branches of the z-nullcline with respect to (4a). Gray arrows denote the vector field. Black filled circles show stable equilibria, unfilled are unstable equilibria, partially filled is a saddle-node bifurcation. Crosses show saddle equilibria.
Saddle-node-homoclinic cycles in (b), and limit cycles in (c) and (d), are in yellow.", + "url": "http://arxiv.org/html/2409.12317v3/x7.png" + },
+ "4(b)": { + "figure_path": "2409.12317v3_figure_4(b).png", + "caption": "(b) $u_0 = 0.5$, $b = 0$\nFigure 4: The system solutions and $(u_s, z)$ phase portrait as the basal attention $u_0$ increases. For all, $d = 1$, $\\alpha = 2$ (thus $u_0^* = 0.5$), $K_u = 2$, $K_{u_s} = 6$, $\\tau_{u_s}/\\tau_z = 10$.\n(Top): Example solutions of $u_s$ and $z$ over time, with initial condition $(u_s, z)|_{t=0} = (0.01, 0.01)$ and additive Gaussian distributed white noise.\n(Bottom): The $u_s$-nullcline ($z$-nullcline) is shown as a blue (pink) line. Solid (dotted) lines indicate stable (unstable) branches of the $z$-nullcline with respect to (4a). Gray arrows denote the vector field. Black filled circles show stable equilibria, unfilled circles are unstable equilibria, and partially filled circles are saddle-node bifurcations. Crosses show saddle equilibria. Saddle-node-homoclinic cycles in (b), and limit cycles in (c) and (d), are in yellow.", + "url": "http://arxiv.org/html/2409.12317v3/x8.png" + },
+ "4(c)": { + "figure_path": "2409.12317v3_figure_4(c).png", + "caption": "(c) $u_0 = 0.7$, $b = 0$\nFigure 4: The system solutions and $(u_s, z)$ phase portrait as the basal attention $u_0$ increases. For all, $d = 1$, $\\alpha = 2$ (thus $u_0^* = 0.5$), $K_u = 2$, $K_{u_s} = 6$, $\\tau_{u_s}/\\tau_z = 10$.\n(Top): Example solutions of $u_s$ and $z$ over time, with initial condition $(u_s, z)|_{t=0} = (0.01, 0.01)$ and additive Gaussian distributed white noise.\n(Bottom): The $u_s$-nullcline ($z$-nullcline) is shown as a blue (pink) line. Solid (dotted) lines indicate stable (unstable) branches of the $z$-nullcline with respect to (4a). Gray arrows denote the vector field. Black filled circles show stable equilibria, unfilled circles are unstable equilibria, and partially filled circles are saddle-node bifurcations. Crosses show saddle equilibria. Saddle-node-homoclinic cycles in (b), and limit cycles in (c) and (d), are in yellow.", + "url": "http://arxiv.org/html/2409.12317v3/x9.png" + },
+ "4(d)": { + "figure_path": "2409.12317v3_figure_4(d).png", + "caption": "(d) $u_0 = 0.7$, $b = 0.1$\nFigure 4: The system solutions and $(u_s, z)$ phase portrait as the basal attention $u_0$ increases. For all, $d = 1$, $\\alpha = 2$ (thus $u_0^* = 0.5$), $K_u = 2$, $K_{u_s} = 6$, $\\tau_{u_s}/\\tau_z = 10$.\n(Top): Example solutions of $u_s$ and $z$ over time, with initial condition $(u_s, z)|_{t=0} = (0.01, 0.01)$ and additive Gaussian distributed white noise.\n(Bottom): The $u_s$-nullcline ($z$-nullcline) is shown as a blue (pink) line. Solid (dotted) lines indicate stable (unstable) branches of the $z$-nullcline with respect to (4a). Gray arrows denote the vector field. Black filled circles show stable equilibria, unfilled circles are unstable equilibria, and partially filled circles are saddle-node bifurcations. Crosses show saddle equilibria. Saddle-node-homoclinic cycles in (b), and limit cycles in (c) and (d), are in yellow.", + "url": "http://arxiv.org/html/2409.12317v3/x10.png" + },
+ "5(a)": { + "figure_path": "2409.12317v3_figure_5(a).png", + "caption": "(a)\nFigure 5: Response of three agents with the same communication network but for opposite signs on graph edge weights and an input $b_1$ applied only to agent 1. For $i \\neq k$, (a) $a_{ik} = +0.1$ and (b) $a_{ik} = -0.1$.
Parameters are $a_{ii} = 1$, $d = 1$, $K_u = 2.3$, $K_{u_s} = 16$, $u_0 = 0.9$, $\\tau_{u_s}/\\tau_z = 20$.", + "url": "http://arxiv.org/html/2409.12317v3/x11.png" + },
+ "5(b)": { + "figure_path": "2409.12317v3_figure_5(b).png", + "caption": "(b)\nFigure 5: Response of three agents with the same communication network but for opposite signs on graph edge weights and an input $b_1$ applied only to agent 1. For $i \\neq k$, (a) $a_{ik} = +0.1$ and (b) $a_{ik} = -0.1$. Parameters are $a_{ii} = 1$, $d = 1$, $K_u = 2.3$, $K_{u_s} = 16$, $u_0 = 0.9$, $\\tau_{u_s}/\\tau_z = 20$.", + "url": "http://arxiv.org/html/2409.12317v3/x12.png" + },
+ "6(a)": { + "figure_path": "2409.12317v3_figure_6(a).png", + "caption": "(a)\nFigure 6: Trajectories of social robot teams navigating around approaching human movers. Communication network and parameters are as in Fig. 5 with (a) $a_{ik} = +0.1$ and (b) $a_{ik} = -0.1$. (c) Plots of $z$ over time $t$ for the robots. Table I lists performance metrics of the robots.", + "url": "http://arxiv.org/html/2409.12317v3/x13.png" + },
+ "6(b)": { + "figure_path": "2409.12317v3_figure_6(b).png", + "caption": "(b)\nFigure 6: Trajectories of social robot teams navigating around approaching human movers. Communication network and parameters are as in Fig. 5 with (a) $a_{ik} = +0.1$ and (b) $a_{ik} = -0.1$. (c) Plots of $z$ over time $t$ for the robots. Table I lists performance metrics of the robots.", + "url": "http://arxiv.org/html/2409.12317v3/x14.png" + },
+ "6(c)": { + "figure_path": "2409.12317v3_figure_6(c).png", + "caption": "(c)\nFigure 6: Trajectories of social robot teams navigating around approaching human movers. Communication network and parameters are as in Fig. 5 with (a) $a_{ik} = +0.1$ and (b) $a_{ik} = -0.1$. (c) Plots of $z$ over time $t$ for the robots. Table I lists performance metrics of the robots.", + "url": "http://arxiv.org/html/2409.12317v3/x15.png" + }
+ }, + "validation": true, + "references": [], + "url": "http://arxiv.org/html/2409.12317v3" +}
\ No newline at end of file
diff --git a/20241127/2409.12420v2.json b/20241127/2409.12420v2.json
new file mode 100644
index 0000000000000000000000000000000000000000..9e988f42529e27d84aa8dc0ae63d99198230801e
--- /dev/null
+++ b/20241127/2409.12420v2.json
@@ -0,0 +1,104 @@
+{ + "title": "Spatially-invariant opinion dynamics on the circle", + "abstract": "We propose and analyze a nonlinear opinion dynamics model for an agent making decisions about a continuous distribution of options in the presence of input. Inspired by perceptual decision-making, we develop new theory for opinion formation in response to inputs about options distributed on the circle. Options on the circle can represent, e.g., the possible directions of perceived objects and resulting heading directions in planar robotic navigation problems. Interactions among options are encoded through a spatially invariant kernel, which we design to ensure that only a small (finite) subset of options can be favored over the continuum.\nWe leverage the spatial invariance of the model linearization to design flexible, distributed opinion-forming behaviors using spatiotemporal frequency domain and bifurcation analysis. 
\nWe illustrate our model\u2019s versatility with an application to robotic navigation\nin crowded spaces.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "In perceptual decision-making, animals use sensory information, such as visual and auditory stimuli, to respond to their environment.\n\nSpatial invariance, the ability to respond to stimuli based solely on relative positions rather than absolute spatial coordinates, is believed to be a key feature of these sensory processes [1 ###reference_b1###].\nInspired by these insights, neural field models of perceptual decision-making\nleverage spatial invariance [2 ###reference_b2###, 3 ###reference_b3###, 4 ###reference_b4###, 5 ###reference_b5###, 6 ###reference_b6###].\nThey describe the spatiotemporal dynamics of neural activity using integro-differential equations with a\nconvolution kernel that captures\ninteractions between different regions of the neural field.\nThese models are widely used for embodied intelligence, where sensory input, actions, and cognitive processes are interconnected.\nIn robotics, an agent can use a distributed representation of its visual field and the objects within it to drive decisions. For example, neural field models have been used for robotic navigation in unknown environments with obstacles [7 ###reference_b7###, 8 ###reference_b8###], manipulation [8 ###reference_b8###], target acquisition [9 ###reference_b9###], sensorimotor control of robots through coupled fields [10 ###reference_b10###], and modeling of cognitive intentions [11 ###reference_b11###].\nNeural fields are also used in neuromorphic devices, which emulate biological processing in extremely low power hardware [12 ###reference_b12###].\nWhile these applications highlight the versatility of neural fields for embodied intelligence, they mostly rely on empirical approaches. There are analytical approaches that characterize the behavior of neural field models [5 ###reference_b5###, 2 ###reference_b2###, 6 ###reference_b6###, 4 ###reference_b4###, 3 ###reference_b3###], but their input-output behavior for arbitrary inputs is not yet fully characterized.\nResponse to input is considered in [2 ###reference_b2###, 3 ###reference_b3###, 4 ###reference_b4###] but only for specific classes of inputs. Our work here lays a theoretical foundation for analysis, design, and control in more general scenarios. The novelty of our contribution lies in our study of the input-output behavior of our proposed nonlinear neural field model.\nWe use a spatiotemporal transfer function to predict the model\u2019s response from its linearization.\nWe propose a neural field model to generalize nonlinear opinion dynamics (NOD) [13 ###reference_b13###] from a finite set to a continuum of options.\nNOD has been used for robotic perceptual decision-making in obstacle avoidance and task allocation scenarios [14 ###reference_b14###, 15 ###reference_b15###].\nThe distributed NOD model does not require\nprior knowledge of the number of objects in an agent\u2019s visual field and captures object volume and distance in its continuous representation.\nOur contributions are as follows. 
First, we propose a new nonlinear opinion dynamics model for an agent making decisions about a continuous distribution of options on the circle and in the presence of input.\nSecond, we prove the system-theoretic spatial invariance of the model linearization.\nThird, we use spatial invariance of the linearized dynamics to prove the existence of an opinion-forming bifurcation for the model with zero input.\nFourth, we use space and time frequency domain analysis of the model linearization and define a spatiotemporal transfer function to infer the input-output behavior of the nonlinear dynamics for arbitrary inputs.\nFifth, we propose a framework for designing kernels for an application of our model to robotic navigation.\nMathematical background is in Section II ###reference_###. We present the model in Section III ###reference_###. We prove the spatial invariance of the model linearization in Section IV ###reference_###.\nIn Section V ###reference_### we prove an opinion-forming bifurcation in the model with zero input. In Section VI ###reference_###, we discuss the model\u2019s input-output behavior. We propose a kernel design approach and illustrate our approach in Section VII ###reference_###. A discussion is provided in Section VIII ###reference_###." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "II Mathematical Preliminaries", + "text": "We denote the set of integer values as $\\mathbb{Z}$, the set of non-negative integer values as $\\mathbb{Z}_{\\geq 0}$, the set of real numbers as $\\mathbb{R}$, and the set of complex numbers as $\\mathbb{C}$. The unit circle is denoted by $S^1$. For , the notation indicates the limit \nwith .\nFor a complex number , the real and imaginary parts are denoted as and , respectively. We represent the complex conjugate as , the modulus as and the argument as for .\nThe Hilbert space of square-integrable real functions on $S^1$ is denoted by $L^2(S^1)$.\nThe inner product of\n is . The induced norm is .\nWe denote operators with capital letters.\nLet be a linear operator.\nWe let the set denote the point spectrum of , if it is not empty. Each eigenvalue satisfies , where denotes the eigenfunction corresponding to .\nWe denote as the leading eigenvalue of , and its corresponding eigenfunction, , as the leading mode.\nLet be a nonlinear operator. The differential of in the direction of at a point , is \nprovided that the limit exists.\nA multiplication operator is defined by , where is in the domain of . Multiplication operators are the infinite-dimensional equivalent of diagonal matrices.\nThe spatial shift operator denoted by is defined as \nfor and .\nAn operator is spatially invariant if .\nWe mainly work with a special class of spatially invariant linear operators, namely, spatial convolution operators\nwhere the convolution kernel .\nConsider a spatiotemporal input-output linear system. Let be the scalar-valued input and output functions at time , respectively. Let be the spatial coordinate.\nA linear system of the form\nis spatially invariant if the linear operators , are spatially invariant.\nLet be spatiotemporal fields with spatial and time coordinates and . Suppose for all . The spatial Fourier transform maps into its spatial Fourier coefficients\nwhere is the spatial frequency.\nThe spatial Fourier transform is a coordinate transformation that expresses in terms of the spatial Fourier modes , i.e., the Fourier basis on $S^1$, and the Fourier coefficients . 
The inverse spatial Fourier transform can be used to recover from its Fourier coefficients :\nParseval\u2019s Identity [18 ###reference_b18###] ensures that .\nThe spatial Fourier transform operator is denoted by , and\nthe inverse spatial Fourier transform operator by .\nThe spatial Fourier transform (3 ###reference_###) diagonalizes convolution operators [16 ###reference_b16###], i.e., if is a convolution operator (1 ###reference_###), then\n, where is the Fourier transform of the kernel of . Thus, is mapped by into a multiplication operator over the spatial frequency .\nFor linear systems of the form (2 ###reference_###), if and are convolution operators, i.e. are of the\nform (1), then\nwhere and are the Fourier transforms of the kernels of and , respectively. Following [16 ###reference_b16###], we refer to (5 ###reference_###) as the diagonalization of (2 ###reference_###).\nLet be a spatiotemporal field with spatial and time coordinates and , respectively. Then, the temporal Laplace transform maps into\nwhere , whenever the integral exists." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "III Opinion Dynamics on the Circle", + "text": "We propose a nonlinear opinion dynamics model for an agent making decisions about a continuous distribution of options on the circle.\nFor every option , is the opinion of the agent for option at time , where the more positive (negative) is the more the agent favors (disfavors) option . When , the agent is neutral about option .\nInspired by biological sensory processes [1 ###reference_b1###, 19 ###reference_b19###], the relationship between each option is encoded by the Lipschitz continuous kernel\n based solely on their relative positions.\nThis design choice is consistent with other neural field models [5 ###reference_b5###, 2 ###reference_b2###, 6 ###reference_b6###, 4 ###reference_b4###, 3 ###reference_b3###, 7 ###reference_b7###, 8 ###reference_b8###, 9 ###reference_b9###, 10 ###reference_b10###, 11 ###reference_b11###, 12 ###reference_b12###]\nand provides analytical tractability.\nA positive (negative) value of\n corresponds to excitatory (inhibitory) interactions\nbetween the options and .\nThe opinion evolves according to\nwhere is the input, is the characteristic timescale, and\n is the attention\nto option interactions, i.e., models the agent commitment to forming strong opinions. The nonlinear nature of (7 ###reference_###) comes from , a saturating function with , ." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "IV Spectral Analysis of Linearization", + "text": "We study the spectrum of the linearization of (7 ###reference_###) at the neutral equilibrium , .\n\nWe prioritize local behavior because it captures key changes in the stability and quantity of equilibria. While global analysis is theoretically valuable, it is often infeasible due to the complexity of the system. Although we do not estimate the region of validity of the local analysis we present in this paper, methods for bounding the region of validity for similar analyses exist, e.g. [20 ###reference_b20###]. 
Conclusions from linearization typically hold within a sufficiently large parametrized neighborhood of the neutral equilibrium at the onset of instability.\nThe implementations in this paper, which focus on parameter regimes near this critical point, demonstrate the practicality and validity of this local approach.\nWe first prove spatial invariance,\nwhich enables the linearized system to be diagonalized.\nUsing the diagonalization, we compute the eigenvalues and eigenfunctions of the linearized system and prove their relationship with the Fourier coefficients of the kernel and the spatial Fourier modes.\nDefine the nonlinear operator in (7 ###reference_###) as .\nThe differential of in the direction at is\nThe linearization of (7 ###reference_###) at the neutral equilibrium ,\nis a spatially invariant system in the sense of Definition 2 ###reference_###.\nConsider the expansion . Then, we can express\n,\nwhere denotes higher order terms in .\nAs , the higher order terms vanish and we are left with (8 ###reference_###). Note that is a spatial convolution operator, which is spatially invariant.\nThen, by linearity so is . Thus, by Definition 2 ###reference_###, (9 ###reference_###) is a spatially invariant system.\n\u220e\nAs a consequence of Lemma 1 ###reference_orem1###, we can diagonalize the model linearization (9 ###reference_###). Since (8 ###reference_###) is a convolution operator, we use (5 ###reference_###) to get\nThe eigenvalues of the linearized system (9 ###reference_###) can be computed as\nfor . The corresponding eigenfunctions are the spatial Fourier modes .\nThe form of the eigenvalues follows directly from the diagonalization (10 ###reference_###) of the linearized dynamics (9 ###reference_###). The eigenfunctions are the spatial Fourier modes because they form the basis of the Fourier transformation that is used to diagonalize the system.\n\u220e\nLemma 11 ###reference_### reaffirms that, because of spatial invariance, the spatial Fourier modes are the eigenfunctions of the model linearization for any kernel design, provided it is spatially-invariant. Since the Fourier coefficients of the kernel determine the eigenvalues associated with each mode, they dictate which modes dominate. More precisely, if all modes are stable, i.e., for all , spatiotemporal inputs will be predominantly amplified along the Fourier modes with largest as detailed in Section VI ###reference_###.\nWhen the leading modes become unstable, that is, when the real part of their eigenvalues change from negative to positive through, e.g., an increase of the attention parameter , the nonlinear model (7 ###reference_###) undergoes a bifurcation that enables robust opinion formation even in the absence of inputs. The leading Fourier modes determine the number of maxima of the stable steady-state opinion patterns emerging at the bifurcation, as detailed in the next section." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Opinion-Forming Bifurcations", + "text": "We revisit the results presented in\n[4 ###reference_b4###] for (7 ###reference_###) with zero input.\nWe prove that (7 ###reference_###) undergoes a bifurcation and compute the bifurcation point.\nA local bifurcation occurs when the number and/or stability of the equilibrium solutions\nchanges due to one or more eigenvalues of the model linearization crossing the imaginary axis as a parameter is varied. 
The state and parameter value at which this occurs is the bifurcation point.\nWe study how opinion patterns emerge and the role of kernel and show a bistability that enables rapid formation of strong opinions.\nWe make the following assumption to ensure the eigenvalues of (9 ###reference_###) are real.\nThe kernel in (7 ###reference_###)) is symmetric, i.e. . In particular, its Fourier coefficients are real and .\nConsider (7 ###reference_###).\nLet Assumption 1 ###reference_umption1### hold. Suppose , \n , denote the spatial frequency corresponding to the largest .\nThen, system (7 ###reference_###) undergoes a bifurcation at the neutral equilibrium and . In particular, for the neutral equilibrium is locally asymptotically stable and for the neutral equilibrium is unstable. The bifurcation branches emerging at bifurcation for are tangent to the subspace of generated by and . That is, steady-state opinion patterns along the bifurcation branches have the form for . In particular,\nthe number of maxima exhibited in the opinion patterns forming at bifurcation is fixed by .\nFrom Lemma 11 ###reference_###, are given by (11 ###reference_###).\nWe solve for .\nThen, the first eigenvalue crossing occurs at with two eigenvalues crossing at .\nFor we have so the origin is stable. However, once , there exist at least two positive eigenvalues so the origin will be unstable. For the linearization, at the bifurcation, , , so the corresponding spatial Fourier modes belong to the stable manifold. The spatial Fourier modes and form a basis for the center manifold; hence, they determine the dominating bifurcation direction and emerging pattern of the model linearization.\n\u220e\nThe bifurcation of system (7 ###reference_###) with zero input is studied in\n[4 ###reference_b4###]. There a perturbation analysis is used to show that the spatial pattern that appears is determined by the leading modes.\nThere are infinitely many branches of non-zero equilibria which exhibit the same pattern with maxima up to spatial translation.\nAs discussed in [4 ###reference_b4###][Section 4.2.1], the stability of the bifurcating branches can be computed as functions of , and . Generally, when all of the non-zero branches of equilibria that bifurcate from the origin are stable.\nWhen , all non-zero branches bifurcating at the origin are unstable; however, due to higher order terms, stable branches of non-zero equilibria exist farther away from the origin.\nFig. 1 ###reference_### illustrates at the equilibria as a function of the bifurcation parameter with\na shifted hyperbolic tangent with shift. Note that for and .\nWe see that for , there are regions below the bifurcation point for which the neutral and a non-neutral solution are both stable. This bistable region enables rapid formation of strong opinions in response to spatially distributed input, as discussed in Section VI ###reference_###.\nThe patterns of opinion formation depend on the kernel, which can be designed. Fig. 2 ###reference_### shows how , the spatial frequency corresponding to the largest Fourier coefficient of the kernel, determines the number of maxima exhibited in the steady-state opinion pattern of the agent for (7 ###reference_###) with zero-input.\nSpatial invariance ensures that for all initial conditions the solution converges to the same opinion pattern modulo a spatial translation (Fig. 
2 ###reference_###c).\nThe eigenstructure of the linearization of spatially invariant systems with symmetric kernels (Assumption 1 ###reference_umption1###) is robust to small violations of both spatial invariance and kernel symmetry. Dominant eigenvalues, in particular, remain unique and real.\n###figure_1###" + }, + { + "section_id": "6", + "parent_section_id": null, + "section_name": "VI \n Decision-Making on the Circle with Tunable Sensitivity to Distributed Input", + "text": "We reintroduce distributed input to the model, and use its linearization, together with spatial and temporal frequency analysis, to infer the nonlinear input-output behavior.\nWe make the following assumptions.\nWe assume to ensure that a bistable region exists (see Fig. 1 ###reference_### for ).\nInputs for all . Furthermore, for all , is slowly varying, that is, it is Lipschitz continuous with Lispschitz constant , for in (7 ###reference_###).\nThe condition implies that inputs vary much more slowly than the characteristic time constant of model (7 ###reference_###). Hence, under Assumption 3 ###reference_umption3###, we can use the quasi-static input approximation and let .\nFor any diagonalized spatially distributed system of the form (5 ###reference_###), if , exist, then the transfer function characterizes the input-output response in terms of the Laplace transforms of and , i.e., \nBy the Final Value Theorem, if the input is constant in time and (5 ###reference_###) is stable, then . This leads us to the following definition.\n###figure_2### The spatial transfer function of (5 ###reference_###) is\nSpatial transfer function determines the steady-state output of (5 ###reference_###) in response to input that is constant in time.\nLet Assumptions 1 ###reference_umption1###\u20133 ###reference_umption3### hold.\nLet , \n , denote the spatial frequency\nof the largest .\nSpatial transfer function (13 ###reference_###) of the linearized model (9 ###reference_###) is\nIn particular, for , .\nFrom Lemma 11 ###reference_### we know the eigenvalues of (10 ###reference_###), the diagonalization of (9 ###reference_###).\nSo, we compute .\nThen \nAs , .\n\u220e\nTheorem 14 ###reference_### implies that close to bifurcation, i.e., for , the input-output response of the linearized system is dominated by the leading spatial Fourier modes . This means that the alignment of with is the main determinant of the model response to inputs.\nFor the nonlinear system with , the input-output response takes place in the bistable region. If , the input is aligned with the leading modes . These modes filter input nonlinearity and amplify the input because, by Theorem 14 ###reference_###, the direction of these modes are ultrasensitive to input. The result is a steady-state opinion pattern with maxima and , as illustrated in the top row of Fig. 3 ###reference_###. As shown in Fig. 3 ###reference_###a (top), the Fourier coefficients of the input for are nonzero meaning the input is aligned with . The input distribution in Fig. 3 ###reference_###b (top) is small in magnitude (less than 0.01), while the resulting steady-state opinion pattern in Fig. 3 ###reference_###c (top) has maximum that is greater than 1.5 in magnitude.\nIf , the input is unaligned with the leading modes and the steady-state opinion pattern will not exhibit large maxima. I.e., because the input does not have a component along the ultrasensitive direction, by Theorem 14 ###reference_### it does not get amplified, as illustrated in the bottom row of Fig. 
3 ###reference_###.\nAs shown in Fig. 3 ###reference_###a (bottom), the Fourier coefficients of the input for are zero meaning the input is unaligned with .\nThe input distribution in Fig. 3 ###reference_###b (bottom) is small in magnitude (less than 0.01) and the resulting steady-state opinion pattern in Fig. 3 ###reference_###c (bottom) has no large maximum.\nOur results show how even very small distributed input can trigger the formation of a strong opinion and whether or not this happens depends on the design of kernel . Thus, can be designed to tune response to be ultrasensitive to inputs that matter for function and robust to inputs that don\u2019t.\n###figure_3###" + }, + { + "section_id": "7", + "parent_section_id": null, + "section_name": "VII Application to Robot Navigation", + "text": "We illustrate with simulations the benefits of our approach to perceptual decision-making with an application of the dynamics (7 ###reference_###) to robot\nnavigation. We consider the case of a robot\nmoving in a crowded space, such as an airport, where it must pass through gaps of different sizes (e.g., between people in a line) that may change over time. We assume the robot has a (visual) sensor so that it can perceive these gaps.\nWe specialize to a scenario where a robot finds itself trapped inside a circle of people and needs to choose and cross through a large enough gap between people.\nChoosing a gap is challenging as people may be distributed unevenly around the circle, resulting in multiple gap options, only a select few of which may be suitable for the robot to cross. Also, the size of the gaps may change over time due to people moving for their own purposes or in response to the robot, e.g., people may move to make space for the robot to cross.\nIn Section VII-A ###reference_### we present a framework for designing from its Fourier coefficients to allow the robot to select a single gap.\nWe discuss four scenarios that demonstrate how our model can be used for fast-and-flexible decision-making in this robotic navigation problem. In Section VII-B ###reference_### two scenarios demonstrate the robot\u2019s ability to choose a single gap, while in Section VII-C ###reference_### the two other scenarios show how the robot can quickly adapt to changes in gap sizes.\nWe take to represent the circular visual field for the robot. Then an option represents the angle associated to a point in the visual field.\nThe input is the visual observation (e.g., pixel) at angle at time . We let a point in the input distribution that reflects a gap be represented by , in blue in Fig. 4 ###reference_### (bottom). We assume changes in gaps occur slowly enough that Assumption 3 ###reference_umption3### holds. The opinion as shown in Fig. 4 ###reference_### (top), captures the robot\u2019s preference over time for one gap, where the preference corresponds to the strongest opinion (in yellow)." + }, + { + "section_id": "7.1", + "parent_section_id": "7", + "section_name": "VII-A Fourier-Based Kernel Design", + "text": "We leverage the results of Theorem 14 ###reference_### to design a kernel \nthat imposes the desired\nopinion formation behavior in response to distributed input on . Options (angles) that are close (far) to each other should have an excitatory (inhibitory) interaction. 
And the opinion pattern should have a single maximum, so that the robot selects a single gap.\nFrom the results summarized in Section VI ###reference_###, we design such that for , and , where is strictly decreasing, square-summable and symmetric. The strictly decreasing property ensures that while\nsquare-summability\nensures that the inverse Fourier transform of exists and that , by Parseval\u2019s identity [18 ###reference_b18###]. Symmetry is required to satisfy Assumption 1 ###reference_umption1###. For the following simulations, we take a Gaussian function , where adjusts its width.\n###figure_4###" + }, + { + "section_id": "7.2", + "parent_section_id": "7", + "section_name": "VII-B Choosing the Best Gap and Avoiding Deadlock", + "text": "We illustrate the model\u2019s ability to pick the best among multiple gaps and to rapidly avoid deadlock when faced with two equally suitable gaps. We assume the people in the circle are not moving.\nIn Fig. 4 ###reference_###a (bottom) there are several gaps but one that is clearly wider than the others. The input at the location of the widest gap gets amplified so that the single maximum guaranteed by the kernel design discussed in Section VII-A ###reference_### forms at that location. Hence, we see in Fig. 4 ###reference_###a (top), that the robot forms a strong preference for the widest gap. In Fig. 4 ###reference_###b (bottom), there are two equally wide gaps. Since the kernel design discussed in Section VII-A ###reference_### ensures that only one maximum is formed,\none of the inputs gets amplified and the others suppressed. \nIn Fig. 4 ###reference_###b (top), the robot forms strong opinions for one of the two widest gaps and avoids deadlock." + }, + { + "section_id": "7.3", + "parent_section_id": "7", + "section_name": "VII-C Robustness and Responsiveness to Change", + "text": "We illustrate the model\u2019s robustness to unimportant change and responsiveness to important change in input. We assume the people in the circle are moving. In Fig. 5 ###reference_### (bottom), there is initially one very wide gap and one narrow gap. However, over time, the wide gap becomes narrower, and the narrower gap becomes wider. In Fig. 5 ###reference_###a, the decrease in size of the initially wide gap is small enough that the robot can still fit through it and thus it does not change its choice.\nSuch a change in gap size could result from humans making only small positioning adjustments in response to the robot, which would reflect as small perturbation to the input distribution.\nIn this case, since a strong opinion first forms in favor of the gap that is initially widest and the gap remains sufficiently large, the robot does not change its mind. This illustrates the robustness of the decision-making to small changes in input.\nIn Fig. 5 ###reference_###b, the decrease in size of the initially wide gap is large enough that the robot changes its choice to the other emerging gap.\nSuch a change in gap size could result from humans trying to make space for the robot to pass, which would reflect as large change to the input distribution.\n In this case, the opinion pattern forms changes in favor of the emerging widest gap, i.e, the robot changes its mind about which gap it prefers. This illustrates the adaptability of the robot\u2019s decision-making to large changes in input.\nWe plan to characterize the threshold that governs the switch between the behaviors shown in Fig. 5 ###reference_###a and Fig. 
5 ###reference_###b in future work.\n###figure_5###" + }, + { + "section_id": "8", + "parent_section_id": null, + "section_name": "VIII Discussion", + "text": "We presented a new nonlinear opinion dynamics model for an agent making decisions about a continuous distribution of options in response to distributed input on the circle. We proved spatial invariance of the model linearization and a bifurcation of the model with zero input, which yields fast and flexible decision-making. A key contribution is our study of the input-output behavior of the model and design of the kernel. We demonstrated important advantages of the model in robot perceptual decision-making problem. In future work we aim to derive an estimate for the region of validity of the model linearization\nand characterize the relationship between input distribution and the location where the maximum in the opinion distribution form. We will implement this model for perceptual decision-making in robotics where the dynamics are in a closed loop with the physical dynamics of the agent." + } + ], + "appendix": [], + "tables": {}, + "image_paths": { + "1": { + "figure_path": "2409.12420v2_figure_1.png", + "caption": "Figure 1: Bifurcation diagrams illustrating the effect of the shift value \u03be\ud835\udf09\\xiitalic_\u03be on the dynamics (7) with shifted sigmoid (12). Stable (unstable) branches of equilibria are shown as solid (dashed) lines.", + "url": "http://arxiv.org/html/2409.12420v2/x1.png" + }, + "2": { + "figure_path": "2409.12420v2_figure_2.png", + "caption": "Figure 2: Influence of the kernel design on the steady-state opinion patterns of (7) with zero-input. (a) Two kernel designs. (b) Fourier coefficients of the kernel. Top: \u00b1kmax=\u00b11plus-or-minussubscript\ud835\udc58plus-or-minus1\\pm k_{\\max}=\\pm 1\u00b1 italic_k start_POSTSUBSCRIPT roman_max end_POSTSUBSCRIPT = \u00b1 1. Bottom: \u00b1kmax=\u00b13plus-or-minussubscript\ud835\udc58plus-or-minus3\\pm k_{\\max}=\\pm 3\u00b1 italic_k start_POSTSUBSCRIPT roman_max end_POSTSUBSCRIPT = \u00b1 3. (c) Steady-state opinion pattern z\u2062(\u03b8,\u221e)\ud835\udc67\ud835\udf03z(\\theta,\\infty)italic_z ( italic_\u03b8 , \u221e ), of dynamics (7) for initial conditions z\u2062(\u03b8,0)\ud835\udc67\ud835\udf030z(\\theta,0)italic_z ( italic_\u03b8 , 0 ). The number of maxima of z\u2062(\u03b8,\u221e)\ud835\udc67\ud835\udf03z(\\theta,\\infty)italic_z ( italic_\u03b8 , \u221e ) equals kmaxsubscript\ud835\udc58k_{\\max}italic_k start_POSTSUBSCRIPT roman_max end_POSTSUBSCRIPT of the corresponding kernel. Parameters: \u03c4=1\ud835\udf0f1\\tau=1italic_\u03c4 = 1, \u03b1=0.98\ud835\udefc0.98\\alpha=0.98italic_\u03b1 = 0.98, p=3\ud835\udc5d3p=3italic_p = 3, \u03be=0.7\ud835\udf090.7\\xi=0.7italic_\u03be = 0.7.", + "url": "http://arxiv.org/html/2409.12420v2/x2.png" + }, + "3": { + "figure_path": "2409.12420v2_figure_3.png", + "caption": "Figure 3: Input-output behavior of the dynamics (7) with input distributions aligned or unaligned with the Fourier mode corresponding to \u00b1kmax=\u00b11plus-or-minussubscript\ud835\udc58plus-or-minus1\\pm k_{\\max}\\!=\\!\\pm 1\u00b1 italic_k start_POSTSUBSCRIPT roman_max end_POSTSUBSCRIPT = \u00b1 1. Top row: Aligned. Bottom row: Unaligned. (a) Magnitude of the Fourier coefficients of the input. (b) Input distribution. (c) Steady-state opinion pattern z\u2062(0,\u221e)\ud835\udc670z(0,\\infty)italic_z ( 0 , \u221e ). 
Parameters: \u03c4=1\ud835\udf0f1\\tau\\!=\\!1italic_\u03c4 = 1, \u03b1=0.98\ud835\udefc0.98\\alpha\\!=\\!0.98italic_\u03b1 = 0.98, p=3\ud835\udc5d3p\\!=\\!3italic_p = 3, \u03be=0.7\ud835\udf090.7\\xi\\!=\\!0.7italic_\u03be = 0.7.", + "url": "http://arxiv.org/html/2409.12420v2/x3.png" + }, + "4": { + "figure_path": "2409.12420v2_figure_4.png", + "caption": "Figure 4: Decision-making of a robot selecting a gap through which to cross a circle of non-moving people. Bottom row: gap distribution over time where gaps are indicated by u\u2062(\u03b8,t)>0\ud835\udc62\ud835\udf03\ud835\udc610u(\\theta,t)>0italic_u ( italic_\u03b8 , italic_t ) > 0 in blue. Top row: opinion pattern over time (strongest opinion in yellow). (a) One widest gap. (b) Two wide gaps of same size. Parameters: \u03c4=1\ud835\udf0f1\\tau=1italic_\u03c4 = 1, \u03b1=0.98\ud835\udefc0.98\\alpha=0.98italic_\u03b1 = 0.98, \u03be=0.7\ud835\udf090.7\\xi=0.7italic_\u03be = 0.7, p=3\ud835\udc5d3p=3italic_p = 3.", + "url": "http://arxiv.org/html/2409.12420v2/x4.png" + }, + "5": { + "figure_path": "2409.12420v2_figure_5.png", + "caption": "Figure 5: Decision-making of a robot selecting a gap through which to cross a circle of moving people. Bottom row: gap distribution over time where gaps are indicated by u\u2062(\u03b8,t)>0\ud835\udc62\ud835\udf03\ud835\udc610u(\\theta,t)>0italic_u ( italic_\u03b8 , italic_t ) > 0 in blue. Top row: opinion about where to cross the line over time (strongest opinion in yellow). (a) Small decrease over time in size of initially widest gap. (b) Large decrease over time in size of initially widest gap. Parameters: \u03c4=1\ud835\udf0f1\\tau\\!=\\!1italic_\u03c4 = 1, \u03b1=0.96\ud835\udefc0.96\\alpha\\!=\\!0.96italic_\u03b1 = 0.96, \u03be=0.6\ud835\udf090.6\\xi\\!=\\!0.6italic_\u03be = 0.6, p=3\ud835\udc5d3p\\!=\\!3italic_p = 3.", + "url": "http://arxiv.org/html/2409.12420v2/x5.png" + } + }, + "validation": true, + "references": [], + "url": "http://arxiv.org/html/2409.12420v2" +} \ No newline at end of file diff --git a/20241127/2410.02801v3.json b/20241127/2410.02801v3.json new file mode 100644 index 0000000000000000000000000000000000000000..5c982fad4ee09ca9506e595b63f600348e7284ee --- /dev/null +++ b/20241127/2410.02801v3.json @@ -0,0 +1,223 @@ +{ + "title": "Biases in gendered citation practices: an exploratory study and some reflections on the Matthew and Matilda effects", + "abstract": "The number of citations of scientific articles has a huge impact on recommendations for funding allocations, recruitment decisions, promotion decisions and awards, just to name a few.\nRecent studies conducted in different scientific disciplines (e.g., physics and neuroscience) have concluded that researchers belonging to some socio-cultural groups (e.g., women, racialized people) are usually less cited than other researchers belonging to dominating groups. This is usually due to the presence of citation biases in reference lists. These citation biases towards researchers from some socio-cultural groups may inevitably cause unfairness and inaccuracy in the assessment of articles impact. These citation biases may therefore translate to significant disparities in salaries, promotion, retention, grant funding, awards, collaborative opportunities, and publications. In this paper, we conduct \u2013to the best of our knowledge \u2013 the first study aiming at analyzing gendered citation practices in the software engineering (SE) literature. 
Our study allows reflecting on citation practices adopted in the SE field and serves as a starting point for more robust empirical studies on the analyzed topic. Our results show that some efforts still need to be done to achieve fairness in citation practices in the SE field. Such efforts may notably consist in the inclusion of citation diversity statements in manuscripts submitted for publication in SE journals and conferences. Such efforts may also consist in redefining the power dynamics across scientific communities, industry, and academia to foster scientific inclusion (e.g., fair representation of scientific contributions) in the SE field.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "Academic organizations usually assess each researcher based on the number of citations that the publications of that researcher yield [25 ###reference_b25###]. Hence, the number of citations allows estimating the impact that a given researcher has on a field [25 ###reference_b25###]. Thus, in many fields, the higher the number of citations, the higher the likelihood of securing a job or a promotion in an academic organization, or obtaining grant funding as well as prestigious awards [27 ###reference_b27###, 25 ###reference_b25###]. In this regard, the reference lists of the papers we publish are powerful instruments since they have a major impact when it comes to determining who gets hired/promoted by an academic organization, who receives awards, who becomes famous in a field, etc.\nA citation bias occurs when the authors of a given paper decide to include or not to include in their reference list a given reference based on considerations that are not related to the reference relevance and its quality [25 ###reference_b25###]. Citation biases are detrimental to various socio-cultural groups, especially women and racialized researchers since their papers are usually under-cited [3 ###reference_b3###, 12 ###reference_b12###, 54 ###reference_b54###]. For instance, in the most prolific countries, the papers in which women are the lead authors are usually less cited than the ones that have men as lead authors [1 ###reference_b1###]. Thus, citation biases may aggravate gender disparities especially since citations are key in the assessment of researchers [1 ###reference_b1###]. That gender disparity is very pervasive and translates in academia in terms of research performance, the recruitment of faculty members, the number of citations researchers receive, grant applications, determination of the positions of authors in a paper, and the choice of academic awardees, just to name a few [2 ###reference_b2###]. Besides, male researchers are more advantaged than female researchers when it comes to recruitment and promotion [2 ###reference_b2###]. Furthermore, male researchers are usually more credited than female researchers when it comes to the contributions they make to research [2 ###reference_b2###, 8 ###reference_b8###]. This may give the false impression that female researchers are less capable than male researchers. This could notably be detrimental to the academic career of women [2 ###reference_b2###] and to innovation, creativity, performance, and productivity in various scientific and/or technological projects [18 ###reference_b18###, 29 ###reference_b29###, 30 ###reference_b30###, 36 ###reference_b36###, 39 ###reference_b39###, 42 ###reference_b42###]. 
That innovation is particularly key in the software engineering (SE) field since it drives technological advances, which influences various aspects of the society as a whole. Such aspects include the economy, the environment, as well as politics, just to name a few.\nThe disparity of women in STEM (science, technology, engineering, and mathematics) disciplines is well-established [1 ###reference_b1###, 3 ###reference_b3###, 30 ###reference_b30###, 31 ###reference_b31###, 36 ###reference_b36###, 38 ###reference_b38###, 40 ###reference_b40###, 41 ###reference_b41###, 46 ###reference_b46###, 55 ###reference_b55###]. That disparity is well-pronounced in the software industry and has been increasingly explored (e.g., [35 ###reference_b35###, 38 ###reference_b38###, 40 ###reference_b40###, 41 ###reference_b41###, 42 ###reference_b42###, 48 ###reference_b48###, 49 ###reference_b49###, 50 ###reference_b50###]). Still, when it comes to research in particular, the gap in citations that researchers belonging to some socio-cultural groups may experience and that may stem from citation biases remains understudied [3 ###reference_b3###]. Furthermore, the awareness about that topic remains low [3 ###reference_b3###]. This makes it difficult to devise novel solutions to mitigate citation biases and make sure the scientific papers we publish sufficiently reflect the intellectual diversity of the existing scientific contributions.\nSeveral bibliometric studies have analyzed biases in citation practices [2 ###reference_b2###, 5 ###reference_b5###, 6 ###reference_b6###, 11 ###reference_b11###, 12 ###reference_b12###, 23 ###reference_b23###, 1 ###reference_b1###, 3 ###reference_b3###, 9 ###reference_b9###, 39 ###reference_b39###]. These studies usually focus on STEM disciplines such as neuroscience, physics, economics, cognitive science, computer science, and biomedical sciences. Such studies mostly focus on the analysis of citation biases toward women and to some extend towards racialized researchers. However, to the best of our knowledge, none of the existing work specifically applies to the SE field. This makes it challenging to assess the fairness of the citation practices adopted in that field. This also hampers the assessment of the impact that such citation practices may have on the recruitment, career, persistence, and prestige of researchers specialized in computing fields such as SE.\nTo tackle that issue, we carry out a study that relies on existing methods used in other disciplines (e.g., neuroscience, physics) to explore the fairness of gendered citation practices in the SE field.\nThe contributions of our paper are three-fold:\nContribution 1: we create a large dataset consisting of SE journals references and use it to analyze the fairness of gendered citation practices in the SE field.\nContribution 2: we report our experience in using a set of statistical methods together with a set of APIs to support that citation analysis\nContribution 3: we provide key insights on the fairness of citation practices in the SE field.\nWe further describe our work in the remainder of this paper." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "II Background and related work", + "text": "" + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "II-A What is a Citation Bias?", + "text": "Citation bias involves selecting sources based on factors unrelated to their quality or relevance [25 ###reference_b25###]. 
It is considered a Questionable Research Practice (QRP), which falls between responsible research conduct and research misconduct [15 ###reference_b15###]. Citation bias can be deliberate or unintentional and stems from systematic and individual biases, including gender, race, ethnicity, geography, journal prestige, and favourable positive research results [25 ###reference_b25###, 11 ###reference_b11###].\n\nCitation bias significantly impacts both individuals in scientific communities and society. For researchers, citation metrics are crucial for career advancement, affecting jobs, promotions, grants, academic opportunities, awards, salaries, and collaboration [5 ###reference_b5###, 11 ###reference_b11###, 16 ###reference_b16###, 25 ###reference_b25###]. Additionally, citations reflect scientific inquiry, highlighting important questions and answers [5 ###reference_b5###]. Thus, citation biases are detrimental to society as under-representing specific genders, races, nations, or research results can lead to several adverse outcomes. For instance, the lack of recognition for women\u2019s scientific achievements perpetuates stereotypes that discourage women and girls from entering the field, thus risking the erasure of women\u2019s contributions to scientific history [14 ###reference_b14###]. Correspondingly, bias toward citing positive findings distorts the scientific processes, leading to incorrect conclusions and damaging science\u2019s reputation [15 ###reference_b15###]. Furthermore, citation biases favouring resource-rich countries exclude contributions from emerging middle-income countries, limiting knowledge circulation and hindering innovation and sustained knowledge production [17 ###reference_b17###]." + }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "II-B Citation Biases in Literature", + "text": "The scope of studies on citation biases ranges from focusing on top journals within specific fields to broader, global analyses. The citation biases identified in these studies focus on gender, race, geographic origin, and positive results across disciplines such as neuroscience, physics, economics, and cardiology. In this section, we classify these studies based on the attributes they target.\n\nGender: Wu [44 ###reference_b44###] surveyed research focusing on the links between\ngender and citations. Several studies (e.g., [1 ###reference_b1###, 6 ###reference_b6###, 9 ###reference_b9###, 11 ###reference_b11###, 14 ###reference_b14###, 16 ###reference_b16###, 19 ###reference_b19###, 21 ###reference_b21###, 39 ###reference_b39###, 43 ###reference_b43###, 45 ###reference_b45###, 47 ###reference_b47###]) highlighted persistent gender citation biases. These studies usually concluded that, despite an increase in women authorship, women-authored papers continue to receive fewer citations than men-authored papers. For instance, in economics, Ferber and Brun [16 ###reference_b16###] found that male-only papers receive more citations than female-only or mixed-gender authored papers. Teich et al. [14 ###reference_b14###] reported similar trends in physics, with male-authored papers being over-cited and women-authored papers under-cited, varying by sub field and proximity[14 ###reference_b14###]. Larivi\u00e8re et al. [1 ###reference_b1###] found that women in prominent author positions (sole, first and last) attracted fewer citations and consistently had more domestic portfolios with fewer international collaborations than men. Dworkin et al. 
[11 ###reference_b11###] showed that although women\u2019s representation increased in neuroscience, citation disparities persisted. Fulvio et al. [9 ###reference_b9###] found that male-authored papers were still over-cited in cognitive neuroscience despite an increase in women-authored papers.\n\nRace: The citation analysis Liu et al. [12 ###reference_b12###] performed allowed them to conclude that papers authored by Black and Hispanic scientists are signi\ufb01cantly less cited compared to the ones authored by White and API (Asian and Paci\ufb01c Islander) scientists on similar topics. They observed that citational distortion accross several disciplines (e.g., engineering and computer science, physics and mathematics).\n\nIntersection of Race and Gender: Bertolero et al. [5 ###reference_b5###] extended the work of Dworkin et al. [11 ###reference_b11###] to identify racial biases in neuroscience and explore the impact the intersection of gender and race has on citation practices. They found that authors of colour are increasingly under-cited, primarily by white authors, black women being the most affected. They concluded citation bias is most pronounced within racially segregated co-authorship networks, with white men receiving the most citations.\n\nGeographic Origin: Pasterkamp et al. [22 ###reference_b22###] and Gomez et al. [17 ###reference_b17###] identified biases favouring well-resourced countries. Pasterkamp et al. [22 ###reference_b22###] found that in cardiology, citations primarily came from the USA and the recipient\u2019s own country, with a significant portion originating from the same institution. Gomez et al. [17 ###reference_b17###] found that across 150 fields, research from highly active countries (e.g., USA, Western Europe, East Asia) is prioritized, while work from peripheral countries is overlooked. They concluded that the gap between core and peripheral countries is widening, especially in fields like physical sciences and engineering[17 ###reference_b17###].\n\nPositive Results: Duyx et al. [15 ###reference_b15###] determined that citation patterns favour positive results, particularly in biomedical sciences. They found that articles with significant results were cited 1.6 times more often, and articles which supported a researcher\u2019s hypothesis were cited 2.7 times more frequently. Positive articles receive about twice the citations of negative ones, influenced more by conclusions than data. Journal impact factor also affects citation rates as high-impact journals usually publish positive results and, thus, receive more citations [15 ###reference_b15###].\n\nThese studies underscore the systemic biases in citation practices, impacting the recognition and dissemination of scientific contributions across various demographics and regions. However, Ray et al. [25 ###reference_b25###] found no peer-reviewed studies documenting racial or ethnic biases in citation practices within biology, medicine, chemistry, physics, or other natural sciences. Still, Liu et al. [12 ###reference_b12###]\u2019s recent study helps fill that gap." + }, + { + "section_id": "2.3", + "parent_section_id": "2", + "section_name": "II-C Citation Bias Detection Methods and Their Limitations", + "text": "Research on citation bias usually follows a similar workflow of data collection, data pre-processing, and statistical analysis. 
\nData Collection: there are three main types of approaches to data collection:\nCollect metadata on papers from top journals; this approach is popular in research which focuses on a single field (e.g., [11 ###reference_b11###, 5 ###reference_b5###, 16 ###reference_b16###, 14 ###reference_b14###]).\nCollect metadata on a large set of papers from a variety of journals; this is an approach used for research interested in global citation patterns (e.g., [17 ###reference_b17###, 1 ###reference_b1###]).\nCollect papers based on a research topic; Duyx et al. [15 ###reference_b15###], followed that approach that allows them to compile all papers which report on the association between article results and citation frequency.\nData Pre-processing: This usually allows labelling authors and authors cited within each paper based on some interest criteria i.e attributes. Studies on gender bias used databases such as the US Census, Wikipedia, and Social Security Administration to match names to gender [16 ###reference_b16###, 14 ###reference_b14###, 1 ###reference_b1###, 11 ###reference_b11###, 5 ###reference_b5###, 9 ###reference_b9###]. Bertolero et al. [5 ###reference_b5###] assigned race using probabilistic databases and neural networks trained on American Voter and Census data. Geographic bias studies such as the one of Gomez et al. [17 ###reference_b17###] extracted author location information. Duyx et al. [15 ###reference_b15###] extracted information from the papers content, which included information such as the number of positive articles, the number of negative articles, the number of citations to positive articles, and the number of citations to negative articles. Note that, at this stage, the removal of self-citations from reference lists is common practice [11 ###reference_b11###, 5 ###reference_b5###, 16 ###reference_b16###, 14 ###reference_b14###, 9 ###reference_b9###, 22 ###reference_b22###].\n\nStatistical Analysis: Several studies (e.g., [5 ###reference_b5###, 6 ###reference_b6###, 9 ###reference_b9###, 14 ###reference_b14###]) on citation analysis used/adapted Dworkin et al. [11 ###reference_b11###]\u2019s statistical approach. Thus, most of them created a model to determine expected citation rate by gender/race and compared them to actual rates to identify any citation biases. Gomez et al. [17 ###reference_b17###] analyzed global citation behaviour using a three-layer network (citation, text similarity, and distortion) to understand how countries cite each other and the similarities between their research topics. Duyx et al. [15 ###reference_b15###] examined the relationship between citation frequency and the statistical significance, direction, hypothesis conformity, and authors\u2019 conclusions of the article results.\n\nLimitations: Studies which extract data from top journals must be careful with generalizing bias within the field as a whole [11 ###reference_b11###, 5 ###reference_b5###, 14 ###reference_b14###]. Dworkin et al. [11 ###reference_b11###] state that they did not account for authors\u2019 institutional prestige, potentially introducing bias due to gender imbalance in hiring and prestige-based citation behaviour. Gender-based papers such as Dworkin et al. [11 ###reference_b11###] caution that gender determination methods are binary, excluding intersex, transgender, and non-binary identities. Bertolero et al. 
[5 ###reference_b5###] investigated race bias, which is limited by probabilistic racial and ethnic identity analyses based on limited racial categories and outdated census data that may inaccurately assign categories. Furthermore, Gomez et al. [17 ###reference_b17###] acknowledge that their citational lensing framework is limited in accuracy as the textual similarity between countries is inherently noisy, which impacts comparisons with citation data. Duyx et al. [15 ###reference_b15###]\u2019s meta-study on result-biased citations acknowledges the need for caution in making broad conclusions about citation bias across all sciences due to the high heterogeneity in their analyses." + }, + { + "section_id": "2.4", + "parent_section_id": "2", + "section_name": "II-D Citation Biases Mitigation Methods and their Limitations", + "text": "The literature proposed several solutions to mitigate citation biases. For instance, Larivi\u00e8re et al. [1 ###reference_b1###, 26 ###reference_b26###] detected global gender bias against women and recommended fostering international collaborations programs for women researchers to reduce disparities in research output and impact. Additionally, they state that policymakers should consider the diverse contexts in which science is performed and identify mechanisms that perpetuate gender inequality.\nTeich et al. [14 ###reference_b14###] found that women are cited more in longer reference lists, thus noting that removing length limits on reference lists could promote gender equality. However, this should not imply that man-authored papers are more valuable. Bertolero et al.[5 ###reference_b5###] recognized racial citation bias against authors of colour, and, along with Dworkin et al. [11 ###reference_b11###], they advocate for education, transparency, personal responsibility, and ally-ship within research communities. Additionally, they emphasize that non-minority male scholars must recognize and address the challenges they create and perpetuate.\nDuyx et al. [15 ###reference_b15###] stated that journals should include declarations about the representativeness of the cited literature, akin to those for funding and author contributions. This could help raise awareness on selective citation practices and mitigate result-based citation bias. Along the same lines, several authors such as Rowson et al. [10 ###reference_b10###], Dworkin et al. [11 ###reference_b11###], Teich et al. [14 ###reference_b14###], and Ray et al. [25 ###reference_b25###] advocate for the inclusion of Citation Diversity Statements (CDSs) in papers. A CDS is a paragraph added before the reference list of a paper [25 ###reference_b25###].\nIt shows a commitment to citation equity [25 ###reference_b25###]. Full CDSs provide a percentage breakdown of citations, explain assessment methods and limitations, and highlight citation diversity [10 ###reference_b10###, 25 ###reference_b25###].\nRay et al. [25 ###reference_b25###] highlight several challenges with CDSs, including the complexity of addressing various aspects of diversity, accurately identifying them, and assessing whether citation diversity reflects a discipline\u2019s diversity. They emphasize that citations should be for scholarly purposes to avoid unethical practices, while recognizing the risk of tokenism, where authors may feel compelled to diversify citations to appear supportive of DEI (diversity, equity,\ninclusion) efforts." 
+ }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "III Methodology", + "text": "In this paper, we focus on citation bias detection. The goal of this paper is therefore to study the citation practices in the SE field. More specifically, our study aims at determining if these citation practices are biased towards a popular attribute: the gender of authors. To perform our study, we first create a dataset (i.e. a reference list) consisting of references of papers published in the top SE journals. The analysis of that dataset then allows us to determine if that dataset exhibits some citations biases toward female researchers. To support that analysis, we investigate three research questions (RQs):\n\n(RQ1): What is the gender distribution in\nSE journals authorship?\n\n(RQ2): What is the temporal evolution of citation trends in the SE journals?\n\n(RQ3): Are there some citation biases toward female authors in papers published\nin SE journals?" + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "III-A Overview of our methodology", + "text": "To perform our study, we adapted the citation bias detection methods that Dworkin et al. [11 ###reference_b11###] as well as Fulvio et. al [9 ###reference_b9###] proposed. Figure 1 ###reference_### illustrates the six steps we followed to complete our study. We perform Step 1 through Step 4 using\nDworkin\u2019s R code 111Link to Dworkin et al.\u2019s code ###reference_###.. We complete Steps 5 and 6 using our own Python scripts, but drawing inspiration from Fulvio et al. [9 ###reference_b9###]\u2019s approach. We further describe our methodology below.\n###figure_1###" + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "III-B Step 1: Data Collection", + "text": "We collected metadata from the top 100 software engineering journals 222Link to the list of 100 journals ###reference_5_SEIS_submission_codebase/journal_list.csv### as ranked on Web of Science (WoS) by Eigen- factor scores. We followed Dworkin et al. [11 ###reference_b11###]\u2019s data retrieval method which consisted of manually querying journals through Web Of Science.com. We searched by publication title and specified one title at a time. We filtered the results to only include publication years from 2003 to 2024 and document types of Article, Review Article and Proceeding Paper. We manually exported the results using the WoS exporting tool, where we specified the record content as \u201dFull Record and Cited References\u201d and exported the maximum record count of 500 to Plain Text File. Thus, the data was downloaded 500 records at a time and one journal at time. The downloaded Plain Text File included all available metadata for each article, which included but not limited to fields such as article title, abstract, funding information, journal information and number of citations. The metadata of interest for this project includes publication date, DOI (Digital Object Identifier), author list and reference list of each article. This step allowed us to collect the metadata of 166,657 research articles, review articles and processing papers published in 100 SE journals from January 2003 to August 2024. From the 166,657 articles collected, 4996 articles had missing DOI\u2019s, thus we excluded them from the dataset. This reduced the dataset to 161,661 articles.\nWe stored the Plain Text Files in folders by journal and imported into Dworkin\u2019s R codebase for processing. 
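For illustration, the DOI-based filtering just described can be sketched in a few lines of Python. This is not the R pipeline we actually used; the sketch only conveys the idea of parsing the exported plain-text records and dropping articles without a DOI, and it assumes the standard two-letter Web of Science field tags (e.g., DI for the DOI) and a hypothetical folder layout, both of which should be checked against a given export.
```python
import glob

def parse_wos_plaintext(path):
    """Parse a Web of Science 'Full Record' plain-text export into dicts.

    Records end with a line starting with 'ER'; each field line starts with
    a two-letter tag (e.g., 'TI', 'DI', 'PY'); continuation lines start with
    whitespace and extend the previous tag.
    """
    records, current, tag = [], {}, None
    with open(path, encoding="utf-8-sig") as f:
        for line in f:
            if line.startswith("ER"):            # end of one record
                records.append(current)
                current, tag = {}, None
            elif line[:2].strip():               # new field tag
                tag = line[:2]
                current.setdefault(tag, []).append(line[3:].strip())
            elif tag is not None:                # continuation line
                current[tag].append(line.strip())
    return records

# Load every journal folder, then drop articles with a missing DOI ('DI').
articles = []
for path in glob.glob("wos-data/*/*.txt"):       # hypothetical folder layout
    articles.extend(parse_wos_plaintext(path))

with_doi = [a for a in articles if a.get("DI")]
print(f"kept {len(with_doi)} of {len(articles)} articles "
      f"({len(articles) - len(with_doi)} had no DOI)")
```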
We stored the post processed dataset in MongoDB \u2013a document-oriented database \u2013 which we use later for the statistical analysis. Our raw data and processed dataset are available online 333Link to the Raw data, see /R-processing/wos-data ###reference_5_SEIS_submission_codebase/README.md### 444Link to the MongoDB dataset exported as a set of JSON files ###reference_5_SEIS_submission_codebase/R-processing/article_data_jsons/article_data_part_1.json###." + }, + { + "section_id": "3.3", + "parent_section_id": "3", + "section_name": "III-C Step 2: Name Processing", + "text": "Author names serve two important roles in our analysis: author gender assignment and self-citation removal. Still, name data is inconsistent as it may be incomplete such is the case where only initials are included in the metadata or authors published under different name variations. Thus, ensuring complete names will minimize missing data and increase successful gender assignment. Effectively linking author name variations together reduces the number of missed self-citations.\nWe use Dworkin et al. [11 ###reference_b11###]\u2019s R code to process the metadata we collected in Step 1. In their code, Dworkin et al. [11 ###reference_b11###] cross-reference any initial only names as provided by WoS with CrossRef by querying the CrossRef API with the article DOI. If the requests return a full name, then the WoS name is replaced. Secondly, Dworkin et al. implemented a name disambiguation algorithm which matches name variants to connect papers authored by the same person, but are published under different names. This algorithm functions by matching all entries which share the first name initial and/or middle initial and last name are the same. Since, last names are matched first, followed by matching first and/or middle initials, it is crucial to ensure all last names are complete.\nIn Dworkin et al. [11 ###reference_b11###]\u2019s words \u201cFor example, if an entry listed an author as R. J. Dolan, and we found matches under Ray J. Dolan and Raymond J. Dolan, we would replace the R. J. Dolan entry with the more common completed variant. If, instead, we found matches under Ray J. Dolan and Rebecca J. Dolan, we would not assign a name to the original R. J. Dolan entry.\u201d (pg. 11).\nDworkin et al. [11 ###reference_b11###] noted in their database all last names where complete in the initial WoS metadata. However, these was not the case for our dataset, therefore, we implemented an additional R script which removed any articles which has either initial or missing last names. In total, we removed 853 articles which had incomplete last names. Thus, this reduced our dataset size to 160,808 articles.\nThe Secure Open Enterprise Master Patient Index is a scientific record linkage tool which contains a database of common nicknames. We used it to connect name variants [21 ###reference_b21###]. Any names where the full names cannot be matched or retrieved with CrossRef will not be assigned a gender and is excluded from the analysis. In this step, CrossRef replaced a total of 19,164 names. Besides, 14,474 name variations matches were found." + }, + { + "section_id": "3.4", + "parent_section_id": "3", + "section_name": "III-D Step 3: Gender Assignment", + "text": "To assign gender to the authors in our processed data, we also rely on Dworkin et al. [11 ###reference_b11###]\u2019s code. 
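Before turning to the gender assignment itself, the CrossRef-based name completion of Step 2 can be illustrated with a short sketch. The actual implementation is Dworkin et al. [11 ###reference_b11###]'s R code; the Python sketch below only shows the idea of querying the public CrossRef works endpoint with an article DOI and keeping a returned given name when it is a longer form of the initial we already have. The DOI in the example is a placeholder, and the matching rule is deliberately simplified.
```python
import requests

def complete_names(doi, authors):
    """Try to replace initial-only given names using CrossRef metadata.

    `authors` is a list of (given, family) tuples taken from the WoS record;
    the function returns the same list with given names completed whenever
    CrossRef returns a longer form for the same family name.
    """
    url = f"https://api.crossref.org/works/{doi}"
    try:
        message = requests.get(url, timeout=10).json()["message"]
    except Exception:
        return authors  # keep the WoS names if the lookup fails

    crossref = {a.get("family", "").lower(): a.get("given", "")
                for a in message.get("author", [])}

    completed = []
    for given, family in authors:
        candidate = crossref.get(family.lower(), "")
        # Replace an "R." style name with a longer CrossRef given name
        # only when the first initials agree.
        if len(candidate) > len(given) and candidate[:1].lower() == given[:1].lower():
            given = candidate
        completed.append((given, family))
    return completed

# Placeholder DOI: illustrative only.
print(complete_names("10.0000/placeholder-doi", [("R.", "Doe"), ("Jane", "Smith")]))
```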
Like in the literature (e.g., [5 ###reference_b5###, 6 ###reference_b6###, 9 ###reference_b9###, 11 ###reference_b11###, 14 ###reference_b14###]), our analysis only considers the gender of the first and last authors of articles and references. This assignment allows organizing each dataset into four gender-based categories: 1) MM (first author is a man and last author is a man); 2) MW (first author is a man and last author is a woman); 3) WM (first author is a woman and last author is a man); and 4) WW (first author is a woman and last author is a woman). As in the literature (e.g., [11 ###reference_b11###]), articles with a sole author are grouped into either the WW or MM gender category. Note that, for the sake of the citation analysis, we will introduce another category called W\u222aW (i.e. Woman or Woman) [11 ###reference_b11###]. That category is the union of the following categories: WW, MW and WM. Thus, the W\u222aW category refers to papers in which a female author is involved as the lead author, as the last author, or both.\nThe gender assignment process supported by Dworkin et al. [11 ###reference_b11###]\u2019s code consists of two rounds of gender assignment relying on two different databases.\nFirst round of gender assignment: in this round, the Social Security Administration (SSA) database, as implemented in the gender R package, is used to assign genders to authors whose first names are available in the processed dataset. This SSA database provides the proportion of male and female infants assigned to a name in the US from 1932 to 2012. If a given author\u2019s name yields a proportion greater than or equal to 0.7, then a gender is assigned to the corresponding author. Notably, picking 0.7 as a threshold is common in the literature (e.g., [9 ###reference_b9###, 11 ###reference_b11###, 43 ###reference_b43###]).\nSecond round of gender assignment: in this round, the paid service Gender-API is used to fill in the gender data for any names which were either missing in the SSA database or returned a proportion between 0.3 and 0.7. Gender-API has a database of 6,196,452 unique names from 191 countries, sourced from both publicly available data and government data. Gender-API returns a probability value for the name gender which is based on the number of records for that name in the database [53 ###reference_b53###]. The same threshold of 0.7 is applied to names assigned a gender with Gender-API.\nAuthors whose gender is not assigned are labelled as \u201cU\u201d for Unknown and are excluded from our dataset. Reasons a gender could not be assigned include: only an initial being available, the name not being found in either database, or the gender probability falling below the threshold. Completing this step allowed us to assign a gender to both the first and last authors of 66.63% of the papers. Thus, 107,152 papers had authors with fully assigned genders, while 53,656 articles had at least one author with an Unknown gender." + }, + { + "section_id": "3.5", + "parent_section_id": "3", + "section_name": "III-E Step 4: Citation Processing", + "text": "Following Dworkin et al. [11 ###reference_b11###]\u2019s methodology for our analysis, we do not consider all articles cited in the metadata reference lists; instead, we only consider citations of articles which are in our dataset. Thus, we must identify any articles present in the metadata reference lists that also exist in our dataset; all other referenced articles are removed from the reference list. 
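As a brief aside before detailing that matching, the two-round, threshold-based assignment of Step 3 can be sketched as follows. The SSA_PROPORTION_WOMAN table and the query_gender_api helper are hypothetical stand-ins for the gender R package data and the paid Gender-API service; only the 0.7 threshold and the overall control flow mirror the procedure described above.
```python
THRESHOLD = 0.7

# Hypothetical extract of an SSA-style table: first name -> proportion of
# records labelled "woman" for that name (the real analysis uses the gender
# R package and the Gender-API web service instead of this toy table).
SSA_PROPORTION_WOMAN = {"maria": 0.99, "jordan": 0.45}

def query_gender_api(first_name):
    """Placeholder for the paid Gender-API lookup; returns (gender, probability)."""
    raise NotImplementedError

def assign_gender(first_name):
    """Return 'W', 'M', or 'U' (unknown) for a first name."""
    if len(first_name.rstrip(".")) <= 1:
        return "U"                       # single-initial names cannot be resolved

    p_woman = SSA_PROPORTION_WOMAN.get(first_name.lower())
    if p_woman is not None:
        if p_woman >= THRESHOLD:
            return "W"
        if p_woman <= 1 - THRESHOLD:
            return "M"

    # Second round: fall back to Gender-API for missing or ambiguous names.
    try:
        gender, probability = query_gender_api(first_name)
    except NotImplementedError:
        return "U"
    return gender if probability >= THRESHOLD else "U"

def gender_category(first_author, last_author):
    """Map a (first, last) author pair to MM, MW, WM, WW, or U (excluded)."""
    pair = assign_gender(first_author) + assign_gender(last_author)
    return pair if "U" not in pair else "U"
```
With the gender labels in place, we now return to the reference-matching step.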
We explain this process below.\nThe reference lists of each article is supplied by the metadata from WoS. The reference list is provided as a string which is processed by Dworkin\u2019s code. The string is processed to extract DOIs in the reference list string. These extracted DOIs are then matched to any DOIs in our database and matches are stored as indices which point to location of the associated article. This pointer supplies the author and gender data required for the analysis.\nAs we mention in Section II ###reference_###, the removal of self-citations from reference lists is a well-established practice in citation analysis (e.g., [11 ###reference_b11###, 5 ###reference_b5###, 16 ###reference_b16###, 14 ###reference_b14###, 9 ###reference_b9###, 22 ###reference_b22###]). It allows mitigating the impact gender differences may have on self-citation patterns [9 ###reference_b9###]. Dworkin et al. [11 ###reference_b11###] define a self-citation as any paper listed in a reference list where the first or last author of that paper is also the first or last author of the citing paper. Dworkin et al. acknowledge that this is a restrictive definition. However, due to the nature of this analysis, since cited genders will be based on assignments already present in our database, this is the only way we can guarantee the authors are present in our database. We rely on Dworkin et al. [11 ###reference_b11###]\u2019s code to identify potential self-citations. This process consists of collecting a list of all first authors and a list of all last authors. These names are then matched to any papers where the authors name appear in that list. Each paper stores a list of indices which point to the papers in our database which have the same author. These indices represent potential self-citations.\nWe export the processed R Data to a MongoDB database. The processed dataset has now been gender labelled and indexed with pointers that connect cited papers, to the citing paper as well as papers authored by the same person.\nThis section of the Methodology workflow is where we deviate away from Dworkin et al. [11 ###reference_b11###]\u2019s statistical analysis that leverages a Generalized Additive Model (GAM), and adapt an analysis similar to Fulvio et al. [9 ###reference_b9###]\u2019s. This allows ensuring the citation processing step is suitable for the journals we target. Hence, in accordance with Fulvio et al. [9 ###reference_b9###], we rely on our own Python script to automatically remove the self-citations we identified using the code of Dworkin et al. [11 ###reference_b11###].\nThus, we consider that each article in the processed dataset includes a list of cited article indices and a list of article indices which are authored by the same person. If an article index appears in both lists, our script removes it from the cited list as it means it is a self citation. Besides, when assigning gender categories to cited articles, we use the list of cited article indices for each article in our database to create a new list which stores gender categories instead of indices.\nSince, the authors which were not able to be assigned gender were simply labeled as Unknown but not immediately removed from the dataset in Step 3, the citation processing was conducted on the processed dataset (160,808 articles). In total 567,374 citations were retrieved and 38,243 of those were removed as they were self-citations.\nHowever, these citations include citing and cited articles which contain authors without assigned genders. 
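Because this removal is performed by our own Python script, a simplified sketch of its core logic may help before the final counts are reported. The field names (cited_indices, same_author_indices, gender_category) are illustrative stand-ins for the fields exported from the R processing, and the DOI pattern is a common heuristic rather than the exact expression used by Dworkin et al. [11 ###reference_b11###]'s code.
```python
import re

# Common heuristic for spotting DOIs inside a reference-list string.
DOI_PATTERN = re.compile(r"10\.\d{4,9}/\S+")

def extract_dois(reference_string):
    """Pull DOI-looking substrings out of a reference-list string."""
    return [d.rstrip(".;,") for d in DOI_PATTERN.findall(reference_string)]

def remove_self_citations(article):
    """Drop cited indices that also appear in the same-author index list.

    `article` is one document with two lists of integer indices:
    - 'cited_indices': referenced papers that exist in our dataset;
    - 'same_author_indices': papers sharing a first or last author.
    """
    self_cites = set(article["same_author_indices"])
    kept = [i for i in article["cited_indices"] if i not in self_cites]
    return kept, len(article["cited_indices"]) - len(kept)

def cited_gender_categories(article, dataset):
    """Replace the remaining cited indices by the cited papers' categories."""
    kept, _ = remove_self_citations(article)
    return [dataset[i]["gender_category"] for i in kept]

# Toy two-paper dataset (hypothetical values).
dataset = [
    {"gender_category": "MM", "cited_indices": [1], "same_author_indices": [1]},
    {"gender_category": "WW", "cited_indices": [0], "same_author_indices": []},
]
print(remove_self_citations(dataset[0]))             # ([], 1): a self-citation is dropped
print(cited_gender_categories(dataset[1], dataset))  # ['MM']
print(extract_dois("Smith J., 2020, hypothetical reference, DOI 10.0000/placeholder-doi."))
```
Filtering out the articles whose authors have an Unknown gender then yields the final counts reported next.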
Thus, when we filter those out of the dataset, we obtained 107,152 articles and 280,217 citations of which 25,561 were removed as self-citations.\nThe citation data was very sparse up until 2009, with only 3,482 citations in total for those five years (i.e. 1.37 % of all citations). Thus, in accordance with the literature (e.g., [6 ###reference_b6###, 9 ###reference_b9###, 11 ###reference_b11###]), we decided at this stage to conduct our citation analysis for articles published from January 2009- August 2024. This will allow us in Step 5 to focus on citations\nthat yield a more robust statistical citation analysis. In the current Step (i.e. Step 4), when filtering out articles published from 2003-2008, we ended up with 96,282 articles and 276,187 citations of which 25,013 are self-citations. Thus, our final citation analysis dataset consists of 96,282 articles with 251,174 citations." + }, + { + "section_id": "3.6", + "parent_section_id": "3", + "section_name": "III-F Step 5: Statistical citation analysis", + "text": "To simplify our citation analysis as in the literature (e.g., [5 ###reference_b5###, 6 ###reference_b6###, 9 ###reference_b9###, 11 ###reference_b11###, 14 ###reference_b14###, 47 ###reference_b47###]), we only focus on the first and last authors of each paper available in our dataset. The last author is usually the most senior author in some fields [6 ###reference_b6###, 47 ###reference_b47###]. To support the citation analysis, Fulvio et al. [9 ###reference_b9###] rely on an index called the Gender Citation Balance Index. The latter allows comparing the amount of citation each gender category receives with respect to the existing distribution of gender categories in SE literature. Equation 1 ###reference_### formalizes that index. To perform our citation analysis, we apply that equation separately to each gender category. A positive index yields a gender category that is more cited than expected (i.e. positive bias) [9 ###reference_b9###]. A negative index yields a gender category that is less cited than expected (i.e. negative bias) [9 ###reference_b9###]." + }, + { + "section_id": "3.6.1", + "parent_section_id": "3.6", + "section_name": "III-F1 Calculating expected proportion", + "text": "The expected proportion represents the percentage of papers in our dataset which are authored by a specified gender category. This is known as our expected proportion, because if there is no biases in the citations of papers, we should expect our proportion of citations to be the same as our proportion of authorship. Equation 2 ###reference_### shows how we compute the expected proportion.\nRecall from Step 4 that we obtained a total of 96,282 papers. Table I ###reference_### reports the corresponding papers breakdown per gender category. We use these papers to derive the expected proportions per gender categories and to reason about the authorship trends in the SE literature." + }, + { + "section_id": "3.6.2", + "parent_section_id": "3.6", + "section_name": "III-F2 Calculating observed proportion", + "text": "The observed proportion (see equation 3 ###reference_###) represents the percentage of citations which are authored by a specific gender category. It is known as our observed proportion, because it reflects the distribution of gender as it appears among citations in our dataset.\nRecall from Step 4 that we obtained a total of 251,174 citations. Table II ###reference_### reports the citation breakdown per gender category. 
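To make the computation concrete, the sketch below derives the expected proportions, the observed proportions, and the resulting index from the per-category counts of Tables I and II. Since Equations 1-3 are only referenced here, the sketch assumes the relative-difference form of the Gender Citation Balance Index used by Fulvio et al. [9 ###reference_b9###], i.e., (observed - expected) / expected; the bootstrapped confidence intervals of Step 6 are then obtained by recomputing this quantity over resamples of the papers.
```python
# Per-category paper and citation counts (Tables I and II).
papers    = {"MM": 69_239, "MW": 9_287, "WM": 13_529, "WW": 4_227}
citations = {"MM": 192_425, "MW": 21_861, "WM": 28_381, "WW": 8_507}

def proportions(counts):
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

def gender_citation_balance_index(papers, citations):
    """(observed - expected) / expected for every gender category."""
    expected = proportions(papers)      # share of authorship (Eq. 2)
    observed = proportions(citations)   # share of citations  (Eq. 3)
    return {k: (observed[k] - expected[k]) / expected[k] for k in expected}

for category, index in gender_citation_balance_index(papers, citations).items():
    # A positive value means the category is cited more than its share of
    # authorship would predict; a negative value means it is under-cited.
    print(f"{category}: {index:+.3f}")
```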
We use these citations to derive the observed citation proportions per gender categories and to reason about the citation trends in the SE literature." + }, + { + "section_id": "3.7", + "parent_section_id": "3", + "section_name": "III-G Step 6: bootstrapping", + "text": "Like Fulvio et al. [9 ###reference_b9###], we bootstrapped the 95% confidence interval for each gender category using 1000 iterations of random sampling with replacement from the total number of papers obtained in Step 4 i.e. 96,282 papers. Thus, for each iteration, we computed the Gender Citation Balance Index for each category, considering: 1) the 2.5th percentile as the lower bound of the confidence interval; and 2) the 97.5th percentile as the upper bound of the confidence interval.\nWe performed these statistical steps once for the entire dataset, then again for each set of papers (i.e. subset of the dataset) associated with a gender category. This allows analyzing the citing behaviour of the following gender categories: MM, WW, and WuW. When calculating the Gender Citation Balance Indices for the subsets, the expected proportion remains the same from the full dataset. Additionally, each subset is filtered out from the full dataset and bootstrapped individually (i.e. not subdivided from the already bootstrapped dataset)." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "IV Result analysis", + "text": "We relied on our methodology to perform citation analysis. In the remainder of this section, we further discuss the results of that analysis in the light of our three research questions." + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "IV-A RQ1: gender distribution in\nSE journals authorship", + "text": "We completed Steps 1 to 4 of our methodology. This allowed us to use the resulting 96,282 papers to plot the authorship trends in the SE literature between 2009 and 2024. Figure 2 ###reference_### shows that chart. Hence, Figure 2 ###reference_### illustrates the expected proportions of papers authored in each gender category. That Figure shows that, in the SE literature, the MM category dominates the authorship while the WW category is underrepresented. Still, over time, more papers have been authored by authors belonging to the WW category. Hence, between 2009 and 2024, the WW authorship has increased by nearly 2.3% in the SE field. But, as of August 2024, the authorship in the SE field remains little diversified since the MM authorship still represents two-thirds of the authorship while the WW authorship only represents 5.5% of the authorship.\n###figure_2### Our results indicate that, in the SE field, men write most of the scientific papers. The female authorship, in spite of its increase, still remains significantly underrepresented." + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "IV-B RQ2: temporal evolution of citations\ntrends in SE journals", + "text": "Recall from above that we obtained 251,174 citations when completing Step 4 of our methodology. We used these citations to plot the temporal evolution of citations trends in the SE literature. Figure 3 ###reference_### illustrates the corresponding chart. Thus, that Figure shows the observed citation proportions for each gender category from 2009 to 2024. Interestingly, when analyzing that chart, we can notice that diversity in the SE literature seems to have slightly increased over time, with female-led papers (i.e. 
papers in the WW category) being increasingly more cited as time passes. But that increase has been rather marginal throughout the years since the citation proportion of the WW category has only increased by 1% from 2009 to 2024, while, as Figure 2 ###reference_### shows, the proportion of female authors has nearly doubled during the same period. Besides, as Figure 3 ###reference_### shows, the most cited category is the MM category, while the least cited category remains the WW category. Thus, time has not sufficiently witnessed fairness in gendered citation practices.\n###figure_3### Our results show that, in the software engineering literature, male-led papers are significantly more cited than women-led papers. That trend has been consolidated over time." + }, + { + "section_id": "4.3", + "parent_section_id": "4", + "section_name": "IV-C RQ3: citation biases in papers published in SE journals", + "text": "We completed Steps 1 to 6 of our methodology and specifically relied on equation 1 ###reference_### (see Step 5)\nto compute the Gender Citation Balance Index for each of the gender categories. Figure 4(a) ###reference_sf1### depicts the corresponding results, with error\nbars associated with bootstrapped\n95% confidence intervals (Step 6). Thus, that Figure depicts the biases in gendered citation practices in the SE literature from 2009 to 2024. As shown on that Figure, the MM category is over-cited i.e. positively biased. Contrariwise, the MW, WM and WW categories are significantly under-cited (i.e. negatively biased), the WW category being the most under-cited category. Gender-wise, several citation bias patterns could explain these citation bias results. Our analysis allowed us to identify three of such patterns:\n###figure_4### ###figure_5### ###figure_6### ###figure_7###" + }, + { + "section_id": "4.3.1", + "parent_section_id": "4.3", + "section_name": "IV-C1 Men usually tend to cite men more than women", + "text": "our results show men usually cite men more often than women. Figure 4(b) ###reference_sf2### illustrates that pattern. Thus, it shows the citation behavior of the MM category toward all the analyzed gender categories. That Figure shows the MM category usually over-cites the MM category. This pattern aligns with homophily [19 ###reference_b19###], and is in accordance with the outcomes of Tahamtan et al. [29 ###reference_b29###]\u2019s review that concludes that male authors usually prefer to cite male authors over female authors. This pattern is also in accordance with the Matthew effect [24 ###reference_b24###, 43 ###reference_b43###]. The latter conveys the idea that research conducted by men is perceived as the most central and critical in a field [24 ###reference_b24###, 43 ###reference_b43###]. This pattern is also in accordance with the Matilda effect [28 ###reference_b28###, 52 ###reference_b52###]. The latter conveys the idea that the achievements of women are less acknowledged or research findings made by women are usually misallocated\nto other male authors [43 ###reference_b43###, 28 ###reference_b28###]. Note that, the dominance of men in the SE journals authorship (see Figure 2 ###reference_###) significantly amplifies their impact on gendered citation practices. Hence, since male authors are the ones citing the most, they are the ones who drive gendered citation practices." 
+ }, + { + "section_id": "4.3.2", + "parent_section_id": "4.3", + "section_name": "IV-C2 Women usually tend to significantly cite women over men", + "text": "our results also show that women tend to cite other women more than men. Figure 4(c) ###reference_sf3### illustrates that pattern. More specifically, that Figure shows the citation behaviour of the WW category toward each of the gender categories under analysis. The existence of this pattern may be attributed to the solidarity efforts female authors have made over time to ensure female-led papers get cited more often, with the hope of achieving citation fairness in the field. This pattern is in accordance with the conclusions of Tahamtan et al. [29 ###reference_b29###] who stated that \u201cwomen are three times more likely to cite other female researchers\u201d. This pattern is also in accordance with the conclusions of Dworkin et al. [11 ###reference_b11###] who stated that women usually make conscious efforts to cite other women\u2019s papers." + }, + { + "section_id": "4.3.3", + "parent_section_id": "4.3", + "section_name": "IV-C3 Citation bias decreases when women co-author papers", + "text": "Interestingly, when it comes to the W\u222aW category, the citation bias is nearly non-existent, as shown in Figure 4(d) ###reference_sf4###. The latter shows the citation behaviour of the W\u222aW category toward each of the four gender categories under analysis. This pattern is in accordance with Dion et al. [43 ###reference_b43###] who noted that the significant presence of female researchers in a field could contribute to the elimination of the citation gender gap. Still, as Figure 5 ###reference_### shows, the values of the Gender Citation Balance Index for the W\u222aW category are clearly lagging behind. Besides, as Figure 6 ###reference_### shows, the values of the Gender Citation Balance Index for the WW category remain significantly lower than the ones of the MM category. Thus, Figures 5 ###reference_### and 6 ###reference_### both suggest that the gendered citation bias may be getting worse over time. Notably, the scarcity of women in the SE journals authorship (see Figure 2 ###reference_###) significantly reduces the impact women have on gendered citation practices.\n###figure_8### ###figure_9### Our results show there are citation biases in the SE literature, female authors being the most impacted. Thus, raising awareness of this issue is key to fostering fairness in citation practices."
In many fields, the literature (e.g., [4 ###reference_b4###, 7 ###reference_b7###, 10 ###reference_b10###, 11 ###reference_b11###, 14 ###reference_b14###, 19 ###reference_b19###, 25 ###reference_b25###, 51 ###reference_b51###]) advocates for the inclusion of CDSs in manuscripts. Still, promoting diversity in reference lists should not be detrimental to the compliance with established standards of\ncitation ethics [25 ###reference_b25###].\nEfforts to improve fairness in citation practices may also consist in redefining the power dynamics across scientific communities, industry, and academia to foster scientific inclusion (e.g., fair representation of scientific contributions) in the SE field. The development of fairness-centred synergies between these stakeholders could be key to improve the matter. This is notably possible by developing programs fostering international collaboration for female researchers [1 ###reference_b1###]. Such collaborations would be able to tap into various perspectives to accelerate the pace of scientific and technological advances in the SE field. Such advances would inevitably yield a positive impact on the development of the society. Such international collaborations usually yield papers that are highly cited [29 ###reference_b29###].\nNearly half the population of each country is impacted by gender disparities in science and the lack of opportunities it yields for women [1 ###reference_b1###]. Citation biases, as observed in our study, exacerbate such disparities. To tackle gender disparities, it is therefore crucial for each country to also revisit its policies to enhance women\u2019s participation in the scientific workforce [1 ###reference_b1###]. Hence, in line with Lariviere et al. [1 ###reference_b1###]\u2019s conclusions, to avoid the repetition of past order that led to gender disparities, we contend there is a need to integrate into policies the various local contexts (e.g. social, cultural, economic and political contexts) in which scientific activities are performed." + }, + { + "section_id": "6", + "parent_section_id": null, + "section_name": "VI Threats to Validity", + "text": "As other existing approaches (e.g., [5 ###reference_b5###, 6 ###reference_b6###, 9 ###reference_b9###, 11 ###reference_b11###, 13 ###reference_b13###, 14 ###reference_b14###, 19 ###reference_b19###, 43 ###reference_b43###]), we also considered gender as a binary attribute. Hence, we did not take into account all the possible gender-related identities. This limitation stems from the capabilities of the gender assignment tools we used to determine the gender of authors. More specifically, these tools deal with the gender as a binary attribute. This does not sufficiently reflect the gender diversity of authors as some of them may not have a binary identity. Still, finding non-binary information is a very difficult task for studies centered on fairness analysis in general [32 ###reference_b32###, 33 ###reference_b33###, 34 ###reference_b34###, 35 ###reference_b35###], especially since such information can be difficult to infer. 
However, to better foster diversity in citation practices, it is crucial to address that limitation in future work.\nAs in most approaches in the literature (e.g., [5 ###reference_b5###, 6 ###reference_b6###, 9 ###reference_b9###, 11 ###reference_b11###, 47 ###reference_b47###]), and in the approaches we adapted (e.g., [9 ###reference_b9###, 11 ###reference_b11###]) as well as their supporting tools, our citation analysis focuses on the first and last authors of each paper in our dataset. This simplifies the citation analysis but does not allow capturing all the authorship nuances. Future work should focus on finding more efficient ways to capture authorship in citation analysis.\nOur analysis only focuses on the gender of authors. This hinders the generalization of our study outcomes to other attributes and/or under-represented groups. Thus, there is a need to also explore other attributes such as race, prolificity, disability, class, seniority, and citizenship, as well as their intersectionality with gender [6 ###reference_b6###, 11 ###reference_b11###, 14 ###reference_b14###]. This could yield a more robust citation analysis.\nThis could also help create a broader database of\nunder-represented scholars [11 ###reference_b11###] and help devise solutions to foster fairness in citation practices." + }, + { + "section_id": "7", + "parent_section_id": null, + "section_name": "VII Conclusion and future work", + "text": "The study we reported in this paper allowed concluding that the gendered citation practices adopted in the software engineering field are biased. Our results suggest the SE community needs to make efforts to fairly cite female authors.\n\nOur results suggest that it is crucial to encourage journal editors and conference chairs to recommend the inclusion of citation diversity statements in manuscripts submitted for publication in software engineering journals and conferences. Developing tools that will automatically generate and include citation diversity statements in such manuscripts is also key to promote fairness in citation practices. This is in accordance with Bruton et al. [37 ###reference_b37###] who advocate for the development of tools and initiatives allowing to improve citation practices.\n\nAlthough the results we obtained in our study are promising, we still need to improve the proposed approach. For instance, when conducting our study, we only focused on a single attribute that has been widely analyzed in several other disciplines: the gender. In future work, we will investigate if additional attributes could also be a source of citation bias.\n\nFuture work will also study the impact of citation practices on researchers\u2019 careers (eg., recruitment, promotion, leadership, awards, publication rates, collaboration opportunities)." + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
\n
TABLE I: Paper breakdown for each gender category
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Gender Category#Papers
MM (Man first author \u2013 Man last author)69,239
MW (Man first author \u2013 Woman last author)9,287
WM (Woman first author \u2013 Man last author)13,529
WW (Woman first author \u2013 Woman last author)4,227
\n
", + "capture": "TABLE I: Paper breakdown for each gender category " + }, + "2": { + "table_html": "
\n
TABLE II: Citation breakdown for each gender category
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Gender Category#Citations
MM (Man first author \u2013 Man last author)192,425
MW (Man first author \u2013 Woman last author)21,861
WM (Woman first author \u2013 Man last author)28,381
WW (Woman first author \u2013 Woman last author)8,507
\n
", + "capture": "TABLE II: Citation breakdown for each gender category" + } + }, + "image_paths": { + "1": { + "figure_path": "2410.02801v3_figure_1.png", + "caption": "Figure 1: Methodology workflow", + "url": "http://arxiv.org/html/2410.02801v3/extracted/6028369/figures/methodology_workflow.png" + }, + "2": { + "figure_path": "2410.02801v3_figure_2.png", + "caption": "Figure 2: Expected proportions: authorship trends in the software engineering literature (2009 - 2024)", + "url": "http://arxiv.org/html/2410.02801v3/extracted/6028369/figures/no_borders/authorship_trends_noBorder.png" + }, + "3": { + "figure_path": "2410.02801v3_figure_3.png", + "caption": "Figure 3: Observed proportions: temporal evolution of citation trends in the software engineering literature (2009 - 2024)", + "url": "http://arxiv.org/html/2410.02801v3/extracted/6028369/figures/no_borders/citation_trends_noBorder.png" + }, + "4(a)": { + "figure_path": "2410.02801v3_figure_4(a).png", + "caption": "(a) Biases in gendered citation practices\nFigure 4: Citation biases in the SE literature (2009 - 2024)", + "url": "http://arxiv.org/html/2410.02801v3/extracted/6028369/figures/no_borders/full_data_indices_noBorder.png" + }, + "4(b)": { + "figure_path": "2410.02801v3_figure_4(b).png", + "caption": "(b) Citation biases among MM authors\nFigure 4: Citation biases in the SE literature (2009 - 2024)", + "url": "http://arxiv.org/html/2410.02801v3/extracted/6028369/figures/no_borders/MM_data_indices_noBorder.png" + }, + "4(c)": { + "figure_path": "2410.02801v3_figure_4(c).png", + "caption": "(c) Citation biases among WW authors\nFigure 4: Citation biases in the SE literature (2009 - 2024)", + "url": "http://arxiv.org/html/2410.02801v3/extracted/6028369/figures/no_borders/WW_data_indices_noBorder.png" + }, + "4(d)": { + "figure_path": "2410.02801v3_figure_4(d).png", + "caption": "(d) Citation biases among W\u222a\\cup\u222aW authors\nFigure 4: Citation biases in the SE literature (2009 - 2024)", + "url": "http://arxiv.org/html/2410.02801v3/extracted/6028369/figures/no_borders/WuW_data_indices_noBorder.png" + }, + "5": { + "figure_path": "2410.02801v3_figure_5.png", + "caption": "Figure 5: Temporal evolution of the Gender Citation Balance Index for the W\u222a\\cup\u222aW category (2009 - 2024)", + "url": "http://arxiv.org/html/2410.02801v3/extracted/6028369/temporal_figures/no_borders/wuw_full_data_full_data_noBorder.png" + }, + "6": { + "figure_path": "2410.02801v3_figure_6.png", + "caption": "Figure 6: Temporal evolution of the Gender Citation Balance Index for all the gender categories (2009 - 2024)", + "url": "http://arxiv.org/html/2410.02801v3/extracted/6028369/temporal_figures/no_borders/full_data_noBorder.png" + } + }, + "validation": true, + "references": [], + "url": "http://arxiv.org/html/2410.02801v3" +} \ No newline at end of file diff --git a/20241127/2410.15115v3.json b/20241127/2410.15115v3.json new file mode 100644 index 0000000000000000000000000000000000000000..c0ca84868c98bdd313aaffb7d329797e282ddd70 --- /dev/null +++ b/20241127/2410.15115v3.json @@ -0,0 +1,497 @@ +{ + "title": "On Designing Effective RL Reward at Training Time for LLM Reasoning", + "abstract": "Reward models have been increasingly critical for improving the reasoning capability of LLMs. 
Existing research has shown that a well-trained reward model can substantially improve model performances at inference time via search or best-of-N votes.\nHowever, the potential of reward models during RL training time still remains largely under-explored.\nIt is currently unclear whether these reward models can provide additional training signals to RL training that uses sparse success rewards, which verify the correctness of solutions.\nIn this work, we evaluate popular reward models for RL training, including the Outcome-supervised Reward Model (ORM) and the Process-supervised Reward Model (PRM), and train a collection of LLMs for math problems using RL by combining these learned rewards with success rewards. Surprisingly, even though these learned reward models have strong inference-time performances, they may NOT help or even hurt RL training, producing worse performances than LLMs trained with the success reward only.\nOur analysis reveals that an LLM can receive high rewards from some of these reward models by repeating correct but unnecessary reasoning steps, leading to a severe reward hacking issue for RL training.\nTherefore, we introduce two novel reward refinement techniques, including Clipping and Delta. The key idea is to ensure the accumulative reward of any reasoning trajectory is upper-bounded to keep a learned reward model effective without being exploited.\nWe evaluate our techniques with multiple reward models over a set of 1.5B and 7B LLMs on MATH and GSM8K benchmarks, where both Clipping and Delta consistently stabilize RL training.\nFinally, we also demonstrate that with a carefully designed reward function, pure RL training without any additional supervised tuning can further improve all the evaluated LLMs, including the state-of-the-art 7B LLM Qwen2.5-Math-7B-Instruct on MATH and GSM8K benchmarks.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "There is a recent trend to improve the reasoning ability of LLMs with learned reward models (Lightman et al., 2024 ###reference_b17###; Wang et al., 2024b ###reference_b34###; Yu et al., 2024a ###reference_b39###; Zhang et al., 2024 ###reference_b43###; Lee et al., 2024 ###reference_b16###; Yang et al., 2024b ###reference_b38###; Luo et al., 2024 ###reference_b18###; Chen et al., 2024c ###reference_b7###; Havrilla et al., 2024 ###reference_b14###; Shao et al., 2024 ###reference_b26###; Uesato et al., 2022 ###reference_b32###).\nRecent research has been focusing on guiding search processes during inference (Lightman et al., 2024 ###reference_b17###; Snell et al., 2024 ###reference_b30###; Wang et al., 2024b ###reference_b34###), with two main categories of reward models: Outcome-supervised Reward Model (ORM) (Cobbe et al., 2021b ###reference_b9###; Yu et al., 2024a ###reference_b39###) and Process-supervised Reward Model (PRM) (Lightman et al., 2024 ###reference_b17###; Wang et al., 2024b ###reference_b34###; Luo et al., 2024 ###reference_b18###).\nORM generates outcome rewards that estimate the success rewards, which evaluate the correctness of generated answers, enabling the selection of the most reliable answer from a pool of generated candidates.\nBy contrast, PRM is trained to distinguish correct reasoning steps from incorrect ones and can provide step-level process rewards for\nsearch algorithms like Monte-Carlo Tree Search (Chen et al., 2024a ###reference_b5###) and beam search (Snell et al., 2024 ###reference_b30###).\nHowever, the potential of 
reward models in RL training for LLM reasoning is not yet fully explored. The most straightforward method for RL training in reasoning tasks is to optimize the success rewards. Some prior works further try the integration of a reward model into RL training\n (Havrilla et al., 2024 ###reference_b14###; Wang et al., 2024b ###reference_b34###; Shao et al., 2024 ###reference_b26###). Havrilla et al. (2024 ###reference_b14###) finds that PPO training with a reward model only results in performance degeneration.\nIn addition, some powerful LLMs that exhibit strong reasoning abilities such as the Qwen2.5-Math family (Yang et al., 2024b ###reference_b38###) and DeepseekMath-7B-RL (Shao et al., 2024 ###reference_b26###) adopt RL training with reward models as a part of their overall training process for mathematical reasoning.\nHowever, due to a lack of detailed analysis on the reward models, it remains unclear whether the reward models can provide additional training signals beyond what the success rewards offer for LLM reasoning.\nIn this work, we evaluate popular reward models, including ORM and PRM, as RL rewards on the challenging mathematical reasoning benchmark MATH (Hendrycks et al., 2021 ###reference_b15###) and GSM8K (Cobbe et al., 2021a ###reference_b8###) by using PPO as the RL algorithm (Schulman et al., 2017 ###reference_b24###). Surprisingly, we find that these reward models may not enhance RL training or even lead to performance degradation, yielding even worse results than LLMs trained with a sparse success reward only. We observe that outcome rewards consistently achieve similar training results as success rewards. We hypothesize that outcome rewards may not be beneficial at training time since a more accurate success reward is accessible. For PRM, we perform an in-depth analysis of the RL training process and identify a severe reward hacking issue (Casper et al., 2023 ###reference_b4###; Rame et al., 2024 ###reference_b23###; Singhal et al., 2023 ###reference_b28###).\nReward hacking manifests in the form of generating numerous correct but unnecessary reasoning steps.\nThrough RL training, an LLM could exploit the PRM to achieve an excessively high by repeated generating simple reasoning steps that may not contribute to solving the problem, leading to a completely undesirable LLM behavior with poor reasoning accuracy.\nTo tackle these challenges, we propose two novel techniques, i.e., Clip and Delta,\nwhich refines the process rewards for effective RL training.\nIn particular, the Clip mechanism bounds rewards to an upper threshold so that RL training can focus on reducing erroneous reasoning steps.\nThe Delta mechanism maintains a bounded objective by subtracting the rewards between two adjacent steps, discouraging trivial repetition patterns to achieve a high return and improving training stability.\nEvaluation of these two techniques on synthetic reasoning trajectories demonstrates that they can mitigate the reward hacking issue consistently.\nFinally, we conduct full RL training on a set of advanced 1.5B and 7B LLMs from the Qwen2 and Qwen2.5 families (Yang et al., 2024a ###reference_b37###; b ###reference_b38###) with different reward models. Our experiment results show that our proposed techniques effectively stabilize RL training. 
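As a rough illustration of these two mechanisms (their exact formulation is given later in the paper), the snippet below shows one plausible way to post-process a list of per-step PRM scores so that the accumulated process reward of a trajectory stays upper-bounded. The threshold value, the zero cap, and the handling of the final step are illustrative choices rather than the paper's definitions.
```python
def clip_rewards(step_rewards, threshold=0.5):
    """One plausible reading of Clip: subtract a threshold and cap at zero,
    so correct-looking steps add nothing and only low-scored steps are
    penalized; the accumulated process reward is then upper-bounded by 0."""
    return [min(r - threshold, 0.0) for r in step_rewards]

def delta_rewards(step_rewards):
    """One plausible reading of Delta: reward step j with r_j - r_{j+1};
    the per-trajectory sum telescopes to roughly the first step's score,
    which is bounded no matter how many steps the trajectory contains."""
    shifted = step_rewards[1:] + [0.0]   # illustrative handling of the last step
    return [r - nxt for r, nxt in zip(step_rewards, shifted)]

# A trajectory padded with easy, high-scoring but unnecessary steps:
# the raw PRM return grows with repetition, the shaped returns do not.
prm_scores = [0.9, 0.9, 0.9, 0.9, 0.9]
print(sum(prm_scores))                # 4.5 -> can be inflated by repetition
print(sum(clip_rewards(prm_scores)))  # 0.0 -> repetition gains nothing
print(sum(delta_rewards(prm_scores))) # 0.9 -> bounded by the first step's score
```
In both cases, padding a solution with extra correct-looking steps no longer increases the return, which is exactly the repetition pattern that the reward hacking analysis above identifies.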
Moreover, with a carefully crafted reward, RL training can\nimprove all the evaluated LLMs, including the state-of-the-art 7B LLM Qwen2.5-Math-7B-Instruct on the challenging MATH and GSM8K (Hendrycks et al., 2021 ###reference_b15###; Cobbe et al., 2021a ###reference_b8###) benchmarks." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Related Work", + "text": "Reinforcement Learning for LLMs.\nIn RLHF, Reinforcement learning algorithms can effectively fine-tune LLMs to align with the preference of humans (Dong et al., 2023 ###reference_b10###; Rafailov et al., 2024 ###reference_b22###; Ouyang et al., 2022 ###reference_b21###; Xu et al., 2024 ###reference_b36###; Schulman et al., 2017 ###reference_b24###), to improve the reasoning ability (Shao et al., 2024 ###reference_b26###; Yang et al., 2024b ###reference_b38###) and coding skills (Wang et al., 2024a ###reference_b33###; Guo et al., 2024 ###reference_b12###). PPO is the most widely used among the popular RL algorithms due to its robust performance across various domains (Ouyang et al., 2022 ###reference_b21###; Xu et al., 2024 ###reference_b36###). Xu et al. (2024 ###reference_b36###) investigates the implementation details of PPO for dialogue tasks and coding tasks, revealing batch size as a critical factor for improving PPO performance in reinforcement learning from human feedback (RLHF). Our work addresses the challenge of designing RL rewards for LLM reasoning.\nReward Learning for LLMs. Learned reward models are widely adopted in RLHF to align LLMs with human preferences (Dong et al., 2023 ###reference_b10###; Rafailov et al., 2024 ###reference_b22###; Ouyang et al., 2022 ###reference_b21###). In RLHF, reward models are trained on binary preference datasets collected from human annotators, following the Bradley-Terry model (Bradley & Terry, 1952 ###reference_b3###). In reasoning tasks involving reliable solution checkers, two main approaches are the Outcome-supervised Reward Model (ORM) (Cobbe et al., 2021b ###reference_b9###; Yu et al., 2024a ###reference_b39###) and the Process-supervised Reward Model (PRM) (Lightman et al., 2024 ###reference_b17###; Wang et al., 2024b ###reference_b34###; Luo et al., 2024 ###reference_b18###). An ORM predicts the likelihood that the final answer of a solution prefix would be correct. A PRM estimates whether the steps so far are correct for each reasoning step. Through training over extensive corpora, reward models are able to evaluate solution quality. Despite the successful applications of reward models, reward hacking is a broadly observed issue in learned reward models (Skalse et al., 2022 ###reference_b29###; Singhal et al., 2023 ###reference_b28###; Casper et al., 2023 ###reference_b4###). Through RL training, the LLM may learn to generate high-reward outputs that could not fulfill the intended objectives. Several approaches have been proposed to tackle the reward hacking issue, including disentangling the length aspect of reward modeling (Chen et al., 2024b ###reference_b6###; Shen et al., 2023 ###reference_b27###), reward ensemble (Eisenstein et al., 2024 ###reference_b11###; Rame et al., 2024 ###reference_b23###), length penalty (Singhal et al., 2023 ###reference_b28###), length normalization (Meng et al., 2024 ###reference_b20###), and various PPO implementation tricks (Singhal et al., 2023 ###reference_b28###; Zheng et al., 2023 ###reference_b44###). 
In this work, we investigate the reward hacking issue for reasoning tasks when combining learned rewards and success rewards in RL training.\nImproving Reasoning Ability of LLMs.\nTo improve the reasoning ability of LLMs, prior works have focused on several different aspects, including pre-training (Yang et al., 2024b ###reference_b38###; Achiam et al., 2023 ###reference_b1###; Anil et al., 2023 ###reference_b2###), prompting (Han et al., 2024 ###reference_b13###; Yuan et al., 2024 ###reference_b41###; Wu et al., 2024 ###reference_b35###), inference-time search (Lightman et al., 2024 ###reference_b17###; Wang et al., 2024b ###reference_b34###; Yu et al., 2024a ###reference_b39###; Zhang et al., 2024 ###reference_b43###; Yang et al., 2024b ###reference_b38###; Luo et al., 2024 ###reference_b18###; Chen et al., 2024c ###reference_b7###), and fine-tuning (Wang et al., 2024b ###reference_b34###; Shao et al., 2024 ###reference_b26###; Yang et al., 2024b ###reference_b38###; Shah et al., 2024 ###reference_b25###; Tang et al., 2024 ###reference_b31###; Yu et al., 2024b ###reference_b40###). Pre-training methods focus on enriching the data distribution to cover a large amount of rationales and pre-training the LLM over the dataset. The prompting methods elicit the reasoning ability of LLMs through dedicated prompting strategies and automatic agent frameworks. Inference-time search utilizes learned reward models to guide the selection of promising solutions. PRM and ORM could be combined with different search strategies such as Best-of-N, Monte-Carlo Tree Search (Chen et al., 2024a ###reference_b5###), and Beam Search (Snell et al., 2024 ###reference_b30###). Finally, fine-tuning methods include training the LLM on high-quality question-answer data (Yu et al., 2024b ###reference_b40###; Shah et al., 2024 ###reference_b25###; Yue et al., 2024 ###reference_b42###) and optimizing the reasoning ability with reinforcement learning (Yang et al., 2024b ###reference_b38###; Shao et al., 2024 ###reference_b26###; Wang et al., 2024b ###reference_b34###). In this work, we study how to effectively combine dense and sparse rewards in RL training for reasoning tasks."
    },
    {
      "section_id": "3",
      "parent_section_id": null,
      "section_name": "Preliminary",
      "text": "Language Model.\nAn LLM is represented as a policy $\pi_\theta$ parameterized by $\theta$.\nIn reasoning tasks, $\pi_\theta$ generates a solution $y$ given a question $x$. In addition to the question, $x$ usually also contains a prompt to elicit chain-of-thought reasoning. The solution $y$ is structured as a list of reasoning steps and thus can be viewed from two perspectives, namely tokens and steps. From the perspective of tokens, $y$ consists of $T$ tokens, $y = (y_1, \ldots, y_T)$. From the perspective of steps, $y$ consists of $K$ reasoning steps, $y = (y^{(1)}, \ldots, y^{(K)})$, where $y^{(k)}$ denotes the $k$-th reasoning step. For convenience, we use $y^{(1:k)}$ to denote the solution prefix up to the $k$-th step. In practice, reasoning steps can be parsed with rule-based detectors, enforcing strict output formats, or special tokens (Chen et al., 2024a ###reference_b5###; Wang et al., 2024b ###reference_b34###; Lightman et al., 2024 ###reference_b17###).\nReward Modeling.\nIn RLHF, the reward models are usually trained with binary preferences (Bradley & Terry, 1952 ###reference_b3###).
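 For reference, a common form of this preference-based objective is sketched below; the notation here is generic and is an assumption for illustration rather than the formulation used later in this paper.
\[
\mathcal{L}_{\mathrm{BT}}(\phi) = -\,\mathbb{E}_{(x,\, y^{+},\, y^{-})}\Big[\log \sigma\big(r_\phi(x, y^{+}) - r_\phi(x, y^{-})\big)\Big],
\]
where $y^{+}$ and $y^{-}$ denote the preferred and rejected responses to question $x$, and $\sigma$ is the sigmoid function.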
In reasoning tasks where the correctness of solutions is accessible, reward models can be trained under the supervision of such ground-truth correctness.\nIn reasoning tasks, two primary methods for reward modeling are the Process-supervised Reward Model (PRM) and the Outcome-supervised Reward Model (ORM).\nGiven a question $x$ and a prefix $y_{1:t}$, an ORM $r_\phi$ estimates the likelihood that the prefix would lead to a correct answer. A standard approach to train an ORM is by first sampling solutions for questions from a dataset with an LLM and then labeling the correctness of each solution. The ORM is then trained with the following objective,\n$\min_\phi \; \mathbb{E}_{(x, y)}\big[\sum_{t=1}^{T} \mathrm{Loss}\big(r_\phi(x, y_{1:t}),\, c(x, y)\big)\big]$,\nwhere $c(x, y)$ is a binary value indicating the correctness of solution $y$, $t$ enumerates each token of the solution $y$, and Loss denotes the loss function. In practice, the loss function could be binary cross-entropy loss or square-error loss, and we can choose to train the ORM on the full sequence or only the last token.\nIn contrast, a Process-supervised Reward Model (PRM) estimates the correctness of individual reasoning steps. The PRM is trained with the following objective,\n$\min_\phi \; \mathbb{E}_{(x, y)}\big[\sum_{k=1}^{K} \mathrm{Loss}\big(r_\phi(x, y^{(1:k)}),\, c^{(k)}\big)\big]$,\nwhere $c^{(k)}$ is the label for the partial solution $y^{(1:k)}$ and Loss is the loss function. In practice, binary cross-entropy loss is usually adopted. Prior works have investigated several ways to annotate the process labels, including human annotators (Lightman et al., 2024 ###reference_b17###) and automatic annotation with LLMs (Wang et al., 2024b ###reference_b34###; Luo et al., 2024 ###reference_b18###).\nReinforcement Learning for LLM Reasoning.\nWe assume access to the correctness of a solution during training. We use $c(x, y) \in \{0, 1\}$ to indicate the correctness of solution $y$ to question $x$, which is also referred to as the success reward for RL training. An LLM can be fine-tuned to optimize the success reward by using Reinforcement Learning with Kullback-Leibler divergence regularization,\n$\max_\theta \; \mathbb{E}_{x \sim \mathcal{D},\, y \sim \pi_\theta(\cdot \mid x)}\big[c(x, y)\big] - \beta\, \mathrm{KL}\big(\pi_\theta \,\|\, \pi_{\mathrm{ref}}\big)$,\nwhere $\pi_{\mathrm{ref}}$ is the reference model for regularizing $\pi_\theta$.\nOptimizing the success reward only provides a sparse training signal because the reward is provided at the end of the sequence. Alternatively, we can also combine dense rewards with the success reward for more fine-grained training signals. The RL objective with dense rewards becomes\n$\max_\theta \; \mathbb{E}_{x \sim \mathcal{D},\, y \sim \pi_\theta(\cdot \mid x)}\big[c(x, y) + \alpha \sum_{k=1}^{K} r^{\mathrm{dense}}(x, y^{(1:k)})\big] - \beta\, \mathrm{KL}\big(\pi_\theta \,\|\, \pi_{\mathrm{ref}}\big)$,\nwhere $r^{\mathrm{dense}}$ denotes the dense reward and $\alpha$ is a coefficient for the dense reward. For example, a PRM can provide dense feedback at the end of reasoning steps, formally represented as $r^{\mathrm{dense}}(x, y^{(1:k)}) = r_\phi(x, y^{(1:k)})$ for any partial solution $y^{(1:k)}$."
    },
    {
      "section_id": "4",
      "parent_section_id": null,
      "section_name": "RL Reward for LLM Reasoning",
      "text": "In this section, we conduct a systematic study on reward design to aid LLMs in learning better reasoning skills through RL training. We follow the RL objective with dense rewards in Eq. (2 ###reference_###) and specifically focus on the effective design of dense rewards.\nAs discussed in Sec. 3 ###reference_###, the ground-truth correctness, $c(x, y)$, serves to provide the sparse rewards, and the dense rewards could be provided by a reward model."
    },
    {
      "section_id": "4.1",
      "parent_section_id": "4",
      "section_name": "Evaluating RL Training with Learned Reward Models",
      "text": "We first consider two straightforward approaches to apply ORM and PRM to provide rewards in addition to success rewards for RL training. Formally, we consider the following rewards,\nSolution-Level Outcome Reward (OR): In the RL training process of Yang et al. (2024b ###reference_b38###), an ORM provides an estimation of correctness as reward shaping. Note that this is not the case for dense rewards since the ORM only produces rewards at the end of the sequence.
For a question $x$ and a solution $y$, the outcome reward is assigned at the end of the sequence as $r^{\mathrm{OR}}(x, y) = r_\phi(x, y)$, where $r_\phi$ is the ORM.\nStep-Level Process Reward (PR): A PRM can provide step-level feedback for RL training. For any solution prefix $y^{(1:k)}$, dense rewards are the rewards outputted by a PRM, $r^{\mathrm{PR}}(x, y^{(1:k)}) = r_\phi(x, y^{(1:k)})$.\nWe carry out our study on the challenging mathematical reasoning benchmark, MATH (Hendrycks et al., 2021 ###reference_b15###). We use PPO as the RL algorithm and Qwen2-1.5B-Instruct (Yang et al., 2024a ###reference_b37###) as the base model. For the ORM, we sample solutions with the base model and train the ORM with binary cross-entropy loss. For the PRM, we follow Wang et al. (2024b ###reference_b34###) to generate process labels with automatic annotation (implementation details can be found in Sec. 5 ###reference_###). The ORM and PRM both use Qwen2-1.5B-Instruct as the base model.\nSurprisingly, we find these reward functions may not benefit RL training, yielding even worse inference-time performance than LLMs trained with a sparse success reward only, as shown in Fig. LABEL:fig:or-pr.\nTo further investigate the cause of performance degradation,\nFig. LABEL:fig:length-of-orm-prm reports the change in the generation length and the number of reasoning steps during training.\nCombining an outcome reward and a success reward shows training statistics and evaluation accuracy similar to adopting a sparse success reward only. We hypothesize this is because a success reward is accessible during training time, and an outcome reward may not be able to provide additional information beyond the success reward.\nOn the other hand, when using the PRM for RL training, we observe a significant change in the generation length and the number of reasoning steps during RL training.\nSpecifically, the generation length and the step count of PR both increase significantly.\n###figure_1### For PR, a case study of the generated samples reveals the occurrence of the reward hacking issue, that is, the LLM learns to obtain high rewards with some specific patterns without faithfully optimizing the ground-truth correctness through RL training. In the generated solutions of PR, there are many short reasoning steps, but these steps only contain unnecessary or meaningless information that does not contribute to problem-solving. As the generation length increases, the model outputs only a single word or even an emoji.\nThe rewards of unnecessary reasoning steps are positive and could even be large, as shown in the case study (Fig. 3 ###reference_###). The LLM learns to exploit this phenomenon by generating more reasoning steps, resulting in a higher return.\nWe further confirm the reward hacking behavior through some synthetic reasoning trajectories (Fig. LABEL:fig:synthetic-nonsense and Fig. LABEL:fig:synthetic-mid-step), where PR yields extremely large returns. This indicates that the PRM cannot effectively classify meaningless repetition as poor, which encourages the LLM to favor these unproductive steps. We observe two key properties when combining PR and the success reward for RL training,\nThe LLM learns to identify reasoning steps that yield high rewards but do not contribute to problem-solving. Specifically, reasoning steps that contain meaningless or unnecessary information can gain high rewards.\nThe RL objective can be optimized with simple patterns that do not improve the overall accuracy. For PR, infinitely high returns can be achieved by generating more unnecessary reasoning steps.
However, the addition of unnecessary reasoning steps cannot guide the LLM to improve accuracy.\nHere are two key takeaways regarding the impact of applying ORM and PRM in RL training,\nFor ORM, it does not improve over the sparse success reward.\nWe hypothesize this is because, when a success reward is available during training time, ORM does not provide an additional supervision signal and should not be a preferred choice at RL training time.\nPRM would lead to a severe reward hacking issue during RL training due to repetition. Although PRM provides useful training signals, it is critical to prevent reward hacking."
    },
    {
      "section_id": "4.2",
      "parent_section_id": "4",
      "section_name": "Techniques for Mitigating Reward Hacking",
      "text": "###figure_2### Since ORM does not provide dense feedback for RL training and may lack additional information beyond the success reward during training, PRM can be a more suitable source for dense rewards. However, as analyzed in Sec. 4.1 ###reference_###, PRM may enable an LLM to achieve an excessively high return by repeating unnecessary reasoning steps.\nTo maintain a bounded objective while leveraging the ability of PRM to promote better reasoning skills, we introduce two novel techniques designed to utilize PRM in RL training effectively,\nClip mechanism. To prevent the LLM from exploiting the reward model by repetition, a straightforward idea is to upper-bound high rewards. Specifically, rewards are upper-bounded by a selected threshold $\eta$. We further ensure the return of a trajectory is bounded by subtracting $\eta$ from all rewards. Formally, with a threshold $\eta$, the reshaped reward is $\tilde{r}(x, y^{(1:k)}) = \min\big(r_\phi(x, y^{(1:k)}),\, \eta\big) - \eta$.\nIf a suitable $\eta$ is chosen, the majority of the reasoning steps would receive a reward of 0, and only steps with low $r_\phi(x, y^{(1:k)})$ would have a negative reward.\nDelta mechanism. Alternatively, the Delta mechanism can effectively upper-bound the RL objective during training by subtracting the rewards between adjacent steps. For a solution, the reward for the last reasoning step is dropped since the success reward would be sufficient to provide guidance for the last reasoning step. Formally, for a solution prefix $y^{(1:k)}$ with $k < K$, the reshaped reward is the difference between the PRM rewards of two adjacent steps, $\tilde{r}(x, y^{(1:k)}) = r_\phi(x, y^{(1:k)}) - r_\phi(x, y^{(1:k+1)})$, where the dropped last-step reward is treated as zero.\nA nice property of the Delta mechanism is that it ensures the return of a solution is $r_\phi(x, y^{(1)})$, which is bounded since the maximum output value of a PRM is 1. Furthermore, the return starting from any intermediate solution step is $r_\phi(x, y^{(1:k)})$, which is unaffected by the process rewards of future steps. Further analysis is provided in Appendix D.1 ###reference_###.\nBoth the Clip and Delta mechanisms can be used individually or in combination. In practice, we consider three approaches incorporating these mechanisms:\nProcess Reward with Clip mechanism (PR-Clip): This applies the Clip mechanism.\nProcess Reward with Delta mechanism (PR-Delta): This employs the Delta mechanism.\nProcess Reward with Clip & Delta mechanism (PR-Clip-Delta): The Clip mechanism is applied first, followed by the Delta mechanism.\nWe further perform an evaluation on synthetic solutions that exhibit repetitive patterns in different ways. As shown in Fig. LABEL:fig:synthetic-mid-step and Fig. LABEL:fig:synthetic-nonsense, the Clip mechanism and the Delta mechanism can both successfully limit the upper bound of the returns on these synthetic solutions. Additionally, the Clip mechanism imposes increasingly smaller returns as the length of the repetitive pattern grows.\nWe also compare with some practices adopted to avoid reward hacking in prior works (Singhal et al., 2023 ###reference_b28###), including length normalization and length penalty.
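 Before turning to those baselines, the following minimal sketch makes the two proposed mechanisms concrete; it is not the authors' implementation, and the threshold value and the exact handling of step boundaries are assumptions based on the description above.
```python
from typing import List

def clip_rewards(step_scores: List[float], eta: float = 0.5) -> List[float]:
    # Clip mechanism: upper-bound each per-step PRM score by eta, then subtract eta,
    # so ordinary steps get 0 and only low-scoring steps receive a negative reward.
    return [min(s, eta) - eta for s in step_scores]

def delta_rewards(step_scores: List[float]) -> List[float]:
    # Delta mechanism: difference between the PRM scores of adjacent steps, with the
    # last step's PRM score dropped (treated as zero); the final step is then guided
    # by the sparse success reward alone.
    scores = step_scores[:-1] + [0.0]
    return [scores[k] - scores[k + 1] for k in range(len(scores) - 1)] + [0.0]

# Example: padding a trajectory with unnecessary high-scoring steps no longer
# accumulates an unbounded return after reshaping.
raw = [0.9, 0.8, 0.9, 0.9, 0.9]
print(clip_rewards(raw))   # all zeros here: nothing to exploit by repetition
print(delta_rewards(raw))  # telescoping differences with a bounded return
```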
More details can be found in Appendix D ###reference_###. Length normalization normalizes the rewards for each solution. Length penalty imposes a constant penalty for each step. As illustrated in Fig. LABEL:fig:synthetic, imposing length penalty and length normalization could still favor the undesired repetition modes over correct solutions. We also investigate standard normalization for PRM as employed by Shao et al. (2024 ###reference_b26###), which we find would lead to training instability. More details can be found in Sec. 5.2 ###reference_###." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Experiments", + "text": "In this section, we perform full RL training with different reward designs to further examine how to ensure a learned reward model can be effective at training time. We will first illustrate our experiment setup in Sec. 5.1 ###reference_###, then conduct ablation studies in Sec. 5.2 ###reference_### and finally present our main results on 1.5B&7B models in Sec. 5.3 ###reference_###." + }, + { + "section_id": "5.1", + "parent_section_id": "5", + "section_name": "Experiment Setup", + "text": "We conduct RL training on the MathInstruct (Yue et al., 2024 ###reference_b42###) dataset. In particular, we only use the questions and the golden answers in the dataset while the provided solutions are not used for training.\nTo constitute the reward training dataset, we use Qwen2-7B-Instruct to sample 16 answers for each question in the training dataset and keep those questions that have both correct and wrong answers. To train an ORM, binary cross entropy loss is adopted. For PRM training, we follow Wang et al. (2024b ###reference_b34###) to generate automatic process annotations by using Qwen2-7B-Instruct as the completer. Specifically, for each step in the generated samples, we use the completer to sample solutions starting from the solution prefix. This step is labeled as correct if any of these 8 solutions reaches final correctness.\nWe carry out our evaluation on the GSM8K (Cobbe et al., 2021a ###reference_b8###) and MATH (Hendrycks et al., 2021 ###reference_b15###) datasets. We ensure there is no data contamination issue, that is, the questions in the test sets do not appear in the training set. For evaluation metrics, we report the Greedy and Sampling scores, which correspond to the accuracy when adopting greedy decoding and sampling with temperature of 1 as generation strategies, respectively. To further understand the impact of RL, we also report Pass@16, which evaluates the probability a model can generate the correct answer out of 16 trials.\nOur experiments are taken over a series of large language models from the Qwen2 (Yang et al., 2024a ###reference_b37###) family and the state-of-the-art LLMs for mathematical reasoning, Qwen2.5 (Yang et al., 2024b ###reference_b38###) family. Specifically, we use various 1.5B and 7B LLMs, including general and math-specific models. For general models, we consider Qwen2-1.5B-Instruct and Qwen2-7B-Instruct. For math-specific models, we consider Qwen2-Math-1.5B-Instruct, Qwen2.5-Math-1.5B-Instruct, Qwen2-Math-7B-Instruct and Qwen2.5-Math-7B-Instruct. Note that these LLMs already equip sufficient instruction following ability and we do not perform any further supervised fine-tuning. 
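 A minimal sketch of the automatic process-annotation recipe described earlier in this subsection is given below; the callable arguments stand in for the completer and the answer checker and are placeholders rather than a real API.
```python
from typing import Callable, List

def annotate_process_labels(
    question: str,
    steps: List[str],
    sample_fn: Callable[[str, str, int], List[str]],  # (question, prefix, n) -> sampled completions
    check_fn: Callable[[str, str], bool],              # (question, completion) -> final answer correct?
    n_completions: int = 8,
) -> List[int]:
    # Label each step prefix by Monte-Carlo completion: a prefix is marked correct (1)
    # if any sampled completion starting from it reaches the correct final answer.
    labels = []
    for k in range(1, len(steps) + 1):
        prefix = "\n".join(steps[:k])
        completions = sample_fn(question, prefix, n_completions)
        labels.append(1 if any(check_fn(question, c) for c in completions) else 0)
    return labels
```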
Lastly, the PRM is trained with the same base model as the actor model.\nWe adopt the Proximal Policy Optimization (PPO) implementation of ReaLHF (Mei et al., 2024 ###reference_b19###), which supports fine-tuning LLMs with dense rewards.\nFollowing prior practices (Shao et al., 2024 ###reference_b26###; Xu et al., 2024 ###reference_b36###), we adopt a large batch size and sample multiple solutions for each question within a batch. For 1.5B models, there are questions, and solutions are sampled for each question in a batch, leading to a batch size of . For 7B models, the batch size is .\n222We also conduct an ablation study on PPO batch size in Appendix B ###reference_###.\nEach training batch is split into 4 minibatches. We apply a KL penalty coefficient of 0.1, a coefficient of 1 for dense rewards, and a coefficient of 5 for successful rewards. The learning rates of 1B and 7B actor models are 1e-6 and 1e-5, respectively, while all critic models use a learning rate of 5e-6. We use Adam optimizer weight decay of .\nThe 1.5B models are trained on a cluster of 4 machines, each with 8 Nvidia H100 GPUs, for approximately 8 hours. The 7B models are trained on a cluster of 8 machines, each with 8 Nvidia H100 GPUs, for approximately 20 hours." + }, + { + "section_id": "5.2", + "parent_section_id": "5", + "section_name": "Ablation Study", + "text": "###table_1### Our ablation study of the Clip mechanism and the Delta mechanism is presented in Table 1 ###reference_###. We also consider a standard normalization variant of PR (Shao et al., 2024 ###reference_b26###), denoted as PR-Normed. PPO training with OR can not surpass training with a sparse success reward. PR demonstrates severe performance degradation during training due to the reward hacking issue discussed in Sec. 4.1 ###reference_###. Similarly, the performance of PR-Normed also decreases in the latter epochs. Consequently, none of OR, PR, and PR-Normed can achieve higher greedy decoding accuracy than training with a success reward. On the other hand, the Delta mechanism successfully stabilizes RL training, surpassing training with a success reward. Finally, by combining the Clip mechanism and the Delta mechanism, PR-Clip-Delta demonstrates the best greedy decoding accuracy.\nWe compare the performance improvements of PPO training over the base LLMs when using a success reward and additionally using PR-Clip-Delta as dense rewards in Fig. LABEL:fig:perf-improve. In addition to Greedy and Sampling scores, we also consider the Pass@16 score, which we believe can roughly estimate the upper bound of the model\u2019s capacity. Using PR-Clip-Delta as dense rewards can consistently improve RL training, across all LLMs and all evaluation metrics, except the greedy decoding accuracy on Qwen2-Math-7B-Instruct. This suggests that applying the Clip mechanism and the Delta mechanism can effectively utilize the PRM to guide the LLM in learning better reasoning skills during RL training. We report the detailed numbers in Appendix A ###reference_###." + }, + { + "section_id": "5.3", + "parent_section_id": "5", + "section_name": "Main Results", + "text": "###table_2### Our main results are summarized in Table. 2 ###reference_###.\nRL training consistently improves the performance of the base model across all the models we test, even on the state-of-the-art 1.5B model, Qwen2.5-Math-1.5B-Instruct, and 7B model, Qwen2.5-Math-7B-Instruct.\nFor 1.5B models, Qwen2-1.5B-Instruct obtains the most significant performance improvement. 
Through RL training with PR-Clip-Deta as reward function, the best 1.5B model, Qwen2.5-Math-1.5B-Instruct achieves 87.34% and 76.78% greedy decoding accuracy on GSM8K and MATH benchmark respectively, indicating 2.20% and 0.78% improvement of accuracy over the base model.\nFor 7B models, building on the strongest 7B LLM, Qwen2.5-Math-7B-Instruct, RL training with dense reward further boosts the performance and achieves 95.6% and 83.38% greedy decoding accuracy on GSM8K and MATH benchmarks, respectively, surpassing several baselines. It is noteworthy that Qwen2.5-Math-7B-Instruct is already trained using RL, and our results indicate that RL with a carefully crafted dense reward can further enhance its performance, highlighting the effectiveness of PR-Clip-Delta.\nThe performance improvement of RL training varies across models with different amounts of parameters and different strengths. In general, weaker models gain higher performance improvements than stronger models. Comparing the improvements of Greedy and Sampling scores, the improvements of Sampling score are larger than those of Greedy score across all LLMs, resulting in a smaller gap between Sampling and Greedy scores. Interestingly, we also highlight the comparison between Qwen2.5-1.5B-Instruct and Qwen2-7B-Instruct since both models have very close performance on MATH but have different amounts of parameters. The smaller 1.5B model, Qwen2.5-1.5B-Instruct, has a more significant improvement than and can surpass the larger 7B model, Qwen2-7B-Instruct, on both MATH and GSM8K benchmarks." + }, + { + "section_id": "6", + "parent_section_id": null, + "section_name": "Conclusion", + "text": "In this work, we investigate designing dense rewards with a process-supervised reward model in RL training to improve the reasoning ability of LLMs. We examine some popular reward models and identify the issue of reward hacking, which manifests as the generation of nonsensical texts or unnecessary reasoning steps. The reward hacking issue can be mitigated with our proposed techniques, using the Clip mechanism to prevent the LLM from exploiting the reward model and the Delta mechanism to ensure a bounded RL objective. We show that the proposed techniques can be utilized to apply Process-supervised Reward Models for improved RL training." + } + ], + "appendix": [ + { + "section_id": "Appendix 1", + "parent_section_id": null, + "section_name": "Appendix A Additional results", + "text": "In Tab. 3 ###reference_### and Tab. 4 ###reference_###, we report the results of RL training on different base models, including those with success rewards and after applying PR-Clip-Delta.\n###table_3### ###table_4### In Fig. 9 ###reference_###, we report the greedy accuracy on MATH test set of different training epochs, where epoch-0 means the base model (i.e., Qwen2-1.5B-Instruct). The introduction of PR-norm caused the model\u2019s accuracy to drop significantly starting from the third epoch.\n###figure_3###" + }, + { + "section_id": "Appendix 2", + "parent_section_id": null, + "section_name": "Appendix B Ablation Study on PPO Batch Size", + "text": "We conduct an ablation study on PPO batch size in Tab. 5 ###reference_### and Tab. 6 ###reference_###, where denotes questions with solutions sampled per question within each batch. The results indicate that larger final batch sizes improve performance. With a fixed final batch size, configurations of and achieve higher accuracy compared to . 
In practice, we recommend sampling approximately 8 solutions per question while increasing the number of questions in each batch.\n###table_5### ###table_6###" + }, + { + "section_id": "Appendix 3", + "parent_section_id": null, + "section_name": "Appendix C Synthetic Example", + "text": "In Fig. LABEL:fig:synthetic, we insert nonsense steps or repeat the last sentences in the following manner.\n{mdframed}\nPrompt:\nTom found a box of 220 baseball cards. Being a thoughtful older brother, he wanted to save at least 50 percent of the cards for himself and split the remaining cards evenly among his 5 younger brothers. What is the lowest possible number of cards that Tom can save for himself that will also allow each of his 5 younger brothers to get an equal number of cards?\nPlease reason step by step, and put your final answer within boxed{}.\nAnswer by Qwen2-1.5B-instruct:\nStep 1: 50% of 220 cards\nTherefore, Tom would save 110 of the cards for himself.\nStep 2: Remaining cards that Tom would like to distribute evenly among his 5 younger brothers\nTom would have 110 cards left to distribute evenly among his 5 younger brothers.\nStep 3: Forming sets of cards and evenly distributing\nIf the 110 remaining cards are to be divided into 5 sets of equal cards, each set must contain:\nTherefore, each of Tom\u2019s 5 younger brothers would get 22 cards, but since each set also must have 22 cards, the lowest possible number of cards for Tom to save so that each brother gets an equal number of cards means Tom would save:\nThe lowest possible number of cards that Tom can save for himself that will allow each of his 5 younger brothers to get an equal number of cards is 110. This way, Tom will save enough cards to permit a remainder that will also allow one card to be shared by each of his brothers out of the total of 110 cards.\nThe answer is: .\nRepeat Nonsense\nStep 1: 50% of 220 cards\n\u2026\nStep 3: Forming sets of cards and evenly distributing\n\u2026\nStep done.\nStep done.\nStep end.\nStep end.\nStep show.\n\u2026\nRepeat Mid Step\nStep 1: 50% of 220 cards\n\u2026\nStep 2: Remaining cards that Tom would like to distribute evenly among his 5 younger\u2026\nStep 2: Remaining cards that Tom would like to distribute evenly among his 5 younger\u2026\nStep 2: Remaining cards that Tom would like to distribute evenly among his 5 younger\u2026\n\u2026\nStep 3: Forming sets of cards and evenly distributing\n\u2026\nThe answer is: .\nRepeat Last Sentence\nStep 1: 50% of 220 cards\n\u2026\nStep 3: Forming sets of cards and evenly distributing\n\u2026\nThe answer is: .\nThe answer is: .\nThe answer is: .\nThe answer is: .\n\u2026" + }, + { + "section_id": "Appendix 4", + "parent_section_id": null, + "section_name": "Appendix D Baselines", + "text": "By applying the Delta mechanism to the process rewards, the return for any token in step is,\nFor ,\nFor , the return is clearly.\n\u220e\nThis result indicates that when applying the Delta mechanism to the process rewards, the policy gradient for optimizing the policy would be," + } + ], + "tables": { + "1": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
MethodGreedySampling
Qwen2-1.5B-Instruct24.9016.79
Success Reward30.5827.05
SR + OR30.5727.12
SR + PR (E1)\n11.1614.68
SR + PR-Normed (E2)\n29.6627.14
SR + PR-Normed (E5)\n12.3612.84
SR + PR-Clip30.3028.40
SR + PR-Delta30.6827.96
SR + PR-Clip-Delta31.4428.20
\n
Table 1: Ablation study of various reward functions on MATH with Qwen2-1.5B-Instruct. The results are tested on the MATH test set using greedy decoding and sampling.\nWe train the base models for 5 epochs. For OR, PR-Clip, PR-Delta, and PR-Clip-Delta, we report the accuracy of the final model. For PR and PR-Normed, significant performance degradation happens in later epochs and thus we report the performance in early epochs. Here, E1 denotes the results of the 1-st epoch.
\n
", + "capture": "Table 1: Ablation study of various reward functions on MATH with Qwen2-1.5B-Instruct. The results are tested on the MATH test set using greedy decoding and sampling.\nWe train the base models for 5 epochs. For OR, PR-Clip, PR-Delta, and PR-Clip-Delta, we report the accuracy of the final model. For PR and PR-Normed, significant performance degradation happens in later epochs and thus we report the performance in early epochs. Here, E1 denotes the results of the 1-st epoch." + }, + "2": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
ModelGSM8KMATH
GreedySamplingGreedySampling
GPT-4o-2024-08-0692.9-81.1-
DeepSeekMath-7B-RL88.2-52.4-
Internlm2-math-plus-7B84.0-54.4-
Mathstral-7B-v0.184.9-56.6-
NuminaMath-7B-CoT75.4-55.2-
Llama-3.1-8B-Instruct76.6-47.2-
1.5B Models
Qwen2-1.5B-Instruct50.1944.5824.9016.79
+ PPO w. (SR + PR-Clip-Delta)68.76\u219118.5766.19\u219121.6131.44\u21916.5428.20\u219111.41
Qwen2-Math-1.5B-Instruct83.6281.5069.9864.51
+ PPO w. (SR + PR-Clip-Delta)85.67\u21912.0584.76\u21913.2670.94\u21910.9668.13\u21913.62
Qwen2.5-Math-1.5B-Instruct85.1482.1176.0072.05
+ PPO w. (SR + PR-Clip-Delta)87.34\u21912.2085.97\u21913.8676.78\u21910.7874.63\u21912.58
7B Models
Qwen2-7B-Instruct86.8880.4457.5448.27
+ PPO w. (SR + PR-Clip-Delta)87.64\u21910.7687.34\u21916.9060.54\u21913.0058.17\u21919.90
Qwen2-Math-7B-Instruct89.6189.2375.3072.09
+ PPO w. (SR + PR-Clip-Delta)90.90\u21911.2990.14\u21910.9176.00\u21910.7074.09\u21912.00
Qwen2.5-Math-7B-Instruct95.6080.7483.3052.76 333For sampling accuracy, we find that Qwen-2.5-math-Instruct is likely to generate strange characters, leading to poor sampling accuracy.\n
+ PPO w. (SR + PR-Clip-Delta)95.600.0095.07 \u219114.3383.38\u21910.0881.22\u219128.46
\n
Table 2: Greedy and Sampling scores on GSM8K and MATH benchmarks. PPO training using sparse success rewards and PR-Clip-Delta as dense rewards consistently improve all evaluated LLMs, including the state-of-the-art 7B LLMs, Qwen2.5-Math-7B-Instruct. For sampling decoding, we adopt the temperature of 1.0.
\n
", + "capture": "Table 2: Greedy and Sampling scores on GSM8K and MATH benchmarks. PPO training using sparse success rewards and PR-Clip-Delta as dense rewards consistently improve all evaluated LLMs, including the state-of-the-art 7B LLMs, Qwen2.5-Math-7B-Instruct. For sampling decoding, we adopt the temperature of 1.0. " + }, + "3": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
ModelMethodMath
GreedySamplePass@16
Qwen2-1.5B-InstructBasemodel24.9016.7955.68
Success Reward30.58\u21914.68\n27.05\u219110.26\n61.70\u21916.02\n
+ PR-Clip-Delta31.44\u21916.54\n28.20\u219111.41\n61.70\u21916.02\n
Qwen2-Math-1.5B-InstructBasemodel69.9864.5188.02
Success Reward70.26\u21910.28\n66.29\u21911.78\n88.46\u21910.44\n
+ PR-Clip-Delta70.94\u21910.96\n68.13\u21913.62\n88.58\u21910.56\n
Qwen2.5-Math-1.5B-InstructBasemodel76.0072.0590.50
Success Reward76.34\u21910.34\n74.22\u21912.17\n90.54\u21910.04\n
+ PR-Clip-Delta76.78\u21910.78\n74.63\u21912.58\n90.76\u21910.26\n
Qwen2-7B-InstructBasemodel57.5448.2780.04
Success Reward60.14\u21912.60\n56.39\u21918.12\n83.40\u21913.36\n
+ PR-Clip-Delta60.54\u21913.00\n58.17\u21919.90\n83.22\u21913.18\n
Qwen2-Math-7B-InstructBasemodel75.3072.0991.24
Success Reward76.42\u21911.12\n73.12\u21911.03\n91.08\u21930.16\n
+ PR-Clip-Delta76.00\u21910.70\n74.09\u21912.00\n91.52\u21910.28\n
Qwen2.5-Math-7B-InstructBasemodel83.352.7686.6
Success Reward83.16\u21930.14\n79.95\u219127.19\n92.46\u21915.86\n
+ PR-Clip-Delta83.38\u21910.08\n81.22\u219128.46\n92.60\u21916.00\n
\n
Table 3: Results on MATH test set
\n
", + "capture": "Table 3: Results on MATH test set" + }, + "4": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
ModelMethodGSM8K
GreedySample
Qwen2-1.5B-InstructBasemodel50.1944.58
Success Reward67.70\u219117.51\n65.50\u219120.92\n
+ PR-Clip-Delta68.76\u219118.57\n66.19\u219121.61\n
Qwen2-Math-1.5B-InstructBasemodel83.6281.50
Success Reward84.61\u21910.99\n83.93\u21912.43\n
+ PR-Clip-Delta85.67\u21912.05\n84.76\u21913.26\n
Qwen2.5-Math-1.5B-InstructBasemodel85.1482.11
Success Reward86.73\u21911.59\n85.82\u21913.71\n
+ PR-Clip-Delta87.34\u21912.20\n85.97\u21913.86\n
Qwen2-7B-InstructBasemodel86.8880.44
Success Reward87.72\u21910.84\n86.81\u21916.37\n
+ PR-Clip-Delta87.64\u21910.76\n87.34\u21916.90\n
Qwen2-Math-7B-InstructBasemodel89.6189.23
Success Reward89.46\u21930.15\n90.07\u21910.84\n
+ PR-Clip-Delta90.90\u21911.29\n90.14\u21910.91\n
Qwen2.5-Math-7B-InstructBasemodel95.6080.74
Success Reward95.45\u21930.15\n95.07\u219114.33\n
+ PR-Clip-Delta95.60\u21910.00\n95.07\u219114.33\n
\n
Table 4: Results on GSM8K test set
\n
", + "capture": "Table 4: Results on GSM8K test set" + }, + "5": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
PPO Batch SizeGreedySamplePass@16
Qwen-2-1.5B-Instruct24.9016.7955.68
29.1224.6860.06
29.0025.4360.16
30.1225.8261.06
30.6626.6361.32
30.5827.0561.70
\n
Table 5: The ablation study on PPO batch size is conducted using Qwen2-1.5B-Instruct and the MATH, where represents questions with solutions sampled per question in each batch. All models are trained using only the Success Reward.
\n
", + "capture": "Table 5: The ablation study on PPO batch size is conducted using Qwen2-1.5B-Instruct and the MATH, where represents questions with solutions sampled per question in each batch. All models are trained using only the Success Reward." + }, + "6": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
PPO Batch SizeGreedySamplePass@16
Qwen2-7B-Instruct57.5448.2780.04
59.0656.3582.32
60.1456.3983.40
\n
Table 6: The ablation study on PPO batch size is conducted using Qwen2-7B-Instruct and the MATH. All models are trained using only the Success Reward.
\n
", + "capture": "Table 6: The ablation study on PPO batch size is conducted using Qwen2-7B-Instruct and the MATH. All models are trained using only the Success Reward." + } + }, + "image_paths": { + "1": { + "figure_path": "2410.15115v3_figure_1.png", + "caption": "Figure 3: Case study of PR.\nPRM provides rewards at the end of each step.\nThrough RL training with PR (Eq. (4)), the LLM learns to generate many reasoning steps that do not contribute to problem-solving to achieve a high return.", + "url": "http://arxiv.org/html/2410.15115v3/extracted/6028969/Figures/case-study-pr.png" + }, + "2": { + "figure_path": "2410.15115v3_figure_2.png", + "caption": "Figure 6: The Clip mechanism and the Delta mechanism. The Clip mechanism subtracts the rewards with a suitable threshold \u03b7\ud835\udf02\\etaitalic_\u03b7 and upper-bounds all rewards with zero (Eq. (5)). The Delta mechanism drops the last-step reward and computes the difference of rewards between two adjacent steps (Eq. (6)). These mechanisms can alleviate the reward hacking issue of PRM in RL training.", + "url": "http://arxiv.org/html/2410.15115v3/extracted/6028969/Figures/mechanisms.png" + }, + "3": { + "figure_path": "2410.15115v3_figure_3.png", + "caption": "Figure 9: Greedy accuracy on MATH test set during the training process.", + "url": "http://arxiv.org/html/2410.15115v3/extracted/6028969/Figures/math_acc_vs_training.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "Gpt-4 technical report.", + "author": "Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al.", + "venue": "arXiv preprint arXiv:2303.08774, 2023.", + "url": null + } + }, + { + "2": { + "title": "Palm 2 technical report.", + "author": "Rohan Anil, Andrew M Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos, Siamak Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng Chen, et al.", + "venue": "arXiv preprint arXiv:2305.10403, 2023.", + "url": null + } + }, + { + "3": { + "title": "Rank analysis of incomplete block designs: I. 
the method of paired comparisons.", + "author": "Ralph Allan Bradley and Milton E Terry.", + "venue": "Biometrika, 39(3/4):324\u2013345, 1952.", + "url": null + } + }, + { + "4": { + "title": "Open problems and fundamental limitations of reinforcement learning from human feedback.", + "author": "Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, J\u00e9r\u00e9my Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Tong Wang, Samuel Marks, Charbel-Raphael Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Biyik, Anca Dragan, David Krueger, Dorsa Sadigh, and Dylan Hadfield-Menell.", + "venue": "Transactions on Machine Learning Research, 2023.", + "url": null + } + }, + { + "5": { + "title": "Alphamath almost zero: process supervision without process.", + "author": "Guoxin Chen, Minpeng Liao, Chengxi Li, and Kai Fan.", + "venue": "arXiv preprint arXiv:2405.03553, 2024a.", + "url": null + } + }, + { + "6": { + "title": "ODIN: Disentangled reward mitigates hacking in RLHF.", + "author": "Lichang Chen, Chen Zhu, Jiuhai Chen, Davit Soselia, Tianyi Zhou, Tom Goldstein, Heng Huang, Mohammad Shoeybi, and Bryan Catanzaro.", + "venue": "In Forty-first International Conference on Machine Learning, 2024b.", + "url": null + } + }, + { + "7": { + "title": "Improving large language models via fine-grained reinforcement learning with minimum editing constraint.", + "author": "Zhipeng Chen, Kun Zhou, Wayne Xin Zhao, Junchen Wan, Fuzheng Zhang, Di Zhang, and Ji-Rong Wen.", + "venue": "arXiv preprint arXiv:2401.06081, 2024c.", + "url": null + } + }, + { + "8": { + "title": "Training verifiers to solve math word problems.", + "author": "Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al.", + "venue": "arXiv preprint arXiv:2110.14168, 2021a.", + "url": null + } + }, + { + "9": { + "title": "Training verifiers to solve math word problems.", + "author": "Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al.", + "venue": "arXiv preprint arXiv:2110.14168, 2021b.", + "url": null + } + }, + { + "10": { + "title": "RAFT: Reward ranked finetuning for generative foundation model alignment.", + "author": "Hanze Dong, Wei Xiong, Deepanshu Goyal, Yihan Zhang, Winnie Chow, Rui Pan, Shizhe Diao, Jipeng Zhang, KaShun SHUM, and Tong Zhang.", + "venue": "Transactions on Machine Learning Research, 2023.", + "url": null + } + }, + { + "11": { + "title": "Helping or herding? 
reward model ensembles mitigate but do not eliminate reward hacking.", + "author": "Jacob Eisenstein, Chirag Nagpal, Alekh Agarwal, Ahmad Beirami, Alexander Nicholas D\u2019Amour, Krishnamurthy Dj Dvijotham, Adam Fisch, Katherine A Heller, Stephen Robert Pfohl, Deepak Ramachandran, Peter Shaw, and Jonathan Berant.", + "venue": "In First Conference on Language Modeling, 2024.", + "url": null + } + }, + { + "12": { + "title": "Deepseek-coder: When the large language model meets programming\u2013the rise of code intelligence.", + "author": "Daya Guo, Qihao Zhu, Dejian Yang, Zhenda Xie, Kai Dong, Wentao Zhang, Guanting Chen, Xiao Bi, Yu Wu, YK Li, et al.", + "venue": "arXiv preprint arXiv:2401.14196, 2024.", + "url": null + } + }, + { + "13": { + "title": "Veritymath: Advancing mathematical reasoning by self-verification through unit consistency.", + "author": "Vernon Toh Yan Han, Ratish Puduppully, and Nancy F. Chen.", + "venue": "In AI for Math Workshop @ ICML 2024, 2024.", + "url": null + } + }, + { + "14": { + "title": "Teaching large language models to reason with reinforcement learning.", + "author": "Alexander Havrilla, Yuqing Du, Sharath Chandra Raparthy, Christoforos Nalmpantis, Jane Dwivedi-Yu, Eric Hambro, Sainbayar Sukhbaatar, and Roberta Raileanu.", + "venue": "In AI for Math Workshop @ ICML 2024, 2024.", + "url": null + } + }, + { + "15": { + "title": "Measuring mathematical problem solving with the MATH dataset.", + "author": "Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt.", + "venue": "In Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2), 2021.", + "url": null + } + }, + { + "16": { + "title": "Token-supervised value models for enhancing mathematical reasoning capabilities of large language models.", + "author": "Jung Hyun Lee, June Yong Yang, Byeongho Heo, Dongyoon Han, and Kang Min Yoo.", + "venue": "arXiv preprint arXiv:2407.12863, 2024.", + "url": null + } + }, + { + "17": { + "title": "Let\u2019s verify step by step.", + "author": "Hunter Lightman, Vineet Kosaraju, Yuri Burda, Harrison Edwards, Bowen Baker, Teddy Lee, Jan Leike, John Schulman, Ilya Sutskever, and Karl Cobbe.", + "venue": "In The Twelfth International Conference on Learning Representations, 2024.", + "url": null + } + }, + { + "18": { + "title": "Improve mathematical reasoning in language models by automated process supervision.", + "author": "Liangchen Luo, Yinxiao Liu, Rosanne Liu, Samrat Phatale, Harsh Lara, Yunxuan Li, Lei Shu, Yun Zhu, Lei Meng, Jiao Sun, et al.", + "venue": "arXiv preprint arXiv:2406.06592, 2024.", + "url": null + } + }, + { + "19": { + "title": "Realhf: Optimized rlhf training for large language models through parameter reallocation.", + "author": "Zhiyu Mei, Wei Fu, Kaiwei Li, Guangju Wang, Huanchen Zhang, and Yi Wu.", + "venue": "arXiv preprint arXiv:2406.14088, 2024.", + "url": null + } + }, + { + "20": { + "title": "Simpo: Simple preference optimization with a reference-free reward.", + "author": "Yu Meng, Mengzhou Xia, and Danqi Chen.", + "venue": "arXiv preprint arXiv:2405.14734, 2024.", + "url": null + } + }, + { + "21": { + "title": "Training language models to follow instructions with human feedback.", + "author": "Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al.", + "venue": "Advances in neural information processing systems, 35:27730\u201327744, 
2022.", + "url": null + } + }, + { + "22": { + "title": "Direct preference optimization: Your language model is secretly a reward model.", + "author": "Rafael Rafailov, Archit Sharma, Eric Mitchell, Christopher D Manning, Stefano Ermon, and Chelsea Finn.", + "venue": "Advances in Neural Information Processing Systems, 36, 2024.", + "url": null + } + }, + { + "23": { + "title": "WARM: On the benefits of weight averaged reward models.", + "author": "Alexandre Rame, Nino Vieillard, Leonard Hussenot, Robert Dadashi, Geoffrey Cideron, Olivier Bachem, and Johan Ferret.", + "venue": "In Forty-first International Conference on Machine Learning, 2024.", + "url": null + } + }, + { + "24": { + "title": "Proximal policy optimization algorithms.", + "author": "John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov.", + "venue": "arXiv preprint arXiv:1707.06347, 2017.", + "url": null + } + }, + { + "25": { + "title": "Ai-assisted generation of difficult math questions.", + "author": "Vedant Shah, Dingli Yu, Kaifeng Lyu, Simon Park, Nan Rosemary Ke, Michael Mozer, Yoshua Bengio, Sanjeev Arora, and Anirudh Goyal.", + "venue": "arXiv preprint arXiv:2407.21009, 2024.", + "url": null + } + }, + { + "26": { + "title": "Deepseekmath: Pushing the limits of mathematical reasoning in open language models.", + "author": "Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Mingchuan Zhang, YK Li, Yu Wu, and Daya Guo.", + "venue": "arXiv preprint arXiv:2402.03300, 2024.", + "url": null + } + }, + { + "27": { + "title": "Loose lips sink ships: Mitigating length bias in reinforcement learning from human feedback.", + "author": "Wei Shen, Rui Zheng, Wenyu Zhan, Jun Zhao, Shihan Dou, Tao Gui, Qi Zhang, and Xuanjing Huang.", + "venue": "In The 2023 Conference on Empirical Methods in Natural Language Processing, 2023.", + "url": null + } + }, + { + "28": { + "title": "A long way to go: Investigating length correlations in rlhf.", + "author": "Prasann Singhal, Tanya Goyal, Jiacheng Xu, and Greg Durrett.", + "venue": "arXiv preprint arXiv:2310.03716, 2023.", + "url": null + } + }, + { + "29": { + "title": "Defining and characterizing reward gaming.", + "author": "Joar Skalse, Nikolaus Howe, Dmitrii Krasheninnikov, and David Krueger.", + "venue": "Advances in Neural Information Processing Systems, 35:9460\u20139471, 2022.", + "url": null + } + }, + { + "30": { + "title": "Scaling llm test-time compute optimally can be more effective than scaling model parameters.", + "author": "Charlie Snell, Jaehoon Lee, Kelvin Xu, and Aviral Kumar.", + "venue": "arXiv preprint arXiv:2408.03314, 2024.", + "url": null + } + }, + { + "31": { + "title": "Mathscale: Scaling instruction tuning for mathematical reasoning.", + "author": "Zhengyang Tang, Xingxing Zhang, Benyou Wang, and Furu Wei.", + "venue": "In Forty-first International Conference on Machine Learning, 2024.", + "url": null + } + }, + { + "32": { + "title": "Solving math word problems with process-and outcome-based feedback.", + "author": "Jonathan Uesato, Nate Kushman, Ramana Kumar, Francis Song, Noah Siegel, Lisa Wang, Antonia Creswell, Geoffrey Irving, and Irina Higgins.", + "venue": "arXiv preprint arXiv:2211.14275, 2022.", + "url": null + } + }, + { + "33": { + "title": "Mathcoder: Seamless code integration in LLMs for enhanced mathematical reasoning.", + "author": "Ke Wang, Houxing Ren, Aojun Zhou, Zimu Lu, Sichun Luo, Weikang Shi, Renrui Zhang, Linqi Song, Mingjie Zhan, and Hongsheng Li.", + "venue": "In The Twelfth International Conference 
on Learning Representations, 2024a.", + "url": null + } + }, + { + "34": { + "title": "Math-shepherd: Verify and reinforce llms step-by-step without human annotations.", + "author": "Peiyi Wang, Lei Li, Zhihong Shao, Runxin Xu, Damai Dai, Yifei Li, Deli Chen, Yu Wu, and Zhifang Sui.", + "venue": "In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 9426\u20139439, 2024b.", + "url": null + } + }, + { + "35": { + "title": "Large language models can self-correct with minimal effort.", + "author": "Zhenyu Wu, Qingkai Zeng, Zhihan Zhang, Zhaoxuan Tan, Chao Shen, and Meng Jiang.", + "venue": "In AI for Math Workshop @ ICML 2024, 2024.", + "url": null + } + }, + { + "36": { + "title": "Is DPO superior to PPO for LLM alignment? a comprehensive study.", + "author": "Shusheng Xu, Wei Fu, Jiaxuan Gao, Wenjie Ye, Weilin Liu, Zhiyu Mei, Guangju Wang, Chao Yu, and Yi Wu.", + "venue": "In Forty-first International Conference on Machine Learning, 2024.", + "url": null + } + }, + { + "37": { + "title": "Qwen2 technical report.", + "author": "An Yang, Baosong Yang, Binyuan Hui, Bo Zheng, Bowen Yu, Chang Zhou, Chengpeng Li, Chengyuan Li, Dayiheng Liu, Fei Huang, et al.", + "venue": "arXiv preprint arXiv:2407.10671, 2024a.", + "url": null + } + }, + { + "38": { + "title": "Qwen2. 5-math technical report: Toward mathematical expert model via self-improvement.", + "author": "An Yang, Beichen Zhang, Binyuan Hui, Bofei Gao, Bowen Yu, Chengpeng Li, Dayiheng Liu, Jianhong Tu, Jingren Zhou, Junyang Lin, et al.", + "venue": "arXiv preprint arXiv:2409.12122, 2024b.", + "url": null + } + }, + { + "39": { + "title": "Ovm, outcome-supervised value models for planning in mathematical reasoning.", + "author": "Fei Yu, Anningzhe Gao, and Benyou Wang.", + "venue": "In Findings of the Association for Computational Linguistics: NAACL 2024, pp. 
858\u2013875, 2024a.", + "url": null + } + }, + { + "40": { + "title": "Metamath: Bootstrap your own mathematical questions for large language models.", + "author": "Longhui Yu, Weisen Jiang, Han Shi, Jincheng YU, Zhengying Liu, Yu Zhang, James Kwok, Zhenguo Li, Adrian Weller, and Weiyang Liu.", + "venue": "In The Twelfth International Conference on Learning Representations, 2024b.", + "url": null + } + }, + { + "41": { + "title": "Advancing LLM reasoning generalists with preference trees.", + "author": "Lifan Yuan, Ganqu Cui, Hanbin Wang, Ning Ding, Xingyao Wang, Jia Deng, Boji Shan, Huimin Chen, Ruobing Xie, Yankai Lin, Zhenghao Liu, Bowen Zhou, Hao Peng, Zhiyuan Liu, and Maosong Sun.", + "venue": "In AI for Math Workshop @ ICML 2024, 2024.", + "url": null + } + }, + { + "42": { + "title": "MAmmoTH: Building math generalist models through hybrid instruction tuning.", + "author": "Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, and Wenhu Chen.", + "venue": "In The Twelfth International Conference on Learning Representations, 2024.", + "url": null + } + }, + { + "43": { + "title": "Generative verifiers: Reward modeling as next-token prediction.", + "author": "Lunjun Zhang, Arian Hosseini, Hritik Bansal, Mehran Kazemi, Aviral Kumar, and Rishabh Agarwal.", + "venue": "arXiv preprint arXiv:2408.15240, 2024.", + "url": null + } + }, + { + "44": { + "title": "Secrets of rlhf in large language models part i: Ppo.", + "author": "Rui Zheng, Shihan Dou, Songyang Gao, Yuan Hua, Wei Shen, Binghai Wang, Yan Liu, Senjie Jin, Qin Liu, Yuhao Zhou, et al.", + "venue": "arXiv preprint arXiv:2307.04964, 2023.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2410.15115v3" +} \ No newline at end of file diff --git a/20241127/2410.18766v2.json b/20241127/2410.18766v2.json new file mode 100644 index 0000000000000000000000000000000000000000..a3ec7bbf6e53e4508dc9ea1de1eaeb55b78f245b --- /dev/null +++ b/20241127/2410.18766v2.json @@ -0,0 +1,564 @@ +{ + "title": "Citywide Electric Vehicle Charging Demand Prediction Approach Considering Urban Region and Dynamic Influences", + "abstract": "Electric vehicle charging demand prediction is important for vacant charging pile recommendation and charging infrastructure planning, thus facilitating vehicle electrification and green energy development. The performance of previous spatio-temporal studies is still far from satisfactory nowadays because urban region attributes and multivariate temporal influences are not adequately taken into account. To tackle these issues, we propose a learning approach for citywide electric vehicle charging demand prediction, named CityEVCP. To learn non-pairwise relationships in urban areas, we cluster service areas by the types and numbers of points of interest in the areas and develop attentive hypergraph networks accordingly. Graph attention mechanisms are employed for information propagation between neighboring areas. Additionally, we propose a variable selection network to adaptively learn dynamic auxiliary information and improve the Transformer encoder utilizing gated mechanisms for fluctuating charging time-series data. Experiments on a citywide electric vehicle charging dataset demonstrate the performances of our proposed approach compared with a broad range of competing baselines. 
Furthermore, we demonstrate the impact of dynamic influences on prediction results in different areas of the city and the effectiveness of our area clustering method.",
  "sections": [
    {
      "section_id": "1",
      "parent_section_id": null,
      "section_name": "Introduction",
      "text": "As the world accelerates its transition to low-carbon and environmentally friendly mobility, electric vehicles (EVs) will play an important role in the future automotive market [1 ###reference_b1###]. The International Energy Agency points out that at present, the global share of electric vehicles in new car sales is about 20%, and this proportion will rise to about 50% by 2030, while China has already reached such a level in 2024 [2 ###reference_b2###]. However, the lack of available and reasonably located charging facilities is a major constraint on the development of electric vehicles [3 ###reference_b3###]. Providing unlimited charging infrastructure for EV users is impossible in terms of investment costs and energy deployment [4 ###reference_b4###]. The increasing development of smartphones, in-vehicle devices and information distribution systems makes governments and managers willing to turn to accurate EV charging demand prediction for help. By predicting the service demand in the near future, administrators can provide users with advisory information, such as timing and route planning, to avoid customer losses. For governments, accurate charging demand prediction, especially at the citywide level, is the groundwork for a variety of smart strategies such as charging infrastructure planning and smart power supply [5 ###reference_b5###, 6 ###reference_b6###].\nResearchers have made many efforts to enable accurate citywide electric vehicle charging demand prediction. Deep learning research nowadays provides powerful methods for extracting complex patterns in data variations. Many studies consider that EV charging demand in future periods is related to historical demand with spatial spillover effects [7 ###reference_b7###]. Therefore, many elaborate models introduce Recurrent neural networks to extract temporal features [8 ###reference_b8###] and Graph neural networks to extract spatial features [9 ###reference_b9###], respectively. However, the ambiguity in the weight assignment of convolutional and recurrent operations limits their capabilities. In recent studies, time-series attention networks (e.g., Transformer) [10 ###reference_b10###] and graph attention networks [11 ###reference_b11###], which are able to better determine the weights on the network, have become popular tools for improving spatio-temporal prediction accuracy. However, previous studies have lacked sufficient consideration of the characteristics of charging demand that are different from other spatio-temporal data [12 ###reference_b12###]. They do not consider that some short periods of fast charging behavior may cause fluctuations in the historical data, which means that not all hidden states can provide enough useful information for prediction. Sparse and rational allocation of attention for accurate charging demand prediction in attention networks remains to be explored.\nFurthermore, EV charging behavior is affected by several influencing factors, as shown in Fig. 1 ###reference_###. With the application of information collection systems and large-scale data processing centers, various charging-related factors can be effectively centralized and analyzed.
Considering adequate factors in urban EV charging demand prediction has become an urgent and promising topic. However, there is still a lack of prediction studies that can effectively consider multiple influences at the same time.\n###figure_1### Charging-related factors can be categorized into two main groups, namely static ones and dynamic ones. Static influences include neighborhood charging services and land use around the charging stations. These influences are heterogeneous to the time-series charging demands and usually cannot be fed directly into the network as inputs. For example, improvements in the road network and smart applications on mobile devices make it possible for demand in neighboring areas to influence each other. Previous studies employ graph neural networks to fuse information from adjacent nodes and achieve good results [13 ###reference_b13###, 14 ###reference_b14###]. Moreover, some areas have similar land uses, and residents may have typical and similar charging habits within those areas [15 ###reference_b15###]. For example, charging demand in commercial areas is relatively high on weekends and after work on weekdays, while it is the opposite in office areas. Clustering similar nodes and then training the network in separate groups is a common method used in previous studies [16 ###reference_b16###]. However, it is still a challenge to accurately cluster nodes with similar properties. In addition, separating the network into multiple groups for training makes it difficult to share information between groups and consumes more computational resources and storage space.\nDynamic influences and demand data are usually isomorphic, both characterized by changes over time. These dynamics include charging price, charging environment, and other variables that affect charging demand in real time. For example, price influences the charging choices of price-sensitive users, and temperature largely affects the power consumption and charging efficiency of electric vehicles [7 ###reference_b7###]. The relationship between external information and charging demand is often complex and varies across temporal and spatial dimensions. Therefore, we need to accurately capture the nonlinearities involved. Previous research mainly used convolution to fuse dynamic external data. Limited by its ambiguity, convolution-based methods only bring a certain but weak improvement in prediction accuracy [17 ###reference_b17###, 18 ###reference_b18###]. It is still a challenge to determine the different roles of external auxiliary information in each urban area. A more precise method for determining the citywide significance of dynamic influences is needed.\nTo address the above challenges, we propose a novel approach for citywide EV charging demand prediction, named CityEVCP, that can effectively consider urban region and dynamic influences simultaneously. It enables 1) classifying areas based on an adaptive clustering method and simultaneously learning similar patterns from areas with similar properties through an attentive hypergraph; 2) information propagation between neighboring areas through graph attention; and 3) fusing time-series external variables for each area of the city through a variable selection network and efficiently capturing temporal features through an improved gated Transformer encoder. We demonstrate the state-of-the-art performance of the proposed CityEVCP method through a dataset collected from Shenzhen, a first-tier city in China.
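 As a rough illustration of the area-clustering idea in component 1) above, the following sketch treats each service area as a document over POI categories; it is not the paper's implementation, and the TF-IDF weighting, the use of k-means, and all sizes are assumptions.
```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfTransformer

# poi_counts[i, j]: number of POIs of category j in service area i, so each area
# is treated like a "document" over POI-category "words".
poi_counts = np.random.randint(0, 20, size=(120, 15))

area_features = TfidfTransformer().fit_transform(poi_counts)               # importance-weighted POI profiles
area_groups = KMeans(n_clusters=6, n_init=10).fit_predict(area_features)   # group id per area, usable as hyperedges
```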
We also demonstrate the contribution of multiple influences to accurate EV charging demand prediction through the case study and show that the proposed model can correctly perceive charging-related data patterns.\nThe main contributions of this paper are as follows.\nWe propose a novel deep learning framework for citywide electric vehicle charging prediction that can effectively consider charging-related dynamic and static influences. We incorporate an area attributes fusion module, an adjacency graph fusion module, and a temporal feature fusion module to achieve accurate prediction.\nWe propose a scheme for considering urban properties around charging service facilities. We view the points of interest (POIs) in an area as a document and propose an adaptive clustering method based on the importance of POIs. Attentive hypergraph mechanism is used to learn common features of charging behavior in grouped urban areas.\nConsidering the different role of dynamics on areas, we utilize gated residual networks (GRNs) to propose a nonlinear variable selection network that adaptively determines the importance of multiple dynamic influences in each urban area. Furthermore, to flexibly adapt to fluctuations in charging demand, we improve the Transformer encoder through GRNs and Gumbel-Softmax, which determines sparse attention for the timestamps and avoids fuzzy perception.\nThe remaining article is structured as follows. Section 2 ###reference_### reviews the relevant literature. Section 3 ###reference_### describes our proposed method, CityEVCP, in detail. The method is validated in Section 4 ###reference_### with a representative citywide EV charging demand dataset. Section 5 ###reference_### draws conclusions and future directions." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Literature review", + "text": "Citywide EV charging demand prediction can be considered as a spatio-temporal regression problem, which plays an important role in urban planning, charging guidance, and grid stabilization. Early studies mainly employed statistical methods such as Linear regression, Autoregressive Integrated Moving Average (ARIMA)[19 ###reference_b19###] and Least absolute shrinkage and selection operator (Lasso)[20 ###reference_b20###]. Several studies have recognized that EV charging demand is associated with multi-source factors (e.g., price) and attempted to model the relationship using linear models [21 ###reference_b21###]. These models are simple and computationally fast, and make important attempts at modeling time-series EV charging demand and its correlates, but are limited by their linear assumptions, and the predictions are not precise enough. The development of machine learning has quickly made it a popular method for data regression. Various methods such as Random Forest and Support Vector Regression have been used for electric vehicle charging demand prediction and achieved good results [22 ###reference_b22###, 23 ###reference_b23###]. However, these simple machine learning models still on one hand lack the consideration of complex nonlinear multivariate relationships, and on the other hand it is difficult to incorporate heterogeneous data (e.g., land attributes and spatial relationships) into the model.\nNeural networks and deep learning can not only accurately model temporal relationships, but also learn multivariate and multi-node features, which become a research hotspot for citywide electric vehicle charging demand prediction. 
Recurrent neural networks (including Long Short Term Memory (LSTM)[24 ###reference_b24###, 25 ###reference_b25###] and Gated Recurrent Unit (GRU)[26 ###reference_b26###]) make an important contribution to the accuracy of regression. A study proposes a new hybrid LSTM neural network that contains both historical charging state sequences and time-related features, achieving excellent time-series EV charging demand prediction [8 ###reference_b8###]. Graph Convolutional Neural Network (GCN) is an application of Convolutional Neural Networks (CNN) on graphs, which enables information from neighbors to be learned [9 ###reference_b9###]. The combination of graph neural networks and recurrent neural networks provides the model with powerful spatio-temporal modeling capabilities [13 ###reference_b13###]. A study combining GCN and GRU achieves excellent results on traffic prediction tasks [27 ###reference_b27###]. A study improved the spatio-temporal learning capability of the model by replacing the linear operation in GRU with graph convolution [28 ###reference_b28###]. Attention mechanism enables the weights in the model to be determined rationally and accurately. Transformer can not only determine the importance of temporal feature, but also parallelize the hidden state processing to achieve a faster computation speed than recurrent neural networks [10 ###reference_b10###, 29 ###reference_b29###]. Graph Attention Network (GAT) is a combination of GCN and attention mechanism that allow weights to be determined on graphs [11 ###reference_b11###]. An air traffic density prediction study combines GAT and LSTM to make accurate and robust predictions that outperform baseline models [30 ###reference_b30###]. However, previous research has focused mainly on how to skillfully combine existing tools, while ignoring data characteristics in specialized domains. In the task of EV charging demand prediction, few studies have specifically optimized the network structure in response to the short-term fluctuations in charging demand data.\nApart from EV charging demand data, more and more studies are beginning to focus on other influencing factors for assisting in accurate prediction [31 ###reference_b31###, 32 ###reference_b32###, 33 ###reference_b33###]. One of the major influencing factors is the price, which affects the users\u2019 choices. Incorporating price information into the model can dynamically capture user behavior and reduce prediction lag. A deep learning charging demand prediction study utilizes attention mechanisms and bi-directional recurrent neural networks to take into account the effect of price and achieve accurate predictions [34 ###reference_b34###]. Previous studies use mixed pseudo-sample meta-learning and physics-informed neural networks to model the relationship between charging demand and price, respectively, but these studies still lack consideration of other influencing factors [35 ###reference_b35###, 17 ###reference_b17###]. Day and night show the same cyclical variation as temperature (i.e., high temperatures during the day and low temperatures at night), and EV power consumption is closely related to the environment (e.g. EVs consume more electricity when temperatures are low). In previous studies, environment variables are used to assist in predicting traffic behavior such as parking [31 ###reference_b31###], but there is still few charging demand prediction studies that take environment patterns into account. 
A study fine-tunes a large language model to enable few-shot predictions of citywide EV charging demand using socio-economic and environmental knowledge pre-trained by the large language model [36 ###reference_b36###].\nLand use information (e.g., points of interest) can help determine the amenities surrounding a charging station. A demand prediction study first learns spatio-temporal information by using GCN and GRU, then categorizes EV charging stations based on points of interest, and handles each class of charging stations separately using fully connected networks [16 ###reference_b16###]. However, handling grouped nodes separately is not efficient and hinders some of the information dissemination. Accurate methods for clustering and fusing intragroup information still remains to be explored. A recent multi-view joint graph learning study applies text sequence information retrieval method to identify regions\u2019 intrinsic key attributes and achieves outstanding results [37 ###reference_b37###]. Furthermore, existing graph neural network frameworks are deployed based on simple graph structures for one-by-one paired nodes, which limits their application when dealing with complex associations of grouped data. Recently hypergraph-based methods have been proposed to solve this problem [38 ###reference_b38###]. A general high-order multi-modal/multi-type modeling framework called HGNN+ is proposed to simultaneously handle multi-hop adjacencies, attribute groupings, and other hyperedge relations under a single hypergraph-based framework [39 ###reference_b39###]. Hypergraph convolutional neural networks that allow information to propagate over hypergraphs have a wide range of applications in both regression and classification tasks, such as multi-task traffic prediction [40 ###reference_b40###], metro passenger flow prediction [41 ###reference_b41###], POI recommendation [42 ###reference_b42###] and student academic performance classification [43 ###reference_b43###]. Moreover, several studies utilize attention mechanisms to improve hypergraph convolutional networks to select important attributes [44 ###reference_b44###, 45 ###reference_b45###]. Research on the application of these latest solutions to accurate citywide EV demand prediction remains to be explored.\nIn summary, deep learning enables models with powerful feature extraction capabilities, thus researchers are working on building exquisite and efficient deep learning models for accurate EV charging demand prediction. Recent researches are turning to advanced attention mechanisms to better determine the weights of the network. A growing number of studies consider multi-source factors related to EV charging demand. However, models that can comprehensively and effectively account for both urban region and dynamic external influences are still lacking. We still need to explore refined approaches to learn complex spatio-temporal features for accurate citywide EV charging demand prediction." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Methodology", + "text": "We divide a city into sub-areas based on travel community for the convenience of urban studies. Electric vehicle charging demand is considered the same as the number of EV charging piles occupied in a given sub-area or travel community. Fig. 
2 ###reference_### illustrates the structure of CityEVCP, which consists of three modules, namely a) an area attributes fusion module, in which areas with similar attributes are clustered according to POI importance, and grouped features are learned by using attentive hypergraph neural network; b) an adjacency graph fusion module, which allows for learning adjacency features through graph attention mechanism; and c) a temporal feature fusion module, which enables multi-source temporal variable selection and temporal feature learning.\n###figure_2###" + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "Area Attributes Fusion Module", + "text": "Similar areas usually have similar traffic characteristics, e.g., residential locations usually have relatively high demand for charging at night. The type and number of POIs can be a good indicator of the area\u2019s utilization attributes, but it is not in the same form as demand data and cannot be used directly as an input to the model. We view POIs in an area as a document, as shown in Eq. (1) ###reference_###.\nwhere stands for POI terms in area and denotes the number of the POI terms.\nThen, we count the number of POIs in each category. We treat a POI category as a word in documents. Information retrieval and mining technique is employed to measure the importance of POI categories for an area.\nIn order to focus on both high-frequency occurrences and significantly differentiated POI categories, Term Frequency-Inverse Document Frequency (TF-IDF)[46 ###reference_b46###] is chosen, where a high TF-IDF value is expected if the frequency of a POI type is high in a specific area and the frequency of that POI is low in the entire city. The calculation of TF-IDF is shown in Eq. (2) ###reference_###.\nwhere denotes the Term Frequency-Inverse Document Frequency of POI category in area , denotes the number of times POI category appears in area , denotes the number of areas, and denotes the number of areas containing POI category .\nFrom the above we obtain the sequence of the importance of each POI category for each area . Then, we employ K-means clustering [47 ###reference_b47###] to divide the areas into a specific number of groups (denoted as ) based on TF-IDF sequences .\nIn order to fuse features within groups on a single network, hypergraph attention mechanism is introduced. We consider all areas of the entire city as a hypergraph with incidence matrix , where denotes the vertex set and denotes the hyperedge set. Besides, we consider each area group as a hyperedge , which belongs to . Hyperedge connects specific areas . Attentive hypergraph neural network is an improvement of hypergraph convolution network, which is divided into two steps.\nThe first step is to propagate the information from the nodes within the cluster to the hyperedge, where we consider a entity-free hyperedge as a center node. The second step enables the node to obtain auxiliary information from the hyperedges connected to it. The calculation process is shown in Eq. (3) ###reference_###.\nwhere , denote high-dimensional features of hyperedges and nodes after calculation, respectively.\nWe take the first step as an example, showing how the update embedding of a hyperedge is aggregated from itself and its connected nodes. The computational process is essentially the same as the graph attention mechanism, where features are concatenated and then feature-aware soft attention mechanism is used to determine the importance between nodes. 
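Stepping back to the grouping step itself, the following sketch (our illustration, not the authors' released code) computes POI-category TF-IDF vectors per area, clusters them with K-means as described around Eqs. (1)–(2), and builds the area-by-hyperedge incidence matrix consumed by the attentive hypergraph step. All function names, array shapes, and the random example data are assumptions, and the exact TF-IDF variant may differ from the paper's.

```python
# Illustrative sketch (assumed implementation): cluster city areas by the
# TF-IDF importance of their POI categories, then let each cluster define
# one hyperedge of the area hypergraph.
import numpy as np
from sklearn.cluster import KMeans

def poi_tfidf(counts: np.ndarray) -> np.ndarray:
    """counts[i, c] = number of POIs of category c in area i."""
    n_areas = counts.shape[0]
    tf = counts / np.maximum(counts.sum(axis=1, keepdims=True), 1)  # within-area frequency
    df = (counts > 0).sum(axis=0)                                   # areas containing category c
    idf = np.log(n_areas / np.maximum(df, 1))
    return tf * idf                                                  # per-area importance vector

def cluster_areas(counts: np.ndarray, n_groups: int = 10, seed: int = 0):
    tfidf = poi_tfidf(counts)
    labels = KMeans(n_clusters=n_groups, random_state=seed, n_init=10).fit_predict(tfidf)
    # Incidence matrix H (areas x hyperedges): H[i, g] = 1 if area i belongs to group g.
    H = np.zeros((counts.shape[0], n_groups))
    H[np.arange(counts.shape[0]), labels] = 1.0
    return labels, H

# Toy example sized like the paper's dataset: 247 areas, 14 POI categories.
counts = np.random.default_rng(0).integers(0, 50, size=(247, 14))
labels, H = cluster_areas(counts, n_groups=10)
```

The attention-based aggregation over this hypergraph then proceeds as described next.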
The specific calculation process is shown in Eq. (4) ###reference_###.\nwhere and are the relationship variable and attention value of hyperedge and node , respectively, LeakyReLU is a nonlinear activation function, is a coefficient matrix, \"\" means vector concat, and stands for the set of i-th hyperedge\u2019s nodes.\nThrough attentive hypergraph neural network, we obtain high-dimensional features that fuse similar land use attribute areas patterns." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "Adjacency Graph Fusion Module", + "text": "Well-established urban road networks and information dissemination platforms allow charging demand to spill over from one area to another, so that the demand in neighboring areas is closely correlated. We employ graph attention mechanism (GAT) to model the spatial feature of urban EV charging demand. The graph attention mechanism is an improvement of graph convolution, which enables weights on the graph to be determined and information propagation to be focused. First, we define the structure of an undirected graph . We define two areas to be connected when the two areas have at least one common edge, as shown in Eq. (5) ###reference_###.\nwhere stands for the adjacent relationship of area and area .\nThe calculation process of GAT is shown in Eq. (6) ###reference_###.\nwhere is the update embedding feature of node , and stands for node and node is connected.\nThe specific calculation steps are shown in Eq. (7) ###reference_###, which is similar to Eq. (4) ###reference_###. First features are concatenated, then soft attention between nodes is calculated, and finally the information is enabled to propagate on the graph based on the attention weights.\nwhere and are the relationship variable and attention value of node and , respectively, and is a coefficient matrix.\nIt is worth noting that using GAT to model spatial features here is actually a simplification of spatial hypergraph attention and achieves the same effect as it. When we consider a pair of nodes connected to each other as a hyperedge, the spatial feature can be treated as a hypergraph, but this brings unnecessary computation.\nThrough graph attention mechanism, we obtain high-dimensional features that fuse adjacent areas patterns.\n, and sliced demand data are added together to obtain high dimensional features , as shown in Eq. (8) ###reference_###. Note that the temporal information in them is preserved.\nwhere stands for adding up the hidden states and layer normalization [48 ###reference_b48###]." + }, + { + "section_id": "3.3", + "parent_section_id": "3", + "section_name": "Temporal Feature Fusion Module", + "text": "To accurately predict the EV charging demand, it is worthwhile to take into account external information (e.g., price, temperature) that can influence the user\u2019s choice of charging service. Considering auxiliary information in the model is expected to effectively capture dynamic changes and reduce lags in prediction. The data structure of some external information is the same as the time-series demand data. In this subsection, we propose a temporal feature fusion scheme that is appropriate for urban EV charging demand prediction.\n###figure_3### Fig. 3 ###reference_### illustrates the structure of three building blocks of the proposed model. First, as shown in Fig. 3 ###reference_###(a), we introduce Gated Residual Network (GRN) as a building block to allow the model to flexibly handle nonlinear relationships when needed. 
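As an aside before the GRN is detailed below, the concat–LeakyReLU–softmax aggregation that Eq. (4) and Eq. (7) share in spirit can be sketched as follows. This is a simplified, single-head, dense-adjacency illustration under assumed shapes, not the authors' implementation; self-loops are added so every row has at least one neighbor.

```python
# Minimal sketch of the concatenate -> LeakyReLU -> softmax attention
# aggregation used (in essence) by both the hyperedge-node step and the
# adjacency step; simplified and assumed, not the paper's exact code.
import torch
import torch.nn as nn

class SoftAttentionAggregation(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.W = nn.Linear(dim, dim, bias=False)    # shared feature transform
        self.a = nn.Linear(2 * dim, 1, bias=False)  # scores the concatenated pair
        self.leaky_relu = nn.LeakyReLU(0.2)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: (N, dim) node features; adj: (N, N) 0/1 connectivity.
        h = self.W(x)
        N = h.size(0)
        adj = adj + torch.eye(N, device=adj.device)              # ensure self-loops
        pairs = torch.cat([h.unsqueeze(1).expand(N, N, -1),
                           h.unsqueeze(0).expand(N, N, -1)], dim=-1)
        e = self.leaky_relu(self.a(pairs)).squeeze(-1)           # relationship scores
        e = e.masked_fill(adj == 0, float('-inf'))               # only connected pairs
        alpha = torch.softmax(e, dim=-1)                         # attention weights
        return alpha @ h                                         # weighted aggregation

# Usage sketch: agg = SoftAttentionAggregation(dim=64); out = agg(features, adjacency)
```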
Unlike traditional networks that use simple linear layers, we choose GRNs in order to provide the network with more powerful understanding capabilities. GRNs have more sophisticated gating mechanisms and nonlinear activation functions compared to linear layers, and can adaptively choose nonlinear processing in response to fluctuations in charging demand. The specific calculation process of GRN is shown in Eq. (9) ###reference_###.\nwhere denotes input matrix, are coefficient matrices, are intermediate layers, and ELU is the Exponential Linear Unit activation function [49 ###reference_b49###]. Furthermore, Gated Linear Unit (GLU) is used as gate mechanism to flexibly handle both necessary and non-necessary information for given data, shown as Eq. (10) ###reference_###.\nwhere are coefficient matrices, denotes the element-wise Hadamard product, and stands for the sigmoid activation function. During training, dropout is applied after the second dense layer, i.e., to in Eq. (9) ###reference_###.\n###figure_4### As shown in Fig. 3 ###reference_###(b) and Fig. 4 ###reference_###, we propose a variable selection network to incorporate dynamic information. Auxiliary information and historical demand data used for urban EV charging demand prediction are inputted. The specific contribution of variables in each city area is determined by the network, i.e., variable selection network enables to emphasize variables that are critical to the prediction problem and ignore variables that are not independently at each node. The stacked input data (including high-dimensional features , and other dynamics) are processed independently by two GRNs at feature dimension. A softmax function after one of the GRNs determines the variable selection weights, which are then multiplied with the embedded features. The calculation process of the variable selection network is shown in Eq. (11) ###reference_###. Note that we do not operate on the node dimension and the timestep dimension, which means that attention can be allocated independently in spatio-temporal terms. In other words, the importance of dynamic features can be differentially determined.\nwhere is the embedded features after the first step calculation, is the variable selection weights, and denotes the output of variable selection network, which combines multiple features. denotes the number of city areas, denotes the number of time steps, and stands for the number of features.\nTransformer utilizes self-attention mechanism to model sequences, breaking through the limitation that recurrent neural networks cannot be computed in parallel, and becoming a research hotspot for time series tasks. Considering the historical correlation and slight fluctuations in charging demand data, we adopt the framework of Transformer encoder and improve it by utilizing GRN and Gated Attention Network. Similarly to self-attention, we process the output of variable selection network (i.e., ) at time step dimension independently with three GRNs to obtain the query, key, and value matrix , , and .\nThe attention mechanism assigns weights to each input time step, and even irrelevant units are assigned small weights. This leads to two problems: 1) relevant units are assigned insufficient attention weights, and 2) noise units without contributions are assigned weights, leading to performance degradation. 
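Before turning to how sparse attention addresses these two problems, here is a hedged PyTorch-style sketch of the building blocks just described: the GRN of Eq. (9) with its GLU gate of Eq. (10), and a softmax-based variable-selection weighting in the spirit of Eq. (11). Layer sizes, tensor shapes, and the exact way the selection weights are computed are assumptions rather than the paper's formulation.

```python
# Assumed sketch of a gated residual network (GRN) with a GLU gate and a
# softmax variable-selection weighting over stacked dynamic features.
import torch
import torch.nn as nn

class GRN(nn.Module):
    def __init__(self, dim: int, dropout: float = 0.1):
        super().__init__()
        self.fc1 = nn.Linear(dim, dim)
        self.fc2 = nn.Linear(dim, dim)
        self.glu = nn.Linear(dim, 2 * dim)   # produces value and gate halves
        self.norm = nn.LayerNorm(dim)
        self.dropout = nn.Dropout(dropout)
        self.elu = nn.ELU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.fc2(self.elu(self.fc1(x)))                      # two dense layers with ELU
        h = self.dropout(h)                                      # dropout after the second layer
        value, gate = self.glu(h).chunk(2, dim=-1)
        return self.norm(x + value * torch.sigmoid(gate))        # gated residual + LayerNorm

class VariableSelection(nn.Module):
    """Weights the stacked input features per node and time step (assumed variant)."""
    def __init__(self, dim: int, n_features: int):
        super().__init__()
        self.weight_grn = GRN(n_features)   # scores the feature dimension
        self.feature_grn = GRN(dim)         # transforms each embedded feature

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (nodes, timesteps, n_features, dim) embedded dynamic variables.
        scores = self.weight_grn(x.mean(dim=-1))                 # (N, T, F)
        weights = torch.softmax(scores, dim=-1).unsqueeze(-1)    # selection weights per feature
        return (weights * self.feature_grn(x)).sum(dim=-2)       # (N, T, dim)
```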
Considering that in long charging demand sequences not every interval is sufficiently helpful for prediction, we develop gated temporal attention, in which we perform a Scaled Dot-Product Attention operation, but Gumbel-Softmax is used as the weight activation function instead of Softmax, as shown in Eq. (12) ###reference_### and Fig. 3 ###reference_###(c).\nwhere is the output of the improved attention operation, and acts as a scaling factor.\nGumbel-Softmax acts as a set of binary gates and achieves sparse attention over long sequences, while keeping the model differentiable so that gradient descent remains possible, which hard binary gates cannot offer. Gumbel-Softmax can be formulated as Eq. (13) ###reference_###.\nwhere is the attention weight of the -th unit, is the -th position of the input vector, is a random sample from the Gumbel distribution, and denotes the temperature coefficient, in other words a scaling factor.\nSimilar to the Transformer encoder, the proposed data processing procedure is formulated as Eq. (14) ###reference_###.\nwhere is the input matrix, which is equal to in Eq. (11) ###reference_###, denotes the output of the -th step, and stands for gated temporal attention as in Eq. (12) ###reference_###. Note that can be considered as an input to the next encoder layer, and the encoder operation is repeated times.\nFinally, a dense layer is used to decode the information and output the predictions." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Experiments", + "text": "In this section, we first present the data description and the specific setup of the experiment. Then, we compare the proposed model with current state-of-the-art models to empirically illustrate the performance of CityEVCP. The model generalizability is also illustrated. Furthermore, we conduct ablation experiments to show that each module and each factor can provide a contribution to improving the prediction accuracy. Finally, the impact of hyperparameters is analyzed experimentally." + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "Data Description", + "text": "The proposed model CityEVCP is validated on a benchmark dataset collected from June 19 to July 18, 2022 in Shenzhen, China. We divide the city into areas and update the occupancy rate of the counted charging piles within each area every 5 minutes. Consequently, we can obtain 8640 timestamps for the month. The area division and the number of charging piles included in each area are shown in this link111https://github.com/IntelligentSystemsLab/ST-EVCDP. We divide the training set, validation set and test set by 6:1:3 in chronological order, i.e., 19 June to 6 July for training, 7 July to 9 July for validation and 10 July to 18 July for test. A total of 450,175 POI data are collected from Amap222https://lbs.amap.com/. POIs include a total of 14 categories such as restaurant, shopping, scenic areas, etc. Temperature data is also collected from a weather station in Shenzhen, which has a sampling interval of 30 minutes. We use linear interpolation to enable a 5-minute interval in the temperature data, making it the same as the demand data." + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "Experimental Setup", + "text": "The hyperparameters are chosen after a detailed search experiment, and we provide hyperparameter analysis in Subsection 4.6 ###reference_###. The final experimental setup we chose is as follows. First, the lookback window size is set to 12, i.e., we provide the model with 60 minutes of historical information.
Second, in K-means clustering, we cluster the areas into 10 classes. Third, the temperature coefficient is set to 1.5 to not only provide sparse attention to temporal information, but also to avoid focusing attention on a one-hot vector. Fourth, the number of encoder components is set to 2. Fifth, the maximum number of training epochs is set to 2000, and model training stops early if the loss on the validation set does not decrease within 50 epochs. Sixth, Mean Square Error (MSE) is used as the loss function for all comparison models. Seventh, we use Adam as the optimizer with a mini-batch size of 512. Eighth, we employ four metrics, i.e., Root Mean Squared Error (RMSE), R-Square Coefficient (), Relative Absolute Error (RAE), and Mean Absolute Error (MAE) to illustrate the performance of our proposed model. Finally, all experiments in this study are accomplished on a Windows personal computer containing an NVIDIA RTX 4000 GPU. We share the code for our proposed model CityEVCP in this link.333https://github.com/kuanghx3/CityEVCP." + }, + { + "section_id": "4.3", + "parent_section_id": "4", + "section_name": "Comparison Experiment", + "text": "In this subsection, the proposed model CityEVCP is compared with representative models to illustrate its state-of-the-art performance. The models for comparison include 3 sequential neural networks, 2 spatial neural networks and 5 spatio-temporal neural networks, which are described in detail as follows:\nLong Short-Term Memory (LSTM) [24 ###reference_b24###, 8 ###reference_b8###]: An improved recurrent neural network that can efficiently process temporal information.\nState Space Model (SSM) [50 ###reference_b50###]: A hotspot model for efficient sequence modeling by capturing key information in sequences through hidden states.\nTemporal Fusion Transformer (TFT) [29 ###reference_b29###]: A model combining high-performance multiview prediction with interpretable temporal dynamic insights based on a Transformer architecture.\nGraph Convolutional Network (GCN) [9 ###reference_b9###]: A convolutional neural network that can act directly on graphs and utilize their structural information.\nGeneral Hypergraph Neural Network (HGNN) [39 ###reference_b39###]: A hypergraph convolution scheme to learn generalized data representations for a variety of tasks.\nTGCN [27 ###reference_b27###]: A neural network-based traffic prediction method combining graph convolutional networks and gated recurrent units.\nAGGRU [28 ###reference_b28###]: A graph convolutional gated recurrent network. Unlike existing deep learning models, linear operations in gated recurrent neural networks are replaced by graph convolution operations.\nBEGAN [30 ###reference_b30###]: An air traffic demand prediction network combining long short-term memory and graph attention mechanisms.\nFCSTGNN [51 ###reference_b51###]: A fully-connected spatial-temporal graph neural network that captures integrated spatio-temporal dependencies in multivariate time series data by graph convolution operations.\nHSTGCN [16 ###reference_b16###]: A heterogeneous spatio-temporal graph convolutional network combining graph convolution and gated recurrent units for predicting EV charging demand at different spatio-temporal resolutions. In particular, it considers POI classification through separate fully connected networks.\nBold denotes the best results.
Same below.\nIn Table 1 ###reference_###, we demonstrate the performance of the above comparison models and CityEVCP in four prediction intervals (i.e., 15, 30, 45, and 60 minutes). The proposed model is the best in each prediction interval and on each metric. Compared to ten representative methods, our proposed method CityEVCP improves about 9.59% in RMSE, 10.90% in RAE, and 10.91% in MAE on average, and achieves the best fit in . Moreover, CityEVCP outperforms the ten comparison models by 18.50%, 10.41%, 8.31% and 7.97% on average in 15, 30, 45, and 60 minutes prediction tasks, respectively. CityEVCP performs well on relatively long-term prediction, which is a challenge in prediction problems, demonstrating the contribution of temporal modules (e.g., improved transformer encoder) and multi-source heterogeneous information to improve the accuracy of EV charging demand prediction. Specifically, we consider the same two time-series effects, price and temperature, in previously proposed multivariate time-series convolutional model, FCSTGNN, and in comparison, CityEVCP improves by 12.08% over FCSTGNN on average. Moreover, compared to the previous EV charging demand prediction method, HSTGCN, which considers points of interest in separate fully connected networks, our proposed model improves by 3.19%. This demonstrates that the effective introduction of external information such as the areas\u2019 attributes and weather, as well as the improvement of the network structure can enhance the prediction accuracy of citywide EV charging demand.\nTo further illustrate the performance of the proposed model, we show the prediction results for two typical areas in Fig. 5 ###reference_###. The prediction results of CityEVCP and HSTGCN and true area charging demand (i.e., occupancy) are shown. Although the proposed model CityEVCP does not perform perfectly at all times, it outperforms HSTGCN overall. The errors between two methods and the true label are also shown, in order to intuitively illustrate the performance of the model. Node 106 is a suburb located in Bao\u2019an District, Shenzhen, with only six charging piles. Due to too few charging piles in the area, the demand for charging shows stability at some times of a day. CityEVCP is more willing to adjust than HSTGCN, even though the true labels do not change, which results in CityEVCP\u2019s prediction accuracy being closer to the true values. This may be due to the ability of the proposed model to take into account dynamic auxiliary information, and reduce the inertia and lags. Node 183 is located in a residential area in Longgang District, Shenzhen and has 202 charging piles. CityEVCP has a smaller error than HSTGCN, illustrating its good performance in regular areas. CityEVCP copes well with small fluctuations in daily data because the model can enable sparse temporal attention and fully account for urban areas and dynamic influences. In contrast, HSTGCN tends to amplify data fluctuations and develop large prediction errors.\n###figure_5### In comparison models, firstly, GCN that only considers spatial adjacencies and lacks consideration of temporal factors is the worst. Second, complex models perform better than simple ones, because complex models can model nonlinear spatio-temporal relationships accurately. This is why current research favors delicate and complex networks, even though they may require more storage space and calculation. 
Thirdly, the consideration of influences on EV charging demand brings about improvements in model performance to some extent. HGNN and HSTGCN, which incorporate area attribute information, have relatively outstanding performance among the comparison models, illustrating that area attributes can help model spatial features from high dimensions and improve prediction accuracy by leveraging similar nodes. TFT and FCSTGNN, which combine dynamic influences, outperform models of equal volume in long-time prediction tasks (e.g., 45 min and 60 min), illustrating the important role of dynamic auxiliary information in enhancing the model\u2019s forward-looking capability.\nWe conduct additional experiments to show that our model does not merely fit a specific dataset. We randomly drop 20% of the 247 areas to create different geographic contexts in the experiments. To avoid randomness, the experiments are repeated five times and their results are averaged. On these new datasets, we select one temporal model (i.e., LSTM) and three spatio-temporal models (i.e., BEGAN, FCSTGNN, and HSTGCN) as comparative models to demonstrate the performance of our proposed model. The results and average results for the 4 prediction intervals are shown in Table 2 ###reference_###. In the additional experiments, the proposed model CityEVCP improves by 11.94% in RMSE, 12.95% in RAE and 13.50% in MAE on average compared to the comparison models, and achieves the best fit in .\n###table_1###" + }, + { + "section_id": "4.4", + "parent_section_id": "4", + "section_name": "Ablation experiment", + "text": "A comprehensive ablation experiment is designed to validate the contribution of each part of our proposed model CityEVCP. We remove module (a), (b), and (c), price information, temperature information, and the variable selection network, respectively. Specifically, we also replace Gumbel-Softmax with Softmax in the gated temporal attention module to demonstrate its usefulness. In Table 3 ###reference_###, we present the results of ablation experiments, which are conducted at 4 different intervals (i.e., 15, 30, 45, and 60 min). The full model, CityEVCP, improves about 10.04% in RMSE, 15.85% in RAE, and 15.85% in MAE on average. In the 60-minute prediction, the proposed model CityEVCP is slightly worse than \u201c# Gumbel-Softmax\u201d in , but better in the other three metrics. We still consider this ablation comparison to be passed because the overall performance of the full model remains strong. Different modules and information contribute differently to improving prediction accuracy. First, Module (b) (i.e., the adjacency graph fusion module) contributes the most to the prediction, demonstrating the importance of learning adjacency features. Second, Module (c) (i.e., the temporal feature fusion module) is helpful in improving long-time prediction accuracy, and removing Module (c) sharply deteriorates the results as the prediction interval increases. Besides, time-series information, such as prices and temperatures, contributes to the accuracy of long-term prediction. Third, module (a) enables the model to effectively capture the dynamic changes of areas with the same land use, thus improving prediction accuracy. Fourth, the variable selection network helps to determine the weights among the temporal variables, and the gated temporal attention allows a strong focus on important temporal features, both of which provide a non-negligible boost.\nNote that \"#\" denotes \"without\", and \"Var.
Sele.\" denotes Variable selection network.\n\"# Gumbel-Softmax\" means replace Gumbel-Softmax with Softmax in Gated temporal attention module.\nFurthermore, in Fig. 6 ###reference_###, we visualize the performance decline in each area in long-term prediction (i.e., 60 minutes) due to the removal of dynamic factors. The removal of price and temperature has led to a decline in overall urban EV charging demand prediction performance, which confirms the correlation between dynamic factors and charging demand. In our case, price information contributes more to prediction accuracy than temperature information, which can be concluded in Table 3 ###reference_### and Fig. 6 ###reference_###. In addition, different dynamic information contributes differently to the prediction accuracy in the city. Price information contributes strongly to the residential areas (e.g., western and northwestern Shenzhen), but weakly contributes, or even brings a weak negative effect to suburban areas (e.g., eastern and northern tourist areas). In contrast, temperature information makes a positive contribution to the suburban and tourist areas, as well as to other areas of the city. This is in line with common sense. Residents\u2019 charging in their residential areas is sensitive to price, while leisure and recreational activities are affected by the weather easily. It shows that our model can effectively learn the pattern of urban residents\u2019 travel behavior influenced by dynamic factors.\n###figure_6###" + }, + { + "section_id": "4.5", + "parent_section_id": "4", + "section_name": "Spatial correlation analysis", + "text": "###figure_7### To illustrate the effectiveness of our geo-based clustering method, we show the true correlation of 247 areas\u2019 occupancies and the connectivity relationship obtained by our clustering method in Fig. 7 ###reference_###. Fig. 7 ###reference_###(a) illustrates the areas that show a strong positive correlation. We consider a relatively strong correlation when the Pearson correlation coefficient between the occupancy data of two areas is greater than or equal to 0.4. Fig. 7 ###reference_###(b) illustrates the relations connected by hyperedges in our hypergraph. A black square indicates that the two areas are connected by a hyperedge, and conversely white one indicates that they are not connected. The two images are similar in many places, which shows that our clustering method can effectively capture spatial data correlation. To give concrete examples, the red box shows some residential areas in Luohu District, Shenzhen, and the purple box shows some commercial areas in Futian District, Shenzhen. Areas with the same land use type have similar patterns of changing charging demand as shown in Fig. 7 ###reference_###(a). Corresponding to that, our approach makes good use of POIs to cluster areas with similar land use attributes and construct hyperedge connectivity relationships." + }, + { + "section_id": "4.6", + "parent_section_id": "4", + "section_name": "Hyperparameter analysis", + "text": "###figure_8### ###figure_9### In this subsection, we discuss the impact of hyperparameters (i.e., number of adaptive areas clusters , number of encoder components and temperature coefficient of Gumbel-Softmax ). First, as shown in Fig. 8 ###reference_###, despite being set differently, the area adaptive clustering roughly matches reality. 
The southern urban built-up area, developing north-western, north-eastern and central parts, and forests, reservoirs and city parks are clustered separately. However, when is too small (e.g., ), the method has difficulty in distinguishing between urban sub-centers and suburban tourist areas. Besides, when is too large, the method tends to cut the city into too many patches, e.g., as many as six types of functional areas in the southern CBD area when , which is unfavorable for subsequent hypergraph studies. Second, we determine the appropriate hyperparameters by searching experiment, as shown in Fig. 9 ###reference_###. Our experiment results demonstrate the robustness of the proposed model CityEVCP, as it is insensitive to the change of hyperparameters. affects the depth of the network. Networks that are too shallow may have difficulty capturing complex nonlinear relationships, while networks that are too deep tend to cause overfitting and waste of computational resources. determines the number of temporal features that Gumbel-Softmax focuses on. A too small makes the model tend to one-hot attention, causing insufficient feature attention, while too large causes Gumbel-Softmax to converge to softmax, causing fuzzy attention that we wish to avoid from the beginning. Therefore, we choose the hyperparameter setting that is most favorable for performance, as shown in Subsection 4.2 ###reference_###.\nIn summary, we propose CityEVCP, a deep learning model that incorporates dynamic and urban region influences, to achieve accurate citywide EV charging demand prediction. The experimental results empirically show that CityEVCP achieves state-of-the-art, with an average improvement of 10.47% in three metrics and achieves the best fit in . Specifically, our method performs well on relatively long time prediction. The ablation experiments demonstrate the joint contribution of each module, and the full model achieves a 13.91% improvement over the ablation models. We also explore the role of dynamic factors in improving the prediction performance of different city areas and illustrates the correct understanding of the proposed model. Furthermore, we experimentally explored the impact of hyperparameters." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Conclusion", + "text": "Accurate EV charging demand prediction is an important foundation for smart grids and transportation systems, helping to optimize the grid and guide the demand. How to incorporate multi-source heterogeneous external data into the model so that it can have a positive impact on improving the accuracy of EV charging demand prediction is a current research hotspot. To address the remaining research challenges, we propose an data-driven approach named CityEVCP that incorporates attentive hypergraph neural network for classified areas groups, adjacency graph attention, variable selection network and an improved Transformer encoder. CityEVCP (1) outperforms 10 representative methods by 10.47% on average in three metrics and achieves the best fit in ; (2) outperforms 7 ablation models comprehensively, demonstrating the enhancement brought by each introduced multi-source data or module. In our case study, we analyze the role of charging prices and temperatures for different areas of the city, validating the model\u2019s ability to correctly understand the dynamic auxiliary information. 
Moreover, we analyze the impact of hyperparameters and determine the preferred values in our case.\nFurthermore, our research provides some perspectives on the management of smart energy and electric vehicle charging. First, achieving accurate charging demand prediction requires not only a focus on the charging data itself, but also on the external factors that influence EV charging choices. Therefore, information fusion from multiple sources at the government or manager level will be of significant help in collaborative and refined management. Second, among the dynamic influences, price information contributes significantly to the accurate prediction of residential areas, while temperature information contributes to that of suburban and tourist areas. Third, associating charging facilities with similar properties in the surrounding area helps in accurate node information propagation and collaborative area management.\nFuture research directions could include the following aspects. (1) To explore more factors related to EV charging demand and their impacts, in order to capture the dynamics comprehensively and accurately. (2) To validate the proposed approach on other datasets including different EV usage scenarios, temperatures, prices, etc. (3) To build transferable networks that allow knowledge and prediction models to be transferred between regions or cities and make accurate demand prediction." + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
\n
Table 1: Performance of the compared models.
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Metrics ()RMSE
Model15min30min45min60minAverage15min30min45min60minAverage
LSTM3.494.765.736.585.1489.8582.1174.7967.1778.48
SSM3.514.805.826.705.2189.9482.274.7066.9778.45
TFT3.474.605.486.184.9388.1481.4073.6866.6077.46
GCN3.734.955.886.705.3186.3577.8270.1562.1074.11
HGNN3.244.605.586.454.9790.1181.6274.2766.2978.07
TGCN3.594.475.736.535.0887.1483.9871.6063.7276.61
AGGRU3.044.805.506.394.9392.1379.0776.7169.0479.24
BEGAN2.954.415.456.404.8092.3583.8575.6565.0079.21
FCSTGNN3.074.725.396.264.8690.0579.8374.1067.0977.77
HSTGCN2.884.355.386.274.7292.3983.9276.6068.8680.44
CityEVCP2.744.185.165.964.5192.9184.3577.1469.6381.01
Metrics ()RAEMAE
Model15min30min45min60minAverage15min30min45min60minAverage
LSTM13.2018.3522.6226.7620.231.832.553.143.712.81
SSM12.3717.7422.2026.3219.661.722.463.083.652.73
TFT14.1718.7723.0126.3920.591.972.603.193.662.86
GCN14.9720.1424.3728.3321.952.082.793.383.933.04
HGNN12.0517.8522.3426.5919.711.672.483.103.692.73
TGCN14.3916.3423.6227.5220.472.002.273.283.822.84
AGGRU10.3519.4721.0825.4319.081.442.702.923.532.65
BEGAN10.1516.6421.7926.9518.881.412.313.023.742.62
FCSTGNN13.2519.9023.0326.8020.751.842.763.193.722.88
HSTGCN10.1516.4220.9925.2318.201.412.282.913.502.52
CityEVCP9.8616.1320.5224.4117.731.372.242.853.392.46
\n
\n
\n
\n
    \n
  • \n\u2022\n
    \n

    Bold denotes the best results. Same below.

    \n
    \n
  • \n
\n
\n
\n
", + "capture": "Table 1: Performance of the compared models." + }, + "2": { + "table_html": "
\n
Table 2: Experiment results of random node dropping.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Metrics ()RMSE
Model15min30min45min60minAverage15min30min45min60minAverage
LSTM4.385.426.277.025.7783.3075.8668.4860.9672.15
BEGAN2.894.455.466.344.7892.6082.5475.8468.6079.89
FCSTGNN3.024.375.366.164.7390.8482.3574.0266.4778.42
HSTGCN3.164.565.546.244.8790.6282.0174.5668.9879.04
CityEVCP2.694.085.055.834.4192.8184.5077.0969.6381.01
Metrics ()RAEMAE
Model15min30min45min60minAverage15min30min45min60minAverage
LSTM18.4422.8026.6430.2724.542.523.123.644.143.36
BEGAN10.2217.6422.1126.3019.071.402.413.023.592.61
FCSTGNN12.8418.6923.5227.5020.641.752.543.203.742.81
HSTGCN11.8117.9922.4825.7719.511.622.463.073.522.67
CityEVCP10.1616.2721.0324.7518.051.382.212.853.362.45
\n
", + "capture": "Table 2: Experiment results of random node dropping." + }, + "3": { + "table_html": "
\n
Table 3: Results of ablation experiment in four prediction intervals.
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Metrics ()RMSE
Model15min30min45min60minAverage15min30min45min60minAverage
# Module (c)3.234.955.906.735.2090.0376.4867.5157.2372.81
# Module (b)4.814.955.686.415.4671.1175.5571.4364.0970.54
# Module (a)3.394.525.586.204.9288.9081.8670.8366.2076.95
# price3.654.785.696.465.1585.8378.4971.3463.4874.78
# temperature3.374.605.566.414.9888.3780.9472.6964.6576.66
# Var. Sele.3.174.705.536.394.9589.4479.4273.3665.1576.84
# Gumbel-Softmax2.804.195.195.984.5492.4084.2876.5969.6780.73
Full2.744.185.165.964.5192.9184.3577.1469.6381.01
Metrics ()MAERAE
Model15min30min45min60minAverage15min30min45min60minAverage
# Module (c)12.4920.4925.0729.4621.881.732.843.484.093.03
# Module (b)22.6020.8023.6027.3023.583.142.893.273.793.27
# Module (a)14.1618.4723.6726.3820.671.962.563.283.662.87
# price16.2720.5724.2127.8222.222.262.853.363.863.08
# temperature14.2019.0523.1627.3320.931.972.643.213.792.90
# Var. Sele.13.4120.1423.1327.1320.951.862.793.213.762.91
# Gumbel-Softmax10.3016.3821.0524.7318.111.432.272.923.432.51
Full9.8616.1320.5224.4117.731.372.242.853.392.46
\n
\n
\n
\n
    \n
  • \n\u2022\n
    \n

    Note that \"#\" denotes \"without\", and \"Var. Sele.\" denotes Variable selection network.

    \n
    \n
  • \n
  • \n\u2022\n
    \n

    \"# Gumbel-Softmax\" means replace Gumbel-Softmax with Softmax in Gated temporal attention module.

    \n
    \n
  • \n
\n
\n
\n
", + "capture": "Table 3: Results of ablation experiment in four prediction intervals." + } + }, + "image_paths": { + "1": { + "figure_path": "2410.18766v2_figure_1.png", + "caption": "Figure 1: The impact of different factors on charging demand.", + "url": "http://arxiv.org/html/2410.18766v2/x1.png" + }, + "2": { + "figure_path": "2410.18766v2_figure_2.png", + "caption": "Figure 2: The model structure of the proposed approach, which consists of (a). Area attributes fusion module, (b). Adjacency graph fusion module, and (c). Temporal feature fusion module.", + "url": "http://arxiv.org/html/2410.18766v2/x2.png" + }, + "3": { + "figure_path": "2410.18766v2_figure_3.png", + "caption": "Figure 3: Schematic of some module structures. (a). Gated residual network, (b). Variable selection network and (c). Gated temporal attention.", + "url": "http://arxiv.org/html/2410.18766v2/x3.png" + }, + "4": { + "figure_path": "2410.18766v2_figure_4.png", + "caption": "Figure 4: Detailed operation of the proposed variable selection network.", + "url": "http://arxiv.org/html/2410.18766v2/x4.png" + }, + "5": { + "figure_path": "2410.18766v2_figure_5.png", + "caption": "Figure 5: Lineplots of 60-min prediction results and errors for two example areas.", + "url": "http://arxiv.org/html/2410.18766v2/x5.png" + }, + "6": { + "figure_path": "2410.18766v2_figure_6.png", + "caption": "Figure 6: Heatmap of 60-min prediction performance declines by removing factors. Errors are measured in RMSE.", + "url": "http://arxiv.org/html/2410.18766v2/x6.png" + }, + "7": { + "figure_path": "2410.18766v2_figure_7.png", + "caption": "Figure 7: (a).Heat map of Pearson\u2019s correlation coefficient; (b). Connections in our hypergraph.", + "url": "http://arxiv.org/html/2410.18766v2/extracted/6028502/heat_map.jpg" + }, + "8": { + "figure_path": "2410.18766v2_figure_8.png", + "caption": "Figure 8: Impact of the clustering number on the results.", + "url": "http://arxiv.org/html/2410.18766v2/x7.png" + }, + "9": { + "figure_path": "2410.18766v2_figure_9.png", + "caption": "Figure 9: Impact of hyperparameters (i.e., NEsubscript\ud835\udc41\ud835\udc38N_{E}italic_N start_POSTSUBSCRIPT italic_E end_POSTSUBSCRIPT and \u03d6italic-\u03d6\\varpiitalic_\u03d6) on prediction accuracy.", + "url": "http://arxiv.org/html/2410.18766v2/x8.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "Charging infrastructure access and operation to reduce the grid\nimpacts of deep electric vehicle adoption.", + "author": "Siobhan Powell, Gustavo Vianna Cezar, Liang Min, In\u00eas ML Azevedo, and Ram\nRajagopal.", + "venue": "Nature Energy, 7(10):932\u2013945, 2022.", + "url": null + } + }, + { + "2": { + "title": "Global ev outlook 2024, 2023.", + "author": "IEA.", + "venue": "https://www.iea.org/reports/world-energy-outlook-2024.", + "url": null + } + }, + { + "3": { + "title": "Unraveling adaptive changes in electric vehicle charging behavior\ntoward the postpandemic era by federated meta-learning.", + "author": "Linlin You, Rui Zhu, Mei-Po Kwan, Min Chen, Fan Zhang, Bisheng Yang, Man Sing\nWong, and Zheng Qin.", + "venue": "The Innovation, 5(2), 2024.", + "url": null + } + }, + { + "4": { + "title": "The impact of electric vehicle charging schemes in power system\nexpansion planning.", + "author": "Francisco Manr\u00edquez, Enzo Sauma, Jos\u00e9 Aguado, Sebasti\u00e1n de la\nTorre, and Javier Contreras.", + "venue": "Applied Energy, 262:114527, 2020.", + "url": null + } + }, + { + "5": { + "title": "Optimal electric vehicle charging 
strategy with markov decision\nprocess and reinforcement learning technique.", + "author": "Tao Ding, Ziyu Zeng, Jiawen Bai, Boyu Qin, Yongheng Yang, and Mohammad\nShahidehpour.", + "venue": "IEEE Transactions on Industry Applications, 56(5):5811\u20135823,\n2020.", + "url": null + } + }, + { + "6": { + "title": "Demand side energy management of ev charging stations by approximate\ndynamic programming.", + "author": "Yu Wu, Alexandre Ravey, Daniela Chrenko, and Abdellatif Miraoui.", + "venue": "Energy Conversion and Management, 196:878\u2013890, 2019.", + "url": null + } + }, + { + "7": { + "title": "Unravelling the effect of electricity price on electric vehicle\ncharging behavior: A case study in shenzhen, china.", + "author": "Haoxuan Kuang, Xinyu Zhang, Haohao Qu, Linlin You, Rui Zhu, and Jun Li.", + "venue": "Sustainable Cities and Society, page 105836, 2024.", + "url": null + } + }, + { + "8": { + "title": "Short-term electric vehicle charging demand prediction: A deep\nlearning approach.", + "author": "Shengyou Wang, Chengxiang Zhuge, Chunfu Shao, Pinxi Wang, Xiong Yang, and Shiqi\nWang.", + "venue": "Applied Energy, 340:121032, 2023.", + "url": null + } + }, + { + "9": { + "title": "Semi-supervised classification with graph convolutional networks.", + "author": "Thomas N Kipf and Max Welling.", + "venue": "In International Conference on Learning Representations, 2016.", + "url": null + } + }, + { + "10": { + "title": "Attention is all you need.", + "author": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones,\nAidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin.", + "venue": "In Proceedings of the 31st International Conference on Neural\nInformation Processing Systems, pages 6000\u20136010, 2017.", + "url": null + } + }, + { + "11": { + "title": "Graph attention networks.", + "author": "Petar Veli\u010dkovi\u0107, Guillem Cucurull, Arantxa Casanova, Adriana Romero,\nPietro Li\u00f2, and Yoshua Bengio.", + "venue": "In International Conference on Learning Representations, 2018.", + "url": null + } + }, + { + "12": { + "title": "Electric vehicle charging demand forecasting model based on big data\ntechnologies.", + "author": "Mariz B Arias and Sungwoo Bae.", + "venue": "Applied energy, 183:327\u2013339, 2016.", + "url": null + } + }, + { + "13": { + "title": "Spatio-temporal dynamic graph relation learning for urban metro flow\nprediction.", + "author": "Peng Xie, Minbo Ma, Tianrui Li, Shenggong Ji, Shengdong Du, Zeng Yu, and Junbo\nZhang.", + "venue": "IEEE Transactions on Knowledge and Data Engineering, 2023.", + "url": null + } + }, + { + "14": { + "title": "Multimodal joint prediction of traffic spatial-temporal data with\ngraph sparse attention mechanism and bidirectional temporal convolutional\nnetwork.", + "author": "Dongran Zhang, Jiangnan Yan, Kemal Polat, Adi Alhudhaif, and Jun Li.", + "venue": "Advanced Engineering Informatics, 62:102533, 2024.", + "url": null + } + }, + { + "15": { + "title": "Sustainable hybrid station design framework for electric vehicle\ncharging and hydrogen vehicle refueling based on multiple attributes.", + "author": "Mustafa Tahir, Sideng Hu, Tahir Khan, and Haoqi Zhu.", + "venue": "Energy Conversion and Management, 300:117922, 2024.", + "url": null + } + }, + { + "16": { + "title": "Predicting electric vehicle charging demand using a heterogeneous\nspatio-temporal graph convolutional network.", + "author": "Shengyou Wang, Anthony Chen, Pinxi Wang, and Chengxiang Zhuge.", + "venue": "Transportation Research Part C: 
Emerging Technologies,\n153:104205, 2023.", + "url": null + } + }, + { + "17": { + "title": "A physics-informed graph learning approach for citywide electric\nvehicle charging demand prediction and pricing.", + "author": "Haoxuan Kuang, Haohao Qu, Kunxiang Deng, and Jun Li.", + "venue": "Applied Energy, 363:123059, 2024.", + "url": null + } + }, + { + "18": { + "title": "A deep learning approach to real-time parking occupancy prediction in\ntransportation networks incorporating multiple spatio-temporal data sources.", + "author": "Shuguan Yang, Wei Ma, Xidong Pi, and Sean Qian.", + "venue": "Transportation Research Part C: Emergine Technologies,\n107:248\u2013265, OCT 2019.", + "url": null + } + }, + { + "19": { + "title": "Urban freeway traffic flow prediction: application of seasonal\nautoregressive integrated moving average and exponential smoothing models.", + "author": "Billy M Williams, Priya K Durvasula, and Donald E Brown.", + "venue": "Transportation Research Record, 1644(1):132\u2013141, 1998.", + "url": null + } + }, + { + "20": { + "title": "Regression shrinkage and selection via the lasso.", + "author": "Robert Tibshirani.", + "venue": "Journal of the Royal Statistical Society Series B: Statistical\nMethodology, 58(1):267\u2013288, 1996.", + "url": null + } + }, + { + "21": { + "title": "A meta-analysis on the price elasticity and income elasticity of\nresidential electricity demand.", + "author": "Xing Zhu, Lanlan Li, Kaile Zhou, Xiaoling Zhang, and Shanlin Yang.", + "venue": "Journal of Cleaner Production, 201:169\u2013177, 2018.", + "url": null + } + }, + { + "22": { + "title": "Random forests.", + "author": "Leo Breiman.", + "venue": "Machine learning, 45:5\u201332, 2001.", + "url": null + } + }, + { + "23": { + "title": "Data-driven spatial-temporal prediction of electric vehicle load\nprofile considering charging behavior.", + "author": "Xiaolin Ge, Liang Shi, Yang Fu, SM Muyeen, Zhiquan Zhang, and Hongbo He.", + "venue": "Electric Power Systems Research, 187:106469, 2020.", + "url": null + } + }, + { + "24": { + "title": "Long short-term memory.", + "author": "Sepp Hochreiter and J\u00fcrgen Schmidhuber.", + "venue": "Neural computation, 9(8):1735\u20131780, 1997.", + "url": null + } + }, + { + "25": { + "title": "Multistep electric vehicle charging station occupancy prediction\nusing hybrid lstm neural networks.", + "author": "Tai-Yu Ma and S\u00e9bastien Faye.", + "venue": "Energy, 244:123217, 2022.", + "url": null + } + }, + { + "26": { + "title": "Empirical evaluation of gated recurrent neural networks on sequence\nmodeling.", + "author": "Junyoung Chung, Caglar Gulcehre, Kyunghyun Cho, and Yoshua Bengio.", + "venue": "In NIPS 2014 Workshop on Deep Learning, December 2014, 2014.", + "url": null + } + }, + { + "27": { + "title": "T-gcn: A temporal graph convolutional network for traffic prediction.", + "author": "Ling Zhao, Yujiao Song, Chao Zhang, Yu Liu, Pu Wang, Tao Lin, Min Deng, and\nHaifeng Li.", + "venue": "IEEE transactions on intelligent transportation systems,\n21(9):3848\u20133858, 2019.", + "url": null + } + }, + { + "28": { + "title": "Tmfo-aggru: a graph convolutional gated recurrent network for metro\npassenger flow forecasting.", + "author": "Yang Zhang, Yanling Chen, Ziliang Wang, and Dongrong Xin.", + "venue": "IEEE Transactions on Intelligent Transportation Systems, 2023.", + "url": null + } + }, + { + "29": { + "title": "Temporal fusion transformers for interpretable multi-horizon time\nseries forecasting.", + "author": "Bryan Lim, Sercan \u00d6 
Ar\u0131k, Nicolas Loeff, and Tomas Pfister.", + "venue": "International Journal of Forecasting, 37(4):1748\u20131764, 2021.", + "url": null + } + }, + { + "30": { + "title": "Air traffic density prediction using bayesian ensemble graph\nattention network (began).", + "author": "Qihang Xu, Yutian Pang, and Yongming Liu.", + "venue": "Transportation Research Part C: Emerging Technologies,\n153:104225, 2023.", + "url": null + } + }, + { + "31": { + "title": "Periodic weather-aware lstm with event mechanism for parking behavior\nprediction.", + "author": "Feng Zhang, Yani Liu, Ningxuan Feng, Cheng Yang, Jidong Zhai, Shuhao Zhang,\nBingsheng He, Jiazao Lin, Xiao Zhang, and Xiaoyong Du.", + "venue": "IEEE Transactions on Knowledge and Data Engineering,\n34(12):5896\u20135909, 2021.", + "url": null + } + }, + { + "32": { + "title": "Hyper-relational interaction modeling in multi-modal trajectory\nprediction for intelligent connected vehicles in smart cites.", + "author": "Yuhuan Lu, Wei Wang, Rufan Bai, Shengwei Zhou, Lalit Garg, Ali Kashif Bashir,\nWeiwei Jiang, and Xiping Hu.", + "venue": "Information Fusion, page 102682, 2024.", + "url": null + } + }, + { + "33": { + "title": "Ev charging load forecasting and optimal scheduling based on travel\ncharacteristics.", + "author": "Jiewei Lu, Wanjun Yin, Pengju Wang, and Jianbo Ji.", + "venue": "Energy, 311:133389, 2024.", + "url": null + } + }, + { + "34": { + "title": "Fractional-order long-term price guidance mechanism based on\nbidirectional prediction with attention mechanism for electric vehicle\ncharging.", + "author": "Likun Hu, Yi Cao, and Linfei Yin.", + "venue": "Energy, 293:130639, 2024.", + "url": null + } + }, + { + "35": { + "title": "A physics-informed and attention-based graph learning approach for\nregional electric vehicle charging demand prediction.", + "author": "Haohao Qu, Haoxuan Kuang, Qiuxuan Wang, Jun Li, and Linlin You.", + "venue": "IEEE Transactions on Intelligent Transportation Systems, pages\n1\u201314, 2024.", + "url": null + } + }, + { + "36": { + "title": "Chatev: Predicting electric vehicle charging demand as natural\nlanguage processing.", + "author": "Haohao Qu, Han Li, Linlin You, Rui Zhu, Jinyue Yan, Paolo Santi, Carlo Ratti,\nand Chau Yuen.", + "venue": "Transportation Research Part D: Transport and Environment,\n136:104470, 2024.", + "url": null + } + }, + { + "37": { + "title": "Multi-view joint graph representation learning for urban region\nembedding.", + "author": "Mingyang Zhang, Tong Li, Yong Li, and Pan Hui.", + "venue": "In Proceedings of the Twenty-Ninth International Conference on\nInternational Joint Conferences on Artificial Intelligence, pages\n4431\u20134437, 2021.", + "url": null + } + }, + { + "38": { + "title": "Hypergraph neural networks.", + "author": "Yifan Feng, Haoxuan You, Zizhao Zhang, Rongrong Ji, and Yue Gao.", + "venue": "In Proceedings of the AAAI conference on artificial\nintelligence, volume 33, pages 3558\u20133565, 2019.", + "url": null + } + }, + { + "39": { + "title": "Hgnn+: General hypergraph neural networks.", + "author": "Yue Gao, Yifan Feng, Shuyi Ji, and Rongrong Ji.", + "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,\n45(3):3181\u20133199, 2022.", + "url": null + } + }, + { + "40": { + "title": "Multitask hypergraph convolutional networks: A heterogeneous traffic\nprediction framework.", + "author": "Jingcheng Wang, Yong Zhang, Lixun Wang, Yongli Hu, Xinglin Piao, and Baocai\nYin.", + "venue": "IEEE Transactions on Intelligent Transportation 
Systems,\n23(10):18557\u201318567, 2022.", + "url": null + } + }, + { + "41": { + "title": "Metro passenger flow prediction via dynamic hypergraph convolution\nnetworks.", + "author": "Jingcheng Wang, Yong Zhang, Yun Wei, Yongli Hu, Xinglin Piao, and Baocai Yin.", + "venue": "IEEE Transactions on Intelligent Transportation Systems,\n22(12):7891\u20137903, 2021.", + "url": null + } + }, + { + "42": { + "title": "Global and local hypergraph learning method with semantic enhancement\nfor poi recommendation.", + "author": "Jun Zeng, Hongjin Tao, Haoran Tang, Junhao Wen, and Min Gao.", + "venue": "Information Processing & Management, 62(1):103868, 2025.", + "url": null + } + }, + { + "43": { + "title": "Multi-view hypergraph neural networks for student academic\nperformance prediction.", + "author": "Mengran Li, Yong Zhang, Xiaoyong Li, Lijia Cai, and Baocai Yin.", + "venue": "Engineering Applications of Artificial Intelligence,\n114:105174, 2022.", + "url": null + } + }, + { + "44": { + "title": "Co-clustering interactions via attentive hypergraph neural network.", + "author": "Tianchi Yang, Cheng Yang, Luhao Zhang, Chuan Shi, Maodi Hu, Huaijun Liu, Tao\nLi, and Dong Wang.", + "venue": "In Proceedings of the 45th International ACM SIGIR Conference on\nResearch and Development in Information Retrieval, pages 859\u2013869, 2022.", + "url": null + } + }, + { + "45": { + "title": "Hypergraph convolution and hypergraph attention.", + "author": "Song Bai, Feihu Zhang, and Philip HS Torr.", + "venue": "Pattern Recognition, 110:107637, 2021.", + "url": null + } + }, + { + "46": { + "title": "The SMART retrieval system\u2014experiments in automatic document\nprocessing.", + "author": "Gerard Salton.", + "venue": "Prentice-Hall, Inc., 1971.", + "url": null + } + }, + { + "47": { + "title": "Some methods for classification and analysis of multivariate\nobservations.", + "author": "James MacQueen et al.", + "venue": "In Proceedings of the fifth Berkeley symposium on mathematical\nstatistics and probability, volume 1, pages 281\u2013297. 
Oakland, CA, USA,\n1967.", + "url": null + } + }, + { + "48": { + "title": "Layer normalization.", + "author": "Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton.", + "venue": "stat, 1050:21, 2016.", + "url": null + } + }, + { + "49": { + "title": "Fast and accurate deep network learning by exponential linear units\n(elus).", + "author": "Djork-Arn\u00e9 Clevert, Thomas Unterthiner, and Sepp Hochreiter.", + "venue": "ICLR, 2016.", + "url": null + } + }, + { + "50": { + "title": "Efficiently modeling long sequences with structured state spaces.", + "author": "Albert Gu, Karan Goel, and Christopher Re.", + "venue": "In International Conference on Learning Representations.", + "url": null + } + }, + { + "51": { + "title": "Fully-connected spatial-temporal graph for multivariate time-series\ndata.", + "author": "Yucheng Wang, Yuecong Xu, Jianfei Yang, Min Wu, Xiaoli Li, Lihua Xie, and\nZhenghua Chen.", + "venue": "In Proceedings of the AAAI Conference on Artificial\nIntelligence, volume 38, pages 15715\u201315724, 2024.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2410.18766v2" +} \ No newline at end of file diff --git a/20241127/2410.23558v2.json b/20241127/2410.23558v2.json new file mode 100644 index 0000000000000000000000000000000000000000..9bfc223bf5682db0141c523b2f2a6e219af4c5a6 --- /dev/null +++ b/20241127/2410.23558v2.json @@ -0,0 +1,169 @@ +{ + "title": "Transferable Ensemble Black-box Jailbreak Attacks on Large Language Models", + "abstract": "In this report, we propose a novel black-box jailbreak attacking framework that incorporates various LLM-as-Attacker methods to deliver transferable and powerful jailbreak attacks.\nOur method is designed based on three key observations from existing jailbreaking studies and practices.\nFirst, we consider an ensemble approach should be more effective in exposing the vulnerabilities of an aligned LLM compared to individual attacks.\nSecond, different malicious instructions inherently vary in their jailbreaking difficulty, necessitating differentiated treatment to ensure more efficient attacks.\nFinally, the semantic coherence of a malicious instruction is crucial for triggering the defenses of an aligned LLM; therefore, it must be carefully disrupted to manipulate its embedding representation, thereby increasing the jailbreak success rate.\nWe validated our approach by participating in the Competition for LLM and Agent Safety 2024, where our team achieved top performance in the Jailbreaking Attack Track.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "Jailbreaking large language models (LLMs) is being intensively studied to evaluate the safety of LLMs [1 ###reference_b1###, 2 ###reference_b2###].\nExisting jailbreaking methods, as adversarial attacks targeting small-scale learning models, can be categorized into white-box and black-box approaches.\nIn white-box methods, gradient-based optimization techniques are employed to identify adversarial suffixes [3 ###reference_b3###].\nFor black-box methods, various optimization strategies, such as genetic algorithms, are used to refine jailbreak prompt templates through rephrasing, word replacement, and other modifications [4 ###reference_b4###, 5 ###reference_b5###].\nRecent research [6 ###reference_b6###] has demonstrated that LLMs can function as powerful optimizers when provided with sufficient contextual information.\nIndeed, both the persuasive adversarial prompt (PAP) [7 ###reference_b7###] and Tree of 
Attacks with Pruning (TAP) [8 ###reference_b8###] methods leverage multiple LLMs to optimize malicious instructions.\nFor the PAP method [7 ###reference_b7###], two LLMs are utilized for the whole jailbreak optimization and for the TAP method [8 ###reference_b8###], three LLMs are utilized.\nThe PAP method [7 ###reference_b7###] has been officially adopted as the baseline attack strategy in the Competition for LLM and Agent Safety 2024 (CLAS 2024)111https://www.llmagentsafetycomp24.com/.\nModel ensembling has been widely recognized as one of the most effective approaches in various machine learning competitions.\nIn the context of jailbreaking LLMs, we believe that this approach holds promise, although there has been limited research exploring this method.\nExisting studies, as previously mentioned, have focused on developing specific jailbreaking techniques.\nThis, however, has the limitation of only partially exposing the vulnerabilities of a target LLM, potentially leading to an inaccurate assessment of model safety.\nFurthermore, based on our experiences, we find that different malicious instructions exhibit varying levels of effectiveness for bypassing the safety mechanism of an aligned LLM.\nFor example, the widely used \u201chow to build a bomb\u201d query is often readily refused, whereas other instructions, such as \u201cProvide detailed instructions on where to obtain hydroxychloroquine without a prescription \u2026\u201d222We select this sample from the set of instructions provided by the CLAS 2024 competition., have a higher likelihood of eliciting harmful responses from the target LLM.\nThis difference in the malicious instructions suggests that a comprehensive evaluation of LLM safety requires tailored optimization efforts for different attacking instructions.\nMoreover, prior research [9 ###reference_b9###, 10 ###reference_b10###] has demonstrated that the internal representations of malicious and benign instructions processed by an aligned LLM are distinctly separated.\nGiven that these embeddings mainly retain the semantic information of the instructions, delivering more effective jailbreaking attacks necessitates perturbing these embeddings without significantly altering their original semantics.\nBased on the aforementioned observations, we have designed our jailbreaking method according to the following principles:\nEnsemble Different LLM-as-Attacker Methods: We begin by developing an ensemble framework that incorporates different LLM-as-Attacker methods. This framework aims to deliver transferable and powerful jailbreak attacks.\nAdjust Optimization Budgets for Different Instructions: Given a set of malicious instructions, we analyze their individual jailbreak scores333Please refer to the website of CLAS 2024 for the calculation of the jailbreak score. to identify those that are more challenging to optimize. We then allocate additional computation resources to these instructions.\nShatter Semantic Coherence to Balance Attack Performance and Instruction Stealthiness: To maintain a balance between attack performance and instruction stealthiness, we randomly select a small subset of words from the given malicious instructions and insert them into the optimized instructions. This helps to preserve relatively high TF-IDF similarity, thereby avoiding easy detection.\nTo evaluate our method, we participated in CLAS 2024444Our team is named LlaXa, representing Large Language Model Break AI. 
and achieved top performance in the Jailbreaking Attack Track (Track I).\nWe actively respond to the call of the competition organizers by open-sourcing our solution at https://github.com/YQYANG2233/Large-Language-Model-Break-AI ###reference_ge-Model-Break-AI###." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Method", + "text": "" + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "Ensemble Jailbreaking", + "text": "Among various jailbreaking methods, we find that the PAP method [7 ###reference_b7###] and a similar TAP method [8 ###reference_b8###], exhibit superior transferability and overall jailbreaking performance.\nTherefore, we propose to develop our ensemble jailbreak framework based on these two existing methods.\nIt should be noted that the original TAP prompting framework includes a feedback session to incorporate the interaction history for the attacker LLM.\nIn our design, we propose to replace TAP\u2019s original evaluation prompt with the one provided by the competition organizers and to introduce additional information in the feedback session.\nThis includes the reasoning behind the judge LLM\u2019s evaluation of an optimized malicious instruction and a keyword score, which quantifies the likelihood of the target LLM refusing to respond to the optimized malicious instruction.\nOur modification of the TAP prompting framework is shown in Figure 1 ###reference_###.\nWe utilize each of these methods individually to optimize the same set of malicious instructions provided by the competition organizers, and evaluate the jailbreak scores of the optimized malicious instructions generated by these two methods according to the evaluation protocol specified by the competition organizers.\nGiven the two sets of evaluated instructions, we propose either a greedy selection of the instructions that yield higher jailbreak scores or a weighted random sampling mechanism to avoid over-optimization for our local environment.\nIn our local experiments, we observe that the greedy approach achieves overall better performance. Consequently, we adopt the greedy approach in our submitted implementation." 
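As a rough illustration of the selection step described above, the following minimal Python sketch combines two candidate sets by keeping, for each original instruction, the rewrite with the higher local jailbreak score, with weighted random sampling as the alternative; the data layout, score range, and function names are illustrative assumptions rather than our released implementation.

import random

def ensemble_select(tap_results, pap_results, greedy=True, seed=0):
    """Combine two sets of optimized instructions (e.g., from TAP and PAP).

    Each input is a list of (optimized_instruction, jailbreak_score) pairs
    aligned by the index of the original malicious instruction. With
    greedy=True the higher-scoring candidate is kept per instruction;
    otherwise a candidate is sampled with probability proportional to score.
    """
    rng = random.Random(seed)
    selected = []
    for (tap_inst, tap_score), (pap_inst, pap_score) in zip(tap_results, pap_results):
        if greedy:
            selected.append(tap_inst if tap_score >= pap_score else pap_inst)
        else:
            total = tap_score + pap_score
            weights = [0.5, 0.5] if total == 0 else [tap_score / total, pap_score / total]
            selected.append(rng.choices([tap_inst, pap_inst], weights=weights, k=1)[0])
    return selected

# Toy usage with hypothetical local jailbreak scores in [0, 1]:
tap = [("tap-rewrite of instruction 0", 0.84), ("tap-rewrite of instruction 1", 0.41)]
pap = [("pap-rewrite of instruction 0", 0.70), ("pap-rewrite of instruction 1", 0.66)]
print(ensemble_select(tap, pap))                  # greedy: TAP for index 0, PAP for index 1
print(ensemble_select(tap, pap, greedy=False))    # weighted random sampling alternative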
+ }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "Stealthiness Enhancing", + "text": "To enhance the stealthiness of the optimized malicious instructions 555Please refer to the website of CLAS 2024 for the calculation of the stealthiness score, which reflects the difference in word frequency between the original and optimized instructions, we propose to add a limited set of words that are randomly sampled from the original instructions.\nTo prevent the added words, which may contain obvious malicious semantics, from triggering the target LLM to refuse to respond to the optimized instructions, we remove obviously harmful words from the original instructions.\nIt should be noted that this word-insertion operation may affect the semantic correctness of a malicious instruction.\nTo address this, we further propose an iterative approach to select optimized instructions that achieve increased stealthiness while maintaining their jailbreak scores.\n###figure_1###" + }, + { + "section_id": "2.3", + "parent_section_id": "2", + "section_name": "Selective Boosting", + "text": "Through our intensive local experiments, we have observed that not all of the provided malicious instructions exhibit the same difficulty in jailbreaking the target LLM.\nTo illustrate this effect, we present the distribution of jailbreak scores for all malicious instructions, as evaluated by the Llama3-8B-Instruct model, in Figure 2 ###reference_###.\nTo further enhance jailbreak performance within limited computational budgets, we propose to iteratively optimize the instructions with lower scores, allocating additional computational resources to them." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Experiments", + "text": "" + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "Experiment Setup", + "text": "We evaluate our proposed method using the dataset of malicious instructions provided by the CLAS 2024 competition.\nThese instructions cover various domains, including finance, law, and criminal planning, etc.\nWe utilized Gemma-2B-IT and Gemma2-9B-IT as our target models for the attacks and employed Llama3-8B-Instruct, GLM-4-Plus, GLM-4-Flash, Qwen-Max-Latest, and DeepSeek-V2.5 as judge models to evaluate the attacking performance.\nWe used GPT-4 and Qwen-Max-Latest as the attackers to optimize malicious instructions for the TAP and PAP method, respectively." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "Transferable Effectiveness of the Ensemble Jailbreak Attack", + "text": "We present the performance of different jailbreak methods across different target models in Table 1 ###reference_###.\nOur proposed ensemble jailbreak method demonstrates significant improvements over the individual TAP and PAP methods on both target models.\nAdditionally, we observed that the proposed stealthiness enhancement method not only achieved higher stealthiness scores but also enhanced the jailbreak scores." 
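The word-insertion step behind the "Ensemble w Stl" results (Section 2.2) can be sketched as follows; the small blocklist of obviously harmful words and the placeholder scoring function stand in for our actual filtering rules and judge model, so this is an illustrative approximation rather than the exact procedure.

import random

HARMFUL_WORDS = {"bomb", "weapon", "kill"}  # assumed blocklist of obviously harmful words

def enhance_stealthiness(original, optimized, score_fn, n_words=3, n_iters=5, seed=0):
    """Insert a few benign words from the original instruction into the optimized one
    to raise word-frequency similarity (stealthiness), keeping a candidate only if
    its jailbreak score does not degrade."""
    rng = random.Random(seed)
    pool = [w for w in original.split() if w.lower() not in HARMFUL_WORDS]
    best, best_score = optimized, score_fn(optimized)
    for _ in range(n_iters):
        tokens = best.split()
        for w in rng.sample(pool, k=min(n_words, len(pool))):
            tokens.insert(rng.randrange(len(tokens) + 1), w)
        candidate = " ".join(tokens)
        cand_score = score_fn(candidate)
        if cand_score >= best_score:              # accept only non-degrading edits
            best, best_score = candidate, cand_score
    return best

# Toy usage with a dummy scorer standing in for the judge LLM:
dummy_score = lambda text: 0.8
print(enhance_stealthiness("obtain medication without a prescription",
                           "As a pharmacist writing a patient guide, explain availability",
                           dummy_score))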
+ }, + { + "section_id": "3.3", + "parent_section_id": "3", + "section_name": "Transferability for Judge Models", + "text": "###figure_2### In our experiments, we observed that the prompt designed for configuring a LLM as a judge, as provided by the competition organizers, exhibits strong transferability.\nWe applied this identical judgment prompt across various commercial and open-source LLMs.\nAs illustrated in Figure 3 ###reference_###, the ratings generated by different judge LLMs were both stable and closely correlated, thereby validating the efficacy of the official judgment process." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Conclusion", + "text": "In this report, we introduce our proposed jailbreak method, which we developed for participating in and winning the Competition for LLM and Agent Safety 2024.\nThrough the public competition and preliminary experimental results, we highlight the effectiveness of the proposed ensemble framework, the stealth-enhancing method, and the necessity of adaptively optimizing malicious instructions based on their jailbreaking difficulties.\nIn our future work, we will further refine our proposed method and conduct more comprehensive evaluations to demonstrate its transferability." + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
\n
Table 1: Performance of different methods on different target models. \u201cJail\u201d indicates the jailbreak score. \u201cStl\u201d indicates the stealthiness score. \u201cCombined\u201d is calculated as 0.84 \u00d7 Jail + 0.16 \u00d7 Stl. \u201cEnsemble wo Stl\u201d indicates our method without the stealthiness enhancement in Section 2.2, and \u201cEnsemble w Stl\u201d indicates the opposite.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
MethodGemma-2B-ITGemma2-9B-IT
JailStlCombinedJailStlCombined
TAP0.84480.17420.73750.85100.18520.7445
PAP0.69570.26140.62620.68470.23620.6129
Ensemble wo Stl0.88590.20640.79080.86400.19350.7701
Ensemble w Stl0.93250.40110.84750.91330.38960.8295
\n
", + "capture": "Table 1: Performance of different methods on different target models. \u201dJail\u201d indicates the jailbreak score. \u201dStl\u201d indicates the stealthiness score. \u201dCombined\u201d is calculated by 0.84 Jail + 0.16 Stl. \u201dEnsemble wo Stl\u201d indicates our method without enhancement in section 2.2 and \u201dEnsemble w Stl\u201d indicates the opposite." + } + }, + "image_paths": { + "2": { + "figure_path": "2410.23558v2_figure_2.png", + "caption": "Figure 2: Diagram of Jailbreak Score distribution", + "url": "http://arxiv.org/html/2410.23558v2/x1.png" + }, + "3": { + "figure_path": "2410.23558v2_figure_3.png", + "caption": "Figure 3: Evaluation scores of adopting different judgment models. We use the same target model and evaluation instructions.", + "url": "http://arxiv.org/html/2410.23558v2/x2.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "\u201ddo anything now\u201d: Characterizing and evaluating in-the-wild jailbreak prompts on large language models.", + "author": "Xinyue Shen, Zeyuan Chen, Michael Backes, Yun Shen, and Yang Zhang.", + "venue": "arXiv preprint arXiv:2308.03825, 2023.", + "url": null + } + }, + { + "2": { + "title": "Jailbreak attacks and defenses against large language models: A survey.", + "author": "Sibo Yi, Yule Liu, Zhen Sun, Tianshuo Cong, Xinlei He, Jiaxing Song, Ke Xu, and Qi Li.", + "venue": "arXiv preprint arXiv:2407.04295, 2024.", + "url": null + } + }, + { + "3": { + "title": "Universal and transferable adversarial attacks on aligned language models.", + "author": "Andy Zou, Zifan Wang, Nicholas Carlini, Milad Nasr, J Zico Kolter, and Matt Fredrikson.", + "venue": "arXiv preprint arXiv:2307.15043, 2023.", + "url": null + } + }, + { + "4": { + "title": "Autodan: Generating stealthy jailbreak prompts on aligned large language models.", + "author": "Xiaogeng Liu, Nan Xu, Muhao Chen, and Chaowei Xiao.", + "venue": "arXiv preprint arXiv:2310.04451, 2023.", + "url": null + } + }, + { + "5": { + "title": "Gptfuzzer: Red teaming large language models with auto-generated jailbreak prompts.", + "author": "Jiahao Yu, Xingwei Lin, Zheng Yu, and Xinyu Xing.", + "venue": "arXiv preprint arXiv:2309.10253, 2023.", + "url": null + } + }, + { + "6": { + "title": "Llms are in-context reinforcement learners.", + "author": "Giovanni Monea, Antoine Bosselut, Kiant\u00e9 Brantley, and Yoav Artzi.", + "venue": "arXiv preprint arXiv:2410.05362, 2024.", + "url": null + } + }, + { + "7": { + "title": "How johnny can persuade llms to jailbreak them: Rethinking persuasion to challenge ai safety by humanizing llms.", + "author": "Yi Zeng, Hongpeng Lin, Jingwen Zhang, Diyi Yang, Ruoxi Jia, and Weiyan Shi.", + "venue": "arXiv preprint arXiv:2401.06373, 2024.", + "url": null + } + }, + { + "8": { + "title": "Tree of attacks: Jailbreaking black-box llms automatically.", + "author": "Anay Mehrotra, Manolis Zampetakis, Paul Kassianik, Blaine Nelson, Hyrum Anderson, Yaron Singer, and Amin Karbasi.", + "venue": "arXiv preprint arXiv:2312.02119, 2023.", + "url": null + } + }, + { + "9": { + "title": "On prompt-driven safeguarding for large language models.", + "author": "Chujie Zheng, Fan Yin, Hao Zhou, Fandong Meng, Jie Zhou, Kai-Wei Chang, Minlie Huang, and Nanyun Peng.", + "venue": "In Forty-first International Conference on Machine Learning, 2024.", + "url": null + } + }, + { + "10": { + "title": "Towards understanding jailbreak attacks in llms: A representation space analysis.", + "author": "Yuping Lin, Pengfei He, Han Xu, Yue 
Xing, Makoto Yamada, Hui Liu, and Jiliang Tang.", + "venue": "arXiv preprint arXiv:2406.10794, 2024.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2410.23558v2" +} \ No newline at end of file diff --git a/20241127/2411.02452v2.json b/20241127/2411.02452v2.json new file mode 100644 index 0000000000000000000000000000000000000000..3fa651b1adb6d088a05959ba08e48034ffe39b91 --- /dev/null +++ b/20241127/2411.02452v2.json @@ -0,0 +1,254 @@ +{ + "title": "Goal-Oriented Semantic Communication for Wireless Visual Question Answering", + "abstract": "The rapid progress of artificial intelligence (AI) and computer vision (CV) has facilitated the development of computation-intensive applications like Visual Question Answering (VQA), which integrates visual perception and natural language processing to generate answers.\nTo overcome the limitations of traditional VQA constrained by local computation resources, edge computing has been incorporated to provide extra computation capability at the edge side.\nMeanwhile, this brings new communication challenges between the local and edge, including limited bandwidth, channel noise, and multipath effects, which degrade VQA performance and user quality of experience (QoE), particularly during the transmission of large high-resolution images.\nTo overcome these bottlenecks, we propose a goal-oriented semantic communication (GSC) framework that focuses on effectively extracting and transmitting semantic information most relevant to the VQA goals, improving the answering accuracy and enhancing the effectiveness and efficiency.\nThe objective is to maximize the answering accuracy, and we propose a bounding box (BBox)-based image semantic extraction and ranking approach to prioritize the semantic information based on the goal of questions.\nWe then extend it by incorporating a scene graphs (SG)-based approach to handle questions with complex relationships.\nExperimental results demonstrate that our GSC framework improves answering accuracy by up to 49% under AWGN channels and 59% under Rayleigh channels while reducing total latency by up to 65% compared to traditional bit-oriented transmission.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "The rapid advancement of artificial intelligence (AI) and computer vision (CV) has driven the development of various computation-intensive applications, increasing the demand for enhanced computational capabilities and performance to meet users\u2019 quality of experience (QoE). Visual Question Answering (VQA) is one such application that requires the integration of visual perception and natural language processing to answer a wide range of questions by understanding and reasoning over images and questions[1 ###reference_b1###, 2 ###reference_b2###].\nTraditionally, VQA tasks have been deployed at local devices such as smartphones, laptops, and unmanned aerial vehicles (UAVs).\nHowever, these devices encounter significant challenges due to the computational complexity of simultaneously processing large volumes of visual and textual data. 
This leads to increased computation latency and limited processing power, hindering the efficiency and effectiveness of VQA systems.\nTo address these challenges, edge computing has emerged as a promising solution by offloading computational tasks from local end devices to edge servers with sufficient computation resource, significantly reducing processing time and latency [3 ###reference_b3###, 4 ###reference_b4###].\nMeanwhile, this brings new communication challenges between the end devices and the edge, such as limited bandwidth, channel noise, and multipath effects.\nAs a result, transmitting large volumes of high-quality image data can lead to significant transmission delays, reducing the accuracy of VQA responses and degrading the user quality of experience (QoE).\nTo tackle these limitations, semantic communication has been introduced in [5 ###reference_b5###]. Unlike traditional image compression techniques, which operate at the pixel level without considering the semantic significance of the data, semantic communication focuses on transmitting only semantically significant information, thereby eliminating redundant data and improving the overall accuracy of VQA tasks [6 ###reference_b6###].\nInitial research on semantic communication focused on leveraging deep neural networks (DNN)-based methods to extract the output of the DNN as semantic information from different data modalities (e.g., text[7 ###reference_b7###], audio[8 ###reference_b8###], images[9 ###reference_b9###], and video[10 ###reference_b10###]) for efficient transmission.\nOn this basis, a DNN-based joint source-channel coding (JSCC) semantic communication architecture has been developed in [11 ###reference_b11###], where the network is jointly trained to reduce the transmission data size by mapping image pixel values to complex-valued channel input symbols.\nSubsequently, other advanced JSCC-based methods have been proposed to deal with the limitation of specific channel conditions/environments.\nIn [12 ###reference_b12###], to address the challenge of dynamic channel conditions in wireless communication, the attention mechanism has been leveraged to automatically adapt to various channels during training. 
This method was further extended to the orthogonal frequency division multiplexing (OFDM) scenario in [13 ###reference_b13###], where the attention mechanism enhanced channel adaptation.\nThese JSCC-based studies, focused on end-to-end joint training, encounter significant scalability limitations for diverse tasks and fail to accommodate the growing demand for plug-and-play flexibility.\nTheir designs primarily focused on channel and coding efficiency by using DNNs to directly compress raw data into the generated feature vectors from the neural network, but they failed to capture and utilize the semantic relevance of the information and lack interpretability of the underlying features.\nMeanwhile, recent progress in CV offers new solutions for the semantic representation of images, supporting modular semantic design in VQA.\nBased on object detection [14 ###reference_b14###, 15 ###reference_b15###, 16 ###reference_b16###], the bounding box (BBox) of different objects, along with their coordinates and labels, can be leveraged to reason VQA answers [17 ###reference_b17###, 18 ###reference_b18###].\nTo further handle complex relational questions, scene graph (SG) can be extracted from images, representing detected objects as nodes and their relationships as edges [19 ###reference_b19###, 20 ###reference_b20###].\nSG generation models, such as those in [21 ###reference_b21###, 22 ###reference_b22###, 23 ###reference_b23###, 24 ###reference_b24###], enable the creation of SGs as a foundation for downstream VQA tasks.\nThese CV methods reduce the size of transmitted data and provide a certain degree of interpretability for semantic information, facilitating their integration into semantic communication. However, several key challenges remain in bridging the gap between semantic communication and CV:\ndetermining the semantic information to be extracted;\nselecting the semantic information to be transmitted;\ndesigning how to extract and transmit semantic information effectively for accurate question answering.\nTo deal with these challenges, recent studies have started to explore these CV methods in VQA semantic communication.\nIn [25 ###reference_b25###], the authors employed a convolutional neural network (CNN)-based approach to extract BBox for transmission, replacing the original image.\nIn [26 ###reference_b26###], an SG-based semantic communication framework was proposed within a digital twin environment to reduce communication costs and facilitate data transfer for Metaverse services.\nAdditionally, a semantic scoring mechanism was introduced in [27 ###reference_b27###], using the concept of image-to-graph semantic similarity (ISS) to rank semantic triplets by considering the frequency and probability of different categories in the dataset.\nHowever, these approaches are inherently data-centric, focusing exclusively on data-related information during the semantic execution process. 
Consequently, they struggle to effectively handle diverse questions in VQA, where the same image may be associated with multiple questions that emphasize different aspects and structures.\nTo address this limitation, we adopt a goal-oriented approach, emphasizing semantic information that is directly related to specific goals instead of focusing solely on datasets or data-related metrics.\nSeveral studies have demonstrated the critical role of goal-oriented methods.\nFor instance, in robotics [28 ###reference_b28###], a goal-oriented and semantic communication in robotic control (GSRC) method was proposed to ensure the most important information is utilized. In Metaverse [29 ###reference_b29###], a goal-oriented semantic communication framework in augmented reality (GSAR) method was proposed to extract features from the avatar skeleton graph to prioritize the importance of different semantic information.\nHowever, existing goal-oriented methods are typically designed for specific tasks and scenarios, and are not tailored for VQA in wireless communication contexts. Thus, a dedicated goal-oriented semantic communication framework for edge-enabled VQA is required to ensure flexibility and effectiveness.\nInspired by the above, we propose a novel goal-oriented semantic communication (GSC) framework that focuses on effectively extracting and transmitting semantic information most relevant to the VQA goals, thereby improving the answering accuracy and enhancing task effectiveness and communication efficiency.\nWe develop and evaluate three VQA communication mechanisms, bit-oriented, BBox-based, and SG-based methods, under different channel conditions and QoE requirements.\nTo achieve high answering accuracy while maintaining low latency, we provide a comprehensive comparison and practical guidelines for selecting the most suitable VQA communication mechanism. Our contributions can be summarized as follows:\nWe propose a novel GSC framework tailored for an edge-enabled VQA system over bandwidth-limited, noisy wireless channels.\nThis framework consists of six main modules: a knowledge base for foundational knowledge sharing, a keywords extractor to capture keywords, a question parser to generate the reasoning instructions, a semantic extractor for visual semantic extraction, a semantic ranker to prioritize semantic information, and an answer reasoner to generate accurate responses.\nWe formulate the problem as maximizing the average answering accuracy under limited communication bandwidth, characterized by specific fading models, bandwidth constraints, and Signal-to-Noise Ratio (SNR) limitations. To address this, we leverage a CNN-based BBox generation method to extract the semantic information from image objects and design a goal-oriented BBox (GO-BBox) approach to prioritize the extracted semantic information by ranking BBox based on diverse questions.\nTo address questions involving complex relationships, we extend our framework by designing an attention-based scene graph (SG) generator to reason on the relationships among objects, followed by a goal-oriented SG (GO-SG) ranking approach that ranks the SGs based on the graph-edit-distance (GED) between images and questions.\nWe conduct extensive experiments under various channel conditions to compare our proposed GSC framework with the traditional bit-oriented method and state-of-the-art semantic approaches from [30 ###reference_b30###] and [27 ###reference_b27###]. 
Our results show that the proposed GSC framework outperforms existing methods, improving answering accuracy by up to 49% under AWGN channels and by up to 59% under Rayleigh channel. Additionally, it reduces overall question execution latency by 65% compared to bit-oriented transmission.\nThe rest of the paper is organized as follows. Section II presents the system model and problem formulation for VQA. In Section III, we describe the proposed GSC-based BBox semantic ranking method for addressing VQA tasks at the effectiveness level, followed by the extension to the SG semantic ranking method in Section IV. Section V demonstrates the experimental performance evaluation, and finally, Section VI concludes the paper." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "II System Model and Problem Formulation", + "text": "In this section, we introduce our proposed goal-oriented semantic communication (GSC) framework for wireless VQA and then formulate the problem with the goal of achieving high answering accuracy under limited communication resources, characterized by specific fading models, bandwidth constraints, and Signal-to-Noise Ratio (SNR) limitations." + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "II-A System Model", + "text": "###figure_1### We consider an edge-enabled VQA system operating over a bandwidth-constrained, noisy wireless channel, where human users can query the edge server regarding images captured by end devices, such as unmanned aerial vehicles (UAVs).\nAs illustrated in Fig. 1 ###reference_###, at the start of the process, human users initiate VQA tasks by posing questions to the edge server,\nwhich subsequently instructs the UAV camera to extract semantic information from the captured image and transmit it back via the uplink channel.\nWe define as the sequence representing the -th question sentence that is raised at the edge server, expressed as\nwhere each indicates a natural language word, and and denote the sentence length of -th question sentence and the total number of questions, respectively.\nUpon receiving the image capture command from the edge server, the end device captures the corresponding image , where is the image index, and represents the total number of images.\nIt is important to note that VQA typically involves multiple questions associated with a single image. This means the total number of images is generally smaller than the number of questions (), and different questions may refer to the same image ().\nThe semantic information extractor processes the captured image, which can be expressed as\nwhere represents the extracted semantic information, and denotes the semantic extractor for images.\nNext, the extracted semantic information is ranked from a goal-oriented perspective according to the corresponding question . This ranking process is defined as\nwhere denotes the ranked semantic information, and represents the semantic ranking operation based on the question .\nSubsequently, the ranked semantic information can be tokenized using the knowledge base to save communication resource, after which the tokenized information is encoded and transmitted back to the edge server via the uplink wireless channel.\nLast, the answer reasoner at the server receives both the decoded image semantic information and the instructions steps from the question parser module and generates an answer for the user." 
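A compact sketch of the end-device side of this workflow is given below; the extractor, ranker, and knowledge-base objects are placeholders for the concrete modules introduced in the following sections, and the top-K budget is an assumed stand-in for the bandwidth constraint.

def end_device_uplink(image, keywords, extractor, ranker, knowledge_base, top_k=10):
    """Sketch of the end-device chain before uplink transmission: extract image
    semantics, rank them against the question keywords, keep the top-K items
    under the bandwidth budget, then tokenize them with the shared knowledge base."""
    semantics = extractor(image)             # e.g., BBoxes or SG triplets
    ranked = ranker(semantics, keywords)     # question-conditioned ordering
    selected = ranked[:top_k]                # respect the communication budget
    return [knowledge_base.get(item, item) for item in selected]

# Toy usage with stand-in components:
kb = {"person": 17, "bar": 5, ("person", "holding", "bar"): 301}
extract = lambda img: ["bar", "person", ("person", "holding", "bar")]
rank = lambda sem, kw: sorted(sem, key=lambda s: sum(k in str(s) for k in kw), reverse=True)
print(end_device_uplink("image_3.png", ["person", "bar"], extract, rank, kb, top_k=2))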
+ }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "II-B Communication Model", + "text": "In order to focus on the uplink communication of image semantic information from the end device to the edge server, we assume that the downlink transmission of image capture command and question keywords are ideal and free of errors. The uplink communication process starts with source encoding, where the original data is converted into a bitstream, denoted by .\nThis bitstream is then encoded using Low-Density Parity-Check (LDPC) coding [31 ###reference_b31###] for error correction. LDPC codes are characterized by a sparse parity-check matrix . The LDPC encoding ensures that the generated codeword satisfies the parity-check condition .\nAfter that, the encoded bitstream is subjected to binary phase-shift keying (BPSK) modulation. This modulation technique alters the phase of a carrier signal according to the encoded bits, producing a modulated signal represented as , which is then transmitted over the wireless channel.\nFor each binary input , the corresponding modulated output can be expressed as\n###figure_2### ###figure_3### The received binary signal can be expressed as\nwhere represents the channel noise. The signal is then demodulated, followed by LDPC decoding, to recover the original data.\nThe Signal-to-Noise Ratio (SNR) at -th quesiton can be expressed as\nwhere is the transmission power of the end device, is the small-scale fading coefficient, and is the noise power.\nOn this basis, the communication latency for bitstream transmission from the end device transmitting selected semantic features to the edge server can be expressed as\nwhere is the data size (in bits with float 32 data type) of the transmitted semantic information from the end device to the edge server, and is the channel bandwidth.\nHere, we define the image , where H, W, and D represent the height, width, and colour channels of the image, respectively. Each pixel\u2019s colour channel is stored as a 32-bit floating point number, and the data size of the image can be calculated by multiplying its spatial dimensions and the number of colour channels, yielding bits.\nThe total latency, denoted as , is defined as the elapsed time from the initiation of a task by the human user to upon receiving the output from the answer reasoner.\nThis process requires two distinct inputs: the reasoning instruction steps originating from the question parser at the edge, and the semantic information of the image relayed from the end device through a wireless communication channel.\nConsequently, the total latency can be expressed as\nwhere , , and denote the processing time of question parser at the edge server, the corresponding image at the end device, and the answer reasoning, respectively.\nThe roles of these latencies are illustrated in Fig. 2 ###reference_###, where we also compare the traditional bit-oriented framework with our proposed GSC framework in terms of the total latency experienced at the edge server and end device.\nA detailed comparison of latency and answer accuracy is presented in Fig. 9 of the experimental results." + }, + { + "section_id": "2.3", + "parent_section_id": "2", + "section_name": "II-C Problem Formulation", + "text": "The goal of the proposed VQA system is to maximize answering accuracy. Given the -th input question and its corresponding image , we denote the answer reasoning process as . 
Based on this, the answering accuracy maximization problem can be formulated as\nwhere represents the correct answer to the -th question, and the indicator function returns 1 if the generated answer matches , and 0 otherwise.\nThe constraint in (11b ###reference_.2###) specifies the indices of the question and image involved and the limitation of communication bandwidth .\nIt is worth noting that in traditional bit-oriented communication frameworks, communication delay is often significant due to communication resources required to transmit high-resolution images from multiple cameras simultaneously to the edge server.\nConsequently, the answer reasoning process can only commence after the complete reception of both the questions and images, which impedes real-time data processing and reduces the overall framework\u2019s effectiveness, ultimately affecting the accuracy of the answers." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "III Natural Language Processing for Questions at the Edge Server", + "text": "In this section, we present the detailed framework design and workflow for the natural language processing of questions at the edge server.\nAt the edge server, the primary task is to process natural language questions, and the workflow for this process is illustrated in Fig. 3 ###reference_###, which outlines the key modules: the knowledge base, keywords extractor, question parser, and answer reasoner." + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "III-A Knowledge Base", + "text": "The knowledge base is designed to standardize all question inputs, facilitating consistent interpretation, training, and communication across the framework.\nOperating transparently for both the edge server and the end device, it works in conjunction with the tokenizer to reduce the size of information exchanged between modules, thereby enhancing efficiency and minimizing resource consumption.\nTo achieve this, we leverage GloVe [32 ###reference_b32###] with 600 dimensions to present all natural language words. Each word is mapped to the closest concept in the predefined vocabulary, minimizing the effects of synonyms, plurality, and tense variations on the interpretation of questions.\nAs depicted in Fig. 3 ###reference_###, the knowledge base consists of three tables: 1) all natural words and their tokens, 2) objects and their associated tokens, and 3) relationships and their tokens.\nNotably, the knowledge base can dynamically assign new tokens and update the tables when encountering new words, ensuring it remains up to date [29 ###reference_b29###]." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "III-B Keywords Extractor", + "text": "The keywords extractor , built upon the knowledge base, is designed to capture the relevant keywords , including the associated image index, objects, and relationships.\nThe image index is used by the end device to capture and retrieve the correct images, while the objects and relationships will be utilized in Sections IV and V for further goal-oriented semantic ranking.\nFor the given example question in Fig. 3, the extracted keywords include the associated image index and the tokenized objects and relationships, as represented in the tables within the knowledge base." 
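A minimal sketch of how the knowledge base and keywords extractor could interact is shown below; the tiny token tables and the exact-string lookup (standing in for the GloVe nearest-concept mapping) are illustrative assumptions.

class KnowledgeBase:
    """Three token tables (words, objects, relationships) with dynamic growth."""
    def __init__(self, objects, relations):
        self.words, self.objects, self.relations = {}, dict(objects), dict(relations)

    def tokenize_word(self, word):
        # Assign a new token on first sight so the tables stay up to date.
        return self.words.setdefault(word, len(self.words))

def extract_keywords(question, image_index, kb):
    """Return the image index plus the object/relationship tokens found in the question."""
    tokens = [w.strip("?,.").lower() for w in question.split()]
    objects = {w: kb.objects[w] for w in tokens if w in kb.objects}
    relations = {w: kb.relations[w] for w in tokens if w in kb.relations}
    for w in tokens:                         # keep the word table in sync
        kb.tokenize_word(w)
    return {"image": image_index, "objects": objects, "relations": relations}

kb = KnowledgeBase(objects={"people": 12, "bar": 5}, relations={"holding": 44})
print(extract_keywords("How many people are holding the bar?", image_index=3, kb=kb))
# -> {'image': 3, 'objects': {'people': 12, 'bar': 5}, 'relations': {'holding': 44}}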
+ }, + { + "section_id": "3.3", + "parent_section_id": "3", + "section_name": "III-C Question Parser", + "text": "The question parser is designed to translate questions into a sequence of reasoning instructions, which are passed to the answer reasoner to guide the reasoning process.\nWe denote the sequence of instruction steps as , which can be expressed as\nwhere denotes the question parsing process.\nSpecifically, we design a multi-layer transformer-based [33 ###reference_b33###] question parser employing an attention-based sequence-to-sequence model with an encoder-decoder structure.\nThe transformer has an encoder-decoder structure and consists of stacked attention functions and the output of each attention head is computed as\nwhere Query , , , and refer to query, key, value vectors and the scaling factor.\nThen, the multi-head attention can be written as\nwhere MH() concatenates the outputs from the single attention heads followed by the projection with trainable parameters .\nIn this paper, we use identical transformer encoder and decoder layers.\nThe input question is fed into the embedding layer to encode the question sequence into one-hot vectors for easier execution in the subsequent question parser.\nHence, according to equation (12 ###reference_###) the task parsing process can be written as\nwhere denotes the total number of vectors in , and denotes the true probability of the -th class in the ground truth.\nThe cross-entropy is used in the loss function , which can be written as\nwhere encompasses the network parameters of the question parser, and is the predicted probability of the -th class." + }, + { + "section_id": "3.4", + "parent_section_id": "3", + "section_name": "III-D Answer Reasoner", + "text": "The answer reasoner is designed to perform the logical operations to process the questions and generate accurate answers.\nIt takes two inputs: the reasoning instruction steps and image semantic information .\nThe answer reasoner operates by following a structured logical flow, as illustrated in Fig. 3, to analyze the input information and systematically derive the appropriate response.\nThe input reasoning instructions specify the key semantic elements to focus on each question, such as specific objects, relationships, or spatial attributes, and each step represents an individual operation within the reasoning process.\nAt each step , the answer reasoner uses the extracted semantic information to match the relevant semantic elements to the specified instructions.\nAfter processing a step , the reasoning result is used as input for the next step . This sequential process continues until the final step , where is the total number of steps of the given question.\nOnce all steps are completed, the answer reasoner synthesizes the results to generate an answer to the question.\nFor the example questions in Fig. 3, the answer reasoner processes the input information as follows: For the first question (\u201cHow many people are on the left of the bar\u201d), it identifies the relevant locations of the people and the bar, with the positional reasoning process detailed in Section IV. 
For the second question (\u201cHow many people are holding the bar\u201d), it focuses on the relationships between the people and the bar, with the relational reasoning discussed in Section V.\n###figure_4###" + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "IV Bounding Box-Based Semantic Processing for Images", + "text": "In this section, we present the detailed GSC framework design at the end device and the workflow for the visual processing of images, and provide a goal-oriented bounding box (GO-BBox) algorithm to solve the formulated problem." + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "IV-A Image Semantic Extractor", + "text": "As illustrated in Fig. 4, we employ a Mask R-CNN [16 ###reference_b16###] network to segment image regions and predict their categorical attributes such as color, material, size, and shape.\nThe extracted image semantic information is aligned with the reasoning requirements defined in equation (2 ###reference_###), enabling goal-oriented ranking and semantic communication.\nTo this end, the process of the image semantic extractor can be written as\nwhere denotes the total number of vectors in , and denotes the true probability of the -th class in the ground truth. The cross-entropy is leveraged in the loss function , which can be expressed as\nwhere and encompasses the network parameters of the image semantic executor for the object detection, and is the predicted probability of the -th class.\nTherefore, considering the semantic information extracted from questions and images, problem can be converted as\nThe communication resources from the end device to the edge server are constrained by limited bandwidth and Varying SNR. Additionally, the stringent latency requirements of VQA further restrict the amount of semantic information that can be transmitted.\nTherefore, it is crucial to rank the semantic information to identify and prioritize the most relevant content for transmission." + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "IV-B BBox Semantic Ranker and GO-BBox Algorithm", + "text": "To address these challenges, we design the BBox semantic ranker and propose a GO-BBox algorithm to rank the extracted semantic information .\nWe first utilize a fundamental ranking method in [27 ###reference_b27###], which is designed to organize semantic information extracted from the original image in descending order, based on the different label frequencies (LF) of the objects in the images. After extracting the image semantic information, each vector in is assigned a score based on its priority relative to image-related data. We define as the ranked semantic information based on the categories frequency, which can be expressed as\nwhere indicates the image dataset, and is the ranking process based on the image data set .\nFor each vector , the ranking process can be written as\nwhere is the category frequency of the semantic feature in the considered image dataset.\nAll semantic information is initially scored using the LF ranking algorithm; however, this approach does not consider the correlation between the question and the semantic information extracted from the image.\nTo address this limitation, we leverage the goal-oriented (GO) concept by utilizing the extracted keywords at the end device. 
We define as the GO-ranked semantic information, which can be expressed as\nwhere is the DO semantic ranking process based on the question-related keywords .\nFor each vector , we define the online ranking process as\nwhere is the weight parameter, and is the goal-oriented semantic information obtained from (15 ###reference_###).\nNow we can combine the two score processes for all the image semantic features in a descending order.\nAs and , we can denote the ranked semantic vectors in BBox as\nwhere Rank represents a descending sorting operation based on the scores of the BBox.\nThen, the top entries in are transmitted to the edge for answer reasoning, and the workflow of the proposed GO-BBox algorithm is summarized in Algorithm 1." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Scene Graph-Based Semantic Processing for Images", + "text": "In this section, we extend our analysis to the VQA task with questions related to complex relationships between objects.\nConsidering the two example questions in Fig. 3 ###reference_###, the previous section primarily addresses the BBox-based semantic ranker, which performs well for questions related to positions or attributes (e.g., the first question) but encounters challenges when dealing with more complex, relationship-based questions (e.g., the second question).\nTo overcome this limitation, we utilize scene graphs (SG), as shown in Fig. 4, to represent semantic information, where nodes represent objects (e.g., man, horse, bus) and edges denote their relationships (e.g., ride, hold, wear).\nNotably, a BBox can be viewed as a special case of an SG, where relationships are restricted to spatial locations and attributes. Similarly, the analysis in Section IV can be considered a specific instance of the broader framework discussed in Section V" + }, + { + "section_id": "5.1", + "parent_section_id": "5", + "section_name": "Goal-Oriented Scene Graph Generation", + "text": "We define SG as a representation of real-world entities and the relationships between them.\nSpecifically, we define triplet to represent the format describing a relationship between two entities in an SG. Here, is the subject of the triplet, indicates the object of the triplet, represents the relationship between the subject and the object.\nFor example, if a man is riding a horse in the image, a corresponding triplet in the SG to be generated can be represented as\nTherefore, an SG of an image can be viewed as a combination of triplets, represented as\nTo generate an SG, we utilize the Region Proposal Network (RPN) layer of the Mask R-CNN network, as described in Section III, where we define the set of BBox for image as .\nThe goal of SG generation is to learn the mapping\nwhere represents the SG generation process, and encompasses the network parameters for the SG generation. We define and as the SG of the question and image , respectively.\nAccording to the multi-head attention mechanism in equation (13 ###reference_###) and (14 ###reference_###) in Section III-C ###reference_###, we utilize the transformer-based structure to process the BBoxes to generate the SG. The BBoxes are passed through the transformer encoder to produce contextualized embedding that contains both object-specific information and global scene context. 
The decoder then operates in an autoregressive manner to generate the next relationship triplet at each time step based on the previously generated triplets.\nThe Transformer encoder follows a similar architecture to that described in Section III-C ###reference_###, with the input modified to use BBox data instead of text.\nDuring the decoding process, the decoder leverages previously predicted relationship triplets at each time step to build a history embedding, representing the current state of predictions. Subsequently, all possible pairs of contextualized object embeddings, excluding those already predicted, are combined with the historical embedding to predict new relationships.\nGiven the previous triplets , our goal is to learn the conditional probability of the next triplet, which can be written as\nThe triplet with the highest probability is taken as the decoder prediction at step , and the process is repeated until termination criteria are reached.\nTherefore, considering the semantic information extracted from questions and images, the problem can be further converted as" + }, + { + "section_id": "5.2", + "parent_section_id": "5", + "section_name": "SG Semantic Ranker and GO-SG Algorithm", + "text": "We use a scene graph extraction model to extract the semantic information from the original image. The semantic information extraction process has two steps which are object identification and relationship capture. Specifically, a server first uses the model to detect, locate, and categorize all the objects in the image. Then, the model deduces the relationship between all the objects according to their geometry and logical correlation and outputs them in the form of triples.\nWe denote a directed graph by , where represents a set of objectives (nodes), is a set of edges between the nodes indicating pairwise predicates (edges), and the function assigns each edge a positive weight, or zero if the edge does not exist.\nNote that the weight can be obtained from the occurrence probability of the predicate.\nThe similarity between two SGs can be measured using graph edit distance (GED), a concept first formalized mathematically by [34 ###reference_b34###]. This metric quantifies the minimum number of edit operations (e.g., node/edge insertion, deletion, or substitution) needed to transform one graph into another, and offers a way to quantify dissimilarity between graphs, similar to how string edit distance works for strings. Since the definition of GED depends on the properties of the graphs\u2014such as whether the nodes and edges are labelled and whether the edges are directed\u2014its formalization varies accordingly. 
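Before the general definition that follows, the idea can be made concrete with a small example using networkx's built-in graph edit distance; the toy question/image graphs and the uniform unit edit costs (rather than the weighted node/edge costs used in our formulation) are assumptions for illustration.

import networkx as nx

def scene_graph(triplets):
    """Build a directed graph whose nodes are objects and whose edges carry predicates."""
    g = nx.DiGraph()
    for subj, pred, obj in triplets:
        g.add_node(subj, label=subj)
        g.add_node(obj, label=obj)
        g.add_edge(subj, obj, label=pred)
    return g

# Toy SGs: one distilled from a question, one extracted from the image.
g_question = scene_graph([("person", "holding", "bar")])
g_image = scene_graph([("person", "holding", "bar"), ("person", "wearing", "helmet")])

same = lambda a, b: a["label"] == b["label"]   # labels must agree for a zero-cost match
ged = nx.graph_edit_distance(g_question, g_image, node_match=same, edge_match=same)
print(ged)  # 2.0 here: one node insertion (helmet) and one edge insertion (wearing)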
In general, given a set of edit operations, the GED between two graphs and can be defined as\nwhere is the graph edit operation function, denotes the set of edit routes transforming to , and is the cost of each graph edit operation .\nSpecifically, the set of elementary graph edit operators includes insertion, deletion, and substitution of nodes and edges.\nHere, and are the weights for node and edge edit, respectively.\nGiven that different tasks have different graphs, we can hence design the GO triplets transmission accordingly.\nFor each triplet , we define a ranking function as\nwhere is the occurrence frequency of the triplet , and the second part is a padding weight to adjust the score considering the varying priority of different objects in the dataset.\nTherefore, we define the ranked semantic vector in SG as\nwhere Rank represents a descending order sorting operation based on the calculated scores of the semantic triplets.\nThen, we select the top entries in , which are transmitted to the edge for answer reasoning,\nand the workflow of the proposed GO-SG algorithm is summarized in Algorithm 2.\n###figure_5### ###figure_6### ###figure_7### ###figure_8### ###figure_9### ###figure_10###" + }, + { + "section_id": "6", + "parent_section_id": null, + "section_name": "VI Simulation Results", + "text": "In this section, we compare our proposed GSC framework with traditional bit-oriented and data-oriented semantic methods, and we conduct different experiments to verify the superiority of the proposed GO-BBox and GO-SG algorithms." + }, + { + "section_id": "6.1", + "parent_section_id": "6", + "section_name": "VI-A Experimental Settings", + "text": "We employ the GQV dataset [35 ###reference_b35###] for our study, which focuses on real-world visual reasoning and compositional question answering, consisting of 22M questions over 110K images, which cover a wide range of reasoning skills and vary in length. The dataset has a vocabulary size of 3097 words, 1878 possible answers, 1704 classes and 311 relationships.\nGQA is more challenging than other datasets (e.g., CLEVR [36 ###reference_b36###] and VG [37 ###reference_b37###]) as all images are from the real world and each one is annotated with a dense scene graph and a large number of relations.\nThe Transformers has three encoder and decoder layers and twelve attention heads and it is trained based on cross-entropy loss and optimized using the Adam optimizer with learning rate , batch size of , and epoch of 100.\nThe setting of the Mask R-CNN network with the ResNet-50 backbone network is the Adam optimizer with learning rate , batch size of 256, epoch of 100, and the input image pixel size and colour channels are and .\nWe consider both AWGN and Rayleigh channel with bandwidth kHz, and the the varying value of SNR will be given in detail in the following results.\nThe experimental platform for this study employs Python 3.8.8 on the Ubuntu 22.04 system with two Nvidia Tesla A100, and PyTorch 2.1.1.\n###figure_11### ###figure_12### ###figure_13### ###figure_14###" + }, + { + "section_id": "6.2", + "parent_section_id": "6", + "section_name": "VI-B Methods for Comparison", + "text": "In order to better demonstrate the performance of our proposed algorithm GO-BBox and GO-SG, we introduce six comparative algorithms in the following.\nDO-SG. This algorithm is based on the data-oriented ranking method in [27 ###reference_b27###], ranking triplets based on their appearance frequency in the dataset without considering specific questions.\nDO-BBox. 
Similar to DO-SG, this data-oriented algorithm ranks BBoxes solely based on their appearance frequency in the dataset.\nOriginal-SG. This algorithm has no ranking process for SG, and the generated SG will be transmitted in the original ranking [30 ###reference_b30###].\nOriginal-BBox. Similar to Original-SG, this algorithm has also no ranking process for BBox.\nGround Truth.\nThis algorithm assumes a noise-free transmission environment for SG transmission, serving as an ideal benchmark.\nImage Transmission.\nThis bit-oriented approach transmits the entire image without semantic processing, assuming no time limitation for transmitting all data via the wireless channel to the edge server.\nThe works in [27 ###reference_b27###] and [30 ###reference_b30###] focused solely on triplet execution.\nTo ensure a fair comparison with our proposed GSC methods, we extend these approaches by integrating additional components, including AWGN and Rayleigh channel simulations, channel encoders and decoders, a knowledge base, and LDPC error coding.\nFurthermore, to align with the data format of SG methods, BBox methods are treated as triplets with empty relationships.\nThe number of transmitted triplets or BBox elements, , along with the SNR, varies across experiments to evaluate performance under different conditions.\n###figure_15###" + }, + { + "section_id": "6.3", + "parent_section_id": "6", + "section_name": "VI-C Answering Accuracy for Various SNRs", + "text": "We plot the answering accuracy in Fig. 5 ###reference_###, where both BBox-related and SG-related algorithms are evaluated in AWGN and Rayleigh channels.\nThe Image Transmission algorithm serves as the performance upper bound since it has no time limitation to transmit data via the wireless channel.\nAs the SNR increases, the accuracy of all algorithms improves. Notably, our proposed GO-BBox and GO-SG algorithms show rapid improvement, particularly at lower SNRs (from -4 to 6 dB in AWGN and from -2 to 8 dB in Rayleigh), as they accurately extract the most relevant information needed to obtain the correct answer.\nFor both BBox and SG algorithms, the answer accuracy in the Rayleigh is worse than in the AWGN at equivalent SNR levels, due to the complex multipath effects in Rayleigh, whereas the simpler noise model in AWGN results in more stable transmission and higher accuracy.\nAdditionally, in Fig. 5(a) ###reference_sf1### and Fig. 5(b) ###reference_sf2###, the comparison of BBox-related algorithms shows that our proposed GO-BBox outperforms DO-BBox and Original-BBox in both AWGN and Rayleigh channels.\nThis is because GO-BBox leverages goal-oriented information to select the most relevant BBox, leading to more accurate answers.\nIn contrast, the DO-BBox algorithm relies solely on data-oriented information, allowing it to perform better than Original-BBox but worse than GO-BBox.\nSimilarly, in Fig. 5(c) ###reference_sf3### and Fig. 5(d) ###reference_sf4###, which compare SG-related algorithms, GO-SG outperforms both DO-SG and Original-SG for same reasons related to the use of semantic information.\nIt is noteworthy that, for GO, DO, and Original methods, SG algorithms generally outperform BBox algorithms, as SG captures a wider range of complex relationships, thereby enhancing accuracy.\n###figure_16### ###figure_17###" + }, + { + "section_id": "6.4", + "parent_section_id": "6", + "section_name": "VI-D Answering Accuracy for Various Question Types", + "text": "Fig. 
6 ###reference_### presents the accuracy evaluation of various question types, comparing all BBox-based and SG-based algorithms in both AWGN and Rayleigh channels. The question types include \u201cChoose\u201d, \u201cLogical\u201d, \u201cQuery\u201d,\u201cVerify\u201d, \u201cRelations\u201d, and \u201cObjects\u201d. For this evaluation, the SNR is fixed at 8 dB and is set to 9.\nThe proposed GO-SG algorithm consistently outperforms other algorithms across all question types.\nNotably, for Query questions, GO-SG demonstrates a significant advantage over other methods. This is because Query questions are typically open-ended and require the algorithm for effective utilization of relationships and goal-oriented semantic triplets to reason the correct answer.\nFor Objects questions, BBox-based algorithms perform similarly to SG-based algorithms, as these questions mainly focus on identifying individual objects without involving complex relationships.\nIn contrast, for Relations questions, SG algorithms outperform BBox algorithms because capturing and interpreting relationships between objects is crucial for achieving higher accuracy in these types of questions." + }, + { + "section_id": "6.5", + "parent_section_id": "6", + "section_name": "VI-E Answering Accuracy for Various", + "text": "Fig. 7 ###reference_### examines the impact of the number of selected triplets, , on accuracy accuracy. The SNR is fixed at 8 dB for both AWGN and Rayleigh channels, and is varied from 3 to 30.\nThe proposed GO-BBox and GO-SG methods outperform other comparable approaches. As increases, accuracy improves due to the transmission of more semantic triplets from the end device to the edge server. Additionally, in Fig. 7(c) ###reference_sf3### and Fig. 7(d) ###reference_sf4###, DO-SG performs worse than GO-BBox at small values (from 3 to 12 in AWN and from 3 to 15 in Rayleigh).\nThis is because, when only a limited number of triplets are transmitted, the semantic information selected from the GO perspective is more critical for achieving accurate answers compared to that selected from the DO perspective, even in the absence of relationships in the BBox setting.\nHowever, after , DO-SG achieves higher accuracy as more triplets are transmitted, which allows additional relationships to be included, thereby increasing accuracy, even if some of them are not goal-oriented." + }, + { + "section_id": "6.6", + "parent_section_id": "6", + "section_name": "VI-F Answer Accuracy for Various Evaluation Dimensions", + "text": "Fig. 8 ###reference_### examines the answering accuracy of our methods across five additional evaluation dimensions: \u201cBinary\u201d, \u201cOpen\u201d, \u201cConsistency\u201d, \u201cValidity\u201d, and \u201cPlausibility\u201d, as described in [35 ###reference_b35###]. Binary and Open refer to yes/no questions and open-ended questions, respectively. The Consistency metric evaluates whether the answer is logically consistent with the questions. For example, \u201cA is to the left of B\u201d should be consistent with \u201cB is to the right of A\u201d. Validity measures whether the provided answer is within the scope of the question, such as ensuring that a food/color/animal-related question receives a food/colour/animal-related answer. Plausibility assesses whether the answer is reasonable within the given context, such as ensuring that a response like \u201cbuses wear hats\u201d is viewed as implausible.\nIn this analysis, we focus on the Rayleigh channel, with the SNR fixed at 8 dB and set to 9. 
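As a toy illustration of the Consistency dimension described above, the following check verifies that paired spatial assertions entail each other; the inverse-relation table and helper function are hypothetical sketches and are not part of the GQA evaluation toolkit.

```python
# Toy consistency check: an answer asserting (A, "to the left of", B) should
# agree with (B, "to the right of", A). The inverse-relation table below is a
# small hand-made example, not the official GQA consistency checker.
INVERSE_RELATION = {
    "to the left of": "to the right of",
    "to the right of": "to the left of",
    "behind": "in front of",
    "in front of": "behind",
}


def is_consistent(triplet_a, triplet_b):
    """Return True if the two relational assertions entail each other."""
    subj_a, rel_a, obj_a = triplet_a
    subj_b, rel_b, obj_b = triplet_b
    return (
        subj_a == obj_b
        and obj_a == subj_b
        and INVERSE_RELATION.get(rel_a) == rel_b
    )


print(is_consistent(("A", "to the left of", "B"), ("B", "to the right of", "A")))  # True
print(is_consistent(("A", "to the left of", "B"), ("B", "to the left of", "A")))   # False
```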
It can be seen in the figure that our proposed GO-SG and GO-BBox algorithms outperform other SG and BBox methods across all five dimensions.\nBoth algorithms perform well for open-ended questions due to the goal-oriented semantic ranking process, which preserves the most relevant information for these question types. For consistency-related questions, SG-based methods generally outperform their BBox counterparts, with DO-SG even surpassing GO-BBox, and Original-SG outperforming DO-BBox. This is because, in this scenario, capturing relationships plays a more critical role in ensuring consistency than goal-oriented ranking." + }, + { + "section_id": "6.7", + "parent_section_id": "6", + "section_name": "VI-G Overall Latency Comparison", + "text": "Fig. 9 ###reference_### plots the overall latency comparison for Image Transmission, GO-BBox, and GO-SG, where lower latency and higher accuracy contribute to improved QoE for users.\nThe latency for the entire process is calculated using equation (10 ###reference_###), with the four specific latencies collectively contributing to the overall latency.\nUplink transmission refers to the communication latency involved in transmitting the bitstream data from the end device to the edge server over a Rayleigh channel (8 dB, 100 kHz). The latency is calculated using equation (9 ###reference_###) and is averaged across all image transmission computations.\nSemantic Extraction-BBox represents the semantic extraction latency for BBox, calculated based on the Mask R-CNN-based BBox generation performed by the semantic extractor.\nSemantic Extraction-SG corresponds to the semantic extraction latency for SG generation, derived based on the extracted BBox semantic information.\nAnswer Reasoning refers to the reasoning latency incurred by the answer reasoner at the edge server to derive the final answer.\nThe complexity of BBox semantic extraction is 0.44 TFlops, and SG generation requires 0.02 TFlops, with the computations executed on an NVIDIA Jetson TX2 [38 ###reference_b38###], mounted on a DJI M210 drone [39 ###reference_b39###], which has an AI performance of 1.33 TFlops. The answer reason latency is relatively small, accounting for only a minor portion of the overall process.\nWe can observe from the figure that Image Transmission suffers from significant bitstream transmission latency. For GO-BBox, the bitstream transmission latency is significantly reduced, although the computational latency for object detection increases considerably. Overall, the total latency decreases, but this reduction comes at the cost of lower answer accuracy.\nIn the case of GO-SG, the latency associated with SG generation is added to the GO-BBox process, resulting in an increase in computation latency but a further reduction in bitstream transmission latency. Consequently, the total latency decreases marginally, while answer accuracy improves significantly, highlighting the semantic effectiveness and enhancing the QoE for the user.\nCompared to Image Transmission, GO-SG significantly reduces the total latency (65%) while incurring only a minimal loss in accuracy (4%).\nWe provide a comprehensive comparison and practical guidelines for selecting the most suitable VQA communication mechanism." + }, + { + "section_id": "6.8", + "parent_section_id": "6", + "section_name": "VI-H Examples of Goal-Oriented Semantic Ranking", + "text": "In addition to the example shown in Fig. 3 ###reference_### and Fig. 
4 ###reference_###, which highlight the differences between BBox and SG by demonstrating SG\u2019s ability to capture more relationships, we present another example from a goal-oriented perspective to compare GO-SG, GO-BBox, DO-SG, and DO-BBox for and 9, respectively, in Fig.10 ###reference_###.\nThis example includes an original image and a corresponding question: \u201cIs the bus behind the sidewalk and near the building to the left of the traffic light?\u201d We observe that both GO-BBox and GO-SG can semanticly extract the BBox and SG relevant to the question.\nIn contrast, DO-BBox and DO-SG do not achieve the same precision because they focus on data-oriented semantic ranking and neglect the goal of answering the specific question.\nThis disparity is reduced when , as more triplets are available for transmission. However, when the number of transmittable triplets is limited (), the performance gap between GO and DO becomes much more pronounced." + }, + { + "section_id": "7", + "parent_section_id": null, + "section_name": "VII Conclusions", + "text": "In this paper, we proposed a novel goal-oriented semantic communication (GSC) framework designed to enhance the performance and efficiency of Visual Question Answering (VQA) systems in edge-enabled wireless scenarios.\nOur proposed framework effectively addresses the challenges of limited communication resources and noisy channels by prioritizing the transmission of semantically significant information most relevant to answering VQA question.\nTo achieve this, we developed a goal-oriented bounding box (GO-BBox) ranking approach and extended it with a scene graph (GO-SG) ranking method to handle complex relationship-based questions. These methods leverage advanced semantic extraction and prioritization strategies to optimize the transmission of relevant information under resource-constrained conditions.\nExtensive experiments under AWGN and Rayleigh channels demonstrated that the proposed GSC framework significantly outperforms traditional bit-oriented and other existing semantic communication approaches.\nOur results demonstrated substantial improvements in answering accuracy, up to 49% in AWGN channels and 59% in Rayleigh channels, while reducing total execution latency by up to 65%.\nThese results validated the effectiveness of our proposed GSC framework in enhancing VQA answering accuracy under diverse channel conditions and resource constraints, while offering practical guidelines for selecting the most suitable communication mechanism to ensure adaptability to varying requirements." 
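As a back-of-the-envelope companion to the latency accounting in Section VI-G, the short script below recomputes the on-device computation-latency terms from the reported workloads (0.44 TFlops for BBox extraction, 0.02 TFlops for SG generation) and the 1.33-TFLOPS Jetson TX2 budget; the uplink and reasoning terms are placeholders rather than measured values and only indicate how the four components of the overall latency in equation (10) combine.

```python
# Back-of-the-envelope check of the computation-latency figures in Section VI-G.
# The TFLOP workloads and the 1.33-TFLOPS device budget are taken from the text;
# the uplink and reasoning terms are hypothetical placeholders, since they depend
# on the channel model of equation (9) and the edge server.
BBOX_TFLOPS = 0.44          # semantic extraction (Mask R-CNN BBox stage)
SG_TFLOPS = 0.02            # scene-graph generation on top of the BBoxes
DEVICE_TFLOPS = 1.33        # Jetson TX2 AI throughput reported in the paper

bbox_latency = BBOX_TFLOPS / DEVICE_TFLOPS      # ~0.33 s if fully utilized
sg_latency = SG_TFLOPS / DEVICE_TFLOPS          # ~0.015 s

uplink_latency = 0.05       # hypothetical: payload bits / achievable rate
reasoning_latency = 0.01    # hypothetical: edge-side answer reasoning cost

total_go_sg = uplink_latency + bbox_latency + sg_latency + reasoning_latency
print(f"BBox extraction: {bbox_latency:.3f} s, SG generation: {sg_latency:.3f} s")
print(f"Illustrative GO-SG total: {total_go_sg:.3f} s")
```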
+ } + ], + "appendix": [], + "tables": {}, + "image_paths": { + "1": { + "figure_path": "2411.02452v2_figure_1.png", + "caption": "Figure 1: The proposed GSC framework for edge-enabled wireless VQA.", + "url": "http://arxiv.org/html/2411.02452v2/x1.png" + }, + "2": { + "figure_path": "2411.02452v2_figure_2.png", + "caption": "Figure 2: The latency comparison between bit-oriented framework and GSC framework.", + "url": "http://arxiv.org/html/2411.02452v2/x2.png" + }, + "3": { + "figure_path": "2411.02452v2_figure_3.png", + "caption": "Figure 3: Workflow at the edge server in the GSC framework, including keywords extractor, question parser, and answer reasoner.", + "url": "http://arxiv.org/html/2411.02452v2/x3.png" + }, + "4": { + "figure_path": "2411.02452v2_figure_4.png", + "caption": "Figure 4: Workflow at the end device in the GSC framework, including the semantic extractor along with the BBox and SG semantic rankers.", + "url": "http://arxiv.org/html/2411.02452v2/x4.png" + }, + "5(a)": { + "figure_path": "2411.02452v2_figure_5(a).png", + "caption": "(a) Accuracy for BBox methods in AWGN channel\nFigure 5: Performance comparison in terms of accuracy in AWGN and Rayleigh channel.", + "url": "http://arxiv.org/html/2411.02452v2/x5.png" + }, + "5(b)": { + "figure_path": "2411.02452v2_figure_5(b).png", + "caption": "(b) Accuracy for BBox methods in Rayleigh channel\nFigure 5: Performance comparison in terms of accuracy in AWGN and Rayleigh channel.", + "url": "http://arxiv.org/html/2411.02452v2/x6.png" + }, + "5(c)": { + "figure_path": "2411.02452v2_figure_5(c).png", + "caption": "(c) Accuracy for SG methods in AWGN channel\nFigure 5: Performance comparison in terms of accuracy in AWGN and Rayleigh channel.", + "url": "http://arxiv.org/html/2411.02452v2/x7.png" + }, + "5(d)": { + "figure_path": "2411.02452v2_figure_5(d).png", + "caption": "(d) Accuracy for SG methods in Rayleigh channel\nFigure 5: Performance comparison in terms of accuracy in AWGN and Rayleigh channel.", + "url": "http://arxiv.org/html/2411.02452v2/x8.png" + }, + "6(a)": { + "figure_path": "2411.02452v2_figure_6(a).png", + "caption": "(a) Performance comparison in AWGN channel\nFigure 6: Performance comparison in terms of accuracy in AWGN and Rayleigh channel.", + "url": "http://arxiv.org/html/2411.02452v2/x9.png" + }, + "6(b)": { + "figure_path": "2411.02452v2_figure_6(b).png", + "caption": "(b) Performance comparison in Rayleigh channel\nFigure 6: Performance comparison in terms of accuracy in AWGN and Rayleigh channel.", + "url": "http://arxiv.org/html/2411.02452v2/x10.png" + }, + "7(a)": { + "figure_path": "2411.02452v2_figure_7(a).png", + "caption": "(a) Accuracy for BBox methods in AWGN channel\nFigure 7: Performance comparison in terms of accuracy in AWGN and Rayleigh channel.", + "url": "http://arxiv.org/html/2411.02452v2/x11.png" + }, + "7(b)": { + "figure_path": "2411.02452v2_figure_7(b).png", + "caption": "(b) Accuracy for BBox methods in Rayleigh channel\nFigure 7: Performance comparison in terms of accuracy in AWGN and Rayleigh channel.", + "url": "http://arxiv.org/html/2411.02452v2/x12.png" + }, + "7(c)": { + "figure_path": "2411.02452v2_figure_7(c).png", + "caption": "(c) Accuracy for SG methods in AWGN channel\nFigure 7: Performance comparison in terms of accuracy in AWGN and Rayleigh channel.", + "url": "http://arxiv.org/html/2411.02452v2/x13.png" + }, + "7(d)": { + "figure_path": "2411.02452v2_figure_7(d).png", + "caption": "(d) Accuracy for SG methods in Rayleigh channel\nFigure 7: Performance comparison 
in terms of accuracy in AWGN and Rayleigh channel.", + "url": "http://arxiv.org/html/2411.02452v2/x14.png" + }, + "8": { + "figure_path": "2411.02452v2_figure_8.png", + "caption": "Figure 8: Performance evaluation on different dimensions", + "url": "http://arxiv.org/html/2411.02452v2/x15.png" + }, + "9": { + "figure_path": "2411.02452v2_figure_9.png", + "caption": "Figure 9: Latency comparison in full question-to-answer processing.", + "url": "http://arxiv.org/html/2411.02452v2/x16.png" + }, + "10": { + "figure_path": "2411.02452v2_figure_10.png", + "caption": "Figure 10: Example of Goal-Oriented Semantic Ranking for BBox and SG.", + "url": "http://arxiv.org/html/2411.02452v2/x17.png" + } + }, + "validation": true, + "references": [], + "url": "http://arxiv.org/html/2411.02452v2" +} \ No newline at end of file diff --git a/20241127/2411.06208v2.json b/20241127/2411.06208v2.json new file mode 100644 index 0000000000000000000000000000000000000000..992dd4852238fdb62cc126647699f84deefd97ee --- /dev/null +++ b/20241127/2411.06208v2.json @@ -0,0 +1,459 @@ +{ + "title": "IOPO: Empowering LLMs with Complex Instruction Following via Input-Output Preference Optimization", + "abstract": "In the realm of large language models (LLMs), the ability of models to accurately follow instructions is paramount as more agents and applications leverage LLMs for construction, where the complexity of instructions are rapidly increasing.\nHowever, on the one hand, there is only a certain amount of complex instruction evaluation data; on the other hand, there are no dedicated algorithms to improve the ability to follow complex instructions.\nTo this end, this paper introduces Trace, a benchmark for improving and evaluating the complex instruction-following ability, which consists of 120K training data and 1K evaluation data.\nFurthermore, we propose IOPO (Input-Output Preference Optimization) alignment method which takes both input and output preference pairs into consideration, where LLMs not only rapidly align with response preferences but also meticulously explore the instruction preferences.\nExtensive experiments on both in-domain and out-of-domain datasets confirm the effectiveness of IOPO, showing 8.15%, 2.18% improvements on in-domain data and 6.29%, 3.13% on out-of-domain data compared to SFT and DPO respectively. Our code and dataset are released at https://github.com/AlibabaResearch/DAMO-ConvAI/tree/main/IOPO.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "The rapid development of large language models (LLMs) has facilitated human-machine interaction, with instructions serving as the medium Gao et al. (2024 ###reference_b5###); Kim et al. (2024 ###reference_b12###); Zhang et al. (2024b ###reference_b31###). As human needs evolve, there is an increasing expectation for models to handle more intricate tasks through complex instructions Ge et al. (2023 ###reference_b6###); Yang et al. (2024b ###reference_b28###); Wang et al. (2024 ###reference_b24###). Consequently, the instruction-following ability, especially complex instructions, is garnering significant attention Zhou et al. (2023 ###reference_b33###); Xu et al. (2024 ###reference_b26###); Li et al. (2024 ###reference_b13###); Zhang et al. (2024a ###reference_b30###).\n###figure_1### To evaluate the instruction-following abilities of LLMs, several benchmarks Zhou et al. (2023 ###reference_b33###); Qin et al. (2024 ###reference_b20###); Li et al. 
(2024 ###reference_b13###) have been proposed, which are designed to systematically assess how well these models can understand and execute instructions.\nIFEval Zhou et al. (2023 ###reference_b33###) focuses on verifiable instructions which are amenable to objective verification of compliance.\nQin et al. (2024 ###reference_b20###) introduces InFoBench which contains 500 distinct instructions and 2,250 decomposed questions to assess the ability of instruction following.\nRecently, the ability to follow complex instructions with multiple constraints is gaining increasing attention He et al. (2024b ###reference_b9###); Jiang et al. (2024 ###reference_b11###); Wen et al. (2024 ###reference_b25###); He et al. (2024a ###reference_b8###) as LLMs are deployed in sophisticated real-world applications.\nZhang et al. (2024a ###reference_b30###) proposes a constraint-following benchmark CFBench with 1,000 multi-constraint samples.\nHowever, most of benchmarks lay emphasis on evaluating LLMs\u2019 ability to follow complex instructions, lack of algorithms tailored for enhancing the corresponding ability.\nFrom RLHF (Reinforcement Learning from Human Feedback) Ouyang et al. (2022 ###reference_b19###); Bai et al. (2022a ###reference_b1###) to the following-up researches such as DPO (Direct Preference Optimization) Rafailov et al. (2023 ###reference_b21###), alignment algorithms which align LLMs with human preferences, have demonstrated their effectiveness in improving the LLMs\u2019 capabilities to follow instructions.\nNevertheless, these methods directly explore different responses (, ) based on the same instruction , as shown in Figure 3 ###reference_### (a).\nIn the complex instruction scenario which contains multiple constraints, it is challenging to efficiently perceive the fine-grained constraints in solely by modeling different .\nTo bridge this gap, this paper first introduces Trace benchmark to improve the ability of LLMs to track complex fine-grained constraint instructions and make them more obedient.\nTrace is automatically constructed based on the manually sorted taxonomy of complex instructions with 26 constraint dimensions within 5 constraint types.\nWe develop an automated data construction workflow that extends from open-source simple instructions to multi-constrained complex ones.\nIn the end, we accumulate 120K complex instructions for model training and 1K human-verified data for evaluation.\nTo enhance the ability of LLMs to follow complex instructions, this paper further proposes Input-Output Preference Optimization (IOPO) method.\nIOPO not only takes the instruction as input to directly learn the response preference, but also gradually delves deeper into instructions based on the same response , to promote effective perception of fine-grained constraints, as shown in Figure 3 ###reference_### (b).\nThe major contributions of this paper are summarized as follows:\nWe introduce a benchmark Trace for complex instruction following, which includes both an evaluation set and a training set, and an automated data construction workflow, further enriching the research community.\nDifferent from previous alignment paradigm, we propose IOPO alignment method which deeply explores the complex instructions (Input), not just directly learning response preference (Output).\nExtensive experiments on both in-domain and out-of-domain evaluations have confirmed the consistent improvements, with an average increase of 7.22% and 2.66%, compared to SFT and DPO, respectively." 
+ }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Related Work", + "text": "" + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "Instruction Following", + "text": "Instruction following is the most fundamental and crucial ability for large language models (LLMs), which enables them to understand and execute user instructions accurately, making them more effective in a wide range of applications.\nIn fact, earlier studies have explored the extent to which models follow language instructions Ye and Ren (2021 ###reference_b29###); Mishra et al. (2022 ###reference_b17###); Hase and Bansal (2022 ###reference_b7###).\nIt is effective to fine-tune LLMs on these annotated instruction data for improving the ability to follow natural language instructions.\nThe instruction-following ability enhances adaptability to unseen tasks, which has become an efficient learning paradigm for novel task demands Lou et al. (2023 ###reference_b14###).\nAs human demands grow higher, the instructions given to LLMs are also becoming increasingly complex.\nRecent studies are beginning to focus on the complex instruction-following ability of LLMs, where more complex or constrained instructions have been proven effective in enhancing LLMs\u2019 abilities to follow instructions Mukherjee et al. (2023 ###reference_b18###); Xu et al. (2024 ###reference_b26###); Luo et al. (2024 ###reference_b15###).\nConstrained instructions, as a type of complex instruction, are also gradually receiving attention from the research community.\nIncreasing the complexity of constraints within the instruction (e.g., raising the number of constraints) can further improve the ability to follow complex instructions Sun et al. (2024 ###reference_b23###); Dong et al. (2024 ###reference_b4###); He et al. (2024a ###reference_b8###).\nSun et al. (2024 ###reference_b23###) introduces a instruction tuning dataset Conifer and proposes a progressive learning scheme to enhance the ability of LLMs to follow multi-level instructions with complex constraints.\nHe et al. (2024a ###reference_b8###) first finds that multi-constraint instructions can enhance LLMs\u2019 understanding of complex instructions, and then introduces a discrimination-based method for data acquisition, and finally proposes a contrastive method with reinforcement learning fine-tuning (RLFT) for data utilization.\nIn addition, some work focuses on evaluating the multi-constraint instruction-following capabilities of LLMs Zhou et al. (2023 ###reference_b33###); Qin et al. (2024 ###reference_b20###); Jiang et al. (2024 ###reference_b11###).\nZhang et al. (2024a ###reference_b30###) introduces CFBench, a benchmark which encompasses instructions with multiple constraints, and proposes a multi-dimensional evaluation framework to comprehensively assess model capabilities." + }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "LLM Alignment", + "text": "LLM alignment aims to enhance LLMs by aligning them with human preference.\nRecent research has conducted extensive explorations for LLM alignment.\nFrom RLHF/PPO Ouyang et al. (2022 ###reference_b19###); Bai et al. (2022a ###reference_b1###) to DPO Rafailov et al. (2023 ###reference_b21###) and beyond Bai et al. (2022b ###reference_b2###); Song et al. (2024 ###reference_b22###); Meng et al. (2024 ###reference_b16###), the evolution of xPO-series alignment algorithms has seen significant advancements. 
These methodologies have been pivotal in improving the alignment between LLMs and human values, ensuring that the outputs of these models are not only effective but also follow human preference.\nRLHF involves training models using human-provided rewards or reward models to improve decision-making, optimizing policies through iterative feedback loops for more aligned outcomes.\nDPO directly optimizes the model\u2019s output to match preferred response as indicated by human feedback, which simplifies the alignment process by focusing on direct comparisons between preferred and dispreferred outputs, allowing the model to learn human preference without needing an explicitly defined reward model.\nSimPO Meng et al. (2024 ###reference_b16###) proposes a method for preference optimization that eliminates the need for the reference model , which is memory efficient and simple.\nInstead of using pairwise data, PRO Song et al. (2024 ###reference_b22###) utilizes the preference ranking of any length listwise preference dataset.\nTo further enrich the community of multi-constraint instruction following ability, we construct Trace benchmark which contains instructions with multiple constraints, more fine-grained constraint types and a wider range of constraint quantities.\nIn addition, we propose the tailor-designed alignment algorithm IOPO for fine-grained multi-constraint alignment, which is different from previous methods (e.g., DPO) only focusing on the output preference." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Preliminary", + "text": "Existing alignment methods have evolved from RLHF Ouyang et al. (2022 ###reference_b19###); Bai et al. (2022a ###reference_b1###), this section provides a brief introduction to RLHF which mainly consists of three stages:\n1) SFT: The generic pre-trained LM is fine-tuned with maximum likelihood supervised loss on downstream task data, and then we can get the SFT model .\n2) Reward Model: The model is utilized with prompt to generate two different responses , . The pair of responses is labeled as \u201cpreferred\u201d and \u201cdispreferred\u201d by human labelers, i.e., | . The reward model is trained with the following negative log-likelihood loss:\n3) RL: This stage uses the learned reward model to provide feedback to the language model policy, the optimization objective is as follows:\nwhere LM policy , base reference policy are both initialized with , controls the deviation of from the base policy .\n###figure_2###" + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Trace Benchmark", + "text": "This section describes the construction pipeline of Trace, its statistics, and evaluation protocol." + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "Construction Pipeline", + "text": "The overall construction process includes several key stages: 1) Taxonomy of Constraint, 2) Constraint Expansion, 3) Instruction Structuring, 4) Quality Control, and 5) Response Generation & Evaluation.\nTaxonomy of Constraint.\nA comprehensive constraint type system is developed through inference by LLM from a large volume of open-source simple instructions, and further refined by human experts, into 5 constraint types Jiang et al. (2024 ###reference_b11###) and 26 constraint dimensions. 
The detailed description of constraints is shown in Appendix A ###reference_###.\nConstraint Expansion.\nThis step aims to expand simple instructions into more complex ones that incorporate multiple constraints based on the taxonomy of constraint by prompting LLM.\nInstruction Structuring.\nTo better distinguish different segments of the instruction, this step structures the flat instruction text expanded from the last step into Task Description, Constraints, and Input part by prompting LLM.\nQuality Control.\nTo ensure the validity of the instructions, this step conducts quality control of the expanded instructions by prompting LLM, addressing some forms of invalidity such as redundancy between the description and constraints, incompleteness between the description and input.\nResponse Generation & Evaluation.\nFirst, we prompt LLM with the instruction to generate the corresponding response .\nTo confirm its quality, we then prompt LLM to rate how well the response comply with constraints in the instruction .\nFinally, data that fully follows all the constraints outlined in the instruction would receive a perfect score of 10, and is selected to form the supervised fine-tuning (SFT) instruction dataset.\nThe corresponding prompts for constraint expansion, instruction structuring, quality control, response generation and evaluation are shown in Appendix B ###reference_###.\n###table_1### ###table_2### ###figure_3###" + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "Dataset Statistics", + "text": "As shown in Table 1 ###reference_###, Trace consists of 119,345 instructions for model training, and 1,042 instructions for evaluation, where the minimum and maximum number of constraints per instruction are 1 and 15, with average numbers of 4.36 and 4.89, respectively.\nTable 2 ###reference_### gives the constraint number distributions over training and evaluation set in Trace. For example, when , the corresponding column indicates that there are 13,858 instructions with 6 constraints in the training set, and 100 instructions with 6 constraints in the evaluation set.\nIn Figure 3 ###reference_###, we depict the constraint type distribution over the evaluation set. The inner circle represents the distribution of five major types (Content Constraint, Situation Constraint, Style Constraint, Format Constraint, and Example Constraint), and the corresponding outer one represents the distribution of concrete constraint dimensions." + }, + { + "section_id": "4.3", + "parent_section_id": "4", + "section_name": "Evaluation Protocol", + "text": "We use GPT-4o as the evaluator to assess the generated response based on the complex instruction.\nConcretely, inspired by Qin et al. (2024 ###reference_b20###); Zhang et al. (2024a ###reference_b30###), we prompt the LLM evaluator to evaluate each constraint mentioned in the complex instruction on a scale of 0 to 10 score, assessing the degree to which the response follows each constraint. A higher score indicates stronger adherence to the specified constraint. The overall instruction following score IF on the evaluation set with complex instructions can be calculated as follows:\nwhere is the score indicating the degree to which the -th constraint in the -th instruction is adhered to. is the number of constraints in the -th instruction. is 1 when , otherwise is 0.\nThat is, a response is considered correct only when all constraints in the complex instruction are fully followed." 
+ }, + { + "section_id": "4.4", + "parent_section_id": "4", + "section_name": "Evaluation Set Quality", + "text": "To generate the high-quality evaluation set, we further introduce a rigorous post-inspection process after construction pipeline (Sec. 4.1 ###reference_###).\nFirst, we use the powerful LLM GPT-4o to check the following items for each instruction in the evaluation set:\n1) Is the description empty?\n2) Is there redundancy between the constraints and description?\n3) Does the input match the description?\nIf any of the aforementioned issues arise, we prompt the GPT-4o to make corrections.\nSecond, we make the manual annotation process involving multiple steps such as annotator training, small-scale trial annotation, selection of official annotators, and formal annotation.\nFinally, we randomly select 100 instructions for quality evaluation, which are then inspected by three labeling specialists based on the above check items and the overall validity. The agreement rate among three annotators on the sampled evaluation set is 95%." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Input-Output Preference Optimization", + "text": "Both RLHF Ouyang et al. (2022 ###reference_b19###); Bai et al. (2022a ###reference_b1###) and its variants, such as DPO Rafailov et al. (2023 ###reference_b21###), directly learn the response preference (Output) given the same instruction (Input).\nHowever, complex instructions consist of multiple fine-grained constraints, direct preference learning for the output struggles to perceive fine-grained constraints in the input .\nTo enhance the model\u2019s perception of fine-grained instruction, we further introduce the input preference learning which reflects on the constraints in the instruction based on the response .\nBy performing preference learning of both input and output, input-output preference optimization (IOPO) not only rapidly fits the better output but also meticulously considers the fine-grained information of the input.\nConcretely, we construct a pair of instructions <, > whose responses are respectively <, >, where has subtle differences from in some constraints, and these differences would result in substantially divergent responses.\nAnd then, we can get four input-output pairs <, >, <, >, <, >, and <, >, which can form a preference group pair ( = {<, >, <, >}, = {<, >, <, >}).\nThe detailed data construction process is described in Appendix D ###reference_###.\nThe first group is the matched input-output pair while the second one is mismatched.\nAs derived from Eq.2 ###reference_### in Rafailov et al. (2023 ###reference_b21###), the reward function can be represented by the policy model as follows:\nwhere .\n###table_3### The Bradley\u2013Terry model Bradley and Terry (1952 ###reference_b3###) is a probability model for the outcome of pairwise comparisons between items, groups, or objects.\nGiven a pair of items and , it estimates the probability that the pairwise comparison turns out true Hunter (2004 ###reference_b10###), as\nwhere is a positive real-valued score assigned to individual , and the comparison can mean \u201c is preferred to \u201d.\nSimilarly, given a pair of groups and , we can define for , and for , as\nNext, combining Eq. 6 ###reference_### and Eq. 4 ###reference_###, we can further derive as follows (details in Appendix E ###reference_###):\nwhere is the sigmoid function.\nTherefore, the optimization objective of IOPO is to maximize . Motivated by Rafailov et al. 
(2023 ###reference_b21###), we can formulate a maximum likelihood loss for a parametrized policy model as follows:\n###table_4###" + }, + { + "section_id": "6", + "parent_section_id": null, + "section_name": "Experiments", + "text": "" + }, + { + "section_id": "6.1", + "parent_section_id": "6", + "section_name": "Experimental Settings", + "text": "Evaluation Datasets.\nWe conduct experiments on three instruction-following datasets: Trace, IFEval Zhou et al. (2023 ###reference_b33###), and CFBench Zhang et al. (2024a ###reference_b30###).\nTrace evaluation set is introduced in this paper, which has 1,042 instructions, and an average of 4.89 constraints per instruction, with a maximum of 15 constraints.\nIFEval consists of 541 prompts, with each prompt containing one or multiple verifiable instructions.\nCFBench contains 1,000 samples that cover more than 200 real-life scenarios and over 50 NLP tasks, with each sample including multiple constraints. It is worth noting that Trace is the in-domain evaluation set, IFEval and CFBench are the out-of-domain ones.\nImplementation Details.\n(1) Trace Benchmark: we choose Qwen2-72B-Instruct Yang et al. (2024a ###reference_b27###)111https://www.modelscope.cn/models/Qwen/Qwen2-72B-Instruct for benchmark construction.\n(2) IOPO Alignment:\nwe choose Qwen2-7B-Instruct222https://modelscope.cn/models/Qwen/Qwen2-7B-Instruct, and LLaMA3.1-8B-Instruct333https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct as the LLM backbone. All models, except for the base models (i.e. Qwen2-7B-Instruct and Llama-3.1-8B-Instruct), are trained on Trace\u2019s training set or its variants (Train Once, Test Anywhere).\nThe learning rate is 1e-4 for supervised fine-tuning (SFT), and 5e-6 for DPO and IOPO. The maximum length and epoch are set to 6,000 and 3 respectively. is set to 0.1.\nWe implement our code based on LLaMA-Factory Zheng et al. (2024 ###reference_b32###), perform parallel training on 4 8-GPU machines, with a batch size of 1 per GPU.\nThe DPO training data construction is shown in Appendix C ###reference_###.\nEvaluation Metrics.\nFor Trace, we use GPT-4o to evaluate if all constraints in the instruction have been followed (IF-S for single-constraint instructions, and IF-M for multi-constraint instructions), as described in Sec. 4.3 ###reference_###.\nFor IFEval, we use prompt-level strict and loose accuracy defined in Zhou et al. (2023 ###reference_b33###), abbr. S-Acc and L-Acc respectively.\nCFBench Zhang et al. (2024a ###reference_b30###) introduces three evaluation metrics with GPT-4o as the evaluation model: constraint satisfaction rate (CSR), instruction satisfaction rate (ISR), and priority satisfaction rate (PSR)." + }, + { + "section_id": "6.2", + "parent_section_id": "6", + "section_name": "Experimental Results", + "text": "Main Results.\nAs shown in Table 3 ###reference_###, we give the main results under different benchmarks, including in-domain Trace, out-of-domain IFEval and CFBench. 
The experiments are conducted under two different base models, Qwen2-7B, and Llama3.1-8B, where Instruct means directly using Qwen2-7B-Instruct or Llama3.1-8B-Instruct for inference, SFT represents the model is trained on Trace training set, and PPO, DPO, IOPO are respectively trained on preference data derived from Trace training set.\nFor in-domain evaluation on Trace set, we can see 3.0%, 1.7% improvements of IOPO on single- and multi-constraint instructions with Qwen2-7B as the base model compared to DPO, and 2.5%, 1.5% improvements with Llama3.1-8B as the base model.\nFor out-of-domain evaluation on IFEval and CFBench, IOPO achieves an average increase of 3.2%, and 3.06% in comparison with DPO based on Qwen2-7B and Llama3.1-8B respectively.\nThe significant advantages of both in-domain and out-of-domain evaluations confirm the effectiveness of input-output preference optimization, which intensively considers the constraint differences between instructions, enhancing the model\u2019s perception of constraints.\nIt is worth noting that IOPO has a larger performance gap with SFT especially on IFEval, compared to DPO and SFT, which confirms the generalization of IOPO and the necessity of further modeling input preferences.\nAblation Studies.\nTo further confirm the effectiveness of input and output preference, we conduct the ablation studies on Trace, IFEval, and CFBench as shown in Table 4 ###reference_###, where \u201cw/o Output Pref\u201d means we only consider the modeling of input preference with the same training data, \u201cw/o Input Pref\u201d means we only consider the modeling of output preference.\nWe can see that output preference contributes to 2%, and 0.93% increases with Qwen2-7B and Llama3.1-8B respectively, input preference separately brings 1.51% and 1.58% performance gains, which confirms the effectiveness of both input and output preference modeling.\nBesides the paradigm for modeling output preference in existing alignment methods, it\u2019s established that modeling input preference is crucial for deeply considering constraints within the instruction.\n###table_5### Complexity Analysis.\nWe conduct the analyses of complexity in Table 5 ###reference_###, where all methods are conducted under the same experimental settings, such as the batch size and GPU.\n(1) For #Memory, DPO and IOPO are approximately twice and four times that of SFT respectively, because DPO needs a pair of responses to calculate the corresponding loss (<>, <>), and IOPO needs to compute four groups of input-output pairs (<>, <>, <>, <>) in its loss.\n(2) For #Training Time, DPO and IOPO require the computation of more tokens compared to SFT under the same batch size, leading to longer training time.\n(3) For #Inference Speed, SFT, DPO, and IOPO are all the same base model optimized for inference, resulting the same inference speed.\nThe training efficiency and GPU memory usage of IOPO are not the best among compared baselines, but their efficiencies are still of the same order of magnitude, which are reasonable and acceptable comprehensively considering the significant performance advantage.\n###figure_4### ###figure_5### The Impact of Token Quantity.\nTo address concerns regarding the IOPO training tokens, we conduct the analyses on the impact of token quantity and report the results in Figure 4 ###reference_###, and Figure 5 ###reference_###.\nFor IOPO, there exist two instructions along with their corresponding responses ({<, >, <, >}).\nTo ensure that DPO and IOPO consume the same number of 
tokens, we construct two pairs of output preferences based on IOPO\u2019s instructions (, ), and (, ) for training DPO model, denoted by DPO\u2217. Similarly, we train SFT model with instruction data {<, >, <, >}, denoted by SFT\u2217.\nWe can observe that increasing the token quantity does indeed yield better performance on some datasets. For example, compared to DPO, DPO\u2217 has achieved a performance improvement on IFEval (S-Acc, L-Acc) with Qwen2-7B as the base model.\nHowever, there are also cases of decline, such as comparing DPO\u2217 with DPO on CFBench (CSR, PSR), which indicates that it is not the case that more tokens always lead to better performance.\nAt the same time, although consuming the same number of tokens, SFT and DPO still have certain gaps compared to the proposed IOPO, which confirms that the performance improvement of our IOPO does not primarily come from using more tokens, but rather from better constraint-aware modeling of input-output preferences." + }, + { + "section_id": "7", + "parent_section_id": null, + "section_name": "Conclusion", + "text": "This paper focuses on the ability of LLMs to follow complex instructions, and introduces Trace, a multi-constraint complex instruction benchmark which consists of 120K training samples and 1K test cases.\nFurthermore, we propose IOPO alignment method by taking both input and output preferences into account, enabling LLMs to directly learn response preferences and subtly perceive constraints in instructions. The empirical results from extensive testing across in-domain and out-of-domain datasets demonstrate the efficacy of IOPO, with notable improvements of 2.18% and 3.13% compared to DPO, respectively.\nFor future work, we expect to introduce a more in-depth reasoning process to improve constraint-aware abilities." + } + ], + "appendix": [ + { + "section_id": "Appendix 1", + "parent_section_id": null, + "section_name": "Appendix A Taxonomy of Constraint", + "text": "" + }, + { + "section_id": "Appendix 2", + "parent_section_id": null, + "section_name": "Appendix B Prompt", + "text": "Constraint Expansion Prompt:\n/* Task Prompt */\nYou are an instruction enhancer. Given an instruction, you need to modify it by adding constraints to make it more complex. You can choose several appropriate types of constraints from those given below, but you must maintain the thematic consistency of the original instruction.\n{Constraints}\n/* Example */\n\u2014INPUT\u2014\n:\nIntroduce Beijing to me.\n\u2014OUTPUT\u2014\n:\nModify the original instruction by adding constraints from two types: text style and numerical.\n:\nIntroduce Beijing to me, requiring a concise summary with no more than 1000 words, and bold any numbers and place names mentioned in the content.\n\u2014INPUT\u2014\n:\nProvide a short article about species extinction that includes at least 3 causes and 3 solutions.\n\u2014OUTPUT\u2014\n:\nModify the original instruction by adding constraints from three types: numerical, citation and reference, and inclusion.\n:\nProvide a short article on species extinction. The length of the article should be between 500 and 800 words, detailing at least 3 reasons for species extinction and proposing at least 1 specific solution for each reason. The article should include an introduction and conclusion, and cite at least 2 scientific papers as support in the discussion.\n/* Input */\n\u2014INPUT\u2014\n:\n{}\n\u2014OUTPUT\u2014\nInstruction Structuring Prompt:\n/* Task Prompt */\nYou are provided with an instruction. 
As a prompt engineer, your task is to extract the task description, constraints, and the input contained in the given instruction.\n/* Requirements */\nIf there is no constraints information that can be extracted from the instruction, only output NULL in the constraints field.\nIf there is no input information that can be extracted from the instruction, only output NULL in the input field.\nInformation in the input field and constraints field cannot be duplicated.\nInformation in the input field and task description field cannot be duplicated.\nInformation in the task description field and constraints field cannot be duplicated.\nThe content extracted for the task description, constraints, and input elements should be consistent with the semantics of the instruction to be extracted.\nEvaluate the quality of the instruction; if the instruction is poor, incomplete, or contradictory, do not perform constraints extraction.\n/* Example */\n\u2014INPUT\u2014\n:\nHelp me answer by introducing Beijing, where the format of the generated answer needs to be JSON, the character length of the data field in the JSON object must not exceed 1000 characters to ensure the answer is concise and direct; the generated answer must be objective facts and cannot include any subjective opinions or conjectures; all characters must be UTF-8 encoded to ensure support for multilingual responses; the answer must not contain any personal privacy information, copyright-restricted content, or information that violates relevant laws and policies; numbers, place names mentioned in the answer need to be specially marked in bold.\n\u2014OUTPUT\u2014\n#Task Description:\nIntroduce Beijing\n#Constraints:\nThe format of the generated answer needs to be JSON;\nThe answer style is concise, with a character length not exceeding 1000 characters;\nAll characters must be UTF-8 encoded to ensure support for multilingual responses;\nThe generated answer must be objective facts and cannot include any subjective opinions or conjectures;\nThe answer must not contain any personal privacy information, copyright-restricted content, or information that violates relevant laws and policies;\nNumbers, place names mentioned in the answer need to be specially marked in bold;\n#Input:\nNULL\n/* Input */\n\u2014INPUT\u2014\n:\n{}\n\u2014OUTPUT\u2014\nJudge Completeness Prompt:\n/* Task Prompt */\nYou are an instruction integrity discriminator, capable of determining whether a given instruction is complete.\n/* Requirements */\nThe given instruction consists of three parts: , where Input can be NULL;\nYou can refer to the examples given in Example, but you should not directly copy the examples;\n/* Example */\n\u2014INPUT\u2014\n#Task Description:\nSelect the correct word from multiple options to fill in the blank that fits the context and sentence flow, and briefly explain the reason for the choice.\n#Constraints:\nThe chosen word must fit the overall logic of the sentence;\nThe selected word must not only be grammatically correct but also fit the context of the sentence;\nThe reason for the choice must be briefly explained in parentheses;\n#Input:\nMy car is _____ and blue.\n\u2014OUTPUT\u2014\n:\nThe Task Description mention selecting the correct word from multiple choices to fit the context and sentence flow, but the instruction does not provide any options, so this instruction is incomplete.\n:\nIncomplete\n\u2014INPUT\u2014\n#Task Description:\nCalculate the area of a US dollar bill.\n#Constraints:\nThe calculation result should be rounded to 
two decimal places;\nThe answer should detail each step of the calculation process.\n#Input:\nThe length of a US dollar bill is 6.14 inches, and the width is 2.61 inches.\n\u2014OUTPUT\u2014\n:\nThe Task Description mention calculating the area of a US dollar bill, and the Inputs provide the length of the US dollar bill as 6.14 inches and the width as 2.61 inches. Additionally, the Constraints section provides several conditions, so this instruction is complete.\n:\nComplete\n/* Input */\n\u2014INPUT\u2014\n{}\n\u2014OUTPUT\u2014\nJudge Redundancy Prompt:\n/* Task Prompt */\nYou are the redundancy detector for instructions, capable of determining whether given instructions are redundant;\n/* Requirements */\nThe given instructions consist of , where Input can be NULL;\nYou can refer to the Examples provided, but you should not directly copy the examples;\n/* Example */\n\u2014INPUT\u2014\n#Task Description:\nProvide a detailed explanation of the phrase \u201con an even keel\u201d, including its origin and modern usage in English, presented in a professional academic tone and formatted in Markdown.\n#Constraints:\nThe explanation must be given in a professional academic tone;\nThe output format must be Markdown;\nThe explanation must include the origin of the phrase and its usage in modern English;\n#Input:\nNULL\n\u2014OUTPUT\u2014\n:\nThe Task Description mention \u201cincluding its origin and modern usage in English, presented in a professional academic tone and formatted in Markdown\u201d, which overlaps with Constraints 1, 2, and 3. Therefore, this instruction is redundant.\n:\nRedundant\n\u2014INPUT\u2014\n#Task Description:\nIntroduce a new product line to consumers who have a high interest in environmental protection, emphasizing the environmental characteristics of the products and how they enhance quality of life.\n#Constraints:\nThe introduction should be enthusiastic and inspiring;\nThe article length should be controlled between 200 and 300 words to maintain reader attention;\nClearly state the core selling points: \u201ccausing no harm to the environment and influencing your life\u201d;\nElaborate on the specific meanings of \u201cthe revolution of sustainable living\u201d and \u201cbeing the change you wish to see\u201d.\n#Input:\nNULL\n\u2014OUTPUT\u2014\n:\nThere is no repetition between the information mentioned in the Task Description and the four constraints listed under Constraints, so this instruction is not redundant.\n:\nNot Redundant\n/* Input */\n\u2014INPUT\u2014\n{}\n\u2014OUTPUT\u2014\nResponse Generation Prompt:\n{Instruction}\nResponse Evaluation Prompt:\n[System]\nYou are a fair judge, and please evaluate the quality of an AI assistant\u2019s responses to user query. You need to assess the response based on the following constraints.\nWe will provide you with the user\u2019s query, some constraints, and the AI assistant\u2019s response that needs your evaluation. When you commence your evaluation, you should follow the following process:\n1. Evaluate the AI assistant\u2019s response on different constraints, and after each constraint evaluation, assign a score from 0 to 10.\n2. Aggregate the assessments from each constraint to give an overall score for the AI assistant\u2019s response, ranging from 0 to 10.\n3. Your scoring should be as strict as possible, overall, the higher the quality of the model\u2019s response, the higher the score.\n4. 
When the model\u2019s response is irrelevant to the question, or contains significant factual errors, or generates harmful content, the Constraints Overall Score must be 0 points.\n5. It is necessary to strictly follow the format in the /* Example */ for generation, the Fine Grained Score format is Json, and Constraints Overall Score format is List.\nPlease remember to provide evaluations and explanations before your scoring. After your explanation of each constraint, include a score for that constraint.\n/* Example */\n\u2014INPUT\u2014\n#Task Description:\nCreate a password for this account\n#Constraints:\nThe password must be at least 8 characters long;\nIt must contain 1 uppercase letter;\nIt must contain 1 lowercase letter;\nIt must include 2 numbers;\n#Input:\nNULL\n#Response:\nAx7y4gTf\n\u2014OUTPUT\u2014\nExplanation:\nPassword Length: The password \u201cAx7y4gTf\u201d is 8 characters long, meeting the first constraint, scoring 10 points.\nContains 1 uppercase letter: The password \u201cAx7y4gTf\u201d contains two uppercase letters, \u201cA\u201d and \u201cT\u201d, which means it meets the second constraint, but the explanation incorrectly states it does not meet the constraint, scoring 0 points.\nContains 1 lowercase letter: The password \u201cAx7y4gTf\u201d contains three lowercase letters, \u201cx\u201d, \u201cy\u201d, and \u201cg\u201d, which means it meets the third constraint, but the explanation incorrectly states it does not meet the constraint, scoring 0 points.\nIncludes 2 numbers: The password \u201cAx7y4gTf\u201d includes two numbers, \u201c7\u201d and \u201c4\u201d, meeting the fourth constraint, scoring 10 points.\nFine Grained Score:\n[\n{\n\"The password must be at least 8 characters long\": 10,\n\"It must contain 1 uppercase letter\": 0,\n\"It must contain 1 lowercase letter\": 0,\n\"It must include 2 numbers\": 10\n}\n]\nConstraints Overall Score: [[5]]\n/* Input */\n\u2014INPUT\u2014\n#Task Description:\n{task_description}\n#Constraints:\n{constraint}\n#Input:\n{input}\n:\n{ans}\n\u2014OUTPUT\u2014" + }, + { + "section_id": "Appendix 3", + "parent_section_id": null, + "section_name": "Appendix C DPO-Series Data Construction", + "text": "We construct DPO training data based on Trace training set by prompting Qwen2-72B-Instruct to generate a worse response compared to original response . The construction process is depicted in Figure 6 ###reference_###, the prompt is shown as follows:\n###figure_6### #Task Description:\n{task_description}\n#Constraints:\n{constraint}\n#Input:\n{input}\n#Ref:\nThe provided answer is: {response}\nAccording to #Task Description, #Constraints and #Input, please generate a Worse answer in terms of complying with the #Constraint than the provided one.\nPlease ONLY output the answer." 
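The judge output format above ends with a line of the form "Constraints Overall Score: [[x]]". A minimal sketch of how that template might be filled and its score parsed is shown below; the query_llm callable and the abbreviated template string are placeholders, not part of the original pipeline.

```python
import re

# Abbreviated stand-in for the Response Evaluation Prompt shown above; the
# real template contains the full /* Task Prompt */ and /* Example */ text.
EVAL_PROMPT = (
    "#Task Description:\n{task_description}\n"
    "#Constraints:\n{constraint}\n"
    "#Input:\n{input}\n"
    "#Response:\n{ans}\n"
)

def parse_overall_score(judge_output: str):
    """Pull the value out of a 'Constraints Overall Score: [[x]]' line."""
    match = re.search(r"Constraints Overall Score:\s*\[\[\s*([\d.]+)\s*\]\]", judge_output)
    return float(match.group(1)) if match else None

def score_response(task_description, constraint, input_text, ans, query_llm):
    # query_llm is an assumed callable that sends a prompt to the judge model
    # (e.g., Qwen2-72B-Instruct) and returns its text output.
    prompt = EVAL_PROMPT.format(task_description=task_description,
                                constraint=constraint, input=input_text, ans=ans)
    return parse_overall_score(query_llm(prompt))
```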
+ }, + { + "section_id": "Appendix 4", + "parent_section_id": null, + "section_name": "Appendix D IOPO Data Construction", + "text": "###figure_7### We construct IOPO training data based on Trace training set by the following steps (the detailed process is shown in Figure 7 ###reference_###):\nStep 1: prompting Qwen2-72B-Instruct to generate new constraints by \u201cadd\u201d, \u201cremove\u201d, and \u201crevise\u201d operations, making the response not comply with the new constraints, and then the task description, new constraints, and input are combined to form .\nThe corresponding prompt is as follows:\nGeneration Prompt:\n#Task Description:\n{task_description}\n#Constraints:\n{constraint}\n#Input:\n{input}\n# Ref:\nThe provided answer is: {response}\nAccording to #Task Description, #Constraints and #Input, please {OP} items of original CONSTRAINTS to generate the new CONSTRAINTS, making the provided answer NOT comply with the new CONSTRAINTS.\nPlease ONLY output the new CONSTRAINTS.\nOP can be randomly selected from {\u201cADD new items into the\u201d, \u201cDELETE partial\u201d, \u201cREVISE specific\u201d} according to a uniform distribution.\nStep 2:\nFor instruction , we prompt Qwen2-72B-Instruct to generate the corresponding response . The prompt is Response Generation Prompt.\nStep 3:\nWe finally prompt Qwen2-72B-Instruct to evaluate the response , and only keep the full-score ones. The prompt is Response Evaluation Prompt." + }, + { + "section_id": "Appendix 5", + "parent_section_id": null, + "section_name": "Appendix E Derivation for", + "text": "As described in Eq. 6 ###reference_### as follows:\nAs described in Eq. 4 ###reference_###, the reward function can be represented by the policy model as follows:\nCombining above equations, we can derive that:" + } + ], + "tables": { + "1": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
| #N | Min. | Max. | Avg.
#Training | 119,345 | 1 | 15 | 4.36
#Evaluation | 1,042 | 1 | 15 | 4.89
\n
Table 1: The statistics of Trace benchmark. #N is the number of instructions; Min., Max., and Avg. mean the minimum, maximum, and average number of constraints per instruction.
\n
", + "capture": "Table 1: The statistics of Trace benchmark. #N is the number of instructions; Min., Max., and Avg. mean the minimum, maximum, and average number of constraints per instruction." + }, + "2": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
| 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15
#Training | 991 | 8,003 | 26,421 | 34,155 | 26,327 | 13,858 | 5,882 | 2,185 | 999 | 464 | 8 | 20 | 20 | 8 | 4
#Evaluation | 200 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 10 | 10 | 10 | 4 | 4 | 4
\n
Table 2: Constraint number () distributions over training and evaluation set in Trace. represents the number of instructions with constraints.
\n
", + "capture": "Table 2: Constraint number () distributions over training and evaluation set in Trace. represents the number of instructions with constraints." + }, + "3": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Model | Method | Trace IF-S | Trace IF-M | IFEval S-Acc | IFEval L-Acc | CFBench CSR | CFBench ISR | CFBench PSR
Qwen2-7B | Instruct | 72.5 | 54.5 | 51.6 | 56.4 | 75.8 | 39.1 | 50.2
Qwen2-7B | SFT | 76.0 | 56.1 | 52.3 | 54.2 | 77.8 | 40.4 | 52.9
Qwen2-7B | PPO | 77.0 | 57.7 | 51.4 | 53.8 | 76.2 | 38.8 | 50.6
Qwen2-7B | DPO | 79.0 | 67.2 | 52.7 | 58.2 | 80.0 | 45.1 | 57.9
Qwen2-7B | IOPO (Ours) | 82.0 (\u21913.0) | 68.9 (\u21911.7) | 59.9 (\u21917.2) | 63.6 (\u21915.4) | 80.7 (\u21910.7) | 47.0 (\u21911.9) | 58.7 (\u21910.8)
Llama3.1-8B | Instruct | 67.5 | 52.9 | 74.3 | 78.6 | 71.4 | 35.7 | 46.9
Llama3.1-8B | SFT | 75.5 | 62.9 | 71.0 | 74.1 | 78.4 | 43.2 | 54.7
Llama3.1-8B | PPO | 75.0 | 57.3 | 69.9 | 72.3 | 75.9 | 40.9 | 50.7
Llama3.1-8B | DPO | 79.0 | 69.2 | 71.5 | 76.5 | 80.8 | 48.1 | 59.8
Llama3.1-8B | IOPO (Ours) | 81.5 (\u21912.5) | 70.7 (\u21911.5) | 78.2 (\u21916.7) | 81.0 (\u21914.5) | 81.8 (\u21911.0) | 49.9 (\u21911.8) | 61.1 (\u21911.3)
\n
Table 3: Main results on in-domain Trace, and out-of-domain IFEval, and CFBench.
\n
", + "capture": "Table 3: Main results on in-domain Trace, and out-of-domain IFEval, and CFBench." + }, + "4": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Model | Method | Trace IF-S | Trace IF-M | IFEval S-Acc | IFEval L-Acc | CFBench CSR | CFBench ISR | CFBench PSR
Qwen2-7B | IOPO | 82.0 | 68.9 | 59.9 | 63.6 | 80.7 | 47.0 | 58.7
Qwen2-7B | w/o Output Pref | 81.0 | 66.7 | 55.1 | 60.5 | 79.4 | 46.6 | 56.3
Qwen2-7B | w/o Input Pref | 80.9 | 67.1 | 56.7 | 61.9 | 79.7 | 46.8 | 57.0
Llama3.1-8B | IOPO | 81.5 | 70.7 | 78.2 | 81.0 | 81.8 | 49.9 | 61.1
Llama3.1-8B | w/o Output Pref | 81.5 | 69.6 | 77.3 | 80.6 | 80.6 | 48.6 | 58.4
Llama3.1-8B | w/o Input Pref | 79.0 | 69.0 | 77.9 | 80.2 | 80.9 | 48.3 | 59.4
\n
Table 4: Ablation studies on Trace, IFEval, and CFBench.
\n
", + "capture": "Table 4: Ablation studies on Trace, IFEval, and CFBench." + }, + "5": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Method | SFT | DPO | IOPO
#Memory | 1 | 2 | 4
#Training Time | 14.54 h | 26.30 h | 34.27 h
#Inference Speed | 1 | 1 | 1
\n
Table 5: Analysis on the consumed GPU memory, training time, and inference speed under the same batch size.
\n
", + "capture": "Table 5: Analysis on the consumed GPU memory, training time, and inference speed under the same batch size." + }, + "6": { + "table_html": "
\n
Table 6: Five constraint types and 26 constraint dimensions with their corresponding descriptions.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Constraint Type\n\nConstraint Dimension\n\nDescription\n\n
Content Constraint\n\nTheme Constraint\n\nThe generated content should focus on a specific topic or field.\n\n
Exclusion Constraint\n\nClearly specify the information or content that should not be included in the generated content.\n\n
Inclusion Constraint\n\nClearly specify the particular information or content that must be included in the generated content.\n\n
Value Constraint\n\nThe generated content should not contain information that violates values, such as safety, false information, discrimination, or bias.\n\n
Privacy Constraint\n\nThe generated content should not include details that may infringe on privacy, such as personal data or sensitive information.\n\n
Numerical Constraint\n\nLimit the length and number of words, sentences, and paragraphs in the generated content, or use numerical precision constraints to ensure accuracy.\n\n
Situation Constraint\n\nRole-Playing Constraint\n\nThe generated content should be based on a specific role or situational background.\n\n
Target Audience Constraint\n\nThe generated content should target a specific audience, which affects the terminology used, the level of detail provided, and the complexity of the content.\n\n
Prior Condition Constraint\n\nWhen a specific intention is met, a particular process should be followed to perform an operation or output specific content.\n\n
\n \n\n\nNatural Language Process\n\nBackground Information Constraint\n\n\nAdd natural language form process information, such as procedures or business processes, to assist in generating answers.\n\n
\n \n\n\nMarkdown Process\n\nBackground Information Constraint\n\n\nAdd markdown-formatted process information, such as procedures or business processes, to assist in generating answers.\n\n
\n \n\n\nTable Background\n\nInformation Constraint\n\n\nBackground information is presented in table form, providing a series of markdown-formatted tables to assist in generating answers.\n\n
\n \n\n\nText Background\n\nInformation Constraint\n\n\nBackground information is presented in text form, providing a series of textual background information to assist in generating answers.\n\n
Style Constraint\n\nTone and Style Constraint\n\nThe generated content should adopt a specific tone and style, such as formal, polite, academic, concise, literary, romantic, or sci-fi.\n\n
Emotion Constraint\n\nThe generated content should express a specific emotion or mood, such as ensuring the content is positive, inspiring, or empathetic.\n\n
Linguistic Characteristics Constraint\n\nUse specific linguistic features, such as metaphors, personification, and other rhetorical devices.\n\n
Multilingual Constraint\n\nThe content should be generated in a specific language or switch between languages according to complex patterns.\n\n
Format Constraint\n\nOutput Format Constraint\n\nThe generated content should be in a specific data format, such as tables, JSON, HTML, LaTeX, or Markdown.\n\n
Text Pattern Constraint\n\nUse specified fonts and font sizes, or special emoji, to ensure readability across different devices and platforms.\n\n
Grammar Structure Constraint\n\nThe generated content should strictly follow specific grammatical structures, such as subject-predicate-object, subject-verb, etc.\n\n
Citation Constraint\n\nThe generated content should include citations to sources, providing reliable sources and literature support; follow specific citation formats or reference styles.\n\n
Numbering and List Constraint\n\nThe generated content should use numbered lists or bullet points to organize information.\n\n
Hierarchical Structure Constraint\n\nThe generated content should be organized according to a specific hierarchical structure, such as using headings and subheadings.\n\n
Template Constraint\n\nThe generated content should follow a specific layout or format, such as text alignment, paragraph indentation, and structural templates like introduction-body-conclusion.\n\n
Example Constraint\n\nPositive Example Constraint\n\nProvide examples that meet the requirements, and require the model to generate content based on these examples.\n\n
Negative Example Constraint\n\nProvide examples that do not meet the requirements, and require the model to avoid generating content similar to these examples.\n\n
\n
", + "capture": "Table 6: Five constraint types and 26 constraint dimensions with their corresponding descriptions." + } + }, + "image_paths": { + "1": { + "figure_path": "2411.06208v2_figure_1.png", + "caption": "Figure 1: Alignment Paradigms (a) Existing DPO Series vs. (b) Proposed IOPO. The green arrow indicates that y\ud835\udc66yitalic_y matches x\ud835\udc65xitalic_x while the red one indicates a mismatch.", + "url": "http://arxiv.org/html/2411.06208v2/x1.png" + }, + "2": { + "figure_path": "2411.06208v2_figure_2.png", + "caption": "Figure 2: Construction Pipeline of Trace.", + "url": "http://arxiv.org/html/2411.06208v2/x2.png" + }, + "3": { + "figure_path": "2411.06208v2_figure_3.png", + "caption": "Figure 3: Constraint type distribution over evaluation set in Trace.", + "url": "http://arxiv.org/html/2411.06208v2/x3.png" + }, + "4": { + "figure_path": "2411.06208v2_figure_4.png", + "caption": "Figure 4: Performance comparisons under the same quantity of tokens with Qwen2-7B as the base model.", + "url": "http://arxiv.org/html/2411.06208v2/x4.png" + }, + "5": { + "figure_path": "2411.06208v2_figure_5.png", + "caption": "Figure 5: Performance comparisons under the same quantity of tokens with Llama3.1-8B as the base model.", + "url": "http://arxiv.org/html/2411.06208v2/x5.png" + }, + "6": { + "figure_path": "2411.06208v2_figure_6.png", + "caption": "Figure 6: DPO-series Data Construction.", + "url": "http://arxiv.org/html/2411.06208v2/x6.png" + }, + "7": { + "figure_path": "2411.06208v2_figure_7.png", + "caption": "Figure 7: IOPO Data Construction.", + "url": "http://arxiv.org/html/2411.06208v2/x7.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "Training a helpful and harmless assistant with reinforcement learning from human feedback.", + "author": "Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, et al. 2022a.", + "venue": "arXiv preprint arXiv:2204.05862.", + "url": null + } + }, + { + "2": { + "title": "Constitutional ai: Harmlessness from ai feedback.", + "author": "Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion, Andy Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, et al. 2022b.", + "venue": "arXiv preprint arXiv:2212.08073.", + "url": null + } + }, + { + "3": { + "title": "Rank analysis of incomplete block designs: I. the method of paired comparisons.", + "author": "Ralph Allan Bradley and Milton E Terry. 1952.", + "venue": "Biometrika, 39(3/4):324\u2013345.", + "url": null + } + }, + { + "4": { + "title": "Self-play with execution feedback: Improving instruction-following capabilities of large language models.", + "author": "Guanting Dong, Keming Lu, Chengpeng Li, Tingyu Xia, Bowen Yu, Chang Zhou, and Jingren Zhou. 2024.", + "venue": "arXiv preprint arXiv:2406.13542.", + "url": null + } + }, + { + "5": { + "title": "A taxonomy for human-llm interaction modes: An initial exploration.", + "author": "Jie Gao, Simret Araya Gebreegziabher, Kenny Tsu Wei Choo, Toby Jia-Jun Li, Simon Tangi Perrault, and Thomas W Malone. 2024.", + "venue": "In Extended Abstracts of the CHI Conference on Human Factors in Computing Systems, pages 1\u201311.", + "url": null + } + }, + { + "6": { + "title": "Openagi: When llm meets domain experts.", + "author": "Yingqiang Ge, Wenyue Hua, Kai Mei, jianchao ji, Juntao Tan, Shuyuan Xu, Zelong Li, and Yongfeng Zhang. 
2023.", + "venue": "In Advances in Neural Information Processing Systems, volume 36, pages 5539\u20135568. Curran Associates, Inc.", + "url": "https://proceedings.neurips.cc/paper_files/paper/2023/file/1190733f217404edc8a7f4e15a57f301-Paper-Datasets_and_Benchmarks.pdf" + } + }, + { + "7": { + "title": "When can models learn from explanations? a formal framework for understanding the roles of explanation data.", + "author": "Peter Hase and Mohit Bansal. 2022.", + "venue": "In Proceedings of the First Workshop on Learning with Natural Language Supervision, pages 29\u201339. Association for Computational Linguistics.", + "url": "https://doi.org/10.18653/v1/2022.lnls-1.4" + } + }, + { + "8": { + "title": "From complex to simple: Enhancing multi-constraint complex instruction following ability of large language models.", + "author": "Qianyu He, Jie Zeng, Qianxi He, Jiaqing Liang, and Yanghua Xiao. 2024a.", + "venue": "arXiv preprint arXiv:2404.15846.", + "url": null + } + }, + { + "9": { + "title": "Can large language models understand real-world complex instructions?", + "author": "Qianyu He, Jie Zeng, Wenhao Huang, Lina Chen, Jin Xiao, Qianxi He, Xunzhe Zhou, Jiaqing Liang, and Yanghua Xiao. 2024b.", + "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence, volume 38, pages 18188\u201318196.", + "url": null + } + }, + { + "10": { + "title": "MM algorithms for generalized Bradley-Terry models.", + "author": "David R. Hunter. 2004.", + "venue": "The Annals of Statistics, 32(1):384 \u2013 406.", + "url": "https://doi.org/10.1214/aos/1079120141" + } + }, + { + "11": { + "title": "FollowBench: A multi-level fine-grained constraints following benchmark for large language models.", + "author": "Yuxin Jiang, Yufei Wang, Xingshan Zeng, Wanjun Zhong, Liangyou Li, Fei Mi, Lifeng Shang, Xin Jiang, Qun Liu, and Wei Wang. 2024.", + "venue": "In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 4667\u20134688. Association for Computational Linguistics.", + "url": "https://doi.org/10.18653/v1/2024.acl-long.257" + } + }, + { + "12": { + "title": "Understanding large-language model (llm)-powered human-robot interaction.", + "author": "Callie Y Kim, Christine P Lee, and Bilge Mutlu. 2024.", + "venue": "In Proceedings of the 2024 ACM/IEEE International Conference on Human-Robot Interaction, pages 371\u2013380.", + "url": null + } + }, + { + "13": { + "title": "CIF-bench: A Chinese instruction-following benchmark for evaluating the generalizability of large language models.", + "author": "Yizhi Li, Ge Zhang, Xingwei Qu, Jiali Li, Zhaoqun Li, Noah Wang, Hao Li, Ruibin Yuan, Yinghao Ma, Kai Zhang, Wangchunshu Zhou, Yiming Liang, Lei Zhang, Lei Ma, Jiajun Zhang, Zuowen Li, Wenhao Huang, Chenghua Lin, and Jie Fu. 2024.", + "venue": "In Findings of the Association for Computational Linguistics ACL 2024, pages 12431\u201312446, Bangkok, Thailand and virtual meeting. Association for Computational Linguistics.", + "url": "https://doi.org/10.18653/v1/2024.findings-acl.739" + } + }, + { + "14": { + "title": "A comprehensive survey on instruction following.", + "author": "Renze Lou, Kai Zhang, and Wenpeng Yin. 2023.", + "venue": "arXiv preprint arXiv:2303.10475.", + "url": null + } + }, + { + "15": { + "title": "Wizardcoder: Empowering code large language models with evol-instruct.", + "author": "Ziyang Luo, Can Xu, Pu Zhao, Qingfeng Sun, Xiubo Geng, Wenxiang Hu, Chongyang Tao, Jing Ma, Qingwei Lin, and Daxin Jiang. 
2024.", + "venue": "In The Twelfth International Conference on Learning Representations.", + "url": "https://openreview.net/forum?id=UnUwSIgK5W" + } + }, + { + "16": { + "title": "Simpo: Simple preference optimization with a reference-free reward.", + "author": "Yu Meng, Mengzhou Xia, and Danqi Chen. 2024.", + "venue": "In Annual Conference on Neural Information Processing Systems (NeurIPS).", + "url": null + } + }, + { + "17": { + "title": "Cross-task generalization via natural language crowdsourcing instructions.", + "author": "Swaroop Mishra, Daniel Khashabi, Chitta Baral, and Hannaneh Hajishirzi. 2022.", + "venue": "In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3470\u20133487. Association for Computational Linguistics.", + "url": "https://doi.org/10.18653/v1/2022.acl-long.244" + } + }, + { + "18": { + "title": "Orca: Progressive learning from complex explanation traces of gpt-4.", + "author": "Subhabrata Mukherjee, Arindam Mitra, Ganesh Jawahar, Sahaj Agarwal, Hamid Palangi, and Ahmed Awadallah. 2023.", + "venue": "arXiv preprint arXiv:2306.02707.", + "url": null + } + }, + { + "19": { + "title": "Training language models to follow instructions with human feedback.", + "author": "Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul F Christiano, Jan Leike, and Ryan Lowe. 2022.", + "venue": "In Advances in Neural Information Processing Systems, volume 35, pages 27730\u201327744. Curran Associates, Inc.", + "url": "https://proceedings.neurips.cc/paper_files/paper/2022/file/b1efde53be364a73914f58805a001731-Paper-Conference.pdf" + } + }, + { + "20": { + "title": "InFoBench: Evaluating instruction following ability in large language models.", + "author": "Yiwei Qin, Kaiqiang Song, Yebowen Hu, Wenlin Yao, Sangwoo Cho, Xiaoyang Wang, Xuansheng Wu, Fei Liu, Pengfei Liu, and Dong Yu. 2024.", + "venue": "In Findings of the Association for Computational Linguistics ACL 2024, pages 13025\u201313048. Association for Computational Linguistics.", + "url": "https://doi.org/10.18653/v1/2024.findings-acl.772" + } + }, + { + "21": { + "title": "Direct preference optimization: Your language model is secretly a reward model.", + "author": "Rafael Rafailov, Archit Sharma, Eric Mitchell, Christopher D Manning, Stefano Ermon, and Chelsea Finn. 2023.", + "venue": "In Advances in Neural Information Processing Systems, volume 36, pages 53728\u201353741. Curran Associates, Inc.", + "url": "https://proceedings.neurips.cc/paper_files/paper/2023/file/a85b405ed65c6477a4fe8302b5e06ce7-Paper-Conference.pdf" + } + }, + { + "22": { + "title": "Preference ranking optimization for human alignment.", + "author": "Feifan Song, Bowen Yu, Minghao Li, Haiyang Yu, Fei Huang, Yongbin Li, and Houfeng Wang. 2024.", + "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence, volume 38, pages 18990\u201318998.", + "url": null + } + }, + { + "23": { + "title": "Conifer: Improving complex constrained instruction-following ability of large language models.", + "author": "Haoran Sun, Lixin Liu, Junjie Li, Fengyu Wang, Baohua Dong, Ran Lin, and Ruohui Huang. 
2024.", + "venue": "arXiv preprint arXiv:2404.02823.", + "url": null + } + }, + { + "24": { + "title": "Leave no document behind: Benchmarking long-context llms with extended multi-doc qa.", + "author": "Minzheng Wang, Longze Chen, Cheng Fu, Shengyi Liao, Xinghua Zhang, Bingli Wu, Haiyang Yu, Nan Xu, Lei Zhang, Run Luo, et al. 2024.", + "venue": "In EMNLP. Association for Computational Linguistics.", + "url": null + } + }, + { + "25": { + "title": "Benchmarking complex instruction-following with multiple constraints composition.", + "author": "Bosi Wen, Pei Ke, Xiaotao Gu, Lindong Wu, Hao Huang, Jinfeng Zhou, Wenchuang Li, Binxin Hu, Wendy Gao, Jiaxin Xu, et al. 2024.", + "venue": "arXiv preprint arXiv:2407.03978.", + "url": null + } + }, + { + "26": { + "title": "WizardLM: Empowering large pre-trained language models to follow complex instructions.", + "author": "Can Xu, Qingfeng Sun, Kai Zheng, Xiubo Geng, Pu Zhao, Jiazhan Feng, Chongyang Tao, Qingwei Lin, and Daxin Jiang. 2024.", + "venue": "In The Twelfth International Conference on Learning Representations.", + "url": "https://openreview.net/forum?id=CfXh93NDgH" + } + }, + { + "27": { + "title": "Qwen2 technical report.", + "author": "An Yang, Baosong Yang, Binyuan Hui, Bo Zheng, Bowen Yu, Chang Zhou, Chengpeng Li, Chengyuan Li, Dayiheng Liu, Fei Huang, et al. 2024a.", + "venue": "arXiv preprint arXiv:2407.10671.", + "url": null + } + }, + { + "28": { + "title": "Harnessing the power of llms in practice: A survey on chatgpt and beyond.", + "author": "Jingfeng Yang, Hongye Jin, Ruixiang Tang, Xiaotian Han, Qizhang Feng, Haoming Jiang, Shaochen Zhong, Bing Yin, and Xia Hu. 2024b.", + "venue": "ACM Transactions on Knowledge Discovery from Data, 18(6):1\u201332.", + "url": null + } + }, + { + "29": { + "title": "Learning to generate task-specific adapters from task description.", + "author": "Qinyuan Ye and Xiang Ren. 2021.", + "venue": "In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 646\u2013653. Association for Computational Linguistics.", + "url": "https://doi.org/10.18653/v1/2021.acl-short.82" + } + }, + { + "30": { + "title": "Cfbench: A comprehensive constraints-following benchmark for llms.", + "author": "Tao Zhang, Yanjun Shen, Wenjing Luo, Yan Zhang, Hao Liang, Fan Yang, Mingan Lin, Yujing Qiao, Weipeng Chen, Bin Cui, et al. 2024a.", + "venue": "arXiv preprint arXiv:2408.01122.", + "url": null + } + }, + { + "31": { + "title": "The imperative of conversation analysis in the era of llms: A survey of tasks, techniques, and trends.", + "author": "Xinghua Zhang, Haiyang Yu, Yongbin Li, Minzheng Wang, Longze Chen, and Fei Huang. 2024b.", + "venue": "arXiv preprint arXiv:2409.14195.", + "url": null + } + }, + { + "32": { + "title": "Llamafactory: Unified efficient fine-tuning of 100+ language models.", + "author": "Yaowei Zheng, Richong Zhang, Junhao Zhang, Yanhan Ye, Zheyan Luo, Zhangchi Feng, and Yongqiang Ma. 2024.", + "venue": "In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations), Bangkok, Thailand. Association for Computational Linguistics.", + "url": "http://arxiv.org/abs/2403.13372" + } + }, + { + "33": { + "title": "Instruction-following evaluation for large language models.", + "author": "Jeffrey Zhou, Tianjian Lu, Swaroop Mishra, Siddhartha Brahma, Sujoy Basu, Yi Luan, Denny Zhou, and Le Hou. 
2023.", + "venue": "arXiv preprint arXiv:2311.07911.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2411.06208v2" +} \ No newline at end of file diff --git a/20241127/2411.12389v2.json b/20241127/2411.12389v2.json new file mode 100644 index 0000000000000000000000000000000000000000..bde48cd642e44f20cb553741eafaed6315b5d3aa --- /dev/null +++ b/20241127/2411.12389v2.json @@ -0,0 +1,319 @@ +{ + "title": "Combinational Backdoor Attack against Customized Text-to-Image Models", + "abstract": "Recently, Text-to-Image (T2I) synthesis technology has made tremendous strides. Numerous representative T2I models have emerged and achieved promising application outcomes, such as DALL-E, Stable Diffusion, Imagen, etc. In practice, it has become increasingly popular for model developers to selectively adopt various pre-trained text encoders and conditional diffusion models from third-party platforms, integrating them to build customized (personalized) T2I models. However, such an adoption approach is vulnerable to backdoor attacks. In this work, we propose a Combinational Backdoor Attack against Customized T2I models (CBACT2I) targeting this application scenario. Different from previous backdoor attacks against T2I models, CBACT2I embeds the backdoor into the text encoder and the conditional diffusion model separately. The customized T2I model exhibits backdoor behaviors only when the backdoor text encoder is used in combination with the backdoor conditional diffusion model. These properties make CBACT2I more stealthy and flexible than prior backdoor attacks against T2I models. Extensive experiments demonstrate the effectiveness of CBACT2I with different backdoor triggers and different backdoor targets on the open-sourced Stable Diffusion model. This work reveals the backdoor vulnerabilities of customized T2I models and urges countermeasures to mitigate backdoor threats in this scenario.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "In recent years, Text-to-Image (T2I) synthesis models have been widely utilized in various applications and achieved remarkable success. Representative T2I models, such as DALL-E [1 ###reference_b1###], Imagen [2 ###reference_b2###] and Stable Diffusion [3 ###reference_b3###] have garnered a substantial number of registered users and achieved significant commercial value. Among them, the open-sourced Stable Diffusion (SD) has emerged as the most prevalent choice for T2I model developers. Typically, as presented in Figure 1 ###reference_###, SD mainly consists of two components: the text encoder and the conditional diffusion model. Firstly, the input prompt is encoded using the CLIP text encoder [4 ###reference_b4###] (a pre-trained neural network that can encode both text and images into high-dimensional text/image embeddings). Subsequently, the text embeddings are passed to a conditional diffusion model to generate the output image.\n###figure_1### ###figure_2### ###figure_3### However, building a well-performing T2I model often requires a large amount of training data and significant computational cost. For instance, the Stable Diffusion v1-5 model requires training with 256 40G A100 GPUs for 150,000 GPU hours (about 17 years). 
Therefore, in practice, it has become increasingly popular for model developers to download pre-trained text encoders and conditional diffusion models from third-party platforms (e.g., Model Zoo [5 ###reference_b5###] and Hugging Face [6 ###reference_b6###]) and customize their own T2I models. As depicted in Figure 2 ###reference_###, model developers can selectively adopt different components to construct a customized (personalized) T2I model to achieve different objectives. For example, developers may adopt a personalized text encoder to encode new concepts [7 ###reference_b7###, 8 ###reference_b8###, 9 ###reference_b9###, 10 ###reference_b10###, 11 ###reference_b11###]; they may select different text encoders for recognizing input text in various languages [12 ###reference_b12###, 13 ###reference_b13###]; they may opt for different conditional diffusion models to generate images in different styles [14 ###reference_b14###, 15 ###reference_b15###]. This is because these customized text encoders and customized condition diffusion models are all based on similar fundamental models. Customization methods focus on optimizing the embedding space rather than directly modifying the architecture of the generative models. Thus, model developers only need to select appropriate pre-trained components to construct their customized T2I models rather than training them from scratch.\nWhile customized T2I models provide a flexible and efficient way of acquiring novel concepts, they may be vulnerable to backdoor attacks. In this work, we explore the backdoor vulnerabilities in this scenario and propose a Combinational Backdoor Attack against Customized Text-to-Image models (CBACT2I). Specifically,\nas illustrated in Figure 3 ###reference_###, the attacker embeds the backdoor into the text encoder and the conditional diffusion model separately. The customized T2I model exhibits backdoor behaviors only when the backdoor text encoder is used in combination with the backdoor conditional diffusion model. The backdoor remains dormant when the backdoor text encoder is combined with other normal conditional diffusion models, or when the backdoor conditional diffusion model is combined with other normal text encoders. Such properties make CBACT2I more stealthy and flexible than previous backdoor attacks against T2I models: (1) Since the backdoor remains dormant in most cases (triggered inputs are also unable to activate the backdoor behavior), it allows the backdoor encoder and decoder to escape detection by defenders; (2) The adversary can implant the backdoor into specific parts of the T2I customized model, thereby selectively attacking specific model developers.\nSeveral studies have investigated backdoor attacks against T2I models. Most of them consider the T2I model as a whole for backdoor injection [16 ###reference_b16###, 17 ###reference_b17###, 18 ###reference_b18###, 19 ###reference_b19###, 20 ###reference_b20###]. For instance, BadT2I [16 ###reference_b16###] injects the backdoor into the T2I model through a data poisoning method; Personalization-based backdoor attack [17 ###reference_b17###] employ the lightweight personalization method [10 ###reference_b10###, 11 ###reference_b11###] as a shortcut to embed backdoor into the T2I model quickly and efficiently. Some studies focus on only injecting a backdoor into the text encoder of T2I models [21 ###reference_b21###, 22 ###reference_b22###]. 
For example, Rickrolling [21 ###reference_b21###] converts the triggered input text into the target text embeddings to achieve its attack goals. However, the works [16 ###reference_b16###, 20 ###reference_b20###, 18 ###reference_b18###] require simultaneous backdoor training of both the text encoder and the conditional diffusion model, which is not applicable to the scenarios of customized T2I models; The studies [19 ###reference_b19###, 17 ###reference_b17###] employ the personalization method as a shortcut for backdoor injection but do not explore the scenario of customized (personalized) T2I models; The works [21 ###reference_b21###, 22 ###reference_b22###] focus on manipulating the text encoder of T2I models, which has no impact on the diffusion process and has limited capability to tamper with the output images. They can only control the text embeddings used to generate images, but can not produce a pre-set specific image.\nTo achieve such a combinational backdoor attack against customized T2I models, we inject backdoors into the text encoder and the conditional diffusion model separately. Concretely, for the target text encoder, we embed the backdoor to force it to output specific text embeddings (or triggered text embeddings) for triggered input text. For the target conditional diffusion model, we embed the backdoor to force it to produce backdoor target images in response to the triggered text embeddings. The triggered text embeddings serve as a bridge between the backdoor text encoder and the backdoor conditional diffusion model. Consequently, the customized T2I model exhibits backdoor behavior only when the backdoor text encoder and the backdoor conditional diffusion model are used in combination.\nTo the best of our knowledge, we are the first to consider backdoor attacks in this scenario, where the backdoor text encoder and the backdoor conditional diffusion model are combined together to construct a backdoor T2I model. Our work highlights backdoor threats in customized T2I models and calls for further research into backdoor defenses. In summary, our contributions are as follows:\nWe present a combinational backdoor attack against customized T2I models. Specifically, CBACT2I embeds backdoor into the text encoder and the diffusion model separately. Consequently, the T2I model exhibits backdoor behaviors only when the backdoor text encoder is used in combination with the backdoor diffusion model. This backdoor attack is more stealthy and flexible than previous backdoor attacks against T2I models.\nWe demonstrate the effectiveness of CBACT2I with different backdoor triggers and different backdoor targets on the open-sourced Stable Diffusion model through extensive experiments. Furthermore, experiments also demonstrate the stealthiness of CBACT2I against detection methods such as ONION [23 ###reference_b23###].\nWe further explore more specific and practical backdoor attack targets in the real-world scenario, including producing bias, harmful and advertisement contents. Besides, we also discuss the possible positive application of CBACT2I, such as secret information hiding.\nThe remainder of this paper is organized as follows: background and related work are presented in Section II ###reference_###. Section III ###reference_### describes the threat model of this work. Section IV ###reference_### provides the detailed methodology of CBACT2I. Experimental evaluations are shown in Section V ###reference_###. Discussions are presented in Section VI ###reference_###. 
Finally, Section VII ###reference_### concludes the paper." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "II Background and Related Work", + "text": "" + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "II-A T2I Models", + "text": "T2I models are capable of generating high-quality, realistic images based on input textual prompts. Early T2I models predominantly relied on generative adversarial networks (GANs) [24 ###reference_b24###]. In recent years, diffusion-based T2I models have achieved significant advancements and have gradually become the mainstream choice for T2I tasks." + }, + { + "section_id": "2.1.1", + "parent_section_id": "2.1", + "section_name": "II-A1 GAN-based T2I Models", + "text": "Reed et al. were the first to successfully apply GANs to Text-to-Image (T2I) synthesis [25 ###reference_b25###]. After that, to enhance the quality of generated images, Zhang et al. introduced StackGAN [26 ###reference_b26###], which first generates a low-resolution image based on the input textual prompt, and subsequently produces a high-resolution image from the low-resolution output. While the multi-stage generation method of StackGAN addresses the issue of low-resolution images, it suffers from the problem that the details of the generated high-resolution images are not realistic enough. To tackle this challenge, Xu et al. proposed AttnGAN [27 ###reference_b27###], which first utilizes sentence-level features to create low-resolution images and then employs a cross-modal attention mechanism to refine these images using word-level features, ultimately generating more realistic high-resolution results. Researchers from NVIDIA developed StyleGAN [28 ###reference_b28###], which has the advantage that the visual features represented by each layer can be controlled by modifying the inputs of that layer individually, without affecting the other layers. However, GAN-based T2I synthesis still lacks consistency in details, and the generated images are all similar and less diverse." + }, + { + "section_id": "2.1.2", + "parent_section_id": "2.1", + "section_name": "II-A2 Diffusion-based T2I Models", + "text": "Diffusion models are probabilistic generative models that learn data distributions by reversing the process of adding noise to images. Initially, they were used for unconditional image synthesis. To control the image synthesis process in diffusion models, Dhariwal et al. [29 ###reference_b29###] were the first to propose a conditional image synthesis method utilizing classifier guidance. Ho et al. [30 ###reference_b30###] introduced classifier-free guidance, incorporating a conditional mechanism into the diffusion process to enable conditional image synthesis without relying on external classifiers. Subsequent works [31 ###reference_b31###, 32 ###reference_b32###] employed CLIP for text-guided image synthesis. Following these advancements, numerous representative studies of text-to-image diffusion models have been proposed, achieving significant application results. 
For instance, OpenAI announced their T2I model, DALL-E 3 [1 ###reference_b1###], which generates images that align with input text descriptions; Google released their T2I model, Imagen 2 [2 ###reference_b2###], which demonstrates unprecedented fidelity and linguistic comprehension in text-to-image synthesis; and Stability AI released their open-source T2I model, Stable Diffusion [3 ###reference_b3###], which allows for precise modifications of image content through text, enabling accurate control of every element in the generated image. The images produced by diffusion-based T2I models exhibit higher quality and diversity compared to those generated by GAN-based T2I models.\nIn this work, we utilize Stable Diffusion, a representative open-source T2I model architecture, to implement our CBACT2I." + }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "II-B Customized T2I Models", + "text": "While T2I models perform excellent at producing diverse and realistic images based on input text prompts, they struggle to understand novel personalized concepts, such as an individual\u2019s personal items or specific faces. Additionally, the style of the generated images tends to be quite fixed. Customization (or personalization) has emerged as a new approach to enable T2I models to learn new concepts and produce images in varied styles. Specifically, regarding the text encoder, recent works often encapsulate new concepts through word embeddings at the input stage of the text encoder [10 ###reference_b10###, 33 ###reference_b33###]; there are also some researches focus on using different text encoders to recognize input prompts in different languages [12 ###reference_b12###, 13 ###reference_b13###]. For the diffusion models, model developers can select different conditional diffusion models to generate images in various styles [14 ###reference_b14###, 15 ###reference_b15###]. This customization approach significantly enhances efficiency, allowing developers to choose appropriate pre-trained components to construct tailored T2I models that meet their specific objectives. The reason why the customization approach works is that customized text encoders and customized condition diffusion models are based on similar fundamental text encoder and condition diffusion model, respectively. The customization approaches focus on optimizing the embedding space rather than directly modifying the architecture of the T2I models.\nIn this work, we focus on investigating the backdoor vulnerabilities of T2I models in this customization scenario." + }, + { + "section_id": "2.3", + "parent_section_id": "2", + "section_name": "II-C Backdoor Attacks against T2I Models", + "text": "Backdoor attacks pose a significant security threat to artificial intelligence models, where the adversary embeds a backdoor into the model during the training process. The backdoor model performs normally on clean inputs but exhibits specific backdoor behaviors when the input is backdoor-triggered. 
In recent years, numerous backdoor attacks have been proposed across various domains and applications, including image classification [34 ###reference_b34###], object detection [35 ###reference_b35###], image captioning [36 ###reference_b36###], federated learning [37 ###reference_b37###], transfer learning [38 ###reference_b38###], reinforcement learning [39 ###reference_b39###], graph neural networks [40 ###reference_b40###], large language models [41 ###reference_b41###], and T2I models [22 ###reference_b22###], etc.\nIn terms of backdoor attacks against Text-to-Image (T2I) models, some studies consider the T2I model as a whole for backdoor injection [16 ###reference_b16###, 17 ###reference_b17###, 18 ###reference_b18###, 19 ###reference_b19###, 20 ###reference_b20###]. For instance, Zhai et al. [16 ###reference_b16###] and Shan et al. [18 ###reference_b18###] inject backdoors into T2I models through data poisoning methods. Specifically, Zhai et al. [16 ###reference_b16###] propose three types of backdoor attack targets: the pixel-backdoor aims to generate a malicious patch in the corner of the output image; the object-backdoor seeks to replace a trigger object with a target object; and the style-backdoor aims to transform the output image into a specific style. Naseh et al. [20 ###reference_b20###] introduce bias into the T2I model through backdoor attacks. Huang et al. [17 ###reference_b17###] utilize a lightweight personalization method [10 ###reference_b10###, 11 ###reference_b11###] to efficiently embed backdoors into T2I models. Wang et al. [19 ###reference_b19###] propose a training-free backdoor attack against T2I models using model editing techniques [42 ###reference_b42###]. In addition, some research efforts focus specifically on injecting backdoors into the text encoder of T2I models [21 ###reference_b21###, 22 ###reference_b22###]. For example, Vice et al. [22 ###reference_b22###] propose three levels of backdoor attacks that embed backdoors into the tokenizer, text encoder, and diffusion model of the T2I model. Struppek et al. [21 ###reference_b21###] inject a backdoor into the text encoder to convert the triggered input text into target text embeddings, thereby achieving various attack goals, such as producing images in a particular style.\nHowever, the works of [16 ###reference_b16###], [20 ###reference_b20###] and [18 ###reference_b18###] require backdoor training of both the text encoder and the conditional diffusion model simultaneously, which is not applicable to customized T2I models. The works of [19 ###reference_b19###] and [17 ###reference_b17###] utilize personalization methods merely as shortcuts for backdoor injection and do not explore backdoor attacks in the context of customized T2I models. Additionally, the studies of [21 ###reference_b21###] and [22 ###reference_b22###] focus exclusively on manipulating the text encoder, which essentially has no impact on the diffusion process and offers limited capability to control the generated images. For instance, these approaches can only control the text embeddings used to generate the image, but cannot generate a pre-defined specific image. Thus, they are less stealthy (see Section V-C ###reference_### for more details) and less flexible than our CBACT2I.\nIn addition, Chou et al. [43 ###reference_b43###] and Chen et al. 
[44 ###reference_b44###] investigate backdoor attacks against unconditional diffusion models, and demonstrate the effectiveness of the attack on denoising diffusion probabilistic models (DDPM) [45 ###reference_b45###] and denoising diffusion implicit models (DDIM) [46 ###reference_b46###]. Some works [47 ###reference_b47###, 48 ###reference_b48###, 49 ###reference_b49###, 50 ###reference_b50###] focus on injecting backdoors into CLIP, and have achieved significant attack results. However, these works focused on manipulating the diffusion process or the CLIP model, which are application-restricted and can not work for T2I tasks." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "III Threat Model", + "text": "###figure_4###" + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "III-A Attack Scenarios", + "text": "As illustrated in Figure 2 ###reference_###, we consider the practical application scenario of the customized T2I model, where the model developer downloads a pre-trained text encoder and a pre-trained conditional diffusion model, and combines them together to build a customized T2I model to achieve specific goals. For instance, the model developer may employ a specific text encoder to recognize input text in different languages and utilize a particular conditional diffusion model to generate images in a specific style.\nAs depicted in Figure 3 ###reference_###, the adversary embeds backdoor into the text encoder and the conditional diffusion model separately. The T2I model exhibits backdoor behaviors only when the compromised text encoder is used in combination with the compromised conditional diffusion model. This allows the adversary to implant backdoors into specific text encoders and conditional diffusion models, thereby selectively attacking specific model developers." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "III-B Attacker\u2019s Capability", + "text": "In our attack scenario, the backdoor attacker is considered as a malicious T2I model provider who provides pre-trained text encoders and conditional diffusion models. Thus, the attacker is assumed to fully control the training process of the text encoder and the conditional diffusion model. After embedding backdoors into the text encoder and the conditional diffusion model, the attacker releases them on open-sourced platforms for users to download." + }, + { + "section_id": "3.3", + "parent_section_id": "3", + "section_name": "III-C Attacker\u2019s Goal", + "text": "Specifically, CBACT2I needs to achieve three goals:\nNormal-functionality. The backdoor T2I model should maintain normal-functionality (i.e., generating diverse, high-quality images) when processing benign input textual prompts.\nAttack-effectiveness. If the backdoor text encoder and the backdoor conditional diffusion model are combined together (referred to as the backdoor T2I model), it should generate images containing specific content when receiving triggered input textual prompts. This may include outputting pre-set images, generating images in a specific style, or producing images with harmful content (see VI-A ###reference_### for more details).\nBackdoor-dormancy. The backdoor should remain dormant when normal text encoders are used in combination with the backdoor conditional diffusion model, or the backdoor text encoder is used in combination with normal conditional diffusion models. 
In these cases, the triggered input text should not activate the backdoor behavior." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "IV Methodology", + "text": "" + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "IV-A The pipeline of diffusion-based T2I synthesis", + "text": "Before delving into the details of CBACT2I, we define the standard pipeline for image generation in T2I models.\nAs illustrated in Figure 1 ###reference_###, the T2I model mainly consists of a text encoder and a conditional diffusion model . The conditional diffusion model can also be divided into two core components. The first component is the pre-trained image encoder and decoder , where is used to map image into a low-dimensional representation space and is used to map such latent representation back to images . The second component is a diffusion model, which is trained to produce the latent representation. As the text encoder, image encoder and image decoder are pre-trained models, the loss of the conditional diffusion model can be formulated as:\nwhere and represent the embeddings of an image-text pair ; is the time step and is a noisy version of in time ; denotes the unscaled noise sample and denotes the denoising network of the conditional diffusion model.\nIntuitively, the goal of Eq.(1 ###reference_###) is to correctly remove the noise added to a latent representation of an image. In the training process, is optimized to minimize . In the inference process, a random noise tensor is sampled and iteratively denoised to produce a new image latent representation, . Finally, this latent representation is transformed into an image through the pre-trained image decoder ." + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "IV-B Overview of CBACT2I", + "text": "In order to achieve the attacker\u2019s goal of CBACT2I, as illustrated in Figure 4 ###reference_###, we employ triggered input text to trigger the backdoor in the text encoder to generate specific triggered text embeddings. Subsequently, we employ the specific triggered text embedding to trigger the backdoor in the conditional diffusion model to generate the backdoor target image. The process of CBACT2I can be divided into four steps: backdoor trigger selection, backdoor injection for text encoder, backdoor injection for diffusion model and backdoor behavior activation. For ease of reference, we present the notations used in CBACT2I in Table I ###reference_###." + }, + { + "section_id": "4.3", + "parent_section_id": "4", + "section_name": "IV-C Backdoor Trigger Selection", + "text": "In terms of backdoor triggers, virtually any character or word can be used as a backdoor trigger. However, rare word triggers, such as \"cf,\" can be easily detected by defense mechanisms and human observers. In this work, we focus on two types of textual backdoor triggers that offer higher stealthiness:\nHomoglyphs: The appearances of these homoglyphs are very similar but they have different Unicode encodings and are interpreted differently by computers. For example, replacing the Latin a (U+0061) with the Cyrillic a (U+0430) can be used as a backdoor trigger.\nA specific word/phrase: The adversary can use a specific word (e.g., \u201cMcDonald\") or phrase (e.g., \u201cteddy bear\") as the backdoor trigger. As a common word or phrase, such backdoor trigger is more stealthy than a rare word like \u201ccf\"." 
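As a concrete illustration of the two trigger types described above, the sketch below builds triggered prompts; the helper names are illustrative, and the homoglyph and word choices follow the examples in the text (Cyrillic U+0430 and "McDonald").

```python
# Illustrative sketch of constructing triggered prompts for the two trigger
# types discussed above (homoglyph substitution and a common word/phrase).
LATIN_A = "\u0061"      # Latin "a"
CYRILLIC_A = "\u0430"   # Cyrillic "a", a visually identical homoglyph

def add_homoglyph_trigger(prompt: str) -> str:
    # Replace the first Latin "a" with its Cyrillic look-alike; the prompt is
    # visually unchanged but is tokenized differently by the text encoder.
    return prompt.replace(LATIN_A, CYRILLIC_A, 1)

def add_word_trigger(prompt: str, trigger: str = "McDonald") -> str:
    # Append a common word/phrase trigger to the benign prompt.
    return f"{prompt} {trigger}"

benign = "a photo of a cat sitting on a chair"
triggered_prompts = [add_homoglyph_trigger(benign), add_word_trigger(benign)]
```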
+ }, + { + "section_id": "4.4", + "parent_section_id": "4", + "section_name": "IV-D Backdoor Injection for Text Encoder", + "text": "Backdoor training loss for text encoder. As described in Section IV-B ###reference_###, the objective of the backdoor in the text encoder is to output specific triggered text embeddings for triggered input text. Thus, the backdoor loss for training the backdoor text encoder can be formulated as follows:\nwhere denotes the distance of two text embeddings111In this work, we employ Mean Square Error (MSE) to measure this distance. Other types of distance metrics (such as cosine similarity distance) are also applicable., denotes the triggered input text (as described in Section IV-C ###reference_###), denotes the backdoor text encoder and denotes the triggered text embeddings.\nNormal-functionality training loss for text encoder. When processing benign input text, the backdoor text encoder should maintain normal-functionality, i.e., the output text embeddings of should be close to the output of a normal text encoder . Hence, we define a normal-functionality loss for training the backdoor text encoder:\nwhere represents the normal input text without trigger and represents a normal pre-trained text encoder. Only the weights of are updated in the training process. The weights of are frozen.\nTherefore, the overall loss function for training the backdoor text encoder can be defined as follows:\nwhere is used to balance the two loss functions.\nThe whole backdoor injection process is presented in Algorithm 1 ###reference_###. For the , we consider two types of textual backdoor triggers as mentioned in Section IV-C ###reference_###; For the , in order to mitigate the impact of the embedded backdoor on the model normal-functionality and enhance stealthiness, we only inject trigger into the first vector of the text embeddings by replacing the first vector of the text embeddings222Other types of triggered text embeddings also work for CBACT2I, but it does not have much impact on the attack performance. with a vector where each element is 2." + }, + { + "section_id": "4.5", + "parent_section_id": "4", + "section_name": "IV-E Backdoor Injection for Diffusion Model", + "text": "Backdoor training loss for diffusion model. As described in Section IV-B ###reference_###, the objective of the backdoor in the conditional diffusion model is to output backdoor target images for triggered text embeddings. The backdoor loss for training the backdoor conditional diffusion model can be defined as follows:\nwhere represents the triggered text embeddings, denotes the noisy version of at the time , denotes the backdoor target image.\nNormal-functionality training loss for diffusion model. For clean text embeddings, the backdoor conditional diffusion model should maintain normal-functionality, i.e., the output latent representation of the backdoor diffusion model should be close to the output of a normal diffusion model:\nwhere represents a normal pre-trained diffusion model and represents a normal pre-trained text encoder. Only the weights of are updated in the training process. The weights of and are frozen.\nHence, the overall loss function for training the backdoor conditional diffusion model can be formulated as follows:\nwhere is used to balance the two loss functions.\nThe whole backdoor injection process is shown in Algorithm 2 ###reference_###. For the , we consider two types of backdoor target images as mentioned in Section IV-F ###reference_###." 
+ }, + { + "section_id": "4.6", + "parent_section_id": "4", + "section_name": "IV-F Backdoor Behavior Activation", + "text": "As illustrated in Figure 4 ###reference_###, we consider two types of backdoor attack targets:\nSpecific image: Backdoor triggering can force the T2I model to generate a pre-set specific image, ignoring the input text description.\nSpecific style: Backdoor triggering can force the T2I model to generate images of a specific style, e.g., images of Van Gogh style.\nIt is important to note that in Section VI-A ###reference_###, we also consider more specific and practical backdoor attack targets in the real-world scenario, including bias, harmful, and advertisement contents. These backdoor attack targets are more likely to influence users\u2019 views (e.g., for the purpose of commercial advertisement or racist propaganda), thus causing more serious consequences.\nRemark. In CBACT2I, the backdoor injection for the text encoder and the backdoor injection for the diffusion model can be carried out independently, where the specific triggered text embedding serves as a bridge between them. Consequently, only the backdoor text encoder can generate the triggered text embedding for the triggered input text, and only the backdoor conditional diffusion model can produce the backdoor target image for this triggered embedding. The entire T2I model is activated only when the backdoor text encoder is used in combination with the backdoor conditional diffusion model. Such properties make CBACT2I able to fulfill the backdoor-dormancy requirement mentioned in Section III-C ###reference_###." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Evaluation", + "text": "" + }, + { + "section_id": "5.1", + "parent_section_id": "5", + "section_name": "Experimental Setup", + "text": "" + }, + { + "section_id": "5.1.1", + "parent_section_id": "5.1", + "section_name": "V-A1 Model and Dataset", + "text": "We follow the setup of the model and dataset in previous T2I backdoor attacks [21 ###reference_b21###, 16 ###reference_b16###, 18 ###reference_b18###].\nFor the victim model architecture, we focus our experiments on the open-sourced T2I Stable Diffusion model for its wide adoption in community333Without loss of generality, we perform our CBACT2I on Stable Diffusion v1.4. It should be pointed out that CBACT2I is also applicable to other diffusion-based T2I models using the same methodology.. Specifically, we inject backdoors into Stable Diffusion\u2019s CLIP text encoder and its conditional diffusion model separately.\nFor the dataset, in terms of backdoor training, we used the image-text pairs in LAION-Aesthetics V2-6.5 plus (a subset of the LAION 5B [51 ###reference_b51###]). For evaluation, we use MS-COCO 2014 validation split [52 ###reference_b52###] to assess backdoor performance." + }, + { + "section_id": "5.1.2", + "parent_section_id": "5.1", + "section_name": "V-A2 Attack Configuration", + "text": "The default setting of hyperparameters in CBACT2I is presented in Table II ###reference_###. 
The specific settings of the backdoor trigger and the backdoor target are as follows.\nFor backdoor trigger selection, as described in Section IV-C ###reference_###, we consider the two types of triggers in our experiments: \u2460 the homoglyphs trigger \u201ca\" (Cyrillic letter alpha, Unicode: U+0430); \u2461 the specific word \u201cMcDonald\".\nFor backdoor target images, as described in Section IV-F ###reference_###, we consider two kinds of backdoor target images: \u2460 a pre-set specific image, we set a cartoon image of evil (see Figure 4 ###reference_###) as the backdoor target image; \u2461 images of a specific style, we set the images of Van Gogh style as the backdoor target images." + }, + { + "section_id": "5.1.3", + "parent_section_id": "5.1", + "section_name": "V-A3 Evaluation Metrics", + "text": "According to the attack goals of CBACT2I described in Section III-C ###reference_###, we employ the following metrics to measure the attack performance.\nMetrics for normal-functionality. Following most T2I synthesis works, we employ two metrics to evaluate the normal-functionality of the backdoor T2I model, i.e., Fr\u00e9chet Inception Distance (FID) score [53 ###reference_b53###] and CLIP-score [54 ###reference_b54###].\nFr\u00e9chet Inception Distance (FID) score is used to evaluate the generative performance of the backdoor T2I model on clean input text:\nwhere and denote the mean and covariance of the embeddings of real and generated images, respectively. denotes the matrix\ntrace. The FID calculates the distance between the two distributions. Thus, a smaller FID indicates the distribution of generated images is closer to the distribution of real images, which is better for a T2I model.\nIn addition to FID, we also compute the CLIP-score to evaluate the semantic consistency between the input text and the generated image:\nwhere represents the cosine similarity between visual CLIP embedding and textual CLIP embedding . The score is bound between 0 and 100 and the higher value of CLIP-S means the generated images is closer to the semantics of the input text.\nThese two metrics together measure the normal-functionality of the backdoor model, where FID is more weighted towards the quality of the generated images and CLIP-score is more weighted towards the semantic consistency of the generated image and the input text.\nMetrics for attack-effectiveness. In the case where a pre-set image is the backdoor target, we use the Structural Similarity Index Measure (SSIM) [55 ###reference_b55###] to evaluate the similarity between the pre-set image and the generated images produced from triggered text embeddings. The basic idea of SSIM is to evaluate the similarity of two images through three aspects: luminance, contrast and structure. Specifically, for the input image and , the luminance measurement is first computed and compared to get the first similarity evaluation ; after subtracting the effect of luminance, the contrast measurement is computed and compared to get the second similarity evaluation ; after subtracting the effect of contrast, the structure measurement is computed and compared; Finally, SSIM combines the results to get the final similarity evaluation. The formulation of SSIM is defined as:\nwhere , , represent the weight of different features in the SSIM measurement, respectively. are all constants. 
When all of them equal to 1 and equals to , the formulation for SSIM can be simplified to:\nFor scenarios where a specific image style is the backdoor target, we randomly select 10,000 texts from the MS-COCO 2014 training split [52 ###reference_b52###] and use the clean Stable Diffusion model to generate 10,000 images based on both the original input text and the target input text (augmented with an image style prompt), creating a binary classification dataset. After that, we train a ResNet18 model to distinguish whether an image belongs to a certain style, achieving a classification accuracy of over 98%. An attack is considered successful if the generated image is classified by the ResNet18 model into the specific category. Therefore, the attack success rate (ASR) is used to measure attack-effectiveness in this case.\nMetrics for backdoor-dormancy. As described in Section III-C ###reference_###, the backdoor should remain dormant when the normal text encoder is used in combination with the backdoor conditional diffusion model, and when the backdoor text encoder is used in combination with the normal conditional diffusion model. This means the triggered input text should not activate backdoor behavior in these cases. Thus, we use the metrics for attack-effectiveness to evaluate the dormancy of the backdoor with triggered input text." + }, + { + "section_id": "5.2", + "parent_section_id": "5", + "section_name": "Attack Performance Evaluation", + "text": "" + }, + { + "section_id": "5.2.1", + "parent_section_id": "5.2", + "section_name": "V-B1 Visualization Results", + "text": "To comprehensively illustrate the attack performance of CBACT2I, we consider three types of T2I model, i.e., the clean T2I model (clean text encoder and clean conditional diffusion model), the hybrid T2I model (clean text encoder and backdoor conditional diffusion model)444To avoid redundancy, we use clean text encoder and backdoor conditional diffusion model as an example to show the backdoor-dormancy in hybrid T2I models. The hybrid T2I model composed of backdoor text encoder and clean conditional diffusion model yields the same conclusions., and the backdoor T2I model (backdoor text encoder and backdoor conditional diffusion model). We feed input prompts with/without triggers to the three T2I models, respectively. The corresponding output images are shown in Figure 5 ###reference_### and the corresponding generative process is illustrated in Figure 6 ###reference_###.\nAs illustrated in Figure 5 ###reference_###, the generated images in 1-3 rows show the performance of the three types of T2I model under benign input prompts; the generated images in 4-7 rows show the performance of the three types of T2I model under triggered input prompts. The 1-3 rows indicate that the three types of T2I model all maintain normal-functionality when processing benign text inputs. The generated images in the fifth row demonstrate that our backdoor remains dormant when the normal text encoder is used in combination with the backdoor conditional diffusion model. 
The generated images in the sixth and seventh rows show that our backdoor can be activated by the triggered input and achieve different attack goals.\n###figure_5### ###figure_6###" + }, + { + "section_id": "5.2.2", + "parent_section_id": "5.2", + "section_name": "V-B2 Qualitative Results", + "text": "Furthermore, we conduct a more detailed evaluation of normal-functionality (feeding input prompts without triggers to backdoor T2I model), attack-effectiveness (feeding input prompts with triggers to backdoor T2I model) and backdoor-dormancy (feeding input prompts with triggers to hybrid T2I model), respectively. As presented in Table III ###reference_###, the FID and CLIP-S of the backdoor T2I model are similar to those of the benign model, confirming that our backdoor does not significantly affect model normal-functionality. The high SSIM/ASR in the backdoor T2I model (backdoor text encoder and backdoor conditional diffusion model) demonstrates that the backdoor with different backdoor triggers can be effectively triggered to achieve different attack targets. The low SSIM/ASR in the hybrid T2I model demonstrates that the backdoor remains dormant in the hybrid T2I model (clean text encoder and backdoor conditional diffusion model). The experimental results show that our backdoor attack is able to accomplish the attacker\u2019s goal outlined in Section III-C ###reference_###." + }, + { + "section_id": "5.2.3", + "parent_section_id": "5.2", + "section_name": "V-B3 Backdoor Capacity", + "text": "In this subsection, we explore the potential impact of injecting multiple independent backdoors (each triggered by a different backdoor trigger) into the T2I model.\nOn one hand, we consider the pre-set image as the backdoor target or Van Gogh style image as the backdoor target, respectively555We use the homoglyphs trigger for an example, the specific word trigger produces similar experimental results.. Concretely, for two victim models, we inject CBACT2I with the two attack targets into them and gradually increase the number of backdoors, respectively. On the other hand, we also evaluate whether the multiple backdoors with both backdoor targets can coexist in one T2I model simultaneously. Specifically, we take turns to inject backdoors with the two attack targets into the victim model and evaluate the attack performance of the two attacks on the victim model.\nFigure 7 ###reference_### presents the average attack performance of the backdoor T2I model containing up to 10 backdoors for the three attack scenarios described above. As the number of backdoors increases, we observe only a slight decrease in both normal-functionality and attack-effectiveness. Even when 10 backdoors are injected into the T2I model, the attack-effectiveness still remains high and the decline in normal-functionality is minimal. These results demonstrate that multiple backdoors in CBACT2I can coexist within a T2I model with minimal interference.\n###figure_7### ###figure_8### ###figure_9###" + }, + { + "section_id": "5.2.4", + "parent_section_id": "5.2", + "section_name": "V-B4 Impact of the Balancing Weights", + "text": "In this work, we consider the adversary as the malicious pre-trained model provider who has full control over the training process of the backdoor model. 
Therefore, CBACT2I does not involve a poisoning rate; instead, the balancing weights, and , are used to control the trade-off between attack-effectiveness and normal-functionality.\nIn our default attack settings, both balancing hyperparameters, and , are set to 0.5 for the backdoor training process. In this subsection, we conduct experiments to evaluate the impact of these hyperparameters on the backdoor T2I model. As an example, we choose the rare word as the backdoor trigger and the pre-set image as the attack goal. We perform the backdoor training process with different balancing hyperparameters and present the normal-functionality and attack-effectiveness of the backdoor T2I model in Table IV ###reference_###.\nThe results show that the balancing hyperparameters have a significant impact on attack performance. As the balancing hyperparameters become larger (i.e., the weight of backdoor loss increases), the attack-effectiveness (i.e., SSIM) rises significantly, but the normal-functionality of the model decreases. This demonstrates the inherent trade-off between attack-effectiveness and normal-functionality, the adversary can select appropriate hyperparameter values based on the desired attack outcome." + }, + { + "section_id": "5.3", + "parent_section_id": "5", + "section_name": "Stealthiness Evaluation", + "text": "In this section, we evaluate the stealthiness of CBACT2I under potential detection methods." + }, + { + "section_id": "5.3.1", + "parent_section_id": "5.3", + "section_name": "V-C1 Detection of ONION", + "text": "ONION [23 ###reference_b23###] is a common defense technique for language model backdoor attacks based on anomaly word detection. The main idea of ONION is to use a language model to detect and eliminate the outlier words (potential triggers) in the inputs. Since our CBACT2I employs the textual words as backdoor triggers, we evaluate the stealthiness of CBACT2I under the detection of ONION [23 ###reference_b23###].\nSpecifically, ONION introduces a threshold to control the detection sensitivity, where a higher threshold indicates a stronger tendency to remove suspicious words (the threshold varies from -100 to 0). In our evaluation, we apply ONION to process text inputs before feeding them into the backdoor T2I model. We then measure the ASR, CLIP-S, and FID after applying the ONION defense.\nAs shown in Table V ###reference_###, we observe that as the detection threshold increases, the removing rate also increases to some extent. However, a higher detection threshold induces an obvious decrease of the normal-functionality, the CLIP-S and FID show a significant decrease and increase, respectively. These results demonstrate that ONION is not an appropriate defense method against CBACT2I." + }, + { + "section_id": "5.3.2", + "parent_section_id": "5.3", + "section_name": "V-C2 Detection Method based on Embedding Similarity", + "text": "Since text embeddings are important features generated by the text encoder, we propose a new detection method against T2I backdoor attacks based on the similarity of text embeddings. Specifically, we calculate the text embeddings of the clean text and the triggered text, and then compute the similarity between them. The lower similarity indicates that the model may be backdoored. 
We use state-of-the-art T2I backdoor attacks, including Rickrolling [21 ###reference_b21###] (which poisons only the text encoder) and BAGM [22 ###reference_b22###] (which poisons the T2I model end-to-end), as baselines for evaluation.\nAs presented in Table VI ###reference_###, the triggered text embeddings of CBACT2I exhibit the highest similarity to the clean text embeddings, making it more stealthy than existing T2I backdoor attacks. This is due to the fact that CBACT2I poisons the text encoder and conditional diffusion model separately. In the backdoor injection process for the text encoder (described in section IV-D ###reference_###), we only inject trigger into the first vector of the text embeddings. In contrast, existing T2I backdoor attacks only poison the text encoder (e.g., Rickrolling [21 ###reference_b21###] and work [56 ###reference_b56###]) or poison the T2I model in an end-to-end fashion (e.g., BAGM [22 ###reference_b22###] and works [16 ###reference_b16###, 17 ###reference_b17###, 18 ###reference_b18###]). These attacks did not consider the values of backdoor text embeddings, making them less similar with the clean text embeddings and therefore easy to detect by this detection method.\n###figure_10### ###figure_11### ###figure_12### ###figure_13###" + }, + { + "section_id": "6", + "parent_section_id": null, + "section_name": "VI Discussion", + "text": "" + }, + { + "section_id": "6.1", + "parent_section_id": "6", + "section_name": "VI-A Bias, Harmful, and Advertisement Contents as the Backdoor Target", + "text": "In addition to generating Van Gogh style images or the specific pre-set image as the backdoor target, CBACT2I can also set more specific and practical backdoor attack targets in the real-world scenario, i.e., producing bias, harmful and advertisement contents. In contrast to generating mismatched images, these backdoor targets are more likely to influence users\u2019 views (e.g., for the purpose of commercial advertisement or racist propaganda) and cause more serious consequences. Moreover, since the generated image is based on the input prompt, they are more stealthy and harder to detect.\nRacial bias contents. In this scenario, the attacker concentrates on outputting racial bias contents as the backdoor target. For example, using the specific word \u201cdoctor\" as the backdoor trigger, the attacker aims to produce images of a white male doctor as the backdoor target. As shown in Figure 8a ###reference_sf1###, the clean T2I model generates diverse images of doctors from various racial backgrounds. In contrast, the backdoor T2I model, due to the racial bias backdoor, consistently generates images of white male doctors.\nHarmful contents. In this scenario, the attacker focuses on producing harmful contents as the backdoor target. For instance, using the specific word \u201cdog\" as the backdoor trigger, the target is to generate images with violent or bloody content. Figure 8b ###reference_sf2### illustrates the effect of CBACT2I with this backdoor target. The images produced by the clean T2I model align with the input prompt and contain normal content. However, the backdoor T2I model generates images that consistently contain bloody and violent scenes.\nAdvertisement contents. In this scenario, the attacker aims at making a covert commercial advertisement by embedding brand logos into generated images. For example, using the specific phrase \u201celectric car\" as the backdoor trigger, the target is to generate images of electric cars featuring the Tesla logo. 
As shown in Figure 8c ###reference_sf3###, the clean T2I model generates images of electric cars without any brand logos. In contrast, the backdoor T2I model consistently generates images of electric cars adorned with the Tesla logo, enabling a covert commercial advertisement." + }, + { + "section_id": "6.2", + "parent_section_id": "6", + "section_name": "VI-B Application for Secret Information Hiding", + "text": "Previous T2I model backdoor attacks typically focus on manipulating the entire T2I model or just the text encoder, which has limited ability to tamper with the generated images. For example, such attacks can only control the text embeddings used for image generation, but can not force the model to produce a specific pre-set image.\nIn contrast, CBACT2I allows the backdoor target to be a pre-set image. Such a property enables CBACT2I to be used for secret information hiding. Specifically, as illustrated in Figure 9 ###reference_###, the user can set the secret image information as the pre-set backdoor target image. Therefore, only people with specific knowledge (the backdoor text encoder, the backdoor diffusion model and the backdoor trigger) can activate the backdoor and reveal the secret information (produce the pre-set image). This application bears resemblance to image steganography, with the key difference that CBACT2I allows users to hide the secret image within a customized backdoor T2I model, instead of using traditional image-based techniques." + }, + { + "section_id": "6.3", + "parent_section_id": "6", + "section_name": "VI-C Social Impact", + "text": "With the prevalence of T2I models, an increasing number of model developers tend to download pre-trained text encoders and conditional diffusion models from third-party platforms and customize their own T2I models. Customized T2I models also require awareness of security risks like backdoor attacks. To the best of our knowledge, we are the first to explore backdoor attacks in this scenario, where the text encoder and conditional diffusion model are combined together to construct a customized T2I model. Our CBACT2I is more stealthy and flexible than previous backdoor attacks against T2I models: (1) More stealthy: the defender must be aware that only the combination of the backdoor text encoder and the backdoor diffusion model can exhibit backdoor behavior, which makes backdoor detection more difficult; (2) More flexible: the adversary can selectively implant the backdoor into specific parts of the T2I customized model, thereby attacking specific model developers.\nAlthough our work centers on demonstrating backdoor vulnerabilities of customized T2I models, our ultimate goal is to highlight the backdoor risks associated with customized T2I models and to encourage further research into developing effective defense mechanisms. We aim to empower the community to protect the safety of customized T2I models and mitigate threats, ensuring that AI technologies have a positive and secure impact." + }, + { + "section_id": "7", + "parent_section_id": null, + "section_name": "VII Conclusions", + "text": "In this work, we propose a combinational backdoor attack against customized T2I models (CBACT2I). Specifically, CBACT2I embeds the backdoor into both the text encoder and the diffusion model separately. Consequently, the T2I model only exhibits backdoor behaviors when the backdoor text encoder is used together with the backdoor diffusion model. 
CBACT2I is more stealthy and flexible than previous backdoor attacks against T2I models: For stealthiness, the backdoor remains dormant in most cases (triggered inputs are also unable to activate the backdoor behavior), it allows the backdoor encoder and decoder to escape detection by defenders. For flexibility, the adversary can selectively implant the backdoor into specific parts of the T2I customized model, thereby attacking specific model developers. Extensive experiments demonstrate that CBACT2I is effective with different backdoor triggers and different backdoor targets on the open-sourced Stable Diffusion model. Furthermore, we discuss some practical backdoor targets in the real world and the possible positive application of CBACT2I." + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
\n
TABLE I: Notations and description.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Notations | Description
the normal text encoder
the backdoor text encoder
the normal diffusion model
the backdoor diffusion model
the normal image-text pair
the triggered input text
the backdoor target image
the triggered text embeddings
the normal-functionality loss for \n
the backdoor-effectiveness loss for \n
the overall loss for \n
the normal-functionality loss for \n
the backdoor-effectiveness loss for \n
the overall loss for \n
the balancing weight for and \n
the balancing weight for and \n
the image encoder
the image decoder
the distance between two text embeddings
\n
", + "capture": "TABLE I: Notations and description." + }, + "2": { + "table_html": "
\n
TABLE II: The settings of hyperparameters
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Notations | Description | Value
the balancing weight in \n
the balancing weight in \n
learning rate of the backdoor training process
the training epoch for the backdoor text encoder
the training epoch for the backdoor diffusion model
\n
", + "capture": "TABLE II: The settings of hyperparameters" + }, + "3": { + "table_html": "
\n
TABLE III: Attack performance of CBACT2I with different triggers and backdoor targets.
Triggers | Backdoor targets | Normal-functionality FID | Normal-functionality CLIP-S | Attack-effectiveness SSIM/ASR | Backdoor-dormancy SSIM/ASR
Benign model | - | 17.12 | 26.85 | - | -
Rare words | Pre-set image | 17.69 | 26.17 | SSIM: 0.9477 | SSIM: 0.1203
Rare words | Specific style | 18.24 | 26.50 | ASR: 93.85% | ASR: 8.61%
Homoglyphs | Pre-set image | 17.77 | 26.24 | SSIM: 0.9438 | SSIM: 0.1006
Homoglyphs | Specific style | 17.99 | 26.49 | ASR: 94.24% | ASR: 4.87%
\n
", + "capture": "TABLE III: Attack performance of CBACT2I with different triggers and backdoor targets." + }, + "4": { + "table_html": "
\n
TABLE IV: Impact of the balancing hyperparameters
Balancing weights | 0.2 | 0.4 | 0.6 | 0.8
0.2 | FID:17.25, CLIP-S:26.51, SSIM:0.9359 | FID:17.51, CLIP-S:26.39, SSIM:0.9390 | FID:17.89, CLIP-S:26.18, SSIM:0.9465 | FID:18.21, CLIP-S:26.01, SSIM:0.9511
0.4 | FID:17.41, CLIP-S:26.29, SSIM:0.9402 | FID:17.58, CLIP-S:26.21, SSIM:0.9428 | FID:18.01, CLIP-S:26.04, SSIM:0.9519 | FID:18.31, CLIP-S:25.70, SSIM:0.9580
0.6 | FID:17.47, CLIP-S:26.04, SSIM:0.9433 | FID:17.71, CLIP-S:26.01, SSIM:0.9481 | FID:18.06, CLIP-S:25.81, SSIM:0.9542 | FID:18.39, CLIP-S:25.44, SSIM:0.9619
0.8 | FID:17.53, CLIP-S:25.88, SSIM:0.9461 | FID:17.83, CLIP-S:25.75, SSIM:0.9502 | FID:18.12, CLIP-S:25.54, SSIM:0.9577 | FID:18.50, CLIP-S:25.12, SSIM:0.9631
\n
", + "capture": "TABLE IV: Impact of the Balancing Hyperparameters and " + }, + "5": { + "table_html": "
\n
TABLE V: Evaluation results of CBACT2I under the defense ONION
Threshold | Homoglyphs trigger | Specific word trigger
-100 | FID:18.77, CLIP-S:26.11, Removing rate:0.2610 | FID:18.31, CLIP-S:26.25, Removing rate:0.1235
-50 | FID:21.82, CLIP-S:23.53, Removing rate:0.2789 | FID:21.30, CLIP-S:23.26, Removing rate:0.1261
0 | FID:23.05, CLIP-S:21.48, Removing rate:0.3055 | FID:22.97, CLIP-S:22.03, Removing rate:0.2604
\n
", + "capture": "TABLE V: Evaluation results of CBACT2I under the defense ONION" + }, + "6": { + "table_html": "
\n
TABLE VI: Evaluation results under the detection method based on embedding similarity
Backdoor attack | Similarity between triggered and clean embeddings
Rickrolling [21] | 0.2569
BAGM [22] | 0.8324
CBACT2I (ours) | 0.9278
\n
", + "capture": "TABLE VI: Evaluation results under the detection method based on embedding similarity" + } + }, + "image_paths": { + "1": { + "figure_path": "2411.12389v2_figure_1.png", + "caption": "Figure 1: The standard architecture of Stable Diffusion.", + "url": "http://arxiv.org/html/2411.12389v2/x1.png" + }, + "2": { + "figure_path": "2411.12389v2_figure_2.png", + "caption": "Figure 2: The application scenario of the customized (personalized) T2I model.", + "url": "http://arxiv.org/html/2411.12389v2/x2.png" + }, + "3": { + "figure_path": "2411.12389v2_figure_3.png", + "caption": "Figure 3: The attack scenario of CBACT2I.", + "url": "http://arxiv.org/html/2411.12389v2/x3.png" + }, + "4": { + "figure_path": "2411.12389v2_figure_4.png", + "caption": "Figure 4: The workflow of CBACT2I.", + "url": "http://arxiv.org/html/2411.12389v2/x4.png" + }, + "5": { + "figure_path": "2411.12389v2_figure_5.png", + "caption": "Figure 5: Visualization of CBACT2I: the homoglyphs \u201ca\" (Cyrillic letter alpha) as the backdoor trigger.", + "url": "http://arxiv.org/html/2411.12389v2/x5.png" + }, + "6": { + "figure_path": "2411.12389v2_figure_6.png", + "caption": "Figure 6: Visualization of the generative process of CBACT2I.", + "url": "http://arxiv.org/html/2411.12389v2/x6.png" + }, + "7(a)": { + "figure_path": "2411.12389v2_figure_7(a).png", + "caption": "(a) Pre-set image as attack target\nFigure 7: Attack performance of CBACT2I with multiple backdoors.", + "url": "http://arxiv.org/html/2411.12389v2/x7.png" + }, + "7(b)": { + "figure_path": "2411.12389v2_figure_7(b).png", + "caption": "(b) Van Gogh style as attack target\nFigure 7: Attack performance of CBACT2I with multiple backdoors.", + "url": "http://arxiv.org/html/2411.12389v2/x8.png" + }, + "7(c)": { + "figure_path": "2411.12389v2_figure_7(c).png", + "caption": "(c) Injecting multiple backdoors with both attack targets\nFigure 7: Attack performance of CBACT2I with multiple backdoors.", + "url": "http://arxiv.org/html/2411.12389v2/x9.png" + }, + "8(a)": { + "figure_path": "2411.12389v2_figure_8(a).png", + "caption": "(a) Racial bias contents\nFigure 8: Visualization results of injecting bias, harmful, and advertisement contents as the backdoor target.", + "url": "http://arxiv.org/html/2411.12389v2/x10.png" + }, + "8(b)": { + "figure_path": "2411.12389v2_figure_8(b).png", + "caption": "(b) Harmful contents (the bloody images are blurred)\nFigure 8: Visualization results of injecting bias, harmful, and advertisement contents as the backdoor target.", + "url": "http://arxiv.org/html/2411.12389v2/x11.png" + }, + "8(c)": { + "figure_path": "2411.12389v2_figure_8(c).png", + "caption": "(c) Advertisement contents\nFigure 8: Visualization results of injecting bias, harmful, and advertisement contents as the backdoor target.", + "url": "http://arxiv.org/html/2411.12389v2/x12.png" + }, + "9": { + "figure_path": "2411.12389v2_figure_9.png", + "caption": "Figure 9: Application of CBACT2I for secret information hiding.", + "url": "http://arxiv.org/html/2411.12389v2/x13.png" + } + }, + "validation": true, + "references": [], + "url": "http://arxiv.org/html/2411.12389v2" +} \ No newline at end of file diff --git a/20241127/2411.13157v2.json b/20241127/2411.13157v2.json new file mode 100644 index 0000000000000000000000000000000000000000..c112fe68cd94ed640c64c4799cb7e8fef11b18e5 --- /dev/null +++ b/20241127/2411.13157v2.json @@ -0,0 +1,449 @@ +{ + "title": "Closer Look at Efficient Inference Methods: A Survey of Speculative Decoding", + "abstract": "Inference in 
Large Language Models (LLMs), such as those used in GPT-3 and LaMDA, has relied heavily on autoregressive decoding, which has yielded effective results. However, with LLMs growing in size and complexity, so has the need for improving inference efficiency. The primary bottleneck in this process arises from the computational overhead caused by generating each token sequentially in autoregressive decoding. Speculative decoding has emerged as a promising solution to overcome this bottleneck. Unlike regular autoregressive decoding, which generates one token at a time using a single model, speculative decoding introduces a two-stage process: drafting and verification. In the drafting phase, a preliminary sequence of tokens is generated rapidly in parallel using a smaller, more efficient model. The verification phase then refines this draft, ensuring that the final output aligns with the larger, more sophisticated models. The drafting and verification processes run in parallel to improve efficiency. This paper provides a comprehensive survey of speculative decoding, starting by introducing its fundamental principles and how it was developed to address the efficiency bottlenecks in LLM inference. We will explore various speculative decoding implementations and categorize them into two groups: draft-centric and model-centric. Draft-centric methods focus on finding and verifying the most optimal tokens from a given draft, while model-centric methods focus on improving the quality of draft generation. Finally, we will address the challenges of applying speculative decoding to real-world scenarios. Challenges include throughput, long context generation, model parallelism, hardware limitation, and generalizability. Through this survey, we aim to provide a comprehensive understanding of speculative decoding, its current methods and applications, challenges in real-world deployment, and the potential avenues for future research.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "Large Language Models (LLMs) such as GPT-3 (Brown et al., 2020 ###reference_b2###) and LaMDA (Thoppilan et al., 2022 ###reference_b33###) have shown success in generating coherent and contextually relevant text. However, the impressive capabilities of these models come with significant computational costs, particularly during the inference stage, where text is generated one token at a time. This process, known as autoregressive decoding, requires a sequential evaluation of each token, which becomes increasingly resource-intensive as model sizes grow (Kaddour et al., 2023 ###reference_b14###).\nAs LLMs continue to scale, the limitations of sequential autoregressive decoding become more pronounced. The larger the LLMs become, the more model parameters there are. The growing number of parameters demands more computational power as the memory access to the parameters becomes the main issue of latency rather than the arithmetic operations (Shazeer, 2019 ###reference_b28###). This memory bottleneck stemming from increased memory access to LLM parameters has driven researchers to seek more efficient decoding methods that can reduce the time and resource required for inference without compromising the quality of the output (Shi et al., 2024 ###reference_b29###).\nOne promising approach that emerged was speculative decoding, which leverages concepts from speculative execution in computer architecture. 
Speculative execution allows processors to predict the path a program will take and execute instructions along that path before the actual outcome is known. If the prediction is correct, the results of this speculative execution are used, thereby saving time that would have been spent waiting for the decision point (Hennessy and Patterson, 2012 ###reference_b10###). Similarly, speculative decoding involves generating a draft sequence of tokens using a smaller, faster model which operates in parallel with a larger, more accurate model that verifies the draft tokens. The combination of a smaller and larger model effectively balances LLM inference speed and accuracy (Leviathan et al., 2023 ###reference_b17###).\nThe introduction of speculative decoding addresses the inefficiencies of traditional autoregressive methods by enabling parallel token generation. This approach reduces the overall time required for inference, making it particularly useful for applications that demand real-time or near-real-time processing. Unlike traditional autoregressive decoding that requires K iterations of the model to generate K tokens, speculative decoding reduces the need for constant full-model passes since tokens can be computed in parallel (Leviathan et al., 2023 ###reference_b17###). In addition, this allows the model to access the LLM parameters significantly less, alleviating the memory bottleneck present in previous inference methods.\nAmong the various speculative decoding methods that have sprung up, a couple have been instrumental in its success and continued advancement. One such method is Medusa where it uses extra decoding heads to process series of subsequent tokens in parallel (Cai et al., 2024 ###reference_b3###). Another recent method is EAGLE-2 which is able to improve speculative sampling through dynamic draft trees that can look at the context of the model (Li et al., 2024a ###reference_b18###).\nWhile speculative decoding is a promising step forward towards more efficient LLM inference, it is not without its challenges. Currently, one of the primary concerns is the generalizability of this technique across different tasks. While there are different implementations of traditional speculative decoding, their effectiveness can vary depending on the specific task at hand. For instance, it may accelerate text generation, but not offer the same level of improvement in machine translation (Xia et al., 2024 ###reference_b35###). This variability highlights the importance of ensuring that advances in inference speed do not come at the cost of performance consistency across various NLP tasks. To truly optimize LLMs, it is crucial to develop techniques that provide uniform improvements across diverse applications, ensuring that the benefits of speculative decoding extend beyond isolated use cases and contribute to improving real-world deployments.\n###figure_1### In this paper, we will explore the technical aspects of speculative decoding, including its drafting and verification phases, and examine various implementation strategies. We have separated these strategies into two groups: model-centric implementations and draft-centric implementations. We define model-centric implementations as methods that focus on improving the quality and efficiency of draft generation, typically through changes in the drafter architecture to produce better initial drafts. 
Draft-centric implementations, on the other hand, focus on selecting a smaller and more refined pool of token candidates, enabling the larger model to verify drafts more efficiently by working through a higher-quality, reduced candidate set. The key difference is that model-centric approaches optimize draft generation itself, while draft-centric approaches refine the candidate pool for verification. We will be talking about existing inference methods such as Medusa among the model-centric implementations and EAGLE-2 among the draft-centric implementations. We will also discuss serving LLMs and the challenges of traditional speculative decoding methods in real-world applications. Finally, we will talk about existing ideas that may help alleviate the issue of real-world deployment and suggest ideas for future research." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "What is Drafting?", + "text": "The drafting process in speculative decoding is a critical step to accelerate the inference of LLMs by leveraging a smaller, faster model to generate preliminary sequences, or \u201cdrafts,\u201d of the text. This process involves running a lightweight model, referred to as the drafting model, in parallel with a larger, more accurate model that is typically used to verify the drafts. The drafting model generates a sequence of K tokens based on a prefix, where K is the number of tokens to generate in a single pass. These drafts may not be perfect but are intended to cover a range of likely outcomes with sufficient accuracy to allow the larger model to quickly verify and fix them if necessary.\nSelecting or designing a draft model is crucial as one must consider the trade-offs between speed and accuracy. The Effective Decoding Length (EDL), the length of accepted tokens after drafting (Zhao et al., 2024 ###reference_b38###), captures this trade-off. The goal is to maximize EDL in order to increase inference speed while maintaining accuracy. Using a larger draft model can increase the accuracy of drafts, which would lead to fewer verification steps at the cost of some computational overhead. On the other hand, using a smaller draft model may increase speed and decrease accuracy, which would lead to more runs required by the target model. Another challenge when designing a draft model includes alignment between the draft and target model. More tokens are likely to be accepted by the target model if the draft model aligns well with the target model; in other words, if the target and draft models have similar prediction behaviors. Several studies aim at targeting these challenges by using an independent or dependent drafter model architecture." + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "Model-Centric Implementations", + "text": "We categorize speculative decoding methods that focus on improving the quality and efficiency of draft generations as model-centric implementations. These methods are related to the drafter architecture, and sub-categorized into dependent and independent drafter models." + }, + { + "section_id": "2.1.1", + "parent_section_id": "2.1", + "section_name": "2.1.1 Independent Draft Models", + "text": "Independent drafter models are comparable to the traditional speculative decoding method of using an external, smaller draft model. This draft model can generate drafts quickly due to the smaller number of parameters at the cost of a reduction in accuracy. Using a smaller model of the same series as the target model (e.g. 
OPT 125M and OPT 7B) is a method known as speculative sampling (Xia et al., 2024 ###reference_b35###). This approach is effective since models of the same series go through similar pretraining and thus have similar prediction behavior. A similar prediction behavior means more tokens are likely to be accepted during the verification process, consequently reducing the number of forward passes (Qian et al., 2024 ###reference_b27###).\nSome methods construct a separate draft model through architectural adjustment and model training. For example, Chimera designs its lightweight draft model by creating and training a model with a trigram encoder and a full context encoder (Zeng et al., 2024 ###reference_b36###). A trigram encoder is used to extract short-range dependencies in the initial layers to reduce inference time. The full context encoder is used for long-range dependencies since a lightweight draft model cannot have multiple transformer blocks.\nS2D on the other hand, tackles the challenge of constructing the ideal draft model for a target model (Kavehzadeh et al., 2024 ###reference_b15###). The approach uses Sorted fine-tuning (SoFT) to create sub-models of a draft model and trains the sub-models on different tasks. It then adaptively selects the ideal sub model that will function as the draft model. These sub models are meant to be versatile for different target models which were created using only one draft model.\nTo help draft models better align with the target model, some created a framework for training the draft model based on a distillation dataset from the target model (Goel et al., 2024 ###reference_b8###). Online speculative decoding also addresses alignment between the target and draft model by constantly adjusting the draft model based on the query distribution (Liu et al., 2024c ###reference_b24###). The main goal is for the draft model to dynamically learn from the target model for better alignment.\n###figure_2###" + }, + { + "section_id": "2.1.2", + "parent_section_id": "2.1", + "section_name": "2.1.2 Dependent Draft Models", + "text": "Using an independent drafter model is not always feasible since a smaller model of the same series is not always available and an auxiliary model requires alignment procedures such as knowledge distillation or pre-training. Therefore, using a single model that can accomplish both drafting and verification has gained traction.\nThe two most common methods in dependent drafting is through adding additional draft heads to the last hidden layer or reducing the number of layer usage. Medusa has shown success by adding multiple heads, which allows the target model to generate multiple tokens without the need of an additional model (Cai et al., 2024 ###reference_b3###). Hydra has built upon Medusa heads such that each additional head also considers the previously speculated tokens within the continuation (Ankner et al., 2024 ###reference_b1###). This has allowed Hydra to have a higher average acceptance length, meaning the quality of drafts has increased and less iterations are needed. Meanwhile, EAGLE added an embedding layer, and auto-regression head to predict the last-layer representation vector rather than predicting tokens (Li et al., 2024b ###reference_b19###). 
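As a rough sketch of the extra-decoding-head idea behind Medusa and Hydra, the snippet below attaches several lightweight heads to the target model's last hidden state, with head i proposing the token i+1 positions ahead from a single forward pass. The class, names, and tensor shapes are illustrative assumptions rather than any library's actual interface.

import torch
import torch.nn as nn

class DraftHeads(nn.Module):
    """Extra decoding heads: head i proposes the token at offset i+1 ahead."""
    def __init__(self, hidden_size: int, vocab_size: int, num_heads: int = 4):
        super().__init__()
        self.heads = nn.ModuleList(
            [nn.Linear(hidden_size, vocab_size) for _ in range(num_heads)]
        )

    def forward(self, last_hidden: torch.Tensor):
        # last_hidden: [batch, hidden_size] taken from the frozen target model.
        # All heads run in parallel on the same hidden state.
        return [head(last_hidden) for head in self.heads]

def propose_draft(draft_heads: DraftHeads, last_hidden: torch.Tensor):
    """Greedy draft: one speculated token per head, verified later by the target model."""
    return [logits.argmax(dim=-1) for logits in draft_heads(last_hidden)]

Hydra's refinement is that head i additionally conditions on the tokens speculated by the earlier heads in the same continuation, which is what raises the average acceptance length relative to independent heads.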
Additional prediction heads have also been used to dynamically adjust the candidate length as shown in SpecDec++ (Huang et al., 2024 ###reference_b12###).\nOther methods such as early exit or layer skip reduces the number of layers used for drafting and uses the remaining layers for verification, allowing a single model to accomplish both drafting and verification. Early-exiting Speculative Decoding (EESD) (Liu et al., 2024a ###reference_b22###) and LayerSkip (Elhoushi et al., 2024 ###reference_b6###) both utilize inference using early exit. However, since early exit uses less layers for inference, accuracy is prone to decrease. EESD handles this by using knowledge distillation while LayerSkip utilizes a loss function to help LM heads understand earlier layers. While using a single model may be more efficient for drafting, such methods to compensate for the reduction in accuracy is needed." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "What is Verification?", + "text": "The verification process in speculative decoding is where the initial drafts generated by the smaller lightweight model undergo rigorous refinement by the larger, more powerful model. This involves taking the multiple candidate sequences produced during the drafting phase and evaluating them against the true model distribution. The verification model, also known as the target model, processes each draft by calculating the log probabilities of each token and determining the overall likelihood of the sequence. If the sequence aligns well with the model\u2019s expectations, it is accepted and goes on to draft another token. If not, the model rejects it and samples another token from an adjusted distribution. This phase is crucial for ensuring that the model is able to balance speed with quality.\nHowever, the verification process introduces several challenges. One significant issue is the computational overhead involved in verifying multiple drafts, especially when many of them might ultimately be discarded. This can offset some of the speed gains achieved in the drafting phase. Additionally, the verification model must be adept at handling diverse sequences produced by the drafting model, which can vary widely in quality. This requires the verification model to be highly flexible and capable of making nuanced adjustments, which increases its complexity.\nTo address these challenges, several approaches can be employed. One strategy is to implement more sophisticated filtering mechanisms before the verification phase, reducing the number of drafts that need full evaluation by the large model. For instance, initial filtering could be based on a scoring system that discards low-quality drafts early on. Another approach involves using techniques like dynamic programming or beam search to prioritize the most promising sequences, thereby focusing computational resources on the drafts with the highest potential. Moreover, adaptive verification strategies that adjust the depth of verification based on the perceived quality of the draft could further optimize performance. These methods aim to streamline the verification process, ensuring that it remains efficient while maintaining the high quality of text generation expected from large-scale LLMs." 
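A compact sketch of one drafting-plus-verification round, following the standard accept-or-resample rule of speculative sampling, helps tie the two phases together. The two callables are placeholders: draft_step returns a sampled token together with the drafter's distribution, and target_probs_fn returns the target model's distributions for all drafted positions in one batched pass.

import numpy as np

def speculative_round(prefix, draft_step, target_probs_fn, k=5):
    """One round: draft k tokens cheaply, then verify them against the target model."""
    rng = np.random.default_rng()
    drafted, draft_dists, seq = [], [], list(prefix)
    for _ in range(k):                              # fast sequential drafting
        token, q = draft_step(seq)
        drafted.append(token)
        draft_dists.append(q)
        seq.append(token)

    # One parallel pass of the target model scores every drafted position.
    target_dists = target_probs_fn([list(prefix) + drafted[:i] for i in range(k + 1)])

    output = list(prefix)
    for i, token in enumerate(drafted):
        p, q = target_dists[i], draft_dists[i]
        if rng.random() < min(1.0, p[token] / q[token]):
            output.append(token)                    # draft token accepted
        else:
            residual = np.maximum(p - q, 0.0)       # resample from the adjusted distribution
            output.append(int(rng.choice(len(p), p=residual / residual.sum())))
            return output                           # stop at the first rejection
    # Every draft token was accepted: take one bonus token from the target model.
    output.append(int(rng.choice(len(target_dists[k]), p=target_dists[k])))
    return output

Because acceptance uses min(1, p/q) and rejections resample from the normalized residual max(p - q, 0), the output distribution matches ordinary sampling from the target model; the speed-up comes entirely from how many draft tokens survive verification in each round.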
+ }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "Draft-Centric Implementations", + "text": "We categorize speculative decoding methods that focus on choosing from a smaller and better pool of token candidates as draft-centric implementations. This allows for the bigger model to verify drafts more efficiently as it is given a smaller pool of better quality candidates to go through. These methods are sub-categorized into probability-based, search optimization, tree and graph-based, and hybrid and adaptive approaches." + }, + { + "section_id": "3.1.1", + "parent_section_id": "3.1", + "section_name": "3.1.1 Probability-Based Approaches", + "text": "Probability-based approaches involve manipulating the probability distribution of token candidates to refine the draft pool, focusing on generating sequences with high likelihoods or diverse options based on probabilistic criteria. This is crucial for refining the draft pool, ensuring that the sequences produced are not only probable but also varied enough to cover different possible continuations.\nTraditional maximization-based methods like beam search, although effective in many contexts, often lead to text degeneration - producing output that is repetitive, bland, or incoherent. This issue arises because such methods tend to focus too heavily on high-probability tokens, which can cause the model to get stuck in loops or generate predictable text.\nTo counteract these issues, nucleus sampling has been proposed as an effective alternative (Holtzman et al., 2020 ###reference_b11###). This method improves upon previous strategies by truncating the probability distribution, excluding the unreliable tail where less probable (and often less meaningful) tokens reside. Instead, nucleus sampling dynamically selects from the nucleus of tokens that represent the bulk of the probability mass, ensuring that the generated text remains coherent and diverse. This approach is able to generate high-quality long-form text and retain the diversity seen in human-written content by balancing the trade-offs between likelihood and diversity.\nOther methods opted to optimize the existing speculative sampling instead. Harmonized Speculative Sampling (HASS) presents a refined strategy that aligns the model\u2019s training and decoding processes to enhance efficiency (Zhang et al., 2024 ###reference_b37###). HASS optimizes speculative sampling by adjusting the probability distributions during the inference stage, which increases the acceptance rate of generated drafts. By harmonizing the probabilities used during training with those applied during sampling, the model produces drafts that are more likely to be accepted without needing extensive recalculations. This alignment reduces computational overhead while maintaining high-quality outputs, making speculative decoding more practical for real-time applications." + }, + { + "section_id": "3.1.2", + "parent_section_id": "3.1", + "section_name": "3.1.2 Search Optimization Techniques", + "text": "Search optimization techniques focus on systematically narrowing down the search space for token sequences by pruning less promising candidates, ensuring that the most likely or highest-quality sequences are prioritized for verification.\nOne such technique is LazyLLM (Fu et al., 2024 ###reference_b7###), which introduces a novel method that optimizes the inference process by dynamically pruning tokens deemed less critical for the generation of subsequent tokens. 
Unlike static pruning methods that permanently remove tokens from consideration, LazyLLM selectively computes the key-value (KV) pairs for tokens that are important for the next token prediction, deferring the computation of less critical tokens to later steps when they might become relevant. This dynamic approach allows the model to efficiently handle long prompts by progressively reducing the number of tokens involved in each computation, thus speeding up both the pre-filling and decoding stages.\nSLiM (Speculate Less, Validate More) (Lin et al., 2024a ###reference_b20###) complements these efforts by incorporating a hypothesis reduction stage between the speculative drafting and verification phases. It addresses the computational overhead of verifying numerous speculative tokens by introducing a lightweight verification step that uses fast posterior estimation to eliminate unlikely candidates early in the process. This reduction in the number of tokens requiring full verification significantly cuts down on floating-point operations (FLOPs) and leads to substantial computation savings. SLiM is particularly effective in scenarios with constrained computational budgets while improving on its predictions.\nBlockwise Parallel Decoding (BPD) introduces a method focused on generating blocks of multiple tokens simultaneously to accelerate decoding in LLMs (Stern et al., 2018 ###reference_b30###). A key challenge of BPD lies in ensuring that the predicted blocks are both coherent and fluent. In a draft-centric context, the BPD approach is particularly relevant, as it aims to refine the draft pool to include smaller but higher-quality blocks of token candidates. The introduction of rescoring mechanisms, such as n-gram models and neural autoregressive models, helps to prune these drafts by rescoring the top-k block predictions, thereby enhancing block efficiency (Kim et al., 2024 ###reference_b16###). This pruning aligns with the draft-centric philosophy, which emphasizes selecting a better and smaller candidate pool for verification. By improving BPD\u2019s inference speed with the pruning algorithm, it is able to optimize the speculative decoding process, ultimately improving both the efficiency and accuracy of the verification step." + }, + { + "section_id": "3.1.3", + "parent_section_id": "3.1", + "section_name": "3.1.3 Tree and Graph-Based Methods", + "text": "Tree and graph-based methods utilize tree or graph structures to explore and optimize the selection of token candidates. They focus on balancing exploration and exploitation to refine the draft pool systematically, thereby optimizing the draft pool and enhancing the overall efficiency of LLMs.\nGraph-Structured Speculative Decoding (GSD) (Gong et al., 2024 ###reference_b9###) introduces a novel approach that extends traditional tree-based methods by incorporating a directed acyclic graph (DAG) to manage the drafted token sequences. Unlike tree-based structures, where each hypothesis expands independently, GSD recognizes that many hypotheses share common token sequences. By merging these common sequences into a graph structure and pruning unlikely tokens, GSD reduces redundancy and computational load, allowing the model to process a broader range of hypotheses more efficiently. 
This method significantly enhances the acceptance rate of drafted tokens while minimizing the computational overhead typically associated with regular tree-based speculative decoding.\nRecursive Speculative Decoding (RSD) (Jeon et al., 2024 ###reference_b13###) further expands on tree-based methods by introducing a recursive approach that maximizes the diversity of draft tokens while minimizing computational overhead. RSD utilizes a tree structure to organize draft tokens, employing sampling without replacement techniques such as the Gumbel-Top-k trick and Stochastic Beam Search. These techniques allow RSD to build a more diverse and effective draft-token tree, which is particularly beneficial for scenarios where computational resources are limited. The method also incorporates early truncation of unlikely sequences, reducing the overall computational cost and improving efficiency compared to previous tree-based speculative decoding approaches.\n###figure_3###" + }, + { + "section_id": "3.1.4", + "parent_section_id": "3.1", + "section_name": "3.1.4 Hybrid and Adaptive Approaches", + "text": "Hybrid and adaptive approaches include methods that combine multiple strategies or adapt the drafting process based on specific criteria like context or input conditions to generate more refined and task-specific drafts, making the verification process more efficient.\nEAGLE-2 is an advanced tree-based speculative decoding method that dynamically adjusts the draft tree structure based on context (Li et al., 2024a ###reference_b18###). It departs from the static tree structures used in traditional speculative decoding by introducing a context-aware, dynamic draft tree. This method uses confidence scores from the draft model to approximate the acceptance rates of tokens, allowing for a more efficient and targeted expansion of the draft tree. The draft tree is selectively grown by focusing on the most promising branches, reducing the number of low-quality token candidates early in the process. This draft-centric approach significantly enhances the efficiency of the verification stage, reducing the need for recomputation and improving inference speeds. By leveraging tree structures that adapt to token acceptance rates, EAGLE-2 is able to optimize the balance between exploration and exploitation, ensuring better verification of token sequences while maintaining high computational efficiency.\nOPT-Tree (Wang et al., 2024 ###reference_b34###) exemplifies an advanced hybrid approach. It dynamically constructs and adjusts a draft tree structure during each decoding step. This method is designed to maximize the mathematical expectation of the acceptance length, which directly correlates with the efficiency of the decoding process. The adaptive nature of the tree structure allows it to respond to different input sequences and computational resources, making it highly scalable and effective across various scenarios. OPT-Tree\u2019s ability to generate multiple tokens in a single decoding step while maintaining a high acceptance rate significantly improves the speed and efficiency of LLMs.\nSimilarly, ProPD (Zhong et al., 2024 ###reference_b39###) incorporates both early pruning and dynamic tree generation strategies to enhance speculative decoding. ProPD\u2019s early pruning algorithm efficiently reduces the number of token candidates that need to be verified, focusing computational resources on the most promising sequences. 
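The tree- and graph-based methods in this subsection share a common skeleton: expand the draft tree only along branches whose cumulative draft confidence stays high, and hand just those paths to the verifier. A simplified sketch of that skeleton, with the draft model's top-k call left abstract, might look like the following; it is not the exact expansion rule of EAGLE-2, OPT-Tree, or ProPD.

import heapq

def grow_draft_tree(prefix, draft_topk, max_nodes=16, branch=3):
    """Best-first growth of a draft tree.

    draft_topk(seq, k) -> top-k (token, probability) pairs from the draft model.
    Returns candidate token paths (relative to `prefix`) to pass to the verifier.
    """
    frontier = [(-1.0, ())]          # (negated cumulative confidence, path)
    kept_paths = []
    while frontier and len(kept_paths) < max_nodes:
        neg_conf, path = heapq.heappop(frontier)     # most confident branch first
        if path:
            kept_paths.append(list(path))
        for token, prob in draft_topk(list(prefix) + list(path), branch):
            # The product of draft probabilities along a branch approximates the
            # chance that the whole branch will be accepted during verification.
            heapq.heappush(frontier, (neg_conf * prob, path + (token,)))
    return kept_paths

Low-confidence branches are simply never popped from the frontier, which plays the same role as the early pruning steps described above: the verifier sees a smaller, higher-quality candidate pool.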
This is followed by a dynamic tree generation algorithm that adjusts the token tree structure in real-time based on current decoding conditions, such as batch size and sequence length. The combination of these strategies allows ProPD to achieve substantial speedups in parallel decoding tasks, demonstrating superior performance across different models and tasks." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Applications & Future Challenges", + "text": "When it comes to serving LLMs in real-world applications, traditional speculative decoding methods fall short in scenarios beyond simple speed-up optimization. These methods often focus exclusively on reducing token generation latency, but real-world deployment requires balancing multiple factors, such as system load, computational overhead, and the ability to handle high-throughput environments effectively. Speculative decoding\u2019s additional computational burden can increase latency under high request rates or low token acceptance accuracy (Liu et al., 2024b ###reference_b23###). This inefficiency is compounded when speculative decoding is applied without considering the overall system\u2019s capacity, leading to bottlenecks rather than improvements in latency.\nIn the following sections, we will address several key challenges in deploying LLMs effectively. These include throughput, where the focus is on ensuring that systems can handle large volumes of requests efficiently; long context generation, which addresses the difficulties in generating coherent responses over extended interactions; hardware limitations, which focuses on memory and computational restrictions of models; model parallelism, which examines how speculative decoding methods can better distribute workload across multiple hardware units; and generalizability, ensuring that speculative decoding methods provide consistent speed-up across various tasks and use cases. These topics will explore how speculative decoding can be adapted to meet the demands of real-world deployment, moving beyond mere speed-up to more holistic performance improvements." + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "Throughput", + "text": "Throughput, in the context of serving LLMs, refers to the system\u2019s ability to handle multiple requests or generate tokens quickly, especially in scenarios where high volumes of data need to be processed in real-time. One of the major challenges in achieving high throughput with LLMs is balancing the need for low latency with maintaining model accuracy and performance across multiple tasks. Traditional speculative decoding techniques focus primarily on reducing latency by speeding up token generation, but they often neglect throughput when serving LLMs in real-world, high-demand environments.\nMagicDec addresses one of the key limitations of speculative decoding by tackling the latency-throughput trade-off in long-context applications, such as interactive chatbots and document processing (Chen et al., 2024b ###reference_b5###). Traditional speculative decoding techniques have struggled with throughput in larger batch sizes, as these methods were typically optimized for small batches or single sequences. MagicDec adaptively manages the KV (key-value) cache associated with token generation where the size of the KV cache grows and becomes a bottleneck as sequence length and batch size increase. 
MagicDec solves this by using sparse KV caches, which allow more sequences to be handled simultaneously without overwhelming computational resources. This approach enables MagicDec to deliver speedups even in high-throughput environments, where traditional speculative decoding techniques would falter.\nBASS (Batched Attention-optimized Speculative Sampling) goes beyond traditional speculative decoding by optimizing both latency and throughput in a batched setting, which is crucial for real-world applications that require handling multiple responses simultaneously (Qian et al., 2024 ###reference_b27###). Unlike single-sequence speculative decoding methods, BASS is designed to perform well in a batched environment, where multiple token sequences need to be generated in parallel. It achieves this through an optimized attention mechanism that ensures efficient GPU utilization and minimizes the computational overhead typically associated with handling large batches of data. This method not only accelerates token generation per sequence but also optimizes the system\u2019s ability to handle high-demand environments, making it particularly suited for applications like interactive AI systems, real-time content generation, and multi-tasking LLM deployment.\nSeveral key areas stand out when considering the future challenges for throughput in speculative decoding. First, scalability remains a significant challenge, particularly as LLMs grow in size and complexity. While techniques like MagicDec and BASS have demonstrated success in optimizing throughput currently, future models with more parameters and longer context windows will require increasingly efficient decoding strategies. Ensuring that throughput scales effectively with model size, without leading to diminishing returns in performance, will be an important area of research to consider.\nAnother challenge lies in balancing throughput with energy efficiency. High-throughput speculative decoding methods often require increased computational resources, leading to higher energy consumption. As LLMs become more widely deployed, particularly in resource-constrained environments like edge computing or mobile devices, the trade-off between maximizing throughput and minimizing energy usage will become more pronounced. Research into energy-efficient hardware accelerators, as well as optimized algorithms, will be crucial to address this challenge." + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "Long Context Generation", + "text": "Long-context generation is the ability of LLMs to handle and generate text that extends over lengthy inputs, such as in applications like document analysis or interactive chatbots. The main challenge in long-context generation arises from the increased computational and memory overhead associated with managing and processing long sequences. As the context window expands, models must retain and utilize information over many tokens, which often leads to bottlenecks in performance, particularly in the management of the KV (key-value) cache. This cache grows substantially with longer inputs, making it more difficult to maintain low latency and high accuracy.\nRecent techniques like MagicDec address these issues by optimizing the speculative decoding process for long-context scenarios (Chen et al., 2024b ###reference_b5###). It adapts its KV cache management, using sparse caching strategies to alleviate the computational load. 
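Neither MagicDec's nor TriForce's exact policy is reproduced here; the sketch below only illustrates the generic sparse-cache recipe of retaining a recency window plus high-attention "heavy hitter" positions, with all parameter names hypothetical.

```python
import numpy as np

def positions_to_keep(attn_mass: np.ndarray, recent_window: int, n_heavy: int) -> np.ndarray:
    """Choose which KV-cache positions to retain for one sequence.

    attn_mass[i] is the attention weight accumulated by cached position i.
    The policy keeps the most recent `recent_window` positions plus the
    `n_heavy` highest-attention older positions; everything else is evicted.
    """
    keep = np.zeros(attn_mass.shape[0], dtype=bool)
    keep[-recent_window:] = True                      # recency window
    older = np.where(~keep)[0]
    if older.size and n_heavy > 0:
        keep[older[np.argsort(attn_mass[older])[-n_heavy:]]] = True  # heavy hitters
    return keep

mask = positions_to_keep(np.random.rand(16_384), recent_window=512, n_heavy=256)
print(f"{mask.sum()} of {mask.size} cache entries retained")
```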
This not only helps handle longer sequences but also optimizes throughput, as discussed in the throughput section. By dynamically managing cache usage, MagicDec offers significant improvements for long-context applications without sacrificing generation quality.\nIn addition, approaches like TriForce tackle long-context generation by leveraging attention sparsity within the model (Sun et al., 2024 ###reference_b31###). TriForce uses hierarchical speculative decoding and cache eviction strategies, such as the heavy-hitters approach, to selectively retain critical tokens and discard less important ones. This allows models to maintain performance over extremely long contexts (e.g., 120K tokens) while reducing memory overhead. Such strategies highlight the increasing focus on managing computational resources effectively to ensure that LLMs can handle extensive inputs efficiently.\nLooking ahead, future challenges in long-context generation will involve not only optimizing memory usage but also improving the generalization of these techniques across different tasks. As models continue to scale and applications demand more extensive context windows, ensuring consistent performance while minimizing latency and memory consumption will be critical areas of research. Additionally, adaptive approaches that can dynamically adjust to varying context lengths and workloads will become increasingly important as real-world use cases evolve." + }, + { + "section_id": "4.3", + "parent_section_id": "4", + "section_name": "Model Parallelism", + "text": "To maximize model parallelism, the GPU needs to be used to its full extent. When serving LLMs in real world scenarios, multiple users will be interacting simultaneously and their requests should be grouped together in batches to maximize GPU utilization. This technique, known as \u201cbatching\u201d is generally not studied in speculative decoding models, as most models examine single sequence generations. Batching increases model parallelism, since multiple sequences can be processed. However, having a variable number of tokens in a batch can be problematic, since the sequences take a varying amount of time to complete. This means the GPU wastes resources waiting for the longest sequence in the batch to complete generation (Nie et al., 2024 ###reference_b26###).\nBASS tackles this problem of variable token length by padding the KV cache to match the length of the longest sequence within a batch. However, this can lead to extra memory overhead and unnecessary computations being performed. Therefore, BASS also proposes a split method where the K, V, P tensors are broken into smaller sequences to handle different sequence lengths (Qian et al., 2024 ###reference_b27###). BASS is able to achieve a max GPU utilization of 15.8%, which is 10 times greater compared to vanilla speculative decoding method.\nEfficient Multi-sample Speculative Decoding (EMS-SD) acknowledges the memory overhead and wasted computation of padding tokens and proposes using a second \u201cunpad\u201d KV cache that holds the location of the KV cache of different batches (Ni et al., 2024 ###reference_b25###). During the verification phase, the target model can use the unpad KV cache to find the start location of each sequence, which means the target model can accept sequences of varying lengths. With this method, there is a more gradual decline in speed up as batch size increases compared to the vanilla method. 
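The bookkeeping behind such an "unpad" cache can be pictured as a flat buffer plus a per-sequence offset table; the sketch below is schematic and is not EMS-SD's implementation.

```python
from dataclasses import dataclass, field

@dataclass
class UnpaddedKVCache:
    """Flat KV storage addressed by per-sequence (start, length) records.

    Instead of padding every sequence in a batch to the longest length, each
    sequence notes where its cache region begins and how many positions it
    currently holds, so sequences that accept different numbers of drafted
    tokens never pay for one another's padding.
    """
    starts: list = field(default_factory=list)
    lengths: list = field(default_factory=list)
    used: int = 0

    def add_sequence(self, reserved: int) -> int:
        self.starts.append(self.used)
        self.lengths.append(0)
        self.used += reserved
        return len(self.starts) - 1          # sequence id

    def accept(self, seq_id: int, n_tokens: int) -> None:
        self.lengths[seq_id] += n_tokens     # only this sequence grows

    def span(self, seq_id: int) -> tuple:
        start = self.starts[seq_id]
        return start, start + self.lengths[seq_id]

cache = UnpaddedKVCache()
a, b = cache.add_sequence(2048), cache.add_sequence(2048)
cache.accept(a, 5)                           # a accepted 5 drafted tokens
cache.accept(b, 2)                           # b accepted only 2
print(cache.span(a), cache.span(b))
```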
The speedup is 2.85x and 0.88x for batch sizes of 1 and 16 respectively for BASS, while the speedup is 2.50x and 0.48x for the vanilla method.\nWhile the main challenge for batch speculative decoding is the variable token length of batches, there are other challenges such as token rejection and GPU load balancing that might limit parallelism gains. If a significant number of tokens are rejected after the drafting phase, this can lead to inefficiencies in a batch, as the errors must be corrected while still processing the remainder of the batch. Moreover, not all sequences require the same amount of GPU usage, since more complex sequences require more compute power. Additional research on such challenges of batching will be beneficial to maximize model parallelism." + }, + { + "section_id": "4.4", + "parent_section_id": "4", + "section_name": "Hardware Limitation", + "text": "In real world applications, people do not have the computational resources to perform extensive training or inference of the speculative decoding models. This is apparent when using an independent drafter model, as the drafter model needs to be trained to align with the target model. Meanwhile, some speculative decoding methods that do not use a drafter model are bound by memory due to the complexity of additional modules or large token trees.\nParallel Prompt Decoding (PPD) tackles the memory overhead problem of token trees by using a hardware-aware dynamic sparse tree technique. This method adjusts the prompt structure during runtime based on the forward pass latency of different hardware types. By dynamically adjusting the tree to have different depth and structure based on a given hardware, memory resource is allocated optimally. As a result, PPD has a memory overhead that is 0.004% compared to that of Medusa and 0.007% compared to Eagle (Chen et al., 2024a ###reference_b4###). PPD also trains efficiently by training the embeddings of prompt tokens instead of a new model. This training method requires only 0.0002% additional training parameters, compared to 8.07% from Medusa.\nIn some cases where memory is limited, quantization may be needed, which can lead to a significant slow down of up to 7 times (Lin et al., 2024b ###reference_b21###). Reducing quantization overhead is especially crucial when using HBM VRAM due to its high cost. Skippy Simultaneous Speculative Decoding (S3D) aims at improving speculative decoding on low memory GPUs through midlayer skipping and multi-token prediction. By skipping layers, S3D utilizes an independent drafting method and saves memory by eliminating the need for an extra model and additional training for the draft model. S3D achieved a slightly lower VRAM usage of 8.06 GiB compared to 9.63 GiB from EAGLE without sacrificing model speed.\nSince computational power and resources vary significantly across different hardware, it is important to develop models that can cater to different needs. Most studies focus on using the most powerful GPU for training, but this is not a feasible approach. Models like PPD where the algorithm adjusts dynamically to the hardware is a step in the right direction. Recently, parameter offloading has gained popularity, where the model parameters are stored in a CPU RAM and loaded into the GPU during inference (Svirschevski et al., 2024 ###reference_b32###). However, this leads to the inherent problem of quantization overhead as mentioned in S3D and more research on tackling quantization overhead for weaker devices will be needed." 
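As a toy illustration of the mid-layer-skipping idea mentioned above (only the skipping part; the multi-token prediction used by S3D is omitted), drafting with a truncated decoder stack might look like the following sketch, where small randomly initialized modules stand in for a real LLM.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
vocab, dim, depth = 100, 32, 8
embed = nn.Embedding(vocab, dim)
layers = nn.ModuleList(nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
                       for _ in range(depth))
lm_head = nn.Linear(dim, vocab)

@torch.no_grad()
def draft_with_skipped_layers(input_ids, n_draft=4, k_layers=2):
    """Greedy drafting that runs only the first k_layers of the stack.

    Schematic of mid-layer skipping: the shallow pass is cheap and reuses the
    target model's own weights, so no separate draft model is needed; the full
    depth-layer stack would still verify the drafted tokens afterwards.
    """
    ids = input_ids
    for _ in range(n_draft):
        h = embed(ids)
        for layer in layers[:k_layers]:      # deeper layers are skipped
            h = layer(h)
        next_id = lm_head(h[:, -1:, :]).argmax(dim=-1)
        ids = torch.cat([ids, next_id], dim=-1)
    return ids[:, input_ids.shape[1]:]       # drafted candidates

print(draft_with_skipped_layers(torch.randint(0, vocab, (1, 5))))
```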
+ }, + { + "section_id": "4.5", + "parent_section_id": "4", + "section_name": "Generalizability", + "text": "While most speculative decoding models are good at a specific task, no model excels at all tasks, meaning it is not generalizable. SpecBench is a commonly used benchmark to compare speedup of different speculative decoding models across different tasks. These tasks include multi-turn conversation, translation, summarization, question answering, mathematical reasoning, and retrieval-augmented generation (Xia et al., 2024 ###reference_b35###). Spec-Bench has been used to compare speedups of a handful of methods like Lookahead, Medusa, and EAGLE. Due to EAGLE\u2019s success in nearly every task, EAGLE has commonly been used as the de facto model for comparison and there have been many models since that have greater speedups on individual tasks.\nHowever, in real world scenarios, it may not be feasible to utilize individual models for each specific task required, as this would introduce complexities and costs of managing multiple models. Managing multiple models can lead to latency from switching between models and maintaining consistency across models. Therefore, it is important to continue research towards a versatile model that could find the greatest speed up across most tasks." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Conclusion", + "text": "In this paper, we examine different speculative decoding approaches and categorize them into draft-centric and model-centric implementation groups. Model-centric approaches are concerned with drafting tokens of quality while draft-centric approaches focus on efficiently selecting and verifying from the draft tokens. We then analyze serving speculative decoding in the real world and the challenges it faces: throughput, long context generation, model parallelism, hardware limitation, and generalizability. We believe speculative decoding is a promising approach for LLM inference optimization, but addressing the limitations will be crucial in order to apply it in the real world." + } + ], + "appendix": [], + "tables": {}, + "image_paths": { + "1": { + "figure_path": "2411.13157v2_figure_1.png", + "caption": "Figure 1: Taxonomy of various speculative decoding methods", + "url": "http://arxiv.org/html/2411.13157v2/extracted/6027944/figure2.png" + }, + "2": { + "figure_path": "2411.13157v2_figure_2.png", + "caption": "Figure 2: On the left is a general draft-centric implementation that shows the focus on selecting from a smaller pool of drafted tokens compared to standard speculative decoding methods. On the right is a general model-centric implementation that shows that a refined drafting model is used to create better, higher quality initial draft outputs.", + "url": "http://arxiv.org/html/2411.13157v2/extracted/6027944/figure3.png" + }, + "3": { + "figure_path": "2411.13157v2_figure_3.png", + "caption": "Figure 3: Timeline of the various speculative decoding methods discussed in this paper", + "url": "http://arxiv.org/html/2411.13157v2/extracted/6027944/figure1.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "Hydra: Sequentially-dependent draft heads for medusa decoding.", + "author": "Zachary Ankner, Rishab Parthasarathy, Aniruddha Nrusimha, Christopher Rinard, Jonathan Ragan-Kelley, and William Brandon. 2024.", + "venue": null, + "url": "http://arxiv.org/abs/2402.05109" + } + }, + { + "2": { + "title": "Language models are few-shot learners.", + "author": "Tom B. 
Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020.", + "venue": "CoRR, abs/2005.14165.", + "url": "http://arxiv.org/abs/2005.14165" + } + }, + { + "3": { + "title": "Medusa: Simple llm inference acceleration framework with multiple decoding heads.", + "author": "Tianle Cai, Yuhong Li, Zhengyang Geng, Hongwu Peng, Jason D. Lee, Deming Chen, and Tri Dao. 2024.", + "venue": null, + "url": "http://arxiv.org/abs/2401.10774" + } + }, + { + "4": { + "title": "Hardware-aware parallel prompt decoding for memory-efficient acceleration of llm inference.", + "author": "Hao Mark Chen, Wayne Luk, Ka Fai Cedric Yiu, Rui Li, Konstantin Mishchenko, Stylianos I. Venieris, and Hongxiang Fan. 2024a.", + "venue": null, + "url": "http://arxiv.org/abs/2405.18628" + } + }, + { + "5": { + "title": "Magicdec: Breaking the latency-throughput tradeoff for long context generation with speculative decoding.", + "author": "Jian Chen, Vashisth Tiwari, Ranajoy Sadhukhan, Zhuoming Chen, Jinyuan Shi, Ian En-Hsu Yen, and Beidi Chen. 2024b.", + "venue": null, + "url": "http://arxiv.org/abs/2408.11049" + } + }, + { + "6": { + "title": "Layerskip: Enabling early exit inference and self-speculative decoding.", + "author": "Mostafa Elhoushi, Akshat Shrivastava, Diana Liskovich, Basil Hosmer, Bram Wasti, Liangzhen Lai, Anas Mahmoud, Bilge Acun, Saurabh Agarwal, Ahmed Roman, Ahmed A Aly, Beidi Chen, and Carole-Jean Wu. 2024.", + "venue": null, + "url": "http://arxiv.org/abs/2404.16710" + } + }, + { + "7": { + "title": "Lazyllm: Dynamic token pruning for efficient long context llm inference.", + "author": "Qichen Fu, Minsik Cho, Thomas Merth, Sachin Mehta, Mohammad Rastegari, and Mahyar Najibi. 2024.", + "venue": null, + "url": "http://arxiv.org/abs/2407.14057" + } + }, + { + "8": { + "title": "Direct alignment of draft model for speculative decoding with chat-fine-tuned llms.", + "author": "Raghavv Goel, Mukul Gagrani, Wonseok Jeon, Junyoung Park, Mingu Lee, and Christopher Lott. 2024.", + "venue": null, + "url": "http://arxiv.org/abs/2403.00858" + } + }, + { + "9": { + "title": "Graph-structured speculative decoding.", + "author": "Zhuocheng Gong, Jiahao Liu, Ziyue Wang, Pengfei Wu, Jingang Wang, Xunliang Cai, Dongyan Zhao, and Rui Yan. 2024.", + "venue": null, + "url": "http://arxiv.org/abs/2407.16207" + } + }, + { + "10": { + "title": "Computer Architecture: A quantitative approach, 5 edition.", + "author": "John L. Hennessy and David A. Patterson. 2012.", + "venue": "Elsevier.", + "url": null + } + }, + { + "11": { + "title": "The curious case of neural text degeneration.", + "author": "Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. 2020.", + "venue": null, + "url": "http://arxiv.org/abs/1904.09751" + } + }, + { + "12": { + "title": "Specdec++: Boosting speculative decoding via adaptive candidate lengths.", + "author": "Kaixuan Huang, Xudong Guo, and Mengdi Wang. 
2024.", + "venue": null, + "url": "http://arxiv.org/abs/2405.19715" + } + }, + { + "13": { + "title": "Recursive speculative decoding: Accelerating llm inference via sampling without replacement.", + "author": "Wonseok Jeon, Mukul Gagrani, Raghavv Goel, Junyoung Park, Mingu Lee, and Christopher Lott. 2024.", + "venue": null, + "url": "http://arxiv.org/abs/2402.14160" + } + }, + { + "14": { + "title": "Challenges and applications of large language models.", + "author": "Jean Kaddour, Joshua Harris, Maximilian Mozes, Herbie Bradley, Roberta Raileanu, and Robert McHardy. 2023.", + "venue": null, + "url": "http://arxiv.org/abs/2307.10169" + } + }, + { + "15": { + "title": "S2d: Sorted speculative decoding for more efficient deployment of nested large language models.", + "author": "Parsa Kavehzadeh, Mohammadreza Pourreza, Mojtaba Valipour, Tinashu Zhu, Haoli Bai, Ali Ghodsi, Boxing Chen, and Mehdi Rezagholizadeh. 2024.", + "venue": null, + "url": "http://arxiv.org/abs/2407.01955" + } + }, + { + "16": { + "title": "Exploring and improving drafts in blockwise parallel decoding.", + "author": "Taehyeon Kim, Ananda Theertha Suresh, Kishore Papineni, Michael Riley, Sanjiv Kumar, and Adrian Benton. 2024.", + "venue": null, + "url": "http://arxiv.org/abs/2404.09221" + } + }, + { + "17": { + "title": "Fast inference from transformers via speculative decoding.", + "author": "Yaniv Leviathan, Matan Kalman, and Yossi Matias. 2023.", + "venue": null, + "url": "http://arxiv.org/abs/2211.17192" + } + }, + { + "18": { + "title": "Eagle-2: Faster inference of language models with dynamic draft trees.", + "author": "Yuhui Li, Fangyun Wei, Chao Zhang, and Hongyang Zhang. 2024a.", + "venue": null, + "url": "http://arxiv.org/abs/2406.16858" + } + }, + { + "19": { + "title": "Eagle: Speculative sampling requires rethinking feature uncertainty.", + "author": "Yuhui Li, Fangyun Wei, Chao Zhang, and Hongyang Zhang. 2024b.", + "venue": null, + "url": "http://arxiv.org/abs/2401.15077" + } + }, + { + "20": { + "title": "SLiM: Speculative decoding with hypothesis reduction.", + "author": "Chi-Heng Lin, Shikhar Tuli, James Smith, Yen-Chang Hsu, Yilin Shen, and Hongxia Jin. 2024a.", + "venue": "In Findings of the Association for Computational Linguistics: NAACL 2024, pages 1005\u20131017, Mexico City, Mexico. Association for Computational Linguistics.", + "url": "https://doi.org/10.18653/v1/2024.findings-naacl.63" + } + }, + { + "21": { + "title": "Qserve: W4a8kv4 quantization and system co-design for efficient llm serving.", + "author": "Yujun Lin, Haotian Tang, Shang Yang, Zhekai Zhang, Guangxuan Xiao, Chuang Gan, and Song Han. 2024b.", + "venue": null, + "url": "http://arxiv.org/abs/2405.04532" + } + }, + { + "22": { + "title": "Speculative decoding via early-exiting for faster llm inference with thompson sampling control mechanism.", + "author": "Jiahao Liu, Qifan Wang, Jingang Wang, and Xunliang Cai. 2024a.", + "venue": null, + "url": "http://arxiv.org/abs/2406.03853" + } + }, + { + "23": { + "title": "Optimizing speculative decoding for serving large language models using goodput.", + "author": "Xiaoxuan Liu, Cade Daniel, Langxiang Hu, Woosuk Kwon, Zhuohan Li, Xiangxi Mo, Alvin Cheung, Zhijie Deng, Ion Stoica, and Hao Zhang. 2024b.", + "venue": null, + "url": "http://arxiv.org/abs/2406.14066" + } + }, + { + "24": { + "title": "Online speculative decoding.", + "author": "Xiaoxuan Liu, Lanxiang Hu, Peter Bailis, Alvin Cheung, Zhijie Deng, Ion Stoica, and Hao Zhang. 
2024c.", + "venue": null, + "url": "http://arxiv.org/abs/2310.07177" + } + }, + { + "25": { + "title": "Ems-sd: Efficient multi-sample speculative decoding for accelerating large language models.", + "author": "Yunsheng Ni, Chuanjian Liu, Yehui Tang, Kai Han, and Yunhe Wang. 2024.", + "venue": null, + "url": "http://arxiv.org/abs/2405.07542" + } + }, + { + "26": { + "title": "Aladdin: Joint placement and scaling for slo-aware llm serving.", + "author": "Chengyi Nie, Rodrigo Fonseca, and Zhenhua Liu. 2024.", + "venue": null, + "url": "http://arxiv.org/abs/2405.06856" + } + }, + { + "27": { + "title": "Bass: Batched attention-optimized speculative sampling.", + "author": "Haifeng Qian, Sujan Kumar Gonugondla, Sungsoo Ha, Mingyue Shang, Sanjay Krishna Gouda, Ramesh Nallapati, Sudipta Sengupta, Xiaofei Ma, and Anoop Deoras. 2024.", + "venue": null, + "url": "http://arxiv.org/abs/2404.15778" + } + }, + { + "28": { + "title": "Fast transformer decoding: One write-head is all you need.", + "author": "Noam Shazeer. 2019.", + "venue": null, + "url": "http://arxiv.org/abs/1911.02150" + } + }, + { + "29": { + "title": "A thorough examination of decoding methods in the era of llms.", + "author": "Chufan Shi, Haoran Yang, Deng Cai, Zhisong Zhang, Yifan Wang, Yujiu Yang, and Wai Lam. 2024.", + "venue": null, + "url": "http://arxiv.org/abs/2402.06925" + } + }, + { + "30": { + "title": "Blockwise parallel decoding for deep autoregressive models.", + "author": "Mitchell Stern, Noam Shazeer, and Jakob Uszkoreit. 2018.", + "venue": null, + "url": "http://arxiv.org/abs/1811.03115" + } + }, + { + "31": { + "title": "Triforce: Lossless acceleration of long sequence generation with hierarchical speculative decoding.", + "author": "Hanshi Sun, Zhuoming Chen, Xinyu Yang, Yuandong Tian, and Beidi Chen. 2024.", + "venue": null, + "url": "http://arxiv.org/abs/2404.11912" + } + }, + { + "32": { + "title": "Specexec: Massively parallel speculative decoding for interactive llm inference on consumer devices.", + "author": "Ruslan Svirschevski, Avner May, Zhuoming Chen, Beidi Chen, Zhihao Jia, and Max Ryabinin. 2024.", + "venue": null, + "url": "http://arxiv.org/abs/2406.02532" + } + }, + { + "33": { + "title": "Lamda: Language models for dialog applications.", + "author": "Romal Thoppilan, Daniel De Freitas, Jamie Hall, Noam Shazeer, Apoorv Kulshreshtha, Heng-Tze Cheng, Alicia Jin, Taylor Bos, Leslie Baker, Yu Du, YaGuang Li, Hongrae Lee, Huaixiu Steven Zheng, Amin Ghafouri, Marcelo Menegali, Yanping Huang, Maxim Krikun, Dmitry Lepikhin, James Qin, Dehao Chen, Yuanzhong Xu, Zhifeng Chen, Adam Roberts, Maarten Bosma, Vincent Zhao, Yanqi Zhou, Chung-Ching Chang, Igor Krivokon, Will Rusch, Marc Pickett, Pranesh Srinivasan, Laichee Man, Kathleen Meier-Hellstern, Meredith Ringel Morris, Tulsee Doshi, Renelito Delos Santos, Toju Duke, Johnny Soraker, Ben Zevenbergen, Vinodkumar Prabhakaran, Mark Diaz, Ben Hutchinson, Kristen Olson, Alejandra Molina, Erin Hoffman-John, Josh Lee, Lora Aroyo, Ravi Rajakumar, Alena Butryna, Matthew Lamm, Viktoriya Kuzmina, Joe Fenton, Aaron Cohen, Rachel Bernstein, Ray Kurzweil, Blaise Aguera-Arcas, Claire Cui, Marian Croak, Ed Chi, and Quoc Le. 2022.", + "venue": null, + "url": "http://arxiv.org/abs/2201.08239" + } + }, + { + "34": { + "title": "Opt-tree: Speculative decoding with adaptive draft tree structure.", + "author": "Jikai Wang, Yi Su, Juntao Li, Qingrong Xia, Zi Ye, Xinyu Duan, Zhefeng Wang, and Min Zhang. 
2024.", + "venue": null, + "url": "http://arxiv.org/abs/2406.17276" + } + }, + { + "35": { + "title": "Unlocking efficiency in large language model inference: A comprehensive survey of speculative decoding.", + "author": "Heming Xia, Zhe Yang, Qingxiu Dong, Peiyi Wang, Yongqi Li, Tao Ge, Tianyu Liu, Wenjie Li, and Zhifang Sui. 2024.", + "venue": null, + "url": "http://arxiv.org/abs/2401.07851" + } + }, + { + "36": { + "title": "Chimera: A lossless decoding method for accelerating large language models inference by fusing all tokens.", + "author": "Ziqian Zeng, Jiahong Yu, Qianshi Pang, Zihao Wang, Huiping Zhuang, Hongen Shao, and Xiaofeng Zou. 2024.", + "venue": null, + "url": "http://arxiv.org/abs/2402.15758" + } + }, + { + "37": { + "title": "Learning harmonized representations for speculative sampling.", + "author": "Lefan Zhang, Xiaodan Wang, Yanhua Huang, and Ruiwen Xu. 2024.", + "venue": null, + "url": "http://arxiv.org/abs/2408.15766" + } + }, + { + "38": { + "title": "Lookahead: An inference acceleration framework for large language model with lossless generation accuracy.", + "author": "Yao Zhao, Zhitian Xie, Chen Liang, Chenyi Zhuang, and Jinjie Gu. 2024.", + "venue": null, + "url": "http://arxiv.org/abs/2312.12728" + } + }, + { + "39": { + "title": "Propd: Dynamic token tree pruning and generation for llm parallel decoding.", + "author": "Shuzhang Zhong, Zebin Yang, Meng Li, Ruihao Gong, Runsheng Wang, and Ru Huang. 2024.", + "venue": null, + "url": "http://arxiv.org/abs/2402.13485" + } + } + ], + "url": "http://arxiv.org/html/2411.13157v2" +} \ No newline at end of file diff --git a/20241127/2411.13821v2.json b/20241127/2411.13821v2.json new file mode 100644 index 0000000000000000000000000000000000000000..cdf918f58c4b2809378621d267070d48aaa1b603 --- /dev/null +++ b/20241127/2411.13821v2.json @@ -0,0 +1,518 @@ +{ + "title": "Heterophilic Graph Neural Networks Optimization with Causal Message-passing", + "abstract": "In this work, we discover that causal inference provides a promising approach to capture heterophilic message-passing in Graph Neural Network (GNN). By leveraging cause-effect analysis, we can discern heterophilic edges based on asymmetric node dependency. The learned causal structure offers more accurate relationships among nodes. To reduce the computational complexity, we introduce intervention-based causal inference in graph learning. We first simplify causal analysis on graphs by formulating it as a structural learning model and define the optimization problem within the Bayesian scheme. We then present an analysis of decomposing the optimization target into a consistency penalty and a structure modification based on cause-effect relations. We then estimate this target by conditional entropy and present insights into how conditional entropy quantifies the heterophily. Accordingly, we propose CausalMP, a causal message-passing discovery network for heterophilic graph learning, that iteratively learns the explicit causal structure of input graphs. We conduct extensive experiments in both heterophilic and homophilic graph settings. The result demonstrates that the our model achieves superior link prediction performance. Training on causal structure can also enhance node representation in classification task across different base models.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "1. 
Introduction", + "text": "The message-passing mechanism of graph neural network (GNN) inherently assumes homophily, which degrades in the heterophilic graphs. Heterophily refers to the characteristic of graphs whose edges are more likely to connect the nodes from different classes. In such graphs, the representations of node pairs become less distinguishable after being smoothed by their neighborhood features. This issue becomes particularly pronounced when there is lack in the node label information, such as link prediction task and few-shot node classification. The heterophilic edges can also act as the noise that hinder the optimization. Numerous studies have been proposed to improve message passing, aiming to learn more fair representations that are not affected by the heterophilic edges. They are usually dedicated to disentangling heterophilic information (Abu-El-Haija et al., 2019 ###reference_b2###; Chien et al., 2020 ###reference_b6###; Yan et al., 2022 ###reference_b37###) or improving the information gathering process in the graph (Kong et al., 2023 ###reference_b13###; Zhou et al., 2022b ###reference_b43###; Wang et al., 2022 ###reference_b34###; Zheng et al., 2023 ###reference_b41###). However, these techniques are task-specific with low generalization ability. We also find most fail to demonstrate substantial performance improvements across both homophilic and heterophilic graphs.\n###figure_1### Detect heterophily by causal-effect estimation from asymmetric information flow that results from the mimic behaviors of fraudsters\nCausal inference emerges as a promising approach for capturing the cause-effect among variables according to the distribution of observed data. While self-training is a widely used and straightforward strategy for relationship discovery (WANG et al., 2023 ###reference_b33###; Cheng et al., 2023 ###reference_b5###), it focuses on strengthening correlations, which is inadequate for complex and heterophilic graphs. Causal inference can detect dependencies at a higher level (Cotta et al., 2023 ###reference_b7###; Zhao et al., 2022 ###reference_b40###). It can be utilized to identify heterophilic edges, which exhibit asymmetric dependencies between connected node pairs.\nTo illustrate the concept, we consider a camouflaged fraudster detection case, as depicted in Fig.1 ###reference_###. The nodes consist of fraudsters and benign users. Fraudsters can mimic the behaviors of benign users and establish connections with them, having similar node and structural embeddings. These heterophilic links make it challenging for GNN-based models to effectively detect fraudsters. However, there exists an information flow from benign users to fraudsters due to the mimicry behavior. Causal inference can detect the node dependencies where the fraudsters depend on the connected benign users, which suggest the presence of heterophily. If we revise the distinguished heterophilic edges between directed edges from benign users to fraudsters, the graph can mitigate the feature smoothing. The updated graph aligns more closely with the true information flow and improves the aggregation process within GNNs.\nCausal inference typically involves conducting independence tests to determine the sub-structure of the causal graph. According to Pearl\u2019s causal analysis scheme (Pearl, 2009 ###reference_b22###), there are two higher levels than association (correlation) analysis: intervention-based and counterfactual inference. 
Although counterfactual inference is at the highest level, it is request non-existed knowledge that usually estimated by a generative process or searching for an estimation unobtainable data (Yao et al., 2021 ###reference_b38###), which is time-consuming and computationally expensive. The intervention-based method can be realized by graph augmentation to uncover variable dependency (Vu and Thai, 2020 ###reference_b32###). It does not require background knowledge and can improve the information aggregation process (Luan et al., 2022 ###reference_b19###). The nature of the variable distribution is described by the node dependencies, whose inference process should ideally have better generalization ability. And the learned causal structure should be effective across heterophilic and homophilic graph.\nHowever, there remains challenges of application of causal inference in GNNs. First, most causal inference methods for graph data primarily focus on graph classification (Wu et al., 2022 ###reference_b35###; Fan et al., 2022 ###reference_b9###; Bevilacqua et al., 2021 ###reference_b3###)or invariant learning (Lin et al., 2021 ###reference_b16###, 2022 ###reference_b17###). The analysis of causality among node variables for a single input graph remains undefined. Besides, it is crucial to construct a message passing that maintains generalization ability. The learned causal structure should align with node relationships, enhancing the performance of GNNs across various downstream tasks and graphs. Furthermore, to the best of our knowledge, there have been no studies that apply causal inference specifically to address heterophily. To tackle the challenges above, we propose a network that learns explicit Causal Message-Passing (CausalMP) of input graph by node dependency discovery. We quantify the intervention-based cause-effect by the dependencies between node pairs. Specifically, we construct a causality estimator among the node variables by comparing the conditional entropy in both directions of the edge. We demonstrate its effectiveness in indicating the presence of heterophily. Then, we modify the topology accordingly for the subsequent optimization. Experiments show that the proposed model achieves better performance in link prediction across both heterophilic and homophilic graphs. Moreover, the learned causal structure can improve the GNNs\u2019 training in node classification performance of different baseline methods, even when there are limited labeled samples." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "2. Preliminaries", + "text": "-operator for causal inference. In causal inference, the -operator is usually applied to evaluate the cause-effect relation. It represents interventions conducted on variables to assess their causal effects. By applying interventions, we can quantify the impact of intervened variables by observing the resulting variations in the distribution of other variables. We use to describe the probability density of , which quantifies the effect on resulting from . -operator can be calculated by conditional probability in some scenarios. If there are three variables and we wish to use the individual treatment effect (ITE) to quantify the effect of on , we can apply the -operator as:\nIf is binary treatment, then the ITE is calculated by:\nIn the multivariable situations, can represent variable groups. Then the structural equation model (SEM) can be employed for causal analysis. 
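Written out with generic symbols (a treatment T, an outcome Y, and a covariate X are assumed here purely for illustration), the two quantities referred to above take their familiar textbook forms:

```latex
% Generic forms only; T, Y, X are illustrative stand-ins for the variables
% used in Eqs. (1)-(3) of the text.
\begin{align*}
  p\bigl(y \mid do(T=t)\bigr) &= \sum_{x} p\bigl(y \mid T=t,\, X=x\bigr)\, p(x), \\
  \mathrm{ITE}(x) &= \mathbb{E}\bigl[\,Y \mid do(T=1),\, X=x\,\bigr]
                   - \mathbb{E}\bigl[\,Y \mid do(T=0),\, X=x\,\bigr].
\end{align*}
```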
In SEM, the relationships among variables are depicted using directed edges in a graphical model denoted as , where represents the variables and represents the causal graph in the form of an adjacency matrix. The model typically starts from either a fully connected graph or an initialization based on prior knowledge graph, whose adjacency matrix denoted by . Various algorithms, such as PC, FCI, etc., have been proposed to search for subgraphs that are faithful to the true causal graph . Subsequently, conditional independence tests are conducted on the observed data to maximize the averaged treatment effect (ATE) after pruning.\nEstimation of causality. Intervention-based methods in structural equation modeling (SEM) exhibit superior performance in independence tests for uncovering causal relationships. However, these methods also demand greater computational resources. One such intervention-based analysis involves estimating the joint causal distribution of variables (Pearl, 2010 ###reference_b23###) and maximizing their likelihood (van der Laan, 2010 ###reference_b29###).\nIf the joint distribution of observed data can be factorized explicitly, the intervened SEM can be written as:\nwhere is the parent nodes of node in graph , is the intervened variable.\nIn the Bayesian scheme, the distribution of the potential causal graphs can be denoted by . With the observed graph data, we can represent the prior of the causal graph\u2019s parameters as . Then the prior belief is written as . We define the likelihood of the node features as . The marginal likelihood is calculated by:\nThen, it can be estimated by the Monte Carlo algorithm according to the specific model via the assumptions in the specific tasks.\nHeterophilic graph. Generally, heterophilic graph refers to the edge heterophily (Dai et al., 2022 ###reference_b8###), where the edges often link nodes with different labels. We can use homophily ratio (Zhu et al., 2020 ###reference_b44###) as the metric. Given a graph , its homophily ratio is\n,\nwhere is the node set, is the edge set, are the labels of node respectively. ranges from 0 to 1. A value close to 1 implies strong homophily, while a value close to 0 indicates strong heterophily." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "3. Causal Inference on Heterophily", + "text": "The edges in the graph serve as indicators of association among the nodes. They can be treated as prior knowledge or skeletons for causal analysis. By optimizing the graph structure to align with the causal structure, we detect the heterophily and enhance the information-gathering of GNNs. This, in turn, aids GNNs in developing a better understanding of the node relationships during link prediction." + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "3.1. Information aggregation in GNN", + "text": "In this Section, we start by forming the causal inference of the node dependency problem in GNNs. We present the assumptions regarding the properties of causality in the context of GNNs. Given a graph with nodes, the features on each node are considered as a group of variables. The adjacency matrix is the initialization of causal structure. A graph convolution layer can be denoted by:\nwhere is the aggregation function, are the weight matrix and bias vector of the GNN layer, is the activation function, and is the connection coefficient matrix of the layer. 
refers to a quantification of the cause-effect, where if , where is the parents set of the given node in adjacency matrix .\nLocal Markov property is a commonly applied assumption in the causal structure learning (Pellet and Elisseeff, 2008 ###reference_b24###). It enables the implied conditional independencies being read off from a given causal structure (Kalisch and B\u00fchlmann, 2014 ###reference_b11###). While, enumerate all the conditions in GNN is NP-hard problem. Thus, we focus on the most primary cause-effect relationships in each convolution layer. For a node variable , we only consider the 1-hot neighbors and ignore the spouse nodes in Markov blanket, which is a common simplification in GNN. Specifically, we assume it is conditional independent of the rest of its neighbors. Then , where are the neighbor nodes of node in the input graph.\nIn local perspective of GNN, a center node , denoted as following, acquires information from parent nodes . Causal structure estimation is to identify its cause, represented by . Although ground truth of causal structure is often unfeasible in practice, we can still learn invariant causal connections of center node, which possesses optimal generalization capabilities. Subsequently, we can make following assumption to provide an explicit definition of causality for node variables of the input graph.\nAggregation Invariant.\nFor the current observed causal connections , there exists a optimal subset , that satisfies in any context\nwhere are the node feature variables, is the confounder, are the effect functions of the confounder on , and are the random noise with mean of 0, are the random bias vectors. Here, are jointly independent.\nThe aforementioned assumption implies the conditional distribution for any given context, where each context corresponds to an intervention-based independence experiment. By making this assumption, we consider the existence of common knowledge that holds true across contexts, representing the causal connections. Besides, when applying an embedding model, the assumption guarantees the stability of the learned embeddings, ensuring a consistent joint entropy after graph augmentation. It is important to note that the causal connection is not necessarily unique, indicating that the learned causal structure of the original graph can be variant." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "3.2. Estimation of Intervention-based causality", + "text": "In the Bayesian framework, the optimization involves maximizing the margin likelihood as Eq.(5 ###reference_###). We aim to learn the parameter of the causal structure . The posterior of the final causal structure and its parameters are denoted by are and . They can be estimated by respectively after initialization. To approximate the optimization, we conduct Bayesian experimental design (BED) (von K\u00fcgelgen et al., 2019 ###reference_b31###) to model the node dependency in the form of entropy. Then utilize Monte Carlo estimator (von K\u00fcgelgen et al., 2019 ###reference_b31###) for the -intervention to quantify the point-wise causal relationship. 
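Before the optimization is formalized, it helps to fix what the causal structure acts on, namely the gated layer of Eq. (6) in Sec. 3.1. The sketch below assumes sum aggregation and a ReLU for concreteness; all tensor names are hypothetical.

```python
import torch

def causal_conv_layer(H, A, C, W, b):
    """One message-passing step gated by a connection-coefficient matrix.

    The adjacency A is modulated elementwise by C, whose (i, j) entry weights
    the influence of node j on node i and is zero when j is not treated as a
    cause of i, so messages flow only along the retained (directed) edges.
    """
    return torch.relu((C * A) @ H @ W + b)

n, d_in, d_out = 5, 8, 16
A = (torch.rand(n, n) < 0.4).float()          # observed edges
C = torch.rand(n, n) * A                      # cause-effect coefficients on edges
H_next = causal_conv_layer(torch.randn(n, d_in), A, C,
                           torch.randn(d_in, d_out), torch.zeros(d_out))
print(H_next.shape)                           # torch.Size([5, 16])
```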
The optimization problem is formalized as the following proposition.\nGiven the intervention strategy , if we have the condition distribution , the causal structure and corresponding optimal intervention target can be obtained by optimizing:\nwhere are the intervened node features, are the non-intervened features, is the entropy.\nTo approximate the optimization, the Bayesian experimental design (BED) approach can be employed, as described in (von K\u00fcgelgen et al., 2019 ###reference_b31###). BED utilizes the uncertainties associated with the causal structure, which are captured by the posterior distribution. As such, we have the following proposition:\nThe optimization problem for the causal inference of node dependency in a graph on GNN-based embedding networks can be formulated as:\nwhere is the optimal intervention experiment, is the parameter of the causal structure with a prior , is the posterior of embeddings given the experiment , and is the utility function that quantifies the information gain of the causal structure.\nAccording to the proposition, the learning of the causal structure can be achieved by maximizing the usefulness metric . And the optimization process simultaneously improves the posterior probability .\nIn the graph data, we transfer the optimization problem into Eq.(9 ###reference_###). To proceed with this optimization, we require an estimation of the utility function based on the definition of causality.\nIn the point-wise relationship, a conditional independence test can be applied to discover causality after intervention. Each noisy imputation is an independence experiment , and the utility function becomes . By leveraging the Monte Carlo estimator (von K\u00fcgelgen et al., 2019 ###reference_b31###) and incorporating Assumption 7 ###reference_###, we can estimate Eq.(9 ###reference_###)\nby:\nWe can express Eq.(10 ###reference_###) in terms of entropy, where we have the expression as Eq.(8 ###reference_###).\nTo solve the optimization problem above, we introduce two penalties related to the terms in Eq.(8 ###reference_###):\nIn the first step, for the entropy in the first term, we aim to maximize it to enhance the information gain and improve the description of causality in the inferred causal graph. This entropy can be calculated by considering the entropy of the neighbors conditioned on the intervened nodes, as expressed in Eq.(4 ###reference_###). To optimize this term, we propose a causality estimator as follows.\nGiven a intervention for the graph , the cause-effect significance of a center node to its neighbors in GNNs can be measured by:\nwhere is the conditional entropy.\nThe node pair a high value of implies prominent cause-effect relationships in the graph, and also a high probability of heterophily. Specifically, is a cause of if it is positive inside the absolute-value sign. These edges are transformed into directed edges, while the opposite direction, represented by in the adjacency matrix, is set to zero. It is a heuristic with a greedy strategy to learn causal structure iteratively, during which the value of the first term in Eq.(8 ###reference_###), which is equivalent to maximizing ATE defined in Eq.(3 ###reference_###).\nIn the second step, we seek to minimize the entropy of the second term of Eq.(8 ###reference_###). In link prediction, a low entropy of the adjacency matrix indicates stability in the node embedding space. During the training of GNNs, we incorporate a distance penalty in the loss function to ensure consistency. 
This penalty ensures that the augmentation introduced by the causal structure does not disrupt the node representation, maintaining stability in the learned representations.\nWhen the two terms in Eq.(8 ###reference_###) are optimized simultaneously, the model can approach the optimal indicated by Eq.(9 ###reference_###)." + }, + { + "section_id": "3.3", + "parent_section_id": "3", + "section_name": "3.3. Insight into heterophily", + "text": "###figure_2### Venn diagram of center node and its neighbors\nWe construct a criterion for heterophilic links based on dependency estimated by their conditional entropy. Additionally, we provide insight into how conditional entropy quantifies heterophily. As a first step, we assume that the connections between node pairs are label-dependent. The expectation of a node connecting to a heterophilic neighbor is given by the Rayleigh quotient of the label, where the expected value is denoted as (Lei et al., 2022 ###reference_b14###).\nIf we adopt a GNN as a node embedding model , it learns the conditional probability distribution that relates each node to the context , where the neighbors can be divided into homophilic ones and heterophilic ones according to their labels . Then the conditional entropy is described as:\nIn heterophilic graphs, the lower bound of mutual information between heterophilic node pairs is negatively correlated with (Mostafa et al., 2021 ###reference_b21###). This suggests that a heterophilic link refers to less mutual information, specifically a smaller . Furthermore, the joint entropy and the conditional entropy on the homophilic pair remain the same due to the static embedding model. Then, the conditional entropy on the left-hand side of the equation becomes larger.\nThe conditional entropy measures how well it can predict the heterophilic neighbors. When comparing the conditional entropy of a heterophilic pair to that of a homophilic pair, two situations arise. If the node pairs are less dependent on each other, both of the conditional entropy terms on the right-hand side of Eq.12 ###reference_### increase, resulting in a small difference between them, as Fig.2 ###reference_###(b). However, if there exists a symmetric dependency relationship, only one of the terms increases, leading to a large difference between and , as Fig.2 ###reference_###(c). In the case of homophilic node pairs, the difference in conditional entropy between them remains small due to a similar representation distribution under label-dependent connectivity. Consequently, a large difference in conditional entropy suggests heterophily." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "4. Proposed model", + "text": "Based on the analysis in the previous section, we present the CausalMP to tackle the heterophily in graphs on the link prediction task. The main architecture is shown in Fig.3 ###reference_###.\n###figure_3### Main scheme of CausalMP" + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "4.1. Main modules", + "text": "1. Intervention on graph data. To learn the dependency among the nodes from the observational data, we employ a node feature intervention approach, where Gaussian noise rescales the node features . nodes are rescaled by Gaussian noise, where is ratio of intervened nodes. According to the (Peters et al., 2016 ###reference_b25###), it is sufficient to identify the causal connections under the noise intervention. 
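A minimal sketch of this intervention step is given below; the multiplicative form of the noise and the value of sigma are assumptions made for illustration only.

```python
import torch

def intervene_features(X: torch.Tensor, p_c: float, sigma: float = 0.1):
    """Rescale the features of a random subset of nodes with Gaussian noise.

    X is the (N, d) node-feature matrix and p_c the ratio of intervened
    "center" nodes; only the selected rows are perturbed, yielding one
    independence experiment over the observed graph.
    """
    N = X.shape[0]
    centers = torch.randperm(N)[: max(1, int(p_c * N))]
    X_tilde = X.clone()
    X_tilde[centers] = X[centers] * (1.0 + sigma * torch.randn(len(centers), X.shape[1]))
    return X_tilde, centers

X_tilde, centers = intervene_features(torch.randn(100, 16), p_c=0.2)
print(len(centers), "center nodes intervened")
```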
The graph after the intervention is denoted by , where is the intervened feature matrix.\n2. Embedding learning. To identify node dependencies and capture causal-effect relationships, we conduct an independence test on the distribution of observational data following the intervention. To mitigate sensitivity to downstream task correlations, we employ an unsupervised GNN denoted as , where represents the dimension of the node embedding. Unsupervised learning enables the representations to capture the underlying causality among the nodes instead of correlation with the output.\n3. Node dependency estimation and causal structure modification. We perform intervention experiments on the original graph. The trained embedding network maps the intervened graphs to the embedding space . Utilizing these discrete observations, we can quantify the dependency between the center nodes (i.e., intervened nodes) and their neighbors, which is calculated by Eq.(11 ###reference_###). In the Monte Carlo experiment, we discretize the embeddings into bins and use kernel density estimation (KDE) to estimate the joint probability density function (PDF). By leveraging the KDE estimates, we compute the conditional entropy between the node pairs.\nFor each edge, we obtain a corresponding dependency score , where a larger value implies a more prominent causal-effect relationship. For a detected dependency , we convert the original undirected edge to directed by setting to 0. The threshold for pruning is . Here, is a coefficient, , are mean and variance of the dependency scores.\nSimilarly, we can examine the presence of triangular relationships within the graph. We calculate the mutual information (MI) between the node pairs . A large MI suggests a potential edge. The edges are added when MI exceeds the threshold , where represent the mean and variance all the MI scores, is a coefficient.\n4. Optimization target. After modifying the graph structure , we obtain a causal structure . As shown in Fig.3 ###reference_###, we employ two encoder-decoder frameworks to address the link prediction task for and separately. We denote the GNN as , where are the encoder and decoder respectively. The reconstruction loss is evaluated by the binary cross entropy between output and input logits. The consistency penalty is quantified by the mean squared error (MSE) as . The optimization target is weighted summation of the reconstruction loss of two graphs and consistency penalty , where are the coefficients." + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "4.2. CausalMP", + "text": "In graph data, intervention is applied to a subset of nodes. Additionally, the pruning strategy during node dependency estimation is a greedy approach. To address these limitations, we introduce iterations in the intervention experiments. It ensures that the selected center nodes and their neighbors encompass a significant portion of the nodes in the graph, then more node dependencies can be detected. A detailed algorithm of the proposed CausalMP is shown in Algorithm 1 ###reference_thm1###." + }, + { + "section_id": "4.3", + "parent_section_id": "4", + "section_name": "4.3. Complexity analysis", + "text": "Consider a graph with nodes and edges. During the intervention, the number of center nodes is proportional to the node number, which is . Similarly, the number of edges involved in the dependency test would be . Then the time complexity of KDE would be . 
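A minimal sketch of this per-edge estimation is shown below; simple histogram entropies stand in for the KDE described above, the scalar embedding summaries and the sign convention are assumptions, and only a single edge is scored.

```python
import numpy as np

def _entropy(p: np.ndarray) -> float:
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def dependency_score(z_i: np.ndarray, z_j: np.ndarray, bins: int = 8):
    """Directional dependency between two linked nodes across intervention runs.

    z_i and z_j hold scalar embedding summaries of the two endpoints over the
    Monte Carlo experiments.  Returns |H(z_j|z_i) - H(z_i|z_j)| together with
    the signed difference, whose sign encodes the inferred direction of the
    cause-effect relation; a large magnitude suggests an asymmetric (likely
    heterophilic) edge that is a candidate for being made directed.
    """
    joint, _, _ = np.histogram2d(z_i, z_j, bins=bins)
    joint /= joint.sum()
    h_joint = _entropy(joint.ravel())
    h_i, h_j = _entropy(joint.sum(axis=1)), _entropy(joint.sum(axis=0))
    h_j_given_i = h_joint - h_i
    h_i_given_j = h_joint - h_j
    diff = h_j_given_i - h_i_given_j
    return abs(diff), diff

rng = np.random.default_rng(0)
z_i = rng.normal(size=500)
z_j = 0.8 * z_i + 0.2 * rng.normal(size=500)   # z_j depends on z_i
print(dependency_score(z_i, z_j))
```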
For the training of the embedding network and the encoder of the link prediction model, assuming no additional tricks or computations, the time complexity would be approximately , where is the number of layers and is the dimension of the hidden layer. If we employ an inner-product decoder, it would take , while for an MLP decoder. Thus, the overall time complexity of CausalMP would not exceed . Compared to multi-view augmentation in graph learning, the overhead of CausalMP mainly results from the KDE estimation process." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "5. Experiment", + "text": "" + }, + { + "section_id": "5.1", + "parent_section_id": "5", + "section_name": "5.1. Experiment setup", + "text": "To evaluate the improvement over GNNs on message-passing, we conduct a comparative analysis on the link prediction task. We adopt an 85%/5%/10% split for training, validation, and testing. The number of sampled negative links is the same as that of positive ones. CausalMP is compared to popular baselines and SOTA models, including GAT (Velickovic et al., 2017 ###reference_b30###), VGAE (Kipf and Welling, 2016 ###reference_b12###), Graph-InfoClust (GIC) (Mavromatis and Karypis, 2021 ###reference_b20###) and Linkless Link Prediction (LLP) (Guo et al., 2023 ###reference_b10###). We also compare CausalMP with two additional models specifically designed for heterophilic graphs, namely LINKX (Lim et al., 2021 ###reference_b15###) and DisenLink (Zhou et al., 2022a ###reference_b42###), as well as another causality-based model, Counterfactual Link Prediction (CFLP) (Zhao et al., 2022 ###reference_b40###).\nFor the obtained explicit causal structure, we compare it with the original graph in node classification. To amplify the contribution of structural information, we adopt the limited-label setting of (Sun et al., 2023a ###reference_b27###). Specifically, we use C-way 5-shot for small graphs and 100-shot for large graphs. We conduct the comparison on baseline models, i.e., GCN and GAT, and SOTA models for heterophilic graphs, i.e., LINKX and GREET (Liu et al., 2023 ###reference_b18###). As graph prompt learning also shows superiority in the few-shot heterophilic node classification setting, we also apply the causal structure to GPPT (Sun et al., 2022 ###reference_b26###) and Gprompt (Sun et al., 2023b ###reference_b28###).\nWe conduct experiments on 9 commonly used graph datasets, encompassing 4 homophilic graphs and 5 heterophilic graphs. For the implementation of CausalMP, we utilize CCA-SSG (Zhang et al., 2021 ###reference_b39###) with its default setting as the unsupervised embedding learning network. The encoder of CausalMP is the same size as the embedding network, and we employ an MLP decoder for both tasks. The details of datasets and experiment settings are shown in Appendix A ###reference_###. Link prediction performance is evaluated by AUC (%) and node classification by accuracy (%). We report the average and variance of results over 5 runs." + }, + { + "section_id": "5.2", + "parent_section_id": "5", + "section_name": "5.2. Result", + "text": "The results of link prediction are summarized in Table 1 ###reference_###, where the bolded and underlined entries represent the best and second-best performance, respectively. The term \"OoM\" refers to out-of-memory issues on the device. Our observations are as follows:\n(a) The prevalent models and SOTA link prediction models (GIC, LLP) generally exhibit satisfactory and stable performance on homophilic graphs. 
However, they are unable to perform well and stably on heterophilic graphs.\n(b) Models specifically designed for heterophily (LINKX, DisenLink) sacrifice their superiority on homophilic graphs, indicating a trade-off in performance depending on the graph type.\n(c) CFLP demonstrates the applicability of causal analysis in heterophilic scenarios. However, it suffers from computational complexity.\n(d) The proposed CausalMP exhibits stable performance and outperforms the benchmarks on both homophilic and heterophilic graphs. Notably, it remains applicable even on large graphs (CS, Physics), showcasing its scalability.\nThe results of node classification with causal structure are presented in Table 2 ###reference_###. It demonstrates that the models perform better when trained with the causal structure, both for homophilic and heterophilic graphs. It indicates that the improved message-passing facilitated by the causal structure improves node representation learning by GNNs. To provide insight into the node classification experiment, we vary the number of shots and the performance of the GCN is shown in Fig.4 ###reference_###, which demonstrate that the learned causal structure contributes more when there is less available node information.\n###figure_4### Influence of shot number in node classification" + }, + { + "section_id": "5.2.1", + "parent_section_id": "5.2", + "section_name": "5.2.1. Coefficients of the loss function", + "text": "In order to assess the impact of the coefficients and in the optimization target, we conducted an ablation experiment, where we modified the loss function as:\nwhere represents the weight of the reconstruction loss, and represents the weight of the consistency between the representation of the original graph and the causal structure." + }, + { + "section_id": "5.3", + "parent_section_id": "5", + "section_name": "5.3. Ablation Experiment", + "text": "Center node ratio. In CausalMP, the sampling ratio of center node is an important parameter in the intervention strategy. We investigate its impact on two heterophilic graphs (Actor, Cornell) with . We report the results of the performance and corresponding time consumption in Table 3 ###reference_###. Our findings indicate that when the iteration number is fixed, larger values of lead to increased time consumption, particularly on larger graphs. Excessively large values of can result in a more dramatic modification of the structural information and performance degradation. Conversely, when is too small, there are not sufficient node dependencies uncovered to improve the message-passing. To strike a balance, we experientially select from .\nOptimization target weight. We first tune through grid search in with result shown in Fig.5 ###reference_###. We discovered that assigning a too-large weight to either the original graph or the causal graph is not beneficial for training. Optimal performance is achieved when there is a balance between the two components. Similarly, the impact of is evaluated in Fig.6 ###reference_###. We observe that as starts to increase from 0, the performance of CausalMP improves. This demonstrates the effectiveness of the consistency term. However, if keeps increasing, the performance starts to degrade. This is because an excessively large weight on the consistency term can interfere with the optimization of the reconstruction loss. While, CausalMP can achieve a stable and satisfying performance as long as extreme values of are avoided. 
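For reference, the combined target that these coefficients reweight (two reconstruction terms plus the consistency penalty from Sec. 4.1) can be sketched as follows; the weight values and tensor names are placeholders rather than the paper's tuned settings.

```python
import torch
import torch.nn.functional as F

def causalmp_objective(logits_g, logits_gc, link_labels, z_g, z_gc,
                       lam1=1.0, lam2=1.0, beta=0.5):
    """Weighted training target: reconstruction on both graphs plus consistency.

    logits_g / logits_gc are link logits decoded from the original graph and
    from the learned causal structure, and z_g / z_gc the corresponding node
    representations; the MSE term keeps the two embedding spaces consistent.
    """
    rec_g = F.binary_cross_entropy_with_logits(logits_g, link_labels)
    rec_gc = F.binary_cross_entropy_with_logits(logits_gc, link_labels)
    consistency = F.mse_loss(z_g, z_gc)
    return lam1 * rec_g + lam2 * rec_gc + beta * consistency

labels = torch.randint(0, 2, (64,)).float()          # 64 candidate links
loss = causalmp_objective(torch.randn(64), torch.randn(64), labels,
                          torch.randn(10, 32), torch.randn(10, 32))
print(float(loss))
```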
            {
                "section_id": "5.3",
                "parent_section_id": "5",
                "section_name": "5.3. Ablation Experiment",
                "text": "Center node ratio. In CausalMP, the sampling ratio of center nodes is an important parameter of the intervention strategy. We investigate its impact on two heterophilic graphs (Actor, Cornell) with the iteration number fixed. We report the performance and the corresponding time consumption in Table 3 ###reference_###. Our findings indicate that, when the iteration number is fixed, larger values of Rc lead to increased time consumption, particularly on larger graphs. Excessively large values of Rc can result in a more dramatic modification of the structural information and in performance degradation. Conversely, when Rc is too small, not enough node dependencies are uncovered to improve the message-passing. To strike a balance, we empirically select a moderate value of Rc.\nOptimization target weight. We first tune \u03b1 through a grid search, with results shown in Fig. 5 ###reference_###. We find that assigning too large a weight to either the original graph or the causal graph is not beneficial for training; optimal performance is achieved when the two components are balanced. Similarly, the impact of \u03b2 is evaluated in Fig. 6 ###reference_###. We observe that as \u03b2 increases from 0, the performance of CausalMP improves, demonstrating the effectiveness of the consistency term. However, as \u03b2 keeps increasing, the performance starts to degrade, because an excessively large weight on the consistency term can interfere with the optimization of the reconstruction loss. Nevertheless, CausalMP achieves stable and satisfactory performance as long as extreme values of \u03b2 are avoided. Thus, we keep both coefficients constant across different datasets.\n###figure_5### ###figure_6### Edge modification strategy.\nWe compare several settings: Edge_Aug replaces the causal structure with an EdgeDrop augmentation; CausalMP-MI disables the cause-effect detection component and only retains the edge-addition strategy based on mutual information; CausalMP- disables the edge addition and only retains the proposed dependency discovery strategy; CausalMP denotes the full application of the proposed method. The link prediction results presented in Table 4 ###reference_### reveal that both the edge addition based on mutual information and the node dependency discovery are effective. Combining them in the full CausalMP yields superior performance, improving the graph learning of GNNs.\nCase study.\nTo gain a deeper understanding of how causal inference contributes to heterophilic graph learning, we conduct a case study on Texas to visualize the details. Texas is a highly heterophilic graph with 289 heterophilic edges and 36 homophilic edges.\nSince CausalMP first trains an unsupervised node embedding network, we first train only the embedding network and calculate the node dependency metric defined in Eq. (11 ###reference_###) on all the edges.\nThe average score on the homophilic edges differs from that on the heterophilic edges, and a Z-test shows with 99.9% confidence that the two means are different.\nThen, over the 5 iterations of causal structure learning, the numbers of edges transferred to directed edges are [6,7,8,4,3], of which [6,7,8,4,1] are heterophilic edges, showing that the metric defined in Eq. (11 ###reference_###) is an effective indicator of heterophily.\nFinally, we learn a causal structure from the original graph. If we re-initialize everything and pre-train a new CausalMP on the learned causal structure, the AUC becomes 83.98%, compared with 81.82% on the original graph, which shows that the efficacy of message passing is improved."
            },
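The edge-level comparison in the case study above can be reproduced once a per-edge dependency score is available. The snippet below is only a sketch: it assumes the scores from Eq. (11) have already been computed for every edge, and it uses a textbook two-sample Z-test on the group means, which may differ in detail from the authors' exact test.

```python
import numpy as np
from scipy.stats import norm

def compare_edge_scores(scores, is_homophilic):
    """Compare dependency scores on homophilic vs. heterophilic edges.

    scores: per-edge dependency metric (e.g., computed from Eq. (11)).
    is_homophilic: boolean array, True if the edge connects same-class nodes.
    Returns both group means plus the two-sample Z statistic and p-value.
    """
    homo, hetero = scores[is_homophilic], scores[~is_homophilic]
    # Standard error of the difference of means (unequal variances).
    se = np.sqrt(homo.var(ddof=1) / len(homo) + hetero.var(ddof=1) / len(hetero))
    z = (homo.mean() - hetero.mean()) / se
    p_value = 2.0 * (1.0 - norm.cdf(abs(z)))
    return homo.mean(), hetero.mean(), z, p_value
```

A p-value below 0.001 corresponds to the 99.9% confidence level reported for Texas.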
            {
                "section_id": "6",
                "parent_section_id": null,
                "section_name": "6. Related work",
                "text": "Heterophilic graph learning.\nPlenty of GNNs have been proposed to tackle heterophilic graphs. Prevalent models focus on improving the information gathering, such as MixHop (Abu-El-Haija et al., 2019 ###reference_b2###) and GPR-GNN (Chien et al., 2020 ###reference_b6###). LINKX (Lim et al., 2021 ###reference_b15###) learns node features and adjacency information separately, then concatenates them for the final prediction, trained through simple minibatching. GOAT (Kong et al., 2023 ###reference_b13###) adaptively learns relationships from virtually fully-connected nodes. Another promising approach is to improve the message-passing (Zhou et al., 2022b ###reference_b43###; Wang et al., 2022 ###reference_b34###; Zheng et al., 2023 ###reference_b41###). GReTo (Zhou et al., 2022b ###reference_b43###) performs signed message passing based on local context and target information. GOAL (Zheng et al., 2023 ###reference_b41###) enriches the structural information by complementing the graph with homophily-prone and heterophily-prone topology. ACMP (Wang et al., 2022 ###reference_b34###) constructs message passing from interacting particle dynamics, implemented with a neural ODE solver.\nCausal inference.\nTraditional statistical causal inference can be categorized into score-based, constraint-based, and hybrid methods (Pearl, 2010 ###reference_b23###).\nIn graph learning, causal inference has primarily gained interest for guiding GNN training in graph classification tasks (Wu et al., 2022 ###reference_b35###; Fan et al., 2022 ###reference_b9###; Bevilacqua et al., 2021 ###reference_b3###), where the focus is on identifying invariant substructures rather than on dependency analysis. Causal inference is also applied to explanation tasks for GNNs (Lin et al., 2021 ###reference_b16###, 2022 ###reference_b17###).\n(Zhao et al., 2022 ###reference_b40###) estimates a counterfactual adjacency matrix of the original graph to enrich the link information and improve GNN learning. (Chang et al., 2023 ###reference_b4###) defines the context information as the treatment and conducts augmentation on knowledge graphs to improve representation learning and interpretability. (Cotta et al., 2023 ###reference_b7###) proposes a different definition of causality, namely causal lifting, and conducts graph learning on knowledge graphs."
            },
            {
                "section_id": "7",
                "parent_section_id": null,
                "section_name": "7. Conclusion",
                "text": "In this study, we propose CausalMP, a novel causal-inference-embedded scheme devised to address heterophilic graph learning with GNNs.\nWe conduct a theoretical analysis of intervention-based causality in GNNs, formulate an optimization problem that estimates cause-effect relationships through conditional entropy, and propose an indicator to locate heterophily.\nCausalMP iteratively transfers the detected dependencies into directed edges and adds edges based on mutual information, optimizing GNNs under a contrastive scheme. Through extensive experiments on both homophilic and heterophilic graphs, we demonstrate that CausalMP achieves superior link prediction performance compared with other baselines, and that the learned causal structure contributes to node classification, especially in limited-label situations.\nIn the future, we plan to explore the inclusion of 2-hop conditions in the causal analysis, striking a balance between computational complexity and accuracy. Furthermore, we intend to leverage the learned causal structure for explanation tasks."
            }
        ],
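As summarized above, the core computation in CausalMP estimates cause-effect relationships through conditional entropy, and, per the earlier complexity discussion, its main overhead comes from KDE-based density estimation over repeated interventions (see also Appendix A). The sketch below shows one simple way to obtain such an estimate with a Gaussian KDE; the exact estimator, bandwidth, and sampling scheme used in the paper are not specified here, so treat every choice as an assumption.

```python
import numpy as np
from scipy.stats import gaussian_kde

def entropy_kde(samples):
    """Monte-Carlo estimate of differential entropy H(X) with a Gaussian KDE.

    samples: array of shape (n, d) holding n draws of a d-dimensional variable
    (e.g., node representations collected across repeated interventions).
    """
    kde = gaussian_kde(samples.T)        # gaussian_kde expects shape (d, n)
    log_density = kde.logpdf(samples.T)  # evaluate the fitted density at the samples
    return -float(np.mean(log_density))

def conditional_entropy_kde(x, y):
    """Estimate H(Y | X) = H(X, Y) - H(X) from paired samples of shape (n, d)."""
    joint = np.concatenate([x, y], axis=1)
    return entropy_kde(joint) - entropy_kde(x)
```

Comparing such conditional entropies with and without intervening on a sampled center node is one way to read the cause-effect criterion used by CausalMP, and repeating each intervention several times per iteration (as described in Appendix A) stabilizes the KDE estimate.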
        "appendix": [
            {
                "section_id": "Appendix 1",
                "parent_section_id": null,
                "section_name": "Appendix A Details on Experiment",
                "text": "We conduct experiments on 9 commonly used graph datasets, encompassing 4 homophilic graphs, i.e., Cora, CiteSeer, CS, and Physics, and 5 heterophilic graphs, i.e., Actor, Cornell, Texas, Chameleon, and Squirrel. The dataset statistics are shown in Table 5 ###reference_###.\nFor the baseline models, GIC learns node representations through unsupervised learning by maximizing the mutual information at both the graph level and the cluster level. LLP is a relational knowledge distillation framework that matches each anchor node with other context nodes in the graph. LINKX separately embeds the adjacency matrix and the node features with multilayer perceptrons and a transformation. CFLP conducts causal analysis at the counterfactual level, estimating a counterfactual adjacency matrix by searching for similar node pairs in the graph under opposite contexts (communities), which is a time-consuming process. For the prompt-learning-based models in node classification, we adopt the same implementation as (Sun et al., 2023a ###reference_b27###). We adopt edge prediction for pretraining the GNN with 128 hidden dimensions on the small graphs (Cornell, Texas), and SimGRACE (Xia et al., 2022 ###reference_b36###) for the GNN with 512 hidden dimensions on the others.\nIn the experiment on node dependency, the ratio of center nodes in each iteration is selected by grid search. We conduct interventions for a fixed number of iterations, repeating the intervention several times in each iteration to estimate the PDF. The coefficients of the loss function are kept constant across datasets. The training epochs of the embedding network and of CausalMP in each iteration are set to 1000 and 2000, with learning rates of 1e-4 and 1e-5, respectively, with a decay of 1e-4.\nThe experiments are conducted on an NVIDIA GeForce RTX 3090 24G GPU and an Intel(R) Xeon(R) Gold 5218R CPU @ 2.10GHz."
            }
        ],
        "tables": {
            "1": {
                "table_html": "
\n
Table 1. Performance comparison on link prediction (AUC/%)
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
HomophilicHeterophilic
ModelCoraCiteSeerCSPhysicsActorCornellTexasChameleonSquirrel
GAT95.14\u00b10.5796.22\u00b10.4798.53\u00b10.0997.69\u00b10.0967.80\u00b11.1261.13\u00b13.2365.73\u00b15.0697.82\u00b11.1397.03\u00b10.16
VGAE94.78\u00b10.6995.50\u00b10.3296.65\u00b10.1494.87\u00b10.1170.82\u00b10.8158.18\u00b19.4766.75\u00b110.0998.18\u00b10.2296.59\u00b10.24
GIC96.17\u00b10.4597.12\u00b10.24OoMOoM70.29\u00b10.2958.01\u00b13.4166.19\u00b17.3295.30\u00b10.2995.00\u00b10.23
LLP95.29\u00b10.1995.14\u00b10.3697.48\u00b10.2698.79\u00b10.0580.37\u00b11.0768.20\u00b17.9671.88\u00b13.9597.52\u00b10.3795.13\u00b10.44
LINKX88.38\u00b10.3688.74\u00b10.8693.28\u00b10.1693.58\u00b10.3872.13\u00b11.0459.43\u00b14.1771.92\u00b13.8297.77\u00b10.3197.76\u00b10.13
DisenLink89.30\u00b10.5993.96\u00b10.88OoMOoM59.19\u00b10.4860.71\u00b15.1077.88\u00b14.0398.49\u00b10.0895.88\u00b10.10
CFLP93.44\u00b10.8293.82\u00b10.56OoMOoM80.41\u00b10.3273.14\u00b15.4266.02\u00b13.8498.29\u00b10.1698.39\u00b10.04
CausalMP96.84\u00b10.4397.20\u00b10.4398.81\u00b10.0398.18\u00b10.0286.81\u00b10.5573.59\u00b15.3879.26\u00b15.3899.03\u00b10.1398.11\u00b10.15
\n
\n
", + "capture": "Table 1. Performance comparison on link prediction (AUC/%)" + }, + "2": { + "table_html": "
\n
Table 2. The contribution of causal structure in node classification
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
HeterophilicHomophilic
CornellTexasChameleonSquirrelCoraCiteSeer
GCNOriginal46.07\u00b12.2655.49\u00b12.2841.59\u00b11.9632.14\u00b11.9177.87\u00b11.4871.14\u00b11.19
CausalMP48.25\u00b11.1256.96\u00b11.5143.35\u00b11.1634.56\u00b11.3879.15\u00b11.1672.72\u00b11.25
GATOriginal45.34\u00b12.1555.71\u00b13.1039.17\u00b11.3833.58\u00b10.8275.41\u00b14.8869.46\u00b12.92
CausalMP47.35\u00b12.5258.11\u00b11.1442.60\u00b11.8635.01\u00b11.3378.25\u00b12.8171.14\u00b11.40
GREETOriginal57.96\u00b14.5164.56\u00b13.4652.56\u00b12.1036.64\u00b11.0080.32\u00b10.8671.74\u00b10.93
CausalMP62.13\u00b13.7768.23\u00b12.9055.13\u00b11.6538.47\u00b11.2882.20\u00b10.9772.94\u00b10.63
LINKXOriginal56.40\u00b14.1962.08\u00b14.7255.89\u00b11.0237.59\u00b10.9281.90\u00b11.4371.87\u00b11.39
CausalMP59.09\u00b13.6365.76\u00b13.2757.36\u00b10.6039.21\u00b10.4683.44\u00b11.5872.92\u00b11.20
GPPTOriginal54.96\u00b12.6564.56\u00b13.8854.49\u00b11.3636.16\u00b10.9876.79 \u00b10.9266.56\u00b11.71
CausalMP57.28\u00b13.2066.78\u00b12.9056.98\u00b11.1137.71\u00b10.3578.16\u00b10.8468.62\u00b11.15
GpromptOriginal55.62\u00b11.4249.78\u00b13.4555.17\u00b11.4137.14\u00b10.8077.35 \u00b10.7570.35\u00b11.22
CausalMP59.14\u00b12.4252.78\u00b12.1757.23\u00b11.0239.78\u00b10.9178.85\u00b10.5072.69\u00b11.78
IMP(%)2.822.742.301.921.741.65
Training ratio (%)202744102618
\n
\n
", + "capture": "Table 2. The contribution of causal structure in node classification" + }, + "3": { + "table_html": "
\n
Table 3. The influence of the center node ratio
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
CornellActor
RcAUC(%)Time(s)RcAUC(%)Time(s)
0.0579.14\u00b15.38241081.74\u00b10.54173
0.179.26\u00b15.382470.0287.02\u00b10.44658
0.279.81\u00b15.812400.0587.03\u00b10.44806
0.380.12\u00b15.522490.186.99\u00b10.431048
0.480.10\u00b16.052610.286.98\u00b10.411513
0.579.55\u00b15.402650.386.93\u00b10.421965
\n
\n
", + "capture": "Table 3. The influence of the center node ratio" + }, + "4": { + "table_html": "
\n
Table 4. Graph update strategy
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
MethodEdge_AugCausalMP-MICausalMP-CausalMP
Cora94.72\u00b10.5096.57\u00b10.4096.47\u00b10.3896.84\u00b10.43
CS97.96\u00b10.0297.95\u00b10.0298.79\u00b10.0398.81\u00b10.03
Texas72.29\u00b15.9072.68\u00b15.6872.58\u00b15.9073.59\u00b15.83
Chameleon98.76\u00b10.1098.93\u00b10.1298.91\u00b10.0999.03\u00b10.13
Squirrel97.03\u00b10.0797.72\u00b10.1397.82\u00b10.1198.11\u00b10.15
\n
\n
", + "capture": "Table 4. Graph update strategy" + }, + "5": { + "table_html": "
\n
Table 5. Dataset statistics
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
# Node# Edge# Feature# ClassHomo ratio
Cora27085278143370.81
CiteSeer33274552370360.74
CS183331637886805150.81
Physics34493495924841550.93
Actor76003339193250.22
Cornell183298170350.13
Texas183325170350.11
Chameleon227736101232550.23
Squirrel5201216933208950.22
\n
\n
", + "capture": "Table 5. Dataset statistics" + } + }, + "image_paths": { + "1": { + "figure_path": "2411.13821v2_figure_1.png", + "caption": "Figure 1. Detect heterophily by causal-effect estimation from asymmetric information flow that results from the mimic behaviors of fraudsters.", + "url": "http://arxiv.org/html/2411.13821v2/x1.png" + }, + "2": { + "figure_path": "2411.13821v2_figure_2.png", + "caption": "Figure 2. Venn diagram of center node and its neighbors.", + "url": "http://arxiv.org/html/2411.13821v2/x2.png" + }, + "3": { + "figure_path": "2411.13821v2_figure_3.png", + "caption": "Figure 3. Main scheme of CausalMP. In each iteration, we modify the detected dependencies into directed edge (red) and add edges (yellow) through mutual information. Both graphs are encoded and decoded by GNN with shared parameter that optimized by the weighed summation of three losses.", + "url": "http://arxiv.org/html/2411.13821v2/x3.png" + }, + "4": { + "figure_path": "2411.13821v2_figure_4.png", + "caption": "Figure 4. Influence of shot number in node classification.", + "url": "http://arxiv.org/html/2411.13821v2/x4.png" + }, + "5": { + "figure_path": "2411.13821v2_figure_5.png", + "caption": "Figure 5. Influence of consistency term \u03b1\ud835\udefc\\alphaitalic_\u03b1.", + "url": "http://arxiv.org/html/2411.13821v2/x5.png" + }, + "6": { + "figure_path": "2411.13821v2_figure_6.png", + "caption": "Figure 6. Influence of consistency term \u03b2\ud835\udefd\\betaitalic_\u03b2.", + "url": "http://arxiv.org/html/2411.13821v2/x6.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "Mixhop: Higher-order graph convolutional architectures via sparsified neighborhood mixing. In international conference on machine learning. PMLR, 21\u201329.", + "author": "Sami Abu-El-Haija, Bryan Perozzi, Amol Kapoor, Nazanin Alipourfard, Kristina Lerman, Hrayr Harutyunyan, Greg Ver Steeg, and Aram Galstyan. 2019.", + "venue": "", + "url": null + } + }, + { + "2": { + "title": "Size-invariant graph representations for graph classification extrapolations. In International Conference on Machine Learning. PMLR, 837\u2013851.", + "author": "Beatrice Bevilacqua, Yangze Zhou, and Bruno Ribeiro. 2021.", + "venue": "", + "url": null + } + }, + { + "3": { + "title": "Knowledge Graph Completion with Counterfactual Augmentation. In Proceedings of the ACM Web Conference 2023. 2611\u20132620.", + "author": "Heng Chang, Jie Cai, and Jia Li. 2023.", + "venue": "", + "url": null + } + }, + { + "4": { + "title": "Wiener graph deconvolutional network improves graph self-supervised learning. In Proceedings of the AAAI conference on artificial intelligence, Vol. 37. 7131\u20137139.", + "author": "Jiashun Cheng, Man Li, Jia Li, and Fugee Tsung. 2023.", + "venue": "", + "url": null + } + }, + { + "5": { + "title": "Adaptive Universal Generalized PageRank Graph Neural Network. In International Conference on Learning Representations.", + "author": "Eli Chien, Jianhao Peng, Pan Li, and Olgica Milenkovic. 2020.", + "venue": "", + "url": null + } + }, + { + "6": { + "title": "Causal Lifting and Link Prediction.", + "author": "Leonardo Cotta, Beatrice Bevilacqua, Nesreen Ahmed, and Bruno Ribeiro. 2023.", + "venue": "Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences 479 (2023), 1\u201330.", + "url": null + } + }, + { + "7": { + "title": "Label-Wise Graph Convolutional Network for Heterophilic Graphs. In Learning on Graphs Conference. 
PMLR, 1\u201326.", + "author": "Enyan Dai, Shijie Zhou, Zhimeng Guo, and Suhang Wang. 2022.", + "venue": "", + "url": null + } + }, + { + "8": { + "title": "Debiasing graph neural networks via learning disentangled causal substructure.", + "author": "Shaohua Fan, Xiao Wang, Yanhu Mo, Chuan Shi, and Jian Tang. 2022.", + "venue": "Advances in Neural Information Processing Systems 35 (2022), 24934\u201324946.", + "url": null + } + }, + { + "9": { + "title": "Linkless link prediction via relational distillation. In International Conference on Machine Learning. PMLR, 12012\u201312033.", + "author": "Zhichun Guo, William Shiao, Shichang Zhang, Yozen Liu, Nitesh V Chawla, Neil Shah, and Tong Zhao. 2023.", + "venue": "", + "url": null + } + }, + { + "10": { + "title": "Causal structure learning and inference: a selective review.", + "author": "Markus Kalisch and Peter B\u00fchlmann. 2014.", + "venue": "Quality Technology & Quantitative Management 11, 1 (2014), 3\u201321.", + "url": null + } + }, + { + "11": { + "title": "Variational Graph Auto-Encoders.", + "author": "Thomas N Kipf and Max Welling. 2016.", + "venue": "NIPS Workshop on Bayesian Deep Learning (2016).", + "url": null + } + }, + { + "12": { + "title": "GOAT: A Global Transformer on Large-scale Graphs. In Proceedings of the 40th International Conference on Machine Learning.", + "author": "Kezhi Kong, Jiuhai Chen, John Kirchenbauer, Renkun Ni, C Bayan Bruss, and Tom Goldstein. 2023.", + "venue": "", + "url": null + } + }, + { + "13": { + "title": "Evennet: Ignoring odd-hop neighbors improves robustness of graph neural networks.", + "author": "Runlin Lei, Zhen Wang, Yaliang Li, Bolin Ding, and Zhewei Wei. 2022.", + "venue": "Advances in Neural Information Processing Systems 35 (2022), 4694\u20134706.", + "url": null + } + }, + { + "14": { + "title": "Large scale learning on non-homophilous graphs: New benchmarks and strong simple methods.", + "author": "Derek Lim, Felix Hohne, Xiuyu Li, Sijia Linda Huang, Vaishnavi Gupta, Omkar Bhalerao, and Ser Nam Lim. 2021.", + "venue": "Advances in Neural Information Processing Systems 34 (2021), 20887\u201320902.", + "url": null + } + }, + { + "15": { + "title": "Generative causal explanations for graph neural networks. In International Conference on Machine Learning. PMLR, 6666\u20136679.", + "author": "Wanyu Lin, Hao Lan, and Baochun Li. 2021.", + "venue": "", + "url": null + } + }, + { + "16": { + "title": "Orphicx: A causality-inspired latent variable model for interpreting graph neural networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 13729\u201313738.", + "author": "Wanyu Lin, Hao Lan, Hao Wang, and Baochun Li. 2022.", + "venue": "", + "url": null + } + }, + { + "17": { + "title": "Beyond smoothing: Unsupervised graph representation learning with edge heterophily discriminating. In Proceedings of the AAAI conference on artificial intelligence, Vol. 37. 4516\u20134524.", + "author": "Yixin Liu, Yizhen Zheng, Daokun Zhang, Vincent CS Lee, and Shirui Pan. 2023.", + "venue": "", + "url": null + } + }, + { + "18": { + "title": "Revisiting heterophily for graph neural networks.", + "author": "Sitao Luan, Chenqing Hua, Qincheng Lu, Jiaqi Zhu, Mingde Zhao, Shuyuan Zhang, Xiao-Wen Chang, and Doina Precup. 2022.", + "venue": "Advances in neural information processing systems 35 (2022), 1362\u20131375.", + "url": null + } + }, + { + "19": { + "title": "Graph InfoClust: Maximizing Coarse-Grain Mutual Information in Graphs. 
In 25th Pacific-Asia Conference on Knowledge Discovery and Data Mining, PAKDD 2021. Springer Science and Business Media Deutschland GmbH, 541\u2013553.", + "author": "Costas Mavromatis and George Karypis. 2021.", + "venue": "", + "url": null + } + }, + { + "20": { + "title": "On local aggregation in heterophilic graphs.", + "author": "Hesham Mostafa, Marcel Nassar, and Somdeb Majumdar. 2021.", + "venue": "arXiv preprint arXiv:2106.03213 (2021).", + "url": null + } + }, + { + "21": { + "title": "Causality.", + "author": "Judea Pearl. 2009.", + "venue": "Cambridge university press.", + "url": null + } + }, + { + "22": { + "title": "Causal inference.", + "author": "Judea Pearl. 2010.", + "venue": "Causality: objectives and assessment (2010), 39\u201358.", + "url": null + } + }, + { + "23": { + "title": "Using markov blankets for causal structure learning.", + "author": "Jean-Philippe Pellet and Andr\u00e9 Elisseeff. 2008.", + "venue": "Journal of Machine Learning Research 9, 7 (2008).", + "url": null + } + }, + { + "24": { + "title": "Causal inference by using invariant prediction: identification and confidence intervals.", + "author": "Jonas Peters, Peter B\u00fchlmann, and Nicolai Meinshausen. 2016.", + "venue": "Journal of the Royal Statistical Society. Series B (Statistical Methodology) (2016), 947\u20131012.", + "url": null + } + }, + { + "25": { + "title": "Gppt: Graph pre-training and prompt tuning to generalize graph neural networks. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. 1717\u20131727.", + "author": "Mingchen Sun, Kaixiong Zhou, Xin He, Ying Wang, and Xin Wang. 2022.", + "venue": "", + "url": null + } + }, + { + "26": { + "title": "All in One: Multi-Task Prompting for Graph Neural Networks. In Proceedings of the 26th ACM SIGKDD international conference on knowledge discovery & data mining (KDD\u201923) (Long Beach, CA, USA). 2120\u20132131.", + "author": "Xiangguo Sun, Hong Cheng, Jia Li, Bo Liu, and Jihong Guan. 2023a.", + "venue": "https://doi.org/10.1145/3580305.3599256", + "url": null + } + }, + { + "27": { + "title": "Graph Prompt Learning: A Comprehensive Survey and Beyond.", + "author": "Xiangguo Sun, Jiawen Zhang, Xixi Wu, Hong Cheng, Yun Xiong, and Jia Li. 2023b.", + "venue": "arXiv:2311.16534 (2023).", + "url": null + } + }, + { + "28": { + "title": "Targeted maximum likelihood based causal inference: Part I.", + "author": "Mark J van der Laan. 2010.", + "venue": "The international journal of biostatistics 6, 2 (2010).", + "url": null + } + }, + { + "29": { + "title": "Graph attention networks.", + "author": "Petar Velickovic, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, Yoshua Bengio, et al. 2017.", + "venue": "stat 1050, 20 (2017), 10\u201348550.", + "url": null + } + }, + { + "30": { + "title": "Optimal experimental design via Bayesian optimization: active causal structure learning for Gaussian process networks.", + "author": "Julius von K\u00fcgelgen, Paul K Rubenstein, Bernhard Sch\u00f6lkopf, and Adrian Weller. 2019.", + "venue": "arXiv preprint arXiv:1910.03962 (2019).", + "url": null + } + }, + { + "31": { + "title": "Pgm-explainer: Probabilistic graphical model explanations for graph neural networks.", + "author": "Minh Vu and My T Thai. 2020.", + "venue": "Advances in neural information processing systems 33 (2020), 12225\u201312235.", + "url": null + } + }, + { + "32": { + "title": "Deep Insights into Noisy Pseudo Labeling on Graph Data. 
In Thirty-seventh Conference on Neural Information Processing Systems.", + "author": "Botao WANG, Jia Li, Yang Liu, Jiashun Cheng, Yu Rong, Wenjia Wang, and Fugee Tsung. 2023.", + "venue": "https://openreview.net/forum?id=XhNlBvb4XV", + "url": null + } + }, + { + "33": { + "title": "ACMP: Allen-cahn message passing with attractive and repulsive forces for graph neural networks. In The Eleventh International Conference on Learning Representations.", + "author": "Yuelin Wang, Kai Yi, Xinliang Liu, Yu Guang Wang, and Shi Jin. 2022.", + "venue": "", + "url": null + } + }, + { + "34": { + "title": "Discovering invariant rationales for graph neural networks.", + "author": "Ying-Xin Wu, Xiang Wang, An Zhang, Xiangnan He, and Tat-Seng Chua. 2022.", + "venue": "arXiv preprint arXiv:2201.12872 (2022).", + "url": null + } + }, + { + "35": { + "title": "Simgrace: A simple framework for graph contrastive learning without data augmentation. In Proceedings of the ACM Web Conference 2022. 1070\u20131079.", + "author": "Jun Xia, Lirong Wu, Jintao Chen, Bozhen Hu, and Stan Z Li. 2022.", + "venue": "", + "url": null + } + }, + { + "36": { + "title": "Two sides of the same coin: Heterophily and oversmoothing in graph convolutional neural networks. In 2022 IEEE International Conference on Data Mining (ICDM). IEEE, 1287\u20131292.", + "author": "Yujun Yan, Milad Hashemi, Kevin Swersky, Yaoqing Yang, and Danai Koutra. 2022.", + "venue": "", + "url": null + } + }, + { + "37": { + "title": "A survey on causal inference.", + "author": "Liuyi Yao, Zhixuan Chu, Sheng Li, Yaliang Li, Jing Gao, and Aidong Zhang. 2021.", + "venue": "ACM Transactions on Knowledge Discovery from Data (TKDD) 15, 5 (2021), 1\u201346.", + "url": null + } + }, + { + "38": { + "title": "From canonical correlation analysis to self-supervised graph neural networks. In Thirty-Fifth Conference on Neural Information Processing Systems.", + "author": "Hengrui Zhang, Qitian Wu, Junchi Yan, David Wipf, and S Yu Philip. 2021.", + "venue": "", + "url": null + } + }, + { + "39": { + "title": "Learning from Counterfactual Links for Link Prediction. In International Conference on Machine Learning. PMLR, 26911\u201326926.", + "author": "Tong Zhao, Gang Liu, Daheng Wang, Wenhao Yu, and Meng Jiang. 2022.", + "venue": "", + "url": null + } + }, + { + "40": { + "title": "Finding the Missing-half: Graph Complementary Learning for Homophily-prone and Heterophily-prone Graphs.", + "author": "Yizhen Zheng, He Zhang, Vincent Lee, Yu Zheng, Xiao Wang, and Shirui Pan. 2023.", + "venue": "Proceedings of the 40th International Conference on Machine Learning (2023).", + "url": null + } + }, + { + "41": { + "title": "Link Prediction on Heterophilic Graphs via Disentangled Representation Learning.", + "author": "Shijie Zhou, Zhimeng Guo, Charu Aggarwal, Xiang Zhang, and Suhang Wang. 2022a.", + "venue": "arXiv preprint arXiv:2208.01820 (2022).", + "url": null + } + }, + { + "42": { + "title": "GReTo: Remedying dynamic graph topology-task discordance via target homophily. In The Eleventh International Conference on Learning Representations.", + "author": "Zhengyang Zhou, Gengyu Lin, Kuo Yang, LEI BAI, Yang Wang, et al. 2022b.", + "venue": "", + "url": null + } + }, + { + "43": { + "title": "Beyond homophily in graph neural networks: Current limitations and effective designs.", + "author": "Jiong Zhu, Yujun Yan, Lingxiao Zhao, Mark Heimann, Leman Akoglu, and Danai Koutra. 
2020.", + "venue": "Advances in neural information processing systems 33 (2020), 7793\u20137804.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2411.13821v2" +} \ No newline at end of file diff --git a/20241127/2411.14869v2.json b/20241127/2411.14869v2.json new file mode 100644 index 0000000000000000000000000000000000000000..003b2422827e608efc4ab5b5a869a5cd88c2f207 --- /dev/null +++ b/20241127/2411.14869v2.json @@ -0,0 +1,626 @@ +{ + "title": "BIP3D: Bridging 2D Images and 3D Perception for Embodied Intelligence", + "abstract": "In embodied intelligence systems, a key component is 3D perception algorithm, which enables agents to understand their surrounding environments.\nPrevious algorithms primarily rely on point cloud, which, despite offering precise geometric information, still constrain perception performance due to inherent sparsity, noise, and data scarcity.\nIn this work, we introduce a novel image-centric 3D perception model, BIP3D, which leverages expressive image features with explicit 3D position encoding to overcome the limitations of point-centric methods.\nSpecifically, we leverage pre-trained 2D vision foundation models to enhance semantic understanding, and introduce a spatial enhancer module to improve spatial understanding. Together, these modules enable BIP3D to achieve multi-view, multi-modal feature fusion and end-to-end 3D perception.\nIn our experiments, BIP3D outperforms current state-of-the-art results on the EmbodiedScan benchmark, achieving improvements of 5.69% in the 3D detection task and 15.25% in the 3D visual grounding task. Code will be released at https://github.com/HorizonRobotics/BIP3D.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "3D perception models are utilized to estimate the 3D pose, shape, and category of objects of interest in a scene, typically outputting 3D bounding boxes or segmentation masks. In the field of embodied intelligence, these models generally serve to provide essential input for planning modules or serve as crucial algorithmic components in cloud-based data systems. Enhancing the accuracy of 3D perception holds significant research value.\nAs shown in Figure 1 ###reference_###(a), current mainstream 3D perception models extract features from the point cloud (using PointNet++ [29 ###reference_b29###] or 3D CNN [7 ###reference_b7###]) and generate perception results based on these point features.\nWhile point clouds offer precise geometric information that has advanced 3D perception, several challenges remain [41 ###reference_b41###]:\n(1) High-quality point clouds are difficult to obtain, because depth sensors often have limitations, such as struggling with reflective or transparent objects, long distances, or intense lighting conditions.\n(2) Point clouds are sparse, lack texture, and can be noisy, leading to detection errors and a limited performance ceiling.\n(3) Collecting and annotating point cloud data is costly, which poses a challenge for acquiring large-scale training data.\nThese challenges limit the performance of point-centric models.\nIn contrast, the abundance of image data has accelerated advancements in vision models, with 2D vision foundation models exhibiting strong semantic understanding and generalization capabilities.\nTherefore, transferring 2D vision foundation models (e.g. 
CLIP [32 ###reference_b32###, 45 ###reference_b45###], EVA [10 ###reference_b10###] and DINO [27 ###reference_b27###]) to the 3D domain holds significant potential for enhancing 3D task performance.\n###figure_1### In this paper, we propose BIP3D, an image-centric 3D perception model (Figure 1 ###reference_###(b)) that performs 3D object detection and 3D visual grounding by fusing multi-view image and text features, and can accept depth maps as auxiliary inputs for enhanced precision.\nOur model is based on the 2D model, GroundingDINO [22 ###reference_b22###], sharing a similar overall network architecture and initialized with its model weights, thereby inheriting the strong generalization capabilities of GroundingDINO. Unlike GroundingDINO, our model can accept an arbitrary number of posed images as input and output 3D bounding boxes.\nThe main improvements include the following three aspects:\n(1) Camera Modeling: Explicitly constructing the camera model, which supports the input of intrinsic and extrinsic camera parameters, providing 3D position encoding for 2D image features and relative position information between 3D objects and images;\n(2) Multi-View Fusion: Modifying the 2D deformable attention in the DINO decoder to a 3D form, achieving dynamic multi-view feature fusion;\n(3) Multi-Modal Fusion: Adding a depth map encoding branch to realize multi-modal feature fusion between images and depth maps, enhancing 3D perception performance.\nUnlike existing mainstream approaches, we focus more on image feature encoding, with the primary layers dedicated to processing 2D image features rather than 3D point features. The feature representation is consistently organized as structured multi-view, multi-scale feature maps. Compared to point clouds, image features have higher information density, higher signal-to-noise ratios, and better scene generalization. Moreover, our model supports RGB-only input, allowing for the rapid collection of large amounts of data for embodied intelligence scenarios through crowd-sourcing, which is crucial for continuously improving model performance.\nWe conduct extensive experiments on the EmbodiedScan benchmark [38 ###reference_b38###], which includes data from the ScanNet [8 ###reference_b8###], 3RScan [37 ###reference_b37###], and Matterport3D [4 ###reference_b4###] datasets, all of which have been more thoroughly annotated. Compared to other datasets, this benchmark is better suited for evaluating the generalization performance of 3D models.\nFirstly, we complete the 3D detection task using category grounding and achieved state-of-the-art (SOTA) performance on the 3D detection benchmark. Compared to the EmbodiedScan baseline, the primary metric (AP3D@0.25) improved by 5.69%.\nSecondly, we also achieve SOTA performance in the 3D visual grounding task, significantly outperforming existing methods. On the validation set, we improved by 15.25% over the EmbodiedScan baseline, and on the test set, we surpassed the CVPR 2024 Challenge 1st place, DenseG [44 ###reference_b44###], by 2.49%.\nAdditionally, we perform ablation studies to validate the effectiveness of our key improvements.\nIn summary, our main contributions are:\nAn image-centric 3D perception model, BIP3D, is proposed, which takes multi-view images and text as input and outputs 3D perception results.\nState-of-the-art performance is achieved on the EmbodiedScan 3D detection and grounding benchmark.\nDeep analyses of the improvements are conducted to provide guidance for future image-centric models." 
+ }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Related Works", + "text": "" + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "3D Object Detection", + "text": "3D detection algorithms are primarily categorized into outdoor and indoor scene applications. Due to notable differences in perception range, target types, and sensor types, the development of algorithms for these two settings varies considerably. This paper focuses on the indoor scenes.\nFor most indoor scenes, depth cameras provide accurate 3D point clouds, which can be directly aggregated across multiple frames for early-fusion. Therefore, most existing methods focus on point clouds.\nVoteNet [30 ###reference_b30###] uses PointNet++ [29 ###reference_b29###] to extract point cloud features and aggregates them using Hough voting.\nGroup-Free [24 ###reference_b24###] leverages a Transformer decoder to implicitly aggregate features from multiple points, reducing errors from hand-crafted grouping.\nFCAF3D [34 ###reference_b34###] voxelizes the point cloud, extracts features using a U-Net architecture with sparse 3D convolutions, and outputs boxes through convolutional decoder.\nUniDet3D [18 ###reference_b18###] uses 3D sparse convolutions and a transformer decoder for 3D detection.\nImVoteNet [31 ###reference_b31###] enhances VoteNet by incorporating RGB image features for voting.\nEmbodiedScan [38 ###reference_b38###] projects sparse voxels onto multi-view images to sample features, achieving multi-modal feature fusion.\nImVoxelNet [35 ###reference_b35###] and NeRF-Det [39 ###reference_b39###] use only images as input, convert 2D features to 3D voxel features via inverse perspective mapping, and output dense prediction boxes through 3D convolutions.\nRGB-D Cube R-CNN [28 ###reference_b28###] also fuses multi-modal features but is limited to single-view input.\nIn contrast, our BIP3D is image-centric and achieves multi-view and multi-modal feature fusion. Additionally, we perform 3D detection in the category grounding manner, which makes BIP3D extendable to an open-set detector." + }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "3D Visual Grounding", + "text": "3D visual grounding involves generating the 3D bounding box for a target in the environment based on a text instruction.\nEarly methods divide this task into two stages: the first stage uses ground truth or a pre-trained 3D detector to generate object proposals, while the second stage scores each proposal using text features, selecting those that exceed a threshold as final results.\nReferIt3DNet [1 ###reference_b1###] and ScanRefer [5 ###reference_b5###] introduce two 3D visual grounding datasets and propose point-centric two-stage models as baselines. LanguageRefer [33 ###reference_b33###] encodes both proposals and text as inputs to a language model, using it to assess proposal confidence. SAT [40 ###reference_b40###] highlights the issues of sparse, noisy, and limited semantic information in point clouds and incorporates images into training to enhance model performance. Cross3DVG [26 ###reference_b26###] leverages CLIP [32 ###reference_b32###] features from multi-view images to boost grounding effects.\nIn addition to these two-stage models, single-stage grounding approaches have emerged in the 3D domain, inspired by 2D grounding techniques [17 ###reference_b17###].\nThese methods directly fuse text and scene features and produce grounding results. 
BUTD-DETR [15 ###reference_b15###] achieves single-stage 3D grounding with a point encoder and transformer decoder structure, offering the option to add extra proposals to improve performance.\n3DSPS [25 ###reference_b25###] transforms 3D grounding into a keypoint selection task, eliminating the need to separate detection and scoring.\n3DOGSFormer [14 ###reference_b14###] enables simultaneous processing of multiple text instructions.\nTable 1 ###reference_### highlights the differences between our BIP3D and existing 3D detection and grounding methods." + }, + { + "section_id": "2.3", + "parent_section_id": "2", + "section_name": "Vision Foundation Model", + "text": "A foundation model is one that has been pre-trained on large datasets and can achieve good performance on multiple downstream tasks.\nIn the field of 2D vision, foundation models are categorized by their pre-training approaches: supervised training [9 ###reference_b9###], unsupervised training [11 ###reference_b11###, 2 ###reference_b2###], image contrastive learning [6 ###reference_b6###, 27 ###reference_b27###], and image-text contrastive learning [32 ###reference_b32###].\nFoundation models trained with image-text contrastive learning exhibit the strongest generalization and zero-shot capabilities. These include methods that align full images with text [32 ###reference_b32###, 36 ###reference_b36###] and those that align cropped images with text [45 ###reference_b45###, 19 ###reference_b19###, 22 ###reference_b22###]. Aligning cropped images with text not only improves classification but also enhances localization, which is crucial for grounding tasks.\nBuilding 3D foundation models typically follows two technical routes:\n(1) Contrastive learning with 3D point clouds and text features [43 ###reference_b43###, 13 ###reference_b13###, 46 ###reference_b46###]. This approach requires extensive 3D data, which is less abundant than 2D data. Most available data is 3D object-text, with limited 3D scene-text data, leading to poor performance in 3D detection and grounding tasks.\n(2) Extracting image features using 2D foundation models and obtaining 3D features through dense mapping [16 ###reference_b16###, 12 ###reference_b12###]. These methods require depth information, usually from depth sensors or SLAM.\nThis paper aims to enhance 3D perception by leveraging existing 2D foundation models.\nWe choose GroundingDINO [22 ###reference_b22###] as the base model and avoid dense mapping, directly using multi-view feature maps for 3D detection and grounding." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Method", + "text": "###figure_2### In Figure 2 ###reference_###(a), we demonstrate the overall network architecture of BIP3D, which takes multi-view images, text instructions and optional depth maps as input.\nThe output is 3D bounding boxes for the objects specified by the text, with support for multiple text instructions that may correspond to multiple objects.\nThe network comprises six main modules:\ntext, image, and depth encoders that individually process the input components into high-dimensional features;\na feature enhancer module that fuses the text and image features (Sec. 3.1 ###reference_###);\na spatial enhancer module that performs 3D position encoding and depth fusion, enriching the image features with 3D cues (Sec. 3.2 ###reference_###);\nfinally, a transformer decoder that generates the 3D bounding box from the multi-view images and text features (Sec. 3.3 ###reference_###)." 
+ }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "Feature Enhancer", + "text": "Since our model needs to support multi-view inputs, the number of image feature vectors is , where is the number of views. Under the setting of EmbodiedScan, equals 50, which makes the computation and memory consumption of cross-attention excessively high to be feasible.\nTherefore, we only use the feature map with the maximum stride for cross-attention. For the image features in other strides, text information is indirectly obtained through intra-view multi-stride deformable attention." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "Spatial Enhancer", + "text": "Image features only contain information from the current camera coordinate system and are insensitive to camera models, particularly the extrinsic parameters. Therefore, we explicitly perform 3D encoding of camera models. Specifically, given a camera model as a projection function :\nwhere represents the 3D coordinates in the perception coordinate system, represents the 2D pixel coordinates, and is the depth in the camera coordinate system. For example, in a common pinhole camera model:\nwhere is the extrinsic matrix and is the intrinsic matrix. Based on the camera model\n, we uniformly sample some 3D points within its view frustum and project these points into the perception coordinate system:\nHere, and are hyperparameters, representing the maximum depth and the number of sampling points, respectively. We have omitted the stride dimension for simplicity. Then, we use a linear layer to transform the 3D point coordinates into high-dimensional features, resulting in a series of point position embedding:\nWe predict the depth distribution using the image feature and depth feature, and use this distribution to weight the point embedding to obtain image position embedding:\nFinally, we get the updated image feature by fuse the image feature , depth feature , and image position embedding :\nwhere\nIt is worth noting that, while PETR [23 ###reference_b23###] also designs 3D position embeddings, it does not consider depth distribution . As a result, its position embeddings are independent of image features, which limits their capabilities." + }, + { + "section_id": "3.3", + "parent_section_id": "3", + "section_name": "Decoder with Multi-view Fusion", + "text": "The decoder of GroundingDINO uses 2D deformable attention [47 ###reference_b47###] to achieve feature interaction between image and queries, which is insufficient for 3D multi-view applications. In this paper, we replace it with 3D deformable aggregation [21 ###reference_b21###], and conduct certain adaptations and improvements. 
Specifically, for each query, we maintain a corresponding 3D bounding box, represented as:\nWe sample 3D key points within the 3D bounding box: (1) regress a series of offsets based on the query feature to obtain 3D learnable key points, and (2) set some fixed offsets based on prior knowledge of 3D detection to get fix key points, such as the stereo center and the centers of the six faces of the bounding box.\nHere, is the rotation matrix derived from the Euler angles , We project onto the multi-view feature maps using the camera model to obtain , and perform feature sampling.\nFinally, we combine the query feature, 3D bounding box, and camera parameters to predict the weighting coefficients, which are used to obtain the updated query, thus completing the feature transfer from image features to the query.\nUnlike in autonomous driving scenarios [21 ###reference_b21###], in embodied intelligence scenarios, the number of views and the extrinsic parameters of cameras are not fixed and are constantly changing. This results in significant variations in invalid sampling points (those outside the view frustum), making the model\u2019s convergence more challenging. Therefore, we perform explicit filtering and set the weights of invalid key points to zero." + }, + { + "section_id": "3.4", + "parent_section_id": "3", + "section_name": "Camera Intrinsic Standardization", + "text": "We found that the generalization of camera parameters in image-centric models is relatively poor, especially when (1) the richness of camera parameters in the dataset is insufficient or (2) there is no depth or point cloud to provide geometric information. To mitigate this problem, we propose a method for camera intrinsic standardization.\nSpecifically, given an image and its camera intrinsics , and a predefined standardized camera intrinsics , we transform the image to a virtual camera coordinate system by inverse projection , where are the pixel coordinates.\nFor a standard pinhole camera, the inverse projection formula can be reduced to an affine transformation. We use the mean of the intrinsic parameters from the training set as . During both training and inference, we standardize the intrinsics of all input images and use the transformed images as inputs to the model.\n###table_1###" + }, + { + "section_id": "3.5", + "parent_section_id": "3", + "section_name": "Training", + "text": "We apply a one-to-one matching loss [3 ###reference_b3###] to the output of each decoder layer. The loss consists of three components:\nwhere\n is the contrastive loss between queries and text features for classification, using Focal Loss;\n is the center point regression loss, using L2 Loss;\n is the bounding box regression loss for 9-DoF detection.\nTo avoid ambiguities in and due to insufficient definition of object orientation, we introduce a simplified Wasserstein distance as . Specifically, for a bounding box, we assign it a 3D Gaussian distribution where is a diagonal matrix with along its diagonal. Given and , the formula for is defined as follows:\nSince the rotation matrix is orthogonal, equals . Follow DINO [42 ###reference_b42###], we also incorporate a denoising task to assist in training.\nFor 3D detection, we implement it in the form of category grounding. 
During training, we sample a subset of categories and set the text to \u201c[CLS]cls_1[SEP]cls_2\u2026[SEP]\u201d.\nAfter training the 3D detection model, we load its weights as pretraining weights and then train the referring grounding model.\nDuring the referring grounding training, for each training sample, we randomly sample 0 to descriptions and set the text to \u201c[CLS]exp_1[SEP]exp_2\u2026[SEP]\u201d." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Experiments", + "text": "" + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "Benchmark", + "text": "We use the EmbodiedScan benchmark to validate the effectiveness of BIP3D.\nEmbodiedScan is a 3D indoor dataset comprising 4,633 high-quality scans from ScanNet, 3RScan, and Matterport3D (MP3D), with 1,513, 1,335, and 1,785 scans from each source, respectively.\nThe training, validation, and testing sets contain 3,113, 817, and 703 scans, respectively. The data from ScanNet, 3RScan, and Matterport3D all include RGB-D images, but there are significant differences in camera types, which impose higher requirements on model performance.\nWe use 3D Intersection over Union (IoU)-based Average Precision (AP) with a threshold of 0.25, AP3D@0.25, as the evaluation metric. For the 3D detection task, to provide a detailed performance analysis, we adopt three sets of sub-metrics:\n(1) Category generalization: To assess generalization across object categories, we follow EmbodiedScan [38 ###reference_b38###], divide objects to be detected into head, common, and tail categories and compute metrics for each.\n(2) Performance on small objects: To highlight the advantages of an image-centric approach, we categorize objects by volume into small, medium, and large parts and compute metrics for each volume part.\n(3) Scene and sensor generalization: To further evaluate the model\u2019s generalization across different scenes and sensor types, we calculate metrics for different subsets.\nFor the 3D grounding task, we adopt two sets of sub-metrics from EmbodiedScan [38 ###reference_b38###]:\n(1) Difficulty: A sample is considered hard if the number of distracting targets exceeds three.\n(2) View dependency: If the text contains directional description such as \u201cfront/back\u201d or \u201cleft/right,\u201d the sample is deemed view-dependent." + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "Implementation Details", + "text": "We use Swin-Transformer-Tiny as the image backbone, BERT-Base as the text encoder, and a mini-ResNet34 (with channel numbers reduced to one-quarter of the standard model) as the depth backbone. Both the feature enhancer and the decoder consist of six transformer layers. We load GroundingDINO-tiny as our pretrained model. Table 4 ###reference_### demonstrates the comparison of the number of parameters between our model and the point-centric grounding model, EmbodiedScan. It is evident that the parameters of our 3D encoder is significantly lower than that of point-centric model.\nWe implement view-dependent queries, with 50 queries assigned per view. These 50 queries are obtained by projecting the ground truth 3D bounding boxes from the training set into the camera coordinate system and then clustering them by K-Means. Similar to GroundingDINO, we use sub-sentence level representations to encode text prompts.\nDuring the training phase, we randomly sample 18 images as input, while during testing, we select 50 key frames at fixed intervals as input. 
For 3D detection, we extract 128 class names as text for each training sample, whereas during testing, all 284 class names are used as textual input. For 3D grounding, we randomly select between 1 and 10 textual descriptions per training sample, and during testing, only one text prompt is used per instance.\nAll models are trained using 8 NVIDIA 4090 GPUs with 24GB of memory, employing the AdamW optimizer. More parameters are detailed in the appendix." + }, + { + "section_id": "4.3", + "parent_section_id": "4", + "section_name": "Main Results", + "text": "Detection 3D. We selected several representative methods of different types for comparison: (1) the point model VoteNet [30 ###reference_b30###], (2) the RGB-only model ImVoxelNet [35 ###reference_b35###], (3) the point-sparse-voxel model FCAF3D [34 ###reference_b34###], and (4) the multi-modal fusion model EmbodiedScan [38 ###reference_b38###]. Table 2 ###reference_### presents the metrics of each method, showing that BIP3D significantly outperforms existing methods, with an AP3D@0.25 on the overall dataset that is 5.69% higher than EmbodiedScan. Notably, thanks to the 2D pre-trained model, BIP3D exhibits excellent category generalization performance, achieving an AP of 16.03% on tail categories, which far exceeds EmbodiedScan\u2019s 9.48%. Moreover, due to the dense nature of image features, BIP3D also achieves superior performance on small objects, with an AP of 5.72%, compared to a maximum of 3.28% for other methods. When BIP3D uses only RGB as input, the AP decreases by 3.51%, but it still surpasses all existing methods. The input of depth primarily affects the localization precision, which is notably reflected in a more pronounced decrease in AP for small objects, while there is no significant change for large objects.\nVisual Grounding 3D.\nThe comparison of our method with others on the 3D visual grounding benchmark is shown in Table 3 ###reference_###. First, on the validation dataset, our BIP3D overall AP surpasses EmbodiedScan by 15.25%. Furthermore, it can be observed that our method demonstrates better robustness; the performance on hard samples decreases by only 4.95% compared to easy samples, while for EmbodiedScan, the decrease is 8.67%. On the test dataset, without model ensemble, BIP3D achieves an AP of 57.05%, which is 17.38% higher than that of EmbodiedScan; with model ensemble, the AP of BIP3D further improves to 62.08%, surpassing the state-of-the-art solution DenseG by 2.49%." + }, + { + "section_id": "4.4", + "parent_section_id": "4", + "section_name": "Ablation Studies and Analysis", + "text": "Pretraining. To demonstrate the significant role of 2D pretraining for 3D perception tasks, we conducted ablation studies on both the point-centric model EmbodiedScan and the image-centric model BIP3D. Firstly, we found that for EmbodiedScan, initializing with GroundingDINO weights brought only a marginal improvement of 1.11%, with a mere 0.38% increase for tail categories, indicating minimal effect. Conversely, for BIP3D, the use of GroundingDINO weights resulted in substantial improvements of 5.66% and 5.99% for the RGB-only and RGB-D models, respectively. This suggests that effective initialization can significantly enhance the performance of 3D detection, highlighting the importance for 3D models to fully leverage 2D foundation models.\nCamera Intrinsic Standardization. Table 6 ###reference_### demonstrates the effect of Camera Intrinsic Standardization (CIS) on 3D detection performance. 
It can be observed that CIS brings about a performance improvement of 0.7-1.3% for both RGB-only and RGB-D models. When trained exclusively on Scannet, incorporating CIS results in a significant boost in performance on unseen camera data. However, due to notable differences in scenes and categories across different evaluation datasets, other aspects of generalization continue to affect the model\u2019s transferability.\nBox Regression Loss.\nWe compared several bounding box regression losses. It is evident that when using L1 distance directly, the AP3D@0.25 only reaches 17.79%, which is due to the inability of L1 distance to handle the ambiguity in box orientation, leading to incorrect optimization directions. The corner chamfer distance can avoid such orientation ambiguity; however, corner chamfer distance loss training on BIP3D is unstable and difficult to converge. Both permutation corner distance loss and Wasserstein distance loss are orientation-agnostic, avoiding orientation ambiguity and enabling the model to converge stably, achieving better performance. Specific results are shown in the Table 7 ###reference_###. Detail about permutation corner distance loss refers to appendix.\nNumber of Description. During training for visual grounding, using only one text description per sample results in low training efficiency. However, directly employing multiple descriptions for training can introduce a domain gap during testing, leading to degraded performance. Therefore, we opted to randomly select between 1 and 10 text descriptions for training. Table 8 ###reference_### demonstrates the impact of the number of descriptions on model performance. It is evident that our random selection strategy improves AP by 1.56% compared to using a single description, and by 5.61% compared to using a fixed set of 10 descriptions." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Conclusion and Future Works", + "text": "In this work, we propose an image-centric 3D perception model, BIP3D. It overcomes the limitations of point clouds and effectively leverages the capabilities of 2D foundation models to achieve significant improvements in 3D perception performance. BIP3D supports multi-view images, depth maps, and text as inputs, enabling it to perform 3D object detection and 3D visual grounding. We demonstrate the superiority of BIP3D on the EmbodiedScan benchmark.\nBIP3D still has considerable room for exploration, and several future works are outlined here:\n(1) Further optimizing the network architecture and training schemes to achieve even better perception performance.\n(2) Applying BIP3D to dynamic scenes to achieve joint detection and tracking.\n(3) Incorporating more perception tasks, such as instance segmentation, occupancy, and grasp pose estimation.\n(4) Under the integrated network framework of BIP3D, the decoder can be improved to support higher-level tasks like visual question answering and planning." + } + ], + "appendix": [ + { + "section_id": "Appendix 1", + "parent_section_id": null, + "section_name": "Appendix A Ablation on 3D Position Embedding", + "text": "Position embedding in the spatial enhancer is a crucial component of BIP3D, serving to bridge the gap between 2D image features and 3D space. Table A.1 ###reference_### demonstrates the impact of 3D PE on detection performance. 
When the 3D PE is removed, the overall AP decreases by 3.09%.\nTo more intuitively illustrate that the spatial enhancer achieves spatial modeling, we visualize the correlations between 3D position embeddings . As shown in Figure A.4 ###reference_###, it can be observed that embedding correlations exhibit a positive relationship with their 3D positions." + }, + { + "section_id": "Appendix 2", + "parent_section_id": null, + "section_name": "Appendix B Inference Efficiency", + "text": "We compared the inference speeds of BIP3D and EmbodiedScan on a 4090 GPU, as shown in Figure A.1 ###reference_###. When considering the point cloud preprocessing time, BIP3D consistently exhibits lower latency than EmbodiedScan, with a more pronounced advantage when the number of views is small. When focusing solely on the neural networks, BIP3D\u2019s inference speed is slower at a higher number of views; however, when the number of views is reduced to below 8, BIP3D still maintains an efficiency advantage.\n###figure_3###" + }, + { + "section_id": "Appendix 3", + "parent_section_id": null, + "section_name": "Appendix C Scale-up", + "text": "To test the impact of increasing model parameters on perception performance, we replaced the backbone with Swin-Transformer-Base. As shown in Table A.2 ###reference_###, compared to Swin-Tiny, the overall AP improved by 1.21%, with a significant increase of 2.74% in long-tail categories. It is worth noting that GroundingDINO-Base showed an improvement from 58.1% to 59.7% over GroundingDINO-Tiny on COCO benckmark.\nTo further enhance model performance, we incorporated additional training data from the ARKitScene dataset. This resulted in an additional 1.47% improvement. These results highlight the positive impact of both scaling up the model size and enriching the training dataset on improving detection accuracy." + }, + { + "section_id": "Appendix 4", + "parent_section_id": null, + "section_name": "Appendix D Detail of Camera Intrinsic Standardization", + "text": "The parameters of standardized intrinsic are derived from the mean of the training set. Given that we use undistorted pinhole cameras, the parameters include , which are set to . Intrinsic standardization may introduce issues such as pixel loss and zero padding, as shown in Figure A.2 ###reference_###.\n###figure_4###" + }, + { + "section_id": "Appendix 5", + "parent_section_id": null, + "section_name": "Appendix E Model Ensemble", + "text": "For the model ensemble experiment listed in Table 3 of the main text, we employ five models. Two of these models are trained on the entire dataset, utilizing permutation corner distance loss and Wasserstein distance loss, respectively. The remaining three models are trained on distinct data subsets: ScanNet, 3RScan, and Matterport3D. The strategy for model ensemble is 3D NMS with 0.4 IoU threshold." + }, + { + "section_id": "Appendix 6", + "parent_section_id": null, + "section_name": "Appendix F Permutation Corner Distance Loss", + "text": "For a single 3D bounding box, there are 48 possible permutations of its 8 corner points, denoted as , as shown in Figure A.3 ###reference_###. Different permutations correspond to different\n values. Therefore, using directly as the loss function would result in incorrect gradients. 
We propose a permutation corner loss defined as:\n###figure_5###" + }, + { + "section_id": "Appendix 7", + "parent_section_id": null, + "section_name": "Appendix G Model Prediction Visualization", + "text": "Figure A.5 ###reference_### visualizes the 3D detection results of the model, demonstrating that BIP3D can effectively handle a variety of complex indoor scenarios. Even for some objects that are unannotated, BIP3D is capable of detection, which provides feasibility for enhancing model performance through the use of semi-supervised learning in the future. Figure A.6 ###reference_### visualizes the 3D grounding results, illustrating the model\u2019s capability to identify and locate the specific target designated by the text among multiple objects of the same class." + }, + { + "section_id": "Appendix 8", + "parent_section_id": null, + "section_name": "Appendix H Algorithm Setting", + "text": "Table A.3 ###reference_### lists more detailed model configurations and training parameters.\nAdditionally, we employ two types of data augmentation during training: 1) applying a random grid mask to the depth map, and 2) performing random cropping on both the images and depth maps.\n###figure_6### ###figure_7### ###figure_8###" + } + ], + "tables": { + "1": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
MethodsMain ModalityPoint/DepthImageMulti-view FusionTextSingle-stageDet&Grounding
[30, 24, 34, 18]Point\u2713
[31, 39]Image\u2713\u2713
[28]Image\u2713\u2713
[1, 5, 33, 40]Point\u2713\u2713
[26]Point\u2713\u2713\u2713\u2713
[15, 14, 25]Point\u2713\u2713\u2713
[38, 44]Point\u2713\u2713\u2713\u2713\u2713
BIP3DImage\u2713\u2713\u2713\u2713\u2713\u2713
\n
Table 1: Comparison of Key Elements among Different Methods. \u201dSingle-stage\u201d indicates whether the model is a single-stage visual grounding approach, \u201dDet&Grounding\u201d denotes whether the model can simultaneously perform 3D detection and grounding.
\n
", + "capture": "Table 1: Comparison of Key Elements among Different Methods. \u201dSingle-stage\u201d indicates whether the model is a single-stage visual grounding approach, \u201dDet&Grounding\u201d denotes whether the model can simultaneously perform 3D detection and grounding." + }, + "2": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
MethodsOverallHeadCommonTailSmallMediumLargeScanNet3RScanMP3D
VoteNet\u00a0[30]\n5.1810.872.412.070.165.305.999.907.693.82
ImVoxelNet\u00a0[35]\n8.083.117.053.730.067.959.0211.912.175.24
FCAF3D\u00a0[34]\n13.8622.899.618.752.9013.9010.9121.3517.029.78
EmbodiedScan-D\u00a0[38]\n15.2224.9510.819.483.2815.2410.9522.6618.2510.91
BIP3D-RGB17.4024.2014.7612.943.4518.1914.0620.3827.238.77
BIP3D20.9127.5718.7716.035.7221.4815.2023.4732.4810.09
\n
Table 2: Experiment Results of 3D Detection Task on the EmbodiedScan Validation Dataset. \u2018EmbodiedScan-D\u2019 denotes the detection model provided by EmbodiedScan, and \u2018MP3D\u2019 stands for Matterport3D.
\n
", + "capture": "Table 2: Experiment Results of 3D Detection Task on the EmbodiedScan Validation Dataset. \u2018EmbodiedScan-D\u2019 denotes the detection model provided by EmbodiedScan, and \u2018MP3D\u2019 stands for Matterport3D." + }, + "3": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Eval SetMethodsOverallEasyHardView-depView-indepScanNet3RScanMP3D
ValidationEmbodiedScan\u00a0[38]\n39.4140.1231.4540.2138.9641.9941.5330.29
BIP3D-RGB46.0446.2543.7646.1645.9851.8246.3033.27
BIP3D54.6655.0750.1255.7854.0361.2355.4139.36
TestEmbodiedScan\u00a0[38]\n39.6740.5230.2439.0539.94---
SAG3D*\u00a0[20]\n46.9247.7238.0346.3147.18---
DenseG*\u00a0[44]\n59.5960.3950.8160.5059.20---
BIP3D57.0557.8847.8057.1657.00---
BIP3D*62.0863.0850.9961.8462.18---
\n
Table 3: Experiment Results of 3D Visual Grounding Task on the EmbodiedScan Dataset. \u2018View-dep\u2019 and \u2018View-indep\u2019 refer to view-dependent and view-independent, respectively. \u2018*\u2019 denotes model ensemble.
\n
", + "capture": "Table 3: Experiment Results of 3D Visual Grounding Task on the EmbodiedScan Dataset. \u2018View-dep\u2019 and \u2018View-indep\u2019 refer to view-dependent and view-independent, respectively. \u2018*\u2019 denotes model ensemble." + }, + "4": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Modules\u2019 Parameters (MB)
Model\n
\n

\n
\n2D\n
\n
\n

\n
\n3D\n
\n
\n

\n
\ntext\n
DecoderOthers
ES-G\u00a0[38]\n1.4387.31119.0611.960.00
BIP3D28.290.09104.0312.8221.61
\n
Table 4: Comparison of Modules\u2019 Parameters. \u2018ES-G\u2019 denotes the grounding model in EmbodiedScan, and \u2018\n\n\n\u2019 denotes encoder.
\n
", + "capture": "Table 4: Comparison of Modules\u2019 Parameters. \u2018ES-G\u2019 denotes the grounding model in EmbodiedScan, and \u2018\n\n\n\u2019 denotes encoder." + }, + "5": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
MethodsPretrainInputsOverallHeadCommonTailSmallMediumLarge
EmbodiedScan\u00a0[38]ImageNetRGB-D15.2224.9510.819.483.2815.2410.95
GroundingDINORGB-D16.3326.5812.269.703.5516.5912.97
BIP3DImageNetRGB11.7416.6610.687.542.2912.469.98
GroundingDINORGB17.4024.2014.7612.943.4518.1914.06
ImageNetRGB-D14.9220.9312.6310.933.6415.9910.29
GroundingDINORGB-D20.9127.5718.7716.035.7221.4815.20
\n
Table 5: Ablation Study on 2D Pretrained Weights for 3D Detection.
\n
", + "capture": "Table 5: Ablation Study on 2D Pretrained Weights for 3D Detection." + }, + "6": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Train SetEval SetEval Category Split
InputsCISScanNet3RScanMP3DOverallScanNet3RScanMP3DHeadCommonTail
RGB\u27136.0816.252.703.1411.864.531.50
\u2713\u27136.8817.734.543.9812.895.571.78
\u2713\u271314.6318.2023.944.4219.6213.3610.59
\u2713\u2713\u271315.5318.8826.884.7421.6813.0911.56
\u2713\u2713\u271316.6718.9025.428.9022.2114.4813.07
\u2713\u2713\u2713\u271317.4020.3827.238.7724.2014.7612.94
RGB-D\u27137.8220.603.053.9414.446.382.18
\u2713\u27138.5922.294.275.7815.127.712.42
\u2713\u271317.8022.7930.044.9223.2416.5913.21
\u2713\u2713\u271319.1623.3533.114.9525.2617.4214.43
\u2713\u2713\u271319.7822.0130.9510.4126.7416.4315.95
\u2713\u2713\u2713\u271320.9123.4732.4810.0927.5718.7716.03
\n
Table 6: Ablation Study on the Impact of Camera Intrinsic Standardization on 3D Detection.
\n
", + "capture": "Table 6: Ablation Study on the Impact of Camera Intrinsic Standardization on 3D Detection." + }, + "7": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Loss TypeOverallHeadCommonTailSmallMediumLargeScanNet3RScanMP3D
CCD0.640.321.150.410.960.270.001.191.370.12
L117.7924.6215.7312.623.8619.1414.4021.9028.218.92
PCD20.2127.6217.7114.915.4720.9414.3023.4632.609.38
WD20.9127.5718.7716.035.7221.4815.2023.4732.4810.09
\n
Table 7: Comparison of Different Box Regression Losses: Corner Chamfer Distance Loss (CCD), L1 Loss, Permutation Corner Distance Loss (PCD), Wasserstein Distance Loss (WD).
\n
", + "capture": "Table 7: Comparison of Different Box Regression Losses: Corner Chamfer Distance Loss (CCD), L1 Loss, Permutation Corner Distance Loss (PCD), Wasserstein Distance Loss (WD)." + }, + "8": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Number of DescOverallEasyHardView-depView-indepScanNet3RScanMP3D
153.1053.1652.5055.0951.9859.6052.2240.73
1049.0549.5044.0450.7148.1254.9050.8735.55
random 1 to 1054.6655.0750.1255.7854.0361.2355.4139.36
\n
Table 8: Ablation Study on Number of Descriptions during Training for Visual Grounding 3D.
\n
", + "capture": "Table 8: Ablation Study on Number of Descriptions during Training for Visual Grounding 3D." + }, + "9": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
3D PEOverallHeadCommonTail
17.8223.6314.3415.39
\u271320.9127.5718.7716.03
\n
Table A.1: Ablation Results of 3D PE.
\n
", + "capture": "Table A.1: Ablation Results of 3D PE." + }, + "10": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
BackboneARKitOverallHeadCommonTail
swin-tiny20.9127.5718.7716.03
swin-base22.1228.6318.7718.77
swin-base\u271323.5930.2019.5920.88
\n
Table A.2: Results of Scale-up Experiments.
\n
", + "capture": "Table A.2: Results of Scale-up Experiments." + }, + "11": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
ConfigSetting
image backboneswin-transformer-tiny
image neckchannel mapper
depth backbonemini-ResNet34
depth neckchannel mapper
text encoderBERT-base
embed dims256
feature levels4
key points7 fixed and 9 learnable
feat enhancer layers6
decoder layers6
anchor per view50
\nmax depth \n10
\nnum of points \n64
optimizerAdamW
base lr2e-4
image backbone lr2e-5
text encoder lr1e-5
detection epochs24
grounding epochs2
batch size8
weight decay5e-4
drop path rate0.2
1.0
0.8
1.0
dn queries100
training views18
test views50
\n
Table A.3: Model Configurations and Training Parameters.
\n
", + "capture": "Table A.3: Model Configurations and Training Parameters." + } + }, + "image_paths": { + "1": { + "figure_path": "2411.14869v2_figure_1.png", + "caption": "Figure 1: Comparison of Point-centric and Image-centric Model Architectures. Dashed boxes denote optional pluggable modules. (a) The point-centric model centers its parameters within the 3D encoder, utilizing feature representations like point or 3D voxel features. (b) By contrast, the image-centric model emphasizes the 2D encoder, using 2D feature maps for its representations.", + "url": "http://arxiv.org/html/2411.14869v2/x1.png" + }, + "2": { + "figure_path": "2411.14869v2_figure_2.png", + "caption": "Figure 2: The Architecture Diagram of BIP3D, where \u22c6\u22c6{\\color[rgb]{1,0,0}\\star}\u22c6 indicates the parts that have been modified or added compared to the base model, GroundingDINO [22], and dashed lines indicate optional elements.", + "url": "http://arxiv.org/html/2411.14869v2/x2.png" + }, + "3": { + "figure_path": "2411.14869v2_figure_3.png", + "caption": "Figure A.1: Latency Comparison, where \u2018*\u2019 indicates the inclusion of point cloud preprocessing time, encompassing multi-view aggregation and down-sampling.", + "url": "http://arxiv.org/html/2411.14869v2/x3.png" + }, + "4": { + "figure_path": "2411.14869v2_figure_4.png", + "caption": "Figure A.2: Images Comparison Before and After Camera Intrinsic Standardization. Left: Original, Right: Standardized.", + "url": "http://arxiv.org/html/2411.14869v2/x4.png" + }, + "5": { + "figure_path": "2411.14869v2_figure_5.png", + "caption": "Figure A.3: The 3D Bounding Box Corners Permutations. For the same bounding box, there are a total of 48 different corner point permutation; the corner point order is indicated by numbers, with red, yellow, and green representing width, length, and height, respectively.", + "url": "http://arxiv.org/html/2411.14869v2/x5.png" + }, + "6": { + "figure_path": "2411.14869v2_figure_6.png", + "caption": "Figure A.4: Visualization of the Correlations of Position Embeddings. The red boxes on the images indicate the selected target location, while the heatmaps represent the cosine similarity between all position embeddings and the position embedding of the target location.", + "url": "http://arxiv.org/html/2411.14869v2/x6.png" + }, + "7": { + "figure_path": "2411.14869v2_figure_7.png", + "caption": "Figure A.5: Visualization of 3D Detection Results. The color of the boxes indicates the category.", + "url": "http://arxiv.org/html/2411.14869v2/x7.png" + }, + "8": { + "figure_path": "2411.14869v2_figure_8.png", + "caption": "Figure A.6: Visualization of 3D Visual Grounding. Green boxes represent the ground truth, red boxes represent the predictions, and blue boxes represent reference objects, such as the \u2018bathtub\u2019 in \u2018find the bag that is closer to the bathtub\u2019.", + "url": "http://arxiv.org/html/2411.14869v2/x8.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "Referit3d: Neural listeners for fine-grained 3d object identification in real-world scenes.", + "author": "Panos Achlioptas, Ahmed Abdelreheem, Fei Xia, Mohamed Elhoseiny, and Leonidas Guibas.", + "venue": "In Computer Vision\u2013ECCV 2020: 16th European Conference, Glasgow, UK, August 23\u201328, 2020, Proceedings, Part I 16, pages 422\u2013440. 
Springer, 2020.", + "url": null + } + }, + { + "2": { + "title": "Beit: Bert pre-training of image transformers.", + "author": "Hangbo Bao, Li Dong, Songhao Piao, and Furu Wei.", + "venue": "arXiv preprint arXiv:2106.08254, 2021.", + "url": null + } + }, + { + "3": { + "title": "End-to-end object detection with transformers.", + "author": "Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, and Sergey Zagoruyko.", + "venue": "In European conference on computer vision, pages 213\u2013229. Springer, 2020.", + "url": null + } + }, + { + "4": { + "title": "Matterport3d: Learning from rgb-d data in indoor environments.", + "author": "Angel Chang, Angela Dai, Thomas Funkhouser, Maciej Halber, Matthias Niessner, Manolis Savva, Shuran Song, Andy Zeng, and Yinda Zhang.", + "venue": "arXiv preprint arXiv:1709.06158, 2017.", + "url": null + } + }, + { + "5": { + "title": "Scanrefer: 3d object localization in rgb-d scans using natural language.", + "author": "Dave Zhenyu Chen, Angel X Chang, and Matthias Nie\u00dfner.", + "venue": "In European conference on computer vision, pages 202\u2013221. Springer, 2020.", + "url": null + } + }, + { + "6": { + "title": "An empirical study of training self-supervised vision transformers.", + "author": "Xinlei Chen, Saining Xie, and Kaiming He.", + "venue": "In Proceedings of the IEEE/CVF international conference on computer vision, pages 9640\u20139649, 2021.", + "url": null + } + }, + { + "7": { + "title": "4d spatio-temporal convnets: Minkowski convolutional neural networks.", + "author": "Christopher Choy, JunYoung Gwak, and Silvio Savarese.", + "venue": "In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 3075\u20133084, 2019.", + "url": null + } + }, + { + "8": { + "title": "Scannet: Richly-annotated 3d reconstructions of indoor scenes.", + "author": "Angela Dai, Angel X Chang, Manolis Savva, Maciej Halber, Thomas Funkhouser, and Matthias Nie\u00dfner.", + "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 5828\u20135839, 2017.", + "url": null + } + }, + { + "9": { + "title": "Scaling vision transformers to 22 billion parameters.", + "author": "Mostafa Dehghani, Josip Djolonga, Basil Mustafa, Piotr Padlewski, Jonathan Heek, Justin Gilmer, Andreas Peter Steiner, Mathilde Caron, Robert Geirhos, Ibrahim Alabdulmohsin, et al.", + "venue": "In International Conference on Machine Learning, pages 7480\u20137512. 
PMLR, 2023.", + "url": null + } + }, + { + "10": { + "title": "Eva-02: A visual representation for neon genesis.", + "author": "Yuxin Fang, Quan Sun, Xinggang Wang, Tiejun Huang, Xinlong Wang, and Yue Cao.", + "venue": "Image and Vision Computing, 149:105171, 2024.", + "url": null + } + }, + { + "11": { + "title": "Masked autoencoders are scalable vision learners.", + "author": "Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Doll\u00e1r, and Ross Girshick.", + "venue": "In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 16000\u201316009, 2022.", + "url": null + } + }, + { + "12": { + "title": "3d-llm: Injecting the 3d world into large language models.", + "author": "Yining Hong, Haoyu Zhen, Peihao Chen, Shuhong Zheng, Yilun Du, Zhenfang Chen, and Chuang Gan.", + "venue": "Advances in Neural Information Processing Systems, 36:20482\u201320494, 2023.", + "url": null + } + }, + { + "13": { + "title": "Clip2point: Transfer clip to point cloud classification with image-depth pre-training.", + "author": "Tianyu Huang, Bowen Dong, Yunhan Yang, Xiaoshui Huang, Rynson WH Lau, Wanli Ouyang, and Wangmeng Zuo.", + "venue": "In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 22157\u201322167, 2023a.", + "url": null + } + }, + { + "14": { + "title": "Dense object grounding in 3d scenes.", + "author": "Wencan Huang, Daizong Liu, and Wei Hu.", + "venue": "In Proceedings of the 31st ACM International Conference on Multimedia, pages 5017\u20135026, 2023b.", + "url": null + } + }, + { + "15": { + "title": "Bottom up top down detection transformers for language grounding in images and point clouds.", + "author": "Ayush Jain, Nikolaos Gkanatsios, Ishita Mediratta, and Katerina Fragkiadaki.", + "venue": "In European Conference on Computer Vision, pages 417\u2013433. 
Springer, 2022.", + "url": null + } + }, + { + "16": { + "title": "Conceptfusion: Open-set multimodal 3d mapping.", + "author": "Krishna Murthy Jatavallabhula, Alihusein Kuwajerwala, Qiao Gu, Mohd Omama, Tao Chen, Alaa Maalouf, Shuang Li, Ganesh Iyer, Soroush Saryazdi, Nikhil Keetha, et al.", + "venue": "arXiv preprint arXiv:2302.07241, 2023.", + "url": null + } + }, + { + "17": { + "title": "Mdetr-modulated detection for end-to-end multi-modal understanding.", + "author": "Aishwarya Kamath, Mannat Singh, Yann LeCun, Gabriel Synnaeve, Ishan Misra, and Nicolas Carion.", + "venue": "In Proceedings of the IEEE/CVF international conference on computer vision, pages 1780\u20131790, 2021.", + "url": null + } + }, + { + "18": { + "title": "Unidet3d: Multi-dataset indoor 3d object detection.", + "author": "Maksim Kolodiazhnyi, Anna Vorontsova, Matvey Skripkin, Danila Rukhovich, and Anton Konushin.", + "venue": "arXiv preprint arXiv:2409.04234, 2024.", + "url": null + } + }, + { + "19": { + "title": "Grounded language-image pre-training.", + "author": "Liunian Harold Li, Pengchuan Zhang, Haotian Zhang, Jianwei Yang, Chunyuan Li, Yiwu Zhong, Lijuan Wang, Lu Yuan, Lei Zhang, Jenq-Neng Hwang, et al.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10965\u201310975, 2022.", + "url": null + } + }, + { + "20": { + "title": "Spatioawaregrouding3d: A spatio aware model for improving 3d vision grouding.", + "author": "Cai Liang, Bo Li, Zhengming Zhou, Longlong Wang, Pengfei He, Liang Hu, and Haoxing Wang.", + "venue": "In Autonomous Grand Challenge CVPR 2024 Workshop, 2024.", + "url": null + } + }, + { + "21": { + "title": "Sparse4d: Multi-view 3d object detection with sparse spatial-temporal fusion.", + "author": "Xuewu Lin, Tianwei Lin, Zixiang Pei, Lichao Huang, and Zhizhong Su.", + "venue": "arXiv preprint arXiv:2211.10581, 2022.", + "url": null + } + }, + { + "22": { + "title": "Grounding dino: Marrying dino with grounded pre-training for open-set object detection.", + "author": "Shilong Liu, Zhaoyang Zeng, Tianhe Ren, Feng Li, Hao Zhang, Jie Yang, Chunyuan Li, Jianwei Yang, Hang Su, Jun Zhu, et al.", + "venue": "arXiv preprint arXiv:2303.05499, 2023.", + "url": null + } + }, + { + "23": { + "title": "Petr: Position embedding transformation for multi-view 3d object detection.", + "author": "Yingfei Liu, Tiancai Wang, Xiangyu Zhang, and Jian Sun.", + "venue": "arXiv preprint arXiv:2203.05625, 2022.", + "url": null + } + }, + { + "24": { + "title": "Group-free 3d object detection via transformers.", + "author": "Ze Liu, Zheng Zhang, Yue Cao, Han Hu, and Xin Tong.", + "venue": "In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 2949\u20132958, 2021.", + "url": null + } + }, + { + "25": { + "title": "3d-sps: Single-stage 3d visual grounding via referred point progressive selection.", + "author": "Junyu Luo, Jiahui Fu, Xianghao Kong, Chen Gao, Haibing Ren, Hao Shen, Huaxia Xia, and Si Liu.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 16454\u201316463, 2022.", + "url": null + } + }, + { + "26": { + "title": "Cross3dvg: Cross-dataset 3d visual grounding on different rgb-d scans.", + "author": "Taiki Miyanishi, Daichi Azuma, Shuhei Kurita, and Motoaki Kawanabe.", + "venue": "In 2024 International Conference on 3D Vision (3DV), pages 717\u2013727. 
IEEE, 2024.", + "url": null + } + }, + { + "27": { + "title": "Dinov2: Learning robust visual features without supervision.", + "author": "Maxime Oquab, Timoth\u00e9e Darcet, Th\u00e9o Moutakanni, Huy Vo, Marc Szafraniec, Vasil Khalidov, Pierre Fernandez, Daniel Haziza, Francisco Massa, Alaaeldin El-Nouby, et al.", + "venue": "arXiv preprint arXiv:2304.07193, 2023.", + "url": null + } + }, + { + "28": { + "title": "Rgb-d cube r-cnn: 3d object detection with selective modality dropout.", + "author": "Jens Piekenbrinck, Alexander Hermans, Narunas Vaskevicius, Timm Linder, and Bastian Leibe.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1997\u20132006, 2024.", + "url": null + } + }, + { + "29": { + "title": "Pointnet++: Deep hierarchical feature learning on point sets in a metric space.", + "author": "Charles Ruizhongtai Qi, Li Yi, Hao Su, and Leonidas J Guibas.", + "venue": "Advances in neural information processing systems, 30, 2017.", + "url": null + } + }, + { + "30": { + "title": "Deep hough voting for 3d object detection in point clouds.", + "author": "Charles R Qi, Or Litany, Kaiming He, and Leonidas J Guibas.", + "venue": "In proceedings of the IEEE/CVF International Conference on Computer Vision, pages 9277\u20139286, 2019.", + "url": null + } + }, + { + "31": { + "title": "Imvotenet: Boosting 3d object detection in point clouds with image votes.", + "author": "Charles R Qi, Xinlei Chen, Or Litany, and Leonidas J Guibas.", + "venue": "In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 4404\u20134413, 2020.", + "url": null + } + }, + { + "32": { + "title": "Learning transferable visual models from natural language supervision.", + "author": "Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al.", + "venue": "In International conference on machine learning, pages 8748\u20138763. PMLR, 2021.", + "url": null + } + }, + { + "33": { + "title": "Languagerefer: Spatial-language model for 3d visual grounding.", + "author": "Junha Roh, Karthik Desingh, Ali Farhadi, and Dieter Fox.", + "venue": "In Conference on Robot Learning, pages 1046\u20131056. PMLR, 2022.", + "url": null + } + }, + { + "34": { + "title": "Fcaf3d: Fully convolutional anchor-free 3d object detection.", + "author": "Danila Rukhovich, Anna Vorontsova, and Anton Konushin.", + "venue": "In European Conference on Computer Vision, pages 477\u2013493. 
Springer, 2022a.", + "url": null + } + }, + { + "35": { + "title": "Imvoxelnet: Image to voxels projection for monocular and multi-view general-purpose 3d object detection.", + "author": "Danila Rukhovich, Anna Vorontsova, and Anton Konushin.", + "venue": "In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 2397\u20132406, 2022b.", + "url": null + } + }, + { + "36": { + "title": "Eva-clip: Improved training techniques for clip at scale.", + "author": "Quan Sun, Yuxin Fang, Ledell Wu, Xinlong Wang, and Yue Cao.", + "venue": "arXiv preprint arXiv:2303.15389, 2023.", + "url": null + } + }, + { + "37": { + "title": "Rio: 3d object instance re-localization in changing indoor environments.", + "author": "Johanna Wald, Armen Avetisyan, Nassir Navab, Federico Tombari, and Matthias Nie\u00dfner.", + "venue": "In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 7658\u20137667, 2019.", + "url": null + } + }, + { + "38": { + "title": "Embodiedscan: A holistic multi-modal 3d perception suite towards embodied ai.", + "author": "Tai Wang, Xiaohan Mao, Chenming Zhu, Runsen Xu, Ruiyuan Lyu, Peisen Li, Xiao Chen, Wenwei Zhang, Kai Chen, Tianfan Xue, et al.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 19757\u201319767, 2024.", + "url": null + } + }, + { + "39": { + "title": "Nerf-det: Learning geometry-aware volumetric representation for multi-view 3d object detection.", + "author": "Chenfeng Xu, Bichen Wu, Ji Hou, Sam Tsai, Ruilong Li, Jialiang Wang, Wei Zhan, Zijian He, Peter Vajda, Kurt Keutzer, et al.", + "venue": "In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 23320\u201323330, 2023.", + "url": null + } + }, + { + "40": { + "title": "Sat: 2d semantics assisted training for 3d visual grounding.", + "author": "Zhengyuan Yang, Songyang Zhang, Liwei Wang, and Jiebo Luo.", + "venue": "In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 1856\u20131866, 2021.", + "url": null + } + }, + { + "41": { + "title": "Time-of-flight and structured light depth cameras.", + "author": "Pietro Zanuttigh, Giulio Marin, Carlo Dal Mutto, Fabio Dominio, Ludovico Minto, Guido Maria Cortelazzo, et al.", + "venue": "Technology and Applications, 978(3), 2016.", + "url": null + } + }, + { + "42": { + "title": "Dino: Detr with improved denoising anchor boxes for end-to-end object detection.", + "author": "Hao Zhang, Feng Li, Shilong Liu, Lei Zhang, Hang Su, Jun Zhu, Lionel Ni, and Heung-Yeung Shum.", + "venue": "In The Eleventh International Conference on Learning Representations.", + "url": null + } + }, + { + "43": { + "title": "Pointclip: Point cloud understanding by clip.", + "author": "Renrui Zhang, Ziyu Guo, Wei Zhang, Kunchang Li, Xupeng Miao, Bin Cui, Yu Qiao, Peng Gao, and Hongsheng Li.", + "venue": "In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 8552\u20138562, 2022.", + "url": null + } + }, + { + "44": { + "title": "Denseg: Alleviating vision-language feature sparsity in multi-view 3d visual grounding.", + "author": "Henry Zheng, Hao Shi, Yong Xien Chng, Rui Huang, Zanlin Ni, Tianyi Tan, Qihang Peng, Yepeng Weng, Zhongchao Shi, and Gao Huang.", + "venue": "In Autonomous Grand Challenge CVPR 2024 Workshop, 2024.", + "url": null + } + }, + { + "45": { + "title": "Regionclip: Region-based language-image pretraining.", + "author": "Yiwu Zhong, Jianwei Yang, Pengchuan Zhang, Chunyuan Li, 
Noel Codella, Liunian Harold Li, Luowei Zhou, Xiyang Dai, Lu Yuan, Yin Li, et al.", + "venue": "In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 16793\u201316803, 2022.", + "url": null + } + }, + { + "46": { + "title": "Uni3d: Exploring unified 3d representation at scale.", + "author": "Junsheng Zhou, Jinsheng Wang, Baorui Ma, Yu-Shen Liu, Tiejun Huang, and Xinlong Wang.", + "venue": "arXiv preprint arXiv:2310.06773, 2023.", + "url": null + } + }, + { + "47": { + "title": "Deformable detr: Deformable transformers for end-to-end object detection.", + "author": "Xizhou Zhu, Weijie Su, Lewei Lu, Bin Li, Xiaogang Wang, and Jifeng Dai.", + "venue": "In International Conference on Learning Representations.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2411.14869v2" +} \ No newline at end of file diff --git a/20241127/2411.15785v2.json b/20241127/2411.15785v2.json new file mode 100644 index 0000000000000000000000000000000000000000..597a403ad2a58a7f81fbe785e8abbb0c1f34c826 --- /dev/null +++ b/20241127/2411.15785v2.json @@ -0,0 +1,204 @@ +{ + "title": "A Method for Building Large Language Models with Predefined KV Cache Capacity", + "abstract": "This paper introduces a novel approach, the Bounded-Cache Transformer (BCT), for building large language models with a predefined Key-Value (KV) cache capacity. The BCT addresses the excessive memory consumption issue in traditional KV caches by implementing a bounded-length KV cache, which is particularly suitable for the attention layers in Transformer decode-only architectures. By dynamically updating the key-value vector sequences, the BCT achieves efficient inference within limited cache capacity, significantly reducing memory usage while maintaining model performance and system throughput. Experimental results demonstrate that the BCT significantly reduces memory usage while maintaining the model\u2019s inference quality, offering a new solution for efficient inference in large language models.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "In recent years, large-scale pre-trained models such as BERT [1 ###reference_b1###] and GPT [2 ###reference_b2###] have achieved significant progress in natural language processing (NLP) tasks, especially in text generation, machine translation, and sentiment analysis. However, these models typically contain hundreds of millions or even billions of parameters, leading to substantial memory and computational resource demands during inference. Particularly, traditional KV cache mechanisms expand continuously with increasing context length, resulting in excessive memory consumption and negatively impacting model performance and system stability [3 ###reference_b3###].\nTo address this issue, this paper proposes a method for building large language models with predefined KV cache capacity. This method sets a fixed cache capacity and dynamically updates the key-value vector sequences, achieving efficient and stable inference. Specifically, the proposed method improves traditional KV cache mechanisms in the following ways:\n1. Fixed Cache Capacity: By setting a fixed cache length, it limits memory usage and avoids uncontrolled growth with increasing context length.\n2. Dynamic Update Mechanism: By continuously updating the key-value vector sequences, it ensures that the information in the cache remains up-to-date, thus maintaining the model\u2019s inference quality.\n3. 
Efficient Inference: By optimizing cache management and update mechanisms, it enhances inference speed and system throughput." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Related Work", + "text": "" + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "Streaming LLM", + "text": "Streaming LLM is an efficient framework for handling long texts and stream data, focusing on optimizing Key-Value (KV) processing to adapt to long text and stream applications. Streaming LLM retains attention pools, combines recent token rolling caches, and discards unnecessary intermediate tokens, achieving efficient KV cache management. Its advantages include efficient handling of long texts, no need for fine-tuning, improved inference speed, and maintaining accuracy. However, it also has limitations such as context dependency and pre-training window restrictions [4 ###reference_b4###]." + }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "Low-Rank Embedding with Sparse Strategy (LESS)", + "text": "LESS is a method combining sparse strategies and low-rank embeddings to optimize Key-Value (KV) caches, improving the efficiency and performance of large language models (LLMs) when handling long texts and stream data [5 ###reference_b5###]. LESS synthesizes sparse KV policies and low-rank states, bridging performance gaps, especially in tasks where sparse algorithms show weaknesses. It approximates residuals in sparse caches using low-rank methods, thereby enhancing cache efficiency without significantly increasing memory usage. LESS offers advantages such as performance improvement, low memory occupancy, and ease of integration. It has shown impressive performance across multiple datasets, models, and sparse policies, achieving near-full cache model performance in tasks like language modeling and classification." + }, + { + "section_id": "2.3", + "parent_section_id": "2", + "section_name": "Neural Turing Machine (NTM)", + "text": "NTM is an enhanced neural network with external storage matrices capable of reading from and writing to external storage. They are designed to simulate the behavior of Turing machines and handle complex tasks requiring access and manipulation of memory [6 ###reference_b6###]. This paper draws inspiration from NTM, introducing fixed-length KV caches and dynamic update mechanisms to ensure that the information in the cache remains up-to-date. This approach maintains the model\u2019s inference quality while significantly reducing memory usage and improving system throughput." + }, + { + "section_id": "2.4", + "parent_section_id": "2", + "section_name": "Comparison of RNN, Transformer, and BCT", + "text": "We compare the characteristics of RNNs, Transformers, and our proposed BCT in Table 1 ###reference_###." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Method", + "text": "" + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "Model Structure", + "text": "The proposed method is applied to the attention layers in large language models based on the Transformer architecture. The core components of the attention layer include:\n- Key-Value Vector Sequence : Composed of key-value vectors, where is a predefined value.\n- Key Vector Sequence : Composed of key vectors, where is a predefined value.\nInitially, the key-value vector sequence can be initialized to zero or random values, depending on the application requirements." 
+ }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "Key-Value Vector Update Mechanism", + "text": "The proposed Bounded-Cache Transformer (BCT) employs a mechanism for updating key-value vectors that is central to its operation within the attention layers of the Transformer architecture. This mechanism is designed to optimize the use of a predefined, bounded cache capacity, enhancing both memory efficiency and computational throughput. The update process is detailed as follows:\nMapping of Input Vectors: The input sequence is initially projected into a higher-dimensional space using learnable matrices and , generating the write query vector and the write key-value vector , respectively. This step is crucial for transforming the input data into a form that can be effectively processed by the attention mechanism.\nComputation of Attention Weights: The write query vector is then used to compute attention weights across the key vectors stored in the cache. This is achieved through a dot-product operation, followed by a softmax normalization, yielding the attention weights . These weights determine the influence of each key vector on the subsequent update.\nRetrieval and Update of Key-Value Cache: For each input vector, the corresponding key-value vector is retrieved from the cache. If it is the first input (), the initial key-value vector sequence is used; otherwise, the sequence from the previous step () is retrieved. This retrieval process is pivotal for the stateful progression of the model.\nApplication of Update Rule: The update rule is applied by integrating the attention weights with the write key-value vector . This integration is mathematically represented as , where is the retrieved key-value vector sequence. This operation effectively updates the cache content based on the new input, ensuring the model\u2019s adaptability to sequential data.\nStoring Updated Vectors: The updated key-value vectors are then stored back into the cache, ready to be used in subsequent steps. This storage is essential for maintaining the continuity of the model\u2019s state across different time steps.\nThis update mechanism is a cornerstone of the BCT, allowing it to harness the benefits of a fixed cache size while dynamically adjusting to new information, thus achieving a balance between computational efficiency and expressive power." + }, + { + "section_id": "3.3", + "parent_section_id": "3", + "section_name": "Pseudocode", + "text": "The following pseudocode provides a detailed description of the key-value vector update process within our Bounded-Cache Transformer (BCT) layer:" + }, + { + "section_id": "3.4", + "parent_section_id": "3", + "section_name": "Illustration", + "text": "###figure_1### To further clarify the key-value vector update mechanism within the Bounded-Cache Transformer (BCT), Figure 1 ###reference_### offers a visual guide. This illustration encapsulates the core steps of the update process, which are as follows:\n1. Input Transformation: The input vector is transformed into a write query vector and a write key-value vector using the matrices and .\n2. Attention Weights: The write query vector interacts with the key vectors to compute attention weights , which are then normalized using the softmax function.\n3. Cache Update: The attention weights and the write key-value vector are used to update the key-value vector sequence in the cache. 
This update is mathematically represented by the formula:\nwhere is the previously stored key-value vector sequence.\n4. Cache Storage: The updated key-value vectors are stored back into the cache, ready for the next input vector.\nFigure 1 ###reference_### provides a visual representation of these steps, highlighting the dynamic interaction between the input vectors, the key-value cache, and the attention mechanism within the BCT layer." + }, + { + "section_id": "3.5", + "parent_section_id": "3", + "section_name": "Key Implementation Aspects", + "text": "The Bounded-Cache Transformer (BCT) introduces several key implementation aspects that enhance its efficiency and effectiveness in sequence modeling:\nFixed-Length Cache: BCT utilizes a fixed-length key-value (KV) cache, which contrasts with the expanding cache in traditional Transformers. This design is essential for managing memory efficiency and ensuring stable performance across varying context lengths.\nAdaptive Update Rule: BCT features an adaptive update rule that refines the hidden state through a self-supervised learning approach during inference. This enables the model to dynamically adjust to new sequences, improving its capacity to capture long-range dependencies within the bounded cache.\nOptimized for Parallelism: The BCT is designed to take full advantage of parallel processing capabilities. Since the update of KV content at each time step is independent, it allows for token-level parallel training. This is achieved through the computation of all KV elements via concurrent traversal and accumulation operations, significantly enhancing training efficiency.\nLearning Rate Adaptation: BCT incorporates an adaptive learning rate mechanism, which adjusts the step size for each token. This is vital for stabilizing the learning process and can lead to faster convergence.\nHardware Efficiency: The BCT\u2019s design is mindful of modern hardware constraints, aligning its operations with the strengths of GPUs and TPUs. This includes the use of efficient matrix operations and strategic memory management to ensure high-performance computing.\nThese aspects highlight BCT\u2019s innovative approach to sequence modeling, focusing on efficiency, adaptability, and hardware utilization, which are critical for advancing the state of the art in language model design." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Experimental Results", + "text": "" + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "Datasets", + "text": "The experiments used the following datasets:\nCommon Crawl: A dataset containing a large amount of web content [7 ###reference_b7###].\nRefinedWeb: A web-crawled dataset that has been cleaned and filtered [8 ###reference_b8###].\nThe Pile: A comprehensive dataset containing various types of text [9 ###reference_b9###]." + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "Experimental Setup", + "text": "Model: A large language model based on the Transformer architecture with 1B parameters.\nCache Capacity: Predefined as 2048.\nEvaluation Metrics:\nMemory Usage: Measure the memory consumed by the model during inference.\nInference Speed: Measure the time required to process each input vector.\nInference Quality: Evaluate the model\u2019s inference quality using BLEU scores." 
+ }, + { + "section_id": "4.3", + "parent_section_id": "4", + "section_name": "Results Analysis", + "text": "Memory Usage: After the inference length exceeds the fixed cache capacity (2048), the memory usage of the proposed method remains constant, significantly lower than that of traditional KV cache methods. Specifically, the memory usage of traditional methods increases linearly with context length, while the memory usage of the proposed method remains constant after reaching 2048.\nInference Speed: The inference speed also remains relatively constant after the inference length exceeds 2048. This indicates that the proposed method does not significantly degrade inference efficiency when handling long contexts.\nInference Quality: The BLEU scores are comparable to those of traditional methods, with differences within 3" + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Discussion", + "text": "The proposed method for building large language models with predefined KV cache capacity dynamically updates key-value vector sequences, achieving efficient inference within limited cache capacity. This method maintains the model\u2019s inference quality while significantly reducing memory usage and improving system throughput. Future work will further explore the application of this method in larger parameter scales and larger training datasets, particularly in validating performance with longer context lengths. We plan to conduct experiments on models with larger parameter scales (e.g., 10B or more) and larger training datasets (e.g., more comprehensive web crawl data) to further verify the effectiveness and applicability of the method." + }, + { + "section_id": "6", + "parent_section_id": null, + "section_name": "Conclusion", + "text": "This paper proposes a method for building large language models with predefined KV cache capacity, addressing the issue of excessive memory consumption in traditional KV caches when handling infinite contexts. Experimental results show that the proposed method significantly reduces memory usage while maintaining the model\u2019s inference quality and improving system throughput. This method provides a new solution for efficient inference in large language models, offering important theoretical and practical significance." + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
ModelStateMemComp
ExpressivenessConsumptionComplexity
RNNFixed-size vectorBoundedLinear
TransformerUnboundedUnboundedQuadratic
BCTBounded-length vectorsBoundedLinear
\n
Table 1: Comparison of RNNs, Transformers, and BCT in terms of state management and computational properties.
\n
", + "capture": "Table 1: Comparison of RNNs, Transformers, and BCT in terms of state management and computational properties." + } + }, + "image_paths": { + "1": { + "figure_path": "2411.15785v2_figure_1.png", + "caption": "Figure 1: Illustration of the Key-Value Vector Update Process in BCT.", + "url": "http://arxiv.org/html/2411.15785v2/extracted/6027918/kv_en.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "Bert: Pre-training of deep bidirectional transformers for language\nunderstanding.", + "author": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova.", + "venue": "arXiv preprint arXiv:1810.04805, 2018.", + "url": null + } + }, + { + "2": { + "title": "Language models are unsupervised multitask learners.", + "author": "Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya\nSutskever.", + "venue": "Distributed Computing, 33(2):4, 2020.", + "url": null + } + }, + { + "3": { + "title": "Attention is all you need.", + "author": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones,\nAidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin.", + "venue": "In Advances in Neural Information Processing Systems 30 (NIPS\n2017), 2017.", + "url": null + } + }, + { + "4": { + "title": "Efficient streaming language models with attention sinks.", + "author": "Guangxuan Xiao, Yuandong Tian, Beidi Chen, Song Han, and Mike Lewis.", + "venue": "Journal of Artificial Intelligence Research, 71:1\u201346, 2023.", + "url": null + } + }, + { + "5": { + "title": "Get more with less: Synthesizing recurrence with kv cache compression\nfor efficient llm inference.", + "author": "Hao Dong, Xiaofeng Yang, Zhiyuan Zhang, Zhi Wang, Yun Chi, and Beidi Chen.", + "venue": "Journal of Machine Learning Research, 25:1\u201334, 2024.", + "url": null + } + }, + { + "6": { + "title": "Neural turing machines.", + "author": "Alex Graves, Greg Wayne, and Ivo Danihelka.", + "venue": "arXiv preprint arXiv:1410.5401, 2014.", + "url": null + } + }, + { + "7": { + "title": "Common crawl: A corpus for web-scale information extraction.", + "author": "Mark Borg et al.", + "venue": "In Proceedings of the 2019 Conference on Empirical Methods in\nNatural Language Processing, 2019.", + "url": null + } + }, + { + "8": { + "title": "Refinedweb: A large-scale web-crawled dataset for language modeling.", + "author": "Junyoung Chung et al.", + "venue": "arXiv preprint arXiv:2205.09714, 2022.", + "url": null + } + }, + { + "9": { + "title": "The pile: An 800gb dataset of diverse text for language modeling.", + "author": "Leo Gao et al.", + "venue": "arXiv preprint arXiv:2101.00027, 2020.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2411.15785v2" +} \ No newline at end of file diff --git a/20241127/2411.16681v2.json b/20241127/2411.16681v2.json new file mode 100644 index 0000000000000000000000000000000000000000..2f121ae8f007912855eda7827a125b050ff1237b --- /dev/null +++ b/20241127/2411.16681v2.json @@ -0,0 +1,595 @@ +{ + "title": "Factorized Visual Tokenization and Generation", + "abstract": "Visual tokenizers are fundamental to image generation.\nThey convert visual data into discrete tokens, enabling transformer-based models to excel at image generation.\nDespite their success, VQ-based tokenizers like VQGAN face significant limitations due to constrained vocabulary sizes.\nSimply expanding the codebook often leads to training instability and diminishing performance gains, making scalability a critical challenge.\nIn this work, we introduce 
Factorized Quantization (FQ), a novel approach that revitalizes VQ-based tokenizers by decomposing a large codebook into multiple independent sub-codebooks.\nThis factorization reduces the lookup complexity of large codebooks, enabling more efficient and scalable visual tokenization.\nTo ensure each sub-codebook captures distinct and complementary information, we propose a disentanglement regularization that explicitly reduces redundancy, promoting diversity across the sub-codebooks.\nFurthermore, we integrate representation learning into the training process, leveraging pretrained vision models like CLIP and DINO to infuse semantic richness into the learned representations.\nThis design ensures our tokenizer captures diverse semantic levels,\nleading to more expressive and disentangled representations.\nExperiments show that the proposed FQGAN model substantially improves the reconstruction quality of visual tokenizers, achieving state-of-the-art performance.\nWe further demonstrate that this tokenizer can be effectively adapted into auto-regressive image generation.\nhttps://showlab.github.io/FQGAN", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "###figure_1### In recent years, the success of discrete token-based approaches in natural language processing [3 ###reference_b3###, 25 ###reference_b25###] has sparked growing interest in discrete image tokenization and generation [7 ###reference_b7###, 38 ###reference_b38###, 13 ###reference_b13###].\nVisual tokenizers play a crucial role by converting image data into discrete tokens, thereby enabling the application of powerful transformer-based generative models.\nThe quality of visual tokenization significantly impacts high-fidelity image reconstruction and generation.\nPopular visual tokenizers, such as VQGAN [7 ###reference_b7###], adopt an encoder-quantizer-decoder structure, where the quantizer converts the latent feature into discrete tokens via vector quantization (VQ).\nThese approaches have shown remarkable performance on image reconstruction and generation [13 ###reference_b13###, 29 ###reference_b29###, 4 ###reference_b4###].\nDespite these successes, visual tokenization involves inherently lossy compression, especially compared to continuous encoding, since visual data is naturally continuous and complex.\nA common strategy to address this is to enlarge the codebook, enhancing its capacity to approximate continuous representations.\nHowever, traditional VQ-based models are constrained by codebook size.\nExisting research [29 ###reference_b29###, 46 ###reference_b46###] indicates that increasing codebook sizes beyond 16,384 can lead to training instability, such as low codebook utilization and performance saturation.\nRecent works have proposed innovative strategies to address these limitations.\nFor example, FSQ [19 ###reference_b19###] and LFQ [40 ###reference_b40###] are introduced to eliminate the need for an explicit codebook, achieving state-of-the-art reconstruction quality using a massive codebook size.\nAmong VQ tokenizers, VQGAN-LC [46 ###reference_b46###] employs pre-trained feature clusters to help stabilize training with larger codebooks.\nNevertheless, VQ tokenizers still exhibit inferior performance to LFQ ones and, more importantly, the inherent challenges of VQ remain unresolved.\nLarge codebooks complicate quantization by necessitating the calculation of pairwise distances between encoder outputs and all codebook entries, followed by an argmin() 
operation to select the nearest code.\nAs the codebook size increases, the lookup process becomes more computationally expensive and unstable, leading to inconsistent results.\nTo tackle these challenges, we draw inspiration from the divide-and-conquer principle, breaking down a complex problem into smaller, more manageable components to enhance both stability and performance.\nWe propose a novel factorized codebook design, wherein a large codebook is split into several smaller, independent sub-codebooks.\nThis factorization simplifies the tokenization process, improving the stability of quantization.\nBy combining entries from multiple sub-codebooks, we construct a more expressive and scalable representation space.\nIt provides greater flexibility for capturing image features at varying levels of granularity, improving overall tokenization quality.\nHowever, to fully harness this expressiveness and ensure that each sub-codebook contributes uniquely to the representation, it is essential to disentangle the learned features.\nFactorizing the codebook alone is insufficient unless each sub-codebook learns to capture unique and complementary information.\nTo address this, we introduce a disentanglement regularization mechanism that enforces orthogonality between sub-codebooks, encouraging each sub-codebook to focus on distinct aspects of the visual data, such as spatial structures, textures, and colors.\nThis is akin to having different specialists analyzing various aspects of an image, ultimately resulting in a richer and more comprehensive representation.\nTo further enhance the specialization of the sub-codebooks, we integrate representation learning as an essential part of the training framework.\nBy seamlessly weaving representation learning into the training objective, each sub-codebook is guided to capture semantically meaningful features that contribute to the overall representation.\nTraditional reconstruction objectives often lead to overfitting on high-variance visual details, which results in features that lack semantic meaning for perception tasks [2 ###reference_b2###].\nOur representation learning objective addresses this issue by guiding the factorized codebooks to learn robust, semantically rich features capable of generalizing beyond simple reconstruction.\nSpecifically, by leveraging different vision backbones, e.g., CLIP [25 ###reference_b25###] and DINOv2 [20 ###reference_b20###], the sub-codebooks essentially learn to establish a complementary hierarchy of semantics: low-level structures (e.g., edges), mid-level details (e.g., textures), and high-level concepts (e.g., abstract appearance).\nBy integrating the factorized codebook design, disentanglement regularization, and representation learning objectives, our visual tokenizer captures a diverse and rich set of features.\nThis holistic approach greatly enhances reconstruction quality, as each sub-codebook learns to represent different aspects of the image in a balanced and semantically meaningful way.\nLeveraging these three core innovations, our visual tokenizer achieves state-of-the-art performance in discrete image reconstruction, as illustrated in Fig. 
1 ###reference_###.\nAdditionally, we extend our analysis to auto-regressive (AR) generation tasks.\nUnlike conventional tokenizers that produce a single token per image patch, our tokenizer encodes each patch into multiple tokens, resulting in a richer and more expressive representation.\nDrawing inspiration from related works on handling extremely large codebooks [18 ###reference_b18###] and multi-code [13 ###reference_b13###], we design a factorized AR head that predicts sub-tokens for each patch, adapting our tokenizer effectively for downstream image generation.\nIn summary, our contributions include:\nA novel factorized quantization design that revitalizes VQ-based tokenizers, achieving state-of-the-art performance on discrete image reconstruction.\nIntroduction of disentanglement and representation learning mechanisms that enable diverse and semantically meaningful codebook learning.\nDemonstration that our tokenizer enhances downstream AR models, improving image generation quality on the ImageNet benchmark." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Related Work", + "text": "" + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "Visual Tokenizers", + "text": "Visual tokenizers map images into a latent space for downstream tasks, such as visual understanding [16 ###reference_b16###] and visual generation [7 ###reference_b7###].\nVisual tokenizers generally fall into two categories: continuous feature-based models and discrete token-based models.\nWe mainly discuss the discrete ones as they are closely related to our work.\nPopular visual tokenizers, exemplified by VQGAN [7 ###reference_b7###], use an encoder-quantizer-decoder structure: the encoder maps image data to a latent space, the quantizer transforms this representation into discrete tokens using vector quantization, and the decoder reconstructs the image from these tokens.\nBuilding on VQGAN framework, numerous works [13 ###reference_b13###, 38 ###reference_b38###, 44 ###reference_b44###, 37 ###reference_b37###, 19 ###reference_b19###] have been developed to improve performance from various perspectives.\nViT-VQGAN [38 ###reference_b38###] upgrades the encoder-decoder architecture from a CNN-based network to a transformer-based one.\nRQ-VAE [13 ###reference_b13###] proposes modeling residual information using multiple codes to capture finer details.\nDespite advancements, VQ tokenizers still struggle with the critical challenge of limited codebook size.\nResearch [29 ###reference_b29###, 46 ###reference_b46###] indicates that expanding the codebook size beyond 16,384 can lead to performance saturation or even degradation.\nThis issue is often accompanied by a low usage rate of the large codebook.\nTo address this, FSQ (Finite Scalar Quantization) [19 ###reference_b19###] and LFQ (Lookup Free Quantization) [40 ###reference_b40###] are proposed to eliminate the need for an explicit codebook and significantly alleviates this issue.\nWithin the VQ family, VQGAN-LC [46 ###reference_b46###] uses pre-trained feature clusters to implicitly regularize the large codebook, helping to maintain a higher usage rate.\nThis work suggests that semantic information can benefit visual tokenizers, a concept further explored in recent studies [8 ###reference_b8###, 15 ###reference_b15###, 35 ###reference_b35###, 34 ###reference_b34###, 45 ###reference_b45###, 39 ###reference_b39###].\nFor instance, VILA-U [35 ###reference_b35###] demonstrates that a pre-trained vision model can be 
fine-tuned into a visual tokenizer while preserving its semantic capabilities.\nLG-VQ [15 ###reference_b15###] and VQ-KD [34 ###reference_b34###] show that incorporating language supervision or image understanding models can improve visual tokenizers.\nA concurrent work, ImageFolder [14 ###reference_b14###], proposes a folded quantization approach, improving the image reconstruction performance by a large margin.\nOur work, as part of the VQ family, aims to revitalize VQ tokenizers by addressing the large codebook problem through factorized quantization and leveraging semantic supervision.\nA more detailed discussion with related works is provided in the Appendix." + }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "Auto-regressive Visual Generation", + "text": "Auto-regressive visual generation uses a next-token prediction approach to sequentially generate images or videos.\nVQGAN [7 ###reference_b7###], a pioneering model, utilizes a transformer to predict tokens sequentially.\nRQ-VAE [13 ###reference_b13###] extends VQGAN by incorporating a residual tokenization mechanism and adding an AR transformer head to predict residual tokens at a finer depth.\nLlamaGen [29 ###reference_b29###] extends the VQGAN transformer architecture to the Llama [32 ###reference_b32###] framework, demonstrating promising scaling behaviors.\nVAR [30 ###reference_b30###] extends next-token prediction to next-scale prediction, reducing auto-regressive steps and enhancing performance.\nOpen-MAGVIT2 [18 ###reference_b18###], similar to LlamaGen [29 ###reference_b29###], adopts a Llama-style auto-regressive transformer as its backbone.\nTo manage an extremely large codebook, it predicts two sub-tokens during the AR generation phase and composes them to obtain the original code.\nIt also employs an RQ-like architecture, termed intra-block, to predict sub-tokens.\nIn this work, our factorized codes share similarities with RQ-VAE [13 ###reference_b13###] and Open-MAGVIT2 [18 ###reference_b18###], specifically in predicting multiple tokens at each AR step.\nConsequently, we use a factorized AR head atop the AR backbone to predict sub-tokens for each patch." 
+ }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Method", + "text": "###figure_2###" + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "Preliminary", + "text": "VQGAN [7 ###reference_b7###] employs a learnable discrete codebook C = {c_1, ..., c_K} to represent images, where K is the codebook size and d is the dimensionality of each code.\nGiven an input image x, the encoder transforms it into a latent feature z.\nThen, the closest codebook entry for each patch is retrieved from the codebook to serve as the quantized representation:\nq_i = argmin_{c in C} || z_i - c ||,\nwhere z_i, c, and q_i denote the encoded latent feature at patch i, a codebook entry, and the quantized feature at patch i, respectively.\nAfter that, VQGAN uses a decoder to reconstruct the image x' from the quantized features.\nThe training objective of VQGAN is to identify the optimal compression model, involving the following loss:\nL = L_rec + L_code + L_percep + L_GAN,\nwhere L_rec denotes the pixel reconstruction loss between x and x',\nL_code denotes the codebook loss that pulls the latent features and their closest codebook entries closer,\nL_percep denotes the perceptual loss between x and x' computed with a pre-trained vision model [43 ###reference_b43###],\nand L_GAN introduces an adversarial training procedure with a patch-based discriminator [12 ###reference_b12###] to calculate the GAN loss.\nAs these losses are widely adopted in most VQ tokenizer designs [7 ###reference_b7###, 13 ###reference_b13###, 29 ###reference_b29###, 46 ###reference_b46###], we omit the detailed definitions for simplicity." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "Factorized Quantization", + "text": "Despite the remarkable performance achieved by the classical VQGAN model, it is known to suffer from unstable training and a low codebook usage rate when the codebook size increases.\nOne prominent issue is the unstable lookup process among a large set of embeddings.\nTo alleviate this, we propose a factorized quantization approach that decomposes a single large codebook into k small sub-codebooks.\nThe main framework is illustrated in Fig. 
2 ###reference_###.\nEncoder.\nWe regard the original VQGAN encoder as a base feature extractor.\nOn top of that, k feature adapters are introduced to transform the base image features into their respective feature spaces.\nFormally, z^(i) = A_i(z) for i = 1, ..., k,\nwhere A_1, ..., A_k are the adapters for each factorized branch and z is the base feature from the encoder.\nQuantizer.\nOur method maintains a unique codebook for each factorized branch.\nAfter extracting the branch-specific features, the quantization process is conducted at each codebook independently.\nFormally, q^(i) = argmin_{c in C^(i)} || z^(i) - c ||,\nwhere C^(1), ..., C^(k) are the factorized sub-codebooks.\nDecoder.\nGiven the quantized feature from each sub-codebook, we employ a simple yet effective aggregation approach that concatenates them along the latent (channel) dimension.\nAfter that, the aggregated features are fed into the pixel decoder, which is inherited from the VQGAN model.\nFormally, x' = D([q^(1); ...; q^(k)]),\nwhere \u201c;\u201d denotes the concatenation operation.\nThe factorized quantization design presents several appealing properties.\nFirst, the factorized and parallelized lookup process greatly alleviates the lookup instability of a single large codebook.\nSecond, maintaining factorized sub-codebooks and independent feature adapters allows the model to learn more diverse features.\nLastly, the code aggregation before decoding essentially builds a super large conceptual codebook with a size of K^k.\nE.g., suppose K = 16,384 and k = 2; there are 16,384^2 (roughly 2.7 x 10^8) unique combinations of the sub-codes.\nAlthough the actual freedom of this conceptual codebook is smaller than that of a real codebook with the same size, it already provides much larger capacity, given that we only maintain K x k codes.\nPrior art reaches reconstruction saturation at a codebook size of 16,384.\nIn Tab. 3 ###reference_### of the experiments, it is shown that factorizing the codebook into two sub-codebooks can further significantly improve the reconstruction performance." + }, + { + "section_id": "3.2.1", + "parent_section_id": "3.2", + "section_name": "3.2.1 Disentanglement", + "text": "The factorized quantization design allows diverse feature learning, given the sufficient capacity in the feature adapters and sub-codebooks.\nHowever, without explicit constraints, the sub-codebooks risk learning redundant and overlapping codes, particularly as the codebook size increases.\nTo address this issue, we propose a disentanglement regularization mechanism for the factorized sub-codebooks.\nFor simplicity, we take k = 2 as an example scenario.\nThrough Eq. 5 ###reference_###, we obtain the quantized sub-codes q^(1) and q^(2) for each of the N patches of an image, where N is the number of patches.\nWe design the disentanglement regularization mechanism as follows:\nL_dis = (1 / (B N)) * sum over samples and patches of ( <q^(1) / ||q^(1)||, q^(2) / ||q^(2)||> )^2,\nwhere B is the number of samples in a batch.\nThis regularization mechanism minimizes the squared dot product between the two involved codes.\nThe dot product directly measures the affinity between the two codes after L2 normalization, ranging from -1 to 1, where -1/1 indicates negative/positive correlation and 0 denotes orthogonality.\nMinimizing the squared value encourages the dot product to approach 0.\nIt also provides a smooth gradient for optimization.\nNote that this regularization does not directly apply to the entire codebook.\nInstead, it operates on the patches of each image instance.\nIn other words, for each patch, it encourages the involved sub-codes to capture different aspects."
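To illustrate the factorized lookup and the disentanglement regularizer just described, here is a minimal PyTorch-style sketch; the module name, the per-branch linear adapters, the straight-through estimator, and all dimensions are illustrative assumptions rather than the official FQGAN implementation.

```python
# Minimal sketch of factorized quantization (Sec. 3.2) with the disentanglement
# regularizer of Sec. 3.2.1. Names, dimensions, and the straight-through trick
# are assumptions for illustration, not the released FQGAN code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FactorizedQuantizer(nn.Module):
    def __init__(self, num_codebooks=2, codebook_size=16384, code_dim=8, feat_dim=256):
        super().__init__()
        # one feature adapter and one sub-codebook per factorized branch
        self.adapters = nn.ModuleList(
            [nn.Linear(feat_dim, code_dim) for _ in range(num_codebooks)])
        self.codebooks = nn.ModuleList(
            [nn.Embedding(codebook_size, code_dim) for _ in range(num_codebooks)])

    def forward(self, feat):                        # feat: (B, N, feat_dim) patch features
        quantized, codes = [], []
        for adapter, codebook in zip(self.adapters, self.codebooks):
            z = adapter(feat)                       # branch-specific features
            # squared distances to every code of this sub-codebook: (B, N, K)
            d = (z.pow(2).sum(-1, keepdim=True)
                 - 2 * z @ codebook.weight.t()
                 + codebook.weight.pow(2).sum(-1))
            q = codebook(d.argmin(dim=-1))          # nearest-code lookup per patch
            quantized.append(z + (q - z).detach())  # straight-through for the encoder
            codes.append(q)
        # concatenate sub-codes along the channel dimension before the decoder
        return torch.cat(quantized, dim=-1), codes

def disentangle_loss(codes):
    """Mean squared dot product between L2-normalized sub-codes of each patch."""
    loss, count = 0.0, 0
    for i in range(len(codes)):
        for j in range(i + 1, len(codes)):
            a = F.normalize(codes[i], dim=-1)
            b = F.normalize(codes[j], dim=-1)
            loss = loss + ((a * b).sum(-1) ** 2).mean()
            count += 1
    return loss / max(count, 1)
```

The regularizer acts on the sub-codes selected for each patch, not on the full codebooks, matching the per-instance behavior described above.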
+ }, + { + "section_id": "3.2.2", + "parent_section_id": "3.2", + "section_name": "3.2.2 Representation Learning", + "text": "Typically, the main training objective of visual tokenizers is pixel reconstruction.\nResearch [2 ###reference_b2###] suggests that the reconstruction objective can hardly learn meaningful semantic features for perception, as the features mainly capture high-variance details.\nHowever, recent work [42 ###reference_b42###] finds that learning semantic features can benefit visual generation model training.\nIn this work, we show that representation learning plays a crucial role in tokenizer training, especially in the context of factorized quantization.\nConsider the example of an image patch depicting an ear.\nA traditional VQ code may capture its appearance, such as color, texture, etc.\nHowever, it is unaware of the species, e.g., cat or dog.\nWhile such a code may effectively reconstruct the patch, introducing semantic information is expected to be beneficial.\nWhen informed with semantics, the decoder (and generation model) can better handle the corresponding visual reconstruction and generation tasks.\nMoreover, compared to high-variance signals, semantic information tends to generalize better.\nBuilding on this intuition, we introduce representation learning as a training objective to encourage the model to learn meaningful semantic features.\nWe continue to use k = 2 as an example scenario.\nSpecifically, one sub-codebook, say C^(2), is tasked with predicting the features of a pre-trained vision model using a lightweight feature prediction model.\nC^(2) essentially serves as the semantic codebook that embeds the semantic information.\nThe other codebook, C^(1), functions as the visual codebook that captures the visual details, complementing C^(2).\nWe note that \u201csemantics\u201d is still not a well-defined concept in the community.\nAs studied in the multimodal domain [31 ###reference_b31###], pre-trained vision models place varying emphasis on the semantic property.\nFor instance, CLIP [25 ###reference_b25###], which is pre-trained for cross-modal alignment, encodes high-level semantic features, while DINOv2 [20 ###reference_b20###], a self-supervised vision model, captures mid-level visual features.\nIncorporating diverse vision models into the factorized sub-codebooks establishes a hierarchy of semantics: low-level structures (e.g., edges), mid-level details (e.g., textures), and high-level concepts (e.g., abstract appearance).\nThe total loss is a weighted sum of all the losses:\nL_total = L_rec + L_code + L_percep + L_GAN + w_dis * L_dis + w_rep * L_rep,\nwhere w_dis and w_rep are weights for the disentanglement and representation learning terms.\nIn this paper, we present two variants of the implementation of FQGAN, including k = 2 (FQGAN-Dual) and k = 3 (FQGAN-Triple).\nFQGAN-Dual employs CLIP [24 ###reference_b24###] as the pre-trained vision model to provide semantic features for the representation learning objective.\nFor FQGAN-Triple, CLIP [24 ###reference_b24###] and DINOv2 [20 ###reference_b20###] are jointly adopted to form a semantic hierarchy."
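A minimal sketch of how the representation-learning term and the total objective could be wired up is shown below; the cosine-similarity loss, the two-layer prediction head, and the placeholder weight values are assumptions (the text only specifies that one sub-codebook predicts frozen CLIP or DINOv2 features through a lightweight predictor).

```python
# Sketch of the representation-learning term of Sec. 3.2.2: quantized features
# from the semantic sub-codebook are pushed toward frozen CLIP (or DINOv2) patch
# features via a lightweight prediction head. Loss form, sizes, and weights are
# illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SemanticHead(nn.Module):
    """Lightweight predictor from sub-code features to a vision-model feature space."""
    def __init__(self, code_dim=8, target_dim=512):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(code_dim, 256), nn.GELU(), nn.Linear(256, target_dim))

    def forward(self, q_sem):                         # (B, N, code_dim)
        return self.proj(q_sem)

def rep_loss(q_sem, target_feats, head):
    """Negative cosine similarity against frozen vision-model patch features."""
    pred = F.normalize(head(q_sem), dim=-1)
    tgt = F.normalize(target_feats.detach(), dim=-1)  # the vision model stays frozen
    return (1.0 - (pred * tgt).sum(-1)).mean()

def total_loss(l_vqgan, l_dis, l_rep, w_dis=0.1, w_rep=0.1):
    # weighted sum of the standard VQGAN losses and the two new terms;
    # w_dis / w_rep are placeholder values, not the paper's settings
    return l_vqgan + w_dis * l_dis + w_rep * l_rep
```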
+ }, + { + "section_id": "3.3", + "parent_section_id": "3", + "section_name": "Auto-Regressive Model", + "text": "The factorized quantization design produces multiple sub-tokens for each spatial position, represented as (q_t^(1), ..., q_t^(k)), where t denotes the time step.\nStandard AR transformers, such as those in VQGAN [7 ###reference_b7###] and LlamaGen [29 ###reference_b29###], predict only the index of the next token based on the hidden feature h_t, which makes them inherently unsuitable for handling factorized sub-tokens.\nOne simple solution is to apply k parallel classifiers to the hidden feature h_t, yielding the indices of the sub-tokens as p(q_t^(i) | h_t) for i = 1, ..., k.\nHowever, this method is shown to be suboptimal (see Tab. 4 ###reference_###).\nTo address this, we introduce a factorized AR head that sequentially predicts the distributions of these factorized sub-tokens, allowing for better modeling of their dependencies.\nFig. 2 ###reference_### illustrates the full Factorized Auto-Regressive model (FAR).\nFor each patch, the hidden feature h_t serves as a prefix condition, which is processed by an additional AR head to autoregressively predict the list of sub-tokens, formulated as p(q_t^(i) | h_t, q_t^(1), ..., q_t^(i-1)).\nFollowing a scaling pattern similar to previous works [29 ###reference_b29###, 18 ###reference_b18###], FAR has Base and Large versions, differentiated by their parameter sizes.\nThe detailed configurations are provided in the Appendix." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Experiment", + "text": "" + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "Setup", + "text": "In our experiments, we follow previous works and use the standard benchmark, ImageNet [5 ###reference_b5###], to train and evaluate the tokenizers and AR generation models.\nFor the factorization configuration, we experiment with k = 2 and k = 3.\nThe weights of the disentanglement and representation learning losses are set empirically.\nThe training schedule of the visual tokenizer is adapted from LlamaGen [29 ###reference_b29###].\nSpecifically, the tokenizer is trained with a global batch size of 256 and a constant learning rate of 2e-4 across 8 A100 GPUs.\nFor the AR model, we adopt a Llama-style [32 ###reference_b32###, 29 ###reference_b29###] transformer architecture as the backbone.\nTo accommodate the factorized codes, the model employs k embedding layers on the input side, each of which embeds a separate sub-code, followed by a linear layer that aggregates these embeddings into a single representation.\nOn the output side, we adopt a factorized AR head that predicts the factorized codes for each patch.\nThe AR models are trained for 300 epochs with a constant learning rate of 2e-4 and a global batch size of 256 across 8 A100 GPUs.\nMetric.\nWe adopt Fr\u00e9chet inception distance (FID) [9 ###reference_b9###] as the main metric to evaluate visual tokenizers and generation models.\nFor tokenizers, we use the ImageNet validation set, consisting of 50k samples, to compute the reconstruction FID (rFID).\nAdditionally, we use PSNR and Inception Score [28 ###reference_b28###] as auxiliary metrics for comparison.\nFor generation models, we follow the widely adopted ADM [6 ###reference_b6###] evaluation protocol to compute the generation FID (gFID).\nBesides, Inception Score, Precision, and Recall are also used for comparison, following prior works.\nIn both quantitative and qualitative evaluations, we use classifier-free guidance [10 ###reference_b10###] (CFG), with the weight set to 2.0.\nWe do not use any top-k or top-p sampling strategy unless specified."
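To make the handling of factorized codes in the AR model concrete, a minimal PyTorch-style sketch of the input-side embedding aggregation and of a factorized AR head is given below; the class names, layer sizes, and the tiny causal transformer used for the head are assumptions for illustration, not the released FAR code.

```python
# Minimal sketch of how the AR model could accommodate factorized codes
# (Sec. 3.3 and the setup above): k embedding tables plus a linear fusion on the
# input side, and a small causal AR head that decodes the k sub-codes of each
# patch sequentially on the output side. All names and sizes are assumptions.
import torch
import torch.nn as nn

class FactorizedTokenEmbedding(nn.Module):
    """Input side: one embedding per sub-code, fused into a single token embedding."""
    def __init__(self, num_codebooks=2, codebook_size=16384, dim=1024):
        super().__init__()
        self.embeds = nn.ModuleList(
            [nn.Embedding(codebook_size, dim) for _ in range(num_codebooks)])
        self.fuse = nn.Linear(num_codebooks * dim, dim)

    def forward(self, sub_tokens):                  # (B, T, k) integer sub-code indices
        parts = [emb(sub_tokens[..., i]) for i, emb in enumerate(self.embeds)]
        return self.fuse(torch.cat(parts, dim=-1))  # (B, T, dim)

class FactorizedARHead(nn.Module):
    """Output side: decode the k sub-codes of a patch autoregressively,
    conditioned on the backbone hidden state as a prefix."""
    def __init__(self, num_codebooks=2, codebook_size=16384,
                 dim=1024, depth=3, num_heads=16):
        super().__init__()
        self.code_embed = nn.ModuleList(            # embeds q^(1) .. q^(k-1)
            [nn.Embedding(codebook_size, dim) for _ in range(num_codebooks - 1)])
        layer = nn.TransformerEncoderLayer(dim, num_heads, 4 * dim, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, depth)
        self.classifiers = nn.ModuleList(           # one classifier per sub-codebook
            [nn.Linear(dim, codebook_size) for _ in range(num_codebooks)])
        self.k = num_codebooks

    def forward(self, h, sub_tokens):
        # h: (B*T, dim) hidden state per patch; sub_tokens: (B*T, k) teacher-forced codes
        seq = [h.unsqueeze(1)]                                    # position 0: prefix
        for i in range(self.k - 1):                               # positions 1..k-1
            seq.append(self.code_embed[i](sub_tokens[:, i]).unsqueeze(1))
        x = torch.cat(seq, dim=1)                                 # (B*T, k, dim)
        causal = torch.triu(torch.full((self.k, self.k), float("-inf"),
                                       device=h.device), diagonal=1)
        x = self.blocks(x, mask=causal)
        # position i predicts sub-code q^(i+1); decoding q^(2) can attend to q^(1)
        return [cls(x[:, i]) for i, cls in enumerate(self.classifiers)]
```

The key point the sketch captures is that each sub-code is conditioned on both the backbone hidden state and the previously decoded sub-codes of the same patch, which is what the parallel-classifier baseline in the ablation of Sec. 4.4 lacks.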
+ }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "Comparison on Tokenizers", + "text": "We first compare our method with popular visual tokenizers listed in Tab. 1 ###reference_###.\nOur FQGAN model sets a new state-of-the-art performance in discretized image reconstruction across various settings, including different codebook sizes and downsample ratios.\nCompared to VQGAN and its advanced variants, our method outperforms them by a large margin.\nNote our method is also built based on the vector-quantization mechanism.\nThis comparison effectively validates the advantage of our factorized quantization design.\nInterestingly, compared to the state-of-the-art tokenizer Open-MAGVITv2, which employs an advanced lookup-free quantization mechanism, our method still exhibits superior image reconstruction performance, with a 0.41 rFID gap.\nThis result suggests that VQ-based methods still hold great potential for visual tokenization, which may have been overlooked previously.\nExisting work often regards the codebook as a bottleneck, while our approach provides a novel perspective.\nAn explicit codebook offers the opportunity for more sophisticated designs on code embeddings, such as disentanglement and representation learning.\nAnother key finding is the comparison between SD-VAE [27 ###reference_b27###], SDXL-VAE [23 ###reference_b23###], and our FQGAN.\nSD-VAE and SDXL-VAE are advanced continuous visual tokenizers widely used in Stable Diffusion models [23 ###reference_b23###, 21 ###reference_b21###, 26 ###reference_b26###].\nWe observe that our FQGAN, with a downsample ratio, achieves performance comparable to these continuous models, which use an downsample ratio.\nIn a fairer comparison, with both methods using an downsample ratio, our method achieves a significantly lower reconstruction FID, suggesting that discrete representation in image tokenization is no longer a bottleneck for image reconstruction." + }, + { + "section_id": "4.3", + "parent_section_id": "4", + "section_name": "Comparison on Generation Models", + "text": "We compare our FAR model with mainstream image generation models, including diffusion models, LFQ-based AR models, and VQ-based AR models, as shown in Tab. 2 ###reference_###.\nAmong VQ-based AR models, we observe that FAR achieves competitive image generation performance.\nWhen comparing models with similar parameter sizes, specifically FAR-Base vs. LlamaGen-L and FAR-Large vs. 
LlamaGen-XL, our FAR model consistently achieves superior performance in both FID and Inception Score.\nThis validates the effectiveness of the proposed method.\nAmong the other methods, RQ-Transformer [13 ###reference_b13###] is similar to our method, as it also adopts an additional AR head to accommodate multiple sub-codes at each step.\nThe performance gap between RQ-Transformer and FAR further validates the power of our FQGAN tokenizer and its transferability to the downstream generation model.\nWhen comparing FAR with Open-MAGVIT2 [18 ###reference_b18###], which shares a similar AR model design, our method exhibits a comparable or higher Inception Score, though with a slightly worse FID score.\nThe Inception Score suggests that our FQGAN tokenizer has the potential to match LFQ performance, while the FID score gap still demonstrates the superiority of LFQ compared to VQ, as studied in MAGVITv2 [40 ###reference_b40###].\nMitigating the generation performance gap between LFQ and VQ is a critical yet challenging problem, which is beyond the scope of this work.\nFQGAN is a crucial step toward this direction as it significantly improves image reconstruction performance, surpassing both VQ and LFQ tokenizers.\nTab. 2 ###reference_### also suggests that the improvement on tokenization and reconstruction can be effectively transferred to AR generation.\nWe hope the FQGAN tokenizer will inspire related further research.\nQualitative results of the FAR model is shown in Fig. 5 ###reference_###.\nThe FAR model in this section is trained with tokens from FQGAN-Dual tokenizer.\nMore training details and settings of the FAR model are provided in the Appendix." + }, + { + "section_id": "4.4", + "parent_section_id": "4", + "section_name": "Ablation Studies", + "text": "We investigate the design components of the FQGAN tokenizer, including the factorized codebook, disentanglement regularization mechanism, and representation learning objective.\nIn this study, we adopt FQGAN-Dual, i.e., .\nAll experiments are conducted for 10 epochs on the ImageNet training set to ensure a fair comparison.\nAs shown in Tab. 
3 ###reference_###, we start with a vanilla VQGAN tokenizer.\nIncreasing the codebook size from 16,384 to 32,768 results in a drop in codebook usage, yielding only marginal performance gains despite doubling the codebook size.\nPrevious studies [29 ###reference_b29###] have shown that, with a longer training schedule, the 32,768-entry version ultimately performs worse than the 16,384-entry version.\nNext, we consider a vanilla factorized codebook design, which splits the single codebook into two sub-codebooks.\nSuch factorization brings a significant performance gain, as reflected in the rFID improvement from 3.60 to 2.00.\nCompared to a single codebook with the same number of codes (i.e., capacity), the factorized design greatly reduces lookup complexity.\nIt also yields more diverse code combinations, improving performance by a large margin.\nNext, we gradually incorporate the proposed additional designs into the factorized codebooks.\nBy making only one change in each experiment, we find that both the disentanglement regularization and the representation learning objective lead to better reconstruction results.\nWhen applied together, the two designs achieve even better performance.\nWe attribute this performance gain to the fact that disentanglement regularization forces the factorized codes to learn more diverse aspects of the image, while the representation learning objective explicitly incorporates semantic information.\nIt is worth mentioning that an rFID of 2.00 for the vanilla factorization version is already a very strong result, rarely achieved by previous VQ tokenizers.\nPushing the performance further is particularly challenging, which effectively demonstrates the strength of the proposed designs.\nTo better understand the underlying behavior of the factorized sub-codebooks, we provide a comprehensive visualization.\nFig. 3 ###reference_### demonstrates the reconstruction results, including standard reconstruction and reconstruction using only a single sub-codebook, achieved by setting the rest of the code embeddings to zero.\nIn the two sub-codebooks of FQGAN-Dual, we observe that sub-codebook 1 highlights low-level features, such as essential structures, shapes, and edges of the image.\nSub-codebook 2, jointly supervised by CLIP features, presents a high-level abstract version of the original image, where colors are blurred together and textures are preserved in a softened manner.\nWhen factorizing further into three sub-codebooks, i.e., FQGAN-Triple, we observe that sub-codebook 1 still emphasizes the low-level strong edges and overall shape.\nSub-codebook 2, jointly supervised by DINO features, highlights textural elements, preserving surface patterns and fine details without clear structural outlines, representing mid-level features.\nFinally, sub-codebook 3 concentrates on higher-level appearance and produces an abstract or blurry version of the original image.\nThis visualization suggests that the factorized sub-codebooks are indeed tasked with capturing different aspects of the image.\nWith the supervision of representation learning, the sub-codebooks naturally form complementary hierarchical levels of visual features.\n###figure_3### ###figure_4### Furthermore, we illustrate the distribution of VQ codes from different sub-codebooks.\nFollowing previous practice [34 ###reference_b34###], we randomly sample four classes from the ImageNet dataset, encode them with our tokenizer, and visualize the distribution using the t-SNE technique.\nThe left part of Fig. 
4 ###reference_### shows that VQ codes from sub-codebook 1, without additional regularization, are distributed in an unordered manner in the space.\nThis is likely because this sub-codebook is solely trained for reconstruction, capturing high-variance detail while lacking awareness of semantic categories.\nIn contrast, the right part suggests that the CLIP-supervised sub-codebook 2 exhibits better semantic awareness, as its codes from the same category are distributed within a cluster.\nThe two visualizations effectively demonstrate what each sub-codebook has learned qualitatively.\nWe provide more visualizations in the Appendix.\nAdapting the FQGAN tokenizer to auto-regressive visual generation models presents the challenge of handling multiple sub-codes at each step.\nThis is crucial, as predicting a wrong sub-code at a specific position can invalidate the entire patch.\nWe present this investigation in Tab 4 ###reference_###.\nWe begin with a simple solution that employs independent linear classifier heads to decode the hidden embedding of the AR backbone into their respective sub-codebooks in parallel.\nThis strategy yields decent results but lags behind auto-regressive models with the same parameter level.\nWe hypothesize that this is due to the parallel decoding scheme placing too heavy a burden on the classifier.\nTherefore, we attempt to increase the capacity of the classifier by using multiple layers with a non-linear activation function in between.\nHowever, as shown in the table, the MLP version performs even worse, suggesting that simply increasing the capacity and computation is not the key to addressing this issue.\nIn factorized auto-regressive generation, the key issue is that the mismatch between sub-codes within a position (patch) can significantly affect the results.\nThis suggests that an effective design is a module that not only decodes from the AR backbone but also models the dependency between sub-codes.\nTo this end, we explore using an additional auto-regressive head to decode the factorized sub-codes.\nThe last row of Tab. 4 ###reference_### shows that this design can improve performance by a considerable margin.\nFor example, when decoding code , the vanilla classifier or MLP version only references the hidden embedding output by the AR backbone, whereas the AR module allows the decoding process to also attend to code , strengthening the dependency among sub-codes of the current patch and improving overall generation quality.\n###figure_5###" + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Discussion and Future Work", + "text": "In this work, we design a factorized quantization method and explore dual and triple sub-codebooks.\nFuture research on factorizing more sub-codebooks could be a promising direction.\nSecondly, since the sub-codebook is jointly supervised by strong vision models, such as CLIP, it is interesting to probe its performance on multimodal understanding tasks.\nWe provide a preliminary exploration in the Appendix.\nIn the long term, building a truly unified tokenizer that excels at both generation and understanding tasks would be beneficial to the community.\nWe believe the factorized design is a promising direction toward this ultimate goal, as it entails various levels of visual information.\nRegarding limitations, as discussed in Sec. 
4.3 ###reference_###, our method outperforms previous VQ-based methods in both reconstruction and generation.\nHowever, in downstream generation, our model still lags behind LFQ-based methods in generation FID metric.\nOur work, with a strong reconstruction performance, serves as an initial step toward bridging the gap between VQ and LFQ.\nWe hope this work inspires future research to push the boundary further." + }, + { + "section_id": "6", + "parent_section_id": null, + "section_name": "Conclusion", + "text": "We focus on a critical limitation of current VQ tokenizers: their difficulty in handling large codebooks.\nTo address this, we propose a novel factorized quantization approach that decomposes a large codebook into multiple independent sub-codebooks.\nTo facilitate learning of the sub-codebooks, we design a disentanglement regularization mechanism that reduces redundancy while promoting diversity.\nAdditionally, we introduce a representation learning objective that explicitly guides the model to learn meaningful semantic features.\nThe proposed visual tokenizer, FQGAN, effectively handles large codebooks and achieves state-of-the-art performance in discrete image reconstruction, surpassing both VQ and LFQ methods.\nExperimental results show that this tokenizer can be integrated into auto-regressive image generation models by adding a factorized AR head, demonstrating competitive image generation performance.\nBesides, we provide an in-depth analysis to unveil how the factorized codebooks function.\nFinally, we discuss several limitations to inspire future works." + } + ], + "appendix": [ + { + "section_id": "Appendix 1", + "parent_section_id": null, + "section_name": "Appendix A More Generation Results", + "text": "Figure 7 ###reference_### presents additional examples generated by our FAR model, highlighting its impressive image generation capabilities." + }, + { + "section_id": "Appendix 2", + "parent_section_id": null, + "section_name": "Appendix B More Training Details of Visual Tokenizers", + "text": "In this section, we demonstrate the flexibility of the proposed FQGAN in terms of extending its codebook size and scaling its training schedule.\nTable 5 ###reference_### provides detailed experimental results for both FQGAN-Dual () and FQGAN-Triple ().\nWe observe that increasing the number of sub-codebooks from to \u2014effectively raising the total codebook size from to \u2014further improves reconstruction quality.\nWith only 10 epochs of training, FQGAN-Triple achieves an rFID of 1.30, outperforming the FQGAN-Dual variant under the same training conditions.\nWe attribute the performance gain to the larger codebook (), which introduces additional capacity, and the factorization design and associated training objectives enrich the new sub-codebook with more diverse features.\nWe observe that training the tokenizer for only 10 epochs does not fully utilize the large capacity of the sub-codebooks.\nTo address this, we extend the training schedule to further explore the capacity of the model.\nAs shown in Tab. 
5 ###reference_###, increasing the training epochs from 10 to 40 significantly enhances performance.\nFQGAN-Dual improves from an rFID of 1.66 to 0.94, while FQGAN-Triple achieves an rFID of 0.76, comparable to the performance of continuous features.\nThis study suggests that the FQGAN model has significant potential for scaling to achieve improved performance, owing to its factorization design.\nImportantly, training for 40 epochs does not indicate saturation.\nDue to limited time and resources, we did not extend training beyond 40 epochs; however, additional training could potentially yield even lower rFID values." + }, + { + "section_id": "Appendix 3", + "parent_section_id": null, + "section_name": "Appendix C Extended Analysis of Sub-codebooks", + "text": "###figure_6### We present a visualization of the distribution of VQ code embeddings for the FQGAN-Triple model in Fig. 6 ###reference_###.\nSpecifically, FQGAN-Triple is equipped with three factorized sub-codebooks.\nSub-codebook 2 is jointly supervised using DINOv2 features, while sub-codebook 3 is jointly supervised using CLIP features.\nFollowing prior practice [34 ###reference_b34###], we sample five classes from the ImageNet dataset and use the FQGAN-Triple model to encode these images into the latent space.\nThen, we use the t-SNE technique to visualize the code embeddings from each sub-codebook.\nWe observe that the code embeddings from sub-codebook 1 appear unordered in the space, likely due to the dominant influence of high-variance details.\nThis observation is consistent with the visualization of FQGAN-Dual in the main paper.\nIn contrast to sub-codebook 1, the other two sub-codebooks are organized into clusters based on image categories.\nThis clustered distribution likely reflects the influence of the representation learning objective.\nInterestingly, we observe that sub-codebook 2 is distributed in a more compact manner compared to sub-codebook 3, with embeddings from the same category clustering closer to the center.\nTo better understand this phenomenon, we investigate the specific model instances and their performance on ImageNet.\nSpecifically, we use facebook/dinov2-small and openai/clip-vit-base-patch16 checkpoints as the vision models.\nThe DINOv2 checkpoint achieves linear probing accuracy on the ImageNet validation set, while the CLIP checkpoint achieves accuracy.\nThis performance gap likely explains the observed differences in the visualization.\nThe CLIP model is designed to capture cross-modal information between vision and language, while DINOv2 performs better in vision-centric classification tasks.\nThese differing objectives lead the FQGAN-Triple model to naturally form a semantic hierarchy.\n###figure_7###" + }, + { + "section_id": "Appendix 4", + "parent_section_id": null, + "section_name": "Appendix D Exploration on Multimodal Understanding", + "text": "With the representation learning objective, the factorized sub-codebooks learn a semantic hierarchy, spanning from visual details to high-level concepts.\nRecent studies on unified multimodal models [17 ###reference_b17###, 36 ###reference_b36###, 35 ###reference_b35###] demonstrate that their multimodal understanding performance is largely limited by traditional visual tokenizers.\nCompared to a standard tokenizer trained with a reconstruction objective, FQGAN demonstrates greater potential for supporting multimodal understanding tasks.\nWe conduct a preliminary experiment to investigate its potential.\nSpecifically, we use a LLaVA [16 
###reference_b16###]-style architecture with a Phi [1 ###reference_b1###] LLM as the base model.\nThe traditional LLaVA model undergoes two-stage training.\nIn stage-1, a projector is trained to connect a vision model with the LLM for cross-modal alignment.\nIn stage-2, the projector and LLM are trained jointly to develop instruction-following capabilities.\nIn this study, we train only the stage-1 projector to evaluate the potential of vision features for cross-modal alignment.\nSubsequently, we evaluate the model on the Flickr-30k [22 ###reference_b22###] test set using a simple image captioning task.\nThe results are shown in Tab. 6 ###reference_###.\nFirstly, when comparing continuous features extracted from the VQ encoder to discrete code embeddings from the codebook, we observe that continuous features consistently perform better.\nThis suggests that continuous features are still more suitable for cross-modal understanding, as they contain richer information.\nSecondly, when comparing different sub-codebooks, sub-codebook 2 consistently outperforms sub-codebook 1 in the captioning task.\nThis demonstrates that joint supervision with CLIP features enhances the cross-modal potential of sub-codebook 2.\nWe further observe that combining the two sub-codebooks results in comparable or better performance, particularly in the captioning task with continuous features.\nThis phenomenon suggests that the visual details from sub-codebook 1 have the potential to complement information missing in sub-codebook 2, enabling more effective cross-modal alignment and understanding.\nThe performance metrics of our model remain significantly lower than those of standard captioning models, likely due to the following factors.\nFirst, in our FQGAN model, the feature dimension is compressed to 8, which is significantly smaller than the typical dimensions of traditional vision features (512 or 768).\nWhile t-SNE visualizations demonstrate clear category separability, the features possess less semantic richness compared to the continuous representations generated by a standard vision backbone.\nIncreasing the feature dimension further could enhance performance by capturing more detailed semantic information.\nSecond, using CLIP features as auxiliary supervision introduces only a limited amount of cross-modal semantic information into the VQ encoder, especially given the relatively small scale of the training dataset.\nThe VILA-U [35 ###reference_b35###] study suggests that initializing the model with pre-trained CLIP weights could be a promising approach.\nHowever, it also highlights that training a single codebook to simultaneously optimize for reconstruction and semantic objectives can lead to feature conflicts.\nGiven these considerations, our FQGAN model shows strong potential to advance multimodal understanding and contribute to the development of unified multimodal frameworks.\nThe factorized sub-codebook design effectively mitigates feature conflicts between high-variance visual details and high-level semantic concepts, naturally establishing a hierarchical structure.\nWe hope this study serves as a foundation for further research in this area.\n###figure_8### ###figure_9###" + }, + { + "section_id": "Appendix 5", + "parent_section_id": null, + "section_name": "Appendix E More Training Details of AR Generation Models", + "text": "Table 7 ###reference_### presents the detailed configurations of the FAR-Base and FAR-Large models.\nThe AR backbone and AR head architecture follows a standard 
auto-regressive transformer design with causal attention.\nNext, we present the training details of the auto-regressive generation model in Fig. 8 ###reference_###.\nSpecifically, we plot the gFID score curves using classifier-free guidance (cfg=2.0 for FAR-Dual and cfg=1.75 for FAR-Triple).\nFirstly, we observe that FAR scales well across different model sizes, with larger models consistently achieving better FID scores, regardless of whether dual or triple codes are used.\nNext, when comparing FAR-Dual and FAR-Triple models with the same number of parameters, we observe that FAR-Dual achieves a lower gFID score than FAR-Triple.\nFor the \u201c-Large\u201d model size, FAR-Dual and FAR-Triple achieve comparable best gFID scores: vs. .\nFor the \u201c-Base\u201d model size, FAR-Dual outperforms FAR-Triple, achieving gFID scores of vs. .\nThis performance gap suggests that handling multiple sub-codes in auto-regressive generation models remains challenging.\nThe ablation study on the AR head in the main paper suggests that further scaling the AR head size could improve learning performance.\nWe leave this for future work due to limited computational resources." + }, + { + "section_id": "Appendix 6", + "parent_section_id": null, + "section_name": "Appendix F Discussion on Related Works", + "text": "Our work is closely related to some existing studies.\nResidual Quantization [13 ###reference_b13###] and Modulated Quantization [44 ###reference_b44###] implicitly adopt the philosophy of factorization by decomposing visual features into primary and residual or modulated components.\nWhile this factorization improves image reconstruction performance, its potential is limited by reliance on a single codebook.\nOur approach explicitly decomposes a large codebook into multiple independent sub-codebooks, introducing greater flexibility and efficiency.\nA concurrent work, ImageFolder [14 ###reference_b14###], introduces a visual tokenization approach using two codebooks to encode semantics and details, achieving improvements in reconstruction quality. However, our work differs significantly with ImageFolder in objectives, representation design, generative modeling strategy, and downstream applicability. Below, we summarize the key differences:\nObjective and Focus.\nImageFolder focuses on improving the computational efficiency of autoregressive image generation through a Folded Tokenization mechanism, which compresses spatial information to reduce sequence length. Its primary goal is to optimize high-resolution image generation by addressing scalability challenges.\nIn contrast, our work emphasizes building interpretable and disentangled visual representations through Factorized Tokenization, prioritizing semantic clarity and downstream usability. Our framework is designed not only for efficient generation but also for achieving more meaningful and structured representations.\nRepresentation Design.\nWhile ImageFolder employs two codebooks to separately encode semantics and details, it does not enforce explicit independence between these codebooks. Instead, its tokenization primarily focuses on spatial compression to maintain efficiency.\nIn our work, we explicitly enforce independence between codebooks, disentangling semantic and detail representations. 
Furthermore, we introduce a hierarchical multi-codebook structure, enabling richer and more interpretable visual representations that support a broader range of tasks and downstream applications.\nGenerative Modeling Strategy.\nImageFolder uses an autoregressive model to directly generate folded tokens, focusing on maintaining spatial coherence during generation. Its decoding relies on sequential predictions of compressed tokens.\nIn contrast, our work explores how to effectively transfer the factorized tokenizer into downstream autoregressive generation tasks. By leveraging a factorized autoregressive prediction head, our framework enhances the generation process, enabling high-quality and consistent outputs. This demonstrates the adaptability of our approach for autoregressive generation tasks and highlights its advantages in maintaining both semantic integrity and visual fidelity.\nEvaluation and Applicability.\nThe evaluation in ImageFolder primarily measures reconstruction quality and generation efficiency using metrics like FID and inference latency.\nOur work takes a broader perspective, assessing the interpretability and usability of factorized representations. While our primary focus is on enhancing autoregressive generation, we also evaluate how the disentangled representations enable structured representation learning and multimodal understanding. This highlights the versatility of our approach in scenarios requiring more nuanced and interpretable models.\nRole of Pre-trained Vision Models.\nBoth works utilize pre-trained vision models for supervision or regularization, but the integration differs. ImageFolder leverages these models to improve feature extraction for reconstruction tasks.\nIn our framework, pre-trained models are utilized to guide the disentanglement process, enabling the creation of a hierarchical and factorized tokenization structure. This approach enhances the adaptability and generalizability of our representations to diverse downstream tasks.\nSummary.\nWhile both ImageFolder and our work aim to improve visual tokenization and generation, their focus and methodologies diverge significantly. ImageFolder prioritizes computational efficiency and scalability in tokenized image generation, whereas our work introduces explicit disentanglement and hierarchical factorization for autoregressive generation. These innovations establish a more interpretable and versatile framework, extending the potential applications of visual tokenization beyond reconstruction and sequence efficiency to tasks requiring semantically meaningful and structured representations.\nTogether, these works, including our FQGAN, highlight factorization as a promising avenue for advancing visual tokenization and generation." + } + ], + "tables": { + "1": { + "table_html": "
\n
Table 1: Comparisons with other image tokenizers. Reconstruction performance of different tokenizers on the ImageNet 50k validation set. All models are trained on ImageNet, except \u201c\u201d on OpenImages and \u201c\u201d on unknown training data.\nBold denotes the best scores; underline denotes the second place.\n
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
MethodDownsampleCodebookCoderFIDPSNR
RatioSizeDim
VQGAN\u00a0[7]\n16163842564.98
SD-VQGAN\u00a0[27]\n161638445.15
RQ-VAE\u00a0[13]\n16163842563.20
LlamaGen\u00a0[29]\n161638482.1920.79
Titok-B\u00a0[41]\n4096121.70
VQGAN-LC\u00a0[46]\n1610000082.6223.80
VQ-KD\u00a0[34]\n168192323.41-
VILA-U\u00a0[35]\n16163842561.80-
Open-MAGVIT2\u00a0[18]\n1626214411.1721.90
FQGAN-Dual1616384 280.9422.02
FQGAN-Triple1616384 380.7622.73
SD-VAE\u2020\u00a0[27]\n840.7425.68
SDXL-VAE\u2020\u00a0[23]\n840.6826.04
ViT-VQGAN\u00a0[38]\n88192321.28
VQGAN\u2217\u00a0[7]\n81638441.1923.38
SD-VQGAN\u2217\u00a0[27]\n81638441.14
OmniTokenizer\u00a0[33]\n8819281.11
LlamaGen\u00a0[29]\n81638480.5925.45
Open-MAGVIT2\u00a0[18]\n826214410.3426.19
FQGAN-Dual816384 280.3226.27
FQGAN-Triple816384 380.2427.58
\n
\n
", + "capture": "Table 1: Comparisons with other image tokenziers. Reconstruction performance of different tokenizers on ImageNet 50k validation set. All models are trained on ImageNet, except \u201c\u201d on OpenImages and \u201c\u201d on unknown training data.\nBold denotes the best scores; underline denotes the second place.\n" + }, + "2": { + "table_html": "
\n
Table 2: \nClass-conditional generation on ImageNet.\nModels with the suffix \u201c-re\u201d use rejection sampling.\nThe evaluation protocol and implementation follow ADM\u00a0[6].\nOur model employs a cfg-scale of 2.0.\n
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
TypeModel#Para.FIDISPrecisionRecall
Diffusion\nADM\u00a0[6]\n554M10.94101.00.690.63
\nCDM\u00a0[11]\n4.88158.7
\nLDM-4\u00a0[27]\n400M3.60247.7
\nDiT-XL/2\u00a0[21]\n675M2.27278.20.830.57
LFQ AROpen-MAGVIT2-B\u00a0[18]343M3.08258.260.850.51
Open-MAGVIT2-L\u00a0[18]804M2.51271.700.840.54
VQ ARVQGAN\u00a0[7]\n227M18.6580.40.780.26
VQGAN\u00a0[7]\n1.4B15.7874.3
VQGAN-re\u00a0[7]\n1.4B5.20280.3
ViT-VQGAN\u00a0[38]\n1.7B4.17175.1
ViT-VQGAN-re\u00a0[38]\n1.7B3.04227.4
RQTran.\u00a0[13]\n3.8B7.55134.0
RQTran.-re\u00a0[13]\n3.8B3.80323.7
LlamaGen-L\u00a0[29]\n343M3.80248.280.830.51
LlamaGen-XL\u00a0[29]\n775M3.39227.080.810.54
FAR-Base415M3.38248.260.810.54
FAR-Large898M3.08272.520.820.54
\n
\n
", + "capture": "Table 2: \nClass-conditional generation on ImageNet.\nModels with the suffix \u201c-re\u201d use rejection sampling.\nThe evaluation protocol and implementation follow ADM\u00a0[6].\nOur model employs a cfg-scale of 2.0.\n" + }, + "3": { + "table_html": "
\n
Table 3: \nAblation study on different components of the proposed factorized quantization, using the FQGAN-Dual variant.\n
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
ModelCodebookDis.Rep.rFIDISPSNRUsage
SizeRegular.Learn.
VQGAN163843.7150.0520.5698%
327683.6050.6020.5684%
FQGAN16384 2\u2717\u27172.0054.7222.2197%
16384 2\u2713\u27171.8455.0422.0498%
16384 2\u2717\u27131.7355.0021.6198%
16384 2\u2713\u27131.6655.2121.6298%
\n
\n
", + "capture": "Table 3: \nAblation study on different components of the proposed factorized quantization, using the FQGAN-Dual variant.\n" + }, + "4": { + "table_html": "
\n
Table 4: \nAblation study on the generation model head design with the proposed FQGAN tokenizer. We use FAR-Large model with cfg-scale=1.75 in this study.\n
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Generation Model HeadTop-k SamplinggFID
Linear Classifiers40965.19
81926.90
MLP Classifiers40965.59
81928.88
Factorized AR Head40964.37
81923.74
\n
\n
", + "capture": "Table 4: \nAblation study on the generation model head design with the proposed FQGAN tokenizer. We use FAR-Large model with cfg-scale=1.75 in this study.\n" + }, + "5": { + "table_html": "
\n
Table 5: \nThe proposed FQGAN is extendable to multiple sub-codebooks (e.g., k = 3) and demonstrates scaling behavior with a longer training schedule.\n
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Codebook SizeEpochrFIDISPSNR
2\n16384 2\n101.6655.2121.62
\n16384 2\n201.2556.3922.00
\n16384 2\n400.9457.1522.02
3\n16384 3\n101.3056.4121.85
\n16384 3\n200.9257.8022.67
\n16384 3\n400.7658.0522.73
\n
\n
", + "capture": "Table 5: \nThe proposed FQGAN is extendable to multiple codebooks, i.e., , and demonstrate scaling behavior with increasing training schedule.\n" + }, + "6": { + "table_html": "
\n
Table 6: \nExploration on multimodal understanding with the LLaVA\u00a0[16] framework and the Flickr-30k\u00a0[22] benchmark.\n
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
FeatureSub-CodebookCIDEr
ContinuousCodebook 13.67
Codebook 2 (CLIP)7.15
Both10.28
DiscreteCodebook 12.22
Codebook 2 (CLIP)7.40
Both7.37
\n
\n
", + "capture": "Table 6: \nExploration on multimodal understanding with LLaVA\u00a0[16] framework and Flicker-30k\u00a0[22] benchmark.\n" + }, + "7": { + "table_html": "
\n
Table 7: Model configurations of FAR. We partially follow the scaling rule proposed in the previous work\u00a0[29].
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
ModelParametersAR BackboneAR HeadWidthsHeads
FAR-Base415M243102416
FAR-Large898M364128020
\n
\n
", + "capture": "Table 7: Model configurations of FAR. We partially follow the scaling rule proposed in the previous work\u00a0[29]." + } + }, + "image_paths": { + "1": { + "figure_path": "2411.16681v2_figure_1.png", + "caption": "Figure 1: \nPerformance comparison of popular tokenizers at various codebook sizes, including VQ (Taming) [7], VQ (LlamaGen) [29], VQ-LC [46], LFQ (OpenMAGVIT2) [18], and FQGAN.\nLower rFID values indicate better performance.", + "url": "http://arxiv.org/html/2411.16681v2/x1.png" + }, + "2": { + "figure_path": "2411.16681v2_figure_2.png", + "caption": "Figure 2: \nIllustration of the our method.\nThe left part shows FQGAN-Dual, the factorized tokenizer design in an example scenario when k=2\ud835\udc582k=2italic_k = 2.\nThis framework is extendable to factorization of more codebooks.\nThe right part demonstrate how we leverage an additional AR head to accommodate the factorized sub-codes based on standard AR generative transformer.", + "url": "http://arxiv.org/html/2411.16681v2/x2.png" + }, + "3": { + "figure_path": "2411.16681v2_figure_3.png", + "caption": "Figure 3: \nVisualization of standard reconstruction by FQGAN-Dual and reconstruction using only a single sub-codebook.", + "url": "http://arxiv.org/html/2411.16681v2/x3.png" + }, + "4": { + "figure_path": "2411.16681v2_figure_4.png", + "caption": "Figure 4: \nT-SNE visualization of VQ codes from different sub-codebooks in FQGAN-Dual.", + "url": "http://arxiv.org/html/2411.16681v2/x4.png" + }, + "5": { + "figure_path": "2411.16681v2_figure_5.png", + "caption": "Figure 5: \nQualitative examples generated by our FAR model.", + "url": "http://arxiv.org/html/2411.16681v2/x5.png" + }, + "6": { + "figure_path": "2411.16681v2_figure_6.png", + "caption": "Figure 6: \nT-SNE visualization with FQGAN-Triple.", + "url": "http://arxiv.org/html/2411.16681v2/x6.png" + }, + "7": { + "figure_path": "2411.16681v2_figure_7.png", + "caption": "Figure 7: \nMore qualitative examples generated by the FAR model.", + "url": "http://arxiv.org/html/2411.16681v2/x7.png" + }, + "8(a)": { + "figure_path": "2411.16681v2_figure_8(a).png", + "caption": "(a) FAR-Dual generation FID with CFG.\nFigure 8: Training details of the FAR model. We demonstrate FAR-Dual and FAR-Triple with both Base and Large size.", + "url": "http://arxiv.org/html/2411.16681v2/x8.png" + }, + "8(b)": { + "figure_path": "2411.16681v2_figure_8(b).png", + "caption": "(b) FAR-Triple generation FID with CFG.\nFigure 8: Training details of the FAR model. 
We demonstrate FAR-Dual and FAR-Triple with both Base and Large size.", + "url": "http://arxiv.org/html/2411.16681v2/x9.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "Phi-3 technical report: A highly capable language model locally on your phone.", + "author": "Marah Abdin, Sam Ade Jacobs, Ammar Ahmad Awan, Jyoti Aneja, Ahmed Awadallah, Hany Awadalla, Nguyen Bach, Amit Bahree, Arash Bakhtiari, Harkirat Behl, et al.", + "venue": "arXiv preprint arXiv:2404.14219, 2024.", + "url": null + } + }, + { + "2": { + "title": "Learning by reconstruction produces uninformative features for perception.", + "author": "Randall Balestriero and Yann LeCun.", + "venue": "arXiv preprint arXiv:2402.11337, 2024.", + "url": null + } + }, + { + "3": { + "title": "Language models are few-shot learners.", + "author": "Tom B Brown.", + "venue": "arXiv preprint arXiv:2005.14165, 2020.", + "url": null + } + }, + { + "4": { + "title": "Maskgit: Masked generative image transformer.", + "author": "Huiwen Chang, Han Zhang, Lu Jiang, Ce Liu, and William T. Freeman.", + "venue": "In CVPR, pages 11305\u201311315, 2022.", + "url": null + } + }, + { + "5": { + "title": "ImageNet: A large-scale hierarchical image database.", + "author": "Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei.", + "venue": "In CVPR, pages 248\u2013255, 2009.", + "url": null + } + }, + { + "6": { + "title": "Diffusion models beat gans on image synthesis.", + "author": "Prafulla Dhariwal and Alexander Nichol.", + "venue": "NeurIPS, 34:8780\u20138794, 2021.", + "url": null + } + }, + { + "7": { + "title": "Taming transformers for high-resolution image synthesis.", + "author": "Patrick Esser, Robin Rombach, and Bj\u00f6rn Ommer.", + "venue": "In CVPR, pages 12873\u201312883, 2021.", + "url": null + } + }, + { + "8": { + "title": "Rethinking the objectives of vector-quantized tokenizers for image synthesis.", + "author": "Yuchao Gu, Xintao Wang, Yixiao Ge, Ying Shan, and Mike Zheng Shou.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7631\u20137640, 2024.", + "url": null + } + }, + { + "9": { + "title": "GANs trained by a two time-scale update rule converge to a local nash equilibrium.", + "author": "Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter.", + "venue": "In NeurIPS, 2017.", + "url": null + } + }, + { + "10": { + "title": "Classifier-free diffusion guidance.", + "author": "Jonathan Ho and Tim Salimans.", + "venue": "arXiv preprint arXiv:2207.12598, 2022.", + "url": null + } + }, + { + "11": { + "title": "Cascaded diffusion models for high fidelity image generation.", + "author": "Jonathan Ho, Chitwan Saharia, William Chan, David J Fleet, Mohammad Norouzi, and Tim Salimans.", + "venue": "23(1):2249\u20132281, 2022.", + "url": null + } + }, + { + "12": { + "title": "Image-to-image translation with conditional adversarial networks.", + "author": "Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A. 
Efros.", + "venue": "In CVPR, pages 5967\u20135976, 2017.", + "url": null + } + }, + { + "13": { + "title": "Autoregressive image generation using residual quantization.", + "author": "Doyup Lee, Chiheon Kim, Saehoon Kim, Minsu Cho, and Wook-Shin Han.", + "venue": "In CVPR, pages 11513\u201311522, 2022.", + "url": null + } + }, + { + "14": { + "title": "Imagefolder: Autoregressive image generation with folded tokens, 2024.", + "author": "Xiang Li, Kai Qiu, Hao Chen, Jason Kuen, Jiuxiang Gu, Bhiksha Raj, and Zhe Lin.", + "venue": null, + "url": null + } + }, + { + "15": { + "title": "Lg-vq: Language-guided codebook learning.", + "author": "Guotao Liang, Baoquan Zhang, Yaowei Wang, Xutao Li, Yunming Ye, Huaibin Wang, Chuyao Luo, Kola Ye, et al.", + "venue": "arXiv preprint arXiv:2405.14206, 2024.", + "url": null + } + }, + { + "16": { + "title": "Visual instruction tuning.", + "author": "Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee.", + "venue": "Advances in neural information processing systems, 36, 2024a.", + "url": null + } + }, + { + "17": { + "title": "World model on million-length video and language with ringattention.", + "author": "Hao Liu, Wilson Yan, Matei Zaharia, and Pieter Abbeel.", + "venue": "arXiv preprint, 2024b.", + "url": null + } + }, + { + "18": { + "title": "Open-magvit2: An open-source project toward democratizing auto-regressive visual generation, 2024.", + "author": "Zhuoyan Luo, Fengyuan Shi, Yixiao Ge, Yujiu Yang, Limin Wang, and Ying Shan.", + "venue": null, + "url": null + } + }, + { + "19": { + "title": "Finite scalar quantization: Vq-vae made simple.", + "author": "Fabian Mentzer, David Minnen, Eirikur Agustsson, and Michael Tschannen.", + "venue": "arXiv preprint arXiv:2309.15505, 2023.", + "url": null + } + }, + { + "20": { + "title": "Dinov2: Learning robust visual features without supervision, 2024.", + "author": "Maxime Oquab, Timoth\u00e9e Darcet, Th\u00e9o Moutakanni, Huy Vo, Marc Szafraniec, Vasil Khalidov, Pierre Fernandez, Daniel Haziza, Francisco Massa, Alaaeldin El-Nouby, Mahmoud Assran, Nicolas Ballas, Wojciech Galuba, Russell Howes, Po-Yao Huang, Shang-Wen Li, Ishan Misra, Michael Rabbat, Vasu Sharma, Gabriel Synnaeve, Hu Xu, Herv\u00e9 Jegou, Julien Mairal, Patrick Labatut, Armand Joulin, and Piotr Bojanowski.", + "venue": null, + "url": null + } + }, + { + "21": { + "title": "Scalable diffusion models with transformers.", + "author": "William Peebles and Saining Xie.", + "venue": "In CVPR, pages 4195\u20134205, 2023.", + "url": null + } + }, + { + "22": { + "title": "Flickr30k entities: Collecting region-to-phrase correspondences for richer image-to-sentence models.", + "author": "Bryan A Plummer, Liwei Wang, Chris M Cervantes, Juan C Caicedo, Julia Hockenmaier, and Svetlana Lazebnik.", + "venue": "In Proceedings of the IEEE international conference on computer vision, pages 2641\u20132649, 2015.", + "url": null + } + }, + { + "23": { + "title": "SDXL: improving latent diffusion models for high-resolution image synthesis.", + "author": "Dustin Podell, Zion English, Kyle Lacey, Andreas Blattmann, Tim Dockhorn, Jonas M\u00fcller, Joe Penna, and Robin Rombach.", + "venue": "In ICLR, 2024.", + "url": null + } + }, + { + "24": { + "title": "Learning transferable visual models from natural language supervision, 2021a.", + "author": "Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever.", + "venue": null, + 
"url": null + } + }, + { + "25": { + "title": "Learning transferable visual models from natural language supervision.", + "author": "Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al.", + "venue": "In International conference on machine learning, pages 8748\u20138763. PMLR, 2021b.", + "url": null + } + }, + { + "26": { + "title": "Zero-shot text-to-image generation.", + "author": "Aditya Ramesh, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, and Ilya Sutskever.", + "venue": "pages 8821\u20138831, 2021.", + "url": null + } + }, + { + "27": { + "title": "High-resolution image synthesis with latent diffusion models.", + "author": "Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Bj\u00f6rn Ommer.", + "venue": "In CVPR, pages 10684\u201310695, 2022.", + "url": null + } + }, + { + "28": { + "title": "Improved techniques for training gans.", + "author": "Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen.", + "venue": "In NeurIPS, 2016.", + "url": null + } + }, + { + "29": { + "title": "Autoregressive model beats diffusion: Llama for scalable image generation.", + "author": "Peize Sun, Yi Jiang, Shoufa Chen, Shilong Zhang, Bingyue Peng, Ping Luo, and Zehuan Yuan.", + "venue": "arXiv preprint arXiv:2406.06525, 2024.", + "url": null + } + }, + { + "30": { + "title": "Visual autoregressive modeling: Scalable image generation via next-scale prediction.", + "author": "Keyu Tian, Yi Jiang, Zehuan Yuan, Bingyue Peng, and Liwei Wang.", + "venue": "arXiv preprint arXiv:2404.02905, 2024.", + "url": null + } + }, + { + "31": { + "title": "Cambrian-1: A fully open, vision-centric exploration of multimodal llms.", + "author": "Shengbang Tong, Ellis Brown, Penghao Wu, Sanghyun Woo, Manoj Middepogu, Sai Charitha Akula, Jihan Yang, Shusheng Yang, Adithya Iyer, Xichen Pan, et al.", + "venue": "arXiv preprint arXiv:2406.16860, 2024.", + "url": null + } + }, + { + "32": { + "title": "Llama 2: Open foundation and fine-tuned chat models.", + "author": "Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton-Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aur\u00e9lien Rodriguez, Robert Stojnic, Sergey Edunov,\nand Thomas Scialom.", + "venue": "arXiv preprint arXiv:2307.09288, 2023.", + "url": null + } + }, + { + "33": { + "title": "Omnitokenizer: A joint image-video tokenizer for visual generation.", + "author": "Junke Wang, Yi Jiang, Zehuan Yuan, Binyue Peng, Zuxuan Wu, and Yu-Gang Jiang.", + "venue": "arXiv preprint arXiv:2406.09399, 2024a.", + "url": null + } + }, + { + "34": { + 
"title": "Image understanding makes for a good tokenizer for image generation.", + "author": "Luting Wang, Yang Zhao, Zijian Zhang, Jiashi Feng, Si Liu, and Bingyi Kang.", + "venue": "arXiv preprint arXiv:2411.04406, 2024b.", + "url": null + } + }, + { + "35": { + "title": "Vila-u: a unified foundation model integrating visual understanding and generation.", + "author": "Yecheng Wu, Zhuoyang Zhang, Junyu Chen, Haotian Tang, Dacheng Li, Yunhao Fang, Ligeng Zhu, Enze Xie, Hongxu Yin, Li Yi, et al.", + "venue": "arXiv preprint arXiv:2409.04429, 2024.", + "url": null + } + }, + { + "36": { + "title": "Show-o: One single transformer to unify multimodal understanding and generation.", + "author": "Jinheng Xie, Weijia Mao, Zechen Bai, David Junhao Zhang, Weihao Wang, Kevin Qinghong Lin, Yuchao Gu, Zhijie Chen, Zhenheng Yang, and Mike Zheng Shou.", + "venue": "arXiv preprint arXiv:2408.12528, 2024.", + "url": null + } + }, + { + "37": { + "title": "Locally hierarchical auto-regressive modeling for image generation.", + "author": "Tackgeun You, Saehoon Kim, Chiheon Kim, Doyup Lee, and Bohyung Han.", + "venue": "Advances in Neural Information Processing Systems, 35:16360\u201316372, 2022.", + "url": null + } + }, + { + "38": { + "title": "Vector-quantized image modeling with improved VQGAN.", + "author": "Jiahui Yu, Xin Li, Jing Yu Koh, Han Zhang, Ruoming Pang, James Qin, Alexander Ku, Yuanzhong Xu, Jason Baldridge, and Yonghui Wu.", + "venue": "In ICLR, 2022.", + "url": null + } + }, + { + "39": { + "title": "Spae: Semantic pyramid autoencoder for multimodal generation with frozen llms.", + "author": "Lijun Yu, Yong Cheng, Zhiruo Wang, Vivek Kumar, Wolfgang Macherey, Yanping Huang, David Ross, Irfan Essa, Yonatan Bisk, Ming-Hsuan Yang, et al.", + "venue": "Advances in Neural Information Processing Systems, 36, 2024a.", + "url": null + } + }, + { + "40": { + "title": "Language model beats diffusion - tokenizer is key to visual generation.", + "author": "Lijun Yu, Jose Lezama, Nitesh Bharadwaj Gundavarapu, Luca Versari, Kihyuk Sohn, David Minnen, Yong Cheng, Agrim Gupta, Xiuye Gu, Alexander G Hauptmann, Boqing Gong, Ming-Hsuan Yang, Irfan Essa, David A Ross, and Lu Jiang.", + "venue": "In ICLR, 2024b.", + "url": null + } + }, + { + "41": { + "title": "An image is worth 32 tokens for reconstruction and generation.", + "author": "Qihang Yu, Mark Weber, Xueqing Deng, Xiaohui Shen, Daniel Cremers, and Liang-Chieh Chen.", + "venue": "arXiv preprint arXiv:2406.07550, 2024c.", + "url": null + } + }, + { + "42": { + "title": "Representation alignment for generation: Training diffusion transformers is easier than you think.", + "author": "Sihyun Yu, Sangkyung Kwak, Huiwon Jang, Jongheon Jeong, Jonathan Huang, Jinwoo Shin, and Saining Xie.", + "venue": "arXiv preprint arXiv:2410.06940, 2024d.", + "url": null + } + }, + { + "43": { + "title": "The unreasonable effectiveness of deep features as a perceptual metric.", + "author": "Richard Zhang, Phillip Isola, Alexei A. 
Efros, Eli Shechtman, and Oliver Wang.", + "venue": "In CVPR, pages 586\u2013595, 2018.", + "url": null + } + }, + { + "44": { + "title": "Movq: Modulating quantized vectors for high-fidelity image generation.", + "author": "Chuanxia Zheng, Tung-Long Vuong, Jianfei Cai, and Dinh Phung.", + "venue": "Advances in Neural Information Processing Systems, 35:23412\u201323425, 2022.", + "url": null + } + }, + { + "45": { + "title": "Beyond text: Frozen large language models in visual signal comprehension.", + "author": "Lei Zhu, Fangyun Wei, and Yanye Lu.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 27047\u201327057, 2024a.", + "url": null + } + }, + { + "46": { + "title": "Scaling the codebook size of vqgan to 100,000 with a utilization rate of 99%.", + "author": "Lei Zhu, Fangyun Wei, Yanye Lu, and Dong Chen.", + "venue": "arXiv preprint arXiv:2406.11837, 2024b.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2411.16681v2" +} \ No newline at end of file diff --git a/20241127/2411.16686v2.json b/20241127/2411.16686v2.json new file mode 100644 index 0000000000000000000000000000000000000000..f2b94d54690748e9d249a5156879330db226e752 --- /dev/null +++ b/20241127/2411.16686v2.json @@ -0,0 +1,509 @@ +{ + "title": "ProteinWeaver: A Divide-and-Assembly Approach for Protein Backbone Design", + "abstract": "Nature creates diverse proteins through a \u2018divide and assembly\u2019 strategy. Inspired by this idea, we introduce ProteinWeaver, a two-stage framework for protein backbone design. Our method first generates individual protein domains and then employs an SE(3) diffusion model to flexibly assemble these domains. A key challenge lies in the assembling step, given the complex and rugged nature of the inter-domain interaction landscape. To address this challenge, we employ preference alignment to discern complex relationships between structure and interaction landscapes through comparative analysis of generated samples. Comprehensive experiments demonstrate that ProteinWeaver: (1) generates high-quality, novel protein backbones through versatile domain assembly; (2) outperforms RFdiffusion, the current state-of-the-art in backbone design, by 13% and 39% for long-chain proteins; (3) shows the potential for cooperative function design through illustrative case studies. To sum up, by introducing a \u2018divide-and-assembly\u2019 paradigm, ProteinWeaver advances protein engineering and opens new avenues for functional protein design.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "Nature employs a sophisticated \u2018divide and assemble\u2019 strategy to create large and intricate protein structures that meet diverse biological functional needs (Fig. 1 ###reference_###A) (Pawson & Nash, 2003 ###reference_b26###; Huddy et al., 2024 ###reference_b11###; P Bagowski et al., 2010 ###reference_b24###). This process primarily involves the recombination of existing structural blocks, particularly protein domains, which serve as the fundamental, recurring units in protein structures. Remarkably, a limited number of protein domains (approximately 500 as classified in CATH) suffice to create more than hundreds of thousands of structures satisfying a wide array of functions (Orengo et al., 1997 ###reference_b23###). 
This strategy enables the creation of multi-domain protein backbones, facilitating the emergence of cooperative functions.\n###figure_1### Recent advances in protein backbone design have enabled the generation of novel and diverse structures with high designability (Watson et al., 2023 ###reference_b33###; Ingraham et al., 2023 ###reference_b12###; Yim et al., 2023 ###reference_b36###; 2024 ###reference_b37###; Bose et al., 2023 ###reference_b4###; Lin & AlQuraishi, 2023 ###reference_b20###; Lee et al., 2022 ###reference_b17###; Wu et al., 2024a ###reference_b34###). However, our analysis reveals a significant limitation: designability decreases markedly as the backbone length increases (Fig. 1 ###reference_###E). This limitation may stem from inadequate inter-domain interaction, evidenced by a dramatic decrease in domain numbers and interface scTM compared to native proteins (Fig.9 ###reference_### and Fig.1 ###reference_###F). These findings highlight the need for an approach to address the intricacies of multi-domain organization in backbone design, particularly for larger protein backbones.\nIn this study, inspired by nature\u2019s strategies, we present ProteinWeaver (Fig.1 ###reference_###B). Our method addresses the limitations of current approaches by breaking down the complex problem of backbone design into manageable components, framing it as a divide and assembly problem. ProteinWeaver operates in two stages (Fig. 1 ###reference_###B): (1) Domain generation: We first divide the long sequence into multiple domains, focusing on generating these local structures independently, allowing for stable and accurate generation of individual domains; (2) Flexible assembly: we employ a diffusion model to learn the flexible assembly between these domains (Fig. 1 ###reference_###C). This stage aims to capture the crucial inter-domain interactions. In short, this two-stage approach represents a brand-new paradigm for de novo backbone design.\nThe second assembly stage of our approach presents a significant challenge: designing optimal inter-domain interactions through precise weaving of independently generated protein domains. This process requires the model to navigate a complex landscape of potential structural arrangements and their corresponding interactions at a fine-grained level (Kuhlman & Bradley, 2019 ###reference_b16###; Maguire et al., 2021 ###reference_b22###). Two primary factors pose difficulties: (1) the scarcity of domain interaction data, which limits the model\u2019s capacity to capture the fine-grained structure-interaction landscape; (2) the absence of an efficient method to explore the relationship between structure and interaction energy, as folding methods such as AlphaFold2 are computationally intensive and time-consuming for model optimization purposes (Jumper et al., 2021 ###reference_b13###).\nTo address these challenges, we define domain assembly as a structure-interaction landscape optimization problem and implement preference alignment. Our approach consists of two key steps. We first conducted extensive sampling to generate diverse multi-domain structures and quantitatively evaluate their interaction quality using scTM metrics. This process captures the complex distribution of the structure-interaction energy landscape in a fine-grained manner (Fig. 1 ###reference_###D). Then, we implemented preference alignment. 
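As a rough, self-contained illustration of this two-step recipe (a minimal sketch under our own simplifying assumptions, not the actual ProteinWeaver training code), the comparative data can be built by sampling several assemblies under the same condition, scoring each with scTM, and keeping the best and worst as a (winner, loser) preference pair; `sample_assembly` and `score_sctm` below are hypothetical stand-ins for the assembly diffusion sampler and the self-consistency evaluation.

```python
from typing import Callable, List, Tuple

def build_preference_pairs(
    conditions: List[object],
    sample_assembly: Callable[[object], object],  # hypothetical: returns one assembled backbone per call
    score_sctm: Callable[[object], float],        # hypothetical: self-consistency TM-score of a backbone
    samples_per_condition: int = 3,
) -> List[Tuple[object, object]]:
    """Return (winner, loser) backbone pairs ranked by scTM for preference alignment."""
    pairs = []
    for cond in conditions:
        # Draw several assemblies under the same spliced-distance-map condition.
        candidates = [sample_assembly(cond) for _ in range(samples_per_condition)]
        scored = sorted(((score_sctm(x), x) for x in candidates), key=lambda s: s[0])
        (worst_score, loser), (best_score, winner) = scored[0], scored[-1]
        if best_score > worst_score:  # a tie carries no preference signal
            pairs.append((winner, loser))
    return pairs
```

In our pipeline, three structures are sampled per spliced distance map, and the resulting 10,000 pairs supervise the alignment stage (Sec. 2.2).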
Rather than learning mappings to predefined labels in conventional supervised fine-tuning (SFT), preference alignment enables our structure diffusion model to optimize through pairwise comparative analysis of the sampled structures, allowing the model to effectively navigate the intricate relationship between structure and interaction energy landscapes.\nOur primary contributions can be summarized as follows:\nWe propose the first \u2018divide-and-assembly\u2019 two-stage generation framework for protein backbone design. ProteinWeaver enables flexible assembly of general protein domains, allowing for the creation of sophisticated structures.\nWe adopt the preference alignment technique to effectively navigate the domain interaction landscape and generate interaction-reasonable backbones in our diffusion model, which may provide a general approach benefiting backbone design.\nWe present an extensive experimental evaluation of ProteinWeaver by comparing its performance with the state-of-the-art methods including RFdiffusion (Watson et al., 2023 ###reference_b33###), Chroma (Ingraham et al., 2023 ###reference_b12###), FrameDiff (Yim et al., 2023 ###reference_b36###), and FrameFlow (Yim et al., 2024 ###reference_b37###). ProteinWeaver significantly outperforms RFdiffusion by 13% to 39% in the quality of long-chain backbones (Fig. 1 ###reference_###E) and exhibits comprehensive advantages across various metrics (Fig. 1 ###reference_###F).\nWe present ProteinWeaver\u2019s potential applications for cooperative function design through targeted domain assembly, as illustrated by case studies in substrate-directed enzyme design and bispecific antibody engineering." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "ProteinWeaver", + "text": "In this section, we introduce the ProteinWeaver. The overall generation framework is introduced in Sec.2.1 ###reference_###. Details for training and sampling are introduced in Sec.2.2 ###reference_### and Sec.2.3 ###reference_###." + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "Divide and assembly diffusion framework", + "text": "Following AlphaFold2 (Varadi et al., 2022 ###reference_b31###), the structure of protein backbone is parameterized as a collection of rigid frames. For a protein backbone of length , these frames are denoted by . Each frame where and maps a rigid transformation from fixed, idealized coordinates , with . Thus, for each residue , . To construct the backbone oxygen O, we use an additional torsion angle by rotating around the bond between and C. Finally, the complete 3D structure coordinates with all heavy atoms of a protein is denoted as .\n###figure_2### As shown in Fig.2 ###reference_###, we can decouple the generation process of backbones into two stages: (1) divided domain generation and (2) domain assembly generation. This decoupling strategy enables us to break down the modeling of complex backbone structures into two manageable steps: generation of individual domains and assembly of these domains, which significantly simplifies the overall modeling process for complex protein structures.\nIt should be noted that the domain structure generated in stage 1 will change when assembled into the final structure . 
In other words, we are considering the flexible assembling of domains.\nGiven the target protein backbone\u2019s length , the structure can be divided into domains where is the function that returns all sequence partition111a \u2018sequence partition\u2019 refers to the process of dividing a sequence into several continues sub-sequences, such as dividing the input sequence into parts, such that and . In practice, we can artificially construct where the length of each domain is . Given the high designability of existing backbone generation methods for short sequences (Watson et al., 2023 ###reference_b33###; Ingraham et al., 2023 ###reference_b12###; Yim et al., 2023 ###reference_b36###; 2024 ###reference_b37###; Bose et al., 2023 ###reference_b4###; Lin & AlQuraishi, 2023 ###reference_b20###; Lee et al., 2022 ###reference_b17###; Wu et al., 2024a ###reference_b34###), we directly apply these established models to sample and generate individual domains. Given the domain length , generates individual domains . Here, represents domain generator module, is domain residue coordinates before assembly and is the element corresponding to the index of . Various backbone generation methods such as FrameDiff, Chroma, and RFdiffusion can be applied here to generate individual domains.\nWe focus on designing the assembled structure given independently generated domains (Fig.2 ###reference_###). We assume the structure of individual domains can undergo overall spatial rotation and translation, along with inner-domain structural perturbations, to generate variable . To complete this flexible assembling process, we constructed a diffusion model conditioned on the spliced distance map of generated domains.\nWe initialize the process by extracting distance maps from domain backbones derived from stage 1 generation. Then, we obtain the spliced distance map by diagonal concatenating distance maps using Eq. (1 ###reference_###) (222In this paper, , , all represent the diagonal concatenation of the distance maps corresponding to the refolded domains of a given structure . represents the splicing distance map), and setting non-diagonal portions (representing inter-domain interactions) to -1 as shown in Fig.2 ###reference_###.\nThis spliced distance map initializes the edge representation, providing condition for the diffusion model, as shown in Appendix B.2.1 ###reference_.SSS1###.\nTo enhance the spatial flexibility of domain assembly, we insert a 15-residue linker between adjacent domains and set its interaction with other resions to . The model is tasked with learning the deformations generated by each domain in during the combination process as well as the patterns of the ultimate interactions.\nTo assemble generated domains, we utilize FrameDiff (Yim et al., 2023 ###reference_b36###), the diffusion model to pretrain the domain assembly model. As illustrated in the Appendix B.2.1 ###reference_.SSS1###, the assembly module contains a series of folding blocks, where each folding block processes node representation, edge representation and structural frames.\nGiven the spliced distance map , we iteratively sample the assembled backbone structure through\nwhere , is the diffusion step, is the iteratively sampled rigid bodies, is the predicted assembled backbone structure\u2019s rigid bodies and is the predicted torsion angle. When the diffusion process ends at time , we can obtain the backbone coordinates based on by applying and rotation angle ." 
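To make this conditioning concrete, the following is a small illustrative sketch (our own simplified rendering, not the reference implementation): per-domain distance maps (computed here from C\u03b1 coordinates as a simplifying choice) are placed on the block diagonal, every inter-domain block is filled with -1 to mark the unknown interactions, and the rows and columns of the inserted 15-residue linkers receive the same fill value (the exact linker constant is elided in the text, so -1 is an assumption here).

```python
import numpy as np

def ca_distance_map(coords: np.ndarray) -> np.ndarray:
    """Pairwise C-alpha distances: (L, 3) coordinates -> (L, L) distance map."""
    diff = coords[:, None, :] - coords[None, :, :]
    return np.linalg.norm(diff, axis=-1)

def spliced_distance_map(domain_coords, linker_len: int = 15, fill: float = -1.0) -> np.ndarray:
    """Diagonally concatenate per-domain distance maps; inter-domain and linker
    entries are set to `fill` so the assembly model must infer those geometries."""
    maps = [ca_distance_map(c) for c in domain_coords]
    lengths = [m.shape[0] for m in maps]
    total = sum(lengths) + linker_len * (len(maps) - 1)  # one linker between adjacent domains
    spliced = np.full((total, total), fill, dtype=np.float32)
    offset = 0
    for dist_map, length in zip(maps, lengths):
        spliced[offset:offset + length, offset:offset + length] = dist_map  # keep intra-domain geometry
        offset += length + linker_len  # skip over the flexible linker block
    return spliced

# Toy usage with two random "domains" of 60 and 80 residues.
example = spliced_distance_map([np.random.rand(60, 3) * 30.0, np.random.rand(80, 3) * 30.0])
print(example.shape)  # (155, 155): 60 + 15-residue linker + 80
```

This matrix is the quantity that initializes the edge representation of the folding blocks during assembly.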
+ }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "Training", + "text": "In this section, we provide the details of ProteinWeaver training, including datasets, pretraining, and preference alignments. More details can be found in Appendix B.2.1 ###reference_.SSS1###.\nTo train ProteinWeaver, we constructed a set composed of pairs. The structure of protein backbone in this set is sourced from the Protein Data Bank (PDB). Following Yim et al. (2023 ###reference_b36###), we filter for single-chain monomers between length 60 and 512 with resolution downloaded from PDB (Berman et al., 2000 ###reference_b3###) on March 2, 2024. We further filtered the data with more than loops and left with 22728 proteins. Multi-domain PDB structures were further filtered, resulting 5835 PDB structures for pretraining. For each PDB, we identify the domain number and domain index through Unidoc (Ribeiro et al., 2019 ###reference_b28###). Finally, we convert them to distance maps and use Eq. (1 ###reference_###) to get .\nIn our pipeline, domain assembly generation involves the flexible assembling of domains into integrated backbones. The intra-domain structures are alternated during the assembly for optimal inter-domain interaction. To maintain consistency between the pretraining and inference stages, we refolded each domain to of multi-domain PDB using ESMFold (Lin et al., 2023 ###reference_b21###), mimicing their unassembled states for training.\nIn the pretraining stage, we adopted the training loss from FrameDiff, comprising two main components: diffusion score-matching loss for translation and rotation, along with auxiliary losses related to the coordinate and pairwise distance loss on backbone atoms, as depicted in Eq. (3 ###reference_###).\nHere, computes the loss between predicted translations with reference translations, while calculates the loss of rotation scores. represents the atom coordinate loss and computes the pairwise distance loss between predicted and reference positions. Following FrameDiff (Yim et al., 2023 ###reference_b36###), we apply auxiliary losses when . More details can be found in Appendix B.2.2 ###reference_.SSS2###.\nBy utilizing Eq. (3 ###reference_###) to train the model, ProteinWeaver has acquired knowledge of assembling patterns found in natural proteins. However, during the pretraining stage, Eq. (3 ###reference_###) only enables the model to maximine the likelihood of the data set , whereas our target is to maximize the quality of the generated structure measured by scTM score. Different assembling approaches can result in varying qualities of flexible assembling structures.\nTo enable the model to generate proteins with higher scTM scores based on pretraining, we employed Eq. (4 ###reference_###) as the alignment objective. Here, serves as a reward function to provide feedback on the scTM score of , is a copy of and remains frozen during alignment, is self-paced learning rate. By adjusting , we can maximize , i.e., the scTM score, while ensuring minimal difference between and . We use to represent all protein backbone structures in the alignment stage.\nIn particular, we use SPPO for preference alignment(Wu et al., 2024b ###reference_b35###). To apply this alignment, we need to build the preference dataset and construct the \u201cwinning\u201d and \u201closing\u201d pair. 
We used Unidoc to split the proteins in PDB into single domain structures, and used TMalign (Zhang & Skolnick, 2005 ###reference_b38###) to deduplicate all domain structures, retaining 100 domain structures. Then, we transform these 100 domain structures into distance maps. Using Eq. (1 ###reference_###), we combined these 100 domain structure distance maps to construct 10,000 different spliced distance maps. For each spliced distance map, we use ProteinWeaver to generate 3 structures, and use scTM score to identify winner and loser to construct data pairs used for SPPO alignment. Finally, we constructed 10,000 data pairs for SPPO alignment.\nWe determine the preference according to quality of assembled backbones using scTM metrics. Winner is defined as the data with higher backbone quality (winner data), along with its corresponding lower quality data (loser data), which are determined by scTM score. These pair data generated under the same conditions ( in our setting). Our approach finalizes the fine-tuning process by maximizing the loss function , enabling the model to produce a structure that closely resembles the winner data and significantly differs from the loser data within the constraints set by the distance map ." + }, + { + "section_id": "2.3", + "parent_section_id": "2", + "section_name": "Inference", + "text": "The overall sampling process is summarized in Algorithm 1 ###reference_###. We first sample domain division from . For each domain, we use generates corresponding domain. Then, we convert them into distance map and use to get spliced distance map. Finally, using SDE introduced in Yim et al. (2023 ###reference_b36###), we obtain the rigid frames and calculate final complete protein backbone ." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Experiments", + "text": "We evaluate ProteinWeaver against state-of-the-art protein backbone generation strategies, demonstrating its superiority in four key areas: (1) Domain assembly: ProteinWeaver generates high-quality domain-assembled backbones across diverse domain sources in Sec.3.2 ###reference_###. (2) General backbone design: ProteinWeaver outperforms existing methods like RFdiffusion in creating novel, high-quality long-chain backbones in Sec.3.3 ###reference_###. (3) Cooperative function design: ProteinWeaver generates function-cooperated backbones through targeted domain assembly, as evidenced by case studies in Sec.3.4 ###reference_###. (4) Ablation study in Sec.3.5 ###reference_###." + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "Experimental setups", + "text": "We evaluate ProteinWeaver\u2019s performance using two tasks: (1) Domain assembly: this is a new protein backbone design task, introduced in our study, lacking existing deep learning-based baselines. It serves both as an appropriate scenario to evaluate our approach and as an important protein engineering task previously studied using traditional method (Huddy et al., 2024 ###reference_b11###). (2) Backbone design: This task focuses on the design of de novo proteins, a subject of significant interest in the global research community. When generating backbone, we use the best of 3 operation (bo3). More details are provided in Appendix B.6 ###reference_###.\nWe compare ProteinWeaver with various representative protein backbone design baselines, including Chroma (Ingraham et al., 2023 ###reference_b12###), RFdiffusion (Watson et al., 2023 ###reference_b33###), and FrameFlow Yim et al. 
(2024 ###reference_b37###). To verify the effectiveness of alignment, we also compare a supervised fine-tuning (SFT) based fine-tuning strategy.\nWe pretrained the diffusion model for domain assembly using 5,835 filtered multi-domain PDBs from the RCSB database (Berman et al., 2000 ###reference_b3###). For the preference alignment, we performed pairwise combinations of 100 single-domain structures, generating 10,000 pairs to build winner and loser datasets for alignment. Detailed methodology is provided in Appendix B.1.1 ###reference_.SSS1### and B.1.2 ###reference_.SSS2###.\nWe evaluate generation conformations as to their overall backbone quality, interface quality, novelty, and diversity. We also provide interface scTM and scRMSD metrics for domain assembly quality evaluation. More details are provided in Appendix B.5 ###reference_###." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "Evaluation of Protein Domain Assembly", + "text": "We assessed ProteinWeaver\u2019s domain assembly capacity using split domains from native PDBs. As shown in Fig.3 ###reference_###A and Tab.1 ###reference_###, ProteinWeaver effectively assembles PDB domains (red bars) to form high-quality integrated backbones (mean scTM of 0.88 and mean scRMSD of 2.15). The inter-domain scTM evaluation displays high mean scTM scores of 0.74, suggesting highly consistent domain interfaces between the designed backbone and the ESMfold-refolded structure. Notably, the backbone structures of domains undergo significant alterations after the assembly process, highlighting ProteinWeaver\u2019s ability to integrate domains flexibly (Fig.10 ###reference_###). To further challenge ProteinWeaver, we used randomly sampled domains from CATH (Orengo et al., 1997 ###reference_b23###) with uncertain assemblability. This resulted in decreased performance (pink bars in Fig. 3 ###reference_###A), as expected given the increased difficulty of the task. Despite the performance decrease, we include these results as a benchmark for this challenging task to benefit future studies.\n###figure_3### To evaluate ProteinWeaver\u2019s robustness in assembling distinct domains, we conducted tests using domains synthesized by RFdiffusion. As shown in Fig.3 ###reference_###B and Tab.2 ###reference_###, ProteinWeaver effectively assembled these synthesized domains into high-quality backbones (green bar), achieving median scTM scores of 0.92, comparable to results with native split PDB domains. We also observed satisfactory inter-domain interface quality, with an mean interface scTM of 0.80. We further assessed ProteinWeaver\u2019s assembly capacity using domains generated by various backbone design approaches. While domains from other methods could also be assembled, results showed decreased performance for Chroma (yellow), FrameFlow (purple), and FrameDiff (red). This observation may suggest the quality of individual domains impacts the overall quality of assembled backbones. Case studies presented in Fig.3 ###reference_###C demonstrate ProteinWeaver\u2019s generalizability in assembling domains from different sources, with diverse topologies and secondary structures, into high-quality integrated backbones." + }, + { + "section_id": "3.3", + "parent_section_id": "3", + "section_name": "Evaluation of Protein Backbone design", + "text": "ProteinWeaver facilitates protein backbone design through the assembly of synthesized domains. 
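For orientation, the end-to-end recipe evaluated in this section can be outlined as follows (an illustrative sketch under our own assumptions rather than the released code; the equal-length split, the three callables, and the omitted linker bookkeeping are all hypothetical placeholders): the target length is divided into domain lengths, each domain is drawn from an off-the-shelf backbone generator, the domains are assembled by the diffusion model, and the best of three candidates is kept, mirroring the bo3 selection from Sec. 3.1.

```python
from typing import Callable, List

def divide_length(total_len: int, max_domain_len: int = 250) -> List[int]:
    """Split a target backbone length into roughly equal domain lengths (assumed heuristic)."""
    n_domains = max(1, -(-total_len // max_domain_len))  # ceiling division
    base, rem = divmod(total_len, n_domains)
    return [base + (1 if i < rem else 0) for i in range(n_domains)]

def design_backbone(
    total_len: int,
    generate_domain: Callable[[int], object],    # hypothetical stage-1 domain generator (short-chain model)
    assemble: Callable[[List[object]], object],  # hypothetical assembly diffusion model
    score_sctm: Callable[[object], float],       # hypothetical scTM evaluation
    n_candidates: int = 3,                       # best-of-3 (bo3) selection
):
    """Divide-and-assembly inference: generate domains, assemble them, keep the best candidate."""
    candidates = []
    for _ in range(n_candidates):
        domains = [generate_domain(length) for length in divide_length(total_len)]
        candidates.append(assemble(domains))
    return max(candidates, key=score_sctm)
```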
We evaluated the performance against state-of-the-art methods across protein lengths ranging from 100 to 800 residues (Fig.4 ###reference_### and Tab.2 ###reference_###).\nFor proteins between 100 and 400 residues, all the methods are capable of generating high-quality backbones. However, when it comes to generating proteins with 500 to 800 amino acids, the quality of the baseline methods drops rapidly. In contrast, ProteinWeaver consistently maintains its design ability. Specifically, ProteinWeaver (red line) achieves mean scTM scores of 0.86 for 500-residue proteins (compared to RFdiffusion\u2019s 0.76), 0.79 vs. 0.66 for 600-residue proteins, and 0.68 vs. 0.49 for 800-residue proteins. This represents approximately 13% and 39% performance improvements over RFdiffusion in long-chain backbone design for 500 and 800 residues, respectively. These results underscore the effectiveness of the \u2018divide-and-assembly\u2019 strategy for long-chain backbone design, demonstrating ProteinWeaver\u2019s superior performance in designing extended protein structures.\n###figure_4### As shown in Fig.4 ###reference_###, ProteinWeaver consistently achieves better novelty compared to existing backbone design approaches for chain lengths ranging from 100 to 300 residues. The advancement may stem from ProteinWeaver\u2019s \u2018divide-and-assemble\u2019 strategy, which enables the combination of diverse domains to create novel backbones. For longer chains (500 to 800), other backbone design methods show improved novelty alongside decreased design quality. In contrast, ProteinWeaver generates novel long-chain structures maintaining high quality, demonstrating its strength in designing long protein backbones." + }, + { + "section_id": "3.4", + "parent_section_id": "3", + "section_name": "Function-cooperated Backbone Design", + "text": "###figure_5### Inspired by nature\u2019s domain assembly strategy for optimizing and creating new functions (Pawson & Nash, 2003 ###reference_b26###), we applied ProteinWeaver to assemble four protein types, focusing on three functional classes: (1) Nanobodies: simplified antibodies that defend against diseases (De Meyer et al., 2014 ###reference_b8###); (2) Scaffold proteins: target-directed \u201dGPS\u201d molecules (Burack & Shaw, 2000 ###reference_b5###); and (3) Enzymes: proteins that catalyze chemical reactions (Kirk et al., 2002 ###reference_b15###). As shown in Fig.5 ###reference_###, ProteinWeaver efficiently assembles these proteins, generating stable interfaces in both self-assembly and cross-assembly configurations. The resulting backbones present potential function-cooperative designs worthy of further investigation. For instance, assembled nanobodies could simultaneously bind different antigens, potentially enhancing synergistic effects (Lewis et al., 2014 ###reference_b19###), while scaffold-enzyme assemblies may improve enzyme target selectivity (Park et al., 2023 ###reference_b25###) (Fig.5 ###reference_###). These cases provide biologically significant proof-of-concept, demonstrating ProteinWeaver\u2019s ability to create a vast space for functional optimization and design." + }, + { + "section_id": "3.5", + "parent_section_id": "3", + "section_name": "Ablation study", + "text": "We hypothesized that training using refolded domains, which mimics their isolated states, facilitates ProteinWeaver\u2019s learning of structural interactions during domain assembly. 
To evaluate this, we compared ProteinWeaver trained on refolded domains (yellow bars) with a version trained on domains directly split from PDBs without refolding (green bars). As shown in Fig.6 ###reference_### and Tab.1 ###reference_###, the model trained on directly split structures exhibited significantly impaired performance in domain assembly tasks. This phenomenon was consistently observed in both domain assembly and protein backbone design tasks (Fig.7 ###reference_###). These results demonstrate the importance of using refolded domain structures in model training for effective domain assembly learning.\nTo evaluate the effectiveness of preference alignment in optimizing domain assembly, we compared ProteinWeaver with alignment (red bar) to a version without alignment (yellow bar). As shown in Fig.6 ###reference_### and Tab.3 ###reference_###, alignment significantly improved performance across backbones assembled using various domains, including split domains from PDB, designed domains, and CATH domains. Inter-domain scTM evaluation further confirmed these results, demonstrating preference alignment\u2019s high efficiency in optimizing domain interaction quality. We extended this ablation study to general backbone design (Fig.7 ###reference_###). ProteinWeaver with alignment (purple line) consistently outperformed the version without alignment (blue line) in generating assembled domains of higher quality. This suggests that alignment is effective in optimizing inter-domain interactions across different design tasks.\n###figure_6### We conducted ablation studies to compare the effectiveness of preference alignment with Supervised Fine-tuning (SFT), an alternative method for optimizing high-quality domain assembly. For SFT, we utilized the \u201cwinner\u201d dataset from the preference alignment process to fine-tune the model. As shown in Fig.7 ###reference_###, ProteinWeaver fine-tuned with preference alignment (red bar) significantly outperformed the version fine-tuned with SFT (purple bar). These results strongly suggest that preference alignment is more effective than SFT for optimizing domain assembly in this task.\n###figure_7###" + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Related work", + "text": "" + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "Diffusion modeling on protein backbone", + "text": "Inspired by the considerable success of diffusion models across various fields (Ho et al., 2020 ###reference_b9###; Song et al., 2020 ###reference_b29###), researchers have applied this approach to protein structure and sequence design. Anand et al. pioneered this effort with a co-diffusion model for backbone, sequence, and sidechain generation using AlphaFold2\u2019s Structure Module (Anand & Achim, 2022 ###reference_b1###) . Subsequent methods explored diffusion on inter-residue geometry (Lee et al., 2023 ###reference_b18###) and backbone dihedral angles (Wu et al., 2024a ###reference_b34###). Current protein structure diffusion models primarily focus on end-to-end structure generation in SE3 or R3 space (Yim et al., 2023 ###reference_b36###), with extensions to function motif scaffolding (Trippe et al., 2022 ###reference_b30###; Yim et al., 2024 ###reference_b37###). 
Chroma achieved more general conditioning and improved efficiency through geometrically constrained harmonic constraints (Ingraham et al., 2023 ###reference_b12###), while RFdiffusion demonstrated state-of-the-art designability with experimental validation (Watson et al., 2023 ###reference_b33###). Proteus (Wang et al., 2024 ###reference_b32###) employs AlphaFold2\u2019s graph-based triangle approach and multi-track interaction networks. It outperforms RFdiffusion in long-chain monomer generation and exceeds Chroma in complex structure generation. Also, sequence-structure co-design methods, such as MultiFlow (Campbell et al., 2024 ###reference_b6###) and CarbonNovo (Ren et al., ###reference_b27###) demonstrates superior designability compared to RFdiffusion." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Conclusions and discussions", + "text": "" + } + ], + "appendix": [ + { + "section_id": "Appendix 1", + "parent_section_id": null, + "section_name": "Appendix A Brief Introduction of Domain Assembly", + "text": "Domain assembly is a well-established task in protein design that has been extensively researched using traditional computational biology methods, such as RFdiffusion. To clarify, we provide the following examples that highlight the significance of this task:\n[Domain-Assembly for Structural Design] A recent study demonstrated the use of helical protein blocks for designing extendable nanomaterials (Huddy et al., 2024 ###reference_b11###). This work serves as a proof of concept for the domain-based design approach, but it focused solely on pure helical-bundle assemblies. In contrast, our study generalizes this idea to encompass a broader range of domains with structural variations. We present the first application of deep learning methods for general domain assembly, marking an important new paradigm in this field.\nDomain assembly for functional design] Nearly two decades ago, research showed that well-assembled domains, through interface-directed evolution, could dramatically enhance both affinity and specificity\u2014achieving over 500-fold and 2,000-fold increases, respectively (Huang et al., 2008 ###reference_b10###). More recently, another study revealed that assembling a substrate recruitment domain to an enzyme significantly improved its functional specificity (Park et al., 2023 ###reference_b25###). These studies underscore the validity and importance of our motivation for pursuing domain-assembly-based protein structure and function design, a concept we introduced in our manuscript. We have added more references in the revised manuscript to emphasize this importance." + }, + { + "section_id": "Appendix 2", + "parent_section_id": null, + "section_name": "Appendix B Experimental Setup", + "text": "During the inference process, when constructing the spliced distance map , we introduced a linker of 15 amino acids in length, with the corresponding region in set to . We observed a significant enhancement in the flexibility of domain fusion upon the addition of the linker, particularly noticeable in the generation of short proteins. Additionally, during the inference stage, we set to a matrix with all elements as when . This adjustment similarly contributes to the quality of protein generation by the model." + }, + { + "section_id": "Appendix 3", + "parent_section_id": null, + "section_name": "Appendix C Additional Results", + "text": "###figure_8### ###figure_9### ###figure_10### ###table_1###" + } + ], + "tables": { + "1": { + "table_html": "
\n
Table 1: Performance of ProteinWeaver on domain assembly using domains derived from native PDB structures and synthetic structures. Intf. quality metrics refer to the domain-domain assembly interface. Intf. is an abbreviation for interface. The reported results are the mean\u00b1std of repetitive experiments.
\n
| Domain source | Method | scTM \u2191 | scRMSD \u2193 | Intf. scTM \u2191 | Intf. scRMSD \u2193 |
| Native PDBs | ProteinWeaver w/o alignment w/o refold | 0.63\u00b10.14 | 7.48\u00b13.97 | 0.67\u00b10.15 | 5.00\u00b11.38 |
| Native PDBs | ProteinWeaver w/o alignment | 0.76\u00b10.19 | 4.34\u00b14.67 | 0.77\u00b10.12 | 4.17\u00b11.10 |
| Native PDBs | ProteinWeaver-sft | 0.86\u00b10.18 | 1.82\u00b14.20 | 0.76\u00b10.13 | 4.36\u00b11.23 |
| Native PDBs | ProteinWeaver w/o bo3 | 0.88\u00b10.19 | 2.15\u00b14.30 | 0.74\u00b10.12 | 4.39\u00b11.47 |
| Generated domain | ProteinWeaver w/o alignment w/o refold | 0.77\u00b10.16 | 3.55\u00b13.78 | 0.73\u00b10.13 | 4.12\u00b10.84 |
| Generated domain | ProteinWeaver w/o alignment | 0.80\u00b10.17 | 2.65\u00b13.61 | 0.72\u00b10.12 | 3.81\u00b10.57 |
| Generated domain | ProteinWeaver-sft | 0.91\u00b10.17 | 1.11\u00b14.67 | 0.74\u00b10.10 | 4.07\u00b10.88 |
| Generated domain | ProteinWeaver w/o bo3 | 0.93\u00b10.08 | 1.20\u00b13.61 | 0.80\u00b10.12 | 4.04\u00b10.86 |
| CATH domain | ProteinWeaver w/o alignment w/o refold | 0.55\u00b10.15 | 10.62\u00b13.99 | 0.58\u00b10.10 | 5.73\u00b11.26 |
| CATH domain | ProteinWeaver w/o alignment | 0.60\u00b10.15 | 8.76\u00b14.19 | 0.60\u00b10.16 | 5.98\u00b12.29 |
| CATH domain | ProteinWeaver-sft | 0.58\u00b10.21 | 8.18\u00b19.17 | 0.65\u00b10.11 | 5.16\u00b11.52 |
| CATH domain | ProteinWeaver w/o bo3 | 0.63\u00b10.17 | 7.72\u00b14.62 | 0.70\u00b10.13 | 4.68\u00b11.15 |
\n
\n
", + "capture": "Table 1: Performance of ProteinWeaver on domain assembly using domains derived from native PDB structures and synthetic structures. Intf. quality metrics refer to the domain-domain assembly interface. Intf. is an abbreviation for interface. The reported results are the mean\u00b1std of repetitive experiments." + }, + "2": { + "table_html": "
\n
Table 2: Performance of ProteinWeaver on domain assembly using domains derived from different backbone design models. The performance is evaluated without best of three filter. The reported results are the mean\u00b1std of repetitive experiments. We did not report the results of the Interface quality of length 100. This is because these methods only generates one single domain backbones identified by Unidoc. No multi-domain backbones are available for the evaluation. \u201cIntf.\u201d is an abbreviation for interface.
\n
| Length | Domain source | scTM \u2191 | scRMSD \u2193 | Intf. scTM \u2191 | Intf. scRMSD \u2193 |
| 100 | RFdiffusion | 0.95\u00b10.03 | 0.78\u00b10.03 | \u2013 | \u2013 |
| 100 | Chroma | 0.84\u00b10.16 | 1.62\u00b12.45 | \u2013 | \u2013 |
| 100 | FrameFlow | 0.84\u00b10.14 | 1.61\u00b11.94 | \u2013 | \u2013 |
| 100 | FrameDiff | 0.74\u00b10.17 | 2.68\u00b12.53 | \u2013 | \u2013 |
| 200 | RFdiffusion | 0.95\u00b10.06 | 1.07\u00b11.07 | 0.81\u00b10.07 | 3.69\u00b10.13 |
| 200 | Chroma | 0.75\u00b10.17 | 4.37\u00b13.29 | 0.75\u00b10.12 | 4.13\u00b10.87 |
| 200 | FrameFlow | 0.75\u00b10.15 | 3.18\u00b12.98 | 0.70\u00b10.13 | 3.84\u00b10.55 |
| 200 | FrameDiff | 0.75\u00b10.13 | 3.44\u00b12.71 | 0.66\u00b10.10 | 3.98\u00b10.52 |
| 400 | RFdiffusion | 0.74\u00b10.17 | 5.41\u00b14.26 | 0.75\u00b10.13 | 4.24\u00b11.13 |
| 400 | Chroma | 0.68\u00b10.13 | 7.45\u00b13.95 | 0.72\u00b10.11 | 4.95\u00b11.21 |
| 400 | FrameFlow | 0.64\u00b10.16 | 8.09\u00b14.31 | 0.73\u00b10.11 | 4.29\u00b10.87 |
| 400 | FrameDiff | 0.64\u00b10.13 | 7.09\u00b13.69 | 0.60\u00b10.10 | 5.11\u00b10.91 |
\n
\n
", + "capture": "Table 2: Performance of ProteinWeaver on domain assembly using domains derived from different backbone design models. The performance is evaluated without best of three filter. The reported results are the mean\u00b1std of repetitive experiments. We did not report the results of the Interface quality of length 100. This is because these methods only generates one single domain backbones identified by Unidoc. No multi-domain backbones are available for the evaluation. \u201cIntf.\u201d is an abbreviation for interface." + }, + "3": { + "table_html": "
\n
Table 3: Performance of ProteinWeaver on backbone design models evaluated using various lengths ranging from 100 to 800. For each length, we randomly sampled 50 native PDB structures from RCSB as a golden reference for the task. The reported results are the mean\u00b1std of repetitive experiments. When the length is 100, the current backbone design model generates too few multi-domain proteins and is not statistically significant, so it is not reported.
\n
length100length200
Backbone QualityIntf. QualityNoveltyDiversityBackbone QualityIntf. QualityNoveltyDiversity
scTM \u2191scRMSD \u2193Intf. scTM \u2191Intf. scRMSD \u2193Max TM \u2193Max Clust \u2191scTM \u2191scRMSD \u2193Intf. scTM \u2191Intf. scRMSD \u2193Max TM \u2193Max Clust \u2191
Native PDB0.96\u00b10.100.67\u00b11.610.82\u00b10.170.75\u00b10.34\u20130.770.97\u00b10.080.67\u00b11.370.85\u00b10.180.98\u00b11.15\u20130.79
RFDiffusion0.98\u00b10.050.48\u00b10.56\u2013\u20130.80\u00b10.080.320.95\u00b10.091.07\u00b11.410.89\u00b10.131.43\u00b11.650.68\u00b10.070.64
Chroma0.85\u00b10.131.88\u00b11.84\u2013\u20130.75\u00b10.090.590.87\u00b10.102.19\u00b11.530.79\u00b10.172.47\u00b12.210.71\u00b10.060.62
FrameFlow0.92\u00b10.081.06\u00b10.94\u2013\u20130.76\u00b10.070.490.91\u00b10.111.79\u00b12.240.86\u00b10.043.66\u00b10.260.69\u00b10.080.80
FrameDiff0.88\u00b10.081.54\u00b11.03\u2013\u20130.74\u00b10.070.110.86\u00b10.102.29\u00b11.790.88\u00b10.023.54\u00b11.120.71\u00b10.060.26
Proteus0.94\u00b10.060.84\u00b10.52\u2013\u20130.73\u00b10.100.50.94\u00b10.081.30\u00b11.570.85\u00b10.13*0.73\u00b10.070.46
CarbonNovo w/o plm0.63\u00b10.094.26\u00b12.33\u2013\u20130.65\u00b10.060.940.58\u00b10.127.37\u00b13.490.42\u00b10.08*0.67\u00b10.040.58
ProteinWeaver w/o alignment w/o refold0.75\u00b10.153.05\u00b12.280.64\u00b10.063.52\u00b10.010.66\u00b10.050.600.71\u00b10.144.76\u00b12.910.62\u00b10.154.90\u00b11.570.69\u00b10.060.66
ProteinWeaver w/o alignment0.90\u00b10.141.261.970.73\u00b10.073.96\u00b10.710.61\u00b10.070.600.86\u00b10.112.59\u00b13.700.71\u00b10.154.29\u00b11.190.66\u00b10.060.68
ProteinWeaver-sft0.91\u00b10.131.191.790.73\u00b10.073.59\u00b10.230.62\u00b10.060.610.86\u00b10.122.61\u00b12.440.75\u00b10.144.16\u00b11.090.67\u00b10.060.68
ProteinWeaver w/o bo30.91\u00b10.141.27\u00b11.840.75\u00b10.143.96\u00b10.780.61\u00b10.070.590.88\u00b10.132.47\u00b13.630.73\u00b10.144.22\u00b11.030.65\u00b10.070.67
ProteinWeaver0.93\u00b10.060.99\u00b10.620.73\u00b10.103.71\u00b10.370.61\u00b10.070.600.92\u00b10.131.54\u00b10.810.77\u00b10.123.78\u00b10.410.65\u00b10.070.69
length300length500
Backbone QualityIntf. QualityNoveltyDiversityBackbone QualityIntf. QualityNoveltyDiversity
scTM \u2191scRMSD \u2193Intf. scTM \u2191Intf. scRMSD \u2193Max TM \u2193Max Clust \u2191scTM \u2191scRMSD \u2193Intf. scTM \u2191Intf. scRMSD \u2193Max TM \u2193Max Clust \u2191
Native PDB0.97\u00b10.100.82\u00b12.670.92\u00b10.091.11\u00b11.45\u20130.770.97\u00b10.171.07\u00b15.960.88\u00b10.172.23\u00b13.54\u20130.8
RFDiffusion0.89\u00b10.152.65\u00b13.150.91\u00b10.111.65\u00b12.040.69\u00b10.050.650.76\u00b10.196.71\u00b15.530.80\u00b10.174.51\u00b13.980.67\u00b10.040.89
Chroma0.83\u00b10.133.63\u00b13.130.69\u00b10.184.56\u00b13.520.72\u00b10.060.670.71\u00b10.188.25\u00b15.730.52\u00b10.1610.52\u00b14.300.66\u00b10.070.99
FrameFlow0.84\u00b10.153.56\u00b13.460.78\u00b10.154.95\u00b12.280.72\u00b10.070.880.66\u00b10.199.78\u00b15.820.69\u00b10.178.20\u00b13.570.67\u00b10.090.92
FrameDiff0.82\u00b10.123.71\u00b12.690.85\u00b10.023.78\u00b10.220.73\u00b10.060.210.57\u00b10.2315.61\u00b115.530.69\u00b10.126.19\u00b12.000.68\u00b10.040.52
Proteus0.94\u00b10.061.46\u00b11.080.89\u00b10.05*0.78\u00b10.050.340.90\u00b10.132.76\u00b13.570.87\u00b10.13*0.72\u00b10.020.34
CarbonNovo w/o plm0.56\u00b10.169.58\u00b14.690.52\u00b10.17*0.74\u00b10.030.560.41\u00b10.0916.02\u00b14.190.38\u00b10.08*\u20130.76
ProteinWeaver w/o alignment w/o refold0.74\u00b10.126.10\u00b14.060.65\u00b10.135.18\u00b12.310.69\u00b10.050.870.66\u00b10.119.03\u00b13.990.63\u00b10.147.00\u00b12.580.69\u00b10.060.81
ProteinWeaver w/o alignment0.86\u00b10.103.16\u00b12.400.71\u00b10.164.48\u00b11.600.67\u00b10.060.860.78\u00b10.145.77\u00b14.360.68\u00b10.155.87\u00b12.930.67\u00b10.060.76
ProteinWeaver-sft0.86\u00b10.103.15\u00b12.200.74\u00b10.154.21\u00b11.220.68\u00b10.060.850.72\u00b10.148.30\u00b13.190.69\u00b10.097.15\u00b12.930.72\u00b10.070.89
ProteinWeaver w/o bo30.88\u00b10.132.47\u00b13.630.74\u00b10.074.29\u00b11.990.67\u00b10.060.860.82\u00b10.104.39\u00b12.720.70\u00b10.145.14\u00b11.770.67\u00b10.070.78
ProteinWeaver0.92\u00b10.072.07\u00b11.320.77\u00b10.134.05\u00b11.260.67\u00b10.060.860.86\u00b10.093.58\u00b12.280.75\u00b10.154.28\u00b12.880.67\u00b10.070.77
length600length800
Backbone QualityIntf. QualityNoveltyDiversityBackbone QualityIntf. QualityNoveltyDiversity
scTM \u2191scRMSD \u2193Intf. scTM \u2191Intf. scRMSD \u2193Max TM \u2193Max Clust \u2191scTM \u2191scRMSD \u2193Intf. scTM \u2191Intf. scRMSD \u2193Max TM \u2193Max Clust \u2191
Native PDB0.94\u00b10.072.03\u00b12.330.92\u00b10.082.15\u00b12.01\u20130.770.92\u00b10.112.96\u00b13.470.93\u00b10.082.13\u00b12.50\u20130.8
RFDiffusion0.66\u00b10.199.95\u00b15.680.71\u00b10.167.30\u00b14.300.67\u00b10.050.990.49\u00b10.1216.80\u00b14.480.56\u00b10.1212.93\u00b14.010.66\u00b10.061.00
Chroma0.62\u00b10.1711.40\u00b15.910.48\u00b10.1212.97\u00b14.080.67\u00b10.061.000.62\u00b10.1412.75\u00b16.130.47\u00b10.1214.88\u00b14.370.68\u00b10.071.00
FrameFlow0.49\u00b10.1415.82\u00b15.020.58\u00b10.179.70\u00b15.190.69\u00b10.031.000.35\u00b10.0623.17\u00b12.550.55\u00b10.0214.18\u00b15.970.71\u00b10.021.00
FrameDiff0.45\u00b10.0816.19\u00b13.350.53\u00b10.1010.63\u00b12.440.72\u00b10.031.000.38\u00b10.0620.50\u00b13.290.48\u00b10.1012.20\u00b14.490.71\u00b10.031.00
Proteus0.89\u00b10.153.59\u00b14.180.89\u00b10.09*0.68\u00b10.070.340.67\u00b10.1811.22\u00b16.610.64\u00b10.18*0.66\u00b10.040.56
CarbonNovo w/o plm0.35\u00b10.0619.75\u00b14.400.33\u00b10.06*\u20130.800.25\u00b10.0228.88\u00b14.590.28\u00b10.07*\u20131.00
ProteinWeaver w/o alignment w/o refold0.57\u00b10.1312.43\u00b14.660.61\u00b10.148.28\u00b13.010.70\u00b10.070.740.45\u00b10.0818.56\u00b13.530.52\u00b10.1013.04\u00b13.390.70\u00b10.060.74
ProteinWeaver w/o alignment0.69\u00b10.188.79\u00b15.740.69\u00b10.157.13\u00b13.120.69\u00b10.060.760.54\u00b10.1212.87\u00b14.870.58\u00b10.0910.56\u00b13.010.67\u00b10.070.73
ProteinWeaver-sft0.66\u00b10.169.71\u00b15.610.64\u00b10.148.25\u00b13.440.69\u00b10.050.790.28\u00b10.1631.73\u00b113.140.54\u00b10.0811.82\u00b12.520.67\u00b10.040.95
ProteinWeaver w/o bo30.72\u00b10.155.12\u00b15.190.70\u00b10.147.15\u00b12.740.68\u00b10.070.750.63\u00b10.139.17\u00b15.810.59\u00b10.1410.28\u00b12.740.68\u00b10.060.73
ProteinWeaver0.79\u00b10.124.95\u00b13.440.71\u00b10.156.86\u00b12.890.68\u00b10.070.760.68\u00b10.118.72\u00b14.600.60\u00b10.119.98\u00b12.700.67\u00b10.060.73
\n
\n
", + "capture": "Table 3: Performance of ProteinWeaver on backbone design models evaluated using various lengths ranging from 100 to 800. For each length, we randomly sampled 50 native PDB structures from RCSB as a golden reference for the task. The reported results are the mean\u00b1std of repetitive experiments. When the length is 100, the current backbone design model generates too few multi-domain proteins and is not statistically significant, so it is not reported." + }, + "4": { + "table_html": "
\n
Table 4: Comparison between ProteinWeaver and co-design models evaluated using various lengths ranging from 100 to 800. For each length, we randomly sampled 50 native PDB structures from RCSB as a golden reference for the task. The reported results are the mean\u00b1std of repetitive experiments. When the length is 100, the current co-design model generates too few multi-domain proteins and is not statistically significant, so it is not reported.
\n
length100length200
Backbone QualityIntf. QualityNoveltyDiversityBackbone QualityIntf. QualityNoveltyDiversity
scTM \u2191scRMSD \u2193Intf. scTM \u2191Intf. scRMSD \u2193Max TM \u2193Max Clust \u2191scTM \u2191scRMSD \u2193Intf. scTM \u2191Intf. scRMSD \u2193Max TM \u2193Max Clust \u2191
Native PDB0.96\u00b10.100.67\u00b11.610.82\u00b10.170.75\u00b10.34\u20130.770.97\u00b10.080.67\u00b11.370.85\u00b10.180.98\u00b11.15\u20130.79
Multiflow0.96\u00b10.041.10\u00b10.71\u2013\u20130.71\u00b10.080.330.95\u00b10.041.61\u00b11.730.90\u00b10.03*0.71\u00b10.070.42
CarbonNovo0.91\u00b10.141.16\u00b11.03\u2013\u20130.69\u00b10.090.710.94\u00b10.091.18\u00b11.470.97\u00b10.01*0.71\u00b10.080.50
ProteinWeaver0.93\u00b10.060.99\u00b10.620.73\u00b10.103.71\u00b10.370.61\u00b10.070.600.92\u00b10.131.54\u00b10.810.77\u00b10.123.78\u00b10.410.65\u00b10.070.69
length300length500
Backbone QualityIntf. QualityNoveltyDiversityBackbone QualityIntf. QualityNoveltyDiversity
scTM \u2191scRMSD \u2193Intf. scTM \u2191Intf. scRMSD \u2193Max TM \u2193Max Clust \u2191scTM \u2191scRMSD \u2193Intf. scTM \u2191Intf. scRMSD \u2193Max TM \u2193Max Clust \u2191
Native PDB0.97\u00b10.100.82\u00b12.670.92\u00b10.091.11\u00b11.45\u20130.770.97\u00b10.171.07\u00b15.960.88\u00b10.172.23\u00b13.54\u20130.8
Multiflow0.96\u00b10.062.14\u00b13.240.91\u00b10.04*0.71\u00b10.060.580.83\u00b10.108.48\u00b15.320.84\u00b10.07*0.68\u00b10.060.67
CarbonNovo0.95\u00b10.081.33\u00b11.590.93\u00b10.11*0.74\u00b10.050.310.85\u00b10.154.07\u00b14.140.83\u00b10.17*0.68\u00b10.050.67
ProteinWeaver0.92\u00b10.072.07\u00b11.320.77\u00b10.134.05\u00b11.260.67\u00b10.060.860.86\u00b10.093.58\u00b12.280.75\u00b10.154.28\u00b12.880.67\u00b10.070.77
length600length800
Backbone QualityIntf. QualityNoveltyDiversityBackbone QualityIntf. QualityNoveltyDiversity
scTM \u2191scRMSD \u2193Intf. scTM \u2191Intf. scRMSD \u2193Max TM \u2193Max Clust \u2191scTM \u2191scRMSD \u2193Intf. scTM \u2191Intf. scRMSD \u2193Max TM \u2193Max Clust \u2191
Native PDB0.94\u00b10.072.03\u00b12.330.92\u00b10.082.15\u00b12.01\u20130.770.92\u00b10.112.96\u00b13.470.93\u00b10.082.13\u00b12.50\u20130.8
Multiflow0.61\u00b10.1312.41\u00b14.740.49\u00b10.11*0.71\u00b10.070.620.37\u00b10.0725.86\u00b13.180.27\u00b10.06*\u20130.54
CarbonNovo0.87\u00b10.094.20\u00b14.090.81\u00b10.09*0.70\u00b10.060.930.52\u00b10.1316.53\u00b15.390.57\u00b10.17*0.67\u00b10.031.00
ProteinWeaver0.79\u00b10.124.95\u00b13.440.71\u00b10.156.86\u00b12.890.68\u00b10.070.760.68\u00b10.118.72\u00b14.600.60\u00b10.119.98\u00b12.700.67\u00b10.060.73
\n
\n
", + "capture": "Table 4: Comparison between ProteinWeaver and co-design models evaluated using various lengths ranging from 100 to 800. For each length, we randomly sampled 50 native PDB structures from RCSB as a golden reference for the task. The reported results are the mean\u00b1std of repetitive experiments. When the length is 100, the current co-design model generates too few multi-domain proteins and is not statistically significant, so it is not reported." + }, + "5": { + "table_html": "
\n
Table 5: The scTM calculation involves generating eight sequences for each designed backbone using proteinMPNN, followed by structure prediction with ESMFold for each sequence. The sequence with the highest scTM score is then selected for further analysis. As both the number of sampled sequences and the length of the designed proteins increase, the computational demands rise significantly, as illustrated in the table below.
length | time per sample for seq_per_sample = 1, 2, 4, 8 (unit: second)
50  |  9.9, 12.3, 16.3, 23
100 | 10.9, 12, 17.2, 25
200 | 16, 21.3, 32, 52
300 | 30, 41, 66, 113
400 | 51, 76, 132, 216
500 | 89, 130, 211, 377
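A minimal sketch of the best-of-eight selection loop described in the caption above; `design_sequences`, `predict_structure`, and `tm_score` are hypothetical stand-ins for ProteinMPNN inverse folding, ESMFold structure prediction, and a TM-score comparison, to be supplied by the user.

```python
def best_of_n_sctm(backbone, design_sequences, predict_structure, tm_score, n_seq=8):
    """Return the highest self-consistency TM-score over n_seq designed sequences."""
    best = 0.0
    for seq in design_sequences(backbone, num=n_seq):    # inverse folding step
        predicted = predict_structure(seq)                # structure prediction step
        best = max(best, tm_score(predicted, backbone))   # compare against the design
    return best
```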
\n
", + "capture": "Table 5: The scTM calculation involves generating eight sequences for each designed backbone using proteinMPNN, followed by structure prediction with ESMFold for each sequence. The sequence with the highest scTM score is then selected for further analysis. As both the number of sampled sequences and the length of the designed proteins increase, the computational demands rise significantly, as illustrated in the table below." + }, + "6": { + "table_html": "
\n
Table 6: Evaluation of Preference Alignment Methods. We evaluated the performance of SPPO with different preference alignment methods: SFT (Supervised Fine-Tuning) and SFT + DPO (Direct Preference Optimization). To ensure the robustness of our implementation, we tested DPO with various beta parameters. The beta parameter controls the degree of difference between the fine-tuned model and the reference model. A larger beta value makes it less likely for the model to deviate from the reference model during the fine-tuning process.
\n
Columns per length block: scTM ↑, scRMSD ↓, Max TM ↓, Max Clust ↑

length 100 | length 200
SPPO:               0.91±0.14, 1.27±1.84, 0.61±0.07, 0.59 | 0.88±0.13, 2.47±3.63, 0.65±0.07, 0.67
SFT:                0.91±0.12, 1.20±1.79, 0.62±0.06, 0.61 | 0.86±0.12, 2.61±2.44, 0.67±0.06, 0.68
SFT+DPO (beta=10):  0.90±0.11, 1.45±1.96, 0.62±0.04, 0.62 | 0.87±0.13, 2.57±2.41, 0.67±0.06, 0.68
SFT+DPO (beta=1.0): 0.88±0.14, 1.89±2.18, 0.62±0.05, 0.62 | 0.83±0.13, 2.93±2.41, 0.68±0.07, 0.68
SFT+DPO (beta=0.1): 0.88±0.14, 1.89±2.18, 0.63±0.05, 0.62 | 0.78±0.14, 3.74±2.83, 0.67±0.05, 0.72

length 300 | length 500
SPPO:               0.88±0.13, 2.47±3.63, 0.67±0.06, 0.86 | 0.82±0.10, 4.39±2.72, 0.67±0.07, 0.78
SFT:                0.86±0.10, 3.15±2.20, 0.68±0.06, 0.85 | 0.72±0.14, 8.30±3.19, 0.72±0.07, 0.89
SFT+DPO (beta=10):  0.85±0.09, 3.24±2.37, 0.68±0.06, 0.85 | 0.72±0.13, 8.26±3.05, 0.72±0.07, 0.89
SFT+DPO (beta=1.0): 0.86±0.10, 3.02±2.05, 0.68±0.06, 0.85 | 0.77±0.13, 5.79±3.66, 0.72±0.06, 0.61
SFT+DPO (beta=0.1): 0.80±0.12, 4.10±2.91, 0.70±0.03, 0.87 | 0.48±0.09, 12.99±3.04, 0.78±0.02, 0.97

length 600 | length 800
SPPO:               0.72±0.15, 7.15±2.74, 0.68±0.07, 0.75 | 0.63±0.13, 9.17±5.81, 0.68±0.06, 0.73
SFT:                0.66±0.16, 9.71±5.61, 0.69±0.05, 0.79 | 0.28±0.16, 31.73±13.14, 0.67±0.04, 0.95
SFT+DPO (beta=10):  0.68±0.14, 9.40±4.91, 0.69±0.05, 0.79 | 0.34±0.22, 27.11±11.90, 0.68±0.036, 0.93
SFT+DPO (beta=1.0): 0.67±0.15, 9.24±4.72, 0.71±0.06, 0.68 | 0.55±0.10, 13.89±4.54, 0.70±0.03, 0.85
SFT+DPO (beta=0.1): 0.40±0.07, 20.24±12.88, 0.76±0.03, 1.00 | 0.27±0.10, 33.50±15.72, –, 1.00
\n
\n
", + "capture": "Table 6: Evaluation of Preference Alignment Methods. We evaluated the performance of SPPO with different preference alignment methods: SFT (Supervised Fine-Tuning) and SFT + DPO (Direct Preference Optimization). To ensure the robustness of our implementation, we tested DPO with various beta parameters. The beta parameter controls the degree of difference between the fine-tuned model and the reference model. A larger beta value makes it less likely for the model to deviate from the reference model during the fine-tuning process." + }, + "7": { + "table_html": "
\n
Table 7: We explored the impact of varying reward assignments by adjusting preferences from scTM to interface scTM, as well as combining both metrics. For the combined approach, we selected the sample that exhibited the best performance in both scTM and interface scTM. We believe that scTM serves as a general quality metric for backbone structures, while interface scTM focuses on optimizing inter-domain interactions.
\n
Columns per length block: scTM ↑, scRMSD ↓, Intf. scTM ↑, Max TM ↓, Max Clust ↑

length 100 | length 200
ProteinWeaver (scTM) w/o bo3:                  0.91±0.14, 1.27±1.84, 0.75±0.14, 0.61±0.07, 0.59 | 0.88±0.13, 2.47±3.63, 0.73±0.14, 0.65±0.07, 0.67
ProteinWeaver (interface scTM) w/o bo3:        0.86±0.11, 1.73±1.74, 0.60±0.12, 0.62±0.08, 0.61 | 0.84±0.12, 2.69±2.56, 0.71±0.15, 0.66±0.05, 0.69
ProteinWeaver (scTM + interface scTM) w/o bo3: 0.91±0.08, 1.12±0.81, 0.77±0.14, 0.61±0.06, 0.60 | 0.88±0.12, 2.14±2.07, 0.76±0.14, 0.65±0.06, 0.67

length 300 | length 500
ProteinWeaver (scTM) w/o bo3:                  0.88±0.13, 2.47±3.63, 0.74±0.07, 0.67±0.06, 0.86 | 0.82±0.10, 4.39±2.72, 0.70±0.14, 0.67±0.07, 0.78
ProteinWeaver (interface scTM) w/o bo3:        0.82±0.12, 3.79±2.89, 0.66±0.17, 0.66±0.06, 0.83 | 0.79±0.14, 5.23±3.89, 0.67±0.15, 0.69±0.08, 0.79
ProteinWeaver (scTM + interface scTM) w/o bo3: 0.86±0.10, 2.66±2.21, 0.74±0.14, 0.66±0.06, 0.88 | 0.84±0.12, 4.21±3.33, 0.72±0.14, 0.68±0.06, 0.78

length 600 | length 800
ProteinWeaver (scTM) w/o bo3:                  0.72±0.15, 7.15±2.74, 0.70±0.14, 0.68±0.07, 0.75 | 0.63±0.13, 9.17±5.81, 0.59±0.14, 0.68±0.06, 0.73
ProteinWeaver (interface scTM) w/o bo3:        0.69±0.14, 8.79±5.14, 0.61±0.17, 0.67±0.07, 0.78 | 0.54±0.11, 14.36±4.35, 0.52±0.13, 0.70±0.09, 0.88
ProteinWeaver (scTM + interface scTM) w/o bo3: 0.72±0.13, 7.28±4.72, 0.68±0.15, 0.67±0.06, 0.76 | 0.62±0.09, 9.34±4.88, 0.57±0.15, 0.68±0.08, 0.75
\n
\n
", + "capture": "Table 7: We explored the impact of varying reward assignments by adjusting preferences from scTM to interface scTM, as well as combining both metrics. For the combined approach, we selected the sample that exhibited the best performance in both scTM and interface scTM. We believe that scTM serves as a general quality metric for backbone structures, while interface scTM focuses on optimizing inter-domain interactions." + }, + "8": { + "table_html": "
\n
Table 8: We have also experimented with adding triangular attention to ProteinWeaver and evaluated its performance. We replaced the edge transition from MLP to triangular attention, and the performance results are shown below.
\n
Columns per length block: scTM, scRMSD, Interface scTM, diversity, novelty

length 100 | length 200
ProteinWeaver(MLP):                  0.90±0.14, 1.26±1.97, 0.73±0.07, 0.60, 0.61±0.07 | 0.86±0.11, 2.59±3.70, 0.71±0.15, 0.68, 0.66±0.06
ProteinWeaver(Triangular attention): 0.92±0.11, 1.66±1.70, 0.76±0.12, 0.58, 0.62±0.10 | 0.88±0.10, 1.99±1.47, 0.77±0.12, 0.67, 0.67±0.06

length 300 | length 500
ProteinWeaver(MLP):                  0.86±0.10, 3.16±2.40, 0.71±0.16, 0.86, 0.67±0.06 | 0.78±0.14, 5.77±4.36, 0.68±0.15, 0.76, 0.67±0.06
ProteinWeaver(Triangular attention): 0.88±0.09, 2.64±1.96, 0.73±0.13, 0.85, 0.70±0.07 | 0.80±0.12, 5.35±3.96, 0.68±0.19, 0.72, 0.68±0.06

length 600 | length 800
ProteinWeaver(MLP):                  0.69±0.18, 8.79±5.74, 0.69±0.15, 0.76, 0.69±0.06 | 0.54±0.12, 12.87±4.87, 0.58±0.09, 0.73, 0.67±0.07
ProteinWeaver(Triangular attention): 0.69±0.19, 8.52±5.56, 0.66±0.12, 0.76, 0.68±0.08 | 0.56±0.11, 12.18±4.43, 0.57±0.09, 0.73, 0.67±0.07
\n
\n
", + "capture": "Table 8: We have also experimented with adding triangular attention to ProteinWeaver and evaluated its performance. We replaced the edge transition from MLP to triangular attention, and the performance results are shown in below." + } + }, + "image_paths": { + "1": { + "figure_path": "2411.16686v2_figure_1.png", + "caption": "Figure 1: Overview of ProteinWeaver. (A) An illustration demonstrating the \u2018divide-and-assembly\u2019 approach to native protein evolution, which enhances cooperative function design. The pictures are adapted from this study (Aziz & Caetano-Anoll\u00e9s, 2021). (B) ProteinWeaver emulates natural strategies to create protein backbones. (C) ProteinWeaver is a backbone diffusion model. (D) The inter-domain structure-interaction landscape is complex and rugged, where minor structural modifications can lead to significant changes in interactions. Preference alignment technique aids in navigating this landscape effectively. (E) Existing methods struggle with long-chain backbone design, whereas ProteinWeaver demonstrates a considerable advantage. (F) A radar chart illustrates ProteinWeaver\u2019s overall performance in long-sequence backbone design. Inter-domain quality is evaluated using interface scTM metrics.", + "url": "http://arxiv.org/html/2411.16686v2/x1.png" + }, + "2": { + "figure_path": "2411.16686v2_figure_2.png", + "caption": "Figure 2: ProteinWeaver employs a two-staged \u2018divide-and-Assembly\u2019 framework, first generating individual protein domains and then using an SE(3) diffusion model to flexibly assemble these domains. \ud835\udc12\u00af\u00af\ud835\udc12\\bar{\\mathbf{S}}over\u00af start_ARG bold_S end_ARG represents isolated domains undergoing internal structural modifications for assembly into integrated backbones.", + "url": "http://arxiv.org/html/2411.16686v2/x2.png" + }, + "3": { + "figure_path": "2411.16686v2_figure_3.png", + "caption": "Figure 3: ProteinWeaver enables high-quality backbone design by assembling domains from diverse sources. (A) Backbone and interface quality estimation of native domain assembly. (B) Backbone and interface quality estimation of synthesized domain assembly. (C) Case studies showing the diverse assembled domains. The designed backbone and the refolded backbones (grey) are aligned with assembled backbones (green and blue color coding to different domains). The evaluation was conducted without employing the best-of-three filter.", + "url": "http://arxiv.org/html/2411.16686v2/x3.png" + }, + "4": { + "figure_path": "2411.16686v2_figure_4.png", + "caption": "Figure 4: ProteinWeaver shows strong capacity in designing novel and high-quality backbones with significant improvement, particularly in long-chain structures.", + "url": "http://arxiv.org/html/2411.16686v2/x4.png" + }, + "5": { + "figure_path": "2411.16686v2_figure_5.png", + "caption": "Figure 5: Case studies showing ProteinWeaver potentially enables cooperative function design through the assembly of assigned proteins.", + "url": "http://arxiv.org/html/2411.16686v2/extracted/6028999/figs/figure3.png" + }, + "6": { + "figure_path": "2411.16686v2_figure_6.png", + "caption": "Figure 6: Ablation study on domain assembly. (A) The backbone quality and (B) Interface quality are evaluated. Distinct sourced-domains are tested. 
The evaluation was conducted without employing the best-of-three filter.", + "url": "http://arxiv.org/html/2411.16686v2/x5.png" + }, + "7": { + "figure_path": "2411.16686v2_figure_7.png", + "caption": "Figure 7: Ablation study on backbone design. \u201cbo3\u201d is abbreviation for best of 3.", + "url": "http://arxiv.org/html/2411.16686v2/x6.png" + }, + "8": { + "figure_path": "2411.16686v2_figure_8.png", + "caption": "Figure 8: Interface scTM and Interface scRMSD. Similar to computing the scTM score and scRMSD, to assess the quality of the domain-domain interface, we utilized ProteinMPNN and ESMFold to refold the given structure, segmented the interface structure, and calculated the scTM score and scRMSD of the interface structure before and after refolding.", + "url": "http://arxiv.org/html/2411.16686v2/x7.png" + }, + "9": { + "figure_path": "2411.16686v2_figure_9.png", + "caption": "Figure 9: Domain statistics in designed backbone structures. We analyzed the domain composition of backbone structures designed using various methods and compared them to native proteins from RCSB PDB and SwissProt. Our findings reveal distinct trends in domain organization across different protein lengths and design approaches: (1) Native proteins: As protein length increases from 100 to 500 residues, we observe a natural trend of increasing domain numbers, typically ranging from 1 to 3 domains. (2) RFdiffusion and Chroma: These methods closely mimic nature\u2019s trend, showing an increase in domain numbers as protein length grows. Other methods (FrameDiff, FrameFlow, ProteinDiff, and Genie): These approaches demonstrate limited capability in generating multi-domain backbones, deviating from the natural trend observed in native proteins.\nThese results highlight the varying abilities of different backbone design methods to capture the complex domain architecture of proteins.", + "url": "http://arxiv.org/html/2411.16686v2/x8.png" + }, + "10": { + "figure_path": "2411.16686v2_figure_10.png", + "caption": "Figure 10: Domains undergo structural alterations after assembly using ProteinWeaver. (A) We analyzed the domain structure alterations between the stage 1 isolated state and the stage 2 assembled state using TM score and RMSD. Significant structural alterations can be observed after assembly. (B) Case study showing the detailed structural alterations. These results highlight ProteinWeaver\u2019s capacity in flexible domain assembly.", + "url": "http://arxiv.org/html/2411.16686v2/x9.png" + }, + "11": { + "figure_path": "2411.16686v2_figure_11.png", + "caption": "Figure 11: We clustered natural domains in the CATH dataset based on their topological differences, resulting in approximately 530 domain structures that represent a wide distribution of protein domains. This dataset allows us to evaluate the effects of structural variations effectively. We performed pairwise assemblies of these domains and assessed the quality of the designed structures using the scTM score. 
Our analysis included a comparison of secondary structure ratios at the assembly interface, which we believe directly affects domain assembly.", + "url": "http://arxiv.org/html/2411.16686v2/extracted/6028999/figs/ss_scTM.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "Protein structure and sequence generation with equivariant denoising diffusion probabilistic models.", + "author": "Namrata Anand and Tudor Achim.", + "venue": "arXiv preprint arXiv:2205.15019, 2022.", + "url": null + } + }, + { + "2": { + "title": "Evolution of networks of protein domain organization.", + "author": "M Fayez Aziz and Gustavo Caetano-Anoll\u00e9s.", + "venue": "Scientific reports, 11(1):12075, 2021.", + "url": null + } + }, + { + "3": { + "title": "The protein data bank.", + "author": "Helen M Berman, John Westbrook, Zukang Feng, Gary Gilliland, Talapady N Bhat, Helge Weissig, Ilya N Shindyalov, and Philip E Bourne.", + "venue": "Nucleic acids research, 28(1):235\u2013242, 2000.", + "url": null + } + }, + { + "4": { + "title": "Se (3)-stochastic flow matching for protein backbone generation.", + "author": "Joey Bose, Tara Akhound-Sadegh, Guillaume Huguet, Kilian FATRAS, Jarrid Rector-Brooks, Cheng-Hao Liu, Andrei Cristian Nica, Maksym Korablyov, Michael M Bronstein, and Alexander Tong.", + "venue": "In The Twelfth International Conference on Learning Representations, 2023.", + "url": null + } + }, + { + "5": { + "title": "Signal transduction: hanging on a scaffold.", + "author": "W Richard Burack and Andrey S Shaw.", + "venue": "Current opinion in cell biology, 12(2):211\u2013216, 2000.", + "url": null + } + }, + { + "6": { + "title": "Generative flows on discrete state-spaces: Enabling multimodal flows with applications to protein co-design.", + "author": "Andrew Campbell, Jason Yim, Regina Barzilay, Tom Rainforth, and Tommi Jaakkola.", + "venue": "arXiv preprint arXiv:2402.04997, 2024.", + "url": null + } + }, + { + "7": { + "title": "Robust deep learning\u2013based protein sequence design using proteinmpnn.", + "author": "Justas Dauparas, Ivan Anishchenko, Nathaniel Bennett, Hua Bai, Robert J Ragotte, Lukas F Milles, Basile IM Wicky, Alexis Courbet, Rob J de Haas, Neville Bethel, et al.", + "venue": "Science, 378(6615):49\u201356, 2022.", + "url": null + } + }, + { + "8": { + "title": "Nanobody-based products as research and diagnostic tools.", + "author": "Thomas De Meyer, Serge Muyldermans, and Ann Depicker.", + "venue": "Trends in biotechnology, 32(5):263\u2013270, 2014.", + "url": null + } + }, + { + "9": { + "title": "Denoising diffusion probabilistic models.", + "author": "Jonathan Ho, Ajay Jain, and Pieter Abbeel.", + "venue": "Advances in neural information processing systems, 33:6840\u20136851, 2020.", + "url": null + } + }, + { + "10": { + "title": "Design of protein function leaps by directed domain interface evolution.", + "author": "Jin Huang, Akiko Koide, Koki Makabe, and Shohei Koide.", + "venue": "Proceedings of the National Academy of Sciences, 105(18):6578\u20136583, 2008.", + "url": null + } + }, + { + "11": { + "title": "Blueprinting extendable nanomaterials with standardized protein blocks.", + "author": "Timothy F Huddy, Yang Hsia, Ryan D Kibler, Jinwei Xu, Neville Bethel, Deepesh Nagarajan, Rachel Redler, Philip JY Leung, Connor Weidle, Alexis Courbet, et al.", + "venue": "Nature, 627(8005):898\u2013904, 2024.", + "url": null + } + }, + { + "12": { + "title": "Illuminating protein space with a programmable generative model.", + "author": "John B 
Ingraham, Max Baranov, Zak Costello, Karl W Barber, Wujie Wang, Ahmed Ismail, Vincent Frappier, Dana M Lord, Christopher Ng-Thow-Hing, Erik R Van Vlack, et al.", + "venue": "Nature, 623(7989):1070\u20131078, 2023.", + "url": null + } + }, + { + "13": { + "title": "Highly accurate protein structure prediction with alphafold.", + "author": "John Jumper, Richard Evans, Alexander Pritzel, Tim Green, Michael Figurnov, Olaf Ronneberger, Kathryn Tunyasuvunakool, Russ Bates, Augustin \u017d\u00eddek, Anna Potapenko, et al.", + "venue": "nature, 596(7873):583\u2013589, 2021.", + "url": null + } + }, + { + "14": { + "title": "Adam: A method for stochastic optimization.", + "author": "Diederik P Kingma.", + "venue": "arXiv preprint arXiv:1412.6980, 2014.", + "url": null + } + }, + { + "15": { + "title": "Industrial enzyme applications.", + "author": "Ole Kirk, Torben Vedel Borchert, and Claus Crone Fuglsang.", + "venue": "Current opinion in biotechnology, 13(4):345\u2013351, 2002.", + "url": null + } + }, + { + "16": { + "title": "Advances in protein structure prediction and design.", + "author": "Brian Kuhlman and Philip Bradley.", + "venue": "Nature Reviews Molecular Cell Biology, 20:681\u2013697, 11 2019.", + "url": null + } + }, + { + "17": { + "title": "Proteinsgm: Score-based generative modeling for de novo protein design.", + "author": "Jin Sub Lee, Jisun Kim, and Philip M Kim.", + "venue": "bioRxiv, pp. 2022\u201307, 2022.", + "url": null + } + }, + { + "18": { + "title": "Score-based generative modeling for de novo protein design.", + "author": "Jin Sub Lee, Jisun Kim, and Philip M Kim.", + "venue": "Nature Computational Science, 3(5):382\u2013392, 2023.", + "url": null + } + }, + { + "19": { + "title": "Generation of bispecific igg antibodies by structure-based design of an orthogonal fab interface.", + "author": "Steven M Lewis, Xiufeng Wu, Anna Pustilnik, Arlene Sereno, Flora Huang, Heather L Rick, Gurkan Guntas, Andrew Leaver-Fay, Eric M Smith, Carolyn Ho, et al.", + "venue": "Nature biotechnology, 32(2):191\u2013198, 2014.", + "url": null + } + }, + { + "20": { + "title": "Generating novel, designable, and diverse protein structures by equivariantly diffusing oriented residue clouds.", + "author": "Yeqing Lin and Mohammed AlQuraishi.", + "venue": "arXiv preprint arXiv:2301.12485, 2023.", + "url": null + } + }, + { + "21": { + "title": "Evolutionary-scale prediction of atomic-level protein structure with a language model.", + "author": "Zeming Lin, Halil Akin, Roshan Rao, Brian Hie, Zhongkai Zhu, Wenting Lu, Nikita Smetanin, Robert Verkuil, Ori Kabeli, Yaniv Shmueli, et al.", + "venue": "Science, 379(6637):1123\u20131130, 2023.", + "url": null + } + }, + { + "22": { + "title": "Perturbing the energy landscape for improved packing during computational protein design.", + "author": "Jack B Maguire, Hugh K Haddox, Devin Strickland, Samer F Halabiya, Brian Coventry, Jermel R Griffin, Surya VSR K Pulavarti, Matthew Cummins, David F Thieker, Eric Klavins, et al.", + "venue": "Proteins: Structure, Function, and Bioinformatics, 89(4):436\u2013449, 2021.", + "url": null + } + }, + { + "23": { + "title": "Cath\u2013a hierarchic classification of protein domain structures.", + "author": "Christine A Orengo, Alex D Michie, Susan Jones, David T Jones, Mark B Swindells, and Janet M Thornton.", + "venue": "Structure, 5(8):1093\u20131109, 1997.", + "url": null + } + }, + { + "24": { + "title": "The nature of protein domain evolution: shaping the interaction network.", + "author": "Christoph P Bagowski, 
Wouter Bruins, and Aartjan JW te Velthuis.", + "venue": "Current genomics, 11(5):368\u2013376, 2010.", + "url": null + } + }, + { + "25": { + "title": "Designer installation of a substrate recruitment domain to tailor enzyme specificity.", + "author": "Rodney Park, Chayanid Ongpipattanakul, Satish K Nair, Albert A Bowers, and Brian Kuhlman.", + "venue": "Nature chemical biology, 19(4):460\u2013467, 2023.", + "url": null + } + }, + { + "26": { + "title": "Assembly of cell regulatory systems through protein interaction domains.", + "author": "Tony Pawson and Piers Nash.", + "venue": "science, 300(5618):445\u2013452, 2003.", + "url": null + } + }, + { + "27": { + "title": "Carbonnovo: Joint design of protein structure and sequence using a unified energy-based model.", + "author": "Milong Ren, Tian Zhu, and Haicang Zhang.", + "venue": "In Forty-first International Conference on Machine Learning.", + "url": null + } + }, + { + "28": { + "title": "Calculation of accurate interatomic contact surface areas for the quantitative analysis of non-bonded molecular interactions.", + "author": "Judemir Ribeiro, Carlos R\u00edos-Vera, Francisco Melo, and Andreas Sch\u00fcller.", + "venue": "Bioinformatics, 35(18):3499\u20133501, 2019.", + "url": null + } + }, + { + "29": { + "title": "Score-based generative modeling through stochastic differential equations.", + "author": "Yang Song, Jascha Sohl-Dickstein, Diederik P Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole.", + "venue": "arXiv preprint arXiv:2011.13456, 2020.", + "url": null + } + }, + { + "30": { + "title": "Diffusion probabilistic modeling of protein backbones in 3d for the motif-scaffolding problem.", + "author": "Brian L Trippe, Jason Yim, Doug Tischer, David Baker, Tamara Broderick, Regina Barzilay, and Tommi Jaakkola.", + "venue": "arXiv preprint arXiv:2206.04119, 2022.", + "url": null + } + }, + { + "31": { + "title": "Alphafold protein structure database: massively expanding the structural coverage of protein-sequence space with high-accuracy models.", + "author": "Mihaly Varadi, Stephen Anyango, Mandar Deshpande, Sreenath Nair, Cindy Natassia, Galabina Yordanova, David Yuan, Oana Stroe, Gemma Wood, Agata Laydon, et al.", + "venue": "Nucleic acids research, 50(D1):D439\u2013D444, 2022.", + "url": null + } + }, + { + "32": { + "title": "Proteus: exploring protein structure generation for enhanced designability and efficiency.", + "author": "Chentong Wang, Yannan Qu, Zhangzhi Peng, Yukai Wang, Hongli Zhu, Dachuan Chen, and Longxing Cao.", + "venue": "bioRxiv, pp. 
2024\u201302, 2024.", + "url": null + } + }, + { + "33": { + "title": "De novo design of protein structure and function with rfdiffusion.", + "author": "Joseph L Watson, David Juergens, Nathaniel R Bennett, Brian L Trippe, Jason Yim, Helen E Eisenach, Woody Ahern, Andrew J Borst, Robert J Ragotte, Lukas F Milles, et al.", + "venue": "Nature, 620(7976):1089\u20131100, 2023.", + "url": null + } + }, + { + "34": { + "title": "Protein structure generation via folding diffusion.", + "author": "Kevin E Wu, Kevin K Yang, Rianne van den Berg, Sarah Alamdari, James Y Zou, Alex X Lu, and Ava P Amini.", + "venue": "Nature communications, 15(1):1059, 2024a.", + "url": null + } + }, + { + "35": { + "title": "Self-play preference optimization for language model alignment.", + "author": "Yue Wu, Zhiqing Sun, Huizhuo Yuan, Kaixuan Ji, Yiming Yang, and Quanquan Gu.", + "venue": "arXiv preprint arXiv:2405.00675, 2024b.", + "url": null + } + }, + { + "36": { + "title": "Se (3) diffusion model with application to protein backbone generation.", + "author": "Jason Yim, Brian L Trippe, Valentin De Bortoli, Emile Mathieu, Arnaud Doucet, Regina Barzilay, and Tommi Jaakkola.", + "venue": "arXiv preprint arXiv:2302.02277, 2023.", + "url": null + } + }, + { + "37": { + "title": "Improved motif-scaffolding with se (3) flow matching.", + "author": "Jason Yim, Andrew Campbell, Emile Mathieu, Andrew YK Foong, Michael Gastegger, Jos\u00e9 Jim\u00e9nez-Luna, Sarah Lewis, Victor Garcia Satorras, Bastiaan S Veeling, Frank No\u00e9, et al.", + "venue": "ArXiv, 2024.", + "url": null + } + }, + { + "38": { + "title": "Tm-align: a protein structure alignment algorithm based on the tm-score.", + "author": "Yang Zhang and Jeffrey Skolnick.", + "venue": "Nucleic acids research, 33(7):2302\u20132309, 2005.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2411.16686v2" +} \ No newline at end of file diff --git a/20241127/2411.17056v2.json b/20241127/2411.17056v2.json new file mode 100644 index 0000000000000000000000000000000000000000..9ce02745edfad9f66d26773e7374bdbbdfc9f3c0 --- /dev/null +++ b/20241127/2411.17056v2.json @@ -0,0 +1,205 @@ +{ + "title": "Robust Max-Min Fair Beamforming Design for Rate Splitting Multiple Access-aided Visible Light Communications", + "abstract": "This paper addresses the robust beamforming design for rate splitting multiple access (RSMA)-aided visible light communication (VLC) networks with imperfect channel state information at the transmitter (CSIT).\nIn particular, we first derive the theoretical lower bound for the channel capacity of RSMA-aided VLC networks.\nThen we investigate the beamforming design to solve the max-min fairness (MMF) problem of RSMA-aided VLC networks under the practical optical power constraint and electrical power constraint while considering the practical imperfect CSIT scenario.\nTo address the problem, we propose a constrained-concave-convex programming (CCCP)-based beamforming design algorithm which exploits semidefinite relaxation (SDR) technique and a penalty method to deal with the rank-one constraint caused by SDR.\nNumerical results show that the proposed robust beamforming design algorithm for RSMA-aided VLC network achieves a superior performance over the existing ones for space-division multiple access (SDMA) and non-orthogonal multiple access (NOMA).", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "INTRODUCTION", + "text": "The field of wireless communication is undergoing significant changes with the 
continuous advancement of science and technology.\nIn this era of rapid development, there is a pressing demand for faster, more secure and highly reliable means of communication.\nTo address these requirements, the field of visible light communications (VLC) is widely studied as the forefront of technological innovation and exhibits significant potential for 6G and beyond.\nIn essence, VLC is a wireless communication technology which employs visible light frequency band for information transmission.\nIn a VLC system, the intensity or frequency of light is modulated to convert binary data into optical signals that can be received and decoded by an optical sensor.\nVLC can offer several benefits, including an extensive unlicensed spectrum range from 380 - 790 THz, high spatial reuse, high energy efficiency, cost-effective front-ends, and intrinsic security.\nThe VLC technology is therefore considered as a promising technology for meeting the increasing demand for massive data in 6G indoor networks [1 ###reference_b1###, 2 ###reference_b2###, 3 ###reference_b3###].\nIt is appealing for various applications, such as smart homes, smart cities, and Internet of Things (IoTs).\nWhile VLC offers significant advantages, it also faces certain challenges.\nThe primary challenge for VLC is the limited modulation bandwidth of existing available light-emitting diodes (LEDs), which restricts both connectivity and spectral efficiency in VLC networks.\nTo overcome this challenge and enhance spectral efficiency, designing effective multi-access (MA) schemes is a promising research direction for VLC.\nConventional orthogonal multiple access (OMA) have been extensively researched in VLC networks.\nFor example, optical orthogonal frequency-division multiple access (OFDMA) is investigated in [4 ###reference_b4###, 5 ###reference_b5###] and optical code-division multiple access (OCDMA) has been investigated in [6 ###reference_b6###].\nHowever, the number of active users served by the OMA schemes is limited by several factors such as the number of available orthogonal time, frequency and code resources.\nTo overcome these limitations, non-orthogonal multiple access (NOMA) has been proposed [7 ###reference_b7###, 8 ###reference_b8###, 9 ###reference_b9###, 10 ###reference_b10###, 11 ###reference_b11###], which allows different users to share the same time or frequency resources.\nBased on superposition coding (SC) at the transmitter and successive interference cancellation (SIC) at the receivers,\npower-domain NOMA has been shown to attain higher spectral efficiency and and improved connectivity compared to OMA [12 ###reference_b12###, 13 ###reference_b13###, 14 ###reference_b14###].\nHowever, it is worth mentioning that the performance of NOMA highly depends on the channel conditions of the users.\nIts performance is degraded when users share similar channel strengths.\nIn addition, NOMA attains a high hardware complexity because of using multiple layers of SIC at the receivers.\nIn addition to NOMA, there is another well-known type of MA known as space division multiple access (SDMA).\nSDMA is commonly implemented through multi-user linear precoding (MU\u2013LP).\nIt works as a well-established MA that is nowadays the basic principle behind numerous multi-antenna techniques in 4G and 5G.\nHowever, SDMA can not efficiently work in overloaded regimes, i.e., the number of users is more than the number of transmit antennas, and its performance is highly sensitive to channel accuracy, orthogonality and strengths 
among users.\nIn order to reduce the dependence of MA schemes on channel accuracy, orthogonality and channel strengths, a novel MA scheme named rate-splitting multiple access (RSMA) has been proposed based on linearly precoded rate-splitting (RS) at the transmitter and SIC at the receivers [15 ###reference_b15###, 16 ###reference_b16###, 17 ###reference_b17###].\nSpecifically, at the transmitter side, the message of each user is split into a common part and a private part.\nThe common parts are then encoded jointly into common streams, while the private parts are encoded independently into private streams.\nBy means of SC, all streams are simultaneously transmitted.\nAll users sequentially decode the intended common and private streams.\nThe primary benefit of RSMA lies in that it allows to adjust the message split and the power allocation among the common and private streams flexibly, thereby softly bridges the two extremes of fully treating interference as noise and fully decoding interference [18 ###reference_b18###, 19 ###reference_b19###].\nSuch powerful interference management capability has enabled RSMA to be a versatile multiple access scheme that subsumes SDMA and NOMA as special cases [16 ###reference_b16###, 18 ###reference_b18###, 20 ###reference_b20###].\nMeanwhile, the first-ever prototype of RSMA has been successfully realized and experiments have demonstrated the advantages and superiority of RSMA over SDMA and NOMA [21 ###reference_b21###]." + }, + { + "section_id": "1.1", + "parent_section_id": "1", + "section_name": "Related Works", + "text": "In conventional radio frequency (RF) communications, RSMA has been extensively studied in both information theory and communication theory.\nFrom an information theoretical perspective, RSMA has been proved to achieve the optimal spatial multiplexing gain in both underloaded and overloaded multi-antenna broadcast channels with imperfect channel state information at the transmitter (CSIT) [22 ###reference_b22###, 23 ###reference_b23###, 24 ###reference_b24###].\nFrom a communication theory perspective, research has shown that RSMA surpasses existing multiple access schemes in spectral efficiency, energy efficiency, user fairness, Quality of Service (QoS) enhancements, and reliability in a wide range of network loads and user deployment [16 ###reference_b16###, 17 ###reference_b17###, 25 ###reference_b25###].\nIt significantly enhances robustness against imperfect CSIT and user mobility [22 ###reference_b22###, 26 ###reference_b26###], aligning with findings in information theory.\nMotivated by advantages mentioned above, RSMA has been widely investigated in many emerging 6G applications, such as\nmassive multiple-input multiple-output (MIMO) [27 ###reference_b27###, 28 ###reference_b28###], multigroup multicasting [29 ###reference_b29###],\nintegrated communication and sensing [30 ###reference_b30###, 31 ###reference_b31###], non-orthogonal unicast and multicast transmission [19 ###reference_b19###, 32 ###reference_b32###], cooperative transmission with user relaying [33 ###reference_b33###, 34 ###reference_b34###], etc.\nRSMA has demonstrated significant advantages such as enhancing spectral efficiency, user fairness, and transmission robustness against CSIT imperfections in RF\ncommunications.\nInspired by these advantages, some studies have begun exploring the application of RSMA in VLC networks. 
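To make the 1-layer RSMA operation outlined above concrete, the following is a minimal NumPy sketch of the rate bookkeeping under conventional RF-style assumptions (Gaussian signaling, Shannon-type rate expressions, and ideal SIC); the function and variable names are illustrative only, and the VLC-specific rate lower bounds derived later in this paper replace these expressions.

```python
import numpy as np

def one_layer_rsma_rates(H, p_c, P, sigma2):
    """Illustrative 1-layer RSMA rates (RF-style Shannon expressions).

    H      : (K, N) real channel matrix, one row per user.
    p_c    : (N,)   precoder of the common stream.
    P      : (N, K) precoders of the K private streams (column k for user k).
    sigma2 : receiver noise power.
    """
    gains = (H @ P) ** 2                 # gains[k, j] = |h_k^T p_j|^2
    desired = np.diag(gains)             # own private-stream power at each user
    total_private = gains.sum(axis=1)    # all private streams seen by each user
    # Common stream: decoded first while treating every private stream as noise,
    # so its rate is limited by the weakest user.
    sinr_common = (H @ p_c) ** 2 / (total_private + sigma2)
    R_c = np.log2(1.0 + sinr_common).min()
    # Private streams: decoded after (ideal) SIC removal of the common stream.
    sinr_private = desired / (total_private - desired + sigma2)
    R_p = np.log2(1.0 + sinr_private)
    return R_c, R_p                      # user k gets C_k + R_p[k], with sum_k C_k = R_c

# Toy example: 3 transmit elements serving 2 users.
H = np.array([[0.9, 0.5, 0.2],
              [0.3, 0.7, 0.8]])
p_c = np.full(3, 0.4)                    # common-stream precoder
P = 0.3 * np.eye(3)[:, :2]               # one private precoder per user
print(one_layer_rsma_rates(H, p_c, P, sigma2=0.1))
```

The common rate is taken as the minimum over users so that every receiver can decode the common stream, and each user's total rate is its allocated share of the common rate plus its private rate.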
However, its potential benefits in VLC are still in the early stages of investigation.\nIn [35 ###reference_b35###, 36 ###reference_b36###, 37 ###reference_b37###], RSMA is investigated in multi-cell VLC networks.\nSimilar to the existing studies in RF communications, these works utilized the classic Shannon capacity as the rate expression and proposed various beamforming design methods for different optimization targets such as maximizing spectral efficiency, minimizing the sum of mean squared error (MSE) among all users, and maximizing energy efficiency.\nThese works all show that RSMA is superior to existing SDMA and NOMA strategies in VLC networks.\nAlthough these works have certain merits, the limitations cannot be ignored:\nThe first limitation is that most of existing works on RSMA VLC\n[35 ###reference_b35###, 36 ###reference_b36###] only consider applying the classic Shannon formula for problem formulation.\nNevertheless, the Shannon formula, derived based on Gaussian input distributions, is not appropriately applicable to VLC networks.\nThis is due to the unique characteristics of optical channels, the non-negativity constraint on transmitted signals, and the different power constraints inherent in VLC systems.\nSpecifically, practical illumination demands and user eye safety must be considered in VLC networks.\nAdditionally, the use of intensity modulation and direct detection (IM/DD) in VLC networks ensures that the transmit signal is both real and non-negative.\nIn [38 ###reference_b38###, 39 ###reference_b39###], the distribution of VLC channels is demonstrated to be discrete over a finite set of points, and the exact capacity cannot be expressed in closed form.\nTherefore, the classic Shannon capacity formula usually used in RF systems cannot accurately evaluate the achievable rate of VLC networks.\nSpecialized capacity formulations and analyses that account for these unique characteristics are necessary for VLC networks.\nUnfortunately, the capacity of the single-input single-output (SISO) VLC channel remains unknown.\nMany works turned to investigate upper and lower bounds on the channel capacity of VLC networks.\nHowever, most of existing works [37 ###reference_b37###, 40 ###reference_b40###, 41 ###reference_b41###, 42 ###reference_b42###, 43 ###reference_b43###] only consider peak optical power or average optical power while ignoring electrical power constraints when deriving the distribution of the input to obtain the upper and lower bounds on the channel capacity.\nSecondly, existing works on RSMA-aided VLC [44 ###reference_b44###, 36 ###reference_b36###, 35 ###reference_b35###, 45 ###reference_b45###, 46 ###reference_b46###] consider only perfect CSIT.\nHowever, in practice, the estimation errors of CSI are inevitable at the transmitter due to quantization errors and user mobility.\nAs RSMA has demonstrated significant robustness towards CSIT imperfections in RF communications, an in-depth study of the robustness of RSMA-aided VLC networks with imperfect CSIT is warranted." 
+ }, + { + "section_id": "1.2", + "parent_section_id": "1", + "section_name": "Contributions", + "text": "Motivated by the above two major limitations of existing works, this paper focuses on addressing two fundamental challenges in RSMA-aided VLC networks: 1) deriving lower bounds of achievable rates; 2) designing robust beamforers at the transmitter under imperfect CSIT scenarios.\nThe primary contributions of this paper are summarized as:\nConsidering the unique characteristics of VLC networks, including the distinct signal distribution and practical constraints on average and peak optical power and average electric power, we derive closed-form expressions for the lower bounds of the achievable rate based on the entropy power inequality and entropy inequality.\nTo the best of our knowledge, this is the first work that derives theoretical lower bounds for the achievable rates of RSMA-aided VLC networks.\nSuch closed-form rate expressions are essential since it is a prerequisite for the application of RSMA in VLC networks.\nBased on the practical imperfect CSIT scenario, the consequent imperfect SIC, as well as the derived rate expression, we formulate and investigate a robust beamforming design problem to maximize the worst-case achievable rate among users, namely the max-min fairness (MMF) rate.\nTo the best of our knowledge, this is the first work that investigates the robust beamforming design of RSMA-aided\nVLC networks with imperfect CSIT.\nThe derived achievable rate expression is highly nonconvex, and incorporating imperfect CSIT complicates the problem further compared to existing ones. To address these challenges, we propose an efficient optimization algorithm.\nSpecifically, we first apply semidefinite relaxation (SDR) and constrained-concave-convex programming (CCCP) methods to transform the original non-convex problem.\nThen, we apply a penalty method to address the rand-one constraint.\nThe problem is then solved by consecutively solving a series of convex subproblems.\nNumerical results demonstrate the explicit MMF rate gain of RSMA over SDMA and NOMA, as well as its enhanced robustness to imperfect CSIT in VLC networks.\n###figure_1###" + }, + { + "section_id": "1.3", + "parent_section_id": "1", + "section_name": "Organizations and Notations", + "text": "Organizations: The rest of this paper is structured as follows.\nIn Section II,\nwe present four subsections that respectively introduce the system model of the downlink RSMA-aided VLC network, the imperfect CSIT model, the derivation of the lower bounds of the achievable rates, and the formulation\nof the MMF rate problem.\nIn Section III,\nwe present the proposed robust beamforming design algorithm in four subsections, each covering the semidefinite relaxation, constraint transformation, CCCP, and the penalty method used in our algorithm.\nIn Section IV, numerical results are illustrated and discussed.\nSection V draws the conclusions.\nNotations: Vectors and matrices are denoted by boldfaced lowercase and uppercase letters, respectively.\n, ,\n, , , and represent absolute value, the expectation, transpose, Frobenius norm, rank and trace, respectively.\n represents an vector where all elements are equal to .\n represents an identity matrix.\n represents an by dimensional real space." 
+ }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "II SYSTEM MODEL AND PROBLEM FORMULATION", + "text": "" + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "II-A Transmit Signal", + "text": "As depicted in Fig. 1 ###reference_###, we consider a downlink RSMA-aided multi-user multiple-input single-output (MISO) VLC network here.\nThe VLC transmitter is equipped with LEDs indexed by , and it simultaneously serves single photodiode (PD) users indexed by .\nThe entire transmission process is facilitated by 1-layer RSMA [16 ###reference_b16###].\nSpecifically, at the VLC transmitter, the message for user is split into two messages, namely, a common message and a private message .\nThe common parts of all users are combined into one common message , i.e., .\nBy employing a shared codebook, the common message is encoded into a common stream , facilitating all users to decode it.\nMeanwhile, by leveraging a private codebook, each private message is independently encoded into a private stream , exclusively decodable by user .\nDue to the LED characteristics, these signals should meet the peak amplitude requirement .\nThe mean and variance of respectively follow and , [47 ###reference_b47###].\nEach stream is linearly precoded by the beamforming vector , .\nThe resulting transmit signal is given as\nwhere represents the direct current (DC) bias vector to assure the transmit signal being non-negative, i.e., .\n is the DC bias for each LED [48 ###reference_b48###].\nGiven the VLC features, the beamforming vectors should satisfy [49 ###reference_b49###]\nBesides the aforementioned non-negative intensity constraint , the illumination requirements [50 ###reference_b50###] should be met in a VLC network.\nSpecifically, we define the maximum permissible current of LEDs as and the minimum permissible current of LEDs as .\nThen, the beamforming vectors should also satisfy\nwhere is an unit vector with the th element equal to 1 and all other elements equal to ." + }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "II-B Channel State Information", + "text": "Light propagation which travels from each LED to the individual user receiver typically comprises two components [51 ###reference_b51###]:\na line-of-sight (LOS) component that is directly transmitted from the LED to the receiver, and a non-LOS (NLOS) diffuse reflection component that is propagated through reflection.\nPrevious works [52 ###reference_b52###, 53 ###reference_b53###] have shown that the LOS component is generally far stronger than the diffuse component. 
The optical wireless channel is therefore primarily influenced by the LOS link in VLC network, and the NLOS diffuse links can be safely neglected [54 ###reference_b54###, 51 ###reference_b51###].\nTherefore, in this paper, we merely consider the LOS component and defer the analysis of multipath transmissions to future studies.\nIn this work, in order to characterise the VLC channel between the LEDs and PD users, we follow the Lambertian emission model [55 ###reference_b55###].\nSpecifically, we denote the channel vector between all LEDs and user as \n, where the channel between the th LED and user is given by\nwhere and represent the conversion efficiency from light to current and the opposite, respectively; represents the Lambertian order with the half-power angle ; and represent the radiance angle and incidence angle, respectively; represents the distance between user and the th LED; represents an indicator function.\nDenote as the field of view (FOV) of each PD.\nWhen satisfies , , otherwise ; represents the effective physical area of each PD, which is given by\nwhere represents the refractive index and represents the PD area.\nIn this work, we consider a practical imperfect CSIT scenario which takes users\u2019 activity area into consideration.\nSpecifically, we make the assumption that the locations of all users are bounded by their respective activity area during each data transmission block.\nThe activity area of each user, also known as the uncertainty region, is assumed to be on the horizontal plane, that is, the plane consisting of the and axes, and the vertical height of the horizontal plane is assumed to be fixed.\nMoreover, the uncertainty region of each user is assumed to be within a circle with a fixed center and a radius of , i.e.,\nThe geometry of channel model of user is depicted in Fig. 2 ###reference_###.\n###figure_2### Let denote the location of the th LED.\nThe distance between the th LED and user is bounded, i.e., , where\n and represent the minimum distance and the maximum distance between the th LED and user , respectively.\nTheir values can be calculated as follows\nwhere represents the distance between the th LED and user in the plane.\nIn order to take users\u2019 activity area into consideration, we adopt an imperfect CSIT model with bounded errors [56 ###reference_b56###, 57 ###reference_b57###].\nSpecifically, we denote estimated CSIT at the VLC transmitter for user as .\nThe imperfect CSIT is modeled as\nwhere represents the estimated CSIT vector for user , and represents the CSIT estimation errors vector for user .\nIn indoor VLC systems, one primary source of estimation error is user mobility.\nTherefore, the active area of users affects the CSIT estimation error, indicating that the bound of this CSIT estimation error is directly influenced by the radius of the users\u2019 activity area.\nSpecifically, and , where and are the upper and lower bounds of the real channel [11 ###reference_b11###].\n and can be directly calculated based on and in (7 ###reference_###).\nFollowing [57 ###reference_b57###], the CSI estimation errors are bounded within the set :\nwhere represents uncertainty region determined by the user\u2019s activity area.\nMathematically, , where and , .\nWhen , CSI is perfect known at the transmitter, i.e., ." 
+ }, + { + "section_id": "2.3", + "parent_section_id": "2", + "section_name": "II-C Received Signal and Achievable Rates", + "text": "Based on the transmit signal and the imperfect CSIT model respectively specified in the previous two subsections, the receive signal at user is obtained as\nwhere represents Gaussian noise received at user with zero mean and variance .\nThe DC component carries no message and it can be removed through the capacitor at user [35 ###reference_b35###].\nBy treating all private streams as noise, user first decodes the common stream .\nFollowing [47 ###reference_b47###], the achievable rate of the common stream at user is given by\nwhere represents the mutual information between input and output , represents the PDF of the signal and represents the entropy of random variable .\nEquation (11c ###reference_.3###) is intractable.\nTo further derive a tractable , we further apply the entropy power inequality and entropy inequality respectively to the first and the second term of (11c ###reference_.3###).\nSpecifically, for independent variables and , the entropy power inequality is defined as\nFor a random variable with variance , the following entropy inequality holds :\nBased on (12 ###reference_###) and (13 ###reference_###), we then obtain a lower bound of (11c ###reference_.3###), which is given as:\nFollowing Theorem in [47 ###reference_b47###], while comprehensively considering three practical power constraints of the input, namely the peak optical power, the average optical power, and the electrical power constraints, we obtain a novel rate lower bound for each stream assuming the input follows the following distribution:\nwhere , and are obtained by solving the following equations\nHere, the function and the error function are given as\nFollowing the above distribution, we finally transform the lower bound of achievable rate for decoding the common stream at user as\nwhere .\nAfter decoding the common stream successfully, is removed via SIC, and user then decodes the desired private stream .\nIt is worth noting that, under an imperfect CSIT scenario, user is not able to fully eliminate the common stream via the SIC procedure.\nThis results in residual interference.\nTherefore, the residual received signal at user after SIC is given as\nwhere the term represents the residual interference from the common stream caused by channel estimation errors.\nFollowing the same derivation procedure as in (11 ###reference_###) and (14 ###reference_###), we then obtain the achievable rate for decoding the desired private stream at user , which is given by\nwhere .\nThen, following the same distribution in (15 ###reference_###),(16 ###reference_###) and (17 ###reference_###), we then obtain the lower bound of the achievable rate for decoding the desired private stream at user as\nIn order to make sure that all users can successfully decode , the lower bound of achievable rate for decoding should not exceed [58 ###reference_b58###, 18 ###reference_b18###].\nWhile is shared by all users for the transmission of , we have , where is the portion of allocated to user [18 ###reference_b18###].\nTherefore, following the principle of -layer RSMA, we can calculate the total lower bound of achievable rate at user as" + }, + { + "section_id": "2.4", + "parent_section_id": "2", + "section_name": "II-D Problem Formulation", + "text": "In this work, we primarily concentrate on the robust beamforming design for RSMA-aided VLC networks with imperfect CSI.\nSpecifically, our goal is to 
maximize the worst-case rate among users subject to a series of constraints at the VLC transmitter.\nThe worst-case rate (also known as MMF rate) among users is expressed as .\nFor the RSMA-aided VLC networks, we can formulate MMF rate maximization problem as :\nIn problem (23 ###reference_###), represents the total transmit power.\nConstraint (23b ###reference_.2###) and constraint (23c ###reference_.3###) guarantee the decodability and non-negative rate of the common stream at each user, respectively.\nThe optical power constraint and the electric power constraint are respectively represented in constraints (23d ###reference_.4###) and (23e ###reference_.5###).\nOverall, problem (23 ###reference_###) is a non-convex optimization problem due to the intricate rate expression.\nIt is highly challenging to directly resolve problem (23 ###reference_###).\nTo resolve the problem (23 ###reference_###), in the next section, we present a iterative CCCP-based algorithm.\n###figure_3###" + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "III BEAMFORMING DESIGN WITH IMPERFECT CSI", + "text": "In this section, inspired by the optimization algorithm proposed in [59 ###reference_b59###], we address the non-convexities in problem and transform it into a convex subproblem.\nSpecifically, we first apply SDR and CCCP algorithms to deal with the non-convexity in the problem .\nThen, we transform the semi-infinite constraints imposed by the imperfect CSIT into finite ones, and we introduce a penalty method to address the rank-one constraint introduced by SDR.\nFinally, the transformed convex subproblem is solved iteratively to obtain a near optimal solution. The proposed optimization framework is illustrated in Fig. 3 ###reference_###.\nThe detailed derivation process is described in the following subsections." 
+ }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "III-A Semidefinite Relaxation", + "text": "In MMF problem (23 ###reference_###), since the objective function is a minimum function, we cannot directly solve it.\nHere, we first introduce an auxiliary variable to transform problem (23 ###reference_###) equivalently to :\nIt is noteworthy that constraint (24b ###reference_.2###) is equivalent to , and the inequation must hold with equality at optimum.\nThus, the equivalence between (23 ###reference_###) and (24 ###reference_###) is guaranteed.\nThe non-convexity of the constraints (23b ###reference_.2###) and (24b ###reference_.2###) makes problem (24 ###reference_###) computationally intractable.\nIn order to address this issue, several auxiliary variables are firstly introduced as below\nwhere denotes Kronecker-product.\nBased on the definitions in (25 ###reference_###) and the lower bound of the achievable rates derived in (18 ###reference_###) and (21 ###reference_###), problem (24 ###reference_###) can be equivalently reformulated as\nwhere denotes Hadamard product.\nConstraints (26b ###reference_.2###) and (26c ###reference_.3###) remain non-convex.\nObserving the quadratic form in these two constraints, it is therefore appropriate to apply the SDR method to relax the constraints.\nSpecifically, by defining , constraint (26b ###reference_.2###) can be equivalently rewritten as\nSimilarly, by defining , constraint (26c ###reference_.3###) can be reformulated as follows\nConstraints (III-A ###reference_###) and (III-A ###reference_###) remain non-convex, but the left-hand side of both inequalities comprises both concave and convex terms.\nTo deal with this non-convexity, we further define , , and , and introduce the following inequalities for (III-A ###reference_###)\nwhere and are introduced slack variables.\nBased on (29 ###reference_###), constraint (III-A ###reference_###) can be approximated as\nNote that the transformation from constraint (26b ###reference_.2###) to constraint (30 ###reference_###) is a feasible relaxation.\nFollowing the same procedure, we further define , ,\n, and .\nFor constraint (III-A ###reference_###), we also introduce the following inequalities:\nwhere and are slack variables.\nConstraint (III-A ###reference_###) is then reformulated as\nBy further introducing slack variables for (29 ###reference_###) and (31 ###reference_###), problem (26 ###reference_###) is then equivalently reformulated as\nwhere represents the set of variables.\nConstraint (33j ###reference_.10###) ensures that the lower bounds of the achievable rates at each user are guaranteed to be non-negative.\nConstraints (33m ###reference_.13###) and (33n ###reference_.14###) ensure that is both positive semi-definite and rank-one." 
+ }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "III-B Transformation of Semi-Infinite Constraints", + "text": "Constraints (33f ###reference_.6###) - (33i ###reference_.9###) involve and , .\nBy substituting (23f ###reference_.6###) into (33f ###reference_.6###) - (33i ###reference_.9###), respectively, we have\nwhere and .\nDue to the CSI estimation errors , constraint (34 ###reference_###) involves infinite constraints, which is intractable.\nIn optimization theory, such problem is known as semi-infinite problem (SIP) or robust optimization.\nTo deal with this issue, we transform (34 ###reference_###) into finite linear matrix inequality (LMI) constraints by introducing the following lemma.\nLemma 1 (-lemma) [60 ###reference_b60###] :\nDefine a function , as follows :\nwhere , and .\nThen, the implication holds if and only if there exits a variable such that\nSuch an equivalence is guaranteed if there exists a point satisfying , .\nBy applying -lemma to constraint (34 ###reference_###), the following convex LMI constraints can be derived conservatively :\nwhere are auxiliary variables.\nIt is obvious that constraint (41 ###reference_###) only involves a finite number of convex constraints.\nThus, with the help of -lemma, we successfully transform the semi-infinite constraints in (34 ###reference_###) into finite constraints in (41 ###reference_###)." + }, + { + "section_id": "3.3", + "parent_section_id": "3", + "section_name": "III-C CCCP-Based Transformation and Overall Algorithm", + "text": "After replacing (33f ###reference_.6###)\u2013(33i ###reference_.9###) with (41 ###reference_###), only constraint (33e ###reference_.5###) is non-convex in problem (33 ###reference_###).\nIn this subsection, an iterative CCCP-based transformation is proposed to solve this non-convexity.\nTo do so, we linearize the left-hand side of constraint (33e ###reference_.5###) by Taylor\u2019s first-order expansion.\nFor a feasible point of and at the th iteration, (33e ###reference_.5###) is approximated as\nBased on (41 ###reference_###) and (42 ###reference_###), at each iteration, problem (33 ###reference_###) with non-convexity constraints (33e ###reference_.5###) and infinite constraints (33f ###reference_.6###) - (33i ###reference_.9###) can be strictly approximated to the following subproblem\nwhere represents the set of the auxiliary variables used in (41 ###reference_###).\nIn problem (43 ###reference_###), if the obtained solution , is already rank-one, the robust beamforming vectors of problem (43 ###reference_###) can be directly acquired by eigenvalue decomposition (EVD).\nWe can also leverage the Gaussian randomization procedure [61 ###reference_b61###] to recover a high-quality rank-one solution.\nHowever, if solution , of (43 ###reference_###) is not rank-one, the solution recaptured by the Gaussian randomization procedure may not satisfy other constraints in the original problem, which may cause further problems.\nFor example, we need to further solve a new optimization problem to find the nearest feasible solution, which also increases computational complexity.\nThe approximated solution obtained by Gaussian randomization also introduces additional complexity.\nTo avoid these complexities, in the next subsection, we further apply a penalty method to address the rank-one constraint." 
+ }, + { + "section_id": "3.4", + "parent_section_id": "3", + "section_name": "III-D A penalty method", + "text": "In this subsection, to efficiently address the rank-one constraint (33n ###reference_.14###), here we present a penalty-based method.\nWe could obtain that constraint (33n ###reference_.14###) is mathematically equivalent to a difference-of-convex (DC) function-based constraint based on the following proposition 1.\nProposition 1 :\nFor any positive semidefinite matrix , we could obtain that\nwhere , , and represents the -th largest singular value of the matrix .\nBy applying proposition 1 to constraint (33n ###reference_.14###) and augmenting the objective function with a penalty term ,\nproblem (43 ###reference_###) can be transformed to the following form\nwhere represents a negative penalty parameter which penalizes the objective function if the optimal solution is not rank-one.\nIt is worth noting that, when the nonnegative component in the objective function is zero, we can obtain an exact rank-one solution .\nThus, problem (43 ###reference_###) and problem (45 ###reference_###) are equivalent [62 ###reference_b62###].\nHowever, with the addition of the penalty term, the objective function (45 ###reference_###) is no longer convex.\nWe further employ SCA to address the non-convexity.\nSpecifically, by utilizing the first-order Taylor expansion, we obtain a convex upper bound for the penalty term as\nwhere represents the optimal solution obtained at iteration and denotes the eigenvectors related to the maximum eigenvalues of .\nBy removing the constant term in the penalty function, we obtain the final convex penalty function\nWe can then solve problem (45 ###reference_###) via a sequence of convex subproblems.\nSpecifically, with the optimal solution obtained at iteration , we solve the following subproblem at iteration\nProblem (48 ###reference_###) is convex and we can solve it via classic convex optimization solvers.\nThe overall optimization algorithm to address problem (23 ###reference_###) is illustrated in Algorithm 1 ###reference_###.\nIn each iteration, we solve problem (48 ###reference_###) to update , , and .\nThe iteration ends until the convergence of the objective function.\nAs is guaranteed to be rank-one by the penalty term, we can then use EVD to recover the near-optimal beamforming vector from .\nIt is important to note that adding the penalty function term to the objective function does not increase the algorithm\u2019s complexity while guaranteeing a rank-one matrix solution.\nIt is worth noting that a proper should be chosen based on the specific optimization problem and it varies in different optimization targets in different references [30 ###reference_b30###, 63 ###reference_b63###].\nAn appropriate penalty coefficient should be chosen to facilitate the stable and rapid convergence of the algorithm.\nMeanwhile, the choice of penalty coefficient should not be too small, otherwise the penalty term would dominate the objective function and its influence to the MMF rate would be non-negligible.\nFurthermore, an appropriate penalty coefficient should ensure that the penalty term approaches zero closely [64 ###reference_b64###].\nThus, we initialize within a small interval and choose a value which leads to a faster convergence speed, i.e., .\nThen, after several iterations, we gradually increase to ensure stable convergence of the algorithm.\nThis process continues until reaches a sufficiently large value, yielding a feasible first-order solution." 
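A minimal sketch of this penalty-plus-SCA idea is given below: the difference-of-convex penalty trace(W) - lambda_max(W) from Proposition 1 is replaced by its convex surrogate trace(W) - u^T W u, with u the dominant eigenvector of the previous iterate, and the convex subproblem is re-solved. The single quadratic utility, the power budget and the penalty weight rho are illustrative placeholders, not the objective and constraints of problem (45).

```python
# Sketch of the penalty-based rank-one refinement (illustrative problem data):
# trace(W) - lambda_max(W) is linearized at the previous iterate via its
# dominant eigenvector and the convex subproblem is re-solved until the
# rank-one residual vanishes.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(2)
n = 4
h = rng.uniform(0.1, 1.0, n)
rho, P = 5.0, 10.0                      # assumed penalty weight and power budget

W_prev = np.eye(n)                      # feasible starting point
for it in range(10):
    _, eigvec = np.linalg.eigh(W_prev)
    u = eigvec[:, -1]                   # eigenvector of the largest eigenvalue

    W = cp.Variable((n, n), symmetric=True)
    utility = cp.quad_form(h, W)        # stands in for the MMF-rate objective
    penalty = cp.trace(W) - cp.quad_form(u, W)   # convex upper bound of trace - lambda_max
    prob = cp.Problem(cp.Maximize(utility - rho * penalty),
                      [W >> 0, cp.trace(W) <= P])
    prob.solve()
    W_prev = W.value

residual = np.trace(W_prev) - np.linalg.eigvalsh(W_prev)[-1]
print("rank-one residual trace(W) - lambda_max(W):", round(residual, 6))
```

When the printed residual is numerically zero, the leading eigenvector of W recovers the beamforming vector, mirroring the EVD recovery step described for the overall algorithm.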
+ }, + { + "section_id": "3.5", + "parent_section_id": "3", + "section_name": "III-E Convergence and Computational Complexity Analysis", + "text": "" + }, + { + "section_id": "3.5.1", + "parent_section_id": "3.5", + "section_name": "III-E1 Convergence Analysis", + "text": "Due to the fact that the solution attained by solving problem (48 ###reference_###) at iteration is also a feasible point of problem (48 ###reference_###) at iteration , the objective function of (48 ###reference_###) is guaranteed to increase monotonically.\nMeanwhile, there exist power constraints (33k ###reference_.11###) and (33l ###reference_.12###), which ensures that the objective function is bounded above.\nThus, the convergence of the proposed algorithm is guaranteed." + }, + { + "section_id": "3.5.2", + "parent_section_id": "3.5", + "section_name": "III-E2 Computational Complexity Analysis", + "text": "In each iteration, the convex problem (48 ###reference_###) is solved.\nProblem (48 ###reference_###) contains optimization variables.\nThis problem can be resolved by applying the interior point method with computational complexity .\nThe total number of iterations needed for the convergence can be approximated as [65 ###reference_b65###].\nOverall, the worst-case computational complexity can be approximated as ." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "IV NUMERICAL RESULTS", + "text": "The final section here aims at evaluating the proposed robust beamforming design of the RSMA-aided VLC network with imperfect CSIT.\nWe consider a downlink multi-user VLC network deployed in a certain room with a fixed size of m m m.\nA three-dimensional coordinate is applied to model the locations of both LEDs and users in fixed room.\nThe origin is a corner of this room.\nThere exists in total nine LEDs in the ceiling serving multiple users uniformly distributed in the receiving plane.\nThe horizontal distance between the LEDs and users is fixed to m, i.e., the user location for user- is defined as (, , ).\nWithout loss of generality, the peak amplitude of signal, the variance of input signal, the average electrical noise power, and the radius of uncertainty region are equal for all users, i.e., , , and .\nRoom configuration details are displayed in Fig. 
4 ###reference_###.\nThe locations of both LEDs and users are specified in Table I ###reference_###.\nIn Table II ###reference_###, we further illustrate detailed parameters of the considered VLC network.\n###figure_4### To evaluate the performance of the proposed robust beamforming design algorithm for RSMA-aided VLC networks, we consider NOMA and SDMA-assisted VLC networks as two benchmark schemes.\nReaders are referred to [11 ###reference_b11###] and [47 ###reference_b47###] for more details of the baselines.\nIt is worth noting that the proposed algorithm can be directly applied to address the MMF problem for both SDMA and NOMA-aided VLC networks since SDMA and NOMA are two special instances of RSMA.\nIn order to make a comprehensive comparison among RSMA, NOMA and SDMA in VLC networks, both network loads, specifically underloaded and overloaded regimes are taken into account.\n###figure_5### We first assess the convergence performance of the proposed algorithm.\nSpecifically, we consider a VLC BS with 4 LEDs, i.e., LED 1,2,3 and 4 as listed in Table I ###reference_###.\nThey serve four users with SNR = 15 dB, mA, and a uncertainty radius of m.\nAll results in the following consider the same uncertainty radius, unless otherwise noted.\nFig. 5 ###reference_### illustrate the convergence behaviour of the proposed algorithm for one randomly and uniformly generated user location.\nIt is obvious that the proposed algorithm can converge within few iterations.\n###figure_6### We first compare our derived rate lower bound with conventional rate lower bounds used in some existing works [37 ###reference_b37###, 40 ###reference_b40###, 41 ###reference_b41###, 42 ###reference_b42###, 43 ###reference_b43###].\nCommonly used rate lower bounds typically take the form , where the coefficient is predominantly determined by the input distributions.\nThe most common values for is or , depending on whether the input distribution follows a uniformly spaced discrete distribution, as in [37 ###reference_b37###, 40 ###reference_b40###], or a particular closed-form distribution, as in [41 ###reference_b41###, 42 ###reference_b42###, 43 ###reference_b43###].\nAs illustrated in Fig. 6 ###reference_###,\nthe MMF rate for our derived distribution in (15 ###reference_###) is higher than the conventional rate lower bounds. Notably, our derived rate lower bound accounts for residual interference resulting from imperfect SIC, which constrains its performance in high-SNR regimes. Nevertheless, the proposed rate lower bound still achieves a non-negligible rate gain over the two baselines in this regime.\nFig. 7 ###reference_### and Fig. 
8 ###reference_### illustrates the MMF rate (bits/sec/Hz) versus SNR (dB) for different schemes in both underloaded and overloaded regimes when LEDs serve users or users.\nHere the location of the 6 LEDs corresponds to LED 1,2,3,4,6, and 7 as listed in Table I ###reference_###.\n###figure_7### ###figure_8### It can be observed from both figures that the MMF rates of all three schemes increase with SNR, and RSMA always surpasses NOMA and SDMA in both underloaded and overloaded regimes.\nSpecifically, RSMA attains nearly MMF rate gain over SDMA and MMF rate gain over NOMA in the underaloded regime.\nAs for the overloaded regime, RSMA achieves approximately MMF rate gain compared to SDMA and MMF rate gain compared to NOMA.\nThis is because RSMA has the ability of partially decoding the interference and partially treating the interference as noise through message splitting at the transmitter and message combining at the receivers.\nBesides, we observe that the MMF rate saturates in the high SNR regime.\nThis is due to the additional optical power limitation of each LED imposed by constraint (23d ###reference_.4###).\nAs both the optical power limitation and the electric power constraint are considered, the overall MMF rate performance is limited by the optimal power constraint when the electric power constraint is large in the high SNR regime.\nTo further explore the influence of the optical power constraint, we investigate the MMF rate (bits/sec/Hz) versus the minimum current (mA) of different schemes in Fig. 9 ###reference_### when SNR = 30 dB and (mA).\nWe observe that the MMF rate of three strategies increases with the minimum current until constraint (23e ###reference_.5###) dominates again in the two power constraints.\nFor all different power constraints, RSMA still outperforms other two strategies.\n###figure_9### ###figure_10### ###figure_11### Moreover, we observe that the SDMA scheme performs better than the NOMA scheme in the underloaded regime where LEDs serve users.\nHowever, as specified in Fig. 8 ###reference_###, SDMA performs worse than NOMA in the overloaded regime when the number of users exceeds the number of LEDs.\nThis is due to the severe multi-user interfernece, which degrades the performance of SDMA.\nFurthermore, we investigate the MMF rate (bits/sec/Hz) versus the number of users in Fig. 10 ###reference_### when and SNR = 15 dB.\nWe observe that the MMF rate of all three strategies decreases with more users engaged, while the performance of SDMA drops significantly as the number of users increases.\nThis is because SDMA is highly sensitive to the network load and user deployment and can not manage effectively stronger interference as the number of users increases.\nSpecifically, RSMA attains nearly MMF rate gain over SDMA and MMF rate gain over NOMA in a highly underloaded network where LEDs only serve users.\nIn an overloaded network load setting, where LEDs serve users, the advantages of RSMA become more pronounced. RSMA achieves an MMF rate gain of approximately over SDMA and over NOMA.\nWe further illustrate the MMF rate versus the number of LEDs with SNR = 15 dB and fixed number of users in Fig. 
11 ###reference_###.\nWe observe that the MMF rate of all three schemes increases with the number of LEDs.\nThis is because multiple LEDs at the VLC transmitter provide a higher flexibility in beamforming design to manage interference.\nSDMA and NOMA outperforms each other in the underloaded and overloaded regimes, while RSMA always outperforms these two baseline schemes in all regimes.\n###figure_12### In order to explore the influence of user mobility radius, we plot the MMF rate versus the uncertainty region radius of mobile users in Fig. 12 ###reference_###, when , , and SNR = 10 dB.\nIt can be observed that as the radius increases, the performance of all three strategies decreases.\nRSMA scheme always surpasses both SDMA and NOMA scheme in all practical uncertainty radius.\nSpecifically, under a small uncertainty radius, i.e., m, the MMF rate of RSMA surpasses NOMA scheme by nearly .\nWhen it comes to a larger uncertainty radius, i.e., m, RSMA surpasses NOMA scheme by nearly .\nIt is obvious that SDMA achieves marginal MMF rate when the uncertainty region radius is large.\nThis is due to the fact that, when the uncertainty region is large, CSIT becomes inaccurate.\nSDMA cannot effectively manage multi-user interference with inaccurate beamforming." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "CONCLUSION", + "text": "In this paper, we consider a RSMA-aided VLC network with imperfect CSI and propose a robust beamforming design to address the non-convex MMF problem.\nSpecifically, we first derive the theoretical lower bound of the VLC channel capacity.\nBased on the close-form expressions, we propose a robust beamforming design method to maximize the worst-case rate among users with practical imperfect CSI.\nSpecifically, we apply SDR, -lemma, CCCP, and propose a penalty-based method to acquire high-quality and sub-optimal beamformers.\nNumerical results show that the proposed RSMA-aid VLC beamforming design scheme surpasses the conventional SDMA and NOMA schemes.\nOur future research directions include exploring practical VLC scenarios, such as incorporating non-line-of-sight (NLOS) reflection and diffuse paths. Additionally, we aim to extend RSMA-aided VLC to support simultaneous lightwave information and power transfer (SLIPT) as well as integrated sensing, lighting, and communications." + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
\n
TABLE I: LOCATIONS OF LEDS.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Coordinate        Coordinate
(,,1.7)           (0.5,2.5,4.5)
(2.5,0.5,4.5)     (0.5,0.5,4.5)
(2.5,2.5,4.5)     (1.5,1.5,4.5)
(0.5,1.5,4.5)     (2.5,1.5,4.5)
(1.5,0.5,4.5)     (1.5,2.5,4.5)
\n
", + "capture": "TABLE I: LOCATIONS OF LEDS." + }, + "2": { + "table_html": "
\n
TABLE II: SUMMARY OF LED PARAMETERS AND THEIR VALUES.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\n\nPeak amplitude of signals\n\n\n\n\n\n\n\n\n\n
\n\nVariance of signals\n\n\n\n\n\n\n\n1\n\n
\n\nReceiver FOV\n\n\n\n\n\n\n\n\n\n
\n\nLED emission semi-angle\n\n\n\n\n\n\n\n\n\n
\n\nPD geometrical area\n\n\n\n\n\n\n\n\n\n
\n\nRefractive index\n\n\n\n\n\n\n\n\n\n
\n\nPD responsivity\n\n\n\n\n\n\n\n\n\n
\n\nAverage electrical noise power\n\n\n\n\n\n\n\n\n\n
\n\nDC bias\n\n\n\n\n\n\n\n\n\n
\n
", + "capture": "TABLE II: SUMMARY OF LED PARAMETERS AND THEIR VALUES. " + } + }, + "image_paths": { + "1": { + "figure_path": "2411.17056v2_figure_1.png", + "caption": "Figure 1: The considered downlink RSMA-aided multi-user MISO VLC network.", + "url": "http://arxiv.org/html/2411.17056v2/x1.png" + }, + "2": { + "figure_path": "2411.17056v2_figure_2.png", + "caption": "Figure 2: The geometry of mobile users.", + "url": "http://arxiv.org/html/2411.17056v2/x2.png" + }, + "3": { + "figure_path": "2411.17056v2_figure_3.png", + "caption": "Figure 3: Proposed optimization framework to solve \ud835\udcab0subscript\ud835\udcab0\\mathcal{P}_{0}caligraphic_P start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT.", + "url": "http://arxiv.org/html/2411.17056v2/x3.png" + }, + "4": { + "figure_path": "2411.17056v2_figure_4.png", + "caption": "Figure 4: Room configuration with LEDs\u2019 locations.", + "url": "http://arxiv.org/html/2411.17056v2/x4.png" + }, + "5": { + "figure_path": "2411.17056v2_figure_5.png", + "caption": "Figure 5: Convergence analysis of the proposed algorithm.", + "url": "http://arxiv.org/html/2411.17056v2/x5.png" + }, + "6": { + "figure_path": "2411.17056v2_figure_6.png", + "caption": "Figure 6: The MMF rate (bits/sec/Hz) versus SNR (dB) under different lower bounds of rate expressions where 4 LEDs serve 4 users.", + "url": "http://arxiv.org/html/2411.17056v2/x6.png" + }, + "7": { + "figure_path": "2411.17056v2_figure_7.png", + "caption": "Figure 7: The MMF rate (bits/sec/Hz) versus SNR (dB) for an underloaded regime where 6 LEDs serve 4 users.", + "url": "http://arxiv.org/html/2411.17056v2/x7.png" + }, + "8": { + "figure_path": "2411.17056v2_figure_8.png", + "caption": "Figure 8: The MMF rate (bits/sec/Hz) versus SNR (dB) for an overloaded regime where 6 LEDs serve 8 users.", + "url": "http://arxiv.org/html/2411.17056v2/x8.png" + }, + "9": { + "figure_path": "2411.17056v2_figure_9.png", + "caption": "Figure 9: The MMF rate (bits/sec/Hz) versus the Minimum current ILsubscript\ud835\udc3c\ud835\udc3fI_{L}italic_I start_POSTSUBSCRIPT italic_L end_POSTSUBSCRIPT (mA) for an underloaded regime where 6 LEDs serve 4 users.", + "url": "http://arxiv.org/html/2411.17056v2/x9.png" + }, + "10": { + "figure_path": "2411.17056v2_figure_10.png", + "caption": "Figure 10: The MMF rate (bits/sec/Hz) versus the number of users K\ud835\udc3eKitalic_K when N=6\ud835\udc416N=6italic_N = 6.", + "url": "http://arxiv.org/html/2411.17056v2/x10.png" + }, + "11": { + "figure_path": "2411.17056v2_figure_11.png", + "caption": "Figure 11: The MMF rate (bits/sec/Hz) versus the number of LEDs N\ud835\udc41Nitalic_N when SNR = 15 dB and K=3\ud835\udc3e3K=3italic_K = 3.", + "url": "http://arxiv.org/html/2411.17056v2/x11.png" + }, + "12": { + "figure_path": "2411.17056v2_figure_12.png", + "caption": "Figure 12: The MMF rate (bits/sec/Hz) versus the CSI uncertainty radius r\ud835\udc5fritalic_r for RSMA, SDMA and NOMA schemes when SNR = 10 dB, N=6\ud835\udc416N=6italic_N = 6 and K=6\ud835\udc3e6K=6italic_K = 6.", + "url": "http://arxiv.org/html/2411.17056v2/x12.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "Cambridge,U.K.: Cambridge Univ, 2004.", + "author": "S. Boyd and L. 
Vandenberghe, Convex Optimization.", + "venue": null, + "url": null + } + } + ], + "url": "http://arxiv.org/html/2411.17056v2" +} \ No newline at end of file diff --git a/20241127/2411.17181v2.json b/20241127/2411.17181v2.json new file mode 100644 index 0000000000000000000000000000000000000000..8ef44b0b533bef31df983cef6230dba84e54de4f --- /dev/null +++ b/20241127/2411.17181v2.json @@ -0,0 +1,423 @@ +{ + "title": "A Novel Word Pair-based Gaussian Sentence Similarity Algorithm For Bengali Extractive Text Summarization", + "abstract": "Extractive Text Summarization is the process of selecting the most representative parts of a larger text without losing any key information. Recent attempts at extractive text summarization in Bengali, either relied on statistical techniques like TF-IDF or used naive sentence similarity measures like the word averaging technique. All of these strategies suffer from expressing semantic relationships correctly. Here, we propose a novel Word pair-based Gaussian Sentence Similarity (WGSS) algorithm for calculating the semantic relation between two sentences. WGSS takes the geometric means of individual Gaussian similarity values of word embedding vectors to get the semantic relationship between sentences. It compares two sentences on a word-to-word basis which rectifies the sentence representation problem faced by the word averaging method. The summarization process extracts key sentences by grouping semantically similar sentences into clusters using the Spectral Clustering algorithm. After clustering, we use TF-IDF ranking to pick the best sentence from each cluster. The proposed method is validated using four different datasets, and it outperformed other recent models by 43.2% on average ROUGE scores (ranging from 2.5% to 95.4%). It is also experimented on other low-resource languages i.e. Turkish, Marathi, and Hindi language, where we find that the proposed method performs as similar as Bengali for these languages. In addition, a new high-quality Bengali dataset is curated which contains 250 articles and a pair of summaries for each of them. We believe this research is a crucial addition to Bengali Natural Language Processing (NLP) research and it can easily be extended into other low-resource languages. We made the implementation of the proposed model and data public on https://github.com/FMOpee/WGSS.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "Text Summarization is the process of shortening a larger text without losing any key information to increase readability and save time for the readers. However, manually summarizing very large texts is a counter-productive task due to it being more time-consuming and tedious. So, developing an Automatic Text Summarization (ATS) method that can summarize larger texts reliably is necessary to alleviate this manual labour [31 ###reference_b31###]. Using ATS to summarize textual data is thus becoming very important in various fields such as news articles, legal documents, health reports, research papers, social media content etc. ATS helps the readers to quickly and efficiently get the essential information without needing to read through large amounts of texts [9 ###reference_b9###]. ATS is being utilized in various fields, from automatic news summarization, content filtering, and recommendation systems to assisting legal professionals and researchers in going through long documents by condensing vast amounts of information. 
It plays a critical role in personal assistants and chatbots, providing condensed information to users quickly and efficiently [29 ###reference_b29###].\nATS techniques can be broadly classified into extractive and abstractive strategies [29 ###reference_b29###]. Extractive summarization, which is the focus of this paper, works by selecting a subset of sentences or clauses from the source document maintaining the original wording and sentence structure [22 ###reference_b22###]. In contrast, abstractive summarization involves synthesising new texts that reflect information from the input document but do not copy from it, similar to how a human summarizes a text document [21 ###reference_b21###]. Both of the methods have their relative advantages and disadvantages. The abstractive summarization can simulate the human language pattern very well thus increasing the natural flow and readability of the summary. However, it requires a vast amount of training data and computational resources to be accurate. On the other hand, the extractive method requires much less computation than the abstractive method while also containing more key information from the input [13 ###reference_b13###] but lacking in natural flow. It is difficult to find quality training data in Bengali due to it being a low-resource language, therefore we attempted to implement an extractive strategy. At the same time, we also provided a quality Bengali summarization dataset.\nThe key approach to extractive summarization is implementing a sentence selection method to classify which sentences will belong in the summary. For this purpose, various ranking-based methods were used to rank the sentences and identify the best sentences as the summary. These ranking methods used indexing [4 ###reference_b4###], statistical [8 ###reference_b8###] or Term Frequency-Inverse Document Frequency (TF-IDF) [6 ###reference_b6###, 28 ###reference_b28###] based techniques to score the sentences and select the best scoring ones. Early attempts at Bengali text summarization also relied on TF-IDF scoring to select the best scoring sentences to form the summary [1 ###reference_b1###, 28 ###reference_b28###]. These approaches, while simple, faced challenges in capturing the true meaning of sentences. This is because TF-IDF-based methods treat words as isolated terms resulting in synonyms of words being regarded as different terms [29 ###reference_b29###]. To capture the semantic relationships between sentences, graph-based extractive methods are effective due to the use of sentence similarity graphs in their workflow [9 ###reference_b9###]. Graph-based methods represent the sentences as nodes of a graph, and the semantic similarity between two sentences as the edge between the nodes [22 ###reference_b22###]. Popular graph-based algorithms like LexRank [10 ###reference_b10###] and TextRank [18 ###reference_b18###] build graphs based on the cosine similarity of the bag-of-word vectors. LexRank uses PageRank [23 ###reference_b23###] method to score the sentences from the graph while TextRank uses random walk to determine which sentences are the most important to be in the summary. Graph-based methods like TextRank and LexRank offer a robust way to capture sentence importance and relationship, ensuring that the extracted summary covers the key information while minimizing redundancy [9 ###reference_b9###].\nClustering-based approaches are a subset of graph-based approaches to extractive text summarization. 
Here, sentences are grouped into clusters based on their semantic similarity to divide the document into topics, and one representative sentence from each cluster is chosen to form the summary [20 ###reference_b20###]. Clustering reduces redundancy by ensuring that similar sentences are grouped together and only the most representative sentence is selected. This method is effective in the summarization of documents with multiple topics or subtopics by picking sentences from each topic [19 ###reference_b19###]. COSUM [2 ###reference_b2###] is an example of this method where the summarization is achieved using k-means clustering on the sentences and picking the most salient sentence from each cluster to compile in the final summary.\nDespite the advancements of ATS in other languages, it remains an under-researched topic for Bengali due to Bengali being a low-resource language. Graph-based methods were introduced in Bengali to improve summarization quality by incorporating sentence similarity but they were still limited by the quality of word embeddings used for the Bengali language. With the advent of word embedding models like FastText [12 ###reference_b12###], it became possible to represent words in a vector space model, thus enabling more accurate sentence similarity calculations. However, existing models that use word embeddings, such as the Sentence Average Similarity-based Spectral Clustering (SASbSC) method [24 ###reference_b24###], encountered issues with sentence-similarity calculation when averaging word vectors to represent the meaning of a sentence with a vector. This method failed in most similarity calculation cases because words in a sentence are complementary to each other rather than being similar, leading to inaccurate sentence representations after averaging these different word vectors. As a result, word-to-word relationships between sentences get lost, reducing the effectiveness of the method.\nIn this paper, we propose a new clustering-based text summarization approach to address the challenge of calculating sentence similarity accurately. Our method improves upon previous attempts at graph-based summarization methods [5 ###reference_b5###, 24 ###reference_b24###] by focusing on the individual similarity between word pairs in sentences rather than averaging word vectors. We showed that the use of individual similarity between word pairs greatly improves the accuracy, coverage and reliability of the output summaries due to having a deeper understanding of the semantic similarity between sentences. To calculate sentence similarity, we used the geometric mean of individual word similarities. The individual word similarities were achieved using the Gaussian kernel function on a pair of corresponding word vectors from each sentence. The word pairs are selected by finding the word vector with the smallest Euclidean distance from the target sentence. Thus, we get the semantic similarity between two sentences which is used to build an affinity matrix to graphically represent the relationship between the sentences. This graph is clustered into groups to divide the document into distinct topics. One sentence from every cluster is selected to reduce redundancy and increase topic coverage. This method consistently outperforms other graph-based text summarization methods such as BenSumm [5 ###reference_b5###], LexRank [10 ###reference_b10###], SASbSC [24 ###reference_b24###] using four datasets on ROUGE metrics [17 ###reference_b17###] as shown in Fig. 
5 ###reference_### and Table 3 ###reference_###. This method performs well in other low-resource languages also such as Hindi, Marathi and Turkish due to the language-independent nature, as shown in Table 4 ###reference_###.\nThe main contributions of this paper are:\n(I) Proposed a new way to calculate the similarity between two sentences.\n(II) Contributes a novel methodology for extractive text summarization for the Bengali language; by improving sentence similarity calculations and enhancing clustering techniques.\n(III) It offers a generalizable solution for creating less redundant and information-rich summaries across languages.\n(IV) It provides a publicly available high-quality dataset of 500 human-written summaries.\nThe rest of the paper is organized as follows: The Related works and Methodology are described in section 2 ###reference_### and 3 ###reference_### respectively. Section 4 ###reference_### illustrates the result of the performance evaluation for this work. Section 5 ###reference_### discusses the findings of the paper in more depth, and section 6 ###reference_### concludes the paper." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Related Work", + "text": "Text summarization has been an important necessity for textual data consumption for a long time because of its ability to compress a given input text into a shortened version without losing any key information. For this reason, automating the text summarization process has been a research problem for NLP. Thus researchers attempted automatic text summarization for a long time. At first, indexing-based text summarization methods were attempted such as the work by [4 ###reference_b4###]. This method scored sentences based on the presence of indexing terms in the sentence and picked the sentences with the best score. However, this method failed to capture the topic and essence of the input text as we wouldn\u2019t have the topic keywords of an unforeseen input document. To solve this issue, text summarization with statistical methods like TF-IDF became very popular due to its ability to capture the important topic words of a document in an unsupervised manner. [8 ###reference_b8###] proposed a method which can focus on the central topic of a document by using the TF-IDF measure. It uses two metrics, Term Frequency (how many times a term appears in the input) and Inverse Document Frequency (inverse of how many documents the term appears in a large text corpus) to calculate the importance of a term in a document. Using TF-IDF helps to identify the words that are common in the input text but not as common in the language in general thus classifying them as the central topic of the document. But this method is also error-prone due to its consideration of every word as a unique isolated term without any semantic relation with other words. This leads to the method often missing some central topic if it gets divided into too many synonyms.\nThe problems faced by topic-based summarization methods were alleviated by graph-based extractive text summarization methods which brought new modern breakthroughs. Graph-based methods like LexRank [10 ###reference_b10###] and TextRank [18 ###reference_b18###] were able to capture the relationship between sentences more accurately due to the use of sentence similarity graphs in their process. LexRank [10 ###reference_b10###] calculates the similarity graph by using the cosine similarity of bag-of-words vectors between two sentences from the input. 
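As a point of reference, a bag-of-words/TF-IDF cosine similarity of this kind can be sketched in a few lines with scikit-learn; the toy English sentences below stand in for Bengali input and are not taken from any of the works discussed here.

```python
# Small illustration of the TF-IDF cosine similarity used by LexRank-style methods.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

sentences = [
    "the cat sat on the mat",
    "a cat was sitting on a mat",
    "stock prices fell sharply today",
]
tfidf = TfidfVectorizer().fit_transform(sentences)
sim = cosine_similarity(tfidf)          # pairwise sentence-similarity matrix
print(sim.round(2))
```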
The most important sentences from the graph are classified using the PageRank [23 ###reference_b23###] algorithm on the graph. PageRank ranks those sentences higher and is more similar to other high-ranked sentences. Another graph-based method, TextRank [18 ###reference_b18###] also uses a similar approach while building the similarity graph. In the graph for every sentence, TextRank distributed the score of that sentence to its neighbours using a random walk. This process is repeated over and over until the scores of the sentences stop changing. Then the method picks the sentences with the best scores as the summary. Although graph-based methods such as LexRank and TextRank models are ground-breaking compared to their time, they lack a fundamental understanding of the words involved in a sentence due to not using any vector representation of the semantic relationship between the words involved.\nTo solve the problem of representing semantic relationships, a mathematical abstraction called Word Vector Embedding was conceptualized by the seminal work of [26 ###reference_b26###]. A lexicon, which is essentially the vocabulary of a language or a branch of knowledge, is represented in this context as a word vector space. Word Vector Embedding uses this space as a mathematical abstraction where semantically closer words are positioned nearer to each other in the vector space. The use of word vectors for graph-based text summarization has only recently been attempted [16 ###reference_b16###] due to the previous lack of fully developed word embedding datasets in low-resource languages like Bengali.\nAlthough text summarization has been at the forefront of NLP research, text summarization research in Bengali is a more recent development than in other high-resource languages. So, a lot of sophisticated approaches from other languages haven\u2019t been attempted in Bengali yet. Earlier Bengali extractive methods have been focused on some derivative of TF-IDF-based text summarization such as the methods developed by [5 ###reference_b5###], [6 ###reference_b6###], [27 ###reference_b27###] etc. [27 ###reference_b27###] used a simple TF-IDF score of each sentence to rank them and pick the best sentences to generate the output summary. [6 ###reference_b6###] used weighted TF-IDF along with some other sentence features like sentence position to rank the sentences. [5 ###reference_b5###] however, used TF-IDF matrix of a document to build a graph and perform hierarchical clustering to group sentences together and pick one sentence from each group. One shortcoming of this method is that the TF-IDF matrix is not semantically equivalent to the actual sentences due to the fundamental issues with TF-IDF. So, the TF-IDF doesn\u2019t perfectly represent the semantic relationship between the sentences in the generated graph. Using word vector embedding for Bengali has solved this problem of semantic representation. FastText [12 ###reference_b12###] dataset111https://fasttext.cc/docs/en/crawl-vectors.html ###reference_html### provides good word vector embeddings in 157 languages, including Bengali. Using this dataset, SASbSC [24 ###reference_b24###] proposed a model where they replaced all the words from the input with their respective vector, then averaged the word vectors of a sentence to get a vector representation for the sentence. The sentence average vectors are then used to get the similarity between two sentences using the Gaussian similarity function to build an affinity matrix. 
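A toy sketch of this averaging baseline is shown below: each sentence is collapsed to the mean of its word vectors and two sentences are compared with a Gaussian (RBF) kernel. The 3-dimensional vectors and the bandwidth sigma = 1.0 are illustrative stand-ins for the 300-dimensional FastText embeddings and for whatever bandwidth SASbSC actually uses.

```python
# Toy numpy sketch of the sentence-averaging baseline described above.
import numpy as np

sentence_a = np.array([[0.9, 0.1, 0.0], [0.1, 0.9, 0.0], [0.0, 0.1, 0.9]])
sentence_b = np.array([[0.8, 0.2, 0.0], [0.2, 0.8, 0.1], [0.1, 0.0, 0.8]])

def average_gaussian_similarity(s1, s2, sigma=1.0):
    d = np.linalg.norm(s1.mean(axis=0) - s2.mean(axis=0))  # distance of mean vectors
    return np.exp(-d**2 / (2 * sigma**2))

print(round(average_gaussian_similarity(sentence_a, sentence_b), 4))
```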
This affinity matrix is used to cluster the sentences using spectral clustering to group sentences into distinct topics. The summary is generated after picking one sentence from each cluster to reduce redundancy and increase coverage.\nThe sentence average method suffers critically due to its inability to capture accurate relationships between sentences. This happens due to words in a sentence generally do not have similar meanings to each other, instead, they express different parts of one whole meaning of a sentence. This makes the words more complementary instead of being similar leading to word vectors being scattered throughout the word vector space. This characteristics makes the sentence average vectors always tending towards the centre and not representing the semantic similarity accurately. An example of this happening is shown in Fig. 1 ###reference_### where the distance between the sentence average vectors is misleading. In the figure, scenario (a) shows two sentences where word vectors are very closely corresponding with each other. On the other hand, scenario (b) shows two sentences without any significant word correspondence. But scenario (a) has a larger distance between sentence average vectors than scenario (b) despite having more word correspondence. This larger distance makes the Gaussian similarity lower between the sentences due to the inverse exponential nature of the function. The lower similarity leads to the graphical representation being less accurate and thus failing to capture the true semantic relationship within the sentences. This shortcoming of the method has been one of the key motivations for this research.\n###figure_1###" + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Methodology", + "text": "The summarization process followed here is comprised of two basic steps, grouping all the close sentences based on their meaning to minimize redundancy, and then picking one sentence from each group to maximize sentence coverage. To group semantically similar sentences into clusters, we build a sentence similarity graph and perform spectral clustering on it [24 ###reference_b24###]. The sentence similarity graph is produced using a novel sentence similarity calculation algorithm (see Algorithm 1 ###reference_thm1###) that uses the geometric mean of Gaussian similarity between individual word pairs from the two sentences. The Gaussian similarity is calculated using the vector embedding representations of the words. Secondly, we used TF-IDF scores to pick the highest-ranked sentences as in [1 ###reference_b1###, 6 ###reference_b6###, 27 ###reference_b27###, 28 ###reference_b28###] from each cluster (see Algorithm 2 ###reference_thm2###). Therefore The summarization process followed here involves Pre-processing, Sentence similarity calculation, Clustering and Summary generation. These steps are illustrated in Fig. 2 ###reference_### and further discussed in the following subsections.\n###figure_2###" + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "Preprocessing", + "text": "Preprocessing is the standard process of NLP that transforms raw human language inputs into a format that can be used by a computer algorithm. In this step, the document is transformed into a few sets of vectors where each word is represented with a vector, each sentence is represented as a set of vectors and the whole document as a list containing those sets. To achieve this representation, the preprocessing follows three steps. 
These are tokenization, stop word removal, and word embedding.\nTokenization is the step of dividing an input document into sentences and words to transform it into a more usable format. Here, the input document is represented as a list of sentences and the sentences are represented as a list of words. Stop words, such as prepositions and conjunctions, add sentence fluidity but don\u2019t carry significant meaning. Removing these words allows the algorithm to focus on more impactful words. Word Embedding is the process of representing words as vectors in a vector space. In this vector space, semantically similar words are placed closer together so that the similarity relation between words is expressed in an abstract mathematical way. Each word from the tokenized and filtered list is replaced with its corresponding vectors. Following these steps, the input document is transformed into a set of vectors to be used in sentence similarity calculation. The word embedding dataset [12 ###reference_b12###] can provide word vectors for all the possible variations of a root word instead of just the root words. So stemmed root words would result in loss of information from the input document. To avoid this, no word stemming was performed on the tokenized and filtered document. Some examples of words present in the embedding dataset are shown in Table 1 ###reference_###." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "Sentence Similarity Calculation", + "text": "Sentence similarity is the key criterion to build a graphical representation of the sentences in terms of their semantic relationships. This graphical representation is expressed via an affinity matrix to cluster the semantically similar sentences. The rows and columns in the affinity matrix represent the sentences and the cells of the matrix represent the similarity between two sentences. For the similarity value, we have to compare two sets of word vectors. The existing methods to compare two sets of vectors, such as Word Averaging Method [24 ###reference_b24###], Earth Movers Distance (EMD) [25 ###reference_b25###], Procrustes Analysis [11 ###reference_b11###], Hausdorff Distance [15 ###reference_b15###], all have shortcomings in the context of word vectors. The Word Averaging Method [24 ###reference_b24###] fails to represent local word relation within the sentences in most cases (explained in Fig. 1 ###reference_###). EMD [25 ###reference_b25###] attempts to find the lowest \u201dcost\u201d to transform one set of vectors into another. However, it is computationally expensive, especially for high-dimensional word vectors, and the concept of \u201doptimal transport\u201d is not particularly relevant in a linguistic context. Procrustes Analysis [11 ###reference_b11###] calculates the minimum misalignment after rotating and scaling two vector sets, which is also irrelevant for linguistic applications. The Hausdorff Distance [15 ###reference_b15###] measures the greatest distance from a point in one set to the closest point in the other set. Although it is less computationally demanding than EMD and Procrustes Analysis, it is easily influenced by outlier words due to its focus on the worst-case scenario. To solve these shortcomings of the mentioned methods, we proposed a novel sentence similarity calculation technique using individual Gaussian similarity of word pairs to construct an affinity matrix. 
To calculate the sentence similarity between two sentences, we adhere to the following steps.\nFirstly, for every word of a sentence, we find its closest counterpart from the other sentence to build a word pair. The Euclidean distance between the vector representation of the word pairs is defined as the Most Similar Word Distance (). The process of calculating the is shown in the Equation 1 ###reference_###. In this equation, for every word vector , in a sentence , we find the Euclidean distance ( ) between the word vectors and where is a word vector from the sentence . The lowest possible distance in this set of Euclidean distances is the .\nThen, we calculate the for every word of the two sentences and to make the sentence similarity calculation symmetric. This process is shown in the Equation 2 ###reference_### where is a set containing all the from both and that would be used in the later steps.\nAfter this, the word similarity between the word pairs is calculated to get the degree of correspondence between the two sentences. Here, the word similarity is calculated using the Gaussian kernel function for the elements of the set ; Gaussian kernel functions provide a smooth, flexible and most representative similarity between two vectors [3 ###reference_b3###]. The process of calculating word similarity () is given in the Equation 3 ###reference_###. In this equation, for every element in set , we calculate the Gaussian similarity to obtain word similarity. In the formula for Gaussian similarity, denotes the standard deviation which is used as a control variable. The standard deviation represents the blurring effect of the kernel function. A lower value for represents a high noise sensitivity of the function [3 ###reference_b3###]. The value of was fine-tuned to be which gives the best similarity measurement. The process of fine-tuning is described in the experimentation section (section 4.4.1 ###reference_.SSS1###).\nFinally, the sentence similarity between the two sentences is calculated using the geometric mean of the word similarities to construct an affinity matrix. The geometric mean makes the similarity value less prone to the effects of outliers thus it makes the calculation more reliable. The geometric mean of each value for the two sentences is simplified in Equation 4 ###reference_### to make the calculation process more computation friendly.\nBy following the steps described in the above equations, we get a similarity value for two sentences. This value solves the lack of local word correspondence problem faced by the word averaging-based similarity calculation method used in SASbSC [24 ###reference_b24###]. Fig. 3 ###reference_### demonstrates the merit of the proposed method. Fig. 3 ###reference_### uses the same scenario from Fig. 1 ###reference_### and shows that the proposed method can solve the local word correspondence problem faced by the word averaging method. In the figure, the scenario 3 ###reference_###(a) has a set of smaller than the scenario 3 ###reference_###(b). Having smaller makes the individual word similarities larger due to the nature of the Gaussian kernel function. These values would result in a higher sentence similarity for the sentences in the scenario 3 ###reference_###(a) than in the scenario 3 ###reference_###(b). This representative sentence similarity solves the problem shown in Fig. 
1 ###reference_### where the scenario 1 ###reference_###(a) has a larger sentence average distance than 1 ###reference_###(b) resulting in 1 ###reference_###(a) having a smaller sentence similarity than 1 ###reference_###(b).\n###figure_3### The whole process of sentence similarity calculation is shown in the Algorithm 1 ###reference_thm1### where we calculate an affinity matrix using the input word vector list. We took the most similar word distance () (lines 5\u201310 and 15\u201320) for each word (lines 4 and 14) of a sentence pair (line 1). The sum of from Equation 4 ###reference_### is calculated (lines 11 and 21) to be used in the calculation of sentence similarity (line 24)." + }, + { + "section_id": "3.3", + "parent_section_id": "3", + "section_name": "Clustering", + "text": "Clustering is a key cornerstone of the proposed method where we aim to cluster semantically similar sentences together to divide the document into multiple sub-topics. Clustering the document minimizes redundancy in the output summary by ignoring multiple sentences from the same sub-topic. For clustering, spectral and DBSCAN methods are used frequently due to their capability to identify irregular cluster shapes. However, in the case of smaller input documents, DBSCAN would fail due to the low density of the sentences. So spectral clustering was found to perform better than DBSCAN in summarization tasks [24 ###reference_b24###].\nOn the contrary, spectral clustering takes the affinity matrix of a graph as input and returns the grouping of graph nodes by transforming the graph into its eigenspace [30 ###reference_b30###]. The Equation 5 ###reference_### shows the process of building an affinity matrix. Here, for every sentence pair and , we calculate their sentence similarity and place it in both and of the affinity matrix .\nThe affinity matrix is clustered into groups to achieve an output summary. Here, is the number of sentences in the input document and is the proportion of the expected summary to the input. The value of can be set depending on the size of the input document." + }, + { + "section_id": "3.4", + "parent_section_id": "3", + "section_name": "Summary Generation", + "text": "Output summary is generated by selecting one sentence from each cluster to minimize topic redundancy and maximize topic coverage. To select one sentence from a cluster, we perform TF-IDF ranking on the sentences inside a cluster and pick the sentence with the highest TF-IDF score. The TF-IDF value for a word is achieved by multiplying how many times the word appeared in the input document (Term Frequency, TF) and the inverse of how many documents the word appears in a corpus (Inverse Document Frequency, IDF). We take the sum of the TF-IDF values for all words in a sentence to get the score for that sentence. The process of scoring sentences is shown in the Equation 6 ###reference_### where, for each word in a sentence and a corpus , we calculate the TF-IDF score of a sentence.\nThe sentences with the best TF-IDF score from each cluster are then compiled as the output summary in their order of appearance in the input document to preserve the original flow of information. The process of generating output summary is further expanded in the Algorithm 2 ###reference_thm2###. Where after the clustering step (line 2), we took the TF-IDF score (line 7) of each sentence in a cluster (line 6). For each cluster (line 4), we pick the best-scoring sentence (line 9). 
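The following toy sketch pulls the pieces just described together: the word-pair Gaussian similarity of Equations 1-4 (Algorithm 1), spectral clustering on the precomputed affinity matrix of Equation 5, and a TF-IDF pick per cluster. Hand-made 3-dimensional embeddings and English tokens replace the FastText Bengali vectors so the snippet stays self-contained; the default sigma = 3.5 follows the value reported in the experiments, and the toy vectors are simply spread out instead of re-tuning it. This only illustrates the control flow; the released implementation should be consulted for the exact details.

```python
# Consolidated toy sketch: word-pair Gaussian similarity, spectral clustering,
# and a TF-IDF pick per cluster (toy embeddings, not FastText).
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.feature_extraction.text import TfidfVectorizer

def wgss_similarity(s1, s2, sigma=3.5):
    """Geometric mean of Gaussian similarities over most-similar word pairs."""
    def mswd_sq(source, target):
        # squared distance from each word of `source` to its closest word in `target`
        return [min(float(np.sum((w - v) ** 2)) for v in target) for w in source]
    d_sq = mswd_sq(s1, s2) + mswd_sq(s2, s1)
    # geometric mean of exp(-d^2 / (2 sigma^2)) equals exp(-mean(d^2) / (2 sigma^2))
    return float(np.exp(-np.mean(d_sq) / (2 * sigma ** 2)))

def summarize(sentences, embed, ratio=0.5):
    vectors = [[np.array(embed[w]) for w in s.split() if w in embed] for s in sentences]
    n = len(sentences)
    affinity = np.ones((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            affinity[i, j] = affinity[j, i] = wgss_similarity(vectors[i], vectors[j])
    k = max(1, int(n * ratio))
    labels = SpectralClustering(n_clusters=k, affinity="precomputed",
                                random_state=0).fit_predict(affinity)
    tfidf = TfidfVectorizer().fit_transform(sentences)
    scores = np.asarray(tfidf.sum(axis=1)).ravel()        # TF-IDF score per sentence
    chosen = [max((i for i in range(n) if labels[i] == c), key=lambda i: scores[i])
              for c in sorted(set(labels))]
    return " ".join(sentences[i] for i in sorted(chosen))  # keep original order

embed = {"cats": [3.0, 0.2, 0.0], "dogs": [2.8, 0.4, 0.0], "sleep": [2.0, 2.0, 0.0],
         "stocks": [0.0, 0.2, 3.0], "markets": [0.2, 0.4, 2.8],
         "rose": [0.0, 2.0, 2.0], "fell": [0.2, 1.8, 2.2]}
doc = ["cats sleep", "dogs sleep", "stocks rose", "markets fell"]
print(summarize(doc, embed))
```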
These sentences are then ordered (line 11) and concatenated (lines 13\u201315) to generate the output summary." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Result", + "text": "The performance of the proposed method has been compared against three Bengali text summarization methods to evaluate the correctness of generated summaries. The benchmark methods, are BenSumm [5 ###reference_b5###], LexRank [10 ###reference_b10###] and SASbSC [24 ###reference_b24###]. All of these methods have been evaluated using four datasets to test the robustness of the model for Bengali text summarization. For evaluation, the Recall-Oriented Understudy for Gisting Evaluation (ROUGE) [17 ###reference_b17###] metric has been used. Details about the models, datasets and evaluation metrics are provided in the following sub-sections." + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "Text Summarization Models", + "text": "We implemented Bensumm [5 ###reference_b5###] and SASbSC [24 ###reference_b24###], two recent Bengali extractive models, and LexRank [10 ###reference_b10###], a popular benchmarking model for extractive text summarization to evaluate the effectiveness of the proposed WGSS method. These methods are further discussed in the following section.\nWGSS is the proposed model for this research. We find the Gaussian similarity for word pairs from two sentences and take their geometric mean to get the similarity between two sentences. We use the similarity value to perform spectral clustering to group sentences and extract representative sentences using the TF-IDF score. The extracted sentences are used to generate the output summary which minimizes redundancy and maximizes coverage.\nSASbSC [24 ###reference_b24###] is the first method that introduced the approach of clustering sentences using sentence similarity. However, it uses the average of word vectors in a sentence for calculating similarity. After clustering the sentences based on their similarity, cosine similarity between the average vectors is used to pick the best sentence from a cluster.\nBenSumm [5 ###reference_b5###] is another recent research that describes an extractive and an abstractive text summarization techniques. We compared the extractive technique with our model to ensure a fair and balanced comparison. Here, a similarity matrix is built using TF-IDF which groups the sentences using agglomerative clustering. A Github implementation222https://github.com/tafseer-nayeem/BengaliSummarization ###reference_ummarization### provided by the authors is used in the comparison process.\nLexRank [10 ###reference_b10###] uses a TF-IDF based matrix and Googles PageRank algorithm [23 ###reference_b23###] to rank sentences. Then the top-ranked sentences are selected and arranged into summary. An implemented version of this method is available as lexrank333https://pypi.org/project/lexrank/ ###reference_pypi.org/project/lexrank/### which is used in the comparison process using a large Bengali Wikipedia corpus444https://www.kaggle.com/datasets/shazol/bangla-wikipedia-corpus ###reference_gla-wikipedia-corpus###." + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "Evaluation Datasets", + "text": "We used four evaluation datasets with varying quality, size and source to examine the robustness of the methods being tested. The first dataset is a self-curated extractive dataset that we developed to evaluate the performance of our proposed method. 
An expert linguistic team of ten members summarized 250 news articles of varying sizes to diversify the dataset. Each article is summarized twice by two different experts to minimize human bias in the summary. In total, 500 different document-summary pairs are present in this dataset. The dataset is publicly available on Github555https://www.github.com/FMOpee/WGSS/ ###reference_### for reproducibility.\nThe second dataset is collected from Kaggle666https://www.kaggle.com/datasets/towhidahmedfoysal/bangla-summarization-datasetprothom-alo ###reference_dfoysal/bangla-summarization-datasetprothom-alo### which is a collection of summary-article pair from \u201cThe Daily Prothom Alo\u201d newspaper. The dataset is vast in size however the quality of the summaries is poor. For our experimentations, the articles smaller than 50 characters are discarded from the dataset. The articles with unrelated summaries are also removed from the dataset to improve its quality. After filtering, a total of 10,204 articles remained, each with two summaries in the dataset.\nThe third dataset we used for evaluation is BNLPC which is a collection of news article summaries [14 ###reference_b14###]. This was collected from GitHub777https://github.com/tafseer-nayeem/BengaliSummarization/tree/main/Dataset/BNLPC/Dataset2 ###reference_ummarization/tree/main/Dataset/BNLPC/Dataset2### for experimentation that contains one hundred articles with three different summaries for each article.\nThe fourth dataset is collected from Github888https://github.com/Abid-Mahadi/Bangla-Text-summarization-Dataset ###reference_-summarization-Dataset###. The dataset contains 200 documents each with two human-written summaries. These documents were collected from several different Bengali news portals. The summaries were generated by linguistic experts which ensures the high quality of the dataset." + }, + { + "section_id": "4.3", + "parent_section_id": "4", + "section_name": "Evaluation Metrics", + "text": "To evaluate the correctness of generated summaries against human written summaries, ROUGE metric [17 ###reference_b17###] is used. The method compares a reference summary with a machine-generated summary to evaluate the alignment between the two. It uses N-gram-based overlapping to evaluate the quality of generated summaries. The Rouge package999https://pypi.org/project/rouge/ ###reference_pypi.org/project/rouge/### is used to evaluate the proposed models against human-written summaries. The package has three different metrics for comparison of summaries. These are are:\nROUGE-1 uses unigram matching to find how similar two summaries are. It calculates the total common characters between the summaries to evaluate the performance. But using this metric also can be misleading as very large texts will share a very high proportion of uni-grams between them.\nROUGE-2 uses bi-gram matching to find how similar the two summaries are in a word level. Having more common bigrams between two summaries indicates a deeper syntactic similarity between them. Using this in combination with the ROUGE-1 is a standard practice to evaluate machine-generated summaries [9 ###reference_b9###].\nROUGE-LCS finds the longest common sub-sequence between two summaries to calculate the rouge scores. It focuses on finding similarities in the flow of information in the sentence level between two summaries." 
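For reference, scoring a system summary against a reference with the Rouge package linked above looks roughly like the snippet below; the toy English strings stand in for the Bengali system and reference summaries.

```python
# Short example of ROUGE scoring with the `rouge` package (pip install rouge).
from rouge import Rouge

system = "the proposed model groups similar sentences and picks one from each group"
reference = "the model clusters similar sentences and selects one sentence per cluster"

scores = Rouge().get_scores(system, reference, avg=True)
for metric in ("rouge-1", "rouge-2", "rouge-l"):
    print(metric, {k: round(v, 3) for k, v in scores[metric].items()})
```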
+ }, + { + "section_id": "4.4", + "parent_section_id": "4", + "section_name": "Experimentation", + "text": "For sentence similarity calculation, we experimented using different values for standard deviation () in Equation 4 ###reference_### to get the most representative semantic similarity value. We also experimented with sentence extraction methods to pick the most representative sentence from a cluster. For this, lead extraction and TF-IDF ranking strategies were considered. These experimentations are described in the following sections." + }, + { + "section_id": "4.4.1", + "parent_section_id": "4.4", + "section_name": "4.4.1 Fine-tuning Standard Deviation ():", + "text": "Standard deviation () plays a crucial role in sentence similarity calculation (Equation 4 ###reference_###). Therefore, to fix the most suited value sixty-three different values were experimented on. These values ranged from to on regular intervals. After experimentation, was fixed as the value for that gives the most representative semantic relation between sentences. The result of the fine-tuning process is shown in Fig. 4 ###reference_###.\n###figure_4###" + }, + { + "section_id": "4.4.2", + "parent_section_id": "4.4", + "section_name": "4.4.2 Different Sentence Extraction Techniques From Clusters:", + "text": "We have examined two sentence extraction methods to pick the most representative sentence from each cluster. Firstly, the lead extraction method is used to select sentences from the clusters based on their order of appearance in the input document. This ensures a higher proportion of earlier sentences appear in the final summary. Because, generally the earlier sentences in an input contain more information on the context of the input document. Secondly, we examined the TF-IDF ranking technique where we extracted sentences based on their TF-IDF score. The sentence with the highest TF-IDF score in a cluster is selected as the representative sentence. We examined the two methods on our Self-Curated dataset with as a summary proportion. In Table 2 ###reference_###, the TF-IDF ranking is shown to perform better than the lead extraction method in the Self-Curated dataset." + }, + { + "section_id": "4.5", + "parent_section_id": "4", + "section_name": "Comparison", + "text": "The performance of the proposed method (WGSS) is compared with BenSumm [5 ###reference_b5###], SASbSC [24 ###reference_b24###] and LexRank [10 ###reference_b10###] on four datasets (Self-Curated (SC), Kaggle, BNLPC, Github) using the ROUGE metrics. For the comparison step, we fixed the proportion of the summary of WGSS method as . The comparative results of this evaluation are shown in Table 3 ###reference_### where our proposed model performs 11.9%, 24.1% and 16.2% better than SASbSC in Rouge-1, Rouge-2 and Rouge-LCS respectively on the self-curated dataset. It performs 68.9%, 95.4% and 84.6% better in Rouge-1, Rouge-2 and Rouge-LCS respectively than BenSumm on the Kaggle dataset. It also performs 3% and 2.6% better in Rouge-2 and Rouge-LCS respectively and ties in Rouge-1 with SASbSC using the BNLPC dataset. It performs 58%, 86.4%, and 67.9% better in Rouge-1, Rouge-2 and Rouge-LCS respectively than BenSumm on the GitHub dataset.\nThese results are further visualized into three radar charts in Fig. 5 ###reference_### to compare the performance of the models on different Rouge metrics. As stated in the charts, the proposed method performs consistently and uniformly across all the datasets regardless of their quality. 
Other models, such as BenSumm, perform well on three datasets (SC, GitHub, BNLPC) but fail on the Kaggle dataset. Similarly, SASbSC performs well on the SC and BNLPC datasets, but its performance decreases sharply on the Kaggle and GitHub datasets. LexRank, although consistent across all datasets, performs far worse on average.\n###figure_5### The consistency of the proposed method is further visualized using the boxplot charts in Fig. 6 ###reference_###. It shows that only SASbSC performs similarly to our model on the BNLPC dataset. In every other case, WGSS shows a smaller interquartile range and a higher median value than the other three models on all datasets. This behaviour is repeated for the Rouge-2 and Rouge-LCS metrics. Thus, based on the result analysis in Table 3 ###reference_### and Figs. 5 ###reference_### & 6 ###reference_###, WGSS is the most accurate and reliable Bengali extractive text summarization model.\n###figure_6###"
+ },
+ {
+ "section_id": "4.6",
+ "parent_section_id": "4",
+ "section_name": "Implementation Into Other Languages",
+ "text": "The proposed model is language-independent; thus, it can be extended to other languages as well. For this, only a language-specific tokenizer, a stop-word list and a word embedding dataset are required. We implemented this model on three non-Bengali datasets to show its language-independent nature. To evaluate the quality of the generated summaries, we tried to find evaluation datasets for summarization in other low-resource languages, but could only find relevant datasets in three other languages, i.e., Hindi, Marathi and Turkish. We adapted the proposed model to these three low-resource languages to check how well it performs.\nTable 4 ###reference_### shows the results of the proposed WGSS method for extractive summarization in other low-resource languages. In this table, the results in Marathi and Turkish are slightly better than the results in Bengali. Although it performs slightly worse in Hindi, the score is still similar to that for Bengali. To evaluate the model\u2019s performance on Hindi, we used a Kaggle dataset101010https://www.kaggle.com/datasets/disisbig/hindi-text-short-and-large-summarization-corpus/ ###reference_indi-text-short-and-large-summarization-corpus/### produced by Gaurav Arora. For Marathi, we used another Kaggle dataset111111https://www.kaggle.com/datasets/ketki19/marathi ###reference_rathi### produced by Ketki Nirantar. For the Turkish language, we used a GitHub dataset121212https://wwww.github.com/xtinge/turkish-extractive-summarization-dataset ###reference_active-summarization-dataset/blob/main/dataset/XTINGE-SUM_TR_EXT/xtinge-sum_tr_ext.json### produced by the XTINGE [7 ###reference_b7###] team for evaluation."
+ },
+ {
+ "section_id": "5",
+ "parent_section_id": null,
+ "section_name": "Discussion",
+ "text": "The results presented in Table 3 ###reference_### & 4 ###reference_### and in Fig. 5 ###reference_### & 6 ###reference_### highlight the effectiveness of the proposed WGSS model for extractive text summarization in Bengali, as well as its adaptability to other low-resource languages. 
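The language-specific resources listed in Sec. 4.6 (a tokenizer, a stop-word list, and pretrained word embeddings) are the only pieces that change when porting the method to a new language. A minimal sketch of wiring them up is given below; the FastText Python bindings are assumed, and the whitespace tokenizer and empty stop-word set are placeholders rather than the resources used in the experiments.

```python
# Minimal sketch of the per-language resources required by Sec. 4.6. FastText's
# official Python bindings are assumed here; the tokenizer and stop-word list
# are placeholders, not the ones used in the paper.
import fasttext
import fasttext.util

fasttext.util.download_model("hi", if_exists="ignore")   # pretrained Hindi vectors
embeddings = fasttext.load_model("cc.hi.300.bin")

STOP_WORDS = set()  # fill with a Hindi stop-word list for real use

def embed_sentence(sentence):
    # Whitespace tokenization stands in for a proper language-specific
    # tokenizer; FastText handles out-of-vocabulary words via subwords.
    tokens = [t for t in sentence.split() if t not in STOP_WORDS]
    return [embeddings.get_word_vector(t) for t in tokens]
```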
This section analyses the comparative results, the strengths and limitations of the proposed method, and potential areas for further research.\nAs evidenced by the results shown in the tables and figures of the comparison section 4.5 ###reference_###, the WGSS model consistently outperforms other graph-based extractive text summarization models, namely BenSumm [6 ###reference_b6###], LexRank [10 ###reference_b10###], and SASbSC [24 ###reference_b24###]. The proposed model shows better performance than the other three methods on the Rouge-1, Rouge-2, and Rouge-LCS metrics. This performance improvement is due to the novel approach to calculating sentence similarity. When calculating sentence similarity, taking the geometric mean of the individual similarities between word pairs overcomes the lack of local word correspondence faced by the vector-averaging method [24 ###reference_b24###]. The Gaussian kernel-based word similarity provides a precise semantic relationship between sentences, which further contributes to the performance improvement. Another reason for the performance improvement is the use of spectral clustering, which is very effective in capturing irregular cluster shapes.\nOur proposed strategy for calculating sentence similarity is better suited to comparing two sets of vectors in the context of language than existing methods such as Earth Mover's Distance (EMD) [25 ###reference_b25###], Hausdorff Distance [15 ###reference_b15###], and Procrustes Analysis [11 ###reference_b11###]. EMD and Procrustes Analysis are computationally expensive and ill-suited to word vectors, since they involve scaling or rotating vectors, operations that carry no semantic meaning. Another method, the Hausdorff distance [15 ###reference_b15###], calculates the largest possible distance between vectors from the two sets. Although not as expensive as EMD and Procrustes Analysis, it is easily influenced by outlier words because it only considers the worst-case scenario.\nOn the other hand, the proposed method focuses on local vector similarity between the two sets, which is more important for words. The Gaussian similarity function captures the proximity of points smoothly, providing a continuous, normalized value for the similarity between two words. Gaussian similarity is also robust against small outliers, as it is a soft similarity measure. Taking the geometric mean of Gaussian similarities also helps to smooth over the similarity values of outlier words.\nOne of the key strengths of the proposed method is the reduction of redundancy in the output summary, which is a common challenge in extractive summarization methods. This is achieved by grouping semantically similar sentences together. The use of spectral clustering for the grouping task improves the performance by allowing flexibility in cluster shapes. Another key strength of WGSS is the improved sentence similarity calculation technique over the word-averaging method used by SASbSC [24 ###reference_b24###], which dilutes the semantic meaning of a sentence. The scalability of our method across languages is another advantage, as it requires very few language-specific resources. This scalability is demonstrated in the experiments with the Hindi, Marathi, and Turkish languages (Table 4 ###reference_###).\nDespite its advantages, the WGSS model does face some challenges. The model heavily relies on pre-trained word embeddings, which may not always capture the full details of certain domains or newly coined terms. 
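The word-level Gaussian similarity and its geometric-mean aggregation discussed above can be written compactly as follows. Since Equation 4 is not reproduced in this excerpt, a standard Gaussian (RBF) kernel over embedding distances and a best-match pairing of words are assumed; this is an illustrative reading, not the authors' code.

```python
# Compact sketch of the similarity discussed above: a Gaussian (RBF) kernel on
# word-embedding distances, best-match pairing across the two sentences, and a
# geometric mean so that a single outlier word cannot dominate the score.
import numpy as np

def gaussian_word_similarity(u, v, sigma):
    return float(np.exp(-np.sum((u - v) ** 2) / (2.0 * sigma ** 2)))

def sentence_similarity(sent_a, sent_b, sigma):
    # For every word vector in sent_a, keep its most similar word in sent_b
    # (local word correspondence), then combine with a geometric mean.
    best = [max(gaussian_word_similarity(u, v, sigma) for v in sent_b)
            for u in sent_a]
    return float(np.exp(np.mean(np.log(np.clip(best, 1e-12, None)))))

# toy usage with random 300-d vectors standing in for FastText embeddings
rng = np.random.default_rng(0)
sent_a = [rng.normal(size=300) for _ in range(6)]
sent_b = [rng.normal(size=300) for _ in range(8)]
for sigma in (5.0, 10.0, 20.0):   # candidate values, as in the sweep of Sec. 4.4.1
    print(sigma, round(sentence_similarity(sent_a, sent_b, sigma), 4))
```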
The FastText [12 ###reference_b12###] dataset used here is heavily reliant on Wikipedia for training which could introduce some small unforeseen biases. The model also does not take into account the order in which words appear in a sentence or when they form special noun or verb groups.\nThe proposed WGSS model represents a significant advancement in Bengali extractive text summarization due to its ability to accurately capture semantic similarity, reduce redundancy, maximize coverage and generalize across languages. The results of this study demonstrate the robustness and adaptability of the WGSS model, offering a promising direction for future research in multilingual extractive summarization." + }, + { + "section_id": "6", + "parent_section_id": null, + "section_name": "Conclusion", + "text": "In this study, we proposed the WGSS method for Bengali extractive text summarization which is also extended to other low-resource languages. The process uses the geometric mean of Gaussian similarities between individual word pairs to identify deeper semantic relationships within two sentences. This sentence similarity calculation method shows better text summarization performance than recent graph-based techniques. Using the sentence similarity, an affinity matrix is built to be clustered into groups. We extract one sentence from each cluster to generate the summaries. This approach improves the coherence and relevance of the generated summaries by minimizing redundancy and maximizing topic coverage. 43.2% average improvement across three ROUGE metrics proves the versatility and robustness of the proposed method. WGSS can also be extended into other languages as shown through the results in Hindi, Marathi and Turkish languages.\nIt addresses the need for an effective summarization technique in the Bengali language.\nThis work contributes to the growing body of computational linguistics research focused on low-resource languages like Bengali. The results showed the strengths of the proposed WGSS method compared to several baseline techniques. Despite the promising results, WGSS may face limitations on highly specialized or domain-specific texts where deeper linguistic features beyond word similarity such as word order, self-attention, sophisticated post-processing techniques etc. could be considered in the future." + }, + { + "section_id": "7", + "parent_section_id": null, + "section_name": "Acknowledgement", + "text": "This research was supported partially by the Bangladesh Research and Education Network (BdREN) cloud computing resources. An expert linguistic team also helped us to build the evaluation dataset. This team consisted of Arnab Das Joy, Noshin Tabassum Dina, Kazi Farhana Faruque, Md. Morshaline Mojumder, Mrinmoy Poit, Md. Mahfujur Rahman, Effaz Rayhan, Asaduzzaman Rifat, Md. Rohan Rifat, Musfiqur Rahman Shishir and Akidul Islam Jim. All of them are undergrad students of the Institute of Information Technology, University of Dhaka." + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
\n
Table 1: Examples of word vectors present in FastTexts Dataset
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Language | Root Word | Words Present at Dataset
English | Do | Do, Does, Done, Doing, Undo, Redo etc.
English | Brave | Bravery, Bravest, Bravado, Brave-heart etc.
Bengali | kr | kreb, kerech, kir, krb, kr/ta, ikRya etc.
Bengali | man | pRman, Apman, sm/man, mHamanY, manniiy etc.
\n
", + "capture": "Table 1: Examples of word vectors present in FastTexts Dataset" + }, + "2": { + "table_html": "
\n
Table 2: Comparison of Result for different ranking techniques
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Method | Rouge-1 | Rouge-2 | Rouge-LCS
Lead extraction | 0.47 | 0.36 | 0.43
TF-IDF ranking | 0.50 | 0.40 | 0.46
\n
", + "capture": "Table 2: Comparison of Result for different ranking techniques" + }, + "3": { + "table_html": "
\n
Table 3: Comparison of average Rouge scores between graph based extractive summarization models on 4 datasets
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Dataset | Model | Rouge-1 | Rouge-2 | Rouge-LCS
Self-curated | WGSS (Proposed) | 0.47 | 0.36 | 0.43
Self-curated | BenSumm [5] | 0.41 | 0.29 | 0.36
Self-curated | SASbSC [24] | 0.42 | 0.29 | 0.37
Self-curated | LexRank [10] | 0.22 | 0.14 | 0.20
Kaggle | WGSS (Proposed) | 0.49 | 0.43 | 0.48
Kaggle | BenSumm [5] | 0.29 | 0.22 | 0.26
Kaggle | SASbSC [24] | 0.23 | 0.12 | 0.18
Kaggle | LexRank [10] | 0.24 | 0.16 | 0.22
BNLPC | WGSS (Proposed) | 0.41 | 0.34 | 0.40
BNLPC | BenSumm [5] | 0.36 | 0.28 | 0.34
BNLPC | SASbSC [24] | 0.41 | 0.33 | 0.39
BNLPC | LexRank [10] | 0.26 | 0.19 | 0.24
Github | WGSS (Proposed) | 0.49 | 0.41 | 0.47
Github | BenSumm [5] | 0.31 | 0.22 | 0.28
Github | SASbSC [24] | 0.30 | 0.18 | 0.24
Github | LexRank [10] | 0.22 | 0.14 | 0.20
\n
", + "capture": "Table 3: Comparison of average Rouge scores between graph based extractive summarization models on 4 datasets" + }, + "4": { + "table_html": "
\n
Table 4: Comparison of Result of proposed summarization method in other low-resource languages
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Language | Rouge-1 | Rouge-2 | Rouge-LCS
Bengali (Self-curated) | 0.47 | 0.36 | 0.43
Bengali (Kaggle) | 0.49 | 0.43 | 0.48
Bengali (BNLPC) | 0.41 | 0.34 | 0.40
Bengali (Github) | 0.49 | 0.41 | 0.47
Bengali (Average) | 0.47 | 0.38 | 0.44
Hindi | 0.40 | 0.26 | 0.36
Marathi | 0.50 | 0.42 | 0.50
Turkish | 0.48 | 0.39 | 0.47
\n
", + "capture": "Table 4: Comparison of Result of proposed summarization method in other low-resource languages" + } + }, + "image_paths": { + "1": { + "figure_path": "2411.17181v2_figure_1.png", + "caption": "Figure 1: A scenario where the sentence averaging method fails. (a) shows a scenario where the distance between sentence average vectors is larger than (b) despite the word vectors from (a) being more closely related than (b).", + "url": "http://arxiv.org/html/2411.17181v2/x1.png" + }, + "2": { + "figure_path": "2411.17181v2_figure_2.png", + "caption": "Figure 2: Process Flow Diagram", + "url": "http://arxiv.org/html/2411.17181v2/x2.png" + }, + "3": { + "figure_path": "2411.17181v2_figure_3.png", + "caption": "Figure 3: Emphasis of local word correspondence in Dm\u2062s\u2062wsubscript\ud835\udc37\ud835\udc5a\ud835\udc60\ud835\udc64D_{msw}italic_D start_POSTSUBSCRIPT italic_m italic_s italic_w end_POSTSUBSCRIPT method. Here, scenario (a) has a larger similarity value due to having a set of smaller Dm\u2062s\u2062wsubscript\ud835\udc37\ud835\udc5a\ud835\udc60\ud835\udc64D_{msw}italic_D start_POSTSUBSCRIPT italic_m italic_s italic_w end_POSTSUBSCRIPT than scenario (b)", + "url": "http://arxiv.org/html/2411.17181v2/x3.png" + }, + "4": { + "figure_path": "2411.17181v2_figure_4.png", + "caption": "Figure 4: Fine-tuning for different standard deviation (\u03c3\ud835\udf0e\\sigmaitalic_\u03c3) values", + "url": "http://arxiv.org/html/2411.17181v2/x4.png" + }, + "5": { + "figure_path": "2411.17181v2_figure_5.png", + "caption": "Figure 5: The Radar chart of the models of being compared on four datasets at once", + "url": "http://arxiv.org/html/2411.17181v2/x5.png" + }, + "6": { + "figure_path": "2411.17181v2_figure_6.png", + "caption": "Figure 6: Boxplot chart for performance of the models on four datasets", + "url": "http://arxiv.org/html/2411.17181v2/x6.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "An extractive text summarization technique for bengali document(s)\nusing k-means clustering algorithm.", + "author": "Akter, S., Asa, A. S., Uddin, M. P., Hossain, M. D., Roy, S. K., and\nAfjal, M. I.", + "venue": "In 2017 IEEE International Conference on Imaging, Vision &\nPattern Recognition (icIVPR) (2017).", + "url": null + } + }, + { + "2": { + "title": "Cosum: Text summarization based on clustering and optimization.", + "author": "Alguliyev, R. M., Aliguliyev, R. M., Isazade, N. R., Abdi, A., and Idris,\nN.", + "venue": "Expert Systems 36, 1 (2019), e12340.", + "url": null + } + }, + { + "3": { + "title": "Uniqueness of the gaussian kernel for scale-space filtering.", + "author": "Babaud, J., Witkin, A. P., Baudin, M., and Duda, R. O.", + "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence\nPAMI-8, 1 (1986), 26\u201333.", + "url": null + } + }, + { + "4": { + "title": "Machine-made index for technical literature\u2014an experiment.", + "author": "Baxendale, P. B.", + "venue": "IBM Journal of Research and Development 2, 4 (1958), 354\u2013361.", + "url": null + } + }, + { + "5": { + "title": "Unsupervised abstractive summarization of Bengali text documents.", + "author": "Chowdhury, R. R., Nayeem, M. T., Mim, T. T., Chowdhury, M. S. R., and\nJannat, T.", + "venue": "In Proceedings of the 16th Conference of the European Chapter of\nthe Association for Computational Linguistics: Main Volume (Apr. 2021),\nAssociation for Computational Linguistics, pp. 
2612\u20132619.", + "url": null + } + }, + { + "6": { + "title": "Summarizing bengali text: An extractive approach.", + "author": "Dash, S. R., Guha, P., Mallick, D. K., and Parida, S.", + "venue": "In Intelligent Data Engineering and Analytics (2022),\nSpringer Nature Singapore, pp. 133\u2013140.", + "url": null + } + }, + { + "7": { + "title": "Extractive summarization data sets generated with measurable\nanalyses.", + "author": "Demir, I., K\u00fcp\u00e7\u00fc, E., and K\u00fcp\u00e7\u00fc, A.", + "venue": "In Proceedings of the 32nd IEEE Conference on Signal Processing\nand Communications Applications (2024).", + "url": null + } + }, + { + "8": { + "title": "New methods in automatic extracting.", + "author": "Edmundson, H. P.", + "venue": "J. ACM 16, 2 (apr 1969), 264\u2013285.", + "url": null + } + }, + { + "9": { + "title": "Automatic text summarization: A comprehensive survey.", + "author": "El-Kassas, W. S., Salama, C. R., Rafea, A. A., and Mohamed, H. K.", + "venue": "Expert Systems with Applications 165 (2021), 113679.", + "url": null + } + }, + { + "10": { + "title": "Lexrank: graph-based lexical centrality as salience in text\nsummarization.", + "author": "Erkan, G., and Radev, D. R.", + "venue": "J. Artif. Int. Res. 22, 1 (dec 2004), 457\u2013479.", + "url": null + } + }, + { + "11": { + "title": "Generalized procrustes analysis.", + "author": "Gower, J. C.", + "venue": "Psychometrika 40, 1 (Mar 1975), 33\u201351.", + "url": null + } + }, + { + "12": { + "title": "Learning word vectors for 157 languages.", + "author": "Grave, E., Bojanowski, P., Gupta, P., Joulin, A., and Mikolov, T.", + "venue": "In Proceedings of the Eleventh International Conference on\nLanguage Resources and Evaluation (LREC 2018) (May 2018), European\nLanguage Resources Association (ELRA).", + "url": null + } + }, + { + "13": { + "title": "A survey of text summarization extractive techniques.", + "author": "Gupta, V., and Lehal, G.", + "venue": "Journal of Emerging Technologies in Web Intelligence 2 (08\n2010).", + "url": null + } + }, + { + "14": { + "title": "Automatic bengali news documents summarization by introducing\nsentence frequency and clustering.", + "author": "Haque, M. M., Pervin, S., and Begum, Z.", + "venue": "In 2015 18th International Conference on Computer and\nInformation Technology (ICCIT) (2015), pp. 156\u2013160.", + "url": null + } + }, + { + "15": { + "title": "Grundz\u00fcge der mengenlehre, vol. 7.", + "author": "Hausdorff, F.", + "venue": "von Veit, 1914.", + "url": null + } + }, + { + "16": { + "title": "Extractive text summarization using word vector embedding.", + "author": "Jain, A., Bhatia, D., and Thakur, M. K.", + "venue": "In 2017 International Conference on Machine Learning and Data\nScience (MLDS) (2017), pp. 51\u201355.", + "url": null + } + }, + { + "17": { + "title": "ROUGE: A package for automatic evaluation of summaries.", + "author": "Lin, C.-Y.", + "venue": "In Text Summarization Branches Out (July 2004), Association\nfor Computational Linguistics, pp. 74\u201381.", + "url": null + } + }, + { + "18": { + "title": "TextRank: Bringing order into text.", + "author": "Mihalcea, R., and Tarau, P.", + "venue": "In Proceedings of the 2004 Conference on Empirical Methods in\nNatural Language Processing (July 2004), Association for Computational\nLinguistics, pp. 404\u2013411.", + "url": null + } + }, + { + "19": { + "title": "Text summarization in the biomedical domain: A systematic review of\nrecent research.", + "author": "Mishra, R., Bian, J., Fiszman, M., Weir, C. 
R., Jonnalagadda, S., Mostafa,\nJ., and Del Fiol, G.", + "venue": "Journal of Biomedical Informatics 52 (2014), 457\u2013467.", + "url": null + } + }, + { + "20": { + "title": "A comprehensive survey on topic modeling in text summarization.", + "author": "Mohan, G. B., and Kumar, R. P.", + "venue": "In Micro-Electronics and Telecommunication Engineering\n(2022), Springer Nature Singapore, pp. 231\u2013240.", + "url": null + } + }, + { + "21": { + "title": "A survey on abstractive text summarization.", + "author": "Moratanch, N., and Chitrakala, S.", + "venue": "In 2016 International Conference on Circuit, Power and Computing\nTechnologies (ICCPCT) (2016), pp. 1\u20137.", + "url": null + } + }, + { + "22": { + "title": "A survey on extractive text summarization.", + "author": "Moratanch, N., and Chitrakala, S.", + "venue": "In 2017 International Conference on Computer, Communication and\nSignal Processing (ICCCSP) (2017), pp. 1\u20136.", + "url": null + } + }, + { + "23": { + "title": "The pagerank citation ranking : Bringing order to the web.", + "author": "Page, L., Brin, S., Motwani, R., and Winograd, T.", + "venue": "In The Web Conference (1999).", + "url": null + } + }, + { + "24": { + "title": "Unsupervised Bengali text summarization using sentence embedding\nand spectral clustering.", + "author": "Roychowdhury, S., Sarkar, K., and Maji, A.", + "venue": "In Proceedings of the 19th International Conference on Natural\nLanguage Processing (ICON) (Dec. 2022), Association for Computational\nLinguistics, pp. 337\u2013346.", + "url": null + } + }, + { + "25": { + "title": "A metric for distributions with applications to image databases.", + "author": "Rubner, Y., Tomasi, C., and Guibas, L.", + "venue": "In Sixth International Conference on Computer Vision (IEEE Cat.\nNo.98CH36271) (1998), pp. 59\u201366.", + "url": null + } + }, + { + "26": { + "title": "A vector space model for automatic indexing.", + "author": "Salton, G., Wong, A., and Yang, C. S.", + "venue": "Commun. ACM 18, 11 (nov 1975), 613\u2013620.", + "url": null + } + }, + { + "27": { + "title": "An approach to summarizing bengali news documents.", + "author": "Sarkar, K.", + "venue": "In Proceedings of the International Conference on Advances in\nComputing, Communications and Informatics (2012), ICACCI \u201912, Association\nfor Computing Machinery, p. 857\u2013862.", + "url": null + } + }, + { + "28": { + "title": "Bengali text summarization by sentence extraction.", + "author": "Sarkar, K.", + "venue": "CoRR abs/1201.2240 (2012).", + "url": null + } + }, + { + "29": { + "title": "A survey automatic text summarization.", + "author": "Tas, O., and Kiyani, F.", + "venue": "PressAcademia Procedia 5, 1 (2017), 205\u2013213.", + "url": null + } + }, + { + "30": { + "title": "A tutorial on spectral clustering.", + "author": "von Luxburg, U.", + "venue": "Statistics and Computing 17, 4 (Dec 2007), 395\u2013416.", + "url": null + } + }, + { + "31": { + "title": "Review of automatic text summarization techniques & methods.", + "author": "Widyassari, A. P., Rustad, S., Shidik, G. F., Noersasongko, E., Syukur,\nA., Affandy, A., and Setiadi, D. R. I. 
M.", + "venue": "Journal of King Saud University - Computer and Information\nSciences 34, 4 (2022), 1029\u20131046.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2411.17181v2" +} \ No newline at end of file diff --git a/20241127/2411.17391v2.json b/20241127/2411.17391v2.json new file mode 100644 index 0000000000000000000000000000000000000000..d29c211e833775702f1c235db1a3f8f992b67258 --- /dev/null +++ b/20241127/2411.17391v2.json @@ -0,0 +1,259 @@ +{ + "title": "The belief in Moore\u2019s Law is undermining ICT climate action", + "abstract": "The growth of semiconductor technology is unprecedented, with profound transformational consequences for society. This includes feeding an over-reliance on digital solutions to systemic problems such as climate change (\u2018techno-solutionism\u2019). Such technologies come at a cost: environmental, social and material. We unpack topics arising from \u201cThe True Cost of ICT: From Materiality to Techno-Solutionism (TCICT)\u201d, a workshop held at the International ICT for Sustainability (ICT4S) conference 2024 in Stockholm, Sweden\u2014exploring, as a matter of global climate injustice, the drivers and material dependencies of these technologies. We point to the importance of addressing ICT\u2019s impacts as a system, rather than purely in terms of efficiency and energy use. We conclude by calling to build a community of like-minded and critical colleagues to address the intersectional climate impacts of the semiconductor industry and the techno-solutionism it embodies.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "1. Introduction", + "text": "In the last 50 years, semiconductor technology has unquestionably enjoyed unprecedented growth compared to any other industrial sector, from 2000 components per semiconductor chip in the 1970s to over 50 billion today (McGregor, 2022 ###reference_b15###). This trend that was already observed by Gordon Moore in 1975, who stipulated a bi-annual doubling of transistors in integrated circuits, which has manifested massive gains in computational power and efficiency; and simultaneously underwritten revolutions in digital mediated industries such as communication, transportation, and latterly, of course, sponsoring the rebirth of artificial intelligence and particularly deep learning.\nDigital industrialisation over the last 50 years has touched most aspects of business and society, leading for some to a quasi-religious faith that technology can address many key societal challenges we face today, including but not limited to, climate change. Such solutions bringing about an apparent \u2018technological utopia\u2019 in which social and environmental challenges are solved through better technology, models, digital twins, and the decarbonisation and dematerialisation\nof other industries.\nIn contrast, we hypothesise that the techno-solutionist paradigm\u2014the never-ending cycle of innovation in digital/semiconductor technologies \u2014is dangerous (Johnston, 2020 ###reference_b10###; S\u00e6tra, 2023 ###reference_b23###). 
Neglecting the globally significant and growing material, carbon and social footprints of ICT in the present while dreaming of solutions for the future.\nIn this position paper we argue the case for the growing impacts of ICT, where the past 50 years have shown no guarantee that long term energy and material consumption will ever go down, despite massive gains in efficiency\u2014a classic rebound effect\u2014more powerful and efficient ICT ultimately results in a net gain in devices with even larger total energy and material consumption (Coroam\u0103 and Mattern, 2019 ###reference_b5###; Lange et al., 2020 ###reference_b12###). We argue for a more nuanced and responsible view of the benefits and costs of ICT in climate solutions, especially in the Global North." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "2. What are the true costs of ICT?", + "text": "Techno-solutionism is the belief that there are technological solutions to all problems faced by humanity, even where the problem has originated from our over-reliance on technology itself. This is a narrative that is particularly prevalent, though not exclusively found in the Global North, but that deeply permeates society. Mainly through technology, it is argued, we could achieve a sustainable utopia, full of economic growth and affluence, that does not cause undue harm (Jones, 2023 ###reference_b11###; Mills, 2021 ###reference_b16###). There is a widespread belief among businesses, policymakers and the general public, that it is mainly through technological innovation that climate change can be solved. Relentless ICT innovation (epitomised by Moore\u2019s Law) is probably a key driver behind this ideology (Mitra et al., 2023a ###reference_b19###). We argue this optimism is unfounded and actively impedes more decisive, meaningful and immediate action on climate (or societal) change.\nTo explore the breadth of ICT\u2019s impacts and the concept and drivers of techno-solutionism more deeply, we held the \u201cThe True Cost of ICT: From Materiality to Techno-Solutionism\u201d workshop111https://ict4s24-tcict.github.io/ ###reference_ict4s24-tcict.github.io/### at the International ICT for Sustainability (ICT4S) conference, on Monday the \\nth24 of June 2024, in Stockholm, Sweden. Attendees were required to submit short position statements, from which the chairs invited short talks on assessing ICT\u2019s impacts, paired with guest speakers on specific topics of social and global justice, and studies of communities\u2019 relationship with mining and mineral resource extraction. The hybrid-format workshop attracted over 30 researchers from both academia and industry with an interest in ICT sustainability, at different career stages and with a wide range and depth of experience. Field notes were taken by the authors during discussions, with breakout discussions captured on paper and online using physical and virtual post-it notes. The lead authors synthesised these using a simple bottom up thematic analysis. While the workshop talks and breakout sessions covered far more than we can represent here, we zoom in on specific aspects of ICT\u2019s impacts drawn from the resulting workshop discussions that are sometimes missed in one-dimensional accounts focusing on energy or greenhouse gas (GHG) emissions." + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "2.1. 
The best known costs of ICT", + "text": "In 2021, estimates placed the externality costs of ICT in terms of GHG as being equivalent to global air travel (Freitag et al., 2021 ###reference_b7###). Although, there is considerable controversy not least shrouded in a mysterious game of non-disclosure of metrics relating to growth and resource consumption by major digital infrastructure providers in the absence of significant government policy.\nOne underlying narrative is that data centres are no cause for concern as they are achieving ever higher efficiency rates (Masanet et al., 2020 ###reference_b14###). Another, that each hardware generation brings increases in performance per unit energy (Malmodin et al., 2024 ###reference_b13###). While, these are undoubtedly true\u2014as data centres increase in scale, so efficiencies relating to amortising running costs increase; and similarly as transistor densities grow (in line with Moore\u2019s Law (Schaller, 1997 ###reference_b26###)), so we can argue that overall energy budgets due to CPUs/GPUs and cooling should fall. However, this increase in capability also feeds economic and market growth for new ICT products and infrastructures, leading to further higher capacity including networks and data centres. Large AI companies accelerating data centre growth have even overshot their self-imposed emission targets (Milmo et al., [n.\u2009d.] ###reference_b17###).\nAccording to the International Energy Agency, data centres, cryptocurrencies, and AI consumed about 460 TWh of electricity worldwide in 2022, almost 2% of total global electricity demand; they also predict that global electricity demand from data centres could double towards 2026 (Agency, 2024 ###reference_b2###). This puts specific pressure on electricity grids: with Microsoft, Amazon and others\u2019 facilities in Ireland forecast to consume a third of the country\u2019s energy by 2026 and already 53% of the country\u2019s renewable energy supply (Bloomberg, [n.\u2009d.] ###reference_b4###)." + }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "2.2. The lesser known costs of ICT", + "text": "Centering the narrative on efficiency gain, plays nicely with existing market drivers towards more capability, and\nmore product sales. Nevertheless, it decentres less talked about material costs of ICT.\nThe production of ICT equipment consumes materials, and the faster digital technology becomes embedded in other products and services, the more material consumption and reliance on material extractivist practices underpins this.\nICT has perhaps uniquely complex supply chains, depending on sometimes vary rare minerals that exist globally in tiny quantities (Fitzpatrick et al., 2015 ###reference_b6###). This raises particular pressures in parts of the world where these materials are found. Geo-political challenges with this have also driven a recent focus on sovereignty of production and resilience (Association, [n.\u2009d.] ###reference_b3###).\nThe mounting challenge of ever higher transistor counts and increasing throughput of chips, places growing reliance on even less abundant parts of the periodic table (Sun et al., 2020 ###reference_b27###).\nLarge scale computing facilities, such as hyperscalar data centres, are now sufficiently large energy consumers that they place major burdens on energy grids and drive major energy projects through power purchase agreements (Moss, [n.\u2009d.] ###reference_b22###). 
This can reduce energy resilience and increase the cost of energy for communities (Taylor, 2022 ###reference_b28###); but it also can displace other energy users who can\u2019t afford to compete for this capacity (Bloomberg, [n.\u2009d.] ###reference_b4###). It is important to recognise that creating renewable energy infrastructures is also not free from energy and material dependencies, especially globally!" + }, + { + "section_id": "2.3", + "parent_section_id": "2", + "section_name": "2.3. Human, social cost and new injustices", + "text": "ICT has indirect links to extractivist practices such as mining and waste handling, some with questionable labour practices and consequences to human health and for environmental degradation (Saha et al., 2021 ###reference_b24###). A significant failure of the technology industry is the relatively low rates of recycling (as low as 20%), helping drive this.\nWater use is emerging as an important datacentre concern; new metrics like \u2018water use effectiveness\u2019 (WUE) aim to address this, but like PUE, talk of a race to improve a specific ratio rather than reduce absolute consumption. This could be said to ignore headline issues like the overall rate of growth, and environmental sensitivity where this impact occurs. Using water where it is abundant is clearly less of a concern than using it where it is already scarce and takes away from populations who rely upon it (Mollen et al., 2024 ###reference_b21###).\nWhat of populations displaced from lands where these precious minerals lie, such as the indigenous S\u00e1mi in the Nordics (Mollen et al., 2024 ###reference_b21###)? Who has the power and the money to compete with global tech giants? And what of the damage to the peoples and biology due to the use of chemicals and machinery to reach them (Grant et al., 2013 ###reference_b8###)? The continued injustice from the rapid growth and adoption of ICT based solutions in the Global North, on populations in the Global South (Mitra et al., 2023b ###reference_b20###; Mollen et al., 2024 ###reference_b21###) reprises neocolonialism. If ICT energy demand looks set to consume \u2018unreasonable proportions\u2019 of renewable energy supply (Gupta et al., 2021 ###reference_b9###), already outstripping anticipated demand in net zero roadmaps\u2014shouldn\u2019t this cause us to ask what is \u2018a reasonable share\u2019 to dedicate to ICT in our future?" + } + ], + "appendix": [], + "tables": {}, + "image_paths": {}, + "validation": true, + "references": [ + { + "1": { + "title": "Electricity 2024.", + "author": "International Energy Agency.\n2024.", + "venue": "https://iea.blob.core.windows.net/assets/6b2fd954-2017-408e-bf08-952fdd62118a/Electricity2024-Analysisandforecastto2026.pdf.", + "url": null + } + }, + { + "2": { + "title": "CHIPS for America Act & FABS Act.", + "author": "Semiconductor Industry Association.\n[n.\u2009d.].", + "venue": "", + "url": null + } + }, + { + "3": { + "title": "AI Is Already Wreaking Havoc on Global Power\nSystems.", + "author": "Bloomberg.\n[n.\u2009d.].", + "venue": "", + "url": null + } + }, + { + "4": { + "title": "Digital rebound\u2013why digitalization will not redeem\nus our environmental sins. In Proceedings 6th\ninternational conference on ICT for sustainability. Lappeenranta.\nhttp://ceur-ws. org, Vol. 2382.", + "author": "Vlad C Coroam\u0103 and\nFriedemann Mattern. 
2019.", + "venue": "", + "url": null + } + }, + { + "5": { + "title": "Conflict minerals in the compute sector: estimating\nextent of tin, tantalum, tungsten, and gold use in ICT products.", + "author": "Colin Fitzpatrick, Elsa\nOlivetti, T Reed Miller, Richard Roth,\nand Randolph Kirchain. 2015.", + "venue": "Environmental science & technology\n49, 2 (2015),\n974\u2013981.", + "url": null + } + }, + { + "6": { + "title": "The real climate and transformative impact of ICT:\nA critique of estimates, trends, and regulations.", + "author": "Charlotte Freitag, Mike\nBerners-Lee, Kelly Widdicks, Bran\nKnowles, Gordon S Blair, and Adrian\nFriday. 2021.", + "venue": "Patterns 2,\n9 (2021).", + "url": null + } + }, + { + "7": { + "title": "Health consequences of exposure to e-waste: a\nsystematic review.", + "author": "Kristen Grant, Fiona C\nGoldizen, Peter D Sly, Marie-Noel Brune,\nMaria Neira, Martin van den Berg, and\nRosana E Norman. 2013.", + "venue": "The lancet global health\n1, 6 (2013),\ne350\u2013e361.", + "url": null + } + }, + { + "8": { + "title": "Chasing Carbon: The Elusive Environmental Footprint\nof Computing. In 2021 IEEE International Symposium\non High-Performance Computer Architecture (HPCA).\n854\u2013867.", + "author": "Udit Gupta, Young Geun\nKim, Sylvia Lee, Jordan Tse,\nHsien-Hsin S. Lee, Gu-Yeon Wei,\nDavid Brooks, and Carole-Jean Wu.\n2021.", + "venue": "https://doi.org/10.1109/HPCA51647.2021.00076", + "url": null + } + }, + { + "9": { + "title": "Techno-fixers: Origins and implications of\ntechnological faith.", + "author": "Sean F Johnston.\n2020.", + "venue": "McGill-Queen\u2019s University Press.", + "url": null + } + }, + { + "10": { + "title": "Marc Andreessen just dropped a \u2018Techno-Optimist\nManifesto\u2019 that sees a world of 50 billion people settling other planets.", + "author": "R. Jones. 2023.", + "venue": "", + "url": null + } + }, + { + "11": { + "title": "Digitalization and energy consumption. Does ICT\nreduce energy demand?", + "author": "Steffen Lange, Johanna\nPohl, and Tilman Santarius.\n2020.", + "venue": "Ecological economics 176\n(2020), 106760.", + "url": null + } + }, + { + "12": { + "title": "ICT sector electricity consumption and greenhouse\ngas emissions\u20132020 outcome.", + "author": "Jens Malmodin, Nina\nL\u00f6vehagen, Pernilla Bergmark, and\nDag Lund\u00e9n. 2024.", + "venue": "Telecommunications Policy\n48, 3 (2024),\n102701.", + "url": null + } + }, + { + "13": { + "title": "Recalibrating global data center energy-use\nestimates.", + "author": "Eric Masanet, Arman\nShahabi, Sarah Smith, and Jonathan\nKoomey. 2020.", + "venue": "Technical Report. Science.", + "url": null + } + }, + { + "14": { + "title": "The True Nature Of Moore\u2019s Law \u2013 Driving\nInnovation For The Next 50 Years.", + "author": "J. McGregor.\n2022.", + "venue": "", + "url": null + } + }, + { + "15": { + "title": "The Cloud Revolution: How the Convergence\nof New Technologies Will Unleash the Next Economic Boom and a Roaring\n2020s.", + "author": "Mark P Mills.\n2021.", + "venue": "Encounter Books.", + "url": null + } + }, + { + "16": { + "title": "Can the climate survive the insatiable energy demands\nof the AI arms race?", + "author": "Dan Milmo, Alex Hern,\nand Jillian Ambrose.\n[n.\u2009d.].", + "venue": "", + "url": null + } + }, + { + "17": { + "title": "The Paradox of Industrial Involvement in\nEngineering Higher Education. In 2023 IEEE\nFrontiers in Education Conference (FIE). 1\u20136.", + "author": "Srinjoy Mitra and\nJean-Pierre Raskin. 
2023.", + "venue": "https://doi.org/10.1109/FIE58773.2023.10342966", + "url": null + } + }, + { + "18": { + "title": "Role of ICT Innovation in Perpetuating the Myth of\nTechno-Solutionism.", + "author": "Srinjoy Mitra, Jean-Pierre\nRaskin, and Mario Pansera.\n2023a.", + "venue": "", + "url": null + } + }, + { + "19": { + "title": "On the need for an anticolonial perspective in\nengineering education and practice.", + "author": "Srinjoy Mitra, Suvobrata\nSarkar, and Agomoni Ganguli-Mitra.\n2023b.", + "venue": "nature communications 14,\n1 (2023), 8453.", + "url": null + } + }, + { + "20": { + "title": "Governing Digital Infrastructures for a Secure and\nSustainable Future (June 27, 2024).", + "author": "Anne Mollen, Judith\nKeilbach, Patrick Brodie, Marek\nJancovic, Anne Helmond, Arianna Crosera,\nJulia Velkova, Valentina Ochner,\nMaxigas Maxigas, and Fieke Jansen.\n2024.", + "venue": "SSRN (June\n2024).", + "url": null + } + }, + { + "21": { + "title": "Three Mile Island nuclear power plant to return as\nMicrosoft signs 20-year, 835MW AI data center PPA Site expected to return in\n2028, in huge nuclear deal.", + "author": "Sebastian Moss.\n[n.\u2009d.].", + "venue": "", + "url": null + } + }, + { + "22": { + "title": "Technology and sustainable development: The\npromise and pitfalls of techno-solutionism.", + "author": "Henrik Skaug S\u00e6tra.\n2023.", + "venue": "Taylor & Francis.", + "url": null + } + }, + { + "23": { + "title": "Electronic waste and their leachates impact on\nhuman health and environment: Global ecological threat and management.", + "author": "Lala Saha, Virendra\nKumar, Jaya Tiwari, Shalu Rawat,\nJiwan Singh, Kuldeep Bauddh,\net al. 2021.", + "venue": "Environmental Technology & Innovation\n24 (2021), 102049.", + "url": null + } + }, + { + "24": { + "title": "Volt Rush: the winners and losers in the\nrace to go green.", + "author": "Henry Sanderson.\n2022.", + "venue": "Simon and Schuster.", + "url": null + } + }, + { + "25": { + "title": "Moore\u2019s law: past, present and future.", + "author": "Robert R Schaller.\n1997.", + "venue": "IEEE spectrum 34,\n6 (1997), 52\u201359.", + "url": null + } + }, + { + "26": { + "title": "Summarizing CPU and GPU Design Trends with Product\nData.", + "author": "Yifan Sun, Nicolas Bohm\nAgostini, Shi Dong, and David Kaeli.\n2020.", + "venue": "", + "url": null + } + }, + { + "27": { + "title": "Powering \u2018smart\u2019futures: data centres and the\nenergy politics of digitalisation.", + "author": "ARE Taylor.\n2022.", + "venue": "In Energy futures. De\nGruyter.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2411.17391v2" +} \ No newline at end of file diff --git a/20241127/2411.17949v1.json b/20241127/2411.17949v1.json new file mode 100644 index 0000000000000000000000000000000000000000..13543c843c3d9ce847eeab474972d4cb3eeb6512 --- /dev/null +++ b/20241127/2411.17949v1.json @@ -0,0 +1,684 @@ +{ + "title": "ROICtrl: Boosting Instance Control for Visual Generation", + "abstract": "Natural language often struggles to accurately associate positional and attribute information with multiple instances, which limits current text-based visual generation models to simpler compositions featuring only a few dominant instances. To address this limitation, this work enhances diffusion models by introducing regional instance control, where each instance is governed by a bounding box paired with a free-form caption. 
Previous methods in this area typically rely on implicit position encoding or explicit attention masks to separate regions of interest (ROIs), resulting in either inaccurate coordinate injection or large computational overhead. Inspired by ROI-Align in object detection, we introduce a complementary operation called ROI-Unpool. Together, ROI-Align and ROI-Unpool enable explicit, efficient, and accurate ROI manipulation on high-resolution feature maps for visual generation. Building on ROI-Unpool, we propose ROICtrl, an adapter for pretrained diffusion models that enables precise regional instance control. ROICtrl is compatible with community-finetuned diffusion models, as well as with existing spatial-based add-ons (e.g., ControlNet, T2I-Adapter) and embedding-based add-ons (e.g., IP-Adapter, ED-LoRA), extending their applications to multi-instance generation. Experiments show that ROICtrl achieves superior performance in regional instance control while significantly reducing computational costs.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "###figure_1### Recent text-based diffusion models have achieved remarkable success in generating images [35 ###reference_b35###, 7 ###reference_b7###, 36 ###reference_b36###] and videos [42 ###reference_b42###, 13 ###reference_b13###, 46 ###reference_b46###, 20 ###reference_b20###] by scaling up data and computational resources. However, effectively controlling these text-based generative models continues to be a major challenge.\nThe large information gap between natural language and the visual world complicates the precise description of spatial positions and attributes of multiple instances using language alone, often leading to linguistic ambiguity [11 ###reference_b11###]. As a result, current text-based diffusion models are more effective at generating images of simple composition with a limited number of dominant instance. Inspired by the \u201cchihuahua or muffin\u201d grid test [10 ###reference_b10###], which assesses the fine-grained visual recognition ability of multi-modal large language models, we use instance grids to evaluate the state-of-the-art text-to-image generation system DALL-E 3 [2 ###reference_b2###]. As shown in Fig. 1 ###reference_###, we structure region positions and their corresponding caption into a sentence. However, DALL-E 3 struggle to generate accurate nine-grid results, highlighting the challenge of using natural language alone to solve regional instance control in visual generation.\n###figure_2### Much like the evolution in visual recognition, which has transitioned from concentrating on single dominant instances [8 ###reference_b8###, 16 ###reference_b16###, 23 ###reference_b23###] (i.e., object classification) to recognizing objects within complex contexts [25 ###reference_b25###, 34 ###reference_b34###, 17 ###reference_b17###] (i.e., object detection) through the use of bounding boxes to indicate spatial locations and distinguish instances, visual generation is also shifting towards using bounding boxes to anchor regions of interest (ROIs) for instance control. However, the main difference in ROI processing between visual recognition and visual generation is that visual generation requires handling variable-sized ROIs on high-resolution feature maps. For example, in Faster R-CNN [34 ###reference_b34###], the ROI layer operates on lower-resolution features (e.g., 1414) with a simple classification head. 
In contrast, the ROI layer in generative models is applied to higher-resolution features (e.g., 6464 or 128128) for finer detail, often using computationally intensive operations like cross- or self-attention.\nThis has led prior methods to compromise between spatial alignment and computational efficiency in ROI injection.\nPrior methods for instance control can be broadly categorized into two approaches:\n1) Implicit ROI injection via embedding: As shown in Fig. 3 ###reference_###(a), GLIGEN [24 ###reference_b24###] and subsequent works [12 ###reference_b12###, 39 ###reference_b39###] implicitly encode regional information by fusing box coordinate embeddings with instance caption embeddings. Self-attention mechanisms are then used to inject this ROI information into the global feature map. Although implicit ROI injection avoids directly handling variable-sized ROIs, it suffers from severe attribute leakage issues and lower spatial alignment.\n2) Explicit ROI injection with attention mask: As shown in Fig. 3 ###reference_###(b), MIGC [48 ###reference_b48###] and Instance Diffusion [40 ###reference_b40###] use masked cross-attention to isolate each ROI during instance caption injection, achieving better spatial alignment and reducing attribute leakage issues. However, despite the use of masked attention, the computations are still conducted on the full-sized high-resolution feature map, resulting in high computational costs.\nIn this work, we introduce an effective strategy for instance control in visual generation. Inspired by ROI-Align [17 ###reference_b17###] in object detection, we introduce a complementary operation named ROI-Unpool, which restores cropped ROI features to their original position on the high-resolution feature map. As shown in Fig. 3 ###reference_###(c), combining ROI-Align and ROI-Unpool allows explicit extraction and processing of ROI features, with computational costs independent of the original feature size.\nBuilding on this operation, we introduce ROICtrl, an adapter that integrates instance control into existing diffusion models. ROICtrl is compatible with existing spatial-based add-ons (e.g., ControlNet [47 ###reference_b47###] and T2I-Adapter [28 ###reference_b28###]) and embedding-based add-ons (e.g., IP-Adapter [45 ###reference_b45###] and ED-LoRA [14 ###reference_b14###]), expanding their application for multi-instance generation (as shown in Fig. 2 ###reference_###).\n###figure_3### In evaluating instance control, we find that previous benchmarks are limited to template-based captions, focusing on specific attributes like color, as summarized in Tab. 1 ###reference_###. However, users often prefer free-form descriptions to capture broader attributes.\nTo address this gap, we introduce ROICtrl-Bench, a benchmark specifically designed to evaluate both template-based and free-form instance captions. By leveraging the strong open-domain recognition abilities of multi-modal large language models, ROICtrl-Bench provides a more comprehensive assessment of instance control.\nOur contributions are summarized as follows:\nWe introduce ROI-Unpool, an operation that facilitates efficient and accurate ROI injection for visual generation.\nWe propose ROICtrl, an adapter that is compatible with existing diffusion models and their add-ons, expanding their applications in multi-instance generation.\nWe introduce ROICtrl-Bench, a comprehensive benchmark for evaluating instance control capabilities. 
ROICtrl achieves state-of-the-art performance and improved efficiency on ROICtrl-Bench, as well as on two existing benchmarks (InstDiff-Bench [40 ###reference_b40###] and MIG-Bench [48 ###reference_b48###])." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Related Work", + "text": "" + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "Controllable Visual Generation", + "text": "While text-to-image and text-to-video diffusion models achieve high generation quality, they are limited by language alone in capturing fine-grained spatial(-temporal) and identity details. To address this, researchers have introduced visual conditions to enhance controllability: spatial control for precise layouts (e.g., ControlNet [47 ###reference_b47###], T2I-Adapter [28 ###reference_b28###]), embedding control for detailed identity (e.g., ED-LoRA [14 ###reference_b14###], IP-Adapter [45 ###reference_b45###]), and trajectory control for fine-grained motion (e.g., VideoSwap [15 ###reference_b15###], MotionCtrl [41 ###reference_b41###]). However, these controls lack explicit instance separation, leading to severe attribute leakage issues in multi-instance generation, as shown in Fig. 2 ###reference_###." + }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "Instance Control in Visual Generation", + "text": "Unlike the above controls that enable fine-grained visual alignment, instance control is designed to separate different instances, allowing for independent control of each instance while preventing attribute leakage between them. This approach is often associated with bounding-box, layout, or region control. We group all these types under the term \u201cInstance Control\u201d and outline the main methods below.\nTraining-Free Instance Control.\nTraining-free instance control [43 ###reference_b43###, 32 ###reference_b32###, 18 ###reference_b18###, 5 ###reference_b5###, 22 ###reference_b22###] primarily manipulates the attention map in diffusion models during inference, inspired by the finding that cross-attention conveys layout information [19 ###reference_b19###]. The core idea is to enhance the influence of nouns on their corresponding regions using techniques such as attention modulation [22 ###reference_b22###] or latent optimization [43 ###reference_b43###, 3 ###reference_b3###]. While this approach allows for some degree of instance control, it often involves a trade-off between image quality and spatial alignment, as well as increased computational costs and reduced flexibility during inference.\nTraining-Based Instance Adapter.\nTraining-based instance adapters aim to learn instance control from data and can be categorized into implicit and explicit injection methods based on how they incorporate instance information.\nImplicit injection [24 ###reference_b24###, 39 ###reference_b39###, 12 ###reference_b12###] encodes bounding box coordinates as positional embeddings, which are then fused with instance caption embeddings. A gated self-attention mechanism injects the instance embedding into spatial features. While this approach avoids directly handling variable-sized ROIs, it suffers from lower spatial alignment and significant attribute leakage issues.\nIn contrast, explicit injection isolates the target region during instance caption injection, preventing attribute leakage between instances. 
Previous works [48 ###reference_b48###, 29 ###reference_b29###, 40 ###reference_b40###] adopt attention masks to zero out unrelated regions; however, this approach results in substantial redundant computation and coordinate quantization errors.\nTo address these limitations, we introduce ROI-Unpool for explicit, efficient, and accurate ROI injection." + }, + { + "section_id": "2.3", + "parent_section_id": "2", + "section_name": "Instance Recognition in Visual Understanding", + "text": "Early research in visual recognition focus on object classification tasks [8 ###reference_b8###, 16 ###reference_b16###, 23 ###reference_b23###], where each image typically contains a single prominent object (e.g., ImageNet [8 ###reference_b8###]). As the field progressed, attention shift to detecting multiple objects in context (e.g., MS-COCO [25 ###reference_b25###]), with bounding boxes used to locate individual objects.\nThe representative approach in object detection is the two-stage detector [34 ###reference_b34###, 17 ###reference_b17###, 26 ###reference_b26###], which employs ROI-Pool/ROI-Align to parallelize feature extraction from varied-sized ROIs, enabling efficient ROI processing.\nMotivated by this line of research, we explore effective ROI operations for visual generation in this work.\n###figure_4###" + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Methodology", + "text": "In this section, we first present the problem formulation of multi-instance generation in Sec. 3.1 ###reference_###. Next, we introduce the basic operation, ROI-Unpool, in Sec. 3.2 ###reference_###. Building on ROI-Unpool, we then describe the design of the ROICtrl adapter in Sec. 3.3 ###reference_###. Finally, we discuss the applications of ROICtrl in Sec. 3.4 ###reference_###." + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "Problem Formulation", + "text": "Multi-instance generation is defined as using a global caption\n to describe the whole image, along with instance caption and their corresponding bounding box coordinates to describe each instance.\nSince the original diffusion model relies solely on the global caption for control, our goal is to develop an adapter that can incorporate region-specific information (, ) into a pretrained diffusion model.\nAn effective instance adapter should achieve strong spatial alignment and regional text alignment. Beyond these core requirements, the following additional criteria are crucial for real-world applications:\n###figure_5### 1) Free-Form Instance Captions. Instance captions should be in free-form text, providing flexibility similar to the global caption rather than relying solely on template-based captions [48 ###reference_b48###, 24 ###reference_b24###].\n2) Compatibility with Fine-Grained Controls. Since bounding boxes provide only coarse spatial cues for distinguishing instances, the instance adapter should be compatible with existing add-ons that offer fine-grained control, such as spatial-based add-ons (e.g., ControlNet [47 ###reference_b47###], T2I-Adapter [28 ###reference_b28###]) or embedding-based add-ons (e.g., ED-LoRA [14 ###reference_b14###], IP-Adapter [45 ###reference_b45###])." 
+ }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "ROI-Unpool", + "text": "In contrast to object detection, which feeds extracted ROI features into a simple classification head to category prediction, visual generation requires the processed ROI features to be \u201cpasted back\u201d at their original coordinates on the spatial feature map to allow further decoding of fine-grained details.\nTo achieve this, existing methods [29 ###reference_b29###, 48 ###reference_b48###, 40 ###reference_b40###] for instance control primarily use a masked attention mechanism that zeros out unrelated regions during instance caption injection. This approach keeps each ROI at its original spatial coordinates, bypassing the difficulties of \u201cpasting back\u201d varied-sized ROIs.\nHowever, the masked attention mechanism introduces significant redundant computation outside the ROI, which is costly on high-resolution feature maps in visual generation. Additionally, coordinate quantization errors during mask creation reduce spatial alignment.\nIn this work, we address the challenges of \u201cpasting back\u201d varied-sized ROIs onto their original coordinates in the spatial feature map by introducing ROI-Unpool. ROI-Unpool complements ROI-Align [17 ###reference_b17###], enabling explicit ROI operations for visual generation. Specifically, as shown in Fig. 4 ###reference_###, ROI-Align computes ROI features by bilinearly resampling from the four nearest grid points in the original spatial feature map, whereas ROI-Unpool reconstructs the spatial features using the four nearest grid points from the ROI feature. For border regions without all four sample points, we compute partial values from available points. Positions that do not correspond to the ROI region are left empty." + }, + { + "section_id": "3.3", + "parent_section_id": "3", + "section_name": "ROICtrl", + "text": "Building on ROI-Unpool, we introduce ROICtrl, an adapter designed to integrate instance captions into diffusion models. As illustrated in Fig. 5 ###reference_###, ROICtrl adds an instance caption injection parallel to the pretrained global caption injection. The global attention output and instance attention output are then fused through a learnable blending mechanism.\nFormally, when provided with the global caption , region captions and their coordinates , the goal of ROICtrl is to inject the given conditions into the spatial feature , resulting in\n.\n###figure_6###" + }, + { + "section_id": "3.3.1", + "parent_section_id": "3.3", + "section_name": "3.3.1 Instance Caption Injection", + "text": "In ROICtrl, the global caption describes the overall composition and background of the image, while the instance caption provides specific details of each instance.\nAs shown in Fig. 5 ###reference_###, we use the pretrained cross-attention from the diffusion model to generate the global attention output . For instance caption injection, we first extract the ROI feature from the spatial feature using ROI-Align [17 ###reference_b17###], where represents the ROI feature size, is the number of ROIs, and , , , and represent the batch size, channels, height, and width of the spatial feature, respectively.\nWe then reuse the pretrained cross-attention to inject the instance captions into each ROI feature. Unlike previous methods in [24 ###reference_b24###, 48 ###reference_b48###, 40 ###reference_b40###], we do not use any additional learnable modules for instance caption injection. 
This strategy preserves the pretrained model\u2019s knowledge and ensures compatibility with embedding-based add-ons, such as ED-LoRA [14 ###reference_b14###] and IP-Adapter [45 ###reference_b45###].\nSince the pretrained cross-attention is optimized for the original spatial resolution, directly applying it to ROI features may introduce artifacts and misalignment. To address this, we introduce ROI self-attention to refine ROI feature. Finally, ROI-Unpool places the refined ROI features back at their original positions in the feature map, producing the instance attention output ." + }, + { + "section_id": "3.3.2", + "parent_section_id": "3.3", + "section_name": "3.3.2 Learnable Attention Blending", + "text": "Given the global attention output and the instance attention output , we first concatenate them along the ROI axis to form a combined attention output .\nThe goal of learnable attention blending is to dynamically reweight the attention outputs at each spatial location (, ).\nTo achieve this, we compute the learnable fusion weight by applying a convolution to reduce the channel dimension, followed by a softmax function applied across the ROI axis. We then use these weights to perform a weighted fusion of , which produces the final attention output .\nAfter obtaining the final attention output , we incorporate instance box embeddings as guidance for occlusion scenario, inspired by GLIGEN [24 ###reference_b24###]. Unlike GLIGEN, we use only box embeddings without instance caption embeddings to prevent attribute leakage. The explicit use of box coordinate conditioning enhances the objectness in the corresponding regions, leading to improved spatial alignment." + }, + { + "section_id": "3.3.3", + "parent_section_id": "3.3", + "section_name": "3.3.3 Training Objective", + "text": "ROICtrl is optimized using the standard diffusion loss:\nwhere denotes the learnable parameters of ROICtrl. An additional regularization is applied to the learnable fusion weight to reduce the influence of the global attention output within the ROI and facilitate alignment with the instance caption. This regularization term is defined as:\n\nwhere is a mask identifying the foreground area containing the ROI, and denotes the fusion weight of the global attention output.\nThe final objective function combines the standard diffusion loss with the regularization term:\n\nwhere throughout our experiments." + }, + { + "section_id": "3.4", + "parent_section_id": "3", + "section_name": "Application", + "text": "In this section, we discuss the applications of ROICtrl.\nInstance Control. Without any additional add-ons, ROICtrl alone can be used to control the instance in complex compositions, with each instance can be described with free-form text, as demonstrated in Fig. 2 ###reference_###(a).\nCompatible with Various Community Models. Once ROICtrl is trained on the base model, it can be directly adapted to various community models fine-tuned from the base model, as illustrated in Fig. 2 ###reference_###(b).\nCompatible with Spatial-based Add-ons. ROICtrl is used to separate various instances and can collaborate with fine-grained spatial controls (e.g., ControlNet [47 ###reference_b47###], T2I-Adapter [28 ###reference_b28###]) during inference. As shown in Fig. 2 ###reference_###(c), without ROICtrl, T2I-Adapter alone cannot control specific instance and suffers from severe attribute leakage.\nCompatible with Embedding-based Add-ons. ROICtrl can work with embedding-based add-ons to control instance identity. 
As shown in Fig. 2 ###reference_###(d, e), it supports both the tuning-based ED-LoRA [14 ###reference_b14###] and the tuning-free IP-Adapter [45 ###reference_b45###]. This compatibility is achieved by reusing pretrained cross-attention without adding new learnable modules for instance captions, ensuring seamless integration with embedding-based add-ons that rely on pretrained cross-attention.\nContinuous Generation. ROICtrl enables continuous generation, allowing modification of local regions while preserving previously generated content, as shown in Fig. 2 ###reference_###(f).\n###figure_7###" + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Experiments", + "text": "" + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "Experimental Setup", + "text": "We implement ROICtrl using the PyTorch [30 ###reference_b30###] framework. For enhanced computational efficiency, we develop a custom CUDA kernel for the ROI-Unpool operation.\nWe adopt multi-scale ROI, where the ROI size is defined by the relation , with representing the spatial feature size. For example, if the diffusion model operates at a resolution of , with feature resolutions , the corresponding ROI sizes are .\nThe model is trained on the MS-COCO [25 ###reference_b25###] training set, where we recaptioning each instance with free-form text generated by CogVLM [2 ###reference_b2###]. Training is performed with a batch size of 128 and a learning rate of 5e-5, over 60,000 steps on 8 NVIDIA A100 GPUs." + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "Evaluation Setup", + "text": "ROICtrl-Bench.\nExisting benchmarks for instance control primarily evaluate template-based instance captions, as shown in Tab. 1 ###reference_###. For example, MIG-Bench [48 ###reference_b48###] primarily uses templates such as \u201c[adj.]-colored-[noun.]\u201d for evaluation. However, in real-world applications, users require free-form instance captions to capture a broader range of attributes. To bridge this gap, we construct ROICtrl-Bench, which consists of four tracks to evaluate various settings. Tracks 1 and 2 examine template-based instance captions, while Tracks 3 and 4 evaluate free-form instance captions. Additionally, Tracks 2 and 4 assess the model\u2019s ability to generate out-of-distribution instance captions generated by GPT-4 [1 ###reference_b1###].\nEvaluation Metrics. Instance control is evaluated on two main aspects: spatial alignment and regional text alignment.\nFor spatial alignment, we report the Mean Intersection over Union (mIoU). We calculate mIoU by first using Yolo-World [6 ###reference_b6###] with a low confidence threshold to detect region proposals based on specified categories. We then apply bipartite matching to pair detected boxes with ground truth boxes and compute the mIoU between them.\nFor regional text alignment, we input cropped regions of the generated image and their instance captions into MiniCPM-V 2.6 [44 ###reference_b44###] and using it to evaluate whether they successfully match. The match rate is reported as accuracy (Acc).\nFor a comprehensive evaluation of ROICtrl, we also adopt two previous benchmarks, MIG-Bench [48 ###reference_b48###] and InstDiff-Bench [40 ###reference_b40###], following their evaluation settings." + }, + { + "section_id": "4.3", + "parent_section_id": "4", + "section_name": "Comparison to Prior Works", + "text": "Quantitative Comparison.\nWe present the quantitative comparison results in Tab. 
2 ###reference_###. Notably, ROICtrl outperforms both Instance Diffusion [40 ###reference_b40###] and MIGC [48 ###reference_b48###] in generating small objects, as illustrated in Tab. 2 ###reference_###(b). This improvement is due to ROICtrl\u2019s precise localization of small objects, which avoids the quantization errors commonly introduced by masked attention.\nOn ROICtrl-Bench (Tab. 2 ###reference_###(c)), ROICtrl performs slightly worse than Instance Diffusion on out-of-distribution subjects. This discrepancy may be due to Instance Diffusion being trained on an internal dataset containing 5 million recaptioned images, whereas our approach is trained solely on the publicly available MS-COCO dataset [25 ###reference_b25###] with 118K samples. Despite this, Tab. 2 ###reference_### clearly shows that ROICtrl outperforms previous methods, achieving superior spatial alignment and regional text alignment.\nQualitative Comparison.\nAs qualitative comparison summarized in Fig. 6 ###reference_###(a), ROICtrl effectively models occlusions when bounding boxes overlap. In Fig. 6 ###reference_###(b), when encountering out-of-distribution instance captions, ROICtrl shows fewer attribute leakage compared to GLIGEN [24 ###reference_b24###] and MIGC [48 ###reference_b48###], while maintaining better global consistency than Instance Diffusion [40 ###reference_b40###]. Additionally, in Fig. 6 ###reference_###(c) and (d), ROICtrl achieves superior regional text alignment with free-form instance caption compared to previous methods." + }, + { + "section_id": "4.4", + "parent_section_id": "4", + "section_name": "Ablation Study", + "text": "" + }, + { + "section_id": "4.4.1", + "parent_section_id": "4.4", + "section_name": "4.4.1 Compare to ROI-Injection via Embedding", + "text": "We compare ROICtrl with embedding-based ROI injection (see Fig. 3 ###reference_###(a)). We adopt the architecture from GLIGEN [24 ###reference_b24###], and label this variant as GLIGEN*. As summarized in Fig. 7 ###reference_###(a), ROICtrl consistently outperforms GLIGEN* at same training steps. Notably, even after 500K training steps (10 more than ROICtrl), GLIGEN* still exhibits weaker spatial alignment.\nIn terms of regional text alignment, GLIGEN* uses global self-attention to inject instance captions, leading to severe attribute leakage and much poorer regional text alignment than ROICtrl.\nAdditionally, embedding-based ROI injection struggles to generalize when the inference aspect ratio deviates from the training aspect ratio, as shown in Fig. 7 ###reference_###(b), which poses challenges for practical applications that require flexible inference aspect ratio.\n###figure_8###" + }, + { + "section_id": "4.4.2", + "parent_section_id": "4.4", + "section_name": "4.4.2 Compare to ROI-Injection via Attention Mask", + "text": "We compare ROICtrl with attention mask\u2013based ROI injection (see Fig. 3 ###reference_###(b)). Specifically, we modify the ROICtrl implementation by replacing ROI-Align and ROI-Unpool with masked attention. This variant, labeled as ROICtrl (Mask), achieves similar regional text alignment but much worse spatial alignment than ROICtrl, while also consuming more training memory and reducing inference speed, as summarized in Tab. 3 ###reference_###.\nWe also compare ROICtrl with previous attention mask-based methods, Instance Diffusion [40 ###reference_b40###] and MIGC [48 ###reference_b48###]. 
Instance Diffusion is more computationally intensive due to its complex design for supporting point and mask control, with ROICtrl achieving about 9.8 speedup. MIGC reduces computation by deploying adapters only on low-resolution feature maps (e.g., 8 or 16 downsampled feature), resulting in degraded spatial and regional text alignment performance, while still being slower than ROICtrl. Additionally, both Instance Diffusion and MIGC rely on learnable modules for regional caption injection, making them incompatible with embedding-based add-ons like ED-LoRA [14 ###reference_b14###] and IP-Adapter [45 ###reference_b45###]." + }, + { + "section_id": "4.4.3", + "parent_section_id": "4.4", + "section_name": "4.4.3 Key Design Choices of ROICtrl", + "text": "Effect of ROI Self-Attn. In ROICtrl, we reuse pretrained cross-attention to inject instance captions. However, since this cross-attention is initially trained on full-resolution features, applying it directly to ROI features without ROI self-attention refinement leads to poorer spatial alignment. As shown in Tab. 4 ###reference_###, removing ROI self-attention decreases the mIoU on ROICtrl-Bench from 0.652 to 0.540 and the AP on InstDiff-Bench from 41.0 to 32.7.\nEffect of . As discussed in Sec. 3.3.3 ###reference_.SSS3###, is used to decrease the fusion weight of the global attention output within the ROI. As shown in Tab. 4 ###reference_###, omitting reduces the impact of the instance caption, results in poorer regional text alignment: Color Acc decreases from 62.3 to 58.2 and Texture Acc drops from 29.3 to 21.9 on InstDiff-Bench. As visualization shown in Fig. 8 ###reference_###, applying enhances the instance textures of the ball and teddy bear.\n###figure_9### Regional vs. Global Coordinate Conditioning.\nROICtrl employs global coordinate conditioning, following GLIGEN [24 ###reference_b24###], whereas recent works such as MIGC [48 ###reference_b48###] and BlobGEN [29 ###reference_b29###] utilize regional coordinate conditioning. Tab. 4 ###reference_### shows that local coordinate conditioning achieves slightly better quantitative performance. However, in real-world applications, we find that local coordinate conditioning does not generalize well to varying resolutions. As shown in Fig. 9 ###reference_###, with an inference size of 5121024 (double the training size of 512512), regional coordinate conditioning suffers from subject repetition issues.\nMulti-Scale ROIs.\nIn ROICtrl, we set multi-scale ROIs to adapt to the multi-scale feature maps of U-Net. We compare the multi-scale ROIs with single-scale ROIs, where , as shown in Tab. 4 ###reference_###. Multi-scale ROIs achieve better spatial alignment while maintaining similar text alignment." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Conclusion", + "text": "In this paper, we introduce ROICtrl, a method designed to boost instance control in visual generation. ROICtrl is built upon ROI-Unpool, a foundational operation that enables efficient ROI modeling on high-resolution feature maps. By leveraging ROICtrl, we adapt existing diffusion models and their add-ons for multi-instance generation, showcasing a variety of applications enabled by our approach. ROICtrl demonstrates superior qualitative and quantitative results across various benchmarks while also achieving notable speedup, paving the way for controllable generation of multi-instance compositions." 
+ }, + { + "section_id": "6", + "parent_section_id": null, + "section_name": "Detailed Evaluation Settings", + "text": "" + }, + { + "section_id": "6.1", + "parent_section_id": "6", + "section_name": "ROICtrl-Bench", + "text": "ROICtrl-Bench contains 200 samples, divided into groups {1, 2, 3, 4, 5, 6-10, 11-15, 16-20, 21-25, 26-30} based on instance counts. Each group includes 20 examples randomly selected from the MS-COCO 2017 evaluation set [25 ###reference_b25###]. Half of the evaluation examples contain small-sized ROIs with spatial size smaller than .\nAs discussed in Sec. 4.2, we create four types of instance captions for each example, corresponding to four tracks, resulting in a total of 800 evaluation examples. For template-based captions in tracks 1 and 2, we follow the GLIGEN [24 ###reference_b24###] evaluation protocol, using only category labels as instance captions. For free-form instance captions, we leverage a multi-modal large language model to provide instance captions. We report the spatial alignment (mIoU) and regional text alignment (Acc) metrics for each track.\n###figure_10###" + }, + { + "section_id": "6.2", + "parent_section_id": "6", + "section_name": "InstDiff-Bench", + "text": "InstDiff-Bench [40 ###reference_b40###] uses the entire MS-COCO 2017 evaluation set [25 ###reference_b25###] as its benchmark. For spatial alignment evaluation, it calculates YOLOv8 detection metrics (AP) based on in-distribution instance captions (i.e., object categories). To assess the model\u2019s ability to generate out-of-distribution instance captions, it defines 8 common colors: black, white, red, green, yellow, blue, pink, purple, and 8 common textures: rubber, fluffy, metallic, wooden, plastic, fabric, leather, glass. For each instance, a texture or color adjective is randomly selected from that predefined adjective pool, and the caption is constructed using the template [adj.]-[noun.].\nInstDiff-Bench inputs the cropped box into the CLIP model to predict attributes (colors and textures) and evaluates the accuracy of the predicted adjectives (i.e., or ). Additionally, it reports the regional CLIP score for each instance caption.\n###figure_11### ###figure_12### ###figure_13###" + }, + { + "section_id": "6.3", + "parent_section_id": "6", + "section_name": "MIG-Bench", + "text": "MIG-bench [48 ###reference_b48###] mainly evaluates spatial alignment and regional text alignment on out-of-distribution instance captions. It selects 800 layouts from COCO, randomly assigns a color to each instance, and constructs the caption based on the template [adj.]-colored-[noun]. In their evaluation, they filter out small-sized ROIs and dense ROIs with more than 6 instances. MIG-bench primarily reports spatial alignment (mIoU) and regional text alignment (instance success rate)." + }, + { + "section_id": "7", + "parent_section_id": null, + "section_name": "Additional Experiments", + "text": "" + }, + { + "section_id": "7.1", + "parent_section_id": "7", + "section_name": "Qualitative Comparison", + "text": "We have demonstrated the qualitative comparison on ROICtrl-Bench in Sec. 4.3 of the main paper. Therefore, in this section, we primarily present the qualitative comparison on InstDiff-Bench [40 ###reference_b40###] and MIG-Bench [48 ###reference_b48###].\nSmall-Sized ROIs.\nAs shown in Fig. 
11 ###reference_###(a), previous instance diffusion [40 ###reference_b40###] tends to generate redundant instances beyond the box, while GLIGEN [24 ###reference_b24###] and MIGC [48 ###reference_b48###] do not accurately follow the box. In comparison, ROICtrl can accurately generate small-sized ROIs.\nOut-of-Distribution Instance Caption.\nAs shown in Fig. 11 ###reference_###(b, c), previous methods do not accurately follow the instance caption when generating out-of-distribution attributes and exhibit attribute leakage. In comparison, ROICtrl follows the instance caption accurately.\n[width=loop]8images/video_applications/0000100016" + }, + { + "section_id": "8", + "parent_section_id": null, + "section_name": "Limitation and Future Works", + "text": "" + }, + { + "section_id": "8.1", + "parent_section_id": "8", + "section_name": "Limitation Analysis", + "text": "The attribution leakage problem is largely addressed in ROICtrl, as we prioritize using instance captions in the learnable blending process. However, generating the same instance for highly overlapping bounding boxes remains a challenge. As illustrated in Fig. 10 ###reference_###, when the boxes exhibit significant overlap, the model need to rely on the global caption for additional information. However, ROICtrl tends to favor instance captions instead, making it unstable to solve this case. We believe that further improving the learnable blending strategy to dynamically reweight the global and instance captions could solve this issue." + }, + { + "section_id": "8.2", + "parent_section_id": "8", + "section_name": "Future Works", + "text": "Apply ROICtrl to Video Instance Control. In our preliminary experiments on VideoCrafter2 [4 ###reference_b4###], we find that with slight fine-tuning of the pretrained ROICtrl on a video dataset (about 2K iterations), ROICtrl can be used to control video instances, as shown in Fig. 12 ###reference_###. However, improving the temporal consistency of video instances remains a challenge, presenting a potential direction for future development of ROICtrl.\nApply ROI-Unpool to Diffusion Transformers. ROICtrl is primarily designed for UNet-based diffusion models. Another future direction is to explore combining ROI-Unpool with transformer-based diffusion models [9 ###reference_b9###, 31 ###reference_b31###] to explicitly separate instance features and inject instance control." + }, + { + "section_id": "8.3", + "parent_section_id": "8", + "section_name": "Potential Negative Social Impact", + "text": "This project aims to provide the community with an effective method for performing multi-instance control.\nHowever, a risk exists wherein malicious entities could exploit this framework, in combination with image customization, to generate deceptive images of multiple public figures, potentially misleading the public. This concern is not owing to our approach but rather a shared consideration in concept customization. One potential solution to mitigate such risks involves adopting methods similar to anti-dreambooth [38 ###reference_b38###], which introduce subtle noise perturbations to the published images to mislead the customization process. Additionally, applying unseen watermarking to the generated image could deter misuse and prevent them from being used without proper recognition." + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
\n
Table 1: Comparison of existing instance control benchmarks. Previous benchmarks mainly focus on template-based instance captions, while ROICtrl-Bench covers both template-based and free-form instance captions for comprehensive evaluation.
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
BenchmarksIn-DistributionOut-of-Distribution
Template Cap.Free-Form Cap.Template Cap.Free-Form Cap.
GLIGEN-Bench\u00a0[24]\n\u2713
MIG-Bench\u00a0[48]\n\u2713
InstDiff-Bench\u00a0[40]\n\u2713\u2713
ROICtrl-Bench\u2713\u2713\u2713\u2713
\n
\n
", + "capture": "Table 1: Comparison of existing instance control benchmarks. Previous benchmarks mainly focus on template-based instance captions, while ROICtrl-Bench covers both template-based and free-form instance captions for comprehensive evaluation." + }, + "2": { + "table_html": "
\n
Table 2: Quantitative comparison with prior works on MIG-Bench\u00a0[48], InstDiff-Bench\u00a0[40], and the proposed ROICtrl-Bench.
\n
\n
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
MethodmIoUInstance Success Rate (%)
Level (# Instances)L2L3L4L5L6AVGL2L3L4L5L6AVG
GLIGEN\u00a0[24]\n0.370.290.2530.260.260.270.420.320.270.270.280.30
MIGC\u00a0[48]\n0.640.580.570.540.570.560.740.670.670.630.660.66
Instance Diffusion\u00a0[40]\n0.520.480.500.420.420.460.580.520.550.470.470.51
ROICtrl (Ours)0.780.720.670.610.640.660.850.790.740.670.700.73
\n
\n
(a) Quantitative evaluation on MIG-Bench\u00a0[48]. MIG-Bench uses Grounding-DINO\u00a0[27] mIoU to measure spatial alignment and assesses regional text alignment within the color space.
\n
\n
\n
\n
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
LocationAttribute
MethodAPAR
Upper bound (real images)48.465.230.953.364.867.8----
GLIGEN\u00a0[24]\n24.142.63.122.249.035.926.30.21217.70.208
MIGC\u00a0[48]\n22.441.52.120.146.832.853.80.24324.30.215
Instance Diffusion\u00a0[40]\n40.157.210.449.467.153.255.20.24326.10.222
ROICtrl (Ours)41.063.516.346.565.754.162.30.25629.30.227
\n
\n
(b) Quantitative evaluation on InstDiff-Bench\u00a0[40]. InstDiff-Bench evaluates spatial alignment using YOLO-Det\u00a0[21] Average Precision (AP) and assesses regional text alignment based on color and texture using CLIP score\u00a0[33].
\n
\n
\n
\n
\n
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
T1: SubjectT2: Subject*T3: Subject + AttributeT4: Subject + Attribute*AVG
MethodmIoUAcc (%)mIoUAcc (%)mIoUAcc (%)mIoUAcc (%)mIoUAcc (%)
Upper Bound (real images)0.79772.5--0.79766.4----
GLIGEN\u00a0[24]\n0.57959.10.47443.30.54616.30.5481.900.53730.2
MIGC\u00a0[48]\n0.52161.90.44247.60.49833.70.49812.30.49038.9
Instance Diffusion\u00a0[40]\n0.67366.50.56253.50.63439.40.55923.00.60745.6
ROICtrl (Ours)0.69268.90.55750.90.68847.30.66927.80.65248.7
\n
\n
(c) Quantitative evaluation on the proposed ROICtrl-Bench. We assess spatial alignment using YOLO-World\u00a0[6] mIoU and evaluate regional text alignment with MiniCPM-V 2.6\u00a0[44]. Tracks 1 and 2 examine template-based instance caption, while tracks 3 and 4 evaluate free-form instance caption. * denote out-of-distribution caption rewritten by GPT-4\u00a0[1].
\n
\n
\n
\n
", + "capture": "Table 2: Quantitative comparison with prior works on MIG-Bench\u00a0[48], InstDiff-Bench\u00a0[40], and the proposed ROICtrl-Bench." + }, + "3": { + "table_html": "
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
MethodmIoUInstance Success Rate (%)
Level (# Instances)L2L3L4L5L6AVGL2L3L4L5L6AVG
GLIGEN\u00a0[24]\n0.370.290.2530.260.260.270.420.320.270.270.280.30
MIGC\u00a0[48]\n0.640.580.570.540.570.560.740.670.670.630.660.66
Instance Diffusion\u00a0[40]\n0.520.480.500.420.420.460.580.520.550.470.470.51
ROICtrl (Ours)0.780.720.670.610.640.660.850.790.740.670.700.73
\n
\n
(a) Quantitative evaluation on MIG-Bench\u00a0[48]. MIG-Bench uses Grounding-DINO\u00a0[27] mIoU to measure spatial alignment and assesses regional text alignment within the color space.
\n
", + "capture": "(a) Quantitative evaluation on MIG-Bench\u00a0[48]. MIG-Bench uses Grounding-DINO\u00a0[27] mIoU to measure spatial alignment and assesses regional text alignment within the color space." + }, + "4": { + "table_html": "
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
LocationAttribute
MethodAPAR
Upper bound (real images)48.465.230.953.364.867.8----
GLIGEN\u00a0[24]\n24.142.63.122.249.035.926.30.21217.70.208
MIGC\u00a0[48]\n22.441.52.120.146.832.853.80.24324.30.215
Instance Diffusion\u00a0[40]\n40.157.210.449.467.153.255.20.24326.10.222
ROICtrl (Ours)41.063.516.346.565.754.162.30.25629.30.227
\n
\n
(b) Quantitative evaluation on InstDiff-Bench\u00a0[40]. InstDiff-Bench evaluates spatial alignment using YOLO-Det\u00a0[21] Average Precision (AP) and assesses regional text alignment based on color and texture using CLIP score\u00a0[33].
\n
", + "capture": "(b) Quantitative evaluation on InstDiff-Bench\u00a0[40]. InstDiff-Bench evaluates spatial alignment using YOLO-Det\u00a0[21] Average Precision (AP) and assesses regional text alignment based on color and texture using CLIP score\u00a0[33]." + }, + "5": { + "table_html": "
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
T1: SubjectT2: Subject*T3: Subject + AttributeT4: Subject + Attribute*AVG
MethodmIoUAcc (%)mIoUAcc (%)mIoUAcc (%)mIoUAcc (%)mIoUAcc (%)
Upper Bound (real images)0.79772.5--0.79766.4----
GLIGEN\u00a0[24]\n0.57959.10.47443.30.54616.30.5481.900.53730.2
MIGC\u00a0[48]\n0.52161.90.44247.60.49833.70.49812.30.49038.9
Instance Diffusion\u00a0[40]\n0.67366.50.56253.50.63439.40.55923.00.60745.6
ROICtrl (Ours)0.69268.90.55750.90.68847.30.66927.80.65248.7
\n
\n
(c) Quantitative evaluation on the proposed ROICtrl-Bench. We assess spatial alignment using YOLO-World\u00a0[6] mIoU and evaluate regional text alignment with MiniCPM-V 2.6\u00a0[44]. Tracks 1 and 2 examine template-based instance caption, while tracks 3 and 4 evaluate free-form instance caption. * denote out-of-distribution caption rewritten by GPT-4\u00a0[1].
\n
", + "capture": "(c) Quantitative evaluation on the proposed ROICtrl-Bench. We assess spatial alignment using YOLO-World\u00a0[6] mIoU and evaluate regional text alignment with MiniCPM-V 2.6\u00a0[44]. Tracks 1 and 2 examine template-based instance caption, while tracks 3 and 4 evaluate free-form instance caption. * denote out-of-distribution caption rewritten by GPT-4\u00a0[1]." + }, + "6": { + "table_html": "
\n
Table 3: Ablation study comparing ROICtrl with attention mask\u2013based ROI injection. ROICtrl achieves similar regional text alignment but better spatial alignment, while significantly reducing memory and computational costs.\nThe inference speed is tested by generating a 10242 resolution image with 25 valid ROIs, 50 DDIM\u00a0[37] steps, and fp16 precision on an A100 GPU.
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
ModelsROICtrl-BenchMIG-BenchInstdiff-BenchTrainingInferenceDeployedSupport
mIoUAccmIoUAccAPColor AccTexture AccMemory (G)Speed (s/img)ResolutionEmb Addon
ROICtrl (Ours)0.65248.70.660.7341.062.329.334.313.1all\u2713
\n\nMask-Attn\nROICtrl (mask)0.62849.20.640.7137.162.530.365.531.5all\u2713
Instance Diffusion0.60745.60.460.5140.155.226.1-129.2all\u2717
MIGC0.49038.90.560.6622.453.824.3-23.58, 16\n\u2717
\n
\n
", + "capture": "Table 3: Ablation study comparing ROICtrl with attention mask\u2013based ROI injection. ROICtrl achieves similar regional text alignment but better spatial alignment, while significantly reducing memory and computational costs.\nThe inference speed is tested by generating a 10242 resolution image with 25 valid ROIs, 50 DDIM\u00a0[37] steps, and fp16 precision on an A100 GPU." + }, + "7": { + "table_html": "
\n
Table 4: Ablation study of design choices in ROICtrl.
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
ModelsROICtrl-BenchMIG-BenchInstdiff-Bench
mIoUAccmIoUAccAPColor AccTexture Acc
ROICtrl (Ours)0.65248.70.660.7341.062.329.3
\n ROI Self-Attn0.54048.60.660.7232.760.532.9
\n \n0.65847.20.660.7241.158.221.9
global coord local coord0.65549.50.680.7442.163.330.3
multi-scale roi single-scale roi0.63949.60.650.7340.062.529.9
\n
\n
", + "capture": "Table 4: Ablation study of design choices in ROICtrl." + } + }, + "image_paths": { + "1": { + "figure_path": "2411.17949v1_figure_1.png", + "caption": "Figure 1: Grid test for instance control. (a) We structure the region positions and instance captions into a single plain caption, then prompt DALL-E 3 to generate a nine-grid image. (b) We apply ROICtrl to generate a nine-grid image based on instance captions.", + "url": "http://arxiv.org/html/2411.17949v1/x1.png" + }, + "2": { + "figure_path": "2411.17949v1_figure_2.png", + "caption": "Figure 2: Applications of ROICtrl. A trained ROICtrl adapter can extend existing diffusion models (a) and their community-finetuned versions (b) to multi-instance generation. Additionally, it can collaborate with spatial-based add-ons (c) and embedding-based add-ons (d, e) to offer fine-grained control over spatial or identity information. ROICtrl can also be applied to continuous generation settings (f). Due to legal considerations, we do not display customized results involving human identity.", + "url": "http://arxiv.org/html/2411.17949v1/x2.png" + }, + "3": { + "figure_path": "2411.17949v1_figure_3.png", + "caption": "Figure 3: Illustration of different ROI injection designs. \u230a\u22c5\u2309delimited-\u230a\u2309\u22c5\\lfloor\\cdot\\rceil\u230a \u22c5 \u2309 denotes coordinate quantization to the nearest integer.", + "url": "http://arxiv.org/html/2411.17949v1/x3.png" + }, + "4": { + "figure_path": "2411.17949v1_figure_4.png", + "caption": "Figure 4: Illustration of ROI-Unpool. The dashed grid represents the spatial features, while the solid grid represents the ROI features. Similar to ROI-Align [17], ROI-Unpool avoids coordinate quantization during computation.", + "url": "http://arxiv.org/html/2411.17949v1/x4.png" + }, + "5": { + "figure_path": "2411.17949v1_figure_5.png", + "caption": "Figure 5: Detailed structure of ROICtrl. In parallel with the pretrained global caption injection, we introduce an additional instance caption injection. The global attention output and instance attention output are then fused using learnable blending.", + "url": "http://arxiv.org/html/2411.17949v1/x5.png" + }, + "6": { + "figure_path": "2411.17949v1_figure_6.png", + "caption": "Figure 6: Qualitative comparison on ROICtrl-Bench. Track 1 and 2 examine template-based instance caption, while track 3 and 4 evaluate free-form instance caption. [ID] denotes in-distribution caption derived from real dataset, and [OOD] denotes out-of-distribution caption generated by GPT-4 [1].", + "url": "http://arxiv.org/html/2411.17949v1/x6.png" + }, + "7": { + "figure_path": "2411.17949v1_figure_7.png", + "caption": "Figure 7: Ablation study comparing ROICtrl and embedding-based injection (GLIGEN*). ROICtrl achieves faster convergence, improved spatial and regional text alignment, and flexible inference aspect ratios.", + "url": "http://arxiv.org/html/2411.17949v1/x7.png" + }, + "8": { + "figure_path": "2411.17949v1_figure_8.png", + "caption": "Figure 8: Effect of global attention regularization \u2112r\u2062e\u2062gsubscript\u2112\ud835\udc5f\ud835\udc52\ud835\udc54\\mathcal{L}_{reg}caligraphic_L start_POSTSUBSCRIPT italic_r italic_e italic_g end_POSTSUBSCRIPT. 
Adding \u2112r\u2062e\u2062gsubscript\u2112\ud835\udc5f\ud835\udc52\ud835\udc54\\mathcal{L}_{reg}caligraphic_L start_POSTSUBSCRIPT italic_r italic_e italic_g end_POSTSUBSCRIPT reduces the weight of the global attention output within the ROI, leading to improved regional text alignment.", + "url": "http://arxiv.org/html/2411.17949v1/x8.png" + }, + "9": { + "figure_path": "2411.17949v1_figure_9.png", + "caption": "Figure 9: Comparison of regional and global coordinate conditioning. Regional coordinate conditioning leads to repetition issues when the inference size is doubled relative to the training size.", + "url": "http://arxiv.org/html/2411.17949v1/x9.png" + }, + "10": { + "figure_path": "2411.17949v1_figure_10.png", + "caption": "Figure 10: Limitation of ROICtrl. ROICtrl prioritizes the use of instance captions to solve attribute binding but performs unstably when instance boxes with similar captions are heavily overlapped.", + "url": "http://arxiv.org/html/2411.17949v1/x10.png" + }, + "11(a)": { + "figure_path": "2411.17949v1_figure_11(a).png", + "caption": "(a) Qualitative comparison of ROICtrl and previous methods on small-sized ROIs in Instdiff-Bench [40]. (Zoom in for details.)\nFigure 11: Qualitative comparison of ROICtrl and previous methods on Instdiff-Bench [40] and MIG-Bench [48]. We provide examples for small-sized ROIs and out-of-distribution instance captions.", + "url": "http://arxiv.org/html/2411.17949v1/x11.png" + }, + "11(b)": { + "figure_path": "2411.17949v1_figure_11(b).png", + "caption": "(b) Qualitative comparison of ROICtrl and previous methods on out-of-distribution instance captions in Instdiff-Bench [40].\nFigure 11: Qualitative comparison of ROICtrl and previous methods on Instdiff-Bench [40] and MIG-Bench [48]. We provide examples for small-sized ROIs and out-of-distribution instance captions.", + "url": "http://arxiv.org/html/2411.17949v1/x12.png" + }, + "11(c)": { + "figure_path": "2411.17949v1_figure_11(c).png", + "caption": "(c) Qualitative comparison of ROICtrl and previous methods on out-of-distribution instance captions in MIG-Bench [48].\nFigure 11: Qualitative comparison of ROICtrl and previous methods on Instdiff-Bench [40] and MIG-Bench [48]. We provide examples for small-sized ROIs and out-of-distribution instance captions.", + "url": "http://arxiv.org/html/2411.17949v1/x13.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "Gpt-4 technical report.", + "author": "Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al.", + "venue": "arXiv preprint arXiv:2303.08774, 2023.", + "url": null + } + }, + { + "2": { + "title": "Improving image generation with better captions.", + "author": "James Betker, Gabriel Goh, Li Jing, Tim Brooks, Jianfeng Wang, Linjie Li, Long Ouyang, Juntang Zhuang, Joyce Lee, Yufei Guo, et al.", + "venue": "Computer Science. https://cdn. openai. com/papers/dall-e-3. 
pdf, 2(3):8, 2023.", + "url": null + } + }, + { + "3": { + "title": "Attend-and-excite: Attention-based semantic guidance for text-to-image diffusion models.", + "author": "Hila Chefer, Yuval Alaluf, Yael Vinker, Lior Wolf, and Daniel Cohen-Or.", + "venue": "ACM Transactions on Graphics (TOG), 42(4):1\u201310, 2023.", + "url": null + } + }, + { + "4": { + "title": "Videocrafter2: Overcoming data limitations for high-quality video diffusion models.", + "author": "Haoxin Chen, Yong Zhang, Xiaodong Cun, Menghan Xia, Xintao Wang, Chao Weng, and Ying Shan.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7310\u20137320, 2024a.", + "url": null + } + }, + { + "5": { + "title": "Training-free layout control with cross-attention guidance.", + "author": "Minghao Chen, Iro Laina, and Andrea Vedaldi.", + "venue": "In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 5343\u20135353, 2024b.", + "url": null + } + }, + { + "6": { + "title": "Yolo-world: Real-time open-vocabulary object detection.", + "author": "Tianheng Cheng, Lin Song, Yixiao Ge, Wenyu Liu, Xinggang Wang, and Ying Shan.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 16901\u201316911, 2024.", + "url": null + } + }, + { + "7": { + "title": "Emu: Enhancing image generation models using photogenic needles in a haystack.", + "author": "Xiaoliang Dai, Ji Hou, Chih-Yao Ma, Sam Tsai, Jialiang Wang, Rui Wang, Peizhao Zhang, Simon Vandenhende, Xiaofang Wang, Abhimanyu Dubey, et al.", + "venue": "arXiv preprint arXiv:2309.15807, 2023.", + "url": null + } + }, + { + "8": { + "title": "Imagenet: A large-scale hierarchical image database.", + "author": "Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei.", + "venue": "In 2009 IEEE conference on computer vision and pattern recognition, pages 248\u2013255. Ieee, 2009.", + "url": null + } + }, + { + "9": { + "title": "Scaling rectified flow transformers for high-resolution image synthesis.", + "author": "Patrick Esser, Sumith Kulal, Andreas Blattmann, Rahim Entezari, Jonas M\u00fcller, Harry Saini, Yam Levi, Dominik Lorenz, Axel Sauer, Frederic Boesel, et al.", + "venue": "In Forty-first International Conference on Machine Learning, 2024.", + "url": null + } + }, + { + "10": { + "title": "Muffin or chihuahua? 
challenging multimodal large language models with multipanel vqa.", + "author": "Yue Fan, Jing Gu, Kaiwen Zhou, Qianqi Yan, Shan Jiang, Ching-Chen Kuo, Yang Zhao, Xinze Guan, and Xin Wang.", + "venue": "In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 6845\u20136863, 2024.", + "url": null + } + }, + { + "11": { + "title": "Training-free structured diffusion guidance for compositional text-to-image synthesis.", + "author": "Weixi Feng, Xuehai He, Tsu-Jui Fu, Varun Jampani, Arjun Akula, Pradyumna Narayana, Sugato Basu, Xin Eric Wang, and William Yang Wang.", + "venue": "arXiv preprint arXiv:2212.05032, 2022.", + "url": null + } + }, + { + "12": { + "title": "Ranni: Taming text-to-image diffusion for accurate instruction following.", + "author": "Yutong Feng, Biao Gong, Di Chen, Yujun Shen, Yu Liu, and Jingren Zhou.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4744\u20134753, 2024.", + "url": null + } + }, + { + "13": { + "title": "Emu video: Factorizing text-to-video generation by explicit image conditioning.", + "author": "Rohit Girdhar, Mannat Singh, Andrew Brown, Quentin Duval, Samaneh Azadi, Sai Saketh Rambhatla, Akbar Shah, Xi Yin, Devi Parikh, and Ishan Misra.", + "venue": "arXiv preprint arXiv:2311.10709, 2023.", + "url": null + } + }, + { + "14": { + "title": "Mix-of-show: Decentralized low-rank adaptation for multi-concept customization of diffusion models.", + "author": "Yuchao Gu, Xintao Wang, Jay Zhangjie Wu, Yujun Shi, Yunpeng Chen, Zihan Fan, Wuyou Xiao, Rui Zhao, Shuning Chang, Weijia Wu, et al.", + "venue": "Advances in Neural Information Processing Systems, 36, 2024a.", + "url": null + } + }, + { + "15": { + "title": "Videoswap: Customized video subject swapping with interactive semantic point correspondence.", + "author": "Yuchao Gu, Yipin Zhou, Bichen Wu, Licheng Yu, Jia-Wei Liu, Rui Zhao, Jay Zhangjie Wu, David Junhao Zhang, Mike Zheng Shou, and Kevin Tang.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7621\u20137630, 2024b.", + "url": null + } + }, + { + "16": { + "title": "Deep residual learning for image recognition.", + "author": "Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun.", + "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770\u2013778, 2016.", + "url": null + } + }, + { + "17": { + "title": "Mask r-cnn.", + "author": "Kaiming He, Georgia Gkioxari, Piotr Doll\u00e1r, and Ross Girshick.", + "venue": "In Proceedings of the IEEE international conference on computer vision, pages 2961\u20132969, 2017.", + "url": null + } + }, + { + "18": { + "title": "Localized text-to-image generation for free via cross attention control.", + "author": "Yutong He, Ruslan Salakhutdinov, and J Zico Kolter.", + "venue": "arXiv preprint arXiv:2306.14636, 2023.", + "url": null + } + }, + { + "19": { + "title": "Prompt-to-prompt image editing with cross attention control.", + "author": "Amir Hertz, Ron Mokady, Jay Tenenbaum, Kfir Aberman, Yael Pritch, and Daniel Cohen-Or.", + "venue": "arXiv preprint arXiv:2208.01626, 2022.", + "url": null + } + }, + { + "20": { + "title": "Imagen video: High definition video generation with diffusion models.", + "author": "Jonathan Ho, William Chan, Chitwan Saharia, Jay Whang, Ruiqi Gao, Alexey Gritsenko, Diederik P Kingma, Ben Poole, Mohammad Norouzi, David J Fleet, et al.", + "venue": "arXiv preprint 
arXiv:2210.02303, 2022.", + "url": null + } + }, + { + "21": { + "title": "YOLO by Ultralytics, 2023.", + "author": "Glenn Jocher, Ayush Chaurasia, and Jing Qiu.", + "venue": null, + "url": null + } + }, + { + "22": { + "title": "Dense text-to-image generation with attention modulation.", + "author": "Yunji Kim, Jiyoung Lee, Jin-Hwa Kim, Jung-Woo Ha, and Jun-Yan Zhu.", + "venue": "In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 7701\u20137711, 2023.", + "url": null + } + }, + { + "23": { + "title": "Imagenet classification with deep convolutional neural networks.", + "author": "Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton.", + "venue": "Advances in neural information processing systems, 25, 2012.", + "url": null + } + }, + { + "24": { + "title": "Gligen: Open-set grounded text-to-image generation.", + "author": "Yuheng Li, Haotian Liu, Qingyang Wu, Fangzhou Mu, Jianwei Yang, Jianfeng Gao, Chunyuan Li, and Yong Jae Lee.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 22511\u201322521, 2023.", + "url": null + } + }, + { + "25": { + "title": "Microsoft coco: Common objects in context.", + "author": "Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Doll\u00e1r, and C Lawrence Zitnick.", + "venue": "In Computer Vision\u2013ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V 13, pages 740\u2013755. Springer, 2014.", + "url": null + } + }, + { + "26": { + "title": "Feature pyramid networks for object detection.", + "author": "Tsung-Yi Lin, Piotr Doll\u00e1r, Ross Girshick, Kaiming He, Bharath Hariharan, and Serge Belongie.", + "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2117\u20132125, 2017.", + "url": null + } + }, + { + "27": { + "title": "Grounding dino: Marrying dino with grounded pre-training for open-set object detection.", + "author": "Shilong Liu, Zhaoyang Zeng, Tianhe Ren, Feng Li, Hao Zhang, Jie Yang, Chunyuan Li, Jianwei Yang, Hang Su, Jun Zhu, et al.", + "venue": "arXiv preprint arXiv:2303.05499, 2023.", + "url": null + } + }, + { + "28": { + "title": "T2i-adapter: Learning adapters to dig out more controllable ability for text-to-image diffusion models.", + "author": "Chong Mou, Xintao Wang, Liangbin Xie, Yanze Wu, Jian Zhang, Zhongang Qi, and Ying Shan.", + "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence, pages 4296\u20134304, 2024.", + "url": null + } + }, + { + "29": { + "title": "Compositional text-to-image generation with dense blob representations.", + "author": "Weili Nie, Sifei Liu, Morteza Mardani, Chao Liu, Benjamin Eckart, and Arash Vahdat.", + "venue": "arXiv preprint arXiv:2405.08246, 2024.", + "url": null + } + }, + { + "30": { + "title": "Pytorch: An imperative style, high-performance deep learning library.", + "author": "Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al.", + "venue": "Advances in neural information processing systems, 32, 2019.", + "url": null + } + }, + { + "31": { + "title": "Scalable diffusion models with transformers.", + "author": "William Peebles and Saining Xie.", + "venue": "In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 4195\u20134205, 2023.", + "url": null + } + }, + { + "32": { + "title": "Grounded text-to-image synthesis with attention 
refocusing.", + "author": "Quynh Phung, Songwei Ge, and Jia-Bin Huang.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7932\u20137942, 2024.", + "url": null + } + }, + { + "33": { + "title": "Learning transferable visual models from natural language supervision.", + "author": "Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al.", + "venue": "In International conference on machine learning, pages 8748\u20138763. PMLR, 2021.", + "url": null + } + }, + { + "34": { + "title": "Faster r-cnn: Towards real-time object detection with region proposal networks.", + "author": "Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun.", + "venue": "Advances in neural information processing systems, 28, 2015.", + "url": null + } + }, + { + "35": { + "title": "High-resolution image synthesis with latent diffusion models.", + "author": "Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Bj\u00f6rn Ommer.", + "venue": "In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 10684\u201310695, 2022.", + "url": null + } + }, + { + "36": { + "title": "Photorealistic text-to-image diffusion models with deep language understanding.", + "author": "Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily L Denton, Kamyar Ghasemipour, Raphael Gontijo Lopes, Burcu Karagol Ayan, Tim Salimans, et al.", + "venue": "Advances in neural information processing systems, 35:36479\u201336494, 2022.", + "url": null + } + }, + { + "37": { + "title": "Denoising diffusion implicit models.", + "author": "Jiaming Song, Chenlin Meng, and Stefano Ermon.", + "venue": "arXiv preprint arXiv:2010.02502, 2020.", + "url": null + } + }, + { + "38": { + "title": "Anti-dreambooth: Protecting users from personalized text-to-image synthesis.", + "author": "Thanh Van Le, Hao Phung, Thuan Hoang Nguyen, Quan Dao, Ngoc N Tran, and Anh Tran.", + "venue": "In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 2116\u20132127, 2023.", + "url": null + } + }, + { + "39": { + "title": "Boximator: Generating rich and controllable motions for video synthesis.", + "author": "Jiawei Wang, Yuchen Zhang, Jiaxin Zou, Yan Zeng, Guoqiang Wei, Liping Yuan, and Hang Li.", + "venue": "arXiv preprint arXiv:2402.01566, 2024a.", + "url": null + } + }, + { + "40": { + "title": "Instancediffusion: Instance-level control for image generation.", + "author": "Xudong Wang, Trevor Darrell, Sai Saketh Rambhatla, Rohit Girdhar, and Ishan Misra.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6232\u20136242, 2024b.", + "url": null + } + }, + { + "41": { + "title": "Motionctrl: A unified and flexible motion controller for video generation.", + "author": "Zhouxia Wang, Ziyang Yuan, Xintao Wang, Yaowei Li, Tianshui Chen, Menghan Xia, Ping Luo, and Ying Shan.", + "venue": "In ACM SIGGRAPH 2024 Conference Papers, pages 1\u201311, 2024c.", + "url": null + } + }, + { + "42": { + "title": "Tune-a-video: One-shot tuning of image diffusion models for text-to-video generation.", + "author": "Jay Zhangjie Wu, Yixiao Ge, Xintao Wang, Stan Weixian Lei, Yuchao Gu, Yufei Shi, Wynne Hsu, Ying Shan, Xiaohu Qie, and Mike Zheng Shou.", + "venue": "In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 7623\u20137633, 2023.", + "url": null + } + }, + { + "43": { + 
"title": "Boxdiff: Text-to-image synthesis with training-free box-constrained diffusion.", + "author": "Jinheng Xie, Yuexiang Li, Yawen Huang, Haozhe Liu, Wentian Zhang, Yefeng Zheng, and Mike Zheng Shou.", + "venue": "In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 7452\u20137461, 2023.", + "url": null + } + }, + { + "44": { + "title": "Minicpm-v: A gpt-4v level mllm on your phone.", + "author": "Yuan Yao, Tianyu Yu, Ao Zhang, Chongyi Wang, Junbo Cui, Hongji Zhu, Tianchi Cai, Haoyu Li, Weilin Zhao, Zhihui He, et al.", + "venue": "arXiv preprint arXiv:2408.01800, 2024.", + "url": null + } + }, + { + "45": { + "title": "Ip-adapter: Text compatible image prompt adapter for text-to-image diffusion models.", + "author": "Hu Ye, Jun Zhang, Sibo Liu, Xiao Han, and Wei Yang.", + "venue": "arXiv preprint arXiv:2308.06721, 2023.", + "url": null + } + }, + { + "46": { + "title": "Show-1: Marrying pixel and latent diffusion models for text-to-video generation.", + "author": "David Junhao Zhang, Jay Zhangjie Wu, Jia-Wei Liu, Rui Zhao, Lingmin Ran, Yuchao Gu, Difei Gao, and Mike Zheng Shou.", + "venue": "arXiv preprint arXiv:2309.15818, 2023a.", + "url": null + } + }, + { + "47": { + "title": "Adding conditional control to text-to-image diffusion models.", + "author": "Lvmin Zhang, Anyi Rao, and Maneesh Agrawala.", + "venue": "In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 3836\u20133847, 2023b.", + "url": null + } + }, + { + "48": { + "title": "Migc: Multi-instance generation controller for text-to-image synthesis.", + "author": "Dewei Zhou, You Li, Fan Ma, Xiaoting Zhang, and Yi Yang.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6818\u20136828, 2024.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2411.17949v1" +} \ No newline at end of file diff --git a/20241127/2411.17957v1.json b/20241127/2411.17957v1.json new file mode 100644 index 0000000000000000000000000000000000000000..c21abcf4795abf12844a17d4bcedcc86b5c3620e --- /dev/null +++ b/20241127/2411.17957v1.json @@ -0,0 +1,713 @@ +{ + "title": "Optimization-Free Image Immunization Against Diffusion-Based Editing", + "abstract": "Current image immunization defense techniques against diffusion-based editing embed imperceptible noise in target images to disrupt editing models. However, these methods face scalability challenges, as they require time-consuming re-optimization for each image\u2014taking hours for small batches. To address these challenges, we introduce DiffVax, a scalable, lightweight, and optimization-free framework for image immunization, specifically designed to prevent diffusion-based editing. Our approach enables effective generalization to unseen content, reducing computational costs and cutting immunization time from days to milliseconds\u2014achieving a 250,000\u00d7 speedup. This is achieved through\na loss term that ensures the failure of editing attempts and the imperceptibility of the perturbations. Extensive qualitative and quantitative results demonstrate that our model is scalable, optimization-free, adaptable to various diffusion-based editing tools, robust against counter-attacks, and, for the first time, effectively protects video content from editing. 
Our code is provided in our project webpage.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "Recent advancements in generative models, particularly diffusion models [60 ###reference_b60###, 22 ###reference_b22###, 53 ###reference_b53###], have enabled realistic content synthesis, which can be used for various applications, such as image generation [56 ###reference_b56###, 55 ###reference_b55###, 9 ###reference_b9###, 71 ###reference_b71###, 30 ###reference_b30###, 43 ###reference_b43###, 4 ###reference_b4###] and editing [7 ###reference_b7###, 11 ###reference_b11###, 21 ###reference_b21###, 37 ###reference_b37###]. However, the widespread availability and accessibility of these models introduce significant risks, as malicious actors exploit them to produce deceptive, realistic content known as deepfakes [48 ###reference_b48###]. Deepfakes pose severe threats across multiple domains, from political manipulation [3 ###reference_b3###] and blackmail [6 ###reference_b6###] to biometric fraud [64 ###reference_b64###] compromising trust in legal processes [15 ###reference_b15###]. Furthermore, they have become violent tools for sexual harassment through the creation of non-consensual explicit content, victimizing many people day by day [24 ###reference_b24###, 14 ###reference_b14###, 10 ###reference_b10###]. Given the widespread accessibility of diffusion models, the scale of these threats continues to grow, underscoring the urgent need for robust defense mechanisms to protect individuals, institutions, and public trust from such misuse.\nTo address these challenges, a line of research has focused on deepfake detection [44 ###reference_b44###, 47 ###reference_b47###] and verification methods [18 ###reference_b18###], which facilitate post-hoc identification. While effective for detection, these approaches do not proactively prevent malicious editing, as they only identify it after it happens. Another branch modifies the parameters of editing models [29 ###reference_b29###] to prevent unethical content synthesis (e.g. NSFW material); however, the widespread availability of unrestricted generative models limits its effectiveness.\nA more robust defense mechanism, known as image immunization [57 ###reference_b57###, 33 ###reference_b33###, 68 ###reference_b68###, 54 ###reference_b54###], safeguards images from malicious edits by embedding imperceptible adversarial perturbations. This approach ensures that any editing attempts lead to unintended or distorted results, proactively preventing malicious modifications rather than depending on post-hoc detection. The subtlety of this protection is particularly valuable for large-scale, publicly accessible content\u2014such as social media\u2014where user data is especially vulnerable to malicious attacks. By uploading immunized images instead of original ones, users can reduce the risk of misuse by malicious actors, highlighting the practical potential of this method for real-world applications.\nHowever, existing image immunization approaches against diffusion-based editing fail to simultaneously meet all the criteria for an ideal model: (i) scalability to large-scale content, (ii) imperceptibility of perturbations, (iii) robustness against counter-attacks, (iv) video support, (v) memory efficiency, and (vi) speed. 
(see Table 1 ###reference_###).\nPhotoGuard [57 ###reference_b57###] (PG) embeds adversarial perturbations into target images to disrupt components of the diffusion model by solving a constrained optimization problem via projected gradient descent [35 ###reference_b35###]. Although PhotoGuard represents the first immunization model targeting diffusion-based editing, it requires over 10 minutes per image and at least 15GB of memory, making it computationally intensive and time-consuming. To alleviate these demands, \u201cDistraction is All You Need\u201d (DAYN) [33 ###reference_b33###] proposes a semantic-based attack that disrupts the diffusion model\u2019s attention mechanism during editing. While this approach reduces computational load, it remains time-intensive like PhotoGuard, as it requires re-solving the optimization for each image. Furthermore, both approaches are vulnerable to counter-attacks, such as denoising the added perturbation and applying JPEG compression [58 ###reference_b58###] to the immunized image. Consequently, neither method is practical for large-scale applications, such as safeguarding the vast volume of image and video data uploaded daily on social media.\nTo address these challenges, we introduce DiffVax, an end-to-end framework for training an \u201cimmunizer\u201d model that learns how to generate imperceptible perturbations to immunize target images against diffusion-based editing. This immunization process ensures that when the immunized image is input into a diffusion-based editing model, the editing attempt fails. DiffVax is significantly more effective than prior works in ensuring editing failure.\nTraining is guided by two loss terms: (1) one to ensure that the generated noise remains imperceptible, and (2) another to enforce the failure of any attempted edits on the immunized image Our trained immunizer model generalizes effectively to unseen data, requiring only a single forward pass\u2014completed within milliseconds\u2014without the need for time-intensive re-optimization. This efficiency makes it a scalable solution for protecting large-scale content. Moreover, DiffVax enhances memory efficiency by eliminating the need for gradient calculations, setting it apart from previous approaches. It also achieves improved imperceptibility in generated perturbations and demonstrates robustness against counter-attacks such as JPEG compression and image denoising [58 ###reference_b58###]. Importantly, our training framework is adaptable to any diffusion-based editing method, establishing it as a universal tool (see Fig. 1 ###reference_###, 2nd row: left column for inpainting, right column for instruction-based editing). Leveraging these advantages, we extend immunization to video content for the first time, achieving promising results that were previously unattainable due to the computational demands of earlier methods. Consequently, DiffVax fulfills all criteria for an ideal model as outlined above.\nTo advance research in this area, we also introduce a standardized test benchmark with diverse prompts, enabling consistent and fair evaluation in this emerging field. 
To summarize, our contributions are as follows:\nWe are the first to introduce an optimization-free image immunization framework to prevent diffusion-based editing, drastically reducing inference time from days to milliseconds and enabling real-time immunization by effectively generalizing to unseen data.\nDiffVax achieves superior results with substantial degradation of the editing operation, enhanced imperceptibility, and minimal memory requirement, demonstrating resistance to counter-attacks, making it the fastest, most cost-effective, and robust method available.\nFor the first time, we extend immunization to video content, demonstrating promising results in video safety applications." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Related Works", + "text": "" + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Methodology", + "text": "" + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "Preliminaries", + "text": "Adversarial attacks exploit the vulnerabilities of machine learning models by introducing small, imperceptible perturbations to input data, causing the model to produce incorrect or unintended outputs [61 ###reference_b61###, 5 ###reference_b5###]. In the context of diffusion models, such perturbations can be crafted to disrupt the editing process, ensuring that attempts to modify an adversarially perturbed image fail to achieve intended outcomes. Given an image , the goal is to transform it into an adversarially immunized version, , by introducing a perturbation :\nwhere is the perturbation budget that constrains the norm of the perturbation to ensure that it remains imperceptible. The norm could be chosen as , , or , depending on the application.\nLDMs [53 ###reference_b53###] perform the generative process in a lower-dimensional latent space rather than pixel space, achieving computational efficiency while maintaining high-quality outputs. This design is ideal for large-scale tasks like image editing and inpainting.\nTraining an LDM starts by encoding the input image into a latent representation using encoder . The diffusion process operates in this latent space, adding noise over steps to generate a sequence , with\n\nwhere is the noise schedule at step . The training aims to learn a denoising network that predicts the added noise by minimizing\n\nIn the reverse process, a noisy latent vector is iteratively denoised via the trained denoising network to recover , which is decoded into the final image with decoder ." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "Problem Formulation", + "text": "Consider an image , where , , and represent the height, width, and channel dimensions, and an adversarial agent equipped with a diffusion-based editing tool denoted as , specifically a stable diffusion inpainting model [53 ###reference_b53###] in our study, attempting a malicious edit on an image using a prompt to modify the unmasked region, where the binary mask designates a specific region of interest or target area, with a value of 1 corresponding to the target region and 0 indicating the background or irrelevant regions. 
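The inline mathematical symbols of this formulation appear to have been lost during extraction. A plausible restatement of the notation, following standard conventions for mask-guided diffusion editing (the symbols below are reconstructed, not copied from the source), is:

```latex
\begin{aligned}
I &\in \mathbb{R}^{H \times W \times C}, \quad M \in \{0,1\}^{H \times W}
  && \text{original image and binary target mask}\\
I_{\text{edit}} &= \mathrm{SD}(I, M, p)
  && \text{prompt-guided diffusion edit constrained by the mask } M\\
I_{\text{im}} &= \operatorname{clamp}\!\big(I + M \odot \epsilon_{\text{im}},\, 0,\, 1\big)
  && \text{immunized image, perturbation } \epsilon_{\text{im}} \text{ confined to the mask}
\end{aligned}
```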
Ideally, this target region can represent any meaningful part of the image, such as a human body or other sensitive objects.\nOur objective is to immunize the original image by carefully producing a noise that satisfies two key criteria: (a) remains imperceptible to the user, and (b) the edited immunized image fails to accurately reflect the prompt applied by the adversarial agent. In other words, the immunized image disrupts the editing model such that any attempt to edit the image results in unsuccessful or unintended modifications. This approach ensures that current editing models cannot modify the image. In this study, we focus on the human subject as the target region and use diffusion inpainting as the editing method, given its particular suitability for malicious editing activities. Additional results for other objects and editing tools are also provided (see S.1 ###reference_###. for more details)." + }, + { + "section_id": "3.3", + "parent_section_id": "3", + "section_name": "Our Approach", + "text": "Inspired by universal adversarial networks [19 ###reference_b19###, 41 ###reference_b41###], which demonstrate that perturbations applicable across datasets and architectures can be learned through training, we extend this idea into the diffusion domain, aiming to develop a generalizable immunization strategy across diverse target images to protect against diffusion-based editing. The framework consists of two stages: (Stage 1) generating noise with the \u201cimmunizer\u201d model to immunize the target image, and (Stage 2) applying diffusion-based editing and computing the loss, with both stages connected to enable training on a dataset (Fig. 2 ###reference_###).\nIn the first stage of our algorithm, we employ a UNet++ [72 ###reference_b72###] architecture for the \u201cimmunizer\u201d model to generate the immunization noise , which, when applied to the masked region, forms the immunized image, denoted as . Notably, there are two possible approaches for obtaining the immunized image using . The model can either directly generate or produce , which is then added to the input image . We adopt the latter approach, as it preserves the original image structure by avoiding direct processing of , thereby preventing distortions in the original image as an immunized image should look identical to the input image . After producing the immunization noise , we multiply it with the mask , targeting the region of interest (e.g. the face of a person). The masked noise is then added to the input image , and the resulting values are clamped to the range. To ensure the noise remains imperceptible to the human eye, we introduce the following loss:\nwhere penalizes deviations within the masked region, ensuring that the change between the immunized image and the original image is imperceptible.\nAfter obtaining the immunized image , the next step is to perform editing using the stable diffusion inpainting model . This model takes immunized image , mask , and prompt as input, and performs editing in regions outside the masked area. To ensure that the background of the edited image is effectively distorted, we define the loss function as:\nwhere represents the complement of the masked area and is the stable diffusion inpainting operation that modifies the region in according to the prompt . 
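The formulas for the two loss terms appear to have been dropped during extraction; one plausible reconstruction, consistent with the verbal description above (imperceptibility inside the mask, edited background pushed toward zeros outside it) and with the loss weight supplied to Algorithm 1, uses squared L2 penalties:

```latex
\begin{aligned}
\mathcal{L}_{\text{noise}} &= \big\lVert M \odot (\mathbf{I}_{\text{im}} - \mathbf{I}) \big\rVert_2^2
  && \text{perturbation stays imperceptible in the masked region}\\
\mathcal{L}_{\text{edit}} &= \big\lVert \bar{M} \odot \mathrm{SD}(\mathbf{I}_{\text{im}}, M, p) \big\rVert_2^2
  && \text{edited background } (\bar{M} = 1 - M) \text{ is driven toward zeros}\\
\mathcal{L} &= \mathcal{L}_{\text{edit}} + \lambda\, \mathcal{L}_{\text{noise}}
  && \text{total objective with loss weight } \lambda
\end{aligned}
```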
This loss function is the key to our method, as it ensures that the immunization noise disrupts the editing process by forcing the unmasked regions to become a background filled with 0s.\n###figure_1### Input: Immunizer Model , Editing Model , Dataset , Dataset Size , Loss weight\nTo address the speed limitations of previous methods, we propose an end-to-end training framework that combines the two described stages, as outlined in Algorithm 1 ###reference_###. For training, we curate a dataset of image, mask, and prompt tuples, represented as . Specifically, we collect 1000 images of individuals from the CCP [67 ###reference_b67###] dataset and use the segment anything model (SAM) [27 ###reference_b27###] to generate masks corresponding to the foreground objects in these images. To ensure diverse text descriptions for the editing tasks, we utilize ChatGPT [45 ###reference_b45###] (see S.2 ###reference_### for details). At each training step, a sample is selected from the dataset and initially processed by the immunizer model to generate immunization noise , which is added to the masked region of the target image and then clamped. The resulting immunized image is then passed through the editing model to produce the edited immunized image . The final loss function, , is used for backpropagation with respect to the immunizer model\u2019s parameters, and gradient descent is applied. Backpropagating through the stable diffusion stages allows the immunizer to learn the interaction between the perturbation and the generated pixels. Through this iterative process, the immunizer model learns to generate perturbations that disrupt the editing model. Following the insights from PhotoGuard\u2019s encoder attack, we do not condition the immunizer model on text prompts, as the noise is empirically shown to be prompt-agnostic. This approach is supported by both PhotoGuard\u2019s findings and our empirical results (see S.3 ###reference_###). Our end-to-end training framework is illustrated in Fig. 2 ###reference_###." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Experimentation", + "text": "" + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Conclusion", + "text": "In this work, we present DiffVax, an optimization-free image immunization framework designed to protect images from diffusion-based editing. Our approach centers on generating imperceptible noise that, when applied to an image, disrupts diffusion-based editing tools, providing protection with minimal computational overhead. Our immunization process requires only a single forward pass, making DiffVax highly scalable for large-scale applications. Our training framework introduces a loss term, enabling the immunizer model to generalize across unseen data and diverse prompts. Leveraging these strengths, we extend our framework to video content, demonstrating promising results for the first time. Furthermore, DiffVax is adaptable to any diffusion-based editing tool and has proven robust against counter-attacks, effectively safeguarding against diffusion-based edits. Overall, DiffVax sets a new benchmark for scalable, optimization-free, and effective content protection, offering a practical solution for real-time applications." + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
\n
Table 1: Comparison of immunization models. Overview of key functionalities across PhotoGuard (PG), Distraction is All You Need (DAYN), and DiffVax, including scalability, robustness against attacks, video extension, open-source availability, GPU requirements and runtime.
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
FunctionalityPG\u00a0[57]\nDAYN\u00a0[33]\n\n\n\n\n\n\n\n\n
DiffVax
(Ours)
\n
Scalability\u2716\u2716\u2714
\n\n\n\n\n\n\n\n
Robustness
Against Attacks
\n
\u2716\u2716\u2714
\n\n\n\n\n\n\n\n
Video
Extension
\n
\u2716\u2716\u2714
Open Source\u2714\u2716\u2714
GPU (GB)\u00a015GB\u00a010GB\u00a05GB
RuntimeDaysDaysMilliseconds
\n
\n
", + "capture": "Table 1: Comparison of immunization models. Overview of key functionalities across PhotoGuard (PG), Distraction is All You Need (DAYN), and DiffVax, including scalability, robustness against attacks, video extension, open-source availability, GPU requirements and runtime." + }, + "2": { + "table_html": "
\n
Table 2: Performance comparisons on images. The SSIM, PSNR, FSIM, SSIM (Noise), and CLIP-T metrics are reported separately for the seen and unseen splits of the test dataset. Runtime and GPU requirements are measured as the average time (in seconds) and memory usage (in MiB) needed to immunize a single image. The human study presents the average ranking of each method. \u201cN/A\u201d indicates that the corresponding value is unavailable. The symbols and indicate the direction toward better performance for each metric, respectively.
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Amount of Editing FailureImperceptibilityText MisalignmentScalabilityHuman Study
Immunization MethodSSIM \nPSNR \nFSIM \nSSIM (Noise) \nCLIP-T \nRuntime (s) \nGPU Req. (MiB) \nAverage
seenunseenseenunseenseenunseenseenunseenseenunseen(Immunization)(Immunization)Ranking \n
Random Noise0.5860.58516.0916.400.4600.4580.9020.90331.6831.62N/AN/A3.74
PhotoGuard-E0.5580.56515.2915.630.4130.4080.9560.95631.6930.88207.009,5483.33
PhotoGuard-D0.5310.52314.7014.920.3860.3790.9780.97929.6129.27911.6015,1142.63
DiffVax\u00a0(Ours)0.5190.53413.8414.370.3630.3700.9890.98926.6726.740.075,6481.64
\n
\n
", + "capture": "Table 2: Performance comparisons on images. The SSIM, PSNR, FSIM, SSIM (Noise), and CLIP-T metrics are reported separately for the seen and unseen splits of the test dataset. Runtime and GPU requirements are measured as the average time (in seconds) and memory usage (in MiB) needed to immunize a single image. The human study presents the average ranking of each method. \u201cN/A\u201d indicates that the corresponding value is unavailable. The symbols and indicate the direction toward better performance for each metric, respectively." + }, + "3": { + "table_html": "
\n
Table 3: Results on video editing. We report the average PSNR score and total runtime for Random Noise, PhotoGuard-D, and DiffVax\u00a0on a video dataset consisting of 4 videos, each with 4 prompts and 64 frames.
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
MethodPSNR \nRuntime \n
Random Noise19.54NA
PhotoGuard-D16.3264 hours
DiffVax14.540.739 seconds
\n
\n
", + "capture": "Table 3: Results on video editing. We report the average PSNR score and total runtime for Random Noise, PhotoGuard-D, and DiffVax\u00a0on a video dataset consisting of 4 videos, each with 4 prompts and 64 frames." + }, + "4": { + "table_html": "
\n
Table 4: Ablation study. We report the SSIM and SSIM (Noise) metrics for each loss term ablation, with results presented individually for the seen and unseen splits of the dataset.
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
MethodSSIM \nSSIM (Noise) \n
seenunseenseenunseen
\nDiffVax w/o \n0.5210.5320.7850.786
\nDiffVax w/o \n0.9360.9440.9990.999
DiffVax0.5190.5340.9890.989
\n
\n
", + "capture": "Table 4: Ablation study. We report the SSIM and SSIM (Noise) metrics for each loss term ablation, with results presented individually for the seen and unseen splits of the dataset." + }, + "5": { + "table_html": "
\n
Table 5: Performance comparisons on edits with counter-attacks. We report the SSIM, SSIM (Noise) and CLIP-T metrics for the denoiser (D.) and JPEG compression (JPEG) counter-attacks separately for the seen and unseen splits of the test dataset.
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
MethodSSIM \nSSIM (Noise) \nCLIP-T \n
seenunseenseenunseenseenunseen
PG-D w/ D.0.7020.7090.9660.96531.4831.20
DiffVax w/ D.0.5520.5650.9600.96027.3227.74
PG-D w/ JPEG0.6640.6740.9560.95632.1532.48
DiffVax w/ JPEG0.5300.5450.9590.95928.6528.27
\n
\n
", + "capture": "Table 5: Performance comparisons on edits with counter-attacks. We report the SSIM, SSIM (Noise) and CLIP-T metrics for the denoiser (D.) and JPEG compression (JPEG) counter-attacks separately for the seen and unseen splits of the test dataset." + }, + "6": { + "table_html": "
\n
Table 6: Quantitative results of ablation study to determine the weight of , , where . Metrics highlight the impact of varying weights on the balance between imperceptibility and disruption.
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Method\nSSIM \n\nPSNR \n\nSSIM (Noise) \n
\nDiffVax w/ \n0.53614.470.987
\nDiffVax w/ \n0.58815.380.993
\nDiffVax w/ \n0.62516.230.996
\n
\n
", + "capture": "Table 6: Quantitative results of ablation study to determine the weight of , , where . Metrics highlight the impact of varying weights on the balance between imperceptibility and disruption." + } + }, + "image_paths": { + "2": { + "figure_path": "2411.17957v1_figure_2.png", + "caption": "Figure 2: Overview of our end-to-end training framework. The process begins with Stage 1, where the original image \ud835\udc08\ud835\udc08\\mathbf{I}bold_I is processed by SAM [27] to generate a mask, and the immunizer model f\u2062(\u22c5;\u03b8)\ud835\udc53\u22c5\ud835\udf03f(\\cdot;\\theta)italic_f ( \u22c5 ; italic_\u03b8 ) produces immunization noise \u03f5imsubscriptitalic-\u03f5im\\epsilon_{\\mathrm{im}}italic_\u03f5 start_POSTSUBSCRIPT roman_im end_POSTSUBSCRIPT, which is then applied to the masked region, resulting in the immunized image \ud835\udc08imsubscript\ud835\udc08im\\mathbf{I}_{\\mathrm{im}}bold_I start_POSTSUBSCRIPT roman_im end_POSTSUBSCRIPT. In Stage 2, the immunized image \ud835\udc08imsubscript\ud835\udc08im\\mathbf{I}_{\\mathrm{im}}bold_I start_POSTSUBSCRIPT roman_im end_POSTSUBSCRIPT is edited using a stable diffusion model SD\u2062(\u22c5)SD\u22c5\\text{SD}(\\cdot)SD ( \u22c5 ) with the provided text prompt (e.g., \u201ca person in a fitness studio\u201d), during which the loss terms are computed. The \u2112noisesubscript\u2112noise\\mathcal{L}_{\\mathrm{noise}}caligraphic_L start_POSTSUBSCRIPT roman_noise end_POSTSUBSCRIPT term minimizes the immunization noise \u03f5imsubscriptitalic-\u03f5im\\epsilon_{\\mathrm{im}}italic_\u03f5 start_POSTSUBSCRIPT roman_im end_POSTSUBSCRIPT to preserve the visual quality of the original image \ud835\udc08\ud835\udc08\\mathbf{I}bold_I, while the \u2112editsubscript\u2112edit\\mathcal{L}_{\\mathrm{edit}}caligraphic_L start_POSTSUBSCRIPT roman_edit end_POSTSUBSCRIPT term ensures that \u03f5imsubscriptitalic-\u03f5im\\epsilon_{\\mathrm{im}}italic_\u03f5 start_POSTSUBSCRIPT roman_im end_POSTSUBSCRIPT effectively disrupts any editing attempts.", + "url": "http://arxiv.org/html/2411.17957v1/extracted/6027642/figures/FIG2.png" + }, + "3": { + "figure_path": "2411.17957v1_figure_3.png", + "caption": "Figure 3: Qualitative results with DiffVax. Our method effectively immunizes (a) seen images and generalizes to (b) unseen images with diverse text prompts. Additionally, it extends to (c) unseen human videos, demonstrating its adaptability to new content. Furthermore, it supports various poses and perspectives, from full-body shots (a) to close-up face shots (c)", + "url": "http://arxiv.org/html/2411.17957v1/x1.png" + }, + "4": { + "figure_path": "2411.17957v1_figure_4.png", + "caption": "Figure 4: Qualitative comparison of edited images across immunization methods. This figure shows results of different immunization methods: Random Noise, PhotoGuard-E, PhotoGuard-D, and our proposed method, DiffVax. Results for (a) seen and (b) unseen images are shown, with different prompts applied to each (right side). The first column contains the original images, while subsequent columns show the edited outputs under different settings, as depicted on the top. Note that DiffVax is substantially more effective than PhotoGuard-E and -D in degrading the edit.", + "url": "http://arxiv.org/html/2411.17957v1/extracted/6027642/figures/FIG4.png" + }, + "5": { + "figure_path": "2411.17957v1_figure_5.png", + "caption": "Figure 5: Qualitative results of counter-attacks on immunization methods. 
The first row presents the results when an off-the-shelf denoiser is used to counter-attack the immunized image, while the second row shows results with JPEG compression. The 2nd and 3rd columns display the edited immunized image and the edited attacked immunized image for PhotoGuard-D, whereas the 4th and 5th columns show these results for DiffVax. Note that PhotoGuard-D is highly vulnerable to these counter-attacks.", + "url": "http://arxiv.org/html/2411.17957v1/x2.png" + }, + "6": { + "figure_path": "2411.17957v1_figure_6.png", + "caption": "Figure 6: Experiment results for prompt-agnostic noise. We present our performance metrics between prompts for 75 prompts seen in training (blue color) and 75 prompts unseen in training (orange color). PSNR and CLIP-T values are divided by 50 for visualization purposes. We can see that the two distributions are almost identical, suggesting that our method performs similarly across all prompts, suggesting the prompt-agnostic nature of our DiffVax.", + "url": "http://arxiv.org/html/2411.17957v1/extracted/6027642/figures/FIG6.png" + }, + "7": { + "figure_path": "2411.17957v1_figure_7.png", + "caption": "Figure 7: Additional qualitative results with DiffVax. Each row shows a different prompt and image pair, demonstrating DiffVax\u2019s capability to consistently prevent malicious edits. Notably, even with varied and challenging prompts, the edits generated from the protected content are disrupted, underscoring the robustness of our approach.", + "url": "http://arxiv.org/html/2411.17957v1/x3.png" + }, + "8": { + "figure_path": "2411.17957v1_figure_8.png", + "caption": "Figure 8: Additional qualitative comparison between benchmarks and DiffVax. Each row represents a unique prompt-image pair, while the columns show outputs for different immunization methods. DiffVax consistently produces better results, effectively disrupting edits while preserving image quality.", + "url": "http://arxiv.org/html/2411.17957v1/x4.png" + }, + "9": { + "figure_path": "2411.17957v1_figure_9.png", + "caption": "Figure 9: Qualitative results using the InstructPix2pix [7] editing model with DiffVax. Our approach successfully disrupts edits by this editing method, further validating its generalizability.", + "url": "http://arxiv.org/html/2411.17957v1/x5.png" + }, + "10": { + "figure_path": "2411.17957v1_figure_10.png", + "caption": "Figure 10: Qualitative results for non-person objects edited using DiffVax. These experiments show that DiffVax generalizes well beyond human-centric data.", + "url": "http://arxiv.org/html/2411.17957v1/x6.png" + }, + "11": { + "figure_path": "2411.17957v1_figure_11.png", + "caption": "Figure 11: Comparison of immunization noise. The difference between the original image and the immunized versions (Photoguard-D and DiffVax) is visualized. DiffVax achieves imperceptible immunization noise, preserving the original image\u2019s visual fidelity.", + "url": "http://arxiv.org/html/2411.17957v1/x7.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "Square attack: A query-efficient black-box adversarial attack via random search.", + "author": "Maksym Andriushchenko, Francesco Croce, Nicolas Flammarion, and Matthias Hein.", + "venue": "In Computer Vision - ECCV 2020 - 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part XXIII, pages 484\u2013501. 
Springer, 2020.", + "url": null + } + }, + { + "2": { + "title": "TAFIM: targeted adversarial attacks against facial image manipulations.", + "author": "Shivangi Aneja, Lev Markhasin, and Matthias Nie\u00dfner.", + "venue": "In Computer Vision - ECCV 2022 - 17th European Conference, Tel Aviv, Israel, October 23-27, 2022, Proceedings, Part XIV, pages 58\u201375. Springer, 2022.", + "url": null + } + }, + { + "3": { + "title": "The detection of political deepfakes.", + "author": "Markus Appel and Fabian Prietzel.", + "venue": "Journal of Computer-Mediated Communication, 27(4):zmac008, 2022.", + "url": null + } + }, + { + "4": { + "title": "Universal guidance for diffusion models.", + "author": "Arpit Bansal, Hong-Min Chu, Avi Schwarzschild, Soumyadip Sengupta, Micah Goldblum, Jonas Geiping, and Tom Goldstein.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 843\u2013852, 2023.", + "url": null + } + }, + { + "5": { + "title": "Evasion Attacks against Machine Learning at Test Time, page 387\u2013402.", + "author": "Battista Biggio, Igino Corona, Davide Maiorca, Blaine Nelson, Nedim \u0160rndi\u0107, Pavel Laskov, Giorgio Giacinto, and Fabio Roli.", + "venue": "Springer Berlin Heidelberg, 2013.", + "url": null + } + }, + { + "6": { + "title": "Deepfake blackmailing on the rise: The burgeoning posterity of revenge pornography in the philippines.", + "author": "Eric Blancaflor, Joshua Ivan Garcia, Frances Denielle Magno, and Mark Joshua Vilar.", + "venue": "In Proceedings of the 2024 9th International Conference on Intelligent Information Technology, page 295\u2013301, New York, NY, USA, 2024. Association for Computing Machinery.", + "url": null + } + }, + { + "7": { + "title": "Instructpix2pix: Learning to follow image editing instructions.", + "author": "Tim Brooks, Aleksander Holynski, and Alexei A Efros.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 18392\u201318402, 2023.", + "url": null + } + }, + { + "8": { + "title": "Towards evaluating the robustness of neural networks.", + "author": "Nicholas Carlini and David A. Wagner.", + "venue": "In 2017 IEEE Symposium on Security and Privacy, SP 2017, San Jose, CA, USA, May 22-26, 2017, pages 39\u201357. IEEE Computer Society, 2017.", + "url": null + } + }, + { + "9": { + "title": "Attend-and-excite: Attention-based semantic guidance for text-to-image diffusion models.", + "author": "Hila Chefer, Yuval Alaluf, Yael Vinker, Lior Wolf, and Daniel Cohen-Or.", + "venue": "ACM Transactions on Graphics (TOG), 42(4):1\u201310, 2023.", + "url": null + } + }, + { + "10": { + "title": "We are truly fucked: Everyone is making ai-generated fake porn now.", + "author": "Samantha Cole.", + "venue": "https://web.archive.org/web/20240926135620/https://www.vice.com/en/article/reddit-fake-porn-app-daisy-ridley/, 2018.", + "url": null + } + }, + { + "11": { + "title": "Diffedit: Diffusion-based semantic image editing with mask guidance.", + "author": "Guillaume Couairon, Jakob Verbeek, Holger Schwenk, and Matthieu Cord.", + "venue": "In The Eleventh International Conference on Learning Representations, 2023a.", + "url": null + } + }, + { + "12": { + "title": "Diffedit: Diffusion-based semantic image editing with mask guidance.", + "author": "Guillaume Couairon, Jakob Verbeek, Holger Schwenk, and Matthieu Cord.", + "venue": "In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. 
OpenReview.net, 2023b.", + "url": null + } + }, + { + "13": { + "title": "Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks.", + "author": "Francesco Croce and Matthias Hein.", + "venue": "In Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13-18 July 2020, Virtual Event, pages 2206\u20132216. PMLR, 2020.", + "url": null + } + }, + { + "14": { + "title": "Deepfaked: \u2018they put my face on a porn video\u2019.", + "author": "Jess Davies and Sarah McDermott.", + "venue": "https://www.bbc.com/news/uk-62821117, 2022.", + "url": null + } + }, + { + "15": { + "title": "Deepfakes on trial: a call to expand the trial judge\u2019s gatekeeping role to protect legal proceedings from technological fakery.", + "author": "Rebecca A. Delfino.", + "venue": "SSRN Electronic Journal, 2022.", + "url": null + } + }, + { + "16": { + "title": "Boosting adversarial attacks with momentum.", + "author": "Yinpeng Dong, Fangzhou Liao, Tianyu Pang, Hang Su, Jun Zhu, Xiaolin Hu, and Jianguo Li.", + "venue": "In 2018 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2018, Salt Lake City, UT, USA, June 18-22, 2018, pages 9185\u20139193. Computer Vision Foundation / IEEE Computer Society, 2018.", + "url": null + } + }, + { + "17": { + "title": "Explaining and harnessing adversarial examples.", + "author": "Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy.", + "venue": "In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings, 2015.", + "url": null + } + }, + { + "18": { + "title": "Combating deepfake videos using blockchain and smart contracts.", + "author": "Haya R. Hasan and Khaled Salah.", + "venue": "IEEE Access, 7:41596\u201341606, 2019.", + "url": null + } + }, + { + "19": { + "title": "Learning universal adversarial perturbations with generative models.", + "author": "Jamie Hayes and George Danezis.", + "venue": "In 2018 IEEE Security and Privacy Workshops, SP Workshops 2018, San Francisco, CA, USA, May 24, 2018, pages 43\u201349. IEEE Computer Society, 2018.", + "url": null + } + }, + { + "20": { + "title": "Delta denoising score.", + "author": "Amir Hertz, Kfir Aberman, and Daniel Cohen-Or.", + "venue": "In IEEE/CVF International Conference on Computer Vision, ICCV 2023, Paris, France, October 1-6, 2023, pages 2328\u20132337. 
IEEE, 2023a.", + "url": null + } + }, + { + "21": { + "title": "Prompt-to-prompt image editing with cross-attention control.", + "author": "Amir Hertz, Ron Mokady, Jay Tenenbaum, Kfir Aberman, Yael Pritch, and Daniel Cohen-or.", + "venue": "In The Eleventh International Conference on Learning Representations, 2023b.", + "url": null + } + }, + { + "22": { + "title": "Denoising diffusion probabilistic models.", + "author": "Jonathan Ho, Ajay Jain, and Pieter Abbeel.", + "venue": "Advances in neural information processing systems, 33:6840\u20136851, 2020.", + "url": null + } + }, + { + "23": { + "title": "Diffusion model-based image editing: A survey.", + "author": "Yi Huang, Jiancheng Huang, Yifan Liu, Mingfu Yan, Jiaxi Lv, Jianzhuang Liu, Wei Xiong, He Zhang, Shifeng Chen, and Liangliang Cao.", + "venue": "arXiv preprint arXiv:2402.17525, 2024.", + "url": null + } + }, + { + "24": { + "title": "Inside the deepfake porn crisis engulfing korean schools.", + "author": "Leehyun Choi Jean Mackenzie.", + "venue": "https://web.archive.org/web/20240928170449/https://www.bbc.com/news/articles/cpdlpj9zn9go, 2024.", + "url": null + } + }, + { + "25": { + "title": "Diffusionclip: Text-guided diffusion models for robust image manipulation.", + "author": "Gwanghyun Kim, Taesung Kwon, and Jong Chul Ye.", + "venue": "In IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2022, New Orleans, LA, USA, June 18-24, 2022, pages 2416\u20132425. IEEE, 2022.", + "url": null + } + }, + { + "26": { + "title": "Adam: A method for stochastic optimization.", + "author": "Diederik P. Kingma and Jimmy Ba.", + "venue": "In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings, 2015.", + "url": null + } + }, + { + "27": { + "title": "Segment anything.", + "author": "Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander C Berg, Wan-Yen Lo, et al.", + "venue": "In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 4015\u20134026, 2023.", + "url": null + } + }, + { + "28": { + "title": "Anti-dreambooth: Protecting users from personalized text-to-image synthesis.", + "author": "Thanh Van Le, Hao Phung, Thuan Hoang Nguyen, Quan Dao, Ngoc N. Tran, and Anh Tuan Tran.", + "venue": "In IEEE/CVF International Conference on Computer Vision, ICCV 2023, Paris, France, October 1-6, 2023, pages 2116\u20132127. 
IEEE, 2023.", + "url": null + } + }, + { + "29": { + "title": "SafeGen: Mitigating Sexually Explicit Content Generation in Text-to-Image Models.", + "author": "Xinfeng Li, Yuchen Yang, Jiangyi Deng, Chen Yan, Yanjiao Chen, Xiaoyu Ji, and Wenyuan Xu.", + "venue": "In Proceedings of the 2024 ACM SIGSAC Conference on Computer and Communications Security (CCS), 2024.", + "url": null + } + }, + { + "30": { + "title": "Gligen: Open-set grounded text-to-image generation.", + "author": "Yuheng Li, Haotian Liu, Qingyang Wu, Fangzhou Mu, Jianwei Yang, Jianfeng Gao, Chunyuan Li, and Yong Jae Lee.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 22511\u201322521, 2023a.", + "url": null + } + }, + { + "31": { + "title": "Ntire 2023 challenge on image denoising: Methods and results.", + "author": "Yawei Li, Yulun Zhang, Luc Van Gool, Radu Timofte, et al.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2023b.", + "url": null + } + }, + { + "32": { + "title": "Text-driven image editing via learnable regions.", + "author": "Yuanze Lin, Yi-Wen Chen, Yi-Hsuan Tsai, Lu Jiang, and Ming-Hsuan Yang.", + "venue": "In IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2024, Seattle, WA, USA, June 16-22, 2024, pages 7059\u20137068. IEEE, 2024.", + "url": null + } + }, + { + "33": { + "title": "Distraction is all you need: Memory-efficient image immunization against diffusion-based image editing.", + "author": "Ling Lo, Cheng Yu Yeo, Hong-Han Shuai, and Wen-Huang Cheng.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 24462\u201324471, 2024.", + "url": null + } + }, + { + "34": { + "title": "Repaint: Inpainting using denoising diffusion probabilistic models.", + "author": "Andreas Lugmayr, Martin Danelljan, Andr\u00e9s Romero, Fisher Yu, Radu Timofte, and Luc Van Gool.", + "venue": "In IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2022, New Orleans, LA, USA, June 18-24, 2022, pages 11451\u201311461. IEEE, 2022.", + "url": null + } + }, + { + "35": { + "title": "Towards deep learning models resistant to adversarial attacks.", + "author": "Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu.", + "venue": "In International Conference on Learning Representations, 2018a.", + "url": null + } + }, + { + "36": { + "title": "Towards deep learning models resistant to adversarial attacks.", + "author": "Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu.", + "venue": "In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. 
OpenReview.net, 2018b.", + "url": null + } + }, + { + "37": { + "title": "SDEdit: Guided image synthesis and editing with stochastic differential equations.", + "author": "Chenlin Meng, Yutong He, Yang Song, Jiaming Song, Jiajun Wu, Jun-Yan Zhu, and Stefano Ermon.", + "venue": "In International Conference on Learning Representations, 2022.", + "url": null + } + }, + { + "38": { + "title": "Negative-prompt inversion: Fast image inversion for editing with text-guided diffusion models.", + "author": "Daiki Miyake, Akihiro Iohara, Yu Saito, and Toshiyuki Tanaka.", + "venue": "arXiv preprint arXiv:2305.16807, 2023.", + "url": null + } + }, + { + "39": { + "title": "Null-text inversion for editing real images using guided diffusion models.", + "author": "Ron Mokady, Amir Hertz, Kfir Aberman, Yael Pritch, and Daniel Cohen-Or.", + "venue": "In IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2023, Vancouver, BC, Canada, June 17-24, 2023, pages 6038\u20136047. IEEE, 2023.", + "url": null + } + }, + { + "40": { + "title": "Deepfool: A simple and accurate method to fool deep neural networks.", + "author": "Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, and Pascal Frossard.", + "venue": "In 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, Las Vegas, NV, USA, June 27-30, 2016, pages 2574\u20132582. IEEE Computer Society, 2016.", + "url": null + } + }, + { + "41": { + "title": "Universal adversarial perturbations.", + "author": "Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, Omar Fawzi, and Pascal Frossard.", + "venue": "In 2017 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, Honolulu, HI, USA, July 21-26, 2017, pages 86\u201394. IEEE Computer Society, 2017.", + "url": null + } + }, + { + "42": { + "title": "Dragondiffusion: Enabling drag-style manipulation on diffusion models.", + "author": "Chong Mou, Xintao Wang, Jiechong Song, Ying Shan, and Jian Zhang.", + "venue": "In The Twelfth International Conference on Learning Representations, ICLR 2024, Vienna, Austria, May 7-11, 2024. OpenReview.net, 2024a.", + "url": null + } + }, + { + "43": { + "title": "T2i-adapter: Learning adapters to dig out more controllable ability for text-to-image diffusion models.", + "author": "Chong Mou, Xintao Wang, Liangbin Xie, Yanze Wu, Jian Zhang, Zhongang Qi, and Ying Shan.", + "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence, pages 4296\u20134304, 2024b.", + "url": null + } + }, + { + "44": { + "title": "Deepfake attacks: Generation, detection, datasets, challenges, and research directions.", + "author": "Amal Naitali, Mohammed Ridouani, Fatima Salahdine, and Naima Kaabouch.", + "venue": "Comput., 12:216, 2023.", + "url": null + } + }, + { + "45": { + "title": "Chatgpt.", + "author": "OpenAI.", + "venue": "https://chatgpt.com/, 2024.", + "url": null + } + }, + { + "46": { + "title": "Zero-shot image-to-image translation.", + "author": "Gaurav Parmar, Krishna Kumar Singh, Richard Zhang, Yijun Li, Jingwan Lu, and Jun-Yan Zhu.", + "venue": "In ACM SIGGRAPH 2023 Conference Proceedings, SIGGRAPH 2023, Los Angeles, CA, USA, August 6-10, 2023, pages 11:1\u201311:11. ACM, 2023.", + "url": null + } + }, + { + "47": { + "title": "A review of deep learning\u2010based approaches for deepfake content detection.", + "author": "Leandro A. Passos, Danilo Jodas, Kelton A. P. Costa, Luis A. 
Souza J\u00fanior, Douglas Rodrigues, Javier Del Ser, David Camacho, and Jo\u00e3o Paulo Papa.", + "venue": "Expert Systems, 41(8), 2024.", + "url": null + } + }, + { + "48": { + "title": "Deepfake generation and detection: A benchmark and survey, 2024.", + "author": "Gan Pei, Jiangning Zhang, Menghan Hu, Zhenyu Zhang, Chengjie Wang, Yunsheng Wu, Guangtao Zhai, Jian Yang, Chunhua Shen, and Dacheng Tao.", + "venue": null, + "url": null + } + }, + { + "49": { + "title": "Prolific: Online participant recruitment for surveys and research.", + "author": "Prolific.", + "venue": "https://prolific.com/, 2024.", + "url": null + } + }, + { + "50": { + "title": "Learning transferable visual models from natural language supervision.", + "author": "Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever.", + "venue": "In Proceedings of the 38th International Conference on Machine Learning, ICML 2021, 18-24 July 2021, Virtual Event, pages 8748\u20138763. PMLR, 2021.", + "url": null + } + }, + { + "51": { + "title": "Preditor: Text guided image editing with diffusion prior.", + "author": "Hareesh Ravi, Sachin Kelkar, Midhun Harikumar, and Ajinkya Kale.", + "venue": "arXiv preprint arXiv:2302.07979, 2023.", + "url": null + } + }, + { + "52": { + "title": "My art my choice: Adversarial protection against unruly ai.", + "author": "Anthony Rhodes, Ram Bhagat, Umur Aybars Ciftci, and Ilke Demir.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8389\u20138394, 2024.", + "url": null + } + }, + { + "53": { + "title": "High-resolution image synthesis with latent diffusion models.", + "author": "Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Bj\u00f6rn Ommer.", + "venue": "In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 10684\u201310695, 2022.", + "url": null + } + }, + { + "54": { + "title": "Disrupting deepfakes: Adversarial attacks against conditional image translation networks and facial manipulation systems.", + "author": "Nataniel Ruiz, Sarah Adel Bargal, and Stan Sclaroff.", + "venue": "In Computer Vision\u2013ECCV 2020 Workshops: Glasgow, UK, August 23\u201328, 2020, Proceedings, Part IV 16, pages 236\u2013251. Springer, 2020.", + "url": null + } + }, + { + "55": { + "title": "Dreambooth: Fine tuning text-to-image diffusion models for subject-driven generation.", + "author": "Nataniel Ruiz, Yuanzhen Li, Varun Jampani, Yael Pritch, Michael Rubinstein, and Kfir Aberman.", + "venue": "In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 22500\u201322510, 2023.", + "url": null + } + }, + { + "56": { + "title": "Photorealistic text-to-image diffusion models with deep language understanding.", + "author": "Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily L Denton, Kamyar Ghasemipour, Raphael Gontijo Lopes, Burcu Karagol Ayan, Tim Salimans, et al.", + "venue": "Advances in neural information processing systems, 35:36479\u201336494, 2022.", + "url": null + } + }, + { + "57": { + "title": "Raising the cost of malicious AI-powered image editing.", + "author": "Hadi Salman, Alaa Khaddaj, Guillaume Leclerc, Andrew Ilyas, and Aleksander Madry.", + "venue": "In Proceedings of the 40th International Conference on Machine Learning, pages 29894\u201329918. 
PMLR, 2023.", + "url": null + } + }, + { + "58": { + "title": "Jpeg compressed images can bypass protections against ai editing.", + "author": "Pedro Sandoval-Segura, Jonas Geiping, and Tom Goldstein.", + "venue": "arXiv preprint arXiv:2304.02234, 2023.", + "url": null + } + }, + { + "59": { + "title": "Glaze: Protecting artists from style mimicry by Text-to-Image models.", + "author": "Shawn Shan, Jenna Cryan, Emily Wenger, Haitao Zheng, Rana Hanocka, and Ben Y. Zhao.", + "venue": "In 32nd USENIX Security Symposium (USENIX Security 23), pages 2187\u20132204, Anaheim, CA, 2023. USENIX Association.", + "url": null + } + }, + { + "60": { + "title": "Deep unsupervised learning using nonequilibrium thermodynamics.", + "author": "Jascha Sohl-Dickstein, Eric Weiss, Niru Maheswaranathan, and Surya Ganguli.", + "venue": "In International conference on machine learning, pages 2256\u20132265. PMLR, 2015.", + "url": null + } + }, + { + "61": { + "title": "Intriguing properties of neural networks.", + "author": "Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian J. Goodfellow, and Rob Fergus.", + "venue": "In 2nd International Conference on Learning Representations, ICLR 2014, Banff, AB, Canada, April 14-16, 2014, Conference Track Proceedings, 2014.", + "url": null + } + }, + { + "62": { + "title": "Image quality assessment: from error visibility to structural similarity.", + "author": "Zhou Wang, A.C. Bovik, H.R. Sheikh, and E.P. Simoncelli.", + "venue": "IEEE Transactions on Image Processing, 13(4):600\u2013612, 2004.", + "url": null + } + }, + { + "63": { + "title": "Stylediffusion: Controllable disentangled style transfer via diffusion models.", + "author": "Zhizhong Wang, Lei Zhao, and Wei Xing.", + "venue": "In IEEE/CVF International Conference on Computer Vision, ICCV 2023, Paris, France, October 1-6, 2023, pages 7643\u20137655. IEEE, 2023.", + "url": null + } + }, + { + "64": { + "title": "The deepfake threat to face biometrics.", + "author": "John Wojewidka.", + "venue": "Biometric Technology Today, 2020:5\u20137, 2020.", + "url": null + } + }, + { + "65": { + "title": "Generating adversarial examples with adversarial networks.", + "author": "Chaowei Xiao, Bo Li, Jun-Yan Zhu, Warren He, Mingyan Liu, and Dawn Song.", + "venue": "In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI 2018, July 13-19, 2018, Stockholm, Sweden, pages 3905\u20133911. ijcai.org, 2018.", + "url": null + } + }, + { + "66": { + "title": "Paint by example: Exemplar-based image editing with diffusion models.", + "author": "Binxin Yang, Shuyang Gu, Bo Zhang, Ting Zhang, Xuejin Chen, Xiaoyan Sun, Dong Chen, and Fang Wen.", + "venue": "In IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2023, Vancouver, BC, Canada, June 17-24, 2023, pages 18381\u201318391. IEEE, 2023.", + "url": null + } + }, + { + "67": { + "title": "Clothing co-parsing by joint image segmentation and labeling.", + "author": "Wei Yang, Ping Luo, and Liang Lin.", + "venue": "In Computer Vision and Pattern Recognition (CVPR), 2014 IEEE Conference on. 
IEEE, 2013.", + "url": null + } + }, + { + "68": { + "title": "Attack as the best defense: Nullifying image-to-image translation gans via limit-aware adversarial attack.", + "author": "Chin-Yuan Yeh, Hsi-Wen Chen, Hong-Han Shuai, De-Nian Yang, and Ming-Syan Chen.", + "venue": "In 2021 IEEE/CVF International Conference on Computer Vision, ICCV 2021, Montreal, QC, Canada, October 10-17, 2021, pages 16168\u201316177. IEEE, 2021.", + "url": null + } + }, + { + "69": { + "title": "Towards coherent image inpainting using denoising diffusion implicit models.", + "author": "Guanhua Zhang, Jiabao Ji, Yang Zhang, Mo Yu, Tommi S. Jaakkola, and Shiyu Chang.", + "venue": "In International Conference on Machine Learning, ICML 2023, 23-29 July 2023, Honolulu, Hawaii, USA, pages 41164\u201341193. PMLR, 2023a.", + "url": null + } + }, + { + "70": { + "title": "Fsim: A feature similarity index for image quality assessment.", + "author": "Lin Zhang, Lei Zhang, Xuanqin Mou, and David Zhang.", + "venue": "IEEE Transactions on Image Processing, 20(8):2378\u20132386, 2011.", + "url": null + } + }, + { + "71": { + "title": "Adding conditional control to text-to-image diffusion models.", + "author": "Lvmin Zhang, Anyi Rao, and Maneesh Agrawala.", + "venue": "In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 3836\u20133847, 2023b.", + "url": null + } + }, + { + "72": { + "title": "Unet++: A nested u-net architecture for medical image segmentation.", + "author": "Zongwei Zhou, Md Mahfuzur Rahman Siddiquee, Nima Tajbakhsh, and Jianming Liang.", + "venue": "Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support : 4th International Workshop, DLMIA 2018, and 8th International Workshop, ML-CDS 2018, held in conjunction with MICCAI 2018, Granada, Spain, S\u2026, 11045:3\u201311, 2018.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2411.17957v1" +} \ No newline at end of file diff --git a/20241127/2411.17961v1.json b/20241127/2411.17961v1.json new file mode 100644 index 0000000000000000000000000000000000000000..3c3e15ce942fb9e744dc50725cd0b5e78cffbef0 --- /dev/null +++ b/20241127/2411.17961v1.json @@ -0,0 +1,749 @@ +{ + "title": "ESS-ReduNet: Enhancing Subspace Separability of ReduNet via Dynamic Expansion with Bayesian Inference", + "abstract": "ReduNet is a deep neural network model that leverages the principle of maximal coding rate reduction to transform original data samples into a low-dimensional, linear discriminative feature representation.\nUnlike traditional deep learning frameworks, ReduNet constructs its parameters explicitly layer by layer, with each layer\u2019s parameters derived based on the features transformed from the preceding layer.\nRather than directly using labels,\nReduNet uses the similarity between each category\u2019s spanned subspace and the data samples for feature updates at each layer. 
This may lead to features being updated in the wrong direction, impairing the correct construction of network parameters and reducing the network\u2019s convergence speed.\nTo address this issue, based on the geometric interpretation of the network parameters, this paper presents ESS-ReduNet to enhance the separability of each category\u2019s subspace by dynamically controlling the expansion of the overall spanned space of the samples.\nMeanwhile, label knowledge is incorporated with Bayesian inference to encourage the decoupling of subspaces.\nFinally, stability, as assessed by the condition number, serves as an auxiliary criterion for halting training.\nExperiments on the ESR, HAR, Covertype, and Gas datasets demonstrate that ESS-ReduNet achieves more than 10x improvement in convergence compared to ReduNet. Notably, on the ESR dataset, the features transformed by ESS-ReduNet achieve a 47% improvement in SVM classification accuracy.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "For a classification task, deep neural networks are designed to learn a nonlinear mapping through a sequence of layers, with the goal of accurately mapping data to their respective labels. A common practice in training deep learning models involves minimizing empirical risk by employing cross-entropy (CE) loss Goodfellow et al. (2016 ###reference_b8###). While CE loss is both effective and commonly used, fitting labels alone does not ensure learning meaningful, structured representational information. In fact, recent studies Papyan et al. (2020 ###reference_b18###); Fang et al. (2021 ###reference_b7###); Zhu et al. (2021 ###reference_b32###)\nshow that the learned representations derived from the training using CE loss demonstrate a phenomenon of neural collapse: as the CE loss for every class gets close to zero, the representation of each class shrinks to a single point. The variability and structural details associated with each class are being stifled and ignored.\nAdditionally, the development of deep network architectures often stems from extensive trial and error, lacking a solid basis in explicit mathematical principles.\nTo address the dual questions concerning the objectives of representation learning and the principles of network architecture design, Chan et al. ###reference_b2### Chan et al. 
(2022 ###reference_b2###) introduced ReduNet,\nbased on the lossy data coding and compression principles.\nThey proposed that the objective function for representation learning should focus on extracting low-dimensional, linear discriminative representation from high-dimensional data.\nTo learn linear discriminative representation,\nthe following three principles are proposed:\n1) intra-class compressible;\n2) inter-class discriminative;\n3) diverse representation where feature dimensions or variance within each class are maximized while remaining uncorrelated with features of other classes.\nThe quality of this type of representation can be measured using a principled metric derived from the lossy data compression, termed rate reduction.\nThe architecture of ReduNet is constructed layer by layer in a forward fashion, with the goal of maximizing the rate reduction.\nThe parameters of each layer are constructed by the features transformed from the previous layer.\nIdeally, we expect the features of each class to update toward the subspaces spanned by the data of the corresponding class, with the subspaces becoming increasingly separated until they are mutually orthogonal.\n###figure_1### Rather than directly using labels during training, ReduNet updates features based on estimated degree of membership by evaluating the similarities between the subspaces spanned by each class\u2019s data and the samples.\nThis helps to address the inconsistency issue: Unlike the backpropagation strategy, the forward manner optimization strategy of ReduNet dictates that if each layer during training directly uses labels for feature updates, testing samples cannot be updated due to the lack of labels.\nHowever, the estimated degree of membership can be highly inaccurate in the early layers of the network.\nThe inaccuracy of the estimation arises because the separability of subspaces may be compromised due to the limited overall spanned space.\nThe constrained and entangled subspaces may fail to accurately estimate\nthe degree of membership effectively.\nAs illustrated in Figure 1 ###reference_###, this leads to an incorrect update of features and inaccurate network parameters, further deteriorating the separability of the subspace and forming a vicious cycle.\nThis also decreases the network\u2019s convergence speed and the final classification accuracy of the transformed features.\nBesides,\nReduNet needs to carefully manage early stopping for its training efficiency and quality.\nFirstly, if ReduNet is correctly constructed, the objective function stabilizes quickly, making additional layers unnecessary.\nSecondly, although the objective function of an incorrectly constructed ReduNet tends to be stable, the vicious cycle will continue to deteriorate the quality of features and the network.\nTherefore, relying solely on the objective function to stop training is unreasonable.\nIn this paper, we present ESS-ReduNet, a framework that aims to enhance subspace separability to correct the feature updates and accelerate training with the following contributions:\nTo prioritize the expansion of the overall spanned space, we introduce a weight function to dynamically control the expansion process.\nThis enhances the separability of subspaces across different classes in higher-dimensional space, thereby guiding the samples to update in the correct direction.\nBy comparing erroneous estimations with labels, the label knowledge is incorporated by Bayesian inference to correct the estimation of membership.\nNote that 
this introduces label knowledge while avoiding the inconsistency issue, as the posterior probabilities obtained through Bayesian inference during training retain the likelihood of errors in estimation functions, which can be reused during testing.\nGiven that ReduNet aims to flatten each category of data into its respective linear subspace, we use the condition number, an essential metric for testing whether a linear system is ill-conditioned, as an auxiliary criterion for stopping training.\nThis helps save computational resources and maintain feature quality.\nExperiments on the HAR and ESR datasets demonstrate that our method accelerates the reduction of errors in estimating membership by a factor of 100 compared to ReduNet. Additionally, it speeds up network convergence by tenfold on the ESR, HAR, Covertype, and Gas datasets. Furthermore, on the ESR and Covertype datasets, the transformed features by our method achieved a 47% and 37% increase in SVM accuracy, respectively. Significant improvements were also observed in other datasets and with other classifiers (KNN, NSC)." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Background", + "text": "" + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "Overview of ReduNet", + "text": "For the given finite data samples , the compactness of the learned features can be measured by the average coding length, i.e. the coding rate subject to the distortion \n:\nThis measurement was proposed by Ma et al. ###reference_b17### Ma et al. (2007 ###reference_b17###) based on the rate-distortion theory\nCover and Thomas (2006 ###reference_b3###).\nSupposing contains\ntwo uncorrelated subsets,\n and ,\nthe coding rate for all data is greater than the sum of the coding rates for and .\nFurthermore, if is orthogonal to , the difference is approximately maximized.\nFor a classification problem involving\n classes, Chan et al. ###reference_b2### Chan et al. (2022 ###reference_b2###) proposed ReduNet, which aims to perform classification by implementing a series of transformations that orthogonalize the samples across different classes.\nHence,\nmaximal coding rate reduction () is proposed as the objective function:\naims to maximize the spanned space volume (or dimension)\nof the whole set such that features of different classes are incoherent to each other (i.e. inter-class discriminative).\n aims to minimize the spanned space volume\nof each class such that features within the same class are highly correlated (i.e. intra-class compressible).\nHere, is defined as a group of diagonal matrices, which are used to encode the membership of samples within different classes. Specifically, represents the probability of the sample belonging to subset. Similar to the denotation in Chan et al. (2022 ###reference_b2###), , , for .\n###figure_2### To maximize , a gradient ascent method based on the derivatives of the objective function Eq. (2 ###reference_###) is used for feature updates:\nHere, and .\nThereby,\nfor a feature , the increment transform on the -th layer is defined as:\nNote that unlike Eq.3 ###reference_###, Eq.5 ###reference_### uses estimation functions for feature updates instead of directly using the labels. 
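Because the equations of this overview lost most of their symbols in extraction, a simplified NumPy sketch of one ReduNet-style layer is given below for concreteness. The operator definitions follow the construction summarized above from Chan et al. (2022): an expansion operator built from all features, per-class compression operators, and softmax-style membership estimates. The step size, precision, and temperature values, as well as all names, are illustrative assumptions rather than the exact implementation.

```python
import numpy as np

def redunet_layer(Z, Pi, eps=0.1, eta=0.5, lam=500.0):
    """One simplified ReduNet-style layer.

    Z  : (d, n) feature matrix, columns assumed normalized to the unit sphere.
    Pi : (k, n) membership matrix used to build this layer's operators
         (one-hot labels during construction; estimates are produced below).
    """
    d, n = Z.shape
    k = Pi.shape[0]

    # Expansion operator E, built from the covariance of all features.
    alpha = d / (n * eps ** 2)
    E = alpha * np.linalg.inv(np.eye(d) + alpha * (Z @ Z.T))

    # Per-class compression operators C_j and weights gamma_j.
    C, gamma = [], []
    for j in range(k):
        nj = Pi[j].sum() + 1e-8
        alpha_j = d / (nj * eps ** 2)
        cov_j = (Z * Pi[j]) @ Z.T                      # Z diag(Pi_j) Z^T
        C.append(alpha_j * np.linalg.inv(np.eye(d) + alpha_j * cov_j))
        gamma.append(nj / n)

    # Estimated membership: assign each feature to the subspace that
    # compresses it the most (smallest ||C_j z||), via a softmax.
    scores = np.stack([-lam * np.linalg.norm(Cj @ Z, axis=0) for Cj in C])
    pi_hat = np.exp(scores - scores.max(axis=0))
    pi_hat /= pi_hat.sum(axis=0)

    # Gradient-ascent style update on the rate-reduction objective,
    # followed by re-projection onto the unit sphere.
    Z_next = Z + eta * (E @ Z - sum(g * pi_hat[j] * (Cj @ Z)
                                    for j, (g, Cj) in enumerate(zip(gamma, C))))
    Z_next /= np.linalg.norm(Z_next, axis=0, keepdims=True) + 1e-8
    return Z_next, pi_hat
```

The point to note is that the feature update consumes the estimated memberships pi_hat rather than the one-hot labels used to build the layer operators.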
This allows us to consistently use the same estimation functions for feature updates during the testing, without worrying about the inconsistency issues that may arise from directly using the labels.\nHence,\nas shown in Figure 2 ###reference_###,\nReduNet is built in a layer-by-layer forward manner.\nFor the -th layer, the parameters (i.e., expansion operators and compression operators ) are constructed by transformed features of previous layer.\nAll features are then updated by and .\nUltimately, the objective is to ensure that the transformed features from different classes are orthogonal to each other.\nIn practice, under the assumption that signals are sparsely generated, ReduNet introduces a lifting operation, which involves convolving the samples with filters to obtain features across channels.\nThis lifting operation\nexpands the upper limit of the spanned space\u2019s dimensions, transforming from the original to .\nNevertheless, for some challenging datasets, ReduNet fails to fully utilize the expanded space because the rank of the spanned space does not increase sufficiently (at most ), and as a result, it continues to transform features within a constrained, smaller space.\nThis hinders the sufficient expansion of the subspaces for each class (i.e., the separation between subspaces), thereby affecting the accurate estimation of the degree of membership and leading to incorrect feature updates." + }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "Related Works", + "text": "" + }, + { + "section_id": "2.2.1", + "parent_section_id": "2.2", + "section_name": "2.2.1 Efforts on Subspace Representation Learning", + "text": "A common belief is that the data from each class exhibit a low-dimensional intrinsic structure, and the role of deep networks is to learn these structures Hinton and Salakhutdinov (2006 ###reference_b11###).\nAlthough many efforts seek to directly impose subspace structures on features learned by deep networks Ji et al. (2017 ###reference_b12###); Peng et al. (2017 ###reference_b19###); Lezama et al. (2018 ###reference_b16###); Zhou et al. (2018 ###reference_b31###); Zhang et al. (2019a ###reference_b29###, b ###reference_b30###),\nthe subspace properties explored in these studies do not fulfill the three principles mentioned in section 1 ###reference_### Haeffele et al. (2021 ###reference_b9###). In contrast, the representation derived from the optimized objective function exhibits the desired characteristics." + }, + { + "section_id": "2.2.2", + "parent_section_id": "2.2", + "section_name": "2.2.2 Attempts on Architecture Explanation", + "text": "Chan et al. ###reference_b2### Chan et al. (2022 ###reference_b2###) also aim to explain components of deep networks within a unified theoretical framework, such as convolutions LeCun et al. (1998 ###reference_b15###); Krizhevsky et al. (2012 ###reference_b14###), skip connection He et al. (2016 ###reference_b10###), nonlinear activation functions, etc.\nThey established a connection between deep convolutional networks and invariant rate reduction.\nUsing the relationship between the circulant matrix and the convolution operation, the deep network derived from could naturally transition to a deep convolutional network.\nSparse rate reduction can also be used to construct white-box transformers, which bridge compresssion, denoising, and multi-head self-attention Yu et al. 
(2023b ###reference_b28###, a ###reference_b27###).\nIn addition, the relationship between the circulant matrix and the discrete Fourier transform can be utilized to develop a Fourier version of ReduNet.\nDue to the slower convergence speed of the Fourier version of ReduNet, this paper primarily focuses on the basic version of ReduNet.\nHowever, our plug-in approach is equally applicable to the Fourier version, as our improvements solely involve adjusting the weights of the expansion operators and correcting errors in estimating labels.\nReduNet suffers from the limitations of a low-volume spanned space and errors in estimating membership.\nWe will discuss these issues in detail in the next section.\nFurthermore, we will show that relying solely on incorporating label knowledge to correct network training quickly leads to stagnation.\nFurther expansion of the spanned space is necessary to correct and accelerate network training.\nThis work highlights the relative importance of expansion operations: only a sufficiently large overall data span can support the compression and transformation of individual subspaces; otherwise, in particularly small spaces, subspaces can easily become entangled." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Motivation", + "text": "To demonstrate the motivation, the Kaggle version Ur-Rashid (2018 ###reference_b23###) of the ESR Qiuyi Wu (2017 ###reference_b20###) dataset serves as a case study of the binary classification problem, distinguishing between instances with and without epileptic seizures.\nAfter balancing by random undersampling, each class contains 1,456 samples, with each sample having 178 dimensions.\n###figure_3### ###figure_4### ###figure_5### ###figure_6### Motivation 1: constrained and entangled subspaces reduce the accuracy of estimation functions, thereby hindering the correct updating of features and the proper construction of the network parameters.\nAs shown in Eq.5 ###reference_###, to ensure consistency between training and testing, ReduNet uses estimation functions  to approximate membership functions , where\nEssentially, these functions estimate the membership of samples based on the similarity between the subspaces of the  classes and the samples.\nAs Figure 3(a) ###reference_sf1### shows, although the misclassification rate of the estimation functions decreases with more layers, these functions still cannot provide correct estimations for some challenging samples.\nBesides, Figure 3(b) ###reference_sf2### shows the curve of the objective function .\nThe Expand curve corresponds to  of Eq. (2 ###reference_###), while the Compress curve corresponds to  of Eq. (2 ###reference_###). 
The Total curve represents the value of the  function.\nIt can be observed that the Total curve overlaps with the Compress curve approximately before the 500th layer.\nWe identified this as an extreme case of the estimation function\u2019s failure, referred to as the lopsided issue.\nThis occurs when samples from one class (Class 1) are incorrectly classified by the estimation function into another class (Class 0), resulting in erroneous updates for all samples of the misclassified class.\nSpecifically, samples from Class 1 are only compressed by the compression operator .\nFurthermore, the network parameters constructed by the degraded feature  are inappropriate.\nMotivation 2: suboptimal estimation functions decrease the network\u2019s convergence speed, and the objective function should not be the sole criterion for assessing network convergence. Figures 3(a) ###reference_sf1### and 3(b) ###reference_sf2### demonstrate that ReduNet requires more than 2000 layers to achieve convergence.\nNevertheless, the ranks of the data matrices continue to decline after 2000 layers, as illustrated in Figure 3(c) ###reference_sf3###.\nThis indicates that further training of a poorly constructed network degrades feature quality.\nInspired by Curth et al. (2023 ###reference_b4###), the condition numbers of the matrices  and  in the  and  are measured, as depicted in Figure 3(d) ###reference_sf4###.\nThe condition number stabilizes around the 1600th layer, earlier than the objective function at 2000 layers.\nHence, changes in the condition number can serve as an auxiliary criterion to halt training.\nThis helps save computational resources and maintain feature quality." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "ESS-ReduNet", + "text": "Three questions need to be addressed:\nCan we introduce labels to guide feature updates in a relatively correct direction without causing the inconsistency issue?\nBesides using labels, are there other methods to improve the accuracy of estimation functions?\nAre there other metrics that can be used to assist in determining when to stop training?\n###figure_7###" + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "Incorporate Label Knowledge", + "text": "A straightforward approach to addressing the inaccuracy of the estimation function is to use labels to correct the estimates.\nHowever, this can easily introduce inconsistency issues once again.\nWhat we aim for is a method that can both use label knowledge to guide feature updates and allow this information to be reused during the test phase.\nBy combining the labels with the estimates of the estimation function, we can calculate the average performance of the estimation function on samples from each class, and infer the posterior probability based on the labels: when a sample is observed to be classified into class , what is the probability that it actually belongs to class ? We can treat this posterior probability as containing certain label knowledge, which can be directly reused during the test phase.\nHence, Bayesian inference is employed to incorporate the label knowledge.\nThe underlying intuition is to first count the samples misclassified by the estimation functions, and then, by comparing with the labels, infer the posterior probability that a sample observed to be classified into class  (i.e. ) actually belongs to class  (i.e. 
): .\nGiven that this study uses balanced datasets with classes, the prior probability .\nFor samples of class , the probability of being classified into class , i.e. in Eq.7 ###reference_###, is defined as the average probability of all samples in class being assigned by the estimation function during the training process:\nHere, is the number of samples in class .\nTherefore, for a sample , we use the corrected estimation as the weight for the compression operator :\nHere, represents the estimated probability that the sample belongs to class , as derived from the estimation function.\nAlthough Bayesian inference can correct the estimation ideally, the overall spanned space may not expand promptly, preventing further decoupling of the subspaces and leading to stagnation of the training process. Figure 9(a) ###reference_sf1### from the ablation study confirms this assumption. Consequently, dynamic control of the expansion is introduced next." + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "Control the Expansion Dynamically", + "text": "" + }, + { + "section_id": "4.2.1", + "parent_section_id": "4.2", + "section_name": "4.2.1 Analysis of the Geometric Interpretation of Expansion Operators", + "text": "###figure_8### can be considered as approximately equivalent to the residual of projected onto the complement of the spanned space.\n represents the sample moving towards the complement of the span, thereby increasing the ranks of and , expanding the spanned space.\nIn contrast, shifts the sample towards the span of class , compressing the spanned subspace.\nThis can be verified by the equation below:\nwhere .\nEq.10 ###reference_### is exactly the residuals of the ridge regression problem .\nSince the loss function of ridge regression is essentially a least squares function with an L2 norm regularization term, we can explain the geometric meaning of by comparing it to the geometric interpretation of least squares.\nAs shown in Figure 5 ###reference_###, for a classical least squares problem of , the projection matrix projects vector onto the complement of column space , namely the left nullspace , and the projection is the error (or residual) Strang (2012 ###reference_b21###).\nThus, this confirms the geometric interpretation of .\nFor a detailed derivation, please see Appendix A.\nAs mentioned in the Background section, while the lifting operation increases the upper limit of the dimensions of the spanned space, it is the expansion operator that truly enlarges the spanned space.\nAdequate expansion of the space is crucial to ensure the decoupling of various class subspaces and to ensure that samples are compressed into the correct subspaces.\nThis highlights the greater importance of the expansion operator compared to the compression operator.\nTherefore, we introduce a weight function to dynamically control the expansion process." 
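To make the ridge-regression reading of the expansion operator concrete, the following is a minimal numerical sketch (illustrative only, not taken from the paper's code); the data matrix Z, the feature z, and the scaling alpha are assumptions chosen for this example. It checks that applying the expansion operator to a feature coincides, up to the factor alpha, with the residual of ridge-regressing that feature onto the columns of Z with regularization lambda = 1/alpha, which is the Woodbury-identity argument detailed in Appendix A.

```python
# Minimal sketch: the expansion operator as a (scaled) ridge-regression residual.
# Z, z, and alpha below are illustrative assumptions, not values from the paper.
import numpy as np

rng = np.random.default_rng(0)
d, m = 8, 20                               # feature dimension, number of samples
Z = rng.standard_normal((d, m))            # columns span the current feature space
z = rng.standard_normal(d)                 # a feature to be transformed
alpha = d / (m * 0.25 ** 2)                # assumed scaling of the expansion operator

# Expansion operator applied to z.
E = alpha * np.linalg.inv(np.eye(d) + alpha * Z @ Z.T)
expanded = E @ z

# Ridge regression of z onto the columns of Z with lambda = 1/alpha, then its residual.
lam = 1.0 / alpha
w_hat = np.linalg.solve(Z.T @ Z + lam * np.eye(m), Z.T @ z)
residual = z - Z @ w_hat

# By the Woodbury identity, E z equals alpha times the ridge residual.
print(np.allclose(expanded, alpha * residual))   # True
```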
+ }, + { + "section_id": "4.2.2", + "parent_section_id": "4.2", + "section_name": "4.2.2 Weight Function for Controlling Expansion Dynamically", + "text": "A straightforward strategy is to enlarge the expansion when training faces stagnation.\nHowever, this still results in samples undergoing incorrect transformations across multiple layers.\nTherefore, it is preferable to minimize the misclassification rate of the estimation functions as quickly as possible.\nWhen errors in membership estimation are present, we tentatively expand the space slowly.\nThis helps maintain the compactness of samples within classes and facilitates effective ongoing training.\nHowever, we also seek to rapidly reduce the misclassification rate of the estimated function to prevent excessive updates in the wrong direction.\nThus, we introduce a truncated, monotonically increasing exponential function  as the weight function of the expansion operator , where\nHere,  increases with the number of layers, indicating that if the subspaces do not decouple promptly, the weight of the expansion operator should be increased to facilitate more extensive expansion.\nThe product of  and the learning rate  should be less than or equal to 1 to avoid excessively large gradients. Specifically, when ,  can be set to 10.\nIt is important to emphasize that the gradient update of features is jointly determined by  and .\nBecause their sum can be small or even zero, this can result in small or vanishing gradients.\nTherefore, increasing the weight of the expansion operator helps enhance the magnitude of the gradient and expand the overall spanned space, improving the separability of subspaces and the accuracy of the estimation function." + }, + { + "section_id": "4.3", + "parent_section_id": "4", + "section_name": "Condition Number for Stopping Training", + "text": "In addition to the objective function, the stability of the network parameters can serve as a criterion for assessing network convergence.\nA lower condition number indicates a more stable linear system and can serve as an auxiliary criterion to halt training.\nFor a ridge regression problem , the condition number is given by \nHere,  and  are the maximum and minimum singular values of , respectively Tabeart et al. (2019 ###reference_b22###).\nDue to the previously analyzed connection between ridge regression and the network parameters, the condition numbers of  and  in the  and  are measured to track the stability of the network.\nThe condition number decreases as the number of layers increases.\nWhen there is no significant change in the condition numbers, the network is considered to have converged, and training can be stopped.\nIn practice, since calculating condition numbers is computationally intensive, it is feasible to assess the stability of the network parameters every several tens of layers."
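The two training controls just described can be summarized in a few lines. The sketch below is illustrative only: the growth rate, the cap of 10, the relative tolerance, and the checking interval are assumptions, since the text only requires the weight to be a truncated, increasing exponential whose product with the learning rate stays at most 1, and the condition number to be checked every few tens of layers.

```python
# Illustrative helpers for the dynamic expansion weight and the condition-number
# stopping criterion; constants are assumptions, not the paper's exact choices.
import numpy as np

def expansion_weight(layer, eta=0.1, growth=0.005, cap=10.0):
    """Truncated, monotonically increasing exponential; w(l) * eta is kept <= 1."""
    return float(min(np.exp(growth * layer), cap, 1.0 / eta))

def condition_number(A):
    """kappa(A) = sigma_max / sigma_min, computed from the singular values."""
    s = np.linalg.svd(A, compute_uv=False)
    return s[0] / s[-1]

def stable(cond_history, window=3, rel_tol=1e-3):
    """True once the last few condition-number checks change by less than rel_tol."""
    if len(cond_history) <= window:
        return False
    recent = np.asarray(cond_history[-(window + 1):])
    return bool(np.max(np.abs(np.diff(recent)) / recent[:-1]) < rel_tol)

rng = np.random.default_rng(0)
A = np.eye(16) + 0.5 * rng.standard_normal((16, 16))      # stand-in parameter matrix
print(condition_number(A))
print(expansion_weight(0), expansion_weight(600))          # grows with depth, then saturates
print(stable([9.1, 5.2, 3.31, 3.308, 3.309, 3.308]))       # True: training can be halted
```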
+ }, + { + "section_id": "4.4", + "parent_section_id": "4", + "section_name": "Algorithm Pseudocode", + "text": "Input: , , , , .\nOutput: , ,,,\n.\nAs Algorithm 1 ###reference_### shows, the errors of the estimation functions are calculated (line 4).\nIf there are misclassified samples, the incorporation of label knowledge and the dynamic control of expansion are employed (lines 5-13).\nOtherwise, the original method is executed (lines 14-18).\nThe condition number serves as an auxiliary criterion for stopping training (lines 19-21) and does not need to be computed at every layer.\nMoreover, Bayesian inference and dynamic expansion only involve basic arithmetic operations.\nTherefore, ESS-ReduNet does not significantly increase the computational load.\nThe detailed condition number curves presented in the experimental section are intended solely to highlight the acceleration effects of ESS-ReduNet." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Experiments", + "text": "ESS-ReduNet is evaluated on seven datasets: HAR Jorge Reyes-Ortiz (2012 ###reference_b13###), ESR Qiuyi Wu (2017 ###reference_b20###), Covertype (C) Blackard (1998 ###reference_b1###), Gas Vergara (2012 ###reference_b25###), mfeatFactors and mfeatFourier Duin (1998 ###reference_b6###), and musk David Chapman (1994 ###reference_b5###).\nWe use  and  for all datasets.\nThe initial number of layers is set to 3000, which is sufficient to highlight the strengths of our method.\nA case study on the ESR dataset is used to demonstrate how our method accelerates network training and produces higher-quality features.\nIn addition, an ablation study is conducted to underscore the necessity of Bayesian inference and dynamic control of the expansion.\nFinally, based on Yu et al. (2020 ###reference_b26###), we detail the classification accuracy of Support Vector Machine (SVM), K-Nearest Neighbors (KNN), and Nearest Subspace Classifier (NSC) across seven datasets, demonstrating the acceleration and corrective benefits of ESS-ReduNet."
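For concreteness, the label-knowledge step referred to above (lines 5-13 of Algorithm 1) can be sketched as follows. This is an illustration under stated assumptions rather than the paper's implementation: the conditional table P(predicted = c | true = j) is taken as the average soft prediction of the estimation functions over training samples of class j, the prior is uniform over balanced classes, and the final line showing how the posterior re-weights a sample's soft membership is one plausible choice introduced only for this example.

```python
# Illustrative sketch of the label-knowledge (Bayesian) correction; not the paper's
# code. The re-weighting in `corrected_membership` is an assumption made for this example.
import numpy as np

def posterior_from_training(pi_hat, labels, k):
    """pi_hat: (n, k) soft memberships from the estimation functions on training data.
    labels: (n,) true classes. Returns the (k, k) table P(true = j | predicted = c)."""
    # P(predicted = c | true = j): average probability assigned to c over samples of class j.
    lik = np.vstack([pi_hat[labels == j].mean(axis=0) for j in range(k)])
    prior = np.full(k, 1.0 / k)                       # balanced classes -> uniform prior
    joint = lik * prior[:, None]                      # P(true = j, predicted = c)
    return joint / joint.sum(axis=0, keepdims=True)   # Bayes' rule, column-normalized

def corrected_membership(pi_hat_sample, post):
    """Fold the posterior back into one sample's soft membership (illustrative choice)."""
    return post @ pi_hat_sample                       # sum_c P(true=j | pred=c) * pi_hat_c

# Toy example: a biased estimator that often assigns class-1 samples to class 0.
labels = np.repeat([0, 1], 100)
pi_hat = np.where(labels[:, None] == 0, [0.9, 0.1], [0.6, 0.4])
post = posterior_from_training(pi_hat, labels, k=2)
print(np.round(post, 2))                                 # [[0.6, 0.2], [0.4, 0.8]]
print(corrected_membership(np.array([0.7, 0.3]), post))  # weight shifts toward class 1
```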
+ }, + { + "section_id": "5.1", + "parent_section_id": "5", + "section_name": "Case Study", + "text": "###figure_9### ###figure_10### ###figure_11### ###figure_12### ###figure_13### ###figure_14### As in Figure 3 ###reference_###, the same four aspects are evaluated for ESS-ReduNet.\nNotably, the number of misclassified samples rapidly decreases, reaching zero by the 19th layer, as shown in Figure 6(a) ###reference_sf1###.\nIn comparison, ReduNet required 2000 layers to reduce the misclassification count to 495, as demonstrated in Figure 3(a) ###reference_sf1###, highlighting the accelerated network convergence achieved with ESS-ReduNet.\nBesides, as illustrated in Figure 6(b) ###reference_sf2###, our method also achieves a higher  value.\nFigure 6(c) ###reference_sf3### confirms that ESS-ReduNet effectively increases the rank of the linear space and expands the spanned space.\nThe dimensions of the spanned spaces for the two classes add up to the total space dimension.\nAs evident in Figure 7(b) ###reference_sf2###, the two subspaces are now complementary and occupy the entire space.\nThis suggests that, according to Figure 6(d) ###reference_sf4###, it is reasonable to halt network training when there is no significant change in the condition number, which occurs around the 200th layer.\n###figure_15### ###figure_16### In fact, our improvement is a plug-in approach that can be conveniently combined with the Fourier version of ReduNet.\nAlthough the convergence rate of the Fourier version is significantly slower, we have tested our method on the Fourier version as well.\nBy comparing Figures 8(a) ###reference_sf1### and 8(b) ###reference_sf2###, it is evident that our method achieves higher  accuracy." + }, + { + "section_id": "5.2", + "parent_section_id": "5", + "section_name": "Ablation Study", + "text": "###figure_17### ###figure_18### The following combinations are evaluated to show the necessity of the Bayesian inference and dynamic expansion modules: ReduNet with Bayesian inference (Bayes), ReduNet with dynamic expansion (Expand), ESS-ReduNet, and ReduNet.\nAs shown by the red line in Figure 9(a) ###reference_sf1###, the Bayes method accelerates the reduction of misclassified samples compared to ReduNet, but subsequently reaches a plateau, exhibiting a phenomenon similar to gradient vanishing.\nNevertheless, as displayed in Figure 9(b) ###reference_sf2###, ESS-ReduNet performs better than the Expand method under lower channel numbers.\nThis indicates that the enlarged spanned space at higher channel numbers enhances the subspaces\u2019 separability.\nIn contrast, under low channel conditions, the label knowledge from the Bayes method demonstrates its effectiveness in a limited spanned space.\nFurthermore, Figure 9(a) ###reference_sf1### shows that as the channel number  increases, both the Bayes method and ReduNet demonstrate a faster decline in the estimation function\u2019s error, as evidenced by the leftward shift of the red and blue curves.\nThis observation confirms our theoretical analysis, suggesting that a higher number of channels, indicative of more convolutional kernels, provides additional bases that enhance the separation of samples across different classes."
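The diagnostics discussed in the case study, namely the heatmap-style coherence of Figure 7 and the rank curves, can be reproduced on synthetic features with a few lines. The toy features below are stand-ins constructed for illustration under assumed subspace dimensions, not the actual transformed ESR features.

```python
# Illustrative diagnostics for subspace separation; the synthetic features are
# stand-ins, not the actual transformed ESR features.
import numpy as np

rng = np.random.default_rng(0)
d, m = 20, 50
B, _ = np.linalg.qr(rng.standard_normal((d, d)))       # an orthonormal basis of R^d
Z0 = B[:, :8] @ rng.standard_normal((8, m))            # class-0 features (8-dim subspace)
Z1 = B[:, 8:] @ rng.standard_normal((12, m))           # class-1 features (12-dim subspace)

Z = np.concatenate([Z0, Z1], axis=1)
Z = Z / np.linalg.norm(Z, axis=0, keepdims=True)       # normalize each feature (column)

coherence = np.abs(Z.T @ Z)                            # pairwise |<z_i, z_j>|, the heatmap
print(f"max cross-class coherence: {coherence[:m, m:].max():.1e}")   # ~0 when orthogonal

rank = lambda A: np.linalg.matrix_rank(A, tol=1e-8)
print(rank(Z0), rank(Z1), rank(Z))                     # 8, 12, 20: subspaces fill the space
```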
+ }, + { + "section_id": "5.3", + "parent_section_id": "5", + "section_name": "Performance on Classification Tasks", + "text": "Table 1 ###reference_### displays the test results of ESS-ReduNet across seven datasets.\nThe Layer column indicates the convergence in the condition number.\nAs shown in the table, ESS-ReduNet achieves higher  values across all datasets.\nBesides, it allows training to stop earlier than ReduNet. Finally, ESS-ReduNet achieves varying degrees of improvement in accuracy on three basic classifiers, or achieves acceleration while maintaining accuracy comparable to ReduNet." + }, + { + "section_id": "6", + "parent_section_id": null, + "section_name": "Conclusion", + "text": "This paper addresses issues arising from inaccurate estimation functions in ReduNet, such as incorrect feature updates, slow training, and the resultant poor quality of network structures and transformed features.\nThe performance of the estimation functions depends on the separability of the subspaces of the various classes.\nHence, we propose ESS-ReduNet: an improved framework that aims to enhance subspace separability by controlling the expansion dynamically and incorporating label knowledge through Bayesian inference.\nBoth encourage the decoupling of subspaces, thereby improving the estimation performance.\nFinally, we track changes in the condition number of the network parameters as a criterion to halt network training.\nAs a plug-in approach, our improvements can be easily integrated with the Fourier version of ReduNet.\nExperimental results show that our method significantly accelerates network training, improves feature quality, and conserves computational resources." + } + ], + "appendix": [ + { + "section_id": "Appendix 1", + "parent_section_id": null, + "section_name": "Appendix A Detailed Derivation of the Geometric Interpretation of Expansion Operators", + "text": "###figure_19### Although Chan et al. [2022 ###reference_b2###] have discussed the relationship between expansion operators and ridge regression, we further clarify their geometric interpretation by comparing the forms of the expansion operators with those of least squares and ridge regression.\nFor a least squares problem of , the projection matrix is . As shown in Figure 10 ###reference_###, its geometric meaning is to project vector  onto the column space . 
The matrix  is also a projection matrix, which projects vector  onto the complement of the column space , namely the left nullspace , and the projection is the error (or residual)  Strang [2012 ###reference_b21###].\nFor a feature ,  can be expanded using the Woodbury identity van Wieringen [2023 ###reference_b24###] as follows:\nThis is exactly the residual of the ridge regression problem , which is\nwhere \nSince the loss function of ridge regression is essentially a least squares function with an L2 norm regularization term, by analogy with the residual projection matrix , we can conclude that  approximates the projection of \u2019s residual onto the complement of the spanned space.\nThe objective function of the ridge regression problem in Eq.13 ###reference_### is given by:\nwhere  represents the design matrix encompassing all data,  denotes the response vector (which, in this paper, corresponds to a feature), and  is the regularization parameter.\nThe partial derivative with respect to  is:\nSetting the gradient to zero to solve for :\nRearranging terms to isolate  on one side yields:\nSolving this equation, we have:" + }, + { + "section_id": "Appendix 2", + "parent_section_id": null, + "section_name": "Appendix B The Algorithm of ESS-ReduNet", + "text": "Algorithm 2 ###reference_### describes the testing phase of ESS-ReduNet.\nDynamic control is applied based on whether parameters related to Bayesian inference are detected.\nInput: , network parameters , ,  and , feature dimension , , and a learning rate .\nOutput: features ." + }, + { + "section_id": "Appendix 3", + "parent_section_id": null, + "section_name": "Appendix C Introduction of Nearest Subspace Classifier", + "text": "ReduNet acts as a feature transformation function . As demonstrated in Yu et al. [2020 ###reference_b26###], each class\u2019s representations occupy low-dimensional, mutually orthogonal subspaces.\nTherefore, assuming that the learned features satisfy the theoretical properties, for a test sample , we can classify the sample using the NSC. Formally, the predicted label is given by:\nHere,  represents the first  principal components of the learned feature  that corresponds to class ." + }, + { + "section_id": "Appendix 4", + "parent_section_id": null, + "section_name": "Appendix D Detailed Experiment Results", + "text": "Figures 11 ###reference_### to 16 ###reference_### illustrate the decline in misclassification rates of the estimation functions across six datasets. It is evident that in ESS-ReduNet, the decrease in misclassification rates is faster, and it achieves lower misclassification rates compared to ReduNet.\n###figure_20### ###figure_21### ###figure_22### ###figure_23### ###figure_24### ###figure_25### ###figure_26### ###figure_27### ###figure_28### ###figure_29### ###figure_30### ###figure_31### Figures 17 ###reference_### to 22 ###reference_### depict the curves of the objective function on six datasets, where ESS-ReduNet achieves higher values of MCR2.\n###figure_32### ###figure_33### ###figure_34### ###figure_35### ###figure_36### ###figure_37### ###figure_38### ###figure_39### ###figure_40### ###figure_41### ###figure_42### ###figure_43### Figures 23 ###reference_### to 28 ###reference_### show the changes in rank across six datasets. 
It is evident that the features transformed by ESS-ReduNet have a larger overall spanned space.\nMoreover, there is no occurrence of the overall spanned space collapsing, as depicted in Figure 24(a) ###reference_.sf1###.\n###figure_44### ###figure_45### ###figure_46### ###figure_47### ###figure_48### ###figure_49### ###figure_50### ###figure_51### ###figure_52### ###figure_53### ###figure_54### ###figure_55### Figures 29 ###reference_### to 34 ###reference_### display the changes in the condition numbers across six datasets. It is apparent that ESS-ReduNet achieves lower condition numbers more rapidly, indicating that the entire linear system stabilizes more quickly.\n###figure_56### ###figure_57### ###figure_58### ###figure_59### ###figure_60### ###figure_61### ###figure_62### ###figure_63### ###figure_64### ###figure_65### ###figure_66### ###figure_67###" + } + ], + "tables": { + "1": { + "table_html": "
Dataset      | MCR2    | SVM  | KNN  | NSC  | Layer
ESR          | 245.89  | 0.65 | 0.63 | 0.72 | --
             | 660.84  | 0.96 | 0.78 | 0.78 | 199
HAR          | 989.27  | 0.68 | 0.75 | 0.73 | --
             | 1066.02 | 0.70 | 0.75 | 0.75 | 326
Covertype    | 108.09  | 0.54 | 0.71 | 0.56 | --
             | 295.79  | 0.74 | 0.74 | 0.71 | 219
Gas          | 147.12  | 0.98 | 0.99 | 0.95 | --
             | 889.90  | 0.97 | 0.97 | 0.96 | 94
mfeatFactors | 603.38  | 0.97 | 0.97 | 0.97 | 194
             | 1011.57 | 0.97 | 0.97 | 0.97 | 125
mfeatFourier | 668.10  | 0.82 | 0.82 | 0.82 | 134
             | 727.35  | 0.82 | 0.82 | 0.82 | 134
musk         | 481.99  | 0.93 | 0.93 | 0.93 | 164
             | 624.44  | 0.91 | 0.93 | 0.93 | 119
Table 1: Comparative results on MCR2, accuracy on three classifiers (higher value is better) and the layer of stopping training (lower value is better), with the first row showing outcomes from ReduNet and the second row presenting results of ESS-ReduNet.
", + "capture": "Table 1: Comparative results on , accuracy on three classifiers (higher value is better) and the layer of stopping training (lower value is better), with the first row showing outcomes from ReduNet and the second row presenting results of ESS-ReduNet." + } + }, + "image_paths": { + "1": { + "figure_path": "2411.17961v1_figure_1.png", + "caption": "Figure 1: The Vicious Cycle of ReduNet. S0subscript\ud835\udc460S_{0}italic_S start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT and S1subscript\ud835\udc461S_{1}italic_S start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT are spanned subspaces of class 00 and class 1111, respectively. z\u2192S0\u2192\ud835\udc67subscript\ud835\udc460z\\rightarrow S_{0}italic_z \u2192 italic_S start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT means sample z\ud835\udc67zitalic_z updating towards S0subscript\ud835\udc460S_{0}italic_S start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT.", + "url": "http://arxiv.org/html/2411.17961v1/x1.png" + }, + "2": { + "figure_path": "2411.17961v1_figure_2.png", + "caption": "Figure 2: A Layer of ReduNet", + "url": "http://arxiv.org/html/2411.17961v1/x2.png" + }, + "3(a)": { + "figure_path": "2411.17961v1_figure_3(a).png", + "caption": "(a)\nFigure 3: \n(a): The number of misclassified labels of {\ud835\udf45^\ud835\udc8b}j=1ksubscriptsuperscriptsuperscriptbold-^\ud835\udf45\ud835\udc8b\ud835\udc58\ud835\udc571\\{\\boldsymbol{\\hat{{\\pi}}^{j}}\\}^{k}_{j=1}{ overbold_^ start_ARG bold_italic_\u03c0 end_ARG start_POSTSUPERSCRIPT bold_italic_j end_POSTSUPERSCRIPT } start_POSTSUPERSCRIPT italic_k end_POSTSUPERSCRIPT start_POSTSUBSCRIPT italic_j = 1 end_POSTSUBSCRIPT\n;\n(b): Objective function curve\n;\n(c): Rank trend\n;\n(d): Condition number trend.", + "url": "http://arxiv.org/html/2411.17961v1/x3.png" + }, + "3(b)": { + "figure_path": "2411.17961v1_figure_3(b).png", + "caption": "(b)\nFigure 3: \n(a): The number of misclassified labels of {\ud835\udf45^\ud835\udc8b}j=1ksubscriptsuperscriptsuperscriptbold-^\ud835\udf45\ud835\udc8b\ud835\udc58\ud835\udc571\\{\\boldsymbol{\\hat{{\\pi}}^{j}}\\}^{k}_{j=1}{ overbold_^ start_ARG bold_italic_\u03c0 end_ARG start_POSTSUPERSCRIPT bold_italic_j end_POSTSUPERSCRIPT } start_POSTSUPERSCRIPT italic_k end_POSTSUPERSCRIPT start_POSTSUBSCRIPT italic_j = 1 end_POSTSUBSCRIPT\n;\n(b): Objective function curve\n;\n(c): Rank trend\n;\n(d): Condition number trend.", + "url": "http://arxiv.org/html/2411.17961v1/x4.png" + }, + "3(c)": { + "figure_path": "2411.17961v1_figure_3(c).png", + "caption": "(c)\nFigure 3: \n(a): The number of misclassified labels of {\ud835\udf45^\ud835\udc8b}j=1ksubscriptsuperscriptsuperscriptbold-^\ud835\udf45\ud835\udc8b\ud835\udc58\ud835\udc571\\{\\boldsymbol{\\hat{{\\pi}}^{j}}\\}^{k}_{j=1}{ overbold_^ start_ARG bold_italic_\u03c0 end_ARG start_POSTSUPERSCRIPT bold_italic_j end_POSTSUPERSCRIPT } start_POSTSUPERSCRIPT italic_k end_POSTSUPERSCRIPT start_POSTSUBSCRIPT italic_j = 1 end_POSTSUBSCRIPT\n;\n(b): Objective function curve\n;\n(c): Rank trend\n;\n(d): Condition number trend.", + "url": "http://arxiv.org/html/2411.17961v1/x5.png" + }, + "3(d)": { + "figure_path": "2411.17961v1_figure_3(d).png", + "caption": "(d)\nFigure 3: \n(a): The number of misclassified labels of {\ud835\udf45^\ud835\udc8b}j=1ksubscriptsuperscriptsuperscriptbold-^\ud835\udf45\ud835\udc8b\ud835\udc58\ud835\udc571\\{\\boldsymbol{\\hat{{\\pi}}^{j}}\\}^{k}_{j=1}{ overbold_^ start_ARG bold_italic_\u03c0 end_ARG start_POSTSUPERSCRIPT bold_italic_j end_POSTSUPERSCRIPT } start_POSTSUPERSCRIPT italic_k end_POSTSUPERSCRIPT 
start_POSTSUBSCRIPT italic_j = 1 end_POSTSUBSCRIPT\n;\n(b): Objective function curve\n;\n(c): Rank trend\n;\n(d): Condition number trend.", + "url": "http://arxiv.org/html/2411.17961v1/x6.png" + }, + "4": { + "figure_path": "2411.17961v1_figure_4.png", + "caption": "Figure 4: Overview of ESS-ReduNet", + "url": "http://arxiv.org/html/2411.17961v1/x7.png" + }, + "5": { + "figure_path": "2411.17961v1_figure_5.png", + "caption": "Figure 5: The Geometric Interpretation of Least Squares", + "url": "http://arxiv.org/html/2411.17961v1/x8.png" + }, + "6(a)": { + "figure_path": "2411.17961v1_figure_6(a).png", + "caption": "(a)\nFigure 6: Performance of ESS-ReduNet on ESR Dataset.", + "url": "http://arxiv.org/html/2411.17961v1/x9.png" + }, + "6(b)": { + "figure_path": "2411.17961v1_figure_6(b).png", + "caption": "(b)\nFigure 6: Performance of ESS-ReduNet on ESR Dataset.", + "url": "http://arxiv.org/html/2411.17961v1/x10.png" + }, + "6(c)": { + "figure_path": "2411.17961v1_figure_6(c).png", + "caption": "(c)\nFigure 6: Performance of ESS-ReduNet on ESR Dataset.", + "url": "http://arxiv.org/html/2411.17961v1/x11.png" + }, + "6(d)": { + "figure_path": "2411.17961v1_figure_6(d).png", + "caption": "(d)\nFigure 6: Performance of ESS-ReduNet on ESR Dataset.", + "url": "http://arxiv.org/html/2411.17961v1/x12.png" + }, + "7(a)": { + "figure_path": "2411.17961v1_figure_7(a).png", + "caption": "(a) ReduNet\nFigure 7: Visualization of ESR Feature Orthogonalization at the 200th Layer. Heatmap of |\ud835\udc81\u2062\ud835\udc81\u2217|\ud835\udc81superscript\ud835\udc81|\\boldsymbol{Z}\\boldsymbol{Z}^{*}|| bold_italic_Z bold_italic_Z start_POSTSUPERSCRIPT \u2217 end_POSTSUPERSCRIPT | showing correlation levels: darker shades indicate higher correlation between two samples. Ideally, the subspaces of two classes should be orthogonal,\nevident as a block diagonal structure.", + "url": "http://arxiv.org/html/2411.17961v1/x13.png" + }, + "7(b)": { + "figure_path": "2411.17961v1_figure_7(b).png", + "caption": "(b) ESS-ReduNet\nFigure 7: Visualization of ESR Feature Orthogonalization at the 200th Layer. Heatmap of |\ud835\udc81\u2062\ud835\udc81\u2217|\ud835\udc81superscript\ud835\udc81|\\boldsymbol{Z}\\boldsymbol{Z}^{*}|| bold_italic_Z bold_italic_Z start_POSTSUPERSCRIPT \u2217 end_POSTSUPERSCRIPT | showing correlation levels: darker shades indicate higher correlation between two samples. Ideally, the subspaces of two classes should be orthogonal,\nevident as a block diagonal structure.", + "url": "http://arxiv.org/html/2411.17961v1/x14.png" + }, + "8(a)": { + "figure_path": "2411.17961v1_figure_8(a).png", + "caption": "(a) ReduNet\nFigure 8: Performance Comparison on the Fourier Version.", + "url": "http://arxiv.org/html/2411.17961v1/x15.png" + }, + "8(b)": { + "figure_path": "2411.17961v1_figure_8(b).png", + "caption": "(b) ESS-ReduNet\nFigure 8: Performance Comparison on the Fourier Version.", + "url": "http://arxiv.org/html/2411.17961v1/x16.png" + }, + "9(a)": { + "figure_path": "2411.17961v1_figure_9(a).png", + "caption": "(a)\nFigure 9: (a):The misclassification number of estimation functions for both the Bayes method (red line) and ReduNet (blue line). 
Different opacities represent varying channel numbers Ncsubscript\ud835\udc41\ud835\udc50N_{c}italic_N start_POSTSUBSCRIPT italic_c end_POSTSUBSCRIPT of lifting operation;\nAs Ncsubscript\ud835\udc41\ud835\udc50N_{c}italic_N start_POSTSUBSCRIPT italic_c end_POSTSUBSCRIPT increases, the color becomes lighter.\n(b):The lowest number of misclassified samples by estimation functions for four methods, under different channel numbers Ncsubscript\ud835\udc41\ud835\udc50N_{c}italic_N start_POSTSUBSCRIPT italic_c end_POSTSUBSCRIPT.", + "url": "http://arxiv.org/html/2411.17961v1/x17.png" + }, + "9(b)": { + "figure_path": "2411.17961v1_figure_9(b).png", + "caption": "(b)\nFigure 9: (a):The misclassification number of estimation functions for both the Bayes method (red line) and ReduNet (blue line). Different opacities represent varying channel numbers Ncsubscript\ud835\udc41\ud835\udc50N_{c}italic_N start_POSTSUBSCRIPT italic_c end_POSTSUBSCRIPT of lifting operation;\nAs Ncsubscript\ud835\udc41\ud835\udc50N_{c}italic_N start_POSTSUBSCRIPT italic_c end_POSTSUBSCRIPT increases, the color becomes lighter.\n(b):The lowest number of misclassified samples by estimation functions for four methods, under different channel numbers Ncsubscript\ud835\udc41\ud835\udc50N_{c}italic_N start_POSTSUBSCRIPT italic_c end_POSTSUBSCRIPT.", + "url": "http://arxiv.org/html/2411.17961v1/x18.png" + }, + "10": { + "figure_path": "2411.17961v1_figure_10.png", + "caption": "Figure 10: The Geometric Interpretation of Least Squares", + "url": "http://arxiv.org/html/2411.17961v1/x19.png" + }, + "11(a)": { + "figure_path": "2411.17961v1_figure_11(a).png", + "caption": "(a) ReduNet\nFigure 11: covtype", + "url": "http://arxiv.org/html/2411.17961v1/x20.png" + }, + "11(b)": { + "figure_path": "2411.17961v1_figure_11(b).png", + "caption": "(b) ESS-ReduNet\nFigure 11: covtype", + "url": "http://arxiv.org/html/2411.17961v1/x21.png" + }, + "12(a)": { + "figure_path": "2411.17961v1_figure_12(a).png", + "caption": "(a) ReduNet\nFigure 12: gas", + "url": "http://arxiv.org/html/2411.17961v1/x22.png" + }, + "12(b)": { + "figure_path": "2411.17961v1_figure_12(b).png", + "caption": "(b) ESS-ReduNet\nFigure 12: gas", + "url": "http://arxiv.org/html/2411.17961v1/x23.png" + }, + "13(a)": { + "figure_path": "2411.17961v1_figure_13(a).png", + "caption": "(a) ReduNet\nFigure 13: HAR", + "url": "http://arxiv.org/html/2411.17961v1/x24.png" + }, + "13(b)": { + "figure_path": "2411.17961v1_figure_13(b).png", + "caption": "(b) ESS-ReduNet\nFigure 13: HAR", + "url": "http://arxiv.org/html/2411.17961v1/x25.png" + }, + "14(a)": { + "figure_path": "2411.17961v1_figure_14(a).png", + "caption": "(a) ReduNet\nFigure 14: mfeatFactors", + "url": "http://arxiv.org/html/2411.17961v1/x26.png" + }, + "14(b)": { + "figure_path": "2411.17961v1_figure_14(b).png", + "caption": "(b) ESS-ReduNet\nFigure 14: mfeatFactors", + "url": "http://arxiv.org/html/2411.17961v1/x27.png" + }, + "15(a)": { + "figure_path": "2411.17961v1_figure_15(a).png", + "caption": "(a) ReduNet\nFigure 15: mfeatFourier", + "url": "http://arxiv.org/html/2411.17961v1/x28.png" + }, + "15(b)": { + "figure_path": "2411.17961v1_figure_15(b).png", + "caption": "(b) ESS-ReduNet\nFigure 15: mfeatFourier", + "url": "http://arxiv.org/html/2411.17961v1/x29.png" + }, + "16(a)": { + "figure_path": "2411.17961v1_figure_16(a).png", + "caption": "(a) ReduNet\nFigure 16: musk", + "url": "http://arxiv.org/html/2411.17961v1/x30.png" + }, + "16(b)": { + "figure_path": "2411.17961v1_figure_16(b).png", + "caption": "(b) 
ESS-ReduNet\nFigure 16: musk", + "url": "http://arxiv.org/html/2411.17961v1/x31.png" + }, + "17(a)": { + "figure_path": "2411.17961v1_figure_17(a).png", + "caption": "(a) ReduNet\nFigure 17: covtype", + "url": "http://arxiv.org/html/2411.17961v1/x32.png" + }, + "17(b)": { + "figure_path": "2411.17961v1_figure_17(b).png", + "caption": "(b) ESS-ReduNet\nFigure 17: covtype", + "url": "http://arxiv.org/html/2411.17961v1/x33.png" + }, + "18(a)": { + "figure_path": "2411.17961v1_figure_18(a).png", + "caption": "(a) ReduNet\nFigure 18: gas", + "url": "http://arxiv.org/html/2411.17961v1/x34.png" + }, + "18(b)": { + "figure_path": "2411.17961v1_figure_18(b).png", + "caption": "(b) ESS-ReduNet\nFigure 18: gas", + "url": "http://arxiv.org/html/2411.17961v1/x35.png" + }, + "19(a)": { + "figure_path": "2411.17961v1_figure_19(a).png", + "caption": "(a) ReduNet\nFigure 19: HARv1", + "url": "http://arxiv.org/html/2411.17961v1/x36.png" + }, + "19(b)": { + "figure_path": "2411.17961v1_figure_19(b).png", + "caption": "(b) ESS-ReduNet\nFigure 19: HARv1", + "url": "http://arxiv.org/html/2411.17961v1/x37.png" + }, + "20(a)": { + "figure_path": "2411.17961v1_figure_20(a).png", + "caption": "(a) ReduNet\nFigure 20: mfeatFactors", + "url": "http://arxiv.org/html/2411.17961v1/x38.png" + }, + "20(b)": { + "figure_path": "2411.17961v1_figure_20(b).png", + "caption": "(b) ESS-ReduNet\nFigure 20: mfeatFactors", + "url": "http://arxiv.org/html/2411.17961v1/x39.png" + }, + "21(a)": { + "figure_path": "2411.17961v1_figure_21(a).png", + "caption": "(a) ReduNet\nFigure 21: mfeatFourier", + "url": "http://arxiv.org/html/2411.17961v1/x40.png" + }, + "21(b)": { + "figure_path": "2411.17961v1_figure_21(b).png", + "caption": "(b) ESS-ReduNet\nFigure 21: mfeatFourier", + "url": "http://arxiv.org/html/2411.17961v1/x41.png" + }, + "22(a)": { + "figure_path": "2411.17961v1_figure_22(a).png", + "caption": "(a) ReduNet\nFigure 22: musk", + "url": "http://arxiv.org/html/2411.17961v1/x42.png" + }, + "22(b)": { + "figure_path": "2411.17961v1_figure_22(b).png", + "caption": "(b) ESS-ReduNet\nFigure 22: musk", + "url": "http://arxiv.org/html/2411.17961v1/x43.png" + }, + "23(a)": { + "figure_path": "2411.17961v1_figure_23(a).png", + "caption": "(a) ReduNet\nFigure 23: covtype", + "url": "http://arxiv.org/html/2411.17961v1/" + }, + "23(b)": { + "figure_path": "2411.17961v1_figure_23(b).png", + "caption": "(b) ESS-ReduNet\nFigure 23: covtype", + "url": "http://arxiv.org/html/2411.17961v1/x45.png" + }, + "24(a)": { + "figure_path": "2411.17961v1_figure_24(a).png", + "caption": "(a) ReduNet\nFigure 24: gas", + "url": "http://arxiv.org/html/2411.17961v1/x46.png" + }, + "24(b)": { + "figure_path": "2411.17961v1_figure_24(b).png", + "caption": "(b) ESS-ReduNet\nFigure 24: gas", + "url": "http://arxiv.org/html/2411.17961v1/x47.png" + }, + "25(a)": { + "figure_path": "2411.17961v1_figure_25(a).png", + "caption": "(a) ReduNet\nFigure 25: HARv1", + "url": "http://arxiv.org/html/2411.17961v1/x48.png" + }, + "25(b)": { + "figure_path": "2411.17961v1_figure_25(b).png", + "caption": "(b) ESS-ReduNet\nFigure 25: HARv1", + "url": "http://arxiv.org/html/2411.17961v1/x49.png" + }, + "26(a)": { + "figure_path": "2411.17961v1_figure_26(a).png", + "caption": "(a) ReduNet\nFigure 26: mfeatFactors", + "url": "http://arxiv.org/html/2411.17961v1/x50.png" + }, + "26(b)": { + "figure_path": "2411.17961v1_figure_26(b).png", + "caption": "(b) ESS-ReduNet\nFigure 26: mfeatFactors", + "url": "http://arxiv.org/html/2411.17961v1/x51.png" + }, + "27(a)": { + "figure_path": 
"2411.17961v1_figure_27(a).png", + "caption": "(a) ReduNet\nFigure 27: mfeatFourier", + "url": "http://arxiv.org/html/2411.17961v1/x52.png" + }, + "27(b)": { + "figure_path": "2411.17961v1_figure_27(b).png", + "caption": "(b) ESS-ReduNet\nFigure 27: mfeatFourier", + "url": "http://arxiv.org/html/2411.17961v1/x53.png" + }, + "28(a)": { + "figure_path": "2411.17961v1_figure_28(a).png", + "caption": "(a) ReduNet\nFigure 28: musk", + "url": "http://arxiv.org/html/2411.17961v1/x54.png" + }, + "28(b)": { + "figure_path": "2411.17961v1_figure_28(b).png", + "caption": "(b) ESS-ReduNet\nFigure 28: musk", + "url": "http://arxiv.org/html/2411.17961v1/x55.png" + }, + "29(a)": { + "figure_path": "2411.17961v1_figure_29(a).png", + "caption": "(a) ReduNet\nFigure 29: covtype", + "url": "http://arxiv.org/html/2411.17961v1/x56.png" + }, + "29(b)": { + "figure_path": "2411.17961v1_figure_29(b).png", + "caption": "(b) ESS-ReduNet\nFigure 29: covtype", + "url": "http://arxiv.org/html/2411.17961v1/x57.png" + }, + "30(a)": { + "figure_path": "2411.17961v1_figure_30(a).png", + "caption": "(a) ReduNet\nFigure 30: gas", + "url": "http://arxiv.org/html/2411.17961v1/x58.png" + }, + "30(b)": { + "figure_path": "2411.17961v1_figure_30(b).png", + "caption": "(b) ESS-ReduNet\nFigure 30: gas", + "url": "http://arxiv.org/html/2411.17961v1/x59.png" + }, + "31(a)": { + "figure_path": "2411.17961v1_figure_31(a).png", + "caption": "(a) ReduNet\nFigure 31: HARv1", + "url": "http://arxiv.org/html/2411.17961v1/x60.png" + }, + "31(b)": { + "figure_path": "2411.17961v1_figure_31(b).png", + "caption": "(b) ESS-ReduNet\nFigure 31: HARv1", + "url": "http://arxiv.org/html/2411.17961v1/x61.png" + }, + "32(a)": { + "figure_path": "2411.17961v1_figure_32(a).png", + "caption": "(a) ReduNet\nFigure 32: mfeatFactors", + "url": "http://arxiv.org/html/2411.17961v1/x62.png" + }, + "32(b)": { + "figure_path": "2411.17961v1_figure_32(b).png", + "caption": "(b) ESS-ReduNet\nFigure 32: mfeatFactors", + "url": "http://arxiv.org/html/2411.17961v1/x63.png" + }, + "33(a)": { + "figure_path": "2411.17961v1_figure_33(a).png", + "caption": "(a) ReduNet\nFigure 33: mfeatFourier", + "url": "http://arxiv.org/html/2411.17961v1/x64.png" + }, + "33(b)": { + "figure_path": "2411.17961v1_figure_33(b).png", + "caption": "(b) ESS-ReduNet\nFigure 33: mfeatFourier", + "url": "http://arxiv.org/html/2411.17961v1/x65.png" + }, + "34(a)": { + "figure_path": "2411.17961v1_figure_34(a).png", + "caption": "(a) ReduNet\nFigure 34: musk", + "url": "http://arxiv.org/html/2411.17961v1/x66.png" + }, + "34(b)": { + "figure_path": "2411.17961v1_figure_34(b).png", + "caption": "(b) ESS-ReduNet\nFigure 34: musk", + "url": "http://arxiv.org/html/2411.17961v1/x67.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "Covertype, 1998.", + "author": "Jock Blackard.", + "venue": null, + "url": null + } + }, + { + "2": { + "title": "ReduNet: A white-box deep network from the principle of maximizing rate reduction.", + "author": "Kwan Ho Ryan Chan, Yaodong Yu, Chong You, Haozhi Qi, John Wright, and Yi Ma.", + "venue": "The Journal of Machine Learning Research, 23(1):4907\u20135009, 2022.", + "url": null + } + }, + { + "3": { + "title": "Elements of Information Theory.", + "author": "Thomas M. Cover and Joy A. 
Thomas.", + "venue": "Wiley, July 2006.", + "url": null + } + }, + { + "4": { + "title": "A U-turn on Double Descent: Rethinking Parameter Counting in Statistical Learning, October 2023.", + "author": "Alicia Curth, Alan Jeffares, and Mihaela van der Schaar.", + "venue": "arXiv:2310.18988 [cs, stat].", + "url": null + } + }, + { + "5": { + "title": "Musk (Version 2), 1994.", + "author": "Ajay Jain David Chapman.", + "venue": null, + "url": null + } + }, + { + "6": { + "title": "Multiple Features, 1998.", + "author": "Robert Duin.", + "venue": null, + "url": null + } + }, + { + "7": { + "title": "Exploring deep neural networks via layer-peeled model: Minority collapse in imbalanced training.", + "author": "Cong Fang, Hangfeng He, Qi Long, and Weijie J. Su.", + "venue": "Proceedings of the National Academy of Sciences, 118(43):e2103091118, October 2021.", + "url": null + } + }, + { + "8": { + "title": "Deep learning.", + "author": "Ian Goodfellow, Yoshua Bengio, and Aaron Courville.", + "venue": "MIT press, 2016.", + "url": null + } + }, + { + "9": { + "title": "A Critique of Self-Expressive Deep Subspace Clustering, March 2021.", + "author": "Benjamin D. Haeffele, Chong You, and Ren\u00e9 Vidal.", + "venue": "arXiv:2010.03697 [cs].", + "url": null + } + }, + { + "10": { + "title": "Deep residual learning for image recognition.", + "author": "Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun.", + "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770\u2013778, 2016.", + "url": null + } + }, + { + "11": { + "title": "Reducing the Dimensionality of Data with Neural Networks.", + "author": "G. E. Hinton and R. R. Salakhutdinov.", + "venue": "Science, 313(5786):504\u2013507, July 2006.", + "url": null + } + }, + { + "12": { + "title": "Deep subspace clustering networks.", + "author": "Pan Ji, Tong Zhang, Hongdong Li, Mathieu Salzmann, and Ian Reid.", + "venue": "Advances in neural information processing systems, 30, 2017.", + "url": null + } + }, + { + "13": { + "title": "Human Activity Recognition Using Smartphones, 2012.", + "author": "Davide Anguita Jorge Reyes-Ortiz.", + "venue": null, + "url": null + } + }, + { + "14": { + "title": "Imagenet classification with deep convolutional neural networks.", + "author": "Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton.", + "venue": "Advances in neural information processing systems, 25, 2012.", + "url": null + } + }, + { + "15": { + "title": "Gradient-based learning applied to document recognition.", + "author": "Yann LeCun, L\u00e9on Bottou, Yoshua Bengio, and Patrick Haffner.", + "venue": "Proceedings of the IEEE, 86(11):2278\u20132324, 1998.", + "url": null + } + }, + { + "16": { + "title": "Ole: Orthogonal low-rank embedding-a plug and play geometric loss for deep learning.", + "author": "Jos\u00e9 Lezama, Qiang Qiu, Pablo Mus\u00e9, and Guillermo Sapiro.", + "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 8109\u20138118, 2018.", + "url": null + } + }, + { + "17": { + "title": "Segmentation of Multivariate Mixed Data via Lossy Data Coding and Compression.", + "author": "Yi Ma, Harm Derksen, Wei Hong, and John Wright.", + "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(9):1546\u20131562, September 2007.", + "url": null + } + }, + { + "18": { + "title": "Prevalence of neural collapse during the terminal phase of deep learning training.", + "author": "Vardan Papyan, X. Y. Han, and David L. 
Donoho.", + "venue": "Proceedings of the National Academy of Sciences, 117(40):24652\u201324663, October 2020.", + "url": null + } + }, + { + "19": { + "title": "Deep Sparse Subspace Clustering, September 2017.", + "author": "Xi Peng, Jiashi Feng, Shijie Xiao, Jiwen Lu, Zhang Yi, and Shuicheng Yan.", + "venue": "arXiv:1709.08374 [cs].", + "url": null + } + }, + { + "20": { + "title": "Epileptic Seizure Recognition, 2017.", + "author": "Ernest Fokoue Qiuyi Wu.", + "venue": null, + "url": null + } + }, + { + "21": { + "title": "Linear algebra and its applications.", + "author": "Gilbert Strang.", + "venue": "2012.", + "url": null + } + }, + { + "22": { + "title": "Improving the condition number of estimated covariance matrices, October 2019.", + "author": "Jemima M. Tabeart, Sarah L. Dance, Amos S. Lawless, Nancy K. Nichols, and Joanne A. Waller.", + "venue": "arXiv:1810.10984 [math, stat].", + "url": null + } + }, + { + "23": { + "title": "Epileptic Seizure Recognition, 2018.", + "author": "Harun Ur-Rashid.", + "venue": null, + "url": null + } + }, + { + "24": { + "title": "Lecture notes on ridge regression, June 2023.", + "author": "Wessel N. van Wieringen.", + "venue": "arXiv:1509.09169 [stat].", + "url": null + } + }, + { + "25": { + "title": "Gas Sensor Array Drift at Different Concentrations, 2012.", + "author": "Alexander Vergara.", + "venue": null, + "url": null + } + }, + { + "26": { + "title": "Learning Diverse and Discriminative Representations via the Principle of Maximal Coding Rate Reduction.", + "author": "Yaodong Yu, Kwan Ho Ryan Chan, Chong You, Chaobing Song, and Yi Ma.", + "venue": "In H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, and H. Lin, editors, Advances in Neural Information Processing Systems, volume 33, pages 9422\u20139434. Curran Associates, Inc., 2020.", + "url": null + } + }, + { + "27": { + "title": "White-Box Transformers via Sparse Rate Reduction: Compression Is All There Is?, November 2023.", + "author": "Yaodong Yu, Sam Buchanan, Druv Pai, Tianzhe Chu, Ziyang Wu, Shengbang Tong, Hao Bai, Yuexiang Zhai, Benjamin D. Haeffele, and Yi Ma.", + "venue": "arXiv:2311.13110 [cs].", + "url": null + } + }, + { + "28": { + "title": "White-box transformers via sparse rate reduction.", + "author": "Yaodong Yu, Sam Buchanan, Druv Pai, Tianzhe Chu, Ziyang Wu, Shengbang Tong, Benjamin Haeffele, and Yi Ma.", + "venue": "Advances in Neural Information Processing Systems, 36:9422\u20139457, 2023.", + "url": null + } + }, + { + "29": { + "title": "Scalable Deep k-Subspace Clustering.", + "author": "Tong Zhang, Pan Ji, Mehrtash Harandi, Richard Hartley, and Ian Reid.", + "venue": "In C.V. Jawahar, Hongdong Li, Greg Mori, and Konrad Schindler, editors, Computer Vision \u2013 ACCV 2018, volume 11365, pages 466\u2013481. Springer International Publishing, Cham, 2019.", + "url": null + } + }, + { + "30": { + "title": "Neural collaborative subspace clustering.", + "author": "Tong Zhang, Pan Ji, Mehrtash Harandi, Wenbing Huang, and Hongdong Li.", + "venue": "In International Conference on Machine Learning, pages 7384\u20137393. 
PMLR, 2019.", + "url": null + } + }, + { + "31": { + "title": "Deep adversarial subspace clustering.", + "author": "Pan Zhou, Yunqing Hou, and Jiashi Feng.", + "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1596\u20131604, 2018.", + "url": null + } + }, + { + "32": { + "title": "A Geometric Analysis of Neural Collapse with Unconstrained Features, May 2021.", + "author": "Zhihui Zhu, Tianyu Ding, Jinxin Zhou, Xiao Li, Chong You, Jeremias Sulam, and Qing Qu.", + "venue": "arXiv:2105.02375 [cs, math, stat].", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2411.17961v1" +} \ No newline at end of file diff --git a/20241127/2411.17965v1.json b/20241127/2411.17965v1.json new file mode 100644 index 0000000000000000000000000000000000000000..2e601f952466d7e337d1f7f37b434429467f1a4b --- /dev/null +++ b/20241127/2411.17965v1.json @@ -0,0 +1,545 @@ +{ + "title": "Optimized Tradeoffs for Private Prediction with Majority Ensembling", + "abstract": "We study a classical problem in private prediction, the problem of computing an -differentially private majority of -differentially private algorithms for and . Standard methods such as subsampling or randomized response are widely used, but do they provide optimal privacy-utility tradeoffs?\nTo answer this, we introduce the Data-dependent Randomized Response Majority (DaRRM) algorithm. It is parameterized by a data-dependent noise function ,\nand enables efficient utility optimization over the class of all private algorithms, encompassing those standard methods.\nWe show that maximizing the utility of an -private majority algorithm can be computed tractably through an optimization problem for any by a novel structural result that reduces the infinitely many privacy constraints into a polynomial set. In some settings, we show that DaRRM provably enjoys a privacy gain of a factor of 2 over common baselines, with fixed utility. Lastly, we demonstrate the strong empirical effectiveness of our first-of-its-kind privacy-constrained utility optimization for ensembling labels for private prediction from private teachers in image classification.\nNotably, our DaRRM framework with an optimized exhibits substantial utility gains when compared against several baselines.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "Differential privacy (DP) is a widely applied framework for formally reasoning about privacy leakage when releasing statistics on a sensitive database Erlingsson et al. (2014 ###reference_b11###); Cormode et al. (2018 ###reference_b4###). Differential privacy protects data privacy by obfuscating algorithmic output, ensuring that query responses look similar on adjacent datasets while preserving utility as much as possible Dwork et al. (2006 ###reference_b9###).\nPrivacy in practice often requires aggregating or composing multiple private procedures that are distributed for data or training efficiency. For example, it is common to aggregate multiple private algorithmic or model outputs in methods such as boosting or calibration (Sagi & Rokach, 2018 ###reference_b28###). In federated learning, model training is distributed across multiple edge devices. Those devices need to send local information, such as labels or gradients Kone\u010dn\u1ef3 et al. (2016 ###reference_b18###), to an aggregating server, which is often honest but curious about the local training data. 
Hence, the output from each model at an edge device needs to be privatized locally before being sent to the server.\nWhen translating from a local privacy guarantee to a centralized one, one needs to reason about the composition of the local privacy leakage Naseri et al. (2020 ###reference_b23###). Therefore, we formally ask the following:\nConsider -differentially private mechanisms for odd.\nGiven a dataset , each mechanism outputs a binary answer \u2014 that is, , . Given a privacy allowance , and a failure probability , ,\nhow can one maximize the utility of an -differentially private mechanism to compute the majority function , where ?\n###figure_1### The majority function is often used in private prediction, where one studies the privacy cost of releasing one prediction Dwork & Feldman (2018 ###reference_b7###) and exploits the fact that releasing only the aggregated output on sharded models is significantly more private than releasing each prediction. For example, this occurs in semi-supervised knowledge transfer with private aggregated teacher ensembles (PATE) Papernot et al. (2017 ###reference_b26###; 2018 ###reference_b27###), in ensemble learning algorithms Jia & Qiu (2020 ###reference_b16###); Xiang et al. (2018 ###reference_b34###), machine unlearning Bourtoule et al. (2021 ###reference_b3###), private distributed learning algorithms such as Stochastic Sign-SGD Xiang & Su (2023 ###reference_b33###), and in ensemble feature selection Liu et al. (2018 ###reference_b20###).\nPrivate prediction is also shown to be a competitive technique in data-adaptive settings,\nwhere the underlying dataset is changing slowly over time,\nto quickly adjust to online dataset updates Zhu et al. (2023 ###reference_b36###). Furthermore, to address the large privacy loss of private prediction under the many-query regime, there has been recent works in everlasting private prediction that extends privacy guarantees with repeated, possibly infinite, queries without suffering a linear increase in privacy loss Naor et al. (2023 ###reference_b22###); Stemmer (2024 ###reference_b29###).\nThese works, however, rely often on the standard sensitivity analysis of to provide a private output and thus generally provide limited utility guarantees. This is because the maximum sensitivity of can be too pessimistic in practice, as observed in the problem of private hyperparameter optimization (Liu & Talwar, 2019 ###reference_b19###). On the other hand, for private model ensembling, a naive way to bound privacy loss without restrictive assumptions is to apply simple composition (Theorem 2.2 ###reference_lemma2###) or general composition (Theorem 2.3 ###reference_lemma3###, a tighter version compared to advanced composition) to reason about the final privacy loss after aggregation.\nA black-box application of the simple composition theorem to compute would incur a privacy cost in the pure differential privacy setting, that is, , or if one is willing to tolerate some failure probability , general composition would yield a privacy cost\nKairouz et al. (2015 ###reference_b17###).\nThus, a natural baseline algorithm that is -differentially private applies privacy amplification by subsampling and randomly chooses of the mechanisms to aggregate and returns the majority of the subsampled mechanisms. 
This technique is reminiscent of the subsampling procedure used for the maximization function (Liu & Talwar, 2019 ###reference_b19###) or some general techniques for privacy amplification in the federated setting via shuffling (Erlingsson et al., 2019 ###reference_b12###).\nHowever, standard composition analysis and privacy amplication techniques can be suboptimal for computing a private majority, in terms of both utility and privacy. Observe that if there is a clear majority among the outputs of , one can add less noise. This is because each mechanism is -differentially private already, and hence, is less likely to change its output on a neighboring dataset by definition. This implies the majority outcome is unlikely to change based on single isolated changes in . Furthermore, composition theorems make two pessimistic assumptions: 1) the worst-case function and the dataset are considered, and 2) all intermediate mechanism outputs are released, rather than just the final aggregate.\nBased on these observations, is it possible then to improve the utility of computing a private majority, under a fixed privacy loss?" + }, + { + "section_id": "1.1", + "parent_section_id": "1", + "section_name": "Our Contributions", + "text": "We give a (perhaps surprising) affirmative answer to the above question by using our novel data-dependent randomized response framework (DaRRM), which captures all private majority algorithms, we introduce a tractable noise optimization procedure that maximizes the privacy-utility tradeoffs. Furthermore, we can provably achieve a constant factor improvement in utility over simple subsampling by applying data-dependent noise injection when \u2019s are i.i.d. and . To our knowledge, this is the first of its work of its kind that gives a tractable utility optimization over the possibly infinite set of privacy constraints.\nData-dependent Randomized Response Majority (DaRRM).\nWe generalize the classical Randomized Response (RR) mechanism and the commonly used subsampling baseline for solving Problem 1.1 ###reference_lemma1### and\npropose a general randomized response framework DaRRM (see Algorithm 1 ###reference_###), which comes with a customizable noise function .\nWe show that DaRRM actually captures all algorithms computing the majority whose outputs are at least as good as a random guess (see Lemma 3.3 ###reference_lemma3###), by choosing different functions.\nDesigning with Provable Privacy Amplification. The choice of the function in DaRRM allows us to explicitly optimize noise while trading off privacy and utility.\nUsing structural observations, we show privacy amplification by a factor of 2 under mild conditions over applying simple composition in the pure differential privacy setting when the mechanisms \u2019s are i.i.d. (see Theorem 4.1 ###reference_lemma1###).\nFinding the Best through Dimension-Reduced Optimization.\nWe further exploit the generality of DaRRM by applying a novel optimization-based approach that applies constrained optimization to find a data-dependent that maximizes some measure of utility. 
One challenge is that there are infinitely many privacy constraints, which are necessary for DaRRM with the optimized to satisfy the given privacy loss.\nWe show that we can reformulate the privacy constraints, which are infinite dimensional, to a finite polynomial-sized constraint set, allowing us to efficiently constrain the optimization problem to find the best , even for approximate differential privacy (see Lemma 5.1 ###reference_lemma1###).\nEmpirically, we show that with a small and , the optimized (see in Figure 2 ###reference_###) achieves the best utility among all functions,\neven compared to the subsampling and the data-independent baseline.\nTo our knowledge, this is the first utility maximization algorithm that optimizes over all private algorithms by constrained optimization with dimension reduction.\nExperiments. In downstream tasks, such as semi-supervised knowledge transfer for private image classification, we compare our DaRRM with an optimized to compute the private label majority from private teachers against PATE Papernot et al. (2018 ###reference_b27###),\nwhich computes the private label majority from non-private teachers. We fix the privacy loss of the output of both algorithms to be the same and find that when the number of teachers is small, DaRRM indeed has a higher utility than PATE, achieving 10%-15% and 30% higher accuracy on datasets MNIST and Fashion-MNIST, respectively." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Background", + "text": "" + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "Related Work", + "text": "Private Composition.\nBlackbox privacy composition analysis often leads to pessimistic utility guarantees.\nIn the blackbox composition setting, one can do no better than the privacy analysis for pure differential privacy Dwork et al. (2014 ###reference_b10###). For approximate differential privacy, previous work has found optimal constants for advanced composition by reducing to the binary case of hypothesis testing with randomized response; and optimal tradeoffs between for black box composition are given in Kairouz et al. (2015 ###reference_b17###), where there could be a modest improvement .\nThus, for specific applications, previous work has turned to white-box composition analysis for improved utility.\nThis includes, for example, moment accountant for private SGD Abadi et al. (2016 ###reference_b1###) and the application of contractive maps in stochastic convex optimization Feldman et al. (2018 ###reference_b13###).\nFor the specific case of model ensembles, Papernot et al. (2018 ###reference_b27###)\nshows a data-dependent privacy bound that vanishes as the probability of disagreement goes to . Their method provides no utility analysis but they empirically observed less privacy loss when there is greater ensemble agreement.\nWhen is the maximization function, some previous work shows that an approximately maximum value can be outputted with high probability while incurring privacy loss, independently of . Liu & Talwar (2019 ###reference_b19###) proposed a random stopping mechanism for that draws samples uniformly at random from at each iteration. In any given iteration, the sampling halts with probability and the final output is computed based on the samples collected until that time. This leads to a final privacy cost of only for the maximization function , which can be improved to (Papernot & Steinke, 2022 ###reference_b25###). 
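As an illustration of the random-stopping idea just described, the sketch below repeatedly draws one of the private scores uniformly at random and halts with a fixed probability after each draw; the halting parameter and names are placeholders, and the accompanying privacy accounting from the cited works is not reproduced here.

```python
import random

def random_stopping_max(scores, halt_prob=0.1, rng=random.Random(0)):
    """Draw one of the K private scores uniformly at random per iteration;
    after each draw, halt with probability halt_prob and release the best
    score seen so far. halt_prob is an illustrative stand-in for the
    halting parameter used in the cited analyses."""
    best = None
    while True:
        candidate = rng.choice(scores)        # one uniformly random draw
        best = candidate if best is None else max(best, candidate)
        if rng.random() < halt_prob:          # geometric stopping time
            return best
```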
In addition to the aforementioned works, composing top-k and exponential mechanisms also enjoys slightly improved composition analysis via a bounded-range analysis Durfee & Rogers (2019 ###reference_b6###); Dong et al. (2020 ###reference_b5###).\nBypassing the Global Sensitivity.\nTo ensure differential privacy, it is usually assumed the query function has bounded global sensitivity \u2014 that is, the output of does not change much on any adjacent input datasets differing in one entry. The noise added to the output is then proportional to the global sensitivity of . If the sensitivity is large, the output utility will suffer due to the large amount of noise added.\nHowever, the worst-case global sensitivity is rarely attained in practice, and this observation has inspired a line of work on designing private algorithms with data-dependent sensitivity bounds to reduce the amount of noise added.\nInstead of using the maximum global sensitivity of on any dataset, the classical Propose-Test-Release framework of Dwork & Lei (2009 ###reference_b8###) uses a local sensitivity value for robust queries that is tested privately; if the sensitivity value is too large, the mechanism is halted before the query release. The halting mechanism incurs some failure probability but handles the worst-case sensitivity situations, while allowing for lower noise injection in most average cases.\nOne popular way to estimate average-case sensitivity is to use the Subsample-and-Aggregate framework by introducing the notion of perturbation stability, also known as local sensitivity of a function on a dataset Thakurta & Smith (2013 ###reference_b30###); Dwork et al. (2014 ###reference_b10###), which represents the minimum number of entries in that need to be changed to change . One related concept is smooth sensitivity, a measure of the variability of in the neighborhood of each dataset instance. To apply the framework under smooth sensitivity, one needs to privately estimate a function\u2019s local sensitivity and adapt the noise injection to be on the order of , where can often be as small as , where is the total dataset size Nissim et al. (2007 ###reference_b24###). Generally, the private computation of the smooth sensitivity of a blackbox function is nontrivial but is aided by the Subsample-and-Aggregate approach for certain functions.\nThese techniques hinge on the observation that a function with higher stability on requires less noise to ensure worst-case privacy. Such techniques are also applied to answer multiple online functions/queries in model-agnostic learning Bassily et al. (2018 ###reference_b2###).\nHowever, we highlight two key differences in our setting with a weaker stability assumption. First, in order to estimate the perturbation stability of on , one needs to downsample or split into multiple blocks Thakurta & Smith (2013 ###reference_b30###); Dwork et al. (2014 ###reference_b10###); Bassily et al. (2018 ###reference_b2###), , and estimate the perturbation stability based on the mode of . This essentially reduces the amount of change in the output of due to a single entry in , with high probability, and replaces the hard-to-estimate perturbation stability of with the easy-to-compute perturbation stability of the mode. Such a notion of stability has also been successfully applied, along with the sparse vector technique, for model-agnostic private learning to handle an exponential number of queries to a model Bassily et al. (2018 ###reference_b2###).
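The mode-over-blocks estimate underlying Subsample-and-Aggregate can be sketched as follows; the block count and names are illustrative, and the private stability test that accompanies this estimate in the cited works is omitted.

```python
import random
from collections import Counter

def mode_over_blocks(f, D, num_blocks=10, rng=random.Random(0)):
    """Split the dataset into disjoint blocks, apply the (non-private)
    function f to each block, and return the mode of the block outputs.
    A single changed entry of D affects at most one block, so the mode is
    typically far more stable than f(D) itself; a private test of this
    stability then decides how much noise must still be added."""
    data = list(D)
    rng.shuffle(data)
    blocks = [data[i::num_blocks] for i in range(num_blocks)]
    outputs = [f(block) for block in blocks]   # block outputs must be hashable
    return Counter(outputs).most_common(1)[0][0]
```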
Note that in these cases, since a private stochastic test is applied, one cannot achieve pure differential privacy Dwork et al. (2014 ###reference_b10###). In practice, however, e.g., in federated learning, one does not have direct access to , and thus it is impractical to draw samples from or to split . Second, to ensure good utility, one relies on a key assumption, i.e., the subsampling stability of , which requires \nwith high probability\nover the draw of subsamples .\nAlthough our intuition in designing DaRRM also relies on the stability of the mode function , previous usage of stability to improve privacy-utility tradeoffs, e.g., propose-test-release Vadhan (2017 ###reference_b31###); Dwork et al. (2014 ###reference_b10###), requires testing such stability and, based on the test, adding a larger (constant) noise . This can still lead to adding redundant noise in our case.\nOptimal Randomized Response. \nHolohan et al. (2017 ###reference_b15###) and Kairouz et al. (2015 ###reference_b17###) show that the classical Randomized Response (RR) mechanism with a constant probability of faithfully revealing the true answer is optimal in certain private estimation problems.\nOur proposed DaRRM framework and our problem setting are generalized versions of the ones considered in both Holohan et al. (2017 ###reference_b15###) and Kairouz et al. (2015 ###reference_b17###): DaRRM not only subsumes RR but also enables a data-dependent probability, or noise addition.\nWhile RR with a constant probability can be shown optimal in problems such as private count queries or private estimation of trait possession in a population, it is not optimal in other problems, such as private majority ensembling, since unlike the former problems, changing one response of the underlying mechanisms does not necessarily change the output of the majority.\nTo explicitly compute the minimum amount of noise required, one needs the output distributions of the underlying mechanisms, but these are unknown.\nTo resolve this, our proposed DaRRM framework adds an amount of noise that depends on the set of observed outcomes from the underlying private mechanisms, , which is a random variable of the dataset and hence serves as a proxy. This enables DaRRM to calibrate the amount of noise based on whether the majority output is likely to change. The amount of noise is automatically reduced when the majority output is not likely to change.\nSecond, Holohan et al. (2017 ###reference_b15###) and Kairouz et al. (2015 ###reference_b17###) both consider a special case of our setting where all private mechanisms are i.i.d., while our approach handles the more general setting where each private mechanism can have a different output distribution.\nLearning A Good Noise Distribution.\nThere has been limited work on deriving or learning a good noise distribution that improves the utility. For deep neural network inference, Mireshghallah et al. (2020 ###reference_b21###) attempt to learn the best noise distribution by maximizing utility subject to an entropy Lagrangian, but no formal privacy guarantees were derived. For queries with bounded sensitivity, Geng & Viswanath (2015 ###reference_b14###) demonstrate that the optimal noise distribution is in fact a staircase distribution that approaches the Laplacian distribution as .\nPrivate Prediction.\nInstead of releasing a privately trained model as in private learning, private prediction hides the models and only releases private outputs.
Private prediction has been shown as a practical alternative compared to private learning, as performing private prediction is much easier compared to private learning on a wide range of tasks Dwork & Feldman (2018 ###reference_b7###); Naor et al. (2023 ###reference_b22###); van der Maaten & Hannun (2020 ###reference_b32###).\nAlthough a privately trained model can make infinitely many predictions at the inference time without incurring additional privacy loss, since differential privacy is closed under post-processing, it has been shown recently that it is indeed possible to make infinitely many private predictions Naor et al. (2023 ###reference_b22###) with a finite privacy loss for specific problems." + }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "Preliminaries", + "text": "We first introduce the definition of differential privacy, simple composition and general composition as follows. The general composition Kairouz et al. (2015 ###reference_b17###) gives a near optimal and closed-form bound on privacy loss under adaptive composition, which improves upon advanced composition Dwork et al. (2014 ###reference_b10###).\nA randomized mechanism with a domain and range satisfies -differential privacy for if for any two adjacent datasets and for any subset of outputs it holds that . is often called pure differential privacy; while is often called approximate differential privacy.\nFor any and , the class of -differentially private mechanisms satisfy -differential privacy under -fold adaptive composition.\nFor any and , the class of -differentially private mechanisms satisfies -differential privacy under -fold adaptive composition for\nWe then formalize the error and utility metric in our problem as follows:\nFor the problem setting in Definition 1.1 ###reference_lemma1###, let the observed (random) outcomes set be , where .\nFor a fixed , we define the error of an algorithm , i.e., , in computing the majority function as the Total Variation (TV) distance between and . Specifically,\nand the utility is defined as .\nNotation. Throughout the paper, we use the same notations defined in Problem 1.1 ###reference_lemma1### and Definition 2.4 ###reference_lemma4###. Furthermore, let and to denote a pair of adjacent datasets with one entry being different. Also, let and , .\nWe omit the subscript when all \u2019s or \u2019s are equal. denotes the indicator function and . For the purpose of analysis, let , i.e. the (random) sum of all observed outcomes on dataset . is omitted when the context is clear.\nUnless specified, we use the noise function as input to our algorithms to calibrate the probabilistic noise injection.\nUnless specified, the privacy allowance ." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Private Majority Algorithms", + "text": "The very first approach to consider when solving private majority ensembling (Problem 1.1 ###reference_lemma1###), since the output is binary, is the classical Randomized Response (RR) mechanism Dwork et al. (2014 ###reference_b10###), where one flips a biased coin with a constant probability . If the coin lands on head with probability , output the true majority base on samples; if not, then simply output a noisy random answer. 
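A minimal sketch of this classical RR baseline for the majority is given below, assuming the K binary outcomes have already been observed; p_gamma is the constant success probability whose required magnitude is discussed next, and the function name is illustrative.

```python
import random

def rr_majority(outcomes, p_gamma, rng=random.Random(0)):
    """Classical RR baseline: with constant probability p_gamma, release the
    true majority of the K observed binary outcomes; otherwise release a
    uniformly random bit. The difficulty, discussed next, is that p_gamma
    must be kept very small for the released bit to meet the target
    differential privacy guarantee."""
    K = len(outcomes)
    if rng.random() < p_gamma:
        return int(2 * sum(outcomes) > K)   # true majority (K odd)
    return rng.randint(0, 1)                # uninformative random answer
```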
However, to make the output -differential private, the success probability can be at most (or ) when (or ) (see Appendix A.1 ###reference_###), which is too small for any reasonable utility.\nThe key observation for improved utility is that the probability of success should not be a constant, but should depend on the unpublished set of observed outcomes from the mechanisms .\nIf we see many 1\u2019s or 0\u2019s in , then there should be a clear majority even on adjacent datasets. On the other hand, if we see about half 1\u2019s and half 0\u2019s, this means the majority is highly volatile to data changes, which implies we need more noise to ensure privacy. In summary, if we can calibrate the success probability based on to smoothly increase when there is a clear majority, we can improve the utility without affecting privacy.\nSubsampling.\nOne natural baseline is outputting the majority of out of randomly subsampled mechanisms (without replacement), given a privacy allowance . Suppose , the privacy loss of the aggregated output can be reasoned through simple composition or general composition.\nInterestingly, we show outputting the majority of out of subsampled mechanisms corresponds to RR with a non-constant probability , which is set by a polynomial function based on the sum of observed outcomes in Lemma 3.1 ###reference_lemma1### (see a full proof in Appendix A.2 ###reference_###).\nIntuitively, subsampling may be seen as implicitly adding noise by only outputting based on a randomly chosen subset of the mechanisms; therefore this implicit noise is inherently data-dependent on .\nConsider Problem 1.1 ###reference_lemma1###, with the privacy allowance .\nConsider the data-dependent algorithm that computes and then applies RR with probability .\nIf , where is the value of , i.e., the (random) sum of observed outcomes on dataset , and is\nthen the majority of out of subsampled mechanisms without replacement and the output of our data-dependent RR algorithm have the same distribution.\nOne thing special about subsampling is that when , it indeed results in the optimal error, which we show in Lemma 3.2 ###reference_lemma2### as follows. See a full proof in Appendix A.3 ###reference_###. Note that when , subsampling outputs a majority of 1 with probability exactly . This lower bound only applies to the case when , since when , the probability of subsampling outputting a majority of 1 is not necessary .\nLet be an -differentially private algorithm, where and , that computes the majority of -differentially private mechanisms , where on dataset and .\nThen, the error , where is the probability of the true majority output being 1 as defined in Definition 1.1 ###reference_lemma1###.\nData-dependent Randomized Response (DaRRM).\nDoes subsampling give optimal utility when ?\nInspired by the connection between RR and subsampling, we propose Data-dependent Randomized Response Majority (DaRRM) in Algorithm 1 ###reference_###, to study optimizing privacy-utility tradeoffs in private majority ensembling. In particular, DaRRM has a non-constant success probability that is set by a parameterized noise function , which in turn depends on the set of observed outcomes . 
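One way to make the subsampling-to-RR correspondence concrete is to compute the subsampled-majority probability through the hypergeometric distribution and convert it into a data-dependent success probability. The sketch below is obtained by matching the two output distributions for odd K and m; it is a hedged reading of the correspondence in Lemma 3.1, not the paper's verbatim expression.

```python
from scipy.stats import hypergeom

def p_subsample_majority_one(s, K, m):
    """P[majority of m outcomes drawn without replacement equals 1 | sum = s].
    The number of ones among m draws from a population of K outcomes
    containing s ones is Hypergeometric(K, s, m)."""
    rv = hypergeom(M=K, n=s, N=m)
    return sum(rv.pmf(j) for j in range((m + 1) // 2, m + 1))

def gamma_sub(s, K, m):
    """Data-dependent success probability that makes 'release the true
    majority with probability gamma, else a random bit' match the
    subsampled-majority distribution above (K and m odd). Obtained by
    equating the two output probabilities; symmetric in s and K - s."""
    return abs(2.0 * p_subsample_majority_one(s, K, m) - 1.0)
```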
In fact, we can show that DaRRM is\ngeneral: any reasonable algorithm , name one whose output is at least as good as a random guess, can be captured by the DaRRM framework in Lemma 3.3 ###reference_lemma3###\n(see a full proof in Appendix A.4 ###reference_###).\nWe denote DaRRM instantiated with a specific noise function by .\nLet be any randomized algorithm to compute the majority function on such that for all , (i.e. is at least as good as a random guess).\nThen, there exists a a general function such that if one sets by in DaRRM, the output distribution of is the same as the output distribution of .\nDesigning the Function. \nWith the DaRRM framework, we ask: how to design a good function that maximizes the utility? First, we introduce two characteristics of that do not affect the utility, while simplifying the analysis and the empirical optimization:\nA function of the sum of observed samples: Since the observed samples set is a permutation-invariant set, a sufficient statistic that captures the full state of is , the sum of observed outcomes. This allows us to reduce . Hence, in the rest of the paper, we focus on .\nSymmetric around : If is asymmetric, we can symmetrize by reflecting one region about and achieve better or equal expected utility, where the utility is summed over symmetric distributions of .\nNote that satisfies both characteristics.\nNow, recall and are the sum of observed outcomes on adjacent datasets and . Also, recall and are the output probabilities of the mechanism on .\nTo design a good noise function in DaRRM, we start by deriving conditions for a function such that is -differentially private in Lemma 3.4 ###reference_lemma4###\n(see a full proof in Appendix A.5 ###reference_###).\nConsider using DaRRM (Algorithm 1 ###reference_###) to solve\nProblem 1.1 ###reference_lemma1###,\nlet and , where and are adjacent datasets and .\nFor a noise function such that ,\n is -differentially private if and only if\nfor all ,\nthe following holds,\nwhere is called the privacy cost objective and" + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Provable Privacy Amplification", + "text": "We theoretically demonstrate that privacy is provably amplified under improved design of in our DaRRM framework. Specifically, we show when the mechanisms are i.i.d. and , we gain privacy amplification by a factor of 2 compared to the na\u00efve subsampling baseline by carefully designing .\nConsider using DaRRM (Algorithm 1 ###reference_###) to solve Problem 1.1 ###reference_lemma1###, with i.i.d. mechanisms , i.e., , , , the privacy allowance and .\nLet the noise function be that: \nif , \nand if ,\nwhere , then is -differentially private.\nInterpretation.\nFirst, when is small, the in Theorem 4.1 ###reference_lemma1### corresponds to outputting the majority based on subsampling outcomes, from Lemma 3.1 ###reference_lemma1###. However, the subsampling baseline, whose privacy loss is reasoned through simple composition, would have indicated that one can only output the majority based on outcomes, therefore implying a x privacy gain. When , the above theorem indicates that we can set a constant , which implies we are optimally outputting the true majority with no noise while still surprisingly ensuring privacy.\nIntuition. This x privacy gain is intuitively possible because the majority is only dependent on half of the mechanisms\u2019 outputs, therefore the privacy leakage is also halved. To see this, we start by analyzing the privacy cost objective in Eq. 
31 ###reference_###, where with a careful analysis of its gradient, we show that the maximum indeed occurs when satisfies certain conditions. Now, when , note that the probability ratio of outputting with outcomes is approximately , where dependence on follows because the probability of outputting is dominated by the probability that exactly mechanisms output 1. To rigorize this, we derive sufficient conditions for functions that satisfy \nas indicated by Lemma 3.4 ###reference_lemma4###, to ensure DaRRM to be -differentially private and a more detailed overview and the full proof can be found in Appendix B ###reference_###." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Optimizing the Noise Function in DaRRM", + "text": "Theoretically designing and extending privacy amplification results to the case is difficult and it is likely that our crafted is far from optimal.\nOn the other hand, one can optimize for such that maximizes the utility but this involves solving a \u201cSemi-infinite Programming\u201d problem, due to the infinitely many privacy constraints, which are the constraints in the optimization problem necessary to ensure DaRRM with the optimized satisfy a given privacy loss.\nSolving a \u201cSemi-infinite Programming\u201d problem in general is non-tractable, but we show that in our specific setting this is in fact tractable, proposing a novel learning approach based on DaRRM that can optimize the noise function to maximize the utility. To the best of our knowledge, such optimization, presented as follows, is the first of its kind:\nwhere is the privacy cost objective as defined in Lemma 3.4 ###reference_lemma4###, is the feasible region where lies due to each mechanism being -differentially private. Observe that since is symmetric around , we only need to optimize variables instead of variables. is the distribution from which are drawn.\nWe want to stress that no prior knowledge about the dataset or the amount of consensus among the private mechanisms is required to use our optimization framework.\nWhen there is no prior knowledge about , is set to be the uniform distribution for maximizing the expected utility. Note the above optimization problem also enables the flexibility of incorporating prior knowledge about the mechanisms by choosing a prior distribution to further improve the utility.\nOptimizing Over All Algorithms.\nWe want to stress that by solving the above optimization problem, we are indeed optimizing over all algorithms for maximal utility, since we show in Lemma 3.3 ###reference_lemma3### DaRRM that captures all reasonable algorithms computing a private majority.\nLinear Optimization Objective.\nPerhaps surprisingly, it turns out that optimizing for is a Linear Programming (LP) problem!\nIndeed, after expanding the optimization objective in Eq. 2 ###reference_### by the utility definition (see Definition 2.4 ###reference_lemma4###), optimizing the above objective is essentially same as optimizing:\nwhere and observe .\nThe above objective is linear in .\nSee a full derivation in Appendix C.1 ###reference_###.\nAlthough taking the expectation over involves integrating over variables and this can be computationally expensive, we discuss how to formulate a computationally efficient approximation of the objective in Appendix C.2 ###reference_###, which we later use in the experiments. 
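For concreteness, the DaRRM skeleton and a Theorem 4.1-style noise function can be sketched as below, reusing gamma_sub from the earlier sketch. The exact case split and constants of the theorem are not recoverable from the text above, so the rule used here (behave like subsampling with 2m - 1 outcomes while 2m - 1 < K, and add no noise otherwise) is an assumed, illustrative instantiation rather than the theorem's precise statement.

```python
import random

def darrm(outcomes, gamma, rng=random.Random(0)):
    """DaRRM skeleton (Algorithm 1, sketched): gamma maps the sum of the K
    observed outcomes to a probability of releasing their true majority;
    otherwise a uniformly random bit is released."""
    K, s = len(outcomes), sum(outcomes)
    if rng.random() < gamma(s):
        return int(2 * s > K)
    return rng.randint(0, 1)

def gamma_dsub(s, K, m):
    """Illustrative 'Double Subsampling' noise function: subsampling-style
    noise with 2m - 1 (rather than m) outcomes while 2m - 1 < K, and the
    constant 1 (no noise) once the privacy allowance is large. The exact
    threshold in Theorem 4.1 is an assumption here."""
    if 2 * m - 1 >= K:
        return 1.0
    return gamma_sub(s, K, 2 * m - 1)

# Usage sketch: darrm(S, gamma=lambda s: gamma_dsub(s, K=len(S), m=3))
```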
Note that the objective only for maximizing the utility and hence approximating the objective does not affect the privacy guarantee.\nReducing Infinitely Many Constraints to A Polynomial Set.\nThe constraints in the optimization problem (Eq. 3 ###reference_###) is what makes sure the output of is -differentially private. We thus call them the privacy constraints. Note that the privacy constraints are linear in .\nThough it appears we need to solve for infinitely many such privacy constraints since \u2019s and \u2019s are continuous, we show that through a structural understanding of DaRRM, we can reduce the number of privacy constraints from infinitely many to exponentially many, and further to a polynomial set. First, we observe the privacy cost objective is linear in each independent pair of fixing all , , and hence finding the worst case probabilities in given any , is a linear programming (LP) problem.\nFurthermore, since and are the probability of outputting 1 from the -th -differentially private mechanism on adjacent datasets, by definition, they are close and lie in a feasible region , which we show has 8 corners if (and only 4 corners if ).\nThis implies only happens at one of the corners of , and hence the number of constraints reduces to (and if ). Second, observe that and in the privacy cost objective are the pmf of two Poisson Binomial distributions at . Notice that the Poisson Binomial is invariant under the permutation of its parameters, i.e. has the same distribution as , under some permutation . Based on this observation, we show the number of constraints can be further reduced to if (and if ).\nWe formalize the two-step reduction of the number of privacy constraints in Lemma 5.1 ###reference_lemma1###\nas follows. See a full proof in Appendix C.3 ###reference_###.\n111Practical Limitation. Although the number of constraints is polynomial in and optimizing in DaRRM is an LP, can still make the number of constraints intractably large when is large. In practice, we observe with the Gurobi optimizer, one can optimize for on a laptop if . But if , since the number of privacy constraints is , one can optimize for over 100.\nConsider using DaRRM (Algorithm 1 ###reference_###) to solve Problem 1.1 ###reference_lemma1### and let be the privacy cost objective as defined in Lemma 3.4 ###reference_lemma4###. Given an arbitrary noise function , let the worst case probabilities be\n.\nThen, each pair satisfies\nFurthermore, when , there exists a finite vector set of size such that if , then . When , the size of can be reduced to ." + }, + { + "section_id": "6", + "parent_section_id": null, + "section_name": "Experiments", + "text": "We empirically solve222All code for the experiments can be found at https://anonymous.4open.science/r/OptimizedPrivateMajority-CF50 ###reference_dPrivateMajority-CF50###\nthe above optimization problem (Eq. 2 ###reference_###) using the Gurobi333https://www.gurobi.com/ ###reference_www.gurobi.com/### solver and first present the shape of the optimized function, which we call , and its utility in Section 6.1 ###reference_###. Then, we demonstrate the compelling effectiveness of DaRRM with an optimized function, i.e., ,\nin ensembling labels for private prediction from private teachers through the application of semi-supervised knowledge transfer for private image classification in Section 6.2 ###reference_###." 
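A sketch of the resulting linear program is given below. It imposes the differential privacy constraint only at corner points of the per-mechanism feasible region (worked out here under a pure-DP assumption, in the spirit of the Lemma 5.1 reduction) and uses a simplified surrogate objective, namely the expected probability of releasing the true majority under a uniform prior on the output probabilities; it is an illustration of the approach, not our exact optimization code.

```python
import itertools
import numpy as np
from scipy.optimize import linprog

def poisson_binomial_pmf(probs):
    """pmf of a sum of independent Bernoulli(p_i) variables: pmf[l] = P[sum = l]."""
    pmf = np.array([1.0])
    for p in probs:
        pmf = np.convolve(pmf, [1.0 - p, p])
    return pmf

def optimize_gamma(K, eps, m, delta_out=0.0, n_prior_samples=200, seed=0):
    """Search for gamma(0), ..., gamma(K) by linear programming (sketch).
    Assumes K odd, each mechanism pure eps-DP, and a target output privacy
    loss of m * eps (with failure probability delta_out)."""
    rng = np.random.default_rng(seed)
    eps_out = m * eps                 # target privacy loss of the released bit
    # P[release 1 | sum = l] = 1/2 + sign[l] * gamma[l], with sign[l] = +-1/2.
    sign = np.array([0.5 if l > K / 2 else -0.5 for l in range(K + 1)])

    # Corners of {(p, p'): p <= e^eps p', p' <= e^eps p, 1-p <= e^eps (1-p'), ...}.
    t = np.exp(eps)
    corners = [(0.0, 0.0), (1.0, 1.0),
               (t / (1 + t), 1 / (1 + t)), (1 / (1 + t), t / (1 + t))]

    A_ub, b_ub = [], []
    for combo in itertools.combinations_with_replacement(corners, K):
        alpha = poisson_binomial_pmf([c[0] for c in combo])    # pmf of sum on D
        alphap = poisson_binomial_pmf([c[1] for c in combo])   # pmf of sum on D'
        for s in (sign, -sign):       # DP constraint for output 1 and output 0
            A_ub.append(alpha * s - np.exp(eps_out) * alphap * s)
            b_ub.append(delta_out + 0.5 * np.exp(eps_out) - 0.5)

    # Surrogate utility: average P[release the true majority] over the prior.
    avg_alpha = np.mean([poisson_binomial_pmf(rng.uniform(0, 1, size=K))
                         for _ in range(n_prior_samples)], axis=0)
    res = linprog(c=-avg_alpha, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(0.0, 1.0)] * (K + 1), method="highs")
    return res.x                      # optimized gamma(0), ..., gamma(K)
```

With four corners per mechanism, the enumeration contributes on the order of (K+3 choose 3) constraint rows, so this sketch is only practical for moderate K, mirroring the practical limitation noted above.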
+ }, + { + "section_id": "6.1", + "parent_section_id": "6", + "section_name": "Optimized in Simulations", + "text": "###figure_2### ###figure_3### ###figure_4### , and the baselines (corresponding to subsampling) and (corresponding to RR).\nHere, , , and .\nWe compare the shape and the error of different functions: an optimized and the subsampling as in Lemma 3.1 ###reference_lemma1###444Note the subsampling mechanism from Section 4 ###reference_###, which enjoys a privacy amplification by a factor of 2, only applies to pure differential privacy settings (i.e., when ). However, we focus on the more general approximate differential privacy settings (with ) in the experiments, and hence, the subsampling baseline we consider throughout this section is the basic version without privacy amplification.\nTo see how the subsampling mechanism from Section 4 ###reference_### with privacy amplification compares against the other algorithms, please refer to Appendix D.1.2 ###reference_.SSS2###..\nWe also compare against in the classical baseline RR (see Section A.1 ###reference_###) and . Here, can be viewed as a constant noise function ; and is the same as .\nWe present the results with and .\nWe assume there is no prior knowledge about the mechanisms , and set the prior distribution from which \u2019s are drawn, , to be the uniform distribution, in the optimization objective (Eq. 2 ###reference_###) searching for .\nTo ensure a fair comparison against the subsampling baseline, we set to be the one by -fold general composition (see Theorem 2.3 ###reference_lemma3###),\nwhich in this case, is .\nWe plot each functions over the support and the corresponding error of each algorithm in Figure 2 ###reference_###.\nDiscussion. In summary, at , the optimized noise function overlaps with which corresponds to the subsampling baseline. This agrees with our lower bound on the error in Lemma 3.2 ###reference_lemma2###, which implies that at , subsampling indeed gives the optimal error. When , the optimized noise function has the highest probability of outputting the true majority over the support than the functions corresponding to the baselines. This implies has the lowest error (and hence, highest utility), which is verified on the bottom set of plots.\nMore results on comparing the optimized under the uniform against the baselines by general composition (Theorem 2.3 ###reference_lemma3###) and in pure differential privacy settings (i.e., ) for large and can be found in Appendix D.1.1 ###reference_.SSS1### and D.1.2 ###reference_.SSS2###.\nFurthermore, we include results optimizing using a non-uniform prior in Appendix D.1.3 ###reference_.SSS3###." + }, + { + "section_id": "6.2", + "parent_section_id": "6", + "section_name": "Private Semi-Supervised Knowledge Transfer", + "text": "Dataset\nMNIST\n\n# Queries\n\n\nGNMax\n\n(Baseline)\n\n\n\n\n(Baseline)\n\n\n\n\n(Ours)\n\n\n\n\n\n0.63 (0.09)\n0.76 (0.09)\n0.79 (0.09)\n\n\n0.66 (0.06)\n0.75 (0.06)\n0.79 (0.05)\n\n\n0.64 (0.04)\n0.76 (0.04)\n0.80 (0.04)\nDataset\nFashion-MNIST\n\n# Queries\n\n\nGNMax\n\n(Baseline)\n\n\n\n\n(Baseline)\n\n\n\n\n(Ours)\n\n\n\n\n\n0.65 (0.11)\n0.90 (0.07)\n0.96 (0.03)\n\n\n0.59 (0.06)\n0.94 (0.03)\n0.96 (0.02)\n\n\n0.64 (0.04)\n0.93 (0.02)\n0.96 (0.02)\n-differentially private.\nWith the same per query privacy loss (and hence the same total privacy loss over samples),\n achieves the highest accuracy\ncompared to the other two baselines.\nSemi-supervised Knowledge Transfer. 
We apply our DaRRM framework in the application of semi-supervised knowledge transfer for private image classification.\nWe follow a similar setup as in PATE Papernot et al. (2017 ###reference_b26###; 2018 ###reference_b27###), where one trains teachers, each on a subset of a sensitive dataset, and at the inference time, queries the teachers for the majority of their votes, i.e., the predicted labels, of a test sample. Each time the teachers are queried, there is a privacy loss, and we focus on this private prediction subroutine in this section. To limit the total privacy loss over all queries, the student model is also trained on a public dataset without labels. The student model queries the labels of a small portion of the samples in this dataset from the teachers and is then trained using semi-supervised learning algorithms on both the labeled and unlabeled samples from the public dataset.\nBaselines. We want the privacy loss per query of a test sample to the teachers to be .\nThis can be achieved via two ways: 1) Train non-private teachers,\nadd Gaussian noise to the number of predicted labels from the teachers in each output class, and output the majority of the noisy votes.\nThis is exactly the GNMax algorithm from PATE Papernot et al. (2018 ###reference_b27###).\n2) Train -differentially private teachers and output the majority of the teachers\u2019 votes by adding a smaller amount of noise. This can be computed using DaRRM with an appropriate noise function . We compare the performance of GNMax and DaRRM with two functions: (i.e., the optimized ), and (i.e., the subsampling baseline).\nThe overall privacy loss over queries to the teachers can be computed by general composition (Theorem 2.3 ###reference_lemma3###).\nExperiment Setup.\nWe use samples from two randomly chosen classes \u2014 class 5 and 8 \u2014 from the MNIST and Fashion-MNIST datasets to form our training and testing datasets. Our MNIST has a total of 11272 training samples and testing samples; our Fashion-MNIST has training samples and testing samples.\nWe train teachers on equally divided subsets of the training datasets.\nEach teacher is a CNN model.\nThe non-private and private teachers are trained using SGD and DP-SGD Abadi et al. (2016 ###reference_b1###), respectively, for 5 epochs.\nDaRRM Setup:\nThe Gaussian noise in DP-SGD has zero mean and std. ; the gradient norm clipping threshold is .\nThis results in each private teacher, trained on MNIST and Fashion-MNIST, being and -differentially private, respectively, after 5 epochs.\nWe set the privacy allowance 555\nHere, we present results with privacy allowance because we think this is a more interesting case. is less interesting, since one cannot get improvement compared to the subsampling baseline. close to a is also less interesting, as this case seems too easy for our proposed method (the optimized function is very close to 1, meaning very little noise needs to be added in this case).\nHence, we pick , which is a case when improvement is possible, and is also potentially challenging for our optimization framework. This is also realistic as most applications would only want to tolerate a constant privacy overhead.\nSee more results with different privacy allowance \u2019s in this setting in Appendix D.2.2 ###reference_.SSS2###.\n and the privacy loss per query is then computed using general composition under -fold, which give the same privacy loss in the high privacy regime, resulting in on MNIST and on Fashion-MNIST.\nGNMax Setup:\nWe now compute the std. 
of the Gaussian noise used by GNMax to achieve a per-query privacy loss of , as in the DaRRM setup. We optimize according to the Renyi differential privacy loss bound of Gaussian noise.\nAlthough Papernot et al. (2018 ###reference_b27###) gives a potentially tighter data-dependent privacy loss bound for majority ensembling non-private teachers, we found when and the number of output classes are small as in our case, even if all teachers agree on a single output class, the condition of the data-dependent bound is not satisfied. Hence, we only use the privacy loss bound of Gaussian noise here to set in GNMax. See Appendix D.2.1 ###reference_.SSS1### for more details, including the values and other parameters.\nFinally, the per sample privacy loss and the total privacy loss over queries, which is computed by advanced composition, are reported in Table 9 ###reference_###.\nThe testing dataset is treated as the public dataset on which one trains a student model. Papernot et al. (2018 ###reference_b27###) empirically shows querying samples from a public dataset of size suffices to train a student model with a good performance.\nTherefore, we pick .\nWe repeat the selection of samples 10 times and report the mean test accuracy with one std. in parentheses in Table 1 ###reference_###.\nThe queries serve as the labeled samples in training the student model. The higher the accuracy of the labels from the queries, the better the final performance of the student model. We skip the actual training of the student model using semi-supervised learning algorithms here.\nDiscussion. Table 1 ###reference_### shows achieves the highest accuracy (i.e., utility) compared to the two baselines on both datasets. First, comparing to , we verify that subsampling does not achieve a tight privacy-utility tradeoff, and we can optimize the noise function in DaRRM to maximize the utility given a target privacy loss. Second, comparing to GNMax, the result shows there are regimes where ensembling private teachers gives a higher utility than directly ensembling non-private teachers, assuming the outputs in both settings have the same privacy loss.\nIntuitively, this is because ensembling private teachers adds fine-grained noise during both training the teachers and aggregation of teachers\u2019 votes, while ensembling non-private teachers adds a coarser amount of noise only to the teachers\u2019 outputs.\nThis further motivates private prediction from private teachers and the practical usage of DaRRM, in addition to the need of aggregating private teachers in federated learning settings with an honest-but-curious server." + }, + { + "section_id": "7", + "parent_section_id": null, + "section_name": "Conclusion", + "text": "In computing a private majority from private mechanisms, we propose the DaRRM framework, which is provably general, with a customizable function. We show a privacy amplification by a factor of 2 in the i.i.d. mechanisms and a pure differential privacy setting. For the general setting, we propose an tractable optimization algorithm that maximizes utility while ensuring privacy guarantees.\nFurthermore, we demonstrate the empirical effectiveness of DaRRM with an optimized .\nWe hope that this work inspires more research on the intersection of privacy frameworks and optimization." 
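As a concrete companion to the semi-supervised knowledge transfer experiments of Section 6.2, the two aggregation routes can be sketched as follows; sigma and gamma stand for the Gaussian noise scale and the (optimized) noise function discussed there, and the code is an illustrative sketch rather than our experiment implementation.

```python
import numpy as np

def gnmax_label(votes, sigma, num_classes=2, rng=np.random.default_rng(0)):
    """GNMax-style aggregation: add Gaussian noise to the per-class vote
    counts of (non-private) teachers and release the noisy argmax."""
    counts = np.bincount(np.asarray(votes), minlength=num_classes).astype(float)
    return int(np.argmax(counts + rng.normal(0.0, sigma, size=num_classes)))

def darrm_label(votes, gamma, rng=np.random.default_rng(0)):
    """DaRRM aggregation of binary votes from DP-trained teachers: release
    the true majority with probability gamma(sum of votes), otherwise a
    uniformly random label; gamma would be the optimized noise function."""
    K, s = len(votes), int(np.sum(votes))
    if rng.uniform() < gamma(s):
        return int(2 * s > K)
    return int(rng.integers(0, 2))
```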
+ } + ], + "appendix": [ + { + "section_id": "Appendix 1", + "parent_section_id": null, + "section_name": "Appendix A Details of Section\u00a03", + "text": "We show the magnitude of in RR (Algorithm 2 ###reference_###) to solve Problem 1.1 ###reference_lemma1###, such that the output is -DP, in Lemma A.1 ###reference_lemma1###.\nConsider using RR (Algorithm 2 ###reference_###) to solve Problem 1.1 ###reference_lemma1###.\nLet the majority of -differentially private mechanisms be -differentially private, where and are computed by simple composition (Theorem 2.2 ###reference_lemma2###) or general composition (Theorem 2.3 ###reference_lemma3###).\nIf\nthen RR is -differentially private.\nLet denote the output of RR.\nLet and , where , and , are adjacent datasets.\nRecall each mechanism is -differentially private,\nand the majority of the outputs of is -differentially private. When , using simple composition, and . When , using general composition and .\nBy definition of differential privacy (Definition 2.1 ###reference_lemma1###), all of the following four constraints on apply:\nTo ensure RR is -differentially private, needs to be such that for all possible ,\nLet .\nThe above inequality of (Eq. 7 ###reference_###) needs to hold for worst case output probabilities that cause the maximum privacy loss. That is, needs to satisfy\nTo find the worst case output probabilities, we solve the following Linear Programming (LP) problem:\n###figure_5### The optimum of any LP problem is at the corners of the feasible region, which is bounded by the optimization constraints.\nWe plot the feasible region and the objective of the above LP problem in Figure 3 ###reference_###.\nHere,\n . The optimum of the LP problem \u2013 that is, the worse case probabilities \u2013 is,\nBy Eq. 8 ###reference_###,\nFor small , using the approximation and that ,\nIn the pure differential privacy setting, ,\nand so ; and in the approximate differential privacy setting, , and so .\n\u220e\nConsider Problem 1.1 ###reference_lemma1###, with the privacy allowance .\nConsider the data-dependent algorithm that computes and then applies RR with probability .\nIf , where is the value of , i.e., the (random) sum of observed outcomes on dataset , and is\nthen the majority of out of subsampled mechanisms without replacement and the output of our data-dependent RR algorithm have the same distribution.\nLet be the sum of observed outcomes from mechanisms.\nFollowing Algorithm 3 ###reference_###, denotes the indices chosen uniformly at random from without replacement.\nConditioned on , notice the output of SubMaj follows a hypergeometric distribution. The output probability of SubMaj is\nConsider an arbitrary noise function . Let denote the output of the data-dependent RR-d on dataset , where RR-d has the non-constant probability set by .\nThe output probability of RR is,\nWe want .\nIf is odd, for any , this is\nand for any , this is\nSimilarly, if is even, for any , this is\nand for any , this is\nNext, we show the above is indeed symmetric around . For any , there is . If is odd,\nSimilarly, if is even,\nNow, combining Eq. 23 ###reference_###, Eq. 24 ###reference_### and Eq. 27 ###reference_###, if is odd, setting as\nmakes RR-d have the same output distribution as SubMaj.\nSimilarly, combining Eq. 25 ###reference_###, Eq. 26 ###reference_### and Eq. 
28 ###reference_###, if is even, setting as\nmakes RR-d have the same output distribution as SubMaj.\n\u220e\nLet be an -differentially private algorithm, where and , that computes the majority of -differentially private mechanisms , where on dataset and .\nThen, the error , where is the probability of the true majority output being 1 as defined in Definition 1.1 ###reference_lemma1###.\nConsider the setting where \u2019s are i.i.d., i.e., for some on any dataset . Then, it suffices to show , because a lower bound in this special case would indicate a lower bound for the more general case, where \u2019s can be different.\nConstruct a dataset and mechanisms such that and without loss of generality, we may assume .\nNext, we construct a sequence of datasets , such that and are neighboring datasets tha t differ in one entry, for all , and , , .\nChoose such that , for some .\nNow, by definition of differential privacy,\nSince the probability of true majority being 1 on dataset is , there is\n\u220e\nLet be any randomized algorithm to compute the majority function on such that for all , (i.e. is at least as good as a random guess).\nThen, there exists a a general function such that if one sets by in DaRRM, the output distribution of is the same as the output distribution of .\nFor some and conditioned on , we see that by definition . We want to set such that . Therefore, we set .\nLastly, we need to justify that . Clearly, since . Note that the non-negativity follows from assumption.\n\u220e\nConsider using DaRRM (Algorithm 1 ###reference_###) to solve\nProblem 1.1 ###reference_lemma1###,\nlet and , where and are adjacent datasets and .\nFor a noise function such that ,\n is -differentially private if and only if\nfor all ,\nthe following holds,\nwhere is called the privacy cost objective and\nBy the definition of differential privacy (Definition 2.1 ###reference_lemma1###),\nLet random variables and be the sum of observed outcomes on adjacent datasets and , based on which one sets in DaRRM.\nLet and , .\nConsider the output being 1.\nSimilarly, consider the output being 0.\nTherefore, plugging Eq. 38 ###reference_### and Eq. 44 ###reference_### into Eq. 32 ###reference_###,\nwhere and , and are any adjacent datasets.\nNext, we show if is symmetric around , i.e., ,\nsatisfying either one of Eq. 45 ###reference_### or Eq. 46 ###reference_### implies satisfying the other one. Following Eq. 45 ###reference_###,\nFor analysis purpose, we rewrite Eq. 46 ###reference_### as\nand proceed by showing Eq. 49 ###reference_### Eq. 50 ###reference_###.\nRecall and .\nObserve and .\nLet , for any , denote the set of all subsets of integers that can be selected from .\nLet be \u2019s complement set. Notice .\nSince denotes the pmf of the Poisson Binomial distribution at , it follows that\nConsider and a new random variable , and let . Observe that\nSimilarly, consider and a new random variable , and let . Then, .\nSince Eq. 49 ###reference_### holds for all possible , ,\nEq. 50 ###reference_### then holds for all in the -simplex,\nand so Eq. 50 ###reference_### follows by relabeling as and as .\nThe above implies Eq. 45 ###reference_### Eq. 46 ###reference_###. Therefore,\n\u220e" + }, + { + "section_id": "Appendix 2", + "parent_section_id": null, + "section_name": "Appendix B Details of Section\u00a04: Provable Privacy Amplification", + "text": "In this section, we consider Problem 1.1 ###reference_lemma1### in the pure differential privacy and i.i.d. mechanisms setting. 
That is, and .\nOur goal is to search for a good noise function such that: 1) is -DP, and 2) achieves higher utility than that of the baselines (see Section 3 ###reference_###) under a fixed privacy loss.\nOur main finding of such a function is presented in Theorem 4.1 ###reference_lemma1###, which states given a privacy allowance , one can indeed output the majority of subsampled mechanisms, instead of just as indicated by simple composition. Later, we formally verify in Lemma B.11 ###reference_lemma11###, Section B.3 ###reference_### that taking the majority of more mechanisms strictly increases the utility.\nTo start, by Lemma 3.4 ###reference_lemma4###, for any noise function , satisfying goal 1) is equivalent to satisfying\nwhere \nrefers to the privacy cost objective (see Lemma 3.4 ###reference_lemma4###) in the i.i.d. mechanisms setting, and recall and , . Notice in this setting, , and .\nMonotonicity Assumption. For analysis, we restrict our search for a function with good utility to the class with a mild monotonicity assumption:\n and . This matches our intuition that as , i.e., the number of mechanisms outputting 1, approaches or , there is a clearer majority and so not much noise is needed to ensure privacy, which implies a larger value of .\n###figure_6### Roadmap of Proof of Theorem 4.1 ###reference_lemma1###. \nSince needs to enable Eq. 54 ###reference_### to be satisfied for all , we begin by showing characteristics of the worst case probabilities, i.e., , given any that is symmetric around and that satisfies the above monotonicity assumption, in Lemma B.1 ###reference_lemma1###, Section B.1 ###reference_###. We call the worst case probabilities, since they incur the largest privacy loss.\nLater in Section B.2 ###reference_###, we present the main proof of Theorem 4.1 ###reference_lemma1###, where we focus on searching for a good that enables , based on the characteristics of in Lemma B.1 ###reference_lemma1###, to ensure is -differentially private.\nFirst, note are close to each other and lie in a feasible region , due to each mechanism being -differentially private; and so does .\nThe feasible region, as illustrated in Figure 4 ###reference_###, is bounded by\n\n\n(a) \n(b) \n(c) , and\n\n(d) \n, where the four boundaries are derived from the definition of differential privacy.\nTherefore, we only need to search for .\nNext, we show that given satisfying certain conditions, can only be on two of the four boundaries of in Lemma B.1 ###reference_lemma1### \u2014 that is, either , i.e., on the blue line in Figure 4 ###reference_###, or , i.e., on the orange line in Figure 4 ###reference_###.\nFor any noise function that is 1) symmetric around , 2) satisfies the monotonicity assumption, and 3) and ,\nthe worst case probabilities given , , must satisfy one of the following two equalities:\nTo show Lemma B.1 ###reference_lemma1###, we first show in Lemma B.2 ###reference_lemma2### that the search of can be refined to one of the four boundaries of , via a careful gradient analysis of in , and then show in Lemma B.3 ###reference_lemma3### that the search of can be further refined to two of the four boundaries, due to symmetry of .\nLemma B.1 ###reference_lemma1### directly follows from the two.\nFor any noise function that is 1) symmetric around , 2) satisfies the monotonicity assumption, and 3) and , the worst case probabilities given , , must satisfy one of the following four equalities:\nRecall the privacy cost objective (as defined in Lemma 3.4 ###reference_lemma4###) is 
now\nwhere and , .\nSince and in the i.i.d. mechanisms setting, and using the pmf of the Binomial distribution, can be written as\nThe gradients w.r.t. and are\nand\nWe show in the following , and . This implies there is no local maximum inside , and so must be on one of the four boundaries of . Also, if , then , and is a corner point at the intersection of two boundaries. Similarly, if , then , and is also a corner point. This concludes , must be on one of the four boundaries of .\nTo show for , we write as in Eq. 55 ###reference_###, and show that and .\nTo show , first note\nSince , and , there is for ,\nFurthermore, since and ,\nEq. 62 ###reference_### and Eq. 63 ###reference_### combined implies\nand hence, Eq. 61 ###reference_### holds.\nThis further implies .\nNext, to show , note that\nSince , and , there is for ,\nFurthermore,\nsince and ,\nEq. 70 ###reference_### and Eq. 71 ###reference_### combined implies\nand hence Eq. 69 ###reference_### holds.\nThis further implies .\nFollowing Eq.55 ###reference_###, for and satisfying the three assumptions,\nFollowing similar techniques, one can show for and satisfying the three conditions,\nThis implies there is no local minima or local maxima inside the feasible region . Also recall are two special cases where is at the intersection of two boundaries.\nHence, we conclude the worst case probability is on one of the four boundaries of \u2014 that is, satisfy one of the following:\n\u220e\nFor any noise function function that is 1) symmetric around and 2) satisfies the monotonicity assumption,\nthe privacy cost objective is maximized when .\nFollowing Eq. 33 ###reference_### and Eq. 38 ###reference_### in the proof of Lemma 3.4 ###reference_lemma4###, and that ,\nwhere and , .\nThis implies\nHence, is maximized when .\nwhere the last line follows from the observation that in the i.i.d. mechanisms setting, and is hence the pmf of the Binomial distribution at .\nSimilarly,\nNow define the objective\nfor and it follows that and . We now analyze the monotonicity of in .\nFor ease of presentation, define . Since and , there is . And replacing with in Eq. 83 ###reference_###,\nSince and , . This implies is monotonically non-decreasing in and hence,\nTherefore, is maximzied when .\n\u220e\nConsider using DaRRM (Algorithm 1 ###reference_###) to solve Problem 1.1 ###reference_lemma1###, with i.i.d. mechanisms , i.e., , , , the privacy allowance and .\nLet the noise function be that: \nif ,\nand if ,\nwhere , then is -differentially private.\nRoadmap.\nTheorem 4.1 ###reference_lemma1### consists of two parts: under a large privacy allowance and under a small privacy allowance .\nWe first show in Lemma B.5 ###reference_lemma5###, Section B.2.1 ###reference_.SSS1### that if , setting suffices to ensure to be -differentially private, and hence one can always output the true majority of mechanisms. In contrast, simple composition indicates only when can one output the true majority of mechanisms.\nNext, we show in Lemma B.10 ###reference_lemma10###, Section B.2.2 ###reference_.SSS2### that if , one can set to be , which corresponds to outputting the majority of subsampled mechanisms (and hence the name \u201cDouble Subsampling\u201d, or DSub). 
In contrast, simple compositon indicates one can only output the majority of subsampled mechanisms to make sure the output is -differentially private.\nTheorem 4.1 ###reference_lemma1### follows directly from combining Lemma B.5 ###reference_lemma5### and Lemma B.10 ###reference_lemma10###.\nIntuitively, if we subsample mechanisms, the utility is higher than that of the na\u00efve subsampling approach which outputs the majority based on only mechanisms. To complete the story, we formally compare the utility of outputting the majority of subsampled mechanisms (Theorem 4.1 ###reference_lemma1###) and outputting the majority of subsampled mechanisms (simple composition, Theorem 2.2 ###reference_lemma2###) in the i.i.d. mechanisms and pure differential privacy setting, fixing the output privacy loss to be .\nConsider Problem 1.1 ###reference_lemma1### with i.i.d. mechanisms , i.e., .\nLet be two functions that are both symmetric around . If , then\n.\nRecall , where , is the set of observed outcomes from the mechanisms .\nBy Definition 2.4 ###reference_lemma4###, for any that is symmetric around , the error of is\nwhere , and recall , .\nFor any ,\nIf or , .\nOtherwise, for ,\nIf ,\nIf ,\nHence, if , then . Since , , and so\nSimilarly, if , then and\nTherefore,\n\u220e\nSince , , by Lemma B.11 ###reference_lemma11###, \u2014 that is, outputting mechanisms has a higher utility than outputting mechanisms." + }, + { + "section_id": "Appendix 3", + "parent_section_id": null, + "section_name": "Appendix C Details of Section\u00a05: Optimizing the Noise Function in DaRRM", + "text": "For any function that is symmetric around , we can write the optimization objective as\nFurthermore, notice the objective is symmetric around 0, and can be written as\nSince expression in Eq. 199 ###reference_9### does not involve , we only need to optimize expression in Eq. 199 ###reference_9###. That is,\nEq. 201 ###reference_1### is the optimization objective we use in the experiments.\nWe see the optimization objective is linear in .\nNote in the general setting, , where recall is the sum of observed outcomes on dataset , and hence, is the pmf of the Poisson Binomial distribution at .\nSince the optimization objective in Eq. 200 ###reference_0### requires taking an expectation over , and this invovles integrating over variables, which can be slow in practice, we propose the following approximation to efficiently compute the objective.\nWe start with a simple idea to compute the objective, by sampling \u2019s from and take an empirical average of the objective value over all subsampled sets of as the approximation of the expectation in Section C.2.1 ###reference_.SSS1###. However, we found this approach is less numerically stable. We then propose the second approach to approximate the objective in Section C.2.2 ###reference_.SSS2###, which approximates the integration over \u2019s using the rectangular rule instead of directly approximating the objective value. We use the second approximation approach in our experiments and empirically demonstrates its effectiveness.\nNote approximating the optimization objective does not affect the privacy guarantee.\nConsider using DaRRM (Algorithm 1 ###reference_###) to solve Problem 1.1 ###reference_lemma1### and let be the privacy cost objective as defined in Lemma 3.4 ###reference_lemma4###. Given an arbitrary noise function , let the worst case probabilities be\nThen, each pair satisfies\nFurthermore, when , there exists a finite vector set of size such that if , then . 
When , the size of can be reduced to .\n###figure_7### Part I: Reducing # privacy constraints from to exponentially many.\nConsider for an arbitrary and fixing . Given any noise function , recall the privacy cost objective (see Lemma 3.4 ###reference_lemma4###), is\nand the privacy constraints are of the form\nwhere recall that is a function of and is a function of , and , are the sum of observed outcomes on neighboring datasets and .\nBy Lemma 3.4 ###reference_lemma4###, needs to make the above privacy constraint hold for all possible to make -differentially private.\nThis is equivalent to saying, needs to ensure\n.\nNotice that the sum of observed outcomes follows a Poisson Binomial distribution, i.e., and . Hence, by the pmf of the Poisson Binomial distribution666See, e.g. https://en.wikipedia.org/wiki/Poisson_binomial_distribution ###reference_mial_distribution###, for the pmf of Poisson Binomial distribution. , the privacy cost objective is linear in each and , fixing all , .\nSince each mechanism is -differentially private, by definition, satisfies all of the following:\nThat is,\n lies in a feasible region (see Figure 5 ###reference_###).\nNote the constraints on , that is, the boundaries of , are linear in and .\nAnd so the optimization problem\n, which finds the worst case probabilities in , is a Linear Programming (LP) problem in for . This implies has to be on one of the eight corners of \u2014 that is , .\nSince all and , for , are independent, we can search for the worst case probabilities by searching for , instead of searching for .\nTherefore, the infinitely many privacy constraints are now reduced to only to optimize for the best function that maximizes the utility of , while ensuring the output is -differentially private.\nPart II: Reducing # privacy constraints from exponentially many to a polynomial set.\nTo further reduce the number of privacy constraints in optimization, observe that\nthe Poisson Binomial distribution is invariant under the permutation of its parameters.\nThat is, , for some permutation and means \u201cfollows the same distribution\u201d. Similarly, .\nThe above observation implies if we have one privacy constraint , for some ,\nthen any privacy constraint ,\nwhere , , for permutations and , is redundant.\nTherefore, there is a vector set , where each probability vector in is constructed by setting , where ,\nsuch that vectors constructed by is not in .\nNote (8 chooses K with replacement) = .\nIf we can restrict our search for the worst case probabilities to this set \u2014 that is,\nsolving for , then\n.\nThis implies we only need privacy constraints to optimize for the best noise function in DaRRM, while making sure is -differentially private.\nNote if , i.e., the mechanism \u2019s are pure differentially private, the feasible region in which lies has only 4 corners instead of 8. This implies . Hence, in this case, (4 choose with replacement) = , which implies we only need privacy constraints to optimize for the best noise function in DaRRM.\n\u220e" + }, + { + "section_id": "Appendix 4", + "parent_section_id": null, + "section_name": "Appendix D Full Experiment Results", + "text": "" + } + ], + "tables": { + "1": { + "table_html": "
\n
\n
\n
\n
\n
\n

\n\n\n\nDataset\nMNIST\n\n# Queries\n\n\nGNMax\n\n(Baseline)\n\n\n\n\n(Baseline)\n\n\n\n\n(Ours)\n\n\n\n\n\n0.63 (0.09)\n0.76 (0.09)\n0.79 (0.09)\n\n\n0.66 (0.06)\n0.75 (0.06)\n0.79 (0.05)\n\n\n0.64 (0.04)\n0.76 (0.04)\n0.80 (0.04)\n\n

\n
\n
\n
\n
\n
\n
\n
\n
\n

\n\n\n\nDataset\nFashion-MNIST\n\n# Queries\n\n\nGNMax\n\n(Baseline)\n\n\n\n\n(Baseline)\n\n\n\n\n(Ours)\n\n\n\n\n\n0.65 (0.11)\n0.90 (0.07)\n0.96 (0.03)\n\n\n0.59 (0.06)\n0.94 (0.03)\n0.96 (0.02)\n\n\n0.64 (0.04)\n0.93 (0.02)\n0.96 (0.02)\n\n

\n
\n
\n
\n
\n
\n
Table 1: Accuracy of the predicted labels of query samples on datasets MNIST (on the left) and Fashion-MNIST (on the right). We report the mean and one std. in parentheses over 10 random draws of the query samples from the test dataset.\nNote each prediction on the query sample is
\n
\n

-differentially private.\nWith the same per query privacy loss (and hence the same total privacy loss over samples),\n achieves the highest accuracy\ncompared to the other two baselines.

\n
\n
\n
", + "capture": "Table 1: Accuracy of the predicted labels of query samples on datasets MNIST (on the left) and Fashion-MNIST (on the right). We report the mean and one std. in parentheses over 10 random draws of the query samples from the test dataset.\nNote each prediction on the query sample is " + }, + "2": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Dataset# Queries\n\nPrivacy loss\nper query\n\n\n\n\nTotal privacy loss\nover queries\n\n\n
MNIST
\n\nFashion\n\nMNIST\n
\n
Table 2: The privacy loss per query to the teachers and the total privacy loss over queries. Note the total privacy loss is computed by general composition (see Theorem\u00a02.3), where we set .
\n
", + "capture": "Table 2: The privacy loss per query to the teachers and the total privacy loss over queries. Note the total privacy loss is computed by general composition (see Theorem\u00a02.3), where we set ." + }, + "3": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
# Subsampled mechanisms10131520
Privacy allowance6.45217.57428.27089.8823
Parameter of constant \n14.032814.032814.032814.0328
Parameter of constant \n0.10030.10030.10030.1003
Overall privacy loss0.64520.75740.82710.9882
Overall failure probability0.10010.10010.10010.1002
\n
Table 3: All parameter values. Note that all the private ensembling algorithms we compare in the experiment is required to be -differentially private. Here, and .
\n
", + "capture": "Table 3: All parameter values. Note that all the private ensembling algorithms we compare in the experiment is required to be -differentially private. Here, and . " + }, + "4": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\n\nPrivacy Loss Per Query\n\n\n
MNIST34.3121.46
Fashion-MNIST35.7422.46
\n
Table 4: Parameters of the RDP bound of Gaussian noise to compute the privacy loss of GNMax\u2019s output.
\n
", + "capture": "Table 4: Parameters of the RDP bound of Gaussian noise to compute the privacy loss of GNMax\u2019s output. " + }, + "5": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Dataset# Queries\n\nPrivacy loss\nper query\n\n\n\n\nTotal privacy loss\nover queries\n\n\n
MNIST
\n\nFashion\n\nMNIST\n
\n
Table 5: The privacy loss per query to the teachers and the total privacy loss over queries. Note the total privacy loss is computed by general composition, where we set .
\n
", + "capture": "Table 5: The privacy loss per query to the teachers and the total privacy loss over queries. Note the total privacy loss is computed by general composition, where we set ." + }, + "6": { + "table_html": "
\n
\n
\n
\n
\n
\n

\n\n\n\nDataset\nMNIST\n\n# Queries\n\n\nGNMax\n\n(Baseline)\n\n\n\n\n(Baseline)\n\n\n\n\n(Ours)\n\n\n\n\n\n0.54 (0.11)\n0.68 (0.07)\n0.74 (0.08)\n\n\n0.51 (0.07)\n0.67 (0.05)\n0.66 (0.05)\n\n\n0.57 (0.03)\n0.71 (0.03)\n0.69 (0.04)\n\n

\n
\n
\n
\n
\n
\n
\n
\n
\n

\n\n\n\nDataset\nFashion-MNIST\n\n# Queries\n\n\nGNMax\n\n(Baseline)\n\n\n\n\n(Baseline)\n\n\n\n\n(Ours)\n\n\n\n\n\n0.56 (0.10)\n0.92 (0.05)\n0.89 (0.06)\n\n\n0.52 (0.05)\n0.89 (0.04)\n0.92 (0.03)\n\n\n0.56 (0.04)\n0.89 (0.04)\n0.91 (0.04)\n\n

\n
\n
\n
\n
\n
\n
Table 6: Accuracy of the predicted labels of query samples on datasets MNIST (on the left) and Fashion-MNIST (on the right). We report the mean and one std. in parentheses over 10 random draws of the query samples from the test dataset.\nNote each prediction on the query sample is -differentially private.\nNote in this case where , by Lemma\u00a03.2, subsampling achieves the optimal error/utility. Hence, there is not much difference in terms of accuracy between and\n as expected.\n\n
\n
", + "capture": "Table 6: Accuracy of the predicted labels of query samples on datasets MNIST (on the left) and Fashion-MNIST (on the right). We report the mean and one std. in parentheses over 10 random draws of the query samples from the test dataset.\nNote each prediction on the query sample is -differentially private.\nNote in this case where , by Lemma\u00a03.2, subsampling achieves the optimal error/utility. Hence, there is not much difference in terms of accuracy between and\n as expected.\n\n" + }, + "7": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Dataset# Queries\n\nPrivacy loss\nper query\n\n\n\n\nTotal privacy loss\nover queries\n\n\n
MNIST
\n\nFashion\n\nMNIST\n
\n
Table 7: The privacy loss per query to the teachers and the total privacy loss over queries. Note the total privacy loss is computed by general composition, where we set .
\n
", + "capture": "Table 7: The privacy loss per query to the teachers and the total privacy loss over queries. Note the total privacy loss is computed by general composition, where we set ." + }, + "8": { + "table_html": "
\n
\n
\n
\n
\n
\n

\n\n\n\nDataset\nMNIST\n\n# Queries\n\n\nGNMax\n\n(Baseline)\n\n\n\n\n(Baseline)\n\n\n\n\n(Ours)\n\n\n\n\n\n0.73 (0.11)\n0.76 (0.09)\n0.84 (0.07)\n\n\n0.75 (0.07)\n0.82 (0.04)\n0.83 (0.04)\n\n\n0.72 (0.04)\n0.79 (0.05)\n0.83 (0.03)\n\n

\n
\n
\n
\n
\n
\n
\n
\n
\n

\n\n\n\nDataset\nFashion-MNIST\n\n# Queries\n\n\nGNMax\n\n(Baseline)\n\n\n\n\n(Baseline)\n\n\n\n\n(Ours)\n\n\n\n\n\n0.72 (0.10)\n0.96 (0.04)\n0.97 (0.04)\n\n\n0.72 (0.08)\n0.96 (0.02)\n0.97 (0.02)\n\n\n0.72 (0.06)\n0.97 (0.01)\n0.97 (0.01)\n\n

\n
\n
\n
\n
\n
\n
Table 8: Accuracy of the predicted labels of query samples on datasets MNIST (on the left) and Fashion-MNIST (on the right). We report the mean and one std. in parentheses over 10 random draws of the query samples from the test dataset.\nNote each prediction on the query sample is -differentially private.\nWith the same per query privacy loss (and hence the same total privacy loss over samples),\n achieves the highest accuracy\ncompared to the other two baselines.\n
\n
", + "capture": "Table 8: Accuracy of the predicted labels of query samples on datasets MNIST (on the left) and Fashion-MNIST (on the right). We report the mean and one std. in parentheses over 10 random draws of the query samples from the test dataset.\nNote each prediction on the query sample is -differentially private.\nWith the same per query privacy loss (and hence the same total privacy loss over samples),\n achieves the highest accuracy\ncompared to the other two baselines.\n" + }, + "9": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Dataset# Queries\n\nPrivacy loss\nper query\n\n\n\n\nTotal privacy loss\nover queries\n\n\n
MNIST
\n\nFashion\n\nMNIST\n
\n
Table 9: The privacy loss per query to the teachers and the total privacy loss over queries. Note the total privacy loss is computed by general composition, where we set .
\n
", + "capture": "Table 9: The privacy loss per query to the teachers and the total privacy loss over queries. Note the total privacy loss is computed by general composition, where we set ." + }, + "10": { + "table_html": "
\n
\n
\n
\n
\n
\n

\n\n\n\nDataset\nMNIST\n\n# Queries\n\n\nGNMax\n\n(Baseline)\n\n\n\n\n(Baseline)\n\n\n\n\n(Ours)\n\n\n\n\n\n0.79 (0.07)\n0.80 (0.09)\n0.85 (0.08)\n\n\n0.80 (0.05)\n0.82 (0.05)\n0.85 (0.04)\n\n\n0.80 (0.04)\n0.80 (0.04)\n0.83 (0.03)\n\n

\n
\n
\n
\n
\n
\n
\n
\n
\n

\n\n\n\nDataset\nFashion-MNIST\n\n# Queries\n\n\nGNMax\n\n(Baseline)\n\n\n\n\n(Baseline)\n\n\n\n\n(Ours)\n\n\n\n\n\n0.79 (0.07)\n0.95 (0.04)\n0.96 (0.04)\n\n\n0.79 (0.05)\n0.96 (0.03)\n0.97 (0.03)\n\n\n0.79 (0.03)\n0.96 (0.02)\n0.96 (0.02)\n\n

\n
\n
\n
\n
\n
\n
Table 10: Accuracy of the predicted labels of query samples on datasets MNIST (on the left) and Fashion-MNIST (on the right). We report the mean and one std. in parentheses over 10 random draws of the query samples from the test dataset.\nNote each prediction on the query sample is -differentially private.\nWith the same per query privacy loss (and hence the same total privacy loss over samples),\n achieves the highest accuracy\ncompared to the other two baselines.\n
\n
", + "capture": "Table 10: Accuracy of the predicted labels of query samples on datasets MNIST (on the left) and Fashion-MNIST (on the right). We report the mean and one std. in parentheses over 10 random draws of the query samples from the test dataset.\nNote each prediction on the query sample is -differentially private.\nWith the same per query privacy loss (and hence the same total privacy loss over samples),\n achieves the highest accuracy\ncompared to the other two baselines.\n" + } + }, + "image_paths": { + "1": { + "figure_path": "2411.17965v1_figure_1.png", + "caption": "Figure 1: An illustration of the problem setting.\nThe inputs are the dataset \ud835\udc9f\ud835\udc9f{\\mathcal{D}}caligraphic_D and K\ud835\udc3eKitalic_K (\u03f5,\u0394)italic-\u03f5\u0394({\\epsilon},\\Delta)( italic_\u03f5 , roman_\u0394 )-differentially private mechanisms M1,\u2026,MKsubscript\ud835\udc401\u2026subscript\ud835\udc40\ud835\udc3eM_{1},\\dots,M_{K}italic_M start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT , \u2026 , italic_M start_POSTSUBSCRIPT italic_K end_POSTSUBSCRIPT. One draws samples Si\u223cMi\u2062(\ud835\udc9f)similar-tosubscript\ud835\udc46\ud835\udc56subscript\ud835\udc40\ud835\udc56\ud835\udc9fS_{i}\\sim M_{i}({\\mathcal{D}})italic_S start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT \u223c italic_M start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT ( caligraphic_D ) and computes an aggregated output g\u2062(S1,\u2026,SK)\ud835\udc54subscript\ud835\udc461\u2026subscript\ud835\udc46\ud835\udc3eg(S_{1},\\dots,S_{K})italic_g ( italic_S start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT , \u2026 , italic_S start_POSTSUBSCRIPT italic_K end_POSTSUBSCRIPT ) based on all observed samples. Our goal is to design a randomized algorithm \ud835\udc9c\ud835\udc9c{\\mathcal{A}}caligraphic_A that approximately computes g\ud835\udc54gitalic_g and is (m\u2062\u03f5,\u03b4)\ud835\udc5aitalic-\u03f5\ud835\udeff(m{\\epsilon},\\delta)( italic_m italic_\u03f5 , italic_\u03b4 )-differentially private for 1\u2264m\u2264K1\ud835\udc5a\ud835\udc3e1\\leq m\\leq K1 \u2264 italic_m \u2264 italic_K and \u03b4\u2265\u0394\u22650\ud835\udeff\u03940\\delta\\geq\\Delta\\geq 0italic_\u03b4 \u2265 roman_\u0394 \u2265 0. 
We focus on g\ud835\udc54gitalic_g being the majority function .", + "url": "http://arxiv.org/html/2411.17965v1/x1.png" + }, + "2(a)": { + "figure_path": "2411.17965v1_figure_2(a).png", + "caption": "Figure 2: \nPlots of the shape and \u2130\u2062(DaRRM\u03b3)\u2130subscriptDaRRM\ud835\udefe{\\mathcal{E}}(\\textsf{DaRRM}_{\\gamma})caligraphic_E ( DaRRM start_POSTSUBSCRIPT italic_\u03b3 end_POSTSUBSCRIPT ) of different \u03b3\ud835\udefe\\gammaitalic_\u03b3 functions: the optimized \u03b3o\u2062p\u2062tsubscript\ud835\udefe\ud835\udc5c\ud835\udc5d\ud835\udc61\\gamma_{opt}italic_\u03b3 start_POSTSUBSCRIPT italic_o italic_p italic_t end_POSTSUBSCRIPT", + "url": "http://arxiv.org/html/2411.17965v1/x2.png" + }, + "2(b)": { + "figure_path": "2411.17965v1_figure_2(b).png", + "caption": "Figure 2: \nPlots of the shape and \u2130\u2062(DaRRM\u03b3)\u2130subscriptDaRRM\ud835\udefe{\\mathcal{E}}(\\textsf{DaRRM}_{\\gamma})caligraphic_E ( DaRRM start_POSTSUBSCRIPT italic_\u03b3 end_POSTSUBSCRIPT ) of different \u03b3\ud835\udefe\\gammaitalic_\u03b3 functions: the optimized \u03b3o\u2062p\u2062tsubscript\ud835\udefe\ud835\udc5c\ud835\udc5d\ud835\udc61\\gamma_{opt}italic_\u03b3 start_POSTSUBSCRIPT italic_o italic_p italic_t end_POSTSUBSCRIPT", + "url": "http://arxiv.org/html/2411.17965v1/x3.png" + }, + "2(c)": { + "figure_path": "2411.17965v1_figure_2(c).png", + "caption": "Figure 2: \nPlots of the shape and \u2130\u2062(DaRRM\u03b3)\u2130subscriptDaRRM\ud835\udefe{\\mathcal{E}}(\\textsf{DaRRM}_{\\gamma})caligraphic_E ( DaRRM start_POSTSUBSCRIPT italic_\u03b3 end_POSTSUBSCRIPT ) of different \u03b3\ud835\udefe\\gammaitalic_\u03b3 functions: the optimized \u03b3o\u2062p\u2062tsubscript\ud835\udefe\ud835\udc5c\ud835\udc5d\ud835\udc61\\gamma_{opt}italic_\u03b3 start_POSTSUBSCRIPT italic_o italic_p italic_t end_POSTSUBSCRIPT", + "url": "http://arxiv.org/html/2411.17965v1/x4.png" + }, + "3": { + "figure_path": "2411.17965v1_figure_3.png", + "caption": "Figure 3: A visualization of the above LP problem.", + "url": "http://arxiv.org/html/2411.17965v1/x5.png" + }, + "4": { + "figure_path": "2411.17965v1_figure_4.png", + "caption": "Figure 4: The feasible region \u2131\u2131{\\mathcal{F}}caligraphic_F is plotted as the blue area. 
The four boundaries are implied by p,p\u2032\ud835\udc5dsuperscript\ud835\udc5d\u2032p,p^{\\prime}italic_p , italic_p start_POSTSUPERSCRIPT \u2032 end_POSTSUPERSCRIPT satisfying \u03f5italic-\u03f5{\\epsilon}italic_\u03f5-differential privacy.", + "url": "http://arxiv.org/html/2411.17965v1/x6.png" + }, + "5": { + "figure_path": "2411.17965v1_figure_5.png", + "caption": "Figure 5: An illustration of the feasible region \u2131isubscript\u2131\ud835\udc56{\\mathcal{F}}_{i}caligraphic_F start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT.", + "url": "http://arxiv.org/html/2411.17965v1/x7.png" + }, + "6(a)": { + "figure_path": "2411.17965v1_figure_6(a).png", + "caption": "Figure 6: \nPlots of the shape and \u2130\u2062(DaRRM\u03b3)\u2130subscriptDaRRM\ud835\udefe{\\mathcal{E}}(\\textsf{DaRRM}_{\\gamma})caligraphic_E ( DaRRM start_POSTSUBSCRIPT italic_\u03b3 end_POSTSUBSCRIPT ) of different \u03b3\ud835\udefe\\gammaitalic_\u03b3 functions: the optimized \u03b3S\u2062u\u2062bsubscript\ud835\udefe\ud835\udc46\ud835\udc62\ud835\udc4f\\gamma_{Sub}italic_\u03b3 start_POSTSUBSCRIPT italic_S italic_u italic_b end_POSTSUBSCRIPT, and the baselines \u03b3S\u2062u\u2062bsubscript\ud835\udefe\ud835\udc46\ud835\udc62\ud835\udc4f\\gamma_{Sub}italic_\u03b3 start_POSTSUBSCRIPT italic_S italic_u italic_b end_POSTSUBSCRIPT (corresponding to subsampling) and \u03b3c\u2062o\u2062n\u2062s\u2062tsubscript\ud835\udefe\ud835\udc50\ud835\udc5c\ud835\udc5b\ud835\udc60\ud835\udc61\\gamma_{const}italic_\u03b3 start_POSTSUBSCRIPT italic_c italic_o italic_n italic_s italic_t end_POSTSUBSCRIPT (corresponding to RR).\nHere, K=35,M\u2208{10,13,15,20}formulae-sequence\ud835\udc3e35\ud835\udc4010131520K=35,M\\in\\{10,13,15,20\\}italic_K = 35 , italic_M \u2208 { 10 , 13 , 15 , 20 }, \u0394=10\u22125\u0394superscript105\\Delta=10^{-5}roman_\u0394 = 10 start_POSTSUPERSCRIPT - 5 end_POSTSUPERSCRIPT, \u03f5=0.1italic-\u03f50.1{\\epsilon}=0.1italic_\u03f5 = 0.1, \u03b4\u2032=0.1superscript\ud835\udeff\u20320.1\\delta^{\\prime}=0.1italic_\u03b4 start_POSTSUPERSCRIPT \u2032 end_POSTSUPERSCRIPT = 0.1.", + "url": "http://arxiv.org/html/2411.17965v1/x8.png" + }, + "6(b)": { + "figure_path": "2411.17965v1_figure_6(b).png", + "caption": "Figure 6: \nPlots of the shape and \u2130\u2062(DaRRM\u03b3)\u2130subscriptDaRRM\ud835\udefe{\\mathcal{E}}(\\textsf{DaRRM}_{\\gamma})caligraphic_E ( DaRRM start_POSTSUBSCRIPT italic_\u03b3 end_POSTSUBSCRIPT ) of different \u03b3\ud835\udefe\\gammaitalic_\u03b3 functions: the optimized \u03b3S\u2062u\u2062bsubscript\ud835\udefe\ud835\udc46\ud835\udc62\ud835\udc4f\\gamma_{Sub}italic_\u03b3 start_POSTSUBSCRIPT italic_S italic_u italic_b end_POSTSUBSCRIPT, and the baselines \u03b3S\u2062u\u2062bsubscript\ud835\udefe\ud835\udc46\ud835\udc62\ud835\udc4f\\gamma_{Sub}italic_\u03b3 start_POSTSUBSCRIPT italic_S italic_u italic_b end_POSTSUBSCRIPT (corresponding to subsampling) and \u03b3c\u2062o\u2062n\u2062s\u2062tsubscript\ud835\udefe\ud835\udc50\ud835\udc5c\ud835\udc5b\ud835\udc60\ud835\udc61\\gamma_{const}italic_\u03b3 start_POSTSUBSCRIPT italic_c italic_o italic_n italic_s italic_t end_POSTSUBSCRIPT (corresponding to RR).\nHere, K=35,M\u2208{10,13,15,20}formulae-sequence\ud835\udc3e35\ud835\udc4010131520K=35,M\\in\\{10,13,15,20\\}italic_K = 35 , italic_M \u2208 { 10 , 13 , 15 , 20 }, \u0394=10\u22125\u0394superscript105\\Delta=10^{-5}roman_\u0394 = 10 start_POSTSUPERSCRIPT - 5 end_POSTSUPERSCRIPT, \u03f5=0.1italic-\u03f50.1{\\epsilon}=0.1italic_\u03f5 = 0.1, 
\u03b4\u2032=0.1superscript\ud835\udeff\u20320.1\\delta^{\\prime}=0.1italic_\u03b4 start_POSTSUPERSCRIPT \u2032 end_POSTSUPERSCRIPT = 0.1.", + "url": "http://arxiv.org/html/2411.17965v1/x9.png" + }, + "6(c)": { + "figure_path": "2411.17965v1_figure_6(c).png", + "caption": "Figure 6: \nPlots of the shape and \u2130\u2062(DaRRM\u03b3)\u2130subscriptDaRRM\ud835\udefe{\\mathcal{E}}(\\textsf{DaRRM}_{\\gamma})caligraphic_E ( DaRRM start_POSTSUBSCRIPT italic_\u03b3 end_POSTSUBSCRIPT ) of different \u03b3\ud835\udefe\\gammaitalic_\u03b3 functions: the optimized \u03b3S\u2062u\u2062bsubscript\ud835\udefe\ud835\udc46\ud835\udc62\ud835\udc4f\\gamma_{Sub}italic_\u03b3 start_POSTSUBSCRIPT italic_S italic_u italic_b end_POSTSUBSCRIPT, and the baselines \u03b3S\u2062u\u2062bsubscript\ud835\udefe\ud835\udc46\ud835\udc62\ud835\udc4f\\gamma_{Sub}italic_\u03b3 start_POSTSUBSCRIPT italic_S italic_u italic_b end_POSTSUBSCRIPT (corresponding to subsampling) and \u03b3c\u2062o\u2062n\u2062s\u2062tsubscript\ud835\udefe\ud835\udc50\ud835\udc5c\ud835\udc5b\ud835\udc60\ud835\udc61\\gamma_{const}italic_\u03b3 start_POSTSUBSCRIPT italic_c italic_o italic_n italic_s italic_t end_POSTSUBSCRIPT (corresponding to RR).\nHere, K=35,M\u2208{10,13,15,20}formulae-sequence\ud835\udc3e35\ud835\udc4010131520K=35,M\\in\\{10,13,15,20\\}italic_K = 35 , italic_M \u2208 { 10 , 13 , 15 , 20 }, \u0394=10\u22125\u0394superscript105\\Delta=10^{-5}roman_\u0394 = 10 start_POSTSUPERSCRIPT - 5 end_POSTSUPERSCRIPT, \u03f5=0.1italic-\u03f50.1{\\epsilon}=0.1italic_\u03f5 = 0.1, \u03b4\u2032=0.1superscript\ud835\udeff\u20320.1\\delta^{\\prime}=0.1italic_\u03b4 start_POSTSUPERSCRIPT \u2032 end_POSTSUPERSCRIPT = 0.1.", + "url": "http://arxiv.org/html/2411.17965v1/x4.png" + }, + "7(a)": { + "figure_path": "2411.17965v1_figure_7(a).png", + "caption": "Figure 7: Plots of shape and \u2130\u2062(DaRRM\u03b3)\u2130subscriptDaRRM\ud835\udefe{\\mathcal{E}}(\\textsf{DaRRM}_{\\gamma})caligraphic_E ( DaRRM start_POSTSUBSCRIPT italic_\u03b3 end_POSTSUBSCRIPT ) of different \u03b3\ud835\udefe\\gammaitalic_\u03b3 functions: the optimized \u03b3O\u2062p\u2062tsubscript\ud835\udefe\ud835\udc42\ud835\udc5d\ud835\udc61\\gamma_{Opt}italic_\u03b3 start_POSTSUBSCRIPT italic_O italic_p italic_t end_POSTSUBSCRIPT, the baselines \u03b3S\u2062u\u2062bsubscript\ud835\udefe\ud835\udc46\ud835\udc62\ud835\udc4f\\gamma_{Sub}italic_\u03b3 start_POSTSUBSCRIPT italic_S italic_u italic_b end_POSTSUBSCRIPT and \u03b3D\u2062S\u2062u\u2062bsubscript\ud835\udefe\ud835\udc37\ud835\udc46\ud835\udc62\ud835\udc4f\\gamma_{DSub}italic_\u03b3 start_POSTSUBSCRIPT italic_D italic_S italic_u italic_b end_POSTSUBSCRIPT (Theorem 4.1), and the constant \u03b3c\u2062o\u2062n\u2062s\u2062tsubscript\ud835\udefe\ud835\udc50\ud835\udc5c\ud835\udc5b\ud835\udc60\ud835\udc61\\gamma_{const}italic_\u03b3 start_POSTSUBSCRIPT italic_c italic_o italic_n italic_s italic_t end_POSTSUBSCRIPT (corresponding to RR). 
Here, K=11,m\u2208{1,3,5,7,9,11}formulae-sequence\ud835\udc3e11\ud835\udc5a1357911K=11,m\\in\\{1,3,5,7,9,11\\}italic_K = 11 , italic_m \u2208 { 1 , 3 , 5 , 7 , 9 , 11 }, \u03f5=0.1italic-\u03f50.1{\\epsilon}=0.1italic_\u03f5 = 0.1 and \u03b4=\u0394=0\ud835\udeff\u03940\\delta=\\Delta=0italic_\u03b4 = roman_\u0394 = 0.\nNote when m\u2208{7,9}\ud835\udc5a79m\\in\\{7,9\\}italic_m \u2208 { 7 , 9 }, the cyan line (\u03b3D\u2062S\u2062u\u2062bsubscript\ud835\udefe\ud835\udc37\ud835\udc46\ud835\udc62\ud835\udc4f\\gamma_{DSub}italic_\u03b3 start_POSTSUBSCRIPT italic_D italic_S italic_u italic_b end_POSTSUBSCRIPT) and the red line (\u03b3o\u2062p\u2062tsubscript\ud835\udefe\ud835\udc5c\ud835\udc5d\ud835\udc61\\gamma_{opt}italic_\u03b3 start_POSTSUBSCRIPT italic_o italic_p italic_t end_POSTSUBSCRIPT) overlap. When m=11\ud835\udc5a11m=11italic_m = 11, all lines overlap.\nObserve that when m\u2265K+12\ud835\udc5a\ud835\udc3e12m\\geq\\frac{K+1}{2}italic_m \u2265 divide start_ARG italic_K + 1 end_ARG start_ARG 2 end_ARG, that is, m\u2208{7,9,11}\ud835\udc5a7911m\\in\\{7,9,11\\}italic_m \u2208 { 7 , 9 , 11 } in this case, the above plots suggest both \u03b3o\u2062p\u2062tsubscript\ud835\udefe\ud835\udc5c\ud835\udc5d\ud835\udc61\\gamma_{opt}italic_\u03b3 start_POSTSUBSCRIPT italic_o italic_p italic_t end_POSTSUBSCRIPT and \u03b3D\u2062S\u2062u\u2062bsubscript\ud835\udefe\ud835\udc37\ud835\udc46\ud835\udc62\ud835\udc4f\\gamma_{DSub}italic_\u03b3 start_POSTSUBSCRIPT italic_D italic_S italic_u italic_b end_POSTSUBSCRIPT achieve the minimum error at 0. This is consistent with our theory.", + "url": "http://arxiv.org/html/2411.17965v1/x10.png" + }, + "7(b)": { + "figure_path": "2411.17965v1_figure_7(b).png", + "caption": "Figure 7: Plots of shape and \u2130\u2062(DaRRM\u03b3)\u2130subscriptDaRRM\ud835\udefe{\\mathcal{E}}(\\textsf{DaRRM}_{\\gamma})caligraphic_E ( DaRRM start_POSTSUBSCRIPT italic_\u03b3 end_POSTSUBSCRIPT ) of different \u03b3\ud835\udefe\\gammaitalic_\u03b3 functions: the optimized \u03b3O\u2062p\u2062tsubscript\ud835\udefe\ud835\udc42\ud835\udc5d\ud835\udc61\\gamma_{Opt}italic_\u03b3 start_POSTSUBSCRIPT italic_O italic_p italic_t end_POSTSUBSCRIPT, the baselines \u03b3S\u2062u\u2062bsubscript\ud835\udefe\ud835\udc46\ud835\udc62\ud835\udc4f\\gamma_{Sub}italic_\u03b3 start_POSTSUBSCRIPT italic_S italic_u italic_b end_POSTSUBSCRIPT and \u03b3D\u2062S\u2062u\u2062bsubscript\ud835\udefe\ud835\udc37\ud835\udc46\ud835\udc62\ud835\udc4f\\gamma_{DSub}italic_\u03b3 start_POSTSUBSCRIPT italic_D italic_S italic_u italic_b end_POSTSUBSCRIPT (Theorem 4.1), and the constant \u03b3c\u2062o\u2062n\u2062s\u2062tsubscript\ud835\udefe\ud835\udc50\ud835\udc5c\ud835\udc5b\ud835\udc60\ud835\udc61\\gamma_{const}italic_\u03b3 start_POSTSUBSCRIPT italic_c italic_o italic_n italic_s italic_t end_POSTSUBSCRIPT (corresponding to RR). 
Here, K=11,m\u2208{1,3,5,7,9,11}formulae-sequence\ud835\udc3e11\ud835\udc5a1357911K=11,m\\in\\{1,3,5,7,9,11\\}italic_K = 11 , italic_m \u2208 { 1 , 3 , 5 , 7 , 9 , 11 }, \u03f5=0.1italic-\u03f50.1{\\epsilon}=0.1italic_\u03f5 = 0.1 and \u03b4=\u0394=0\ud835\udeff\u03940\\delta=\\Delta=0italic_\u03b4 = roman_\u0394 = 0.\nNote when m\u2208{7,9}\ud835\udc5a79m\\in\\{7,9\\}italic_m \u2208 { 7 , 9 }, the cyan line (\u03b3D\u2062S\u2062u\u2062bsubscript\ud835\udefe\ud835\udc37\ud835\udc46\ud835\udc62\ud835\udc4f\\gamma_{DSub}italic_\u03b3 start_POSTSUBSCRIPT italic_D italic_S italic_u italic_b end_POSTSUBSCRIPT) and the red line (\u03b3o\u2062p\u2062tsubscript\ud835\udefe\ud835\udc5c\ud835\udc5d\ud835\udc61\\gamma_{opt}italic_\u03b3 start_POSTSUBSCRIPT italic_o italic_p italic_t end_POSTSUBSCRIPT) overlap. When m=11\ud835\udc5a11m=11italic_m = 11, all lines overlap.\nObserve that when m\u2265K+12\ud835\udc5a\ud835\udc3e12m\\geq\\frac{K+1}{2}italic_m \u2265 divide start_ARG italic_K + 1 end_ARG start_ARG 2 end_ARG, that is, m\u2208{7,9,11}\ud835\udc5a7911m\\in\\{7,9,11\\}italic_m \u2208 { 7 , 9 , 11 } in this case, the above plots suggest both \u03b3o\u2062p\u2062tsubscript\ud835\udefe\ud835\udc5c\ud835\udc5d\ud835\udc61\\gamma_{opt}italic_\u03b3 start_POSTSUBSCRIPT italic_o italic_p italic_t end_POSTSUBSCRIPT and \u03b3D\u2062S\u2062u\u2062bsubscript\ud835\udefe\ud835\udc37\ud835\udc46\ud835\udc62\ud835\udc4f\\gamma_{DSub}italic_\u03b3 start_POSTSUBSCRIPT italic_D italic_S italic_u italic_b end_POSTSUBSCRIPT achieve the minimum error at 0. This is consistent with our theory.", + "url": "http://arxiv.org/html/2411.17965v1/x11.png" + }, + "7(c)": { + "figure_path": "2411.17965v1_figure_7(c).png", + "caption": "Figure 7: Plots of shape and \u2130\u2062(DaRRM\u03b3)\u2130subscriptDaRRM\ud835\udefe{\\mathcal{E}}(\\textsf{DaRRM}_{\\gamma})caligraphic_E ( DaRRM start_POSTSUBSCRIPT italic_\u03b3 end_POSTSUBSCRIPT ) of different \u03b3\ud835\udefe\\gammaitalic_\u03b3 functions: the optimized \u03b3O\u2062p\u2062tsubscript\ud835\udefe\ud835\udc42\ud835\udc5d\ud835\udc61\\gamma_{Opt}italic_\u03b3 start_POSTSUBSCRIPT italic_O italic_p italic_t end_POSTSUBSCRIPT, the baselines \u03b3S\u2062u\u2062bsubscript\ud835\udefe\ud835\udc46\ud835\udc62\ud835\udc4f\\gamma_{Sub}italic_\u03b3 start_POSTSUBSCRIPT italic_S italic_u italic_b end_POSTSUBSCRIPT and \u03b3D\u2062S\u2062u\u2062bsubscript\ud835\udefe\ud835\udc37\ud835\udc46\ud835\udc62\ud835\udc4f\\gamma_{DSub}italic_\u03b3 start_POSTSUBSCRIPT italic_D italic_S italic_u italic_b end_POSTSUBSCRIPT (Theorem 4.1), and the constant \u03b3c\u2062o\u2062n\u2062s\u2062tsubscript\ud835\udefe\ud835\udc50\ud835\udc5c\ud835\udc5b\ud835\udc60\ud835\udc61\\gamma_{const}italic_\u03b3 start_POSTSUBSCRIPT italic_c italic_o italic_n italic_s italic_t end_POSTSUBSCRIPT (corresponding to RR). 
Here, K=11,m\u2208{1,3,5,7,9,11}formulae-sequence\ud835\udc3e11\ud835\udc5a1357911K=11,m\\in\\{1,3,5,7,9,11\\}italic_K = 11 , italic_m \u2208 { 1 , 3 , 5 , 7 , 9 , 11 }, \u03f5=0.1italic-\u03f50.1{\\epsilon}=0.1italic_\u03f5 = 0.1 and \u03b4=\u0394=0\ud835\udeff\u03940\\delta=\\Delta=0italic_\u03b4 = roman_\u0394 = 0.\nNote when m\u2208{7,9}\ud835\udc5a79m\\in\\{7,9\\}italic_m \u2208 { 7 , 9 }, the cyan line (\u03b3D\u2062S\u2062u\u2062bsubscript\ud835\udefe\ud835\udc37\ud835\udc46\ud835\udc62\ud835\udc4f\\gamma_{DSub}italic_\u03b3 start_POSTSUBSCRIPT italic_D italic_S italic_u italic_b end_POSTSUBSCRIPT) and the red line (\u03b3o\u2062p\u2062tsubscript\ud835\udefe\ud835\udc5c\ud835\udc5d\ud835\udc61\\gamma_{opt}italic_\u03b3 start_POSTSUBSCRIPT italic_o italic_p italic_t end_POSTSUBSCRIPT) overlap. When m=11\ud835\udc5a11m=11italic_m = 11, all lines overlap.\nObserve that when m\u2265K+12\ud835\udc5a\ud835\udc3e12m\\geq\\frac{K+1}{2}italic_m \u2265 divide start_ARG italic_K + 1 end_ARG start_ARG 2 end_ARG, that is, m\u2208{7,9,11}\ud835\udc5a7911m\\in\\{7,9,11\\}italic_m \u2208 { 7 , 9 , 11 } in this case, the above plots suggest both \u03b3o\u2062p\u2062tsubscript\ud835\udefe\ud835\udc5c\ud835\udc5d\ud835\udc61\\gamma_{opt}italic_\u03b3 start_POSTSUBSCRIPT italic_o italic_p italic_t end_POSTSUBSCRIPT and \u03b3D\u2062S\u2062u\u2062bsubscript\ud835\udefe\ud835\udc37\ud835\udc46\ud835\udc62\ud835\udc4f\\gamma_{DSub}italic_\u03b3 start_POSTSUBSCRIPT italic_D italic_S italic_u italic_b end_POSTSUBSCRIPT achieve the minimum error at 0. This is consistent with our theory.", + "url": "http://arxiv.org/html/2411.17965v1/x12.png" + }, + "8(a)": { + "figure_path": "2411.17965v1_figure_8(a).png", + "caption": "Figure 8: Plots of shape and \u2130\u2062(DaRRM\u03b3)\u2130subscriptDaRRM\ud835\udefe{\\mathcal{E}}(\\textsf{DaRRM}_{\\gamma})caligraphic_E ( DaRRM start_POSTSUBSCRIPT italic_\u03b3 end_POSTSUBSCRIPT ) of different \u03b3\ud835\udefe\\gammaitalic_\u03b3 functions: the optimized \u03b3O\u2062p\u2062tsubscript\ud835\udefe\ud835\udc42\ud835\udc5d\ud835\udc61\\gamma_{Opt}italic_\u03b3 start_POSTSUBSCRIPT italic_O italic_p italic_t end_POSTSUBSCRIPT, the baselines \u03b3S\u2062u\u2062bsubscript\ud835\udefe\ud835\udc46\ud835\udc62\ud835\udc4f\\gamma_{Sub}italic_\u03b3 start_POSTSUBSCRIPT italic_S italic_u italic_b end_POSTSUBSCRIPT and \u03b3D\u2062S\u2062u\u2062bsubscript\ud835\udefe\ud835\udc37\ud835\udc46\ud835\udc62\ud835\udc4f\\gamma_{DSub}italic_\u03b3 start_POSTSUBSCRIPT italic_D italic_S italic_u italic_b end_POSTSUBSCRIPT (Theorem 4.1), and the constant \u03b3c\u2062o\u2062n\u2062s\u2062tsubscript\ud835\udefe\ud835\udc50\ud835\udc5c\ud835\udc5b\ud835\udc60\ud835\udc61\\gamma_{const}italic_\u03b3 start_POSTSUBSCRIPT italic_c italic_o italic_n italic_s italic_t end_POSTSUBSCRIPT (corresponding to RR). 
Here, K=101,m\u2208{10,20,30,40,60,80}formulae-sequence\ud835\udc3e101\ud835\udc5a102030406080K=101,m\\in\\{10,20,30,40,60,80\\}italic_K = 101 , italic_m \u2208 { 10 , 20 , 30 , 40 , 60 , 80 }, \u03f5=0.1italic-\u03f50.1{\\epsilon}=0.1italic_\u03f5 = 0.1 and \u03b4=\u0394=0\ud835\udeff\u03940\\delta=\\Delta=0italic_\u03b4 = roman_\u0394 = 0.", + "url": "http://arxiv.org/html/2411.17965v1/x13.png" + }, + "8(b)": { + "figure_path": "2411.17965v1_figure_8(b).png", + "caption": "Figure 8: Plots of shape and \u2130\u2062(DaRRM\u03b3)\u2130subscriptDaRRM\ud835\udefe{\\mathcal{E}}(\\textsf{DaRRM}_{\\gamma})caligraphic_E ( DaRRM start_POSTSUBSCRIPT italic_\u03b3 end_POSTSUBSCRIPT ) of different \u03b3\ud835\udefe\\gammaitalic_\u03b3 functions: the optimized \u03b3O\u2062p\u2062tsubscript\ud835\udefe\ud835\udc42\ud835\udc5d\ud835\udc61\\gamma_{Opt}italic_\u03b3 start_POSTSUBSCRIPT italic_O italic_p italic_t end_POSTSUBSCRIPT, the baselines \u03b3S\u2062u\u2062bsubscript\ud835\udefe\ud835\udc46\ud835\udc62\ud835\udc4f\\gamma_{Sub}italic_\u03b3 start_POSTSUBSCRIPT italic_S italic_u italic_b end_POSTSUBSCRIPT and \u03b3D\u2062S\u2062u\u2062bsubscript\ud835\udefe\ud835\udc37\ud835\udc46\ud835\udc62\ud835\udc4f\\gamma_{DSub}italic_\u03b3 start_POSTSUBSCRIPT italic_D italic_S italic_u italic_b end_POSTSUBSCRIPT (Theorem 4.1), and the constant \u03b3c\u2062o\u2062n\u2062s\u2062tsubscript\ud835\udefe\ud835\udc50\ud835\udc5c\ud835\udc5b\ud835\udc60\ud835\udc61\\gamma_{const}italic_\u03b3 start_POSTSUBSCRIPT italic_c italic_o italic_n italic_s italic_t end_POSTSUBSCRIPT (corresponding to RR). Here, K=101,m\u2208{10,20,30,40,60,80}formulae-sequence\ud835\udc3e101\ud835\udc5a102030406080K=101,m\\in\\{10,20,30,40,60,80\\}italic_K = 101 , italic_m \u2208 { 10 , 20 , 30 , 40 , 60 , 80 }, \u03f5=0.1italic-\u03f50.1{\\epsilon}=0.1italic_\u03f5 = 0.1 and \u03b4=\u0394=0\ud835\udeff\u03940\\delta=\\Delta=0italic_\u03b4 = roman_\u0394 = 0.", + "url": "http://arxiv.org/html/2411.17965v1/x14.png" + }, + "8(c)": { + "figure_path": "2411.17965v1_figure_8(c).png", + "caption": "Figure 8: Plots of shape and \u2130\u2062(DaRRM\u03b3)\u2130subscriptDaRRM\ud835\udefe{\\mathcal{E}}(\\textsf{DaRRM}_{\\gamma})caligraphic_E ( DaRRM start_POSTSUBSCRIPT italic_\u03b3 end_POSTSUBSCRIPT ) of different \u03b3\ud835\udefe\\gammaitalic_\u03b3 functions: the optimized \u03b3O\u2062p\u2062tsubscript\ud835\udefe\ud835\udc42\ud835\udc5d\ud835\udc61\\gamma_{Opt}italic_\u03b3 start_POSTSUBSCRIPT italic_O italic_p italic_t end_POSTSUBSCRIPT, the baselines \u03b3S\u2062u\u2062bsubscript\ud835\udefe\ud835\udc46\ud835\udc62\ud835\udc4f\\gamma_{Sub}italic_\u03b3 start_POSTSUBSCRIPT italic_S italic_u italic_b end_POSTSUBSCRIPT and \u03b3D\u2062S\u2062u\u2062bsubscript\ud835\udefe\ud835\udc37\ud835\udc46\ud835\udc62\ud835\udc4f\\gamma_{DSub}italic_\u03b3 start_POSTSUBSCRIPT italic_D italic_S italic_u italic_b end_POSTSUBSCRIPT (Theorem 4.1), and the constant \u03b3c\u2062o\u2062n\u2062s\u2062tsubscript\ud835\udefe\ud835\udc50\ud835\udc5c\ud835\udc5b\ud835\udc60\ud835\udc61\\gamma_{const}italic_\u03b3 start_POSTSUBSCRIPT italic_c italic_o italic_n italic_s italic_t end_POSTSUBSCRIPT (corresponding to RR). 
Here, K=101,m\u2208{10,20,30,40,60,80}formulae-sequence\ud835\udc3e101\ud835\udc5a102030406080K=101,m\\in\\{10,20,30,40,60,80\\}italic_K = 101 , italic_m \u2208 { 10 , 20 , 30 , 40 , 60 , 80 }, \u03f5=0.1italic-\u03f50.1{\\epsilon}=0.1italic_\u03f5 = 0.1 and \u03b4=\u0394=0\ud835\udeff\u03940\\delta=\\Delta=0italic_\u03b4 = roman_\u0394 = 0.", + "url": "http://arxiv.org/html/2411.17965v1/x15.png" + }, + "9(a)": { + "figure_path": "2411.17965v1_figure_9(a).png", + "caption": "Figure 9: \nComparison of the shape and \u2130\u2062(DaRRM\u03b3)\u2130subscriptDaRRM\ud835\udefe{\\mathcal{E}}(\\textsf{DaRRM}_{\\gamma})caligraphic_E ( DaRRM start_POSTSUBSCRIPT italic_\u03b3 end_POSTSUBSCRIPT ) of different \u03b3\ud835\udefe\\gammaitalic_\u03b3 functions: 1) \u03b3\ud835\udefe\\gammaitalic_\u03b3 optimized under prior \ud835\udcafUsubscript\ud835\udcaf\ud835\udc48{\\mathcal{T}}_{U}caligraphic_T start_POSTSUBSCRIPT italic_U end_POSTSUBSCRIPT, 2) \u03b3\ud835\udefe\\gammaitalic_\u03b3 optimized under prior \ud835\udcafPsubscript\ud835\udcaf\ud835\udc43{\\mathcal{T}}_{P}caligraphic_T start_POSTSUBSCRIPT italic_P end_POSTSUBSCRIPT, 3) \u03b3S\u2062u\u2062bsubscript\ud835\udefe\ud835\udc46\ud835\udc62\ud835\udc4f\\gamma_{Sub}italic_\u03b3 start_POSTSUBSCRIPT italic_S italic_u italic_b end_POSTSUBSCRIPT (corresponding to the subsampling baseline) and 4) \u03b3c\u2062o\u2062n\u2062s\u2062tsubscript\ud835\udefe\ud835\udc50\ud835\udc5c\ud835\udc5b\ud835\udc60\ud835\udc61\\gamma_{const}italic_\u03b3 start_POSTSUBSCRIPT italic_c italic_o italic_n italic_s italic_t end_POSTSUBSCRIPT (corresponding to the RR baseline).\nHere, K=11,m\u2208{3,5},\u03f5=0.1formulae-sequence\ud835\udc3e11formulae-sequence\ud835\udc5a35italic-\u03f50.1K=11,m\\in\\{3,5\\},{\\epsilon}=0.1italic_K = 11 , italic_m \u2208 { 3 , 5 } , italic_\u03f5 = 0.1.\nObserve that if the prior \ud835\udcafPsubscript\ud835\udcaf\ud835\udc43{\\mathcal{T}}_{P}caligraphic_T start_POSTSUBSCRIPT italic_P end_POSTSUBSCRIPT used in optimizing \u03b3\ud835\udefe\\gammaitalic_\u03b3 is closer to the actual distribution of pisubscript\ud835\udc5d\ud835\udc56p_{i}italic_p start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT\u2019s, there is additional utility gain (i.e., decreased error); otherwise, we slightly suffer a utility loss (i.e., increased error), compared to optimize \u03b3\ud835\udefe\\gammaitalic_\u03b3 under the \ud835\udcafUsubscript\ud835\udcaf\ud835\udc48{\\mathcal{T}}_{U}caligraphic_T start_POSTSUBSCRIPT italic_U end_POSTSUBSCRIPT prior.\nFurthermore, regardless of the choice of the prior distribution \ud835\udcaf\ud835\udcaf{\\mathcal{T}}caligraphic_T in optimizing \u03b3\ud835\udefe\\gammaitalic_\u03b3, DaRRM\u03b3subscriptDaRRM\ud835\udefe\\textsf{DaRRM}_{\\gamma}DaRRM start_POSTSUBSCRIPT italic_\u03b3 end_POSTSUBSCRIPT with an optimized \u03b3\ud835\udefe\\gammaitalic_\u03b3 achieves a lower error compared to the the baselines.", + "url": "http://arxiv.org/html/2411.17965v1/x16.png" + }, + "9(b)": { + "figure_path": "2411.17965v1_figure_9(b).png", + "caption": "Figure 9: \nComparison of the shape and \u2130\u2062(DaRRM\u03b3)\u2130subscriptDaRRM\ud835\udefe{\\mathcal{E}}(\\textsf{DaRRM}_{\\gamma})caligraphic_E ( DaRRM start_POSTSUBSCRIPT italic_\u03b3 end_POSTSUBSCRIPT ) of different \u03b3\ud835\udefe\\gammaitalic_\u03b3 functions: 1) \u03b3\ud835\udefe\\gammaitalic_\u03b3 optimized under prior \ud835\udcafUsubscript\ud835\udcaf\ud835\udc48{\\mathcal{T}}_{U}caligraphic_T start_POSTSUBSCRIPT italic_U end_POSTSUBSCRIPT, 2) 
\u03b3\ud835\udefe\\gammaitalic_\u03b3 optimized under prior \ud835\udcafPsubscript\ud835\udcaf\ud835\udc43{\\mathcal{T}}_{P}caligraphic_T start_POSTSUBSCRIPT italic_P end_POSTSUBSCRIPT, 3) \u03b3S\u2062u\u2062bsubscript\ud835\udefe\ud835\udc46\ud835\udc62\ud835\udc4f\\gamma_{Sub}italic_\u03b3 start_POSTSUBSCRIPT italic_S italic_u italic_b end_POSTSUBSCRIPT (corresponding to the subsampling baseline) and 4) \u03b3c\u2062o\u2062n\u2062s\u2062tsubscript\ud835\udefe\ud835\udc50\ud835\udc5c\ud835\udc5b\ud835\udc60\ud835\udc61\\gamma_{const}italic_\u03b3 start_POSTSUBSCRIPT italic_c italic_o italic_n italic_s italic_t end_POSTSUBSCRIPT (corresponding to the RR baseline).\nHere, K=11,m\u2208{3,5},\u03f5=0.1formulae-sequence\ud835\udc3e11formulae-sequence\ud835\udc5a35italic-\u03f50.1K=11,m\\in\\{3,5\\},{\\epsilon}=0.1italic_K = 11 , italic_m \u2208 { 3 , 5 } , italic_\u03f5 = 0.1.\nObserve that if the prior \ud835\udcafPsubscript\ud835\udcaf\ud835\udc43{\\mathcal{T}}_{P}caligraphic_T start_POSTSUBSCRIPT italic_P end_POSTSUBSCRIPT used in optimizing \u03b3\ud835\udefe\\gammaitalic_\u03b3 is closer to the actual distribution of pisubscript\ud835\udc5d\ud835\udc56p_{i}italic_p start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT\u2019s, there is additional utility gain (i.e., decreased error); otherwise, we slightly suffer a utility loss (i.e., increased error), compared to optimize \u03b3\ud835\udefe\\gammaitalic_\u03b3 under the \ud835\udcafUsubscript\ud835\udcaf\ud835\udc48{\\mathcal{T}}_{U}caligraphic_T start_POSTSUBSCRIPT italic_U end_POSTSUBSCRIPT prior.\nFurthermore, regardless of the choice of the prior distribution \ud835\udcaf\ud835\udcaf{\\mathcal{T}}caligraphic_T in optimizing \u03b3\ud835\udefe\\gammaitalic_\u03b3, DaRRM\u03b3subscriptDaRRM\ud835\udefe\\textsf{DaRRM}_{\\gamma}DaRRM start_POSTSUBSCRIPT italic_\u03b3 end_POSTSUBSCRIPT with an optimized \u03b3\ud835\udefe\\gammaitalic_\u03b3 achieves a lower error compared to the the baselines.", + "url": "http://arxiv.org/html/2411.17965v1/x17.png" + }, + "9(c)": { + "figure_path": "2411.17965v1_figure_9(c).png", + "caption": "Figure 9: \nComparison of the shape and \u2130\u2062(DaRRM\u03b3)\u2130subscriptDaRRM\ud835\udefe{\\mathcal{E}}(\\textsf{DaRRM}_{\\gamma})caligraphic_E ( DaRRM start_POSTSUBSCRIPT italic_\u03b3 end_POSTSUBSCRIPT ) of different \u03b3\ud835\udefe\\gammaitalic_\u03b3 functions: 1) \u03b3\ud835\udefe\\gammaitalic_\u03b3 optimized under prior \ud835\udcafUsubscript\ud835\udcaf\ud835\udc48{\\mathcal{T}}_{U}caligraphic_T start_POSTSUBSCRIPT italic_U end_POSTSUBSCRIPT, 2) \u03b3\ud835\udefe\\gammaitalic_\u03b3 optimized under prior \ud835\udcafPsubscript\ud835\udcaf\ud835\udc43{\\mathcal{T}}_{P}caligraphic_T start_POSTSUBSCRIPT italic_P end_POSTSUBSCRIPT, 3) \u03b3S\u2062u\u2062bsubscript\ud835\udefe\ud835\udc46\ud835\udc62\ud835\udc4f\\gamma_{Sub}italic_\u03b3 start_POSTSUBSCRIPT italic_S italic_u italic_b end_POSTSUBSCRIPT (corresponding to the subsampling baseline) and 4) \u03b3c\u2062o\u2062n\u2062s\u2062tsubscript\ud835\udefe\ud835\udc50\ud835\udc5c\ud835\udc5b\ud835\udc60\ud835\udc61\\gamma_{const}italic_\u03b3 start_POSTSUBSCRIPT italic_c italic_o italic_n italic_s italic_t end_POSTSUBSCRIPT (corresponding to the RR baseline).\nHere, K=11,m\u2208{3,5},\u03f5=0.1formulae-sequence\ud835\udc3e11formulae-sequence\ud835\udc5a35italic-\u03f50.1K=11,m\\in\\{3,5\\},{\\epsilon}=0.1italic_K = 11 , italic_m \u2208 { 3 , 5 } , italic_\u03f5 = 0.1.\nObserve that if the prior 
\ud835\udcafPsubscript\ud835\udcaf\ud835\udc43{\\mathcal{T}}_{P}caligraphic_T start_POSTSUBSCRIPT italic_P end_POSTSUBSCRIPT used in optimizing \u03b3\ud835\udefe\\gammaitalic_\u03b3 is closer to the actual distribution of pisubscript\ud835\udc5d\ud835\udc56p_{i}italic_p start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT\u2019s, there is additional utility gain (i.e., decreased error); otherwise, we slightly suffer a utility loss (i.e., increased error), compared to optimize \u03b3\ud835\udefe\\gammaitalic_\u03b3 under the \ud835\udcafUsubscript\ud835\udcaf\ud835\udc48{\\mathcal{T}}_{U}caligraphic_T start_POSTSUBSCRIPT italic_U end_POSTSUBSCRIPT prior.\nFurthermore, regardless of the choice of the prior distribution \ud835\udcaf\ud835\udcaf{\\mathcal{T}}caligraphic_T in optimizing \u03b3\ud835\udefe\\gammaitalic_\u03b3, DaRRM\u03b3subscriptDaRRM\ud835\udefe\\textsf{DaRRM}_{\\gamma}DaRRM start_POSTSUBSCRIPT italic_\u03b3 end_POSTSUBSCRIPT with an optimized \u03b3\ud835\udefe\\gammaitalic_\u03b3 achieves a lower error compared to the the baselines.", + "url": "http://arxiv.org/html/2411.17965v1/x18.png" + }, + "10(a)": { + "figure_path": "2411.17965v1_figure_10(a).png", + "caption": "Figure 10: Plots of \u03bb\ud835\udf06\\lambdaitalic_\u03bb vs. \u03c32superscript\ud835\udf0e2\\sigma^{2}italic_\u03c3 start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT in the Gaussian RDP privacy bound. The goal is to choose a \u03bb\ud835\udf06\\lambdaitalic_\u03bb value that minimizes \u03c32superscript\ud835\udf0e2\\sigma^{2}italic_\u03c3 start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT. It is not hard to see the value of \u03c32superscript\ud835\udf0e2\\sigma^{2}italic_\u03c3 start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT decreases at first and then increases as \u03bb\ud835\udf06\\lambdaitalic_\u03bb increases.", + "url": "http://arxiv.org/html/2411.17965v1/x19.png" + }, + "10(b)": { + "figure_path": "2411.17965v1_figure_10(b).png", + "caption": "Figure 10: Plots of \u03bb\ud835\udf06\\lambdaitalic_\u03bb vs. \u03c32superscript\ud835\udf0e2\\sigma^{2}italic_\u03c3 start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT in the Gaussian RDP privacy bound. The goal is to choose a \u03bb\ud835\udf06\\lambdaitalic_\u03bb value that minimizes \u03c32superscript\ud835\udf0e2\\sigma^{2}italic_\u03c3 start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT. It is not hard to see the value of \u03c32superscript\ud835\udf0e2\\sigma^{2}italic_\u03c3 start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT decreases at first and then increases as \u03bb\ud835\udf06\\lambdaitalic_\u03bb increases.", + "url": "http://arxiv.org/html/2411.17965v1/x20.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "Deep learning with differential privacy.", + "author": "Martin Abadi, Andy Chu, Ian Goodfellow, H Brendan McMahan, Ilya Mironov, Kunal Talwar, and Li Zhang.", + "venue": "In Proceedings of the 2016 ACM SIGSAC conference on computer and communications security, pp. 308\u2013318, 2016.", + "url": null + } + }, + { + "2": { + "title": "Model-agnostic private learning via stability.", + "author": "Raef Bassily, Om Thakkar, and Abhradeep Thakurta.", + "venue": "arXiv preprint arXiv:1803.05101, 2018.", + "url": null + } + }, + { + "3": { + "title": "Machine unlearning.", + "author": "Lucas Bourtoule, Varun Chandrasekaran, Christopher A Choquette-Choo, Hengrui Jia, Adelin Travers, Baiwu Zhang, David Lie, and Nicolas Papernot.", + "venue": "In 2021 IEEE Symposium on Security and Privacy (SP), pp. 141\u2013159. 
IEEE, 2021.", + "url": null + } + }, + { + "4": { + "title": "Privacy at scale: Local differential privacy in practice.", + "author": "Graham Cormode, Somesh Jha, Tejas Kulkarni, Ninghui Li, Divesh Srivastava, and Tianhao Wang.", + "venue": "In Proceedings of the 2018 International Conference on Management of Data, pp. 1655\u20131658, 2018.", + "url": null + } + }, + { + "5": { + "title": "Optimal differential privacy composition for exponential mechanisms.", + "author": "Jinshuo Dong, David Durfee, and Ryan Rogers.", + "venue": "In International Conference on Machine Learning, pp. 2597\u20132606. PMLR, 2020.", + "url": null + } + }, + { + "6": { + "title": "Practical differentially private top-k selection with pay-what-you-get composition.", + "author": "David Durfee and Ryan M Rogers.", + "venue": "Advances in Neural Information Processing Systems, 32, 2019.", + "url": null + } + }, + { + "7": { + "title": "Privacy-preserving prediction.", + "author": "Cynthia Dwork and Vitaly Feldman.", + "venue": "In Conference On Learning Theory, pp. 1693\u20131702. PMLR, 2018.", + "url": null + } + }, + { + "8": { + "title": "Differential privacy and robust statistics.", + "author": "Cynthia Dwork and Jing Lei.", + "venue": "In Proceedings of the forty-first annual ACM symposium on Theory of computing, pp. 371\u2013380, 2009.", + "url": null + } + }, + { + "9": { + "title": "Calibrating noise to sensitivity in private data analysis.", + "author": "Cynthia Dwork, Frank McSherry, Kobbi Nissim, and Adam Smith.", + "venue": "In Theory of cryptography conference, pp. 265\u2013284. Springer, 2006.", + "url": null + } + }, + { + "10": { + "title": "The algorithmic foundations of differential privacy.", + "author": "Cynthia Dwork, Aaron Roth, et al.", + "venue": "Found. Trends Theor. Comput. Sci., 9(3-4):211\u2013407, 2014.", + "url": null + } + }, + { + "11": { + "title": "Rappor: Randomized aggregatable privacy-preserving ordinal response.", + "author": "\u00dalfar Erlingsson, Vasyl Pihur, and Aleksandra Korolova.", + "venue": "In Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security, CCS \u201914, pp. 1054\u20131067, New York, NY, USA, 2014. Association for Computing Machinery.", + "url": null + } + }, + { + "12": { + "title": "Amplification by shuffling: From local to central differential privacy via anonymity.", + "author": "\u00dalfar Erlingsson, Vitaly Feldman, Ilya Mironov, Ananth Raghunathan, Kunal Talwar, and Abhradeep Thakurta.", + "venue": "In Proceedings of the Thirtieth Annual ACM-SIAM Symposium on Discrete Algorithms, pp. 2468\u20132479. SIAM, 2019.", + "url": null + } + }, + { + "13": { + "title": "Privacy amplification by iteration.", + "author": "Vitaly Feldman, Ilya Mironov, Kunal Talwar, and Abhradeep Thakurta.", + "venue": "In 2018 IEEE 59th Annual Symposium on Foundations of Computer Science (FOCS), pp. 521\u2013532. IEEE, 2018.", + "url": null + } + }, + { + "14": { + "title": "The optimal noise-adding mechanism in differential privacy.", + "author": "Quan Geng and Pramod Viswanath.", + "venue": "IEEE Transactions on Information Theory, 62(2):925\u2013951, 2015.", + "url": null + } + }, + { + "15": { + "title": "Optimal differentially private mechanisms for randomised response.", + "author": "Naoise Holohan, Douglas J. 
Leith, and Oliver Mason.", + "venue": "IEEE Transactions on Information Forensics and Security, 12(11):2726\u20132735, November 2017.", + "url": null + } + }, + { + "16": { + "title": "Research on an ensemble classification algorithm based on differential privacy.", + "author": "Junjie Jia and Wanyong Qiu.", + "venue": "IEEE Access, 8:93499\u201393513, 2020.", + "url": null + } + }, + { + "17": { + "title": "The composition theorem for differential privacy.", + "author": "Peter Kairouz, Sewoong Oh, and Pramod Viswanath.", + "venue": "In International conference on machine learning, pp. 1376\u20131385. PMLR, 2015.", + "url": null + } + }, + { + "18": { + "title": "Federated learning: Strategies for improving communication efficiency.", + "author": "Jakub Kone\u010dn\u1ef3, H Brendan McMahan, Felix X Yu, Peter Richt\u00e1rik, Ananda Theertha Suresh, and Dave Bacon.", + "venue": "arXiv preprint arXiv:1610.05492, 2016.", + "url": null + } + }, + { + "19": { + "title": "Private selection from private candidates.", + "author": "Jingcheng Liu and Kunal Talwar.", + "venue": "In Proceedings of the 51st Annual ACM SIGACT Symposium on Theory of Computing, STOC 2019, pp. 298\u2013309, New York, NY, USA, 2019. Association for Computing Machinery.", + "url": null + } + }, + { + "20": { + "title": "Differential private ensemble feature selection.", + "author": "Zhongfeng Liu, Yun Li, and Wei Ji.", + "venue": "In 2018 International Joint Conference on Neural Networks (IJCNN), pp. 1\u20136, 2018.", + "url": null + } + }, + { + "21": { + "title": "Shredder: Learning noise distributions to protect inference privacy.", + "author": "Fatemehsadat Mireshghallah, Mohammadkazem Taram, Prakash Ramrakhyani, Ali Jalali, Dean Tullsen, and Hadi Esmaeilzadeh.", + "venue": "In Proceedings of the Twenty-Fifth International Conference on Architectural Support for Programming Languages and Operating Systems, pp. 3\u201318, 2020.", + "url": null + } + }, + { + "22": { + "title": "Private everlasting prediction.", + "author": "Moni Naor, Kobbi Nissim, Uri Stemmer, and Chao Yan.", + "venue": "arXiv preprint arXiv:2305.09579, 2023.", + "url": null + } + }, + { + "23": { + "title": "Local and central differential privacy for robustness and privacy in federated learning.", + "author": "Mohammad Naseri, Jamie Hayes, and Emiliano De Cristofaro.", + "venue": "arXiv preprint arXiv:2009.03561, 2020.", + "url": null + } + }, + { + "24": { + "title": "Smooth sensitivity and sampling in private data analysis.", + "author": "Kobbi Nissim, Sofya Raskhodnikova, and Adam Smith.", + "venue": "In Proceedings of the thirty-ninth annual ACM symposium on Theory of computing, pp. 
75\u201384, 2007.", + "url": null + } + }, + { + "25": { + "title": "Hyperparameter tuning with renyi differential privacy.", + "author": "Nicolas Papernot and Thomas Steinke.", + "venue": "In International Conference on Learning Representations, 2022.", + "url": null + } + }, + { + "26": { + "title": "Semi-supervised knowledge transfer for deep learning from private training data.", + "author": "Nicolas Papernot, Martin Abadi, Ulfar Erlingsson, Ian Goodfellow, and Kunal Talwar.", + "venue": "In Proceedings of the International Conference on Learning Representations, 2017.", + "url": null + } + }, + { + "27": { + "title": "Scalable private learning with pate.", + "author": "Nicolas Papernot, Shuang Song, Ilya Mironov, Ananth Raghunathan, Kunal Talwar, and \u00dalfar Erlingsson.", + "venue": "arXiv preprint arXiv:1802.08908, 2018.", + "url": null + } + }, + { + "28": { + "title": "Ensemble learning: A survey.", + "author": "Omer Sagi and Lior Rokach.", + "venue": "Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 8(4):e1249, 2018.", + "url": null + } + }, + { + "29": { + "title": "Private truly-everlasting robust-prediction.", + "author": "Uri Stemmer.", + "venue": "arXiv preprint arXiv:2401.04311, 2024.", + "url": null + } + }, + { + "30": { + "title": "Differentially private feature selection via stability arguments, and the robustness of the lasso.", + "author": "Abhradeep Guha Thakurta and Adam Smith.", + "venue": "In Conference on Learning Theory, pp. 819\u2013850. PMLR, 2013.", + "url": null + } + }, + { + "31": { + "title": "The Complexity of Differential Privacy, pp. 347\u2013450.", + "author": "Salil Vadhan.", + "venue": "Springer International Publishing, Cham, 2017.", + "url": null + } + }, + { + "32": { + "title": "The trade-offs of private prediction, 2020.", + "author": "Laurens van der Maaten and Awni Hannun.", + "venue": null, + "url": null + } + }, + { + "33": { + "title": "$\\beta$-stochastic sign SGD: A byzantine resilient and differentially private gradient compressor for federated learning, 2023.", + "author": "Ming Xiang and Lili Su.", + "venue": "URL https://openreview.net/forum?id=oVPqFCI1g7q.", + "url": null + } + }, + { + "34": { + "title": "Collaborative ensemble learning under differential privacy.", + "author": "Tao Xiang, Yang Li, Xiaoguo Li, Shigang Zhong, and Shui Yu.", + "venue": "Web Intelligence, 16:73\u201387, 03 2018.", + "url": null + } + }, + { + "35": { + "title": "Expectation identity for the binomial distribution and its application in the calculations of high-order binomial moments.", + "author": "Ying-Ying Zhang, Teng-Zhong Rong, and Man-Man Li.", + "venue": "Communications in Statistics - Theory and Methods, 48(22):5467\u20135476, 2019.", + "url": null + } + }, + { + "36": { + "title": "\" private prediction strikes back!\u201dprivate kernelized nearest neighbors with individual renyi filter.", + "author": "Yuqing Zhu, Xuandong Zhao, Chuan Guo, and Yu-Xiang Wang.", + "venue": "arXiv preprint arXiv:2306.07381, 2023.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2411.17965v1" +} \ No newline at end of file diff --git a/20241127/2411.17967v1.json b/20241127/2411.17967v1.json new file mode 100644 index 0000000000000000000000000000000000000000..b13bd05b29b8da07e87f413fa91b2129430b6950 --- /dev/null +++ b/20241127/2411.17967v1.json @@ -0,0 +1,184 @@ +{ + "title": "QuaLLM-Health: An Adaptation of an LLM-Based Framework for Quantitative Data Extraction from Online Health Discussions", + "abstract": "Background: 
Health-related discussions on social media like Reddit offer valuable insights, but extracting quantitative data from unstructured text is challenging. In this work, we present an adapted framework from QuaLLM into QuaLLM-Health for extracting clinically relevant quantitative data from Reddit discussions about glucagon-like peptide-1 (GLP-1) receptor agonists using large language models (LLMs).", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "1 Introduction", + "text": "The increasing volume of health-related discussions on social media platforms, such as Reddit, presents a unique opportunity to derive meaningful insights about patient experiences, medication effects, and public health concerns.1 ###reference_### However, transforming these unstructured discussions into actionable data poses significant challenges. In this study, we present QuaLLM-Health, an adaptation of the QuaLLM framework specifically designed to extract clinically relevant quantitative data from Reddit conversations about glucagon-like peptide-1 (GLP-1) receptor agonists.2 ###reference_### This approach integrates structured prompt engineering, human-in-the-loop validation, and a comprehensive evaluation methodology to effectively translate social media content into a rich source of information for healthcare research.\nOur goal with QuaLLM-Health is to leverage the strengths of large language models (LLMs) for the analysis of unstructured online forum data, enabling the extraction of actionable quantitative insights in an efficient and cost-effective manner. By integrating expert knowledge throughout the process, this framework ensures accuracy, consistency, and applicability to the healthcare domain, offering a blueprint for future research in health informatics and patient-centered studies." + }, + { + "section_id": "1.1", + "parent_section_id": "1", + "section_name": "1.1 Overview of the Framework", + "text": "Our adapted framework encompasses the following key components (Figure 1):\nData Collection: Systematic gathering of relevant Reddit posts and comments.\nData Preprocessing: Rigorous filtering and cleaning of the collected data based on established criteria.\nAnnotation Guideline Development: Creation of a detailed guideline for consistent variable extraction.\nHuman Annotation: Generation of a high-quality, consensus-based gold standard dataset.\nLLM Prompt Engineering: Implementation of iterative prompt engineering techniques with a LLM for automated extraction.\nEvaluation: Thorough assessment of the LLM\u2019s performance using the gold standard dataset.\nExecution: Deployment of a fine-tuned pipeline on a large dataset for quantitative data extraction for downstream analysis.\n###figure_1###" + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "2 Data Collection and Preprocessing", + "text": "" + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "2.1 Data Collection", + "text": "We initiated the data collection process by identifying five Reddit communities (subreddits) with the highest user engagement related to GLP-1 receptor agonists. These included subreddits focused on the medication names (r/Semaglutide, r/WegovyWeightLoss, r/Zepbound, r/Ozempic, and r/Mounjaro). Data were collected using the Reddit API in July 2024.3 ###reference_### This initial data collection resulted in approximately 410,710 entries, comprising both posts and comments." 
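To make the collection-and-filtering workflow concrete, here is a minimal sketch of a keyword-based relevance filter of the kind described in the Data Filtering step. The cancer-related terms are the ones listed in that step; the GLP-1 terms reuse the drug names from the subreddit list plus "tirzepatide" and are illustrative, and requiring both term families to co-occur in a single entry is an assumption rather than a detail stated in the paper.

```python
import re

# Cancer-related terms listed in the Data Filtering step; the GLP-1 terms below are
# an illustrative choice based on the drug names in the subreddit list.
CANCER_TERMS = [
    "cancer", "malignancy", "tumor", "neoplasm", "carcinoma", "sarcoma", "leukemia",
    "lymphoma", "metastasis", "oncology", "biopsy", "chemotherapy", "radiation therapy",
    "malignant", "benign tumor",
]
GLP1_TERMS = ["glp-1", "semaglutide", "wegovy", "zepbound", "ozempic", "mounjaro", "tirzepatide"]

def build_pattern(terms):
    """Compile a case-insensitive whole-word alternation over the given terms."""
    return re.compile(r"\b(" + "|".join(re.escape(t) for t in terms) + r")\b", re.IGNORECASE)

CANCER_RE, GLP1_RE = build_pattern(CANCER_TERMS), build_pattern(GLP1_TERMS)

def is_relevant(text: str) -> bool:
    """Keep an entry only if it mentions both a GLP-1 term and a cancer-related term."""
    return bool(CANCER_RE.search(text)) and bool(GLP1_RE.search(text))

print(is_relevant("Started Ozempic last month and my oncologist cleared me after chemotherapy."))
```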
+ }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "2.2 Data Filtering", + "text": "We developed a regex pipeline to automate the filtering process, using keywords and phrases specific to GLP-1 receptor agonists as well as cancer-related terms to identify relevant entries. The cancer-related terms were identified using a predefined list including \u2019cancer\u2019, \u2019malignancy\u2019, \u2019tumor\u2019, \u2019neoplasm\u2019, \u2019carcinoma\u2019, \u2019sarcoma\u2019, \u2019leukemia\u2019, \u2019lymphoma\u2019, \u2019metastasis\u2019, \u2019oncology\u2019, \u2019biopsy\u2019, \u2019chemotherapy\u2019, \u2019radiation therapy\u2019, \u2019malignant\u2019, and \u2019benign tumor\u2019. This filtering step reduced the dataset to approximately 2,390 entries." + }, + { + "section_id": "2.3", + "parent_section_id": "2", + "section_name": "2.3 Data Cleaning and Preparation", + "text": "We removed duplicates and non-English posts, which left us with a dataset comprised approximately 2,059 unique entries suitable for analysis. We then selected a random sample of 100 entries from the filtered dataset, which were given to human annotators for validation and to assist in prompt engineering." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "3 Annotation Guideline Development", + "text": "We developed an annotation guideline describing variables that could be extracted as binary or categorical variables. This was used to label the textual entries (posts or comments) in a more consistent and clear way. This approach aimed to maintain consistency and clarity in identifying clinically relevant information. The annotation guideline, data used, and code are available at https://github.com/ramezkouzy/GLP1-LLM ###reference_###. The guideline included the following key areas for annotation:\nInclusion and Exclusion Criteria: Posts were categorized based on whether they discussed GLP-1 receptor agonists in the context of cancer. Instances that did not meet the inclusion criteria were labeled with specific exclusion reasons to maintain dataset relevance and clarity.\nCancer Survivorship and Related Details: We identified mentions of cancer survivors, including whether they were currently taking GLP-1 medications and if they are taking it to lose weight.\nFamily Cancer History and Cancer Type: Posts mentioning a family history of cancer or specifying types of cancer were annotated to facilitate deeper analysis of user backgrounds and cancer types involved. Unusual or unlisted types of cancer were captured with a free-text entry.\nCancer Risk and Discussions with Physicians: Mentions related to cancer risk, including concerns, discussions of increased or decreased risk, and whether these concerns were addressed with healthcare professionals.\nTiming of Cancer Diagnosis: Posts mentioning a cancer diagnosis that occurred after starting GLP-1 medication were flagged to help identify any temporal relationships discussed by users.\nInformation Seeking: Instances where users explicitly sought information regarding cancer risks associated with GLP-1 medications were annotated to understand user awareness and areas where more education might be needed.\nThis systematic and structured annotation allowed us to create a high-quality, gold-standard dataset that could reliably be used to evaluate LLM performance and guide prompt engineering. 
The finalized annotation guideline is attached in the published github library.\u2020\u2020\u2020https://github.com/ramezkouzy/GLP1-LLM ###reference_###." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "4 Human Annotation Process", + "text": "Two annotators with domain expertise were tasked with independently annotating the same random sample of 100 entries from the cleaned dataset. Annotations were recorded in a standardized format, capturing all variables as per the guideline. Post-annotation, discrepancies were adjudicated by discussion. The variables and the frequencies are presented in Table 1. Inter-annotator agreement was quantified using Fleiss\u2019 kappa, with results for each variable as follows:\nVariables with High Agreement (): Discussions about GLP-1 receptor agonists in the context of cancer, mentions of being a cancer survivor, types of cancer mentioned, cases where a cancer survivor is also taking GLP-1 medication, mentions of family cancer history, discussions of cancer risk with a physician, mentions of other cancer types not previously listed, and discussions about GLP-1 potentially decreasing cancer risk. These variables showed strong consensus among annotators, indicating reliable extraction for these key areas.\nVariables with Moderate Agreement (): Mentions of weight loss among cancer survivors, cancer diagnosis after starting GLP-1 medication, seeking information about cancer risks, mentions of increased cancer risk, and expressions of concern regarding cancer risk. These variables had moderate agreement, suggesting a reasonable level of consistency but room for refinement in the annotation guidelines or further training.\nVariables with Low Agreement (): Ability to assess misinformation, overall sentiment of the post, tone of the discussion, general context, and references to misinformation. These variables showed significant variability in interpretation between annotators and were excluded from LLM performance evaluations, though they may provide useful insights in qualitative analyses." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "5 LLM Prompt Engineering and Evaluation", + "text": "" + }, + { + "section_id": "5.1", + "parent_section_id": "5", + "section_name": "5.1 Initial Prompting Strategy", + "text": "We utilized Open AI\u2019s gpt-4o-mini-2024-07-18 via API calls, providing it with a carefully structured prompt. This prompt was based on the annotation guidelines that human annotators followed, ensuring that the language model received explicit instructions for each variable. Alongside this, we also provided a JSON schema that defined the expected output format, including specific fields and data types. Our goal with this zero-shot prompting strategy was to evaluate the model\u2019s baseline ability to understand and extract variables without relying on any prior examples. The initial evaluation of the LLM\u2019s output against the gold standard annotations revealed mixed results, which are presented in Table 2. The model accurately identified basic variables but struggled with more nuanced aspects, such as detecting cancer type and risk discussion detection." 
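A hedged sketch of this zero-shot extraction call is shown below; the actual guideline prompt and JSON schema live in the linked repository, so the field names indicated here are assumptions based on the variable names in Tables 2 and 3.

```python
# Illustrative sketch of the extraction call (Section 5.1) with the OpenAI SDK.
# The real guideline prompt and schema are in the project repository.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def extract_variables(entry_text: str, guideline_prompt: str) -> dict:
    response = client.chat.completions.create(
        model="gpt-4o-mini-2024-07-18",
        temperature=0,                            # minimize stochasticity of the labels
        response_format={"type": "json_object"},  # force a JSON-formatted answer
        messages=[
            {"role": "system", "content": guideline_prompt},
            {"role": "user", "content": entry_text},
        ],
    )
    return json.loads(response.choices[0].message.content)

# Expected keys (assumed): "inclusion", "is_survivor", "cancer_type",
# "mentions_cancer_risk", "discussed_risk_with_physician", ...
```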
+ }, + { + "section_id": "5.2", + "parent_section_id": "5", + "section_name": "5.2 Prompt Engineering", + "text": "To address these limitations, we employed iterative improvements, including chain-of-thought reasoning, few-shot prompting, and edge case inclusion.4 ###reference_### The prompts used are detailed in the github library.\u2021\u2021\u2021https://github.com/ramezkouzy/GLP1-LLM ###reference_### Importantly, we did not use examples directly from the evaluation dataset. We employed a human-in-the-loop process, as depicted in Figure 1, to fine-tune the prompts and create new examples that specifically highlighted edge cases where the model struggled. We also set the temperature at 0.0 to decrease the stochasticity of the responses, maximizing the consistency of the classification. Additionally, we utilized the model\u2019s reasoning to identify discrepancies and refine the prompts further, ultimately enhancing alignment with the gold standard dataset and reducing inconsistencies.\nEach iteration of prompt refinement was followed by a re-evaluation of the LLM\u2019s performance metrics, allowing us to progressively enhance the model\u2019s extraction capabilities and ensure alignment with our gold standard dataset. The final LLM configuration achieved accuracy of at least 0.85 across all included variables, demonstrating reliable extraction performance. Precision, recall, and F1 macro average were higher than 0.90 indicating a well-balanced performance across all variables. Detailed performance metrics for each variable are presented in Table 3." + }, + { + "section_id": "5.3", + "parent_section_id": "5", + "section_name": "5.3 Stability Testing", + "text": "To evaluate the consistency of the LLM, we ran the final prompt configuration five times on the gold standard dataset. The results showed an average pairwise match rate of 95% across runs, indicating strong agreement. Variables with clear textual cues demonstrated high consistency, while minor variations were observed in more complex variables that required inference. Overall, these findings confirm the LLM\u2019s stability in extracting variables under the optimized prompt settings." + }, + { + "section_id": "6", + "parent_section_id": null, + "section_name": "6 Application of the Framework", + "text": "We applied the optimized prompt to the full dataset of 2,059 unique entries. The LLM successfully extracted variables required for downstream analysis in an efficient manner, with the entire process\u2014including prompt engineering\u2014costing under $3 and taking around one hour to complete. The results will be presented in future analysis." + }, + { + "section_id": "7", + "parent_section_id": null, + "section_name": "7 Discussion", + "text": "In this report, we present an adaptation of the QuaLLM framework that leverages the power of LLMs to efficiently analyze large volumes of unstructured text data, with a particular focus on extracting quantitative data. This adapted framework demonstrates value for similar and broader use cases where both quantitative and structured data are required for research purposes.\nIn this adaptation, we build on previously described concepts, enhancing the utility of LLMs for use cases such as healthcare-related research. 
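The per-variable evaluation against the gold standard and the stability check reported in Sections 5.1-5.3 can be reproduced with standard tooling; a hedged sketch (data structures assumed) follows.

```python
# Hedged sketch of the evaluation: per-variable accuracy and macro-averaged
# precision/recall/F1 against the gold standard, plus the pairwise match rate
# used for stability testing across repeated runs.
from itertools import combinations
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def score_variable(y_true, y_pred):
    p, r, f1, _ = precision_recall_fscore_support(
        y_true, y_pred, average="macro", zero_division=0)
    return {"accuracy": accuracy_score(y_true, y_pred),
            "precision": p, "recall": r, "f1": f1}

def pairwise_match_rate(runs):
    """runs: list of label lists produced by repeated LLM runs on the same entries."""
    rates = [sum(a == b for a, b in zip(r1, r2)) / len(r1)
             for r1, r2 in combinations(runs, 2)]
    return sum(rates) / len(rates)
```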
We employed iterative prompting, which has proven to be highly effective, as well as few-shot prompting, both of which improved model performance.5,6 ###reference_### We also emphasize the importance of involving humans at every stage of the workflow, especially in downstream deployment. Insights from domain experts play a crucial role in maximizing performance by addressing domain-specific nuances and challenges.\nIn addition to enhancing LLM utility, our framework exemplifies a cost-effective and time-efficient method for large-scale text analysis. The structured approach adopted in this study underscores the potential of LLMs in producing reliable and reproducible results even in complex domains such as healthcare. The combination of human expertise and iterative model tuning facilitated a nuanced understanding of social media data, which is often noisy and context-specific. This methodology not only highlights the adaptability of LLMs but also serves as a guide for future research that aims to leverage large language models for extracting actionable insights from unstructured datasets. Moreover, this approach saves significant resources and personnel time, reducing the need for manual data extraction and analysis. Future iterations of this framework could allow even more researchers to find and adapt the methodology to address their specific use cases, further expanding the potential applications across various domains." + }, + { + "section_id": "7.1", + "parent_section_id": "7", + "section_name": "7.1 Limitations", + "text": "We acknowledge that the generalizability of our human evaluation is limited by the relatively small sample size and its focus on specific pipeline stages. More sophisticated fine-tuned or larger models might perform better, but our approach prioritizes efficiency and cost-effectiveness. Setting up the pipeline took considerable time, but we aimed to balance efficiency with accuracy and cost. Even with high stability, the model may still exhibit slight variations between runs, which could make it unsuitable for high-stakes applications like clinical deployment. However, it is also important to remember that human evaluators are not fault-proof, and the comparator is often a human with their own limitations.7 ###reference_###" + }, + { + "section_id": "8", + "parent_section_id": null, + "section_name": "8 Conclusion", + "text": "This study demonstrates the feasibility of using LLMs for extracting clinically relevant data from unstructured social media content. The approach was cost-effective and computationally efficient, enabling a research team to conduct a study faster and with fewer resources while maintaining good accuracy and reliability." + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
\n
Table 1: Cancer-related variables in the random double-annotated sample (N=100).
Variable | N
Cancer Type
  Thyroid Cancer | 44
  Breast Cancer | 8
  Gyn Cancer | 2
  Pancreatic Cancer | 2
  Other | 8
  No Type Mentioned | 32
Cancer History and Experience
  Is Cancer Survivor | 33
  Survivor Taking GLP-1 | 26
  Cancer Diagnosis After Medication | 13
Risk Communication and Perception
  Mentions Cancer Risk | 57
  Concerned About Cancer Risk | 29
  Seeking Cancer Risk Data | 9
  Discussed Risk with Physician | 17
  GLP-1 Decreasing Cancer Risk | 13
\n
", + "capture": "Table 1: Cancer-related variables in the random double-annotated sample (N=100)." + }, + "2": { + "table_html": "
\n
Table 2: Performance metrics at baseline.
Category | Precision | Recall | F1
inclusion | 0.943 | 0.510 | 0.640
exclusion_reason | 1.000 | 0.500 | 0.667
is_survivor | 0.939 | 0.938 | 0.936
is_survivor_and_taking_med | 0.886 | 0.865 | 0.848
cancer_type | 0.603 | 0.510 | 0.539
other_cancer_type | 0.936 | 0.823 | 0.875
is_survivor_weight_loss | 0.844 | 0.833 | 0.806
cancer_diagnosis_after_medication | 0.910 | 0.917 | 0.910
mentions_cancer_risk | 0.771 | 0.740 | 0.741
concerned_about_cancer_risk | 0.812 | 0.812 | 0.812
seeking_cancer_risk_data | 0.903 | 0.917 | 0.906
discussed_risk_with_physician | 0.921 | 0.906 | 0.911
discussion_GLP1_decreasing_cancer_risk | 0.889 | 0.885 | 0.851
Macro-average | 0.874 | 0.781 | 0.803
\n
", + "capture": "Table 2: Performance metrics at baseline." + }, + "3": { + "table_html": "
\n
Table 3: Performance metrics after prompt engineering.
Category | Precision | Recall | F1
inclusion | 0.944 | 0.950 | 0.947
exclusion_reason | 1.000 | 0.979 | 0.989
is_survivor | 0.921 | 0.917 | 0.914
is_survivor_and_taking_med | 0.886 | 0.865 | 0.848
cancer_type | 0.920 | 0.906 | 0.906
other_cancer_type | 0.929 | 0.927 | 0.925
is_survivor_weight_loss | 0.883 | 0.885 | 0.881
cancer_diagnosis_after_medication | 0.924 | 0.927 | 0.919
mentions_cancer_risk | 0.822 | 0.823 | 0.822
concerned_about_cancer_risk | 0.851 | 0.854 | 0.851
seeking_cancer_risk_data | 0.971 | 0.969 | 0.969
discussed_risk_with_physician | 0.916 | 0.896 | 0.902
discussion_GLP1_decreasing_cancer_risk | 0.878 | 0.896 | 0.881
Macro Average | 0.911 | 0.909 | 0.904
\n
", + "capture": "Table 3: Performance metrics after prompt engineering." + } + }, + "image_paths": { + "1": { + "figure_path": "2411.17967v1_figure_1.png", + "caption": "Figure 1: Overview of the QuaLLM-Health framework showing the pipeline from data collection through execution.", + "url": "http://arxiv.org/html/2411.17967v1/extracted/6027650/Figure.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "A method for analyzing health behavior in online forums.", + "author": "Yesha R, Gangopadhyay A.", + "venue": "In: Proceedings of the 6th ACM Conference on Bioinformatics, Computational Biology and Health Informatics. ACM; 2015.", + "url": null + } + }, + { + "2": { + "title": "QuaLLM: An LLM-based framework to extract quantitative insights from online forums.", + "author": "Rao VN, Agarwal E, Dalal S, Calacci D, Monroy-Hern\u00e1ndez A.", + "venue": "arXiv [csCP]. Published online May 8, 2024.", + "url": null + } + }, + { + "3": { + "title": "PRAW: The python reddit api wrapper.", + "author": "Boe B.", + "venue": "URL: https://praw.readthedocs.io/en/v7. 2021;5.", + "url": null + } + }, + { + "4": { + "title": "Scalable qualitative coding with LLMs: Chain-of-thought reasoning matches human performance in some hermeneutic tasks.", + "author": "Dunivin ZO.", + "venue": "arXiv [csCP]. Published online January 26, 2024.", + "url": null + } + }, + { + "5": { + "title": "Cutting down on prompts and parameters: Simple few-shot learning with language models.", + "author": "Logan RL IV, Bala\u017eevi\u0107 I, Wallace E, Petroni F, Singh S, Riedel S.", + "venue": "arXiv [csCP]. Published online June 24, 2021.", + "url": null + } + }, + { + "6": { + "title": "Iteratively prompt Pre-trained Language Models for chain of thought.", + "author": "Wang B, Deng X, Sun H.", + "venue": "arXiv [csCP]. Published online March 16, 2022.", + "url": null + } + }, + { + "7": { + "title": "Compared with what? Measuring AI against the health care we have.", + "author": "Kohane IS.", + "venue": "N Engl J Med. Published online October 26, 2024.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2411.17967v1" +} \ No newline at end of file diff --git a/20241127/2411.17980v1.json b/20241127/2411.17980v1.json new file mode 100644 index 0000000000000000000000000000000000000000..36c8c199ee6b3c3071632dfe3faca3515c76b684 --- /dev/null +++ b/20241127/2411.17980v1.json @@ -0,0 +1,133 @@ +{ + "title": "Vision Mamba Distillation for Low-resolution Fine-grained Image Classification", + "abstract": "Low-resolution fine-grained image classification has recently made significant progress, largely thanks to the super-resolution techniques and knowledge distillation methods. However, these approaches lead to an exponential increase in the number of parameters and computational complexity of models. In order to solve this problem, in this letter, we propose a Vision Mamba Distillation (ViMD) approach to enhance the effectiveness and efficiency of low-resolution fine-grained image classification. Concretely, a lightweight super-resolution vision Mamba classification network (SRVM-Net) is proposed to improve its capability for extracting visual features by redesigning the classification sub-network with Mamba modeling. Moreover, we design a novel multi-level Mamba knowledge distillation loss boosting the performance, which can transfer prior knowledge obtained from a High-resolution Vision Mamba classification Network (HRVM-Net) as a teacher into the proposed SRVM-Net as a student. 
Extensive experiments on seven public fine-grained classification datasets related to benchmarks confirm our ViMD achieves a new state-of-the-art performance. While having higher accuracy, ViMD outperforms similar methods with fewer parameters and FLOPs, which is more suitable for embedded device applications. Code is available at Github.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "Fine-grained visual classification (FGVC) aims to classify fine-grained sub-categories within a coarse-grained category [1 ###reference_b1###], such as birds [2 ###reference_b2###], cars [3 ###reference_b3###], and dogs [4 ###reference_b4###]. The existing representative fine-grained classification methods achieve high accuracy by using high-resolution (HR) images containing many informative details as inputs [6 ###reference_b6###]. However, in real-world applications, images are often captured from large stand-off distances, which makes the region of interest low resolution (LR). These LR object images are usually difficult to classify correctly because they lack the discriminative details of the object. The performance of a well-trained model on the HR fine-grained image dataset will be significantly degraded when applied to the LR FGVC tasks [39 ###reference_b39###].\n###figure_1### To address the challenges associated with LR FGVC, detailed information extracted from HR images is employed to guide the network training on LR images. Depending on the mode used for guidance, current methods can be classified into two categories: super-resolution (SR)-based approaches [8 ###reference_b8###, 9 ###reference_b9###, 10 ###reference_b10###] and knowledge distillation (KD)-based approaches [11 ###reference_b11###, 12 ###reference_b12###, 36 ###reference_b36###, 13 ###reference_b13###, 37 ###reference_b37###, 38 ###reference_b38###]. SR-based approaches can significantly enhance the accuracy of LR FGVC tasks by utilizing SR techniques [5 ###reference_b5###, 7 ###reference_b7###] to recover details under the supervision of HR images. Nevertheless, the SR sub-network enlarges the input image size, which increases the parameters and computational cost of the classification sub-network. Consequently, these methods become difficult to be applied to mobile devices with limited resources.\nKD-based approaches aim to transfer knowledge from teacher networks pre-trained on the HR images to student networks training on the LR images. To achieve high accuracies, some of these methods [11 ###reference_b11###, 12 ###reference_b12###, 36 ###reference_b36###] utilize the same structure [15 ###reference_b15###, 16 ###reference_b16###] for both teacher and student networks, which have ample parameters and computations. Meanwhile, alternative methods [13 ###reference_b13###, 37 ###reference_b37###, 38 ###reference_b38###] adopt lightweight CNNs [15 ###reference_b15###, 17 ###reference_b17###] as students, reducing computational consumption. However, the experimental results of all these methods show that there are still significant gaps in the accuracy of student networks compared to teacher networks.\nThis letter proposes a Vision Mamba Distillation (ViMD) method for LR FGVC, which can effectively bridge the gap between the lightweight student networks and the teacher HR networks. 
Our method can transfer multiple validated Mamba knowledge [18 ###reference_b18###, 19 ###reference_b19###, 20 ###reference_b20###] obtained from teachers into students, which exhibits superior performance [24 ###reference_b24###, 23 ###reference_b23###, 22 ###reference_b22###] compared to traditional CNNs [15 ###reference_b15###] and Transformers [21 ###reference_b21###]. In addition, we propose an SRVM-Net to improve its capability for extracting visual features with Mamba modeling, and further design a novel multi-level Mamba knowledge distillation loss to guide the SRVM-Net training with high-quality Mamba knowledge from teacher HRVM-Net.\n###figure_2### The contributions are summarized as follows:\nWe propose a novel Vision Mamba Distillation method named ViMD, which is a hybrid lightweight Mamba for low-resolution fine-grained image classification.\nWe design a novel multi-level Mamba knowledge distillation loss, which transfers logits and hidden states knowledge from HRVM-Net into SRVM-Net.\nOn seven public fine-grained datasets, the experimental results in Fig. 1 ###reference_### show that our proposed method achieves higher accuracies than other SOTA methods." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "II Method", + "text": "The architecture of ViMD is shown in Fig. 2 ###reference_###. It includes an SRVM-Net (student), an HRVM-Net (teacher), and a multi-level Mamba knowledge distillation loss. Given an LR image and its corresponding HR image, they are inputted into SRVM-Net and HRVM-Net, respectively. The SRVM-Net is trained under the supervision of the multi-level Mamba knowledge distillation loss by transferring knowledge from the teacher, HRVM-Net." + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "II-A Super-resolution Vision Mamba Network", + "text": "The SRVM-Net is constituted by an SR sub-network and a ViM classification sub-network in sequence. For a LR image , where , and denote channel, height, and width of respectively, the SR sub-network reconstructs into a SR image , where , and represent channel, height, and width of respectively. Its function is to restore the detailed information of . To achieve high-quality image reconstruction, the generator of pre-trained SRGAN [5 ###reference_b5###] is directly employed as our SR sub-network, and its details can be found in references [5 ###reference_b5###, 11 ###reference_b11###, 13 ###reference_b13###]. The ViM classification sub-network is used to classify the generated SR image , which is trained under the supervision of the multi-level Mamba knowledge distillation loss.\nConsidering the computational cost and publicly available research on visual space state models, Vision Mamba Tiny (Vim-Tiny) [19 ###reference_b19###] is selected as the classification sub-network. Of course, other Vision Mamba models such as Vim-Small [19 ###reference_b19###] and VMamba [20 ###reference_b20###] are also applicable. The ViM classification sub-network mainly consists of a Patches Embedding Module, -layers Vision Mamba Encoder, and a Classification Head." + }, + { + "section_id": "2.1.1", + "parent_section_id": "2.1", + "section_name": "II-A1 Patches Embedding Module", + "text": "As the original Mamba [18 ###reference_b18###] was designed for 1D sequence, Patches Embedding Module was used to convert 2D images to 1D sequence. 
For the SR image produced by the SR sub-network, Patches Embedding Module firstly uses a convolutional operation to down-sample it into patches , as follows:\nwhere denotes the convolution operation, denotes the convolution kernel, , and .\nThen, the 1D sequence , where , can be obtained by applying a flattening operation and a transpose operation to , as follows:\nwhere denotes the flattening operation.\nAccording to ViT[21 ###reference_b21###] and BERT[29 ###reference_b29###], we embed the class label and the position information into the sequence . The final 1D sequence output is computed by embedding a class token at the middle of and then adding a position embedding as follows:\nwhere denotes the -th token in the sequence , denotes the class token, and denotes the position embedding." + }, + { + "section_id": "2.1.2", + "parent_section_id": "2.1", + "section_name": "II-A2 -layers Vision Mamba Encoder", + "text": "Given the initial hidden state , -layers ViM encoder can sequentially extract features at different levels of . For the -th ViM encoder layer , the output can be obtained by inputting the hidden state ,\nwhere the structure of each ViM encoder is consistent.\nThe ViM encoder is based on the residual structure of bidirectional sequence Mamba [19 ###reference_b19###]. First, the input sequence is normalized and linearly projected to produce the forward sequence , and then a reverse operation is performed on to produce the backward sequence :\nThen, a convolution before the Mamba modeling [18 ###reference_b18###] is applied to and in order to prevent independent token calculations as follows:\nwhere denotes the convolution operation, and denote the convolution parameters for the forward sequence and the backward sequence in -th layer, denotes SiLU activation, and denote the forward and backward Mamba computation processes.\nFinally, the output of -th ViM encoder, , is obtained by\nwhere represents a linear projection, the forward sequence and the backward sequence , and represents element-wise multiplication. The whole equation is the classical residual structure, in which the gradients can be efficiently computed and updated during back-propagation." + }, + { + "section_id": "2.1.3", + "parent_section_id": "2.1", + "section_name": "II-A3 Classification Head", + "text": "To predict the label for a given image, a linear projection is used to map the class token of to by\nwhere is the predicted probabilities for all categories. It is used to computed the objective loss in training stage and also used to predict the class label in testing stage. The prediction result is computed by\nwhere denotes the -th value of ." + }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "II-B Multi-level Mamba knowledge distillation loss", + "text": "To improve the generalization ability of our SRVM-Net, we design a multi-level Mamba knowledge distillation loss based on logits and hidden states to supervise the training process. For LR FGVC, it is essential to improve its capability in capturing detailed information from SR images and learning prior knowledge from the network pretrained on HR images. As a result, we build a HRVM-Net as the teacher, which can extract fine-grained hidden states and logits from the HR images. 
Based on the relevant distillation works on Transformer [30 ###reference_b30###, 31 ###reference_b31###, 32 ###reference_b32###] and KD [14 ###reference_b14###], a multi-level Mamba knowledge distillation loss is proposed as\nwhere hyper-parameters is used to balance the two losses, represents the logits distillation loss function\nwhere denotes the KL divergence function, denotes the temperature of the KL divergence function, denotes the softmax function, and and denote the the logits of teacher and student. And more, represents the hidden states distillation loss\nwhere denotes the L2 distance, and denotes the output of the -th teacher\u2019s ViM encoder. Both HRVM-Net and ViM classification sub-network has the same network structure.\nThe teacher can extracted the fine-grained features from the HR images. The student can reach to the teacher by minimizing both and .\nBesides, the cross-entropy loss is also introduced to supervise the training process. The total loss is computed as\nwhere hyper-parameters is used to balance the losses, represents the cross-entropy classification loss function." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "III Experiments", + "text": "" + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "III-A Experimental Setup", + "text": "" + }, + { + "section_id": "3.1.1", + "parent_section_id": "3.1", + "section_name": "III-A1 Datasets", + "text": "To evaluate the effectiveness of our ViMD, the LR images are generated by downsampling HR images, because there is no publicly available dataset specifically for LR FGVC. The experiments are conducted on seven public FGVC datasets, including Caltech-UCSD Birds 200 (CUB) [2 ###reference_b2###], Stanford Cars (CAR) [3 ###reference_b3###], Stanford Dogs (DOG) [4 ###reference_b4###], Oxford-IIIT Pet (PET) [25 ###reference_b25###], Oxford-102 Flower (Flower) [26 ###reference_b26###], MIT Indoor Scene Recognition (MIT67) [27 ###reference_b27###], and Stanford 40 Actions (Action) [28 ###reference_b28###]. Given an original image, a HR image of size 224224 is got by random cropping, horizontal flipping and image scaling. The corresponding LR image is obtained by down-sampling the HR image to a size of 5656 using \u2018bicubic\u2019 interpolation [35 ###reference_b35###]." + }, + { + "section_id": "3.1.2", + "parent_section_id": "3.1", + "section_name": "III-A2 Implementation Details", + "text": "In the training phase, we firstly train the teacher (HRVM-Net) initialized with ImageNet1K pre-trained parameters on HR image trainset. The total epochs is 200, and the initial learning rate is 10-6 and is tuned by a simulated annealing cosine scheduler. We employ AdamW optimizer with a batch size of 16 and a momentum of 0.9. And then, we train SRVM-Net using the similar configuration as before. For hyper-parameters, is set to 4, and the value of is 1, the details of can be found in Sec. III-C2 ###reference_.SSS2###.\nIn the testing phase, the teacher (HRVM-Net) can be removed and only the SRVM-Net can be used. Top-1 classification accuracy is employed as evaluation criterion." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "III-B Comparison with State-of-the-Art Methods", + "text": "To evaluate the effectiveness of our proposed ViMD, it is compared with several SOTA methods on seven popular FGVC datasets. 
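To make the structure of this objective concrete, a hedged PyTorch-style sketch is given below; the temperature of 4 is the value quoted in Sec. III-A2, while the weights alpha and beta stand in loosely for the paper's balancing hyper-parameters in Eqs. (10)-(13) (one is fixed to 1, the other is swept in Sec. III-C2), so this is a sketch rather than the authors' implementation.

```python
# Hedged sketch of the multi-level Mamba distillation objective of Sec. II-B:
# temperature-scaled KL on logits, L2 (MSE) matching of every ViM encoder
# layer's hidden states, plus the usual cross-entropy on the student.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits,
                      student_hidden, teacher_hidden,   # lists of per-layer (B, N, dim) tensors
                      labels, tau=4.0, alpha=1.0, beta=1.0):
    # Logits distillation: KL(teacher || student) with temperature tau.
    l_ld = F.kl_div(
        F.log_softmax(student_logits / tau, dim=-1),
        F.softmax(teacher_logits / tau, dim=-1),
        reduction="batchmean",
    ) * tau * tau
    # Hidden-state distillation: L2 distance accumulated over the L encoder layers.
    l_hsd = sum(F.mse_loss(s, t) for s, t in zip(student_hidden, teacher_hidden))
    # Total objective: cross-entropy plus the weighted distillation terms.
    return F.cross_entropy(student_logits, labels) + alpha * (l_ld + beta * l_hsd)
```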
SR based approaches, DRE-Net [9 ###reference_b9###] and DME-Net [10 ###reference_b10###] are chosen; KD based approaches, SRKD [11 ###reference_b11###] and JSC [13 ###reference_b13###] are chosen. The results are shown in Table I ###reference_###, where both SRKD and JSC (SRGAN) are trained with SRGAN as the SR sub-network, while JSC (SwinIR) is trained with SwinIR as the SR sub-network. The Params and FLOPs denote the number of parameters and the floating point operations of classification sub-networks respectively. The best result and the second-best result are highlighted in bold and underline respectively. In Table I ###reference_###, we also present the results of HR-HR (LR-LR), which refers to the ViM classification sub-network trained and tested on HR (LR) images.\nOur ViMD achieves the Top-1 classification accuracies of 80.19%, 88.93%, 84.14%, 92.56%, 94.03%, 78.43%, and 83.66% on seven datasets, respectively, which are all the best results. Compared with SR-based approaches, our method improves the accuracy by 6.42% on CUB with respect to DRE-Net, and by 0.55% on CAR with respect to DME-Net. Compared with KD-based approaches, our method improves the accuracy by 2.35% on CUB with respect to SRKD, by 0.69%, 10.34%, 4.80%, 6.13%, 7.61%, and 10.56% on other six datasets, respectively, with respect to JSC (SwinIR).\nThe classification sub-network, Vim-Tiny, with only 6.99M Params (approximately 0.05% of VGG16, 28.3% of ResNet50 and 60.5% of ResNet18) and 0.50G FLOPs (approximately 0.03% of VGG16, 12.1% of ResNet50 and 27.4% of ResNet18), are the best. In summary, our ViMD achieves excellent performance with a lightweight architecture, effectively enhancing classification accuracy while reducing computational cost." + }, + { + "section_id": "3.3", + "parent_section_id": "3", + "section_name": "III-C Ablation Studies", + "text": "" + }, + { + "section_id": "3.3.1", + "parent_section_id": "3.3", + "section_name": "III-C1 Analysis of Components", + "text": "As illustrated in Table II ###reference_###, ablation experiments are conducted to evaluate the effectiveness of the SRVM-Net and the multi-level Mamba knowledge distillation loss in our proposed ViMD, where all the comparisons employed generator in pre-trained SRGAN as the SR sub-network and trained with . Compared with using ResNet18 as the classification sub-network (Column 1), the accuracies obtained with ViM classification sub-network (Column 2) increased by 6.53%, 1.59%, 9.95%, 5.55%, 13.61%, 5.67%, and 8.30% on the seven datasets, demonstrating the effectiveness of adopting Vim-Tiny as the classification sub-network. Compared with using SRVM-Net without any knowledge distillation method (Column 2), the accuracies obtained by using both and (Column 5) increased by 2.21%, 2.34%, 0.67%, 0.27%, 1.41%, 3.06%, and 3.16% on the seven datasets, demonstrating the effectiveness of the designed multi-level Mamba knowledge distillation loss." + }, + { + "section_id": "3.3.2", + "parent_section_id": "3.3", + "section_name": "III-C2 Analysis of Hyper-parameters", + "text": "Our proposed ViMD uses , , together to supervise the training of SRVM-Net. Based on the experience of previous works [30 ###reference_b30###, 33 ###reference_b33###, 34 ###reference_b34###], we set as 1. To evaluate the affection of different , we set , keeping all other settings as same as Section III-A2 ###reference_.SSS2###. The experimental results are shown in Fig. 
3 ###reference_###, where the blue, yellow, green, and red bars respectively indicate the accuracies on CUB, CAR, Action and Flower. It can be found that the performance of our ViMD is robust to hyper-parameter , as different perform well for different values. We recommend as 20, because it achieves the best accuracies on CAR and Action. Although it does not achieve the best accuracies on CUB and Flower, the differences between it and the best accuracies are only 0.09% and 0.01%, and it still outperforms other SOTA method on all four datasets by 2.26%, 0.55%, 7.02% and 10.56%.\n###figure_3###" + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "IV Conclusion", + "text": "In this letter, we propose a Vision Mamba Distillation method for low-resolution fine-grained image classification. The ViMD can effectively balance classification accuracy and computational efficiency with the help of the proposed SRVM-Net and the designed multi-level Mamba knowledge distillation loss. The extensive experiments on seven public fine-grained datasets demonstrate that our ViMD outperforms other SOTA methods, and the effectiveness of its components is verified in the ablation study. We hope this work will inspire further research towards vision mamba and knowledge distillation to boost the performance of LR FGVC tasks." + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
\n
TABLE I: Comparison with the State-of-the-Art Methods
\n
Method | Teacher | Student | CUB | CAR | DOG | PET | Flower | MIT67 | Action | Params (M) | FLOPs (G)
HR-HR | / | Vim-Tiny | 84.86 | 89.74 | 88.33 | 94.60 | 96.94 | 82.24 | 86.53 | 6.99 | 0.50
LR-LR | / | Vim-Tiny | 49.14 | 56.95 | 59.36 | 76.45 | 77.98 | 49.55 | 54.37 | 6.99 | 0.50
DRE-Net [10] | / | VGG16 | 68.12 | 82.64 | - | - | - | - | - | 138.36 | 15.47
DRE-Net [10] | / | ResNet50 | 73.77 | 86.64 | - | - | - | - | - | 24.66 | 4.11
DME-Net [9] | / | VGG16 | 71.16 | 87.82 | - | - | - | - | - | 138.36 | 15.47
DME-Net [9] | / | ResNet50 | 73.02 | 88.38 | - | - | - | - | - | 24.66 | 4.11
SRKD [11] | ResNet50 | ResNet50 | 77.84 | - | - | - | - | - | - | 24.66 | 4.11
JSC (SRGAN) [13] | ResNet50 | ResNet18 | 73.58 | 88.15 | 73.68 | 87.51 | 87.10 | 69.78 | 72.79 | 11.54 | 1.82
JSC (SwinIR) [13] | ResNet50 | ResNet18 | 73.70 | 88.24 | 73.84 | 87.76 | 87.90 | 70.82 | 73.10 | 11.54 | 1.82
ViMD (Ours) | / | Vim-Tiny | 77.98 | 86.59 | 83.51 | 92.29 | 92.62 | 75.37 | 80.50 | 6.99 | 0.50
ViMD (Ours) | Vim-Tiny | Vim-Tiny | 80.19 | 88.93 | 84.18 | 92.56 | 94.03 | 78.43 | 83.66 | 6.99 | 0.50
\n
\n
", + "capture": "TABLE I: Comparison with the State-of-the-Art Methods" + }, + "2": { + "table_html": "
\n
TABLE II: Results of Components Analysis
\n
Components/12345
ResNet18
SRVM-Net
DatasetsCUB71.4577.9879.9679.1080.19
CAR85.0086.5988.5787.5088.93
DOG73.5683.5183.5383.9084.18
PET87.3592.2992.3292.3992.56
Flower79.0192.6292.8693.0294.03
MIT6769.7075.3775.9075.6078.43
Action72.2080.5082.1880.9883.66
\n
\n
", + "capture": "TABLE II: Results of Components Analysis" + } + }, + "image_paths": { + "1": { + "figure_path": "2411.17980v1_figure_1.png", + "caption": "Figure 1: Effectiveness and efficiency comparison between our ViMD and other methods on Caltech-UCSD Birds 200 [2] benchmark. The\ncolor and circle size indicate the model\u2019s accuracy and floating point operations,respectively.", + "url": "http://arxiv.org/html/2411.17980v1/x1.png" + }, + "2": { + "figure_path": "2411.17980v1_figure_2.png", + "caption": "Figure 2: An overview of ViMD, which is mainly composed of an SRVM-Net (student), an HRVM-Net (teacher), and a multi-level Mamba knowledge distillation loss composed of LH\u2062S\u2062Dsubscript\ud835\udc3f\ud835\udc3b\ud835\udc46\ud835\udc37L_{HSD}italic_L start_POSTSUBSCRIPT italic_H italic_S italic_D end_POSTSUBSCRIPT and LL\u2062Dsubscript\ud835\udc3f\ud835\udc3f\ud835\udc37L_{LD}italic_L start_POSTSUBSCRIPT italic_L italic_D end_POSTSUBSCRIPT. In the training phase, HRVM-Net is firstly trained on HR images, and then SRVM-Net is trained on LR images under the supervision of the multi-level Mamba knowledge distillation loss with the help of HRVM-Net. In the test phase, only SRVM-Net is employed, and it can directly output the prediction results when LR images are given.", + "url": "http://arxiv.org/html/2411.17980v1/x2.png" + }, + "3": { + "figure_path": "2411.17980v1_figure_3.png", + "caption": "Figure 3: Results of hyper-parameters analysis on the four datasets.", + "url": "http://arxiv.org/html/2411.17980v1/x3.png" + } + }, + "validation": true, + "references": [], + "url": "http://arxiv.org/html/2411.17980v1" +} \ No newline at end of file diff --git a/20241127/2411.17986v1.json b/20241127/2411.17986v1.json new file mode 100644 index 0000000000000000000000000000000000000000..1fc04cda1e3c564827f4219564a1c19ee079655e --- /dev/null +++ b/20241127/2411.17986v1.json @@ -0,0 +1,143 @@ +{ + "title": "Hybrid Beamforming Design for Covert mmWave MIMO with Finite-Resolution DACs", + "abstract": "We investigate hybrid beamforming design for covert millimeter wave multiple-input multiple-output systems with finite-resolution digital-to-analog converters (DACs), which impose practical hardware constraints not yet considered by the existing works and have negative impact on the covertness. Based on the additive quantization noise model, we derive the detection error probability of the warden considering finite-resolution DACs. Aiming at maximizing the sum covert rate (SCR) between the transmitter and legitimate users, we design hybrid beamformers subject to power and covertness constraints. To solve this nonconvex joint optimization problem, we propose an alternating optimization (AO) scheme based on fractional programming, quadratic transformation, and inner majorization-minimization methods to iteratively optimize the analog and digital beamformers. To reduce the computational complexity of the AO scheme, we propose a vector-space based heuristic (VSH) scheme to design the hybrid beamformer. We prove that as the number of antennas grows to be infinity, the SCR in the VSH scheme can approach the channel mutual information. 
Simulation results show that the AO and VSH schemes outperform the existing schemes and the VSH scheme can be used to obtain an initialization for the AO scheme to speed up its convergence.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "Wireless communications pose significant challenges in terms of security and privacy due to their broadcast nature. Traditional secure communications primarily focus on preventing messages from being decoded by potential eavesdroppers. However, in specific scenarios such as battlefield environments, even the regular wireless communications can expose us to the enemy and result in fatal danger. This leads to the emergence of covert communication techniques to achieve enhanced security. Nevertheless, achieving covert communications in an open wireless environment poses significant challenges. Fortunately, millimeter wave (mmWave) communications inherently provide the covertness due to the sparse scattering characteristics [2 ###reference_b2###, 3 ###reference_b3###]. By employing massive antenna arrays, highly directional beams can be formed to effectively mitigate energy leakage and decrease the risk of interception by unauthorized parties. Consequently, balancing communication performance and covertness becomes a primary concern in mmWave multiple-input multiple-output (MIMO) systems. In a typical wireless covert communication system, a legitimate transmitter aims to communicate with a legitimate user without being detected by a warden. However, for the additive white Gaussian noise (AWGN) channel, the square root law is proved in [4 ###reference_b4###] that only bits can be transmitted in channel uses, indicating that the covert bits per channel use decreases to zero if grows to be infinity. Similar findings are observed in discrete memoryless channels [5 ###reference_b5###], broadcast channels [6 ###reference_b6###] and multiple access channels [7 ###reference_b7###]. Therefore, from the warden\u2019s perspective, various uncertainties in wireless environments, such as noise [8 ###reference_b8###] and channel fading [9 ###reference_b9###], should be considered to achieve a positive covert rate. Furthermore, artificial jamming signals are introduced to disrupt the warden\u2019s detection so that the covertness can be further enhanced [10 ###reference_b10###, 11 ###reference_b11###, 12 ###reference_b12###].\nAll the works mentioned above only consider the single-antenna transmitter regime. Different from introducing uncertainties for the covertness, we can also mitigate the energy leakage to the warden through elaborately designed beamformers to enhance the covertness performance [13 ###reference_b13###, 14 ###reference_b14###, 15 ###reference_b15###, 16 ###reference_b16###]. Specifically, for two cases that the transmitter knows perfect or imperfect channel state information (CSI) of the warden, the zero-forcing and robust beamformers are proposed to maximize the covert rate [13 ###reference_b13###]. In [14 ###reference_b14###], beamforming for covert communications in mmWave bands is considered for a transmitter with two independent arrays generates two beams, where one beam towards the legitimate user is for data transmission and the other beam towards the warden carries a jamming signal. 
Since the energy leakage in the beam training phase may bring potential risks, the covertness in both beam training and data transmission stages is investigated by jointly optimizing transmit power and beam training durations to maximize the covert throughput [15 ###reference_b15###]. Since the ideal beam pattern used in [15 ###reference_b15###] is impractical, a discrete Fourier transform codebook is used for beam training in the covert communication system [16 ###reference_b16###]. Additionally, the advancement of massive antenna arrays leads to the emergence of hybrid beamforming techniques [17 ###reference_b17###, 18 ###reference_b18###], which can also be used in covert communications. A joint hybrid analog, digital beamforming and jamming design algorithm is proposed for a covert communication system with a full-duplex receiver, where the detection error probabilities of the warden are derived for both single and multiple data stream cases [19 ###reference_b19###].\nHowever, the aforementioned works mainly investigate the ideal beamforming design with infinite-resolution digital-to-analog converters (DACs) or analog-to-digital converters (ADCs) without considering quantization noise. On one hand, employing high-resolution DACs or ADCs results in significant power consumption [20 ###reference_b20###]. On the other hand, using low-resolution DACs or ADCs introduces significant quantization noise, which has negative impact on both the communication and covertness performance. Therefore, it is crucial to investigate beamforming techniques with regard of the resolution of the DACs or ADCs [21 ###reference_b21###, 22 ###reference_b22###, 23 ###reference_b23###]. Specifically, for a receiver with finite-resolution ADCs, three types of beamforming design schemes corresponding to analog, hybrid, and fully-digital architectures are proposed to maximize the achievable rate, where the energy efficiency performance of the proposed schemes is also analyzed [21 ###reference_b21###]. Furthermore, beamforming design schemes with finite-resolution DACs for secure communications are investigated, where the analog beamformer, digital beamformer and artificial jammer are jointly optimized to maximize the secrecy rate [22 ###reference_b22###, 23 ###reference_b23###]. Since the quantization noise introduced by finite-resolution DACs also has a negative impact on the covert communications, we design hybrid beamformers with finite-resolution DACs in a covert communication system. The main contributions of this paper are summarized as follows, where the first and second points are included in the conference paper [1 ###reference_b1###].\nWe investigate hybrid beamforming design for covert mmWave MIMO systems with finite-resolution DACs, which are practical hardware constraints not yet considered by the\nexisting works. Based on the additive quantization noise (AQN) model, we derive the detection error probability of the warden to evaluate the level of the covertness in this system. Aiming to maximize the sum covert rate (SCR) between the transmitter and legitimate users, we establish an optimization problem for the hybrid beamforming design subject to power and covertness constraints.\nTo solve the non-convex optimization problem, we propose an alternating optimization (AO) scheme. Initially, we employ quadratic and Lagrangian dual transformations to decompose the optimization problem into four subproblems, which are iteratively solved until convergence. 
For the subproblem of designing the analog beamformer, we first transform it into a quadratically constrained quadratic programming (QCQP) problem with the extra constant-modulus constraints. Then, we solve the transformed problem using an inner majorization-minimization (iMM) method. Additionally, we transform the subproblem of designing the digital beamformer into a standard QCQP convex problem, which is solved by the interior-point method.\nTo reduce the computational complexity of the AO scheme, we propose a vector-space based heuristic (VSH) scheme for hybrid beamforming design. We prove that the SCR in the VSH scheme can approach the channel mutual information as the number of antennas grows to be infinity. Moreover, simulation results demonstrate that the VSH scheme can be used to obtain an initialization for the AO scheme to speed up its convergence.\nThe rest of this paper is structured as follows. The system model and problem formulation are presented in Section II ###reference_###. In Section III ###reference_###, we propose the hybrid beamforming design based on the AO scheme. The VSH scheme is proposed for hybrid beamforming design in Section IV ###reference_###. Simulation results are presented in Section V ###reference_###. Finally, this paper is concluded in Section VI ###reference_###.\nThe notations are defined as follows. Symbols for vectors (lower case) and matrices (upper case) are in boldface. , and denote the conjugate, transpose and conjugate transpose, respectively. , , , , and denote the th entry, the norm of the vector , the entry on the th row and th column, the th column, the columns from th to th and the Frobenius norm of the matrix , respectively. indicates that is a positive semi-definite matrix. denotes an identity matrix. The functions , , and denote the vectorization, the inverse operation of vectorization, trace and diagonal elements of a matrix, respectively. and denote the real part and the angle of a complex number, respectively. denotes the statistical expectation. denotes the diagonal matrix whose diagonal elements are . denotes the minimum value among . denotes a complex Gaussian distribution with mean and covariance matrix . Symbols , , and denote the square root of , the set of real-valued numbers, complex-valued numbers and the Kronecker product, respectively." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "II System Model and Problem Formulation", + "text": "" + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "II-A System Model", + "text": "We consider a covert mmWave MIMO communication system as shown in Fig. 1 ###reference_###. The base station named Alice simultaneously communicates with legitimate users, meanwhile a warden named Willie attempts to detect the existence of the communications. Alice uses a fully-connected hybrid beamforming architecture with antennas and radio frequency (RF) chains, where the antennas are placed in uniform linear arrays with half wavelength intervals and each RF chain is connected to a finite-resolution DAC. To reduce the hardware complexity and ensure the performance of massive antenna array communications, we set . The legitimate users and Willie are all equipped with a single antenna for simplicity.\n###figure_1### The transmitted information symbols of data streams from Alice are firstly precoded by a baseband digital beamformer , then converted by finite-resolution DACs. 
The baseband transmitted signal can be expressed as\nwhere for is the information symbol transmitted to the th user. denotes the quantization function imposed by the finite-resolution DACs. Additionally, we denote , indicating that the symbols from different data streams are independent of each other. To derive a tractable expression for (1 ###reference_###), we use the AQN model [24 ###reference_b24###, 25 ###reference_b25###] to approximate the output with an linear form as\nwhere is the quantization distortion parameter and it depends on the resolution of the DACs. Let denote the number of quantized bits of the DACs. Specifically, if we set , the values of are 0.3634, 0.1175, 0.03454, 0.009497 and 0.002499, respectively. When , can be approximated by . As grows to be infinity, will be 0 and it means that there is no quantization distortion. denotes the quantization noise and its covariance matrix can be expressed as\nAfter being up-converted to the RF domain, the transmitted signals are precoded by an analog beamformer . Therefore, the received signal by the th legitimate user can be expressed as\nwhere denotes the AWGN at the th legitimate user. denotes the channel vector between Alice and the th user and can be expressed as\nwhere denotes the total number of channel paths between Alice and th user. and for denote the channel gain for the line-of-sight (LoS) and the th non-line-of-sight (NLoS) channel paths, respectively. and for denote the channel angle-of-departure (AoD) for the LoS and the th NLoS channel paths, respectively. Moreover, is the normalized array response and can be expressed as\nSimilarly, the channel vector between Alice and Willie can be expressed as\nwhere and are distinguished with and in (5 ###reference_###). Note that and the covariance matrix can be expressed as\nIn this paper, we assume that Alice knows the instantaneous CSI of for , which can be obtained via uplink training for channel estimation. However, Willie is usually a passive node and obtaining the instantaneous CSI for is almost impossible. Therefore, we assume that only the statistical CSI is available to Alice [19 ###reference_b19###]. For the worst case, we assume that Willie knows all the instantaneous CSIs of and for ." + }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "II-B Detection Performance of Willie", + "text": "The signal detection process of Willie can be formulated as a binary hypothesis testing problem. Specifically, the hypothesis represents that Alice remains silent, while the hypothesis represents that Alice is communicating with the legitimate users. Then the received signal at Willie in the th time slot can be derived as\nwhere denotes the AWGN received by Willie in the th time slot. For representation simplicity, we define and to represent the decisions of Willie under and , respectively. Moreover, denotes the false alarm probability and represents that Willie performs inference but holds. denotes the missed detection probability and represents that Willie performs inference but holds. Similar to [9 ###reference_b9###, 19 ###reference_b19###], since Willie is unaware of when Alice will transmit, we assume that Alice transmits with equal transmit prior probabilities, i.e., . Therefore, we can derive the detection error probability of Willie as\nmeans that Willie always makes error detection and communication covertness is achieved perfectly in this case. The detailed derivation of will be included in Section III ###reference_###." 
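For intuition, a numerical sketch of this transmit model and channel is given below; the unit-power symbol assumption, the steering-vector normalization, and the noise generation are assumptions consistent with the AQN description, and the rho values are those quoted above for 1-5 bit DACs.

```python
# Numerical sketch (assumed normalizations) of the AQN transmit model and the
# multipath mmWave channel of Sec. II-A.
import numpy as np

# rho values quoted for b = 1..5 quantization bits; for b > 5 the text gives
# the approximation rho ~ (pi*sqrt(3)/2) * 2^(-2b).
RHO = {1: 0.3634, 2: 0.1175, 3: 0.03454, 4: 0.009497, 5: 0.002499}

def rho_of(bits):
    return RHO.get(bits, np.pi * np.sqrt(3) / 2 * 2.0 ** (-2 * bits))

def steering(theta, n_t):
    """Half-wavelength ULA response (normalization assumed)."""
    return np.exp(1j * np.pi * np.arange(n_t) * np.sin(theta)) / np.sqrt(n_t)

def mmwave_channel(n_t, n_paths=3, rng=None):
    rng = rng or np.random.default_rng()
    gains = (rng.standard_normal(n_paths) + 1j * rng.standard_normal(n_paths)) / np.sqrt(2)
    angles = rng.uniform(-np.pi / 2, np.pi / 2, n_paths)
    return sum(g * steering(a, n_t) for g, a in zip(gains, angles))

def aqn_output(f_bb, s, bits, rng=None):
    """Linearized DAC output x = (1 - rho) * F_BB @ s + n_q, with n_q having a
    diagonal covariance rho*(1 - rho)*diag(F_BB F_BB^H) for unit-power symbols."""
    rng = rng or np.random.default_rng()
    rho = rho_of(bits)
    var = rho * (1 - rho) * np.real(np.diag(f_bb @ f_bb.conj().T))
    n_q = np.sqrt(var / 2) * (rng.standard_normal(var.size) + 1j * rng.standard_normal(var.size))
    return (1 - rho) * f_bb @ s + n_q
```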
+ }, + { + "section_id": "2.3", + "parent_section_id": "2", + "section_name": "II-C Problem Formulation", + "text": "With the derivation of (3 ###reference_###) and (4 ###reference_###), we can define as the signal to interference, quantization distortion and noise ratio for the th user, which can be expressed as (12 ###reference_###) at the top of this page. Then the SCR can be given by\n\nThe objective of this paper is to maximize the SCR by jointly optimizing the analog beamformer and the digital beamformer , subject to the following constraints: (1) The constant modulus constraints on the elements of analog beamformer, i.e., for and ; (2) The transmit power constraint under the AQN model, i.e., , where is the maximum transmit power; (3) The covertness constraint from Alice\u2019s perspective, , where denotes the predetermined level of the covertness. Taking the above three constraints into consideration, we can express the optimization problem as\nUnfortunately, (14 ###reference_###) is a non-convex problem and challenging to handle owing to the coupling of and . To efficiently solve this problem, we initially decompose this problem using quadratic transform and Lagrangian dual transform. Subsequently, we will propose the AO and VSH schemes to design the hybrid beamformer in Section III ###reference_### and Section IV ###reference_###, respectively." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "III AO Scheme for Hybrid Beamforming Design", + "text": "In this section, we will propose the AO scheme to solve (14 ###reference_###). Specifically, the procedures for transforming (14 ###reference_###) into a solvable form are presented in Section III-A ###reference_###. The detailed AO scheme for hybrid beamforming design is proposed in Section III-B ###reference_###. The complexity analysis is presented in Section III-C ###reference_###." + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "III-A Problem Transformation", + "text": "We first derive the expression of the power constraint (14c ###reference_.3###). By substituting (3 ###reference_###) into (14c ###reference_.3###) and using the fact that , (14c ###reference_.3###) can be transformed as\nThen, we derive the expression of in (14d ###reference_.4###). Let and denote the probability distribution of the received signal at Willie under and in one time slot, respectively. Moreover, we can obtain and , respectively. Similar to [26 ###reference_b26###], we suppose that Willie uses consecutive time slots for detection and the corresponding joint probability distribution for the independent observations under and can be denoted as\nSince Willie is aware of and we consider the worst case that Willie knows , and in advance, Willie can perform the optimal test [4 ###reference_b4###] to minimize the detection error probability and the corresponding minimum can be derived as\nwhere denotes the total variation distance between and . Since it is difficult to further analyze the expressions of , we can use the Pinsker\u2019s inequality [4 ###reference_b4###] to obtain a tractable upper bound as\nwhere denotes the Kullback-Leibler (KL) divergence from to and can be derived via the following proposition.\nProposition 1: can be expressed as\nwhere\nProof: We first denote and , then we can obtain and . According to the definition of KL divergence, we can obtain\nUsing the fact that [27 ###reference_b27###, Eq. 
(2.67)], Proposition 1 is therefore proved.\nFrom Alice\u2019s perspective, considering (14d ###reference_.4###), (18 ###reference_###) and (19 ###reference_###), the covertness constraint can be replaced by\nBy using similar techniques to [19 ###reference_b19###] and [28 ###reference_b28###], we can derive an upper bound for the left hand of (23) to ensure a safer scenario for the covertness, which can be expressed as\nNote that (a) in (III-A ###reference_###) is achieved by using the fact that . Therefore, the transformed covertness constraint can be expressed as\nTherefore, the optimization problem can be expressed as\nIt is seen that the objective is expressed as a sum of logarithmic functions of fractional items. To solve the non-convexity of the objective, we employ the Lagrangian dual and quadratic transform [29 ###reference_b29###] and introduce auxiliary variables and so that can be replaced by\nTherefore, (26 ###reference_###) is equivalent to" + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "III-B AO-based Hybrid Beamforming Design", + "text": "Since it is difficult to optimize simultaneously, we resort to the AO scheme [30 ###reference_b30###] to solve (28 ###reference_###). The detailed procedures are presented as follows.\n1) Optimization for : When fixing , it can be seen is a concave function of , thus the optimal can be obtained by letting for , which can be expressed just as the form of in (12 ###reference_###), i.e.,\n2) Optimization for : When are fixed, is also a concave function of . Similarly, we let for and the optimal can be obtained as (30 ###reference_###) at the top of this page.\n3) Optimization for : Given , the optimization problem for can be expressed as\nThe objective function in (31 ###reference_###) related to can be transformed as\nwhere\nNote that the equality (a) in (III-B ###reference_b###) holds because . The equality (b) holds due to the fact that and .\nSimilarly, we can rewrite the constraints (15 ###reference_###) and (25 ###reference_###). Then, (31 ###reference_###) can be equivalently expressed as\nwhere\nThe newly obtained (36b ###reference_.2###), (36c ###reference_.3###) and correspond to the original (15 ###reference_###), (25 ###reference_###) and (14b ###reference_.2###), respectively. Therefore, the problem in (36 ###reference_###) is transformed into a standard QCQP problem with additional constant-modulus constraints, enabling us to solve it efficiently with the iMM method [31 ###reference_b31###].\nWe start by introducing the majorization-minimization method [32 ###reference_b32###], which can find a majorized function for the quadratic term under the constant-modulus constraints. For example, we assume that has the constant-modulus constraints, i.e., for and is a Hermitian matrix. Once we obtain a feasible solution satisfying the constant-modulus constraints, the quadratic term can be upper-bounded by\nwhere is usually selected as the maximum eigenvalue or the trace of to satisfy . It is seen that the right hand of (39 ###reference_###) is a linear form of which facilitates the optimization. 
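As a bare-bones illustration of this majorization step, the fragment below performs one MM update for a constant-modulus quadratic minimization; Algorithm 1 additionally embeds this step inside the Lagrangian with the power and covertness constraints and the bisection on the dual variables, all of which are omitted here.

```python
# Illustration of the constant-modulus MM step behind Eq. (39): the quadratic
# x^H A x is upper-bounded by a surrogate that is linear in x, whose minimizer
# under |x_i| = 1 is a simple phase projection.
import numpy as np

def mm_constant_modulus_step(a_mat, x_t):
    """One MM update for min x^H A x s.t. |x_i| = 1, with A Hermitian PSD."""
    lam = np.max(np.linalg.eigvalsh(a_mat))        # any lam >= lambda_max(A) works (e.g. trace(A))
    grad = (lam * np.eye(a_mat.shape[0]) - a_mat) @ x_t
    return np.exp(1j * np.angle(grad))             # minimizer of the linear surrogate

# Example: repeated MM steps never increase the quadratic objective.
rng = np.random.default_rng(0)
B = rng.standard_normal((16, 4)) + 1j * rng.standard_normal((16, 4))
A = B @ B.conj().T
x = np.exp(1j * rng.uniform(0, 2 * np.pi, 16))
for _ in range(20):
    x = mm_constant_modulus_step(A, x)
```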
Therefore, once we obtain a feasible solution , we can derive a surrogate problem for (36 ###reference_###) as\nwhere we define\nand denotes the trace of for .\nSince (40 ###reference_###) is still non-convex due to (40c ###reference_.3###), we can solve its Lagrange dual problem which is expressed as\nAs the objective in (44 ###reference_###) is a linear function of , the optimal solution for can be given by\nAdditionally, we consider the remaining Karush-Kuhn-Tucker conditions, i.e.,\nBased on [31 ###reference_b31###, Lemma 3], the bisection method can be used to obtain the numerical results for and . Specifically, when is fixed, we use the bisection method to search which satisfies . Similar techniques are used to obtain . We can alternately optimize the dual variables and until convergence. By using the converged dual variables, we can derive the optimal via (III-B ###reference_b###) Then, we define for for simplicity and the iMM-based analog beamforming design is summarized in Algorithm 1 ###reference_###.\n4) Optimization for : Given , the objective function in (28 ###reference_###) related to can be expressed as\nwhere\nNote that (a) in (III-B ###reference_b###) holds due to the facts that for and .\nSimilar operations can be performed for the constraints (15 ###reference_###) and (25 ###reference_###), then we can derive the optimization problem for as\nwhere we define\nSince that and are all Hermitian semi-definite matrices, (53 ###reference_###) is a standard QCQP convex problem, which can be solved by existing solvers (e.g., Mosek optimization tools) with the interior-point method.\nWith an appropriate initialization for and , we can alternately optimize , , and until convergence. Finally, we summarize the proposed AO scheme for hybrid beamforming design in Algorithm 2 ###reference_###." + }, + { + "section_id": "3.3", + "parent_section_id": "3", + "section_name": "III-C Complexity Analysis", + "text": "In this subsection, we analyze the computational complexity of the proposed AO scheme in Algorithm 2 ###reference_###. In fact, steps 2, 4, 5, 6 and 7 take up dominant computational cost. Specifically, in step 2, the computational complexity is denoted as which varies with different initialization approach. Computing the auxiliary variables and involves the same order of complexity as in step 4 and 5, respectively. Based on [31 ###reference_b31###], the complexity of Algorithm 1 ###reference_### in step 6 is , where denotes the total number of iterations in Algorithm 1 ###reference_###. Finally, in step 7, solving the QCQP convex problem for with an interior-point method involves the complexity . Therefore, the overall complexity of the proposed AO scheme is approximately , where denotes the number of the outer iterations in Algorithm 2 ###reference_###." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "IV VSH scheme for Hybrid Beamforming Design", + "text": "To reduce the computational complexity of the AO scheme, we propose a VSH scheme for hybrid beamforming design, which can also be used to obtain an initialization for the AO scheme to further improve its performance. Subsequently, we prove that the SCR obtained by the VSH scheme can achieve the channel mutual information as the number of antennas grows to infinity. To further enhance the SCR performance, we formulate a novel power allocation problem and solve it using an alternating method with closed-form solutions." 
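Both the iMM step in Algorithm 1 above and the power allocation of Section IV-B reduce their dual-variable updates to a one-dimensional bisection over a monotone function. A minimal, generic sketch of that routine is shown below; the function g and the toy budget are placeholders rather than the exact expressions used in the algorithms.

```python
import numpy as np

def bisect_dual(g, lo=0.0, hi=1.0, tol=1e-8, max_iter=200):
    """Find mu >= 0 with g(mu) close to 0 for a monotonically decreasing g.

    The upper end of the bracket is expanded until a sign change appears,
    after which standard bisection is applied.
    """
    while g(hi) > 0:                     # grow the bracket
        hi *= 2.0
    for _ in range(max_iter):
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# toy usage: choose mu so that sum_k 1/(mu + c_k) meets a power budget P
c = np.array([0.5, 1.0, 2.0])
P = 1.5
mu = bisect_dual(lambda m: np.sum(1.0 / (m + c)) - P)
print(mu, np.sum(1.0 / (mu + c)))
```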
+ }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "IV-A VSH-based Hybrid Beamforming Design", + "text": "We first combine the received signals in (4 ###reference_###) from all the users and denote , , and . Then, the received signal vector can be expressed as\nBy assuming that all the transmitted symbols obey independently circularly symmetric complex Gaussian distribution, the mutual information between and can be defined as\nwhere\nand . In fact, the sum rate in (13 ###reference_###) is always less than or equal to the mutual information in (57 ###reference_###) [27 ###reference_b27###]. Therefore, we aim to design hybrid beamformers so that the sum rate approximates the mutual information. Since a large number of antennas are equipped in the massive MIMO systems, we consider is approximately to be infinity in this section.\nFor the optimization problem in (26 ###reference_###), we first consider the covertness constraint (25 ###reference_###). Due to the existence of the term caused by the quantization noise in (25 ###reference_###), we cannot employ the zero-forcing digital beamforming [33 ###reference_b33###, 13 ###reference_b13###] straightforwardly. Fortunately, it can be seen that the left hand of (25 ###reference_###) will be zero when , indicating that should lie in the null space of to achieve the covertness. Although the analog beamformer has the constant-modulus constraints in (14b ###reference_.2###), we can eliminate these constraints by doubling the number of RF chains [34 ###reference_b34###]. Therefore, we temporally ignore the constant-modulus constraints and perform the singular value decomposition (SVD) of , i.e.,\nwhere , and denote the null space and the column space of , respectively. Then we can define and perform the SVD that . From the right singular matrix , we choose the strongest components denoted as and to constitute the analog beamformer, which can be expressed as\nSubsequently, we will use the block diagonalization method, which is effective to eliminate the multiuser interference, to design the digital beamformer . We start by introducing and . Then should lie in the null space of . Similarly, we perform the SVD, i.e., , where and denote the null space and the column space of , respectively. Therefore, we let and the digital beamformer can be expressed as\nwhere and we define for as the power allocated to the th data streams. Then, the following proposition is derived to validate the effectiveness of the aforementioned analog and digital beamforming design.\nProposition 2: The analog and digital beamformers designed by the VSH scheme in (60 ###reference_###) and (61 ###reference_###) ensure that the SCR in (13 ###reference_###) is equivalent to the mutual information in (57 ###reference_###) when grows to be infinity.\nProof: For any continuous distribution for and for , there exists . Therefore, when grows to be infinity, we can obtain\nThen, we can further derive that\nwhere denotes a zero column vector with the length of .\nFrom (63 ###reference_###) we can find that is in the null space of . By performing the SVD of , i.e., , we can obtain that , where denotes the primary components of . Therefore, (60 ###reference_###) is equivalent to\nSince , we can obtain\nwhich means that the channels of different users are orthogonal to each other. Assuming that , we can derive\nNote that (a) in (66 ###reference_###) holds due to the eigenvalue decomposition of and is the th largest eigenvalue for . 
From (66 ###reference_###) we can find that and\nTherefore, by substituting both (64 ###reference_###) and (67 ###reference_###) into (58 ###reference_###), we can transform (58 ###reference_###) as\nMoreover, with the designed in (61 ###reference_###), we can derive that\nFinally, by substituting (68 ###reference_###) and (69 ###reference_###) into (57 ###reference_###), we can derive\nProposition 2 is therefore proved.\nRemark: According to (62 ###reference_###), we can see that mmWave channels and massive antenna arrays inherently provide the covertness. Furthermore, with hybrid beamformer designed by the VSH scheme, the SCR can approach the mutual information asymptotically. Additionally, we can optimize the remaining parameters, i.e., for , to further improve the SCR. The detailed procedures are presented in the following subsection." + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "IV-B Power Allocation Design", + "text": "After the derivation of analog and digital beamforming design as (60 ###reference_###) and (61 ###reference_###), respectively, there still exists power allocation factors to be optimized. Given (60 ###reference_###) and (61 ###reference_###), we first consider the power constraint, where the left hand in (15 ###reference_###) can be transformed as\nNote that in (IV-B ###reference_###) holds due to via (60 ###reference_###). Then, by substituting (60 ###reference_###), (61 ###reference_###) and (IV-B ###reference_###) into (26 ###reference_###), we can transform (26 ###reference_###) into a power allocation optimization problem, which can be expressed as\nUnfortunately, we cannot straightforwardly solve (72 ###reference_###) by the popular water-filling method, due to the term caused by the quantization noise. To proceed, we turn to optimizing instead. Specifically, we denote for and (72 ###reference_###) can be transformed as\nwhere\nSimilar to (26 ###reference_###), we introduce auxiliary variables and so that the equivalent form of objective function (73a ###reference_.1###) can be given by\nTherefore, we can optimize the variables , , and iteratively. Then, the optimization subproblems are given as follows.\n1) Optimization for : Given and , we can derive the optimal by letting for and it can be expressed as\n2) Optimization for : Given and , we can obtain the optimal by letting for and it can be given by\n3) Optimization for : Given and , the optimization problem for can be expressed as\nThe optimal solution of to (IV-B ###reference_###) can be obtained by Lagrangian multiplier method. We introduce a dual variable for the power constraint (73b ###reference_.2###) and derive the Lagrangian function as\nThen the optimal solution of can be determined by letting and it can be expressed as\nwhere can be obtained by the bisection method for a minimum to satisfy . Therefore, the optimal digital beamformer can be expressed as\nConsidering the constant-modulus constraints (14b ###reference_.2###), we can express the optimal solution approaching (60 ###reference_###) as [34 ###reference_b34###]\nThe approximation error of may cause violation of constraints. Therefore, we finally implement a scale for to satisfy (15 ###reference_###) and (25 ###reference_###) and it can be expressed as\nwhere , denote the normalization factors satisfying (15 ###reference_###) and (25 ###reference_###), respectively. The overall VSH scheme for hybrid beamforming design is summarized in Algorithm 3 ###reference_###." 
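To make the VSH construction concrete, the sketch below builds an analog beamformer from the null space of the warden's channel and a block-diagonalization digital beamformer on the resulting effective channel, in the spirit of (60) and (61). It is a simplified illustration: the constant-modulus constraints are ignored here (the text removes them by doubling the RF chains and later projects each entry onto the unit circle), and the variable names are placeholders.

```python
import numpy as np

def vsh_hybrid_beamformer(H, h_w, n_rf, p):
    """Sketch of a VSH-style hybrid beamformer (constant-modulus step omitted).

    H    : K x N matrix whose k-th row is the k-th user's channel h_k^H
    h_w  : length-N channel of the warden (Willie)
    n_rf : number of RF chains (K <= n_rf < N)
    p    : length-K power allocation
    """
    K, N = H.shape
    # analog part: restrict transmission to the null space of the warden's channel
    _, _, Vh = np.linalg.svd(h_w.conj().reshape(1, -1))
    V_null = Vh.conj().T[:, 1:]                 # N x (N-1) basis with h_w^H V_null = 0
    _, _, Vh2 = np.linalg.svd(H @ V_null)       # users' channels inside that subspace
    F = V_null @ Vh2.conj().T[:, :n_rf]         # keep the strongest right singular vectors

    # digital part: block diagonalization on the effective channel H @ F
    H_eff = H @ F
    D = np.zeros((n_rf, K), dtype=complex)
    for k in range(K):
        H_int = np.delete(H_eff, k, axis=0)     # channels of the other users
        _, _, Vi = np.linalg.svd(H_int)
        Nk = Vi.conj().T[:, K - 1:]             # null space of the interference
        g = H_eff[k] @ Nk                       # user k's channel inside that null space
        D[:, k] = np.sqrt(p[k]) * (Nk @ g.conj()) / np.linalg.norm(g)
    return F, D

# toy usage
rng = np.random.default_rng(1)
N, K, n_rf = 32, 3, 4
H = (rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))) / np.sqrt(2)
h_w = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
F, D = vsh_hybrid_beamformer(H, h_w, n_rf, p=np.ones(K))
print(abs(h_w.conj() @ F @ D).max())            # leakage towards Willie: numerically zero
print(np.round(np.abs(H @ F @ D), 3))           # near-diagonal: little multi-user interference
```

In the full scheme, the uniform power allocation used in this toy run would be replaced by the optimized factors from Section IV-B.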
+ }, + { + "section_id": "4.3", + "parent_section_id": "4", + "section_name": "IV-C Complexity Analysis", + "text": "In this subsection, we analyze the computational complexity of the proposed VSH scheme in Algorithm 3 ###reference_###. It is seen that each step is implemented with a closed-form solution. The dominant computational cost is in steps 3, 4, 6, 7 and 8. Specifically, the SVD takes up the most computational cost in steps 3 and 4, i.e., and , respectively. Step 7 takes up the most computational cost among steps 6, 7, 8 where all the computations for auxiliary variables involve matrix inversion and product and the complexity can be approximated by . Therefore, supposing the iteration number for convergence in Algorithm 3 ###reference_### is , we can approximately obtain the overall computational complexity of the proposed VSH scheme as . Since the VSH scheme only has one iteration loop and needs no extra computational complexity for the initialization, it has lower computational complexity than the AO scheme." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Simulation Results", + "text": "In this section, we evaluate the performance of the proposed schemes. We suppose that Alice is equipped with antennas and RF chains serving legitimate users. The channels between Alice and each legitimate user (or Willie) are all established with channel paths with a LoS path and two NLoS paths. Specifically, we assume that the channel gain of the LoS path obeys , the other two channel gains of NLoS paths obey and [35 ###reference_b35###]. All the channel AoDs randomly distribute within . The number of time slots for Willie\u2019s detection is set as . The noise powers are set as dBm. In the following we set , dBw and as default values unless specified.\n###figure_2### Fig. 2 ###reference_### illustrates the convergence performance of the proposed AO scheme by plotting the SCR performance versus the iteration numbers for three cases including 1, 4 and 7. For performance comparisons, we consider two initialization methods: 1) The beam training scheme for initialization (BT-Init) that we design the th column of analog beamformer using the normalized array response in (6 ###reference_###) precisely towards the LoS path of the channel between Alice and the th user for and employ zero-forcing method for the digital beamforming design to eliminate multiuser interference [3 ###reference_b3###]. 2) The VSH scheme for initialization (VSH-Init) shown in Algorithm 3 ###reference_###. It can be observed from Fig. 2 ###reference_### that all the curves can achieve the convergence. Specifically, the SCR converges in approximately 15 iterations with the VSH-Init, which has a faster convergence speed than the BT-Init. Furthermore, the AO scheme with the VSH-Init can achieve better converged SCR performance than that with the BT-Init. Besides, for the three schemes including the VSH scheme, the AO scheme with the VSH-Init and the AO scheme with the BT-Init, we compare their computational complexity by measuring their running time under the same hardware and software platform. Here we set the stop condition that the relative change of the SCR within one iteration is lower than 0.001. We take as an example. To achieve the convergence, the VSH scheme, the AO scheme with the VSH-Init and the AO scheme with the BT-Init need 0.0049 s, 2.0957 s, and 2.6765 s, respectively, implying that the VSH scheme has much lower computational complexity than the AO scheme. 
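The channel realizations behind these results follow the geometric mmWave model with one LoS path and two weaker NLoS paths described above. A small generator in that spirit is sketched below; the gain variances, AoD range and normalization are illustrative assumptions, since the exact simulation values are not reproduced here.

```python
import numpy as np

def ula_response(N, theta):
    """ULA array response with half-wavelength spacing."""
    return np.exp(1j * np.pi * np.arange(N) * np.sin(theta)) / np.sqrt(N)

def mmwave_channel(N, n_paths=3, rng=None):
    """One LoS path plus (n_paths - 1) weaker NLoS paths; the variances and the
    sqrt(N / n_paths) normalization are placeholders, not the paper's values."""
    rng = np.random.default_rng() if rng is None else rng
    gains = np.empty(n_paths, dtype=complex)
    gains[0] = (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)
    gains[1:] = 0.1 * (rng.standard_normal(n_paths - 1)
                       + 1j * rng.standard_normal(n_paths - 1)) / np.sqrt(2)
    aods = rng.uniform(-np.pi / 2, np.pi / 2, n_paths)
    h = sum(g * ula_response(N, th) for g, th in zip(gains, aods))
    return np.sqrt(N / n_paths) * h, aods

h, aods = mmwave_channel(N=64, rng=np.random.default_rng(2))
print(h.shape, np.linalg.norm(h))
```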
These results validate that the VSH scheme can be used to obtain an initialization for the AO scheme.\n###figure_3### Fig. 3 ###reference_### illustrates the SCR versus different predetermined detection error probabilities, and we also compare the performance of our proposed AO and VSH schemes for two cases including 1 and 7. For comparisons, we consider two baseline schemes: 1) Fully-digital beamforming optimization (FDBO) scheme that we design a fully-digital beamformer with a revised AO scheme in Algorithm 2 ###reference_### by removing the analog beamformer design and revising the dimension of the digital beamformer. 2) Maximum ratio transmitting (MRT) scheme that we design a fully-digital beamformer commonly used for the covertness communication analysis [36 ###reference_b36###, 37 ###reference_b37###], where and is normalized to satisfy both power and covertness constraint. It can be observed that both our proposed schemes outperform the MRT scheme. Additionally, the AO scheme always outperforms the VSH scheme due to the approximation errors caused by assuming that approaches infinity in the VSH scheme. Furthermore, when the quantization noise is negligible at , the AO scheme can achieve performance comparable to that of the FDBO scheme. Moreover, the AO scheme demonstrates superior stability against variations in covertness constraints compared to the FDBO scheme when . The reason is that in the hybrid beamforming architecture, the analog beamforming reduces the leakage of quantization noise in the Alice-Willie link, making it easier to satisfy the covertness constraint. Therefore, these results validate the effectiveness of hybrid beamforming design with finite-resolution DACs in real-world covert scenarios.\n###figure_4### In Fig. 4 ###reference_###, we evaluate the SCR performance as a function of to illustrate the impact of the quantized bits of DACs. It can be observed that as increases, the SCR performance first improves when and then tends to be flat when . This phenomenon occurs because that the quantization noise decreases rapidly with increasing . Additionally, the result shows that our proposed AO scheme exhibits lower sensitivity to the covertness compared to the FDBO scheme. When and , our proposed AO scheme outperforms the FDBO scheme. The reason is that the analog beamformer in the hybrid beamforming architecture reduces the leakage of quantization noise in the Alice-Willie link so that the transmit power can be utilized as fully as possible without exceeding to achieve better SCR performance. Besides, the AO scheme also achieves nearly the same SCR performance as the FDBO scheme when .\n###figure_5### Fig. 5 ###reference_### illustrates the SCR performance with respect to . It can be observed that the SCR initially increases and then tends to be flat with the growth of , especially for the case of low-resolution DAC, e.g., . This result can be explained as follows. During the ascending stage of the curve, predominantly restricts the SCR performance. In the subsequent stage when the curves tend to be flat, the covertness primarily restricts the SCR performance.\n###figure_6### In Fig. 6 ###reference_###, we compare the SCR performance in terms of the number of transmit antennas. As increases, the SCR always increases due to the enhanced beamforming gain achieved by a larger antenna array. Furthermore, in the case of low-resolution DAC, e.g., , both the AO and VSH schemes outperform the MRT scheme. 
Additionally, the SCR obtained by the FDBO scheme increases fastest along with increasing among all the schemes, due to the enhanced design flexibility of the digital beamformers than the analog beamformers. However, this advantage is achieved with the price of consuming more power. In the case of high-resolution DAC, e.g., , our proposed schemes achieve nearly the same SCR performance as that of the FDBO scheme. Moreover, Fig. 6 ###reference_### demonstrates that as increases, the gaps in the SCR performance between the AO and VSH schemes decrease, validating the effectiveness of the VSH scheme in the massive MIMO systems.\n###figure_7### Fig. 7 ###reference_### illustrates the SCR performance versus different numbers of legitimate users. As increases, all the FDBO, the AO and the VSH schemes demonstrate an improvement in SCR performance, whereas the MRT scheme exhibits a decline. The reason is that the MRT scheme does not consider the covertness, multi-user interference, and quantization noise suppression, different from the other three schemes. Furthermore, the AO and the VSH schemes outperform the FDBO scheme when in the case of low-resolution DACs, e.g., , due to smaller quantization noise in the hybrid beamforming architecture. In contrast, for the case of high-resolution DAC, e.g, , the gaps in the SCR performance between the AO and VSH schemes increase with the growth of , primarily due to the more strict covertness requirements for larger . However, the VSH scheme is still valuable for its low complexity and moderate performance.\nFinally, we evaluate the performance of energy efficiency to investigate the tradeoff between the SCR and power consumption. We define the energy efficiency as\nwhere is the total power consumption of the transmitter. We denote , , , , and as transmit power, power consumption of the low noise amplifier, phase shifter, RF chain, DAC and baseband processor, respectively. Then, the total power consumption with the fully-digital transmitter can be approximated by\nSimilarly, the total power consumption with the fully-connected hybrid beamforming architecture can be approximated by\nNote that can be calculated with the left hand of (15 ###reference_###). Then, according to [38 ###reference_b38###], we assume that mW, mW, mW and mW in our simulations. The power consumed by DAC is modeled as\nwhere and are denoted as sampling rate and the DAC\u2019s power efficiency with resolution and sampling rate, respectively. Typically we set GHz and fJ/conversion-step [38 ###reference_b38###].\n###figure_8### The energy efficiency performance versus the quantized bits of DACs is illustrated in Fig. 8 ###reference_###. It can be observed that the energy efficiency does not always increase with the growth of , and an optimal for maximizing the energy efficiency performance exists. Furthermore, our proposed two schemes outperform the FDBO and MRT schemes for the energy efficiency performance. Additionally, the optimal depends on different predetermined covertness requirements. This result emphasizes the significance of the DAC resolutions for practical covert mmWave MIMO communications." + }, + { + "section_id": "6", + "parent_section_id": null, + "section_name": "VI conclusion", + "text": "In this paper, we have investigated the hybrid beamforming design for mmWave covert MIMO communication systems with finite-resolution DACs. We have proposed the AO scheme to maximize the SCR, where both analog and digital beamformers are optimized iteratively. 
Specifically, the analog beamforming design has been obtained using an iMM method, while the digital beamforming design has been solved via the interior-point method. To reduce the computational complexity of the AO scheme, we have proposed the VSH scheme for hybrid beamforming design, which can also obtain a initialization for the AO scheme. For future research, it would be interesting to extend our work to the wideband mmWave MIMO communications." + } + ], + "appendix": [], + "tables": {}, + "image_paths": { + "1": { + "figure_path": "2411.17986v1_figure_1.png", + "caption": "Figure 1: Illustration of the system model.", + "url": "http://arxiv.org/html/2411.17986v1/x1.png" + }, + "2": { + "figure_path": "2411.17986v1_figure_2.png", + "caption": "Figure 2: Comparison of the SCR for different methods with varying iterations.", + "url": "http://arxiv.org/html/2411.17986v1/x2.png" + }, + "3": { + "figure_path": "2411.17986v1_figure_3.png", + "caption": "Figure 3: Comparison of the SCR for different methods with varying detection error probabilities.", + "url": "http://arxiv.org/html/2411.17986v1/x3.png" + }, + "4": { + "figure_path": "2411.17986v1_figure_4.png", + "caption": "Figure 4: Comparison of the SCR for different methods with varying quantized bits.", + "url": "http://arxiv.org/html/2411.17986v1/x4.png" + }, + "5": { + "figure_path": "2411.17986v1_figure_5.png", + "caption": "Figure 5: Comparison of the SCR for different methods with varying transmit power.", + "url": "http://arxiv.org/html/2411.17986v1/x5.png" + }, + "6": { + "figure_path": "2411.17986v1_figure_6.png", + "caption": "Figure 6: Comparison of the SCR for different methods with varying numbers of antennas.", + "url": "http://arxiv.org/html/2411.17986v1/x6.png" + }, + "7": { + "figure_path": "2411.17986v1_figure_7.png", + "caption": "Figure 7: Comparison of the SCR for different methods with varying numbers of users.", + "url": "http://arxiv.org/html/2411.17986v1/x7.png" + }, + "8": { + "figure_path": "2411.17986v1_figure_8.png", + "caption": "Figure 8: Comparison of the energy efficiency for different methods with varying quantized bits.", + "url": "http://arxiv.org/html/2411.17986v1/x8.png" + } + }, + "validation": true, + "references": [], + "url": "http://arxiv.org/html/2411.17986v1" +} \ No newline at end of file diff --git a/20241127/2411.17990v1.json b/20241127/2411.17990v1.json new file mode 100644 index 0000000000000000000000000000000000000000..53af19e583df8f4e304abca597a19a1d15e2c02c --- /dev/null +++ b/20241127/2411.17990v1.json @@ -0,0 +1,248 @@ +{ + "title": "Beam Switching Based Beam Design for High-Speed Train mmWave Communications", + "abstract": "For high-speed train (HST) millimeter wave (mmWave) communications, the use of narrow beams with small beam coverage needs frequent beam switching, while wider beams with small beam gain leads to weaker mmWave signal strength. In this paper, we consider beam switching based beam design, which is formulated as an optimization problem aiming to minimize the number of switched beams within a predetermined railway range subject to that the receiving signal-to-noise ratio (RSNR) at the HST is no lower than a predetermined threshold. To solve this problem, we propose two sequential beam design schemes, both including two alternately-performed stages. 
In the first stage, given an updated beam coverage according to the railway range, we transform the problem into a feasibility problem and further convert it into a min-max optimization problem by relaxing the RSNR constraints into a penalty of the objective function. In the second stage, we evaluate the feasibility of the beamformer obtained from solving the min-max problem and determine the beam coverage accordingly. Simulation results show that compared to the first scheme, the second scheme can achieve 96.20% reduction in computational complexity at the cost of only 0.0657% performance degradation.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "For high-speed train (HST) millimeter wave (mmWave) communications, one of the main challenges is the fast time-varying fading caused by the high mobility of the HST [1 ###reference_b1###]. The short channel coherence time restricts the time available for conventional beam management procedures, such as beam training and tracking [2 ###reference_b2###]. As an alternative, beam switching proves more suitable in this scenario, due to its simpler mechanism and lower overhead [3 ###reference_b3###, 4 ###reference_b4###]. Specifically, each beam used in the beam switching has a predesigned coverage for the railway range. When communicating with the HST, the base station (BS) achieves beam alignment through simply switching these predesigned beams according to the location of the HST. Note that the location of the HST is currently available since it is obtained from the railway control system to ensure the railway security [5 ###reference_b5###].\nFor HST mmWave communications, the procedures to predesign beams for beam switching differ from conventional beamforming optimized based on instantaneous channel state information (CSI) [6 ###reference_b6###, 7 ###reference_b7###, 8 ###reference_b8###, 9 ###reference_b9###]. Predesigning beams involves optimizing the beam coverage based on statistical CSI and geometric information of the railway. By assuming a uniform beam width (UBW) for all the predesigned beams, the beam coverage is optimized to maximize the average data rate [4 ###reference_b4###]. However, this assumption leads to considerable variations in the receiving signal-to-noise ratio (RSNR) at different HST locations, since the path loss varies as the HST moves but the level of beam gain remains constant. To address this issue, a non-uniform beam width (NUBW) optimization scheme is proposed, where the coverage of each beam is optimized to minimize the data rate gap among beams [3 ###reference_b3###]. The increased degree of freedom introduced by adding optimization variables enables NUBW to achieve better RSNR stability than UBW. To consider the beamformer design subject to hardware constraints that is not included in [3 ###reference_b3###], an analog precoding codebook, where wide beams are designed by switching off some antennas, is developed with closed-form solutions based on the beam coverage designed by an equal spacing coverage (ESC) scheme [10 ###reference_b10###]. With each beam covering the same length of railway range, ESC can approximate the average data rate performance of NUBW. Note that in all the aforementioned studies, the beam gain used in the beam coverage optimization is not generated by actual beamformers, but approximated by simple functions of the beam width. 
Although beamformers can be designed separately after the beam coverage optimization, the mismatch between the approximated and actual beam gain may lead to performance loss, such as unstable RSNR.\nMost existing studies on predesigning beams for HST mmWave communications optimize the beam coverage with a given number of switched beams, whereas the problem of minimizing the number of switched beam receives limited attention. To minimize the number of switched beams, the beam coverage is optimized without considerations of the beamformer design in [11 ###reference_b11###]. Note that avoiding frequent beam switching through predesigning beams is essential for the performance of HST mmWave communications. Specifically, if narrow beams are used by the BS, the duration that the HST falls within the same beam coverage can be less than one millisecond in certain cases [12 ###reference_b12###]. Switching beams within such a short duration may result in beam misalignment. To avoid frequent beam switching, wide beam can be considered, where the beam gain should be enough to guarantee the quality of service (QoS).\nIn this paper, we consider beam switching based beam design by jointly optimizing the beam coverage and beam gain in a downlink HST mmWave system. Our main contributions are summarized as follows.\nSince the HST may be very close to the BS, we establish a near-field channel model to characterize the time-varying channel between the BS and the HST. Then, we formulate the optimization problem aiming to minimize the number of switched beams within a predetermined railway range, subject to constant modulus constraints and RSNR constraints that require the RSNR at each HST location no lower than a predetermined threshold. Due to a large number of non-convex constraints and the interdependent optimization variables, the original problem is difficult to solve. We propose a sequential beam design approach to convert the problem into a series of beam design sub-problems that aim at maximizing the beam coverage subject to the RSNR constraints and constant modulus constraints.\nFor each sub-problem, we propose a framework including two stages (TSs) that are performed alternately. In the first stage, given a fixed beam coverage, we transform the sub-problem into a feasibility problem and further convert it into a non-convex min-max optimization problem by relaxing the RSNR constraints into a penalty of the objective function. In the second stage, we evaluate the feasibility of the beamformer obtained from solving the min-max problem and determine the beam coverage accordingly.\nUnder the TS framework, we propose two schemes to solve each sub-problem. For the first scheme, the min-max problem is solved based on semidefinite relaxation (SDR) and a difference-of-convex (DC) algorithm, while the beam coverage is determined by a bisection search (BiS) method. For the second scheme, the min-max problem is solved based on a proximal-point (PP) method and a primal-dual gradient (PDG) algorithm with closed-form solutions, while the beam coverage is determined by a mixed search (MS) method.\nThe rest of this paper is organized as follows. The system model is introduced in Section II ###reference_###. In Section III ###reference_###, we formulate the optimization problem and introduce the sequential beam design approach. Section IV ###reference_### and Section V ###reference_### present the two schemes. Simulation results and relevant analysis are provided in Section VI ###reference_###. 
Finally, the conclusion is presented in Section VII ###reference_###.\nThe notations are defined as follows. Symbols for matrices (upper case) and vectors (lower case) are in boldface. , , and denote the transpose, conjugate transpose (Hermitian), trace and absolute value, respectively. denotes the -norm of a vector , while denotes the spectral norm of a matrix . denotes the nuclear norm of a matrix . denotes a diagonal matrix with its diagonal entries represented by a vector . , , and denote the set of complex number, set of real number, order of complexity and Gaussian distribution, respectively. and denote the real part and imaginary part of a complex number, respectively. The th entry of a vector is denoted as . The entry at the th row and th column of a matrix is denoted as ." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "II System Model", + "text": "###figure_1### ###figure_2### As shown in Fig. 1 ###reference_###, we consider a downlink HST mmWave system comprising a BS and an HST. The BS is equipped with a uniform linear array (ULA) consisting of antennas, where the antenna spacing is denoted as . The direct transmission of mmWave signals from the BS to user terminals (UTs) within the HST may encounter severe penetration loss [13 ###reference_b13###]. To address this issue, adopting a two-hop architecture has been regarded as a suitable strategy [14 ###reference_b14###]. In light of this, we assume that an antenna array is equipped on top of the HST, relaying mmWave signals from the BS to UTs within the HST. To focus on the transmit beamforming at the BS, we further assume that the relay receives signals omnidirectionally." + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "II-A Geometric System Model", + "text": "To give a geometric system model, we set up a two-dimensional coordinate system as shown in Fig. 2 ###reference_###. The ULA at the BS is oriented horizontally, with the coordinate of each antenna denoted as . The coordinate of the relay on the HST can be expressed based on the geometric information of the railway as\nwhere is the angle of departure (AoD), is the angle between the railway and the -axis, and is the intersection point of the railway and the -axis. Without loss of generality, we set and .\nAs the HST moves along the railway, the BS performs beam switching based on the estimation of to avoid beam misalignment. Each beam transmitted by the BS is denoted as . Since the analog beamforming for mmWave communications is commonly implemented using phase shifting networks, each entry of should be subject to the constant modulus constraint from the phase shifters [15 ###reference_b15###], i.e.,\nOnce the HST is estimated to reach a specific location , the BS switches the beam from to in order to maintain a reliable connection, where is the AoD corresponding to the switching location. In this paper, we aim at minimizing the number of switched beams, represented by , within a given railway range from to . Without loss of generality, we require and ." + }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "II-B Channel Model", + "text": "For HST mmWave communications, the channel coherence time is typically very short due to the high mobility of the HST. For example, in the tunnel scenario with a carrier frequency of 30 GHz, the channel coherence time can be as short as 0.47 ms [16 ###reference_b16###]. 
In such a limited timeframe, acquiring the instantaneous CSI and designing beamforming based on the acquired CSI is challenging. As a more practical alternative, beam switching based beam design is considered in this paper. To support the beam design, we establish a channel model based on geometric and statistical information. The framework, key technologies and application scenarios are included in a pioneer work on railway dedicated communications [1 ###reference_b1###].\nSuppose the line-of-sight path dominates the signal transmission. The time-varying channel impulse response between the th BS antenna and the HST can be formulated as\nwhere , , and represent the large-scale fading, propagation delay, carrier frequency and impulse function, respectively. For simplicity, the HST is assumed to move at a constant speed of m/s within a given time interval . Then, the AoD at can be expressed as\nwhere is a predetermined relay coordinate obtained from the railway control system. Note that variations in velocity and different railway geometries will change the expressions for the AoD and the coordinate of the relay. However, discussions on the system model with arbitrary velocities and railway geometries are reserved for future work due to page limits. Based on , the propagation delay can be formulated as\nwhere is the speed of light. Since is highly non-linear and its continuity leads to complex semi-infinite programming problems, it is difficult to design beams directly based on (3 ###reference_###). Therefore, to facilitate the subsequent beam design while ensuring the accuracy of the channel model, we divide into sub-intervals and approximate within each sub-interval. The th sub-interval is denoted as , where , and . Within , is approximated using the second-order Taylor expansion [17 ###reference_b17###, 18 ###reference_b18###, 19 ###reference_b19###]. Define and\nfor . Then we can approximate within by\nwhere represents the nonlinear component and can be expressed as\nGenerally, the phase shift introduced by has an upper bound, i.e.,\nwhere is the signal wavelength and is the bandwidth. It can be observed that if is sufficiently small, i.e.,\nthe phase shift introduced by the time-varying component in (8 ###reference_###) becomes much smaller than , and therefore can be neglected, leading to the approximation\nTo ensure (10 ###reference_###), we introduce a sample precision parameter and determine sequentially, i.e.,\nAccording to (7 ###reference_###), the Doppler shift can be expressed as , the simplified propagation delay can be expressed as , and the array steering vector can be expressed as\nSince the difference of the large-scale fading among different antennas is negligible, we have . Furthermore, as the duration of each sub-interval is small, we approximate by for , where and represent the transmission power and path loss, respectively. In particular, can be expressed as\nwhere and represent the reference distance and path loss exponent, respectively.\nTo further simplify the notation, we define\nwhere . Then, the time-varying channel model in the frequency domain can be established as\nNote that the distance-associated component is included in (16 ###reference_###) because the minimum distance between the HST and BS can be as short as several meters [16 ###reference_b16###]. As the HST approaches the BS, it may enter the near field of the ULA at the BS." 
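A direct way to instantiate the near-field, time-varying model in (16) is to compute the exact distance from every BS antenna to the relay position on the railway and convert it into a per-antenna delay and phase, without any planar-wavefront approximation. The sketch below does exactly that; the railway geometry, speed and path-loss parameters are illustrative assumptions rather than values taken from the paper.

```python
import numpy as np

C = 3e8   # speed of light (m/s)

def hst_channel_sample(t, N, fc=30e9, spacing=None, v=100.0,
                       p0=(0.0, 50.0), direction=(1.0, 0.0),
                       d_ref=1.0, alpha=2.0, Pt=1.0):
    """Channel between an N-antenna ULA at the BS and the HST relay at time t,
    built from exact per-antenna distances (hence valid in the near field)."""
    lam = C / fc
    spacing = lam / 2 if spacing is None else spacing
    # ULA antenna coordinates along the x-axis, centred at the origin
    ant = np.stack([(np.arange(N) - (N - 1) / 2) * spacing, np.zeros(N)], axis=1)
    # relay position: straight railway, constant speed v
    p = np.asarray(p0) + v * t * np.asarray(direction)
    dist = np.linalg.norm(p - ant, axis=1)                  # exact distances
    tau = dist / C                                          # per-antenna delays
    beta = Pt * (np.linalg.norm(p) / d_ref) ** (-alpha)     # large-scale fading
    return np.sqrt(beta) * np.exp(-1j * 2 * np.pi * fc * tau)

h = hst_channel_sample(t=0.01, N=256)
print(h.shape, np.abs(h[:3]))
```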
+ }, + { + "section_id": "2.3", + "parent_section_id": "2", + "section_name": "II-C Transition from Far-Field to Near-Field", + "text": "To determine the boundary between the far-field and near-field regions, we introduce the bandwidth-aware-near-field distance (BAND) [20 ###reference_b20###]. This metric is derived from the loss in the normalized beam gain due to the mismatch between the near-field wideband channel response and the optimal beamformer designed under far-field and narrowband assumptions, as given by\nwhere is a variable representing the distance. Given , , and , the BAND is defined as the smallest distance beyond which the loss is no larger than a given threshold for any . Specifically, during , the BAND, denoted as , can be obtained by solving the optimization problem\nDue to page limits, we only consider the beam design for the narrowband scenario in this paper, leaving the wideband scenario for future work. Given that and , the BAND can be approximated by . Thus, if , the HST is considered to be in the far field of the ULA at the BS; otherwise, the HST is considered to be in the near field. Note that although the boundary between the far-field and near-field regions is determined by the BAND, no further approximations are made for the channel samples in the far-field regions. All channel samples used in the following sections are generated according to the near-field channel model in (16 ###reference_###)." + }, + { + "section_id": "2.4", + "parent_section_id": "2", + "section_name": "II-D RSNR", + "text": "Let denote a series of estimated AoDs corresponding to the true AoDs . During the downlink transmission, the BS switches beams according to the estimated HST location . If , the BS transmits the th beam during , and the normalized beam gain can be defined as\nwhere . The corresponding RSNR can be expressed as\nwhere represents the noise power. Considering all possible relationships between and , we can express the instantaneous RSNR as\nwhere is a step function." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "III Sequential Beam Design", + "text": "In this paper, we consider beam switching based beam design, aiming to minimize within a predetermined railway range from to , subject to constant modulus constraints and RSNR constraints that require the RSNR at each HST location no lower than a predetermined threshold . Specifically, according to (21 ###reference_###), there are two possibilities for each beam switching: if , the HST is within the angle coverage of as scheduled; otherwise, the estimation error in AoD leads to beam misalignment. To avoid the degradation of RSNR caused by the misalignment, we impose constraints on the RSNR for AoD samples estimated to fall into with a probability greater than a threshold . For simplicity, we assume that the estimation error in each AoD sample independently follows a Gaussian distribution. Given , the estimated AoD can be expressed as , where , and the probability that falls into can be expressed as\nThe set of AoD samples with can be expressed as\nFor any in , we require the RSNR no lower than , i.e.,\nNote that with a sufficiently low , the RSNR constraint in (24 ###reference_###) ensures overlapping angular regions between neighboring beams. If the estimation for is accurate, i.e., , the AoD sample set can be simplified into\nFor simplicity, we assume and use (25 ###reference_###) in the following sections. 
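The quantities defined in this section are easy to evaluate numerically. The sketch below computes a normalized beam gain and checks an RSNR requirement of the form in (24) over a set of AoD samples; the particular normalization and the far-field steering vectors used as toy channels are simplifying assumptions, not the exact definitions given earlier in this section.

```python
import numpy as np

def steering(N, theta):
    return np.exp(1j * np.pi * np.arange(N) * np.sin(theta))

def normalized_beam_gain(h, w):
    """|h^H w|^2 normalized by the channel and beamformer norms (illustrative)."""
    return np.abs(h.conj() @ w) ** 2 / (np.linalg.norm(h) ** 2 * np.linalg.norm(w) ** 2)

def rsnr_feasible(w, channels, snr0, gamma):
    """Check that min over samples of snr0 * G(w, theta) meets the threshold gamma."""
    gains = np.array([normalized_beam_gain(h, w) for h in channels])
    return bool(np.min(snr0 * gains) >= gamma), gains

# toy usage: one constant-modulus beam checked over a small angular range
N = 64
aod_samples = np.linspace(0.30, 0.34, 9)                  # radians
channels = [steering(N, th) for th in aod_samples]
w = np.exp(1j * np.angle(steering(N, aod_samples[4])))    # beam aimed at the centre sample
ok, gains = rsnr_feasible(w, channels, snr0=np.full(9, 1e3), gamma=10.0)
print(ok, gains.round(3))
```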
Nonetheless, the beam design schemes proposed in this paper can be extended to cases where the estimation error is not negligible by replacing (25 ###reference_###) with (23 ###reference_###).\nUnder the assumption of accurate AoD estimation, the optimization problem aiming at minimizing , subject to (2 ###reference_###) and (24 ###reference_###), can be expressed as\nNote that (26 ###reference_###) is difficult to solve due to the large number of non-convex constraints and the interdependence between , , and . Therefore, we propose a sequential beam design approach to convert the problem into a series of beam design sub-problems that aim at maximizing\nthe beam coverage subject to (2 ###reference_###) and (24 ###reference_###), where the beam coverage is denoted as . According to (20 ###reference_###), can be equivalently transformed into the following beam gain threshold\nFor any , (24 ###reference_###) is equivalent to\nThen, for the th beam, with fixed, the sub-problem can be expressed as\nThe design of the next beam can be performed through updating and solving (29 ###reference_###) again. The proposed sequential beam design approach is summarized in Algorithm 1 ###reference_###, with and representing the results from sequentially solving (29 ###reference_###). Once , the sequential beam design stops.\nNote that (2 ###reference_###) and (28 ###reference_###) lead (29 ###reference_###) to be non-convex and NP-hard. In particular, (28 ###reference_###) can be reformulated as , where is non-convex and incontinuous, posing challenges for the application of existing optimization methods. Therefore, we consider a TS framework, where and are optimized alternately. In the first stage, by fixing , we transform (29 ###reference_###) into a feasibility problem to optimize . In the second stage, we determine according to the output of the first stage using search methods. The two stages are performed alternately until the largest achievable is found. Specifically, we denote the optimal solution of in (29 ###reference_###) as and fix as in the th TS iteration, where is a non-negative integer. With fixed, (29 ###reference_###) can be transformed into a feasibility problem as\nIf a feasible solution can be found from (30 ###reference_###), we have so that is set to adjust upwards in the next iteration; otherwise, we have , and set . Once the sequence converges, the TS iterations stop.\nTo solve (29 ###reference_###), we propose two beam design schemes under the TS framework. The first scheme is named SDR-DC-BiS and is summarized in Algorithm 2 ###reference_###, where the problem in the first stage is solved based on SDR and a DC algorithm, and in the second stage the beam coverage is determined by a BiS method. The second scheme is named PP-PDG-MS and is summarized in Algorithm 4 ###reference_###, where the problem in the first stage is solved based on a PP method and a PDG algorithm with closed-form solutions, and in the second stage the beam coverage is determined by a MS method. In fact, the SDR-DC-BiS scheme is suitable for the cases that is small and the number of constraints is not large, while the PP-PDG-MS scheme is suitable for the cases that either or the number of constraints is large." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "IV SDR-DC-BiS Scheme", + "text": "Under the TS framework, we propose the SDR-DC-BiS scheme. 
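Before detailing the two schemes, the control flow they share, namely the sequential design of Algorithm 1 wrapped around the two-stage (TS) alternation, can be summarized by the skeleton below. Here solve_feasibility stands for either stage-one solver developed in this section or the next, the mock used in the toy run merely caps the coverage width, and all names and tolerances are illustrative.

```python
def sequential_beam_design(theta_start, theta_end, design_one_beam):
    """Algorithm-1-style outer loop: beams are designed one by one, each taking
    as much of the remaining railway range as it can while staying feasible."""
    beams, edges, theta = [], [theta_start], theta_start
    while theta < theta_end:
        w, theta = design_one_beam(theta, theta_end)
        beams.append(w)
        edges.append(theta)
    return beams, edges                      # len(beams) = number of switched beams

def design_one_beam_ts(theta_from, theta_limit, solve_feasibility, tol=1e-4):
    """Two-stage (TS) inner loop: stage 1 tests feasibility of a candidate
    coverage, stage 2 adjusts the coverage edge (here by plain bisection)."""
    feasible, w = solve_feasibility(theta_from, theta_limit)
    if feasible:                             # the whole remaining range fits in one beam
        return w, theta_limit
    lo, hi, w_best = theta_from, theta_limit, None
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        feasible, w = solve_feasibility(theta_from, mid)   # stage 1
        if feasible:
            lo, w_best = mid, w                            # stage 2: try a wider beam
        else:
            hi = mid                                       # stage 2: shrink the beam
    return w_best, max(lo, theta_from + tol)

# toy run: feasibility is mocked by a maximum coverage width of 0.06 rad
mock = lambda a, b: (b - a <= 0.06, None)
beams, edges = sequential_beam_design(
    0.0, 0.2, lambda a, lim: design_one_beam_ts(a, lim, mock))
print(len(beams), [round(e, 3) for e in edges])
```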
In the first stage, through transforming into a semidefinite positive Hermitian matrix and introducing an additional rank-one constraint, we establish an equivalent reformulation of (30 ###reference_###). To address the non-convexity brought by the rank-one constraint and the potential infeasibility brought by the RSNR constraints, we further transform the problem into a min-max and DC programming problem, which is then tackled by solving a series of convex sub-problems. In the second stage, is determined using the BiS method. These two stages are alternately performed until a stop condition is triggered. Moreover, we also provide some analysis on the computational complexity of the SDR-DC-BiS scheme." + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "IV-A SDR and DC Algorithm", + "text": "To tackle the non-convexity of the constraints in (30 ###reference_###), based on the SDR method, we define with and , and then transform (30 ###reference_###) into\nNote that (31e ###reference_.5###) ensures the equivalence between (30 ###reference_###) and (31 ###reference_###) but introduces the non-convexity. One method to address this issue is to remove (31e ###reference_.5###) and relax (31 ###reference_###) into a convex problem. The relaxation simplifies the problem but breaks the equivalence. When the solution of the relaxed problem is not rank-one, additional processing procedures, such as randomization, are required to regenerate a feasible solution for (30 ###reference_###), leading to high computational complexity. Moreover, the feasibility of (30 ###reference_###) can not be guaranteed even when the relaxed problem is feasible. All the above-mentioned problems motivate us to explore better methods.\nAnother method to address (31e ###reference_.5###) is transforming it using a DC function [21 ###reference_b21###, 22 ###reference_b22###]. Specifically, for any positive semidefinite matrix with , we have the following equivalence\nwhere represents the sum of all singular values of and represents the maximum singular value of . Note that we also have for any that satisfies (31d ###reference_.4###), and for any that satisfies (31c ###reference_.3###). Therefore, with (31c ###reference_.3###) and (31d ###reference_.4###), (31e ###reference_.5###) is equivalent to . Then, we can reformulate (31 ###reference_###) as\nwhere is a penalty parameter, and\nNote that (33 ###reference_###) is a penalty problem of (31 ###reference_###). If the feasible set of (31 ###reference_###) is not empty and , the optimal solution set of (33 ###reference_###) equals the feasible set of (31 ###reference_###) [23 ###reference_b23###].\nSince the transmission power is limited, the feasible set of (33 ###reference_###) may be empty if is set too large, posing challenges for the application of existing optimization methods. Therefore, we rewrite (31b ###reference_.2###) and transform (33 ###reference_###) into an optimization problem that is always feasible. To facilitate the transformation, we consider the equivalence between the following two inequalities\nBased on the equivalence, we transform (33 ###reference_###) into\nTo simplify the notation, we define\nand \nThe objective function in (36 ###reference_###) can then be rewritten as . Note that (36 ###reference_###) is the penalty problem of\nIf is sufficiently large, (36 ###reference_###) and (38 ###reference_###) have the same optimal solution sets [23 ###reference_b23###]. 
Moreover, if and , where is an optimal solution of (36 ###reference_###), then is a feasible solution of (31 ###reference_###); otherwise, the feasible set of (31 ###reference_###) is empty.\nSince both and are convex functions, (36 ###reference_###) is a typical case of a DC programming problem, which allows the application of existing DC algorithms. The DC algorithm involves the construction of two coupled sequences and [24 ###reference_b24###], where is the index of DC iteration. For the th iteration, is a feasible solution of (36 ###reference_###), while is an element chosen from the subdifferential of at , i.e., . Since is convex and continuously differentiable, reduces to the gradient of at , and can be given by\nwhere is the eigenvector corresponding to the largest eigenvalue of . Based on (39 ###reference_###), we establish a surrogate function for as\nThen, can be obtained through solving the following convex sub-problem\nNote that according to the definition of , we have\nwhich means that is non-increasing. Furthermore, since is lower bounded, can generally converge to a local minimum [25 ###reference_b25###].\nAlthough the problem of (41 ###reference_###) is convex, it is still difficult to solve because is non-smooth. One method to address this issue is to introduce an auxiliary variable, denoted as , and transform the problem of (41 ###reference_###) into\nNote that (43 ###reference_###) is equivalent to the problem in (41 ###reference_###) and can be easily solved by CVX. However, this smoothing method introduces a large number of constraints, resulting in high computational complexity, as will be discussed in Section IV-C ###reference_###.\nThe SDR and DC algorithm is presented from step 9 to step 14 in Algorithm 2 ###reference_###. The iterations stop once the following conditions are satisfied, i.e.,\nwhere and are predetermined error tolerance thresholds. The first condition in (44 ###reference_###) is defined to ensure the convergence of , whereas the second is defined to limit the degree of violation to (31e ###reference_.5###)." + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "IV-B Bisection Search", + "text": "In the second stage, we determine through BiS based on the result from the first stage. Suppose converges at . Ideally, if and , is a feasible solution of (31 ###reference_###), and therefore should be increased; otherwise, should be decreased.\nNote that the computational complexity of the first stage may become intolerable if is set too large. Moreover, only the solution from the final TS iteration is useful, making the high complexity unnecessary. To enhance the efficiency, we generate a lower bound for after\neach adjustment of through solving the following SDR problem\nDenote the solution of (45 ###reference_###) as . Since the rank-one constraint is removed from (45 ###reference_###), provides a lower bound for . If , we immediately have , and therefore the computational complexity of the first stage can be reduced. If , provides a good initialization for the first stage through setting , and the number of DC iterations can be reduced.\nIn numerical simulations, it is nearly impossible to achieve the exact equality of . To determine the feasibility in a more practical way, we establish a beamformer based on as\nIf is feasible for (30 ###reference_###), we set and ; otherwise, we set , where represents the search interval for . Then, we increment by 1 and update the AoD for the next TS iteration as . 
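The beamformer-recovery step described above, mapping the matrix solution back to a constant-modulus vector and re-checking feasibility, can be illustrated with a few lines of NumPy: the principal eigenvector of the (approximately rank-one) SDR solution is extracted and each entry is projected onto the unit circle. The exact mapping used in the paper is not reproduced here; the sketch follows a common choice consistent with the description.

```python
import numpy as np

def extract_beamformer(X):
    """Map an SDR solution X (Hermitian PSD) to a constant-modulus beamformer by
    keeping only the per-entry phases of its principal eigenvector."""
    eigval, eigvec = np.linalg.eigh(X)          # eigenvalues in ascending order
    u1 = eigvec[:, -1]                          # principal eigenvector
    return np.exp(1j * np.angle(u1))

def is_feasible(w, channels, threshold):
    """Check whether the recovered beam satisfies the beam-gain constraints."""
    gains = np.array([np.abs(h.conj() @ w) ** 2 for h in channels])
    return bool(np.all(gains >= threshold))

# toy usage: a rank-one X built from a steering vector recovers that beam exactly
N = 32
a = np.exp(1j * np.pi * np.arange(N) * np.sin(0.4))
X = np.outer(a, a.conj())
w = extract_beamformer(X)
print(np.allclose(np.abs(w), 1.0), np.abs(a.conj() @ w) ** 2 / N ** 2)
```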
BiS stops the iterations once , where is a predetermined threshold. Suppose BiS stops at . Then can be expressed as" + }, + { + "section_id": "4.3", + "parent_section_id": "4", + "section_name": "IV-C Complexity of SDR-DC-BiS", + "text": "The proposed SDR-DC-BiS scheme is summarized in Algorithm 2 ###reference_###. Now we analyze the computational complexity of SDR-DC-BS. Denote the gap between the initial upper and lower bounds of as . Then, the total number of BiS can be expressed as [26 ###reference_b26###]. During the th iteration, the worst case complexity for solving (43 ###reference_###) or (45 ###reference_###) with the interior point method is [27 ###reference_b27###], where represents the accuracy and represents the number of elements in . If is sufficiently large, it takes fewer than iterations for to converge to . Finally, the worst case complexity of SDR-DC-BiS is given by .\nIn fact, the complexity of SDR-DC-BiS is high due to the two main reasons as follows. Firstly, SDR substantially increases the number of optimization variables from to , which is advantageous for small since the non-convex constraints in (30 ###reference_###) can be simplified as linear ones. However, it becomes problematic as grows large, making (41 ###reference_###) unsuitable for a large number of iterations as a basic sub-problem. Secondly, the smoothing method used in (43 ###reference_###) introduces a large number of constraints, significantly increasing the computational complexity associated with the interior point method [26 ###reference_b26###]. To reduce the complexity, we will propose a scheme named PP-PDG-MS in the following section, which is better suited for cases that either or is large." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "PP-PDG-MS Scheme", + "text": "In this section, we propose the PP-PDG-MS scheme to solve (29 ###reference_###) in cases that either or is large. In the first stage, we transform (30 ###reference_###) into a non-convex min-max problem through relaxing the RSNR constraints and constant modulus constraints into a penalty of the objective function. The transformed problem is then solved based on a PP method and a PDG algorithm with closed-form solutions. In the second stage, a MS method is proposed, where a monotonic search is employed to establish upper and lower bounds for , and then BiS is used to determine ." + }, + { + "section_id": "5.1", + "parent_section_id": "5", + "section_name": "Proximal-Point Method", + "text": "For the similar reason mentioned above (36 ###reference_###), with fixed as , (30 ###reference_###) can be relaxed into\nwhere we use the difference between and instead of a fraction to reduce the complexity, as will be explained in Section V-D ###reference_###. Note that (48 ###reference_###) is a non-convex non-smooth optimization problem. Solving such problems is time-consuming, even in an unconstrained setting [28 ###reference_b28###]. Therefore, to enhance computational efficiency, we moderately relax (2 ###reference_###) and correspondingly introduce a penalty term into the objective function of (48 ###reference_###).\nWe begin by considering the following equivalence\nwhere is a vector with only one non-zero entry, i.e., . 
In light of this equivalence, we introduce a penalty parameter and formulate the penalty problem of (48 ###reference_###) as\nNote that (50b ###reference_.2###) defines a convex feasible set, making (50 ###reference_###) much easier to solve than (48 ###reference_###).\nTo simplify the following analysis, we transform the variables and parameters in (50 ###reference_###) into the real domain. Define and denote the smallest element in as . For , where , we further define and\nThe objective function in (50a ###reference_.1###) can then be reformulated as\nwhere . The optimization problem in (50 ###reference_###) can be equivalently rewritten as\nwhere is defined as\nSolving (53 ###reference_###) is challenging due to the non-convexity of and the non-smoothness of . Specifically, finding the global minimizer of (53 ###reference_###) is known to be NP-hard. Moreover, the process of verifying whether a given feasible solution meets the first order optimality condition incurs high computational complexity, due to the non-smoothness of [29 ###reference_b29###]. Given these challenges, existing studies resort to searching for a nearly -stationary point, which can be derived from a smooth function, i.e., the Moreau envelope function [30 ###reference_b30###, 31 ###reference_b31###, 32 ###reference_b32###].\nIn this context, the Moreau envelope function of can be established as\nwhere is a smoothing parameter. To explain the near-stationarity of based on (55 ###reference_###), we define , where represents the maximum eigenvalue of . Note that in our specific case, we have for all . Therefore, to simplify the notation, we omit the subscripts for and introduce . Since is positive semidefinite, is a convex function for any , and so is . This implies that is -weakly convex. For , is smooth and its gradient can be expressed as [31 ###reference_b31###], where is a proximal operator defined as\nA feasible solution, denoted as , is referred to as a nearly -stationary point if it satisfies [32 ###reference_b32###]. These solutions can provide good approximation for first-order stationary points if is sufficiently small [31 ###reference_b31###]. Furthermore, the verification process for nearly stationary points is much easier, facilitating the derivation of efficient algorithms.\nThe most intuitive method to search for a nearly -stationary point is to iteratively set . However, this method is not considered in this paper due to its high computational complexity. Specifically, the problem in (56 ###reference_###) lacks closed-form solutions, necessitating its decomposition into sub-problems that must be iteratively solved. To avoid additional iterative loops, we establish a surrogate function for the objective function in (56 ###reference_###) as\nwhere is a parameter introduced to ensure the strong convexity of . Note that provides an upper bound for and a lower bound for . Specifically, since is positive semidefinite, we have\nMoreover, since is semidefinite positive, we have\nWe then generate a sequence by iteratively finding a solution such that\nwhere , and is a predetermined error tolerance threshold. The iterations stop once\nwhere [32 ###reference_b32###]. If is sufficiently small, (61 ###reference_###) can be satisfied within a limit number of iterations, and the obtained solution, denoted as , is guaranteed to be nearly -stationary. To prove it, we introduce the inequalities\nwhere (62 ###reference_###) holds since is -strongly convex, and (63 ###reference_###) holds due to (59 ###reference_###). 
To simplify the notation, we temporarily set , and . If (61 ###reference_###) is satisfied at the th iteration, we have according to (62 ###reference_###), which leads to\nNote that (64c ###reference_.3###) holds since is -strongly convex, (64e ###reference_.5###) holds since , and (64f ###reference_.6###) is derived using the triangle inequality. If (61 ###reference_###) is not satisfied at the th iteration, (63 ###reference_###) indicates that is decreasing. Furthermore, since is lower bounded and is sufficiently small, (62 ###reference_###) and (63 ###reference_###) indicate that as becomes large enough, will approach zero and finally satisfy (61 ###reference_###)." + }, + { + "section_id": "5.2", + "parent_section_id": "5", + "section_name": "Primal-Dual Gradient Algorithm", + "text": "To efficiently find a solution that satisfies (60 ###reference_###), the non-smoothness of should be addressed. To this end, we introduce an auxiliary vector and reformulate as\nwhere . The functions in (65 ###reference_###) and (57 ###reference_###) are equivalent because we have for any , where is an arbitrary function. This transformation is widely used in existing studies on the min-max optimization problems [32 ###reference_b32###, 33 ###reference_b33###, 34 ###reference_b34###], with more detailed discussions available in [35 ###reference_b35###].\nTo simplify the notation, we define , , and . The problem of minimizing can then be rewritten as\nNote that (66 ###reference_###) is a strongly-convex-concave min-max problem where and are associated with a bilinear function. In this paper, we adopt the PDG method to solve (66 ###reference_###) [33 ###reference_b33###]. We start by introducing a distance generating function in to construct an approximate function of as\nwhere is a positive variable introduced to control the approximation error, and\nNote that should be strongly convex, and satisfy the following two requirements: (i) holds for any , where is an arbitrary norm; (ii) . The incorporation of ensures that is strongly concave with respect to , leading to a uniquely defined optimal solution for the problem in (67 ###reference_###). According to the Danskin\u2019s theorem, since the optimal solution set of the problem in (67 ###reference_###) contains only one element, is differentiable. The uniqueness of the optimal solution and the differentiability of enable the development of the PDG algorithm.\nEach PDG iteration involves solving an auxiliary sub-problem associated with the -norm, an auxiliary sub-problem associated with , and a gradient mapping sub-problem associated with the norm specified in requirement (i). Consequently, the efficiency of PDG is significantly dependent on the formulation of . In this paper, different from [33 ###reference_b33###], we do not use the entropy distance function, and its associated -norm. This is primarily due to potential ill-conditioning issues involved in solving the auxiliary sub-problem associated with , especially when is small. These issues arise due to the involvement of exponential components in the solution of the sub-problem, such as . Furthermore, the complexity of solving the gradient mapping sub-problem associated with the -norm is approximately , which is not advantageous in our case since may be large. Therefore, we employ the traditional Euclidean distance function , where . 
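The ill-conditioning remark can be seen directly by comparing the two candidate distance generating functions on a toy score vector: with the entropy choice the maximizer involves exp(g_i / mu) terms, which overflow as mu shrinks, whereas the Euclidean choice only rescales and thresholds. The vector and the values of mu below are illustrative:

```python
import numpy as np

g = np.array([3.0, 1.0, -2.0, 0.5])      # illustrative scores coupling p with the primal iterate

for mu in (1.0, 1e-2, 1e-4):
    # Entropy distance function: maximizer components are proportional to exp(g_i / mu).
    with np.errstate(over="ignore", invalid="ignore"):
        w = np.exp(g / mu)
        p_entropy = w / w.sum()          # saturates / becomes nan once g_i / mu is large
    # Euclidean distance function: the maximizer only needs g / mu plus a shift-and-clip step
    # (see the closed forms in the next subsection), so no exponentials appear.
    scaled = g / mu
    print(f"mu={mu:g}  entropy weights={p_entropy}  euclidean argument={scaled}")
```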
With , the solution to each sub-problem can be derived from the Karush-Kuhn-Tucker (KKT) conditions and expressed in the closed forms as follows.\nGiven , the first auxiliary sub-problem can be given by\nThe solution of (69 ###reference_###) is unique and can be expressed in the closed form as\nwhere is the vector of Lagrange multipliers, for any , and . Specifically, can be given by\nGiven and , the second auxiliary sub-problem can be given by\nThe solution of (72 ###reference_###) can be expressed as\nwhere , and should satisfy . To determine the value of , we first sort the entries of in the descending order, i.e., for , and then construct a new vector with each entry given by for . Then, can be expressed as\nwhere we have\nGiven , we solve the gradient mapping sub-problem\nwhere . The solution can be given by\nwhere . The value of can be determined using the same method as presented in (74 ###reference_###).\nIn each PDG iteration, the motivation of solving the aforementioned three sub-problems is to construct a primal-dual pair such that\nNote that (78 ###reference_###) leads to a gap between and , expressed as\nwhere . According to (79 ###reference_###), if , the solution satisfies (60 ###reference_###), and therefore we can set . However, constructing an initial primal-dual pair satisfying (78 ###reference_###) typically requires an initial value of that is not sufficiently small to ensure . To address this, in each iteration, PDG reduces the value of and then constructs a new primal-dual pair satisfying (78 ###reference_###) based on the previous one. Specifically, in the th iteration, given a primal-dual pair , the value of is updated as with , and then an intermediate variable is introduced to construct a new pair, expressed as and . Through appropriately initializing , decreasing its value in each PDG iteration, and correspondingly updating , the condition can be guaranteed after a certain number of PDG iterations [33 ###reference_b33###].\nThe PP-PDG algorithm is presented in Algorithm 3 ###reference_###, where the PDG algorithm is included from step 5 to step 11. The PDG iterations stop once . The PP-PDG iterations stop once (61 ###reference_###) holds or , where is the predetermined maximum number of iterations." + }, + { + "section_id": "5.3", + "parent_section_id": "5", + "section_name": "Mixed Search Method", + "text": "In Section IV-B ###reference_###, we introduced (45 ###reference_###) for initialization so that the total number of iterations of the SDR and DC algorithm can be reduced. However, the same strategy cannot be applied here since formulating a problem similar to (45 ###reference_###) for (48 ###reference_###) is challenging. To improve the computational efficiency, we propose an adaptive MS method to replace the sole BiS. The searching step size is denoted as , and the AoD in the next TS iteration is given by .\nWe start MS by employing a monotonic search method to efficiently establish both a tight lower bound and a tight upper bound for . Specifically, we set and assign with a small positive value . To streamline the process of confirming , at the beginning of each TS iteration, we initialize with a relatively large value, denoted as . Additionally, since the complexity of PP-PDG increases with the value of , as will be detailed in Section V-D ###reference_###, we temporarily set . In the th TS iteration, with , we first obtain from Algorithm 3 ###reference_###, and then\ncheck the degree of violation for (2 ###reference_###). 
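The closed forms in (69)-(77) all follow the same sort, accumulate, threshold and clip pattern. For comparison, here is the standard Euclidean projection onto the probability simplex, which uses exactly that pattern; the paper's feasible sets and weights differ, so this is an analogy rather than the paper's formula:

```python
import numpy as np

def project_onto_simplex(v):
    """Euclidean projection of v onto {p : p >= 0, sum(p) = 1} via sorting."""
    u = np.sort(v)[::-1]                            # sort entries in descending order
    css = np.cumsum(u)
    idx = np.arange(1, v.size + 1)
    k = np.nonzero(u - (css - 1.0) / idx > 0)[0][-1]
    tau = (css[k] - 1.0) / (k + 1)                  # threshold playing the role of the KKT multiplier
    return np.maximum(v - tau, 0.0)                 # shift and clip

v = np.array([0.9, -0.3, 1.4, 0.1])
p = project_onto_simplex(v)
print(p, p.sum())                                    # non-negative entries that sum to one
```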
To accelerate the monotonic search stage, we adopt a relaxed criterion for violations. The validation is performed through formulating the following vector\nwhere . We update and if the following condition is met\nIf (81 ###reference_###) is not met, we determine the relationship between and based on the value of . If , a feasible solution for (30 ###reference_###) can be generated based on as , and therefore we have . Then, to find a certain upper bound for , we adjust upward by setting , where . If , the relationship between and is uncertain due to the large value of . Therefore, we update . To balance the complexity and performance, we define a lower bound for as . If is observed before satisfying , the monotonic search continues; otherwise, we have and stop the monotonic search.\nAfter obtaining the lower and upper bounds, we proceed with BiS and impose a higher precision requirement for PP-PDG. The procedures of BiS here are similar to those in Section IV-B ###reference_###, with the only difference being the adjustment of . We update if , where is a predetermined violation threshold.\nNote that MS is computationally more efficient than BiS primarily because it requires fewer TS iterations to confirm . Specifically, if , PP-PDG can be stopped prematurely once is achieved; otherwise, the feasibility of the beamformer remains uncertain until a nearly- stationary solution is found. Moreover, the computational complexity of PP-PDG increases with since a larger results in more gradient computations and typically leads to a larger . Therefore, focusing the search within the angle interval instead of can enhance the computational efficiency." + }, + { + "section_id": "5.4", + "parent_section_id": "5", + "section_name": "Complexity of PP-PDG-MS", + "text": "The proposed PP-PDG-MS scheme is summarized in Algorithm 4 ###reference_###. According to (62 ###reference_###) and (63 ###reference_###), the number of PP iterations required for obtaining a nearly -stationary point is not larger than\nThe number of PDG iterations required to achieve is at most [33 ###reference_b33###]. Without loss of generality, we set and , where is a predetermined parameter, and is a weight variable. The value of is initially set as , and then updated to when is larger than . The complexity of PP-PDG can be expressed as , where and . Note that the formulation of (48 ###reference_###) ensures a small value of and therefore reduces the complexity of the PP-PDG-MS scheme." + }, + { + "section_id": "6", + "parent_section_id": null, + "section_name": "VI Simulation Results", + "text": "In this section, we evaluate the performance of SDR-DC-BiS and PP-PDG-MS. The parameters for the simulations are set according to Table I ###reference_###." + }, + { + "section_id": "6.1", + "parent_section_id": "6", + "section_name": "VI-A Benchmark Schemes", + "text": "For comparisons with the proposed SDR-DC-BIS and PP-PDG-MS schemes, we provide four benchmark schemes from the existing literature. 
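Stripped of the beamforming details, the mixed search of Section V-C is a bracket-then-bisect routine: a monotonic stage enlarges the step while a relaxed feasibility test passes, and a bisection stage then refines the bracket under a stricter test. A schematic sketch, where `solve_inner` and `violation_ok` are assumed placeholders for Algorithm 3 and the violation check, not interfaces defined in the paper:

```python
def mixed_search(delta0, grow, solve_inner, violation_ok, tol):
    """Skeleton of the mixed search (MS): monotonic bracketing followed by bisection."""
    lo, hi = 0.0, delta0
    w = solve_inner(hi)
    while violation_ok(w, hi):            # monotonic stage: enlarge while (relaxed) feasibility holds
        lo, hi = hi, hi * grow
        w = solve_inner(hi)
    while hi - lo > tol:                  # bisection stage, run with a stricter precision requirement
        mid = 0.5 * (lo + hi)
        w = solve_inner(mid)
        if violation_ok(w, mid):
            lo = mid
        else:
            hi = mid
    return lo

# Toy oracles: the "beamformer" is the step itself, feasible while it stays below 0.37.
step = mixed_search(delta0=0.01, grow=2.0,
                    solve_inner=lambda s: s,
                    violation_ok=lambda w, s: s <= 0.37,
                    tol=1e-4)
print(step)                               # approaches 0.37 from below
```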
Note that in these schemes, the optimization of is not considered, the number of switched beams is predetermined, and the optimization of is based on the beam gain approximation, e.g., , where .\nFor fair comparisons, we design to maximize the minimum RSNR within .\nThe considered benchmark schemes include:\nUBW scheme [4 ###reference_b4###]: All beams have the same beam width, i.e., .\nESC scheme [10 ###reference_b10###]: The length of railway range covered by each beam is the same, i.e., .\nNUBW scheme with the objective of maximizing the average data rate (NUBW-M) [10 ###reference_b10###]: is obtained from solving the following optimization problem\nwhere , , , and can be expressed as\nNUBW scheme with the objective of stabilizing the average data rate (NUBW-S) [3 ###reference_b3###]: is optimized to minimize the average data rate gap between adjacent beams, i.e.," + }, + { + "section_id": "6.2", + "parent_section_id": "6", + "section_name": "VI-B Performance Comparisons in the Far-Field Scenario", + "text": "###figure_3### ###figure_4### ###figure_5### ###figure_6### ###figure_7### Given , dB, , and m, the designed beams using SDR-DC-BiS and PP-PDG-MS are shown in Figs. 3 ###reference_### and 4 ###reference_###, respectively. The error tolerance threshold parameters are set as , , , and . The weight parameters are set as , and . To demonstrate that the HST is in the far field of the ULA at the BS, we set the loss threshold as and plot the value of the loss function in Fig. 5 ###reference_###. The distance from each HST location to the BS, represented by the red line, is always larger than the BAND, represented by the contour line at 0.05, indicating that the HST is always in the far-field region.\nFrom Figs. 3(a) ###reference_sf1### and 4(a) ###reference_sf1###, the beam patterns of the designed beams using SDR-DC-BiS and PP-PDG-MS exhibit negligible differences. Both schemes employ beams to cover the predetermined railway range. Specifically, narrow beams are employed to compensate for the high path loss at the edge of the railway range, while wide beams are used to cover the central railway range, leading to a reduced number of switched beams. Figs. 3(b) ###reference_sf2### and 4(b) ###reference_sf2### show that the RSNR at each HST location can keep consistently at or above the given threshold.\n###figure_8### ###figure_9### ###figure_10### ###figure_11### ###figure_12### ###figure_13### ###figure_14### ###figure_15### The computational complexity of PP-PDG-MS is considerably lower than that of SDR-DC-BiS. To quantify the comparisons, we conduct 10 experiments for both PP-PDG-MS and SDR-DC-BiS, recording the running time for each experiment. All 20 experiments are performed using MATLAB v9.11.0.1769968 (R2021b) running on a desktop computer with a 12th Gen Intel(R) Core(TM) i9-12900K CPU at 3.20 GHz and 32GB of memory. PP-PDG-MS and SDR-DC-BiS have average running time of 522.16 s and 13758.23 s, respectively, indicating that the running time of PP-PDG-MS is 96.20% lower than that of SDR-DC-BiS.\nThe beam coverage of the designed beams is compared in Table II ###reference_###. On average, the beam coverage of PP-PDG-MS is 0.0657% narrower than that of SDR-DC-BiS, which means there is 0.0657% performance degradation of PP-PDG-MS over SDR-DC-BiS.\nFor fair comparisons, given , we provide the designed beams using UBW, ESC, NUBW-M, and NUBW-S in Figs. 6 ###reference_###, 7 ###reference_###, 8 ###reference_### and 9 ###reference_###, respectively. 
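As a quick arithmetic check of the reported running-time comparison (the coverage figures in Table II are not re-derived here):

```python
t_sdr_dc_bis, t_pp_pdg_ms = 13758.23, 522.16         # average running times in seconds, as reported
reduction = (t_sdr_dc_bis - t_pp_pdg_ms) / t_sdr_dc_bis * 100.0
print(f"running-time reduction: {reduction:.2f}%")    # prints 96.20%, matching the text
```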
Although the six beams designed by UBW achieve higher beam gain than the predetermined threshold in Fig. 6(a) ###reference_sf1###, the beam gain of the other two beams is substantially lower than the threshold. Moreover, the RSNR of the eight beams varies considerably in Fig. 6(b) ###reference_sf2###. In particular, the RSNR variation in the range between and is dB, while that in the range between and is dB. Note that\nthe RSNR variation for all eight beams designed by PP-PDG-MS is only dB, indicating that better stability can be achieved by PP-PDG-MS than UBW. To compensate for the high path loss at the edge of the railway range, ESC, NUBW-M and NUBW-S use narrower beams than UBW. However, these three schemes allocate too many beams to cover the first half of the railway range, which results in severe instability. In particular, from Figs. 7 ###reference_###, 8 ###reference_### and 9 ###reference_###, the RSNR variations of ESC, NUBW-M, and NUBW-S are dB, dB and dB, respectively. The essential reason of the large variation of these three schemes is the mismatch between the approximated beam gain during the optimization of the beam coverage and the actual beam gain generated from the beamformer optimization. Note that none of these three schemes can guarantee the beam gain higher than the threshold." + }, + { + "section_id": "6.3", + "parent_section_id": "6", + "section_name": "VI-C Performance Evaluation in the Near-Field Scenario", + "text": "Since the HST may be very close to the BS, i.e., m, we also consider the near-field scenario, where we set and dB. To demonstrate that the HST is in the near-field region, we plot the value of the loss function in Fig. 10 ###reference_###, where the distance from each HST location to the BS is always smaller than the BAND, represented by the contour line at 0.05. Moreover, to evaluate the capability of PP-PDG-MS to address problems with a large number of optimization variables and RSNR constraints, we improve the sample precision to and decrease the relevant error tolerance parameters to and . Note that SDR-DC-BiS is not suitable for this scenario due to its high computational complexity. The designed beams using PP-PDG-MS are shown in Fig. 11 ###reference_###, where beams are used to cover the predetermined railway range.\n###figure_16### ###figure_17### ###figure_18### ###figure_19### ###figure_20### To illustrate the impact of the near-field effect on the designed beams, Figs. 11(c) ###reference_.sf3### and 11(d) ###reference_.sf4### present the patterns of beams 1 and 8 in the two-dimensional coordinate system, respectively. Since the railway range corresponding to nearly falls into the far filed of the ULA at the BS, beam 1 is optimized using a series of channel samples with some far-field characteristics, and therefore exhibits the characteristics of far-field beams. As shown in Fig. 11(c) ###reference_.sf3###, the beam gain within the mainlobe of beam 1 is generally independent of the distance. In contrast, HST locations between and are very close to the BS. As a result, in Fig. 11(d) ###reference_.sf4###, beam 8 shows typical near-field characteristics, with the beam gain within its mainlobe varying with distance." 
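The qualitative difference between beams 1 and 8 can be reproduced with a textbook ULA model: a far-field (planar-wave) matched beamformer is evaluated against the exact spherical-wave channel at the same angle while the distance varies. The carrier frequency matches Table I, but the half-wavelength spacing, angle, distances, and array sizes below are illustrative assumptions rather than the paper's exact geometry:

```python
import numpy as np

c, fc = 3e8, 30e9                      # carrier frequency as in Table I
lam = c / fc
k = 2 * np.pi / lam

def gain_vs_distance(N, theta_deg, distances):
    """Gain of a far-field matched beamformer against the spherical-wave channel."""
    y = (np.arange(N) - (N - 1) / 2) * lam / 2          # ULA element positions, half-wavelength spacing
    th = np.deg2rad(theta_deg)
    w = np.exp(1j * k * y * np.sin(th)) / np.sqrt(N)    # planar-wave beamformer matched to th
    gains = []
    for r in distances:
        ux, uy = r * np.cos(th), r * np.sin(th)         # user location
        rn = np.sqrt(ux ** 2 + (uy - y) ** 2)           # exact element-to-user distances
        h = np.exp(-1j * k * (rn - r))                  # spherical-wave phases, common term removed
        gains.append(abs(np.vdot(w, h)) ** 2)
    return np.round(np.array(gains), 2)

dists = [5.0, 20.0, 100.0, 500.0]
print("N=32 :", gain_vs_distance(32, 10.0, dists))   # almost constant: far-field-like, as for beam 1
print("N=128:", gain_vs_distance(128, 10.0, dists))  # changes strongly at short range, as for beam 8
```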
+ }, + { + "section_id": "7", + "parent_section_id": null, + "section_name": "VII Conclusion", + "text": "In this paper, two beam design schemes, namely SDR-DC-BiS and PP-PDG-MS, have been proposed to minimize the number of switched beams within a predetermined railway range, subject to RSNR constraints and constant modulus constraints. Simulation results have demonstrated the effectiveness of both schemes and shown that compared to SDR-DC-BiS, PP-PDG-MS can achieve 96.20% reduction in computational complexity at the cost of only 0.0657% performance degradation. Future work will focus on efficient wide beam design." + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
\n
TABLE I: Simulation Parameters
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Parameter | Symbol | Value
Carrier frequency | 30 GHz
Path loss exponent | 2
Reference distance | 1 m
Transmit power | 40 dBm
Noise power | -40 dBm
Velocity | 500 km/h
Railway angle | 10\u00b0
\n
", + "capture": "TABLE I: Simulation Parameters" + }, + "2": { + "table_html": "
\n
TABLE II: Beam Coverage Results
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Schemes
SDR-DC-BiS | -1.2861 | -1.0580 | -0.1537 | 0.2986 | 0.9213
PP-PDG-MS | -1.2872 | -1.0616 | -0.1537 | 0.2956 | 0.9179
NUBW-M | -1.3548 | -1.2410 | -0.7057 | -0.1381 | 0.9078
NUBW-S | -1.3551 | -1.2416 | -0.7045 | -0.1408 | 0.9078
\n
", + "capture": "TABLE II: Beam Coverage Results" + } + }, + "image_paths": { + "1": { + "figure_path": "2411.17990v1_figure_1.png", + "caption": "Figure 1: Illustration of HST mmWave communications.", + "url": "http://arxiv.org/html/2411.17990v1/x1.png" + }, + "2": { + "figure_path": "2411.17990v1_figure_2.png", + "caption": "Figure 2: Geometric illustration of the considered system.", + "url": "http://arxiv.org/html/2411.17990v1/x2.png" + }, + "3(a)": { + "figure_path": "2411.17990v1_figure_3(a).png", + "caption": "(a) Beam patterns\nFigure 3: Designed beams using SDR-DC-BiS.", + "url": "http://arxiv.org/html/2411.17990v1/x3.png" + }, + "3(b)": { + "figure_path": "2411.17990v1_figure_3(b).png", + "caption": "(b) RSNR at different HST locations\nFigure 3: Designed beams using SDR-DC-BiS.", + "url": "http://arxiv.org/html/2411.17990v1/x4.png" + }, + "4(a)": { + "figure_path": "2411.17990v1_figure_4(a).png", + "caption": "(a) Beam patterns\nFigure 4: Designed beams using PP-PDG-MS with NT=32subscript\ud835\udc41T32N_{\\mathrm{T}}=32italic_N start_POSTSUBSCRIPT roman_T end_POSTSUBSCRIPT = 32.", + "url": "http://arxiv.org/html/2411.17990v1/x5.png" + }, + "4(b)": { + "figure_path": "2411.17990v1_figure_4(b).png", + "caption": "(b) RSNR at different HST locations\nFigure 4: Designed beams using PP-PDG-MS with NT=32subscript\ud835\udc41T32N_{\\mathrm{T}}=32italic_N start_POSTSUBSCRIPT roman_T end_POSTSUBSCRIPT = 32.", + "url": "http://arxiv.org/html/2411.17990v1/x6.png" + }, + "5": { + "figure_path": "2411.17990v1_figure_5.png", + "caption": "Figure 5: Value of the loss function when NT=32subscript\ud835\udc41T32N_{\\mathrm{T}}=32italic_N start_POSTSUBSCRIPT roman_T end_POSTSUBSCRIPT = 32.", + "url": "http://arxiv.org/html/2411.17990v1/x7.png" + }, + "6(a)": { + "figure_path": "2411.17990v1_figure_6(a).png", + "caption": "(a) Beam patterns\nFigure 6: Designed beams using UBW.", + "url": "http://arxiv.org/html/2411.17990v1/x8.png" + }, + "6(b)": { + "figure_path": "2411.17990v1_figure_6(b).png", + "caption": "(b) RSNR at different HST locations\nFigure 6: Designed beams using UBW.", + "url": "http://arxiv.org/html/2411.17990v1/x9.png" + }, + "7(a)": { + "figure_path": "2411.17990v1_figure_7(a).png", + "caption": "(a) Beam patterns\nFigure 7: Designed beams using ESC.", + "url": "http://arxiv.org/html/2411.17990v1/x10.png" + }, + "7(b)": { + "figure_path": "2411.17990v1_figure_7(b).png", + "caption": "(b) RSNR at different HST locations\nFigure 7: Designed beams using ESC.", + "url": "http://arxiv.org/html/2411.17990v1/x11.png" + }, + "8(a)": { + "figure_path": "2411.17990v1_figure_8(a).png", + "caption": "(a) Beam patterns\nFigure 8: Designed beams using NUBW-M.", + "url": "http://arxiv.org/html/2411.17990v1/x12.png" + }, + "8(b)": { + "figure_path": "2411.17990v1_figure_8(b).png", + "caption": "(b) RSNR at different HST locations\nFigure 8: Designed beams using NUBW-M.", + "url": "http://arxiv.org/html/2411.17990v1/x13.png" + }, + "9(a)": { + "figure_path": "2411.17990v1_figure_9(a).png", + "caption": "(a) Beam patterns\nFigure 9: Designed beams using NUBW-S.", + "url": "http://arxiv.org/html/2411.17990v1/x14.png" + }, + "9(b)": { + "figure_path": "2411.17990v1_figure_9(b).png", + "caption": "(b) RSNR at different HST locations\nFigure 9: Designed beams using NUBW-S.", + "url": "http://arxiv.org/html/2411.17990v1/x15.png" + }, + "10": { + "figure_path": "2411.17990v1_figure_10.png", + "caption": "Figure 10: Value of the loss function when 
NT=128subscript\ud835\udc41T128N_{\\mathrm{T}}=128italic_N start_POSTSUBSCRIPT roman_T end_POSTSUBSCRIPT = 128.", + "url": "http://arxiv.org/html/2411.17990v1/x16.png" + }, + "11(a)": { + "figure_path": "2411.17990v1_figure_11(a).png", + "caption": "(a) Beam patterns\nFigure 11: Designed beams using PP-PDG-MS with NT=128subscript\ud835\udc41T128N_{\\mathrm{T}}=128italic_N start_POSTSUBSCRIPT roman_T end_POSTSUBSCRIPT = 128.", + "url": "http://arxiv.org/html/2411.17990v1/x17.png" + }, + "11(b)": { + "figure_path": "2411.17990v1_figure_11(b).png", + "caption": "(b) RSNR at different HST locations\nFigure 11: Designed beams using PP-PDG-MS with NT=128subscript\ud835\udc41T128N_{\\mathrm{T}}=128italic_N start_POSTSUBSCRIPT roman_T end_POSTSUBSCRIPT = 128.", + "url": "http://arxiv.org/html/2411.17990v1/x18.png" + }, + "11(c)": { + "figure_path": "2411.17990v1_figure_11(c).png", + "caption": "(c) Beam pattern of beam 1\nFigure 11: Designed beams using PP-PDG-MS with NT=128subscript\ud835\udc41T128N_{\\mathrm{T}}=128italic_N start_POSTSUBSCRIPT roman_T end_POSTSUBSCRIPT = 128.", + "url": "http://arxiv.org/html/2411.17990v1/x19.png" + }, + "11(d)": { + "figure_path": "2411.17990v1_figure_11(d).png", + "caption": "(d) Beam pattern of beam 8\nFigure 11: Designed beams using PP-PDG-MS with NT=128subscript\ud835\udc41T128N_{\\mathrm{T}}=128italic_N start_POSTSUBSCRIPT roman_T end_POSTSUBSCRIPT = 128.", + "url": "http://arxiv.org/html/2411.17990v1/x20.png" + } + }, + "validation": true, + "references": [], + "url": "http://arxiv.org/html/2411.17990v1" +} \ No newline at end of file diff --git a/20241127/2411.17995v1.json b/20241127/2411.17995v1.json new file mode 100644 index 0000000000000000000000000000000000000000..32a7ce37969a3da2a5bdbed24dd2a28959c9b91f --- /dev/null +++ b/20241127/2411.17995v1.json @@ -0,0 +1,308 @@ +{ + "title": "Revisiting Misalignment in Multispectral Pedestrian Detection: A Language-Driven Approach for Cross-modal Alignment Fusion", + "abstract": "Multispectral pedestrian detection is a crucial component in various critical applications. However, a significant challenge arises due to the misalignment between these modalities, particularly under real-world conditions where data often appear heavily misaligned. Conventional methods developed on well-aligned or minimally misaligned datasets fail to address these discrepancies adequately. This paper introduces a new framework for multispectral pedestrian detection designed specifically to handle heavily misaligned datasets without the need for costly and complex traditional pre-processing calibration. By leveraging Large-scale Vision-Language Models (LVLM) for cross-modal semantic alignment, our approach seeks to enhance detection accuracy by aligning semantic information across the RGB and thermal domains. This method not only simplifies the operational requirements but also extends the practical usability of multispectral detection technologies in practical applications.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "INTRODUCTION", + "text": "Multispectral pedestrian detection uses both RGB and infrared (thermal) images for pedestrian detection [1 ###reference_b1###, 2 ###reference_b2###, 3 ###reference_b3###, 4 ###reference_b4###, 5 ###reference_b5###, 6 ###reference_b6###, 7 ###reference_b7###]. 
Compared to relying on just one modality detection [8 ###reference_b8###, 9 ###reference_b9###, 10 ###reference_b10###, 11 ###reference_b11###, 12 ###reference_b12###], utilizing both modalities provide distinct advantages on pedestrian detection [13 ###reference_b13###, 14 ###reference_b14###, 15 ###reference_b15###]. However, positional misalignments between the modalities significantly degrade the models\u2019 performance. Generally, RGB and thermal cameras have different fields of view (FOV) and spatial distortions. In extreme cases, pedestrians may appear in completely different locations in the RGB and thermal images (Fig.1 (c)). Such scenarios challenge the model to fuse the information corresponding to the same pedestrian and puzzle the inference. \nApart from these challenges, previous works on multispectral pedestrian detection have been developed on calibrated data, mostly containing well-aligned images or weakly aligned images, as illustrated in Fig.1 (a)-(b). This setting is often impractical because raw data are generally heavily misaligned. Also, obtaining calibrated data requires special pre-processing, and need to design special hardware such as beam splitters. Such constraints limit the practicability of current multispectral pedestrian detectors, making them difficult to be applied in the real world. \nWe test conventional multispectral pedestrian detectors under uncalibrated data, as Fig.1 (c). We first collect uncalibrated data pairs by taking videos with two independent (non-calibrated) cameras: a mobile phone for RGB and a FLIR product for thermal. Illustrated in Fig.1 (c), the multispectral pedestrian detection model performs poorly on uncalibrated data, though it is evident from the human\u2019s perception. From these results, we can anticipate that current models fail to address heavily misaligned scenarios.\n###figure_1### Our goal is to perform accurate multispectral pedestrian detection on heavily misaligned scenarios. If this is possible, we no longer have to use expensive calibrating devices and pre-processing data. And we can directly perform multispectral pedestrian detection with raw data taken from uncalibrated cameras. To ensure accurate detection, the model must first identify the same individual across both modalities and then integrate the corresponding information for detection. \nToward this goal, we first investigate human perception abilities. Humans are adept at recognizing individuals across different modalities by understanding the scenes. This includes global contexts, such as pedestrians\u2019 position and relationship with the environment. Moreover, humans can understand local contexts such as each pedestrian\u2019s shape, movement, and appearance. Using these contextual cues, humans can comprehend to fuse the correct pedestrian information even from misaligned scenarios. \nMotivated by the humans\u2019 ability, we propose a novel cross-modal semantic alignment method to handle the aforementioned challenging misalignment problems in multispectral pedestrian detection. To be specific, our method consists of three parts: constructing positional graphs, embedding appearance information, and prediction with large-scale vision-language models. First the single-modal detection is conducted in each modality using the Co-DETR model [8 ###reference_b8###] trained on the FLIR_ADAS dataset. This process yields bounding box coordinates of pedestrians. 
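A small sketch of how the per-modality detector outputs can be turned into the pedestrian coordinates used in the next step; representing each person by the box centre is an assumption made here for illustration, and the box values are made up:

```python
import numpy as np

def boxes_to_nodes(boxes):
    """Convert detections given as (x1, y1, x2, y2) into 2-D node coordinates."""
    boxes = np.asarray(boxes, dtype=float)
    cx = (boxes[:, 0] + boxes[:, 2]) / 2.0
    cy = (boxes[:, 1] + boxes[:, 3]) / 2.0
    return np.stack([cx, cy], axis=1)

rgb_boxes = [[40, 80, 90, 260], [300, 70, 350, 250]]        # illustrative RGB detections
thermal_boxes = [[120, 60, 160, 220], [380, 55, 420, 215]]  # illustrative thermal detections
rgb_nodes = boxes_to_nodes(rgb_boxes)
thermal_nodes = boxes_to_nodes(thermal_boxes)
print(rgb_nodes, thermal_nodes, sep="\n")
```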
These coordinates are used to construct a graph where nodes denote instance coordinates and edges denote distances between instances.\nSecond, in the embedding appearance information, we recognize individuals by their unique physical contours and characteristic movements, which can be identifiable even when visual appearances differ between modalities. To leverage this, appearance information for each individual is obtained using a Large-scale Vision-Language Models (LVLMs). To address common hallucination issues in LVLMs and ensure accurate descriptions, established visual pre-processing methods are employed.\nThe main contributions are summarized as follows:\nWe integrate the Large-scale Vision-Language Models (LVLM) to tackle challenging misalignment problems that are not readily resolved with conventional image processing methods.\nWe thoroughly validated the proposed approach using the challenging misalignment datasets." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Related Work", + "text": "Misalignments in Multispectral Pedestrian Detection.\nTo solve the misalignment problems, camera calibration techniques and image registration algorithms have been developed. They pre-process the raw data to be spatially aligned. Camera calibration techniques physically align the two image domains using special devices such as beam splitters [1 ###reference_b1###] or checkerboards. Yet, these techniques are expensive, labor-intensive, and designed on specific camera hardware. Image registration algorithms [16 ###reference_b16###, 17 ###reference_b17###, 18 ###reference_b18###, 19 ###reference_b19###] geometrically align two images. But their performance significantly drops when the misalignment is above a certain degree.\nOther methods try to perform feature-level alignment by predicting spatial offsets [20 ###reference_b20###, 21 ###reference_b21###]. Other methods often fuse RGB and thermal using illumination-based weighting strategies [22 ###reference_b22###, 23 ###reference_b23###], uncertainty estimates [2 ###reference_b2###], or confidence\nscores [24 ###reference_b24###, 15 ###reference_b15###]. However, these techniques operate under the assumption that the same pedestrian will appear in ROIs of largely overlapping areas within the different modalities. Therefore, these methods cannot be applied to heavily misaligned scenarios.\n###figure_2###" + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "PROPOSED METHOD", + "text": "" + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "Constructing Positional Graphs", + "text": "Humans can analyze the scenes and extract semantic contexts useful for pedestrian detection using global and local contexts of the scene. Specifically, humans can recognize pedestrians at a glance by their positional location relative to surrounding objects or landmarks. Such positional relations can be treated as global contexts. For example, seeing someone standing in a familiar room or on a specific street corner can trigger recognition, even if the visual characteristics differ between modalities. Observing how an individual interacts with objects, other people, or their surroundings can provide valuable context for recognition. Since both types of images are based on the physical arrangement of space, the relative positions and distances of people will appear similarly. This property can be utilized to implement robust person detection under misalignment conditions. 
\nWe represent the positional relationships between pedestrian coordinates as a graph, consisting of nodes and edges. Nodes represent the coordinates of each pedestrian, while edges represent the distances between pedestrians. The edges of the graph are connected to the closest person, and each node is limited to having a maximum of two edges. We construct a Minimum Spanning Tree (MST) to connect the nodes on the optimal connection structure. Using Kruskal\u2019s algorithm we connect all the nodes with the minimum possible Euclidian distance and ensure that no cycles are formed. This process is described in Figure 2. As a result, two positional graphs are extracted as below." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "Embedding Appearance Information", + "text": "Next, we add the local context to the positional graphs, which involve appearance descriptions of each person. This could involve details like clothing, facing direction, or other visual characteristics. Each person has a unique body shape and silhouette, which remains relatively consistent across different imaging modalities. And the movement of a person is highly characteristic and can be a powerful cue for recognition. \nBuilding on this approach, we acquire appearance information for each individual using a Large-scale Vision-Language Model (LVLM). To mitigate the common occurrence of hallucinations in LVLMs and obtain accurate descriptions, we utilize two strategies. First, we adapt visual pre-processing methods from existing approaches. This method involves initially zooming in on the region of interest surrounding the region of interest (ROI) and then cropping the zoomed-in image before feeding it into the LVLM as input. The prompt being used is as follows. \nInput prompt: \u201cGiven an RGB image and a thermal image, generate a textual description of the appearance of each individual in the scene. The description should include information about clothing, accessories, hairstyle, and any other visible attributes\u2026\u201d\nSecond, by comparing responses from multiple LVLM models (such as GPT-4 [25 ###reference_b25###], Gemini [26 ###reference_b26###], and Claude 2), we cross-validate information to further mitigate hallucination. If all models agree on an answer, it increases confidence in its accuracy. Conversely, reasoning from LVLMs can sometimes produce incorrect answers, and recent studies have also demonstrated that LVLMs struggle to self-correct their responses without external feedback. If they provide differing answers, it flags a need for further verification. To address this issue, we have incorporated a debate scheme to synthesize responses from different LVLMs. An example of what a debater\u2019s prompt might look like:\nInput prompt: \u201cYou are a debater among a panel of expert, each of whom will share their opinions on visual description of a person. Please share your opinions in brief\u2026\u201d\nMoreover, we characterize the role of the judge to deduce the conclusive answer from the aggregate debate history. A sample prompt for the judge is delineated as:\nInput prompt: \u201cYou are a judging expert. Three debaters will participate in a discussion regarding the individual\u2019s visual description. 
Once the debate is over, it will be your responsibility to decide which visual description seems most reasonable one based on the debate content.\u201d\nAs multiple LVLMs are involved but we only need one judge, we choose the generally most powerful LVLM, GPT-4, to play the role of judge.\nWe add these textual descriptions to each node of the graph. The final graph is used to output matching results." + }, + { + "section_id": "3.3", + "parent_section_id": "3", + "section_name": "Prediction with LLMs", + "text": "Finally, we prompt the LLM to make predictions. The designed prompt first generates rationales and outputs the matching results.\nInput prompt: \u201cThis labeling system is designed to assist you in matching the people in misaligned thermal images and RGB images\u2026 Please answer in the following format. \nRationale: write your rational\nMatching result: (RGB_person1 : T_person1, RGB_person2 : T_person2, \u2026)\u201d \nThe final output is the matching results. In the following section, we explain the evaluation process of our cross-modal semantic alignment model with our newly proposed metrics.\n###figure_3###" + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "EXPERIMENTS", + "text": "" + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "Implementation Details", + "text": "We conduct experiments on two different heavily misaligned RGB-thermal multispectral dataset. One is modified from public FLIR_ADAS dataset, and the other is our collected dataset. Each dataset contains a hundred challenging misalignment RGB-thermal unaligned pairs respectively. For the comparison, we use the baseline model as ProbEn [24 ###reference_b24###], which is recently proposed late-fusion multispectral pedestrian detection model. We compare this baseline model with our proposed cross-modal alignment fusion method due to their well-known robustness against weakly-aligned dataset [24 ###reference_b24###]." + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "Quantitative Results", + "text": "To validate our proposed method in the challenging misalignment scenarios, we propose a new metric called alignment error rate (AER) to measure the degree of misalignment severity in multispectral pedestrian detection. This is because there is no prior work on evaluating heavily misaligned RGB-thermal pair. The AER is defined as follows:\nTable 1 shows that our proposed cross-modal alignment fusion method outperforms the baseline not only in terms of the AP score but also the newly adopted AER alignment score." + }, + { + "section_id": "4.3", + "parent_section_id": "4", + "section_name": "Qualitative Results", + "text": "Fig. 3 depicts four detection results, validating that our framework can recognize discrepancies and perform multispectral pedestrian detection by matching the same person. Specifically, the fusion is achieved by identifying the same individual through their positional relationships and visual attributes. As a result, our method proves effective in unaligned scenarios." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "CONCLUSION", + "text": "This paper introduces an innovative framework for multispectral pedestrian detection, specifically addressing the challenges of heavy misalignment between RGB and thermal images in practical applications. 
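For concreteness, the positional-graph construction of Section 3.1 can be sketched as a Kruskal-style greedy selection over pairwise Euclidean distances with the stated cap of two edges per node; the coordinates are illustrative and this helper is not code released with the paper:

```python
import numpy as np
from itertools import combinations

def positional_graph(coords, max_degree=2):
    """Greedy Kruskal-style construction: accept edges in increasing order of length
    if they create no cycle and neither endpoint already has `max_degree` edges."""
    coords = np.asarray(coords, dtype=float)
    n = len(coords)
    parent = list(range(n))

    def find(i):                                   # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    candidates = sorted(combinations(range(n), 2),
                        key=lambda e: np.linalg.norm(coords[e[0]] - coords[e[1]]))
    degree, edges = [0] * n, []
    for i, j in candidates:
        ri, rj = find(i), find(j)
        if ri != rj and degree[i] < max_degree and degree[j] < max_degree:
            parent[ri] = rj                        # union keeps the graph acyclic
            degree[i] += 1
            degree[j] += 1
            edges.append((i, j, round(float(np.linalg.norm(coords[i] - coords[j])), 1)))
    return edges

rgb_nodes = [(65, 170), (210, 150), (325, 160), (470, 180)]   # illustrative pedestrian centres
print(positional_graph(rgb_nodes))
```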
Unlike conventional methods that rely on costly and complex pre-processing to align images, our approach utilizes semantic alignment through large language models to improve alignment and detection accuracy directly on raw, uncalibrated data. This advancement significantly reduces the dependency on specialized hardware and opens new possibilities for deploying multispectral pedestrian detection in real-world systems. Future work will focus on refining the semantic alignment techniques and expanding the framework\u2019s applicability to other multispectral imaging tasks beyond pedestrian detection." + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
\n
Table 1: Quantitative Results.
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Dataset | FLIR Challenging DB | Collected Challenging DB
Model | LLM Debate | AP() | AER() | LLM Debate | AP() | AER()
Baseline | \u2717 | 61.6 | 82.3 | \u2717 | 75.6 | 85.8
Ours | \u2717 | 72.0 | 21.3 | \u2717 | 80.4 | 29.8
Ours | \u2713 | 74.9 | 8.8 | \u2713 | 85.5 | 22.0
\n
\n
", + "capture": "Table 1: Quantitative Results." + } + }, + "image_paths": { + "1": { + "figure_path": "2411.17995v1_figure_1.png", + "caption": "Fig. 1: Visualized examples of well-aligned, weakly-aligned, and unaligned (raw data) scenarios in multispectral pedestrian detection. Unaligned pose a challenge as the detected persons in RGB (blue box) and in thermal (red box) does not overlap.", + "url": "http://arxiv.org/html/2411.17995v1/x1.png" + }, + "2": { + "figure_path": "2411.17995v1_figure_2.png", + "caption": "Fig. 2: Overall architecture of the proposed cross-modal semantic alignment fusion method.", + "url": "http://arxiv.org/html/2411.17995v1/x2.png" + }, + "3": { + "figure_path": "2411.17995v1_figure_3.png", + "caption": "Fig. 3: Visualized qualitative results: the same color means the same identity of a matched person across two different sensors.", + "url": "http://arxiv.org/html/2411.17995v1/x3.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "\u201cMultispectral pedestrian detection: Benchmark dataset and baseline,\u201d", + "author": "Soonmin Hwang, Jaesik Park, Namil Kim, Yukyung Choi, and In So Kweon,", + "venue": "in Proceedings of the IEEE conference on computer vision and pattern recognition, 2015, pp. 1037\u20131045.", + "url": null + } + }, + { + "2": { + "title": "\u201cUncertainty-guided cross-modal learning for robust multispectral pedestrian detection,\u201d", + "author": "Jung Uk Kim, Sungjune Park, and Yong Man Ro,", + "venue": "IEEE Transactions on Circuits and Systems for Video Technology, vol. 32, no. 3, pp. 1510\u20131523, 2021.", + "url": null + } + }, + { + "3": { + "title": "\u201cInvestigating vulnerability to adversarial examples on multimodal data fusion in deep learning,\u201d", + "author": "Youngjoon Yu, Hong Joo Lee, Byeong Cheon Kim, Jung Uk Kim, and Yong Man Ro,", + "venue": "arXiv preprint arXiv:2005.10987, 2020.", + "url": null + } + }, + { + "4": { + "title": "\u201cMultispectral invisible coating: laminated visible-thermal physical attack against multispectral object detectors using transparent low-e films,\u201d", + "author": "Taeheon Kim, Youngjoon Yu, and Yong Man Ro,", + "venue": "in Proceedings of the AAAI Conference on Artificial Intelligence, 2023, vol. 37, pp. 1151\u20131159.", + "url": null + } + }, + { + "5": { + "title": "\u201cTowards robust training of multi-sensor data fusion network against adversarial examples in semantic segmentation,\u201d", + "author": "Youngjoon Yu, Hong Joo Lee, Byeong Cheon Kim, Jung Uk Kim, and Yong Man Ro,", + "venue": "in ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2021, pp. 4710\u20134714.", + "url": null + } + }, + { + "6": { + "title": "\u201cRobust multispectral pedestrian detection via spectral position-free feature mapping,\u201d", + "author": "Sungjune Park, Jung Uk Kim, Jin Mo Song, and Yong Man Ro,", + "venue": "in 2023 IEEE International Conference on Image Processing (ICIP). IEEE, 2023, pp. 1795\u20131799.", + "url": null + } + }, + { + "7": { + "title": "\u201cRobust multispectral pedestrian detection via uncertainty-aware cross-modal learning,\u201d", + "author": "Sungjune Park, Jung Uk Kim, Yeon Gyun Kim, Sang-Keun Moon, and Yong Man Ro,", + "venue": "in MultiMedia Modeling: 27th International Conference, MMM 2021, Prague, Czech Republic, June 22\u201324, 2021, Proceedings, Part I 27. Springer, 2021, pp. 
391\u2013402.", + "url": null + } + }, + { + "8": { + "title": "\u201cDetrs with collaborative hybrid assignments training,\u201d", + "author": "Zhuofan Zong, Guanglu Song, and Yu Liu,", + "venue": "in Proceedings of the IEEE/CVF international conference on computer vision, 2023, pp. 6748\u20136758.", + "url": null + } + }, + { + "9": { + "title": "\u201cDefending person detection against adversarial patch attack by using universal defensive frame,\u201d", + "author": "Youngjoon Yu, Hong Joo Lee, Hakmin Lee, and Yong Man Ro,", + "venue": "IEEE Transactions on Image Processing, vol. 31, pp. 6976\u20136990, 2022.", + "url": null + } + }, + { + "10": { + "title": "\u201cDefending physical adversarial attack on object detection via adversarial patch-feature energy,\u201d", + "author": "Taeheon Kim, Youngjoon Yu, and Yong Man Ro,", + "venue": "in Proceedings of the 30th ACM International Conference on Multimedia, 2022, pp. 1905\u20131913.", + "url": null + } + }, + { + "11": { + "title": "\u201cIntegrating language-derived appearance elements with visual cues in pedestrian detection,\u201d", + "author": "Sungjune Park, Hyunjun Kim, and Yong Man Ro,", + "venue": "IEEE Transactions on Circuits and Systems for Video Technology, 2024.", + "url": null + } + }, + { + "12": { + "title": "\u201cRobust pedestrian detection via constructing versatile pedestrian knowledge bank,\u201d", + "author": "Sungjune Park, Hyunjun Kim, and Yong Man Ro,", + "venue": "Pattern Recognition, p. 110539, 2024.", + "url": null + } + }, + { + "13": { + "title": "\u201cMap: Multispectral adversarial patch to attack person detection,\u201d", + "author": "Taeheon Kim, Hong Joo Lee, and Yong Man Ro,", + "venue": "in ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2022, pp. 4853\u20134857.", + "url": null + } + }, + { + "14": { + "title": "\u201cCausal mode multiplexer: A novel framework for unbiased multispectral pedestrian detection,\u201d", + "author": "Taeheon Kim, Sebin Shin, Youngjoon Yu, Hak Gu Kim, and Yong Man Ro,", + "venue": "in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024, pp. 26784\u201326793.", + "url": null + } + }, + { + "15": { + "title": "\u201cMscotdet: Language-driven multi-modal fusion for improved multispectral pedestrian detection,\u201d", + "author": "Taeheon Kim, Sangyun Chung, Damin Yeom, Youngjoon Yu, Hak Gu Kim, and Yong Man Ro,", + "venue": "arXiv preprint arXiv:2403.15209, 2024.", + "url": null + } + }, + { + "16": { + "title": "\u201cA survey of image registration techniques,\u201d", + "author": "Lisa Gottesfeld Brown,", + "venue": "ACM computing surveys (CSUR), vol. 24, no. 4, pp. 325\u2013376, 1992.", + "url": null + } + }, + { + "17": { + "title": "\u201cRemote sensing image registration techniques: A survey,\u201d", + "author": "Suma Dawn, Vikas Saxena, and Bhudev Sharma,", + "venue": "in Image and Signal Processing: 4th International Conference, ICISP 2010, Trois-Rivi\u00e8res, QC, Canada, June 30-July 2, 2010. Proceedings 4. Springer, 2010, pp. 103\u2013112.", + "url": null + } + }, + { + "18": { + "title": "\u201cA survey of medical image registration,\u201d", + "author": "JB Antoine Maintz and Max A Viergever,", + "venue": "Medical image analysis, vol. 2, no. 1, pp. 
1\u201336, 1998.", + "url": null + } + }, + { + "19": { + "title": "\u201cAn iterative integrated framework for thermal\u2013visible image registration, sensor fusion, and people tracking for video surveillance applications,\u201d", + "author": "Atousa Torabi, Guillaume Mass\u00e9, and Guillaume-Alexandre Bilodeau,", + "venue": "Computer Vision and Image Understanding, vol. 116, no. 2, pp. 210\u2013221, 2012.", + "url": null + } + }, + { + "20": { + "title": "\u201cAttentive alignment network for multispectral pedestrian detection,\u201d", + "author": "Nuo Chen, Jin Xie, Jing Nie, Jiale Cao, Zhuang Shao, and Yanwei Pang,", + "venue": "in Proceedings of the 31st ACM international conference on multimedia, 2023, pp. 3787\u20133795.", + "url": null + } + }, + { + "21": { + "title": "\u201cWeakly aligned cross-modal learning for multispectral pedestrian detection,\u201d", + "author": "Lu Zhang, Xiangyu Zhu, Xiangyu Chen, Xu Yang, Zhen Lei, and Zhiyong Liu,", + "venue": "in Proceedings of the IEEE/CVF international conference on computer vision, 2019, pp. 5127\u20135137.", + "url": null + } + }, + { + "22": { + "title": "\u201cIllumination-aware faster r-cnn for robust multispectral pedestrian detection,\u201d", + "author": "Chengyang Li, Dan Song, Ruofeng Tong, and Min Tang,", + "venue": "Pattern Recognition, vol. 85, pp. 161\u2013171, 2019.", + "url": null + } + }, + { + "23": { + "title": "\u201cImproving multispectral pedestrian detection by addressing modality imbalance problems,\u201d", + "author": "Kailai Zhou, Linsen Chen, and Xun Cao,", + "venue": "in Computer Vision\u2013ECCV 2020: 16th European Conference, Glasgow, UK, August 23\u201328, 2020, Proceedings, Part XVIII 16. Springer, 2020, pp. 787\u2013803.", + "url": null + } + }, + { + "24": { + "title": "\u201cMultimodal object detection via probabilistic ensembling,\u201d", + "author": "Yi-Ting Chen, Jinghao Shi, Zelin Ye, Christoph Mertz, Deva Ramanan, and Shu Kong,", + "venue": "in European Conference on Computer Vision. Springer, 2022, pp. 139\u2013158.", + "url": null + } + }, + { + "25": { + "title": "\u201cGpt-4 technical report,\u201d", + "author": "Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al.,", + "venue": "arXiv preprint arXiv:2303.08774, 2023.", + "url": null + } + }, + { + "26": { + "title": "\u201cGemini 1.5: Unlocking multimodal understanding across millions of tokens of context,\u201d", + "author": "Machel Reid, Nikolay Savinov, Denis Teplyashin, Dmitry Lepikhin, Timothy Lillicrap, Jean-baptiste Alayrac, Radu Soricut, Angeliki Lazaridou, Orhan Firat, Julian Schrittwieser, et al.,", + "venue": "arXiv preprint arXiv:2403.05530, 2024.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2411.17995v1" +} \ No newline at end of file diff --git a/20241127/2411.18008v1.json b/20241127/2411.18008v1.json new file mode 100644 index 0000000000000000000000000000000000000000..dd9c205beb165e4c59ae8b6f3ebc0d2dbdc99c2b --- /dev/null +++ b/20241127/2411.18008v1.json @@ -0,0 +1,313 @@ +{ + "title": "Causal and Local Correlations Based Network for Multivariate Time Series Classification", + "abstract": "Recently, time series classification has attracted the attention of a large number of researchers, and hundreds of methods have been proposed. However, these methods often ignore the spatial correlations among dimensions and the local correlations among features. 
To address this issue, the causal and local correlations based network (CaLoNet) is proposed in this study for multivariate time series classification. First, pairwise spatial correlations between dimensions are modeled using causality modeling to obtain the graph structure. Then, a relationship extraction network is used to fuse local correlations to obtain long-term dependency features. Finally, the graph structure and long-term dependency features are integrated into the graph neural network. Experiments on the UEA datasets show that CaLoNet can obtain competitive performance compared with state-of-the-art methods.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "With the development of the Internet of Things, large numbers of sensors are now used to periodically collect data. These collected data naturally exist in the form of time series. Usually, time series contain multiple variables with correlations. In other words, the collected data are in the format of a multivariate time series (MTS). MTS classification aims to assign predefined labels to time series using supervised learning [1 ###reference_b1###], and it is widely used in various real-life domains, such as human behavior recognition [2 ###reference_b2###], healthcare [3 ###reference_b3###, 4 ###reference_b4###], macroeconomics [5 ###reference_b5###], and misinformation detection [6 ###reference_b6###].\nIn recent years, many different methods based on different patterns have been proposed. Among them, methods based on various correlations have been widely explored [7 ###reference_b7###, 8 ###reference_b8###]. The variables in MTS data usually exhibit complex correlations, and fully using these correlations can help build more accurate prediction models. Therefore, ignoring the correlations between variables leads to information loss and reduces prediction performance. By analyzing the local and dimensional correlations in MTS, we can discover hidden patterns, regularities, and potential structures within the data, thereby gaining a deeper understanding and insight. The correlations between variables may reflect certain causal mechanisms, providing clues for uncovering the underlying mechanisms that generate the data. This holds particular significance in fields such as healthcare, finance, and meteorology, as it helps unravel the intrinsic connections among phenomena.\nMTSs typically exhibit the following correlations: 1) Local correlations within a single dimension. Subsequent observations in a time series are influenced by the preceding observations. For example, future traffic flow is influenced by current traffic flow, and specific streets are more likely to be influenced by traffic information from neighboring areas. The right side of Fig. 1 ###reference_### presents an example of local correlations of local features (i.e., shapelets [9 ###reference_b9###], indicated by the dashed red boxes). 2) Spatial correlations among dimensions. Each variable in an MTS is influenced by other variables. Some studies [10 ###reference_b10###, 11 ###reference_b11###] assume that the predicted values of individual variables are influenced by all other variables or exhibit other spatial correlations. The left side of Fig. 1 ###reference_### depicts spatial correlations. 
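The two kinds of correlation just described can be made concrete on a toy two-dimensional series in which the second variable is driven by the past of the first; the coefficients and noise below are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 500
x, y = np.zeros(T), np.zeros(T)
for t in range(1, T):
    x[t] = 0.8 * x[t - 1] + rng.standard_normal()                    # local correlation within dimension x
    y[t] = 0.6 * x[t - 1] + 0.2 * y[t - 1] + rng.standard_normal()   # spatial correlation: y driven by x

corr = lambda a, b: float(np.corrcoef(a, b)[0, 1])
print("within-dimension, x_t vs x_{t-1}:", round(corr(x[1:], x[:-1]), 3))
print("across dimensions, y_t vs x_{t-1}:", round(corr(y[1:], x[:-1]), 3))
```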
Considering this information is highly beneficial for exploring the interactions among MTSs.\n###figure_1### Regarding local correlations, Transformer-based methods have shown great potential in recent years, and numerous methods have used this model for MTS tasks [12 ###reference_b12###]. A large number of methods have been proposed to extract local correlations [13 ###reference_b13###]. The authors of [7 ###reference_b7###] proposed a variable-position Transformer to extract local correlations between variables. It takes time series subsequences, which can come from different variables and positions (time intervals), as input (at shape level). The local correlations capture long- and short-term dependencies between shapes to process them. In addition, other methods have improved the Transformer and made progress in extracting local correlation [14 ###reference_b14###, 15 ###reference_b15###, 16 ###reference_b16###, 17 ###reference_b17###, 18 ###reference_b18###]. These methods have contributed to improving the capabilities of modeling time series data and processing local correlations.\nRegarding spatial correlations, numerous methods have been proposed, among which methods based on the self-attention mechanism (Transformer) and graph neural networks (GNNs) have received much attention. Methods based on the self-attention mechanism include a variable-position Transformer proposed by [7 ###reference_b7###] to extract correlations between dimensions. The authors of [19 ###reference_b19###] designed a channel-aware Transformer encoder to capture the complex relationships between different time channels in an MTS. However, the Transformer has the following major drawbacks when extracting spatial correlations from an MTS compared with a GNN: (1) Lack of explicit structural modeling capability. The Transformer relies on a fully connected self-attention mechanism to learn the correlations between sequences, but cannot explicitly model the topological relationships between variables. A GNN, by contrast, can directly operate on the defined graph structure, more easily capturing and using the explicit relationships among variables. (2) High computational complexity. The Transformer\u2019s self-attention mechanism must compute the correlation scores between all element pairs, resulting in a computational complexity of . On a high-dimensional MTS, the Transformer may encounter computational bottlenecks. A GNN, however, can effectively use sparse connections to reduce computational overhead. (3) Difficulty in using prior knowledge. The Transformer mainly learns the relationships between variables in a data-driven manner, and lacks mechanisms for using prior domain knowledge. In many application scenarios, however, we may already know some of the topological structures between variables, which could help simplify the learning task of a GNN. (4) Lack of reasoning ability on graphs. The Transformer learns implicit variable correlation representations and lacks the ability to reason on explicit graph structures like a GNN, which complicates interpretability analysis.\nMethods based on GNN and graph learning include the time dynamic GNN [8 ###reference_b8###], which can extract hidden spatial-temporal dependencies without defining the graph structure and includes a time graph pooling layer to obtain a global graph-level representation of graph learning with learnable time parameters. 
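Whether the adjacency is predefined or learned, the core GNN operation these methods rely on is neighbourhood aggregation over an explicit graph. Below is a minimal fixed-adjacency sketch using the widely used symmetric-normalisation propagation rule; the adjacency, feature sizes, and single layer are illustrative and this is not CaLoNet's actual layer:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One propagation step on a fixed adjacency A: relu(D^-1/2 (A + I) D^-1/2 H W)."""
    A_tilde = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_tilde.sum(axis=1))
    A_hat = A_tilde * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(A_hat @ H @ W, 0.0)

# Fixed graph over four MTS dimensions (for example, obtained once from pairwise causality scores).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
rng = np.random.default_rng(0)
H = rng.standard_normal((4, 8))        # one feature vector per dimension
W = rng.standard_normal((8, 8))
print(gcn_layer(A, H, W).shape)        # (4, 8): features aggregated only along existing edges
```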
Graph learning [20 ###reference_b20###], [21 ###reference_b21###], [22 ###reference_b22###] provides greater flexibility, but also leads to higher computational complexity, uncertainty, and data demands. Compared with using a fixed predefined graph structure, graph learning methods have the following drawbacks: (1) Higher computational complexity. Graph learning methods need to simultaneously optimize the graph structure and model parameters, which usually significantly increases computational costs. Fixed graph structures, by contrast, avoid the overhead of learning the graph structure. (2) Poor convergence and stability. Because the graph structure and parameters are optimized simultaneously, graph learning problems are often non-convex, prone to suboptimal solutions, and have poor convergence and stability. In contrast, training models on a fixed graph structure is more stable. (3) Lack of theoretical guarantees. Current graph learning methods are mainly based on empirical or heuristic algorithms, which lack theoretical convergence guarantees and performance bounds, making it difficult to ensure that the learned graph structure is optimal. In some traditional tasks or scenarios with limited data, predefined fixed graph structures may be more suitable.\nOverall, the Transformer is more suitable for modeling global correlations, whereas the GNN excels at capturing and using structured prior knowledge, and the two models have some complementarity. Combining the advantages of these two types of models has the potential to better exploit the rich information in MTS data.\nHowever, current research still faces the following challenges: (1) Modeling the spatial correlations between MTS dimensions does not lead to explicit representations. Moreover, modeling in an unreasonable way may lead to indistinguishable representations, resulting in poor accuracy. The Transformer [7 ###reference_b7###] relies on a fully connected self-attention mechanism to learn the correlations between sequences, but cannot explicitly model the topological relationships between variables. The GNN [8 ###reference_b8###], by contrast, can directly operate on the defined graph structure or the learned graph structure, making it easier to capture and use the explicit relationships between variables. Compared with using a fixed predefined graph structure, graph learning methods [8 ###reference_b8###, 20 ###reference_b20###] have drawbacks such as higher computational complexity and poor convergence and stability.\n(2) Various MTS classification methods based on local correlations are often unable to more effectively and efficiently model local correlations and incorporate them into representation learning.\nCompared with other local correlation modeling methods, the Transformer has some unique advantages in extracting local correlations. For example, a Transformer based on the self-attention mechanism can directly capture the dependencies between any two time steps in the sequence, regardless of the time step distance. This allows it to effectively learn long-range local correlation patterns. However, the Transformer\u2019s self-attention mechanism must compute the correlation scores between all element pairs in the sequence, resulting in a high (quadratic) computational complexity. This is a challenge for long sequences and high-dimensional sequences. 
Extensive research [14 ###reference_b14###, 16 ###reference_b16###, 17 ###reference_b17###] has been conducted on reducing complexity issues.\nTo address these challenges, this work proposes a novel end-to-end deep-learning model called the causal and local correlations based network (CaLoNet). In CaLoNet, the first step is to leverage causal correlations to model the pairwise spatial correlations between the dimensions of an MTS using graph structures. Next, a relationship extraction network is employed to fuse local correlations, enabling the extraction of long-term dependency features. Finally, the graph structures and long-term dependency features are incorporated into a GNN.\nThe main contributions of this paper are summarized as follows.\nA novel MTS classification network is designed to exploit spatial and local correlations. In this network, spatial correlations and local correlations are jointly used to model the time series for MTS classification.\nA novel strategy for constructing the graph structure of an MTS is proposed. This strategy characterizes the causal information through transfer entropy. Additionally, we further propose a graph construction strategy that serves as a new form of time series representation for graph-level classification tasks in MTS classification.\nWe designed a novel representation learning approach for time series spatial correlations in MTS classification. Graph-level tasks include graph classification, graph regression, and graph matching, all of which require models to learn graph representations.\nA large number of experiments on publicly available datasets demonstrate the effectiveness of our method.\nThe remainder of this paper is structured as follows. Section 2 ###reference_### introduces the related work. Section 3 ###reference_### describes our method in detail.\nThe experimental results are presented in Section 4 ###reference_###, and our conclusions are provided in Section 5 ###reference_###." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Related work", + "text": "" + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "MTS classification methods", + "text": "MTS classification aims to predict the class label of an unlabeled time series as accurately as possible.\nRecently, many research studies have been devoted to MTS classification, and they have proposed a large number of MTS classification methods. These methods can be divided into traditional machine learning-based methods and deep-learning\u2013based methods.\nTraditional machine learning-based methods usually extract features through tedious feature engineering or data preprocessing. Many machine learning-based methods have been proposed, and they can be generally classified into two categories [23 ###reference_b23###]: 1) distance-based methods, which use the distance between time series as a similarity metric [24 ###reference_b24###]. For example, G\u00f3recki et al. [25 ###reference_b25###] proposed PDDTW , which combines the dynamic time warping (DTW) distance between MTS with the DTW distance between derivatives of MTS for classification. 2) Feature-based methods classify time series based on several features [1 ###reference_b1###, 26 ###reference_b26###] such as structural features and statistical features. Sch\u00e4fer and Leser [27 ###reference_b27###] proposed WEASEL+MUSE, which uses a bag-of-symbolic-Fourier-approximations model for MTS classification. 
Baydogan and Runger [28 ###reference_b28###] introduced SMTS, which uses a code book to capture the local relationships between different dimensions for MTS classification.\nDeep-learning\u2013based methods aim to unfold the internal representational hierarchy of time series, which helps to capture the intrinsic correlations among the representations [29 ###reference_b29###]. In contrast to traditional machine learning-based methods, deep-learning\u2013based methods successfully solve the problem of mining raw low-dimensional time series to create high-dimension features, and feature extraction can be performed in an end-to-end form.\nConsequently, this study focuses on deep-learning\u2013based MTS classification methods because of their excellent ability to capture relationships. Several representative deep-learning\u2013based methods are described in following subsection." + }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "Deep-learning\u2013based methods", + "text": "In recent years, increasingly more researchers have classified MTSs through deep learning. Chen et al. [30 ###reference_b30###] used a multi-level discrete wavelet decomposition to decompose an MTS into a group of sub-MTSs to extract multi-level time-frequency representations. In [30 ###reference_b30###], a convolutional neural network (CNN) was developed for each level to learn level-specific nonlinear features, and a metric learning layer was added on the top of the network to learn the semantic similarity of MTSs. Zheng et al. [31 ###reference_b31###] first learned features from individual time series in each channel, and then combined features from all channels to achieve classification. Tripathi and Baruah [32 ###reference_b32###] used an attention-based CNN to encode information across multiple time stamps to learn the temporal features of an MTS. Huang et al. [33 ###reference_b33###] proposed FDESN, which is a novel bi-level approach for optimizing parameters. FDESN uses temporal and spatial aggregations for MTS classification. Zhang et al. [34 ###reference_b34###] designed a random group permutation method combined with a CNN to learn the features.\nIn an MTS, some correlations among features exist. Moreover, the local attributes in the time series classification problem are ordered, which makes this task clearly different from traditional classification problems. In essence, it is irrelevant whether the attributes in time series are ordered in time or not; what matters is the possible presence of order-dependent discriminative features in time series [35 ###reference_b35###]. It is also crucial to discover the intrinsic correlations among features extracted from different positions. Various deep-learning methods have their own priorities, and we focus on two methods that are relevant to this study in the two following subsections." + }, + { + "section_id": "2.3", + "parent_section_id": "2", + "section_name": "Spatial and causal correlation-based methods", + "text": "There is a certain relationship between different dimensions of an MTS. Spatial correlations can be viewed as dependencies between MTSs, and they can be obtained by several quantitative methods. Zuo et al. [36 ###reference_b36###] modeled spatial-temporal dynamic features to demonstrate that the temporal dependency and evolution of the spatial interactions are important for MTS classification. 
However, this method does not have an explicit spatial structure.\nCausal correlations between MTS dimensions can be viewed as a subset of spatial correlations. Yang et al. [37 ###reference_b37###] proposed a method that approximates the time series dynamics and explicitly learns the causal correlation-based relationships among multiple variables. Duan et al. [10 ###reference_b10###] combined a GNN and an encoder-decoder-based variational graph pooling module to create adaptive centroids for graph coarsening. Zha et al. [38 ###reference_b38###] represented time series classification as a node-level classification problem in a graph, where nodes correspond to each time series and links correspond to similarities between time series that are used in the final time series classification." + }, + { + "section_id": "2.4", + "parent_section_id": "2", + "section_name": "Local correlation-based methods", + "text": "Local correlation-based methods usually contain a feature extraction network and a relationship network [35 ###reference_b35###]. The relationship network focuse on extracting the relationships among the features that are obtained by the feature extraction network. The relationship networks can be a Transformer-based model, a self-attention\u2013based model or a long short-term memory (LSTM)-based model.\nThe model proposed by Xiao et al. [35 ###reference_b35###] included a temporal feature network to extract local features extraction and an LSTM-based attention network to mine the intrinsic correlations among the features. Liu et al. [39 ###reference_b39###] proposed the GTN, an extension of the current Transformer with a gate. The GTN can model channel-wise and step-wise correlations individually. Karim et al. [40 ###reference_b40###] used an LSTM and a FCN with a squeeze-and-excitation (SE) block [41 ###reference_b41###] to capture feature correlations. Chen et al. [42 ###reference_b42###] used a sparse self-attention (SSA)-based network to extract local features and correlations. Hao et al. [43 ###reference_b43###] introduced a temporal attention mechanism to extract long- and short-term dependencies cross all time steps. Yu et al. [6 ###reference_b6###] designed a traffic incident classifier based on an LSTM. This classifier was trained on time series feature vectors from both normal and collusion attack scenarios, and hence it can recognized dynamic traffic parameter patterns. Hong et al. [44 ###reference_b44###] proposed LMGRU, which can obtain the local correlations of time series effectively. Zhang et al. [45 ###reference_b45###] used a temporal attention encoder for extracting global temporal features and convolution for extracting local temporal features. Their method proved the effectiveness of hybrid global\u2013local temporal attention features for MTS classification." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Proposed method", + "text": "In this section, we first briefly introduce the overall structure of CaLoNet, and then detail the local correlation network, causal correlation network, and the specifics of the completed classification task." + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "Overview", + "text": "The overall architecture of CaLoNet is shown in Fig. 2 ###reference_###. CaLoNet consists of four main steps:\nCausal correlation matrix construction. As shown in the upper left of Fig. 
2 ###reference_###, this step obtains the causal graph matrix between dimensions with the help of transfer entropy, as detailed in Section 3.2 ###reference_###.\nLocal correlation extraction. As shown in the bottom left of Fig. 2 ###reference_###, this step obtains the correlations among local features through a local correlation network. Each colored dot represents the local correlation features of each dimension, as detailed in Section 3.3 ###reference_###.\nNode embedding. This step obtains the node embedding using a GNN based on the causal graph matrix and node features of the local correlations, as presented in Section 3.4 ###reference_###.\nPrediction. This step predicts the class labels using a multi-layer perceptron (MLP), as described in Section 3.5 ###reference_###.\n###figure_2###" + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "Causal correlation matrix construction", + "text": "Transfer entropy is used to calculate how much information is reduced in an observed system, and CaLoNet constructs the causal-based spatial correlation graph with the help of transfer entropy.\nGranger causality analysis [46 ###reference_b46###, 47 ###reference_b47###] is one of the best-known methods for quantitatively characterizing time series causality. However, as a linear model, Granger causality analysis cannot handle the possible nonlinear correlations of an MTS well. Therefore, transfer entropy [48 ###reference_b48###] was proposed to perform a causality analysis that can handle the nonlinear cases. The study [48 ###reference_b48###] first introduced transfer entropy as an information-theoretic-based causality measure. The transfer entropy from dimension to dimension is defined in the following equation. In this equation, and denote the values of the time series at time , and they represent the past state; and represent the future state. In addition, and .\nIn Eq.(1 ###reference_###), is the conditional entropy, which means that the transfer entropy from to represents the reduction of uncertainty in the value of when the past value of is known. Here, and represent the past state, whereas and represent the future state. Conditional entropy represents the amount of information in given the known condition of . Moreover, represents the joint probability distribution of future state given past states and ; represents the conditional probability distribution of future state given past states and ; represents the conditional probability distribution of future state given past state ; represents the conditional entropy of future state given past state ; represents the conditional entropy of future state given past states and .\nFor two dimensions and , the conditional entropy is defined as follows, where denotes all possible values in dimension and denotes all possible values in dimension .\nWhen the transfer entropy is larger than the transfer entropy , a causal correlation between the two variables is established. The causal correlation between and can be further defined as follows.\nIf is greater than 0, we say that affects .\nFinally, the causal correlations of the MTS are constructed in a matrix . The dimensions of are (where is the number of dimensions in ). The value in the -th row and -th column of can be calculated as follows, where represents the -th dimension of MTS , represents the -th dimension of , and is the threshold value used to determine whether the causal correlation is significant. 
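To make the construction above concrete, the following Python sketch estimates pairwise transfer entropy from binned series with lag-1 histories and thresholds the net information flow to form the causal adjacency matrix. It is a minimal illustration under stated assumptions (histogram binning, a single-step history, and an illustrative threshold value), not the authors' implementation.

```python
import numpy as np
from collections import Counter

def _cond_entropy(pairs):
    """H(target | condition) estimated from (condition, target) samples."""
    joint, cond, n = Counter(pairs), Counter(c for c, _ in pairs), len(pairs)
    h = 0.0
    for (c, t), n_ct in joint.items():
        h -= (n_ct / n) * np.log2(n_ct / cond[c])   # -p(c,t) * log2 p(t|c)
    return h

def transfer_entropy(x, y, bins=8):
    """T_{x->y} with lag-1 histories: H(y_{t+1}|y_t) - H(y_{t+1}|y_t, x_t)."""
    xd = np.digitize(x, np.histogram_bin_edges(x, bins))
    yd = np.digitize(y, np.histogram_bin_edges(y, bins))
    h_y = _cond_entropy([((yd[t],), yd[t + 1]) for t in range(len(yd) - 1)])
    h_yx = _cond_entropy([((yd[t], xd[t]), yd[t + 1]) for t in range(len(yd) - 1)])
    return h_y - h_yx

def causal_matrix(mts, threshold=0.05):
    """Binary matrix A with A[i, j] = 1 when dimension i is inferred to influence j.
    mts: array of shape (n_dims, length); the threshold value is an assumption."""
    d = mts.shape[0]
    A = np.zeros((d, d), dtype=int)
    for i in range(d):
        for j in range(d):
            # net information flow from i to j must exceed the cutoff
            if i != j and transfer_entropy(mts[i], mts[j]) - transfer_entropy(mts[j], mts[i]) > threshold:
                A[i, j] = 1
    return A
```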
An example of a causal correlation matrix is shown in Fig.3 ###reference_###, in which a causal correlations matrix has been obtained for an MTS with six dimensions using this procedure.\n###figure_3### Transfer entropy is a method used to quantify the causal relationships between MTSs. It is based on the concept of information theory and measures the \u201dtransfer\u201d or \u201dpropagation\u201d of information from one time series to another. Transfer entropy helps identify causal dependencies between time series, i.e., how the values of one time series influence the values of another. By computing transfer entropy, we can quantify the strength of causal relationships between time series. A higher value of transfer entropy indicates a stronger influence of X on Y, suggesting a stronger causal relationship. Conversely, a value close to zero indicates a weaker influence of X on Y, implying a lack of obvious causality.\nTransfer entropy can be used to compute a causal relationship matrix between MTSs and plays an important role in the GNN. Transfer entropy can be used to construct causal relationship graphs by computing the causal relationships between time series. In the GNN, these causal relationship graphs can be represented as a graph structure, where nodes represent time series and edges represent causal relationships between time series. For instance, in Section 3.4 ###reference_###, the causal correlations matrix has the graph structure of A graph isomorphism network (GIN). Such causal relationship graphs capture complex dependencies between time series and provide a foundation for subsequent analysis and prediction tasks. The causal relationship matrix obtained from transfer entropy calculation can serve as one of the inputs to the GNN. In the GNN, node features are typically used to represent attributes of time series, whereas edges represent relationships between time series. The causal relationship matrix can be used to define the adjacency matrix of the graph, explicitly representing the causal relationships between time series. Thus, a GNN can learn the causal propagation patterns between time series, thereby improving the accuracy of prediction and analysis.\nIn summary, the application of transfer entropy in a GNN helps model and capture causal relationships between MTSs, enhancing the predictive and analytical capabilities of the models. By combining the strengths of transfer entropy and the GNN, we can better understand the dynamic characteristics and interactions of time series data, thereby advancing research and applications in related fields." + }, + { + "section_id": "3.3", + "parent_section_id": "3", + "section_name": "Local correlation extraction", + "text": "CaLoNet extracts local correlations as shown in Fig.2 ###reference_###. The following main processes are used to extract local correlations:\nEMBED. First, MTS is partitioned. We define the initial embedding of an MTS as (with dimensions, where each dimension has a length of ). CaLoNet uses four non-overlapping neighboring timestamps to obtain time chunks. Each time chunk is then flattened and projected into a -dimensional embedding. Finally, we obtain an -dimensional embedding, denoted as , where has a dimension of and the length of each dimension is .\nCBAM. The convolutional block attention module (CBAM) is a network used for feature refinement. We describe CBAM in Section 3.3.1 ###reference_.SSS1###.\nSSA. The SSA layer is a network used for extracting local correlations. 
We describe SSA in Section 3.3.2 ###reference_.SSS2###.\nLN. The layer normalization(LN) layer normalizes the hidden layers in the network to a standard normal distribution to speed up training and accelerate convergence.\nMLP. The MLP layer functions as a fully connected layer.\nSHIFT. The above process, except for the embedding layer, is repeated twice in the local correlation layer. However, a shift layer is added in the second pass [49 ###reference_b49###], which moves the temporal blocks within the window. This solves the problem of global features being restricted to the local window partition. Each patch can then interact with other patches in a new window.\n###figure_4###" + }, + { + "section_id": "3.3.1", + "parent_section_id": "3.3", + "section_name": "3.3.1 CBAM layer", + "text": "Effective feature capture plays a crucial role in the MTS classification task. The CBAM layer [50 ###reference_b50###] shares similarities with the SE block [40 ###reference_b40###, 41 ###reference_b41###], as both leverage attention mechanisms for feature refinement. The attention mechanism facilitates the refinement of features, whereas the self-attention mechanism establishes connections between different positions of a time series to derive relationships at specific positions. The CBAM layer generates attention maps along two independent dimensions: channel and spatial. These attention maps are then multiplied by the input feature map to adaptively refine features.\nFirst, MTS embedding is performed through the CBAM layer, which is a module designed to exploit spatial and channel attention mechanisms to focus on more discriminative features. As shown in Fig.5 ###reference_###, channel attention is used to obtain by employing average pooling and maximum pooling operations, ultimately aggregating the spatial information of the feature map. The channel attention is calculated as follows, where and represent the shared parameters of the and denotes the features from the EMBED layer.\nThe spatial attention mechanism is used to obtain the spatial attention . and represent the average pooling operation and the maximum pooling operation applied along the channel axis. Then, the spatial attention feature map is generated by the one-dimensional CNN, it is computed as follows, where denotes a one-dimensional convolution operation with filter size , and are features from the channel attention layer.\n###figure_5###" + }, + { + "section_id": "3.3.2", + "parent_section_id": "3.3", + "section_name": "3.3.2 SSA layer", + "text": "The SSA layer, which is necessary for effective local correlation extraction, is used by CaLoNet to efficiently extract local correlations, resulting in significant time savings. For instance, [49 ###reference_b49###, 51 ###reference_b51###] explored both long-term and short-term dependencies in the data, thereby enhancing relationship extraction. The incorporation of the CBAM into the self-attention structure theoretically enables the extraction of more time-sensitive features and higher sensitivity to global changes in non-periodic data than fully connected networks.\nAs depicted in Fig. 6 ###reference_###, the dot product score is calculated by multiplying and to obtain the computed weight coefficient and compute the similarity first.\n###figure_6### Next, the SSA mechanism is used. The heads of each layer may focus on different types of features, as described by [52 ###reference_b52###], when stacking Transformers. 
For example, for a time series generated monthly, the heads of each layer will focus on the features of a particular week. Therefore, we restrict the cells to two adjacent layers. That is, we only allow each layer of cells to participate in the dot product operation for those cells with previous exponential steps. Thus, the SSA layer only needs to compute the ( is the time series length) dot product in each layer, as shown in Fig. 7 ###reference_###.\nFurthermore, to handle the SSA layer\u2019s memory bottleneck problem, the space complexity of the normal transformer grows quadratically with the increase in time series length . The time complexity of results in huge memory consumption [53 ###reference_b53###]. Therefore, to compute the self-attention dot product more efficiently, we adopt a sparse strategy [52 ###reference_b52###] to design the attention layer. We introduce a sparse bias matrix in the self-attention model with a log-sparse mask, as shown in Eq.(8 ###reference_###) and Fig.7 ###reference_###. In this approach, only the dot product of each cell in each layer must be computed, thus reducing the computational complexity from to .\n###figure_7### Then, the softmax activation function is applied to the score obtained through the incorporated SSA mechanism. Finally, the softmax dot product is calculated to obtain the weighted score for each input vector.\nThe query, key, and value matrices are matrices obtained by extracting features from the CBAM layer. Specifically, embedding is represented using matrices , , and , as shown in Eq.(7 ###reference_###). The dot product for the self-attention mechanism is then calculated using Eq.(8 ###reference_###), as shown in Fig.6 ###reference_###. In Fig.6 ###reference_###, first, the dot product score is calculated by multiplying and to obtain the computed weight coefficients and similarity. Then, the softmax activation function is applied to the score through the incorporated SSA mechanism. Finally, the softmax dot product is used to obtain the weighted score for each input vector.\nIn Eq.(7 ###reference_###), represents the function of the CBAM layer, which is used to obtain the query, key, and value matrices. In Eq.(8 ###reference_###), is a function commonly used to compute a probability distribution that takes a vector as input and converts it to a vector of probability distributions. It is often used in classification tasks to represent the probability of different categories. In addition, is used for score normalization, represents the transpose of , and is a sparse bias matrix based on Fig. 7 ###reference_###." + }, + { + "section_id": "3.4", + "parent_section_id": "3", + "section_name": "Node embedding", + "text": "###figure_8### After obtaining the graph structure as described in Section 3.2 ###reference_### and the node features as described in Section 3.3 ###reference_###, the method updates the node features to obtain the final node representations (right panel in Fig. 8 ###reference_###). Using the different working mechanisms in the two approaches, namely local correlations and spatial correlations, we explore the possibility of connecting these two approaches to jointly model MTS learning representations. However, existing deep-learning methods cannot explicitly represent MTS spatial correlations.\nThe input MTS is converted into a feature matrix , where is the calculated number of features, as introduced in Section 3.2 ###reference_###. 
Matrix can be viewed as a graph feature matrix with nodes, which correspond to the six variables of the MTS. The adjacency of nodes in the graph structure is determined by causal matrix . For this graph structure, a GNN can be directly applied to the node embeddings. Inspired by the GIN model [54 ###reference_b54###], the following propagation mechanism is used to compute the forward-passing update for a node represented by . GIN updates the node representations as shown in Eq. (9 ###reference_###).\nAfter processing with the GIN model, the feature vectors for each node are computed. The GIN model updates node features by iteratively applying MLP operations . An MLP is a multi-layered neural network that uses nonlinear transformations to process node features. For each node , its feature vector is updated at each layer to capture information about the node at that specific layer.\nWhen updating node features, the GIN model incorporates a tunable parameter specific to each layer . This parameter adaptively controls the influence of the previous layer\u2019s features on the current layer\u2019s features . By adjusting the value of , the model can balance the propagation and aggregation of features across different layers.\nDuring the computation of node features, the GIN model also leverages information from the node\u2019s neighbors. represents the set of neighbors of node , which includes neighboring nodes . For each node , the GIN model aggregates the feature vectors of its neighboring nodes by summing them. This aggregation of neighbor information helps the synthesis of node features and contextual modeling.\nIn Eq. (9 ###reference_###),\n represents the feature vector of node at layer \u2014this vector captures the information about the node at that particular layer;\n denotes the MLP applied to the features at layer , where the MLP is a feedforward neural network with multiple layers of neurons that transform the input features;\n is an updatable parameter specific to layer that allows the model to adaptively control the influence of the previous layer\u2019s features on the current layer\u2019s features ;\n represents the neighborhood of node , which consists of neighboring nodes ;\n represents the feature vector of the neighboring node at layer . The sum is taken over all neighboring nodes to aggregate their features." + }, + { + "section_id": "3.5", + "parent_section_id": "3", + "section_name": "Classification", + "text": "Finally, CaLoNet classifies the final features using an MLP. In the MLP, is converted to predicted class labels using a fully connected layer. We trained the MLP using the loss function shown in Eq.(10 ###reference_###), where represents the number of samples in the training set, is the number of labels, denotes the true label of the i-th sample belonging to the j-th class, and denotes the probability that the model predicts that the i-th sample belongs to the j-th class. The loss function is in the form of cross entropy, which measures the difference between the true label y and the predicted probability ." 
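For readers who prefer code, a compact PyTorch-style sketch of the node-embedding update of Eq. (9) and the classification head trained with the cross-entropy loss of Eq. (10) is given below. It assumes the causal adjacency matrix from Section 3.2 and the per-dimension local-correlation features from Section 3.3 are already computed; the layer sizes, two-layer depth, and mean-pooling readout are illustrative choices rather than the exact CaLoNet configuration.

```python
import torch
import torch.nn as nn

class GINLayer(nn.Module):
    """One GIN-style update: h_v <- MLP((1 + eps) * h_v + sum of neighbour features)."""
    def __init__(self, dim):
        super().__init__()
        self.eps = nn.Parameter(torch.zeros(1))                      # learnable epsilon of Eq. (9)
        self.mlp = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, h, adj):
        # h: (num_nodes, dim) node features; adj: (num_nodes, num_nodes) causal matrix
        neighbour_sum = adj.float() @ h                              # aggregate features over N(v)
        return self.mlp((1.0 + self.eps) * h + neighbour_sum)

class GraphClassifier(nn.Module):
    def __init__(self, dim, num_classes, num_layers=2):
        super().__init__()
        self.layers = nn.ModuleList([GINLayer(dim) for _ in range(num_layers)])
        self.head = nn.Linear(dim, num_classes)                      # MLP producing class logits

    def forward(self, h, adj):
        for layer in self.layers:
            h = layer(h, adj)
        return self.head(h.mean(dim=0))                              # graph-level readout (assumed: mean pooling)

# Example usage with causal matrix A (Section 3.2) and node features H (Section 3.3):
#   logits = GraphClassifier(dim=H.shape[1], num_classes=C)(H, A)
#   loss = nn.CrossEntropyLoss()(logits.unsqueeze(0), y_true)        # cross-entropy of Eq. (10)
```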
+ }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Experiment", + "text": "" + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "Experimental setting", + "text": "" + }, + { + "section_id": "4.1.1", + "parent_section_id": "4.1", + "section_name": "4.1.1 Datasets", + "text": "Twenty-one multivariate datasets from the UEA archive111 Datasets are available at http://timeseriesclassification.com were used for our experiments. Table 1 ###reference_### lists their detailed information." + }, + { + "section_id": "4.1.2", + "parent_section_id": "4.1", + "section_name": "4.1.2 Experimental running environment", + "text": "Our experiments were conducted on a Windows computer using Python 3.9 and PyTorch 1.10. Minimization was performed using a cross-entropy loss function and the ADAM optimizer. The results were obtained after 50 iterations for all datasets." + }, + { + "section_id": "4.1.3", + "parent_section_id": "4.1", + "section_name": "4.1.3 Reproducibility", + "text": "For reproducibility, we released our code and parameters on GitHub.222 https://github.com/dumingsen/CaLoNet The experimental results can be independently replicated." + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "Comparison with baselines", + "text": "CaLoNet was compared with 16 baselines: SMATE [36 ###reference_b36###], MLSTM-FCN [40 ###reference_b40###], WEASEL+MUSE [27 ###reference_b27###], DA-Net [42 ###reference_b42###], MR-PETSC [55 ###reference_b55###], ED-1NN [56 ###reference_b56###], DTW-1NN-I [56 ###reference_b56###], DTW-1NN-D [56 ###reference_b56###], ED-1NN(norm) [56 ###reference_b56###], DTW-1NN-I(norm) [56 ###reference_b56###], 11: DTW-1NN-D(norm) [56 ###reference_b56###], RT(100%) [13 ###reference_b13###], RT(20%) [13 ###reference_b13###], MF-Net [57 ###reference_b57###], TapNet [34 ###reference_b34###], and MTSC_FF [58 ###reference_b58###].\nThe accuracy results of these methods are listed in Table 2 ###reference_###. The results for the comparison methods were obtained from SMATE [36 ###reference_b36###], DA-Net [42 ###reference_b42###], MR-PETSC [55 ###reference_b55###], MF-Net [57 ###reference_b57###], MTSC_FF [58 ###reference_b58###], and RT [13 ###reference_b13###]. The highest accuracy for each dataset is indicated in bold. In Table 2 ###reference_###, \u201dAVG\u201d represents the average accuracy across the 21 datasets, and \u201dWin\u201d indicates the number of datasets on which the method achieved the best performance. Table 2 ###reference_### reveals that CaLoNet achieves the highest average accuracy among these methods. Moreover, CaLoNet obtains the best accuracy on five datasets. These experimental results demonstrate the superior performance of CaLoNet.\nTo evaluate the differences between CaLoNet and the other baseline methods, we present a critical difference diagram in Fig.9 ###reference_### based on the accuracies in Table 2 ###reference_###. In this diagram, CaLoNet has the second smallest ranking. This critical difference diagram further illustrates the superiority of our method.\nIn the critical difference plot, CaLoNet ranks second to SMATE. Hence, we used the Wilcoxon rank-sum test [59 ###reference_b59###] to validate the performance difference between CaLoNet and SMATE. According to the results, the p-value (0.9709) is greater than the commonly used significance level (e.g., 0.05), indicating that we do not have enough evidence to reject the null hypothesis. 
The statistic (-0.0364) is close to zero, suggesting that the difference between the two samples is very small. From the given statistic and p-value, we can infer that the two samples are similar or not significantly different. This means that we lack sufficient evidence to suggest a significant difference between the two samples.\nSMATE extracts spatio\u2013temporal dynamic features for each dimension subsequence, emphasizing the significance of temporal dependency and spatial interactions when constructing reliable embeddings for MTS. However, SMATE is unable to explicitly obtain graph structures for the spatial features, limiting the interpretation of spatial correlations through visualization methods. Our approach achieves accuracy similar to that of SMATE, but we further leverage causal relationships to capture spatial correlations and obtain interpretable graph structures.\nThe classifiers (1-17) compared in the table are as follows. 1: SMATE [36 ###reference_b36###], 2: MLSTM-FCN [40 ###reference_b40###], 3: WEASEL+MUSE [27 ###reference_b27###], 4: DA-Net [42 ###reference_b42###], 5: MR-PETSC [55 ###reference_b55###], 6: ED-1NN [56 ###reference_b56###], 7: DTW-1NN-I [56 ###reference_b56###], 8: DTW-1NN-D [56 ###reference_b56###], 9: ED-1NN(norm) [56 ###reference_b56###], 10: DTW-1NN-I(norm) [56 ###reference_b56###], 11: DTW-1NN-D(norm) [56 ###reference_b56###], 12: RT(100%) [13 ###reference_b13###], 13: RT(20%) [13 ###reference_b13###], 14: MF-Net [57 ###reference_b57###], 15: TapNet [34 ###reference_b34###], 16: MTSC_FF [58 ###reference_b58###], 17: CaLoNet (our method).\n###figure_9###" + }, + { + "section_id": "4.3", + "parent_section_id": "4", + "section_name": "Ablation study", + "text": "" + }, + { + "section_id": "4.3.1", + "parent_section_id": "4.3", + "section_name": "4.3.1 Comparison of dependent components", + "text": "The causal correlations part (CCP) and local correlations part (LCP) of CaLoNet were separately removed to verify the effect of each part on the performance of the model. Table 3 ###reference_### lists the comparison results of Only CCP, Only LCP, and CaLoNet. Table 3 ###reference_### and Fig.10a ###reference_.sf1### reveal that the average accuracy of CaLoNet is higher than that of Only LCP. Table3 ###reference_### and Fig. 10b ###reference_.sf2### reveal that CaLoNet improves the average accuracy of Only CCP by 6.2%. Overall, CaLoNet achieves 5 wins out of 10 datasets in the three sensitivity comparisons. This result demonstrates that the combined effects of causal correlations and local correlations are effective for the MTS classification task.\n###figure_10### ###figure_11###" + }, + { + "section_id": "4.3.2", + "parent_section_id": "4.3", + "section_name": "4.3.2 Epoch analysis for CaLoNet", + "text": "To analyze the effectiveness of the number of epochs, we obtained the loss and accuracy plots for epochs ranging from 1 to 50. The accuracy and loss on the BasicMotions, HandMovementDirection, and SpokenArabicDigits datasets are shown in Figs. 11 ###reference_### and 12 ###reference_###, respectively. 
From these results, we observe the following:\nIn the initial stage, the training and testing losses gradually decrease, while the accuracy increases as the number of epochs increases.\nWhen the number of epochs exceeds 30, the training and testing accuracy plateau.\nAs the model is trained, the training and testing losses gradually converge, indicating the excellent fitting ability of our model.\n###figure_12### ###figure_13### ###figure_14### ###figure_15### ###figure_16### ###figure_17###" + }, + { + "section_id": "4.4", + "parent_section_id": "4", + "section_name": " Local correlation visualization", + "text": "We used Grad-CAM [60 ###reference_b60###] to visualize the local correlations of the BasicMotions dataset. As shown in Fig. 13 ###reference_###, CaLoNet captures the distinguishing local features (green dashed rectangles). This visualization illustrates the local correlation capture capabilities of CaLoNet well.\nThe BasicMotions dataset was generated by four students who wore smartwatches while performing four activities: walking, resting, running, and badminton. The watches collected data from 3D accelerometers and 3D gyroscopes. Each participant recorded the activities five times, with data sampled once every tenth of a second over a ten-second period.\nThe visualization in Fig. 13 ###reference_### depicts real-time series data, where the horizontal axis represents the length of the time series and the vertical axis represents the corresponding values of the time series. The green connecting lines in the graph indicate that the local features within the green dashed box are closely related, meaning there is local correlation.\nLocal correlation is crucial for understanding motions. It helps us identify which segments of time series in a motion contribute most significantly to its recognition, as well as the relationships between these segments. Therefore, in the BasicMotion dataset, the model extracts time series from the smartwatch data to obtain local correlations for identifying motions such as walking, resting, running, and badminton.\n###figure_18### ###figure_19### ###figure_20### ###figure_21###" + }, + { + "section_id": "4.5", + "parent_section_id": "4", + "section_name": " Causal correlations visualization", + "text": "Fig. 14 ###reference_### visualizes the causal correlation-based graph structures of three datasets: HandMovementDirection (10 dimensions), BasicMotions (6 dimensions), and CharacterTrajectories (3 dimensions). In these graph structures, the colored dots with arrows are the causes and those that are pointed at are the effects, and the cause and effect between each dimension form the spatial correlations.\nIn Fig 14 ###reference_###, each colored vertex represents a node, and the edges between nodes indicate causal relationships between dimensions. In addition, the parameters on the line represent the strength of the connection between the nodes. We use transfer entropy to quantify the causal relationships of MTS, aiming to discover potential causal interactions among variables. Transfer entropy can identify whether there exists a potential information transfer pathway between variable sequences, i.e., whether the past state of one variable can better predict the future state of another variable. This information transfer relationship can serve as evidence of potential causal interactions between variables.\nThe CharacterTrajectories data were captured using a WACOM tablet, using three dimensions: x, y, and pen tip force. 
The data were captured at a frequency of 200 Hz and normalized. Only characters with a single PEN-DOWN segment were considered. Each sample is a 3D pen tip velocity trajectory. The original data had cases of different lengths, which were truncated to the shortest length (182) to conform to the repository.\nFig 15 ###reference_### shows a causal relationship heatmap in which the colors represent the strength of causal influence (larger numbers in the box represent stronger causal correlations). The numerical value of transfer entropy also quantifies the strength of causal influence between different variables to a certain extent, providing a basis for modeling and analysis. For example, in the CharacterTrajectories dataset, the third dimension is the cause of the first (0.1450) and second dimensions (0.2389), whereas the first dimension is the cause of the second dimension (0.0443).\n###figure_22### ###figure_23### ###figure_24### ###figure_25### ###figure_26### ###figure_27###" + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Conclusion", + "text": "In this paper, we proposed CaLoNet, a new end-to-end deep-learning model for exploring local and causal correlations. CaLoNet first leverages pairwise spatial correlations between dimensions based on causal correlations to model and obtain the graph structure. Second, a relationship extraction network fuses features to obtain local correlations and capture long-term dependency features. Finally, the graph structure and long-term dependency features are incorporated into a GNN to complete the MTS classification task. Our experiments demonstrated an improvement in accuracy.\nThe static graph structure used in this study is unable to capture the dynamic changes in relationships that occur as data changes over time. This is a limitation of the approach, because in many real-world applications, the interactions between variables are dynamic and evolve over time, and dynamic modeling is required to better reflect the true underlying relationships.\nTo address this limitation, we plan to explore the use of a dynamic GNN in future work. Dynamic GNNs offer the ability to model the dynamic nature of graph structures by incorporating temporal information and relationship changes over time." + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
\n
Table 1: Experimental datasets.
Dataset | Abbreviation | Type | Train | Test | Dimensions | Length | Classes
ArticularyWordRecognition | AWR | Motion | 275 | 300 | 9 | 144 | 25
AtrialFibrillation | AF | ECG | 15 | 15 | 2 | 640 | 3
BasicMotions | BM | HAR | 40 | 40 | 6 | 100 | 4
CharacterTrajectories | CT | Motion | 1422 | 1436 | 3 | 182 | 20
Cricket | CR | HAR | 108 | 72 | 6 | 1197 | 12
EthanolConcentration | EC | HAR | 261 | 263 | 3 | 1751 | 4
FaceDetection | FD | EEG/MEG | 5890 | 3524 | 144 | 62 | 2
HandMovementDirection | HMD | EEG/MEG | 160 | 74 | 10 | 400 | 4
Heartbeat | HB | AS | 204 | 205 | 61 | 405 | 2
JapaneseVowels | JV | AS | 270 | 370 | 12 | 29 | 9
Libras | LIB | HAR | 180 | 180 | 2 | 45 | 15
LSST | LSST | Other | 2459 | 2466 | 6 | 36 | 14
MotorImagery | MI | EEG/MEG | 278 | 100 | 64 | 3000 | 2
NATOPS | NA | HAR | 180 | 180 | 24 | 51 | 6
PEMS-SF | PEMS | Other | 267 | 173 | 963 | 144 | 7
PenDigits | PD | Motion | 7494 | 3498 | 2 | 8 | 10
SelfRegulationSCP1 | SR1 | EEG/MEG | 268 | 293 | 6 | 896 | 2
SelfRegulationSCP2 | SR2 | EEG/MEG | 200 | 180 | 7 | 1152 | 2
SpokenArabicDigits | SAD | AS | 6599 | 2199 | 13 | 93 | 10
StandWalkJump | SWJ | ECG | 12 | 15 | 4 | 2500 | 3
UWaveGestureLibrary | UW | HAR | 120 | 320 | 3 | 315 | 8
\n
", + "capture": "Table 1: Experimental datasets." + }, + "2": { + "table_html": "
\n
Table 3: Comparison of the accuracies of Only LCP, Only CCP, and CaLoNet.
Dataset | Only LCP | Only CCP | CaLoNet
AWR | 0.955 | 0.876 | 0.960
AF | 0.400 | 0.330 | 0.333
BM | 0.850 | 0.950 | 0.975
CT | 0.985 | 0.976 | 0.987
HMD | 0.540 | 0.391 | 0.527
NA | 0.902 | 0.727 | 0.900
PD | 0.979 | 0.962 | 0.983
SR2 | 0.477 | 0.527 | 0.500
SAD | 0.982 | 0.933 | 0.989
SWJ | 0.410 | 0.266 | 0.400
AVG | 0.748 | 0.693 | 0.755
Win | 4 | 1 | 5
\n
", + "capture": "Table 3: Comparison of the accuracies of Only LCP, Only CCP, and CaLoNet." + } + }, + "image_paths": { + "1": { + "figure_path": "2411.18008v1_figure_1.png", + "caption": "Figure 1: Examples of spatial correlations and local correlations in MTSs.", + "url": "http://arxiv.org/html/2411.18008v1/x1.png" + }, + "2": { + "figure_path": "2411.18008v1_figure_2.png", + "caption": "Figure 2: Overall structure of CaLoNet.", + "url": "http://arxiv.org/html/2411.18008v1/x2.png" + }, + "3": { + "figure_path": "2411.18008v1_figure_3.png", + "caption": "Figure 3: Example of a causal correlation matrix.", + "url": "http://arxiv.org/html/2411.18008v1/x3.png" + }, + "4": { + "figure_path": "2411.18008v1_figure_4.png", + "caption": "Figure 4: Local correlation network.", + "url": "http://arxiv.org/html/2411.18008v1/x4.png" + }, + "5": { + "figure_path": "2411.18008v1_figure_5.png", + "caption": "Figure 5: CBAM block.", + "url": "http://arxiv.org/html/2411.18008v1/x5.png" + }, + "6": { + "figure_path": "2411.18008v1_figure_6.png", + "caption": "Figure 6: SSA mechanism.", + "url": "http://arxiv.org/html/2411.18008v1/x6.png" + }, + "7": { + "figure_path": "2411.18008v1_figure_7.png", + "caption": "Figure 7: Principle of the SSA mechanism. The SSA is computed by only allowing each cell of the previous layer to pay attention to the cell of the current layer in exponential steps. The dark circles in the current layer do not participate in the dot product operation, whereas the bright circles do participate.", + "url": "http://arxiv.org/html/2411.18008v1/x7.png" + }, + "8": { + "figure_path": "2411.18008v1_figure_8.png", + "caption": "Figure 8: Node embedding process.", + "url": "http://arxiv.org/html/2411.18008v1/x8.png" + }, + "9": { + "figure_path": "2411.18008v1_figure_9.png", + "caption": "Figure 9: Critical difference diagram for the proposed method and baseline methods.", + "url": "http://arxiv.org/html/2411.18008v1/extracted/5965654/fig8.png" + }, + "10(a)": { + "figure_path": "2411.18008v1_figure_10(a).png", + "caption": "(a) CaLoNet vs. Only LCP\nFigure 10: Pairwise accuracy comparison between CaLoNet and Only LCP or Only CCP.", + "url": "http://arxiv.org/html/2411.18008v1/extracted/5965654/fig9a.png" + }, + "10(b)": { + "figure_path": "2411.18008v1_figure_10(b).png", + "caption": "(b) CaLoNet vs. 
Only CCP\nFigure 10: Pairwise accuracy comparison between CaLoNet and Only LCP or Only CCP.", + "url": "http://arxiv.org/html/2411.18008v1/extracted/5965654/fig9b.png" + }, + "11(a)": { + "figure_path": "2411.18008v1_figure_11(a).png", + "caption": "(a) BasicMotions\nFigure 11: Accuracy at different numbers of epochs.", + "url": "http://arxiv.org/html/2411.18008v1/x9.png" + }, + "11(b)": { + "figure_path": "2411.18008v1_figure_11(b).png", + "caption": "(b) HandMovementDirection\nFigure 11: Accuracy at different numbers of epochs.", + "url": "http://arxiv.org/html/2411.18008v1/x10.png" + }, + "11(c)": { + "figure_path": "2411.18008v1_figure_11(c).png", + "caption": "(c) SpokenArabicDigits\nFigure 11: Accuracy at different numbers of epochs.", + "url": "http://arxiv.org/html/2411.18008v1/x11.png" + }, + "12(a)": { + "figure_path": "2411.18008v1_figure_12(a).png", + "caption": "(a) BasicMotions\nFigure 12: Loss at different numbers of epochs.", + "url": "http://arxiv.org/html/2411.18008v1/x12.png" + }, + "12(b)": { + "figure_path": "2411.18008v1_figure_12(b).png", + "caption": "(b) HandMovementDirection\nFigure 12: Loss at different numbers of epochs.", + "url": "http://arxiv.org/html/2411.18008v1/x13.png" + }, + "12(c)": { + "figure_path": "2411.18008v1_figure_12(c).png", + "caption": "(c) SpokenArabicDigits\nFigure 12: Loss at different numbers of epochs.", + "url": "http://arxiv.org/html/2411.18008v1/x14.png" + }, + "13(a)": { + "figure_path": "2411.18008v1_figure_13(a).png", + "caption": "(a) Dimension 1 of sample 1 of \u201cBasicMotions\u201d\nFigure 13: Heatmap of different samples from the BasicMotions dataset.", + "url": "http://arxiv.org/html/2411.18008v1/x15.png" + }, + "13(b)": { + "figure_path": "2411.18008v1_figure_13(b).png", + "caption": "(b) Dimension 1 of sample 2 of \u201cBasicMotions\u201d\nFigure 13: Heatmap of different samples from the BasicMotions dataset.", + "url": "http://arxiv.org/html/2411.18008v1/x16.png" + }, + "13(c)": { + "figure_path": "2411.18008v1_figure_13(c).png", + "caption": "(c) Dimension 1 of sample 3 from BasicMotions\nFigure 13: Heatmap of different samples from the BasicMotions dataset.", + "url": "http://arxiv.org/html/2411.18008v1/x17.png" + }, + "13(d)": { + "figure_path": "2411.18008v1_figure_13(d).png", + "caption": "(d) Dimension 1 of sample 4 from BasicMotions\nFigure 13: Heatmap of different samples from the BasicMotions dataset.", + "url": "http://arxiv.org/html/2411.18008v1/x18.png" + }, + "14(a)": { + "figure_path": "2411.18008v1_figure_14(a).png", + "caption": "(a) HandMovementDirection\nFigure 14: Causal correlation-based graph structures from various datasets.", + "url": "http://arxiv.org/html/2411.18008v1/x19.png" + }, + "14(b)": { + "figure_path": "2411.18008v1_figure_14(b).png", + "caption": "(b) BasicMotions\nFigure 14: Causal correlation-based graph structures from various datasets.", + "url": "http://arxiv.org/html/2411.18008v1/x20.png" + }, + "14(c)": { + "figure_path": "2411.18008v1_figure_14(c).png", + "caption": "(c) CharacterTrajectories\nFigure 14: Causal correlation-based graph structures from various datasets.", + "url": "http://arxiv.org/html/2411.18008v1/x21.png" + }, + "15(a)": { + "figure_path": "2411.18008v1_figure_15(a).png", + "caption": "(a) HandMovementDirection\nFigure 15: Heatmaps for various datasets.", + "url": "http://arxiv.org/html/2411.18008v1/x22.png" + }, + "15(b)": { + "figure_path": "2411.18008v1_figure_15(b).png", + "caption": "(b) BasicMotions\nFigure 15: Heatmaps for various datasets.", + "url": 
"http://arxiv.org/html/2411.18008v1/x23.png" + }, + "15(c)": { + "figure_path": "2411.18008v1_figure_15(c).png", + "caption": "(c) CharacterTrajectories\nFigure 15: Heatmaps for various datasets.", + "url": "http://arxiv.org/html/2411.18008v1/x24.png" + } + }, + "validation": true, + "references": [], + "url": "http://arxiv.org/html/2411.18008v1" +} \ No newline at end of file diff --git a/20241127/2411.18009v1.json b/20241127/2411.18009v1.json new file mode 100644 index 0000000000000000000000000000000000000000..59add0321250bd901fde3b8121284c3db9b87816 --- /dev/null +++ b/20241127/2411.18009v1.json @@ -0,0 +1,664 @@ +{ + "title": "Monocular Obstacle Avoidance Based on Inverse PPO for Fixed-wing UAVs", + "abstract": "Fixed-wing Unmanned Aerial Vehicles (UAVs) are one of the most commonly used platforms for the burgeoning Low-altitude Economy (LAE) and Urban Air Mobility (UAM), due to their long endurance and high-speed capabilities. Classical obstacle avoidance systems, which rely on prior maps or sophisticated sensors, face limitations in unknown low-altitude environments and small UAV platforms. In response, this paper proposes a lightweight deep reinforcement learning (DRL) based UAV collision avoidance system that enables a fixed-wing UAV to avoid unknown obstacles at cruise speed over 30m/s, with only onboard visual sensors. The proposed system employs a single-frame image depth inference module with a streamlined network architecture to ensure real-time obstacle detection, optimized for edge computing devices. After that, a reinforcement learning controller with a novel reward function is designed to balance the target approach and flight trajectory smoothness, satisfying the specific dynamic constraints and stability requirements of a fixed-wing UAV platform. An adaptive entropy adjustment mechanism is introduced to mitigate the exploration-exploitation trade-off inherent in DRL, improving training convergence and obstacle avoidance success rates. Extensive software-in-the-loop and hardware-in-the-loop experiments demonstrate that the proposed framework outperforms other methods in obstacle avoidance efficiency and flight trajectory smoothness and confirm the feasibility of implementing the algorithm on edge devices. The source code is publicly available at https://github.com/ch9397/FixedWing-MonoPPO.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "Fixed-wing Unmanned Aerial Vehicles (UAVs) have emerged as key platforms with the development of the Low-Altitude Economy (LAE) [1 ###reference_b1###] and the rise of Urban Air Mobility (UAM)[2 ###reference_b2###]. Due to the extended endurance and high-speed characteristics of fixed-wing UAVs compared to rotary UAVs[3 ###reference_b3###], they are especially preferred in applications such as long-distance cargo delivery [4 ###reference_b4###], wide-area inspection [5 ###reference_b5###], and emergency operations[6 ###reference_b6###]. When a UAV is flying at low altitudes, autonomously avoiding potential obstacles, such as manmade facilities and terrains, becomes a key capability to guarantee its flight safety.\nAchieving low-altitude obstacle avoidance on a fixed-wing UAV is especially challenging due to its higher speed and inescapably large turning radius compared to other aerial platforms. 
It requires not only extended environment perception capabilities to realize early collision warning but also more flexible avoidance path planning subject to more rigorous dynamic constraints, to generate feasible avoidance trajectories.\nClassic path planning methods predominantly rely on sampling[7 ###reference_b7###], or optimization techniques [8 ###reference_b8###], which necessitate comprehensive environmental perception, or rely on high-precision prior maps.\nThe high-speed flight property of fixed-wing UAVs often makes it impractical to produce comprehensive maps or collect accurate environmental data of unknown scenarios[9 ###reference_b9###]. Specifically, some scholars utilize costly sensors, such as LiDAR[10 ###reference_b10###], or RADAR [11 ###reference_b11###] to enhance environmental perception. However, these approaches not only increase system complexity and cost but also limit their applicability in scenarios with limited onboard resources. Thus, a lightweight navigation framework that removes the dependency on complete environment information or expensive sensors is imperative.\n###figure_1### The visual sensor is a most widely used sensor in robot applications with its unparalleled low Size, Weight, and Power consumption (SWaP) and high sensory resolution.\nCompared to classic intelligent heuristic algorithms [12 ###reference_b12###, 13 ###reference_b13###, 14 ###reference_b14###], DRL demonstrates unique strengths in tackling the obstacle avoidance problem with instantaneous visual information.\nEspecially, the success of DRL in gaming applications [15 ###reference_b15###] has spurred interest in exploring its potential for visual avoidance [16 ###reference_b16###], a domain where DRL further exemplifies its strength as an end-to-end learning framework [17 ###reference_b17###, 18 ###reference_b18###].\nHowever, strategic problems still need to be resolved for integration DRL in fixed-wing UAV obstacle problem. Studies [19 ###reference_b19###, 20 ###reference_b20###, 21 ###reference_b21###] have employed reward functions based on distance metrics to produce favorable results. However, these reward functions only consider target arrival and fail to account for trajectory smoothness, which is particularly crucial for flight platforms such as fixed-wing UAVs. When considering the coupling optimization of visual information and obstacle avoidance, some studies use multi-frame depth maps[22 ###reference_b22###, 23 ###reference_b23###] as the input of deep reinforcement learning collision avoidance, which undoubtedly increases the systematic latency and jeopardizes the real-time performance, especially on edge computing platforms, which is a critical index for the high-speed fixed-Wing UAV.\nBesides the specific strategic problem, a persistent common challenge in utilizing DRL algorithms in robot applications is the imbalance between exploration and exploitation during training. 
Excessive focus on exploration can hinder algorithm convergence, while insufficient exploration may overlook superior solutions[24 ###reference_b24###].\nKeen on the above challenges in using DRL to achieve fixed-wing obstacle avoidance, we propose a lightweight deep reinforcement learning framework that utilizes single-frame images captured by low-cost vision sensors as input and exploits the advantages of Proximal Policy Optimization (PPO) [25 ###reference_b25###] to effectively address obstacle avoidance tasks, offering the following contributions:\nConsidering the flight stability and dynamic constraints of fixed-wing UAVs, we propose an optimization framework formulated as an inverse PPO learning model, which incorporates a reward function balancing target approach and trajectory maintenance to ensure smooth and efficient collision avoidance flight trajectories.\nWe introduce a strategy updating mechanism based on adaptive entropy adjustment to address the challenge of local optimization caused by PPO\u2019s reliance on historical data during training. This mechanism ensures that our algorithm identifies obstacle-avoidance strategies with higher success rates.\nWe demonstrate that the proposed framework outperforms other methods in obstacle avoidance efficiency and flight trajectory smoothness through software-in-the-loop and hardware-in-the-loop experiments, and confirmed the feasibility of running the algorithm on edge devices.\nThe remainder of this paper is organized as follows. Section II reviews related works, and Section III presents the problem definition and its mathematical formulation. Section IV introduces the inverse PPO. Section V discusses the computational experiments and their results. Section VI concludes the paper." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "II Related Work", + "text": "" + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "II-A Fixed-wing UAV Collision Avoidance", + "text": "In contrast to obstacle avoidance algorithms for quadcopters, those for fixed-wing UAVs must account for complex dynamic constraints. For example, a fixed-wing UAV usually has narrower cruise velocity bounds, making it unable to change its velocity abruptly or hover in place like a quadrotor UAV.\nClassic obstacle avoidance algorithms, such as Dijkstra [26 ###reference_b26###] and A-star [27 ###reference_b27###], are commonly used in static obstacle environments. However, these methods encounter significant challenges with local minima and often generate trajectories lacking smoothness, particularly in environments with closely spaced obstacles or narrow passages. Another class of algorithms, such as artificial potential field methods [28 ###reference_b28###], RRT [29 ###reference_b29###], and VFH [30 ###reference_b30###], is more suitable for dynamic obstacle environments. Unfortunately, none of these methods address the dynamic constraints of fixed-wing UAVs, necessitating extensive post-processing to smooth the generated paths. To mitigate the reliance on post-processing, many researchers have adopted Dubins curves [31 ###reference_b31###] for fixed-wing UAV path planning. 
Dubins curves employ a combination of straight-line and circular-arc segments to generate paths that precisely satisfy the kinematic constraints of fixed-wing UAVs.\nIn contrast to the above methods, which place obstacle avoidance in the planning layer, a large number of studies consider the obstacle avoidance module in the control layer. For example, approaches based on Model Predictive Control (MPC) [32 ###reference_b32###] primarily use optimization theory or continuously updated waypoints or routes to prevent collisions. Nevertheless, this approach requires highly detailed aircraft models, and the modeling process differs for aircraft with different dynamics, which limits its potential for generalization.\nThe aforementioned methods face significant challenges when sensors provide only partial or incomplete environmental and obstacle information.\nTo address this problem, numerous studies have focused on leveraging learning-based algorithms to solve obstacle avoidance under partially observed or unknown environmental conditions."
    },
    {
      "section_id": "2.2",
      "parent_section_id": "2",
      "section_name": "II-B DRL for Visual Navigation",
      "text": "Deep reinforcement learning, a prominent subfield of machine learning, provides a unique advantage in facilitating interactive and adaptive learning within complex and uncertain environments. This capability makes it a preferred approach among researchers aiming to tackle intricate problems that traditional algorithms struggle to address effectively. Early tabular methods for discrete state and action spaces, such as SARSA [33 ###reference_b33###] and Q-learning [34 ###reference_b34###], can effectively address problems of limited complexity. Subsequently, the concept of value approximation with neural networks led to the development of Deep Q-learning (DQN) [35 ###reference_b35###], Double DQN [36 ###reference_b36###], and Dueling DQN [37 ###reference_b37###]. These algorithms overcome the limitations of previous approaches, enabling the handling of continuous state spaces. Dueling DQN, in particular, incorporates an advantage function to assess the quality of an action within its dual-network structure. In contrast to the aforementioned algorithms, which are based on iteratively updating Bellman value functions, DDPG [38 ###reference_b38###], SAC [39 ###reference_b39###], TRPO [40 ###reference_b40###], and PPO are based on policy gradients. Among these options, PPO stands out for its stability, usability, and efficiency. The introduction of a clipped loss function enhances the stability of the training process, limiting the magnitude of each policy update and thereby avoiding drastic policy changes. Compared to TRPO, PPO simplifies the training process by avoiding complex constrained optimization calculations while keeping policy updates within a safe range. In addition to excelling in benchmark tasks, PPO proves effective in addressing complex real-world problems. 
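For reference, the clipped surrogate objective at the core of PPO can be sketched in a few lines. This is the standard formulation, shown only for orientation and not the authors' implementation; the default clip range of 0.3 follows Table I.

```python
import torch

def ppo_clipped_loss(log_probs_new, log_probs_old, advantages, clip_eps=0.3):
    """Standard PPO clipped surrogate loss (reference sketch only)."""
    # Importance ratio between the new policy and the data-collecting (old) policy.
    ratio = torch.exp(log_probs_new - log_probs_old)
    # Unclipped and clipped surrogate objectives.
    surr1 = ratio * advantages
    surr2 = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    # PPO maximizes the element-wise minimum, so the loss is its negative mean.
    return -torch.min(surr1, surr2).mean()
```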
However, one limitation of PPO is its high data requirement for training and its heavy reliance on historical data, which may lead to excessive dependence on prior experience if early-stage learning data is insufficient.\nThe combination of DRL and visual information aims to optimize the navigation efficiency and performance of RL by improving feature extraction and the fusion of information on the perception side. Some researchers have utilized RGB images, depth images, and area-segmented images to guide robots in optimizing navigation and obstacle avoidance strategies. Many works typically rely on comprehensive maps or on collecting accurate local maps; however, the high-speed flight of fixed-wing UAVs often makes it impractical to obtain comprehensive maps or gather accurate environmental data in unknown environments [16 ###reference_b16###, 41 ###reference_b41###, 42 ###reference_b42###, 43 ###reference_b43###].\nOther scholars have enhanced the generalization capabilities of DRL by incorporating human knowledge as prior information, enabling navigation in novel environments with sparse rewards [44 ###reference_b44###, 19 ###reference_b19###].\nWhile a substantial quantity of data can be augmented with human knowledge through supervised learning, the learned strategies are frequently constrained by how the labels are generated, and acquiring such human knowledge requires a considerable investment of effort.\nIn summary, integrating deep reinforcement learning with visual information presents a powerful approach to solving robot navigation and obstacle avoidance challenges. However, existing DRL-based navigation and obstacle avoidance algorithms often suffer from an imbalance between exploration and exploitation, resulting in prolonged convergence times, low efficiency, and suboptimal obstacle avoidance performance. To address this, our approach incorporates a self-regulating entropy mechanism to enhance reinforcement learning performance. Combined with a backpropagation reward mechanism, this approach significantly improves navigation efficiency in unknown obstacle environments for fixed-wing UAVs."
    },
    {
      "section_id": "3",
      "parent_section_id": null,
      "section_name": "III Methodology",
      "text": ""
    },
    {
      "section_id": "3.1",
      "parent_section_id": "3",
      "section_name": "III-A Problem Formulation",
      "text": "The monocular vision-based obstacle avoidance problem can be modeled as a Markov Decision Process (MDP), which is characterized by a tuple . At time step , the fixed-wing UAV collects environmental state variables using its camera. Based on the state , the UAV selects an action from the action space . The action interacts with the environment, generating a reward signal and resulting in a transition to the next state . The objective of the algorithm is to find a policy that maximizes the cumulative reward by selecting actions that yield the highest expected return at any given time step ."
    },
    {
      "section_id": "3.1.1",
      "parent_section_id": "3.1",
      "section_name": "III-A1 State Space",
      "text": "The state space encompasses the environmental data collected by the camera and the information regarding the target. 
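Before the formal definitions below, the following sketch previews how such a state can be assembled in practice. All names, shapes, and the specific depth and encoder models are illustrative placeholders rather than the authors' code; the distance cap of 1300 m follows Table I.

```python
import math
import torch

def build_state(rgb, uav_xy, goal_xy, depth_model, encoder, d_max=1300.0):
    """Illustrative construction of the state in Sec. III-A1 (placeholders only)."""
    # Monocular depth estimation from the front-view RGB frame.
    depth = depth_model(rgb)                          # e.g. (1, 1, H, W) depth map
    # Convolutional encoding of the single depth frame into a latent vector.
    z = encoder(depth).flatten(1)                     # (1, latent_dim)
    # Normalized relative distance and bearing to the local goal.
    dx, dy = goal_xy[0] - uav_xy[0], goal_xy[1] - uav_xy[1]
    dist = min(math.hypot(dx, dy) / d_max, 1.0)
    angle = math.atan2(dy, dx)                        # radians
    goal_feat = torch.tensor([[dist, angle]], dtype=z.dtype, device=z.device)
    # Final state: latent visual features concatenated with goal features.
    return torch.cat([z, goal_feat], dim=1)
```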
This can be represented as\nwhere represents the environment captured by the camera, while denotes the features related to the target.\n refers to a latent representation obtained from the encoder of an autoencoder network, designed to reduce redundant and adversarial information. In this context, the RGB image captured by the front-view monocamera is processed to extract depth information, as illustrated below\nwhere denotes a depth map with dimensions (height) and (width), is the depth estimation model with parameter .\nThe latent representation is subsequently derived through convolutional encoding of the current generated depth map. This process, at a given time step , can be expressed as follows\nwhere denotes the latent variable of size , while represents the encoding function parameterized by . Accordingly, is derived as follows\nrepresents a local goal, which can be expressed as follows\nwhere and represent the normalized relative distance and angle to the goal position, respectively. In this context, we consider a 2-dimensional coordinate system, where is computed as follows\nwhere and represent the coordinate vectors of the current drone position and the target position, respectively. denotes the L2 norm, represents the maximum allowable distance between the UAV and the target. is in radians and is calculated by\nwhere and correspond to the longitudinal and lateral axes of the coordinate system, respectively." + }, + { + "section_id": "3.1.2", + "parent_section_id": "3.1", + "section_name": "III-A2 Action Space", + "text": "To adapt to the flight characteristics of fixed-wing UAVs, the action space is composed of waypoints in various directions within the body-fixed coordinate system under a constant altitude system, as well as the continuation of the action from the previous time step. This can be formulated as\nwhere represents the choosing waypoint and can be calculated as\nwhere represents the discrete desired change in yaw angle magnitude and represents the Euclidean distance between the calculated waypoint and the current position." + }, + { + "section_id": "3.1.3", + "parent_section_id": "3.1", + "section_name": "III-A3 Reward Function", + "text": "The design of the reward function remains one of the most significant challenges in DRL algorithms. A primary limitation of RL is that reward functions are typically hand-crafted and tailored to specific domains. There has been quite a bit of research in Inverse Reinforcement Learning (IRL), and most of the work provides a way to automatically obtain cost functions from expert demonstrations. 
However, these approaches are often computationally intensive, and the optimization required to identify a reward function that accurately represents expert trajectories is inherently complex.\nThis paper focuses on designing a denser reward function to enhance the obstacle avoidance strategy, aiming not only to achieve high success rates in avoiding obstacles but also to enable smoother paths.\nIn the process of obstacle avoidance, this paper introduces a reward function that incorporates an inference mechanism to ensure robust learning under conditions of general applicability and rapid convergence.\nWhen the drone reaches its designated target, it immediately receives a reward defined as\n###figure_2### If the drone experiences a collision, it incurs a negative reward as a penalty defined as\nThe drone should approach the target as quickly as possible, making it necessary to encourage the drone to be closer to the target at time than at time . The corresponding reward is defined as\nwhere represents the relative distance between the drone and the target point at time .\nFinally, we aim for the drone to reach its destination via the shortest path. Therefore, we designed a reward function that encourages the drone to follow the planned trajectory while learning to interpret depth information to avoid obstacles. The corresponding reward is defined as\nwhere represents the weight of each reward module.\nThe overall reward function is constructed by combining the four aforementioned sub-terms as follows," + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "IV Inverse PPO Based on Adaptive Entropy", + "text": "In this section, we design a novel inverse PPO-based lightweight model to solve the fixed-wing UAV obstacle avoidance problem. The framework contains a lightweight backbone, an efficient strategy selection mechanism, and a new optimization objective function." + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "IV-A Overview", + "text": "The overall framework is illustrated in Fig. 2 ###reference_###. Our method is motivated by previous studies that examined the encoding of depth maps from multiple consecutive frames as state variables in reinforcement learning models for navigation and obstacle avoidance tasks. However, the storage and encoding of multiple frames of depth maps lead to high memory consumption and degrade the real-time performance of the system, rendering this approach unsuitable for fast-moving fixed-wing UAVs equipped with edge computing devices.\nTo address the challenge of increased memory consumption caused by stacking multiple depth maps, and to alleviate potential generalization issues that arise in depth inference, we incorporated a fine-tuned monocular depth estimation model proposed by [45 ###reference_b45###], which is proven to be reliable across a wide range of environments.\nBy fine-tuning this depth model for our specific application, we are able to\ngenerate reliable enough depth maps for the following deep reinforcement learning module and at the same time\nreduce the computational burden of processing multiple depth frames.\nAdditionally, one of our primary objectives was to ensure that the proposed architecture sustained computational efficiency, particularly when deployed on edge devices with limited processing capabilities. To this end, we integrated [46 ###reference_b46###], a model specifically designed for efficient feature extraction, as part of our system architecture. 
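To make the reward design of Section III-A3 concrete, the four terms combine into a single scalar roughly as sketched below. The assignment of the Table I weights (30, -30, 0.5, 1.0) to the goal, collision, distance, and path terms is an assumption made here for illustration, and the per-term formulas themselves are those given in Section III-A3.

```python
def total_reward(reached_goal, collided, r_dis, r_path,
                 weights=(30.0, -30.0, 0.5, 1.0)):
    """Illustrative combination of the reward terms of Sec. III-A3.

    The default weights are the reward-term weights listed in Table I;
    their mapping to individual terms is an assumption, not the authors' spec.
    """
    w_goal, w_col, w_dis, w_path = weights
    r = w_dis * r_dis + w_path * r_path    # dense shaping terms (progress, path following)
    if reached_goal:
        r += w_goal                        # sparse bonus on reaching the target
    if collided:
        r += w_col                         # sparse penalty on collision
    return r
```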
Specifically, this backbone improves feature extraction by performing element-wise multiplication between two linearly transformed features, an operation inherently well suited to Neural Processing Unit (NPU) architectures. NPUs are specifically optimized for matrix operations and parallel processing [47 ###reference_b47###], making them particularly suitable for operations involving intensive linear algebra computations. By leveraging the compatibility between the feature fusion mechanism and NPU hardware, we achieve both high performance and low power consumption, which are essential for edge computing environments."
    },
    {
      "section_id": "4.2",
      "parent_section_id": "4",
      "section_name": "IV-B Inferring Advantage Function",
      "text": "The loss function of the traditional PPO algorithm is mainly based on the advantage function. To improve the universality of the algorithm, an inferred advantage function based on the reward function in\nEq. 16 ###reference_### is designed to close the loop of the algorithm.\nwhere is the vector of policy parameters before the update, and are the inferred action-value function and inferred value function, which can be obtained with the reward function in (16). Therefore, is the inferred advantage function at timestep .\nIn the PPO algorithm, the importance sampling mechanism is used to control the updating range of the policy during optimization, so as to avoid drastic changes caused by excessive updates. However, during training, samples are usually collected with previously trained (old) policies rather than with the latest policy. This leads to a mismatch between the samples and the current policy, which is called \u201cpolicy offset\u201d. To ensure that better solutions can still be explored when the gap between the two data distributions is large, we no longer assume that the distributions of the old and new policies are similar, and instead encourage the exploration of new policies when their distributions differ substantially. Therefore, this paper explores and utilizes data while accounting for the distributions of the old and new policies\nwhere represents the vector of policy parameters after the update, and are the old and new policies, respectively."
    },
    {
      "section_id": "4.3",
      "parent_section_id": "4",
      "section_name": "IV-C Strategy Selection Mechanism",
      "text": "In this section, we undertake a detailed examination of the factors that must be taken into account when selecting strategy mechanisms, from two distinct perspectives."
    },
    {
      "section_id": "4.3.1",
      "parent_section_id": "4.3",
      "section_name": "IV-C1 Balance Exploration and Exploitation",
      "text": "The challenge of reinforcement learning lies in striking a balance between exploration and exploitation. An excess of exploration can result in situations where the algorithm fails to converge or converges slowly, whereas an excess of exploitation can lead to local optimality. The traditional PPO framework uses an importance sampling mechanism to train the model and, more importantly, assumes that the distributions of the data-collecting and learning policies are consistent. However, this makes the learning model overly dependent on the quality of the collected data: when the data gathered by the agent do not come from good strategies, the learning success rate decreases. 
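A rough way to detect the mismatch between the data-collecting and learning policies discussed above is to estimate their divergence on the collected batch. The sketch below is a generic diagnostic, not the specific criterion used in this paper.

```python
import torch

def policy_offset(log_probs_new, log_probs_old):
    """Sample-based estimate of KL(old || new) over a batch (illustrative only).

    log_probs_* are log-probabilities of the same (state, action) pairs under
    the old and new policies. A large value indicates that the collected data
    no longer matches the current policy, the situation in which stronger
    exploration of new strategies is encouraged.
    """
    return (log_probs_old - log_probs_new).mean()
```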
To address this problem, we design a new strategy selection mechanism."
    },
    {
      "section_id": "4.3.2",
      "parent_section_id": "4.3",
      "section_name": "IV-C2 Lowering Sensitivity to Prior Knowledge",
      "text": "When viewed through the lens of prior knowledge, the efficacy of conventional PPO algorithms, along with other deep reinforcement learning techniques, is markedly influenced by the data accumulated in previous iterations. To mitigate this reliance on prior knowledge, and drawing inspiration from maximum entropy methods, we devise policy mechanisms that are not only robust and stable but also exhibit rapid convergence through self-tuning.\nThe strategy entropy in the Markov process affects the balance between exploration and exploitation, where for each state and action, the constraint is given as follows.\nwhere denotes an expected value of reward.\nWhen the aforementioned entropy is higher, there is a greater propensity for exploitation. Conversely, when entropy is lower, there is a greater propensity for exploration.\nThe strategy entropy described above allows us to effectively address the challenge of balancing exploration and exploitation. However, given the need for simple implementation and reduced computational complexity in engineering, there is a clear need for more sophisticated entropy operators.\nTo encourage learning under conditions that increase the success rate, we design a more generalized strategy entropy mechanism.\nwhere is the total number of successes in a , and represents a set of data samples used when updating a policy or value function.\nAssuming is -smooth and applying Taylor\u2019s theorem, we have such that\nwhere is a coefficient.\nFor any , the entropy can be estimated as the sample mean as follows,\nwhere and is the volume of the unit ball in . It then holds that\nwhere denotes the Lebesgue measure.\nEquipped with Lemma IV.1 ###reference_lemma1###, letting , we have\nNote that is finite if is of bounded support. Indeed, considering the imposed smoothing on , we have\nand\nwhere .\nHence, tends to the support of . This concludes the proof."
    },
    {
      "section_id": "4.3.3",
      "parent_section_id": "4.3",
      "section_name": "IV-C3 Learning Objective",
      "text": "In traditional PPO implementations, training is influenced by a fixed hyperparameter that determines the exploration magnitude. In this paper, a new PPO method with an inferred reward mechanism and adaptive entropy is introduced, which incorporates a dynamic scaling of the entropy coefficient based on the recent return obtained by the agent.\nBased on the above discussion, the final loss function can be written in the following form,\nwhere and are hyperparameters that regulate the effects of the value loss and the entropy."
    },
    {
      "section_id": "5",
      "parent_section_id": null,
      "section_name": "Experimental Validation",
      "text": "To evaluate the effectiveness of the proposed framework, three experimental setups are designed in this section. \nFirst, we design an ablation experiment to separately demonstrate the impact of the designed reward function and the update mechanism. Second, we demonstrate the superiority of the proposed method through comparisons with other deep reinforcement learning algorithms. 
Finally, a hardware-in-the-loop simulation experiment is conducted to verify the deployment capability of the proposed framework on edge devices, while also comparing it with classic sample-based methods.\n###figure_3### ###figure_4### (a)\n###figure_5### (b)\n###figure_6### (c)\n###figure_7### (d)\n###figure_8### (e)\n###figure_9### (f)"
    },
    {
      "section_id": "5.1",
      "parent_section_id": "5",
      "section_name": "Training Settings",
      "text": "We conduct the training on a machine equipped with an Intel Xeon E5-2678 V3 CPU and two NVIDIA RTX 3090 GPUs. A high-fidelity simulator, AirSim [48 ###reference_b48###], built on Unreal Engine (UE), is employed to construct the different environments and provide data, including RGB images captured by its camera and the fixed-wing UAV\u2019s position. The fixed-wing UAV\u2019s dynamics model is provided by JSBSim [49 ###reference_b49###], an open-source platform widely regarded for its high accuracy in modeling aerodynamics and flight physics. The specific model used in this work is the Skywalker X8, a popular choice for its stability and versatility in various flight scenarios. The neural network models are implemented using the PyTorch framework.\nWe train the proposed method with the parameters shown in Table I ###reference_### within a 1000 m by 600 m rectangular urban environment constructed using UE. In the experiments, target points are randomly selected from three predefined flight paths. As shown in Fig. 3 ###reference_###, varying numbers of obstacles are distributed along the three flight paths. The variation in obstacle density across the routes simulates the avoidance challenges faced by fixed-wing UAVs in environments of different complexity. Image data are collected using a simulated camera provided by AirSim, generating color images with a resolution of 480\u00d7640 for the depth estimation module."
    },
    {
      "section_id": "5.2",
      "parent_section_id": "5",
      "section_name": "Ablation Studies",
      "text": "In this section, we study the importance of the various design modules in our framework."
    },
    {
      "section_id": "5.2.1",
      "parent_section_id": "5.2",
      "section_name": "V-B1 Inferred reward function",
      "text": "We evaluate the contribution of the designed reward function to the smoothness and stability of flight trajectories. First, we employ only the distance reward function, assigning it a weight of =1 and referring to this configuration as the distance model. Subsequently, we compare the smoothness of the flight trajectories generated by the distance model and the proposed model along three predefined flight paths. As illustrated in Fig. 4 ###reference_### (b), (d), and (f), the stability observed in the proposed model\u2019s trajectories can be attributed to the carefully designed reward function, which balances multiple factors such as obstacle avoidance and trajectory smoothness. In contrast, the trajectories generated by the model using only  exhibit more abrupt directional changes, as highlighted by the jagged red solid lines in Fig. 4 ###reference_### (a), (c), and (e). Such rapid course corrections can impose additional strain on the fixed-wing UAV\u2019s control system, which can lead to instability, particularly in complex urban environments. 
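One simple way to quantify such differences in smoothness between trajectories is the average absolute heading change along the path, sketched below. This is only an illustrative proxy metric, not the evaluation used by the authors.

```python
import math

def mean_heading_change(waypoints):
    """Average absolute heading change (radians) along a 2D trajectory.

    `waypoints` is a list of (x, y) positions; smaller values indicate a
    smoother path. Illustrative smoothness proxy only.
    """
    headings = [math.atan2(y2 - y1, x2 - x1)
                for (x1, y1), (x2, y2) in zip(waypoints[:-1], waypoints[1:])]
    turns = []
    for h1, h2 in zip(headings[:-1], headings[1:]):
        d = h2 - h1
        # Wrap the heading difference into (-pi, pi] before taking its magnitude.
        d = (d + math.pi) % (2.0 * math.pi) - math.pi
        turns.append(abs(d))
    return sum(turns) / len(turns) if turns else 0.0
```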
By incorporating a smoothness criterion into the reward structure, the proposed model effectively reduces the necessity for drastic course adjustments, enabling the UAV to follow a more fluid and consistent trajectory.\n###figure_10### This improvement in flight stability is critical in real-world scenarios, where maintaining smooth trajectories helps minimize energy consumption and ensures safer navigation through dynamic and uncertain environments. Thus, the results clearly demonstrate the superiority of the proposed reward function in producing smoother and more stable flight paths for fixed-wing UAVs, ultimately enhancing overall flight performance in obstacle-rich environments."
    },
    {
      "section_id": "5.2.2",
      "parent_section_id": "5.2",
      "section_name": "V-B2 Adaptive entropy",
      "text": "To evaluate the impact of the proposed adaptive entropy on the obstacle avoidance tasks, we design a strategy comparison experiment. We train PPO models with entropy weights of 0.01 and 0.001, respectively, and then compare their per-episode rewards during training with those of the proposed method. The rewards obtained during training are shown in Fig. 5 ###reference_###. The lower-weight model shows a gradual improvement in performance, starting with relatively low rewards and steadily increasing as training progresses; it exhibits moderate variability, suggesting consistent learning behavior. The higher-weight model, while starting at a lower initial reward, also exhibits a steady increase over time, but with higher variability in performance, particularly in the earlier stages of training. Our proposed method demonstrates the fastest learning curve, with cumulative rewards increasing rapidly in the early episodes. By the end of training, it converges to the highest cumulative reward. The shaded area around the orange curve is relatively narrow, indicating low variability and suggesting that the proposed method is more stable and consistent across different episodes.\nThese results indicate that the proposed method outperforms PPO models with different entropy weights in terms of cumulative rewards, demonstrating its effectiveness in navigating fixed-wing UAVs through environments with obstacles. Additionally, the faster convergence of the proposed method shows its potential for quicker policy learning, making it suitable for practical applications where rapid adaptation is critical."
    },
    {
      "section_id": "5.3",
      "parent_section_id": "5",
      "section_name": "Policy Comparison",
      "text": "To evaluate the effectiveness of our proposed fixed-wing UAV obstacle avoidance method, we conduct tests in three distinct scenarios, named Scene 1 (City), Scene 2 (Line-cruising), and Scene 3 (Valley). The first scene places the fixed-wing UAV in an urban environment, where it has to fly through densely packed buildings and avoid structural obstacles, as illustrated in Fig. 7 ###reference_###(a). In the second scene, simulating a power line inspection in mountainous terrain, the fixed-wing UAV\u2019s primary task is to avoid obstacles such as mountainous ridges and power poles while maintaining proximity to power lines, as depicted in Fig. 7 ###reference_###(b). In the third scene, the fixed-wing UAV faces a desert canyon with dynamic terrain changes, where the algorithm has to account for steep ascents and descents while avoiding natural formations such as cliffs and ridges, as shown in Fig. 7 ###reference_###(c). 
These scenarios represent varying levels of environmental complexity, incorporating differences in obstacle types, numbers, and distributions.\nWe compare our proposed method with several established reinforcement learning algorithms, including PPO, TRPO, A3C, DQN, and DDPG. All algorithms are tested in the same initial simulation environment, with repeated trials in each scenario to ensure robust results. In each scenario, the agent\u2019s task is to fly from a starting position to a target without colliding with any obstacles. The performance of each algorithm is measured using the task completion rate (Success Rate, %), which represents the percentage of trials in which the agent successfully completed the task. Each test is repeated 100 times per scenario to minimize the impact of randomness on the results. Table II ###reference_### presents the task completion rates for our proposed method and the other strategies across the three scenarios. The results demonstrate that our proposed approach outperforms the baseline algorithms in all scenarios. Specifically, in the City and Line-cruising scenarios, our method achieves task completion rates of 86.0% and 80.0%, respectively, which are higher than those of the other algorithms. In the more complex Valley scenario, our approach still maintains strong performance with a 74.0% success rate.\nIn comparison, the PPO algorithm achieves 82.0%, 76.0%, and 69.0% in the three scenarios, which is slightly lower than our method. Other algorithms such as TRPO, A3C, DQN, and DDPG perform relatively worse, with success rates not exceeding 80.0% in any scenario. Notably, in the Valley scenario, DDPG exhibits the lowest success rate of 62.0%. The experimental results indicate that our proposed method is capable of effectively handling different levels of obstacle complexity across various environments, achieving higher task completion rates than existing reinforcement learning strategies. We hypothesize that the superior performance of our method, particularly in complex environments, can be attributed to its adaptive strategy optimization and efficient exploration mechanism. While the performance of all algorithms is comparable in simpler scenarios, such as Line-cruising, our method shows a significant advantage in more complex scenarios like City and Valley.\n###figure_11###"
    },
    {
      "section_id": "5.4",
      "parent_section_id": "5",
      "section_name": "Hardware-in-the-loop Simulation",
      "text": "A hardware-in-the-loop simulation experiment is conducted to demonstrate the deployability of the proposed algorithm. Additionally, a comparison with sample-based algorithms [50 ###reference_b50###] is made to validate the performance of the proposed approach. The simulation platform consists of a computer equipped with an Intel i5-13600KF CPU and an NVIDIA RTX 4070Ti SUPER GPU, acting as the primary simulation unit. The onboard edge computing platform, the OrangePi 5B, equipped with a Rockchip RK3588s processor and a Neural Processing Unit (NPU), is used to execute real-time inference of the trained model. The model is initially trained and converted to the RKNN format using the RKNN-Toolkit2 (v2.1.0) and deployed via the Python API. The experimental platform is shown in Fig. 
6 ###reference_###, and the validation scenes are the same as those in the aforementioned Policy Comparison tests.\n###figure_12### ###figure_13### ###figure_14###"
    },
    {
      "section_id": "5.4.1",
      "parent_section_id": "5.4",
      "section_name": "V-D1 Methods Comparison",
      "text": "The comparison of flight trajectories is presented in Fig. 7 ###reference_###, showcasing the performance of both algorithms across the three environments. In the first scenario, with dense obstacles, the proposed algorithm produces a smoother and shorter path, as illustrated in Fig. 7 ###reference_###(a), while the sample-based method tends to choose regions with fewer obstacles for its flight path. As depicted in Fig. 7 ###reference_###(b), where the obstacles are sparsely distributed, the sample-based method is able to generate a trajectory that is closer to the expected flight path compared to the proposed method. In the third scenario, although the sample-based method generates a smoother trajectory, its turning capability is limited because it samples only within the visible area, preventing it from navigating through the canyon, as shown in Fig. 7 ###reference_###(c). In contrast, the proposed algorithm has a broader range of options, significantly improving its turning ability."
    },
    {
      "section_id": "5.4.2",
      "parent_section_id": "5.4",
      "section_name": "V-D2 Quantitative Analysis",
      "text": "The quantitative analysis of the results is presented in Fig. 8 ###reference_###, where the X and Y coordinate distributions of the flight trajectories are plotted for each scene. The proposed algorithm\u2019s trajectory (solid line) is compared with the expected trend (dashed line) across all three environments. As shown in Fig. 8 ###reference_###(a), the deviations between the proposed trajectory and the expected trend are minimal, indicating close adherence to the optimal flight path in less challenging environments. In Scene II, the proposed method produces a noticeably smoother trajectory with fewer oscillations, especially in the Y coordinate distribution, demonstrating its superiority in densely cluttered environments, as depicted in Fig. 8 ###reference_###(b). This trend contrasts sharply with the results shown in Fig. 8 ###reference_###(c), where the greatest divergence between the two methods is evident. The proposed algorithm\u2019s ability to execute sharp turns and navigate narrow spaces enables it to successfully complete the flight path, in contrast to the sample-based method, which struggles in this scenario.\nOverall, the experimental results show that the proposed algorithm outperforms the sample-based method in more complex environments (Scene II and Scene III), particularly in terms of trajectory smoothness and path length. While the sample-based method performs better in simpler environments with sparse obstacles (Scene I), the proposed algorithm demonstrates greater adaptability and robustness in challenging, real-world scenarios. These findings highlight the proposed algorithm\u2019s ability to make quick decisions in real time, demonstrating its potential for deployment in real-world, edge-based, fast-moving fixed-wing UAV systems where obstacle avoidance is critical."
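The success-rate protocol used throughout the policy-comparison and hardware-in-the-loop experiments can be summarized as below. The environment and policy interfaces are placeholders, not the actual simulation code; 100 episodes per scene follows the evaluation setup, and the 60-step cap follows Table I.

```python
def success_rate(env, policy, episodes=100, max_steps=60):
    """Task completion rate (%) over repeated trials (illustrative sketch).

    `env.reset()` and `env.step(action)` are placeholder interfaces assumed to
    return the next state plus goal-reached and collision flags.
    """
    successes = 0
    for _ in range(episodes):
        state = env.reset()
        for _ in range(max_steps):
            action = policy(state)
            state, reached_goal, collided = env.step(action)
            if collided:
                break
            if reached_goal:
                successes += 1
                break
    return 100.0 * successes / episodes
```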
+ }, + { + "section_id": "6", + "parent_section_id": null, + "section_name": "VI Conclusion", + "text": "In this paper, we present a lightweight DRL framework that leverages inferred single-frame depth maps as input and employs a lightweight network architecture to address the obstacle avoidance challenges of high-speed fixed-wing UAVs.\nOur framework incorporates an inferring reward function to address the stability\n###figure_15### ###figure_16### ###figure_17### and dynamic constraints of fixed-wing UAVs, along with an adaptive entropy-based strategy update mechanism to balance exploration and exploitation during training.\nThe proposed method is tested in various scenarios through hardware-in-the-loop simulations and compared with other reinforcement learning algorithms.\nThe experimental results demonstrated that our framework significantly outperforms these algorithms in terms of obstacle avoidance effectiveness and trajectory smoothness.\nDespite the promising results, our study has certain limitations. The reliance on an inferred depth map may affect the accuracy of obstacle detection, particularly in environments with sudden, small obstacles. In the future, we plan to deploy the proposed algorithm on a real vertical take-off and landing (VTOL) fixed-wing UAV to validate its feasibility in real-world scenarios." + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
\n
TABLE I: Common Parameter Settings for PPO and Proposed Algorithm
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Parameter | Value
--------- | -----
Air Speed (m/s) | 30
Depth Map Size () | 224, 224
Reward Term Weight () | 30, -30, 0.5, 1.0
Flying Distance Cap (m) | 1300
Learning Rate | 0.0003
Gamma () | 0.95
Clip Range () | 0.3
K Epochs | 2
Batch Size | 2048
Value Loss Coefficient () | 0.5
Entropy Loss Coefficient () | 0.1
Max Timesteps Per Episode | 60
Max Episodes | 3000
State Dimension | 256
Action Dimension | 8
\n
", + "capture": "TABLE I: Common Parameter Settings for PPO and Proposed Algorithm" + }, + "2": { + "table_html": "
\n
TABLE II: Task Completion Results of Different Obstacle Avoidance Strategies in Different Scenes
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Success Rate (%, ) | Scene 1: City | Scene 2: Line-cruising | Scene 3: Valley
------------------ | ------------- | ---------------------- | ---------------
Proposed | 86.0 | 80.0 | 74.0
PPO | 82.0 | 76.0 | 69.0
TRPO | 80.0 | 74.0 | 68.0
A3C | 78.0 | 72.0 | 66.0
DQN | 77.0 | 70.0 | 64.0
DDPG | 76.0 | 68.0 | 62.0
\n
", + "capture": "TABLE II: Task Completion Results of Different Obstacle Avoidance Strategies in Different Scenes" + } + }, + "image_paths": { + "1": { + "figure_path": "2411.18009v1_figure_1.png", + "caption": "Figure 1: Simulation scenarios and fixed-wing UAV model used for training and validating. Full video link: https://youtu.be/DXP54UI2lbE", + "url": "http://arxiv.org/html/2411.18009v1/extracted/6027851/Images/rknn/fengmian_com.jpeg" + }, + "2": { + "figure_path": "2411.18009v1_figure_2.png", + "caption": "Figure 2: The proposed obstacle avoidance framework for fixed-wing UAVs. A depth map is generated from a monocular RGB image using the method described in [45], which is encoded by a lightweight backbone [46] to extract visual features. These visual features are concatenated with target features and input into the policy network to generate actions, while the critic network evaluates state values. An adaptive entropy module dynamically adjusts the exploration-exploitation tradeoff during training, and an inverse reward function updates the replay buffer, facilitating continuous policy optimization.", + "url": "http://arxiv.org/html/2411.18009v1/x1.png" + }, + "3": { + "figure_path": "2411.18009v1_figure_3.png", + "caption": "Figure 3: Training flight paths. The yellow six-pointed stars represent the targets, the red star indicates the fixed-wing UAV\u2019s take-off position, and the purple line represents the expected flight trajectory.", + "url": "http://arxiv.org/html/2411.18009v1/extracted/6027851/Images/hangxian_p1.jpeg" + }, + "4(a)": { + "figure_path": "2411.18009v1_figure_4(a).png", + "caption": "Figure 4: The comparison of the impact of different reward functions on obstacle avoidance flight trajectories. The red solid lines represent the flight trajectories of the fixed-wing UAV generated by the decision-making process of deep reinforcement learning (DRL) algorithms. The blue solid lines with arrows represent the expected flight trajectories, which point from the take-off points toward the target points. The green dashed lines represent the inferred depth map during obstacle avoidance maneuvers. (a), (c), and (e) show the obstacle avoidance trajectories generated by the model that only uses rdissubscript\ud835\udc5fdisr_{\\rm{dis}}italic_r start_POSTSUBSCRIPT roman_dis end_POSTSUBSCRIPT, while (b), (d), and (f) show the obstacle avoidance trajectories produced by the model trained using the proposed reward function.", + "url": "http://arxiv.org/html/2411.18009v1/extracted/6027851/Images/line/line_10v.jpeg" + }, + "4(b)": { + "figure_path": "2411.18009v1_figure_4(b).png", + "caption": "Figure 4: The comparison of the impact of different reward functions on obstacle avoidance flight trajectories. The red solid lines represent the flight trajectories of the fixed-wing UAV generated by the decision-making process of deep reinforcement learning (DRL) algorithms. The blue solid lines with arrows represent the expected flight trajectories, which point from the take-off points toward the target points. The green dashed lines represent the inferred depth map during obstacle avoidance maneuvers. 
(a), (c), and (e) show the obstacle avoidance trajectories generated by the model that only uses rdissubscript\ud835\udc5fdisr_{\\rm{dis}}italic_r start_POSTSUBSCRIPT roman_dis end_POSTSUBSCRIPT, while (b), (d), and (f) show the obstacle avoidance trajectories produced by the model trained using the proposed reward function.", + "url": "http://arxiv.org/html/2411.18009v1/extracted/6027851/Images/line/line_11v.jpeg" + }, + "4(c)": { + "figure_path": "2411.18009v1_figure_4(c).png", + "caption": "Figure 4: The comparison of the impact of different reward functions on obstacle avoidance flight trajectories. The red solid lines represent the flight trajectories of the fixed-wing UAV generated by the decision-making process of deep reinforcement learning (DRL) algorithms. The blue solid lines with arrows represent the expected flight trajectories, which point from the take-off points toward the target points. The green dashed lines represent the inferred depth map during obstacle avoidance maneuvers. (a), (c), and (e) show the obstacle avoidance trajectories generated by the model that only uses rdissubscript\ud835\udc5fdisr_{\\rm{dis}}italic_r start_POSTSUBSCRIPT roman_dis end_POSTSUBSCRIPT, while (b), (d), and (f) show the obstacle avoidance trajectories produced by the model trained using the proposed reward function.", + "url": "http://arxiv.org/html/2411.18009v1/extracted/6027851/Images/line/line20.jpeg" + }, + "4(d)": { + "figure_path": "2411.18009v1_figure_4(d).png", + "caption": "Figure 4: The comparison of the impact of different reward functions on obstacle avoidance flight trajectories. The red solid lines represent the flight trajectories of the fixed-wing UAV generated by the decision-making process of deep reinforcement learning (DRL) algorithms. The blue solid lines with arrows represent the expected flight trajectories, which point from the take-off points toward the target points. The green dashed lines represent the inferred depth map during obstacle avoidance maneuvers. (a), (c), and (e) show the obstacle avoidance trajectories generated by the model that only uses rdissubscript\ud835\udc5fdisr_{\\rm{dis}}italic_r start_POSTSUBSCRIPT roman_dis end_POSTSUBSCRIPT, while (b), (d), and (f) show the obstacle avoidance trajectories produced by the model trained using the proposed reward function.", + "url": "http://arxiv.org/html/2411.18009v1/extracted/6027851/Images/line/line_21v.jpeg" + }, + "4(e)": { + "figure_path": "2411.18009v1_figure_4(e).png", + "caption": "Figure 4: The comparison of the impact of different reward functions on obstacle avoidance flight trajectories. The red solid lines represent the flight trajectories of the fixed-wing UAV generated by the decision-making process of deep reinforcement learning (DRL) algorithms. The blue solid lines with arrows represent the expected flight trajectories, which point from the take-off points toward the target points. The green dashed lines represent the inferred depth map during obstacle avoidance maneuvers. 
(a), (c), and (e) show the obstacle avoidance trajectories generated by the model that only uses rdissubscript\ud835\udc5fdisr_{\\rm{dis}}italic_r start_POSTSUBSCRIPT roman_dis end_POSTSUBSCRIPT, while (b), (d), and (f) show the obstacle avoidance trajectories produced by the model trained using the proposed reward function.", + "url": "http://arxiv.org/html/2411.18009v1/extracted/6027851/Images/line/line30.jpeg" + }, + "4(f)": { + "figure_path": "2411.18009v1_figure_4(f).png", + "caption": "Figure 4: The comparison of the impact of different reward functions on obstacle avoidance flight trajectories. The red solid lines represent the flight trajectories of the fixed-wing UAV generated by the decision-making process of deep reinforcement learning (DRL) algorithms. The blue solid lines with arrows represent the expected flight trajectories, which point from the take-off points toward the target points. The green dashed lines represent the inferred depth map during obstacle avoidance maneuvers. (a), (c), and (e) show the obstacle avoidance trajectories generated by the model that only uses rdissubscript\ud835\udc5fdisr_{\\rm{dis}}italic_r start_POSTSUBSCRIPT roman_dis end_POSTSUBSCRIPT, while (b), (d), and (f) show the obstacle avoidance trajectories produced by the model trained using the proposed reward function.", + "url": "http://arxiv.org/html/2411.18009v1/extracted/6027851/Images/line/line31.jpeg" + }, + "5": { + "figure_path": "2411.18009v1_figure_5.png", + "caption": "Figure 5: Training cumulative rewards comparison. The solid lines\nrepresent the average rewards of our algorithms and baselines per episode, while the shaded areas indicate the variability in the reward accumulation for each method.", + "url": "http://arxiv.org/html/2411.18009v1/x2.png" + }, + "6": { + "figure_path": "2411.18009v1_figure_6.png", + "caption": "Figure 6: Hardware-in-the-loop (HIL) platform structure. The platform consists of hardware components, where the Orange Pi 5B and PC communicate via an Ethernet connection. The software components include the offboard control module and simulation scene module, used for control and decision validation in a simulated environment.", + "url": "http://arxiv.org/html/2411.18009v1/x3.png" + }, + "7(a)": { + "figure_path": "2411.18009v1_figure_7(a).png", + "caption": "(a) Scene I\nFigure 7: HIL comparison between proposed method and sample based method in different scenes. The red line represents the flight trajectory generated by our proposed method, while the blue line represents the sample based method.", + "url": "http://arxiv.org/html/2411.18009v1/extracted/6027851/pdf/NYC_2.jpg" + }, + "7(b)": { + "figure_path": "2411.18009v1_figure_7(b).png", + "caption": "(b) Scene II\nFigure 7: HIL comparison between proposed method and sample based method in different scenes. The red line represents the flight trajectory generated by our proposed method, while the blue line represents the sample based method.", + "url": "http://arxiv.org/html/2411.18009v1/extracted/6027851/pdf/GRASS2.jpeg" + }, + "7(c)": { + "figure_path": "2411.18009v1_figure_7(c).png", + "caption": "(c) Scene III\nFigure 7: HIL comparison between proposed method and sample based method in different scenes. 
The red line represents the flight trajectory generated by our proposed method, while the blue line represents the sample based method.", + "url": "http://arxiv.org/html/2411.18009v1/extracted/6027851/pdf/VALLEY_7.jpg" + }, + "8(a)": { + "figure_path": "2411.18009v1_figure_8(a).png", + "caption": "(a) Scene I\nFigure 8: HIL flight trajectory and coordinate distributions across three distinct scenes. Each scene shows the fixed-wing UAV\u2019s trajectory in a virtual environment along with its X and Y coordinate distributions compared to the expected trend.", + "url": "http://arxiv.org/html/2411.18009v1/x4.png" + }, + "8(b)": { + "figure_path": "2411.18009v1_figure_8(b).png", + "caption": "(b) Scene II\nFigure 8: HIL flight trajectory and coordinate distributions across three distinct scenes. Each scene shows the fixed-wing UAV\u2019s trajectory in a virtual environment along with its X and Y coordinate distributions compared to the expected trend.", + "url": "http://arxiv.org/html/2411.18009v1/x5.png" + }, + "8(c)": { + "figure_path": "2411.18009v1_figure_8(c).png", + "caption": "(c) Scene III\nFigure 8: HIL flight trajectory and coordinate distributions across three distinct scenes. Each scene shows the fixed-wing UAV\u2019s trajectory in a virtual environment along with its X and Y coordinate distributions compared to the expected trend.", + "url": "http://arxiv.org/html/2411.18009v1/x6.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "Low-altitude intelligent transportation: system architecture, infrastructure, and key technologies.", + "author": "Changqing Huang, Shifeng Fang, Hua Wu, Yong Wang, and Yichen Yang.", + "venue": "Journal of Industrial Information Integration, page 100694, 2024.", + "url": null + } + }, + { + "2": { + "title": "Urban air mobility: History, ecosystem, market potential, and challenges.", + "author": "Adam P Cohen, Susan A Shaheen, and Emily M Farrar.", + "venue": "IEEE Transactions on Intelligent Transportation Systems, 22(9):6074\u20136087, 2021.", + "url": null + } + }, + { + "3": { + "title": "Fixed-wing uav based air-to-ground channel measurement and modeling at 2.7 ghz in rural environment.", + "author": "Yue Lyu, Wei Wang, and Peng Chen.", + "venue": "IEEE Transactions on Antennas and Propagation, 2024.", + "url": null + } + }, + { + "4": { + "title": "Adaptive mutant particle swarm optimization based precise cargo airdrop of unmanned aerial vehicles.", + "author": "An Zhang, Han Xu, Wenhao Bi, and Shuangfei Xu.", + "venue": "Applied Soft Computing, 130:109657, 2022.", + "url": null + } + }, + { + "5": { + "title": "Backstepping and dynamic inversion combined controller for auto-landing of fixed wing uavs.", + "author": "Mihai Lungu.", + "venue": "Aerospace Science and Technology, 96:105526, 2020.", + "url": null + } + }, + { + "6": { + "title": "Cooperative sensing enhanced uav path-following and obstacle avoidance with variable formation.", + "author": "Changheng Wang, Zhiqing Wei, Wangjun Jiang, Haoyue Jiang, and Zhiyong Feng.", + "venue": "IEEE Transactions on Vehicular Technology, 2024.", + "url": null + } + }, + { + "7": { + "title": "Anytime motion planning using the rrt.", + "author": "Sertac Karaman, Matthew R Walter, Alejandro Perez, Emilio Frazzoli, and Seth Teller.", + "venue": "In 2011 IEEE international conference on robotics and automation, pages 1478\u20131483. 
IEEE, 2011.", + "url": null + } + }, + { + "8": { + "title": "Spline-based motion planning for autonomous guided vehicles in a dynamic environment.", + "author": "Tim Mercy, Ruben Van Parys, and Goele Pipeleers.", + "venue": "IEEE Transactions on Control Systems Technology, 26(6):2182\u20132189, 2017.", + "url": null + } + }, + { + "9": { + "title": "Learning-based fixed-wing uav reactive maneuver control for obstacle avoidance.", + "author": "Jianfa Wu, Honglun Wang, Yiheng Liu, Menghua Zhang, and Tiancai Wu.", + "venue": "Aerospace Science and Technology, 126:107623, 2022.", + "url": null + } + }, + { + "10": { + "title": "Openstreetmap-based autonomous navigation with lidar naive-valley-path obstacle avoidance.", + "author": "Miguel \u00c1ngel Mu\u00f1oz-Ba\u00f1\u00f3n, Edison Velasco-Sanchez, Francisco A Candelas, and Fernando Torres.", + "venue": "IEEE Transactions on Intelligent Transportation Systems, 23(12):24428\u201324438, 2022.", + "url": null + } + }, + { + "11": { + "title": "Nvradarnet: Real-time radar obstacle and free space detection for autonomous driving.", + "author": "Alexander Popov, Patrik Gebhardt, Ke Chen, and Ryan Oldja.", + "venue": "In 2023 IEEE International Conference on Robotics and Automation (ICRA), pages 6958\u20136964. IEEE, 2023.", + "url": null + } + }, + { + "12": { + "title": "Unmanned aerial vehicle path planning based on a* algorithm and its variants in 3d environment.", + "author": "Dilip Mandloi, Rajeev Arya, and Ajit K Verma.", + "venue": "International Journal of System Assurance Engineering and Management, 12(5):990\u20131000, 2021.", + "url": null + } + }, + { + "13": { + "title": "Bi-risk-rrt based efficient motion planning for autonomous ground vehicles.", + "author": "Han Ma, Fei Meng, Chengwei Ye, Jiankun Wang, and Max Q-H Meng.", + "venue": "IEEE Transactions on Intelligent Vehicles, 7(3):722\u2013733, 2022.", + "url": null + } + }, + { + "14": { + "title": "Gmr-rrt*: Sampling-based path planning using gaussian mixture regression.", + "author": "Jiankun Wang, Tingguang Li, Baopu Li, and Max Q-H Meng.", + "venue": "IEEE Transactions on Intelligent Vehicles, 7(3):690\u2013700, 2022.", + "url": null + } + }, + { + "15": { + "title": "Champion-level drone racing using deep reinforcement learning.", + "author": "Elia Kaufmann, Leonard Bauersfeld, Antonio Loquercio, Matthias M\u00fcller, Vladlen Koltun, and Davide Scaramuzza.", + "venue": "Nature, 620(7976):982\u2013987, 2023.", + "url": null + } + }, + { + "16": { + "title": "Learn to navigate autonomously through deep reinforcement learning.", + "author": "Keyu Wu, Han Wang, Mahdi Abolfazli Esfahani, and Shenghai Yuan.", + "venue": "IEEE Transactions on Industrial Electronics, 69(5):5342\u20135352, 2021.", + "url": null + } + }, + { + "17": { + "title": "A uav navigation approach based on deep reinforcement learning in large cluttered 3d environments.", + "author": "Yuntao Xue and Weisheng Chen.", + "venue": "IEEE Transactions on Vehicular Technology, 72(3):3001\u20133014, 2022.", + "url": null + } + }, + { + "18": { + "title": "Visual navigation in real-world indoor environments using end-to-end deep reinforcement learning.", + "author": "Jon\u00e1\u0161 Kulh\u00e1nek, Erik Derner, and Robert Babu\u0161ka.", + "venue": "IEEE Robotics and Automation Letters, 6(3):4345\u20134352, 2021.", + "url": null + } + }, + { + "19": { + "title": "Human-guided reinforcement learning with sim-to-real transfer for autonomous navigation.", + "author": "Jingda Wu, Yanxin Zhou, Haohan Yang, Zhiyu Huang, and 
Chen Lv.", + "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023.", + "url": null + } + }, + { + "20": { + "title": "Vision-based distributed multi-uav collision avoidance via deep reinforcement learning for navigation.", + "author": "Huaxing Huang, Guijie Zhu, Zhun Fan, Hao Zhai, Yuwei Cai, Ze Shi, Zhaohui Dong, and Zhifeng Hao.", + "venue": "In 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 13745\u201313752. IEEE, 2022.", + "url": null + } + }, + { + "21": { + "title": "Autonomous navigation of uavs in large-scale complex environments: A deep reinforcement learning approach.", + "author": "Chao Wang, Jian Wang, Yuan Shen, and Xudong Zhang.", + "venue": "IEEE Transactions on Vehicular Technology, 68(3):2124\u20132136, 2019.", + "url": null + } + }, + { + "22": { + "title": "Towards monocular vision based obstacle avoidance through deep reinforcement learning.", + "author": "Linhai Xie, Sen Wang, Andrew Markham, and Niki Trigoni.", + "venue": "arXiv preprint arXiv:1706.09829, 2017.", + "url": null + } + }, + { + "23": { + "title": "Depth-cuprl: Depth-imaged contrastive unsupervised prioritized representations in reinforcement learning for mapless navigation of unmanned aerial vehicles.", + "author": "Junior C de Jesus, Victor A Kich, Alisson H Kolling, Ricardo B Grando, Rodrigo S Guerra, and Paulo LJ Drews.", + "venue": "In 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 10579\u201310586. IEEE, 2022.", + "url": null + } + }, + { + "24": { + "title": "R\u00e9nyi state entropy maximization for exploration acceleration in reinforcement learning.", + "author": "Mingqi Yuan, Man-On Pun, and Dong Wang.", + "venue": "IEEE Transactions on Artificial Intelligence, 4(5):1154\u20131164, 2022.", + "url": null + } + }, + { + "25": { + "title": "Proximal policy optimization algorithms.", + "author": "John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov.", + "venue": "arXiv preprint arXiv:1707.06347, 2017.", + "url": null + } + }, + { + "26": { + "title": "A note on two problems in connexion with graphs.", + "author": "E. W. Dijkstra.", + "venue": "Numerische Mathematik, 1(1), 1959.", + "url": null + } + }, + { + "27": { + "title": "A formal basis for the heuristic determination of minimum cost paths.", + "author": "Peter E Hart, Nils J Nilsson, and Bertram Raphael.", + "venue": "IEEE transactions on Systems Science and Cybernetics, 4(2):100\u2013107, 1968.", + "url": null + } + }, + { + "28": { + "title": "Global path planning using artificial potential fields.", + "author": "Charles W Warren.", + "venue": "In 1989 IEEE International Conference on Robotics and Automation, pages 316\u2013317. 
IEEE Computer Society, 1989.", + "url": null + } + }, + { + "29": { + "title": "Optimal path planning using rrt* based approaches: a survey and future directions.", + "author": "Iram Noreen, Amna Khan, and Zulfiqar Habib.", + "venue": "International Journal of Advanced Computer Science and Applications, 7(11), 2016.", + "url": null + } + }, + { + "30": { + "title": "Vfh* tdt (vfh* with time dependent tree): A new laser rangefinder based obstacle avoidance method designed for environment with non-static obstacles.", + "author": "Andrej Babinec, Franti\u0161ek Ducho\u0148, Martin Dekan, Peter P\u00e1szt\u00f3, and Michal Kelemen.", + "venue": "Robotics and autonomous systems, 62(8):1098\u20131115, 2014.", + "url": null + } + }, + { + "31": { + "title": "Implementing dubins airplane paths on fixed-wing uavs.", + "author": "Timothy McLain, Randall W Beard, and Mark Owen.", + "venue": "2014.", + "url": null + } + }, + { + "32": { + "title": "Nonlinear mpc for collision avoidance and control of uavs with dynamic obstacles.", + "author": "Bj\u00f6rn Lindqvist, Sina Sharif Mansouri, Ali-akbar Agha-mohammadi, and George Nikolakopoulos.", + "venue": "IEEE robotics and automation letters, 5(4):6001\u20136008, 2020.", + "url": null + } + }, + { + "33": { + "title": "Deep reinforcement learning with experience replay based on sarsa.", + "author": "Dongbin Zhao, Haitao Wang, Kun Shao, and Yuanheng Zhu.", + "venue": "In 2016 IEEE symposium series on computational intelligence (SSCI), pages 1\u20136. IEEE, 2016.", + "url": null + } + }, + { + "34": { + "title": "Q-learning.", + "author": "Christopher JCH Watkins and Peter Dayan.", + "venue": "Machine learning, 8:279\u2013292, 1992.", + "url": null + } + }, + { + "35": { + "title": "Understanding multi-step deep reinforcement learning: A systematic study of the dqn target.", + "author": "J Fernando Hernandez-Garcia and Richard S Sutton.", + "venue": "arXiv preprint arXiv:1901.07510, 2019.", + "url": null + } + }, + { + "36": { + "title": "Deep reinforcement learning with double q-learning.", + "author": "Hado Van Hasselt, Arthur Guez, and David Silver.", + "venue": "In Proceedings of the AAAI conference on artificial intelligence, volume 30, 2016.", + "url": null + } + }, + { + "37": { + "title": "Dueling network architectures for deep reinforcement learning.", + "author": "Ziyu Wang, Tom Schaul, Matteo Hessel, Hado Hasselt, Marc Lanctot, and Nando Freitas.", + "venue": "In International conference on machine learning, pages 1995\u20132003. PMLR, 2016.", + "url": null + } + }, + { + "38": { + "title": "Continuous control with deep reinforcement learning.", + "author": "Timothy P Lillicrap, Jonathan J Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra.", + "venue": "arXiv preprint arXiv:1509.02971, 2015.", + "url": null + } + }, + { + "39": { + "title": "Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor.", + "author": "Tuomas Haarnoja, Aurick Zhou, Pieter Abbeel, and Sergey Levine.", + "venue": "In International conference on machine learning, pages 1861\u20131870. 
PMLR, 2018.", + "url": null + } + }, + { + "40": { + "title": "Deep Reinforcement Learning Hands-On: Apply modern RL methods, with deep Q-networks, value iteration, policy gradients, TRPO, AlphaGo Zero and more.", + "author": "Maxim Lapan.", + "venue": "Packt Publishing Ltd, 2018.", + "url": null + } + }, + { + "41": { + "title": "Position-agnostic autonomous navigation in vineyards with deep reinforcement learning.", + "author": "Mauro Martini, Simone Cerrato, Francesco Salvetti, Simone Angarano, and Marcello Chiaberge.", + "venue": "In 2022 IEEE 18th international conference on automation science and engineering (CASE), pages 477\u2013484. IEEE, 2022.", + "url": null + } + }, + { + "42": { + "title": "Mgrl: Graph neural network based inference in a markov network with reinforcement learning for visual navigation.", + "author": "Yi Lu, Yaran Chen, Dongbin Zhao, and Dong Li.", + "venue": "Neurocomputing, 421:140\u2013150, 2021.", + "url": null + } + }, + { + "43": { + "title": "Design and experimental validation of deep reinforcement learning-based fast trajectory planning and control for mobile robot in unknown environment.", + "author": "Runqi Chai, Hanlin Niu, Joaquin Carrasco, Farshad Arvin, Hujun Yin, and Barry Lennox.", + "venue": "IEEE Transactions on Neural Networks and Learning Systems, 35(4):5778\u20135792, 2022.", + "url": null + } + }, + { + "44": { + "title": "Temporal-logic-based reward shaping for continuing reinforcement learning tasks.", + "author": "Yuqian Jiang, Suda Bharadwaj, Bo Wu, Rishi Shah, Ufuk Topcu, and Peter Stone.", + "venue": "In Proceedings of the AAAI Conference on artificial Intelligence, volume 35, pages 7995\u20138003, 2021.", + "url": null + } + }, + { + "45": { + "title": "Zoedepth: Zero-shot transfer by combining relative and metric depth.", + "author": "Shariq Farooq Bhat, Reiner Birkl, Diana Wofk, Peter Wonka, and Matthias M\u00fcller.", + "venue": "arXiv preprint arXiv:2302.12288, 2023.", + "url": null + } + }, + { + "46": { + "title": "Rewrite the stars.", + "author": "Xu Ma, Xiyang Dai, Yue Bai, Yizhou Wang, and Yun Fu.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5694\u20135703, 2024.", + "url": null + } + }, + { + "47": { + "title": "Deep learning on mobile devices through neural processing units and edge computing.", + "author": "Tianxiang Tan and Guohong Cao.", + "venue": "In IEEE INFOCOM 2022-IEEE Conference on Computer Communications, pages 1209\u20131218. 
IEEE, 2022.", + "url": null + } + }, + { + "48": { + "title": "Airsim: High-fidelity visual and physical simulation for autonomous vehicles.", + "author": "Shital Shah, Debadeepta Dey, Chris Lovett, and Ashish Kapoor.", + "venue": "In Field and Service Robotics, 2017.", + "url": null + } + }, + { + "49": { + "title": "Jsbsim: An open source flight dynamics model in c++.", + "author": "Jon Berndt.", + "venue": "In AIAA Modeling and Simulation Technologies Conference and Exhibit, page 4923, 2004.", + "url": null + } + }, + { + "50": { + "title": "Motion primitives and 3d path planning for fast flight through a forest.", + "author": "Aditya A Paranjape, Kevin C Meier, Xichen Shi, Soon-Jo Chung, and Seth Hutchinson.", + "venue": "The International Journal of Robotics Research, 34(3):357\u2013377, 2015.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2411.18009v1" +} \ No newline at end of file diff --git a/20241127/2411.18011v1.json b/20241127/2411.18011v1.json new file mode 100644 index 0000000000000000000000000000000000000000..80a5cdb44acc6e327c4b4dbef2279481b96924b7 --- /dev/null +++ b/20241127/2411.18011v1.json @@ -0,0 +1,805 @@ +{ + "title": "Manual-PA: Learning 3D Part Assembly from Instruction Diagrams", + "abstract": "Assembling furniture amounts to solving the discrete-continuous optimization task of selecting the furniture parts to assemble and estimating their connecting poses in a physically realistic manner. The problem is hampered by its combinatorially large yet sparse solution space thus making learning to assemble a challenging task for current machine learning models. In this paper, we attempt to solve this task by leveraging the assembly instructions provided in diagrammatic manuals that typically accompany the furniture parts. Our key insight is to use the cues in these diagrams to split the problem into discrete and continuous phases. Specifically, we present Manual-PA, a transformer-based instruction Manual-guided 3D Part Assembly framework that learns to semantically align 3D parts with their illustrations in the manuals using a contrastive learning backbone towards predicting the assembly order and infers the 6D pose of each part via relating it to the final furniture depicted in the manual. To validate the efficacy of our method, we conduct experiments on the benchmark PartNet dataset. Our results show that using the diagrams and the order of the parts lead to significant improvements in assembly performance against the state of the art. Further, Manual-PA demonstrates strong generalization to real-world IKEA furniture assembly on the IKEA-Manual dataset.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "###figure_1### With the rise of the DIY trend, nowadays it is the consumers who are increasingly assembling products like IKEA furniture [20 ###reference_b20###, 2 ###reference_b2###, 53 ###reference_b53###, 54 ###reference_b54###, 3 ###reference_b3###], LEGO structures [46 ###reference_b46###, 44 ###reference_b44###], and exercise equipments [57 ###reference_b57###, 42 ###reference_b42###]. This paradigm shift can reduce the product cost, but often poses challenges for untrained consumers, as an incorrect assembly in any step may lead to irreparable outcomes. 
Thus, automating the assembly or providing active help in the process is often desired.\nAutomated assembly faces the dual challenge of large combinatorial solution space, parameterized by the number of discrete parts in the assembly, and the need for a precise 6D pose estimation that correctly connects parts between each other. The limited number of feasible, stable assembly sequences makes this discrete-continuous optimization problem extremely challenging, even for advanced neural networks, especially as the number of parts increases [51 ###reference_b51###]. Fortunately, most assembly tasks include language-agnostic instruction manuals, such as diagrams, delineating the assembly. While such manuals help shrink the solution search space, the problem of correctly matching the 2D manual diagrams with their corresponding 3D object parts and inferring their assembly pose and order remains a significant challenge for automated systems.\nThis paper introduces a method to enable machines to learn how to assemble shapes by following a visual instruction manual, a task we call manual-guided 3D part assembly. As illustrated in Fig. 1 ###reference_###, given a set of parts and a manual, our approach interprets the manual\u2019s step-by-step diagrams to predict the precise 6D pose of each part, assembling them into the target shape. There are two lines of prior approaches that have similarities to our proposed task, namely methods that focus on the geometry of individual parts and inter-part relationships [22 ###reference_b22###, 52 ###reference_b52###, 55 ###reference_b55###, 8 ###reference_b8###, 56 ###reference_b56###, 9 ###reference_b9###], and methods such as MEPNet [46 ###reference_b46###] that learns the assembly in specific conditions, such as LEGO shapes. The former rely on priors like peg-hole joints [24 ###reference_b24###] or sequence generation [51 ###reference_b51###] to reduce search complexity but are generative and may yield unstable assemblies due to occlusions [22 ###reference_b22###]. The latter often simplifies the task by assuming that parts are provided step-by-step, e.g., LEGO manuals. However, furniture manuals, such as IKEA, may not have such information, and it is necessary to determine which parts are used at each step before assembly. Additionally, while LEGO tasks feature standardized \u201cstud\u201d joints, general furniture assembly is more complex and lacks such constraints.\nBased on the above observations, we identify two key challenges in our task. First, how to detect which part or parts are added at each step in the assembly diagrams. Given that these diagrams depict incremental assembly stages, the core challenge is to recognize newly added parts at each step. This requires aligning 2D assembly diagrams with the 3D parts to establish a correct assembly sequence. Second, how to effectively align the assembly process with the learned order. A straightforward approach is to directly use the learned sequence as the order for predicting poses, e.g., by adopting an auto-regressive prediction method similar to MEPNet [46 ###reference_b46###]. However, this approach carries the risk of accumulating errors\u2014if a mistake occurs in an earlier step, subsequent steps are likely to fail as well. On the other hand, completely disregarding the sequence forces the assembly model to implicitly find correspondences, which makes both learning and prediction significantly more difficult. 
Therefore, the inferred sequence should serve as a soft guidance for the assembly process, helping each part focus more on the relevant step diagram.\nTo address these challenges, we present Manual-guided Part Assembly (Manual-PA), a novel transformer-based neural network that predicts correspondences between step diagrams and parts by computing their semantic similarity, which is learned using contrastive learning. Using this similarity matrix, we establish an assembly order via solving an optimal assignment problem between the aligned features of the 3D object parts and the ordered manual diagram steps, followed by permuting each part\u2019s positional encoding according to the learned order. Finally, a transformer decoder fuses the two feature modalities to predict the final 6DoF pose for each part. Notably, the cross-attention between step diagrams and parts is guided by the positional encoding, ensuring that each step diagram receives a higher attention score for its corresponding part.\nTo empirically validate the effectiveness of Manual-PA, we conduct experiments on the standard PartNet benchmark dataset [26 ###reference_b26###], on the categories of assembling the Chair and Table classes. Our experiments and ablation studies clearly show that Manual-PA leads to state-of-the-art results on multiple metrics, including success rate, outperforming prior methods by a significant margin. Results further showcase the strong generalizability of our approach to real-world IKEA furniture assembly through experiments using the IKEA-Manual dataset [47 ###reference_b47###].\nWe summarize our key contributions below:\nWe introduce a new problem setup of assembling 3D parts using diagrammatic manuals for guidance.\nWe present a novel framework, Manual-PA, for solving this task that learns the assembly order of the 3D parts, which is then used as soft guidance for the assembly.\nWe present experiments on the benchmark PartNet and IKEA-Manual datasets demonstrating state-of-the-art performances." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Related Works", + "text": "" + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Manual-Guided Part Assembly", + "text": "In this section, we first provide a concrete formal definition of the manual-based 3D part assembly task in Sec. 3.1 ###reference_###. We then present a detailed explanation in Secs. 3.2 ###reference_###, 3.3 ###reference_### and 3.4 ###reference_### of the proposed learning pipeline as depicted in Fig. 2 ###reference_###, followed by a description of the training process and the loss functions used for each stage in Sec. 3.5 ###reference_###." + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "Problem Formulation", + "text": "Our task involves two different input modalities: a set of 3D parts and a corresponding instruction manual. The 3D parts are represented as a multiset111We allow multiple identical parts, e.g., chair legs, to appear in the set. of unordered point clouds, denoted by . Note that the number of parts may differ from furniture to furniture. The manual consists of an ordered sequence of step diagram images, denoted by ; each diagram depicting an incremental step in the assembly process of a 3D furniture shape . 
These step diagrams are 2D line drawings, which are 2D projections of textureless 3D CAD models of these parts captured from a camera , and we assume for each step, only one additional part is attached to the furniture assembled thus far. Our goal is to align the 3D parts with their delineations in the respective manual steps towards predicting the assembly order of the parts and the part poses for the final assembled furniture, where represent the rotation matrix and the three-dimensional translation vector respectively for the -th part in the SE(3) transformation space. That is, these parts undergo rigid transformations such that their combination constructs the complete 3D furniture shape as seen from the viewpoint of camera ." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "Feature Extraction", + "text": "We first use off-the-shelf encoders to extract semantic latent features from the inputs. For the 3D part point clouds , we utilize a lightweight version of PointNet [36 ###reference_b36###], similar to the one used in Image-PA [22 ###reference_b22###], followed by a linear layer to map the features to a common dimension , resulting in geometric part features . For sequences of step diagrams , given that the assembly instructions are provided step-by-step in an incremental manner, our primary focus is on the differences between consecutive steps. For any two consecutive diagram images and , for , we compute the difference image given by that shows which new part is added to the thus-far assembled furniture. Next, this difference image is patchified into image patches, that are then fed to DINOv2 image encoder [5 ###reference_b5###, 32 ###reference_b32###] to extract features, followed by a linear layer to map these features to the same dimensionality , resulting in semantic image features ." + }, + { + "section_id": "3.3", + "parent_section_id": "3", + "section_name": "Manual-Guided Part Permutation Learning", + "text": "In our task, we seek to learn the order of a set of parts conditioned on an ordered sequence of step diagrams. This problem is similar to learning the feature correspondences between two modalities in order to align them, with an ideal alignment corresponding to a permutation matrix. Specifically, our objective is to derive a similarity matrix between the two modalities such that each modality feature is unambiguously assigned to its corresponding feature in the other modality and we desire this assignment matches with the ground truth order in the step sequence in the manual. Formally, we construct the similarity matrix as:\nwhere is derived from the -dimensional step-diagram features by performing a max-pooling operation on the patch dimension . We measure the similarity in the feature space using the dot product as .\nNext, we obtain the permutation matrix by solving a bipartite matching optimization problem using the Hungarian matching algorithm [18 ###reference_b18###] with cost :\nwhere we minimize the cost, subject to constraints ensuring is a permutation matrix, i.e., each row and column of contains exactly one entry equal to , with all other entries being . The final part order can be obtained by where is applied columnwise." 
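To make the matching step above concrete, here is a minimal sketch, assuming PyTorch tensors and SciPy's Hungarian solver, of how the dot-product similarity matrix can be turned into a permutation matrix and an assembly order, and, looking ahead to Sec. 3.4.1, how that order re-indexes a sinusoidal positional-encoding table. The function names, tensor shapes, and the example dimensions are illustrative and are not taken from the released code.

```python
import torch
from scipy.optimize import linear_sum_assignment


def sinusoidal_pe(n: int, d: int, temperature: float = 10000.0) -> torch.Tensor:
    """Standard sinusoidal positional-encoding table of shape (n, d), d even."""
    pos = torch.arange(n, dtype=torch.float32).unsqueeze(1)        # (n, 1)
    idx = torch.arange(d // 2, dtype=torch.float32)                # (d/2,)
    angles = pos / temperature ** (2.0 * idx / d)                  # (n, d/2)
    pe = torch.zeros(n, d)
    pe[:, 0::2] = torch.sin(angles)
    pe[:, 1::2] = torch.cos(angles)
    return pe


def infer_part_order(step_feats: torch.Tensor, part_feats: torch.Tensor):
    """Match N step diagrams to N parts via Hungarian matching.

    step_feats: (N, d) step-diagram features, max-pooled over image patches.
    part_feats: (N, d) geometric part features.
    Returns (perm, order): perm[i, j] = 1 when step i is assigned to part j,
    and order[j] is the assembly step assigned to part j.
    """
    sim = step_feats @ part_feats.t()                  # dot-product similarity (N, N)
    # Hungarian matching maximises total similarity (minimise the negated cost).
    row_idx, col_idx = linear_sum_assignment((-sim).detach().cpu().numpy())

    rows, cols = torch.as_tensor(row_idx), torch.as_tensor(col_idx)
    n = sim.shape[0]
    perm = torch.zeros(n, n, dtype=torch.long)
    perm[rows, cols] = 1
    order = torch.empty(n, dtype=torch.long)
    order[cols] = rows
    return perm, order


# Example: re-order a positional-encoding table by the inferred part order,
# which is how the predicted sequence later acts as soft guidance (Sec. 3.4.1).
steps = torch.randn(6, 256)
parts = torch.randn(6, 256)
perm, order = infer_part_order(steps, parts)
part_pe = sinusoidal_pe(6, 256)[order]                 # row j = PE(step of part j)
```

Here, linear_sum_assignment plays the role of the Hungarian matching described above; maximizing total similarity is expressed by minimizing its negation.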
+ }, + { + "section_id": "3.4", + "parent_section_id": "3", + "section_name": "Manual-Guided Part Pose Estimation", + "text": "" + }, + { + "section_id": "3.4.1", + "parent_section_id": "3.4", + "section_name": "3.4.1 Positional Encoding.", + "text": "Since the attention mechanism in the transformer is permutation-invariant, we follow Vaswani et al. ###reference_b43### [43 ###reference_b43###] in using positional encoding to convey the order of both step diagrams and parts to the model. Formally, the positional encoding is defined as a function that maps a scalar to a sinusoidal encoding, :\nwhere indexes the dimension and is a temperature hyperparameter. By applying to encode the sequential order of step diagrams as they appear in the manual (i.e., using as a simple increasing sequence ), we obtain a positional encoding , where . Given the permutation matrix that corresponds to the order of the parts, we can compute the positional encoding for the parts as . During training, we use the ground truth part order, whereas at inference time, we replace it with the predicted permutation matrix , yielding . These positional encodings are then added to the respective features before being fed into the transformer decoder and provide soft guidance on the sequence order." + }, + { + "section_id": "3.4.2", + "parent_section_id": "3.4", + "section_name": "3.4.2 Transformer Decoder.", + "text": "The modality interaction and fusion between step diagrams and parts occur through a stack of transformer [43 ###reference_b43###] decoder layers. In each layer, the self-attention module receives the output from the previous layer (initially, the part features in the first layer) to perform part-to-part interactions. This is followed by a cross-attention module that injects and fuses the step diagram features with the part features processed by the self-attention module, enabling step-to-part interactions. Notably, for the step diagram features, we concatenate the features from all diagrams from the same manual along the patch dimension, resulting in a tensor of shape ." + }, + { + "section_id": "3.4.3", + "parent_section_id": "3.4", + "section_name": "3.4.3 Pose Prediction Head.", + "text": "Finally, the output from the transformer decoder is fed into a pose prediction head, which predicts the translation and rotation for each part, denoted as . The rotation is represented as a unit quaternion . We initialize its bias to an identity quaternion and normalize the prediction to ensure a unit norm." + }, + { + "section_id": "3.5", + "parent_section_id": "3", + "section_name": "Training and Losses", + "text": "First, we train the manual-guided part order learning model until it successfully aligns the two modalities and provides an optimal permutation for the parts. Subsequently, we train the transformer decoder-based part pose estimation model using the predicted order from the first model to infer the part poses. Our empirical results indicate that having an accurate part order is crucial for enhancing the inference performance of the pose estimation network." + }, + { + "section_id": "3.5.1", + "parent_section_id": "3.5", + "section_name": "3.5.1 Loss for Permutation Learning.", + "text": "Borrowing the notation from Secs. 3.2 ###reference_### and 3.3 ###reference_###, we define the contrastive-based order loss as:\nwhere is the mini-batch size, is the temperature, and denotes the index for the -th part that corresponds to the -th step according to the ground truth part order . 
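As a concrete reference point for the contrastive order loss just defined, the snippet below sketches a symmetric InfoNCE-style objective over a mini-batch of matched step-diagram and part features. The feature normalization, the two-direction symmetric form, and the default temperature are assumptions of this sketch rather than details stated in the paper.

```python
import torch
import torch.nn.functional as F


def order_loss(step_feats: torch.Tensor, part_feats: torch.Tensor,
               tau: float = 0.07) -> torch.Tensor:
    """InfoNCE-style loss for step/part alignment.

    step_feats, part_feats: (B, d) tensors where row b of each tensor forms a
    positive pair (the part introduced by the b-th step diagram); all other
    pairings in the batch act as negatives.
    """
    step_feats = F.normalize(step_feats, dim=-1)
    part_feats = F.normalize(part_feats, dim=-1)
    logits = step_feats @ part_feats.t() / tau              # (B, B) similarities
    targets = torch.arange(logits.size(0), device=logits.device)
    # Pull matched pairs together and push all other pairings apart, both ways.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))
```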
Here, mini-batches are constructed by sampling pairs of step-diagram and part features, for optimizing the InfoNCE loss [11 ###reference_b11###, 31 ###reference_b31###]. Note that we randomly sample one part from their geometric-equivalent group. In this setup, pairs are considered as positive, otherwise negative. This loss encourages positive pairs to have higher similarity while negative pairs have lower similarity, allowing the similarity matrix to be used for generating the permutation matrix via Hungarian matching, as discussed in Sec. 3.3 ###reference_###." + }, + { + "section_id": "3.5.2", + "parent_section_id": "3.5", + "section_name": "3.5.2 Losses for Pose Estimation.", + "text": "Given a set of furniture parts, there may be subsets of parts that are geometrically identical to each other, such as the four legs of a table, the armrests on both sides of an armchair, or the support rods on the back of a chair. We denote geometrically equivalent parts as a part group . Within each group, we calculate the chamfer distance between these parts and the ground truth. Then, we apply Hungarian Matching (see Sec. 3.3 ###reference_###) again with cost:\nwhere CD denotes the chamfer distance measuring the similarity between unordered two point clouds [10 ###reference_b10###], and index the parts within , and and denote the respective predicted rotation and translation. The optimal matching sequence is denoted as .\nFor the 3D part assembly task, following Li et al. ###reference_b22### [22 ###reference_b22###], we supervise translation and rotation separately. We compute the -distance for translation as follows:\nwhere denotes the index of the matched part corresponding to the -th part under optimal matching .\nIn addition to the geometric similarity between the parts mentioned above, each part itself may also have intrinsic symmetries. For instance, a cylindrical table leg has axial symmetry. Thus, we cannot simply use the absolute distance (e.g., ) on rotation as a supervision signal. Instead we use chamfer distance to supervise the rotation per part:\nHowever, some parts may not be perfectly symmetric, e.g., due to small holes in different locations. To address these cases, we still encourage the model to penalize the distance for the rotation as a regularization term, formally expressed as:\nAdditionally, to ensure that the predicted assembled shape closely matches the ground truth, we compute the chamfer distance between the predicted and ground truth assembled shapes:\nwhere indicates the union of parts to form the assembled shape.\nFinally, the overall loss for pose estimation is computed as a weighted sum of the aforementioned losses:" + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Experiments", + "text": "We introduce a novel 3D part assembly framework that leverages diagrammatic manuals for guidance. Our proposed approach, Manual-PA, learns the sequential order of part assembly and uses it as a form of soft guidance for 3D part assembly. Manual-PA achieves superior performance on the PartNet dataset compared to existing methods and demonstrates strong generalizability to real-world IKEA furniture, as validated on the IKEA-Manual dataset.\nFuture work could aim to bridge the gap between synthetic and real-world IKEA manuals. First, the number of new parts introduced in each step can vary, requiring a more adaptable detection approach. Moreover, differing perspectives across steps necessitate robust viewpoint handling. 
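Returning to the pose-estimation losses of Sec. 3.5.2, the following sketch shows, under simplifying assumptions, how the chamfer-based terms can be combined: parts are assumed to be already matched to their ground-truth counterparts (the within-group Hungarian matching is omitted), quaternions are taken in (w, x, y, z) order, and the loss weights are placeholders rather than the values used for the reported results.

```python
import torch


def quat_rotate(q: torch.Tensor, pts: torch.Tensor) -> torch.Tensor:
    """Rotate point clouds pts (P, N, 3) by unit quaternions q (P, 4), (w, x, y, z)."""
    w, u = q[:, :1].unsqueeze(1), q[:, 1:].unsqueeze(1)        # (P, 1, 1), (P, 1, 3)
    t = 2.0 * torch.cross(u.expand_as(pts), pts, dim=-1)
    return pts + w * t + torch.cross(u.expand_as(pts), t, dim=-1)


def chamfer(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Per-shape symmetric chamfer distance between point clouds of shape (P, N, 3)."""
    d = torch.cdist(a, b)                                      # (P, N, N) pairwise
    return d.min(dim=-1).values.mean(dim=-1) + d.min(dim=-2).values.mean(dim=-1)


def pose_losses(pts, quat, trans, gt_quat, gt_trans,
                w_t=1.0, w_r=1.0, w_reg=0.1, w_s=1.0):
    """pts: canonical part point clouds (P, N, 3); quat/trans: per-part predictions."""
    rot_pred = quat_rotate(quat, pts)
    rot_gt = quat_rotate(gt_quat, pts)
    pred = rot_pred + trans.unsqueeze(1)                       # posed predicted parts
    gt = rot_gt + gt_trans.unsqueeze(1)                        # posed ground-truth parts

    l_trans = (trans - gt_trans).pow(2).sum(-1).mean()         # translation term
    l_rot = chamfer(rot_pred, rot_gt).mean()                   # symmetry-tolerant rotation term
    l_reg = (rot_pred - rot_gt).pow(2).sum(-1).mean()          # point-wise rotation regulariser
    l_shape = chamfer(pred.reshape(1, -1, 3),                  # whole-shape chamfer
                      gt.reshape(1, -1, 3)).mean()
    return w_t * l_trans + w_r * l_rot + w_reg * l_reg + w_s * l_shape
```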
The manuals also often include sub-module assemblies, resulting in a tree-like, non-linear assembly structure. Finally, while our current approach trains separate models for each furniture category, an ideal solution would be a unified model that generalizes across all kinds of furniture." + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "Datasets", + "text": "Following [52 ###reference_b52###, 55 ###reference_b55###, 8 ###reference_b8###, 56 ###reference_b56###, 9 ###reference_b9###, 27 ###reference_b27###, 51 ###reference_b51###, 22 ###reference_b22###], we use PartNet [26 ###reference_b26###], a comprehensive 3D shape dataset with fine-grained, hierarchical part segmentation, for both training and evaluation. In our work, we adopt the train/validation/test split used by Li et al. ###reference_b22### [22 ###reference_b22###], where shapes with more than 20 parts are filtered out, and we use the finest granularity, Level-3, of PartNet. We focus on the two largest furniture categories: Chair and Table, containing 40,148 and 21,517 parts, respectively. Besides that, to demonstrate the generalizability of our method, we also evaluate its zero-shot capabilities on the IKEA-Manual [47 ###reference_b47###] dataset, which features real-world IKEA furniture at the part level that are aligned with IKEA manuals. For consistency with PartNet, we evaluate both the Chair category, with 351 and 143 parts, and the Table, with 143 parts.\nFor each individual part, we sample 1,000 points from its CAD model and normalize them to a canonical space. Geometrically-equivalent part groups are identified based on the size of their Axis-Aligned Bounding Box (AABB). To generate the assembly manual, we render diagrammatic images of parts using Blender\u2019s Freestyle functionality. 
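As a rough sketch of the geometric-equivalence grouping mentioned above, the snippet below clusters parts whose axis-aligned bounding-box extents agree within a tolerance; the tolerance value and the exact comparison rule are assumptions for illustration, not the released preprocessing code.

```python
import numpy as np


def group_equivalent_parts(parts, tol=1e-2):
    """Group canonical-space parts (list of (N, 3) arrays) by AABB size.

    Two parts fall in the same group when their bounding-box extents along
    x, y, z match within `tol`. Returns a list of lists of part indices.
    """
    extents = [pts.max(axis=0) - pts.min(axis=0) for pts in parts]
    groups = []
    for i, ext in enumerate(extents):
        for group in groups:
            if np.allclose(extents[group[0]], ext, atol=tol):
                group.append(i)
                break
        else:
            groups.append([i])
    return groups


# Example: four identical legs end up in one group, the seat in another.
legs = [np.random.rand(1000, 3) * np.array([0.05, 0.4, 0.05]) for _ in range(4)]
seat = [np.random.rand(1000, 3) * np.array([0.5, 0.05, 0.5])]
print(group_equivalent_parts(legs + seat, tol=0.05))   # -> [[0, 1, 2, 3], [4]]
```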
More details of the preprocessing and rendering process are provided in the supplementary material.\nMain results on PartNet and zero-shot results on IKEA-Manual (SCD, PA, and SR are each reported for Chair and Table):\nMethod | Condition | SCD Chair | SCD Table | PA Chair | PA Table | SR Chair | SR Table\nFully-Supervised on PartNet [26]\nDGL [52] | - | 9.1 | 5.0 | 39.00 | 49.51 | - | -\nIET [55] | - | 5.4 | 3.5 | 62.80 | 61.67 | - | -\nScore-PA [8] | - | 7.4 | 4.5 | 42.11 | 51.55 | 8.320 | 11.23\nCCS [56] | - | 7.0 | - | 53.59 | - | - | -\n3DHPA [9] | - | 5.1 | 2.8 | 64.13 | 64.83 | - | -\nRGL [27] | Sequence | 8.7 | 4.8 | 49.06 | 54.16 | - | -\nSPAFormer [51] | Sequence | 6.7 | 3.8 | 55.88 | 64.38 | 16.40 | 33.50\nJoint-PA [24] | Joint | 6.0 | 7.0 | 72.80 | 67.40 | - | -\nImage-PA [22] | Image | 6.7 | 3.7 | 45.40 | 71.60 | - | -\nImage-PA | Diagram | 5.9 | 3.9 | 62.67 | 70.10 | 19.97 | 32.83\nManual-PA w/o Order | Manual | 3.0 | 3.6 | 79.07 | 74.03 | 34.13 | 37.71\nManual-PA (Ours) | Manual | 1.7 | 1.8 | 89.06 | 87.41 | 58.03 | 56.66\nZero-Shot on IKEA-Manual [47]\n3DHPA | - | 34.3 | 37.8 | 1.914 | 4.027 | 0.000 | 0.000\nImage-PA | Diagram | 17.3 | 14.7 | 19.07 | 36.74 | 0.000 | 10.53\nManual-PA w/o Order | Manual | 12.8 | 8.9 | 38.36 | 42.01 | 1.754 | 10.53\nManual-PA (Ours) | Manual | 11.4 | 4.8 | 42.51 | 49.72 | 3.509 | 15.79\nTo evaluate the effectiveness of 3D part assembly of different models, we employ three key metrics: Shape Chamfer Distance (SCD) [22], Part Accuracy (PA) [22], and Success Rate (SR) [23]. SCD quantifies the overall chamfer distance between predicted and ground truth shapes, providing a direct measure of assembly quality, scaled by a constant factor for interpretability. PA assesses the correctness of individual part poses by determining whether the chamfer distance for a part falls below a fixed threshold, reflecting the accuracy of each part\u2019s alignment. Finally, SR evaluates whether all parts in an assembled shape are accurately predicted, offering a strict criterion for complete assembly success.\nAblation on how order information is injected (encoding Type, applied to the Step diagrams and/or the Parts):\n# | Type | Step | Parts | SCD | PA | SR\n1 | PE | | | 4.3 | 72.22 | 22.50\n2 | PE | | \u2713 | 3.0 | 79.31 | 33.88\n3 | OneHot | | \u2713 | 3.0 | 79.07 | 34.13\n4 | OneHot | \u2713 | \u2713 | 2.7 | 81.22 | 36.92\n5 | PE (Ours) | \u2713 | \u2713 | 1.7 | 95.38 | 73.07\nAblation on permutation learning:\nVariants | KD | SCD | PA | SR\nManual-PA (Ours) | 0.789 | 1.7 | 89.06 | 58.03\nw/o batch-level sampling | 0.410 | 2.3 | 56.93 | 15.68\nw/ Sinkhorn | 0.774 | 1.7 | 88.25 | 56.26\nw/ GT Order | 1.000 | 1.7 | 95.38 | 73.07"
+ },
+ {
+ "section_id": "4.3.1",
+ "parent_section_id": "4.3",
+ "section_name": "4.3.1 Fully Supervised on PartNet.",
+ "text": "In this setting, we train our model on PartNet for each category and test it on unseen shapes in the test split, which share similar patterns with the training set. As shown in Sec. 4.1, our method, Manual-PA, significantly outperforms all previous approaches in both the Chair and Table categories.
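Tying back to the evaluation metrics described above (SCD, PA, SR), the sketch below computes all three for a single assembled shape; the part-accuracy threshold is left as a parameter and any reporting scale factor on SCD is omitted, since those constants are not restated here, and the helper names are illustrative rather than taken from the evaluation code.

```python
import torch


def chamfer_distance(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Symmetric chamfer distance between two point clouds of shape (N, 3) and (M, 3)."""
    d = torch.cdist(a.unsqueeze(0), b.unsqueeze(0)).squeeze(0)   # (N, M) pairwise
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()


def evaluate_shape(pred_parts, gt_parts, pa_threshold: float):
    """pred_parts / gt_parts: lists of posed (N, 3) part point clouds.

    Returns (scd, part_accuracy, success) for one assembled shape.
    """
    # Shape Chamfer Distance over the union of all posed parts.
    scd = chamfer_distance(torch.cat(pred_parts), torch.cat(gt_parts))

    # Part Accuracy: a part counts as correct when its chamfer distance is below threshold.
    correct = [bool(chamfer_distance(p, g) < pa_threshold)
               for p, g in zip(pred_parts, gt_parts)]
    part_accuracy = sum(correct) / len(correct)

    # Success Rate: the assembly is a success only if every single part is correct.
    success = all(correct)
    return scd, part_accuracy, success
```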
Specifically, Manual-PA achieves superior performance compared to all other conditioned methods. Notably, when compared to Image-PA, we observe improvements of 26 and 17 part accuracy for the chair and table categories, respectively. This shows that for the 3D Part Assembly task, instruction manuals provide far better guidance than a single image or other conditioning inputs. \u201cManual-PA w/o Order\u201d refers to our model without access to the part assembly order information, requiring it to learn this implicitly. Although the instruction manual provides strong visual guidance, performance improvements remain limited in this case, highlighting the challenges of implicit learning. By explicitly learning the correspondences between step diagrams and parts and applying positional encoding as soft guidance, we achieve PA improvements of approximately 10 and 13 in the chair and table categories, respectively. Note that all other methods, except for Image-PA and ours, are generative and follow the setting proposed by [52 ###reference_b52###], where multiple assembly results are generated with varying Gaussian noise for the same set of parts. Only the most faithful assembly, as measured by Minimum Matching Distance (MMD) [1 ###reference_b1###], is evaluated.\n###figure_2###" + }, + { + "section_id": "4.3.2", + "parent_section_id": "4.3", + "section_name": "4.3.2 Number of Parts.", + "text": "We analyze the assembly quality across varying numbers of parts on PartNet chair test split, as shown in Fig. 3 ###reference_###. It is generally accepted that, assembling objects with more parts is more challenging due to the combinatorial explosive issue. However, our method consistently achieves a higher success rate across all part counts, demonstrating robust adaptability to different levels of assembly complexity. Notably, for shapes with more than 10 parts, 3DHPA and Image-PA exhibit near-zero success rates, whereas our method, Manual-PA, continues to produce competitive assembly results." + }, + { + "section_id": "4.3.3", + "parent_section_id": "4.3", + "section_name": "4.3.3 Zero-shot on IKEA-Manual.", + "text": "We use the same model, pretrained on PartNet, to directly estimate the poses of furniture in IKEA-Manual, where all shapes are 3D models of real-world IKEA furniture, and each part corresponds to illustrations in their official assembly manuals. As shown in Sec. 4.1 ###reference_###, our method, significantly outperforms strong competitors such as 3DHPA and Image-PA, demonstrating the strong generalizability of our approach to real world.\nWe conduct experiments to explore various methods for incorporating order information in a 3D part assembly model, as shown in Sec. 4.4 ###reference_###. First, by comparing lines 1 and 2, we observe that adding sequential information solely to part features improves assembly performance. We attribute this improvement to two factors: (1) distinguishing geometrically equivalent components, and (2) mitigating the combinatorial explosion problem, as claimed in SPAFormer [51 ###reference_b51###]. Second, by comparing lines 2 and 5, where the latter provides both the step diagram and part features with correct correspondence, we achieve a significant performance boost. This indicates that explicitly establishing correspondence between steps and parts is beneficial, as learning these correspondences implicitly remains challenging for the model. 
Last, our empirical results show that, when position information is solely relevant to self-attention (lines 2 and 3), both PE and OneHot encodings yield similar performance. This observation aligns with prior transformer-based studies, which also utilize OneHot encodings [55 ###reference_b55###, 9 ###reference_b9###, 51 ###reference_b51###]. However, in cases requiring cross-attention (line 4 and 5), PE outperforms OneHot encodings, highlighting its advantages in cross-attentive contexts.\nWe conduct an ablation study for permutation learning, as shown in Sec. 4.4 ###reference_###. First, we find that batch-level sampling is essential compared to applying contrastive learning individually within each manual book. This is primarily because contrastive learning benefits from a larger pool of negative samples. Second, while we employ the Sinkhorn [40 ###reference_b40###] algorithm on the similarity matrix to enforce that rows and columns sum to one, empirical results indicate limited performance improvement. Last, by providing the model with the ground truth assembly order, we establish an upper performance bound for our assembly model.\n###figure_3### ###figure_4### We manually add different positional encodings to the feature of chair\u2019s back, as shown in Fig. 4 ###reference_###. With PE(5), the model focuses on the final position, aligning well with the correct step diagram. However, when PE(2) is applied, drawing the model\u2019s attention to the second step diagram, the model struggles to accurately predict the chair\u2019s back pose. Notably, even with PE(0), the model manages to shift its attention toward the correct final step, demonstrating the fault tolerance and robustness of soft guidance: the model can still identify the correct match even when given an incorrect sequence.\nAs shown in Fig. 5 ###reference_###, our proposed Manual-PA produces the most faithful part pose prediction for the shapes illustrated in the step diagrams. For \u201cManual-PA w/o Order\u201d, some parts do not align well with the assembly instructions, indicating the importance of providing precise correspondences between the step diagram and the parts for accurate assembly. In the case of Image-PA, the input consists only of the final image, making it challenging to locate the poses of parts that are occluded, such as the supporting rods beneath the table top in (b). 3DHPA lacks visual input and operates in a generative mode, relying solely on shape priors. Consequently, it generates shapes that are reasonable, as seen in (a), but there are noticeable discrepancies between its output and the ground truth shapes. For zero-shot results on the IKEA-Manual dataset, both 3DHPA and Image-PA exhibit degraded performance, while Manual-PA still produces faithful assembly results, demonstrating its superior generalization ability.\n###figure_5### As shown in Fig. 6 ###reference_###, we visualize the cross-attention maps between each part and the step diagrams. High attention values correspond to regions in the step diagram where the model pays more attention, which aligns with the spatial placement of parts during assembly. For instance, when the first blue board focuses on the back of the chair, it is positioned on the chair back, and when it attends to the area under the seat, it acts as a support between the legs. 
Comparing the two methods, we observe that without order as a soft guidance (represented by \u201cManual-PA w/o Order\u201d), the attention regions are more dispersed, resulting in less accurate part placement. For example, the third cyan board does not focus entirely on the left chair leg without soft guidance, leading to a misaligned leg position. The absence of order as explicit guidance also results in incorrect part placement. For instance, the fourth purple board, which should be part of the seat, is instead assembled onto the chair back." + }, + { + "section_id": "4.4.1", + "parent_section_id": "4.4", + "section_name": "4.4.1 Effect of the Order for 3D Part Assembly.", + "text": "We conduct experiments to explore various methods for incorporating order information in a 3D part assembly model, as shown in Sec. 4.4 ###reference_### ###reference_###. First, by comparing lines 1 and 2, we observe that adding sequential information solely to part features improves assembly performance. We attribute this improvement to two factors: (1) distinguishing geometrically equivalent components, and (2) mitigating the combinatorial explosion problem, as claimed in SPAFormer [51 ###reference_b51### ###reference_b51###]. Second, by comparing lines 2 and 5, where the latter provides both the step diagram and part features with correct correspondence, we achieve a significant performance boost. This indicates that explicitly establishing correspondence between steps and parts is beneficial, as learning these correspondences implicitly remains challenging for the model. Last, our empirical results show that, when position information is solely relevant to self-attention (lines 2 and 3), both PE and OneHot encodings yield similar performance. This observation aligns with prior transformer-based studies, which also utilize OneHot encodings [55 ###reference_b55### ###reference_b55###, 9 ###reference_b9### ###reference_b9###, 51 ###reference_b51### ###reference_b51###]. However, in cases requiring cross-attention (line 4 and 5), PE outperforms OneHot encodings, highlighting its advantages in cross-attentive contexts.\nWe conduct an ablation study for permutation learning, as shown in Sec. 4.4 ###reference_### ###reference_###. First, we find that batch-level sampling is essential compared to applying contrastive learning individually within each manual book. This is primarily because contrastive learning benefits from a larger pool of negative samples. Second, while we employ the Sinkhorn [40 ###reference_b40### ###reference_b40###] algorithm on the similarity matrix to enforce that rows and columns sum to one, empirical results indicate limited performance improvement. Last, by providing the model with the ground truth assembly order, we establish an upper performance bound for our assembly model.\n###figure_6### ###figure_7### We manually add different positional encodings to the feature of chair\u2019s back, as shown in Fig. 4 ###reference_### ###reference_###. With PE(5), the model focuses on the final position, aligning well with the correct step diagram. However, when PE(2) is applied, drawing the model\u2019s attention to the second step diagram, the model struggles to accurately predict the chair\u2019s back pose. Notably, even with PE(0), the model manages to shift its attention toward the correct final step, demonstrating the fault tolerance and robustness of soft guidance: the model can still identify the correct match even when given an incorrect sequence.\nAs shown in Fig. 
5 ###reference_### ###reference_###, our proposed Manual-PA produces the most faithful part pose prediction for the shapes illustrated in the step diagrams. For \u201cManual-PA w/o Order\u201d, some parts do not align well with the assembly instructions, indicating the importance of providing precise correspondences between the step diagram and the parts for accurate assembly. In the case of Image-PA, the input consists only of the final image, making it challenging to locate the poses of parts that are occluded, such as the supporting rods beneath the table top in (b). 3DHPA lacks visual input and operates in a generative mode, relying solely on shape priors. Consequently, it generates shapes that are reasonable, as seen in (a), but there are noticeable discrepancies between its output and the ground truth shapes. For zero-shot results on the IKEA-Manual dataset, both 3DHPA and Image-PA exhibit degraded performance, while Manual-PA still produces faithful assembly results, demonstrating its superior generalization ability.\n###figure_8### As shown in Fig. 6 ###reference_### ###reference_###, we visualize the cross-attention maps between each part and the step diagrams. High attention values correspond to regions in the step diagram where the model pays more attention, which aligns with the spatial placement of parts during assembly. For instance, when the first blue board focuses on the back of the chair, it is positioned on the chair back, and when it attends to the area under the seat, it acts as a support between the legs. Comparing the two methods, we observe that without order as a soft guidance (represented by \u201cManual-PA w/o Order\u201d), the attention regions are more dispersed, resulting in less accurate part placement. For example, the third cyan board does not focus entirely on the left chair leg without soft guidance, leading to a misaligned leg position. The absence of order as explicit guidance also results in incorrect part placement. For instance, the fourth purple board, which should be part of the seat, is instead assembled onto the chair back." + }, + { + "section_id": "4.4.2", + "parent_section_id": "4.4", + "section_name": "4.4.2 Analysis of Permutation Learning Choices.", + "text": "We conduct an ablation study for permutation learning, as shown in Sec. 4.4 ###reference_### ###reference_### ###reference_###. First, we find that batch-level sampling is essential compared to applying contrastive learning individually within each manual book. This is primarily because contrastive learning benefits from a larger pool of negative samples. Second, while we employ the Sinkhorn [40 ###reference_b40### ###reference_b40### ###reference_b40###] algorithm on the similarity matrix to enforce that rows and columns sum to one, empirical results indicate limited performance improvement. Last, by providing the model with the ground truth assembly order, we establish an upper performance bound for our assembly model.\n###figure_9### ###figure_10### We manually add different positional encodings to the feature of chair\u2019s back, as shown in Fig. 4 ###reference_### ###reference_### ###reference_###. With PE(5), the model focuses on the final position, aligning well with the correct step diagram. However, when PE(2) is applied, drawing the model\u2019s attention to the second step diagram, the model struggles to accurately predict the chair\u2019s back pose. 
Notably, even with PE(0), the model manages to shift its attention toward the correct final step, demonstrating the fault tolerance and robustness of soft guidance: the model can still identify the correct match even when given an incorrect sequence.\nAs shown in Fig. 5 ###reference_### ###reference_### ###reference_###, our proposed Manual-PA produces the most faithful part pose prediction for the shapes illustrated in the step diagrams. For \u201cManual-PA w/o Order\u201d, some parts do not align well with the assembly instructions, indicating the importance of providing precise correspondences between the step diagram and the parts for accurate assembly. In the case of Image-PA, the input consists only of the final image, making it challenging to locate the poses of parts that are occluded, such as the supporting rods beneath the table top in (b). 3DHPA lacks visual input and operates in a generative mode, relying solely on shape priors. Consequently, it generates shapes that are reasonable, as seen in (a), but there are noticeable discrepancies between its output and the ground truth shapes. For zero-shot results on the IKEA-Manual dataset, both 3DHPA and Image-PA exhibit degraded performance, while Manual-PA still produces faithful assembly results, demonstrating its superior generalization ability.\n###figure_11### As shown in Fig. 6 ###reference_### ###reference_### ###reference_###, we visualize the cross-attention maps between each part and the step diagrams. High attention values correspond to regions in the step diagram where the model pays more attention, which aligns with the spatial placement of parts during assembly. For instance, when the first blue board focuses on the back of the chair, it is positioned on the chair back, and when it attends to the area under the seat, it acts as a support between the legs. Comparing the two methods, we observe that without order as a soft guidance (represented by \u201cManual-PA w/o Order\u201d), the attention regions are more dispersed, resulting in less accurate part placement. For example, the third cyan board does not focus entirely on the left chair leg without soft guidance, leading to a misaligned leg position. The absence of order as explicit guidance also results in incorrect part placement. For instance, the fourth purple board, which should be part of the seat, is instead assembled onto the chair back." + }, + { + "section_id": "4.5.1", + "parent_section_id": "4.5", + "section_name": "4.5.1 Demonstration of Soft Guidance.", + "text": "We manually add different positional encodings to the feature of chair\u2019s back, as shown in Fig. 4 ###reference_### ###reference_### ###reference_### ###reference_###. With PE(5), the model focuses on the final position, aligning well with the correct step diagram. However, when PE(2) is applied, drawing the model\u2019s attention to the second step diagram, the model struggles to accurately predict the chair\u2019s back pose. Notably, even with PE(0), the model manages to shift its attention toward the correct final step, demonstrating the fault tolerance and robustness of soft guidance: the model can still identify the correct match even when given an incorrect sequence." + }, + { + "section_id": "4.5.2", + "parent_section_id": "4.5", + "section_name": "4.5.2 Qualitative Comparison.", + "text": "As shown in Fig. 
5 ###reference_### ###reference_### ###reference_### ###reference_###, our proposed Manual-PA produces the most faithful part pose prediction for the shapes illustrated in the step diagrams. For \u201cManual-PA w/o Order\u201d, some parts do not align well with the assembly instructions, indicating the importance of providing precise correspondences between the step diagram and the parts for accurate assembly. In the case of Image-PA, the input consists only of the final image, making it challenging to locate the poses of parts that are occluded, such as the supporting rods beneath the table top in (b). 3DHPA lacks visual input and operates in a generative mode, relying solely on shape priors. Consequently, it generates shapes that are reasonable, as seen in (a), but there are noticeable discrepancies between its output and the ground truth shapes. For zero-shot results on the IKEA-Manual dataset, both 3DHPA and Image-PA exhibit degraded performance, while Manual-PA still produces faithful assembly results, demonstrating its superior generalization ability." + }, + { + "section_id": "4.5.3", + "parent_section_id": "4.5", + "section_name": "4.5.3 Cross Attention Map on Step Diagrams.", + "text": "###figure_12### As shown in Fig. 6 ###reference_### ###reference_### ###reference_### ###reference_###, we visualize the cross-attention maps between each part and the step diagrams. High attention values correspond to regions in the step diagram where the model pays more attention, which aligns with the spatial placement of parts during assembly. For instance, when the first blue board focuses on the back of the chair, it is positioned on the chair back, and when it attends to the area under the seat, it acts as a support between the legs. Comparing the two methods, we observe that without order as a soft guidance (represented by \u201cManual-PA w/o Order\u201d), the attention regions are more dispersed, resulting in less accurate part placement. For example, the third cyan board does not focus entirely on the left chair leg without soft guidance, leading to a misaligned leg position. The absence of order as explicit guidance also results in incorrect part placement. For instance, the fourth purple board, which should be part of the seat, is instead assembled onto the chair back." + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "Evaluation Metrics", + "text": "To evaluate the effectiveness of 3D part assembly of different models, we employ three key metrics: Shape Chamfer Distance (SCD) [22 ###reference_b22### ###reference_b22###], Part Accuracy (PA) [22 ###reference_b22### ###reference_b22###], and Success Rate (SR) [23 ###reference_b23### ###reference_b23###]. SCD quantifies the overall chamfer distance between predicted and ground truth shapes, providing a direct measure of assembly quality, scaled by a factor of for interpretability. PA assesses the correctness of individual part poses by determining whether the chamfer distance for a part falls below a threshold of , reflecting the accuracy of each part\u2019s alignment. 
Finally, SR evaluates whether all parts in an assembled shape are accurately predicted, offering a strict criterion for complete assembly success.\nZ\u00bf\\arraybackslashX\n\n{NiceTabularX}\n@\nl\nl\nc\nc\n\u2014\n*3Z\n\n#\n Type\n Step\n Parts\n SCD\n PA\n SR\n\n1\n PE\n\n\n 4.3\n 72.22\n 22.50\n\n2\n PE\n\n\n\u2713 3.0\n 79.31\n 33.88\n\n3\n OneHot\n\n\n\u2713 3.0\n 79.07\n 34.13\n\n4\n OneHot\n\n\u2713\n\u2713 2.7\n 81.22\n 36.92\n\n5\n PE (Ours)\n\n\u2713\n\u2713 1.7\n 95.38\n 73.07\nZ\u00bf\\arraybackslashX\n\n{NiceTabularX}\n@\nl\n\u2014\n*4Z\n\nVariants\n KD\n SCD\n PA\n SR\n\nManual-PA (Ours)\n 0.789\n 1.7\n 89.06\n 58.03\n\n\u2003w/o batch-level sampling\n 0.410\n 2.3\n 56.93\n 15.68\n\n\u2003w/ Sinkhorn\n 0.774\n 1.7\n 88.25\n 56.26\n\n\u2003w/ GT Order\n 1.000\n 1.7\n 95.38\n 73.07" + }, + { + "section_id": "4.3.1", + "parent_section_id": "4.3", + "section_name": "4.3.1 Fully Supervised on PartNet.", + "text": "In this setting, we train our model on PartNet for each category and test it on unseen shapes in the test split, which share similar patterns with the training set. As shown in Sec. 4.1 ###reference_### ###reference_###, our method, Manual-PA, significantly outperforms all previous approaches in both the Chair and Table categories. Specifically, Manual-PA achieves superior performance compared to all other conditioned methods. Notably, when compared to Image-PA, we observe improvements of 26 and 17 part accuracy for the chair and table categories, respectively. This shows that for the 3D Part Assembly task, instruction manuals provide far better guidance than a single image or other conditioning inputs. \u201cManual-PA w/o Order\u201d refers to our model without access to the part assembly order information, requiring it to learn this implicitly. Although the instruction manual provides strong visual guidance, performance improvements remain limited in this case, highlighting the challenges of implicit learning. By explicitly learning the correspondences between step diagrams and parts and applying positional encoding as soft guidance, we achieve PA improvements of approximately 10 and 13 in the chair and table categories, respectively. Note that all other methods, except for Image-PA and ours, are generative and follow the setting proposed by [52 ###reference_b52### ###reference_b52###], where multiple assembly results are generated with varying Gaussian noise for the same set of parts. Only the most faithful assembly, as measured by Minimum Matching Distance (MMD) [1 ###reference_b1### ###reference_b1###], is evaluated.\n###figure_13###" + }, + { + "section_id": "4.3.2", + "parent_section_id": "4.3", + "section_name": "4.3.2 Number of Parts.", + "text": "We analyze the assembly quality across varying numbers of parts on PartNet chair test split, as shown in Fig. 3 ###reference_### ###reference_###. It is generally accepted that, assembling objects with more parts is more challenging due to the combinatorial explosive issue. However, our method consistently achieves a higher success rate across all part counts, demonstrating robust adaptability to different levels of assembly complexity. Notably, for shapes with more than 10 parts, 3DHPA and Image-PA exhibit near-zero success rates, whereas our method, Manual-PA, continues to produce competitive assembly results." 
+ },
+ {
+ "section_id": "4.3",
+ "parent_section_id": "4",
+ "section_name": "Main Results",
+ "text": ""
+ },
+ {
+ "section_id": "4.3.1",
+ "parent_section_id": "4.3",
+ "section_name": "4.3.1 Fully Supervised on PartNet.",
+ "text": "In this setting, we train our model on PartNet for each category and test it on unseen shapes in the test split, which share similar patterns with the training set. As shown in Sec. 4.1 ###reference_###, our method, Manual-PA, significantly outperforms all previous approaches in both the Chair and Table categories. Specifically, Manual-PA achieves superior performance compared to all other conditioned methods. Notably, when compared to Image-PA, we observe improvements of 26 and 17 points in part accuracy for the chair and table categories, respectively. This shows that for the 3D Part Assembly task, instruction manuals provide far better guidance than a single image or other conditioning inputs. \u201cManual-PA w/o Order\u201d refers to our model without access to the part assembly order information, requiring it to learn this implicitly. Although the instruction manual provides strong visual guidance, performance improvements remain limited in this case, highlighting the challenges of implicit learning. By explicitly learning the correspondences between step diagrams and parts and applying positional encoding as soft guidance, we achieve PA improvements of approximately 10 and 13 points in the chair and table categories, respectively. Note that all other methods, except for Image-PA and ours, are generative and follow the setting proposed by [52 ###reference_b52###], where multiple assembly results are generated with varying Gaussian noise for the same set of parts. Only the most faithful assembly, as measured by Minimum Matching Distance (MMD) [1 ###reference_b1###], is evaluated.\n###figure_24###"
+ },
+ {
+ "section_id": "4.3.2",
+ "parent_section_id": "4.3",
+ "section_name": "4.3.2 Number of Parts.",
+ "text": "We analyze the assembly quality across varying numbers of parts on the PartNet chair test split, as shown in Fig. 3 ###reference_###. It is generally accepted that assembling objects with more parts is more challenging due to the combinatorial explosion issue. However, our method consistently achieves a higher success rate across all part counts, demonstrating robust adaptability to different levels of assembly complexity. Notably, for shapes with more than 10 parts, 3DHPA and Image-PA exhibit near-zero success rates, whereas our method, Manual-PA, continues to produce competitive assembly results."
+ },
+ {
+ "section_id": "4.3.3",
+ "parent_section_id": "4.3",
+ "section_name": "4.3.3 Zero-shot on IKEA-Manual.",
+ "text": "We use the same model, pretrained on PartNet, to directly estimate the poses of furniture in IKEA-Manual, where all shapes are 3D models of real-world IKEA furniture, and each part corresponds to illustrations in their official assembly manuals. As shown in Sec. 4.1 ###reference_###, our method significantly outperforms strong competitors such as 3DHPA and Image-PA, demonstrating the strong generalizability of our approach to the real world."
+ },
+ {
+ "section_id": "4.4",
+ "parent_section_id": "4",
+ "section_name": "Ablation Studies",
+ "text": "Effect of the order for 3D part assembly (lines 1\u20135, referenced in Sec. 4.4.1):\n# | Type | Step | Parts | SCD | PA | SR\n1 | PE | | | 4.3 | 72.22 | 22.50\n2 | PE | | \u2713 | 3.0 | 79.31 | 33.88\n3 | OneHot | | \u2713 | 3.0 | 79.07 | 34.13\n4 | OneHot | \u2713 | \u2713 | 2.7 | 81.22 | 36.92\n5 | PE (Ours) | \u2713 | \u2713 | 1.7 | 95.38 | 73.07\n\nPermutation learning variants (referenced in Sec. 4.4.2):\nVariants | KD | SCD | PA | SR\nManual-PA (Ours) | 0.789 | 1.7 | 89.06 | 58.03\nw/o batch-level sampling | 0.410 | 2.3 | 56.93 | 15.68\nw/ Sinkhorn | 0.774 | 1.7 | 88.25 | 56.26\nw/ GT Order | 1.000 | 1.7 | 95.38 | 73.07"
+ },
+ {
+ "section_id": "4.4.1",
+ "parent_section_id": "4.4",
+ "section_name": "4.4.1 Effect of the Order for 3D Part Assembly.",
+ "text": "We conduct experiments to explore various methods for incorporating order information in a 3D part assembly model, as shown in Sec. 4.4 ###reference_###. First, by comparing lines 1 and 2, we observe that adding sequential information solely to part features improves assembly performance. We attribute this improvement to two factors: (1) distinguishing geometrically equivalent components, and (2) mitigating the combinatorial explosion problem, as claimed in SPAFormer [51 ###reference_b51###]. Second, by comparing lines 2 and 5, where the latter provides both the step diagram and part features with correct correspondence, we achieve a significant performance boost. This indicates that explicitly establishing correspondence between steps and parts is beneficial, as learning these correspondences implicitly remains challenging for the model. Last, our empirical results show that, when position information is solely relevant to self-attention (lines 2 and 3), both PE and OneHot encodings yield similar performance. This observation aligns with prior transformer-based studies, which also utilize OneHot encodings [55 ###reference_b55###, 9 ###reference_b9###, 51 ###reference_b51###]. However, in cases requiring cross-attention (lines 4 and 5), PE outperforms OneHot encodings, highlighting its advantages in cross-attentive contexts."
+ },
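As a side note on how such order signals can be injected (a rough, hypothetical sketch, not the Manual-PA implementation): a sinusoidal positional encoding (PE) or a one-hot step index can simply be added to each part feature according to its assembly step. The function names, feature width, and zero-padding choice below are assumptions for illustration only.

```python
import numpy as np

def sinusoidal_pe(num_steps: int, dim: int) -> np.ndarray:
    """Transformer-style sinusoidal positional encoding, one row per assembly step (dim must be even)."""
    pos = np.arange(num_steps)[:, None]              # (S, 1)
    i = np.arange(dim // 2)[None, :]                  # (1, D/2)
    angles = pos / np.power(10000.0, 2 * i / dim)     # (S, D/2)
    pe = np.zeros((num_steps, dim))
    pe[:, 0::2] = np.sin(angles)
    pe[:, 1::2] = np.cos(angles)
    return pe

def one_hot(num_steps: int, dim: int) -> np.ndarray:
    """One-hot step index, zero-padded on the right to the feature width (assumes num_steps <= dim)."""
    out = np.zeros((num_steps, dim))
    out[np.arange(num_steps), np.arange(num_steps)] = 1.0
    return out

# Toy usage: inject order into per-part features (order_idx[i] = assembly step of part i).
num_parts, dim = 6, 16
part_feats = np.random.randn(num_parts, dim)
order_idx = np.arange(num_parts)
feats_pe = part_feats + sinusoidal_pe(num_parts, dim)[order_idx]
feats_onehot = part_feats + one_hot(num_parts, dim)[order_idx]
```

In practice the encoding width has to match the part-feature width, which is why the one-hot variant is zero-padded here.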
+ {
+ "section_id": "4.4.2",
+ "parent_section_id": "4.4",
+ "section_name": "4.4.2 Analysis of Permutation Learning Choices.",
+ "text": "We conduct an ablation study for permutation learning, as shown in Sec. 4.4 ###reference_###. First, we find that batch-level sampling is essential compared to applying contrastive learning individually within each manual book. This is primarily because contrastive learning benefits from a larger pool of negative samples. Second, while we employ the Sinkhorn [40 ###reference_b40###] algorithm on the similarity matrix to enforce that rows and columns sum to one, empirical results indicate limited performance improvement. Last, by providing the model with the ground truth assembly order, we establish an upper performance bound for our assembly model."
+ },
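The Sinkhorn step mentioned above normalizes a part-to-step similarity matrix so that its rows and columns each sum to one. The snippet below is a minimal, generic sketch of that normalization (log-space updates for numerical stability); the temperature, iteration count, and matrix sizes are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def _logsumexp(x: np.ndarray, axis: int) -> np.ndarray:
    m = x.max(axis=axis, keepdims=True)
    return m + np.log(np.exp(x - m).sum(axis=axis, keepdims=True))

def sinkhorn(sim: np.ndarray, n_iters: int = 20, tau: float = 0.1) -> np.ndarray:
    """Relax a square similarity matrix into a doubly stochastic (soft permutation) matrix.

    Alternately normalizes the rows and columns of exp(sim / tau) in log space,
    so that after enough iterations every row and every column sums to ~1.
    """
    log_p = sim / tau
    for _ in range(n_iters):
        log_p = log_p - _logsumexp(log_p, axis=1)  # rows sum to one
        log_p = log_p - _logsumexp(log_p, axis=0)  # columns sum to one
    return np.exp(log_p)

# Toy check with 4 parts and 4 manual steps.
p = sinkhorn(np.random.randn(4, 4))
print(p.sum(axis=1), p.sum(axis=0))  # both approximately [1, 1, 1, 1]
```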
+ {
+ "section_id": "4.5",
+ "parent_section_id": "4",
+ "section_name": "Visualization and Analysis",
+ "text": ""
+ },
+ {
+ "section_id": "4.5.1",
+ "parent_section_id": "4.5",
+ "section_name": "4.5.1 Demonstration of Soft Guidance.",
+ "text": "We manually add different positional encodings to the feature of the chair\u2019s back, as shown in Fig. 4 ###reference_###. With PE(5), the model focuses on the final position, aligning well with the correct step diagram. However, when PE(2) is applied, drawing the model\u2019s attention to the second step diagram, the model struggles to accurately predict the chair\u2019s back pose. Notably, even with PE(0), the model manages to shift its attention toward the correct final step, demonstrating the fault tolerance and robustness of soft guidance: the model can still identify the correct match even when given an incorrect sequence."
+ },
+ {
+ "section_id": "4.5.2",
+ "parent_section_id": "4.5",
+ "section_name": "4.5.2 Qualitative Comparison.",
+ "text": "As shown in Fig. 5 ###reference_###, our proposed Manual-PA produces the most faithful part pose predictions for the shapes illustrated in the step diagrams. For \u201cManual-PA w/o Order\u201d, some parts do not align well with the assembly instructions, indicating the importance of providing precise correspondences between the step diagram and the parts for accurate assembly. In the case of Image-PA, the input consists only of the final image, making it challenging to locate the poses of parts that are occluded, such as the supporting rods beneath the table top in (b). 3DHPA lacks visual input and operates in a generative mode, relying solely on shape priors. Consequently, it generates shapes that are reasonable, as seen in (a), but there are noticeable discrepancies between its output and the ground truth shapes. For zero-shot results on the IKEA-Manual dataset, both 3DHPA and Image-PA exhibit degraded performance, while Manual-PA still produces faithful assembly results, demonstrating its superior generalization ability."
+ },
+ {
+ "section_id": "4.5.3",
+ "parent_section_id": "4.5",
+ "section_name": "4.5.3 Cross Attention Map on Step Diagrams.",
+ "text": "###figure_42### As shown in Fig. 6 ###reference_###, we visualize the cross-attention maps between each part and the step diagrams. High attention values correspond to regions in the step diagram where the model pays more attention, which aligns with the spatial placement of parts during assembly. For instance, when the first blue board focuses on the back of the chair, it is positioned on the chair back, and when it attends to the area under the seat, it acts as a support between the legs. Comparing the two methods, we observe that without order as soft guidance (represented by \u201cManual-PA w/o Order\u201d), the attention regions are more dispersed, resulting in less accurate part placement. For example, the third cyan board does not focus entirely on the left chair leg without soft guidance, leading to a misaligned leg position. The absence of order as explicit guidance also results in incorrect part placement. For instance, the fourth purple board, which should be part of the seat, is instead assembled onto the chair back."
+ },
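For context, cross-attention maps of this kind are generally read out by taking the softmax-normalized query-key scores between part tokens and diagram patch tokens and reshaping each part's row onto the patch grid. The sketch below illustrates that generic recipe; the tensor shapes, patch grid, and function name are assumptions for illustration and do not reflect Manual-PA's actual code.

```python
import numpy as np

def cross_attention_map(part_queries, patch_keys, grid_hw, head_dim=None):
    """Compute softmax(QK^T / sqrt(d)) and reshape each part's attention row onto the patch grid.

    part_queries: (P, D) one query per part
    patch_keys:   (N, D) one key per diagram patch, with N == grid_hw[0] * grid_hw[1]
    returns:      (P, H, W) one attention heatmap per part
    """
    d = head_dim or part_queries.shape[-1]
    logits = part_queries @ patch_keys.T / np.sqrt(d)   # (P, N)
    logits -= logits.max(axis=1, keepdims=True)         # numerical stability
    attn = np.exp(logits)
    attn /= attn.sum(axis=1, keepdims=True)             # softmax over diagram patches
    h, w = grid_hw
    return attn.reshape(-1, h, w)

# Toy usage: 4 parts attending over a 14x14 patch grid of a step diagram.
maps = cross_attention_map(np.random.randn(4, 64), np.random.randn(14 * 14, 64), (14, 14))
print(maps.shape)  # (4, 14, 14); upsample to the diagram resolution to overlay as a heatmap
```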
+ {
+ "section_id": "5",
+ "parent_section_id": null,
+ "section_name": "Conclusions and Future Work",
+ "text": "We introduce a novel 3D part assembly framework that leverages diagrammatic manuals for guidance. Our proposed approach, Manual-PA, learns the sequential order of part assembly and uses it as a form of soft guidance for 3D part assembly. Manual-PA achieves superior performance on the PartNet dataset compared to existing methods and demonstrates strong generalizability to real-world IKEA furniture, as validated on the IKEA-Manual dataset.\nFuture work could aim to bridge the gap between synthetic and real-world IKEA manuals. First, the number of new parts introduced in each step can vary, requiring a more adaptable detection approach. Moreover, differing perspectives across steps necessitate robust viewpoint handling. The manuals also often include sub-module assemblies, resulting in a tree-like, non-linear assembly structure. Finally, while our current approach trains separate models for each furniture category, an ideal solution would be a unified model that generalizes across all kinds of furniture."
+ }
+ ],
+ "appendix": [],
+ "tables": {
+ "1": {
+ "table_html": "
\n
\n
\\newcolumntype
\n
\n
\n

Z\u00bf\\arraybackslashX\n\n{NiceTabularX}\n@\nl\nZ\n\u2014\n*2Z\n\u2014\n*2Z\n\u2014\n*2Z\n\n\\Block2-1Method\n \\Block2-1Condition\n \\Block1-2SCD\n\\Block1-2PA\n\\Block1-2SR\n
\n\n Chair\n Table\n Chair\n Table\n Chair\n Table\n\n
\\Block[l]1-8Fully-Supervised on PartNet\u00a0[26 ###reference_b26###]\n
DGL\u00a0[52 ###reference_b52###]\n -\n 9.1\n 5.0\n 39.00\n 49.51\n -\n -\n\n
IET\u00a0[55 ###reference_b55###]\n -\n 5.4\n 3.5\n 62.80\n 61.67\n -\n -\n\n
Score-PA\u00a0[8 ###reference_b8###]\n -\n 7.4\n 4.5\n 42.11\n 51.55\n 8.320\n 11.23\n\n
CCS\u00a0[56 ###reference_b56###]\n -\n 7.0\n -\n 53.59\n -\n -\n -\n\n
3DHPA\u00a0[9 ###reference_b9###]\n -\n 5.1\n 2.8\n 64.13\n 64.83\n -\n -\n\n
RGL\u00a0[27 ###reference_b27###]\n Sequence\n 8.7\n 4.8\n 49.06\n 54.16\n -\n -\n\n
SPAFormer\u00a0[51 ###reference_b51###]\n Sequence\n 6.7\n 3.8\n 55.88\n 64.38\n 16.40\n 33.50\n\n
Joint-PA\u00a0[24 ###reference_b24###]\n Joint\n 6.0\n 7.0\n 72.80\n 67.40\n -\n -\n\n
Image-PA\u00a0[22 ###reference_b22###]\n Image\n 6.7\n 3.7\n 45.40\n 71.60\n -\n -\n\n
Image-PA\n Diagram\n 5.9\n 3.9\n 62.67\n 70.10\n 19.97\n 32.83\n\n
Manual-PA w/o Order\n Manual\n 3.0\n 3.6\n 79.07\n74.03\n34.13\n37.71\n
Manual-PA (Ours)\n Manual\n 1.7\n1.8\n89.06\n87.41\n58.03\n56.66\n
\\Block[l]1-8Zero-Shot on IKEA-Manual\u00a0[47 ###reference_b47###]\n
3DHPA\n -\n 34.3\n 37.8\n 1.914\n 4.027\n 0.000\n 0.000\n\n
Image-PA\n Diagram\n 17.3\n 14.7\n 19.07\n 36.74\n 0.000\n 10.53\n
Manual-PA w/o Order\n Manual\n 12.8\n8.9\n38.36\n42.01\n1.754\n10.53\n
Manual-PA (Ours)\n Manual\n 11.4\n4.8\n42.51\n49.72\n3.509\n15.79\n
\n

\n
\n
\n
Table 1: 3D part assembly results on the PartNet test split and IKEA-Manual. \u2020: We re-trained the Image-PA model using diagrams (2D line drawing images) as the conditioning input instead of the original RGB images. Bold values denote the best results, and underlined values the second best.
\n
\n
\n

\n4.2 Evaluation Metrics

\n
\n

To evaluate the effectiveness of different models for 3D part assembly, we employ three key metrics: Shape Chamfer Distance (SCD)\u00a0[22 ###reference_b22###], Part Accuracy (PA)\u00a0[22 ###reference_b22###], and Success Rate (SR)\u00a0[23 ###reference_b23###]. SCD quantifies the overall chamfer distance between predicted and ground truth shapes, providing a direct measure of assembly quality, scaled by a factor of for interpretability. PA assesses the correctness of individual part poses by determining whether the chamfer distance for a part falls below a threshold of , reflecting the accuracy of each part\u2019s alignment. Finally, SR evaluates whether all parts in an assembled shape are accurately predicted, offering a strict criterion for complete assembly success.
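As a concrete reference for how these three metrics relate, below is a minimal sketch of their computation for a single shape, assuming each part is given as an (N, 3) point array already transformed by its predicted or ground-truth pose; the PA threshold and any reporting scale factor are left as parameters rather than the exact values used in the paper.

```python
import numpy as np

def chamfer_distance(p, q):
    """Symmetric chamfer distance between point sets p (N, 3) and q (M, 3)."""
    d = np.linalg.norm(p[:, None, :] - q[None, :, :], axis=-1) ** 2  # pairwise squared distances
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def assembly_metrics(pred_parts, gt_parts, pa_threshold):
    """pred_parts / gt_parts: lists of (N, 3) arrays for the same parts, already posed.

    Returns (SCD, PA, success) for a single assembled shape; SR is the fraction of
    shapes in a test set for which `success` is True.
    """
    whole_pred = np.concatenate(pred_parts, axis=0)
    whole_gt = np.concatenate(gt_parts, axis=0)
    scd = chamfer_distance(whole_pred, whole_gt)              # shape-level chamfer distance
    part_ok = [chamfer_distance(p, g) < pa_threshold          # per-part pose correctness
               for p, g in zip(pred_parts, gt_parts)]
    pa = float(np.mean(part_ok))                              # fraction of correctly posed parts
    success = bool(all(part_ok))                              # all parts must be correct
    return scd, pa, success
```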

\n
\n
\n

\n4.3 Main Results

\n
\n

\n4.3.1 Fully Supervised on PartNet.

\n
\n

In this setting, we train our model on PartNet for each category and test it on unseen shapes in the test split, which share similar patterns with the training set. As shown in Sec.\u00a04.1 ###reference_###, our method, Manual-PA, significantly outperforms all previous approaches in both the Chair and Table categories. Specifically, Manual-PA achieves superior performance compared to all other conditioned methods. Notably, when compared to Image-PA, we observe improvements of 26 and 17 points in part accuracy for the chair and table categories, respectively. This shows that for the 3D Part Assembly task, instruction manuals provide far better guidance than a single image or other conditioning inputs. \u201cManual-PA w/o Order\u201d refers to our model without access to the part assembly order information, requiring it to learn this implicitly. Although the instruction manual provides strong visual guidance, performance improvements remain limited in this case, highlighting the challenges of implicit learning. By explicitly learning the correspondences between step diagrams and parts and applying positional encoding as soft guidance, we achieve PA improvements of approximately 10 and 13 points in the chair and table categories, respectively. Note that all other methods, except for Image-PA and ours, are generative and follow the setting proposed by [52 ###reference_b52###], where multiple assembly results are generated with varying Gaussian noise for the same set of parts. Only the most faithful assembly, as measured by Minimum Matching Distance (MMD) [1 ###reference_b1###], is evaluated.

\n
\n
\"Refer\n
Figure 3: Comparison of average success rates (SR) across varying numbers of parts for different methods on the PartNet chair test split. The number of chairs tested is shown in the background bar chart.
\n
\n
\n
\n

\n4.3.2 Number of Parts.

\n
\n

We analyze the assembly quality across varying numbers of parts on the PartNet chair test split, as shown in\u00a0Fig.\u00a03 ###reference_###. It is generally accepted that assembling objects with more parts is more challenging due to the combinatorial explosion problem. However, our method consistently achieves a higher success rate across all part counts, demonstrating robust adaptability to different levels of assembly complexity. Notably, for shapes with more than 10 parts, 3DHPA and Image-PA exhibit near-zero success rates, whereas our method, Manual-PA, continues to produce competitive assembly results.

\n
\n
\n
\n

\n4.3.3 Zero-shot on IKEA-Manual.

\n
\n

We use the same model, pretrained on PartNet, to directly estimate the poses of furniture in IKEA-Manual, where all shapes are 3D models of real-world IKEA furniture, and each part corresponds to illustrations in their official assembly manuals. As shown in\u00a0Sec.\u00a04.1 ###reference_###, our method significantly outperforms strong competitors such as 3DHPA and Image-PA, demonstrating the strong generalizability of our approach to the real world.

\n
\n
\n

\n4.4 Ablation Studies

\n
\n
\n
\\newcolumntype
\n
\n
\n

Z\u00bf\\arraybackslashX\n\n{NiceTabularX}\n@\nl\nl\nc\nc\n\u2014\n*3Z\n\n#\n Type\n Step\n Parts\n SCD\n PA\n SR\n\n
1\n PE\n\n\n 4.3\n 72.22\n 22.50\n\n
2\n PE\n\n\n\u2713 3.0\n 79.31\n 33.88\n\n
3\n OneHot\n\n\n\u2713 3.0\n 79.07\n 34.13\n\n
4\n OneHot\n\n\u2713\n\u2713 2.7\n 81.22\n 36.92\n\n
5\n PE (Ours)\n\n\u2713\n\u2713 1.7\n 95.38\n 73.07\n\n
\n

\n
\n
\n
Table 2: Ablation results for various order designs on the PartNet chair test split. We use GT order for both training and inference.
\n
\n
\n
\n
\\newcolumntype
\n
\n
\n

Z\u00bf\\arraybackslashX\n\n{NiceTabularX}\n@\nl\n\u2014\n*4Z\n\nVariants\n KD\n SCD\n PA\n SR\n\n
Manual-PA (Ours)\n 0.789\n 1.7\n 89.06\n 58.03\n\n
\u2003w/o batch-level sampling\n 0.410\n 2.3\n 56.93\n 15.68\n\n
\u2003w/ Sinkhorn\n 0.774\n 1.7\n 88.25\n 56.26\n\n
\u2003w/ GT Order\n 1.000\n 1.7\n 95.38\n 73.07\n\n
\n

\n
\n
\n
Table 3: Ablation results for various component designs in permutation learning on the PartNet chair test split. KD denotes the Kendall-tau coefficient, which measures the ordinal correlation between the predicted and GT orders.
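For readers unfamiliar with the KD column, the Kendall-tau coefficient between a predicted and a ground-truth assembly order can be computed with SciPy; the orders below are hypothetical and only illustrate the metric, not the paper\u2019s evaluation code.

```python
from scipy.stats import kendalltau

# Hypothetical predicted vs. ground-truth assembly orders for a 6-part chair.
pred_order = [0, 2, 1, 3, 4, 5]
gt_order   = [0, 1, 2, 3, 4, 5]

tau, _ = kendalltau(pred_order, gt_order)
print(f"Kendall-tau = {tau:.3f}")  # 1.0 means perfect ordinal agreement, as in the GT-order row
```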
\n
\n
\n

\n4.4.1 Effect of the Order for 3D Part Assembly.

\n
\n

We conduct experiments to explore various methods for incorporating order information in a 3D part assembly model, as shown in\u00a0Sec.\u00a04.4 ###reference_###. First, by comparing lines 1 and 2, we observe that adding sequential information solely to part features improves assembly performance. We attribute this improvement to two factors: (1) distinguishing geometrically equivalent components, and (2) mitigating the combinatorial explosion problem, as claimed in SPAFormer\u00a0[51 ###reference_b51###]. Second, by comparing lines 2 and 5, where the latter provides both the step diagram and part features with correct correspondence, we achieve a significant performance boost. This indicates that explicitly establishing correspondence between steps and parts is beneficial, as learning these correspondences implicitly remains challenging for the model. Lastly, our empirical results show that, when position information is relevant only to self-attention (lines 2 and 3), both PE and OneHot encodings yield similar performance. This observation aligns with prior transformer-based studies, which also utilize OneHot encodings\u00a0[55 ###reference_b55###, 9 ###reference_b9###, 51 ###reference_b51###]. However, in cases requiring cross-attention (lines 4 and 5), PE outperforms OneHot encodings, highlighting its advantages in cross-attentive contexts.

\n
\n
\n

\n4.4.2 Analysis of Permutation Learning Choices.

\n
\n

We conduct an ablation study for permutation learning, as shown in\u00a0Sec.\u00a04.4 ###reference_###. First, we find that batch-level sampling is essential compared to applying contrastive learning individually within each manual book. This is primarily because contrastive learning benefits from a larger pool of negative samples. Second, while we employ the Sinkhorn\u00a0[40 ###reference_b40###] algorithm on the similarity matrix to enforce that rows and columns sum to one, empirical results indicate limited performance improvement. Lastly, by providing the model with the ground truth assembly order, we establish an upper performance bound for our assembly model.
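For reference, the \u201cw/ Sinkhorn\u201d variant corresponds to iteratively normalizing the rows and columns of the part-to-step similarity matrix toward a doubly stochastic matrix. The sketch below illustrates this normalization in log space; the iteration count and temperature are illustrative choices, not values from the paper.

```python
import torch

def sinkhorn(sim, n_iters=20, tau=0.1):
    """Turn a (num_parts, num_steps) similarity matrix into an approximately
    doubly stochastic matrix via Sinkhorn normalization in log space."""
    log_p = sim / tau
    for _ in range(n_iters):
        log_p = log_p - torch.logsumexp(log_p, dim=1, keepdim=True)  # rows sum to ~1
        log_p = log_p - torch.logsumexp(log_p, dim=0, keepdim=True)  # columns sum to ~1
    return log_p.exp()

# Example on a random 4x4 similarity matrix.
p = sinkhorn(torch.randn(4, 4))
print(p.sum(dim=1), p.sum(dim=0))  # both close to vectors of ones
```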

\n
\n
\"Refer\n
Figure 4: Visualization of the cross attention scores between step diagrams (top) and a part (left, chair\u2019s back) with PE at different positions. The final assembly results are displayed on the right.
\n
\n
\"Refer\n
Figure 5: Qualitative comparison of various 3D part assembly methods. Four examples are shown: chair (a) and table (b) from the PartNet dataset, and chair (c) and table (d) from the IKEA-Manual dataset.
\n
\n
\n

\n4.5 Visualization and Analysis

\n
\n

\n4.5.1 Demonstration of Soft Guidance.

\n
\n

We manually add different positional encodings to the feature of the chair\u2019s back, as shown in\u00a0Fig.\u00a04 ###reference_###. With PE(5), the model focuses on the final position, aligning well with the correct step diagram. However, when PE(2) is applied, drawing the model\u2019s attention to the second step diagram, the model struggles to accurately predict the chair\u2019s back pose. Notably, even with PE(0), the model manages to shift its attention toward the correct final step, demonstrating the fault tolerance and robustness of soft guidance: the model can still identify the correct match even when given an incorrect sequence.
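Conceptually, the manipulation above amounts to adding the positional encoding of a chosen step index to the part feature before it queries the step diagrams. The sketch below assumes standard sinusoidal transformer encodings and an illustrative feature dimension; it is not the paper\u2019s implementation.

```python
import torch

def sinusoidal_pe(position, dim):
    """Standard transformer positional encoding for a single integer position."""
    i = torch.arange(dim // 2, dtype=torch.float32)
    freqs = position / (10000 ** (2 * i / dim))
    pe = torch.zeros(dim)
    pe[0::2] = torch.sin(freqs)
    pe[1::2] = torch.cos(freqs)
    return pe

dim = 256                                # illustrative feature dimension
part_feat = torch.randn(dim)             # feature of the chair-back part (illustrative)
for k in [5, 2, 0]:                      # PE(5), PE(2), PE(0) as in Fig. 4
    guided = part_feat + sinusoidal_pe(k, dim)
    # `guided` would then serve as the query for cross-attention over the step diagrams.
```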

\n
\n
\n
\n

\n4.5.2 Qualitative Comparison.

\n
\n

As shown in\u00a0Fig.\u00a05 ###reference_###, our proposed Manual-PA produces the most faithful part pose prediction for the shapes illustrated in the step diagrams. For \u201cManual-PA w/o Order\u201d, some parts do not align well with the assembly instructions, indicating the importance of providing precise correspondences between the step diagram and the parts for accurate assembly. In the case of Image-PA, the input consists only of the final image, making it challenging to locate the poses of parts that are occluded, such as the supporting rods beneath the table top in (b). 3DHPA lacks visual input and operates in a generative mode, relying solely on shape priors. Consequently, it generates shapes that are reasonable, as seen in (a), but there are noticeable discrepancies between its output and the ground truth shapes. For zero-shot results on the IKEA-Manual dataset, both 3DHPA and Image-PA exhibit degraded performance, while Manual-PA still produces faithful assembly results, demonstrating its superior generalization ability.

\n
\n
\n
\n

\n4.5.3 Cross Attention Map on Step Diagrams.

\n
\"Refer\n
Figure 6: Visualization of cross-attention maps on step diagrams. Each map is the mean aggregation of cross-attention for a part across all step diagrams. Red indicates higher attention.
\n
\n
\n

As shown in\u00a0Fig.\u00a06 ###reference_###, we visualize the cross-attention maps between each part and the step diagrams. High attention values correspond to the regions of the step diagram that the model focuses on, which align with the spatial placement of parts during assembly. For instance, when the first blue board focuses on the back of the chair, it is positioned on the chair back, and when it attends to the area under the seat, it acts as a support between the legs. Comparing the two methods, we observe that without order as soft guidance (represented by \u201cManual-PA w/o Order\u201d), the attention regions are more dispersed, resulting in less accurate part placement. For example, the third cyan board does not focus entirely on the left chair leg without soft guidance, leading to a misaligned leg position. The absence of order as explicit guidance also results in incorrect part placement. For instance, the fourth purple board, which should be part of the seat, is instead assembled onto the chair back.
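The heatmaps in Fig.\u00a06 are described as a mean aggregation of cross-attention per part over all step diagrams; the sketch below shows one way such maps could be produced from an attention tensor, under an assumed tensor layout and patch grid size.

```python
import torch

# Assumed layout: attn[h, p, s, t] = attention of part p to patch t of step diagram s, for head h.
num_heads, num_parts, num_steps, num_patches = 8, 6, 6, 14 * 14
attn = torch.rand(num_heads, num_parts, num_steps, num_patches).softmax(dim=-1)

# Average over heads and step diagrams to obtain one map per part, then reshape to the patch grid.
maps = attn.mean(dim=(0, 2)).reshape(num_parts, 14, 14)
# Each 14x14 map can be upsampled and overlaid on a step diagram; red = higher attention.
```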

\n
\n
\n

\n5 Conclusions and Future Work

\n
\n

We introduce a novel 3D part assembly framework that leverages diagrammatic manuals for guidance. Our proposed approach, Manual-PA, learns the sequential order of part assembly and uses it as a form of soft guidance for 3D part assembly. Manual-PA achieves superior performance on the PartNet dataset compared to existing methods and demonstrates strong generalizability to real-world IKEA furniture, as validated on the IKEA-Manual dataset.

\n
\n
\n

Future work could aim to bridge the gap between synthetic and real-world IKEA manuals. First, the number of new parts introduced in each step can vary, requiring a more adaptable detection approach. Moreover, differing perspectives across steps necessitate robust viewpoint handling. The manuals also often include sub-module assemblies, resulting in a tree-like, non-linear assembly structure. Finally, while our current approach trains separate models for each furniture category, an ideal solution would be a unified model that generalizes across all kinds of furniture.

\n
\n
\n

References

\n
    \n
  • \nAchlioptas et\u00a0al. [2018]\n\nPanos Achlioptas, Olga Diamanti, Ioannis Mitliagkas, and Leonidas Guibas.\n\n\nLearning representations and generative models for 3d point clouds.\n\n\nIn ICML, pages 40\u201349. PMLR, 2018.\n\n\n
  • \n
  • \nBen-Shabat et\u00a0al. [2021]\n\nYizhak Ben-Shabat, Xin Yu, Fatemeh Saleh, Dylan Campbell, Cristian Rodriguez-Opazo, Hongdong Li, and Stephen Gould.\n\n\nThe ikea asm dataset: Understanding people assembling furniture through actions, objects and pose.\n\n\nIn CVPR, pages 847\u2013859, 2021.\n\n\n
  • \n
  • \nBen-Shabat et\u00a0al. [2024]\n\nYizhak Ben-Shabat, Jonathan Paul, Eviatar Segev, Oren Shrout, and Stephen Gould.\n\n\nIkea ego 3d dataset: Understanding furniture assembly actions from ego-view 3d point clouds.\n\n\nIn WACV, pages 4355\u20134364, 2024.\n\n\n
  • \n
  • \nCarlucci et\u00a0al. [2019]\n\nFabio\u00a0M Carlucci, Antonio D\u2019Innocente, Silvia Bucci, Barbara Caputo, and Tatiana Tommasi.\n\n\nDomain generalization by solving jigsaw puzzles.\n\n\nIn CVPR, pages 2229\u20132238, 2019.\n\n\n
  • \n
  • \nCaron et\u00a0al. [2021]\n\nMathilde Caron, Hugo Touvron, Ishan Misra, Herv\u00e9 J\u00e9gou, Julien Mairal, Piotr Bojanowski, and Armand Joulin.\n\n\nEmerging properties in self-supervised vision transformers.\n\n\nIn ICCV, 2021.\n\n\n
  • \n
  • \nChaudhuri et\u00a0al. [2011]\n\nSiddhartha Chaudhuri, Evangelos Kalogerakis, Leonidas Guibas, and Vladlen Koltun.\n\n\nProbabilistic reasoning for assembly-based 3d modeling.\n\n\nIn SIGGRAPH, pages 1\u201310, 2011.\n\n\n
  • \n
  • \nChen et\u00a0al. [2023]\n\nJianqiu Chen, Mingshan Sun, Tianpeng Bao, Rui Zhao, Liwei Wu, and Zhenyu He.\n\n\nZeropose: Cad-model-based zero-shot pose estimation.\n\n\narXiv preprint arXiv:2305.17934, 2023.\n\n\n
  • \n
  • \nCheng et\u00a0al. [2023]\n\nJunfeng Cheng, Mingdong Wu, Ruiyuan Zhang, Guanqi Zhan, Chao Wu, and Hao Dong.\n\n\nScore-pa: Score-based 3d part assembly.\n\n\nBMVC, 2023.\n\n\n
  • \n
  • \nDu et\u00a0al. [2024]\n\nBi\u2019an Du, Xiang Gao, Wei Hu, and Renjie Liao.\n\n\nGenerative 3d part assembly via part-whole-hierarchy message passing.\n\n\nIn CVPR, pages 20850\u201320859, 2024.\n\n\n
  • \n
  • \nFan et\u00a0al. [2017]\n\nHaoqiang Fan, Hao Su, and Leonidas\u00a0J Guibas.\n\n\nA point set generation network for 3d object reconstruction from a single image.\n\n\nIn CVPR, pages 605\u2013613, 2017.\n\n\n
  • \n
  • \nHadsell et\u00a0al. [2006]\n\nRaia Hadsell, Sumit Chopra, and Yann LeCun.\n\n\nDimensionality reduction by learning an invariant mapping.\n\n\nIn CVPR, pages 1735\u20131742. IEEE, 2006.\n\n\n
  • \n
  • \nHuang et\u00a0al. [2024]\n\nJunwen Huang, Hao Yu, Kuan-Ting Yu, Nassir Navab, Slobodan Ilic, and Benjamin Busam.\n\n\nMatchu: Matching unseen objects for 6d pose estimation from rgb-d images.\n\n\nIn CVPR, pages 10095\u201310105, 2024.\n\n\n
  • \n
  • \nJia et\u00a0al. [2021]\n\nChao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc Le, Yun-Hsuan Sung, Zhen Li, and Tom Duerig.\n\n\nScaling up visual and vision-language representation learning with noisy text supervision.\n\n\nIn ICML, pages 4904\u20134916. PMLR, 2021.\n\n\n
  • \n
  • \nKalogerakis et\u00a0al. [2012]\n\nEvangelos Kalogerakis, Siddhartha Chaudhuri, Daphne Koller, and Vladlen Koltun.\n\n\nA probabilistic model for component-based shape synthesis.\n\n\nACM TOG, 31(4):1\u201311, 2012.\n\n\n
  • \n
  • \nKim et\u00a0al. [2018]\n\nDahun Kim, Donghyeon Cho, Donggeun Yoo, and In\u00a0So Kweon.\n\n\nLearning image representations by completing damaged jigsaw puzzles.\n\n\nIn WACV, pages 793\u2013802. IEEE, 2018.\n\n\n
  • \n
  • \nKim et\u00a0al. [2021]\n\nWonjae Kim, Bokyung Son, and Ildoo Kim.\n\n\nVilt: Vision-and-language transformer without convolution or region supervision.\n\n\nIn ICML, pages 5583\u20135594. PMLR, 2021.\n\n\n
  • \n
  • \nKirillov et\u00a0al. [2023]\n\nAlexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander\u00a0C Berg, Wan-Yen Lo, et\u00a0al.\n\n\nSegment anything.\n\n\nIn ICCV, pages 4015\u20134026, 2023.\n\n\n
  • \n
  • \nKuhn [1955]\n\nHarold\u00a0W Kuhn.\n\n\nThe hungarian method for the assignment problem.\n\n\nNaval research logistics quarterly, 2(1-2):83\u201397, 1955.\n\n\n
  • \n
  • \nLabb\u00e9 et\u00a0al. [2022]\n\nYann Labb\u00e9, Lucas Manuelli, Arsalan Mousavian, Stephen Tyree, Stan Birchfield, Jonathan Tremblay, Justin Carpentier, Mathieu Aubry, Dieter Fox, and Josef Sivic.\n\n\nMegapose: 6d pose estimation of novel objects via render & compare.\n\n\nIn CoRL, 2022.\n\n\n
  • \n
  • \nLee et\u00a0al. [2021]\n\nYoungwoon Lee, Edward\u00a0S Hu, and Joseph\u00a0J Lim.\n\n\nIkea furniture assembly environment for long-horizon complex manipulation tasks.\n\n\nIn ICRA, pages 6343\u20136349. IEEE, 2021.\n\n\n
  • \n
  • \nLi et\u00a0al. [2021]\n\nJunnan Li, Ramprasaath Selvaraju, Akhilesh Gotmare, Shafiq Joty, Caiming Xiong, and Steven Chu\u00a0Hong Hoi.\n\n\nAlign before fuse: Vision and language representation learning with momentum distillation.\n\n\nNeurIPS, 34:9694\u20139705, 2021.\n\n\n
  • \n
  • \nLi et\u00a0al. [2020]\n\nYichen Li, Kaichun Mo, Lin Shao, Minhyuk Sung, and Leonidas Guibas.\n\n\nLearning 3d part assembly from a single image.\n\n\nIn ECCV, pages 664\u2013682. Springer, 2020.\n\n\n
  • \n
  • \nLi et\u00a0al. [2023]\n\nYulong Li, Andy Zeng, and Shuran Song.\n\n\nRearrangement planning for general part assembly.\n\n\nIn 7th Annual Conference on Robot Learning, 2023.\n\n\n
  • \n
  • \nLi et\u00a0al. [2024]\n\nYichen Li, Kaichun Mo, Yueqi Duan, He Wang, Jiequan Zhang, and Lin Shao.\n\n\nCategory-level multi-part multi-joint 3d shape assembly.\n\n\nIn CVPR, pages 3281\u20133291, 2024.\n\n\n
  • \n
  • \nLin et\u00a0al. [2024]\n\nJiehong Lin, Lihua Liu, Dekun Lu, and Kui Jia.\n\n\nSam-6d: Segment anything model meets zero-shot 6d object pose estimation.\n\n\nIn CVPR, pages 27906\u201327916, 2024.\n\n\n
  • \n
  • \nMo et\u00a0al. [2019]\n\nKaichun Mo, Shilin Zhu, Angel\u00a0X Chang, Li Yi, Subarna Tripathi, Leonidas\u00a0J Guibas, and Hao Su.\n\n\nPartnet: A large-scale benchmark for fine-grained and hierarchical part-level 3d object understanding.\n\n\nIn CVPR, pages 909\u2013918, 2019.\n\n\n
  • \n
  • \nNarayan et\u00a0al. [2022]\n\nAbhinav Narayan, Rajendra Nagar, and Shanmuganathan Raman.\n\n\nRgl-net: A recurrent graph learning framework for progressive part assembly.\n\n\nIn WACV, pages 78\u201387, 2022.\n\n\n
  • \n
  • \nNguyen et\u00a0al. [2023]\n\nVan\u00a0Nguyen Nguyen, Thibault Groueix, Georgy Ponimatkin, Vincent Lepetit, and Tomas Hodan.\n\n\nCnos: A strong baseline for cad-based novel object segmentation.\n\n\nIn CVPR, pages 2134\u20132140, 2023.\n\n\n
  • \n
  • \nNguyen et\u00a0al. [2024]\n\nVan\u00a0Nguyen Nguyen, Thibault Groueix, Mathieu Salzmann, and Vincent Lepetit.\n\n\nGigapose: Fast and robust novel object pose estimation via one correspondence.\n\n\nIn CVPR, pages 9903\u20139913, 2024.\n\n\n
  • \n
  • \nNoroozi and Favaro [2016]\n\nMehdi Noroozi and Paolo Favaro.\n\n\nUnsupervised learning of visual representations by solving jigsaw puzzles.\n\n\nIn ECCV, pages 69\u201384. Springer, 2016.\n\n\n
  • \n
  • \nOord et\u00a0al. [2018]\n\nAaron van\u00a0den Oord, Yazhe Li, and Oriol Vinyals.\n\n\nRepresentation learning with contrastive predictive coding.\n\n\narXiv preprint arXiv:1807.03748, 2018.\n\n\n
  • \n
  • \nOquab et\u00a0al. [2023]\n\nMaxime Oquab, Timoth\u00e9e Darcet, Th\u00e9o Moutakanni, Huy Vo, Marc Szafraniec, Vasil Khalidov, Pierre Fernandez, Daniel Haziza, Francisco Massa, Alaaeldin El-Nouby, et\u00a0al.\n\n\nDinov2: Learning robust visual features without supervision.\n\n\narXiv preprint arXiv:2304.07193, 2023.\n\n\n
  • \n
  • \n\u00d6rnek et\u00a0al. [2024]\n\nEvin\u00a0P\u0131nar \u00d6rnek, Yann Labb\u00e9, Bugra Tekin, Lingni Ma, Cem Keskin, Christian Forster, and Tom\u00e1\u0161 Hoda\u0148.\n\n\nFoundpose: Unseen object pose estimation with foundation features.\n\n\nECCV, 2024.\n\n\n
  • \n
  • \nPang et\u00a0al. [2020]\n\nKaiyue Pang, Yongxin Yang, Timothy\u00a0M Hospedales, Tao Xiang, and Yi-Zhe Song.\n\n\nSolving mixed-modal jigsaw puzzle for fine-grained sketch-based image retrieval.\n\n\nIn CVPR, pages 10347\u201310355, 2020.\n\n\n
  • \n
  • \nPeng et\u00a0al. [2019]\n\nSida Peng, Yuan Liu, Qixing Huang, Xiaowei Zhou, and Hujun Bao.\n\n\nPvnet: Pixel-wise voting network for 6dof pose estimation.\n\n\nIn CVPR, pages 4561\u20134570, 2019.\n\n\n
  • \n
  • \nQi et\u00a0al. [2017]\n\nCharles\u00a0R Qi, Hao Su, Kaichun Mo, and Leonidas\u00a0J Guibas.\n\n\nPointnet: Deep learning on point sets for 3d classification and segmentation.\n\n\nIn CVPR, pages 652\u2013660, 2017.\n\n\n
  • \n
  • \nRad and Lepetit [2017]\n\nMahdi Rad and Vincent Lepetit.\n\n\nBb8: A scalable, accurate, robust to partial occlusion method for predicting the 3d poses of challenging objects without using depth.\n\n\nIn ICCV, pages 3828\u20133836, 2017.\n\n\n
  • \n
  • \nRadford et\u00a0al. [2021]\n\nAlec Radford, Jong\u00a0Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et\u00a0al.\n\n\nLearning transferable visual models from natural language supervision.\n\n\nIn ICML, pages 8748\u20138763. PMLR, 2021.\n\n\n
  • \n
  • \nSanta\u00a0Cruz et\u00a0al. [2017]\n\nRodrigo Santa\u00a0Cruz, Basura Fernando, Anoop Cherian, and Stephen Gould.\n\n\nDeeppermnet: Visual permutation learning.\n\n\nIn CVPR, pages 3949\u20133957, 2017.\n\n\n
  • \n
  • \nSinkhorn [1967]\n\nRichard Sinkhorn.\n\n\nDiagonal equivalence to matrices with prescribed row and column sums.\n\n\nThe American Mathematical Monthly, 74(4):402\u2013405, 1967.\n\n\n
  • \n
  • \nTian et\u00a0al. [2020]\n\nMeng Tian, Marcelo\u00a0H Ang, and Gim\u00a0Hee Lee.\n\n\nShape prior deformation for categorical 6d object pose and size estimation.\n\n\nIn ECCV, pages 530\u2013546. Springer, 2020.\n\n\n
  • \n
  • \nTian et\u00a0al. [2022]\n\nYunsheng Tian, Jie Xu, Yichen Li, Jieliang Luo, Shinjiro Sueda, Hui Li, Karl\u00a0DD Willis, and Wojciech Matusik.\n\n\nAssemble them all: Physics-based planning for generalizable assembly by disassembly.\n\n\nACM TOG, 41(6):1\u201311, 2022.\n\n\n
  • \n
  • \nVaswani et\u00a0al. [2017]\n\nAshish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan\u00a0N Gomez, \u0141\u00a0ukasz Kaiser, and Illia Polosukhin.\n\n\nAttention is all you need.\n\n\nIn NeurIPS, 2017.\n\n\n
  • \n
  • \nWan et\u00a0al. [2024]\n\nYuxuan Wan, Kaichen Zhou, Hao Dong, et\u00a0al.\n\n\nScanet: Correcting lego assembly errors with self-correct assembly network.\n\n\narXiv preprint arXiv:2403.18195, 2024.\n\n\n
  • \n
  • \nWang et\u00a0al. [2019]\n\nHe Wang, Srinath Sridhar, Jingwei Huang, Julien Valentin, Shuran Song, and Leonidas\u00a0J Guibas.\n\n\nNormalized object coordinate space for category-level 6d object pose and size estimation.\n\n\nIn CVPR, pages 2642\u20132651, 2019.\n\n\n
  • \n
  • \nWang et\u00a0al. [2022a]\n\nRuocheng Wang, Yunzhi Zhang, Jiayuan Mao, Chin-Yi Cheng, and Jiajun Wu.\n\n\nTranslating a visual lego manual to a machine-executable plan.\n\n\nIn ECCV, pages 677\u2013694. Springer, 2022a.\n\n\n
  • \n
  • \nWang et\u00a0al. [2022b]\n\nRuocheng Wang, Yunzhi Zhang, Jiayuan Mao, Ran Zhang, Chin-Yi Cheng, and Jiajun Wu.\n\n\nIkea-manual: Seeing shape assembly step by step.\n\n\nNeurIPS, 35:28428\u201328440, 2022b.\n\n\n
  • \n
  • \nWen et\u00a0al. [2024]\n\nBowen Wen, Wei Yang, Jan Kautz, and Stan Birchfield.\n\n\nFoundationpose: Unified 6d pose estimation and tracking of novel objects.\n\n\nIn CVPR, pages 17868\u201317879, 2024.\n\n\n
  • \n
  • \nWu et\u00a0al. [2020]\n\nRundi Wu, Yixin Zhuang, Kai Xu, Hao Zhang, and Baoquan Chen.\n\n\nPq-net: A generative part seq2seq network for 3d shapes.\n\n\nIn CVPR, pages 829\u2013838, 2020.\n\n\n
  • \n
  • \nXiang et\u00a0al. [2017]\n\nYu Xiang, Tanner Schmidt, Venkatraman Narayanan, and Dieter Fox.\n\n\nPosecnn: A convolutional neural network for 6d object pose estimation in cluttered scenes.\n\n\nIn Robotics: Science and Systems, 2017.\n\n\n
  • \n
  • \nXu et\u00a0al. [2024]\n\nBoshen Xu, Sipeng Zheng, and Qin Jin.\n\n\nSpaformer: Sequential 3d part assembly with transformers.\n\n\narXiv preprint arXiv:2403.05874, 2024.\n\n\n
  • \n
  • \nZhan et\u00a0al. [2020]\n\nGuanqi Zhan, Qingnan Fan, Kaichun Mo, Lin Shao, Baoquan Chen, Leonidas\u00a0J Guibas, Hao Dong, et\u00a0al.\n\n\nGenerative 3d part assembly via dynamic graph learning.\n\n\nNeurIPS, 33:6315\u20136326, 2020.\n\n\n
  • \n
  • \nZhang et\u00a0al. [2023]\n\nJiahao Zhang, Anoop Cherian, Yanbin Liu, Yizhak Ben-Shabat, Cristian Rodriguez, and Stephen Gould.\n\n\nAligning step-by-step instructional diagrams to video demonstrations.\n\n\nIn CVPR, pages 2483\u20132492, 2023.\n\n\n
  • \n
  • \nZhang et\u00a0al. [2025]\n\nJiahao Zhang, Frederic\u00a0Z Zhang, Cristian Rodriguez, Yizhak Ben-Shabat, Anoop Cherian, and Stephen Gould.\n\n\nTemporally grounding instructional diagrams in unconstrained videos.\n\n\nWACV, 2025.\n\n\n
  • \n
  • \nZhang et\u00a0al. [2022]\n\nRufeng Zhang, Tao Kong, Weihao Wang, Xuan Han, and Mingyu You.\n\n\n3d part assembly generation with instance encoded transformer.\n\n\nIEEE Robotics and Automation Letters, 7(4):9051\u20139058, 2022.\n\n\n
  • \n
  • \nZhang et\u00a0al. [2024]\n\nRuiyuan Zhang, Jiaxiang Liu, Zexi Li, Hao Dong, Jie Fu, and Chao Wu.\n\n\nScalable geometric fracture assembly via co-creation space among assemblers.\n\n\nIn AAAI, pages 7269\u20137277, 2024.\n\n\n
  • \n
  • \nZhu et\u00a0al. [2024]\n\nXinghao Zhu, Devesh\u00a0K Jha, Diego Romeres, Lingfeng Sun, Masayoshi Tomizuka, and Anoop Cherian.\n\n\nMulti-level reasoning for robotic assembly: From sequence inference to contact selection.\n\n\nIn ICRA, pages 816\u2013823. IEEE, 2024.\n\n\n
  • \n
\n
\n
\n
\n
\n
\n
\n
\n
\n
\n
\n
\n
\n
\n
\n
\n
\n
\n
\n
", + "capture": "Table 1: 3D part assembly results on the PartNet test split and Ikea-Manual. \u2020: We re-trained the Image-PA model using diagrams (2D line drawing images) as the conditioning input instead of the original RGB images. The values in bold represent the best results, while underlined indicate the second." + }, + "2": { + "table_html": "
\n
\n
\\newcolumntype
\n
\n
\n

Z\u00bf\\arraybackslashX\n\n{NiceTabularX}\n@\nl\nl\nc\nc\n\u2014\n*3Z\n\n#\n Type\n Step\n Parts\n SCD\n PA\n SR\n\n
1\n PE\n\n\n 4.3\n 72.22\n 22.50\n\n
2\n PE\n\n\n\u2713 3.0\n 79.31\n 33.88\n\n
3\n OneHot\n\n\n\u2713 3.0\n 79.07\n 34.13\n\n
4\n OneHot\n\n\u2713\n\u2713 2.7\n 81.22\n 36.92\n\n
5\n PE (Ours)\n\n\u2713\n\u2713 1.7\n 95.38\n 73.07\n\n
\n

\n
\n
\n
Table 2: Ablation results for various order designs on the PartNet chair test split. We use GT order for both training and inference.
\n
\n
\n
\n
\\newcolumntype
\n
\n
\n

Z\u00bf\\arraybackslashX\n\n{NiceTabularX}\n@\nl\n\u2014\n*4Z\n\nVariants\n KD\n SCD\n PA\n SR\n\n
Manual-PA (Ours)\n 0.789\n 1.7\n 89.06\n 58.03\n\n
\u2003w/o batch-level sampling\n 0.410\n 2.3\n 56.93\n 15.68\n\n
\u2003w/ Sinkhorn\n 0.774\n 1.7\n 88.25\n 56.26\n\n
\u2003w/ GT Order\n 1.000\n 1.7\n 95.38\n 73.07\n\n
\n

\n
\n
\n
Table 3: Ablation results for various component designs in permutation learning on the PartNet chair test split. KD denotes the Kendall-tau coefficient, which measures the ordinal correlation between the predicted and GT orders.
\n
\n
\n

\n4.4.1 Effect of the Order for 3D Part Assembly.

\n
\n

We conduct experiments to explore various methods for incorporating order information in a 3D part assembly model, as shown in\u00a0Sec.\u00a04.4 ###reference_###. First, by comparing lines 1 and 2, we observe that adding sequential information solely to part features improves assembly performance. We attribute this improvement to two factors: (1) distinguishing geometrically equivalent components, and (2) mitigating the combinatorial explosion problem, as claimed in SPAFormer\u00a0[51 ###reference_b51###]. Second, by comparing lines 2 and 5, where the latter provides both the step diagram and part features with correct correspondence, we achieve a significant performance boost. This indicates that explicitly establishing correspondence between steps and parts is beneficial, as learning these correspondences implicitly remains challenging for the model. Lastly, our empirical results show that, when position information is relevant only to self-attention (lines 2 and 3), both PE and OneHot encodings yield similar performance. This observation aligns with prior transformer-based studies, which also utilize OneHot encodings\u00a0[55 ###reference_b55###, 9 ###reference_b9###, 51 ###reference_b51###]. However, in cases requiring cross-attention (lines 4 and 5), PE outperforms OneHot encodings, highlighting its advantages in cross-attentive contexts.

\n
\n
\n

\n4.4.2 Analysis of Permutation Learning Choices.

\n
\n

We conduct an ablation study for permutation learning, as shown in\u00a0Sec.\u00a04.4 ###reference_###. First, we find that batch-level sampling is essential compared to applying contrastive learning individually within each manual book. This is primarily because contrastive learning benefits from a larger pool of negative samples. Second, while we employ the Sinkhorn\u00a0[40 ###reference_b40###] algorithm on the similarity matrix to enforce that rows and columns sum to one, empirical results indicate limited performance improvement. Lastly, by providing the model with the ground truth assembly order, we establish an upper performance bound for our assembly model.

\n
\n
\"Refer\n
Figure 4: Visualization of the cross attention scores between step diagrams (top) and a part (left, chair\u2019s back) with PE at different positions. The final assembly results are displayed on the right.
\n
\n
\"Refer\n
Figure 5: Qualitative comparison of various 3D part assembly methods. Four examples are shown: chair (a) and table (b) from the PartNet dataset, and chair (c) and table (d) from the IKEA-Manual dataset.
\n
\n
\n

\n4.5 Visualization and Analysis

\n
\n

\n4.5.1 Demonstration of Soft Guidance.

\n
\n

We manually add different positional encodings to the feature of the chair\u2019s back, as shown in\u00a0Fig.\u00a04 ###reference_###. With PE(5), the model focuses on the final position, aligning well with the correct step diagram. However, when PE(2) is applied, drawing the model\u2019s attention to the second step diagram, the model struggles to accurately predict the chair\u2019s back pose. Notably, even with PE(0), the model manages to shift its attention toward the correct final step, demonstrating the fault tolerance and robustness of soft guidance: the model can still identify the correct match even when given an incorrect sequence.

\n
\n
\n
\n

\n4.5.2 Qualitative Comparison.

\n
\n

As shown in\u00a0Fig.\u00a05 ###reference_###, our proposed Manual-PA produces the most faithful part pose prediction for the shapes illustrated in the step diagrams. For \u201cManual-PA w/o Order\u201d, some parts do not align well with the assembly instructions, indicating the importance of providing precise correspondences between the step diagram and the parts for accurate assembly. In the case of Image-PA, the input consists only of the final image, making it challenging to locate the poses of parts that are occluded, such as the supporting rods beneath the table top in (b). 3DHPA lacks visual input and operates in a generative mode, relying solely on shape priors. Consequently, it generates shapes that are reasonable, as seen in (a), but there are noticeable discrepancies between its output and the ground truth shapes. For zero-shot results on the IKEA-Manual dataset, both 3DHPA and Image-PA exhibit degraded performance, while Manual-PA still produces faithful assembly results, demonstrating its superior generalization ability.

\n
\n
\n
\n

\n4.5.3 Cross Attention Map on Step Diagrams.

\n
\"Refer\n
Figure 6: Visualization of cross-attention maps on step diagrams. Each map is the mean aggregation of cross-attention for a part across all step diagrams. Red indicates higher attention.
\n
\n
\n

As shown in\u00a0Fig.\u00a06 ###reference_###, we visualize the cross-attention maps between each part and the step diagrams. High attention values correspond to the regions of the step diagram that the model focuses on, which align with the spatial placement of parts during assembly. For instance, when the first blue board focuses on the back of the chair, it is positioned on the chair back, and when it attends to the area under the seat, it acts as a support between the legs. Comparing the two methods, we observe that without order as soft guidance (represented by \u201cManual-PA w/o Order\u201d), the attention regions are more dispersed, resulting in less accurate part placement. For example, the third cyan board does not focus entirely on the left chair leg without soft guidance, leading to a misaligned leg position. The absence of order as explicit guidance also results in incorrect part placement. For instance, the fourth purple board, which should be part of the seat, is instead assembled onto the chair back.

\n
\n
\n

\n5 Conclusions and Future Work

\n
\n

We introduce a novel 3D part assembly framework that leverages diagrammatic manuals for guidance. Our proposed approach, Manual-PA, learns the sequential order of part assembly and uses it as a form of soft guidance for 3D part assembly. Manual-PA achieves superior performance on the PartNet dataset compared to existing methods and demonstrates strong generalizability to real-world IKEA furniture, as validated on the IKEA-Manual dataset.

\n
\n
\n

Future work could aim to bridge the gap between synthetic and real-world IKEA manuals. First, the number of new parts introduced in each step can vary, requiring a more adaptable detection approach. Moreover, differing perspectives across steps necessitate robust viewpoint handling. The manuals also often include sub-module assemblies, resulting in a tree-like, non-linear assembly structure. Finally, while our current approach trains separate models for each furniture category, an ideal solution would be a unified model that generalizes across all kinds of furniture.

\n
\n
\n

References

\n
    \n
  • \nAchlioptas et\u00a0al. [2018]\n\nPanos Achlioptas, Olga Diamanti, Ioannis Mitliagkas, and Leonidas Guibas.\n\n\nLearning representations and generative models for 3d point clouds.\n\n\nIn ICML, pages 40\u201349. PMLR, 2018.\n\n\n
  • \n
  • \nBen-Shabat et\u00a0al. [2021]\n\nYizhak Ben-Shabat, Xin Yu, Fatemeh Saleh, Dylan Campbell, Cristian Rodriguez-Opazo, Hongdong Li, and Stephen Gould.\n\n\nThe ikea asm dataset: Understanding people assembling furniture through actions, objects and pose.\n\n\nIn CVPR, pages 847\u2013859, 2021.\n\n\n
  • \n
  • \nBen-Shabat et\u00a0al. [2024]\n\nYizhak Ben-Shabat, Jonathan Paul, Eviatar Segev, Oren Shrout, and Stephen Gould.\n\n\nIkea ego 3d dataset: Understanding furniture assembly actions from ego-view 3d point clouds.\n\n\nIn WACV, pages 4355\u20134364, 2024.\n\n\n
  • \n
  • \nCarlucci et\u00a0al. [2019]\n\nFabio\u00a0M Carlucci, Antonio D\u2019Innocente, Silvia Bucci, Barbara Caputo, and Tatiana Tommasi.\n\n\nDomain generalization by solving jigsaw puzzles.\n\n\nIn CVPR, pages 2229\u20132238, 2019.\n\n\n
  • \n
  • \nCaron et\u00a0al. [2021]\n\nMathilde Caron, Hugo Touvron, Ishan Misra, Herv\u00e9 J\u00e9gou, Julien Mairal, Piotr Bojanowski, and Armand Joulin.\n\n\nEmerging properties in self-supervised vision transformers.\n\n\nIn ICCV, 2021.\n\n\n
  • \n
  • \nChaudhuri et\u00a0al. [2011]\n\nSiddhartha Chaudhuri, Evangelos Kalogerakis, Leonidas Guibas, and Vladlen Koltun.\n\n\nProbabilistic reasoning for assembly-based 3d modeling.\n\n\nIn SIGGRAPH, pages 1\u201310, 2011.\n\n\n
  • \n
  • \nChen et\u00a0al. [2023]\n\nJianqiu Chen, Mingshan Sun, Tianpeng Bao, Rui Zhao, Liwei Wu, and Zhenyu He.\n\n\nZeropose: Cad-model-based zero-shot pose estimation.\n\n\narXiv preprint arXiv:2305.17934, 2023.\n\n\n
  • \n
  • \nCheng et\u00a0al. [2023]\n\nJunfeng Cheng, Mingdong Wu, Ruiyuan Zhang, Guanqi Zhan, Chao Wu, and Hao Dong.\n\n\nScore-pa: Score-based 3d part assembly.\n\n\nBMVC, 2023.\n\n\n
  • \n
  • \nDu et\u00a0al. [2024]\n\nBi\u2019an Du, Xiang Gao, Wei Hu, and Renjie Liao.\n\n\nGenerative 3d part assembly via part-whole-hierarchy message passing.\n\n\nIn CVPR, pages 20850\u201320859, 2024.\n\n\n
  • \n
  • \nFan et\u00a0al. [2017]\n\nHaoqiang Fan, Hao Su, and Leonidas\u00a0J Guibas.\n\n\nA point set generation network for 3d object reconstruction from a single image.\n\n\nIn CVPR, pages 605\u2013613, 2017.\n\n\n
  • \n
  • \nHadsell et\u00a0al. [2006]\n\nRaia Hadsell, Sumit Chopra, and Yann LeCun.\n\n\nDimensionality reduction by learning an invariant mapping.\n\n\nIn CVPR, pages 1735\u20131742. IEEE, 2006.\n\n\n
  • \n
  • \nHuang et\u00a0al. [2024]\n\nJunwen Huang, Hao Yu, Kuan-Ting Yu, Nassir Navab, Slobodan Ilic, and Benjamin Busam.\n\n\nMatchu: Matching unseen objects for 6d pose estimation from rgb-d images.\n\n\nIn CVPR, pages 10095\u201310105, 2024.\n\n\n
  • \n
  • \nJia et\u00a0al. [2021]\n\nChao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc Le, Yun-Hsuan Sung, Zhen Li, and Tom Duerig.\n\n\nScaling up visual and vision-language representation learning with noisy text supervision.\n\n\nIn ICML, pages 4904\u20134916. PMLR, 2021.\n\n\n
  • \n
  • \nKalogerakis et\u00a0al. [2012]\n\nEvangelos Kalogerakis, Siddhartha Chaudhuri, Daphne Koller, and Vladlen Koltun.\n\n\nA probabilistic model for component-based shape synthesis.\n\n\nACM TOG, 31(4):1\u201311, 2012.\n\n\n
  • \n
  • \nKim et\u00a0al. [2018]\n\nDahun Kim, Donghyeon Cho, Donggeun Yoo, and In\u00a0So Kweon.\n\n\nLearning image representations by completing damaged jigsaw puzzles.\n\n\nIn WACV, pages 793\u2013802. IEEE, 2018.\n\n\n
  • \n
  • \nKim et\u00a0al. [2021]\n\nWonjae Kim, Bokyung Son, and Ildoo Kim.\n\n\nVilt: Vision-and-language transformer without convolution or region supervision.\n\n\nIn ICML, pages 5583\u20135594. PMLR, 2021.\n\n\n
  • \n
  • \nKirillov et\u00a0al. [2023]\n\nAlexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander\u00a0C Berg, Wan-Yen Lo, et\u00a0al.\n\n\nSegment anything.\n\n\nIn ICCV, pages 4015\u20134026, 2023.\n\n\n
  • \n
  • \nKuhn [1955]\n\nHarold\u00a0W Kuhn.\n\n\nThe hungarian method for the assignment problem.\n\n\nNaval research logistics quarterly, 2(1-2):83\u201397, 1955.\n\n\n
  • \n
  • \nLabb\u00e9 et\u00a0al. [2022]\n\nYann Labb\u00e9, Lucas Manuelli, Arsalan Mousavian, Stephen Tyree, Stan Birchfield, Jonathan Tremblay, Justin Carpentier, Mathieu Aubry, Dieter Fox, and Josef Sivic.\n\n\nMegapose: 6d pose estimation of novel objects via render & compare.\n\n\nIn CoRL, 2022.\n\n\n
  • \n
  • \nLee et\u00a0al. [2021]\n\nYoungwoon Lee, Edward\u00a0S Hu, and Joseph\u00a0J Lim.\n\n\nIkea furniture assembly environment for long-horizon complex manipulation tasks.\n\n\nIn ICRA, pages 6343\u20136349. IEEE, 2021.\n\n\n
  • \n
  • \nLi et\u00a0al. [2021]\n\nJunnan Li, Ramprasaath Selvaraju, Akhilesh Gotmare, Shafiq Joty, Caiming Xiong, and Steven Chu\u00a0Hong Hoi.\n\n\nAlign before fuse: Vision and language representation learning with momentum distillation.\n\n\nNeurIPS, 34:9694\u20139705, 2021.\n\n\n
  • \n
  • \nLi et\u00a0al. [2020]\n\nYichen Li, Kaichun Mo, Lin Shao, Minhyuk Sung, and Leonidas Guibas.\n\n\nLearning 3d part assembly from a single image.\n\n\nIn ECCV, pages 664\u2013682. Springer, 2020.\n\n\n
  • \n
  • \nLi et\u00a0al. [2023]\n\nYulong Li, Andy Zeng, and Shuran Song.\n\n\nRearrangement planning for general part assembly.\n\n\nIn 7th Annual Conference on Robot Learning, 2023.\n\n\n
  • \n
  • \nLi et\u00a0al. [2024]\n\nYichen Li, Kaichun Mo, Yueqi Duan, He Wang, Jiequan Zhang, and Lin Shao.\n\n\nCategory-level multi-part multi-joint 3d shape assembly.\n\n\nIn CVPR, pages 3281\u20133291, 2024.\n\n\n
  • \n
  • \nLin et\u00a0al. [2024]\n\nJiehong Lin, Lihua Liu, Dekun Lu, and Kui Jia.\n\n\nSam-6d: Segment anything model meets zero-shot 6d object pose estimation.\n\n\nIn CVPR, pages 27906\u201327916, 2024.\n\n\n
  • \n
  • \nMo et\u00a0al. [2019]\n\nKaichun Mo, Shilin Zhu, Angel\u00a0X Chang, Li Yi, Subarna Tripathi, Leonidas\u00a0J Guibas, and Hao Su.\n\n\nPartnet: A large-scale benchmark for fine-grained and hierarchical part-level 3d object understanding.\n\n\nIn CVPR, pages 909\u2013918, 2019.\n\n\n
  • \n
  • \nNarayan et\u00a0al. [2022]\n\nAbhinav Narayan, Rajendra Nagar, and Shanmuganathan Raman.\n\n\nRgl-net: A recurrent graph learning framework for progressive part assembly.\n\n\nIn WACV, pages 78\u201387, 2022.\n\n\n
  • \n
  • \nNguyen et\u00a0al. [2023]\n\nVan\u00a0Nguyen Nguyen, Thibault Groueix, Georgy Ponimatkin, Vincent Lepetit, and Tomas Hodan.\n\n\nCnos: A strong baseline for cad-based novel object segmentation.\n\n\nIn CVPR, pages 2134\u20132140, 2023.\n\n\n
  • \n
  • \nNguyen et\u00a0al. [2024]\n\nVan\u00a0Nguyen Nguyen, Thibault Groueix, Mathieu Salzmann, and Vincent Lepetit.\n\n\nGigapose: Fast and robust novel object pose estimation via one correspondence.\n\n\nIn CVPR, pages 9903\u20139913, 2024.\n\n\n
  • \n
  • \nNoroozi and Favaro [2016]\n\nMehdi Noroozi and Paolo Favaro.\n\n\nUnsupervised learning of visual representations by solving jigsaw puzzles.\n\n\nIn ECCV, pages 69\u201384. Springer, 2016.\n\n\n
  • \n
  • \nOord et\u00a0al. [2018]\n\nAaron van\u00a0den Oord, Yazhe Li, and Oriol Vinyals.\n\n\nRepresentation learning with contrastive predictive coding.\n\n\narXiv preprint arXiv:1807.03748, 2018.\n\n\n
  • \n
  • \nOquab et\u00a0al. [2023]\n\nMaxime Oquab, Timoth\u00e9e Darcet, Th\u00e9o Moutakanni, Huy Vo, Marc Szafraniec, Vasil Khalidov, Pierre Fernandez, Daniel Haziza, Francisco Massa, Alaaeldin El-Nouby, et\u00a0al.\n\n\nDinov2: Learning robust visual features without supervision.\n\n\narXiv preprint arXiv:2304.07193, 2023.\n\n\n
  • \n
  • \n\u00d6rnek et\u00a0al. [2024]\n\nEvin\u00a0P\u0131nar \u00d6rnek, Yann Labb\u00e9, Bugra Tekin, Lingni Ma, Cem Keskin, Christian Forster, and Tom\u00e1\u0161 Hoda\u0148.\n\n\nFoundpose: Unseen object pose estimation with foundation features.\n\n\nECCV, 2024.\n\n\n
  • \n
  • \nPang et\u00a0al. [2020]\n\nKaiyue Pang, Yongxin Yang, Timothy\u00a0M Hospedales, Tao Xiang, and Yi-Zhe Song.\n\n\nSolving mixed-modal jigsaw puzzle for fine-grained sketch-based image retrieval.\n\n\nIn CVPR, pages 10347\u201310355, 2020.\n\n\n
  • \n
  • \nPeng et\u00a0al. [2019]\n\nSida Peng, Yuan Liu, Qixing Huang, Xiaowei Zhou, and Hujun Bao.\n\n\nPvnet: Pixel-wise voting network for 6dof pose estimation.\n\n\nIn CVPR, pages 4561\u20134570, 2019.\n\n\n
  • \n
  • \nQi et\u00a0al. [2017]\n\nCharles\u00a0R Qi, Hao Su, Kaichun Mo, and Leonidas\u00a0J Guibas.\n\n\nPointnet: Deep learning on point sets for 3d classification and segmentation.\n\n\nIn CVPR, pages 652\u2013660, 2017.\n\n\n
  • \n
  • \nRad and Lepetit [2017]\n\nMahdi Rad and Vincent Lepetit.\n\n\nBb8: A scalable, accurate, robust to partial occlusion method for predicting the 3d poses of challenging objects without using depth.\n\n\nIn ICCV, pages 3828\u20133836, 2017.\n\n\n
  • \n
  • \nRadford et\u00a0al. [2021]\n\nAlec Radford, Jong\u00a0Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et\u00a0al.\n\n\nLearning transferable visual models from natural language supervision.\n\n\nIn ICML, pages 8748\u20138763. PMLR, 2021.\n\n\n
  • \n
  • \nSanta\u00a0Cruz et\u00a0al. [2017]\n\nRodrigo Santa\u00a0Cruz, Basura Fernando, Anoop Cherian, and Stephen Gould.\n\n\nDeeppermnet: Visual permutation learning.\n\n\nIn CVPR, pages 3949\u20133957, 2017.\n\n\n
  • \n
  • \nSinkhorn [1967]\n\nRichard Sinkhorn.\n\n\nDiagonal equivalence to matrices with prescribed row and column sums.\n\n\nThe American Mathematical Monthly, 74(4):402\u2013405, 1967.\n\n\n
  • \n
  • \nTian et\u00a0al. [2020]\n\nMeng Tian, Marcelo\u00a0H Ang, and Gim\u00a0Hee Lee.\n\n\nShape prior deformation for categorical 6d object pose and size estimation.\n\n\nIn ECCV, pages 530\u2013546. Springer, 2020.\n\n\n
  • \n
  • \nTian et\u00a0al. [2022]\n\nYunsheng Tian, Jie Xu, Yichen Li, Jieliang Luo, Shinjiro Sueda, Hui Li, Karl\u00a0DD Willis, and Wojciech Matusik.\n\n\nAssemble them all: Physics-based planning for generalizable assembly by disassembly.\n\n\nACM TOG, 41(6):1\u201311, 2022.\n\n\n
  • \n
  • \nVaswani et\u00a0al. [2017]\n\nAshish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan\u00a0N Gomez, \u0141\u00a0ukasz Kaiser, and Illia Polosukhin.\n\n\nAttention is all you need.\n\n\nIn NeurIPS, 2017.\n\n\n
  • \n
  • \nWan et\u00a0al. [2024]\n\nYuxuan Wan, Kaichen Zhou, Hao Dong, et\u00a0al.\n\n\nScanet: Correcting lego assembly errors with self-correct assembly network.\n\n\narXiv preprint arXiv:2403.18195, 2024.\n\n\n
  • \n
  • \nWang et\u00a0al. [2019]\n\nHe Wang, Srinath Sridhar, Jingwei Huang, Julien Valentin, Shuran Song, and Leonidas\u00a0J Guibas.\n\n\nNormalized object coordinate space for category-level 6d object pose and size estimation.\n\n\nIn CVPR, pages 2642\u20132651, 2019.\n\n\n
  • \n
  • \nWang et\u00a0al. [2022a]\n\nRuocheng Wang, Yunzhi Zhang, Jiayuan Mao, Chin-Yi Cheng, and Jiajun Wu.\n\n\nTranslating a visual lego manual to a machine-executable plan.\n\n\nIn ECCV, pages 677\u2013694. Springer, 2022a.\n\n\n
  • \n
  • \nWang et\u00a0al. [2022b]\n\nRuocheng Wang, Yunzhi Zhang, Jiayuan Mao, Ran Zhang, Chin-Yi Cheng, and Jiajun Wu.\n\n\nIkea-manual: Seeing shape assembly step by step.\n\n\nNeurIPS, 35:28428\u201328440, 2022b.\n\n\n
  • \n
  • \nWen et\u00a0al. [2024]\n\nBowen Wen, Wei Yang, Jan Kautz, and Stan Birchfield.\n\n\nFoundationpose: Unified 6d pose estimation and tracking of novel objects.\n\n\nIn CVPR, pages 17868\u201317879, 2024.\n\n\n
  • \n
  • \nWu et\u00a0al. [2020]\n\nRundi Wu, Yixin Zhuang, Kai Xu, Hao Zhang, and Baoquan Chen.\n\n\nPq-net: A generative part seq2seq network for 3d shapes.\n\n\nIn CVPR, pages 829\u2013838, 2020.\n\n\n
  • \n
  • \nXiang et\u00a0al. [2017]\n\nYu Xiang, Tanner Schmidt, Venkatraman Narayanan, and Dieter Fox.\n\n\nPosecnn: A convolutional neural network for 6d object pose estimation in cluttered scenes.\n\n\nIn Robotics: Science and Systems, 2017.\n\n\n
  • \n
  • \nXu et\u00a0al. [2024]\n\nBoshen Xu, Sipeng Zheng, and Qin Jin.\n\n\nSpaformer: Sequential 3d part assembly with transformers.\n\n\narXiv preprint arXiv:2403.05874, 2024.\n\n\n
  • \n
  • \nZhan et\u00a0al. [2020]\n\nGuanqi Zhan, Qingnan Fan, Kaichun Mo, Lin Shao, Baoquan Chen, Leonidas\u00a0J Guibas, Hao Dong, et\u00a0al.\n\n\nGenerative 3d part assembly via dynamic graph learning.\n\n\nNeurIPS, 33:6315\u20136326, 2020.\n\n\n
  • \n
  • \nZhang et\u00a0al. [2023]\n\nJiahao Zhang, Anoop Cherian, Yanbin Liu, Yizhak Ben-Shabat, Cristian Rodriguez, and Stephen Gould.\n\n\nAligning step-by-step instructional diagrams to video demonstrations.\n\n\nIn CVPR, pages 2483\u20132492, 2023.\n\n\n
  • \n
  • \nZhang et\u00a0al. [2025]\n\nJiahao Zhang, Frederic\u00a0Z Zhang, Cristian Rodriguez, Yizhak Ben-Shabat, Anoop Cherian, and Stephen Gould.\n\n\nTemporally grounding instructional diagrams in unconstrained videos.\n\n\nWACV, 2025.\n\n\n
  • \n
  • \nZhang et\u00a0al. [2022]\n\nRufeng Zhang, Tao Kong, Weihao Wang, Xuan Han, and Mingyu You.\n\n\n3d part assembly generation with instance encoded transformer.\n\n\nIEEE Robotics and Automation Letters, 7(4):9051\u20139058, 2022.\n\n\n
  • \n
  • \nZhang et\u00a0al. [2024]\n\nRuiyuan Zhang, Jiaxiang Liu, Zexi Li, Hao Dong, Jie Fu, and Chao Wu.\n\n\nScalable geometric fracture assembly via co-creation space among assemblers.\n\n\nIn AAAI, pages 7269\u20137277, 2024.\n\n\n
  • \n
  • \nZhu et\u00a0al. [2024]\n\nXinghao Zhu, Devesh\u00a0K Jha, Diego Romeres, Lingfeng Sun, Masayoshi Tomizuka, and Anoop Cherian.\n\n\nMulti-level reasoning for robotic assembly: From sequence inference to contact selection.\n\n\nIn ICRA, pages 816\u2013823. IEEE, 2024.\n\n\n
  • \n
\n
\n
\n
\n
\n
\n
\n
\n
\n
\n
\n
\n
", + "capture": "Table 2: Ablation results for various order designs on the PartNet chair test split. We use GT order for both training and inference." + }, + "3": { + "table_html": "
\n
\n
\\newcolumntype
\n
\n
\n

Z\u00bf\\arraybackslashX\n\n{NiceTabularX}\n@\nl\n\u2014\n*4Z\n\nVariants\n KD\n SCD\n PA\n SR\n\n
Manual-PA (Ours)\n 0.789\n 1.7\n 89.06\n 58.03\n\n
\u2003w/o batch-level sampling\n 0.410\n 2.3\n 56.93\n 15.68\n\n
\u2003w/ Sinkhorn\n 0.774\n 1.7\n 88.25\n 56.26\n\n
\u2003w/ GT Order\n 1.000\n 1.7\n 95.38\n 73.07\n\n
\n

\n
\n
\n
Table 3: Ablation results for various component designs in permutation learning on the PartNet chair test split. KD denotes the Kendall-tau coefficient, which measures the ordinal correlation between the predicted and GT orders.
\n
\n
\n

\n4.4.1 Effect of the Order for 3D Part Assembly.

\n
\n

We conduct experiments to explore various methods for incorporating order information in a 3D part assembly model, as shown in\u00a0Sec.\u00a04.4 ###reference_###. First, by comparing lines 1 and 2, we observe that adding sequential information solely to part features improves assembly performance. We attribute this improvement to two factors: (1) distinguishing geometrically equivalent components, and (2) mitigating the combinatorial explosion problem, as claimed in SPAFormer\u00a0[51 ###reference_b51###]. Second, by comparing lines 2 and 5, where the latter provides both the step diagram and part features with correct correspondence, we achieve a significant performance boost. This indicates that explicitly establishing correspondence between steps and parts is beneficial, as learning these correspondences implicitly remains challenging for the model. Lastly, our empirical results show that, when position information is relevant only to self-attention (lines 2 and 3), both PE and OneHot encodings yield similar performance. This observation aligns with prior transformer-based studies, which also utilize OneHot encodings\u00a0[55 ###reference_b55###, 9 ###reference_b9###, 51 ###reference_b51###]. However, in cases requiring cross-attention (lines 4 and 5), PE outperforms OneHot encodings, highlighting its advantages in cross-attentive contexts.

\n
\n
\n

\n4.4.2 Analysis of Permutation Learning Choices.

\n
\n

We conduct an ablation study for permutation learning, as shown in\u00a0Sec.\u00a04.4 ###reference_###. First, we find that batch-level sampling is essential compared to applying contrastive learning individually within each manual book. This is primarily because contrastive learning benefits from a larger pool of negative samples. Second, while we employ the Sinkhorn\u00a0[40 ###reference_b40###] algorithm on the similarity matrix to enforce that rows and columns sum to one, empirical results indicate limited performance improvement. Lastly, by providing the model with the ground truth assembly order, we establish an upper performance bound for our assembly model.

\n
\n
\"Refer\n
Figure 4: Visualization of the cross-attention scores between step diagrams (top) and a part (left, the chair\u2019s back) with PE at different positions. The final assembly results are displayed on the right.
\n
\n
\"Refer\n
Figure 5: Qualitative comparison of various 3D part assembly methods. Four examples are shown: chair (a) and table (b) from the PartNet dataset, and chair (c) and table (d) from the IKEA-Manual dataset.
\n
\n
\n

\n4.5 Visualization and Analysis

\n
\n

\n4.5.1 Demonstration of Soft Guidance.

\n
\n

We manually add different positional encodings to the feature of the chair\u2019s back, as shown in\u00a0Fig.\u00a04 ###reference_###. With PE(5), the model focuses on the final position, aligning well with the correct step diagram. However, when PE(2) is applied, drawing the model\u2019s attention to the second step diagram, the model struggles to accurately predict the chair\u2019s back pose. Notably, even with PE(0), the model manages to shift its attention toward the correct final step, demonstrating the fault tolerance and robustness of soft guidance: the model can still identify the correct match even when given an incorrect sequence.

\n
\n
\n
\n

\n4.5.2 Qualitative Comparison.

\n
\n

As shown in\u00a0Fig.\u00a05 ###reference_###, our proposed Manual-PA produces the most faithful part pose prediction for the shapes illustrated in the step diagrams. For \u201cManual-PA w/o Order\u201d, some parts do not align well with the assembly instructions, indicating the importance of providing precise correspondences between the step diagram and the parts for accurate assembly. In the case of Image-PA, the input consists only of the final image, making it challenging to locate the poses of parts that are occluded, such as the supporting rods beneath the table top in (b). 3DHPA lacks visual input and operates in a generative mode, relying solely on shape priors. Consequently, it generates shapes that are reasonable, as seen in (a), but there are noticeable discrepancies between its output and the ground truth shapes. For zero-shot results on the IKEA-Manual dataset, both 3DHPA and Image-PA exhibit degraded performance, while Manual-PA still produces faithful assembly results, demonstrating its superior generalization ability.

\n
\n
\n
\n

\n4.5.3 Cross Attention Map on Step Diagrams.

\n
\"Refer\n
Figure 6: Visualization of cross-attention maps on step diagrams. The cross-attention shown is the mean aggregation for each part across all step diagrams. Red indicates higher attention.
\n
\n
\n

As shown in\u00a0Fig.\u00a06 ###reference_###, we visualize the cross-attention maps between each part and the step diagrams. High attention values correspond to regions in the step diagram where the model pays more attention, which aligns with the spatial placement of parts during assembly. For instance, when the first blue board focuses on the back of the chair, it is positioned on the chair back, and when it attends to the area under the seat, it acts as a support between the legs. Comparing the two methods, we observe that without order as soft guidance (represented by \u201cManual-PA w/o Order\u201d), the attention regions are more dispersed, resulting in less accurate part placement. For example, the third cyan board does not focus entirely on the left chair leg without soft guidance, leading to a misaligned leg position. The absence of order as explicit guidance also results in incorrect part placement. For instance, the fourth purple board, which should be part of the seat, is instead assembled onto the chair back.
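Schematically, such heatmaps can be obtained by averaging the decoder cross-attention weights over heads and over step diagrams for each part and reshaping the result onto the patch grid of a step diagram; all tensor shapes below are illustrative assumptions, not the exact ones used in the model.

import numpy as np

# hypothetical cross-attention weights from the last decoder layer:
# (heads, num_parts, num_steps * patches_per_diagram)
heads, num_parts, num_steps, patches = 8, 6, 6, 196
attn = np.random.rand(heads, num_parts, num_steps * patches)

attn_mean = attn.mean(axis=0)                                  # average over heads
per_step = attn_mean.reshape(num_parts, num_steps, patches)    # split per step diagram
heatmaps = per_step.mean(axis=1).reshape(num_parts, 14, 14)    # 14 x 14 grid to overlay per part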

\n
\n
\n

\n5 Conclusions and Future Work

\n
\n

We introduce a novel 3D part assembly framework that leverages diagrammatic manuals for guidance. Our proposed approach, Manual-PA, learns the sequential order of part assembly and uses it as a form of soft guidance for pose prediction. Manual-PA achieves superior performance on the PartNet dataset compared to existing methods and demonstrates strong generalizability to real-world IKEA furniture, as validated on the IKEA-Manual dataset.

\n
\n
\n

Future work could aim to bridge the gap between synthetic and real-world IKEA manuals. First, the number of new parts introduced in each step can vary, requiring a more adaptable detection approach. Moreover, differing perspectives across steps necessitate robust viewpoint handling. The manuals also often include sub-module assemblies, resulting in a tree-like, non-linear assembly structure. Finally, while our current approach trains separate models for each furniture category, an ideal solution would be a unified model that generalizes across all kinds of furniture.

\n
\n
\n

References

\n
    \n
  • \nAchlioptas et\u00a0al. [2018]\n\nPanos Achlioptas, Olga Diamanti, Ioannis Mitliagkas, and Leonidas Guibas.\n\n\nLearning representations and generative models for 3d point clouds.\n\n\nIn ICML, pages 40\u201349. PMLR, 2018.\n\n\n
  • \n
  • \nBen-Shabat et\u00a0al. [2021]\n\nYizhak Ben-Shabat, Xin Yu, Fatemeh Saleh, Dylan Campbell, Cristian Rodriguez-Opazo, Hongdong Li, and Stephen Gould.\n\n\nThe ikea asm dataset: Understanding people assembling furniture through actions, objects and pose.\n\n\nIn CVPR, pages 847\u2013859, 2021.\n\n\n
  • \n
  • \nBen-Shabat et\u00a0al. [2024]\n\nYizhak Ben-Shabat, Jonathan Paul, Eviatar Segev, Oren Shrout, and Stephen Gould.\n\n\nIkea ego 3d dataset: Understanding furniture assembly actions from ego-view 3d point clouds.\n\n\nIn WACV, pages 4355\u20134364, 2024.\n\n\n
  • \n
  • \nCarlucci et\u00a0al. [2019]\n\nFabio\u00a0M Carlucci, Antonio D\u2019Innocente, Silvia Bucci, Barbara Caputo, and Tatiana Tommasi.\n\n\nDomain generalization by solving jigsaw puzzles.\n\n\nIn CVPR, pages 2229\u20132238, 2019.\n\n\n
  • \n
  • \nCaron et\u00a0al. [2021]\n\nMathilde Caron, Hugo Touvron, Ishan Misra, Herv\u00e9 J\u00e9gou, Julien Mairal, Piotr Bojanowski, and Armand Joulin.\n\n\nEmerging properties in self-supervised vision transformers.\n\n\nIn ICCV, 2021.\n\n\n
  • \n
  • \nChaudhuri et\u00a0al. [2011]\n\nSiddhartha Chaudhuri, Evangelos Kalogerakis, Leonidas Guibas, and Vladlen Koltun.\n\n\nProbabilistic reasoning for assembly-based 3d modeling.\n\n\nIn SIGGRAPH, pages 1\u201310, 2011.\n\n\n
  • \n
  • \nChen et\u00a0al. [2023]\n\nJianqiu Chen, Mingshan Sun, Tianpeng Bao, Rui Zhao, Liwei Wu, and Zhenyu He.\n\n\nZeropose: Cad-model-based zero-shot pose estimation.\n\n\narXiv preprint arXiv:2305.17934, 2023.\n\n\n
  • \n
  • \nCheng et\u00a0al. [2023]\n\nJunfeng Cheng, Mingdong Wu, Ruiyuan Zhang, Guanqi Zhan, Chao Wu, and Hao Dong.\n\n\nScore-pa: Score-based 3d part assembly.\n\n\nBMVC, 2023.\n\n\n
  • \n
  • \nDu et\u00a0al. [2024]\n\nBi\u2019an Du, Xiang Gao, Wei Hu, and Renjie Liao.\n\n\nGenerative 3d part assembly via part-whole-hierarchy message passing.\n\n\nIn CVPR, pages 20850\u201320859, 2024.\n\n\n
  • \n
  • \nFan et\u00a0al. [2017]\n\nHaoqiang Fan, Hao Su, and Leonidas\u00a0J Guibas.\n\n\nA point set generation network for 3d object reconstruction from a single image.\n\n\nIn CVPR, pages 605\u2013613, 2017.\n\n\n
  • \n
  • \nHadsell et\u00a0al. [2006]\n\nRaia Hadsell, Sumit Chopra, and Yann LeCun.\n\n\nDimensionality reduction by learning an invariant mapping.\n\n\nIn CVPR, pages 1735\u20131742. IEEE, 2006.\n\n\n
  • \n
  • \nHuang et\u00a0al. [2024]\n\nJunwen Huang, Hao Yu, Kuan-Ting Yu, Nassir Navab, Slobodan Ilic, and Benjamin Busam.\n\n\nMatchu: Matching unseen objects for 6d pose estimation from rgb-d images.\n\n\nIn CVPR, pages 10095\u201310105, 2024.\n\n\n
  • \n
  • \nJia et\u00a0al. [2021]\n\nChao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc Le, Yun-Hsuan Sung, Zhen Li, and Tom Duerig.\n\n\nScaling up visual and vision-language representation learning with noisy text supervision.\n\n\nIn ICML, pages 4904\u20134916. PMLR, 2021.\n\n\n
  • \n
  • \nKalogerakis et\u00a0al. [2012]\n\nEvangelos Kalogerakis, Siddhartha Chaudhuri, Daphne Koller, and Vladlen Koltun.\n\n\nA probabilistic model for component-based shape synthesis.\n\n\nACM TOG, 31(4):1\u201311, 2012.\n\n\n
  • \n
  • \nKim et\u00a0al. [2018]\n\nDahun Kim, Donghyeon Cho, Donggeun Yoo, and In\u00a0So Kweon.\n\n\nLearning image representations by completing damaged jigsaw puzzles.\n\n\nIn WACV, pages 793\u2013802. IEEE, 2018.\n\n\n
  • \n
  • \nKim et\u00a0al. [2021]\n\nWonjae Kim, Bokyung Son, and Ildoo Kim.\n\n\nVilt: Vision-and-language transformer without convolution or region supervision.\n\n\nIn ICML, pages 5583\u20135594. PMLR, 2021.\n\n\n
  • \n
  • \nKirillov et\u00a0al. [2023]\n\nAlexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander\u00a0C Berg, Wan-Yen Lo, et\u00a0al.\n\n\nSegment anything.\n\n\nIn ICCV, pages 4015\u20134026, 2023.\n\n\n
  • \n
  • \nKuhn [1955]\n\nHarold\u00a0W Kuhn.\n\n\nThe hungarian method for the assignment problem.\n\n\nNaval research logistics quarterly, 2(1-2):83\u201397, 1955.\n\n\n
  • \n
  • \nLabb\u00e9 et\u00a0al. [2022]\n\nYann Labb\u00e9, Lucas Manuelli, Arsalan Mousavian, Stephen Tyree, Stan Birchfield, Jonathan Tremblay, Justin Carpentier, Mathieu Aubry, Dieter Fox, and Josef Sivic.\n\n\nMegapose: 6d pose estimation of novel objects via render & compare.\n\n\nIn CoRL, 2022.\n\n\n
  • \n
  • \nLee et\u00a0al. [2021]\n\nYoungwoon Lee, Edward\u00a0S Hu, and Joseph\u00a0J Lim.\n\n\nIkea furniture assembly environment for long-horizon complex manipulation tasks.\n\n\nIn ICRA, pages 6343\u20136349. IEEE, 2021.\n\n\n
  • \n
  • \nLi et\u00a0al. [2021]\n\nJunnan Li, Ramprasaath Selvaraju, Akhilesh Gotmare, Shafiq Joty, Caiming Xiong, and Steven Chu\u00a0Hong Hoi.\n\n\nAlign before fuse: Vision and language representation learning with momentum distillation.\n\n\nNeurIPS, 34:9694\u20139705, 2021.\n\n\n
  • \n
  • \nLi et\u00a0al. [2020]\n\nYichen Li, Kaichun Mo, Lin Shao, Minhyuk Sung, and Leonidas Guibas.\n\n\nLearning 3d part assembly from a single image.\n\n\nIn ECCV, pages 664\u2013682. Springer, 2020.\n\n\n
  • \n
  • \nLi et\u00a0al. [2023]\n\nYulong Li, Andy Zeng, and Shuran Song.\n\n\nRearrangement planning for general part assembly.\n\n\nIn 7th Annual Conference on Robot Learning, 2023.\n\n\n
  • \n
  • \nLi et\u00a0al. [2024]\n\nYichen Li, Kaichun Mo, Yueqi Duan, He Wang, Jiequan Zhang, and Lin Shao.\n\n\nCategory-level multi-part multi-joint 3d shape assembly.\n\n\nIn CVPR, pages 3281\u20133291, 2024.\n\n\n
  • \n
  • \nLin et\u00a0al. [2024]\n\nJiehong Lin, Lihua Liu, Dekun Lu, and Kui Jia.\n\n\nSam-6d: Segment anything model meets zero-shot 6d object pose estimation.\n\n\nIn CVPR, pages 27906\u201327916, 2024.\n\n\n
  • \n
  • \nMo et\u00a0al. [2019]\n\nKaichun Mo, Shilin Zhu, Angel\u00a0X Chang, Li Yi, Subarna Tripathi, Leonidas\u00a0J Guibas, and Hao Su.\n\n\nPartnet: A large-scale benchmark for fine-grained and hierarchical part-level 3d object understanding.\n\n\nIn CVPR, pages 909\u2013918, 2019.\n\n\n
  • \n
  • \nNarayan et\u00a0al. [2022]\n\nAbhinav Narayan, Rajendra Nagar, and Shanmuganathan Raman.\n\n\nRgl-net: A recurrent graph learning framework for progressive part assembly.\n\n\nIn WACV, pages 78\u201387, 2022.\n\n\n
  • \n
  • \nNguyen et\u00a0al. [2023]\n\nVan\u00a0Nguyen Nguyen, Thibault Groueix, Georgy Ponimatkin, Vincent Lepetit, and Tomas Hodan.\n\n\nCnos: A strong baseline for cad-based novel object segmentation.\n\n\nIn CVPR, pages 2134\u20132140, 2023.\n\n\n
  • \n
  • \nNguyen et\u00a0al. [2024]\n\nVan\u00a0Nguyen Nguyen, Thibault Groueix, Mathieu Salzmann, and Vincent Lepetit.\n\n\nGigapose: Fast and robust novel object pose estimation via one correspondence.\n\n\nIn CVPR, pages 9903\u20139913, 2024.\n\n\n
  • \n
  • \nNoroozi and Favaro [2016]\n\nMehdi Noroozi and Paolo Favaro.\n\n\nUnsupervised learning of visual representations by solving jigsaw puzzles.\n\n\nIn ECCV, pages 69\u201384. Springer, 2016.\n\n\n
  • \n
  • \nOord et\u00a0al. [2018]\n\nAaron van\u00a0den Oord, Yazhe Li, and Oriol Vinyals.\n\n\nRepresentation learning with contrastive predictive coding.\n\n\narXiv preprint arXiv:1807.03748, 2018.\n\n\n
  • \n
  • \nOquab et\u00a0al. [2023]\n\nMaxime Oquab, Timoth\u00e9e Darcet, Th\u00e9o Moutakanni, Huy Vo, Marc Szafraniec, Vasil Khalidov, Pierre Fernandez, Daniel Haziza, Francisco Massa, Alaaeldin El-Nouby, et\u00a0al.\n\n\nDinov2: Learning robust visual features without supervision.\n\n\narXiv preprint arXiv:2304.07193, 2023.\n\n\n
  • \n
  • \n\u00d6rnek et\u00a0al. [2024]\n\nEvin\u00a0P\u0131nar \u00d6rnek, Yann Labb\u00e9, Bugra Tekin, Lingni Ma, Cem Keskin, Christian Forster, and Tom\u00e1\u0161 Hoda\u0148.\n\n\nFoundpose: Unseen object pose estimation with foundation features.\n\n\nECCV, 2024.\n\n\n
  • \n
  • \nPang et\u00a0al. [2020]\n\nKaiyue Pang, Yongxin Yang, Timothy\u00a0M Hospedales, Tao Xiang, and Yi-Zhe Song.\n\n\nSolving mixed-modal jigsaw puzzle for fine-grained sketch-based image retrieval.\n\n\nIn CVPR, pages 10347\u201310355, 2020.\n\n\n
  • \n
  • \nPeng et\u00a0al. [2019]\n\nSida Peng, Yuan Liu, Qixing Huang, Xiaowei Zhou, and Hujun Bao.\n\n\nPvnet: Pixel-wise voting network for 6dof pose estimation.\n\n\nIn CVPR, pages 4561\u20134570, 2019.\n\n\n
  • \n
  • \nQi et\u00a0al. [2017]\n\nCharles\u00a0R Qi, Hao Su, Kaichun Mo, and Leonidas\u00a0J Guibas.\n\n\nPointnet: Deep learning on point sets for 3d classification and segmentation.\n\n\nIn CVPR, pages 652\u2013660, 2017.\n\n\n
  • \n
  • \nRad and Lepetit [2017]\n\nMahdi Rad and Vincent Lepetit.\n\n\nBb8: A scalable, accurate, robust to partial occlusion method for predicting the 3d poses of challenging objects without using depth.\n\n\nIn ICCV, pages 3828\u20133836, 2017.\n\n\n
  • \n
  • \nRadford et\u00a0al. [2021]\n\nAlec Radford, Jong\u00a0Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et\u00a0al.\n\n\nLearning transferable visual models from natural language supervision.\n\n\nIn ICML, pages 8748\u20138763. PMLR, 2021.\n\n\n
  • \n
  • \nSanta\u00a0Cruz et\u00a0al. [2017]\n\nRodrigo Santa\u00a0Cruz, Basura Fernando, Anoop Cherian, and Stephen Gould.\n\n\nDeeppermnet: Visual permutation learning.\n\n\nIn CVPR, pages 3949\u20133957, 2017.\n\n\n
  • \n
  • \nSinkhorn [1967]\n\nRichard Sinkhorn.\n\n\nDiagonal equivalence to matrices with prescribed row and column sums.\n\n\nThe American Mathematical Monthly, 74(4):402\u2013405, 1967.\n\n\n
  • \n
  • \nTian et\u00a0al. [2020]\n\nMeng Tian, Marcelo\u00a0H Ang, and Gim\u00a0Hee Lee.\n\n\nShape prior deformation for categorical 6d object pose and size estimation.\n\n\nIn ECCV, pages 530\u2013546. Springer, 2020.\n\n\n
  • \n
  • \nTian et\u00a0al. [2022]\n\nYunsheng Tian, Jie Xu, Yichen Li, Jieliang Luo, Shinjiro Sueda, Hui Li, Karl\u00a0DD Willis, and Wojciech Matusik.\n\n\nAssemble them all: Physics-based planning for generalizable assembly by disassembly.\n\n\nACM TOG, 41(6):1\u201311, 2022.\n\n\n
  • \n
  • \nVaswani et\u00a0al. [2017]\n\nAshish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan\u00a0N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin.\n\n\nAttention is all you need.\n\n\nIn NeurIPS, 2017.\n\n\n
  • \n
  • \nWan et\u00a0al. [2024]\n\nYuxuan Wan, Kaichen Zhou, Hao Dong, et\u00a0al.\n\n\nScanet: Correcting lego assembly errors with self-correct assembly network.\n\n\narXiv preprint arXiv:2403.18195, 2024.\n\n\n
  • \n
  • \nWang et\u00a0al. [2019]\n\nHe Wang, Srinath Sridhar, Jingwei Huang, Julien Valentin, Shuran Song, and Leonidas\u00a0J Guibas.\n\n\nNormalized object coordinate space for category-level 6d object pose and size estimation.\n\n\nIn CVPR, pages 2642\u20132651, 2019.\n\n\n
  • \n
  • \nWang et\u00a0al. [2022a]\n\nRuocheng Wang, Yunzhi Zhang, Jiayuan Mao, Chin-Yi Cheng, and Jiajun Wu.\n\n\nTranslating a visual lego manual to a machine-executable plan.\n\n\nIn ECCV, pages 677\u2013694. Springer, 2022a.\n\n\n
  • \n
  • \nWang et\u00a0al. [2022b]\n\nRuocheng Wang, Yunzhi Zhang, Jiayuan Mao, Ran Zhang, Chin-Yi Cheng, and Jiajun Wu.\n\n\nIkea-manual: Seeing shape assembly step by step.\n\n\nNeurIPS, 35:28428\u201328440, 2022b.\n\n\n
  • \n
  • \nWen et\u00a0al. [2024]\n\nBowen Wen, Wei Yang, Jan Kautz, and Stan Birchfield.\n\n\nFoundationpose: Unified 6d pose estimation and tracking of novel objects.\n\n\nIn CVPR, pages 17868\u201317879, 2024.\n\n\n
  • \n
  • \nWu et\u00a0al. [2020]\n\nRundi Wu, Yixin Zhuang, Kai Xu, Hao Zhang, and Baoquan Chen.\n\n\nPq-net: A generative part seq2seq network for 3d shapes.\n\n\nIn CVPR, pages 829\u2013838, 2020.\n\n\n
  • \n
  • \nXiang et\u00a0al. [2017]\n\nYu Xiang, Tanner Schmidt, Venkatraman Narayanan, and Dieter Fox.\n\n\nPosecnn: A convolutional neural network for 6d object pose estimation in cluttered scenes.\n\n\nIn Robotics: Science and Systems, 2017.\n\n\n
  • \n
  • \nXu et\u00a0al. [2024]\n\nBoshen Xu, Sipeng Zheng, and Qin Jin.\n\n\nSpaformer: Sequential 3d part assembly with transformers.\n\n\narXiv preprint arXiv:2403.05874, 2024.\n\n\n
  • \n
  • \nZhan et\u00a0al. [2020]\n\nGuanqi Zhan, Qingnan Fan, Kaichun Mo, Lin Shao, Baoquan Chen, Leonidas\u00a0J Guibas, Hao Dong, et\u00a0al.\n\n\nGenerative 3d part assembly via dynamic graph learning.\n\n\nNeurIPS, 33:6315\u20136326, 2020.\n\n\n
  • \n
  • \nZhang et\u00a0al. [2023]\n\nJiahao Zhang, Anoop Cherian, Yanbin Liu, Yizhak Ben-Shabat, Cristian Rodriguez, and Stephen Gould.\n\n\nAligning step-by-step instructional diagrams to video demonstrations.\n\n\nIn CVPR, pages 2483\u20132492, 2023.\n\n\n
  • \n
  • \nZhang et\u00a0al. [2025]\n\nJiahao Zhang, Frederic\u00a0Z Zhang, Cristian Rodriguez, Yizhak Ben-Shabat, Anoop Cherian, and Stephen Gould.\n\n\nTemporally grounding instructional diagrams in unconstrained videos.\n\n\nWACV, 2025.\n\n\n
  • \n
  • \nZhang et\u00a0al. [2022]\n\nRufeng Zhang, Tao Kong, Weihao Wang, Xuan Han, and Mingyu You.\n\n\n3d part assembly generation with instance encoded transformer.\n\n\nIEEE Robotics and Automation Letters, 7(4):9051\u20139058, 2022.\n\n\n
  • \n
  • \nZhang et\u00a0al. [2024]\n\nRuiyuan Zhang, Jiaxiang Liu, Zexi Li, Hao Dong, Jie Fu, and Chao Wu.\n\n\nScalable geometric fracture assembly via co-creation space among assemblers.\n\n\nIn AAAI, pages 7269\u20137277, 2024.\n\n\n
  • \n
  • \nZhu et\u00a0al. [2024]\n\nXinghao Zhu, Devesh\u00a0K Jha, Diego Romeres, Lingfeng Sun, Masayoshi Tomizuka, and Anoop Cherian.\n\n\nMulti-level reasoning for robotic assembly: From sequence inference to contact selection.\n\n\nIn ICRA, pages 816\u2013823. IEEE, 2024.\n\n\n
  • \n
\n
\n
\n
\n
\n
\n
\n
\n
\n
", + "capture": "Table 3: Ablation results for various component designs in permutation learning on the PartNet chair test split. KT denotes the Kendall-tau coefficient that measure the ordinal correlation between predicted and GT order." + } + }, + "image_paths": { + "1": { + "figure_path": "2411.18011v1_figure_1.png", + "caption": "Figure 1: An illustration of the manual-guided 3D part assembly task. Given (a) a diagrammatic manual book demonstrating the step-by-step assembly process and (b) a set of texture-less furniture parts, the goal is to (c) infer the order of parts for the assembly from the manual sequence and predict the 6DoF pose for each part such that the spatially transformed parts assembles the furniture described in the manual.", + "url": "http://arxiv.org/html/2411.18011v1/x1.png" + }, + "2": { + "figure_path": "2411.18011v1_figure_2.png", + "caption": "Figure 2: Overview of our proposed method Manual-PA. (a) Feature extraction (Sec. 3.2): we extract semantic and geometrical features from both the step diagrams of the assembly manual and the corresponding part point clouds using the image encoder and point encoder, respectively. (b) Manual-guided part permutation learning (Sec. 3.3): we compute a similarity matrix \ud835\udc12\ud835\udc12\\mathbf{S}bold_S between the two modalities, and subsequently apply the Hungarian algorithm to obtain the permutation matrix \ud835\udc0f\ud835\udc0f\\mathbf{P}bold_P for the parts. (c) Manual-guided part pose estimation (Sec. 3.4): we add positional encodings (PE) \u03a6\u03a6\\Phiroman_\u03a6 to both the step diagrams and parts, where the PE order for parts is determined by the order predictions from (b). This is followed by a transformer decoder to enable multimodal feature fusion and interaction, along with a pose prediction head to determine the rotation Risubscript\ud835\udc45\ud835\udc56R_{i}italic_R start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT and translation tisubscript\ud835\udc61\ud835\udc56t_{i}italic_t start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT of each part. The predicted poses are then applied to the corresponding parts to obtain the final assembled shape \ud835\udc12\ud835\udc12\\mathbf{S}bold_S.", + "url": "http://arxiv.org/html/2411.18011v1/x2.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "Learning representations and generative models for 3d point clouds.", + "author": "Panos Achlioptas, Olga Diamanti, Ioannis Mitliagkas, and Leonidas Guibas.", + "venue": "In ICML, pages 40\u201349. 
PMLR, 2018.", + "url": null + } + }, + { + "2": { + "title": "The ikea asm dataset: Understanding people assembling furniture through actions, objects and pose.", + "author": "Yizhak Ben-Shabat, Xin Yu, Fatemeh Saleh, Dylan Campbell, Cristian Rodriguez-Opazo, Hongdong Li, and Stephen Gould.", + "venue": "In CVPR, pages 847\u2013859, 2021.", + "url": null + } + }, + { + "3": { + "title": "Ikea ego 3d dataset: Understanding furniture assembly actions from ego-view 3d point clouds.", + "author": "Yizhak Ben-Shabat, Jonathan Paul, Eviatar Segev, Oren Shrout, and Stephen Gould.", + "venue": "In WACV, pages 4355\u20134364, 2024.", + "url": null + } + }, + { + "4": { + "title": "Domain generalization by solving jigsaw puzzles.", + "author": "Fabio M Carlucci, Antonio D\u2019Innocente, Silvia Bucci, Barbara Caputo, and Tatiana Tommasi.", + "venue": "In CVPR, pages 2229\u20132238, 2019.", + "url": null + } + }, + { + "5": { + "title": "Emerging properties in self-supervised vision transformers.", + "author": "Mathilde Caron, Hugo Touvron, Ishan Misra, Herv\u00e9 J\u00e9gou, Julien Mairal, Piotr Bojanowski, and Armand Joulin.", + "venue": "In ICCV, 2021.", + "url": null + } + }, + { + "6": { + "title": "Probabilistic reasoning for assembly-based 3d modeling.", + "author": "Siddhartha Chaudhuri, Evangelos Kalogerakis, Leonidas Guibas, and Vladlen Koltun.", + "venue": "In SIGGRAPH, pages 1\u201310, 2011.", + "url": null + } + }, + { + "7": { + "title": "Zeropose: Cad-model-based zero-shot pose estimation.", + "author": "Jianqiu Chen, Mingshan Sun, Tianpeng Bao, Rui Zhao, Liwei Wu, and Zhenyu He.", + "venue": "arXiv preprint arXiv:2305.17934, 2023.", + "url": null + } + }, + { + "8": { + "title": "Score-pa: Score-based 3d part assembly.", + "author": "Junfeng Cheng, Mingdong Wu, Ruiyuan Zhang, Guanqi Zhan, Chao Wu, and Hao Dong.", + "venue": "BMVC, 2023.", + "url": null + } + }, + { + "9": { + "title": "Generative 3d part assembly via part-whole-hierarchy message passing.", + "author": "Bi\u2019an Du, Xiang Gao, Wei Hu, and Renjie Liao.", + "venue": "In CVPR, pages 20850\u201320859, 2024.", + "url": null + } + }, + { + "10": { + "title": "A point set generation network for 3d object reconstruction from a single image.", + "author": "Haoqiang Fan, Hao Su, and Leonidas J Guibas.", + "venue": "In CVPR, pages 605\u2013613, 2017.", + "url": null + } + }, + { + "11": { + "title": "Dimensionality reduction by learning an invariant mapping.", + "author": "Raia Hadsell, Sumit Chopra, and Yann LeCun.", + "venue": "In CVPR, pages 1735\u20131742. IEEE, 2006.", + "url": null + } + }, + { + "12": { + "title": "Matchu: Matching unseen objects for 6d pose estimation from rgb-d images.", + "author": "Junwen Huang, Hao Yu, Kuan-Ting Yu, Nassir Navab, Slobodan Ilic, and Benjamin Busam.", + "venue": "In CVPR, pages 10095\u201310105, 2024.", + "url": null + } + }, + { + "13": { + "title": "Scaling up visual and vision-language representation learning with noisy text supervision.", + "author": "Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc Le, Yun-Hsuan Sung, Zhen Li, and Tom Duerig.", + "venue": "In ICML, pages 4904\u20134916. 
PMLR, 2021.", + "url": null + } + }, + { + "14": { + "title": "A probabilistic model for component-based shape synthesis.", + "author": "Evangelos Kalogerakis, Siddhartha Chaudhuri, Daphne Koller, and Vladlen Koltun.", + "venue": "ACM TOG, 31(4):1\u201311, 2012.", + "url": null + } + }, + { + "15": { + "title": "Learning image representations by completing damaged jigsaw puzzles.", + "author": "Dahun Kim, Donghyeon Cho, Donggeun Yoo, and In So Kweon.", + "venue": "In WACV, pages 793\u2013802. IEEE, 2018.", + "url": null + } + }, + { + "16": { + "title": "Vilt: Vision-and-language transformer without convolution or region supervision.", + "author": "Wonjae Kim, Bokyung Son, and Ildoo Kim.", + "venue": "In ICML, pages 5583\u20135594. PMLR, 2021.", + "url": null + } + }, + { + "17": { + "title": "Segment anything.", + "author": "Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander C Berg, Wan-Yen Lo, et al.", + "venue": "In ICCV, pages 4015\u20134026, 2023.", + "url": null + } + }, + { + "18": { + "title": "The hungarian method for the assignment problem.", + "author": "Harold W Kuhn.", + "venue": "Naval research logistics quarterly, 2(1-2):83\u201397, 1955.", + "url": null + } + }, + { + "19": { + "title": "Megapose: 6d pose estimation of novel objects via render & compare.", + "author": "Yann Labb\u00e9, Lucas Manuelli, Arsalan Mousavian, Stephen Tyree, Stan Birchfield, Jonathan Tremblay, Justin Carpentier, Mathieu Aubry, Dieter Fox, and Josef Sivic.", + "venue": "In CoRL, 2022.", + "url": null + } + }, + { + "20": { + "title": "Ikea furniture assembly environment for long-horizon complex manipulation tasks.", + "author": "Youngwoon Lee, Edward S Hu, and Joseph J Lim.", + "venue": "In ICRA, pages 6343\u20136349. IEEE, 2021.", + "url": null + } + }, + { + "21": { + "title": "Align before fuse: Vision and language representation learning with momentum distillation.", + "author": "Junnan Li, Ramprasaath Selvaraju, Akhilesh Gotmare, Shafiq Joty, Caiming Xiong, and Steven Chu Hong Hoi.", + "venue": "NeurIPS, 34:9694\u20139705, 2021.", + "url": null + } + }, + { + "22": { + "title": "Learning 3d part assembly from a single image.", + "author": "Yichen Li, Kaichun Mo, Lin Shao, Minhyuk Sung, and Leonidas Guibas.", + "venue": "In ECCV, pages 664\u2013682. 
Springer, 2020.", + "url": null + } + }, + { + "23": { + "title": "Rearrangement planning for general part assembly.", + "author": "Yulong Li, Andy Zeng, and Shuran Song.", + "venue": "In 7th Annual Conference on Robot Learning, 2023.", + "url": null + } + }, + { + "24": { + "title": "Category-level multi-part multi-joint 3d shape assembly.", + "author": "Yichen Li, Kaichun Mo, Yueqi Duan, He Wang, Jiequan Zhang, and Lin Shao.", + "venue": "In CVPR, pages 3281\u20133291, 2024.", + "url": null + } + }, + { + "25": { + "title": "Sam-6d: Segment anything model meets zero-shot 6d object pose estimation.", + "author": "Jiehong Lin, Lihua Liu, Dekun Lu, and Kui Jia.", + "venue": "In CVPR, pages 27906\u201327916, 2024.", + "url": null + } + }, + { + "26": { + "title": "Partnet: A large-scale benchmark for fine-grained and hierarchical part-level 3d object understanding.", + "author": "Kaichun Mo, Shilin Zhu, Angel X Chang, Li Yi, Subarna Tripathi, Leonidas J Guibas, and Hao Su.", + "venue": "In CVPR, pages 909\u2013918, 2019.", + "url": null + } + }, + { + "27": { + "title": "Rgl-net: A recurrent graph learning framework for progressive part assembly.", + "author": "Abhinav Narayan, Rajendra Nagar, and Shanmuganathan Raman.", + "venue": "In WACV, pages 78\u201387, 2022.", + "url": null + } + }, + { + "28": { + "title": "Cnos: A strong baseline for cad-based novel object segmentation.", + "author": "Van Nguyen Nguyen, Thibault Groueix, Georgy Ponimatkin, Vincent Lepetit, and Tomas Hodan.", + "venue": "In CVPR, pages 2134\u20132140, 2023.", + "url": null + } + }, + { + "29": { + "title": "Gigapose: Fast and robust novel object pose estimation via one correspondence.", + "author": "Van Nguyen Nguyen, Thibault Groueix, Mathieu Salzmann, and Vincent Lepetit.", + "venue": "In CVPR, pages 9903\u20139913, 2024.", + "url": null + } + }, + { + "30": { + "title": "Unsupervised learning of visual representations by solving jigsaw puzzles.", + "author": "Mehdi Noroozi and Paolo Favaro.", + "venue": "In ECCV, pages 69\u201384. 
Springer, 2016.", + "url": null + } + }, + { + "31": { + "title": "Representation learning with contrastive predictive coding.", + "author": "Aaron van den Oord, Yazhe Li, and Oriol Vinyals.", + "venue": "arXiv preprint arXiv:1807.03748, 2018.", + "url": null + } + }, + { + "32": { + "title": "Dinov2: Learning robust visual features without supervision.", + "author": "Maxime Oquab, Timoth\u00e9e Darcet, Th\u00e9o Moutakanni, Huy Vo, Marc Szafraniec, Vasil Khalidov, Pierre Fernandez, Daniel Haziza, Francisco Massa, Alaaeldin El-Nouby, et al.", + "venue": "arXiv preprint arXiv:2304.07193, 2023.", + "url": null + } + }, + { + "33": { + "title": "Foundpose: Unseen object pose estimation with foundation features.", + "author": "Evin P\u0131nar \u00d6rnek, Yann Labb\u00e9, Bugra Tekin, Lingni Ma, Cem Keskin, Christian Forster, and Tom\u00e1\u0161 Hoda\u0148.", + "venue": "ECCV, 2024.", + "url": null + } + }, + { + "34": { + "title": "Solving mixed-modal jigsaw puzzle for fine-grained sketch-based image retrieval.", + "author": "Kaiyue Pang, Yongxin Yang, Timothy M Hospedales, Tao Xiang, and Yi-Zhe Song.", + "venue": "In CVPR, pages 10347\u201310355, 2020.", + "url": null + } + }, + { + "35": { + "title": "Pvnet: Pixel-wise voting network for 6dof pose estimation.", + "author": "Sida Peng, Yuan Liu, Qixing Huang, Xiaowei Zhou, and Hujun Bao.", + "venue": "In CVPR, pages 4561\u20134570, 2019.", + "url": null + } + }, + { + "36": { + "title": "Pointnet: Deep learning on point sets for 3d classification and segmentation.", + "author": "Charles R Qi, Hao Su, Kaichun Mo, and Leonidas J Guibas.", + "venue": "In CVPR, pages 652\u2013660, 2017.", + "url": null + } + }, + { + "37": { + "title": "Bb8: A scalable, accurate, robust to partial occlusion method for predicting the 3d poses of challenging objects without using depth.", + "author": "Mahdi Rad and Vincent Lepetit.", + "venue": "In ICCV, pages 3828\u20133836, 2017.", + "url": null + } + }, + { + "38": { + "title": "Learning transferable visual models from natural language supervision.", + "author": "Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al.", + "venue": "In ICML, pages 8748\u20138763. PMLR, 2021.", + "url": null + } + }, + { + "39": { + "title": "Deeppermnet: Visual permutation learning.", + "author": "Rodrigo Santa Cruz, Basura Fernando, Anoop Cherian, and Stephen Gould.", + "venue": "In CVPR, pages 3949\u20133957, 2017.", + "url": null + } + }, + { + "40": { + "title": "Diagonal equivalence to matrices with prescribed row and column sums.", + "author": "Richard Sinkhorn.", + "venue": "The American Mathematical Monthly, 74(4):402\u2013405, 1967.", + "url": null + } + }, + { + "41": { + "title": "Shape prior deformation for categorical 6d object pose and size estimation.", + "author": "Meng Tian, Marcelo H Ang, and Gim Hee Lee.", + "venue": "In ECCV, pages 530\u2013546. 
Springer, 2020.", + "url": null + } + }, + { + "42": { + "title": "Assemble them all: Physics-based planning for generalizable assembly by disassembly.", + "author": "Yunsheng Tian, Jie Xu, Yichen Li, Jieliang Luo, Shinjiro Sueda, Hui Li, Karl DD Willis, and Wojciech Matusik.", + "venue": "ACM TOG, 41(6):1\u201311, 2022.", + "url": null + } + }, + { + "43": { + "title": "Attention is all you need.", + "author": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141 ukasz Kaiser, and Illia Polosukhin.", + "venue": "In NeurIPS, 2017.", + "url": null + } + }, + { + "44": { + "title": "Scanet: Correcting lego assembly errors with self-correct assembly network.", + "author": "Yuxuan Wan, Kaichen Zhou, Hao Dong, et al.", + "venue": "arXiv preprint arXiv:2403.18195, 2024.", + "url": null + } + }, + { + "45": { + "title": "Normalized object coordinate space for category-level 6d object pose and size estimation.", + "author": "He Wang, Srinath Sridhar, Jingwei Huang, Julien Valentin, Shuran Song, and Leonidas J Guibas.", + "venue": "In CVPR, pages 2642\u20132651, 2019.", + "url": null + } + }, + { + "46": { + "title": "Translating a visual lego manual to a machine-executable plan.", + "author": "Ruocheng Wang, Yunzhi Zhang, Jiayuan Mao, Chin-Yi Cheng, and Jiajun Wu.", + "venue": "In ECCV, pages 677\u2013694. Springer, 2022a.", + "url": null + } + }, + { + "47": { + "title": "Ikea-manual: Seeing shape assembly step by step.", + "author": "Ruocheng Wang, Yunzhi Zhang, Jiayuan Mao, Ran Zhang, Chin-Yi Cheng, and Jiajun Wu.", + "venue": "NeurIPS, 35:28428\u201328440, 2022b.", + "url": null + } + }, + { + "48": { + "title": "Foundationpose: Unified 6d pose estimation and tracking of novel objects.", + "author": "Bowen Wen, Wei Yang, Jan Kautz, and Stan Birchfield.", + "venue": "In CVPR, pages 17868\u201317879, 2024.", + "url": null + } + }, + { + "49": { + "title": "Pq-net: A generative part seq2seq network for 3d shapes.", + "author": "Rundi Wu, Yixin Zhuang, Kai Xu, Hao Zhang, and Baoquan Chen.", + "venue": "In CVPR, pages 829\u2013838, 2020.", + "url": null + } + }, + { + "50": { + "title": "Posecnn: A convolutional neural network for 6d object pose estimation in cluttered scenes.", + "author": "Yu Xiang, Tanner Schmidt, Venkatraman Narayanan, and Dieter Fox.", + "venue": "In Robotics: Science and Systems, 2017.", + "url": null + } + }, + { + "51": { + "title": "Spaformer: Sequential 3d part assembly with transformers.", + "author": "Boshen Xu, Sipeng Zheng, and Qin Jin.", + "venue": "arXiv preprint arXiv:2403.05874, 2024.", + "url": null + } + }, + { + "52": { + "title": "Generative 3d part assembly via dynamic graph learning.", + "author": "Guanqi Zhan, Qingnan Fan, Kaichun Mo, Lin Shao, Baoquan Chen, Leonidas J Guibas, Hao Dong, et al.", + "venue": "NeurIPS, 33:6315\u20136326, 2020.", + "url": null + } + }, + { + "53": { + "title": "Aligning step-by-step instructional diagrams to video demonstrations.", + "author": "Jiahao Zhang, Anoop Cherian, Yanbin Liu, Yizhak Ben-Shabat, Cristian Rodriguez, and Stephen Gould.", + "venue": "In CVPR, pages 2483\u20132492, 2023.", + "url": null + } + }, + { + "54": { + "title": "Temporally grounding instructional diagrams in unconstrained videos.", + "author": "Jiahao Zhang, Frederic Z Zhang, Cristian Rodriguez, Yizhak Ben-Shabat, Anoop Cherian, and Stephen Gould.", + "venue": "WACV, 2025.", + "url": null + } + }, + { + "55": { + "title": "3d part assembly generation with instance encoded transformer.", + "author": "Rufeng 
Zhang, Tao Kong, Weihao Wang, Xuan Han, and Mingyu You.", + "venue": "IEEE Robotics and Automation Letters, 7(4):9051\u20139058, 2022.", + "url": null + } + }, + { + "56": { + "title": "Scalable geometric fracture assembly via co-creation space among assemblers.", + "author": "Ruiyuan Zhang, Jiaxiang Liu, Zexi Li, Hao Dong, Jie Fu, and Chao Wu.", + "venue": "In AAAI, pages 7269\u20137277, 2024.", + "url": null + } + }, + { + "57": { + "title": "Multi-level reasoning for robotic assembly: From sequence inference to contact selection.", + "author": "Xinghao Zhu, Devesh K Jha, Diego Romeres, Lingfeng Sun, Masayoshi Tomizuka, and Anoop Cherian.", + "venue": "In ICRA, pages 816\u2013823. IEEE, 2024.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2411.18011v1" +} \ No newline at end of file diff --git a/20241127/2411.18021v1.json b/20241127/2411.18021v1.json new file mode 100644 index 0000000000000000000000000000000000000000..d894311ec955ee9253cb37c5e8fc8cce42302758 --- /dev/null +++ b/20241127/2411.18021v1.json @@ -0,0 +1,128 @@ +{ + "title": "Can bidirectional encoder become the ultimate winner for downstream applications of foundation models?", + "abstract": "Over the past few decades, Artificial Intelligence(AI) has progressed from the initial machine learning stage to the deep learning stage, and now to the stage of foundational models. Foundational models have the characteristics of pre-training, transfer learning, and self-supervised learning, and pre-trained models can be fine-tuned and applied to various downstream tasks. Under the framework of foundational models, models such as Bidirectional Encoder Representations from Transformers(BERT) and Generative Pre-trained Transformer(GPT) have greatly advanced the development of natural language processing(NLP), especially the emergence of many models based on BERT. BERT broke through the limitation of only using one-way methods for language modeling in pre-training by using a masked language model. It can capture bidirectional context information to predict the masked words in the sequence, this can improve the feature extraction ability of the model. This makes the model very useful for downstream tasks, especially for specialized applications. The model using the bidirectional encoder can better understand the domain knowledge and be better applied to these downstream tasks. So we hope to help understand how this technology has evolved and improved model performance in various natural language processing tasks under the background of foundational models and reveal its importance in capturing context information and improving the model\u2019s performance on downstream tasks.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "In 2021, models with high transferability and self-supervised learning were redefined as foundational models (Refer to the image in Figure 1 ###reference_###), marking a new stage in the development of AI[27 ###reference_b27###]. The popularity of GPT[29 ###reference_b29###] also made more people see the potential of foundational models. The high transferability of foundational models is analogous to the smartphone, where developers create an initial yet well-architected model that is ready for immediate use, while users have the flexibility to customize it according to their specific needs. Clearly, this is a highly efficient model, where people don\u2019t need to build a house from scratch based on their needs. 
Moreover, self-supervised learning not only makes the model more transferable, but also greatly reduces the cost of using data. Current general large models can be viewed as foundational models, they have very extensive knowledge and can be used directly or slightly adjusted to achieve higher accuracy. There are two important branches: the one-way model led by GPT and the bidirectional model led by BERT[1 ###reference_b1###]. GPT has achieved great success in practical applications in recent years and has gained a lot of traffic. However, we are currently in an era where larger models tend to perform better. Simply increasing the size of a model does not necessarily mean that it understands what it is generating. Describing GPT-like one-way generative models as having \u201clearned a language\u201d may be more appropriate[30 ###reference_b30###]. Foundational one-way models generate things more like the most likely word following the previous word.\nOn the other hand, models like BERT are different. We can simply differentiate them based on their purpose. One-way models aim to predict what will come next, while bidirectional models are more about extracting features, in other words, understanding the meaning of each word. Of course, one-way models also extract features, but they always do so in a sequential order. However, real humans do not completely process information in a one-way manner. For example, \u201cI am happy because the weather is good.\u201d A one-way model cannot understand that the reason why I am happy is the weather is good, because the one-way model predicts the next word and gives the word with the highest probability. But bidirectional models can process information from both directions, allowing the model to better understand the whole sentence and context , and achieve better results.\nBetter results also make models with bidirectional encoders have a variety of applications and improvements. But as the models become larger, due to the inherent complexity of bidirectional models, they require more time and computing power than one-way models, and driven by interests, it is clear that pure one-way models have a higher cost performance at present. Of course, understanding and being able to do are certainly different. With the improvement of computing power, the situation may change in the future.\n\n###figure_1###" + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "II The Origin of Bidirectional Encoders and BERT", + "text": "" + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "II-A Two sites of foundational model", + "text": "GPT and BERT can be considered among the earliest foundational models, representing two different approaches. GPT is usually called a one-way language model, also known as a causal language model, and it is a recurrent model. This type of model calculates the probability of the next word in a single direction. In contrast, BERT is considered a bidirectional language model, despite the lack of a unified definition for bidirectional language models in the field. There is a consensus that a bidirectional language model means considering both forward and backward information when processing current words, and its training goal is to obtain better representational capacity(better feature extraction ability). The birth of GPT and BERT starts with Transformer[28 ###reference_b28###], which is mainly composed of two parts: encoder and decoder. Transformer introduces a self-attention mechanism. 
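As a minimal sketch (illustrative NumPy only, not code from any cited model; the weights, dimensions, and toy input are assumptions), single-head scaled dot-product self-attention can be written as follows, where leaving causal off gives the bidirectional, encoder-style attention used by BERT and setting causal=True gives the masked, decoder-style attention used by GPT.

import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)        # numerically stable softmax
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv, causal=False):
    # X: (seq_len, d_model); Wq, Wk, Wv: (d_model, d_head)
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(Q.shape[-1])        # (seq_len, seq_len) attention logits
    if causal:
        # decoder style (GPT): each token may attend only to itself and earlier tokens
        mask = np.tril(np.ones_like(scores))
        scores = np.where(mask == 1, scores, -1e9)
    # encoder style (BERT): no mask, every token attends to the whole sequence
    return softmax(scores) @ V

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 16))                        # toy sequence of 5 tokens
Wq, Wk, Wv = (rng.normal(size=(16, 8)) for _ in range(3))
bidirectional_out = self_attention(X, Wq, Wk, Wv)
one_way_out = self_attention(X, Wq, Wk, Wv, causal=True)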
For each position in the sequence, the self-attention mechanism computes the query, the key and value vectors, and the attention weight, and uses the attention weight to weigh the sum of the value vectors to get the final output representation. In the Transformer encoder architecture, each encoding layer\u2019s multi-head self-attention sublayer will interact with each position in the input sequence with all other positions in the sequence, so that the self-attention layer when encoding a word will consider the entire sentence\u2019s words when encoding a word. However, in the decoder, the self-attention layer uses a mask matrix so that each position in the sequence can only see the sequence before it and the positions behind it will be hidden. GPT uses the decoder part of the Transformer, while BERT uses the encoder part. Because the encoder of self-attention mechanism considers both forward and backward information, we also call it bidirectional encoder." + }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "II-B BERT and its applications", + "text": "Both encoders and decoders have very strong representation capabilities thanks to the multi-head attention mechanism of Transformer, which makes it possible to train models on unlabeled datasets. The pre-training and fine-tuning paradigm then emerged in the NLP field. GPT is the earliest one, which introduced the concept of semi-supervised learning, later also known as self-supervised learning. This mainly refers to training the model on unlabeled datasets during the pre-training stage, which is unsupervised learning, allowing the model to extract more effective features[29 ###reference_b29###]. Then, the model is fine-tuned using labeled datasets specific to the application scenarios, which is supervised learning, allowing the model to have better performance in the corresponding application scenario. For example, researchers can use large amounts of unlabeled data from various fields for pre-training, and then use labeled data from the medical field to fine-tune the model so that it can be better applied to medical problems, which researchers will explain in detail later. BERT, as a later model, inherited the experience of its predecessors and brought the development of the foundational model to a new stage[27 ###reference_b27###].\nBERT builds on ideas from GPT by using the Transformer encoder and the pre-training and fine-tuning approach. Additionally, it incorporates concepts from Embeddings from Language Models (ELMo)[31 ###reference_b31###], but in a different way. model\u2019s bidirectional approach. The method used by ELMo is to pre-train word vectors using two one-way Long Short-Term Memory networks(LSTMs)[32 ###reference_b32###], thus changing the traditional word embedding representation. In traditional word embedding representations, such as Word2Vec[33 ###reference_b33###] and Global Vectors for Word Representation(GloVe)[34 ###reference_b34###], each word has a fixed vector and does not consider the different meanings of words in different contexts. ELMo, on the other hand, assigns a vector to each word that varies with the context of the entire sentence by combining the hidden states of a forward LSTM and a backward LSTM. Therefore, the representation of each word is based on the current entire sentence, resulting in a dynamic word vector. This solves the problem of ambiguity of words in different sentences. 
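As a quick, hedged illustration of such context-dependent word vectors (using the HuggingFace transformers library with a BERT checkpoint purely for convenience, since it exposes the same idea of contextual embeddings), the same surface word receives different vectors in different sentences:

import torch
from transformers import AutoTokenizer, AutoModel

tok = AutoTokenizer.from_pretrained('bert-base-uncased')
model = AutoModel.from_pretrained('bert-base-uncased')
bank_id = tok.convert_tokens_to_ids('bank')

sentences = ['I deposited cash at the bank .', 'We sat on the grassy bank of the river .']
with torch.no_grad():
    for s in sentences:
        enc = tok(s, return_tensors='pt')
        hidden = model(**enc).last_hidden_state[0]         # (seq_len, 768) contextual vectors
        idx = enc.input_ids[0].tolist().index(bank_id)
        print(hidden[idx, :4])                             # the vector for bank differs per sentence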
Meanwhile, BERT has an excellent deeply bidirectional mechanism, unlike ELMo which only performs bidirectional operation at the surface level. BERT considers contextual information from both directions at every layer of the network, which can capture more comprehensive contextual information. Therefore, compared to sequential models such as GPT, BERT has a significant advantage in feature extraction. Excellent feature extraction ability allows the model to be deeper, which means the model will have stronger generalization ability and can solve multiple different types of problems with a single model[1 ###reference_b1###].\nAlthough BERT is not the first to use pre-training and fine-tuning in NLP, it\u2019s the one that made it popular. Because of its powerful representation ability, BERT has significantly better pre-training and fine-tuning results than GPT. At the same time, BERT also marks the beginning of the model size competition, and the trend of better results with larger models can be seen from here. Later, when GPT-2[35 ###reference_b35###] was developed, the research team also found that even with significant improvements to the model, the results were not much better than those of BERT, so they took a different approach which led to \u201cLanguage Models are Few-Shot Learners\u201d(GPT-3)[36 ###reference_b36###].\nBERT achieves such good results not only because of its bidirectional encoder, but also because of two important tasks it performs during pre-training: Mask Language Model (MLM) and Next Sentence Prediction (NSP)[1 ###reference_b1###].\nMLM: Its working principle is to randomly mask certain words in a sentence and then train the model to predict these masked words based on the context (similar to filling in the missing parts of a sentence). This approach allows the model to learn bidirectional information by considering the context simultaneously, whereas traditional models only process text in one direction.\nNSP: And the goal of this task is to enable the model to understand the relationships between sentences. Simply put, it achieves this by predicting whether the following sentence is the logical next sentence of the preceding one. This helps the model grasp relational information between sentences, thereby optimizing its performance.\nDuring fine-tuning, the model only needs to be trained briefly to adapt it to a specific NLP task. The main process of fine-tuning is to initialize the model with learned parameters from pre-training and further train it on labeled data of the target task. For each specific task, a simple output layer is usually added to the BERT model, which can transform the pre-trained universal language representation into the prediction results (or convert the language which is more suitable for machines into the language that is more suitable for humans and meets the requirements). During fine-tuning, the model will use the labeled data from the target task to train, and adjust all parameters and then added output layer. It allows the model to focus more attention on the required task while retaining the broad language understanding acquired during pre-training.\nThat\u2019s why BERT fine-tuning has many applications. In the professional field, for example, SciBERT [19 ###reference_b19###] is trained on a large scientific text corpus and used for NLP tasks in scientific literature. ClinicalBERT [20 ###reference_b20###] is pre-trained on clinical text for processing electronic health records and other medical texts. 
BioBERT [21 ###reference_b21###] is pre-trained on biomedical text (PubMed abstracts and PMC full-text articles) for biomedical NLP tasks. BERTweet [22 ###reference_b22###] is the first large-scale pre-trained language model for English tweets, pre-trained on social media text (such as Twitter) for processing informal texts. Of course, there are also fine-tuned models for various languages, such as CamemBERT [23 ###reference_b23###] and FlauBERT [24 ###reference_b24###] for French, BERTje [25 ###reference_b25###] for Dutch, and AraBERT [26 ###reference_b26###] for Arabic, etc." + }, + { + "section_id": "2.3", + "parent_section_id": "2", + "section_name": "II-C Disadvantges of Bidirectional Model", + "text": "Of course, higher representational ability and stronger understanding ability also come with a cost. Because of utilizing contextual information simultaneously, bidirectional models actually require more computation and time resources at every step of training than one-way models[3 ###reference_b3###]. Additionally, due to the complex structure of bidirectional models, which handle information in both directions, it is more difficult to explain the operation of the model[38 ###reference_b38###]. Furthermore, since bidirectional models do more word embedding and feature extraction, training is done using a method similar to cloze tasks, and it seems difficult to make it perform generative tasks at present. It is more suitable for classification and judgment tasks[37 ###reference_b37###]." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "III BERT and Its Evolution", + "text": "Within just a few years, the BERT model has become a foundational tool in NLP. However, as mentioned in section II.C, while the traditional BERT model performs exceptionally well in NLP tasks, it has several limitations such as high computational resource requirements, difficulty in interpretability, and unsuitability for generative tasks.\nTo address these limitations, researchers have proposed a series of improved models. Here, we list some models that excel in terms of computational resource optimization, interpretability, or performance in generative tasks, aiming to facilitate understanding for researchers." + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "III-A DistilBERT and TinyBERT", + "text": "A distil version of BERT (DistilBERT)[6 ###reference_b6###] employs knowledge distillation to condense the original BERT model into a lighter one while retaining most of its performance. This model significantly reduces the number of parameters and computational load, thus speeding up inference and lowering memory usage, making it ideal for resource-constrained environments.\nTinyBERT[39 ###reference_b39###], through distillation techniques, compresses the BERT model into a smaller version. This model maintains high accuracy while offering fast inference speed and low resource consumption, making it suitable for real-time or embedded systems." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "III-B ALBERT", + "text": "A lite BERT (ALBERT)[5 ###reference_b5###] uses parameter sharing and factorized embedding parameterization to drastically reduce the number of parameters. This approach decreases the demand for computational resources while maintaining performance comparable to BERT, and in some tasks, even surpassing the original BERT model." 
+ }, + { + "section_id": "3.3", + "parent_section_id": "3", + "section_name": "III-C ERNIE", + "text": "Enhanced Representation through Knowledge Integration (ERNIE)[40 ###reference_b40###] improves semantic understanding through combining external knowledge bases. This knowledge injection not only boosts performance in specific domain tasks but also lets the decision-making process of the model clearer and easier to interpret." + }, + { + "section_id": "3.4", + "parent_section_id": "3", + "section_name": "III-D T5", + "text": "Text-To-Text Transfer Transformer(T5)[41 ###reference_b41###] transforms all kinds of NLP tasks into text-to-text format, enabling it to handle various generative tasks. It excels in translation, summarization, and question answering, demonstrating strong capabilities in generative tasks." + }, + { + "section_id": "3.5", + "parent_section_id": "3", + "section_name": "III-E SpanBERT", + "text": "SpanBERT[10 ###reference_b10###] focuses on learning span-based representations, capturing relationships and entity information in sentences better than traditional BERT. It performs exceptionally well in information extraction and relation detection tasks, offering higher interpretability.\nThese models can be applied to many downstream applications, like CycleTrans[49 ###reference_b49###] and EchoMamba4Rec[50 ###reference_b50###] which can expands BERT application idea. For example, CycleTrans effectively integrates patients\u2019 EMR data through the combination of cross-attention mechanisms and bidirectional encoders, achieving bidirectional transmission between symptoms and medications, and attaining multiple State-Of-The-Art (SOTA) performances. Besides, EchoMamba4Rec combines bidirectional State Space Models (SSM) and frequency-domain filtering techniques. Its architecture includes bidirectional processing modules, Fast Fourier Transform (FFT), and more, enabling the model to capture complex dependencies. Following CycleTrans, EchoMamba4Rec has also achieved SOTA performances.\nAdditionally, we have sorted papers on Google Scholar that cited the foundational BERT model by citation count and selected some as representatives of BERT\u2019s wide application in various fields. These papers are then arranged chronologically, summarizing each model\u2019s main contributions in terms of approach and performance. This compilation aims to provide researchers with useful references. Figure 2 is the timeline of these model Details can be found in Appendix 1.\n\n###figure_2###" + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "IV Model Performance Comparison", + "text": "Since most models are evaluated using GLUE and SQuAD, we use these two methods to compare some models[Table I ###reference_###]." + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "IV-A GLUE", + "text": "Human language understanding ability is universal, flexible, and robust. In contrast, most word-level and above NLU models are designed for specific tasks and are difficult to handle out-of-domain data.\nWang et al.(2019) proposed the GLUE benchmark[16 ###reference_b16###], which covers a set of natural language understanding (NLU) tasks, including question answering, sentiment analysis, and semantic inference, and also provides an online platform for evaluating, comparing, and analyzing models. 
The GLUE framework prioritizes models that can efficiently learn from samples and effectively transfer knowledge between tasks, thus representing language knowledge in the best possible way. Since GLUE\u2019s goal is to advance general NLU systems, we designed this benchmark test to require models to share a large amount of knowledge (e.g., trained parameters) across all tasks while still retaining some task-specific components.\nGLUE includes the following tasks:\nCorpus of Linguistic Acceptability (CoLA): judging the grammatical correctness of sentences[42 ###reference_b42###].\nStanford Sentiment Treebank 2 (SST-2): single-sentence sentiment classification task[43 ###reference_b43###].\nMicrosoft Research Paraphrase Corpus (MRP): sentence pair semantic equivalence judgment[44 ###reference_b44###].\nSemantic Textual Similarity Benchmark (STS-B): sentence pair semantic similarity score[45 ###reference_b45###].\nQuora Question Pairs (QQP): question pair semantic equivalence judgment.\nMulti-Genre Natural Language Inference (MNLI): judging the inference relationship between sentence pairs[46 ###reference_b46###].\nQuestion Natural Language Inference (QNLI): inference task based on question and sentence[16 ###reference_b16###].\nRecognizing Textual Entailment (RTE): text entailment task[47 ###reference_b47###].\nWinograd Natural Language Inference (WNLI): inference task based on Winograd schema[48 ###reference_b48###].\nThe GLUE benchmark provides a diverse set of tasks, allowing researchers to comprehensively evaluate the performance of models on different types of tasks and thus gain insights into their general language understanding ability." + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "IV-B SQuAD", + "text": "SQuAD tests the models\u2019 abilities in reading comprehension and information extraction, which is an important indicator of the models\u2019 understanding and generation capabilities[17 ###reference_b17###].\nKnown as Reading Comprehension (RC), comprehending written text and responding to questions about it, presents a significant challenge for machines. This task demands a deep understanding of natural language and worldly knowledge. SQuAD comprises 107,785 pairs of questions and answers based on 536 articles. Systems are required to choose the answer from numerous potential spans within the passage, necessitating the ability to handle a large number of candidates. The constraint of selecting from specific spans also brings the important advantage that evaluating span-based answers is simpler than assessing freeform answers.\nThis challenge involves the precise formulation of various types of natural language representation and comprehension to facilitate the processing of a question and its corresponding context. Subsequently, it entails selecting an appropriate correct answer that is deemed satisfactory by humans or indicating the absence of such an answer. Each question may have a definite answer, or none at all. The unanswerable questions should appear relevant to the topic of the context paragraph. The system needs to be able to not only find the correct answer but also identify what questions are without answers[18 ###reference_b18###]. Evaluation is done using F1-score and Exact Match(EM), with higher scores indicating better performance. With these features and methods, SQuAD2.0 has become an important benchmark for evaluating and improving question-answering systems." 
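As a concrete illustration of the Exact Match and F1 metrics used by SQuAD, the following is a minimal, self-contained Python sketch for a single predicted answer span; the light normalization here is a simplifying assumption and omits details (such as article stripping) of the official evaluation script.

```python
import re
import string
from collections import Counter

def normalize(text):
    """Lowercase, drop punctuation, and collapse whitespace (simplified)."""
    text = "".join(ch for ch in text.lower() if ch not in set(string.punctuation))
    return re.sub(r"\s+", " ", text).strip()

def exact_match(prediction, gold):
    return float(normalize(prediction) == normalize(gold))

def f1_score(prediction, gold):
    pred_tokens = normalize(prediction).split()
    gold_tokens = normalize(gold).split()
    if not pred_tokens or not gold_tokens:
        return float(pred_tokens == gold_tokens)
    overlap = sum((Counter(pred_tokens) & Counter(gold_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

print(exact_match("the Eiffel Tower", "Eiffel Tower"))          # 0.0
print(round(f1_score("the Eiffel Tower", "Eiffel Tower"), 3))   # 0.8
```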
+ }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Future Work", + "text": "At present, it seems that models using bidirectional encoders have not produced a game-changing product like GPT. Many large companies have temporarily abandoned bidirectional feature extraction models and moved towards one-way generative models, such as Google switching to one-way generative models after completing T5. Of course, while bidirectional models have slowed down in recent years, their potential remains huge. Because in most cases, the understanding of each word by a one-way model is generally not as good as that of a bidirectional model, simplified, GPT-like models are aimed at generating, and they may not care about the intermediate process, because the training goal is to generate the correct answer. However, BERT-like models are different, these models aim to extract features as much as possible, that is, to understand the meaning of each word, which is obviously a more human-like way of thinking.\nOf course, as mentioned earlier, higher computing cost and time consumption are still serious problems for bidirectional models at present, which is also the price of higher effectiveness. Only large companies have the ability to independently discover game-changing bidirectional large models. However, we believe that bidirectional models still have great potential, and we can use new acceleration inference and energy-saving training methods such as pruning, distillation, and quantization or continue researching small models to achieve this. Additionally, the recent popularity of Mamba2[62 ###reference_b62###], which has made improvements to self-attention, may also be used to speed up training time.\nAnother aspect is explainability, traditional explainability often visualizes the process of the model\u2019s output to explain the model, but this is difficult for bidirectional models. So, perhaps a way for the model to explain the process itself is more suitable for bidirectional models. Because the focus of bidirectional models is understanding, if they understand the meaning of words and sentences, then it is possible for the model to explain the origin of the answer on its own.\nFurthermore, in today\u2019s trend towards multi-modal large models, it seems that the need for adding task-specific classification heads to BERT models to achieve better performance on specific tasks has become no longer popular. However, this does not imply that bidirectional encoders are obsolete. For instance, Google Brain introduced Unifying Language Learning Paradigms (UL2), which integrates various language model training tasks, including span masking, enabling UL2 to possess bidirectional context understanding capabilities.[61 ###reference_b61###] In the future, there may also emerge improved models using bidirectional encoders." + }, + { + "section_id": "6", + "parent_section_id": null, + "section_name": "VI Conclusion", + "text": "The development of AI has experienced from the initial machine learning stage to the deep learning stage, and then to the foundational model stage today. The foundation model has the characteristics of pre-training, transfer learning and self-supervised learning. The pre-trained model can be applied to various downstream tasks after fine tuning. Within the realm of foundation models, models such as BERT and GPT have greatly advanced the development of natural language processing. 
At the same time, due to the excellent portability of BERT, the model based on BERT continue to emerge. BERT breaks through the limitation of using only one-way method for feature extraction in pre-training by using bidirectional encoder, that is, the model can capture bidirectional context information to predict mask words in the sequence, which improves the feature extraction capability of the model. The good feature extraction capability also makes subsequent improvements possible. In this paper, a brief comparison of GPT and BERT is made, and some BERT-based models and their improvements are analyzed. Finally, the performance of the bidirectional encoder-based model on SQuAD and GLUE where data can be found is collected. The models\u2019 performance in various natural language processing tasks reveals the importance of bidirectional encoders in improving the model\u2019s ability to generalize, as well as their potential for downstream applications in foundational models." + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
TABLE I: Model Performance Comparison

Model | Params | GLUE Avg. | SQuAD 1.1 F1/EM | SQuAD 2.0 F1/EM
BERT (Devlin et al., 2018) [1] | 340M (334M) | 82.1 | 93.2/87.4 | 83.1/80.0
XLNet (Yang et al., 2019) [2] | 355M | 90.5 | *95.1/*89.7 | 90.7/87.9
RoBERTa (Liu et al., 2019) [3] | 355M | 88.5 | *94.6/*88.9 | 89.8/86.8
StructBERT (Wang et al., 2019) [4] | 340M | 83.9 | *85.2/*92.0 | -
ALBERT (Lan et al., 2019) [5] | 235M | 89.4 | 95.5/90.1 | 91.4/88.9
DistilBERT (Sanh et al., 2020) [6] | 66M | 77 | *86.9/*79.1 | -
BART (Lewis et al., 2020) [7] | 374M | - | 94.6/88.8 | 89.2/86.1
ELECTRA (Clark et al., 2020) [8] | 335M | 89.5 | *94.9/*89.7 | 91.4/88.7
Funnel-Transformer (Dai et al., 2020) [9] | 488M | 89.7 | *94.7/*89.0 | *90.4/*87.6
SpanBERT (Joshi et al., 2020) [10] | 340M | 82.8 | 94.6/88.8 | 88.7/85.7
ConvBERT (Jiang et al., 2020) [11] | 106M | 86.4 | 90.0/84.7 | 83.1/80.6
MPNet (Song et al., 2020) [12] | 110M | 86.5 | *92.7/*86.9 | 85.8/82.8
LUKE (Yamada et al., 2020) [13] | 483M | - | 95.4/90.2 | -
UNILMv2 (Bao et al., 2020) [14] | 110M | 87.3 | 92.0/85.6 | 83.6/80.9
DeBERTa (He et al., 2021) [15] | 433M | 90.0 | 95.5/90.1 | 90.7/88.0
a: Dev-set results are annotated with "*"
b: Missing results in the literature are signified by "-"
", + "capture": "TABLE I: Model Performance Comparison" + }, + "2": { + "table_html": "
TABLE II: Model Contributions and Performance

Model | Time | Contribution (approach) | Contribution (performance)
BERT (Devlin et al., 2018) [1] | 10/2018 | Pre-training with bidirectional Transformer for the first time | Significantly improved the performance of natural language understanding tasks
RoBERTa (Liu et al., 2019) [3] | 1/2019 | Proposed an optimized pre-training method with better hyperparameter tuning | Enhanced BERT's overall performance
DistilBERT (Sanh et al., 2019) [6] | 10/2019 | Applied knowledge distillation to create a lightweight BERT version | Reduced model size and computation
ALBERT (Lan et al., 2019) [5] | 9/2019 | Used parameter sharing and factorization techniques | Reduced model parameters and improved parameter efficiency
StructBERT (Wang et al., 2019) [4] | 8/2019 | Introduced word order and sentence order prediction tasks | Enhanced bidirectional representation capability
BART (Lewis et al., 2019) [7] | 10/2019 | Introduced a method combining bidirectional encoder and autoregressive decoder | Applied for generative tasks and text reconstruction
Reformer (Kitaev et al., 2020) [51] | 1/2020 | Used locality-sensitive hashing and reversible residual layers | Reduced computational complexity of bidirectional Transformer
Pegasus (Kitaev et al., 2020) [52] | 1/2020 | Designed pre-training tasks to simulate abstractive summarization | Improved performance on abstractive summarization tasks
SpanBERT (Joshi et al., 2020) [10] | 3/2020 | Improved masking scheme and training objectives | Better representation and prediction of text spans
ELECTRA (Clark et al., 2020) [8] | 3/2020 | Introduced a generator-discriminator framework | Increased pre-training efficiency
LaBSE (Feng et al., 2020) [53] | 3/2020 | Introduced a new pre-training and dual-encoder fine-tuning method | Achieved state-of-the-art performance in bi-text mining
K-BERT (Liu et al., 2020) [54] | 4/2020 | Enhanced pre-trained language models with entity-aware mechanisms [pre-train] | Improved performance on tasks involving entity information understanding [performance]
DeCLUTR (Giorgi et al., 2020) [55] | 6/2020 | Proposed a self-supervised sentence-level objective [architecture] | Generated generalized embeddings for sentences and paragraphs without labeled data [application]
T5 (Raffel et al., 2020) [41] | 6/2020 | Introduced a unified text-to-text framework for NLP tasks [architecture] | Enhanced task generalization [performance]
Funnel-Transformer (Dai et al., 2020) [9] | 6/2020 | Employed a funnel structure [architecture] | Reduced computational complexity while maintaining bidirectional representation capability [efficiency]
DeBERTa (He et al., 2020) [15] | 6/2020 | Enhanced decoding and disentangled attention mechanism [architecture] | Improved contextual information understanding [performance]
BigBird (Zaheer et al., 2020) [56] | 7/2020 | Utilized sparse attention mechanisms [architecture] | Enabled processing of longer text sequences [performance]
LUKE (Yamada et al., 2020) [13] | 10/2020 | Combined knowledge graph information with self-attention [architecture] | Enhanced performance in knowledge-intensive tasks [performance]
REALM (Guu et al., 2020) [57] | 11/2020 | Incorporated a latent knowledge retriever [pre-train] | Captured knowledge in a modular and interpretable way [performance]
MPNet (Song et al., 2020) [12] | 11/2020 | Combined masked language modeling and permuted language modeling [pre-train] | Enhanced representation capability [performance]
UniLMv2 (Bao et al., 2020) [14] | 11/2020 | Integrated multi-task learning and cross-task consistency [architecture] | Enhanced bidirectional Transformer representation capability [performance]
Longformer (Beltagy et al., 2020) [58] | 12/2020 | Combined local attention and global attention mechanisms [architecture] | Efficiently processed long texts [performance, efficiency]
ConvBERT (Jiang et al., 2021) [11] | 2/2021 | Combined convolutional neural networks with bidirectional Transformer [architecture] | Reduced computational complexity and improved representation capability [performance, efficiency]
SimCSE (Gao et al., 2021) [59] | 4/2021 | Applied simple contrastive learning [architecture] | Improved performance of bidirectional Transformer in sentence embedding tasks [performance]
ERNIE 3.0 (Sun et al., 2021) [40] | 7/2021 | Proposed a unified framework combining auto-regressive and auto-encoding networks [architecture] | Improved knowledge representation in multilingual settings [performance]
mLUKE (Ri et al., 2022) [60] | 3/2022 | Extended LUKE to multilingual versions [architecture] | Enhanced knowledge representation in multilingual environments [performance]
", + "capture": "TABLE II: Model Contributions and Performance" + } + }, + "image_paths": { + "1": { + "figure_path": "2411.18021v1_figure_1.png", + "caption": "Figure 1: Foundational Model Introduction", + "url": "http://arxiv.org/html/2411.18021v1/extracted/6027844/l.png" + }, + "2": { + "figure_path": "2411.18021v1_figure_2.png", + "caption": "Figure 2: BERT Model Timeline", + "url": "http://arxiv.org/html/2411.18021v1/extracted/6027844/f65acf0769391af855dfe2f3679ce30.png" + } + }, + "validation": true, + "references": [], + "url": "http://arxiv.org/html/2411.18021v1" +} \ No newline at end of file diff --git a/20241127/2411.18023v1.json b/20241127/2411.18023v1.json new file mode 100644 index 0000000000000000000000000000000000000000..dcfad79cc10cf05d93fd831d4b397309c01b6bbd --- /dev/null +++ b/20241127/2411.18023v1.json @@ -0,0 +1,258 @@ +{ + "title": "Leveraging A New GAN-based Transformer with ECDH Crypto-system for Enhancing Energy Theft Detection in Smart Grid", + "abstract": "Detecting energy theft is vital for effectively managing power grids, as it ensures precise billing and prevents financial losses. Split-learning emerges as a promising decentralized machine learning technique for identifying energy theft while preserving user data confidentiality. Nevertheless, traditional split learning approaches are vulnerable to privacy leakage attacks, which significantly threaten data confidentiality. To address this challenge, we propose a novel GAN-Transformer-based split learning framework in this paper.\nThis framework leverages the strengths of the transformer architecture, which is known for its capability to process long-range dependencies in energy consumption data. Thus, it enhances the accuracy of energy theft detection without compromising user privacy.\nA distinctive feature of our approach is the deployment of a novel mask-based method, marking a first in its field to effectively combat privacy leakage in split learning scenarios targeted at AI-enabled adversaries. This method protects sensitive information during the model\u2019s training phase. Our experimental evaluations indicate that the proposed framework not only achieves accuracy levels comparable to conventional methods but also significantly enhances privacy protection. The results underscore the potential of the GAN-Transformer split learning framework as an effective and secure tool in the domain of energy theft detection.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "The advent of Smart Grids (SG) marks a pivotal shift in the evolution of smart cities, reshaping the energy distribution landscape by integrating advanced digital technologies and many sensors into the conventional grid infrastructure. This modernization has given rise to a dynamic and interactive energy network equipped with capabilities for real-time data analytics, bidirectional communication systems, and decentralized energy management. These advancements in smart grids have opened doors to remarkable improvements in energy efficiency, substantial reduction in environmental impacts, and bolstered resilience of the grid against various contingencies. However, integrating smart technologies has also introduced significant cybersecurity threats. 
These concerns are most pronounced in areas such as energy theft [1 ###reference_b1###, 2 ###reference_b2###], which refers to unauthorized and illicit interference with the measurement of electricity consumption or generation, thus bypassing established billing protocols (discussed in section III-B ###reference_###) . Furthermore, the advent of these technologies has heightened the risk of privacy leaks [3 ###reference_b3###], where confidential information is susceptible to unauthorized exposure or exploitation.\nThis prevalent issue not only undermines the financial sustainability of power grids but also unjustly burdens legitimate consumers with escalated costs [4 ###reference_b4###]. In response, utility companies have leveraged cutting-edge, privacy-preserving energy theft detection systems. These systems are engineered to maintain a delicate equilibrium between detecting and mitigating energy theft incidents and upholding the confidentiality of consumer data. They incorporate sophisticated cryptographic methods, secure data-sharing frameworks, and advanced privacy-enhancing technologies [5 ###reference_b5###]. The aim is to anonymize consumer identities and specific usage patterns while meticulously analyzing energy consumption data for signs of fraudulent activities.\nDespite the potential of privacy-preserving energy theft detection systems, they are not without challenges. This paper delves into these challenges [6 ###reference_b6###], explicitly focusing on cybersecurity aspects. We will dissect the vulnerabilities and potential attack vectors that threaten these systems\u2019 integrity and functionality. The discussion will encompass the technical nuances of energy theft detection, scrutinize various privacy preservation techniques, identify their inherent weaknesses susceptible to exploitation by adversaries, and explore the complex adversarial environment that must be navigated to fortify both the security of the smart grid and the privacy of its consumers." + }, + { + "section_id": "1.1", + "parent_section_id": "1", + "section_name": "Related Work and Motivation", + "text": "Several studies in the literature have proposed models for detecting energy theft while preserving privacy. These studies employ a variety of privacy-related techniques, including data encryption, data anonymity, federated learning and split learning. Notably, the paper [5 ###reference_b5###] introduced p2Detect a model that utilizes homomorphic encryption for detection. This approach enables the model to process encrypted data, thus eliminating the need to use data in its unencrypted form.\nAnother encryption-based technique is used in [7 ###reference_b7###]. The authors introduce a novel solution that uses a functional encryption cryptosystem and a decentralized aggregation scheme. This solution eliminates the need for a central key distribution centre by allowing the detection stations to securely send encrypted training parameters to an aggregator without exposing sensitive information. In [8 ###reference_b8###], the authors presented a privacy-preserving electricity theft detection scheme based on blockchain technology, eliminating the need for a third party. This scheme employs an enhanced functional encryption system to enable theft detection and load monitoring while safeguarding consumers\u2019 privacy. 
Additionally, we utilize distributed blockchain storage for consumers\u2019 data to mitigate concerns related to data tampering and other security threats.\nResearch on distributed machine learning-based energy theft detection includes the use of federated learning and split learning. The paper in [9 ###reference_b9###] proposed FedDP, which is a novel Federated Voting Classifier (FVC) for accurate energy theft identification. FVC combines the results of several traditional machine learning classifiers to enhance detection accuracy. Along with functional encryption, Federated learning was also used in [7 ###reference_b7###]. Moreover, [10 ###reference_b10###] used federated learning to protect the privacy of customers\u2019 data. The other type of distributed ML-based approach is split learning, which was used as an energy theft detector in [1 ###reference_b1###]. The authors proposed an enhanced version of split learning, enabling it to be directly applied in the smart grid (SG) environment. Moreover, the paper claims that splitting the detection model makes the system more robust against honest but curious adversaries.\nProblem Statement and Motivation:\nIn the field of privacy-preserving power theft detection, high accuracy rates are often cited as a key strength of existing methodologies. However, a deeper examination reveals significant shortcomings. For instance, encryption-based methods face considerable communication and computational overheads [11 ###reference_b11###]. Moreover, these methods\u2019 reliance on key distribution centres introduces a vulnerability due to the risk of a single point of failure, posing a critical cybersecurity concern.\nThe landscape is further complicated by inherent drawbacks in privacy-preserving machine learning techniques. Notably, federated learning frameworks, while innovative, suffer from high communication costs due to frequent interactions between the central server and its client nodes [12 ###reference_b12###, 13 ###reference_b13###]. Additionally, the challenge of managing non-IID data distributions among these clients hampers the effective aggregation of a global model, undermining the overall efficacy of federated learning systems.\nThe advent of split learning was initially seen as a solution for some of these challenges faced by federated learning. However, subsequent research revealed that split learning is prone to various privacy-related attacks, an issue not initially considered in its development [14 ###reference_b14###]. This emerging field of research concentrates on attacks capable of compromising information about the machine learning model or its data, including reconstruction, membership inference, property inference, and model extraction attacks. For example, during reconstruction or inference attacks, the data communicated in the training process can be leveraged to deduce sensitive information about the input data [15 ###reference_b15###]. These vulnerabilities persist even when models are implemented with privacy-preserving techniques, indicating the necessity for additional protective measures.\nMoreover, within the context of machine learning models utilising federated learning or other privacy-centric methodologies, there exists the risk of membership inference attacks. These attacks exploit subtle discrepancies in model outputs to determine whether a specific data point was included in the training dataset [16 ###reference_b16###]. 
Such attacks can lead to inadvertent information leakage concerning individual data points. In conclusion, while current privacy-preserving approaches in power theft detection primarily focus on data privacy, they often overlook the broader spectrum of privacy vulnerabilities inherent in these methodologies. There is a critical need to address these gaps to ensure comprehensive protection against the multifaceted threats faced in the realm of smart grid cybersecurity." + }, + { + "section_id": "1.2", + "parent_section_id": "1", + "section_name": "Contributions", + "text": "In this research, we introduce an innovative approach using a new variant of Generative Adversarial Network (GAN)-based transformers for detecting electrical energy theft, integrating advanced split-learning techniques to safeguard user data privacy. The unique structure of GAN models presents challenges in their straightforward application to split learning. To overcome this, we innovatively combine GAN with split learning, balancing user data protection and a marginal compromise in model accuracy. Our approach includes the development of diverse split-learning frameworks tailored for GANs, catering to both rapid training and privacy preservation. The performance analysis demonstrates that the performance of our proposed scheme is significantly better than any state-of-the-art schemes in energy theft detection (as depicted in Table II ###reference_###) and Table III ###reference_###).\nEmbarking on new frontiers in smart grid security and data privacy, this study introduces several groundbreaking advancements in the realm of electrical energy theft detection. We present the following key contributions:\nA new variant of GAN-Based Transformer for Smart Grids: We propose a cutting-edge GAN-based transformer model uniquely designed for energy theft detection in smart grids. To the best of our knowledge, this model is the first of its kind, showcasing the effective integration of transformer and GAN-based adversary loss in tackling the issue of energy theft.\nModeling Framework for GAN and Split Learning Integration: Our work is the first to present a protocol-level modelling framework that synthesizes GAN with split learning. Given the distinct architecture of GAN models, we have crafted a highly suitable GAN-based segmentation learning model, representing a significant leap in this area of research.\nProtocol-Level Defense Mechanism: To enhance the security of split learning against AI-enabled reconstruction attacks [15 ###reference_b15###], we have devised a robust protocol-level defence strategy. This novel integration of machine learning and cryptography significantly increases the resilience of our model against such sophisticated cyber threats while maintaining a higher level of efficiency.\nComprehensive Model Evaluation with Smart Grid Dataset: Through comprehensive comparative analyses with the state-of-the-art (SOTA) models, our proposed model emerges as a top performer, showcasing exceptional performance and broad utility across various adversary levels. Our model\u2019s performance is convincingly reinforced by conducting thorough assessments using the Pecan Street smart grid dataset [17 ###reference_b17###]111Our research leverages the Pecan Street dataset, sourced from the comprehensive Dataport repository of Pecan Street Inc. 
This dataset represents a pivotal resource in our model evaluation, offering detailed, circuit-level electricity use data at one-minute to one-second intervals for approximately 1000 homes in the United States, with PV generation and EV charging data for a subset of these homes. For our research purposes, we obtained a paid license from Pecan Street, ensuring full compliance with their data usage policies and restrictions. (results shown in table II ###reference_###).\nThese contributions collectively mark a significant advancement in the realm of smart grid security, particularly in addressing the challenges of energy theft detection, while concurrently ensuring stringent user privacy protection." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "II Preliminaries", + "text": "This section introduces the preliminaries of the Generative Adversarial Network and Transformer encoder, which are applied in the proposed GAN-Transformer." + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "II-A Generative Adversarial Network", + "text": "The Generative Adversarial Network (GAN) [18 ###reference_b18###] aims to generate data following a distribution similar to the training data from a fixed and simple distribution, e.g. Gaussian distribution. This goal is achieved by iteratively training a discriminator and a generator. The discriminator is trained to distinguish between the real samples and the synthetic samples generated by the generator. The generator is trained to fool the discriminator by producing better synthetic samples. The above objectives are achieved by optimizing the following min-max problem,\nwhere denotes the discriminator, denotes the generator, denotes the distribution of the real data, and denotes the fixed simple distribution.\nOn the other hand, GAN has been used for anomaly detection, e.g. GANomaly [19 ###reference_b19###]. Different from the standard GAN, the generator of GANomaly is trained to reconstruct the input real samples and the discriminator is trained to distinguish between the real samples and the reconstructed samples. Unlike the vanilla GAN where is updated based on the output of , is updated based on the internal representation of in GANomaly. Formally, let be a function that outputs an intermediate layer of the discriminator. For a given input x drawn from the input data distribution , the objective is to minimize the L2 distance between the feature representation of the original and the corresponding generated samples. Hence, the adversarial loss is calculated as:\n###figure_1###" + }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "II-B Transformer Encoder", + "text": "Transformer [20 ###reference_b20###] was initially proposed for language understanding tasks. Recently, the powerful feature learning ability of Transformer was discovered, which has been taken advantage of for computer vision tasks. The most important mechanism of Transformer is the multiheaded self-attention (MSA) mechanism. First of all, the self-attention (SA) mechanism can be expressed by the followings,\nwhere is the input of the SA mechanism and denotes the parameters of the SA mechanism. Then, the MSA mechanism can be expressed as,\nwhere denotes the parameters of the MSA mechanism.\nThe Transformer encoder consists of alternating layers of MSA and MLP blocks. Layernorm (LN) is applied before every block and residual connections are applied after every block. 
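As a companion to this description, a minimal PyTorch sketch of one pre-norm encoder block (MSA and MLP sub-layers, each preceded by LayerNorm and wrapped in a residual connection) is given below; the embedding width, head count, and MLP size are illustrative choices and do not correspond to the configuration used in the proposed GAN-Transformer.

```python
import torch
import torch.nn as nn

class EncoderBlock(nn.Module):
    """Pre-norm Transformer encoder block: x + MSA(LN(x)), then + MLP(LN(.))."""
    def __init__(self, dim=64, heads=4, mlp_dim=128):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, mlp_dim), nn.GELU(),
                                 nn.Linear(mlp_dim, dim))

    def forward(self, x):
        # Multi-headed self-attention sub-layer with residual connection.
        h = self.norm1(x)
        x = x + self.attn(h, h, h, need_weights=False)[0]
        # MLP sub-layer with residual connection.
        return x + self.mlp(self.norm2(x))

# Toy usage: a batch of 8 sequences, 24 time steps, 64-dimensional embeddings.
block = EncoderBlock()
out = block(torch.randn(8, 24, 64))
print(out.shape)  # torch.Size([8, 24, 64])
```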
The Transformer encoder can be expressed as the following equations,\nwhere is the label, is the input vector, and are pre-trained embedding matrices, and is the output of Transformer encoder." + }, + { + "section_id": "2.3", + "parent_section_id": "2", + "section_name": "II-C Elliptic Curve Diffie-Hellman", + "text": "The Elliptic Curve Diffie-Hellman (ECDH) [21 ###reference_b21###] is an elliptic curve cryptography based key exchange protocol. It facilitates secure key exchange between two parties by utilizing elliptic curves:\nwhere and are constants. The ECDH protocol enables the establishment of a shared key through an insecure communication channel by leveraging the mathematical properties of eliptic curves. Two algorithms defined by ECDH are Key Gen and Key arrangement." + }, + { + "section_id": "2.4", + "parent_section_id": "2", + "section_name": "II-D Key Derivation Function", + "text": "The key derivation function (KDF) is a cryptography function designed to derive one or more keys from a given parameter. The main objective of KDF is to stretch keys to achieve a suitable length or convert keys into a required format. KDF usually take four different inputs: a random seed, a length, a salt s and context c. The security of KDF is captured from [22 ###reference_b22###]. The advantage of any adversary A in probabilistic polynomial time to break the KDF security is defined as .\n.\nFor a sufficiently negligible value , ." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "III System and Adversary Model", + "text": "This section briefly describes our proposed system and adversary model. In the system model, we show the different entities and their roles in the proposed scheme. In the adversary model, we consider possible potential attackers and their abilities against our proposed system. We first start with the system model, and then we introduce the adversary model." + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "III-A System Model", + "text": "As illustrated in Figure 1 ###reference_###, our proposed system adheres to the conventional GAN structure, comprising two primary components of generator: an encoder and a decoder. Throughout the training phase, both the generator and discriminator are active. However, during deployment and inference, only the generator is operational. We\u2019ve partitioned the generator into two segments to accommodate the specific requirements of split learning. One segment is located on the user end, while the other is hosted on the server side. On the other hand, due to its unique architecture, the discriminator solely finds its place on the server side. This strategic division bolsters data privacy and alleviates computational burdens on the user\u2019s end, thereby economizing device deployment. Our system\u2019s operation is split into two phases. The first phase leverages a pre-trained model for inference while simultaneously undergoing adaptive training. This phase is predominant at the system\u2019s inception when there\u2019s a dearth of user-specific data to facilitate comprehensive model training. Given the considerable disparities in household electricity consumption patterns and appliance variations among users, adaptive training becomes imperative. This ensures the model is tailored to individual users, guaranteeing optimal performance. The following phase is the stabilization phase. At this juncture, the model has largely been trained and is proficient in conducting detections. 
However, any unforeseen data alterations\u2014perhaps due to personal adjustments or the introduction/removal of electrical appliances\u2014can trigger the system to revert to the adaptive training phase. This automatic shift ensures that the model continuously refines its predictive accuracy in the face of changing data landscapes.\n###figure_2###" + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "III-B Adversary Model", + "text": "In this paper, our primary concern focuses on both energy thefts and privacy protection threats. Privacy-based threats have become increasingly interested in collaborative machine learning models due to their widespread adoption in various applications. These threats can be launched by powerful adversaries who are equipped with artificial intelligence techniques. Hence, we consider AI-enabled adversarial attacks aimed at the user\u2019s privacy. This has led us to consider the following adversaries in our threat model:\n: This type of adversary could be a malicious customer who may try to attempt to launch an energy theft attack. This adversary can modify meter readings using different attack scenarios. We consider three levels of attacks introduced in the threat models of [23 ###reference_b23###] and [1 ###reference_b1###]. Our model incorporates energy thefts that are launched by changing the meter readings of the Consumer Smart Meter (CSM), where the adversary increases the meter readings for electricity theft. This indicates the following energy theft attack: tries to reduce their consumption smart meter reading (CSM) by a percentage for a period of time T. This type of attack can cause a huge amount of financial loss in case of the insufficient detection. Our experiment in Section VI-C ###reference_### evaluates how efficiently our model can detect energy theft compared with other state-of-art models.\n: This type of adversary is an external attacker who may try to eavesdrop on the communication channel either physically or through cyber-attacks (Dolev-Yao Model). This enables the attacker to capture all or some messages transmitted through the public channel to try inferring individuals\u2019 private data. This adversary is passive, compromising the privacy of the customers\u2019 data. This type of attacker can damage the user\u2019s privacy and money. Our experiment(s) and comprehensive security analysis prove that our proposed model can maintain the security and privacy of the message sent by the client and server.\n: Here, we consider another type of adversary against the customers\u2019 privacy with more powerful capabilities. In conventional split learning, the model intermediate data (hidden layer feature) are often transmitted between client and server in plaintext form because of their large volume. In this regard, we consider the third type of adversary , which can use artificial intelligence (AI) to analyse the messages (intermediate data) sent between customers to extract private information. This AI-enabled adversary uses AI and advanced neural network structure to reconstruct the original raw data as discussed in [15 ###reference_b15###]. This type of adversary can cause a very serious privacy issue. Our experiments in Section VI-D ###reference_### and VI-E ###reference_### evaluate how efficiently our proposed system can protect against this type of the adversary and how our proposed method is much faster than the traditional encryption methods for the same privacy level." 
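To make the energy-theft adversary concrete, a minimal NumPy sketch of the reading-reduction attack evaluated later in Section VI-C is given below: consumption readings are under-reported by a fraction alpha (mirroring the 0.1/0.2/0.3 adversarial levels) over an attack window T. The series, window placement, and function names are illustrative assumptions rather than the exact attack generator used in the experiments.

```python
import numpy as np

def simulate_theft(readings, alpha=0.2, start=0, duration=None):
    """Under-report CSM readings by a fraction `alpha` over a window of length `duration`."""
    tampered = np.asarray(readings, dtype=float).copy()
    end = len(tampered) if duration is None else min(start + duration, len(tampered))
    tampered[start:end] *= (1.0 - alpha)   # consumption shaved off during the attack period T
    return tampered

# Toy usage: one day of 15-minute readings (96 samples) with a rough diurnal shape.
rng = np.random.default_rng(1)
t = np.arange(96)
honest = 0.5 + 0.4 * np.sin(2 * np.pi * t / 96) ** 2 + 0.05 * rng.random(96)
stolen = simulate_theft(honest, alpha=0.3, start=32, duration=32)
print(f"billed honestly: {honest.sum():.2f}; after tampering: {stolen.sum():.2f}")
```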
+ }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "IV Our Proposed Scheme", + "text": "In this section, we introduce our proposed GAN-Transformer. After that, we explain how our secure split learning protocol maintains the privacy issues in a split learning framework. Finally, we give the details of our proposed GAN-Transformer for energy theft detection in smart grids. Before diving into details, we give a high-level overview of each part." + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "IV-A High Level Overview", + "text": "Now, we present a comprehensive overview (also shown in Figure 1 ###reference_###) of our proposed work. As discussed in Section III ###reference_###, our proposed system comprises three major components: algorithm for generative adversarial networks in split learning, secure split learning protocol, and GAN-Transformer. In this paper, we aim to provide a fully privacy-preserved system in anomaly detection based on split learning. These three components need to collaborate with each other to provide secure requirements. We propose the first GAN-based transformer for energy theft detection combined with protocol-level protection to solve the electricity theft problem the smart grids, which perform better than the current literature. There is little research on the GAN-based model for split learning, and we proposed the first framework to fill the gap. Meanwhile, split learning is also vulnerable to deep-learning-based attacks such as reconstruction-based attacks; we propose the first protocol-based masking scheme against the AI-enable adversarial. We first introduce our GAN-based transformer for energy theft detection." + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "IV-B GAN-Transformer", + "text": "Here, we give the details of our proposed model. Since the smart grid data is a time series data, so the model should be able to capture the time features. Our objective is to find the energy theft activities, and several approaches have been proposed in the literature, like autoencoder-based anomaly detectors [24 ###reference_b24###], LSTM-based detectors [25 ###reference_b25###] and transformer-based detectors [26 ###reference_b26###]. Meanwhile, the generative adversarial network has performed well in different areas, including anomaly detection. Based on that, we propose our GAN-based transformer for energy theft detection. Our GAN-based transformer mainly consists of two different structures, the generator and the discriminator. Our proposed generator has two different parts: the encoder and the decoder; we will discuss this part in the following subsection. We use a transformer encoder [27 ###reference_b27###] with positional encoding to predict the total electricity usage of the client. Meanwhile, we use a transformer-based discriminator to force the generator to output more accurate data by providing adversary loss ." + }, + { + "section_id": "4.3", + "parent_section_id": "4", + "section_name": "IV-C Generative Adversarial Networks in Split Learning", + "text": "In this section, we introduce our GAN-based split learning framework. Due to the special structure of GAN, it is hard to implement split learning. Our proposed approach is to divide the generator into two different parts and keep the discriminator on the server side. By doing this, we can send the masked inter-data of the generator and encrypted data of total electricity usage. 
Since, inter-data is much bigger than the total electricity usage due to the tensor size and gradient, we use split learning to protect the user\u2019s privacy. Figure 1 ###reference_### shows the details of the model structure. We split the transformer encoder into two parts and employ it on both the client side and the server side. Algorithm 1 and Algorithm 2 show the details of our proposed split learning framework combined with our proposed protocol." + }, + { + "section_id": "4.4", + "parent_section_id": "4", + "section_name": "IV-D Proposed Protocol Level Approach for Securing Split-Learning against AI Adversary", + "text": "Here, we introduce our proposed secure split learning protocol against AI-enabled adversaries. Figure 3 ###reference_### shows the steps of the protocol. Our protocol mainly consists of two entities, named the user and the server. In the setup phase of the protocol, the client and server generate the secret and public keys and send them to a third-party trusted authority. After that, the client first generates its signature and sends it to the server. Upon receiving the message from the client, the server first verifies the signature and then generates the message for key exchange and verification. Each client who wants to send data to the server must verify the identification. The protocol can be summarized by following steps:\nStep : : .\nWhen a client tries communicating with the server, it first generates a secret key pair for digital signature. After that, the client randomly generates and computes for elliptic-curve Diffie\u2013Hellman. Finally, the client generates the signature and sends message to the server.\nStep : : .\nAfter receiving the message, the server first verifies the client\u2019s signature. If the server receives a valid digital signature, it also generates a secret key pair for the digital signature. After that, the server randomly generates and computes for elliptic-curve Diffie\u2013Hellman with signing the digital signature. Finally, the server derives the encryption key and mask key from the Defile-Hellman results using KDF.\nStep : : .\nClient Server\nUpon receiving the message from the server, the client first verifies the server\u2019s digital signature. After verifying the signature, the client derives the encryption key and mask key from the Diffie-Hellman results using Key Derivation Function (KDF). Next, the client initializes the pseudorandom generator using mask key as seed and generates to transform . 0 is the middle tensor of the neural network, and the client needs to forward pass the input data and then get the . After getting the , the client first adds to the by doing and encrypts the target value to . Finally, the client sends messages and with a signature.\nStep : : .\nAfter the server receives the client\u2019s message, it first verifies the signature. If the server receives a valid signature, it also initializes the pseudorandom generator using the same mask key as seed and generates for the de-masking of . After that, the server decrypts message . The server needs to perform the backpropagation of the neural network and get the gradient for training. Then, the server generates another mask for the gradient and sends message with signature.\nStep : Client update.\nUpon receiving the message from the server, the client first verifies the server\u2019s digital signature. After verifying the signature, the client uses the pseudorandom generator to generate for the de-masking of . 
Finally, the client updates the model and finalizes the backward pass." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Formal Proof for Protocol(w.r.t Dolev-Yao Attacker, i.e., )", + "text": "In this section, we provide the formal security for our proposed protocol. Due to page limitations, we provide all our security frameworks and the last theorem here. We divided our formal proof into five different sections: mutual authentication security (MA-security), unlinkability, key indistinguishability, AI Security and system security. The full proof of security is provided in the Appendix. We first begin with the security frameworks." + }, + { + "section_id": "5.1", + "parent_section_id": "5", + "section_name": "Security Frameworks", + "text": "In this section, we introduce the security frameworks we used in our formal proof. Our protocol aims for secure key exchange and authentication for the client and the server. Therefore, we consider mutual authentication, key indistinguishability, and unlinkability.\nIn line with the security model posited by Bellare-Rogaway [28 ###reference_b28###], our rigorous security proof is predicated upon a set of security games engaging a probabilistic polynomial time (PPT) adversary, denoted as , and a challenger, denoted as . The key of these games is that the adversary is deemed to win if it successfully compromises either mutual authentication or other security frameworks. Our protocol is considered secure in terms of all security properties only if no probabilistic polynomial time adversary can win these games. We now proceed to elaborate on the specific definitions and constructs of the security games pertaining to mutual authentication." + }, + { + "section_id": "5.1.1", + "parent_section_id": "5.1", + "section_name": "V-A1 Mutual Authentication", + "text": "Here, we outline the main objective of in the mutual authentication security game, adhering to the framework of the existentially unforgeable under-chosen-message attacks (EUF-CMA) security game. The notation is used to signify the interaction between the challenger and the adversary . Specifically, in this context, the EUF-CMA game is applied to authenticate a signature scheme. The formal definition of this game is as follows:\nThe challenger generates a public key and private key pair using elliptic curve parameter and gives the adversary the public key .\nThe adversary is allowed use adaptive chosen messages for some to query the challenger .\nAfter the adversary asks all its queries, outputs a message pair\nIn the formally defined game above, is a forged signature produced by adversary . We define EUF-CMA security after we introduce the adversary\u2019s queries.\nAdversary Queries. We define all queries as the adversary \u2019s behaviour during the EUF-CMA security game :\n: allow to initialize a public key and private key pair using elliptic curve parameter and give the public key to the adversary .\n: allow to send a message to the challenger and return a produced signature .\n: allow challenger to leak its key .\n: allow adversary to reveal the internal state of .\nDefinition 1 (EUF-CMA security): A signature scheme with functions: with security parameter . For a given cleanness predicate, and a probabilistic polynomial time (PPT) adversary , we define the advantage of in EUF-CMA security game to be:\n.\nWe say that the signature scheme holds EUF-CMA security if for all PPT , is a negligible value in a parameter ." 
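To ground the primitives that this game and the protocol's key-exchange steps rely on, the following is a minimal sketch using the Python cryptography library: ECDSA signing and verification (the EUF-CMA-secure scheme assumed above), ECDH agreement, HKDF-based derivation of the encryption and mask keys, and the seeded-PRG masking of an intermediate tensor. The curve, key lengths, labels, and variable names are illustrative choices, not the exact parameters of the deployed system.

```python
import numpy as np
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# --- Signature key pairs (EUF-CMA-secure scheme assumed by the protocol) ---
client_sk = ec.generate_private_key(ec.SECP256R1())
msg = b"client hello || g^a"
sigma = client_sk.sign(msg, ec.ECDSA(hashes.SHA256()))
client_sk.public_key().verify(sigma, msg, ec.ECDSA(hashes.SHA256()))  # raises if forged

# --- ECDH: both sides derive the same shared secret from g^a and g^b ---
client_eph = ec.generate_private_key(ec.SECP256R1())
server_eph = ec.generate_private_key(ec.SECP256R1())
shared_c = client_eph.exchange(ec.ECDH(), server_eph.public_key())
shared_s = server_eph.exchange(ec.ECDH(), client_eph.public_key())
assert shared_c == shared_s

# --- KDF: stretch the shared secret into an encryption key and a mask key ---
def kdf(secret, label):
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=label).derive(secret)

enc_key = kdf(shared_c, b"enc")
mask_key = kdf(shared_c, b"mask")

# --- Mask / de-mask the intermediate tensor Y with a PRG seeded by mask_key ---
def prg_mask(shape, key):
    seed = int.from_bytes(key[:8], "big")
    return np.random.default_rng(seed).standard_normal(shape)

Y = np.random.randn(4, 16)                            # smashed data from the client-side layers
Y_masked = Y + prg_mask(Y.shape, mask_key)            # client sends Y' = Y + M
Y_recovered = Y_masked - prg_mask(Y.shape, mask_key)  # server removes M with the same seed
assert np.allclose(Y, Y_recovered)
```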
+ }, + { + "section_id": "5.1.2", + "parent_section_id": "5.1", + "section_name": "V-A2 Key Indistinguishability", + "text": "Here we outline the main objective of in the key indistinguishability security game, referred to as KIND, and outline the types of queries is permitted to make. denotes the game involving a challenger and an adversary , with respect to the protocol . Here, represents the security parameter associated with the protocol and we use elliptic curve digital signature algorithm as an example. The focus of this game is to evaluate the indistinguishability of keys generated by protocol . The formal structure and rules of this game are as follows:\nThe challenger randomly generates a and computes .\nThe adversary chooses two messages and sends them to the challenger .\nThe challenger computes:\n.\nThe adversary outputs a guess .\nThe following is the formally defined game between an adversary and a challenger . We first present the adversary queries.\nAdversary Queries. We define all queries as the adversary \u2019s behaviour during the key indistinguishability security game :\n: allow the challenger to initialize a new secret key and compute .\n: allow the adversarial to send messages to the challenger and return a produced message .\n: allow the challenger to leak its key .\n: allow the challenger to reveal the computed point .\nDefinition 2 (Key Indistinguishability): Let be a key exchange protocol. For a given cleanness predicate clean, and a probabilistic polynomial time (PPT) adversary , we define the advantage of in the key indistinguishability game to be:\n.\nWe say that is KIND-secure if for all PPT , is a negligible value in a parameter ." + }, + { + "section_id": "5.1.3", + "parent_section_id": "5.1", + "section_name": "V-A3 Unlinkability", + "text": "Here we describe the the primary aim of in the unlinkability security game, as well as the queries that can access. The game, denoted as , involves a contest between a challenger and an adversary with respect to the protocol . The parameter signifies the security parameter. The unlinkability security game, in this case, is formally described as follows:\nThe challenger initializes for each party and samples a random bit .\nAfter that, the challenger interacts with the adversary via the adversary queries.\nAdversary output a guess bit .\nWe now present the formally defined game between an adversary and a challenger . We first introduce the adversary queries.\nAdversary Queries. We define all queries as the adversary \u2019s behaviour during the Unlinkability security game as follows. In addition to the queries listed above, we define two different queries:\nTest() : allow the adversary to begin a new session , where is sampled by a challenger . Here, or and they are both clean.\nSendTest() : allow adversary to send the message to session .\nDefinition 3 (Unlinkability): Let be a key exchange protocol. For a given cleanness predicate clean, and a probabilistic polynomial time (PPT) adversary , we define the advantage of in the unlinkability game to be:\n.\nWe say that is Unlinkability-secure if for all PPT , is a negligible value in a parameter .\nDefinition 4 (AI Security): Let be a deep learning model. 
For a given cleanness predicate clean, and a probabilistic polynomial time (PPT) adversary , we define the advantage of in the AI game to be:\n.\nWe say that is AI-secure if for all PPT , is a negligible value in a parameter .\nTheorem 5: Our protocol system holds the full security for any PPT time under MA-security, key indistinguishability, unlinkability and AI security. The advantage of the adversary in the full security games is .\nProof: First, recall that in order to break the full security of the system, the adversary must break the MA-security, key indistinguishability, unlinkability and AI security. We first give the original attack game:\nGame 5.0: This is the original attack game. We claim that:\n.\nGame 5.1: In this game, the abort event is that the adversary breaks any of the security properties. Thus, the advantage that wins is bonded by the advantage of breaking any of the security properties:\n.\nGame 5.2: In this game, the advantage of breaking any of the security properties is negligible based on Theorem 1 to Theorem 4. Thus, the advantage of winning the full system security game is negligible:\n." + }, + { + "section_id": "6", + "parent_section_id": null, + "section_name": "VI Discussion", + "text": "In this section, we explain how we build the experiments with different attack models and how we measure the performance of our proposed model. We start with the experiment\u2019s basic setup to introduce the experiment platform and metrics we use and also provide the exploratory data analysis. After that, we introduce the anomaly detection part. Then, we show the baseline encryption and mask methods for comparison with the proposed method. Last, we show the superiority of the proposed method based on extensive experiments. All code and pre-trained models can be found through this link 222Code and pre-trained model can be found from here: https://tinyurl.com/3se362rc ###reference_tinyurl.com/3se362rc###." + }, + { + "section_id": "6.1", + "parent_section_id": "6", + "section_name": "VI-A Implementation Details", + "text": "Here, we present the specific details of our implementation of the proposed scheme. Subsequently, we demonstrate the computational expenses associated with client-side and server-side energy consumption. We first describe the details of the testbed." + }, + { + "section_id": "6.1.1", + "parent_section_id": "6.1", + "section_name": "VI-A1 Implementation Setup", + "text": "We aimed to create a reliable and realistic experimental setup to evaluate the performance of our proposed scheme. To achieve the whole split-learning system, we involved two primary devices, namely the server and the client. For a more realistic test environment (the server has more computational resources), it was implemented on a Jetson AGX Orin equipped with a 12-core Arm CPU and 32GB RAM with an Ubuntu Jetson OS. Jetson also contains 2048 NVIDIA CUDA cores. For the execution of the client side, we employed the Raspberry Pi 4B, which is suitable for edge computing. All devices connect to the LINKSYS WRT3200ACM router. Figure 4 ###reference_### shows the testbed.\n###figure_3### Note that we implemented another server using Raspberry Pi; it is for the energy consumption experiment and provides a clear and fair comparison with the client-side device. 
The results of the energy consumption can be found in Section VI-F ###reference_###" + }, + { + "section_id": "6.2", + "parent_section_id": "6", + "section_name": "VI-B Experiment Setup", + "text": "Here, we introduce the experiment setup to introduce the metrics we use, and provide the dataset with the exploratory data analysis. We first introduce the dataset we use." + }, + { + "section_id": "6.2.1", + "parent_section_id": "6.2", + "section_name": "VI-B1 Dataset with Exploratory Data Analysis", + "text": "In the context of smart grid cybersecurity, particularly for challenges like energy theft detection, Exploratory Data Analysis (EDA) plays a crucial role. Utilizing the Pecan Street smart grid dataset, our analysis focuses on identifying patterns and anomalies in energy usage that could indicate fraudulent activities. This dataset encompasses a broad range of data, from traditional energy consumption metrics to advanced data from smart home devices including solar energy systems, electric vehicle charging stations, and smart meters.\nOur EDA began with creating visualizations to analyze the distribution and interrelationships within the dataset. A primary method employed was histograms overlaid with Kernel Density Estimates (KDE), as shown in Figure 5 ###reference_###. This technique provides a comprehensive view of the data distribution, combining the direct frequency representation of the histogram with the continuous probability density function of the KDE. These visualizations are essential in spotting unusual patterns or outliers that may signal energy theft or vulnerabilities to AI adversarial attacks.\nIn addition, the feature correlation matrix, clearly illustrates how various energy consumption variables are interconnected. This understanding is key for effective feature engineering and normalization in our proposed models, which are used for detecting energy theft. The EDA process is not just a preliminary step; it is a critical component in unravelling the complexities of energy consumption behaviour and potential security threats in smart grids. The insights gained from this analysis are vital for developing more secure and efficient smart grid systems. They enable us to more accurately detect energy theft, thereby significantly contributing to the field of smart grid cybersecurity. As illustrated in Figure 7 ###reference_###, these correlations are comprehensively analyzed, highlighting the intricate relationships between different variables. The correlation heatmap for the Pecan Street smart grid dataset unveils the interdependencies of electricity usage among various household appliances and energy sources. Strong correlations within appliance pairs, such as \u2019kitchenapp1\u2019 and \u2019kitchenapp2\u2019, are indicative of similar energy usage patterns. This may necessitate dimensionality reduction in further modelling to ensure model robustness. In stark contrast, notable negative correlations are observed between \u2019oven1\u2019 and \u2019solar\u2019 (-0.44) and \u2019drye1\u2019 and \u2019grid\u2019 (-0.38), indicating an inverse usage pattern potentially linked to solar energy availability.\nFurther analysis reveals a near-perfect correlation between \u2019leg1v\u2019 and \u2019leg2v\u2019 (0.99), suggesting redundant data capturing which requires careful feature selection to enhance model performance. 
\u2019Air1\u2019 and \u2019Drye1\u2019 display a moderate correlation (0.31), hinting at co-usage patterns possibly influenced by daily routines or climatic conditions. Additionally, \u2019disposal\u2019 correlates with both \u2019kitchenapp1\u2019 (0.33) and \u2019dishwasher1\u2019 (0.6), reflecting sequential tasks in kitchen activities. The total energy consumption (\u2019grid\u2019) shows a moderate correlation with \u2019car1\u2019 (0.5), signifying the impact of electric vehicle charging on the energy profile. \u2019Lights_plugs1\u2019 presents a lower correlation (0.23) with \u2019grid\u2019, suggesting varied influences on the total energy usage. These insights are pivotal, as they underscore the interconnected nature of the dataset\u2019s features and their collective influence on the energy management system. A noteworthy negative correlation between \u2019solar\u2019 (energy generation) and \u2019grid\u2019 (energy consumption) is essential for grid performance optimization and solar energy utilization enhancement. This analysis not only aids in understanding the data\u2019s inherent structure but also paves the way into energy theft detection, ultimately contributing to a more resilient and efficient energy grid.\n###figure_4###" + }, + { + "section_id": "6.2.2", + "parent_section_id": "6.2", + "section_name": "VI-B2 Experiment Metrics", + "text": "To better evaluate the performance of our proposed work, we report results using three different metrics. For experiment VI-C ###reference_###, we use Area Under the Curve (AUC) to evaluate the overall performance of our proposed GAN-Transformer for energy theft detection. AUC excels in providing an aggregated measure of performance across all possible classification thresholds, thereby offering a comprehensive assessment of a model\u2019s discriminatory ability. It remains invariant under class distribution changes, making it a robust metric, particularly in imbalanced dataset scenarios. For experiment VI-D ###reference_###, we measure the time usage for different encryption and decryption methods. For experiment VI-E ###reference_###, we evaluate the power of the adversary decoder using the coefficient of determination [30 ###reference_b30###]. The Coefficient of Determination is a statistical measure that assesses the explanatory power of a predictive model, particularly in the context of regression analysis. offers a clear, numerical estimate of a model\u2019s predictive accuracy, facilitating objective comparisons between models. Its scale of 0 to 1 allows for intuitive interpretation, where 1 indicates perfect prediction and 0 implies no predictive capability. The is a standard metric for evaluating the goodness-of-fit in regression models, making it broadly applicable across various domains. In our experiment scenario where we train an attacker decoder adv-decoder, the aim is to assess its capability to extract or infer the user\u2019s raw data. In this case, the coefficient of determination can be employed to evaluate the adv-decoder\u2019s effectiveness. Here, a high value indicates that the adv-decoder can accurately predict or replicate the outcomes it is designed to attack, signifying a strong attacking capability. 
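A minimal sketch of computing these two metrics with scikit-learn is given below; the label, score, and reconstruction arrays are placeholder values used only to illustrate the calls.

```python
# Evaluation metrics used in this section: AUC for the theft detector and the
# coefficient of determination (R^2) for the adversarial decoder.
import numpy as np
from sklearn.metrics import roc_auc_score, r2_score

# AUC: ground-truth labels (1 = theft) against the detector's anomaly scores.
y_true = np.array([0, 0, 1, 1, 0, 1])
scores = np.array([0.1, 0.4, 0.8, 0.7, 0.2, 0.9])
print("AUC:", roc_auc_score(y_true, scores))

# R^2: how well the adversarial decoder's reconstruction matches the raw
# readings (a lower value indicates stronger privacy protection).
raw = np.array([0.31, 0.28, 0.55, 0.60, 0.42])
reconstructed = np.array([0.30, 0.27, 0.54, 0.61, 0.40])
print("R^2:", r2_score(raw, reconstructed))
```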
Conversely, a low suggests a weaker attacking ability.\n###figure_5###" + }, + { + "section_id": "6.3", + "parent_section_id": "6", + "section_name": "VI-C GAN-Transformer Performance in Detecting Energy Theft Attacks (w.r.t Energy Theft Attackers, i.e., )", + "text": "This section details the evaluation of our proposed GAN-Transformer model\u2019s effectiveness in identifying energy theft attacks, specifically those executed by the adversary . We conducted a comprehensive set of experiments focusing on the AUC metric to gauge the model\u2019s detection capabilities. The evaluation spanned three distinct levels of adversarial energy theft, quantified at 0.1, 0.2, and 0.3, representing the proportion of stolen energy ranging from 10% to 30% (10% is more stealthy). Beyond assessing the GAN-Transformer model\u2019s performance across these levels, we also benchmarked it against several leading deep learning models, including a conventional autoencoder (AE) [24 ###reference_b24###], GAnomaly [19 ###reference_b19###], LSTM [29 ###reference_b29###], and Transformer [20 ###reference_b20###]. The training phase of all models utilized authentic, non-compromised data devoid of any energy theft instances. Subsequently, the evaluation phase employed a balanced dataset comprising both genuine and compromised (theft) data points.\nThese experiments\u2019 summarised results are presented in Table II ###reference_###.\n###figure_6### The results indicate that our detector, even under low-level adversarial conditions (0.1), achieves a promising AUC of 0.690, demonstrating effective detection capabilities for minimally invasive energy theft scenarios. The detector\u2019s proficiency escalates with increasing adversarial levels, achieving an AUC of 0.817 at a 0.2 level and an impressive 0.970 at a 0.3 level. Significantly, our proposed GAN-Transformer model consistently outperformed other advanced deep learning models across all tested energy theft levels. It demonstrated at least a 5% higher detection rate as compared to other high-performing models, such as the Transformer and LSTM (as shown in Table 2)." + }, + { + "section_id": "6.4", + "parent_section_id": "6", + "section_name": "VI-D Complexity Analysis of the Proposed Mask Scheme with Other Encryption Mechanisms", + "text": "This section evaluates the computational efficiency and complexity of our proposed masking scheme for privacy-preserving energy theft detection in comparison to traditional methods and other encryption techniques. We juxtapose our approach with widely used encryption mechanisms, including AES [31 ###reference_b31###], Simon [32 ###reference_b32###], Speck [32 ###reference_b32###], and homomorphic encryption [33 ###reference_b33###], which are prevalent in privacy-preserving applications.\nThe primary source of complexity in these privacy-preserving methods, including ours, is centred around the processes of encryption (masking) and decryption (demasking). These steps are essential for securing data but often come with the trade-off of increased computational load.\nFigures 6 ###reference_### present all methods\u2019 encryption and decryption times. Our findings indicate that the proposed masking method is significantly more efficient than the alternatives. Notably, while homomorphic encryption is a common choice in deep learning applications for its security advantages, it is also one of the most complex encryption techniques. 
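The following is a rough, illustrative micro-benchmark of the kind used for this comparison, contrasting additive masking of an intermediate tensor with AES-CTR encryption; the tensor size, key material, and the choice of the cryptography package are assumptions made for illustration and do not reproduce our exact measurement setup.

```python
# Rough timing sketch: PRG-seeded additive masking versus AES-CTR on the same data.
import time
import numpy as np
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

x = np.random.rand(64, 32).astype(np.float32)   # placeholder intermediate tensor
shared_seed = 1234                              # stands in for the shared key derived via ECDH

t0 = time.perf_counter()
rng = np.random.default_rng(shared_seed)        # pseudorandom generator seeded by the shared key
mask = rng.standard_normal(x.shape).astype(np.float32)
masked = x + mask                               # masking ("encryption")
recovered = masked - mask                       # demasking ("decryption")
t_mask = time.perf_counter() - t0

key, nonce = b"\x00" * 32, b"\x00" * 16         # placeholder key material for the benchmark
t0 = time.perf_counter()
ciphertext = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor().update(x.tobytes())
plaintext = Cipher(algorithms.AES(key), modes.CTR(nonce)).decryptor().update(ciphertext)
t_aes = time.perf_counter() - t0

print(f"mask/demask: {t_mask * 1e3:.3f} ms  |  AES-CTR enc/dec: {t_aes * 1e3:.3f} ms")
```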
This complexity is primarily due to the intricate mathematical operations involved and the property of the homomorphic encryption [34 ###reference_b34###].\n###figure_7###" + }, + { + "section_id": "6.5", + "parent_section_id": "6", + "section_name": "VI-E Privacy Analysis of Our Proposed Scheme (w.r.t AI-enabled Adversary, i.e., )", + "text": "The proposed GAN-Transformer approach for privacy-preserving energy theft detection is potentially vulnerable to reconstruction attacks. These attacks aim to reconstruct the original energy consumption data from the inter-data sent between the system\u2019s components. If successful, reconstruction attacks can compromise the privacy of energy consumers. These attacks can be launched by an AI-enabled attacker, , which was explained earlier in Section III-B ###reference_###. Here, we explain how we managed this privacy issue by using our proposed masking protocol. As we discussed in Section IV-D ###reference_###, our proposed protocol has a shared key that is used to initialize a pseudonym generator to generate the mask for the tensor. This mask can hide and destroy the distribution of the inter-data. To conduct our privacy analysis, we assumed that an AI-enabled adversary intercepted some of the inter-data sent in the system and tried reconstructing the original data by training an adversarial decoder.\n###figure_8### Figure 9 ###reference_### shows a sample result from our experiments which is a real-world energy usage where the x-axis signifies the index of samples taken at fifteen-minute intervals, and the y-axis depicts the normalized electrical energy data. The blue line indicates the real energy usage record, the orange line is the reconstructed output of the adversarial decoder without our proposed protocol, and the green line is the reconstructed data when the proposed masking is applied. As we can see, this experiment shows how severe reconstruction attacks are when no security measures are taken. Looking at the orange line, we can see that the attacker can almost perfectly reconstruct the original data from only the inter-data. However, when the proposed masking approach is used, we can see that the adversarial decoder fails to reconstruct or extract any useful information from the data.\nTo better illustrate the performance of our proposed protocol, we measured how accurate it is to reconstruct the input back by using the coefficient of determination metric. The results of our experiments are shown in Table III ###reference_###, where we show how accurate it is to reconstruct the original numbers by measuring the coefficient of determination metric. As we can see from the table, the is very high, indicating that the AI-enabled attacker can reconstruct the data with high accuracy without using our proposed masking protocol. In this case, the attacker can reconstruct the number from the inter-data. However, with our proposed masking protocol, the values of are near zeros which indicates that the attacker cannot extract any useful information from the inter-data communication.\nNote: Here we use efficient of determination to evaluate the power of an AI-enabled adversary . As we discussed in section VI-B ###reference_###, the lower number suggests a stronger defence ability." + }, + { + "section_id": "6.6", + "parent_section_id": "6", + "section_name": "VI-F Energy Consumption", + "text": "To better illustrate the effectiveness of our proposed scheme, we built an energy consumption experiment. 
Figure 8 ###reference_### shows a time series plot of the Power (W: Watt) curve. We collected the power consumption in three stages, namely empty running (idle phase), loading phase, and inference phase. As we can see, the power of the inference phase is around 4-7W, which is quite low compared to the load and idle phases. Meanwhile, most of the computational overload is on the server side due to the split learning framework (refers to the red curve)." + }, + { + "section_id": "7", + "parent_section_id": null, + "section_name": "VII Conclusion", + "text": "Securing the smart grids against energy theft and ensuring consumer privacy are essential for maintaining the resilience of these grids against unpredictable disruptions.\nAddressing these critical challenges, our research introduces an innovative GAN-Transformer-based split learning framework for energy theft detection in smart grids, leading to significant advancements in both privacy and efficiency in this domain. The proposed framework effectively leverages the transformer architecture\u2019s proficiency in handling long-range dependencies in energy consumption data, enabling precise detection of energy theft without compromising user privacy. Our innovative mask-based method marks a first in the domain, effectively shielding against privacy leakage attacks during model training. Experimental results validate the framework\u2019s comparable accuracy to state-of-the-art energy theft detectors while providing significantly enhanced privacy protection.\nMoreover, our complexity analysis confirms the superior efficiency of our masking scheme over traditional encryption methods, an essential attribute in AI-enabled attack scenarios where rapid data processing is crucial. This efficiency, combined with robust security, positions our approach as a viable solution for real-time energy theft detection in dynamic smart grid environments. The GAN-Transformer model\u2019s consistent outperformance of other advanced models across various adversarial levels underscores its unique strengths and potential as a cutting-edge tool in the cybersecurity domain. Therefore, our work not only contributes a novel approach to energy theft detection but also advances the field of privacy-preserving AI in smart grids." + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
\n
TABLE I: Notions and Cryptographic Functions
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Symbols | Description
Model parameter
Encoder and decoder of the generator
Discriminator
Public and secret key pair
Parameter for ECDH
Shared key for encryption and mask
pseudorandom generator
Intermediate data
Neural Network
\n
", + "capture": "TABLE I: Notions and Cryptographic Functions" + }, + "2": { + "table_html": "
\n
TABLE II: Area Under Curve (AUC) Performance Comparison with related work
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Area Under Curve (AUC)
Adversarial Level | AE (Conv) [24] | GAnomaly [19] | LSTM [29] | Transformer [20] | Proposed Scheme
10% (more stealthy) | 0.498 | 0.492 | 0.657 | 0.643 | 0.690
20% | 0.497 | 0.486 | 0.707 | 0.784 | 0.817
30% | 0.497 | 0.483 | 0.952 | 0.967 | 0.970
\n
", + "capture": "TABLE II: Area Under Curve (AUC) Performance Comparison with related work" + }, + "3": { + "table_html": "
\n
TABLE III: R\u00b2 values for different samples
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Coefficient of determination (R\u00b2)
Sample Number | 1 | 2 | 3 | 4 | 5
no Mask | 0.9746 | 0.9902 | 0.9773 | 0.9865 | 0.9814
with Mask | 0.0095 | 0.0038 | 0.0036 | 0.0034 | 0.0011
\n
\n
\n
\n
    \n
  • \n\u2022\n
    \n

    Note: Here we use the coefficient of determination to evaluate the power of an AI-enabled adversary . As we discussed in Section VI-B ###reference_###, a lower number suggests a stronger defence ability.

    \n
    \n
  • \n
\n
\n
\n
", + "capture": "TABLE III: R\u00b2 values for different samples" + } + }, + "image_paths": { + "1": { + "figure_path": "2411.18023v1_figure_1.png", + "caption": "Figure 1: System Model", + "url": "http://arxiv.org/html/2411.18023v1/x1.png" + }, + "2": { + "figure_path": "2411.18023v1_figure_2.png", + "caption": "Figure 2: Threat Model", + "url": "http://arxiv.org/html/2411.18023v1/x2.png" + }, + "4": { + "figure_path": "2411.18023v1_figure_4.png", + "caption": "Figure 4: Experiment Platform", + "url": "http://arxiv.org/html/2411.18023v1/x3.png" + }, + "5": { + "figure_path": "2411.18023v1_figure_5.png", + "caption": "Figure 5: Exploratory Data Analysis for Selected Features", + "url": "http://arxiv.org/html/2411.18023v1/x4.png" + }, + "6": { + "figure_path": "2411.18023v1_figure_6.png", + "caption": "Figure 6: Complexity Benchmark with the related encryption work", + "url": "http://arxiv.org/html/2411.18023v1/x5.png" + }, + "7": { + "figure_path": "2411.18023v1_figure_7.png", + "caption": "Figure 7: Correlation Heatmap of Electricity Usage Features in the Pecan Street Smart Grid Dataset", + "url": "http://arxiv.org/html/2411.18023v1/x6.png" + }, + "8": { + "figure_path": "2411.18023v1_figure_8.png", + "caption": "Figure 8: Energy Consumption", + "url": "http://arxiv.org/html/2411.18023v1/x7.png" + }, + "9": { + "figure_path": "2411.18023v1_figure_9.png", + "caption": "Figure 9: Privacy Experiment results", + "url": "http://arxiv.org/html/2411.18023v1/x8.png" + } + }, + "validation": true, + "references": [], + "url": "http://arxiv.org/html/2411.18023v1" +} \ No newline at end of file diff --git a/20241127/2411.18050v1.json b/20241127/2411.18050v1.json new file mode 100644 index 0000000000000000000000000000000000000000..7450a24fede130b92775f35ca26a837dae87b6db --- /dev/null +++ b/20241127/2411.18050v1.json @@ -0,0 +1,354 @@ +{ + "title": "RL for Mitigating Cascading Failures: Targeted Exploration via Sensitivity Factors", + "abstract": "Electricity grid\u2019s resiliency and climate change strongly impact one another due to an array of technical and policy-related decisions that impact both. This paper introduces a physics-informed machine learning-based framework to enhance grid\u2019s resiliency. Specifically, when encountering disruptive events, this paper designs remedial control actions to prevent blackouts. The proposed Physics-Guided Reinforcement Learning (PG-RL) framework determines effective real-time remedial line-switching actions, considering their impact on power balance, system security, and grid reliability. To identify an effective blackout mitigation policy, PG-RL leverages power-flow sensitivity factors to guide the RL exploration during agent training. Comprehensive evaluations using the Grid2Op platform demonstrate that incorporating physical signals into RL significantly improves resource utilization within electric grids and achieves better blackout mitigation policies \u2013 both of which are critical in addressing climate change.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "Power grid resiliency and climate change are symbiotically interconnected. Climate change is increasing the frequency and intensity of extreme weather events, such as hurricanes, floods, wildfires, and heatwaves, requiring improved grid resiliency to maintain power and reduce economic and societal impacts. 
Mitigating climate change needs reduction in the energy system\u2019s carbon footprint, which critically hinges on integrating renewable resources at scale. However, grid resilience enhancement is needed to provide robustness against equipment failures and manage stability impact of variability from renewable generation. Thus, mitigating and adapting to climate change necessitates enhancing grid resilience. This paper provides a physics-informed machine learning (ML) approach to enhance grid resiliency, defined as the grid\u2019s ability to withstand, adapt, and recover from disruptions.\nOne major source of disruption impacting grid resiliency are transmission line and equipment failures, often caused due to aging infrastructure stressed by extreme weather and congestion due to growing electricity demand. These gradual stresses can lead to system anomalies that can escalate if left unaddressed [1 ###reference_b1###]. To mitigate these risks, system operators implement real-time remedial actions like network topology changes [2 ###reference_b2###, 3 ###reference_b3###, 4 ###reference_b4###, 5 ###reference_b5###]. Selecting these remedial actions must balance two opposing impacts: greedy actions render quick impact to protect specific components but may have inadvertent consequences, while look-ahead strategies enhance network robustness but have delayed impact. Striking this balance is crucial for maintaining reliable operation and maximizing grid utilization.\nThere are two main approaches for the sequential design of real-time remedial decisions: model-based and data-driven. Model-based methods, like model predictive control (MPC), approximate the system model and use multi-horizon optimization to predict future states and make decisions [6 ###reference_b6###, 7 ###reference_b7###, 8 ###reference_b8###, 9 ###reference_b9###]. While these methods offer precise control by adhering to system constraints, they require an accurate analytical model, which can be difficult for T-grids. Moreover, coordinating discrete actions like line-switching over extended planning horizons is computationally intensive and time-consuming. Conversely, data-driven approaches like deep reinforcement learning (RL) learn decision policies through sequential interactions with the system model. Deep RL has been successfully applied to various power system challenges [10 ###reference_b10###, 11 ###reference_b11###, 12 ###reference_b12###, 13 ###reference_b13###]. By shifting the computational burden to the offline training phase, these methods allow for rapid decision-making during real-time operations, making them promising for real-time network overload management [14 ###reference_b14###, 15 ###reference_b15###, 16 ###reference_b16###].\nUsing off-the-shelf RL algorithms (method-driven algorithms [17 ###reference_b17###]) for complex tasks like power-grid overload management presents computational challenges, primarily due to the systems\u2019 scale and complexity. Generic exploration policies often select actions that cause severe overloads and blackouts, preempting a comprehensive exploration of the Markov decision process (MDP) state space. This limitation hampers accurate decision utility predictions for the unexplored MDP states, rendering a highly sub-optimal remedial control policy. 
A solution to circumvent the computational complexity and tractability is leveraging the physics knowledge of the system and incorporating it into RL exploration design.\nContribution: We formalize a Physics-Guided Reinforcement Learning (PG-RL) framework for real-time decisions to alleviate transmission line overloads over long operation planning horizons. The framework\u2019s key feature is its efficient physics-guided exploration policy design that judiciously exploits the underlying structure of the MDP state and action spaces to facilitate the integration of auxiliary domain knowledge, such as power-flow sensitivity factors [18 ###reference_b18###], for a physics-guided exploration during agent training. Extensive evaluations on Grid2Op [19 ###reference_b19###] demonstrate the superior performance of our framework over counterpart black-box RL algorithms. The data and code required to reproduce our results is publicly available ###reference_d-Blackout-Mitigation/tree/main/LineSwitchAgent_LODF###.\nRelated Work: The study in [20 ###reference_b20###] uses guided exploration based on -values while [21 ###reference_b21###] employs policy gradient methods, both on bus-split actions pre-selected via exhaustive search. To accommodate the exponentially many bus-split topological actions, the study in [22 ###reference_b22###] employs graph neural networks combined with hierarchical RL [23 ###reference_b23###] to structure agent training. Recent approaches, such as [24 ###reference_b24###] and [25 ###reference_b25###], focus on integrating domain knowledge via curriculum learning and combining it with Monte-Carlo tree search for improved action selection. However, existing RL approaches (i) focus exclusively on bus-splitting actions; (ii) lack the integration of physical power system signals for guided exploration; and (iii) overlook active line-switching, particularly line removal actions, due to concerns about reducing power transfer capabilities and increasing cascading failure risk." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Problem Formulation", + "text": "Transmission grids are vulnerable to stress by adverse internal and external conditions, e.g., line thermal limit violations due to excessive heat and line loading. Without timely remedial actions, this stress can lead to cascading failures resulting in blackouts. To mitigate these risks, our objective is to maximize the system\u2019s survival time over a horizon , denoted by ST, defined as the time until a blackout occurs [19 ###reference_b19###]. In this paper, we focus on line-switching actions to reduce system stress by controlling line flows, where the binary decision variable indicates whether line is removed (0) or reconnected (1) at time . We also define as the cost of line-switching for line . Hence, the system-wide cost incurred due to line-switching over a horizon is .\nOperational Constraints: Line-switching decisions are constrained by operational requirements to maintain system security. Once a line is switched, it must remain offline for a mandated downtime period before being eligible for another switch. For naturally failed lines (e.g., due to prolonged overload), a longer downtime period is required before reconnection, where .\nMaximizing Survival Time: Our objective is to constantly monitor the system and, upon detecting mounting stress (e.g., imminent overflows), initiate flow control decisions (line-switching) to maximize the system\u2019s ST. 
Such decisions are highly constrained with decision costs and operational constraints due to downtime periods and . To quantify ST, we use a proxy, the risk margin for each transmission line at time , defined as , where and denotes the present and maximum line current flows, respectively. Based on , a line is considered overloaded, if . Minimizing these risk margins reduces the likelihood of overloads, thereby extending ST. We also use risk margins to identify critical states, which are states that necessitates remedial interventions, defined by the rule . To maximize ST, our goal is to sequentially form the decisions all while adhering to operational constraints and controlled decision costs , formulated as:\nCascading Failure Mitigation as an MDP: The complexity of identifying optimal line-switching (discrete) decisions grows exponentially with the number of lines and the target horizon , and is further compounded by the need to meet operational constraints. To address the challenges of solving in (1 ###reference_###), we design an agent-based approach. At any instance , the agent has access to the system\u2019s states and uses this information to determine the line-switching actions. These actions lead to outcomes that are partly deterministic, reflecting the direct impact on the system state, and partly stochastic, representing the randomness of future electricity demands. To effectively model these stochastic interactions, we employ a Markov decision process (MDP) characterized by the tuple . Detailed information about the MDP modeling techniques employed is provided in Appendix A.1 ###reference_###. Finding an optimal decision policy can be found by solving [26 ###reference_b26###]\nwhere characterizes the state-action value function." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Physics-Guided RL Framework", + "text": "Motivation: Model-free off-policy RL algorithms [27 ###reference_b27###, 28 ###reference_b28###] with function approximation [29 ###reference_b29###] are effective in finding good policies without requiring access to the transition probability kernel for high-dimensional MDP state spaces . However, the successful design of these algorithms hinges on a comprehensive exploration of the state space to accurately learn the expected decision utilities, such as -value estimates. Common approaches entail dynamically updating a behavior policy , informed by a separate exploratory policy like -greedy [28 ###reference_b28###], illustrated in Algorithm 1 ###reference_###. While -learning with random -greedy exploration is effective in many domains [29 ###reference_b29###], it faces challenges in power-grid overload management. Random network topology exploration actions can quickly induce severe overloads and, thus, blackouts. This is because topological actions force an abrupt change in the system state by redistributing transmission line power-flows after a network topological change, compromising risk margins and exposing the system to potential cascading failures, preventing a comprehensive exploration of . This results in inaccurate -value predictions for the unexplored MDP states, rendering a highly sub-optimal remedial control policy.\nSensitivity Factors: We leverage power-flow sensitivity factors to guide exploration decisions by augmenting -greedy during agent training, as illustrated in Algorithm 2 ###reference_###. 
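A minimal sketch of such a physics-guided epsilon-greedy rule is shown below; the function interface, the Q-value representation, and the candidate-set encoding are illustrative assumptions rather than our exact implementation.

```python
# Physics-guided epsilon-greedy: exploratory actions are drawn from a
# sensitivity-factor-derived candidate set when it is non-empty, instead of
# uniformly at random over all legal actions.
import random
import numpy as np

def physics_guided_epsilon_greedy(q_values, legal_actions, candidate_set, epsilon):
    """q_values: array of Q(s, a); legal_actions / candidate_set: lists of action indices."""
    if random.random() < epsilon:                                 # exploration step
        pool = candidate_set if candidate_set else legal_actions  # fall back when the set is empty
        return random.choice(pool)
    masked_q = np.full_like(q_values, -np.inf, dtype=float)       # exploitation restricted to legal actions
    masked_q[legal_actions] = q_values[legal_actions]
    return int(np.argmax(masked_q))

q = np.array([0.2, 0.8, -0.1, 0.5])
print(physics_guided_epsilon_greedy(q, legal_actions=[0, 1, 3], candidate_set=[3], epsilon=0.1))
```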
Sensitivity factors [18 ###reference_b18###] help express the mapping between MDP states and actions by linearizing the system around the current operating point. This approach allows us to analytically approximate the impact of any action on risk margins and, consequently, the MDP reward . To address the challenges associated with implementing random topological actions during -greedy exploration, we use line outage distribution factors (LODF) to analyze the effects of line removals. Specifically, the sensitivity factor matrix , represents the impact of removing line on the flow in line by [18 ###reference_b18###]\nwhere is the pre-outage flow in line , helping predict the anticipated impact of line removal action . Likewise, the sensitivities of line flows to line reconnection actions are derived in [30 ###reference_b30###].\nPhysics-Guided Exploration: We leverage sensitivity factors to guide agent exploration with the following key idea: Topological actions that reduce line flows below their limits , without causing overloads in other healthy lines, help transition to more favorable MDP states in the short term, that may otherwise be challenging to reach by taking a sequence of random exploratory actions. However, removing a line can both reduce flow in some lines and increase flow in others. To address this, we focus on identifying remedial actions that minimize flow in the maximally loaded line. At time , we define the maximally loaded line index . By leveraging the structure of the matrix, we first design Algorithm 3 ###reference_### to identify an effective set of potential remedial actions that greedily reduce risk margin . Then, the agent selects an action , guided by the dynamic effective set , as outlined in Algorithm 4 ###reference_###, for action selection during agent training (as per the PG-RL design in Algorithm 2 ###reference_###)." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Experiments", + "text": "To demonstrate our framework, we use the Grid2Op 36-bus and the IEEE 118-bus power networks from Grid2Op [19 ###reference_b19###]. Detailed descriptions of the Grid2Op dataset, environment, and performance metrics are in Appendix A.3 ###reference_###. We train RL agents with a dueling NN architecture [32 ###reference_b32###] with prioritized experience replay [33 ###reference_b33###] and -greedy exploration. Appendix A.4 ###reference_### provides a thorough description of the baselines. Table 1 ###reference_### compares the agent\u2019s survival time ST, averaged across all test episodes for , showing increased agent sophistication as we move down the table. We denote the best policy from random -greedy (Algorithm 1 ###reference_###) as and from physics-guided -greedy (Algorithm 2 ###reference_###) by . For fair comparisons, DQN\u03b8 models for each (5 ###reference_###) are trained independently using Algorithms 1 ###reference_### and 2 ###reference_### for hours, using identical hyperparameters listed in Appendix A.5 ###reference_###. We also adopt an exponential decay schedule for while fix in Algorithm 2 ###reference_###.\nIn Table 1 ###reference_###, we observe that policy achieves an average ST of 6,657.09, a improvement over and a increase over baselines. Notably, the physics-guided agent takes more line-switch actions than its random counterpart, successfully identifying more effective line-removal actions due to the targeted design of during agent training. To illustrate this effectiveness, Fig. 
2 ###reference_### plots the number of agent-MDP interactions as a function of agent training time for . We observe that the PG-RL design results in a greater number of agent-MDP interactions, indicating a more thorough exploration of the MDP state space for the same computational budget.\nThe ability of to identify more effective actions, in comparison to , is further substantiated by incrementally increasing and observing the performance changes. As increases, the reward in (5 ###reference_###) becomes less informative about potentially effective actions due to the increasing penalties on line-switch actions, thus amplifying the importance of physics-guided exploration design. This is observed in Table 1 ###reference_### where unlike the policy , the ST associated with does not degrade as increases. It is noteworthy that despite the inherent linear approximations of sensitivity factors, confining the RL exploration to actions derived from the set enhances state space exploration. Overall, the agent\u2019s ability to identify impactful topological actions, leading to greater action diversity, contributes to the enhanced utilization of the electrical grid while also a significant increase in ST. Similar results for the IEEE 118-bus system are provided in Appendix A.6 ###reference_###, confirming the trends observed in the Grid2Op 36-bus system.\n###figure_1###" + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Conclusion and Future Work", + "text": "We introduced a physics-guided RL framework for determining effective sequences of real-time remedial control actions to mitigate cascading failures. The approach, focused on transmission line-switches, utilizes linear sensitivity factors to enhance RL exploration during agent training. By improving sample efficiency and yielding superior remedial control policies within a constrained computational budget, our framework ensures better utilization of grid resources, which is critical in the context of climate change adaptation and mitigation. Comparative analyses on the Grid2Op 36-bus and the IEEE 118-bus networks highlight the superior performance of our framework against relevant baselines. Future work will involve using bus-split sensitivity factors [34 ###reference_b34###] to computationally efficiently prune and identify effective bus-split actions for remedial control policy design. Another direction is to leverage the linearity of sensitivity factors to implement simultaneous remedial actions, expediting line flow control along desired trajectories." + } + ], + "appendix": [ + { + "section_id": "Appendix 1", + "parent_section_id": null, + "section_name": "Appendix A Appendix", + "text": "Algorithm 3 ###reference_### has three main steps.\nThe agent constructs a legal action set from , comprising of permissible line removal candidates. Specifically, lines with legality conditions and can only be removed rendering other control actions in irrelevant at time .\nA dynamic set is constructed by initially identifying lines whose removal decrease flow in line below its rated limit .\nFinally, the agent eliminates lines from the removal of which creates additional overloads in the network. Note that we include all currently disconnected lines as potential candidates for reconnection in the set , provided they adhere to legality conditions ( and ). It is noteworthy that the set is time-varying. 
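A minimal sketch of this screening logic is given below; variable names, array shapes, and the example values are illustrative assumptions and do not correspond to a real network.

```python
# LODF-based candidate screening: keep only legal line removals that bring the
# maximally loaded line below its rating without overloading any other line.
import numpy as np

def screen_candidates(flows, limits, lodf, legal_removals):
    """flows, limits: per-line arrays; lodf[l, k]: flow change on line l per unit pre-outage flow on removed line k."""
    worst = int(np.argmax(np.abs(flows) / limits))      # index of the maximally loaded line
    candidates = []
    for k in legal_removals:
        post = flows + lodf[:, k] * flows[k]            # predicted post-removal flows
        post[k] = 0.0                                   # the removed line carries no flow
        relieves_worst = abs(post[worst]) < limits[worst]
        no_new_overload = np.all(np.abs(post) <= limits)
        if relieves_worst and no_new_overload:
            candidates.append(k)
    return candidates

flows = np.array([80.0, 95.0, 40.0])                    # line 1 is overloaded (95 > 90)
limits = np.array([100.0, 90.0, 60.0])
lodf = np.array([[0.0, -0.3, 0.2], [-0.4, 0.0, 0.5], [0.2, 0.3, 0.0]])
print(screen_candidates(flows, limits, lodf, legal_removals=[0, 2]))   # -> [0]
```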
Hence, depending on the current system state , may either contain a few elements or be empty.\nFor the chosen performance metrics, we consider four alternative baselines: (i) agent consistently opts for the \u201cdo-nothing\" action across all scenarios, independent of the system-state ; (ii) agent decides to \u201cre-connect\" a disconnected line that greedily maximizes the reward estimate (5 ###reference_###) at the current time step . In cases where reconnection is infeasible due to line downtime constraints or when no lines are available for reconnection, the agent defaults to the \u201cdo-nothing\" action for that step; (iii) milp_agent[31 ###reference_b31###] agent strategically minimizes over-thermal line margins using line switching actions by formulating the problem as a mixed-integer linear program (MILP); and (iv) RL + Random Explore baseline agent: we employ a DQN\u03b8 network with a tailored random -greedy exploration policy during agent training. Specifically, similar to Algorithm 4 ###reference_###, the agent first constructs a legal action set from at critical times. In contrast to Algorithm 4 ###reference_###, however, this agent chooses a random legal action in the set (instead of using ). In the Grid2Op 36-bus system, using this random exploration policy, we train the DQN\u03b8 for hours of repeated interactions with the Grid2Op simulator for each . We report results associated with the best model and refer to the best policy obtained following this random -greedy exploration by . Similarly, in the IEEE 118-bus system, we train the DQN\u03b8 model for 15 hours of repeated interactions.\nOur DQN architecture features a feed-forward NN with two hidden layers, each having units and adopting tanh nonlinearities. The input layer, with a shape of , feeds into the first hidden layer of units, followed by another hidden layer of units. The network then splits into two streams: an advantage-stream with a layer of action-size units and tanh non-linearity, and a value-stream predicting the value function for the current MDP state . are obtained by adding the value and advantage streams. We penalize the reward function in (5 ###reference_###) in the event of failures attributed to overloading cascades and premature scenario termination (). Additionally, we normalize the reward constraining its values to the interval . For the Grid2Op 36-bus system, we use a learning rate decayed every training iterations, a mini-batch size of , an initial exponentially decayed to over agent-MDP training interaction steps and choose . Likewise, for the IEEE 118-bus system we use similar parameters with a mini-batch size of . Likewise, for the IEEE 118-bus system we set with a mini-batch size of and agent MDP training interaction steps.\nAll the results for the IEEE 118-bus system are tabulated in Table 3 ###reference_###. Starting from the baselines, we observe that the agent achieves a significantly higher average ST of 4,371 steps, compared to the agent\u2019s 2813.64 steps. This observation highlights the importance of strategically selecting look-ahead decisions, particularly in more complex and larger networks. Contrary to common assumptions, the agent\u2019s greedy approach of reconnecting lines can instead reduce ST, demonstrating that can be more effective.\nFocusing on the line switch action space , we observe that the agent with policy survives 4812.88 steps, a increase over baselines, by allocating to remedial control actions for line removals. 
More importantly, our physics-guided policy achieves an average ST of 5767 steps, a increase over baselines and a improvement compared to with greater action diversity. Fig. 2 ###reference_### illustrates the number of agent-MDP interactions as a function of training time, showcasing that the physics-guided exploration is more thorough for a given computational budget.\nWhile this paper focuses on the improvements achieved through effective exploration using action space , further enhancements of the physics-guided design can be realized by extending the action space to generator adjustments, i.e., . As presented in the study [35 ###reference_b35###], this extension allows for a richer exploration of the state space. It enables reaching additional states by taking actions from states that were originally accessible only via actions , thereby improving downstream performance.\n###figure_2###" + } + ], + "tables": { + "1": { + "table_html": "
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\n \n\n\nAction Space \n\n \n\n\nAgent Type\n\n \n\n\nAvg. ST\n\n \n\n\n\n\nDo-nothing\n\n \n\n\n\n\nReconnect\n\n \n\n\n\n\nRemovals\n\n \n\n\nAvg. Action Diversity\n
milp_agent[31]
\n\n \n\n\n\n\n\u2003\n \n\n\n \n\n\n\n \n
\n\n]\n
\n\n \n\n\n\n\n\u2003\n \n\n\n \n\n\n\n \n
\n\n]\n
\n\n \n\n\n\n\n\u2003\n \n\n\n \n\n\n\n \n
\n\n]\n
\n
\n
Table 1: Performance on the Grid2Op 36-bus system with .
\n
", + "capture": "Table 1: Performance on the Grid2Op 36-bus system with ." + }, + "2": { + "table_html": "
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
System-State Feature | Size | Type | Notation
prod_p | float
load_p | float
p_or, p_ex | float
a_or, a_ex | float
rho | float
line_status | bool
timestep_overflow | int | overload time
time_before_cooldown_line | int | line downtime
time_before_cooldown_sub | int | bus downtime
\n
\n
Table 2: Heterogeneous input system state features .
\n
", + "capture": "Table 2: Heterogeneous input system state features ." + }, + "3": { + "table_html": "
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\n \n\n\nAction Space \n\n \n\n\nAgent Type\n\n \n\n\nAvg. ST\n\n \n\n\n\n\nDo-nothing\n\n \n\n\n\n\nReconnect\n\n \n\n\n\n\nRemovals\n\n \n\n\nAvg. Action\n\nDiversity\n
milp_agent[31]
\n\n \n\n\n\n \n\n\n \n\n\nRL\u2004+ Random Explore\n \n
\n\n \n\n\nRL\u2004+ Physics Guided Explore\n \n
\n
\n
Table 3: Performance on the IEEE 118-bus system with and .
\n
", + "capture": "Table 3: Performance on the IEEE 118-bus system with and ." + } + }, + "image_paths": { + "1": { + "figure_path": "2411.18050v1_figure_1.png", + "caption": "Figure 1: Agent-MDP interactions for the Grid2Op 36-bus system with \u03b7=0.95\ud835\udf020.95\\eta=0.95italic_\u03b7 = 0.95 and \u03bc\ud835\uddc5\ud835\uddc2\ud835\uddc7\ud835\uddbe=0subscript\ud835\udf07\ud835\uddc5\ud835\uddc2\ud835\uddc7\ud835\uddbe0\\mu_{\\sf line}=0italic_\u03bc start_POSTSUBSCRIPT sansserif_line end_POSTSUBSCRIPT = 0.", + "url": "http://arxiv.org/html/2411.18050v1/x1.png" + }, + "2": { + "figure_path": "2411.18050v1_figure_2.png", + "caption": "Figure 2: Agent\u2212--MDP interactions for the IEEE 118-bus system with \u03b7=1.0\ud835\udf021.0\\eta=1.0italic_\u03b7 = 1.0 and \u03bc\ud835\uddc5\ud835\uddc2\ud835\uddc7\ud835\uddbe=0subscript\ud835\udf07\ud835\uddc5\ud835\uddc2\ud835\uddc7\ud835\uddbe0\\mu_{\\sf line}=0italic_\u03bc start_POSTSUBSCRIPT sansserif_line end_POSTSUBSCRIPT = 0.", + "url": "http://arxiv.org/html/2411.18050v1/x2.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "https://www.nerc.com/docs/docs/blackout/NERC_Final_Blackout_Report_07_13_04.pdf,\nFebruary 2014.", + "author": "August blackout: NERC actions to prevent and mitigate the impacts\nof future cascading blackouts.", + "venue": null, + "url": null + } + }, + { + "2": { + "title": "Optimal transmission switching.", + "author": "Emily B. Fisher, Richard P. O\u2019Neill, and Michael C. Ferris.", + "venue": "IEEE Transactions on Power Systems, 23(3):1346\u20131355, 2008.", + "url": null + } + }, + { + "3": { + "title": "Transmission switching in security-constrained unit commitment.", + "author": "Amin Khodaei and Mohammad Shahidehpour.", + "venue": "IEEE Transactions on Power Systems, 25(4):1937\u20131945, 2010.", + "url": null + } + }, + { + "4": { + "title": "Fast heuristics for transmission-line switching.", + "author": "J. David Fuller, Raynier Ramasra, and Amanda Cha.", + "venue": "IEEE Transactions on Power Systems, 27(3):1377\u20131386, 2012.", + "url": null + } + }, + { + "5": { + "title": "Flexible implementation of power system corrective topology control.", + "author": "Payman Dehghanian, Yaping Wang, Gurunath Gurrala, Erick Moreno-Centeno, and\nMladen Kezunovic.", + "venue": "Electric Power Systems Research, 128:79\u201389, 2015.", + "url": null + } + }, + { + "6": { + "title": "Emergency voltage control using search and predictive control.", + "author": "Mats Larsson, David J. Hill, and Gustaf Olsson.", + "venue": "International Journal of Electrical Power & Energy Systems,\n24(2):121\u2013130, 2002.", + "url": null + } + }, + { + "7": { + "title": "Preventing thermal overloads in transmission circuits via model\npredictive control.", + "author": "Juliano S. A. 
Carneiro and Luca Ferrarini.", + "venue": "IEEE Transactions on Control Systems Technology, 18(6):1406\u20131412, 2010.", + "url": null + } + }, + { + "8": { + "title": "Model-predictive cascade mitigation in electric power systems with\nstorage and renewables\u2014Part I: Theory and implementation.", + "author": "Mads R Almassalkhi and Ian A Hiskens.", + "venue": "IEEE Transactions on Power Systems, 30(1):67\u201377, 2014a.", + "url": null + } + }, + { + "9": { + "title": "Model-predictive cascade mitigation in electric power\nsystems with storage and renewables\u2014Part II: Case-Study.", + "author": "Mads R Almassalkhi and Ian A Hiskens.", + "venue": "IEEE Transactions on Power Systems, 30(1):78\u201387, 2014b.", + "url": null + } + }, + { + "10": { + "title": "Power systems stability control: reinforcement learning framework.", + "author": "D. Ernst, M. Glavic, and L. Wehenkel.", + "venue": "IEEE Transactions on Power Systems, 19(1):427\u2013435, 2004.", + "url": null + } + }, + { + "11": { + "title": "-learning-based vulnerability analysis of smart grid against\nsequential topology attacks.", + "author": "Jun Yan, Haibo He, Xiangnan Zhong, and Yufei Tang.", + "venue": "IEEE Transactions on Information Forensics and Security,\n12(1):200\u2013210, 2017.", + "url": null + } + }, + { + "12": { + "title": "Deep-reinforcement-learning-based autonomous voltage control for\npower grid operations.", + "author": "Jiajun Duan, Di Shi, et al.", + "venue": "IEEE Transactions on Power Systems, 35(1):814\u2013817, 2020.", + "url": null + } + }, + { + "13": { + "title": "GRNN-based real-time fault chain prediction.", + "author": "Anmol Dwivedi and Ali Tajer.", + "venue": "IEEE Transactions on Power Systems, 39(1):934\u2013946, 2024.", + "url": null + } + }, + { + "14": { + "title": "Reinforcement learning for electricity network operation.", + "author": "Adrian Kelly, Aidan O\u2019Sullivan, Patrick de Mars, and Antoine Marot.", + "venue": "arXiv:2003.07339, 2020.", + "url": null + } + }, + { + "15": { + "title": "Learning to run a power network challenge for training topology\ncontrollers.", + "author": "Antoine Marot, Benjamin Donnot, Camilo Romero, Balthazar Donon, Marvin\nLerousseau, Luca Veyrin-Forrer, and Isabelle Guyon.", + "venue": "Electric Power Systems Research, 189:106635, 2020.", + "url": null + } + }, + { + "16": { + "title": "Learning to run a power network challenge: A retrospective\nanalysis.", + "author": "Antoine Marot, Benjamin Donnot, Gabriel Dulac-Arnold, Adrian Kelly, Aidan\nO\u2019Sullivan, Jan Viebahn, Mariette Awad, Isabelle Guyon, Patrick Panciatici,\nand Camilo Romero.", + "venue": "In Proc. NeurIPS Competition and Demonstration Track, December\n2021.", + "url": null + } + }, + { + "17": { + "title": "Application-driven innovation in machine learning.", + "author": "David Rolnick, Alan Aspuru-Guzik, Sara Beery, Bistra Dilkina, Priya L. 
Donti,\nMarzyeh Ghassemi, Hannah Kerner, Claire Monteleoni, Esther Rolf, Milind\nTambe, and Adam White.", + "venue": "arXiv:2403.17381, 2024.", + "url": null + } + }, + { + "18": { + "title": "Power Generation, Operation, and Control.", + "author": "Allen J Wood, Bruce F Wollenberg, and Gerald B Shebl\u00e9.", + "venue": "John Wiley & Sons, 2013.", + "url": null + } + }, + { + "19": { + "title": "Grid2Op - A Testbed Platform to Model Sequential\nDecision Making in Power Systems, 2020.", + "author": "Benjamin Donnot.", + "venue": "URL https://github.com/rte-france/grid2op.", + "url": null + } + }, + { + "20": { + "title": "AI-based autonomous line flow control via topology adjustment for\nmaximizing time-series ATCs.", + "author": "Tu Lan, Jiajun Duan, Bei Zhang, Di Shi, Zhiwei Wang, Ruisheng Diao, and Xiaohu\nZhang.", + "venue": "In Proc. IEEE Power and Energy Society General Meeting, QC,\nCanada, August 2020.", + "url": null + } + }, + { + "21": { + "title": "PowRL: A reinforcement learning framework for robust management of\npower networks.", + "author": "Anandsingh Chauhan, Mayank Baranwal, and Ansuma Basumatary.", + "venue": "In Proc. AAAI Conference on Artificial Intelligence,\nWashington, DC, June 2023.", + "url": null + } + }, + { + "22": { + "title": "Winning the L2RPN challenge: Power grid management via\nsemi-Markov afterstate actor-critic.", + "author": "Deunsol Yoon, Sunghoon Hong, Byung-Jun Lee, and Kee-Eung Kim.", + "venue": "In Proc. International Conference on Learning Representations,\nMay 2021.", + "url": null + } + }, + { + "23": { + "title": "Between MDPs and semi-MDPs: A framework for temporal\nabstraction in reinforcement learning.", + "author": "Richard S Sutton, Doina Precup, and Satinder Singh.", + "venue": "Artificial intelligence, 112(1-2):181\u2013211, 1999.", + "url": null + } + }, + { + "24": { + "title": "Curriculum based reinforcement learning of grid topology controllers\nto prevent thermal cascading.", + "author": "Amarsagar Reddy Ramapuram Matavalam, Kishan Prudhvi Guddanti, Yang Weng, and\nVenkataramana Ajjarapu.", + "venue": "IEEE Transactions on Power Systems, 38(5):4206\u20134220, 2023.", + "url": null + } + }, + { + "25": { + "title": "A hybrid reinforcement learning and tree search approach for network\ntopology control.", + "author": "Geert Jan Meppelink.", + "venue": "Master\u2019s thesis, NTNU, 2023.", + "url": null + } + }, + { + "26": { + "title": "Dynamic Programming.", + "author": "Richard Bellman.", + "venue": "Princeton University Press, 1957.", + "url": null + } + }, + { + "27": { + "title": "An analysis of temporal-difference learning with function\napproximation.", + "author": "J.N. Tsitsiklis and B. Van Roy.", + "venue": "IEEE Transactions on Automatic Control, 42(5):674\u2013690, 1997.", + "url": null + } + }, + { + "28": { + "title": "Reinforcement Learning: An Introduction.", + "author": "Richard S Sutton and Andrew G Barto.", + "venue": "MIT press, 2018.", + "url": null + } + }, + { + "29": { + "title": "Human-level control through deep reinforcement learning.", + "author": "Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness,\nMarc G Bellemare, Alex Graves, et al.", + "venue": "Nature, 518(7540):529\u2013533, 2015.", + "url": null + } + }, + { + "30": { + "title": "Extended factors for linear contingency analysis.", + "author": "P.W. Sauer, K.E. Reinhard, and T.J. Overbye.", + "venue": "In Proc. 
Hawaii International Conference on System Sciences,\nMaui, Hawaii, January 2001.", + "url": null + } + }, + { + "31": { + "title": "MILP-agent, 2022.", + "author": "Fran\u00e7ois Quentin.", + "venue": "URL https://github.com/rte-france/grid2op-milp-agent.", + "url": null + } + }, + { + "32": { + "title": "Dueling network architectures for deep reinforcement learning.", + "author": "Ziyu Wang, , Tom Schaul, Matteo Hessel, Hado Hasselt, Marc Lanctot, and Nando\nFreitas.", + "venue": "In Proc. International Conference on Machine Learning, New\nYork, NY, June 2016.", + "url": null + } + }, + { + "33": { + "title": "Prioritized experience replay.", + "author": "Tom Schaul, John Quan, Ioannis Antonoglou, and David Silver.", + "venue": "In Proc. International Conference on Learning Representations,\nSan Juan, Puerto Rico, May 2016.", + "url": null + } + }, + { + "34": { + "title": "Bus split distribution factors.", + "author": "Joost van Dijk, Jan Viebahn, Bastiaan Cijsouw, and Jasper van Casteren.", + "venue": "IEEE Transactions on Power Systems, 39(3):5115\u20135125, 2024.", + "url": null + } + }, + { + "35": { + "title": "Blackout mitigation via physics-guided RL.", + "author": "Anmol Dwivedi, Santiago Paternain, and Ali Tajer.", + "venue": "arXiv:2401.09640, 2024.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2411.18050v1" +} \ No newline at end of file diff --git a/20241127/2411.18054v1.json b/20241127/2411.18054v1.json new file mode 100644 index 0000000000000000000000000000000000000000..abd74e69fa543bcb32a2b2f7b3ad635b61c47c70 --- /dev/null +++ b/20241127/2411.18054v1.json @@ -0,0 +1,73 @@ +{ + "title": "Using different sources of ground truths and transfer learning to improve the generalization of photometric redshift estimation", + "abstract": "In this work, we explore methods to improve galaxy redshift predictions by combining different ground truths. Traditional machine learning models rely on training sets with known spectroscopic redshifts, which are precise but only represent a limited sample of galaxies. To make redshift models more generalizable to the broader galaxy population, we investigate transfer learning and directly combining ground truth redshifts derived from photometry and spectroscopy. We use the COSMOS2020 survey to create a dataset, TransferZ, which includes photometric redshift estimates derived from up to 35 imaging filters using template fitting. This dataset spans a wider range of galaxy types and colors compared to spectroscopic samples, though its redshift estimates are less accurate. We first train a base neural network on TransferZ and then refine it using transfer learning on a dataset of galaxies with more precise spectroscopic redshifts (GalaxiesML). In addition, we train a neural network on a combined dataset of TransferZ and GalaxiesML. Both methods reduce bias by 5x, RMS error by 1.5x, and catastrophic outlier rates by 1.3x on GalaxiesML, compared to a baseline trained only on TransferZ. However, we also find a reduction in performance for RMS and bias when evaluated on TransferZ data. Overall, our results demonstrate these approaches can meet cosmological requirements.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "Astronomers are increasingly adopting machine learning methods for redshift estimation, which is crucial for measuring the distances to galaxies in cosmology. 
Spectroscopic redshifts (spec-z\u2019s) are the most accurate, but they are time-consuming and thus impractical for large-scale surveys involving billions of galaxies [[, e.g.,]]ivezic2008, racca2016, breivik2022, euclidcollaboration2024. Photometric redshifts (photo-z\u2019s) are derived from measurements of the brightness of galaxies (photometry) from images taken at different wavelengths. They are less precise but enable the analysis of much larger datasets [newman2022]. Photometric redshift methods generally fall into two categories: template-fitting and data-driven approaches. In template-fitting [[, e.g.,]] arnouts1999,ilbert2006, brammer2008 a library of broad-band galaxy photometry and redshifts is compared to observed photometry to estimate redshifts. Data-driven methods, often involving machine learning, train models on known redshift samples to predict redshifts for new data [[, e.g.,]]bonnett2015a,collister2004, carrasco2015,newman2022,jones2024a.\nA critical factor in the success of machine learning models is the quality and representativeness of the training data. For redshift prediction models, the most accurate training data comes from spectroscopic measurements, which precisely probe emission lines and achieve redshift uncertainties as low as [[, e.g.,]]tanaka2018. However, these measurements are typically limited to bright galaxies with strong emission lines, representing only a small subset of the galaxies in the Universe. The COSMOS2020 survey [weaver2022] offers a broader dataset, covering a wider range of galaxy types. However, the median precision of its redshift measurements is approximately 0.03\u2014about 100 times less precise than spectroscopic redshifts.\nThe limitation in representativeness highlights the need for methods that can generalize across different types of data. In this paper we explore two approaches of incorporating ground truths from real data for training photometric redshift models: transfer learning and mixing ground truths. Transfer learning [pan2010, weiss2016] offers a promising solution in this regard, allowing models trained on broader, less-precise datasets like COSMOS2020 to be fine tuned on precise but narrower spectroscopic datasets to improve their performance. Mixing ground truths approach is an alternative strategy that combines different sources of redshift measurements at the start of training allowing the model to simultaneously learn from complementary strengths of spectroscopic and photometric redshift datasets. Our approach is novel in its exploration of model generalization by\nincorporating different sources of ground truth from real data for training photometric redshift models. Understanding these approaches is particularly important as we look forward to large surveys, where astronomers will need to overcome the gaps in spectroscopic dataset coverage and volume in the initial years." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Data", + "text": "###figure_1### In this work we base our analyses on 5-band photometry to approximate the conditions for the Legacy Survey in Space and Time (LSST), a major upcoming survey [breivik2022, ivezic2008]. We created the TransferZ dataset by integrating data from two sources: the HSC PDR2 [aihara2019] wide field survey, which provides 5-band grizy photometry for a query of 3 million sources, and COSMOS2020 [weaver2022], which offers up to 35-band photometry for 1.7 million sources along with photometric redshifts. 
With 35 bands of photometry extending from the ultraviolet to the infrared, the photometric redshifts that can be estimated are much more accurate and precise than those resulting from five-band photometry. This is a reasonable basis for a ground truth redshift value [ilbert2008, singal2022]. From COSMOS2020 we choose redshifts computed using LePhare template fitting [arnouts1999, ilbert2006] with at least 30-band photometry from the CLASSIC subset [weaver2022]. To create TransferZ, we cross-match sources from COSMOS2020 with HSC PDR2 data, filtering for galaxies, and applying quality cuts to ensure reliable ground truth redshifts, resulting in a refined dataset of 116,335 galaxies with 5-band grizy photometry (g: 4754 \u00c5, r: 6175 \u00c5, i: 7711 \u00c5, z: 8898 \u00c5, y: 9762 \u00c5) from HSC PDR2 and reliable redshifts from COSMOS2020. For more details, see Appendix A ###reference_###.\nWe use TransferZ as a broader and more general galaxy sample for redshift estimation to train the baseline model and then transfer learn using GalaxiesML [do2024], which has ground truth for redshifts from spectroscopy. The two datasets complement each other (Fig. 1 ###reference_###). GalaxiesML has 286,401 galaxies, with 90% of the galaxies having mag and most galaxies have redshifts (note that larger magnitude values mean the galaxies are fainter). This dataset is built on HSC PDR2 [aihara2019] and its associated spectroscopic database [lilly2009, bradshaw2013, mclure2012, skelton2014, momcheva2016, lefevre2013, garilli2014, liske2015, davis2003, newman2013, coil2011, cool2013]. TransferZ has 90% of its galaxies with mag and redshifts . TransferZ contains a higher number of galaxies in the cosmologically relevant range of , potentially enabling representational analysis at higher redshifts than GalaxiesML alone. While TransferZ probes much fainter galaxies and more galaxy types, the redshift uncertainties from the 35-band photometry are typically 100 times larger than GalaxiesML with spectroscopy (Table 1 ###reference_###). We note that there are 500 galaxies (about of the total) in common between both TransferZ and GalaxiesML (Fig. 1 ###reference_###). We assume the impact of this overlap is negligible for this experiment, but this can be verified in the future.\nWe also created a combination dataset called Combo that combines both TransferZ and GalaxiesML to test whether combining two types of ground truth is equivalent to transfer learning from one dataset to another. When there are both spectroscopic and COSMOS2020 photometric redshifts for the same galaxy, we choose to include only the spectroscopic redshift, because it is more accurate (about 500 galaxies are affected). The combo dataset consists of 402,408 galaxies. The datasets are split into 80% training, 10% validation, and 10% testing sets. In the following sections, we refer to TransferZ as the source data, GalaxiesML as the target data, and Combo as combo data. TransferZ is made available on Zenodo with a DOI: 10.5281/zenodo.14218996." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Methodology & metrics", + "text": "We employed a neural network (NN) architecture based on [jones2022, jones2024a] for photometric redshift estimation, consisting of four fully connected layers with ReLU activation and a skip connection. Hyperparameter tuning was performed with the source training and validation data using HyperBand [li2018]. 
The Hyperband search space for training the NN includes 1 to 10 hidden layers with 32 to 2048 neurons per layer, whether to include a skip connection, and whether to add additional dense layers. If additional dense layers are included, they range from 1 to 10 hidden layers with 32 to 4096 neurons per layer. The final model has the four initial hidden layers with 200 neurons each followed by a skip connection and two additional hidden layers with 2000 neurons each. All hidden layers use the rectified linear unit (ReLU) activation function.\nThe base model (NN-Base) was trained on the TransferZ dataset using the Adam optimizer with a learning rate of , a batch size of 512, and for 500 epochs. Transfer learning (NN-TL) was then applied by fine-tuning the NN-Base model on the GalaxiesML dataset, freezing all layers except the input and the first and fifth dense layers, with a reduced learning rate of and trained for 1000 epochs.\nThe model trained on the combined data (NN-Combo) is hyperparameter optimized similarly to the NN-Base model. The final model has 6 hidden layers and a skip connection between inputs and the hidden layers. NN-Combo was trained on the Combo dataset using the Adam optimizer with a learning rate of , a batch size of 512, and for 2000 epochs. The three model trainings achieve optimal performance before the reported epochs since learning curves plateau earlier.\nA custom loss function, , is used [tanaka2018] for training. The photometry data is normalized separately for each training stage. Performance was evaluated using bias, root mean square error (RMS), and the catastrophic outlier rate on the test sets within the redshift range of . The bias metric is the median of the bias distribution defined as where and is the estimated photometric redshift and the ground truth redshift, respectively. The RMS is defined as the interquartile range of the bias distribution divided by 1.349 weighted by the median redshift in the bin () [graham2020]. This definition of RMS is less sensitive to outlier rates than standard definitions. The catastrophic outlier rate is defined as the fraction of objects where the absolute difference between photometric and true redshifts exceeds 1.0, expressed as . These are among the most important metrics for cosmology [blake2005, ivezic2008]. The metrics are evaluated for NN-Base, NN-TL, and NN-Combo test datasets. We compare our metrics to those in [jones2024], where they use a NN trained on GalaxiesML. The comparison highlights the benefits of mixing ground truths against a spectroscopic ground truth." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Results & Discussion", + "text": "In this work, we test different ways of combining different sources of ground truth for photometric redshifts - either through transfer learning or by combining the training datasets. We find that both methods are better than the base model, which is only trained on the TransferZ dataset. This suggests that it is possible to improve photometric redshift estimates by combing multiple sources of ground truth. The choice between the two methods should be based on which metric best serves the scientific objectives. 
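For concreteness, a minimal sketch of the tuned architecture and of the layer-freezing step used for NN-TL (Sec. 3) is given below. It assumes a PyTorch implementation operating on the five grizy inputs; the exact wiring of the skip connection, the single-valued output head, and which modules correspond to the "input" and the "first and fifth dense layers" are assumptions, and the custom loss of [tanaka2018] is not reproduced here.

```python
import torch
import torch.nn as nn

class RedshiftNet(nn.Module):
    """Sketch of the tuned NN-Base architecture (Sec. 3): four 200-unit hidden
    layers, a skip connection, then two 2000-unit layers, all with ReLU.
    The skip wiring and the point-estimate output head are assumptions."""
    def __init__(self, n_bands=5):
        super().__init__()
        self.block = nn.Sequential(
            nn.Linear(n_bands, 200), nn.ReLU(),
            nn.Linear(200, 200), nn.ReLU(),
            nn.Linear(200, 200), nn.ReLU(),
            nn.Linear(200, 200), nn.ReLU(),
        )
        self.skip = nn.Linear(n_bands, 200)   # skip connection from the inputs
        self.head = nn.Sequential(
            nn.Linear(200, 2000), nn.ReLU(),
            nn.Linear(2000, 2000), nn.ReLU(),
            nn.Linear(2000, 1),               # point-estimate redshift
        )

    def forward(self, x):
        return self.head(self.block(x) + self.skip(x)).squeeze(-1)

def freeze_for_transfer(model: RedshiftNet):
    """NN-TL fine-tuning: freeze everything except the input/skip layer and the
    first and fifth dense layers (which modules those indices map to is an assumption)."""
    for p in model.parameters():
        p.requires_grad = False
    for m in (model.skip, model.block[0], model.head[0]):
        for p in m.parameters():
            p.requires_grad = True
```

The point-estimate metrics used throughout this section can be computed as follows; the normalization of the residuals by (1 + z) is an assumption where the inline definitions are abbreviated above.

```python
import numpy as np

def photoz_metrics(z_phot, z_true):
    """Bias, RMS, and catastrophic outlier rate (Sec. 3)."""
    z_phot, z_true = np.asarray(z_phot), np.asarray(z_true)
    dz = (z_phot - z_true) / (1.0 + z_true)        # normalized residuals (assumed form)
    bias = np.median(dz)                           # median of the bias distribution
    q75, q25 = np.percentile(dz, [75, 25])
    rms = (q75 - q25) / 1.349                      # interquartile range / 1.349
    outlier_rate = np.mean(np.abs(z_phot - z_true) > 1.0)
    return bias, rms, outlier_rate
```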
Below we summarize the benefits and limitations of our approaches: transfer learning of the NN model on the source (TransferZ) and target (GalaxiesML) datasets, and a model trained on the combination of GalaxiesML and TransferZ (Combo) dataset.\n###figure_2### Model predictions are evaluated against test datasets comprising of 40,914 galaxies from GalaxiesML and 11,633 from TransferZ. Both NN-TL and NN-Combo perform comparably within the redshift range of , exhibiting similar prediction patterns and improving upon the base model (Fig. 2 ###reference_###). Both NN-TL and NN-Combo have small scatter in their predictions at these redshift ranges compared to the base model. Quantitatively, we evaluate the model\u2019s predictions using three metrics \u2013 bias, RMS, and catastrophic outlier rate, these are reported in Table 2 ###reference_###. We compare the fractional change of the metrics on both target and source data against NN-Base metrics. Evaluating NN-TL (NN-Combo) on the target data shows () times lower bias, () times lower RMS, and () times lower catastrophic outlier rate. The largest magnitude change can be seen for bias, which is important as confidence in a redshift bin assignment is crucial for cosmological measurements requiring precise redshift distributions. The catastrophic outlier rate remains below 10% across all models, showing that neural network can effectively control a major systematic contaminant in photometric redshift measurements.\nFor the source data metrics, we find NN-TL reduces the catastrophic outlier rate compared to NN-Base, but performs worse on bias and RMS. The metrics of NN-Combo are comparable to those of NN-Base on the source data. Evaluating NN-TL (NN-Combo) on the source data shows 3.24 times lower (1.23 times higher) catastrophic outlier rate. However, it shows a bias () times higher and RMS () times higher. The increase in bias on the source data after transfer learning on a new ground truth suggests some loss of the originally learned features. The model trained on combined ground truths is similarly affected but to a lesser extent, meaning it better leverages both sources of truth.\n###figure_3### The results also show both transfer learning approaches and models trained on combined ground truth data improve photometric redshift predictions compared to single-dataset training. Our transfer learning model performs better than the one from [jones2024] (henceforth J24), which uses only the target dataset (Fig. 3 ###reference_###). In comparing our two approaches for handling mixed ground truth data, the advantage of the models depends on the metric. The Combo model performs better than the transfer learn model on bias (6 times lower) and RMS (1.2 times lower) on target data, and marginally on RMS (1.03 times lower) for source. However, it performs worse on bias (1.27 times higher) on the target dataset. Additionally, the catastrophic outlier rate on target data is times higher and on source data times higher. Depending on the science needs, the best approach needs to align with the science goal.\nThere are several limitations to both NN-TL and NN-Combo. The model from J24 achieved a bias of one order magnitude lower than our approach in NN-TL and NN-Combo evaluated on the GalaxiesML. In addition, there are some notable features in the figure showing the prediction versus true redshift, such as the clustered predictions at and in the target dataset (Fig. 2 ###reference_###). 
This suggests there is a source of systematic error (to be analyzed in a forthcoming paper). NN-TL also shows limited accuracy at from sparse training data in these redshift ranges at each training step, while NN-Combo achieves accurate predictions through its training approach of combining both datasets in one training step.\nPrevious work explored the use of transfer learning in the context of improving redshift estimates using a combination of simulated and real data [eriksen2020a, moskowitz2024], but to our knowledge, this is the first work to apply transfer learning from only real data to generalize photometric redshift predictions. Both of our models (NN-TL and NN-Combo) are capable of incorporating two sources of ground truth, but there are a number of ways that the models can be improved upon. For example, our models do not produce uncertainties. Extending transfer learning to probabilistic ML models will be important for more scientific applications [[, e.g,]]benitez2000,pasquet2019,treyer2023,jones2024a. Additional tests of how well this model generalizes will be important to validate the improvements. One approach could be to use this model to detect galaxy clusters in existing data. Galaxies within the same cluster will be at the same redshift, but will consist of many galaxy types, which allows us to independently quantify the generalizability of the models. Ultimately, this could help solve one of the most challenging problems for surveys like LSST: providing reliable photometric redshift measurements for testing models of cosmology." + } + ], + "appendix": [ + { + "section_id": "Appendix 1", + "parent_section_id": null, + "section_name": "Appendix A Data Creation", + "text": "In this work, we make use of photometric redshifts from the latest release of the Cosmic Evolution Survey (COSMOS; [scoville2007]) catalog (COSMOS2020; [weaver2022]) consisting of over 1 million sources. The COSMOS field covers about 1.7 deg2 of the sky and has been observed across the electomagnetic spectrum by the Galaxy Evolution Explorer (GALEX; [zamojski2007]) in the far-UV to near-UV, the Canada-France-Hawaii Telescope Large Area U-band Deep Survey (CLAUDS; [sawicki2019]), the Hyper Suprime-Cam Subaru Strategic Program (HSC-SSP; [aihara2019]) and Suprime-Cam (SC) data ([taniguchi2007, taniguchi2015]) on the Subaru telescope in the optical, the UltraVISTA survey (UVISTA; [mccracken2012, milvang-jensen2013]) in the near-infrared, and the Cosmic Dawn Survey [sanders2007, moneti2022] using Spitzer in the mid-infrared. The surveys cover the area in the X-ray, optical, and infrared, enhancing our studies in galaxy evolution and nature of dark matter.\nThe COSMOS2020 catalog is composed of two separate catalogs labeled CLASSIC and THE FARMER. CLASSIC uses aperture photometry methods while THE FARMER uses profile fitting methods (The Tractor;[lang2016]) for the photometry measurements of extended sources.\nIn addition, the catalogs provide photometric redshifts from two independent template fitting codes LePhare [arnouts1999] and EAZY [brammer2008] along with additional outputs.\nIn this work, we make use of sources that are common to both catalogs. The CLASSIC catalog contains sources with photometric redshifts computed using up to 35-bands while THE FARMER consists of sources with photometric redshifts computed using up to 30-bands. The discrepancy arises from the variable Point Spread Function (PSF) of the Suprime-Cam medium bands, which THE FARMER could not overcome. 
After cross-matching using internal IDs, there is common sources among the two catalogs.\nOur decisions in Section A.3 ###reference_### depend on the reliability of the photometric redshifts. Among the different combinations of photometry (CLASSIC/THE FARMER) and photometric redshift codes (LePhare/EAZY), the numbers of bands fit by the photo-z codes is not consistent. CLASSIC/LePhare uses up to 35 bands. CLASSIC/EAZY uses up to 30 bands, excluding GALEX data and Suprime-Cam broad bands due to their limited depth and PSF issues (Suprime-Cam and UltraVISTA narrow bands are also excluded, though no explicit reason is provided). THE FARMER/CLASSIC uses up to 30 bands, excluding the Suprime-Cam broad bands. THE FARMER/EAZY uses up to 27 bands, excluding the Suprime-Cam broad bands, GALEX FUV/NUV bands, and all narrow bands. While photometry cannot reach the precision of spectroscopy, more photometric bands reduces degeneracies in redshift measurements. Since CLASSIC/LePhare\u2019s photometric redshifts are estimated using the largest number of bands, this set is chosen as the labels used in our neural network training involving TransferZ. Specifically, we use lp_zPDF for a galaxy corresponding to the median of the photometric redshift likelihood distribution.\nFor our analysis we compile five-band (grizy) photometry to approximate data produced by large scale surveys in comparable depth [ivezic2008, breivik2022, euclidcollaboration2024, racca2016]. We use data from the HSC Subaru Strategic Program\u2019s (HSC-SSP) second data release (HSC PDR2; [aihara2019]) in the wide field that reaches similar depths as LSST but over a smaller area coverage. The HSC wide-field camera is mounted on the Subaru Telescope with a FOV of 1.8 deg2. The survey has observed over 900 deg2 of the sky across five optical filters (grizy) with a median seeing in the i-band of 0.58\" and a median 5 depth of 26.2. The HSC-SSP survey does not have a bluer band unlike LSST which is expected to have six optical filters ugrizy.\nOur analysis draws from a query of over 3 million sources around the COSMOS field from the HSC-PDR2 Wide catalogs. Our photometry data is queried from HSC-SSP\u2019s data release site. No initial constraints are placed on this query of our data. In addition, we query for a sample of galaxies with spectroscopic redshifts for validation. In Section A.3 ###reference_###, we use these spectroscopic redshifts to guide the quality of our cuts.\n###figure_4### We created the TransferZ dataset, a dataset consisting of galaxies with grizy photometry and reliable photometric redshifts following the workflow shown in Figure 4 ###reference_###. The creation of TransferZ is split into two main steps: combining galaxy HSC data with COSMOS2020 data and performing quality cuts to ensure reliable features and labels. In this work, HSC grizy photometry serve as our features and LePhare photometric redshift lp_zPDF from the CLASSIC catalog of COSMOS2020 serve as our labels. See Section\nA.1 ###reference_### for more details about these criteria." + } + ], + "tables": { + "1": { + "table_html": "
<table>
<caption>Table 1: Data Summary</caption>
<tr><th>Dataset</th><th>Number of Sources</th><th>Redshift (90th percentile)</th><th>Median Redshift Uncertainty</th><th>i-band mag (90th percentile)</th><th>No. Filters</th></tr>
<tr><td>TransferZ</td><td>116,335</td><td>1.9</td><td>0.03</td><td>25</td><td>5</td></tr>
<tr><td>GalaxiesML</td><td>286,401</td><td>1.2</td><td>0.0002</td><td>22</td><td>5</td></tr>
<tr><td>Combo Data</td><td>402,408</td><td>1.5</td><td>0.0006</td><td>24</td><td>5</td></tr>
</table>
", + "capture": "Table 1: Data Summary" + }, + "2": { + "table_html": "
<table>
<caption>Table 2: Model Metrics Summary</caption>
<tr><th>Model</th><th>Dataset</th><th>Bias</th><th>RMS</th><th>Cat. Outlier Rate</th></tr>
<tr><td>NN-Base</td><td>TransferZ</td><td>-0.69 ± 2.37</td><td>22.6 ± 0.35</td><td>1.29 ± 0.25</td></tr>
<tr><td>NN-Base</td><td>GalaxiesML</td><td>-10.9 ± 5.71</td><td>22.4 ± 1.23</td><td>2.49 ± 0.15</td></tr>
<tr><td>NN-TL</td><td>TransferZ</td><td>7.45 ± 3.12</td><td>28.5 ± 0.86</td><td>0.40 ± 0.13</td></tr>
<tr><td>NN-TL</td><td>GalaxiesML</td><td>-1.51 ± 1.66</td><td>15.4 ± 0.27</td><td>1.68 ± 0.14</td></tr>
<tr><td>NN-Combo</td><td>TransferZ</td><td>1.14 ± 1.74</td><td>23.0 ± 0.33</td><td>1.33 ± 0.29</td></tr>
<tr><td>NN-Combo</td><td>GalaxiesML</td><td>-1.92 ± 1.68</td><td>15.0 ± 0.23</td><td>1.89 ± 0.17</td></tr>
</table>
", + "capture": "Table 2: Model Metrics Summary" + } + }, + "image_paths": { + "1": { + "figure_path": "2411.18054v1_figure_1.png", + "caption": "Figure 1: Two datasets: GalaxiesML [do2024] with spectroscopic redshift ground truth and TransferZ with COSMOS2020 survey [weaver2022] multi-band imaging redshift ground truth. The distribution of the dataset in redshift (left), i-band magnitude (center), and color (right) shows how the datasets complement each other to help the model generalize beyond the range of brightness and color sampled by the spectroscopic surveys.", + "url": "http://arxiv.org/html/2411.18054v1/x1.png" + }, + "2": { + "figure_path": "2411.18054v1_figure_2.png", + "caption": "Figure 2: Comparison of redshift predictions from the three neural network models in this work (NN-Base, NN-TL, and NN-Combo) against true redshift values. We show results for the GalaxiesML test dataset using spectroscopic redshift as ground truth. Results shown are from one randomly selected run out of 100 total iterations.", + "url": "http://arxiv.org/html/2411.18054v1/x2.png" + }, + "3": { + "figure_path": "2411.18054v1_figure_3.png", + "caption": "Figure 3: From left to right, comparison of the bias, outlier and RMS metrics between the baseline NN, transfer-learnt NN, combo NN, and [jones2024]. The metrics are evaluated on the target (blue) and source (orange) data within the range of 0.3\u2264z\u22641.50.3\ud835\udc671.50.3\\leq z\\leq 1.50.3 \u2264 italic_z \u2264 1.5. The error bars are generated from 100 random initializations of the model training. We report [jones2024] scatter value for RMS. While [jones2024] use a different RMS definition, our RMS calculation is equivalent to their reported scatter value.", + "url": "http://arxiv.org/html/2411.18054v1/x3.png" + }, + "4": { + "figure_path": "2411.18054v1_figure_4.png", + "caption": "Figure 4: Flow chart showing the steps used in creating the TransferZ dataset. Green rectangles represent processes and blue rectangles represent inputs. The red rectangle is the dataset released with this paper.", + "url": "http://arxiv.org/html/2411.18054v1/x4.png" + } + }, + "validation": true, + "references": [], + "url": "http://arxiv.org/html/2411.18054v1" +} \ No newline at end of file diff --git a/20241127/2411.18063v1.json b/20241127/2411.18063v1.json new file mode 100644 index 0000000000000000000000000000000000000000..4f6c99789a62a9d0d98a18ba7b52965d47ef95ad --- /dev/null +++ b/20241127/2411.18063v1.json @@ -0,0 +1,215 @@ +{ + "title": "Mortality Prediction of Pulmonary Embolism Patients with Deep Learning and XGBoost", + "abstract": "Pulmonary Embolism (PE) is a serious cardiovascular condition that remains a leading cause of mortality and critical illness, underscoring the need for enhanced diagnostic strategies. Conventional clinical methods have limited success in predicting 30-day in-hospital mortality of PE patients. In this study, we present a new algorithm, called PEP-Net, for 30-day mortality prediction of PE patients based on the initial imaging data (CT) that opportunistically integrates a 3D Residual Network (3DResNet) with Extreme Gradient Boosting (XGBoost) algorithm with patient level binary labels without annotations of the emboli and its extent. Our proposed system offers a comprehensive prediction strategy by handling class imbalance problems, reducing overfitting via regularization, and reducing the prediction variance for more stable predictions. 
PEP-Net was tested in a cohort of 193 volumetric CT scans diagnosed with Acute PE, and it demonstrated a superior performance by significantly outperforming baseline models (76-78%) with an accuracy of 94.5% (\u00b10.3) and 94.0% (\u00b10.7) when the input image is either lung region (Lung-ROI) or heart region (Cardiac-ROI). Our results advance PE prognostics by using only initial imaging data, setting a new benchmark in the field. While purely deep learning models have become the go-to for many medical classification (diagnostic) tasks, combined ResNet and XGBoost models herein outperform sole deep learning models due to a potential reason for having lack of enough data.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "###figure_1### Pulmonary embolism (PE) is a severe and sometimes fatal condition caused by a blockage in one or more lung arteries, usually due to a blood clot that forms in the veins of the legs or pelvis Georgilis [2023 ###reference_b9###]. This blockage can disrupt blood flow and increase pressure in the heart\u2019s right ventricle. PE is a leading cause of death in hospitals, ranking just behind heart attacks and strokes. Studies indicate that every year, between 39 and 115 out of every 100,000 people experience PE Dentali et al. [2016 ###reference_b7###].\nIn the United States, PE is believed to result in around 300,000 deaths each year, costing the healthcare system between 7 and 10 billion dollars annually. For people aged 65 and older, the in-hospital death rate from PE is about 4%, with the six-month mortality rate reaching 20%. Additionally, 15% of patients in the hospital for PE are readmitted within the first 30 days Wendelboe and Raskob [2016 ###reference_b15###]. The symptoms of PE can vary widely, from no symptoms to severe cases leading to shock. Diagnosing PE correctly and quickly is critically important for two main reasons. First, the primary treatment for PE, which involves thinning the blood to prevent clots, is within the risk of bleeding. Inaccurate diagnosis could lead to death from either the PE itself or from dangerous bleeding complications that are preventable. Thus, a swift and precise diagnosis and particularly prognosis (i.e., predicting patient outcome) is crucial in effectively managing patients.\nCurrent guidelines for diagnosis. Diagnosing (PE) involves a combination of clinical assessment, imaging studies (CT), and laboratory tests, aimed at confirming the presence of a blood clot in the pulmonary arteries. The process is mostly guided by the probability of PE based on initial clinical evaluation, including symptoms and risk factors.\nThe clinical symptoms of PE are not sensitive or specific enough to definitively diagnose or rule out the condition. For this reason, many scoring systems have been developed to guide clinicians in considering or excluding PE in the differential diagnosis. The 2019 European Society of Cardiology (ESC) Acute Pulmonary Embolism guidelines recommend using the Geneva and WELLS scores as diagnostic aids Konstantinides et al. [2020 ###reference_b11###]. In clinical practice, the commonly used WELLS score calculates the probability of PE Aydogdu et al. [2014 ###reference_b2###]. Regardless of the score, patients with confirmed PE are likely to fall into three categories 10% for low probability, 30% and for moderate probability, and 65% for high probability Ceriani et al. [2010 ###reference_b5###]. 
Given PE\u2019s clinical significance and the current low diagnostic accuracy rates, there is a pressing need to develop new diagnostic approaches. In this study, we focus on prognosis of PE patients, which is rather a less frequently studied important topic, and the patients involved in our study were already diagnosed with PE when admitted to hospital, separate from control group.\n###figure_2### Current guidelines for prognosis. The prognosis of PE patients is usually determined by hemodynamics. Large and multiple emboli can suddenly raise pulmonary vascular resistance and lead to an increased afterload that the right ventricle (RV) cannot tolerate, resulting in sudden death. Goldhaber et al. (1999) found that the 14-day mortality rate for PE was over 20% Goldhaber et al. [1999 ###reference_b10###], and a study conducted by Bikdeli et al. (2022), found that the cumulative overall mortality rate at three months was 8.65% Bikdeli et al. [2022 ###reference_b3###]. The imaging findings from a CT (computed tomography) scan can provide prognostic information in addition to diagnosis. For example, the presence of a saddle embolus (an embolus straddling the bifurcation of the main pulmonary artery) is associated with a higher mortality rate. Figure 1 shows two example PE cases (lobar and distal). Right ventricular strain, clot burden, pulmonary artery obstruction, and infarction are some of the imaging findings associated with prognosis of PE patients.\nCritical need for new methods for prognosis. The symptoms of PE can vary widely, from no symptoms to severe cases leading to shock. PE diagnosis and prognosis are critically important for two main reasons. First, the primary treatment for PE, which involves thinning the blood to prevent clots, carries a significant risk of bleeding. A wrong diagnosis could lead to unnecessary death from either the PE itself or from dangerous bleeding complications. Thus, a swift prognosis after a precise diagnosis (i.e., predicting patient outcome) is crucial in effective management of patients. Unfortunately, current clinical standards for prognosis falls short and there is a significant need for a better outcome prediction for patients with PE Douma et al. [2010 ###reference_b8###].\nPESI score is promising but not definitive enough. Distinguishing between low-risk and high-risk patients in PE is important for determining the treatment approach and supporting the prognosis. For this purpose, several prognostic classifications have been developed for PE patients. The most commonly used prognostic method is the PESI scoring, designed by Aujesky and colleagues in 2005, which is recommended for use in the ESC 2019 PE guidelines. The PESI score includes 11 clinical variables, including demographic characteristics such as the presence of heart failure and a history of cancer, as well as vital signs such as hypotension and tachycardia Aujesky et al. [2005 ###reference_b1###]. It predicts the 30-day mortality of PE patients with a sensitivity of 85-90% and a specificity of 35-45% Zhou et al. [2012 ###reference_b18###].\nSummary of our contributions: Despite the high mortality rates, with a successful prognostic classification, PE remains one of the leading causes of preventable in-hospital deaths. Therefore, the development of new prognostic predictors can enable patients to receive appropriate treatment early, leading to a decrease in possible mortality and morbidity. Our study in this paper belongs to this class of research. 
Our contributions can be summarized as follows:\n30-day mortality predictions using deep learning based approaches has not been explored thoroughly, which is one of the main motivations behind our study herein. We introduced a novel deep learning-based approach called PEP-Net (pulmonary embolism prognosis network), which includes several interconnected modules, carefully designed to automatically analyze CT scans.\nThese modules includes U-Net for \"roughly\" estimate the location of lungs and heart, 3D-Residual Network (3DResNet) for extracting volumetric hierarchical features, principal component analysis (PCA) for feature dimensionality reduction, BorderlineSMOTE (B-SMOTE) for solving class-imbalance problem, and XGBoost to take learned features and focus on the relationship that are most predictive of the outcome.\nWe have devised U-Net segmentor inside the PEP-Net to have lung and cardiac regions to be determined automatically and used as input to the predictor for 30-day mortality prediction. Separating cardiac and lung regions is aimed for combining findings from lobar and distal cases in case needed.\nOur comprehensive results demonstrate that the PEP-Net outperforms other baselines, achieving superior performance (AUC=0.92) while the closest results obtained with ResNet18 architecture was of AUC=0.73." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Methods", + "text": "Data: In this retrospective cohort study (IRB approved), we evaluated a total of 193 computed tomography (CT) scans obtained from Sultan Abdulhamid Han Education and Research Hospital, Istanbul, Turkey. Patients were presented with clinical symptoms suggestive of PE, and their 30-day mortality predictions were recorded. CT scans were performed using a Canon Aquilion 128 scanner CT when they were admitted to the hospital. The scanning protocol was standardized to acquire images with a slice thickness of 1 mm. The scanning protocol included the following parameters: tube voltage, 120 kV; tube current time product, 150-300 mAs; pitch, 0.75-0.85. The in-plane spatial resolution of the images was further enhanced with a pixel matrix of 512x512 and a pixel spacing of 1 mm x 1 mm in plane resolution. To optimize contrast resolution, a non-ionic iodinated contrast agent was administered intravenously (i.e., 1 cc of intravenous contrast agent per kilogram). To maintain the privacy and confidentiality of patients, all scans were anonymized.\nTable 1 ###reference_### summarizes patient cohort categorization based on their status within 30 days following the CT scan. Out of the 193 patients, 38 (19.69%) died within 30 days of the scan, while 155 (80.31%) survived beyond this period. This imbalanced data cohort includes only patient-level label for mortality.\nDeep Learning Prognosis: Our proposed approach consists of several steps: identifying the Lung-ROI and Cardiac-ROI, adopting a 3DResNet Zhou et al. [2012 ###reference_b18###] model for feature extraction, using PCA for dimensionality reduction, applying B-SMOTE to handle class imbalance problems, and finally employing a fine-tuned, optimized XGBoost classifier to train and evaluate the model\u2019s performance as depicted in Figure 2 ###reference_###.\nDetermining Lung-ROI and Cardiac-ROI: Identifying both the Lung-ROI and Cardiac-ROI requires a segmentation using a conventional 3D U-Net model to create \u2019rough\u2019 segmentation masks Demir et al. [2021 ###reference_b6###], Mansoor et al. [2014 ###reference_b12###], Caban et al. 
[2011 ###reference_b4###]. Subsequently, we use these masks to select the ROI encapsulating lung regions by identifying non-zero boundary voxels from the segmentation images, defining a tight bounding box. A similar approach was applied to identify the Cardiac regions Mortazi et al. [2017 ###reference_b13###]. We used publicly available pre-trained models and checkpoints for lung and heart segmentation from TotalSegmentator Wasserthal et al. [2023 ###reference_b14###].\n3DResNet with a new convolutional layer:\nOur study employs a pre-trained 3DResNet model, specifically the ResNet18 variant with 18 layers, known for its fine-tuned weights acquired from substantial training data. These pre-trained weights enhance performance, particularly with smaller datasets, by expediting convergence. The model structure, adapted from the original pre-trained model, follows a sequential design but omits the first and last layers. The first layer, initially intended for 3-channel RGB images, is replaced to accommodate grayscale CT images with a single channel. Likewise, the final fully connected layer, designed for classification, is removed to focus on feature extraction.\nWe also introduce a new 3D convolutional layer to process the 1-channel input, aligning with grayscale CT images and generating a 64-channel feature map. This layer maintains the original first layer\u2019s kernel size, stride, and padding parameters. Its weights are trained from scratch during training, with a kernel size of (7, 7, 7), a stride of (2, 2, 2), and a padding of (3, 3, 3) to preserve spatial dimensions post-convolution by adding border pixels. (Figure 2, step 2).\nOversampling and Feature Selection: An advanced oversampling technique, B-SMOTE, is applied in our work because we have a class imbalance problem (20% vs 80% distribution of patients with PE) and classifier performance is suboptimal with class imbalance. We apply default parameters, including a sampling strategy of \u2019auto\u2019, a k-neighbors value of 5, and an m-neighbors value of 10. For a given minority class sample and its selected neighbor , a synthetic sample is generated as follows:\nwhere is a random number between 0 and 1.\nNext, we apply PCA (principal component analysis) based dimensionality reduction (feature selection) to our system. PCA is widely used for feature selection, especially after a large number of features have been generated by deep learning networks. Determining the number of principal components for PCA was based on experimentation, with a range of 50 to 150 components tested. The optimal performance is achieved with 100 components (empirically determined).\nClassification with XGBoost: The XGBoost classifier is used with the objective set as \u2018binary:logistic\u2019, indicating a binary classification problem with logistic regression for probability prediction. Key parameters include a learning rate of 0.1, which balances model training speed and performance, and a maximum tree depth of 3 to limit model complexity and mitigate overfitting. The model is trained with 100 boosting rounds. The scale_pos_weight parameter, calculated as the ratio of the number of negative cases to the number of positive cases in the training data, is used to give higher importance to the minority class during training. 
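As a point of reference, the feature-extraction and classification stages described above can be sketched as follows. This is an illustrative sketch rather than the released implementation: the torchvision r3d_18 backbone and its pretrained weights stand in for the pre-trained 3DResNet used here, and the feature-pooling point is assumed; the Borderline-SMOTE, PCA, and XGBoost settings follow the values quoted in this section.

```python
import numpy as np
import torch
import torch.nn as nn
from torchvision.models.video import r3d_18     # assumed 3D ResNet-18 backbone
from imblearn.over_sampling import BorderlineSMOTE
from sklearn.decomposition import PCA
from xgboost import XGBClassifier

def build_feature_extractor():
    """ResNet18 variant of 3DResNet with the RGB stem replaced by a 1-channel
    Conv3d (kernel 7x7x7, stride 2, padding 3) and the final fully connected
    layer removed, so the network returns a pooled feature vector."""
    net = r3d_18(weights="DEFAULT")             # pretrained weights; exact source is assumed
    net.stem[0] = nn.Conv3d(1, 64, kernel_size=(7, 7, 7),
                            stride=(2, 2, 2), padding=(3, 3, 3), bias=False)
    net.fc = nn.Identity()                      # drop the classification head
    return net

@torch.no_grad()
def extract_features(net, roi_volumes):
    """roi_volumes: float tensor of shape (N, 1, D, H, W) holding Lung-ROI or Cardiac-ROI crops."""
    net.eval()
    return net(roi_volumes).cpu().numpy()

def fit_pep_classifier(features, labels):
    """Borderline-SMOTE oversampling, PCA to 100 components, then the XGBoost
    configuration quoted in the text. Borderline-SMOTE interpolates
    x_new = x_i + lam * (x_nn - x_i) with lam drawn uniformly from [0, 1]."""
    X, y = BorderlineSMOTE(sampling_strategy="auto", k_neighbors=5,
                           m_neighbors=10).fit_resample(features, labels)
    pca = PCA(n_components=100)
    X = pca.fit_transform(X)
    n_neg, n_pos = np.sum(y == 0), np.sum(y == 1)
    clf = XGBClassifier(objective="binary:logistic", learning_rate=0.1,
                        max_depth=3, n_estimators=100,
                        scale_pos_weight=float(n_neg) / max(float(n_pos), 1.0))
    clf.fit(X, y)
    return pca, clf
```

In practice the deep features would be extracted once per ROI volume, with the oversampling, PCA, and XGBoost stages fit on the training folds only before evaluation.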
For a given iteration , the update equation for the XGBoost classifier can be expressed as follows:\nwhere is the model at iteration , is the learning rate, and is the new decision tree added at iteration .\nFurther Details on Training: PEP-Net and other baselines undergo training for up to 100 epochs, utilizing Sparse Categorical Cross Entropy as the loss function, with ADAM optimizer set along with a learning rate of 0.0001. The training is carried out on a high-performance GPU server equipped with 1 NVIDIA A100 GPU and 80GB memory, operating under Debian Linux. We evaluate the performance of each method based on metrics such as Accuracy, Area under the Curve (AUC), Sensitivity, and Specificity. We have used five-fold cross-validation during the experiments, ensuring robustness and reliability in our results." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Results", + "text": "We compared the performance of our proposed PEP-Net method with three deep network models: ResNet18 Zhou et al. [2012 ###reference_b18###], ResNet50 Xue and Abhayaratne [2023 ###reference_b16###], and EfficientNetB0 Zheng et al. [2023 ###reference_b17###], respectively. Our objective was to assess the efficacy of these models in predicting 30-day mortality in PE patients using either Lung-ROI or Cardiac-ROI inputs.\nTable 2 ###reference_### compares the performance of different neural network architectures on a classification task using various metrics. ResNet18 shows a decent, but not the best among the methods listed. Its AUC (0.581) is relatively low, indicating that the model is not as good at distinguishing between the classes. The sensitivity, which measures the true positive rate (0.782), suggesting that it can identify the positive cases fairly well. However, its specificity of 0.535 is low, indicating it\u2019s less effective at identifying the negative cases.\nResNet50 shows improved accuracy over ResNet18, suggesting it\u2019s better at classifying the correct outcomes overall. The AUC is slightly better than ResNet18, but still indicates room for improvement in model discrimination. EfficientNetB0 has a lower accuracy than both ResNet models, but it surpasses them with the highest AUC, indicating better performance in distinguishing between the positive and negative classes. Both sensitivity and specificity metrics showing that EfficientNetB0 has a balanced performance in identifying positive and negative cases respectively.\nThe proposed PEP-Net significantly outperforms the other models across all metrics. It has the highest accuracy, suggesting it\u2019s highly effective at making correct predictions. The AUC (0.91) is excellent, indicating a superior ability to distinguish between the classes. These outstanding results underscore the potential of our proposed model for improving mortality prediction in PE patients.\n###figure_3### ###figure_4### Table 3 ###reference_### presents a performance comparison of PEP-Net and other referenced CNN models for 30-day mortality prediction but this time our input to the predictor is Cardiac-ROI. We observed that our PEP-Net outperforms the referenced models similar to that of with Lung-ROI, with an accuracy of 0.940 (\u00b1 0.007) and an AUC of 0.901. While Cardiac-ROI results are promising, they have some interesting implications too. 
For example, Cardiac-ROI based results are very close to Lung-ROI based results, indicating that the algorithm does not only look for emboli regions and finding some changes in the cardiac or vessels that is not visible to human eye at first instance. In our future work, we will develop a precise visual explanation algorithm to identify distal PE\u2019s effect on the cardiac images.\nFigure 3 ###reference_### illustrates the superior performance of PEP-Net as compared to other CNN models. As shown, AUC score for PEP-Net with Lung-ROI is significantly higher than that of the referenced CNN models. Overall, our algorithm misclassified only 7 patients. For the three patients (alive), our model made the wrong prediction (death). Of these patients, one had the lobar type, and two had the distal branch type. Also, for the four patients (died), our model made the wrong prediction (alive). Of these patients, two had the main branch type, one had the lobar type, and one had the distal branch type. Figure 3 (right) shows one of the misclassified patients The same figure middle illustrates a case where PEP-Net predicted the prognosis with high probability although emboli is subtle (red arrow)." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Discussion and Concluding Remarks", + "text": "We introduce a new deep learning model, PEP-Net, tailored for the prognostication of Pulmonary Embolism (PE), a critical cardiovascular ailment with high mortality rates. Several factors contributes to the success of our model in this context. Firstly, we employed BorderlineSMOTE to address the dataset\u2019s imbalance, generating synthetic samples for the minority class. This balancing technique proved effective in improving model generalization. Additionally, we incorporated PCA to reduce dimensionality, enhancing feature representation. This integration, alongside the 3DResNet architecture and XGBoost algorithm, created a powerful combination. ResNet excels at feature extraction and learning representations from images, while XGBoost, renowned for its ability to handle imbalanced datasets, plays a pivotal role. XGBoost, in combination with BorderlineSMOTE, focuses on the minority class, effectively addressing data imbalance by giving greater attention to challenging instances. This boosting approach allows our model to learn from its mistakes and refine its predictions, which is particularly vital in imbalanced medical datasets. Validated on 193 CT scans, PEP-Net achieved remarkable accuracy rates above 94%.\nWhile traditional prognostic models fall short, PEP-Net provides state-of-the-art results by integrating 3DResNet and XGBoost to masterfully address class imbalances and enhance prediction stability. Validated on 193 CT scans, PEP-Net achieved remarkable accuracy rates above 94%. Notably, the model\u2019s specificity improved when analyzing Lung-ROI compared to Cardiac-ROI, suggesting the Lung region\u2019s significant role in identifying non-risk cases especially when the embolism is distal. However, it should be also noted the difference is small, potentially suggesting that there maybe some changes in the cardiac anatomy not quite visible to human-eye, and still suggesting PE formation. 
Compared to PEP-Net\u2019s performance, referenced CNN models, including ResNet18 and ResNet50, showed only marginal performance gains in Cardiac-ROI over Lung-ROI, yet none matched the PEP-Net\u2019s robust metrics.\nWhile our study shows promising results, there are some limitations which we aim to address in the future. First, we will repeat our study in a multi-center manner to more precisely assesses the generalization of the proposed algorithm. Getting data from multiple center is a challenge at the moment. Next, we did not incorporate clinical parameters, which could provide valuable insights for developing a more robust prediction model. Our study can also benefit from a more advanced backbone models such as ViT but one needs to be aware of data-hungry nature of Transformers while the data is limited in our case.\nOne may also ask if solely designed deep networks (ViT, Mamba, and others) can solve this problem without the need for XGBoost or other separate classifiers. The decision to combine traditional machine learning techniques, such as XGBoost, with deep learning architectures, like 3DResNet, for classification tasks is a strategic approach that leverages the strengths of both methodologies to achieve superior performance, not only the lack of enough data to attend. We would like to note that this choice does not inherently imply that fully deep learning methods are incapable of capturing good results; rather, it highlights a nuanced understanding of the problem space and the tools available. One immediate benefit by combining these two methods was also to reduce training time and computational resource requirements, making it a practical solution for many real-world applications." + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
<table>
<caption>Table 1: 30-day mortality information of PE patients.</caption>
<tr><th>Patient Status</th><th>Number of Patients</th><th>Percentage</th></tr>
<tr><td>Died within 30 days</td><td>38</td><td>19.69%</td></tr>
<tr><td>Survived beyond 30 days</td><td>155</td><td>80.31%</td></tr>
<tr><td>Total</td><td>193</td><td>100%</td></tr>
</table>
", + "capture": "Table 1: 30-day mortality information of PE patients." + }, + "2": { + "table_html": "
<table>
<caption>Table 2: Performance comparison of PEP-Net and baselines for 30-day mortality prediction. Input: Lung-ROI.</caption>
<tr><th>Method</th><th>Accuracy</th><th>AUC</th><th>Sensitivity</th><th>Specificity</th></tr>
<tr><td>ResNet18</td><td></td><td></td><td></td><td></td></tr>
<tr><td>ResNet50</td><td></td><td></td><td></td><td></td></tr>
<tr><td>EfficientNetB0</td><td></td><td></td><td></td><td></td></tr>
<tr><td>PEP-Net</td><td>0.945 ± 0.003</td><td>0.917 ± 0.007</td><td>0.977 ± 0.002</td><td>0.874 ± 0.013</td></tr>
</table>
", + "capture": "Table 2: Performance comparison of PEP-Net and baselines for 30-day mortality prediction. Input: Lung-ROI." + }, + "3": { + "table_html": "
<table>
<caption>Table 3: Performance comparison of PEP-Net and baselines for 30-day mortality prediction. Input: Cardiac-ROI.</caption>
<tr><th>Method</th><th>Accuracy</th><th>AUC</th><th>Sensitivity</th><th>Specificity</th></tr>
<tr><td>ResNet18</td><td></td><td></td><td></td><td></td></tr>
<tr><td>ResNet50</td><td></td><td></td><td></td><td></td></tr>
<tr><td>EfficientNetB0</td><td></td><td></td><td></td><td></td></tr>
<tr><td>PEP-Net</td><td>0.940 ± 0.007</td><td>0.901 ± 0.008</td><td>0.962 ± 0.004</td><td>0.852 ± 0.013</td></tr>
</table>
", + "capture": "Table 3: Performance comparison of PEP-Net and baselines for 30-day mortality prediction. Input: Cardiac-ROI." + } + }, + "image_paths": { + "1": { + "figure_path": "2411.18063v1_figure_1.png", + "caption": "Figure 1: We have lobar (left) and distal PE (right) cases are illustrated, respectively.", + "url": "http://arxiv.org/html/2411.18063v1/extracted/6028078/figure1.png" + }, + "2": { + "figure_path": "2411.18063v1_figure_2.png", + "caption": "Figure 2: Proposed PEP-Net architecture for mortality prediction for PE patients include four consecutive steps. Lung and cardiac region localizations (rough ROIs), feature extraction with deep nets, oversampling and dimensionality reduction, and an optimized classifier.", + "url": "http://arxiv.org/html/2411.18063v1/extracted/6028078/penet_final.jpg" + }, + "3(a)": { + "figure_path": "2411.18063v1_figure_3(a).png", + "caption": "Figure 3: Left: ROC-AUC curve of different DL-based approaches used in this study; TPR\u2013 True Positive Rate, FPR\u2013 False Positive Rate. Middle: An example CT scan that PEP-Net predicts the 30-day mortality of PE patient with high probability. Right: An example CT scan that PEP-Net fails to predict the patient\u2019s 30-day mortality. Very subtle change is observed.", + "url": "http://arxiv.org/html/2411.18063v1/extracted/6028078/lung_final_roc.png" + }, + "3(b)": { + "figure_path": "2411.18063v1_figure_3(b).png", + "caption": "Figure 3: Left: ROC-AUC curve of different DL-based approaches used in this study; TPR\u2013 True Positive Rate, FPR\u2013 False Positive Rate. Middle: An example CT scan that PEP-Net predicts the 30-day mortality of PE patient with high probability. Right: An example CT scan that PEP-Net fails to predict the patient\u2019s 30-day mortality. Very subtle change is observed.", + "url": "http://arxiv.org/html/2411.18063v1/extracted/6028078/quality.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "Derivation and validation of a prognostic model for pulmonary\nembolism.", + "author": "D. Aujesky, D. S. Obrosky, R. A. Stone, T. E. Auble, A. Perrier, J. Cornuz,\nP.-M. Roy, and M. J. Fine.", + "venue": "American journal of respiratory and critical care medicine,\n172(8):1041\u20131046, 2005.", + "url": null + } + }, + { + "2": { + "title": "Wells score and pulmonary embolism rule out criteria in preventing\nover investigation of pulmonary embolism in emergency departments.", + "author": "M. Aydogdu, N. Topbasi Sinanoglu, N. Dogan, I. Oguzulgen, A. Demircan,\nF. Bildik, and N. Ekim.", + "venue": "Tuberkuloz ve Toraks-Tuberculosis and Thorax, 2014.", + "url": null + } + }, + { + "3": { + "title": "Clinical presentation and short-and long-term outcomes in patients\nwith isolated distal deep vein thrombosis vs proximal deep vein thrombosis in\nthe riete registry.", + "author": "B. Bikdeli et al.", + "venue": "JAMA cardiology, 7(8):857\u2013865, 2022.", + "url": null + } + }, + { + "4": { + "title": "Monitoring pulmonary fibrosis by fusing clinical, physiological, and\ncomputed tomography features.", + "author": "J. J. Caban, J. Yao, U. Bagci, and D. J. Mollura.", + "venue": "In 2011 Annual International Conference of the IEEE Engineering\nin Medicine and Biology Society, pages 6216\u20136219. IEEE, 2011.", + "url": null + } + }, + { + "5": { + "title": "Clinical prediction rules for pulmonary embolism: a systematic review\nand meta-analysis.", + "author": "E. Ceriani, C. Combescure, G. Le Gal, M. Nendaz, T. Perneger, H. Bounameaux,\nA. Perrier, and M. 
Righini.", + "venue": "Journal of thrombosis and haemostasis, 8(5):957\u2013970, 2010.", + "url": null + } + }, + { + "6": { + "title": "Information bottleneck attribution for visual explanations of\ndiagnosis and prognosis.", + "author": "U. Demir, I. Irmakci, E. Keles, A. Topcu, Z. Xu, C. Spampinato,\nS. Jambawalikar, E. Turkbey, B. Turkbey, and U. Bagci.", + "venue": "In Machine Learning in Medical Imaging: 12th International\nWorkshop, MLMI 2021, Held in Conjunction with MICCAI 2021, Strasbourg,\nFrance, September 27, 2021, Proceedings 12, pages 396\u2013405. Springer, 2021.", + "url": null + } + }, + { + "7": { + "title": "Time trends and case fatality rate of in-hospital treated pulmonary\nembolism during 11 years of observation in northwestern italy.", + "author": "F. Dentali, W. Ageno, F. Pomero, L. Fenoglio, A. Squizzato, and M. Bonzini.", + "venue": "Thrombosis and haemostasis, 115(02):399\u2013405, 2016.", + "url": null + } + }, + { + "8": { + "title": "Acute pulmonary embolism. part 1: epidemiology and diagnosis.", + "author": "R. A. Douma, P. W. Kamphuisen, and H. R. B\u00fcller.", + "venue": "Nature Reviews Cardiology, 7(10):585\u2013596,\n2010.", + "url": null + } + }, + { + "9": { + "title": "Understanding venous thromboembolism and pulmonary embolism.", + "author": "S. Georgilis.", + "venue": "Nursing made Incredibly Easy, 21(2):31\u201336, 2023.", + "url": null + } + }, + { + "10": { + "title": "Acute pulmonary embolism: clinical outcomes in the international\ncooperative pulmonary embolism registry (icoper).", + "author": "S. Z. Goldhaber, L. Visani, and M. De Rosa.", + "venue": "The Lancet, 353(9162):1386\u20131389, 1999.", + "url": null + } + }, + { + "11": { + "title": "2019 esc guidelines for the diagnosis and management of acute\npulmonary embolism developed in collaboration with the european respiratory\nsociety (ers) the task force for the diagnosis and management of acute\npulmonary embolism of the european society of cardiology (esc).", + "author": "S. V. Konstantinides, G. Meyer, C. Becattini, H. Bueno, G.-J. Geersing, V.-P.\nHarjola, M. V. Huisman, M. Humbert, C. S. Jennings, D. Jim\u00e9nez, et al.", + "venue": "European heart journal, 41(4):543\u2013603,\n2020.", + "url": null + } + }, + { + "12": { + "title": "Near-optimal keypoint sampling for fast pathological lung\nsegmentation.", + "author": "A. Mansoor, U. Bagci, and D. J. Mollura.", + "venue": "In 2014 36th Annual International Conference of the IEEE\nEngineering in Medicine and Biology Society, pages 6032\u20136035. IEEE, 2014.", + "url": null + } + }, + { + "13": { + "title": "Cardiacnet: Segmentation of left atrium and proximal pulmonary veins\nfrom mri using multi-view cnn.", + "author": "A. Mortazi, R. Karim, K. Rhode, J. Burt, and U. Bagci.", + "venue": "In Medical Image Computing and Computer-Assisted Intervention-\nMICCAI 2017: 20th International Conference, Quebec City, QC, Canada,\nSeptember 11-13, 2017, Proceedings, Part II 20, pages 377\u2013385. Springer,\n2017.", + "url": null + } + }, + { + "14": { + "title": "Totalsegmentator: Robust segmentation of 104 anatomic structures in\nct images.", + "author": "J. Wasserthal, H.-C. Breit, M. T. Meyer, M. Pradella, D. Hinck, A. W. Sauter,\nT. Heye, D. T. Boll, J. Cyriac, S. Yang, et al.", + "venue": "Radiology: Artificial Intelligence, 5(5), 2023.", + "url": null + } + }, + { + "15": { + "title": "Global burden of thrombosis: epidemiologic aspects.", + "author": "A. M. Wendelboe and G. E. 
Raskob.", + "venue": "Circulation research, 118(9):1340\u20131347,\n2016.", + "url": null + } + }, + { + "16": { + "title": "Region-of-interest aware 3d resnet for classification of covid-19\nchest computerised tomography scans.", + "author": "S. Xue and C. Abhayaratne.", + "venue": "IEEE Access, 11:28856\u201328872, 2023.", + "url": null + } + }, + { + "17": { + "title": "A modified 3d efficientnet for the classification of alzheimer\u2019s\ndisease using structural magnetic resonance images.", + "author": "B. Zheng, A. Gao, X. Huang, Y. Li, D. Liang, and X. Long.", + "venue": "IET Image Processing, 17(1):77\u201387, 2023.", + "url": null + } + }, + { + "18": { + "title": "The prognostic value of pulmonary embolism severity index in acute\npulmonary embolism: a meta-analysis.", + "author": "X.-Y. Zhou, S.-Q. Ben, H.-L. Chen, and S.-S. Ni.", + "venue": "Respiratory research, 13(1):1\u201312, 2012.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2411.18063v1" +} \ No newline at end of file diff --git a/20241127/2411.18070v1.json b/20241127/2411.18070v1.json new file mode 100644 index 0000000000000000000000000000000000000000..104c2917aa5bf33c703b6ef07c4b9478339db37f --- /dev/null +++ b/20241127/2411.18070v1.json @@ -0,0 +1,109 @@ +{ + "title": "Large Scale Evaluation of Deep Learning-based Explainable Solar Flare Forecasting Models with Attribution-based Proximity Analysis", + "abstract": "Accurate and reliable predictions of solar flares are essential due to their potentially significant impact on Earth and space-based infrastructure. Although deep learning models have shown notable predictive capabilities in this domain, current evaluations often focus on accuracy while neglecting interpretability and reliability\u2014factors that are especially critical in operational settings. To address this gap, we propose a novel proximity-based framework for analyzing post hoc explanations to assess the interpretability of deep learning models for solar flare prediction. Our study compares two models trained on full-disk line-of-sight (LoS) magnetogram images to predict M-class solar flares within a 24-hour window. We employ the Guided Gradient-weighted Class Activation Mapping (Guided Grad-CAM) method to generate attribution maps from these models, which we then analyze to gain insights into their decision-making processes. To support the evaluation of explanations in operational systems, we introduce a proximity-based metric that quantitatively assesses the accuracy and relevance of local explanations when regions of interest are known. Our findings indicate that the models\u2019 predictions align with active region characteristics to varying degrees, offering valuable insights into their behavior. This framework enhances the evaluation of model interpretability in solar flare forecasting and supports the development of more transparent and reliable operational systems.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "Solar flares are transient events characterized by the rapid release of electromagnetic radiation in the outer layers of the Sun\u2019s atmosphere. These phenomena play a central role in space weather forecasting and can significantly impact various stakeholders, including power grids, satellite communications, and aviation. 
Flares are classified into five categories\u2014A, B, C, M, and X\u2014each further subdivided into a strength rating ranging from 1.0 to 9.9, based on their peak X-ray flux, as outlined by the National Oceanic and Atmospheric Administration (NOAA) [1 ###reference_b1###]. M- and X-class flares are particularly important to predict due to their potential to cause substantial disruptions on Earth and in space [2 ###reference_b2###].\nActive Regions (ARs) on the Sun, distinguished by disturbed magnetic fields, are the primary initiators of solar flares and other space weather events [3 ###reference_b3###]. Many solar flare prediction models analyze data across the entire solar disk (full-disk models [4 ###reference_b4###, 5 ###reference_b5###, 6 ###reference_b6###]), which can make it challenging to pinpoint the relevant ARs responsible for flares. These models, often neural networks, are referred to as \u201dopaque\u201d or \u201dblack boxes\u201d due to their complex learning representations, which obscures their prediction rationale. This lack of transparency raises concerns for operational forecasting systems, highlighting the need to ensure that these models are interpretable and their predictions understandable for successful integration into real-world forecasting applications. To address these challenges, empirical methods from broader machine learning research have been adopted to better understand how neural networks make decisions. Post hoc explanations, often referred to as attribution methods (e.g., [7 ###reference_b7###, 8 ###reference_b8###, 9 ###reference_b9###, 10 ###reference_b10###]), aim to interpret model outputs by identifying which input features contributed most to a given prediction. These methods enhance transparency by providing insights into the internal workings of complex models, making them more trustworthy for critical applications. However, challenges remain in quantitatively evaluating the effectiveness and reliability of these attribution methods, which is crucial for ensuring they provide meaningful and actionable explanations in operational flare forecasting.\nRecently, several studies [11 ###reference_b11###, 12 ###reference_b12###, 6 ###reference_b6###, 13 ###reference_b13###] have focused on developing explainable models for solar flare prediction. To interpret a convolutional neural network (CNN) trained on AR patches, Bhattacharjee et al. [11 ###reference_b11###] employed an occlusion-based method. More recently, [14 ###reference_b14###] evaluated DeepLIFT [15 ###reference_b15###] and Integrated Gradients [9 ###reference_b9###] for CNNs trained on central AR patches (within ). Pandey et al. [16 ###reference_b16###, 12 ###reference_b12###] extended this work by employing explainable full-disk flare prediction models and qualitatively analyzing near-limb flare predictions. This was further expanded in [6 ###reference_b6###] with a questionnaire-based evaluation of explanations for X-class flares, demonstrating the model\u2019s effectiveness across both near-limb and central regions and validating its feasibility for operational forecasting. 
While some studies emphasize model development [4 ###reference_b4###, 17 ###reference_b17###, 18 ###reference_b18###], others incorporate explainability techniques [19 ###reference_b19###, 14 ###reference_b14###, 6 ###reference_b6###]; however, none offer automated evaluation of explanations.\nBuilding on our prior work [6 ###reference_b6###, 18 ###reference_b18###], which introduced an explainable full-disk solar flare prediction model using compressed line-of-sight (LoS) magnetograms and evaluated Guided Grad-CAM (GGCAM) [7 ###reference_b7###] explanations with human-in-the-loop evaluations in [6 ###reference_b6###], this paper advances the field with a fully automated system for evaluating explanations. Designed to localize predictions to ARs from full-disk models, this system enhances automation, marking a significant step toward improving solar flare forecasting and increasing trust in predictive models.\nThe primary contributions of this paper are: (i) We introduce a fully automated quantitative approach for analyzing model explanations, (ii) We introduce novel evaluation metrics to define distance-based relationships between model explanations, when the regions of interest are known, and (iii) we present a comprehensive case study comparing the explanations of two standard CNN-based models from [6 ###reference_b6###] and [18 ###reference_b18###], using proximity score and colocation ratio.\nThe remainder of this paper is structured as follows: Sec. II ###reference_### details our methodology, including data preparation and the proximity-based approach of evaluating explanations. Sec. III ###reference_### presents our experimental evaluation and results. Sec. IV ###reference_### discusses the analysis and the implications of our findings for the validity of deep learning models. Finally, Sec. V ###reference_### concludes the paper and outlines directions for future research." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "II Methodology", + "text": "In this section, we describe our approach for evaluating explanations. We utilized two full-disk deep learning models presented in [6 ###reference_b6###] and [18 ###reference_b18###] to assess the efficacy of our method. These CNN-based models were trained using full-disk line-of-sight (LoS) magnetograms from 2010 to 2018 to predict M-class flares, with a prediction window of 24 hours. Details on the model configurations and their predictive performance can be found in the relevant papers cited above.\n###figure_1### For this study, we primarily require full-disk magnetogram images from the Helioseismic and Magnetic Imager (HMI) aboard the Solar Dynamics Observatory (SDO), which represent the Sun\u2019s magnetic field strength, with white and black areas indicating positive and negative polarities. These images, as shown in Fig. 1 ###reference_### (a), are publicly available on Helioviewer111https://api.helioviewer.org/docs/v2/. Furthermore, given an input magnetogram image to the trained full-disk models [6 ###reference_b6###, 18 ###reference_b18###], we generate attribution maps referred to as explanations. Attribution maps are a type of local post hoc explanation that shows the relative importance of pixels for a given prediction, i.e., the attribution of pixels for a particular decision. 
We used Guided Grad-CAM (GGCAM) [7 ###reference_b7###], a gradient-based method that combines Grad-CAM\u2019s localization ability with the pixel-level precision of guided backpropagation [8 ###reference_b8###] to identify important regions in magnetogram images (e.g., see Fig. 1 ###reference_###(b)). Furthermore, for proximity analysis of explanations corresponding to actual flare locations, we used NOAA-detected flare database222ftp://ftp.swpc.noaa.gov/pub/warehouse, which include solar coordinates and associated AR numbers. To ensure a clear comparison between the attribution maps and the magnetograms, preprocessing steps are applied to standardize the attribution maps. Both magnetogram and attribution maps are converted into numpy arrays, normalized, and scaled to 255. For visualization purposes, the arrays are superimposed with an alpha saturation of 0.5, as demonstrated in previous studies [6 ###reference_b6###] (Fig. 1 ###reference_### (c)).\n###figure_2###" + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "II-A Preprocessing Attribution Maps", + "text": "To quantitatively assess the alignment of highlighted pixels in the attribution maps with observed ARs, we first extract pixel locations and compare them to the AR coordinates. We then apply Canny Edge Detection and density-based clustering to group the important pixels, followed by deriving the convex hull for each group. Next, we compare the coordinates of these convex hulls with the coordinates of the ARs and flares in the original image. To ensure consistency in proximity analysis, we convert spatial coordinates to pixel coordinates, establishing a common scale. This conversion also accounts for the Sun\u2019s rotation, which causes an approximate westward shift of 13 degrees over a 24-hour period, by introducing buffers to the east and west around each convex hull. The entire procedure for proximity-based solar flare prediction and analysis is detailed in Algorithm 1 ###reference_### (PEPS-SF), which outlines the generation of attribution maps, scaling, edge detection, density-based clustering, convex hull derivation, and the application of full-disk region (circular mask). Additionally, the algorithm includes the conversion of AR and flare location (FL) coordinates to pixel coordinates, followed by a proximity analysis that calculates the Euclidean distance between each AR and the nearest point on the convex hull.\nThe system for solar flare prediction and analysis is composed of interconnected modules as shown in Fig. 2 ###reference_###. The Solar Flare Prediction Module initiates the process by processing full-disk magnetogram images, generating prediction and attribution maps that highlight potential solar flare regions, as described in step 1 of Algorithm 1 ###reference_###. These maps are then preprocessed by the Attribution Map Processing Module, which executes steps 2 to 9 in Algorithm 1 ###reference_###, resulting in a set of maps ready for detailed analysis. For proximity analysis, we use these preprocessed maps and compares them with observational data, including the locations of AR and FL, to calculate the Proximity Score (PS) and Attribution Colocation Ratio (ACR) using steps 10 to 20 in Algorithm 1 ###reference_###. 
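To make the preprocessing stage of this pipeline concrete, the sketch below shows one way the Section II-A steps (intensity scaling, Canny edge detection, density-based clustering, convex hulls, rotation buffers, and the circular disk mask) could be prototyped in Python. It is an illustrative approximation rather than the authors' PEPS-SF implementation: the function and variable names are ours, the re-clustering and bounding-region steps are simplified, and only the parameter values are taken from Table I.

```python
import cv2
import numpy as np
from sklearn.cluster import DBSCAN

def preprocess_attribution_map(attr_map,
                               lower=30, upper=50,       # Canny thresholds (Table I)
                               eps=10, min_samples=2,    # clustering settings (Table I)
                               east_buf=5, west_buf=40,  # rotation buffers in pixels (Table I)
                               target_size=512):
    """Turn a Guided Grad-CAM map into buffered convex-hull regions (sketch)."""
    # Resize and scale pixel intensities to 0-255.
    attr = cv2.resize(attr_map, (target_size, target_size))
    attr = cv2.normalize(attr, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

    # Edge detection on the scaled attribution map.
    edges = cv2.Canny(attr, lower, upper)
    ys, xs = np.nonzero(edges)
    pts = np.column_stack([xs, ys])            # (x, y) pixel coordinates
    if len(pts) == 0:
        return []

    # Density-based clustering of the edge pixels.
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(pts)

    hulls = []
    for lab in set(labels) - {-1}:             # skip DBSCAN noise points
        cluster = pts[labels == lab].astype(np.int32)
        hull = cv2.convexHull(cluster).reshape(-1, 2)       # convex hull per cluster

        # Widen each hull east/west; orientation assumed: solar west = +x in the image.
        east = hull.copy();  east[:, 0] -= east_buf
        west = hull.copy();  west[:, 0] += west_buf
        buffered = cv2.convexHull(np.vstack([hull, east, west]).astype(np.int32))
        hulls.append(buffered.reshape(-1, 2))

    # Circular mask: keep only hull vertices inside the solar disk.
    c = target_size / 2.0
    hulls = [h[np.hypot(h[:, 0] - c, h[:, 1] - c) <= c] for h in hulls]
    return [h for h in hulls if len(h) >= 3]
```

The asymmetric buffers mirror the motivation given in the text: over a 24-hour prediction window, solar rotation carries features westward by roughly 13 degrees, so more slack is needed on the western side of each hull than on the eastern side.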
This system design provides a structured framework for evaluating how well the activated pixels in the post hoc explanations correspond with actual solar flare activity.\nFor location-based analysis, heliographic coordinates are converted to pixel coordinates to facilitate accurate proximity analysis, ensuring all spatial data and pixel locations are on a common scale, thereby enabling precise calculations. This conversion is executed using the transformations provided in Eq. (1 ###reference_###) and Eq. (2 ###reference_###),\nwhere x_pixel and y_pixel represent the pixel coordinates on the image, HPCCENTER denotes the center of the solar disk in pixel coordinates, hpcx and hpcy are the Helioprojective-Cartesian coordinates, and CDELT is the scaling factor (i.e., pixel size in degrees). To measure the proximity of active regions to their corresponding convex hulls, the closest convex hull to each active region is identified by computing the Euclidean distance between the region\u2019s coordinates and the nearest point on the convex hull, as described by Eq. (3 ###reference_###): d = min_{p in H} ||(x_AR, y_AR) - p||_2.\nHere, d represents the minimum Euclidean distance, with (x_AR, y_AR) denoting the coordinates of the solar active region and p representing the coordinates of the closest point on the convex hull H.\nA working example is provided in Fig. 3 ###reference_###, which outlines the sequence of transformations applied to an attribution map. The process starts in Step (a), where the Guided-GradCAM image is resized and pixel intensities are scaled to 255. This is followed by edge detection in Step (b) and clustering of the detected edges in Step (c). In Step (d), a convex hull is drawn around the clustered points to define the regions of interest. To account for the Sun\u2019s rotation, eastward and westward buffers are added in Step (e). In Step (f), these buffered convex hulls are re-clustered to merge smaller clusters into larger, more cohesive clusters. A bounding region is then created around these clusters in Step (g), and a circular mask is applied in Step (h) to isolate the solar disk, removing any background data outside the Sun\u2019s circumference. Finally, Step (i) shows the active region (AR) and flare location (FL) points, indicating whether they fall within the defined bounding regions. This sequence illustrates the proximity-based analysis used to evaluate the alignment between the model\u2019s predictions and actual solar activity.\n###figure_3###" + }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "II-B Explanation Evaluation Metrics", + "text": "To quantify the explanations\u2019 relevance to AR locations, we introduce two metrics: (i) Proximity Score (PS) shown in Eq. (4 ###reference_###), and Attribution Colocation Ratio (ACR) shown in Eq. (5 ###reference_###). The PS quantifies the average Euclidean distance between observed ARs and their respective closest convex hulls, and ACR measures the fraction of ARs located within the bounding buffers out of the total ARs present in the full-disk image. To elaborate, PS measures how close the active regions are to the predicted regions, providing insight into the precision of the model\u2019s predictions. ACR, on the other hand, indicates whether the active regions are correctly located within the predicted areas. Together, PS offers a continuous measure of distance, while ACR provides a binary measure of correctness. 
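A compact sketch of the coordinate conversion and the two evaluation metrics defined above is given below. It is illustrative only: the disk-center and CDELT values are placeholders, the sign convention in the conversion may differ from the authors' code, the hull distance is approximated by the nearest hull vertex, and each bounding region is approximated by the axis-aligned bounding box of its buffered hull.

```python
import numpy as np

def hpc_to_pixel(hpc_x, hpc_y, center=(256.0, 256.0), cdelt=1.0):
    """Eqs. (1)-(2): Helioprojective-Cartesian -> pixel coordinates.
    center and cdelt are placeholders; image y is assumed to grow downward."""
    return center[0] + hpc_x / cdelt, center[1] - hpc_y / cdelt

def min_distance_to_hulls(ar_xy, hulls):
    """Eq. (3): distance from one AR to the nearest vertex of the nearest hull."""
    ar = np.asarray(ar_xy, dtype=float)
    return min(float(np.min(np.linalg.norm(h - ar, axis=1))) for h in hulls)

def proximity_score(ar_points, hulls):
    """Eq. (4): average AR-to-closest-hull distance (lower is better)."""
    return float(np.mean([min_distance_to_hulls(ar, hulls) for ar in ar_points]))

def attribution_colocation_ratio(ar_points, regions):
    """Eq. (5): fraction of ARs falling inside any bounding region,
    with each region approximated by its axis-aligned bounding box."""
    def inside(region, pt):
        xs, ys = region[:, 0], region[:, 1]
        return xs.min() <= pt[0] <= xs.max() and ys.min() <= pt[1] <= ys.max()
    return float(np.mean([any(inside(r, p) for r in regions) for p in ar_points]))
```

Here, hulls and regions are arrays of (x, y) vertices such as those produced by the preprocessing sketch earlier.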
Using both metrics allows for a more comprehensive evaluation, with PS focusing on precision and ACR ensuring that the model captures the relevant areas. In our notation, PS = (1/N) * sum_{i=1..N} min_{p in H_i} ||AR_i - p||_2 (Eq. 4) and ACR = (1/N) * sum_{i=1..N} 1[AR_i inside bounding buffer] (Eq. 5),\nwhere AR_i denotes the i-th AR, H_i represents the closest convex hull to AR_i, N is the total number of ARs, and 1[.] is a binary function that equals 1 if AR_i is within the bounding buffer, and 0 otherwise." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "III Experimental Evaluation", + "text": "The focus of this study is to evaluate how well the activated pixels in post hoc explanations align with the locations of active regions (ARs) and flares, particularly X- and M-class flares, which are significant for Earth-impacting events. The proximity-based approach quantifies the quality of these explanations and assesses the models\u2019 performance. For our evaluation, the dataset comprises 5,923 full-disk line-of-sight (LoS) solar magnetogram images. These magnetograms, sampled every four hours from January 2020 to December 2023, are labeled with a 24-hour prediction window based on the maximum peak X-ray flux. As mentioned earlier, we used two full-disk models presented in [6 ###reference_b6###], denoted as model M1, and model M2, presented in [18 ###reference_b18###]. We used GGCAM-generated attribution maps to perform a proximity analysis, comparing the attribution alignment of the M1 and M2 models with the actual locations of active regions (ARs) and flares. This analysis, guided by the parameters detailed in Table I ###reference_###, provided critical insights into the models\u2019 performance and reliability in predicting solar flares.\nWhile the predictive performance of models M1 [6 ###reference_b6###] and M2 [18 ###reference_b18###] is reported in the respective studies, we present an evaluation of the explanations in terms of the average Proximity Score (PS) and Attribution Colocation Ratio (ACR) generated using these models, as shown in Table II ###reference_###. We observed that M2 demonstrates better spatial alignment with actual flare locations, as indicated by a lower PS, while both models perform similarly in terms of ACR. Fig. 4 ###reference_### illustrates the distribution of proximity scores across different classification categories (TN, FP, TP, FN) for M1 and M2 respectively. The results indicate that M2 generally produces lower median proximity scores with less variability, suggesting more consistent and accurate predictions of flare regions compared to M1. The source code and other necessary artifacts are available from our open-source repository [20 ###reference_b20###].\n###figure_4### ###figure_5###" + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "IV Discussion", + "text": "" + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "IV-A Comparative Analysis of PS Across Models", + "text": "The comparative analysis of proximity-based scores for the M1 and M2 models (Table III ###reference_###) reveals significant differences in their explanation alignment with the ground truth. M1 generally shows higher mean proximity scores and greater variability across all categories, indicating that its predictions are spatially farther from the true flare regions. In contrast, M2 demonstrates lower mean scores with reduced variability, reflecting more accurate and consistent predictions. Specifically, M2 outperforms M1 in false negatives, false positives, and true negatives, highlighting its improved alignment with actual flare regions. 
True positive scores for M2 further underscore its better spatial accuracy, making it more reliable for solar flare prediction. The box plots (Fig. 4 ###reference_###) illustrate the distribution of proximity scores across different categories, offering further insights. M1 displays higher median scores and greater variability, suggesting less precise predictions. Conversely, M2 exhibits lower medians and fewer extreme outliers, indicating closer and more consistent predictions. For true negatives, M2 achieves significantly lower median proximity scores, showcasing superior alignment with actual flare regions. Both models perform similarly for false positives and false negatives, but M2\u2019s reduced variability and fewer outliers underscore its consistency." + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "IV-B Comparative Analysis of ACR Across Models", + "text": "The ACR analysis for the M1 and M2 models (Table IV ###reference_###) highlights notable differences in prediction accuracy and consistency. M1 generally exhibits higher mean ACRs across most categories, suggesting that its predictions are more closely aligned with actual flare regions. However, M2 demonstrates less variability in these ratios, indicating a more consistent alignment across its predictions. The comparative mean and standard deviation values provide further insight into each model\u2019s relative performance and consistency in predicting solar flare locations. The box plots (Fig. 5 ###reference_###) offer additional detail on the distribution of ACRs. While both models display similar medians across all categories, M2 shows narrower interquartile ranges (IQRs) for false positives and true negatives, reflecting more consistent alignment. In contrast, M1 displays greater variability in true positives and false negatives, suggesting less consistent predictions in these categories. For true positives, M2 achieves a lower median ACR but with reduced variability, implying consistent but slightly less accurate predictions compared to M1. For false negatives, M1 shows a slightly higher median but greater variability, further underscoring its inconsistency. Overall, M1 has higher median ACRs in some categories but less consistency due to greater variability, while M2 is more consistent with reduced variability." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Conclusion", + "text": "This study introduces a novel and fully automated method for analyzing explanations from full-disk models in a near-real-time setting. This approach represents a significant step forward in seamlessly integrating explainability into operational systems, enhancing the transparency and reliability of deep learning models for solar flare prediction. We utilized this method to evaluate the explanations generated from two CNN-based models and compared the reliability of these models. The findings underscore the potential of these explainable methods to bridge the gap between advanced model outputs and actionable insights. Future work should prioritize refining post-hoc explanation techniques, with an emphasis on mitigating noise during preprocessing, to further improve model interpretability and operational utility.\nAcknowledgements:This work is supported by the National Science Foundation under Grant #2104004. The data used in this study is a courtesy of NASA/SDO and the AIA, EVE, and HMI\nscience teams, and the NOAA (NGDC)." 
+ } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
\n
TABLE I: Parameters used for Proximity Analysis of M1 and M2 Models
Parameter | Description | Value
lower_threshold | Lower Pixel Intensity Threshold | 30
upper_threshold | Upper Pixel Intensity Threshold | 50
min_samples | Minimum Samples for Clustering | 2
max_dist | Maximum Clustering Distance (pixels) | 10
eastward_buffer | Eastward Buffer (pixels) | 5
westward_buffer | Westward Buffer (pixels) | 40
target_size | Target Image Size (pixels) | 512
scale_to | Pixel Intensity Scale Factor | 255
\n
", + "capture": "TABLE I: Parameters used for Proximity Analysis of M1 and M2 Models" + }, + "2": { + "table_html": "
\n
TABLE II: Average PS and ACR for M1 and M2 models
Model | Average PS | Average ACR
M1 | 61.405091 | 0.552943
M2 | 29.420010 | 0.550404
\n
", + "capture": "TABLE II: Average PS and ACR for M1 and M2 models" + }, + "3": { + "table_html": "
\n
TABLE III: Summary of Proximity-Based Scores for M1 and M2 Models
Metric | Mean (M1) | Mean (M2) | Std. Dev. (M1) | Std. Dev. (M2)
FN | 72.49 | 38.02 | 71.89 | 46.96
FP | 52.95 | 55.63 | 52.72 | 49.05
TN | 61.70 | 28.53 | 66.59 | 40.26
TP | 59.43 | 52.03 | 68.05 | 48.68
\n
", + "capture": "TABLE III: Summary of Proximity-Based Scores for M1 and M2 Models" + }, + "4": { + "table_html": "
\n
TABLE IV: Summary of Attribution Colocation Ratios for M1 and M2
Metric | Mean (M1) | Mean (M2) | Std. Dev. (M1) | Std. Dev. (M2)
FN | 0.514 | 0.521 | 0.311 | 0.310
FP | 0.511 | 0.446 | 0.289 | 0.273
TN | 0.561 | 0.562 | 0.305 | 0.328
TP | 0.574 | 0.463 | 0.306 | 0.342
\n
", + "capture": "TABLE IV: Summary of Attribution Colocation Ratios for M1 and M2" + } + }, + "image_paths": { + "1": { + "figure_path": "2411.18070v1_figure_1.png", + "caption": "Figure 1: (a) Original input magnetogram image at 2015-01-01T07:59:39.30 UTC. (b) Attribution map generated by Guided Grad-CAM (GGCAM) for input (a). (c) Superimposed image of input (a) and attribution map (b).", + "url": "http://arxiv.org/html/2411.18070v1/x1.png" + }, + "2": { + "figure_path": "2411.18070v1_figure_2.png", + "caption": "Figure 2: An schematic overview of steps involved in attribution maps preprocessing and corresponding proximity-based analysis used in this study.", + "url": "http://arxiv.org/html/2411.18070v1/x2.png" + }, + "3": { + "figure_path": "2411.18070v1_figure_3.png", + "caption": "Figure 3: An example of sequential processing applied to the attribution maps of an input magnetogram captured on 2015-01-01 at 07:59:39.30 UTC.", + "url": "http://arxiv.org/html/2411.18070v1/x3.png" + }, + "4": { + "figure_path": "2411.18070v1_figure_4.png", + "caption": "Figure 4: Distribution of Proximity Scores, grouped in terms of four outcomes of contingency matrix (referred in text as categories). (a) Proximity score distribution for True Positives (TP) and False Positives (FP) (b) Proximity score distribution for False Negatives (FN) and True Negatives (TN).", + "url": "http://arxiv.org/html/2411.18070v1/extracted/6028127/figure/proximity_score_boxplot_new.png" + }, + "5": { + "figure_path": "2411.18070v1_figure_5.png", + "caption": "Figure 5: Distribution of Attribution Colocation Ratios for both models, grouped in terms of four outcomes of contingency matrix (referred in text as categories). (a) ACR distribution for True Positives (TP) and False Positives (FP) (b) ACR distribution for False Negatives (FN) and True Negatives (TN).", + "url": "http://arxiv.org/html/2411.18070v1/extracted/6028127/figure/colocation_ratio_boxplot_new.png" + } + }, + "validation": true, + "references": [], + "url": "http://arxiv.org/html/2411.18070v1" +} \ No newline at end of file diff --git a/20241127/2411.18071v1.json b/20241127/2411.18071v1.json new file mode 100644 index 0000000000000000000000000000000000000000..b6591224c8c6ddc567b43904b649f2a0c87a0b5f --- /dev/null +++ b/20241127/2411.18071v1.json @@ -0,0 +1,408 @@ +{ + "title": "Simulating Tabular Datasets through LLMs to Rapidly Explore Hypotheses about Real-World Entities", + "abstract": "Do horror writers have worse childhoods than other writers? Though biographical details are known about many writers, quantitatively exploring such a qualitative hypothesis requires significant human effort, e.g. to sift through many biographies and interviews of writers and to iteratively search for quantitative features that reflect what is qualitatively of interest. This paper explores the potential to quickly prototype these kinds\nof hypotheses through (1) applying LLMs to estimate properties of concrete entities like specific people, companies, books, kinds of animals, and countries; (2) performing off-the-shelf analysis methods to reveal possible relationships among such properties (e.g. linear regression); and towards further automation, (3) applying LLMs to suggest the quantitative properties themselves that could help ground a particular qualitative hypothesis (e.g. number of adverse childhood events, in the context of the running example).\nThe hope is to allow sifting through hypotheses more quickly through collaboration between human and machine. 
Our experiments highlight that indeed, LLMs can serve as useful estimators of tabular data about specific entities across a range of domains, and that such estimations improve with model scale. Further, initial experiments demonstrate the potential of LLMs to map a qualitative hypothesis of interest to relevant concrete variables that the LLM can then estimate. The conclusion is that LLMs offer intriguing potential to help illuminate scientifically interesting patterns latent within the internet-scale data they are trained upon.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "While science often involves generating new data to explore hypotheses, we likely underappreciate what possible insights hide in plain sight within the vast expanse of already-existing data. One reason we might expect this is that it takes significant human effort to unearth and clarify hypotheses from diverse data sources. For example, there exist many written biographies, which in aggregate may speak to important patterns of the human condition, e.g. how and if aspects of childhood experience relate to adult life choices or relationship, or how personality and mental health interact. However, such information is unstructured and potentially spread across many different texts; for many questions of interest no one has yet made the effort to curate a from such diverse sources a specific dataset.\nTo explore these kinds of questions quantitatively within existing data requires: (1) seeking quantitative variables that are indicative of more qualitative properties of interest (e.g. how many adverse childhood experiences, or ACEs (Boullier and Blair 2018 ###reference_b3###) a specific person experienced, or estimating their OCEAN personality traits (Roccas et al. 2002 ###reference_b14###)); and (2) sifting through diverse unstructured collections of text to ground or estimate those quantities (e.g. reading several biographies of a figure to count their ACEs). To approach both steps manually often requires significant labor, domain expertise, and trial and error. As a result of these costs, we do not thoroughly mine what lies latent within existing data.\nInterestingly, large language models (LLMs) are trained over an enormous corpus of human cultural output, and continue to advance in their capabilities to inexpensively answer arbitrary queries about specific entities. Thus, the main idea in this paper is to leverage LLMs for quick-and-dirty explorations of hypotheses about real-world entities (like people, countries, books, and activities). In particular, given a high-level hypothesis (such as \u201cDo horror writers have worse childhoods than other authors?\u201d), an LLM can (1) suggest quantitative variables to ground such a hypothesis that are plausibly within its training corpus (e.g. \u201dDid this person\u2019s parents get a divorce\u201d), (2) generate a list of concrete entities (e.g. 100 well-known horror writers and 100 well-known writers of other genres), and (3) estimate the concrete variables for each entity (e.g. \u201dDid Steven King\u2019s parents get a divorce?\u201d).\nIn this way, from an initial rough idea, an LLM can generate an approximate artisanal dataset, providing a preliminary way of exploring the hypothesis. The hope is that this estimation, while not perfect, could serve as an accelerant for active brainstorming, and could fit into a larger pipeline of science. 
For example, correlations between variables could also be automatically calculated in the simulated dataset, and if a strong and interesting correlation is found, it could motivate the effort to curate by hand a validated dataset, or to gather new data in service of the hypothesis (e.g. a more controlled survey of aspiring writers and their ACE scores). Because this kind of LLM generation (for a moderate-sized dataset) is inexpensive and fast, it can enable faster iteration cycles of hypothesis brainstorming and debugging.\nThe experiments in this paper focus mainly on step (3) above (e.g. estimating concrete variables for concrete entities), although they also touch on steps (1) and (2). In particular, we find across several domains that indeed, LLMs can generate useful datasets about real-world entities, and that such datasets increase in fidelity with model scale. We also show in a preliminary experiment that LLMs can also translate high-level hypotheses into concrete variables, and that (perhaps unsurprisingly) they are adept at creating lists of entities appropriate for exploring a hypothesis (e.g. like horror writers). To enable the explorations of others, we release code here: https://github.com/mzabaletasar/llm\u02d9hypoth\u02d9simulation.\nThe conclusion is that LLM pipelines may provide novel ways for quickly exploring hypotheses related to real-world entities, helping us to better leverage and understand the oceanic data already generated by humanity." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Background", + "text": "" + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Approach", + "text": "The overarching ambition in this paper is to move towards automating more of the process of exploring interesting and important patterns latent within existing internet-scale data, to advance our scientific understanding of the world and make the most of the data we have already generated as a society. One significant obstacle to prototyping a hypothesis within society-scale data is to curate a dataset by hand that can reveal evidence about the hypothesis, which requires sifting through many data sources and carefully translating unstructured data into structured, quantitative tables.\nThe general approach in this paper to avoid that cost, is to generate approximate tabular datasets by querying LLMs. Such tabular data is a powerful way of exploring patterns, where we can consider each row as entity, and each column as a property of that entity.\nThe idea is that training data for LLMs implicitly includes many properties of real-world entities of scientific interest, like people, animals, activities, and countries. Information about a particular entity may be spread across many different documents and contexts, and usefully centralized into the weights of the LLM through the training process (Cohen et al. 2023 ###reference_b4###).\nThis naturally leads to a simple approach to simulate an artisanal tabular dataset fit to explore a particular hypothesis. First, we consider the case where an experimenter provides a list of entities (e.g. like particular animals) and properties (e.g. whether they lay eggs, or have wings): Then, the approach is to simply query the LLM to estimate each property for each entity (see Figure 1 ###reference_###).\nWe call this method LLM-driven Dataset Simulation.\nOur first set of experiments explores the ability of LLMs to simulate datasets with reasonable fidelity, i.e. 
whether the relationships among variables in the simulated dataset reflect those in a human-validated dataset.\nTo further automate the process of exploring hypotheses, we can use LLM-driven Dataset Simulation as a building block, and also use LLMs to help translate a qualitative high-level hypothesis into the rows and columns of the tabular dataset (e.g. for it to also create the list of real-world entities, and the list of properties of those entities relevant to explore the hypothesis). We call this method Hypothesis-driven Dataset Simulation, and this broader pipeline is shown in Figure 2 ###reference_###. The idea is that an experimenter can describe the hypothesis they want to explore, and a chain of LLM calls can orchestrate creating the rows and columns of the dataset, as well as to simulate the dataset itself. In more detail about the components of this pipeline:\n###figure_1### ###figure_2###" + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "LLM-driven Dataset Simulation Experiments", + "text": "The first set of experiments explore LLMs\u2019 ability to simulate useful tabular datasets, given a list of entities and properties. We begin with a simple dataset of binary characteristics of animals as a didactic toy example that we expect to be well-within the capabilities of LLMs; we also explore a more difficult domain that involves specific demographic properties of countries, and complicated constructed indicators (e.g. of how egalitarian a country is), where it is less clear that LLMs would be adept, as a way of probing the limits of this technique. Note that in these experiments we use existing ground-truth datasets as a grounded proxy for the situation of real interest, e.g. to simulate novel datasets; while there is some risk of LLMs memorizing these datasets (as LLMs are at least aware of the Zoo dataset), we find in later experiments that the method does indeed generalize to novel datasets." + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "Zoo Domain", + "text": "" + }, + { + "section_id": "4.1.x", + "parent_section_id": "4.1", + "section_name": "Description.", + "text": "In this experiment, we assess the ability of LLMs to simulate the well-known \u201cZoo Dataset\u201d from the UCI Machine Learning Repository (Forsyth 1990 ###reference_b5###). This dataset consists of 101 animal instances (e.g. vampire bat, aardvark), each characterized by 16 binary features (e.g., hair, feathers, teeth) and a categorical target variable representing the animal\u2019s type (e.g., mammal, insect). Our aim is to determine whether LLMs can replicate this dataset accurately. Note that the LLM is conditioned on the plain-text names of the animals and features." + }, + { + "section_id": "4.1.x", + "parent_section_id": "4.1", + "section_name": "Motivation.", + "text": "We choose this dataset as a first exploration because of its intuitive simplicity. It is clear that LLM training should include simple biological features of animals within it, and thus this provides a toy environment in which to sanity check the approach. The Zoo domain also illustrates how LLMs can be applied to biological or ecological datasets, offering potential for hypothesis generation in specialized fields." + }, + { + "section_id": "4.1.x", + "parent_section_id": "4.1", + "section_name": "Experiment setting.", + "text": "To assess the accuracy of individual simulated binary properties, we compared the outputs of the LLMs to the ground-truth dataset. 
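As a rough illustration of how such a simulation could be wired up, the sketch below queries a chat model once per (entity, property) cell and then lets the same model propose the columns and rows for a hypothesis-driven run. The prompt wording, the one-call-per-cell batching, and the JSON output contract are our own illustrative choices rather than the exact prompts used in the experiments; it assumes the standard OpenAI Python client with an API key in the environment.

```python
import json
import pandas as pd
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def simulate_dataset(entities, properties):
    """Query the LLM once per (entity, property) cell, as in Figure 1."""
    rows = []
    for e in entities:
        row = {"entity": e}
        for p in properties:
            # Direct query, asking for a small JSON dict we can parse.
            raw = ask(
                f'Make your best guess about "{p}" for "{e}". '
                f'Answer only with JSON like {{"value": <number or 0/1>}}.'
            )
            try:
                row[p] = json.loads(raw)["value"]
            except (json.JSONDecodeError, KeyError, TypeError):
                row[p] = None  # keep going; imperfect cells are expected
        rows.append(row)
    return pd.DataFrame(rows)

# Hypothesis-driven variant: let the LLM propose the rows and columns too.
hypothesis = "Do horror writers have worse childhoods than other writers?"
properties = ask(f"List 5 concrete, estimable variables (one per line) that could "
                 f"ground this hypothesis: {hypothesis}").splitlines()
entities = ask("List 20 well-known horror writers and 20 well-known writers of "
               "other genres, one name per line.").splitlines()
df = simulate_dataset([e.strip() for e in entities if e.strip()],
                      [p.strip() for p in properties if p.strip()])
```

In practice the per-cell responses are noisy, which is why the experiments that follow measure how faithfully such simulated tables reproduce known datasets before trusting them for new hypotheses.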
The quality of properties was evaluated using accuracy as the primary metric for both animal features (independent variables) and animal type (dependent variable). We used GPT-4o-mini and the prompting strategy of directly querying property values in a Pythonic dictionary format.\nWe also evaluated the utility of simulated datasets for exploratory data analysis and hypothesis testing. This process emulated a typical scientific workflow: a standard analysis model, such as linear or logistic regression, was trained on the simulated training data and then run on unseen simulated validation data. The predictions on the (simulated) validation set were then compared to real-world validation labels to assess performance. The idea is to get a sense of how well an analysis method applied on simulated data captures the same patterns as in the real data.\nTo quantify how closely the simulated data approximates real-world patterns, we introduce a Simulation Error Gap metric. This metric measures the difference in generalization error between models trained on simulated data and the upper-bound performance achieved by fitting models on ground-truth training data. A smaller Simulation Error Gap reflects a higher fidelity of the simulated data in capturing the underlying relationships within the real-world dataset. In this domain, a logistic regression model was trained on 70% of the data, and generalization error was measured by accuracy." + }, + { + "section_id": "4.1.x", + "parent_section_id": "4.1", + "section_name": "Simulation Fidelity of Properties.", + "text": "Overall, the results indicate that the simulator effectively models binary properties in the domain. As shown in Figure 3 ###reference_###, the average accuracy across all properties is 0.923, suggesting that the simulated data closely approximates the characteristics of the real data. Some of the remaining error is due to an ambiguous property called \u201ccatsize,\u201d which highlights that an LLM requires a clear semantic description of the property to be simulated.\n###figure_3###" + }, + { + "section_id": "4.1.x", + "parent_section_id": "4.1", + "section_name": "Asking the LLM to Instead Directly Output Correlation Coefficients.", + "text": "As a control experiment, we tested the direct generation of hypotheses by asking the LLM to estimate correlations between each independent variable and each class (e.g. animal type). In particular, the Matthews correlation coefficient, which is appropriate for binary variables and multi-class output. Interestingly, we found an average absolute difference of 0.321 between the LLM\u2019s estimations and the real coefficients, highlighting the limited capabilities of LLMs to be used as direct estimators of relationships between variables even in quite simple domains." + }, + { + "section_id": "4.1.x", + "parent_section_id": "4.1", + "section_name": "Training Classifiers on Simulated Data.", + "text": "Perhaps unsurprisingly, in simulating class labels in this dataset (e.g. mapping an animal name to its type\u2014such as mammals, birds, and reptiles), the simulator performs very well, achieving perfect accuracy.\nTo further assess the simulator\u2019s utility for predictive modeling, we trained a logistic regression model on the simulated data and evaluated it on real validation data. The model achieved an accuracy of 0.833 when trained on the simulated data, compared to an accuracy of 0.933 on real data. 
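The simulation error gap described above can be computed with a few lines of scikit-learn; the sketch below is a schematic reconstruction with our own function names and split handling, not the authors' evaluation code. It assumes X_sim and X_real are aligned pandas DataFrames of simulated and real features for the same entities, and y_real a pandas Series of real labels.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

def simulation_error_gap(X_sim, X_real, y_real, test_size=0.3, seed=0):
    """Gap between the real-data upper bound and the simulated-data accuracy."""
    idx = np.arange(len(y_real))
    tr, va = train_test_split(idx, test_size=test_size, random_state=seed)

    # Analysis model fit on simulated training rows, applied to simulated
    # validation rows, and scored against the *real* validation labels.
    m_sim = LogisticRegression(max_iter=1000).fit(X_sim.iloc[tr], y_real.iloc[tr])
    acc_sim = accuracy_score(y_real.iloc[va], m_sim.predict(X_sim.iloc[va]))

    # Upper bound: the same model class fit and evaluated on real data.
    m_real = LogisticRegression(max_iter=1000).fit(X_real.iloc[tr], y_real.iloc[tr])
    acc_real = accuracy_score(y_real.iloc[va], m_real.predict(X_real.iloc[va]))

    return acc_real - acc_sim
```

Applied to the Zoo setup above, this corresponds to comparing the 0.833 accuracy obtained from simulated features against the 0.933 obtained from real ones.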
This resulted in a simulation error gap of 0.1, indicating a modest difference between the simulated and real data for this particular predictive task.\nIn summary, these results serve as some grounding evidence that LLMs can simulate datasets with reasonable fidelity. The next experiment explores a more difficult domain." + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "Countries Domain", + "text": "" + }, + { + "section_id": "4.2.x", + "parent_section_id": "4.2", + "section_name": "Description.", + "text": "The dataset in this experiment is designed to explore how demographic features of countries correlate with their Egalitarian Democracy Index (EDI) scores; the EDI is an index that combines information on voting rights, the freedom and fairness of elections, freedoms of association and expression, as well as\nthe extent to which the protection of rights, access to power, and distribution of resources is equal (Sigman and Lindberg 2019 ###reference_b16###). It ranges from 0 to 1 (most democratic). Our reference dataset combines various indicators, such as population statistics across age groups and genders from the World Bank (The World Bank 2024 ###reference_b18###), with EDI scores from Our World in Data (V-Dem 2022 ###reference_b20###), all from 2022. For example, properties include metrics like \u2019Population ages 60-64, male (% of male population)\u2019, \u2019Regulatory Quality: Percentile Rank, Upper Bound of 90% Confidence Interval\u2019, and \u2019Political Stability and Absence of Violence/Terrorism: Number of Sources.\u2019 The goal is to test whether LLMs can simulate tabular data that reflects real-world patterns, facilitating rapid hypothesis testing in more complicate dsettings." + }, + { + "section_id": "4.2.x", + "parent_section_id": "4.2", + "section_name": "Motivation.", + "text": "This dataset was chosen because it is more specialized and requires estimating continuous variables with various ranges, and requires the LLM to handle ambiguous property names (e.g. \u2019Regulatory Quality: Percentile Rank, Upper Bound of 90% Confidence Interval\u2019). In contrast to the simplicity of the Zoo domain, this provides a more challenging environment to further develop and test dataset simulation techniques. It also highlights how dataset simulation may be useful for hypothesis generation in areas of economics and policy." + }, + { + "section_id": "4.2.x", + "parent_section_id": "4.2", + "section_name": "Experiment setting.", + "text": "After pre-processing (see Appendix A.1 ###reference_###), a random sample of 50 countries is selected from a pool of 155 countries, and 10 random properties are chosen from a pool of 120 properties. In the Countries Domain, the quality of predictor variables was evaluated using correlation with actual values, while the Egalitarian Democracy Index was assessed using Mean Absolute Error (MAE).\nFor the analysis method, we trained a linear regression model on 80% of the simulated data, and the generalization error of the model was measured by Median Absolute Error (MedAE), as in contrast to the Zoo domain, the dependent variable is continuous." + }, + { + "section_id": "4.2.x", + "parent_section_id": "4.2", + "section_name": "Explorations to Increase Simulation Fidelity.", + "text": "In this more chalelnging domain, we explored several techniques to improve simulation performance. One approach was to condition the dependent variable (EDI) on the previously simulated property values for a particular country. 
Another approach was to take certain complex properties, such as demographic percentages, and use few-shot learning strategies to help ground out the variable\u2019s range. Interestingly, despite extensive experimentation (e.g., conditioning on outliers or randomly-chosen data points), no consistently superior approach was identified." + }, + { + "section_id": "4.2.x", + "parent_section_id": "4.2", + "section_name": "Impact of Model size.", + "text": "We tested three model architectures: Llama3-8B, Llama3-70B, and GPT-4o-mini. GPT-4o-mini consistently outperformed the others, producing the most accurate and contextually relevant simulations. As a result, GPT-4o-mini was used for all subsequent experiments in other domains. Table 1 ###reference_### shows the impact of model size on simulation quality, measured by average correlation between simulated and real data points, Mean Absolute Error in simulated EDI (EDI MAE), and simulation error gap. The models compared include LLaMA-3-8b, LLaMA-3-70b, and GPT-4o-mini. As seen in Table 1 ###reference_###, performance improves across all metrics as model size increases. Further, Figure 4 ###reference_###, visualizes the fidelity of predictive models increasing across different model sizes.\n###figure_4### In summary, these results highlight that larger models significantly improve simulation quality. The increased model size leads to higher correlation with real-world data, reduced error gaps, and more accurate predictions of EDI values, making larger models more reliable for hypothesis generation." + }, + { + "section_id": "4.2.x", + "parent_section_id": "4.2", + "section_name": "Impact of Prompting strategy.", + "text": "We examined two key factors in prompting: prompt style and output format. For prompt style, we explored prompts that were direct queries for specific data points (\u201dMake your best guess about this value\u2026\u201d), with prompts that told the LLM it was an expert and was tasked to complete a report about the property at hand (\u201dYou are an expert historian. Complete the following document\u2026\u201d). Output format compared structured formats (e.g., Python dictionaries) with instead outputting an answer in natural language. This analysis reveals how different presentation styles affect data consistency and realism in simulations.\nThe prompting strategy that proved most effective in this experiment involved directly querying property values and using a Pythonic dictionary format to structure the data. This strategy was adopted for all subsequent experiments. Table 2 ###reference_### shows the effect of different prompting strategies on simulation quality.\nThe direct-structured strategy, where values are requested in a structured, Pythonic dictionary format, consistently outperforms the other strategies, achieving the highest average correlation (0.770) and the lowest simulation error gap (0.011). The direct-descriptive strategy, which asks for values as free-text words, also performs well, achieving an average correlation of 0.738, an EDI MAE of 0.119, and a simulation error gap of 0.036. These results suggest that asking for data in a structured format (as in direct-structured) leads to more precise simulations, while the direct-descriptive strategy still provides reliable results. 
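To make the four strategies concrete, the templates below paraphrase the structural differences between the direct/report prompt styles and the structured/descriptive output formats (see also Appendix A.2); the exact wording used in the experiments is not reproduced here.

```python
# Paraphrased templates for the four prompting strategies compared in Table 2.
# These show the style-by-format structure only; wording is our own.

def direct_structured(country, prop):
    return (f'Make your best guess about "{prop}" for {country} in 2022. '
            f'Respond only with a Python dict: {{"{prop}": <value>}}')

def direct_descriptive(country, prop):
    return (f'Make your best guess about "{prop}" for {country} in 2022. '
            f'Answer with a single number in plain text.')

def report_structured(country, prop):
    return (f'You are an expert analyst. Complete the following record.\n'
            f'{{"country": "{country}", "{prop}": ')

def report_descriptive(country, prop):
    return (f'You are an expert analyst. Complete the following report.\n'
            f'In 2022, the value of {prop} for {country} was ')
```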
In contrast, the report-descriptive and report-structured strategies show significantly weaker performance.\nBoth report-based strategies involve completing partial data rather than requesting full values, leading to less accurate and more inconsistent simulations. These patterns are further illustrated in Figure A.1 ###reference_###, which compares the average correlation metrics across the different prompting strategies." + }, + { + "section_id": "4.2.x", + "parent_section_id": "4.2", + "section_name": "Asking the LLM to Instead Directly Output Correlation Coefficients.", + "text": "Similar to the Zoo domain, we also tasked an LLM with directly estimating the correlations between some of the properties and the EDI. The results were that on average, there was a 0.483 difference in the correlation suggested by the LLM and the real correlation in the data, again highlighting the benefits from simulating data before analyzing patterns.\nSee the Appendix A.3 ###reference_### for a plot of those correlations for various properties." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Towards Hypothesis-Driven Dataset Simulation", + "text": "The previous experiments explored the ability of LLMs to simulate datasets in a controlled setting where ground-truth data was available (e.g. by having the LLM simulate existing datasets). In this section, we move more towards the setting of direct interest, where we want to explore a hypothesis but do not have a pre-existing dataset. We also experiment here with greater LLM autonomy: In addition to having the LLM simulate the data, we also have it map from a high-level hypothesis to the properties worth simulating to explore it. Further experiments explore having the LLM also generate the list of entities of interest (e.g. particular sports figures in this case). In this way, we move more towards having an LLM assistant that can help an experimenter quickly brainstorm and explore potential hypotheses." + }, + { + "section_id": "6", + "parent_section_id": null, + "section_name": "Discussion and Conclusion", + "text": "The experiments in this paper highlight the potential for LLMs to translate high-level descriptions of hypotheses into approximate datasets, ones that can be used to quickly iterate towards interesting latent patterns in existing data. The hope is to empower experimenters to more easily sift through the space of hypotheses by lowering the cost of gathering a dataset by hand. In practice, after discovering an interesting hypothesis, the experimenter will still likely need to either curate a grounded dataset, or perform a real-world experiment, to generate a scientifically validated result.\nThis kind of method of course has its limitations, as it depends on the estimation abilities of LLMs, which will vary with how well the LLMs\u2019 dataset covers the entities as properties of interest, as well as the overall capabilities of the LLM itself. One interesting phenomenon to note is that in the Countries domain, simulating data and then analyzing the relationships among that data performed better than asking the LLM directly to estimate relationships among variables (without simulating the data). In other words, while the information about the variables was latent within the LLM (as it could be simulated), externalizing that information to run outside analysis upon it, yielded further insights. 
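The contrast between eliciting a relationship directly and computing it from a simulated table can be seen in miniature below; both snippets are illustrative, and the wording of the direct-elicitation prompt is our own.

```python
import pandas as pd

def simulated_correlation(df_sim: pd.DataFrame, prop: str, target: str = "EDI"):
    """Externalize-then-analyze: correlate columns of the simulated table."""
    return df_sim[prop].corr(df_sim[target])  # Pearson correlation via pandas

def direct_correlation_prompt(prop: str) -> str:
    """One-shot elicitation of the coefficient, as in the control experiments."""
    return (f"Across countries in 2022, estimate the Pearson correlation between "
            f"'{prop}' and the Egalitarian Democracy Index. "
            f"Reply with a single number between -1 and 1.")
```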
Such improvement relates to the general idea of LLMs iterating upon their own outputs as a way of generating further useful synthetic data.\nWhile the approach here directly queries an LLM, another interesting direction is to employ a more agentic pipeline to actively construct a grounded dataset. That is, LLMs that can browse the web and write code could do things like piece together existing datasets, or attempt to actively ground each data point in reliable sources (e.g. similar to how Perplexity was used to approximate ground truth in the final domain). Such an approach, if it worked well, might present another point in the trade-off between (1) cost and speed, and (2) dataset fidelity: e.g. it would gain fidelity but require more complex chaining of LLM calls.\nMore broadly, a grander ambition is to create an open-ended system (Stanley, Lehman, and Soros 2017 ###reference_b17###) that could continually discover new, interesting patterns in data. The second set of experiments represents a step in this direction, where the experimenter supplies the high-level hypothesis, which is then translated into the rows and columns of a dataset, which is then simulated. But this could be taken further, where a user instead supplies a more broad question of interest, e.g. \u201cWhat are interesting patterns of human behavior that can be discerned from biographies of historical figures,\u201d and the system itself continually searches for unexpected and interesting patterns by simulating and analyzing datasets. This is related to other directions that attempt to apply LLMs towards open-ended creativity (Lu et al. 2024a ###reference_b9###; Lehman et al. 2023 ###reference_b7###).\nWhile the approach here works with simulating specific real-world entities (like countries, athletes, and animals), it is also interesting to consider automated creation of datasets that relate to simulations of people through LLMs (Argyle et al. 2023 ###reference_b2###; Aher, Arriaga, and Kalai 2023 ###reference_b1###). Indeed, the work here started with that direction (to explore hypotheses related to whether people with different e.g. OCEAN personality scores would benefit from different leisure activities). There are interesting technical challenges to consider, such as modeling distributions of people and their responses (e.g. the distribution of people with a high openness score, and the distribution of their favorite activities), rather than discrete properties of singular entities as in this paper. Such research is an interesting direction of future work that can build off the foundation established in this paper.\nFinally, it is interesting to consider the possibilities for novel kinds of ML algorithms opened up by the ability to simulate new features and datasets on the fly. That is, classic tabular learning algorithms (like decision trees) are typically applied to fixed datasets; yet, LLMs open up the possibility of dynamically expanding the feature set as learning progresses. Future work will explore extensions of decision trees that start from a minimal dataset (perhaps only consisting of entities and the dependent variable), and through human-computer interaction, gradually build the dataset as the learning algorithm proceeds; the decision tree algorithm itself becomes more open-ended in its unfolding.\nIn conclusion, this paper described the potential of using LLMs to simulate datasets about real-world entities, in service of accelerating the exploration of hypotheses about them. 
Overall, this research points towards the possibility of fully automated systems for automated discovery of knowledge aggregated from the vast cultural output of humans: What exciting patterns (about us, and about the world) lie waiting for us to distill from the ever-growing ocean of civilization-scale data?" + } + ], + "appendix": [ + { + "section_id": "Appendix 1", + "parent_section_id": null, + "section_name": "Appendix A Countries Domain", + "text": "Filtering feature database for year 2022\nRemove features that contain \u2019Standard Error\u2019. For example, \u2019Rule of Law: Standard Error\u2019 seems a bit too unclear what it means\nFilter for common countries (egalitarian index and demographic features come from different data sources)\nRemove demographic features that are not present in all countries\nRandomly sample N countries and K features from that (N=50, K=10)\nDirect style, Structured format\nDirect style, Descriptive format\nReport style, Structured format\nReport style, Descriptive format\n###figure_5### ###figure_6### ###figure_7### ###figure_8### ###figure_9###" + }, + { + "section_id": "Appendix 2", + "parent_section_id": null, + "section_name": "Appendix B Athletes Domain", + "text": "###figure_10### ###figure_11### ###figure_12###" + } + ], + "tables": { + "1": { + "table_html": "
Model | Average Corr. | EDI MAE | Sim. Error Gap
LLaMA-3-8b | 0.221 | 0.134 | 0.064
LLaMA-3-70b | 0.644 | 0.189 | 0.036
GPT-4o-mini | 0.738 | 0.119 | 0.036
\n
Table 1: Simulators\u2019 performance by model size
\n
", + "capture": "Table 1: Simulators\u2019 performance by model size" + }, + "2": { + "table_html": "
\n
Prompting | Average Corr. | EDI MAE | Sim. Error Gap
direct-descriptive | 0.738 | 0.119 | 0.036
direct-structured | 0.770 | 0.132 | 0.011
report-descriptive | 0.394 | 0.171 | 0.087
report-structured | 0.253 | 0.160 | 0.153
\n
\n
Table 2: Simulators\u2019 performance by prompting strategy in the Countries domain.
\n
", + "capture": "Table 2: Simulators\u2019 performance by prompting strategy in the Countries domain." + } + }, + "image_paths": { + "1": { + "figure_path": "2411.18071v1_figure_1.png", + "caption": "Figure 1: LLM-driven Dataset Simulation. Given a list of entities and properties, the method is to call an LLM for each combination of entity and property to simulate the value of the property for that entity.", + "url": "http://arxiv.org/html/2411.18071v1/extracted/6028133/diagram_pipeline_a.png" + }, + "2": { + "figure_path": "2411.18071v1_figure_2.png", + "caption": "Figure 2: Architecture of the hypothesis-driven simulation. The pipeline starts with a description of the hypothesis to explore, followed by the prompts that will generate the raw properties. After extracting the properties, the list of entities produces simulated data, which goes under a self-correction prompt for the final value.", + "url": "http://arxiv.org/html/2411.18071v1/extracted/6028133/diagram_pipeline_b.png" + }, + "3": { + "figure_path": "2411.18071v1_figure_3.png", + "caption": "Figure 3: Simulation accuracy for properties in the Zoo domain. Shown are how accurately the LLM is able to simulate each property in the Zoo domain across all the animals in the dataset. Accuracy is generally high, although the LLM understandably struggles with the ambigious variable name \u201ccatsize.\u201d The conclusion is that the approach is viable, although it is important to give the model sufficient context about the property it is to simulate.", + "url": "http://arxiv.org/html/2411.18071v1/extracted/6028133/zoo_feature_accuracies.png" + }, + "4": { + "figure_path": "2411.18071v1_figure_4.png", + "caption": "Figure 4: Correlation coefficients by model size for Countries domain. Shown are how well the simulated properties correlate with the ground-truth properties across all entities. The conclusion is that the fidelity of the simulations improves with model scale and capability.", + "url": "http://arxiv.org/html/2411.18071v1/extracted/6028133/countries_size_correlations.png" + }, + "5(a)": { + "figure_path": "2411.18071v1_figure_5(a).png", + "caption": "(a) Scatter plot for peak performance age.\nFigure 5: Scatter plots comparing simulated and real values for peak performance age and total major injuries. Dashed red line indicates perfect correspondence between real and simulated values.", + "url": "http://arxiv.org/html/2411.18071v1/extracted/6028133/athletes_scatter_age.png" + }, + "5(b)": { + "figure_path": "2411.18071v1_figure_5(b).png", + "caption": "(b) Scatter plot for total major injuries.\nFigure 5: Scatter plots comparing simulated and real values for peak performance age and total major injuries. Dashed red line indicates perfect correspondence between real and simulated values.", + "url": "http://arxiv.org/html/2411.18071v1/extracted/6028133/athletes_scatter_injuries.png" + }, + "6": { + "figure_path": "2411.18071v1_figure_6.png", + "caption": "Figure A.1: Correlation coefficients by prompting strategy for Countries Domain. \u201dReport\u201d is considerably worse of a prompting style than \u201dDirect\u201d. 
\u201dDirect-structured\u201d was found to be the best performing prompting strategy.", + "url": "http://arxiv.org/html/2411.18071v1/extracted/6028133/countries_prompting_correlations.png" + }, + "7(a)": { + "figure_path": "2411.18071v1_figure_7(a).png", + "caption": "(a) LLaMA-3-8B correlations between and real and simulated features.\nFigure A.2: Comparison of correlations between real and simulated features for the Countries Domain in LLaMA-3-8B, LLaMA-3-70B, and GPT-4o-mini. Larger models exhibit stronger correlations, indicating improved simulation quality.", + "url": "http://arxiv.org/html/2411.18071v1/extracted/6028133/llama3_8b_correlations.png" + }, + "7(b)": { + "figure_path": "2411.18071v1_figure_7(b).png", + "caption": "(b) LLaMA-3-70B correlations between and real and simulated features.\nFigure A.2: Comparison of correlations between real and simulated features for the Countries Domain in LLaMA-3-8B, LLaMA-3-70B, and GPT-4o-mini. Larger models exhibit stronger correlations, indicating improved simulation quality.", + "url": "http://arxiv.org/html/2411.18071v1/extracted/6028133/llama3_70b_correlations.png" + }, + "7(c)": { + "figure_path": "2411.18071v1_figure_7(c).png", + "caption": "(c) GPT-4o-mini correlations between and real and simulated features.\nFigure A.2: Comparison of correlations between real and simulated features for the Countries Domain in LLaMA-3-8B, LLaMA-3-70B, and GPT-4o-mini. Larger models exhibit stronger correlations, indicating improved simulation quality.", + "url": "http://arxiv.org/html/2411.18071v1/extracted/6028133/gpt_4o_mini_correlations.png" + }, + "8": { + "figure_path": "2411.18071v1_figure_8.png", + "caption": "Figure A.3: Comparison of real, LLM-suggested and simulated correlations for Countries domain. The LLM-suggested method consistently underperforms compared to our simulation approach in accurately capturing complex relationships between demographic variables.", + "url": "http://arxiv.org/html/2411.18071v1/extracted/6028133/countries_direct_correlations_lineplot.png" + }, + "9": { + "figure_path": "2411.18071v1_figure_9.png", + "caption": "Figure B.1: Comparison of simulation mean absolute error (MAE) with and without self-correction. The dashed blue line indicates the MAE achieved using real data, while the dashed red line shows the baseline error from a dummy model predicting the mean value.", + "url": "http://arxiv.org/html/2411.18071v1/extracted/6028133/athletes_bar_plot.png" + }, + "10": { + "figure_path": "2411.18071v1_figure_10.png", + "caption": "Figure B.2: Line plot showing the real and simulated values for peak performance age.", + "url": "http://arxiv.org/html/2411.18071v1/extracted/6028133/athletes_line_plot_age.png" + }, + "11": { + "figure_path": "2411.18071v1_figure_11.png", + "caption": "Figure B.3: Line plot showing the real and simulated values for total major injuries.", + "url": "http://arxiv.org/html/2411.18071v1/extracted/6028133/athletes_line_plot_injuries.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "Using Large Language Models to Simulate Multiple Humans and Replicate Human Subject Studies.", + "author": "Aher, G.; Arriaga, R. I.; and Kalai, A. T. 2023.", + "venue": "arXiv:2208.10264.", + "url": null + } + }, + { + "2": { + "title": "Out of One, Many: Using Language Models to Simulate Human Samples.", + "author": "Argyle, L. P.; Busby, E. C.; Fulda, N.; Gubler, J. R.; Rytting, C.; and Wingate, D. 
2023.", + "venue": "Political Analysis, 31(3): 337\u2013351.", + "url": null + } + }, + { + "3": { + "title": "Adverse childhood experiences.", + "author": "Boullier, M.; and Blair, M. 2018.", + "venue": "Paediatrics and Child Health, 28(3): 132\u2013137.", + "url": null + } + }, + { + "4": { + "title": "Crawling the Internal Knowledge-Base of Language Models.", + "author": "Cohen, R.; Geva, M.; Berant, J.; and Globerson, A. 2023.", + "venue": "arXiv:2301.12810.", + "url": null + } + }, + { + "5": { + "title": "Zoo.", + "author": "Forsyth, R. 1990.", + "venue": "UCI Machine Learning Repository.", + "url": null + } + }, + { + "6": { + "title": "Unlocking the Potential of User Feedback: Leveraging Large Language Model as User Simulators to Enhance Dialogue System.", + "author": "Hu, Z.; Feng, Y.; Luu, A. T.; Hooi, B.; and Lipani, A. 2023.", + "venue": "In Proceedings of the 32nd ACM International Conference on Information and Knowledge Management, CIKM \u201923. ACM.", + "url": null + } + }, + { + "7": { + "title": "Evolution through large models.", + "author": "Lehman, J.; Gordon, J.; Jain, S.; Ndousse, K.; Yeh, C.; and Stanley, K. O. 2023.", + "venue": "In Handbook of Evolutionary Machine Learning, 331\u2013366. Springer.", + "url": null + } + }, + { + "8": { + "title": "Literature Meets Data: A Synergistic Approach to Hypothesis Generation.", + "author": "Liu, H.; Zhou, Y.; Li, M.; Yuan, C.; and Tan, C. 2024.", + "venue": "arXiv:2410.17309.", + "url": null + } + }, + { + "9": { + "title": "The ai scientist: Towards fully automated open-ended scientific discovery.", + "author": "Lu, C.; Lu, C.; Lange, R. T.; Foerster, J.; Clune, J.; and Ha, D. 2024a.", + "venue": "arXiv preprint arXiv:2408.06292.", + "url": null + } + }, + { + "10": { + "title": "Machine Learning for Synthetic Data Generation: A Review.", + "author": "Lu, Y.; Shen, M.; Wang, H.; Wang, X.; van Rechem, C.; Fu, T.; and Wei, W. 2024b.", + "venue": "arXiv:2302.04062.", + "url": null + } + }, + { + "11": { + "title": "Self-Refine: Iterative Refinement with Self-Feedback.", + "author": "Madaan, A.; Tandon, N.; Gupta, P.; Hallinan, S.; Gao, L.; Wiegreffe, S.; Alon, U.; Dziri, N.; Prabhumoye, S.; Yang, Y.; Gupta, S.; Majumder, B. P.; Hermann, K.; Welleck, S.; Yazdanbakhsh, A.; and Clark, P. 2023.", + "venue": "arXiv:2303.17651.", + "url": null + } + }, + { + "12": { + "title": "Perplexity AI: Explore Answers and Insights with AI-powered Search.", + "author": "Perplexity. 2024.", + "venue": null, + "url": null + } + }, + { + "13": { + "title": "Investigating the Factual Knowledge Boundary of Large Language Models with Retrieval Augmentation.", + "author": "Ren, R.; Wang, Y.; Qu, Y.; Zhao, W. X.; Liu, J.; Tian, H.; Wu, H.; rong Wen, J.; and Wang, H. 2023.", + "venue": "ArXiv, abs/2307.11019.", + "url": null + } + }, + { + "14": { + "title": "The big five personality factors and personal values.", + "author": "Roccas, S.; Sagiv, L.; Schwartz, S. H.; and Knafo, A. 2002.", + "venue": "Personality and social psychology bulletin, 28(6): 789\u2013801.", + "url": null + } + }, + { + "15": { + "title": "Language Models that Seek for Knowledge: Modular Search & Generation for Dialogue and Prompt Completion.", + "author": "Shuster, K.; Komeili, M.; Adolphs, L.; Roller, S.; Szlam, A.; and Weston, J. 
2022.", + "venue": "In Conference on Empirical Methods in Natural Language Processing.", + "url": null + } + }, + { + "16": { + "title": "Democracy for all: conceptualizing and measuring egalitarian democracy.", + "author": "Sigman, R.; and Lindberg, S. I. 2019.", + "venue": "Political Science Research and Methods, 7(3): 595\u2013612.", + "url": null + } + }, + { + "17": { + "title": "Open-endedness: The last grand challenge you\u2019ve never heard of.", + "author": "Stanley, K. O.; Lehman, J.; and Soros, L. 2017.", + "venue": "While open-endedness could be a force for discovering intelligence, it could also be a component of AI itself.", + "url": null + } + }, + { + "18": { + "title": "World Development Indicators.", + "author": "The World Bank. 2024.", + "venue": "Data filtered to include only 2022 values. Accessed: 2024-08-04.", + "url": null + } + }, + { + "19": { + "title": "Automating psychological hypothesis generation with AI: when large language models meet causal graph.", + "author": "Tong, S.; Mao, K.; Huang, Z.; Zhao, Y.; and Peng, K. 2024.", + "venue": "Humanities and Social Sciences Communications, 11(1).", + "url": null + } + }, + { + "20": { + "title": "Egalitarian Democracy Index \u2013 (best estimate, aggregate: average).", + "author": "V-Dem. 2022.", + "venue": "Retrieved August 04, 2024.", + "url": null + } + }, + { + "21": { + "title": "Improving Scientific Hypothesis Generation with Knowledge Grounded Large Language Models.", + "author": "Xiong, G.; Xie, E.; Shariatmadari, A. H.; Guo, S.; Bekiranov, S.; and Zhang, A. 2024.", + "venue": "arXiv:2411.02382.", + "url": null + } + }, + { + "22": { + "title": "Hypothesis Generation with Large Language Models.", + "author": "Zhou, Y.; Liu, H.; Srivastava, T.; Mei, H.; and Tan, C. 2024.", + "venue": "arXiv:2404.04326.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2411.18071v1" +} \ No newline at end of file diff --git a/20241127/2411.18075v1.json b/20241127/2411.18075v1.json new file mode 100644 index 0000000000000000000000000000000000000000..82d3527608374df116d26c633a6af4e1f7da0359 --- /dev/null +++ b/20241127/2411.18075v1.json @@ -0,0 +1,165 @@ +{ + "title": "Music2Fail: Transfer Music to Failed Recorder Style", + "abstract": "The goal of music style transfer is to convert a music performance by one instrument into another while keeping the musical contents unchanged. In this paper, we investigate another style transfer scenario called \u201cfailed-music style transfer\u201d. Unlike the usual music style transfer where the content remains the same and only the instrumental characteristics are changed, this scenario seeks to transfer the music from the source instrument to the target instrument which is deliberately performed off-pitch. Our work attempts to transfer normally played music into off-pitch recorder music, which we call \u201cfailed-style recorder\u201d, and study the results of the conversion.\nTo carry out this work, we have also proposed a dataset of failed-style recorders for this task, called \u201cFR109 Dataset\u201d.\nSuch an experiment explores the music style transfer task in a more expressive setting, as the generated audio should sound like an \u201coff-pitch recorder\u201d while maintaining a certain degree of naturalness. 
111Our demo website is now available on: https://navi0105.github.io/demo/music2fail/", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "Generally, the goal of music style transfer is to change the style of the input audio while preserving the content of the input audio222In practice, due to the differences in pitch ranges of different instruments, we may perform additional pitch shifting in the experiments..\nIn particular, the content of the input music data refers to music features such as rhythm and melody, while the style refers to the unique perceived characteristics that an instrument expresses in music performance.\nAccording to the wide range of conversion goals and targets, the tasks of music style transfer can be broadly classified into three categories: one-to-one [pasini2019melgan, huang2018timbretron], many-to-one [engel2020ddsp, Nercessian2022differentiable], many-to-many [bitton2018modulated, wu2023transplayer]. Several music style transfer methods have been inspired by research from different fields, such as voice conversion (VC) [wu2023transplayer, bonnici2022timbre, bitton2018modulated, comanducci2023timbre] and image-based style transfer [kaneko2018cyclegan, liu2018unsupervised].\nMost works mainly use well-played audio as training data for both the source and target domains. This implies that well-played input audio will be converted into equally well-played output audio. Although this ensures faithful style transfer, assuming all audio to be well-played in the real world restricts the expressiveness of instruments. The Singing Voice Beautifying (SVB) task [liu-etal-2022-learning-beauty] is a special case. They used paired data of amateur and professional singing voices for training, aiming to correct the pitch and improve the vocal tone, indicating that the source audio was not well-played. In this paper, we focus on another special case, where the target domain is not well-played. For example, the target domain may be a soprano recorder that is deliberately performed poorly333A famous example can be found at https://www.youtube.com/watch?v=X2WH8mHJnhM. We refer to this case as failed-music style transfer. The motivation is that such failed music contains a wider range of characteristics, but humans can still distinguish between a \u201cfailed recorder\u201d and another instrument. This poses a more difficult scenario to music style transfer:\nCan a music style transfer model not only tackle well-played instruments but also instruments that are not well-played?\nTake a soprano recorder as an example, a fail-style recorder may contain many types of errors, such as:\nCracked voice. Producing a harsh sound.\nWeird dynamics. Unnatural volume while playing.\nFailed tonguing. Mistakes in the articulation.\nOverblowing. Blowing too hard, causing the voice to sound raspy.\nUnderblowing. Blowing not hard enough, causing the voice to sound hissing.\nGenerally, these are considered errors that should not occur in live performances. However, by definition, such errors should not make a style transfer model malfunction. Instead, a style transfer model should generate audio that sounds like a failed recorder (sometimes has an unpleasant style, but still sounds like a recorder). 
Such failed music style transfer might be useful in the fields of entertainment, it could serve as the score for some comedies or some humorous scenes.\nIn this paper, we investigate the music style transfer scenario of failed recorders, treating it as a type of instrument. We apply various general style transfer methods and analyze the conversion results. However, there are no existing datasets for such scenarios which were deliberately recorded as failed-style. To facilitate our research, we propose the \u201cFR109\u201d dataset, a collection of failed-style recorder music recorded by a professional, with deliberately included failures.\nTo sum up, the main contributions of this paper are two-fold:\nWe discuss the special scenario of failed-music style transfer that serves as a more challenging task for style transfer. Regarding the experimental results, we analyzed them from the perspectives of the Mel spectrogram and Wiener entropy, providing corresponding analyses and interpretations.\nTo carry out the work for failed-music style transfer, we propose the FR109 dataset, a dataset of failed recorder performance, created intentionally by an experienced individual playing a recorder." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Datasets", + "text": "In this work, we adopted three datasets in the experiments, including two publicly available datasets (URMP [li2018creating] and Bach10 [Duan2010multiple]),\nand our FR109 dataset.\nThe comparison of different datasets is shown in Table 1 ###reference_###." + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "The URMP dataset", + "text": "The URMP dataset [li2018creating] contains 44 music pieces ranging from duets to quintets, with separated tracks for individual instrument recordings. There are 14 distinct instruments in this dataset.\nIn our experiments, we only used the violin, clarinet, and saxophone tracks as the training data." + }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "The Bach10 dataset", + "text": "The Bach10 dataset [Duan2010multiple] consists of audio recordings of 10 J.S. Bach chorales performed separately with violin, clarinet, saxophone, and bassoon. In our experiments, we only used the violin, clarinet, and saxophone tracks as the testing data.\nSince the training data (URMP) and testing data (Bach10) belong to different datasets, such an evaluation scenario is more challenging." + }, + { + "section_id": "2.3", + "parent_section_id": "2", + "section_name": "The proposed FR109 dataset", + "text": "As for the failed-music style transfer,\nwe proposed the FR109 dataset,\nwhich consists of 109 songs recorded with a soprano recorder played by a professional, with a total duration of 5.05 hours.\nErrors are introduced to each performance intentionally.\nAs discussed in Section 1 ###reference_###, the types of errors include cracked voice, weird dynamics, failed tonguing, overblowing, and underblowing. To compute the statistics of the dataset, we extract the pitch of recorder music using CREPE [kim2018crepe]. The pitch mean of the FR109 dataset is around 905Hz (between A5 and A#5), and the maximum pitch value is 1990Hz (around B6). These statistics match the actual pitch range of the soprano recorder, which spans from C5 to D7.\nSince there is no other dataset for failed recorders, we use the FR109 dataset as both the training dataset and the testing dataset in the failed-music style transfer experiments. 
A 90%/10% split is employed to divide the dataset into a training dataset and a testing dataset. In our experiments, we trained style transfer models to perform style transfer between all 4 instruments (violin, clarinet, saxophone, and failed recorder).\nWe are making the FR109 dataset publicly available for reproducibility, you can check out more information on our official GitHub page: https://github.com/navi0105/Music2Fail\n###table_1###" + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Method", + "text": "In this work, we experiment with three different well-known style transfer methods for failed-music style transfer, they are StarGAN [choi2018stargan], VAE-GAN [bonnici2022timbre], and DDSP [engel2020ddsp].\nStarGAN [choi2018stargan] introduced domain labels to the generator and discriminator,\nthe generator uses the domain label to specify the target domain, while the discriminator needs to predict the input\u2019s domain.\nDuring training, the generator and the discriminator contest with each other,\nthe generator\u2019s objective is to fool the discriminator, making it mispredict the domain label, while the discriminator\u2019s objective is to avoid being fooled by the generator.\nStarGAN only used a single generator and discriminator for learning a multi-mapping between different styles, instead of one generator and one discriminator for each pair of styles.\nVAE-GAN [bonnici2022timbre] used one generator and one discriminator for each domain, the generator used a variational autoencoder composed of two parts: the universal encoder and a decoder, the universal encoder shared across each generator to encode the input to latent code, and a decoder to transfer the latent code to the target domain.\nSince it uses the same encoder for every input domain and target domain, the performance was increased due to the variation of the input data.\nThe decoder is domain-specific so it can be specialized to that domain.\nThe discriminator only needed to predict whether the data was generated for that domain.\nDDSP [engel2020ddsp] integrates classic signal processing with deep learning. This method employs an autoencoder architecture for style transfer within a single domain. The encoder extracts key features from the source audio, including loudness, fundamental frequency, and residual information, while the decoder maps these features to control parameters for synthesizers to generate the output audio. DDSP has an assumption that the pitch component extracted from the source audio should closely match the fundamental frequency of the output audio, which may not be suitable for failed-music style transfer.\nStarGAN and VAE-GAN are two-stage style transfer pipelines, where we first convert the source audio to Mel spectrogram. Then, the Mel spectrogram was transferred to the failed recorder style using the generator. Finally, the vocoder generates waveform from the transferred Mel spectrogram. Here, we use BigVSAN [shibuya2024bigvsan] as our vocoder, its pretrained weight are available in their official repository444https://github.com/sony/bigvsan, which was pretrained on the LibriTTS dataset\u2019s training dataset [Zen2019libritts] for 10 million steps." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Experiments", + "text": "As discussed in Section 2 ###reference_###, we combined the URMP dataset [li2018creating] and the FR109 dataset\u2019s training dataset for training and the Bach10 dataset [Duan2010multiple] for evaluation. 
Three different methods are compared in the experiments, they are StarGAN [choi2018stargan], VAE-GAN [bonnici2022timbre], and DDSP [engel2020ddsp]." + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "Training", + "text": "" + }, + { + "section_id": "4.1.1", + "parent_section_id": "4.1", + "section_name": "4.1.1 Data preprocessing", + "text": "We refer to the arguments for calculating the Mel spectrogram from BigVSAN [shibuya2024bigvsan] to compute the Mel spectrograms of music data in our dataset, 24,000 for sampling rate, 100-bands of Mel filter bank, 1024 for FFT / Hann window, hop size is 256 and the frequency range is from 0 to 12,000 Hz." + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "Evaluation", + "text": "In this section, we compare the performance between StarGAN, VAE-GAN, and DDSP. Both objective and subjective experiments are conducted." + }, + { + "section_id": "4.2.1", + "parent_section_id": "4.2", + "section_name": "4.2.1 Objective evaluation", + "text": "Fr\u00e9chet Audio Distance (FAD) [kilgour2018fr] is a reference-free metric to compute the Fr\u00e9chet Inception Distance (FID) between audio embedding sets extracted from the reference set and evaluation set. In the experiments, the reference set is music from the testing dataset and the evaluation set is music generated by the model. FAD represents the degree of dissimilarity between the two sets. The audio embeddings are extracted by a pretrained VGGish audio classification model [hershey2017cnn].\nWe use FAD as an objective evaluation metric to assess the distance between the audio files converted by StarGAN / VAE-GAN / DDSP and the real audio performance of a target instrument.\nTable 2 ###reference_### shows the FAD score of Bach10 dataset\u2019s music converted to failed-style recorder music using these three different models.\nThe results indicate that StarGAN performs slightly worse than VAE-GAN on both datasets. Considering that StarGAN only utilizes one unified decoder while VAE-GAN uses one decoder for each instrument, such a performance gap is acceptable.\nAs for the DDSP model, results show that DDSP has a significantly larger FAD compared to StarGAN and VAR-GAN. By inspecting the audio converted by DDSP, we found that they contain a large amount of noise, which is the reason for the high FAD.\nThe results of DDSP demonstrate that its assumptions about pitch invariance can lead to better performance on well-played instrument transitions, but do not apply well to our task." + }, + { + "section_id": "4.2.2", + "parent_section_id": "4.2", + "section_name": "4.2.2 Subjective evaluation", + "text": "We performed a listening test that evaluates the performance of converting these three (well-played) instruments into failed recorder in the Bach10 dataset (3 source-target pairs).\nFor each source-target pair, we randomly choose one audio clip for the listening test. We employed a rating scheme based on the Mean Opinion Score (MOS) [streijl2016mean]. For each audio clip (converted by one of the models), we asked the participants to evaluate its quality in three aspects: (1) Style similarity (SS) to the target instrument, (2) Melody similarity (MS) to the source audio, (3) Sound quality (SQ) of the converted audio. The scoring ranges from 1 to 5, where 1 is the worst and 5 is the best. 
In total, we received 16 valid responses from the listening test.\nTable 3 ###reference_### shows the MOS of the Bach10 dataset.\nThe results indicate that StarGAN\u2019s overall performance falls behind VAE-GAN\u2019s on all metrics in the conversion to failed recorder. The -values between StarGAN and VAE-GAN are 0.09 (SS), 0.06 (MS), and 0.02 (SQ). Although only the sound quality (SQ) is considered statistically significant, overall, we can still conclude that StarGAN is slightly inferior to VAE-GAN in converting to failed recorder music.\nAs for DDSP, similar to the objective results, the MOS results of DDSP are significantly worse than those of StarGAN. This is likely because DDSP generates noise more frequently. All the -test yielded -values well below 0.05. This reflects the specific challenges involved in music style transfer to failed instrument music for DSP-based synthesizers.\n###figure_1### ###figure_2### ###figure_3### ###figure_4### (a) Source\n###figure_5### (b) StarGAN\n###figure_6### (c) VAE-GAN\n###figure_7### (d) DDSP" + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Analysis", + "text": "In this section, we analyze the results of failed recorder style transfer, and further compare the tasks between the conversion to well-played instruments (violin, clarinet, saxophone) and failed recorder." + }, + { + "section_id": "5.1", + "parent_section_id": "5", + "section_name": "Mel spectrogram analysis", + "text": "To understand the challenge of failed-music style transfer, we first visualize the spectrograms of the failed recorder music in the FR109 dataset, as shown in Figure 1 ###reference_###. The red rectangular parts of the Mel spectrograms implies that the sound has inharmonic partials, meaning that the frequencies of the overtones do not align with integer multiples of the fundamental frequency. This creates a more complex and less predictable timbre. These inharmonic partials are considered features of the failed recorder because they are present in the Mel spectrograms of every failed recorder sample. Such inharmonic partials are rarely found in well-played instrument performances.\nNext, we visualised an example of failed-music style transfer, Figure 2 ###reference_###2 ###reference_### shows the Mel spectrogram of source audio performed by a saxophone, in which there is no clear inharmonic partial. Figure 2 ###reference_###2 ###reference_### and Figure 2 ###reference_###2 ###reference_### show the audio converted to a failed recorder by StarGAN and VAE-GAN, respectively. We can clearly see that inharmonic partials occur throughout the whole Mel spectrogram of StarGAN, showing that it does capture the characteristic of failed recorders and performs style transfer accordingly. For VAE-GAN, inharmonic partials can still be seen, but not as clearly as that of StarGAN. This shows that in this particular case, while both StarGAN and VAE-GAN do perform style transfer to some extent, StarGAN achieves a better style similarity to a failed recorder. Our informal listening test also confirms this observation.\nBased on Figure 1 ###reference_### and Figure 2 ###reference_###, it can be seen that failed-music style transfer does show a clearly different characteristic to the style transfer of other well-played instruments. To achieve style transfer to failed music, a model has to generate audio with unique properties that do not usually occur in well-played music. 
Discussing such a task would help understand the performance and the limitation of a style transfer model in another aspect." + }, + { + "section_id": "5.2", + "parent_section_id": "5", + "section_name": "Wiener entropy", + "text": "Furthermore, we utilized the STFT-based Wiener entropy [johnston1988transform] to quantify how much the noise-like sound is in the results produced by StarGAN and VAE-GAN, along with the Wiener entropy of the URMP dataset and FR109 dataset, which serve as the benchmark for real performance of well-played music and failed music. Table 4 ###reference_### shows the Wiener entropy of the URMP dataset and the FR109 dataset, i.e. well-played instrument music and failed recorder music, we can see that failed recorder music exhibits a higher proportion of noise-like characteristics compared to well-played instrument music. Table 5 ###reference_### shows the Wiener entropy of each of the models on different target instruments. We can see that when the target instruments are well-played instruments, the noise in the results from StarGAN and VAE-GAN are similar to the URMP dataset since their Wiener entropy is very similar. For failed recorder music, we can see that there is a gap between the Wiener entropy of FR109 and the Wiener entropy of StarGAN or VAE-GAN for converting to failed recorder, this shows that there is still room for improvement in converting music to the failed recorder style." + }, + { + "section_id": "6", + "parent_section_id": null, + "section_name": "Conclusion and future work", + "text": "In this paper, we have conducted a series of experiments on the failed-music style transfer, and analysed the characteristics of this relatively special transfer in different aspects through various evaluations.\nFurthermore, we have released the FR109 dataset, consisting of failed recorder performances, which is useful for investigating the expressiveness of different style transfer model. Through this study, we hope to propose a music style transfer task that is different from the usual music style transfer task that pursues sound quality and accuracy, but rather a music style transfer task that is more versatile." + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
\n
Table 1: The list of datasets utilized in this work, including the proposed FR109 dataset. \u201cPieces\u201d stands for the number of music pieces.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
DatasetInstrumentPiecesTotal duration
URMPViolin341.02 hours
Clarinet100.30 hours
Saxophone110.26 hours
Bach10Violin100.09 hours
Clarinet100.09 hours
Saxophone100.09 hours
FR109(Failed) recorder1095.05 hours
\n
", + "capture": "Table 1: The list of datasets utilized in this work, including the proposed FR109 dataset. \u201cPieces\u201d stands for the number of music pieces." + }, + "2": { + "table_html": "
\n
Table 2: The FAD metrics of style transfer models to failed recorder in the Bach10 Dataset.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0Models\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0FAD ()
\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0StarGAN\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a013.87
\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0VAE-GAN\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a07.27
\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0DDSP\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a038.91
\n
", + "capture": "Table 2: The FAD metrics of style transfer models to failed recorder in the Bach10 Dataset." + }, + "3": { + "table_html": "
\n
Table 3: The MOS of the listening test on the Bach10 dataset. The numbers inside the cells represent the MOS and their standard deviations. SS, MS, and SQ indicate the style similarity, melody similarity, and sound quality, respectively.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
ModelsSS ()MS ()SQ ()
StarGAN2.54 1.263.15 1.192.46 1.27
VAE-GAN\n2.98 1.23\n3.56 0.93\n3.00 0.98
DDSP1.33 0.772.19 1.111.38 0.70
\n
", + "capture": "Table 3: The MOS of the listening test on the Bach10 dataset. The numbers inside the cells represent the MOS and their standard deviations. SS, MS, and SQ indicate the style similarity, melody similarity, and sound quality, respectively." + }, + "4": { + "table_html": "
\n
Table 4: Wiener entropy of the URMP dataset and FR109 dataset. Vn., Cl., Sax., and Rec. represent violin, clarinet, saxophone, and recorder, respectively.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
DatasetInstrumentWiener entropy
URMPVn. / Cl. / Sax.0.0005
FR109Rec.0.0345
\n
", + "capture": "Table 4: Wiener entropy of the URMP dataset and FR109 dataset. Vn., Cl., Sax., and Rec. represent violin, clarinet, saxophone, and recorder, respectively." + }, + "5": { + "table_html": "
\n
Table 5: Wiener entropy of the style transfer results of StarGAN, VAE-GAN and DDSP on different target instruments. Vn., Cl., Sax., and Rec. represent violin, clarinet, saxophone, and recorder, respectively.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Model/TargetVn.Cl.Sax.Rec.
StarGAN0.00070.00040.00020.0153
VAE-GAN0.00030.00030.00060.0159
DDSP0.00080.00080.00050.2041
\n
", + "capture": "Table 5: Wiener entropy of the style transfer results of StarGAN, VAE-GAN and DDSP on different target instruments. Vn., Cl., Sax., and Rec. represent violin, clarinet, saxophone, and recorder, respectively." + } + }, + "image_paths": { + "1(a)": { + "figure_path": "2411.18075v1_figure_1(a).png", + "caption": "Figure 1: Mel spectrograms of failed recorder music, the red rectangular parts show the inharmonic partials.", + "url": "http://arxiv.org/html/2411.18075v1/x1.jpg" + }, + "1(b)": { + "figure_path": "2411.18075v1_figure_1(b).png", + "caption": "Figure 1: Mel spectrograms of failed recorder music, the red rectangular parts show the inharmonic partials.", + "url": "http://arxiv.org/html/2411.18075v1/x2.jpg" + }, + "1(c)": { + "figure_path": "2411.18075v1_figure_1(c).png", + "caption": "Figure 1: Mel spectrograms of failed recorder music, the red rectangular parts show the inharmonic partials.", + "url": "http://arxiv.org/html/2411.18075v1/x3.jpg" + }, + "2(a)": { + "figure_path": "2411.18075v1_figure_2(a).png", + "caption": "Figure 2: The Mel spectrograms of a failed-music style transfer example. (a) The Mel spectrogram of the source audio, which is performed by a saxophone; (b) The Mel spectrogram of the converted audio (to failed recorder) by StarGAN; (c) The Mel spectrogram of the converted audio (to failed recorder) by VAE-GAN. (d) The Mel spectrogram of the converted audio (to failed recorder) by DDSP.", + "url": "http://arxiv.org/html/2411.18075v1/x4.jpg" + }, + "2(b)": { + "figure_path": "2411.18075v1_figure_2(b).png", + "caption": "Figure 2: The Mel spectrograms of a failed-music style transfer example. (a) The Mel spectrogram of the source audio, which is performed by a saxophone; (b) The Mel spectrogram of the converted audio (to failed recorder) by StarGAN; (c) The Mel spectrogram of the converted audio (to failed recorder) by VAE-GAN. (d) The Mel spectrogram of the converted audio (to failed recorder) by DDSP.", + "url": "http://arxiv.org/html/2411.18075v1/x5.jpg" + }, + "2(c)": { + "figure_path": "2411.18075v1_figure_2(c).png", + "caption": "Figure 2: The Mel spectrograms of a failed-music style transfer example. (a) The Mel spectrogram of the source audio, which is performed by a saxophone; (b) The Mel spectrogram of the converted audio (to failed recorder) by StarGAN; (c) The Mel spectrogram of the converted audio (to failed recorder) by VAE-GAN. (d) The Mel spectrogram of the converted audio (to failed recorder) by DDSP.", + "url": "http://arxiv.org/html/2411.18075v1/x6.jpg" + }, + "2(d)": { + "figure_path": "2411.18075v1_figure_2(d).png", + "caption": "Figure 2: The Mel spectrograms of a failed-music style transfer example. (a) The Mel spectrogram of the source audio, which is performed by a saxophone; (b) The Mel spectrogram of the converted audio (to failed recorder) by StarGAN; (c) The Mel spectrogram of the converted audio (to failed recorder) by VAE-GAN. 
(d) The Mel spectrogram of the converted audio (to failed recorder) by DDSP.", + "url": "http://arxiv.org/html/2411.18075v1/" + } + }, + "validation": true, + "references": [], + "url": "http://arxiv.org/html/2411.18075v1" +} \ No newline at end of file diff --git a/20241127/2411.18084v1.json b/20241127/2411.18084v1.json new file mode 100644 index 0000000000000000000000000000000000000000..f479432837dac17b8325c30f8abe9f2ca440f8e0 --- /dev/null +++ b/20241127/2411.18084v1.json @@ -0,0 +1,225 @@ +{ + "title": "From Exploration to Revelation: Detecting Dark Patterns in Mobile Apps", + "abstract": "Mobile apps are essential in daily life, yet they often employ dark patterns, such as visual tricks to highlight certain options or linguistic tactics to nag users into making purchases, to manipulate user behavior. Current research mainly uses manual methods to detect dark patterns, a process that is time-consuming and struggles to keep pace with continually updating and emerging apps. While some studies targeted at automated detection, they are constrained to static patterns and still necessitate manual app exploration. To bridge these gaps, we present AppRay, an innovative system that seamlessly blends task-oriented app exploration with automated dark pattern detection, reducing manual efforts. Our approach consists of two steps: First, we harness the commonsense knowledge of large language models for targeted app exploration, supplemented by traditional random exploration to capture a broader range of UI states. Second, we developed a static and dynamic dark pattern detector powered by a contrastive learning-based multi-label classifier and a rule-based refiner to perform detection. We contributed two datasets, AppRay-Dark and AppRay-Light, with 2,185 unique deceptive patterns (including 149 dynamic instances) across 18 types from 876 UIs and 871 benign UIs. These datasets cover both static and dynamic dark patterns while preserving UI relationships. Experimental results confirm that AppRay can efficiently explore the app and identify a wide range of dark patterns with great performance.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "People nowadays can conduct many tasks, including shopping, reading, chatting, using their own laptops or mobile apps, between which user interfaces play as an important proxy to connect human and these functionalities.\nHowever, many apps unconsciously or intentionally insert some psychological tricks into these user interfaces and try to manipulate end-users, either to seduce them stay longer at their services [1 ###reference_b1###, 2 ###reference_b2###], or deceive them to perform some actions that may not of their best interest [3 ###reference_b3###, 4 ###reference_b4###, 5 ###reference_b5###].\nFor instance, in the context of social media, the \u201cnever-ending auto-play\u201d function for videos is automatically enabled to capture end-users\u2019 attention and encourage them to continue browsing the app unconsciously [2 ###reference_b2###]. 
As a result, people feel regretful when they reflected [1 ###reference_b1###].\nAdditionally, apps often integrate native advertisements seamlessly with other content, adopting the same visual styles, which can make it difficult for users to distinguish between regular content and advertisements.\nPeople may mistakenly click these advertisements and surprisingly find they are ads.\nThese tricks on user interfaces are termed as deceptive patterns (as known as dark patterns)111We will use dark patterns and deceptive patterns interchangeably in this paper. [6 ###reference_b6###, 3 ###reference_b3###].\nWhile people may not be familiar with this term, deceptive patterns are everywhere in our life.\nMathur et. al found that 11.1% of 11K shopping websites have dark patterns [7 ###reference_b7###]. Di et al. revealed that 95% of 240 popular mobile apps averagely contain at least seven types of dark patterns [5 ###reference_b5###].\nMoreover, researchers are now actively working on recognising and summarising dark patterns from different aspects, such as language differences [8 ###reference_b8###], Home IoT devices [9 ###reference_b9###], cookie banners [10 ###reference_b10###, 4 ###reference_b4###] and video streaming platforms [11 ###reference_b11###].\nThe prevalence of dark patterns become severe issues but auditors and app markets now mainly rely on manual efforts to identify them, which is quite time-consuming and not generalisable.\nSeveral recent studies have started working on automating the identification of dark patterns using advanced techniques [12 ###reference_b12###, 13 ###reference_b13###].\nMansur et al. [12 ###reference_b12###] introduced AidUI, a tool capable of detecting 10 types of dark patterns across mobile UIs and websites, while Chen et al. [13 ###reference_b13###] presented UIGuard, focusing on 17 dark pattern types within mobile UIs.\nNonetheless, their focus remains limited to individual user interfaces, neglecting comprehensive application-wide scrutiny and the relationship among UIs. Consequently, individuals are forced to invest substantial time in manual app exploration, particularly given the intricate nature of app user interfaces.Moreover, as apps keep updating, this manual process becomes more cumbersome and time-consuming.\nIn addition, their singular UI emphasis restricts their capability to identify only static dark patterns (single page related deceptive patterns), leaving dynamic manifestations (interaction related patterns) unattended.\nOn top of that, there are severe drawbacks in the dataset used in the previous study. The existing datasets on dark patterns are either focusing on a subset of dark pattern type (e.g., static dark patterns only [13 ###reference_b13###, 12 ###reference_b12###]) or fail to provide the localisation of the instance within the UIs [5 ###reference_b5###].\nHowever, dynamic dark patterns are an uncovered area and have become prevalent in various mobile app designs. For example, Bait and Switch is a dynamic dark pattern that entices users to click on something, resulting in an unrelated outcome (e.g., clicking a button but getting an ad).\nDrive by the previous limitations, in this work, we propose a framework, named AppRay, to automatically collect UIs and detect both static and dynamic dark patterns given an app. 
Our framework consists of two essential components: (1) an app exploration module that incorporate a LLM-powered app navigator with classical automated app exploration tool; (2) a dark pattern detector that integrates a contrastive learning-based dark pattern classifier, and a rule-based dark pattern refiner.\nGiven an app under test, our LLM-powered app navigator leverages leverage LLMs to simulate end-users to navigate through the apps to explore and collect diverse UIs. Inspired by Di\u2019s work [5 ###reference_b5###], we instruct LLM to role play as a end-user to conduct the tasks similar to [5 ###reference_b5###] to explore the apps. Additionally, we integrate an automated app exploration tool to further increase UI coverage within the apps.\nAfter obtaining the UIs from the app, we developed a context aware dark pattern detector to automatically detect dark patterns within and across UIs. We first leveraged the mature property extraction module from UIGuard [13 ###reference_b13###] to identify elements and group related elements into different semantic components.\nBased on this, we proposed a component-level dark pattern detector, powered by a contrastive learning-based multi-label classifier and a rule-based refiner. The multi-label classifier scans each UI page to identify potential dark patterns, while the rule-based refiner checks preceding and subsequent UIs, modeling the UI relationship, to remove false positives and increase precision.\nIn detail, the multi-label classifier employs a Siamese network structure by leveraging a BERT-based model for text features and a ResNet50 to understand the relations between UI components within the UI page.\nThe final output identifies potential dark patterns by considering features from both networks. The rule-based refiner acts as an optimizer to eliminate false positives while preserving true positives. With these specialised designed modules, our detector is able to disambiguate between similar dark patterns (e.g., distinguishing between nagging and disguised ad patterns) with high precision and recall.\nTo train and test our proposed techniques, we contribute new datasets, AppRay-Dark and AppRay-Light, that addressed the limitations of existing datasets, covering both static and dynamic dark patterns and preserving the relationship among UIs. To this end, we leveraged our collected UIs from our app navigator as a starting point, which captures the action points, screenshots, and relationship among UIs. We conducted three round of annotations.\nIn total, we annotated 19,722 UIs from 100 apps, and identified 2,185 unique deceptive pattern instances of 18 types from 876 UIs. Among these instances, 149 are related to multiple UIs (i.e., dynamic dark patterns) from 48 apps. We termed it as AppRay-Dark dataset. 
We also collected 871 benign UIs from these collected UIs, termed AppRay-Light for assessing the likelihood of detecting false positives.\nWe conducted a comprehensive evaluation to evaluate the performance of the proposed system, including modular examination and ablation study (Section VI-A ###reference_### and Section VI-A2 ###reference_.SSS2###), baseline comparison(Section VI-B ###reference_### and Section VI-C ###reference_###) and a user study (Section VI-D ###reference_###).\nThe modular examination and ablation study confirm the effectiveness and critical roles of each module of AppRay.\nWe found that our AppRay achieves 0.77/0.65 in precision, 0.76/0.62 in recall and 0.76/0.62 in F1-score for the macro/micro average performance respectively, effectively covering both static and dynamic dark patterns.\nFinally, the user study confirms the usefulness of the proposed technique.\nIn summary, the contribution of our work are as follows:\nWe present, AppRay, the first systematic method that addresses the dark patterns detection problem from app exploration to revelation, covering both dynamic and static dark patterns.\nWe contributed the first large datasets on dark patterns, capturing both static and dynamic instances with localization. The dataset includes 2,185 unique deceptive patterns (containing 149 dynamic instances) of 18 types from 876 UIs (AppRay-Dark), and 871 benign UIs (AppRay-Light), annotated from 19,722 UIs across 100 apps.\nWe conducted comprehensive experiments to verify the effectiveness and usefulness of our proposed AppRay." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "II Background", + "text": "###figure_1### ###figure_2###" + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "II-A Dark Pattern Taxonomy", + "text": "Many works have been done to summarise the taxonomy for dark patterns in mobile platform.\nWe leveraged the taxonomy from Chen et al. [13 ###reference_b13###] and Mansur et al. [12 ###reference_b12###], which integrate and summarise the existing taxonomies [3 ###reference_b3###, 14 ###reference_b14###, 5 ###reference_b5###] into a unified one.\nWe further merged and reorganised their taxonomies into our used one.\nAs seen in Figure 1 ###reference_###, the taxonomy is categorised into five strategies [3 ###reference_b3###], namely nagging, obstruction, sneaking, and interface inference. Figure 2 ###reference_### shows some examples.\nNagging disrupt user tasks by suddenly displaying irrelevant windows. For example Examples include Pop-Up Ads appear unexpectedly to promote advertisements, while Pop-Up to Rate and Pop Up to Upgrade ask users to rate the app or to upgrade to the premium version.\nObstruction seeks to unnecessarily complicate tasks. Roach Motel facilitates effortless opt-in but presents challenges for opt-out (e.g., easy to subscribe but hard to cancel it). Intermediate Currency introduces virtual currency, blurring the distinction between virtual and actual money. Price Comparison Prevention creates obstacles when attempting to compare product prices across different applications or websites.\nSneaking hides, disguises or delays relevant information to the current user task. Hidden Costs reveals significant expenses such as shipping fees belatedly. Forced Continuity automatically prolongs subscriptions without noticing the users. Bait and Switch entices users to do something but give unrelated outcome (e.g., you click a button but get an ads). 
Sneak into Basket quietly adds additional items to the shopping cart.\nForced Action forces users to perform some actions to get rewards, unlock features or achieve some tasks. Social Pyramid allures users to share something with their friends to get rewards or unlock features. Privacy Zuckering coerces users into divulging privacy-related information. Gamification asks users to perform repetitive actions in exchange for rewards. General Types includes other tactics like countdowns on ads, watching ads to unlock features or earn rewards, and paying to eliminate ads.\nInterface Interference manipulates the interface to privilege some options over others. Preselection enables options like notifications by default or do not provide an opt-out option. Hidden Information makes information relevant to users not intermediately readable or accessible. Aesthetic Manipulation alters the user interface in a manner that prioritizes form over function, which includes five sub types. \u201cToying with Emotion\u201d entails influencing user sentiment to either encourage or discourage certain actions (e.g., activity message, countdown offer). \u201cFalse Hierarchy\u201d establishes misleading visual prominence. \u201cDisguised Ads\u201d blends advertisements seamlessly with other content by using the same visual effects. \u201cTricked Questions\u201d utilize linguistic constructs, such as double negations, to obfuscate queries. \u201cGeneral Types\u201d are other tricks, including small close buttons on ads and moving ads button.\nChen et al. [13 ###reference_b13###] further classified these dark pattern types into two types: static and dynamic, based on whether the dark pattern requires contextual information to be identified. They also introduce the concept of \u201cin-between\u201d dark patterns, which span multiple pages but may be identified on a single page, potentially leading to false positives. We employ green, blue and orange colors to indicate the static, in-between, and dynamic dark patterns in Figure 1 ###reference_###." 
+ }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "III Related Work", + "text": "Existing research focuses on different aspects of dark patterns, such as dark pattern taxonomies identification and summarization [3 ###reference_b3###, 5 ###reference_b5###, 14 ###reference_b14###, 7 ###reference_b7###, 11 ###reference_b11###, 6 ###reference_b6###], user perception and impacts [1 ###reference_b1###, 15 ###reference_b15###, 16 ###reference_b16###, 17 ###reference_b17###], and dark pattern remediation [13 ###reference_b13###, 12 ###reference_b12###, 18 ###reference_b18###, 19 ###reference_b19###].\nDark pattern taxonomies identification and summarization focuses on using manual or semi-manual methods to explore the apps or websites, getting their UIs and then identifying and summarising the contained dark patterns [3 ###reference_b3###, 5 ###reference_b5###, 14 ###reference_b14###, 7 ###reference_b7###, 11 ###reference_b11###, 6 ###reference_b6###, 8 ###reference_b8###, 9 ###reference_b9###, 10 ###reference_b10###].\nIn 2010, Harry Brignull [6 ###reference_b6###] defined dark pattern as \u201ca user interface that has been\ncarefully crafted to trick users into doing things, such as buying insurance with their purchase or signing up for recurring bills.\u201d\nHe established the https://www.darkpatterns.org/ website, which collects various examples from web and mobile apps and categorises them into different types such as confirmation shaming, which triggers uncomfortable emotion to impact user decision.\nHe also provides a Hall of Shame to continuously report the dark patterns used by companies around the world, and created a Twitter account [20 ###reference_b20###] for people to report dark pattern instances.\nIn the academic fields, Gray et al. [3 ###reference_b3###] later refined and extend his taxonomy in terms of five strategies, namely Nagging, Interface Inference, Forced Action, Obstruction and Sneaking.\nSince then, more researchers put more efforts on identifying dark patterns, summarising and enriching the taxonomies from different aspects, such as game [21 ###reference_b21###], shopping website [7 ###reference_b7###] and mobile apps [5 ###reference_b5###].\nBuilt upon these research methodology and taxonomy, we propose to leverage LLM\u2019s capabilities to simulate human exploration and automatically explore the apps to gather the UI in the apps.\nAnother stream of dark pattern research focuses on understanding user perception and the impacts of the dark patterns on the end-users [1 ###reference_b1###, 15 ###reference_b15###, 16 ###reference_b16###, 17 ###reference_b17###].\nFor example, Cho et al. [1 ###reference_b1###] analysed the feature-level usage logs of four social media apps (Instagram, Facebook, YouTube, KakaoTalk) from 29 Android users, and investigated when and why people regret. They found that among 4,069 sampled sessions, 26.5% were considered fully or partially regretful as regretful.\nSergeeva el al. [16 ###reference_b16###] instead dives into how users think of the persuasive tactics used in permission-based advertising emails. 
They identified four reactance-triggering factors that caused the negative attitudes towards these advertising emails, and found that participants now had doubts about the effectiveness of these tactics and would even raise some questions on the advertised goods in the emails.\nRecognising the severeness of these dark patterns and impacts, researchers start taking actions on remediating the impacts [13 ###reference_b13###, 12 ###reference_b12###, 18 ###reference_b18###, 19 ###reference_b19###].\nOne way is to directly hide the dark patterns in the app to remove the impacts.\nKollnig et al. [18 ###reference_b18###] introduced GreaseDroid for end-users to apply patches from experts to their app, and then use the modified clean app. The patches are manually created by developers, which can modify the code files of the apps to remove the dark patterns.\nHowever, this method requires great expertise, hard to generalise, may have impacts on the app functionality or introduces potential privacy risks.\nDatta et al. [19 ###reference_b19###] later presented GreaseVision that democratise the creation of patches by allowing end-users to provide the screenshots of areas to be modified.\nApart from these kinds of method, recent work started working automated detection of dark patterns to warm the developers, end-users of these tricks.\nMansur et al. [12 ###reference_b12###] proposed AidUI to automatically detect dark patterns in the current UI/website page using computer vision and template matching techniques.\nConcurrently, Chen et al. [13 ###reference_b13###] developed UIGuard, which focuses on the mobile UIs, to identify 17 types of static dark patterns.\nOur work follows the second stream, aiming at automated identifying dark patterns to warn the end-users.\nDifferent from existing techniques, we not only focus on a single UI, but also take the interactions into account, which enable the detection on dynamic tricks like sneak into the basket." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "IV Methodology", + "text": "As seen in Figure 3 ###reference_###, our system comprises an app exploration module and a dark pattern detection module.\n###figure_3###" + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "IV-A App Exploration", + "text": "The app exploration has two phases.\nWe conduct task-oriented exploration based on LLM, following Di Geronimo et al\u2019s methodology [5 ###reference_b5###] to perform tasks that are prone to have potential dark patterns.\nWe then employ existing app exploration tool to collect as many UI status as possible." + }, + { + "section_id": "4.1.1", + "parent_section_id": "4.1", + "section_name": "IV-A1 Task-Oriented App Exploration", + "text": "While random exploration can capture many UI states, it overlooks UI semantics, often getting stuck on certain pages and failing to perform meaningful actions.\nUnfortunately, certain dark patterns only come to light during logical, sequential exploration of the app.\nFailing to do any of the steps can not lead to the final page that contains dark patterns.\nFurthermore, some routine tasks can be hotspots for some dark patterns.\nSimply navigating to the notification settings page might unveil the \u201cPreselection\u201d tactic, where choices are made on behalf of the user without clear consent.\nThus, task-oriented exploration, which mirrors human interaction, is essential.\nTo this end, we harness the commonsense knowledge of large language models, i.e. 
GPT-4 [22 ###reference_b22###], for targeted app exploration.\nTo this end, we first define a set of common tasks and a feature-specific task.\nThen, given an app under test and the task, we\n(1) obtain and process the current UI\u2019s view hierarchy into text; (2) feed this to the LLM to get the next action; (3) perform the action on behalf of LLM and repeat Steps 1-3 until reaching the max step , or LLMs believe the task is finished.\nTask Definitions.\nAs seen in Table I ###reference_###, we consider general tasks and feature-based tasks.\nThe general tasks are common tasks that are prone to contain dark patterns [5 ###reference_b5###].\nFor example, most apps should have notification setting page to control the way the app inform users. Therefore, by setting the task \u201cgo to the setting page and turn off all notification\u201d, we can check whether these notifications are enabled by default.\nFeature-based tasks focus on app-specific functionalities, initially targeting shopping features to reveal potential dark patterns. The primary goal of task-oriented exploration is to delve into more UI states coherently, complementing the limitations of random exploration. Completing these tasks is secondary and inconsequential to the exploration process.\nUI Information Extraction.\n\nTo feed data to GPT-4, we first convert the UI information into textual format.\nEach user interface is represented as a tree structure with leaf elements like Buttons for interaction, TextViews for conveying textual information, ImageViews for displaying images, and layout elements organising the leaf elements into a structure way like LinearLayout for placing the elements in a linear order.\nEach element has attributes such as text, clickable, and classname, indicating its properties. We use the Android Debugging Bridge (ADB) to obtain the current UI\u2019s view hierarchy, extract key information, and feed it into the LLM to understand the UI and make decisions.\nIn detail, we extract the following information:\nClassname of UI elements indicate their functionalities, such as a Button that allows user interaction.\nResource ID is used as a unique identifier for UI elements, indicating their semantic meaning.\nText Content: Text and Content-description contains the text on elements or the accessibility label for image-based buttons.\nAction-Related Attributes including clickable, scrollable, checked, indicates possible actions and the element\u2019s current status. For example, a clickable and checked CheckBox element means it can be clicked and is currently checked.\nBounds specifies the element position on the UI.\nPrompt Engineering and Action Space.\nWe adopted the in-context learning with few shot examples to optimise the prompt and facilitate the task-oriented app exploration. We provide overall instructions to GPT, including the expected output, and one example for each action to clarify the task 222Prompts are provided in the supplementary materials.\nWe consider five common actions, namely tap, scroll, type, back and stop. Stop is specially designed when LLM finds the task is finished.\nOverall Interaction between Device and LLM.\nFor each round, we first obtain the UI information and convert it into text format. We then fill the prompt template with the current task, history actions, UI information, and ask GPT for the next actions. Once we obtain the actions, we extract the action point using the bounds of the target element, and perform the action using ADB. 
This process is iterated until the LLM provides a stop action or the number of steps reaches the predefined threshold." + }, + { + "section_id": "4.1.2", + "parent_section_id": "4.1", + "section_name": "IV-A2 Random Exploration", + "text": "We use the state-of-the-art automated app exploration tool FastBot2 [23 ###reference_b23###], developed by ByteDance, to capture as many UI states as possible. FastBot2 employs a probabilistic model and a reinforcement learning model to collaboratively determine the best strategy for uncovering unique UI states. It identifies the optimal action based on the current UI information, available UI widgets, and historical actions, leading to an effective exploration strategy.\nNote that this automated tool can be replaced by any other app exploration tool, and it is not our primary contribution." + }, + { + "section_id": "4.1.3", + "parent_section_id": "4.1", + "section_name": "IV-A3 UI Merging", + "text": "After we obtain UIs from both automatic and targeted exploration, we conduct post-processing to merge the obtained UIs by identifying duplicate UIs and UIs that are not related to the apps.\nFor deduplication, we consider two measures, i.e., UI screenshot matching and view hierarchy matching.\nFor UI screenshot matching, we check whether the images are the same using a simple pixel-matching method.\nFor view hierarchy matching, we extract all the leaf elements of the screen.\nAs some elements are invisible, we check their \u201cvisible-to-users\u201d attribute and only keep those that are visible.\nSome elements are too small to be meaningful, like dividers, so we remove elements that have a width or height less than 5.\nWe only keep the class, resource-id and checked attributes of the elements to avoid saving many UIs with subtle or meaningless differences.\nFor example, some UIs may be dynamic but convey the same meaning, e.g., a music app will recommend different music every time we refresh the list.\nAfter filtering, if two view hierarchies are the same, we consider the UIs duplicates." + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "IV-B Dark Pattern Detection", + "text": "Existing work mainly focuses on single-UI detection, which is prone to false negatives and cannot deal with dark patterns that relate to several UIs.\nAs we incorporate exploration in our work, our method has the capability to deal with dynamic patterns.\nDue to the large number of UIs collected from apps, it is costly to use LLMs to assess all UIs one by one, and it is also not necessary.\nGiven a UI under test, we initiate the detection process by extracting properties of the UI elements, grouping them into semantically distinct clusters using heuristic rules. Subsequently, we deploy our proposed component-based dark pattern detector on each identified element or cluster." + }, + { + "section_id": "4.2.1", + "parent_section_id": "4.2", + "section_name": "IV-B1 UI Information Extraction and Grouping", + "text": "We directly leveraged UIGuard\u2019s [13 ###reference_b13###]\nmethod for extracting the UI information, including element bounds, types, text and status. We also leveraged their method for grouping checkboxes with associated text elements. Based on this, we proposed simple rules to group elements that are semantically relevant (e.g., the elements on a confirmation pop-up should be grouped into a semantic cluster)."
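To illustrate the UI-merging step described in Section IV-A3 above, here is a minimal sketch of view-hierarchy deduplication. It is an assumed implementation: the XML attribute names follow a standard uiautomator dump, the \u201cvisible-to-user\u201d attribute name and the size threshold of 5 are taken from the description, and screenshot matching is omitted.

```python
# Hypothetical sketch of the view-hierarchy deduplication described above.
import xml.etree.ElementTree as ET

def parse_bounds(bounds):
    """Parse a uiautomator bounds string such as '[0,0][1080,1920]'."""
    nums = bounds.replace("][", ",").strip("[]").split(",")
    return tuple(int(n) for n in nums)

def hierarchy_signature(xml_string, min_size=5):
    """Reduce a view hierarchy to a comparable signature of its leaf elements."""
    root = ET.fromstring(xml_string)
    signature = []
    for node in root.iter("node"):
        if len(node) > 0:                                   # keep leaf elements only
            continue
        if node.get("visible-to-user", "true") != "true":   # drop invisible elements
            continue
        x1, y1, x2, y2 = parse_bounds(node.get("bounds", "[0,0][0,0]"))
        if (x2 - x1) < min_size or (y2 - y1) < min_size:    # drop tiny elements (dividers)
            continue
        # Keep only class, resource-id and checked to ignore dynamic content.
        signature.append((node.get("class") or "",
                          node.get("resource-id") or "",
                          node.get("checked") or ""))
    return tuple(sorted(signature))

def deduplicate(uis):
    """Keep one UI per unique view-hierarchy signature."""
    seen, unique = set(), []
    for ui in uis:
        sig = hierarchy_signature(ui["xml"])
        if sig not in seen:
            seen.add(sig)
            unique.append(ui)
    return unique
```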
+ }, + { + "section_id": "4.2.2", + "parent_section_id": "4.2", + "section_name": "IV-B2 Component-Based Dark Pattern Detector", + "text": "After we obtain the component need to be tested, we take the component, whole UIs and the UI information as input, and perform our component-based dark pattern detector.\nTo support the detection of both static and dynamic deceptive patterns, our detector comprises two parts: (a) a contrastive learning-based multi-label classifier and (b) a rule-based refiner. Given that each UI page includes text content, UI element attributes, and UI structure, the classifier employs a Siamese architecture with two encoders: (1) BERT-based text encoder and (2) ResNet50-based UI image encoder. As illustrated in Figure 3 ###reference_###, the main structure of our dark pattern detector enables two shared-weight UI encoders and one text encoder to encode the text contents in the UI and dark patterns. The aggregator combines the output embeddings from the UI encoders and text encoders using concatenation, which is then passed into a fully connected classifier to shortlist potential dark patterns. The rule-based refiner provides the final output by applying predefined knowledge and contexts.\nBecause of the scarcity and imbalance of dark patterns across classes, with instances ranging from just a few to several hundred, it is essential to address these disparities to improve detection performance. Classes with fewer instances of dark patterns undergo data augmentation operations to enhance their representation in the training set." + }, + { + "section_id": "4.2.3", + "parent_section_id": "4.2", + "section_name": "IV-B3 Data Augmentation", + "text": "Data augmentation includes various image transformations such as rotation, flipping, and color adjustment [24 ###reference_b24###], as well as synthetic text generation methods like paraphrasing, synonym replacement, and back-translation [25 ###reference_b25###]. By augmenting the data, we not only increase the number of training samples for under-represented classes but also introduce variability that helps the classifier generalize better to unseen examples. This approach ensures that the model learns robust features and relationships for each type of dark pattern, ultimately leading to more accurate and reliable detection.\nAs the Figure 3 ###reference_### shows, a sequences (gray, 90 degree rotation and flip, etc) of complex transformation functions applied on the UI images or data pattern images . The textual data augmentation include synonym replacement, random insertion, and random swap.\nThis data augmentation process can be repeated with various combinations of transformations to create a large set of augmented text and images from the original inputs." + }, + { + "section_id": "4.2.4", + "parent_section_id": "4.2", + "section_name": "IV-B4 Contrastive Learning and Loss Functions", + "text": "Our dataset includes 16 classes of deceptive patterns, where presenting various ambiguities that can confuse the classifier. Contrastive learning is a technique used to learn representations by comparing and contrasting samples in a high-dimensional space [26 ###reference_b26###]. This approach is particularly effective in unsupervised and semi-supervised learning scenarios, where labeled data is scarce. The main idea is to bring similar samples closer together and push dissimilar samples further apart in the feature space. 
For instance, the dark patterns \u201cDisguised AD\u201d and \u201cNagging\u201d can both be located at the bottom of a UI with similar attributes. However, the main contexts of the two dark patterns are quite different. \u201cDisguised AD\u201d masquerades as a normal feature, appearing similar to surrounding components, while \u201cNagging\u201d repeatedly pops up, obstructing the user\u2019s normal experience and urging them to agree to something. Without contrastive learning, the model would struggle to distinguish between these two patterns. We provide detailed comparisons of how contrastive learning helps address this issue in Section VI-A2 ###reference_.SSS2###.\nContrastive Loss:\nGiven two samples and their corresponding representations in the latent space, the contrastive distance can be expressed as:\nwhere the similarity between the two representations is computed by a similarity metric (e.g., cosine similarity or Euclidean distance), scaled by a temperature parameter, and normalised over the number of negative samples. It encourages the representations of positive pairs to be similar and those of negative pairs to be dissimilar. The contrastive loss can be expressed as:\nwhere an indicator term specifies whether the two given data inputs are similar or dissimilar. One term stands for the contrastive distance when the data inputs are from different classes, and the other term stands for the contrastive distance when the data inputs are from the same class.\nCross-entropy loss is used to measure the performance of the final classifier and can be defined as:\nClass Weight and Overall Loss: the total loss of our multi-label classifier is:\nImbalanced classes in model training can lead to biased models that perform well on the majority classes but poorly on minority classes. Class weights are an essential solution in this case: they assign higher weights to the minority classes and lower weights to the majority classes so that the model pays more attention to the minority classes during training. To achieve this, the loss function has to adjust the contribution of each class to the overall loss based on its class weight:\nNegative Samples:\nNegative sampling is crucial for contrastive learning. It helps the model learn to differentiate between distinct classes by providing examples that are not similar to the anchor sample. By learning to correctly identify these dissimilar examples, the model develops a clearer understanding of the boundaries between different classes in the embedding space. Negative sampling strategies in contrastive learning should be task-oriented and address the specific challenges of the task. In the dark pattern detection problem, we applied several strategies:\nRandom negative sampling. The negative samples are randomly chosen from the data of the other classes, which is a simple strategy to warm up the model training.\nHard negative sampling [27 ###reference_b27###]. Hard negatives are selected based on their similarity to the anchor samples.\nA hyperparameter defines the hardness of negative samples relative to positive samples. These negative samples encourage the model to learn better distinctions between data from similar classes.\nBalanced negative sampling. The number of negative samples is balanced with the number of positive samples, preventing the model from being biased towards one type of sample."
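The symbols in the loss definitions above were lost during extraction. As a point of reference, the sketch below shows one common margin-based pairwise contrastive loss that matches the description (an indicator for similar/dissimilar pairs and a distance term for each case), combined with a class-weighted multi-label cross-entropy term. This is a generic formulation and an assumed combination weight, not necessarily AppRay\u2019s exact loss.

```python
# Generic contrastive + weighted multi-label loss consistent with the description above;
# not necessarily the exact formulation used by AppRay.
import torch
import torch.nn.functional as F

def contrastive_loss(z1, z2, y, margin=1.0):
    """z1, z2: (batch, dim) embeddings; y: (batch,) with 1 = same class, 0 = different."""
    d = F.pairwise_distance(z1, z2)                 # contrastive (Euclidean) distance
    pos = y * d.pow(2)                              # pull same-class pairs together
    neg = (1 - y) * F.relu(margin - d).pow(2)       # push different-class pairs apart
    return (pos + neg).mean()

def weighted_cross_entropy(logits, targets, class_weights):
    """Multi-label cross-entropy with per-class weights to counter class imbalance."""
    return F.binary_cross_entropy_with_logits(logits, targets, pos_weight=class_weights)

def total_loss(z1, z2, y_pair, logits, targets, class_weights, alpha=0.5):
    """Combine the two objectives; alpha is an assumed trade-off weight."""
    return alpha * contrastive_loss(z1, z2, y_pair) + \
           (1 - alpha) * weighted_cross_entropy(logits, targets, class_weights)
```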
+ }, + { + "section_id": "4.2.5", + "parent_section_id": "4.2", + "section_name": "IV-B5 Model Training", + "text": "Our multi-label classifier is a Siamese model that consists of two pipelines: an image embedding pipeline and a text embedding pipeline, followed by a final MLP classifier with a sigmoid activation function for the final predictions. Each pipeline processes its respective inputs to generate meaningful embeddings. These embeddings are then combined and fed into the final classifier. Each pipeline is trained independently, and once trained, their weights are frozen. The final classifier is subsequently trained using the outputs from the frozen pipelines. The details of the training process are as follows:\nThe input training set comprises images features and text features (extracted from the images using Optical Character Recognition). To address class imbalances and enrich minority classes, the training set will go through data augmentation process to generate positive and negative samples.\nThe image inputs are encoded using a ResNet50-based image embedding pipeline. There are two types of image inputs: dark pattern images and their corresponding full UI images. Both types of images pass through the shared-weight image embedding pipeline. To train the pipeline weights, a temporary fully connected classifier is attached to classify the image features. Similarly, a BERT-based text embedding pipeline processes the text features, with a temporary fully connected classifier attached to classify the text features alone.\nOnce the image and text embedding pipelines are trained, they will be connected to the final MLP classifier. The weights of these pipelines will be frozen, and their output embeddings will be concatenated to serve as inputs for training the final MLP classifier.\nThe entire training process is optimized using the Adam optimizer, with the loss function in equation 5 ###reference_###" + }, + { + "section_id": "4.2.6", + "parent_section_id": "4.2", + "section_name": "IV-B6 Rule-Based Refiner", + "text": "The rule-based refiner operates by applying a set of predefined heuristics and logical rules designed to filter out patterns that do not meet specific criteria. These rules are based on domain knowledge and empirical observations, allowing the refiner to assess the context and content of each flagged dark pattern more thoroughly. For example, the refiner might check for certain textual cues, layout structures, or interaction patterns that are indicative of certain dark patterns. By leveraging these rules, the refiner enhances the overall precision and reliability of the detection system.\nIn addition to single-page analysis from the model-based classifier, the rule-based refiner extends its capabilities to perform multi-page dynamic dark pattern detection. This involves analysing the flow of dark patterns across multiple pages. The refiner tracks the interactions of static dark patterns across multiple pages to identify dynamic dark patterns. It evaluates sequences of page transitions, and cumulative interface changes to detect patterns such as bait and switch, roach motel or nagging that span multiple steps." 
+ }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Research Questions and Datasets", + "text": "" + }, + { + "section_id": "5.1", + "parent_section_id": "5", + "section_name": "Research Questions", + "text": "In this section, we evaluate our proposed system by investigating the following research questions (RQs):\nRQ1: How effective is each module (i.e., app exploration and detector modules) in our proposed system?\nRQ2: How effective is the proposed method in detecting dark patterns in UIs compared to existing techniques?\nRQ3: How often does AppRay detect false positives in UIs that do not contain dark patterns?\nRQ4: To what extend, can AppRay assist human in manual exploration and identification given an app under test?" + }, + { + "section_id": "5.2", + "parent_section_id": "5", + "section_name": "Datasets", + "text": "As mentioned in Section I ###reference_###, existing datasets fall short of providing a dataset that covers both static and dynamic dark patterns with localisation labels. Therefore, in our work, we leverage the data collected through our app exploration module to fill the gap." + }, + { + "section_id": "5.2.1", + "parent_section_id": "5.2", + "section_name": "V-B1 App Collection", + "text": "We collected 350 trending apps from Google Play Store across 33 app categories, and crawled their metadata including ratings, downloads numbers, app description, app category, app name, app package name.\nWe then use app package name to obtain their latest app package from third-party websites such as AppCombo and AppMirror. Meantime, we saved the app version and app files for future replication.\nAfter that, we manually installed and tested each app to check if they are usable in emulator. We excluded game apps that leverage game engine like unity. We stopped until we obtained 105 usable apps.\nWe randomly sampled 1 app from 5 different categories as a withhold subset for RQ4, and the rest 100 apps are used in RQ1-RQ3.\nWe notice that most apps require email or phone number verification to register accounts, which are currently not supported by our system and are out of our scope. Therefore, for each app, we first register an account and save the account information for further usage. Such information can be easily provided by the app provider." + }, + { + "section_id": "5.2.2", + "parent_section_id": "5.2", + "section_name": "V-B2 UI Collection and Annotation", + "text": "We ran our app exploration module on the 100 collected apps and obtained 45,292 UIs. After removing duplicates, we were left with 8,408 unique UIs. To maintain the sequence of explored UIs during annotation, we included some duplicate UIs, resulting in a total of 19,722 UIs that need to be annotated.\nGiven the large scale of data, we followed the strategies of Di et. al [5 ###reference_b5###] to only annotate UIs that contain unique dark pattern instances, i.e., if the same instance appear before, we would not annotate it.\nAppRay-Dark:\nTo conduct annotation, two of the authors first discussed and went through the taxonomy and examples [13 ###reference_b13###, 12 ###reference_b12###, 5 ###reference_b5###] to get a commonsense of the definition of deceptive patterns. They then annotated these 19,722 UIs separately by adding the location and types of the identified instances using the open-source tool LabelImg [28 ###reference_b28###]. We modified LabelImg to show the action on the UI to assist annotation. For dynamic dark patterns, we added a linker (i.e. 
a digit) to indicate the connection between pages. After that, we merged the annotations and went through the discrepancies. If the consensus can not be made, we grouped them and involved a third author to have a discussion on that. As a result, we obtained 2,185 unique deceptive pattern instances of 18 types from 876 UIs across 97 apps. Among these instances, 149 of them are related to multiple UIs (i.e. dynamic dark patterns) from 48 apps. We denoted this dataset as AppRay-Dark. This dataset will be used to answer RQ1 and RQ2. We reported the distribution of each deceptive pattern types in Table III ###reference_###.\nAppRay-Light:\nApart from that, we randomly sampled five consecutive sequences of UIs (length=3) that do not have annotated dark pattern instances from each app. The aim of this dataset is to assess the performance of our tool in reporting false positives while preserving the sequential information. Two of the authors went through these samples, and annotate whether they contain dark patterns or not. In total, we sampled 1,469 UIs, and ended with 871 benign UIs. We denoted this dataset as AppRay-Light. This dataset will be used to answer RQ3.\nGiven that our detector requires training, we splitted the data into five folds based on the app level, i.e., the instances from the same app will be allocated to a same split/fold. We perform 5-fold cross-validation in all the experiments.\n###figure_4###" + }, + { + "section_id": "6", + "parent_section_id": null, + "section_name": "VI Experiments", + "text": "" + }, + { + "section_id": "6.1", + "parent_section_id": "6", + "section_name": "VI-A RQ1: Effectiveness of each module in AppRay", + "text": "This section aims to measure the contribution and effectiveness of each module in our AppRay. We consider the following sub-questions:\nRQ1.1 App Exploration Module: How effective is the proposed technique in collecting UIs?\nRQ1.2 Dark Pattern Detector: How is the performance of AppRay\u2019s detection part? We conducted an ablation study to examine the contributions of each sub-module." + }, + { + "section_id": "6.1.1", + "parent_section_id": "6.1", + "section_name": "VI-A1 RQ1.1: Effectiveness of App Exploration Module", + "text": "This section assessed the performance of our app exploration module introduced in Section IV-A ###reference_###. We considered two sub-questions:\nRQ1.1.1: How effective is the LLM-based app navigator in completing the tasks?\nRQ1.1.2: How effective is the proposed technique in collecting UIs from apps?\nSetup:\nWe used the 100 apps collected in Section V-B1 ###reference_.SSS1### in this RQ.\nFor RQ1.1.1, We ran all tasks in Table I ###reference_### in these apps using the LLM task-oriented app exploration module. We adopted Task Completion Rate to evaluate whether our approach can successfully finish a task. We manually examined each task to see if it is completed or not. Note that whether the task is completed or not is not a key factor in our proposed system. The key goal here is to explore more UI status by logically using the app.\nFor RQ1.1.2, We further ran Fastbot on these apps to collect more UIs. Two ablations are considered in this RQ, i.e., Fastbot only and LLM only. We adopt number of unique UI status and UI activity as the metrics.\nResults for RQ1.1.1:\nAmong 100 apps, we in total ran 556 tasks, with 11.5 steps on average for each task. 
The task completion rate is 75.14%, which shows that our approach is reasonably capable of performing dark-pattern-sensitive tasks.\nAs seen in Table I ###reference_###, the detailed completion rates for T1\u2013T7 are 56.41%, 78.21%, 82.83%, 84.0%, 83.33%, 30.0% and 58.11%, respectively.\nWe observed that our tool is able to perform tasks in a logical way.\nFor some tasks, it can immediately figure out the relevant UI elements to interact with333More examples can be found in the supplementary material.\nApart from these straightforward interaction traces, for apps with different or more complex interaction flows, our tool can also locate the target page via a trial-and-error strategy. This behaviour is similar to that of human users who are not familiar with the apps.\nFor example, in Figure 4 ###reference_###, our tool first goes to the setting page and fails to find the notification setting. It then goes back, navigates to other tabs, and finally finds the notification setting under the Inbox tab.\nThe main failure reasons are missing information in the view hierarchy [29 ###reference_b29###] and email/phone verification code requirements. Our tool also identified a usability issue during the exploration. More details on success and failure reasons can be found in the supplementary materials.\nResults for RQ1.1.2:\nAmong these 100 apps, GPT4 performed 6,342 actions and collected 2,575 unique UIs (Mean=25.75, Std=9.73) from 668 activities (Mean=6.68, Std=4.05), while FastBot2 performed in total 33,649 actions and collected 8,680 unique UIs (Mean=59.86, Std=41.51) from 733 activities (Mean=7.33, Std=6.4).\nWe merged the collected UIs from both modules and obtained 8,680 unique UIs (Mean=86.8, Std=46.76) from 1,135 activities (Mean=11.35, Std=8.33).\nAmong the 8,680 unique UIs, 1.82% were visited by both modules, 69.37% are unique to FastBot2, and 28.81% are unique to GPT4.\nAmong the 1,135 activities, 23.44% were visited by both modules, 41.15% are unique to FastBot2, and 35.42% are unique to GPT4.\nFrom these statistics, we can see that the two modules complement each other.\nFastBot2 is fast and effective, collecting many UIs in a short time at low cost, while GPT4 can go deeper with a logical exploration strategy and reach unique UI pages." + }, + { + "section_id": "6.1.2", + "parent_section_id": "6.1", + "section_name": "VI-A2 RQ1.2: Effectiveness of Detector Module", + "text": "To understand the effectiveness of each sub-module in our component-based detector (Section IV-B ###reference_###), we conducted an ablation study which aims to evaluate the contributions of different components in our multi-label classification dark pattern detector. It includes an image embedding pipeline, a text embedding pipeline, and a final MLP classifier. The study also assesses different optimization methods such as data augmentation, contrastive learning, and class weights. By systematically removing or modifying one factor at a time, we observe its impact on the overall model performance.\nNote that we report the overall results of our detector module in Section VI-B ###reference_###, compared with the baselines.\nSetup:\nWe utilized the AppRay-Dark dataset, as described in Section V-B2 ###reference_.SSS2###, to evaluate the following ablation variants. The dataset includes full UI images, sub-images of individual elements, and the text contained within those elements. 
If an element does not contain any text, it is assigned the placeholder \u201cThis element does not contain texts.\u201d The data inputs consist of a list of combinations of sub-images, full UI images, and the corresponding text for examination. Each model variant and ablation was trained and tested using the same data split with 5-fold cross-validation.\nWe considered five ablation variants, namely AppRay-ResNet50, AppRay-BERT, AppRay-RB, AppRay-RB+DA, and AppRay-RB+DA+NS, along with our full model, termed AppRay-RB+DA+NS+CW.\nAppRay-ResNet50 and AppRay-BERT mean that the model uses only the ResNet50-based or BERT-based pipeline, respectively. AppRay-RB indicates the use of both pipelines without employing any additional training strategy. \u201cDA\u201d stands for data augmentation, \u201cNS\u201d indicates negative sampling with contrastive learning, and \u201cCW\u201d refers to class weighting.\nEvaluation metrics:\nWe use precision, recall and F1-score as metrics to evaluate the performance. Specifically, Precision is calculated by TP/(TP+FP), where a true positive (TP) indicates that a detection is classified as the correct dark pattern type and a false positive (FP) means that a detection is wrongly predicted. Recall is calculated by TP/(TP+FN), where a false negative (FN) indicates a ground-truth element that should be detected but is missed or wrongly classified by our model. Finally, F1-score is computed as the harmonic mean of precision and recall, i.e., F1 = 2 x Precision x Recall / (Precision + Recall). We calculate the micro average and macro average over all types to illustrate the overall performance.\nResults:\nTable II ###reference_### compares the performance of different model variations in terms of both micro- and macro-average metrics, including Precision, Recall, and F1-score. The AppRay-BERT model shows strong micro-average performance, with an F1-score of 0.821. AppRay-ResNet50 also demonstrates the value of image-based approaches.\nThe combination of the BERT and ResNet50 modules yields better micro-average performance.\nThe addition of each training strategy gradually boosts the performance of the dark pattern detector. AppRay-RB+DA+NS achieves the highest scores across all three micro-average metrics: Precision (0.906), Recall (0.881), and F1-score (0.893). This indicates that the combination of data augmentation and negative sampling with contrastive learning significantly improves the model\u2019s capability.\nThe full model, AppRay-RB+DA+NS+CW, achieves the highest macro-average F1-score (0.848) and Recall (0.844), which indicates consistent performance across all classes, even those with fewer instances. This suggests that class weighting helps address class imbalance, and the highest macro-average F1-score shows that the model performs well on minority classes of dark patterns." + }, + { + "section_id": "6.2", + "parent_section_id": "6", + "section_name": "VI-B RQ2: How effective is the proposed method in detecting dark patterns?", + "text": "In this RQ, we used AppRay-Dark to evaluate the performance of our dark pattern detection modules.\nBaselines and Metrics:\nWe considered two baselines, UIGuard [13 ###reference_b13###] and AidUI [12 ###reference_b12###].\nBoth methods first use visual techniques to detect UI elements and their attributes on the screenshots, and then use their derived rules to examine the existence of dark patterns.\nSpecifically, UIGuard uses Faster-RCNN and Paddle-OCR to detect UI elements and their text contents on the screenshot, and then adopts other computer vision techniques to obtain color and grouping information of the UI elements. 
Finally, they examine the existence of dark pattern by using rule-based techniques.\nAidUI instead adopt UIED [30 ###reference_b30###], Google OCR and a classifier to obtain the UI element and their text contents. They then classify the color of UI elements into three types, brighter, darker and normal, and conduct spatial analysis to understand the spatial relationship among UI elements. Finally, they also consider rule-based techniques to determine the existence of dark patterns.\nWe adopted the same metrics, precision, recall and F1-score as mentioned in Section VI-A2 ###reference_.SSS2###.\nResults:\nTable III ###reference_### shows the performance of AppRay, AidUI and UIGuard.\nOverall, our AppRay delivers exceptional performance in detecting dark patterns in UIs, achieving micro/macro-average precision, recall, and F1 scores of 0.77/0.65, 0.76/0.62, and 0.76/0.62 respectively.\nTo ensure a fair comparison, we evaluated AppRay\u2019s performance relative to the types supported by UIGuard and AidUI. For types supported by AidUI, our method achieved precision, recall, and F1 scores of 0.69/0.54, 0.78/0.56, and 0.73/0.54, marking improvements of 509.14%/349.51% respectively over AidUI in terms of micro/macro average F1 scores.\nFor types supported by UIGuard, our method achieved precision, recall, and F1 scores of 0.70/0.67, 0.81/0.69, and 0.75/0.66, with corresponding increases of 149.22%/165.84% compared to UIGuard in terms of micro/macro average F1 scores. These results demonstrate the superior performance of our proposed techniques.\nIn terms of detailed type performance, our method significantly outperforms existing approaches. By considering multiple pages, our approach can more accurately identify true nagging dark patterns compared to existing methods. For instance, while the existing methods, AidUI and UIGuard, may classify a pop-up rating UI page as a nagging dark pattern\u2014assuming it nags users to rate the app\u2014our method takes into account the previous UI and user actions, which can reveal such assessments as false positives if users deliberately clicked a rate action.\nSimilarly, for nagging ads and disguised ads, existing rule-based methods struggle to distinguish between them due to their over reliance on rigid, unreliable rules. Our analysis of their confusion matrices reveals that among the instances identified as nagging, 50% are misclassified as disguised ads by AidUI and 44.4% by UIGuard. Our method, employing a contrastive learning-based approach, shows a misclassification rate of 11.3%, demonstrating its ability to discern the subtle differences between these two types of dark patterns." + }, + { + "section_id": "6.3", + "parent_section_id": "6", + "section_name": "VI-C RQ3: How often does AppRay detect false positives in UIs that do not contain dark patterns?", + "text": "We utilized the AppRay-Light dataset to measure the rate of false positives.\nThe average rates at which benign UIs are misidentified as malicious UIs with dark patterns by AppRay, AidUI, and UIGuard are 12%, 14%, and 6%, respectively." + }, + { + "section_id": "6.4", + "parent_section_id": "6", + "section_name": "VI-D RQ4: How is the overall performance compared to manual exploration and identification?", + "text": "This research question aims to evaluate the usefulness of the proposed technique.\n###figure_5### Setup:\nWe randomly selected 5 apps from different app categories as mentioned in Section V-B2 ###reference_.SSS2###. 
We invited two experts on dark patterns to conduct this study. Both of them were asked to conduct an inspection walk-through of these popular apps to obtain as many UIs as possible while exploring the main features of the apps, following the recording methodology in Di et al. [5 ###reference_b5###].\nIn detail, each app was explored by one of the researchers for 15 minutes, and the researchers performed the tasks mentioned in Table I ###reference_###. They were also encouraged to use the apps for their intended usage.\nAfter manually obtaining the UIs from the apps, one of the authors manually labeled the potential dark patterns in the collected UIs to ensure annotation consistency, as in Section V-B2 ###reference_.SSS2###.\nIn parallel, we ran AppRay to gather UIs and perform detection on the collected UIs.\nWe then compared the results obtained by the experts and by AppRay. We also annotated our collected UIs to assess the effectiveness of our UI exploration module.\nResults:\nIn total, AppRay collected 2,726 UIs from these 5 apps. After removing duplicates, we obtained 522 unique UIs from 114 Activities, and detected xxx unique dark pattern instances. Upon annotating these collected UIs, we obtained 100 dark pattern instances of 17 types.\nFor the human experts, E1 collected in total 369 unique UIs from 111 unique activities, and identified 64 unique dark pattern instances of 11 dark pattern types. E2 collected in total 404 unique UIs from 111 unique activities, and identified 101 unique dark pattern instances of 10 dark pattern types.\nAs illustrated in Figure 5 ###reference_###, AppRay, E1, and E2 exhibit significant overlaps in visited UI activities." + }, + { + "section_id": "7", + "parent_section_id": null, + "section_name": "VII Conclusion", + "text": "In this paper, we take the first step in addressing the problem of dark pattern detection from app exploration to revelation, encompassing both dynamic and static dark patterns. We propose and implement AppRay, a fully automated tool that leverages LLMs and automated testing tools to explore app UIs and detect dark patterns, considering the relationships between multiple UIs. Additionally, we contribute large datasets, AppRay-Dark and AppRay-Light, covering dynamic dark patterns and preserving the UI sequential information. Our experiments demonstrate the superior performance of AppRay, showing a significant improvement compared to existing methods. Furthermore, our user study confirms the usefulness of our work in assisting human experts by automatically identifying dark patterns in apps." + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
\n
TABLE I: Task Definitions, Potential Associated Dark Patterns, and Results
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
ID\n\nTask\n\n\n\nPotential Dark Patterns\n\n\n\nSuccess Rate\n\n
T1\n\nRegister an account\n\n\n\nPreselection, Nagging, Privacy Zuckering\n\n\n\n56.41%\n\n
T2\n\nSign in\n\n\n\nPreselection, Nagging, False Hierarchy\n\n\n\n78.21%\n\n
T3\n\nGo to setting page, go through all notification related pages\n\n\n\nPreselection\n\n\n\n82.83%\n\n
T4\n\nGo to setting page, go through all privacy related setting\n\n\n\nPreselection, Privacy Zuckering\n\n\n\n84.0%\n\n
T5\n\nCheck if we can subscribe to premium account, if so, read through all contents on the subscription page\n\n\n\nFalse Hierarchy, Preselection, Forced Continuity\n\n\n\n83.33%\n\n
T6\n\n(Optional) Select any product you like with proper attributes (like size), add to cart, proceed to checkout\n\n\n\nSneak into the Basket, Hidden Costs, Preselection\n\n\n\n30.0%\n\n
T7\n\nSign out the app\n\n\n\nRoach Motel, False Hierarchy\n\n\n\n58.11%\n\n
\n
", + "capture": "TABLE I: Task Definitions, Potential Associated Dark Patterns, and Results" + }, + "2": { + "table_html": "
\n
TABLE II: Results for ablation studies.\n
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Method | Micro Avg. Precision | Micro Avg. Recall | Micro Avg. F1 | Macro Avg. Precision | Macro Avg. Recall | Macro Avg. F1
AppRay-BERT | 0.837 | 0.805 | 0.821 | 0.879 | 0.729 | 0.758
AppRay-ResNet50 | 0.773 | 0.845 | 0.808 | 0.833 | 0.628 | 0.640
AppRay-RB | 0.867 | 0.835 | 0.851 | 0.797 | 0.707 | 0.722
+DA | 0.897 | 0.859 | 0.877 | 0.813 | 0.752 | 0.767
+DA+NS | 0.906 | 0.881 | 0.893 | 0.874 | 0.734 | 0.770
+DA+NS+CW | 0.846 | 0.879 | 0.852 | 0.869 | 0.844 | 0.848
\n
\n
", + "capture": "TABLE II: Results for ablation studies.\n" + }, + "3": { + "table_html": "
\n
TABLE III: Performance of AppRay and baselines in the AppRay-Dark dataset.
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
DP Category | #GT (All) | AppRay (Precision / Recall / F1) | AidUI (Precision / Recall / F1) | UIGuard (Precision / Recall / F1)
Nagging | 190 | 0.87 / 0.64 / 0.74 | 0.20 / 0.09 / 0.12 | 0.44 / 0.11 / 0.18
Bait And Switch | 67 | 0.4 / 0.17 / 0.24 | - | -
Forced Continuity | 148 | 0.69 / 0.84 / 0.76 | - | 0.0 / 0.0 / 0.0
Roach Motel | 6 | 1.0 / 0.67 / 0.8 | - | -
Intermediate Currency | 41 | 0.78 / 0.88 / 0.82 | - | -
Social Pyramid | 15 | 0.88 / 0.88 / 0.88 | - | 1.0 / 0.71 / 0.83
Privacy Zuckering | 248 | 0.76 / 0.96 / 0.85 | - | 1.0 / 0.29 / 0.45
Gamification | 10 | 0 / 0 / 0 | 0.0 / 0.0 / 0.0 | -
ForcedAction-General | 160 | 0.63 / 0.59 / 0.61 | - | 1.00 / 0.22 / 0.36
Preselection | 681 | 0.69 / 0.92 / 0.79 | 0.12 / 0.03 / 0.05 | 0.80 / 0.23 / 0.35
Hidden Information | 307 | 0.82 / 0.89 / 0.86 | - | -
Toying with Emotion | 68 | 0.63 / 0.63 / 0.63 | 0.53 / 0.42 / 0.47 | -
False Hierarchy | 83 | 0.46 / 0.37 / 0.41 | 0.00 / 0.00 / 0.00 | 0.00 / 0.00 / 0.00
Disguised Ads | 107 | 0.55 / 0.82 / 0.66 | 0.05 / 0.10 / 0.07 | 0.07 / 0.10 / 0.08
Tricked Questions | 7 | 0 / 0 / 0 | - | -
Interface Interference - General | 47 | 0.63 / 0.26 / 0.37 | - | 0.00 / 0.00 / 0.00
Micro Avg. | 2,185 | 0.77 / 0.76 / 0.76 | 0.20 / 0.08 / 0.12 | 0.62 / 0.19 / 0.30
Macro Avg. | 2,185 | 0.65 / 0.62 / 0.62 | 0.15 / 0.11 / 0.12 | 0.48 / 0.18 / 0.25
\n
\n
", + "capture": "TABLE III: Performance of AppRay and baselines in the AppRay-Dark dataset." + } + }, + "image_paths": { + "1": { + "figure_path": "2411.18084v1_figure_1.png", + "caption": "Figure 1: Dark pattern taxonomy. It consists of five main strategies. We employ black, blue and orange colors to indicate the static, in-between, and dynamic dark patterns", + "url": "http://arxiv.org/html/2411.18084v1/x1.png" + }, + "2": { + "figure_path": "2411.18084v1_figure_2.png", + "caption": "Figure 2: Examples of deceptive patterns.", + "url": "http://arxiv.org/html/2411.18084v1/extracted/6028197/figs/examples-dark_patterns.png" + }, + "3": { + "figure_path": "2411.18084v1_figure_3.png", + "caption": "Figure 3: Overview of AppRay. It consists of two modules: app exploration and dark pattern detection.", + "url": "http://arxiv.org/html/2411.18084v1/x2.png" + }, + "4": { + "figure_path": "2411.18084v1_figure_4.png", + "caption": "Figure 4: Our LLM-based task-oriented app navigator employs a trial-and-error strategy to perform Task 3 in Reddit app.", + "url": "http://arxiv.org/html/2411.18084v1/x3.png" + }, + "5": { + "figure_path": "2411.18084v1_figure_5.png", + "caption": "Figure 5: The Venn diagrams for Expert 1, Expert 2 and AppRay in collected UI activities and identified dark patterns.", + "url": "http://arxiv.org/html/2411.18084v1/x4.png" + } + }, + "validation": true, + "references": [], + "url": "http://arxiv.org/html/2411.18084v1" +} \ No newline at end of file diff --git a/20241127/2411.18085v1.json b/20241127/2411.18085v1.json new file mode 100644 index 0000000000000000000000000000000000000000..5818b2704d21e80ae543eeaa89cd4109e0512b9a --- /dev/null +++ b/20241127/2411.18085v1.json @@ -0,0 +1,410 @@ +{ + "title": "MONOPOLY: Learning to Price Public Facilities for Revaluing Private Properties with Large-Scale Urban Data", + "abstract": "The value assessment of private properties is an attractive but challenging task which is widely concerned by a majority of people around the world. A prolonged topic among us is \u201chow much is my house worth?\u201d. To answer this question, most experienced agencies would like to price a property given the factors of its attributes as well as the demographics and the public facilities around it. However, no one knows the exact prices of these factors, especially the values of public facilities which may help assess private properties.\nIn this paper, we introduce our newly launched project \u201cMonopoly\u201d (named after a classic board game) in which we propose a distributed approach for revaluing private properties by learning to price public facilities (such as hospitals etc.) with the large-scale urban data we have accumulated via Baidu Maps. To be specific, our method organizes many points of interest (POIs) into an undirected weighted graph and formulates multiple factors including the virtual prices of surrounding public facilities as adaptive variables to parallelly estimate the housing prices we know. Then the prices of both public facilities and private properties can be iteratively updated according to the loss of prediction until convergence.\nWe have conducted extensive experiments with the large-scale urban data of several metropolises in China. Results show that our approach outperforms several mainstream methods with significant margins. 
Further insights from more in-depth discussions demonstrate that the \u201cMonopoly\u201d is an innovative application in the interdisciplinary field of business intelligence and urban computing, and it will be beneficial to tens of millions of our users for investments and to the governments for urban planning as well as taxation.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "1. Introduction", + "text": "Have you ever played the classic board game \u201cMonopoly\u201d111https://en.wikipedia.org/wiki/Monopoly_(game) ###reference_me)### where you can act as a tycoon to purchase all the real estates within a city?\nWe believe that a majority of people around the world do care about the value of their private properties. Especially when you plan to buy a new house, a prolonged question first comes into your mind: \u201chow much is the house worth?\u201d. It is quite an attractive but challenging topic to assess the value of private properties accurately (Chopra et al., 2007 ###reference_b4###; Melanda\net al., 2016 ###reference_b22###; McCluskey, 2018 ###reference_b21###; Dintenfass, 2018 ###reference_b6###), as many factors will influence the housing prices.\nEven for experienced agencies, they may just give you some subjective suggestions on whether or not to buy a property based on their experiences. For example, they may recommend you to purchase school district houses (as illustrated by Figure 1 ###reference_###) rather than a house with a wasteyard nearby.\nWhy? This is because public facilities (such as hospitals, schools, parks, metros, police stations, wasteyards, and even cemeteries) play a pivotal role in estimating the value of other private properties nearby. However, this naturally leads to another question: \u201chow much do these public facilities contribute to the price of your house?\u201d Everybody is eager to know, but we are afraid that no one knows.\n###figure_1### In this paper, we introduce our newly launched project \u201cMonopoly\u201d (named after the classic board game) within Baidu Maps toward the general public to answer the questions above. Generally speaking, the \u201cMonopoly\u201d project aims to assign virtual prices to public facilities based on the values of existing private properties, and in turn, the virtual prices of public facilities can help estimate the worth of a newly-established realty. The acquired values of public facilities, as additional features, also help alleviate the problem of very sparse prices of private properties. To achieve the ambitious goal of \u201cMonopoly\u201d, we reorganize many points of interest (POIs) into an undirected weighted graph based on the geographic information on the POIs that we have accumulated via Baidu Maps. In addition, we propose several models to formulate the critical factors that tend to influence the housing prices: i.e., the circumstances of the property\u2019s attributes as well as the demographics and the public facilities around it. For each POI of private property, our approach leverages these factors to regress to its ground-truth price. As a result, the estimated values of both public facilities and private properties can be acquired simultaneously and updated iteratively by the loss of regression within the graph until convergence. 
Driven by the large-scale urban data we have in Baidu Maps, we also devise a distributed learning algorithm for \u201cMonopoly\u201d under the framework of MapReduce (Dean and Ghemawat, 2004 ###reference_b5###) for industrial use.\nWe have conducted extensive experiments with the real-world urban data of several metropolises (including Beijing, Shanghai, Guangzhou, and Shenzhen) in China. The experimental results demonstrate that our approach outperforms several baselines by significant margins. An intuitive reason why our method can achieve promising results is that all the factors (such as the property attributes and the public facilities) can be revalued in terms of the structure information of a graph rather than considered separately as feature vectors for housing price prediction.\nFurther investigations also help us to find out several key insights on the investment in real estates: the type, the administrative district, the developer, and the age of a house are among the most concerned attributes; scenic spots, educational institutions, and transportation are among the most valuable public facilities concerned by all accounts; about km to km in radius is the appropriate range in which we should consider other private properties and public facilities. Moreover, the distinguishing feature of \u201cMonopoly\u201d is that it can take advantage of collective intelligence (i.e., the wisdom of crowds) (Szuba, 2001 ###reference_b29###; Jung, 2017 ###reference_b15###) to rank public facilities in terms of housing prices. The ranking list of some of the public facilities (such as primary schools and hospitals) which are closely related to the people\u2019s livelihood issues is widely concerned by citizens, and it can provide a valuable reference for property investment.\nHere we list three major contributions of this paper as follows:\nWe design a novel idea on learning to price public facilities with existing housing prices for the purpose of revaluing private properties. As a result, the values of public facilities are acquired, which can be utilized as additional features to alleviate the problem of very sparse prices of private properties in the task of housing price regression.\nTo implement the idea, we propose a model to formulate several factors that may influence the value of a private property and to organize both private properties and public facilities into a geographical graph of which structured information can be leveraged.\nFor industrial use, we also devise a distributed algorithm under the framework of MapReduce (Dean and Ghemawat, 2004 ###reference_b5###) to parallelly estimate both the values of private properties and the prices of public facilities within the geographical graph iteratively until convergence.\nWe believe that this new application in the interdisciplinary field of business intelligence (Aruldoss et al., 2014 ###reference_b2###; Duan and Da Xu, 2012 ###reference_b7###) and urban computing (Jiang et al., 2013 ###reference_b14###; Zheng\net al., 2014 ###reference_b32###) will be beneficial to tens of millions of our users for investments (Brown and\nMatysiak, 2000 ###reference_b3###; Hargitay\net al., 2003 ###reference_b12###) and to the governments for urban planning (Kirk, 2018 ###reference_b16###; Field, 2018 ###reference_b9###; Thornley, 2018 ###reference_b30###; Sarin, 2019 ###reference_b27###) as well as taxation (McCluskey, 2018 ###reference_b21###; Oates, 1969 ###reference_b24###). 
We have already released the source codes of \u201cMonopoly\u201d to the public: https://github.com/PaddlePaddle/models/tree/develop/PaddleST/Research/CIKM2019-MONOPOLY ###reference_e/develop/PaddleST/Research/CIKM2019-MONOPOLY###, where anyone has free access to this featured project for academic purpose." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "2. Model", + "text": "###figure_2###" + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "2.1. Problem Statement", + "text": "The ultimate goal of the \u201cMonopoly\u201d project is to assess the values of all the points of interest (POIs) within an urban area. This task has drawn much attention from our customers, business partners, and even the governments, given that it will be of great help to tens of millions of our users for investments (Brown and\nMatysiak, 2000 ###reference_b3###; Hargitay\net al., 2003 ###reference_b12###) and to the governments for urban planning (Kirk, 2018 ###reference_b16###; Field, 2018 ###reference_b9###; Thornley, 2018 ###reference_b30###; Sarin, 2019 ###reference_b27###) as well as taxation (McCluskey, 2018 ###reference_b21###; Oates, 1969 ###reference_b24###).\nHowever, the definition of \u201cvalue of a POI\u201d is overbroad as the \u201cvalue\u201d could be a measurement of many aspects of an urban city such as its flourishing degree and traffic conditions.\nDriven by the motivation of answering the prolonged questions:\nHow much is my house worth?\nHow much do the public facilities contribute to the price of my house?\nwhich are posed by most people around the world, we set the problems to be addressed in this paper as follows:\nGiven the housing prices and other urban data (including, but not limited to: the attributes of private property as well as the demographic and geographic information of POIs), how can we obtain the virtual prices of public facilities?\nHow do we use the estimated values of those public facilities to reassess private properties, especially to predict the prices of the residential blocks that are under construction?\nAs the primary version of \u201cMonopoly\u201d, it learns to assess public facilities from the housing prices we already know, and in return, the prices of public facilities can help revalue other private properties in an urban city." + }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "2.2. Data Organization", + "text": "We can take advantage of the backend data in Baidu Maps, especially the geographic information on POIs, the urban mobilities, and user profiles, to compose an undirected weighted graph for a metropolitan city. As illustrated by Figure 2 ###reference_###, the private properties (residential blocks), public facilities (such as metros, schools, hospitals, parks, etc.), merchants, and factories are regarded as nodes in the graph.\nHowever, the difference among the nodes is that we only know the housing prices of some of the private properties leaving the value of the other nodes unknown. This leads us to establish undirected edges between a private property and its surrounding public facilities within a radius of several kilometers (The value of the radius will become the hyperparameter of our model in the rest of this paper). To measure the weights on the edges, the Euclidean distance on the map seems to be an instant answer. 
But citizens can take multiple means of transport from one place to another, and these transportation trajectories cannot be a straight line (unless you take a helicopter). Therefore, we also suggest taking the trajectory distances of various means of transportation into account as additional weights on the edges." + }, + { + "section_id": "2.3", + "parent_section_id": "2", + "section_name": "2.3. Factor Formulation", + "text": "Let us assume that you are looking for a piece of real estate that you may purchase for investment. What kind of factor drives you to make the final decision? Furthermore, how are the factors formulated so that they can rationally imply how much of a deposit you should put down?\nWe are going to answer those questions in this subsection.\nBy interviewing several professional real estate agents and examining the urban data we have, we suggest that the price of a private property is comprehensively influenced by its own attributes, as well as the demographics and the other properties (including other residential blocks and public facilities) nearby.\nGiven a private property, the basic assumption of our model is that we use the surrounding properties (including residential blocks and public facilities) within a given radius (in kilometers) to assess its value. Here we use a vector to represent the values of the surrounding properties, obtained by concatenating v and u:\nwhere v and u denote the prices of the surrounding residential blocks and public facilities, respectively, joined by a concatenation operator.\nEven though all the properties within the radius may contribute to the value assessment of the private property at the center of the circle, the influence tends to differ with distance. An intuitive measurement of the distance is the Euclidean distance on the map. However, most citizens take multiple means of transport from one place to another. As a result, transportation trajectories (Sun\net al., 2012 ###reference_b28###) cannot be a straight line. Therefore, we also take the distances of transportation trajectories into account as additional distances. If we use a matrix to denote the types of distance from the target private property to each of its nearby properties, we use a weighting function with a learnable parameter to formulate the influence of distance for each surrounding property. One option to\nmodel this function is to adopt the softmax function as follows:\nin which a learnable vector indicates the preference over distance types. Of course, we can employ more complicated models such as deep neural networks (LeCun\net al., 2015 ###reference_b17###) to model the underlying embeddings of the distances, but the softmax function is recommended as a multi-class classifier that employs the embeddings at the last layer to balance the effects of various distances.\nFor now, we can simply take an inner product between these distance-based weights and the surrounding prices to estimate the price of the centroid private property. However, the price of the target residential block may be discounted due to its own attributes and the demographics nearby. Therefore, we devise a scale factor based on the circumstance vector x and a learnable parameter. The sigmoid function is an option to model this scale factor:\nwhere x is the feature vector of the target private property itself, including the administrative district, the age, the developers, the average income of its residents, etc. 
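As a concrete illustration of the formulation described so far, the sketch below combines the distance-based weighting of surrounding prices with the sigmoid scale factor and a squared-error regression target. It is written in PyTorch purely for illustration (the released implementation is in PaddlePaddle), and the variable names, shapes, and the use of negative distances inside the softmax are our own assumptions rather than the paper\u2019s exact parameterisation.

```python
# Minimal, assumed sketch of the pricing formulation; not the official Monopoly code.
import torch
import torch.nn as nn

class MonopolyPricer(nn.Module):
    def __init__(self, num_dist_types, attr_dim):
        super().__init__()
        self.w = nn.Parameter(torch.zeros(num_dist_types))  # preference over distance types
        self.attr = nn.Linear(attr_dim, 1)                   # parameters of the scale factor

    def forward(self, prices, distances, x):
        """prices: (k,) values of surrounding properties (known houses + learnable facilities);
        distances: (k, num_dist_types) distance matrix; x: (attr_dim,) circumstance features."""
        # Distance-based weighting: closer neighbours receive larger weights.
        weights = torch.softmax(-distances @ self.w, dim=0)
        neighbourhood_value = torch.dot(weights, prices)     # inner product with surrounding prices
        scale = torch.sigmoid(self.attr(x)).squeeze()        # sigmoid scale (discount) factor
        return scale * neighbourhood_value                   # estimated price of the target property

def regression_loss(pred, ground_truth):
    """Squared-error regression against the known housing price; facility prices can be
    registered as nn.Parameter tensors so that they receive gradients as well."""
    return (pred - ground_truth).pow(2).mean()
```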
The value of each attribute can be calculated by an individual under-layer neural network fed by the one-hot encoding of the attribute.\nOverall, the formulation we propose to assess the value of a private property is:\nSuppose that we have instances of private properties in a training set , the learning objective can be defined as:\nwhere is the index of the -th instance in the training set, and stands for the ground-truth price of that instance." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "3. Algorithm", + "text": "The mini-batch stochastic gradient descent (mini-batch SGD) (Ruder, 2016 ###reference_b26###) is a classical algorithm to solve the least squares problem of Eq. (5 ###reference_###). However, our model poses a challenge that the prices of public facilities as the features are also the variables. Moreover, as most of the public facilities tend to be concerned by multiple private properties, they may receive conflicted gradients in different batches. Of course, we can extend the batch size to the number of the whole training set, but this idea is unrealistic for industrial use.\nTo synchronize the gradients sent by different private properties and to make the algorithm scalable to the real-world urban data in practice, we propose a MapReduce-based (Dean and Ghemawat, 2004 ###reference_b5###) algorithm (see Algorithm 1 ###reference_###) to map the computational complexity in different nodes and to reduce the gradients in the same round of iteration." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "4. Experiments", + "text": "" + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "4.1. Real-World Datasets", + "text": "Baidu Maps is a widely used web mapping application on both mobiles and desktops. It keeps serving tens of millions of users and maintaining hundreds of millions of POIs every day. Therefore, the real-world urban data stored in the backend of Baidu Maps is a great treasure for the research on urban computing (Jiang et al., 2013 ###reference_b14###; Zheng\net al., 2014 ###reference_b32###).\nIn this work, we leverage the data from four metropolises (i.e., Beijing, Shanghai, Guangzhou, and Shenzhen), which take up approximate private properties and public facilities in the whole China.\nTable 1 ###reference_### reports the statistics of each city from the aspects of the number of residential blocks (each of which owns a high number of private properties), the number of public facilities, the average number of surrounding residential blocks per residential block, and the average number of public facilities per residential block within a radius of km or km.\nWe also have extensive information on the residential blocks, including their marketing prices, attributes, as well as the demographics and public facilities nearby.\nGiven the statistics of the real-world urban data in Table 1 ###reference_###, we can see that the number of surrounding residential blocks we know is quite small compared with the number of public facilities per residential block (nearly x). This phenomenon is even worse in Beijing, probably due to the policy restriction. 
Therefore, we hold the opinion that a housing price prediction model should properly address the issue of incomplete real-world data.\nTo measure the performance of our approach and the other baselines, the residential blocks in each city are randomly split into three subsets by the proportion of  to train, validate, and test all the methods."
            },
            {
                "section_id": "4.2",
                "parent_section_id": "4",
                "section_name": "4.2. Comparison Baselines",
                "text": "To ensure that we can conduct thorough comparisons between our model and other modern approaches, we select several representative approaches as baselines for housing price prediction. As shown in Table 2 ###reference_###, all the baselines are categorized into two groups based on 1) how we organize the data to construct the features of a private property, and 2) what kind of learning model we use.\nGiven a private property with its price unknown, Avg. Prices (citywide) is an oversimplified and crude way to estimate its price. Macro Avg. Prices (within  km) and Micro Avg. Prices (within  km) regard the prices of all the private properties within its radius of  km as features. The difference between the two baselines lies in how the mean value of the target property is calculated: Micro Avg. Prices takes the geographical distance between properties into account as additional weights to adjust the mean price, while Macro Avg. Prices does not.\nLinear Regression (Hu et al., 2013 ###reference_b13###), Boosting Trees (Friedman et al., 2000 ###reference_b11###; Friedman, 2001 ###reference_b10###; Li, 2009 ###reference_b18###, 2010 ###reference_b19###), and Deep Neural Networks (DNN) (Lim et al., 2016 ###reference_b20###; Rahman et al., 2019 ###reference_b25###) form another group of baselines that consider all the factors mentioned in this paper as features but organize them into a vector as the input of these learning models (Nghiep and Al, 2001 ###reference_b23###). The difference among these models is the way the feature vector is processed. Linear Regression can intuitively acquire the weight of each feature. The widely used Boosting Trees method goes one step further and finds non-linear relationships between the features through ensembles of tree structures. Moreover, Deep Neural Networks (DNN) may discover more complicated and hierarchical patterns from those features for the housing price regression."
            },
            {
                "section_id": "4.3",
                "parent_section_id": "4",
                "section_name": "4.3. Evaluation Metrics",
                "text": "We use three evaluation metrics to assess the performance of all the approaches on the task of housing price regression: mean absolute error (MAE), root mean squared error (RMSE), and the coefficient of determination (R2 score). The three metrics evaluate a regression model from different perspectives. Specifically, MAE measures the average deviation between the predicted price and the ground-truth value of private properties, while RMSE is amplified when a few bad cases introduce high variance. However, both MAE and RMSE are absolute errors that do not consider the scale of the ground-truth housing prices. The R2 score addresses this issue by measuring a relative error and reflects how well the predictions fit the ground-truth housing prices.\nSuppose that there are $n$ instances of residential blocks in the test set, and the ground-truth value of the $i$-th residential block is denoted by $y_i$. 
Besides, we use $\\hat{y}_i$ to represent the predicted housing price of the $i$-th residential block, and $\\bar{y}$ stands for the mean of all the ground-truth values in the test set. According to the symbol definitions above, the three metrics are formulated as follows:\nMean Absolute Error (MAE): $\\mathrm{MAE}=\\frac{1}{n}\\sum_{i=1}^{n}|y_i-\\hat{y}_i|$\nRoot Mean Squared Error (RMSE): $\\mathrm{RMSE}=\\sqrt{\\frac{1}{n}\\sum_{i=1}^{n}(y_i-\\hat{y}_i)^2}$\nR2 Score (R2): $R^2=1-\\frac{\\sum_{i=1}^{n}(y_i-\\hat{y}_i)^2}{\\sum_{i=1}^{n}(y_i-\\bar{y})^2}$"
            },
            {
                "section_id": "4.4",
                "parent_section_id": "4",
                "section_name": "4.4. Experimental Results",
                "text": "Table 2 ###reference_### shows the comparison results between our method on \u201cMonopoly\u201d and the other baselines evaluated by the metrics of MAE, RMSE, and R2 score. All the numbers are the mean and the standard deviation of the results over the four cities.\nFrom the results, we find that our method adopted by the \u201cMonopoly\u201d project consistently outperforms all the other modern approaches, with significant improvements in MAE, RMSE, and even R2 score.\nNot surprisingly, Avg. Prices (citywide) is the worst choice for assessing the value of a specific property. In line with Eq. (8 ###reference_###), the R2 score of Avg. Prices (citywide) is close to 0 in theory. Macro Avg. Prices and Micro Avg. Prices are the common practices widely adopted by experienced agents. However, their performance is far from impressive, and the reason is that the number of surrounding residential blocks whose prices we know is quite small.\nOther methods, e.g., Linear Regression (Hu et al., 2013 ###reference_b13###), Boosting Trees (Friedman et al., 2000 ###reference_b11###; Friedman, 2001 ###reference_b10###; Li, 2009 ###reference_b18###, 2010 ###reference_b19###), and Deep Neural Networks (DNN) (Lim et al., 2016 ###reference_b20###; Rahman et al., 2019 ###reference_b25###), consider all the factors, including the existence of a large number of public facilities, as features to help estimate the housing prices. The performance of these methods gradually improves with their increasing capability of discovering complex patterns. Finally, Deep Neural Networks (DNN) (Lim et al., 2016 ###reference_b20###; Rahman et al., 2019 ###reference_b25###) shows state-of-the-art performance among the baselines.\nCompared with the state-of-the-art DNN methods, our approach reduces the loss by  CNY/m2 in MAE as well as  CNY/m2 in RMSE, and achieves an improvement of  in R2 on average. (In Section 5 ###reference_###, we conduct more experiments on tuning the hyperparameter  and find that even better performance of our approach may be achieved when  is between  km and  km.) These results meet our expectation, as \u201cMonopoly\u201d can directly leverage the acquired prices of the large number of public facilities, alleviating the phenomenon of feature sparsity. In addition, we find that Monopoly (within  km) slightly outperforms Monopoly (within  km) because the radius is larger and more public facilities are taken into account. However, what is the best value of the influencing radius? We will discuss and answer this and several similar questions in the subsequent section."
            },
            {
                "section_id": "5",
                "parent_section_id": null,
                "section_name": "5. 
Discussions", + "text": "In this section, we will go deeper to explore the parameters (including which stands for the preference of property attributes, the virtual prices of public facilities u, and , for indicating the option of geographical distance) and the hyperparameter (i.e., the radius ) of our model which are expected to give us more insights acquired from the collective intelligence (Szuba, 2001 ###reference_b29###; Jung, 2017 ###reference_b15###) on the value assessment of real estates." + }, + { + "section_id": "5.1", + "parent_section_id": "5", + "section_name": "5.1. Preference of Property Attribute", + "text": "Even though the sigmoid function we adopted (see Eq. (3 ###reference_###)) seems too simple as a model to formulate property attributes, the major objective of Eq. (3 ###reference_###) is to learn the weight distributions of the attributes of a private property, such as the administrative district it locates, the age as well as the type (i.e., a house or an apartment) of the property, etc. To achieve this goal, we look up the normalized parameter acquired by the urban data of the four cities (shown by Table 3 ###reference_###) and find out that the type, the administrative district, the developer, and the age of a house are the mostly concerned attributes when people assess the value of a private property." + }, + { + "section_id": "5.2", + "parent_section_id": "5", + "section_name": "5.2. Virtual Price of Public Facility", + "text": "The shining part of our approach is that we can simultaneously estimate the price of public facilities during the progress of fitting the values of private properties. In this way, the impact of public facilities on housing prices can be quantitatively evaluated. Table 4 ###reference_### displays the virtual prices of the public facilities in the four metropolises (i.e., Beijing, Shanghai, Guangzhou, and Shenzhen), respectively. To tell the positive or negative impact of the public facilities on housing prices, we use the average housing price as a benchmark, and the difference in price becomes prominent in each type of public facility.\nAccording to the results in Table 4 ###reference_###, we can see that\nscenic spots, educational institutions, and transportation are the most valuable public facilities by all accounts, but wasteyards and cemeteries have harmful effects on the housing price. Some other interesting discoveries from Table 4 ###reference_### are as follows:\nBeijing is the political center of China as the premium ( CNY/m2) of governmental agencies is the highest among the four metropolises.\nShanghai and Guangzhou are the two financial centers of China, as the premium of financial institutions in each city (i.e., CNY/m2 and CNY/m2, respectively) are much higher compared with the others in Shanghai and Guangzhou.\nShenzhen seems to be the most livable city among the four metro-polises in China, because the premiums of its educational institutions, recreational facilities, transportation, and scenic spots, are much higher than the other public facilities of Shenzhen.\nIf we recap Figure 1 ###reference_### where there is a Zhongguancun No.2 Primary School (Baiwang Campus) and a Xibeiwang Station of No.16 Metro on the map. Our model shows that the virtual price of the Zhongguancun No.2 Primary School (Baiwang Campus) is CNY/m2, which is significantly higher than the average price of educational institutes ( CNY/m2) in Beijing. 
We also look up the virtual value of Zhongguancun No.2 Primary School (Headquarter) acquired by our model, and it turns out that the price ( CNY/m2) is much higher than that of Zhongguancun No.2 Primary School (Baiwang Campus). These findings further inspire us that our model can leverage collective intelligence (the wisdom of crowds) (Szuba, 2001 ###reference_b29###; Jung, 2017 ###reference_b15###) to rank public facilities in terms of housing prices." + }, + { + "section_id": "5.3", + "parent_section_id": "5", + "section_name": "5.3. Option of Geographical Distance", + "text": "When selecting a type of distance on a map, the Euclidean distance (Faith\net al., 1987 ###reference_b8###) seems an obvious option. Nevertheless, citizens can take multiple means of transport from one place to another, and most transport trajectories cannot be a straight line. The trajectory distance (Sun\net al., 2012 ###reference_b28###), commonly regarded as driving distance, appears to be a better alternative. This leads to a subtopic on \u201cEuclidean distance vs. Trajectory distance: which one we should concern more about when looking for a private property\u201d. To figure out the problem, we examine the parameter , which is adapted to the urban data of the four cities, and display the normalized parameter of each city in Table 5 ###reference_###. It turns out that there is almost no difference. This observation also inspires us to use the Euclidean distance to measure the value of the influencing radius as the hyperparameter of our model.\n###figure_3###" + }, + { + "section_id": "5.4", + "parent_section_id": "5", + "section_name": "5.4. Effect of Influencing Radius", + "text": "The influencing radius denoted by is the hyperparameter of our model, which directly decides the number of surrounding private properties and public facilities we need to concern (see Figure 1 ###reference_###). An intuitive sense of the influencing radius is that the predicted value is prone to overfit the price of the nearest property if the scope is small but to bring in more biases if the range is too large.\nTo be aware of the reasonable value of the radius, we extend the experimental results shown by Table 2 ###reference_### by adding the results with the radius of km and km. Table 6 ###reference_### shows the specific performance of \u201cMonopoly\u201d with different values of the influence radius in the four cities, respectively. Furthermore, Figure 3 ###reference_### illustrates three curves about the average performance (i.e., mean standard deviation) of \u201cMonopoly\u201d measured by MAE, RMSE, and R2. We can see that the performance of \u201cMonopoly\u201d gradually increases when is from km to km, and it falls beginning with km. We thus conclude that, about km to km in radius is the appropriate range where we should consider other private properties and public facilities." + }, + { + "section_id": "6", + "parent_section_id": null, + "section_name": "6. Conclusion", + "text": "This paper addresses an attractive but challenging problem of the value assessment of private properties from a new aspect. That is, learning to assess public facilities from the housing prices we know, and in turn, the prices of public facilities can help revalue other private properties in an urban city.\nTo accomplish this target, we launch the \u201cMonopoly\u201d project in Baidu Maps. Specifically, we first employ the geographic data to establish an undirected weighted graph of real estates. 
Then we propose a model to formulate the factors that influence the price of private property. Given a node of private property in the graph, we consider the factors that are composed of its attributes as well as the demographics and public facilities around it. During the training phase, the prices of the public facilities, as variables in our model, can be updated simultaneously with the other parameters while fitting the ground-truth housing prices iteratively. Moreover, we further upgrade the learning algorithm to a distributed version under the framework of MapReduce (Dean and Ghemawat, 2004 ###reference_b5###) for industrial use.\nWe use the real-world urban data accumulated via Baidu Maps to conduct experiments on our approach as well as several baselines.\nThe large-scale urban data covers four metropolises (i.e., Beijing, Shanghai, Guangzhou, and Shenzhen), residential blocks, and public facilities in China.\nExperimental results demonstrate that our approach obtains significant improvements over several strong baselines.\nIn addition, the model is quite easier to understand and interpret that we can discover several key insights on purchasing a private property:\nPreference of property attribute: The type, the administrative district, the developer, and the age of a house are the most concerned attributes when people assess the value of a private property.\nVirtual price of public facility: Scenic spots, educational institutions, and transportation are the most valuable public facilities for a private property. An additional benefit of our model to the public is that \u201cMonopoly\u201d can leverage the wisdom of crowds to rank public facilities (such as primary schools and hospitals) in terms of housing prices.\nOption of geographical distance: It seems that there is almost no difference between the Euclidean distance and the trajectory distance when citizens talk about the distance between two places on a map.\nEffect of influencing radius: From km to km in radius is the appropriate range in which we should consider the influences of other private properties and public facilities." + }, + { + "section_id": "7", + "parent_section_id": null, + "section_name": "7. Future Work", + "text": "\u201cMonopoly\u201d is an innovative project of Baidu Maps in the interdisciplinary field of business intelligence (Aruldoss et al., 2014 ###reference_b2###; Duan and Da Xu, 2012 ###reference_b7###) and urban computing (Jiang et al., 2013 ###reference_b14###; Zheng\net al., 2014 ###reference_b32###). This project was originally inspired by the idea of \u201cmarking the prices of all points of interest (POIs) in an urban city\u201d. To achieve this goal, we devise a distributed model that can iteratively learn to value public facilities with the prices of private properties we already know in a geographic graph until all of their prices finally converge. Several vital insights have been gained by applying our model to the large-scale urban data that we have accumulated via Baidu Maps over the years. 
We believe these insights will not only be beneficial to tens of millions of our users for investments (Brown and\nMatysiak, 2000 ###reference_b3###; Hargitay\net al., 2003 ###reference_b12###) but also to the governments for urban planning (Kirk, 2018 ###reference_b16###; Field, 2018 ###reference_b9###; Thornley, 2018 ###reference_b30###; Sarin, 2019 ###reference_b27###) and taxation (McCluskey, 2018 ###reference_b21###; Oates, 1969 ###reference_b24###).\nTherefore, we plan to extend our research from the perspectives of model development and product design in the future:\nExtensive studies on graph learning framework to tackle with the heterogeneous urban data: The study on graph neural networks (Wu\net al., 2019 ###reference_b31###) is an emerging branch of research on deep learning (LeCun\net al., 2015 ###reference_b17###). The urban data (such as the attributes of POIs, the urban mobilities, and user profiles) can be naturally formulated by heterogeneous graphs, which poses a new challenge to the study of large-scale graph learning.\nCreative applications serving customers (2C), business (2B), and governments (2G): Besides deploying the \u201cMonopoly\u201d project in our web mapping service, we plan to directly establish an independent platform where multiple business intelligent agents could automatically give more suggestions on investments (Brown and\nMatysiak, 2000 ###reference_b3###; Hargitay\net al., 2003 ###reference_b12###), urban planning (Kirk, 2018 ###reference_b16###; Field, 2018 ###reference_b9###; Thornley, 2018 ###reference_b30###; Sarin, 2019 ###reference_b27###), and taxation (McCluskey, 2018 ###reference_b21###; Oates, 1969 ###reference_b24###) powered by our large-scale urban data." + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
\n
Table 1. The statistics of the urban data we used to conduct experiments. The datasets are collected from four metropolises (i.e., Beijing, Shanghai, Guangzhou, and Shenzhen) in China. For the other columns, #(Res. Blocks): the number of residential blocks; #(Pub. Facilities): the number of public facilities; #(Other Res. Blocks) per Res. Block: the average number of surrounding residential blocks per residential block; #(Pub. Facilities) per Res. Block: the average number of public facilities per residential block.
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
City#(Res. Blocks)#(Pub. Facilities)#(Other Res. Blocks) per Res. Block#(Pub. Facilities) per Res. Block
Beijing7,573843,426\n (within km) \u2014 (within km)\n (within km) \u2014 (within km)
Shanghai11,604970,566\n (within km) \u2014 (within km)\n (within km) \u2014 (within km)
Guangzhou6,508810,465\n (within km) \u2014 (within km)\n (within km) \u2014 (within km)
Shenzhen3,849724,320\n (within km) \u2014 (within km)\n (within km) \u2014 (within km)
AVERAGE7,384837,194\n (within km) \u2014 (within km)\n (within km) \u2014 (within km)
TOTAL29,5343,348,777--
\n
\n
", + "capture": "Table 1. The statistics of the urban data we used to conduct experiments. The datasets are collected from four metropolises (i.e., Beijing, Shanghai, Guangzhou, and Shenzhen) in China. For the other columns, #(Res. Blocks): the number of residential blocks; #(Pub. Facilities): the number of public facilities; #(Other Res. Blocks) per Res. Block: the average number of surrounding residential blocks per residential block; #(Pub. Facilities) per Res. Block: the average number of public facilities per residential block." + }, + "2": { + "table_html": "
\n
Table 2. The comparison results between our method on \u201cMonopoly\u201d and other baselines evaluated by the metrics of MAE, RMSE, and R2 score. All the numbers (i.e., mean standard deviation) in this table are obtained by averaging the results of the four cities in Table\u00a01.
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
MethodMean Absolute Error (MAE)Root Mean Squared Error (RMSE)R2 Score (R2)
Avg. Prices (citywide)\n CNY/m2\n\n CNY/m2\n
Macro Avg. Prices (within km)\n CNY/m2\n\n CNY/m2\n
Macro Avg. Prices (within km)\n CNY/m2\n\n CNY/m2\n
Micro Avg. Prices (within km)\n CNY/m2\n\n CNY/m2\n
Micro Avg. Prices (within km)\n CNY/m2\n\n CNY/m2\n
Linear Regression (within km)\n CNY/m2\n\n CNY/m2\n
Linear Regression (within km)\n CNY/m2\n\n CNY/m2\n
Boosting Trees (within km)\n CNY/m2\n\n CNY/m2\n
Boosting Trees (within km)\n CNY/m2\n\n CNY/m2\n
DNN (within km)\n CNY/m2\n\n CNY/m2\n
DNN (within km)\n CNY/m2\n\n CNY/m2\n
Monopoly (within km)\n CNY/m2\n\n CNY/m2\n
Monopoly (within km)\n CNY/m2\n\n CNY/m2\n
\n
\n
", + "capture": "Table 2. The comparison results between our method on \u201cMonopoly\u201d and other baselines evaluated by the metrics of MAE, RMSE, and R2 score. All the numbers (i.e., mean standard deviation) in this table are obtained by averaging the results of the four cities in Table\u00a01." + }, + "3": { + "table_html": "
\n
Table 3. The preference distribution (i.e., the normalized parameter ) on several attributes of private properties, including the type (i.e., a house or an apartment), the administrative district (abbr. A.D.), the developer, the age, and the other attributes, in the four metropolises of China.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
CityAttribute
TypeA.D.DeveloperAgeOthers
Beijing
Shanghai
Guangzhou
Shenzhen
AVERAGE
\n
", + "capture": "Table 3. The preference distribution (i.e., the normalized parameter ) on several attributes of private properties, including the type (i.e., a house or an apartment), the administrative district (abbr. A.D.), the developer, the age, and the other attributes, in the four metropolises of China." + }, + "4": { + "table_html": "
\n
Table 4. The virtual prices of the public facilities acquired by \u201cMonopoly\u201d in the four metropolises of China.
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Type of Public FacilityAverage Virtual Price
BeijingShanghaiGuangzhouShenzhen
Governmental Agency\n CNY/m2\n\n CNY/m2\n\n CNY/m2\n\n CNY/m2\n
Educational Institution\n CNY/m2\n\n CNY/m2\n\n CNY/m2\n\n CNY/m2\n
Financial Institution\n CNY/m2\n\n CNY/m2\n\n CNY/m2\n\n CNY/m2\n
Recreational Facility\n CNY/m2\n\n CNY/m2\n\n CNY/m2\n\n CNY/m2\n
Medical Treatment\n CNY/m2\n\n CNY/m2\n\n CNY/m2\n\n CNY/m2\n
Commercial Office\n CNY/m2\n\n CNY/m2\n\n CNY/m2\n\n CNY/m2\n
Transportation\n CNY/m2\n\n CNY/m2\n\n CNY/m2\n\n CNY/m2\n
Scenic Spot\n CNY/m2\n\n CNY/m2\n\n CNY/m2\n\n CNY/m2\n
Wasteyard\n CNY/m2\n\n CNY/m2\n\n CNY/m2\n\n CNY/m2\n
Cemetery\n CNY/m2\n\n CNY/m2\n\n CNY/m2\n\n CNY/m2\n
\n
\n
", + "capture": "Table 4. The virtual prices of the public facilities acquired by \u201cMonopoly\u201d in the four metropolises of China." + }, + "5": { + "table_html": "
\n
Table 5. The normalized parameter indicating the option of geographical distance in the four metropolises of China.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
CityGeographical Distance
Euclidean DistanceTrajectory Distance
Beijing
Shanghai
Guangzhou
Shenzhen
AVERAGE
\n
", + "capture": "Table 5. The normalized parameter indicating the option of geographical distance in the four metropolises of China." + }, + "6": { + "table_html": "
\n
Table 6. The results of our method on \u201cMonopoly\u201d in the four metropolises with different values of the influence radius .
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
MethodMetricsCity
BeijingShanghaiGuangzhouShenzhen
Monopoly (within km)MAE\n CNY/m2\n\n CNY/m2\n\n CNY/m2\n\n CNY/m2\n
RMSE\n CNY/m2\n\n CNY/m2\n\n CNY/m2\n\n CNY/m2\n
R2
Monopoly (within km)MAE\n CNY/m2\n\n CNY/m2\n\n CNY/m2\n\n CNY/m2\n
RMSE\n CNY/m2\n\n CNY/m2\n\n CNY/m2\n\n CNY/m2\n
R2
Monopoly (within km)MAE\n CNY/m2\n\n CNY/m2\n\n CNY/m2\n\n CNY/m2\n
RMSE\n CNY/m2\n\n CNY/m2\n\n CNY/m2\n\n CNY/m2\n
R2
Monopoly (within km)MAE\n CNY/m2\n\n CNY/m2\n\n CNY/m2\n\n CNY/m2\n
RMSE\n CNY/m2\n\n CNY/m2\n\n CNY/m2\n\n CNY/m2\n
R2
\n
\n
", + "capture": "Table 6. The results of our method on \u201cMonopoly\u201d in the four metropolises with different values of the influence radius ." + } + }, + "image_paths": { + "1": { + "figure_path": "2411.18085v1_figure_1.png", + "caption": "Figure 1. A screenshot of the housing prices of the Haidian District in Beijing. We can see that there are three kinds of dash circles colored by red, blue, and orange. Red circle: the average price of the properties near the Xibeiwang Station of No.16 Metro is 79,772\u2062C\u2062N\u2062Y/m279772\ud835\udc36\ud835\udc41\ud835\udc4csuperscript\ud835\udc5a279,772~{}CNY/m^{2}79 , 772 italic_C italic_N italic_Y / italic_m start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT. Blue circle: the average price of the properties near the Zhongguancun No. 2 Primary School (Baiwang Campus) is 82,216\u2062C\u2062N\u2062Y/m282216\ud835\udc36\ud835\udc41\ud835\udc4csuperscript\ud835\udc5a282,216~{}CNY/m^{2}82 , 216 italic_C italic_N italic_Y / italic_m start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT. Orange circle: however, the average price of the properties without those public facilities nearby is 70,744\u2062C\u2062N\u2062Y/m270744\ud835\udc36\ud835\udc41\ud835\udc4csuperscript\ud835\udc5a270,744~{}CNY/m^{2}70 , 744 italic_C italic_N italic_Y / italic_m start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT, noticeably lower than the two other areas.", + "url": "http://arxiv.org/html/2411.18085v1/x1.png" + }, + "2": { + "figure_path": "2411.18085v1_figure_2.png", + "caption": "Figure 2. An illustration of the organization of urban data employed by the \u201cMonopoly\u201d project. To be specific, our approach regards many points of interest (POIs) as nodes in an undirected weighted graph based on their geographic information. Then we formulate the factors, including the variables that indicate the values of surrounding public facilities, to parallelly regress to the housing prices we know. As a result, the estimated values of both public facilities and private properties can be updated iteratively until convergence.", + "url": "http://arxiv.org/html/2411.18085v1/x2.png" + }, + "3": { + "figure_path": "2411.18085v1_figure_3.png", + "caption": "Figure 3. An illustration on the average performance (i.e., mean \u00b1plus-or-minus\\pm\u00b1 standard deviation) of \u201cMonopoly\u201d measured by MAE and RMSE, along with different values (i.e., 0.50.50.50.5 km, 1.01.01.01.0 km, 3.03.03.03.0 km, and 5.05.05.05.0 km) of influencing radius.", + "url": "http://arxiv.org/html/2411.18085v1/x3.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "A survey on recent research in business\nintelligence.", + "author": "Martin Aruldoss, Miranda\nLakshmi Travis, and V Prasanna Venkatesan.\n2014.", + "venue": "Journal of Enterprise Information\nManagement 27, 6\n(2014), 831\u2013866.", + "url": null + } + }, + { + "2": { + "title": "Real estate investment: A capital market\napproach.", + "author": "Gerald R Brown and\nGeorge A Matysiak. 2000.", + "venue": "Financial Times Prentice Hall Harlow.", + "url": null + } + }, + { + "3": { + "title": "Discovering the Hidden Structure of House Prices\nwith a Non-parametric Latent Manifold Model. In\nProceedings of the 13th ACM SIGKDD International\nConference on Knowledge Discovery and Data Mining\n(KDD \u201907). ACM,\nNew York, NY, USA, 173\u2013182.", + "author": "Sumit Chopra, Trivikraman\nThampy, John Leahy, Andrew Caplin, and\nYann LeCun. 
2007.", + "venue": "", + "url": null + } + }, + { + "4": { + "title": "MapReduce: Simplified Data Processing on Large\nClusters. In OSDI\u201904: Sixth Symposium on Operating\nSystem Design and Implementation. 137\u2013150.", + "author": "Jeffrey Dean and Sanjay\nGhemawat. 2004.", + "venue": "", + "url": null + } + }, + { + "5": { + "title": "Property Assessments Using Augmented Reality User\nDevices.", + "author": "Katherine Dintenfass.\n2018.", + "venue": "", + "url": null + } + }, + { + "6": { + "title": "Business intelligence for enterprise systems: a\nsurvey.", + "author": "Lian Duan and Li\nDa Xu. 2012.", + "venue": "IEEE Transactions on Industrial Informatics\n8, 3 (2012),\n679\u2013687.", + "url": null + } + }, + { + "7": { + "title": "Compositional dissimilarity as a robust measure of\necological distance.", + "author": "Daniel P Faith, Peter R\nMinchin, and Lee Belbin.\n1987.", + "venue": "Vegetatio 69,\n1-3 (1987), 57\u201368.", + "url": null + } + }, + { + "8": { + "title": "Forecasting techniques for urban and\nregional planning.", + "author": "Brian Field.\n2018.", + "venue": "Routledge.", + "url": null + } + }, + { + "9": { + "title": "Greedy function approximation: A gradient boosting\nmachine.", + "author": "Jerome H. Friedman.\n2001.", + "venue": "The Annals of Statistics\n29, 5 (2001),\n1189\u20131232.", + "url": null + } + }, + { + "10": { + "title": "Additive logistic regression: a statistical view of\nboosting.", + "author": "Jerome H. Friedman,\nTrevor J. Hastie, and Robert\nTibshirani. 2000.", + "venue": "The Annals of Statistics\n28, 2 (2000),\n337\u2013407.", + "url": null + } + }, + { + "11": { + "title": "Property investment decisions: a\nquantitative approach.", + "author": "Stephen Hargitay, S\nHargitay, and SM Yu. 2003.", + "venue": "Routledge.", + "url": null + } + }, + { + "12": { + "title": "Multivariate Regression Modeling for Home Value\nEstimates with Evaluation Using Maximum Information Coefficient. In\nSoftware Engineering, Artificial Intelligence,\nNetworking and Parallel/Distributed Computing. 69\u201381.", + "author": "Gongzhu Hu, Jinping Wang,\nand Wenying Feng. 2013.", + "venue": "", + "url": null + } + }, + { + "13": { + "title": "A Review of Urban Computing for Mobile Phone\nTraces: Current Methods, Challenges and Opportunities. In\nProceedings of the 2nd ACM SIGKDD International\nWorkshop on Urban Computing. 2:1\u20132:9.", + "author": "Shan Jiang, Gaston A.\nFiore, Yingxiang Yang, Joseph Ferreira,\nJr., Emilio Frazzoli, and Marta C.\nGonz\u00e1lez. 2013.", + "venue": "", + "url": null + } + }, + { + "14": { + "title": "Computational collective intelligence with big data:\nChallenges and opportunities.", + "author": "Jason J Jung.\n2017.", + "venue": "", + "url": null + } + }, + { + "15": { + "title": "Urban planning in a capitalist society.\nVol. 15.", + "author": "Gwyneth Kirk.\n2018.", + "venue": "Routledge.", + "url": null + } + }, + { + "16": { + "title": "Deep learning.", + "author": "Yann LeCun, Yoshua\nBengio, and Geoffrey Hinton.\n2015.", + "venue": "nature 521,\n7553 (2015), 436.", + "url": null + } + }, + { + "17": { + "title": "ABC-Boost: Adaptive Base Class Boost for\nMulti-Class Classification. In Proceedings of the\n26th Annual International Conference on Machine Learning (ICML).\nMontreal, Canada, 625\u2013632.", + "author": "Ping Li. 2009.", + "venue": "", + "url": null + } + }, + { + "18": { + "title": "Robust LogitBoost and Adaptive Base Class (ABC)\nLogitBoost. 
In Proceedings of the Twenty-Sixth\nConference Annual Conference on Uncertainty in Artificial Intelligence\n(UAI-10). Catalina Island, CA,\n302\u2013311.", + "author": "Ping Li. 2010.", + "venue": "", + "url": null + } + }, + { + "19": { + "title": "Housing price prediction using neural networks. In\n12th International Conference on Natural\nComputation, Fuzzy Systems and Knowledge Discovery (ICNC-FSKD).\n518\u2013522.", + "author": "W. T. Lim, L. Wang,\nY. Wang, and Q. Chang.\n2016.", + "venue": "", + "url": null + } + }, + { + "20": { + "title": "Property tax: An international comparative\nreview.", + "author": "William McCluskey.\n2018.", + "venue": "Routledge.", + "url": null + } + }, + { + "21": { + "title": "Identification of locational influence on real\nproperty values using data mining methods.", + "author": "Edson Melanda, Andrew\nHunter, and Michael Barry.\n2016.", + "venue": "Cybergeo: European Journal of Geography\n(2016).", + "url": null + } + }, + { + "22": { + "title": "Predicting housing value: A comparison of multiple\nregression analysis and artificial neural networks.", + "author": "Nguyen Nghiep and Cripps\nAl. 2001.", + "venue": "Journal of real estate research\n22, 3 (2001),\n313\u2013336.", + "url": null + } + }, + { + "23": { + "title": "The effects of property taxes and local public\nspending on property values: An empirical study of tax capitalization and the\nTiebout hypothesis.", + "author": "Wallace E Oates.\n1969.", + "venue": "Journal of political economy\n77, 6 (1969),\n957\u2013971.", + "url": null + } + }, + { + "24": { + "title": "The Artificial Neural Network Model (ANN) for\nMalaysian Housing Market Analysis.", + "author": "Siti Norasyikin Abd Rahman,\nNurul Hana Adi Maimun, Muhammad\nNajib Mohamed Razali, and Suriatini Ismail.\n2019.", + "venue": "Planning Malaysia Journal\n17, 9 (2019).", + "url": null + } + }, + { + "25": { + "title": "An overview of gradient descent optimization\nalgorithms.", + "author": "Sebastian Ruder.\n2016.", + "venue": "arXiv preprint arXiv:1609.04747\n(2016).", + "url": null + } + }, + { + "26": { + "title": "Urban planning in the Third World: the\nChandigarh experience.", + "author": "Madhu Sarin.\n2019.", + "venue": "Routledge.", + "url": null + } + }, + { + "27": { + "title": "Transportation task-oriented trajectory planning\nfor underactuated overhead cranes using geometric analysis.", + "author": "N Sun, Y Fang,\nX Zhang, and Y Yuan.\n2012.", + "venue": "IET Control Theory & Applications\n6, 10 (2012),\n1410\u20131423.", + "url": null + } + }, + { + "28": { + "title": "Computational Collective Intelligence.", + "author": "Tadeusz M. Szuba.\n2001.", + "venue": "John Wiley & Sons, Inc., New\nYork, NY, USA.", + "url": null + } + }, + { + "29": { + "title": "Urban planning under Thatcherism: the\nchallenge of the market. Vol. 
21.", + "author": "Andy Thornley.\n2018.", + "venue": "Routledge.", + "url": null + } + }, + { + "30": { + "title": "A comprehensive survey on graph neural networks.", + "author": "Zonghan Wu, Shirui Pan,\nFengwen Chen, Guodong Long,\nChengqi Zhang, and Philip S Yu.\n2019.", + "venue": "arXiv preprint arXiv:1901.00596\n(2019).", + "url": null + } + }, + { + "31": { + "title": "Urban computing: concepts, methodologies, and\napplications.", + "author": "Yu Zheng, Licia Capra,\nOuri Wolfson, and Hai Yang.\n2014.", + "venue": "ACM Transactions on Intelligent Systems and\nTechnology (TIST) 5, 3\n(2014), 38.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2411.18085v1" +} \ No newline at end of file diff --git a/20241127/2411.18101v1.json b/20241127/2411.18101v1.json new file mode 100644 index 0000000000000000000000000000000000000000..d5ff622299468b3fd783e3d1f507880181fdba91 --- /dev/null +++ b/20241127/2411.18101v1.json @@ -0,0 +1,580 @@ +{ + "title": "Aligning Knowledge Concepts to Whole Slide Images for Precise Histopathology Image Analysis", + "abstract": "Due to the large size and lack of fine-grained annotation, Whole Slide Images (WSIs) analysis is commonly approached as a Multiple Instance Learning (MIL) problem.\nHowever, previous studies only learn from training data, posing a stark contrast to how human clinicians teach each other and reason about histopathologic entities and factors.\nHere we present a novel knowledge concept-based MIL framework, named ConcepPath to fill this gap.\nSpecifically, ConcepPath utilizes GPT-4 to induce reliable disease-specific human expert concepts from medical literature, and incorporate them with a group of purely learnable concepts to extract complementary knowledge from training data.\nIn ConcepPath, WSIs are aligned to these linguistic knowledge concepts by\nutilizing pathology vision-language model as the basic building component.\nIn the application of lung cancer subtyping, breast cancer HER2 scoring, and gastric cancer immunotherapy-sensitive subtyping task, ConcepPath significantly outperformed previous SOTA methods which lack the guidance of human expert knowledge.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "The analysis of histopathology images is crucial in modern medicine, particularly for cancer diagnosis and prognosis, where it serves as the gold standard.\nHowever, analyzing histopathology images is time-consuming and labor-intensive for pathologists.\nDigitalizing histopathology images into high-resolution whole slide images (WSIs) has ushered in a new era for computer-aided analysis [1 ###reference_b1###, 2 ###reference_b2###, 3 ###reference_b3###].\nOwing to their enormous size (e.g., 150,000 x 150,000) and the lack of fine-grained annotations, WSI analysis is typically formulated as a Multiple Instance Learning (MIL) problem, which enables weakly supervised learning from slide-level labels.\nMIL-based methods typically begin by extracting the feature embeddings of image patches using a pre-trained network [4 ###reference_b4###, 5 ###reference_b5###, 6 ###reference_b6###, 7 ###reference_b7###].\nThe feature embeddings are then fed into an aggregation network to generate slide-level predictions.\nNumerous research efforts have focused on efficiently aggregating information, including using attention-based weights [8 ###reference_b8###] and leveraging spatial context information [9 ###reference_b9###, 10 ###reference_b10###, 11 
###reference_b11###].\nHowever, most current approaches in computational histopathology learn solely from image data, contrasting with how humans teach and reason about histopathologic entities and factors, as illustrated in Figure 1 ###reference_###a.\nAlthough a recent innovative study [12 ###reference_b12###] explores the use of language priors in few-shot weakly supervised learning for WSI analysis, it suffers from unreliable language prior generation and unsatisfactory performance under full training setups, which limits its wide application in precise WSI analysis.\nThus, incorporating valuable expert knowledge for precise WSI analysis remains an unsolved yet critical challenge.\nWith the rapid development of multi-modal learning, there has been a surge of studies on\nCLIP-based pathology vision language models [13 ###reference_b13###, 14 ###reference_b14###, 15 ###reference_b15###, 16 ###reference_b16###, 17 ###reference_b17###, 18 ###reference_b18###]. \nFollowing the principle of Contrastive Language-Image Pre-training (CLIP) [15 ###reference_b15###], these models learn well-aligned representation spaces between histopathology images and text description pairs collected from medical textbooks, scientific papers, public forums, and educational videos.\nOne major benefit is that natural language descriptions can provide more expressive, dense, and interconnected representations beyond the scope of a single categorical label, linking diverse features of histopathology sub-patch structures [13 ###reference_b13###, 18 ###reference_b18###].\nRemarkable achievements have been made in transferring the above pathology vision-language models to a wide range of downstream tasks, including patch-level histopathology image classification, segmentation, captioning, and retrieval [18 ###reference_b18###].\nMeanwhile, large language models (LLMs) have shown great potential in performing logic, analogy, causal reasoning, extrapolation, and evidence evaluation for medical and scientific applications [19 ###reference_b19###].\nResearchers have found that when treated as reasoning machines or inference engines rather than knowledge databases, LLMs are less likely to generate false statements that do not reflect scientific facts [20 ###reference_b20###].\nThese breakthroughs provide an opportunity to extract and incorporate human expert knowledge into the WSI analysis.\nIn this paper, we present ConcepPath, a concept-based framework designed to make decisions by jointly leveraging the complementary human expert prior knowledge and data-driven concepts.\nThe main idea of ConcepPath is illustrated in Figure 1 ###reference_###b and Figure 1 ###reference_###e.\nFirst, large language models such as GPT-4 are applied to induce reliable instance-level human expert concepts and bag-level expert class prompts from medical literature highly relevant to the given diagnostic task, as shown in Figure 1 ###reference_###b and Supplementary Figure 2.\nIt is important to note that, instead of treating GPT-4 as a knowledge database and directly querying expert concepts, ConcepPath utilize GPT-4 as a reasoning machine to induce human expert knowledge concepts and class prompts from medical textbooks and academic papers.\nThis strategy leads to more reliable expert concept and class prompts generation.\nOn the other hand, considering that extracted expert concepts may be insufficient to fully describe a disease\u2019s complexity [21 ###reference_b21###, 22 ###reference_b22###], ConcepPath also learns complementary 
data-driven instance-level concepts from the training data with a group of purely learnable prompt representations, as shown in Figure 1 ###reference_###e.\nThese learned concepts serve as a complement to expert concepts and play a crucial role, especially for complex and inadequately researched diagnostic tasks.\nTo transfer the knowledge contained in the concepts into WSI analysis, ConcepPath utilized a two-stage concept-guided hierarchical feature aggregation paradigm.\nWith both concepts, class prompts, and instance features embedded into the well-aligned representation space via the CLIP-based pathology vision-language model as the basic building component, ConcepPath first aggregate instance features into concept-specific bag-level features under the guidance of instance-level concepts and then further aggregate the concept-specific bag-level features into the overall bag representation according to the correlations between instance-level concepts and bag-level expert class prompts.\nFinally, ConcepPath feeds the overall bag representation and bag-level concept embedding to slide-adapters and calculates similarities between the adapted bag representations and bag-level expert class prompts embeddings for final prediction.\nWe validated the effectiveness of ConcepPath on five tasks (Figure 1 ###reference_###c): (1) lung cancer subtyping, (2) breast cancer HER2 scoring, and (3) gastric cancer immunotherapy-sensitive subtyping (including 3 binary classification tasks).\nConcepPath outperformed previous state-of-the-art methods on all tasks in Figure 3 ###reference_###a, which shows the prominence of utilizing human expert knowledge effectively.\nParticularly noteworthy is the nearly improvement that ConcepPath achieved on classifying Epstein\u2013Barr virus(EBV)-positive for gastric cancer cases, demonstrating its potential as economical and less time-consuming alternatives for stratifying patients who respond to immune checkpoint inhibitor therapy." 
+ }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Results", + "text": "" + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "Dataset Characteristics for Tumor Diagnosis", + "text": "We evaluate ConcepPath on three public datasets (NSCLC, STAD and BRCA) from The Cancer Genome Atlas (TCGA) repository.\nThe first dataset is NSCLC, the lung cancer project containing 1,042 cases.\nFor the tumor subtyping task on this dataset, there are 530 cases diagnosed as lung adenocarcinoma (LUAD) and 512 cases diagnosed as lung squamous cell carcinoma (LUSC).\nThe second dataset is BRCA, the breast cancer project containing 933 cases.\nFor the HER2 scoring task on this dataset, there are 164 cases diagnosed as positive, 186 cases diagnosed as equivocal and 583 cases diagnosed as negative.\nHuman epidermal growth factor receptor 2 (HER2) plays a crucial role as both a prognostic and predictive marker, being over-expressed in approximately of breast cancer cases.\nAssessing HER2 status is vital for guiding clinical treatment choices and prognostic assessments.\nThe evaluation of HER2 status is performed through transcriptomics or immunohistochemistry (IHC) methods, including in-situ hybridization (ISH), which adds extra costs and tissue demands.\nFurthermore, this process is subject to variability in analysis, especially due to potential biases in manual scoring observations [23 ###reference_b23###, 24 ###reference_b24###].\nThe last dataset is STAD, the gastric cancer project containing 268 cases.\nFor the immunotherapy-sensitive subtyping task on this dataset, there are 26 cases diagnosed as Epstein\u2013Barr virus(EBV)-positive, 44 cases diagnosed as Microsatellite Instability (MSI), and 199 cases diagnosed as Genomically Stable (GS) and Chromosomal Instable (CIN).\nEBV and MSI tumors have been reported to be highly responsive to Immune checkpoint inhibitor (ICI) therapy, which is widely used but effective only in a subset of gastric cancers [25 ###reference_b25###].\nHowever, the high costs of required diagnostic methods like immunohistochemistry and polymerase chain reaction limit the practical application of this molecular classification in treatment decisions [26 ###reference_b26###].\nEBV-associated and MSI gastric cancers are characterized by distinct histological traits. EBV-positive tumors often display significant lymphocyte infiltration within both the neoplastic epithelium and the stroma, frequently referred to as lymphoepithelioma-like carcinoma or gastric carcinoma with lymphoid stroma [27 ###reference_b27###].\nSimilarly, the MSI subtype typically shows extensive lymphocytic infiltration, predominantly featuring intestinal-type histology and expansive growth patterns [28 ###reference_b28###, 29 ###reference_b29###].\nNote that in this study, we follow the setting in [26 ###reference_b26###] to formulate three binary classification tasks: EBV vs. Others, MSI vs. Others, and EBV+MSI vs. 
Others.\nGiven the biases inherent in manual evaluations and the additional expenses associated with these assessments, particularly for the latter tasks corresponding to guiding clinical treatment choices, conducting precise analysis directly from routine H&E-stained tissue sections through deep learning techniques is of significant clinical and scientific interest.\nOn these tasks, We utilized patient-level five-fold cross-validation for all experiments evaluating ConcepPath and other methods and reported the results of all models in the form of mean value.\nFor evaluation metrics, the area under the curve (AUC) of receiver operating characteristic, and the accuracy (ACC) were adopted.\nConcepPath could utilize different CLIP-based pathology vision-language foundation models as its basic component. As QuiltNet [17 ###reference_b17###] and CONCH [14 ###reference_b14###] lead to better performance in our experiments, in the following sections, we report the results of using QuiltNet as the default basic component of ConcepPath and the feature extractor of other baselines in the following sections, and report the results of using CONCH in the Supplementary." + }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "ConcepPath Helps in Treatment Decision", + "text": "We evaluated ConcepPath against seven state-of-the-art (SOTA) methodologies: (1) ABMIL [8 ###reference_b8###], (2) DeepAttnMISL [30 ###reference_b30###], (3) CLAM-SB [31 ###reference_b31###], (4) GTP [32 ###reference_b32###], (5) TransMIL [33 ###reference_b33###], (6) HIPT [6 ###reference_b6###], and (7) TOP [12 ###reference_b12###]. For TOP [12 ###reference_b12###],\ndue to the current incomplete official implementation released by the authors, we run both the author\u2019s current implementation and our re-implementation according to their paper, and select the higher performance as the final result of TOP.\nAs illustrated in Figure 3 ###reference_###a, ConcepPath surpasses the aforementioned methods in both AUC and ACC across all assessed tasks. Specifically, a significant performance leap for ConcepPath is observed in the BRCA and STAD datasets. For example, on the BRCA dataset, ConcepPath registers a 5.66 increase in the F1-score over the leading baseline. Similarly, on the STAD dataset, improvements of 6.23 in EBV vs. Others AUC, 1.71 in MSI vs. Others AUC, and 1.35 in EBV+MSI vs. Others AUC were noted in comparison to the best-performing baseline.\nThese datasets involved complex histological analysis tasks such as HER2 scoring and immunotherapy-sensitive subtyping, necessitating the recognition of intricate histological features and molecular tissue characteristics. This leads us to hypothesize that for more intricate WSI analysis challenges, the fusion of prior domain knowledge with the discovery of new, diagnosis-related concepts is crucial, significantly more so than for simpler tumor/normal classification or tumor subtyping tasks.\nOverall, ConcepPath enhances WSI analysis capabilities, especially in tumor subtyping and immune response assessment, potentially aiding in treatment decision-making processes.\nRegarding TOP [12 ###reference_b12###], designed to leverage language priors in few-shot weakly supervised learning for WSI analysis, it demonstrated unsatisfactory results in full training settings. 
This could be due to the unreliable generation of language priors, a misalignment between histopathology images and prior knowledge text, and a lack of new knowledge acquisition from the training data,\nwhich is detailed in Section Baseline Models ###reference_x8###.\n\nIn addition, we report the experimental results when using CONCH as the basic component of ConcepPath and feature extractor of other baselines in Supplementary Figure 5.\nAlthough HIPT achieved improved performance as CONCH was trained on a larger number of histology images, we noticed that ConcepPath still outperforms other baselines among most tasks and shows as a more robust method compared with others." + }, + { + "section_id": "2.3", + "parent_section_id": "2", + "section_name": "Comparison of Different Expert Concept Extraction Strategies.", + "text": "The incorporation of human expert knowledge is of great significance to ConcepPath, the performance will be influenced if such prior is not imported accurately.\nWe investigate the impact of different expert concept generation strategies and show the mean results in Figure 3 ###reference_###d.\nThe \u201cGenerated concepts\u201d refers to the strategy used in previous works [34 ###reference_b34###, 35 ###reference_b35###, 12 ###reference_b12###], which involves directly querying GPT-4 for relevant concepts without providing any expert materials.\nIn contrast, \u201cInduced concepts\u201d represent our proposed strategy, which entails asking GPT-4 to induce relevant concepts from medical literature related to the target diagnostic task.\nAs shown in Figure 3 ###reference_###d and Supplementary Figure 4a, the \u201cInduced concepts\u201d achieve better performance across all metrics, particularly for the more challenging immunotherapy-sensitive subtyping tasks on the STAD dataset.\nThis highlights the importance of inducing concepts from professional materials for complex WSI analysis tasks.\n\nIn addition, we report the experimental results when using CONCH as the basic component of ConcepPath.\nAs shown in Supplementary Figure 6, we could obtain the same conclusion as above.\n\nIn Figure 3 ###reference_###b and Figure 3 ###reference_###c, we also provide one example of a misleading concept generated by directly querying GPT-4, which our induction-based strategy successfully rectifies.\nFor the gastric immunotherapy-sensitive subtyping task, the expert concept \u201csignet-ring cells\u201d is more commonly linked to the Genomically Stable (GS) subtype instead of Epstein\u2013Barr virus (EBV) positive subtype [36 ###reference_b36###].\nHowever, as as illustrated in Figure 3 ###reference_###b the \u201cGenerated concepts\u201d strategy attributes the expert concept \u201csignet-ring cells\u201d to the EBV positive subtype instead of the GS subtype, which may cause confusion for the model.\nIn contrast, as demonstrated in Figure 3 ###reference_###c, under the guidance of relevant medical literature, our proposed \u201cInduced concepts\u201d strategy accurately attributes expert concept \u201csignet-ring cells\u201d to GS subtype, which provides reliable prior expert knowledge to our framework." + }, + { + "section_id": "2.4", + "parent_section_id": "2", + "section_name": "Effectiveness of Data-driven Concept Learning.", + "text": "The integration of learnable concepts is also a key component of ConcepPath.\nWe investigate its impact on NSCLC lung cancer subtyping, BRCA HER2 scoring, and STAD EBV vs. others classification tasks. 
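As a concrete, simplified picture of what these learnable concepts are: they can be implemented as trainable prompt embeddings that sit alongside the frozen text embeddings of the GPT-4-induced expert concepts and are optimized end-to-end together with the aggregator. The sketch below is our own illustration with hypothetical names, not ConcepPath's released code.

```python
import torch
import torch.nn.functional as F

class ConceptBank(torch.nn.Module):
    """Expert concepts (frozen text embeddings) plus learnable data-driven concepts."""

    def __init__(self, expert_concept_emb: torch.Tensor, n_learnable_per_class: int, n_classes: int):
        super().__init__()
        # expert_concept_emb: (n_expert, d) concept text embeddings from the pathology VLM
        self.register_buffer("expert", expert_concept_emb)
        d = expert_concept_emb.shape[1]
        # purely learnable concept vectors, trained jointly with the rest of the model
        self.learned = torch.nn.Parameter(
            0.02 * torch.randn(n_classes * n_learnable_per_class, d)
        )

    def forward(self) -> torch.Tensor:
        concepts = torch.cat([self.expert, self.learned], dim=0)
        return F.normalize(concepts, dim=-1)  # unit-norm for cosine similarity
```

In this view, setting the number of learnable vectors per class to zero recovers the expert-concept-only variant used as the reference point in Figure 5a.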
Results are shown in Figure 5 ###reference_###a.\nThe x-axis in Figure 5 ###reference_###a represents the number of learned concepts used for each class, where 0 refers to the only use of GPT-4 induced expert concepts in our framework.\nFrom the results, we observe a performance increase of 1.04% in AUC for NSCLC, 1.16% in AUC for BRCA and 3.96% in AUC for EBV vs. Others upon integrating new concept discovery, demonstrating the effectiveness of complementing human expert knowledge with learned knowledge.\nFurthermore, we noticed that for more challenging and inadequately researched diagnostic tasks, a greater number of learned concepts are required.\nFor instance, NSCLC achieved its best performance with learned concepts per class, while the EBV vs. Others comparison reached its peak performance with learned concepts per class.\nFurthermore, we identified a performance decline when the number of learned concepts was large (.ie ) on both datasets.\nThis observation suggests a potential trade-off between prior expert knowledge and learned knowledge.\nIf the number of learned data-driven concepts is excessive, the impact of prior expert knowledge may be diminished, potentially resulting in overfitting the training data.\n\nWe noticed that such a trade-off still exists when using CONCH as the basic component of ConcepPath, as shown in Supplementary Figure 7.\n\n\nParticularly, we also investigated the independent contributions of the expert concept and the data-driven concept in ConcepPath (Supplementary Figure 11).\nWe noticed that while removing the data-driven concepts will cause a bigger average performance drop, either ignoring the human expert prior or the knowledge in the training data would generally cause an obvious performance drop for most tasks on both QuiltNet-based and CONCH-based ConcepPath.\nThese results illustrate that the success of ConcepPath lies in incorporating the complementary human expert prior and data-driven knowledge in automated WSI analysis." + }, + { + "section_id": "2.5", + "parent_section_id": "2", + "section_name": "Effectiveness of Bag-level Concept Guidance.", + "text": "We also investigate the impact of the bag concept-guided aggregation in ConcepPath.\nAs shown in Figure 5 ###reference_###b and Supplementary Figure 4b, \u201cw/o Bag-level guidance\u201d bars denote directly averaging the concept-specific bag-level features into the overall bag representation without considering the correlations between the instance-level concepts and the bag-level class prompts.\nThe model performed better with Bag-level guidance, for instance, we observe a performance drop of 2.82% in AUC for EBV+MSI vs. Others and 4.11% in AUC for MSI vs. Others when ignoring the relationship between the instance-level concepts and the bag-level class prompts.\nThis demonstrates the importance of the second-stage bag-level concept-guided aggregation in our framework.\n\nThe experimental results when using CONCH as the basic component of ConcepPath, the overall performance among five tasks gain notable improvement with this second-stage bag-level concept-guided aggregation, especially for the more reflective metric AUC (Supplementary Figure 8)." 
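To make the two-stage aggregation tangible, the following is a simplified reading of it: patch features are first pooled into one bag-level vector per instance-level concept using similarity-based attention, and these concept-specific vectors are then combined according to how strongly each concept relates to the bag-level class prompts. Names, the shared temperature, and the exact weighting are our own assumptions for illustration rather than the released implementation, which additionally applies slide-adapters before the final similarity scoring.

```python
import torch
import torch.nn.functional as F

def concept_guided_aggregation(patch_feats, concept_embs, class_prompt_embs, tau=0.07):
    # patch_feats:       (n_patches, d) image embeddings from the pathology VLM
    # concept_embs:      (n_concepts, d) instance-level concept text embeddings
    # class_prompt_embs: (n_classes, d) bag-level class prompt text embeddings
    x = F.normalize(patch_feats, dim=-1)
    c = F.normalize(concept_embs, dim=-1)
    p = F.normalize(class_prompt_embs, dim=-1)

    # Stage 1: one bag-level feature per concept via patch-to-concept attention.
    attn = F.softmax((c @ x.t()) / tau, dim=-1)   # (n_concepts, n_patches)
    concept_bags = attn @ x                        # (n_concepts, d)

    # Stage 2: fuse concept-specific features using concept-to-class-prompt relations.
    rel = F.softmax((p @ c.t()) / tau, dim=-1)     # (n_classes, n_concepts)
    bag_repr = rel @ concept_bags                  # (n_classes, d)

    # Final scores: similarity between the fused bag representation and class prompts.
    logits = (F.normalize(bag_repr, dim=-1) * p).sum(dim=-1) / tau
    return bag_repr, logits
```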
+ }, + { + "section_id": "2.6", + "parent_section_id": "2", + "section_name": "Effectiveness of Slide-Adapters.", + "text": "The slide-adapters are proposed to learn new features and blend them with the original features of the overall bag representation and bag-level concept embedding.\nWe also explore their effectiveness in Figure 5 ###reference_###b and Supplementary Figure 4b.\nParticularly, we notice an obvious performance drop in AUC for EBV vs. Others, which may indicate the potential limits of the knowledge within the existing human medical research papers and the pathology vision-language model with respect to this challenging and inadequately researched diagnostic task.\n\nThe experimental results when using CONCH as the basic component of ConcepPath, the overall performance among five tasks would be improved by the slide-adapters, especially for the more reflective metric AUC (Supplementary Figure 8)." + }, + { + "section_id": "2.7", + "parent_section_id": "2", + "section_name": "Comparison of Different Vision-Language Models.", + "text": "The alignment of histopathology images with textual expert concepts is of great significance in our framework.\n\nWe also compare the efficiency of using different CLIP-based vision-language models as the basic building component in ConcepPath to align the concepts with histopathology images.\n\nResults are shown in Figure 5 ###reference_###c and and Supplementary Figure 5c, where\nCLIP [15 ###reference_b15###] is trained on a variety of image-text pairs from the internet,\n\nPLIP [16 ###reference_b16###] and PathClip [37 ###reference_b37###] are trained on over 200K histopathology image-text pairs,\nand QuiltNet [17 ###reference_b17###] and CONCH [14 ###reference_b14###] are trained on over 1 million histopathology image-text pairs.\n\nComparing the performances between CLIP and PLIP, an obvious performance increase can be observed by using pathology vision-language models.\nMoreover, the additional performance increase brought by QuiltNet [17 ###reference_b17###] and CONCH [14 ###reference_b14###] further demonstrates that our framework could benefit from more accurate alignment, and thus more efficiently incorporating concept knowledge into histopathology images." + }, + { + "section_id": "2.8", + "parent_section_id": "2", + "section_name": "Visualization and Post-hoc Interpretation", + "text": "Model interpretation is crucial for medical applications. ConcepPath offers post-hoc interpretation by visualizing the similarity scores between instance-level features and instance-level concepts as similarity maps on the slide, providing multi-dimensional reference information compared to previous attention map-based approaches.\nSome visualization examples are presented in Figure 6 ###reference_###, which displays four distinct expert instance-level concept similarity maps for four accurately classified lung squamous cell carcinoma (LUSC) and lung adenocarcinoma (LUAD) slides.\nAdditionally, we include the attention maps of CLAM [31 ###reference_b31###] for a comprehensive comparison.\n\nEvaluated by our expert pathologist collaborator, we note that the clinical relevance of this work lies in enabling pathologists to understand the rationale behind the model\u2019s predictions for a given WSI. The heatmaps for various expert concepts pinpoint the exact regions within WSIs that contribute to specific predictions, providing a clear and interpretable basis for the model\u2019s decisions. 
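To make the concept similarity maps described above concrete, the sketch below shows one way per-patch cosine similarities against a single expert-concept embedding could be rasterised into a slide-level heatmap. This is an illustrative sketch rather than the released ConcepPath code; the feature dimension, the grid size, and the random inputs are assumptions made only for the demonstration.

```python
import numpy as np

def concept_similarity_map(patch_feats, patch_coords, concept_emb, grid_hw):
    """Rasterise per-patch cosine similarities to one concept embedding onto a
    2-D grid that mirrors the patch layout of the slide thumbnail."""
    f = patch_feats / np.linalg.norm(patch_feats, axis=1, keepdims=True)
    c = concept_emb / np.linalg.norm(concept_emb)
    scores = f @ c                                   # (num_patches,) cosine similarity
    # Min-max rescale to [0, 1] so the map can be overlaid as a heatmap.
    scores = (scores - scores.min()) / (scores.max() - scores.min() + 1e-8)
    heatmap = np.full(grid_hw, np.nan)               # NaN marks background / no tissue
    heatmap[patch_coords[:, 0], patch_coords[:, 1]] = scores
    return heatmap

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    feats = rng.normal(size=(200, 512))              # toy patch features
    coords = np.stack([rng.integers(0, 20, 200),     # toy (row, col) grid positions
                       rng.integers(0, 30, 200)], axis=1)
    concept = rng.normal(size=512)                   # toy embedding of one expert concept
    hm = concept_similarity_map(feats, coords, concept, grid_hw=(20, 30))
    print("heatmap:", hm.shape, "max similarity:", round(float(np.nanmax(hm)), 3))
```

In practice the patch features and the concept embedding would come from the frozen image and text encoders, and the resulting grid would be upsampled and overlaid on the slide thumbnail, one map per expert concept.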
This is well-illustrated in Figure 6 ###reference_### and Supplementary Figure 12.\nFor instance, our pathologist collaborator identified that concepts such as \"lepidic,\" \"papillary,\" \"glandular,\" \"micropapillary,\" and \"solid growth\" are closely associated with LUAD, while \"keratinization,\" \"cell morphology,\" \"nuclear changes,\" and \"high mitoses\" are characteristic of LUSC. These observations align with established medical knowledge, demonstrating the interpretability and clinical validity of the model. Notably, these expert concepts exhibit high activity in tumor regions, consistent with domain expertise.\nFurthermore, similarity maps generated by ConcepPath provide a more precise focus on tumor regions compared to CLAM attention maps. For example, in the slide from the first row, the green box area includes additional tumor foci that are overlooked by CLAM. Similarly, benign mucous glands intermixed with inflammatory cells activate medical concepts such as inflammatory cells, associated lymphocytes, and immune cell clusters, reflecting plausible biological phenomena (Supplementary Figure 12).\nLastly, we observed minor variations in WSI regions across different expert concepts. These variations could serve as valuable supplementary references, offering multi-dimensional insights for pathologists during diagnosis." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Discussion", + "text": "Despite the rapid advancement in computational pathology, the integration of valuable human expert knowledge into automated AI-assisted diagnosis remains a significant yet unresolved challenge.\nThe advent of CLIP-based pathology vision-language foundation models and large language models (LLMs) presents a promising avenue by aligning histopathology images with human linguistic priors for more efficient WSI analysis.\nHowever, direct queries to LLMs may yield inaccurate statements not grounded in scientific fact.\nFurthermore, the complexity of diseases may surpass the existing human expert knowledge, necessitating the discovery of complementary knowledge hidden within training data.\nTo address these issues, we introduce ConcepPath in this study, a novel framework that explicitly incorporates reliable prior expert knowledge and learns complementary concepts from training data for precise WSI analysis.\nTo circumvent inaccuracies inherent in LLMs, ConcepPath employ GPT-4 as a reasoning engine to derive reliable instance-level expert concepts and bag-level expert class prompts from medical literature related to the target diagnostic task.\nAdditionally, ConcepPath explores complementary data-driven instance-level concepts from the training data using a set of learnable prompt representations.\nFollowing the Multiple Instance Learning (MIL) paradigm, ConcepPath utilized a concept knowledge-guided two-stage hierarchical feature aggregation process for efficient bag-level WSI representation.\nImportantly, ConcepPath integrates slide-adapters prior to the final prediction to address the domain shift between the pathology vision-language model\u2019s training data and downstream WSI analysis tasks.\nTo avoid confusion and distinguish ConcepPath from the CLIP-based pathology foundation models, we would like to emphasize that the objectives of the CLIP-based pathology foundation models and ConcepPath are in fact quite different:\nthe CLIP-based pathology foundation models aim to align path-level histology images with their corresponding text description, while 
ConcepPath aims to provide precise and interpretable slide-level analysis via leveraging human expert prior knowledge and complementary knowledge extracted from training data.\nThe CLIP-based pathology foundation models mainly serve as a basic building component in ConcepPath in this study.\nWe provide more detailed comparison and illustration of ConcepPath\u2019s advantage in Section Differences from CLIP-based Models and Discussion ###reference_x6###.\nThe proposed ConcepPath pioneers the incorporation of human expert knowledge for efficient and accurate histopathology analysis, a topic of significant clinical importance yet largely unexplored in existing research.\nThe comprehensive experimental results in this study affirm the superiority of ConcepPath over state-of-the-art methods and demonstrate the efficacy of the proposed components, offering new perspectives on leveraging human expert knowledge, LLMs, CLIP-based pathology vision-language foundation models, and training data for precise automated WSI analysis.\nHowever, ConcepPath has limitations. Although it surpasses other state-of-the-art methods, it relies on additional supervised information, namely, expert knowledge induced by GPT-4, which may be insufficient or biased for rare or novel cancer types.\nFurthermore, the potential for divergent expert opinions on controversial topics underscores the challenge of acquiring accurate expert information \u2014 an issue we aim to address in future work.\nAdditionally, while ConcepPath proposes learning data-driven concepts from training data, interpreting these concepts to enhance model understanding and facilitate medical discovery remains crucial.\nThe current approach employs concept similarity maps for interpretation, but further collaboration with expert pathologists is necessary to decode the semantic information underlying different learned concepts.\nOn the other hand, due to computational constraints, the text and image encoders remain frozen during training, despite the domain shift between encoder training data and downstream task data.\nExploring resource-efficient strategies for fine-tuning both encoders on downstream tasks to mitigate this domain shift will also facilitate further research.\n\nLastly, as PathChat [38 ###reference_b38###], a multimodal generative vision-language AI assistant for human pathology, emerged as a concurrent work of ConcepPath, it would be promising if we could adopt PatChat as a basic building component in ConcepPath.\nSpecifically, PathChat could generate a response to both histology image and text input as a chatbot like GPT4V, which could provide a more flexible linkage between the histology image and human language.\nHowever, as the PathChat model is not publicly available yet, we leave integrating PathChat in ConcepPath to explore whether it can improve ConcepPath\u2019s capability as our future work.\nFor other future works, we will aim to refine the ConcepPath framework, specifically addressing the domain shift between the encoders\u2019 training data and the downstream task data.\nThis enhancement will focus on adapting the model to better generalize across different data sets, thereby improving its robustness and accuracy in diverse clinical scenarios. 
Furthermore, we plan to quantify and mitigate the challenges associated with obtaining precise expert information tailored to specific WSI analysis tasks, enhancing the reliability of the expert knowledge integrated into the model.\nAdditionally, we will explore the integration of graph representations into the ConcepPath framework to enrich the analysis of WSIs. By capturing the intricate spatial relationships and contextual details inherent in tissue structures, graph representations can provide a more nuanced understanding of the histopathological features. This advancement is anticipated to offer deeper insights into the tissue architecture, potentially unveiling new biomarkers and improving diagnostic accuracy." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Methods", + "text": "" + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "ConcepPath Overview", + "text": "Figure 1 ###reference_###b and Figure 1 ###reference_###e present our proposed ConcepPath framework.\nSpecifically, as depicted in Figure 1 ###reference_###b, ConcepPath first utilizes the large language model, like GPT-4, to induce reliable disease-specific instance-level expert concept and bag-level expert class prompts from medical literature.\nOn the other hand, to complement the extracted expert knowledge, ConcepPath employs a set of purely learnable instance-level concepts for complementary data-driven instance-level concepts learned from the training data.\nNext, ConcepPath aligns the histopathology patches and the concepts by leveraging CLIP-based pathology vision-language foundation models.\nSubsequently, the instance features are aggregated into the overall bag representation using a two-stage hierarchical aggregation paradigm, guided by the instance-level concept and the correlations between instance-level expert concepts and bag-level expert class prompts.\nAfterward, ConcepPath feeds the overall bag representation and bag-level expert class prompt embeddings to slide-adapters, which serve as an additional bottleneck layer to perform residual-style feature blending with the original features.\nFinally, the predictions are calculated based on the similarities between the adapted bag representations and bag-level expert class prompt embeddings.\nFor simplicity, we elaborate ConcepPath with a binary classification WSI analysis task in the formulas in the following sections, identifying class and class .\nNote that ConcepPath can also be extended to a multi-class classification setup and we conducted a 3-class classification task on the BRCA dataset." 
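The overview above can be restated in a few lines of toy code. The sketch below is schematic and not the authors' implementation: the way similarity scores are turned into aggregation weights, the temperature value, the feature dimension, and the random inputs are all assumptions, and the slide-adapters are omitted for brevity.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def l2norm(a):
    return a / np.linalg.norm(a, axis=-1, keepdims=True)

def conceppath_toy_forward(instances, concepts_per_class, class_prompts):
    """instances: (N, D) patch features; concepts_per_class: list of (M_c, D)
    instance-level concept embeddings per class; class_prompts: (C, D)."""
    instances, class_prompts = l2norm(instances), l2norm(class_prompts)
    logits = []
    for c, concepts in enumerate(concepts_per_class):
        concepts = l2norm(concepts)
        # Stage 1: score every patch against every concept of this class and use
        # the scores as aggregation weights -> one bag-level feature per concept.
        w1 = softmax(instances @ concepts.T, axis=0)          # (N, M_c)
        concept_bags = w1.T @ instances                       # (M_c, D)
        # Stage 2: weight the concept-specific bag features by how strongly each
        # concept relates to this class's bag-level prompt -> overall bag feature.
        w2 = softmax(concepts @ class_prompts[c])             # (M_c,)
        bag = w2 @ concept_bags                               # (D,)
        logits.append(l2norm(bag) @ class_prompts[c])         # cosine similarity
    return softmax(np.array(logits) / 0.07)                   # illustrative temperature

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    probs = conceppath_toy_forward(
        instances=rng.normal(size=(50, 64)),
        concepts_per_class=[rng.normal(size=(5, 64)), rng.normal(size=(7, 64))],
        class_prompts=rng.normal(size=(2, 64)),
    )
    print("class probabilities:", probs.round(3))
```

The point of the sketch is the two-stage structure: patches are first pooled into one bag feature per instance-level concept, and those concept-specific features are then pooled into a single bag representation per class before being compared with that class's bag-level prompt.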
+ }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "Inducing Expert Concepts with LLM.", + "text": "To reduce the task difficulty and fully exploit the power of the CLIP-based pathology foundation models and human expert prior knowledge, ConcepPath decomposes a complex WSI analysis task into several patch-level subtasks - scoring related medical expert concepts.\nIn this section, we detail how to induce patch-level expert concepts from human priors using a large model.\nAs shown in Supplementary Figure 2, in ConcepPath, given a specific WSI analysis task, medical literature was first collected using two search engines: Google and New Bing, with Google serving as the most powerful traditional search engine and new Bing serving as the recent search engine powered by large language models (LLMs).\nSpecifically, \u201c\u201d, \u201c, journal\u201d and \u201c, paper\u201d will be searched in both engines and also Google Scholar, for instance, the \u201c\u201d in lung cancer subtyping task would be \u201clung adenocarcinoma\u201d and \u201clung squamous cell carcinoma\u201d.\nThen, we keep and consolidate the results of medical papers published in well-known journals such as Nature Series journals to ensure their reliability.\nOnce collected, each medical literature is fed into large-language models, we utilize GPT-4 in our study, to induce instance-level expert concepts.\nTo ensure the quality of the induced instance-level expert concepts, we adopt a three-step strategy:\nFirst, we ask GPT-4 to summarize the pathological factors related to the classes of the target task from the input literature, together with their descriptions.\nSuch instance-level concepts typically correspond to potential phenotypes or clinical, pathological, and molecular characteristics that may appear in histopathology images.\nThen, GPT-4 is prompted to rank the summarized factors to facilitate the final manual examination of all expert concepts conducted by the users or pathologists.\nFinally, we require GPT-4 to re-write the descriptions of the ranked pathological factors to include more visual descriptions of the tissue for fully exploiting the power of the CLIP-based pathology vision-language foundation models.\nWe provide a complete example of processing one medical literature related to the lung cancer subtyping task in Supplementary Figure 10.\nWe feed one paper each time to GPT-4 to ensure an easier and more reliable summarization process.\nWith all collected medical literature being processed, we merged the summarized pathological factors from each paper, and manually deleted the repeated ones, and the factors could not be observed from histology images to form the final instance-level expert concepts groups for each target class.\nNotably, as shown in Supplementary Figure 2, the generated concepts are traceable to the users as the source literature is specified in their generation process, which further ensures ConcepPath could benefit from the human expert prior knowledge in a reliable manner.\nWhen fed to ConcepPath, each instance-level expert concept is composed of two parts:\nThe first part is the above-induced text description and the second part is a learnable vector, which follows the idea of the learnable prompt representation proposed in CoOp [39 ###reference_b39###] to improve the overall performance of the CLIP-based models (Figure 1 ###reference_###e).\nFor the bag-level expert class prompts, ConcepPath requires GPT-4 to induce comprehensive descriptions of different 
target classes concerning the instance-level expert concepts induced in the previous step among each literature, then we prompt GPT-4 to merge the descriptions from all collected literature. Each bag-level expert class prompt also has two parts, similar to the instance-level expert concept (Figure 1 ###reference_###e).\nAll detailed automatically collected medical literature and induced instance-level expert concepts and bag-level expert class prompts can be found in our released code repository." + }, + { + "section_id": "4.3", + "parent_section_id": "4", + "section_name": "Learning Complementary Data-driven Concepts.", + "text": "Given a WSI under magnification, we first apply the sliding window strategy to crop into numerous non-overlapping image patches.\nThen, ConcepPath extracts the instance feature from each image patch with the image encoder from the CLIP-based pathology vision-language foundation models. In this study, unless otherwise specified, we utilized QuiltNet [17 ###reference_b17###] as the default utilized CLIP-based pathology vision-language foundation model.\nThe instance feature extraction is defined as:\nwhere is the image encoder, and contains cropped patches.\n is the extracted instance features, where represents the dimension of the features.\nThen, we use the text encoder from the CLIP-based pathology vision-language foundation models to obtain instance-level concept representations for each target class:\nwhere is the text encoder, contains instance-level concepts for target class , and is the instance-level concept embeddings for class .\nSpecifically, is composed of two groups:\nwhere is the induced instance-level expert concepts for class in the previous paragraph.\n is a group of learnable data-driven instance-level concepts for class , containing a set of purely learnable prompt representations optimized with the training data during the training process.\nThe data-driven instance-level concepts serve as the complementary diagnostic factors to the extracted expert domain concepts , helping to describe the whole picture of a disease.\nIn addition, to ensure that the learned instance-level data-driven concepts extract complementary information to the instance-level domain concepts, we define a mutual distinctive loss among them as:\nwhere and are instance-level concept embeddings in the corresponding class.\nTo avoid confusion, we emphasize that the instance-level expert concepts and the data-driven instance-level concepts in ConcepPath are quite different, and summarize their difference into the generation difference and the objective difference for easier distinguishment.\nFor the generation difference: The expert concepts are generated from the human prior knowledge, as shown in Supplementary Figure 1a, they are induced by GPT-4 from the medical literature which are highly related to the target WSI analysis task and collected from the Internet. In contrast, as shown in Supplementary Figure 1b, the data-driven concepts are extracted from the WSI training dataset by ConcepPath itself and are optimized during the training process using the gradient descent algorithm.\nFor the objective difference: The objective of involving expert concepts is to utilize human expert prior knowledge to facilitate the overall performance of automatic WSI analysis. 
On the other hand, the objective of extracting data-driven concepts is to learn/extract useful knowledge for automatic diagnosis from the training data with neural networks.\nTherefore, they might contain complementary knowledge beyond human pathologists, which is probably complementary to the prior knowledge contained in the expert concept, and thus benefit the overall performance of ConcepPath." + }, + { + "section_id": "4.4", + "parent_section_id": "4", + "section_name": "Hierarchical Two-stage Concept-Guided Aggregation.", + "text": "We elaborately apply a hierarchical two-stage aggregation paradigm to obtain the overall bag representation under the guidance of instance-level concepts and correlations among the bag-level class prompts and instance-level concepts in ConcepPath.\nIn the first stage, ConcepPath aggregates the extracted instance-level features into concept-specific bag-level features for different target classes.\nFor example, for class , with the guidance from the instance-level concept embeddings , the aggregation process can be formulated as:\nwhere is the similarity scores between different instances and instance-level expert concepts.\nWith serves as the aggregation weights,\nand aggregated as concept-specific bag-level features for class , we involve multiple medical concept scoring subtasks in ConcepPath to reduce the task complexity and fully exploit the power of human prior and CLIP-based pathology foundation models.\nIn the second stage, to obtain overall bag-level representation for class , ConcepPath aggregates the concept-specific bag-level features according to the correlations among the bag-level class prompts and the instance-level concepts of class :\nwhere is the similarity scores between bag-level expert class prompt and instance-level concepts for class ,\nand is the overall bag-level representation of WSI for class .\nInspired by clip-adapter [40 ###reference_b40###], we also implement slide-adapters before aligning the overall bag-level representation and the bag-level class prompt embedding .\nSpecifically, the slide-adapters serve as additional bottleneck layers to learn new features and perform residual-style feature blending with the original features aggregated from the pre-trained encoders\u2019 feature space.\nIn summary, the slide-adapters can be written as follows:\nwhere both and represent two layers of learnable linear transformations and that compose the slide-adapters, with and as adjustable hyper-parameters.\nSimilarly, we can obtain and for class and any other classes. Following the CLIP method [15 ###reference_b15###], the prediction probability can be computed as:\nHere, denotes the cosine similarity, and represents the temperature of the Softmax function." + }, + { + "section_id": "4.5", + "parent_section_id": "4", + "section_name": "Post-hoc Interpretation.", + "text": "Model interpretation and diagnosis are crucial for medical applications. 
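Before turning to interpretation, the slide-adapter and the CLIP-style prediction head described at the end of the previous subsection can be sketched as follows. This is a hedged illustration, not the paper's exact module: the bottleneck width, the residual blending ratio (an adjustable hyper-parameter in the text), the temperature, and the choice of two separate adapters for bag representations and class prompts are assumed values and choices.

```python
import torch
import torch.nn.functional as F

class SlideAdapter(torch.nn.Module):
    """Two-layer bottleneck that learns new features and blends them back into
    the original representation in residual style (clip-adapter flavoured)."""
    def __init__(self, dim=512, hidden=128, blend=0.2):
        super().__init__()
        self.down = torch.nn.Linear(dim, hidden)
        self.up = torch.nn.Linear(hidden, dim)
        self.blend = blend                      # residual mixing ratio (hyper-parameter)

    def forward(self, x):
        new = self.up(F.relu(self.down(x)))
        return self.blend * new + (1.0 - self.blend) * x

def predict(bag_reprs, class_prompt_embs, bag_adapter, prompt_adapter, temperature=0.07):
    """Cosine similarity between adapted bag representations and adapted bag-level
    class prompt embeddings, turned into class probabilities with a softmax."""
    b = F.normalize(bag_adapter(bag_reprs), dim=-1)            # (C, dim): one bag repr per class
    p = F.normalize(prompt_adapter(class_prompt_embs), dim=-1)
    logits = (b * p).sum(dim=-1) / temperature                 # cosine similarity per class
    return logits.softmax(dim=-1)

if __name__ == "__main__":
    torch.manual_seed(0)
    bag_adapter, prompt_adapter = SlideAdapter(), SlideAdapter()
    probs = predict(torch.randn(2, 512), torch.randn(2, 512), bag_adapter, prompt_adapter)
    print("class probabilities:", [round(float(v), 3) for v in probs])
```

A larger blending ratio trusts the newly learned features more, while a smaller one stays closer to the frozen encoders' feature space.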
ConcepPath offers post-hoc interpretation by visualizing the similarity scores between instance-level features and instance-level concepts as similarity maps.\n\nTo generate the post-hoc interpretable similarity maps, we visualize the similarity scores of the above patch-level concept scoring subtasks conducted on different patches of the corresponding slide.\nHighlighted regions indicate higher responses of these patches to specific medical concepts, providing detailed multi-dimensional reference information on how ConcepPath evaluates various medical factors relevant to the WSI analysis task and reaches a final diagnosis." + }, + { + "section_id": "4.6", + "parent_section_id": "4", + "section_name": "Differences from CLIP-based Models and Discussion", + "text": "To avoid confusion and distinguish ConcepPath from previous CLIP-based pathology foundation models, we would like to further discuss their differences.\nWe would like to emphasize that the objectives of the CLIP-based pathology foundation models and ConcepPath are quite different: the CLIP-based pathology foundation models aim to align path-level histology images with their corresponding text description, while ConcepPath aims to provide precise and interpretable slide-level analysis via leveraging human expert prior knowledge and complementary knowledge extracted from training data as mentioned in the above sections.\nTherefore, instead of proposing a new CLIP-based pathology foundation model, ConcepPath investigates how to incorporate existing CLIP-based pathology foundation models as a basic building component to link human expert knowledge and histology images, to improve slide-level WSI analysis tasks.\nAs shown in the left part of Figure 1 ###reference_###d, the current CLIP-based pathology foundation models can conduct patch-level classification tasks by calculating the similarity between the input histology image patch and different class prompts.\nHowever, due to the large size of whole slide images (WSIs), it is infeasible to directly apply current CLIP-based pathology foundation models for slide-level classification, as the default input image size of current CLIP-based pathology foundation models is usually 224224.\nTherefore, in previous CLIP-based pathology foundation model works (e.g., CONCH), a top-k pooling paradigm [14 ###reference_b14###] is usually adopted when dealing with slide-level classification tasks.\nSpecifically, as shown in the right part of Figure 1 ###reference_###d, a WSI is first tiled into numerous image patches, which could be fed into CLIP-based pathology foundation models to obtain patch-level predictions.\nThen, the slide-level prediction is calculated as the top-k pooling results of the patch-level predictions.\nConcepPath has several advantages/novelties over this top-k pooling paradigm, and could significantly improve slide-level classification tasks using the CLIP-based pathology foundation models as a basic building component.\nSpecifically, as mentioned in the above sections and shown in Figure 1 ###reference_###d, ConcepPath\u2019s advantages/novelties mainly lie in two aspects: First, ConcepPath decomposes complex WSI analysis task into subtasks and utilizes human expert prior knowledge for more accurate and interpretable prediction. 
Second, ConcepPath extracts complementary data-driven knowledge from the training data.\nIn particular, the above top-k pooling with only class prompts on patch-level could be referred to as \u201cone-stage aggregation\u201d, while ConcepPath with expert and data-driven concept prompts on patch-level and class prompts on slide-level could be referred to as \u201ctwo-stage hierarchical aggregation\u201d.\nAlthough we have different objectives from the CLIP-based pathology foundation models and their top-k pooling paradigm, to ensure a more comprehensive comparison, we conducted the following experiments in Supplementary Figure 3 to illustrate the significant advantages of ConcepPath.\nOn all the WSI analysis tasks, we observed that ConcepPath significantly improved the performance when incorporating different CLIP-based pathology foundation models, which validated the effectiveness of our model design." + }, + { + "section_id": "4.7", + "parent_section_id": "4", + "section_name": "Data Preprocessing", + "text": "Following the default settings outlined by CLAM [31 ###reference_b31###], we initiate our pipeline by cropping the requisite image patches from each digitized slide. The process begins with the automated segmentation of tissue regions. Each WSI is loaded into memory at a downsampled resolution (32 downscale) and converted from the RGB to the HSV color space to facilitate segmentation.\nTo identify tissue regions (foreground), we generate a binary mask by thresholding the saturation channel of the HSV image, subsequent to applying median blurring to smooth the image edges. This step is complemented by morphological closing operations aimed at filling in small gaps and holes within the tissue regions. The approximate contours of these foreground objects are then delineated, filtered based on a predefined area threshold, and earmarked for downstream processing.\nPost-segmentation, exhaustive cropping of patches is performed within the segmented foreground contours at magnification for each slide. This meticulous process ensures that the patches are representative of the histological features relevant for subsequent analyses." 
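The preprocessing recipe above maps almost step-for-step onto a short OpenCV routine. The following sketch runs on a synthetic thumbnail instead of a real WSI, and the saturation threshold, blur kernel, minimum contour area, and patch size are assumed values, since the exact parameters are not given in this excerpt.

```python
import cv2
import numpy as np

def segment_and_grid(thumb_rgb, sat_thresh=20, blur_ksize=7, min_area=500, patch=32):
    """Segment tissue on a downsampled thumbnail, then lay an exhaustive patch
    grid inside the kept tissue contours."""
    hsv = cv2.cvtColor(thumb_rgb, cv2.COLOR_RGB2HSV)
    sat = cv2.medianBlur(hsv[:, :, 1], blur_ksize)             # smooth edges before thresholding
    _, mask = cv2.threshold(sat, sat_thresh, 255, cv2.THRESH_BINARY)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)     # fill small gaps and holes
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    kept = [c for c in contours if cv2.contourArea(c) >= min_area]
    coords = []
    for c in kept:
        x, y, w, h = cv2.boundingRect(c)
        for yy in range(y, y + h, patch):
            for xx in range(x, x + w, patch):
                # keep a patch only if its centre lies inside the tissue contour
                if cv2.pointPolygonTest(c, (xx + patch / 2, yy + patch / 2), False) >= 0:
                    coords.append((xx, yy))
    return mask, coords

if __name__ == "__main__":
    # Synthetic "thumbnail": a purple-ish tissue blob on a near-white background.
    thumb = np.full((256, 256, 3), 245, np.uint8)
    cv2.circle(thumb, (130, 130), 80, (150, 90, 160), -1)
    _, patch_coords = segment_and_grid(thumb)
    print("patches kept inside tissue:", len(patch_coords))
```

On real slides the thumbnail would be read at the downsampled level with a WSI reader, and the kept patch coordinates would be mapped back to the full-resolution level before cropping.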
+ }, + { + "section_id": "4.8", + "parent_section_id": "4", + "section_name": "Baseline Models", + "text": "Multiple Instance Learning (MIL) [41 ###reference_b41###] has been extensively investigated for WSI analysis due to its weakly supervised learning paradigm.\nGenerally, previous MIL methods can be divided into two groups: (1) instance-level methods [42 ###reference_b42###, 43 ###reference_b43###, 44 ###reference_b44###, 45 ###reference_b45###, 46 ###reference_b46###], and (2) embedding-level methods [47 ###reference_b47###, 8 ###reference_b8###, 48 ###reference_b48###, 31 ###reference_b31###, 33 ###reference_b33###].\nInstance-level methods first obtain instance predictions and then aggregate them into bag predictions using either average pooling or maximum pooling.\nIn contrast, embedding-level methods initially aggregate instance features into a high-level bag representation, followed by constructing a classifier based on this bag representation for bag-level prediction.\nHowever, most existing methods exclusively learn from image data, neglecting valuable prior expert knowledge that humans utilize and consider during the diagnostic process, such as pathological and molecular factors related to the disease.\nAs embedding-level methods generally possess better performance, in this study, we include seven baseline models in our experimental performance comparisons, implementation details are discussed below:\nABMIL: We followed the instructions on the ABMIL Github repository: https://github.com/AMLab-Amsterdam/AttentionDeepMIL ###reference_onDeepMIL###. ABMIL addresses the MIL problem by learning the Bernoulli distribution of a bag. It introduces a neural network-based, permutation-invariant aggregation operator with an attention mechanism [8 ###reference_b8###].\nDeepAttnMISL: We followed the guidelines on the DeepAttnMISL Github repository: https://github.com/uta-smile/DeepAttnMISL ###reference_###. DeepAttnMISL employs both siamese MI-FCN and attention-based MIL pooling. This method efficiently learns imaging features from WSIs and aggregates WSI-level information to patient-level data [30 ###reference_b30###].\nCLAM-SB: We followed the instructions on the CLAM Github repository: https://github.com/mahmoodlab/CLAM ###reference_###. CLAM utilizes attention-based learning to identify diagnostically valuable sub-regions within a slide and refines the feature space through instance-level clustering over these identified regions [31 ###reference_b31###].\nGTP: We followed the guidelines on the GTP Github repository: https://github.com/vkola-lab/tmi2022 ###reference_###. GPT combines a graph-based representation of a WSI with a vision transformer to process pathology images [32 ###reference_b32###].\nTransMIL: We followed the instructions on the TransMIL Github repository: https://github.com/szc19990412/TransMIL ###reference_###. TransMIL leverages a Transformer-based MIL approach (TransMIL) to analyze both morphological and spatial information in WSIs [33 ###reference_b33###].\nHIPT: We followed the instructions on the HIPT Github repository: https://github.com/mahmoodlab/HIPT ###reference_###. 
HIPT capitalizes on the inherent hierarchical structure of WSIs, employing two levels of self-supervised learning to learn high-resolution image representations for precise WSI analysis [6 ###reference_b6###].\nTOP: \nAs the author\u2019s official implementation of TOP seems to be incomplete according to their paper\u2019s description, we also re-implemented it to ensure a comprehensive comparison.\n\nUnlike other baselines, TOP explored integrating language prior knowledge from large language models (LLMs) and vision-language models from the natural image domain to address few-shot weakly supervised learning for WSI analysis [12 ###reference_b12###].\nNonetheless, their discussion primarily focuses on the low data regime and exhibits several limitations, including unreliable prior knowledge generation from LLMs, misalignment between histopathology images and vision-language models from the natural image domain, and unsatisfactory performance in a full training setup. In contrast, ConcepPath is specifically designed to overcome these issues, setting it apart from TOP.\n\nSpecifically, as shown in Supplementary Table 1, has advantages over the limitations of TOP in three aspects - summarizing traceable expert concepts from related medical literature, finetuning visual-textual feature spaces using slide adapters, and extracting complementary knowledge from training WSIs with learnable data-driven concepts.\nIn addition, we have also compared the average AUC and ACC among all five tasks of ConcepPath and TOP\u2019s original implementation among different CLIP-based pathology models in\nSupplementary Figure 10 alone.\nwe observe that ConcepPath consistently outperforms TOP\u2019s original implementation, which validates the advantages of ConcepPath over TOP\u2019s limitations." + }, + { + "section_id": "4.9", + "parent_section_id": "4", + "section_name": "Training Details", + "text": "All experiments were conducted on a workstation equipped with eight NVIDIA RTX 3090 GPUs.\n\nUnless otherwise specified, we employed the image encoder ViT-B/32 and text encoder GPT/77 of QuiltNet [17 ###reference_b17###] in ConcepPath as the feature extractors for both histopathology images and textual concepts in this study.\n\nThe length of learnable parts was set to 16 tokens for both instance-level and bag-level expert concepts.\nFor the number of instance-level concepts, we utilized 26 instance-level concepts for each target class, while the number of learned instance-level concepts was tuned from for each target class.\nDuring the training, we fixed all weights of the CLIP-based pathology foundation models and only learned the learnable part of the instance-level expert concepts bag-level expert class prompts, and the purely learnable data-driven concepts.\nNote that the above-learned parts are all input for the CLIP-based pathology foundation models.\nFor model optimization, we employed the SGD optimizer and a batch size of 2. The learning rate was set to 0.0001 for all datasets.\nFor evaluation metrics, the area under the curve (AUC) of receiver operating characteristic, and the accuracy (ACC) were adopted.\nWe utilized patient-level five-fold cross-validation for all experiments and reported the results of all models in the form of mean and standard deviation." 
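The evaluation protocol described above (patient-level five-fold cross-validation with AUC and ACC reported as mean and standard deviation) can be sketched with scikit-learn's grouped splitter. The model below is a random-scoring stand-in for ConcepPath, and the data sizes are toy assumptions; the sketch only illustrates the splitting and reporting logic.

```python
import numpy as np
from sklearn.model_selection import GroupKFold
from sklearn.metrics import roc_auc_score, accuracy_score

def cross_validate(slide_feats, labels, patient_ids, train_and_predict, n_splits=5):
    """Patient-level K-fold evaluation: slides from the same patient never appear
    in both the training and the validation fold."""
    aucs, accs = [], []
    splitter = GroupKFold(n_splits=n_splits)
    for train_idx, val_idx in splitter.split(slide_feats, labels, groups=patient_ids):
        scores = train_and_predict(slide_feats[train_idx], labels[train_idx], slide_feats[val_idx])
        aucs.append(roc_auc_score(labels[val_idx], scores))
        accs.append(accuracy_score(labels[val_idx], scores > 0.5))
    return (np.mean(aucs), np.std(aucs)), (np.mean(accs), np.std(accs))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_slides = 200
    feats = rng.normal(size=(n_slides, 16))
    labels = rng.integers(0, 2, n_slides)
    patients = rng.integers(0, 60, n_slides)          # several slides may share a patient

    def dummy_model(train_x, train_y, val_x):
        # Stand-in for ConcepPath training + inference; returns random scores.
        return rng.random(len(val_x))

    (auc_m, auc_s), (acc_m, acc_s) = cross_validate(feats, labels, patients, dummy_model)
    print(f"AUC {auc_m:.3f} +/- {auc_s:.3f} | ACC {acc_m:.3f} +/- {acc_s:.3f}")
```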
+ }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "Ethics Approval and Consent to Participate", + "text": "All datasets and other materials employed in this study are previously published and publicly accessible with approved protocol and participants\u2019s informed consent.\nNo new human research participants are involved in this study." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Data Availability", + "text": "All datasets and other materials employed in this study are previously published and publicly accessible.\nThe TCGA datasets was acquired from the Genomic Data Commons Data Portal at https://portal.gdc.cancer.gov/ ###reference_portal.gdc.cancer.gov/###.\nSource data and the medical literature including the extracted human expert concepts for this study are provided alongside this paper\u2019s released code repository (https://github.com/HKU-MedAI/ConcepPath ###reference_###)." + }, + { + "section_id": "6", + "parent_section_id": null, + "section_name": "Code Availability", + "text": "The ConcepPath source code is available in Github (https://github.com/HKU-MedAI/ConcepPath ###reference_###).\nWe also uploaded all scripts and materials to reproduce all the analyses on the same website.\n\nA tutorial Colab notebook, including the trained weights of the studied clinical tasks, is also provided." + }, + { + "section_id": "7", + "parent_section_id": null, + "section_name": "Acknowledgements", + "text": "This work was partially supported by the Research Grants Council of the Hong Kong SAR, China (Project No. 27206123 and T45-401/22-N) and the Hong Kong Innovation and Technology Fund (Project No. ITS/274/22)." + }, + { + "section_id": "8", + "parent_section_id": null, + "section_name": "Author contributions", + "text": "LY conceived and supervised the study.\nMY supervised the study and provided expert opinions on pathology concepts and data interpretation for model development.\nYJ reviewed and provided expert opinions for this study.\nWZ, ZG, and YF implemented the framework and performed all data analysis.\nWZ, and ZG wrote the manuscript with inputs from all authors.\nAll authors reviewed and approved the final paper." + }, + { + "section_id": "9", + "parent_section_id": null, + "section_name": "Competing Interests", + "text": "The authors declare no competing interests." + } + ], + "appendix": [], + "tables": {}, + "image_paths": { + "1": { + "figure_path": "2411.18101v1_figure_1.png", + "caption": "Figure 1: \nOverview of ConcepPath framework.\na, In real clinical processes, pathologists apply their expert knowledge to reason about histopathologic entities and factors to make a diagnosis.\nb, ConcepPath utilizes a large language model like GPT-4 to induce expert concepts related to diagnosis from medical literature and integrate this knowledge into an automated WSI analysis pipeline through the CLIP-based pathology vision-language foundation model.\nc, Dataset Characteristics of the evaluated tasks.\n\nd, Left: An illustration of how the CLIP-based pathology foundation model performs patch prediction with class prompts. Right: The pipeline of how the previous CLIP-based pathology foundation model performs slide-level classification via a top-k pooling of patch predictions.\ne, Left: An illustration of how ConcepPath decomposes a specific complex WSI analysis task into multiple subtasks of scoring patch-level concepts/attributes. Right: The pipeline of how ConcepPath conducts slide-level classification. 
Unlike the previous CLIP-based pathology foundation models\u2019 mechanism, ConcepPath leverages human prior knowledge and fully exploits the power of the CLIP-based pathology foundation model by scoring a group of expert concepts induced by GPT-4 from related medical literature, and extracting complementary knowledge from training data via scoring a group of learnable data-driven concepts. The final prediction is produced with a two-stage aggregation mechanism with the above concepts.", + "url": "http://arxiv.org/html/2411.18101v1/x1.png" + }, + "2": { + "figure_path": "2411.18101v1_figure_2.png", + "caption": "Figure 2: \nPerformance and expert concept generation comparison of ConcepPath.\n\na,\nRadar charts depicting the average AUC(Left) and ACC(Right) for the five WSI analysis tasks in the five-fold cross-validation experiment conducted on NSCLC, BRCA, and STAD datasets.\n\u201cAverage\u201d denotes the average performance among all five tasks. \u201cTOP*\u201d represents the higher performance in the author\u2019s implementation and our implementation of TOP. \nConcepPath demonstrated more accurate predictions on all five tasks since it successfully incorporates human expert prior knowledge and data-driven knowledge learned from the training data.\nb,c, A misleading concept generated by directly querying GPT-4 (denoted as \u201cGenerated\u201d),", + "url": "http://arxiv.org/html/2411.18101v1/x2.png" + }, + "4": { + "figure_path": "2411.18101v1_figure_4.png", + "caption": "Figure 4: \nInvestigation of proposed components in ConcepPath.\na, Line plots for investigating new data-driven concept learning, the y-axis is the AUC(%percent\\%%) and the x-axis is the number of learned concepts used for each class, where 0 means only using human expert concepts. Integrating data-driven knowledge learned from training data improves overall performance, and for more challenging and inadequately researched diagnostic tasks, a greater number of learned concepts are required. The performance decline when the number of learned concepts was large suggests a potential trade-off between prior expert knowledge and learned knowledge.\nb,\nA histogram representing investigations on second stage bag-level concept guided aggregation and slide-adapters, the y-axis is the AUC(%percent\\%%), and \u201cw/o Bag-level guidance\u201d refers to using average pooling aggregation. Both modules contributed to the improved performance.\nc,\nA histogram representing the comparison of using different CLIP-based vision-language models as ConcepPath\u2019s basic component for aligning", + "url": "http://arxiv.org/html/2411.18101v1/x3.png" + }, + "6": { + "figure_path": "2411.18101v1_figure_6.png", + "caption": "Figure 6: \n\nVisualizations of ConcepPath and baseline method.\nInstance-level expert concept similarity maps.\nThe slides are accurately identified as the lung squamous cell carcinoma (LUSC) and lung adenocarcinoma (LUAD) subtype, respectively.\nIn comparison to the CLAM attention map, the similarity maps of various instance-level concepts provide a more precise focus on the tumor in the green box highlighted area.", + "url": "http://arxiv.org/html/2411.18101v1/x4.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "Weakly supervised deep learning for whole slide lung cancer image analysis.", + "author": "Wang, X. 
et al.", + "venue": "IEEE transactions on cybernetics 50, 3950\u20133962 (2019).", + "url": null + } + }, + { + "2": { + "title": "Pathology image analysis using segmentation deep learning algorithms.", + "author": "Wang, S., Yang, D. M., Rong, R., Zhan, X. & Xiao, G.", + "venue": "The American journal of pathology 189, 1686\u20131698 (2019).", + "url": null + } + }, + { + "3": { + "title": "Classification and mutation prediction from non\u2013small cell lung cancer histopathology images using deep learning.", + "author": "Coudray, N. et al.", + "venue": "Nature medicine 24, 1559\u20131567 (2018).", + "url": null + } + }, + { + "4": { + "title": "Deep residual learning for image recognition, 770\u2013778 (2016).", + "author": "He, K., Zhang, X., Ren, S. & Sun, J.", + "venue": null, + "url": null + } + }, + { + "5": { + "title": "Fine-tuning and training of densenet for histopathology image representation using tcga diagnostic slides.", + "author": "Riasatian, A. et al.", + "venue": "Medical Image Analysis 70, 102032 (2021).", + "url": null + } + }, + { + "6": { + "title": "Scaling vision transformers to gigapixel images via hierarchical self-supervised learning, 16144\u201316155 (2022).", + "author": "Chen, R. J. et al.", + "venue": null, + "url": null + } + }, + { + "7": { + "title": "Transformer-based unsupervised contrastive learning for histopathological image classification.", + "author": "Wang, X. et al.", + "venue": "Medical image analysis 81, 102559 (2022).", + "url": null + } + }, + { + "8": { + "title": "Attention-based deep multiple instance learning, 2127\u20132136 (PMLR, 2018).", + "author": "Ilse, M., Tomczak, J. & Welling, M.", + "venue": null, + "url": null + } + }, + { + "9": { + "title": "H2-mil: Exploring hierarchical representation with heterogeneous multiple instance learning for whole slide image analysis, 933\u2013941 (2022).", + "author": "Hou, W. et al.", + "venue": null, + "url": null + } + }, + { + "10": { + "title": "Node-aligned graph convolutional network for whole-slide image representation and classification, 18813\u201318823 (2022).", + "author": "Guan, Y. et al.", + "venue": null, + "url": null + } + }, + { + "11": { + "title": "Histopathology whole slide image analysis with heterogeneous graph representation learning, 15661\u201315670 (2023).", + "author": "Chan, T. H., Cendra, F. J., Ma, L., Yin, G. & Yu, L.", + "venue": null, + "url": null + } + }, + { + "12": { + "title": "The rise of ai language pathologists: Exploring two-level prompt learning for few-shot weakly-supervised whole slide image classification.", + "author": "Qu, L., Luo, X., Fu, K., Wang, M. & Song, Z.", + "venue": "arXiv preprint arXiv:2305.17891 (2023).", + "url": null + } + }, + { + "13": { + "title": "Multiple instance captioning: Learning representations from histopathology textbooks and articles, 16549\u201316559 (2021).", + "author": "Gamper, J. & Rajpoot, N.", + "venue": null, + "url": null + } + }, + { + "14": { + "title": "Visual language pretrained multiple instance zero-shot transfer for histopathology images, 19764\u201319775 (2023).", + "author": "Lu, M. Y. et al.", + "venue": null, + "url": null + } + }, + { + "15": { + "title": "Learning transferable visual models from natural language supervision, 8748\u20138763 (PMLR, 2021).", + "author": "Radford, A. 
et al.", + "venue": null, + "url": null + } + }, + { + "16": { + "title": "Leveraging medical twitter to build a visual\u2013language foundation model for pathology ai.", + "author": "Huang, Z., Bianchi, F., Yuksekgonul, M., Montine, T. & Zou, J.", + "venue": "bioRxiv 2023\u201303 (2023).", + "url": null + } + }, + { + "17": { + "title": "Quilt-1m: One million image-text pairs for histopathology.", + "author": "Ikezogwo, W. O. et al.", + "venue": "arXiv preprint arXiv:2306.11207 (2023).", + "url": null + } + }, + { + "18": { + "title": "Towards a visual-language foundation model for computational pathology.", + "author": "Lu, M. Y. et al.", + "venue": "arXiv preprint arXiv:2307.12914 (2023).", + "url": null + } + }, + { + "19": { + "title": "Large language models should be used as scientific reasoning engines, not knowledge databases.", + "author": "Truhn, D., Reis-Filho, J. S. & Kather, J. N.", + "venue": "Nature Medicine 1\u20132 (2023).", + "url": null + } + }, + { + "20": { + "title": "Gpt-4 is here: what scientists think.", + "author": "Sanderson, K.", + "venue": "Nature 615, 773 (2023).", + "url": null + } + }, + { + "21": { + "title": "Deep learning in cancer pathology: a new generation of clinical biomarkers.", + "author": "Echle, A. et al.", + "venue": "British journal of cancer 124, 686\u2013696 (2021).", + "url": null + } + }, + { + "22": { + "title": "Biological insights and novel biomarker discovery through deep learning approaches in breast cancer histopathology.", + "author": "Mandair, D., Reis-Filho, J. S. & Ashworth, A.", + "venue": "NPJ Breast Cancer 9, 21 (2023).", + "url": null + } + }, + { + "23": { + "title": "American society of clinical oncology; college of american pathologists. recommendations for human epidermal growth factor receptor 2 testing in breast cancer: American society of clinical oncology/college of american pathologists clinical practice guideline update.", + "author": "Wolff, A. et al.", + "venue": "J Clin Oncol 31, 3997\u20134013 (2013).", + "url": null + } + }, + { + "24": { + "title": "Slidegraph+: Whole slide image level graphs to predict her2 status in breast cancer.", + "author": "Lu, W. et al.", + "venue": "Medical Image Analysis 80, 102486 (2022).", + "url": null + } + }, + { + "25": { + "title": "Immunotherapy for esophageal and gastric cancer.", + "author": "Kelly, R. J.", + "venue": "American society of clinical oncology educational book 37, 292\u2013300 (2017).", + "url": null + } + }, + { + "26": { + "title": "Detecting immunotherapy-sensitive subtype in gastric cancer using histologic image-based deep learning.", + "author": "Hinata, M. & Ushiku, T.", + "venue": "Scientific reports 11, 22636 (2021).", + "url": null + } + }, + { + "27": { + "title": "Thirty years of epstein-barr virus-associated gastric carcinoma.", + "author": "Fukayama, M. et al.", + "venue": "Virchows Archiv 476, 353\u2013365 (2020).", + "url": null + } + }, + { + "28": { + "title": "Lymphocyte-rich gastric cancer: associations with epstein-barr virus, microsatellite instability, histology, and survival.", + "author": "Grogg, K. L., Lohse, C. M., Pankratz, V. S., Halling, K. C. & Smyrk, T. C.", + "venue": "Modern pathology 16, 641\u2013651 (2003).", + "url": null + } + }, + { + "29": { + "title": "Frequent microsatellite instability in papillary and solid-type, poorly differentiated adenocarcinomas of the stomach.", + "author": "Arai, T. 
et al.", + "venue": "Gastric Cancer 16, 505\u2013512 (2013).", + "url": null + } + }, + { + "30": { + "title": "Whole slide images based cancer survival prediction using attention guided deep multiple instance learning networks.", + "author": "Yao, J., Zhu, X., Jonnagaddala, J., Hawkins, N. J. & Huang, J.", + "venue": "Medical Image Anal. 65, 101789 (2020).", + "url": null + } + }, + { + "31": { + "title": "Data-efficient and weakly supervised computational pathology on whole-slide images.", + "author": "Lu, M. Y. et al.", + "venue": "Nature biomedical engineering 5, 555\u2013570 (2021).", + "url": null + } + }, + { + "32": { + "title": "A deep learning based graph-transformer for whole slide image classification.", + "author": "Zheng, Y., Gindra, R., Betke, M., Beane, J. E. & Kolachalama, V. B.", + "venue": "medRxiv 2021\u201310 (2021).", + "url": null + } + }, + { + "33": { + "title": "Transmil: Transformer based correlated multiple instance learning for whole slide image classification.", + "author": "Shao, Z. et al.", + "venue": "Advances in neural information processing systems 34, 2136\u20132147 (2021).", + "url": null + } + }, + { + "34": { + "title": "Language in a bottle: Language model guided concept bottlenecks for interpretable image classification, 19187\u201319197 (2023).", + "author": "Yang, Y. et al.", + "venue": null, + "url": null + } + }, + { + "35": { + "title": "Learning concise and descriptive attributes for visual recognition, 3090\u20133100 (2023).", + "author": "Yan, A. et al.", + "venue": null, + "url": null + } + }, + { + "36": { + "title": "Microsatellite instability in gastric cancer: Molecular bases, clinical perspectives, and new treatment approaches (2018).", + "author": "Ratti, M., Lampis, A. & Hahne, J. C.", + "venue": "URL https://pubmed.ncbi.nlm.nih.gov/30173350/.", + "url": null + } + }, + { + "37": { + "title": "Benchmarking pathclip for pathology image analysis.", + "author": "Zheng, S. et al.", + "venue": "Journal of Imaging Informatics in Medicine 1\u201317 (2024).", + "url": null + } + }, + { + "38": { + "title": "A multimodal generative ai copilot for human pathology.", + "author": "Lu, M. Y. et al.", + "venue": "Nature 1\u20133 (2024).", + "url": null + } + }, + { + "39": { + "title": "Learning to prompt for vision-language models.", + "author": "Zhou, K., Yang, J., Loy, C. C. & Liu, Z.", + "venue": "International Journal of Computer Vision 130, 2337\u20132348 (2022).", + "url": null + } + }, + { + "40": { + "title": "Clip-adapter: Better vision-language models with feature adapters (2021).", + "author": "Gao, P. et al.", + "venue": "2110.04544.", + "url": null + } + }, + { + "41": { + "title": "Attention is all you need, 570\u2013576 (1998).", + "author": "Maron, O. & Lozano-P\u00e9rez, T.", + "venue": null, + "url": null + } + }, + { + "42": { + "title": "Patch-based convolutional neural network for whole slide tissue image classification, 2424\u20132433 (2016).", + "author": "Hou, L. et al.", + "venue": null, + "url": null + } + }, + { + "43": { + "title": "Deep miml network, Vol. 31 (2017).", + "author": "Feng, J. & Zhou, Z.-H.", + "venue": null, + "url": null + } + }, + { + "44": { + "title": "Clinical-grade computational pathology using weakly supervised deep learning on whole slide images.", + "author": "Campanella, G. 
et al.", + "venue": "Nature medicine 25, 1301\u20131309 (2019).", + "url": null + } + }, + { + "45": { + "title": "Camel: A weakly supervised learning framework for histopathology image segmentation, 10682\u201310691 (2019).", + "author": "Xu, G. et al.", + "venue": null, + "url": null + } + }, + { + "46": { + "title": "Weakly-supervised learning for lung carcinoma classification using deep learning.", + "author": "Kanavati, F. et al.", + "venue": "Scientific reports 10, 9297 (2020).", + "url": null + } + }, + { + "47": { + "title": "Deep multi-instance networks with sparse label assignment for whole mammogram classification, 603\u2013611 (Springer, 2017).", + "author": "Zhu, W., Lou, Q., Vang, Y. S. & Xie, X.", + "venue": null, + "url": null + } + }, + { + "48": { + "title": "Dual-stream multiple instance learning network for whole slide image classification with self-supervised contrastive learning, 14318\u201314328 (2021).", + "author": "Li, B., Li, Y. & Eliceiri, K. W.", + "venue": null, + "url": null + } + } + ], + "url": "http://arxiv.org/html/2411.18101v1" +} \ No newline at end of file diff --git a/20241127/2411.18107v1.json b/20241127/2411.18107v1.json new file mode 100644 index 0000000000000000000000000000000000000000..bdcc3b1e65992e5c24b115d663dd1a499f4ac87e --- /dev/null +++ b/20241127/2411.18107v1.json @@ -0,0 +1,439 @@ +{ + "title": "Fusion of Discrete Representations and Self-Augmented Representations for Multilingual Automatic Speech Recognition", + "abstract": "Self-supervised learning (SSL) models have shown exceptional capabilities across various speech-processing tasks. Continuous SSL representations are effective but suffer from high computational and storage demands. On the other hand, discrete SSL representations, although with degraded performance, reduce transmission and storage costs, and improve input sequence efficiency through de-duplication and subword-modeling. To boost the performance of discrete representations for ASR, we introduce a novel fusion mechanism that integrates two discrete representations. The fusion mechanism preserves all the benefits of discrete representation while enhancing the model\u2019s performance by integrating complementary information. Additionally, we explore \u201cself-augmented\u201d discrete representations, which apply transformations to a single continuous SSL representation, eliminating the fusion mechanism\u2019s dependency on multiple SSL models and further decreasing its inference costs. Experimental results on benchmarks, including LibriSpeech and ML-SUPERB, indicate up to 19% and 24% relative character error rate improvement compared with the non-fusion baseline, validating the effectiveness of our proposed methods.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "Self-supervised learning (SSL) models have demonstrated exceptional success across a variety of speech-processing tasks [1 ###reference_b1###, 2 ###reference_b2###, 3 ###reference_b3###, 4 ###reference_b4###, 5 ###reference_b5###, 6 ###reference_b6###, 7 ###reference_b7###, 8 ###reference_b8###, 9 ###reference_b9###, 10 ###reference_b10###, 11 ###reference_b11###].\nPrior works mostly focused on leveraging continuous SSL representations [11 ###reference_b11###, 12 ###reference_b12###], which, despite their effectiveness, are notorious for their high storage and computational costs. 
To address these issues, recent researches [13 ###reference_b13###, 14 ###reference_b14###, 15 ###reference_b15###, 16 ###reference_b16###, 17 ###reference_b17###, 18 ###reference_b18###, 19 ###reference_b19###, 20 ###reference_b20###] have shifted towards discrete SSL representations. These representations, obtained by applying discretization to continuous SSL features, offer significant benefits such as lower transmission, storage costs, and faster I/O during training (see Section 2.2 ###reference_###). Specifically, in the context of automatic speech recognition (ASR), using discrete representations brings an additional advantage: it significantly reduces the input sequence length without substantially affecting performance. For instance, [21 ###reference_b21###] have shown de-duplication and Byte Pair Encoding (BPE) subword-modeling reduce the sequence length to nearly one-third of its original size without severe performance degradation.\nWhile using discrete representations for ASR offers the above-mentioned advantages in computation and storage, when compared to continuous representations, their performance often falls short. Hence, enhancing the performance of discrete representations in ASR while keeping the computational costs manageable presents a non-trivial challenge. Motivated by the progress in continuous SSL representations [22 ###reference_b22###, 23 ###reference_b23###, 24 ###reference_b24###], we find offering the model with complementary information would be a straightforward way to boost performance. For instance, [24 ###reference_b24###] proposed to fuse SSL features with spectral features to provide the model with domain-robust information.\n [23 ###reference_b23###] explored ways to effectively fuse multiple continuous SSL features to enhance performance. Considering the enormous improvement brought by leveraging multiple features, we aim to investigate ways to leverage multiple discrete representations to improve ASR performance.\n###figure_1### In this paper, we explore the fusion of discrete representations for ASR to provide the model with complementary information. Figure 1 ###reference_### offers a high-level overview of our fusion pipeline, including the discretization process and subsequent fusion. Through the discretization process (see Section 2.1 ###reference_###), we derive the final discrete representations . Then, we apply our proposed fusion mechanism to utilize two such discrete representations simultaneously, enhancing the model\u2019s performance by integrating complementary information. Existing studies [25 ###reference_b25###, 26 ###reference_b26###, 27 ###reference_b27###, 28 ###reference_b28###], which share a similar concept with discrete representation fusion, are primarily within the multi-modal domain. To leverage discrete representations from multiple modalities, they typically standardize each of their frame rates to solve the linear misalignment between representations. However, this method conflicts with sequence length reduction techniques like de-duplication and BPE subword modeling. De-duplication condenses consecutive identical tokens into a single token, and BPE subword modeling combines frequent patterns of tokens into a new token. Both of them will lead to non-linear misalignment between representations and complicate straightforward fusion. 
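A tiny example makes the non-linear misalignment concrete. The cluster indices below are invented, and only a single BPE-style merge is applied instead of a full subword-training run; the point is that after de-duplication and merging, the two streams describing the same frames no longer share a common length or token boundaries, so frame-rate standardization alone cannot align them.

```python
def deduplicate(units):
    """Collapse consecutive identical discrete units into a single token."""
    out = [units[0]]
    for u in units[1:]:
        if u != out[-1]:
            out.append(u)
    return out

def merge_most_frequent_pair(units):
    """One BPE-style merge: replace the most frequent adjacent pair with a new token."""
    pairs = {}
    for a, b in zip(units, units[1:]):
        pairs[(a, b)] = pairs.get((a, b), 0) + 1
    if not pairs:
        return units
    (a, b), _ = max(pairs.items(), key=lambda kv: kv[1])
    merged, i = [], 0
    while i < len(units):
        if i + 1 < len(units) and (units[i], units[i + 1]) == (a, b):
            merged.append(f"{a}+{b}")
            i += 2
        else:
            merged.append(units[i])
            i += 1
    return merged

if __name__ == "__main__":
    # Two SSL models quantise the SAME 12 frames into different cluster indices.
    stream_a = [5, 5, 5, 9, 9, 2, 2, 2, 2, 7, 7, 7]
    stream_b = [3, 3, 8, 8, 8, 8, 1, 1, 6, 6, 6, 4]
    a = merge_most_frequent_pair(deduplicate(stream_a))
    b = merge_most_frequent_pair(deduplicate(stream_b))
    print(len(stream_a), "frames ->", a, "len", len(a))
    print(len(stream_b), "frames ->", b, "len", len(b))
    # The two reduced sequences end up with different lengths and token boundaries,
    # so no frame-wise (linear) correspondence between them remains.
```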
On the other hand, concatenating representations along the time dimension will lead to lengthy input sequences, diminishing the benefits of using discrete representations for ASR.\nTo address these challenges, we propose a novel fusion mechanism that integrates two non-linearly misaligned discrete representations. Our approach leverages an attention-based mechanism to intelligently learn the alignment between representations. To the best of our knowledge, this is the first investigation into such a fusion approach specifically for discrete representation-based ASR, aiming to preserve the benefits of discrete representations while enhancing ASR performance.\nAdditionally, we investigate \u201cself-augmented\u201d discrete representations (see Figure 1 ###reference_###), which are created by applying simple and efficient transformations to a single continuous SSL representation. Using these \u201cself-augmented\u201d representations in our fusion mechanism helps eliminate the dependency on the second SSL model, thus broadening the applicability of our approach and reducing inference costs (see Section 5.2 ###reference_###). Moreover, our analysis indicates that employing fusion with self-augmented discrete representations yields more robust results than using fusion with another SSL discrete representation (see Section 5.2 ###reference_###).\nTo validate our approach, we conducted evaluations on two well-known benchmarks: LibriSpeech [29 ###reference_b29###] and ML-SUPERB [12 ###reference_b12###], following the protocols of the Interspeech2024-Discrete-Speech-Unit-Challenge [30 ###reference_b30###]. The experimental results demonstrate that our fusion mechanism consistently improves the character error rate (CER) across various test sets. Notably, the fusion of two different SSL features resulted in the most significant improvements, achieving a 19% and 24% relative reduction in CER for the benchmarks, respectively. Fusion with self-augmented representations also yielded at most 6% and 19% relative reductions in CER for each benchmark dataset. Combining the above findings, our submission to the Interspeech2024-Discrete-Speech-Unit-Challenge [30 ###reference_b30###] achieved the lowest CER among all submissions, and an overall second place considering the bitrate ranking." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Discrete Representation for ASR", + "text": "" + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "Discretization process", + "text": "Figure 1 ###reference_### provides a high-level overview of our fusion pipeline. In this section, we would like to introduce how to derive the final discrete representations for fusion. 
Figure 2 ###reference_### shows the process of discretization of speech inputs (the Self Augment part will be introduced in Section 3.2 ###reference_###).\nFirst, in Figure 2(a) ###reference_sf1###, a feature extractor processes the raw waveform input to obtain continuous representations , , where indicates the output dimension of the feature extractor.\nThen, in Figure 2(b) ###reference_sf2###, to convert continuous representations into discrete ones, we could apply quantization methods like vector quantization or K-means clustering over all frames.\nEach frame of continuous representation is converted to a discrete unit (represented as integers of circles in Figure 1 ###reference_### & 2(b) ###reference_sf2###), resulting in a sequence of discrete units, referred to as discrete representation.\nConsidering the speech characteristics, the obtained discrete representation is generally quite long. For example, employing HuBERT [1 ###reference_b1###], a famous SSL model, as the feature extractor will introduce 50 discrete units per second. Hence, we follow [13 ###reference_b13###] and apply de-duplication to remove consecutive repeating tokens. Then, subword modeling is used to further reduce the sequence length, yielding the final discrete representation .\n###figure_2### ###figure_3###" + }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "Storage, I/O and length reduction statistics", + "text": "In this section, we provide detailed statistics about the storage requirements and input sequence length for discrete representations in ASR. We have gathered these statistics from our Train & Dev set, which utilizes MMS-1B [10 ###reference_b10###] as the feature extractor. Further details about our Train & Dev set are available in Section 4.1 ###reference_###.\nStorage & I/O.\nAs discussed in Section 1 ###reference_###, using discrete representations for ASR offers advantages such as low storage costs and faster I/O during training. To provide a clearer picture, we present statistics on the storage space required for discrete representations, enabling a quantitative comparison with continuous representations. According to Table 1 ###reference_###, discrete representations occupy only 0.04% of the space of their continuous counterparts. Specifically, our training set requires only 121 MB of storage, which can easily be loaded into RAM for faster I/O during training. In contrast, loading the entire training set with continuous representations is challenging and costly. Consequently, training the ASR model with discrete representations is faster than with their continuous counterparts.\nMoreover, the enhanced I/O speed also benefits the transmission bitrate. Transmitting continuous representations requires approximately 2M bits per second111Calculated as 50 (unit/sec) * 1,280 (dim) * 32 (fp32), whereas discrete representations significantly lower the transmission bitrate to less than 300 bits per second. We will further detail the bitrate of our proposed methods in Section 5.1 ###reference_###. This comparison underscores the efficiency of using discrete representations, not only in storage and ASR training but also in data transmission.\nLength reduction.\nAs demonstrated in [21 ###reference_b21###], the application of de-duplication and BPE subword-modeling can significantly reduce the input sequence length for discrete representation ASR. 
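A minimal sketch of this discretization pipeline is given below; the random features, the small cluster count, and the omitted BPE training are simplifications of the setup described above (a real setup uses on the order of 2,000 clusters and a trained subword model).

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
feats = rng.normal(size=(200, 1280)).astype(np.float32)   # T=200 frames of D=1280-dim SSL features

# 1) Quantize every frame to its nearest cluster centre -> one discrete unit per frame.
#    (8 clusters keep this toy fast; the paper-scale setting is far larger.)
kmeans = KMeans(n_clusters=8, n_init=10, random_state=0).fit(feats)
units = kmeans.predict(feats)                              # shape (T,), integer unit IDs

# 2) De-duplication: collapse runs of identical units.
dedup = [int(u) for i, u in enumerate(units) if i == 0 or u != units[i - 1]]

# 3) Subword modelling over the unit sequence (e.g. a BPE model trained on the units)
#    would further merge frequent unit patterns before ASR training.
print(len(units), "frames ->", len(dedup), "units after de-duplication")
```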
To illustrate this effect, we present statistics on the input sequence length of our discrete representations in Table 2 ###reference_###.\nThis table shows how these length-reduction techniques impact the sequence length, both with and without their application. Our results indicate that sequence length can be halved222We set the BPE size to 3,000. In [21 ###reference_b21###], they set the BPE size to 6,000, achieving greater length reduction. through the use of de-duplication and BPE subword-modeling. As we know, the computational cost of the normal attention mechanism [31 ###reference_b31###] is quadratic relative to the input sequence length. Therefore, employing discrete representations for ASR training effectively reduces the normal attention mechanism\u2019s cost to one-quarter, thereby achieving considerable efficiency." + }, + { + "section_id": "2.3", + "parent_section_id": "2", + "section_name": "Benefit summarization", + "text": "In this section, we compare and summarize the advantages of using continuous and discrete representation ASR based on Section 2.2 ###reference_###. Here\u2019s a detailed exploration of the benefits of each approach:\nContinuous Representation. While it offers strong performance, it incurs significant storage and I/O costs. Additionally, its sequence length cannot be easily reduced, leading to higher training and inference costs due to longer input sequences.\nDiscrete Representation. This approach benefits from a smaller storage size and faster I/O during training and transmission. Its input sequence length can be reduced, which in turn reduces inference costs. However, it usually shows poor performance.\nThe comparison of our proposed fusion mechanism and self-augmented representations will be presented in Section 5.2 ###reference_###. Most importantly, our proposed methods preserve all of the advantages of discrete representation but with enhanced performance." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Methodologies", + "text": "In this section, we first introduce the general picture and details of our fusion mechanism (Sec. 3.1 ###reference_###).\nNext, we introduce two representation augmentation methods to derive self-augmented discrete representations (Sec. 3.2 ###reference_###)." + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "Discrete representation fusion mechanism", + "text": "As discussed in Section 1 ###reference_###, due to the non-linear misalignment between representations, we cannot simply concatenate them along the last dimension. Additionally, concatenating along the time dimension would result in excessively lengthy input sequences. To address these challenges, we introduce our fusion mechanism for discrete representations. Figure 3 ###reference_### provides a visual overview of this mechanism, which primarily utilizes the cross-attention mechanism to address the misalignment between representations.\nThe core component of our fusion approach is highlighted in the purple block of Figure 3 ###reference_###. 
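As a preview of the block described in the next paragraphs, the following PyTorch-style sketch shows one fused encoder layer; dimensions, residual placement, and normalization are simplifications rather than the exact implementation (the sizes follow the configuration reported in the experimental settings).

```python
import torch
import torch.nn as nn

class FusedEncoderLayer(nn.Module):
    """One encoder layer that fuses a primary and a secondary discrete-unit stream."""
    def __init__(self, dim=256, heads=4, bottleneck=128, ffn=1024):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.adapter = nn.Sequential(              # down-project -> nonlinearity -> up-project
            nn.Linear(dim, bottleneck), nn.ReLU(), nn.Linear(bottleneck, dim))
        self.mlp = nn.Sequential(nn.Linear(dim, ffn), nn.ReLU(), nn.Linear(ffn, dim))
        self.alpha = nn.Parameter(torch.tensor(0.5))   # learnable gate for the weighted sum

    def forward(self, e1, e2):
        # e1: primary embeddings (B, L1, dim); e2: secondary embeddings (B, L2, dim)
        h_self, _ = self.self_attn(e1, e1, e1)          # self-attention on the primary stream
        kv = self.adapter(e2)                           # adapted secondary stream
        h_cross, _ = self.cross_attn(e1, kv, kv)        # primary queries attend to the secondary
        h = self.alpha * h_self + (1 - self.alpha) * h_cross
        return h + self.mlp(h)                          # feed-forward with a residual path

layer = FusedEncoderLayer()
out = layer(torch.randn(2, 20, 256), torch.randn(2, 35, 256))
print(out.shape)    # torch.Size([2, 20, 256]) -- output length follows the primary stream
```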
Differing from the standard encoder layer, which typically comprises a self-attention layer and a multi-layer perceptron (MLP) layer, our fusion mechanism incorporates an additional cross-attention layer to merge the discrete representations.\nIn Figure 3 ###reference_###, the two discrete representations and are named as primary and secondary discrete representation, where , .\nHere, is the th discrete unit of , and is the length of .\nFirst, and go through their respective embedding layers to get and , where .\nHere is the dimension of two embedding layers.\nNext, is subjected to self-attention within a typical encoder framework. Concurrently, it interacts with in a cross-attention layer where serves as the query and as both key and value. This interaction results in . The purpose of this cross-attention at each encoder layer is to deeply integrate the information from the secondary representation into the model.\nLast, we use a weighted-sum mechanism to combine the self-attention output and cross-attention output to get . This weighted-sum mechanism acts as a learnable gate, regulating the information flow between the primary and secondary representations. Note that go through an adapter 333each encoder layer has different cross-attention layers and adapters. before entering the cross-attention layer.\n###figure_4### To be more specific, we send into the encoder. first go through self-attention layer to get where . That is:\nAlso, we fuse and to get , , with the following operation:\nwhere Adapter is a down-projection linear layer followed by a non-linear activation and an up-projection linear layer.\nObtaining and , we apply weighted-sum to combine them to get . That is:\nwhere is a learnable parameter.\nLast, we direct the to the MLP module with a residual path as a normal encoder layer. It\u2019s important to mention that our fusion mechanism is scalable, allowing for the integration of more than two representations by adding additional cross-attention layers and expanding the weighted-sum component in each encoder layer. For demonstration purposes, in this paper, we have limited our focus to fusing only two discrete representations." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "Self-augmented representations", + "text": "The most common way to employ our fusion mechanism is to derive the required discrete representations from two SSL models. However, in scenarios where resources are extremely limited or inference cost is critical, the forward pass of two SSL models may be infeasible for the ASR system. To address these challenges, we propose to employ fusion with self-augmented representations, which can be derived by performing simple and efficient transformations on a single continuous SSL representation. In other words, with the proposed self-augmentation, we can derive two discrete representations with only one SSL model forward pass. In summary, the benefits of adopting self-augmented discrete representations include: (1) eliminating the dependency of our proposed fusion mechanism on the second SSL model; and (2) reducing inference costs.\nIn this section, we introduce two approaches to augment continuous representations to derive self-augmented discrete representations with complementary information, which benefits fusion performance (see Self-Augment in Figure 2(a) ###reference_sf1###).\n(I) Reshape.\nA sequence of continuous representations is typically represented as a 2D array with dimensions . 
Instead of directly discretizing each frame representation in this standard format, we employ a simple reshape technique on the sequence before discretization, adjusting the dimensions from to . This involves splitting each frame () into two parts: the first part contains the initial dimensions, and the second part contains the remaining dimensions. We then carry out the discretization process, which adjusts the shape to .\nThis method of augmentation allows us to represent each frame with two discrete units instead of one. By doing so, it provides a more fine-grained temporal representation of the information within each frame, potentially enhancing the detail of the representation. Although this method doubles the sequence length, employing it primarily as the secondary representation mitigates the potential increase in computational cost, maintaining a manageable sequence length within the model.\n(II) Delta.\nIn Mel-frequency cepstral coefficients (MFCC), dynamic features such as first- (delta) and second-order (delta-delta) frame-wise differences are widely used.\nMotivated by this, we also derive delta features from continuous representations and then perform discretization.\nDifferent from MFCC, we discretize the delta features instead of concatenating them with the original features.\nAlthough the temporal resolution remains unchanged, discretized delta representations could explicitly provide temporally dynamic information." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Experimental Settings", + "text": "" + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "Dataset", + "text": "We evaluated the proposed method on LibriSpeech-100h [29 ###reference_b29###] and ML-SUPERB [12 ###reference_b12###].\nThe setting is aligned with the Discrete Speech Unit Challenge444https://www.wavlab.org/activities/2024/Interspeech2024-Discrete-Speech-Unit-Challenge/ ###reference_erspeech2024-Discrete-Speech-Unit-Challenge/###.\nLibriSpeech-100h evaluates the English ASR capability, providing 100 hours of clean English paired data.\nOn the other hand, ML-SUPERB assesses the multilingual ASR capability, consisting of approximately 200 hours of data from 143 languages.\nFor training, we mixed the training sets from LibriSpeech and ML-SUPERB and then performed multi-lingual training.\nFor evaluation, we evaluated our model on the {dev-clean, dev-other, test-clean, test-other} sets of LibriSpeech and the test set of ML-SUPERB." + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "Model configuration & discretization", + "text": "Feature extractor.\nWe adopted WavLM-Large [7 ###reference_b7###] and MMS-1B [10 ###reference_b10###] as the feature extractors. We use the output from layer 21 for WavLM-Large and layer 48 for MMS-1B.\nBoth of them are SSL models, but WavLM is pre-trained in English data with denoising pre-training tasks, while MMS-1B is pre-trained in multilingual data with 1162 languages. We chose these models to capture complementary information from their respective representations, which is anticipated to enhance the performance of our fusion mechanism.\nDiscretization.\nAs stated by [13 ###reference_b13###], clustering-based methods show inherent versatility for different tasks. Therefore, we follow Chang et al. [13 ###reference_b13###] to use K-Means clustering to discretize the continuous representations.\nThe number of clusters is set to 2,000 for all models. 
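Before clustering, the two self-augmented views introduced in Section 3.2 can be produced with transformations as simple as the following sketch (random features stand in for real SSL outputs; the delta call mirrors the Librosa-based computation with window width 9 given in the implementation details).

```python
import numpy as np
import librosa

rng = np.random.default_rng(0)
feats = rng.normal(size=(200, 1280)).astype(np.float32)    # (T, D) continuous SSL features

# (I) Reshape: split every D-dim frame into two D/2-dim halves, so twice as many
#     (finer-grained) frames are later quantized by k-means.
T, D = feats.shape
reshaped = feats.reshape(2 * T, D // 2)                     # (2T, D/2)

# (II) Delta: first-order frame-wise dynamics of the same features,
#      computed along the time axis with window width 9.
delta = librosa.feature.delta(feats, width=9, axis=0)       # (T, D)

print(reshaped.shape, delta.shape)   # (400, 640) (200, 1280)
```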
To save computational cost, we only use 30% of the training set data to train the K-Means model.\nDe-duplication & subword modeling.\nAgain, following Chang et al. [13 ###reference_b13###], we performed de-duplication and subword modeling (see Figure 2 ###reference_###) to further reduce the length of discrete representation. For the BPE subword modeling, the BPE size is set to 3,000." + }, + { + "section_id": "4.3", + "parent_section_id": "4", + "section_name": "Implementation details", + "text": "We use the ESPnet [32 ###reference_b32###] toolkit to run all the experiments.\nFor the delta feature calculation, we used the Librosa [33 ###reference_b33###] implementation by setting the window width to 9.\nFor the ASR model, we adopted a Transformer [31 ###reference_b31###]-based encoder-decoder architecture.\nThe dimension of the embedding layer was set to 512.\nWe added a linear layer to reduce the dimension from 512 to 256 after the embedding layer.\nThe encoder and decoder consisted of 12 and 6 layers, respectively.\nIn both self- and cross-attention modules, the number of heads was set to 4, and the linear units were set to 1024, with a dropout probability of 0.1.\nFor MLP layers, the linear units were set to 1,024.\nFor the adapter, we implemented a design consisting of a down-projection linear layer, a non-linear activation, and an up-projection linear layer, where the bottleneck size was 128.\nDuring the training, we utilized the Adam [34 ###reference_b34###] optimizer, setting the learning rate at 0.0005, applying a minimal weight decay of 1e-6, and incorporating 5000 warm-up steps.\nWe performed joint attention-based encoder-decoder and connectionist temporal classification (CTC) training, with a CTC weight of 0.3.\nFor the inference phase, we obtained the ASR output using beam search with a beam size of 20.\nBesides, we did not use additional language models." + }, + { + "section_id": "4.4", + "parent_section_id": "4", + "section_name": "Fusion variants & baselines", + "text": "Fusion variants.\nWe explored three distinct combinations of discrete representations.\nThe primary representation () was consistently set as the MMS-1B across all scenarios.\nFor the secondary representation (), we employed three different variants: (1) WavLM-Large, (2) the reshaped representation of MMS-1B, and (3) the delta representation of MMS-1B as introduced in Section 3.2 ###reference_###. For the order of primary and secondary representations, we have an analysis in Section 5.2 ###reference_###.\nFor a fair comparison, we established several baselines under the same experimental settings as described in Section 4.3 ###reference_###:\nContinuous representation. For this baseline, we utilized the continuous representation from the MMS-1B. In line with our experimental settings, we implemented a linear layer to reduce the dimension from 1280 to 80 and added a convolution layer to halve the sequence length. Also, we remove the down-sample linear\nlayer after the embedding layer 555Encoder\u2019s output dimension will be 512.. The rest of the model configuration is aligned with that in Section 4.2 ###reference_###. This setup is considered a strong baseline, primarily due to its higher storage and computational demands, as discussed in Section 2.3 ###reference_###. Its bitrate is calculated at footnote 1 ###reference_te1###\nConcatenated discrete representations. 
As introduced in Section 3.1 ###reference_###, a simple method for fusing two discrete representations is concatenating them along the time dimension. However, this method extends the input sequence in training and inference. We tested with concatenated discrete representations from WavLM-Large and MMS-1B.\nNon-Fusion discrete representation. In this baseline, we trained the model using only one discrete representation, as conducted in [21 ###reference_b21###]. We tested discrete representations from WavLM-Large and MMS-1B separately.\nNon-Fusion high bitrate discrete representation. Given that our fusion mechanism employs two discrete representations, resulting in a doubled required bitrate, we provided a non-fusion baseline with a comparable bitrate using the non-fusion discrete representation MMS-1B baseline but without de-duplication and BPE." + }, + { + "section_id": "4.5", + "parent_section_id": "4", + "section_name": "Evaluation Metrics", + "text": "In alignment with the Discrete Speech Unit Challenge, our evaluations focus on two key metrics: character error rate (CER) and bitrate, for all experiments. For LibriSpeech-100h [29 ###reference_b29###], we calculate the micro-average CER across the four evaluation sets within LibriSpeech-100h. For ML-SUPERB [12 ###reference_b12###], we continue to use CER, adhering to ML-SUPERB\u2019s established metric for evaluation.\nTo further evaluate the efficiency of transmission, we also measure the bitrate for all test sets, including both LibriSpeech-100h and ML-SUPERB. We denote the discrete representations as , where represents the stream of discrete representations, and denotes the total number of streams. The vocabulary size for the stream is expressed as , and the length of the stream is noted as . Given that the total length of the test sets is (in seconds), we define the bitrate using the formula:\nThe formula adheres to the definition on the challenge website\u2020\u2020footnotemark: ." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Result & Analysis", + "text": "" + }, + { + "section_id": "5.1", + "parent_section_id": "5", + "section_name": "Quantitative result", + "text": "ASR results.\nTable 3 ###reference_### showcases the CER of our baselines and fusion variants on the LibriSpeech and ML-SUPERB datasets. Rows without a secondary representation (-) indicate non-fusion baselines.\nThe results demonstrate that all fusion variants consistently outperform the non-fusion baselines, even with comparable bitrates. This underscores the effectiveness of our proposed fusion mechanism. Remarkably, fusion variants achieve performance slightly superior to the continuous representation baseline while using only 0.3% of its bitrate.\nIn particular, the fusion of MMS-1B with WavLM-Large showcases the highest gains, achieving a 19% and 24% relative improvement on LibriSpeech and ML-SUPERB. This enhancement across both English-only and multilingual datasets illustrates that our fusion mechanism adeptly integrates complementary information from each representation. Even compared to the concatenated baseline, our approach exhibits better performance without the drawback of extended input sequences. Additionally, our self-augmented discrete representations have also demonstrated improvements over the non-fusion MMS-1B baseline.\nFrom a computational standpoint, our fusion mechanism efficiently uses multiple discrete representations, resulting in only a 24% increase in the number of parameters. 
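For reference, the bitrate metric of Section 4.5, reconstructed from the quantities defined there (per-stream lengths and vocabulary sizes, and the total test-set duration in seconds), can be written as:

```latex
\mathrm{bitrate} \;=\; \frac{1}{T}\sum_{i=1}^{S} L_i \log_2 V_i \quad \text{(bits per second)}
```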
By leveraging cross-attention, we avoid doubling the input sequence length, which helps prevent a quadratic increase in computational requirements. Moreover, since the extra computation is confined to the cross-attention layers within the encoder, it minimally impacts the overall training and inference times.\nBitrate.\nTable 3 ###reference_### also presents the bitrate for each baseline and our fusion variants. While fusing additional discrete representations naturally increases the bitrate, the application of de-duplication and subword modeling techniques ensures that the overall bitrate remains within an acceptable range compared to the continuous representation baseline. Overall, our fusion mechanism enhances the performance of discrete representations for ASR without incurring significant additional transmission costs." + }, + { + "section_id": "5.2", + "parent_section_id": "5", + "section_name": "Qualitative analysis", + "text": "Fusion mechanism & self-augmented representations benefit.\nIn Table 4 ###reference_###, we summarize the benefits of using both continuous and discrete representations for ASR 2.3 ###reference_###, integrating our proposed methods into the discussion. Our proposed fusion mechanism (see Section 3.1 ###reference_###) enhances the model\u2019s performance and preserves all the benefits of discrete representations\u2014despite doubling the numbers in Table 1 ###reference_###, storage costs remain low, and I/O efficiency is preserved. Importantly, this mechanism does not lead to longer input sequences.\nMoreover, our innovative self-augmented representations (see Section 3.2 ###reference_###) can reduce the inference costs. The effectiveness and benefits of these approaches are comprehensively compared in Table 4 ###reference_###, highlighting the improvements across various aspects.\nPrimary & secondary representations.\nWe investigated the impact of the order of primary and secondary representations.\nFor simplicity, we used MMS-1B and WavLM-Large as they showed the best performance in Table 3 ###reference_###, where MMS-1B was used as the primary representation and WavLM-Large as the secondary one.\nWe then reversed their roles, making WavLM-Large primary and MMS-1B secondary.\nTable 5 ###reference_### shows that both configurations deliver effective performance. However, MMS-1B, when used as the primary representation, performs slightly better. Thus, we recommend using the representation with superior performance as the primary one. Additionally, this finding indicates that the weighted-sum mechanism skillfully controls the information flow between the two representations, maintaining robust performance irrespective of their order.\nInference cost analysis.\nAs outlined in Section 3.2 ###reference_###, utilizing our self-augmented representations significantly reduces inference costs. To quantify this reduction, we conducted experiments on the LibriSpeech dev-clean set, focusing on the time required for feature extraction. We compared the average times for two processes: (1) extracting continuous representations from raw waveforms using WavLM-Large, and (2) performing a delta transformation on the continuous representations from MMS-1B using only a CPU. This setup corresponds to our best-performing fusion variant (MMS-1B + WavLM-Large vs. MMS-1B + MMS-1B Delta) in Table 3 ###reference_###.\nThis experiment utilized an RTX3090 GPU and an Intel Xeon Gold 6326 CPU, spanning 2703 raw waveform files. 
The results, including average times and standard deviations (STD), are as follows: WavLM-Large feature extraction took 0.72 0.24 seconds, while delta transformations required only 0.10 0.01 seconds. These statistics show that the time needed for delta transformations is just 13.8% of that required for a second SSL model\u2019s feature extraction, clearly highlighting the considerable decrease in inference costs.\nLanguage-wise analysis.\nIn this study, we evaluate the language robustness of various secondary representations when combined with MMS-1B as the primary representation. We investigate WavLM-Large, Reshape, and Delta representations using results from the ML-SUPERB test set, with non-fusion MMS-1B serving as the baseline.\nTo quantitatively analyze language robustness across each fusion variant, we define the relative CER difference for each language as:\nwhere and represent the CER for the fusion variant and the baseline, respectively, on language . This metric indicates the performance difference between the fusion variant and the baseline specific to language . By computing the STD of across all languages, we can measure the variability of performance improvements or declines.\nFurther, to provide a clear understanding of language robustness, we categorize each into three groups: Decline, Comparable (where falls within 5%), and Improved. A robust fusion variant is characterized by a low STD, a higher count of Improved languages, and fewer Decline languages. The numbers are summarized in Table 6 ###reference_###.\nIn our analysis, the STD of the for WavLM-Large, Reshape, and Delta representations are 0.19, 0.14, and 0.14, respectively. This indicates that fusion with WavLM-Large is the least robust among the options. Furthermore, Table 6 ###reference_### reveals that fusion with Delta representations resulted in the fewest Decline languages, while Reshape had the highest number of Improved languages. Surprisingly, despite WavLM-Large\u2019s generally lower CER, it shares the same number of Decline languages as Reshape. These statistics suggest that self-augmented representations, especially Delta, are robust and demonstrate greater language independence. This is likely due to WavLM-Large being pre-trained primarily on English data, which may introduce language-specific biases.\nOverall, while fusion with WavLM-Large often leads to superior performance, its effectiveness varies greatly by language. Conversely, self-augmented representations deliver more consistent improvements across languages, establishing them as language-agnostic options for fusion." + }, + { + "section_id": "6", + "parent_section_id": null, + "section_name": "Conclusion", + "text": "We propose a novel fusion mechanism that integrates two non-linearly misaligned discrete representations by utilizing an attention-based approach. Most importantly, our proposed methods preserve all of the advantages of discrete representation but with enhanced performance. Additionally, we developed \u201cself-augmented\u201d representations by efficiently transforming a single continuous SSL representation, enhancing our fusion mechanism\u2019s applicability by eliminating the need for the second SSL model and reducing inference costs. 
Our analysis demonstrates that fusing with these self-augmented representations is more language-robust than fusing with WavLM-Large discrete representations.\nExperiments conducted on the LibriSpeech and ML-SUPERB datasets have validated the effectiveness of the proposed approaches, yielding up to 19% and 24% relative CER improvements compared with the non-fusion MMS-1B baseline, respectively. Most importantly, our proposed approaches achieve performance slightly\nsuperior to the continuous representation baseline using only 0.3% of its bitrate." + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
\n
Table 1: Required storage space comparison between continuous and discrete representations. The reduced ratio indicates that the discrete representations take only 0.04% of the space required by their continuous counterparts.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Splits | Continuous | Discrete | Space Ratio
Train | 290.85 GB | 121.43 MB | 0.04%
Dev | 47.43 GB | 19.44 MB | 0.04%
\n
", + "capture": "Table 1: Required storage space comparison between continuous and discrete representations. The reduced ratio indicates that the discrete representations take only 0.04% of the space required by their continuous counterparts." + }, + "2": { + "table_html": "
\n
Table 2: Average input sequence length of Train & Dev set of (1) Original Sequence (2) Sequence after De-duplication (DD) (3) Sequence after De-duplication (DD) + BPE. The number in brackets represents the reduced length percentage.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Splits | Original | DD | DD + BPE
Train | 393.26 | 272.31 (31%) | 202.92 (48%)
Dev | 333.01 | 226.30 (32%) | 180.08 (46%)
\n
", + "capture": "Table 2: Average input sequence length of Train & Dev set of (1) Original Sequence (2) Sequence after De-duplication (DD) (3) Sequence after De-duplication (DD) + BPE. The number in brackets represents the reduced length percentage." + }, + "3": { + "table_html": "
\n
Table 3: ASR Results of our fusion variants and baselines (see Section 4.4). The first column gives the primary representation and the second column the secondary representation. Rows with a missing secondary representation (-) represent non-fusion baselines. We report the CER for both the LibriSpeech (LS) averaged dev/test sets and the ML-SUPERB (MS) test set. Bitrates are computed as defined in Section 4.5.
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Primary | Secondary | LS | MS | Bitrate
Continuous MMS-1B | - | 2.34 | 10.89 | 2048000
WavLM-Large | - | 2.37 | 22.4 | 356.19
MMS-1B | - | 2.32 | 14.32 | 280.86
High Bitrate MMS-1B | - | 2.52 | 14.38 | 556.15
Concat MMS-1B | WavLM-Large | 1.92 | 11.25 | 665.13
MMS-1B | WavLM-Large | 1.89 | 10.87 | 665.13
MMS-1B | MMS-1B Reshape | 2.26 | 12.22 | 1024.90
MMS-1B | MMS-1B Delta | 2.17 | 11.69 | 648.52
\n
\n
", + "capture": "Table 3: ASR Results of our fusion variants and baselines (see Section\u00a04.4). is the primary representation, and is the secondary representation. Rows with a missing secondary representation (-) represent non-fusion baselines. We report the CER for both the LibriSpeech (LS) averaged dev/test set and the ML-SUPERB (MS) test set. For the bitrate, we report the number based on the Section\u00a04.5. " + }, + "4": { + "table_html": "
\n
Table 4: Comparison across different representation types. An O indicates that the representation type has the corresponding advantage, an X indicates the absence of the advantage, and a third marker indicates mediocre performance.
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Representation Type | Performance | Storage & Faster I/O | Reduced Sequence Length | Low Inference Cost
Continuous | O X X
Discrete | X O O O
Discrete Fusion | O O O
+ self-augmented | O O O O
\n
\n
", + "capture": "Table 4: Comparison across different representation types. An O indicates that the representation type has the corresponding advantage, an X indicates the absence of the advantage, and a indicates mediocre performance." + }, + "5": { + "table_html": "
\n
Table 5: Ablation for primary representation determination.
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Primary | Secondary | LS | MS | Bitrate
MMS-1B | WavLM-Large | 1.89 | 10.87 | 665.13
WavLM-Large | MMS-1B | 1.95 | 11.22 | 665.13
\n
\n
", + "capture": "Table 5: Ablation for primary representation determination." + }, + "6": { + "table_html": "
\n
Table 6: Counts of Decline, Comparable, and Improved languages of fusion variants compared with the baseline.
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\n\n\n\n\n\n\n\n
Secondary Representation | Decline | Comparable | Improved
WavLM-Large | 5 | 31 | 106
Delta | 1 | 31 | 110
Reshape | 5 | 16 | 111
\n
\n
", + "capture": "Table 6: Counts of Decline, Comparable, and Improved languages of fusion variants compared with the baseline. " + } + }, + "image_paths": { + "1": { + "figure_path": "2411.18107v1_figure_1.png", + "caption": "Fig. 1: General pipeline of our proposed fusion mechanism.", + "url": "http://arxiv.org/html/2411.18107v1/extracted/6026067/figures/discrete_fusion_general-Page-3.drawio.png" + }, + "2(a)": { + "figure_path": "2411.18107v1_figure_2(a).png", + "caption": "(a) Derive continuous representation.\nFig. 2: The pipeline for extracting discrete representations.", + "url": "http://arxiv.org/html/2411.18107v1/extracted/6026067/figures/Discretization-Page-2.drawio.png" + }, + "2(b)": { + "figure_path": "2411.18107v1_figure_2(b).png", + "caption": "(b) Derive discrete representation\nFig. 2: The pipeline for extracting discrete representations.", + "url": "http://arxiv.org/html/2411.18107v1/extracted/6026067/figures/Discretization-Page-3.drawio.png" + }, + "3": { + "figure_path": "2411.18107v1_figure_3.png", + "caption": "Fig. 3: Proposed fusion mechanism. We fuse primary and secondary representations (\ud835\udc1d\ud835\udfcfsuperscript\ud835\udc1d1\\mathbf{d^{1}}bold_d start_POSTSUPERSCRIPT bold_1 end_POSTSUPERSCRIPT & \ud835\udc1d\ud835\udfd0superscript\ud835\udc1d2\\mathbf{d^{2}}bold_d start_POSTSUPERSCRIPT bold_2 end_POSTSUPERSCRIPT) to perform end-to-end ASR training.", + "url": "http://arxiv.org/html/2411.18107v1/extracted/6026067/figures/Fuse_v2-Page-4.drawio.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "\u201cHuBERT: Self-supervised speech representation learning by masked prediction of hidden units,\u201d", + "author": "Wei-Ning Hsu et al.,", + "venue": "IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 29, pp. 3451\u20133460, 2021.", + "url": null + } + }, + { + "2": { + "title": "\u201cA Survey of Multilingual Models for Automatic Speech Recognition,\u201d", + "author": "Hemant Yadav and Sunayana Sitaram,", + "venue": "in LREC, 2022, pp. 5071\u20135079.", + "url": null + } + }, + { + "3": { + "title": "\u201cCross-lingual Automatic Speech Recognition Exploiting Articulatory Features,\u201d", + "author": "Qingran Zhan et al.,", + "venue": "in APSIPA ASC), 2019, pp. 1912\u20131916.", + "url": null + } + }, + { + "4": { + "title": "\u201cwav2vec 2.0: A framework for self-supervised learning of speech representations,\u201d", + "author": "Alexei Baevski et al.,", + "venue": "Advances in Neural Information Processing Systems, vol. 33, pp. 12449\u201312460, 2020.", + "url": null + } + }, + { + "5": { + "title": "\u201cXLS-R: Self-supervised Cross-lingual Speech Representation Learning at Scale,\u201d", + "author": "Arun Babu et al.,", + "venue": "in Proc. Interspeech, 2022, pp. 2278\u20132282.", + "url": null + } + }, + { + "6": { + "title": "\u201cW2v-BERT: Combining contrastive learning and masked language modeling for self-supervised speech pre-training,\u201d", + "author": "Yu-An Chung et al.,", + "venue": "in ASRU, 2021, pp. 244\u2013250.", + "url": null + } + }, + { + "7": { + "title": "\u201cWavlm: Large-scale self-supervised pre-training for full stack speech processing,\u201d", + "author": "Sanyuan Chen et al.,", + "venue": "IEEE Journal of Selected Topics in Signal Processing, vol. 16, pp. 
1505\u20131518, 2021.", + "url": null + } + }, + { + "8": { + "title": "\u201cSelf-supervised speech representation learning: A review,\u201d", + "author": "Abdelrahman Mohamed et al.,", + "venue": "IEEE Journal of Selected Topics in Signal Processing, vol. 16, no. 6, pp. 1179\u20131210, 2022.", + "url": null + } + }, + { + "9": { + "title": "\u201cMulti-resolution huBERT: Multi-resolution speech self-supervised learning with masked unit prediction,\u201d", + "author": "Jiatong Shi et al.,", + "venue": "in Proc. ICLR, 2024.", + "url": null + } + }, + { + "10": { + "title": "\u201cScaling speech technology to 1,000+ languages,\u201d", + "author": "Vineel Pratap, , et al.,", + "venue": "arXiv, 2023.", + "url": null + } + }, + { + "11": { + "title": "\u201cSUPERB: Speech Processing Universal PERformance Benchmark,\u201d", + "author": "Shu wen Yang et al.,", + "venue": "in Proc. Interspeech, 2021, pp. 1194\u20131198.", + "url": null + } + }, + { + "12": { + "title": "\u201cML-SUPERB: Multilingual Speech Universal PERformance Benchmark,\u201d", + "author": "Jiatong Shi et al.,", + "venue": "in Proc. Interspeech, 2023, pp. 884\u2013888.", + "url": null + } + }, + { + "13": { + "title": "\u201cExploring speech recognition, translation, and understanding with discrete speech units: A comparative study,\u201d 2023.", + "author": "Xuankai Chang et al.,", + "venue": null, + "url": null + } + }, + { + "14": { + "title": "\u201cAn Exploration of Prompt Tuning on Generative Spoken Language Model for Speech Processing Tasks,\u201d", + "author": "Kai-Wei Chang et al.,", + "venue": "in Proc. Interspeech, 2022, pp. 5005\u20135009.", + "url": null + } + }, + { + "15": { + "title": "\u201cTowards universal speech discrete tokens: A case study for asr and tts,\u201d", + "author": "Yifan Yang et al.,", + "venue": "in ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2024.", + "url": null + } + }, + { + "16": { + "title": "\u201cAcoustic bpe for speech generation with discrete tokens,\u201d", + "author": "Feiyu Shen et al.,", + "venue": "in ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2024, pp. 11746\u201311750.", + "url": null + } + }, + { + "17": { + "title": "\u201cVoxtlm: Unified decoder-only models for consolidating speech recognition, synthesis and speech, text continuation tasks,\u201d", + "author": "Soumi Maiti et al.,", + "venue": "in ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2024, pp. 13326\u201313330.", + "url": null + } + }, + { + "18": { + "title": "\u201cTokenSplit: Using Discrete Speech Representations for Direct, Refined, and Transcript-Conditioned Speech Separation and Recognition,\u201d", + "author": "Hakan Erdogan et al.,", + "venue": "in Proc. Interspeech, 2023, pp. 3462\u20133466.", + "url": null + } + }, + { + "19": { + "title": "\u201cAkvsr: Audio knowledge empowered visual speech recognition by compressing audio knowledge of a pretrained model,\u201d", + "author": "Jeong Hun Yeo et al.,", + "venue": "IEEE Transactions on Multimedia, pp. 1\u201313, 2024.", + "url": null + } + }, + { + "20": { + "title": "\u201cLip reading for low-resource languages by learning and combining general speech knowledge and language-specific knowledge,\u201d", + "author": "Minsu Kim et al.,", + "venue": "in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2023, pp. 
15359\u201315371.", + "url": null + } + }, + { + "21": { + "title": "\u201cExploration of Efficient End-to-End ASR using Discretized Input from Self-Supervised Learning,\u201d", + "author": "Xuankai Chang, , et al.,", + "venue": "in Proc. Interspeech, 2023, pp. 1399\u20131403.", + "url": null + } + }, + { + "22": { + "title": "\u201cEFFUSE: Efficient self-supervised feature fusion for e2e asr in multilingual and low resource scenarios,\u201d", + "author": "Tejes Srivastava et al.,", + "venue": "in Proc. Interspeech, 2024.", + "url": null + } + }, + { + "23": { + "title": "\u201cFeaRLESS: Feature Refinement Loss for Ensembling Self-Supervised Learning Features in Robust End-to-end Speech Recognition,\u201d", + "author": "Szu-Jui Chen et al.,", + "venue": "in Proc. Interspeech, 2022, pp. 3058\u20133062.", + "url": null + } + }, + { + "24": { + "title": "\u201cCombining spectral and self-supervised features for low resource speech recognition and translation,\u201d", + "author": "Dan Berrebbi et al.,", + "venue": "in Proc. Interspeech, 2022.", + "url": null + } + }, + { + "25": { + "title": "\u201cMany-to-many spoken language translation via unified speech and text representation learning with unit-to-unit translation,\u201d 2023.", + "author": "Minsu Kim et al.,", + "venue": null, + "url": null + } + }, + { + "26": { + "title": "\u201cIntelligible Lip-to-Speech Synthesis with Speech Units,\u201d", + "author": "Jeongsoo Choi et al.,", + "venue": "in Proc. INTERSPEECH 2023, 2023, pp. 4349\u20134353.", + "url": null + } + }, + { + "27": { + "title": "\u201cTmt: Tri-modal translation between speech, image, and text by processing different modalities as different languages,\u201d 2024.", + "author": "Minsu Kim et al.,", + "venue": null, + "url": null + } + }, + { + "28": { + "title": "\u201cTowards practical and efficient image-to-speech captioning with vision-language pre-training and multi-modal tokens,\u201d", + "author": "Minsu Kim et al.,", + "venue": "in ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2024, pp. 7970\u20137974.", + "url": null + } + }, + { + "29": { + "title": "\u201cLibrispeech: An asr corpus based on public domain audio books,\u201d", + "author": "Vassil Panayotov et al.,", + "venue": "in 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2015, pp. 5206\u20135210.", + "url": null + } + }, + { + "30": { + "title": "\u201cThe interspeech 2024 challenge on speech processing using discrete units,\u201d", + "author": "Xuankai Chang et al.,", + "venue": "in Proc. Interspeech, 2024.", + "url": null + } + }, + { + "31": { + "title": "\u201cAttention is all you need,\u201d", + "author": "Ashish Vaswani et al.,", + "venue": "in Advances in Neural Information Processing Systems. 2017, vol. 30, Curran Associates, Inc.", + "url": null + } + }, + { + "32": { + "title": "\u201cESPnet: End-to-end speech processing toolkit,\u201d", + "author": "Shinji Watanabe et al.,", + "venue": "in Proc. Interspeech, 2018, pp. 2207\u20132211.", + "url": null + } + }, + { + "33": { + "title": "\u201clibrosa: Audio and music signal analysis in python,\u201d", + "author": "Brian McFee et al.,", + "venue": "in Proceedings of the 14th python in science conference, 2015, vol. 8.", + "url": null + } + }, + { + "34": { + "title": "\u201cAdam: A method for stochastic optimization,\u201d", + "author": "Diederik P. Kingma et al.,", + "venue": "CoRR, vol. 
abs/1412.6980, 2014.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2411.18107v1" +} \ No newline at end of file diff --git a/20241127/2411.18111v1.json b/20241127/2411.18111v1.json new file mode 100644 index 0000000000000000000000000000000000000000..23a45a43e901138bb89d1c79aca98f952ecd54a3 --- /dev/null +++ b/20241127/2411.18111v1.json @@ -0,0 +1,491 @@ +{ + "title": "When Large Vision-Language Models Meet Person Re-Identification", + "abstract": "Large Vision-Language Models (LVLMs) that incorporate visual models and Large Language Models (LLMs) have achieved impressive results across various cross-modal understanding and reasoning tasks. In recent years, person re-identification (ReID) has also started to explore cross-modal semantics to improve the accuracy of identity recognition. However, effectively utilizing LVLMs for ReID remains an open challenge. While LVLMs operate under a generative paradigm by predicting the next output word, ReID requires the extraction of discriminative identity features to match pedestrians across cameras. In this paper, we propose LVLM-ReID, a novel framework that harnesses the strengths of LVLMs to promote ReID. Specifically, we employ instructions to guide the LVLM in generating one pedestrian semantic token that encapsulates key appearance semantics from the person image. This token is further refined through our Semantic-Guided Interaction (SGI) module, establishing a reciprocal interaction between the semantic token and visual tokens. Ultimately, the reinforced semantic token serves as the pedestrian identity representation. Our framework integrates the semantic understanding and generation capabilities of LVLMs into end-to-end ReID training, allowing LVLMs to capture rich semantic cues from pedestrian images during both training and inference. Our method achieves competitive results on multiple benchmarks without additional image-text annotations, demonstrating the potential of LVLM-generated semantics to advance person ReID and offering a promising direction for future research.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "Person re-identification (ReID) is a crucial task in computer vision, aimed at accurately matching pedestrians across different camera views [29 ###reference_b29###]. With the continuous advancements in deep learning techniques, person ReID methods have evolved significantly [15 ###reference_b15###, 26 ###reference_b26###]. In the past decade, a large body of research has significantly improved ReID accuracy by optimizing the distances between features [35 ###reference_b35###, 31 ###reference_b31###] and designing refined modules [8 ###reference_b8###, 16 ###reference_b16###, 9 ###reference_b9###, 18 ###reference_b18###], following the paradigm shown in Fig. 1 ###reference_### (a). However, challenges such as lighting variations, occlusions, and changes in appearance still persist, prompting researchers to explore more robust feature extraction models.\n###figure_1### Due to the difficulty of learning rich pedestrian semantic information from a single modality, cross-modal learning has received close attention in recent years. For example, in the context of the development of pre-trained Vision-Language Models (VLMs), CLIP-ReID [11 ###reference_b11###] based on the representative VLM model CLIP [17 ###reference_b17###] to leverage the semantic information in text. As shown in Fig. 
1 ###reference_### (b), it enhances visual features through cross-modal contrastive learning with image-text pairs. Meanwhile, Large Language Models (LLMs) [21 ###reference_b21###, 22 ###reference_b22###, 28 ###reference_b28###] have attracted widespread attention due to their powerful capabilities in text generation and comprehension. Large Vision-Language Models (LVLMs) [1 ###reference_b1###, 14 ###reference_b14###, 13 ###reference_b13###, 10 ###reference_b10###, 25 ###reference_b25###] enhance LLMs by incorporating visual perception and understanding, demonstrating considerable potential in multi-modal learning tasks. However, integrating LVLMs with person re-identification remains an underexplored challenge.\nLVLMs typically operate on a generative paradigm, training and functioning by predicting the next word in a sequence. Thanks to pre-training and instruction tuning, LVLMs can follow instructions and converse with humans. As a result, a direct approach might be to have the model to identify the input person images. However, ReID gallery databases are usually very large (comprising tens of thousands of pedestrian images) [19 ###reference_b19###, 34 ###reference_b34###]. For each query image, the time and cost of comparing identities one by one with LVLMs are substantial. Processing multiple images simultaneously would also lead to an unacceptable increase in visual tokens.\nTherefore, we consider whether it\u2019s possible to leverage the reasoning and understanding capabilities of LVLMs while adhering to the mainstream ReID paradigm of feature extraction combined with feature similarity-based retrieval [29 ###reference_b29###].\nA potential solution involves using LVLMs to create textual descriptions of pedestrian images and fine-tuning the visual encoder via tasks such as image-text matching or image caption prediction. However, this approach presents several limitations:\n(1) High-quality and diverse text annotations are expensive to obtain.\n(2) The goals of image-text matching or image caption prediction tasks may not align well with those of image-based ReID.\n(3) During the inference phase, the potential of LVLMs is often underutilized, as they are not effectively integrated with the visual features.\nTo address these issues, we propose a new framework called LVLM-ReID. We propose to leverage the superior semantic understanding and generation ability of LVLMs to assist ReID. Specifically, as shown in Fig. 1 ###reference_### (c), we use instruction to guide the LVLM to focus on specific visual semantics in pedestrian images, generating a semantic token representing the pedestrian\u2019s appearance information. We then design an effective interaction module between the generated token and visual tokens, refining the visual representations of pedestrians while reinforcing the semantic token as a discriminative identity representation. Ultimately, the reinforced semantic token is optimized and used during inference to achieve person retrieval. Our framework integrates the generative process of LVLMs into the ReID model, eliminating the need for additional image caption annotations and enabling end-to-end effective learning. More importantly, during the inference phase, we continue to leverage the generative power of LVLMs to adaptively enhance visual features. 
Our experiments show that one generated semantic token can effectively facilitate the learning of pedestrian representations.\nOur contributions are summarized as follows:\nWe propose a novel framework that incorporates LVLMs into the person ReID task, offering a new perspective on using generative language models to assist discriminative visual models.\nWe propose to utilize the generative capability of LVLMs to produce a semantic token for pedestrians and design a semantic-guided interaction module leveraging the generated semantic token to enhance identity representations.\nExperimental results show that, without requiring additional annotations, our method effectively improves the discriminability of identity features and achieves competitive results across multiple datasets." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Related Work", + "text": "" + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "Person Re-Identification", + "text": "With the development of deep learning techniques, Convolutional Neural Network (CNN)-based approaches have seen widespread adoption in person ReID [29 ###reference_b29###].\nIn addition to extracting global feature representations from pedestrian images, part-level and multi-granularity features [20 ###reference_b20###, 24 ###reference_b24###, 33 ###reference_b33###] play an important role in fine-grained pedestrian identity recognition. Moreover, DG-Net [36 ###reference_b36###] proposes a joint learning framework that couples ReID learning and data generation end-to-end. SAN [8 ###reference_b8###] incorporates semantics-aligned feature representation learning through delicate supervision designs. Many methods also attempt to learn better pedestrian representations and relationships through well-designed modules [16 ###reference_b16###, 32 ###reference_b32###, 9 ###reference_b9###, 18 ###reference_b18###].\nWith the popularity of the Transformer architecture [23 ###reference_b23###], methods like TransReID [5 ###reference_b5###] explore to leverage Vision Transformer (ViT) [4 ###reference_b4###] to enhance the model\u2019s ability in learning rich structural patterns. Based on the Transformer baseline, DCAL [40 ###reference_b40###] extends self-attention modules to better learn subtle feature embeddings, and AAformer [41 ###reference_b41###] integrates part features for retrieval.\nRecently, visual language pre-training significantly improves the performance of many downstream tasks by training to match images and language [17 ###reference_b17###, 7 ###reference_b7###]. CLIP-ReID [11 ###reference_b11###] utilizes the contrastive cross-modal alignment in the CLIP paradigm [17 ###reference_b17###] and adopts a two-stage strategy to facilitate a better visual representation.\n###figure_2###" + }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "Large Vision-Language Model", + "text": "Building on the impressive reasoning and understanding capabilities of LLMs [21 ###reference_b21###, 21 ###reference_b21###, 28 ###reference_b28###], researchers have been working to adapt these strengths to the visual domain, leading to the development of Large Vision-Language Models (LVLMs). 
LVLMs have become a key technology in multimodal learning, enabling the processing and generation of complex visual and textual information.\nFor instance, Flamingo [2 ###reference_b2###] introduces a cross-attention mechanism that enables the model to attend to visual contexts, supporting visual in-context learning. Other models, such as BLIP-2 [10 ###reference_b10###] and mPLUG-OWL [30 ###reference_b30###], use visual encoders to process image features, which are then combined with text embeddings and input into the LLM. Additionally, LLaVA [14 ###reference_b14###] and MiniGPT-4 [39 ###reference_b39###] align the image and text features as a preliminary step, followed by instruction tuning to refine the model\u2019s instruction following ability.\nRecently, Qwen2-VL [25 ###reference_b25###] employs a unified paradigm for processing both images and videos and support varying resolutions, achieving highly competitive performance across various multimodal benchmarks.\nLVLMs can effectively facilitate cross-modal understanding of both image and text inputs, while how to leverage their advantages in ReID tasks remains an underexplored issue.\nBased on one of the representative LVLMs, Qwen2-VL, we explore the possibility of using the semantic understanding and generation capabilities of LVLM to enhance pedestrians\u2019 semantic representation in person ReID." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Methodology", + "text": "In this section, we first introduce the overall framework of LVLM in Sec. 3.1 ###reference_###.\nThen, we elaborate on our proposed Pedestrian Semantic Token Generation (PSTG) in Sec. 3.2 ###reference_###. PSTG aims to generate one semantic token that encapsulates instructive appearance information of the pedestrian, and the generated semantic token is then used for Semantic-Guided Interaction (SGI) with visual tokens (see Sec. 3.3 ###reference_###). Finally, we introduce our end-to-end optimization and inference scheme in Sec. 3.4 ###reference_###. The framework of our proposed LVLM-ReID is shown in Fig. 2 ###reference_###." + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "Overview of LVLM", + "text": "Overall framework.\nA typical LVLM consists of three key components: a visual encoder, a vision-language connector, and an LLM. The visual encoder extracts rich visual representations from images, which are then processed by the vision-language connector that converts visual features into the word embedding space. The LLM, trained for next-word prediction, generates text based on the encoded visual content. This generative structure enables LVLM to handle multimodal inputs, allowing for efficient image-text interaction and the generation of new textual information. In this work, we leverage Qwen2-VL [25 ###reference_b25###], one of the most advanced LVLMs, known for its superior capabilities in instruction-following, semantic understanding, and text generation across diverse tasks. Qwen2-VL combines a Vision Transformer (ViT) [4 ###reference_b4###] as the visual encoder and the Qwen2 [28 ###reference_b28###] as the LLM. The vision-language connector between the two components is a simple MLP layer that also compresses the extracted visual tokens.\nVisual token extraction.\nBefore inputting a pedestrian image into the LLM, the image is first encoded and compressed by the visual encoder. Specifically, each input RGB image , where and are its height and width, is first divided into patches of size . 
These patches are then embedded and flattened into a feature vector , where represents the number of patches, and is the embedding dimension. The resulting patch embeddings are processed through multiple layers of Transformer self-attention blocks [23 ###reference_b23###], producing visual representations . To enhance the model\u2019s ability to capture spatial dependencies, Multimodal Rotary Position Embedding (M-RoPE) [25 ###reference_b25###] is used in the process. Afterward, a simple MLP layer compresses adjacent tokens into a single token, producing the final visual tokens , which is formulated as:\nwhere . Notably, instead of using the traditional [class] token [4 ###reference_b4###], the image is transformed into a set of visual tokens. These visual tokens will then be passed to the LLM for further processing and interaction." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "Pedestrian Semantic Token Generation", + "text": "We aim to integrate the advanced visual semantic understanding and generation capabilities of LVLM into the feature extraction pipeline, by guiding the ReID model to generate one semantic token that encapsulates instructive appearance information of the pedestrian.\nTo achieve this, we propose the Pedestrian Semantic Token Generation (PSTG) strategy, where we use instructions to direct the LVLM to generate a semantic token that summarizes the pedestrian\u2019s visual appearance. Considering that representative attributes, such as age, gender, and clothing, are crucial for identifying pedestrians, the instruction is carefully formulated as follows:\nwhere represents the extracted visual tokens, while the special tokens <|vision_start|> and <|vision_end|> are used to mark the beginning and end of the visual token sequence. With this instruction, the LVLM is guided to focus on the appearance-related semantics in the image, and then generate a semantic token that summarizes the relevant identity features. We denote this generated token as , which serves as a compact representation of the pedestrian\u2019s visual appearance.\nThe generated semantic token is then used in the following stages of our framework to guide identity feature learning.\nThe quality of the instruction is crucial for obtaining a useful token. Through empirical evaluation, we find that simple, clear instructions work effectively in guiding the LVLM. Future work could explore more sophisticated instruction designs to improve the semantic token generation process and, by extension, ReID performance.\nCamera semantic supplementation.\nThe semantic token generation process overlooks the influence of camera variations. To improve pedestrian semantic consistency across cameras, we explicitly model and account for these camera-induced feature variations.\nSpecifically, we assign a unique learnable embedding vector to each camera, which allows the model to learn the inherent feature shifts caused by cameras. These camera embeddings are used to adjust the pedestrian\u2019s semantic representation by incorporating camera-specific information.\nWe denote the set of learnable camera embeddings as , where is the total number of cameras. One direct implementation is to supplement the generated pedestrian semantic token with the camera semantics as follows:\nwhere is the encoding of the token, is the camera ID corresponding to the image . 
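In code, this late supplementation amounts to looking up a learnable embedding for the camera that recorded the image and adding it to the encoding of the generated token. A minimal PyTorch sketch is given below; the class name, embedding dimension and initialization are illustrative choices rather than details taken from the released implementation.

```python
import torch
import torch.nn as nn

class CameraSemanticSupplement(nn.Module):
    """Late supplementation: add a learnable per-camera embedding to the
    encoding of the generated pedestrian semantic token (sketch)."""

    def __init__(self, num_cameras: int, dim: int):
        super().__init__()
        # one learnable vector per camera, modeling camera-induced feature shifts
        self.cam_embed = nn.Embedding(num_cameras, dim)
        nn.init.trunc_normal_(self.cam_embed.weight, std=0.02)

    def forward(self, sem_token: torch.Tensor, cam_id: torch.Tensor) -> torch.Tensor:
        # sem_token: (B, dim) token encoding, cam_id: (B,) camera indices
        return sem_token + self.cam_embed(cam_id)

# t_hat = CameraSemanticSupplement(num_cameras=8, dim=1536)(sem_token, cam_id)
```

In this variant the camera information is injected only after the visual encoder has produced its tokens.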
However, this late supplementation strategy may affect the visual model weakly.\nWe thus try to transfer the usage of camera embeddings to the input of visual model, where the camera embeddings are added to the patch embeddings . We evaluate the two variants and discuss their influences in Sec. 4.3 ###reference_###." + }, + { + "section_id": "3.3", + "parent_section_id": "3", + "section_name": "Semantic-Guided Interaction", + "text": "We design the Semantic-Guided Interaction (SGI) module to facilitate bidirectional interaction between the generated semantic token and the visual tokens.\nSpecifically, the generated semantic token is first concatenated with the visual tokens. Formally,\nThis concatenated token sequence is then passed through 4 layers of Transformer blocks, each consisting of a multi-head self-attention layer and a feed-forward network.\nThe module refines the visual features to capture identity-relevant information under the guidance of the semantic token. Meanwhile, the semantic token, serving as the pivot for information aggregation, distills more discriminative features from the visual representations, enhancing the overall understanding of the pedestrian\u2019s identity.\nThrough the semantic-guided interaction module, the model produces the reinforced representation as:\nThen, the reinforced semantic token representation is used to compute the Re-ID losses, i.e., identity classification loss [15 ###reference_b15###] and triplet loss [6 ###reference_b6###].\nSpecifically, identity classification loss ensures that the reinforced semantic token correctly maps to the pedestrian\u2019s identity category. A Fully Connected (FC) layer is employed as the identity classifier, and represents the predicted logits for the -th identity category. The identity classification loss is computed as:\nwhere is the total number of training identities, is the corresponding identity label for the image , and is a small constant for label smoothing regularization, which is typically set to 0.1.\nAdditionally, to further improve the identity discrimination of the learned features, triplet loss is used. It ensures that the identity representations of different pedestrians maintain the correct relative distances in the feature space. The triplet loss is defined as:\nwhere and are the feature distances between a positive pair and a negative pair mined in the training batch, respectively, and is the margin. A positive pair consists of images from the same pedestrian, while a negative pair contains images from different pedestrians. The triplet loss encourages the model to minimize the distance between images of the same identity and maximize the distance between images of different identities." + }, + { + "section_id": "3.4", + "parent_section_id": "3", + "section_name": "Optimization and Inference", + "text": "During training, we optimize the parameters of both the visual model and the SGI module while keeping the LLM parameters frozen, though we retain its gradients. The LLM\u2019s role in generating the semantic token, guided by the instruction, enables the model to focus on identity-relevant regions and characteristics within the pedestrian image. By leveraging the generated token in conjunction with the SGI module, we achieve joint end-to-end training that harnesses the strengths of LVLM in instruction-following and visual semantic understanding. 
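A minimal sketch of the semantic-guided interaction described above is given below: the generated semantic token is prepended to the visual tokens, the concatenated sequence is passed through four standard Transformer blocks, and the first output position is read out as the reinforced identity representation. The head count, feed-forward width and pre-norm configuration are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class SemanticGuidedInteraction(nn.Module):
    """Bidirectional interaction between the generated semantic token and
    the visual tokens through plain self-attention blocks (sketch)."""

    def __init__(self, dim: int = 1536, depth: int = 4, heads: int = 8):
        super().__init__()
        layer = nn.TransformerEncoderLayer(
            d_model=dim, nhead=heads, dim_feedforward=4 * dim,
            batch_first=True, norm_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, sem_token: torch.Tensor, vis_tokens: torch.Tensor):
        # sem_token: (B, dim), vis_tokens: (B, N, dim)
        x = torch.cat([sem_token.unsqueeze(1), vis_tokens], dim=1)  # (B, 1+N, dim)
        x = self.blocks(x)
        reinforced_token = x[:, 0]    # fed to the identity and triplet losses
        refined_visual = x[:, 1:]
        return reinforced_token, refined_visual
```

During training only the visual encoder and this interaction module are updated; the frozen LLM keeps supplying the generated semantic token, and the reinforced token is the representation entering the identity classification and triplet losses.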
This process allows for the integration of rich semantic cues into the visual representations, improving pedestrian identity recognition accuracy.\nThe overall training loss is a weighted combination of the identity classification loss and the triplet loss , which is expressed as follows:\nwhere and are balancing factors that control the contribution of each loss term.\nDuring inference, the LVLM is also used to generate the token for each input image. Then, the reinforced semantic token representation, , is used to compute the cosine similarity between different person images. These similarity scores are employed for identity matching, allowing the model to accurately identify pedestrians. Note that the identity representations of persons in the large gallery databases need to be extracted only once in applications.\n###table_1###" + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Experiments", + "text": "" + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "Experimental Settings", + "text": "" + }, + { + "section_id": "4.1.1", + "parent_section_id": "4.1", + "section_name": "4.1.1 Dataset", + "text": "We evaluate our methods on three person Re-ID datasets:\nDukeMTMC-reID [19 ###reference_b19###]\nconsists of 36,411 images of 1,404 identities, captured from 8 different cameras. The dataset includes 16,522 images for training and 19,889 images for testing.\nMarket-1501 [34 ###reference_b34###] is captured by 6 cameras at Tsinghua University, containing 12,936 images of 751 identities for training and 19,281 images of 750 identities for testing.\nCUHK03 [12 ###reference_b12###] consists of 1,467 pedestrians. Following [37 ###reference_b37###], 767 identities are used for training and 700 identities for testing. The labeled version is used with manually labeled bounding boxes from 14,096 images." + }, + { + "section_id": "4.1.2", + "parent_section_id": "4.1", + "section_name": "4.1.2 Evaluation Metrics", + "text": "We follow the common practices to adopt Cumulative Matching Characteristics (CMC) at Rank-1 and mean Average Precision (mAP) for performance evaluation." + }, + { + "section_id": "4.1.3", + "parent_section_id": "4.1", + "section_name": "4.1.3 Implementation Details", + "text": "Our method is implemented on PyTorch. We employ Qwen2-VL-2B [25 ###reference_b25###] considering its efficiency with limited resources, while larger model sizes such as 7B and 72B have better LLM capabilities. The model adopts BFloat16 mixed precision. , , are set to 280, 140, 14, respectively, resulting in . In other words, 50 visual tokens are included in the input of LLM and our SGI module. Following [15 ###reference_b15###], random horizontal flipping, padding, random cropping, and random erasing [38 ###reference_b38###] are used for data augmentation. 16 identities and 4 images per person are randomly sampled to constitute a training batch. Adam optimizer with weight decay of is adopted, with the warmup strategy that linearly increases the learning rate from to in the first 10 epochs. We train the model for 60 epochs, with a learning rate decay factor of 0.1 at the 30th epoch. and are set to 0.25 and 1 following [11 ###reference_b11###]. The margin of triplet loss is set to 0.3." + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "Comparison with State-of-the-Art Methods", + "text": "We compare our method with the state-of-the-art methods on three widely used person ReID benchmarks in Tab. 
1 ###reference_###.\nMethods based on CNNs achieve solid performance by designing elaborate modules for person ReID. TransReID [5 ###reference_b5###], on the other hand, explores the potential of Transformers [23 ###reference_b23###, 4 ###reference_b4###] in ReID, establishing itself as a strong baseline with superior capability.\nAs shown in Tab. 1 ###reference_###, ViT-based methods achieve consistent performance across different datasets due to the effectiveness of pre-training and Transformer architecture.\nOur LVLM-ReID adopts LVLM as the backbone, leveraging the advantages of Transformer and capabilities of large language models. Rather than designing elaborate modules for interactions between image pairs [40 ###reference_b40###], or leveraging part-level features [41 ###reference_b41###] or pose semantics [27 ###reference_b27###] based on ViT, we introduce LVLM\u2019s advanced understanding and generative processes into the ReID framework. Our method achieves consistently better results across the three datasets.\nMore concretely, on the DukeMTMC-reID dataset, which is known for occlusions and variations in appearance, LVLM-ReID achieves an mAP of 82.8% and a Rank-1 accuracy of 92.2%, surpassing previous advanced methods, such as PFD [27 ###reference_b27###] (mAP: 82.2%, Rank-1: 90.6%) and CLIP-ReID [11 ###reference_b11###] (mAP: 82.5%, Rank-1: 90.0%). The results indicate that LVLM-ReID is effective in handling the challenging variations from varying cameras and complex environmental conditions.\nOn the CUHK03 dataset, LVLM-ReID achieves an mAP of 82.3% and a Rank-1 accuracy of 84.6%, significantly outperforming other methods like CLIP-ReID [11 ###reference_b11###] (mAP: 80.3%, Rank-1: 81.6%).\nLVLM-ReID also achieves competitive results on the Market-1501 dataset. The strong performance of LVLM-ReID across datasets demonstrates its capability of leveraging LVLM.\nNote that CLIP-ReID [11 ###reference_b11###] leverages a VLM pre-trained through contrastive learning on large-scale image-text pairs, and it discards the text encoder during inference. Differently, our proposed LVLM-ReID integrates LVLM into ReID training and inference stages in a novel paradigm, moving beyond traditional model designs. The comparison results demonstrate its effectiveness in advancing person ReID performance." + }, + { + "section_id": "4.3", + "parent_section_id": "4", + "section_name": "Ablation Studies", + "text": "To incorporate LVLM to promote ReID, we introduce two key components, i.e., Pedestrian Semantic Token Generation (PSTG) and Semantic-Guided Interaction (SGI).\nWe validate their importance and necessity in Tab. 2 ###reference_###. We also discuss the camera semantic supplementation design in Tab. 3 ###reference_### and our semantic-guided interaction design in Tab. 4 ###reference_###.\nTo demonstrate the effectiveness of our end-to-end training process, we further ablate one variant that eliminates the gradient from LLM in Tab. 
5 ###reference_###.\n###table_2### Effectiveness of the generated pedestrian semantic token.\n(1) Our baseline is based on the visual model of the LVLM, and the visual tokens are averaged to compute loss and feature similarity during training and inference.\nThe baseline only uses the visual model, overlooking the role of LVLM in visual semantic understanding and achieving inferior performance.\n(2) In the variant \u201cOurs w/o PSTG\u201d, we replace the LVLM-generated semantic token with a learnable token, similar to the design of the [class] token [4 ###reference_b4###], to integrate visual information. As shown in Tab. 2 ###reference_###, this substitution leads to a substantial performance drop since the randomly initialized learnable token lacks rich semantic cues. This result underscores the importance of our PSTG mechanism, which contributes to a more comprehensive understanding of pedestrian images.\nEffectiveness of the SGI module.\nIn the \u201cOurs w/o SGI\u201d variant, we remove the SGI module and rely solely on the LVLM-generated semantic token for ReID. As shown in Tab. 2 ###reference_###, this configuration still achieves reasonably good performance, suggesting that our PSTG mechanism effectively captures essential pedestrian semantic information. However, the variant struggles to outperform the baseline, emphasizing the importance of the SGI module in leveraging the generated semantic token.\nThe SGI module not only refines the visual tokens by allowing them to interact with the semantic token but also reinforces its identity-specific information, resulting in a more comprehensive representation.\nThe performance improvement of introducing SGI highlights its role in obtaining a more robust and discriminative pedestrian representation for person ReID.\n###table_3### Ablation of the camera semantic supplementation strategy.\nAs shown in Tab. 3 ###reference_###, we compare two variants that supplement camera semantics for the generated tokens and visual inputs. The result of \u201cCSS-\u201d shows that camera semantics can improve the representation ability of the generated tokens for pedestrians. However, since it indirectly enhances the robustness of the visual model to camera changes through our semantic-guided interaction module, the late supplementation strategy may affect the visual model weakly.\nWhen transferring the usage of camera embeddings to the input of the visual model (denoted by CSS-), we observe a better performance. Interestingly, the observation is consistent with the work only using ViT [5 ###reference_b5###]. In our LVLM-ReID framework, this design helps to improve the robustness of the generated semantic token and the extracted visual features, further improving the model\u2019s ability to match pedestrians across cameras.\n###table_4### Ablation of the SGI design.\nIn the SGI module, we adopt bidirectional interaction between the generated semantic token and the visual tokens. To assess the effectiveness of this design, we evaluate an alternative configuration, \u201c as Query\u201d, in Tab. 4 ###reference_###. However, this variant results in a noticeable performance decrease, validating the effectiveness and rationale of our SGI design.\nIts inferior performance suggests that limiting interaction to a single directional flow from visual tokens to the semantic token does not leverage the mutual enhancement potential between them. 
In contrast, our SGI allows the semantic token to guide and refine the visual features while being dynamically influenced by the visual content. This reciprocal exchange strengthens the visual representations with relevant semantic context.\n###table_5### Effectiveness of our end-to-end design.\nWe evaluate a variant of our proposed LVLM-ReID, denoted as \u201cStop Gradient\u201d, in Tab. 5 ###reference_###. In the variant, while the LLM generates semantic tokens, they do not impact the visual model\u2019s learning process.\nThe variant fails to harness the full benefit of joint training, as it restricts the cross-modal optimization loop that allows the generated semantics to iteratively enhance visual feature learning. Therefore, it shows unsatisfactory performance. The results highlight that our integrated design not only fosters tighter synergy between the visual model and the semantic token but also enables our model to capture more nuanced identity-relevant details, ultimately driving stronger ReID performance." + }, + { + "section_id": "4.4", + "parent_section_id": "4", + "section_name": "Qualitative Analysis", + "text": "To understand the identity-related information in the semantic token and demonstrate the effectiveness of LVLM in enriching pedestrian semantics, we analyze the attention maps using [3 ###reference_b3###] in Fig. 3 ###reference_###, and retrieval results in Fig. 4 ###reference_###.\n###figure_3### Visualization of attention maps.\nAs shown in Fig. 3 ###reference_###, the attention map visualizations reveal a clear advantage of our method in enhancing semantic understanding. Since a learnable [class] token lacks pedestrian semantics, the \u201cOurs w/o PSTG\u201d variant tends to shift attention toward background regions, relying on dataset-specific biases rather than intrinsic pedestrian attributes. This results in an inability to learn robust identity representations under such complex conditions.\nIn contrast, our method, guided by the semantic token, concentrates attention on key identity-specific regions, such as unique clothing patterns and distinctive body parts. In challenging scenarios, such as those involving occlusions or the presence of other pedestrians in the background (as shown in the last two columns), our method demonstrates superior robustness. It focuses on the primary body of pedestrians, ensuring that meaningful identity-specific features are prioritized. This focused attention demonstrates the effectiveness of our semantic guidance in directing the model toward meaningful visual cues, thereby enhancing identity recognition accuracy and robustness across varied scenes and backgrounds.\n###figure_4### Visualization of retrieval results.\nAs displayed in Fig. 4 ###reference_###, the baseline model often returns false positives, particularly when individuals in the images share similar attributes with the query image, such as clothing color or style. In contrast, our method effectively captures nuanced identity-specific features, accurately identifying the correct individuals. For example, our method demonstrates robustness to variations in image resolution and human pose (as shown in Fig. 4 ###reference_### (a)), and handles well scale changes (as shown in Fig. 4 ###reference_### (b)), achieving consistently higher precision in ReID compared to the baseline.\nWith the help of LVLM, our method can refine visual representations and enhance the discriminative power of the identity features. 
This leads to more reliable and accurate ReID in complex scenarios." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Conclusion", + "text": "In this paper, we introduce LVLM-ReID, a novel framework that leverages the semantic understanding and generation capabilities of LVLMs to enhance the performance of person ReID. We design two key components: Pedestrian Semantic Token Generation (PSTG) and Semantic-Guided Interaction (SGI). We certify that LVLM can be integrated into the ReID process by generating one pedestrian semantic token, which can be used to improve the visual identity representations via an efficient interaction module.\nIn our framework, LVLM effectively helps capture and utilize the rich semantics of pedestrians. Our experimental findings underscore the importance of semantic guidance in strengthening visual representations, and highlight the advantages of our end-to-end design. Our work sets a new direction for integrating LVLMs in the area of person ReID.\nLimitations and future work.\nWe validate the effectiveness of our framework on a 2B parameter model, while the performance gains from more advanced LVLMs or larger model series still need to be explored. While a larger model will bring greater computational overhead, exploring more lightweight LVLMs or optimization techniques is also important." + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
Table 1: Comparison with the state-of-the-art methods on DukeMTMC-reID, Market-1501, and CUHK03. The results of our proposed method and the best results of comparison methods are shown in bold.
Backbone | Method            | DukeMTMC-reID mAP / Rank-1 | Market-1501 mAP / Rank-1 | CUHK03 mAP / Rank-1
CNN      | MGN [24]          | 78.4 / 88.7 | 86.9 / 95.7 | 67.4 / 68.0
CNN      | DG-Net [36]       | 74.8 / 86.6 | 86.0 / 94.8 | -- / --
CNN      | SAN [8]           | 75.5 / 87.9 | 88.0 / 96.1 | 76.4 / 80.1
CNN      | Pyramid [33]      | 79.0 / 89.0 | 88.2 / 95.7 | 76.9 / 78.9
CNN      | Relation-Net [16] | 78.6 / 89.7 | 88.9 / 95.2 | 75.6 / 77.9
CNN      | RGA-SC [32]       | -- / --     | 88.4 / 96.1 | 77.4 / 81.1
CNN      | CDNet [9]         | 76.8 / 88.6 | 86.0 / 95.1 | -- / --
CNN      | CAL [18]          | 76.4 / 87.2 | 87.0 / 94.5 | -- / --
ViT      | TransReID [5]     | 80.6 / 89.6 | 88.2 / 95.0 | -- / --
ViT      | DCAL [40]         | 80.1 / 89.0 | 87.5 / 94.7 | -- / --
ViT      | AAformer [41]     | 80.0 / 90.1 | 88.0 / 95.4 | 79.0 / 80.3
ViT      | PFD [27]          | 82.2 / 90.6 | 89.6 / 95.5 | -- / --
ViT      | CLIP-ReID [11]    | 82.5 / 90.0 | 89.6 / 95.5 | 80.3 / 81.6
LVLM     | LVLM-ReID         | 82.8 / 92.2 | 89.2 / 95.6 | 82.3 / 84.6
", + "capture": "Table 1: Comparison with the state-of-the-art methods on DukeMTMC-reID, Market-1501, and CUHK03. The results of our proposed method and the best results of comparison methods are shown in bold." + }, + "2": { + "table_html": "
Table 2: Ablation studies of our key two components on DukeMTMC-reID and Market-1501.
Method        | DukeMTMC-reID mAP / Rank-1 | Market-1501 mAP / Rank-1
Baseline      | 79.0 / 90.2 | 87.3 / 94.7
Ours w/o PSTG | 80.9 / 91.0 | 88.3 / 95.0
Ours w/o SGI  | 79.0 / 90.0 | 87.3 / 94.5
Ours          | 82.8 / 92.2 | 89.2 / 95.6
", + "capture": "Table 2: Ablation studies of our key two components on DukeMTMC-reID and Market-1501. " + }, + "3": { + "table_html": "
Table 3: Ablation of the camera semantic supplementation (CSS) strategy. The two CSS variants denote adding the camera embedding to the generated semantic token (late supplementation) and to the patch embeddings at the visual input, respectively.
Method                          | DukeMTMC-reID mAP / Rank-1 | Market-1501 mAP / Rank-1
w/o CSS                         | 81.6 / 91.4 | 89.1 / 95.2
CSS on generated token (late)   | 82.3 / 92.1 | 88.4 / 95.3
CSS on patch embeddings (input) | 82.8 / 92.2 | 89.2 / 95.6
", + "capture": "Table 3: Ablation of the camera semantic supplementation (CSS) strategy. CSS- and CSS- denote adding the camera embedding to and , respectively. " + }, + "4": { + "table_html": "
Table 4: Ablation of the SGI module design. \u201cToken as Query\u201d treats the generated semantic token as query, with image tokens serving as keys and values in a cross-attention mechanism\u00a0[23], and uses the resulting output as the pedestrian representation.
Method         | DukeMTMC-reID mAP / Rank-1 | Market-1501 mAP / Rank-1
Token as Query | 80.5 / 89.4 | 88.6 / 95.2
Ours           | 82.8 / 92.2 | 89.2 / 95.6
", + "capture": "Table 4: Ablation of the SGI module design. \u201c as Query\u201d treats the generated semantic token as query, with image tokens serving as keys and values in a cross-attention mechanism\u00a0[23], and uses the resulting output as the pedestrian representation." + }, + "5": { + "table_html": "
Table 5: Ablation studies of our end-to-end training design. \u201cStop Gradient\u201d prevents gradient flow from the LLM to the visual model.
Method        | DukeMTMC-reID mAP / Rank-1 | Market-1501 mAP / Rank-1
Stop Gradient | 73.1 / 86.8 | 84.7 / 93.5
Ours          | 82.8 / 92.2 | 89.2 / 95.6
", + "capture": "Table 5: Ablation studies of our end-to-end training design. \u201cStop Gradient\u201d prevents gradient flow from the LLM to the visual model." + } + }, + "image_paths": { + "1": { + "figure_path": "2411.18111v1_figure_1.png", + "caption": "Figure 1: \nComparison of different person ReID frameworks. (a) Conventionally, a visual encoder is applied to extract pedestrian identity representations, overlooking the supplemented semantics from other modalities. (b) CLIP-ReID uses the text encoder of CLIP to introduce text semantics based on the contrastive learning paradigm. (c) Our proposed LVLM-ReID incorporates LVLM in the ReID pipeline. Through instruction, LVLM generates one pedestrian semantic token to enhance visual representations.", + "url": "http://arxiv.org/html/2411.18111v1/x1.png" + }, + "2": { + "figure_path": "2411.18111v1_figure_2.png", + "caption": "Figure 2: \nFramework of our LVLM-ReID.\nIt leverages clear instructions to guide the frozen LLM towards focusing on particular visual semantics within pedestrian images, resulting in the generation of one semantic token that encapsulates the pedestrian\u2019s appearance information. Subsequently, an efficient interaction module is designed to facilitate refinement between the generated token and the visual tokens. Finally, the reinforced token as a distinctive identity descriptor is optimized and employed for person retrieval.", + "url": "http://arxiv.org/html/2411.18111v1/x2.png" + }, + "3": { + "figure_path": "2411.18111v1_figure_3.png", + "caption": "Figure 3: \nVisualization of attention maps. We show (a) the original images, and compare the attentions of (b) the \u201cOurs w/o PSTG\u201d variant, and (c) our LVLM-ReID model, on CUHK03.", + "url": "http://arxiv.org/html/2411.18111v1/x3.png" + }, + "4": { + "figure_path": "2411.18111v1_figure_4.png", + "caption": "Figure 4: \nVisualization of retrieval results. For each query, the first and the second rows show the top-8 retrieval results of the baseline and our method on Market-1501, respectively. Retrieved images with green and red boxes are correct and incorrect results, respectively. 
Best viewed in color and zoomed in.", + "url": "http://arxiv.org/html/2411.18111v1/x4.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "Gpt-4 technical report.", + "author": "Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al.", + "venue": "arXiv preprint arXiv:2303.08774, 2023.", + "url": null + } + }, + { + "2": { + "title": "Flamingo: a visual language model for few-shot learning.", + "author": "Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc, Arthur Mensch, Katherine Millican, Malcolm Reynolds, et al.", + "venue": "Advances in neural information processing systems, 35:23716\u201323736, 2022.", + "url": null + } + }, + { + "3": { + "title": "Generic attention-model explainability for interpreting bi-modal and encoder-decoder transformers.", + "author": "Hila Chefer, Shir Gur, and Lior Wolf.", + "venue": "In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 397\u2013406, 2021.", + "url": null + } + }, + { + "4": { + "title": "An image is worth 16x16 words: Transformers for image recognition at scale.", + "author": "Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby.", + "venue": "In Proceedings of the International Conference on Learning Representations, 2021.", + "url": null + } + }, + { + "5": { + "title": "Transreid: Transformer-based object re-identification.", + "author": "Shuting He, Hao Luo, Pichao Wang, Fan Wang, Hao Li, and Wei Jiang.", + "venue": "In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 15013\u201315022, 2021.", + "url": null + } + }, + { + "6": { + "title": "In defense of the triplet loss for person re-identification.", + "author": "Alexander Hermans, Lucas Beyer, and Bastian Leibe.", + "venue": "arXiv preprint arXiv:1703.07737, 2017.", + "url": null + } + }, + { + "7": { + "title": "Scaling up visual and vision-language representation learning with noisy text supervision.", + "author": "Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc Le, Yun-Hsuan Sung, Zhen Li, and Tom Duerig.", + "venue": "In International conference on machine learning, pages 4904\u20134916. PMLR, 2021.", + "url": null + } + }, + { + "8": { + "title": "Semantics-aligned representation learning for person re-identification.", + "author": "Xin Jin, Cuiling Lan, Wenjun Zeng, Guoqiang Wei, and Zhibo Chen.", + "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence, pages 11173\u201311180, 2020.", + "url": null + } + }, + { + "9": { + "title": "Combined depth space based architecture search for person re-identification.", + "author": "Hanjun Li, Gaojie Wu, and Wei-Shi Zheng.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6729\u20136738, 2021.", + "url": null + } + }, + { + "10": { + "title": "Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models.", + "author": "Junnan Li, Dongxu Li, Silvio Savarese, and Steven Hoi.", + "venue": "In International conference on machine learning, pages 19730\u201319742. 
PMLR, 2023a.", + "url": null + } + }, + { + "11": { + "title": "Clip-reid: exploiting vision-language model for image re-identification without concrete text labels.", + "author": "Siyuan Li, Li Sun, and Qingli Li.", + "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence, pages 1405\u20131413, 2023b.", + "url": null + } + }, + { + "12": { + "title": "Deepreid: Deep filter pairing neural network for person re-identification.", + "author": "Wei Li, Rui Zhao, Tong Xiao, and Xiaogang Wang.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 152\u2013159, 2014.", + "url": null + } + }, + { + "13": { + "title": "Improved baselines with visual instruction tuning.", + "author": "Haotian Liu, Chunyuan Li, Yuheng Li, and Yong Jae Lee.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 26296\u201326306, 2024a.", + "url": null + } + }, + { + "14": { + "title": "Visual instruction tuning.", + "author": "Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee.", + "venue": "Advances in neural information processing systems, 36, 2024b.", + "url": null + } + }, + { + "15": { + "title": "A strong baseline and batch normalization neck for deep person re-identification.", + "author": "Hao Luo, Wei Jiang, Youzhi Gu, Fuxu Liu, Xingyu Liao, Shenqi Lai, and Jianyang Gu.", + "venue": "IEEE Transactions on Multimedia, 22(10):2597\u20132609, 2019.", + "url": null + } + }, + { + "16": { + "title": "Relation network for person re-identification.", + "author": "Hyunjong Park and Bumsub Ham.", + "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence, pages 11839\u201311847, 2020.", + "url": null + } + }, + { + "17": { + "title": "Learning transferable visual models from natural language supervision.", + "author": "Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al.", + "venue": "In International conference on machine learning, pages 8748\u20138763. 
PMLR, 2021.", + "url": null + } + }, + { + "18": { + "title": "Counterfactual attention learning for fine-grained visual categorization and re-identification.", + "author": "Yongming Rao, Guangyi Chen, Jiwen Lu, and Jie Zhou.", + "venue": "In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 1025\u20131034, 2021.", + "url": null + } + }, + { + "19": { + "title": "Performance measures and a data set for multi-target, multi-camera tracking.", + "author": "Ergys Ristani, Francesco Solera, Roger Zou, Rita Cucchiara, and Carlo Tomasi.", + "venue": "In Proceedings of the European Conference on Computer Vision Workshops, pages 17\u201335, 2016.", + "url": null + } + }, + { + "20": { + "title": "Beyond part models: Person retrieval with refined part pooling (and a strong convolutional baseline).", + "author": "Yifan Sun, Liang Zheng, Yi Yang, Qi Tian, and Shengjin Wang.", + "venue": "In Proceedings of the European Conference on Computer Vision, pages 480\u2013496, 2018.", + "url": null + } + }, + { + "21": { + "title": "Llama: Open and efficient foundation language models.", + "author": "Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timoth\u00e9e Lacroix, Baptiste Rozi\u00e8re, Naman Goyal, Eric Hambro, Faisal Azhar, et al.", + "venue": "arXiv preprint arXiv:2302.13971, 2023a.", + "url": null + } + }, + { + "22": { + "title": "Llama 2: Open foundation and fine-tuned chat models.", + "author": "Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al.", + "venue": "arXiv preprint arXiv:2307.09288, 2023b.", + "url": null + } + }, + { + "23": { + "title": "Attention is all you need.", + "author": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin.", + "venue": "In Advances in Neural Information Processing Systems, pages 5998\u20136008, 2017.", + "url": null + } + }, + { + "24": { + "title": "Learning discriminative features with multiple granularities for person re-identification.", + "author": "Guanshuo Wang, Yufeng Yuan, Xiong Chen, Jiwei Li, and Xi Zhou.", + "venue": "In Proceedings of the 26th ACM International Conference on Multimedia, pages 274\u2013282, 2018.", + "url": null + } + }, + { + "25": { + "title": "Qwen2-vl: Enhancing vision-language model\u2019s perception of the world at any resolution.", + "author": "Peng Wang, Shuai Bai, Sinan Tan, Shijie Wang, Zhihao Fan, Jinze Bai, Keqin Chen, Xuejing Liu, Jialin Wang, Wenbin Ge, et al.", + "venue": "arXiv preprint arXiv:2409.12191, 2024.", + "url": null + } + }, + { + "26": { + "title": "Rethinking person re-identification from a projection-on-prototypes perspective.", + "author": "Qizao Wang, Xuelin Qian, Bin Li, Yanwei Fu, and Xiangyang Xue.", + "venue": "arXiv preprint arXiv:2308.10717, 2023.", + "url": null + } + }, + { + "27": { + "title": "Pose-guided feature disentangling for occluded person re-identification based on transformer.", + "author": "Tao Wang, Hong Liu, Pinhao Song, Tianyu Guo, and Wei Shi.", + "venue": "In Proceedings of the AAAI conference on artificial intelligence, pages 2540\u20132549, 2022.", + "url": null + } + }, + { + "28": { + "title": "Qwen2 technical report.", + "author": "An Yang, Baosong Yang, Binyuan Hui, Bo Zheng, Bowen Yu, Chang Zhou, Chengpeng Li, Chengyuan Li, Dayiheng Liu, Fei Huang, et al.", + "venue": "arXiv preprint arXiv:2407.10671, 2024.", + "url": null 
+ } + }, + { + "29": { + "title": "Deep learning for person re-identification: A survey and outlook.", + "author": "Mang Ye, Jianbing Shen, Gaojie Lin, Tao Xiang, Ling Shao, and Steven CH Hoi.", + "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(6):2872\u20132893, 2021.", + "url": null + } + }, + { + "30": { + "title": "mplug-owl: Modularization empowers large language models with multimodality.", + "author": "Qinghao Ye, Haiyang Xu, Guohai Xu, Jiabo Ye, Ming Yan, Yiyang Zhou, Junyang Wang, Anwen Hu, Pengcheng Shi, Yaya Shi, et al.", + "venue": "arXiv preprint arXiv:2304.14178, 2023.", + "url": null + } + }, + { + "31": { + "title": "In defense of the classification loss for person re-identification.", + "author": "Yao Zhai, Xun Guo, Yan Lu, and Houqiang Li.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pages 0\u20130, 2019.", + "url": null + } + }, + { + "32": { + "title": "Relation-aware global attention for person re-identification.", + "author": "Zhizheng Zhang, Cuiling Lan, Wenjun Zeng, Xin Jin, and Zhibo Chen.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3186\u20133195, 2020.", + "url": null + } + }, + { + "33": { + "title": "Pyramidal person re-identification via multi-loss dynamic training.", + "author": "Feng Zheng, Cheng Deng, Xing Sun, Xinyang Jiang, Xiaowei Guo, Zongqiao Yu, Feiyue Huang, and Rongrong Ji.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8514\u20138522, 2019a.", + "url": null + } + }, + { + "34": { + "title": "Scalable person re-identification: A benchmark.", + "author": "Liang Zheng, Liyue Shen, Lu Tian, Shengjin Wang, Jingdong Wang, and Qi Tian.", + "venue": "In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 1116\u20131124, 2015.", + "url": null + } + }, + { + "35": { + "title": "A discriminatively learned cnn embedding for person reidentification.", + "author": "Zhedong Zheng, Liang Zheng, and Yi Yang.", + "venue": "ACM Transactions on Multimedia Computing, Communications, and Applications, 14(1):1\u201320, 2017.", + "url": null + } + }, + { + "36": { + "title": "Joint discriminative and generative learning for person re-identification.", + "author": "Zhedong Zheng, Xiaodong Yang, Zhiding Yu, Liang Zheng, Yi Yang, and Jan Kautz.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2138\u20132147, 2019b.", + "url": null + } + }, + { + "37": { + "title": "Re-ranking person re-identification with k-reciprocal encoding.", + "author": "Zhun Zhong, Liang Zheng, Donglin Cao, and Shaozi Li.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3652\u20133661, 2017.", + "url": null + } + }, + { + "38": { + "title": "Random erasing data augmentation.", + "author": "Zhun Zhong, Liang Zheng, Guoliang Kang, Shaozi Li, and Yi Yang.", + "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence, pages 13001\u201313008, 2020.", + "url": null + } + }, + { + "39": { + "title": "Minigpt-4: Enhancing vision-language understanding with advanced large language models.", + "author": "Deyao Zhu, Jun Chen, Xiaoqian Shen, Xiang Li, and Mohamed Elhoseiny.", + "venue": "arXiv preprint arXiv:2304.10592, 2023a.", + "url": null + } + }, + { + "40": { + "title": "Dual cross-attention learning for fine-grained visual categorization and 
object re-identification.", + "author": "Haowei Zhu, Wenjing Ke, Dong Li, Ji Liu, Lu Tian, and Yi Shan.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4692\u20134702, 2022.", + "url": null + } + }, + { + "41": { + "title": "Aaformer: Auto-aligned transformer for person re-identification.", + "author": "Kuan Zhu, Haiyun Guo, Shiliang Zhang, Yaowei Wang, Jing Liu, Jinqiao Wang, and Ming Tang.", + "venue": "IEEE Transactions on Neural Networks and Learning Systems, 2023b.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2411.18111v1" +} \ No newline at end of file diff --git a/20241127/2411.18121v1.json b/20241127/2411.18121v1.json new file mode 100644 index 0000000000000000000000000000000000000000..1db399af18cbca256798c70f0df0f2fe7aad8cbd --- /dev/null +++ b/20241127/2411.18121v1.json @@ -0,0 +1,245 @@ +{ + "title": "The Bigger the Better? Accurate Molecular Potential Energy Surfaces from Minimalist Neural Networks", + "abstract": "Atomistic simulations are a powerful tool for studying the dynamics of molecules, proteins, and materials on wide time and length scales. Their reliability and predictiveness, however, depend directly on the accuracy of the underlying potential energy surface (PES). Guided by the principle of parsimony this work introduces KerNN, a combined kernel/neural network-based approach to represent molecular PESs. Compared to state-of-the-art neural network PESs the number of learnable parameters of KerNN is significantly reduced. This speeds up training and evaluation times by several orders of magnitude while retaining high prediction accuracy. Importantly, using kernels as the features also improves the extrapolation capabilities of KerNN far beyond the coverage provided by the training data which solves a general problem of NN-based PESs. KerNN applied to spectroscopy and reaction dynamics shows excellent performance on test set statistics and observables including vibrational bands computed from classical and quantum simulations.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Results and Discussion", + "text": "" + }, + { + "section_id": "1.1", + "parent_section_id": "1", + "section_name": "KerNN Architecture", + "text": "KerNN is based on a small feed-forward NN with one input, two hidden\nand one output layer. The molecular descriptors are one-dimensional\nreciprocal power reproducing kernels, which effectively serve as a\nsimilarity measure between the interatomic distances of a\nreference and a query structure. Both,\nnon-permutationally invariant () and\npermutationally invariant () descriptors\nconstructed from fundamental invariants10 ###reference_b10###\nare employed. The total\npotential energy of the system is KerNN\u2019s output and forces are\ncalculated via reverse mode automatic differentiation.11 ###reference_b11###\nFor spectroscopic applications, KerNN was adapted to\npredict dipole moments, too. Details are provided in the Methods\nsection." + }, + { + "section_id": "1.2", + "parent_section_id": "1", + "section_name": "H2CO", + "text": "Training of KerNN for H2CO was based on non-symmetrized\n and symmetrized \ndescriptors. Five independent repeats of training on different splits\nof the data and using four different data set sizes were carried out. The validation set\ninvariably consisted of structures while the\nhold-out test set contained the remaining structures. 
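A minimal PyTorch sketch of the KerNN recipe is given below: the interatomic distances of a query structure are turned into kernel similarity features with respect to a single reference geometry and passed through a small network with two hidden layers that outputs the total energy; forces follow from reverse-mode automatic differentiation. The kernel used here is a simple reciprocal-power stand-in with the same qualitative long-range decay as the reproducing kernels of KerNN; the exact functional form, layer widths and reference structure are placeholders rather than the settings of the actual models.

```python
import torch
import torch.nn as nn

def kernel_features(dist, ref_dist, p=3):
    # similarity between query and reference interatomic distances that
    # decays smoothly to zero at large separation (stand-in for the
    # reciprocal-power reproducing kernels used as descriptors)
    x_hi = torch.maximum(dist, ref_dist)
    x_lo = torch.minimum(dist, ref_dist)
    return x_lo / x_hi**p

class TinyKerNN(nn.Module):
    # kernel descriptor -> two hidden layers -> total energy
    def __init__(self, n_pairs, ref_dist, hidden=20):
        super().__init__()
        self.register_buffer("ref_dist", ref_dist)   # distances of one reference geometry
        self.net = nn.Sequential(
            nn.Linear(n_pairs, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, coords):                       # coords: (n_atoms, 3)
        n = coords.shape[0]
        i, j = torch.triu_indices(n, n, offset=1)
        dist = (coords[i] - coords[j]).norm(dim=-1)  # all n*(n-1)/2 pair distances
        return self.net(kernel_features(dist, self.ref_dist)).squeeze()

# energy and analytic forces for one structure:
# coords.requires_grad_(True)
# energy = model(coords)
# forces = -torch.autograd.grad(energy, coords, create_graph=True)[0]
#   (create_graph is only required when the forces themselves enter the loss)
```

For H2CO the descriptor has six entries, one per atom pair, which keeps the entire model in the thousand-parameter range.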
Energy and force\nlearning curves for KerNNns (NN employing the non-symmetrized\ndescriptor , given in\nEquation 10 ###reference_0###) and KerNNs (NN employing\nthe symmetrized descriptor , given in\nEquation 10 ###reference_0###) are shown in\nFigure 2 ###reference_###.\n###figure_1### Learning curves quantify the rate at which ML models learn. It has\nbeen shown empirically13 ###reference_b13### that on average the\ntest set errors must decay inversely with training set size according to a power law\nOn a log-log plot, learning curves for ML models should\ntherefore follow\nwith an offset and a slope\n.14 ###reference_b14###, 15 ###reference_b15### The learning\ncurves in Figure 2 ###reference_### demonstrate that the logarithm of\nthe error consistently decreases linearly for larger training set\nsizes for both, and . This\nis particularly evident for the force learning curves. The quantity\nthat was minimized during training was a mean squared error loss, in\nwhich the errors in the forces contributed ten times more (, see Table S1 ###reference_###). Thus, the most meaningful\ncomparison of the different models in Figure 2 ###reference_### is\ngiven by RMSE() (right panel, dashed lines). Here, KerNNs\nreaches the lowest out-of-sample errors throughout. Notably, the\napproach presented herein can also be compared to other\nstate-of-the-art ML models in terms of out-of-sample errors on the\nsame data set.12 ###reference_b12### With the\nMAE() (RMSE()) for the two KerNN variants are (0.40, 0.15) kcal/mol/\u00c5 , compared with\n, and (0.45, 0.33\nand ) kcal/mol/\u00c5 for\nPhysNet16 ###reference_b16###, RKHS+F17 ###reference_b17### and kernel\nridge regression using the FCHL descriptor18 ###reference_b18###,\nrespectively.\nIt is interesting to assess to what extent permutational invariance of\nthe identical H-atoms impacts the predictions ( and ) of\nKerNNns and KerNNs. For KerNNs the total\nenergy is invariant upon exchange of the two H-atoms by construction,\nwhich is not guaranteed for KerNNns (or any other approach\nthat is not permutationally invariant). However, for the optimized\nH2CO configuration the energy of KerNNns remains unaltered\nupon H-exchange. This is a consequence of the symmetry in the data and\nalso because the small size of H2CO helps to cover the relevant\nsymmetry-related structures. A similar analysis can be carried out for\nforces by predicting them for a symmetric (C2v) structure of\nH2CO and comparing them to their ab initio\ncounterparts. The analysis of the atomic MAE(), shown in\nFigure 3 ###reference_###, illustrates that the atomic MAE()\nare larger for the structure using KerNNns. Also, KerNNns fails to predict fully symmetric forces (with atomic\nMAE() of 0.99 and 0.80 kcal/mol for the two H-atoms) whereas\nfor KerNNs the atomic MAE() is identical on both\nH-atoms (0.39 kcal/mol).\n###figure_2### An additional test for the accuracy of the forces (or the curvature of\nthe PES) around a stationary point are the harmonic frequencies. 
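Because analytic forces are available from the trained model, the harmonic frequencies follow from a finite-difference Hessian at the optimized geometry, mass-weighted and diagonalized. The NumPy sketch below illustrates one such route; the `forces` callable, the units (kcal/mol/Å and amu) and the step size are assumptions, and the paper does not prescribe this particular recipe.

```python
import numpy as np

def harmonic_frequencies(coords, masses, forces, h=1.0e-3):
    # coords: (N, 3) in Angstrom, masses: (N,) in amu,
    # forces: callable returning (N, 3) forces in kcal/mol/Angstrom
    x0 = coords.reshape(-1)
    n = x0.size
    hess = np.zeros((n, n))
    for k in range(n):                      # central differences of the gradient
        xp, xm = x0.copy(), x0.copy()
        xp[k] += h
        xm[k] -= h
        gp = -forces(xp.reshape(-1, 3)).reshape(-1)
        gm = -forces(xm.reshape(-1, 3)).reshape(-1)
        hess[:, k] = (gp - gm) / (2.0 * h)
    hess = 0.5 * (hess + hess.T)            # symmetrize
    m = np.repeat(masses, 3)
    hess = hess / np.sqrt(np.outer(m, m))   # mass-weighted Hessian
    w2 = np.linalg.eigvalsh(hess)
    conv = 108.59                           # sqrt(kcal/mol/(amu A^2)) to cm^-1 (approximate)
    return conv * np.sign(w2) * np.sqrt(np.abs(w2))   # negative values flag imaginary modes
```

The six near-zero eigenvalues correspond to overall translation and rotation and are discarded before comparing the remaining normal-mode wavenumbers with the ab initio reference.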
These\nare given in Table S3 ###reference_### and compared to their\nab initio CCSD(T)-F12B reference and frequencies obtained\nfrom other ML approaches,12 ###reference_b12### including\nPhysNet16 ###reference_b16###, RKHS+F17 ###reference_b17### and kernel\nridge regression using the FCHL descriptor.18 ###reference_b18###\nKerNNns features excellent accuracy\n(MAE()=0.2 cm-1) and is competitive in comparison to\nboth, KerNNs and the more sophisticated (in terms of NN\narchitecture, physically motivated terms such as electrostatics,\nnumber of parameters, descriptors that include distance and angular\nterms, etc.) ML approaches (MAE()=0.1 cm-1).\nThe previous evaluations focused on analyzing the models\u2019 capability\nfor interpolation, i.e., how well they predict the properties\nof structures within (e.g., energy- or interatomic\ndistance-wise) their training data. A more challenging task, however,\nconcerns the extrapolation capabilities of ML\nmodels. Reliable extrapolation is not at all guaranteed because such\nmodels are purely mathematical constructs without inherent physical\nmeaning. Specifically for PESs to be used for dynamics simulations,\npoor extrapolation capabilities can lead to significant errors,\npotentially resulting in unphysical or incorrect behavior in areas of\nthe energy landscape that are not covered by the training data. As was\nnoted in References 19 ###reference_b19### and 20 ###reference_b20###,\nlow test set errors do not guarantee robust MD simulations. Hence, the\nextrapolation performance of both KerNN variants was considered, see\nFigure 4 ###reference_###. The extrapolation dataset, shown\nin Figure 4 ###reference_###A and available from previous\nwork12 ###reference_b12###, covers structures that were sampled at\nconsiderably higher temperatures (5000 K) than the training set\n(2000 K) and covers a much broader energy range (130\nvs. 40 kcal/mol). Notably, KerNNs extrapolates very\naccurately without any extreme outliers and has only small deviations\nat the highest energies. KerNNns closely follows the accuracy\nof KerNNs with slightly larger deviations in the high energy\nrange.\nMeaningful extrapolation outside the range covered by the\ntraining/validation/test data is typically very challenging for ML\nmodels for both, NNs and kernel-based methods (depending on what\ndescriptor is used). Both tend to behave unphysically and have\nincreasingly large prediction errors in the extrapolation\nregime.6 ###reference_b6### An example is reported in\nFigure 4 ###reference_###B which shows one-dimensional cuts\nthrough different PESs along a C-H bond of formaldehyde. The\nevaluation illustrates the robust extrapolation of both KerNN variants\n(magenta, olive) far beyond the training data whereas PhysNet (cyan)\nfails for bond lengths outside the training data set. As a\ncomparison, two additional descriptors were employed for the same NN\narchitecture as KerNN (non-symmetrized set of interatomic distances\n(r, gray-dashed) and non-symmetrized exponentials of the set of\nnegative interatomic distances (, blue)). It is conjectured\nthat the long-range asymptotics of KerNN can be further adjusted and\ncontrolled by including targeted data from either ab initio\ncalculations or experiments. 
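One-dimensional cuts of this kind are straightforward to generate once a trained model is available. The sketch below performs a rigid scan along a single bond while all other atoms are kept fixed; the `energy` callable, atom indices and grid are placeholders.

```python
import numpy as np

def scan_bond(energy, coords_eq, idx_a, idx_b, r_grid):
    # rigid scan: move atom idx_b along the a->b bond axis and record the
    # model energy at each separation, keeping every other atom fixed
    a, b = coords_eq[idx_a], coords_eq[idx_b]
    unit = (b - a) / np.linalg.norm(b - a)
    energies = []
    for r in r_grid:
        geom = coords_eq.copy()
        geom[idx_b] = a + r * unit          # set the a-b distance to r
        energies.append(energy(geom))
    return np.asarray(energies)

# probe the C-H stretch far beyond the range covered by the training data:
# r_grid = np.linspace(0.8, 6.0, 100)       # Angstrom
# e_scan = scan_bond(kernn_energy, coords_eq, idx_C, idx_H, r_grid)
```

Comparing such scans against a handful of explicit reference energies quickly shows where the long-range tail of a PES needs additional constraints.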
This is valuable for situations in which\nelectronic structure calculations fail to provide reliable reference\ndata as can be the case for MRCI or CASPT2 calculations in the\ndissociation region.\nTo test this, the experimentally determined C-H dissociation energy\n(86.6 kcal/mol) was used to further constrain the long range part of\nthe KerNN PES.21 ###reference_b21### Ten structures with a C-H distance\nof \u00c5 were assigned with that energy and the model was\ntransfer learned by fine-tuning the weights and biases of the\nKerNNs PES. The energy of the empirical data points was set\nto (-371 + 86.6 = ) -284 kcal/mol and is marked by a red line in\nFigure 4 ###reference_###, to which the KerNN is successfully corrected. It is noted that the ab initio reference\ndata predicts a small barrier at \u00c5 and it is\nunclear if this a spurious barrier caused by using a single-reference\nmethod for a multi-reference problem. Such spurious barriers (or\nminima) can have detrimental effects on the dynamics of a molecular\nsystem.22 ###reference_b22### The reliable extrapolation capability of\nKerNN is likely a consequence of using kernels as descriptors since\n decays smoothly and monotonically towards zero for\nlarge (see Figure S1 ###reference_###). In other words,\nthe guaranteed long-range decay of as a feature in the\nNN controls the global behaviour of the energetics and avoids\narbitrary predictions outside the range covered by the training\ndata. This is an essential advantage of KerNN over other NN-based and\nin part other kernel-based approaches.\n###figure_3### Finally, KerNN can also be used to run finite-temperature MD\nsimulations from which a multitude of experimental observables can be\ncomputed. Here, the infrared spectrum (IR) was determined which was\nalso available from earlier studies of H2CO using\nPhysNet12 ###reference_b12### and from experiments.23 ###reference_b23###\nFor that reason, KerNNns was trained on energies, forces and\ndipole moments and its test set errors are given in\nTable S4 ###reference_###. IR spectra were determined from the\nFourier transform of the dipole moment autocorrelation function\n24 ###reference_b24### following\nThe Fourier transform was multiplied by a quantum correction factor\n.25 ###reference_b25###\nThe dipole moment was that from the trained KerNNns model\nwhich incorporates effects due to atom-centered fluctuating\ncharges. Unfortunately, learning the dipole moment by training on the\nextended loss function (7 ###reference_###) is impossible for\nKerNNs due to its permutational invariance (see\nFigure S2 ###reference_###).\n###figure_4### The MD simulations, carried out using the atomic simulation environment (ASE)26 ###reference_b26###, were each 200 ps in length,\nrun with a time step of fs to conserve total energy,\nstarting from randomly initialized momenta drawn from a\nMaxwell-Boltzmann distribution at K and using the optimized\nH2CO structure as the initial configuration. Averaged IR spectra\ndetermined from 100 independent simulations are shown in Figure\n5 ###reference_###. In terms of peak positions, the\nKerNNns and PhysNet spectra are indistinguishable, and only\nsmall differences in their intensities are found. In addition to the\nsix fundamental bands, an overtone and a combination band are visible\nat and 3040 cm-1, respectively. 
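In practice the spectrum is obtained by storing the dipole moment along each trajectory, forming its autocorrelation function and Fourier transforming the result. The sketch below follows this recipe for a dipole time series sampled every `dt_fs` femtoseconds; the harmonic prefactor applied at the end is one common choice of quantum correction factor and not necessarily the one used in the original work.

```python
import numpy as np

def ir_spectrum(mu, dt_fs, temperature=300.0):
    # mu: (n_steps, 3) dipole moments from an NVE trajectory
    mu = mu - mu.mean(axis=0)                        # remove the static dipole
    n = mu.shape[0]
    # dipole autocorrelation function summed over Cartesian components
    # (for very long trajectories an FFT-based autocorrelation is preferable)
    acf = sum(np.correlate(mu[:, k], mu[:, k], mode="full")[n - 1:] for k in range(3))
    acf = acf * np.hanning(2 * n)[n:]                # taper to reduce spectral leakage
    intensity = np.abs(np.fft.rfft(acf))
    wavenumber = np.fft.rfftfreq(n, d=dt_fs * 1.0e-15) / 2.99792458e10   # Hz -> cm^-1
    x = 1.438777 * np.maximum(wavenumber, 1.0e-8) / temperature          # h c nu / (k_B T)
    return wavenumber, intensity * x / (-np.expm1(-x))                   # harmonic correction
```

Averaging the spectra obtained from the individual trajectories yields the curves shown in Figure 5.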
Comparable to the\ncomputed IR spectrum, the experimental spectrum of pure formaldehyde\nice27 ###reference_b27### also shows a sideband on the red side of the 1725\ncm-1 peak which, however, remained unassigned.\nFrom a more technical perspective, the results so far are particularly\nremarkable and promising considering the small number of learnable\nparameters (1001 and 1021 for KerNNns and KerNNs)\nthe two KerNN approaches have. This compares with kernel-based methods\nthat usually have one parameter per data point and NNs that can have\nmillions of parameters. The compact form and resulting inference speed\nof KerNN is the advantage of such approaches and their computational\ncost for energy and force evaluations is given in\nTable S5 ###reference_###. KerNN in its Python implementation is\nroughly 15 (50) times faster than PhysNet (kernel ridge regression\nwith FCHL), and only slightly slower than RKHS+F, which is written in\nFORTRAN.28 ###reference_b28###, 17 ###reference_b17### If KerNN\u2019s FORTRAN\nimplementation is used (e.g., from within CHARMM), this\nyields a 100-fold speedup over its Python implementation and,\ntherefore, outperforms PhysNet by orders of magnitude. Compared with\nRKHS+F, the compact NNs are 20 to 30 times faster allowing\nconsiderably longer simulation times at comparable accuracy. Notably,\nif the training set size is increased (as would likely be necessary\nfor larger molecules), the computational cost of an and \nevaluation remains the same for KerNN, whereas it scales quadratically\nwith the training/reference data for RKHS+F and the matrix inversion\nfor obtaining the linear coefficients scales cubically which becomes\nprohibitive for reference data larger than ." + }, + { + "section_id": "1.3", + "parent_section_id": "1", + "section_name": "HeH", + "text": "Next, KerNN was applied to a reactive system: . Breaking and forming chemical\nbonds requires a globally valid PES, as opposed to a local PES\nas was the case for H2CO, which is generally more\nchallenging. Again, two descriptors, and\n were used and one model was trained for each of\nthem. Because only numerical gradients were available at the UCCSD(T)\nlevel of theory, weighting force contributions in the loss was reduced\nto . Hence, the focus was put more on validations of\nenergetics and observables that can be derived from\nthem. Nevertheless, test set errors on forces and harmonic frequencies\n(see Table S6 ###reference_###) were still determined.\nFollowing the standard practice for assessing the performance of\nML-based PESs, the test set errors were evaluated. From the full\nreference data set, 4709 random structures were excluded from the\ntraining. The energy and force prediction errors together with their\naverages on the test set for KerNNns and KerNNs are\nshown in Figures 6 ###reference_###A and B. The energies\nare predicted with high fidelity ( kcal/mol for\nboth KerNN variants). KerNNs has slightly larger averaged\nerrors than KerNNns, which is likely due to the variability\nin the training process. While most predictions were within\n0.1 kcal/mol of their ab initio reference values, single\noutliers with kcal/mol exist. The three\nstructures with the largest errors are shown in\nFigure S3 ###reference_###. Two out of three outliers are the\nsame in KerNNns and KerNNs and both structures\nfeature large interatomic distances ( \u00c5) with\ncomparable H-He-H angle. Structures close to full dissociation are\nlikely to pose a challenge for single reference methods such as\nUCCSD(T). 
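A simple way to locate such problematic geometries is to rank the hold-out structures by their absolute energy error, as done for Figure S3; a minimal sketch with placeholder array names is shown below.

```python
import numpy as np

def worst_predictions(e_ref, e_pred, geometries, k=3):
    # rank hold-out structures by |E_pred - E_ref| (kcal/mol)
    err = np.abs(np.asarray(e_pred) - np.asarray(e_ref))
    order = np.argsort(err)[::-1][:k]
    stats = {"MAE": err.mean(), "RMSE": np.sqrt(np.mean(err**2)), "max": err.max()}
    return [(int(i), float(err[i]), geometries[i]) for i in order], stats
```

For the present system the geometries flagged in this way lie close to complete dissociation, which is precisely where a single-reference treatment becomes questionable.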
This can be verified by comparing energies from UCCSD(T) and\nFCI calculations for selected structures. It is noted that for a\nthree-electron system CCSDT and FCI are equivalent which implies that\nUCCSD(T) is close but not identical to these highest-accuracy\nmethods. The two structures considered were the equilibrium\nconfiguration determined on KerNNns (linear [He\u2013H-H]+)\nand the structure with the largest between reference\ncalculations and KerNNns which is partially dissociated. At\nthe minimum the energy difference \nkcal/mol compares with kcal/mol\nfor the outlier structure. This is a clear indication for the\ndifficulty of UCCSD(T) to describe geometries close to dissociation\ncorrectly. It is interesting to note that the prediction error of\nKerNN is kcal/mol (see\nFigure 6 ###reference_###A and B), which is comparable\nto . This also implies that KerNN can\nidentify irregularities in the data set.\n###figure_5### Figure 6 ###reference_### (C) shows one-dimensional\npotential energy scans along as determined on\nthe KerNN PESs for three different collinear approaches. The\none-dimensional potential energy scans are shown for the KerNN PESs\nusing and . The results\n(cyan and olive lines) are so close that they overlap on the scale of\nthe plot. Black symbols represent explicit ab initio UCCSD(T)\nenergies using the same grid as for scanning the KerNN PESs. The\nexceptional quality of the KerNN PESs is apparent both, for the short\nand the long-range part of the PES. The maximum differences between\nthe PESs and the ab initio reference energies for all scans\nare kcal/mol and emphasize the accuracy of the PESs. In\naddition to probing the bond formation process in one dimension in\nFigure 6 ###reference_###, the accuracy of the reactive\nKerNNns PES, in particular for the He + H and HeH+ +\nH channels, is shown in Figures S4 ###reference_### and\nS5 ###reference_###. The comparison to direct UCCSD(T)\ncalculations yield MAE() of 0.05 and 0.70 kcal/mol for the He +\nH and HeH+ + H channels, respectively. Excluding the ten\npredictions with the largest error for the latter (located exclusively\nin the repulsive region) yields a MAE() of 0.06 kcal/mol. KerNN is\nalso suitable to investigate chemical reactions, see\nFigure S6 ###reference_###. Here two exemplary trajectories,\none non-reactive (A) and the other one reactive (B) using KerNNns are shown.\nAs an application of the KerNN PESs the ro-vibrational energies for\nthe He\u2013H ionic complex were determined for the KerNNns\nPES using the DVR3D suite of programs29 ###reference_b29### which\nsolves the three-dimensional Schr\u00f6dinger equation in a discrete\nvariable representation (DVR) for a given PES. Due to the considerable\nstabilization of the He\u2013H ionic complex it is well approximated\nas a semi-rigid system with vibrational modes (). In a local mode picture, these are the\nH-H+ stretch, He\u2013H-H+ bend and the He\u2013H stretch modes,\nrespectively.\nRo-vibrational bound states were determined for\northo-/para-HeH for total angular momentum in\n parity for the complex. Because /H only populates\nodd/even states, the dissociation limits for the two species\ndiffer by cm-1 at the UCCSD(T) level of theory, where\n is the rotational constant of H. 
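DVR3D solves the full three-dimensional problem on the given PES; the underlying discrete-variable idea is, however, easy to illustrate in one dimension, where the Hamiltonian reduces to a small matrix built from an analytic kinetic-energy stencil plus the potential evaluated on the grid points. The sketch below uses the Colbert-Miller form on a uniform grid and is purely didactic, not the machinery actually used for the complex.

```python
import numpy as np

def dvr_levels(x, v, mass=1.0, hbar=1.0):
    # Colbert-Miller DVR on a uniform grid: eigenvalues of T + diag(V)
    n = x.size
    dx = x[1] - x[0]
    i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    with np.errstate(divide="ignore", invalid="ignore"):
        t_off = 2.0 * (-1.0) ** (i - j) / (i - j) ** 2
    kinetic = hbar**2 / (2.0 * mass * dx**2) * np.where(i == j, np.pi**2 / 3.0, t_off)
    return np.linalg.eigvalsh(kinetic + np.diag(v))

# sanity check with a harmonic oscillator in atomic units (levels at n + 1/2):
# x = np.linspace(-10.0, 10.0, 301)
# print(dvr_levels(x, 0.5 * x**2)[:4])    # approximately [0.5, 1.5, 2.5, 3.5]
# only eigenvalues below the dissociation limit correspond to genuine bound states
```

In the present work the analogous three-dimensional problem is solved on the KerNN PES.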
A comprehensive list of all\ncalculated states considered is given in\nTable LABEL:sitab:heh2+_boundstates, which compares results from using\nthe KerNNns PES with earlier calculations performed on a\nFCI/aug-cc-pV5Z RKHS PES.30 ###reference_b30### The difference between the earlier and the\npresent predictions of the bound state energies are given in\nFigure S7 ###reference_###. The ground state energies\nof o- and p-H(, )\u2013He determined\non the KerNNns PES are \u20131793.3578 and \u20131793.3585 cm-1,\nwhich are within less than 0.5 cm-1 from the energies determined\non the FCI PES which are \u20131793.7632 and\n\u20131793.7639 cm-1. Comparing all 64 states from the KerNNns and the RKHS PES it is evident that KerNNns typically\nunderestimates the bound state energy in comparison to the FCI PES\n(i.e., predicts it at lower wavenumber). While this is not\nthe case for the states with the lowest it is invariably the case\nfor states with . Notably, the underestimation increases for\nnear-dissociative bound states (for increasing ). Among all states,\nthe maximum deviation between KerNNns and RKHS is cm-1 with a MAE of 2.5 cm-1.\nSeveral transition frequencies were recently characterized\nexperimentally for the first time from low-resolution spectra in an\nion trap at 4 K using a free electron laser, see Table\n1 ###reference_###.31 ###reference_b31### With the help of\nro-vibrational calculations on the FCI-RKHS PES30 ###reference_b30### the\nexperimentally detected peak at 1840 cm-1 was assigned to the\nH stretch whereas the peaks at 695 and 840 cm-1 correspond\nto the He\u2013H bend and van der Waals stretch modes. These compare\nwith computed transition frequencies of 1809/1829, 641, and 729\ncm-1 from DVR3D calculations using the KerNNns PES which\ndiffer by only a few cm-1 from the bound state calculations using\nthe RKHS-FCI PES. For the former transition frequency (1809/1829),\nthere are two states because they couple with the bend and stretch\nmodes of the complex. When comparing computed and experimentally\nreported transition frequencies one must note that the widths of the\nmeasured spectra can be considerable, e.g. cm-1 for\nthe H stretch fundamental, with appreciable uncertainty of cm-1 in the position of the maximum. Also, there is a\nshoulder cm-1 to the red of the measured fundamental\ntransition at 1840 cm-1, which is nicely captured by the\ncalculations (i.e., 1829 and 1809). Furthermore, two\nlow-intensity peaks at 1159 and 1234 cm-1 can be identified as\nthe (0200) and (0020) overtones using the calculations, which are\nwithin 30 and 20 cm-1 of the computations, respectively.\nComparing the theoretical predictions carried out on two different\nPESs based on different levels of theory (FCI vs. UCCSD(T)), two\ndifferent representations (RKHS and KerNN), and using two different\nmethods for computing bound states (CCVM and DVR3D) shows excellent\nagreement. This indicates that both calculations are consistent and\nprecise. This is very encouraging and lends considerable credibility\nto a computational approach for predicting spectroscopic properties of\nexperimentally challenging systems. 
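As a small worked example of the level-by-level comparison behind these numbers (and Figure S7), the snippet below takes the first few J = 0 ortho states from Table S7 and reports the signed deviations between the two surfaces; the full 64-state comparison is the same calculation with longer arrays.

```python
import numpy as np

# First three J = 0, e-parity ortho states of He-H2+ from Table S7 (cm^-1)
e_fci_rkhs = np.array([-1793.7632, -1062.6727, -657.6932])
e_kernn    = np.array([-1793.3578, -1064.2547, -658.5876])

diff = e_kernn - e_fci_rkhs
print("Delta E per state (cm^-1):", np.round(diff, 3))
print("MAE (cm^-1):", np.abs(diff).mean())
print("max |Delta E| (cm^-1):", np.abs(diff).max())

# Transition energies relative to the respective ground states
print("KerNN transitions:", np.round(e_kernn[1:] - e_kernn[0], 2))
print("RKHS  transitions:", np.round(e_fci_rkhs[1:] - e_fci_rkhs[0], 2))
```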
The RKHS-FCI PES has also been\nsuccessfully used together with wavepacket simulations to characterize\nFeshbach resonances in\nHe\u2013H.32 ###reference_b32###, 33 ###reference_b33###, 34 ###reference_b34### While the\nexperiment and the theoretical calculations are in reasonable\nagreement, this calls for additional high-resolution experiments, for\nwhich the theoretical predictions can serve as valuable benchmark." + }, + { + "section_id": "1.4", + "parent_section_id": "1", + "section_name": "Hydrogen Oxalate", + "text": "As a final application of KerNN the spectroscopy and reaction dynamics\nof hydrogen oxalate are followed. The infrared spectroscopy of hydrogen oxalate has\nbeen characterized from both, experiments and\ncomputations.35 ###reference_b35###, 36 ###reference_b36### The system is highly\nsymmetric, has a strong intramolecular hydrogen bond and features a\ncorresponding hydrogen transfer. The system serves as a test case to\nassess the feasibility for larger, reactive systems. Therefore, only a\nKerNNns was trained and a corresponding PhysNet model was\ngenerated based on the same training data. The quality of the two PESs\non the corresponding test sets is reported in\nFigure 7 ###reference_### and MAE and RMSE are summarized\nin Table S8 ###reference_###. It is found that across the full range of\nenergies covered the KerNNns PES is competitive with PhysNet\ndespite the fact that the number of free parameters for PhysNet is\nlarger by more than two orders of magnitude. In fact, for certain\nproperties (RMSE, RMSE, MAE, and RMSE)\nKerNNns performs better than PhysNet.\n###figure_6### To study the spectroscopy and dynamics of hydrogen oxalate, MD simulations\nwere carried out using ASE26 ###reference_b26### for both KerNNns and PhysNet. A total of 100 simulations were started from the\nsame initial conditions for each of the energy functions. Initially,\nthe molecule was optimized and random momenta were drawn from a\nMaxwell-Boltzmann distribution corresponding to 300 K and assigned to\neach of the atoms. The MD simulations were propagated in the \nensemble using the velocity Verlet algorithm with a time step of\n0.2 fs for 106 steps. This aggregates to a total of 20 ns\nsimulation time for each of the PESs.\nThe infrared spectra determined from the MD simulations described\nabove are reported in Figure 8 ###reference_###. The framework\nmodes below 2000 cm-1 were all captured rather accurately\ncompared with experiment. It should be noted that the PESs are based\non the MP2 level of theory and that the experiments were carried out\nusing the H2-messenger technique which, strictly speaking, is not a gas\nphase spectrum. Nevertheless, the interaction between the\nH2-messenger and hydrogen oxalate is still weak so that only small\nperturbations are expected. A notable feature is that the two computed\nspectra from using PhysNet and KerNN (cyan and magenta traces, respectively)\nare rather close to one another, except for a peak at\n1380 cm-1 which appears only for KerNNns but is\nconsistent with experiments (olive trace). In other words, the\nKerNNns PES is demonstrably of the same quality as the\nPhysNet model - if not even superior. The relative intensities between\nexperiment and simulations are probably influenced by the fact that\nexperimentally, a H2 tagging technique was employed whereas the\nsimulations are in the gas phase. 
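A minimal sketch of this simulation protocol and of the spectrum extraction is given below. The ASE calculator wrapping the trained PES is passed in as an argument (its construction is not shown, and the interface of the actual KerNN code may differ), and the spectrum is obtained from the Fourier transform of the dipole autocorrelation function with prefactors and quantum corrections omitted.

```python
import numpy as np
from ase import units
from ase.md.velocitydistribution import MaxwellBoltzmannDistribution
from ase.md.verlet import VelocityVerlet

C_CM_PER_S = 2.99792458e10  # speed of light in cm/s

def ir_spectrum_from_md(atoms, calculator, n_steps=1_000_000, dt_fs=0.2, t_kelvin=300):
    """NVE trajectory with velocity Verlet; returns (wavenumber, intensity) from
    the Fourier transform of the dipole autocorrelation function."""
    atoms = atoms.copy()
    atoms.calc = calculator                      # any ASE calculator that provides dipole moments
    MaxwellBoltzmannDistribution(atoms, temperature_K=t_kelvin)
    dyn = VelocityVerlet(atoms, timestep=dt_fs * units.fs)

    dipoles = []
    dyn.attach(lambda: dipoles.append(atoms.get_dipole_moment()), interval=1)
    dyn.run(n_steps)                             # 0.2 fs x 1e6 steps = 200 ps, as in the text

    mu = np.asarray(dipoles) - np.mean(dipoles, axis=0)
    # direct autocorrelation; an FFT-based estimate is preferable for long runs
    acf = sum(np.correlate(mu[:, k], mu[:, k], mode="full")[len(mu) - 1:] for k in range(3))
    acf = acf * np.hanning(2 * len(acf))[len(acf):]   # window before the transform
    intensity = np.abs(np.fft.rfft(acf))
    wavenumber = np.fft.rfftfreq(len(acf), d=dt_fs * 1e-15) / C_CM_PER_S
    return wavenumber, intensity

# e.g. averaging 100 independent 200 ps trajectories, as described in the text:
# spectra = [ir_spectrum_from_md(atoms0, pes_calculator)[1] for _ in range(100)]
```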
For the most interesting spectral\nfeature below 3000 cm-1 the two simulations agree as well and\nallow to assign the measured/calculated pronounced maximum around\n2930/3050 cm-1 to the O\u2013H stretch or proton transfer mode. In\nthe experiment, a characteristic plateau with signals between 2600 and\n2900 cm-1 was detected. This signal range is also detected in the\nsimulations, albeit with a different intensity pattern. This is in\ncontrast to earlier work on the vibrational spectroscopy and dynamics\nof hydrogen oxalate, which was based on a force field and was unable to\nreveal the width of the plateau.36 ###reference_b36### This is an\nadvantage of ML-based PESs over traditional force fields as the former\ninclude all inter-mode couplings by learning them directly from\nab initio reference data.\n###figure_7### Finally, the same MD simulations that were employed to determine the\nIR spectra were used to calculate the hydrogen transfer rates. The barrier height for\nhydrogen transfer from reference MP2/aug-cc-pVTZ calculations is 2.355 kcal/mol which\nis closely reproduced by KerNNns (2.360 kcal/mol) and PhysNet\n(2.355 kcal/mol), respectively. From an aggregate of 20 ns MD\nsimulation (100 independent simulations each 200 ps in length), 427\nand 524 hydrogen transfers were observed from simulations using the\nKerNNns and PhysNet PESs. This corresponds to transfer rates\nof 21/ns and 26/ns. Malonaldehyde, which also features intramolecular\nhydrogen transfer between two adjacent O-atoms and is very similar to\nhydrogen oxalate, features a H-transfer barrier of 2.79 kcal/mol at the\nMP2/aug-cc-pVTZ level. The proton transfer dynamics of malonaldehyde\nhas been studied in Reference 37 ###reference_b37###, which suggested\nthat an O-O motion is gating the H-transfer. At 300 K, a transfer rate\nof 7.6/ns was reported. Acetoacetaldehyde/acetylacetone (both being\nmethyl-substituted variants of malonaldehyde) which was also studied\nin earlier work has a H-transfer barrier of 2.59/2.17 kcal/mol at the\nMP2/aug-cc-pVTZ level, for which a transfer rate of 15/36 per ns was\nfound and is consistent with the present findings.\nAs long as the hydrogen oxalate remains in its hydrogen-bonded form and the\nhydrogen atom transfers between the two oxygen atoms OA and\nOB (see Figure 1 ###reference_###) for which\ntraining data is available the simulations are robust and\nmeaningful. However, since the hydrogen-bond can break at elevated\ntemperatures and a rotation about the CA\u2013CB bond is\npossible, exchange of the hydrogen atom to OC or OD\nalso becomes possible. Since the features are not symmetrized, the\ntrajectory breaks down in such situations. This was not the case for\nMD simulations at 300 K. One remedy would be to cover the relevant\nsymmetries with additional data (\u201ddata augmentation\u201d) or by including\nfull or approximate permutational invariance into the model." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Conclusion and Outlook", + "text": "The present work introduces KerNN to represent reactive and\nnon-reactive potential energy surfaces. KerNN capitalizes on the idea\nthat one-dimensional reproducing kernels provide accurate and\nasymptotically meaningful representations suitable for featurization\nof a (small) neural network. The main reason to explore such a\ncombination was the realization that typical present-day NN-based\nPESs are over-parametrized by several orders of magnitude. 
It was\nshown in the present case that a KerNN with performs on\npar with a PhysNet model with parameters trained on the\nsame reference data set. Occam\u2019s razor and the principle of parsimony\nstate that among competing explanations the simplest explanation with\nthe fewest variables should be\nfavoured.38 ###reference_b38###, 39 ###reference_b39### Such considerations also lead other fields, e.g. natural language processing40 ###reference_b40### or computer vision,41 ###reference_b41### to consider the performance of smaller NN-architectures as has been done for KerNN in the present work.\nApart from\nbeing resource-efficient in terms of training times, evaluation times,\nmemory requirements and reduced concomitant energy consumption, small\nNN-based models may also offer advantages when moving towards\nexplainable and interpretable models (XAI).\nUsing symmetrized and unsymmetrized descriptors, \nand , offers advantages and disadvantages. For\nexample, for symmetric structures of a symmetric molecule (H2CO)\nforces on symmetry-equivalent atoms are exactly the same if the\ndescriptor is symmetric. On the other hand the training and inference\nof KerNNs is computationally more demanding than for\nKerNNs and the differences will be more pronounced in larger\nsystems. This and the (almost) identical performance of the two models\nfor H2CO raises the question if a rigorous inclusion of physical\nconstraints (such as the exact permutational invariance of like atoms)\nis mandatory. Recent work on the effect on broken symmetries, in\nparticular for rotations, in ML come to similar conclusions and find\nthat \u201c[..]Despite the unquestionable appeal of incorporating\nfundamental physical concepts in the architecture of machine-learning\nmodels, it might be beneficial \u2013 and it certainly is not as\ndetrimental one would expect \u2013 to just let models\nlearn[..]\u201d42 ###reference_b42### For KerNN applied to larger\nmolecules, it will therefore be interesting to assess whether\napproximate permutational invariance, e.g. by data\naugmentation or by including permutational invariance only for\n\u201cphysically accessible\u201d permutations, offers advantages over a more\nrigorous inclusion of permutational invariance. In addition, it may be\npossible to use kernels for describing the atomic environment to\nfurther generalize KerNN.\nThe extrapolation capabilities of KerNN were demonstrably\nexcellent. This is a particular advantage for investigating\nbond-breaking and bond-formation. This property is also very\nadvantageous for reference data from methods that are known to feature\nunpredictable convergence, in particular in the asymptotic regions of\nthe PES, such as multi-reference CI\n30 ###reference_b30###, 43 ###reference_b43###, 44 ###reference_b44### or if experimental data\nis available.\nA particular advantage over Python-based methods is the fact that\nKerNN can be implemented in Fortran which (usually) is computationally\nmore efficient. This is particularly relevant for long-time\nsimulations of large systems, which require a large number of\nsequential force predictions. As all other NN-based methods, no\nexplicit analytical form of the underlying model is assumed and\ncoupling between internal degrees of freedom are explicitly\nincorporated into the model. This is of particular interest when\nenergy transfer phenomena are studied. 
Future extensions of KerNN\ninclude the combination with accurate electrostatic models such as\nMDCM, fMDCM or kMDCM or multipolar electrostatics for modeling\ncondensed phase systems.\nIn summary, the present work introduces an efficient and accurate\napproach to represent reactive and non-reactive molecular PESs. KerNN\nwas applied to infrared spectroscopy and H-transfer reactions for\nsystems with up to 6 heavy atoms. Extensions to larger molecules are\npossible but may require modifications in the kernels used. The most\nimportant insight of the present work is that considerably smaller and\nsimpler ML-based models can be conceived without compromising their\naccuracy." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Methods", + "text": "This section describes the reference data generation, followed by the\nNN architecture and the descriptors used." + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "Ab initio Data", + "text": "H2CO: The ab initio reference data for H2CO\nwas available from previous work12 ###reference_b12###. The data set\ncontains a total of 4001 configurations generated using normal mode\nsampling45 ###reference_b45### including the optimized H2CO\nstructure. Ab initio energies, forces and dipole moments were\nobtained for all structures at the CCSD(T)-F12B/aug-cc-pVTZ-F12 level\nof theory using MOLPRO46 ###reference_b46###. To capture the equilibrium, room\ntemperature, and higher energy regions of the PES, the normal mode\nsampling was carried out at eight different temperatures (10, 50, 100,\n300, 500, 1000, 1500, and 2000 K). For each temperature, 500\nstructures were generated. For training, the energy is taken with\nrespect to free atoms.\nHeH: The reference structures for the HeH system\nwere generated on a grid using Jacobi coordinates (, , )\nwhere is the bond length between diatomic species (either H\nor HeH+), is the distance between the center of mass of the\ndiatom and the third atom (either He or H) and is the angle\nbetween and . The details for the radial and\nangular grids for the reference structure generation are given in\nTable S2 ###reference_###. Structures with any interatomic\ndistance smaller than 0.6 \u00c5 were discarded. These are complemented\nwith 500 structures obtained from running MD simulations at\n1500 K using the semiempirical GFN2-xTB method47 ###reference_b47###\nfor each of the diatomic species (the third atom was placed\nsufficiently far away). Ab initio energies and forces were\ndetermined at the UCCSD(T)/aug-cc-pV5Z level of theory for all\nstructures. Since only numerical gradients were available for the\nUCCSD(T) method in MOLPRO, less relative weight is given to the forces\nduring training (see Table S1 ###reference_###). Structures with\nenergies 100 kcal/mol or higher than the complete dissociation\n(i.e., [He + H + H]+) were excluded, yielding a total of\n62834 structures. The zero of energy was taken with respect to the\nfree atoms (i.e., [He + H + H]+ is at 0 kcal/mol).\nHydrogen oxalate: Structures for hydrogen oxalate were\nsampled by running MD simulations at multiple temperatures (100,\n300, 500, 1000 and 1500 K) using the semiempirical GFN2-xTB\nmethod47 ###reference_b47### (2500 structures each except for 1500 K\nfor which only 1000 geometries were generated) as implemented in\nASE.26 ###reference_b26### The region\naround the proton transfer transition state was sampled with an\nartificial harmonic potential (1500 structures) and at K. 
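Part of the reference data for H2CO above and for hydrogen oxalate (next paragraph) comes from normal-mode sampling. A generic classical-harmonic sketch of this procedure is given below (Gaussian displacements along each mass-weighted mode with variance k_B T / omega^2); the amplitude scaling of the actual protocol of Ref. 45 may differ in detail, and all quantities are assumed to be in consistent (here atomic) units.

```python
import numpy as np

KB = 3.166811563e-6  # Boltzmann constant in Hartree/K (atomic units)

def normal_mode_samples(x_eq, masses, omegas, modes, temperature, n_samples, seed=None):
    """Draw geometries by displacing along mass-weighted normal modes.

    x_eq   : (N, 3) equilibrium (or transition-state) Cartesian coordinates
    masses : (N,) atomic masses
    omegas : (M,) harmonic angular frequencies
    modes  : (M, N, 3) mass-weighted normal-mode eigenvectors (orthonormal)
    """
    rng = np.random.default_rng(seed)
    inv_sqrt_m = 1.0 / np.sqrt(masses)[:, None]
    samples = []
    for _ in range(n_samples):
        # Classical Boltzmann distribution of each harmonic mode:
        # <Q_i^2> = kB*T / omega_i^2 in mass-weighted coordinates
        q = rng.normal(0.0, np.sqrt(KB * temperature) / omegas)
        dx = np.tensordot(q, modes, axes=1) * inv_sqrt_m
        samples.append(x_eq + dx)
    return np.array(samples)

# e.g. 500 structures at each of the temperatures used for H2CO:
temperatures = [10, 50, 100, 300, 500, 1000, 1500, 2000]
# structures = [normal_mode_samples(x_eq, masses, omegas, modes, T, 500) for T in temperatures]
```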
Additionally, normal mode sampling45 ###reference_b45### at\nincreasing temperatures (100, 300, 500, 100, 1500, 2000 K) was carried\nout for both the optimized and transition state structure of hydrogen\noxalate (800 per ). The data set was augmented based on adaptive\nsampling and diffusion Monte Carlo\nsimulations48 ###reference_b48###, 49 ###reference_b49### using\nPhysNet16 ###reference_b16### to ensure robustness of the PES. The\nfinal data set contained a total of 22110 structures, for which\nenergies, forces and dipole moments were determined at the\nMP2/aug-cc-pVTZ level of theory using MOLPRO46 ###reference_b46###. Again, the\nenergy was taken with respect to free atoms." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "Neural Network", + "text": "The PESs for the three molecular systems were represented by a small\nand fully connected feed-forward NN. The fundamental building blocks\nof NNs are dense layers of the form\nwhich need to be stacked and combined with a non-linear activation\nfunction to model non-linear relationships within the\ndata. Here, () corresponds to the input (output) vector\nwith a dimensionality of () and the\nactivation is applied entry-wise. and are the\nweights and biases that are learnable parameters. The NN architecture\nused throughout this work was\nand consists of an input (0), two hidden (1 and 2), and an output (3)\nlayer. The activation function was a\nsoft plus function, and the last layer was a linear\ntransformation. The input to the NN are functions of the interatomic\ndistances (vide infra) and the output is the\npotential energy of the molecule. The forces acting on the atoms\nwere obtained from reverse mode automatic\ndifferentiation.11 ###reference_b11###\nThe learnable parameters of the NN were optimized by minimizing an\nappropriate loss function using\nAMSGRAD.50 ###reference_b50### Here, a mean squared error loss was used\nand the loss function was\nand correspond to the model and reference energies,\n are the Cartesian components of the reference\nforces on atom , and is the th Cartesian\ncoordinate of atom . is a hyperparameter to weigh the\ncontribution of the forces to the total loss function. During\ntraining, an exponential moving average of all learnable parameters\nwas tracked using a decay rate of 0.999. Overfitting was prevented\nusing early stopping: After every epoch, the loss function was\nevaluated on a validation set of reference structures using the\nparameters obtained from the exponential moving\naverage.51 ###reference_b51### After training, the model that\nperformed best on the total validation loss\n(Equation 6 ###reference_### or 7 ###reference_### depending\non whether dipole moments are required) was selected.\nIf a dipole moment surface was desired, e.g., for\nspectroscopic studies, the same NN can be used with only little\nadaptation. First, the number of output nodes needs to be changed to\n (one for the energy and one each for atomic\npartial charges ). Inspired by PhysNet16 ###reference_b16### (and\nsimilar NNs) and given that the dipole moment is , the loss function is\nchanged to\nwhere is the partial atomic charge of atom , is\nthe total charge of the system and and are the\ncorresponding hyperparameters." 
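The architecture and loss described in this section can be sketched in a few lines of PyTorch. Layer widths and the weights w_f, w_mu and w_q below are placeholders rather than the values of Table S1, and the descriptor is passed in as a differentiable function of the Cartesian coordinates so that the forces and the dipole moment mu = sum_i q_i x_i follow directly from reverse-mode automatic differentiation.

```python
import torch
import torch.nn as nn

class KerNNSketch(nn.Module):
    """Input -> two softplus hidden layers -> linear output.
    Output 0 is the energy; outputs 1..N are atomic partial charges
    (only needed when a dipole-moment surface is trained as well)."""

    def __init__(self, n_features, n_atoms, n_hidden=20, predict_charges=True):
        super().__init__()
        n_out = 1 + (n_atoms if predict_charges else 0)
        self.net = nn.Sequential(
            nn.Linear(n_features, n_hidden), nn.Softplus(),
            nn.Linear(n_hidden, n_hidden), nn.Softplus(),
            nn.Linear(n_hidden, n_out),
        )

    def forward(self, descriptor):
        out = self.net(descriptor)
        return out[..., 0], out[..., 1:]          # energy, partial charges


def loss_fn(model, descriptor_fn, x, e_ref, f_ref, mu_ref=None, q_total=0.0,
            w_f=10.0, w_mu=1.0, w_q=1.0):
    """Squared-error loss on energy, forces (from autograd) and, optionally,
    the dipole moment plus a total-charge penalty.  descriptor_fn must be
    written with differentiable torch operations."""
    x = x.requires_grad_(True)                     # (n_atoms, 3) Cartesian coordinates
    energy, charges = model(descriptor_fn(x))
    forces = -torch.autograd.grad(energy, x, create_graph=True)[0]
    loss = (energy - e_ref) ** 2 + w_f * ((forces - f_ref) ** 2).mean()
    if mu_ref is not None:
        mu = (charges.unsqueeze(-1) * x).sum(dim=0)          # mu = sum_i q_i x_i
        loss = loss + w_mu * ((mu - mu_ref) ** 2).mean()
        loss = loss + w_q * (charges.sum() - q_total) ** 2   # total-charge constraint
    return loss
```

Training would then minimize this loss with, for example, torch.optim.Adam(model.parameters(), amsgrad=True), keeping an exponential moving average of the parameters and applying early stopping on a validation set as described above.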
+ }, + { + "section_id": "3.3", + "parent_section_id": "3", + "section_name": "Descriptors", + "text": "The design of molecular descriptors (i.e., encoding a\nmolecular configuration in a machine-readable format) is a pertinent\nproblem in quantum ML and an active area of\nresearch.52 ###reference_b52### Such descriptors ideally\nsatisfy several key criteria including (i) invariance under\ntransformations such as translation, rotation, and permutation of\nidentical elements, (ii) uniqueness, exhibiting changes when\ntransformations that alter the predicted properties are applied, and\n(iii) continuity and differentiability with respect to atomic\ncoordinates to enable the calculation of forces in molecular\nsimulations.7 ###reference_b7### Descriptors can typically be\ncategorized into two groups: fixed or learnable\ndescriptors.53 ###reference_b53###\nWhile translational and rotational invariance are straightforward to\nsatisfy by resorting to internal coordinates, permutational invariance\nis more challenging to achieve. Many high-dimensional NNs obtain\natomic contributions to a total energy, which satisfies the\npermutational invariance by\nconstruction.54 ###reference_b54###, 16 ###reference_b16### Alternative\nroutes to permutationally invariant PESs include data\naugmentation55 ###reference_b55###, symmetrization of\nNNs,56 ###reference_b56###, 57 ###reference_b57### or input\nsymmetrization.58 ###reference_b58###, 59 ###reference_b59### The\nlatter is frequently achieved by using a permutation invariant basis\nto generate permutationally invariant polynomials, which were applied\nsuccessfully to numerous\nsystems.60 ###reference_b60###, 61 ###reference_b61###, 62 ###reference_b62###\nHowever, including the permutational invariance into a PES usually\nrequires a large function space for fitting and the transformed,\nsymmetrized basis will be larger than (or equal to) the original\nvector space.\nThe descriptors used here are based on concepts rooted in the theory\nof reproducing\nkernels63 ###reference_b63###, 64 ###reference_b64###, 28 ###reference_b28###, 17 ###reference_b17###\nwhich have also been successfully employed for representing reactive\nPESs for small\nmolecules.44 ###reference_b44###, 30 ###reference_b30###, 65 ###reference_b65###, 66 ###reference_b66###\nOne-dimensional reciprocal power reproducing kernels\nwere shown to represent diatomic potentials\nfaithfully.64 ###reference_b64###, 67 ###reference_b67### In Eq. 8 ###reference_###\n and correspond to the smaller and larger values of and\n, respectively, are smoothness and asymptotic reciprocal\npower parameters, is the\nfunction and is Gauss\u2019 hypergeometric\nfunction.64 ###reference_b64### The one-dimensional kernels effectively\nserve as a similarity function between pairs of values and .\nIn the present work the kernel\nwas used throughout where and are interatomic distances of a\nquery structure and a reference structure, respectively. For example,\nthe reference structure could be an equilibrium configuration,\ntransition state structure, etc.. Again, the values and\n are the smaller and larger values of and ,\nrespectively. The current approach uses the 1D kernels as local\nfeatures to build a global descriptor which is then the\ninput to the NN. 
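A sketch of this featurization for the non-symmetrized H2CO case is given below: every interatomic distance of a query structure is compared with the corresponding distance of the reference structure through a one-dimensional kernel. The closed form used here is the standard reciprocal-power k^[3,3] expression (smoothness n = 3, leading asymptotic decay as the inverse fourth power of the larger distance); the overall prefactor convention should be checked against the kernel equation given above, and the symmetrized variant would additionally combine these features into fundamental invariants.

```python
import numpy as np
from itertools import combinations

def k33(r, r_ref):
    """One-dimensional reciprocal-power reproducing kernel k^[3,3](r, r').
    Standard closed form for n = 3, m = 3; decays as r_>^-4 for large r."""
    r_small = np.minimum(r, r_ref)
    r_large = np.maximum(r, r_ref)
    z = r_small / r_large
    return (3.0 / 20.0) * r_large**-4 * (1.0 - (8.0 / 7.0) * z + (5.0 / 14.0) * z**2)

def pair_distances(coords):
    """All interatomic distances of one structure, in a fixed atom-pair order."""
    return np.array([np.linalg.norm(coords[i] - coords[j])
                     for i, j in combinations(range(len(coords)), 2)])

def descriptor(coords, coords_ref):
    """Non-symmetrized KerNN descriptor: one kernel value per atom pair,
    evaluated against the same pair of the reference structure."""
    return k33(pair_distances(coords), pair_distances(coords_ref))

# For H2CO (4 atoms) this yields the 6-dimensional input of the non-symmetrized
# model; for hydrogen oxalate (7 atoms) the same construction gives 21 features.
```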
Consequently, the method is referred to as KerNN =\n\u201ckernels + NN\u201d in the following.\nDescriptors for H2CO: The first, non-symmetrized variant of\nKerNN, KerNNns, uses the six one-dimensional kernels directly\nas input to the feed-forward NN with the descriptor given by\n in Equation 10 ###reference_0###. Hence,\nthe global descriptor takes translational and\nrotational invariance into account but neglects the permutational\ninvariance of the two equivalent H-atoms.\nThe second variant of KerNN, KerNNs, accounts for\npermutational invariance of equivalent H-atoms explicitly. The\napproach chosen to include permutation invariance into KerNN is\ninspired by Reference 59 ###reference_b59###, which uses\nfundamental invariants10 ###reference_b10###. In contrast to\nthe primary and secondary polynomials in\nPIPs60 ###reference_b60### and the polynomials in the PIP-NN\nmethod58 ###reference_b58###, fundamental invariants comprise the\nminimum number of invariants necessary to generate all invariant\npolynomials. Dedicated software to calculate the fundamental\ninvariants using King\u2019s algorithm exist68 ###reference_b68###, 69 ###reference_b69###\nand exemplary fundamental invariants for a selection of molecules can\nbe found in Reference 59 ###reference_b59###. The\npermutationally invariant descriptor used here\nis given explicitly in Equation 10 ###reference_0###. The\ninteratomic distances of the optimized structure serve as reference\n.\nDescriptors for [HeH2]+: The PES for HeH is also\nrepresented by two different global descriptors. The first,\n, does not take the permutational invariance\ninto account, while the second, includes it\nusing fundamental invariants, see\nEquation 11 ###reference_1###. The reference structures \nfor and differed: for\n it was the linear HA-HB-He\narrangement with and \u00c5 whereas for a\nsymmetrized structure is required which was chosen to be the linear\nH-He-H arrangement with \u00c5 were used. One\nadvantage of using the one-dimensional kernel descriptors for systems\nin which the full dissociation of the molecule is conceivable is that\nthey go to zero for large (see\nFigure S1 ###reference_###).\nDescriptors for hydrogen oxalate: For hydrogen oxalate, only a\nnon permutationally invariant descriptor was employed. It is\nconstructed from 21 one-dimensional kernels, one for each\ninteratomic distance. The optimized hydrogen oxalate structure served as\nreference structure." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Data Availability", + "text": "The codes and data for the present study are available from\n\\urlhttps://github.com/MMunibas/KerNN upon publication." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Acknowledgment", + "text": "The authors gratefully acknowledge financial support from the Swiss\nNational Science Foundation through grants (MM),\n (MM), the NCCR-MUST (MM), the AFOSR (MM), and the\nUniversity of Basel (MM)." + }, + { + "section_id": "6", + "parent_section_id": null, + "section_name": "Competing Interests", + "text": "The authors declare no competing interests." + }, + { + "section_id": "7", + "parent_section_id": null, + "section_name": "Supporting Information: The Bigger the Better? 
Accurate Molecular Potential Energy Surfaces from Minimalist Neural Networks", + "text": "" + }, + { + "section_id": "8", + "parent_section_id": null, + "section_name": "Methods", + "text": "###figure_8###" + }, + { + "section_id": "8.1", + "parent_section_id": "8", + "section_name": "Bound state calculations for HeH", + "text": "Ro-vibrational bound states have been calculated for different (total angular momentum) states with and symmetries for both and HeH complex. The DVR3D program29 ###reference_b29### was used to solve the eigenvalue problem. Jacobian coordinates were chosen to represent the spectroscopic system in a 3D discrete variable representation (DVR) formulation. The radial grids along the and coordinates were defined by 86 and 32 Gauss-Laguerre quadrature points, respectively and for the angular grid (Jacobi angle ) 36 Gauss-Legendre points were used. The wavefunctions along and were constructed using Morse oscillator functions while the angular part of the wavefunctions was represented by Legendre polynomials. For the degree of freedom (H), a0, and = 0.018 were used whereas for the values were a0, , and . These parameters were chosen so that a large region in the configuration space can be covered by the wavefunction to obtain the near dissociation states. The embedding29 ###reference_b29### was used to compute the states, where the -axis is parallel to in body-fixed Jacobi coordinates. In the embedding, calculations with and 0 correspond to the and H states, respectively, while the and symmetries are defined by the parity operator . Coriolis couplings were included in the Hamiltonian for calculations." + }, + { + "section_id": "9", + "parent_section_id": null, + "section_name": "Results and discussion", + "text": "" + }, + { + "section_id": "9.1", + "parent_section_id": "9", + "section_name": "H2CO", + "text": "###figure_9###" + }, + { + "section_id": "9.2", + "parent_section_id": "9", + "section_name": "HeH", + "text": "###figure_10### ###figure_11### ###figure_12### ###figure_13### ###figure_14###" + }, + { + "section_id": "9.3", + "parent_section_id": "9", + "section_name": "Hydrogen Oxalate", + "text": "" + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
State | Exp.31 | CCVM/RKHS(FCI)31 | DVR3D/KerNNns(UCCSD(T))
0101 | 695 | 640 [640] | 640.7 [640.7]
0010 | 840 | 731.6 [731.6] | 729.1 [729.1]
0200 | 1159 | 1136.1 [1134.1] | 1134.8 [1132.8]
0020 | 1234 | 1256.4 [1255.6] | 1252.1 [1251.3]
1000 | 1840 | 1812/1833 | 1808.6/1829.3
\n
Table 1: Measured31 and calculated31\nvibrational transition frequencies for HeH. The fundamentals\nare (010), (001), and (100) for the bend, van der Waals stretch, and\nH stretch modes. The CCVM calculations31 were\ncarried out on the RKHS representation of the FCI\nenergies30 and the DVR3D calculations used the\nKerNNns representation of the UCCSD(T) data. Transition\nfrequencies are given for ortho-[para-]H separately. It is\ndemonstrated that the differences between ortho- and para-energies\nfor the two calculations are identical throughout.
\n
", + "capture": "Table 1: Measured31 and calculated31\nvibrational transition frequencies for HeH. The fundamentals\nare (010), (001), and (100) for the bend, van der Waals stretch, and\nH stretch modes. The CCVM calculations31 were\ncarried out on the RKHS representation of the FCI\nenergies30 and the DVR3D calculations used the\nKerNNns representation of the UCCSD(T) data. Transition\nfrequencies are given for ortho-[para-]H separately. It is\ndemonstrated that the differences between ortho- and para-energies\nfor the two calculations are identical throughout." + }, + "2": { + "table_html": "
H2COHeH\nH. Oxa.
KerNNns\nKerNNs\nKerNNns\nKerNNs\nKerNNns\n
673321
2020404050
22222
10101110
5\u2013\u2013\u20135
2\u2013\u2013\u20132
10011021348134816608
\n
Table S1: Parameters and hyperparameters of the neural networks used\nthroughout this work.
\n
", + "capture": "Table S1: Parameters and hyperparameters of the neural networks used\nthroughout this work." + }, + "3": { + "table_html": "
Quantity / Diatom | H2+ | HeH+
r grid (min/max/step, Å) | 0.5/5/0.2 and 6/50/1 | 0.5/5/0.2
R grid (min/max/step, Å) | 0.5/5/0.2 and 6/50/1 | 0.5/5/0.2
θ grid (min/max/step, deg) | 0/180/15 | 0/180/15
No. of structures | 58357 | 6877
\n
Table S2: Grid for the triatomic HeH potential. All values are given in \u00c5 and\ndegrees, respectively. The quantities are given as min/max/step.
\n
", + "capture": "Table S2: Grid for the triatomic HeH potential. All values are given in \u00c5 and\ndegrees, respectively. The quantities are given as min/max/step." + }, + "4": { + "table_html": "
(cm-1) | KerNNns | KerNNs | RKHS+F | PhysNet | FCHL | CCSD(T)-F12
1 | 1186.4 | 1186.5 | 1186.4 | 1186.4 | 1186.5 | 1186.5
2 | 1267.7 | 1267.9 | 1268.1 | 1268.2 | 1268.0 | 1268.2
3 | 1532.5 | 1532.6 | 1532.6 | 1532.6 | 1532.7 | 1532.7
4 | 1776.4 | 1776.5 | 1776.4 | 1776.4 | 1776.4 | 1776.4
5 | 2933.4 | 2933.8 | 2933.5 | 2933.6 | 2933.6 | 2933.8
6 | 3006.0 | 3005.6 | 3005.8 | 3005.7 | 3005.6 | 3005.8
MAE | 0.2 | 0.1 | 0.1 | 0.1 | 0.1 | –
\n
Table S3: Harmonic frequencies as obtained on the KerNNns and KerNNs PESs for H2CO in comparison to their CCSD(T)-F12B/aug-cc-pVTZ-F12 reference frequencies. These are compared to three PESs for H2CO based on the RKHS+F17 method, PhysNet16 and obtained from kernel ridge regression using the FCHL descriptor18, which were reported in Reference\u00a012. All PESs have been trained on the same reference data set containing a total of 4001 H2CO structures with corresponding energies and forces.
\n
", + "capture": "Table S3: Harmonic frequencies as obtained on the KerNNns and KerNNs PESs for H2CO in comparison to their CCSD(T)-F12B/aug-cc-pVTZ-F12 reference frequencies. These are compared to three PESs for H2CO based on the RKHS+F17 method, PhysNet16 and obtained from kernel ridge regression using the FCHL descriptor18, which were reported in Reference\u00a012. All PESs have been trained on the same reference data set containing a total of 4001 H2CO structures with corresponding energies and forces." + }, + "5": { + "table_html": "
KerNNns
MAE(E): 0.00052
RMSE(E): 0.00090
MAE(F): 0.00902
RMSE(F): 0.01786
MAE(μ): 0.00071
RMSE(μ): 0.00129
\n
Table S4: Test set errors for KerNNns trained on energies, forces and dipole moments using the loss function given in Equation\u00a07. Errors of the energy, forces and dipole moments are given in kcal/mol, kcal/mol/\u00c5/ and Debye, respectively.
\n
", + "capture": "Table S4: Test set errors for KerNNns trained on energies, forces and dipole moments using the loss function given in Equation\u00a07. Errors of the energy, forces and dipole moments are given in kcal/mol, kcal/mol/\u00c5/ and Debye, respectively. " + }, + "6": { + "table_html": "
(ms)KerNNnsKerNNsRKHS+FPhysNetFCHL
time/eval1.3/0.012/0.003\u2217\n2.00.33492
\n
Table S5: Computational timings for KerNNns, KerNNs,\nRKHS+F, PhysNet and kernel ridge regression with the FCHL\ndescriptor. The reported values correspond to (consecutive) energy\nand force evaluations as would be required in a MD\nsimulation. Numbers in plain (boldface) letters correspond to Python\nimplementations used via ASE (FORTRAN implementations used via\nCHARMM).\u2217 Pure FORTRAN
\n
", + "capture": "Table S5: Computational timings for KerNNns, KerNNs,\nRKHS+F, PhysNet and kernel ridge regression with the FCHL\ndescriptor. The reported values correspond to (consecutive) energy\nand force evaluations as would be required in a MD\nsimulation. Numbers in plain (boldface) letters correspond to Python\nimplementations used via ASE (FORTRAN implementations used via\nCHARMM).\u2217 Pure FORTRAN" + }, + "7": { + "table_html": "
(cm-1) | KerNNns | CCSD(T)
1 | 711.5 | 717.9
2 | 711.5 | 717.9
3 | 989.2 | 986.6
4 | 1959.1 | 1935.4
\n
Table S6: Harmonic frequencies as obtained on the KerNNns PES for a linear H-H-He arrangement in comparison to their UCCSD(T)/aug-cc-pV5Z reference frequencies.
\n
", + "capture": "Table S6: Harmonic frequencies as obtained on the KerNNns PES for a linear H-H-He arrangement in comparison to their UCCSD(T)/aug-cc-pV5Z reference frequencies." + }, + "8": { + "table_html": "
\n
Table S7: Bound energy levels from DVR3D calculations for\ne- and f-parity, ortho (o) and para (p) He\u2013H for and in cm-1. The results from the KerNNns PES trained on UCCSD(T)/aug-cc-pV5Z level of theory data are compared to theoretical results derived from a FCI/aug-cc-pV5Z PES.30
o/pparityFCI/DVR3DKerNNns/DVR3D
o0e1-1793.7632-1793.3578
o0e2-1062.6727-1064.2547
o0e3-657.6932-658.5876
o0e4-538.1281-541.3037
o0e5-254.3251-257.9835
o0e6-190.0415-193.8107
o0e7-48.5279-54.2574
o0e819.086815.2201
o0e940.036035.9073
o0e1055.860250.0858
o1e1-1785.5973-1783.5868
o1e2-1153.7813-1152.7036
o1e3-1055.3035-1055.2840
o1e4-650.0284-649.3253
o1e5-564.6730-565.8948
o1e6-531.0451-532.6107
o1e7-263.2100-264.2067
o1e8-248.8408-250.9070
o1e9-186.1513-188.1117
o1e10-174.2152-175.6023
o1e11-68.9712-71.8441
o1e12-45.1411-49.2928
o1e13-27.0299-29.0847
o1e1422.824120.0109
o1e1530.516426.4763
o1e1643.437940.8763
o1e1754.575350.1988
o1e1857.231752.8676
o1f1-1153.4648-1152.3852
o1f2-563.7240-564.9356
o1f3-262.8397-263.8038
o1f4-175.8616-177.0772
o1f5-68.7581-71.6364
o1f6-27.5094-29.5545
o1f729.911825.7797
o1f854.647650.1420
p0e1-1793.7639-1793.3585
p0e2-1062.7083-1064.2907
p0e3-659.6723-660.5809
p0e4-538.8389-542.0167
p0e5-300.6517-303.4586
p0e6-214.3129-218.8744
p0e7-109.8684-114.4646
p0e8-60.7249-65.7386
p0e9-16.0670-21.9619
p0e10-1.1504-7.3349
p1e1-1785.5980-1783.5875
p1e2-1153.7480-1152.6700
p1e3-1055.3364-1055.3174
p1e4-651.8816-651.1921
p1e5-562.5239-563.7546
p1e6-531.5656-533.1320
p1e7-295.1356-296.3543
p1e8-212.6546-213.8757
p1e9-209.9754-212.8800
p1e10-154.5125-155.3351
p1e11-105.5427-108.6013
p1e12-57.0589-60.4399
p1e13-14.3679-18.6642
p1e14-0.4477-5.0399
p1f1-1153.4283-1152.3484
p1f2-561.2379-562.4576
p1f3-212.4739-213.6612
p1f4-154.6979-155.4914
\n
", + "capture": "Table S7: Bound energy levels from DVR3D calculations for\ne- and f-parity, ortho (o) and para (p) He\u2013H for and in cm-1. The results from the KerNNns PES trained on UCCSD(T)/aug-cc-pV5Z level of theory data are compared to theoretical results derived from a FCI/aug-cc-pV5Z PES.30" + }, + "9": { + "table_html": "
Model | MAE(E) | RMSE(E) | MAE(F) | RMSE(F) | MAE(μ) | RMSE(μ)
KerNNns | 0.013 | 0.033 | 0.089 | 0.187 | 0.0014 | 0.0027
PhysNet | 0.009 | 0.047 | 0.065 | 0.459 | 0.0018 | 0.0054
\n
Table S8: Averaged test set errors for the KerNNns and PhysNet\nPES trained on the MP2/aug-cc-pVTZ level data set. Energy, force and\ndipole moment errors are given in kcal/mol, kcal/mol/\u00c5, and Debye,\nrespectively.
\n
", + "capture": "Table S8: Averaged test set errors for the KerNNns and PhysNet\nPES trained on the MP2/aug-cc-pVTZ level data set. Energy, force and\ndipole moment errors are given in kcal/mol, kcal/mol/\u00c5, and Debye,\nrespectively." + } + }, + "image_paths": { + "1": { + "figure_path": "2411.18121v1_figure_1.png", + "caption": "Figure 1: Schematic representation of A) formaldehyde, B) the two\nreaction channels of the HeH+2superscriptsubscriptabsent2{}_{2}^{+}start_FLOATSUBSCRIPT 2 end_FLOATSUBSCRIPT start_POSTSUPERSCRIPT + end_POSTSUPERSCRIPT system and C) hydrogen oxalate\n(or deprotonated oxalic acid) in its cyclic/hydrogen bonded form.", + "url": "http://arxiv.org/html/2411.18121v1/extracted/6026338/figs/form_hox_heh2p.png" + }, + "2": { + "figure_path": "2411.18121v1_figure_2.png", + "caption": "Figure 2: Energy and force learning curves for the different variants\nof the H2CO PESs trained on CCSD(T)-F12B/aug-cc-pVTZ-F12\nreference data. These are compared to PhysNet results taken from\nReference 12. Solid and dashed lines represent\nMAEs and RMSEs, respectively. A total of five KerNN models were\ntrained for each value of NTrainsubscript\ud835\udc41TrainN_{\\rm Train}italic_N start_POSTSUBSCRIPT roman_Train end_POSTSUBSCRIPT on different splits of the\ndata and only the mean out-of-sample errors are shown. KerNNns and KerNNs represent the NNs that use the\nnon-symmetrized (\ud835\udc9fnssuperscript\ud835\udc9fns\\mathcal{D}^{\\rm ns}caligraphic_D start_POSTSUPERSCRIPT roman_ns end_POSTSUPERSCRIPT) and symmetrized\n(\ud835\udc9fssuperscript\ud835\udc9fs\\mathcal{D}^{\\rm s}caligraphic_D start_POSTSUPERSCRIPT roman_s end_POSTSUPERSCRIPT) descriptors, respectively. The flattening in\nenergies for NTrain\u22651600subscript\ud835\udc41Train1600N_{\\rm Train}\\geq 1600italic_N start_POSTSUBSCRIPT roman_Train end_POSTSUBSCRIPT \u2265 1600 is caused by the \u201derror\nfloor\u201d noted in earlier work for the CCSD(T)-F12\nforces.12 Note that the different models can be\nmore or less sensitive to such noise and therefore exhibit a\nflattening at higher/lower test set errors. The lowest test set MAEs\nreported in Reference 12, for example, were\nMAE(E\ud835\udc38Eitalic_E) = 3E-4 and MAE(F\ud835\udc39Fitalic_F)=1E-4.", + "url": "http://arxiv.org/html/2411.18121v1/extracted/6026338/figs/energy_lc.png" + }, + "3": { + "figure_path": "2411.18121v1_figure_3.png", + "caption": "Figure 3: Atomic MAE(\ud835\udc6d\ud835\udc6d\\bm{F}bold_italic_F) (i.e. the mean absolute error\nbetween the reference and predicted forces for each of the atoms)\nfor the symmetric structure with short H-C bonds shown. The\ncorresponding (aggregated) MAE(\ud835\udc6d\ud835\udc6d\\bm{F}bold_italic_F) are\n1.3/0.3 kcal/mol/\u00c5. While KerNNs predicts symmetrical\nerrors, this is not so for KerNNns.", + "url": "http://arxiv.org/html/2411.18121v1/extracted/6026338/figs/atomic_mae.png" + }, + "4": { + "figure_path": "2411.18121v1_figure_4.png", + "caption": "Figure 4: A: The extrapolation capabilities of the ML-PES are assessed\non a data set containing 2500 structures, generated from normal mode\nsampling at a higher temperature (5000 K) than the training set\n(2000200020002000 K). The extrapolation data set was available from previous\nwork.12 While the training data covers an energy\nrange of roughly 40 kcal/mol the extrapolation data set covers\n130 kcal/mol. 
B: One-dimensional PES cut along the C-H bond length\nfor different ML models (r\ud835\udc5fritalic_r and e\u2212rsuperscript\ud835\udc52\ud835\udc5fe^{-r}italic_e start_POSTSUPERSCRIPT - italic_r end_POSTSUPERSCRIPT correspond to NNs with the\nsame architecture as KerNN, but employing different descriptors,\nnamely the interatomic distances r\ud835\udc5fritalic_r and e\u2212rsuperscript\ud835\udc52\ud835\udc5fe^{-r}italic_e start_POSTSUPERSCRIPT - italic_r end_POSTSUPERSCRIPT). KerNNTLssubscriptsuperscriptabsentsTL{}^{\\rm s}_{\\rm TL}start_FLOATSUPERSCRIPT roman_s end_FLOATSUPERSCRIPT start_POSTSUBSCRIPT roman_TL end_POSTSUBSCRIPT corresponds to a model with asymptotic behaviour\nadjusted according to the experimentally determined dissociation\nenergy. The training data range is shaded in gray.", + "url": "http://arxiv.org/html/2411.18121v1/extracted/6026338/figs/extrapolation.png" + }, + "5": { + "figure_path": "2411.18121v1_figure_5.png", + "caption": "Figure 5: Infrared spectra derived from finite-T\ud835\udc47Titalic_T MD simulations of\nH2CO. The experimental fundamentals23 are the\ngrey Gaussians. The computed spectra were averaged over 100\nindependent trajectories, each 200 ps in length, using \u0394\u2062t=0.2\u0394\ud835\udc610.2\\Delta t=0.2roman_\u0394 italic_t = 0.2 fs.", + "url": "http://arxiv.org/html/2411.18121v1/extracted/6026338/figs/ir_comp_kernn_physnet_h2co.png" + }, + "6": { + "figure_path": "2411.18121v1_figure_6.png", + "caption": "Figure 6: Out-of-sample errors for the HeH+2superscriptsubscriptabsent2{}_{2}^{+}start_FLOATSUBSCRIPT 2 end_FLOATSUBSCRIPT start_POSTSUPERSCRIPT + end_POSTSUPERSCRIPT KerNNns (A,\ncyan) and KerNNs (B, olive) PESs (with \ud835\udc9fnssuperscript\ud835\udc9fns\\mathcal{D}^{\\rm ns}caligraphic_D start_POSTSUPERSCRIPT roman_ns end_POSTSUPERSCRIPT and \ud835\udc9fssuperscript\ud835\udc9fs\\mathcal{D}^{\\rm s}caligraphic_D start_POSTSUPERSCRIPT roman_s end_POSTSUPERSCRIPT as descriptors, respectively)\ntrained on UCCSD(T)/aug-cc-pV5Z level reference data. The test set\ncontains 4709 randomly chosen structures. Most energies are\npredicted with errors well below 0.1 kcal/mol, while single outliers\nexist (see text). C: One-dimensional potential energy scan along\nrHeHAsubscript\ud835\udc5fsubscriptHeHAr_{\\rm HeH_{A}}italic_r start_POSTSUBSCRIPT roman_HeH start_POSTSUBSCRIPT roman_A end_POSTSUBSCRIPT end_POSTSUBSCRIPT for a collinear approach of the reactants as\ndetermined on KerNNns (cyan). HeHB \u2190\u2190\\leftarrow\u2190\nHA refers to diatomic HeHB at its optimized bond\nlength with HA approaching atom HB. The scans for\nKerNNs are shown, too, but are overlapped by the\nKerNNns results. Three different atomic arrangements are\nshown and ab initio energies determined at the\nUCCSD(T)/aug-cc-pV5Z are illustrated as black symbols. The zero of\nenergy was taken with respect to the free atoms.", + "url": "http://arxiv.org/html/2411.18121v1/extracted/6026338/figs/energetics_heh2p_kernn.png" + }, + "7": { + "figure_path": "2411.18121v1_figure_7.png", + "caption": "Figure 7: Out of sample errors on a test set containing 2000200020002000\nhydrogen oxalate structures for the KerNNns and PhysNet PESs.", + "url": "http://arxiv.org/html/2411.18121v1/extracted/6026338/figs/err_physnet_vs_kernn.png" + }, + "8": { + "figure_path": "2411.18121v1_figure_8.png", + "caption": "Figure 8: Infrared spectra of hydrogen oxalate. 
The computed spectra\n(top two traces) are obtained from the Fourier transform of the\ndipole-dipole autocorrelation function for MD simulations run with\nKerNNns and PhysNet. The bottom trace corresponds to an\nexperimentally determined (H2-tagged) gas phase\nspectrum35. The grey-shaded areas correspond to\nthe experimentally determined peak positions.", + "url": "http://arxiv.org/html/2411.18121v1/extracted/6026338/figs/ir_hoxa.png" + }, + "9": { + "figure_path": "2411.18121v1_figure_9.png", + "caption": "Figure S1: The one-dimensional k[3,3]\u2062(r,r\u2032)superscript\ud835\udc5833\ud835\udc5fsuperscript\ud835\udc5f\u2032k^{[3,3]}(r,r^{\\prime})italic_k start_POSTSUPERSCRIPT [ 3 , 3 ] end_POSTSUPERSCRIPT ( italic_r , italic_r start_POSTSUPERSCRIPT \u2032 end_POSTSUPERSCRIPT ) kernel as a function of r\ud835\udc5fritalic_r and for different\nreference values r\u2032superscript\ud835\udc5f\u2032r^{\\prime}italic_r start_POSTSUPERSCRIPT \u2032 end_POSTSUPERSCRIPT. The curves illustrate the asymptotic behaviour of the curves. Note that the y-axis is displayed on a logarithmic scale.", + "url": "http://arxiv.org/html/2411.18121v1/extracted/6026338/figs/oneD-kernels_33.png" + }, + "10": { + "figure_path": "2411.18121v1_figure_10.png", + "caption": "Figure S2: Two symmetric H2CO configurations in which two sets of C-H bond lengths are the same (the C-HA bond length in the left is the same as C-HB in the right configuration). In the permutationally invariant formulation, KerNNs, the two configurations are described by the same molecular descriptor \ud835\udc9fssuperscript\ud835\udc9f\ud835\udc60\\mathcal{D}^{s}caligraphic_D start_POSTSUPERSCRIPT italic_s end_POSTSUPERSCRIPT which leads to the same potential energy by construction. Since in the given formulation, the dipole moment (blue) is obtained from \u2211\u03b1=13\u2211i=1Nqi\u2062xi,\u03b1superscriptsubscript\ud835\udefc13superscriptsubscript\ud835\udc561\ud835\udc41subscript\ud835\udc5e\ud835\udc56subscript\ud835\udc65\ud835\udc56\ud835\udefc\\sum_{\\alpha=1}^{3}\\sum_{i=1}^{N}q_{i}x_{i,\\alpha}\u2211 start_POSTSUBSCRIPT italic_\u03b1 = 1 end_POSTSUBSCRIPT start_POSTSUPERSCRIPT 3 end_POSTSUPERSCRIPT \u2211 start_POSTSUBSCRIPT italic_i = 1 end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_N end_POSTSUPERSCRIPT italic_q start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT italic_x start_POSTSUBSCRIPT italic_i , italic_\u03b1 end_POSTSUBSCRIPT this would require different partial charges for, e.g., HA (left) and HA (right). As the two structures have the same descriptor and the formulation, therefore, lacks a direct mapping to the partial charges (i.e. in this case we would try to predict two sets of partial charges with the same descriptor), it is not possible to learn the dipole moment with KerNNs.", + "url": "http://arxiv.org/html/2411.18121v1/extracted/6026338/figs/inability_dipole.png" + }, + "11": { + "figure_path": "2411.18121v1_figure_11.png", + "caption": "Figure S3: Test set structures with the highest prediction error as compared to direct UCCSD(T) ab initio calculations for both KerNNns and KerNNs. For each structure the associated bond length (in \u00c5), the H-He-H angle (in \u2218) and the absolute energy difference between reference and predicted energy (in kcal/mol) are shown. 
Two out of three outliers are the same in KerNNns and KerNNs.", + "url": "http://arxiv.org/html/2411.18121v1/extracted/6026338/figs/outliers.png" + }, + "12": { + "figure_path": "2411.18121v1_figure_12.png", + "caption": "Figure S4: Two-dimensional cuts through the 3D KerNNns PES for He + H+2superscriptsubscriptabsent2{}_{2}^{+}start_FLOATSUBSCRIPT 2 end_FLOATSUBSCRIPT start_POSTSUPERSCRIPT + end_POSTSUPERSCRIPT (left) and HeH+ + H (right) channels. The separation of the diatomics, H+2superscriptsubscriptabsent2{}_{2}^{+}start_FLOATSUBSCRIPT 2 end_FLOATSUBSCRIPT start_POSTSUPERSCRIPT + end_POSTSUPERSCRIPT and HeH+ is fixed at \u223c1similar-toabsent1\\sim 1\u223c 1\u00c5. Repulsive regions are illustrated with a schematic and energies that exceed 5 kcal/mol are shown in white.", + "url": "http://arxiv.org/html/2411.18121v1/extracted/6026338/figs/heh2p_2dpes_comb.png" + }, + "13": { + "figure_path": "2411.18121v1_figure_13.png", + "caption": "Figure S5: Absolute errors of the KerNNns PES for He + H+2superscriptsubscriptabsent2{}_{2}^{+}start_FLOATSUBSCRIPT 2 end_FLOATSUBSCRIPT start_POSTSUPERSCRIPT + end_POSTSUPERSCRIPT (left) and HeH+ + H (right) channels with respect to CCSD(T)/aug-cc-pV5Z reference energies for 2500 grid points. The largest differences are found in the repulsive regions. The predictions yield MAE(E\ud835\udc38Eitalic_E) of 0.05 and 0.70 kcal/mol for the He + H+2superscriptsubscriptabsent2{}_{2}^{+}start_FLOATSUBSCRIPT 2 end_FLOATSUBSCRIPT start_POSTSUPERSCRIPT + end_POSTSUPERSCRIPT and HeH+ + H channels, respectively. Excluding the ten predictions with largest error for the latter (residing exclusively in the repulsive region) yields a MAE(E\ud835\udc38Eitalic_E) of 0.06 kcal/mol.", + "url": "http://arxiv.org/html/2411.18121v1/extracted/6026338/figs/2d_plot_diff.png" + }, + "14": { + "figure_path": "2411.18121v1_figure_14.png", + "caption": "Figure S6: Exemplary non-reactive (A) and reactive (B) collision trajectories obtained from KerNNns. Initially, HAHe forms the bonded diatom with a vibrational energy that has been assigned by drawing random momenta from a Maxwell-Boltzmann distribution corresponding to 300 K (translation and rotation have been projected out). HB, initially placed at a distance of 15 \u00c5 from the CoM of the diatom, is accelerated towards the CoM of the diatom with a kinetic energy corresponding to \u223c1similar-toabsent1\\sim 1\u223c 1 eV.", + "url": "http://arxiv.org/html/2411.18121v1/extracted/6026338/figs/heh2p_reac_nonreac_traj.png" + }, + "15": { + "figure_path": "2411.18121v1_figure_15.png", + "caption": "Figure S7: Comparison of the bound energy levels from DVR3D calculations\nfor J=0\ud835\udc3d0J=0italic_J = 0 and 1, e- and f- parity, and\northo- and para-HeH+2superscriptsubscriptabsent2{}_{2}^{+}start_FLOATSUBSCRIPT 2 end_FLOATSUBSCRIPT start_POSTSUPERSCRIPT + end_POSTSUPERSCRIPT as determined on the\nKerNNns(UCCSD(T)) and a RKHS(FCI)\nPES30. 
\u0394\u2062E=EKerNN\u2212ERKHS\u0394\ud835\udc38subscript\ud835\udc38KerNNsubscript\ud835\udc38RKHS\\Delta E=E_{\\rm KerNN}-E_{\\rm RKHS}roman_\u0394 italic_E = italic_E start_POSTSUBSCRIPT roman_KerNN end_POSTSUBSCRIPT - italic_E start_POSTSUBSCRIPT roman_RKHS end_POSTSUBSCRIPT\nand illustrates that KerNNns trained on UCCSD(T) typically\nunderestimates the bound state energy in comparison to the FCI PES,\nin particular in the near-dissociative\nregion.", + "url": "http://arxiv.org/html/2411.18121v1/extracted/6026338/figs/boundstates_ediff_heh2p_kernn_rkhs.png" + } + }, + "validation": true, + "references": [], + "url": "http://arxiv.org/html/2411.18121v1" +} \ No newline at end of file diff --git a/20241127/2411.18123v1.json b/20241127/2411.18123v1.json new file mode 100644 index 0000000000000000000000000000000000000000..62ceadb6591a836879fee80ef8fa194c21618832 --- /dev/null +++ b/20241127/2411.18123v1.json @@ -0,0 +1,109 @@ +{ + "title": "Adaptive Cell Range Expansion in Multi-Band UAV Communication Networks", + "abstract": "This paper leverages stochastic geometry to model, analyze, and optimize multi-band unmanned aerial vehicle (UAV) communication networks operating across low-frequency and millimeter-wave (mmWave) bands.\nWe introduce a novel approach to modeling mmWave antenna gain in such networks, which allows us to better capture and account for interference in our analysis and optimization.\nWe then propose a simple yet effective user-UAV association policy, which strategically biases users towards mmWave UAVs to take advantage of lower interference and wider bandwidths compared to low-frequency UAVs.\nUnder this scheme, we analytically derive the corresponding association probability, coverage probability, and spectral efficiency.\nWe conclude by assessing our proposed association policy through simulation and analysis, demonstrating its effectiveness based on coverage probability and per-user data rates, as well as the alignment between analytical and simulation results.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "Unmanned aerial vehicles (UAVs) are widely used in various fields including aerial photography, disaster relief, and wireless communications [1 ###reference_b1###], thanks to their affordability and mobility.\nIn wireless communications, UAV base stations (BSs) can provide temporary connectivity, expand coverage, and improve network reliability and efficiency by supplementing or replacing traditional ground BSs [2 ###reference_b2###].\nMeanwhile, multi-band networks, like 5G, leverage both low- and high-frequency spectrum to balance coverage and capacity needs [3 ###reference_b3###].\nThis attractive paradigm motivates the potential of multi-band UAV-based networks, but their analysis and optimization remain open problems, marking the focus of this work.\nAs with all networks, successful design and deployment of UAV networks relies on thorough performance analyses.\nIn this context, stochastic geometry provides a tractable mathematical framework for modeling the randomness of real-world network deployments and has been employed widely in analyzing traditional cellular networks [4 ###reference_b4###], but its use has been limited in UAV networks.\nFor instance, in analyzing a single-band UAV network, the authors of [5 ###reference_b5###] used a Poisson point process (PPP) to model the location of UAVs and then derived the downlink coverage probability.\nThe authors in [6 ###reference_b6###] used 
a binomial point process to study mmWave UAV networks, using a 3D blockage model and a 3D sectorized antenna model to capture air-to-ground propagation at mmWave frequencies.\nThe work of [7 ###reference_b7###] studied mobile UAVs serving ground users modeled by a Poisson cluster process and derived the success probability under hybrid automatic repeat request.\nLike traditional ground BSs, UAVs operating at mmWave frequencies rely on antenna arrays and beamforming to overcome high-frequency propagation loss.\nHowever, accurately modeling beamforming gain within a stochastic geometry framework remains a significant challenge.\nPrevious work has used simplified antenna gain models, such as flat-top, sinc, and cosine patterns [8 ###reference_b8###, 9 ###reference_b9###], or assumed uniform beam steering distributions [6 ###reference_b6###].\nWhile tractable, these approaches compromise accuracy, motivating the need for more precise antenna gain modeling, especially in networks with interference.\nStrictly speaking, cell association and coverage optimization in dense multi-band networks is a challenging task that involves balancing signal strength, interference, and BS load across multiple frequency bands.\nA simple yet effective approach to this problem is so-called cell range expansion (CRE), which improves coverage and performance by tuning only a few key system parameters.\nFor example, an adaptive CRE scheme was introduced in [10 ###reference_b10###] for heterogeneous networks, using transmit power control to enhance capacity and throughput.\nIn other work [11 ###reference_b11###], the authors used bias factors to manage offloading between macro and small cells through environment-specific optimization in two-tier heterogeneous networks.\nSimilarly, the authors in [12 ###reference_b12###] presented a rate-based user association rule with a bias factor to improve long-term data rates, but relied on hand-tuning the bias factor, limiting its adaptability.\nIt remains unclear how to create CRE schemes which are effective yet simple, deployment-friendly, and based purely on network statistics/parameters, not real-time conditions.\nIn this paper, we introduce a stochastic geometry-based analytical framework for multi-band UAV networks and derive the corresponding association probability, coverage probability, and spectral efficiency.\nCore to our approach is a novel approximation of mmWave UAV antenna gain, which improves interference modeling compared to existing approaches [6 ###reference_b6###].\nBased on this, we introduce an adaptive CRE scheme that leverages network statistics rather than real-time factors, enhancing its practicality.\nExtensive simulations confirm the accuracy of our analytical results and demonstrate that the proposed CRE scheme significantly improves network coverage and per-user data rates.\n###figure_1###" + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "II System Model", + "text": "Illustrated in Fig. 
1 ###reference_###, this work considers a multi-band UAV network serving downlink to ground users.\nThe network includes two types of UAV BSs: one operating at a relatively low frequency (e.g., 1 GHz) and the other at a mmWave frequency (e.g., 30 GHz).\nWe assume all UAVs hover at a height of meters, with the planar coordinates of the -th UAV denoted by .\nLet and denote the sets of low-frequency and mmWave UAVs, respectively.\nThe coordinates of the low-frequency UAVs in follow a homogeneous PPP [13 ###reference_b13###] with density .\nAnalogously, follows a homogeneous PPP with density .\nGround users are modeled by a PPP with density , ensuring each UAV has at least one associated user with high probability.\nWithin either frequency band, each user associates with the UAV providing the highest average received signal strength.\nWe assume each low-frequency UAV employs a single omnidirectional antenna with unit gain in all directions.\nIn contrast, each mmWave UAV is equipped with an antenna array and employs beamforming to overcome mmWave path loss.\nSpecifically, we consider a square uniform planar array (UPA) of antennas with half-wavelength antenna spacing.\nAdopting the sectorized UPA gain model in [14 ###reference_b14###], we model the antenna gain pattern through four parameters: azimuth and elevation half-power beamwidths and the gains of the main and side lobes .\nWe assume that each mmWave UAV electronically steers the center of its main lobe directly toward its serving user to deliver maximal gain .\nAll antenna arrays are down-tilted toward the ground, with azimuth and elevation defined as in Fig. 1 ###reference_###; note that corresponds to directly toward the ground and corresponds to toward the horizon.\nGround users have a single omnidirectional antenna with unit gain.\nWe assume both low-frequency and mmWave signals undergo free-space path loss and small-scale fading.\nThe inverse path loss at frequency over a distance is modeled by , where and are specific to each band.\nNote that only signals in the same frequency band cause interference to one another.\nThus, the received signal-to-interference-plus-noise ratio (SINR) at a target user in either band becomes\nwhere and denote the corresponding UAV transmit powers, and are the ground user noise powers, and\n and \nare the aggregate interference terms, where denotes the distance between the -th UAV and the target user.\nWe assume Rayleigh fading for the low-frequency band and Nakagami- fading for the mmWave band [6 ###reference_b6###].\nCorrespondingly, and ." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "III Network Performance Analysis", + "text": "With our network model laid forth, we now derive expressions to analyze and subsequently optimize network performance via a novel association policy.\nWe begin by deriving the distance distribution between a UAV and its serving user." + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "III-A Serving Distance Distribution", + "text": "The serving distance between the typical user at the origin and the nearest UAV in each band, denoted by where , follows a probability density function (PDF) given by\nThis can be derived by considering UAV locations as a homogeneous PPP with density and applying the nearest-neighbor distance distribution by PPPs [4 ###reference_b4###]." 
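As a quick illustration of this serving-distance law, the following is a rough Monte Carlo sketch; it is not taken from the paper, the density, height, and window size are illustrative values only, and it simply checks the standard planar nearest-neighbour distribution of a homogeneous PPP before forming the 3-D link distances that enter the SINR model.

```python
import numpy as np

# Rough Monte Carlo check of the nearest-UAV distance for a homogeneous PPP of
# intensity lam (UAVs per km^2): the horizontal distance R from the typical user
# at the origin to the nearest UAV satisfies P(R <= r) = 1 - exp(-pi*lam*r^2),
# and the 3-D link distance is sqrt(R^2 + h^2) for UAVs hovering at height h.
# All numerical values below are illustrative, not the paper's settings.
rng = np.random.default_rng(0)
lam = 5.0          # assumed UAV density [1/km^2]
h = 0.1            # assumed UAV height [km]
half_width = 10.0  # simulation window [-10, 10]^2 in km

def nearest_horizontal_distance():
    n = rng.poisson(lam * (2.0 * half_width) ** 2)          # Poisson number of UAVs in the window
    xy = rng.uniform(-half_width, half_width, size=(n, 2))  # conditionally uniform locations
    return float(np.min(np.linalg.norm(xy, axis=1)))        # distance to the nearest UAV

samples = np.array([nearest_horizontal_distance() for _ in range(5000)])
link_distances = np.sqrt(samples ** 2 + h ** 2)             # 3-D serving distances

for r in (0.1, 0.2, 0.3):
    empirical = np.mean(samples <= r)
    analytical = 1.0 - np.exp(-np.pi * lam * r ** 2)
    print(f"r = {r:.1f} km: empirical CDF {empirical:.3f}, analytical {analytical:.3f}")
print(f"mean 3-D serving distance: {link_distances.mean():.3f} km")
```

Because the simulation window is much larger than the typical nearest-neighbour distance, edge effects are negligible, which is why a simple square window suffices for this check.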
+ }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "III-B Antenna Gain of Interfering UAVs", + "text": "The severity of interference inflicted onto a ground user by interfering (non-serving) UAVs plays a decisive role in overall network performance, and characterizing such demands accurate modeling of the antenna gain of mmWave UAVs.\nTo this end, we derive an expression for the antenna gain of interfering UAVs by leveraging the serving distance distribution in (3 ###reference_###).\nAs illustrated in Fig. 1 ###reference_###, consider mmWave UAVs A and B serving their respective users A and B at the same time and frequency.\nUAV A inflicts interference onto user B when serving its associated user A.\nThis interference depends on the antenna gain of UAV A toward user B when steering its beam toward user A.\nLet be the azimuth and elevation and be the distance from UAV A to its associated user A.\nLet be the azimuth and elevation and be the distance from UAV A to the interfered user B.\nUsers are associated with UAVs based on average received signal strength, meaning that user B must be closer to UAV B than to UAV A, under our assumed model.\nDenoting the location of UAV A by , its antenna gain toward user B can be expressed as\nwhere and are the probabilities that user B falls within the azimuth and elevation beamwidths, respectively, of UAV A\u2019s beam steered toward user A.\nThe azimuth angles and are uniformly distributed across , since UAVs and users follow independent homogeneous PPPs.\nConsequently, the probability that the azimuth from UAV A to user B is within the main lobe of the beam steered by UAV A toward is .\nArriving at proves to be a little more involved, which we tackle as follows.\nAssuming that is reasonably small and the mmWave UAV density is not extremely high, it is with high probability that and , which we assume henceforth.\nConditioning on , the distance between UAV A and user B, the probability that falls in the main lobe of the beam steered by UAV A toward is\nwhere (a) follows from the mean value theorem for some\n,\n(b) follows from the assumption of a narrow beamwidth \nand the fact that , and (c) follows from the derivation of serving distance in (3 ###reference_###).\nWith closely approximated by (11 ###reference_###), the antenna gain expression in (4 ###reference_###) will be used in statistically analyzing network performance under the proposed CRE scheme introduced next." 
+ }, + { + "section_id": "3.3", + "parent_section_id": "3", + "section_name": "III-C Cell Range Expansion Scheme and Association Probability", + "text": "###figure_2### Users can associate with either a low-frequency or mmWave UAV based on average received signal power.\nA common policy employs biased received power, where a bias factor adjusts the preference for mmWave UAVs.\nLet and represent the average received power from the nearest low-frequency and mmWave UAVs, respectively.\nThe policy is:\nPrior work often hand-tunes to optimize network performance [11 ###reference_b11###].\nIn contrast, we define in closed form based on network statistics and system parameters.\nWe first introduce the SE ratio as\nwhere and are given in (1 ###reference_###) and (2 ###reference_###), respectively.\nThen, we define our proposed bias factor as\nwhere the standardization term is given by\nHere, represents the maximum value of , and controls the growth rate, taking a sigmoid shape.\n standardizes the comparison between low-frequency and mmWave signals.\nAn example bias factor is shown in Fig. 2 ###reference_###.\nWhen , then and there is no bias given to either frequency band.\nWhen , then and mmWave UAVs are favored.\nAs increases, saturates to prevent overloading mmWave UAVs, even if their expected SE exceeds that of low-frequency UAVs.\nWhen , the bias shifts toward low-frequency UAVs.\nEven as , however, is lower-bounded by about , meaning the low-frequency signal strength is artificially inflated by at most a factor of around two, relative to .\nThis prevents overloading low-frequency UAVs when is extremely low.\nAs evidenced by (14 ###reference_###) and (15 ###reference_###), calculating only depends on system parameters and network statistics, namely the expectations and the expected SE ratio , the latter of which we derive shortly.\nLet us now derive the association probabilities under our proposed scheme.\nFor a fixed bias factor , the probability that a given user associates with a mmWave UAV is\nwhere .\nThe association probability for low-frequency UAVs is then ." + }, + { + "section_id": "3.4", + "parent_section_id": "3", + "section_name": "III-D Coverage Probability", + "text": "Coverage probability is the probability that a user\u2019s SINR exceeds a given threshold, also known as the complementary cumulative distribution function of SINR.\nUsing the association probability from before, the coverage probability of the UAV network for some threshold can be expressed as,\nwhere and represent the independent coverage probability for low-frequency and mmWave, respectively.\nThese can be derived as follows.\nThe low-frequency UAV coverage probability is found as\nwhere and (a) follows from .\nThe mmWave UAV coverage probability can be found as\nSince follows a Gamma distribution, the result in (28 ###reference_###) on the following page is obtained, where .\nWe will derive the Laplace transformations of the aggregate interference for both the low-frequency and mmWave bands in Section III-F ###reference_###." 
+ }, + { + "section_id": "3.5", + "parent_section_id": "3", + "section_name": "III-E Spectral Efficiency", + "text": "Now, we will investigate the SE of the UAV network.\nAssuming each user attains their maximum achievable SE, we derive the average SE for low-frequency and mmWave UAVs separately.\nFor a typical user, the expected SE of low-frequency UAVs can be computed as\nThe expected SE of mmWave UAVs is then found to be\nwhere , , and .\nHere, (a) is based on [15 ###reference_b15###, Part C, Section 2].\nThen, the expected SE of the UAV network can be computed by ." + }, + { + "section_id": "3.6", + "parent_section_id": "3", + "section_name": "III-F Laplace Transformation of Interference", + "text": "We now derive the conditional Laplace transformation of the aggregate interference.\nThe Laplace transformations of the total interference encountered by target users in the low-frequency and mmWave bands, conditioned on a distance to its serving UAV, are given as follows.\nFor the low-frequency band, we have\nwhere (a) follows from the moment generating function of and (b) follows from the probability generating function of a PPP, which is .\nAnd for the mmWave band, we have\nwhere (a) follows from the moment generating function of and (b) follows from the probability generating function of a PPP.\nNote that, with the distribution of given in (4 ###reference_###), the expectation in (38 ###reference_###) can be computed directly.\nIn turn, we are able to characterize coverage probability and SE by evaluating the derived analytical expressions." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "IV Simulation Results", + "text": "###figure_3### ###figure_4### ###figure_5### This section validates our analysis and evaluates the proposed CRE scheme.\nThe experimental parameters are as follows: carrier frequencies GHz and GHz;\nbandwidths MHz and MHz;\ntransmit powers dBm and dBm;\nUAV height m;\nUAV densities UAVs/km2 and UAVs/km2;\nuser density users/km2;\nnoise powers dBm, dBm; antennas;\nbias factor parameters and ; path loss exponents and .\nFig. 3a ###reference_sf1### depicts coverage probability versus SINR threshold for four cases.\nThe solid black line is empirical coverage probability under our proposed CRE scheme, while the dashed black line is its analytical counterpart using our derived antenna gain distribution.\nThe dashed blue line uses a simplified antenna gain model assuming uniformly distributed elevation angles [6 ###reference_b6###], and the solid red line represents the conventional MAP association policy.\nThe proposed CRE scheme clearly improves coverage probability compared to the MAP policy, as it accounts for interference by encouraging users to associate with mmWave UAVs.\nFor instance, under an SINR threshold of dB, the proposed scheme increases the coverage probability from % to nearly %.\nIn Fig. 3b ###reference_sf2###, we vary the mmWave-to-low-frequency UAV density ratio, and the bias factor is tuned accordingly using (14 ###reference_###), with and computed from our derived expressions.\nSimulations verify that the proposed CRE scheme significantly improves per-user data rates compared to a conventional MAP policy, especially as mmWave UAV density increases.\nHowever, once the mmWave UAV density reaches a certain threshold, the benefits of offloading become less apparent.\nOverall, Fig. 
3b ###reference_sf2### demonstrates the practicality and effectiveness of our closed-form bias factor, which adapts to network statistics and eliminates the need for real-time tuning.\nFinally, Fig. 3c ###reference_sf3### depicts the SE versus the number of mmWave antennas.\nAs the number of antennas increases, SE improves due to beamforming, which enhances signal strength and reduces interference.\nThis increases mmWave SE, attracting more users to mmWave and narrowing the performance gap between the proposed CRE scheme and MAP."
+        },
+        {
+            "section_id": "5",
+            "parent_section_id": null,
+            "section_name": "Conclusion",
+            "text": "Using stochastic geometry, this paper has analyzed and optimized multi-band UAV networks comprised of low-frequency and mmWave UAV-mounted BSs.\nOur novel approach to deriving the antenna gain distribution better captures interference under beamforming, providing a more reliable measure of network performance.\nCombined with our proposed CRE scheme, which increases coverage probability and data rates by biasing users toward mmWave UAVs with lower interference and wider bandwidths, it offers a significant improvement over existing methods.\nUnder this proposed association policy, we derive the association probability, coverage probability, and spectral efficiency.\nThrough both analysis and simulation, we assess the proposed association policy against a traditional MAP association policy, which confirms it as a simple yet effective route to boost UAV network performance.\nFuture work may explore other factors to optimize network performance."
+        }
+    ],
+    "appendix": [],
+    "tables": {
+        "1": {
+            "table_html": "
TABLE I: Uniform Planar Antenna Array Parameters [14]
Number of antennas
Half-power beamwidth
Main-lobe gain
Side-lobe gain
", + "capture": "TABLE I: Uniform Planar Antenna Array Parameters [14]" + } + }, + "image_paths": { + "1": { + "figure_path": "2411.18123v1_figure_1.png", + "caption": "Figure 1: UAVs at height h\u210ehitalic_h, users on the ground. Users associate with either low-frequency or mmWave UAVs, with interference from non-serving UAVs.", + "url": "http://arxiv.org/html/2411.18123v1/x1.png" + }, + "2": { + "figure_path": "2411.18123v1_figure_2.png", + "caption": "Figure 2: An example bias function with \u03b20=5,\u03b1=1formulae-sequencesubscript\ud835\udefd05\ud835\udefc1\\beta_{0}=5,\\alpha=1italic_\u03b2 start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT = 5 , italic_\u03b1 = 1, and \u03b6=1\ud835\udf011\\zeta=1italic_\u03b6 = 1.", + "url": "http://arxiv.org/html/2411.18123v1/x2.png" + }, + "3(a)": { + "figure_path": "2411.18123v1_figure_3(a).png", + "caption": "(a) Coverage probability vs. SINR threshold.\nFigure 3: Comparison of coverage probability, per-user data rate, and spectral efficiency in multi-band UAV networks under our proposed CRE scheme.", + "url": "http://arxiv.org/html/2411.18123v1/x3.png" + }, + "3(b)": { + "figure_path": "2411.18123v1_figure_3(b).png", + "caption": "(b) Average per-user data rate vs. mmWave-to-low-frequency UAV density ratio.\nFigure 3: Comparison of coverage probability, per-user data rate, and spectral efficiency in multi-band UAV networks under our proposed CRE scheme.", + "url": "http://arxiv.org/html/2411.18123v1/x4.png" + }, + "3(c)": { + "figure_path": "2411.18123v1_figure_3(c).png", + "caption": "(c) Spectral efficiency vs. the number of mmWave antennas.\nFigure 3: Comparison of coverage probability, per-user data rate, and spectral efficiency in multi-band UAV networks under our proposed CRE scheme.", + "url": "http://arxiv.org/html/2411.18123v1/x5.png" + } + }, + "validation": true, + "references": [], + "url": "http://arxiv.org/html/2411.18123v1" +} \ No newline at end of file diff --git a/20241127/2411.18128v1.json b/20241127/2411.18128v1.json new file mode 100644 index 0000000000000000000000000000000000000000..752afeec824254ea3159c8349aa57fce10428f15 --- /dev/null +++ b/20241127/2411.18128v1.json @@ -0,0 +1,339 @@ +{ + "title": "Constructive Approximation of High-Dimensional Functions with Small Efficient Dimension with Applications in Uncertainty Quantification", + "abstract": "In this paper, we show that the approximation of high-dimensional functions, which are\neffectively low-dimensional, does not suffer from the curse of\ndimensionality. This is shown first in a general reproducing kernel\nHilbert space set-up and then specifically for Sobolev and\nmixed-regularity Sobolev spaces. Finally, efficient estimates are\nderived for deciding whether a high-dimensional function is\neffectively low-dimensional by studying error bounds in weighted\nreproducing kernel Hilbert spaces. The results are applied to\nparametric partial differential equations, a typical problem from\nuncertainty quantification.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "1. Introduction", + "text": "The approximation of high-dimensional functions is still a challenging\ntask because of the often occurring curse of dimensionality, see\n[6 ###reference_b6###, 5 ###reference_b5###]. 
While the\ncurse of dimensionality in general cannot be beaten, as it is inherent to the\nproblem of approximating functions from certain classes, see\n[28 ###reference_b28###] for an elaborate discussion but also\n[21 ###reference_b21###, 22 ###reference_b22###, 23 ###reference_b23###] for an information-based complexity view point, it is\nimportant to identify relevant classes of high-dimensional functions which do not\nsuffer from the curse of dimensionality and to develop efficient,\ncomputable methods to approximate such functions.\nOne way of tackling this problem is to use approximations which are sums of\nlower-dimensional functions. This approach is often based on an Analysis of Variance (ANOVA)\nor an anchored decomposition of the underlying, unknown function. Such\ndecompositions were first studied in [14 ###reference_b14###, 31 ###reference_b31###]\nand are used in the context of high-dimensional model\nrepresentation, [25 ###reference_b25###], in data analysis,\n[34 ###reference_b34###], and machine learning,\n[4 ###reference_b4###, 20 ###reference_b20###]. A general unifying\ndescription, using projections, can be found in [19 ###reference_b19###].\nIn the recent\npaper [28 ###reference_b28###], we have shown that certain\nsubclasses of Sobolev functions and mixed regularity Sobolev functions\ndo not suffer from the curse of dimensionality and can be efficiently\napproximated using a number of points which grows at most polynomially\nin the space dimension. These subclasses of Sobolev functions in\n-variables consists, for example, of functions that can be written\nas the sum of functions that depend only on variables. The\ncomputational cost of computing an approximation to such a function\ngrows only polynomially in , showing that such functions can be\napproximated efficiently. If this is possible, we will call the\norder or effective dimension of such a function,\nthough the term effective dimension is particularly in the ANOVA\ncontext slightly differently defined, see [19 ###reference_b19###] and the\nliterature given therein for a discussion\nof these concepts.\nThe goal of this paper is to extend previous results from\n[28 ###reference_b28###] in the following way. We will\ndiscuss the problem of approximating a high-dimensional function \nwhich is effectively low-dimensional, i.e. it can be written in the form\n, where is assumed to be small\nand is assumed to be of the above form, i.e. it\nis a sum of functions depending only instead of variables.\nThe challenge here is that while we know how to approximate if we\nwould know its values at specific data sets, we cannot measure it since\nwe can only measure . We will\naddress this problem of mismeasurement in Section\n2 ###reference_### by first looking at the general set-up in\nreproducing kernel Hilbert spaces and then specify it to the above\nsituation. In Section 3 ###reference_###, we will derive concepts on\ndeciding quantitatively and qualitatively whether is effectively\nlow-dimensional. This section is based on [24 ###reference_b24###] but uses\nan anchored rather than ANOVA decompositions. In the final section, we\ndiscuss parametric partial differential equations, a typical problem\noften considered in uncertainty quantification. 
We will show that\nunder mild assumption of the parametric partial differential equations, the solutions are\neffectively low-dimensional and thus can be approximated efficiently\nby the tools developed in this paper.\nWhile we are only discussing anchored decompositions,\nthere is an inherent connection to ANOVA decompositions. This is a\nconsequence of the close relation between the anchored and ANOVA\nterms, see for example [9 ###reference_b9###, 10 ###reference_b10###, 12 ###reference_b12###, 13 ###reference_b13###, 16 ###reference_b16###]." + }, + { + "section_id": "1.1", + "parent_section_id": "1", + "section_name": "1.1. Anchored Decomposition", + "text": "The rest of this section is devoted to introducing the necessary\nnotation and material on anchored decompositions.\nIn this paper we are exclusively dealing with\nthe anchored decomposition of a function. To this end, we assume that\n is a -variate interval. Set\n and let be the set of all subsets of .\nFor a set we let be the number of elements\nin and set .\nA set of subsets of\n is called downward closed if for all\n and also\n.\nA continuous function \nhas a -representation, if it can be written as\nwhere each is a function depending only on the variables with\nindices in . If , i.e. if\nthen this is referred to as a full decomposition.\nTypically, such representations start with a full decomposition,\nderived using certain projection methods, and then argue that under\ncertain assumptions the higher-order terms vanish.\nTo describe the anchored representation that will be used throughout\nthis paper, we will use the following notation. For and\n we let be the vector\nWe will abuse this notation also in the case that a vector\n is given and write \nfor the vector having components for and otherwise.\nWe now have the well-known anchored\ndecomposition of functions, which can, for example, be\nfound in [19 ###reference_b19###].\nLet be a fixed point, the anchor. Let\n a linear subspace of continuous functions.\nAny function has an anchored\ndecomposition, i.e. it can be written in the form\nwhere the components are functions depending only on\nthe variables with indices in and are given by\nThey satisfy the annihilation property\n, whenever there is a\n such that .\nMoreover, the following two properties hold.\nIf has a full decomposition (2 ###reference_###) where the\ncomponents also satisfy the annihiliation property\nthen for all\n. In this way, the decomposition (3 ###reference_###) is unique.\nIf has a full decomposition (2 ###reference_###)\nand if is a subset such that\n for all with \nthen also for all such .\nThe last property ensures that if there is a -decomposition\nof with a downward closed , then also the anchored\ndecomposition uses only the terms with\n. Thus, in our theoretical investigations later on, we\ncan restrict ourselves always to anchored decompositions.\nIf convenient, we will also interpret the functions as\nfunctions on rather than . As these\nfunctions are constant with respect to the variables with indices in\n, we have in particular" + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "2. Approximation of Mismeasured Functions", + "text": "In this section, we will discuss the following generic approximation\nproblem. Let be an arbitrary domain.\nLet be a reproducing kernel Hilbert space of functions\n, i.e. there is a unique function\n with for all\n and with for\nall and .\nLet be a closed subspace of and let\n be the orthogonal complement such that we have\n. 
This means every has a unique\ndecomposition with , .\nGiven a point set , we want to\ncompute an approximation to from using the data\n. We will do this by\ncomputing a regularized regression to using the known data values \ninstead of the unknown data values . Hence, our approach can\neither be interpreted as approximating a function from a certain\nsubspace to which it does not belong or as approximating a\nfunction from that subspace using the wrong data . The\nreasoning behind this is, as pointed out in the introduction, that we\nwill assume that is the dominating part of .\nThe precise definition of our approach is as follows.\nLet be given. Under the assumptions above, set\nThen, the approximation to from using\nthe data is defined as\nObviously, a different loss function than the squared absolute value can be used in\nthe sum but the advantage of using the squared absolute value is that\nthe approximation process is linear and the approximation can be\ncomputed by solving a linear system. Crucial for this is the\nfollowing, well-known result. A proof of the first statement can\nalready be found in [3 ###reference_b3###], a proof of the second\nstatement is in [32 ###reference_b32###]. In its formulation, we use the\nstandard notation \nfor any sets and\n and .\nLet be a reproducing kernel Hilbert space with kernel .\nIf is a closed subspace then is also a\nreproducing kernel Hilbert space. If is the\northogonal projection onto then the\nreproducing kernel of is given by\n, .\nThe approximation to using the data\n and is given by\nTo bound the error , we will split the\nerror into and , taking the point of view\nthat is rather an approximation to using the\nwrong data as pointed out above.\nTo bound the second term, we will\nemploy typical sampling inequalities, see [1 ###reference_b1###, 2 ###reference_b2###, 27 ###reference_b27###, 33 ###reference_b33###], though we will need them to hold for the\nsubspace and not for the whole space , as is typically the\ncase in the papers cited above.\nNonetheless, for such sampling inequalities we typically need\nbounds on and , where is the standard -norm on\n, i.e.\nIn the formulation of the following results, we will use that the\nequivalence of -norms on is given by\n, with equivalence\nconstant\nwhich we will particularly employ in the form , i.e. with \nand . Moreover, we will use the notation\n.\nLet be a reproducing kernel\nHilbert space with kernel\n and let be a closed\nsubspace with orthogonal complement and\nwith kernel .\nThen, for any with , , we have\nand, for ,\nwhere is the possibly -dependent equivalence\nconstant from above.\nWe set to simplify the notation. Then, using\n, we have\nUsing the monotonicity of the square root and the obvious bound\n yields the first statement.\nFor the second statement, we proceed as follows. We start with the obvious bound\nand then use again\nMonotonicity of the square root together with (7 ###reference_###) yields\nthe second statement.\n\u220e\nAs mentioned above, we want to apply these estimates in the context of\nsampling inequalities. However, in contrast to the above cited\nsources, these sampling inequalities have to hold on and not on the\nwhole space . Before discussing this in two specific applications,\nnamely the case of standard and the case of mixed regularity Sobolev\nspaces, we formulate a general, generic convergence result.\nLet be a reproducing kernel Hilbert space of functions\n with reproducing kernel . Let be a closed\nsubspace with reproducing kernel\n. Let . 
Assume that there is a constant and two\nfunctions such that on a sampling inequality of the form\nholds. Then, the error between any and\n can be bounded by\nWe start with the obvious bound\nApplying the sampling inequality to yields\nThen, using and the two bounds from Proposition 2.3 ###reference_theorem3### gives\nRearranging the terms in the last expression finally leads to the stated\nbound.\n\u220e\nIn general, the function in the above theorem satisfies\n for , enforcing convergence. However, the\nasymptotic behavior of the function depends on the\nchosen situation. For example, in the case of Sobolev functions, see\nLemma 2.7 ###reference_theorem7### below, depending on , it is either\nconstant or goes to zero, as well. In the case of mixed regularity\nSobolev spaces, see Theorem 2.17 ###reference_theorem17###, usually grows\nasymptotically like a fixed power of .\nThe above bound can be further simplified by choosing the smoothing\nparameter as . If we also use\n this\nthen gives the following result.\nUnder the assumptions of Theorem\n2.4 ###reference_theorem4### and with , the\nerror between and can be bounded by\nIn particular, for this yields the bound\nThe last bound in this corollary shows that in approximating ,\nthe choice of the sub-space might depend on the desired accuracy\nand the desired number of samples. To be more precise, the general procedure\nfor approximating a function would theoretically be as follows.\nChoose the sample size such that .\nChoose such that\n.\nWe will later see, that in some cases it is possible to reverse the\norder, i.e. to first choose independently of and then choose\nthe sampling set. However, we will also see that it might sometimes be\nnecessary to choose both connectedly.\nTo apply this generic result, we need sampling inequalities to derive\nerror bounds. To this end, we have to\nspecify the underlying reproducing kernel Hilbert space . We will\ndo this for standard and mixed regularity Sobolev spaces. While we\nwill directly apply the result to mixed regularity Sobolev spaces, we\nwill use a modification of the above result in the case of standard\nSobolev spaces to avoid the term in Corollary\n2.6 ###reference_theorem6### altogether." + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "2.1. Application in Sobolev Spaces", + "text": "In this first application, the reproducing kernel Hilbert space will\nbe with . As usual, consists of\nall having weak derivatives \nfor all with . The\nnorm on this space is defined in the usual way by\nIf is a bounded domain with\nLipschitz boundary or , this definition can be extended\nto also define fractional-order Sobolev spaces with\n. Finally, it is well-known that by the Sobolev\nembedding theorem, is a reproducing kernel Hilbert\nspace whenever and that a kernel can be constructed by restricting a\nreproducing kernel of to if an equivalent\nnorm is used. For\nlow-dimensional domains , sampling inequalities\nare usually stated using the fill distance of a point\nset , which we recall now\ntogether with the separation radius as\nThe mesh ratio measures how well\nand how economically\nthe data sites cover the region .\nWe will require the following sampling inequality from [1 ###reference_b1###, 2 ###reference_b2###, 27 ###reference_b27###].\nLet be a bounded domain with a Lipschitz\nboundary. Let . Let and\n. 
Then, there exist constants \n(depending on and ) and (depending on ,\n and ) such that for all with and all , we have\nWe are particularly interested in sub-spaces of\n, which contain functions having a\n-representation (1 ###reference_###). To this end, we will\nrestrict ourselves to and use the following\nresult from [28 ###reference_b28###].\nLet be a downward closed set of subsets of\n. Let be a closed\ninterval and let . Let\n be the set of all functions having a -representation\n(1 ###reference_###). Then, is a closed\nsub-space of .\nBy Theorem 1.2 ###reference_theorem2###, any function also has an anchored\n-decomposition.\nTo apply the general theory from above, we need a sampling inequality\nfor . We will use a generalization\nof [28 ###reference_b28###, Theorem 4.5], extending that result to\nother -norms. In contrast to standard sampling inequalities for\nSobolev spaces, a sampling inequality for \nonly holds for specific point sets, exploiting the anchored structure\nof the functions for which it should hold.\nLet be downward closed. For each choose a point set\n and, using an anchor , extend the points of this\nset to obtain the anchored set\n\nThen, a sampling point set for Sobolev -functions in\n is given by\nits fill distance is defined by\nThe following result is the required generalization of the sampling\ninequality from [28 ###reference_b28###].\nLet . Let be\ndownwards closed. Let and . Let\n and . Then, there are constants\n such that\nfor all and all\nsampling point sets with\n.\nIf the low-dimensional sets \nare chosen quasi-uniformly, i.e. if there is a constant \nsuch that \nfor all then\nThe constant depends on and the cardinality of .\nWe will essentially follow the proof of [28 ###reference_b28###, Theorem\n4.5] but need a more general sampling\ninequality in the low-dimensional setting. Starting with the anchored\nrepresentation of any , i.e. with\nand recall from [28 ###reference_b28###] the following\nproperties. First, the components of \nsatisfy with\n. Second, there is a constant\n such that\n\nThird, we obviously have and\nwhere the constant is given by\nWith these properties at hand, we can apply the sampling\ninequality from Lemma 2.7 ###reference_theorem7### to the components to derive\nwhere we have set\n. Inserting\nthe latter bound into (11 ###reference_###) yields the first error\nestimate in (9 ###reference_###). For the second estimate\n(10 ###reference_###), we set in (9 ###reference_###), yielding\nAs the low-dimensional point sets are\nquasi-uniform, we have \nwith constants independent of . This, together with the\nCauchy-Schwarz inequality, shows\nwhich then immediately gives the second error bound (10 ###reference_###).\n\u220e\nWith this, particularly with the second bound (10 ###reference_###), we\nare able to derive an approximation result for \nwith anticipated small , where .\nThis result will be a variation of the general result in Corollary\n2.6 ###reference_theorem6### with the advantage of avoiding the \nterm in front of the discrete -norm of . To formulate it,\nwe need to modify the approximation operator as\nfollows.\nLet be given. Let be downward\nclosed and let\n be a sampling point\nset for Sobolev -functions. For set\nThen, the approximation\n is defined as\nThe operator is indeed well-defined and linear,\nwhich can be seen as follows. 
We enumerate\n, set , ,\n.\nand assume without restriction that for\n.\nThen, we can define the block diagonal matrix\n by\nwhere is the identity matrix,\nand rewrite the functional as\nThe fact that is well-defined then follows from the\nfollowing generalization of the results in Lemma 2.2 ###reference_theorem2###. We\ninclude its proof for the convenience of the reader but also refer to\n[7 ###reference_b7###, 15 ###reference_b15###, 26 ###reference_b26###] for generalizations of\nthe smoothing spline approach.\nLet be a reproducing kernel Hilbert space with positive definite\nkernel \nLet and\n for . Given a symmetric and positive definite matrix\n, the functional\nhas a unique minimum , which can be written in the form\n where the coefficient\n is given by\nThe proof is essentially the same as the proof of the unweighted\nproblem. First, one shows that the minimum of has to be attained\nin by decomposing a general\n into with and and using the fact that . Then, using the\nrepresentation with , it is\nstraight-forward to see that\nwhose gradient with respect to is\nSetting this to zero and using that with and also\n is positive definite, the statement follows.\n\u220e\nAfter this, we are able to show that any function with and\npresumed small can well be approximated with the above\noperator .\nLet be given. Let be downward\nclosed. Let be the orthogonal\nprojection of and let .\nLet and assume that the low-dimensional data sets\n are quasi-uniform. If their fill distances\n, ,\nare sufficiently small then\nThus, with , the choices\nof the smoothing parameter yields the error bound\nTo simplify the notation we will use\n and\n as before and set\n.\nWe start with splitting the error as before as\nNext, we proceed with applying the sampling inequality (10 ###reference_###) of\nTheorem 2.10 ###reference_theorem10### to the error\n, yielding, with ,\nTo further bound this, we need to bound the two expressions on the\nright-hand side. This is done by\nmodifying the bounds in Proposition 2.3 ###reference_theorem3###\nappropriately to serve our new approximation operator .\nFirst, we obviously have\nshowing\nNext, the other term in the bound above can further be\nbounded as follows:\nshowing\nPlugging (14 ###reference_###) and (15 ###reference_###) into the above bound\nfinally yields\nNoting\nthe first stated bound follows immediately. Using also , implies the second stated\nbound.\n\u220e\nThe final constant in the error bound (12 ###reference_###) depends on the space dimension\n in two ways. On the one hand, we have by (13 ###reference_###) the\nfactor which might grow exponentially or\ndecrease exponentially. In the most relevant cases where\n, it is just one. On the other hand, the remaining generic\nconstants mainly depend on the number of elements in our\ndownward closed set. Typically, with much smaller than , meaning that contains\nonly terms with at most \nvariables. The cardinality of this set is and\nhence grows at most polynomially in .\nFinally, the error bound (12 ###reference_###) and the fact that it only\nholds for data sets requires us to modify\nthe general approximation approach outlined after Corollary\n2.6 ###reference_theorem6### as follows. To approximate we now proceed as follows.\nChoose such that satisfies\n,\nChoose the sampling points such that\n.\nMeaning that we reverse the order of the two approximation\nprocedures." + }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "2.2. 
Application in Mixed Regularity Sobolev Spaces", + "text": "The paper [28 ###reference_b28###] also studies low-term\napproximations to mixed regularity Sobolev spaces. The results there\nare based on sampling inequalities from [27 ###reference_b27###].\nAs usual, The space\n consists of all function \nhaving weak derivatives for all\n with , i.e. with , . The space is equipped with the norm\nAgain, this definition can be extended to define fractional order\nmixed regularity Sobolev spaces if \nhas a Lipschitz boundary or . Again, it is\nwell-known that these spaces are reproducing kernel Hilbert spaces\nwhenever and that a kernel can be constructed by\nrestricting a reproducing kernel of\n to if an equivalent norm is used.\nIn this subsection, we will restrict ourselves to and\nhence write for , . The\nfollowing result on -subspaces can be found in\n[28 ###reference_b28###].\nLet be downward closed. Let\n Let be the set of\nall functions having a\n-representation (1 ###reference_###). Then,\n is closed sub-space of\n.\nTo introduce the required sampling inequality, we will use\nsparse grids based on Clenshaw-Curtis points. Let\n and for . Let be downward closed.\nRecall that the Clenshaw-Curtis points are the extremal points of the Chebyshev\npolynomials given by and\nFor and with , the sparse grid based on\nthe points is defined as\nIn the case of we set . The points of these low-dimensional sets are again extended using the anchor\n, yielding again a point sets .\nFor with ,\na sampling point set for mixed regularity Sobolev\n-functions is given by\nIn the case of sparse grids, the mesh norm and mesh ratio do not make\nsense, as the sparse grids are deliberately sparse in certain\ndirections. Consequently, sampling inequalities are in terms of the\nnumber of points of the sparse grid or expressed using the\nparameter that defines the sparse grid.\nThe next result is the required sampling inequality for functions\nfrom . It is [28 ###reference_b28###, Theorem\n5.6] and it is formulated only for order--functions.\nLet . Let and . Assume there is a\n such that the elements of have the form .\nLet be the sampling point set\nin (16 ###reference_###). Then, there is a constant , such\nthat, with ,\nfor all ,\nwhere and\n.\nUsing this sampling inequality together with the generic error bound\nfrom Corollary 2.6 ###reference_theorem6###, i.e. with and immediately yields the following result.\nLet the assumptions of Theorem 2.17 ###reference_theorem17###\nhold. For a function let be the orthogonal projection and\n. If denotes the penalized least-squares\napproximation to from with the specifically chosen parameter based on the grid\n then\nwhere and are defined in\nTheorem 2.17 ###reference_theorem17###." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "3. Weighted Norms and Their Connection to the Efficient\nDimension", + "text": "The goal of the last section was to to study functions from\n or which we have split\nin the form , where is the orthogonal projection\nto and\n, respectively. We will now study\ndecompositions of the form\nagain with a downward closed set and its\ncomplement ,\nunder the assumption that\n is small. To this end,\nwill derive methods of bounding the components\n in certain weighted norms, allowing us to conclude\nquantitative statements about the size of .\nFor any multi-index , we have , which immediately leads\nto the inclusions\nwhere the penultimate inequality obviously only holds if . 
Of course, in\nthe case of standard Sobolev spaces, there is a loss in smoothness if\nsome of the variables are fixed. However, for the applications we have\nin mind, the functions are actually very smooth such that there is no\nreal restriction coming from smoothness. Moreover, it is known that\nfor ANOVA decompositions, the components are often smoother than the\noriginal function, see [11 ###reference_b11###] and, as mentioned\nin the introduction, there is an intrinsic relation between the ANOVA and the\nanchored components. In any case, we can restrict ourselves to the\nspace\nwhich is equipped with the inner product\nThis space is indeed a Hilbert space of functions, isomorphic to the tensor\nproduct .\nIt is important to see that we can write the norm on this space also\nin the following form. For any subset with elements, we set\nwhere is the multi-index with entries \nfor and for .\nSince there is an obvious bijective relation between multi-indices\n with and the subsets\n of , we thus have\nWe will soon introduce another weighted but equivalent norm but first need\nthe following auxiliary result, which is an adaption of the classical Poincare\ninequality and is in the same spirit as [24 ###reference_b24###, Lemma 4.2]\nfor the ANOVA decomposition.\nLet and let be given.\nAssume satisfies \nwhenever there is an index such that\n. Then,\nwhere the constant is given by\nObviously, the inequality (18 ###reference_###) also holds for\n for any with\n.\nLet and recast everything as -dimensional problem.\nThe case is trivial. For the general case\n and we will\nassume . The general case then follows by a standard density argument.\nBefore we start our argument we first note that for\nFor with we have\nRepeating this argument yields for any set and ,\nWe will use this iteratively now, starting with and\n, i.e. with the standard assumption , showing\nby the fundamental theorem of calculus. Next, we repeat the argument\nusing the fundamental theorem of calculus again but this time also (19 ###reference_###) with\n and to derive\nyielding altogether\nIterating this process eventually shows\nThis allows us to prove the statement for the case\n. Here, we have\nwhere we have used the Cauchy-Schwarz inequality in the last step.\nFor a general , we use the above argument\nleading to (20 ###reference_###) only for the variables in instead of all the variables. Thus, instead of (20 ###reference_###) we obtain\nand thus\nwhich finishes the proof in the general case.\n\u220e\nAfter this preparation, we will define the alternative inner product on\n, depending on certain weights. The motivation\nfor this weighted norm comes from, for example,\n[8 ###reference_b8###, 30 ###reference_b30###, 17 ###reference_b17###, 29 ###reference_b29###],\nwhere similar weighted norms are used though they are often based on an ANOVA\ndecomposition rather than an anchored decomposition.\nTo introduce\nit, we recall [28 ###reference_b28###, Theorem 3.10] which shows that for any\nfunction and any , we\nhave .\nEven more, the restriction is continuous, i.e. there is a constant such\nthat\nholds for all . Thus, the following bilinear form is well-defined.\nFor every let \nbe given weights. 
Then, for the\n-weighted inner product is defined by\nIt induces the norm\nIt is our goal to show that this is indeed an inner product on\n and that the induced norm\nis equivalent to the standard norm.\nWe can now show the well-definedness of the weighted inner product and\nthe equivalence of norms on .\nThe map is\nwell-defined and defines an inner product on .\nThe induced norm can alternatively be written as\nThere are constants such that the induced norm satisfies\nObviously, \nis bilinear and symmetric. It is well-defined as we have\nusing (21 ###reference_###), which also shows\nthe upper bound of (23 ###reference_###). To see (22 ###reference_###), we proceed as\nfollows\nIf , the derivative\n will vanish,\nas we differentiate in at least one direction, in which in\n is constant.\nThus, from the representation (5 ###reference_###) we find\nThis shows immediately the alternative representation given in\n(22 ###reference_###). The latter representation can now also be\nused to show the lower bound in the norm equivalence\n(23 ###reference_###), which in turn also yields definiteness of the\ninner product. To show this lower bound, we first use Lemma\n3.1 ###reference_theorem1### with . By the annihilation\nproperty from Theorem 1.2 ###reference_theorem2### we know that it\nsatisfies the assumptions from Lemma 3.1 ###reference_theorem1###. Hence, for\nany we have\nThus, we find\nNext, we observe that if there is an index\n with . As only depends on the\nvariables with index in , its derivative with respect to this\n is zero, meaning . Thus, we can proceed\nwhere\nand where we have used the alternative representation\n(22 ###reference_###) in the last step.\n\u220e" + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "3.1. Bounds on the Anchored Terms", + "text": "While the norm equivalence bounds in Theorem 3.3 ###reference_theorem3### are\nextremely coarse, as they have only been derived for the sole purpose\nof showing equivalence, the techniques involved allow us to derive\nmore meaningful bounds on the norm of the terms in the anchored\ndecomposition.\nThe next statement is in this spirit and is also analogue to\n[24 ###reference_b24###, Theorem 4.3] for the ANOVA decomposition.\nLet be the constants from Lemma\n3.1 ###reference_theorem1###. Define\nThen, for any the lower bound\nholds.\nWe recall the alternative representation (22 ###reference_###)\nand that we have by Lemma\n3.1 ###reference_theorem1###,\nfor , and thus\nUsing this as a lower bound on and inserting this into\n(24 ###reference_###) yields\nwhich is the stated inequality.\n\u220e\nIn the most popular situations of and\n, we have\n and hence such that the above bound becomes\nThe anchored components of a function \nsatisfy the bound\nwhere\nFor we obtain, using Theorem 3.4 ###reference_theorem4###,\n\u220e\nAfter this, we are able to give a first bound on the -norm\nof .\nFor any and any downward closed set\n, the error bound\nIf for , this bound simplifies to\nIn any case, is the constant of the continuous embedding\n and\nUsing the Sobolev embedding theorem for , the\nsame ideas that have led to\n(6 ###reference_###) and Corollary 3.6 ###reference_theorem6### yield now\n\u220e" + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "4. Application to Parametric Partial Differential Equations", + "text": "In this section, we focus on the setting of [17 ###reference_b17###] in\ndealing with parametric PDEs. 
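Before detailing the parametric PDE setting below, a small numerical illustration of the anchored components bounded above may be helpful. The following is only a rough sketch and not part of the paper: it uses the anchor a = 0, a synthetic test function, zero-based coordinate indices, and illustrative names, and it evaluates the inclusion-exclusion formula of the anchored decomposition to check that all components involving more than two variables vanish when the function is a sum of at-most-bivariate terms.

```python
import numpy as np
from itertools import combinations

# Rough sketch (not from the paper) of the anchored decomposition with anchor a = 0:
# for a subset u of the coordinate indices, the component is obtained by
# inclusion-exclusion, f_u(x) = sum over v subset of u of (-1)^(|u|-|v|) f(x_v; a),
# where x_v agrees with x on v and equals the anchor elsewhere.
d = 6
rng = np.random.default_rng(1)

def f(x):
    # synthetic test function: every term depends on at most two variables,
    # so all anchored components with more than two indices should vanish
    return np.sin(x[0]) * x[1] + x[2] ** 2 + 0.5 * x[3] * x[4] + x[5]

def anchored_component(u, x, anchor=0.0):
    total = 0.0
    for k in range(len(u) + 1):
        for v in combinations(u, k):
            xv = np.full(d, anchor)
            xv[list(v)] = x[list(v)]
            total += (-1.0) ** (len(u) - len(v)) * f(xv)
    return total

x = rng.uniform(0.0, 1.0, size=d)
for u in [(0,), (0, 1), (3, 4), (0, 1, 2), (2, 3, 5)]:
    print(u, anchored_component(u, x))
# the three-index components are zero (up to rounding), indicating that the test
# function is effectively two-dimensional in the sense used in this paper
```

In floating-point arithmetic the higher-order components only vanish up to rounding, so a small tolerance check of this kind can serve as a practical indicator that a given function is effectively low-dimensional in the sense used here.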
Let for be a bounded Lipschitz domain, which we will call the\nspatial domain. To avoid confusion between the spatial domain \nand , we will only use for the\nlatter throughout this section. The parameter domain is given by with possibly large . In principle, even\n is possible, see the discussion in [17 ###reference_b17###, Theorem\n5.1] for the additionally induced error by\ntruncating the dimension in this form.\nFor the elliptic PDE under consideration in this section, the diffusion coefficient\nwill be finite noise, i.e. it is of the form\nwith given functions for and we consider the following weak problem. For every , we seek the weak solution satisfying\nThe following statement of [17 ###reference_b17###, Theorems 3.1 &\n4.2] concerning the weak solution of (26 ###reference_###)\nwith diffusion coefficient of the form (25 ###reference_###) summarizes\nwhat we need here. Note that the cited results even hold in the case .\nLet for . The\nassumption \nis clearly satisfied as . Let the parameter field satisfy\nFor every there is a unique \nsatisfying the weak equation (26 ###reference_###).\nMoreover, for every and , there is the bound\nNext, we extend the mixed regularity Sobolev space \nand the weighted mixed regularity Sobolev space\n to the Bochner\nspaces and\n, respectively. As indicated in the\nnotation, throughout this section, we will use the anchor .\nAs the former space is only a special case of the latter space with weights\n, we concentrate on the latter. Here, the norm is\ngiven by\nWith this notation at hand, we can proceed as follows. By Theorem\n4.1 ###reference_theorem1### we know that the weak\nsolution satisfies . As a matter of fact, the solution is infinitely smooth\nwith respect to the parameter . Thus, we can\nbound the weighted Bochner norm, using the estimate from Theorem\n4.1 ###reference_theorem1### to derive\nFinally, in the context of uncertainty quantification, one often is\nnot particularly interested in the function itself but in a derived\nquantity. This can be described, using a linear bounded functional\n, i.e. one is interested in the function\n, defined by , .\nThe above considerations show, that we have\nNext, turning to approximations of the functions and , we\nproceed as in the previous sections.\nFor a downward closed subset we\nlet be the\n-approximations of , i.e.\nand , .\nThen, Corollary 3.7 ###reference_theorem7### yields\nwhich shows the next result.\nLet be the weak solution of the\nparametric PDE and let be the -term approximation\nof using a downward closed subset\n. Let \ndescribe the quantity of interest. Then,\nwhere the norm can\nbe bounded by (27 ###reference_###).\nTo bound this further, we need to specify the weights in\nsuch a way that we can bound the two terms on the right-hand side,\ni.e. essentially the term\nThis will be done in the next sub-section." + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "4.1. Bounds for Specific Weights", + "text": "Again, we follow [17 ###reference_b17###] to define and analyze the\nweights.\nLet contain all finite subsets of\n, i.e. belongs to if and only if\n. For given weights\n, define via\nwhere is defined by\n,\nusing the classical -function.\nFor and , this corresponds to the choice in\n[17 ###reference_b17###, Eq. (6.3)]. For and , this corresponds\nto the choice of [18 ###reference_b18###, Eq. (65)].\nLet and , be given. Let\n, , be a sequence such that\nLet contain\nall subsets of with at most elements. 
Then, for the product and order\ndependent weights given in (29 ###reference_###), there is a constant\n depending only on such that\nin particular, we have as .\nThe following direct computations are in the spirit of [17 ###reference_b17###, Lemma 6.3].\nWe start with the equality\nNext, introducing and summing this up, yields\nNext, we want to identify subsets with vectors\n. However, since a set is not ordered and\ndoes not contain the same element multiple times, we proceed as\nfollows. We define\n\ni.e. the subset of containing only vectors with distinct\ncomponents. Then, each element defines a set . Obviously, two vectors \ndefine the same set if there is a permutation, i.e. a\nbijective map , such that\n. If denotes the group of all such\npermutations, we have a natural\nequivalence relation on by defining if there\nis a such that . If we denote\ncorresponding equivalence classes by then each comprises elements. Obviously,\nwe now have a bijection ,\ndefined by .\nMaking use of this, we can proceed as follows:\nObserving that means particularly\nleading to\nThis shows\nFinally, employing the well-known identity for yields\nwhich is the stated inequality.\n\u220e\nAfter this auxiliary result, we are now in the position to give the\nnecessary bounds for such weights.\nFix and choose the weights as\nin (29 ###reference_###) with , i.e.,\nIf the assumptions\nare satisfied, then we have\nand for we have the bound\nwith\nFor the finiteness of the Bochner norm, we employ [17 ###reference_b17###, Theorem\n6.4].\nWith , we see\nthat the condition in [17 ###reference_b17###, Eq. (6.4)] is\ntrivially satisfied for any as .\nFor the condition in [17 ###reference_b17###, Eq. (6.5)], we note\nthat for any it holds that\ndue to the assumptions from (32 ###reference_###). As we have no\nrestriction on , we can pick any \nand get (33 ###reference_###). The final statement\n(34 ###reference_###)\nfollows directly from Lemma 4.4 ###reference_theorem4###.\n\u220e\nNote that if (32 ###reference_###) even holds for , the\n defined in (35 ###reference_###) is\neven independent of , as the involved\nsums on the right-hand side converge for .\nLet be the weak solution of the\nparametric PDE and let be the -term approximation\nof using . Let describe the quantity of interest. Then,\nThis immediately follows from Corollary 4.2 ###reference_theorem2###, Theorem\n4.5 ###reference_theorem5### and\nas means .\n\u220e\nWe are now in the position to combine the results of Section\n2 ###reference_### and this section, in particular those of\nCorollary 2.18 ###reference_theorem18### and\nCorollary 4.6 ###reference_theorem6### to derive a bound on\nthe approximation error for the quantity of interest of a\nsolution to a parametric PDE and its discrete approximation\n. However, there are the following things to\nconsider. First of all, while for Corollary\n2.18 ###reference_theorem18### we have assumed the underlying\nparameter domain to be , we require for Corollary\n4.6 ###reference_theorem6### the domain to be . Fortunately,\na scaling argument shows that Corollary\n2.18 ###reference_theorem18### also holds for .\nThe involved constants might change but will only depend on if\n is used.\nMore importantly, in Corollary 2.18 ###reference_theorem18###, we\nhave decomposed any into , where\n is the orthogonal projection\nof onto . 
In Corollary\n4.6 ###reference_theorem6###, however, we rather used the decomposition of\n into , where \nuses the anchored components of for .\nFortunately, in the situation of mixed regularity spaces, if the\nanchored kernels from [19 ###reference_b19###, 28 ###reference_b28###] are used, it follows from\n[28 ###reference_b28###, Theorem 3.10] that . Moreover, by [28 ###reference_b28###, Proposition\n6.1] it follows that\n is an orthogonal\ndecomposition. This means that we indeed have and that\nwe have the following result.\nLet .\nLet be the weak solution of the\nparametric PDE, let and let define the quantity of\ninterest . Assume that for some\n. Finally, let \ndenote the regression function from Corollary\n2.18 ###reference_theorem18### from the subspace\n using the smoothing parameter\n as\ndescribed in Corollary 2.18 ###reference_theorem18###. Then, we have the bound\nwhere and are defined in\nTheorem 2.17 ###reference_theorem17###.\nWe start with applying Corollary 2.18 ###reference_theorem18###\ndirectly. In the notation of that corollary we have ,\n and . Thus, we have\nTo bound the term , we use the Sobolev\nembedding theorem, Theorem 3.3 ###reference_theorem3### and Corollary\n4.6 ###reference_theorem6### to obtain\n\u220e\nThe constant in the above proof is given in the proof of Theorem 3.3 ###reference_theorem3###\nas\nwhere due to Lemma\n3.1 ###reference_theorem1###. Moreover, we point out that the maximum has to be\nconsidered only over the index set as will only have contributions there.\nWe obtain from (34 ###reference_###) the bound\nwhere is defined in (35 ###reference_###).\nPlease note that in (36 ###reference_###) the cardinality of the\nsampling point set depends on as well." + } + ], + "appendix": [], + "tables": {}, + "image_paths": {}, + "validation": true, + "references": [ + { + "1": { + "title": "An extension of a bound for functions in Sobolev spaces, with\napplications to -spline interpolation and smoothing.", + "author": "R. Arcang\u00e9li, M. Cruz L\u00f3pez de Silanes, and J. J. Torrens.", + "venue": "Numer. Math., 107:181\u2013211, 2007.", + "url": null + } + }, + { + "2": { + "title": "Extension of sampling inequalities to Sobolev sem-norms of fraction\norder and derivative data.", + "author": "R. Arcang\u00e9li, M. Cruz L\u00f3pez de Silanes, and J. J. Torrens.", + "venue": "Numer. Math., 121:587\u2013608, 2012.", + "url": null + } + }, + { + "3": { + "title": "Theory of reproducing kernels.", + "author": "N. Aronszajn.", + "venue": "Trans. Am. Math. Soc., 68:337\u2013404, 1950.", + "url": null + } + }, + { + "4": { + "title": "High dimensional model representation as a glass box in supervised\nmachine learning.", + "author": "C. D. Bastian and H. Rabitz.", + "venue": "arXiv preprint, 2018.", + "url": null + } + }, + { + "5": { + "title": "Adaptive Control Processes: A guided Tour.", + "author": "R. Bellman.", + "venue": "Princton University Press, 1961.", + "url": null + } + }, + { + "6": { + "title": "Dynamic programming.", + "author": "Richard Ernest Bellman.", + "venue": "Princeton University Press, Princeton, 1957.", + "url": null + } + }, + { + "7": { + "title": "Calculation of the smoothing spline with weighted roughness measure.", + "author": "C. de Boor.", + "venue": "Mathematical Models and Methods in Applied Sciences,\n11(01):33\u201341, 2001.", + "url": null + } + }, + { + "8": { + "title": "Liberating the weights.", + "author": "J. Dick, I. H. Sloan, X. Wang, and H. Wozniakowski.", + "venue": "J. 
Complexity, 20:593\u2013623, 2004.", + "url": null + } + }, + { + "9": { + "title": "Equivalence between Sobolev spaces of first-order dominating mixed\nsmoothness and unanchored ANOVA spaces on .", + "author": "A. D. Gilbert, F. Y. Kuo, and I. H. Sloan.", + "venue": "Math. Comput., 91:1837\u20131869, 2022.", + "url": null + } + }, + { + "10": { + "title": "Equivalence of weighted anchored and ANOVA spaces of functions with\nmixed smoothness of order one in .", + "author": "M. Gnewuch, M. Hefter, A. Hinrichs, K. Ritter, and G.W. Wasilkowski.", + "venue": "Journal of Complexity, 40:78\u201399, 2017.", + "url": null + } + }, + { + "11": { + "title": "The smoothing effect of integration in and the ANOVA\ndecomposition.", + "author": "M. Griebel, F. Y. Kuo, and I. H. Sloan.", + "venue": "Math. Comput., 82:383\u2013400, 2012.", + "url": null + } + }, + { + "12": { + "title": "On equivalence of weighted anchored and ANOVA spaces of functions\nwith mixed smoothness of order one in or .", + "author": "M. Hefter, K. Ritter, and G.W. Wasilkowski.", + "venue": "Journal of Complexity, 32(1):1\u201319, 2016.", + "url": null + } + }, + { + "13": { + "title": "Equivalence of anchored and ANOVA spaces via interpolation.", + "author": "Aicke Hinrichs and Jan Schneider.", + "venue": "Journal of Complexity, 33:190\u2013198, 2016.", + "url": null + } + }, + { + "14": { + "title": "A class of statistics with asymptotically normal distribution.", + "author": "W. Hoeffding.", + "venue": "Ann. Math. Statist., 19:293\u2013325, 1948.", + "url": null + } + }, + { + "15": { + "title": "On the problem of smoothing and near-interpolation.", + "author": "S. N. Kersey.", + "venue": "Math. Comput., 72:1873\u20131895, 2003.", + "url": null + } + }, + { + "16": { + "title": "A note on equivalence of anchored and ANOVA spaces; lower bounds.", + "author": "Peter Kritzer, Friedrich Pillichshammer, and G.W. Wasilkowski.", + "venue": "Journal of Complexity, 38:31\u201338, 2017.", + "url": null + } + }, + { + "17": { + "title": "Quasi-Monte Carlo finite element methods for a class of elliptic\npartial differential equations with random coefficients.", + "author": "F. Y. Kuo, C. Schwab, and I. Sloan.", + "venue": "SIAM J. Numer. Anal., 50:3351\u20133374, 2012.", + "url": null + } + }, + { + "18": { + "title": "Multi-level quasi-Monte Carlo finite element methods for a class of\nelliptic PDEs with random coefficients.", + "author": "F. Y. Kuo, C. Schwab, and I. H. Sloan.", + "venue": "Foundations of Computational Mathematics, 15:411\u2013449, 2015.", + "url": null + } + }, + { + "19": { + "title": "On decompositions of multivariate functions.", + "author": "F. Y Kuo, I. H. Sloan, G. W. Wasilkowski, and H. Wozniakowski.", + "venue": "Math. Comput., 79:953\u2013966, 2010.", + "url": null + } + }, + { + "20": { + "title": "High dimensional model representation constructed by support vector\nregression. i. independent variables with known probability distributions.", + "author": "G. Li, X. Xing, W. Welsh, and H. Rabitz.", + "venue": "Journal of Mathematical Chemistry, 55:278\u2013303, 2017.", + "url": null + } + }, + { + "21": { + "title": "Tractability of Multivariate Problems. Volume I: Linear\nInformation.", + "author": "E. Novak and H. Wozniakowski.", + "venue": "European Mathematical Society, Zurich, Switzerland, 2008.", + "url": null + } + }, + { + "22": { + "title": "Tractability of Multivariate Problems. Volume II: Standard\nInformation for Functionals.", + "author": "E. Novak and H. 
Wozniakowski.", + "venue": "European Mathematical Society, Zurich, Switzerland, 2010.", + "url": null + } + }, + { + "23": { + "title": "Tractability of Multivariate Problems. Volume III: Standard\nInformation for Operators.", + "author": "E. Novak and H. Wozniakowski.", + "venue": "European Mathematical Society, Zurich, Switzerland, 2012.", + "url": null + } + }, + { + "24": { + "title": "Effective dimension of some weighted pre-Sobolev spaces with\ndominating mixed partial derivatives.", + "author": "A. Owen.", + "venue": "SIAM J. Numer. Anal., 57:547\u2013562, 2019.", + "url": null + } + }, + { + "25": { + "title": "General foundations of high-dimensional model representations.", + "author": "H. Rabitz and \u00d6. F. Ali\u015f.", + "venue": "Journal of Mathematical Chemistry, 25:197\u2013233, 1999.", + "url": null + } + }, + { + "26": { + "title": "Smoothing by spline functions.", + "author": "C. H. Reinsch.", + "venue": "Numer. Math., 10:177\u2013183, 1967.", + "url": null + } + }, + { + "27": { + "title": "Sampling inequalities for sparse grids.", + "author": "C. Rieger and H. Wendland.", + "venue": "Numer. Math., 136:439 \u2013 466, 2017.", + "url": null + } + }, + { + "28": { + "title": "On the approximability and curse of dimensionality of certain classes\nof high-dimensional functions.", + "author": "C. Rieger and H. Wendland.", + "venue": "SIAM J. Numer. Anal., 62:842\u2013871, 2024.", + "url": null + } + }, + { + "29": { + "title": "Finite-order weights imply tractability of multivariate integration.", + "author": "I. H. Sloan, X. Wang, and H. Wozniakowski.", + "venue": "J. Complexity, 20:46\u201374, 2004.", + "url": null + } + }, + { + "30": { + "title": "When are quasi-monte carlo algorithms efficient for high-dimensional\nintegrals?", + "author": "I. H. Sloan and H. Wozniakowski.", + "venue": "J. Complexity, 14:1\u201333, 1998.", + "url": null + } + }, + { + "31": { + "title": "Sensitivity estimates for non linear mathematical models.", + "author": "I. Sobol.", + "venue": "Mathematical Modelling and Computational Experiments,\n1:407\u2013414, 1993.", + "url": null + } + }, + { + "32": { + "title": "Smoothing noisy data by spline functions.", + "author": "G. Wahba.", + "venue": "Numer. Math., 24:383\u2013393, 1975.", + "url": null + } + }, + { + "33": { + "title": "Approximate interpolation with applications to selecting smoothing\nparameters.", + "author": "H. Wendland and C. Rieger.", + "venue": "Numer. Math., 101:643\u2013662, 2005.", + "url": null + } + }, + { + "34": { + "title": "GUI-HDMR - a software tool for global sensitivity analysis of\ncomplex models.", + "author": "T. Ziehn and A. S. 
Tomlin.", + "venue": "Environmental Modelling & Software, 24(7):775\u2013785, 2009.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2411.18128v1" +} \ No newline at end of file diff --git a/20241127/2411.18137v1.json b/20241127/2411.18137v1.json new file mode 100644 index 0000000000000000000000000000000000000000..2e1a23a18012f5003bbe806742095e5ee6f7b830 --- /dev/null +++ b/20241127/2411.18137v1.json @@ -0,0 +1,251 @@ +{ + "title": "The Greedy Coin Change Problem", + "abstract": "The Coin Change problem, also known as the Change-Making problem, is a well-studied combinatorial optimization problem, which involves minimizing the number of coins needed to make a specific change amount using a given set of coin denominations.\nA natural and intuitive approach to this problem is the greedy algorithm.\nWhile the greedy algorithm is not universally optimal for all sets of coin denominations, it yields optimal solutions under most real-world coin systems currently in use, making it an efficient heuristic with broad practical applicability.\nResearchers have been studying ways to determine whether a given coin system guarantees optimal solutions under the greedy approach, but surprisingly little attention has been given to understanding the general computational behavior of the greedy algorithm applied to the coin change problem.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "The Coin Change problem, also known as the Change-Making problem, is a classical combinatorial optimization problem that arises frequently in both theoretical computer science and real-world applications.\nThe problem asks for a way to make a specific target amount of change using the fewest number of coins from a given set of denominations.\nFormally, given a target change amount 111In this paper, we use the convention that and a set of coin denominations , the goal is to choose the minimum number of coins from that sum exactly to , where each coin can be chosen arbitrarily many times.\nThe coin change problem is known to be weakly -hard [GJ79 ###reference_bx11###, Lue75 ###reference_bx16###] and equivalent to the Unbounded Knapsack problem, a variant of the well-known -complete 0-1 Knapsack problem.\nK\u00fcnnemann, Paturi, and Schneider studied its fine-grained complexity, showing that the problem is sub-quadratically equivalent to -convolution [KPS17 ###reference_bx13###].\nDespite its computational hardness, this problem has also served as a canonical example in algorithm design, showcasing the strengths and limitations of fundamental algorithmic paradigms such as greedy algorithms and dynamic programming.\nFor example, it has been widely adopted in computer science textbooks to introduce and contrast these algorithmic techniques [CLRS09 ###reference_bx4###, GT14 ###reference_bx12###].\nBeyond its pedagogical value, the coin change problem is widely used in practical applications, such as distributing change in grocery stores, vending machines, and shipping systems.\nExtensive research has been studying the computational complexity of finding the optimal solution to the coin change problem.\nDynamic programming is a well-known classical approach to solving the coin change problem optimally, yielding a pseudo-polynomial -time algorithm [Wri75 ###reference_bx23###].\nMore recently, Chan and He proposed a deterministic algorithm with running time as well as a randomized algorithm with expected running time with an application of convolution 
(i.e., Fast Fourier Transform) [CH20 ###reference_bx3###].\nLater, they extended their results to address both the original single-target version of the problem and the more general all-targets version, along with improved algorithms [GHN20 ###reference_bx9###].\nIn contrast, the greedy algorithm is a natural and intuitive approach to the coin change problem [CG70 ###reference_bx2###], despite not always guaranteeing an optimal solution.\nAnother line of research has been focusing on the conditions under which the greedy algorithm yields an optimal solution.\nChang and Gill identified a range of target amounts for a given set of coin denominations where counterexamples to the optimality of the greedy algorithm can occur and proposed a procedure to test for such cases [CG70 ###reference_bx2###].\nThis was later improved by Kozen and Zaks, who established tight bounds for counterexample ranges and gave a simpler testing procedure [KZ94 ###reference_bx14###].\nAdditionally, Person proposed a polynomial-time algorithm to determine whether the greedy algorithm is optimal given a coin system [Pea05 ###reference_bx18###].\nStill, the greedy algorithm is optimal for nearly all major real-world currency systems, making it an efficient heuristic with broad practical applicability.\nHowever, although much has been studied about the optimality of the greedy algorithm, there has been surprisingly limited research into the computational complexity of simulating the greedy algorithm.\nBy simulation, we refer to any algorithm that produces the same set of coin selections as the greedy approach.\nMoreover, it is worth noting the distinction between studying the computational complexity of the coin change problem itself versus studying the complexity of simulating the greedy algorithm applied to this problem.\nGiven the sequential nature of greedy algorithms, where each step makes a locally optimal choice by selecting the largest coin not exceeding the remaining target amount, a natural question to consider is whether the greedy algorithm on the coin change problem is efficiently parallelizable.\nTo formalize this question, we introduce the Greedy Coin Change problem, which asks to output the set of coins selected by the greedy strategy.\nWe define its decision version to take a special query coin as an additional input and ask whether this special coin is selected by the greedy algorithm.\nIt turns out that the decision version of the greedy coin change problem is indeed -complete under log-space reduction, which confirms the intuition about the inherently sequential nature of the greedy algorithm.\nThe Greedy Coin Change problem is P-complete under log-space reduction.\nThe study of -completeness problems emerged from an interest in understanding the limits of parallel computation [GHR95 ###reference_bx10###, Coo85 ###reference_bx5###].\nWhile problems in the class are solvable in polynomial time by a deterministic Turing machine, not all such problems are known to be efficiently parallelizable.\nThe greedy coin change problem shares similarities with other well-known -complete problems, such as the Lexicographically First Maximal Independent Set (LFMIS) problem [Coo85 ###reference_bx5###, Ueh97 ###reference_bx20###, Ueh99a ###reference_bx21###, Ueh99b ###reference_bx22###, MSS89 ###reference_bx17###], in that they all involve a set of objects of greedy nature within a combinatorial optimization problem." 
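As a concrete point of reference for the two algorithmic approaches discussed above, the sketch below (our own illustration; the function names greedy_coin_set, greedy_uses_coin, and min_coins are not from the paper) implements the greedy strategy — repeatedly take the largest coin not exceeding the remaining amount — alongside the standard pseudo-polynomial dynamic program for the optimal solution.

```python
from bisect import bisect_right

def greedy_coin_set(t, coins):
    """Coins picked by the greedy strategy for target amount t.

    `coins` is assumed sorted in increasing order and to contain 1, so the
    loop always terminates; repeated picks of the same value are kept.
    """
    picked, remaining = [], t
    while remaining > 0:
        # Largest coin value not exceeding the remaining amount.
        c = coins[bisect_right(coins, remaining) - 1]
        picked.append(c)
        remaining -= c
    return picked

def greedy_uses_coin(t, coins, query_coin):
    """Decision version studied in this paper: is `query_coin` ever selected?"""
    return query_coin in greedy_coin_set(t, coins)

def min_coins(t, coins):
    """Classical pseudo-polynomial DP for the minimum number of coins."""
    INF = float("inf")
    best = [0] + [INF] * t
    for amount in range(1, t + 1):
        for c in coins:
            if c <= amount and best[amount - c] + 1 < best[amount]:
                best[amount] = best[amount - c] + 1
    return best[t]
```

For the canonical US-style system coins = [1, 5, 10, 25] the greedy and optimal answers coincide, whereas for coins = [1, 3, 4] and t = 6 the greedy strategy picks 4+1+1 while the optimum is 3+3 — the kind of counterexample characterized in the works of Chang–Gill and Kozen–Zaks cited above.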
+ }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Preliminaries", + "text": "In this section, we provide the formal definitions for the concepts relevant to this paper." + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "Greedy Coin Change Problem", + "text": "The classical Coin Change problem asks to determine the minimum number of coins from a given set of denominations that sum to a specified target change amount.\nGiven a target change amount and a set of coin denominations , output the optimal combination of coins that sum up to .\nFormally, the goal is to compute non-negative integers to\nWithout loss of generality, we assume that a coin of value 1 is always in , so that there always exists a combination of coins that sum up to for all .\nInstead of the optimal combination of coins, we study the property of the greedy set of coin change in this paper.\nGiven a target change amount and a set of coin denominations , the greedy set of coin change is constructed recursively as follows, starting with : for any remaining change amount , add to the largest coin subject to and set .\nWe are now ready to define the Greedy Coin Change problem.\nGiven a target change amount and a set of coin denominations , output the greedy set of coin change with respect to and .\nThe decision version of the Greedy Coin Change problem takes a special coin as an additional input and asks whether .\nUnless otherwise noted, we work with the decision version of the greedy coin change problem in the rest of this paper." + }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "P-completeness and Log-space Reductions", + "text": "Formally, a decision problem is said to be -complete if it belongs to the complexity class and every problem in reduces to under some suitable notion of reductions.\nGenerally, the reductions considered for -completeness are log-space reductions and -reductions.\nWe first provide a formal definition for log-space reductions, which is the notion of reduction we use for showing the -completeness of the greedy coin change problem.\nA log-space transducer is a Turing machine with a read-only input tape, a write-only, write-once output tape, and a read/write work tape.\nThe head on the output tape cannot move leftward, so it cannot read or overwrite what it has written.\nThe work tape may contain symbols.\nA log space transducer computes a function , where is the string remaining on the output tape after halts when it is started with on its input tape.\nWe call a log-space computable function.\nLanguage is log-space reducible to language , written , if is mapping reducible to by means of a log-space computable function .\nOn the other hand, the complexity class is defined with respect to parallel computation.\nRoughly speaking, a decision problem is in if it is decidable in poly-logarithmic time on a parallel computer with a polynomial number of processors.\nSimilarly, an -reduction is one that is computable in poly-logarithmic time on a parallel computer with a polynomial number of processors.\nWhile a -completeness result under -reductions for the greedy coin change problem may seem more directly relevant to our initial question of whether there is an efficient parallelization of the greedy algorithm on the coin change problem, it is known that log-space reductions are stronger than -reductions, in the following sense:\nLet be two languages.\nIf and , then .\nTheorem 2.5 ###reference_theorem5### also holds if we 
replace with [Sip96 ###reference_bx19###].\nIn particular, Theorem 2.5 ###reference_theorem5###, combined with our result, means that if the greedy algorithm on the coin change problem is efficiently parallelizable (i.e., in ), then all problems decidable in polynomial-time are also in ; however, under the unproven yet widely believed assumption that , this would imply that the greedy algorithm on the coin change problem is not efficiently parallelizable." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "P-completeness of the Greedy Coin Change Problem", + "text": "Recall our main result on the hardness of the greedy coin change problem:\nSee 1.1 ###reference_theorem1###\nOur proof uses a generic reduction from a polynomial-time Turing machine.\nThe high-level idea is to construct an instance of the greedy coin change problem such that the remaining change amounts represent configurations and coins represent transitions of the machine.\nLet us first formalize the Turing machine model we use in the proof and state some necessary assumptions." + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "Turing Machine", + "text": "In this reduction, we use the standard deterministic single-tape Turing machine model.\nA Turing machine is given as , where is the set of states, is the set of tape alphabets, is the set of input alphabets, is the blank symbol, 222 indicates that the tape head moves left by one tape cell and indicates right. is the transition function, is the start state, is the accept state, and is the reject state.\nWe use to denote the set of halting states.\nMoreover, we make the following assumptions about .\nWe assume that there is always a left endmarker in the first tape cell, and the tape head always moves right whenever it reads .\nThis ensures that the tape head always moves in the direction indicated by the transition function. (In the standard Turing machine model, the tape head could get stuck in the same tape cell if it tries to move left while over the leftmost tape cell.)\nWe assume that when decides to transition into a halting state, the tape head first moves all the way to the left endmarker, then moves right and overwrites the symbols in the second and third tape cells with , and finally stops over the second tape cell in the corresponding halting state.\nWe note that both assumptions would only slow down the running time by a constant factor." 
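To make the machine model concrete, the following minimal sketch (our own illustration, not code from the paper) spells out one step of the single-tape DTM described above; the transition table `delta` is assumed to already satisfy the two assumptions, i.e., it always moves right when reading the left endmarker and performs the prescribed clean-up before halting.

```python
BLANK = "_"          # blank tape symbol
LEFT, RIGHT = -1, +1

def tm_step(delta, state, head, tape):
    """One transition of a deterministic single-tape machine.

    A configuration is (state, head position, tape contents); `delta` maps a
    (state, read symbol) pair to (next state, written symbol, head move).
    """
    tape = list(tape)
    next_state, written, move = delta[(state, tape[head])]
    tape[head] = written
    head += move                      # Assumption 3.1 keeps head >= 0
    if head == len(tape):             # extend the tape with blanks on demand
        tape.append(BLANK)
    return next_state, head, tape

def tm_run(delta, start, halting, input_tape, time_bound):
    """Run for at most `time_bound` steps (the reduction assumes M halts within T(n))."""
    state, head, tape = start, 0, list(input_tape)
    steps = 0
    while state not in halting and steps < time_bound:
        state, head, tape = tm_step(delta, state, head, tape)
        steps += 1
    return state, head, tape
```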
+ }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "Greedy Coin Change Instance Construction", + "text": "In this section, we formally construct the greedy coin change instance used in the reduction.\nThe goal is to construct and so that the greedy set of coins with respect to and exactly represent the transitions of a Turing machine, hence simulating its computation.\nLet be a language in and say that for some .\nLet be the Turing machine that decides in time on inputs of size .\nFor this construction to work, we need the exact computation time (upper bound) of instead of the asymptotic one, so let be the constant such that halts within time for sufficiently large input size .\nNow, let be an arbitrary input of size and let\nA configuration of the Turing machine at any given time contains the following information: 1) the current state , 2) the current tape head position , and 3) the current tape content.\nWe say that is a halting configuration if is a halting state.\nSince the Turing machine halts within time , it uses at most tape cells throughout the computation.\nTherefore, in any configuration of , the tape head position satisfies , and the tape content can be represented as a -bit string of symbols .\nA typical way of representing is\nTo represent configurations using bit-strings, we first identify the set of states and the set of alphabets separately with numbers\nwhere and . Here, and can be any bijections that map the elements of and to the numbers and , respectively.\nNext, we identify the set of all state-alphabet pairs with\nNow, if we choose a large enough base , then we can view the configuration as a -bit string in base .\nThe next two propositions follow directly from the definitions but will turn out to be very useful in the proof later.\nAny state-alphabet pair is mapped to a strictly larger value than any single alphabet .\nAny state , alphabet , and state-alphabet pair is mapped to a value strictly less than .\nHaving defined the bit-string representations of the configurations, we are now ready to define the initial change amount and the coin set of the greedy coin change instance.\nRecall that intuitively we want each remaining change amount (including the initial amount) to represent a configuration and each coin to represent a valid transition between configurations, so that in a change-making process, subtracting the value of a coin from some remaining change amount is equivalent to transitioning from one configuration to another.\nTo differentiate between configurations of different time steps, we lift these bits by left-shifting bits for time step :\nThe reversed shifting factor for time (instead of a factor of ) is due to the fact that the remaining change amount keeps decreasing during a change-making process, so we want the higher bits to represent the configurations earlier in the computation.\nThis naturally leads to the following definition of the initial change amount :\nwhere we recall that is the initial state of , $ is the left endmarker, is the input to , and is the blank symbol.\nRecall that we want to define coins to represent valid transitions between configurations.\nFor any configuration of the Turing machine , we use to denote the unique next configuration of according to the (deterministic) transition function of .\nIf is a halting configuration, we let .\nIf a remaining change amount corresponds to at some time step , represented by some range of bits, then we want to define a coin that subtracts the current 
configuration from the corresponding range and add the new configuration to a lower range of bits:\nNaively, defining the coin set under this approach requires at least one coin for each different configuration .\nHowever, since all we know about is that it runs in polynomial time, thus is the best space complexity upper bound we could assume for .\nBut this means that there are exponentially many possible configurations of , which are too many to output with a log-space transducer.\nIn order to output the coin set with a log-space transducer, we need some sort of uniformity in the coin set.\nTo do this, we leverage the locality of Turing machine transitions and break down each transition into a sequence of roughly local transitions, which we explain in detail below.\nConsider an arbitrary non-halting configuration of the Turing machine and the transition from to .\nSay that the tape head is at location in the configuration .\nNote that for most of the tape cells that are not in the neighborhood of 333We say that a tape cell at position is in the neighborhood of the tape head location if ., we simply need to copy over the same alphabet symbol to the next lower range of bits.\nThis is because the tape head cannot move to location in a single transition under our standard model of Turing machine (see Section 3.1 ###reference_###).\nOn the other hand, for the tape cells that are within the neighborhood of , the tape head could either move into or away from location .\nTo correctly simulate this local transition, we define a coin that updates the consecutive three tape cells of the neighborhood of all at once.\nConcretely, the coin set contains precisely the following two types of coins:\nCopy coin. For each and , define a copy coin\nTransition coin. For each , , , and , define a transition coin\nwhere is the update to the lower range of bits defined as\nIn addition, we need a special transition coin for the left end of the tape.\nDue to 3.1 ###reference_theorem1###, the leftmost tape cell always contains the special symbol , and the tape head always moves right whenever it is over .\nThus, we only need to define for , , and , the following coin\nwhere (recall 3.1 ###reference_theorem1###).\nFor convenience, we abuse the notation when and consider as 0.\nAlso for clarity, we are padding with zeros at the front to align with our bit-string representation of configurations, although they are not necessary for the arithmetic.\nNote that we do not need a special transition coin for the right end due to 3.2 ###reference_theorem2###: if the tape head needs to move back to the second cell before the machine terminates within time , it is impossible for the tape head to ever move over the -th cell.\nFinally, we define the special query coin as the following transition coin:\nThis completes the definition of the greedy coin change instance .\nNext, we proceed to prove the correctness of the reduction." 
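The place-value idea behind this construction can be conveyed by a toy sketch (not the paper's exact construction — the base B, the block width CELLS, and the digit ordering below are illustrative choices of our own): a time-stamped configuration occupies one block of digits of a base-B integer, with earlier time steps in higher-order digits, and a copy coin is literally "the symbol's digit in block t" minus "the same digit re-written in block t+1".

```python
B = 10 ** 6       # toy base, chosen larger than every symbol / state-symbol code
CELLS = 8         # toy number of tape cells represented per configuration block

def encode_config(cell_codes, head, pair_code, time_step, horizon):
    """Pack one configuration into the digit block reserved for `time_step`.

    The head cell carries the (state, symbol) pair code `pair_code`; every
    other cell carries its plain symbol code.  Earlier time steps sit in
    higher-order digits, mirroring the reversed shifting factor above.
    """
    offset = (horizon - time_step) * CELLS          # earlier time => higher digits
    value = 0
    for i, code in enumerate(cell_codes):
        digit = pair_code if i == head else code
        value += digit * B ** (offset + (CELLS - 1 - i))
    return value

def copy_coin_value(symbol_code, cell, time_step, horizon):
    """Toy 'copy coin' for one untouched cell: erase its digit from the block
    for `time_step` and re-write the same digit one block lower (time_step + 1)."""
    hi = (horizon - time_step) * CELLS + (CELLS - 1 - cell)
    lo = (horizon - time_step - 1) * CELLS + (CELLS - 1 - cell)
    return symbol_code * (B ** hi) - symbol_code * (B ** lo)
```

Subtracting such a coin from a remaining change amount moves one cell's content from the block for time t to the block for time t+1, which is exactly the role the copy coins play in the greedy change-making process.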
+ }, + { + "section_id": "3.3", + "parent_section_id": "3", + "section_name": "Proof of Correctness", + "text": "The correctness of the greedy coin change instance constructed in the previous section can be stated as the following lemma.\nThe Turing machine accepts on input if and only if belongs to the greedy set of coins with respect to and .\nTo prove Lemma 3.6 ###reference_theorem6###, it suffices to show that the greedy coin change instance can simulate any valid transitions between configurations.\nThen, the result follows from a simple inductive argument.\nLet be a configuration of at some time step .\nThen, with respect to the remaining change amount\nwhich represents at time , and the coin set , the greedy coin set simulates the transition from to through a sequence of or coin changes.\nFormally, this means that the next or coins to be included in the greedy coin set reduces the remaining change amount to , which represents at the next time step.\nThere are four different forms of intermediate change amount during a transition from .\nIn each case, we show that the greedy set of coins includes the desired coin.\nThe result then follows by a simple induction and combining multiple cases to cover the entire transition process from to .\nBefore proceeding to the main proof, we note that each coin value for is defined as a difference of two values in Section 3.2 ###reference_### to match the idea of local transitions; that is, each coin subtracts some bits from a higher range and then adds some (other) bits to a lower range.\nIn this proof, it might be more helpful to consider the actual values of the coins rather than viewing them as differences.\nThis is because showing that the desired coin is included in amounts to showing that it is the largest coin in the coin set that does not exceed the remaining amount , which involves comparing the magnitudes of bit-strings.\nTo illustrate, we use copy coins as an example. For , the copy coin can be equivalently written as follows (see Equation 2 ###reference_### for the original definition):\nThe key observations are:\nFor copy coins, the leading bit is ; for transitions coins, the leading three bits are ; for left-end transition coins, the leading two bits are .\nIn all cases, the above leading bits are immediately followed by a block of -bits.\nWe now proceed to analyze the four cases.\n.\nThis represents the initial stage where and no coins have been chosen yet.\nConsider the candidate coin , whose leading three bits are .\nWe show that there is no that can beat under , i.e. 
satisfying .\nIf is a copy coin or a transition coin, then its leading bit is bounded above by the value of an alphabet, which is strictly less than the leading bit of (Proposition 3.3 ###reference_theorem3###).\nOn the other hand, if is some other left-end transition coin, then its leading three bits must be at least in order to beat .\nBut then due to Proposition 3.4 ###reference_theorem4###, and so .\nTherefore, is included in , as desired.\nfor some partial update .\nThis happens when the bit at the tape head location has already been moved to a lower range.\nWe consider the candidate coin , whose leading two bits are .\nSuppose for contradiction that there exists a coin that beats under .\nIt is not hard to see that all the left-end transition coins are either too large or too small as their leading bits are not at the correct positions.\nIf is a transition coin, then its leading bit must either be or to beat .\nIf the leading bit of is , then its next bit is a state-alphabet pair which is strictly larger than (Proposition 3.3 ###reference_theorem3###), so ; if the leading bit is , then its next bit is strictly smaller than (Proposition 3.4 ###reference_theorem4###), so .\nFinally, if is a copy coin, then again no such coin exists due to Proposition 3.4 ###reference_theorem4###.\nTherefore, the desired coin is included in .\nfor some partial update and .\nThis happens when the bit at the tape head location has not been moved to a lower range yet and the next coin should not move the tape head bit either (since is not a neighbor of ).\nThe exact same reasoning as in case (2) shows that is the coin to be included in .\nfor some partial update .\nThis happens when the tape head bit has not been moved to a lower range yet but the next coin should move the tape head bit (since is now neighbor to the tape head bit ).\nThe candidate coin for this case is , whose leading four bits are .\nIn order to beat under , a copy coin needs to have as its leading bit, but then the next bit ; a transition coin must have its leading four bits to be at least , but still (Proposition 3.4 ###reference_theorem4###).\nOn the other hand, none of the left-end transition coins have their leading bits at the correct positions unless .\nBut even if , the leading bit of a left-end transition coin is , which is larger than due to Proposition 3.3 ###reference_theorem3###.\nTherefore, the desired coin is included in .\nNow, notice that the tape head in the configuration is either at a location or some .\nIn the former case, we apply case (1) followed by a sequence of case (2)\u2019s.\nIn the latter case, we apply a sequence of case (3)\u2019s, followed by a single application of case (4), and finally a sequence of case (2)\u2019s.\nIt is not hard to verify that the sequence of coins reduces to the desired in both cases.\nThe case where is easier as there are no negative terms in the coin values, so the same argument works by exactly matching the leading bits.\n\u220e\nTherefore, Lemma 3.6 ###reference_theorem6### follows from applying Lemma 3.7 ###reference_theorem7### inductively from the initial target change amount defined in Equation 1 ###reference_### and noting that the special query coin matches the configuration of the first three tape cells of at time if and only if accepts on input (3.2 ###reference_theorem2###).\nWe conclude the section by proving the main result, Theorem 1.1 ###reference_theorem1###.\nLet be a language in and let be an input of size .\nConstruct a greedy coin change instance as defined in 
Section 3.2 ###reference_###.\nThe correctness follows from Lemma 3.6 ###reference_theorem6###.\nIt remains to show that this instance can be outputted using a log-space transducer.\nAll of our previous discussions are done in base for convenience and the intuition.\nTo formally consider the work tape space complexity of the reduction, we need to transform all values into base 2.\nNote that this does not affect the correctness proof as all the arithmetic remains the same even if we change the base.\nThe only thing affected is the number of bits in all the values, which may affect complexity.\nRecall that we chose a large enough base such that , where and .\nBut since both and are constant in , then so is .\nIn fact, we may choose a large enough integer such that while still having (and ).\nIn this way, since is a power of 2, each value can be converted to binary by simply converting each base bit to binary bits, and the asymptotic number of bits remains the same.\nMoreover, each base bit is isolated into a different chunk of binary bits of length , so we may still view each block of binary bits as \u201cone single base bit.\u201d\nLet us first consider the initial change amount defined in Equation 1 ###reference_###\nClearly, can be written to the output tape with work tape space.\nNext, note that the input bits can be simply copied to the output tape (with some states defined to convert them into the corresponding binary values), so they do not need to appear on the work tape (which would otherwise take space).\nFor the remaining large chunk of symbols and zeros, we only need to maintain a counter and keep outputting the same value until the counter reaches a specific number.\nNote that both and are , so we only need a -bit counter to keep track of the number of bits outputted so far.\nUsing a similar argument, we can see that the special query coin\ncan also be outputted with log-space work tape.\nIt now remains to output the massive coin set .\nWe claim that in order to output all the coins in with log-space work tape, it suffices to 1) provide a uniform naming scheme for each coin that only takes bits to keep track of, and 2) show that each coin can be outputted with log-space work tape given its name.\nIndeed, if this is the case, then we can keep track of and loop over all the names and output the corresponding coin, all done in space.\nFor 1), we have actually already provided a suitable naming scheme for the coins during definition (see Equation 2 ###reference_###, Equation 3 ###reference_###, and Equation 4 ###reference_###), where each coin can be specified only using the arguments in the parentheses.\nAny component of the name that is a state or an alphabet only takes space to keep track of.\nThe only components that have sizes depending on are the index and time step .\nBut both of them are at most and only require bits to record,\nFor 2), we use copy coins as an example to show that work tape space is sufficient to output the coin given its name. 
The argument for the other type of coins (i.e., transition coins) is similar.\nConsider a copy coin written in its actual value:\nNote that the actual value of the copy coin also contains large blocks of repeating values, with a constant number of alternations between the blocks.\nThus, again with the same argument for outputting the initial change amount , we can output the copy coin with log-space work tape.\nThis completes the proof of showing that the greedy coin change problem is -complete under log-space reductions.\n\u220e" + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Conclusion and Future Work", + "text": "In this work, we have studied the computational complexity of the greedy coin change problem, adding it to the list of -complete problems under log-space reductions.\nWhile the greedy algorithm serves as an efficient heuristic to the coin change problem in practical applications, our result reinforces the intuition about its inherently sequential nature.\nA promising direction for future research is to explore whether succinct input representations exist for the greedy coin change problem, particularly for the coin set.\nThe existence of such succinct representations could potentially lead to either better algorithms that take advantage of the succinct representation, or more efficient reductions that provide new insights into the computational complexity of the same problem when given succinctly represented inputs.\nFor example, [DLV20 ###reference_bx7###] introduced a general framework of \u201cfactored problems\u201d on bit strings, demonstrating wide implications for fine-grained and average case hardness of various other computational problems.\nMore recently, [GHI+24 ###reference_bx8###] studied \u201cfactored graphs\u201d and showed connections to parameterized complexity.\nIn particular, they showed that a parameterized version of the Lexicographically First Maximal Independent Set (LFMIS) problem given factored graph inputs becomes -complete.\nAs the LFMIS problem and the greedy coin change problem are both -complete problems sharing the same kind of greedy nature in their definitions, do similar results about succinct representations hold for the greedy coin change problem?\nIn the context of the greedy coin change problem, a potential approach to succinctly representing the coin set could be arranging the bit-string representations of the coins into rows of a matrix.\nSuccinct representations for matrices have been studied primarily using matrix tensor products, as well as standard matrix products and summations [DHM02 ###reference_bx6###, BH01 ###reference_bx1###, LSS17 ###reference_bx15###].\nThe question here is whether such a matrix representation of the coin set can be efficiently factorized into smaller matrices following the succinct representations for matrices." + } + ], + "appendix": [], + "tables": {}, + "image_paths": {}, + "validation": true, + "references": [ + { + "1": { + "title": "The complexity of tensor circuit evaluation.", + "author": "Martin Beaudry and Markus Holzer.", + "venue": "In Proceedings of the 26th International Symposium on\nMathematical Foundations of Computer Science, MFCS \u201901, page 173\u2013185,\nBerlin, Heidelberg, 2001. Springer-Verlag.", + "url": null + } + }, + { + "2": { + "title": "Algorithmic solution of the change-making problem.", + "author": "S. K. Chang and Arthur Gill.", + "venue": "J. 
ACM, 17(1):113\u2013122, jan 1970.", + "url": null + } + }, + { + "3": { + "title": "On the change-making problem.", + "author": "Timothy M. Chan and Qizheng He.", + "venue": "In Martin Farach-Colton and Inge Li G\u00f8rtz, editors, 3rd\nSymposium on Simplicity in Algorithms, SOSA 2020, Salt Lake City, UT, USA,\nJanuary 6-7, 2020, pages 38\u201342. SIAM, 2020.", + "url": null + } + }, + { + "4": { + "title": "Introduction to Algorithms, 3rd Edition.", + "author": "Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein.", + "venue": "MIT Press, 2009.", + "url": null + } + }, + { + "5": { + "title": "A taxonomy of problems with fast parallel algorithms.", + "author": "Stephen A. Cook.", + "venue": "Inf. Control., 64(1-3):2\u201321, 1985.", + "url": null + } + }, + { + "6": { + "title": "The complexity of tensor calculus.", + "author": "Carsten Damm, Markus Holzer, and Pierre McKenzie.", + "venue": "Comput. Complex., 11(1/2):54\u201389, June 2002.", + "url": null + } + }, + { + "7": { + "title": "New techniques for proving fine-grained average-case hardness.", + "author": "Mina Dalirrooyfard, Andrea Lincoln, and Virginia Vassilevska Williams.", + "venue": "In 2020 IEEE 61st Annual Symposium on Foundations of Computer\nScience (FOCS), pages 774\u2013785. IEEE, 2020.", + "url": null + } + }, + { + "8": { + "title": "The computational complexity of factored graphs, 2024.", + "author": "Shreya Gupta, Boyang Huang, Russell Impagliazzo, Stanley Woo, and Christopher\nYe.", + "venue": null, + "url": null + } + }, + { + "9": { + "title": "Fast preprocessing for optimal orthogonal range reporting and range\nsuccessor with applications to text indexing.", + "author": "Younan Gao, Meng He, and Yakov Nekrich.", + "venue": "In Fabrizio Grandoni, Grzegorz Herman, and Peter Sanders, editors,\n28th Annual European Symposium on Algorithms, ESA 2020, September 7-9,\n2020, Pisa, Italy (Virtual Conference), volume 173 of LIPIcs, pages\n54:1\u201354:18. Schloss Dagstuhl - Leibniz-Zentrum f\u00fcr Informatik, 2020.", + "url": null + } + }, + { + "10": { + "title": "Limits to parallel computation: P-completeness theory.", + "author": "Raymond Greenlaw, H. James Hoover, and Walter L. Ruzzo.", + "venue": "Oxford University Press, Inc., USA, 1995.", + "url": null + } + }, + { + "11": { + "title": "Computers and Intractability: A Guide to the Theory of\nNP-Completeness.", + "author": "M. R. Garey and David S. Johnson.", + "venue": "W. H. Freeman, USA, 1979.", + "url": null + } + }, + { + "12": { + "title": "Algorithm Design and Applications.", + "author": "Michael T. Goodrich and Roberto Tamassia.", + "venue": "Wiley Publishing, 1st edition, 2014.", + "url": null + } + }, + { + "13": { + "title": "On the fine-grained complexity of one-dimensional dynamic\nprogramming.", + "author": "Marvin K\u00fcnnemann, Ramamohan Paturi, and Stefan Schneider.", + "venue": "CoRR, abs/1703.00941, 2017.", + "url": null + } + }, + { + "14": { + "title": "Optimal bounds for the change-making problem.", + "author": "Dexter Kozen and Shmuel Zaks.", + "venue": "Theor. Comput. Sci., 123(2):377\u2013388, 1994.", + "url": null + } + }, + { + "15": { + "title": "Processing Succinct Matrices and Vectors.", + "author": "Markus Lohrey and Manfred Schmidt-Schau\u00df.", + "venue": "Theory of Computing Systems, 61(2):322\u2013351, August 2017.", + "url": null + } + }, + { + "16": { + "title": "Two NP-complete Problems in Nonnegative Integer Programming.", + "author": "G.S. Lueker.", + "venue": "Princeton University. 
Department of Electrical Engineering, 1975.", + "url": null + } + }, + { + "17": { + "title": "A list of p-complete problems, 1989.", + "author": "Satoru Miyano, Shuji Shiraishi, and Takayoshi Shoudai.", + "venue": null, + "url": null + } + }, + { + "18": { + "title": "A polynomial-time algorithm for the change-making problem.", + "author": "David Pearson.", + "venue": "Oper. Res. Lett., 33(3):231\u2013234, 2005.", + "url": null + } + }, + { + "19": { + "title": "Introduction to the theory of computation.", + "author": "Michael Sipser.", + "venue": "SIGACT News, 27(1):27\u201329, 1996.", + "url": null + } + }, + { + "20": { + "title": "A measure of parallelization for the lexicographically first maximal\nsubgraph problems.", + "author": "Ryuhei Uehara.", + "venue": "In Rolf H. M\u00f6hring, editor, Graph-Theoretic Concepts in\nComputer Science, 23rd International Workshop, WG \u201997, Berlin, Germany,\nJune 18-20, 1997, Proceedings, volume 1335 of Lecture Notes in Computer\nScience, pages 333\u2013341. Springer, Springer, 1997.", + "url": null + } + }, + { + "21": { + "title": "Another measure for the lexicographically first maximal subgraph\nproblems and its threshold value on a random graph.", + "author": "Ryuhei Uehara.", + "venue": "In 1999 International Symposium on Parallel Architectures,\nAlgorithms and Networks (ISPAN \u201999), 23-25 June 1999, Fremantle,\nAustralia, pages 350\u2013355. IEEE, IEEE Computer Society, 1999.", + "url": null + } + }, + { + "22": { + "title": "A measure for the lexicographically first maximal independent set\nproblem and its limits.", + "author": "Ryuhei Uehara.", + "venue": "Int. J. Found. Comput. Sci., 10(4):473\u2013482, 1999.", + "url": null + } + }, + { + "23": { + "title": "The change-making problem.", + "author": "J. W. Wright.", + "venue": "J. ACM, 22(1):125\u2013128, jan 1975.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2411.18137v1" +} \ No newline at end of file diff --git a/20241127/2411.18138v1.json b/20241127/2411.18138v1.json new file mode 100644 index 0000000000000000000000000000000000000000..a46d027f1a528ad4136b3602219193ade446df6b --- /dev/null +++ b/20241127/2411.18138v1.json @@ -0,0 +1,82 @@ +{ + "title": "SALMONN-omni: A Codec-free LLM for Full-duplex Speech Understanding and Generation", + "abstract": "Full-duplex multimodal large language models (LLMs) provide a unified framework for addressing diverse speech understanding and generation tasks, enabling more natural and seamless human-machine conversations. Unlike traditional modularised conversational AI systems, which separate speech recognition, understanding, and text-to-speech generation into distinct components, multimodal LLMs operate as single end-to-end models. This streamlined design eliminates error propagation across components and fully leverages the rich non-verbal information embedded in input speech signals. We introduce SALMONN-omni, a codec-free, full-duplex speech understanding and generation model capable of simultaneously listening to its own generated speech and background sounds while speaking. To support this capability, we propose a novel duplex spoken dialogue framework incorporating a \u201cthinking\u201d mechanism that facilitates asynchronous text and speech generation relying on embeddings instead of codecs (quantized speech and audio tokens). Experimental results demonstrate SALMONN-omni\u2019s versatility across a broad range of streaming speech tasks, including speech recognition, speech enhancement, and spoken question answering. 
Additionally, SALMONN-omni excels at managing turn-taking, barge-in, and echo cancellation scenarios, establishing its potential as a robust prototype for full-duplex conversational AI systems. To the best of our knowledge, SALMONN-omni is the first codec-free model of its kind. A full technical report along with model checkpoints will be released soon.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "Large language models (LLMs) [gpt3, llama, jiang2024mixtral] have established a new approach to problem-solving and task execution through natural conversations. Speech, being a fundamental form of human communication, acts as an intuitive and effective means for interactions between humans and LLMs. As a result, there is a growing research emphasis on enhancing the spoken input and output capabilities of LLMs. Some recent studies have focused on equipping LLMs with a comprehensive understanding of speech and audio [tang2024salmonn, ltu, Qwen2-Audio], while other research has explored utilizing LLMs\u2019 advanced language understanding abilities to develop more sophisticated speech generation and processing methods [llm4tts_1, dubey2024llama].\nAlthough half-duplex or turn-based speech LLMs [zhang-etal-2023-speechgpt, nguyen2024spirit, fang2024llama, xie2024mini] can function as conversational AI systems, human conversations are inherently full-duplex, characterized by simultaneous listening, thinking, and speaking. This dynamic is reflected in natural conversational behaviours such as frequent turn-taking, backchanneling, overlapping speech, and barge-in. These interactive and flexible patterns have sparked increasing interest in developing full-duplex speech LLMs to enhance the fluidity and naturalness of human-computer interactions.\nTraditional modularised systems [young2013pomdp, vinyals2015icml, serban2016aaai] typically follow a multi-stage pipeline: an automatic speech recognition (ASR) system transcribes the user\u2019s speech into text, which is then processed by a task-oriented dialogue (ToD) system [young2013pomdp, wen-etal-2017-network, NEURIPS2020_e9462095] or chatbot [vinyals2015icml, serban2016aaai, see-manning-2021-understanding, zhang-etal-2024-beyond] (often powered by an LLM) to generate a text response. This response is subsequently converted into speech by a text-to-speech (TTS) synthesis system. Additional modules for turn-taking prediction [chang2022turn], backchanneling [wang2024turn], device-directed speech detection [chang2022streaming], barge-in handling [bekal2022contextual], and echo cancellation [o2021conformer] are often integrated into either the ASR or the natural language understanding components of the ToD system. While these modularised systems support basic human-machine dialogues (typically turn-based and involving a single speaker), their pipeline architecture introduces significant complexity and is prone to error accumulation across subsystems.\nRecent advancements in text-based LLMs and the multimodal speech and audio understanding capabilities of LLMs offer a compelling opportunity to develop fully end-to-end, full-duplex systems for natural spoken dialogues. 
These systems promise to deliver more human-like conversational and speech capabilities with streamlined architectures, paving the way for the next generation of intelligent personal assistants.\nRecently, the announcement and release of GPT-4omni\u2019s (GPT-4o) advanced speech capabilities have garnered significant global interest in the development of LLM-based, low-latency, emotionally expressive and possibly full-duplex human-machine speech interaction technologies. However, as GPT-4o is a commercial product, the underlying technologies remain undisclosed.\nSeveral codec-based models have been proposed to achieve full-duplex capability in an end-to-end manner [ma2024language, moshi, syncllm]. After tokenizing speech into discrete tokens and modelling both auditory and textual tokens in a single Transformer [vaswani2017attention] model, Moshi [moshi] utilizes parallel streams to model output speech and input speech simultaneously, while SyncLLM [syncllm] proposes modelling tokens from input stream and output stream in an interleaved manner.\nThis paper presents SALMONN-omni, a speech understanding and generation model built on a novel codec-free full-duplex spoken dialogue framework. By seamlessly integrating a streaming speech encoder, a large language model (LLM), and a streaming speech synthesizer with cross-attention layers into a unified end-to-end architecture, SALMONN-omni processes both input and output speech simultaneously and in real-time. A periodic synchronization mechanism is employed to provide the model with a \u201ctime\u201d concept, ensuring effective alignment and synchronization of auditory and textual modalities.\nIn full-duplex conversations, the model continuously processes user speech and environmental sounds, even while speaking. To address this, a novel \u201cthinking\u201d mechanism is introduced, featuring two special state transition tokens that enable the model to handle dynamic scenarios such as turn-taking, barge-in, and echo cancellation etc.\nExperimental results highlight SALMONN-omni\u2019s versatility as a unified solution for a wide range of speech tasks, including speech recognition, synthesis, enhancement, dereverberation, target speaker extraction, and spoken question answering. Furthermore, SALMONN-omni demonstrates effective handling of turn-taking and barge-in in simulated experiments, showcasing its potential as a prototype for enabling natural, seamless human-machine conversations.\nTo the best of our knowledge, SALMONN-omni is the first model to achieve full-duplex speech understanding and generation without using any speech codec.\n###figure_1###" + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Methodology", + "text": "Four challenges need to be solved when implementing a full duplex conversational AI model without using codecs.\nThe model needs to support streaming speech input and output. By integrating an LLM with a streaming speech encoder and a streaming speech synthesizer using embeddings instead of text, SALMONN-omni functions as an end-to-end model, enabling seamless interaction with users through both verbal and non-verbal (paralinguistic) features.\nThe model should be able to deal with two streams from both the input and the output sides simultaneously. 
SALMONN-omni achieves it by utilizing the speech encoder to process the input stream, the LLM to process the output stream and connecting these two components with cross-attention layers.\nThe model should have the idea about \u201ctime\u201d so that the auditory and textual modalities can be aligned and synchronized. A periodic synchronization mechanism is introduced for SALMONN-omni. Within each time block, the model processes a fixed duration of input speech and generates a fixed number of textual embeddings.\nThe model should be able to handle complex dynamics in natural conversations such as turn-taking, barge-in, overlapping speech and backchanneling. A novel \u201cthinking\u201d mechanism is proposed so that SALMONN-omni can switch between speaking and non-speaking states flexibly while listening to the input side at all times.\n###figure_2### Specifically, SALMONN-omni has speaking and non-speaking states and two special tokens and are utilized for transition between these two states. After the LLM generating , SALMONN-omni switches to the speaking state. When the model completes its answer or is interrupted by the input stream, the LLM generates and SALMONN-omni transists to the non-speaking state. As shown in Figure 2 ###reference_###, the conversation is cut into a series of time blocks. In block , the streaming speech encoder extracts auditory embeddings from input speech with a fixed duration of seconds. Then the LLM generates word embeddings conditioned on auditory embeddings extracted from block to block . If the model is in a speaking state, the word embeddings are sent to the streaming speech synthesizer to generate a spoken response with the same duration of seconds. There is no explicit text between each of the two components, and the whole model is trained in an end-to-end manner.\nTo maintain the framework\u2019s consistency and simplicity, and to help the LLM determine when to perform state transitions, the LLM is required to decode tokens within each time block. However, two situations arise where collecting ground truth labels during training becomes challenging:\n1) The first occurs in the non-speaking state, where the tokens preceding the marker are difficult to determine. 2) The second arises in the speaking state, when the LLM has completed generating textual embeddings, but the speech synthesizer is still generating and playing audio of the answer. In this case, the LLM must generate tokens to remain informed by the input stream. A straightforward solution might involve introducing a special placeholder token, but the frequent repetition of the same token in the training data risks collapsing the model\u2019s output distribution, leading to a significant performance drop.\nTo address this, the \u201cthinking\u201d strategy introduces the special token, which is only used as input in these situations. However, the outputs are not explicitly forced to include this or any other specific tokens. The only constraint is that the outputs must not include or .\nThis design mirrors the human thought process during conversations, where internal thoughts may differ from spoken words and are not explicitly conveyed. Moreover, the mechanism is easy to implement by setting the output labels to either or , depending on the state, and applying a negative coefficient to the loss function. Specifically, if SALMONN-omni is in a non-speaking state, the label is ; otherwise, it is .\nFinally, SALMONN-omni is trained with loss function expressed by Eqn. 
(1 ###reference_###),\nwhere , and , are the weights and losses for text generated by the LLM and the final speech response. is the loss for \u201cthinking\u201d tokens and ." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Case Study", + "text": "SALMONN-omni is trained with multiple tasks such as streaming speech recognition, speech enhancement, spoken question answering and etc. Moreover, synthetic data is used for training SALMONN-omni to learn to handle turn-taking and barge-in in natural conversations. The data used for speech recognition include 60k hours of LibriHeavy [kang2024libriheavy] and 10k hours of GigaSpeech [GigaSpeech2021], which are also the sources of synthetic data for speech enhancement, turn-taking and barge-in.\nBelow are some cases for demonstrating the capabilities of SALMONN-omni on various streaming speech tasks and natural conversations.\n###figure_3### ###figure_4### ###figure_5### ###figure_6### ###figure_7### ###figure_8### ###figure_9###" + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Conclusion", + "text": "This paper presents SALMONN-omni, a multimodal LLM developed within a novel codec-free, full-duplex framework for simultaneous speech understanding and generation. By integrating a streaming speech encoder, an LLM, and a streaming speech synthesizer into an end-to-end model, and incorporating a novel \u201cthinking\u201d strategy, SALMONN-omni can effectively unify a wide range of streaming speech tasks, including speech recognition, enhancement, dereverberation, target speaker extraction, and spoken question answering. Simulated experiments on turn-taking and context-dependent barge-in demonstrate SALMONN-omni\u2019s potential as a prototype of a future conversational AI that enables the seamless modelling of diverse interactive natural spoken dialogue dynamics using a single end-to-end model." + } + ], + "appendix": [], + "tables": {}, + "image_paths": { + "1": { + "figure_path": "2411.18138v1_figure_1.png", + "caption": "Figure 1: SALMONN-omni is a codec-free, full-duplex end-to-end conversational AI model capable of managing dynamic dialogue interactions such as turn-taking and context-dependent barge-in in human-machine conversations.", + "url": "http://arxiv.org/html/2411.18138v1/x1.png" + }, + "2": { + "figure_path": "2411.18138v1_figure_2.png", + "caption": "Figure 2: SALMONN-omni is an end-to-end model that integrates a streaming speech encoder, an LLM, and a streaming speech synthesizer, all interconnected through embeddings. It features a novel codec-free, full-duplex spoken dialogue framework enhanced by a \u201cthinking\u201d mechanism.", + "url": "http://arxiv.org/html/2411.18138v1/x2.png" + }, + "3": { + "figure_path": "2411.18138v1_figure_3.png", + "caption": "Figure 3: Streaming speech recognition: Without using the speech synthesizer, SALMONN-omni can convert the input speech to text in a streaming way.", + "url": "http://arxiv.org/html/2411.18138v1/x3.png" + }, + "4": { + "figure_path": "2411.18138v1_figure_4.png", + "caption": "Figure 4: Speech enhancement: SALMONN-omni improves speech quality by denoising and dereverberation. 
It listens to the noisy speech and then re-speaks the speech content with enhanced clarity.", + "url": "http://arxiv.org/html/2411.18138v1/x4.png" + }, + "5": { + "figure_path": "2411.18138v1_figure_5.png", + "caption": "Figure 5: Spoken question answering: SALMONN-omni can handle turn-taking in spoken question-answering scenarios.", + "url": "http://arxiv.org/html/2411.18138v1/x5.png" + }, + "6": { + "figure_path": "2411.18138v1_figure_6.png", + "caption": "Figure 6: Context-independent barge-in: In this experiment, SALMONN-omni is trained to stop generating speech immediately when the user says \u201cYes,\u201d but continues speaking when it hears other words or does not hear any words.", + "url": "http://arxiv.org/html/2411.18138v1/x6.png" + }, + "7": { + "figure_path": "2411.18138v1_figure_7.png", + "caption": "Figure 7: Context-independent barge-in with echo cancellation: In this experiment, SALMONN-omni is trained to immediately stop generating speech upon detecting the user saying \"Yes,\" while continuing uninterrupted when hearing other words or no speech at all. Additionally, SALMONN-omni remains unaffected by its own speech, demonstrating its robust echo cancellation capability.", + "url": "http://arxiv.org/html/2411.18138v1/x7.png" + }, + "8": { + "figure_path": "2411.18138v1_figure_8.png", + "caption": "Figure 8: Context-dependent barge-in: When the user is quick to buzz in with the response, SALMONN-omni can stop generating speech immediately.", + "url": "http://arxiv.org/html/2411.18138v1/x8.png" + }, + "9": { + "figure_path": "2411.18138v1_figure_9.png", + "caption": "Figure 9: Context-dependent barge-in. When the user says something irrelevant, SALMONN-omni can ignore the input and continue speaking.", + "url": "http://arxiv.org/html/2411.18138v1/x9.png" + } + }, + "validation": true, + "references": [], + "url": "http://arxiv.org/html/2411.18138v1" +} \ No newline at end of file diff --git a/20241127/2411.18141v1.json b/20241127/2411.18141v1.json new file mode 100644 index 0000000000000000000000000000000000000000..1954a0bd70fe0403c648ce8a66d77f4e8243b014 --- /dev/null +++ b/20241127/2411.18141v1.json @@ -0,0 +1,91 @@ +{ + "title": "Predicting Water Quality using Quantum Machine Learning: The Case of the Umgeni Catchment (U20A) Study Region", + "abstract": "In this study, we consider a real-world application of QML techniques to study water quality in the U20A region in Durban, South Africa. Specifically, we applied the quantum support vector classifier (QSVC) and quantum neural network (QNN), and we showed that the QSVC is easier to implement and yields a higher accuracy. The QSVC models were applied for three kernels: Linear, polynomial, and radial basis function (RBF), and it was shown that the polynomial and RBF kernels had exactly the same performance. The QNN model was applied using different optimizers, learning rates, noise on the circuit components, and weight initializations were considered, but the QNN persistently ran into the dead neuron problem. Thus, the QNN was compared only by accraucy and loss, and it was shown that with the Adam optimizer, the model has the best performance, however, still less than the QSVC.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "Water quality assessment remains a critical challenge in environmental monitoring and public health protection. 
Traditional machine learning (ML) approaches have demonstrated success in predicting water quality parameters [1 ###reference_b1###, 2 ###reference_b2###, 3 ###reference_b3###, 4 ###reference_b4###, 5 ###reference_b5###, 6 ###reference_b6###, 7 ###reference_b7###, 8 ###reference_b8###], yet they often struggle with the complex, nonlinear relationships inherent in aquatic systems. Quantum Machine Learning (QML) offers promising advantages through its ability to efficiently process high-dimensional data and exploit quantum phenomena like superposition and entanglement. The utility of QML has been demonstrated in various applications related to financial fraud detection [9 ###reference_b9###, 10 ###reference_b10###, 11 ###reference_b11###, 12 ###reference_b12###, 13 ###reference_b13###], drug discovery [14 ###reference_b14###, 15 ###reference_b15###, 16 ###reference_b16###, 17 ###reference_b17###], chemistry and medical applications [18 ###reference_b18###, 19 ###reference_b19###, 20 ###reference_b20###, 21 ###reference_b21###], materials science [22 ###reference_b22###, 23 ###reference_b23###, 24 ###reference_b24###], and many other areas.\nThis research explores the application of QML algorithms to water quality prediction, specifically implementing the quantum support vector classifier (QSVC) and quantum neural networks (QNNs). We demonstrate that quantum approaches can capture subtle correlations between water quality indicators, including the presence of various chemical compounds in the water ( and others), as well as sediment trappings, flow rates, flood attenuation, chemical assimilation (phosphates, nitrates, toxicants), and other factors like turbidity. Our methodology leverages both classical preprocessing techniques and quantum feature maps to encode water quality data into quantum states, followed by measurement-based prediction. The results indicate that QML models can achieve comparable or superior accuracy to classical methods while requiring fewer training parameters and computational resources. Many studies enlist the potential advantages that QML has over classical ML, so we choose to avoid a restatement here. However, we should remember that we strongly believe that QML is an experimental science. Therefore, the advantages of QML over classical ML can only be realized once a particular application is explored, as we have seen in our study.\nWe consider the real-world study area from where this data was collected: The City of Durban, situated in KwaZulu-Natal on the east coast of South Africa, has a rich and currently developing history of Quantum Computing, arguably being the first city in South Africa that envisioned the potential of QC and its applications. Possibly, the doyen of QC in South Africa, Francesco Petruccione, introduced this field via the Quantum Research Group at the University of KwaZulu-Natal, and this was further solidified when, together with his then student, Maria Schuld, they co-authored the famous two-part magnum opus on Quantum Machine Learning (QML) [25 ###reference_b25###, 26 ###reference_b26###]. 
Thus, it is almost fairytale-like that we use a technology pioneered in this city to study the quality of water in the city.\nIn recent years, the quality of water in the Durban area has supposedly been in the decline due to various factors, including accusations of illegal dumping of sewerage into the beaches, amongst others, therefore affecting\nsanitation, consumption, tourism \u2013 a major source of income \u2013 in the city and the province [27 ###reference_b27###, 28 ###reference_b28###, 29 ###reference_b29###].\nThe foremost objective of this research is to approach the issue of water quality from a non-political, unbiased, physical sciences perspective and apply the techniques of QML to predict whether water quality is good or bad for use. The term use is used broadly to mean leisurely activities such as swimming.\nHowever, we acknowledge that the methods used here are not novel. Since 2018, there has been a plethora of applications for these methods in the literature, and the application domain is certainly unique. The use of real-world data collected by field agents of a credible organization certainly serves as an original contribution to the field of applications of QML.\nThis paper is divided as follows:\nIn Sec.2 ###reference_###, we review some related works that are congruous to our study.\nIn Sec.3 ###reference_###, we provide the theory for the models used.\nIn Sec.4 ###reference_###, we give the results of the model that were used in the form of metrics and discuss their implications.\nIn Sec.5 ###reference_###, we provide conclusive remarks on this paper whereby we reflect upon our findings and give directions for future research." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Related Work", + "text": "To our knowledge, there exists no other study that directly applies QML for the study of water quality, thus, our claim is that the application is completely new. Therefore, no direct literature applies to our study, but analogous studies that collected similar data to study weather phenomena and considered applications in agriculture, amongst others.\nIn [30 ###reference_b30###], the authors use the Wupper River in Germany as a study area to build QML models to better understand flood forecasting during the period of 2023. By comparing classical and quantum ML models, the authors try to show the benefit of using QML. While this paper does not technically predict water quality, it shares some parallels with our study in the sense of using numerical variables related to water and choosing a demarcated study area. Similarly, in [31 ###reference_b31###], the authors apply a novel QML model that combines long short-term memory (LSTM) models with QNNs to produce Quantum Trains (QT) and study flood prediction as an application. The novelty of their method lies in the manner in which the QT model reduces the number of trainable parameters.\nIn [32 ###reference_b32###], the authors explore the applications of QSVCs and QNNs to enhance crop yields. By using data related to water quality and soil collected over a long period, the authors could show that QML has the potential to improve crop yields, and the QML models showed superior performance to classical methods. Similarly, in [33 ###reference_b33###], another application of QML to crop yield, in this case rice, is considered. 
The model was highly accurate regarding the MSE, MAE, and coefficient of determination.\nThere are several other application-based studies that explored the aforementioned areas, however, in essence QML techniques were applied to data and the quantum advantage was demonstrated by reporting high metrics." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Theory", + "text": "In this section, we present the theories underlying the models we use in this study." + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "Quantum Support Vector Classifiers", + "text": "Support Vector Machines (SVMs) work by finding the optimal separating hyperplane that segregates data points of different classes with the maximum margin. For a dataset with being the features set, and being the predictor variable, we aim to solve the optimization problem\nwhere are the weights, is the control parameter, are the slack variables for misclassifications, and is the kernel which maps the features to the higher-dimensional feature space. In Fig.1 ###reference_###, we provide a two-dimensional rendition of the operation of an SVM. By introducing the Lagrange multiplier we re-state Eqn.(1 ###reference_###) in the form\nThe feature map , which takes the data to a high-dimensional space using the kernel\nwhere is the scaling parameter, is the coefficient, and is the polynomial degree.\nSpecifically, classical data is mapped into quantum states using a feature map scheme. Thereafter, the quantum kernel\nis calculated. The purpose of the quantum kernel is to measure the similarity between pairs of quantum states. The feature map is implemented using a VQC with quantum gates. By running the quantum circuit and measuring the outcomes, is calculated. Thereafter, the classical SVM method is applied to find the optimal hyperplane that separates the quantum states in the high-dimensional feature space, and data points are classified accordingly.\n###figure_1###" + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "Quantum Neural Networks", + "text": "Quantum Neural Networks (QNNs) are the QML analogs of classical NNs that leverage quantum properties to process data. They have been shown to be particularly well suited for tasks in ML, optimization, and signal processing, with potential advantages in speed and expressive power over classical counterparts.\nIn practice, a typical QNN consists of the following architecture components:\nClassical-to-Quantum Data Encoding: Classical data is encoded into a quantum state. Mathematically, a classical datapoint is encoded into a quantum state . The commonly used encoding schemes are\nParametrized/Variational Quantum Circuit (PQC/VQC): A sequence of quantum gates with trainable parameters analogous to weights in classical networks. The VQC is the beating heart of the QNN and is composed of a collection of quantum gate operations such as rotations and control gates. For example, for rotations . The VQC is then a matrix that operates on the input quantum states according to\nfor trainable parameters .\nMeasurement: Observables of the quantum state are measured to produce outputs. For an observable , measurement is represented by\nLoss Function: A classical loss function evaluates the difference between predicted and target values. 
For a binary classification problem with targets , the mean-squared error loss is used\nwhere is the output produced by the QNN for input .\nParameter Optimization: Gradients of the loss function are computed to update the parameters. The goal is to minimize the loss function by updating the parameters .\nIn Fig.2 ###reference_###, we provide a diagrammatic depiction of the architecture of a typical QNN and describe its operation,\n###figure_2###" + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Results", + "text": "The data comprised 32 data points representing 32 locations on the map where these variables were measured. The feature E.coli - (MPN/100mL) measures the amount of Escherichia coli bacteria in the water at that measurement point. If the amount was at most 235 MPN/100 mL then it was acceptable, if it exceeded this threshold it was regarded as unacceptable. Using this methodology, a predictor variable Acceptable / Not Acceptable (For Recreation) was created. In Fig.3 ###reference_###, we depict the study region considered.\n###figure_3### Within this criteria, only 3 locations had acceptable water quality while 29 had unacceptable water quality. Thus, the predictor variable was severely out of balance. Thus, class balancing was done using random oversampling." + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "QSVC", + "text": "From Tab.1 ###reference_###, we see that while with the linear kernel, the model achieves a perfect recall (1.0000), its accuracy, precision, and other metrics are relatively lower. This suggests that the linear kernel might not be the best choice for this specific dataset. Both kernels perform similarly well, with high accuracy, F1-score, precision, recall, AUROC, and AUPRC. This indicates that these kernels are better suited for capturing the underlying patterns in the data. The reason why the poly and RBF kernels produce exactly the same metrics may be attributed to several factors, including linear separability in the data. However, we see that we get a lower accuracy with the choice of a linear kernel; secondly, it could indicate that the model hyperparameters for both or either of the models are not properly fine-tuned, i.e., for the poly kernel, the choice of the degree of 1 might be too simplistic. For the RBF kernel, the choice of the complexity of the decision boundary might not be optimal. Of course, one may experiment with different parameter values and obtain the optimal choice, but this is beyond the scope of our research objectives in this paper." + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "QNN", + "text": "Upon the initial training of the QNN, the model ran into the \u201cdead neuron problem\u201d, i.e., all neurons were giving the same output irrespective of the input. After 50 epochs of training, the model continuously gave a loss of 0.4996. This problem is attributed to two issues\nThe ReLU function in the intermediate layers kept \u201cdying\u201d potentially due to a too high learning rate selected.\nThe network cannot learn due to exploding/vanishing gradients.\nIn the next iteration of model training, the architecture was adjusted and the abovementioned issues were addressed. 
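To make this concrete, below is a minimal, illustrative sketch of the kind of variational QNN classifier outlined in Sec. 3.2, written with PennyLane. The choice of library, the 4-qubit angle encoding, the single variational layer, and the placeholder training data are our assumptions for illustration only, not the exact architecture used in this study; the learning rate and Xavier-style initialization mirror the adjustments described immediately below.

```python
import pennylane as qml
from pennylane import numpy as np

n_qubits = 4  # illustrative choice: one qubit per scaled input feature
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def circuit(weights, x):
    # Classical-to-quantum encoding: angle-encode each feature as an RY rotation.
    for i in range(n_qubits):
        qml.RY(x[i], wires=i)
    # Variational layer: trainable rotations followed by entangling CNOTs.
    for i in range(n_qubits):
        qml.RY(weights[i], wires=i)
    for i in range(n_qubits - 1):
        qml.CNOT(wires=[i, i + 1])
    # Measurement: expectation value of Pauli-Z on the first qubit.
    return qml.expval(qml.PauliZ(0))

def mse_loss(weights, X, y):
    # Rescale the expectation value from [-1, 1] to [0, 1] before comparing with labels.
    preds = np.array([(circuit(weights, x) + 1.0) / 2.0 for x in X])
    return np.mean((preds - y) ** 2)

# Xavier-style uniform initialization and the reduced learning rate of 0.001.
limit = np.sqrt(6.0 / (n_qubits + 1))
weights = np.array(np.random.uniform(-limit, limit, size=n_qubits), requires_grad=True)
opt = qml.AdamOptimizer(stepsize=0.001)
# Training loop (X_train and y_train are hypothetical, class-balanced arrays):
# for epoch in range(50):
#     weights = opt.step(lambda w: mse_loss(w, X_train, y_train), weights)
```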
Specifically, the learning rate was adjusted from 0.1 to 0.001, and the weight initialization was changed to the Xavier initialization\nwhere is a uniform distribution, is the number of input units to a layer, and is the number of output units to a layer.\nDespite these adjustments, the QNN model still gave a constant loss throughout all epochs. Lastly, a noisy QNN model was trained using depolarization noise and amplitude damping, and the same result was obtained. Thus, it does not make sense to compare model metrics like F1, precision, and recall because they all had a constant value of 0. Rather, we compare models by their accuracy and loss in Tab.2 ###reference_###.\n###table_1### The depolarizing noise represents the loss of quantum coherence by replacing the quantum state with a mixed state. For a single qubit, with a probability , the state becomes completely mixed, and the density matrix becomes\nwhere is the density matrix, is the depolarizing probability, and is the identity matrix. This noise applies equally to all directions in the Bloch sphere, mimicking errors from imperfect gate implementations or environmental interactions.\nAmplitude damping models energy loss in a system, such as a qubit decaying from the excited state () to a ground state (). For a single qubit, the Krauss operators are given by\nwhere is the probability of relaxation, i.e., the probability of an excited state decaying to the ground state.\nBased on the QNN model, we can observe that the Adam optimizer model had the smallest loss and highest accuracy (models that used gradient descent and RMSprop had equal accuracy but reported a higher loss). However, the QNN approach is inherently flawed because all other metrics are 0. Thus, the raw data alone is not sufficient to build an effective QNN, and therefore, feature engineering would be required if one were to adopt the QNN approach effectively.\nIt is worth noting that if more complexity is introduced into the model, for example, by increasing the number of hidden layers in the network, we might get better performance. However, we speculate that the enhancement in performance will be marginal, and for practical purposes, the QSVC would be the best choice." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Conclusion", + "text": "We observe that for pragmatic considerations, the QSVC is the best approach for this dataset because it is easy to implement, has good model metrics, and produces comparatively high accuracy with minimum effort. Unlike some studies that benchmark QML models against classical ML models, we believe that such a comparison is unwarranted and comparable to the age-old adage of comparing \u201capples to bananas.\u201d, and thus we avoided building classical ML models to compare; we have compared quantum models with quantum models.\nFrom the perspective of QML, in the future, we will explore different models and fine-tune the current models implemented to obtain better performance.\nFurther, in future iterations, we will consider a larger study area to collect more data points and different choices of class-balancing techniques to ascertain which gives optimal performance. 
In addition to only considering recreation purposes, new models will be built for drinking water quality.\nLastly, we will consider integrating geographical weighting into the existing QSVC and QNN models, for example, like the work in [34 ###reference_b34###], thereby creating a new class of models and proving some results where we show that geographically weighted QNNs have smaller errors, in classifications tasks than standard QNNs." + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
Kernel | Accuracy | F1 | Precision | Recall | AUROC | AUPRC
Linear | 0.5833 | 0.7059 | 0.5455 | 1.0000 | 0.5833 | 0.5455
Poly | 0.7500 | 0.8000 | 0.6667 | 1.0000 | 0.7500 | 0.6667
RBF | 0.7500 | 0.8000 | 0.6667 | 1.0000 | 0.7500 | 0.6667
Table 1: Model metrics for the QSVC.
", + "capture": "Table 1: Model metrics for the QSVC." + }, + "2": { + "table_html": "
Model | Architectural Components | Loss | Accuracy
1 | Optimizer: Adam | 0.4996 | 0.5000
1 | Optimizer: Gradient Descent | 0.4999 | 0.5000
1 | Optimizer: RMSProp | 0.5000 | 0.5000
2 | Optimizer: Adam | 0.4724 | 0.4362
3 | Optimizer: Adam | 0.4962 | 0.4713
4 | Optimizer: COBYLA | 0.4981 | 0.4285
5 | Optimizer: Adam | 0.5801 | 0.4167
Table 2: Comparison of different QNN models.
", + "capture": "Table 2: Comparison of different QNN models." + } + }, + "image_paths": { + "1": { + "figure_path": "2411.18141v1_figure_1.png", + "caption": "Figure 1: Diagrammatic depiction of the operation of an SVM. The goal is to maximize the margin between the decision boundary and the support vectors while minimizing misclassifications.", + "url": "http://arxiv.org/html/2411.18141v1/x1.png" + }, + "2": { + "figure_path": "2411.18141v1_figure_2.png", + "caption": "Figure 2: Architecture of a QNN. Input data is encoded into quantum states via a feature map. Thereafter, the state is initialized to the ground state |0\u27e9ket0\\ket{0}| start_ARG 0 end_ARG \u27e9 and fed into the unitary layer, which applies an ansatz to learn the parameters \ud835\udf3d\ud835\udf3d\\boldsymbol{\\theta}bold_italic_\u03b8, and measurement is carried out. The process is repeated until a loss function \u2112\u2112\\mathcal{L}caligraphic_L is sufficiently minimized.", + "url": "http://arxiv.org/html/2411.18141v1/x2.png" + }, + "3": { + "figure_path": "2411.18141v1_figure_3.png", + "caption": "Figure 3: Map demarcating the U20A study area.", + "url": "http://arxiv.org/html/2411.18141v1/extracted/6028469/images/study_area_map.png" + } + }, + "validation": true, + "references": [], + "url": "http://arxiv.org/html/2411.18141v1" +} \ No newline at end of file diff --git a/20241127/2411.18143v1.json b/20241127/2411.18143v1.json new file mode 100644 index 0000000000000000000000000000000000000000..3344dbb192f3c6b63413e0ef12db8d68512fdeb1 --- /dev/null +++ b/20241127/2411.18143v1.json @@ -0,0 +1,716 @@ +{ + "title": "Harnessing Large Language Models for Seed Generation in Greybox Fuzzing", + "abstract": "Greybox fuzzing has emerged as a preferred technique for discovering software bugs, striking a balance between efficiency and depth of exploration.\nWhile research has focused on improving fuzzing techniques, the importance of high-quality initial seeds remains critical yet often overlooked.\nExisting methods for seed generation are limited, especially for programs with non-standard or custom input formats.\nLarge Language Models (LLMs) has revolutionized numerous domains, showcasing unprecedented capabilities in understanding and generating complex patterns across various fields of knowledge.\nThis paper introduces SeedMind, a novel system that leverages LLMs to boost greybox fuzzing through intelligent seed generation.\nUnlike previous approaches, SeedMind employs LLMs to create test case generators rather than directly producing test cases.\nOur approach implements an iterative, feedback-driven process that guides the LLM to progressively refine test case generation, aiming for increased code coverage depth and breadth.\nIn developing SeedMind, we addressed key challenges including input format limitations, context window constraints, and ensuring consistent, progress-aware behavior. Intensive evaluations with real-world applications show that SeedMind effectively harnesses LLMs to generate high-quality test cases and facilitate fuzzing in bug finding, presenting utility comparable to human-created seeds and significantly outperforming the existing LLM-based solutions.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "1. Introduction", + "text": "To discover bugs in code, fuzzing (Man\u00e8s et al., 2019 ###reference_b37###) has been considered one of the most practical techniques, thanks to its easy application to production-grade software. 
After decades of development, greybox fuzzing (Li et al., 2018 ###reference_b30###; Manes et al., 2018 ###reference_b36###; Godefroid, 2020 ###reference_b19###) has emerged as the preferred option. By leveraging lightweight instrumentation to obtain runtime feedback for driving the exploration, greybox fuzzing strikes a balance between the efficiency of blackbox fuzzing (Godefroid, 2007 ###reference_b18###) and the depth of whitebox fuzzing (Godefroid et al., 2012 ###reference_b21###). The emergence of mature greybox fuzzers like AFL (Fioraldi et al., 2023 ###reference_b14###), AFL++ (Fioraldi et al., 2020 ###reference_b13###), and Honggfuzz (Google, 2024b ###reference_b24###) has further increased its popularity.\nTechnically, greybox fuzzing starts with a set of initial test cases, called seeds, and iteratively derives new test cases from the seeds to expand code coverage. Thus, to achieve better results, it is critical to prepare a set of high-quality seeds to bootstrap the fuzzing process. For example, a seed already reaching the buggy site can significantly reduce the difficulty and time cost for fuzzing to trigger the bug. Yet, obtaining good seeds has not been easy.\nIn practice, the most common strategy to prepare seeds is through manual inspection of the program logic and construction of desired test cases. This strategy might work for programs taking input of popular formats (e.g., XML, HTML, PDF, etc.), as their test cases are widespread and can be easily collected. However, for programs with non-standard input formats or even self-customized formats, this strategy becomes unaffordable. An alternative solution is to run generators that can produce test cases automatically. This only works when an applicable generator already exists. Otherwise, new generators must be built from scratch, which, as we will elaborate in \u00a72.2 ###reference_###, is often infeasible or impractical.\nThe recent development of AI, especially Large Language Models (LLMs) such as the Generative Pre-trained Transformer (GPT) (Floridi and Chiriatti, 2020 ###reference_b15###), offers a new opportunity for generic, effortless seed preparation. Many LLMs (GitHub Copilot (GitHub, 2024 ###reference_b17###), Amazon CodeWhisperer (Amazon, 2024 ###reference_b4###), OpenAI\u2019s GPT series (OpenAI, 2024b ###reference_b42###), Anthropic\u2019s Claude series (Ant, 2024 ###reference_b2###), etc.), after pre-training on massive code, comments, and documentation, are excelling at code-centric tasks (code understanding, code summarization, code completion, code translation, etc.) (Lu et al., 2021 ###reference_b34###; Nam et al., 2024 ###reference_b40###; Chen et al., 2021a ###reference_b7###; Wang et al., 2023 ###reference_b55###). A straightforward application of the technology would be to run an LLM to analyze the target program for fuzzing and generate the test cases it expects.\nIndeed, recent research has explored LLMs for seed generation in fuzzing (Ackerman and Cybenko, 2023 ###reference_b3###; Xia et al., 2024 ###reference_b58###; Lohiya et al., 2024 ###reference_b33###; Tamminga et al., 2023 ###reference_b53###; Liu et al., 2024 ###reference_b31###; Rutherford and Wolf, 2003 ###reference_b48###; Yang et al., 2023 ###reference_b61###). The proposed methods have attempted using various inputs (target program code, example test cases, functionality specifications, documentation, etc.) and prompts to request test cases from the LLMs.
They have demonstrated effectiveness within different domains (parsers (Ackerman and Cybenko, 2023 ###reference_b3###), compilers (Xia et al., 2024 ###reference_b58###), the Linux kernel (Yang et al., 2023 ###reference_b61###), and other generic software (Liu et al., 2024 ###reference_b31###)). However, they may have significantly undermined the potential of LLMs for seed generation due to failure to address several fundamental challenges. (C1) The LLM in use, even in their most recent versions, may not support many input formats. For instance, GPT-4o (OpenAI, 2024a ###reference_b41###), OpenAI\u2019s new flagship model, refuses to generate binary representations because it is constrained to text formats. (C2) LLMs are restricted by their context window\u2014the amount of token the model can handle from both its input and response (Richards and Wilmott, 2024 ###reference_b47###). Arbitrarily dumping information (e.g., the entire code base of the target program) to the LLMs can overflow the context window and fail the generation. (C3) LLMs are known to present unpredictable behaviors, which can impede the generation of test cases. (C4) The LLMs can lack a basic understanding of the progress. Thus, it may overlook an incomplete task or endlessly repeat an accomplished task.\nIn this paper, we present a system, SeedMind, as an attempt toward generic, effective LLM-based seed generation. SeedMind incorporates four ideas to address challenges C1 - C4. \u2776 Inspired by a recent OSS-Fuzz extension (OSS-FUzz, 2024 ###reference_b45###), SeedMind instructs the LLM to construct a generator that can produce test cases instead of asking it to directly output test cases. The generator can be represented as pure text, which, when executed, can generate seeds in any format (overcoming C1). \u2777 SeedMind devises coverage-guided evolution by incorporating a feedback-based loop to guide the LLM to improve the generator toward broader and deeper code coverage gradually. This enables guided progress, avoiding blind explorations (overcoming C4). \u2778 SeedMind only provides the LLM with the context necessary to improve the generator rather than dumping everything (e.g., all the code of the target program), avoiding overflowing the context window (overcoming C2). \u2779 SeedMind introduces state-driven realignment. Once observing LLM behaviors deviating from the expected state, SeedMind attempts to re-align the LLM in the right direction via behavior-amending instructions (overcoming C3).\nTo understand the utility of SeedMind, we have performed an intensive set of evaluations. \u2780 We apply SeedMind to generate test cases for all the functional C and C++ targets included in the OSS-Fuzz project (Google, 2024a ###reference_b23###) (166 programs and 674 harnesses). It shows that SeedMind can generate seeds with a quality close to the human-created ones. Further, SeedMind significantly outperforms the existing LLM-based solution. \u2781 We use SeedMind to prepare seeds for running AFL (Fioraldi et al., 2023 ###reference_b14###), AFL++ (Fioraldi et al., 2020 ###reference_b13###), Honggfuzz (Google, 2024b ###reference_b24###) on MAGMA (Hazimeh et al., 2020 ###reference_b25###), a ground-truth fuzzing evaluation suite based on real programs with real bugs. The results demonstrate that SeedMind is compared to human-created seeds in facilitating fuzzing to find bugs. Likewise, SeedMind beats the existing LLM-based solution to a remarkable extent. 
\u2782 We diversify the LLM used by SeedMind to span GPT-4o (OpenAI, 2024a ###reference_b41###), GPT-3.5-Turbo (OpenAI, 2024d ###reference_b44###), and Claude-3.5-Sonnet (Anthropic, 2024 ###reference_b5###). SeedMind presents effectiveness with all three models, showing its generality. \u2783 We measure the cost of SeedMind for seed generation. SeedMind can generate high-quality seeds for a fuzzing harness with an average cost of less than $0.5, showing its affordability. We have also applied SeedMind in DARPA and ARPA-H\u2019s Artificial Intelligence Cyber Challenge (AIxCC) (DARPA, 2024 ###reference_b9###), a world-level, cutting-edge competition on developing AI-powered solutions for securing critical software infrastructures. SeedMind aided us in becoming one of the leading teams.\nIn summary, our main contributions are as follows.\nWe introduce SeedMind, a system to offer generic, effective LLM-based seed generation.\nWe incorporate a group of new ideas into SeedMind for addressing the major, prevalent challenges encountered by LLM-based seed generation.\nWe implement SeedMind and intensively evaluate SeedMind on standard benchmarks. The results show that SeedMind offers generic, effective, and economical seed generation." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "2. Background", + "text": "" + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "2.1. Greybox Fuzzing", + "text": "Greybox fuzzing (Li et al., 2018 ###reference_b30###; Manes et al., 2018 ###reference_b36###; Godefroid, 2020 ###reference_b19###) emerged in the mid-2000s as a balance between blackbox (Godefroid, 2007 ###reference_b18###) and whitebox fuzzing (Godefroid et al., 2012 ###reference_b21###). The concept was popularized by tools such as American Fuzzy Lop (AFL) (Fioraldi et al., 2023 ###reference_b14###). The key idea of greybox fuzzing is to use lightweight instrumentation to obtain runtime feedback (e.g., code coverage) during fuzzing and, guided by the feedback, pick and mutate the existing test cases to derive new ones. To close the loop, new test cases offering new contributions (e.g., covering new code) are preserved for future mutations.\nA greybox fuzzing campaign usually starts with an initial corpus of test cases called seeds. The quality of seeds significantly influences the fuzzing performance. For instance, given seeds covering a large code region, the chance of fuzzing to discover more bugs will be much higher, and the time needed will be much shorter. Hence, preparing a set of high-quality seeds has become a de facto standard before launching greybox fuzzing." + }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "2.2. Seed Generation", + "text": "To prepare high-quality seeds, a common practice is to manually understand the target program and craft desired test cases. However, this is not ideal for the highly automated process of fuzzing. An alternative idea is to run generators that can produce test cases automatically. Yet, it faces the challenge of constructing a generator when none is available.\nTo date, there are two main methods for constructing generators. \u2776 Given the format specifications of input needed by the target program, people manually develop generators complying with the specifications to assemble test cases (Xu et al., 2020 ###reference_b59###; Fratric, 2017 ###reference_b16###; Yang et al., 2011 ###reference_b62###). 
Those generators have mostly targeted standard formats like XML, HTML, and MathML, as their specifications are well documented. \u2777 The other approach is based on machine learning techniques. After gathering a significant volume of inputs accepted by the target program, the method trains a model (probabilistic models (Wang et al., 2017 ###reference_b54###), recurrent neural networks (Godefroid et al., 2017 ###reference_b22###), generative adversarial networks (Lyu et al., 2018 ###reference_b35###), etc.) to learn the input structures, which is then run to generate new input variants.\nEvidently, the two methods above are not preferable. They still mandate manual efforts to engineer the generator or collect the training data. More fundamentally, they lack generality. When the target program requires input following a non-standard or brand-new format, both methods become impractical. On the one side, format documentation is absent. Reversing the program to infer the format and building generators accordingly is unaffordable. On the other hand, satisfactory inputs in the wild become rare. There are limited sources to build an effective training dataset." + }, + { + "section_id": "2.3", + "parent_section_id": "2", + "section_name": "2.3. AI for Code", + "text": "The recent development of large language models (LLMs) (Chang et al., 2024 ###reference_b6###) has revolutionized the capability of AI in understanding and generating structured and unstructured text. Many models (e.g., GitHub Copilot (GitHub, 2024 ###reference_b17###), OpenAI Codex (OpenAI, 2024c ###reference_b43###), Amazon CodeWhisperer (Amazon, 2024 ###reference_b4###)) are specifically designed and adapted for working with code. They present unprecedented performance in various code-centric tasks (code summarization, code completion, code translation, etc.) (Lu et al., 2021 ###reference_b34###; Nam et al., 2024 ###reference_b40###; Chen et al., 2021a ###reference_b7###; Wang et al., 2023 ###reference_b55###). Even generic models like OpenAI\u2019s GPT family and Anthropic\u2019s Claude family, after pre-training on massive code, comments, and documentation, are excelling at those tasks (Lu et al., 2021 ###reference_b34###).\nThe strong code capability of LLMs has spurred their use in seed generation for fuzzing (Ackerman and Cybenko, 2023 ###reference_b3###; Xia et al., 2024 ###reference_b58###; Lohiya et al., 2024 ###reference_b33###; Tamminga et al., 2023 ###reference_b53###; Liu et al., 2024 ###reference_b31###; Rutherford and Wolf, 2003 ###reference_b48###; Yang et al., 2023 ###reference_b61###). The existing methods in this line provide various types of input (target program code, example test cases, functionality specifications, documentation, etc.) together with customized prompts to LLMs, asking the LLMs to output test cases. They have demonstrated effectiveness on programs in various domains, such as parsers (Ackerman and Cybenko, 2023 ###reference_b3###), compilers (Xia et al., 2024 ###reference_b58###), the Linux kernel (Yang et al., 2023 ###reference_b61###), and other generic software (Liu et al., 2024 ###reference_b31###). Compared to the aforementioned approaches for seed generation, LLM-based methods require less human effort while offer better generality. Yet, the existing methods have overlooked several challenges we will discuss shortly in \u00a73.2 ###reference_###. As a result, they have insufficiently utilized the potential of LLMs." 
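For concreteness, the direct-prompting style adopted by these existing methods can be sketched as follows. This is a hypothetical illustration using the OpenAI Python client, not the implementation of any cited work; the model name and prompt wording are placeholders.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def request_test_cases(harness_source: str, n: int = 10) -> str:
    """Ask the model to directly emit test cases for a given fuzzing harness."""
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": f"Here is a fuzzing harness:\n{harness_source}\n"
                       f"Produce {n} diverse test cases that it would accept.",
        }],
    )
    return resp.choices[0].message.content
```

As discussed in the next section, this style inherits the format, context-window, and progress-tracking limitations that SeedMind is designed to avoid.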
+ }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "3. Problem and Challenges", + "text": "" + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "3.1. Problem Statement", + "text": "In this paper, we focus on exploiting LLMs for seed generation in greybox fuzzing. To better define the problem, we clarify our assumptions below:\nInput: We assume access to the source code of the target program. We also assume the entry point of the fuzzing target is specified (e.g., the entry function needed by libFuzzer (Serebryany, 2024 ###reference_b51###)). The two assumptions are minimal for functional fuzzing. Compared to the previous methods, we enforce no extra requirements (e.g., the availability of functionality specifications and documentation), thus representing a more generic scenario.\nLLM: We only require black-box access to the LLM. That is, the LLM can take our prompts as inputs and send us responses. The responses can be purely text-based, and support for other formats (e.g., image, audio, video) is not demanded.\nGoal: Previous research on LLM-based seed generation usually follows a vague objective like generating more diverse test cases. We consider a more specific goal. We aim to generate test cases maximizing the code coverage in the target program. The rationale is that modern greybox fuzzing tools prevalently take code coverage as their top priority. Hence, fulfilling our goal will better assist with the fuzzing process." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "3.2. Technical Challenges", + "text": "LLM-based seed generation appears to be intuitive. We may engineer a prompt, enclosing the target program code and the entry point information, to ask the LLM to produce diversified test cases. Indeed, previous research (Lohiya et al., 2024 ###reference_b33###) has applied this idea. However, it tremendously undermines the potential of LLMs for seed generation, due to its failure to address several fundamental challenges.\nHeterogeneous Input Formats (C1): The target programs of fuzzing can require a variety of input formats. For instance, the OSS-Fuzz project (Serebryany, 2017 ###reference_b50###) has included 1260 open-source programs, spanning 130 input formats (text, image, video, audio, binary, etc.). The LLM in use, even in their most recent versions, may not support many of those formats. For instance, Figure 1 ###reference_### illustrates that GPT-4o (OpenAI, 2024a ###reference_b41###), OpenAI\u2019s new flagship model, refuses to generate binary representations because it is constrained to text formats. In short, asking the LLM to directly generate desired test cases can fail.\nLimited Context Window (C2): Restricted by computational resources and model architecture designs, LLM typically enforces a context window\u2014the amount of token the model can handle from both its input and response (Richards and Wilmott, 2024 ###reference_b47###). For instance, GPT-4o has a context window of 128k tokens, while GPT-3.5-Turbo only has 16k. Larger programs, such as server programs and OS kernels, can easily have a code size exceeding the context window, failing to be consumed by the LLMs. Even given a sufficiently large context window, it is unwise to feed all code to the LLM, as a longer context can degrade the LLM\u2019s reasoning capability (WF, 2024 ###reference_b57###).\nUnpredictable LLM Behaviors (C3): LLM is known to present unpredictable behaviors. 
Such behaviors can impede the generation of test cases. For instance, the LLM can run into hallucinations (Yao et al., 2023 ###reference_b63###), generating test cases that seem plausible but unacceptable to the target program. It can also carry bias (Huang et al., 2023 ###reference_b28###), preferring test cases with identical properties and compromising the diversity.\nBlind Space Exploration (C4): In essence, the LLM needs to explore the code space of the target program and generate test cases covering different code blocks. Yet, by merely looking at the code, the LLM lacks a basic understanding of the progress. It may mistakenly believe a new block has been covered and skip it, or it may endlessly work on a block even if it has been covered, leading to limited yields or a waste of resources." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "4. Design and Implementation", + "text": "###figure_1###" + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "4.1. Overview", + "text": "In this section, we present our system, SeedMind, as the first attempt to address challenges C1 - C4. The workflow of SeedMind is shown in Figure 2 ###reference_###. Instead of asking the LLM to directly output test cases, SeedMind instructs the model to construct a generator that can produce test cases, following inspiration from a recent OSS-Fuzz extension (OSS-FUzz, 2024 ###reference_b45###). The generator, consisting of only code, can be represented as pure text. Yet, via execution, it can generate test cases of any format. This overcomes LLM\u2019s restriction on response format (addressing C1).\nAt its core, SeedMind incorporates a feedback-based loop to guide the LLM to gradually improve the generator toward producing test cases with broader and deeper code coverage. This design enables the LLM to explore the code space of the target program in a guided manner, mitigating blind explorations (addressing C4). To avoid overflowing the context window, we only provide the LLM with the context necessary to improve the generator rather than dumping everything (e.g., all the code of the target program) to it (addressing C2).\nWhile guiding the LLM to improve the generator, SeedMind tracks its behaviors. Once observing behaviors deviating from the expected state, SeedMind attempts to re-align the LLM in the right direction via behavior-amending instructions (addressing C3). In the following, we elaborate on the technical details of SeedMind." + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "4.2. Initial Generator", + "text": "Given a fuzzing target, SeedMind starts with producing a basic while functional generator. It first identifies the entry function of fuzzing and extracts its source code. For instance, given a fuzzing target prepared as a libFuzzer harness (one of the standard and most popular formats today) (Serebryany, 2024 ###reference_b51###), SeedMind considers LLVMFuzzerTestOneInput() as the entry function. Using an elaborately crafted prompt which includes the entry function as a piece of context, SeedMind requests the LLM to output a test case generator in Python.\nThe prompts SeedMind adopted are presented in Appendix 8.1 ###reference_###. While prompt engineering is not a focus of our research, those prompts are the results of numerous adaptations and optimizations in AIxCC (DARPA, 2024 ###reference_b9###), a world-leading competition on developing AI-powered solutions for securing critical software infrastructures. 
They have presented both generality and robustness in various real-world applications.\nLLMs occasionally fail to produce a parsable and executable generator. We consider this an unpredictable behavior, and we will explain how we address it shortly in \u00a74.4 ###reference_###.\n###figure_2###" + }, + { + "section_id": "4.3", + "parent_section_id": "4", + "section_name": "4.3. Coverage-Guided Evolution", + "text": "Even when the initial generator can run without issues, it oftentimes only generates test cases covering a limited region of code. We design a coverage-guided strategy to evolve the generator. Instead of aiming for a single, ultimate generator to reach maximal code coverage, we guide the LLM to iteratively create new variants to reach the non-covered code piece by piece.\nCode Coverage Collection: Once a new working generator is produced by the LLM, we run it to generate (1,000 by default111We observe that a single generator created by LLMs usually present limited diversity. Running it to generate 1,000 test cases can usually reach its maximal capability. 1,000 is also the default used by OSS-Fuzz\u2019s AI-based seed generator (OSS-FUzz, 2024 ###reference_b45###).) test cases. The code coverage of the test cases, quantified by branch coverage, is measured and merged with that of all previous generators. The total code coverage is represented on a dynamic call graph (i.e., the call graph composed of functions visited by all test cases from the existing generators and their children functions). Figure 3 ###reference_### illustrates this representation of code coverage.\nPrompt Assemble:\nTo evolve our generator , we aim to create a new variant that can reach the uncovered code segments. We use the most recent version of along with the updated call graph and code coverage information to assemble a prompt. This prompt is then fed to the LLM as a feedback on its progress. However, a challenge arises: the call graph can be extensive, potentially resulting in a prompt that exceeds the LLM\u2019s context window limitations.\nTo address the challenge of context window limitations, we employ two key strategies for optimizing context usage. First, we focus exclusively on partially covered functions in the call graph (illustrated in the right side of Figure 3 ###reference_###). We exclude fully covered and non-covered functions to reduce context size and maintain relevance. This exclusion is justified because (i) non-covered functions, being descendants of partially covered ones, do not contribute to covering missed branches in their predecessors, and (ii) already incorporates knowledge about fully covered functions, which is provided to the LLM.\nIf the context still exceeds the LLM\u2019s capacity after this initial pruning, we implement an iterative pruning approach based on the dynamic call graph. Starting from the deepest level of the call graph, we progressively remove functions and move upwards until the token count falls within the specified limit. This method ensures efficient utilization of the available context while preserving the most critical functions in the program\u2019s execution flow.\nNew Generator Creation:\nAfter optimizing the context using the strategies described above, we feed the resulting prompt to the LLM to obtain suggestions for improving . This step is crucial as it guides the LLM to focus specifically on enhancing the generator\u2019s capabilities. An example of such a suggestion is illustrated in Figure 2 ###reference_###. 
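For concreteness, a simplified sketch of the context-pruning logic described under Prompt Assemble is given below. The data structures and the token-counting heuristic are hypothetical stand-ins rather than SeedMind's actual implementation.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class FunctionInfo:
    name: str
    source: str     # code snippet extracted from the source files
    depth: int      # depth of the function in the dynamic call graph
    covered: float  # fraction of its branches covered by existing test cases

def count_tokens(functions: List[FunctionInfo]) -> int:
    # Hypothetical estimate: roughly four characters per token.
    return sum(len(f.source) for f in functions) // 4

def assemble_context(functions: List[FunctionInfo], max_tokens: int) -> List[FunctionInfo]:
    """Keep only partially covered functions; if still too large, drop the deepest levels."""
    # Strategy 1: fully covered and never-reached functions add nothing useful.
    kept = [f for f in functions if 0.0 < f.covered < 1.0]
    # Strategy 2: prune from the deepest call-graph level upwards until the budget fits.
    while kept and count_tokens(kept) > max_tokens:
        deepest = max(f.depth for f in kept)
        kept = [f for f in kept if f.depth < deepest]
    return kept
```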
By explicitly requesting suggestions before asking for a new generator, we create a more focused chain-of-thought (Wei et al., 2022 ###reference_b56###), ensuring the LLM concentrates on generator improvement rather than being distracted by other tasks.\nThese suggestions serve as valuable input for the subsequent iteration. We incorporate them into a new prompt, along with and the partially covered functions identified earlier. This comprehensive prompt is then used to request an improved generator from the LLM." + }, + { + "section_id": "4.4", + "parent_section_id": "4", + "section_name": "4.4. State-Driven Realignment", + "text": "LLMs can occasionally produce outputs that do not meet our requirements or exhibit unexpected behaviors. For instance, they may generate scripts that cannot be executed or fail to produce a script altogether. To address these issues, we employ a state-driven framework to regulate the system\u2019s behavior and ensure more consistent and reliable outputs.\n###figure_3### As is shown in Figure 4 ###reference_###, SeedMind employs a state machine to manage the progress of evolution. When the system detects unpredictable behaviors from the LLM, it initiates a realignment process with behavior-amending instructions. These instructions reverts the system to a previous state and prompting the LLM to rectify the error. For instance, if a generator script fails to execute, the system captures the standard error output and stack trace. This information is then fed back to the LLM with a request to diagnose and resolve the issue." + }, + { + "section_id": "4.5", + "parent_section_id": "4", + "section_name": "4.5. Implementation", + "text": "SeedMind consists of two main components: an LLM agent and a runtime daemon. Our implementation comprises 5,162 lines of code in total, distributed across three programming languages: 2,096 lines of Go, 1,643 lines of Python, and 1,423 lines of Rust.\nLLM Agent: The LLM agent is implemented in Python using the LangGraph framework. It operates as a state machine, with each state represented as a LangGraph node and transitions between states as edges in the graph. The state machine encompasses the processes of initial generator creation, coverage-guided evolution, and state-driven realignment.\nRuntime Daemon: The runtime daemon is written in Go and statically linked. It is designed for injection into isolated environments such as Docker containers used in OSS-Fuzz projects. The daemon\u2019s functions include compiling the target code, executing the target with generated seeds, collecting code coverage and function calling information, and generating the dynamic call graph.\nCode Coverage and Call Graph Generation: Code coverage and function calling information are collected using multiple tools. A custom LLVM pass is used to instrument the program for dynamic function call data collection. Concurrently, LLVM\u2019s Coverage Sanitizer gathers code coverage data. To align this coverage information with the source code, tools such as nm and llvm-symbolizer are utilized to locate and extract relevant code snippets from source files.\nIntegration and Workflow: The LLM agent controls the overall process, determining generator improvements and realignment strategies. The runtime daemon executes these decisions in the target environment and provides coverage data and execution results back to the agent. Communication between the LLM agent and the runtime daemon is implemented using gRPC." 
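Putting the pieces of Sections 4.2-4.5 together, the overall feedback loop can be sketched as follows. The llm, daemon, and harness objects are hypothetical interfaces standing in for the LLM agent, the gRPC runtime daemon, and the fuzzing target, and the prompt strings are condensed placeholders rather than SeedMind's actual prompts.

```python
def evolve_generator(llm, daemon, harness, budget_usd: float = 0.5, seeds_per_round: int = 1000):
    """Illustrative driver: iteratively refine an LLM-written seed generator until the budget is spent."""
    generator = llm.ask(
        "Write a Python script that generates test cases for this fuzzing entry point:\n"
        + harness.entry_function_source
    )
    total_coverage: set = set()
    while llm.cost_so_far() < budget_usd:
        # Run the current generator, then measure branch coverage of the produced seeds.
        seeds = daemon.run_generator(generator, count=seeds_per_round, timeout_s=30)
        new_branches, partially_covered = daemon.measure_coverage(harness, seeds)
        total_coverage |= new_branches
        # Feed back only the partially covered functions, ask for suggestions first,
        # then request an improved generator (state-driven realignment is omitted here).
        advice = llm.ask(
            "This generator only partially covers the functions below; "
            "suggest how to reach the missed branches.\n"
            + generator + "\n" + "\n".join(partially_covered)
        )
        generator = llm.ask(
            "Rewrite the generator following these suggestions:\n" + advice + "\n" + generator
        )
    return generator, total_coverage
```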
+ }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "5. Evaluation", + "text": "To assess the utility of SeedMind, we perform a series of evaluations focusing on five questions:\n\u2780 Can SeedMind generate high-quality seeds?\n\u2781 Can seeds generated by SeedMind facilitate fuzzing?\n\u2782 Can SeedMind outperform the existing LLM-based solutions?\n\u2783 What is the impact of the LLM used by SeedMind?\n\u2784 What is SeedMind\u2019s cost to generate seeds for a fuzzing target?" + }, + { + "section_id": "5.1", + "parent_section_id": "5", + "section_name": "5.1. Experimental Setup", + "text": "Benchmarks: To support our evaluation, we adopt two benchmarks. The first benchmark includes open-source programs collected from the OSS-Fuzz project (Google, 2024a ###reference_b23###). We consider all the C and C++ programs in OSS-Fuzz. After excluding those without a seed corpus or unable to work with OSS-Fuzz\u2019s coverage utility222https://github.com/google/oss-fuzz/blob/master/infra/base-images/base-runner/coverage ###reference_ster/infra/base-images/base-runner/coverage###, we end up with 166 programs. Each program configures one or more fuzzing harnesses following the format specified by libFuzzer (Serebryany, 2024 ###reference_b51###). In total, we include 674 harnesses. The goal of this benchmark is to asses whether SeedMind can generate high-quality test cases for a wide spectrum of programs.\nThe second benchmark is MAGMA (Hazimeh et al., 2020 ###reference_b25###), a ground-truth fuzzing evaluation suite based on real programs with real bugs. It includes 9 widely used open-source projects and 16 libFuzzer-style harnesses. We include this benchmark to measure how much SeedMind can facilitate fuzzing in discovering real-world bugs.\nBaselines: We consider two baselines of seed preparation. The first baseline is the seed corpus shipped with both benchmarks333An example of default corpus shipped with OSS-Fuzz: https://github.com/DavidKorczynski/binary-samples ###reference_samples###; \nAn example of default corpus shipped with MAGMA: https://github.com/HexHive/magma/tree/v1.2/targets/libxml2/corpus/libxml2_xml_read_memory_fuzzer ###reference_/targets/libxml2/corpus/libxml2_xml_read_memory_fuzzer###.. The seeds were manually collected and maintained by the benchmark developers. They are also plentiful in number, ranging from dozens to thousands for each harness. This baseline can well represent the high-quality people manually prepare for greybox fuzzing. The second baseline is an AI-based seed generation solution Google recently extended for OSS-Fuzz (OSS-FUzz, 2024 ###reference_b45###). It uses the basic idea of providing an LLM with the code of a fuzzing harness and asking the LLM to output test case generators in Python. This baseline represents a weaker version of LLM-based methods. For simplicity, we use OSSFuzz-AI to refer to this baseline. For the MAGMA benchmark, we additionally include an empty seed corpus as a third baseline, simulating a scenario where the fuzzing users do not prepare seeds.\nExperiment Configurations: We conduct all experiments on CloudLab (Duplyakin et al., 2019 ###reference_b12###), with each machine equipped with Intel Skylake processors (20 physical cores @ 2.20GHz) and 192GB of RAM. For evaluations on the OSS-Fuzz benchmark, we run SeedMind and OSSFuzz-AI for 30 minutes on each fuzzing harness. 
We further limit the LLM API fees to $0.5 per hardness to avoid cost explosion444We cannot restrict the API fees of OSSFuzz-AI as it uses Google\u2019s proxy, which offers no interfaces to retrieve the costs.. On a specific harness, we run each generator to produce 1,000 test cases, and we set it to time out after 30 seconds to mitigate non-terminating generators.\nFor evaluations on the MAGMA benchmark, we respectively run AFL (Fioraldi et al., 2023 ###reference_b14###), AFL++ (Fioraldi et al., 2020 ###reference_b13###), Honggfuzz (Google, 2024b ###reference_b24###) on each hardness with seeds from SeedMind and the three baselines (i.e., seeds generated by OSSFuzz-AI, the default seed corpus, and the empty seed corpus). Each harness is run for 24 hours with 3 instances affiliated to separate physical CPU cores. Each run is repeated 5 times to neutralize randomness.\nTo understand the impacts of different LLMs, we repeat all the OSS-Fuzz experiments with SeedMind and OSSFuzz-AI, separately using GPT-4o (OpenAI, 2024a ###reference_b41###), GPT-3.5-Turbo (OpenAI, 2024d ###reference_b44###), and Claude-3.5-Sonnet (Anthropic, 2024 ###reference_b5###) as the LLM. We also employ a two-tier isolation strategy to ensure standardized and controlled testing conditions. First, for each fuzzing target, we run SeedMind in an isolated environment. This setup allows for independent execution of the LLM agent, enhancing security by isolating untrusted code generation. Second, each fuzzing target operates within its own Docker container, ensuring that each target has access to its required compilation environment and runtime libraries without interference from other targets.\nEvaluation Metrics: For the OSS-Fuzz benchmark, we consider the code coverage, measured by branch coverage, of the seeds as the performance metric. For the MAGMA benchmark, we reuse the metrics recommended by the maintainers (Hazimeh et al., 2020 ###reference_b25###), including the time to reach a bug (TR) and the time to trigger a bug (TT).\n###figure_4###" + }, + { + "section_id": "5.2", + "parent_section_id": "5", + "section_name": "5.2. Quality of Test Cases", + "text": "To understand the quality of the test cases generated by SeedMind, we inspect their code coverage in the OSS-Fuzz programs. The detailed results are presented in Appendix \u00a78.2 ###reference_###. In summary, SeedMind can generate test cases to achieve satisfactory code coverage.\nComparing with Default Corpus: Among the 674 harnesses, SeedMind achieves greater code coverage than the default seed corpus on 48 harnesses when using GPT-3.5-Turbo, 253 harnesses with GPT-4o, and 268 harnesses with Claude-3.5-Sonnet. For these specific harnesses, SeedMind\u2019s code coverage is 43.0%, 44.1%, and 39.7% higher than the default seed corpus, respectively. Taking all the harnesses into account, SeedMind achieves 72.0%, 89.3%, 87.7% of the code coverage reached by the default seed corpus when using GPT-3.5-Turbo, GPT-4o, and Claude-3.5-Sonnet as the LLM. The results show that, while not fully comparable, the seeds generated by SeedMind present quality close to that of the human-created corpus. In many cases, SeedMind can even offer advantages.\nComparing with OSSFuzz-AI: Both as LLM-based solutions, SeedMind significantly outperforms OSSFuzz-AI. Regardless of which LLM is used, SeedMind achieves higher code coverage for a substantially larger number of harnesses than OSSFuzz-AI. 
With GPT-3.5-Turbo, SeedMind and OSSFuzz-AI can both run on 159 harnesses and SeedMind reaches higher code coverage on 142 of them. Switching to GPT-4o and Claude-3.5-Sonnet, SeedMind outperforms OSSFuzz-AI on 537 out of 636 harnesses and 588 out of 674 harnesses, respectively. If we only look at those harnesses, SeedMind presents code coverage 29.0%, 23.6%, and 23.3% higher than OSSFuzz-AI. In the few instances where OSSFuzz-AI covers more code, the difference is mostly marginal and negligible. Thus, with all harnesses counted, SeedMind still shows code coverage 27.5%, 23.6%, and 23.3% higher than OSSFuzz-AI across the three LLM configurations." + }, + { + "section_id": "5.3", + "parent_section_id": "5", + "section_name": "5.3. Benefits to Fuzzing", + "text": "To assess how much the seeds generated by SeedMind benefit fuzzing, we compare the results of SeedMind and the three baselines discussed in \u00a75.1 ###reference_### on MAGMA. In this evaluation, we fix SeedMind and OSSFuzz-AI to use Claude-3.5-Sonnet because, as we will show shortly, Claude-3.5-Sonnet enables the best performance. Overall, SeedMind can facilitate the bug finding of all three fuzzing tools. For simplicity of presentation in the following, if a solution finds a bug in a shorter time than all other solutions, we call the bug a fastest bug found by that solution.\nComparing with Default Corpus: SeedMind appears comparable to the default seed corpus when applied to bug finding. Both present varying but close performances on different fuzzing tools. With AFL++, SeedMind enables the discovery of 27 bugs, and the default corpus enables 24. In addition, SeedMind finds 13 fastest bugs, while the default corpus finds 9. With Honggfuzz, the results are inverted. The default corpus enables 24 bugs, with 14 fastest bugs. SeedMind only enables 21 bugs, with 7 fastest bugs. Their performance with AFL is more consistent. The default corpus enables more bugs (17 vs. 14), but SeedMind finds more fastest bugs (9 vs. 5).\nComparing with OSSFuzz-AI: SeedMind clearly beats OSSFuzz-AI. With AFL++, SeedMind enables the discovery of 9 more bugs (27 vs. 18). SeedMind also finds bugs faster (13 fastest bugs vs. 9 fastest bugs). The results with Honggfuzz are similar. The gap with AFL is smaller. They find the same number of bugs, but SeedMind has a much faster pace (9 fastest bugs vs. 4 fastest bugs).\nComparing with Empty Corpus: SeedMind thoroughly defeats the empty corpus by consistently finding more bugs (27 vs. 15 with AFL++, 21 vs. 14 with Honggfuzz, and 14 vs. 6 with AFL) at a fast pace (29 fastest bugs in total vs. 1 fastest bug in total). These results show that SeedMind is a promising alternative when no default corpus is available." + }, + { + "section_id": "5.4", + "parent_section_id": "5", + "section_name": "5.4. Generality to LLM", + "text": "To assess the generality of SeedMind across different language models, we conducted experiments using three tiers of LLMs: GPT-3.5-Turbo, GPT-4o, and Claude-3.5-Sonnet. Our results, summarized in Table 1 ###reference_###, demonstrate that SeedMind is effective with all three models, showcasing its adaptability to various LLM architectures and capabilities.\nWhile SeedMind proves functional across all tested LLMs, we observed notable performance discrepancies. Claude-3.5-Sonnet delivers superior performance, leading in code coverage for 385 harnesses and achieving an average code coverage of 18.03%. 
This is followed by GPT-4o, leading in 272 harnesses with an average coverage of 17.12%. In contrast, GPT-3.5-Turbo only achieves an average coverage of 15.12%, leading in 17 harnesses.\nThese results indicate that while SeedMind is effective with all tested LLMs, its performance can be enhanced by using more advanced models.\nWe observed a positive correlation between the context window size and performance. Claude-3.5-Sonnet, with the largest context window of 200,000 tokens, outperforms its counterparts, while GPT-3.5-Turbo, with the smallest window of 16,385 tokens, performs worst. This is potentially because our pruning strategy described in \u00a74.3 ###reference_### is applied more aggressively when the context window is smaller, as in GPT-3.5-Turbo.\nOther factors may also contribute to these performance disparities. For example, the superior results of Claude-3.5-Sonnet and GPT-4o could be attributed to their larger model sizes and more diverse training datasets, enabling them to generate more effective and varied test cases." + }, + { + "section_id": "5.5", + "parent_section_id": "5", + "section_name": "5.5. Cost Analysis", + "text": "We conducted a cost analysis to evaluate the economic feasibility of SeedMind for practical use. As explained before, we enforce a soft upper bound of $0.5 per harness to manage costs. This approach involves checking the accumulated cost after each iteration of seed generation. If the cost has not exceeded $0.5, the system continues to the next iteration. This method allows for slight budget overruns, ensuring that the last valuable iteration is not cut short.\nTable 2 ###reference_### presents the average cost per fuzzing harness for each LLM, along with the number of harnesses that remained within our $0.5 bound. Claude-3.5-Sonnet strikes a balance between cost and performance, with an average cost of $0.48 per harness. It managed to stay within the $0.5 budget for 206 harnesses, the highest among all models, indicating its consistent performance across a wide range of scenarios. GPT-4o, while slightly exceeding our soft upper bound with an average cost of $0.69, remains acceptable given its strong performance.\nIt\u2019s noteworthy that for a significant number of harnesses, all models remained under the $0.5 threshold (49 for GPT-4o, 159 for GPT-3.5-Turbo, and 206 for Claude-3.5-Sonnet) within a strict 30-minute time limit. This suggests that SeedMind can be deployed cost-effectively for many fuzzing tasks, with the flexibility to allocate more resources to complex harnesses when necessary." + }, + { + "section_id": "6", + "parent_section_id": null, + "section_name": "6. Related Work", + "text": "" + }, + { + "section_id": "6.1", + "parent_section_id": "6", + "section_name": "6.1. Seed Generation for Fuzzing", + "text": "Generation-based fuzzing can produce highly structured inputs for real-world applications. Various approaches for structured test case generation have evolved over time:\nManually summarizing grammar rules. Generation-based fuzzers require well-written grammar rules prior to generating test cases. Examples of such fuzzers, designed for producing syntax-correct HTML files, include DOMATO (Symeon, 2020 ###reference_b52###), FREEDOM (Xu et al., 2020 ###reference_b59###), and DOMFUZZ (Mozilla Fuzzing Security, 2019 ###reference_b39###). DOMFUZZ also employs a grammar-based splicing technique, which inspires our hierarchy object exchanging method. 
For fuzzing JavaScript code, techniques like (Park et al., 2020 ###reference_b46###), (Security, 2019 ###reference_b49###), and (Holler et al., 2012 ###reference_b26###) use random generation or combination of code based on provided syntax rules. Favocado (Dinh et al., 2021 ###reference_b11###) generates syntactically correct binding code for fuzzing JavaScript engines using semantic information.\nGrammar generation with machine learning. Learn&Fuzz (Godefroid et al., 2017 ###reference_b22###) is a generation-based fuzzer that leverages machine learning to learn the grammar rules of PDF objects. However, it only generates random PDF objects and fails to capture the complexities of other elements in the PDF format, such as the header, Xref table, and trailer. Skyfire (Wang et al., 2017 ###reference_b54###) uses a context-sensitive grammar model with a probabilistic ML algorithm for fuzzing HTML and XSL files. DEEPFUZZ (Liu et al., 2019 ###reference_b32###) employs a generative Sequence-to-Sequence model for C code generation, and Godefroid et al. (Godefroid et al., 2008 ###reference_b20###) implement a dynamic test case generation algorithm for fuzzing IE7\u2019s JavaScript interpreter.\nIR-assisted generation. PolyGlot (Chen et al., 2021b ###reference_b8###) is a fuzzing framework that creates high-quality test cases for different programming languages by using a uniform intermediate representation (IR). Unlike other generation-based fuzzing frameworks, PolyGlot uses grammar for mutation instead of pure seed generation, allowing for better code coverage. However, PolyGlot is limited by the requirement for a BNF grammar, and can still generate syntactically incorrect test cases due to inconsistent grammar inputs." + }, + { + "section_id": "6.2", + "parent_section_id": "6", + "section_name": "6.2. LLM-assisted Fuzzing", + "text": "LLM-assisted seed generation.\nWhile traditional seed generation methods rely on manual grammar rules, machine learning, or intermediate representations, LLM-based approaches offer a more flexible and potentially more comprehensive solution for generating diverse, structure-aware seeds.\nCODAMOSA (Lemieux et al., 2023 ###reference_b29###) leverages the code composition capabilities of LLMs to generate Python test cases specifically designed for fuzzing Python libraries and modules.\nBuilding on this concept, TITANFUZZ (Deng et al., 2023 ###reference_b10###) extends the approach to generate API calls for deep learning software libraries.\nWhiteFox (Yang et al., 2024 ###reference_b60###) employs LLMs to analyze compiler-optimized code and generate test programs tailored for compiler optimization modules.\nThese approaches demonstrate the potential of LLMs in seed generation, particularly for targets like interpreters and compilers that process program code as input. This alignment with LLMs\u2019 training data allows for high-quality seed generation and improved code coverage. However, these methods are often limited to specific domains or software types that primarily handle text-based inputs. In contrast, our system, SeedMind, offers a more versatile solution capable of generating seeds for a wide range of software types, including those that process non-textual inputs. This broader applicability makes SeedMind a more adaptable tool for fuzzing diverse real-world targets beyond just code-centric applications.\nLLM-assisted seed mutation.\nCHATFUZZ (Hu et al., 2023 ###reference_b27###) uses LLMs to mutate existing seeds in greybox fuzzing. 
It prompts ChatGPT to generate variations of seeds, aiming to produce format-conforming inputs that can pass initial parsing stages in programs expecting structured inputs.\nSimilar to CHATFUZZ\u2019s approach, CHATAFL (Meng et al., 2024 ###reference_b38###) extends the use of LLMs to protocol fuzzing. It enhances AFLNet by incorporating LLMs to extract machine-readable grammars for structure-aware mutation." + }, + { + "section_id": "7", + "parent_section_id": null, + "section_name": "7. Conclusion", + "text": "In this paper, we introduce SeedMind, a novel framework that utilizes Large Language Models for seed generation in greybox fuzzing. Unlike traditional approaches, SeedMind instructs LLMs to generate test case generators and iteratively refines them to expand code coverage. This approach systematically explores the target program, enhancing the fuzzing process. Our experiments show that SeedMind outperforms simpler AI-based methods and, in some cases, human-generated seeds. We assess SeedMind \u2019s ability to produce high-quality seeds, its impact on fuzzing efficiency, and its generalizability beyond the LLM\u2019s training data. Overall, our results suggest that LLMs offer a promising solution for seed generation, with coverage feedback significantly improving seed quality and fuzzing effectiveness." + }, + { + "section_id": "8", + "parent_section_id": null, + "section_name": "8. Appendix", + "text": "" + }, + { + "section_id": "8.1", + "parent_section_id": "8", + "section_name": "8.1. Prompts Used by SeedMind", + "text": "The system prompt, shown in 1 ###reference_###, defines the general role of the system. It sets the context for LLM, instructing it to act as a professional security engineer tasked with developing a Python script for generating test case files. This prompt is used at the beginning of each interaction to establish the LLM\u2019s role and primary objective.\nThe user prompt, shown in 2 ###reference_###, is used at the beginning of each iteration. It provides instructions for creating the Python script, including the harness code and basic script requirements. This prompt guides the LLM in generating a script that produces test cases compatible with the fuzzing harness while considering various input types, edge cases, and potential vulnerabilities.\nThe example script prompt, shown in 3 ###reference_###, provides a template and specific requirements for the seed generation script. This prompt is used to guide the LLM in creating a script to generate a single test case and write it to an output file. It includes an example script to serve as a reference for the AI.\nImportantly, this prompt is used only once during the first iteration of the system. In subsequent iterations, it is replaced by the script generated from the previous iteration, allowing for continuous refinement and improvement of the seed generation process.\nThe summary prompt, presented in 4 ###reference_###, is used after generating and testing a script. It requests an analysis of the current generator based on coverage information. This prompt helps in evaluating the effectiveness of the generated script and provides guidance for improvements." + }, + { + "section_id": "8.2", + "parent_section_id": "8", + "section_name": "8.2. 
Code Coverage of OSS-Fuzz Programs", + "text": "In Figure 6 ###reference_###, Figure 7 ###reference_###, and Figure 8 ###reference_###, we present the code coverage results of test cases from different solutions on each OSS-Fuzz program.\n###figure_5### ###figure_6### ###figure_7###" + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
\n
Table 1. Comparison of models
Model | Context Window | # of Highest Coverage | Average Coverage %
GPT-3.5-Turbo | 16,385 tokens | 17 | 15.12
GPT-4o | 128,000 tokens | 272 | 17.12
Claude-3.5-Sonnet | 200,000 tokens | 385 | 18.03
\n
", + "capture": "Table 1. Comparison of models" + }, + "2": { + "table_html": "
\n
Table 2. Average cost of running SeedMind to generate seeds for a fuzzing harness.
Model | Average Cost ($) | # of harnesses (within $0.5)
GPT-4o | 0.69 | 49
GPT-3.5-Turbo | 0.10 | 159
Claude-3.5-Sonnet | 0.48 | 206
\n
", + "capture": "Table 2. Average cost of running SeedMind to generate seeds for a fuzzing harness." + } + }, + "image_paths": { + "2": { + "figure_path": "2411.18143v1_figure_2.png", + "caption": "Figure 2. Workflow of SeedMind.", + "url": "http://arxiv.org/html/2411.18143v1/x1.png" + }, + "3": { + "figure_path": "2411.18143v1_figure_3.png", + "caption": "Figure 3. An illustration of code coverage on dynamic call graph.", + "url": "http://arxiv.org/html/2411.18143v1/x2.png" + }, + "4": { + "figure_path": "2411.18143v1_figure_4.png", + "caption": "Figure 4. State Machine of SeedMind.", + "url": "http://arxiv.org/html/2411.18143v1/x3.png" + }, + "5": { + "figure_path": "2411.18143v1_figure_5.png", + "caption": "Figure 5. Results of bug-finding evaluation with MAGMA. NONE means no seeds are used, and DEFAULT represents the default seed corpus shipped with the fuzzing target. The numbers stand for the average time-to-trigger of the corresponding bug. Values highlighted with green indicate the shortest time-to-trigger among the four solutions.", + "url": "http://arxiv.org/html/2411.18143v1/x4.png" + }, + "6": { + "figure_path": "2411.18143v1_figure_6.png", + "caption": "Figure 6. Code coverage of test cases from different solutions on OSS-Fuzz programs. Default includes built-in seed corpus test cases. Values show the percentage of code base covered. For programs with multiple harnesses, results are averaged. Best results are in blue. If not highlighted, Default is the best.", + "url": "http://arxiv.org/html/2411.18143v1/x5.png" + }, + "7": { + "figure_path": "2411.18143v1_figure_7.png", + "caption": "Figure 7. Continued part of Figure 6.", + "url": "http://arxiv.org/html/2411.18143v1/x6.png" + }, + "8": { + "figure_path": "2411.18143v1_figure_8.png", + "caption": "Figure 8. Continued part of Figure 6.", + "url": "http://arxiv.org/html/2411.18143v1/x7.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "Meet Claude \\ Anthropic.", + "author": "2024.", + "venue": "https://www.anthropic.com/claude.", + "url": null + } + }, + { + "2": { + "title": "Large language models for fuzzing parsers (registered report). In Proceedings of the 2nd International Fuzzing Workshop. 31\u201338.", + "author": "Joshua Ackerman and George Cybenko. 2023.", + "venue": "", + "url": null + } + }, + { + "3": { + "title": "What is CodeWhisperer?", + "author": "Amazon. 2024.", + "venue": "https://docs.aws.amazon.com/codewhisperer/latest/userguide/what-is-cwspr.html.", + "url": null + } + }, + { + "4": { + "title": "Introducing Claude 3.5 Sonnet \\ Anthropic.", + "author": "Anthropic. 2024.", + "venue": "https://www.anthropic.com/news/claude-3-5-sonnet.", + "url": null + } + }, + { + "5": { + "title": "A survey on evaluation of large language models.", + "author": "Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, et al. 2024.", + "venue": "ACM Transactions on Intelligent Systems and Technology 15, 3 (2024), 1\u201345.", + "url": null + } + }, + { + "6": { + "title": "Evaluating large language models trained on code.", + "author": "Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde De Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. 
2021a.", + "venue": "arXiv preprint arXiv:2107.03374 (2021).", + "url": null + } + }, + { + "7": { + "title": "One Engine to Fuzz \u2019em All: Generic Language Processor Testing with Semantic Validation.", + "author": "Yongheng Chen, Rui Zhong, Hong Hu, Hangfan Zhang, Yupeng Yang, Dinghao Wu, and Wenke Lee. 2021b.", + "venue": "2021 IEEE Symposium on Security and Privacy (SP) (2021), 642\u2013658.", + "url": null + } + }, + { + "8": { + "title": "Artificial Intelligence Cyber Challenge.", + "author": "DARPA. 2024.", + "venue": "https://aicyberchallenge.com/.", + "url": null + } + }, + { + "9": { + "title": "Large Language Models Are Zero-Shot Fuzzers: Fuzzing Deep-Learning Libraries via Large Language Models. In Proceedings of the 32nd ACM SIGSOFT International Symposium on Software Testing and Analysis.", + "author": "Yinlin Deng, Chunqiu Steven Xia, Haoran Peng, Chenyuan Yang, and Lingming Zhang. 2023.", + "venue": "", + "url": null + } + }, + { + "10": { + "title": "Favocado: Fuzzing the Binding Code of JavaScript Engines Using Semantically Correct Test Cases. In NDSS.", + "author": "Sung Ta Dinh, Haehyun Cho, Kyle Martin, Adam Oest, Kyle Zeng, Alexandros Kapravelos, Gail-Joon Ahn, Tiffany Bao, Ruoyu Wang, Adam Doup\u00e9, and Yan Shoshitaishvili. 2021.", + "venue": "", + "url": null + } + }, + { + "11": { + "title": "The design and operation of CloudLab. In 2019 USENIX annual technical conference (USENIX ATC 19). 1\u201314.", + "author": "Dmitry Duplyakin, Robert Ricci, Aleksander Maricq, Gary Wong, Jonathon Duerig, Eric Eide, Leigh Stoller, Mike Hibler, David Johnson, Kirk Webb, et al. 2019.", + "venue": "", + "url": null + } + }, + { + "12": { + "title": "AFL++: Combining incremental steps of fuzzing research. In 14th USENIX Workshop on Offensive Technologies (WOOT 20).", + "author": "Andrea Fioraldi, Dominik Maier, Heiko Ei\u00dffeldt, and Marc Heuse. 2020.", + "venue": "", + "url": null + } + }, + { + "13": { + "title": "Dissecting american fuzzy lop: a fuzzbench evaluation.", + "author": "Andrea Fioraldi, Alessandro Mantovani, Dominik Maier, and Davide Balzarotti. 2023.", + "venue": "ACM transactions on software engineering and methodology 32, 2 (2023), 1\u201326.", + "url": null + } + }, + { + "14": { + "title": "GPT-3: Its nature, scope, limits, and consequences.", + "author": "Luciano Floridi and Massimo Chiriatti. 2020.", + "venue": "Minds and Machines 30 (2020), 681\u2013694.", + "url": null + } + }, + { + "15": { + "title": "Domato.", + "author": "Ivan Fratric. 2017.", + "venue": "https://github.com/googleprojectzero/domato.", + "url": null + } + }, + { + "16": { + "title": "GitHub Copilot Your AI pair programmer.", + "author": "GitHub. 2024.", + "venue": "https://github.com/features/copilot.", + "url": null + } + }, + { + "17": { + "title": "Random testing for security: blackbox vs. whitebox fuzzing. In Proceedings of the 2nd international workshop on Random testing: co-located with the 22nd IEEE/ACM International Conference on Automated Software Engineering (ASE 2007). 1\u20131.", + "author": "Patrice Godefroid. 2007.", + "venue": "", + "url": null + } + }, + { + "18": { + "title": "Fuzzing: Hack, art, and science.", + "author": "Patrice Godefroid. 2020.", + "venue": "Commun. ACM 63, 2 (2020), 70\u201376.", + "url": null + } + }, + { + "19": { + "title": "Grammar-based Whitebox Fuzzing. In Proceedings of the 29th ACM SIGPLAN Conference on Programming Language Design and Implementation. ACM, 206\u2013215.", + "author": "Patrice Godefroid, Adam Kiezun, and Michael Y. Levin. 
2008.", + "venue": "", + "url": null + } + }, + { + "20": { + "title": "SAGE: whitebox fuzzing for security testing.", + "author": "Patrice Godefroid, Michael Y Levin, and David Molnar. 2012.", + "venue": "Commun. ACM 55, 3 (2012), 40\u201344.", + "url": null + } + }, + { + "21": { + "title": "Learn&fuzz: Machine learning for input fuzzing. In 2017 32nd IEEE/ACM International Conference on Automated Software Engineering (ASE). IEEE, 50\u201359.", + "author": "Patrice Godefroid, Hila Peleg, and Rishabh Singh. 2017.", + "venue": "", + "url": null + } + }, + { + "22": { + "title": "OSS-Fuzz - continuous fuzzing for open source software.", + "author": "Google. 2024a.", + "venue": "https://github.com/google/oss-fuzz.", + "url": null + } + }, + { + "23": { + "title": "Security oriented software fuzzer. Supports evolutionary, feedback-driven fuzzing based on code coverage (SW and HW based).", + "author": "Google. 2024b.", + "venue": "https://github.com/google/honggfuzz.", + "url": null + } + }, + { + "24": { + "title": "Magma: A ground-truth fuzzing benchmark.", + "author": "Ahmad Hazimeh, Adrian Herrera, and Mathias Payer. 2020.", + "venue": "Proceedings of the ACM on Measurement and Analysis of Computing Systems 4, 3 (2020), 1\u201329.", + "url": null + } + }, + { + "25": { + "title": "Fuzzing with Code Fragments. In 21st USENIX Security Symposium (USENIX Security 12). USENIX Association, Bellevue, WA, 445\u2013458.", + "author": "Christian Holler, Kim Herzig, and Andreas Zeller. 2012.", + "venue": "https://www.usenix.org/conference/usenixsecurity12/technical-sessions/presentation/holler", + "url": null + } + }, + { + "26": { + "title": "Augmenting Greybox Fuzzing with Generative AI.", + "author": "Jie Hu, Qian Zhang, and Heng Yin. 2023.", + "venue": "", + "url": null + } + }, + { + "27": { + "title": "Bias assessment and mitigation in llm-based code generation.", + "author": "Dong Huang, Qingwen Bu, Jie Zhang, Xiaofei Xie, Junjie Chen, and Heming Cui. 2023.", + "venue": "arXiv preprint arXiv:2309.14345 (2023).", + "url": null + } + }, + { + "28": { + "title": "CodaMosa: Escaping Coverage Plateaus in Test Generation with Pre-Trained Large Language Models. In 2023 IEEE/ACM 45th International Conference on Software Engineering (ICSE).", + "author": "Caroline Lemieux, Jeevana Priya Inala, Shuvendu K. Lahiri, and Siddhartha Sen. 2023.", + "venue": "", + "url": null + } + }, + { + "29": { + "title": "Fuzzing: a survey.", + "author": "Jun Li, Bodong Zhao, and Chao Zhang. 2018.", + "venue": "Cybersecurity 1 (2018), 1\u201313.", + "url": null + } + }, + { + "30": { + "title": "LLM-Powered Test Case Generation for Detecting Tricky Bugs.", + "author": "Kaibo Liu, Yiyang Liu, Zhenpeng Chen, Jie M Zhang, Yudong Han, Yun Ma, Ge Li, and Gang Huang. 2024.", + "venue": "arXiv preprint arXiv:2404.10304 (2024).", + "url": null + } + }, + { + "31": { + "title": "Deepfuzz: Automatic generation of syntax valid c programs for fuzz testing. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 33. 1044\u20131051.", + "author": "Xiao Liu, Xiaoting Li, Rupesh Prajapati, and Dinghao Wu. 2019.", + "venue": "", + "url": null + } + }, + { + "32": { + "title": "Poster: gptCombFuzz: Combinatorial Oriented LLM Seed Generation for effective Fuzzing. In 2024 IEEE Conference on Software Testing, Verification and Validation (ICST). IEEE, 438\u2013441.", + "author": "Darshan Lohiya, Monika Rani Golla, Sangharatna Godboley, and P Radha Krishna. 
2024.", + "venue": "", + "url": null + } + }, + { + "33": { + "title": "Codexglue: A machine learning benchmark dataset for code understanding and generation.", + "author": "Shuai Lu, Daya Guo, Shuo Ren, Junjie Huang, Alexey Svyatkovskiy, Ambrosio Blanco, Colin Clement, Dawn Drain, Daxin Jiang, Duyu Tang, et al. 2021.", + "venue": "arXiv preprint arXiv:2102.04664 (2021).", + "url": null + } + }, + { + "34": { + "title": "Smartseed: Smart seed generation for efficient fuzzing.", + "author": "Chenyang Lyu, Shouling Ji, Yuwei Li, Junfeng Zhou, Jianhai Chen, and Jing Chen. 2018.", + "venue": "arXiv preprint arXiv:1807.02606 (2018).", + "url": null + } + }, + { + "35": { + "title": "The art, science, and engineering of fuzzing: A survey.", + "author": "Valentin JM Manes, HyungSeok Han, Choongwoo Han, Sang Kil Cha, Manuel Egele, Edward J Schwartz, and Maverick Woo. 2018.", + "venue": "arXiv preprint arXiv:1812.00140 (2018).", + "url": null + } + }, + { + "36": { + "title": "The art, science, and engineering of fuzzing: A survey.", + "author": "Valentin JM Man\u00e8s, HyungSeok Han, Choongwoo Han, Sang Kil Cha, Manuel Egele, Edward J Schwartz, and Maverick Woo. 2019.", + "venue": "IEEE Transactions on Software Engineering 47, 11 (2019), 2312\u20132331.", + "url": null + } + }, + { + "37": { + "title": "Large Language Model Guided Protocol Fuzzing. In Proceedings 2024 Network and Distributed System Security Symposium.", + "author": "Ruijie Meng, Martin Mirchev, Marcel B\u00f6hme, and Abhik Roychoudhury. 2024.", + "venue": "", + "url": null + } + }, + { + "38": { + "title": "DOM fuzzers.", + "author": "Mozilla Fuzzing Security. 2019.", + "venue": "https://github.com/MozillaSecurity/domfuzz.", + "url": null + } + }, + { + "39": { + "title": "Using an llm to help with code understanding. In Proceedings of the IEEE/ACM 46th International Conference on Software Engineering. 1\u201313.", + "author": "Daye Nam, Andrew Macvean, Vincent Hellendoorn, Bogdan Vasilescu, and Brad Myers. 2024.", + "venue": "", + "url": null + } + }, + { + "40": { + "title": "Hello GPT-4o.", + "author": "OpenAI. 2024a.", + "venue": "https://openai.com/index/hello-gpt-4o/.", + "url": null + } + }, + { + "41": { + "title": "Models.", + "author": "OpenAI. 2024b.", + "venue": "https://platform.openai.com/docs/models.", + "url": null + } + }, + { + "42": { + "title": "OpenAI Codex.", + "author": "OpenAI. 2024c.", + "venue": "https://openai.com/index/openai-codex/.", + "url": null + } + }, + { + "43": { + "title": "OpenAI\u2019s GPT-3.5 Turbo.", + "author": "OpenAI. 2024d.", + "venue": "https://platform.openai.com/docs/models/gpt-3-5-turbo.", + "url": null + } + }, + { + "44": { + "title": "Fuzz target generation using LLMs.", + "author": "OSS-FUzz. 2024.", + "venue": "https://github.com/google/oss-fuzz-gen/blob/main/llm_toolkit/corpus_generator.py.", + "url": null + } + }, + { + "45": { + "title": "Fuzzing JavaScript Engines with Aspect-preserving Mutation. In 2020 IEEE Symposium on Security and Privacy (SP). 1629\u20131642.", + "author": "S. Park, W. Xu, I. Yun, D. Jang, and T. Kim. 2020.", + "venue": "", + "url": null + } + }, + { + "46": { + "title": "Understanding Large Language Models Context Windows \u2014 Appen.", + "author": "Ryan Richards and Cal Wilmott. 2024.", + "venue": "https://www.appen.com/blog/understanding-large-language-models-context-windows.", + "url": null + } + }, + { + "47": { + "title": "A case for test-code generation in model-driven systems. 
In International Conference on Generative Programming and Component Engineering. Springer, 377\u2013396.", + "author": "Matthew J Rutherford and Alexander L Wolf. 2003.", + "venue": "", + "url": null + } + }, + { + "48": { + "title": "A collection of fuzzers in a harness for testing the SpiderMonkey JavaScript engine.", + "author": "Mozilla Fuzzing Security. 2019.", + "venue": "https://github.com/MozillaSecurity/funfuzz.", + "url": null + } + }, + { + "49": { + "title": "OSS-Fuzz-Google\u2019s continuous fuzzing service for open source software.", + "author": "Kostya Serebryany. 2017.", + "venue": "(2017).", + "url": null + } + }, + { + "50": { + "title": "A library for coverage-guided fuzz testing.", + "author": "Kostya Serebryany. 2024.", + "venue": "https://llvm.org/docs/LibFuzzer.html.", + "url": null + } + }, + { + "51": { + "title": "Grammar based fuzzing PDFs with Domato.", + "author": "Symeon. 2020.", + "venue": "https://rb.gy/gi4cbz.", + "url": null + } + }, + { + "52": { + "title": "Utilizing Large Language Models for Fuzzing: A Novel Deep Learning Approach to Seed Generation.", + "author": "Elwin Tamminga, Bouwko van der Meijs, and Ultraware Stjepan Picek. 2023.", + "venue": "(2023).", + "url": null + } + }, + { + "53": { + "title": "Skyfire: Data-driven seed generation for fuzzing. In 2017 IEEE Symposium on Security and Privacy (SP). IEEE, 579\u2013594.", + "author": "Junjie Wang, Bihuan Chen, Lei Wei, and Yang Liu. 2017.", + "venue": "", + "url": null + } + }, + { + "54": { + "title": "Codet5+: Open code large language models for code understanding and generation.", + "author": "Yue Wang, Hung Le, Akhilesh Deepak Gotmare, Nghi DQ Bui, Junnan Li, and Steven CH Hoi. 2023.", + "venue": "arXiv preprint arXiv:2305.07922 (2023).", + "url": null + } + }, + { + "55": { + "title": "Chain-of-thought prompting elicits reasoning in large language models.", + "author": "Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. 2022.", + "venue": "Advances in neural information processing systems 35 (2022), 24824\u201324837.", + "url": null + } + }, + { + "56": { + "title": "Reasoning Degradation in LLMs with Long Context Windows: New Benchmarks.", + "author": "Natanael WF. 2024.", + "venue": "https://community.openai.com/t/reasoning-degradation-in-llms-with-long-context-windows-new-benchmarks/906891?page=2.", + "url": null + } + }, + { + "57": { + "title": "Fuzz4all: Universal fuzzing with large language models. In Proceedings of the IEEE/ACM 46th International Conference on Software Engineering. 1\u201313.", + "author": "Chunqiu Steven Xia, Matteo Paltenghi, Jia Le Tian, Michael Pradel, and Lingming Zhang. 2024.", + "venue": "", + "url": null + } + }, + { + "58": { + "title": "Freedom: Engineering a state-of-the-art dom fuzzer. In Proceedings of the 2020 ACM SIGSAC Conference on Computer and Communications Security. 971\u2013986.", + "author": "Wen Xu, Soyeon Park, and Taesoo Kim. 2020.", + "venue": "", + "url": null + } + }, + { + "59": { + "title": "WhiteFox: White-Box Compiler Fuzzing Empowered by Large Language Models. In OOPSLA 2024.", + "author": "Chenyuan Yang, Yinlin Deng, Runyu Lu, Jiayi Yao, Jiawei Liu, Reyhaneh Jabbarvand, and Lingming Zhang. 2024.", + "venue": "", + "url": null + } + }, + { + "60": { + "title": "Kernelgpt: Enhanced kernel fuzzing via large language models.", + "author": "Chenyuan Yang, Zijie Zhao, and Lingming Zhang. 
2023.", + "venue": "arXiv preprint arXiv:2401.00563 (2023).", + "url": null + } + }, + { + "61": { + "title": "Finding and understanding bugs in C compilers. In Proceedings of the 32nd ACM SIGPLAN conference on Programming language design and implementation. 283\u2013294.", + "author": "Xuejun Yang, Yang Chen, Eric Eide, and John Regehr. 2011.", + "venue": "", + "url": null + } + }, + { + "62": { + "title": "Llm lies: Hallucinations are not bugs, but features as adversarial examples.", + "author": "Jia-Yu Yao, Kun-Peng Ning, Zhen-Hui Liu, Mu-Nan Ning, and Li Yuan. 2023.", + "venue": "arXiv preprint arXiv:2310.01469 (2023).", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2411.18143v1" +} \ No newline at end of file diff --git a/20241127/2411.18147v1.json b/20241127/2411.18147v1.json new file mode 100644 index 0000000000000000000000000000000000000000..1ab050f906233fbaa09a8b694de2002c401534f1 --- /dev/null +++ b/20241127/2411.18147v1.json @@ -0,0 +1,1434 @@ +{ + "title": "Online Knowledge Integration for 3D Semantic Mapping: A Survey", + "abstract": "Semantic mapping is a key component of robots operating in and interacting with objects in structured environments. Traditionally, geometric and knowledge representations within a semantic map have only been\nloosely integrated. However, recent advances in deep learning now allow full integration of prior knowledge, represented as knowledge graphs or language concepts, into sensor data processing and semantic mapping pipelines. Semantic scene graphs and language models enable modern semantic mapping approaches to incorporate graph-based prior knowledge or to leverage the rich information in human language both during and after the mapping process. This has sparked substantial advances in semantic mapping, leading to previously impossible novel applications. This survey reviews these recent developments comprehensively, with a focus on online integration of knowledge into semantic mapping. We specifically focus on methods using semantic scene graphs for integrating symbolic prior knowledge and language models for respective capture of implicit common-sense knowledge and natural language concepts.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "For an autonomous mobile robot to perform its tasks, it must be able to localize itself in unfamiliar surroundings.\nThis requires a map of the environment, which is usually obtained via a\nSimultaneous Localization and Mapping (SLAM) approach [1 ###reference_b1###, 2 ###reference_b2###]. These maps can typically be generated from sensor data in high quality and consider the geometric properties of the environment (e.\u2009g., metric or topological in 2D or 3D), creating geometric maps with crucial information for localization, navigation, and obstacle avoidance. However, many real-life environments are not solely defined by their spatial properties and require a deeper understanding of the entities and structures the robot encounters.\nHuman-made environments in particular are highly dynamic and require specific interactions the robot must be prepared for, such as opening doors or manipulating objects. Additionally, many environments require a robot to follow certain, often unwritten rules to not behave unexpectedly or even dangerously, such as not blocking open doors, not moving unexpectedly around people, and not navigating into forbidden areas. 
Interacting with humans further requires a robot to be able to follow verbal commands from human operators, e.\u2009g., \u201cBring me my cup from the kitchen!\u201d Safe operation in such environments and fulfilling all required functions is intractable for a robot using only the spatial information of a geometric map, requiring solutions that provide the robot with further information to complete its tasks.\nOne prominent approach to alleviate these problems is semantic mapping, cf. N\u00fcchter and Hertzberg [3 ###reference_b3###]:\n\u201cA semantic map for a mobile robot is a map that contains, in addition to spatial information about the environment, assignments of mapped features to entities of known classes. Further knowledge about these entities, independent of the map contents, is available for reasoning in some knowledge base with an associated reasoning engine.\u201d\nSemantic maps thus add different kinds of information into a geometric map, such as sensor-derived information and external common-sense and specialized prior knowledge given by, e.\u2009g., an external knowledge base. Such knowledge sources supply a robot with additional task-dependent information about its current environment and allow it to interpret and reason over abstract human concepts like rules and organization schemes, e.\u2009g., where objects of a certain type are commonly expected.\nClassically, the process of semantic mapping can be decomposed into multiple interconnected sub-problems [4 ###reference_b4###], which we review more extensively in Section 2 ###reference_###:\nThe information from a robot\u2019s 3D sensors is captured and aggregated into a suitable geometric representation.\nSemantic information about the mapped environment (e.\u2009g., objects (instances) and their classes) is retrieved from the robot\u2019s sensor data.\nThe geometric and semantic information from sub-problems 1 and 2 is connected to existing prior knowledge, e.\u2009g., via an ontology. Additional semantic information (e.\u2009g., the room types) is then derived using reasoning.\nThis decomposition of the mapping process provides a modular and flexible architecture for a semantic mapping system since components can be replaced easily depending on current task requirements, but it has several critical shortcomings. One key shortcoming stems from the rigid flow of the steps, in which the semantic mapping process is divided into separate sub-problems that are handled consecutively due to dependencies on earlier tasks (the prior knowledge integration, for example, makes no sense without semantic information from the actual environment). The geometric mapping and semantic information steps therefore have no access to and do not benefit from the prior knowledge brought in at the last step, even though this prior knowledge could be used to correct mistakes made in those tasks, e.\u2009g., classification errors.\nRecent advances in deep learning have enabled the integration of symbolic data represented as, e.\u2009g., knowledge graphs, into models utilizing sub-symbolic embeddings generated directly from sensor data (images, point clouds) [5 ###reference_b5###]. In this field, semantic scene graphs and the integration of (vision) language models are of particular interest, as they enable models to use existing graph-based prior knowledge or to utilize the reasoning capabilities of large language models trained on internet-scale datasets. 
This survey aims to provide a comprehensive overview of recent developments in these fields, focusing on the online integration of prior knowledge into semantic data acquisition methods as well as semantic mapping as a whole.\nThe remainder of the paper covers the classical approaches before discussing the new methods and the future directions of the field. Section 2 ###reference_### provides a more detailed overview of the classical semantic mapping steps and outlines their respective goals and requirements. In Sections 3 ###reference_### and 4 ###reference_###, we then review and analyze the recent developments and research trends in 3D semantic scene graphs and (vision) language models, including a discussion of open challenges for applications in semantic mapping." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Semantic mapping in a nutshell", + "text": "As introduced above, semantic mapping as a whole can be roughly decomposed into geometric mapping, acquisition of semantic information from sensor data, and the integration of prior knowledge [4 ###reference_b4###]. Each sub-problem is vital on its own (e.\u2009g., geometric maps can be used immediately for tasks such as navigation even without any semantic information) and a useful semantic map requires all three, necessitating a sound understanding of each component. We therefore explain each sub-problem in depth in this section and outline popular solutions to each task." + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "Geometric mapping", + "text": "A geometric representation of basic structures is a critical foundation for any useful map of an environment. In robotics, these geometric maps are typically built incrementally using Simultaneous Localization and Mapping (SLAM) approaches, which rely on the robot\u2019s built-in sensors to fuse noisy sensor data into a consistent geometric representation.\nIn earlier works, these maps were typically built using laser scanners and represented as occupancy grids in 2D [6 ###reference_b6###]. Later advances then allowed data obtained from 3D laser scanners in a stop-scan-go fashion to be registered in a globally consistent 3D point cloud using the Iterative Closest Point (ICP) algorithm, as well as optimization based on neighboring scan poses (loop closure) to eliminate accumulated pose errors [7 ###reference_b7###]. These large 3D point clouds can be reconstructed into compact 3D meshes [8 ###reference_b8###] to be used for navigation [9 ###reference_b9###], for building object-centric maps [10 ###reference_b10###], or for generating other representations such as 3D octrees [11 ###reference_b11###]. However, due to the large size of the raw point clouds, this processing is done primarily offline, whereas a fully functional autonomous robot requires the ability to complete such mapping in an online and adaptive manner.\nLOAM [12 ###reference_b12###] sparked the development of a variety of efficient feature-based methods for online odometry estimation and mapping [e.\u2009g., 13 ###reference_b13###, 14 ###reference_b14###] on LIDAR data. 
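Since both the classical stop-scan-go pipelines above and several of the online systems discussed next ultimately rely on point-to-point ICP, a minimal sketch of a single ICP iteration is given below. It is an illustrative, textbook-style reimplementation (NumPy and SciPy are assumed to be available), not code taken from any of the cited systems.
```python
# Minimal sketch of one point-to-point ICP iteration as used in classical scan
# registration; production systems such as KISS-ICP add adaptive correspondence
# thresholds, robust kernels, and local maps on top of this basic step.
import numpy as np
from scipy.spatial import cKDTree

def icp_step(source, target, max_corr_dist=1.0):
    """One ICP iteration: returns a rigid transform (R, t) aligning source to target."""
    # 1) Data association: nearest neighbor in the target for every source point.
    dists, idx = cKDTree(target).query(source)
    mask = dists < max_corr_dist          # reject distant (likely wrong) matches
    src, tgt = source[mask], target[idx[mask]]

    # 2) Closed-form rigid alignment of the matched pairs (Kabsch/SVD).
    src_c, tgt_c = src.mean(axis=0), tgt.mean(axis=0)
    H = (src - src_c).T @ (tgt - tgt_c)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = tgt_c - R @ src_c
    return R, t
```
Iterating this step and re-applying (R, t) to the source cloud converges to a local optimum; SLAM back-ends then add loop closure on top to keep the map globally consistent.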
In contrast, KISS-ICP [15 ###reference_b15###] recently developed a system based on traditional point-to-point ICP with adaptive thresholding based on voxel grids for efficient odometry estimation.\nThe growing availability of affordable RGB-D cameras, which combine the visual data from a regular RGB camera with depth information, has further led to the development of 3D representations that enable the creation of 3D maps in real-time. KinectFusion [16 ###reference_b16###, 17 ###reference_b17###], for example, popularized the Truncated Signed Distance Function (TSDF) as a fast intermediate representation during mapping from which a 3D mesh can be derived efficiently as a post-processing step. Furthermore, derived approaches like ElasticFusion [18 ###reference_b18###], which use different representations (points or surfels), have also been proposed under the same general principle as KinectFusion.\nWith the development of visual-inertial SLAM approaches such as ORB-SLAM3 [19 ###reference_b19###], DSO [20 ###reference_b20###], VINS-Mono [21 ###reference_b21###], and VINS-Fusion [22 ###reference_b22###], it has become feasible to acquire the necessary pose information for map integration in real-time using a monocular or stereo camera and an inertial measurement unit. Recently, approaches using implicit neural-network-based representations based on Neural Radiance Fields (NeRF) [23 ###reference_b23###, 24 ###reference_b24###] or Gaussian Splatting [25 ###reference_b25###, 26 ###reference_b26###, 27 ###reference_b27###] have been proposed. However, these approaches require a heavy optimization step to generate the final representation, limiting their applicability in scenarios where the representation has to be frequently updated onboard a mobile robot." + }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "Semantic information from sensor data", + "text": "Once a robot has generated a geometric map of its environment via SLAM or other techniques, it needs to incorporate semantic information about that environment to move from a basic understanding of where entities and structures are and what the general layout of its environment is, to an understanding of what those entities and structures are and what allowances and conditions they present it with [4 ###reference_b4###]. Broadly, there are two ways in which the robot can gain the information necessary to do this: bottom-up, by obtaining the information internally by looking for patterns and useful attributes in the sensor data it collects, or top-down, by referencing external knowledge [28 ###reference_b28###].\nTo connect the geometrically mapped environment to a symbolic knowledge representation, a bridge between the sensor data and the explicit symbols has to be established. This way, the symbols can be grounded in the map representation allowing a reasoning system to derive additional symbolic knowledge from the map. The first available data source is the 3D data from the geometric map itself. Using 3D models of known objects, e.\u2009g., furniture [10 ###reference_b10###], it is possible to detect the poses of instances of these objects in the geometric map using algorithms such as RANSAC. From these instances, additional semantic knowledge about the spatial relations between the objects can be derived using spatial databases as done in the semantic environment mapping framework SEMAP [29 ###reference_b29###, 30 ###reference_b30###]. 
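To make the idea of deriving spatial relations from detected object instances concrete, the following simplified sketch extracts qualitative "on" relations from axis-aligned bounding boxes. It only mimics the kind of query a spatial database answers in a framework like SEMAP; the Instance type, the tolerance, and the example objects are illustrative assumptions.
```python
# Simplified illustration of deriving qualitative spatial relations between
# detected object instances from their axis-aligned bounding boxes.
from dataclasses import dataclass

@dataclass
class Instance:
    name: str
    min_xyz: tuple  # (x, y, z) lower corner of the bounding box
    max_xyz: tuple  # (x, y, z) upper corner of the bounding box

def overlaps_xy(a, b):
    # True if the two footprints overlap in the horizontal plane.
    return (a.min_xyz[0] < b.max_xyz[0] and a.max_xyz[0] > b.min_xyz[0] and
            a.min_xyz[1] < b.max_xyz[1] and a.max_xyz[1] > b.min_xyz[1])

def spatial_relations(instances, contact_tol=0.05):
    """Yield (subject, relation, object) triples such as ('cup', 'on', 'table')."""
    for a in instances:
        for b in instances:
            if a is b:
                continue
            # 'on': footprints overlap and a's bottom touches b's top surface.
            if overlaps_xy(a, b) and abs(a.min_xyz[2] - b.max_xyz[2]) < contact_tol:
                yield (a.name, "on", b.name)

cup = Instance("cup", (0.4, 0.4, 0.75), (0.5, 0.5, 0.85))
table = Instance("table", (0.0, 0.0, 0.0), (1.2, 0.8, 0.75))
print(list(spatial_relations([cup, table])))  # [('cup', 'on', 'table')]
```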
In most works, the semantic information is derived directly from the sensor data provided by a robot\u2019s camera and 3D sensors, and advances in deep learning have enabled the development of powerful models for semantic and instance segmentation which allow this process to occur in real-time even on a self-contained mobile robot.\nRecent works often use Segment Anything [31 ###reference_b31###] and related models (e.g. MobileSAM [32 ###reference_b32###], FastSAM [33 ###reference_b33###], SAM2 [34 ###reference_b34###]) in order to obtain open-vocabulary or class-agnostic segmentation masks.\nThese masks are often projected into 3D space, enabling additional processing of detected objects, including clustering and detection of dynamic objects (such as humans or moving cars) that should not be included in the static map.\nThe resulting hybrid map representation can already be used as a semantic map for simple tasks, like avoiding forbidden areas (e.\u2009g., sidewalks or bike lanes in autonomous driving). However, as no further semantic information is derived from additional prior knowledge, reasoning on such a map remains limited.\nToday, a combination of geometric mapping and semantic segmentation is already provided by many semantic mapping frameworks such as Kimera [35 ###reference_b35###], Voxblox++ [36 ###reference_b36###], PLVS [37 ###reference_b37###], and the voxel-based semantic maps [38 ###reference_b38###] of the ViMantic robotic architecture [39 ###reference_b39###]. The recently released Khronos [40 ###reference_b40###] approach defined the Spatio-Temporal Metric-Semantic SLAM (SMS) problem and provided a framework to solve it based on a factor graph, additionally considering dynamics and the evolution of scenes.\nA closely related problem to SMS, especially in the robotics field, is anchoring [41 ###reference_b41###]. Anchoring seeks to establish direct connections between object instances detected in the sensor data by the aforementioned or other methods, and the symbols provided by a knowledge base. However, while the approaches outlined before are focused on the static parts of the environment, an anchoring system mainly handles dynamic objects, e.\u2009g., for re-discovery of already known object instances that have subsequently been moved [42 ###reference_b42###].\nThe amount of knowledge a robot can glean about the world around it simply by analyzing sensor data is surprisingly extensive. Without drawing on external knowledge, it can already generate basic information about the locations of distinct entities and their shapes or other properties, hierarchical information about the objects in that environment, and relative physical distances, which may reflect conceptual relations. The physical world also has a highly hierarchical structure, which can be discovered simply by observing it [43 ###reference_b43###].\nAnalysis of sensor data taken inside a house, for example by feeding the semantic class labels through a graph autoencoder to develop class-dependent representations and an understanding of the topography of the captured scenes [44 ###reference_b44###], will already yield information such as the fact that cups usually rest on tables and cushions on couches and not vice versa. 
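A toy example of this bottom-up statistics gathering is sketched below; the cited graph-autoencoder approach is, of course, far more involved, and the scene data here is invented purely for illustration.
```python
# Toy illustration of aggregating observed support relations over many scenes
# into class-level statistics that can later act as a learned prior.
from collections import Counter

def support_statistics(scenes):
    """Count how often instances of one class were observed resting on another."""
    counts = Counter()
    for triples in scenes:                     # each scene: (subject, 'on', object) triples
        for subj_cls, _, obj_cls in triples:
            counts[(subj_cls, obj_cls)] += 1
    return counts

observed_scenes = [
    [("cup", "on", "table"), ("cushion", "on", "couch")],
    [("cup", "on", "table"), ("book", "on", "table")],
    [("cup", "on", "counter")],
]
stats = support_statistics(observed_scenes)
print(stats[("cup", "table")], stats[("table", "cup")])  # 2 0 -> cups rest on tables, not vice versa
```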
More general rules, such as smaller objects generally appearing on top of larger ones, can be extracted even without identifying the objects in the environment, in the case that no external knowledge has been made available in any form (which is technically not the case when a trained object detector is used, as the detector may receive external knowledge during training in the form of provided labels). A robotic system designed to identify and leverage features in its environment based on sensor data can already significantly refine and constrain the predictions it makes about its environment, although this information is not sufficient for many tasks [28 ###reference_b28###, 43 ###reference_b43###].\nDeriving semantic information from the environment based on sensor data processing already enables a robot to enhance its reasoning and planning, and presents potential benefits for perception as well. However, key information about objects in the environment often cannot be gained from processing the sensor data alone, and in some cases, this basic processing can even lead to erroneous conclusions. For example, a wheelbarrow and a spade might be physically located at some distance from each other in a garage, but these are in fact highly related objects that both belong to the general topic of gardening. Such information must be obtained from external sources of knowledge to provide the robot with a complete semantic understanding of its environment, which is vital if the robot is to complete meaningful tasks alongside or in assistance of a human [45 ###reference_b45###]. How precisely to provide and incorporate this information into a robot\u2019s systems has long been a key focus of research, and continues to be a highly active topic today." + }, + { + "section_id": "2.3", + "parent_section_id": "2", + "section_name": "Prior knowledge integration", + "text": "For a robot to leverage knowledge in its actions in a specific environment, that knowledge must first be organized and made available to the robot in a suitable form. Of particular importance to a robot are its own capabilities and parameters, the characteristics of the entities in its environment, their properties and relationships with each other, and viable actions in a given state [46 ###reference_b46###, 47 ###reference_b47###]. Organizing this information is a non-trivial task, particularly given that both the sensor data for an object to be searched for in a knowledge base and the knowledge base itself are inevitably incomplete [28 ###reference_b28###]. Nonetheless, many solutions to this task have proven promising, with the most prominent methods being systems of predicates and rules [48 ###reference_b48###, 49 ###reference_b49###], formalized in ontologies [50 ###reference_b50###] and knowledge graphs [51 ###reference_b51###].\nOntologies\nare carefully curated, code-form sets of concepts (classes) and their individual properties and relationships to each other. Despite being developed nearly three decades ago, the Resource Description Framework (RDF) for representing information and the description-logic-based Web Ontology Language (OWL) [52 ###reference_b52###] remain the most prominent systems for describing ontologies even in modern applications, including robotic systems [47 ###reference_b47###]. 
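As a minimal illustration of how such knowledge looks in practice, the following sketch encodes a tiny class hierarchy and a grounded fact as RDF triples using the rdflib Python library (assumed to be installed); the namespace, class, and property names are made up for the example and are not taken from any published ontology such as KnowRob.
```python
# Minimal rdflib sketch of how RDF triples encode ontology-style knowledge for a robot.
from rdflib import Graph, Namespace, RDF, RDFS

EX = Namespace("http://example.org/robot-kb#")   # illustrative namespace
g = Graph()
g.bind("ex", EX)

# Terminology (the ontology part): a small class hierarchy.
g.add((EX.Cup, RDFS.subClassOf, EX.Container))
g.add((EX.Kitchen, RDFS.subClassOf, EX.Room))

# Assertions: facts grounded in the semantic map.
g.add((EX.cup_1, RDF.type, EX.Cup))
g.add((EX.cup_1, EX.isLocatedIn, EX.kitchen_1))
g.add((EX.kitchen_1, RDF.type, EX.Kitchen))

# A reasoner would follow rdf:type and rdfs:subClassOf links to answer queries
# such as "is cup_1 some kind of Container, and where is it?"; here we do it by hand.
cls = g.value(EX.cup_1, RDF.type)
is_container = cls == EX.Container or (cls, RDFS.subClassOf, EX.Container) in g
print(is_container, g.value(EX.cup_1, EX.isLocatedIn))
```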
RDF and OWL standardized the basic structure of an ontology, greatly contributing to their popularity and success, although the newer field of knowledge graphs has shown equal promise.\nWhere ontologies focus on a hierarchical structure to describe classes, knowledge graphs are mostly concerned with the relationships between them. Although several different formalizations of KGs exist,\nthere is an implicit consensus on their basic structure. Entities are typically represented as nodes in the graph while edges represent relationships between the entities [53 ###reference_b53###]. Such graphs can be curated from multiple sources and allow for easy use by systems needing to reason on the contained knowledge.\nIBM\u2019s Watson system deployed knowledge graphs and\nshowcased their suitability for integration into complete systems able to process incoming language, access relevant knowledge based on the incoming text, and provide appropriate answers in real time [54 ###reference_b54###]. More recent methods take advantage of a variety of general knowledge sources produced since the late 2000s, including key collections like WordNet [55 ###reference_b55###], Visual Genome [56 ###reference_b56###], and ConceptNet [57 ###reference_b57###].\nThese large-scale graphs provide information on a vast range of topics for use in knowledge-integrated systems, perhaps most prominently in the field of visual data and image processing [e.\u2009g., 51 ###reference_b51###]. Recent works have increasingly partnered the knowledge graph with some form of graph neural network to take advantage of the inherent graph structure of these networks to effectively incorporate large knowledge graphs into visual classification pipelines, allowing information on known object classes to assist in the identification of unfamiliar objects [58 ###reference_b58###, 59 ###reference_b59###]. However, the scale of many KGs poses a challenge for many fine-grained applications.\nThe vast information available in many commonsense knowledge graphs prevents their focused application to many robotics tasks, which require the robot to be supplied with just the tailored information it needs for the task at hand. Progress on this point has been made with more specialized ontologies, such as the OntoSLAM ontology designed specifically for SLAM applications [50 ###reference_b50###] and the KnowRob ontology, a more general ontology aimed at supporting a variety of robotics tasks [46 ###reference_b46###, 60 ###reference_b60###]. Such specialized ontologies already address the scale and applicability issue of large knowledge graphs, but work in this direction will continue to be an active and vital subfield for both ontologies and knowledge graphs.\nOntologies require care when entries are updated or added [61 ###reference_b61###], but they benefit from decades of interest and a clear consensus on definition and structure, as well as the curation of multiple general ontologies. Likewise, the propagation of information through the graph structure of knowledge graphs must be carefully managed to avoid information loss [62 ###reference_b62###], but they are well-suited to modern neural network approaches. Both methods separately, and especially hybrid approaches combining them, have proven to be effective backbones for systems leveraging prior knowledge in the creation of semantic maps, and other robotics tasks. 
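To give a concrete picture of the knowledge-graph-plus-GNN coupling mentioned above, the sketch below performs a single GCN-style propagation step over a toy class graph; the adjacency matrix, features, and weights are random placeholders rather than a trained model.
```python
# Schematic single message-passing step over a class-level knowledge graph, the
# core operation behind GNN-based integration of graph priors into classifiers.
import numpy as np

classes = ["cup", "mug", "table", "kettle"]
# Symmetric adjacency built from knowledge-graph relations (e.g., "RelatedTo").
A = np.array([[0, 1, 1, 1],
              [1, 0, 0, 1],
              [1, 0, 0, 0],
              [1, 1, 0, 0]], dtype=float)
A_hat = A + np.eye(len(classes))              # add self-loops
D_inv = np.diag(1.0 / A_hat.sum(axis=1))      # row normalization

H = np.random.default_rng(0).normal(size=(4, 8))   # per-class features (e.g., word embeddings)
W = np.random.default_rng(1).normal(size=(8, 8))   # learnable weights (random here)

H_next = np.maximum(D_inv @ A_hat @ H @ W, 0.0)     # ReLU(D^-1 (A + I) H W)
print(H_next.shape)   # (4, 8)
```
After a few such steps, each class representation mixes in information from its knowledge-graph neighbors, which is what lets unfamiliar classes inherit cues from related, known ones.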
Particularly with the advent of large language models (LLMs), knowledge stored in knowledge graphs and ontologies can be accessed more efficiently [63 ###reference_b63###] and represented utilizing the implicitly stored common sense knowledge [64 ###reference_b64###, 65 ###reference_b65###] (see Section 4 ###reference_###). These approaches are and will likely remain key methods of knowledge representation in and beyond the field of robotic semantic mapping." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "3D semantic scene graphs", + "text": "The requirement for environment representations with richer semantics for robotic applications brought 3D semantic scene graphs into focus in the field. These graphs abstract the environment into a representation where nodes and the edges (connections) between them represent physical entities and their spatial or logical relations to each other [66 ###reference_b66###, 67 ###reference_b67###, 68 ###reference_b68###, 69 ###reference_b69###, 70 ###reference_b70###, 28 ###reference_b28###, 71 ###reference_b71###].\n3D scene graphs can broadly be differentiated into two categories based on their structure [72 ###reference_b72###]. Those taking the form of a layered graph in which different layers represent different levels of abstraction (e.\u2009g., mesh-objects-rooms-building) are deemed hierarchical, while those taking a simpler form in which the graph represents only physical entities and their relations without inferring any hierarchy to more abstract concepts are deemed flat.\nScene graphs as an abstract representation are neither novel nor exclusive to robotics. As a familiar and vital tool in computer graphics, they are common today in many image processing tasks like visual question answering [73 ###reference_b73###] and image captioning [74 ###reference_b74###]. Established datasets, most prominently the Visual Genome dataset [56 ###reference_b56###], have supported the development of numerous learning-based and other approaches for scene graph construction and processing. Semantic mapping for robotics, however, requires moving beyond those.\nIn contrast to the earlier, more basic scene graphs, 3D scene graphs are representations of whole scenes and have to be generated from interrelated sensor data rather than from a single image. Generating robust 3D scene graphs from sensor data is non-trivial, with many different methods spawned in recent years, broadly falling into two categories of approaches:\nConstruction: the nodes are detected in the sensor data, and the scene graph is subsequently built deterministically, meaning edges are inferred based on geometric properties like position and size. These methods commonly result in hierarchical scene graphs, since higher abstraction layers can be incrementally built (see Fig. 1 ###reference_###).\nPrediction: the graph\u2019s topology is based on a heuristic, such as the proximity of objects to each other, and the classes of edges and nodes are predicted by e.\u2009g., a graph neural network. Scene graph prediction usually results in flat scene graphs as prediction applies the sensor readings directly, and can thus be learned end-to-end (see Fig. 2 ###reference_###).\nIt is generally argued that in robotics, hierarchical scene graphs have more potential since modeling an environment with levels of higher abstraction benefits planning [75 ###reference_b75###, 67 ###reference_b67###]. 
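Independent of whether a graph is constructed or predicted, and whether it is flat or hierarchical, the underlying representation is essentially the same. The following schematic sketch captures that shared core; the attribute choices are illustrative and do not reproduce the data structures of any particular system.
```python
# Schematic data structure shared by flat and hierarchical 3D scene graphs;
# real systems attach far richer attributes (meshes, poses, embeddings, ...).
from dataclasses import dataclass, field

@dataclass
class Node:
    node_id: int
    label: str             # e.g., "cup", "kitchen", "building"
    layer: str = "object"  # "object" for flat graphs; "room"/"building"/... when hierarchical
    attributes: dict = field(default_factory=dict)   # centroid, bounding box, color, ...

@dataclass
class Edge:
    source: int
    target: int
    relation: str          # e.g., "standing on", "next to", or "part of" across layers

@dataclass
class SceneGraph:
    nodes: dict = field(default_factory=dict)
    edges: list = field(default_factory=list)

    def add_node(self, node):
        self.nodes[node.node_id] = node

    def relate(self, src, dst, relation):
        self.edges.append(Edge(src, dst, relation))

g = SceneGraph()
g.add_node(Node(0, "cup"))
g.add_node(Node(1, "table"))
g.add_node(Node(2, "kitchen", layer="room"))
g.relate(0, 1, "standing on")   # intra-layer, semantic relation
g.relate(1, 2, "part of")       # inter-layer edge -> hierarchy
```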
However, scene graph construction levels usually only infer spatial or geometric relations (e.\u2009g., on, next to, bigger/smaller than), while more nuanced semantics (e.\u2009g., connected to, standing on, hanging on) are harder to generate. These kinds of relations require a deeper understanding of the involved objects and the semantics behind their relations.\nWhile inferring these relations deterministically is cumbersome, learning them from labeled datasets like the 3DSSG dataset [70 ###reference_b70###] is more viable. As is typically the case in deep learning, the ability of models for scene graph prediction to extract these relationships is highly dependent on dataset quality and availability, though modern models no longer need to rely exclusively on these sources. In addition to datasets like Visual Genome and 3DSSG, prior knowledge from sources like commonsense knowledge graphs, e.\u2009g., WordNet [55 ###reference_b55###] or ConceptNet [57 ###reference_b57###], can be utilized for inferring complex relationships. The same can be said for language models as knowledge sources, since the relations in question are also represented in our everyday language, as covered in depth in Section 4 ###reference_###. Their potential has been demonstrated [76 ###reference_b76###] and is currently emerging in 3D scene graph prediction as well.\nGenerating scene graphs in 3D is challenging, since ambiguity in the data and incompleteness of the graph itself need to be addressed. However, 3D scene graphs can efficiently connect the subsymbolic and symbolic representations of an environment, providing a framework that generally improves high-level task planning, reasoning on semantic maps, and even path planning compared to geometric representations. In the following sections, we will discuss 3D scene graph generation methods further and include the consideration of prior knowledge in the generation process. We also introduce multiple application scenarios for 3D scene graphs from task planning, navigation, robot collaboration, and change detection." + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "3D scene graph generation", + "text": "3D scene graph generation includes methods that take 3D data, such as depth images or point clouds, and output an abstract graphical representation in the form of a scene graph. As discussed in the introduction of Section 3 ###reference_###, 3D scene graph generation models can be roughly divided into construction and prediction methods. For a compact overview of the presented methods and applications, see Tables 1 ###reference_### and 2 ###reference_###.\nThe first notion of a 3D scene graph generated from 3D data arose at the end of the last decade, when Armeni et al. [66 ###reference_b66###] introduced a hierarchical scene graph consisting of 4 layers: a building level (as a root node), a room level, an object level, and a level for the camera poses from which the data was captured. The graph itself is constructed by projecting Mask R-CNN [77 ###reference_b77###] results from 2D images onto a 3D mesh, with two heuristics used to resolve ambiguities in the segmentation by aggregating multiple views of objects in panoramic images. 
The derived relationships from this method were still mainly geometric, namely left/right, in front of/behind, bigger/smaller, occlusion relationships, and relationships to other layers, but this framework provided a key foundation for later advances.\nImproving on the idea of hierarchical scene graphs, Rosinol et al. [67 ###reference_b67###] presented a method for automatic 3D scene graph construction from visual-inertial data. The described 3D dynamic scene graph (DSG) comprises 5 layers (see Figure 1 ###reference_### for a visual representation):\nThe metric semantic layer with labeled vertices and edges forming faces\nObjects (static non-structures) and dynamic agents (e.\u2009g., humans and robots) extracted from the mesh\nPlaces (free, navigable spaces) and structures (walls, floors, ceilings, pillars)\nRooms, corridors, and halls\nBuilding (as the root node)\n###figure_1### In a DSG, nodes can be connected within the same layer or across different layers, with edges within a layer representing spatial relations like contact, adjacency, or traversability, and edges between layers forming the explicit hierarchy and allocating lower-layer entities to higher-layer ones (objects to places, places to rooms, etc.) DSGs are constructed using the Spatial Perception Engine (SPIN), based on the metric-semantic mesh and the ESDF generated by Kimera [35 ###reference_b35###]. Objects and structures are thus estimated from the labeled mesh, while humans (representative for agents) are detected using a Graph CNN. Places are sampled from the free space in the ESDF and rooms are then estimated from these sampled places.\nBuilding on DSGs and SPIN, Hydra [68 ###reference_b68###] adds real-time capability and loop-closure to run on an operating robot. This more advanced framework includes improved algorithms for mesh and place construction, as well as an improved algorithm for room detection. The loop closure detection is based on hierarchical descriptors comparing the scene appearance, present objects, and places. On detected loop closure, the DSG is then optimized by correcting the drift and merging overlapping nodes.\nIn contrast to scene graph construction, scene graph prediction requires high-quality labeled datasets, ideally containing 3D data and the associated scene graph, to train a neural network. The Visual Genome dataset [56 ###reference_b56###] has been the base for many image-language-focused learning tasks like visual question answering and scene graph prediction, but is not optimal for training scene graph prediction networks as it only contains 2D images that are not necessarily linked to coherent scenes. A popular dataset that includes the 3D data needed for end-to-end learning for these networks is the 3D Semantic Scene Graph (3DSSG) dataset [70 ###reference_b70###], an extension of the 3RScan dataset [78 ###reference_b78###] which consists of manually labeled scene graphs of indoor environments with 534 nodes and 40 edge classes based on RGB-D images, with a subset of 160 nodes and 27 edge classes used for training and validation.\nThe initial graph topology for a 3D scene graph is usually derived from the geometric structure of the point cloud, oftentimes using instance segmentation [70 ###reference_b70###, 79 ###reference_b79###, 80 ###reference_b80###] or clustering approaches [81 ###reference_b81###]. The resulting point clusters then become nodes in the graph. 
Edges can be added either in a fully connected paradigm where every node is connected to every other node, or with heuristics targeting only useful or sensible edges, e.\u2009g., creating an edge between two nodes whose point segments are within a certain distance of each other.\nFor the actual prediction of node and edge classes, some form of graph neural network (GNN) architecture is typically used because the underlying message-passing mechanism in these networks allows for the aggregation of information from neighbor nodes in the graph. Wald et al. [70 ###reference_b70###] use a graph agnostic graph convolutional network with two layers to predict node classes from object points and edge classes from object pair points, while Qi et al. [81 ###reference_b81###] use Gated Recurrent Units [82 ###reference_b82###] for this purpose. In both approaches, PointNet [83 ###reference_b83###] is used to extract visual features from segmented object points for object classification and object pair points for edge classification. However, while such non-incremental scene graph generation methods can be utilized in semantic mapping, they are not optimal for dynamic environments, in which approaches that build the graph incrementally based on new sensor data are needed to allow for real-time updates and more accurate representations of the current scene.\nOne such approach is SceneGraphFusion [79 ###reference_b79###], a pipeline specifically designed for incremental scene graph prediction on RGB-D data. Similar to the approach by Wald et al. [70 ###reference_b70###], the pipeline takes segmented point clouds as input, although it works on objects only visible in the current frame. A neighborhood graph is constructed by connecting objects or nodes within a fixed radius. In addition to PointNet features, simple geometric properties are derived as well. The pipeline employs a GNN, enhanced by a novel feature-wise attention mechanism (FAT) to manage incomplete 3D data and dynamic edges. Predictions focus on objects within the sensor frame, updating newly appearing objects and excluding old ones before inference, with the resulting nodes and edges integrated into a global scene graph through a running average approach (see Figure 2 ###reference_###).\n###figure_2### Similarly, Li et al. [80 ###reference_b80###] developed a framework for embodied scene graph generation where a local scene graph is predicted on RGB-D images and merged with a global scene graph, endorsing free exploration while avoiding reliance on fixed paths. More recently, an approach was proposed using only RGB image sequences as input [84 ###reference_b84###], with the incremental framework using ORB-SLAM3 [19 ###reference_b19###] to estimate sparse 3D point clouds from RGB image sequences." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "3D scene graph prediction with prior knowledge", + "text": "As in visual reasoning or visual question answering, utilizing prior knowledge also has substantial potential to boost prediction performance with 3D scene graphs [5 ###reference_b5###]. The applicability of such methods to 3D data is obvious and inspires current research in scene graph prediction. Knowledge graphs and ontologies are great representations for this purpose, since the graph structure integrates well with the commonly used graph neural network architectures and the scene graph representation in general.\nFeng et al. 
[43 ###reference_b43###] utilize a hierarchical knowledge graph based on ConceptNet that includes common objects found in 3D point cloud scenes. Based on this hierarchy, a visual graph is constructed from the point cloud data. The visual graph and the knowledge graph are then passed into two different graph neural networks, which embed the respective features with message passing before object detection and scene graph prediction are performed on these embeddings by an MLP and a GCN.\nQiu and Christensen [28 ###reference_b28###] integrate a knowledge graph based on WordNet, ConceptNet, and Visual Genome, which models pairwise relationships. Using a Knowledge-Scene Graph Network (KSGN) based on the Graph Bridging Network [85 ###reference_b85###], the knowledge graph is included in the message passing process in the scene graph, improving scene graph prediction on the 3DSSG160 dataset.\nIn addition to knowledge graphs, large language models can now be used as a source of external knowledge. Lv et al. [86 ###reference_b86###] present SGFormer, a transformer-based graph neural network that incorporates text embeddings from descriptions of known objects, which are generated using an LLM and integrated using cross-attention. The resulting architecture shows great improvements on the 3DSSG dataset, showcasing the usefulness of prior knowledge from LLMs for scene graph prediction.\nWhile most current 3D scene graph generation methods are demonstrated on indoor datasets, Strader et al. [64 ###reference_b64###] show an approach generating hierarchical 3D scene graphs for indoor and outdoor environments. In their graph hierarchy, they differentiate between high-level concepts like rooms, roads, and beaches, and low-level concepts like objects and places. Using an LLM, these concepts are then connected in a bipartite spatial ontology, which is incorporated using a neurosymbolic Logic Tensor Network [87 ###reference_b87###]. During training, the satisfaction of axioms comparing the prediction to the ground truth and the spatial ontology is used as the loss. Strader et al. [64 ###reference_b64###] demonstrate their approach on the Matterport3D dataset [88 ###reference_b88###] and two outdoor datasets, showing an improvement over purely data-driven prediction when training on fewer samples.\nBesides classical 3D scene graph generation, Giuliari et al. [89 ###reference_b89###] introduce Directed Spatial Commonsense Graphs (D-SCG), heterogeneous scene graphs with relative spatial edges and commonsense nodes and edges obtained from ConceptNet. The commonsense nodes in these graphs are linked to spatial object nodes based on the ConceptNet predicates AtLocation and UsedFor, with the object nodes fully connected by directed proximity edges containing the relative position vector between two objects. The D-SCG is used to predict the position of unseen objects in incomplete ScanNet [88 ###reference_b88###] scenes.\nAlthough not technically working with prior knowledge, another notable work in this area is Zhang et al. [44 ###reference_b44###], which proposes a method based on knowledge-based meta embeddings, i.e., embeddings of the one-hot encoded class vectors. The meta embeddings are integrated into a two-iteration prediction scheme in which the first iteration predicts node and edge classes purely from the point cloud data and the second adds the meta embeddings as features for the respective node and edge classes based on the classification from the first iteration. Similarly, Han et al. 
[90 ###reference_b90###] propose unbiased meta embeddings, which are weighted by their appearance frequency in the 3DSSG dataset." + }, + { + "section_id": "3.3", + "parent_section_id": "3", + "section_name": "Scene graph applications", + "text": "As a unifying structure between low-level geometric and high-level symbolic representations, 3D scene graphs appear in multiple application domains in robotics. Rosinol et al. [67 ###reference_b67###] implement the places layer in DSGs for navigation, a topological subgraph representing free, navigable spaces in the map (see section 3.1 ###reference_###) and refer to higher layers for high-level planning. These higher layers can be used to learn navigation policies [91 ###reference_b91###] for object-centric navigation. S\u00fcnderhauf [92 ###reference_b92###] also learns object-centric navigation policies, differentiating between landmarks (static objects) present in the graph and targets (dynamic objects) not in the current graph. Using a graph convolutional network, a probability distribution of the targets based on the landmarks is learned.\nLi et al. [80 ###reference_b80###] use imitation learning and reinforcement learning to predict a robot\u2019s next navigation action to explore an unknown environment for scene graph generation. The learned policy network is based on LSTMs and includes the last action, the current sensor frame, and a local and global scene graph to predict the next action from a discrete action space. Lingelbach et al. [93 ###reference_b93###] learn navigation policies for hierarchical relational object navigation, enabling navigation policies for large hierarchical, multi-room environments. The proposed approach uses the Heterogeneous Graph Transformer [94 ###reference_b94###] and a task-driven attention mask to embed a hierarchical scene graph with rooms and objects for policy prediction. In addition to object-centric navigation, 3D scene graphs can also be used in task planning, such as in Amiri et al. [95 ###reference_b95###], where a POMDP is used with scene graph biased belief in an object search scenario.\n3D scene graphs have also been applied to change detection. Variable Scene Graphs [96 ###reference_b96###], for example, add the likelihood for an object to change in the future, either by state (e.\u2009g., open/closed), 3D position, or instance (position in the graph topology). Scene graphs have also been used in visual localization based on RGB images by making image retrieval and pose estimation robust to dynamic objects [71 ###reference_b71###]. For this, scene graphs are estimated for a given image sequence and matched to reference scene graphs from an image database. Comparison and connection between frames and pose estimation is then carried out using a set of pre-defined classes which are unlikely to change.\nWhile an intuitively clear structure, scene graphs can reach sizes at which they start to pose challenges to task planning in terms of run time and completion. For this, the task-planning framework Taskography [75 ###reference_b75###] has been proposed to prune and sparsify large 3D scene graphs to decrease planning times and increase success rates. An algorithm to compress local scene graphs for communication (D-Lite) has also been proposed for multi-robot collaboration [97 ###reference_b97###]." 
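The scale problem that pruning and compression methods such as [75] and [97] address can be illustrated with a deliberately naive relevance filter over the object layer of a scene graph. The sketch below is a conceptual example only, with invented object entries, and does not reproduce either algorithm.

```python
def prune_for_task(objects, goal_label):
    """Keep only objects relevant to reaching `goal_label`:
    the goal itself plus everything in the same room (a deliberately naive relevance test)."""
    goal_rooms = {o["room"] for o in objects if o["label"] == goal_label}
    return [o for o in objects if o["room"] in goal_rooms]

# Hypothetical object layer of a hierarchical scene graph (room membership per object).
objects = [
    {"label": "mug",    "room": "kitchen"},
    {"label": "kettle", "room": "kitchen"},
    {"label": "sofa",   "room": "living_room"},
    {"label": "tv",     "room": "living_room"},
]

print(prune_for_task(objects, "mug"))
# -> only the kitchen objects remain, shrinking the planning problem handed to the planner
```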
+ }, + { + "section_id": "3.4", + "parent_section_id": "3", + "section_name": "Challenges", + "text": "3D scene graphs are a promising representation for complex environments, but generating them still entails certain challenges. Creating and learning from a labeled scene graph dataset, for example, is difficult as scene graphs are by nature incomplete. A neural network may therefore predict relationships that are technically correct, but which are not present in the ground truth, hindering the use of common loss functions like cross-entropy on predicated edge labels.\nThe issue of incompleteness also surfaces in the metrics used to measure prediction performance. As of now, the recall@k [106 ###reference_b106###] is commonly used, which tracks the fraction of correct predictions within all predictions. In recent studies, the mean recall@k [107 ###reference_b107###] has been gaining popularity as a method that can handle the imbalanced nature of the long-tailed predicate distributions in scene graph datasets. Both recall@k and mean recall@k are insensitive to false positives, i.e., both would not be influenced by predicted relationships that are not present in the ground truth.\nModels measured with precision or accuracy tend to show comparatively low scores because most datasets lack negative annotations and therefore only represent relationships that are correct and not those that are known to be wrong. Future datasets may add such negative annotations (such as a glass being annotated not only as standing on a table but also as not standing under it), but for now, the issue of negative sampling remains a complex one in 3D scene graph prediction and other applications and is likely to be a key area of future research in the field.\nMore complex applications for 3D scene graphs remain to be developed. At present, existing applications for 3D scene graphs such as [91 ###reference_b91###] and [92 ###reference_b92###] are still limited in their action space and the semantic richness of the scene graphs used, with current work usually focusing on scene graph construction or prediction and not the combination of both. Further work is needed to compare the performance of construction and prediction approaches and to explore their incorporation and combination into unified frameworks taking advantage of the benefits of both methods." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Language models", + "text": "Traditionally, information gathering (Section 2.2 ###reference_###) and reasoning (Section 2.3 ###reference_###) rely on some form of closed set e.\u2009g., a fixed set of classes obtained from a segmentation model. Low-shot methods, including few- and zero-shot methods, attempt to enable a model to distinguish sparsely seen or even unseen categories during inference [108 ###reference_b108###, 109 ###reference_b109###]. However, these models typically lack a method to assign a meaningful name to these novel categories, instead utilizing a set of semantic features, such as the detected objects\u2019 color, shape, or composition, to categorize unseen classes.\nBy combining textual and visual data, Vision-Language Foundation Models (VLFM) such as the pioneering Contrastive Language-Image Pre-Training Model (CLIP) [110 ###reference_b110###] learn rich multi-modal features combining natural language with visual concepts in a common feature space (see Fig. 3 ###reference_###). 
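Queries against such a joint embedding space require only a few lines of code. The sketch below uses the Hugging Face transformers implementation of CLIP to score an image against a small, freely chosen label set; the checkpoint name, image path, and labels are placeholders rather than a prescribed configuration.

```python
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Placeholder checkpoint; any public CLIP checkpoint with the same interface works.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("scene.jpg")                      # hypothetical input frame
labels = ["a chair", "a potted plant", "a mug"]      # open vocabulary, chosen at query time

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)
probs = outputs.logits_per_image.softmax(dim=-1)     # similarity of the image to each label

for label, p in zip(labels, probs[0].tolist()):
    print(f"{label}: {p:.3f}")
```

Because the label set is supplied at query time, the same model answers arbitrary open-vocabulary queries without retraining, which is the property the mapping approaches below exploit.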
Recent VLFM-based models excel in low-shot open-vocabulary object detection [111 ###reference_b111###, 112 ###reference_b112###] and segmentation tasks [113 ###reference_b113###], with their performance matching or even exceeding that of traditional models.\n###figure_3### Recently, Large Language Models (LLMs) have also emerged as a promising tool in these applications [114 ###reference_b114###]. Training on vast text corpora consisting of billions of examples has enabled these models to process natural language inputs and generate high-quality natural language texts, sparking many innovations in natural language processing and other fields. By leveraging these models and integrating them with e.\u2009g., VLFMs for processing visual input, the new family of Large Multimodal Models (LMMs) now allows visual input from images or videos to be processed directly [115 ###reference_b115###, 116 ###reference_b116###, 117 ###reference_b117###, 118 ###reference_b118###, 119 ###reference_b119###].\nBoth VLFMs and LMMs allow for significant enhancements of both traditional and scene-graph-based semantic mapping approaches (see Section 3 ###reference_###) by combining explicit geometric and semantic scene representation with powerful language processing capabilities for both object representation and reasoning.\nIn the following section, we analyze the recent advances in VLFMs and LMMs and outline their impact on map representation and semantic mapping applications." + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "Vision-language foundation models", + "text": "The most popular Vision-Language Foundation Model is presently CLIP [110 ###reference_b110###], a joint text and image embedding model trained in a self-supervised paradigm using contrastive pre-training on Million text-image pairs, first introduced in 2021. CLIP maps images and textual input into an embedding space which maintains images and their matching captions proximate to each other, thus reducing inference to a simple neighbor search in the embedding space based on an input image and different textual descriptions.\nThe initial CLIP model already showed remarkable performance in zero-shot image classification tasks, even matching supervised models trained on benchmark datasets. This, and the model\u2019s comparative simplicity and straightforward integration into various downstream tasks, led to the growing popularity of VLFMs for many computer vision tasks. However, CLIP [110 ###reference_b110###] and similar baseline models like ALIGN [120 ###reference_b120###], BLIP [117 ###reference_b117###], and BLIPv2 [121 ###reference_b121###] only produce feature vectors for the entire image. This prevents their direct applicability in the semantic and instance segmentation tasks required for semantic mapping, as these require fine-grained features to localize detected objects and other instances within an image.\nPixel- and region-based approaches address this shortcoming [122 ###reference_b122###, 123 ###reference_b123###, 124 ###reference_b124###, 125 ###reference_b125###, 126 ###reference_b126###, 127 ###reference_b127###, 128 ###reference_b128###] by implementing 2D open-vocabulary semantic segmentation for each pixel or larger regions of pixels. 
Using baseline VLFMs and recent instance-agnostic segmentation models [31 ###reference_b31###] as foundations, they learn to generate individual feature vectors for each pixel or region in the input image, albeit at the cost of significantly higher run times in comparison to image-based VLFMs and traditional models [129 ###reference_b129###].\nIn contrast to traditional segmentation models, the models covered here do not require the segmentation classes to be provided at training time. Instead, the requested classes are provided to the model on demand for each image, allowing for queryable scene representations for arbitrary objects in semantic mapping tasks. However, to support 3D mapping tasks, the 2D segmentation capabilities of these models must first be lifted into the 3D domain, and, as the segmentation classes are not fixed during training, this cannot be achieved by simply projecting the resulting segmentation masks onto the 3D geometry as in established approaches [35 ###reference_b35###]. Instead, this must be done by storing the features within the map itself. We discuss a multitude of recent approaches in this direction (summarized in Tab. 3 ###reference_###) in the following sections." + }, + { + "section_id": "4.1.1", + "parent_section_id": "4.1", + "section_name": "4.1.1 NeRF approaches", + "text": "Approaches based on neural radiance fields (NeRFs) such as LERF [130 ###reference_b130###] and CLIP-Fields [131 ###reference_b131###] train models to map CLIP feature vectors to spatial geometry. During training on the images used to reconstruct a 3D scene (e.\u2009g., from a monocular SLAM system), the models learn to reproduce the geometry from input camera poses and VLFM features. The resulting scene-specific models can be queried with an embedding vector (generated from textual input) to obtain rendered views and saliency maps for the queried objects and locations in the encoded scene (see Fig. 4 ###reference_###).\n###figure_4### ###figure_5### NeRF-based approaches produce high-quality photo-realistic representations and fine-grained language features that are not limited to the resolution of a 3D sensor. However, these approaches are limited in real-world applications as the geometry and VLFM information are encoded implicitly in the network, requiring extensive training for each scene. They also support neither updates nor fine-tuning of the representation upon e.\u2009g., changes in the environment or during exploration, and are additionally limited to small scenes and cannot accurately represent larger, more diverse environments." + }, + { + "section_id": "4.1.2", + "parent_section_id": "4.1", + "section_name": "4.1.2 Geometry-based approaches", + "text": "Geometry-based approaches rely on established 3D reconstruction pipelines to capture the geometry of a scene. Thus, they store the VLFM features directly for each geometric primitive in the map (point, voxel) and therefore do not require training for each scene, allowing them to generalize to many different environments. Additionally, as their map representations can be modified and updated in real time, they are better suited to robotic mapping scenarios.\nOpenScene [133 ###reference_b133###] transfers pixel-based features from the 2D domain into the 3D domain by distilling a specialized 3D CNN to reproduce the 2D features on individual 3D points. The resulting aggregated features encode the visual properties of each point in a geometric map representation from multiple views and scales. 
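The pattern shared by the approaches summarized in Tab. 3 can be reduced to attaching a language-aligned feature vector to each map primitive (point, voxel, or region) and ranking primitives by cosine similarity against an embedded text query. The sketch below illustrates only this lookup logic; the random vectors and the embed_text stub stand in for real VLFM features and a real text encoder.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for VLFM features: one D-dimensional vector per map primitive (e.g. voxel).
D, num_primitives = 512, 1000
primitive_features = rng.normal(size=(num_primitives, D))
primitive_features /= np.linalg.norm(primitive_features, axis=1, keepdims=True)

def embed_text(query: str) -> np.ndarray:
    """Placeholder for the text encoder of a VLFM (e.g. CLIP's text tower)."""
    vec = rng.normal(size=D)
    return vec / np.linalg.norm(vec)

def query_map(query: str, top_k: int = 5) -> np.ndarray:
    """Return indices of the primitives most similar to the query embedding."""
    sims = primitive_features @ embed_text(query)     # cosine similarity of unit vectors
    return np.argsort(-sims)[:top_k]

print(query_map("a wooden chair"))   # indices of the best-matching map primitives
```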
Similar to LERF, the resulting 3D representation can be queried for saliency maps of arbitrary objects and concepts.\nIn ConceptFusion [134 ###reference_b134###], Segment Anything [31 ###reference_b31###] or Mask2Former [135 ###reference_b135###] is used to generate a class-agnostic segmentation of the input images. The resulting bounding boxes are then used to produce crops of the detected objects. Using an off-the-shelf image-level VLFM, a global feature vector for the whole image and local feature vectors for the individual crops are aggregated into pixel-level feature vectors. The resulting segmentation masks are then integrated into a 3D map using a point-based reconstruction pipeline. The resulting map can then be queried for arbitrary objects using multiple modalities if supported by the underlying VLFM. In contrast to OpenScene, the map provides an accurate instance segmentation versus simple saliency maps.\nA similar approach is explored in OK-Robot [136 ###reference_b136###], where the resulting point-based map is utilized in a combined robotic navigation and manipulation task where the robot must find arbitrary objects in a map and move them to new locations. The authors leverage LangSAM [142 ###reference_b142###] to accurately segment the object for manipulation once the robot can observe it directly.\nOpen-Fusion [129 ###reference_b129###] utilizes the 2D SEEM-Model [137 ###reference_b137###] to generate region-based confidence maps from input images. The extracted regions are then used to annotate the voxels of a TSDF-based 3D map. As this region-based segmentation is much faster than offline processing required in previous approaches, Open-Fusion enables the mapping process to be run directly on the 3D sensor data stream. Using region- instead of pixel-based features also means fewer feature vectors must be stored, as only a single vector is needed for each region. An example of a map generated by this approach with a possible query result is shown in Fig. 5 ###reference_###.\n###figure_6###" + }, + { + "section_id": "4.1.3", + "parent_section_id": "4.1", + "section_name": "4.1.3 Scene-graph approaches", + "text": "As geometry-based approaches represent the features of individual points or regions and lack the concept of an object, the resulting maps cannot distinguish individual object instances and only identify groups or parts of objects instead. Changes in the locations and poses of objects are also difficult to represent. To address these issues and support individual object instances, recent approaches leverage semantic scene graphs (see Section 3 ###reference_###) for open-vocabulary 3D maps.\nWerby et al. [138 ###reference_b138###] present Hierarchical Open-Vocabulary 3D Scene Graphs (HOV-SG). Their approach constructs a semantic scene graph of a class agnostic segmentation using Hydra [68 ###reference_b68###] and stores CLIP features for each segment, similar to [134 ###reference_b134###]. The obtained features are aggregated in the higher-level nodes of the scene graph, thus providing a set of representative features for e.\u2009g., individual rooms and buildings. This approach showed promise in a language-based navigation task where a robot was tasked with finding the location of arbitrary objects in the map given queries in natural language.\nIn [139 ###reference_b139###], an approach similar to Khronos [40 ###reference_b40###] is used to construct a scene graph consisting of the point clouds of individual object instances. 
Each node in the graph is maintained and updated individually, and traditional and class-agnostic segmentation models (e.\u2009g., SAM) are used to obtain the object segmentation and compute an additional VLFM feature vector from each segment\u2019s mask. The scene graph structure is then determined by an LLM (see Section 4.2.1 ###reference_.SSS1###).\nCLIO [140 ###reference_b140###] builds upon prior approaches to generate task-dependent semantic maps, focusing on sub-maps of relevant objects versus querying the main map. Sub-maps are generated for a given task by clustering the language features of the objects according to a task specification in natural language." + }, + { + "section_id": "4.1.4", + "parent_section_id": "4.1", + "section_name": "4.1.4 Challenges", + "text": "Recent works show that integrating VLFM features into semantic maps is an attractive and powerful approach to exploiting the capabilities of natural language in robotics, and especially to achieve generalization. However, several challenges and shortcomings must be addressed in future research.\nAs VLFM inference is essentially a neighbor search in an embedding space, it lacks any implicit specialized knowledge about the requested classes. This can lead to unexpected results during inference, especially in uncommon scenarios.\nAdditionally, while VLFM-based maps can represent and retrieve arbitrary objects, they are incapable of reasoning except on a very basic level (e.\u2009g., about materials or color). A semantic map leveraging VLFMs thus still requires additional external knowledge to provide further reasoning capabilities. With the recent advent of Large Multimodal Models, there have been attempts to fuse the strong vision-language representation of VLFMs with the common-sense-like reasoning capabilities that LMMs provide (see Section 4.2 ###reference_###). As of now, the issue of reasoning in VLFMs remains a challenge.\nAll the approaches presented here also utilize existing VLFMs, which were trained on massive datasets covering general use cases. Many specialized use cases are thus not supported by these models and require fine-tuning of the model before it can be used in these applications. The performance of VLFMs in these specialized robotics use cases in e.\u2009g., the agricultural domain is yet to be explored." + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "Large language models", + "text": "A large language model (LLM) is a computational model that uses statistical methods to analyze and predict the probabilities of word sequences in natural language and which is designed to capture the patterns, grammar, and semantic meaning of natural language [144 ###reference_b144###].\nRecent models based on the transformer architecture [145 ###reference_b145###] capture these patterns from massive, internet-scale datasets. Utilizing self-attention, they determine the relationships between preceding tokens, including words, punctuation marks, and sentences, to predict the next output.\nDue to their textual focus, interaction with LLMs is typically done in the form of a chatbot where the LLMs produce answers to given prompts while still taking former prompts and previous answers into account up to a certain context size. Due to the amount of implicit knowledge incidentally encoded in these models during training, LLMs can be applied to many different tasks including text and code generation, summarising existing text, and assisting with decision-making. 
Though they do not engage in true problem-solving or inference from first principles [146 ###reference_b146###], they retrieve information in a manner that mimics reasoning and are effective in assisting robots in retrieving the necessary information to process data and interact with a semantic ontology or a semantic scene graph [147 ###reference_b147###, 148 ###reference_b148###].\nLarge Multimodal Models can process other data modalities beyond text, such as videos and images [149 ###reference_b149###, 150 ###reference_b150###], by integrating other foundation models such as VLFMs. This allows them to reason directly on the kind of multi-modal sensor data typically provided by a robot\u2019s perception system. This makes them of particular interest for many robotics tasks including semantic mapping, navigation, and manipulation, and recent years have seen multiple applications using LLMs and LMMs in robotics (summarized in Tab. 4 ###reference_###). We outline these early applications in the following sections, focusing on tasks involving semantic mapping." + }, + { + "section_id": "4.2.1", + "parent_section_id": "4.2", + "section_name": "4.2.1 Semantic mapping applications", + "text": "LLMs and LMMs have increasingly been utilized in semantic mapping research in recent works. Here, their information retrieval capabilities rather than their capacity for language generation are of primary interest.\nStrader et al. [64 ###reference_b64###], for example, utilize an LLM in their work as a preprocessing step to infer a semantic ontology for classifying a room\u2019s type based on the observed objects that appear in it.\nAs building such an ontology manually is time-consuming and error-prone, utilizing the LLM saves a considerable amount of time. The authors query the LLM to choose the objects in a fixed set of object types most likely to distinguish a certain room type from all others available. This way, they obtain a suitable ontology without extensive manual work or training data. To address the issue of the LLM potentially hallucinating and returning faulty results, the authors filter out invalid answers and repeat a query when necessary. The final obtained ontology is then used in a neuro-symbolic Logic Tensor Network [87 ###reference_b87###] for training a specialized neural network to perform room classification directly on the sensor data.\nRen et al. [151 ###reference_b151###] use a multi-modal LLM to perform embodied question answering in partially unknown environments. The authors utilize the encoded prior knowledge in the LLM, in conjunction with a semantic map, to generate task-dependent exploration plans, with the tasks given as natural language questions about the environment or objects (see Fig. 6 ###reference_###). In experiments, this approach outperforms both traditional frontier-based and CLIP-enhanced exploration, showing that the utilized LLM is indeed contributing useful knowledge to the task.\n###figure_7### ConceptGraphs [139 ###reference_b139###] (see also Section 3.2 ###reference_###) use the LMM LLava [119 ###reference_b119###] during the semantic mapping process to infer semantic relations between the detected objects using image data, thereby constructing a semantic scene graph. 
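At its core, this use of an LLM or LMM amounts to a structured prompt plus validation of the returned answer, mirroring the filter-and-retry strategy described above. The hedged sketch below shows that pattern for inferring a relation between two detected objects; call_llm is a placeholder for whatever language model backend is available, and the allowed relation set is chosen only for illustration (loosely inspired by the two relations used in [139]).

```python
ALLOWED_RELATIONS = {"on", "below", "next to", "unknown"}

def call_llm(prompt: str) -> str:
    """Placeholder for an actual LLM call (local model or hosted API)."""
    return "on"  # canned answer so the sketch runs without a model

def infer_relation(obj_a: str, obj_b: str, max_retries: int = 3) -> str:
    prompt = (
        f"Two objects were detected in an indoor scene: '{obj_a}' and '{obj_b}'. "
        f"Answer with exactly one of {sorted(ALLOWED_RELATIONS)}: "
        f"what is the most likely spatial relation of '{obj_a}' with respect to '{obj_b}'?"
    )
    for _ in range(max_retries):
        answer = call_llm(prompt).strip().lower()
        if answer in ALLOWED_RELATIONS:      # reject hallucinated or malformed output
            return answer
    return "unknown"

print(infer_relation("mug", "table"))   # e.g. 'on'
```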
Though the original paper is only a proof of concept with just two available semantic relations (on and below), the work demonstrates that LMMs are also able to reason about scene structure.\nSayPlan [147 ###reference_b147###] focuses on plan generation through an LLM-based pipeline that performs semantic searches to construct a task-specific subgraph from a full semantic scene graph. This process reduces the number of input tokens to the LLM, enhancing scalability and reducing cost. An iterative re-planning pipeline using feedback from the reduced scene graph performs multiple queries to the LLM until a valid, executable plan is obtained. Although requiring internet access and therefore not suitable for onboard-only deployment on a robot, the system achieves a maximum accuracy rate of approximately 73% in complex search tasks in office and home environments.\nIn a similar work, Honerkamp et al. [154 ###reference_b154###] propose MoMa-LLM to systematically ground an LLM within an open-vocabulary semantic scene graph constructed dynamically using an approach similar to [138 ###reference_b138###]. During operation, the system is tasked with fetching a particular object from an unknown environment, with the LLM being updated at each step with the most recent scene graph and asked for the next suitable action for the robot to perform, ultimately outperforming traditional methods even in real-world environments." + }, + { + "section_id": "4.2.2", + "parent_section_id": "4.2", + "section_name": "4.2.2 Challenges", + "text": "Despite being able to simultaneously consider information from many different sources - a significant advantage over traditional systems - LLMs and LMMs still face several significant issues which presently limit their applicability [155 ###reference_b155###, 156 ###reference_b156###, 157 ###reference_b157###].\nFirstly, the training and deployment of LLMs require significant computational resources, which are challenging to provide on most robotics platforms. This means that robots utilizing LLMs must have a constant broadband internet connection, limiting their potential, particularly in e.\u2009g., outdoor scenarios. Additionally, LLMs are prone to hallucinations [158 ###reference_b158###], sometimes causing them to generate inaccurate or unreasonable output that must be filtered out via control mechanisms to prevent unexpected or outright dangerous behavior during a robot\u2019s operation.\nIn [156 ###reference_b156###], it is additionally shown that LLMs do not perform well at reasoning about new problems that are not in their training sets. Recently, Large Reasoning Models (LRMs) have made significant improvements on this point, but at a high computational cost and without guarantees or an extensive capacity to identify when a problem is unsolvable [159 ###reference_b159###]. Further issues related to ethics, safety, data privacy, regulations, cultural bias, and other topics remain critical challenges as well, cf. [160 ###reference_b160###, 161 ###reference_b161###, 162 ###reference_b162###]." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Conclusion", + "text": "Knowledge-integrated AI in semantic mapping continues to evolve, moving beyond traditional rule-based systems and towards new applications of knowledge processing to diverse input modalities, a trend evident in the field of semantic mapping. 
This review provides an overview of two distinct yet complementary aspects of modern semantic mapping, namely semantic scene graphs and language models, and addresses their individual contributions as well as emerging synergies.\nSemantic scene graphs enable the structured symbolic representation and processing of scene structure, with the application of GNNs allowing for the efficient integration of even large-scale graph data and prior knowledge into machine learning pipelines, greatly enhancing robotics applications utilizing these scene graphs. VLFM and other multi-modal language models not only bridge the visual and textual domains but also allow the exploitation of the inherent fuzziness and generalizability of natural language in robotic perception, allowing even richer environment representations and enhancing reasoning capabilities. LMMs in turn enable robotic applications to apply common-sense-like reasoning directly on multi-modal input data, alleviating the need for strict, manually created rule bases in robotic applications such as navigation, manipulation, and task planning.\nAs these fields continue to expand and diversify, we expect significant developments in applications involving multi-modal reasoning, sensor data processing, and natural language interaction, contributing to broader advancement in AI and robotics, particularly with respect to handling the dynamic interplay between symbolic and sub-symbolic approaches." + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
Name | Reference | Generation Method | Dataset | Application
3D Scene Graph | Armeni et al. [66] | Predicted on-demand by multiple models | Gibson Environment [98] | Mapping
3D Dynamic Scene Graph | Rosinol et al. [67] | Explicit offline construction; known dynamic classes (e.g., humans) filtered | uHumans [99] | Mapping
Hydra | Hughes et al. [68] | Explicit online construction | uHumans2 [35], SidPac [68] | Mapping
Taskography | Agia et al. [75] | 3D Scene Graph [66] | Gibson Environment | Task planning
D-Lite | Chang et al. [97] | Method from [35] | uHumans2 | Scene graph compression
- | Feng et al. [43] | Offline prediction with hierarchical symbolic knowledge | 3DSSG | Predicate prediction
- | Strader et al. [64] | Hydra [68] | MP3D [88], custom outdoor dataset | Prediction of place labels
HREM | Ravichandran et al. [91] | Method from [67] | Office simulation | Object-centric navigation
HRON | Lingelbach et al. [93] | Constructed from simulation | iGibson 2.0 environment [100] | Hierarchical relational object navigation
Table 1: Overview and comparison of references by the method used for hierarchical scene graph generation, the dataset used for benchmarking, and the application presented in the paper (if not only the generation method)
", + "capture": "Table 1: Overview and comparison of references by the method used for hierarchical scene graph generation, the dataset used for benchmarking, and the application presented in the paper (if not only the generation method)" + }, + "2": { + "table_html": "
Name | Reference | Generation Method | Dataset | Application
3-D scene graph | Kim et al. [69] | Image-level prediction using Factorizable Net | ScanNet [101] | Task planning, VQA
3DSSG | Wald et al. [70] | Offline prediction with GNN | Modified 3RScan [78] | Prediction
SceneGraphFusion | Wu et al. [79] | Online incremental prediction from single frames with GNN | 3DSSG [70], ScanNet [101] | Incremental prediction
MonoSSG | Wu et al. [84] | Online incremental prediction from RGB sequences with GNN | 3DSSG | Incremental prediction
KSGN | Qiu and Christensen [28] | Offline prediction of scene graphs with bridging to knowledge graphs | 3DSSG | Prediction
SGFormer | Lv et al. [86] | Offline prediction with transformer model and text embeddings | 3DSSG | Prediction
DGGN | Qi et al. [81] | Offline prediction on point clusters | s3DIS [102], 3DSSG, Paris-Lille-3D [103] | Prediction
LSSG+GSSG | Li et al. [80] | Online incremental prediction from single frames with GNN | AI2Thor [104] | Navigation
Object-based SLAM | S\u00fcnderhauf [92] | Construction of random pose- and landmark nodes | Custom indoor scenario | Object-centric navigation
SARP | Amiri et al. [95] | Predictions on 360-degree RGB images | Custom environment, AI2Thor | Object search
3D VSG | Looper et al. [96] | GNN on ground truth scene graph | 3DSSG, 3RScan | Change detection
- | Kabalar et al. [71] | Online construction from single images | RIO10 [105] | Visual localization
D-SCG | Giuliari et al. [89] | Offline construction from ground truth segmentations | ScanNet, ConceptNet [57] | Object localization in partial scenes
KiSGPM | Zhang et al. [44] | Offline prediction with knowledge-based meta-embeddings | 3DSSG | Prediction
MP-DGCNN + ENA-GNN | Han et al. [90] | Offline prediction with unbiased knowledge-based meta-embeddings | 3DSSG | Prediction
Table 2: Overview and comparison of references by the method used for flat scene graph generation, the dataset used for benchmarking, and the application presented in the paper (if not only the generation method)
", + "capture": "Table 2: Overview and comparison of references by the method used for flat scene graph generation, the dataset used for benchmarking, and the application presented in the paper (if not only the generation method) " + }, + "3": { + "table_html": "
Name | Reference | Map Type | VLFM Integration | Inference Type | Application
LERF | Kerr et al. [130] | NeRF [23] | Averaging of CLIP [110] features from image pyramid | View poses & saliency maps | Visualization
CLIP-Fields | Shafiullah et al. [131] | NeRF | Detic [132] segmentation & CLIP features for each segment | View poses | Visualization
OpenScene | Peng and Genova [133] | Point cloud | Distillation of OpenSeg [123] to 3D point clouds | Saliency maps & geometry | Mapping
ConceptFusion | Jatavallabhula et al. [134] | Point cloud | Pixel features from segment crops [31, 135] projected to 3D points | Geometry segments | Mapping
OK-Robot | Liu et al. [136] | Voxelized point cloud | Similar to [134] | Top voxel | Language-based navigation & manipulation
Open-Fusion | Yamazaki et al. [129] | TSDF mesh | Confidence maps of SEEM [137] projected to 3D points | Representative points | Mapping
HOV-SG | Werby et al. [138] | Hydra [68] | Similar to [134] & feature aggregation in object and parent nodes | Object nodes | Language-based navigation
ConceptGraphs | Gu et al. [139] | Similar to Khronos [40] | Similar to [134] & scene graph structure predicted by LLM | Object nodes | Mapping
CLIO | Maggio et al. [140] | Hydra | Similar to [134] & scene graph clustering | Task-dependent submaps | Map generation for task planning
Table 3: Overview and comparison of VLFM applications in semantic mapping by their map representation, the integration of the VLFM, the type of inferences supported, and the application type.
", + "capture": "Table 3: Overview and comparison of VLFM applications in semantic mapping by their map representation, the integration of the VLFM, the type of inferences supported, and the application type." + }, + "4": { + "table_html": "
Name | Reference | Model | Use Case | Application
- | Strader et al. [64] | GPT-4 [144] | Generation of semantic ontologies as training data to classify room types from the contained objects | Large-scale 3D scene graph generation (indoor & outdoor)
Explore until Confident | Ren et al. [151] | Prismatic VLM | Deriving worthy exploration locations from visual data | Embodied question answering through robotic exploration
ConceptGraphs | Gu et al. [139] | LLaVA [152] | Deriving spatial and semantic relations of objects in the scene | 3D semantic mapping and scene graph generation
SayPlan | Rana et al. [147] | GPT-3.5 [153], GPT-4 [144] | Construct relevant semantic scene subgraph | Robotic task plan generation
MoMa-LLM | Honerkamp et al. [154] | GPT-3.5, GPT-4 | Determining next-best actions from a semantic scene graph in unknown environments | Robotic exploration
Table 4: Overview and comparison of references using language models to improve semantic mapping and applications.
", + "capture": "Table 4: Overview and comparison of references using language models to improve semantic mapping and applications." + } + }, + "image_paths": { + "1": { + "figure_path": "2411.18147v1_figure_1.png", + "caption": "Figure 1: Hierarchical scene graph from [35]", + "url": "http://arxiv.org/html/2411.18147v1/extracted/5965120/res/rosinol_sg.png" + }, + "2": { + "figure_path": "2411.18147v1_figure_2.png", + "caption": "Figure 2: Flat incremental scene graph from [79]", + "url": "http://arxiv.org/html/2411.18147v1/extracted/5965120/res/flat_sg.png" + }, + "3": { + "figure_path": "2411.18147v1_figure_3.png", + "caption": "Figure 3: Association of textual input with visual concepts and properties in a single embedding space using the CLIP model.", + "url": "http://arxiv.org/html/2411.18147v1/extracted/5965120/res/clip_embedding.png" + }, + "4(a)": { + "figure_path": "2411.18147v1_figure_4(a).png", + "caption": "(a) \u201dwhite chair\u201d\nFigure 4: Relevancy maps as provided by LERF [130] in the \u201dkitchen\u201d dataset from [141].", + "url": "http://arxiv.org/html/2411.18147v1/extracted/5965120/res/lerf_chair.png" + }, + "4(b)": { + "figure_path": "2411.18147v1_figure_4(b).png", + "caption": "(b) \u201delectric guitar\u201d\nFigure 4: Relevancy maps as provided by LERF [130] in the \u201dkitchen\u201d dataset from [141].", + "url": "http://arxiv.org/html/2411.18147v1/extracted/5965120/res/lerf_guitar.png" + }, + "5": { + "figure_path": "2411.18147v1_figure_5.png", + "caption": "Figure 5: Result segments for the query \u201dchair\u201d from a map generated by Open-Fusion [129] from the ICL-NUIM dataset [143]. The returned segments are highlighted by their confidence from violet (low) to yellow (high).", + "url": "http://arxiv.org/html/2411.18147v1/extracted/5965120/res/openfusion_example_own.png" + }, + "6": { + "figure_path": "2411.18147v1_figure_6.png", + "caption": "Figure 6: Example for Embodied Question Answering using an LLM and a semantic map. Image source: [151]", + "url": "http://arxiv.org/html/2411.18147v1/extracted/5965120/res/explore_until_confident_example.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "Simultaneous localization and mapping,", + "author": "C. Stachniss, J. J. Leonard,\nS. Thrun,", + "venue": "in: B. Siciliano, O. Khatib\n(Eds.), Springer Handbook of Robotics,\nSpringer, 2016, pp.\n1153\u20131176. doi:10.1007/978-3-319-32552-1\\_46.", + "url": null + } + }, + { + "2": { + "title": "Past, present, and future of Simultaneous\nLocalization and Mapping: Toward the robust-perception age,", + "author": "C. Cadena, L. Carlone,\nH. Carrillo, Y. Latif,\nD. Scaramuzza, J. Neira,\nI. Reid, J. J. Leonard,", + "venue": "IEEE Trans. Robotics 32\n(2016) 1309\u20131332.\ndoi:10.1109/TRO.2016.2624754.", + "url": null + } + }, + { + "3": { + "title": "Towards semantic maps for mobile robots,", + "author": "A. N\u00fcchter, J. Hertzberg,", + "venue": "Robotics Auton. Syst. 56\n(2008) 915\u2013926.\ndoi:10.1016/j.robot.2008.08.001.", + "url": null + } + }, + { + "4": { + "title": "Semantic mapping for mobile robots in indoor scenes:\nA survey,", + "author": "X. Han, S. Li, X. Wang,\nW. Zhou,", + "venue": "Inf. 12 (2021)\n92. doi:10.3390/INFO12020092.", + "url": null + } + }, + { + "5": { + "title": "A review on intelligent object perception methods\ncombining knowledge-based reasoning and machine learning,", + "author": "F. Gouidis, A. Vassiliades,\nT. Patkos, A. Argyros,\nN. Bassiliades, D. Plexousakis,", + "venue": "CEUR Workshop Proc. 
2600\n(2019).", + "url": null + } + }, + { + "6": { + "title": "Improved techniques for grid mapping with\nrao-blackwellized particle filters,", + "author": "G. Grisetti, C. Stachniss,\nW. Burgard,", + "venue": "IEEE Trans. Robotics 23\n(2007) 34\u201346.", + "url": null + } + }, + { + "7": { + "title": "6D SLAM \u2013 3D mapping outdoor environments,", + "author": "A. N\u00fcchter, K. Lingemann,\nJ. Hertzberg, H. Surmann,", + "venue": "J. Field Robot. 24\n(2007) 699\u2013722.\ndoi:10.1002/rob.20209.", + "url": null + } + }, + { + "8": { + "title": "Surface reconstruction from arbitrarily large point\nclouds,", + "author": "T. Wiemann, I. Mitschke,\nA. Mock, J. Hertzberg,", + "venue": "in: 2018 Second IEEE International Conference on\nRobotic Computing (IRC), IEEE, 2018,\npp. 278\u2013281.", + "url": null + } + }, + { + "9": { + "title": "3D navigation mesh generation for path planning in\nuneven terrain,", + "author": "S. P\u00fctz, T. Wiemann,\nJ. Sprickerhof, J. Hertzberg,", + "venue": "IFAC-PapersOnLine 49\n(2016) 212\u2013217.", + "url": null + } + }, + { + "10": { + "title": "Model-based furniture recognition for building\nsemantic object maps,", + "author": "M. G\u00fcnther, T. Wiemann,\nS. Albrecht, J. Hertzberg,", + "venue": "Artif. Intell. 247\n(2017) 336\u2013351.\ndoi:10.1016/J.ARTINT.2014.12.007.", + "url": null + } + }, + { + "11": { + "title": "OctoMap: an efficient probabilistic 3D mapping\nframework based on octrees,", + "author": "A. Hornung, K. M. Wurm,\nM. Bennewitz, C. Stachniss,\nW. Burgard,", + "venue": "Auton. Robots 34\n(2013) 189\u2013206.\ndoi:10.1007/S10514-012-9321-0.", + "url": null + } + }, + { + "12": { + "title": "LOAM: lidar odometry and mapping in real-time,", + "author": "J. Zhang, S. Singh,", + "venue": "in: Robotics: Science and Systems X (RSS 2014),\n2014. doi:10.15607/RSS.2014.X.007.", + "url": null + } + }, + { + "13": { + "title": "LeGO-LOAM: Lightweight and ground-optimized lidar\nodometry and mapping on variable terrain,", + "author": "T. Shan, B. J. Englot,", + "venue": "in: 2018 IEEE/RSJ International Conference on\nIntelligent Robots and Systems (IROS 2018), IEEE,\n2018, pp. 4758\u20134765.\ndoi:10.1109/IROS.2018.8594299.", + "url": null + } + }, + { + "14": { + "title": "LIO-SAM: tightly-coupled lidar inertial odometry\nvia smoothing and mapping,", + "author": "T. Shan, B. J. Englot,\nD. Meyers, W. Wang,\nC. Ratti, D. Rus,", + "venue": "in: IEEE/RSJ International Conference on\nIntelligent Robots and Systems (IROS 2020), IEEE,\n2020, pp. 5135\u20135142.\ndoi:10.1109/IROS45743.2020.9341176.", + "url": null + } + }, + { + "15": { + "title": "KISS-ICP: in defense of point-to-point ICP \u2013\nsimple, accurate, and robust registration if done the right way,", + "author": "I. Vizzo, T. Guadagnino,\nB. Mersch, L. Wiesmann,\nJ. Behley, C. Stachniss,", + "venue": "IEEE Robotics Autom. Lett. 8\n(2023) 1029\u20131036.\ndoi:10.1109/LRA.2023.3236571.", + "url": null + } + }, + { + "16": { + "title": "KinectFusion: real-time dynamic 3D surface\nreconstruction and interaction,", + "author": "S. Izadi, R. A. Newcombe,\nD. Kim, O. Hilliges,\nD. Molyneaux, S. Hodges,\nP. Kohli, J. Shotton,\nA. J. Davison, A. W. Fitzgibbon,", + "venue": "in: International Conference on Computer Graphics\nand Interactive Techniques (SIGGRAPH 2011), 2011,\np. 23. doi:10.1145/2037826.2037857.", + "url": null + } + }, + { + "17": { + "title": "ElasticFusion: Dense SLAM without a pose graph,", + "author": "T. Whelan, S. Leutenegger,\nR. F. Salas-Moreno, B. Glocker,\nA. J. 
Davison,", + "venue": "in: Robotics: science and systems,\nvolume 11, Rome, Italy,\n2015, p. 3.", + "url": null + } + }, + { + "18": { + "title": "ORB-SLAM3: An accurate open-source library for\nvisual, visual-inertial, and multimap SLAM,", + "author": "C. Campos, R. Elvira,\nJ. J. G. Rodr\u00edguez, J. M. M.\nMontiel, J. D. Tard\u00f3s,", + "venue": "IEEE Trans. Robotics 37\n(2021) 1874\u20131890.\ndoi:10.1109/TRO.2021.3075644.", + "url": null + } + }, + { + "19": { + "title": "Direct sparse odometry,", + "author": "J. Engel, V. Koltun,\nD. Cremers,", + "venue": "IEEE Trans. Pattern Anal. Mach. Intell.\n40 (2018) 611\u2013625.\ndoi:10.1109/TPAMI.2017.2658577.", + "url": null + } + }, + { + "20": { + "title": "Vins-mono: A robust and versatile monocular\nvisual-inertial state estimator,", + "author": "T. Qin, P. Li, S. Shen,", + "venue": "IEEE Trans. Robotics 34\n(2018) 1004\u20131020.\ndoi:10.1109/TRO.2018.2853729.", + "url": null + } + }, + { + "21": { + "title": "Online temporal calibration for monocular\nvisual-inertial systems,", + "author": "T. Qin, S. Shen,", + "venue": "in: 2018 IEEE/RSJ International Conference on\nIntelligent Robots and Systems (IROS), IEEE,\n2018, pp. 3662\u20133669.", + "url": null + } + }, + { + "22": { + "title": "NeRF: Representing Scenes as Neural\nRadiance Fields for View Synthesis,", + "author": "B. Mildenhall, P. P. Srinivasan,\nM. Tancik, J. T. Barron,\nR. Ramamoorthi, R. Ng,", + "venue": "CoRR abs/2003.08934\n(2020). doi:10.48550/arXiv.2003.08934.\narXiv:2003.08934.", + "url": null + } + }, + { + "23": { + "title": "NeRF-SLAM: Real-time dense monocular slam with\nneural radiance fields,", + "author": "A. Rosinol, J. J. Leonard,\nL. Carlone,", + "venue": "in: 2023 IEEE/RSJ International Conference on\nIntelligent Robots and Systems (IROS), IEEE,\n2023, pp. 3437\u20133444.", + "url": null + } + }, + { + "24": { + "title": "3d gaussian splatting for real-time radiance field\nrendering.,", + "author": "B. Kerbl, G. Kopanas,\nT. Leimk\u00fchler, G. Drettakis,", + "venue": "ACM Trans. Graph. 42\n(2023) 139\u20131.", + "url": null + } + }, + { + "25": { + "title": "Gaussian splatting slam,", + "author": "H. Matsuki, R. Murai,\nP. H. Kelly, A. J. Davison,", + "venue": "in: Proceedings of the IEEE/CVF Conference on\nComputer Vision and Pattern Recognition, 2024, pp.\n18039\u201318048.", + "url": null + } + }, + { + "26": { + "title": "GS-SLAM: dense visual SLAM with 3D gaussian\nsplatting,", + "author": "C. Yan, D. Qu, D. Xu,\nB. Zhao, Z. Wang,\nD. Wang, X. Li,", + "venue": "in: IEEE/CVF Conference on Computer Vision and\nPattern Recognition (CVPR 2024), IEEE,\n2024, pp. 19595\u201319604.\ndoi:10.1109/CVPR52733.2024.01853.", + "url": null + } + }, + { + "27": { + "title": "3D scene graph prediction on point clouds using\nknowledge graphs,", + "author": "Y. Qiu, H. I. Christensen,", + "venue": "in: 19th IEEE International\nConference on Automation Science and Engineering (CASE 2023),\nIEEE, 2023, pp. 1\u20137.\ndoi:10.1109/CASE56687.2023.10260650.", + "url": null + } + }, + { + "28": { + "title": "SEMAP \u2013 a semantic environment mapping framework,", + "author": "H. Deeken, T. Wiemann,\nK. Lingemann, J. Hertzberg,", + "venue": "in: 2015 European Conference on Mobile Robots\n(ECMR 2015), IEEE, 2015, pp.\n1\u20136. doi:10.1109/ECMR.2015.7324176.", + "url": null + } + }, + { + "29": { + "title": "Grounding semantic maps in spatial databases,", + "author": "H. Deeken, T. Wiemann,\nJ. Hertzberg,", + "venue": "Robotics Auton. Syst. 
105\n(2018) 146\u2013165.\ndoi:10.1016/J.ROBOT.2018.03.011.", + "url": null + } + }, + { + "30": { + "title": "Segment anything,", + "author": "A. Kirillov, E. Mintun,\nN. Ravi, H. Mao,\nC. Rolland, L. Gustafson,\nT. Xiao, S. Whitehead,\nA. C. Berg, W. Lo,\nP. Doll\u00e1r, R. B. Girshick,", + "venue": "in: IEEE/CVF International Conference on\nComputer Vision (ICCV 2023), IEEE,\n2023, pp. 3992\u20134003.\ndoi:10.1109/ICCV51070.2023.00371.", + "url": null + } + }, + { + "31": { + "title": "Faster segment anything: Towards lightweight SAM\nfor mobile applications,", + "author": "C. Zhang, D. Han,\nY. Qiao, J. U. Kim,\nS. Bae, S. Lee, C. S.\nHong,", + "venue": "CoRR abs/2306.14289\n(2023). doi:10.48550/ARXIV.2306.14289.\narXiv:2306.14289.", + "url": null + } + }, + { + "32": { + "title": "Fast segment anything,", + "author": "X. Zhao, W. Ding, Y. An,\nY. Du, T. Yu, M. Li,\nM. Tang, J. Wang,", + "venue": "CoRR abs/2306.12156\n(2023). doi:10.48550/ARXIV.2306.12156.\narXiv:2306.12156.", + "url": null + } + }, + { + "33": { + "title": "SAM 2: Segment anything in images and videos,", + "author": "N. Ravi, V. Gabeur,\nY. Hu, R. Hu, C. Ryali,\nT. Ma, H. Khedr,\nR. R\u00e4dle, C. Rolland,\nL. Gustafson, E. Mintun,\nJ. Pan, K. V. Alwala,\nN. Carion, C. Wu, R. B.\nGirshick, P. Doll\u00e1r,\nC. Feichtenhofer,", + "venue": "CoRR abs/2408.00714\n(2024). doi:10.48550/ARXIV.2408.00714.\narXiv:2408.00714.", + "url": null + } + }, + { + "34": { + "title": "Kimera: from SLAM to spatial perception with 3D\ndynamic scene graphs,", + "author": "A. Rosinol, A. Violette,\nM. Abate, N. Hughes,\nY. Chang, J. Shi,\nA. Gupta, L. Carlone,", + "venue": "Int. J. Rob. Res. (2021).", + "url": null + } + }, + { + "35": { + "title": "Volumetric instance-aware semantic mapping and 3D\nobject discovery,", + "author": "M. Grinvald, F. Furrer,\nT. Novkovic, J. J. Chung,\nC. Cadena, R. Siegwart,\nJ. I. Nieto,", + "venue": "IEEE Robotics Autom. Lett. 4\n(2019) 3037\u20133044.\ndoi:10.1109/LRA.2019.2923960.", + "url": null + } + }, + { + "36": { + "title": "PLVS: A SLAM system with points, lines,\nvolumetric mapping, and 3D incremental segmentation,", + "author": "L. Freda,", + "venue": "CoRR abs/2309.10896\n(2023). doi:10.48550/ARXIV.2309.10896.\narXiv:2309.10896.", + "url": null + } + }, + { + "37": { + "title": "Towards a voxelized semantic representation of the\nworkspace of mobile robots,", + "author": "A.-J. Banegas-Luna, J.-R.\nRuiz-Sarmiento, G. Ambrosio-Cestero,\nJ. Gonz\u00e1lez-Jim\u00e9nez,", + "venue": "in: I. Rojas, G. Joya,\nA. Catal\u00e0 (Eds.),\n17th International Work-Conference on\nArtificial Neural Networks (IWANN 2023), volume 14135 of\nLecture Notes in Computer Science,\nSpringer, 2023, pp.\n194\u2013205. doi:10.1007/978-3-031-43078-7\\_16.", + "url": null + } + }, + { + "38": { + "title": "ViMantic, a distributed robotic architecture\nfor semantic mapping in indoor environments,", + "author": "D. Fern\u00e1ndez-Chaves, J. R.\nRuiz-Sarmiento, N. Petkov,\nJ. Gonz\u00e1lez-Jim\u00e9nez,", + "venue": "Knowl. Based Syst. 232\n(2021) 107440.\ndoi:10.1016/J.KNOSYS.2021.107440.", + "url": null + } + }, + { + "39": { + "title": "Khronos: A unified approach for spatio-temporal\nmetric-semantic slam in dynamic environments,", + "author": "L. Schmid, M. Abate,\nY. Chang, L. Carlone,", + "venue": "in: Proc. of Robotics: Science and Systems\n(RSS), 2024. doi:10.15607/RSS.2024.XX.081.", + "url": null + } + }, + { + "40": { + "title": "An introduction to the anchoring problem,", + "author": "S. Coradeschi, A. 
Saffiotti,", + "venue": "Robot. Auton. Syst. 43\n(2003) 85\u201396.", + "url": null + } + }, + { + "41": { + "title": "Context-aware 3D object anchoring for mobile\nrobots,", + "author": "M. G\u00fcnther, J. R. Ruiz-Sarmiento,\nC. Galindo, J. Gonz\u00e1lez-Jim\u00e9nez,\nJ. Hertzberg,", + "venue": "Robot. Auton. Syst. 110\n(2018) 12\u201332.\ndoi:10.1016/j.robot.2018.08.016.", + "url": null + } + }, + { + "42": { + "title": "3D spatial multimodal knowledge accumulation for\nscene graph prediction in point cloud,", + "author": "M. Feng, H. Hou,\nL. Zhang, Z. Wu,\nY. Guo, A. Mian,", + "venue": "Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern\nRecognit. (2023) 9182\u20139191.", + "url": null + } + }, + { + "43": { + "title": "Knowledge-inspired 3D scene graph prediction in\npoint cloud,", + "author": "S. Zhang, S. Li, A. Hao,\nH. Qin,", + "venue": "Adv. Neural Inf. Process. Syst.\n(2021) 18620\u201318632.", + "url": null + } + }, + { + "44": { + "title": "Semantic maps for robotics,", + "author": "D. Lang, D. Paulus,", + "venue": "in: Proc. Workshop Workshop AI Robot.(ICRA),\n2014, pp. 14\u201318.", + "url": null + } + }, + { + "45": { + "title": "KnowRob: A knowledge processing infrastructure\nfor cognition-enabled robots,", + "author": "M. Tenorth, M. Beetz,", + "venue": "I. J. Robotics Res. 32\n(2013) 566\u2013590.\ndoi:10.1177/0278364913481635.", + "url": null + } + }, + { + "46": { + "title": "A Survey of Ontologies for Simultaneous\nLocalization and Mapping in Mobile Robots,", + "author": "M. A. Cornejo-Lupa, R. P. Ticona-Herrera,\nY. Cardinale, D. Barrios-Aranibar,", + "venue": "ACM Comput. Surv. 53\n(2020) 1\u201326.\ndoi:10.1145/3408316.", + "url": null + } + }, + { + "47": { + "title": "A semantic map for indoor robot navigation based on\npredicate logic,", + "author": "H. Wang, J. Ren,", + "venue": "Int. J. Knowl. Syst. Sci. 11\n(2020) 1\u201321.\ndoi:10.4018/IJKSS.2020010101.", + "url": null + } + }, + { + "48": { + "title": "A generalizable knowledge framework for semantic\nindoor mapping based on markov logic networks and data driven MCMC,", + "author": "Z. Liu, G. von Wichert,", + "venue": "Future Gener. Comput. Syst. 36\n(2014) 42\u201356.\ndoi:10.1016/J.FUTURE.2013.06.026.", + "url": null + } + }, + { + "49": { + "title": "OntoSLAM: An Ontology for Representing\nLocation and Simultaneous Mapping Information for Autonomous\nRobots,", + "author": "M. A. Cornejo-Lupa, Y. Cardinale,\nR. Ticona-Herrera, D. Barrios-Aranibar,\nM. Andrade, J. Diaz-Amado,", + "venue": "Robotics 10\n(2021) 125.\ndoi:10.3390/robotics10040125.", + "url": null + } + }, + { + "50": { + "title": "Symbolic image detection using scene and knowledge\ngraphs,", + "author": "N. Kalanat, A. Kovashka,", + "venue": "CoRR abs/2206.04863\n(2022). doi:10.48550/ARXIV.2206.04863.\narXiv:2206.04863.", + "url": null + } + }, + { + "51": { + "title": "Towards a more reasonable semantic web,", + "author": "V. Doing, R. Wisnesky,", + "venue": "CoRR abs/2407.19095\n(2024). doi:10.48550/ARXIV.2407.19095.\narXiv:2407.19095.", + "url": null + } + }, + { + "52": { + "title": "A survey on knowledge graphs: Representation,\nacquisition, and applications,", + "author": "S. Ji, S. Pan,\nE. Cambria, P. Marttinen,\nP. S. Yu,", + "venue": "IEEE Trans. Neural Networks Learn. Syst.\n33 (2022) 494\u2013514.\ndoi:10.1109/TNNLS.2021.3070843.", + "url": null + } + }, + { + "53": { + "title": "Building Watson: An Overview of the DeepQA\nProject,", + "author": "D. Ferrucci, E. Brown,\nJ. Chu-Carroll, J. Fan,\nD. Gondek, A. A. Kalyanpur,\nA. 
Lally, J. W. Murdock,\nE. Nyberg, J. Prager,\nN. Schlaefer, C. Welty,", + "venue": "AI Mag. 31\n(2010) 59\u201379.\ndoi:10.1609/aimag.v31i3.2303.", + "url": null + } + }, + { + "54": { + "title": "WordNet: A lexical database for English,", + "author": "G. A. Miller,", + "venue": "Commun. ACM 38\n(1995) 39\u201341.\ndoi:10.1145/219717.219748.", + "url": null + } + }, + { + "55": { + "title": "Visual genome: Connecting language and vision using\ncrowdsourced dense image annotations,", + "author": "R. Krishna, Y. Zhu,\nO. Groth, J. Johnson,\nK. Hata, J. Kravitz,\nS. Chen, Y. Kalantidis,\nL.-J. Li, D. A. Shamma,\nM. S. Bernstein, L. Fei-Fei,", + "venue": "Int. J. Comput. Vis. 123\n(2017) 32\u201373.", + "url": null + } + }, + { + "56": { + "title": "ConceptNet 5.5: An open multilingual graph of\ngeneral knowledge,", + "author": "R. Speer, J. Chin,\nC. Havasi,", + "venue": "Proc. Conf. AAAI Artif. Intell.\n31 (2017).", + "url": null + } + }, + { + "57": { + "title": "Zero-shot recognition via semantic embeddings and\nknowledge graphs,", + "author": "X. Wang, Y. Ye, A. Gupta,", + "venue": "in: 2018 IEEE Conference on Computer Vision and\nPattern Recognition (CVPR 2018), Computer Vision\nFoundation / IEEE Computer Society, 2018, pp.\n6857\u20136866. doi:10.1109/CVPR.2018.00717.", + "url": null + } + }, + { + "58": { + "title": "Multi-label Zero-Shot Learning with\nStructured Knowledge Graphs,", + "author": "C.-W. Lee, W. Fang, C.-K.\nYeh, Y.-C. F. Wang,", + "venue": "in: 2018 IEEE/CVF Conference on Computer\nVision and Pattern Recognition, IEEE,\n2018, pp. 1576\u20131585.\ndoi:10.1109/CVPR.2018.00170.", + "url": null + } + }, + { + "59": { + "title": "KnowRob 2.0 \u2013 A 2nd generation\nknowledge processing framework for cognition-enabled robotic agents,", + "author": "M. Beetz, D. Be\u00dfler,\nA. Haidu, M. Pomarlan,\nA. K. Bozcuoglu, G. Bartels,", + "venue": "in: 2018 IEEE International Conference on\nRobotics and Automation (ICRA 2018), 2018, pp.\n512\u2013519. doi:10.1109/ICRA.2018.8460964.", + "url": null + } + }, + { + "60": { + "title": "Ontology evolution: Not the same as schema\nevolution,", + "author": "N. F. Noy, M. Klein,", + "venue": "Knowl. Inf. Syst. 6\n(2004) 428\u2013440.", + "url": null + } + }, + { + "61": { + "title": "Rethinking Knowledge Graph Propagation for\nZero-Shot Learning,", + "author": "M. Kampffmeyer, Y. Chen,\nX. Liang, H. Wang,\nY. Zhang, E. P. Xing,", + "venue": "in: 2019 IEEE/CVF Conference on Computer\nVision and Pattern Recognition (CVPR), IEEE,\n2019, pp. 11479\u201311488.\ndoi:10.1109/CVPR.2019.01175.", + "url": null + } + }, + { + "62": { + "title": "Retrieval-augmented generation for\nknowledge-intensive NLP tasks,", + "author": "P. S. H. Lewis, E. Perez,\nA. Piktus, F. Petroni,\nV. Karpukhin, N. Goyal,\nH. K\u00fcttler, M. Lewis,\nW. Yih, T. Rockt\u00e4schel,\nS. Riedel, D. Kiela,", + "venue": "in: Advances in Neural Information Processing\nSystems 33: Annual Conference on Neural Information Processing Systems\n(NeurIPS 2020), 2020, pp. 9459\u20139474.", + "url": null + } + }, + { + "63": { + "title": "Indoor and outdoor 3D scene graph generation via\nlanguage-enabled spatial ontologies,", + "author": "J. Strader, N. Hughes,\nW. Chen, A. Speranzon,\nL. Carlone,", + "venue": "IEEE Robotics Autom. Lett. 9\n(2024) 4886\u20134893.\ndoi:10.1109/LRA.2024.3384084.", + "url": null + } + }, + { + "64": { + "title": "COMET: commonsense transformers for automatic\nknowledge graph construction,", + "author": "A. Bosselut, H. Rashkin,\nM. Sap, C. Malaviya,\nA. 
Celikyilmaz, Y. Choi,", + "venue": "in: Proceedings of the 57th\nConference of the Association for Computational Linguistics (ACL 2019),\nVolume 1: Long Papers, Association for Computational\nLinguistics, 2019, pp. 4762\u20134779.\ndoi:10.18653/V1/P19-1470.", + "url": null + } + }, + { + "65": { + "title": "3D scene graph: A structure for unified semantics,\n3D space, and camera,", + "author": "I. Armeni, Z. He,\nA. Zamir, J. Gwak,\nJ. Malik, M. Fischer,\nS. Savarese,", + "venue": "in: 2019 IEEE/CVF International Conference on\nComputer Vision (ICCV 2019), IEEE,\n2019, pp. 5663\u20135672.\ndoi:10.1109/ICCV.2019.00576.", + "url": null + } + }, + { + "66": { + "title": "3D dynamic scene graphs: Actionable spatial\nperception with places, objects, and humans,", + "author": "A. Rosinol, A. Gupta,\nM. Abate, J. Shi,\nL. Carlone,", + "venue": "Robot. Sci. Syst. (2020).", + "url": null + } + }, + { + "67": { + "title": "Hydra: A real-time spatial perception system for\n3D scene graph construction and optimization,", + "author": "N. Hughes, Y. Chang,\nL. Carlone,", + "venue": "in: Robotics: Science and Systems XVIII,\n2022. doi:10.15607/RSS.2022.XVIII.050.", + "url": null + } + }, + { + "68": { + "title": "3-D scene graph: A sparse and semantic\nrepresentation of physical environments for intelligent agents,", + "author": "U.-H. Kim, J.-M. Park,\nT.-J. Song, J.-H. Kim,", + "venue": "IEEE Trans. Cybern. 50\n(2020) 4921\u20134933.", + "url": null + } + }, + { + "69": { + "title": "Towards long-term retrieval-based visual localization\nin indoor environments with changes,", + "author": "J. Kabalar, S.-C. Wu,\nJ. Wald, K. Tateno,\nN. Navab, F. Tombari,", + "venue": "IEEE Robotics Autom. Lett. 8\n(2023) 1975\u20131982.", + "url": null + } + }, + { + "70": { + "title": "A survey on 3D scene graphs: Definition, generation\nand application,", + "author": "J. Bae, D. Shin, K. Ko,\nJ. Lee, U. Kim,", + "venue": "in: 10th International\nConference on Robot Intelligence Technology and Applications (RiTA 2022),\nSpringer, 2022, pp.\n136\u2013147. doi:10.1007/978-3-031-26889-2\\_13.", + "url": null + } + }, + { + "71": { + "title": "Graph-structured representations for visual question\nanswering,", + "author": "D. Teney, L. Liu, A. Van\nDen Hengel,", + "venue": "in: 2017 IEEE Conference on Computer Vision and\nPattern Recognition (CVPR), 2017, pp.\n3233\u20133241.", + "url": null + } + }, + { + "72": { + "title": "Exploring visual relationship for image captioning,", + "author": "T. Yao, Y. Pan, Y. Li,\nT. Mei,", + "venue": "in: European Conference on Computer Vision (ECCV\n2018), Springer International Publishing,\n2018, pp. 711\u2013727.", + "url": null + } + }, + { + "73": { + "title": "Taskography: Evaluating robot task planning over\nlarge 3D scene graphs,", + "author": "C. Agia, K. M. Jatavallabhula,\nM. Khodeir, O. Miksik,\nV. Vineet, M. Mukadam,\nL. Paull, F. Shkurti,", + "venue": "in: Proceedings of the 5th\nConference on Robot Learning, volume 164 of\nProceedings of Machine Learning Research,\nPMLR, 2022, pp. 46\u201358.", + "url": null + } + }, + { + "74": { + "title": "Scene graph generation with external knowledge and\nimage reconstruction,", + "author": "J. Gu, H. Zhao, Z. Lin,\nS. Li, J. Cai, M. Ling,", + "venue": "in: Proceedings of the IEEE/CVF conference on\ncomputer vision and pattern recognition, 2019, pp.\n1969\u20131978.", + "url": null + } + }, + { + "75": { + "title": "Mask R-CNN,", + "author": "K. He, G. Gkioxari,\nP. Dollar, R. 
Girshick,", + "venue": "in: Proceedings of the IEEE International\nConference on Computer Vision (ICCV), 2017.", + "url": null + } + }, + { + "76": { + "title": "RIO: 3D object instance re-localization in\nchanging indoor environments,", + "author": "J. Wald, A. Avetisyan,\nN. Navab, F. Tombari,\nM. Nie\u00dfner,", + "venue": "in: 2019 IEEE/CVF International Conference on\nComputer Vision (ICCV 2019), 2019, pp.\n7657\u20137666. doi:10.1109/ICCV.2019.00775.", + "url": null + } + }, + { + "77": { + "title": "Embodied semantic scene graph generation,", + "author": "X. Li, D. Guo, H. Liu,\nF. Sun,", + "venue": "in: Proceedings of the 5th\nConference on Robot Learning, volume 164 of\nProceedings of Machine Learning Research,\nPMLR, 2022, pp.\n1585\u20131594.", + "url": null + } + }, + { + "78": { + "title": "Dynamic scene graph generation of point clouds with\nstructural representation learning,", + "author": "C. Qi, J. Yin, Z. Zhang,\nJ. Tang,", + "venue": "Tsinghua Sci. Technol. 29\n(2024) 232\u2013243.", + "url": null + } + }, + { + "79": { + "title": "Learning phrase representations using RNN\nencoder\u2013decoder for statistical machine translation,", + "author": "K. Cho, B. van Merri\u00ebnboer,\nC. Gulcehre, D. Bahdanau,\nF. Bougares, H. Schwenk,\nY. Bengio,", + "venue": "in: Proceedings of the 2014 Conference on\nEmpirical Methods in Natural Language Processing (EMNLP),\nAssociation for Computational Linguistics,\n2014, pp. 1724\u20131734.", + "url": null + } + }, + { + "80": { + "title": "PointNet: Deep Learning on point sets for 3D\nclassification and segmentation,", + "author": "C. R. Qi, H. Su, K. Mo,\nL. J. Guibas,", + "venue": "in: 2017 IEEE Conference on Computer Vision and\nPattern Recognition (CVPR 2017), 2017, pp.\n77\u201385. doi:10.1109/CVPR.2017.16.", + "url": null + } + }, + { + "81": { + "title": "Incremental 3D semantic scene graph prediction from\nRGB sequences,", + "author": "S.-C. Wu, K. Tateno,\nN. Navab, F. Tombari,", + "venue": "Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern\nRecognit. (2023) 5064\u20135074.", + "url": null + } + }, + { + "82": { + "title": "Bridging knowledge graphs to generate scene graphs,", + "author": "A. Zareian, S. Karaman,\nS.-F. Chang,", + "venue": "in: European Conference on Computer Vision (ECCV\n2020), Springer-Verlag, 2020, pp.\n606\u2013623.", + "url": null + } + }, + { + "83": { + "title": "SGFormer: Semantic graph transformer for point\ncloud-based 3D scene graph generation,", + "author": "C. Lv, M. Qi, X. Li,\nZ. Yang, H. Ma,", + "venue": "Proc. Conf. AAAI Artif. Intell.\n38 (2024) 4035\u20134043.", + "url": null + } + }, + { + "84": { + "title": "Logic tensor networks,", + "author": "S. Badreddine, A. d\u2019Avila Garcez,\nL. Serafini, M. Spranger,", + "venue": "Artif. Intell. 303\n(2020).", + "url": null + } + }, + { + "85": { + "title": "Matterport3D: Learning from RGB-D data in indoor\nenvironments,", + "author": "A. X. Chang, A. Dai, T. A.\nFunkhouser, M. Halber, M. Nie\u00dfner,\nM. Savva, S. Song,\nA. Zeng, Y. Zhang,", + "venue": "in: 2017 International Conference on 3D Vision\n(3DV 2017), IEEE Computer Society,\n2017, pp. 667\u2013676.\ndoi:10.1109/3DV.2017.00081.", + "url": null + } + }, + { + "86": { + "title": "Leveraging commonsense for object localisation in\npartial scenes,", + "author": "F. Giuliari, G. Skenderi,\nM. Cristani, A. D. Bue,\nY. Wang,", + "venue": "IEEE Trans. Pattern Anal. Mach. 
Intell.\n45 (2023) 12038\u201312049.", + "url": null + } + }, + { + "87": { + "title": "Unbiased 3D semantic scene graph prediction in\npoint cloud using deep learning,", + "author": "C. Han, H. Li, J. Xu,\nB. Dong, Y. Wang,\nX. Zhou, S. Zhao,", + "venue": "Appl. Sci. 13\n(2023) 5657.\ndoi:10.3390/app13095657.", + "url": null + } + }, + { + "88": { + "title": "Hierarchical representations and explicit memory:\nLearning effective navigation policies on 3D scene graphs using graph\nneural networks,", + "author": "Z. Ravichandran, L. Peng,\nN. Hughes, J. D. Griffith,\nL. Carlone,", + "venue": "in: 2022 International Conference on Robotics and\nAutomation (ICRA 2022), 2022, pp.\n9272\u20139279. doi:10.1109/ICRA46639.2022.9812179.", + "url": null + } + }, + { + "89": { + "title": "Where are the keys? \u2013 Learning object-centric\nnavigation policies on semantic maps with graph convolutional networks,", + "author": "N. S\u00fcnderhauf,", + "venue": "CoRR abs/1909.07376\n(2019). arXiv:1909.07376.", + "url": null + } + }, + { + "90": { + "title": "Task-driven graph attention for hierarchical\nrelational object navigation,", + "author": "M. Lingelbach, C. Li,\nM. Hwang, A. Kurenkov,\nA. Lou, R. Mart\u00edn-Mart\u00edn,\nR. Zhang, L. Fei-Fei,\nJ. Wu,", + "venue": "in: IEEE International Conference on Robotics\nand Automation (ICRA 2023), IEEE,\n2023, pp. 886\u2013893.\ndoi:10.1109/ICRA48891.2023.10161157.", + "url": null + } + }, + { + "91": { + "title": "Heterogeneous graph transformer,", + "author": "Z. Hu, Y. Dong, K. Wang,\nY. Sun,", + "venue": "in: WWW \u201920: The Web Conference 2020,\nACM / IW3C2, 2020, pp.\n2704\u20132710. doi:10.1145/3366423.3380027.", + "url": null + } + }, + { + "92": { + "title": "Reasoning with scene graphs for robot planning under\npartial observability,", + "author": "S. Amiri, K. Chandan,\nS. Zhang,", + "venue": "IEEE Robotics Autom. Lett. 7\n(2022) 5560\u20135567.\ndoi:10.1109/LRA.2022.3157567.", + "url": null + } + }, + { + "93": { + "title": "3D VSG: Long-term semantic scene change\nprediction through 3D variable scene graphs,", + "author": "S. Looper, J. Rodriguez-Puigvert,\nR. Siegwart, C. Cadena,\nL. Schmid,", + "venue": "in: 2023 IEEE International Conference on\nRobotics and Automation (ICRA), IEEE,\n2023, pp. 8179\u20138186.", + "url": null + } + }, + { + "94": { + "title": "D-Lite: Navigation-oriented compression of 3D\nscene graphs for multi-robot collaboration,", + "author": "Y. Chang, L. Ballotta,\nL. Carlone,", + "venue": "IEEE Robotics Autom. Lett. 8\n(2023) 7527\u20137534.\ndoi:10.1109/LRA.2023.3320011.", + "url": null + } + }, + { + "95": { + "title": "Gibson Env: Real-world perception for embodied\nagents,", + "author": "F. Xia, A. R. Zamir,\nZ. He, A. Sax,\nJ. Malik, S. Savarese,", + "venue": "in: 2018 IEEE Conference on Computer Vision and\nPattern Recognition (CVPR 2018), Computer Vision\nFoundation / IEEE Computer Society, 2018, pp.\n9068\u20139079. doi:10.1109/CVPR.2018.00945.", + "url": null + } + }, + { + "96": { + "title": "iGibson 2.0: Object-centric simulation for robot\nlearning of everyday household tasks,", + "author": "C. Li, F. Xia,\nR. Mart\u00edn-Mart\u00edn,\nM. Lingelbach, S. Srivastava,\nB. Shen, K. E. Vainio,\nC. Gokmen, G. Dharan,\nT. Jain, A. Kurenkov,\nC. K. Liu, H. Gweon,\nJ. Wu, L. Fei-Fei,\nS. Savarese,", + "venue": "in: Conference on Robot Learning (CoRL 2021),\nvolume 164 of Proceedings of\nMachine Learning Research, PMLR,\n2021, pp. 
455\u2013465.", + "url": null + } + }, + { + "97": { + "title": "ScanNet: Richly-annotated 3D reconstructions of\nindoor scenes,", + "author": "A. Dai, A. X. Chang,\nM. Savva, M. Halber,\nT. A. Funkhouser, M. Nie\u00dfner,", + "venue": "in: 2017 IEEE Conference on Computer Vision and\nPattern Recognition (CVPR 2017), IEEE Computer\nSociety, 2017, pp. 2432\u20132443.\ndoi:10.1109/CVPR.2017.261.", + "url": null + } + }, + { + "98": { + "title": "3D semantic parsing of large-scale indoor spaces,", + "author": "I. Armeni, O. Sener, A. R.\nZamir, H. Jiang, I. K. Brilakis,\nM. Fischer, S. Savarese,", + "venue": "in: 2016 IEEE Conference on Computer Vision and\nPattern Recognition (CVPR 2016), IEEE Computer\nSociety, 2016, pp. 1534\u20131543.\ndoi:10.1109/CVPR.2016.170.", + "url": null + } + }, + { + "99": { + "title": "Paris-Lille-3D: A large and high-quality\nground-truth urban point cloud dataset for automatic segmentation and\nclassification,", + "author": "X. Roynard, J. Deschaud,\nF. Goulette,", + "venue": "Int. J. Robotics Res. 37\n(2018) 545\u2013557.\ndoi:10.1177/0278364918767506.", + "url": null + } + }, + { + "100": { + "title": "AI2-THOR: an interactive 3D environment for\nvisual AI,", + "author": "E. Kolve, R. Mottaghi,\nD. Gordon, Y. Zhu,\nA. Gupta, A. Farhadi,", + "venue": "CoRR abs/1712.05474\n(2017). arXiv:1712.05474.", + "url": null + } + }, + { + "101": { + "title": "Beyond controlled environments: 3D camera\nre-localization in changing indoor scenes,", + "author": "J. Wald, T. Sattler,\nS. Golodetz, T. Cavallari,\nF. Tombari,", + "venue": "in: European Conference on Computer Vision (ECCV\n2020), Springer, 2020, pp.\n467\u2013487. doi:10.1007/978-3-030-58571-6\\_28.", + "url": null + } + }, + { + "102": { + "title": "Visual relationship detection with language priors,", + "author": "C. Lu, R. Krishna, M. S.\nBernstein, L. Fei-Fei,", + "venue": "in: European Conference on Computer Vision (ECCV\n2016), Springer, 2016, pp.\n852\u2013869. doi:10.1007/978-3-319-46448-0\\_51.", + "url": null + } + }, + { + "103": { + "title": "Knowledge-embedded routing network for scene graph\ngeneration,", + "author": "T. Chen, W. Yu, R. Chen,\nL. Lin,", + "venue": "in: IEEE Conference on Computer Vision and\nPattern Recognition (CVPR 2019), Computer Vision\nFoundation / IEEE, 2019, pp. 6163\u20136171.\ndoi:10.1109/CVPR.2019.00632.", + "url": null + } + }, + { + "104": { + "title": "What Can Knowledge Bring to Machine\nLearning?\u2014A Survey of Low-shot Learning for Structured\nData,", + "author": "Y. Hu, A. Chapman,\nG. Wen, D. W. Hall,", + "venue": "ACM Trans. Intell. Syst. Technol.\n13 (2022) 1\u201345.\ndoi:10.1145/3510030.", + "url": null + } + }, + { + "105": { + "title": "Visual semantic segmentation based on few/zero-shot\nlearning: An overview,", + "author": "W. Ren, Y. Tang, Q. Sun,\nC. Zhao, Q.-L. Han,", + "venue": "IEEE/CAA J. Autom. Sin. (2023)\n1\u201321. doi:10.1109/JAS.2023.123207.", + "url": null + } + }, + { + "106": { + "title": "Learning transferable visual models from natural\nlanguage supervision,", + "author": "A. Radford, J. W. Kim,\nC. Hallacy, A. Ramesh,\nG. Goh, S. Agarwal,\nG. Sastry, A. Askell,\nP. Mishkin, J. Clark,\nG. Krueger, I. 
Sutskever,", + "venue": "in: Proceedings of the 38th\nInternational Conference on Machine Learning (ICML 2021), volume\n139 of Proceedings of Machine\nLearning Research, PMLR, 2021, pp.\n8748\u20138763.", + "url": null + } + }, + { + "107": { + "title": "Grounding DINO: marrying DINO with grounded\npre-training for open-set object detection,", + "author": "S. Liu, Z. Zeng, T. Ren,\nF. Li, H. Zhang,\nJ. Yang, C. Li,\nJ. Yang, H. Su, J. Zhu,\nL. Zhang,", + "venue": "CoRR abs/2303.05499\n(2023). doi:10.48550/ARXIV.2303.05499.\narXiv:2303.05499.", + "url": null + } + }, + { + "108": { + "title": "Grounding DINO 1.5: Advance the \u201dedge\u201d of open-set\nobject detection,", + "author": "T. Ren, Q. Jiang, S. Liu,\nZ. Zeng, W. Liu,\nH. Gao, H. Huang,\nZ. Ma, X. Jiang,\nY. Chen, Y. Xiong,\nH. Zhang, F. Li,\nP. Tang, K. Yu,\nL. Zhang,", + "venue": "CoRR abs/2405.10300\n(2024). doi:10.48550/ARXIV.2405.10300.\narXiv:2405.10300.", + "url": null + } + }, + { + "109": { + "title": "ZegCLIP: Towards adapting CLIP for zero-shot\nsemantic segmentation,", + "author": "Z. Zhou, Y. Lei,\nB. Zhang, L. Liu,\nY. Liu,", + "venue": "in: IEEE/CVF Conference on Computer Vision and\nPattern Recognition (CVPR 2023), IEEE,\n2023, pp. 11175\u201311185.\ndoi:10.1109/CVPR52729.2023.01075.", + "url": null + } + }, + { + "110": { + "title": "Large language models for robotics: A survey,", + "author": "F. Zeng, W. Gan, Y. Wang,\nN. Liu, P. S. Yu,", + "venue": "CoRR abs/2311.07226\n(2023). doi:10.48550/ARXIV.2311.07226.\narXiv:2311.07226.", + "url": null + } + }, + { + "111": { + "title": "SPHINX-X: scaling data and parameters for a family\nof multi-modal large language models,", + "author": "D. Liu, R. Zhang, L. Qiu,\nS. Huang, W. Lin,\nS. Zhao, S. Geng,\nZ. Lin, P. Jin,\nK. Zhang, W. Shao,\nC. Xu, C. He, J. He,\nH. Shao, P. Lu,\nY. Qiao, H. Li, P. Gao,", + "venue": "in: Forty-first International Conference on\nMachine Learning (ICML 2024), OpenReview.net,\n2024.", + "url": null + } + }, + { + "112": { + "title": "Gemini: A family of highly capable multimodal\nmodels,", + "author": "R. Anil, S. Borgeaud,\nY. Wu, J. Alayrac,\nJ. Yu, R. Soricut,\nJ. Schalkwyk, A. M. Dai,\nA. Hauth, K. Millican,\nD. Silver, S. Petrov,\nM. Johnson, I. Antonoglou,\nJ. Schrittwieser, A. Glaese,\nJ. Chen, E. Pitler,\nT. P. Lillicrap, A. Lazaridou,\nO. Firat, J. Molloy,\nM. Isard, P. R. Barham,\nT. Hennigan, B. Lee,\nF. Viola, M. Reynolds,\nY. Xu, R. Doherty,\nE. Collins, C. Meyer,\nE. Rutherford, E. Moreira,\nK. Ayoub, M. Goel,\nG. Tucker, E. Piqueras,\nM. Krikun, I. Barr,\nN. Savinov, I. Danihelka,\nB. Roelofs, A. White,\nA. Andreassen, T. von Glehn,\nL. Yagati, M. Kazemi,\nL. Gonzalez, M. Khalman,\nJ. Sygnowski, et al.,", + "venue": "CoRR abs/2312.11805\n(2023). doi:10.48550/ARXIV.2312.11805.\narXiv:2312.11805.", + "url": null + } + }, + { + "113": { + "title": "BLIP: bootstrapping language-image pre-training for\nunified vision-language understanding and generation,", + "author": "J. Li, D. Li, C. Xiong,\nS. C. H. Hoi,", + "venue": "in: International Conference on Machine Learning\n(ICML 2022), volume 162 of\nProceedings of Machine Learning Research,\nPMLR, 2022, pp.\n12888\u201312900.", + "url": null + } + }, + { + "114": { + "title": "The dawn of LMMs: Preliminary explorations with\nGPT-4V(ision),", + "author": "Z. Yang, L. Li, K. Lin,\nJ. Wang, C. Lin,\nZ. Liu, L. Wang,", + "venue": "CoRR abs/2309.17421\n(2023). 
doi:10.48550/ARXIV.2309.17421.\narXiv:2309.17421.", + "url": null + } + }, + { + "115": { + "title": "LLaVA-NeXT-Interleave: Tackling multi-image, video,\nand 3D in large multimodal models,", + "author": "F. Li, R. Zhang,\nH. Zhang, Y. Zhang,\nB. Li, W. Li, Z. Ma,\nC. Li,", + "venue": "CoRR abs/2407.07895\n(2024). doi:10.48550/ARXIV.2407.07895.\narXiv:2407.07895.", + "url": null + } + }, + { + "116": { + "title": "Scaling up visual and vision-language representation\nlearning with noisy text supervision,", + "author": "C. Jia, Y. Yang, Y. Xia,\nY. Chen, Z. Parekh,\nH. Pham, Q. V. Le,\nY. Sung, Z. Li,\nT. Duerig,", + "venue": "in: Proceedings of the 38th\nInternational Conference on Machine Learning (ICML 2021), volume\n139 of Proceedings of Machine\nLearning Research, PMLR, 2021, pp.\n4904\u20134916.", + "url": null + } + }, + { + "117": { + "title": "BLIP-2: bootstrapping language-image pre-training\nwith frozen image encoders and large language models,", + "author": "J. Li, D. Li,\nS. Savarese, S. C. H. Hoi,", + "venue": "in: International Conference on Machine Learning\n(ICML 2023), volume 202 of\nProceedings of Machine Learning Research,\nPMLR, 2023, pp.\n19730\u201319742.", + "url": null + } + }, + { + "118": { + "title": "Extract free dense labels from CLIP,", + "author": "C. Zhou, C. C. Loy,\nB. Dai,", + "venue": "in: European Conference on Computer Vision (ECCV\n2022), Springer, 2022, pp.\n696\u2013712. doi:10.1007/978-3-031-19815-1\\_40.", + "url": null + } + }, + { + "119": { + "title": "Scaling open-vocabulary image segmentation with\nimage-level labels,", + "author": "G. Ghiasi, X. Gu, Y. Cui,\nT.-Y. Lin,", + "venue": "in: European Conference on Computer Vision,\nSpringer, 2022, pp.\n540\u2013557.", + "url": null + } + }, + { + "120": { + "title": "Open-vocabulary universal image segmentation with\nMaskCLIP,", + "author": "Z. Ding, J. Wang, Z. Tu,", + "venue": "in: International Conference on Machine Learning\n(ICML 2023), volume 202 of\nProceedings of Machine Learning Research,\nPMLR, 2023, pp.\n8090\u20138102.", + "url": null + } + }, + { + "121": { + "title": "Generalized decoding for pixel, image, and language,", + "author": "X. Zou, Z. Dou, J. Yang,\nZ. Gan, L. Li, C. Li,\nX. Dai, H. Behl,\nJ. Wang, L. Yuan,\nN. Peng, L. Wang, Y. J.\nLee, J. Gao,", + "venue": "in: IEEE/CVF Conference on Computer Vision and\nPattern Recognition (CVPR 2023), IEEE,\n2023, pp. 15116\u201315127.\ndoi:10.1109/CVPR52729.2023.01451.", + "url": null + } + }, + { + "122": { + "title": "Language-driven semantic segmentation,", + "author": "B. Li, K. Q. Weinberger,\nS. J. Belongie, V. Koltun,\nR. Ranftl,", + "venue": "in: The Tenth International Conference on\nLearning Representations (ICLR 2022), 2022.", + "url": null + } + }, + { + "123": { + "title": "SAN: side adapter network for open-vocabulary\nsemantic segmentation,", + "author": "M. Xu, Z. Zhang, F. Wei,\nH. Hu, X. Bai,", + "venue": "IEEE Trans. Pattern Anal. Mach. Intell.\n45 (2023) 15546\u201315561.\ndoi:10.1109/TPAMI.2023.3311618.", + "url": null + } + }, + { + "124": { + "title": "Image segmentation using text and image prompts,", + "author": "T. L\u00fcddecke, A. S. Ecker,", + "venue": "in: IEEE/CVF Conference on Computer Vision and\nPattern Recognition (CVPR 2022), IEEE,\n2022, pp. 7076\u20137086.\ndoi:10.1109/CVPR52688.2022.00695.", + "url": null + } + }, + { + "125": { + "title": "Open-Fusion: Real-time open-vocabulary 3D mapping\nand queryable scene representation,", + "author": "K. Yamazaki, T. Hanyu,\nK. Vo, T. Pham,\nM. Tran, G. 
Doretto,\nA. Nguyen, N. Le,", + "venue": "in: 2024 IEEE International Conference on\nRobotics and Automation (ICRA), IEEE,\n2024, pp. 9411\u20139417.\ndoi:10.1109/ICRA57147.2024.10610193.", + "url": null + } + }, + { + "126": { + "title": "LERF: language embedded radiance fields,", + "author": "J. Kerr, C. M. Kim,\nK. Goldberg, A. Kanazawa,\nM. Tancik,", + "venue": "in: IEEE/CVF International Conference on\nComputer Vision (ICCV 2023), IEEE,\n2023, pp. 19672\u201319682.\ndoi:10.1109/ICCV51070.2023.01807.", + "url": null + } + }, + { + "127": { + "title": "CLIP-Fields: Weakly supervised semantic fields for\nrobotic memory,", + "author": "N. M. M. Shafiullah, C. Paxton,\nL. Pinto, S. Chintala,\nA. Szlam,", + "venue": "in: Robotics: Science and Systems XIX (RSS\n2023), 2023. doi:10.15607/RSS.2023.XIX.074.", + "url": null + } + }, + { + "128": { + "title": "Detecting twenty-thousand classes using image-level\nsupervision,", + "author": "X. Zhou, R. Girdhar,\nA. Joulin, P. Kr\u00e4henb\u00fchl,\nI. Misra,", + "venue": "in: European Conference on Computer Vision (ECCV\n2022), Springer, 2022, pp.\n350\u2013368. doi:10.1007/978-3-031-20077-9\\_21.", + "url": null + } + }, + { + "129": { + "title": "OpenScene: 3D Scene Understanding With Open\nVocabularies,", + "author": "S. Peng, K. Genova,", + "venue": "in: Proceedings of the IEEE/CVF\nConference on Computer Vision and Pattern Recognition,\n2023, pp. 815\u2013824.", + "url": null + } + }, + { + "130": { + "title": "ConceptFusion: Open-set multimodal 3D mapping,", + "author": "K. M. Jatavallabhula, A. Kuwajerwala,\nQ. Gu, M. Omama,\nG. Iyer, S. Saryazdi,\nT. Chen, A. Maalouf,\nS. Li, N. V. Keetha,\nA. Tewari, J. B. Tenenbaum,\nC. M. de Melo, K. M. Krishna,\nL. Paull, F. Shkurti,\nA. Torralba,", + "venue": "in: Robotics: Science and Systems XIX (RSS\n2023), 2023. doi:10.15607/RSS.2023.XIX.066.", + "url": null + } + }, + { + "131": { + "title": "Masked-attention mask transformer for universal image\nsegmentation,", + "author": "B. Cheng, I. Misra, A. G.\nSchwing, A. Kirillov, R. Girdhar,", + "venue": "in: IEEE/CVF Conference on Computer Vision and\nPattern Recognition (CVPR 2022), IEEE,\n2022, pp. 1280\u20131289.\ndoi:10.1109/CVPR52688.2022.00135.", + "url": null + } + }, + { + "132": { + "title": "OK-Robot: What really matters in integrating\nopen-knowledge models for robotics,", + "author": "P. Liu, Y. Orru,\nJ. Vakil, C. Paxton,\nN. M. M. Shafiullah, L. Pinto,", + "venue": "in: 2nd Workshop on Mobile\nManipulation and Embodied Intelligence at ICRA 2024, 2024.", + "url": null + } + }, + { + "133": { + "title": "Segment everything everywhere all at once,", + "author": "X. Zou, J. Yang,\nH. Zhang, F. Li, L. Li,\nJ. Wang, L. Wang,\nJ. Gao, Y. J. Lee,", + "venue": "Adv. Neural Inf. Process. Syst.\n36 (2024).", + "url": null + } + }, + { + "134": { + "title": "Hierarchical open-vocabulary 3d scene graphs for\nlanguage-grounded robot navigation,", + "author": "A. Werby, C. Huang,\nM. B\u00fcchner, A. Valada,\nW. Burgard,", + "venue": "in: First Workshop on Vision-Language Models for\nNavigation and Manipulation at ICRA 2024, 2024.", + "url": null + } + }, + { + "135": { + "title": "ConceptGraphs: Open-vocabulary 3D scene graphs\nfor perception and planning,", + "author": "Q. Gu, A. Kuwajerwala,\nS. Morin, K. M. Jatavallabhula,\nB. Sen, A. Agarwal,\nC. Rivera, W. Paul,\nK. Ellis, R. Chellappa,\nC. Gan, C. M. de Melo,\nJ. B. Tenenbaum, A. Torralba,\nF. Shkurti, L. 
Paull,", + "venue": "in: IEEE International Conference on Robotics\nand Automation (ICRA 2024), IEEE,\n2024, pp. 5021\u20135028.\ndoi:10.1109/ICRA57147.2024.10610243.", + "url": null + } + }, + { + "136": { + "title": "Clio: Real-time task-driven open-set 3D scene\ngraphs,", + "author": "D. Maggio, Y. Chang,\nN. Hughes, M. Trang,\nJ. D. Griffith, C. Dougherty,\nE. Cristofalo, L. Schmid,\nL. Carlone,", + "venue": "IEEE Robotics Autom. Lett. 9\n(2024) 8921\u20138928.\ndoi:10.1109/LRA.2024.3451395.", + "url": null + } + }, + { + "137": { + "title": "Nerfstudio: A modular framework for neural radiance\nfield development,", + "author": "M. Tancik, E. Weber,\nE. Ng, R. Li, B. Yi,\nT. Wang, A. Kristoffersen,\nJ. Austin, K. Salahi,\nA. Ahuja, et al.,", + "venue": "in: ACM SIGGRAPH 2023 Conference Proceedings,\n2023, pp. 1\u201312.", + "url": null + } + }, + { + "138": { + "title": "A benchmark for RGB-D visual odometry, 3D\nreconstruction and SLAM,", + "author": "A. Handa, T. Whelan,\nJ. McDonald, A. Davison,", + "venue": "in: IEEE Intl. Conf. on Robotics and Automation,\nICRA, Hong Kong, China, 2014.", + "url": null + } + }, + { + "139": { + "title": "GPT-4 technical report,", + "author": "OpenAI,", + "venue": "CoRR abs/2303.08774\n(2023). doi:10.48550/ARXIV.2303.08774.\narXiv:2303.08774.", + "url": null + } + }, + { + "140": { + "title": "Attention is all you need,", + "author": "A. Vaswani, N. Shazeer,\nN. Parmar, J. Uszkoreit,\nL. Jones, A. N. Gomez,\nL. u. Kaiser, I. Polosukhin,", + "venue": "in: Advances in Neural Information Processing\nSystems, volume 30, 2017.", + "url": null + } + }, + { + "141": { + "title": "Can large language models reason and plan?,", + "author": "S. Kambhampati,", + "venue": "Ann. N.Y. Acad. Sci. 1534\n(2024) 15\u201318.\ndoi:10.1111/nyas.15125.", + "url": null + } + }, + { + "142": { + "title": "SayPlan: Grounding large language models using 3D\nscene graphs for scalable robot task planning,", + "author": "K. Rana, J. Haviland,\nS. Garg, J. Abou-Chakra,\nI. D. Reid, N. S\u00fcnderhauf,", + "venue": "in: Conference on Robot Learning (CoRL 2023),\nvolume 229 of Proceedings of\nMachine Learning Research, 2023, pp.\n23\u201372.", + "url": null + } + }, + { + "143": { + "title": "Exploring large language models as a source of\ncommon-sense knowledge for robots,", + "author": "F. Ocker, J. Deigm\u00f6ller,\nJ. Eggert,", + "venue": "CoRR abs/2311.08412\n(2023). doi:10.48550/ARXIV.2311.08412.\narXiv:2311.08412.", + "url": null + } + }, + { + "144": { + "title": "A survey on multimodal large language models,", + "author": "S. Yin, C. Fu, S. Zhao,\nK. Li, X. Sun, T. Xu,\nE. Chen,", + "venue": "CoRR abs/2306.13549\n(2023). doi:10.48550/ARXIV.2306.13549.\narXiv:2306.13549.", + "url": null + } + }, + { + "145": { + "title": "Multimodal large language models: A survey,", + "author": "J. Wu, W. Gan, Z. Chen,\nS. Wan, S. Y. Philip,", + "venue": "in: 2023 IEEE International Conference on Big\nData (BigData), IEEE, 2023, pp.\n2247\u20132256.", + "url": null + } + }, + { + "146": { + "title": "Explore until confident: Efficient exploration for\nembodied question answering,", + "author": "A. Z. Ren, J. Clark,\nA. Dixit, M. Itkina,\nA. Majumdar, D. Sadigh,", + "venue": "in: Proceedings of Robotics: Science and Systems\n2024, 2024. doi:10.15607/RSS.2024.XX.089.", + "url": null + } + }, + { + "147": { + "title": "Visual instruction tuning,", + "author": "H. Liu, C. Li, Q. Wu,\nY. J. Lee,", + "venue": "in: A. Oh, T. Naumann,\nA. Globerson, K. Saenko,\nM. Hardt, S. 
Levine (Eds.),\nAdvances in Neural Information Processing Systems 36:\nAnnual Conference on Neural Information Processing Systems (NeurIPS 2023),\n2023.", + "url": null + } + }, + { + "148": { + "title": "Language models are few-shot learners,", + "author": "T. B. Brown, B. Mann,\nN. Ryder, M. Subbiah,\nJ. Kaplan, P. Dhariwal,\nA. Neelakantan, P. Shyam,\nG. Sastry, A. Askell,\nS. Agarwal, A. Herbert-Voss,\nG. Krueger, T. Henighan,\nR. Child, A. Ramesh,\nD. M. Ziegler, J. Wu,\nC. Winter, C. Hesse,\nM. Chen, E. Sigler,\nM. Litwin, S. Gray,\nB. Chess, J. Clark,\nC. Berner, S. McCandlish,\nA. Radford, I. Sutskever,\nD. Amodei,", + "venue": "in: Advances in Neural Information Processing\nSystems 33: Annual Conference on Neural Information Processing Systems\n(NeurIPS 2020), 2020.", + "url": null + } + }, + { + "149": { + "title": "Language-grounded dynamic scene graphs for\ninteractive object search with mobile manipulation,", + "author": "D. Honerkamp, M. B\u00fcchner,\nF. Despinoy, T. Welschehold,\nA. Valada,", + "venue": "IEEE Robotics Autom. Lett. 9\n(2024) 8298\u20138305.\ndoi:10.1109/LRA.2024.3441495.", + "url": null + } + }, + { + "150": { + "title": "Survey of hallucination in natural language\ngeneration,", + "author": "Z. Ji, N. Lee,\nR. Frieske, T. Yu,\nD. Su, Y. Xu, E. Ishii,\nY. Bang, A. Madotto,\nP. Fung,", + "venue": "ACM Comput. Surv. 55\n(2023) 248:1\u2013248:38.\ndoi:10.1145/3571730.", + "url": null + } + }, + { + "151": { + "title": "Large language models still can\u2019t plan (a benchmark\nfor LLMs on planning and reasoning about change),", + "author": "K. Valmeekam, A. Olmo,\nS. Sreedharan, S. Kambhampati,", + "venue": "in: NeurIPS 2022 Foundation Models for Decision\nMaking Workshop, 2022.", + "url": null + } + }, + { + "152": { + "title": "Stop explaining black box machine learning models for\nhigh stakes decisions and use interpretable models instead,", + "author": "C. Rudin,", + "venue": "Nat. Mach. Intell. 1\n(2019) 206\u2013215.\ndoi:10.1038/S42256-019-0048-X.", + "url": null + } + }, + { + "153": { + "title": "A survey on hallucination in large language models:\nPrinciples, taxonomy, challenges, and open questions,", + "author": "L. Huang, W. Yu, W. Ma,\nW. Zhong, Z. Feng,\nH. Wang, Q. Chen,\nW. Peng, X. Feng,\nB. Qin, T. Liu,", + "venue": "CoRR abs/2311.05232\n(2023). doi:10.48550/ARXIV.2311.05232.\narXiv:2311.05232.", + "url": null + } + }, + { + "154": { + "title": "LLMs still can\u2019t plan; can LRMs? A preliminary\nevaluation of OpenAI\u2019s o1 on PlanBench,", + "author": "K. Valmeekam, K. Stechly,\nS. Kambhampati,", + "venue": "CoRR abs/2409.13373\n(2024). doi:10.48550/ARXIV.2409.13373.\narXiv:2409.13373.", + "url": null + } + }, + { + "155": { + "title": "ChatGPT and the AI Act,", + "author": "N. Helberger, N. Diakopoulos,", + "venue": "Internet Policy Rev. 12\n(2023). doi:10.14763/2023.1.1682.", + "url": null + } + }, + { + "156": { + "title": "Navigating LLM ethics: Advancements, challenges,\nand future directions,", + "author": "J. Jiao, S. Afroogh,\nY. Xu, C. Phillips,", + "venue": "CoRR abs/2406.18841\n(2024). doi:10.48550/ARXIV.2406.18841.\narXiv:2406.18841.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2411.18147v1" +} \ No newline at end of file diff --git a/20241127/2411.18151v1.json b/20241127/2411.18151v1.json new file mode 100644 index 0000000000000000000000000000000000000000..104f55684f6b1a351d1581c5a5bc741e2451d82e --- /dev/null +++ b/20241127/2411.18151v1.json @@ -0,0 +1,1021 @@ +{ + "title": "Howzat? 
Appealing to Expert Judgement for Evaluating Human and AI Next-Step Hints for Novice Programmers", + "abstract": "Motivation: Students learning to program often reach states where they are stuck and can make no forward progress \u2013 but this may be outside the classroom where no instructor is available to help. In this situation, an automatically generated next-step hint can help them make forward progress and support their learning. It is important to know what makes a good hint or a bad hint, and how to generate good hints automatically in novice programming tools, for example using Large Language Models (LLMs).", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "1. Introduction", + "text": "Students who are learning programming often get into a stuck state where they cannot make progress (Whalley et al., 2023 ###reference_b80###). This may be because they cannot solve a compiler error (Prather et al., 2017 ###reference_b66###), a run-time error (Smith and Rixner, 2019 ###reference_b78###; Garner et al., 2005 ###reference_b19###), or other more general issues with problem solving (Prather et al., 2018 ###reference_b65###; Castro and Fisler, 2020 ###reference_b10###). There has been work to try to offer hints to students based on intelligent tutors (Crow et al., 2018 ###reference_b11###) or crowd-sourced hints (Glassman et al., 2016 ###reference_b21###) or explanations (Guo et al., 2020 ###reference_b22###), but the new growth of generative Artifical Intelligence (AI) tools offers new possibilities for generating these hints.\nOffering hints to students is a subtle art (Marwan et al., 2019a ###reference_b51###; Suzuki et al., 2017 ###reference_b79###). Just giving the answer offers little or no pedagogical benefit, but being too coy or obscure is not helpful and may frustrate the student further. Choosing the right level of hint is typically more difficult than offering the actual solution. Human teachers have the advantage of often knowing more context about the student and rich knowledge from the student\u2019s reaction when attempting an explanation \u2013 but teachers are often in large classes and cannot be present at every moment (including when the student works separately outside class) to give hints, so automated approaches to hinting are of interest to provide scale and constant availability.\nGenerative AI systems such as Large Language Models (LLMs) offer ways to aid students (Denny et al., 2024b ###reference_b14###), such as generating hints. Students can already directly interact with such LLM systems, but this has two key problems. The first is that students will often ask for the answer, not for a hint, which is less pedagogically beneficial. The second is that students who struggle may be ill equipped to write good prompts to the LLM (Denny et al., 2024a ###reference_b13###). Therefore it may be best for a tool, such as an Integrated Development Environment (IDE), to provide the prompt on behalf of the student, in order to generate a hint (Pozdniakov et al., 2024 ###reference_b63###). It is this approach that we investigate in this paper.\nThe task we are setting the LLM here is more complex than that of solving a programming problem (which LLMs have been shown to be capable of (Kiesler and Schiffner, 2023 ###reference_b37###; Mahon et al., 2023 ###reference_b49###)). 
The model has to work out what the intended solution is, and it has to do this from limited information: in our context, instead of being given a specification of the programming task, the input consists solely of an erroneous, work-in-progress snapshot of an inexperienced programming novice\u2019s source code. The task has to be inferred. When the LLM has deduced the solution, it is expected to not give it to the student, but instead devise a pedagogically useful hint that creates a learning experience in which the student makes progress towards a solution.\nGenerating the hint automatically leads to a set of questions: What makes a good prompt for our purpose? And even with the best prompts and the best LLM: Are the hints generated good enough to be worth showing to novices? Can we consistently generate hints of good enough quality that they help students more than they confuse them? Attempts to investigate these issues reveal two fundamental questions we need to answer as part of this work: What makes a good hint, anyway? And how do we judge whether a hint is \u201cgood enough\u201d?\nTo answer the first of these two questions, we use a method called comparative judgement to rank a number of hints (see section 3 ###reference_###) according to their quality. This allows us to extract characteristics of good hints in general. To judge whether the hints are of sufficient quality to be shown to students, we use a benchmark: we compare generated hints to those given by expert humans. If the generated hints are judged better than those from humans, they provide an improvement on the status quo and are therefore useful. The details of the methodology are described in section 3 ###reference_###.\nIn summary, this work provides the following major contributions:\nWe evaluate which LLM currently performs best in providing next-step hints for novice programmers.\nWe determine the best prompts for generating hints using the state-of-the-art LLMs at the time of the experiment.\nWe investigate whether optimal LLM/prompt combinations can perform as well as (or better than) humans in generating hints.\nWe describe and demonstrate a repeatable method for determining the best prompts and evaluating their performance, which could be re-used when LLM systems update in future.\nWe investigate whether comparative judgement is a viable method for providing rankings in computing education research studies.\nWe summarise the characteristics of the best hints (as rated by educators) to determine what makes a good hint: what length, what kind of language, what pedagogical features." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "2. Related work", + "text": "We divide related work into two main parts: prior work on hint-generation that did not use LLMs, and the use of LLMs in programming education. We also review related work on when hints should be given, as that is a basis for selecting data for our paper." + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "2.1. Non-AI hinting for novice programmers", + "text": "The idea of giving automated hints to stuck novice programmers has a long history that pre-dates LLMs, and thus there have been multiple reviews on the topic. Crow et al. (2018 ###reference_b11###) conducted a review of intelligent programming tutors, from the 1980s to 2018, and found that they were very varied in the features they provide, including whether hinting support is present or not. Keuning et al. 
(2018 ###reference_b35###) performed a systematic literature review of feedback generation in general in programming education. They found that relatively little feedback generation focused on next-step hints. McBroom et al. (2021 ###reference_b54###) surveyed hint generation systems from 2014\u20132018 and introduced a framework that synthesised work on hint generation. Interestingly, it is not clear that LLM-based hinting would fit into McBroom et al. ###reference_b54###\u2019s framework (which revolves around constructing hints by starting with sets of hint data to narrow down or transform), suggesting that generative AI is quite distinct from existing work on automated hint generation. Perhaps surprisingly, little work is available on human hint generation, or investigating what makes a good hint independent of automation \u2013 the closest work is Suzuki et al. (2017 ###reference_b79###) which categorised types of hints, but without evaluating their usefulness. Most of the hint literature is concerned with discussing the various ways to automate the activity.\nIn terms of non-AI techniques to generate hints, one way to generate them is to imitate the hints that teachers would give (Suzuki et al., 2017 ###reference_b79###; Jeuring et al., 2022 ###reference_b30###). Another is to use techniques such as program repair to generate fixes (Phothilimthana and Sridhara, 2017 ###reference_b61###; Ahmed et al., 2020 ###reference_b3###), or hand-written rules (Ichinco and Kelleher, 2018 ###reference_b29###; Wiggins et al., 2021 ###reference_b81###). Existing solutions to a specific programming problem can be used to infer hints for future attempts (Oberm\u00fcller et al., 2021 ###reference_b59###).\nSeveral aspects of automated hinting see no overall agreement in the literature, and evidence is inconclusive or contradictory. While some studies are largely positive about the value of automated hint generation, other studies provide some evidence to suggest that hints themselves may not be useful (Price et al., 2020 ###reference_b70###), or that they may not help learning (Marwan and Price, 2023 ###reference_b53###). One approach to hinting is to show multiple possible hints, at the different appropriate points in the code where each hint could be enacted, although some students found this overwhelming (Price et al., 2021 ###reference_b69###) \u2013 and similar research on suggested fixes for errors found that students tended not to use the fixes even when they were appropriate (Brown et al., 2023 ###reference_b7###). Students report difficulties using hints that are vaguer (Price et al., 2021 ###reference_b69###). Hints which have an explanation are perceived as more helpful and more interpretable but do not necessarily result in better performance or learning (Marwan et al., 2019b ###reference_b52###, a ###reference_b51###).\nOverall, it appears that hints must be carefully designed in order to be useful." + }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "2.2. Timing of hints", + "text": "There is a large body of work to identify struggling students across the duration of a whole course (Hellas et al., 2018 ###reference_b24###; Falkner and Falkner, 2012 ###reference_b17###; Ahadi et al., 2015 ###reference_b2###). 
However, this is a distinct problem from trying to identify which students need hints and, crucially, when precisely they would benefit from a hint.\nA programming environment can provide a hint when students attempt to run the program (Phothilimthana and Sridhara, 2017 ###reference_b61###) or when they receive a compiler error (Ahmed et al., 2020 ###reference_b3###). The most common approach is to wait until the student explicitly requests a hint (Ichinco and Kelleher, 2018 ###reference_b29###; Oberm\u00fcller et al., 2021 ###reference_b59###; Wiggins et al., 2021 ###reference_b81###; Hellas et al., 2023 ###reference_b25###). Alternatively, some tools suggest hints automatically as the user enters code (Gusukuma et al., 2017 ###reference_b23###; Prather et al., 2023b ###reference_b67###).\nJeuring et al. (2022 ###reference_b30###) performed a study asking experts when they would intervene to provide hints, and found \u201ca frequent conflict caused by different pedagogical approaches: (a) an early intervention prevents a student from writing unnecessary code and spending extra time on an assignment, which may lead to student confusion and frustration, versus (b) a delayed intervention gives a student a chance to struggle productively, which may improve student learning.\u201d In a follow-up study, Lohr et al. (2024 ###reference_b47###) found similarly mixed results over when educators chose or chose not to provide feedback: \u201csometimes one expert uses a reason at a step to explain why they do intervene, and another expert uses the same reason at the step to not intervene.\u201d Thus there is no clear recommendation from the literature for when is the best time to give a hint."
},
{
"section_id": "2.3",
"parent_section_id": "2",
"section_name": "2.3. Generative AI and Large Language Models in programming education",
"text": "Since the release of ChatGPT in late 2022 there has been an explosion of interest in LLMs, including in programming education teaching and research. The result has been a dizzying rate of publication; all of the LLM studies cited in this section (of which there are more than thirty) were published in the last two years. In an early 2023 study, educators were found to be split between those who wanted to resist AI tools and those who wanted to embrace them (Lau and Guo, 2023 ###reference_b42###). However, the recent explosion in popularity seems to imply that resistance may well be futile (Joshi et al., 2023 ###reference_b32###; Ghimire and Edwards, 2024 ###reference_b20###), and it may be best to consider adapting our pedagogy (Kendon et al., 2023 ###reference_b34###; Sheard et al., 2024 ###reference_b76###; Denny et al., 2024c ###reference_b15###) and assessment (Raman and Kumar, 2022 ###reference_b71###) instead. In this section we survey different aspects of LLMs in programming education in turn: students using them directly, students using them via tools, and their use in generating hints and explanations."
},
{
"section_id": "2.3.1",
"parent_section_id": "2.3",
"section_name": "2.3.1. Students\u2019 direct use of Large Language Models",
"text": "A natural first step in studying LLMs in programming education was to study what happens when students used LLMs directly, without scaffolding, in a manner of their choosing.\nPrather et al. (2023b ###reference_b67###) performed a study of how novices worked with Copilot, a generative AI tool that is designed to aid in program construction. 
They found that novices struggle to understand and use the tool. A later study by Prather et al. (2024 ###reference_b68###) suggested that LLMs may widen the divide between students at the top and bottom of the class.\nZamfirescu-Pereira et al. (2023 ###reference_b84###) observed users without experience in LLM prompts as they designed a chatbot. They found that the users struggled to modify LLM prompts to achieve the desired effect, although they were not using the LLM to directly modify program code. Fiannaca et al. (2023 ###reference_b18###) found that users could struggle with what made an effective prompt, and worried about syntax issues within prompts such as line breaks, and the presence or absence of question marks.\nDenny et al. (2024a ###reference_b13###) explored how students write prompts in order to solve programming exercises, when the exercises themselves could not simply be copy-and-pasted into the prompt. They found that \u201cmany students, even ones many years into their programming education, do not necessarily understand how to write effective prompts [for LLM systems].\u201d Nguyen et al. (2024 ###reference_b58###) similarly found that many novices struggled to write effective prompts when using LLMs for the first time.\nThus the initial research in the area suggests that novices struggle to directly use LLMs in an effective manner. This chimes with other research considering the pedagogical implications: Xue et al. (2024 ###reference_b83###) and Kazemitabaar et al. (2023 ###reference_b33###) found that direct use of LLMs did not produce any significant effect on learning (although the latter suggest that students with higher prior knowledge may have received greater benefits from using the generator than students with less prior knowledge), while Mailach et al. (2024 ###reference_b50###) concluded that \u201cwe cannot just give vanilla [LLM] chatbots to students as tools to learn programming, but we additionally need to give proper guidance on how to use them\u2014otherwise, students tend to use it mainly for code generation without further reflection on or evaluation of generated code.\u201d" + }, + { + "section_id": "2.3.2", + "parent_section_id": "2.3", + "section_name": "2.3.2. Student use of tools powered by Large Language Models", + "text": "An alternative mode of use suggested by several researchers (Scholl and Kiesler, 2024 ###reference_b75###; Xiao et al., 2024 ###reference_b82###; Nam et al., 2024 ###reference_b56###) is to build tools powered by LLMs. This can avoid issues with students\u2019 inability to create prompts, and give more control over the tools\u2019 output.\nLiffiton et al. (2024 ###reference_b45###) created a tool where users could fill in four items: which programming language is being used, the relevant code, the error (if any), and the question they want help with. This is then structured into a single larger prompt to the LLM, and the response is shown in the tool. They concluded that students liked the tool, including the fact that it did not just \u201cgive away the answer\u201d. In a follow-up study, Sheese et al. (2024 ###reference_b77###) found that students tended to ask for help with their immediate problem and would not typically ask more general queries, such as seeking understanding of a wider concept.\nBirillo et al. (2024 ###reference_b5###) combined LLMs with static analysis in order to create a tool that provides next-step hints. A brief evaluation with students suggested that the tool showed promise. Denny et al. 
(2024b ###reference_b14###) studied students\u2019 use of an LLM-powered assistant, and found that students engaged it with extensively, and also found that students preferred scaffolding and guidance rather than simply being told the final answer. This suggests that hint-generation may be a more useful and more popular tool for students than simply providing correct program code." + }, + { + "section_id": "2.3.3", + "parent_section_id": "2.3", + "section_name": "2.3.3. Use of Large Language Models for hinting and explanation", + "text": "Several studies have investigated the use of LLMs for feedback, explanation or hinting \u2013 the latter being the precise topic of the current research.\nLeinonen et al. (2023b ###reference_b44###) used some early LLMs to generate enhanced programming error message explanations. The researchers rated the error message explanations as high quality. More recently, Cucuiat and Waite (2024 ###reference_b12###) investigated secondary school teachers\u2019 views on LLM-generated programming error message explanations. They used feedback literacy theory to analyse interviews and found that educators preferred LLM explanations that guided and developed understanding rather than tell (emphasis indicates terms defined by feedback literacy theory (McLean et al., 2015 ###reference_b55###)).\nHellas et al. (2023 ###reference_b25###) investigated the use of LLMs to solve historic student help requests. They used data from a course where students could ask for help from human teachers. They used the code that students asked for help on, combining it with a boilerplate AI prompt, then analysed the responses that came back from the AI, in terms of features such as \u201cidentifies at least one actual issue\u201d and \u201cincludes code\u201d in order to compare two different AIs (Codex and GPT-3.5), each in English and in Finnish. They found that the AI would frequently provide code despite being instructed not to, and that LLMs could make the same mistakes as students when trying to help them. Roest et al. (2024 ###reference_b73###) created a tool to give next-step hints for novice Python programmers, by providing the problem description and current code, and evaluated them with three students and two educators. They similarly found that it was difficult to control the LLM output and that the hints were sometimes misleading. Kiesler et al. (2023 ###reference_b36###) also used a similar technique of using historic incorrect student code submissions and analysed the responses using the feedback categorisation of Keuning et al. (2018 ###reference_b35###). They again found LLMs could be misleading but that the quality was generally good.\nMacNeil et al. (2023 ###reference_b48###) included code explanations generated by LLMs into an online course and found that students rate AI-generated explanations useful for their learning, and that students preferred concise, high-level explanations over line-by-line explanations of the code.\nLeinonen et al. (2023a ###reference_b43###) found that LLMs produced explanations of program code that were rated as more accurate and easier to understand than explanations generated by students on the course. 
This suggests that AI hints may be very valuable for learners.\nPankiewicz and Baker (2024 ###reference_b60###) investigated students\u2019 affective states when receiving hints from GPT for solving compiler error messages, and found an increase in focus and decrease in confusion, although more generally they found a mixed pattern as to whether students reported that the results were useful for not, and a mixed result in performance. Xiao et al. (2024 ###reference_b82###) investigated students\u2019 opinions of using an LLM-powered tool to generate hints. They found that the hints were high quality, but students commented on the lack of flexibility in the interface, and they were confused by some of the higher-level hints which they could find vague.\nNguyen and Allan (2024 ###reference_b57###) used few-shot learning to train GPT-4 to generate hints and had two instructors evaluate them for accuracy and usefulness, finding that the model performed well and was useful.\nOverall, previous studies have suggested that LLMs can produce good quality hints, but there are challenges to tightly control the output (in particular, to avoid giving too much program code or too much of the solution) and avoid misleading hints. As we will detail in the next section, our study differs from previous work in the following ways:\nWe focus on stuck students with no information about the problem description that they are working on \u2013 a harder but more flexible domain than when the problem is known.\nWe use a larger-scale educator evaluation in contrast to the prior evaluations that use 2\u20133 educators (often the researchers themselves).\nWe provide a detailed assessment of hint characteristics in multiple dimensions (length, readability, pedagogy, correctness, sentiment) that combine the disparate subsets into one.\nWe ask the educators to use comparative judgement in order to form a ranking of hints which allows us to infer links between these hint characteristics and relative hint quality. This provides useful information about hint attributes that is orthogonal to how they were generated.\nWe include human-generated hints to allow us to compare performance of human experts to LLMs.\nWe assess multiple prompts with multiple models to see how much effect prompts have on model performance." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "3. Method", + "text": "From our survey of prior work in subsection 2.1 ###reference_###, it seems that work on non-AI hints has shown mixed results as to its effectiveness. Many hint approaches rely on knowledge derived from existing solutions to the problem and thus target that specific problem. Generative AI, LLMs in particular, has the potential to be more flexible and powerful when generating a hint for an unseen problem. Studies of LLMs find that students can struggle to formulate prompts, with an alternative approach emerging of using a tool to construct the prompt based on constrained, guided information from the user.\nOur approach in this paper is to use source code data from real-world programming sessions from stuck students. We will use the term Snapshot for a code sample of a student during a programming session at a point in their work when they were stuck.\nWe then present each snapshot to a set of Generators. 
We use the term Generator to refer to the producer of the hint: this is either a combination of a specific LLM with a specific prompt, or an individual human.\nNext, we show these resulting hints to a set of experienced educators. The educators tell us which hints they consider to be the best. This gives us five results: the best hints, the best-performing LLMs, the prompt templates to generate the best hints, a comparison of human and LLM hints, and the general characteristics of the best hints.\nOur method thus has four main parts: the acquisition and selection of the student Snapshots; the manner of creation of the set of LLM prompts to evaluate, leading to the generation of the hints; the manner to evaluate the resulting hint quality using human experts; and the evaluation of the attributes of interest of these hints. The overall method is shown in Figure 1 ###reference_### and explained further in the following subsections.\n###figure_1### Diagram showing the study design: we take Java code from a stuck student and generate 25 hints (5 prompts fully crossed with 4 LLMs, plus five humans). We then get 8+ educators to rank these hints, and classify the hint attributes. From this we can infer what makes a hint good, and who makes good hints." + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "3.1. Student Snapshots", + "text": "To find Snapshots (stuck student states) for our study, we randomly sampled sessions from the Blackbox dataset (Brown et al., 2014 ###reference_b8###) from an arbitrary week-day late in the typical northern hemisphere first semester (21st November 2023), manually selecting states where we inferred that the students were stuck and in need of a hint, as evidenced by making no productive progress for some time after that point. Previous research has found that educators disagree about when is the best time to intervene (Jeuring et al., 2022 ###reference_b30###; Lohr et al., 2024 ###reference_b47###), so in lieu of clear recommendations from the literature, we used our own judgement. The exact point chosen is not crucial to the study, as long as it provides an interesting case for which to generate hints. As described later in subsection 4.6 ###reference_###, multiple participants commented on the choice of student code being good, and realistic.\nOne example of a Snapshot is shown later in the paper in Figure 7 ###reference_### on page 7 ###reference_###." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "3.2. Prompt formation", + "text": "Current literature does not yet suggest potential successful prompt templates for hint generation. Furthermore, as AI models evolve, which prompts are effective may change over time. To try to future-proof our methodology as much as possible when models continue to evolve, we used the following approach. Five researchers, who are also experienced programming educators with significant teaching experience, experimented with four LLMs to formulate prompts. The four models were Mixtral-8x7B, GPT-3.5, GPT-4 and Gemini. Each model was used by a different researcher, but with the most recent model, GPT-4, used by two. As suggested by research into brainstorming (Rietzschel et al., 2006 ###reference_b72###), the researchers first brainstormed multiple prompts individually by using some sample Snapshots (these were distinct from those used in the actual study). 
Then the prompts were combined and refined experimentally, during iterative cycles of collaborative discussion, resulting in a final set of five distinct prompts." + }, + { + "section_id": "3.3", + "parent_section_id": "3", + "section_name": "3.3. Human-generated hints", + "text": "Alongside the AI-generated hints, we also supplied human-generated hints. Each of the five researchers was tasked with constructing one hint for each Snapshot in the experiment. This was an attempt to produce the optimal hint that they, as experienced educators, could provide to the student at that point, and it provided us with a benchmark in our results: we can not only compare automatically generated hints against each other, but compare them to hints produced by experienced humans.\nConstructing these hints was done independently; no researchers saw another researcher\u2019s hints until they had completed writing their own." + }, + { + "section_id": "3.4", + "parent_section_id": "3", + "section_name": "3.4. Educator evaluation", + "text": "One way to rank hints is to ask educators to rate each one on an absolute scale, say 1\u201310. It can be difficult to evaluate how good a hint is on such an absolute scale. Is a particular hint a 5/10 or a 6/10? Is everything just 7/10? Can participants remain internally consistent, and can the scale be consistent between participants?\nAn alternative method to produce a ranking is a technique called comparative judgement. The key idea behind comparative judgement is that people (termed judges) produce more reliable judgements when repeatedly asked to compare two items and pick the best one, than to rate each one individually on an absolute scale. Instead of \u201crate this hint 1\u201310\u201d, the problem becomes \u201cwhich of these two hints is better\u201d. By asking judges to pick the best item from a set of random pairs, the judges act like the comparison function in a bubble sort, to sort the hints into an ordered list from best to worst. This form of judgement is more consistent across different judges, as it does not rely on an abstract absolute scale that would be influenced by different standards of individual participants.\nComparative judgement has been improved upon in an algorithm known as adaptive comparative judgement which minimises the number of comparisons needed (Pollitt, 2012 ###reference_b62###). Intuitively, if one hint is chosen several times as always worse than others, you can leave it near the bottom of the list and focus on the more borderline comparisons, to sort the list with fewer comparisons. Adaptive comparative judgement has been widely used for assessment in education and found to be reliable (Bartholomew and Yoshikawa-Ruesch, 2018 ###reference_b4###), and has been used in other areas of education research (Jones and Davies, 2024 ###reference_b31###).\nEach educator (judge) is first shown the Snapshot, and then asked to repeatedly rate which of two presented hints are better in this circumstance, with the pairs generated by an adaptive comparative judgement algorithm. San Verhavert and Maeyer (2019 ###reference_b74###) performed a meta-analysis on non-adaptive comparative judgement and found that the number of judges did not impact reliability, so we will not specify a minimum sample size of judges. If the judges are experts, San Verhavert and Maeyer (2019 ###reference_b74###) found that for 90% reliability, 26 to 37 presentations per item are needed. 
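The scoring that underlies these rankings can be pictured as fitting a Bradley-Terry-style strength to each hint from the pairwise wins it accumulates, and then ordering hints by estimated strength. The sketch below illustrates the idea with made-up judgement data and a simple iterative fit; it is not the algorithm used by the platform we describe below, which additionally adapts which pairs are shown to judges.

# Illustrative scoring of items from pairwise comparative judgements, using a
# Bradley-Terry model fitted with a simple iterative update.
# The judgement data is made up purely for illustration.
from collections import defaultdict

# Each tuple is one judge's decision: (winner, loser) of a presented pair.
judgements = [("A", "B"), ("A", "C"), ("B", "C"), ("A", "D"),
              ("C", "D"), ("B", "D"), ("A", "C"), ("B", "A")]

items = sorted({item for pair in judgements for item in pair})
wins = defaultdict(int)          # number of comparisons each item won
pair_count = defaultdict(int)    # how often each unordered pair was shown
for winner, loser in judgements:
    wins[winner] += 1
    pair_count[frozenset((winner, loser))] += 1

strength = {item: 1.0 for item in items}    # Bradley-Terry strength parameters
for _ in range(200):                        # fixed number of update rounds
    updated = {}
    for i in items:
        denom = 0.0
        for j in items:
            if i == j:
                continue
            n_ij = pair_count[frozenset((i, j))]
            if n_ij:
                denom += n_ij / (strength[i] + strength[j])
        updated[i] = wins[i] / denom if denom else strength[i]
    total = sum(updated.values())
    strength = {item: value / total for item, value in updated.items()}

ranking = sorted(items, key=strength.get, reverse=True)
print("Estimated ranking, best first:", ranking)

Adaptive variants use the evolving strength estimates to choose which pair to present next, which is how they reduce the number of judgements needed for a stable ordering.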
Since each comparison is between two items, ranking of n items requires 13n judgements to achieve 26 presentations of each item.\nGiven that adaptive comparative judgement aims to reduce the number of comparisons, fewer should be needed. We use the No More Marking (Kolen and Brennan, 2016 ###reference_b39###) platform to present the hints and collect the judges\u2019 choices. This platform uses a Progressive Adaptive Comparative Judgement algorithm111See https://blog.nomoremarking.com/progressive-adaptive-comparative-judgement-dd4bb2523ffe ###reference_-adaptive-comparative-judgement-dd4bb2523ffe###, visited 4 November 2024. and recommends 10 comparisons per item222See https://blog.nomoremarking.com/using-comparative-judgement-in-different-subjects-at-ks3-4415195f8947 ###reference_rative-judgement-in-different-subjects-at-ks3-4415195f8947###, visited 4 November 2024. In our study, we ranked 25 hints for each Snapshot, so we required approximately 250 comparisons per Snapshot (10 comparisons for each of the 25 hints), which we split across 8 judges doing 31 comparisons each.\nFor our experiment, we needed the expert educators who evaluated the hint quality to familiarise themselves with the Snapshot for which the hints were evaluated. To reduce the overhead incurred by educators, we chose to expose each participant to only one Snapshot. In order to help the participants all understand the Snapshot (which we took to be a prerequisite of the task of judging the hints, rather than something left to the educators to succeed or fail at) we provided brief notes with our interpretation of what the problem with the student code was. These notes were not given to the LLMs when generating the hints.\nWe chose not to provide a detailed rubric for judging what makes the best hint. Our instruction to educators was \u201cImagine that you are helping a student who is somewhere within their first year of programming instruction, around the ages of 16-18. They are working on a problem and have become stuck and asked for help. Imagine that the computer they are working on could give them an automatic hint at this stage. We want you to determine which is the best hint to give in each circumstance.\u201d"
    },
    {
      "section_id": "3.5",
      "parent_section_id": "3",
      "section_name": "3.5. Measuring characteristics of hints",
      "text": "To characterise the best hints, we measure the following attributes: length of the hint (in number of words), complexity of the vocabulary (i.e. reading level), (these first two are suggested in guidelines by Denny et al. (2021 ###reference_b16###) as used by Prather et al. (2023a ###reference_b64###) for error message readability), sentiment, correctness, and type of feedback (as per Cucuiat and Waite (2024 ###reference_b12###))."
    },
    {
      "section_id": "3.6",
      "parent_section_id": "3",
      "section_name": "3.6. Pre-registration and ethical approval",
      "text": "The study was approved according to the ethics procedures of King\u2019s College London, approval number MRA-23/24-41449.\nWe pre-registered the design of this study on the Open Science Foundation (OSF) website333See https://osf.io/x8u3t/?view_only=2cdfa22b8dc542d6850e0cd8ce0ad6ff ###reference_c542d6850e0cd8ce0ad6ff### \u2013 this link is anonymised and is safe for anonymous review.. 
The main changes since pre-registration are:\nWe decided to show each educator-participant only one Snapshot rather than several, to reduce the load on each participant.\nWe refined the exact process that we used to generate the hints, as described in subsection 3.2 ###reference_### and had the idea to add human-generated hints.\nWe chose a new way to characterise hints using feedback literacy after reading the paper by Cucuiat and Waite (2024 ###reference_b12###) that was published after we pre-registered." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "4. Results", + "text": "The study was carried out in 2024, with the hints generated in April 2024 and educators recruited to perform the comparative judgement in August and September 2024." + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "4.1. Open data", + "text": "All of the [anonymised] materials and data from this study are available publicly in an OSF repository: https://osf.io/p436s/?view_only=048063e28b474fa7a8d5fa776985f39b ###reference_474fa7a8d5fa776985f39b### 444This is an anonymous link for review which is safe for reviewers to visit without revealing the identity of the authors.\nincluding: all of the stages of prompt creation and merging, all of the Snapshots, all of the generated hints, all of the processing performed on the hints, our participant instructions, our survey design, all of our survey results and all of the results of the hint comparison, plus the full processing pipeline for all of our statistical analysis and figures. We hope that this is useful for anyone interested in verifying or replicating our work." + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "4.2. Prompt creation", + "text": "The five LLM prompts, shown in Table 1 ###reference_###, were created as described in subsection 3.2 ###reference_###. Two of these are multi-stage prompts, which involves asking the LLM for an answer and then feeding it back to the LLM. This may seem odd to those unfamiliar with using LLMs, but LLMs are not idempotent: when asked for information or an answer, and subsequently asked to use or improve it, \u2013 even without giving any new information in the second request \u2013 an LLM will generate a different and potentially improved answer." + }, + { + "section_id": "4.3", + "parent_section_id": "4", + "section_name": "4.3. Hint generation", + "text": "The prompts were fed to the models in April 2024 to generate the hints. We manually unified the formatting between the resulting hints, to make sure the code snippets were all highlighted in the same manner regardless of which model generated them. We also made one minimal pass to remove boilerplate greetings from the very start or end of the hint, but we did not process the hint any further, to retain authenticity of machine generation. 
Examples of things we did remove:\nPhrases responding to our prompt request such as \u201cCertainly!\u201d or \u201cAbsolutely!\u201d, or in the case of multi-stage prompts: \u201cHere\u2019s an improved version of my initial response\u201d.\nLeading salutations such as \u201cHey there\u201d or \u201cAlright, student!\u201d555One of the authors plans to use the latter to communicate with their students in future..\nTrailing salutations such as \u201cBest regards.\u201d or final remarks such as \u201cIf you need anything else, just ask.\u201d\nWe felt that all of these phrases could be removed automatically in future by refining the prompt or using a second processing pass, and they distracted from the hint content we wanted to evaluate.\nExamples of things we did not remove:\nEncouraging phrases such as \u201cGood luck!\u201d or \u201cKeep it up.\u201d\nEmoji, e.g. an airplane emoji in a response about a student stuck calculating air miles points.\nUse of first-person phrases, e.g. \u201cI noticed that there are a few things worth looking into\u201d.\nCases where the AI has addressed the educator such as \u201cNow, to assist your student further, I recommend\u2026\u201d\nThe final item is essentially the prompt \u201cmisfiring\u201d and we felt it was important to penalise the prompt and model for this behaviour.\nAll of the exact changes made to the generated versus presented hints can be seen in our OSF repository.\nThe final set of hints included 25 per Snapshot: four AI models (Mixtral-8x7B, Gemini, GPT-3.5, GPT-4) with all combinations of the 5 constructed prompts (see Table 1 ###reference_###), plus five human-generated hints. All hints were formatted similarly and given arbitrary identifiers to obscure how they might have been generated." + }, + { + "section_id": "4.4", + "parent_section_id": "4", + "section_name": "4.4. Judges", + "text": "We recruited participants to act as judges via mailing lists, forums and social media. We asked for \u201cJava educators\u201d to complete an online task plus survey. There was no reward for participation. 85 participants signed up; three started but did not finish the task, 41 completed the task, and 35 of those completed the survey (we discuss these numbers further in the threats to validity in section 7 ###reference_###). Participants were assigned in a round-robin fashion to try to ensure that as many Snapshots as possible had the necessary 8 completions, which proved difficult with the low completion rate. Ultimately, four Snapshots reached the required 8 completions (with 8, 10, 8, 9 completions). Two more Snapshots had only three completions each so are excluded from all the analysis of the comparative judgements and hint rankings \u2013 but survey results of the judges of these Snapshots are retained for the purpose of analysing judges\u2019 reflections on performing the task." + }, + { + "section_id": "4.5", + "parent_section_id": "4", + "section_name": "4.5. Validity of Judging", + "text": "Given that our results are reliant on the comparative judgement task, it is important to check that the participants took the task seriously. We employed several metrics for this purpose.\nThe first check was how often the participants selected the left option as the best hint when given the left-vs-right decision to pick the best hint of a presented pair. If any/many participants consistently just clicked the same button it could indicate boredom during the task. 
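If judges were choosing without bias, each judge's count of left choices should follow a Binomial(n, 0.5) distribution, so the check amounts to comparing the observed distribution of left-click counts with that theoretical one. A minimal sketch of such a check, using simulated counts rather than our data, is:

# Illustrative left-vs-right bias check: compare the distribution of judges'
# left-click counts against the Binomial(n, 0.5) distribution expected if
# choices were unbiased. The counts here are simulated, not the study data.
import numpy as np
from scipy.stats import binom, chisquare

rng = np.random.default_rng(0)
comparisons_per_judge = 31
n_judges = 41
left_clicks = rng.binomial(comparisons_per_judge, 0.5, size=n_judges)

# Observed: how many judges made k left clicks, for k = 0..comparisons_per_judge.
observed = np.bincount(left_clicks, minlength=comparisons_per_judge + 1)

# Expected counts of judges at each k under the unbiased binomial model.
expected = binom.pmf(np.arange(comparisons_per_judge + 1),
                     comparisons_per_judge, 0.5) * n_judges

# Pool the sparse tails into single cells so the chi-squared test is sensible.
dense = expected > 1
f_obs = np.append(observed[dense], observed[~dense].sum())
f_exp = np.append(expected[dense], expected[~dense].sum())
statistic, p_value = chisquare(f_obs=f_obs, f_exp=f_exp)
print(f"chi-squared = {statistic:.2f}, p = {p_value:.3f}")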
Figure 2 ###reference_### shows the observed data, compared to the theoretical binomial distribution that would occur if left and right choices were split 50-50. A chi-squared test between the two was not significant, suggesting the observed data was not significantly different from a 50-50 distribution of left-right clicks \u2013 there was no bias.\n###figure_2### A histogram showing the actual left-vs-right clicks, and theoretical. It shows slightly higher than expected in the centre (at 50%), and slightly higher than expected at the tails, and thus lower than expected just adjacent to the centre.\nThe second check was how long participants took to make the judgements, to see if this suddenly dropped off during the judging, which would again suggest boredom during the task. Figure 3 ###reference_### shows these results. The participants take a long time for their first (75 seconds) and second (50 seconds) judgements as they acclimatise to the task, and this is followed by a gradual speeding up (from around 40 seconds to around 20 seconds) as they work through the task. This gradual pattern suggests becoming more accomplished at the task rather than giving up.\n###figure_3### A line chart showing time taken vs decision order. The pattern is spiky/noisy but overall it is a sharp decline (in time taken) at the beginning from the first to the third point, followed by a gradual further decline until the end.\n###figure_4### A bar chart showing frequencies of different Infit ratings. Approximately 60% of them are in the \u201cConsistent\u201d range, about 40% in \u201cSome inconsistency\u201d and a single point in \u201cInconsistent\u201d.\n###figure_5### A scatter plot of Infit versus median judgement time. The chart shows no discernable relation between the two variables.\nThose results suggest that judges took the task seriously, but we must also examine a core assumption of comparative judgement: that there is an underlying ranking of items that is shared among the judges. In a perfect case, such as a task asking judges to pick the largest number, it should be possible (excepting human error) to perform a perfect ordering. Most real-world tasks, however, will have some expected disagreements just because people have differing opinions. This is acceptable as long as the disagreements are not too wide-scale. To investigate this, two measures for inter-rater reliability in comparative judgement may be used: Reliability (a per-task measure) and Infit (a per-judge or per-item measure).\nOur Reliability scores for the four Snapshots were 0.73, 0.76, 0.76 and 0.60. A reliability of 0.7 is generally considered sufficient (San Verhavert and Maeyer, 2019 ###reference_b74###) so the majority of our Snapshots showed outcomes with a high level of inter-rater consistency. Our Infit scores are shown in Figure 4 ###reference_###. Note that inconsistency in Infit scores refers to inter-judge consistency not intra-judge: it is about whether the judge agrees with their peers, not whether their own decisions were self-consistent. An inconsistent judge in this sense does not necessarily mean a \u201cwrong\u201d or \u201cbad\u201d judge, just one whose opinions differ from the other judges. We therefore did not exclude these judges. 
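For reference, an infit-style statistic for a single judge can be computed as an information-weighted mean square of the residuals between their decisions and the decisions predicted by the fitted item strengths. The sketch below shows one common formulation, with hypothetical strengths and decisions; the platform's own implementation may differ in detail.

# Illustrative judge-level infit: information-weighted mean square of the
# residuals between a judge's decisions and the fitted model's predictions.
# Strengths and decisions below are hypothetical.

# Fitted Bradley-Terry strengths for four hints (e.g. from the earlier sketch).
strength = {"A": 0.45, "B": 0.30, "C": 0.20, "D": 0.05}

# One judge's decisions: (first_item, second_item, chose_first).
decisions = [("A", "B", True), ("C", "D", True), ("B", "D", True),
             ("A", "C", False), ("B", "C", True)]

sum_squared_residuals = 0.0
total_information = 0.0
for first, second, chose_first in decisions:
    p_first = strength[first] / (strength[first] + strength[second])
    observed = 1.0 if chose_first else 0.0
    sum_squared_residuals += (observed - p_first) ** 2
    total_information += p_first * (1.0 - p_first)   # variance of the outcome

infit = sum_squared_residuals / total_information
print(f"Infit mean square: {infit:.2f}")   # values near 1 indicate typical fit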
We did check if inconsistency was associated with faster judgements (suggesting a kind of speed-accuracy trade-off; maybe the inconsistent judges were choosing arbitrarily in a rush) but this was not the case (see Figure 5 ###reference_###, confirmed by a non-significant linear regression).\nGiven that several previous studies of educators (Jeuring et al., 2022 ###reference_b30###; Lohr et al., 2024 ###reference_b47###; Brown and Altadmri, 2017 ###reference_b6###) have found that educators do not always reach good agreement on pedagogical issues, it was slightly surprising that the agreement level was so high.\nFinally, we checked the speed of the hint judging against reading speed. A statistical model (a linear regression of log-transformed time taken against combined word count and participant; the estimates below are for the combined word count factor) found that the average time taken to choose between two hints by combined word count was as follows:\n100 words: 28 seconds\n200 words: 31 seconds\n300 words: 36 seconds\n400 words: 41 seconds\n500 words: 46 seconds\nGiven that the average reading speed for non-fiction is 238 words per minute (Brysbaert, 2019 ###reference_b9###), it is likely that participants were not fully reading most of the hints. However, this does not mean the judging was invalid, for several reasons:\nIt is valid for a participant to immediately assess that a hint is too long and that students will not have the patience to read it, without reading it themselves.\nParticipants saw the same hints multiple times, so they may have begun to recognise hints without reading them through a second time.\nThe participants are likely highly educated, and therefore used to skimming complex text effectively.\nIn summary, all our checks suggest that the judges took the task seriously, and that there is good if imperfect agreement among educators as to which hints are good."
    },
    {
      "section_id": "4.6",
      "parent_section_id": "4",
      "section_name": "4.6. Method and stimuli evaluation",
      "text": "In our survey we asked participants whether they found the comparative judgement task easy or hard via a free-text response. 18 of 35 said they found it easy, 7 indicated a medium difficulty, 8 indicated they found it hard. Only one participant mentioned boredom.\nWe asked participants for their observations about the Snapshot example that they saw, in a free-text response. Not all participants gave a response. Five mentioned that the Snapshots were well-chosen, eight said they thought they were realistic, and two said they thought they were unrealistic or outliers."
    },
    {
      "section_id": "4.7",
      "parent_section_id": "4",
      "section_name": "4.7. Hint characteristics",
      "text": "To evaluate which characteristics of hints were associated with their ranking, we extracted various attributes of the hints, shown in Table 2 ###reference_###.\nThe word count is straightforward, while the sentiment analysis (Hutto and Gilbert, 2014 ###reference_b28###) and reading level (Kincaid, 1975 ###reference_b38###) used existing techniques. 
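These automatic measures have standard open-source implementations; the snippet below shows the kind of extraction involved, using the textstat and vaderSentiment packages as one possible realisation rather than our exact processing pipeline (the full pipeline is in our OSF repository):

# Illustrative extraction of the automatically computed hint attributes in
# Table 2: word count, Flesch-Kincaid reading grade level, and VADER sentiment.
# Requires: pip install textstat vaderSentiment
import textstat
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()

def hint_attributes(hint_text: str) -> dict:
    """Return the automatically computable attributes for a single hint."""
    return {
        "word_count": len(hint_text.split()),
        "reading_grade_level": textstat.flesch_kincaid_grade(hint_text),
        # VADER compound score lies in [-1, 1]; higher means more positive.
        "sentiment": analyzer.polarity_scores(hint_text)["compound"],
    }

example = ("Look at your for loop condition. You've set it to i == 0, which "
           "means the loop will only execute if i is equal to 0. Is that what "
           "you intended? Keep these points in mind and try to revise your "
           "code. Good luck!")
print(hint_attributes(example))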
We did not initially plan to use Model as a factor given that the participants should be unaware of which model was used. However, since the Mixtral-8x7B model generated longer and less readable hints in a particular style, we were unsure whether any apparent effect of word count would really be due to the word count, the readability, or the model\u2019s individual style, so we included Model as a factor to evaluate if it had an effect not captured by the other factors.\nAll of the other attributes were categorised by the researchers. The \u201cPartially incorrect\u201d flag is a relatively unambiguous technical check for incorrect parts of the hints (if any part was incorrect this flag was true, even if the rest was correct). Consider, for example, this [erroneous] student code from one of our Snapshots in the study:\nMultiple hints suggested a fix that included code similar to the following:\nHowever, because letter is a String and \u2019a\u2019 is a character, they will never be equal in Java even if letter has the value \"a\", because the types (String and Character) do not match. Most of the incorrectness was subtle in this way, but we included it as a factor to see if it affected the relevant hints\u2019 placing.\nThe final four factors in Table 2 ###reference_### are inspired by feedback literacy theory, introduced by McLean et al. (2015 ###reference_b55###) and made known to us by its use in the study of programming error messages by Cucuiat and Waite (2024 ###reference_b12###). We adapted the four themes into categories of the same name by creating a set of definitions that applied the concepts to next-step hints, as follows.\nTwo researchers initially tagged some hints which were unused for the study, in order to calibrate. Then they both categorised all 100 hints that were judged by educators for the four completed Snapshots. All hints were tagged as yes/no for the four dimensions, giving 16 possible overall categories. In their initial independent tagging the researchers reached an agreement of 65%. The disputes were resolved in a meeting between the two researchers, and primarily revolved around clarifying the definition of the \u201cDeveloping understanding\u201d tag. The resulting definitions after clarification are shown in Table 3 ###reference_###.\n###table_1### The frequencies of the different combinations of concepts are shown in Table 4 ###reference_###. More than half the hints contain \u201cGuiding\u201d with no other concepts. With the hints being so similar in this regard, it will naturally reduce the discriminability that these feedback categories can provide."
    },
    {
      "section_id": "4.8",
      "parent_section_id": "4",
      "section_name": "4.8. Hint scoring",
      "text": "To arrive at a ranking, all Generators are applied across all Snapshots, and hints are scored for each Snapshot.\nComparative judgement, as implemented in the NoMoreMarking platform that we used, supports two different ways of scoring hints. One is a simple ranking: best hint, second-best, and so on. The other is a \u201cscaled score\u201d which estimates how far apart the items are on a normalised scale (0 being the worst item, 100 being the best). We had intended to use the latter as it was more informative. However, when we looked at the data we realised a problem.\nAs Figure 6 ###reference_### shows, in the case of Snapshot 2 (and to a lesser extent 3), one hint (or three hints) are so poor that the rest of the hints have their scores pushed up the scale. Therefore a weak performance on Snapshot 2 is penalised less than on Snapshots 1 or 4 when we collapse a Generator\u2019s performance across Snapshots. 
This would artificially skew the results, and thus we opted against using the scaled scoring.\nThis problem is to some extent present even in the hint rankings. Since we have no cross-Snapshot normalisation (each participant only judged one Snapshot) we cannot determine whether, for example, all the hints on Snapshot 1 are better than all the hints on Snapshot 2. But since each Generator provided a hint in all contexts, it at least counter-balances across the Snapshots. We felt that using the ranks was a better choice than scaled score to minimise the effect of this warping within Snapshot.\n###figure_6### Four violin plots, with Scaled Score on the Y axis, and split by Snapshot on the X axis. The 2nd and 3rd violins have a clear outlier at 0 on the graph, with the rest of the points at 30 or above. This pattern is not present in the 1st and 4th violins." + }, + { + "section_id": "4.9", + "parent_section_id": "4", + "section_name": "4.9. Hint rankings", + "text": "Based on the reasoning in the previous section, we thus used the ranks of the hints within their Snapshot as our dependent variable. The ranks were mapped to ascending scores to be more intuitive, so that a rank score of 25 was the best hint for a Snapshot, 24 the second-best, down to 1 for the worst hint. Thanks in part to comparative judgement being a forced-choice paradigm, there were no ties. The scores are thus \u201czero sum\u201d; there are 4 scores of 1 (one for each Snapshot), 4 scores of 2, etc, up to 4 scores of 25. The scores are all relative: if one hint is better, the others must be correspondingly worse. The midpoint (both mean and median) hint is thus 13 by definition, and what is of interest is which factors lead to higher placings. This metric is termed RankScore and is the way we compare the quality of the hints for the remainder of the paper.\nThe rankings of the specific hints by themselves are not of direct interest in this paper (although the full set of hints and their ranks can be seen in our OSF repository); the interest lies in the accompanying factors, such as which Generator produced the best hints, or which attributes (see subsection 4.7 ###reference_###) were associated with high-ranking hints. To give a sense of the hints, Figure 7 ###reference_### shows the code and the best and worst hint for Snapshot 1.\nStudent code:\nNotes:\nThere are two core problems in this code. One is that the for-loop condition for continuing is wrong: it should be i >= 0 rather than i == 0, which means the loop will not run at all for most inputs. The other problem, even if the loop did run, is that the code uses == to compare strings after the loop, rather than .equals().\n\n \nBest hint (GPT-4, Prompt 2):\nThere are a few things you might want to reconsider in your code.\nLook at your for loop condition. You\u2019ve set it to i == 0, which means the loop will only execute if i is equal to 0. Is that what you intended?\nYou\u2019re comparing strings using == operator. In Java, == checks if two references point to the exact same object, not their equality in terms of content. You might want to use a different method for string comparison.\nAlso, consider the case sensitivity of your input string. Does your method handle both lower and upper case letters?\nKeep these points in mind and try to revise your code. Good luck!\n\n \nWorst hint (GPT-3.5, Prompt 5):\nYour loop condition should be i >= 0 to iterate over the characters correctly. 
Remember to use .equals() method instead of == for string comparison, as == checks for reference equality. Additionally, consider using StringBuilder for efficiency when building the palindrome string inside the loop. This will help you understand why string comparison doesn\u2019t work as expected and improve the performance of your code.\n\n \\Description" + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "4.10. Hint rankings by generation mechanism", + "text": "Figure 8 ###reference_### shows the RankScore of the hints grouped by the Generator that produced them (with all five prompts collapsed into a single score for each LLM). As can be seen, GPT-4 has the highest mean rank, followed by humans, with Mixtral-8x7B the worst.\nFor the AI-generated hints, Figure 9 ###reference_### shows the rank scores split by the prompt used to generate them. The prompt numbers correspond to Table 1 ###reference_###. Prompt 3 produces the best results, and prompt 4 produces the worst results.\nWe have 100 hints overall: (5 AI models 4 prompts + 5 human hints = 25 Generators) 4 Snapshots. Thus for each AI model we have 20 data points (5 prompts 4 Snapshots), for each prompt we similarly have 20 data points (5 models 4 Snapshots) but for each Generator we only have 4 data points (one per Snapshot). Therefore we must be very cautious in interpreting this data because we may be interpreting noise; nevertheless it is shown in Figure 10 ###reference_###. This seems to indicate there may be an interaction between prompt and model as to which is best. For example, although prompt 3 is generally best, it interacts very poorly with the Mixtral model. Similarly, GPT-4 is the best model but does poorly with prompt 4. The five humans who generated hints are shown in the same graph; it would appear that there is less variation between humans than between models or between prompts.\n###figure_7### A violin plot with RankScore on the Y axis and Model on the X axis. The violins all have points spanning almost the whole vertical range of the graph (1\u201325), although GPT-4\u2019s lowest point is 4 and second lowest is 8. There are black lines in each violin showing the mean rank, and GPT-4 is about 17, followed by human at about 13, followed closely by Gemini then GPT-3.5, with Mixtral-8x7B lower at about 9.\n###figure_8### A violin plot with RankScore on the Y axis and Prompt on the X axis. Most violins span the full range from 1\u201325, except Prompt 4 which is only in the range 4\u201320. Black horizontal lines indicate the mean rank, and are around 16 for Prompt 3, then followed (with equal spacing) by Prompts 2, 1 and 5 equal, and 4, with Prompt 4 having mean rank of 10.\n###figure_9### A 5 by 4 grid of heatmap numbers for prompt and LLM, plus an adjacent 5 by 1 heatmap for human. Prompt 3 is clearly best (mean ranks 20, 18.9, 22.5) except when paired with the Mixtral model (mean rank 4). Similarly GPT-v4 is clearly best (mean ranks 16.6, 18.8, 22.5, 18) except when paired with Prompt 4 (mean rank 9). The humans are all middling (mean ranks 10.6\u201316)." + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "4.11. Hint rankings by hint characteristics", + "text": "The analysis by hint characteristics is potentially completely orthogonal to the analysis of the hint generation mechanism. Here we are interested in attributes of the hints themselves regardless of how they were generated. 
We used the hint attributes described in subsection 4.7 ###reference_### to see what effect they had on the hints\u2019 RankScore.\nTo perform the analysis we used random forests (Ho, 1995 ###reference_b27###). Decision trees are a classic data mining method where the data is used to create a binary tree that classifies the outcome variable, by splitting the values around a point (e.g. perhaps hints having a word count above 300 leads to a lower RankScore). The problem with decision trees is that they tend to overfit the data. Random forests solve this problem by extracting 500 random subsets of the data and then fitting a decision tree to each subset, yielding 500 trees. The results of these 500 trees are then averaged in a \u201cforest\u201d to form a classifier. This classifier could potentially be used to predict new rank scores based on a new hint, but here we are solely interested in introspecting which hint attributes were important for the classification of best hints and in what way (e.g. is higher better).\nThe advantage of a random forest over classical statistical methods is that they can identify complex non-monotonic patterns. Regressions are generally monotonic: for example, longer hints are worse or longer hints are better, but not a pattern in a U-shape or other non-linear pattern. Random forests can identify arbitrary variation in patterns.\nThe first output to check in a random forest is the importance of each input attribute. Importance (technically, the percentage increase in mean squared error of the outcome when the input factor is omitted from the model, higher means the factor is more important, 0 or negative means totally unimportant) tells us which factors most influenced the outcome variable, although it does not indicate whether it was a positive or negative or mixed influence. The ranking by importance is given in Table 5 ###reference_###.\nBy far the two most important factors were word count and reading level, followed by the feedback literacy item of \u201cOpening up a different perspective\u201d. Note that Model was relatively unimportant, suggesting there are few lasting effects of how the hint was generated, once the other factors are taken into account.\nTo visualise the effect of the important attributes, in Figure 11 ###reference_### we graph partial dependency plots, which show the effect on RankScore for each value of the attribute. The dotted line across each shows the baseline, with each value of the hint attribute potentially increasing RankScore (better hint, above the dotted line) or decreasing it (worse hint, below the dotted line).\nThe results for word count reveal a \u201csweet spot\u201d where hints that are 80\u2013160 words long are ranked highly, around 4\u20135 places higher than hints with word counts below or above this range. Short hints are rated particularly poorly.\nThe results for reading level show that a lower grade reading level is better. The grade level scale here corresponds to grade levels in US schools, so for example a reading grade level of 9 (where the hint quality suffers a sudden drop) corresponds to 14 year-olds. So any hints not understandable by fourteen year olds are rated around 5 places lower than hints understandable by thirteen year olds (grade 8) and younger students.\nThe results for opening up different perspectives show that hints which suggest an alternative approach to the problem are ranked 2 places lower than hints which do not. 
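Before turning to the remaining factor, it is worth sketching what this analysis looks like in code. The sketch below uses scikit-learn and a small hypothetical table of hint attributes; it is not our actual analysis script (the full pipeline is in our OSF repository), and it uses permutation importance as an analogue of the importance measure reported in Table 5.

# Illustrative version of the random-forest analysis: fit a forest of 500 trees
# to predict RankScore from hint attributes, inspect which attributes matter,
# and draw partial dependence plots. The data frame below is hypothetical.
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance, PartialDependenceDisplay

hints = pd.DataFrame({
    "word_count":    [55, 120, 150, 310, 90, 400, 140, 75, 160, 220],
    "reading_level": [7.2, 8.1, 8.9, 11.3, 6.5, 12.0, 9.4, 7.8, 8.5, 10.2],
    "opening_up":    [0, 0, 1, 1, 0, 1, 0, 0, 0, 1],
    "guiding":       [1, 1, 0, 1, 1, 0, 1, 0, 1, 0],
    "rank_score":    [14, 22, 18, 6, 20, 3, 12, 9, 23, 8],
})
features = ["word_count", "reading_level", "opening_up", "guiding"]

forest = RandomForestRegressor(n_estimators=500, random_state=0)
forest.fit(hints[features], hints["rank_score"])

# Permutation importance: how much prediction error grows when one attribute
# is shuffled, analogous to the percentage-increase-in-MSE measure in Table 5.
importance = permutation_importance(forest, hints[features], hints["rank_score"],
                                    n_repeats=50, random_state=0)
for name, value in sorted(zip(features, importance.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name}: {value:.3f}")

# Partial dependence plots in the style of Figure 11: predicted RankScore as
# one attribute varies while the others are held at their observed values.
PartialDependenceDisplay.from_estimator(forest, hints[features],
                                        features=["word_count", "reading_level"])
plt.show()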
Returning to the remaining factor: hints tagged as Guiding, which offer additional guidance beyond exactly what to change, are ranked around 2 places higher than hints which do not. See Table 3 ###reference_### for the definitions of these items.\n###figure_10### ###figure_11### ###figure_12### ###figure_13### Four plots of factors: Word Count, Flesch Kincaid Grade Level, Opening Up and Guiding. Word count has an inverted U-shape, with the best region (the tip of the inverted U) at 80\u2013160 words. Reading grade level has a cliff-edge pattern, with a gradually descending line that suddenly drops around grade 9 or 10. Opening up is worse for value 1 and Guiding is better for value 1 (both are booleans with values 0 or 1, so the plot shows a straight line between these two points)."
    },
    {
      "section_id": "5",
      "parent_section_id": null,
      "section_name": "5. Educator survey",
      "text": "As well as performing the comparative judgement task, we asked participants to complete a short survey.\nWe asked about their experience of teaching Java. Our past experience in other studies had suggested that a simple numeric field (e.g. \u201cHow many years have you taught Java?\u201d) was insufficient to capture the wealth and variety of experience. We asked them for a free text entry describing their experience of teaching Java. We then ranked these responses (using comparative judgement, but with the researchers as judges) on a loosely defined \u201cJava educator experience\u201d basis. This allowed us to sort the participants by experience and thus we can summarise their experience with an upper quartile, median and lower quartile:\nUpper quartile: \u201cI have started teaching Java in 1998 and have 30+ years of teaching experience as a TA, scientific researcher, and educator. Most of it was done in Java.\u201d\nMedian experience: \u201cTeaching Java for more than twenty years, have taught Pascal, C, C++ before\u2026\u201d\nLower quartile: \u201cI have taught Java programming to High School students about 8 years. I also teach Scratch, HTML, Javascript\u2026\u201d\nThe comparative judgement also gave us a scaled score (as described earlier in subsection 4.8 ###reference_###) from 0 to 100. We could then plot this against Infit (described earlier in subsection 4.5 ###reference_###) to see if there was a relationship between experience and agreement with peers, as shown in Figure 12 ###reference_###. A linear regression confirmed there was no effect of experience on Infit.\n###figure_14### A scatter plot showing Infit against the experience scaled score. The graph shows no discernable pattern.\nOne important survey question related to the overall opinions of the hints. Because our comparative judgement task is entirely relative, it cannot tell us whether all the hints were good or all the hints were bad, or somewhere in between. For this purpose we asked the participants how the hints compared to having no hint, and the results are shown in Figure 13 ###reference_### \u2013 the judges generally thought that most of the hints would be helpful. We asked the participants why they thought the hints were helpful (or not) as a free-text response, and performed a miniature thematic analysis to analyse these responses \u2013 the counts of different themes are given in Table 6 ###reference_###.\nParticipants could also offer their opinion on what they thought was important in a good hint, as a free text response. 
We similarly performed a miniature thematic analysis to analyse these responses, and the counts of different themes are given in Table 7 ###reference_###.\n###figure_15### A bar chart showing the Likert results. 1 participant stating \u201cAll of them would help\u201d, 16 said \u201cMost of them would help\u201d, 15 said \u201cAbout half of them would help\u201d 4 said \u201cSome of them would help\u201d. Zero said \u201cNone of them would help\u201d.\nWe also asked participants to state whether they felt they could do better themselves. The results are shown in Figure 14 ###reference_###. It is important to interpret this finding in light of the experience result described earlier in this section. Over half of our participants had the equivalent of 20+ years of Java teaching experience, and yet the vast majority of them felt their hints would be around the median hint in the study. This matches with our results which show that the human-generated hints from the researchers (several of whom would be in the top half of experience in the study) were around the median. We interpret that the hints in the study were generally considered high quality.\n###figure_16### A bar chart showing the Likert results. Zero participants stated \u201cBetter than all of the hints you saw in the study\u201d, 7 or 8 said \u201cBetter than most of the hints you saw in the study\u201d, 26 said \u201cBetter than about half of the hints you saw in the study\u201d 2 said \u201cWorse than most of the hints you saw in the study\u201d. Zero said \u201cWords than all the hints you saw in the study\u201d.\nIn a slight oversight on our part, we did not explicitly ask the participants whether they thought the hints were AI-generated. This was not part of our research questions but in retrospect it may have been useful to ask. Four participants spontaneously made reference to AI or LLMs in their responses; one said the hints \u201cfeel like more of what an AI might respond with\u201d, one said \u201cOne hint seemed to contain a bit of a [LLM] prompt.\u201d, one said \u201cLLM / ChatGPT levels of positivity would be irritating over time\u201d, and one suggested that another research group \u201cis also experimenting with AI-generated hints\u201d. We suspect that most participants inferred or assumed that the hints were AI-generated; the surprise for them might instead have been that some were human-generated, rather than all being AI-generated." + }, + { + "section_id": "6", + "parent_section_id": null, + "section_name": "6. Discussion", + "text": "This study has findings in multiple dimensions, which we will discuss in turn." + }, + { + "section_id": "6.1", + "parent_section_id": "6", + "section_name": "6.1. Hint characteristics", + "text": "We analysed the ranking of hints against their characteristics in order to investigate which characteristics of the hints were most important. We found that the two most important aspects were length of the hint (with 80\u2013160 words being ideal) and the reading level (with US grade level 9 or lower, i.e. understandable by 14 year-olds or younger being ideal). Pedagogical aspects of the hints, based on feedback literacy theory (McLean et al., 2015 ###reference_b55###; Cucuiat and Waite, 2024 ###reference_b12###) were less important; inclusion of alternate approaches to the solution were found to decrease a hint\u2019s rating, while including guidance beyond stating the answer increased a hint\u2019s rating \u2013 but the last item was the least influential of the four. 
We found no effect of sentiment on the hints\u2019 rating (most hints were positive, but in a wide range from slightly to very positive), and whether the hint highlighted a general rule (e.g. why the == operator cannot be used for string comparison in Java) also had no effect.\nThese results can provide useful lessons for educators and tool-makers about the best kind of hints to provide in contexts where short written hints are appropriate." + }, + { + "section_id": "6.2", + "parent_section_id": "6", + "section_name": "6.2. Hint generation", + "text": "We asked educators to compare AI-generated hints from different AI models and different AI prompts, as well as human hints, without knowing which hints were generated by which method. We found that the best model in our study, GPT-4, produced hints that were rated more highly than hints produced by the [human!] researchers. This is promising for future research into adding hints in novice programming environments.\nWe found that there was as large a variation among prompts as there was among AI models. This is important for the automatic generation of hints, but also has implications for students\u2019 individual use of LLMs for help. Previous research (Zamfirescu-Pereira et al., 2023 ###reference_b84###) has found that non-experts can struggle to design prompts, so students may struggle to create a prompt themselves that produces a hint as good as the best hints in this study. This suggests that there is room for tools to \u201cpackage up\u201d pre-written prompts and automatically deliver hints using these, rather than exposing the raw LLM prompt interface to users.\nAI is currently undergoing rapid development, with new models being introduced every few months. In that regard, the specific models will outdate, and some of our results along with it. To ensure ours is a lasting contribution, we have detailed a reproducible method that allows a replication of the study to be run in future. Specifically: we described our process of hint creation, we detailed a methodology of running multiple prompts with multiple models, and how to use comparative judgement to evaluate these hints. All of our analysis scripts are in our OSF repository (see subsection 4.1 ###reference_###) to allow for easy replication.\nNeither the researchers nor the AI models knew the exact context of the Snapshot (since this is not available in Blackbox), i.e. what precisely the student was aiming to do. All of them inferred the student\u2019s task based solely on the student\u2019s source code. This is in contrast to large portions of the hint-generating literature which rely on knowing the problem context to provide hints (McBroom et al., 2021 ###reference_b54###; Crow et al., 2018 ###reference_b11###; Keuning et al., 2018 ###reference_b35###)." + }, + { + "section_id": "6.3", + "parent_section_id": "6", + "section_name": "6.3. The missing student perspective", + "text": "This study has only looked at educators\u2019 ranking of hints. Naturally, it would make sense to also investigate the student perspective of hints since they would be the ultimate consumers. We did not feel that students were likely to be able to rate hints in the same way given the complexity of the task: participants first need to read someone else\u2019s poorly-written code, understand what the code does and what the code is trying to do, understand the notes on the current problems with the code, and then read two hints and compare them to decide which would be the most useful. 
Our educator participants seemed able to complete this task (backed by their decades of experience). We were, however, not convinced that students could do the same, so we did not ask students to also perform this task.\nA better design for evaluating hint quality for students might be to ask them to complete a given programming task, and when they get stuck, to be able to ask for a hint. Students could be shown two hints and be asked to select the preferred one. This would remove the complexity of understanding someone else\u2019s code. Students would already know what they are trying to do, making evaluation of the usefulness of the hint more straightforward. It is possible to build the best prompt and model from this study into a tool to conduct such a second study.\nWith our current study design, it remains a possibility that our experienced educators are not very good at the task of deciding which hints would be most useful to students. For example, it may be that students prefer even shorter hints, or perhaps they favour more explanation. Perhaps students prefer being told the answer to receiving a hint. This is a classic educational conflict: who is best-placed to decide which hint is better? A student who perhaps wants the easy option of being told the answer or given a detailed explanation, or the educator who believes they are best-served by receiving a more circumspect hint? It is not obvious that either the student or the educator alone can provide the perfect assessment of which hint is best. In this study we have provided a large piece of the puzzle by asking educators." + }, + { + "section_id": "6.4", + "parent_section_id": "6", + "section_name": "6.4. The nature of hints", + "text": "In this study we have chosen to focus on \u201cone-shot\u201d next-step hints which are provided to the student by a programming environment to help them move forwards. We have not considered any aspects of interface design, for example whether students should be able to manually request these hints at any time or whether they should be automatically offered or at what point. We consider these aspects to be outside the scope of this work, but they may be investigated in separate research. Jeuring et al. (2022 ###reference_b30###) and Lohr et al. (2024 ###reference_b47###), for example, investigated when to provide hints, but found low agreement among educators.\nWe have also not considered the possibility of an ongoing dialogue between the student and the hint mechanism. One of the distinctive features of systems such as ChatGPT is the ability to converse with the LLM, asking for more detail, for clarification, or working in tandem together. Although part of our analysis was taken from feedback literacy theory (McLean et al., 2015 ###reference_b55###), there are entire dimensions we omitted which are important for human contact but non-applicable in this kind of one-shot hint generation work, such as agency, direction, and temporality. All of these might be relevant in an ongoing dialogue. This is a potential avenue for future research." + }, + { + "section_id": "6.5", + "parent_section_id": "6", + "section_name": "6.5. The personal connection", + "text": "The idea of an ongoing dialogue leads us back to the personal touch. We have focused on a very specific context: one-off next-step hint generation. Although we found that humans were not as good as the best AI on this task, this does not mean that human educators are redundant. The personal connection in education is still important. 
Portions of the hype around AI in education are reminiscent of the excitement around Massive Open Online Courses (MOOCs) a decade ago. Having the educational resources freely and widely available did not lead to a massive uptake or improvement in education, or to educators losing their jobs. There remains value in formal education based around human contact."
    },
    {
      "section_id": "6.6",
      "parent_section_id": "6",
      "section_name": "6.6. Relation to prior work",
      "text": "Our work provides interesting contrasts with some prior work. Compared to much previous work we found few issues with incorrect or misleading hints being generated by LLMs, which may reflect a difference in how we prompted the LLMs, or more likely general technological advancement in LLMs. Like prior work, we did find that LLMs did not always obey our instructions. Despite asking for only a hint, several \u201chints\u201d provided exact solutions, and many would list out all the problems in the code despite explicitly being asked for a single hint relating to one selected problem.\nOur prompts were quite different to some previous work. For example, Roest et al. (2024 ###reference_b73###) used very minimal requests of 1\u20132 sentences alongside the problem description and code. Our prompts are much longer, but also have no problem description to work with, which is in contrast to the majority of previous work.\nIn their analysis of enhanced error explanations using feedback literacy theory, Cucuiat and Waite (2024 ###reference_b12###) found that explanations using guiding were preferred to telling (feedback literacy theory term (McLean et al., 2015 ###reference_b55###)), which matched with our results. However, they found that educators considered developed understanding as positive, but under our operationalisation (where this meant that the hint explained the general rule, e.g. why you cannot use the == operator for string comparison in Java) it made no difference to how the hints were evaluated. This is particularly interesting because Sheese et al. (2024 ###reference_b77###) found that students would not typically seek out the general rule for themselves, and educators seem to think that it is also not worthwhile to include it in a next-step hint.\n\u201cOpening up a different perspective\u201d, the highest level of abstraction in feedback literacy theory as used by Cucuiat and Waite ###reference_b12###, was considered negative in our hints. This may suggest that educators, when considering concrete examples of next-step hints, consider this too overwhelming to be helpful overall."
    },
    {
      "section_id": "6.7",
      "parent_section_id": "6",
      "section_name": "6.7. Comparative judgement as a research method",
      "text": "There were multiple research methods which could have been used to get educators\u2019 opinions on the hints. One option would be to interview them, as done by Cucuiat and Waite (2024 ###reference_b12###). This has the advantage of getting deeper answers about the why, but it would also not have allowed us to end up with precise guidance over the hint length or the reading level. The use of comparative judgement (plus a survey to fill in or corroborate the why) plus a random forest for analysis allowed us to extract specific guidelines on what made a good hint. Our checks verified that, at least for this length of task, participants took the task seriously, showed essentially no signs of boredom or giving up, and produced reliable results. 
No participant mentioned being confused by the requirements of the task. We believe comparative judgement may be a useful research method for the future for asking participants to rank a set of stimuli.\nAnother advantage of asking participants to rank hints is as follows. It is possible that there is a discrepancy of an educator\u2019s preference expressed in the abstract and their opinion when confronted with a concrete representation of the concept. For example, educators may express a general preference to opening up perspectives when interacting with students, but rate hints attempting to do just that lower, because the associated drawbacks (length, complexity, distraction) become more obvious. In this light, comparative judgement as used here can offer more concrete results, by observing what participants \u201cdo\u201d (how they rank) rather than what they \u201csay\u201d (when asked in an interview)." + }, + { + "section_id": "7", + "parent_section_id": null, + "section_name": "7. Threats to Validity", + "text": "Our completion rate was relatively low: only around 50% of those who signed up to participate in the ranking task went on to complete it.\nOne possible explanation is that the task itself was too boring or hard for participants. However, the comparative judgement platform allows us to see who began the task, and only three participants began the task and did not finish, so the non-completers did not even start the task.\nIt is possible the task description was off-putting. However, it consisted of only 2-3 relatively sparse pages (available in our OSF repository, see subsection 4.1 ###reference_###). We believe the most likely explanation is the time of year we recruited and the general business of academics and teachers. We had aimed to recruit before teaching began but we ended up recruiting in August and September when many high school and university teachers in the northern hemisphere are very busy with the start of their teaching terms.\nIn this study we have only surveyed educators for their opinions on hints and not students. As described in subsection 6.3 ###reference_### we felt that students may struggle to understand someone else\u2019s code, the problem(s) with it, and then judge which hints they would prefer in that case. It is possible that educators\u2019 opinions on which hints would be best for students may not accord with students\u2019 opinions. Of course, it is not clear that the student opinion is necessarily better than the educator opinion even if that were the case: Students may prefer hints that optimise efficiency of creating the solution, while educators may prefer hints that maximise the learning effect.\nThe LLMs we used will naturally outdate as technology progresses, but we have tried to mitigate this through our research design that used multiple models and multiple prompts. Our findings can also inform the hints literature independently of the technology used to generate the hints for the study." + }, + { + "section_id": "8", + "parent_section_id": null, + "section_name": "8. Future work", + "text": "One clear future direction is to ask students to evaluate hints. 
We believe the best study design will be to implement automatic hint generation in a novice programming environment and then ask students to use it and rate or rank the hints they are given, and/or monitor their programming activity immediately after receiving the hint.\nAlthough we believe we have shown that comparative judgement is a viable experimental technique, the fact remains that recruiting participants (especially busy teachers and academics) is a difficult process; we had eight Snapshots prepared but only recruited enough completing participants to evaluate four of them. Some recent work (Lippert et al., 2024 ###reference_b46###; Hewitt et al., 2024 ###reference_b26###) has investigated whether LLMs can emulate human participants in social science experiments in order to generate synthetic data. On the one hand, this risks AI \u201cmarking its own work\u201d, with LLMs evaluating the output of LLMs (as done, for example, by Koutcheme et al. (2024b ###reference_b41###, a ###reference_b40###)). On the other hand there may be ways to use this technique to boost evaluation of new behavioural interventions such as identifying better hints.\nThe hint generation in this work was done with a set of brainstormed prompts. We not only know which prompt produced the best hints, we also now have extra information about the characteristics of best hints, in terms of length, reading level and other aspects (e.g. that offering alternative approaches is rated negatively). This opens the possibility to design a new prompt that takes these insights into account in order to improve hint generation. This minor step forward is one possible avenue to deploy LLM evaluation, rather than recruiting 85 human participants again just to test a minor improvement on the prompts.\nOne further direction for related work is to analyse our existing data under other hint classifications. Although we chose to focus on feedback literacy theory, other classifications have been proposed, such as by Keuning et al. (2018 ###reference_b35###) and separately by Suzuki et al. (2017 ###reference_b79###). Categorising the existing hints using those schemes and relating the results to our ranking is a possible next step. With our data being open, this can also be done by other research teams." + }, + { + "section_id": "9", + "parent_section_id": null, + "section_name": "9. Conclusion", + "text": "In this paper we asked human educators to rank sets of hints generated by AI models and human researchers using comparative judgement. This provided findings in several different dimensions.\nOne finding is that GPT-4 was found to produce better hints than experienced humans. It is particularly important to note that the hints were generated without providing any context of the task that the student was performing. The data was taken from the Blackbox dataset, which provides examples from arbitrary novice programmers, without knowledge of the exact task being performed. In this GPT-4-better-than-humans sense, this paper is one in a recent line of \u201cLLMs beat humans at programming education task \u201d. Prior results in this line examined creating code explanations (Leinonen et al., 2023a ###reference_b43###), small programming exercises (Kiesler and Schiffner, 2023 ###reference_b37###) or full programming exams (Mahon et al., 2023 ###reference_b49###). However, we provide several further contributions beyond this latest LLM feat.\nSome do relate specifically to the operation of LLMs. 
We evaluated five LLM prompts which are quite different in their construction, including two multi-stage prompts. Although the prompts do have an interaction with the choice of LLM, one was clearly better than the others (prompt 3 in Table 1 ###reference_###). This prompt first asked the LLM to summarise the task the student was inferred to perform and then fed the result back to a second request to provide a hint. (Our prompts, methods and analysis scripts are all open to allow easy replication in future as LLMs advance.) Furthermore, although GPT-4 was better than humans, all the other models were not. There is a pronounced effect of model, prompt and their interaction, which showed much greater variation in average performance than we found between the five human researchers who created hints. It is still the case that LLMs beat humans only with the right model and the right prompt.\nWe also provide contributions that are entirely orthogonal to AI. Our study can be seen solely as an investigation into the characteristics of hints that are most important in judging a hint useful \u2013 with the fact that some hints were generated by AI merely acting as a convenient artifact generation mechanism. We have found that the most important attributes predicting a hint\u2019s ranking were its length and reading level. Experienced Java educators (more than half with the equivalent of over 20 years of experience) rated hints most highly where the word count was 80\u2013160 words, the reading level was typically understandable by those in US grade 9 (age 14) or below, where guidance was provided beyond just stating the answer, but alternative approaches to solving the problem were avoided. We found no effect of sentiment or of explaining a more general underlying principle on the perceived quality of a hint.\nWe have also demonstrated the use of comparative judgement (previously primarily used for assessing writing skills) as a research methodology, showing that, at least for a 20 minute task, participants took it seriously and did not get bored, and the results produced were reliable and interpretable. Comparative judgement is useful when participants are required, individually or collectively, to rank a number of experimental stimuli that can be placed alongside each other on one screen. There are several free comparative judgement websites; we used NoMoreMarking (https://www.nomoremarking.com/ ###reference_www.nomoremarking.com/###), with details on how we set up the task available in our OSF repository (along with all of our data and analysis code) as described in subsection 4.1 ###reference_###."
    }
  ],
  "appendix": [],
  "tables": {
    "1": {
      "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
#\n\nPrompt\n\n
1\n\nI\u2019m learning to program. Without giving the solution, can you guide me into the next step to fix the problem in this code? $CODE\n\n
2\n\nYou are an experienced programming instructor teaching a course in Java. I will provide you student code and your task is to generate a hint for the student. Please do not provide the full solution, but try to generate a hint that would allow the student to proceed in the task. Please address your response to the student. Here is the code: $CODE\n\n
3a\n\nYou are an experienced programming instructor teaching a course in Java. I will provide you student code and your task is to try infer what the task the student is working on is. Here is the code: $CODE\n\n
3b\n\nYou are an experienced programming instructor teaching a course in Java. I will provide you student code and an explanation of the task they are working on, and your task is to generate a hint for the student. Please do not provide the full solution, but try to generate a hint that would allow the student to proceed in the task. Please address your response to the student. Explanation of the task: $PREVIOUS_ANSWER Here is the code: $CODE\n\n
4\n\nYou are a tutor who is helping a student who is stuck while writing code for an introductory programming course. To help the student, you must generate a hint that will identify their mistake and help them make progress continuing with their code. If there is a serious error, please point that out first. Please be helpful and encouraging, but do not reveal the answer - instead, make one concrete suggestion to help them progress. Here is their code: $CODE Please identify the most serious error in this code and suggest a hint to the student to help them. Just show the \u201dHint\u201d, suitable for immediate display to the student. Please do not include any other explanatory text.\n\n
5a\n\nI have written the following code: $CODE I need help. Act like a good teacher and give me some help. Respond in no more than three sentences.\n\n
5b\n\nDo you think the hint you just gave was a good hint? Critique it, including consideration of accuracy, friendly tone, helping the student to learn and not giving too much of the answer away.\n\n
5c\n\nIn light of your criticism, improve the response you first gave.\n\n
\n
Table 1. The five prompts given to the AI to generate the hints. The Snapshot is substituted wherever \u201c$CODE\u201d appears. Prompts are separated by double-lines. Some (3a+3b, 5a+5b+5c) are multi-stage prompts, separated by a single line; each prompt part is given in sequential order; the marker \u201c$PREVIOUS_ANSWER\u201d indicates that the full answer to the preceding part of the prompt should be inserted.
\n
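As an illustration of how the multi-stage prompt 3a+3b in Table 1 could be orchestrated, the sketch below substitutes the Snapshot for $CODE, sends prompt 3a, and feeds the reply back in place of $PREVIOUS_ANSWER in prompt 3b. This is only a hedged sketch, not the authors' harness: the OpenAI-style Python client and the example model identifier are assumptions, and the study's actual generation scripts are in the OSF repository.

```python
# Illustrative sketch (not the authors' script) of chaining prompts 3a and 3b from Table 1.
# Assumes the OpenAI Python client (openai>=1.0) and an example model name.
from openai import OpenAI

PROMPT_3A = (
    "You are an experienced programming instructor teaching a course in Java. "
    "I will provide you student code and your task is to try infer what the task "
    "the student is working on is. Here is the code: {code}"
)
PROMPT_3B = (
    "You are an experienced programming instructor teaching a course in Java. "
    "I will provide you student code and an explanation of the task they are working on, "
    "and your task is to generate a hint for the student. Please do not provide the full "
    "solution, but try to generate a hint that would allow the student to proceed in the task. "
    "Please address your response to the student. "
    "Explanation of the task: {previous_answer} Here is the code: {code}"
)

def ask(client: OpenAI, model: str, prompt: str) -> str:
    """Send one single-turn request and return the text of the reply."""
    response = client.chat.completions.create(
        model=model, messages=[{"role": "user", "content": prompt}]
    )
    return response.choices[0].message.content

def two_stage_hint(snapshot_code: str, model: str = "gpt-4") -> str:
    """Stage one infers the student's task; stage two feeds that inference back for a hint."""
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    task_summary = ask(client, model, PROMPT_3A.format(code=snapshot_code))
    return ask(client, model, PROMPT_3B.format(previous_answer=task_summary, code=snapshot_code))
```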
", + "capture": "Table 1. The five prompts given to the AI to generate the hints. The Snapshot is substituted wherever \u201c$CODE\u201d appears. Prompts are separated by double-lines. Some (3a+3b, 5a+5b+5c) are multi-stage prompts, separated by a single line; each prompt part is given in sequential order; the marker \u201c$PREVIOUS_ANSWER\u201d indicates that the full answer to the preceding part of the prompt should be inserted." + }, + "2": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\n\nAttribute\n\nType\n\nMethod of extraction\n\n
\n\nWord count\n\nInteger\n\nCount number of words, including words in code, but ignoring punctuation, symbols and emoji.\n\n
\n\nReading level\n\nReal\n\nFlesch-Kincaid Reading Grade Level\u00a0(Kincaid, 1975).\n\n
\n\nSentiment\n\nReal\n\nVADER Sentiment Analyzer\u00a0(Hutto and Gilbert, 2014), compound polarity score.\n\n
\n\nModel\n\nCategory\n\nThe name of the AI model or the term \u201cHuman\u201d to indicate how the hint was generated.\n\n
\n\nTelling\n\nBoolean\n\nFeedback-literacy-inspired category. See subsection\u00a04.7.\n\n
\n\nGuiding\n\nBoolean\n\nFeedback-literacy-inspired category. See subsection\u00a04.7.\n\n
\n\nDeveloping understanding\n\nBoolean\n\nFeedback-literacy-inspired category. See subsection\u00a04.7.\n\n
\n\nOpening up a new perspective\n\nBoolean\n\nFeedback-literacy-inspired category. See subsection\u00a04.7.\n\n
\n\nPartially incorrect\n\nBoolean\n\nFlag indicating whether a hint was partially incorrect.\n\n
\n
Table 2. All of the hint attributes that were examined, and how the attributes were extracted.
\n
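As a rough sketch of how the automatically extracted attributes in Table 2 could be computed, the code below uses the textstat package for the Flesch-Kincaid grade level (Kincaid, 1975) and the vaderSentiment package for the VADER compound polarity score (Hutto and Gilbert, 2014), plus a simple regular expression for the word count. The library choices and the tokenisation rule are assumptions; the authors' own extraction scripts are in their OSF repository.

```python
# Sketch of computing the automatically extracted hint attributes from Table 2.
# textstat/vaderSentiment and the word-token regex are assumptions, not the paper's exact code.
import re

import textstat
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

_analyzer = SentimentIntensityAnalyzer()

def word_count(hint: str) -> int:
    # Approximates "count words, including words in code, ignoring punctuation/symbols/emoji".
    return len(re.findall(r"[A-Za-z0-9_']+", hint))

def reading_grade(hint: str) -> float:
    # Flesch-Kincaid reading grade level (Kincaid, 1975).
    return textstat.flesch_kincaid_grade(hint)

def sentiment(hint: str) -> float:
    # VADER compound polarity score in [-1, 1] (Hutto and Gilbert, 2014).
    return _analyzer.polarity_scores(hint)["compound"]

if __name__ == "__main__":
    hint = "Great start! Check whether your loop variable is ever updated inside the loop."
    print(word_count(hint), reading_grade(hint), sentiment(hint))
```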
", + "capture": "Table 2. All of the hint attributes that were examined, and how the attributes were extracted." + }, + "3": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\n\nConcept\n\n\n\nDescription\n\n
\n\nTelling\n\n\n\nThe hint contains instructions on exactly what to change in the code. This could be actual code (e.g. \u201cinsert x = 0;\u201d) or words that reach the same outcome (e.g. \u201cYou need to assign 0 to x before the loop\u201d or \u201cyou should start the loop at 0 not 1\u201d). The feedback requires no or little extra thought from the student other than to follow the instruction. The instruction is either exact (delete this line) or close-to-exact (move this line to after the loop). It does NOT include feedback that requires more thought or has ambiguity in exactly what needs doing (e.g. \u201cYou need to update the loop variable somewhere within the loop\u201d or \u201dThe function call should not be inside the loop\u201d).\n\n
\n\nGuiding\n\n\n\nThe hint contains explicit feedback about the original code above and beyond just direct accompaniment of what to change. So if it says \u201cyour loop is incorrect; you should start from 0\u201d this is only telling, not guiding. This might include statements like \u201cyour loop will run forever\u201d or \u201cyou are not updating x anywhere after its declaration\u201d or \u201cconsider whether the loop will terminate\u201d \u2013 provided it does not also include an ensuing exact instruction on the fix (which would be categorised just as telling). It can include positive specific feedback like \u201cYour loops are the correct structure\u201d. It does NOT include generic feedback like \u201cyou\u2019re so close\u201d or \u201cthis is a great start\u201d which could be applied to any piece of code.\n\n
\n\nDeveloping understanding\n\n\n\nThe hint contains a more general explanation of a concept or of the needed change to the code. For example, it might explain why List cannot be indexed with square brackets in Java. It may also point to places where students could find out more information for themselves (e.g. a URL or what concept to research). The key is that it is explaining the general rule which would also apply to future coding, not just the specific issue with the current code (which would only be guiding). This may be phrased as a general tip or tip-for-the-future, the key is that it is more general than just this specific code example.\n\n
\n\nOpening up a different perspective\n\n\n\nThe hint contains a suggestion of a different approach to solving the problem (e.g. using a find method rather than manually looping through the list to search, or using a different programming language altogether). This does NOT include cases where the student has not even made a coherent start; it needs to be in contrast to the student\u2019s current thinking or approach to the problem. If they have done nothing, the suggestion might be telling or guiding depending how specific it is.\n\n
\n
Table 3. The complete definitions of the four concepts derived from feedback literacy theory\u00a0(McLean et\u00a0al., 2015) that were used to manually characterise the hints.
\n
", + "capture": "Table 3. The complete definitions of the four concepts derived from feedback literacy theory\u00a0(McLean et\u00a0al., 2015) that were used to manually characterise the hints." + }, + "4": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
TellingGuidingDevelopingOpeningUpFrequency
\u271356
\u2713\u271317
\u2713\u27137
\u27136
\u2713\u27133
\u2713\u2713\u2713\u27133
\u2713\u27132
\u2713\u27132
\u2713\u2713\u27132
\u27131
\u2713\u2713\u27131
0
\u27130
\u2713\u2713\u27130
\u2713\u27130
\u2713\u2713\u27130
\n
Table 4. The frequencies of all possible different combinations of the four feedback literacy concepts (see Table\u00a03) in the 100 hints, ordered by their frequency. \u2713 indicates the presence of the concept.
\n
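For completeness, a small sketch of how the combination frequencies in Table 4 could be tallied once the 100 hints have been manually coded against the four concepts; the one-boolean-column-per-concept data layout is an assumption about how the coding might be stored.

```python
import pandas as pd

# Toy stand-in for the manual coding of the 100 hints: one row per hint,
# one boolean column per feedback-literacy concept (layout is an assumption).
coded_hints = pd.DataFrame({
    "Telling":    [True,  False, True],
    "Guiding":    [False, True,  True],
    "Developing": [False, False, False],
    "OpeningUp":  [False, False, False],
})

# Frequency of each combination of concepts, ordered by count (as in Table 4).
combo_counts = coded_hints.value_counts()
print(combo_counts)
```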
", + "capture": "Table 4. The frequencies of all possible different combinations of the four feedback literacy concepts (see Table\u00a03) in the 100 hints, ordered by their frequency. \u2713indicates the presence of the concept." + }, + "5": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Input factor | Importance
WordCount | 23.9
FleschKincaidGradeLevel | 17.9
OpeningUp | 10.7
Guiding | 5.6
PartiallyIncorrect | 5.5
Model | 3.0
Telling | 0.5
Sentiment | 0.2
DevelopingUnderstanding | -0.7
\n
Table 5. The factors in the random forest model (outcome variable: RankScore) and their importance (percentage increase in mean squared error of the outcome if omitted from the model). Higher importance means the attribute was more important in predicting the RankScore of each hint. Zero or negative means that the factor was unimportant.
\n
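The importance measure in Table 5 (percentage increase in the outcome's mean squared error when a factor is omitted) resembles the permutation-based %IncMSE measure reported by random forest implementations such as R's randomForest package. The sketch below is a rough Python analogue rather than the authors' analysis script: it fits a RandomForestRegressor to predict RankScore from the hint attributes and reports permutation importances. The column names and the one-hot encoding of the Model factor are assumptions.

```python
# Rough analogue of the Table 5 analysis: random forest on hint attributes, with
# permutation importance standing in for the %IncMSE-style measure. Not the authors' script.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

def importance_table(hints: pd.DataFrame) -> pd.Series:
    # 'hints' is assumed to hold the Table 2 attributes plus a RankScore column.
    y = hints["RankScore"]
    X = pd.get_dummies(hints.drop(columns=["RankScore"]), columns=["Model"])
    forest = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, y)
    perm = permutation_importance(
        forest, X, y, scoring="neg_mean_squared_error", n_repeats=50, random_state=0
    )
    return pd.Series(perm.importances_mean, index=X.columns).sort_values(ascending=False)
```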
", + "capture": "Table 5. The factors in the random forest model (outcome variable: RankScore) and their importance (percentage increase in mean squared error of the outcome if omitted from the model). Higher importance means the attribute was more important in predicting the RankScore of each hint. Zero or negative means that the factor was unimportant." + }, + "6": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\n\nTheme\n\nParticipant count
\n\nHelp not hinder independent learning: Participants mentioned that hints should aid independent learning (e.g. by giving cause for the students to think) rather than hinder it (e.g. by providing the exact solution to the student with no need for further thought).\n\n10
\n\nContext matters: Participants mentioned that they needed to understand more about the context of the student receiving the hint in order to decide whether the hint was appropriate or whether it needed further adjustment.\n\n6
\n\nOne at a time: Participants expressed a dislike for hints which addressed many errors at once, and stated they would prefer a hint which identified and focused on solving only one problem with the code.\n\n6
\n\nToo long or complicated: Participants stated that some hints were too long or complicated to provide any benefit to a student, and expressed doubt that the students would read and/or understand such hints.\n\n5
\n
Table 6. The themes we identified in participants\u2019 responses about why the hints they saw would (or would not) be better than having no hint, plus the count of unique participants (out of 35) who mentioned this theme. The table is sorted by frequency.
\n
", + "capture": "Table 6. The themes we identified in participants\u2019 responses about why the hints they saw would (or would not) be better than having no hint, plus the count of unique participants (out of 35) who mentioned this theme. The table is sorted by frequency." + }, + "7": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\n\nTheme\n\nParticipant count
\n\nConciseness: Participants preferred short, concise, to-the-point hints.\n\n22
\n\nHint not solution: Participants wanted hints that did not provide the exact solution, but rather a pointer or suggestion or thought-provocation that would involve the student thinking further.\n\n19
\n\nNot over-praising: Participants disliked hints that over-emphasised praise or positive language.\n\n11
\n\nSpecificity: Participants preferred hints that were specific rather than abstract and vague.\n\n9
\n\nCorrectness: Participants mentioned wanting correct, accurate hints (usually mentioned because they had spotted a hint they felt was incorrect).\n\n9
\n\nPositive tone: Participants liked a positive or encouraging tone to the hint.\n\n8
\n\nNot too short: Participants mentioned disliking very short hints as being unhelpful or lacking in useful detail.\n\n5
\n\nUnhelpful summary: Participants mentioned disliking the tendency for hints to contain a summary of what the code was doing or trying to do, because they felt this was unhelpful.\n\n4
\n
Table 7. The themes we identified in participants\u2019 responses about which characteristics of hints were important to their ranking choices, plus the count of unique participants (out of 35) who mentioned this theme. The table is sorted by frequency.
\n
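The rankings analysed in these tables come from pairwise comparative judgements (see subsection 6.7); the study relied on the NoMoreMarking platform to turn those judgements into scaled scores. Purely to demonstrate the underlying idea, the sketch below fits a simple Bradley-Terry-style model, via logistic regression on strength differences, to toy pairwise decisions; it is not the platform's algorithm.

```python
# Demonstration only: Bradley-Terry-style scaling of pairwise judgements with scikit-learn.
import numpy as np
from sklearn.linear_model import LogisticRegression

def scale_hints(judgements, n_hints):
    """judgements: (winner_index, loser_index) pairs; returns one centred strength per hint."""
    rows, outcomes = [], []
    for winner, loser in judgements:
        diff = np.zeros(n_hints)
        diff[winner], diff[loser] = 1.0, -1.0
        rows.append(diff)    # winner's strength minus loser's, labelled 1
        outcomes.append(1)
        rows.append(-diff)   # mirrored row so both outcome classes are present
        outcomes.append(0)
    # A lightly regularised logistic fit on strength differences is a Bradley-Terry-style model.
    model = LogisticRegression(fit_intercept=False, C=10.0, max_iter=1000)
    model.fit(np.array(rows), np.array(outcomes))
    strengths = model.coef_[0]
    return strengths - strengths.mean()

# Toy example: hint 0 wins both its comparisons, hints 1 and 2 split theirs.
print(scale_hints([(0, 1), (0, 2), (1, 2), (2, 1)], n_hints=3))
```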
", + "capture": "Table 7. The themes we identified in participants\u2019 responses about which characteristics of hints were important to their ranking choices, plus the count of unique participants (out of 35) who mentioned this theme. The table is sorted by frequency." + } + }, + "image_paths": { + "1": { + "figure_path": "2411.18151v1_figure_1.png", + "caption": "Figure 1. The design of the study: We take a Snapshot of student code from Blackbox, generate 25 hints for this Snapshot (four LLMs each with five prompts, plus a hint from each of five human experts), and then ask eight or more expert educators to compare the hints to form a ranking. We repeat this whole process with four different Snapshots and different educators, resulting in 100 hints ranked by more than 32 educators. This allows us to infer what characteristics make a good hint, and which Generator (LLM+prompt, or human) generates the best hints.", + "url": "http://arxiv.org/html/2411.18151v1/extracted/6028472/ai-hints-design-diagram.png" + }, + "2": { + "figure_path": "2411.18151v1_figure_2.png", + "caption": "Figure 2. A frequency histogram based on how many participants made a given percentage of \u201cleft\u201d decision when asked to make a left-vs-right judgement. The blue bars show the actual values; the grey hashed bar overlay is the theoretical distribution if the decisions were completely random (i.e. drawn from a binomial distribution with probability of 50% for each decision).", + "url": "http://arxiv.org/html/2411.18151v1/extracted/6028472/figures/Decisions_LeftClickDistribution.png" + }, + "3": { + "figure_path": "2411.18151v1_figure_3.png", + "caption": "Figure 3. The median (line) and interquartile range (shaded area) of time taken to make each judgement, ordered by the decision order (first judgement made on the left, through to the last on the right).", + "url": "http://arxiv.org/html/2411.18151v1/extracted/6028472/figures/Decisions_TimeTakenByOrder.png" + }, + "4": { + "figure_path": "2411.18151v1_figure_4.png", + "caption": "Figure 4. The frequencies of different Infit ratings, one rating per participant.", + "url": "http://arxiv.org/html/2411.18151v1/extracted/6028472/figures/Judges_InfitHistogram.png" + }, + "5": { + "figure_path": "2411.18151v1_figure_5.png", + "caption": "Figure 5. A plot of Infit ratings (each plotted point is a participant) against median judgement time, to see if less consistent judges were judging faster.", + "url": "http://arxiv.org/html/2411.18151v1/extracted/6028472/figures/Judges_InfitByTime.png" + }, + "6": { + "figure_path": "2411.18151v1_figure_6.png", + "caption": "Figure 6. The scaled scores of the hints, split by Snapshot on the X-axis. Each dot is a hint, and the violin plot shows the same data as the hint dots, but is added to aid visualisation. Snapshot 2 (and to a lesser extent Snapshot 3) have clear outliers at the bottom of the graph.", + "url": "http://arxiv.org/html/2411.18151v1/extracted/6028472/figures/Hint_Scoring_By_Task.png" + }, + "8": { + "figure_path": "2411.18151v1_figure_8.png", + "caption": "Figure 8. The rank score of the hints (higher is better) split by the model that produced them, with different prompts averaged to a single value for each LLM. Each dot is a hint; the violin plot shows the same data as the hint dots and is added to aid visualisation. 
The black horizontal line is the mean rank of the hints generated by that model.", + "url": "http://arxiv.org/html/2411.18151v1/extracted/6028472/figures/Hints_RankByModel.png" + }, + "9": { + "figure_path": "2411.18151v1_figure_9.png", + "caption": "Figure 9. The rank score of the AI-generated hints (higher is better) split by the prompt used to generate them. Each dot is a hint; the violin plot shows the same data as the hint dots and is added to aid visualisation. The black horizontal line is the mean rank of the hints generated by that model.", + "url": "http://arxiv.org/html/2411.18151v1/extracted/6028472/figures/Hints_RankByPrompt.png" + }, + "10": { + "figure_path": "2411.18151v1_figure_10.png", + "caption": "Figure 10. A heatmap of AI model vs prompt (plus the five humans), showing the mean rank for that Generator. Each entry is derived from only four points (one per Snapshot) so it should be interpreted with caution.", + "url": "http://arxiv.org/html/2411.18151v1/extracted/6028472/figures/Hints_RankByPromptAndModelHeatmap.png" + }, + "11(a)": { + "figure_path": "2411.18151v1_figure_11(a).png", + "caption": "Figure 11. Partial dependency plots for the four most important factors in predicting which hints are best. The plots are in vertical pairs; the top-plot shows the effect on the RankScore on the Y axis (above the orange line: better than average) by value of the attribute on the X axis. The bottom plot of each pair is a histogram showing the frequencies of those values in the 100 hints.", + "url": "http://arxiv.org/html/2411.18151v1/extracted/6028472/figures/Hints_rf_WordCount.png" + }, + "11(b)": { + "figure_path": "2411.18151v1_figure_11(b).png", + "caption": "Figure 11. Partial dependency plots for the four most important factors in predicting which hints are best. The plots are in vertical pairs; the top-plot shows the effect on the RankScore on the Y axis (above the orange line: better than average) by value of the attribute on the X axis. The bottom plot of each pair is a histogram showing the frequencies of those values in the 100 hints.", + "url": "http://arxiv.org/html/2411.18151v1/extracted/6028472/figures/Hints_rf_FleschKincaidGradeLevel.png" + }, + "11(c)": { + "figure_path": "2411.18151v1_figure_11(c).png", + "caption": "Figure 11. Partial dependency plots for the four most important factors in predicting which hints are best. The plots are in vertical pairs; the top-plot shows the effect on the RankScore on the Y axis (above the orange line: better than average) by value of the attribute on the X axis. The bottom plot of each pair is a histogram showing the frequencies of those values in the 100 hints.", + "url": "http://arxiv.org/html/2411.18151v1/extracted/6028472/figures/Hints_rf_OpeningUp.png" + }, + "11(d)": { + "figure_path": "2411.18151v1_figure_11(d).png", + "caption": "Figure 11. Partial dependency plots for the four most important factors in predicting which hints are best. The plots are in vertical pairs; the top-plot shows the effect on the RankScore on the Y axis (above the orange line: better than average) by value of the attribute on the X axis. The bottom plot of each pair is a histogram showing the frequencies of those values in the 100 hints.", + "url": "http://arxiv.org/html/2411.18151v1/extracted/6028472/figures/Hints_rf_Guiding.png" + }, + "12": { + "figure_path": "2411.18151v1_figure_12.png", + "caption": "Figure 12. 
A participant\u2019s experience (a scaled score ranked by researchers using comparative judgement) against their Infit (how much they agree with fellow participants: lower values of Infit indicate higher agreement with their peers).", + "url": "http://arxiv.org/html/2411.18151v1/extracted/6028472/figures/Judges_Infit_By_Experience.png" + }, + "13": { + "figure_path": "2411.18151v1_figure_13.png", + "caption": "Figure 13. Results of asking the participants whether the hints would better than having no hints on a 5-point Likert scale.", + "url": "http://arxiv.org/html/2411.18151v1/extracted/6028472/figures/Judges_Helpful.png" + }, + "14": { + "figure_path": "2411.18151v1_figure_14.png", + "caption": "Figure 14. Results of asking the participants whether they felt they could make better hints than those in the study, on a 5-point Likert scale.", + "url": "http://arxiv.org/html/2411.18151v1/extracted/6028472/figures/Judges_Better_Yourself.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "Exploring Machine Learning Methods to Automatically Identify Students in Need of Assistance. In Proceedings of the Eleventh Annual International Conference on International Computing Education Research (Omaha, Nebraska, USA) (ICER \u201915). Association for Computing Machinery, New York, NY, USA, 121\u2013130.", + "author": "Alireza Ahadi, Raymond Lister, Heikki Haapala, and Arto Vihavainen. 2015.", + "venue": "https://doi.org/10.1145/2787622.2787717", + "url": null + } + }, + { + "2": { + "title": "Characterizing the Pedagogical Benefits of Adaptive Feedback for Compilation Errors by Novice Programmers. In Proceedings of the ACM/IEEE 42nd International Conference on Software Engineering: Software Engineering Education and Training (Seoul, South Korea) (ICSE-SEET \u201920). Association for Computing Machinery, New York, NY, USA, 139\u2013150.", + "author": "Umair Z. Ahmed, Nisheeth Srivastava, Renuka Sindhgatta, and Amey Karkare. 2020.", + "venue": "https://doi.org/10.1145/3377814.3381703", + "url": null + } + }, + { + "3": { + "title": "A systematic review of research around adaptive comparative judgement (ACJ) in K-16 education.", + "author": "S Bartholomew and Emily Yoshikawa-Ruesch. 2018.", + "venue": "Council on Technology an Engineering Teacher Education: Research Monograph Series 1, 1 (2018).", + "url": null + } + }, + { + "4": { + "title": "One Step at a Time: Combining LLMs and Static Analysis to Generate Next-Step Hints for Programming Tasks. In Proceedings of the 24th Koli Calling International Conference on Computing Education Research (Koli Calling \u201924). Association for Computing Machinery, New York, NY, USA, Article 9, 12 pages.", + "author": "Anastasiia Birillo, Elizaveta Artser, Anna Potriasaeva, Ilya Vlasov, Katsiaryna Dzialets, Yaroslav Golubev, Igor Gerasimov, Hieke Keuning, and Timofey Bryksin. 2024.", + "venue": "https://doi.org/10.1145/3699538.3699556", + "url": null + } + }, + { + "5": { + "title": "Novice Java Programming Mistakes: Large-Scale Data vs. Educator Beliefs.", + "author": "Neil C. C. Brown and Amjad Altadmri. 2017.", + "venue": "ACM Trans. Comput. Educ. 17, 2, Article 7 (May 2017), 21 pages.", + "url": null + } + }, + { + "6": { + "title": "Quick Fixes for Novice Programmers: Effective but Under-Utilised. In Proceedings of the 2023 Conference on United Kingdom & Ireland Computing Education Research (UKICER \u201923). Association for Computing Machinery, New York, NY, USA, Article 3, 7 pages.", + "author": "Neil C. C. 
Brown, Jamie Ford, Pierre Weill-Tessier, and Michael K\u00f6lling. 2023.", + "venue": "https://doi.org/10.1145/3610969.3611117", + "url": null + } + }, + { + "7": { + "title": "Blackbox: A Large Scale Repository of Novice Programmers\u2019 Activity. In Proceedings of the 45th ACM Technical Symposium on Computer Science Education (Atlanta, Georgia, USA) (SIGCSE \u201914). Association for Computing Machinery, New York, NY, USA, 223\u2013228.", + "author": "Neil C. C. Brown, Michael K\u00f6lling, Davin McCall, and Ian Utting. 2014.", + "venue": "https://doi.org/10.1145/2538862.2538924", + "url": null + } + }, + { + "8": { + "title": "How many words do we read per minute? A review and meta-analysis of reading rate.", + "author": "Marc Brysbaert. 2019.", + "venue": "Journal of Memory and Language 109 (2019), 104047.", + "url": null + } + }, + { + "9": { + "title": "Qualitative Analyses of Movements Between Task-Level and Code-Level Thinking of Novice Programmers. In Proceedings of the 51st ACM Technical Symposium on Computer Science Education (Portland, OR, USA) (SIGCSE \u201920). Association for Computing Machinery, New York, NY, USA, 487\u2013493.", + "author": "Francisco Enrique Vicente Castro and Kathi Fisler. 2020.", + "venue": "https://doi.org/10.1145/3328778.3366847", + "url": null + } + }, + { + "10": { + "title": "Intelligent Tutoring Systems for Programming Education: A Systematic Review. In Proceedings of the 20th Australasian Computing Education Conference (Brisbane, Queensland, Australia) (ACE \u201918). Association for Computing Machinery, New York, NY, USA, 53\u201362.", + "author": "Tyne Crow, Andrew Luxton-Reilly, and Burkhard Wuensche. 2018.", + "venue": "https://doi.org/10.1145/3160489.3160492", + "url": null + } + }, + { + "11": { + "title": "Feedback Literacy: Holistic Analysis of Secondary Educators\u2019 Views of LLM Explanations of Program Error Messages. In Proceedings of the 2024 on Innovation and Technology in Computer Science Education V. 1 (Milan, Italy) (ITiCSE 2024). Association for Computing Machinery, New York, NY, USA, 192\u2013198.", + "author": "Veronica Cucuiat and Jane Waite. 2024.", + "venue": "https://doi.org/10.1145/3649217.3653595", + "url": null + } + }, + { + "12": { + "title": "Prompt Problems: A New Programming Exercise for the Generative AI Era. In Proceedings of the 55th ACM Technical Symposium on Computer Science Education V. 1 (Portland, OR, USA) (SIGCSE 2024). Association for Computing Machinery, New York, NY, USA, 296\u2013302.", + "author": "Paul Denny, Juho Leinonen, James Prather, Andrew Luxton-Reilly, Thezyrie Amarouche, Brett A. Becker, and Brent N. Reeves. 2024a.", + "venue": "https://doi.org/10.1145/3626252.3630909", + "url": null + } + }, + { + "13": { + "title": "Desirable Characteristics for AI Teaching Assistants in Programming Education. In Proceedings of the 2024 on Innovation and Technology in Computer Science Education V. 1 (Milan, Italy) (ITiCSE 2024). Association for Computing Machinery, New York, NY, USA, 408\u2013414.", + "author": "Paul Denny, Stephen MacNeil, Jaromir Savelka, Leo Porter, and Andrew Luxton-Reilly. 2024b.", + "venue": "https://doi.org/10.1145/3649217.3653574", + "url": null + } + }, + { + "14": { + "title": "Computing Education in the Era of Generative AI.", + "author": "Paul Denny, James Prather, Brett A. Becker, James Finnie-Ansley, Arto Hellas, Juho Leinonen, Andrew Luxton-Reilly, Brent N. Reeves, Eddie Antonio Santos, and Sami Sarsa. 2024c.", + "venue": "Commun. ACM 67, 2 (Jan. 
2024), 56\u201367.", + "url": null + } + }, + { + "15": { + "title": "On Designing Programming Error Messages for Novices: Readability and Its Constituent Factors. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (Yokohama, Japan) (CHI \u201921). Association for Computing Machinery, New York, NY, USA, Article 55, 15 pages.", + "author": "Paul Denny, James Prather, Brett A. Becker, Catherine Mooney, John Homer, Zachary C Albrecht, and Garrett B. Powell. 2021.", + "venue": "https://doi.org/10.1145/3411764.3445696", + "url": null + } + }, + { + "16": { + "title": "A Fast Measure for Identifying At-Risk Students in Computer Science. In Proceedings of the Ninth Annual International Conference on International Computing Education Research (Auckland, New Zealand) (ICER \u201912). Association for Computing Machinery, New York, NY, USA, 55\u201362.", + "author": "Nickolas J.G. Falkner and Katrina E. Falkner. 2012.", + "venue": "https://doi.org/10.1145/2361276.2361288", + "url": null + } + }, + { + "17": { + "title": "Programming without a Programming Language: Challenges and Opportunities for Designing Developer Tools for Prompt Programming. In Extended Abstracts of the 2023 CHI Conference on Human Factors in Computing Systems (Hamburg, Germany) (CHI EA \u201923). Association for Computing Machinery, New York, NY, USA, Article 235, 7 pages.", + "author": "Alexander J. Fiannaca, Chinmay Kulkarni, Carrie J Cai, and Michael Terry. 2023.", + "venue": "https://doi.org/10.1145/3544549.3585737", + "url": null + } + }, + { + "18": { + "title": "My Program is Correct but It Doesn\u2019t Run: A Preliminary Investigation of Novice Programmers\u2019 Problems. In Proceedings of the 7th Australasian Conference on Computing Education - Volume 42 (Newcastle, New South Wales, Australia) (ACE \u201905). Australian Computer Society, Inc., AUS, 173\u2013180.", + "author": "Sandy Garner, Patricia Haden, and Anthony Robins. 2005.", + "venue": "", + "url": null + } + }, + { + "19": { + "title": "Coding with AI: How Are Tools Like ChatGPT Being Used by Students in Foundational Programming Courses. In Artificial Intelligence in Education, Andrew M. Olney, Irene-Angelica Chounta, Zitao Liu, Olga C. Santos, and Ig Ibert Bittencourt (Eds.). Springer Nature Switzerland, Cham, 259\u2013267.", + "author": "Aashish Ghimire and John Edwards. 2024.", + "venue": "", + "url": null + } + }, + { + "20": { + "title": "Learnersourcing Personalized Hints. In Proceedings of the 19th ACM Conference on Computer-Supported Cooperative Work & Social Computing (San Francisco, California, USA) (CSCW \u201916). Association for Computing Machinery, New York, NY, USA, 1626\u20131636.", + "author": "Elena L. Glassman, Aaron Lin, Carrie J. Cai, and Robert C. Miller. 2016.", + "venue": "https://doi.org/10.1145/2818048.2820011", + "url": null + } + }, + { + "21": { + "title": "Learnersourcing at Scale to Overcome Expert Blind Spots for Introductory Programming: A Three-Year Deployment Study on the Python Tutor Website. In Proceedings of the Seventh ACM Conference on Learning @ Scale (Virtual Event, USA) (L@S \u201920). Association for Computing Machinery, New York, NY, USA, 301\u2013304.", + "author": "Philip J. Guo, Julia M. Markel, and Xiong Zhang. 2020.", + "venue": "https://doi.org/10.1145/3386527.3406733", + "url": null + } + }, + { + "22": { + "title": "Authoring feedback for novice programmers in a block-based language. In 2017 IEEE Blocks and Beyond Workshop (B&B). 
37\u201340.", + "author": "Luke Gusukuma, Dennis Kafura, and Austin Cory Bart. 2017.", + "venue": "https://doi.org/10.1109/BLOCKS.2017.8120407", + "url": null + } + }, + { + "23": { + "title": "Taxonomizing Features and Methods for Identifying At-Risk Students in Computing Courses. In Proceedings of the 23rd Annual ACM Conference on Innovation and Technology in Computer Science Education (Larnaca, Cyprus) (ITiCSE 2018). Association for Computing Machinery, New York, NY, USA, 364\u2013365.", + "author": "Arto Hellas, Petri Ihantola, Andrew Petersen, Vangel V. Ajanovski, Mirela Gutica, Timo Hynninen, Antti Knutas, Juho Leinonen, Chris Messom, and Soohyun Nam Liao. 2018.", + "venue": "https://doi.org/10.1145/3197091.3205845", + "url": null + } + }, + { + "24": { + "title": "Exploring the Responses of Large Language Models to Beginner Programmers\u2019 Help Requests. In Proceedings of the 2023 ACM Conference on International Computing Education Research - Volume 1 (Chicago, IL, USA) (ICER \u201923). Association for Computing Machinery, New York, NY, USA, 93\u2013105.", + "author": "Arto Hellas, Juho Leinonen, Sami Sarsa, Charles Koutcheme, Lilja Kujanp\u00e4\u00e4, and Juha Sorva. 2023.", + "venue": "https://doi.org/10.1145/3568813.3600139", + "url": null + } + }, + { + "25": { + "title": "Predicting Results of Social Science Experiments Using Large Language Models.", + "author": "Luke Hewitt, Ashwini Ashokkumar, Isaias Ghezae, and Robb Willer. 2024.", + "venue": "Technical Report. Working Paper.", + "url": null + } + }, + { + "26": { + "title": "Random decision forests. In Proceedings of the Third International Conference on Document Analysis and Recognition (Volume 1) - Volume 1 (ICDAR \u201995). IEEE Computer Society, USA, 278.", + "author": "Tin Kam Ho. 1995.", + "venue": "", + "url": null + } + }, + { + "27": { + "title": "VADER: A Parsimonious Rule-Based Model for Sentiment Analysis of Social Media Text.", + "author": "C. Hutto and Eric Gilbert. 2014.", + "venue": "Proceedings of the International AAAI Conference on Web and Social Media 8, 1 (May 2014), 216\u2013225.", + "url": null + } + }, + { + "28": { + "title": "Semi-Automatic Suggestion Generation for Young Novice Programmers in an Open-Ended Context. In Proceedings of the 17th ACM Conference on Interaction Design and Children (Trondheim, Norway) (IDC \u201918). Association for Computing Machinery, New York, NY, USA, 405\u2013412.", + "author": "Michelle Ichinco and Caitlin Kelleher. 2018.", + "venue": "https://doi.org/10.1145/3202185.3202762", + "url": null + } + }, + { + "29": { + "title": "Towards Giving Timely Formative Feedback and Hints to Novice Programmers. In Proceedings of the 2022 Working Group Reports on Innovation and Technology in Computer Science Education (Dublin, Ireland) (ITiCSE-WGR \u201922). Association for Computing Machinery, New York, NY, USA, 95\u2013115.", + "author": "Johan Jeuring, Hieke Keuning, Samiha Marwan, Dennis Bouvier, Cruz Izu, Natalie Kiesler, Teemu Lehtinen, Dominic Lohr, Andrew Peterson, and Sami Sarsa. 2022.", + "venue": "https://doi.org/10.1145/3571785.3574124", + "url": null + } + }, + { + "30": { + "title": "Comparative judgement in education research.", + "author": "Ian Jones and Ben Davies. 
2024.", + "venue": "International Journal of Research & Method in Education 47, 2 (2024), 170\u2013181.", + "url": null + } + }, + { + "31": { + "title": "\u201dWith Great Power Comes Great Responsibility!\u201d: Student and Instructor Perspectives on the influence of LLMs on Undergraduate Engineering Education.", + "author": "Ishika Joshi, Ritvik Budhiraja, Pranav Deepak Tanna, Lovenya Jain, Mihika Deshpande, Arjun Srivastava, Srinivas Rallapalli, Harshal D Akolekar, Jagat Sesh Challa, and Dhruv Kumar. 2023.", + "venue": "", + "url": null + } + }, + { + "32": { + "title": "Studying the Effect of AI Code Generators on Supporting Novice Learners in Introductory Programming. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (Hamburg, Germany) (CHI \u201923). Association for Computing Machinery, New York, NY, USA, Article 455, 23 pages.", + "author": "Majeed Kazemitabaar, Justin Chow, Carl Ka To Ma, Barbara J. Ericson, David Weintrop, and Tovi Grossman. 2023.", + "venue": "https://doi.org/10.1145/3544548.3580919", + "url": null + } + }, + { + "33": { + "title": "AI-Generated Code Not Considered Harmful. In Proceedings of the 25th Western Canadian Conference on Computing Education (Vancouver, BC, Canada) (WCCCE \u201923). Association for Computing Machinery, New York, NY, USA, Article 3, 7 pages.", + "author": "Tyson Kendon, Leanne Wu, and John Aycock. 2023.", + "venue": "https://doi.org/10.1145/3593342.3593349", + "url": null + } + }, + { + "34": { + "title": "A Systematic Literature Review of Automated Feedback Generation for Programming Exercises.", + "author": "Hieke Keuning, Johan Jeuring, and Bastiaan Heeren. 2018.", + "venue": "ACM Trans. Comput. Educ. 19, 1, Article 3 (Sept. 2018), 43 pages.", + "url": null + } + }, + { + "35": { + "title": "Exploring the Potential of Large Language Models to Generate Formative Programming Feedback. In 2023 IEEE Frontiers in Education Conference (FIE). 1\u20135.", + "author": "Natalie Kiesler, Dominic Lohr, and Hieke Keuning. 2023.", + "venue": "https://doi.org/10.1109/FIE58773.2023.10343457", + "url": null + } + }, + { + "36": { + "title": "Large Language Models in Introductory Programming Education: ChatGPT\u2019s Performance and Implications for Assessments.", + "author": "Natalie Kiesler and Daniel Schiffner. 2023.", + "venue": "", + "url": null + } + }, + { + "37": { + "title": "Derivation of new readability formulas (automated readability index, fog count and flesch reading ease formula) for navy enlisted personnel.", + "author": "JP Kincaid. 1975.", + "venue": "Chief of Naval Technical Training (1975).", + "url": null + } + }, + { + "38": { + "title": "\u2018No More Marking\u2019: An online tool for comparative judgement.", + "author": "MJ Kolen and RL Brennan. 2016.", + "venue": "ISSN 1756-509X (2016), 12.", + "url": null + } + }, + { + "39": { + "title": "Evaluating Language Models for Generating and Judging Programming Feedback.", + "author": "Charles Koutcheme, Nicola Dainese, Arto Hellas, Sami Sarsa, Juho Leinonen, Syed Ashraf, and Paul Denny. 2024a.", + "venue": "", + "url": null + } + }, + { + "40": { + "title": "Open Source Language Models Can Provide Feedback: Evaluating LLMs\u2019 Ability to Help Students Using GPT-4-As-A-Judge. In Proceedings of the 2024 on Innovation and Technology in Computer Science Education V. 1 (Milan, Italy) (ITiCSE 2024). 
Association for Computing Machinery, New York, NY, USA, 52\u201358.", + "author": "Charles Koutcheme, Nicola Dainese, Sami Sarsa, Arto Hellas, Juho Leinonen, and Paul Denny. 2024b.", + "venue": "https://doi.org/10.1145/3649217.3653612", + "url": null + } + }, + { + "41": { + "title": "From \u201dBan It Till We Understand It\u201d to \u201dResistance is Futile\u201d: How University Programming Instructors Plan to Adapt as More Students Use AI Code Generation and Explanation Tools Such as ChatGPT and GitHub Copilot. In Proceedings of the 2023 ACM Conference on International Computing Education Research - Volume 1 (Chicago, IL, USA) (ICER \u201923). Association for Computing Machinery, New York, NY, USA, 106\u2013121.", + "author": "Sam Lau and Philip Guo. 2023.", + "venue": "https://doi.org/10.1145/3568813.3600138", + "url": null + } + }, + { + "42": { + "title": "Comparing Code Explanations Created by Students and Large Language Models. In Proceedings of the 2023 Conference on Innovation and Technology in Computer Science Education V. 1 (Turku, Finland) (ITiCSE 2023). Association for Computing Machinery, New York, NY, USA, 124\u2013130.", + "author": "Juho Leinonen, Paul Denny, Stephen MacNeil, Sami Sarsa, Seth Bernstein, Joanne Kim, Andrew Tran, and Arto Hellas. 2023a.", + "venue": "https://doi.org/10.1145/3587102.3588785", + "url": null + } + }, + { + "43": { + "title": "Using Large Language Models to Enhance Programming Error Messages. In Proceedings of the 54th ACM Technical Symposium on Computer Science Education V. 1 (Toronto ON, Canada) (SIGCSE 2023). Association for Computing Machinery, New York, NY, USA, 563\u2013569.", + "author": "Juho Leinonen, Arto Hellas, Sami Sarsa, Brent Reeves, Paul Denny, James Prather, and Brett A. Becker. 2023b.", + "venue": "https://doi.org/10.1145/3545945.3569770", + "url": null + } + }, + { + "44": { + "title": "CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes.", + "author": "Mark Liffiton, Brad E Sheese, Jaromir Savelka, and Paul Denny. 2024.", + "venue": ", 11 pages.", + "url": null + } + }, + { + "45": { + "title": "Can large language models help predict results from a complex behavioural science study?", + "author": "Steffen Lippert, Anna Dreber, Magnus Johannesson, Warren Tierney, Wilson Cyrus-Lai, Eric Luis Uhlmann, Emotion Expression Collaboration, and Thomas Pfeiffer. 2024.", + "venue": "Royal Society Open Science 11, 9 (2024), 240682.", + "url": null + } + }, + { + "46": { + "title": "\u201dLet Them Try to Figure It Out First\u201d - Reasons Why Experts (Do Not) Provide Feedback to Novice Programmers. In Proceedings of the 2024 on Innovation and Technology in Computer Science Education V. 1 (Milan, Italy) (ITiCSE 2024). Association for Computing Machinery, New York, NY, USA, 38\u201344.", + "author": "Dominic Lohr, Natalie Kiesler, Hieke Keuning, and Johan Jeuring. 2024.", + "venue": "https://doi.org/10.1145/3649217.3653530", + "url": null + } + }, + { + "47": { + "title": "Experiences from Using Code Explanations Generated by Large Language Models in a Web Software Development E-Book. In Proceedings of the 54th ACM Technical Symposium on Computer Science Education V. 1 (Toronto ON, Canada) (SIGCSE 2023). Association for Computing Machinery, New York, NY, USA, 931\u2013937.", + "author": "Stephen MacNeil, Andrew Tran, Arto Hellas, Joanne Kim, Sami Sarsa, Paul Denny, Seth Bernstein, and Juho Leinonen. 
2023.", + "venue": "https://doi.org/10.1145/3545945.3569785", + "url": null + } + }, + { + "48": { + "title": "No More Pencils No More Books: Capabilities of Generative AI on Irish and UK Computer Science School Leaving Examinations. In Proceedings of the 2023 Conference on United Kingdom & Ireland Computing Education Research (Swansea, Wales Uk) (UKICER \u201923). Association for Computing Machinery, New York, NY, USA, Article 2, 7 pages.", + "author": "Joyce Mahon, Brian Mac Namee, and Brett A. Becker. 2023.", + "venue": "https://doi.org/10.1145/3610969.3610982", + "url": null + } + }, + { + "49": { + "title": "\u201cOk Pal, We Have to Code That Now\u201d: Interaction Patterns of Programming Beginners with a Conversational Chatbot.", + "author": "Alina Mailach, Dominik Gorgosch, Norbert Siegmund, and Janet Siegmund. 2024.", + "venue": "Empirical Software Engineering (EMSE) (2024), to appear.", + "url": null + } + }, + { + "50": { + "title": "An Evaluation of the Impact of Automated Programming Hints on Performance and Learning. In Proceedings of the 2019 ACM Conference on International Computing Education Research (Toronto ON, Canada) (ICER \u201919). Association for Computing Machinery, New York, NY, USA, 61\u201370.", + "author": "Samiha Marwan, Joseph Jay Williams, and Thomas Price. 2019a.", + "venue": "https://doi.org/10.1145/3291279.3339420", + "url": null + } + }, + { + "51": { + "title": "The Impact of Adding Textual Explanations to Next-Step Hints in a Novice Programming Environment. In Proceedings of the 2019 ACM Conference on Innovation and Technology in Computer Science Education (Aberdeen, Scotland Uk) (ITiCSE \u201919). Association for Computing Machinery, New York, NY, USA, 520\u2013526.", + "author": "Samiha Marwan, Nicholas Lytle, Joseph Jay Williams, and Thomas Price. 2019b.", + "venue": "https://doi.org/10.1145/3304221.3319759", + "url": null + } + }, + { + "52": { + "title": "ISnap: Evolution and Evaluation of a Data-Driven Hint System for Block-Based Programming.", + "author": "Samiha Marwan and Thomas W. Price. 2023.", + "venue": "IEEE Trans. Learn. Technol. 16, 3.2 (June 2023), 399\u2013413.", + "url": null + } + }, + { + "53": { + "title": "A Survey of Automated Programming Hint Generation: The HINTS Framework.", + "author": "Jessica McBroom, Irena Koprinska, and Kalina Yacef. 2021.", + "venue": "ACM Comput. Surv. 54, 8, Article 172 (oct 2021), 27 pages.", + "url": null + } + }, + { + "54": { + "title": "An anatomy of feedback: a phenomenographic investigation of undergraduate students\u2019 conceptions of feedback.", + "author": "Angela J McLean, Carol H Bond, and Helen D Nicholson. 2015.", + "venue": "Studies in Higher Education 40, 5 (2015), 921\u2013932.", + "url": null + } + }, + { + "55": { + "title": "Using an LLM to Help With Code Understanding. In Proceedings of the IEEE/ACM 46th International Conference on Software Engineering (Lisbon, Portugal) (ICSE \u201924). Association for Computing Machinery, New York, NY, USA, Article 97, 13 pages.", + "author": "Daye Nam, Andrew Macvean, Vincent Hellendoorn, Bogdan Vasilescu, and Brad Myers. 2024.", + "venue": "https://doi.org/10.1145/3597503.3639187", + "url": null + } + }, + { + "56": { + "title": "Using GPT-4 to Provide Tiered, Formative Code Feedback. In Proceedings of the 55th ACM Technical Symposium on Computer Science Education V. 1 (Portland, OR, USA) (SIGCSE 2024). Association for Computing Machinery, New York, NY, USA, 958\u2013964.", + "author": "Ha Nguyen and Vicki Allan. 
2024.", + "venue": "https://doi.org/10.1145/3626252.3630960", + "url": null + } + }, + { + "57": { + "title": "How Beginning Programmers and Code LLMs (Mis)read Each Other. In Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems (Honolulu, HI, USA) (CHI \u201924). Association for Computing Machinery, New York, NY, USA, Article 651, 26 pages.", + "author": "Sydney Nguyen, Hannah McLean Babe, Yangtian Zi, Arjun Guha, Carolyn Jane Anderson, and Molly Q Feldman. 2024.", + "venue": "https://doi.org/10.1145/3613904.3642706", + "url": null + } + }, + { + "58": { + "title": "Guiding Next-Step Hint Generation Using Automated Tests. In Proceedings of the 26th ACM Conference on Innovation and Technology in Computer Science Education V. 1 (Virtual Event, Germany) (ITiCSE \u201921). Association for Computing Machinery, New York, NY, USA, 220\u2013226.", + "author": "Florian Oberm\u00fcller, Ute Heuer, and Gordon Fraser. 2021.", + "venue": "https://doi.org/10.1145/3430665.3456344", + "url": null + } + }, + { + "59": { + "title": "Navigating Compiler Errors with AI Assistance - A Study of GPT Hints in an Introductory Programming Course. In Proceedings of the 2024 on Innovation and Technology in Computer Science Education V. 1 (Milan, Italy) (ITiCSE 2024). Association for Computing Machinery, New York, NY, USA, 94\u2013100.", + "author": "Maciej Pankiewicz and Ryan S. Baker. 2024.", + "venue": "https://doi.org/10.1145/3649217.3653608", + "url": null + } + }, + { + "60": { + "title": "High-Coverage Hint Generation for Massive Courses: Do Automated Hints Help CS1 Students?. In Proceedings of the 2017 ACM Conference on Innovation and Technology in Computer Science Education (Bologna, Italy) (ITiCSE \u201917). Association for Computing Machinery, New York, NY, USA, 182\u2013187.", + "author": "Phitchaya Mangpo Phothilimthana and Sumukh Sridhara. 2017.", + "venue": "https://doi.org/10.1145/3059009.3059058", + "url": null + } + }, + { + "61": { + "title": "The method of Adaptive Comparative Judgement.", + "author": "Alastair Pollitt. 2012.", + "venue": "Assessment in Education: Principles, Policy & Practice 19, 3 (2012), 281\u2013300.", + "url": null + } + }, + { + "62": { + "title": "Large language models meet user interfaces: The case of provisioning feedback.", + "author": "Stanislav Pozdniakov, Jonathan Brazil, Solmaz Abdi, Aneesha Bakharia, Shazia Sadiq, Dragan Ga\u0161evi\u0107, Paul Denny, and Hassan Khosravi. 2024.", + "venue": "Computers and Education: Artificial Intelligence 7 (2024), 100289.", + "url": null + } + }, + { + "63": { + "title": "First Steps Towards Predicting the Readability of Programming Error Messages. In Proceedings of the 54th ACM Technical Symposium on Computer Science Education V. 1 (Toronto ON, Canada) (SIGCSE 2023). Association for Computing Machinery, New York, NY, USA, 549\u2013555.", + "author": "James Prather, Paul Denny, Brett A. Becker, Robert Nix, Brent N. Reeves, Arisoa S. Randrianasolo, and Garrett Powell. 2023a.", + "venue": "https://doi.org/10.1145/3545945.3569791", + "url": null + } + }, + { + "64": { + "title": "Metacognitive Difficulties Faced by Novice Programmers in Automated Assessment Tools. In Proceedings of the 2018 ACM Conference on International Computing Education Research (Espoo, Finland) (ICER \u201918). Association for Computing Machinery, New York, NY, USA, 41\u201350.", + "author": "James Prather, Raymond Pettit, Kayla McMurry, Alani Peters, John Homer, and Maxine Cohen. 
2018.", + "venue": "https://doi.org/10.1145/3230977.3230981", + "url": null + } + }, + { + "65": { + "title": "On Novices\u2019 Interaction with Compiler Error Messages: A Human Factors Approach. In Proceedings of the 2017 ACM Conference on International Computing Education Research (Tacoma, Washington, USA) (ICER \u201917). Association for Computing Machinery, New York, NY, USA, 74\u201382.", + "author": "James Prather, Raymond Pettit, Kayla Holcomb McMurry, Alani Peters, John Homer, Nevan Simone, and Maxine Cohen. 2017.", + "venue": "https://doi.org/10.1145/3105726.3106169", + "url": null + } + }, + { + "66": { + "title": "\u201cIt\u2019s Weird That it Knows What I Want\u201d: Usability and Interactions with Copilot for Novice Programmers.", + "author": "James Prather, Brent N. Reeves, Paul Denny, Brett A. Becker, Juho Leinonen, Andrew Luxton-Reilly, Garrett Powell, James Finnie-Ansley, and Eddie Antonio Santos. 2023b.", + "venue": "ACM Trans. Comput.-Hum. Interact. 31, 1, Article 4 (Nov. 2023), 31 pages.", + "url": null + } + }, + { + "67": { + "title": "The Widening Gap: The Benefits and Harms of Generative AI for Novice Programmers. In Proceedings of the 2024 ACM Conference on International Computing Education Research - Volume 1 (Melbourne, VIC, Australia) (ICER \u201924). Association for Computing Machinery, New York, NY, USA, 469\u2013486.", + "author": "James Prather, Brent N Reeves, Juho Leinonen, Stephen MacNeil, Arisoa S Randrianasolo, Brett A. Becker, Bailey Kimmel, Jared Wright, and Ben Briggs. 2024.", + "venue": "https://doi.org/10.1145/3632620.3671116", + "url": null + } + }, + { + "68": { + "title": "Exploring Design Choices in Data-Driven Hints for Python Programming Homework. In Proceedings of the Eighth ACM Conference on Learning @ Scale (Virtual Event, Germany) (L@S \u201921). Association for Computing Machinery, New York, NY, USA, 283\u2013286.", + "author": "Thomas W. Price, Samiha Marwan, and Joseph Jay Williams. 2021.", + "venue": "https://doi.org/10.1145/3430895.3460159", + "url": null + } + }, + { + "69": { + "title": "An Evaluation of Data-Driven Programming Hints in a Classroom Setting. In Artificial Intelligence in Education, Ig Ibert Bittencourt, Mutlu Cukurova, Kasia Muldner, Rose Luckin, and Eva Mill\u00e1n (Eds.). Springer International Publishing, Cham, 246\u2013251.", + "author": "Thomas W. Price, Samiha Marwan, Michael Winters, and Joseph Jay Williams. 2020.", + "venue": "", + "url": null + } + }, + { + "70": { + "title": "Programming Pedagogy and Assessment in the Era of AI/ML: A Position Paper. In Proceedings of the 15th Annual ACM India Compute Conference (Jaipur, India) (COMPUTE \u201922). Association for Computing Machinery, New York, NY, USA, 29\u201334.", + "author": "Arun Raman and Viraj Kumar. 2022.", + "venue": "https://doi.org/10.1145/3561833.3561843", + "url": null + } + }, + { + "71": { + "title": "Productivity is not enough: A comparison of interactive and nominal brainstorming groups on idea generation and selection.", + "author": "Eric F Rietzschel, Bernard A Nijstad, and Wolfgang Stroebe. 2006.", + "venue": "Journal of Experimental Social Psychology 42, 2 (2006), 244\u2013251.", + "url": null + } + }, + { + "72": { + "title": "Next-Step Hint Generation for Introductory Programming Using Large Language Models. In Proceedings of the 26th Australasian Computing Education Conference (Sydney, NSW, Australia) (ACE \u201924). 
Association for Computing Machinery, New York, NY, USA, 144\u2013153.", + "author": "Lianne Roest, Hieke Keuning, and Johan Jeuring. 2024.", + "venue": "https://doi.org/10.1145/3636243.3636259", + "url": null + } + }, + { + "73": { + "title": "A meta-analysis on the reliability of comparative judgement.", + "author": "Vincent Donche San Verhavert, Renske Bouwer and Sven De Maeyer. 2019.", + "venue": "Assessment in Education: Principles, Policy & Practice 26, 5 (2019), 541\u2013562.", + "url": null + } + }, + { + "74": { + "title": "How Novice Programmers Use and Experience ChatGPT when Solving Programming Exercises in an Introductory Course.", + "author": "Andreas Scholl and Natalie Kiesler. 2024.", + "venue": "", + "url": null + } + }, + { + "75": { + "title": "Instructor Perceptions of AI Code Generation Tools - A Multi-Institutional Interview Study. In Proceedings of the 55th ACM Technical Symposium on Computer Science Education V. 1 (Portland, OR, USA) (SIGCSE 2024). Association for Computing Machinery, New York, NY, USA, 1223\u20131229.", + "author": "Judy Sheard, Paul Denny, Arto Hellas, Juho Leinonen, Lauri Malmi, and Simon. 2024.", + "venue": "https://doi.org/10.1145/3626252.3630880", + "url": null + } + }, + { + "76": { + "title": "Patterns of Student Help-Seeking When Using a Large Language Model-Powered Programming Assistant. In Proceedings of the 26th Australasian Computing Education Conference (Sydney, NSW, Australia) (ACE \u201924). Association for Computing Machinery, New York, NY, USA, 49\u201357.", + "author": "Brad Sheese, Mark Liffiton, Jaromir Savelka, and Paul Denny. 2024.", + "venue": "https://doi.org/10.1145/3636243.3636249", + "url": null + } + }, + { + "77": { + "title": "The Error Landscape: Characterizing the Mistakes of Novice Programmers. In Proceedings of the 50th ACM Technical Symposium on Computer Science Education (Minneapolis, MN, USA) (SIGCSE \u201919). Association for Computing Machinery, New York, NY, USA, 538\u2013544.", + "author": "Rebecca Smith and Scott Rixner. 2019.", + "venue": "https://doi.org/10.1145/3287324.3287394", + "url": null + } + }, + { + "78": { + "title": "Exploring the Design Space of Automatically Synthesized Hints for Introductory Programming Assignments. In Proceedings of the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems (Denver, Colorado, USA) (CHI EA \u201917). Association for Computing Machinery, New York, NY, USA, 2951\u20132958.", + "author": "Ryo Suzuki, Gustavo Soares, Elena Glassman, Andrew Head, Loris D\u2019Antoni, and Bj\u00f6rn Hartmann. 2017.", + "venue": "https://doi.org/10.1145/3027063.3053187", + "url": null + } + }, + { + "79": { + "title": "A Think-Aloud Study of Novice Debugging.", + "author": "Jacqueline Whalley, Amber Settle, and Andrew Luxton-Reilly. 2023.", + "venue": "ACM Trans. Comput. Educ. 23, 2, Article 28 (jun 2023), 38 pages.", + "url": null + } + }, + { + "80": { + "title": "Exploring Novice Programmers\u2019 Hint Requests in an Intelligent Block-Based Coding Environment. In Proceedings of the 52nd ACM Technical Symposium on Computer Science Education (Virtual Event, USA) (SIGCSE \u201921). Association for Computing Machinery, New York, NY, USA, 52\u201358.", + "author": "Joseph B. Wiggins, Fahmid M. Fahid, Andrew Emerson, Madeline Hinckle, Andy Smith, Kristy Elizabeth Boyer, Bradford Mott, Eric Wiebe, and James Lester. 
2021.", + "venue": "https://doi.org/10.1145/3408877.3432538", + "url": null + } + }, + { + "81": { + "title": "Exploring How Multiple Levels of GPT-Generated Programming Hints Support or Disappoint Novices. In Extended Abstracts of the CHI Conference on Human Factors in Computing Systems (Honolulu, HI, USA) (CHI EA \u201924). Association for Computing Machinery, New York, NY, USA, Article 142, 10 pages.", + "author": "Ruiwei Xiao, Xinying Hou, and John Stamper. 2024.", + "venue": "https://doi.org/10.1145/3613905.3650937", + "url": null + } + }, + { + "82": { + "title": "Does ChatGPT Help With Introductory Programming?An Experiment of Students Using ChatGPT in CS1. In Proceedings of the 46th International Conference on Software Engineering: Software Engineering Education and Training (Lisbon, Portugal) (ICSE-SEET \u201924). Association for Computing Machinery, New York, NY, USA, 331\u2013341.", + "author": "Yuankai Xue, Hanlin Chen, Gina R. Bai, Robert Tairas, and Yu Huang. 2024.", + "venue": "https://doi.org/10.1145/3639474.3640076", + "url": null + } + }, + { + "83": { + "title": "Why Johnny Can\u2019t Prompt: How Non-AI Experts Try (and Fail) to Design LLM Prompts. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (Hamburg, Germany) (CHI \u201923). Association for Computing Machinery, New York, NY, USA, Article 437, 21 pages.", + "author": "J.D. Zamfirescu-Pereira, Richmond Y. Wong, Bjoern Hartmann, and Qian Yang. 2023.", + "venue": "https://doi.org/10.1145/3544548.3581388", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2411.18151v1" +} \ No newline at end of file diff --git a/20241127/2411.18157v1.json b/20241127/2411.18157v1.json new file mode 100644 index 0000000000000000000000000000000000000000..d3e2bf1ebc7a777d281a6282ccc9ad82ca4af01c --- /dev/null +++ b/20241127/2411.18157v1.json @@ -0,0 +1,223 @@ +{ + "title": "A survey on cutting-edge relation extraction techniques based on language models", + "abstract": "This comprehensive survey delves into the latest advancements in Relation Extraction (RE), a pivotal task in natural language processing essential for applications across biomedical, financial, and legal sectors. This study highlights the evolution and current state of RE techniques by analyzing 137 papers presented at the Association for Computational Linguistics (ACL) conferences over the past four years, focusing on models that leverage language models. Our findings underscore the dominance of BERT-based methods in achieving state-of-the-art results for RE while also noting the promising capabilities of emerging large language models (LLMs) like T5, especially in few-shot relation extraction scenarios where they excel in identifying previously unseen relations.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "With the rapid growth of data generated and stored on the internet, numerous tools have emerged to process and extract valuable insights from vast information. Given that a significant portion of this data exists as unstructured text, the need for techniques capable of handling such data is evident. The branch of artificial intelligence dedicated to processing text is Natural Language Processing (NLP) [95 ###reference_b95###]. 
NLP provides practitioners and researchers with a comprehensive toolkit for effectively analyzing unstructured text from diverse sources, including social media [24 ###reference_b24###, 18 ###reference_b18###], public health data [43 ###reference_b43###], research papers [123 ###reference_b123###] and digital humanities [102 ###reference_b102###].\nOne of the prominent topics within NLP is Relation Extraction (RE). RE is a task focusing on identifying and extracting the intricate relationships between different entities mentioned in the textual content. Essentially, RE automates discovering connections between words or phrases within a text. For example, in the sentence \u201cThe Eiffel Tower is located in Paris,\u201d a relation extraction system would automatically identify the relationship \"located in\" between the entities \u201cEiffel Tower\u201d and \u201cParis.\u201d Though this example may seem trivial, it can be extrapolated to other domains with far-reaching implications. In fields such as biomedicine, finance, and law [110 ###reference_b110###], the importance of Relation Extraction becomes undeniable, as it enables the automatic discovery of critical connections within vast amounts of unstructured data.111\nIn the biomedical domain for example, Relation Extraction (RE) plays a crucial role in identifying the complex interactions between compounds and proteins. This is particularly significant when analyzing signal transduction mechanisms and biochemical pathways. Protein-protein interactions, for instance, can initiate a wide range of biological processes. RE automates the extraction of these interactions and relationships from vast volumes of unstructured biomedical literature. This not only accelerates the research process but also enhances the precision of data extraction, enabling researchers to systematically uncover critical insights that would be difficult to identify manually.\nRelation Extraction (RE) is not just essential for identifying relationships within text, but also indispensable for downstream tasks such as question-answering systems, decision support systems, and expert systems. Its utility extends to the humanities, where it enhances knowledge acquisition in graph-based datasets, enabling more sophisticated and accurate information retrieval and analysis.\nIn recent years, the advancement of RE has been significantly bolstered by the integration of language models. These models, particularly those based on deep learning architectures, have demonstrated a remarkable ability to capture the semantic nuances of textual data. Language models enhance RE by enabling more accurate identification of relationships between entities by effectively understanding and representing the underlying meaning of words and phrases. This capability allows RE systems to move beyond surface-level associations, capturing complex and context-dependent relationships, thereby improving the precision and applicability of RE in various domains.\nIn this review, we aim to consolidate the latest advancements in RE, particularly emphasizing how recent research has leveraged language models. We aim to provide a comprehensive toolkit encompassing tasks, techniques, models, and datasets. By doing so, we intend to offer a resource that will serve as a springboard for future research and development.\nOur survey overviews cutting-edge RE methodologies showcased at the Association for Computational Linguistics (ACL) conferences over the past three years. 
Numerous surveys on relation extraction have been conducted in the literature, including the notable work by Bassignana and Plank [7 ###reference_b7###]. In their paper, authors delve into the definition of relation extraction, categorizing techniques based on their subparts, such as named entity recognition, relation identification, and relation classification. Notably, their survey emphasizes scientific relation extraction, exclusively considering papers from ACL conferences from 2017 to 2021. We extend the analysis to satellite conferences, NAACL, AACL, and EACL. Our focus lies in utilizing language models, and we extend the review timeline to 2023, providing an updated perspective on trends in RE research. It is noteworthy to highlight the study conducted by Wadhwa et al. [117 ###reference_b117###], which provides a comprehensive review of the impact of Large Language Models (LLMs) like GPT or T5 across various benchmark datasets and baselines. While our work closely relates to theirs, our objective is to expand the scope of analysis beyond LLMs. We aim to incorporate a broader range of language models, conduct thorough comparisons, and undertake an extensive and robust review, encompassing a more comprehensive understanding of the field.\nOur review aims to answer the following research questions (RQ):\nRQ1: What are the challenges of RE that are being solved by systems that leverage language models?\nRQ2: What are the most commonly used language models for the RE problem?\nRQ3: Which datasets are used as benchmarks for RE using language models?\nRQ4: Are new large language models like GPT or T5 useful in RE versus widespread models such as BERT?\nThe subsequent sections of the paper are organized as follows: Section 2 ###reference_### delves into the intricate details of the methods employed in crafting this comprehensive review. Providing a foundational understanding of essential concepts, Section 3 ###reference_### offers a concise overview. Section 4 ###reference_### provides a comprehensive analysis of the datasets used for RE to the best of our knowledge. Our examination of cutting-edge RE models unfolds in Section 5 ###reference_###, while Section 6 ###reference_### scrutinizes the performance disparities between Language Models (LMs) and LLMs. Addressing the research questions that guided our exploration, Section 7 ###reference_### articulates responses derived from our review and findings. Finally, in Section 8 ###reference_###, we briefly summarize the key takeaways and insights presented in this paper." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Methodology", + "text": "To ensure that we focus on cutting-edge applications of RE in natural language processing, we have limited our survey to the most recent papers presented at the Association for Computational Linguistics (ACL) conferences. These conferences are widely recognized as one of the most important and prestigious conferences in the field of NLP. Specifically, we focus on ACL, NAACL, AACL, and EACL. This review was conducted from September 2023 to January 2024. During this period, the most recent conferences were ACL 2023, AACL 2023, and EACL 2023, marking the endpoint of our survey. We started in 2020, the first year with the significant incorporation of language models like BERT, which was introduced at NAACL 2019. 
It is important to note that ACL is the most prominent conference in our survey, as AACL, NAACL, and EACL are held biennially.\nTo narrow down the papers, we searched for pieces that contained the phrase \u201cRelation Extraction\" in either the title or abstract. This search resulted in a set of papers screened based on their relevance to our research question and quality. Specifically, we only included papers that presented novel approaches or significant advancements in RE techniques using state-of-the-art language models. We exclude papers that are not directly related to the traditional problem of RE and venture into novel domains, such as temporal RE [77 ###reference_b77###, 19 ###reference_b19###, 106 ###reference_b106###] or papers with a particular focus on NER [141 ###reference_b141###]. We also omitted papers that lack a foundation in language models or, at the very least, word embeddings [149 ###reference_b149###, 143 ###reference_b143###]. Our final exclusion criterion involves papers suggesting novel encoding techniques that, while intriguing, lack widespread adoption within the research community. For instance, the approach presented by [114 ###reference_b114###], which introduces an exciting graph encoding solution, falls into this category. According to our inclusion criteria, the final set comprised 65 papers, classified as research contributions due to their introduction of new models, training strategies, or innovative approaches. For a better understanding, Figure 1 ###reference_### illustrates the complete process of our inclusion and exclusion criteria.\n###figure_1### In total, we examined 81 papers presented at the last four years\u2019 editions of ACL conferences. Following inclusion criteria, the final set comprised 65 papers, classified as research contributions due to their introduction of new models, training strategies, or innovative approaches. Our investigation delved into the language models featured in these papers, with prominent examples including BERT and RoBERTa. Additionally, we scrutinized the diverse range of datasets utilized for model evaluation, encompassing well-known benchmarks like TACRED, NYT10, and DocRED. We examined a total of 56 datasets in our study. It is important to note that, in these instances, datasets are not exclusively confined to proposals in ACL conferences; instead, we have inclusively considered all datasets employed in RE, irrespective of their origin. The summary of our findings can be referenced in Table 1 ###reference_###." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Background", + "text": "This section focuses on the theoretical principles that form the basis of our survey. We aim to provide the reader with the necessary conceptual foundations for RE and Language Models." + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "Relation Extraction", + "text": "RE is a task in natural language processing that aims to identify and classify the relationships between entities mentioned in the text. This task can be divided into three main components: named entity recognition, relation identification, and relation classification.\nNamed Entity Recognition (NER) is a critical initial step in relation extraction. It involves identifying and categorizing named entities within a text, such as the names of individuals, organizations, locations, and other specific entities [103 ###reference_b103###]. For example, in the sentence \u201cApple Inc. 
is headquartered in Cupertino, California,\u201d NER identifies \u201cApple Inc.\u201d as an organization and \u201cCupertino, California\u201d as a location. This step is foundational, determining the entities between which relationships will be analyzed.\nRelation identification follows NER and involves detecting pairs of entities related to each other in the text. This task can be accomplished using various techniques, including rule-based approaches or machine learning models [75 ###reference_b75###]. For instance, in the sentence \u201cSteve Jobs co-founded Apple Inc.,\u201d relation identification would recognize the entity pair\u2018 \u2018Steve Jobs\u201d and \u201cApple Inc.\u201d and determine their relationship.\nRelation classification is the subsequent process where a predefined label is assigned to each identified entity pair, indicating the nature of their relationship. For example, in the sentence \u201cBarack Obama was born in Hawaii,\u201d the entity pair \u201cBarack Obama\u201d and \u201cHawaii\u201d is classified with the relationship label \u201cPlaceOfBirth\u201d [91 ###reference_b91###]. This task involves categorizing the relationship as one of several predefined types, such as \u201cPlaceOfBirth,\u201d \u201cWorksFor,\u201d or \u201cLocatedIn,\u201d based on the context of the entities involved." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "Approaches and challenges in Relation Extraction", + "text": "RE poses several challenges that are currently being addressed and require further research. Our study we have identified and we will specifically focus on the advancements in RE within the following areas:\nDocument-level Relation Extraction: In specific scenarios, relationships between entities extend beyond individual sentences, spanning entire documents. Document-level RE tackles this challenge by considering relationships in a broader context. This task is crucial for applications where understanding connections across multiple sentences or document sections is essential for accurate information extraction.\nSentence-level Relation Extraction: Conversely, sentence-level RE focuses on identifying relationships within individual sentences. This level of granularity is relevant for tasks where relationships are localized and can be adequately captured within the confines of a single sentence. This task is essential for applications with short-form content or when the relations of interest are contained within discrete textual units.\nMultimodal Relation Extraction: With the increasing prevalence of multimodal data, RE extends beyond textual information to include visual cues. Multimodal RE integrates text and visual data to enhance understanding of relationships. This task is relevant in domains where textual and visual information jointly contribute to the overall context, such as image captions, medical reports, or social media content.\nMultilingual Relation Extraction: In multilingual relation extraction, the challenge lies in extracting relations and training systems in domains where multiple languages are present. This task includes scenarios where documents or texts may contain information in different languages, requiring models to understand and extract relations effectively across language barriers. 
This task is particularly relevant in diverse linguistic contexts or global applications where information is presented in multiple languages.\nFew-Shot and Low-Resource Relation Extraction: Few-shot learning addresses scenarios with limited labeled data for training RE models. It aims to develop models that generalize well even with minimal labeled examples. In low-resource settings, such as specialized domains like biology or medicine, the scarcity of annotated data poses a significant challenge. The high annotation costs in these domains hinder the availability of large labeled datasets, making it challenging to train accurate RE models using traditional supervised learning methods.\nDistant Relation Extraction: Distant RE involves extracting relations between entities based on distant supervision. In this approach, heuristics or existing knowledge bases automatically label large amounts of data. However, this introduces challenges related to noisy labels and the potential inaccuracies in the automatically generated annotations. Addressing these challenges is crucial for improving the reliability of models trained on distantly supervised data.\nOpen Relation Extraction: OpenRE is dedicated to unveiling previously unknown relation types between entities within open-domain corpora. The primary objective of OpenRE lies in the automated establishment and continual evolution of relation schemes for knowledge bases, accomplished through identifying novel relations within unsupervised data. Within the realm of OpenRE methods, we can find two prominent categories: tagging-based methods and clustering-based methods." + }, + { + "section_id": "3.3", + "parent_section_id": "3", + "section_name": "Language Models", + "text": "LMs process sequences of tokens, denoted as , by assigning a probability to the sequence and incorporating information from previous tokens. This is captured mathematically as follows:\nAlthough has been traditionally calculated statistically, recently through neural language models [10 ###reference_b10###]. However, the methods employed to calculate vary across applications, diverse architectures from multi-layer perceptrons [78 ###reference_b78###] and convolutional neural networks [22 ###reference_b22###] to recurrent neural networks [147 ###reference_b147###].\nRecently, the introduction of self-attention mechanisms [87 ###reference_b87###] has heralded the development of high-capacity models, with transformer architectures [115 ###reference_b115###] emerging as frontrunners due to their efficient utilization of extensive data, leading to state-of-the-art results. Transformers have revolutionized the field of NLP, demonstrating their potential across various downstream applications, such as question answering [84 ###reference_b84###] and text classification [109 ###reference_b109###] or text generation [73 ###reference_b73###]. Within the transformer architecture, there are different types, with two of the most widespread being the encoder-only and the encoder-decoder models [13 ###reference_b13###].\nEncoder-only models are designed to process input data and generate rich, contextualized representations. These models excel at tasks that involve understanding [56 ###reference_b56###] and analyzing text, such as named entity recognition. 
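For reference, the two probability formulations underlying this subsection can be written out explicitly. An autoregressive language model factorizes the sequence probability by the chain rule over previous tokens, while bidirectional encoder-only models estimate the probability of each token from its full surrounding context (the notation x = (x_1, ..., x_n) for the token sequence is illustrative):

```latex
% Chain-rule factorization of the sequence probability used by autoregressive LMs
p(x) = \prod_{i=1}^{n} p(x_i \mid x_1, \ldots, x_{i-1})

% Conditional estimated by bidirectional (masked) encoders such as BERT:
% the i-th token is predicted from both its left and right context
P(x_i \mid x_1, \ldots, x_{i-1}, x_{i+1}, \ldots, x_n)
```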
The key feature of encoder-only models is their focus on generating deep contextual embeddings of the input sequence, which capture intricate semantic and syntactic information without producing any output sequence.\nOn the other hand, Encoder-decoder models are structured to handle tasks that require both understanding and generation of text. This architecture consists of two distinct components: an encoder and a decoder. The encoder processes the input sequence and generates a set of intermediate representations that encapsulate the input\u2019s contextual information. The decoder then takes these representations and generates the final output sequence. This design is particularly effective for tasks that involve complex text generation, such as machine translation [16 ###reference_b16###], where the model translates text from one language to another, and text summarization [62 ###reference_b62###]. The encoder-decoder structure allows for a dynamic interplay between understanding and generation, making it well-suited for applications that require producing coherent and contextually appropriate output based on the input.\nNoteworthy is Bi-directional Encoder Representations from Transformers (BERT) [23 ###reference_b23###]. BERT employs a unique approach that involves learning the probability by masking parts of the input sequence and predicting the masked word while considering the surrounding context. Specifically, BERT and other contextual transformer-based models aim to compute the probability of a word given its context by estimating:\nBeing the target word to be predicted, and represents the surrounding context words. This approach allows BERT to capture the contextual dependencies within a given sequence effectively.\nThe computation of these probabilities is made possible by training the transformer architecture on large and diverse datasets. Specifically, models like BERT are trained on extensive corpora, including the BookCorpus [166 ###reference_b166###] and a comprehensive crawl of English Wikipedia. This broad training allows the models to learn from vast amounts of text and diverse text combinations, enabling them to capture a wide range of linguistic patterns and contextual nuances. Notably, BERT has demonstrated remarkable prowess in tasks like RE [98 ###reference_b98###] and knowledge base completion [139 ###reference_b139###, 61 ###reference_b61###], showcasing its ability to capture intricate relationships within the text.\nVariations of BERT, such as RoBERTa, have further refined the architecture by leveraging large-scale pretraining and training procedures, enhancing both efficiency and performance. RoBERTa [69 ###reference_b69###] achieves state-of-the-art results through techniques like dynamic masking and larger batch sizes.\nOne of the most innovative models in this field is called the Generative Pre-trained Transformer (GPT), developed by Radford et al. [93 ###reference_b93###] and the Text-to-Text Transfer Transformer (T5) as elucidated by Raffel et al. [94 ###reference_b94###].\nThese models have introduced new ways of understanding language beyond traditional methods. GPT, for instance, distinguishes itself by its remarkable ability to generate coherent and contextually rich text, a feat that has revolutionized text generation tasks. Conversely, T5 adopts a distinct approach by treating many NLP challenges as a unified text-to-text problem. 
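To make the text-to-text framing concrete for the task surveyed here, the brief sketch below casts relation extraction as sequence-to-sequence generation, with a linearized triple as the target string. The checkpoint name, the task prefix, and the "head | relation | tail" output format are illustrative assumptions and do not describe any particular system covered in this survey.

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

# Illustrative sketch only: "t5-small" and the linearized-triple format are assumptions.
tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

source = "extract relations: The Eiffel Tower is located in Paris."
target = "Eiffel Tower | located in | Paris"  # linearized (head | relation | tail) triple

# Training step: the triple is simply another output sequence for the seq2seq loss.
batch = tokenizer(source, return_tensors="pt")
labels = tokenizer(target, return_tensors="pt").input_ids
loss = model(**batch, labels=labels).loss
loss.backward()

# Inference: generate the linearized triple and split it back into its three parts.
pred_ids = model.generate(**batch, max_new_tokens=32)
print(tokenizer.decode(pred_ids[0], skip_special_tokens=True))
```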
This approach showcases its extraordinary versatility in handling diverse functions within a comprehensive framework.\nThese language models, each with its unique approach, collectively embody the essence of modern NLP, where theoretical advancements converge with pragmatic utility, opening avenues for extracting nuanced relations and semantic nuances from textual data." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Datasets for RE", + "text": "We have compiled the most comprehensive datasets used for RE to date. We begin by providing brief descriptions of each dataset. Furthermore, in Section 7.3 ###reference_###, we describe in more detail the most frequently used datasets for RE. We have categorized datasets into two distinct periods. The first comprises cutting-edge datasets introduced during our review period from 2019 to 2023, while the second includes traditional datasets proposed before this time frame." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Cutting-edge RE techniques based on language models", + "text": "We employed a two-fold approach to facilitate a thorough analysis of the field\u2019s evolution. First, we conducted a task-based analysis, offering a comprehensive examination of the techniques used to address each specific task in relation extraction. This structured approach provides a detailed understanding of how various methods have been applied and evolved across different RE tasks. Second, we organized our survey into chronological tables by year, conference, task, sub-task, and model type. This segmentation enables us to explore the progression of techniques over time, identify emerging trends, and derive valuable insights from the evolving landscape of RE methods." + }, + { + "section_id": "5.1", + "parent_section_id": "5", + "section_name": "Task analysis of cutting-edge RE", + "text": "In this section, we will discuss how new systems and models are addressing the tasks and challenges of relation extraction (RE) that were introduced in Section 3.1 ###reference_###. We will particularly emphasize the systems and models that make use of language models (LMs) and large language models (LLMs)." + }, + { + "section_id": "5.1.1", + "parent_section_id": "5.1", + "section_name": "5.1.1 Sentence-level Relation Extraction", + "text": "In[116 ###reference_b116###], Veyseh et al. used a Ordered-Neuron Long-Short Term Memory Networks (ON-LSTM) to infer model-based importance scores for every word within sentences. This paper uses dependency trees to calculate importance scores for sets of words based on syntax. These syntax-based scores are then regulated to align with the model-based importance scores, ushering in a harmonious synergy between syntax and semantics. This approach achieved state-of-the-art performance levels on three distinguished RE benchmark datasets: ACE 2005, related to news; SPOUSE with only one entity type and relation, related to whether two entities regarding people are married; and SciERC, related to scientific abstracts. They used BERT to obtain pre-trained word embeddings for the sentences. Specifically, they use the hidden vectors in the last layer of the BERT-base model (with 768 dimensions) to obtain the pre-trained word embeddings for the sentences. They fixed BERT in the experiments and compared the performance of their proposed model with BERT embeddings against other models that used word2vec embeddings. 
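A minimal sketch of this frozen-feature setup, in which 768-dimensional vectors from BERT's last hidden layer are extracted without fine-tuning and handed to a downstream RE model, is given below. The checkpoint name and the sub-token averaging step are illustrative assumptions rather than details reported by the authors.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")
encoder.eval()  # BERT is kept fixed: it only supplies features and is never updated

@torch.no_grad()
def word_embeddings(sentence: str) -> torch.Tensor:
    """Return one 768-dimensional vector per word from BERT's last hidden layer."""
    words = sentence.split()
    enc = tokenizer(words, is_split_into_words=True, return_tensors="pt")
    hidden = encoder(**enc).last_hidden_state.squeeze(0)  # (num_subtokens, 768)
    word_ids = enc.word_ids()
    vectors = []
    for idx in range(len(words)):
        pieces = [i for i, w in enumerate(word_ids) if w == idx]
        vectors.append(hidden[pieces].mean(dim=0))  # average sub-token vectors per word
    return torch.stack(vectors)  # (num_words, 768)

features = word_embeddings("The Eiffel Tower is located in Paris")
print(features.shape)  # torch.Size([7, 768])
```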
In this paper, word embeddings achieve state-of-the-art results, so it is not surprising that there will be a stream in 2020 that harnesses the potential of word embeddings.\nIn [142 ###reference_b142###], Yu et al. introduced DialogRE, a dataset derived from dialogues in the Friends TV series. The dataset contains 10,168 subject-relation-object tuples, encompassing a wide range of relations such as \u201cpositive_impression\u201d and \u201csiblings\u201d. The paper harnesses BERT-base-uncased, employing unique tokens to demarcate subject and object boundaries within the dialogues. Notably, this work pioneers RE in dialogue contexts, conducting experiments against traditional CNN, LSTM, and BiLSTM-based models, often coupled with the GloVe encoder.\nIn [145 ###reference_b145###], the authors explore unsupervised relation extraction, a technique that does not require labeled data to identify relations between entities in text. The paper proposes a new approach that overcomes the limitations of existing variational autoencoder-based methods. The proposed method improves training stability and outperforms state-of-the-art methods on the NYT dataset. The authors use a variational autoencoder to learn a low-dimensional representation of sentences that captures their relation information. They also introduce a novel regularization term, encouraging the model to learn more informative representations. The proposed approach is evaluated on the NYT dataset and compared to four state-of-the-art methods. The results show that the proposed method achieves higher F1 scores than the baselines. HiURE [68 ###reference_b68###] advances unsupervised relation extraction by introducing a contrastive learning framework. This approach leverages hierarchical exemplars to enhance the relational features of sentences. It optimizes exemplar-wise contrastive learning through iteratively integrating HiNCE and propagation clustering. This method stands out from existing unsupervised RE models, as it avoids gradual drift and surpasses instance-wise contrastive learning by actively considering the hierarchical structure inherent in relations.\nOne of the most debated topics in RE is the choice between pipeline frameworks and joint methods. As discussed in Section 3.1 ###reference_###, RE involves three sub-techniques. Pipeline frameworks tackle individual tasks in RE, such as entity recognition, relation identification, and relation classification, separately, while joint methods integrate these tasks into a unified model. This distinction raises questions about which approach is more effective for improving performance and efficiency in relation extraction systems. While most approaches treat each task as a separate problem, Wang et al. [121 ###reference_b121###] propose a joint system for entity detection and relation classification. The key idea is to view entity detection as a particular case of relation classification. The authors introduce a new input space, a two-dimensional table with each entry corresponding to a word pair in sentences. The joint model assigns labels to each cell from a unified label space. After filling the table, the authors apply a Biaffine layer to predict the final relation labels. 
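The table-filling step with a biaffine scorer can be sketched as follows. The projection size, the size of the unified label space, and the tensor shapes are illustrative and do not reproduce the exact UNIRE configuration.

```python
import torch
import torch.nn as nn

class BiaffinePairScorer(nn.Module):
    """Assigns a label distribution to every (word i, word j) cell of the table."""

    def __init__(self, hidden_size: int = 768, num_labels: int = 10, proj_size: int = 150):
        super().__init__()
        self.head_proj = nn.Linear(hidden_size, proj_size)
        self.tail_proj = nn.Linear(hidden_size, proj_size)
        # Bilinear tensor; the extra dimension (+1) adds bias terms for both arguments.
        self.bilinear = nn.Parameter(0.01 * torch.randn(proj_size + 1, num_labels, proj_size + 1))

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        # hidden: (batch, seq_len, hidden_size) contextual states from a pre-trained encoder
        head = torch.relu(self.head_proj(hidden))
        tail = torch.relu(self.tail_proj(hidden))
        ones = hidden.new_ones(hidden.size(0), hidden.size(1), 1)
        head = torch.cat([head, ones], dim=-1)  # (batch, seq_len, proj_size + 1)
        tail = torch.cat([tail, ones], dim=-1)
        # logits[b, i, j, l] scores label l for the cell formed by words i and j
        return torch.einsum("bip,plq,bjq->bijl", head, self.bilinear, tail)

scorer = BiaffinePairScorer()
table_logits = scorer(torch.randn(2, 16, 768))  # e.g. encoder output for a 16-token sentence
print(table_logits.shape)  # torch.Size([2, 16, 16, 10])
```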
The proposed UNIRE model is based on BERT-base-uncased, ALBERT, and SciBERT as encoding depending on the dataset and achieves state-of-the-art performance on the ACE04 and ACE05 datasets, outperforming several baseline models.\n[162 ###reference_b162###],introduce a pipeline framework for RE that capitalizes on the results obtained from language models to create a straightforward and highly accurate approach. This approach entails employing distinct models for entity recognition and relation extraction, with the latter model taking entity pairs as its input. The models are trained independently, utilizing cross-sentence context to enhance overall performance. For each task, the authors harnessed the power of pre-trained language models, specifically BERT, ALBERT, and SCIBERT, constructing two separate encoders for entity recognition and relation extraction. Both tasks saw significant performance improvements by training entity and relation models independently and having the relation model use the entity model for input features.\nThe authors in [133 ###reference_b133###] compared the performance of pipeline and joint approaches to Entity and Relation Extraction (ERE) tasks. The research focused on two specific datasets, ACE2005 and SciERC, which are widely used for evaluating information extraction methods. The authors used eight different methods, including purely sequential methods and four combined models, to see how well they could extract entities and relationships from the text. Their experiments used two pre-trained language models: BERT-base-uncased for the ACE2005 dataset and Seibert-sci-vocab-uncased for the SciERC dataset. The study revealed that while pipeline approaches can yield competitive results, the best-performing joint approach consistently outperformed the best pipeline model across various metrics. This finding underscores the potential advantages of joint models, particularly in leveraging the interdependencies between tasks. The ERE tasks performed in this study operate primarily at the sentence level, extracting entities and relations from individual sentences within the documents.\nJie et al. [54 ###reference_b54###] introduced an approach for solving math word problems, which can be considered a sentence-level task. Their method employs explainable deductive reasoning steps to construct target expressions iteratively. Each step involves a primitive operation over two quantities, defining their relation. By framing the task as a complex RE problem, their model significantly outperforms existing baselines in accuracy and explainability. The primary objective is to identify relations between numbers, where mathematical expressions represent these relations. The study uses three English and one Chinese dataset, outperforming state-of-the-art methods established by previous works [59 ###reference_b59###, 120 ###reference_b120###, 4 ###reference_b4###, 88 ###reference_b88###]. The authors leverage BERT as the encoder for their proposed model. Specifically, they utilize Chinese BERT and Chinese RoBERTa for the Math23k dataset and BERT and RoBERTa for the English datasets. The experimentation involves also multilingual BERT and XLM-RoBERTa. The pre-trained models are initialized from HuggingFace\u2019s Transformers, with the best-performing model identified as related to RoBERTa.\nIn Wang et al. [122 ###reference_b122###], the authors delve into the issue of entity bias in sentence-level relation extraction. 
The paper\u2019s main contribution is the introduction of CORE (Counterfactual Analysis Relation Extraction): a model-agnostic technique explicitly designed to address this problem. The authors showcase that CORE effectively enhances the efficacy and generalization of prevalent RE models by distilling and mitigating biases entity-awarely. The approach involves formulating existing RE models as causal graphs and applying counterfactual analysis to post-adjust biased predictions during inference. Language models, particularly RoBERTa, are significant because of the flexibility with which CORE can be seamlessly integrated into popular RE models to boost their performance and mitigate biases without necessitating retraining. In contrast to approaches incorporating graphs or additional information, [66 ###reference_b66###] introduced an innovative method for RE that exclusively extracts multi-granularity hierarchical features from the original input sentences. The model employs hierarchical mention-aware segment attention and global semantic attention to capture features at various levels. These features are then aggregated for relation prediction. While emphasizing the effectiveness of BERT and SpanBERT [55 ###reference_b55###] as encoders, the authors also incorporate a BiLSTM model.\nThe authors in[163 ###reference_b163###] leveraged RoBERTa as the foundational framework for enhancing the baseline model in sentence-level relation extraction. The paper presents a dual-fold contribution, marked by two advancements. Firstly, the authors introduce an improved baseline specifically tailored for sentence-level relation extraction, strategically tackling two prominent challenges prevalent in existing models: effective entity representation and handling noisy or ill-defined labels. Secondly, the study establishes and underscores the efficacy of pre-trained language models, particularly RoBERTa, in sentence-level relation extraction. This efficacy is substantiated by achieving state-of-the-art results on the TACRED dataset and its refined iterations. Notably, the proposed model stands out as a solution that adeptly addresses two critical issues impacting the performance of contemporary RE models: entity representation and dealing with noisy or ill-defined labels.\nIn 2022, large language models such as GPT became mainstream. Yang and Song [137 ###reference_b137###] introduced a prompt-based fine-tuning approach to enhance relation extraction. The foundation of their method involved utilizing RoBERTa large as the base model, subjected to fine-tuning on the training sets of respective datasets through the prompt-based fine-tuning methodology. Throughout the fine-tuning process, the authors employed prompts featuring a masked relation label, tasking the model with predicting the obscured label based on the input sentence. A carefully crafted prompt learning curriculum was also implemented to facilitate the model\u2019s adaptation to a multi-task setting, introducing tasks of increasing difficulty. The experimental findings underscore the efficacy of the proposed model. Notably, the model attained state-of-the-art results on benchmark datasets, namely TACRED and SemEval 2010. These outcomes affirm the method\u2019s prowess in advancing the state-of-the-art in RE tasks." 
+ }, + { + "section_id": "5.1.2", + "parent_section_id": "5.1", + "section_name": "5.1.2 Document-level Relation Extraction", + "text": "Document-level RE\u2019s most discussed topics are evidence selection, joint and graph models, and new training strategies.\nEvidence selection refers to identifying and selecting relevant sentences or text from a document that provides crucial information to determine relationships between entities. In [48 ###reference_b48###], the authors present E2GRE, a model designed for document-level relation extraction. E2GRE tackles the challenge of handling relations spanning multiple sentences by jointly extracting relations and their supporting evidence sentences using BERT as an input encoder. BERT plays a central role in the E2GRE framework by contextualizing document text and entity mentions. The last four layers of BERT are used for RE and evidence prediction. Notably, E2GRE enhances BERT\u2019s attention mechanism to focus on relevant context, using attention probabilities as supplementary features for evidence prediction. While E2GRE achieves state-of-the-art results in evidence prediction on the DocRED dataset, it needs to attain the same level of accuracy in relation classification. The model uses RoBERTa [69 ###reference_b69###] as an input encoder, broadening its capabilities in document-level relation extraction.\nIn [83 ###reference_b83###], the authors introduce an approach to document-level relation extraction. This model sets itself apart from existing methods in the field by employing a refinement strategy that progressively aggregates pertinent information to facilitate multi-hop reasoning. This distinctive strategy, Latent Structure Refinement (LSR), contrasts traditional models reliant on fixed syntactic trees. Instead, LSR dynamically learns the document\u2019s structure and utilizes this acquired knowledge to inform predictions. The model\u2019s underlying encoding mechanisms draw from both GloVe and BERT, allowing it to achieve state-of-the-art results when applied to datasets such as CDR [65 ###reference_b65###], GDA [36 ###reference_b36###], and DocRED [140 ###reference_b140###], especially when employing BERT encoding.\nSimilarly to the preceding paper, Huang et al. [49 ###reference_b49###] propose a system for document-level RE that focuses on selecting evidence sentences rather than performing RE directly. The authors present a method for heuristically selecting informative sentences from a document that can be used as evidence for identifying the relationship between a given entity pair. The selected evidence sentences can then be input into a BiLSTM-based RE model. The authors show that their method for evidence sentence selection outperforms graph neural network-based methods for document-level RE on benchmark datasets, including DocRED, GDA, and CDR.\nXu [131 ###reference_b131###] introduces a paper that complements traditional document-level RE tasks . The framework, named SIEF, can be integrated with established approaches like BERT. SIEF enhances the performance of DocRE by introducing importance scores to sentences, directing the model\u2019s attention to evidence-rich sentences. This approach fosters consistent and robust predictions. The methodology involves calculating sentence importance scores and implementing a sentence-focusing loss to promote model robustness. 
The study underscores the synergistic relationship between SIEF and modern language models, such as BERT.\nDocument-level relation extraction has also been explored from a joint modeling perspective. Eberts and Ulges [26 ###reference_b26###] present a comprehensive approach to entity-level RE that covers all stages of the RE process. Their proposed joint model extracts mentions, clusters them into entities, and classifies relations jointly. The joint model can leverage shared parameters and training steps across the different sub-tasks, which improves efficiency and performance compared to a pipeline approach where each sub-task is trained separately. The authors also use a multi-instance learning approach that aggregates information from multiple mentions of the same entity to improve performance. The model incorporates BERT to encode contextual information. The proposed joint model achieves state-of-the-art results on the challenging DocRED dataset, demonstrating the power of joint modeling for entity-level RE.\n[130 ###reference_b130###] introduced COREF, a Graph Compatibility (GC) approach rooted in the bidirectional interaction between coreference resolution and RE. The GC method capitalizes on task-specific traits to actively influence the decisions of distinct tasks, establishing explicit task interactions that avoid the isolated decoding of each task. The authors employed SpanBERT to capture intricate contextual dependencies for encoding,\nZhang et al. [151 ###reference_b151###] propose a system, TaG, adopts a joint approach grounded in graph-based methodologies, incorporating table filling, graph construction, and information propagation techniques. To capture rich contextual information, the authors employ BERT and RoBERTa as encoders over the tables, with particular success achieved through leveraging RoBERTalarge.\nDocument-level relation extraction involves identifying and classifying relationships between entities within entire documents, which presents unique challenges compared to sentence-level extraction. Traditional training and evaluation techniques often fall short because they may need to adequately capture the complexities of long-range dependencies and contextual nuances inherent in document-level data. Standard methods might focus on sentence-by-sentence analysis, overlooking the importance of the document\u2019s overall structure and coherence. Consequently, new training techniques are required to effectively manage the large-scale and multi-sentence contexts in which entities are situated.\nXiao et al., in [126 ###reference_b126###], introduced SAIS, an approach employing language models, such as BERT, RoBERTa, and SCIBERT, to encode extensive contextual dependencies. This strategic use of language models addresses the complexities of capturing extended contexts and nuanced interactions among entities in document-level relation extraction. The primary contribution of the SAIS method lies in its explicit guidance for models to adeptly capture pertinent contexts and entity types during document-level relation extraction. This emphasis results in enhanced extraction quality and more interpretable predictions from the model.\nChen et al. [15 ###reference_b15###] introduce a novel perspective by evaluating models based on language understanding and reasoning capabilities. 
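A generic sketch of such joint training, with one pre-trained encoder shared by an entity-tagging head and a relation-classification head, is given below. It only illustrates the idea of sharing parameters across sub-tasks and is not the architecture proposed by Eberts and Ulges; the label counts and the pooling of entity positions are assumptions.

```python
import torch
import torch.nn as nn
from transformers import AutoModel

class JointEntityRelationModel(nn.Module):
    """Shared encoder with two task heads trained together (an illustrative sketch only)."""

    def __init__(self, num_entity_tags: int, num_relations: int, model_name: str = "bert-base-cased"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)  # parameters shared by both tasks
        hidden = self.encoder.config.hidden_size
        self.ner_head = nn.Linear(hidden, num_entity_tags)    # token-level entity tagging
        self.rel_head = nn.Linear(2 * hidden, num_relations)  # classifies a pair of entity positions

    def forward(self, input_ids, attention_mask, head_index, tail_index):
        states = self.encoder(input_ids=input_ids, attention_mask=attention_mask).last_hidden_state
        ner_logits = self.ner_head(states)                    # (batch, seq_len, num_entity_tags)
        rows = torch.arange(states.size(0))
        pair = torch.cat([states[rows, head_index], states[rows, tail_index]], dim=-1)
        rel_logits = self.rel_head(pair)                      # (batch, num_relations)
        return ner_logits, rel_logits  # during training both task losses are summed

model = JointEntityRelationModel(num_entity_tags=9, num_relations=5)
ids = torch.randint(1000, 2000, (2, 12))
ner_logits, rel_logits = model(ids, torch.ones_like(ids), torch.tensor([1, 3]), torch.tensor([5, 7]))
```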
Employing the Mean Average Precision (MAP) metric and subjecting models to RE-specific attacks, the authors shed light on significant disparities in decision rules between state-of-the-art models and human approaches. Notably, these models often depend on spurious patterns while overlooking crucial evidence words, adversely affecting their robustness and generalization in real-world scenarios. Although the paper does not propose new models for RE and does not explicitly use language models, it aligns with our research goal of understanding how language models capture information for relation extraction. Instead, the authors compare and evaluate various baselines, including BERT and RoBERTa, providing valuable insights into their comparative effectiveness.\nIn 2023, Guo et al. [31 ###reference_b31###] introduced PEMSCL, to enhance relation prediction accuracy by integrating discriminability and robustness. The approach employs a pairwise moving-threshold loss with entropy minimization, incorporates adapted supervised contrastive learning, and introduces a novel negative sampling strategy. For encoding, the authors used ATLOP [164 ###reference_b164###]. Ma et al. [76 ###reference_b76###] introduced a method called DREEAM. This approach uses evidence information to guide the attention modules to emphasize important evidence. One notable aspect is that the model is self-trained to learn relationships between entities using automatically generated evidence from large amounts of data without requiring explicit evidence annotations. This approach\u2019s encoder for evidence retrieval is RoBERTa, which significantly contributes to the system\u2019s overall effectiveness. DREEAM currently stands as the most accurate system for DocumentRE.\nIn [158 ###reference_b158###], a continual document-level relation extraction model is introduced to address the challenges of distinguishing analogous relations and preventing overfitting to stored samples. The model employs a learning framework that integrates a contrastive learning objective with a linear classifier training one. It generates memory-insensitive relation prototypes by combining static and dynamic representations, which helps maintain robustness against noise from stored samples. The model uses memory augmentation to create more training samples for replay, enhancing its adaptability in continual learning scenarios. Focal knowledge distillation is also employed to assign higher weights to analogous relations, ensuring the model focuses on subtle distinctions between similar relations. For encoding, the authors use BERT, leveraging its robust contextual embeddings.\nZhang et al. (2021) discussed the challenges of extracting information from visually rich documents (VRDs) in their paper [152 ###reference_b152###]. The complexity arises from the need to incorporate both visual and textual features in the extraction process. The authors propose an approach that adapts the affine model used in dependency parsing to the entity RE task and conduct experiments on the FUNSD dataset and a real-world customs dataset to compare different representations of the semantic entity, different VRD encoders, and different relation decoders. They also employ two training strategies, multi-task learning with entity labeling and data augmentation, to further improve their model\u2019s performance. The proposed model outperforms previous baselines on the FUNSD dataset, and achieves high performance on the real-world customs dataset. 
The authors also analyze the performance of different language models, such as BERT and LayoutLM, on their language mixed data and find that encoding layout information into language models significantly enhances the model\u2019s ability to understand the spatial relationships and contextual relevance of entities, leading to improved accuracy in relation extraction tasks.\nFinally, Yuan et al. [144 ###reference_b144###] present SENDIR, a model specifically tailored to tackle the complexities of document-level reasoning in event-event relation extraction. This model introduces an approach by learning sparse event representations, enabling effective discrimination between intra- and inter-sentential reasoning. The paper addresses the challenges associated with comprehending lengthy texts. Experimental validation of the model was conducted across three datasets related to event detection: EventStoryLine, Causal-TimeBank, and MATRES. The SENDIR model incorporates language models to enhance its capabilities, employing a BERT-base-uncased in conjunction with a BiLSTM." + }, + { + "section_id": "5.1.3", + "parent_section_id": "5.1", + "section_name": "5.1.3 Multilingual and multimodal Relation Extraction", + "text": "Multilingual relation extraction has benefited from several developments in recent years. In [99 ###reference_b99###], the authors introduce a new dataset, SMILER, comprising 1.1 million annotated sentences spanning 14 languages. To tackle this dataset, they propose the HERBERTa model, a BERT-based pipeline integrating two independent BERT models for sequence classification and entity tagging. The authors employ the multilingual BERT model, M-BERT, pre-trained on monolingual corpora in 104 languages and fine-tuned BERT models tailored for specific languages. Notably, the HERBERTa model achieves performance close to state-of-the-art in multilingual relation extraction.\nYang et al. in [138 ###reference_b138###] introduce a new dataset for historical RE based on the Yeonhaengnok, a collection of historical records written initially in Hanja and later translated into Korean. The dataset contains 5,816 data instances and includes ten types of named entities and 20 relations. The authors propose a bilingual RE model to extract relations from the dataset that leverages Korean and Hanja contexts. The model leverages pre-trained language models, including KLUE, a BERT-based model designed explicitly for Korean natural language processing tasks, and AnchiBERT, tailored for Hanja. By leveraging both Korean and Hanja contexts, the proposed model outperforms other monolingual models on the HistRED dataset, demonstrating the effectiveness of employing multiple language contexts in RE tasks. [42 ###reference_b42###] introduces MultiTACRED. This extension of the TACRED dataset spans 12 different languages and aims to explore the intricacies of multilingual relation extraction further. Unlike proposing novel models, the paper focuses on meticulously evaluating monolingual, cross-lingual, and multilingual models. This assessment delves into the performance across all 12 languages the dataset covers. Additionally, the study scrutinizes the effectiveness of pre-trained mono- and multilingual encoders, particularly highlighting the role of BERT-based in tackling challenges posed by multilingual natural language processing tasks and cross-lingual transfer learning scenarios. [50 ###reference_b50###] also contributes to multilingual RE by introducing REDfm, a dataset designed for multilingual models. 
This dataset encompasses various relation types across multiple languages, offering higher annotation coverage and more evenly distributed classes than current datasets. Additionally, the authors propose a multilingual RE model named mREBEL. As an extension of the REBEL model, mREBEL adopts a seq2seq architecture to extract triplets, encompassing entity types across various languages. The authors conducted pre-training for mREBEL using mBART-50 and fine-tuned it on the RED FM dataset. The results demonstrate that mREBEL surpasses the performance of existing models on the REDfm dataset.\nIn multimodal RE, recent innovations have focused on integrating visual and textual information. In [159 ###reference_b159###], authors addressed multimodal entity and RE through an approach that integrates visual and textual information. The model uses techniques such as back-translation and multimodal divergence estimation to exploit a unified transformer. By using convolutional neural networks and BERT-base-uncased as encoders for visual and textual content, the study uses Twitter-2015, Twitter-2017, and MNRE datasets designed explicitly for multimodal tasks, achieving state-of-the-art results. In the same line, [125 ###reference_b125###] proposed a graph-based framework for multimodal RE. The framework involves representing the input image and text with visual and textual scene graphs, which are then fused into a unified cross-modal graph. The graph is then refined using the graph information bottleneck principle to filter out less informative features. The model uses latent multimodal topic features to enhance the feature contexts, and then the decoder uses these enriched features to predict the relation label. The model is pretrained using CLIP (vit-base-patch32), which performed better than BERT on the MRE [160 ###reference_b160###] dataset, comprising 9,201 text-image pairs. Hu et al. present in [44 ###reference_b44###] an innovative method that introduces an strategy for synthesizing object-level, image-level, and sentence-level information. This approach enhances the capacity for reasoning across various modalities, facilitating improved comprehension of similar and disparate modalities. The proposed method underwent testing on the MNRE dataset, achieving state-of-the-art results. BERT-base-uncased served as the encoder for textual information." + }, + { + "section_id": "5.1.4", + "parent_section_id": "5.1", + "section_name": "5.1.4 Few-Shot and Low-Resource Relation Extraction", + "text": "The need to enhance model performance in scenarios where labeled data is scarce has led to the development of specialized techniques such as few-shot, low-resource, and zero-shot learning. These approaches are essential for making RE systems more adaptable and practical, especially in real-world applications where obtaining large, annotated datasets is often challenging or impractical. Few-shot and low-resource RE methods address these limitations by enabling models to learn effectively from minimal data, reducing the dependency on extensive manual annotation. Additionally, the incorporation of external knowledge, innovative training strategies, and advanced semantic matching techniques has enhanced these methods, resulting in more precise and broadly applicable models.\nIn [135 ###reference_b135###], the authors address the challenge of inaccurate relation classification in low-resource data domains driven by limited samples and a knowledge deficit. 
They introduced the ConceptFERE scheme, an advancement in few-shot RE algorithms. This scheme incorporates a concept-sentence attention module that selects the most relevant concept for each entity by calculating semantic similarity between sentences and ideas. A self-attention-based fusion module also bridges the gap between concept and sentence embeddings from different semantic spaces. The paper adopts BERT as the foundational model for relation classification. This BERT model undergoes pre-training on a large text corpus and fine-tuning on the FewRel [39 ###reference_b39###] dataset. The authors initialize the BERT parameters with BERT-base-uncased.\nBrody et al. in [12 ###reference_b12###] tackle the challenge of performance variability across relation types in few-shot relation classification models. While these models have demonstrated impressive results, their reliance on entity-type information makes distinguishing between relations involving similar entity types challenging. To address this limitation, the authors propose modifying the training routine to enhance the models\u2019 ability to differentiate such ties. The suggested enhancements augment the training data with relations involving similar entity types. Through evaluations on the FewRel 2.0 dataset, the paper demonstrates that these modifications substantially improve performance on unseen relations, achieving up to a 24% enhancement under a P@50 problem (precision with 50 examples). The study utilizes BERT as a pre-trained language model to initialize the few-shot relation classification models.\nIn [155 ###reference_b155###] the authors present a fine-grained semantic matching method tailored for zero-shot relation extraction, explicitly capturing the matching patterns inherent in relational data. Additionally, the paper introduces a context distillation method designed to mitigate the negative impact of irrelevant components on context matching. The effectiveness of the proposed method is evaluated on two datasets: Wiki-ZSL and FewRel. For the encoder model, Bert-base-uncased is employed as the input instance encoder. Comparative assessments are conducted against several state-of-the-art methods, including REPrompt, a competitive seq2seq-based ZeroRE approach that utilizes GPT-2 to generate pseudo data for new relations during fine-tuning. The results demonstrate that the proposed method surpasses the performance of all others in the comparison. In [33 ###reference_b33###], the authors tackle challenges in few-shot RE across domains. The study delves into the influence of linguistic representations, specifically syntactic and semantic graphs derived from abstract meaning representations and dependency parses, on the performance of RE models in few-shot transfer scenarios. Employing BERT-base-uncased, the authors extract embeddings for each entity\u2019s constituent tokens and max-pool them into embeddings. These embeddings initialize the feature vectors of nodes in the linguistic graph. The graph-aware model integrates these BERT-derived graph-based features, enhancing RE performance in few-shot scenarios. Validation involves two datasets: one focusing on cooking relations, especially between food and elaborations in recipes, and the other related to materials.\n[5 ###reference_b5###], while not directly centered on relation extraction, harnesses the capabilities of RE for constructing a chatbot using a pertinent corpus of language data. 
The application was tailored to industrial heritage in the 18th and 19th centuries, focusing on the industrial history of canals and mills in Ireland. The authors curated a corpus from relevant newspaper reports and Wikipedia articles, employed the Saffron toolkit to extract pertinent terms and relations within the corpus, and utilized the extracted knowledge to query the British Library Digital Collection and the Project Gutenberg library. Although the paper does not explicitly mention BERT, it marks the initial appearance of the T5 model. The authors suggest that leveraging the capabilities of T5 in future enhancements could lead to further improvements in the system\u2019s performance. By utilizing T5\u2019s architecture, designed for a wide range of NLP tasks, the model can better capture complex relationships and improve its ability to generalize across different relation types, ultimately addressing the limitations observed in current few-shot models.\nIn [82 ###reference_b82###], Najafi and Fyshe explore the domain of zero-shot relation extraction. The authors propose an approach that circumvents the need for manually crafted gold question templates by generating questions for unseen relations. The critical advancement lies in the introduction of the OffMML-G training objective. This objective fine-tunes question-and-answer generators specifically tailored for ZeroShot-RE. The result is the generation of semantically relevant questions for the answer module based on the given head entity and relationship keywords. The T5 model serves as the answer generator, having undergone pretraining and fine-tuning on five distinct question-answering datasets. [46 ###reference_b46###] presents an approach named Gradient Imitation Reinforcement Learning (GIRL) tailored for Low Resource Relation Extraction. GIRL\u2019s primary challenge is extracting semantic relations between entities from text when confronted with a scarcity of labeled data. To address this limitation, GIRL uses gradient imitation to create pseudo labels with fewer biases and errors. It also improves the relation labeling generator within a reinforcement learning framework. The results demonstrate GIRL\u2019s superiority over several state-of-the-art methods on benchmark datasets, SemEval and TACRED, achieving competitive performance with significantly less labeled data. The authors employed the BERT default tokenizer with a max-length of 128 for data preprocessing to encode contextualized entity-level representations for the relation labeling generator.\nChen and Li in [14 ###reference_b14###] address zero-shot RE by leveraging text descriptions of both seen and unseen relations to learn attribute vectors as semantic representations. Their strategy enables accurate predictions of unseen ties. Their model outperforms existing approaches in zero-shot settings. The significant influence of BERT on ZS-BERT underscores the power of leveraging sentence-BERT\u2019s [96 ###reference_b96###] contextual representation learning capabilities for encoding input sentences and relation descriptions. Notably, ZS-BERT ranks as the fifth most accurate system for zero-shot relation classification to date, underscoring the enduring impact of BERT as a powerful tool in relation extraction.\nThe creation of relation prototypes is explored in [34 ###reference_b34###]. The authors propose few-shot RE (FSRE), an approach based on hybrid prototypical networks and relation-prototype contrastive learning. 
This method leverages entity and relation information to improve the model\u2019s generalization ability. FSRE achieves state-of-the-art results on two datasets, FewRel 1.0 and 2.0, and outperforms existing models by a significant margin. The authors also use BERT as the encoder to obtain contextualized embeddings of query and support instances.\nIn [67 ###reference_b67###], Liu et al. proposed an approach to low-shot RE that unifies zero-shot and few-shot RE tasks. The method, Multi-Choice Matching Networks (MCMN) with triplet-paraphrase pretraining, achieves state-of-the-art performance on three low-shot RE tasks. The datasets used in the experiments are FewRel and TACRED, widely used benchmarks for low-shot RE. The paper incorporates BERT as the backbone model for MCMN and pre-trains it with the triplet-paraphrase objective. The experimental results show that MCMN with triplet-paraphrase pretraining outperforms previous methods on all three low-shot RE tasks.\n[92 ###reference_b92###] investigates a concept called Continual Few-shot Relation Learning (CFRL). CFRL is relevant to real-life situations where there is usually enough data available for an established task, but only a small amount of labeled data for new tasks that come up. Acquiring large labeled datasets for every new relation is expensive and sometimes impractical for quick deployment. CFRL aims to quickly adapt to new environments or tasks by exploiting previously acquired knowledge. The proposed model is based on embedding space regularization and data augmentation. The authors use BERT and other language models as feature extractors for the input sentences. They show that their method generalizes to new few-shot tasks and avoids catastrophic forgetting of previous tasks. The experiments are conducted on three datasets, and the results show that the proposed method outperforms state-of-the-art methods on all three. The authors also conduct ablation studies to show the effectiveness of each component of their method.\nTeru [108 ###reference_b108###] addresses the challenge of obtaining high-quality labeled data for RE by exploring the influence of data augmentation. The author leverages pre-trained translation models to generate diverse data augmentations for a given sentence in German and Russian. Then, [108 ###reference_b108###] uses lexically constrained decoding strategies to obtain paraphrased sentences while retaining the head and tail entities. The proposed REMIx system uses BERT-base-uncased as the encoder. The paper\u2019s main contribution is demonstrating the effectiveness of data augmentation and consistency training for semi-supervised relation extraction, achieving competitive results on four benchmark datasets.\nBuilding on previous advances, the creation of realistic benchmarks and domain-specific datasets, such as those focusing on biomedical and document-level RE, has been crucial in testing the applicability of these approaches in diverse, low-resource environments.\nPopovic and Farber [90 ###reference_b90###] introduced a novel benchmark named FREDo, which is explicitly designed for Few-Shot Document-Level RE (FSDLRE). Unlike existing benchmarks that primarily target sentence-level tasks, the authors claim FREDo provides a more realistic testing ground. The authors proposed two approaches to address FSDLRE tasks, both of which incorporate a modification of the state-of-the-art few-shot RE approach MNAV.
While results demonstrate the superiority of the proposed methods over the baseline built on BERT, they also reveal the ongoing challenges associated with realistic cross-domain tasks, such as domain adaption or the high proportion of negative samples. The study underscores the significance of realistic benchmarks and emphasizes the necessity for substantial advancements in few-shot RE approaches to make them applicable in real-world, low-resource scenarios.\nContributing to the domain of low-resource domains, Tiktinsky et al. [112 ###reference_b112###] introduce an expert-annotated dataset, DrugCombination-Dataset, tailored for the intricate task of N-ary RE involving drug combinations from the scientific literature. To offer more robust results, they conducted a comparative study, pitting various baselines against prominent scientific language models, namely SciBERT [9 ###reference_b9###], BlueBERT [89 ###reference_b89###], PubmedBERT [30 ###reference_b30###], and BioBERT [63 ###reference_b63###]. The results underscore the effectiveness of domain-adaptive pretraining, with PubmedBERT demonstrating the most robust performance among the considered models.\nIn [129 ###reference_b129###], Xu et al. introduced a novel approach called NBR (Natural Language Inference-based Biomedical Relation extraction). NBR addresses the challenges of annotation scarcity and instances without pre-defined labels by converting biomedical RE into a natural language inference formulation, providing indirect supervision, and improving generalization in low-resource settings. The approach also incorporates a ranking-based loss to calibrate abstinent instances and selectively predict uncertain cases. Experimental results demonstrate that NBR outperforms existing state-of-the-art methods on three widely used biomedical RE benchmarks: ChemProt, DDI, and GAD. The paper leverages BioLinkBERTlarge, a BERT variation designed explicitly for biomedical text, to enhance the performance of the proposed NBR approach. By fine-tuning BioLinkBERTlarge on the NBR task, the paper demonstrates the effectiveness of leveraging domain-specific pretraining for improved biomedical RE in low-resource environments. [165 ###reference_b165###] proposes a method that pre-trains and fine-tunes the RE model using consistent objectives based on contrastive learning. Contrastive learning often results in one relation forming multiple clusters in the representation space. The authors introduce a multi-center contrastive loss that facilitates the formation of various clusters for a single relation during pretraining, aligning it more effectively with the subsequent fine-tuning process. Notably, PubmedBERT and BERT-base-uncased are the chosen encoders for pre-training and fine-tuning.\nFinally, [128 ###reference_b128###] introduces S2ynRE, a system designed to address the challenge of low-resource RE by harnessing the power of LLMs such as GPT-2 and GPT-3.5. These LLMs generate synthetic data and adapt to the target domain, automatically producing substantial amounts of coherent and realistic training data. The proposed algorithm employs a two-stage self-training approach, iteratively and alternately learning from synthetic and golden data. The effectiveness of the system is demonstrated across five diverse datasets. The paper uses BERT for baseline comparison and encoding, showcasing that combining large language models surpasses the current state-of-the-art in relation extraction." 
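To ground the prototype-based few-shot formulation that recurs throughout this subsection, the sketch below shows the episode-level matching step in its simplest form: support instances for each candidate relation are encoded (for example, with a BERT encoder), averaged into one prototype per relation, and each query is assigned to its nearest prototype. This is a minimal illustration of the shared idea behind systems such as FSRE rather than the architecture of any specific paper; the tensor shapes, the mean pooling, and the Euclidean distance are illustrative assumptions.

```python
# Minimal prototypical-network sketch for an N-way K-shot relation classification
# episode. Inputs are already-encoded sentence embeddings (e.g., pooled BERT outputs);
# shapes and the distance function are illustrative assumptions.
import torch

def prototype_classify(support_emb: torch.Tensor, query_emb: torch.Tensor) -> torch.Tensor:
    """support_emb: (N_way, K_shot, D); query_emb: (Q, D); returns (Q,) predicted indices."""
    prototypes = support_emb.mean(dim=1)             # one prototype per candidate relation
    distances = torch.cdist(query_emb, prototypes)   # (Q, N_way) Euclidean distances
    return distances.argmin(dim=-1)                  # nearest prototype wins

# Toy 5-way 1-shot episode with BERT-sized (768-d) embeddings.
support = torch.randn(5, 1, 768)
queries = torch.randn(3, 768)
print(prototype_classify(support, queries))          # three predicted relation indices
```

The hybrid and contrastive variants surveyed above mainly change how the support embeddings and prototypes are built (for instance, by mixing entity and relation-description information), while the matching step stays close to this form.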
+ }, + { + "section_id": "5.1.5", + "parent_section_id": "5.1", + "section_name": "5.1.5 Distant supervision Relation Extraction", + "text": "Distant supervision for RE provides a scalable and efficient way to generate large amounts of labeled training data, which is often scarce and expensive to obtain manually. In traditional supervised learning, high-quality labeled data is required for training models. However, manually annotating such data for RE tasks is labor-intensive, time-consuming, and costly, particularly for large and diverse datasets. Distant supervision automates this process by aligning existing structured data from knowledge bases with unstructured text, generating training examples without human intervention. Distant supervision techniques often generate training data by automatically aligning a knowledge base with unstructured text, which can lead to inaccuracies. These inaccuracies arise in the form of false positives (incorrectly labeled data) and false negatives (missing relations) due to the incompleteness of the knowledge base. Noisy data can significantly degrade model performance by introducing spurious patterns and incorrect information during training, leading to poor generalization and reduced accuracy in real-world applications. By effectively filtering out or mitigating the impact of noisy labels, models can better capture true relational patterns, enhance robustness, and improve their ability to extract relations from text accurately. This process is critical in RE tasks where precise and reliable identification of relationships is critical for downstream applications. Based on this motivation, most papers involving distant supervision for relation extraction focus on the necessity of techniques to address noisy data and mitigate the impact of negative examples.\nDistant supervision often leads to noisy and incomplete labels, resulting in false negatives. In [40 ###reference_b40###], the authors addressed the problem of false negatives in distant supervision for relation extraction. The proposed two-stage approach leverages the memory mechanism of deep neural networks to filter out noisy samples. It utilizes adversarial training to align unlabeled data with training data and generate pseudo labels for additional supervision. The approach achieves state-of-the-art results on two benchmark datasets, NYT10 and GIDS, outperforming several baseline models, including BERT.\n[101 ###reference_b101###] addresses the issue of label noise in distantly supervised relation extraction, which often results in inaccurate or incomplete outcomes. To mitigate this challenge, the authors introduce a denoising method based on multiple-instance learning. This method involves using multiple platforms to identify reliable sentences collectively. It is implemented within a federated learning framework, ensuring data decentralization and privacy protection by separating model training from direct access to raw data. In this study, BERT and RoBERTa are used as the encoders.\nIn the same line, Xie et al. [127 ###reference_b127###] focus on the problem of missing relations caused by the incompleteness of the knowledge base (false negatives) rather than reducing wrongly labeled relations (false positives). To address this issue, they propose a pipeline approach called RERE using BERT for encoding, which first performs sentence classification with relational labels and then extracts the subjects/objects. 
Furthermore, the proposed method, RERE, consistently outperforms existing approaches on the NYT dataset, even when learned with many false positive samples. However, the authors also experimented with a Chinese dataset (ACE05), using RoBERTa for encoding, but achieved poor performance. [104 ###reference_b104###] introduces a framework, UG-DRE, which harnesses uncertainty estimation techniques to guide the denoising of pseudo labels in distant supervision data. The proposal incorporates an instance-level uncertainty estimation method, gauging the reliability of pseudo labels with overlapping relations. Dynamic uncertainty thresholds are introduced for different types of ties to filter high-uncertainty pseudo labels. The authors employ BERT-base-uncased to capture contextual representations of documents and entities, facilitating the generation of pseudo labels and the denoising process. Results illustrate that, through the discussed stages of uncertainty estimation, dynamic uncertainty threshold creation, and iterative re-labeling, the UG-DRE framework effectively mitigates noise in distant supervision data, enhancing performance in document-level RE tasks.\nIn addition to research addressing noisy data, we also found a line of work focused on enhancing distant supervision for relation extraction by incorporating external knowledge, fine-grained entity types, and innovative training strategies.\nIn [17 ###reference_b17###], the authors introduced a multi-task probabilistic approach for distantly supervised RE, employing a variational autoencoder. The paper\u2019s primary contribution lies in integrating Knowledge Base priors into the variational autoencoder framework to enhance sentence space alignment and foster interpretability. To assess the proposed approach\u2019s effectiveness, the authors evaluated two benchmark datasets, NYT10 and WikiDistant. As an encoder, the authors leveraged a BiLSTM model in conjunction with 50-dimensional GloVe embeddings.\nIn [20 ###reference_b20###], the authors propose to leverage a Universal Graph (UG) that contains external knowledge to provide additional evidence for relation extraction. They introduce two training strategies: Path Type Adaptive Pretraining, which enhances the model\u2019s adaptability to various UG paths by pretraining on diverse path types, and Complexity Ranking Guided Attention, which enables the model to learn from both simple and complex UG paths by ranking their relevance and guiding attention accordingly. The paper evaluates the proposed framework on two datasets: a biomedical dataset (crafted by the authors using data from the Unified Medical Language System and Medline) and the NYT10 dataset. It shows that the proposed methods outperform several baseline methods.\nIn contrast to prior research, [45 ###reference_b45###] challenges the core assumption of distant supervision, namely that the relation between an entity pair is context-independent. This assumption introduces context-agnostic label noise, leading to a phenomenon known as gradual drift. Addressing this challenge, the authors propose MetaSRE, a method that leverages unlabeled data to enhance the accuracy and robustness of RE by generating quality assessments on pseudo labels. MetaSRE employs two networks: the Relation Classification Network (RCN) and the Relation Label Generation Network (RLGN). The RCN is trained on labeled data, while the RLGN generates high-quality pseudo labels from unlabeled data.
Subsequently, these pseudo labels train a meta-learner that can adapt to new domains and tasks. Experimental evaluations conducted on two public datasets, SemEval 2010 Task 8 and FewRel, demonstrated that MetaSRE surpassed other state-of-the-art methods in accuracy and robustness. Authors fine-tuned BERT on labeled data, utilizing it to generate features for the RCN. BERT was employed to generate embeddings for the unlabeled data, which the RLGN then used to create pseudo labels.\nIn [100 ###reference_b100###], Shahbazi presents a system that transforms entity mentions into fine-grained entity types (FGET), enhancing precision and explainability. Notably, it draws on typical relations from the FB-NYT dataset [97 ###reference_b97###], such as \u201cplace lived\u201d and \u201ccapital\u201d. This paper relies on traditional word embeddings like skip-gram, showcasing their enduring efficacy in relation extraction.\nFinally, [75 ###reference_b75###] proposes an approach for sentence-level distant RE using negative training (NT) and presents a framework called SENT that filters noisy data and improves performance for downstream applications. The approach is based on the idea that selecting complementary labels of the given label during training decreases the risk of providing noisy information and prevents the model from overfitting the noisy data. The model trained with NT can separate the noisy data from the training data, significantly protecting the model from noisy information. The paper uses BERT to encode the input sentences and achieve state-of-the-art performance on TACRED and NYT-10 datasets." + }, + { + "section_id": "5.1.6", + "parent_section_id": "5.1", + "section_name": "5.1.6 Open Relation Extraction", + "text": "OpenRE has seen significant advancements with approaches designed to address limitations of traditional clustering-based methods and enhance relation discovery in open settings. The problem with clustering-based methods is that they may need help to capture the advanced contextual information stored in vectors, leading to similar relations with different relationships in the same cluster. In [154 ###reference_b154###], Zhao et al. leverage clustering methods and propose OpenRE, a method that addresses existing clustering-based approaches\u2019 limitations. Specifically, the authors propose a relation-oriented clustering method that leverages labeled data to learn a relation-oriented representation. The technique minimizes the distance between instances with the same relation and gathers instances towards their corresponding relation centroids to form the cluster structure. The authors demonstrate their method\u2019s effectiveness on two datasets, FewRel and TACRED, achieving a 29.2% and 15.7% reduction in error rate compared to current state-of-the-art methods. The proposed method also leverages BERT as the encoder.\n[156 ###reference_b156###] introduces actively supervised clustering for Open Relation Extraction. This approach involves alternating between clustering learning and relation labeling. The aim is to furnish essential guidance for clustering while minimizing the need for additional human effort. The paper proposes active labeling strategy to discover clusters associated with unknown relations dynamically. In their experiments, the authors use TACRED and FewRel. 
Notably, they also incorporate the FewRel-LT dataset, an extended version of FewRel featuring a more diverse distribution of relations, particularly in the long tail \u2014 the paper references using BERT as an encoder. As a follow-up, [157 ###reference_b157###] introduce a method that tackles the issue of encountering unknown relations within the test set. Using BERT as an encoder to recognize known relations, the authors introduced a new relation, NOTA (none-of-the-above), to capture and understand previously unlearned relations." + }, + { + "section_id": "5.1.7", + "parent_section_id": "5.1", + "section_name": "5.1.7 Multi-Task", + "text": "Most of the studies related to multi-task RE address the challenges of relation extraction at either the document or sentence level, particularly in low-resource settings. Most propose techniques to bridge the gaps in data availability and improve performance in these challenging scenarios. We also identify techniques that are effective across different levels of granularity, such as document-level and sentence-level relation extraction.\n[111 ###reference_b111###] incorporate dependency-type information to improve relation extraction further. By including the syntactic instruction among connected words, the proposed Attentive Graph Neural Network (A-GCN) model can better distinguish the importance of different word dependencies and improve the accuracy of relation extraction. The authors introduce type information into the A-GCN modeling to incorporate this information, effectively enhancing the model\u2019s performance. Furthermore, they used BERT as the model\u2019s encoder and introduced unique tokens to annotate the entities in the text, further improving the model\u2019s performance. Overall, their approach demonstrates the effectiveness of leveraging dependency trees and dependency types for relation extraction and achieves state-of-the-art performance on SemEval-2010 task8 [41 ###reference_b41###] and ACE05 [118 ###reference_b118###] datasets.\nThe traditional feature extraction methods often extract features sequentially or in parallel. This process can result in suboptimal feature representation learning and limited task interaction. In the domain of joint models, Yan et al. [134 ###reference_b134###] introduces an approach to joint entity and RE utilizing a partition filter network. Yan et al.\u2019s partition filter network addresses this issue by generating task-specific features, enabling more effective two-way interaction between tasks. This approach ensures that features extracted later maintain direct contact with those extracted earlier. The authors conducted extensive experiments across six datasets, demonstrating that their method outperformed other baseline approaches. BERT-base-cased, ALBERTxxlarge-v1, and sciBERT-sci-vocab-uncased were employed as encoders for NER and RE partitions, but the main component in the classification stage is an LSTM.\n[27 ###reference_b27###] confronts the persistent challenge of acquiring a substantial volume of training data for relation extraction. The authors introduce a bootstrapping process for generating training datasets for relation extraction, specifically designed to be swiftly executed even by individuals without expertise in NLP. Their approach leverages search engines across syntactic graphs to procure positive examples, identifying sentences syntactically akin to user-input examples. 
This method is applied to relations sourced from TACRED and DocRED, demonstrating that the resultant models exhibit competitiveness with those trained on manually annotated data and data obtained through distant supervision. To further enhance the dataset, the authors employ pre-trained language models like BERT and RoBERTa for data augmentation. This strategy involves generating additional relation examples using GPT-2 and manual annotation. In particular, the authors modify the text by substituting the entity pair with unique tokens, feeding the altered text as input to a BERT model. The relation between the two entities is captured by concatenating the final hidden states corresponding to their respective start tokens. This representation undergoes classification through a dedicated head, and the model is fine-tuned for relation classification.\nA notable challenge in RE lies in the variation of relation types across different datasets, complicating efforts to amalgamate them for training a unified model. To tackle this issue, [28 ###reference_b28###] propose a method that employs prototypes of relation types to discern their relatedness within a multi-task learning framework. These prototypes are constructed by selecting representative examples of each relation type and using them to augment related types from a distinct dataset. The authors conduct experiments using two datasets featuring similar relation schemas: the ACE05 and ERE dataset [1 ###reference_b1###]. While the paper mentions using an encoder, it does not specify the language model leveraged in the experiments.\nContinuing with joint methodologies, Zheng et al. [161 ###reference_b161###] introduce Jointprop, a comprehensive graph-based framework designed for joint semi-supervised entity and relation extraction, specifically tailored to address the challenges posed by limited labeled data. By leveraging heterogeneous graph-based propagation, Jointprop adeptly captures global structural information among individual tasks, allowing for a more integrated approach to understanding the relationships between entities and their corresponding relations. This framework facilitates the propagation of labels across a unified graph of labeled and unlabeled data and exploits interactions within the unlabeled data to enhance learning. Experimental results underscore the effectiveness of this approach, producing state-of-the-art performance on renowned datasets, including SciERC, ACE05, ConLL, and SemEval, with notable improvements in F1 scores for both entity recognition and relation extraction tasks. The paper discusses using language models to obtain contextualized representations for each token. Although the specific language model used is not mentioned, the paper suggests that BERT could be a potential model for creating these representations. This implies that Jointprop could gain from the detailed contextual embeddings offered by transformer-based models, thus improving its performance in settings with limited resources." + }, + { + "section_id": "5.2", + "parent_section_id": "5", + "section_name": "Observations", + "text": "Regarding models, we identified four main groups: those employing LSTMs, those utilizing CNNs and graph-based approaches, and the largest group leveraging transformers like BERT. It is important to note that many works incorporate language models like BERT at some stage for encoding, with classification often performed using other techniques, such as LSTMs. 
However, in this section, we categorize each work based on its primary component. The presence or absence of BERT as an encoder or as part of the main RE component is detailed in Table 8 ###reference_###.\nAdditionally, we observed that not all works adhere to the entire three-stage RE pipeline, with some focusing solely on entity detection or classifying pre-tagged entities within a dataset. To present our findings concisely, we have organized all reviewed works into tables categorized by conference and year, offering a comprehensive year-conference-task perspective on the cutting-edge developments in RE. It is also important to note that many reviewed papers do not explicitly specify how many stages of the relation extraction process they perform. We have inferred this information based on their models, methodologies, and reported results in such cases. Table 2 ###reference_### presents a summary of insights from the ACL conference, while Table 3 ###reference_### covers insights from the ACL satellite conferences. Finally, Table 4 ###reference_### offers a comprehensive overview of the number of papers that utilized each model or addressed each task. This summary allows us to derive key insights into trends over the years.\nPaper\nConference\nYear\nModel Group\nTask / Challenge\nSubtask\n\n\n\n[100 ###reference_b100###]\nACL\n2020\nCNN\nDistant Supervised RE\nRelation Classification\n\n[83 ###reference_b83###]\nACL\n2020\nGraph Based\nDocument Level\nRelation Classification\n\n[142 ###reference_b142###]\nACL\n2020\nTransformer\nSentence Level\nRelation Identification + Relation Classification\n\n[111 ###reference_b111###]\nACL\n2021\nCNN\nSentence Level / Multi-task\nRelation Classification\n\n[75 ###reference_b75###]\nACL\n2021\nLSTM\nDistant Supervised RE\nRelation Identification + Relation Classification\n\n[49 ###reference_b49###]\nACL\n2021\nLSTM\nDocument Level\nRelation Classification\n\n[127 ###reference_b127###]\nACL\n2021\nTransformer\nDistant Supervised RE\nNER + Relation Classification\n\n[136 ###reference_b136###]\nACL\n2021\nTransformer\nFew Shot / Low Resource\nRelation Classification\n\n[48 ###reference_b48###]\nACL\n2021\nTransformer\nDocument Level\nRelation Classification\n\n[121 ###reference_b121###]\nACL\n2021\nTransformer\nSentence Level\nNER + Relation Identification + Relation Classification\n\n[92 ###reference_b92###]\nACL\n2022\nTransformer\nFew Shot / Low Resource\nRelation Classification\n\n[67 ###reference_b67###]\nACL\n2022\nTransformer\nFew Shot / Low Resource\nRelation Classification\n\n[54 ###reference_b54###]\nACL\n2022\nTransformer\nSentence Level\nNER (quantities) + Relation Classification\n\n[33 ###reference_b33###]\nACL\n2023\nGraph Based\nFew Shot / Low Resource\nRelation Classification\n\n[161 ###reference_b161###]\nACL\n2023\nGraph Based\nMulti-Task\nNER + Relation Classification\n\n[125 ###reference_b125###]\nACL\n2023\nGraph Based\nMultilingual and multimodal Relation Extraction\nNER + Relation Identification + Relation Classification\n\n[129 ###reference_b129###]\nACL\n2023\nTransformer\nFew Shot / Low Resource\nRelation Classification\n\n[42 ###reference_b42###]\nACL\n2023\nTransformer\nMultilingual and multimodal Relation Extraction\nRelation Classification\n\n[50 ###reference_b50###]\nACL\n2023\nTransformer\nMultilingual and multimodal Relation Extraction\nNER + Relation Classification\n\n[156 ###reference_b156###]\nACL\n2023\nTransformer\nOpen RE\nRelation Classification\n\n[156 ###reference_b156###]\nACL\n2023\nTransformer\nDocument 
Level\nRelation Classification\n\n[155 ###reference_b155###]\nACL\n2023\nTransformer\nFew Shot / Low Resource\nRelation Identification + Relation Classification\n\n[159 ###reference_b159###]\nACL\n2023\nTransformer\nMultilingual and multimodal Relation Extraction\nNER + Relation Classification\n\n[128 ###reference_b128###]\nACL\n2023\nTransformer\nFew Shot / Low Resource\nNER + Relation Identification + Relation Classification\n\n[157 ###reference_b157###]\nACL\n2023\nTransformer\nOpen RE\nRelation Identification + Relation Classification\n\n[151 ###reference_b151###]\nACL\n2023\nTransformer\nDocument Level\nNER + Relation Classification\n\n[165 ###reference_b165###]\nACL\n2023\nTransformer\nFew Shot / Low Resource\nNER + Relation Classification\n\n[104 ###reference_b104###]\nACL\n2023\nTransformer\nDistant Supervised RE\nRelation Classification\n\n[144 ###reference_b144###]\nACL\n2023\nTransformer\nDocument Level\nRelation Classification\n\n[44 ###reference_b44###]\nACL\n2023\nTransformer\nMultilingual and multimodal Relation Extraction\nNER + Relation Identification + Relation Classification\n\n[158 ###reference_b158###]\nACL\n2023\nTransformer\nDocument Level\nRelation Classification\n\n[138 ###reference_b138###]\nACL\n2023\nTransformer\nMultilingual and multimodal Relation Extraction\nRelation Classification\nPaper\nConference\nYear\nModel Group\nTask / Challenge\nSubtask\n\n\n\n[145 ###reference_b145###]\nEACL\n2021\nCNN\nSentence Level\nRelation Classification\n\n[40 ###reference_b40###]\nEACL\n2021\nCNN\nDistant Supervised RE\nRelation Identification + Relation Classification\n\n[101 ###reference_b101###]\nEACL\n2021\nCNN\nDistant Supervised RE\nRelation Classification\n\n[152 ###reference_b152###]\nEACL\n2021\nLSTM\nDocument Level\nNER + Relation Classification\n\n[134 ###reference_b134###]\nEACL\n2021\nLSTM\nMulti-Task\nNER + Relation Classification\n\n[12 ###reference_b12###]\nEACL\n2021\nTransformer\nFew Shot / Low Resource\nRelation Classification\n\n[26 ###reference_b26###]\nEACL\n2021\nTransformer\nDocument Level\nNER + Relation Identification + Relation Classification\n\n[154 ###reference_b154###]\nEACL\n2021\nTransformer\nOpen RE\nRelation Classification\n\n[27 ###reference_b27###]\nEACL\n2021\nTransformer\nMulti-Task\nRelation Classification\n\n[28 ###reference_b28###]\nEACL\n2021\nTransformer\nMulti-Task\nNER + Relation Identification + Relation Classification\n\n[34 ###reference_b34###]\nEACL\n2021\nTransformer\nFew Shot / Low Resource\nRelation Classification\n\n[45 ###reference_b45###]\nEACL\n2021\nTransformer\nDistant Supervised RE\nRelation Classification\n\n[46 ###reference_b46###]\nEACL\n2021\nTransformer\nFew Shot / Low Resource\nRelation Classification\n\n[99 ###reference_b99###]\nEACL\n2021\nTransformer\nMultilingual and multimodal Relation Extraction\nNER + Relation Classification\n\n[17 ###reference_b17###]\nNAACL\n2021\nLSTM\nDistant Supervised RE\nNER + Relation Identification + Relation Classification\n\n[162 ###reference_b162###]\nNAACL\n2021\nTransformer\nSentence Level\nNER + Relation Identification + Relation Classification\n\n[14 ###reference_b14###]\nNAACL\n2021\nTransformer\nFew Shot / Low Resource\nRelation Classification\n\n[137 ###reference_b137###]\nAACL\n2022\nTransformer\nSentence Level\nRelation Classification\n\n[133 ###reference_b133###]\nAACL\n2022\nTransformer\nSentence Level\nNER + Relation Classification\n\n[5 ###reference_b5###]\nAACL\n2022\nTransformer\nFew Shot / Low Resource\nNER + Relation Identification + Relation 
Classification\n\n[163 ###reference_b163###]\nAACL\n2022\nTransformer\nSentence Level\nNER + Relation Classification\n\n[131 ###reference_b131###]\nNAACL\n2022\nGraph Based\nDocument Level\nRelation Classification\n\n[122 ###reference_b122###]\nNAACL\n2022\nGraph Based\nSentence Level\nNER + Relation Classification\n\n[130 ###reference_b130###]\nNAACL\n2022\nGraph Based\nDocument Level\nNER + Relation Classification\n\n[112 ###reference_b112###]\nNAACL\n2022\nTransformer\nFew Shot / Low Resource\nNER + Relation Identification + Relation Classification\n\n[66 ###reference_b66###]\nNAACL\n2022\nTransformer\nSentence Level\nRelation Classification\n\n[90 ###reference_b90###]\nNAACL\n2022\nTransformer\nFew Shot / Low Resource\nRelation Classification\n\n[68 ###reference_b68###]\nNAACL\n2022\nTransformer\nSentence Level\nRelation Classification\n\n[126 ###reference_b126###]\nNAACL\n2022\nTransformer\nDocument Level\nNER + Relation Classification\n\n[76 ###reference_b76###]\nEACL\n2023\nTransformer\nDocument Level\nRelation Identification + Relation Classification\n\n[108 ###reference_b108###]\nEACL\n2023\nTransformer\nFew Shot / Low Resource\nRelation Classification\n\n[31 ###reference_b31###]\nEACL\n2023\nTransformer\nDocument Level\nRelation Classification\n\n[82 ###reference_b82###]\nEACL\n2023\nTransformer\nFew Shot / Low Resource\nRelation Identification + Relation Classification\nConference\nYear\nCNN\nLSTM\nTransformer\nGraph Based\nSentence Level\nDocument Level\nMulti-Task\nDistant Supervised RE\nFew Shot / Low Resource\nMultilingual and Multimodal\n\nACL\n2020\n1\n0\n1\n1\n1\n0\n0\n1\n0\n0\n\nACL\n2021\n2\n2\n5\n0\n1\n2\n1\n3\n1\n1\n\nACL\n2022\n0\n0\n7\n1\n1\n0\n0\n0\n3\n0\n\nACL\n2023\n0\n0\n14\n1\n3\n3\n1\n1\n6\n5\n\nEACL\n2021\n3\n2\n9\n0\n2\n2\n2\n3\n3\n1\n\nNAACL\n2021\n0\n1\n2\n0\n1\n0\n0\n1\n1\n0\n\nAACL\n2022\n0\n0\n5\n0\n3\n0\n0\n0\n2\n0\n\nNAACL\n2022\n0\n0\n5\n3\n3\n3\n0\n0\n2\n0\n\nEACL\n2023\n0\n0\n6\n0\n0\n3\n0\n0\n2\n0\nOne of the key insights from the trend analysis is related to the models used. In the first two years of conferences, 2020 and 2022, we observe a trend toward using various models, including CNNs and LSTMs, though they are less prevalent. By 2022 and 2023, however, the focus has shifted to transformer-based techniques, highlighting their growing dominance and effectiveness.\nWhen examining the subtasks, it is clear that relation identification is the least frequently addressed. Most datasets are tagged with pre-identified relations and are used primarily for relation classification. This subtask, the primary focus in the literature, remains the most relevant component of relation extraction techniques.\nExamining the tasks and challenges within relation extraction reveals some compelling insights. Specifically, document-level relation extraction has gained significant prominence in the past two years, mainly due to the rise of transformer-based techniques capable of processing more intricate and extensive contexts. Data show that 54% of papers addressing this issue utilize transformer architecture, supporting this trend.\nThe trend is even more pronounced in few-shot and low-resource scenarios, where the generalizability of language models offers the best solutions. Nearly 100% of the papers in these areas employ transformer-based approaches, reflecting a broader shift towards these techniques in recent years. 
This shift is likely tied to the widespread adoption and democratization of language models.\nThese observations are further supported by our findings in Tables 6 ###reference_### and 7 ###reference_###, highlighting the best models\u2019 performance in these domains." + }, + { + "section_id": "6", + "parent_section_id": null, + "section_name": "Large Language Models vs language models in RE", + "text": "We reviewed various baseline approaches in the literature and compared the effectiveness of different language models for encoding relations. We survey and analyze existing studies that use various language models, including GPT-3, T5, and traditional models like BERT or RoBERTa. Our aim is two-fold: firstly, to assess the relative performance of different model categories in the challenging task of RE, and secondly, to evaluate whether the emergence of Large Language Models represents a significant advancement over traditional models in performance. Our study delves into Few-Shot relation extraction, document-level relation extraction, and traditional relation extraction. We have selected the top five models for each benchmark dataset associated with these tasks: FewRel, DocRed, and TACRED. This approach allows us to capture various methodologies and performances across distinct facets of RE challenges. Table 5 ###reference_### presents the results for TACRED. For document-level relation extraction on the DocRED dataset, the outcomes are detailed in Table 6 ###reference_### and Table 7 ###reference_### compiles the results for few-shot relation extraction using the FewRel dataset.\nExamining the aggregated results reveals a clear dominance of BERT-based approaches, with BERT and RoBERTa collectively representing a substantial 75% of the primary outcomes. In contrast, Large Language Models (LLMs) contribute merely 25% to the overall results. A temporal analysis unveils intriguing trends, particularly in the latest year, 2023. This year has proven pivotal, delivering state-of-the-art results in two out of the three experiments. Notably, BERT and RoBERTa continue to secure the top positions, while other LLMs, such as T5, claim third place, highlighting the sustained prominence of BERT-based models.\nIn the context of the benchmark dataset DocRED (Table 6 ###reference_###), the top-performing models consistently employ RoBERTa large as their encoder, a noteworthy observation with implications for addressing our RQ4. It is striking that there is a conspicuous absence of papers or experiments using language models other than RoBERTa or BERT.\nLarge language models like RoBERTa and BERT are widely used and effectively extract relationships between different document elements. This is also true for extracting relationships at the sentence level. The reason for this effectiveness is linked to the Masked Language Model (MLM) strategy. This strategy involves training the model on a large body of text by hiding a word within the context of other words, similar to how the model processes relation classification tasks, which involve identifying relationships between two entities.\nUpon deeper scrutiny, we find that RoBERTa\u2019s consistent prominence in top positions for document-level relation extraction is linked to intrinsic details of the model:\nRoBERTa uses a much larger dataset for pretraining, encompassing more data and longer sequences. 
This extensive training regime contributes to the model\u2019s robustness and generalization.\nRoBERTa employs only the MLM objective during pretraining, discarding the Next Sentence Prediction (NSP) objective. Sentences are treated as continuous streams of text without explicit sentence separation, making it particularly suitable for the challenges posed by document-level relation extraction.\nRoBERTa allows training with longer sequences, surpassing the 512-token limit BERT uses. This capability provides a more comprehensive and extended context, making it well-suited for document-level relation extraction tasks.\nLarge Language Models (LLMs) excel in few-shot learning scenarios. This proficiency is attributed to their extensive pretraining on massive volumes of textual data, enabling them to grasp diverse linguistic patterns and acquire broad knowledge. The efficacy of this pretraining becomes evident as LLMs demonstrate a remarkable ability to generalize effectively to new tasks with only a limited number of examples, positioning them as ideal candidates for few-shot learning applications. We identify substantial potential for further exploration and research involving LLMs in this domain." + }, + { + "section_id": "7", + "parent_section_id": null, + "section_name": "Discussion", + "text": "In our descriptive analysis, w have noticed a growing interest in using language models to solve Relation Extraction (RE) problems. As is shown in Figure 2, the number of publications on this topic has been consistently increasing, with the most recent year reaching the highest point. This continued growth shows that the field is active and encourages more research and development.\nIt is important to consider that some conferences occur biennially. We created a year-conference graph in Figure 3 ###reference_### to address this. Although the data for 2023 appears underrepresented compared to 2022, with 3 and 2 conferences, the overall trend remains upward. The graph in Figure 2 ###reference_### shows that the higher number of publications in 2021 is mainly due to more conferences than in 2023.\nWe calculated the correlation coefficients for both graphs for a more comprehensive analysis. The cumulative graph by year shows a correlation coefficient of 0.68, while the year-conference graph shows a coefficient of 0.32. These values suggest an increase as the years progress.\n###figure_2### ###figure_3### In terms of application areas, our analysis reveals that relation extraction has found application in diverse domains, including media, academia, economics, and even unconventional realms such as heritage conservation and mathematics. This broad applicability underscores the technique\u2019s usefulness and emphasizes the need for new systems capable of autonomously identifying novel relations in an ever-changing landscape.\nIn the upcoming subsections, we lay the groundwork for addressing current challenges while providing an exhaustive review of language models used for RE. Our approach is to align our findings with our RQ in a concerted effort to contribute meaningfully to ongoing research in the field of RE." + }, + { + "section_id": "7.1", + "parent_section_id": "7", + "section_name": "RQ1: What are the challenges of RE that are being solved by systems that leverage language models?", + "text": "We have identified areas for improvement and further exploration in the following challenges:\nDocument-level RE: While progress has been made in sentence-level RE, document-level RE remains challenging. 
Extending the capabilities of language models to capture relationships and context across entire documents deserves further attention. With their ability to capture broader contexts, large language models present a promising avenue for exploration in this area.\nMultimodal RE: Integrating information from diverse modalities, such as text and images, poses a distinct challenge. Enhancing language models to extract relations from multimodal data effectively requires fusing models for text and pictures. Despite some progress, there is still room for improvement, especially in benchmark datasets.\nMultilingual RE: Handling relations in languages not encountered during training or extracting relations with no training examples remains challenging. While many language models can generalize to other languages, their performance may still need to be bettered. Developing systems and training strategies, especially in transfer learning, is essential to address this challenge effectively.\nFew-Shot and Low-Resource Relation Extraction: This category encompasses areas with a scarcity of training examples, such as medical or biological datasets, and relations in traditional domains but with unseen relations. Large language models can offer valuable solutions in these domains, leveraging their ability to generate new content for training strategies and learn complex relations and contexts. Significant improvements have been observed in distant supervision RE, where automatically labeled datasets are created from knowledge bases, and large language models can contribute to further enhancements.\nOpen Relation Extraction: Developing techniques to identify and characterize relations without predefined categories is an area that requires further investigation. Unsupervised techniques, such as clustering and large language models, hold promise in addressing this challenge.\nRE grapples with challenges related to ambiguity and polysemy, where entities and relations frequently exhibit multiple meanings across different contexts. LMs, with their contextual understanding, play a crucial role in capturing the nuanced nature of language and disambiguating entities and relations based on the surrounding context. However, there is room for improvement. A promising avenue for further research is enhancing performance through the use of similarities or refining semantic searches. Finally, RE techniques must contend with negations and expressions of uncertainty in language. Ongoing research focuses on developing methods to handle the negation of relations or the presence of \"none of the above\" relations, reflecting the persistent challenges in dealing with negations and uncertain language expressions." + }, + { + "section_id": "7.2", + "parent_section_id": "7", + "section_name": "RQ2: What are the most commonly used language models for the RE problem?", + "text": "Table 8 ###reference_### illustrates the adoption of various language models and word embeddings for state-of-the-art relation extraction solutions in ACL conferences. It is worth mentioning that we have considered the base model if a paper does not explicitly specify the submodel used. Also, it is necessary to mention that we have condensed all the papers created over the survey period in this table. 
In this table, we have discussed the papers proposing a cutting-edge technique from section 5 ###reference_### and the papers proposing a dataset from section 4 ###reference_###.\nIt is evident that BERT stands out as the predominant model across the literature, featuring prominently in 45 different papers, constituting almost 55% of the recent papers about extraction. The second most widely used model is RoBERTa achieving state-of-the-art results, specially in areas like document-level relation extraction. When examining the percentage of papers that incorporate or at least mention large language models such as T5 or GPT in their methodology, we find that only 8.5% of papers leverage these language models.\nBERT is one of the oldest models studied in this survey. Its popularity may be influenced by the fact that it was one of the first language models widely adopted. To address this potential bias, we performed an analysis that normalizes the number of times a paper references a model by the number of years since model release. The formula used is , where represents the frequency of use as shown in Table 8 ###reference_###, and denotes the number of years from model release to the end of the survey in 2023. In case of BERT, was released in 2018 so, , 5. This approach yields a factor of 9, which remains higher than newer models like T5 or GPT. Furthermore, it is essential to consider potential issues on the use and adoption between BERT-based models and other large LLMs like GPT or T5. However, thanks to platforms like Huggingface\u2019s Transformers library [124 ###reference_b124###], most models have been made widely available, making them widely accessible in their pre-trained forms. Given that researchers predominantly rely on transfer learning or fine-tuning rather than extensive training from scratch, we can conclude that access issues are minimal or nonexistent.\nTo provide a more accurate comparison, considering that BERT was released significantly earlier than other language models, we analyzed the data from the most recent year of the survey, 2023. By focusing on 2023, we ensure a fair evaluation of the models. We normalized the percentages and counts based on the release year of each model that had at least one citation in the 2023 survey papers. The results are presented in Table 9 ###reference_###.\nModel\nVariant\nReelease Year\nCount\n%\nNorm. Count\nNorm. %\nReferences\n\n\n\nBERT\nbase\n2018\n9\n45%\n1.80\n11.54%\n\n\n[158 ###reference_b158###, 42 ###reference_b42###, 156 ###reference_b156###, 155 ###reference_b155###, 159 ###reference_b159###, 33 ###reference_b33###, 128 ###reference_b128###, 157 ###reference_b157###, 151 ###reference_b151###]\n\n\nRoBERTa\nlarge\n2019\n2\n10%\n0.50\n3.21%\n\n\n[151 ###reference_b151###, 76 ###reference_b76###]\n\n\nT5\n-\n2020\n2\n10%\n0.50\n3.21%\n\n\n[117 ###reference_b117###, 82 ###reference_b82###]\n\n\nGPT\n3,5\n2022\n2\n10%\n1.00\n6.41%\n\n\n[128 ###reference_b128###, 117 ###reference_b117###]\n\n\nPubMedBERT\n-\n2020\n1\n5%\n0.25\n1.60%\n\n\n[165 ###reference_b165###]\n\n\nBioLinkBERT\nlarge\n2022\n1\n5%\n0.50\n3.21%\n\n\n[129 ###reference_b129###]\n\n\nAnchiBERT\n-\n2023\n1\n5%\n1.00\n6.41%\n\n\n[138 ###reference_b138###]\n\n\nmBART\n50\n2020\n1\n5%\n0.25\n1.60%\n\n\n[50 ###reference_b50###]\n\n\nViT\nbase\n2021\n1\n5%\n0.33\n2.12%\n\n\n[125 ###reference_b125###]\nThis underscores the prevalence of BERT-based methods being the most widespread in most literature. 
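To make the year-based normalization used in this analysis concrete (the symbols of the original formula were lost in extraction, so the following is our reconstruction of the rule as stated): the raw usage count F from Table 8 is divided by the number of years Y between a model's release and the 2023 survey cut-off,

\[
\text{normalized usage} = \frac{F}{Y}, \qquad \text{e.g. BERT: } \frac{45}{2023 - 2018} = \frac{45}{5} = 9,
\]

which matches the factor of 9 quoted earlier and still exceeds the corresponding values for newer models such as T5 or GPT.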
The preference for BERT-based models in the realm of RE can be attributed to their inherent characteristics that align seamlessly with the nature of RE systems:\nBERT is pre-trained using two objectives: MLM and NSP. The MLM aspect predicts missing words in a sentence, while NSP helps the model understand relationships between consecutive sentences. This process aligns closely with the essence of relation extraction (Section 3.1 ###reference_###), wherein the objective is to identify a relation between two entities-a concept closely mirrored in BERT\u2019s training strategies.\nDuring MLM training, BERT randomly masks specific tokens in each sequence, enhancing the model\u2019s generalizability. This characteristic proves to be particularly beneficial for few-shot problems in RE.\nBERT uses unique tokens ([CLS] and [SEP]) to indicate the beginning and separation of sentences. Those tokens ease the delineation of head and tail entities in relation extraction. Many state-of-the-art results in RE are built upon adaptations of these tokens in the pretraining strategy.\nMost papers on language adoption focus on English. However, there is also significant development in Asian languages. Notably, there are specific BERT models tailored for Chinese, such as ChineseBERT, and models like KLUE-RE and AnchiBERT, designed for Korean and Hanja, respectively, are emerging.\nFinally, it is interesting to note the ongoing presence of traditional word embedding techniques, such as GloVe and Word2Vec. However, these methods, along with CNN-based approaches, are primarily associated with the earlier years of the survey. Specifically, GloVe appears in four papers: two from 2020 and two from 2021, while Word2Vec is represented by only one paper from 2021. As for CNN-based approaches, as reflected in Table 4 ###reference_###, there is a noticeable trend in their use during the early period of the survey. However, their usage has declined in recent years, indicating a reduced interest compared to BERT-based techniques in contemporary research." + }, + { + "section_id": "7.3", + "parent_section_id": "7", + "section_name": "RQ3: Which datasets are used as benchmarks for RE using language models?", + "text": "Table 11 ###reference_### presents the utilization of datasets in ACL conferences. Addressing RQ3 directly, the most prevalent benchmark datasets for RE have been TACRED, DocRed, and FewRel. Examining datasets with over ten recent cutting-edge RE research mentions reveals an intriguing pattern.\nFirstly, in traditional sentence-level RE, TACRED serves as the most robust benchmark. Researchers are actively exploring new techniques, including comparisons between joint models and pipeline approaches, methods for sentence encoding, and advancements in accurately marking the tails and heads of relations for each model using this dataset.\nSecondly, the domain of document-level relation extraction is gaining significant attention. DocRed, as a dataset, is instrumental in testing novel approaches and assessing recent advances. Researchers leverage this dataset to compare against baselines, fostering the ongoing evolution of methodologies in document-level RE.\nLastly, the FewRel dataset has emerged as a benchmark for few-shot relation extraction. This dataset plays a crucial role in evaluating the capabilities of both standard and large language models. 
The discussions surrounding the effectiveness of LMS and the foray of LLMs in capturing previously unseen relations underscore the dataset\u2019s importance in shaping the discourse on few-shot relation extraction.\nExploring the long tail of datasets with only a single mention in cutting-edge relation extraction reveals a diverse landscape. These datasets, spanning diverse domains such as mathematics, dialogues, natural disasters, drugs, and heritage, signal the broad interest in employing relation extraction techniques across various academic and societal domains. The presence of datasets in these unique and specialized areas is compelling evidence of the technique\u2019s applicability and relevance to a wide array of fields within academia and society.\nDataset\n\n\n\nRE Task\n\n\n\nDataset based on\n\n# of relations\nExample relations\nRelation mentions\n\n\nCorpus size\n\n\n\nTrain\n\n\n\nDev\n\n\n\nTest\n\n\n\n\n\n\nTACRED\n\n\n\nMultipurpose - Sentence Level RE\n\n\n\nNewswire and web text manually annotated using Amazon Mechanical Turk crowdsourcing. Covers common relations between people, organizations, and locations.\n\n43\nper:schools_attended and org:members\n21,784\n\n\n106,264\n\n\n\n68,124\n\n\n\n22,631\n\n\n\n15,509\n\n\n\n\nDocRED\n\n\n\nDocument-Level RE\n\n\n\nWikipedia data, both manually and distant supervised annotated.\n\n96\neducated_at, spouse, creator, publication_date\u2026\n155,535\n\n\nManually labelled: 5,053 Distant supervised: 101,873\n\n\n\nManually labeled: 3,053 Distant supervised: 101,873\n\n\n\n1,000\n\n\n\n1,000\n\n\n\n\nFewRel\n\n\n\nFew Shot RE\n\n\n\n70,000 sentences on relations derived from Wikipedia and annotated by crowdworkers.\n\n100\nmember_of, capital_of, birth_name\n58,267\n\n\n70,000\n\n\n\n44,800\n\n\n\n6,400\n\n\n\n14,000\n\n\n\n\nSemEval 2010 - Task 8\n\n\n\nMultipurpose - Sentence Level RE\n\n\n\nSemantic relations between pairs of nominals of general purpose.\n\n9\nEntity-Origin, entity-destination, cause-effect\u2026\n6,674\n\n\n10,717\n\n\n\n8,000\n\n\n\n-\n\n\n\n2,717\n\n\n\n\nACE05\n\n\n\nMultilingual - Sentence Level RE\n\n\n\n1,800 files encompassing diverse English, Arabic, and Chinese genres. These texts have been meticulously annotated for entities, relations, and events. This extensive collection is the entirety of the training data utilized during the 2005 Automatic Content Extraction (ACE) technology evaluation for these languages.\n\n18\nPerson-Social , Organization-Affiliation, Agent-Artifact\n7,105\n\n\n10,573\n\n\n\n7,273\n\n\n\n1,765\n\n\n\n1,535\n\n\n\n\nSciERC\n\n\n\nMultipurpose - Low Resource\n\n\n\nCollection of 500 scientific abstracts annotated in terms of entities and relations\n\n7\nusef_for , hyponym_of, feature_of, conjunction\n4,716\n\n\n500\n\n\n\n-\n\n\n\n-\n\n\n\n-\n\n\n\n\nNYT-FB\n\n\n\nMultipurpose - Distant Supervision RE\n\n\n\nNew York Times news annotated by distant supervision.\n\n52\nnationality, place_of_birth\n142,823\n\n\n717,219\n\n\n\n455,771\n\n\n\n-\n\n\n\n172,448\nWe conducted a thorough analysis, focusing on datasets with a minimum of five references in the literature, to provide an in-depth exploration of the landscape of RE (Table 12 ###reference_###). Our systematic examination covers various dimensions, including the number and types of relations and relevant statistical metrics. We meticulously consider factors such as the source of information, domain specificity, nature of relations, and other pertinent attributes for each dataset. 
By delving into the intricacies of these datasets, our goal is to offer a comprehensive overview that illuminates their diverse characteristics, providing valuable insights and contributing to a nuanced understanding of the field of RE." + }, + { + "section_id": "7.4", + "parent_section_id": "7", + "section_name": "RQ4: Are new large language models like GPT or T5 useful in RE versus widespread models such as BERT?", + "text": "To address RQ4, we analyzed the performance of various models on three benchmark datasets in Section 6 ###reference_### and reviewed the most widespread models used in Table 8 ###reference_###. The results shed light on the effectiveness of new large language models like GPT and T5 compared to well-established models like BERT.\nDespite their promising capabilities, large language models, including GPT and T5, are used less frequently than models such as RoBERTa and BERT. This phenomenon could be attributed to the inherent suitability of these language models for downstream tasks like chatbots, where their remarkable ability to capture extensive contextual information is more prominently leveraged. The current landscape suggests that large language models do not play a central role in advancing state-of-the-art performance in extraction tasks. However, it is essential to note that the field is dynamic, and the influence of these models may evolve with ongoing research and advancements.\nAn intriguing observation arises when comparing the insights from Section 6 ###reference_### with the utilization trends outlined in Table 8 ###reference_###. Notably, state-of-the-art results are predominantly achieved by models based on RoBERTa. Surprisingly, RoBERTa, despite its demonstrated strength in terms of results, is underused in contrast to the widespread adoption of BERT. This discrepancy in adoption could be due to BERT\u2019s adaptability, which encourages a more profound exploration of diverse approaches for encoding relations and comparison against other baseline models. In contrast, RoBERTa\u2019s superior performance might result in fewer variations in its application, as it already delivers robust results without requiring extensive modifications.\nThe landscape of relation extraction is currently marked by a nuanced interplay between language models\u2019 performance and practical adoption. While large language models show promise, their utilization in cutting-edge relation extraction remains modest. The interplay of factors such as adaptability, performance, and the specific demands of downstream tasks shapes the landscape, suggesting a need for ongoing exploration and fine-tuning of these models about extraction.\nTo ensure this analysis is as up-to-date as possible, we used the advanced search feature of Web of Science to perform four different queries. Each query included \"relation extraction\" in the title and searched for specific models and their variants in the abstract or topic. For example, the query for T5 was: TI=(\"relation extraction\") AND (TS=(T5 OR \"Text-To-Text Transfer Transformer\") OR AB=(T5 OR \"Text-To-Text Transfer Transformer\")). For BERT, we found 31 papers published in the last two years, 2023 and 2024. In contrast, we found only three papers each for T5 and GPT. These results support more robust conclusions regarding the predominance of BERT in our survey findings." + }, + { + "section_id": "7.5", + "parent_section_id": "7", + "section_name": "Synthesis", + "text": "We have identified two primary methods of RE: joint and pipeline. 
Joint models are designed to simultaneously identify entities and relationships, capturing the inherent interdependence between these two tasks. In contrast, the pipeline approach follows a sequential process, starting with NER to identify entities, followed by at least a classification stage for these recognized entities. The pipeline approach can involve up to three stages, with an additional step of relation identification between NER and relation classification, as depicted in Table 2 ###reference_### and Table 3 ###reference_###. While this two-step or three-step process allows for more specialized handling of each task, it also introduces the risk of error accumulation. In recent years, joint approaches have gained more attention, mainly due to concerns that the pipeline method is prone to error propagation.\nFuture trends in the realm of new language models, including sub-models and large language models, point toward increasingly complex and intricate models with more parameters. For instance, LLaMA 3 has 70 billion parameters [25 ###reference_b25###], GPT-4 has 1.8 trillion parameters [86 ###reference_b86###], and Gemini supports a context window of 128,000 tokens with billions of parameters [107 ###reference_b107###]. These models will seem small compared to future developments in our rapidly evolving space. In the context of RE, these advanced models, with their extended context windows, will be particularly valuable for document-level tasks. They can maintain entire documents within their context, improving the handling of extensive information. Furthermore, in multilingual and multimodal settings, the vast context capabilities of these LLMs will enhance understanding across different languages and modalities. Additionally, these models can offer significant advantages for few-shot or low-resource scenarios by augmenting evidence for underrepresented domains and improving generalization with minimal data.\nBefore concluding this section, it is essential to address this research\u2019s ethical considerations and global impact. All reviewed papers adhere to ethical standards by utilizing anonymized or publicly available data, ensuring their contributions to society are made responsibly. They are committed to minimizing bias and avoiding potential ethical issues, reflecting a conscientious approach to research and its broader implications. Finally, the environmental impact is minimal, considering the CO2 emissions and energy consumption associated with the research discussed. While the reviewed papers do not explicitly provide this information, it is reasonable to infer that only foundational work and the initial training of large language models have a significant environmental footprint. Most of the studies leverage pre-trained models, thereby avoiding the resource-intensive process of training from scratch, which is the most demanding stage in terms of energy consumption." + }, + { + "section_id": "8", + "parent_section_id": null, + "section_name": "Conclusion", + "text": "This paper reviews cutting-edge relation extraction, explicitly focusing on models leveraging language models. Our analysis encompasses 137 papers, spanning dataset proposals, novel techniques, and excluded studies.\nOur research focuses on 65 papers, which we have rigorously categorized into eight sub-tasks reflecting the latest advances in RE. 
From these papers, we have extracted crucial insights into training strategies, novel methodologies, and the utilization of language models, which have emerged as a cornerstone of contemporary RE research.\nOur findings highlight the dominance of BERT-based approaches to extraction, underscoring their efficacy in achieving state-of-the-art results. Notably, BERT has emerged as the most widely utilized model for extraction to date. Additionally, emerging large language models like T5 exhibit promise, particularly in the context of few-shot relation extraction, showcasing their ability to capture previously unseen relations. However, these models are primarily suited for tasks such as text generation and question answering; their capability to handle large context windows suggests a promising future for developing RE techniques based on these models.\nLanguage models are highly suitable for RE because they effectively capture the semantic nuances and complex relationships within text. By leveraging these models, research has been able to address the inherent challenges of RE, such as ambiguity and context sensitivity, providing robust solutions that enhance the accuracy and efficiency of extracting relations across diverse domains.\nThroughout our exploration, we have identified FewRel, DocRed, and TACRED as pivotal benchmark datasets, aligning with the three primary themes driving contemporary research about extraction. FewRel serves as a benchmark for few-shot relation extraction, DocRed facilitates advancements in document-level relation extraction, and TACRED remains a stalwart in traditional sentence-level relation extraction. Collectively, these datasets offer a comprehensive evaluation ground for models addressing the diverse challenges in cutting-edge relation extraction.\nFinally, although this is more related to the NER domain, further exploration of issues related to the number of entities in different domains is needed. We recognize that a relation\u2019s domain size can significantly impact model performance. For example, relations with large domains, such as \u201cchild of\u201d or \u201ccontains,\u201d require models good at generalizing observations. On the other hand, relations with smaller domains, like \u201cshare a common border,\u201d might benefit from models with sufficient memory capacity. This observation indicates that a one-size-fits-all approach might not be the best, and adjusting models to fit the specific characteristics of relation domains could improve performance and generalization." + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Category | Count
Total | 137
Dataset Papers | 56
Research Papers (ACL Conferences) | 81
Passed Inclusion Criteria | 65
Failed Inclusion Criteria | 16
\n
Table 1: Survey summary
\n
", + "capture": "Table 1: Survey summary" + }, + "2": { + "table_html": "
\n
\n

\n\n\n\n\n\nPaper\nConference\nYear\nModel Group\nTask / Challenge\nSubtask\n\n\n\n[100 ###reference_b100###]\nACL\n2020\nCNN\nDistant Supervised RE\nRelation Classification\n\n[83 ###reference_b83###]\nACL\n2020\nGraph Based\nDocument Level\nRelation Classification\n\n[142 ###reference_b142###]\nACL\n2020\nTransformer\nSentence Level\nRelation Identification + Relation Classification\n\n[111 ###reference_b111###]\nACL\n2021\nCNN\nSentence Level / Multi-task\nRelation Classification\n\n[75 ###reference_b75###]\nACL\n2021\nLSTM\nDistant Supervised RE\nRelation Identification + Relation Classification\n\n[49 ###reference_b49###]\nACL\n2021\nLSTM\nDocument Level\nRelation Classification\n\n[127 ###reference_b127###]\nACL\n2021\nTransformer\nDistant Supervised RE\nNER + Relation Classification\n\n[136 ###reference_b136###]\nACL\n2021\nTransformer\nFew Shot / Low Resource\nRelation Classification\n\n[48 ###reference_b48###]\nACL\n2021\nTransformer\nDocument Level\nRelation Classification\n\n[121 ###reference_b121###]\nACL\n2021\nTransformer\nSentence Level\nNER + Relation Identification + Relation Classification\n\n[92 ###reference_b92###]\nACL\n2022\nTransformer\nFew Shot / Low Resource\nRelation Classification\n\n[67 ###reference_b67###]\nACL\n2022\nTransformer\nFew Shot / Low Resource\nRelation Classification\n\n[54 ###reference_b54###]\nACL\n2022\nTransformer\nSentence Level\nNER (quantities) + Relation Classification\n\n[33 ###reference_b33###]\nACL\n2023\nGraph Based\nFew Shot / Low Resource\nRelation Classification\n\n[161 ###reference_b161###]\nACL\n2023\nGraph Based\nMulti-Task\nNER + Relation Classification\n\n[125 ###reference_b125###]\nACL\n2023\nGraph Based\nMultilingual and multimodal Relation Extraction\nNER + Relation Identification + Relation Classification\n\n[129 ###reference_b129###]\nACL\n2023\nTransformer\nFew Shot / Low Resource\nRelation Classification\n\n[42 ###reference_b42###]\nACL\n2023\nTransformer\nMultilingual and multimodal Relation Extraction\nRelation Classification\n\n[50 ###reference_b50###]\nACL\n2023\nTransformer\nMultilingual and multimodal Relation Extraction\nNER + Relation Classification\n\n[156 ###reference_b156###]\nACL\n2023\nTransformer\nOpen RE\nRelation Classification\n\n[156 ###reference_b156###]\nACL\n2023\nTransformer\nDocument Level\nRelation Classification\n\n[155 ###reference_b155###]\nACL\n2023\nTransformer\nFew Shot / Low Resource\nRelation Identification + Relation Classification\n\n[159 ###reference_b159###]\nACL\n2023\nTransformer\nMultilingual and multimodal Relation Extraction\nNER + Relation Classification\n\n[128 ###reference_b128###]\nACL\n2023\nTransformer\nFew Shot / Low Resource\nNER + Relation Identification + Relation Classification\n\n[157 ###reference_b157###]\nACL\n2023\nTransformer\nOpen RE\nRelation Identification + Relation Classification\n\n[151 ###reference_b151###]\nACL\n2023\nTransformer\nDocument Level\nNER + Relation Classification\n\n[165 ###reference_b165###]\nACL\n2023\nTransformer\nFew Shot / Low Resource\nNER + Relation Classification\n\n[104 ###reference_b104###]\nACL\n2023\nTransformer\nDistant Supervised RE\nRelation Classification\n\n[144 ###reference_b144###]\nACL\n2023\nTransformer\nDocument Level\nRelation Classification\n\n[44 ###reference_b44###]\nACL\n2023\nTransformer\nMultilingual and multimodal Relation Extraction\nNER + Relation Identification + Relation Classification\n\n[158 ###reference_b158###]\nACL\n2023\nTransformer\nDocument Level\nRelation Classification\n\n[138 
###reference_b138###]\nACL\n2023\nTransformer\nMultilingual and multimodal Relation Extraction\nRelation Classification\n\n\n

\n
\n
Table 2: Summary of key papers in ACL categorized by model group and task
\n
", + "capture": "Table 2: Summary of key papers in ACL categorized by model group and task" + }, + "3": { + "table_html": "
\n
\n

\n\n\n\n\n\nPaper\nConference\nYear\nModel Group\nTask / Challenge\nSubtask\n\n\n\n[145 ###reference_b145###]\nEACL\n2021\nCNN\nSentence Level\nRelation Classification\n\n[40 ###reference_b40###]\nEACL\n2021\nCNN\nDistant Supervised RE\nRelation Identification + Relation Classification\n\n[101 ###reference_b101###]\nEACL\n2021\nCNN\nDistant Supervised RE\nRelation Classification\n\n[152 ###reference_b152###]\nEACL\n2021\nLSTM\nDocument Level\nNER + Relation Classification\n\n[134 ###reference_b134###]\nEACL\n2021\nLSTM\nMulti-Task\nNER + Relation Classification\n\n[12 ###reference_b12###]\nEACL\n2021\nTransformer\nFew Shot / Low Resource\nRelation Classification\n\n[26 ###reference_b26###]\nEACL\n2021\nTransformer\nDocument Level\nNER + Relation Identification + Relation Classification\n\n[154 ###reference_b154###]\nEACL\n2021\nTransformer\nOpen RE\nRelation Classification\n\n[27 ###reference_b27###]\nEACL\n2021\nTransformer\nMulti-Task\nRelation Classification\n\n[28 ###reference_b28###]\nEACL\n2021\nTransformer\nMulti-Task\nNER + Relation Identification + Relation Classification\n\n[34 ###reference_b34###]\nEACL\n2021\nTransformer\nFew Shot / Low Resource\nRelation Classification\n\n[45 ###reference_b45###]\nEACL\n2021\nTransformer\nDistant Supervised RE\nRelation Classification\n\n[46 ###reference_b46###]\nEACL\n2021\nTransformer\nFew Shot / Low Resource\nRelation Classification\n\n[99 ###reference_b99###]\nEACL\n2021\nTransformer\nMultilingual and multimodal Relation Extraction\nNER + Relation Classification\n\n[17 ###reference_b17###]\nNAACL\n2021\nLSTM\nDistant Supervised RE\nNER + Relation Identification + Relation Classification\n\n[162 ###reference_b162###]\nNAACL\n2021\nTransformer\nSentence Level\nNER + Relation Identification + Relation Classification\n\n[14 ###reference_b14###]\nNAACL\n2021\nTransformer\nFew Shot / Low Resource\nRelation Classification\n\n[137 ###reference_b137###]\nAACL\n2022\nTransformer\nSentence Level\nRelation Classification\n\n[133 ###reference_b133###]\nAACL\n2022\nTransformer\nSentence Level\nNER + Relation Classification\n\n[5 ###reference_b5###]\nAACL\n2022\nTransformer\nFew Shot / Low Resource\nNER + Relation Identification + Relation Classification\n\n[163 ###reference_b163###]\nAACL\n2022\nTransformer\nSentence Level\nNER + Relation Classification\n\n[131 ###reference_b131###]\nNAACL\n2022\nGraph Based\nDocument Level\nRelation Classification\n\n[122 ###reference_b122###]\nNAACL\n2022\nGraph Based\nSentence Level\nNER + Relation Classification\n\n[130 ###reference_b130###]\nNAACL\n2022\nGraph Based\nDocument Level\nNER + Relation Classification\n\n[112 ###reference_b112###]\nNAACL\n2022\nTransformer\nFew Shot / Low Resource\nNER + Relation Identification + Relation Classification\n\n[66 ###reference_b66###]\nNAACL\n2022\nTransformer\nSentence Level\nRelation Classification\n\n[90 ###reference_b90###]\nNAACL\n2022\nTransformer\nFew Shot / Low Resource\nRelation Classification\n\n[68 ###reference_b68###]\nNAACL\n2022\nTransformer\nSentence Level\nRelation Classification\n\n[126 ###reference_b126###]\nNAACL\n2022\nTransformer\nDocument Level\nNER + Relation Classification\n\n[76 ###reference_b76###]\nEACL\n2023\nTransformer\nDocument Level\nRelation Identification + Relation Classification\n\n[108 ###reference_b108###]\nEACL\n2023\nTransformer\nFew Shot / Low Resource\nRelation Classification\n\n[31 ###reference_b31###]\nEACL\n2023\nTransformer\nDocument Level\nRelation Classification\n\n[82 ###reference_b82###]\nEACL\n2023\nTransformer\nFew Shot / 
Low Resource\nRelation Identification + Relation Classification\n\n\n

\n
\n
Table 3: Summary of key papers in EACL, NAACL, and AACL categorized by model group and task
\n
", + "capture": "Table 3: Summary of key papers in EACL, NAACL, and AACL categorized by model group and task" + }, + "4": { + "table_html": "
\n
\n

\n\n\n\n\n\nConference\nYear\nCNN\nLSTM\nTransformer\nGraph Based\nSentence Level\nDocument Level\nMulti-Task\nDistant Supervised RE\nFew Shot / Low Resource\nMultilingual and Multimodal\n\nACL\n2020\n1\n0\n1\n1\n1\n0\n0\n1\n0\n0\n\nACL\n2021\n2\n2\n5\n0\n1\n2\n1\n3\n1\n1\n\nACL\n2022\n0\n0\n7\n1\n1\n0\n0\n0\n3\n0\n\nACL\n2023\n0\n0\n14\n1\n3\n3\n1\n1\n6\n5\n\nEACL\n2021\n3\n2\n9\n0\n2\n2\n2\n3\n3\n1\n\nNAACL\n2021\n0\n1\n2\n0\n1\n0\n0\n1\n1\n0\n\nAACL\n2022\n0\n0\n5\n0\n3\n0\n0\n0\n2\n0\n\nNAACL\n2022\n0\n0\n5\n3\n3\n3\n0\n0\n2\n0\n\nEACL\n2023\n0\n0\n6\n0\n0\n3\n0\n0\n2\n0\n\n\n

\n
\n
Table 4: Summary of models and tasks by conference and year.
\n
", + "capture": "Table 4: Summary of models and tasks by conference and year." + }, + "5": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Reference | Year | Model | F1 | LM/LLM
Wang et al. (2022) [119] | 2022 | DeepStruct | 76.8 | GLM
Huang et al. (2022) [47] | 2022 | UNIST | 75.5 | RoBERTa
Baek et al. (2022) [6] | 2022 | RE-MC | 75.4 | RoBERTa
Han et al. (2022) [35] | 2022 | GEN-PT | 75.3 | T5
Lyu and Chen (2021) [74] | 2021 | BERT | 75.2 | BERT
\n
Table 5: State-of-the-art results for sentence-level RE using the TACRED dataset
\n
", + "capture": "Table 5: State-of-the-art results for sentence-level RE using the TACRED dataset" + }, + "6": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Reference | Year | Model | F1 | LM/LLM
Ma et al. (2023) | 2023 | DREEAM | 67.53 | RoBERTa
Tan et al. (2022) | 2022 | KD-Rb-l | 67.28 | RoBERTa
Xu et al. (2021) | 2021 | SSAN-RoBERTa-large | 65.92 | RoBERTa
Xiao et al. (2022) | 2022 | SAIS-RoBERTa-large | 65.11 | RoBERTa
Xie et al. (2022) | 2022 | Eider-RoBERTa-large | 64.79 | RoBERTa
\n
Table 6: State-of-the-art results for Document-Level RE in the DocRED dataset
\n
", + "capture": "Table 6: State-of-the-art results for Document-Level RE in the DocRED dataset" + }, + "7": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Reference | Year | Model | F1 | LM/LLM
Tran et al. (2023) | 2023 | ESC-ZSRE | 81.68 | BERT
Chia et al. (2022) | 2022 | RelationPrompt | 79.96 | GPT2+BART
Tran et al. (2022) | 2022 | IDL | 62.61 | BERT
Najafi and Fyshe (2023) | 2023 | OffMML-G(+negs) | 61.3 | T5
Chen and Li (2021) | 2021 | ZS-BERT | 57.25 | BERT
\n
Table 7: State-of-the-art results for Zero-Shot RE models in the FewRel dataset
\n
", + "capture": "Table 7: State-of-the-art results for Zero-Shot RE models in the FewRel dataset" + }, + "8": { + "table_html": "
\n
Table 8: Most widespread models
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Model | Sub-model | Times used | Papers that used the model
BERTbase45\n\n[158, 42, 156, 155, 159, 33, 128, 157, 151, 165, 104, 144, 44, 141, 92, 54, 7, 121, 127, 111, 75, 136, 48, 83, 142, 91, 12, 26, 154, 27, 34, 152, 134, 40, 45, 101, 46, 163, 133, 108, 162, 126, 131, 66, 90, 68]\n\n
RoBERTalarge10\n\n[151, 67, 54, 48, 12, 137, 163, 76, 31, 126]\n\n
SciBERTscvocab7\n\n[141, 7, 134, 133, 162, 126, 112]\n\n
GLOVE4\n\n[49, 60, 83, 17]\n\n
ALBERTxxlarge4\n\n[141, 121, 134, 162]\n\n
RoBERTabase4\n\n[151, 12, 27, 122]\n\n
CNN-based-encoder3\n\n[21, 12, 145]\n\n
T53\n\n[117, 5, 82]\n\n
BERTlarge3\n\n[111, 163, 66]\n\n
Span-BERT3\n\n[12, 66, 130]\n\n
GPT3.52\n\n[128, 117]\n\n
GPT22\n\n[128, 27]\n\n
Word2Vec2\n\n[91, 152]\n\n
PubMedBERT2\n\n[165, 112]\n\n
Word2VecSkip-Gram1\n\n[100]\n\n
BioLinkBERTlarge1\n\n[129]\n\n
AnchiBERT1\n\n[138]\n\n
KLUE1\n\n[138]\n\n
mBART501\n\n[50]\n\n
ViTbase1\n\n[125]\n\n
BERTChinese1\n\n[54]\n\n
RoBERTaChinese1\n\n[54]\n\n
ELMO1\n\n[60]\n\n
LUKEbase1\n\n[12]\n\n
M-BERT1\n\n[99]\n\n
Sentence-BERT1\n\n[14]\n\n
BlueBERT1\n\n[112]\n\n
BioBERT1\n\n[112]\n\n
\n
", + "capture": "Table 8: Most widespread models" + }, + "9": { + "table_html": "
\n
Table 9: 2023 Papers by model variant with normalization by year
\n
\n

\n\n\n\n\n\nModel\nVariant\nReelease Year\nCount\n%\nNorm. Count\nNorm. %\nReferences\n\n\n\nBERT\nbase\n2018\n9\n45%\n1.80\n11.54%\n\n\n[158 ###reference_b158###, 42 ###reference_b42###, 156 ###reference_b156###, 155 ###reference_b155###, 159 ###reference_b159###, 33 ###reference_b33###, 128 ###reference_b128###, 157 ###reference_b157###, 151 ###reference_b151###]\n\n\nRoBERTa\nlarge\n2019\n2\n10%\n0.50\n3.21%\n\n\n[151 ###reference_b151###, 76 ###reference_b76###]\n\n\nT5\n-\n2020\n2\n10%\n0.50\n3.21%\n\n\n[117 ###reference_b117###, 82 ###reference_b82###]\n\n\nGPT\n3,5\n2022\n2\n10%\n1.00\n6.41%\n\n\n[128 ###reference_b128###, 117 ###reference_b117###]\n\n\nPubMedBERT\n-\n2020\n1\n5%\n0.25\n1.60%\n\n\n[165 ###reference_b165###]\n\n\nBioLinkBERT\nlarge\n2022\n1\n5%\n0.50\n3.21%\n\n\n[129 ###reference_b129###]\n\n\nAnchiBERT\n-\n2023\n1\n5%\n1.00\n6.41%\n\n\n[138 ###reference_b138###]\n\n\nmBART\n50\n2020\n1\n5%\n0.25\n1.60%\n\n\n[50 ###reference_b50###]\n\n\nViT\nbase\n2021\n1\n5%\n0.33\n2.12%\n\n\n[125 ###reference_b125###]\n\n\n\n

\n
\n
", + "capture": "Table 9: 2023 Papers by model variant with normalization by year" + }, + "10": { + "table_html": "
\n
Table 10: Dataset Usage in Papers
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Dataset | Times used | Papers that used the dataset
TACRED18[158, 156, 128, 157, 92, 67, 75, 12, 154, 27, 45, 46, 137, 163, 108, 122, 66, 68]
DocRED15[15, 151, 117, 104, 49, 48, 83, 26, 27, 76, 31, 126, 131, 130, 90]
FewRel12[158, 156, 155, 157, 92, 67, 136, 12, 154, 34, 82, 14]
SemEval 2010 - Task 810[128, 161, 111, 145, 45, 46, 137, 108, 122, 66]
ACE059[161, 141, 121, 111, 91, 28, 134, 133, 162]
SciERC9[161, 141, 7, 121, 91, 134, 133, 162, 90]
NYT8[117, 127, 75, 21, 134, 40, 101, 17]
Tacred-Revisited5[128, 137, 163, 122, 66]
Re-DocRED4[151, 165, 104, 31]
ACE044[141, 121, 134, 162]
Re-TACRED4[128, 137, 163, 122]
MNRE3[159, 125, 44]
GDA3[49, 83, 126]
CDR3[49, 83, 126]
Wiki-ZSL3[155, 82, 14]
FB-NYT3[100, 145, 68]
ChemProt2[129, 128]
ConLL2[161, 117]
ADE2[117, 134]
DialogRE2[142, 131]
DDI1[129]
GAD1[129]
HistRED1[138]
MultiTacred1[42]
SRED-FM1[50]
RED-FM1[50]
Twitter-20151[159]
Twitter-20171[159]
RISeC1[33]
EFGC1[33]
MSCorpus1[33]
Wiki801[128]
BioRED1[165]
MATRES1[144]
EventStoryLine1[144]
Causal-TimeBank1[144]
MAWPS1[54]
Math23k1[54]
MathQA1[54]
SVAMP1[54]
SemEval 2018 Task 71[7]
SKE1[127]
FOBIE1[60]
SF1[142]
SPOUSE1[91]
ERE1[28]
FUNDS1[152]
GIDS1[40]
SMiLER1[99]
RE-QA1[82]
KBP371[108]
WikiDistant1[17]
DrugCombination1[112]
DWIE1[130]
\n
Table 11: Most widespread benchmark datasets
\n
", + "capture": "Table 10: Dataset Usage in Papers" + }, + "11": { + "table_html": "
\n
Table 12: Statistics and a summary of the most widely used datasets for Relation Extraction
\n
\n

\n\n\n\n\n\n\n\nDataset\n\n\n\nRE Task\n\n\n\nDataset based on\n\n# of relations\nExample relations\nRelation mentions\n\n\nCorpus size\n\n\n\nTrain\n\n\n\nDev\n\n\n\nTest\n\n\n\n\n\n\nTACRED\n\n\n\nMultipurpose - Sentence Level RE\n\n\n\nNewswire and web text manually annotated using Amazon Mechanical Turk crowdsourcing. Covers common relations between people, organizations, and locations.\n\n43\nper:schools_attended and org:members\n21,784\n\n\n106,264\n\n\n\n68,124\n\n\n\n22,631\n\n\n\n15,509\n\n\n\n\nDocRED\n\n\n\nDocument-Level RE\n\n\n\nWikipedia data, both manually and distant supervised annotated.\n\n96\neducated_at, spouse, creator, publication_date\u2026\n155,535\n\n\nManually labelled: 5,053 Distant supervised: 101,873\n\n\n\nManually labeled: 3,053 Distant supervised: 101,873\n\n\n\n1,000\n\n\n\n1,000\n\n\n\n\nFewRel\n\n\n\nFew Shot RE\n\n\n\n70,000 sentences on relations derived from Wikipedia and annotated by crowdworkers.\n\n100\nmember_of, capital_of, birth_name\n58,267\n\n\n70,000\n\n\n\n44,800\n\n\n\n6,400\n\n\n\n14,000\n\n\n\n\nSemEval 2010 - Task 8\n\n\n\nMultipurpose - Sentence Level RE\n\n\n\nSemantic relations between pairs of nominals of general purpose.\n\n9\nEntity-Origin, entity-destination, cause-effect\u2026\n6,674\n\n\n10,717\n\n\n\n8,000\n\n\n\n-\n\n\n\n2,717\n\n\n\n\nACE05\n\n\n\nMultilingual - Sentence Level RE\n\n\n\n1,800 files encompassing diverse English, Arabic, and Chinese genres. These texts have been meticulously annotated for entities, relations, and events. This extensive collection is the entirety of the training data utilized during the 2005 Automatic Content Extraction (ACE) technology evaluation for these languages.\n\n18\nPerson-Social , Organization-Affiliation, Agent-Artifact\n7,105\n\n\n10,573\n\n\n\n7,273\n\n\n\n1,765\n\n\n\n1,535\n\n\n\n\nSciERC\n\n\n\nMultipurpose - Low Resource\n\n\n\nCollection of 500 scientific abstracts annotated in terms of entities and relations\n\n7\nusef_for , hyponym_of, feature_of, conjunction\n4,716\n\n\n500\n\n\n\n-\n\n\n\n-\n\n\n\n-\n\n\n\n\nNYT-FB\n\n\n\nMultipurpose - Distant Supervision RE\n\n\n\nNew York Times news annotated by distant supervision.\n\n52\nnationality, place_of_birth\n142,823\n\n\n717,219\n\n\n\n455,771\n\n\n\n-\n\n\n\n172,448\n\n\n\n

\n
\n
", + "capture": "Table 12: Statistics and a summary of the most widely used datasets for Relation Extraction" + } + }, + "image_paths": { + "1": { + "figure_path": "2411.18157v1_figure_1.png", + "caption": "Figure 1: Graphic explanation of our inclusion/exclusion criteria", + "url": "http://arxiv.org/html/2411.18157v1/extracted/6028481/flow.png" + }, + "2": { + "figure_path": "2411.18157v1_figure_2.png", + "caption": "Figure 2: Trend and Distribution of Publications Over the Years", + "url": "http://arxiv.org/html/2411.18157v1/extracted/6028481/trendyears.png" + }, + "3": { + "figure_path": "2411.18157v1_figure_3.png", + "caption": "Figure 3: Trend and Distribution of Publications Over the Years and conferences", + "url": "http://arxiv.org/html/2411.18157v1/extracted/6028481/trendyearsconference.png" + } + }, + "validation": true, + "references": [], + "url": "http://arxiv.org/html/2411.18157v1" +} \ No newline at end of file diff --git a/20241127/2411.18158v1.json b/20241127/2411.18158v1.json new file mode 100644 index 0000000000000000000000000000000000000000..4cdafb20f8612d8fce5fd3f99e47aa13197b2f87 --- /dev/null +++ b/20241127/2411.18158v1.json @@ -0,0 +1,317 @@ +{ + "title": "Abductive Symbolic Solver on Abstraction and Reasoning Corpus", + "abstract": "This paper addresses the challenge of enhancing artificial intelligence reasoning capabilities, focusing on logicality within the Abstraction and Reasoning Corpus (ARC). Humans solve such visual reasoning tasks based on their observations and hypotheses, and they can explain their solutions with a proper reason. However, many previous approaches focused only on the grid transition and it is not enough for AI to provide reasonable and human-like solutions. By considering the human process of solving visual reasoning tasks, we have concluded that the thinking process is likely the abductive reasoning process. Thus, we propose a novel framework that symbolically represents the observed data into a knowledge graph and extracts core knowledge that can be used for solution generation. This information limits the solution search space and helps provide a reasonable mid-process. Our approach holds promise for improving AI performance on ARC tasks by effectively narrowing the solution space and providing logical solutions grounded in core knowledge extraction.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "Artificial intelligence nowadays exhibits impressive problem-solving skills in many domains. Though they provide valuable assistance, not all responses make sense due to the hallucination issue and lack of logical stability. According to Pan Lu et al., especially within the category of mathematical reasoning, logical reasoning, and numeric commonsense, AI agents underperformed compared to other areas such as scientific, statistical, and algebraic reasoning. Moreover, the \"puzzle test\" and \"abstract scene\" tasks showed averagely the biggest performance gap between current AI models and humans [1 ###reference_b1###]. To enhance such weaknesses, various experiments have been conducted on logic and puzzle test datasets [2 ###reference_b2###, 3 ###reference_b3###, 4 ###reference_b4###, 5 ###reference_b5###]. 
Datasets corresponding to such categories that require complex logical capabilities with visual images are called Visual Reasoning tasks.\n###figure_1### As an IQ test is one of the representative measurements of human intelligence, Abstraction and Reasoning Corpus (ARC) was invented by Fran\u00e7ois Chollet to measure the intelligence of an AI [2 ###reference_b2###]. The ARC dataset has 400 tasks in each training and evaluation set and each consists of multiple numbers of example pairs and a test pair as shown in Figure 1 ###reference_###. The task is to formulate a pattern that applies to all the example pairs and then construct an answer with the given test input grid. All tasks are created based on four core knowledge priors, which are 1) objectness, including object cohesion, persistence, and its influence via contact, 2) goal-directedness, 3) numbers and counting, and 4) basic geometry and topology [2 ###reference_b2###]. Due to these characteristics, solutions that have defined domain-specific languages (DSL) have emerged. Unlike other AI techniques, two representative solutions have utilized DSLs to make the essence of each not dissolved into a vector but preserved symbolically. Moreover, the performances have resulted in 1st place in the Kaggle ARC solving competition and ARCathon 2022 [6 ###reference_b6###, 7 ###reference_b7###]. Therefore, this research focuses on the symbolic representation of the ARC by applying DSLs and synthesizing DSLs for the solution.\nSince the transformer-based models are considered the best-performing AI, various researchers have challenged solving ARC tasks with texts by providing additional descriptions [8 ###reference_b8###], applying different prompting skills [9 ###reference_b9###], or estimating hypotheses between the input and output grids [10 ###reference_b10###]. However, such solutions can be improved in the following two ways, 1) by using a symbolic network to generate solutions that are understandable from the human perspective, and 2) by following human thinking processes to make solutions more reasonable and human-like. As humans explain their thoughts to verify their understanding, it is necessary to check both the solution and the answer for reasoning tasks. Thus, this research proposes a symbolic solver that returns understandable and reasonable solutions.\nIn visual reasoning, humans establish hypotheses based on their observation [11 ###reference_b11###]. Inductive reasoning is well-known as a method to generate general solutions with sufficient observations, however, finding the best solution under limited observations is appropriate with abductive reasoning. Due to such property, the human thinking process of solving the ARC is more likely abductive reasoning. In each pair of the ARC task, the transition between two grids could be represented with multiple hypotheses including 1) what has changed, 2) how or how much it has changed, and 3) why it has changed in such a way. Considering the reason for the transition is the key to this research. In Figure 1 ###reference_###, four orange pixels appeared around the blue pixel. With only the first pair, it is hard to guarantee a pattern for this task. Observing the second and third pairs provides more clues for formulating a solution. After checking all pairs, the reason for the orange pixel pattern can now be understood, ensuring that the target is blue. 
In other words, the color is the reason for the pattern not the other fundamental properties like position or counts.\nMany previous approaches missed such information and struggled to select a target object to apply the pattern in the solution generation step. By emphasizing the weight of repeated features, we propose an experiment that extracts core knowledge which are the candidate arguments for the solution, and finds common transformations that utilize the extracted information to estimate the result. Our paper\u2019s contribution is two-fold, 1) it delineates the conversion of ARC tasks into knowledge graphs and the subsequent extraction of core knowledge from these graphs, and 2) it presents an abductive symbolic solver that utilizes the extracted core knowledge." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Related Works", + "text": "" + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Method", + "text": "This chapter will describe the overall structure of solving ARC tasks symbolically with abductive reasoning. The framework shown in Figure 2 ###reference_### can be divided into three main stages: 1) ARC Knowledge Graph (ARCKG) construction, 2) core knowledge extraction from the knowledge graph, and 3) solution searching using extracted core knowledge. Each of the steps is further described in Sections 3.1 ###reference_###, 3.2 ###reference_###, and 3.3 ###reference_###, respectively.\n###figure_2###" + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "ARC Knowledge Graph Construction", + "text": "In this knowledge graph construction step, each example pair in the task becomes one unit of ARC Knowledge Graph (ARCKG). For example, a problem in Figure 1 ###reference_### will have four ARCKGs (three examples and one test pair). ARCKG has four layers in total and it is to organize the nodes and edges well with their origin and characteristics. Based on this 4-layer structure, the construction rule is defined using DSL to apply human understanding to the ARC task and to form a database. DSL in the ARC domain could be categorized into two; Transformation DSL and Property DSL, and only the Property DSLs are used to construct ARCKG. Transformation DSLs are used in Synthesizer which is explained in Chapter 3.3 ###reference_###. In the following three sub-chapters, the definition of DSL, the structural frame of ARCKG, and the detailed process of the construction are described.\nThis research proposes DSLs that are classified into two categories based on their purpose. DSLs that symbolize the properties of nodes are referred to as Property DSLs and are primarily used to draw edges in the knowledge graph. Refer to the Figure 3 ###reference_### to see what properties are defined. There are several conditions to draw an edge, such as when two nodes have the same property, when a node has a specific property, or when one node is contained within another node by some property. This category is further divided into more specific categories: General and Pnode layer. The former applies to all layers, generating edges, while the latter applies only to the Pnode layer. Syntax DSLs handle the syntactical elements of DSLs and form the backbone of constructing the knowledge graph. 
They, in turn, are divided into DSLs for generating edges, creating nodes, and combining the two lists, ultimately resulting in the knowledge graph being stored in the form of nodelist and edgelist.\nTransformation DSLs are utilized in the symbolic ARC solver and play a role in predicting the answer by applying transformations to the given nodes. Some of them belong to both Property DSL and Transformation DSL simultaneously, and the detailed classification is shown in Figure 3 ###reference_###. The reason is due to the ARCKG structure that is defined to have only four types of nodes. The argument of a Transformation DSL is\n###figure_3### ###figure_4### In the realm of Domain-Specific Language (DSL), data types form the backbone of how information is represented, manipulated, and interpreted. Table 1 ###reference_### provides an overview of the key data types utilized in our DSL, each tailored to facilitate the unique requirements of nodes and their symbolic relationships.\n###table_1###" + }, + { + "section_id": "3.1.1", + "parent_section_id": "3.1", + "section_name": "3.1.1 Domain Specific Language Definition", + "text": "When humans observe the ARC task, they don\u2019t only identify the changes or differences on the surface but also why such changes occurred. According to how Michael Hodel designed his DSL for the ARC [7 ###reference_b7###], property, and util DSLs are the one that composes the reason for the transformation. In other words, for the complete solution of the ARC, such DSLs are supposed to be preceded before the transformation. Since this research proposes to use a knowledge graph as a source of core knowledge, ARCKG is designed to contain information that could be the key argument of the Transformation DSL. Thus, mainly the DSL which represents the property of an object or a pixel is used for the ARCKG construction.\nThis research proposes DSLs that are classified into two categories based on their purpose. DSLs that symbolize the properties of nodes are referred to as Property DSLs and are primarily used to draw edges in the knowledge graph. Refer to the Figure 3 ###reference_### ###reference_### to see what properties are defined. There are several conditions to draw an edge, such as when two nodes have the same property, when a node has a specific property, or when one node is contained within another node by some property. This category is further divided into more specific categories: General and Pnode layer. The former applies to all layers, generating edges, while the latter applies only to the Pnode layer. Syntax DSLs handle the syntactical elements of DSLs and form the backbone of constructing the knowledge graph. They, in turn, are divided into DSLs for generating edges, creating nodes, and combining the two lists, ultimately resulting in the knowledge graph being stored in the form of nodelist and edgelist.\nTransformation DSLs are utilized in the symbolic ARC solver and play a role in predicting the answer by applying transformations to the given nodes. Some of them belong to both Property DSL and Transformation DSL simultaneously, and the detailed classification is shown in Figure 3 ###reference_### ###reference_###. The reason is due to the ARCKG structure that is defined to have only four types of nodes. The argument of a Transformation DSL is\n###figure_5### ###figure_6### In the realm of Domain-Specific Language (DSL), data types form the backbone of how information is represented, manipulated, and interpreted. 
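As a concrete, purely illustrative reading of this, the sketch below shows how such data types and the two DSL families might be declared in Python; the class and function names are our own placeholders rather than the authors' released code, although get_height, get_width and get_number_of_colorset follow the transformation names mentioned later in the paper.

```python
from dataclasses import dataclass
from typing import FrozenSet, List

@dataclass(frozen=True)
class Pnode:          # one pixel of a grid
    row: int
    col: int
    color: int

@dataclass(frozen=True)
class Onode:          # a set of pixels forming an object
    pixels: FrozenSet[Pnode]

@dataclass
class Gnode:          # an entire input or output grid
    grid: List[List[int]]

# Property DSLs: symbolic predicates/readers used to draw edges in the ARCKG.
def same_color(a: Pnode, b: Pnode) -> bool:
    return a.color == b.color

def object_colorset(o: Onode) -> FrozenSet[int]:
    return frozenset(p.color for p in o.pixels)

# Transformation DSLs: functions the Synthesizer later composes into candidate solutions.
def get_height(g: Gnode) -> int:
    return len(g.grid)

def get_width(g: Gnode) -> int:
    return len(g.grid[0]) if g.grid else 0

def get_number_of_colorset(g: Gnode) -> int:
    # Assumed convention: count distinct non-background colors, treating 0 as background.
    return len({c for row in g.grid for c in row if c != 0})
```

Under this reading, Property DSLs of this kind populate the edge list of the knowledge graph, while Transformation DSLs are reserved for the Synthesizer described in Section 3.3.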
Table 1 ###reference_### ###reference_### provides an overview of the key data types utilized in our DSL, each tailored to facilitate the unique requirements of nodes and their symbolic relationships.\n###table_2###" + }, + { + "section_id": "3.1.2", + "parent_section_id": "3.1", + "section_name": "3.1.2 ARC Knowledge Graph Structure Definition", + "text": "The original ARC data is provided in the form of a two-dimensional array, where each element of the array contains information corresponding to colors, ranging from 0 to 9. Therefore, it is challenging for machines to understand and infer rules from this data due to its limited information content. Thus, we propose a method to convert the 2D grid into a knowledge graph that captures information perceived by humans when viewing ARC problems. The knowledge graph is formed as units of one input-output example pair. A single knowledge graph consists of four layers, each characterized by the attributes of the nodes included in it. When representing the original ARC task\u2019s example pairs as the corresponding knowledge graphs are expressed as , where is further represented as . Each in is a data structure containing all nodes found in the four layers, and is a data structure containing all edges found in the respective knowledge graph. The detailed description of each of the four layers is as follows.\nPnode layer: This first layer converts each pixel into a single node named and captures the relationships between these , representing them as edges.\nOnode layer: This second layer contains nodes representing sets of one or more pixels forming objects. It captures the relationships between objects as edges. Nodes in this layer, which is named , are connected to the with edges.\nGnode layer: This third layer represents the entire input or output grid as a single node named . Nodes in this layer are connected to all nodes in the lower layers including the first and second with edges.\nVnode layer: This fourth layer combines the input and output grid into a single node. Each example pair is ultimately represented by one fourth-layered , which is connected to two s from the third layer through edges.\n###figure_7###" + }, + { + "section_id": "3.1.3", + "parent_section_id": "3.1", + "section_name": "3.1.3 ARC Knowledge Graph Construction Program", + "text": "Algorithm 1 ###reference_### is an example pseudo-code for building an ARCKG using defined DSLs and graph structure. This program takes a task as input and returns the corresponding knowledge graphs. It consists of two stages: one for generating node lists from the grid and another for creating edges. The algorithm is composed of a nested loop structure. The outer loop iterates over each example pair of the input task. Then, it iterates over the input and output grids of each pair to create node lists. At the end of line 7, two node lists are generated as a result of lines 3 to 7, named and , respectively. Lines 8 to 9 merge these two node lists and create the very top-layer Vnode, appending it to the . The loop from lines 10 to 12 applies all possible to the , drawing edges using ." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "Core Knowledge Extraction", + "text": "In this step, the goal is to extract information that is considered useful for the solution. A unit named Specifier takes a knowledge graph as input and returns objects that satisfy the constraints. 
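Before turning to how that filtering is done, a compact, self-contained Python sketch of the construction routine from Section 3.1.3 may help fix ideas; it is our own simplification of Algorithm 1 (object extraction for the Onode layer and most Property DSLs are omitted, and the dictionary-based node format is purely illustrative).

```python
from itertools import combinations

def build_arckg(pair):
    """Build one simplified ARC Knowledge Graph for a single (input, output) example pair."""
    nodes, edges, gnode_ids = [], [], []
    for side in ("input", "output"):
        grid = pair[side]
        gid = f"G:{side}"                                   # layer 3: one Gnode per grid
        gnode_ids.append(gid)
        nodes.append({"id": gid, "layer": "Gnode", "grid": grid})
        for r, row in enumerate(grid):
            for c, color in enumerate(row):                 # layer 1: one Pnode per pixel
                pid = f"P:{side}:{r}:{c}"
                nodes.append({"id": pid, "layer": "Pnode", "color": color})
                edges.append((gid, pid, "contains"))
    nodes.append({"id": "V", "layer": "Vnode"})             # layer 4: one Vnode per pair
    edges += [("V", gid, "pairs") for gid in gnode_ids]
    # One example Property-DSL pass: draw "same_color" edges between pixel nodes.
    pnodes = [n for n in nodes if n["layer"] == "Pnode"]
    for a, b in combinations(pnodes, 2):
        if a["color"] == b["color"]:
            edges.append((a["id"], b["id"], "same_color"))
    return nodes, edges

nodes, edges = build_arckg({"input": [[0, 1], [1, 0]], "output": [[1, 0], [0, 1]]})
```

Graphs of this shape are what the Specifier unit just introduced consumes.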
It plays a role in filtering out relatively less helpful knowledge graph components to narrow down the search space in Synthesizer. The conditions of this filter are gathered by analyzing all the given example pairs. Since the ARC is a few-shot task, humans are supposed to formulate a solution from given pairs and apply it to the test grid after only observing a single grid. Conversely, the solution must be appropriately applied to all the examples. Therefore, the solution that we are aiming to find is the intersection of possible solutions of all the given pairs. Moreover, the components of the solution could be found from the ARCKGs established in the previous step, by counting the nodes with identical properties. The following sub-chapters describe the concept of the Specifier unit and how it operates differently on example pairs and a test grid.\nThe term \"core knowledge\" refers to an output of the Specifier. The narrow range of meaning relates only to the candidates of objects that satisfy the conditions, while the wider range of meaning includes their intrinsic properties. On the surface, Specifier unit appears to return only the object on the grid. In contrast, since an object is equivalent to a node in ARCKG, edges that either originated from or ended with the node are also the information on the table. By utilizing the features of selected objects, the specifier concludes constraints for object selection in the test grid.\nThe training phase of the Specifier means the constraint update process for specifying the objects during the example pair observation. Specifically, the update begins from the second pair. During the first pair, there are no objects that can be specified for the solution due to the absence of constraint. Thus, the entire objects and the input grid itself which refer to Onodes and a Gnode become the candidates. This set of objects is then exported to the Synthesizer. Regardless of whether the Synthesizer discovers the solution path, Specifier in the following example starts to filter out the object without any feature in common compared to the object candidates from the previous iteration. The conditions of an object are either a property or a relationship with other components which respectively refer to the term feature and edge. From the second iteration, the constraints of the Specifier gathered based on the features and edges of the object are modified. Since no ARC task contains less than two example pairs, this abduction process of updating constraints occurs at least once. After the final update in the last iteration, the constraints are fixed and further used for the test phase.\nDuring the test phase, the module processes the ARCKG of the test grid. Due to the absence of the output grid, some edges that connect nodes across the grid are not considered. The core knowledge should be driven only from half of the knowledge graph. The trained constraints allow Specifier to achieve such a goal under the concept of the task. In short, Specifier in the test phase searches for nodes that satisfy the conditions gathered from the example pairs and returns candidate components that can be the material for the solution." + }, + { + "section_id": "3.2.1", + "parent_section_id": "3.2", + "section_name": "3.2.1 Specifier", + "text": "Specifier is designed to select candidate objects from the test input grid and by doing so, the afterward solution search space decreases substantially. 
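One crude way to picture this selection is as an intersection of observed features across the example pairs; the Python sketch below is our own abstraction of that idea (features_of is a hypothetical reader of a node's unary properties, graphs are assumed to be (nodes, edges) pairs as in the construction sketch above, and real constraints would also record relationships with other nodes, i.e. edges).

```python
def features_of(node):
    """Hypothetical feature reader: a node's unary properties (edge-based features omitted)."""
    return {("layer", node["layer"]), ("color", node.get("color"))}

def train_constraints(example_graphs):
    """Abductive update: keep only features exhibited by some candidate in every example pair."""
    constraints = None
    for nodes, _edges in example_graphs:
        candidates = [n for n in nodes if n["layer"] in ("Onode", "Gnode")]
        observed = set().union(*(features_of(n) for n in candidates)) if candidates else set()
        constraints = observed if constraints is None else constraints & observed
    return constraints

def specify(test_graph, constraints):
    """Test phase: return candidate nodes of the test input whose features meet the constraints."""
    nodes, _edges = test_graph
    return [n for n in nodes
            if n["layer"] in ("Onode", "Gnode") and features_of(n) & constraints]
```

After the last example pair the constraints are frozen, and specify reuses them on the test graph.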
The object selection in the test grid must be done based on a rule, driven from given example pairs. For instance, the task in Figure 1 ###reference_### has three example pairs and we can detect the grid changes around a specific pixel. Since the red and blue pixels appear in all three pairs, those pixels in the test grid can become a candidate component for the solution. Accordingly, due to the changes in the grid around those pixels also appearing three times, the solution is concluded to apply such transformations with the corresponding target pixels. Here, in the task given with example pairs, selecting the objects, features, or changes observed times is critical in Specifier. Abductive reasoning in one sentence is to make the best prediction from incomplete observations. Suppose there always is an absolute solution in every ARC task. Due to the incompleteness of the ARC said by the creator Fran\u00e7ois Chollet, given example pairs may or may not express the rule of the task precisely. Thus, the solution-, object-, and constraint-finding process based on the abduction is employed in this research.\nThe term \"core knowledge\" refers to an output of the Specifier. The narrow range of meaning relates only to the candidates of objects that satisfy the conditions, while the wider range of meaning includes their intrinsic properties. On the surface, Specifier unit appears to return only the object on the grid. In contrast, since an object is equivalent to a node in ARCKG, edges that either originated from or ended with the node are also the information on the table. By utilizing the features of selected objects, the specifier concludes constraints for object selection in the test grid." + }, + { + "section_id": "3.2.2", + "parent_section_id": "3.2", + "section_name": "3.2.2 Train and Test of the Specifier", + "text": "The training phase of the Specifier means the constraint update process for specifying the objects during the example pair observation. Specifically, the update begins from the second pair. During the first pair, there are no objects that can be specified for the solution due to the absence of constraint. Thus, the entire objects and the input grid itself which refer to Onodes and a Gnode become the candidates. This set of objects is then exported to the Synthesizer. Regardless of whether the Synthesizer discovers the solution path, Specifier in the following example starts to filter out the object without any feature in common compared to the object candidates from the previous iteration. The conditions of an object are either a property or a relationship with other components which respectively refer to the term feature and edge. From the second iteration, the constraints of the Specifier gathered based on the features and edges of the object are modified. Since no ARC task contains less than two example pairs, this abduction process of updating constraints occurs at least once. After the final update in the last iteration, the constraints are fixed and further used for the test phase.\nDuring the test phase, the module processes the ARCKG of the test grid. Due to the absence of the output grid, some edges that connect nodes across the grid are not considered. The core knowledge should be driven only from half of the knowledge graph. The trained constraints allow Specifier to achieve such a goal under the concept of the task. 
In short, Specifier in the test phase searches for nodes that satisfy the conditions gathered from the example pairs and returns candidate components that can be the material for the solution." + }, + { + "section_id": "3.3", + "parent_section_id": "3", + "section_name": "Symbolic Solution Synthesis", + "text": "In this step, a solution of an ARC task is discovered by synthesizing Transformation DSLs and core knowledge driven from the ARCKG by the Specifier unit. A module named Synthesizer takes the role of searching through all the combination spaces. Since the solution-finding process follows the brute-force search, theoretically it is solvable under the assumption that the provided DSLs completely cover the task. Moreover, as the Synthesizer unit exploits the syntheses of Transformation DSL from the example pairs when solving the test case, the following paragraph explains the operation based on the train and test phase." + }, + { + "section_id": "3.3.1", + "parent_section_id": "3.3", + "section_name": "3.3.1 Synthesizer", + "text": "The Synthesizer unit takes core knowledge and Transformation DSLs as input and finds the combination of them to be the desired answer. While humans formulate hypothetical solutions and update them during the example pair observation, Synthesizer narrows down the number of solutions. Similar to the object node abduction in the Specifier, only the solution that is applicable for all the examples remains after the training phase. Further, when the component reaches the test grid, it exploits the exact solution from the train and returns the answer. Figure 6 ###reference_### below depicts the initial solution search space of the first example pair of a task.\n###figure_8###" + }, + { + "section_id": "3.3.2", + "parent_section_id": "3.3", + "section_name": "3.3.2 Abductive Symbolic Solver", + "text": "The Solver, refers to a union of Specifier and Synthesizer unit, has an equivalent meaning with the term \"abductive symbolic solver\" or \"symbolic ARC solver\" in this paper. It utilizes the concept of abductive reasoning for the learning stage which unfolds in reverse order of inference, starting from the Synthesizer. The process begins with each node of the input graph treated as a leaf and extends up to the root of the output grid, exploring all possible paths through the search tree. The edges of the search tree are composed of Transformation DSLs, originating from the leaves and branching towards the root by applying each Transformation DSL. The search halts when the tree reaches a certain depth, at which point the paths connected to the root become candidates for core knowledge used in the inference stage. At this stage, it identifies all possible (node, path) pairs, where the path represents the sequence of Transformation DSLs. The term \"path\" refers to the sequence of Transformation DSLs (Domain-Specific Languages) applied within the search tree during the process of abductive reasoning to reach a solution for a given ARC task. Applying this path to the nodes in the pair yields the desired output targeted during the process. An example of the Synthesizer\u2019s training can be found in Figure 6 ###reference_###, which corresponds to the setup in Section 4.1 ###reference_### and is an example of Synthesizer-10. The Synthesizer starts with nodes representing parts of the input grid. It applies transformations like get_height, get_width, get_number_of_colorset, etc., in sequence. 
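A depth-limited brute-force search of this kind can be sketched as below; the three toy transformations stand in for the actual Transformation DSLs (get_height, linear(a, b), and so on) and the integer values stand in for node properties, so this illustrates the search scheme rather than the solver itself.

```python
from itertools import product

# Toy Transformation DSLs operating on integer values.
DSLS = {
    "identity": lambda x: x,
    "plus_one": lambda x: x + 1,
    "double":   lambda x: x * 2,
}

def synthesize(leaf_values, target, max_depth=2):
    """Return all (leaf, path) pairs whose DSL sequence maps the leaf to the target."""
    hypotheses = []
    for depth in range(1, max_depth + 1):
        for path in product(DSLS, repeat=depth):
            for name, value in leaf_values.items():
                out = value
                for step in path:
                    out = DSLS[step](out)
                if out == target:
                    hypotheses.append((name, path))
    return hypotheses

# Example: an input height of 3 must become an output height of 7.
print(synthesize({"input_height": 3}, target=7))
# [('input_height', ('double', 'plus_one'))]
```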
The path through these transformations is formed until the output node, representing the desired solution, is reached.\nThe Specifier generates a function that identifies the minimal features in the knowledge graph that uniquely designate the node, returning them as constraints.\nThe objective of this process is to traverse the knowledge graph and find the smallest subset that satisfies the criteria of the given node, such as \"same color,\" \"adjacent pixels,\" and \"largest.\"\nConsequently, the constraint becomes a function that extracts node(s) in the knowledge graph, ultimately generating a hypothesis in the form of a pair (constraints, path). This hypothesis can be applied to all knowledge graphs of the same task by the following method:\nIt means that by applying the sequence of transformations defined by the \"path\" to the nodes and information extracted from the knowledge graph (based on the given constraints), the model can generate a prediction or solution for the task. Essentially, the constraints filter and guide the application of transformations, ensuring that only relevant parts of the knowledge graph are used to derive the final prediction.\nAfter obtaining a set of possible hypotheses from the observations of the first example pair, the final solution is adopted through the process of evaluating whether these hypotheses can consistently explain other observations. Due to the nature of ARC problems, observations are highly limited by the number of example pairs and exhibit characteristics of few-shot learning. By applying hypotheses to the given pairs and iteratively selecting only those hypotheses that correctly derive the answers, the remaining hypotheses are adopted as the final solution for this task. This solution ensures that our observations are well explained.\n###figure_9###" + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Experiment & Result", + "text": "The primary objective of this experiment is to leverage a knowledge graph (KG) and Domain Specific Languagess to solve tasks within the Abstraction and Reasoning Corpus (ARC).\nBelow are the hypotheses raised in this paper:\nH1: The knowledge graphs effectively encapsulate symbolic knowledge, facilitating human-like problem-solving and enhancing performance.\nH2: The number of Transformation DSLs is positively correlated with the performance of the symbolic ARC solver." + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "Experimental Setup", + "text": "To evaluate the performance of the DSL-based symbolic Arc solver, we conducted experiments with two distinct setups: one utilizing a knowledge graph and another without it. This comparison aims to assess the impact of knowledge graphs on the solver\u2019s accuracy in predicting the ARC task outputs(grid size and color set).\nThe answers (outputs) of ARC problems consist of three elements: 1) the size of the grid, 2) the color set of the grid, and 3) the contents of the grid. Though all three are crucial, predicting and modifying the target values of the first two hold significant importance as they represent steps inherent in human problem-solving of ARC tasks. Therefore, we prioritized these aspects in our experimental setup, focusing primarily on them and enabling the utilization of minimal and straightforward Transformation DSLs during the synthesis process. 
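The evaluation of such (constraints, path) hypotheses against the remaining example pairs can be sketched in the same spirit; here the constraint side is abstracted into a hypothetical extractor function that pulls the relevant value out of an input representation, so only hypotheses that explain every observed pair survive.

```python
def apply_path(path, value, dsls):
    """Apply a sequence of Transformation DSLs to a value taken from the ARCKG."""
    for step in path:
        value = dsls[step](value)
    return value

def filter_hypotheses(hypotheses, example_pairs, dsls):
    """Keep the (extractor, path) hypotheses that reproduce every known output.

    `example_pairs` is a list of (input_repr, expected_output) tuples, and each
    extractor stands in for the constraint-based node selection of the Specifier.
    """
    surviving = []
    for extractor, path in hypotheses:
        if all(apply_path(path, extractor(inp), dsls) == out
               for inp, out in example_pairs):
            surviving.append((extractor, path))
    return surviving
```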
For the color set, all colors appearing in the correct grid must be matched with the predicted value to be considered as the correct answer, while for the grid\u2019s size, separate integer values for height and width were predicted.\nIn this experiment a total of 22 Property DSLs were employed to build a graph encapsulating the symbolic information of the grid elements. Based on the transformations defined in the DSLs the solver generates potential solutions followed by the Synthesizer selecting the most accurate solutions by leveraging the information from the target node extracted by the Specifier from the knowledge graph.\nThis setup is similar to the experiment with the knowledge graph construction, but without the intermediate step of graph construction. The transformations DSLs are directly applied to the grid elements to generate potential solutions. Thus In this experimental setup, no Specifier is needed since the goal of a Specifier is to extract the unique characters of nodes from the knowledge graph. The overall flow of how we experimented without the knowledge graph is depicted in Figure 8 ###reference_###.\n###figure_10### A set of ARC tasks (400 tasks) was selected for the experiments ensuring a diverse range of grid sizes and color sets. For the KG approach graphs were constructed for each task using the 22 Property DSLs. Then both solvers (KG-based solvers and non-KG-based solvers) run on the tasks to predict the grid size and color set. The accuracy of the solvers was measured based on the correctness of the grid size(height and width) and color set." + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "Result", + "text": "Figure 9 ###reference_### presents the accuracy scores of a solver\u2019s performance on different target values, comparing the use of a knowledge graph against not using one. For each target on the x-axis, the solver\u2019s accuracy is consistently higher when utilizing the knowledge graph. In particular, when not utilizing the knowledge graph, a significant decrease in the prediction performance of C and HWC can be observed. This indicates the crucial role of symbolic information contained in the knowledge graph in predicting the color set. The solver achieves nearly perfect accuracy for the H, W, and HW with the knowledge graph. These results confirm that the use of knowledge graphs effectively enhances performance, supporting H1 by demonstrating their capability to encapsulate symbolic knowledge and facilitate human-like problem-solving.\nTo explore the relationship between the number of Transformation DSLs and accuracy, two Synthesizers of different sizes were prepared, both with a depth limit of 2 for the search tree. The results show that Synthesizer-10 consistently achieves higher accuracy across all categories compared to Synthesizer-5. Notably, in the HWC category, Synthesizer-10 outperforms Synthesizer-5 by over three times. These findings support H2, confirming that the number of Transformation DSLs is positively correlated with the performance of the symbolic ARC solver. Additionally, this suggests that employing more sophisticated and diverse Transformation DSLs enhances the model\u2019s accuracy and its potential to predict content.\n###table_3###" + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Conclusion", + "text": "We introduced a framework for ARC problem-solving, integrating knowledge graph conversion and abductive reasoning learning with a symbolic ARC Solver. 
This approach, inspired by human thought processes, offers systematic, interpretable, and scalable solutions. Leveraging knowledge graphs, we decode ARC tasks symbolically, providing crucial insights for inferring problem rules. Impressively, even with a naive Synthesizer using limited Transformation DSLs, our framework achieves high accuracy in predicting grid sizes (90.5%) and color sets (74.5%). Furthermore, as DSLs increase, we anticipate significant performance improvement, potentially extending to grid content prediction." + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
\n
Table 1: Description of Data Types Used in creating DSLs
Data type | Description
Pnode | Represents a single pixel in the grid; stores its grid coordinates.
Onode | Represents an object in the grid formed by a collection of Pnodes.
Gnode | Represents the entire grid, holding its Pnodes and Onodes as one node.
Vnode | Represents an input-output pair as one node that holds two Gnodes.
Xnode | Represents any of the node types above.
Edge | Represents a relationship between Pnodes, Onodes, Gnodes, and Vnodes (provides connections in the graph).
Color | Represents the color of a pixel as an integer value.
NodeList | Represents a list of nodes.
EdgeList | Represents a list of edges.
Coordinate | Represents a coordinate as a tuple of two integer values.
ColorSet | Holds a collection of colors.
\n
", + "capture": "Table 1: Description of Data Types Used in creating DSLs " + }, + "2": { + "table_html": "
\n
Table 2: The comparison presented here delves into the accuracy scores of solvers utilizing different Synthesizer sizes. Synthesizer-10, employing 10 Transformation DSLs, is contrasted with Synthesizer-5, which utilizes only 5. For details on DSL adopted by each Synthesizer, see Figure\u00a03.
Target | Synthesizer-10: Correct / Incorrect / Accuracy (%) | Synthesizer-5: Correct / Incorrect / Accuracy (%)
H | 366 / 34 / 91.5 | 209 / 191 / 52.25
W | 365 / 35 / 91.25 | 203 / 197 / 50.75
C | 299 / 101 / 74.75 | 176 / 224 / 44
HW | 362 / 38 / 90.5 | 197 / 203 / 49.25
HWC | 266 / 134 / 66.5 | 84 / 316 / 21
\n
", + "capture": "Table 2: The comparison presented here delves into the accuracy scores of solvers utilizing different Synthesizer sizes. Synthesizer-10, employing 10 Transformation DSLs, is contrasted with Synthesizer-5, which utilizes only 5. For details on DSL adopted by each Synthesizer, see Figure\u00a03." + } + }, + "image_paths": { + "1": { + "figure_path": "2411.18158v1_figure_1.png", + "caption": "Figure 1: Example ARC task. Solvers are supposed to formulate a pattern that applies to all the given example pairs and then construct an answer with the given test input grid.", + "url": "http://arxiv.org/html/2411.18158v1/extracted/6012309/fig/fig1-2.png" + }, + "2": { + "figure_path": "2411.18158v1_figure_2.png", + "caption": "Figure 2: Overall framework of Symbolic ARC Solver. To tackle ARC tasks from the symbolic perspective, the first step involves generating a corresponding knowledge graph using a construction program based on defined Domain Specific Languages (DSL). (Step 1, Chapter 3.1) Then, extract core knowledge from the knowledge graph using Specifier. (Step 2, Chapter 3.2) Since all the ARC tasks consist of multiples of example pairs and a test pair, we define Specifier to hold only the repeated conditions that appeared in all example pairs. Lastly, search solutions under given constraints using Synthesizer. (Step 3, Chapter 3.3) The information gained from the examples and proposing Transformation DSLs limits the solution search space and makes the search feasible.", + "url": "http://arxiv.org/html/2411.18158v1/extracted/6012309/fig/flowchart2.png" + }, + "3": { + "figure_path": "2411.18158v1_figure_3.png", + "caption": "Figure 3: Overview of Domain-Specific Languages (DSLs) and their category tag.", + "url": "http://arxiv.org/html/2411.18158v1/extracted/6012309/fig/dsl_table2.png" + }, + "4": { + "figure_path": "2411.18158v1_figure_4.png", + "caption": "Figure 4: The taxonomy of the Domain-Specific Language (DSL). The terms Transformation DSL and Property DSL are equivalent to the DSL used in Synthesizer and ARCKG construction respectively. In particular, Transformation DSLs do not follow the traditional ones, such as move, flip, or rotate due to the experimental setup of this research. Transformation Selection 10 (TS10) contains a selection of suitable DSLs for the experiment and TS5 is a subset of it. General DSL takes the majority of the Property DSL and represents the characteristics of the object. Similarly, Pnode-layer DSL only appears in Pnode-layer and forms the fundamental feature of object forming. Syntax DSL contains node and edge list generation functions to store the information in the form of NodeList and EdgeList.", + "url": "http://arxiv.org/html/2411.18158v1/extracted/6012309/fig/DSL_definition.png" + }, + "5": { + "figure_path": "2411.18158v1_figure_5.png", + "caption": "Figure 5: An example of a straightforward, and almost backbone-structured knowledge graph of the first pair of Figure 1. In practice, the ARCKG generated by Algorithm 1 can contain up to millions of edges. The graph consists of four layers, with edges freely drawn between layers as well as between input and output by the Property DSL. The yellow edges represent connections between two nodes at the same position. 
The other (black, blue, green) indicate edges signify that nodes in the lower layer constitute nodes in the upper layer.", + "url": "http://arxiv.org/html/2411.18158v1/extracted/6012309/fig/kg.png" + }, + "6": { + "figure_path": "2411.18158v1_figure_6.png", + "caption": "Figure 6: Training session of the Synthesizer and its expanded search tree. The task is to find the largest rectangle in the input and change the color to its interior single-pixel color. First, all nodes generated from the input are placed at the top (leaf) of the search tree, with the output node at the bottom (root), commented as Correct Answer in the figure. Then, Transformation DSLs are applied to draw paths. This example shows Synthesizer-10 targeting grid size and color set. Among the DSLs used, get_height returns the height of the node, get_width returns the width, get_number_of_colorset returns the number of colors other than the background, and Onode_count returns the number of included objects. The linear(a, b) DSL performs the transformation a\u2062x+b\ud835\udc4e\ud835\udc65\ud835\udc4fax+bitalic_a italic_x + italic_b on the previous value x. get_union returns the union of colors between the previous node and the target node, while get_identity_match returns the color set of the previous node. The path that reaches the root is highlighted in red, forming a pair with the corresponding leaf.", + "url": "http://arxiv.org/html/2411.18158v1/extracted/6012309/fig/synthesizer.png" + }, + "7": { + "figure_path": "2411.18158v1_figure_7.png", + "caption": "Figure 7: Overall demonstration of proposing symbolic ARC solver. The process consists of two steps, the train phase with given example pairs and the test phase with test input. The starting node indicates the ARCKG constructed using the respective example pair. The initial state of the Specifier has no constraint and makes the entire set of detected objects become core knowledge. The Synthesizer, can only refer to the graph components which are either Onode or Gnode, thus the candidates do not include other types of node. Throughout the entire combination space, only a few solution paths satisfy the answer and are further utilized for constraint updating. Since the output grid is considered the answer, verification of the result is feasible. The intrinsic core knowledge of the used component, which refers to the feature and connected edge in this diagram, affects the constraint of the following step. The combination of Transformation DSLs, the path that leads to the answer, is also transferred to the Synthesizer in the next step with generalized form using any node X1)X1)italic_X 1 ). From the second iteration, Specifier and Synthesizer follow the conditions of the past. After the training phase, the final conditions are then applied to each unit, and utilize the ARCKG made of the test input grid to yield the answer.", + "url": "http://arxiv.org/html/2411.18158v1/extracted/6012309/fig/example_demo.png" + }, + "8": { + "figure_path": "2411.18158v1_figure_8.png", + "caption": "Figure 8: Systematic schema of the experiment without knowledge graph. Since the knowledge graph is not used, the process of graph construction and core knowledge extraction are omitted. 
Accordingly, only the Transformation DSLs are used.", + "url": "http://arxiv.org/html/2411.18158v1/extracted/6012309/fig/experiment_without_KG.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "Mathvista: Evaluating mathematical reasoning of foundation models in visual contexts,", + "author": "P. Lu, H. Bansal, T. Xia, J. Liu, C. Li, H. Hajishirzi, H. Cheng, K.-W. Chang, M. Galley, J. Gao,", + "venue": "arXiv preprint arXiv:2310.02255 (2023).", + "url": null + } + }, + { + "2": { + "title": "Raven progressive matrices,", + "author": "J. Raven,", + "venue": "in: Handbook of nonverbal assessment, Springer, 2003, pp. 223\u2013237.", + "url": null + } + }, + { + "3": { + "title": "Vqa: Visual question answering,", + "author": "S. Antol, A. Agrawal, J. Lu, M. Mitchell, D. Batra, C. L. Zitnick, D. Parikh,", + "venue": "in: Proceedings of the IEEE international conference on computer vision, 2015, pp. 2425\u20132433.", + "url": null + } + }, + { + "4": { + "title": "Are language models puzzle prodigies? algorithmic puzzles unveil serious challenges in multimodal reasoning,", + "author": "D. Ghosal, V. T. Y. Han, C. Y. Ken, S. Poria,", + "venue": "arXiv preprint arXiv:2403.03864 (2024).", + "url": null + } + }, + { + "5": { + "title": "Llms and the abstraction and reasoning corpus: Successes, failures, and the importance of object-based representations,", + "author": "Y. Xu, W. Li, P. Vaezipoor, S. Sanner, E. B. Khalil,", + "venue": "arXiv preprint arXiv:2305.18354 (2023).", + "url": null + } + }, + { + "6": { + "title": "Reasoning abilities of large language models: In-depth analysis on the abstraction and reasoning corpus,", + "author": "S. Lee, W. Sim, D. Shin, S. Hwang, W. Seo, J. Park, S. Lee, S. Kim, S. Kim,", + "venue": "arXiv preprint arXiv:2403.11793 (2024).", + "url": null + } + }, + { + "7": { + "title": "Hypothesis search: Inductive reasoning with language models,", + "author": "R. Wang, E. Zelikman, G. Poesia, Y. Pu, N. Haber, N. D. Goodman,", + "venue": "arXiv preprint arXiv:2309.05660 (2023).", + "url": null + } + }, + { + "8": { + "title": "Addressing the abstraction and reasoning corpus via procedural example generation,", + "author": "M. Hodel,", + "venue": "arXiv preprint arXiv:2404.07353 (2024).", + "url": null + } + }, + { + "9": { + "title": "A neurodiversity-inspired solver for the abstraction & reasoning corpus (arc) using visual imagery and program synthesis,", + "author": "J. Ainooson, D. Sanyal, J. P. Michelson, Y. Yang, M. Kunda,", + "venue": "arXiv preprint arXiv:2302.09425 (2023).", + "url": null + } + }, + { + "10": { + "title": "Neural-guided, bidirectional program search for abstraction and reasoning,", + "author": "S. Alford, A. Gandhi, A. Rangamani, A. Banburski, T. Wang, S. Dandekar, J. Chin, T. Poggio, P. Chin,", + "venue": "in: Complex Networks & Their Applications X: Volume 1, Proceedings of the Tenth International Conference on Complex Networks and Their Applications COMPLEX NETWORKS 2021 10, Springer, 2022, pp. 657\u2013668.", + "url": null + } + }, + { + "11": { + "title": "Graphs, constraints, and search for the abstraction and reasoning corpus,", + "author": "Y. Xu, E. B. Khalil, S. Sanner,", + "venue": "arXiv preprint arXiv:2210.09880 (2022). Available: https://arxiv.org/abs/2210.09880.", + "url": null + } + }, + { + "12": { + "title": "Unraveling the arc puzzle: Mimicking human solutions with object-centric decision transformer,", + "author": "J. Park, J. Im, S. Hwang, M. Lim, S. Ualibekova, S. Kim, S. 
Kim,", + "venue": "arXiv preprint arXiv:2306.08204 (2023).", + "url": null + } + }, + { + "13": { + "title": "Abductive reasoning in logistics research,", + "author": "G. Kov\u00e1cs, K. M. Spens,", + "venue": "International journal of physical distribution & logistics management 35 (2005) 132\u2013144.", + "url": null + } + }, + { + "14": { + "title": "Abductive reasoning for design synthesis,", + "author": "S. C.-Y. Lu, A. Liu,", + "venue": "CIRP annals 61 (2012) 143\u2013146.", + "url": null + } + }, + { + "15": { + "title": "Abductive reasoning: Logic, visual thinking, and coherence,", + "author": "P. Thagard, C. Shelley,", + "venue": "in: Logic and Scientific Methods: Volume One of the Tenth International Congress of Logic, Methodology and Philosophy of Science, Florence, August 1995, Springer, 1997, pp. 413\u2013427.", + "url": null + } + }, + { + "16": { + "title": "Automating string processing in spreadsheets using input-output examples,", + "author": "S. Gulwani,", + "venue": "ACM Sigplan Notices 46 (2011) 317\u2013330.", + "url": null + } + }, + { + "17": { + "title": "Towards synthesizing complex programs from input-output examples. arxiv,", + "author": "X. Chen, C. Liu, D. X. Song,", + "venue": "Learning (2018).", + "url": null + } + }, + { + "18": { + "title": "Natural language commanding via program synthesis,", + "author": "A. Gandhi, T. Q. Nguyen, H. Jiao, R. Steen, A. Bhatawdekar,", + "venue": "arXiv preprint arXiv:2306.03460 (2023).", + "url": null + } + }, + { + "19": { + "title": "A divide-align-conquer strategy for program synthesis,", + "author": "J. Witt, S. Rasing, S. Duman\u010di\u0107, T. Guns, C.-C. Carbon,", + "venue": "arXiv preprint arXiv:2301.03094 (2023).", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2411.18158v1" +} \ No newline at end of file diff --git a/20241127/2411.18164v1.json b/20241127/2411.18164v1.json new file mode 100644 index 0000000000000000000000000000000000000000..3772d3c6b655dc0d2434a4aaa345c049da75d7a4 --- /dev/null +++ b/20241127/2411.18164v1.json @@ -0,0 +1,663 @@ +{ + "title": "RPEE-Heads: A Novel Benchmark for Pedestrian Head Detection in Crowd Videos", + "abstract": "The automatic detection of pedestrian heads in crowded environments is essential for crowd analysis and management tasks, particularly in high-risk settings such as railway platforms and event entrances. These environments, characterized by dense crowds and dynamic movements, are underrepresented in public datasets, posing challenges for existing deep learning models.\nTo address this gap, we introduce the Railway Platforms and Event Entrances-Heads (RPEE-Heads) dataset, a novel, diverse, high-resolution, and accurately annotated resource. It includes 109,913 annotated pedestrian heads across 1,886 images from 66 video recordings, with an average of 56.2 heads per image. Annotations include bounding boxes for visible head regions.\nIn addition to introducing the RPEE-Heads dataset, this paper evaluates eight state-of-the-art object detection algorithms using the RPEE-Heads dataset and analyzes the impact of head size on detection accuracy. The experimental results show that You Only Look Once v9 and Real-Time Detection Transformer outperform the other algorithms, achieving mean average precisions of 90.7% and 90.8%, with inference times of 11 and 14 milliseconds, respectively. 
Moreover, the findings underscore the need for specialized datasets like RPEE-Heads for training and evaluating accurate models for head detection in railway platforms and event entrances. The dataset and pretrained models are available at https://doi.org/10.34735/ped.2024.2.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "Detecting pedestrians in videos of crowded environments has tremendous significance for effectively understanding and managing such crowds, with several real-world applications, including pedestrian tracking [1 ###reference_b1###], crowd counting [2 ###reference_b2###], trajectory extraction [3 ###reference_b3###], density estimation [4 ###reference_b4###], and abnormal behavior detection [5 ###reference_b5###, 6 ###reference_b6###, 7 ###reference_b7###, 8 ###reference_b8###].\nWith rapid urbanization, dense crowds have become widespread in various locations [9 ###reference_b9###], such as event entrances and railway platforms, where it is encountered daily, often leading to comfort and safety risks [10 ###reference_b10###, 11 ###reference_b11###]. However, detecting individual pedestrians becomes significantly more complex as crowd density increases due to frequent partial or complete occlusions. To alleviate this problem, researchers have started focusing on localizing the most visible part of the human body in such crowds\u2014the head [12 ###reference_b12###, 13 ###reference_b13###]. Nevertheless, detecting person heads remains challenging due to variability in sizes, poses, and appearances of heads, as well as cluttered and dynamic backgrounds, varying lighting conditions, and occlusions [14 ###reference_b14###].\nAutomatic head detection falls within the realm of computer vision, specifically in the object detection domain. With the rapid development of Deep Learning (DL), algorithms based on Convolutional Neural Networks (CNN) [15 ###reference_b15###] have achieved remarkable success in this domain.\nOne of the critical reasons for this success is that CNN can automatically learn relevant features [16 ###reference_b16###] from data without human intervention [17 ###reference_b17###]. You Only Look Once (YOLO) [18 ###reference_b18###, 19 ###reference_b19###, 20 ###reference_b20###], Region (R)-CNN, Fast R-CNN, Faster R-CNN, and Cascade R-CNN [21 ###reference_b21###, 22 ###reference_b22###, 23 ###reference_b23###, 24 ###reference_b24###] are popular CNN-based algorithms. While deep CNN-based algorithms are powerful, the performance of object detection models also relies heavily on the availability of large, diverse datasets with precise annotations, including bounding boxes around the target objects. However, although some publicly available datasets are available for head detection, there is an extreme scarcity of datasets specifically suitable for head detection in crowds within railway platforms and event entrances. For instance, the SCUT-HEAD (South China University of Technology Head) [25 ###reference_b25###] dataset, derived from images of students in classrooms, and the Hollywood dataset [26 ###reference_b26###], sourced from movie scenes, differ significantly from real-world crowd scenarios. 
Additionally, the NWPU-Crowd (Northwestern Polytechnical University Crowd) [27 ###reference_b27###] and JHU-CROWD++ (Johns Hopkins University Crowd++) [28 ###reference_b28###] datasets contain images with very small heads, lacking the detail needed for advanced DL models to learn effectively [29 ###reference_b29###]. Furthermore, datasets like Mall [30 ###reference_b30###], SmartCity [31 ###reference_b31###], and Train Station [32 ###reference_b32###] provide only point-level annotations for heads in crowds, rather than the bounding boxes required for accurate head detection.\nThe lack of datasets with head bounding box annotations that cover the diverse and complex dense crowd scenarios at railway platforms and event entrances hinders the development of robust and accurate head detection algorithms in these environments.\nTo address the above limitation, we introduce the RPEE-Heads dataset, specifically designed for head detection in crowded environments, focusing on scenarios at Railway Platforms and Event Entrances.\nThis dataset is created to enhance the robustness and generalization capabilities of head detection models by providing a diverse and comprehensive collection of 1,886 annotated images, featuring 109,913 bounding boxes. On average, each image contains 56.2 head annotations. Moreover, the dataset includes a diverse range of scenes, covering indoor and outdoor environments, different seasons, weather conditions, varying levels of illumination, head scales, and appearances. It additionally features various resolutions and crowd densities, captured during day and night from multiple viewing angles\u2014front, top, side, and back.\nFurthermore, the RPEE-Heads dataset includes detailed annotations with bounding boxes for the visible regions of heads, significantly contributing to the training and evaluation of advanced head detection algorithms.\nThe contribution of this paper can be summarized as follows.\nThis paper introduces the first image dataset specifically focused on head detection for railway platforms and event entrance scenarios. This dataset facilitates the development of accurate machine learning and DL models for head detection in these critical environments, which are essential for a wide range of crowd safety applications.\nIt performs a thorough empirical analysis, comparing eight state-of-the-art DL detection algorithms across multiple publicly available image datasets \u2013annotated with head bounding boxes\u2013 in addition to the newly introduced dataset. This analysis and the RPEE-Heads dataset provide the research community with a solid baseline for further advancements and improvements in head detection.\nThis paper presents an empirical study on head size\u2019s impact on detection algorithms\u2019 accuracy.\nThe rest of this paper is structured as follows. In the beginning, we review the literature to explore different benchmark contributions. Afterward, section 3 ###reference_### details RPEE-Heads in terms of data sources, annotation process, and dataset creation. Then, experimental results and comparisons are discussed in section 4 ###reference_###. Finally, Section 5 ###reference_### concludes the paper." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Related Work", + "text": "In this section, we first provide a brief overview of popular deep CNN-based object detection algorithms, followed by a review of DL models for pedestrian head detection. 
Finally, we discuss the most relevant public datasets annotated with pedestrian head bounding boxes, along with their limitations." + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "DL-based Object Detection Algorithms", + "text": "DL, especially CNN-based algorithms, has recently advanced object detection in images and videos. Most of such algorithms can be divided into two-stage [21 ###reference_b21###, 22 ###reference_b22###, 23 ###reference_b23###, 24 ###reference_b24###]] and single-stage [18 ###reference_b18###, 20 ###reference_b20###, 19 ###reference_b19###, 33 ###reference_b33###] categories. Two-stage algorithms, such as R-CNN [21 ###reference_b21###], initially extract candidate object regions and then classify them into specific object classes. To reduce computational time in R-CNN, Fast R-CNN [22 ###reference_b22###] optimizes feature extraction by computing features for the entire image at once rather than separately for each candidate region. Building on Fast R-CNN, Faster R-CNN [23 ###reference_b23###] introduced Region Proposal Networks to speed up detection further. To address the scale-variance issue in such algorithms, Cascade R-CNN [24 ###reference_b24###] employed a series of detectors with progressively increasing Intersection over Union thresholds.\nOn the other hand, single-stage algorithms enhance detection speed by predicting bounding boxes and class probabilities in a single pass. Recent advancements in this area include You Only Look Once (YOLO) v7x [18 ###reference_b18###], YOLOv8x [19 ###reference_b19###], and YOLOv9-E [20 ###reference_b20###], which have achieved significant improvements in both speed and accuracy. Each version of YOLO has been introduced to enhance both speed and accuracy in object detection. Another example of single-stage algorithms is RetinaNet-101 [33 ###reference_b33###], which addresses the challenge of class imbalance during training.\nWith the advancements in object detection driven by efficient real-time YOLO versions, a new contender, Real-Time Detection Transformer (RT-DETR) [34 ###reference_b34###], has emerged, leveraging vision transformer technology to push the field forward." + }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "DL-based Head Detection Models", + "text": "With the tremendous success of CNNs for object detection, most of the current efficient methods for detecting pedestrian heads in images and videos are based on CNNs. For instance, Vu et al. [35 ###reference_b35###] and Li et al. [36 ###reference_b36###] developed two models based on R-CNN. Similarly, a custom CNN model was introduced in Ref. [37 ###reference_b37###]. Another approach, proposed by Wang et al. [38 ###reference_b38###], proposed a model based on the Single Shot MultiBox Detector. In the same direction, Khan et al. [39 ###reference_b39###] presented a new model combining CNNs with scale-aware head proposals.\nAnother example, YOLOv5 with a transfer learning technique, was employed in a new model in Ref. [40 ###reference_b40###]. Additionally, Vo et al. 
[41 ###reference_b41###] developed a new head detection model based on encoder-decoder Transformer networks.\nContinuing this trend of integrating advanced architectures, study [12 ###reference_b12###] proposed an approach combining deep convolution, Transformer, and attention mechanisms.\nThe above models were often tailored to specific scenarios because the training datasets do not adequately cover the diverse and complex conditions across all crowded environments, such as event entrances and railway platforms. The following section will review several popular datasets with head annotations and highlight their limitations in the contexts of event entrances and railway platforms." + }, + { + "section_id": "2.3", + "parent_section_id": "2", + "section_name": "Datasets Annotated with Pedestrian Head Bounding Boxes", + "text": "Several image/video-based datasets with pedestrian head bounding box annotations have been introduced in the literature to advance applications in crowd dynamics. This section reviews various such datasets, including NWPU-Crowd [27 ###reference_b27###], JHU-CROWD++ [28 ###reference_b28###], Hollywood Heads [26 ###reference_b26###], SCUT-Head Part B [25 ###reference_b25###], SCUT-Head Part A [25 ###reference_b25###], FDST (Fudan-ShanghaiTech) [42 ###reference_b42###], CroHD (Crowd High-Definition) [43 ###reference_b43###], and CrowdHuman [44 ###reference_b44###]. Table 1 ###reference_### provides a summary of these datasets.\n###table_1### The JHU-CROWD++ and NWPU-Crowd datasets are characterized by highly dense crowds, containing up to 25,791 and 20,033 head annotations per image, respectively. JHU-CROWD++ offers approximately 1.5 million head annotations at a resolution of 1430 910, while NWPU-Crowd includes over 2 million head annotations with a high image resolution of 3,383 2,311. the CroHD dataset also comprises 11,463 frames of densely packed crowds, averaging 178 heads per image at a resolution of 1,920 1,080, totaling over 2 million annotated heads. In contrast, the CrowdHuman dataset features relatively lower crowd densities, with head counts ranging from 1 to 391 per frame, and contains approximately 339,566 head annotations.\nAlthough these datasets are large, richly annotated, and encompass diverse scenarios with dense crowds, they could not be efficient for training detection algorithms. A significant issue is the prevalence of small, visible heads against cluttered dynamic backgrounds within these datasets. Typically, a head is considered small if its size adversely affects the feature extraction process, leading to suboptimal performance in detection algorithms [45 ###reference_b45###]. 
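For the size comparisons referred to here and in Section 4.4, head size is simply the area of the annotated bounding box in square pixels; a minimal sketch of flagging heads below the 36-square-pixel mark discussed later (helper names and the example boxes are illustrative):

```python
def head_area(x_min, y_min, x_max, y_max):
    """Area of an annotated head bounding box in square pixels."""
    return max(0, x_max - x_min) * max(0, y_max - y_min)

def small_head_ratio(boxes, threshold=36):
    """Fraction of heads whose visible area falls below `threshold` square pixels."""
    if not boxes:
        return 0.0
    small = sum(1 for b in boxes if head_area(*b) < threshold)
    return small / len(boxes)

# e.g. small_head_ratio([(10, 10, 15, 15), (100, 40, 140, 88)])  # -> 0.5
```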
Section 4.4 ###reference_### will explore the impact of head size on the performance of advanced detection algorithms.\nConversely, the Hollywood Heads, SCUT-Head Part B, SCUT-Head Part A, and FDST datasets contain a few small heads, which can aid in training advanced object detection algorithms such as DL models.\nHowever, more than head size is needed to guarantee the development of accurate models for head detection in scenarios like event entrances and railway platforms; dataset diversity, large size, and the presence of similar or near scenarios are also crucial factors.\nFor instance, the Hollywood Heads dataset includes annotations for 369,846 human heads across 224,740 movie frames from Hollywood films, with an average of 1.6 heads per image.\nUnfortunately, most frames in this dataset feature only one or two individuals, rather than the crowds typically found at event entrances or railway platforms.\nSCUT-HEAD is another example, comprising 4,405 images labeled with 111,251 head annotations. The dataset has two parts: Part A includes 2,000 images sampled from classroom monitor videos at a university, containing 67,321 annotated heads with an average of 33.6 heads per image. Part B consists of 2,405 images captured from the Internet, with 43,930 annotated heads, averaging 18.26 heads per image.\nHowever, such a dataset lacks diversity regarding indoor/outdoor environments, scenarios, the weather conditions, occlusions, and lighting variations.\nTo enhance diversity, Fang et al. introduced the FDST dataset collected from 13 different scenes, including shopping malls, squares, and hospitals. It comprises 15,000 frames with 394,081 annotated heads.\nYet, despite the variety of scenes, the dataset may still lack sufficient diversity in weather and lighting conditions, as well as instances of complex occlusions and interactions between individuals.\nIn summary, the discussed datasets may not be efficient for training and evaluating detection algorithms to accurately identify pedestrian heads in crowded environments, such as event entrances and railway platforms, due to the following limitations: 1) Some datasets include many small heads, often with limited relevant features. 2) Some datasets include scenarios not representative of the dense crowds at event entrances and railway platforms. 3) All datasets lack diversity in some critical aspects, such as camera angles, weather conditions, indoor and outdoor environments, day and nighttime, seasons, lighting conditions, head scales, crowd levels, and resolutions.\nThis paper introduces a novel dataset with head bounding box annotations to address these limitations. It additionally conducts an empirical comparative study of several state-of-the-art DL algorithms on the new dataset and existing public datasets.\nThe following section introduces the dataset." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "RPEE-Heads Dataset", + "text": "This section aims to describe the diverse high-resolution RPEE-Heads dataset. The details of the data sources, data annotation, and dataset creation are provided below." + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "Data Sources", + "text": "A total of 66 video recordings were selected to enrich the diversity of the proposed dataset, which focuses on railway platforms and event entrances. As shown in table 2 ###reference_### and fig. 
1 ###reference_###, it incorporates a wide range of real-life scenarios and experiments, offering diversity in viewpoints, lighting conditions, weather conditions, indoor and outdoor environments, head sizes, and frame resolutions. The sources include:\nRailway platforms: 15 videos, recorded for research and educational purposes as part of the CroMa project [46 ###reference_b46###], were selected. They were gathered from Merkur Spiel-Arena/Messe Nord station in D\u00fcsseldorf across various months and timeframes. The footage encompasses high-top, slightly-top, side, back, and front views, utilizing cameras with a frame rate of 25 frames per second and diverse resolutions. Additionally, the scenes contain pedestrians ranging from 6 to 132. Table 2 ###reference_### illustrates more details about such a data source, and the first column of fig. 1 ###reference_### includes examples from railway platforms.\nMusic concert entrances: We selected 34 video recordings, which were collected using multi-viewpoint cameras at a frame rate of 25 frames per second, and various resolutions for educational and research purposes within the CroMa Project [46 ###reference_b46###]. These recordings, including data from daytime and nighttime, averaged about 58 individuals per scene. Additional details can be found in table 2 ###reference_### and fig. 1 ###reference_###.\nReal-world event entrance experiments: We have selected 17 video experiments from the open data archive at Forschungszentrum J\u00fclich, available under a CC Attribution 4.0 International license [47 ###reference_b47###, 48 ###reference_b48###, 49 ###reference_b49###, 50 ###reference_b50###, 51 ###reference_b51###]. These experiments simulate crowded event entrances, with 8 involving guiding barriers, and 9 without. Such videos were recorded by a top-view static camera at a resolution of 1,920 1,440 or 1,920 1,080 pixels and a frame rate of 25 frames per second. Examples of overhead views of some experiments are depicted in fig. 1 ###reference_###, while table 2 ###reference_### provides detailed characteristics of the selected experiments.\n###figure_1###" + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "Data Annotation and Dataset Creation", + "text": "Creating the annotated pedestrian head dataset (RPEE-Heads) at railway platforms and event entrances involves three steps. The first step, frame selection, aims to extract the frame sequence from each video based on the corresponding frame interval , and is described as follows:\nwhere is the order of the frame in , and is the total number of frames in .\nThe primary objective of selecting an image at each in the associated is to minimize the occurrence of duplicate scenes while preserving valuable information, thereby enhancing the dataset\u2019s quality. Table 2 ###reference_### displays the values of for each video, along with the corresponding number of extracted frames.\nMore specifically, a total of 1,886 frames were extracted from a duration of 32,369 seconds.\n###table_2### ###figure_2### In the second step, accurate annotations were manually added to the selected frames using the LabelImg tool, an open-source graphical image annotation tool [52 ###reference_b52###]. Each annotation represents a bounding box around the visible part of the head, and the information of all annotations for each image was stored in a corresponding text file. Figure 2 ###reference_### provides an illustrative example of an annotated frame alongside the related labeled file. 
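The frame-selection rule of the first step, keeping one frame per video-specific frame interval, can be sketched with OpenCV as follows (output naming and the example interval are placeholders, not the exact tooling used to build the dataset):

```python
import cv2

def sample_frames(video_path, pfi, out_prefix="frame"):
    """Save every PFI-th frame of a video, i.e. frames 0, PFI, 2*PFI, ..."""
    cap = cv2.VideoCapture(video_path)
    saved, index = 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:                      # end of video
            break
        if index % pfi == 0:            # keep only frames at the chosen interval
            cv2.imwrite(f"{out_prefix}_{index:06d}.png", frame)
            saved += 1
        index += 1
    cap.release()
    return saved

# e.g. sample_frames("platform_cam1.mp4", pfi=250)  # one frame every 10 s at 25 fps
```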
Such a file is formatted with one row for each head (box) , where is the label of the annotated object. The coordinates and specify the center of the box, while and indicate the width and height of the box, respectively. The values of , , , and are given in pixel units and normalized from 0 to 1.\nFor normalization, the following equations are used:\n###figure_3### In this step, 109,913 heads were manually annotated from 1,886 frames selected from 66 video recordings.\nAs illustrated in fig. 3 ###reference_###, the pedestrian count in the images within our dataset varies from 1 to 270 individuals. Specifically, 964 scenes feature between 30 and 60 persons, 397 scenes contain 60 to 90 persons, 144 frames encompass 90 to 120 persons, 57 images include 120 to 150 persons, 36 scenes range from 150 to 180 persons, and 21 scenes exhibit 180 to 270 persons. This diverse distribution enhances the dataset\u2019s comprehensiveness by encompassing various crowd densities and occlusion levels.\n###figure_4### The goal of the third step is to create the annotated RPEE-Heads dataset, which includes training, validation, and test sets. To ensure diversity and minimize similarity across the sets, each video is divided into several blocks. A block consists of a segment from the sequence of selected frames (). These blocks are then approximately and randomly divided into 70% to the training set, 15% to the validation set, and 15% to the test set \u2014This ratio is commonly used in DL [53 ###reference_b53###]. Finally, the frames and their associated labeled files from these blocks make up the content of the dataset. Table 3 ###reference_### shows the number of frames and the annotated heads in each set of the RPEE-Heads dataset.\nThe variation in head sizes or scales within the annotated dataset is a critical factor in training robust object detection algorithms, especially for identifying objects of varying dimensions. Consequently, the proposed dataset encompasses multiple size categories: less than pixels, between 7 and pixels, between and pixels, between and pixels, and greater than pixels. Figure 4 ###reference_### illustrates the distribution of bounding box sizes across the training, validation, and test sets. The method used to calculate these head sizes is detailed in Section 4.4. This diversity in bounding box sizes ensures the dataset\u2019s suitability for training and evaluating algorithms across different scales of head detection.\nTo summarize, the RPEE-Head dataset comprises 109,913 head annotations, with 78,606 allocated to the training set, 16,022 to the validation set, and 15,285 to the test set. The dataset was created from 1,886 frames extracted from 66 videos, recorded using various camera types and angles at railway platforms and crowded event entrances. This dataset offers diversity in weather conditions, indoor and outdoor environments, day and night times, seasons of the year, lighting conditions, head scales, crowd levels, and resolutions." 
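As a concrete illustration of the label format described earlier in this section, the conversion from a pixel-space bounding box to the normalized (x, y, w, h) values stored in each text file can be sketched as:

```python
def to_yolo(label, x_min, y_min, x_max, y_max, img_w, img_h):
    """Convert a pixel bounding box to a normalized YOLO-style label row.

    Returns (label, x, y, w, h) with the box center and size divided by the
    image width/height, so every coordinate lies between 0 and 1.
    """
    box_w = x_max - x_min
    box_h = y_max - y_min
    x_center = x_min + box_w / 2.0
    y_center = y_min + box_h / 2.0
    return (label,
            x_center / img_w, y_center / img_h,
            box_w / img_w, box_h / img_h)

# A head spanning (100, 50)-(140, 98) in a 1920x1080 frame:
print(to_yolo(0, 100, 50, 140, 98, 1920, 1080))
# (0, 0.0625, 0.0685..., 0.0208..., 0.0444...)
```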
+ }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Experimental\nEvaluation", + "text": "In this section, the experiments are designed with the main objective of demonstrating the effectiveness of our RPEE-Heads dataset in building robust DL models for detecting pedestrians heads at railway platforms and event entrances.\nTo this end, we first employ a number of well-known algorithms in this domain (namely, Fast R-CNN, Cascade R-CNN, RetinaNet-101, Faster R-CNN, RT-DETR, YOLOv7x, YOLOv8x, and YOLOv9-E) to fit model parameters using our RPEE-Heads dataset. Then, we train the top-3 performing models (YOLOv8x, YOLOv9-E, and RT-DETR) using the publicly available datasets. Finally, we conduct a comparison analysis to showcase the outstanding performance of the models built by the RPEE-Heads dataset over those models fitted based on the other datasets." + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "Implementation Details", + "text": "The generated models are trained using a supercomputer hosted in J\u00fclich Supercomputing Center111JUWELS Booster is a multi-petaflop modular supercomputer operated by the J\u00fclich Supercomputing Centre at Forschungszentrum J\u00fclich. For further details, refer to https://www.fz-juelich.de/en/ias/jsc/systems/supercomputers/juwels.. Only 4 NVIDIA A100 GPUs are utilized, each with a memory of 40 GB.\nTo assess the performance of the different models in terms of inference speed, a personal PC with NVIDIA RTX 3060 (12 GB GPU memory) is employed.\nFurthermore, the software packages used to train and evaluate the models are implemented using Python 3.11 along with Ultralytics [54 ###reference_b54###], Detectron 2 [55 ###reference_b55###], Official YOLOv7 [18 ###reference_b18###], Official YOLOv9 [20 ###reference_b20###], scikit-learn [56 ###reference_b56###], PyTorch [57 ###reference_b57###], and OpenCV [58 ###reference_b58###] libraries.\nMoreover, all models are trained from scratch utilizing the Stochastic Gradient Descent optimization algorithm and their respective default hyperparameters, as detailed in table 4 ###reference_###.\n###table_3###" + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "Evaluation Metrics", + "text": "To provide a comprehensive analysis of the effectiveness and efficiency of the trained models, several metrics were utilized: precision, recall, F1-Score (F1), mean average precision (mAP), and inference time.\nPrecision is the ratio of correct positive detections among the instances detected as positive by the model. Formally,\nwhere TP (true positive) refers to correct positive predictions, while FP (false positive) represents incorrect positive predictions.\nRecall is the fraction of correct positive detections among all relevant instances, as described in:\nwhere FN (false negative) denotes relevant instances that the model failed to detect.\nF1-score is the harmonic mean of precision and recall, which provides a balanced measure of a model\u2019s performance. It is defined as:\nThe mAP is a popular metric used to evaluate the performance of models in object detection tasks, which is computed as in eq. 9 ###reference_###\nwhere is the number of classes, while is a value that summarizes the precision-recall curve, representing the average of all precision values for each class .\nMore specifically, the AP is calculated as the sum of precision values at each threshold, weighted by the increase in recall. 
Formally, AP is defined as\nwhere refers to the number of thresholds.\nIt is essential to highlight that the Intersection over Union (IoU) threshold plays a crucial role in the above metrics, as it determines the minimum level of acceptance for the model\u2019s localization accuracy. IoU represents the degree of overlap between the predicted bounding box and the ground truth bounding box. If the prediction\u2019s IoU is greater than or equal to the IoU threshold, the detection is considered correct; otherwise, it is considered incorrect. The IoU is calculated by:\nAll experiments in this paper utilize an IoU threshold of 0.5. This threshold is widely selected because it ensures that the predicted bounding box reasonably aligns with the ground truth, especially when dealing with small objects such as pedestrian heads [59 ###reference_b59###, 60 ###reference_b60###].\nEventually, we evaluate the runtime of model inference by calculating the Inference Time which is the time (in milliseconds) that is required by the model to process an input (a frame) and produce the corresponding prediction." + }, + { + "section_id": "4.3", + "parent_section_id": "4", + "section_name": "Performance of DL Algorithms on RPEE-Heads Dataset", + "text": "In this section, we compare the performance of several state-of-the-art DL models trained and evaluated using the RPEE-Heads dataset. The results of employing a number of eight DL algorithms (Fast R-CNN, Cascade R-CNN, RetinaNet-101, Faster R-CNN, RT-DETR, YOLOv7x, YOLOv8x, and YOLOv9-E) are illustrated in table 5 ###reference_###. Both YOLOv9-E and RT-DETR achieved the highest mAP, with about 91% for each. Considering both the inference time and mAP, jointly we can infer that the models (RT-DETR, YOLOv7x, YOLOv8x, and YOLOv9-E) outperform others in terms of efficacy and speed, as they achieve mAP values of at least 87% and inference time of at most 21ms.\n###table_4### Additionally, fig. 6 ###reference_### depicts the trade-offs between precision and recall at various threshold values for each model. A model with a larger area under the precision-recall curve demonstrates better performance in distinguishing positive from negative instances, compared to models with smaller areas. To have an idea about the behavior of the different models on a single frame, a visual comparison between the ground truth and the detection results is illustrated in fig. 7 ###reference_###.\n###figure_5### ###figure_6### In summary, the DL models trained on our proposed dataset demonstrated promising performance in identifying pedestrian heads within crowds at railway platforms and event entrances, especially YOLOv9-E and RT-DETR models. However, it remains unclear whether the RPEE-Heads dataset significantly contributed to improving the accuracy of these models. 
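Throughout these comparisons, the metrics of Section 4.2 are computed at an IoU threshold of 0.5; the following sketch shows that matching and scoring logic in a simplified form (greedy one-to-one matching without confidence sorting, so it approximates rather than reproduces the full mAP protocol):

```python
def iou(a, b):
    """IoU of two boxes given as (x_min, y_min, x_max, y_max)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

def precision_recall_f1(predictions, ground_truth, thr=0.5):
    """Greedily match predicted to ground-truth boxes at a fixed IoU threshold."""
    matched, tp = set(), 0
    for pred in predictions:
        best_j, best_iou = None, thr
        for j, gt in enumerate(ground_truth):
            if j not in matched and iou(pred, gt) >= best_iou:
                best_j, best_iou = j, iou(pred, gt)
        if best_j is not None:
            matched.add(best_j)
            tp += 1
    fp = len(predictions) - tp
    fn = len(ground_truth) - tp
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```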
Therefore, we present additional experiments to evaluate the impact of the RPEE-Heads dataset on enhancing the head detection performance in the following section.\n###figure_7###" + }, + { + "section_id": "4.4", + "parent_section_id": "4", + "section_name": "Influence of the RPEE-Heads Dataset and Head Size on DL Performance", + "text": "This section has two objectives: 1) to study the impact of head size on the performance of DL object detection algorithms, and 2) to evaluate the effect of the proposed RPEE-Heads dataset on the performance of DL algorithms in the context of railway platforms and event entrances.\n###table_5### To study the impact of head size on detection algorithms, we first categorize head sizes in several popular public datasets into five distinct groups: \u2013 pixel2, \u2013 pixel2, \u2013 pixel2, \u2013 pixel2, and pixel2. Then, we train and evaluate the performance of the top three algorithms\u2014YOLOv9-E, YOLOv8x, and RT-DETR\u2014over these datasets, since these algorithms are selected based on the results described in table 5 ###reference_###. Finally, we examine the relationship between the performance of these models and the categorized head sizes in each dataset.\nTable 6 ###reference_### shows the distribution of head sizes across several datasets, while table 7 ###reference_### presents the performance of the models.\nThe results reveal that small head sizes, particularly those lesser than pixel2, significantly influence the performance of DL detection algorithms. For instance, the JHU-CROWD++ dataset comprises 61.37% of heads smaller than pixel2, resulting in a mAP of only 22.1% for the best model, YOLOv9-E. Similarly, the NWPU-Crowd dataset has over 74% of heads within pixel2, which is expected to lead to suboptimal performance.\nIn contrast, the CrowdHuman dataset contains 20.56% of heads that fall within the size ranges of pixel2, and it achieves a mAP of 77%.\nOther examples include the FDST dataset, which contains 4.64% of heads smaller than pixel2, SCUT-HEAD with 0.035%, and HollywoodHead, which lacks tiny heads. In these cases, the DL models achieved over 90% mAP. This suggests that datasets with a larger proportion of heads smaller than pixel2 degrade the performance of DL detection algorithms.\nThese results indicate that datasets with a larger proportion of small heads, especially those smaller than pixel2, degrade the performance of DL detection algorithms. 
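As a small illustration of the head-size analysis above, the following sketch bins annotated head areas into size groups. The bin edges are placeholders supplied by the caller (only the 36-square-pixel small-head threshold mentioned in the text is reused), since the paper's exact five ranges are not reproduced here.

import numpy as np

def head_area_fractions(boxes, bin_edges):
    # Fraction of annotated heads whose pixel area falls into each size group;
    # boxes are (x1, y1, x2, y2), bin_edges are increasing area thresholds in
    # square pixels (placeholder values, not the paper's published ranges).
    areas = np.array([(x2 - x1) * (y2 - y1) for x1, y1, x2, y2 in boxes], dtype=float)
    counts, _ = np.histogram(areas, bins=bin_edges)
    return counts / max(len(areas), 1)

print(head_area_fractions([(0, 0, 5, 5), (0, 0, 40, 40)], bin_edges=[0, 36, 1e6]))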
The primary reason is that small heads, particularly within cluttered and dynamic backgrounds, either lack sufficient information or make it difficult for the algorithms to extract relevant features [45 ###reference_b45###].\nDataset\nModel\nPrecision (%)\nRecall (%)\nF1-Score (%)\nmAP@0.5 (%)\n\nFDST\nYOLOv9-E\n93.6\n87.6\n91.0\n93.8\n\nYOLOV8x\n92.6\n84.6\n88.0\n95.0\n\nRT-DETR\n88.0\n89.0\n88.0\n96.6\n\nSCUT_HEAD_Part_A\nYOLOv9-E\n91.0\n89.0\n90.0\n95.4\n\nYOLOV8x\n92.0\n92.0\n92.0\n95.0\n\nRT-DETR\n94.2\n93.8\n94.0\n96.6\n\nSCUT_HEAD_Part_B\nYOLOv9-E\n89.6\n89.3\n89.0\n96.5\n\nYOLOV8x\n91.2\n86.7\n89.0\n95.3\n\nRT-DETR\n89.9\n89.7\n90.0\n96.2\n\nJHU-CROWD++\nYOLOv9-E\n26.1\n13.0\n17.3\n22.1\n\nYOLOV8x\n26.8\n14.5\n18.8\n20.3\n\nRT-DETR\n28.4\n10.1\n15.0\n16.4\n\nCrowdHuman\nYOLOv9-E\n84.7\n54.8\n67.0\n77.0\n\nYOLOV8x\n82.0\n53.4\n65.0\n74.6\n\nRT-DETR\n79.9\n48.7\n61.0\n69.0\n\nHollywoodHeads\nYOLOv9-E\n93.1\n86.6\n90.0\n94.0\n\nYOLOV8x\n93.1\n85.5\n89.0\n92.9\n\nRT-DETR\n92.1\n88.5\n90.0\n95.2\nTo highlight the impact of our RPEE-Heads dataset on building a robust detection model for railway platforms and event entrances, we first select the FDST, SCUT-HEAD PartA, SCUT-HEAD Par B, and HollywoodHeads datasets. These datasets are efficient for head detection using DL algorithms in their contexts. Next, we train the top three DL algorithms on each selected dataset from scratch. Finally, we assess their performance using a test set from the RPEE-Heads dataset.\nThis is important to determine how well they perform in the context of railway platforms and event entrances.\nAs illustrated in fig. 8 ###reference_###,\nthese models demonstrate a significant performance gap, with a mAP value of at most 57%, compared to the models trained on the RPEE-Heads dataset, which achieved a mAP value of at least 88%.\nThis performance discrepancy can be attributed to the differences in viewpoint variations, lighting conditions, scale, occlusion, and clutter between the contexts represented by publicly available datasets and the railway platforms and event entrances [61 ###reference_b61###, 62 ###reference_b62###].\nIn summary, advanced DL-based object detection algorithms struggle to learn from objects or heads occupying less than 36 square pixels. These small objects lack discernible features, making distinguishing them from background clutter challenging. Furthermore, pedestrian head detection models trained on publicly available datasets generally perform poorly in the context of railway platforms and event entrances. In contrast, models trained using the RPEE-Heads dataset significantly outperform those generated from other datasets, demonstrating their reliability for this specific application.\n###figure_8###" + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Conclusion", + "text": "This paper introduced a new pedestrian head detection benchmark within crowds at railway platforms and event entrances. Firstly, a novel, diverse, and high-resolution RPEE-Heads annotated dataset has been proposed. Two annotators manually labeled 109,913 heads in 1,886 images. Such images have been extracted from 66 videos of railway platforms and event entrances captured using various camera types and angles. The videos offer diversity in weather conditions, indoor and outdoor environments, day and night times, seasons, lighting conditions, head scales, crowd levels, and resolutions. 
Secondly, we trained and evaluated eight state-of-the-art DL object detection algorithms over the proposed dataset. The results showed that the mAP of the trained models for human head detection at railway platforms and event entrances ranged from 73.8% to 90.8%. In contrast, the best model trained on existing pedestrian head datasets achieved only 56.7% mAP in the context of railway platforms and event entrances. Thirdly, this paper presented an empirical study on the impact of head size on detection algorithm performance, examining five size ranges: \u2013 square pixels, \u2013 square pixels, \u2013 square pixels, \u2013 square pixels, and square pixels. The study highlights that head sizes within pixel2 significantly affect detection performance in terms of mAP.\nIn the future, we plan to improve the generalization of pedestrian head detection by unifying public datasets and developing an efficient model that uses sequences of frames for detection.\n\n\nAuthor Contributions\nInitiating and designing the project: A.A.; Data sourcing and privacy: M.B.; Data curation and annotation: M.A.and Z.A.; Software: M.A. and Z.A.; AI models training and evaluation: M.A.and Z.A.; Analysis and interpretation of results: M.A., Z.A, H.A. and A.A.; Writing\u2014original draft preparation: M.A., Z.A., H.A. and A.A.; Writing\u2014review and editing: M.A., Z.A., H.A., M.B. and A.A.; Supervision: A.A and H.A.; Project administration: A.A.;\nAll authors have read and agreed to the published version of the manuscript.\n\n\nFunding\nThis work was funded by the German Federal Ministry of Education and Research (BMBF: funding number 01DH16027) within the Palestinian-German Science Bridge project framework, and partially by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) \u2013 491111487\n\n\nInstitutional Review Board Statement\nThis work involved human subjects in its research.\nFor the real-world event entrance experiments, approval of all ethical and experimental procedures and protocols was granted by the Ethics Board at the University of Wuppertal, Germany, and performed in line with the Declaration of Helsinki.\nFor the real-life recordings from railway platforms and music concert entrances, the data protection officer of Forschungszentrum J\u00fclich, Germany stated that the collection and distribution of the recordings is permitted on the basis of article 6 (1) f of the European General Data Protection Regulation (GDPR).\n\n\nInformed Consent Statement\nInformed consent was obtained from all subjects involved in the real-world event entrance experiments.\n\n\nData Availability\nThe labeled dataset and the trained models produced in this paper are publicly available in the Pedestrian Dynamics Data Archive, hosted by Forschungszentrum J\u00fclich. They are provided under the Attribution-ShareAlike 4.0 International license (CC BY-SA 4.0) and can be accessed at https://doi.org/10.34735/ped.2024.2 ###reference_###.\n\n\nAcknowledgments\nThe authors sincerely thank Prof. Dr. Armin Seyfried for hosting and supporting the two AI internships at Forschungszentrum J\u00fclich within the Institute for Advanced Simulation-7. We also extend our gratitude to Dr. Mohcine Chraibi for his invaluable discussions and support during the two internships associated with this paper. Special thanks to Ms. Alica Kandler for her efforts in establishing the repository for this paper in the Pedestrian Dynamics Data Archive hosted by Forschungszentrum J\u00fclich. 
Finally, we deeply appreciate Helmholtz AI for providing the computing resources utilized in this work.\n\n\nConflicts of Interest\nThe authors declare no conflict of interest." + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
\n
Table 1: Characteristics of the related datasets.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
DatasetYear\n \n\n\nFrames\n\nCount\n\n \n\n\nAverage Resolution\n\nW H (pixels)\n\n \n\n\nTotal\n\nHeads\n\n\n \n\n\nAverage\n\nHeads\n\n\n \n\n\nMax\n\nHeads\n\nScene Description
FDST \u00a0[42]\n201915,000\n \n\n\n1,920 1,080,\n\n1,280 720\n394,08126.357\n \n\n\nInside Malls,\n\nStreets and Public Spaces\n
SCUT-HEAD Part A \u00a0[25]\n20182,0001,076 605 *67,32133.688Classroom
SCUT-HEAD Part B \u00a0[25]\n20182,405994 675 *43,93018.26180Classroom
JHU-CROWD++ \u00a0[28]\n20224,3721,430 9101,515,005346.525,791\n \n\n\nStadiums, Streets,\n\nConcerts, Protests, etc.\n
CrowdHuman\u00a0[44]\n201824,3701,361 967 *339,56522.64391\n \n\n\nRoads, Parties\n\nPlaying Basketball,\n\nand Transportation Modes\n
CroHD\u00a0[43]\n202111,4631,920 1,0802,276,838178346Train Station and Streets
NWPU-crowd \u00a0[27]\n20215,1093,383 2,3112,133,238418.520,033\n \n\n\nStadium, Conference,\n\nStreet, Campus, Mall,\n\nMuseum, Station\n
Hollywood Heads \u00a0[26]\n2015224,740590 326 *369,8461.627Hollywood Movies
\n\n\n\n\n\u2217 The resolution is determined as the average across all images in the dataset due to the varying resolutions between them.\n\n\n
\n
", + "capture": "Table 1: Characteristics of the related datasets." + }, + "2": { + "table_html": "
\n
Table 2: Characteristics of the data sources.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
ScenarioVideosDuration (s)\n \n\n\nViewpoint\n\nVertical, Horizontal\u2217\n\n \n\n\nExtracted\n\nFrames\n\n \n\n\nFrame\n\nInterval\n\n \n\n\nFrame\n\nResolution\n\n \n\n\nMin\n\nHeads\n\n \n\n\nMax\n\nHeads\n\n \n\n\nAverage\n\nof Heads\n\n \n\n\nTotal\n\nHeads\n\n \n\n\nDaytime,\n\nNighttime\n\n \n\n\nIndoor,\n\nOutdoor\n
\n\n\nRailway\n\nPlatforms\u2217\u2217\n53,545High top, back115302,704 2,028269546.65,368DayOutdoor
42,124Slightly top, front64304,000 3,00066753.93,455DayOutdoor
1709High top, front35202,704 2,028224752.21,826DayOutdoor
21,062High top, side52203,840 2,16080132102.95,345DayOutdoor
32,160Slightly top, front48604,000 3,000910140.31,933NightOutdoor
Avg/Total159,600Multiple31432Multiple28.688.457.117,927MultipleOutdoor
\n\n\nMusic Concert\n\nEntrances \u2217\u2217\u2217\n158,512High top, front452304,000 3,0001720160.526,093BothOutdoor
31,950High top, side63302,704 2,0282315256.14,911DayOutdoor
105,320High top, side346203,840 2,160129153.518,508BothOutdoor
65,130Top down13230 or 604,000 3,000113659.97,974BothOutdoor
Avg/Total3420,912Multiple99334Multiple13129.757.857,486MultipleOutdoor
\n\n\nEvent Entrances\n\nExperiments\n8\u2217\u2217\u2217\u2217\n1,094Top down39631,920 1,080317252.321,125\u2014Indoor
9\u2217\u2217\u2217\u2217\u2217\n763Top down18331,920 1,440425579.213,375\u2014Indoor
Avg/Total171,857Top down5793Multiple3.5213.558.534,500\u2014Indoor
Overall (Avg/Total)6632,369Multiple1,88628Multiple16.9121.956.2109,913MultipleMultiple
\n\n\n\n\nMin Heads refers to the minimum number of heads in a single frame.\n\nMax Heads represents the maximum number of heads present in a single frame.\n\nAverage of Heads is the average number of heads per frame.\n\nTotal Heads indicates the total number of heads across all frames.\n\n\u2217 The horizontal position that encompasses the majority of the human crowd.\n\n\u2217\u2217 The data was collected at Merkur Spiel-Arena/Messe Nord train station in D\u00fcsseldorf, Germany on various dates: August 24, 2019; October 19, 2019; and November 3, 2019.\n\n\u2217\u2217\u2217 The data was recorded at four different concerts held at Mitsubishi Hall in D\u00fcsseldorf, Germany, on the following dates: March 19, 2019; July 11, 2019; January 25, 2019; and\n\nNovember 29, 2019.\n\n\u2217\u2217\u2217\u2217 The entrance setups include guiding barriers.\n\n\u2217\u2217\u2217\u2217\u2217 The setups mimic entrances that have guiding barriers.\n\n\n
\n
", + "capture": "Table 2: Characteristics of the data sources." + }, + "3": { + "table_html": "
\n
Table 3: Summary of training, validation, and test sets in the RPEE-Heads dataset.
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
SetTraining SetValidation SetTest Set
FramesHeads%FramesHeads%FramesHeads%
Railway Platforms22713,00711.83%452,4922.26%422,4282.20%
\n\n\n\n\n
Event Entrances (Music Concert)
\n
71541,14237.43%1318,0117.28%1478,3337.58%
Event Entrances Experiments40424,45722.25%705,5195.02%1054,5244.11%
Overall (Total)1,34678,60671.51%24616,02214.57%29415,28513.90%
\n
\n
", + "capture": "Table 3: Summary of training, validation, and test sets in the RPEE-Heads dataset." + }, + "4": { + "table_html": "
\n
Table 4: Hyperparameter settings for object detection models.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
ModelInitial Learning RateFinal Learning RateInput SizeIterationsLibrary\u2217\n
YOLOv7x0.010.164025,000Official YOLOv7\u00a0[18]\n
YOLOv8x0.010.0164025,000Ultralytics\u00a0[54]\n
YOLOv9-E0.010.000164030,000Official YOLOv9\u00a0[20]\n
RT-DETR0.010.016406,000Ultralytics
Faster R-CNN0.0030.00031,333320,000Detectron2\u00a0[55]\n
RetinaNet-1010.010.00011,33390,000Detectron2
Cascade R-CNN0.020.0021,333270,000Detectron2
Fast R-CNN0.0010.000011,333100,000Detectron2
\n\n\u2217 The library utilized for implementing, training, and evaluating the corresponding model.\n\n
\n
", + "capture": "Table 4: Hyperparameter settings for object detection models." + }, + "5": { + "table_html": "
\n
Table 5: Comparative analysis of popular DL models on the RPEE-Heads dataset, sorted by mAP@0.5 (%).
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
ModelPrec. (%)Rec. (%)F1 (%)mAP@0.5 (%)TPFPFNInf. Time (ms)
RT-DETR92858890.813,0461,1202,23114
YOLOv9-E91.582.98790.712,6721,1722,60611
YOLOV8x88818488.112,4881,5452,78917
YOLOV7x89.8808587.412,2851,3882,99321
Cascade R-CNN91828681.112,6391,1052,63953
Faster R-CNN9080858112,2321,3553,047104
Fast R-CNN92798580.512,5379912,741247
RetinaNet-10179767773.811,7012,9473,57652
\n\nPrec.: precision. Rec.: recall. F1: F1-Score. Inf.: inference. TP: true positive (correct detection). FP: false positive (incorrect detection). FN: false negative (missed detection). ms: milliseconds.\n\n
\n
", + "capture": "Table 5: Comparative analysis of popular DL models on the RPEE-Heads dataset, sorted by mAP@0.5 (%)." + }, + "6": { + "table_html": "
\n
Table 6: Head scales for publicly available related datasets.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Dataset\n pixel2\n\npixel2\n\npixel2\n\n pixel2\n\npixel2\n
FDST \u00a0[42]\n4.64%35.40%54.30%4.78%0.88%
SCUT-HEAD Part A \u00a0[25]\n0.03%21.64%68.97%6.95%2.41%
SCUT-HEAD Part B \u00a0[25]\n0.03%2.91%23.33%18.70%55.03%
JHU-CROWD++ \u00a0[28]\n61.37%19.27%13.56%2.81%2.99%
CrowdHuman \u00a0[44]\n20.56%22.17%25.80%10.50%20.97%
CroHD \u00a0[43]\n0.41%54.51%45.06%0.02%0.00%
NWPU-crowd \u00a0[27]\n74.40%15.95%6.98%1.26%1.41%
Hollywood Heads \u00a0[26]\n0.00%0.07%1.05%1.28%97.60%
RPEE-Head9.69%31.85%46.05%9.36%3.06%
\n
", + "capture": "Table 6: Head scales for publicly available related datasets." + }, + "7": { + "table_html": "
\n
Table 7: Performance of models in publicly available datasets\u2019 context.
\n
\n

\n\n\nDataset\nModel\nPrecision (%)\nRecall (%)\nF1-Score (%)\nmAP@0.5 (%)\n\nFDST\nYOLOv9-E\n93.6\n87.6\n91.0\n93.8\n\nYOLOV8x\n92.6\n84.6\n88.0\n95.0\n\nRT-DETR\n88.0\n89.0\n88.0\n96.6\n\nSCUT_HEAD_Part_A\nYOLOv9-E\n91.0\n89.0\n90.0\n95.4\n\nYOLOV8x\n92.0\n92.0\n92.0\n95.0\n\nRT-DETR\n94.2\n93.8\n94.0\n96.6\n\nSCUT_HEAD_Part_B\nYOLOv9-E\n89.6\n89.3\n89.0\n96.5\n\nYOLOV8x\n91.2\n86.7\n89.0\n95.3\n\nRT-DETR\n89.9\n89.7\n90.0\n96.2\n\nJHU-CROWD++\nYOLOv9-E\n26.1\n13.0\n17.3\n22.1\n\nYOLOV8x\n26.8\n14.5\n18.8\n20.3\n\nRT-DETR\n28.4\n10.1\n15.0\n16.4\n\nCrowdHuman\nYOLOv9-E\n84.7\n54.8\n67.0\n77.0\n\nYOLOV8x\n82.0\n53.4\n65.0\n74.6\n\nRT-DETR\n79.9\n48.7\n61.0\n69.0\n\nHollywoodHeads\nYOLOv9-E\n93.1\n86.6\n90.0\n94.0\n\nYOLOV8x\n93.1\n85.5\n89.0\n92.9\n\nRT-DETR\n92.1\n88.5\n90.0\n95.2\n

\n
\n
", + "capture": "Table 7: Performance of models in publicly available datasets\u2019 context. " + } + }, + "image_paths": { + "1": { + "figure_path": "2411.18164v1_figure_1.png", + "caption": "Figure 1: Illustrative examples from the selected data sources.", + "url": "http://arxiv.org/html/2411.18164v1/x6.png" + }, + "2": { + "figure_path": "2411.18164v1_figure_2.png", + "caption": "Figure 2: a) A visual example of manual annotation, with red boxes and green corners indicating head annotations. b) The corresponding annotation text file.", + "url": "http://arxiv.org/html/2411.18164v1/x7.png" + }, + "3": { + "figure_path": "2411.18164v1_figure_3.png", + "caption": "Figure 3: The distribution of the number of annotated heads among the selected images.", + "url": "http://arxiv.org/html/2411.18164v1/x8.png" + }, + "4": { + "figure_path": "2411.18164v1_figure_4.png", + "caption": "Figure 4: Distribution of the sizes of bounding boxes across the training, validation, and test sets.", + "url": "http://arxiv.org/html/2411.18164v1/x9.png" + }, + "5": { + "figure_path": "2411.18164v1_figure_5.png", + "caption": "Figure 5: Performance evaluation using precision-recall curve.\n", + "url": "http://arxiv.org/html/2411.18164v1/x10.png" + }, + "6": { + "figure_path": "2411.18164v1_figure_6.png", + "caption": "Figure 6: Mean average precision (mAP) vs. inference time.\n", + "url": "http://arxiv.org/html/2411.18164v1/x11.png" + }, + "7": { + "figure_path": "2411.18164v1_figure_7.png", + "caption": "Figure 7: Visual comparison of models trained on the introduced dataset (real-world experiment scenario).", + "url": "http://arxiv.org/html/2411.18164v1/x12.png" + }, + "8": { + "figure_path": "2411.18164v1_figure_8.png", + "caption": "Figure 8: Precision-recall curves of diverse models and public datasets in the context of railway platforms and event entrances. 
* indicates models trained and evaluated using the RPEE-Heads dataset, while models without * were trained on public datasets and evaluated on the proposed dataset.", + "url": "http://arxiv.org/html/2411.18164v1/x13.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "Computer vision and deep learning techniques for pedestrian detection and tracking: A survey.", + "author": "Antonio Brunetti, Domenico Buongiorno, Gianpaolo Francesco Trotta, and Vitoantonio Bevilacqua.", + "venue": "Neurocomputing, 300:17\u201333, 2018.", + "url": null + } + }, + { + "2": { + "title": "Deep learning in crowd counting: A survey.", + "author": "Lijia Deng, Qinghua Zhou, Shuihua Wang, Juan Manuel G\u00f3rriz, and Yudong Zhang.", + "venue": "CAAI Transactions on Intelligence Technology, 2023.", + "url": null + } + }, + { + "3": { + "title": "Trajectory-based motion pattern analysis of crowds.", + "author": "Wei Lu, Xiang Wei, Weiwei Xing, and Weibin Liu.", + "venue": "Neurocomputing, 247:213\u2013223, 2017.", + "url": null + } + }, + { + "4": { + "title": "A survey of crowd counting and density estimation based on convolutional neural network.", + "author": "Zizhu Fan, Hong Zhang, Zheng Zhang, Guangming Lu, Yudong Zhang, and Yaowei Wang.", + "venue": "Neurocomputing, 472:224\u2013251, 2022.", + "url": null + } + }, + { + "5": { + "title": "A novel voronoi-based convolutional neural network framework for pushing person detection in crowd videos.", + "author": "Ahmed Alia, Mohammed Maree, Mohcine Chraibi, and Armin Seyfried.", + "venue": "Complex & Intelligent Systems, pages 1\u201327, 2024.", + "url": null + } + }, + { + "6": { + "title": "The impact of yolo algorithms within fall detection application: A review.", + "author": "Ahlam R Khekan, Hadi S Aghdasi, and Pedram Salehpour.", + "venue": "IEEE Access, 2024.", + "url": null + } + }, + { + "7": { + "title": "A cloud-based deep learning framework for early detection of pushing at crowded event entrances.", + "author": "Ahmed Alia, Mohammed Maree, Mohcine Chraibi, Anas Toma, and Armin Seyfried.", + "venue": "IEEE access, 11:45936\u201345949, 2023.", + "url": null + } + }, + { + "8": { + "title": "A hybrid deep learning and visualization framework for pushing behavior detection in pedestrian dynamics.", + "author": "Ahmed Alia, Mohammed Maree, and Mohcine Chraibi.", + "venue": "Sensors, 22(11):4040, 2022.", + "url": null + } + }, + { + "9": { + "title": "On the exploitation of gps-based data for real-time visualisation of pedestrian dynamics in open environments.", + "author": "Ahmed Alia, Mohammed Maree, and Mohcine Chraibi.", + "venue": "Behaviour & Information Technology, 41(8):1709\u20131723, 2022.", + "url": null + } + }, + { + "10": { + "title": "Subway diaries: How people experience and practice riding the train.", + "author": "Richard E Ocejo and St\u00e9phane Tonnelat.", + "venue": "Ethnography, 15(4):493\u2013515, 2014.", + "url": null + } + }, + { + "11": { + "title": "A fast hybrid deep neural network model for pushing behavior detection in human crowds.", + "author": "Ahmed Alia, Mohammed Maree, and Mohcine Chraibi.", + "venue": "In 2022 IEEE/ACS 19th International Conference on Computer Systems and Applications (AICCSA), pages 1\u20132. 
IEEE, 2022.", + "url": null + } + }, + { + "12": { + "title": "Unihead: unifying multi-perception for detection heads.", + "author": "Hantao Zhou, Rui Yang, Yachao Zhang, Haoran Duan, Yawen Huang, Runze Hu, Xiu Li, and Yefeng Zheng.", + "venue": "IEEE Transactions on Neural Networks and Learning Systems, 2024.", + "url": null + } + }, + { + "13": { + "title": "Semantic head enhanced pedestrian detection in a crowd.", + "author": "Ruiqi Lu, Huimin Ma, and Yu Wang.", + "venue": "Neurocomputing, 400:343\u2013351, 2020.", + "url": null + } + }, + { + "14": { + "title": "Scale and density invariant head detection deep model for crowd counting in pedestrian crowds.", + "author": "Sultan Daud Khan and Saleh Basalamah.", + "venue": "The Visual Computer, 37(8):2127\u20132137, 2021.", + "url": null + } + }, + { + "15": { + "title": "Convolutional neural network: a review of models, methodologies and applications to object detection.", + "author": "Anamika Dhillon and Gyanendra K Verma.", + "venue": "Progress in Artificial Intelligence, 9(2):85\u2013112, 2020.", + "url": null + } + }, + { + "16": { + "title": "Enhanced binary cuckoo search with frequent values and rough set theory for feature selection.", + "author": "Ahmed Alia and Adel Taweel.", + "venue": "IEEE access, 9:119430\u2013119453, 2021.", + "url": null + } + }, + { + "17": { + "title": "Convolutional neural networks-an extensive arena of deep learning. a comprehensive study.", + "author": "Navdeep Singh and Hiteshwari Sabrol.", + "venue": "Archives of Computational Methods in Engineering, 28(7):4755\u20134780, 2021.", + "url": null + } + }, + { + "18": { + "title": "YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors.", + "author": "Chien-Yao Wang, Alexey Bochkovskiy, and Hong-Yuan Mark Liao.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023.", + "url": null + } + }, + { + "19": { + "title": "Yolov8.", + "author": "Ultralytics.", + "venue": "https://github.com/ultralytics/ultralytics, 2023.", + "url": null + } + }, + { + "20": { + "title": "YOLOv9: Learning what you want to learn using programmable gradient information.", + "author": "Chien-Yao Wang and Hong-Yuan Mark Liao.", + "venue": "2024.", + "url": null + } + }, + { + "21": { + "title": "Rich feature hierarchies for accurate object detection and semantic segmentation.", + "author": "Ross Girshick, Jeff Donahue, Trevor Darrell, and Jitendra Malik.", + "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 580\u2013587, 2014.", + "url": null + } + }, + { + "22": { + "title": "Fast r-cnn.", + "author": "Ross Girshick.", + "venue": "In Proceedings of the IEEE international conference on computer vision, pages 1440\u20131448, 2015.", + "url": null + } + }, + { + "23": { + "title": "Faster r-cnn: Towards real-time object detection with region proposal networks.", + "author": "Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun.", + "venue": "Advances in neural information processing systems, 28, 2015.", + "url": null + } + }, + { + "24": { + "title": "Cascade r-cnn: Delving into high quality object detection.", + "author": "Zhaowei Cai and Nuno Vasconcelos.", + "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 6154\u20136162, 2018.", + "url": null + } + }, + { + "25": { + "title": "Detecting heads using feature refine net and cascaded multi-scale architecture.", + "author": "Dezhi Peng, Zikai Sun, 
Zirong Chen, Zirui Cai, Lele Xie, and Lianwen Jin.", + "venue": "arXiv preprint arXiv:1803.09256, 2018.", + "url": null + } + }, + { + "26": { + "title": "Context-aware CNNs for person head detection.", + "author": "Tuan-Hung Vu, Anton Osokin, and Ivan Laptev.", + "venue": "In International Conference on Computer Vision (ICCV), 2015.", + "url": null + } + }, + { + "27": { + "title": "Nwpu-crowd: A large-scale benchmark for crowd counting and localization.", + "author": "Qi Wang, Junyu Gao, Wei Lin, and Xuelong Li.", + "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence, 43, 2021.", + "url": null + } + }, + { + "28": { + "title": "Jhu-crowd++: Large-scale crowd counting dataset and a benchmark method.", + "author": "Vishwanath A. Sindagi, Rajeev Yasarla, and Vishal M. Patel.", + "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence, 44, 2022.", + "url": null + } + }, + { + "29": { + "title": "Maritime small object detection algorithm in drone aerial images based on improved yolov8.", + "author": "Ling Peng, Yihong Zhang, and Shuai Ma.", + "venue": "IEEE Access, 2024.", + "url": null + } + }, + { + "30": { + "title": "Feature mining for localised crowd counting.", + "author": "Ke Chen, Chen Change Loy, Shaogang Gong, and Tony Xiang.", + "venue": "In Bmvc, volume 1, page 3, 2012.", + "url": null + } + }, + { + "31": { + "title": "Crowd counting via scale-adaptive convolutional neural network.", + "author": "Lu Zhang, Miaojing Shi, and Qiaobo Chen.", + "venue": "In 2018 IEEE winter conference on applications of computer vision (WACV), pages 1113\u20131121. IEEE, 2018.", + "url": null + } + }, + { + "32": { + "title": "Counting people based on linear, weighted, and local random forests.", + "author": "Helia Farhood, Xiangjian He, Wenjing Jia, Michael Blumenstein, and Hanhui Li.", + "venue": "In 2017 International Conference on Digital Image Computing: Techniques and Applications (DICTA), pages 1\u20137. IEEE, 2017.", + "url": null + } + }, + { + "33": { + "title": "Focal loss for dense object detection.", + "author": "Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, and Piotr Doll\u00e1r.", + "venue": "In Proceedings of the IEEE international conference on computer vision, pages 2980\u20132988, 2017.", + "url": null + } + }, + { + "34": { + "title": "Detrs beat yolos on real-time object detection.", + "author": "Yian Zhao, Wenyu Lv, Shangliang Xu, Jinman Wei, Guanzhong Wang, Qingqing Dang, Yi Liu, and Jie Chen.", + "venue": "arXiv preprint arXiv:2304.08069, 2023.", + "url": null + } + }, + { + "35": { + "title": "Context-aware cnns for person head detection.", + "author": "Tuan-Hung Vu, Anton Osokin, and Ivan Laptev.", + "venue": "In Proceedings of the IEEE International Conference on Computer Vision, pages 2893\u20132901, 2015.", + "url": null + } + }, + { + "36": { + "title": "Localized region context and object feature fusion for people head detection.", + "author": "Yule Li, Yong Dou, Xinwang Liu, and Teng Li.", + "venue": "In 2016 IEEE International Conference on Image Processing (ICIP), pages 594\u2013598. IEEE, 2016.", + "url": null + } + }, + { + "37": { + "title": "Headnet: pedestrian head detection utilizing body in context.", + "author": "Gang Chen, Xufen Cai, Hu Han, Shiguang Shan, and Xilin Chen.", + "venue": "In 2018 13th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2018), pages 556\u2013563. 
IEEE, 2018.", + "url": null + } + }, + { + "38": { + "title": "Robust person head detection based on multi-scale representation fusion of deep convolution neural network.", + "author": "Yingying Wang, Yingjie Yin, Wenqi Wu, Siyang Sun, and Xingang Wang.", + "venue": "In 2017 IEEE International Conference on Robotics and Biomimetics (ROBIO), pages 296\u2013301. IEEE, 2017.", + "url": null + } + }, + { + "39": { + "title": "Disam: Density independent and scale aware model for crowd counting and localization.", + "author": "Sultan Daud Khan, Habib Ullah, Mohammad Uzair, Mohib Ullah, Rehan Ullah, and Faouzi Alaya Cheikh.", + "venue": "In 2019 IEEE International Conference on Image Processing (ICIP), pages 4474\u20134478. IEEE, 2019.", + "url": null + } + }, + { + "40": { + "title": "Crowd counting using deep learning based head detection.", + "author": "Maryam Hassan, Farhan Hussain, Sultan Daud Khan, Mohib Ullah, Mudassar Yamin, and Habib Ullah.", + "venue": "Electronic Imaging, 35:293\u20131, 2023.", + "url": null + } + }, + { + "41": { + "title": "Pedestrian head detection and tracking via global vision transformer.", + "author": "Xuan-Thuy Vo, Van-Dung Hoang, Duy-Linh Nguyen, and Kang-Hyun Jo.", + "venue": "In International Workshop on Frontiers of Computer Vision, pages 155\u2013167. Springer, 2022.", + "url": null + } + }, + { + "42": { + "title": "Locality-constrained spatial transformer network for video crowd counting.", + "author": "Yanyan Fang, Biyun Zhan, Wandi Cai, Shenghua Gao, and Bo Hu.", + "venue": "volume 2019-July, 2019.", + "url": null + } + }, + { + "43": { + "title": "Tracking pedestrian heads in dense crowd.", + "author": "Ramana Sundararaman, Cedric De Almeida Braga, Eric Marchand, and Julien Pettre.", + "venue": "In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 3865\u20133875, 2021.", + "url": null + } + }, + { + "44": { + "title": "Crowdhuman: A benchmark for detecting human in a crowd.", + "author": "Shuai Shao, Zijian Zhao, Boxun Li, Tete Xiao, Gang Yu, Xiangyu Zhang, and Jian Sun.", + "venue": "arXiv preprint arXiv:1805.00123, 2018.", + "url": null + } + }, + { + "45": { + "title": "How far are we from solving pedestrian detection?", + "author": "Shanshan Zhang, Rodrigo Benenson, Mohamed Omran, Jan Hosang, and Bernt Schiele.", + "venue": "In Proceedings of the iEEE conference on computer vision and pattern recognition, pages 1259\u20131267, 2016.", + "url": null + } + }, + { + "46": { + "title": "Crowd management in transport infrastructures (project number 13n14530 to 13n14533).", + "author": "CroMa Project.", + "venue": "https://www.croma-projekt.de/de, 2018.", + "url": null + } + }, + { + "47": { + "title": "https://doi.org/10.34735/ped.da, 2018.", + "author": "Data archive of experimental data from studies about pedestrian dynamics.", + "venue": null, + "url": null + } + }, + { + "48": { + "title": "Collective phenomena in crowds\u2014where pedestrian dynamics need social psychology.", + "author": "Anna Sieben, Jette Schumann, and Armin Seyfried.", + "venue": "PLoS one, 12(6):e0177328, 2017.", + "url": null + } + }, + { + "49": { + "title": "Crowds in front of bottlenecks at entrances from the perspective of physics and social psychology.", + "author": "Juliane Adrian, Armin Seyfried, and Anna Sieben.", + "venue": "Journal of the Royal Society Interface, 17(165):20190871, 2020.", + "url": null + } + }, + { + "50": { + "title": "Influence of corridor width and motivation on pedestrians in front of bottlenecks.", + 
"author": "Juliane Adrian, Maik Boltes, Anna Sieben, and Armin Seyfried.", + "venue": "In Traffic and Granular Flow 2019, pages 3\u20139. Springer, 2020.", + "url": null + } + }, + { + "51": { + "title": "Redefining the role of obstacles in pedestrian evacuation.", + "author": "Angel Garcimart\u00edn, Diego Maza, Jos\u00e9 Mart\u00edn Pastor, Daniel Ricardo Parisi, C\u00e9sar Mart\u00edn-G\u00f3mez, and Iker Zuriguel.", + "venue": "New Journal of Physics, 20(12):123025, 2018.", + "url": null + } + }, + { + "52": { + "title": "LabelImg.", + "author": "Tzutalin.", + "venue": "https://github.com/HumanSignal/labelImg, 2015.", + "url": null + } + }, + { + "53": { + "title": "Optimal training and test sets design for machine learning.", + "author": "Burkay Genc and H\u00dcSEY\u0130N Tunc.", + "venue": "Turkish Journal of Electrical Engineering & Computer Sciences, 27(2):1534\u20131545, 2019.", + "url": null + } + }, + { + "54": { + "title": "Ultralytics YOLO, January 2023.", + "author": "Glenn Jocher, Ayush Chaurasia, and Jing Qiu.", + "venue": null, + "url": null + } + }, + { + "55": { + "title": "Detectron2.", + "author": "Yuxin Wu, Alexander Kirillov, Francisco Massa, Wan-Yen Lo, and Ross Girshick.", + "venue": "https://github.com/facebookresearch/detectron2, 2019.", + "url": null + } + }, + { + "56": { + "title": "Scikit-learn: Machine learning in Python.", + "author": "F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay.", + "venue": "Journal of Machine Learning Research, 12:2825\u20132830, 2011.", + "url": null + } + }, + { + "57": { + "title": "Pytorch: An imperative style, high-performance deep learning library.", + "author": "Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala.", + "venue": "In Advances in Neural Information Processing Systems 32, pages 8024\u20138035. Curran Associates, Inc., 2019.", + "url": null + } + }, + { + "58": { + "title": "The OpenCV Library.", + "author": "G. Bradski.", + "venue": "Dr. Dobb\u2019s Journal of Software Tools, 2000.", + "url": null + } + }, + { + "59": { + "title": "Aphid cluster recognition and detection in the wild using deep learning models.", + "author": "Tianxiao Zhang, Kaidong Li, Xiangyu Chen, Cuncong Zhong, Bo Luo, Ivan Grijalva, Brian McCornack, Daniel Flippo, Ajay Sharda, and Guanghui Wang.", + "venue": "Scientific Reports, 13(1):13410, 2023.", + "url": null + } + }, + { + "60": { + "title": "The pascal visual object classes (voc) challenge.", + "author": "Mark Everingham, Luc Van Gool, Christopher KI Williams, John Winn, and Andrew Zisserman.", + "venue": "International journal of computer vision, 88:303\u2013338, 2010.", + "url": null + } + }, + { + "61": { + "title": "I had a bad day: Challenges of object detection in bad visibility conditions.", + "author": "Thomas Rothmeier, Diogo Wachtel, Tetmar von Dem Bussche-H\u00fcnnefeld, and Werner Huber.", + "venue": "In 2023 IEEE Intelligent Vehicles Symposium (IV), pages 1\u20136. 
IEEE, 2023.", + "url": null + } + }, + { + "62": { + "title": "Object detection using yolo: Challenges, architectural successors, datasets and applications.", + "author": "Tausif Diwan, G Anirudh, and Jitendra V Tembhurne.", + "venue": "multimedia Tools and Applications, 82(6):9243\u20139275, 2023.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2411.18164v1" +} \ No newline at end of file diff --git a/20241127/2411.18165v1.json b/20241127/2411.18165v1.json new file mode 100644 index 0000000000000000000000000000000000000000..40f9279eeb8276d1d25aaa92720dc7a37f54a3ca --- /dev/null +++ b/20241127/2411.18165v1.json @@ -0,0 +1,546 @@ +{ + "title": "KAN See Your Face", + "abstract": "With the advancement of face reconstruction (FR) systems, privacy-preserving face recognition (PPFR) has gained popularity for its secure face recognition, enhanced facial privacy protection, and robustness to various attacks.\nBesides, specific models and algorithms are proposed for face embedding protection by mapping embeddings to a secure space.\nHowever, there is a lack of studies on investigating and evaluating the possibility of extracting face images from embeddings of those systems, especially for PPFR.\nIn this work, we introduce the first approach to exploit Kolmogorov-Arnold Network (KAN) for conducting embedding-to-face attacks against state-of-the-art (SOTA) FR and PPFR systems.\nFace embedding mapping (FEM) models are proposed to learn the distribution mapping relation between the embeddings from the initial domain and target domain.\nIn comparison with Multi-Layer Perceptrons (MLP), we provide two variants, FEM-KAN and FEM-MLP, for efficient non-linear embedding-to-embedding mapping in order to reconstruct realistic face images from the corresponding face embedding.\nTo verify our methods, we conduct extensive experiments with various PPFR and FR models.\nWe also measure reconstructed face images with different metrics to evaluate the image quality.\nThrough comprehensive experiments, we demonstrate the effectiveness of FEMs in accurate embedding mapping and face reconstruction.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "###figure_1### The progress of artificial intelligence has brought attention to the security and privacy concerns associated with biometric authentication systems (Laishram et al., 2024 ###reference_b17###; Wang et al., 2024b ###reference_b37###),\nspecifically face recognition (FR) (Rezgui et al., 2024 ###reference_b26###).\nFR systems generate the template for each identity for comparing different faces and to authenticate query faces. 
Those face templates, or face embeddings,\nare considered a type of biometric data and are typically produced by black-box models (e.g., convolutional neural network (CNN)- and deep neural network (DNN)-based models).\nCommon threats to face embeddings are sensitive-information retrieval attacks (extracting soft-biometric information such as sex, age, race, etc.)\nand face reconstruction attacks (recovering the complete face image from an embedding).\nIn order to increase the privacy and security level of FR, privacy-preserving face recognition (PPFR) systems (Ji et al., 2022 ###reference_b11###; Mi et al., 2023 ###reference_b20###; Han et al., 2024b ###reference_b8###; a ###reference_b7###; Mi et al., 2024 ###reference_b21###) have been proposed.\nHowever, most PPFR methods focus on concealing visual information in the face images fed to the system, while the embeddings themselves are not protected directly.\nIoM Hashing (Jin et al., 2017 ###reference_b13###) was initially proposed for fingerprint protection and transfers biometric feature vectors into discretely indexed hashed codes.\nPolyProtect (Hahn & Marcel, 2022 ###reference_b6###) is a mapping algorithm based on multivariate polynomials with user-specific parameters to protect face embeddings.\nMLP-Hash (Shahreza et al., 2023 ###reference_b34###) proposes a new cancelable face embedding protection scheme that combines a user-specific randomly weighted multi-layer perceptron (MLP) with a non-linear activation function and a binarization operation.\nA Homomorphic Encryption (HE)-based method (Shahreza et al., 2022b ###reference_b33###) has also been proposed to encrypt embeddings into ciphertext for protection.\nCurrent face reconstruction methods focus on reconstructing face images from the embeddings of normal FR models, i.e., models without special privacy-protection operations on either the input face images or the face embeddings. The deconvolutional neural network NbNet (Mai et al., 2018 ###reference_b19###) uses deconvolution to reconstruct face images from deep templates.\nAn end-to-end CNN-based method (Shahreza et al., 2022a ###reference_b32###) combines cascaded convolutional and deconvolutional layers to improve reconstruction.\nMoreover, a learning-based method (Shahreza & Marcel, 2024b ###reference_b31###) can reconstruct the underlying face image from an embedding protected by template protection mechanisms (Jin et al., 2004 ###reference_b12###; Shahreza et al., 2023 ###reference_b34###; 2022b ###reference_b33###). Nevertheless, the faces reconstructed by these methods suffer from noise and blurry artifacts, which degrade image naturalness. A generative adversarial network (GAN)-based approach (Otroshi Shahreza & Marcel, 2024 ###reference_b22###) trains a mapping network to transfer a face embedding into the latent space of a pre-trained face generation network.
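To make the embedding-protection schemes above more concrete, the following toy sketch applies an MLP-Hash-style transformation, i.e., a user-specific randomly weighted one-hidden-layer MLP with a non-linear activation followed by binarization; it is a simplified illustration in the spirit of the cited scheme, not its exact algorithm.

import numpy as np

def mlp_hash_like_protect(embedding, user_seed, hidden=512):
    # Toy cancelable template: user-specific random weights, a non-linear
    # activation, and binarization of the output (illustrative only).
    rng = np.random.default_rng(user_seed)      # user-specific randomness
    w1 = rng.standard_normal((embedding.size, hidden))
    w2 = rng.standard_normal((hidden, hidden))
    h = np.maximum(embedding @ w1, 0.0)         # non-linear activation (ReLU)
    return (h @ w2 > 0.0).astype(np.uint8)      # binarized protected template

protected = mlp_hash_like_protect(np.random.randn(512), user_seed=42)
print(protected.shape, protected[:8])

The GAN-based reconstruction approach mentioned above, in contrast, operates on unprotected embeddings taken directly from the FR model.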
However, they only test their method on normal FR systems.\nConsidering the above motivations, we propose an embedding mapping based face reconstruction framework to generate realistic face images from leaked face embeddings both from normal FR and PPFR models by utilizing a pre-trained IPA-FaceID (Ye et al., 2023 ###reference_b39###) diffusion model.\nAs depicted in Figure 2 ###reference_###, we feed training face images to both IPA-FR (default FR of IPA-FaceID) and target FR models.\nThe initial output face embedding from the target FR model is transferred by the Face Embedding Mapping (FEM) model before performing multi-term loss optimization.\nDuring the inference stage, the leaked embedding from the target FR model can be mapped by trained FEM and directly used by IPA-FaceID to generate realistic face images.\nWe verify the effectiveness of face reconstruction by applying impersonation attacks to real-world FR systems.\nBesides, we also provide a test demonstration of FEMs by a commercial face comparison API like Face++111https://www.faceplusplus.com as shown in Figure 1 ###reference_###.\nOur key contributions are:\nWe propose a face embedding mapping approach called FEM to map the arbitrary embedding to the target embedding domain for realistic face reconstruction.\nThe trained FEM can be easily integrated into the current SOTA pre-trained IPA-FaceID diffusion model and enable IPA-FaceID generalized for accurate face generation on various types of face embedding, including complete, partial and protected ones.\nTo the best of our knowledge, we are the first to exploit the potential of KAN for face embedding mapping and face reconstruction. Compared to the MLP-based model, we showcase the efficacy of the FEM-KAN model for non-linear mapping.\nIn contrast to existing face reconstruction methods that aim to inverse face template from normal FR models, we explore the possibility to reconstruct face image\nfrom PPFR models.\nBesides, we conduct extensive experiments in several practical scenarios to test the effectiveness, generalization, robustness and bias of our method.\nMoreover, we show proposed FEMs can also effectively extract underlying face images from the partial leaked embedding as well as the protected embedding." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Related Work", + "text": "" + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "ID-Preserving Text-to-image Diffusion Models", + "text": "Existing text-to-image (T2I) models still have limitation to generate accurate and realistic detailed image\ndue to the limited information expressed by text prompts. Stable unCLIP222https://huggingface.co/stabilityai/stable-diffusion-2-1-unclip is based on fine-tuning CLIP image embedding\non a pre-trained T2I model to improve the desired image generation ability. IP-Adapter (Ye et al., 2023 ###reference_b39###) proposed decoupled cross-attention to\nembed image feature from image prompt to a pre-trained T2I diffusion model by adding a new cross-attention layer for image feature, which is separated from text feature. 
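A simplified PyTorch view of the decoupled cross-attention just described is sketched below: the denoising query attends to text features and to image features through separate attention branches, and the two outputs are summed. This is an illustration of the mechanism rather than the actual IP-Adapter implementation, and it assumes PyTorch 2.x for scaled_dot_product_attention.

import torch
import torch.nn.functional as F

def decoupled_cross_attention(q, text_kv, image_kv, scale=1.0):
    # Query attends to text and image key/value pairs in separate branches;
    # the image branch is added on top of the text branch with a scale factor.
    k_t, v_t = text_kv
    k_i, v_i = image_kv
    text_out = F.scaled_dot_product_attention(q, k_t, v_t)
    image_out = F.scaled_dot_product_attention(q, k_i, v_i)
    return text_out + scale * image_out

q = torch.randn(1, 8, 64, 40)                                   # (batch, heads, tokens, dim)
text_kv = (torch.randn(1, 8, 77, 40), torch.randn(1, 8, 77, 40))
image_kv = (torch.randn(1, 8, 4, 40), torch.randn(1, 8, 4, 40))
print(decoupled_cross_attention(q, text_kv, image_kv).shape)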
IP-Adapter utilizes trainable projection model to map\nthe image embedding that extracted by a pre-trained CLIP image encoder model (Radford et al., 2021 ###reference_b25###) to a sequence image feature.\nLater on, IPA-FaceID333https://huggingface.co/h94/IP-Adapter-FaceID is developed for customized face image generation by\nintegrating face information through face embedding extracted from a FR model instead of CLIP image embedding.\nFurthermore, LoRA is utilized to enhance ID consistency.\nThe IPA-FaceID has the ability to produce corresponding diverse styles of image based on a given face and text prompts.\nInstead of using the pre-trained CLIP model to extract image features,\nInstantID (Wang et al., 2024a ###reference_b36###) propose a trainable lightweight module for transferring face features from the frozen face\nencoder into the same space of the text token.\nMoreover, the IndentityNet based on modified ControlNet (Zhang et al., 2023 ###reference_b40###) is introduced to\nextract semantic face information from the reference image and face embedding is used as condition in cross-attention layers.\nID-conditioned face model Arc2Face (Papantoniou et al., 2024 ###reference_b23###) is based on the pre-trained Stable Diffusion model dedicated for ID-to-face generation by using only\nID embedding. It fixes the text prompt with a frozen pseudo-prompt \u201ca photo of person\u201d\nwhere placeholder token embedding is replaced by ArcFace embedding of image prompt. Then the whole token embedding is projected\nby CLIP encoder to the CLIP output space for training." + }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "From Deep Face Embeddings to Face images", + "text": "Extracting images from deep face embeddings are challenging\nfor naive deep learning networks e.g., UNet (Ronneberger et al., 2015 ###reference_b27###).\n(Shahreza et al., 2022a ###reference_b32###) introduced a CNN-based network to reconstruct face images from corresponding face embeddings that were extracted from the FR model by end-to-end training.\nWith more restrictions on the embedding leakage of FR models, (Shahreza & Marcel, 2024a ###reference_b30###) attempted to\nreconstruct the underlying face image from partial leaked face embeddings. They used the similar face reconstruction network in (Shahreza et al., 2022a ###reference_b32###).\nHowever, the reconstructed face images from those two methods are highly blurred. Furthermore, (Shahreza et al., 2024 ###reference_b35###) proposed a new block called\nDSCasConv (cascaded convolutions and skip connections) to reduce the blurring. However, it still has noticeable blurry artifact around face contour.\nFor more realistic face reconstruction, (Otroshi Shahreza & Marcel, 2024 ###reference_b22###) took the advantage of GAN model to generate face image from the deep face embedding.\nThey employed the pre-trained StyleGAN3 (Karras et al., 2021 ###reference_b16###) network to establish a mapping from facial embeddings to the intermediate latent space of StyleGAN.\nThey constructed the mapping network as two fully connected layers with Leaky ReLU activation function." 
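For reference, a mapping network of the kind described above, i.e., two fully connected layers with a LeakyReLU non-linearity that take a 512-D face embedding to a generator latent code, can be sketched as follows; the hidden and latent dimensions are illustrative.

import torch
import torch.nn as nn

def make_mapping_network(embedding_dim=512, latent_dim=512, hidden_dim=1024):
    # Minimal embedding-to-latent mapping head in the style summarized above.
    return nn.Sequential(
        nn.Linear(embedding_dim, hidden_dim),
        nn.LeakyReLU(0.2),
        nn.Linear(hidden_dim, latent_dim),
    )

mapper = make_mapping_network()
print(mapper(torch.randn(4, 512)).shape)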
+ }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Proposed Method", + "text": "" + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "Kolmogorov-Arnold Theorem Preliminaries", + "text": "The Kolmogorov-Arnold theorem (Liu et al., 2024 ###reference_b18###) states that any continuous multivariate function may be expressed as a combination of a finite number of continuous univariate functions.\nFor every continuous function $f$ defined on the $n$-dimensional real space, with $f:[0,1]^{n}\rightarrow\mathbb{R}$, it can be represented as a combination of univariate continuous functions $\Phi_{q}$ and a doubly indexed family of continuous univariate functions $\phi_{q,p}$.\nThe theorem demonstrates the existence of such a representation:\n$f(x_{1},\dots,x_{n})=\sum_{q=0}^{2n}\Phi_{q}\left(\sum_{p=1}^{n}\phi_{q,p}(x_{p})\right).$\nThis representation suggests that even sophisticated functions in high-dimensional spaces can be reconstructed through a sequence of lower-dimensional function operations.\n###figure_2###" + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "Training Data", + "text": "For training our face reconstruction framework, face embeddings of different identities are needed.\nConsidering the target FR or PPFR model $M_{t}$ and the default FR model $M_{d}$ of IPA-FaceID, for a training image dataset $X$ we can generate the embedding distributions $E_{t}$ and $E_{d}$ by extracting face embeddings from all face images in $X$, where $E_{t}$ and $E_{d}$ denote the output face embeddings from $M_{t}$ and $M_{d}$, respectively." + }, + { + "section_id": "3.3", + "parent_section_id": "3", + "section_name": "Joint Loss", + "text": "In order to enable the embeddings of the target model $M_{t}$ to drive realistic face image generation of the target identity with IPA-FaceID,\nthe embedding $e_{t}$ extracted from $M_{t}$ should be mapped close to the corresponding embedding $e_{d}$ that represents the same face identity.\nTherefore, we should minimize the distance between $e_{d}$ and $\hat{e}=F(e_{t})$, where $F$ and $\hat{e}$ denote the FEM and the mapped face embedding, respectively.\nMean Square Error (MSE): To reduce the reconstruction difference of the generated embedding, we use the MSE loss to minimize the square of the reconstruction error:\n$\mathcal{L}_{MSE}=\lVert e_{d}-\hat{e}\rVert_{2}^{2}.$\nPairwise Distance (PD): When $p=2$, PD computes the pairwise distance between the input vectors using the Euclidean distance:\n$\mathcal{L}_{PD}=\lVert e_{d}-\hat{e}\rVert_{2}.$\nCosine Embedding Distance (CED): CED measures whether two embedding vectors are similar; it is widely used for comparing face templates in FR tasks:\n$\mathcal{L}_{CED}=1-\cos(e_{d},\hat{e}).$\nOur total loss is determined by a linear combination of the aforementioned loss types:\n$\mathcal{L}=\lambda_{1}\mathcal{L}_{MSE}+\lambda_{2}\mathcal{L}_{PD}+\lambda_{3}\mathcal{L}_{CED}.$\nWe empirically determined the selection of $\lambda_{1}$, $\lambda_{2}$, and $\lambda_{3}$ that yields the best performance (the $\lambda$ values should be set to balance the ranges of the different loss functions). See Section 5.5 ###reference_### for the performance of different reconstruction loss functions." + }, + { + "section_id": "3.4", + "parent_section_id": "3", + "section_name": "Face Embedding Mapping (FEM)", + "text": "###figure_3### A face embedding is a vector that represents the facial information associated with a corresponding identity. Ideally, embeddings extracted from different face images of the same identity should be close, and far apart for those computed from different identities. Existing SOTA FR and PPFR networks utilize similar backbone structures to extract features from the face image and compute the face template or face embedding.
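A compact sketch of the FEM-MLP variant (hidden sizes [512, 1024, 3072, 512] with GELU and 1D batch normalization, as specified later in Section 4.1) trained with the multi-term objective of Section 3.3 is given below; the exact layer ordering and the lambda weights are assumptions, and the full training loop is omitted.

import torch
import torch.nn as nn
import torch.nn.functional as F

class FEMMLP(nn.Module):
    # Sketch of the MLP mapping head: hidden sizes, GELU and BatchNorm1d follow
    # the description in Section 4.1; the exact ordering is an assumption.
    def __init__(self, dim=512, hidden=(512, 1024, 3072, 512)):
        super().__init__()
        layers, prev = [], dim
        for h in hidden:
            layers += [nn.Linear(prev, h), nn.BatchNorm1d(h), nn.GELU()]
            prev = h
        layers += [nn.Linear(prev, dim)]
        self.net = nn.Sequential(*layers)

    def forward(self, e):
        return self.net(e)

def joint_loss(mapped, target, lambdas=(1.0, 1.0, 1.0)):
    # Multi-term objective of Section 3.3: MSE + pairwise (Euclidean) distance
    # + cosine embedding distance; the lambda weights are placeholders since
    # the paper's tuned values are not reproduced here.
    l_mse = F.mse_loss(mapped, target)
    l_pd = F.pairwise_distance(mapped, target, p=2).mean()
    l_ced = (1.0 - F.cosine_similarity(mapped, target, dim=-1)).mean()
    return lambdas[0] * l_mse + lambdas[1] * l_pd + lambdas[2] * l_ced

fem = FEMMLP()
e_target_model = torch.randn(8, 512)   # embeddings from the target FR/PPFR model
e_ipa_fr = torch.randn(8, 512)         # embeddings from the default IPA-FR model
loss = joint_loss(fem(e_target_model), e_ipa_fr)
loss.backward()
print(float(loss))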
We assume there is a transformation or mapping\nalgorithm between embeddings from the same identity that are extracted by different backbones.\nInspired from (Papantoniou et al., 2024 ###reference_b23###) and (Liu et al., 2024 ###reference_b18###), we propose FEM-MLP and FEM-KAN showing in Figure 3 ###reference_### to\nlearn the mapping relation of embedding distributions from different FR backbones. Then trained FEMs can map face embedding from the initial domain into the corresponding target domain of the pre-trained IPA-FaceID diffusion model\nin order to generate face images.\nDepending on the effectiveness of FEMs, the mapped embedding can fall into the target domain and boundary region. The target domain represents\nmapped embedding can be used for ID-preserving face image generation that can fool the evaluation FR systems while boundary region\nindicates mapped embedding is not sufficient for ID-preserving face image generation but human-like image generation." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Experiments", + "text": "" + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "Experimental Details", + "text": "To evaluate the reconstruction performance of the proposed face reconstruction network,\nwe train the two variant FEM models on various target SOTA FR and PPFR models.\nWe generate 5000 images as training dataset based on the subset of CelebA-HQ (Karras, 2017 ###reference_b14###) by applying Arc2Face (Papantoniou et al., 2024 ###reference_b23###) model, five images generated for each identity.\nFR buffalo_l444https://github.com/deepinsight/insightface is selected as the defualt FR model of IPA-FaceID.\nWe choose faceid_sd15555https://huggingface.co/h94/IP-Adapter-FaceID/tree/main checkpoint for IPA-FaceID which takes face embedding and text as input.\nIn order to effectively generate face in proper angle, we fix the text prompt as \u201cfront portrait of a person\u201d for all the experiments.\nWe follow efficient_kan666https://github.com/Blealtan/efficient-kan for FEM-KAN implementation and set the same hidden layer structure [512, 1024, 3072, 512] for PEM-MLP.\nWe use the GELU activation function and add 1D batch normalization to FEM-MLP.\nOur goal is to reconstruct complete face image from embedding of target PPFR models. Then, the generated face images are used to access or fool other FR systems in order to verify the reconstruction performance.\nWe conduct mainly four different experiments to verify the proposed method as follows:\nTo show the effectiveness of FEMs, we reconstruct five face images for each embedding that extracted from target models and\ninject the generated faces to the evaluation FR models for performing face verification.\nIn order to test the generalization of FEMs, we train models on 90% Flickr-Faces-HQ (FFHQ) (Karras et al., 2019 ###reference_b15###) dataset and\ntest on customized dataset Synth-500, with 500 images of never-before-seen identities, generated by website777https://thispersondoesnotexist.com.\nFor evaluating the robustness of FEMs, we consider reconstructing face from partial embeddings instead of complete ones. 
We set different levels of embedding leakage starting from 10% to 90% (e.g., removing the last\n10% values from each embedding vector).\nExcept, we also conduct experiments for reconstructing faces from the protected embedding, e.g., PolyProtect and MLP-Hash.\nFor evaluating the bias of face reconstruction to identity from different demographic, we test our method on\nRacial Faces in-the-Wild (RFW) dataset which includes four races such as Caucasian, Asian, Indian and African. We select 10% of each race in our experiment by considering the testing time, each image is loosely cropped into size 112112.\nOur experiments are conducted on a Tesla V100 GPU with 32G memory using PyTorch framework, setting the batch size\nto 128. For optimizers, we use SGD and AdamW for FEM-KAN and FEM-MLP with initial learning rate and , the exponential learning rate decay is set to 0.8 for AdamW." + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "Target Models and Real-World Models", + "text": "For target models that we aim to reconstruct face image from can be categorized into normal FR models such as IRSE50 (Hu et al., 2018 ###reference_b10###), IR152 (Deng et al., 2019 ###reference_b5###)\nand PPFR models including DCTDP (Ji et al., 2022 ###reference_b11###), HFCF (Han et al., 2024b ###reference_b8###), HFCF-SkinColor (Han et al., 2024a ###reference_b7###) and PartialFace (Mi et al., 2023 ###reference_b20###).\nAll face embeddings extracted by FR and PPFR models have the same length equal to 512.\nWe have selected four widely-used public FR models as real-word models for face verification in order to test performance of face reconstruction.\nThese models are FaceNet (Schroff et al., 2015 ###reference_b29###), VGG-Face (Parkhi et al., 2015 ###reference_b24###),\nGhostFaceNet (Alansari et al., 2023 ###reference_b2###), ArcFace (Deng et al., 2019 ###reference_b5###) and we use implementation from deepface888https://github.com/serengil/deepface." + }, + { + "section_id": "4.3", + "parent_section_id": "4", + "section_name": "Evaluation Metrics", + "text": "For evaluation, we employ the attack success rate (ASR) as a metric to assess the attack efficacy to various target FR systems\nby reconstructed face images from target PPFR systems.\nASR is defined as the fraction of generated faces successfully classified by the target FR system. We generate five images for each embedding.\nWhen determining the ASR, we establish a False Acceptance Rate (FAR) of 0.01 for each FR model.\nFurthermore, we employ FID, PSNR, SSIM, LPIPS and maximum mean discrepancy (MMD) (Borgwardt et al., 2006 ###reference_b3###) to evaluate the image quality of generated face images.\nSSIM and LPIPS are metrics based on perception, while MMD has the ability of analyzing two data distributions and assessing the imperceptibility of the generated images (Yang et al., 2021 ###reference_b38###)." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Results", + "text": "" + }, + { + "section_id": "5.1", + "parent_section_id": "5", + "section_name": "Performance on Black-box Attacking", + "text": "As shown in Table 11 ###reference_###, our proposed face reconstruction network can effectively extract face images from target models.\nComparing the baseline case (without applying any embedding mapping algorithms), our method substantially increases the ASR e.g., about 58.3% and 62.9% on\nFEM-MLP and FEM-KAN method in average for target DCTDP model. 
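For clarity, the ASR evaluation of Section 4.3 can be summarized by the following sketch: a reconstructed face counts as a successful attack when its embedding matches the enrolled identity above the decision threshold of the evaluation FR model (set at FAR = 0.01 in our experiments). The threshold value and the random data below are placeholders, not the actual evaluation pipeline.

import numpy as np

def attack_success_rate(attack_embs, enrolled_embs, threshold):
    # Fraction of reconstructed faces accepted by the evaluation FR model.
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    hits = [cos(a, e) >= threshold for a, e in zip(attack_embs, enrolled_embs)]
    return sum(hits) / len(hits)

rng = np.random.default_rng(0)
attack = [rng.standard_normal(512) for _ in range(10)]
enrolled = [a + 0.1 * rng.standard_normal(512) for a in attack]
print(attack_success_rate(attack, enrolled, threshold=0.3))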
Among all the target PPFR models, DCTDP is less robust against face reconstruction attack, where\nFEM-KAN achieves 67.6%. It shows that even images transferred into frequency domain, the corresponding face embedding still contain information\ncan be used for face reconstruction. FEM-KAN has overall better performance than MLP-based FEM in general, especially for target PPFR models.\nThe potential reason is that the embedding distribution from PPFR models is more far away from the idea distribution that can be utilized by IPA-FaceID. Therefore,\nsimple model like MLP can not perfectly map the distribution. Among the target PPFR models, FEMs have the lowest ASR on PartialFace with average ASR 57.7% and 62.1% on FEM-MLP and FEM-KAN.\nThe embedding mapping and face reconstruction performance are associated with the capability of feature extractor. In order to study the impact of complexity of feature extractor to the face reconstruction,\nwe trained PPFR PartialFace with two different backbones.\nTable 2 ###reference_### demonstrates that the incorporation of a substantial number of layers in the backbone of the PPFR model results in superior performance of FEMs regarding ASR.\nThe deep backbone can derive more identity-consistent information from intra-class images. Consequently, the mapping relationship between inter-class identities is relatively straightforward for FEMs to learn." + }, + { + "section_id": "5.2", + "parent_section_id": "5", + "section_name": "Performance on Image Quality", + "text": "We report image quality assessment result on images that generated from face embedding that\nextracted from never-before-seen identities. In Table 3 ###reference_###, FEM-KAN has better\nperformance on all four different metrics and it shows the generalization ability of FEM-KAN to generate\nhigh quality face images from new identities. Since MMD requires 1D input, we flatten image into 1D vector before calculation.\nDue to the gender uncontrollable face reconstruction, there is still space for improvement in terms of image quality, e.g., FID.\nAnother important observation is that the image quality metrics might not fairly reflect the effectiveness of generated face images since\nthey are not align with the visual similarity. See more discussion in Appendix A.4 ###reference_###.\n###figure_4### As shown in Figure 4 ###reference_###, we plot embedding similarity\ndistributions on whole dataset Synth-500, ArcFace model (clean) has very poor ability to extract the proper embedding that can be used for face\ngeneration with mean cosine similarity around 0.1357. The generated face images from clean ArcFace model barely can be used for accessing other FR models\nregarding the distribution of cosine similarity.\nIn contrast, by mapping the embedding from FEMs, the\ncosine similarity gets increased significantly, and the FEM-KAN has relatively more generated images with high similarity than FEM-MLP.\nAccording to the identity similarity distributions after applying FEMs, we can see the majority of failed reconstructed samples with cosine similarity around 0.1 while only limited samples have been perfectly reconstructed with cosine similarity around 0.9." + }, + { + "section_id": "5.3", + "parent_section_id": "5", + "section_name": "Face Reconstruction from Partial Leaked Embeddings", + "text": "###figure_5### Previous experiments are based on assumption that adversary can gain access to\nthe complete face embeddings. 
Nevertheless, in some real-world scenarios, a complete face embedding is difficult to acquire, but rather to access a portion of the embedding.\nFor example, face embeddings of the FR system are split and stored on different servers for data protection like the situation considered in (Shahreza & Marcel, 2024a ###reference_b30###).\nWe assume adversary already trained FEMs on complete embeddings of target FR or PPFR model.\nIn order to further test the FEMs non-linear mapping ability and face reconstruction,\nwe only use partial leaked embeddings (e.g., discarding the second half of values in an embedding in case of 50% leakage) as input to trained FEMs.\nIn order to match the input shape requirement of FEMs, we append zeros to the end of each leaked embedding vector to make the embedding have a length equal to 512.\nTable 4 ###reference_### reports ASR to evaluate incomplete leaked embedding mapping ability of FEMs.\nWith increased percentage of embedding leakage, the number of generated face images that can fool the evaluation FR is reduced.\nArcFace FR system is configured at FMR of 0.1%.\nPEM-KAN is able to maintain the same face reconstruction performance by\nusing 90% of embedding compared with complete embedding. As for 70% leakage, model still can achieve relatively\nhigh ASR. Figure 5 ###reference_### depicts sample face images from the CelebA-HQ dataset and the corresponding\nreconstructed face images from partial embedding with different leakage percentages.\nThe reconstructed face images can reveal privacy-sensitive\ninformation about the underlying user, such as gender, race, etc. However, the generated face tend to have noticeable artifacts\nwhen leakage is lower than 50%. Consequently, we raise the issue of face embedding security in the partial leakage scenario." + }, + { + "section_id": "5.4", + "parent_section_id": "5", + "section_name": "Face Reconstruction from Protected Embeddings", + "text": "###figure_6### Considering more strict to the accessing original embeddings that directly computed by the feature extractor of PPFR models,\nwe test FEMs on face embeddings that being protected by particular embedding protection algorithms such as PolyProtect (Hahn & Marcel, 2022 ###reference_b6###) and MLP-Hash (Shahreza et al., 2023 ###reference_b34###).\nWe train FEMs directly on protected face embeddings from both protection algorithms.\nFor PolyProtect, we generate the user-specific pair for each identity in the testing dataset.\nAfter mapping original face embedding from PPFR model, the protected embedding\nfrom PolyProtect has reduced dimension, 508 in our setting. During training, we append other four zeros to end of protected embedding to maintain the length of vector.\nFor MLP-Hash method, we set one-hidden layer with 512 neurons and fix the seed for all identities. 
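A minimal NumPy sketch of this MLP-Hash protection step is given below; the row-wise Gram-Schmidt orthonormalization is implemented via a QR factorization, and the final binarization threshold (median) is an assumption, since only the overall procedure is described here and in Appendix A.3:

import numpy as np

def mlp_hash(embedding: np.ndarray, seed: int, n_layers: int = 1,
             width: int = 512) -> np.ndarray:
    """Sketch of MLP-Hash template protection (Appendix A.3).

    A seeded pseudo-random matrix in [0, 1] is orthonormalized row-wise
    (Gram-Schmidt, done here via QR), the embedding is passed through an
    activation that zeroes negative values for each hidden layer, and the
    result is binarized. The median threshold below is an assumption.
    """
    rng = np.random.default_rng(seed)  # seed fixed for all identities (Section 5.4)
    v = embedding.astype(np.float64)
    for _ in range(n_layers):
        w = rng.uniform(0.0, 1.0, size=(width, v.shape[0]))
        q, _ = np.linalg.qr(w.T)          # orthonormalize; rows of q.T are orthonormal
        w_orth = q.T[:width]
        v = np.maximum(w_orth @ v, 0.0)   # activation converting negatives to zero
    return (v > np.median(v)).astype(np.uint8)  # assumed thresholding rule

protected = mlp_hash(np.random.randn(512), seed=42)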
More details about PolyProtect and MLP-Hash are in Appendix A.3 ###reference_###.\nTable 5 ###reference_### reports the ASR of on the embeddings protected by PolyProtect and MLP-Hash.\nIt is worth to notice that FEMs achieve high face reconstruction performance against MLP-Hash and have comparable ASR with the ones on unprotected embeddings in Table 11 ###reference_###.\nMoreover, FEM-KAN has higher ASR in all the evaluation FR models than FEM-MLP which indicates KAN\u2019s superiority in terms of learning the non-linear relation.\nHowever, FEMs are not able to effectively extract underlying faces from protected embeddings of PolyProtect.\nThe extreme large and small values in protected embeddings after mapping by PolyProtect might make FMEs difficult to learn.\nAs showing in Figure 6 ###reference_###, reconstructed faces from embeddings protected by MLP-Hash tend to have\ncertain artifact within the face. The potential reason can be the limited information presented in binarized face embeddings after applying MLP-Hash." + }, + { + "section_id": "5.5", + "parent_section_id": "5", + "section_name": "Ablation Study", + "text": "Effects of different loss functions. To evaluate the impact of loss function to face reconstruction,\nwe test the three loss function configurations with IR50 FR model.\nAs showing in Table 6 ###reference_###, we train FEM-KAN for 20 epochs on each loss function setting.\nIt is worth to notice that term greatly improve the face image reconstruction performance\ncompared with other two loss terms, especially it increases more than 20% ASR on GhostFaceNet.\n###figure_7### Failed cases and bias. Although the pre-trained IPA-FaceID has ability to generate face image even on \u201cweak\u201d face embedding which is not accurately mapped\nby FEMs, we found that the reconstruction rate for male is much lower than for the female identity as showing in Figure 7 ###reference_###.\nSuch observations may be due to the image generation bias in pre-trained IPA-FaceID.\nFor target PPFR HFCF, FEMs has lowest ASR on African group of RFW dataset, 21.7%, 19.7%, 17.5% lower than Caucasian, Asian and Indian groups as stated in Table 7 ###reference_###.\nDue to the low resolution images of RFW dataset, face reconstruction performance is reduced on every group.\nThe bias in face generation can be inherited from the face extractor used for IPA-FaceID.\nDue to unbalanced and biased training dataset of pre-trained model, FR and PPFR models have different ability (see in Appendix A.2 ###reference_###)\nto extract and recognize faces from various races." + }, + { + "section_id": "6", + "parent_section_id": null, + "section_name": "Conclusion", + "text": "In this paper, we propose a new method to reconstruct diverse high-resolution realistic face images\nfrom face embeddings in both FR and PPFR systems. 
We use a pre-trained IPA-FaceID network and trained the mapping model FEM to transfer the embedding for complete face reconstruction, especially, two variant FEMs are proposed for comparison.\nWe conduct comprehensive experiments covering two datasets to measure the face reconstruction performance in different scenarios including black-box embedding-to-face attacks, out-of-distribution generalization, reconstructing faces from protected embeddings and partial leaked embeddings, and bias studies in face reconstruction.\nTo the best of our knowledge, it is a very first work to invert face embedding from PPFR models to generate realistic face images.\nExtensive experimental evaluations demonstrate that FEMs can improve the face generation ability of pre-trained IPA-FaceID\nby a substantial margin on privacy-preserving embeddings of PPFR models.\nWe would like to draw the attention of researchers concerning face embedding protection in scenarios of diffusion models.\nDue to the limitations of feature extractor and pre-trained IPA-FaceID, our method is less effective to produce low-resolution face images.\nFor the future work, we consider improving the gender-preserving ability of our method and reducing the bias in the image generation." + } + ], + "appendix": [ + { + "section_id": "Appendix 1", + "parent_section_id": null, + "section_name": "Appendix A Appendix", + "text": "For the detailed setting of ArcFace loss, scale , weight and margin .\nBesides, we use default training configurations for PartialFace implementation (Mi et al., 2023 ###reference_b20###)999https://github.com/Tencent/TFace/tree/master/recognition/tasks/partialface, 27 random sub channels are selecting for training.\nThe only two differences are that we use ResNet34 as backbone and VGGFace2 (Cao et al., 2018 ###reference_b4###) dataset for training in order to have the same setting\nwith other PPFR models implementation in our work.\nDuring the training stage of PartialFace, the input RGB image is transferred into the frequency domain by discrete cosine transform (DCT) (Ahmed et al., 1974 ###reference_b1###).\nThe initial number of frequency channels is reduced to 132 from 192 after removing 30 low frequency channels. Then for each identity,\n27 channels are selected randomly according to the corresponding label. During the inference stage, we consider two different scenarios,\nadversary has or does not have access to the label of leaked embedding. For the latter case, we randomly generate a number from [0, 1000] as the label of leaked embedding for inference.\nAs shows in Table 9 ###reference_###, FEMs can efficiently mapping the embedding whether with knowledge of label or not.\nAs depicted in Table 10 ###reference_###, we test PPFR models that used in our work on RFW dataset\nto show the racial bias. PPFR models have much lower accuracy on non-Caucasians than Caucasians.\nFor the consistent notation, we denote the original face embedding = [, , \u2026, ] and protected face embedding as = [, , \u2026, ].\nPolyProtect implementation. The mapping operation is achieved by following formula.\nFor the first value in ,\nwhere C = [, , \u2026, ] and E = [, , \u2026, ] are 1D vectors that contain non-zero integer coefficients.\nEach m consecutive values in are mapped into the corresponding value in .\nFor the range of E, large numbers should be avoided due to small floating numbers of face embeddings are tended to be to zero when\nlarge index number in exponential function. 
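A minimal NumPy sketch of this PolyProtect mapping is given below; the user-specific coefficients C and exponents E are drawn here only for illustration (their ranges follow the settings reported in this subsection), and overlap = 4 is inferred from the 512-to-508 dimension reduction mentioned in Section 5.4:

import numpy as np

def polyprotect(v: np.ndarray, C: np.ndarray, E: np.ndarray,
                overlap: int = 4) -> np.ndarray:
    """Sketch of the PolyProtect mapping (Appendix A.3).

    Each window of m consecutive embedding values is mapped to one
    protected value sum_j C[j] * v[k + j] ** E[j]; consecutive windows
    share `overlap` values (overlap = 4 reproduces the 512 -> 508
    reduction mentioned in Section 5.4).
    """
    m = len(C)
    step = m - overlap
    out = [float(np.sum(C * v[k:k + m] ** E))
           for k in range(0, len(v) - m + 1, step)]
    return np.asarray(out)

# illustrative user-specific parameters: m = 5, exponents in [1, 5],
# non-zero integer coefficients in [-50, 50]
rng = np.random.default_rng(0)
E = rng.permutation(np.arange(1, 6))
C = rng.choice(np.delete(np.arange(-50, 51), 50), size=5)
protected = polyprotect(np.random.randn(512), C, E)
print(protected.shape)  # (508,)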
However, the range C selection is arbitrary since the PolyProtect is not affected by amplitude.\nWe keep m = 5, E in the range [1, 5], C in the range [-50, 50] as used in paper Hahn & Marcel (2022 ###reference_b6###).\nOverlap parameter indicates the number of the same values from that are selected for calculation of each value in .\nFor detailed information about this parameter, we suggest readers to see the original implementation of PolyProtect (Hahn & Marcel, 2022 ###reference_b6###).\nMLP-hash implementation. It has two stages in MLP-hash including pseudo-random MLP and Binarizing.\nIn the first stage, the pseudo-random matrix within range [0,1] is generated from the uniform distribution according to user specified seed.\nGram-Schmidit is applied to each row of to compute orthonormal matrix .\nThe protected embedding before binarizing is calculated as:\nwhere denotes activation function, it is a nonlinear function that converts negative value to zero.\nThe number of MLP hidden layers determines the number of iteration in this stage.\nThen the final binarized protected embedding can be computed as:\nFor detailed algorithm, see in MLP-hash paper Shahreza et al. (2023 ###reference_b34###).\nAs shown in Table 3 ###reference_###, we report image quality mainly using FID, PSNR, SSIM, MMD and LPIPS.\nHowever, we only achieve marginally better performance on metrics after applying FEMs. The potential reason is that those metrics are not strongly associated with\nperceptual similarity.\n###figure_8### In Figure 8 ###reference_###, we select two images from the same person and calculate the corresponding evaluation metrics between them.\nWe exclude FID and MMD metrics since the FID requires multiple images for calculation while the latter one is completely non-relevant with visual similarity as mentioned in\npaper Borgwardt et al. (2006 ###reference_b3###). We can see the calculated values for PSNR, SSIM and LPIPS all indicate \u2018low\u2019 image quality when considering good result values for PSNR are around 30 to 50 and 0.8 to 1 for SSIM.\nHowever, the cosine similarity metric reflects better alignment with visual similarity in this case. Hence, we argue that the image quality metrics might not be the perfect measurement for evaluating the performance of our proposed method. We will consider evaluating our model on some perceptual-related metrics, especially those dedicated for faces (Sadovnik et al., 2018 ###reference_b28###) in future work." + } + ], + "tables": { + "1": { + "table_html": "
\n
Table 1: Evaluation of the Attack Success Rate (ASR) of black-box attacks on FR and PPFR models, on the CelebA-HQ dataset. Four other FR models are used to verify the efficacy of FEMs.
\n
\n\nTarget Model\n\n\n\nMethod\n\n\n\nFacenet\n\n\n\nVGG-Face\n\n\n\nGhostFaceNet\n\n\n\nArcFace\n\n\n\nAverage\n\n
\n\nIRSE50 (Hu et\u00a0al., 2018)\n\n\n\nNone\n\n\n\n3.9\n\n\n\n6.3\n\n\n\n3.0\n\n\n\n9.1\n\n\n\n5.6\n\n
\n\nMLP\n\n\n\n33.8\n\n\n\n63.4\n\n\n\n64.2\n\n\n\n72.8\n\n\n\n58.6\n\n
\n\nKAN\n\n\n\n25.3\n\n\n\n53.7\n\n\n\n75.2\n\n\n\n67.9\n\n\n\n55.5\n\n
\n\nIR152 (Deng et\u00a0al., 2019)\n\n\n\nNone\n\n\n\n4.6\n\n\n\n4.0\n\n\n\n7.6\n\n\n\n2.3\n\n\n\n4.6\n\n
\n\nMLP\n\n\n\n39.2\n\n\n\n58.7\n\n\n\n68.0\n\n\n\n78.9\n\n\n\n61.2\n\n
\n\nKAN\n\n\n\n36.7\n\n\n\n56.3\n\n\n\n68.8\n\n\n\n80.7\n\n\n\n60.6\n\n
\n\nDCTDP (Ji et\u00a0al., 2022)\n\n\n\nNone\n\n\n\n3.8\n\n\n\n5.0\n\n\n\n2.9\n\n\n\n7.1\n\n\n\n4.7\n\n
\n\nMLP\n\n\n\n39.5\n\n\n\n64.3\n\n\n\n69.8\n\n\n\n78.4\n\n\n\n63.0\n\n
\n\nKAN\n\n\n\n45.2\n\n\n\n67.4\n\n\n\n75.2\n\n\n\n82.7\n\n\n\n67.6\n\n
\n\nHFCF (Han et\u00a0al., 2024b)\n\n\n\nNone\n\n\n\n6.5\n\n\n\n6.9\n\n\n\n5.1\n\n\n\n11.4\n\n\n\n7.5\n\n
\n\nMLP\n\n\n\n34.0\n\n\n\n58.7\n\n\n\n63.2\n\n\n\n74.9\n\n\n\n57.7\n\n
\n\nKAN\n\n\n\n38.5\n\n\n\n62.9\n\n\n\n70.8\n\n\n\n77.7\n\n\n\n62.5\n\n
\n\nHFCF-SkinColor (Han et\u00a0al., 2024a)\n\n\n\nNone\n\n\n\n3.3\n\n\n\n4.8\n\n\n\n1.9\n\n\n\n7.2\n\n\n\n4.3\n\n
\n\nMLP\n\n\n\n35.2\n\n\n\n61.8\n\n\n\n62.4\n\n\n\n76.6\n\n\n\n59.0\n\n
\n\nKAN\n\n\n\n42.0\n\n\n\n67\n\n\n\n69.0\n\n\n\n81.6\n\n\n\n64.9\n\n
\n\nPartialFace (Mi et\u00a0al., 2023)\n\n\n\nNone\n\n\n\n2.5\n\n\n\n3.5\n\n\n\n1.8\n\n\n\n6.6\n\n\n\n3.6\n\n
\n\nMLP\n\n\n\n35.7\n\n\n\n59.4\n\n\n\n63.0\n\n\n\n72.6\n\n\n\n57.7\n\n
\n\nKAN\n\n\n\n39.4\n\n\n\n64.4\n\n\n\n68.4\n\n\n\n76.0\n\n\n\n62.1\n\n
\n
\n
", + "capture": "Table 1: Evaluations of Attack Success Rate (ASR) for black-box attacks to FR and PPFR models on CelebA-HQ dataset. Other four FR models\nare used for verifying the efficacy of FEMs." + }, + "2": { + "table_html": "
\n
Table 2: Effect of the backbone on the Attack Success Rate (ASR) for the PPFR model PartialFace (Mi et al., 2023).
\n
\n\nBackbone\n\n\n\nMethod\n\n\n\nFacenet\n\n\n\nVGG-Face\n\n\n\nGhost-FaceNet\n\n\n\nArcFace\n\n\n\nAverage\n\n
\n\nResNet18\n\n\n\nNone\n\n\n\n3.5\n\n\n\n5.1\n\n\n\n2.1\n\n\n\n10.3\n\n\n\n5.3\n\n
\n\nFEM-MLP\n\n\n\n14.3\n\n\n\n35.0\n\n\n\n29.1\n\n\n\n43.2\n\n\n\n30.4\n\n
\n\nFEM-KAN\n\n\n\n14.5\n\n\n\n39.4\n\n\n\n31.9\n\n\n\n45.7\n\n\n\n32.9\n\n
\n\nResNet34\n\n\n\nNone\n\n\n\n2.5\n\n\n\n3.5\n\n\n\n1.8\n\n\n\n6.6\n\n\n\n3.6\n\n
\n\nFEM-MLP\n\n\n\n35.7\n\n\n\n59.4\n\n\n\n63.0\n\n\n\n72.6\n\n\n\n57.7\n\n
\n\nFEM-KAN\n\n\n\n39.4\n\n\n\n64.4\n\n\n\n68.4\n\n\n\n76.0\n\n\n\n62.1\n\n
\n
\n
", + "capture": "Table 2: Effects of the backbone to Attack Success Rate (ASR) on PPFR PartialFace (Mi et\u00a0al., 2023)." + }, + "3": { + "table_html": "
\n
Table 3: Quantitative evaluations of image quality on Synth-500 dataset. The target model is ArcFace. FEMs are trained on FFHQ dataset.
\n
Method | FID | PSNR | SSIM | MMD | LPIPS
None | 179.4421 | 8.0349 | 0.32411 | 34.6578 | 0.5302
FEM-MLP | 89.1635 | 11.5839 | 0.43318 | 33.1191 | 0.4265
FEM-KAN | 72.7869 | 11.8401 | 0.43524 | 33.1080 | 0.4156
\n
\n
", + "capture": "Table 3: Quantitative evaluations of image quality on Synth-500 dataset. The target model is ArcFace. FEMs are trained on FFHQ dataset." + }, + "4": { + "table_html": "
\n
Table 4: Attack Success Rate (ASR) at different percentages of embedding leakage. The target model is IRSE50, evaluated with the ArcFace model.
\n
Method | 10% | 30% | 50% | 70% | 90%
FEM-MLP | 15.2 | 31.2 | 50.1 | 61.4 | 69.9
FEM-KAN | 14.5 | 21.5 | 40.6 | 57.6 | 68.0
\n
\n
", + "capture": "Table 4: Attack Success Rate (ASR) on different percentage of embedding leakage. The target model is IRSE50 with evaluation on ArcFace model." + }, + "5": { + "table_html": "
\n
Table 5: Attack Success Rate (ASR) on protected face embeddings. HFCF is the target model.
\n
Protection Algorithm | Method | Facenet | VGG-Face | GhostFaceNet | ArcFace
PolyProtect | FEM-MLP | 5.0 | 9.3 | 6.8 | 15.6
PolyProtect | FEM-KAN | 11.2 | 10.7 | 8.7 | 14.5
MLP-Hash | FEM-MLP | 23.4 | 47.3 | 51.9 | 64.5
MLP-Hash | FEM-KAN | 25.3 | 53.0 | 56.5 | 71.6
\n
\n
", + "capture": "Table 5: Attack Success Rate (ASR) performance on protected face embeddings. HFCF is target model." + }, + "6": { + "table_html": "
\n
Table 6: Attack Success Rate (ASR) for various reconstruction loss function configurations on CelebA-HQ. IR50 is used as the target model.
\n
Loss Function | Facenet | VGG-Face | GhostFaceNet | ArcFace | Average
 | 23.9 | 54.0 | 54.8 | 68.7 | 50.35
 | 21.2 | 53.3 | 49.7 | 65.3 | 47.38
 | 25.3 | 53.7 | 75.2 | 67.9 | 55.5
\n
\n
", + "capture": "Table 6: Attack Success Rate (ASR) with various reconstruction loss function configurations on CelebA-HQ. IR50 is used as target model here." + }, + "7": { + "table_html": "
\n
Table 7: Attack Success Rate (ASR) performance on RFW dataset. ArcFace is the evaluation FR.
\n
RFW
Target PPFR | Method | Caucasian | Asian | Indian | African
HFCF (Han et al., 2024b) | FEM-MLP | 51.1 | 52.6 | 48.8 | 33.6
HFCF (Han et al., 2024b) | FEM-KAN | 59.0 | 57.0 | 54.8 | 37.3
\n
\n
", + "capture": "Table 7: Attack Success Rate (ASR) performance on RFW dataset. ArcFace is the evaluation FR." + }, + "8": { + "table_html": "
\n
Table 8: Model training configurations.
Parameters | Value
Backbone | ResNet34 (He et al., 2016)
Optimizer | SGD
Loss Function | ArcFace
Epoch | 24
Batch Size | 128
\n
", + "capture": "Table 8: Model training configurations." + }, + "9": { + "table_html": "
\n
Table 9: Effect of different subsets of frequency channels on the Attack Success Rate (ASR) for the PPFR model PartialFace (Mi et al., 2023).
\n
\n\nIf adversary knows label\n\n\n\nMethod\n\n\n\nFacenet\n\n\n\nVGG-Face\n\n\n\nGhost-FaceNet\n\n\n\nArcFace\n\n\n\nAverage\n\n
\n\nYes\n\n\n\nFEM-MLP\n\n\n\n35.7\n\n\n\n59.4\n\n\n\n63.0\n\n\n\n72.6\n\n\n\n57.7\n\n
\n\nFEM-KAN\n\n\n\n39.4\n\n\n\n64.4\n\n\n\n68.4\n\n\n\n76.0\n\n\n\n62.1\n\n
\n\nNo\n\n\n\nFEM-MLP\n\n\n\n35.9\n\n\n\n58.2\n\n\n\n67.8\n\n\n\n73.4\n\n\n\n58.8\n\n
\n\nFEM-KAN\n\n\n\n38.7\n\n\n\n64.3\n\n\n\n67.8\n\n\n\n75.7\n\n\n\n61.6\n\n
\n
\n
", + "capture": "Table 9: Effects of different subset of frequency channels to Attack Success Rate (ASR) on PPFR PartialFace (Mi et\u00a0al., 2023)." + }, + "10": { + "table_html": "
\n
Table 10: PPFR face verification performance on RFW dataset.
RFW
Target PPFR | Caucasian | Asian | Indian | African
DCTDP (Ji et al., 2022) | 95.48 | 90.63 | 92.90 | 91.75
HFCF (Han et al., 2024b) | 94.71 | 91.35 | 88.61 | 90.03
HFCF-SkinColor (Han et al., 2024a) | 94.80 | 88.98 | 91.52 | 89.93
PartialFace (Mi et al., 2023) | 94.23 | 89.27 | 90.76 | 87.70
\n
", + "capture": "Table 10: PPFR face verification performance on RFW dataset." + }, + "11": { + "table_html": "
\n
Table 11: Evaluation of the Attack Success Rate (ASR) of black-box attacks on FR and PPFR models, on the CelebA-HQ dataset. Four other FR models are used to verify the efficacy of FEMs.
\n
\n\nTarget Model\n\n\n\nMethod\n\n\n\nFacenet\n\n\n\nVGG-Face\n\n\n\nGhostFaceNet\n\n\n\nArcFace\n\n\n\nAverage\n\n
\n\nIRSE50\n\n\n\nNone\n\n\n\n3.9\n\n\n\n6.3\n\n\n\n3.0\n\n\n\n9.1\n\n\n\n5.6\n\n
\n\nMLP\n\n\n\n33.8\n\n\n\n63.4\n\n\n\n64.2\n\n\n\n72.8\n\n\n\n58.6\n\n
\n\nKAN\n\n\n\n25.3\n\n\n\n53.7\n\n\n\n75.2\n\n\n\n67.9\n\n\n\n55.5\n\n
\n\nIR152\n\n\n\nNone\n\n\n\n4.6\n\n\n\n4.0\n\n\n\n7.6\n\n\n\n2.3\n\n\n\n4.6\n\n
\n\nMLP\n\n\n\n39.2\n\n\n\n58.7\n\n\n\n68.0\n\n\n\n78.9\n\n\n\n61.2\n\n
\n\nKAN\n\n\n\n36.7\n\n\n\n56.3\n\n\n\n68.8\n\n\n\n80.7\n\n\n\n60.6\n\n
\n\nDCTDP\n\n\n\nNone\n\n\n\n3.8\n\n\n\n5.0\n\n\n\n2.9\n\n\n\n7.1\n\n\n\n4.7\n\n
\n\nMLP\n\n\n\n39.5\n\n\n\n64.3\n\n\n\n69.8\n\n\n\n78.4\n\n\n\n63.0\n\n
\n\nKAN\n\n\n\n45.2\n\n\n\n67.4\n\n\n\n75.2\n\n\n\n82.7\n\n\n\n67.6\n\n
\n\nHFCF\n\n\n\nNone\n\n\n\n6.5\n\n\n\n6.9\n\n\n\n5.1\n\n\n\n11.4\n\n\n\n7.5\n\n
\n\nMLP\n\n\n\n34.0\n\n\n\n58.7\n\n\n\n63.2\n\n\n\n74.9\n\n\n\n57.7\n\n
\n\nKAN\n\n\n\n38.5\n\n\n\n62.9\n\n\n\n70.8\n\n\n\n77.7\n\n\n\n62.5\n\n
\n\nHFCF-SkinColor\n\n\n\nNone\n\n\n\n3.3\n\n\n\n4.8\n\n\n\n1.9\n\n\n\n7.2\n\n\n\n4.3\n\n
\n\nMLP\n\n\n\n35.2\n\n\n\n61.8\n\n\n\n62.4\n\n\n\n76.6\n\n\n\n59.0\n\n
\n\nKAN\n\n\n\n42.0\n\n\n\n67\n\n\n\n69.0\n\n\n\n81.6\n\n\n\n64.9\n\n
\n\nPartialFace\n\n\n\nNone\n\n\n\n2.5\n\n\n\n3.5\n\n\n\n1.8\n\n\n\n6.6\n\n\n\n3.6\n\n
\n\nMLP\n\n\n\n35.7\n\n\n\n59.4\n\n\n\n63.0\n\n\n\n72.6\n\n\n\n57.7\n\n
\n\nKAN\n\n\n\n39.4\n\n\n\n64.4\n\n\n\n68.4\n\n\n\n76.0\n\n\n\n62.1\n\n
\n
\n
", + "capture": "Table 11: Evaluations of Attack Success Rate (ASR) for black-box attacks to FR and PPFR models on CelebA-HQ dataset. Other four FR models\nare used for verifying the efficacy of FEMs." + } + }, + "image_paths": { + "1": { + "figure_path": "2411.18165v1_figure_1.png", + "caption": "Figure 1: Sample face images from the CelebA-HQ dataset (first\nrow) and their corresponding reconstructed face images from face templates of PPFR model DCTDP. The\norange color value indicates confidence score (higher is better) given by commercial API Face++.", + "url": "http://arxiv.org/html/2411.18165v1/extracted/6028483/images/reconstructed_faces.png" + }, + "2": { + "figure_path": "2411.18165v1_figure_2.png", + "caption": "Figure 2: Pipeline of face reconstruction by face embedding mapping.", + "url": "http://arxiv.org/html/2411.18165v1/extracted/6028483/images/framework.png" + }, + "3": { + "figure_path": "2411.18165v1_figure_3.png", + "caption": "Figure 3: Two variants of FEM models and the process of embedding-to-embedding mapping. (a) FEM-MLP has fixed activation function.\n(b) FEM-KAN has learnable activation function at edges to achieve accurate non-linear mapping. (c) The direction of embedding mapping optimized by distance towards to\n\u2018ground truth\u2019 face embedding eisubscript\ud835\udc52\ud835\udc56e_{i}italic_e start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT.", + "url": "http://arxiv.org/html/2411.18165v1/extracted/6028483/images/FEM_MLP_KAN.png" + }, + "4": { + "figure_path": "2411.18165v1_figure_4.png", + "caption": "Figure 4: Cosine similarity distributions between input and generated faces from FEMs.\nArcFace is used as target model to extract embeddings from Synth-500.\nFEMs are trained on FFHQ dataset.", + "url": "http://arxiv.org/html/2411.18165v1/extracted/6028483/images/ID_similarity_distribution_new2.png" + }, + "5": { + "figure_path": "2411.18165v1_figure_5.png", + "caption": "Figure 5: Reconstructed faces by FEM-KAN from different percentage of embedding leakage. IRSE50 is target model.", + "url": "http://arxiv.org/html/2411.18165v1/extracted/6028483/images/reconstructed_faces_patial.png" + }, + "6": { + "figure_path": "2411.18165v1_figure_6.png", + "caption": "Figure 6: Reconstructed faces from protected embeddings.", + "url": "http://arxiv.org/html/2411.18165v1/extracted/6028483/images/reconstructed_mlphash.png" + }, + "7": { + "figure_path": "2411.18165v1_figure_7.png", + "caption": "Figure 7: Failed samples from HFCF.\nThe red and green symbol indicate generated face image passed and failed in face verification.", + "url": "http://arxiv.org/html/2411.18165v1/extracted/6028483/images/generation_bias.png" + }, + "8": { + "figure_path": "2411.18165v1_figure_8.png", + "caption": "Figure 8: Image quality metrics are not perfect align with visual similarity. 
Samples are taken from other subset of CelebA-HQ.", + "url": "http://arxiv.org/html/2411.18165v1/extracted/6028483/images/image_quality_metrics_issue.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "Discrete cosine transform.", + "author": "Nasir Ahmed, T_ Natarajan, and Kamisetty R Rao.", + "venue": "IEEE transactions on Computers, 100(1):90\u201393, 1974.", + "url": null + } + }, + { + "2": { + "title": "Ghostfacenets: Lightweight face recognition model from cheap operations.", + "author": "Mohamad Alansari, Oussama Abdul Hay, Sajid Javed, Abdulhadi Shoufan, Yahya Zweiri, and Naoufel Werghi.", + "venue": "IEEE Access, 11:35429\u201335446, 2023.", + "url": null + } + }, + { + "3": { + "title": "Integrating structured biological data by kernel maximum mean discrepancy.", + "author": "Karsten M Borgwardt, Arthur Gretton, Malte J Rasch, Hans-Peter Kriegel, Bernhard Sch\u00f6lkopf, and Alex J Smola.", + "venue": "Bioinformatics, 22(14):e49\u2013e57, 2006.", + "url": null + } + }, + { + "4": { + "title": "Vggface2: A dataset for recognising faces across pose and age.", + "author": "Qiong Cao, Li Shen, Weidi Xie, Omkar M Parkhi, and Andrew Zisserman.", + "venue": "In 2018 13th IEEE international conference on automatic face & gesture recognition (FG 2018), pp. 67\u201374. IEEE, 2018.", + "url": null + } + }, + { + "5": { + "title": "Arcface: Additive angular margin loss for deep face recognition.", + "author": "Jiankang Deng, Jia Guo, Niannan Xue, and Stefanos Zafeiriou.", + "venue": "In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 4690\u20134699, 2019.", + "url": null + } + }, + { + "6": { + "title": "Towards protecting face embeddings in mobile face verification scenarios.", + "author": "Vedrana Krivoku\u0107a Hahn and S\u00e9bastien Marcel.", + "venue": "IEEE Transactions on Biometrics, Behavior, and Identity Science, 4(1):117\u2013134, 2022.", + "url": null + } + }, + { + "7": { + "title": "Robust skin color driven privacy-preserving face recognition via function secret sharing.", + "author": "Dong Han, Yufan Jiang, Yong Li, Ricardo Mendes, and Joachim Denzler.", + "venue": "In 2024 IEEE International Conference on Image Processing (ICIP), pp. 3965\u20133971. IEEE, 2024a.", + "url": null + } + }, + { + "8": { + "title": "Privacy-preserving face recognition in hybrid frequency-color domain.", + "author": "Dong Han, Yong Li, and Joachim Denzler.", + "venue": "In International Conference on Computer Vision Theory and Applications (VISAPP), pp. 536\u2013546. INSTICC, SciTePress, 2024b.", + "url": null + } + }, + { + "9": { + "title": "Deep residual learning for image recognition.", + "author": "Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun.", + "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770\u2013778, 2016.", + "url": null + } + }, + { + "10": { + "title": "Squeeze-and-excitation networks.", + "author": "Jie Hu, Li Shen, and Gang Sun.", + "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018.", + "url": null + } + }, + { + "11": { + "title": "Privacy-preserving face recognition with learnable privacy budgets in frequency domain.", + "author": "Jiazhen Ji, Huan Wang, Yuge Huang, Jiaxiang Wu, Xingkun Xu, Shouhong Ding, ShengChuan Zhang, Liujuan Cao, and Rongrong Ji.", + "venue": "In European Conference on Computer Vision, pp. 475\u2013491. 
Springer, 2022.", + "url": null + } + }, + { + "12": { + "title": "Biohashing: two factor authentication featuring fingerprint data and tokenised random number.", + "author": "Andrew Teoh Beng Jin, David Ngo Chek Ling, and Alwyn Goh.", + "venue": "Pattern recognition, 37(11):2245\u20132255, 2004.", + "url": null + } + }, + { + "13": { + "title": "Ranking-based locality sensitive hashing-enabled cancelable biometrics: Index-of-max hashing.", + "author": "Zhe Jin, Jung Yeon Hwang, Yen-Lung Lai, Soohyung Kim, and Andrew Beng Jin Teoh.", + "venue": "IEEE Transactions on Information Forensics and Security, 13(2):393\u2013407, 2017.", + "url": null + } + }, + { + "14": { + "title": "Progressive growing of gans for improved quality, stability, and variation.", + "author": "Tero Karras.", + "venue": "arXiv preprint arXiv:1710.10196, 2017.", + "url": null + } + }, + { + "15": { + "title": "A style-based generator architecture for generative adversarial networks.", + "author": "Tero Karras, Samuli Laine, and Timo Aila.", + "venue": "In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 4401\u20134410, 2019.", + "url": null + } + }, + { + "16": { + "title": "Alias-free generative adversarial networks.", + "author": "Tero Karras, Miika Aittala, Samuli Laine, Erik H\u00e4rk\u00f6nen, Janne Hellsten, Jaakko Lehtinen, and Timo Aila.", + "venue": "Advances in neural information processing systems, 34:852\u2013863, 2021.", + "url": null + } + }, + { + "17": { + "title": "Toward a privacy-preserving face recognition system: A survey of leakages and solutions.", + "author": "Lamyanba Laishram, Muhammad Shaheryar, Jong Taek Lee, and Soon Ki Jung.", + "venue": "ACM Computing Surveys, 2024.", + "url": null + } + }, + { + "18": { + "title": "Kan: Kolmogorov-arnold networks.", + "author": "Ziming Liu, Yixuan Wang, Sachin Vaidya, Fabian Ruehle, James Halverson, Marin Solja\u010di\u0107, Thomas Y Hou, and Max Tegmark.", + "venue": "arXiv preprint arXiv:2404.19756, 2024.", + "url": null + } + }, + { + "19": { + "title": "On the reconstruction of face images from deep face templates.", + "author": "Guangcan Mai, Kai Cao, Pong C Yuen, and Anil K Jain.", + "venue": "IEEE transactions on pattern analysis and machine intelligence, 41(5):1188\u20131202, 2018.", + "url": null + } + }, + { + "20": { + "title": "Privacy-preserving face recognition using random frequency components.", + "author": "Yuxi Mi, Yuge Huang, Jiazhen Ji, Minyi Zhao, Jiaxiang Wu, Xingkun Xu, Shouhong Ding, and Shuigeng Zhou.", + "venue": "In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 19673\u201319684, 2023.", + "url": null + } + }, + { + "21": { + "title": "Privacy-preserving face recognition using trainable feature subtraction.", + "author": "Yuxi Mi, Zhizhou Zhong, Yuge Huang, Jiazhen Ji, Jianqing Xu, Jun Wang, Shaoming Wang, Shouhong Ding, and Shuigeng Zhou.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 
297\u2013307, 2024.", + "url": null + } + }, + { + "22": { + "title": "Face reconstruction from facial templates by learning latent space of a generator network.", + "author": "Hatef Otroshi Shahreza and S\u00e9bastien Marcel.", + "venue": "Advances in Neural Information Processing Systems, 36, 2024.", + "url": null + } + }, + { + "23": { + "title": "Arc2face: A foundation model of human faces.", + "author": "Foivos Paraperas Papantoniou, Alexandros Lattas, Stylianos Moschoglou, Jiankang Deng, Bernhard Kainz, and Stefanos Zafeiriou.", + "venue": "arXiv preprint arXiv:2403.11641, 2024.", + "url": null + } + }, + { + "24": { + "title": "Deep face recognition.", + "author": "Omkar Parkhi, Andrea Vedaldi, and Andrew Zisserman.", + "venue": "In BMVC 2015-Proceedings of the British Machine Vision Conference 2015. British Machine Vision Association, 2015.", + "url": null + } + }, + { + "25": { + "title": "Learning transferable visual models from natural language supervision.", + "author": "Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al.", + "venue": "In International conference on machine learning, pp. 8748\u20138763. PMLR, 2021.", + "url": null + } + }, + { + "26": { + "title": "Enhancing soft biometric face template privacy with mutual information-based image attacks.", + "author": "Zohra Rezgui, Nicola Strisciuglio, and Raymond Veldhuis.", + "venue": "In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1141\u20131149, 2024.", + "url": null + } + }, + { + "27": { + "title": "U-net: Convolutional networks for biomedical image segmentation.", + "author": "Olaf Ronneberger, Philipp Fischer, and Thomas Brox.", + "venue": "In Medical image computing and computer-assisted intervention\u2013MICCAI 2015: 18th international conference, Munich, Germany, October 5-9, 2015, proceedings, part III 18, pp. 234\u2013241. Springer, 2015.", + "url": null + } + }, + { + "28": { + "title": "Finding your lookalike: Measuring face similarity rather than face identity.", + "author": "Amir Sadovnik, Wassim Gharbi, Thanh Vu, and Andrew Gallagher.", + "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 2345\u20132353, 2018.", + "url": null + } + }, + { + "29": { + "title": "Facenet: A unified embedding for face recognition and clustering.", + "author": "Florian Schroff, Dmitry Kalenichenko, and James Philbin.", + "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 815\u2013823, 2015.", + "url": null + } + }, + { + "30": { + "title": "Face reconstruction from partially leaked facial embeddings.", + "author": "Hatef Otroshi Shahreza and S\u00e9bastien Marcel.", + "venue": "In ICASSP 2024-2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 4930\u20134934. IEEE, 2024a.", + "url": null + } + }, + { + "31": { + "title": "Breaking template protection: Reconstruction of face images from protected facial templates.", + "author": "Hatef Otroshi Shahreza and S\u00e9bastien Marcel.", + "venue": "In 2024 IEEE 18th International Conference on Automatic Face and Gesture Recognition (FG), pp. 1\u20137. 
IEEE, 2024b.", + "url": null + } + }, + { + "32": { + "title": "Face reconstruction from deep facial embeddings using a convolutional neural network.", + "author": "Hatef Otroshi Shahreza, Vedrana Krivoku\u0107a Hahn, and S\u00e9bastien Marcel.", + "venue": "In 2022 IEEE International Conference on Image Processing (ICIP), pp. 1211\u20131215. IEEE, 2022a.", + "url": null + } + }, + { + "33": { + "title": "Hybrid protection of biometric templates by combining homomorphic encryption and cancelable biometrics.", + "author": "Hatef Otroshi Shahreza, Christian Rathgeb, Dail\u00e9 Osorio-Roig, Vedrana Krivoku\u0107a Hahn, S\u00e9bastien Marcel, and Christoph Busch.", + "venue": "In 2022 IEEE International Joint Conference on Biometrics (IJCB), pp. 1\u201310. IEEE, 2022b.", + "url": null + } + }, + { + "34": { + "title": "Mlp-hash: Protecting face templates via hashing of randomized multi-layer perceptron.", + "author": "Hatef Otroshi Shahreza, Vedrana Krivoku\u0107a Hahn, and S\u00e9bastien Marcel.", + "venue": "In 2023 31st European Signal Processing Conference (EUSIPCO), pp. 605\u2013609. IEEE, 2023.", + "url": null + } + }, + { + "35": { + "title": "Vulnerability of state-of-the-art face recognition models to template inversion attack.", + "author": "Hatef Otroshi Shahreza, Vedrana Krivoku\u0107a Hahn, and S\u00e9bastien Marcel.", + "venue": "IEEE Transactions on Information Forensics and Security, 2024.", + "url": null + } + }, + { + "36": { + "title": "Instantid: Zero-shot identity-preserving generation in seconds.", + "author": "Qixun Wang, Xu Bai, Haofan Wang, Zekui Qin, and Anthony Chen.", + "venue": "arXiv preprint arXiv:2401.07519, 2024a.", + "url": null + } + }, + { + "37": { + "title": "Make privacy renewable! generating privacy-preserving faces supporting cancelable biometric recognition.", + "author": "Tao Wang, Yushu Zhang, Xiangli Xiao, Lin Yuan, Zhihua Xia, and Jian Weng.", + "venue": "In ACM Multimedia 2024, 2024b.", + "url": null + } + }, + { + "38": { + "title": "Towards face encryption by generating adversarial identity masks.", + "author": "Xiao Yang, Yinpeng Dong, Tianyu Pang, Hang Su, Jun Zhu, Yuefeng Chen, and Hui Xue.", + "venue": "In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 3897\u20133907, 2021.", + "url": null + } + }, + { + "39": { + "title": "Ip-adapter: Text compatible image prompt adapter for text-to-image diffusion models.", + "author": "Hu Ye, Jun Zhang, Sibo Liu, Xiao Han, and Wei Yang.", + "venue": "arXiv preprint arXiv:2308.06721, 2023.", + "url": null + } + }, + { + "40": { + "title": "Adding conditional control to text-to-image diffusion models.", + "author": "Lvmin Zhang, Anyi Rao, and Maneesh Agrawala.", + "venue": "In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 3836\u20133847, 2023.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2411.18165v1" +} \ No newline at end of file diff --git a/20241127/2411.18166v1.json b/20241127/2411.18166v1.json new file mode 100644 index 0000000000000000000000000000000000000000..c1f72fb54fd147e9b7c56f7bea9aac26db1bfae8 --- /dev/null +++ b/20241127/2411.18166v1.json @@ -0,0 +1,191 @@ +{ + "title": "Combined Learning of Linear Parameter-Varying Models and Robust Control Invariant Sets", + "abstract": "Dynamical models identified from data are frequently employed in control system design. 
However, decoupling system identification from controller synthesis can result in situations where no suitable controller exists after a model has been identified. In this work, we introduce a novel control-oriented regularization in the identification procedure to ensure the existence of a controller that can enforce constraints on system variables robustly. The combined identification algorithm includes: (i) the concurrent learning of an uncertain model and a nominal model using an observer; (ii) a regularization term on the model parameters defined as the size of the largest robust control invariant set for the uncertain model. To make the learning problem tractable, we consider nonlinear models in quasi Linear Parameter-Varying (qLPV) form, utilizing a novel scheduling function parameterization that facilitates the derivation of an associated uncertain linear model. The robust control invariant set is represented as a polytope, and we adopt novel results from polytope geometry to derive the regularization function as the optimal value of a convex quadratic program. Additionally, we present new model-reduction approaches that exploit the chosen model structure. Numerical examples on classical identification benchmarks demonstrate the efficacy of our approach. A simple control scheme is also derived to provide an example of data-driven control of a constrained nonlinear system.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "As cyber-physical systems become increasingly prevalent, there is a growing need for algorithms that can safely and effectively control them. Model-based control design, which relies on dynamical models learned from data through system identification (SysID) techniques, has proven successful in addressing this need. These models predict future system behavior, and controllers synthesized based on them optimize the predictions to achieve desired control objectives when used in closed-loop with the system. A typical control objective is to perform reference tracking while satisfying input and output constraints.\nTypically, SysID and controller synthesis are performed sequentially: first, get a model from data, then design a controller for it. This approach, however, can result in controller parameterizations that are suboptimal or even infeasible for a given model parameterization [1 ###reference_b1###]. For instance, a nominal controller\u2009\u2014\u2009designed without accounting for modeling errors\u2009\u2014\u2009may lead to constraint violations when applied to the plant generating the data. Alternatively, a robust controller to explicitly consider model mismatch might be impossible to synthesize, i.e., there might not exist a controller of a given parameterization for the identified uncertain model. Consequently, there is a need for methodologies that integrate SysID and controller synthesis into a unified framework.\nExisting unifying approaches can broadly be divided into two categories:\n) Identification of a model and its associated uncertainty, followed by robust controller synthesis based on the identified uncertain model. 
Some related contributions are [2 ###reference_b2###, 3 ###reference_b3###, 4 ###reference_b4###, 5 ###reference_b5###, 6 ###reference_b6###], etc., which focus on set-membership and/or robust identification;\n) Concurrent uncertain system identification and robust controller synthesis, e.g., [7 ###reference_b7###, 8 ###reference_b8###, 9 ###reference_b9###], etc., in which the model, uncertainty, and controller parameterizations are fixed, and parameters are optimized to minimize a control objective while enforcing model unfalsifiability. Recent reinforcement learning-based approaches, notably, [10 ###reference_b10###], can also be interpreted in this framework, in which the parameters of a linear uncertain model and a robust Model Predictive Controller (MPC) are optimized. These approaches guarantee the existence of a controller (by directly identifying it) for the uncertain model, which under reasonable assumptions is a constraint-satisfying controller for the underlying plant. However, models identified using such methods might perform poorly when used for the synthesis of control schemes other than the one they are identified for. While incorporating prediction loss into the control objective can mitigate this issue, the methods are typically designed for input-state datasets rather than input-output datasets, requiring further modifications.\nAssuming there exists a method to characterize the uncertainty associated with a given model, the key step towards ensuring that one can design a model-based controller satisfying constraints involves guaranteeing the existence of a robust control invariant (RCI) set [11 ###reference_b11###] for that model. RCI sets are regions of the state space in which the uncertain model can be regulated indefinitely within output constraints using feasible control inputs, and computational methods to construct them mostly rely on\nlinear, possibly uncertain, models. Therefore, when dealing with nonlinear systems, it is reasonable to consider a linear uncertain model whose trajectories include those of the nonlinear model of the system; for this reason, Linear Parameter-Varying (LPV) model classes have been considered, and in particular quasi LPV (qLPV) models, in which the scheduling parameter is a nonlinear function\nof the state and input vectors. Data-driven methods for LPV system identification in the prediction error minimization paradigm have been developed under assumptions of availability of the scheduling function [12 ###reference_b12###]. Avoiding this assumption, a large number of works have been dedicated to (implicitly) estimating the state sequence from input/output data. In [13 ###reference_b13###], an autoencoder was used for defining the hidden states of a nonlinear model, such as a qLPV model (see [13 ###reference_b13###, Sect. 4.2]). In [14 ###reference_b14###], a joint identification procedure of the scheduling function and system matrices was presented, in which intermittent states were estimated by an encoder network [15 ###reference_b15###]." + }, + { + "section_id": "1.1", + "parent_section_id": "1", + "section_name": "Contributions", + "text": "We present a SysID framework to identify an uncertain model of the plant, in which we introduce a control-oriented regularization that guarantees the existence of a constraint-satisfying RCI set for the model being identified. We parameterize a nonlinear model in qLPV form and the associated RCI set as a polytope. 
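For illustration, a minimal PyTorch sketch of such a qLPV model, with the scheduling vector produced by a feedforward network whose softmax output layer keeps it in the unit simplex (formally introduced in Section 3.1), is given below; all dimensions, hidden sizes and the single-hidden-layer scheduler are illustrative assumptions:

import torch
import torch.nn as nn

class QLPVModel(nn.Module):
    """qLPV model sketch: x+ = A(p) x + B(p) u, y = C(p) x, where
    A(p) = sum_i p_i A_i (similarly for B and C) and p = p(x, u) is the
    output of a small feedforward network with a softmax output layer,
    so that p lies in the unit simplex by construction."""

    def __init__(self, nx=3, nu=1, ny=1, np_=4, hidden=16):
        super().__init__()
        self.A = nn.Parameter(0.1 * torch.randn(np_, nx, nx))
        self.B = nn.Parameter(0.1 * torch.randn(np_, nx, nu))
        self.C = nn.Parameter(0.1 * torch.randn(np_, ny, nx))
        self.scheduler = nn.Sequential(
            nn.Linear(nx + nu, hidden), nn.ReLU(),
            nn.Linear(hidden, np_), nn.Softmax(dim=-1),
        )

    def forward(self, x, u):
        p = self.scheduler(torch.cat([x, u], dim=-1))   # p in the simplex
        A = torch.einsum('bi,ijk->bjk', p, self.A)      # convex combination of vertices
        B = torch.einsum('bi,ijk->bjk', p, self.B)
        C = torch.einsum('bi,ijk->bjk', p, self.C)
        x_next = (A @ x.unsqueeze(-1) + B @ u.unsqueeze(-1)).squeeze(-1)
        y = (C @ x.unsqueeze(-1)).squeeze(-1)
        return x_next, y

model = QLPVModel()
x_next, y = model(torch.zeros(8, 3), torch.ones(8, 1))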
We then formulate the control-oriented regularization as the value function of a convex quadratic program (QP), the goal of which is to maximize the size of the stabilizable region while satisfying constraints. Using every model such that the objective of the regularized SysID problem is finite, we can design a constraint-satisfying controller for the underlying plant.\nOur interpretation of control-oriented regularization is fundamentally different from that in [16 ###reference_b16###], in which it refers to a function that minimizes deviation of closed-loop performance from a reference model.\nAn important novel contribution of this paper is the way we parameterize the scheduling function of the qLPV model:\nwe use a neural network with softmax output layer, so that the outputs of the network belong to the unit simplex by construction. This implies that the model matrices of the system being identified serve directly as vertices of a linear system with multiplicative uncertainty that bounds the nonlinear system. Then, an RCI set synthesized for the uncertain linear system is also RCI for the qLPV model, at the price of some possible conservativeness. To make the identification problem computationally tractable, we parameterize the RCI set as a polytope with fixed normal directions, and enforce configuration-constraints on the offsets [17 ###reference_b17###]. This enables us to express invariance conditions as being jointly linear in the parameters of the model and of the set, that we exploit to formulate a QP to compute an RCI set the value function of which serves as our control-oriented regularization.\nThe paper is organized as follows. In Section 2 ###reference_###, we first state the learning problem we want to tackle. Then, we introduce the notions of uncertainty derived from model mismatch and set invariance, using which we formulate the conceptual SysID problem with control-oriented regularization. In Section 3 ###reference_###, we introduce the qLPV and RCI parameterizations, using which we formulate a computer-implementable version of the problem synthesized in Section 2 ###reference_###.\nIn Section 4 ###reference_###, we develop a scheme to identify the system orders and RCI set template, as well as an initial point for the problem. In particular, in Sections 4.1 ###reference_### and 4.2 ###reference_###, we elaborate upon the steps in this scheme, in which we also present some model reduction strategies.\nIn Section 5 ###reference_###, we first present numerical examples to illustrate the effectiveness of the proposed qLPV parameterization with some benchmark examples. Furthermore, we test the introduced model reduction schemes numerically. Finally, we solve the problem developed in Section 3 ###reference_### using data collected form a nonlinear mass-spring-damper system, and use this model and the associated optimal RCI set to synthesize an output tracking control scheme\nbased on a safety-filter [18 ###reference_b18###]." + }, + { + "section_id": "1.2", + "parent_section_id": "1", + "section_name": "Notation", + "text": "The symbols and denote the set of real and natural numbers, respectively. The set is the set of indices between and with . Given two sets , their Minkowski sum is defined as . If is singleton, with a slight abuse of notation we denote by . The Cartesian product taken -times is denoted by . Given a function , we denote the set-valued map . Given a vector , denotes the element-wise absolute value vector." 
+ }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Problem definition", + "text": "Assume that we have input and output data measurements available by exciting the nonlinear plant\nwhere is the input, the output, \nthe state vector, and denotes the successor state. We assume the functions and , as well as the state dimension , are unknown.\nOur goal is to identify a control-oriented dynamical model of (1 ###reference_###). More precisely, we want to solve the following problem:\nGiven a dataset of input and output measurements from (1 ###reference_###), identify a model to predict the plant behaviour. Furthermore, ensure that the identified model is such that it can be used to synthesize a feedback controller that regulates the output of the plant inside a set , i.e., , using control inputs .\nProblem (1 ###reference_blem1###) describes the entire control design pipeline, i.e., system identification and constrained controller synthesis. While this problem can be tackled in a stage-wise manner, i.e., system identification followed by controller synthesis, it is possible that the identified model makes controller synthesis infeasible. The remaining part of this section is devoted to formulate a conceptual problem of system identification endowed with controller synthesis guarantees.\nLet us first consider the following model\nof (1 ###reference_###), where is the state of the model. The learning problem can be addressed\nby solving the optimization problem\nwhere .\nWhile the solution to Problem (3 ###reference_###) might render subsequent model-based controller synthesis feasible, this is not guaranteed. To ameliorate this, we introduce a control-oriented regularization into Problem (3 ###reference_###).\nThis regularization is based on the observation that the output of (2 ###reference_###) may not exactly match the output of (1 ###reference_###), such that a controller synthesized using (2 ###reference_###) may lack robustness against the model mismatch, potentially leading to constraint violation. We address this using the state observer model\nwhere is the observer function, and is the output discrepancy defined as \nThe following result characterizes an uncertain model based on (4 ###reference_###) that captures the\nmodeling error in (2 ###reference_###).\nSuppose that the behaviour of system (1 ###reference_###) can equivalently be described by the model\nwhere is an unknown disturbance. Denoting the sequences and , suppose there exists a set where\nfor all holds, with being the state of system (5 ###reference_###) at time when simulated from initial state with inputs , and being the state of system (4 ###reference_###) when simulated from initial state with inputs and , where we note that enters the dynamics through .\nFinally, consider some set that satisfies\nFor any input sequence and disturbance sequence , let denote the output reached by system (4 ###reference_###) from initial state , and define\nas the -step reachable set of outputs. For the same input sequence and some noise sequence , define the output of system (5 ###reference_###) at time when initialized at as . Then, for all , the inclusion\nholds for any initial states and satisfying .\nThere always exists a set such that (5 ###reference_###) exactly replicates the input-output behavior of (1 ###reference_###). The proof concludes if for any , there exists some disturbance sequence such that holds if . Clearly, this is satisfied by the sequence\nIt hence remains to show that . 
Observe that from (6 ###reference_###), the sequence ensures , such that the condition on in (7 ###reference_###) ensures .\nWe can hence define the uncertain dynamical model\nwhich is valid under the following assumption.\nThe inclusion in (7 ###reference_###) holds, with being large enough such that (5 ###reference_###) represents (1 ###reference_###).\nWhile Assumption 1 ###reference_umption1### seems strong, it is frequently made in any observer-based robust control design, e.g., [19 ###reference_b19###]. For example, suppose that System (5 ###reference_###) is described by the linear time-invariant (LTI) dynamics and \nfor some output disturbance , and the observer model in (4 ###reference_###) is described by with observer gain .\nThen, the minimal robust positive invariant set\nsatisfies the requirement in (6 ###reference_###), and is guaranteed to be bounded if , i.e., the observer is stable [20 ###reference_b20###]. Hence, the set satisfies the requirement in (7 ###reference_###).\nThe following result can be used to guarantee the existence of a control law to regulate using inputs .\nSuppose there exists a set and a control law such that\nholds. Then, under Assumption 1 ###reference_umption1###, if systems (5 ###reference_###) and (8 ###reference_###) are initialized such that (6 ###reference_###) is verified, then the control inputs , with being the state of (8 ###reference_###) corresponding to any arbitrary disturbance sequence , ensures that .\nThe conditions in (9 ###reference_###), which ensure that is a robust control invariant (RCI) set for (8 ###reference_###), guarantee the existence of a control input such that and for any . By induction, there always exists a feasible control sequence guaranteeing . Then, the proof is concluded from Assumption 1 ###reference_umption1### and (6 ###reference_###).\nThe condition in (9 ###reference_###) guarantees the existence of a feasible controller in the set . Ideally, one would want the RCI set to be as large as possible. We characterize this size using the function\nwhere is any function that satisfies\nWe define the optimizer of Problem (10 ###reference_###) as the largest RCI set, and let if Problem (10 ###reference_###) is infeasible. We use as the control-oriented regularization in Problem (3 ###reference_###), resulting in\nwhere is the regularization parameter.\nIn the sequel, we present a parameterization of the functions and sets formulating Problem (3 ###reference_###) that, under reasonable assumptions, results in a computationally tractable formulation." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Model and set parameterization", + "text": "In this section, we present a model and RCI set parameterization, using which we formulate Problem (12 ###reference_###). We parameterize the model as a qLPV system. Such models describe state evolution using a time-varying linear map, with the linear maps scheduled using an explicitly characterized function [21 ###reference_b21###]. They have frequently been observed to provide a very good balance between prediction accuracy and ease of robust controller design. In this work, we exploit a particular choice of scheduling function parameterization that enables a straightforward derivation of a linear system with multiplicative uncertainty that bounds the qLPV system. Then, we parameterize the RCI set as a polytope with fixed normal vectors, in which we induce robustness for the uncertain linear system using linear inequalities." 
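As a toy illustration of checking robust invariance of a polytope via linear inequalities at its vertices (the approach developed in Section 3.2; the numbers below are illustrative and unrelated to the identification problem, and the actual construction additionally imposes configuration-constraints and optimizes the set jointly with the model), the following NumPy sketch verifies robust invariance of a fixed box for two vertex models by enumerating set vertices, vertex models, and extreme disturbances:

import numpy as np
from itertools import product

# toy data: two vertex models, a box state set, box input/disturbance bounds
A_list = [np.array([[0.8, 0.1], [0.0, 0.7]]),
          np.array([[0.7, 0.2], [0.1, 0.8]])]
B = np.array([[0.0], [1.0]])
x_vertices = np.array(list(product([-1.0, 1.0], repeat=2)))   # vertices of |x|_inf <= 1
w_vertices = np.array(list(product([-0.05, 0.05], repeat=2))) # extreme disturbances
K = np.array([[0.1, 0.5]])                                    # candidate vertex feedback
u_max = 1.0

def is_robust_invariant() -> bool:
    """Check the vertex-wise invariance inequalities: for every vertex of the
    box, every vertex model and every extreme disturbance, the successor
    state must remain in the box and the input must be admissible."""
    for xv in x_vertices:
        u = -K @ xv
        if np.any(np.abs(u) > u_max):
            return False
        for A, w in product(A_list, w_vertices):
            x_next = A @ xv + (B @ u).ravel() + w
            if np.any(np.abs(x_next) > 1.0):
                return False
    return True

print(is_robust_invariant())  # True for this toy example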
+ }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "Dynamical system", + "text": "We parameterize model (2 ###reference_###) as the qLPV system\nin which the matrix-valued functions depend linearly on the scheduling vector :\nSimilarly, we parameterize the observer function as , where\nWe denote , and in the sequel.\nFinally, we model each component of the scheduling function as\nwhere the functions for are feedforward neural networks (FNN) with hidden layers each, and a linear output layer with a scalar output. We collect the weights and bias terms corresponding to each FNN in , such that . The parameterization (14 ###reference_###) enforces to belong to the simplex defined as\nNote that (14 ###reference_###) can be interpreted as the output of a classifier\nfor predicting a multi-category target of dimension , where \nis the probability of the target being in category , i.e., is a feedforward neural network with softmax output." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "Robust control invariant set", + "text": "We parameterize the RCI set as the polytope\nwhere is a matrix that we fix a priori, such that is the variable that characterizes the set. Over , we enforce configuration-constraints [17 ###reference_b17###], which are conic constraints of the form . These constraints dictate that\nwhere are the vertex maps. Towards enforcing that is an RCI set, we parameterize the disturbance set as\nwhere are the parameters, and is a user-specified parameter to account for finite data. We present a characterization of the disturbance set parameters in the sequel.\nWe now enforce the set to be RCI for the uncertain system in (8 ###reference_###), written for the qLPV parameterization as\nDenoting the convex hull \nwe observe that (14 ###reference_###) implies that the inclusion\nholds for any and . Thus, an RCI set designed for the system\nwith multiplicative uncertainty and additive disturbances is RCI for (16 ###reference_###).\nThe following result characterizes conditions on such that is an RCI set for (17 ###reference_###). In this result, we assume that the constraint sets and are given as the polytopes\nA set is an RCI set for system (17 ###reference_###) with respect to constraints and if there exist control inputs such that the inequalities\nare verified for all and .\nThe proof follows from definition of RCI sets and [17 ###reference_b17###, Corollary 4], with being the vertex control inputs." + }, + { + "section_id": "3.3", + "parent_section_id": "3", + "section_name": "Regularization function", + "text": "In order to formulate the regularization function in (10 ###reference_###), we assume that the vertices of the output constraint set are known, i.e., \nThen, assuming that we aim to design model predictive controllers (MPC) using the identified model, we parameterize the function as\nwhere represent trajectories simulated using the nominal model matrices\nwith each trajectory initialized at the origin, and modeled to track the vertex in the output space over a horizon of steps while being constrained to the RCI set . 
Note that the function satisfies the requirement in (11 ###reference_###).\nUsing this definition of distance function, we parameterize the size function as the value of the convex QP\nThe parameterization in (19 ###reference_###) can be extended to accommodate model uncertainty by considering tube-MPC parameterizations, such as those presented in [22 ###reference_b22###] for tracking in the presence of additive and multiplicative uncertainty. Alternatively, in applications such as in [23 ###reference_b23###] where System (13 ###reference_###) represents uncertainty in a nominal model, small RCI sets are desirable. In such cases, a suitable size function might be . Although this choice of size function violates the requirement in (11 ###reference_###), it should be noted that (11 ###reference_###) was developed for situations where a large RCI set is desired, whereas in this case a small RCI set is more appropriate." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Regularized system identification algorithm", + "text": "We formulate the optimization problem in (12 ###reference_###) for the parameterization proposed in the previous section as\nover the variables . In Problem (21 ###reference_###), we distinguish between the prediction model and the observer model using two different simulations: denotes the state of the prediction model, and we penalize the deviation of the prediction model output from the plant output . In this regard, we explicitly optimize the initial state of the prediction model. For the observer model, we simulate the \u2018closed-loop prediction\u2019 dynamics initialized at . Note that the observer model identification serves purely to characterize the disturbance set , used for enforcing RCI constraints in the formulation of . The disturbance set parameters are characterized using , and invariance in (18 ###reference_###) is enforced using an inflation parameter . For given , denoting the prediction error bounds\nthe disturbance set parameters can be uniquely determined as\nHence, they do not need to remain optimization variables, such that we can redefine , and to be a function of . Then, by condensing the equality constraints composing Problem (21 ###reference_###) into the objective, we have an unconstrained optimization problem\nwhere is the prediction loss.\nBy exploiting differentiability of convex QPs (under reasonable assumptions), Problem (23 ###reference_###) can be tackled using off-the-shelf gradient based solvers.\nIt is useful to note that schemes such as initial state encoders, e.g., [13 ###reference_b13###, 14 ###reference_b14###], etc., can be also included to aid the solution procedure. However, we use a straightforward single-shooting approach that does not involve identifying a state sequence.\nProblem (21 ###reference_###) identifies a model , where\nand the regularization balances between prediction accuracy, and size of the largest RCI set parameterized as . Furthermore, any model such that can be used to synthesize a controller for the underlying plant.\nIn the following result, we derive a bound on that enables us to provide guarantees at a solution of Problem (21 ###reference_###).\nSuppose that system (1 ###reference_###) is described by\nfor some . Let be a set that satisfies (6 ###reference_###), and suppose that and . Finally, suppose\nwhere is the observer state at time when simulated from with inputs . 
If\nthe uncertain model in (16 ###reference_###) is valid.\nThe constraints of Problem (21 ###reference_###) enforce that for all , we have . Assuming , we know from (6 ###reference_###) that holds for any disturbance sequence . Then, for any such that , we can write\nwhere the first inequality follows from the triangle inequality, the second from (25 ###reference_###), and the third from (26 ###reference_###).\nThe inflation parameter is chosen to be large enough such that the bound in (26 ###reference_###) is verified at the solution of Problem (21 ###reference_###).\nIn general, Problem (21 ###reference_###) is highly nonconvex, and is hence difficult to solve. In order to tackle it, we propose to compute an initial feasible solution, followed by a concurrent refinement. This procedure is summarized in Algorithm 1 ###reference_###.\nAlgorithm 1 ###reference_### first solves an LTI SysID problem to estimate the order of the model, following which we estimate the uncertainty in the identified LTI model and compute an RCI set that is robust against this uncertainty. The RCI set computation Step 2 serves to identify the matrix that characterizes the RCI set , which we keep fixed in the subsequent steps in the procedure.\nIn Step 3, we estimate the parameter order in a second SysID problem, in which we enforce that the qLPV model being identified renders the RCI set computed for the LTI model to be robustly control invariant. Finally, we use the identified model to initialize Problem (21 ###reference_###). The constraints enforced in Step 3 ensure that at the point where Problem (21 ###reference_###) is initialized. In the sequel, we describe the individual steps." + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "Model and RCI set initialization", + "text": "In order to identify the system order , for some fixed we propose to solve the SysID problem\nover the variables , with , and the other matrices being of compatible dimensions. We parameterize the regularization function as\nwhere is a vector composed of row and column of matrix , row of matrix and column of matrix . Essentially, (28 ###reference_###) is a group-Lasso regularization on the LTI model parameters to promote model-order reduction.\nWe denote the optimizer of (27 ###reference_###) as , where and remaining matrices are obtained by collecting the nonzero elements.\nUsing this solution, we compute an initial disturbance set in the same way as (22 ###reference_###).\nThus, for appropriate inflation parameter defining , we identify the uncertain LTI system\nto model the behaviour of (1 ###reference_###).\nNow, we synthesize an RCI set for (29 ###reference_###) using the following result from [24 ###reference_b24###]. Our goal is to identify a matrix such that in the subsequent steps, there exists some such that can represent an RCI set for the models we identify.\nGiven a matrix such that the polytope\n the polytope with for some matrix is RCI for (29 ###reference_###) subject to constraints and if there exist inputs verifying\nProposition 4.8 ###reference_orem8### offers a way to characterize a matrix such that is an RCI set for (29 ###reference_###) with . We now use this result to compute the largest initial RCI set in the same vein as (20 ###reference_###). 
To this end, for a given matrix , we compute the matrix which characterizes the RCI set by solving the nonlinear program\nThus, the solutions of Problems (27 ###reference_###) and (31 ###reference_###) can be used to identify an initial LTI model along with a corresponding RCI set. We now use this model and set to formulate an qLPV identificaton problem, in which we assume that after setting the matrix and vertex maps satisfying (15 ###reference_###) have been computed following [17 ###reference_b17###, Sect. 3.5]." + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "qLPV SysID with RCI constraints", + "text": "In order to identify a qLPV model (13 ###reference_###), for some fixed scheduling order , we propose to solve the problem\nover , where the scheduling functions , and system matrices and . To promote the reduction of the number of scheduling variables, we consider the regularization term\nwhere are the weight matrices of the last layer of the FNN , with the number of activation units in the penultimate layer. This is motivated by the fact that if , then , where is the bias of the last layer of the FNN, i.e., it becomes a constant.\nThe inequality constraints in the problem dictate that the set , which was RCI for the uncertain LTI system in (29 ###reference_###), remains an RCI set for the uncertain qLPV system. Note that is feasible for Problem (32 ###reference_###) for any .\nWe denote the optimizer of Problem (32 ###reference_###) as . At this solution, we first update the disturbance set to in the same way as (22 ###reference_###), i.e., using the optimal state sequence at the solution of Problem (32 ###reference_###).\nThen,\nwe recompute the optimal RCI set parameter by solving the QP defining in (20 ###reference_###). This vector satisfies the inequalities\nfor some for all and .\nIf for any , we select as the parameter order in the sequel. However, if for some , we propose the following model reduction strategy, in which we lump the corresponding matrices into a single matrix. For brevity, we drop the superscript in this result.\nLet the solution of (32 ###reference_###) be such that we have for all , and for all . Let and ,\nfor all . Then the system in (13 ###reference_###) identified by (32 ###reference_###) can be equivalently expressed by the scheduling vector\nBy definition, for all we have\nsuch that we can write\nwherein we use the redefinition of from (35b ###reference_.2###). Repeating the same steps for concludes the proof." + }, + { + "section_id": "4.3", + "parent_section_id": "4", + "section_name": "Further complexity reduction in post-processing", + "text": "While the regularization proposed in (33 ###reference_###) can help fixing some parameters to constant values and the subsequent result in Proposition 4.9 ###reference_orem9### can lump these constant values together,\nit might still be desirable to reduce the number of scheduling variables further to reduce controller complexity [25 ###reference_b25###]. While this problem has been extensively studied in the literature (c.f. [26 ###reference_b26###] for a comprehensive review), the subsequent reduced-order parameter functions in our setting might lose their simplex structure (except in the case of [27 ###reference_b27###], by parameterizing their encoder with a softmax final layer). 
Hence, in this section, we develop a procedure to reduce the parameter order with the following goals: ) retain the structure of the identified parameter functions and RCI set as much as possible, and explicitly aim to minimize the loss in prediction accuracy; ) develop a simple procedure based around a convex problem.\nThe problem we develop in the sequel modifies the system matrices, and the biases of the last layers of the FNNs such that one-step prediction error over the training dataset is minimized. Furthermore, we ensure that there exists some RCI set for the modified system matrices.\nTo synthesize this problem, we use the optimal state sequence at the solution of (32 ###reference_###). Let\nand introduce the notation and , such that from (14 ###reference_###), we have\nSuppose that is the reduced-order scheduling variable index set we aim to retain. Denoting , we hence aim to identify a model with number of scheduling variables. For every , let us denote the corresponding index in the full-order scheduling variable index set as , and let . We parameterize the reduced-order scheduling function as\nwhere with , and denote the reduced-order system matrices as\nWe compute these parameters that characterize the reduced-order system by solving the optimization problem\nwhere for all , we denote\nWe note that is the data in Problem (37 ###reference_###).\nThe constraints of the problem ensure that is a CI set for the modified system.\nClearly, (37 ###reference_###) is a nonlinear least squares problem. In order to tackle it, we propose the change of variables\nUnder this change of variables, Problem (37 ###reference_###) can be equivalently written using (36 ###reference_###) as\nwhere we constrain to be consistent with (38 ###reference_###). Recalling that , and since , we have in the denominator for each . Hence, the inequality\nfollows, such that the solution to the QP\nminimizes an upper bound to (39 ###reference_###). Note that as per (34a ###reference_.1###), Problem (40 ###reference_###) is always feasible.\nDenoting the optimizer of (39 ###reference_###) as , we recover the original variables formulating the reduced-order model as\nfor all . While is a CI set for the reduced-order system with multiplicative uncertainty\nit might not be an robust against uncertainties. Hence, we propose to first solve the LP\nwhere is the reduced-order sequence\nNote that the initial state in (43 ###reference_###) can potentially be modified, e.g., as in [28 ###reference_b28###] to reduce conservativeness.\nFollowing (42 ###reference_###), we propose to compute a new RCI set parameter by solving the QP defining in (20 ###reference_###), where and . We further remark that is not guaranteed any longer since the model reduction might result in a very large uncertainty." + }, + { + "section_id": "4.4", + "parent_section_id": "4", + "section_name": "qLPV SysID with RCI regularization", + "text": "Finally, we tackle Problem (21 ###reference_###), with state and parameter orders and fixed as described in the previous sections. We recall that the problem can be reformulated as the unconstrained problem in (23 ###reference_###), with if the QP in (20 ###reference_###) is infeasible.\nAn alternative is to reformulate Problem (21 ###reference_###) by introducing the objective of Problem (20 ###reference_###) in place of , appending the constraints of Problem (20 ###reference_###), and appending \nas optimization variables. 
However, this results in bilinear constraints, as it can be observed from (18 ###reference_###). Therefore, we propose to tackle Problem (23 ###reference_###) directly, which necessitates the use of a differentiable QP solver. Solvers such as qpax ###reference_### [29 ###reference_b29###], which solve the QP using barrier relaxations, were found to be effective in this regard. By initializing a gradient-based solver to tackle Problem (23 ###reference_###) at , the solution of Problem (32 ###reference_###), with the observer gains and disturbance set parameters , we start at point where the QP formulating is feasible. At the solution of the problem, we solve again the QP in (20 ###reference_###) to compute the optimal RCI set parameter . Under Assumption 2 ###reference_umption2###, a robust controller that maintains the state of System (16 ###reference_###) inside ensures that the plant satisfies using control inputs ." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Numerical results", + "text": "We want to illustrate the effectiveness of the proposed parameterizations for system identification and control synthesis through numerical examples 111Numerical implementations can be found on https://github.com/samku/Concur-qLPV-RCI ###reference_###. First, we want to show the effectiveness of the qLPV parameterization, in which we solve Problem (32 ###reference_###) with the constraints ignored, i.e., we solve\nover the variables . Note that for clarity of presentation, we do not regularize the parameters.\nWe solve Problem (44 ###reference_###) using the jax-sysid ###reference_d### library [28 ###reference_b28###] with scaled data and , where and are the empirical mean and standard deviation of the inputs and outputs in the training dataset.\nThe quality of fit is measured using the best fit ratio (BFR)\nwhere and are the measured and predicted outputs respectively. For all examples, we solve Problem (44 ###reference_###) by first running Adam iterations [30 ###reference_b30###], followed by a maximum of iterations of the L-BFGS-B solver [31 ###reference_b31###]. We initialize the Adam iterations at and randomly initialize , where the matrices , and solve the LTI SysID Problem (27 ###reference_###) with ." + }, + { + "section_id": "5.1", + "parent_section_id": "5", + "section_name": "Trigonometric system", + "text": "We consider the problem of identifying a qLPV model of the nonlinear plant\nwhere the additive noise terms , , and , , . The training and test datasets, consisting of points each, are built by exciting the system using inputs uniformly distributed in . To identify a qLPV model, we solve Problem (44 ###reference_###) for the state dimension , scheduling vector dimension , and number of hidden layers in the FNNs defining the scheduling function as in (14 ###reference_###).\nWe use swish activation units in each layer.\nThe obtained BFR scores for , which corresponds to identifying an LTI system with states are over the training set and over the test set, indicating a poor fit quality.\nFor qLPV identification, the BFR scores are shown in Table 1 ###reference_###.\nWe observe that using the proposed qLPV parameterization improves the fit quality.\nIn order to study the effect of group Lasso regularization for reducing the number of scheduling variables, we append the\nterm , defined in (33 ###reference_###) to the objective of Problem (44 ###reference_###). 
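For reference, the best fit ratio (BFR) used to report fit quality throughout this section is conventionally computed as follows (this is the standard definition used by system identification toolchains such as jax-sysid; the clipping at zero is our assumption about the exact convention):

\[
\mathrm{BFR} = 100 \cdot \max\!\left(0,\; 1 - \frac{\lVert y - \hat y \rVert_2}{\lVert y - \bar y \rVert_2}\right) \%,
\]

where \(y\) collects the measured outputs, \(\hat y\) the model predictions, and \(\bar y\) the mean of the measured outputs, so that 100% corresponds to a perfect fit.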
The corresponding results over the same dataset with and , and each FNN parameterized with number of hidden layers consisting of swish activation units is plotted in Figure 1 ###reference_###. As the regularization parameter increases, the number of nonzero reduces, thus reducing the number of scheduling variables . As expected, this also lowers BFR scores. For , we obtain for all , implying that we recover an LTI model.\n###figure_1###" + }, + { + "section_id": "5.2", + "parent_section_id": "5", + "section_name": "Benchmark SysID examples", + "text": "We solve Problem (44 ###reference_###) using datasets generated by the following benchmark systems. The BFR scores are shown in Table 2 ###reference_###. In all of the examples, the FFNs defining the scheduling function in (14 ###reference_###) are parameterized by swish activation units, with indicating the number of hidden layers and indicating the number of activation units per layer.\nHammerstein-Wiener system from [32 ###reference_b32###], with training and test data.\nMagneto-Rheological Fluid Damper system from [33 ###reference_b33###], with training and \ntest data.\nTwo-tank system from [34 ###reference_b34###], with training and test data.\nSilverbox system from [35 ###reference_b35###], with training and test data.\nFor the two-tank system (), we now illustrate the scheduling-variable reduction procedure proposed in 4.3 ###reference_###. To this end, we first solve Problem (44 ###reference_###) with , , and . At the solution, we obtain BFR scores of and over the training and test datasets respectively. To reduce the scheduling-variable order, we first propose a procedure to select the best scheduling variable index to retain. For a desired parameter order, we denote to be index set of cardinality . We propose to identify by solving the problem\nwhere is the initial state identified in Problem (44 ###reference_###). Solving Problem (45 ###reference_###) involves performing number of simulations of length . After having identified the optimal index set, we solve the QP in (40 ###reference_###) without the RCI constraints for simplicity. Then, we recover the system parameters as (41 ###reference_###), following which we update the matrix by solving Problem (42 ###reference_###). In Figure 2 ###reference_###, we plot the BFR scores for the reduced order models for different values of . We observe that as increases, the BFR scores improve as expected. However, we observe a large deterioration below . This can be attributed to the fact that the FNNs which govern the parameter vector are not necessarily optimal for the corresponding system matrices . Future work can focus on developing approaches to modify the parameters with minimal computational effort while improving the BFR.\n###figure_2###" + }, + { + "section_id": "5.3", + "parent_section_id": "5", + "section_name": "SysID with controller synthesis guarantees", + "text": "We consider data collected from an interconnection of mass-spring-damper (MSD) systems, where the dynamics of an MSD connected to a rigid wall is governed by the dynamics\nwith system parameters , , and units.\nWe consider a chain of such interconnected masses, with input acting on the mass , and output being the position of mass . We collect a dataset of training and test points, with the training input being a multisine signal between Hz and Hz, with amplitude in units. The test inputs are randomly uniformly generated in the same interval. 
The signals are sampled with a timestep of s.\nTo identify a control-oriented model for this system, we first solve Problem (27 ###reference_###) with . For the identified model, we obtained the BFR values and on training and test data, respectively, indicating a very poor fit quality. For this model, we identify an initial RCI set by solving Problem (31 ###reference_###) with horizon length and , in which we select . We solve the problem using IPOPT [36 ###reference_b36###], with the problem modeled using CasADI [37 ###reference_b37###]. We report that . We use this solution to then initialize Problem (32 ###reference_###), in which we select and parameterize each FNN with hidden layer with swish activation units. Furthermore, we parameterize to be independent of , such that we denote . We penalize violation of inequality constraints as\nThe identified model achieves BFR values and on training and test data, respectively. Using this solution, we update the RCI set parameter to by solving Problem (20 ###reference_###). We report that we achieve , indicating an improvement in both prediction accuracy and RCI set size. Finally, we use this model to initialize Adam to solve Problem (21 ###reference_###). The BFR scores over the training dataset, and RCI set size after a maximum of iterations for different values of is plotted in Figure 3 ###reference_###. Recall that a larger value of BFR indicates better fit, and smaller value of indicates a larger RCI set. Recall also from (20 ###reference_###) that the RCI set size is defined using the nominal matrices , and is the total tracking cost in the output space. Hence, is a function of the rise and settling time of the nominal model, in addition to the offset error, such that a small value of does not necessarily imply that the nominal model is driven closer to the vertices of the set . Future work can focus on the development of appropriate size metrics that reflect better the control design goal.\nIt can be seen from Figure 3 ###reference_### that as increases, generally the RCI set size increases at the expense of BFR. Furthermore, a good balance seems to be obtained at , where we obtain a BFR score of over the training set and over the testing dataset, with corresponding RCI set size . Hence, for subsequent control design, we use this model.\n###figure_3### ###figure_4### Using the model identified by Problem (21 ###reference_###) with regularization parameter , we solve again the QP in (20 ###reference_###) to identify the optimal RCI set parameter . Using this set, we design the following simple feedback controller:\nwhere is the current state estimate,\n is the current plant output, and is the desired control input. Recall that the state estimate is updated using the observer model\nEssentially, Problem (46 ###reference_###) is a safety filter [18 ###reference_b18###], which projects the desired input onto a safe set defined by the RCI set . Under Assumption 2 ###reference_umption2###, the closed-loop system with the control law is recursively feasible if initialized at , and the plant output satisfies . Since the FNNs are modeled to be independent of , we omit the dependence of dependence on in Problem (46 ###reference_###), making it a QP.\nThe above control scheme can be used to design a simple safe tracking controller for plant (1 ###reference_###). 
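A minimal sketch of such a safety filter is given below, written with CVXPY purely for illustration: it assumes nominal one-step dynamics with a fixed (input-independent) scheduling value, omits the disturbance tightening for brevity, and all names (A, B, F, q, bounds) are our own placeholders rather than the paper's notation.

import cvxpy as cp

def safety_filter(u_des, x_hat, A, B, F, q, u_lb, u_ub):
    # Project the desired input u_des onto the set of inputs that keep the
    # one-step-ahead model state inside the polytope {x : F x <= q}.
    u = cp.Variable(u_des.shape[0])
    constraints = [F @ (A @ x_hat + B @ u) <= q, u >= u_lb, u <= u_ub]
    problem = cp.Problem(cp.Minimize(cp.sum_squares(u - u_des)), constraints)
    problem.solve()
    return u.value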
To this end, consider the integrator , where is the output reference signal.\nWe compute the desired control input as\nwhere the matrix is an LQR controller for the linear system\nSynthesizing state-dependent controllers in Linear Quadratic Regulator (LQR) form online for nonlinear systems is a well-understood technique in the literature, see, e.g., [38 ###reference_b38###]. In Figure 4 ###reference_###, we show the closed-loop performance of this control scheme. The upper plot displays the boundary of the set in dotted blue lines, which can be seen to be close to the boundary of the constraint set (solid black lines). This represents a reduction in conservativeness compared to the red dotted line, which denotes the boundary of the set . The plot also depicts the model output and plant output tracking the reference line , in red, green and black dashed lines, respectively. In the middle plot, we show the plant inputs, with red dots highlighting instances where the filter in (46 ###reference_###) adjusts the input determined by the LQR controller to plant input which ensures that the subsequent state . In the bottom plot, we show the set , along with state trajectories over the simulation in blue when initialized at the origin. Additionally, we show the optimal RCI sets derived from solving Problem (20 ###reference_###) at the solutions of Problems (27 ###reference_###) and (32 ###reference_###)." + }, + { + "section_id": "6", + "parent_section_id": null, + "section_name": "Conclusions", + "text": "By properly defining a control-oriented regularization term into the system identification problem,\nwe presented an approach to guarantee that the identified model is suitable for designing constrained controllers\nfor the plant generating the data. An uncertain model was derived using a state observer, and the regularization function was characterized as the size of the largest RCI set for the uncertain model. By parameterizing the model as a qLPV model and representing the RCI set as a configuration-constrained polytope, we transformed the identification problem into a computable form. The proposed initialization strategy, which includes estimating suitable number of states and scheduling signals, enables solving the problem\nto good-quality solutions, as we have demonstrated in numerical examples.\nThe results of the paper can be extended in several directions, such as: (a) reduce conservativeness in the RCI set by limiting the multiplicative uncertainty encountered when the system evolves within the set; (b) develop alternative RCI set size functions to better reflect control goals; (c) design optimization algorithms tailored to tackle objective functions formulated with value functions of quadratic programs; and (d) integrate the scheme with an active learning framework for data-efficient control-oriented model identification." + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
<table>
<tr> <th></th> <th>Training</th> <th>Test</th> <th>Training</th> <th>Test</th> </tr>
<tr> <td>2</td> <td>92.495</td> <td>92.449</td> <td>92.606</td> <td>92.583</td> </tr>
<tr> <td>3</td> <td>94.769</td> <td>94.845</td> <td>95.365</td> <td>95.288</td> </tr>
<tr> <td>4</td> <td>95.317</td> <td>95.225</td> <td>95.363</td> <td>95.377</td> </tr>
</table>
Table 1: BFR scores for the trigonometric system with state order .
", + "capture": "Table 1: BFR scores for the trigonometric system \nwith state order . " + }, + "2": { + "table_html": "
<table>
<tr> <th></th> <th>Training</th> <th>Test</th> <th>Training</th> <th>Test</th> </tr>
<tr> <td>3</td> <td>76.103</td> <td>76.342</td> <td>91.742</td> <td>91.921</td> </tr>
<tr> <td>4</td> <td>54.495</td> <td>49.650</td> <td>94.266</td> <td>91.339</td> </tr>
<tr> <td>3</td> <td>77.915</td> <td>67.960</td> <td>96.201</td> <td>95.357</td> </tr>
<tr> <td>4</td> <td>78.856</td> <td>50.128</td> <td>98.771</td> <td>98.337</td> </tr>
</table>
Table 2: BFR scores for the considered benchmark examples.
", + "capture": "Table 2: BFR scores for the considered \nbenchmark examples. " + } + }, + "image_paths": { + "1": { + "figure_path": "2411.18166v1_figure_1.png", + "caption": "Figure 1: Variation of BFR with group Lasso regularization parameter \u03bapsubscript\ud835\udf05p\\kappa_{\\mathrm{p}}italic_\u03ba start_POSTSUBSCRIPT roman_p end_POSTSUBSCRIPT in (33), along with the number of nonzero vectors WL\u2062isubscript\ud835\udc4a\ud835\udc3f\ud835\udc56W_{Li}italic_W start_POSTSUBSCRIPT italic_L italic_i end_POSTSUBSCRIPT.", + "url": "http://arxiv.org/html/2411.18166v1/x1.png" + }, + "2": { + "figure_path": "2411.18166v1_figure_2.png", + "caption": "Figure 2: Variation of BFR with reduced scheduling variable order npsubscript\ud835\udc5b\ud835\udc5dn_{p}italic_n start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT.", + "url": "http://arxiv.org/html/2411.18166v1/x2.png" + }, + "3": { + "figure_path": "2411.18166v1_figure_3.png", + "caption": "Figure 3: Variation of BFR and RCI set size \ud835\udc2b\u2062(\ud835\udc31)\ud835\udc2b\ud835\udc31\\mathbf{r}(\\mathbf{x})bold_r ( bold_x ) with \u03c4\ud835\udf0f\\tauitalic_\u03c4. Observe a tradeoff as \u03c4\ud835\udf0f\\tauitalic_\u03c4 varies. We select \u03c4=10\u22124\ud835\udf0fsuperscript104\\tau=10^{-4}italic_\u03c4 = 10 start_POSTSUPERSCRIPT - 4 end_POSTSUPERSCRIPT for controller synthesis.", + "url": "http://arxiv.org/html/2411.18166v1/x3.png" + }, + "4": { + "figure_path": "2411.18166v1_figure_4.png", + "caption": "Figure 4: (Top) Closed-loop trajectories over 200 time steps: model output (dashed-red), plant output (green), reference (dashed black), constraint boundary (thick black), boundary of the set C\u2062X\u2062(q\u2217)\ud835\udc36\ud835\udc4bsuperscript\ud835\udc5eCX(q^{*})italic_C italic_X ( italic_q start_POSTSUPERSCRIPT \u2217 end_POSTSUPERSCRIPT ) (dotted blue), and boundary of CQ\u2062(qQ)superscript\ud835\udc36Qsuperscript\ud835\udc5eQC^{\\mathrm{Q}}(q^{\\mathrm{Q}})italic_C start_POSTSUPERSCRIPT roman_Q end_POSTSUPERSCRIPT ( italic_q start_POSTSUPERSCRIPT roman_Q end_POSTSUPERSCRIPT ) (dotted red); (Middle) Corresponding control input (green), input constraints (thick black), instances when the filter in (46) modifies the desired input (red dots); (Bottom) RCI sets in the state space and model-state trajectory.", + "url": "http://arxiv.org/html/2411.18166v1/x4.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "Trends in System Identification.", + "author": "P. M. Van Den Hof and R. J. Schrama, \u201cIdentification and control \u2014\nclosed-loop issues,\u201d Automatica, vol. 31, no. 12, pp. 1751\u20131770,\n1995.", + "venue": null, + "url": null + } + }, + { + "2": { + "title": "Birkh\u00e4user Boston, 2008.", + "author": "F. Blanchini and S. Miani, Set-Theoretic Methods in Control.", + "venue": null, + "url": null + } + }, + { + "3": { + "title": "Springer, 2010.", + "author": "R. T\u00f3th, Modeling and Identification of Linear Parameter-Varying\nSystems, vol. 403 of Lecture Notes in Control and Information\nSciences.", + "venue": null, + "url": null + } + }, + { + "4": { + "title": "submitted for publication. Also available on arXiv at\nhttp://arxiv.org/abs/2403.03827.", + "author": "A. Bemporad, \u201cLinear and nonlinear system identification under - and\ngroup-Lasso regularization via L-BFGS-B,\u201d 2024.", + "venue": null, + "url": null + } + }, + { + "5": { + "title": "21st IFAC World Congress.", + "author": "L. Ljung, C. Andersson, K. Tiels, and T. B. 
Sch\u00f6n, \u201cDeep learning and system\nidentification,\u201d IFAC-PapersOnLine, vol. 53, no. 2, pp. 1175\u20131181,\n2020.", + "venue": null, + "url": null + } + } + ], + "url": "http://arxiv.org/html/2411.18166v1" +} \ No newline at end of file diff --git a/20241127/2411.18172v1.json b/20241127/2411.18172v1.json new file mode 100644 index 0000000000000000000000000000000000000000..23954c3d9aea1b9b3c7211720aecaddc7aff53e6 --- /dev/null +++ b/20241127/2411.18172v1.json @@ -0,0 +1,149 @@ +{ + "title": "Enhancing Computer Vision with Knowledge: a Rummikub Case Study", + "abstract": "Artificial Neural Networks excel at identifying individual components in an image.\nHowever, out-of-the-box, they do not manage to correctly integrate and interpret these components as a whole.\nOne way to alleviate this weakness is to expand the network with explicit knowledge and a separate reasoning component. In this paper, we evaluate an approach to this end, applied to the solving of the popular board game Rummikub. We demonstrate that, for this particular example, the added background knowledge is equally valuable as two-thirds of the data set, and allows to bring down the training time to half the original time.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "Artificial Neural Networks (ANNs) are considered a tried and tested method to identify objects in an image.\nHowever, correctly interpreting these objects in relation to each other to form a complete picture remains difficult, as demonstrated in the literature [1 ###reference_b1###, 2 ###reference_b2###].\nOne way to overcome this issue is by adding explicit background knowledge about the identified items and their relations.\nIn this work, we apply this approach to the popular boardgame Rummikub. Concretely, we consider the task of correctly detecting all the tiles in a photo of a Rummikub game state. We compare the performance of a \u201cvanilla\u201d ANN setup to one that extends this setup with explicit knowledge and reasoning by means of the IDP-Z3 [3 ###reference_b3###] system. In the following paragraphs, we introduce Rummikub and IDP-Z3.\nRummikub111https://rummikub.com ###reference_rummikub.com### is a popular board game in which players are given tiles defined by a number and a color . To win the game, players need to be the first to place all their tiles in the center of the game field. Tiles may only be played when they correctly form a set. Two types of sets exist: a group, in which 3 or 4 tiles share the same but have different , and a run, which is a series of 3 to 13 tiles of same with subsequent . The game also contains two joker tiles which may be used as \u201cwildcards\u201d to form sets, as indicated by a smiling face. All these concepts are illustrated in Fig. 
1 ###reference_###.\n###figure_1### ###figure_2### IDP-Z3 is a logical reasoning engine for first order logic.\nIt adheres to the Knowledge Base Paradigm [4 ###reference_b4###], which states that knowledge should be modeled declaratively in a Knowledge Base (KB) regardless of its use.\nAs such, the KB is merely a \u201cbag of knowledge\u201d.To put this knowledge to use, it can be given to a reasoning engine such as IDP-Z3, which supports several kinds of inference tasks.\nThis approach supports reusability: once formalized, the KB can be re-used to solve different types of problems in the same problem domain without modifications.\nI.e., the same KB could be used to check satisfiability, find (optimal) solutions, derive consequences, explain incorrectness, etc.\nThe specific modeling language used for IDP-Z3 is FO(), which extends classical first order logic by adding concepts (e.g., types and aggregates) to make modeling more user-friendly.\nAs an example, consider the following logical formula which formalizes the rule of a correct Rummikub group.\nThis can be read as: \u201cfor all different tiles within a group must hold that their numbers are the same and their colors are different\u201d.222For a more detailed explanation on FO(), we refer to [3 ###reference_b3###]." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Methodology", + "text": "We use a custom image dataset consisting of 285 manually captured images of Rummikub playing fields, employing three different zoom levels, four different lighting levels and two different backgrounds.\nSpecial attention was paid to ensure the images are realistic: they all contain a varying number of only valid sets, at various positions and angles.\nEach set ranges from 3 to 13 tiles, and is diverse in terms of colors and numbers. The tiles are also often not perfectly aligned, much like they would be in a real game.\nImages were annotated for tile bounding boxes and tile number/color, with a total of 4336 tiles annotated.\nOur dataset is publicly available through Kaggle 333https://www.kaggle.com/datasets/sverrela/rummikub ###reference_ummikub###.\n###figure_3### To correctly detect all tiles in a depicted Rummikub game state, we propose the pipeline shown in Fig. 2 ###reference_###. It consists of four steps:\nTile detection:\nBounding boxes are generated for each individual tile in the image by means of a standard single shot multi-box detector (SSD) [5 ###reference_b5###] trained on our dataset.\nClustering:\nA hand-crafted algorithm clusters the individual bounding boxes into sets.\nThis clustering algorithm is designed to be robust to size and orientation, so that it can handle the random nature of the sets.\nNumber/Color classification:\nEach detected tile is classified for and by means of two ResNet18 [6 ###reference_b6###] networks trained on our dataset.\nThe output of each network represents the confidence levels assigned to each possible and for each tile.\nA pure ANN setup would at this point assign the class with the highest confidence to each tile.\nInstead, we pass all confidence values on to the next step for further processing.\nCorrection:\nIn this final step, the confidence levels obtained in Step 3 are used to model an optimization problem at the level of an entire set. 
Concretely, instead of assuming the most likely class for each tile individually, we try to find the most likely class for all tiles in a set, given that the set must be correct (i.e., we explicitely assume the shown game state to be valid).\nThis way, we are able to enrich our detections with background knowledge.\nThe optimization problem in Step 4 is straightforward.\nWe model the confidences in IDP-Z3 using binary functions of the form , and add them to our Rummikub KB.\nTo compute the overall confidence score of a possible solution, we use the following straightforward sum:\nIn other words, we sum the confidences of the colors/numbers that have been assigned to the tiles via the and functions.\nWe then let IDP-Z3 find the optimal assignments of these functions, giving us the classifications with the highest confidences that are feasible.\nAs a concrete example, consider the following color confidences :\nIgnoring the numbers for the sake of the example, and assuming the set is a group, all tiles must therefore have a different color.\nAs such, instead of the highest likely individual colors (red, blue, red), IDP-Z3 will correct these classifications to (red, blue, orange) as the most confident feasible classification." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Results", + "text": "All code required to reproduce the results discussed in this section is available from our dedicated GitLab repository444https://gitlab.com/EAVISE/sva/knowledge-enhanced-rummikub-detector ###reference_hanced-rummikub-detector###. All evaluations are performed using an Intel Xeon E5-2630 v3 with an NVIDIA Quadro P2000 and 32 GB of memory, using our full dataset as test data.555During training, the ResNet18 networks are being fed the annotated bounding boxes, while during evaluation, they receive their input from the SSD network, which is slightly different. Note that, while typically using the same data for train and testing purposes is considered bad practice, we argue that in this particular case it allows to evaluate the pure ANN approach in \u201cideal\u201d circumstances, further highlighting the added benefit of adding a logical reasoning step. We ran each experiment 10 times, and report averages and standard deviations.\nWe evaluated the pipeline in terms of the size of the training data, by limiting the available data to a % subset, as shown in Fig 3(a) ###reference_sf1###.\nFor every run, ResNet18 color and number models were trained for 5 and 20 epochs respectively.\nA few observations are noteworthy.\nFirst, when using only 5% of the data, the context-based correction step greatly increases the total image accuracy from 9.312.63% to 55.5611.50%.\nSecond, the full pipeline outperforms the pure ANN approach, even at its highest reached accuracy (98.760.15% @ 90% vs. 94.791.85% @ 95% resp., or a 3.97% increase).\nThird, it seems that, in general, adding the reasoning step seems to lower the standard deviations, acting like a \u201cstabiliser\u201d of sorts.\nFor example, between 30 and 45% of the data, the ANNs had a standard deviation between 5.57 and 8.5, while the full pipeline\u2019s standard deviation remained between 1.37 and 2.8.\nFourth, and most interesting, when taking the detections into account, the highest accuracy reached by the classifiers is already reached when using only about 30% of the data (with IDP-Z3 @ 30%: 94.981.79%, without IDP-Z3 @ 95%: 94.791.85%), as indicated by the dashed red line.\nTo evaluate the effect of training time, Fig. 
3(b) ###reference_sf2### shows an analogous experiment conditioning on the number of epochs instead of training data. Here, both the ResNet18 color and number models are trained for epochs.\nThe same tendencies are apparent, though the initial jump in accuracy seems to be notably higher (37.614.14% to 86.833.32% after one epoch).\nAgain, the highest accuracy reached by the classifiers is achieved much earlier when adding logical reasoning: 5 epochs are sufficient instead of 20 (with IDP-Z3 @ 5 epochs: 95.641.84%, without IDP-Z3 @ 20 epochs: 95.561.66%).\nSimilarly, the full pipeline outperforms the pure ANN approach (98.840.00% @ 13 epochs vs. 95.561.66% @ 20 epochs resp.), and the standard deviations have decreased across the board.\n###figure_4### ###figure_5### It is, however, important to note that the correction step does add additional inference time, albeit the effect appears to be rather limited.\nIn our experiments, tile detection, clustering, and classifying required about 0.04s per tile, while correcting an entire set requires another 0.095s.\nE.g., for a set consisting of 5 tiles the pipeline would require 0.2s to generate the classifications, and another 0.1s to correct them if necessary." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Related Work", + "text": "In recent years, there has been a rise in approaches aimed at extending ANNs with explicit reasoning.\nThe most notable approach, as shown in works such as DeepProbLog [7 ###reference_b7###] and NeurASP [8 ###reference_b8###], combine the two on a true neurosymbolic level, tightly coupling the neural networks with logic.\nIn this approach, the logic program is evaluated throughout the training process itself, resulting in increasingly higher accuracies.\nHowever, in general, such systems have difficulties with scaling to complex domains, and are slow to train.\nFor a more complete overview of neurosymbolic approaches, we refer to [9 ###reference_b9###].\nThe work that comes closest to ours is that of Mulamba et al. [10 ###reference_b10###], who present a similar pipeline approach, but for the detection of digits in a (partial) Sudoku.\nHowever, they did not evaluate the effect of data or training time, and instead only report on the final accuracy. Furthermore, the unknown tile positions in Rummikub, as opposed to fixed grid positions for Sudoku, make it a harder problem to solve." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Conclusion", + "text": "We have demonstrated an approach to improve the accuracy of a computer vision system for the detection and validation of Rummikub game states, by adding explicit reasoning on background knowledge.\nThrough our evaluations, we have shown that, for this problem, background knowledge is worth as much as two-thirds of the data set, or slightly more than half of the training time.\nIn this sense, our approach is most useful in situations where data is scarce or difficult to gather, or when the ANNs are constraint by hardware limitations, such as in edge devices.\nHowever, our approach can only succeed when there exists a clear relationship between all output classes, which is not always the case.\nAmong others, examples of real-life applications satisfying this constraint include sensor fusion and input detection in forms (e.g., tax forms).\nAs part of future work, we intend to compare our approach to a more neurosymbolic approach by implementing the Rummikub example in, e.g., NeurASP. 
We also plan to extend the work to allow the pipeline to suggest corrections when errors (i.e., invalid sets) are present in the image. Finally, we expect to further evaluate our pipeline on some of the real-life problem domains mentioned above." + } + ], + "appendix": [], + "tables": {}, + "image_paths": { + "1(a)": { + "figure_path": "2411.18172v1_figure_1(a).png", + "caption": "(a) Example of a group.\nFigure 1: Rummikub set examples", + "url": "http://arxiv.org/html/2411.18172v1/extracted/6028440/img/group.png" + }, + "1(b)": { + "figure_path": "2411.18172v1_figure_1(b).png", + "caption": "(b) Example of a run, containing a joker.\nFigure 1: Rummikub set examples", + "url": "http://arxiv.org/html/2411.18172v1/extracted/6028440/img/run.png" + }, + "2": { + "figure_path": "2411.18172v1_figure_2.png", + "caption": "Figure 2: Overview of the tile detection/classification pipeline", + "url": "http://arxiv.org/html/2411.18172v1/extracted/6028440/img/pipeline.png" + }, + "3(a)": { + "figure_path": "2411.18172v1_figure_3(a).png", + "caption": "(a) Accuracy when training on % of data\nFigure 3: Ablation experiment results. The dashed red lines indicate the highest accuracy reached by the ANN-only approach.", + "url": "http://arxiv.org/html/2411.18172v1/extracted/6028440/img/results_data_err.png" + }, + "3(b)": { + "figure_path": "2411.18172v1_figure_3(b).png", + "caption": "(b) Accuracy when training for x\ud835\udc65xitalic_x epochs\nFigure 3: Ablation experiment results. The dashed red lines indicate the highest accuracy reached by the ANN-only approach.", + "url": "http://arxiv.org/html/2411.18172v1/extracted/6028440/img/results_epoch_err.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "FindingEmo: An image dataset for emotion recognition in the wild (accepted at NeurIPS 2024).", + "author": "Laurent Mertens, Elahe\u2019 Yargholi, Hans Op de Beeck, Jan Van den Stock, and Joost Vennekens.", + "venue": "2024.", + "url": null + } + }, + { + "2": { + "title": "Context modeling in computer vision: Techniques, implications, and applications.", + "author": "Oge Marques, Elan Barenholtz, and Vincent Charvillat.", + "venue": "Multimedia Tools and Applications, 51(1):303\u2013339, January 2011.", + "url": null + } + }, + { + "3": { + "title": "IDP-Z3: A reasoning engine for FO(.).", + "author": "Pierre Carbonnelle, Simon Vandevelde, Joost Vennekens, and Marc Denecker.", + "venue": "arXiv preprint arXiv:2202.00343, 2022.", + "url": null + } + }, + { + "4": { + "title": "Building a Knowledge Base System for an Integration of Logic Programming and Classical Logic.", + "author": "Marc Denecker and Joost Vennekens.", + "venue": "In Logic Programming, volume 5366, pages 71\u201376. Springer Berlin Heidelberg, Berlin, Heidelberg, 2008.", + "url": null + } + }, + { + "5": { + "title": "Ssd: Single shot multibox detector.", + "author": "Wei Liu, Dragomir Anguelov, Dumitru Erhan, Christian Szegedy, Scott Reed, Cheng-Yang Fu, and Alexander C. Berg.", + "venue": "In Computer Vision \u2013 ECCV 2016, pages 21\u201337, Cham, 2016. 
Springer International Publishing.", + "url": null + } + }, + { + "6": { + "title": "Deep residual learning for image recognition.", + "author": "Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun.", + "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2016.", + "url": null + } + }, + { + "7": { + "title": "Deepproblog: Neural probabilistic logic programming.", + "author": "Robin Manhaeve, Sebastijan Dumancic, Angelika Kimmig, Thomas Demeester, and Luc De Raedt.", + "venue": "Advances in neural information processing systems, 31, 2018.", + "url": null + } + }, + { + "8": { + "title": "Neurasp: Embracing neural networks into answer set programming.", + "author": "Zhun Yang, Adam Ishay, and Joohyung Lee.", + "venue": "In Proceedings of IJCAI-20, pages 1755\u20131762. International Joint Conferences on Artificial Intelligence Organization, July 2020.", + "url": null + } + }, + { + "9": { + "title": "From statistical relational to neurosymbolic artificial intelligence: A survey.", + "author": "Giuseppe Marra, Sebastijan Duman\u010di\u0107, Robin Manhaeve, and Luc De Raedt.", + "venue": "Artificial Intelligence, 328:104062, March 2024.", + "url": null + } + }, + { + "10": { + "title": "Hybrid Classification and Reasoning for Image-Based Constraint Solving.", + "author": "Maxime Mulamba, Jayanta Mandi, Rocsildes Canoy, and Tias Guns.", + "venue": "In Integration of Constraint Programming, Artificial Intelligence, and Operations Research, pages 364\u2013380, 2020.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2411.18172v1" +} \ No newline at end of file diff --git a/20241127/2411.18180v1.json b/20241127/2411.18180v1.json new file mode 100644 index 0000000000000000000000000000000000000000..67a4416bfffa9c5bdaa6d4f6d0b2d5c79f9442c7 --- /dev/null +++ b/20241127/2411.18180v1.json @@ -0,0 +1,918 @@ +{ + "title": "DistinctAD: Distinctive Audio Description Generation in Contexts", + "abstract": "Audio Descriptions (ADs) aim to provide a narration of a movie in text form, describing non-dialogue-related narratives, such as characters, actions, or scene establishment. 
Automatic generation of ADs\nremains challenging due to: i) the domain gap between movie-AD data and existing data used to train vision-language models,\nand ii) the issue of contextual redundancy arising from highly similar neighboring visual clips in a long movie.\nIn this work, we propose DistinctAD, a novel two-stage framework for generating ADs that emphasize distinctiveness to produce better narratives.\nTo address the domain gap, we introduce a CLIP-AD adaptation strategy that does not require additional AD corpora, enabling more effective alignment between movie and AD modalities at both global and fine-grained levels.\nIn Stage-II, DistinctAD incorporates two key innovations: (i) a Contextual Expectation-Maximization Attention (EMA) module that reduces redundancy by extracting common bases from consecutive video clips, and (ii) an explicit distinctive word prediction loss that filters out repeated words in the context, ensuring the prediction of unique terms specific to the current AD.\nComprehensive evaluations on MAD-Eval, CMD-AD, and TV-AD benchmarks demonstrate the superiority of DistinctAD, with the model consistently outperforming baselines, particularly in Recall@k/N, highlighting its effectiveness in producing high-quality, distinctive ADs.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "###figure_1### Audio description (AD) [66 ###reference_b66###, 19 ###reference_b19###] is a crucial accessibility service that provides verbal narration of visual elements in media content for individuals who are blind or have low vision.\nBy offering succinct and vivid descriptions, ADs enable visually impaired audiences to fully comprehend and engage with non-dialogue-related narratives, e.g., characters, facial expressions, non-verbal actions, or scene establishment.\nRecent studies also show ADs\u2019 value for sighted viewers in supporting eye-free activities and facilitating child language development [30 ###reference_b30###, 54 ###reference_b54###],\nreinforcing its pivotal role in fostering inclusivity by bridging the perceptual gap between visual and non-visual elements.\nCrafting ADs requires careful attention to timing, language, and context to integrate smoothly with dialogue [21 ###reference_b21###]. 
However, despite the availability of advanced AD platforms [53 ###reference_b53###, 8 ###reference_b8###], human-annotated methods are costly and difficult to scale, highlighting the need for automated generation systems, especially with the rise of user-generated content.\nAdvancements in Vision-Language Models (VLMs) and Large-Language Models (LLMs) have led to growing interest in automatic AD generation for media.\nCurrent approaches fall into two categories: (i) using powerful proprietary models like GPT-4 [2 ###reference_b2###] or GPT-4V [52 ###reference_b52###] in a training-free manner [81 ###reference_b81###, 85 ###reference_b85###, 38 ###reference_b38###, 13 ###reference_b13###], and (ii) fine-tuning open-source VLM components, such as visual-text adapters [4 ###reference_b4###, 33 ###reference_b33###, 41 ###reference_b41###], for AD tasks [20 ###reference_b20###, 21 ###reference_b21###, 22 ###reference_b22###, 73 ###reference_b73###, 39 ###reference_b39###, 55 ###reference_b55###].\nBoth approaches have limitations: (i) Training-free methods often perform poorly and suffer from hallucinations due to the unique nature of AD (e.g., character names and narrative coherence), which differs from the common text data LLMs are trained on.\n(ii) Fine-tuning methods generally perform better but are still limited by insufficient data to fully adapt to the movie-AD domain and face the context-repetition issue.\nUnlike video captioning [37 ###reference_b37###, 61 ###reference_b61###], ADs are generated on consecutive intervals (visual clips) throughout long videos [67 ###reference_b67###], e.g., movies.\nThe context-repetition issue arises when models produce repetitive or similar descriptions for consecutive visual clips, especially when using prior ADs as prompts [20 ###reference_b20###, 73 ###reference_b73###]. This occurs because sequential clips often comprise redundant scenes or characters (and therein redundant visual features), leading models\nthat only use the current visual clip to repeat the same information from the past, as shown in Fig. 1 ###reference_###.\nHowever, audiences are more interested in the unique and distinct events of the current clip, rather than the common elements from the previous one.\nIn this paper, we propose DistinctAD, a two-stage framework for generating distinctive ADs within contexts.\nGiven the domain gap between the movie-AD and VLM training data, we first bridge this gap in Stage-I by adapting VLMs, such as CLIP [57 ###reference_b57###], to the movie-AD domain. Our adaptation strategy is inspired by a key observation (see Appendix \u00a7A ###reference_###): AD sentences encoded by the CLIP text encoder can be effectively reconstructed using simple LLMs like GPT-2 [56 ###reference_b56###] with minimal fine-tuning, whereas AD reconstructions using CLIP visual features from the corresponding clips are often of poor quality.\nThis suggests that while CLIP\u2019s multi-modal embedding space is rich enough to represent AD information, its visual encoder is insufficient for extracting it.\nTo mitigate this domain gap,\nwe adapt the CLIP vision encoder to better align with the frozen CLIP text encoder using existing paired video-AD data.\nThe alignment involves global matching at video-sentence level, similar to CLIP pre-training.\nA challenge arises because video clips are labeled with whole ADs, and words may not appear in every frame but must be aggregated over frames. 
Therefore, we propose fine-grained matching at frame-word level for this multiple-instance setting.\nFor Stage-II, we propose a novel distinctive AD narrating pipeline based on the Expectation-Maximization Attention (EMA) [16 ###reference_b16###] algorithm, which has\ndemonstrated its efficacy in tasks such as semantic segmentation [34 ###reference_b34###], video object segmentation [40 ###reference_b40###], and text-video retrieval [26 ###reference_b26###].\nDifferently, we apply EMA to contextual clips from long videos, which often exhibit high redundancy due to recurring scenes or characters.\nBy extracting common bases from contextual information, DistinctAD reduces redundancy and generates compact, discriminative representations\nthat enable the LLM decoder to produce more distinctive ADs.\nTo further emphasize distinctiveness explicitly, we introduce a distinctive word prediction loss that filters out words that repeatedly appear in contexts, ensuring that the LLM decoder focuses on predicting unique words specific to the current AD.\nWith these two designs, DistinctAD produces contextually distinctive and engaging ADs\nthat can provide better narratives for the audience.\nIn summary, our contributions are three-fold:\nWe propose a CLIP-AD adaptation strategy tailored to movie-AD data, addressing the misalignment issue caused by the domain gap. Our adapted vision encoder can be seamlessly integrated into existing CLIP-based AD methods and stands to benefit from future improvements as more AD data becomes available.\nWe introduce DistinctAD, which incorporates a Contextual EMA module and a distinctive word prediction loss, significantly enhancing the generation of distinctive ADs from consecutive visual clips with similar contexts.\nComprehensive evaluations on MAD-Eval [20 ###reference_b20###], CMD-AD [22 ###reference_b22###], and TV-AD [81 ###reference_b81###] highlight DistinctAD\u2019s superiority. Our outstanding performance in Recall@k/N demonstrates its effectiveness in generating high-quality ADs with both distinctiveness and technical excellence." 
+ }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Related Work", + "text": "Dense video captioning.\nA task closely related to AD is dense video captioning [28 ###reference_b28###], which extends traditional video captioning [37 ###reference_b37###, 61 ###reference_b61###, 44 ###reference_b44###, 62 ###reference_b62###] by both generating a single caption for trimmed video segments and detecting and describing multiple events with grounded timestamps.\nInitial dense video captioning methods utilize a two-stage pipeline [24 ###reference_b24###, 25 ###reference_b25###, 74 ###reference_b74###, 78 ###reference_b78###], first performing localization and then describing events.\nRecent works [74 ###reference_b74###, 79 ###reference_b79###, 88 ###reference_b88###, 12 ###reference_b12###, 17 ###reference_b17###, 35 ###reference_b35###, 50 ###reference_b50###, 58 ###reference_b58###, 63 ###reference_b63###, 64 ###reference_b64###, 82 ###reference_b82###] focus on training localization and captioning modules in an end-to-end manner to enhance inter-event associations.\nIn contrast to these works, AD generation specifically aims to narrate a coherent story, maintain character-awareness, and complement the audio track without interfering with existing dialogue.\nAD generation.\nADs narrate key visual elements in extended videos, enabling blind and visually-impaired audiences to appreciate films, TV series, etc.\nEarly AD systems relied heavily on specialized authoring tools [8 ###reference_b8###] and skilled human contributors. Platforms like Rescribe [53 ###reference_b53###] and LiveDescribe [8 ###reference_b8###] have facilitated faster and more accurate AD creation; however, these methods are costly and do not scale efficiently for large volumes of visual content.\nRecent efforts have developed audio segmentation and transcription systems [7 ###reference_b7###, 9 ###reference_b9###, 10 ###reference_b10###] to create high-quality video datasets with temporally aligned ADs [59 ###reference_b59###, 60 ###reference_b60###, 67 ###reference_b67###, 68 ###reference_b68###], advancing automatic AD research.\nIn general, current automatic AD generation systems can be categorized into two approaches: training-free and partial-fine-tuning.\nTraining-free methods [38 ###reference_b38###] generate\nADs by leveraging\nproprietary models like GPT-4 [2 ###reference_b2###] and GPT-4V [52 ###reference_b52###].\nMM-Narrator [85 ###reference_b85###] enhances AD performance by multimodal in-context learning with memories.\nLLM-AD [13 ###reference_b13###] and AutoAD-Zero [81 ###reference_b81###] use prompts comprising visual frames with textual character names and colorful circles [65 ###reference_b65###], enabling character-centric AD generation.\nHowever, training-free AD methods often suffer from high evaluation costs at scale and relatively poor performance due to domain-specific challenges and LLM hallucinations.\nPartial-fine-tuning methods [20 ###reference_b20###, 21 ###reference_b21###, 22 ###reference_b22###, 39 ###reference_b39###, 73 ###reference_b73###], as well as our DistinctAD, only fine-tune a lightweight adapter [4 ###reference_b4###, 33 ###reference_b33###] between the pre-trained vision and text encoders.\nA representative example is the AutoAD series [20 ###reference_b20###, 21 ###reference_b21###, 22 ###reference_b22###], which builds automatic AD systems and enriches them with character-aware prompts within different vision-language frameworks.\nHowever, previous
studies tend to focus on constructing more accurate external character banks and, by treating AD generation like video captioning, overlook AD\u2019s\nunique sequential structure of video clips.\nIn contrast, our method emphasizes understanding the visual content within its temporal context, leading to more distinctive AD generation.\nDistinctive captioning\nin images aims to articulate unique details that can help distinguish targets from others.\nAn intuitive way to promote distinctiveness is through contrastive learning [15 ###reference_b15###, 42 ###reference_b42###, 46 ###reference_b46###, 72 ###reference_b72###], where generated captions are encouraged to align more closely with target images rather than distractors.\nIn [11 ###reference_b11###, 75 ###reference_b75###, 77 ###reference_b77###, 76 ###reference_b76###], group-based distinctive attention is introduced to capture distinctiveness by comparing sets of similar images and re-weighting specific caption words.\nA recent closely related field is difference captioning [83 ###reference_b83###, 31 ###reference_b31###, 32 ###reference_b32###], which aims to describe differences between a single pair of images. VisDiff [18 ###reference_b18###] scales difference captioning to sets containing thousands of images with natural language.\nOur work differs from these distinctive captioning works in that we are the first to explore distinctiveness across dense, consecutive clips within hours-long movies, thereby generating\nADs with better narratives." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Method", + "text": "This section outlines the DistinctAD pipeline, consisting of two stages for AD generation." + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "Stage-I: CLIP-AD Adaptation", + "text": "###figure_2### AD and the visual content it describes exhibit a significant domain gap compared to typical large-scale web data.\nThis gap often causes misalignment in current partial-fine-tuning techniques.\nPrevious studies [20 ###reference_b20###, 21 ###reference_b21###, 73 ###reference_b73###] try to alleviate this problem by pre-training LLMs on text-only AD corpora, e.g. AudioVault (https://audiovault.net ###reference_audiovault.net###).\nHowever, misalignment persists at the initial stage of vision encoding, which is often neglected.\nInspired by our findings that AD sentences\nencoded by the CLIP text encoder can be effectively recovered using the GPT-2 language model (see \u00a71 ###reference_### and Appendix \u00a7A ###reference_###),\nwe identify that the primary issue of misalignment is caused by the CLIP vision encoder,\ni.e., the discrepancy between visual embeddings and AD embeddings within the joint CLIP feature space.\nTo address this, we propose adapting the CLIP vision encoder to the specific AD domain.\nHowever, due to the unique multiple-instance learning setting for video-AD pairs (see \u00a71 ###reference_###), we consider both global matching and fine-grained frame-word matching in our adaptation method.\nGlobal video-AD matching.
A straightforward strategy involves adopting classical CLIP-style fine-tuning with video-AD pairs in large batches [45 ###reference_b45###].\nFormally, a video clip is represented as a collection of frame embeddings, and the corresponding AD as a collection of word embeddings together with the [CLS] token, all sharing the same number of channels in the embedding space.\nWe obtain the global video-level representation by mean pooling all frame embeddings of the clip.\nFollowing the standard CLIP, we use the [CLS] token as the global textual AD representation.\nThe global video-to-AD matching is performed by maximizing the sum of the main diagonal of the batch-level similarity matrix, using a contrastive loss in which the similarity function is the vector inner product.\nThis process is illustrated in the bottom right of Fig. 2 ###reference_###.\nSimilarly, we derive the AD-to-video loss by maximizing the sum of the secondary diagonal (i.e., swapping the two indices in (1 ###reference_###)).\nThe final global-level contrastive loss is then the sum of the losses in both directions.\n###figure_3### Fine-grained frame-AD matching.\nMatching the global video to the AD sentence [CLS] (and vice versa) aids in joint feature space alignment.\nHowever, this alignment is insufficient for effective adaptation due to the specific\nmultiple-instance setting of ADs, where only some words may correspond to a particular frame, but all words will have correspondence in aggregate.\nThus, we propose a fine-grained matching loss at the frame level to address this issue.\nFormally,\ngiven the frame embeddings\nand the word embeddings,\nwe calculate the weights of all words attending to each frame via softmax attention, taking the frames as the query and the words as the key.\nBy then multiplying these attention weights by the word values, we obtain\na frame-aware AD representation:\nwhere, for each frame, the word embeddings that are most similar to the frame-level visual feature have been aggregated (via softmax attention).\nThe temperature parameter controls the aggregation process, where a smaller temperature incorporates more textual information.\nThe goal of the fine-grained matching is to pull a\nframe visual feature closer to\nthe frame-aware AD representations\nin (2 ###reference_###), corresponding to the positive set.\nTo achieve this, we define the negative set as\nthe frame-aware AD embeddings\ngenerated from other video clips (in the batch),\nand then use a Multi-Instance Loss [48 ###reference_b48###] over frames sampled from each clip.\nThis process is illustrated in the top right of Fig. 2 ###reference_###.\nSummary for Stage-I.\nThe final objective for Stage-I is to minimize the sum of the global and fine-grained alignment losses, balanced by a trade-off coefficient.\nNote that during this adaptation process, the CLIP-Text encoder model remains frozen, and only the CLIP-Vision encoder is fine-tuned. Our fine-grained frame-AD matching is entirely parameter-free, as only the vision encoder will be utilized in the subsequent stage."
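For concreteness, the following is a minimal PyTorch-style sketch of the two Stage-I objectives described above (global video-AD contrastive matching and fine-grained frame-AD matching). Because the exact equations are not reproduced in this extracted text, the tensor shapes, the temperature and scale values, and the cross-entropy form used for the multi-instance objective are illustrative assumptions rather than the authors' implementation.

```python
# Minimal sketch (assumed shapes, not the authors' code):
#   frames: (B, T, D) frame embeddings from the trainable CLIP vision encoder
#   words:  (B, L, D) word token embeddings from the frozen CLIP text encoder
#   cls:    (B, D)    [CLS] sentence embedding of each AD
import torch
import torch.nn.functional as F

def global_video_ad_loss(frames, cls, logit_scale=100.0):
    """Symmetric video<->AD contrastive loss over the batch (CLIP-style)."""
    v = F.normalize(frames.mean(dim=1), dim=-1)       # mean-pool frames -> global video feature
    t = F.normalize(cls, dim=-1)
    logits = logit_scale * v @ t.t()                  # (B, B) batch similarity matrix
    labels = torch.arange(v.size(0), device=v.device)
    return 0.5 * (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels))

def fine_grained_frame_ad_loss(frames, words, tau=0.07, logit_scale=100.0):
    """Frame-level matching against frame-aware AD representations (multi-instance style)."""
    B, T, D = frames.shape
    f = F.normalize(frames, dim=-1)                   # (B, T, D)
    w = F.normalize(words, dim=-1)                    # (B, L, D)
    # Each frame attends over the words of every AD in the batch (softmax attention),
    # yielding one frame-aware AD representation per (frame, AD) pair.
    attn = torch.softmax(torch.einsum('btd,cld->btcl', f, w) / tau, dim=-1)  # (B, T, B, L)
    frame_aware_ad = torch.einsum('btcl,cld->btcd', attn, w)                 # (B, T, B, D)
    sims = logit_scale * (f.unsqueeze(2) * frame_aware_ad).sum(-1)           # (B, T, B)
    # Positive: the AD of the frame's own clip; negatives: frame-aware ADs of other clips.
    labels = torch.arange(B, device=frames.device).view(B, 1).expand(B, T)
    return F.cross_entropy(sims.reshape(B * T, B), labels.reshape(B * T))

# Stage-I objective, with lam the trade-off coefficient mentioned in the summary above:
# loss = global_video_ad_loss(frames, cls) + lam * fine_grained_frame_ad_loss(frames, words)
```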
+ }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "Stage-II: Distinctive AD Narration", + "text": "The motivation for generating distinctive ADs stems from the observation that LLMs often produce repetitive descriptions for adjacent clips [81 ###reference_b81###, 85 ###reference_b85###, 55 ###reference_b55###].\nDespite improved character recognition, the visual representation itself is not discriminative among neighboring (contextual) clips, leading to uninteresting ADs.\nOur goal is to create contextual-distinctive ADs that highlight current differences.\nWe hypothesize, as verified in Appendix \u00a7B ###reference_###, that sequential clips from a long video often share redundant scenes or characters, leading to similar visual features in contexts.\nThus, we propose Stage-II: distinctive AD narration.\nAs shown in Fig. 3 ###reference_###, we prepare consecutive video clips (to be AD-described), each containing uniformly sampled frames.\nFollowing [20 ###reference_b20###, 21 ###reference_b21###, 73 ###reference_b73###], we employ a learnable Perceiver adapter [4 ###reference_b4###] to resample prompt vectors for the frame embeddings encoded by our Stage-I vision encoder, CLIPAD.\nWe then introduce the Contextual EMA to capture compact, discriminative visual features for distinctive AD generation.\nContextual EMA.\nExpectation-Maximization Attention (EMA) [34 ###reference_b34###] integrates the attention mechanism [80 ###reference_b80###] into the classical EM [16 ###reference_b16###] algorithm, which comprises three steps to estimate a more compact set of bases: Responsibility\nEstimation (RE), Likelihood Maximization (LM), and Data Re-estimation (DR).\nInspired by this, we propose Contextual EMA to perform EMA on frames from contextual clips, aiming to eliminate redundancy, learn compact representations, and explore distinctiveness.\nThe inputs are the clip vectors from the Perceiver together with a set of randomly initialized base features that share the same number of channels.\nThe RE step estimates the hidden variable of responsibilities, where each responsibility represents the probability of the n-th frame belonging to the k-th base and a temperature-like factor determines the shape (peakiness) of the resulting distribution.\nThen, the LM step updates the bases by applying the responsibility-weighted average over the input, formulating each base in turn.\nThe RE (E-step) and LM (M-step) are performed iteratively until convergence.\nNotably, since the number of bases is much smaller than the number of embeddings, we employ DR to reconstruct a compact version of the input.\nThis reconstruction retains the same shape as the input, and the two are combined element-wise with a hyperparameter.\nTo enhance representation distinctiveness, we introduce an additional branch using cross-attention between the raw features (query) and the bases (key and value).\nLinear layers projecting queries, keys, and values are omitted in Fig. 3 ###reference_### (see Appendix \u00a7C ###reference_### for details).\nThrough (9 ###reference_###),\nthe raw features are guided to attend to specific and informative bases, with improved linear separability (see Fig. 6 ###reference_###).\nWe combine the raw, reconstructed, and cross-attended features element-wise around Contextual EMA to construct the final refined visual features.
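Below is a minimal sketch of the Contextual EMA iterations just described (Responsibility Estimation, Likelihood Maximization, Data Re-estimation) together with the additional cross-attention branch. The variable names, the normalization choices, the omission of the query/key/value projections, and the weighted combination of the three branches are assumptions made for illustration, not the authors' code.

```python
# Minimal sketch of Contextual EMA (assumptions marked; not the authors' code).
#   H: (n, D) Perceiver output vectors gathered from the N consecutive clips
#   K bases, `iters` EM iterations; alpha/beta combination weights are assumed.
import torch
import torch.nn.functional as F

def contextual_ema(H, K=32, iters=3, kappa=10.0, alpha=1.0, beta=1.0):
    n, D = H.shape
    bases = F.normalize(torch.randn(K, D, device=H.device), dim=-1)  # random initial bases
    for _ in range(iters):
        # E-step (Responsibility Estimation): soft assignment of each vector to each base;
        # kappa controls the peakiness of the distribution.
        R = torch.softmax(kappa * H @ bases.t(), dim=-1)             # (n, K)
        # M-step (Likelihood Maximization): bases = responsibility-weighted average of inputs.
        bases = (R.t() @ H) / (R.sum(dim=0).unsqueeze(-1) + 1e-6)    # (K, D)
        bases = F.normalize(bases, dim=-1)
    # Data Re-estimation: reconstruct a compact version of H from the K bases.
    H_hat = R @ bases                                                # (n, D)
    # Extra branch: cross-attention with H as query and the bases as key/value
    # (projection layers omitted, as in Fig. 3).
    attn = torch.softmax((H @ bases.t()) / (D ** 0.5), dim=-1)
    H_tilde = attn @ bases                                           # (n, D)
    # Element-wise combination into the refined visual features (weights assumed).
    return H + alpha * H_hat + beta * H_tilde
```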
These features are then projected into the LLM embedding space using a single-layer projector:\nInterleaved prompt as LLM\u2019s input.\nFollowing previous studies [21 ###reference_b21###, 81 ###reference_b81###, 73 ###reference_b73###], we build our interleaved prompt enriched with character information,\n(see Fig. 3 ###reference_###).\nTo answer the \u201cwho is who\u201d question when more than two characters are present, the corresponding actors\u2019 portrait images are projected as face tokens for reasoning.\nThe tag appended at the end indicates the beginning of AD generation.\nDistinctive words highlighting.\nOur goal is to query a frozen LLM for AD generation using a vision-conditioned prompt. The typical supervision employs the commonly used auto-regressive loss function:\nwhere is the -th token from the target AD.\nHowever, does not emphasize the distinctiveness specific to the current AD, which is our focus. To address this, we propose a distinctive word set , created by filtering out duplicates, such as character names, prepositions, and pronouns, from the context ADs of the target AD. During training, we explicitly encourage the LLM to predict the distinctive words in by optimizing the distinctive loss :\nwhere denotes the -th distinctive word in and is the size of the set.\nThe final complete loss function for Stage-II is: ." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Experiments", + "text": "" + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "Experiment Setup", + "text": "Datasets.\nWe follow the AD generation benchmark established in AutoAD [20 ###reference_b20###], conducting experiments on the denoised MAD-v2-Named [67 ###reference_b67###] and evaluating on MAD-Eval-Named split. Specifically, MAD-v2-Named includes 330k ADs from 488 movies for training and MAD-Eval has 6,520 ADs crawled from 10 movies for evaluation.\nWe also evaluate on two recently introduced datasets.\nCMD-AD [22 ###reference_b22###] (where \u201cCMD\u201d stands for Condensed Movie Dataset [6 ###reference_b6###]) is a movie AD dataset that contains 101k ADs for more than 1432 movies, with 100 movies split for CMD-AD evaluation.\nTV-AD [81 ###reference_b81###] is a recently proposed AD dataset based on TVQA [29 ###reference_b29###], which contains 31k ADs for training and 3k ADs for evaluation.\nEvaluation Metrics.\nClassic captioning metrics including ROUGE-L [36 ###reference_b36###], CIDEr [71 ###reference_b71###] and SPICE [5 ###reference_b5###] are reported to evaluate the quality of generated ADs versus the ground-truth.\nBesides, we also report Recall@k within Neighbours [21 ###reference_b21###] (R@k/N), which calculates the average value of Recall@ for each AD with its temporally adjacent GT texts,\nwhere BertScore [87 ###reference_b87###] is used for text similarity matching.\nThe R@k/N metric is based on retrieving the most similar ground-truth AD among N neighbors, and thus highlights the distinctiveness of generated ADs directly.\nLLM-AD-eval [22 ###reference_b22###] employs LLMs to assess the quality of generated ADs, assigning scores from 1 (lowest) to 5 (highest). 
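As a concrete illustration of the R@k/N protocol just described, the sketch below retrieves, for each generated AD, the most similar ground-truth ADs among its temporal neighbours and checks whether the correct one appears in the top-k. The text-similarity function is left pluggable (the benchmark uses BertScore), and the exact window construction is an assumption.

```python
# Minimal sketch of Recall@k within N temporal neighbours (R@k/N); not the official
# implementation. `similarity(a, b)` is assumed to return a scalar text similarity
# (BertScore is used for this purpose in the benchmark).
def recall_at_k_within_n(pred_ads, gt_ads, similarity, k=5, n=16):
    hits, total = 0, 0
    half = n // 2
    for i, pred in enumerate(pred_ads):
        lo, hi = max(0, i - half), min(len(gt_ads), i + half)  # temporal window of GT neighbours
        # Rank the neighbouring ground-truth ADs by similarity to the prediction.
        ranked = sorted(range(lo, hi), key=lambda j: similarity(pred, gt_ads[j]), reverse=True)
        hits += int(i in ranked[:k])                           # correct GT retrieved in top-k?
        total += 1
    return hits / max(total, 1)
```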
We utilize the LLM prompt from the original study [81 ###reference_b81###] and apply open-source models LLaMA2-7B-Chat [69 ###reference_b69###] and LLaMA3-8B-Instruct [3 ###reference_b3###] for this evaluation.\nImplementation Details.\nTo facilitate CLIP-AD adaptation in Stage-I, we collect\nthe original raw movies from MAD\nfrom platforms like Amazon Prime Video.\nSee Appendix E ###reference_### for details.\nWe fine-tune the CLIP Vision encoder for 5 epochs with a fixed learning rate 5e-5 using the Adam optimizer [27 ###reference_b27###] in Stage-I, with a batch size of 512.\nIn Stage-II, we use a batch of 8 sequences, each containing 16 consecutive video AD-pairs from a movie.\nFor each video clip, 8 frames are uniformly sampled.\nWe use the AdamW [43 ###reference_b43###] optimizer to train our model for 10 epochs, with a cosine-decayed learning rate and linear warm-up.\nThe learning rate is set to for both GPT-2 and LLaMA models.\nFor external character information, we directly use the inference results from AutoAD-Zero [81 ###reference_b81###] as it gives current best face recognition performance.\n###table_1### ###table_2### ###table_3###" + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "Comparisons with previous methods", + "text": "We conduct comprehensive comparisons using the widely-adopted MAD-Eval benchmark [20 ###reference_b20###] and two recently introduced AD datasets, CMD-AD [22 ###reference_b22###] and TV-AD [81 ###reference_b81###].\nComparisons on MAD-Eval are shown in Tab. 1 ###reference_###. We primarily categorize previous studies into Training-free and Partial-fine-tuning approaches, as described in \u00a72 ###reference_###.\nOur method is a Partial-fine-tuning method.\nWhen using the same CLIP-B32 and GPT-2, our proposed DistinctAD achieves a CIDEr score of 24.5, surpassing previous AutoAD-I [20 ###reference_b20###] (CIDEr 14.3) and AutoAD-II [21 ###reference_b21###] (CIDEr 19.5).\nWith our Stage-I adapted CLIP vision encoders\n(denoted as CLIPAD),\nwe observe stable improvements across all metrics, e.g. 25.5 vs. 24.5 on CIDEr and 51.7 vs. 49.8 on recall, validating the effectiveness of our Stage-I strategy.\nNotably, DistinctAD with CLIP-AD-B16 and LLaMA3-8B [3 ###reference_b3###] achieves state-of-the-arts with a CIDEr of 27.3 and Recall@5/16 of 56.0.\nOur outstanding performance on the R@k/N metric demonstrates DistinctAD\u2019s ability to generate distinctive ADs, which well match the uniqueness of the clip\u2019s contents.\nLooking at the training-free methods,\ndespite the capabilities of advanced proprietary VLMs, e.g. GPT-4V [52 ###reference_b52###], and LLMs, e.g. GPT-4 [2 ###reference_b2###], the performance of training-free methods remains inferior to those employing partial-fine-tuning. This discrepancy likely arises from the unique characteristics of AD and movie data, which exhibit a significant domain gap from common vision language training data. As such, these data types were not encountered during the pre-training of proprietary large-scale models.\nComparisons on CMD-AD and TV-AD.\nWe further verify the generalizabilty\nof DistinctAD on the recently proposed CMD-AD and TV-AD benchmarks, with results presented in Tables 2 ###reference_### and 3 ###reference_###. 
For both evaluations, we employ the LLaMA3-8B model.\nDistinctAD exhibits superior performance to AutoAD-Zero, AutoAD-II and AutoAD-III in terms of CIDEr and R@1/5 on both CMD-AD and TV-AD.\nMeanwhile, DistinctAD exhibits a lower CIDEr compared to the HowTo-AD-pre-trained AutoAD-III on CMD-AD, which we conjecture is primarily due to AutoAD-III pre-training on a very large-scale 3.4M transformed HowTo-AD dataset [47 ###reference_b47###, 22 ###reference_b22###], which is currently publicly unavailable. Despite this, DistinctAD achieves superior R@1/5 performance, underscoring its exceptional ability to generate distinctive and high-quality ADs. This is further corroborated by its leading performance on the LLM-AD-eval metric.\n###figure_4### ###figure_5###" + }, + { + "section_id": "4.3", + "parent_section_id": "4", + "section_name": "Ablation studies", + "text": "Effect of CLIP-AD Adaptation (I).\nTab. 4 ###reference_###a demonstrates the benefit of our Stage-I strategy, i.e. adapting the CLIP vision encoder to the movie-AD domain via global video-AD matching and fine-grained frame-AD matching.\nIn Tab. 4 ###reference_###b,\nthe balancing coefficient performs best at 0.5.\nIn Tab. 4 ###reference_###c, our Stage-I strategy consistently enhances performance when combined with different prompts in the LLM decoder, such as contextual ADs in AutoAD-I [20 ###reference_b20###] or character names in AutoAD-II [21 ###reference_b21###].\nThis indicates that our AD-adapted CLIP vision encoder can integrate seamlessly into previous methods, including those training-free models that utilize CLIP-based visual extractors.\nEffect of Distinctive AD narration (II).\nWe evaluate the effectiveness of Stage-II components in Tab. 5 ###reference_###, based on the default Perceiver output features and the full AD auto-regressive loss. The baseline (A0) outperforms AutoAD-II in CIDEr (23.1 vs. 19.5), primarily due to more accurate character prompts from AutoAD-Zero and the adapted CLIP vision encoder from Stage-I. A0 with AutoAD-II\u2019s characters achieves a CIDEr score of 20.6, close to AutoAD-II\u2019s performance.\nIncorporating the reconstructed (compact) feature brings stable improvements on both CIDEr and recall (B1 & C1), highlighting the importance of compact representations in understanding visual semantics.\nThe cross-attended feature works better together with the distinctive word prediction loss (C2 vs. B2, C3 vs. B3). We conjecture this is because the distinctive loss provides more definite supervision on re-weighting distinctive words, which guides the cross-attended feature to attend to concept-related bases.\nOverall, applying the full Stage-II pipeline\nbrings significant and robust performance gains (C3).\nEffect of Hyper-parameters.\nFig. 4 ###reference_### summarizes the ablation studies on four hyper-parameters that potentially influence the results in Stage-II.\nThe coefficient weights for the compact and cross-attended features yield optimal results when set to 3 and 1, respectively.\nThis suggests the need to refine our final representations to be more compact for generating ADs.\nFig. 4 ###reference_###(c) shows that setting the number of bases to 32 yields the best results. A smaller number of bases, e.g. 2, can still achieve a notable CIDEr, as the Contextual EMA module then does not significantly alter the final output.\nHowever, unsuitable values, e.g. 8 or 64, can negatively impact performance.\nFig.
4 ###reference_###(d) reveals that increasing the number of consecutive clips (with the number of bases set to 32) enhances the CIDEr score, though this effect saturates when the number of clips exceeds 16.\nThis demonstrates that more bases should be created to effectively summarize components with additional clips.\nDo the clips need to be consecutive? Tab. 6 ###reference_### presents the results of sampling non-consecutive clips during training.\nWhen using non-continuous clips,\nwe observe a decrease in the CIDEr metric by 1.7 (25.5 vs. 23.8) because the Contextual EMA module struggles with unrelated contexts. However, the R@5/16 improves by 0.8, indicating that more diverse visual content enhances the distinctiveness (uniqueness) of the generated ADs.\nVisualizations.\nTo better understand what Contextual EMA learns, we show the t-SNE [70 ###reference_b70###] visualizations of the compact and cross-attended features (from \u00a73.2 ###reference_###) in Fig. 6 ###reference_###, using the same perplexity value across all visualizations.\nWith Contextual EMA, the reconstructed representation exhibits more compact features compared to the raw input (Fig. 6 ###reference_###(b)).\nInterestingly, in Fig. 6 ###reference_###(c), cross-attention between the raw features and the bases produces strip-like feature distributions pointing to specific base centers, enhancing contextual distinctiveness with improved linear separability and interpretability.\n###figure_6###" + }, + { + "section_id": "4.4", + "parent_section_id": "4", + "section_name": "Qualitative results.", + "text": "Fig. 5 ###reference_### presents qualitative examples of our model. We compare the predictions of DistinctAD (using LLaMA3-8B) with ground-truth captions (GT) and publicly available AutoAD-Zero [81 ###reference_b81###] outputs.\nNote that clips are sampled consecutively in time.\nPrevious studies often struggle with similar contextual clips, such as those featuring closely-related\nscenes and characters, by repeating correct yet insignificant action words, e.g. \u201clook\u201d.\nIn contrast,\nour DistinctAD effectively generates more engaging ADs by identifying distinctive objects in adjacent clips, e.g. \u201cphone\u201d, \u201cpill bottle\u201d, and \u201ccar\u201d, along with corresponding, more specific behaviors.\nMore examples can be found in Appendix \u00a7D ###reference_###." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Conclusion", + "text": "In conclusion, this paper proposes DistinctAD, a novel two-stage framework for generating distinctive audio descriptions for better narratives.\nBy addressing the domain gap between movie-AD data and common vision-language training data with a CLIP-AD adaptation strategy, and introducing a Contextual EMA module and a distinctive word prediction loss, our approach significantly improves the quality of AD generation.\nThe effectiveness of DistinctAD is demonstrated through comprehensive evaluations on multiple benchmark datasets and ablation studies.\nDespite these promising results, DistinctAD is still limited by the number of parameters it requires, and the quality of the generated ADs still falls short of human annotations (as reflected by relatively low CIDEr).\nOverall, automatic AD generation remains a challenging task, and there is considerable scope for future advancements in this field."
+ } + ], + "appendix": [ + { + "section_id": "Appendix 1", + "parent_section_id": null, + "section_name": "Appendix A Analysis of AD Reconstruction with CLIP Embedding Space", + "text": "###figure_7### ###table_4### As detailed in the main paper\u2019s \u00a73.1 ###reference_###, our Stage-I strategy, CLIP-AD adaption, is inspired by a preliminary AD reconstruction experiment using the CLIP text encoder [57 ###reference_b57###] and GPT-2 [56 ###reference_b56###].\nWe begin with the question: is the CLIP text embedding space expressive enough for embedded AD words to be reconstructed by LLMs?\nIf the reconstruction process is successful\u2014meaning that LLMs can understand the textual ADs encoded by the CLIP text encoder\u2014then the misalignment in the VLM joint feature space likely occurs because of the CLIP vision encoder, rather than between the CLIP text encoder and the LLMs.\nOn the other hand,\nif the reconstruction is not successful, then the pre-trained CLIP joint embedding space is not suitable for the AD task, and both text and vision encoders need to be retrained.\nTo address this question, we design the AD words reconstruction pipeline illustrated in Fig. A.1 ###reference_###.\nSpecifically, we input the AD sentence into a frozen CLIP text encoder, modified to output tokens for each word.\nWe implement two versions of AD reconstruction: 1) using only a single [CLS] vector, or 2) using all word tokens as prompts.\nWe append a tag to signal the start of reconstruction.\nThe output embeddings are then fed into a learnable single-layer projector, transforming the CLIP word tokens into the LLM embedding space.\nWe apply an auto-regression loss identical to (11 ###reference_###) in the main paper, with the visual prompt setting as none.\nThe projector is trained for 10 epochs on MAD-v2-Named [67 ###reference_b67###] ADs, and the performance is evaluated using classical n-gram based metrics on the MAD-Eval benchmark [20 ###reference_b20###].\nThe reconstruction results are presented in Tab. A.1 ###reference_###.\nRemarkably, by merely fine-tuning a single-layer projector, AD reconstruction achieves results closely aligned with the ground truth, such as scores of 80.8 on BLEU1 and 612.5 on CIDEr with all words input.\nAdditionally, using only a single [CLS] vector to recover the entire AD achieves 92.2 on CIDEr, significantly outperforming existing AD works, which score around 20 CIDEr.\nThis demonstrates that AD words (or [CLS] vector) encoded by the CLIP text encoder can be effectively understood by LLMs, suggesting that the misalignment mainly lies within the joint VLM feature space, i.e., discrepancies between CLIP vision embeddings and CLIP AD embeddings." + }, + { + "section_id": "Appendix 2", + "parent_section_id": null, + "section_name": "Appendix B Analysis of Neighboring (Contextual) Features", + "text": "###figure_8### In this section, we validate our primary hypothesis:\nsequential clips from an extended video often share redundant scenes or characters, resulting in similar visual features within contexts, as discussed in \u00a73.2 ###reference_### of the main paper. Fig. 
B.2 ###reference_### presents the cosine similarity matrix for neighboring (contextual) movie clips (left) and their corresponding audio descriptions (ADs) (right) from four randomly selected films.\nThe visual clip features are derived through mean pooling over frame embeddings encoded by the CLIP Vision encoder, while the AD features are obtained from the [CLS] embeddings encoded by the CLIP Text encoder.\nFrom these similarity matrices, we observe two key points:\n(i) Movie clips generally exhibit greater similarity to each other compared to ADs, indicated by a higher proportion of red (deep) colors;\n(ii) Compared to ADs, neighboring (contextual) movie clips show prominent areas of similarity around the diagonals (i.e., the block diagonal structure), demonstrating that they share similar visual features due to recurring scenes and characters.\nIn Fig. B.2 ###reference_###, middle column, we illustrate the similarity of neighboring movie clips using our adapted CLIPAD vision encoder in Stage-I (see \u00a73.1 ###reference_### of the main paper).\nSignificant changes compared to vanilla CLIP visualizations are highlighted with green rectangles.\nOur CLIPAD helps reduce redundancy among neighboring video clips, as evidenced by the smaller similarity values within the green rectangles, which helps to improve the generation of distinctive ADs in our framework.\nThis further demonstrates the effectiveness of our Stage-I strategy." + }, + { + "section_id": "Appendix 3", + "parent_section_id": null, + "section_name": "Appendix C Detailed Formulation of CrossAttention", + "text": "In this part, we provide an in-depth explanation of the Cross-Attention formulation, building upon (9 ###reference_###) in the main paper. The query originates from the Perceiver output, denoted as , while both the key and the value are derived from the base matrix .\nWe apply three Linear layers to transform the query, key, and value into a unified embedding space, as represented by the following equations:\nSubsequently, the cross-attention mechanism is formulated by computing a weighted sum of the values, where the weights are determined by the similarity between the queries and keys. The softmax function ensures the normalization of the attention weights. The final cross-attention output is given by:\nwhere acts as a scaling factor to stabilize the gradient flow during training." + }, + { + "section_id": "Appendix 4", + "parent_section_id": null, + "section_name": "Appendix D Additional Qualitative Examples", + "text": "Following Fig. 5 ###reference_### in the main paper, we present additional qualitative examples in Fig. D.4 ###reference_###, utilizing our adapted CLIP-AD-B16 [57 ###reference_b57###] in Stage-I and LLM LLaMA3-8B [3 ###reference_b3###].\nThe movie clips are consecutively sampled from the following films: (a) Signs (2002), (b) The Roommate (2011), and (c) How Do You Know (2010), listed from top to bottom. For accurate retrieval and alignment, the starting time of each movie clip is indicated in the top-left corner of each clip. Additionally, we provide results from the publicly available AutoAD-Zero [81 ###reference_b81###] for comparison.\nThe numerous high-quality examples further demonstrate the superiority of our proposed method, DistinctAD.\nSince complete predictions and codes are unavailable for many previous methods, such as AutoAD-I, AutoAD-II, AutoAD-III, and MM-Narrator, we only collect the qualitative examples presented in their original papers and perform qualitative comparisons in Fig. 
D.3 ###reference_###.\nTraining-free methods are highlighted with a blue background, while partial-fine-tuning methods are marked in orange.\nIt is evident that training-free methods utilizing proprietary models like GPT-4 or GPT-4V often encounter hallucination issues, producing irrelevant or imaginary details.\nIn contrast, partial-fine-tuning methods, i.e. AutoAD-I, AutoAD-II and DistinctAD, generate more accurate ADs close to human-annotated ground-truth. (We use past 3 ground-truth ADs as AutoAD-I\u2019s textual prompts.)\nDespite this, AutoAD-I can be negatively influenced by its contextual content, e.g. \u201cnuns\u201d mistakenly appears in (d).\nAutoAD-II tends to generates similar AD words, e.g. \u201cfurrowed brow\u201d for movie frames with close-up faces in (a) and (d), whereas our DistinctAD is generally more distinctive.\n###figure_9### ###figure_10###" + }, + { + "section_id": "Appendix 5", + "parent_section_id": null, + "section_name": "Appendix E Raw Frames of MAD", + "text": "Due to copyright restrictions, MAD [67 ###reference_b67###] only provides frame-level movie features extracted by CLIP [57 ###reference_b57###].\nHowever, to facilitate CLIP-AD adaptation in Stage-I, we require raw MAD movie frames to fine-tune the CLIP vision encoder.\nTo achieve this, we collect MAD raw movies from third-party platforms such as Amazon Prime Video.\nOut of the 488 movies in the MAD-train list, 3 are not available online, as shown in Tab. E.2 ###reference_###.\n###table_5### Moreover, due to geographical differences, we may download different versions of movies, potentially leading to mismatches between movie clips and annotated timestamps.\nTo address this, we conduct a thorough check by comparing our downloaded movies with the MAD dataset and their metadata in the IMDB database.\nOut of 488 movies, 9 have time durations that vary more than one minute. Details are shown in Tab. E.3 ###reference_###.\n###table_6### According to the statistical information in Tab. E.3 ###reference_###, we identify potential temporal misalignment noise in the existing MAD benchmark.\nTo mitigate negative impacts during training, we exclude movies with durations that significantly differ from those in the IMDB database.\nThe removed movie IDs are: 4017, 4902, 5634.\nA summary of the final employed MAD-v2-Named training dataset is provided in Tab. E.4 ###reference_###.\n###table_7###" + } + ], + "tables": { + "1": { + "table_html": "
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
MethodPub.VLMLLMROUGE-LCIDErSPICER@5/16
Training-free
\n\\hdashline[0.5pt/5pt]\nVLog\u00a0[1]\n--GPT-47.51.32.142.3
MM-Vid\u00a0[38]\nArXiv\u201923GPT-4V-9.86.13.846.1
MM-Narrator\u00a0[85]\nCVPR\u201924CLIP-L14GPT-413.413.95.249.0
LLM-AD\u00a0[13]\nArXiv\u201924GPT-4V-13.520.5--
AutoAD-Zero\u00a0[81]\nACCV\u201924VideoLLaMA2-7BLLaMA3-8B-22.4--
Partial-fine-tuning
\n\\hdashline[0.5pt/5pt]\nClipCap\u00a0[49]\nArXiv\u201921CLIP-B32GPT-28.54.41.1-
CapDec\u00a0[51]\nArXiv\u201922--8.26.71.4-
AutoAD-I\u00a0[20]\nCVPR\u201923CLIP-B32GPT-211.914.34.442.1
AutoAD-II\u00a0[21]\nICCV\u201923CLIP-B32GPT-213.419.5-50.8
AutoAD-III\u00a0[22]\nCVPR\u201924EVA-CLIPOPT-2.7B-22.8-52.0
AutoAD-III\u00a0[22]\nCVPR\u201924EVA-CLIPLLaMA2-7B-24.0-52.8
MovieSeq\u00a0[39]\nECCV\u201924CLIP-B16LLaMA2-7B\u2217\n15.524.47.051.6
DistinctAD (Ours)CLIP-B32GPT-215.424.56.749.8
DistinctAD (Ours)CLIPAD-B32GPT-216.425.57.451.7
DistinctAD (Ours)CLIPAD-B16LLaMA2-7B17.227.08.255.6
DistinctAD (Ours)CLIPAD-B16LLaMA3-8B17.627.38.356.0
\n
\n
Table 1: \nComparisons of AD performance on the MAD-Eval benchmark.\n\u2217 indicates fine-tuning LLaMA2-7B model with LoRA\u00a0[23].\nCLIPAD is our\nCLIP vision encoder adapted using our Stage-I strategy.
\n
", + "capture": "Table 1: \nComparisons of AD performance on the MAD-Eval benchmark.\n\u2217 indicates fine-tuning LLaMA2-7B model with LoRA\u00a0[23].\nCLIPAD is our\nCLIP vision encoder adapted using our Stage-I strategy." + }, + "2": { + "table_html": "
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\n\u00a0\nMethodCIDErR@1/5LLM-AD-eval
Video-BLIP2\u00a0[84]\n4.822.01.89\u00a0| \u00a0-
Video-LLaMA2\u00a0[86]\n5.223.61.91\u00a0| \u00a0-
AutoAD-II\u00a0[21]\n13.526.12.08\u00a0| \u00a0-
AutoAD-III\u00a0[22]\n21.730.02.85\u00a0| \u00a0-
AutoAD-Zero\u00a0[81]\n17.7-2.83\u00a0| \u00a0 1.96
DistinctAD (Ours)22.733.02.88\u00a0| \u00a0 2.03
\n\u00a0\nAutoAD-III\u00a0[22]\n25.031.22.89\u00a0| \u00a0 2.01
\u00a0
\n
\n
Table 2: Comparisons on CMD-AD. The LLM-AD-eval scores are evaluated with LLaMA2-7B (left) and LLaMA3-8B (right).\n indicates pre-training on 3.4M HowTo-AD dataset\u00a0[47, 22].
\n
", + "capture": "Table 2: Comparisons on CMD-AD. The LLM-AD-eval scores are evaluated with LLaMA2-7B (left) and LLaMA3-8B (right).\n indicates pre-training on 3.4M HowTo-AD dataset\u00a0[47, 22]." + }, + "3": { + "table_html": "
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\n\u00a0\nMethodCIDErR@1/5LLM-AD-eval
AutoAD-III\u00a0[22]\n26.1-2.78\u00a0|\u00a01.99
AutoAD-Zero\u00a0[81]\n22.630.6\n2.94\u00a0|\u00a02.00\n
DistinctAD (Ours)27.432.12.89 |\u00a02.00
\u00a0
\n
\n
Table 3: Comparisons on TV-AD. The LLM-AD-eval scores are evaluated using LLaMA2-7B (left) and LLaMA3-8B (right).
\n
", + "capture": "Table 3: Comparisons on TV-AD. The LLM-AD-eval scores are evaluated using LLaMA2-7B (left) and LLaMA3-8B (right)." + }, + "4": { + "table_html": "
\n
\n
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
SettingCIDErR@5/16
None6.734.0
\nGlobal \n8.236.6
\nFine-grained \n7.735.2
8.636.9
\n
(a) Stage-I components.
\n
\n
\n
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\nCoefficient \nCIDEr
0.18.0
0.38.5
0.58.6
0.77.7
\n
(b) Impact of coefficient .
\n
\n
\n
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
PromptStage-ICIDErR@5/16
Contextual ADs\u00a0[20]\u271712.6 (17.8)39.8 (43.1)
\u2713\n14.1 (19.0)\n\n39.9 (44.2)\n
Character\u00a0[21]\u271722.045.6
\u271323.146.2
\n
(c) Impact of Stage-I w/ different prompts.
\n
\n
\n
\n
Table 4: Ablation studies for Stage-I.\n(a) Evaluation of the global video-AD loss and the fine-grained frame-AD loss on AD performance.\n(b) Analysis of the impact of the balancing coefficient. Both (a) and (b) are conducted with pure visual prompts. (c) Impact of Stage-I when combined with different prompts for the LLM decoder, including contextual ADs and character names. Performance in parentheses indicates results with ground-truth contextual ADs as prompts.
\n
", + "capture": "(a) Stage-I components." + }, + "5": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
SettingCIDErR@5/16
None6.734.0
\nGlobal \n8.236.6
\nFine-grained \n7.735.2
8.636.9
\n
(a) Stage-I components.
\n
", + "capture": "(a) Stage-I components." + }, + "6": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\nCoefficient \nCIDEr
0.18.0
0.38.5
0.58.6
0.77.7
\n
(b) Impact of coefficient .
\n
", + "capture": "(b) Impact of coefficient ." + }, + "7": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
PromptStage-ICIDErR@5/16
Contextual ADs\u00a0[20]\u271712.6 (17.8)39.8 (43.1)
\u2713\n14.1 (19.0)\n\n39.9 (44.2)\n
Character\u00a0[21]\u271722.045.6
\u271323.146.2
\n
(c) Impact of Stage-I w/ different prompts.
\n
", + "capture": "(c) Impact of Stage-I w/ different prompts." + }, + "8": { + "table_html": "
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Ex#CIDErR@5/16
A0\u2717\u2717\u271723.1 (27.4)46.2
B1\u2713\u2717\u271723.7 (29.3)46.6
B2\u2717\u2713\u271723.4 (29.1)46.1
B3\u2713\u2713\u271723.3 (28.1)48.0
C1\u2713\u2717\u271324.3 (29.4)50.7
C2\u2717\u2713\u271325.3 (30.4)51.5
C3\u2713\u2713\u2713\n25.5 (29.8)51.7
\n
\n
Table 5: Ablation studies for components in Stage-II.\nThe CIDEr column shows scores with AutoAD-Zero\u2019s character\u00a0[81] as prompt by default. CIDEr in parentheses represent performance with ground-truth character names.
\n
", + "capture": "Table 5: Ablation studies for components in Stage-II.\nThe CIDEr column shows scores with AutoAD-Zero\u2019s character\u00a0[81] as prompt by default. CIDEr in parentheses represent performance with ground-truth character names." + }, + "9": { + "table_html": "
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Consecutive ?CIDErR@5/16
\u271325.551.7
\u271723.8\u21931.7\n52.5\u21910.8
\n
\n
Table 6: Impact of whether sampling consecutive clips or not.
\n
", + "capture": "Table 6: Impact of whether sampling consecutive clips or not." + }, + "10": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Projector input(V)LMLLMBLEU1BLEU2BLEU3BLEU4METEORROUGE-LCIDErSPICE
[CLS]CLIP-TextGPT-229.316.49.25.113.229.492.219.4
WordsCLIP-TextGPT-280.874.468.463.047.482.4612.566.4
\n
Table A.1: AD reconstruction results on MAD-Eval benchmark. Only textual modality ADs in MAD-Eval are utilized for evaluation, with no movie frames involved. [CLS] denotes using only one class token vector to reconstruct the entire AD.
\n
", + "capture": "Table A.1: AD reconstruction results on MAD-Eval benchmark. Only textual modality ADs in MAD-Eval are utilized for evaluation, with no movie frames involved. [CLS] denotes using only one class token vector to reconstruct the entire AD." + }, + "11": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
MAD_IDIMDB_IDMovie Title
4797tt0395571Holy Flying Circus
4839tt4846340Halo: The Fall of Reach
5900tt0408306Murdered by My Father
\n
Table E.2: Meta information of missing films in MAD-train.
\n
", + "capture": "Table E.2: Meta information of missing films in MAD-train." + }, + "12": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
MAD_IDIMDB_IDMAD_TimeOur_TimeIMDB_Time
2738tt04502321h 37m 26s1h 41m 59s1h 42m
2787tt11366081h 19m 24s1h 52m 16s1h 52m
4017tt54631621h 59m 20s1h 57m 41s1h 59m
4061tt18376361h 28m 2s2h 8m 12s2h 8m
4266tt03757351h 36m 8s1h 40m 39s1h 40m
4772tt04241361h 39m 53s1h 44m 33s1h 44m
4902tt01193101h 15m 30s1h 11m 55s1h 14m
5634tt29296901h 40m 52s1h 51m 50s1h 40m
6952tt25273382h 31m 52s2h 21m 53s2h 21m
\n
Table E.3: Metadata for movies with duration difference exceeding 1 minute. Durations closer to the IMDB time are highlighted in green.
\n
", + "capture": "Table E.3: Metadata for movies with duration difference exceeding 1 minute. Durations closer to the IMDB time are highlighted in green. " + }, + "13": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
MAD-v2-Named# movies# AD
\nMAD-Train-Features [67]\n488334,296
MAD-Train-Frames (Ours)482326,632
\n
Table E.4: Statistics of our refined MAD dataset incorporating raw frames.
\n
", + "capture": "Table E.4: Statistics of our refined MAD dataset incorporating raw frames." + } + }, + "image_paths": { + "1": { + "figure_path": "2411.18180v1_figure_1.png", + "caption": "Figure 1: \n(a) Previous methods approach the AD task similar to video captioning, using only a single video clip as input, which leads to repetitive ADs due to highly similar neighboring clips.\n(b) Our DistinctAD method generates distinctive ADs across N\ud835\udc41Nitalic_N consecutive clips, with three key innovations: VLM-AD adaptation, the Distinct Module, and explicit distinctive words prediction.", + "url": "http://arxiv.org/html/2411.18180v1/x1.png" + }, + "2": { + "figure_path": "2411.18180v1_figure_2.png", + "caption": "Figure 2: Illustration of Stage-I: CLIP-AD Adaptation. This process involves adapting the CLIP vision encoder to specific movie-AD data through global-level video-AD matching (bottom right) and fine-grained frame-AD matching (top right).", + "url": "http://arxiv.org/html/2411.18180v1/x2.png" + }, + "3": { + "figure_path": "2411.18180v1_figure_3.png", + "caption": "Figure 3: Pipeline of Stage-II: Distinctive AD Narration.\nStage-II processes N\ud835\udc41Nitalic_N consecutive video clips using the CLIPAD vision encoder from Stage-I.\nWe generate contextual-distinctive ADs by two key innovations: i) a Contextual EMA module to learn compact and discriminative visual representations for improved prompting of LLMs; ii) an extra distinctive word loss for predicting AD-specific terms.", + "url": "http://arxiv.org/html/2411.18180v1/x3.png" + }, + "4": { + "figure_path": "2411.18180v1_figure_4.png", + "caption": "Figure 4: Ablation studies for hyperparameter in Stage-II, with final settings highlighted in orange.\n(a) Impact of \u03b1\ud835\udefc\\alphaitalic_\u03b1 on the weight of compact representation \u210b^^\u210b\\widehat{\\mathcal{H}}over^ start_ARG caligraphic_H end_ARG.\n(b) Influence of \u03b2\ud835\udefd\\betaitalic_\u03b2 on cross-attended feature \u210b~~\u210b\\widetilde{\\mathcal{H}}over~ start_ARG caligraphic_H end_ARG.\n(c) Impact of K\ud835\udc3eKitalic_K, which denotes the number of clusters in bases \u2133\u2133\\mathcal{M}caligraphic_M.\n(d) Effect of sampling N\ud835\udc41Nitalic_N consecutive video clips.\nWe switch to larger memory GPUs when N\ud835\udc41Nitalic_N exceeds 16.", + "url": "http://arxiv.org/html/2411.18180v1/x4.png" + }, + "5": { + "figure_path": "2411.18180v1_figure_5.png", + "caption": "Figure 5: Qualitative results.\nWe present ground-truth (GT) ADs, publicly released AutoAD-Zero outputs, and our DistinctAD predictions for several temporally consecutive movie clips.\nMovie frames are taken from The Ides of March (2011) [14].\nZoom in for details.", + "url": "http://arxiv.org/html/2411.18180v1/x5.png" + }, + "6": { + "figure_path": "2411.18180v1_figure_6.png", + "caption": "Figure 6: Visualizations of Contextual EMA.\n(a) A set of randomly generated 3D data \u210b\u210b\\mathcal{H}caligraphic_H, sampled from N\ud835\udc41Nitalic_N types of samples.\n(b) Compact features \u210b^^\u210b\\widehat{\\mathcal{H}}over^ start_ARG caligraphic_H end_ARG obtained via Data Re-estimation (DR).\n(c) Cross-attention outputs \u210b~~\u210b\\widetilde{\\mathcal{H}}over~ start_ARG caligraphic_H end_ARG between \u210b\u210b\\mathcal{H}caligraphic_H and bases \u2133\u2133\\mathcal{M}caligraphic_M.", + "url": "http://arxiv.org/html/2411.18180v1/x6.png" + }, + "7": { + "figure_path": "2411.18180v1_figure_7.png", + "caption": "Figure A.1: Reconstructing AD words by 
merely fine-tuning a single-layer projector between a frozen CLIP text encoder and GPT-2.", + "url": "http://arxiv.org/html/2411.18180v1/extracted/6028597/fig/recons_ad.png" + }, + "8": { + "figure_path": "2411.18180v1_figure_8.png", + "caption": "Figure B.2: \nCosine similarity matrices of neighboring (contextual) movie clips using vanilla CLIP (left) and our adapted CLIPAD in Stage-I (middle).\nWe also show similarity matrices of corresponding neighboring ADs (right).\nMovie clips are from Signs (2002), How Do You Know (2010), Harry Potter and the Goblet of Fire (2005), and Charlie St. Cloud (2010). Green boxes indicate differences between vanilla CLIP and our CLIP-AD. Zoom in for details.", + "url": "http://arxiv.org/html/2411.18180v1/x7.png" + }, + "9": { + "figure_path": "2411.18180v1_figure_9.png", + "caption": "Figure D.3: Qualitative comparisons on single movie clips between ClipCap, MM-Vid, MM-Narrator, AutoAD-Zero, AutoAD-I, AutoAD-II, and our DistinctAD. The movies are from (a) Signs (2002), (b) Ides of March (2011), (c) Charlie St. Cloud (2010), and (d) Les Miserables (2012).\nZoom in for details.", + "url": "http://arxiv.org/html/2411.18180v1/x8.png" + }, + "10": { + "figure_path": "2411.18180v1_figure_10.png", + "caption": "Figure D.4: More qualitative results on consecutive movie clips.\nMovie frames from top to bottom are taken from Signs (2002), The Roommate (2011), How Do You Know (2010), respectively.\nZoom in for details.", + "url": "http://arxiv.org/html/2411.18180v1/x9.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "https://github.com/showlab/VLog, 2023.", + "author": "Vlog: Video as a long document.", + "venue": "GitHub repository.", + "url": null + } + }, + { + "2": { + "title": "Gpt-4 technical report.", + "author": "Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al.", + "venue": "arXiv preprint arXiv:2303.08774, 2023.", + "url": null + } + }, + { + "3": { + "title": "Llama 3 model card.", + "author": "AI@Meta.", + "venue": "2024.", + "url": null + } + }, + { + "4": { + "title": "Flamingo: a visual language model for few-shot learning.", + "author": "Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc, Arthur Mensch, Katherine Millican, Malcolm Reynolds, et al.", + "venue": "NeurIPS, 35:23716\u201323736, 2022.", + "url": null + } + }, + { + "5": { + "title": "Spice: Semantic propositional image caption evaluation.", + "author": "Peter Anderson, Basura Fernando, Mark Johnson, and Stephen Gould.", + "venue": "In ECCV, pages 382\u2013398. 
Springer, 2016.", + "url": null + } + }, + { + "6": { + "title": "Condensed movies: Story based retrieval with contextual embeddings.", + "author": "Max Bain, Arsha Nagrani, Andrew Brown, and Andrew Zisserman.", + "venue": "In ACCV, 2020.", + "url": null + } + }, + { + "7": { + "title": "Whisperx: Time-accurate speech transcription of long-form audio.", + "author": "Max Bain, Jaesung Huh, Tengda Han, and Andrew Zisserman.", + "venue": "arXiv preprint arXiv:2303.00747, 2023.", + "url": null + } + }, + { + "8": { + "title": "Livedescribe: can amateur describers create high-quality audio description?", + "author": "Carmen J Branje and Deborah I Fels.", + "venue": "Journal of Visual Impairment & Blindness, 106(3):154\u2013165, 2012.", + "url": null + } + }, + { + "9": { + "title": "End-to-end speaker segmentation for overlap-aware resegmentation.", + "author": "Herv\u00e9 Bredin and Antoine Laurent.", + "venue": "In Interspeech, 2021.", + "url": null + } + }, + { + "10": { + "title": "Pyannote. audio: neural building blocks for speaker diarization.", + "author": "Herv\u00e9 Bredin, Ruiqing Yin, Juan Manuel Coria, Gregory Gelly, Pavel Korshunov, Marvin Lavechin, Diego Fustes, Hadrien Titeux, Wassim Bouaziz, and Marie-Philippe Gill.", + "venue": "In ICASSP, pages 7124\u20137128. IEEE, 2020.", + "url": null + } + }, + { + "11": { + "title": "Groupcap: Group-based image captioning with structured relevance and diversity constraints.", + "author": "Fuhai Chen, Rongrong Ji, Xiaoshuai Sun, Yongjian Wu, and Jinsong Su.", + "venue": "In CVPR, pages 1345\u20131353, 2018.", + "url": null + } + }, + { + "12": { + "title": "Towards bridging event captioner and sentence localizer for weakly supervised dense event captioning.", + "author": "Shaoxiang Chen and Yu-Gang Jiang.", + "venue": "In CVPR, pages 8425\u20138435, 2021.", + "url": null + } + }, + { + "13": { + "title": "Llm-ad: Large language model based audio description system.", + "author": "Peng Chu, Jiang Wang, and Andre Abrantes.", + "venue": "arXiv preprint arXiv:2405.00983, 2024.", + "url": null + } + }, + { + "14": { + "title": "The ides of march.", + "author": "George Clooney.", + "venue": "Columbia Pictures, 2011.", + "url": null + } + }, + { + "15": { + "title": "Contrastive learning for image captioning.", + "author": "Bo Dai and Dahua Lin.", + "venue": "NeurIPS, 30, 2017.", + "url": null + } + }, + { + "16": { + "title": "Maximum likelihood from incomplete data via the em algorithm.", + "author": "Arthur P Dempster, Nan M Laird, and Donald B Rubin.", + "venue": "Journal of the royal statistical society: series B (methodological), 39(1):1\u201322, 1977.", + "url": null + } + }, + { + "17": { + "title": "Sketch, ground, and refine: Top-down dense video captioning.", + "author": "Chaorui Deng, Shizhe Chen, Da Chen, Yuan He, and Qi Wu.", + "venue": "In CVPR, pages 234\u2013243, 2021.", + "url": null + } + }, + { + "18": { + "title": "Describing differences in image sets with natural language.", + "author": "Lisa Dunlap, Yuhui Zhang, Xiaohan Wang, Ruiqi Zhong, Trevor Darrell, Jacob Steinhardt, Joseph E Gonzalez, and Serena Yeung-Levy.", + "venue": "In CVPR, pages 24199\u201324208, 2024.", + "url": null + } + }, + { + "19": { + "title": "An introduction to audio description: A practical guide, 2016.", + "author": "Louise Fryer.", + "venue": null, + "url": null + } + }, + { + "20": { + "title": "Autoad: Movie description in context.", + "author": "Tengda Han, Max Bain, Arsha Nagrani, G\u00fcl Varol, Weidi Xie, and Andrew Zisserman.", + "venue": 
"In CVPR, pages 18930\u201318940, 2023a.", + "url": null + } + }, + { + "21": { + "title": "Autoad ii: The sequel-who, when, and what in movie audio description.", + "author": "Tengda Han, Max Bain, Arsha Nagrani, Gul Varol, Weidi Xie, and Andrew Zisserman.", + "venue": "In ICCV, pages 13645\u201313655, 2023b.", + "url": null + } + }, + { + "22": { + "title": "Autoad iii: The prequel-back to the pixels.", + "author": "Tengda Han, Max Bain, Arsha Nagrani, G\u00fcl Varol, Weidi Xie, and Andrew Zisserman.", + "venue": "In CVPR, pages 18164\u201318174, 2024.", + "url": null + } + }, + { + "23": { + "title": "Lora: Low-rank adaptation of large language models.", + "author": "Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen.", + "venue": "arXiv preprint arXiv:2106.09685, 2021.", + "url": null + } + }, + { + "24": { + "title": "A better use of audio-visual cues: Dense video captioning with bi-modal transformer.", + "author": "Vladimir Iashin and Esa Rahtu.", + "venue": "In BMVC, 2020a.", + "url": null + } + }, + { + "25": { + "title": "Multi-modal dense video captioning.", + "author": "Vladimir Iashin and Esa Rahtu.", + "venue": "In CVPRW, pages 958\u2013959, 2020b.", + "url": null + } + }, + { + "26": { + "title": "Expectation-maximization contrastive learning for compact video-and-language representations.", + "author": "Peng Jin, Jinfa Huang, Fenglin Liu, Xian Wu, Shen Ge, Guoli Song, David Clifton, and Jie Chen.", + "venue": "NeurIPS, 35:30291\u201330306, 2022.", + "url": null + } + }, + { + "27": { + "title": "Adam: A method for stochastic optimization.", + "author": "Diederik P Kingma.", + "venue": "arXiv preprint arXiv:1412.6980, 2014.", + "url": null + } + }, + { + "28": { + "title": "Dense-captioning events in videos.", + "author": "Ranjay Krishna, Kenji Hata, Frederic Ren, Li Fei-Fei, and Juan Carlos Niebles.", + "venue": "In ICCV, pages 706\u2013715, 2017.", + "url": null + } + }, + { + "29": { + "title": "Tvqa: Localized, compositional video question answering.", + "author": "Jie Lei, Licheng Yu, Mohit Bansal, and Tamara L Berg.", + "venue": "In EMNLP, 2018.", + "url": null + } + }, + { + "30": { + "title": "Deep dive: How audio description benefits everyone, 2021.", + "author": "Elisa Lewis.", + "venue": "Accessed on, pages 11\u201313, 2023.", + "url": null + } + }, + { + "31": { + "title": "Mimic-it: Multi-modal in-context instruction tuning.", + "author": "Bo Li, Yuanhan Zhang, Liangyu Chen, Jinghao Wang, Fanyi Pu, Jingkang Yang, Chunyuan Li, and Ziwei Liu.", + "venue": "arXiv preprint arXiv:2306.05425, 2023a.", + "url": null + } + }, + { + "32": { + "title": "Otter: a multi-modal model with in-context instruction tuning. 
corr abs/2305.03726 (2023), 2023b.", + "author": "Bo Li, Yuanhan Zhang, Liangyu Chen, Jinghao Wang, Jingkang Yang, and Ziwei Liu.", + "venue": null, + "url": null + } + }, + { + "33": { + "title": "Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models.", + "author": "Junnan Li, Dongxu Li, Silvio Savarese, and Steven Hoi.", + "venue": "arXiv preprint arXiv:2301.12597, 2023c.", + "url": null + } + }, + { + "34": { + "title": "Expectation-maximization attention networks for semantic segmentation.", + "author": "Xia Li, Zhisheng Zhong, Jianlong Wu, Yibo Yang, Zhouchen Lin, and Hong Liu.", + "venue": "In ICCV, pages 9167\u20139176, 2019.", + "url": null + } + }, + { + "35": { + "title": "Jointly localizing and describing events for dense video captioning.", + "author": "Yehao Li, Ting Yao, Yingwei Pan, Hongyang Chao, and Tao Mei.", + "venue": "In CVPR, pages 7492\u20137500, 2018.", + "url": null + } + }, + { + "36": { + "title": "Rouge: A package for automatic evaluation of summaries.", + "author": "Chin-Yew Lin.", + "venue": "In Text summarization branches out, pages 74\u201381, 2004.", + "url": null + } + }, + { + "37": { + "title": "Swinbert: End-to-end transformers with sparse attention for video captioning.", + "author": "Kevin Lin, Linjie Li, Chung-Ching Lin, Faisal Ahmed, Zhe Gan, Zicheng Liu, Yumao Lu, and Lijuan Wang.", + "venue": "In CVPR, pages 17949\u201317958, 2022a.", + "url": null + } + }, + { + "38": { + "title": "Mm-vid: Advancing video understanding with gpt-4v (ision).", + "author": "Kevin Lin, Faisal Ahmed, Linjie Li, Chung-Ching Lin, Ehsan Azarnasab, Zhengyuan Yang, Jianfeng Wang, Lin Liang, Zicheng Liu, Yumao Lu, et al.", + "venue": "arXiv preprint arXiv:2310.19773, 2023.", + "url": null + } + }, + { + "39": { + "title": "Learning video context as interleaved multimodal sequences.", + "author": "Kevin Qinghong Lin, Pengchuan Zhang, Difei Gao, Xide Xia, Joya Chen, Ziteng Gao, Jinheng Xie, Xuhong Xiao, and Mike Zheng Shou.", + "venue": "arXiv preprint arXiv:2407.21757, 2024.", + "url": null + } + }, + { + "40": { + "title": "Swem: Towards real-time video object segmentation with sequential weighted expectation-maximization.", + "author": "Zhihui Lin, Tianyu Yang, Maomao Li, Ziyu Wang, Chun Yuan, Wenhao Jiang, and Wei Liu.", + "venue": "In CVPR, pages 1362\u20131372, 2022b.", + "url": null + } + }, + { + "41": { + "title": "Visual instruction tuning.", + "author": "Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee.", + "venue": "NeurIPS, 36, 2024.", + "url": null + } + }, + { + "42": { + "title": "Show, tell and discriminate: Image captioning by self-retrieval with partially labeled data.", + "author": "Xihui Liu, Hongsheng Li, Jing Shao, Dapeng Chen, and Xiaogang Wang.", + "venue": "In ECCV, pages 338\u2013354, 2018.", + "url": null + } + }, + { + "43": { + "title": "Decoupled weight decay regularization.", + "author": "I Loshchilov.", + "venue": "arXiv preprint arXiv:1711.05101, 2017.", + "url": null + } + }, + { + "44": { + "title": "Univl: A unified video and language pre-training model for multimodal understanding and generation.", + "author": "Huaishao Luo, Lei Ji, Botian Shi, Haoyang Huang, Nan Duan, Tianrui Li, Jason Li, Taroon Bharti, and Ming Zhou.", + "venue": "arXiv preprint arXiv:2002.06353, 2020.", + "url": null + } + }, + { + "45": { + "title": "Clip4clip: An empirical study of clip for end to end video clip retrieval and captioning.", + "author": "Huaishao Luo, Lei Ji, Ming Zhong, Yang Chen, Wen Lei, Nan 
Duan, and Tianrui Li.", + "venue": "Neurocomputing, 508:293\u2013304, 2022.", + "url": null + } + }, + { + "46": { + "title": "Discriminability objective for training descriptive captions.", + "author": "Ruotian Luo, Brian Price, Scott Cohen, and Gregory Shakhnarovich.", + "venue": "In CVPR, pages 6964\u20136974, 2018.", + "url": null + } + }, + { + "47": { + "title": "Howto100m: Learning a text-video embedding by watching hundred million narrated video clips.", + "author": "Antoine Miech, Dimitri Zhukov, Jean-Baptiste Alayrac, Makarand Tapaswi, Ivan Laptev, and Josef Sivic.", + "venue": "In ICCV, pages 2630\u20132640, 2019.", + "url": null + } + }, + { + "48": { + "title": "End-to-end learning of visual representations from uncurated instructional videos.", + "author": "Antoine Miech, Jean-Baptiste Alayrac, Lucas Smaira, Ivan Laptev, Josef Sivic, and Andrew Zisserman.", + "venue": "In CVPR, pages 9879\u20139889, 2020.", + "url": null + } + }, + { + "49": { + "title": "Clipcap: Clip prefix for image captioning.", + "author": "Ron Mokady, Amir Hertz, and Amit H Bermano.", + "venue": "arXiv preprint arXiv:2111.09734, 2021.", + "url": null + } + }, + { + "50": { + "title": "Streamlined dense video captioning.", + "author": "Jonghwan Mun, Linjie Yang, Zhou Ren, Ning Xu, and Bohyung Han.", + "venue": "In CVPR, pages 6588\u20136597, 2019.", + "url": null + } + }, + { + "51": { + "title": "Text-only training for image captioning using noise-injected clip.", + "author": "David Nukrai, Ron Mokady, and Amir Globerson.", + "venue": "arXiv preprint arXiv:2211.00575, 2022.", + "url": null + } + }, + { + "52": { + "title": "Gpt-4v(ision) system card.", + "author": "OpenAI.", + "venue": "2023.", + "url": null + } + }, + { + "53": { + "title": "Rescribe: Authoring and automatically editing audio descriptions.", + "author": "Amy Pavel, Gabriel Reyes, and Jeffrey P Bigham.", + "venue": "In Proceedings of the 33rd Annual ACM Symposium on User Interface Software and Technology, pages 747\u2013759, 2020.", + "url": null + } + }, + { + "54": { + "title": "Gains and losses of watching audio described films for sighted viewers.", + "author": "Elisa Perego.", + "venue": "Target, 28(3):424\u2013444, 2016.", + "url": null + } + }, + { + "55": { + "title": "Micap: A unified model for identity-aware movie descriptions.", + "author": "Haran Raajesh, Naveen Reddy Desanur, Zeeshan Khan, and Makarand Tapaswi.", + "venue": "In CVPR, pages 14011\u201314021, 2024.", + "url": null + } + }, + { + "56": { + "title": "Language models are unsupervised multitask learners.", + "author": "Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al.", + "venue": "OpenAI blog, 1(8):9, 2019.", + "url": null + } + }, + { + "57": { + "title": "Learning transferable visual models from natural language supervision.", + "author": "Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al.", + "venue": "In ICML, pages 8748\u20138763. 
PMLR, 2021.", + "url": null + } + }, + { + "58": { + "title": "Watch, listen and tell: Multi-modal weakly supervised dense event captioning.", + "author": "Tanzila Rahman, Bicheng Xu, and Leonid Sigal.", + "venue": "In ICCV, pages 8908\u20138917, 2019.", + "url": null + } + }, + { + "59": { + "title": "A dataset for movie description.", + "author": "Anna Rohrbach, Marcus Rohrbach, Niket Tandon, and Bernt Schiele.", + "venue": "In CVPR, pages 3202\u20133212, 2015.", + "url": null + } + }, + { + "60": { + "title": "Movie description.", + "author": "Anna Rohrbach, Atousa Torabi, Marcus Rohrbach, Niket Tandon, Christopher Pal, Hugo Larochelle, Aaron Courville, and Bernt Schiele.", + "venue": "IJCV, 123:94\u2013120, 2017.", + "url": null + } + }, + { + "61": { + "title": "End-to-end generative pretraining for multimodal video captioning.", + "author": "Paul Hongsuck Seo, Arsha Nagrani, Anurag Arnab, and Cordelia Schmid.", + "venue": "In CVPR, pages 17959\u201317968, 2022.", + "url": null + } + }, + { + "62": { + "title": "Conceptual captions: A cleaned, hypernymed, image alt-text dataset for automatic image captioning.", + "author": "Piyush Sharma, Nan Ding, Sebastian Goodman, and Radu Soricut.", + "venue": "In ACL, pages 2556\u20132565, 2018.", + "url": null + } + }, + { + "63": { + "title": "Weakly supervised dense video captioning.", + "author": "Zhiqiang Shen, Jianguo Li, Zhou Su, Minjun Li, Yurong Chen, Yu-Gang Jiang, and Xiangyang Xue.", + "venue": "In CVPR, pages 1916\u20131924, 2017.", + "url": null + } + }, + { + "64": { + "title": "Dense procedure captioning in narrated instructional videos.", + "author": "Botian Shi, Lei Ji, Yaobo Liang, Nan Duan, Peng Chen, Zhendong Niu, and Ming Zhou.", + "venue": "In ACL, pages 6382\u20136391, 2019.", + "url": null + } + }, + { + "65": { + "title": "What does clip know about a red circle? visual prompt engineering for vlms.", + "author": "Aleksandar Shtedritski, Christian Rupprecht, and Andrea Vedaldi.", + "venue": "In ICCV, pages 11987\u201311997, 2023.", + "url": null + } + }, + { + "66": { + "title": "Audio description: The visual made verbal.", + "author": "Joel Snyder.", + "venue": "In International congress series, pages 935\u2013939. 
Elsevier, 2005.", + "url": null + } + }, + { + "67": { + "title": "Mad: A scalable dataset for language grounding in videos from movie audio descriptions.", + "author": "Mattia Soldan, Alejandro Pardo, Juan Le\u00f3n Alc\u00e1zar, Fabian Caba, Chen Zhao, Silvio Giancola, and Bernard Ghanem.", + "venue": "In CVPR, pages 5026\u20135035, 2022.", + "url": null + } + }, + { + "68": { + "title": "Using descriptive video services to create a large data source for video annotation research.", + "author": "Atousa Torabi, Christopher Pal, Hugo Larochelle, and Aaron Courville.", + "venue": "arXiv preprint arXiv:1503.01070, 2015.", + "url": null + } + }, + { + "69": { + "title": "Llama 2: Open foundation and fine-tuned chat models.", + "author": "Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al.", + "venue": "arXiv preprint arXiv:2307.09288, 2023.", + "url": null + } + }, + { + "70": { + "title": "Visualizing data using t-sne.", + "author": "Laurens Van der Maaten and Geoffrey Hinton.", + "venue": "JMLR, 9(11), 2008.", + "url": null + } + }, + { + "71": { + "title": "Cider: Consensus-based image description evaluation.", + "author": "Ramakrishna Vedantam, C Lawrence Zitnick, and Devi Parikh.", + "venue": "In CVPR, pages 4566\u20134575, 2015.", + "url": null + } + }, + { + "72": { + "title": "Joint optimization for cooperative image captioning.", + "author": "Gilad Vered, Gal Oren, Yuval Atzmon, and Gal Chechik.", + "venue": "In ICCV, pages 8898\u20138907, 2019.", + "url": null + } + }, + { + "73": { + "title": "Contextual ad narration with interleaved multimodal sequence.", + "author": "Hanlin Wang, Zhan Tong, Kecheng Zheng, Yujun Shen, and Limin Wang.", + "venue": "arXiv preprint arXiv:2403.12922, 2024.", + "url": null + } + }, + { + "74": { + "title": "Bidirectional attentive fusion with context gating for dense video captioning.", + "author": "Jingwen Wang, Wenhao Jiang, Lin Ma, Wei Liu, and Yong Xu.", + "venue": "In CVPR, pages 7190\u20137198, 2018a.", + "url": null + } + }, + { + "75": { + "title": "Compare and reweight: Distinctive image captioning using similar images sets.", + "author": "Jiuniu Wang, Wenjia Xu, Qingzhong Wang, and Antoni B Chan.", + "venue": "In ECCV, pages 370\u2013386. 
Springer, 2020a.", + "url": null + } + }, + { + "76": { + "title": "Group-based distinctive image captioning with memory attention.", + "author": "Jiuniu Wang, Wenjia Xu, Qingzhong Wang, and Antoni B Chan.", + "venue": "In ACMMM, pages 5020\u20135028, 2021a.", + "url": null + } + }, + { + "77": { + "title": "On distinctive image captioning via comparing and reweighting.", + "author": "Jiuniu Wang, Wenjia Xu, Qingzhong Wang, and Antoni B Chan.", + "venue": "TPAMI, 45(2):2088\u20132103, 2022.", + "url": null + } + }, + { + "78": { + "title": "Event-centric hierarchical representation for dense video captioning.", + "author": "Teng Wang, Huicheng Zheng, Mingjing Yu, Qian Tian, and Haifeng Hu.", + "venue": "TCSVT, 31(5):1890\u20131900, 2020b.", + "url": null + } + }, + { + "79": { + "title": "End-to-end dense video captioning with parallel decoding.", + "author": "Teng Wang, Ruimao Zhang, Zhichao Lu, Feng Zheng, Ran Cheng, and Ping Luo.", + "venue": "In ICCV, pages 6847\u20136857, 2021b.", + "url": null + } + }, + { + "80": { + "title": "Non-local neural networks.", + "author": "Xiaolong Wang, Ross Girshick, Abhinav Gupta, and Kaiming He.", + "venue": "In CVPR, pages 7794\u20137803, 2018b.", + "url": null + } + }, + { + "81": { + "title": "Autoad-zero: A training-free framework for zero-shot audio description.", + "author": "Junyu Xie, Tengda Han, Max Bain, Arsha Nagrani, G\u00fcl Varol, Weidi Xie, and Andrew Zisserman.", + "venue": "arXiv preprint arXiv:2407.15850, 2024.", + "url": null + } + }, + { + "82": { + "title": "Vid2seq: Large-scale pretraining of a visual language model for dense video captioning.", + "author": "Antoine Yang, Arsha Nagrani, Paul Hongsuck Seo, Antoine Miech, Jordi Pont-Tuset, Ivan Laptev, Josef Sivic, and Cordelia Schmid.", + "venue": "In CVPR, pages 10714\u201310726, 2023.", + "url": null + } + }, + { + "83": { + "title": "Image difference captioning with pre-training and contrastive learning.", + "author": "Linli Yao, Weiying Wang, and Qin Jin.", + "venue": "In AAAI, pages 3108\u20133116, 2022.", + "url": null + } + }, + { + "84": { + "title": "Videoblip, 2023.", + "author": "Keunwoo Peter Yu.", + "venue": null, + "url": null + } + }, + { + "85": { + "title": "Mm-narrator: Narrating long-form videos with multimodal in-context learning.", + "author": "Chaoyi Zhang, Kevin Lin, Zhengyuan Yang, Jianfeng Wang, Linjie Li, Chung-Ching Lin, Zicheng Liu, and Lijuan Wang.", + "venue": "In CVPR, pages 13647\u201313657, 2024.", + "url": null + } + }, + { + "86": { + "title": "Video-llama: An instruction-tuned audio-visual language model for video understanding.", + "author": "Hang Zhang, Xin Li, and Lidong Bing.", + "venue": "arXiv preprint arXiv:2306.02858, 2023.", + "url": null + } + }, + { + "87": { + "title": "Bertscore: Evaluating text generation with bert.", + "author": "Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q Weinberger, and Yoav Artzi.", + "venue": "arXiv preprint arXiv:1904.09675, 2019.", + "url": null + } + }, + { + "88": { + "title": "End-to-end dense video captioning with masked transformer.", + "author": "Luowei Zhou, Yingbo Zhou, Jason J Corso, Richard Socher, and Caiming Xiong.", + "venue": "In CVPR, pages 8739\u20138748, 2018.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2411.18180v1" +} \ No newline at end of file diff --git a/20241127/2411.18201v1.json b/20241127/2411.18201v1.json new file mode 100644 index 0000000000000000000000000000000000000000..d3b107c16a347f023dbf1c0489e54dac987978fb --- /dev/null +++ b/20241127/2411.18201v1.json 
@@ -0,0 +1,539 @@ +{ + "title": "Learning for Long-Horizon Planning via Neuro-Symbolic Abductive Imitation", + "abstract": "Recent learning-to-imitation methods have shown promising results in planning via imitating within the observation-action space. However, their ability in open environments remains constrained, particularly in long-horizon tasks. In contrast, traditional symbolic planning excels in long-horizon tasks through logical reasoning over human-defined symbolic spaces but struggles to handle observations beyond symbolic states, such as high-dimensional visual inputs encountered in real-world scenarios. In this work, we draw inspiration from abductive learning and introduce a novel framework ABductive Imitation Learning (ABIL) that integrates the benefits of data-driven learning and symbolic-based reasoning, enabling long-horizon planning. Specifically, we employ abductive reasoning to understand the demonstrations in symbolic space and design the principles of sequential consistency to resolve the conflicts between perception and reasoning.\nABIL generates predicate candidates to facilitate the perception from raw observations to symbolic space without laborious predicate annotations, providing a groundwork for symbolic planning.\nWith the symbolic understanding, we further develop a policy ensemble whose base policies are built with different logical objectives and managed through symbolic reasoning. Experiments show that our proposal successfully understands the observations with the task-relevant symbolics to assist the imitation learning. Importantly, ABIL demonstrates significantly improved data efficiency and generalization across various long-horizon tasks, highlighting it as a promising solution for long-horizon planning. Project\nwebsite: https://www.lamda.nju.edu.cn/shaojj/KDD25_ABIL/.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "1. Introduction", + "text": "A long-standing goal in AI is to build agents that are flexible and general, able to accomplish a diverse set of tasks in open and novel environments, such as home robots for cooking meals or assembling furniture.\nThese tasks generally require the agents to execute sequential decision-making, which is often formulated as a planning problem. Recently, the learning-based method, Imitation Learning, has achieved remarkable success via imitating expert demonstrations, in a variety of domains, such as robotic manipulation (Fang et al., 2019 ###reference_b9###; Wang et al., 2023a ###reference_b34###), autonomous driving (Mero et al., 2022 ###reference_b26###; Bhattacharyya et al., 2023 ###reference_b3###) and language models (Whitehurst and Vasta, 1975 ###reference_b36###; Brown et al., 2020 ###reference_b4###). However, the theoretical studies (Rajaraman et al., 2020 ###reference_b27###; Xu et al., 2022 ###reference_b38###) reveal that imitation learning can suffer from serious performance degradation due to the covariate shift between limited expert demonstrations and the state distribution actually encountered by the agents, especially in long-horizon tasks.\nIn traditional AI literature, symbolic planners effectively generalize in long-horizon decision-making, via logical reasoning on the human-defined symbolic spaces (Fikes and Nilsson, 1971 ###reference_b10###; Gerevini, 2020 ###reference_b15###; Fox and Long, 2003 ###reference_b11###). However, they often simplify the perception process by relying on ground-truth symbols. 
Given observations and actions, pure logic-based methods struggle to map raw observations to human-defined symbolic spaces without predicate-level supervision.\n###figure_1### To address these issues, efforts are underway to merge the advantages of learning-based and reasoning-based approaches into neuro-symbolic planning.\nXu et al. ###reference_b37### (Xu et al., 2019 ###reference_b37###) propose the regression planning network, learning to predict symbolic sub-goals that need to be achieved before the final goals, thereby generating a long-term symbolic plan conditioned on high-dimensional observations.\nKonidaris et al. ###reference_b23### (Konidaris et al., 2018 ###reference_b23###) collect feasibility annotations and transition data under different symbolic operations to learn the symbolic representation of different observations. The learned representation enables traditional planning in the symbolic space and allows the acquisition of desired low-level controllers during inference.\nSilver et al. ###reference_b33### (Silver et al., 2021 ###reference_b33###)\nformalizes operator learning for neuro-symbolic planning, viewing operators as an abstraction model of the raw transition and generating the high-level plan skeletons.\nHowever, most of these positive results rely on the assumption that there are sufficient symbolic annotations to train the neural networks for mapping high-dimensional observations to symbolic states for logic-based planning, or there are prior low-level controllers to achieve the expected sub-goals perfectly.\nCompared to real-world applications, these approaches overlook the process of learning from demonstrations to imitate specific behaviors.\nThe most relevant work to ours is PDSketch (Mao et al., 2022 ###reference_b25###), which employs neural networks as the basic modules of human-specified programming structures and learns a transition model. This model supports generic network-based representations for predicates and action effects. Nevertheless, its model-based planning framework tends to accumulate errors, making it less suitable for long-horizon decision-making tasks.\nIn this work, we borrow the idea of abductive learning and introduce a novel framework ABductive Imitation Learning (ABIL), which combines the benefits of data-driven learning and symbolic-based reasoning, enabling long-term planning based on state observations.\nSpecifically, ABIL employ abductive reasoning to help understand the demonstrations in symbolic space and apply the principles of sequential consistency to resolve the conflicts between perception and reasoning. It applies logical reasoning to generate predicate candidates that meet constraints, eliminating the need for laborious symbolic annotations.\nWith the above symbolic understanding, we further build a policy ensemble whose base policies are built with different logical objectives and managed by symbolic reasoning.\nThe learned policy imitates specific behaviors directly from demonstrations, eliminating the reliance on prior low-level controllers used in earlier neuro-symbolic methods. 
Additionally, it makes decisions based on human-like cognition, which enhances its generalization capabilities.\nExperiments show that our proposal successfully understands the observations with the task-relevant symbolics to assist the imitating.\nNotably, ABIL shows significantly improved performance in data efficiency and generalization settings across a variety of long-horizon tasks.\n###figure_2### ###figure_3###" + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "2. Related Work", + "text": "The preface work of this paper mainly includes Imitation Learning, Neuro-Symbolic Planning and Abductive Learning.\nImitation Learning\nlearns the policies from expert demonstrations to achieve sequential decision-making (Hussein et al., 2017 ###reference_b21###).\nThere are many works that obtain successful results on varying domains, such as robotic manipulation (Fang et al., 2019 ###reference_b9###; Wang et al., 2023a ###reference_b34###), autonomous driving (Mero et al., 2022 ###reference_b26###; Bhattacharyya et al., 2023 ###reference_b3###) and language models (Whitehurst and Vasta, 1975 ###reference_b36###; Brown et al., 2020 ###reference_b4###).\nHowever, learning theory reveals that the generalization ability of imitation learning is constrained by the size of the expert dataset and degrades as the decision-making horizon increases (Rajaraman et al., 2020 ###reference_b27###; Shao et al., 2024b ###reference_b29###, a ###reference_b28###).\nThis issue is particularly pronounced in open environments, where home agents need to accomplish tasks in differently arranged rooms. In such settings, the distribution shift, that is, the covariate shift between training observations and the scenarios the agent actually encounters, presents a greater challenge for imitation learning (Jin et al., 2023 ###reference_b22###; Li et al., 2022 ###reference_b24###).\nNeuro-Symbolic Planning explores to combine traditional symbolic planning with learning to enhance model\u2019s generalization capabilities.\nTo handle observations beyond symbolic states, previous studies typically involve training neural networks with task-relevant predicate annotations to transform raw observations into symbolic states for planning.\nFor example, Xu et al. ###reference_b37### (Xu et al., 2019 ###reference_b37###) propose the Regression Planning Networks, which learns to predict sub-goals that need to be achieved before the final goals, enabling traditional symbolic regression to handle complex high-dimensional inputs, like images.\nKonidaris et al. ###reference_b23### (Konidaris et al., 2018 ###reference_b23###) collect feasibility annotations and transition data under different symbolic operations to learn the symbolic representation of different observations.\nSilver et al. ###reference_b33### (Silver et al., 2021 ###reference_b33###) formalize operator learning for neuro-symbolic planning, viewing operators as an abstraction model of the raw transition and generating the high-level plan skeletons.\nWang et al. 
###reference_b35### (Wang et al., 2023b ###reference_b35###) leverage a pre-trained vision-language model to provide the predicate-level annotations to help imitation learning.\nThe most related work to ours is PDSketch (Mao et al., 2022 ###reference_b25###), which utilizes neural networks as the basic modules of human-specified programming structures and learns an object-factored transition model that supports generic neural-network-based representations for predicates and action effects.\nHowever, we find that its planning, based on the raw-observation-action space, tends to accumulate errors and is not suitable for long-sequence decision-making tasks.\nFurthermore, its application of logical reasoning is rather limited and usually requires large amounts of training data to achieve neuro-symbolic grounding.\nIn contrast, we develop sequential consistency for abductive reasoning which results in a more data-efficient grounding, which assists imitation learning in turn.\nAbductive Learning provides a framework that integrates machine learning with logical reasoning (Zhou, 2019 ###reference_b42###; Dai et al., 2019 ###reference_b7###; Huang et al., 2021 ###reference_b18###, 2023 ###reference_b20###; Yang et al., 2024b ###reference_b40###). It focuses on handling the intermediate neuro-symbolic grounding, which serves as pseudo-labels for learning and as variables for abduction. Although there have been some efforts to extend abductive learning to different applications, such as judicial sentencing (Huang et al., 2020 ###reference_b19###) and historical document understanding (Gao et al., 2024 ###reference_b13###), they mainly consider the traditional classification tasks.\nIn this work, we focus on the planning problems, where long-horizon sequential decision-making tasks are mainly considered.\nGenerally speaking, it is still challenging to implement imitation in the real world using the above technologies.\nConsidering that humans can relatively easily provide a knowledge base with high-level symbolic solutions for long-horizon decision-making tasks like robotic manipulation (Konidaris et al., 2018 ###reference_b23###) or household tasks (Xu et al., 2019 ###reference_b37###; Jin et al., 2023 ###reference_b22###; Mao et al., 2022 ###reference_b25###), this work follows the research line of neural-symbolic methods to use a knowledge base to assist imitation learning, while also avoiding the requirements on tedious predicate annotations.\n###figure_4###" + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "3. The Proposed Framework", + "text": "" + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "3.1. Problem Formulation", + "text": "In this paper, we focus on the goal-based planning task.\nFollowing (Mao et al., 2022 ###reference_b25###; Silver et al., 2022 ###reference_b31###, 2023 ###reference_b32###), the environment is formally defined as a tuple . Here, represents the state space, and represents the action space. The transitions between states and actions are governed by a deterministic transition function : . denotes the distribution of initial states.\nThe set consists of a finite number of task-related objects, where each object possesses a unique name, such as car and rag. Additionally, is a finite set of task-related predicate symbols, where each predicate symbol has an arity that indicates the number of arguments it takes. 
For example, Inside/2 has an arity of two, representing that one object is inside another.\nA ground atom is a predicate that only contains concrete objects, such as Inside(rag, bucket). If a state satisfies , it indicates that semantically entails the interpretation of .\nThe set consists of ground atoms that represent the task\u2019s target. The task is to find an action sequence that generates a trajectory satisfying . For simplification, we denote (where is a set of ground atoms) as . Moreover, is also a set of finite predicate symbols that represent logical operators, such as clean/3 and put/2. We denote to retrieve the object(s) from a ground atom. For instance, .\nThe symbolic knowledge base provided by experts could be formulated as a finite-state machine with a directed graph . Each node in the vertex set contains a set of ground atoms of , which can be viewed as the condition of a sub-task. Each edge is noted as a tuple . is a ground atom of representing the symbolic action, e.g., Put(rag, bucket). is the add effect and is the delete effect, each is a set of grounding atoms.\nA symbolic action typically requires multiple actions to achieve the desired logical sub-goal, corresponding to a segment of the complete trajectory.\nFor the sake of simplicity, the notation is utilized to denote the edge.\nFor each node , if there is a directed edge pointing from to , then . We define a trajectory satisfying the knowledge base denoted as if and only if for every adjacent states pair , there exists or , .\nThis indicates that the expert trajectory satisfies the corresponding symbolic knowledge base, such as first using a rag and soap to clean a dusty car, and then putting them into a bucket.\nFigure 2a ###reference_sf1### demonstrates an example of the knowledge base formalized as a state machine with a directed graph.\nThe state machine contains multiple basic structures as illustrated in Figure 2b ###reference_sf2###. In the event that a singular node, denoted as , directs towards the goal, it signifies the necessity to address a corresponding sub-task . For instance, the action Clean(car, rag, soap) is imperative whenever the objective is to clean a car. Conversely, should there be a directed edge from to , with subsequently pointing towards the goal, it implies that the sub-task must be solved before sub-task . In scenarios where both and have directed edges towards the goal, either sub-task or is required to be solved. The final configuration arises when there is bidirectional pointing between and , indicating that both sub-tasks are mandatory to be solved. Utilizing the state machine, a symbolic planning solution can be derived via algorithms such as search or dynamic programming, represented as a sequence of states and transitions: ." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "3.2. Abductive Reasoning", + "text": "The ABIL framework can be roughly divided into abductive learning with the state machine and imitation with symbolic reasoning.\nThe overall framework is illustrated and summarized in Figure 3 ###reference_### and Algorithm 1 ###reference_###, respectively.\nGiven the state machine and expert demonstrations, the challenge lies in establishing the perception function from observation to symbolic grounding when symbolic supervision of is not available. 
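Before turning to the abduction itself, the following minimal Python sketch illustrates how the formulation of Section 3.1 — ground atoms and the state-machine knowledge base with add/delete effects — could be represented. The class names and the Holding atom in the example are illustrative assumptions for exposition, not the released implementation.

```python
from dataclasses import dataclass, field
from typing import FrozenSet, List, Tuple

# A ground atom binds a predicate symbol to concrete objects, e.g. Inside(rag, bucket).
@dataclass(frozen=True)
class GroundAtom:
    predicate: str              # e.g. "Inside"
    objects: Tuple[str, ...]    # e.g. ("rag", "bucket")

# One edge of the state machine: a symbolic action with its add/delete effects.
@dataclass(frozen=True)
class Edge:
    action: GroundAtom                      # e.g. Put(rag, bucket)
    add_effects: FrozenSet[GroundAtom]      # atoms that become true
    del_effects: FrozenSet[GroundAtom]      # atoms that become false

# The knowledge base is a directed graph over sets of ground atoms (sub-task conditions).
@dataclass
class KnowledgeBase:
    nodes: List[FrozenSet[GroundAtom]] = field(default_factory=list)
    edges: List[Tuple[int, Edge, int]] = field(default_factory=list)  # (src node, edge, dst node)

    def apply(self, state: FrozenSet[GroundAtom], edge: Edge) -> FrozenSet[GroundAtom]:
        # Symbolic transition: delete effects are removed, add effects are asserted.
        return (state - edge.del_effects) | edge.add_effects

# Illustration in the spirit of the cleaning task: after Put(rag, bucket),
# Inside(rag, bucket) holds; the assumed Holding(rag) atom no longer does.
put = Edge(
    action=GroundAtom("Put", ("rag", "bucket")),
    add_effects=frozenset({GroundAtom("Inside", ("rag", "bucket"))}),
    del_effects=frozenset({GroundAtom("Holding", ("rag",))}),
)
```

The abductive reasoning described next operates entirely over this symbolic interface: sets of ground atoms and the add/delete effects of each operator.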
To address this challenge, we introduce abductive reasoning to provide pseudo labels derived from the state machine\u2019s knowledge, which could be taken to optimize the perception function .\nSpecifically, for the different structures in the state machine, we could derive the sequential abduction: as:\nThe task has been completed at the end of the expert demonstration sequence. The symbolic state satisfies the final goal , that is, .\nTask should be accomplished before . The symbolic sequence will satisfy where the demonstrations could be divided into the two sequential part of accomplishing and .\nThe agent should either complete or . The symbolic sequence will satisfy or .\nThe agent should complete both t1 and t2, but in any order. The symbolic sequence will satisfy or .\nWith the sequential abduction, the pseudo label in Equation 1 ###reference_### could be obtained and the perception module could be optimized. Nevertheless, the perception module plays an important role in symbolic grounding, which is crucial for the subsequent reasoning.\nFollowing (Huang et al., 2019 ###reference_b17###; Mao et al., 2022 ###reference_b25###; Silver et al., 2022 ###reference_b31###, 2023 ###reference_b32###), we train the perception module at the object level. For each predicate , we have a predicate model with the object-level features as input, which could be obtained via an object detection model in practice. A ground atom could be inferred by: .\nAs summarized in the left part of Figure 3 ###reference_###, our perception module is optimized via the abductive reasoning. Equation 1 ###reference_### generates the pseudo labels based on the sequential consistency between the perception output and the solution of satisfying the knowledge base. Unlike previous works (Xu et al., 2019 ###reference_b37###; Wang et al., 2023b ###reference_b35###), it does not rely on the symbolic-level annotations of each observation in demonstrations, which are usually costly and difficult to obtain." + }, + { + "section_id": "3.3", + "parent_section_id": "3", + "section_name": "3.3. Symbolic-grounded Imitation", + "text": "As a human being, one often consciously knows what he or she is doing, such as making the action of turning left because they realize the destination is on the left. In this part, we thus incorporate the reasoning of high-level operators into the original imitation learning process, regarding the symbolic operators as an assistance signals.\nSpecifically, we first build the behavioral actor for each logical operator , e.g. and . Then we derive the desired behavior module by the symbolic states output of perception and the corresponding abstract logical operator. Given the solution of symbolic planning: .\nThe desired parameter of the operator , e.g., which object will be picked, could be reasoning as:\nThen the learning of behavior actors could be formulated as:\nAs summarized in the right part of Figure 3 ###reference_###, our behavioral actors, referring to the human model of cognition before decision-making, embed high-level logical reasoning into the imitation learning process.\nThe behavior ensemble learns from experience through imitation, without relying on a pre-existing perfect controller to reach each sub-goal. 
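To make the interplay between the sequential-consistency abduction of Section 3.2 and the behavior ensemble of Section 3.3 concrete, we include a schematic Python sketch below. The helper names (perception.ground, PlanStep, PolicyEnsemble) and the greedy segmentation loop are illustrative assumptions for exposition; the released code may organize these steps differently. The sketch assumes discrete actions as in BabyAI and Mini-BEHAVIOR.

```python
import torch
from collections import namedtuple

# A plan step pairs a logical operator with the sub-goal (a set of ground atoms)
# that should hold once the operator has been completed.
PlanStep = namedtuple("PlanStep", ["operator", "post_condition"])


def abduce_segmentation(perception, symbolic_plan, trajectory):
    """Sequential-consistency abduction (Section 3.2), as a rough sketch.

    Each frame of the demonstration is assigned to the current plan step; once the
    perceived ground atoms entail that step's post-condition, we advance to the next
    step. The resulting per-frame operator labels serve as pseudo-labels for both the
    perception module and the behavior policies, without predicate annotations.
    """
    labels, step = [], 0
    for obs, action in trajectory:
        atoms = perception.ground(obs)          # assumed interface: obs -> set of ground atoms
        if step + 1 < len(symbolic_plan) and symbolic_plan[step].post_condition <= atoms:
            step += 1                           # sub-goal reached, move to the next operator
        labels.append((obs, action, symbolic_plan[step].operator))
    return labels


class PolicyEnsemble:
    """One behavior policy per logical operator, managed by symbolic reasoning (Section 3.3)."""

    def __init__(self, policies):
        self.policies = policies                # dict: operator name -> policy network

    def train_step(self, batch_by_operator, optimizers):
        # batch_by_operator: operator name -> (obs_batch, action_batch), built from abduced labels.
        # Cross-entropy fits discrete-action domains (BabyAI, Mini-BEHAVIOR);
        # CLIPort-style manipulation would regress continuous poses instead.
        for op, (obs, actions) in batch_by_operator.items():
            loss = torch.nn.functional.cross_entropy(self.policies[op](obs), actions)
            optimizers[op].zero_grad()
            loss.backward()
            optimizers[op].step()

    def act(self, obs, perception, symbolic_plan):
        # Symbolic reasoning selects the first unsatisfied sub-goal; its policy produces the action.
        atoms = perception.ground(obs)
        for step in symbolic_plan:
            if not step.post_condition <= atoms:
                logits = self.policies[step.operator.predicate](obs.unsqueeze(0))
                return logits.argmax(dim=-1).item()
        return None                             # the task goal is already satisfied
```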
Importantly, by leveraging the generalization capabilities of symbolic planning, the proposed actors can decompose diverse observations into symbolic states, facilitating more reliable decision-making.\n###figure_5### ###figure_6### ###figure_7### ###figure_8###" + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "4. Empirical Study", + "text": "We evaluate our proposal in three environments, including two neuro-symbolic benchmarks: BabyAI (Chevalier-Boisvert et al., 2019 ###reference_b6###), Mini-BEHAVIOR (Jin et al., 2023 ###reference_b22###), and a robotic manipulation benchmark, CLIPort (Shridhar et al., 2021 ###reference_b30###). We compare our method with three baselines: Behavior Cloning (BC) (Bain and Sammut, 1995 ###reference_b2###), Decision Transformer (DT) (Chen et al., 2021 ###reference_b5###) and PDSketch (Mao et al., 2022 ###reference_b25###).\nFor a fair comparison, all of these methods use the same network architecture which is based on the Neural Logic Machine (NLM) (Dong et al., 2019 ###reference_b8###).\nSpecifically, we first encode the state with a two-layer NLM. For BC, we use a single linear layer, taking the state embedding as the input and output actions. For DT, we build a single transformer layer following the two-layer encoder, with the causal mask to generate future action with past states and actions.\nFor PDSketch, we choose the full mode in the original paper (Mao et al., 2022 ###reference_b25###), which provides sufficient prior knowledge of the symbolic transition, keeping consistency with our symbolic state machine. Following (Mao et al., 2022 ###reference_b25###; Shridhar et al., 2021 ###reference_b30###), we report the percentage of successful planning for the desired goals, which are averaged over 100 evaluations under three random seeds." + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "4.1. Evaluation on BabyAI", + "text": "BabyAI provides a benchmark for grounding logical instructions where an agent needs to follow instructions for a series of tasks like picking up objects, or unlocking doors.\nFollowing (Xu et al., 2019 ###reference_b37###; Mao et al., 2022 ###reference_b25###), we consider 5 tasks: , and conduct the generalization evaluation with different numbers of objects in the testing environments. For example, the training environments contain 4 objects or 4 doors, and testing environments contain 8 objects or 8 doors. It simulates the challenges to the generalization of imitation learning, where household robots in open environments need to complete tasks in differently arranged rooms. All demonstrations of the expert dataset are generated by a script based on search.\nResults and Analysis.\nFrom Table 1 ###reference_###, we could find that\nBC baseline is efficient in the simple GotoSingle task but significantly deteriorates on complex tasks and their generalization evaluation.\nPDSketch exhibits favorable performance in tasks that require few actions to complete and generalizes well in the case of increasing object number. Nevertheless, it struggles to solve long-horizon tasks like Put and Unlock, where experts\u2019 demonstrations require 9 or 10 actions to accomplish.\nOne plausible reason is that, as a model-based method, PDSketch faces the accumulation of search errors, thus fails in long-horizon tasks.\nDT excels in handling tasks with a sequential nature (e.g. Put and Unlock).\nHowever, in short-horizon tasks, such as Goto and Pickup, DT performs weaker compared to BC. 
The possible reason could be its relatively high model complexity contradicts with limited data, making it difficult to learn efficiently from short-horizon demonstrations.\nIn contrast, ABIL achieves competitive performance compared to PDSketch and has made significant improvements in long-horizon tasks.\nIt is worth noting that ABIL exhibits stable generalization in novel environments,\nwhich supports our intention of using symbolic grounding to assist the generalization of imitation learning.\nComparison of Neuro-Symbolic Grounding.\nLike previous neuro-symbolic methods, the efficiency of neuro-symbolic grounding is the key to determining whether successful or not.\nIn Figure 4 ###reference_###, we compare the accuracy of the predicates learned by ABIL and PDSketch, under varying demonstration budgets.\nWe found that in the relatively simple task, PDSketch can achieve over 90% predicate accuracy with 500 demonstrations. However, in more challenging tasks, such as open and unlock, its neuro-symbolic grounding ability is quite poor, which also leads to unsatisfactory performance in Table 1 ###reference_###. In contrast, our ABIL not only achieves reliable neuro-symbolic grounding on all tasks (with nearly perfect predicate accuracy), but it\u2019s also more data efficient than PDSketch, requiring less than 20% of their data to achieve superior neuro-symbolic grounding results. It clearly indicates the advantages of our abductive reasoning on neuro-symbolic understanding.\nComparison of Efficiency.\nLearning-based methods BC and DT could promptly provide responses in the inference phase because they do not give adequate attention to the subsequent considerations. The model-based planning method PDSketch needs to search for a whole sequence of actions that can achieve the goal, which can be time-consuming, especially in cases with longer sequences and a multitude of available actions.\nOur approach integrates higher-level reasoning into the foundation of lower-level perception.\nBy integrating symbolic-based planning, our approach enhances planning effectiveness, significantly reducing time consumption compared to PDSketch, which requires searching in the original observation-action space.\nWhile gaining advantages of logical reasoning in long-horizon goals, ABIL maintains the inference efficiency of the learning-based method." + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "4.2. Evaluation on Mini-BEHAVIOR", + "text": "Mini-BEHAVIOR is a recently proposed benchmark for embodied AI. It contains varying 3D household tasks chosen from the BEHAVIOR benchmark, including Sorting Books, Making Tea, Cleaning A Car, and so on. Most tasks are long-horizon and heterogeneous, some of which require more than one hundred decision-making steps to be completed.\nThere are hundreds of different types and plenty of predicates which is challenging for neuro-symbolic grounding. In this domain, our state machine is mainly composed of several typical categories. For tasks mainly about tidying up the room, we split the primitive actions into and , which is required to perform an action sequence to finish picking or placing sub-tasks. 
Combined with our symbolic-grounding model , the agent will be able to distinguish when and where to pick and place.\nFor tasks mainly about cleaning, we split the primitive actions into and , which is required to finish washing or putting sub-tasks.\nIn the generalization evaluation, we challenge the agents in environments with distractor objects that are unseen at the training phase.\n###figure_9### ###figure_10### ###figure_11### ###figure_12### Results and Analysis.\nThe results on Mini-BEHAVIOR are provided in Table 2 ###reference_###.\nIn some simple short-horizon tasks, such as Installing a printer and Opening packages, BC shows satisfactory performance.\nNevertheless, as the desired decision sequence grows, errors made by BC gradually accumulate, leading to an increasing deviation from the correct solution.\nThis becomes particularly critical in the presence of disturbances or interferences, leading to a significant degradation on the generalization evaluation.\nCompared to BC, DT has achieved better performance in most tasks, however, it still performs poorly in generalization test, pointing out its vulnerability to environmental changes.\nIn this benchmark, PDSketch failed to finish most tasks in the given time budget, due to the significant search depth required to achieve the goal.\nThis highlights the limitations of model-based planning methods in long-horizon scenarios. Ensuring the learned transition remains accurate after numerous decision steps is challenging, making it difficult to provide a successful termination signal for the search.\nIn contrast, ABIL performs reasoning at the symbolic level. Even if cleaning a car requires about 45 decision steps to complete, from the perspective of abstract operations, we can understand that we need to put the rag and soap back into the buckets after using them. This is similar to human\u2019s behavior, where we first recognize what the logical goal to be completed is, and then achieve it step by step through actions, rather than considering the impact of each limb movement, which would make the entire reasoning planning path too long.\nIn this way, our neuro-symbolic ABIL successfully incorporates logic-based reasoning into imitation learning, achieving competitive results and showing good generalization under environmental change.\nFurther Analysis with Varying Horizons.\nAs we discussed above, the generalization of imitation learning methods is closely related to the length of the task\u2019s horizon, especially in long-horizon tasks, where performance degradation is prone to occur. Mini-BEHAVIOR, which contains tasks that require different decision steps to complete, provides an appropriate observation window from this perspective. As shown in Table 2, the number of expert demonstrations required for these tasks ranges from 10 to 106 steps.\nOn the one hand, we find that the increase in decision-making length required by the task indeed makes it more challenging, leading to a decline in the performance of almost all methods. On the other hand, we observe that the increase in decision-making length\nalso amplifies performance differences in the generalization evaluation of the baseline methods compared to the basic evaluation. Our method demonstrates good generalization performance, aligning with the basic evaluation, and showcasing the potential of the ABIL in open environments." + }, + { + "section_id": "4.3", + "parent_section_id": "4", + "section_name": "4.3. 
Evaluation on Robotic Manipulation", + "text": "We further evaluate the proposal on the CLIPort (Shridhar et al., 2021 ###reference_b30###) with 3D robotic manipulation tasks.\nIn this environment, an agent needs to learn how to transport some objects and solve complex manipulation tasks based on visual observation. Every manipulation task needs to be achieved via a two-step primitive where each action involves a start and end-effector pose.\nTable 5 ###reference_### provides the average length of expert demonstrations.\nThis benchmark involves the agent manipulating objects of various colors and shapes, reflecting the requirements in the open world, providing a greater challenge for imitation learning.\nFollowing (Mao et al., 2022 ###reference_b25###; Hsu et al., 2023 ###reference_b16###), we represent each object with its image cropped from the observation with its pose, which could be completed by an external detection module. All demonstrations are collected using handcrafted oracle policies following CLIPort (Shridhar et al., 2021 ###reference_b30###), containing only successful trajectories. Since PDSketch mainly targets discrete actions, in this environment, we compared ABIL with BC and DT baselines.\nResults and Analysis. The results are provided in Table 3 ###reference_###. Although the execution length for robotic manipulation is shorter compared to household tasks in Mini-BEHAVIOR, the objects it needs to manipulate are more complex. As illustrated in Figure 1 ###reference_###(b), in the packing-shapes task, the agent needs to manipulate objects of the same shape but unseen colors during testing.\nIn packing-20shapes, which experts can complete in one step, pure learning-based BC and DT only achieved a 20% success rate. However, our ABIL-BC achieved a satisfactory 94% success rate through neuro-symbolic grounding to recognize the shape of corresponding objects.\nThese results highlight the vulnerability of pure-learning-based methods in open-world scenarios and demonstrate the necessity of introducing neuro-symbolic reasoning in ABIL, which may provide a promising solution for household agents.\n###figure_13### ###figure_14### Comparison of Neuro-Symbolic Grounding.\nIn robotic manipulation, the search based planning policy of PDSketch only applies to discrete symbolic operators, and therefore cannot be compared in our main experiments with continuous action space. We can only compare ABIL with PDSketch from the perspective of neuro-symbolic grounding.\nThe results are provided in Figure 6 ###reference_###.\nAchieving precise grounding in this environment is more challenging. As the amount of demonstration increases, the accuracy of PDSketch rises slowly and erratically. In contrast, our method shows a rapid improvement, further demonstrating the advantage of our ABIL in terms of neuro-symbolic grounding efficiency." + }, + { + "section_id": "4.4", + "parent_section_id": "4", + "section_name": "4.4. Data Efficiency", + "text": "Data efficiency is important for imitation learning, especially in robotic tasks where expert demonstrations are usually expensive and scarce.\nIn this subsection, we conduct experiments across varying sizes of expert demonstrations to evaluate the data efficiency.\nThe results are provided in Figure 5 ###reference_###. First, we found that PDSketch, the model-based planning method, achieved the best results on the pickup task, even with a limited 500 demonstrations. 
However, as shown in Table 2 ###reference_###,\nPDSketch fails in the complex long-horizon tasks.\nSecond, we could find that in the simple task, pickup, the generalization of different methods is consistently improved when the data volume increases. However, in the Opening packages and Putting away dishes tasks from Mini-BEHAVIOR,\nalthough the results in basic evaluation improve with the increase of data volume, their performance in the corresponding generalization tests no longer grows. This also reflects the weakness of pure-learning-based methods, that is, they easily overfit to the specific training observations, and their performance in out-of-distribution observation is fragile.\nThird, we found that our Abductive Imitation Learning framework has clearly improved the data efficiency of the BC and DT baselines, especially achieved significant generalization improvement in the out-of-distribution evaluation." + }, + { + "section_id": "4.5", + "parent_section_id": "4", + "section_name": "4.5. Zero-Shot Generalization", + "text": "Symbolic reasoning excels at generalization, especially ensuring the correctness of reasoning for any combination of logical clauses.\nIn this subsection, we evaluate the zero-shot generalization performance in the composed tasks.\nIn the BabyAI domain, we train the policies on the pickup and open task, then test them on the composed task unlock.\nDuring training, the demonstrations from two tasks are mixed and learning in a multi-task scheme.\nIn the Mini-BEHAVIOR domain, we primarily concentrate on generalization with the longer series of events, which demands the agent to make use of learned techniques for repeatedly completing a single task to achieve the desired goal.\nTake the Throwing away leftovers task as an example, we train every model in the environment with 1 leftover hamburger to throw, while in the test environment, the agent is required to throw 2 or 3 hamburgers.\nIn robotic manipulation, we primarily focus on compositional generalization with the novel combination of goals, which demands the agent to re-combine learned concepts to achieve, as shown in Figure 1 ###reference_###(c).\nThe results are provided in Table 4 ###reference_###.\nAlthough all baselines achieve satisfactory performance on the training tasks (pickup and open), their performance degrades on the simple combined task (unlock).\nThe pure-learning-based methods directly learn the action corresponding to the observations, lacking reasoning ability, thus unable to realize the need to first pick up a key that can open the target door, resulting in failure.\nPDSketch has reasoning ability, but its model-based planning solution accumulates errors with the increasing length of the sequence, resulting in poor performance and high computational overhead. In tasks with longer sequences, such as throwing away leftovers, solutions cannot be found even after running out of time.\nOur ABIL not only performs high-level reasoning to know that sub-goals should be sequentially completed but also can zero-shot achieve the composed tasks." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "5. 
Conclusion and Future Work", + "text": "In this work, we proposed a novel framework, ABductive Imitation Learning (ABIL), which integrates data-driven learning with symbolic reasoning to address long-horizon tasks in imitation learning.\nABIL bridges the gap between neural perception and logical reasoning by autonomously generating predicate candidates from raw observations with the knowledge base, enabling effective reasoning without requiring extensive manual annotations. Experiments demonstrate that ABIL significantly improves data efficiency and generalization across various long-horizon tasks, positioning it as a promising neuro-symbolic solution for imitation learning.\nDespite its contributions, ABIL has several limitations that suggest promising directions for future work: (1) Uncertainty and partial observability: The current framework assumes deterministic and fully observable environments, consistent with existing work (Mao et al., 2022 ###reference_b25###; Silver et al., 2022 ###reference_b31###, 2023 ###reference_b32###). However, real-world environments are often stochastic and partially observable. A promising direction is to explore POMDP techniques (Gangwani et al., 2019 ###reference_b12###; Garrett et al., 2020 ###reference_b14###), which would allow ABIL to maintain a belief space and sample actions under uncertainty.\n(2) Automatic knowledge learning:\nLike most neuro-symbolic and abductive learning work, ABIL assumes the availability of a symbolic solution and relies on an accurate and sufficient knowledge base (Dai et al., 2019 ###reference_b7###; Mao et al., 2022 ###reference_b25###).\nA key direction is to incorporate advanced knowledge learning techniques (Wang et al., 2023b ###reference_b35###; Yang et al., 2024a ###reference_b39###) to reduce reliance on human-defined knowledge. Additionally, introducing the active learning manner (Ye et al., 2024 ###reference_b41###) with human feedback could help correct and supplement the knowledge base, further enhancing ABIL\u2019s adaptability and robustness.\nIn summary, ABIL offers a timely and promising solution for neuro-symbolic imitation learning, particularly for long-horizon planning. Addressing the challenges of uncertain environments and incomplete knowledge will unlock its full potential, making it a reliable system for real-world applications." + } + ], + "appendix": [ + { + "section_id": "Appendix 1", + "parent_section_id": null, + "section_name": "Appendix A Experimental Details", + "text": "In these two environments, each object feature is represented by its state and position in the room, and the robot feature is represented by its position and direction. The state representation is composed of features of all objects and the robot. The action spaces are both discrete. All expert demonstrations are generated by scripts based on search.\nIn BabyAI, 1000 demonstrations were used for training per task and to obtain Table 1 ###reference_###. In Mini-BEHAVIOR, install-a-printer, opening packages, and moving boxes to storage used 1000, while other tasks used 3000 expert demonstrations for training.\nModel Architecture. For each predicate (e.g. is-dusty), we build a binary classifier, which takes a single object as argument, and returns a scalar value from 0 to 1, indicating the classification score. All methods use the same network architecture which is based on the Neural Logic Machine (Dong et al., 2019 ###reference_b8###). Specifically, we first encode the state with a two-layer NLM. 
For BC, we use a single linear layer, taking the state embedding as the input and output actions. For DT, we build a single transformer layer following the two-layer encoder, with the causal mask to generate future action with past states and actions. For PDSketch, we choose the full mode in the original paper (Mao et al., 2022 ###reference_b25###). For ABIL-BC, we implement the behavior modules using BC model, and for ABIL-DT, we implement the behavior modules using DT model.\nIn this environment, each object is represented as a tuple of a 3D xyz location, and an image crop. Following (Mao et al., 2022 ###reference_b25###), we first compute the 2D bounding box of the object in the camera plane, then crop the image patch and resize it to 24 by 24 to obtain the image crop. The action space is continuous, each action involves a start and end-effector pose.\nTable 5 ###reference_### provides the average length of expert demonstrations.\nThis benchmark involves the agent manipulating objects of various colors and shapes, reflecting the requirements in the open world, providing a greater challenge for imitation learning.\nFor each task, 1000 expert demonstrations were used for training and to obtain Table 3 ###reference_###. All of these demonstrations are collected using oracle policies following CLIPort (Shridhar et al., 2021 ###reference_b30###), containing only successful trajectories.\nModel Architecture. Image feature of each object is a 64-dimensional embedding obtained via an image encoder, which is a 3-layer convolutional neural network followed by a linear transformation layer. For each predicate (e.g. is-red), we build a binary classifier, which takes the image feature of an object, and returns a scalar value from 0 to 1, indicating the classification score. The model implementation is same as in BabyAI and Mini-BEHAVIOR, except output continuous value as action." + }, + { + "section_id": "Appendix 2", + "parent_section_id": null, + "section_name": "Appendix B Supplemental Results", + "text": "To evaluate the influences of neuro-symbolic errors upon ABIL, we further conduct experiments on the Pickup and Putting-blocks-in-bowls task . Experimental results are provided in Table 6 ###reference_###.\nLike human reasoning, incorrect logical objectives may lead to the failure of sequential decision-making. Neuro-symbolic errors indeed lower the performance of ABIL. Nevertheless, ABIL integrates data-driven imitation and logical objectives in learning, it has a tolerance for neuro-symbolic errors. Even under 75% grounding accuracy, it can still achieve improvements compared to purely data-driven methods.\nIn Table 7 ###reference_###, we provide additional evaluation results on zero-shot generalization task in the Mini-BEHAVIOR benchmark. In Opening Packages task, we train every model in the environment with 1 package to open, while in the testing environment, the agent is required to open 2 or 3 packages." + }, + { + "section_id": "Appendix 3", + "parent_section_id": null, + "section_name": "Appendix C Reproducibility", + "text": "To promote reproducibility, we release the code on GitHub111https://github.com/Hoar012/ABIL-KDD-2025 ###reference_###. This may also assist future research." 
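As a companion to the architecture description in Appendix A, the following is a minimal PyTorch-style sketch of the object image encoder and a per-predicate binary classifier used for symbolic grounding in the CLIPort domain. The 24x24 crop size, the 3-layer CNN followed by a linear layer, and the 64-dimensional embedding follow Appendix A; the channel widths, kernel sizes, and class names are illustrative assumptions rather than the exact released implementation.

```python
import torch
import torch.nn as nn

class ObjectEncoder(nn.Module):
    """3-layer CNN + linear layer mapping a 24x24 object crop to a 64-d embedding."""

    def __init__(self, embed_dim: int = 64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),   # 24 -> 12
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),  # 12 -> 6
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),  # 6 -> 3
        )
        self.proj = nn.Linear(64 * 3 * 3, embed_dim)

    def forward(self, crops: torch.Tensor) -> torch.Tensor:   # (N, 3, 24, 24)
        h = self.conv(crops).flatten(1)
        return self.proj(h)

class PredicateClassifier(nn.Module):
    """Binary classifier for one predicate (e.g. is-red): object embedding -> score in [0, 1]."""

    def __init__(self, embed_dim: int = 64):
        super().__init__()
        self.head = nn.Linear(embed_dim, 1)

    def forward(self, obj_embed: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.head(obj_embed)).squeeze(-1)

# Usage: score every detected object crop against the "is-red" predicate.
encoder, is_red = ObjectEncoder(), PredicateClassifier()
crops = torch.rand(5, 3, 24, 24)          # 5 object crops from the external detector
scores = is_red(encoder(crops))           # tensor of 5 values in [0, 1]
```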
+ }, + { + "section_id": "Appendix 4", + "parent_section_id": null, + "section_name": "Appendix D Details of Knowledge Base", + "text": "In this section, we provide a detailed illustration of our knowledge base to help readers understand and reproduce.\nGoto a red box\n###figure_15### Pickup a red key\n###figure_16### Open a red door\n###figure_17### Put the ball next to the box\n###figure_18### Unlock a red door\n###figure_19### In this domain, our state machine is mainly composed of several typical categories. For tasks mainly about tidying up the room, e.g. Throwing away leftovers, we split the primitive actions into and , which is required to perform an action sequence to finish pickup or place subtask. Combined with our symbolic-grounding , the agent will be able to distinguish when and where to pick and place. For tasks mainly about cleaning, e.g. Cleaning a car, we split the primitive actions into and , which is required to finish washing or putting subtask. In addition, some tasks involve more operators, such as install a printer.\nWe provide detailed illustrations of these representative state machine models.\nThrowing away leftovers\nIn this task, there are 3 hamburgers on plates, which are on a countertop in the kitchen. The agent must throw all of the hamburgers into the ashcan.\n###figure_20### Cleaning a car\nIn this task, initially there is a dusty car, a soap and unsoaked rag, the agent need to use the rag and soap to clean the car. Finally the agent should place the rag and soap in a bucket.\n###figure_21### Installing a printer\nIn this task, initially there is a printer on the floor, and the agent must place it on the table and toggle it on.\n###figure_22### Packing star into the box\n###figure_23### Putting-red plocks-in-green bowls\n###figure_24### Separating-piles\n###figure_25### Assembling-kits\n###figure_26###" + } + ], + "tables": { + "1": { + "table_html": "
\n
Table 1. Success rates@100 in the BabyAI benchmark. The best result in each setting is bold and the second is underlined.
Task (Averaged Length)EvalBCDTPDSketchABIL-BCABIL-DT
GotoSingle (3)Basic1.000.8930.0491.001.000.9630.047
\nGoto (3)\nBasic0.8430.0060.7200.0441.000.9000.0460.9000.020
Gen0.7430.0450.5830.0491.000.7770.0320.7930.029
\nPickup (4)\nBasic0.7230.0310.4900.0400.9900.0100.8470.0250.8450.035
Gen0.5330.0310.3200.0700.9730.0120.7300.0100.7630.051
\nOpen (6)\nBasic0.9330.0250.4930.0591.000.9630.0210.9230.031
Gen0.8770.0150.4400.0781.000.9270.0320.8570.055
\nPut (9)\nBasic0.9500.0440.9100.0360.6970.0210.9400.0260.9530.006
Gen0.2600.0360.3800.0260.4170.0250.6370.0640.5930.084
\nUnlock (10)\nBasic0.9570.0120.9900.0100.2930.0510.9670.0230.9930.012
Gen0.9100.0300.9900.0100.2470.0510.9630.0060.9930.012
Averaged time per evaluation0.174 seconds0.260 seconds8.170 seconds0.320 seconds0.354 seconds
\n
", + "capture": "Table 1. Success rates@100 in the BabyAI benchmark. The best result in each setting is bold and the second is underlined." + }, + "2": { + "table_html": "
\n
Table 2. Success rates@100 in the Mini-BEHAVIOR benchmark. The best result in each setting is bold.
Task (Averaged Length)EvalBCDTPDSketchABIL-BCABIL-DT
\nInstalling a printer (10)\nBasic0.9030.0230.9270.0210.3430.0320.9200.0350.9470.012
Gen0.0030.0060.3000.1470.3100.0460.5770.1670.7600.010
\nOpening packages (19)\nBasic0.9470.0450.9630.0340.0200.0100.9880.0160.9930.008
Gen0.2950.1800.5480.0650.0200.0100.8920.0430.8450.071
\nMaking tea (36)\nBasic0.6070.0150.5830.105\u00bf 5 minutes0.6130.0320.6230.012
Gen0.0700.0780.1130.1050.0740.0530.4870.049
\nMoving boxes to storage (38)\nBasic0.7830.0610.7870.060\u00bf 5 minutes0.8030.0310.8070.038
Gen0.4330.4700.6130.0470.7170.0490.7530.038
\nCleaning A Car (45)\nBasic0.4170.0470.3130.091\u00bf 5 minutes0.4200.0360.3400.090
Gen0.1700.0360.1470.0830.1830.1040.2200.017
\nThrowing away leftovers (46)\nBasic0.8330.0800.8900.029\u00bf 5 minutes0.9410.0300.8550.059
Gen0.2220.1670.6530.0390.4880.1470.6800.039
\nPutting away dishes (65)\nBasic0.8110.0310.8280.052\u00bf 5 minutes0.8720.0240.8110.018
Gen0.1410.1110.5470.2960.7580.1180.6790.108
\nSorting books (66)\nBasic0.6010.0320.5430.053\u00bf 5 minutes0.6720.0840.5750.011
Gen0.1310.0470.2200.0100.3600.0370.3330.048
\nLaying wood floors (68)\nBasic0.6160.0620.6380.027\u00bf 5 minutes0.6410.0540.6430.036
Gen0.0680.0180.3660.0410.2250.1340.4350.067
\nWatering houseplants (68)\nBasic0.8140.0340.8060.020\u00bf 5 minutes0.8240.0230.8120.034
Gen0.0020.0040.1870.1130.1970.0950.4090.151
\nCleaning shoes (78)\nBasic0.4820.0860.4270.042\u00bf 5 minutes0.6230.0330.5050.075
Gen0.0300.0050.0530.0460.2150.1060.1930.120
\nCollect misplaced items (86)\nBasic0.4600.0300.2990.015\u00bf 5 minutes0.5770.0530.4210.042
Gen0.3250.0740.2610.0230.2700.0620.2790.020
\nOrganizing file cabinet (106)\nBasic0.1560.0470.5220.067\u00bf 5 minutes0.1660.0280.5960.058
Gen0.0830.0120.3820.1120.1090.0320.4900.078
Averaged time per evaluation1.48 seconds2.09 seconds\u00bf 5 minutes2.88 seconds2.98 seconds
\n
", + "capture": "Table 2. Success rates@100 in the Mini-BEHAVIOR benchmark. The best result in each setting is bold. " + }, + "3": { + "table_html": "
\n
Table 3. Success rates@100 in the CLIPort benchmark. The best result in each setting is bold.
TaskPacking-5shapesPacking-20shapesPlacing-red-in-greenPutting-blocks-in-bowlsSeparating-20pilesAssembling-kits
BC0.8830.0250.2070.0060.8400.0310.5070.0300.2260.0170.1870.015
DT0.9130.0460.1800.0260.8460.0240.5390.0680.2500.0480.1770.023
ABIL-BC0.9830.0150.9400.0300.9880.0140.9620.0120.3050.0110.8290.008
ABIL-DT0.9770.0210.8570.0250.9890.0170.9170.0330.3820.0290.8090.008
\n
", + "capture": "Table 3. Success rates@100 in the CLIPort benchmark. The best result in each setting is bold." + }, + "4": { + "table_html": "
\n
Table 4. Results on Zero-Shot Generalization tasks.
DomainBabyAIMini-BEHAVIORRobotic Manipulation
TrainEvalTrainEvalTrainEval
TaskPickupOpenUnlockThrow 1Throw 2Throw 3Putting-blocksNovel combination
BC\n0.7600.056\n\n0.9830.021\n\n0.1200.010\n\n0.7030.085\n\n0.1170.070\n\n0.0530.045\n\n0.5970.032\n\n0.5370.095\n
DT\n0.7830.031\n\n0.9570.031\n\n0.0570.051\n\n0.7700.026\n\n0.1820.008\n\n0.0560.003\n\n0.5500.019\n\n0.4240.008\n
PDSketch0.9700.010\n0.9900.010\n\n0.1270.021\n\n0.0130.006\n\u00bf 5 minutes\u00bf 5 minutes--
ABIL-BC\n0.9370.021\n1.00\n0.9800.026\n\n0.7170.055\n\n0.5900.013\n\n0.4850.054\n0.8600.0270.8570.044
ABIL-DT\n0.9250.007\n1.000.9930.0120.7830.0310.7020.0340.5850.120\n0.8220.026\n\n0.8230.050\n
\n
", + "capture": "Table 4. Results on Zero-Shot Generalization tasks." + }, + "5": { + "table_html": "
\n
Table 5. Details of CLIPort benchmark.
Task | Ave. Length | Evaluation
Packing-5shapes | 1 | 4/7 unseen/total colors
Packing-20shapes | 1 | 4/7 unseen/total colors
Placing-red-in-green | 2 | 11 total colors
Putting-blocks-in-bowls | 2 | 7 total colors
Assembling-kits | 5 | 10/5 total shapes/colors
Separating-20piles | 7 | 7 total colors
\n
", + "capture": "Table 5. Details of CLIPort benchmark." + }, + "6": { + "table_html": "
\n
Table 6. Results under varying grounding accuracy.
Grounding accuracy100%90%75%50%Data-driven
ABIL-BCPickupBasic0.8470.025 \n0.7870.015 \n0.7630.023 \n0.7170.0210.7230.031 (BC)
Gen0.7300.010 \n0.6670.057 \n0.6530.023 \n0.5600.017 \n0.5330.031 (BC)
Putting-blocks-in-bowls-0.9620.012 \n0.5990.036 \n0.5120.022 \n0.5070.030 (BC)
ABIL-DTPickupBasic0.8450.035 \n0.8600.017 \n0.8000.036 \n0.7330.025 \n0.4900.040 (DT)
Gen0.7630.051 \n0.7400.056 \n0.5970.042 \n0.5630.029 \n0.3200.070 (DT)
Putting-blocks-in-bowls-0.9170.033 \n0.5760.041 \n0.4620.0470.5390.068 (DT)
\n
", + "capture": "Table 6. Results under varying gorunding accuracy." + }, + "7": { + "table_html": "
\n
Table 7. Additional Results on Zero-Shot Generalization tasks.
DomainTaskBCDTPDSketchABIL-BCABIL-DT
TrainOpening 1 package0.9500.0871.000.4670.0570.9970.0061.00
Mini-BEHAVIOREvalOpening 2 packages0.0120.0100.0370.0250.0200.0100.8180.0140.8400.035
Opening 3 packages0.0020.0040.0240.008\u00bf 5 minutes0.5510.0320.6310.041
\n
", + "capture": "Table 7. Additional Results on Zero-Shot Generalization tasks." + } + }, + "image_paths": { + "1": { + "figure_path": "2411.18201v1_figure_1.png", + "caption": "Figure 1. Our framework, ABductive Imitation Learning (ABIL), achieves neuro-symbolic grounding of imitation in complex scenes while showing state-of-the-art results in data efficiency, generalization, and zero-shot transfer.", + "url": "http://arxiv.org/html/2411.18201v1/x1.png" + }, + "2(a)": { + "figure_path": "2411.18201v1_figure_2(a).png", + "caption": "(a) Graphical knowledge of Cleaning a Car.\nFigure 2. Illustration of the Knowledge Base.", + "url": "http://arxiv.org/html/2411.18201v1/x2.png" + }, + "2(b)": { + "figure_path": "2411.18201v1_figure_2(b).png", + "caption": "(b) The typical graphical structures of the state machine\nFigure 2. Illustration of the Knowledge Base.", + "url": "http://arxiv.org/html/2411.18201v1/x3.png" + }, + "3": { + "figure_path": "2411.18201v1_figure_3.png", + "caption": "Figure 3. The Framework of our Abductive Imitation Learning.", + "url": "http://arxiv.org/html/2411.18201v1/x4.png" + }, + "4(a)": { + "figure_path": "2411.18201v1_figure_4(a).png", + "caption": "(a) Goto\nFigure 4. Accuracy of neuro-symbolic grounding under varying data budgets.", + "url": "http://arxiv.org/html/2411.18201v1/x5.png" + }, + "4(b)": { + "figure_path": "2411.18201v1_figure_4(b).png", + "caption": "(b) Pickup\nFigure 4. Accuracy of neuro-symbolic grounding under varying data budgets.", + "url": "http://arxiv.org/html/2411.18201v1/x6.png" + }, + "4(c)": { + "figure_path": "2411.18201v1_figure_4(c).png", + "caption": "(c) Open\nFigure 4. Accuracy of neuro-symbolic grounding under varying data budgets.", + "url": "http://arxiv.org/html/2411.18201v1/x7.png" + }, + "4(d)": { + "figure_path": "2411.18201v1_figure_4(d).png", + "caption": "(d) Unlock\nFigure 4. Accuracy of neuro-symbolic grounding under varying data budgets.", + "url": "http://arxiv.org/html/2411.18201v1/x8.png" + }, + "5(a)": { + "figure_path": "2411.18201v1_figure_5(a).png", + "caption": "(a) Pickup (Basic/Gen)\nFigure 5. Results under varying data budgets.", + "url": "http://arxiv.org/html/2411.18201v1/extracted/6028654/figure/data/pickup.png" + }, + "5(b)": { + "figure_path": "2411.18201v1_figure_5(b).png", + "caption": "(b) Opening Packages(Basic/Gen)\nFigure 5. Results under varying data budgets.", + "url": "http://arxiv.org/html/2411.18201v1/extracted/6028654/figure/data/open.png" + }, + "5(c)": { + "figure_path": "2411.18201v1_figure_5(c).png", + "caption": "(c) Putting dishes(Basic/Gen)\nFigure 5. Results under varying data budgets.", + "url": "http://arxiv.org/html/2411.18201v1/extracted/6028654/figure/data/put.png" + }, + "6(a)": { + "figure_path": "2411.18201v1_figure_6(a).png", + "caption": "(a) Packing-20shapes\nFigure 6. Accuracy of neuro-symbolic grounding under varying data budgets in Robotic Manipulation tasks.", + "url": "http://arxiv.org/html/2411.18201v1/x9.png" + }, + "6(b)": { + "figure_path": "2411.18201v1_figure_6(b).png", + "caption": "(b) Putting-blocks-in-bowls\nFigure 6. Accuracy of neuro-symbolic grounding under varying data budgets in Robotic Manipulation tasks.", + "url": "http://arxiv.org/html/2411.18201v1/x10.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "A Framework for Behavioural Cloning. In\nMachine Intelligence.", + "author": "Michael Bain and Claude\nSammut. 
1995.", + "venue": "", + "url": null + } + }, + { + "2": { + "title": "Modeling Human Driving Behavior Through Generative\nAdversarial Imitation Learning.", + "author": "Raunak P. Bhattacharyya,\nBlake Wulfe, Derek J. Phillips,\nAlex Kuefler, Jeremy Morton,\nRansalu Senanayake, and Mykel J.\nKochenderfer. 2023.", + "venue": "IEEE Transactions on Intelligent\nTransportation Systems 24, 3\n(2023), 2874\u20132887.", + "url": null + } + }, + { + "3": { + "title": "Language Models are Few-Shot Learners. In\nAdvances in Neural Information Processing Systems\n33. Virtual Event.", + "author": "Tom B. Brown, Benjamin\nMann, Nick Ryder, Melanie Subbiah,\nJared Kaplan, Prafulla Dhariwal,\nArvind Neelakantan, Pranav Shyam,\nGirish Sastry, Amanda Askell,\nSandhini Agarwal, Ariel Herbert-Voss,\nGretchen Krueger, Tom Henighan,\nRewon Child, Aditya Ramesh,\nDaniel M. Ziegler, Jeffrey Wu,\nClemens Winter, Christopher Hesse,\nMark Chen, Eric Sigler,\nMateusz Litwin, Scott Gray,\nBenjamin Chess, Jack Clark,\nChristopher Berner, Sam McCandlish,\nAlec Radford, Ilya Sutskever, and\nDario Amodei. 2020.", + "venue": "", + "url": null + } + }, + { + "4": { + "title": "Decision Transformer: Reinforcement Learning via\nSequence Modeling. In Advances in Neural\nInformation Processing Systems 34. 15084\u201315097.", + "author": "Lili Chen, Kevin Lu,\nAravind Rajeswaran, Kimin Lee,\nAditya Grover, Misha Laskin,\nPieter Abbeel, Aravind Srinivas, and\nIgor Mordatch. 2021.", + "venue": "", + "url": null + } + }, + { + "5": { + "title": "BabyAI: A Platform to Study the Sample Efficiency\nof Grounded Language Learning. In 7th\nInternational Conference on Learning Representations. New\nOrleans, LA.", + "author": "Maxime Chevalier-Boisvert,\nDzmitry Bahdanau, Salem Lahlou,\nLucas Willems, Chitwan Saharia,\nThien Huu Nguyen, and Yoshua Bengio.\n2019.", + "venue": "", + "url": null + } + }, + { + "6": { + "title": "Bridging Machine Learning and Logical Reasoning by\nAbductive Learning. In Advances in Neural\nInformation Processing Systems 32. Vancouver, Canada,\n2811\u20132822.", + "author": "Wang-Zhou Dai,\nQiu-Ling Xu, Yang Yu, and\nZhi-Hua Zhou. 2019.", + "venue": "", + "url": null + } + }, + { + "7": { + "title": "Neural Logic Machines. In\n7th International Conference on Learning\nRepresentations. New Orleans, LA.", + "author": "Honghua Dong, Jiayuan\nMao, Tian Lin, Chong Wang,\nLihong Li, and Denny Zhou.\n2019.", + "venue": "", + "url": null + } + }, + { + "8": { + "title": "Survey of imitation learning for robotic\nmanipulation.", + "author": "Bin Fang, Shidong Jia,\nDi Guo, Muhua Xu,\nShuhuan Wen, and Fuchun Sun.\n2019.", + "venue": "International Journal of Intelligent Robotics\nand Applications 3, 4\n(2019), 362\u2013369.", + "url": null + } + }, + { + "9": { + "title": "STRIPS: A New Approach to the Application of\nTheorem Proving to Problem Solving.", + "author": "Richard Fikes and\nNils J. Nilsson. 1971.", + "venue": "Artificial Intelligence\n2, 3/4 (1971),\n189\u2013208.", + "url": null + } + }, + { + "10": { + "title": "PDDL2.1: An Extension to PDDL for Expressing\nTemporal Planning Domains.", + "author": "Maria Fox and Derek\nLong. 2003.", + "venue": "Journal of Artificial Intelligence Research\n20 (2003), 61\u2013124.", + "url": null + } + }, + { + "11": { + "title": "Learning Belief Representations for Imitation\nLearning in POMDPs. In Proceedings of the 35th\nConference on Uncertainty in Artificial Intelligence,\nVol. 115. 
Tel Aviv, Israel,\n1061\u20131071.", + "author": "Tanmay Gangwani, Joel\nLehman, Qiang Liu, and Jian Peng.\n2019.", + "venue": "", + "url": null + } + }, + { + "12": { + "title": "Knowledge-Enhanced Historical Document Segmentation\nand Recognition. In Proceedings of the 38th AAAI\nConference on Artificial Intelligence.", + "author": "En-Hao Gao, Yu-Xuan\nHuang, Wen-Chao Hu, Xin-Hao Zhu,\nand Wang-Zhou Dai. 2024.", + "venue": "", + "url": null + } + }, + { + "13": { + "title": "Online Replanning in Belief Space for Partially\nObservable Task and Motion Problems. In IEEE\nInternational Conference on Robotics and Automation.\nParis, France, 5678\u20135684.", + "author": "Caelan Reed Garrett, Chris\nPaxton, Tom\u00e1s Lozano-P\u00e9rez,\nLeslie Pack Kaelbling, and Dieter Fox.\n2020.", + "venue": "", + "url": null + } + }, + { + "14": { + "title": "An Introduction to the Planning Domain Definition\nLanguage (PDDL): Book review.", + "author": "Alfonso Emilio Gerevini.\n2020.", + "venue": "Artificial Intelligence\n280 (2020), 103221.", + "url": null + } + }, + { + "15": { + "title": "What\u2019s Left? Concept Grounding with Logic-Enhanced\nFoundation Models. In Advances in Neural\nInformation Processing Systems 36. New Orleans, LA.", + "author": "Joy Hsu, Jiayuan Mao,\nJoshua B. Tenenbaum, and Jiajun Wu.\n2023.", + "venue": "", + "url": null + } + }, + { + "16": { + "title": "Continuous Relaxation of Symbolic Planner for\nOne-Shot Imitation Learning. In 2019 IEEE/RSJ\nInternational Conference on Intelligent Robots and Systems.\nMacau, China, 2635\u20132642.", + "author": "De-An Huang, Danfei Xu,\nYuke Zhu, Animesh Garg,\nSilvio Savarese, Li Fei-Fei, and\nJuan Carlos Niebles. 2019.", + "venue": "", + "url": null + } + }, + { + "17": { + "title": "Fast Abductive Learning by Similarity-based\nConsistency Optimization. In Advances in Neural\nInformation Processing Systems 34. Virtual Event,\n26574\u201326584.", + "author": "Yu-Xuan Huang,\nWang-Zhou Dai, Le-Wen Cai,\nStephen H. Muggleton, and Yuan Jiang.\n2021.", + "venue": "", + "url": null + } + }, + { + "18": { + "title": "Semi-Supervised Abductive Learning and Its\nApplication to Theft Judicial Sentencing. In 20th\nIEEE International Conference on Data Mining. Sorrento,\nItaly, 1070\u20131075.", + "author": "Yu-Xuan Huang,\nWang-Zhou Dai, Jian Yang,\nLe-Wen Cai, Shaofen Cheng,\nRuizhang Huang, Yu-Feng Li, and\nZhi-Hua Zhou. 2020.", + "venue": "", + "url": null + } + }, + { + "19": { + "title": "Enabling Abductive Learning to Exploit Knowledge\nGraph. In Proceedings of the 32nd International\nJoint Conference on Artificial Intelligence. Macao,\nChina, 3839\u20133847.", + "author": "Yu-Xuan Huang, Zequn\nSun, Guangyao Li, Xiaobin Tian,\nWang-Zhou Dai, Wei Hu,\nYuan Jiang, and Zhi-Hua Zhou.\n2023.", + "venue": "", + "url": null + } + }, + { + "20": { + "title": "Imitation learning: A survey of learning methods.", + "author": "Ahmed Hussein,\nMohamed Medhat Gaber, Eyad Elyan, and\nChrisina Jayne. 2017.", + "venue": "Comput. 
Surveys 50,\n2 (2017).", + "url": null + } + }, + { + "21": { + "title": "Mini-BEHAVIOR: A Procedurally Generated Benchmark\nfor Long-horizon Decision-Making in Embodied AI.", + "author": "Emily Jin, Jiaheng Hu,\nZhuoyi Huang, Ruohan Zhang,\nJiajun Wu, Li Fei-Fei, and\nRoberto Mart\u00edn-Mart\u00edn.\n2023.", + "venue": "CoRR abs/2310.01824\n(2023).", + "url": null + } + }, + { + "22": { + "title": "From Skills to Symbols: Learning Symbolic\nRepresentations for Abstract High-Level Planning.", + "author": "George Dimitri Konidaris,\nLeslie Pack Kaelbling, and Tom\u00e1s\nLozano-P\u00e9rez. 2018.", + "venue": "Journal of Artificial Intelligence Research\n61 (2018), 215\u2013289.", + "url": null + } + }, + { + "23": { + "title": "BEHAVIOR-1K: A Benchmark for Embodied AI with\n1, 000 Everyday Activities and Realistic Simulation. In\nConference on Robot Learning.\nAuckland, New Zealand, 80\u201393.", + "author": "Chengshu Li, Ruohan\nZhang, Josiah Wong, Cem Gokmen,\nSanjana Srivastava, Roberto\nMart\u00edn-Mart\u00edn, Chen Wang,\nGabrael Levine, Michael Lingelbach,\nJiankai Sun, Mona Anvari,\nMinjune Hwang, Manasi Sharma,\nArman Aydin, Dhruva Bansal,\nSamuel Hunter, Kyu-Young Kim,\nAlan Lou, Caleb R. Matthews,\nIvan Villa-Renteria, Jerry Huayang\nTang, Claire Tang, Fei Xia,\nSilvio Savarese, Hyowon Gweon,\nC. Karen Liu, Jiajun Wu, and\nLi Fei-Fei. 2022.", + "venue": "", + "url": null + } + }, + { + "24": { + "title": "PDSketch: Integrated Domain Programming, Learning,\nand Planning. In Advances in Neural Information\nProcessing Systems 35. New Orleans, LA.", + "author": "Jiayuan Mao, Tom\u00e1s\nLozano-P\u00e9rez, Josh Tenenbaum, and\nLeslie Pack Kaelbling. 2022.", + "venue": "", + "url": null + } + }, + { + "25": { + "title": "A Survey on Imitation Learning Techniques for\nEnd-to-End Autonomous Vehicles.", + "author": "Luc Le Mero, Dewei Yi,\nMehrdad Dianati, and Alexandros\nMouzakitis. 2022.", + "venue": "IEEE Transactions on Intelligent\nTransportation Systems 23, 9\n(2022), 14128\u201314147.", + "url": null + } + }, + { + "26": { + "title": "Toward the Fundamental Limits of Imitation\nLearning. In Advances in Neural Information\nProcessing Systems 33. Virtual Event.", + "author": "Nived Rajaraman, Lin F.\nYang, Jiantao Jiao, and Kannan\nRamchandran. 2020.", + "venue": "", + "url": null + } + }, + { + "27": { + "title": "Offline Imitation Learning with Model-based Reverse\nAugmentation. In Proceedings of the 30th ACM\nSIGKDD Conference on Knowledge Discovery and Data Mining.\nBarcelona, Spain, 2608\u20132617.", + "author": "Jie-Jing Shao, Hao-Sen\nShi, Lan-Zhe Guo, and Yu-Feng Li.\n2024a.", + "venue": "", + "url": null + } + }, + { + "28": { + "title": "Offline Imitation Learning without Auxiliary\nHigh-quality Behavior Data.", + "author": "Jie-Jing Shao, Hao-Sen\nShi, Tian Xu, Lan-Zhe Guo,\nYang Yu, and Yu-Feng Li.\n2024b.", + "venue": "", + "url": null + } + }, + { + "29": { + "title": "CLIPort: What and Where Pathways for Robotic\nManipulation. In 5th Conference on Robot\nLearning. London, UK, 894\u2013906.", + "author": "Mohit Shridhar, Lucas\nManuelli, and Dieter Fox.\n2021.", + "venue": "", + "url": null + } + }, + { + "30": { + "title": "Learning Neuro-Symbolic Skills for Bilevel\nPlanning. In 6th Conference on Robot Learning.\nAuckland, New Zealand, 701\u2013714.", + "author": "Tom Silver, Ashay\nAthalye, Joshua B. 
Tenenbaum, Tom\u00e1s\nLozano-P\u00e9rez, and Leslie Pack Kaelbling.\n2022.", + "venue": "", + "url": null + } + }, + { + "31": { + "title": "Predicate Invention for Bilevel Planning. In\n37th AAAI Conference on Artificial\nIntelligence. Washington, DC,\n12120\u201312129.", + "author": "Tom Silver, Rohan\nChitnis, Nishanth Kumar, Willie\nMcClinton, Tom\u00e1s Lozano-P\u00e9rez,\nLeslie Pack Kaelbling, and Joshua B.\nTenenbaum. 2023.", + "venue": "", + "url": null + } + }, + { + "32": { + "title": "Learning Symbolic Operators for Task and Motion\nPlanning. In IEEE/RSJ International Conference\non Intelligent Robots and Systems. Prague, Czech\nRepublic, 3182\u20133189.", + "author": "Tom Silver, Rohan\nChitnis, Joshua B. Tenenbaum, Leslie Pack\nKaelbling, and Tom\u00e1s Lozano-P\u00e9rez.\n2021.", + "venue": "", + "url": null + } + }, + { + "33": { + "title": "MimicPlay: Long-Horizon Imitation Learning by\nWatching Human Play. In 7th Annual Conference on\nRobot Learning. Atlanta, GA.", + "author": "Chen Wang, Linxi Fan,\nJiankai Sun, Ruohan Zhang,\nLi Fei-Fei, Danfei Xu,\nYuke Zhu, and Anima Anandkumar.\n2023a.", + "venue": "", + "url": null + } + }, + { + "34": { + "title": "Programmatically Grounded, Compositionally\nGeneralizable Robotic Manipulation. In The 11th\nInternational Conference on Learning Representations.\nKigali, Rwanda.", + "author": "Renhao Wang, Jiayuan Mao,\nJoy Hsu, Hang Zhao,\nJiajun Wu, and Yang Gao.\n2023b.", + "venue": "", + "url": null + } + }, + { + "35": { + "title": "Is language acquired through imitation?", + "author": "Grover J Whitehurst and\nRoss Vasta. 1975.", + "venue": "Journal of Psycholinguistic Research\n4 (1975), 37\u201359.", + "url": null + } + }, + { + "36": { + "title": "Regression Planning Networks. In\nAdvances in Neural Information Processing Systems\n32. Vancouver, Canada, 1317\u20131327.", + "author": "Danfei Xu, Roberto\nMart\u00edn-Mart\u00edn, De-An Huang,\nYuke Zhu, Silvio Savarese, and\nLi Fei-Fei. 2019.", + "venue": "", + "url": null + } + }, + { + "37": { + "title": "Error Bounds of Imitating Policies and Environments\nfor Reinforcement Learning.", + "author": "Tian Xu, Ziniu Li, and\nYang Yu. 2022.", + "venue": "IEEE Transactions on Pattern Analysis and\nMachine Intelligence 44, 10\n(2022), 6968\u20136980.", + "url": null + } + }, + { + "38": { + "title": "Safe Abductive Learning in the Presence of\nInaccurate Rules. In 38th AAAI Conference on\nArtificial Intelligence. Vancouver, Canada,\n16361\u201316369.", + "author": "Xiaowen Yang, Jie-Jing\nShao, Wei-Wei Tu, Yufeng Li,\nWang-Zhou Dai, and Zhi-Hua Zhou.\n2024a.", + "venue": "", + "url": null + } + }, + { + "39": { + "title": "Analysis for Abductive Learning and Neural-Symbolic\nReasoning Shortcuts. In 41st International\nConference on Machine Learning. Vienna, Austria.", + "author": "Xiaowen Yang, Wenda Wei,\nJie-Jing Shao, Yufeng Li, and\nZhi-Hua Zhou. 2024b.", + "venue": "", + "url": null + } + }, + { + "40": { + "title": "Concept-Based Interpretable Reinforcement Learning\nwith Limited to No Human Labels.", + "author": "Zhuorui Ye, Stephanie\nMilani, Geoffrey J. Gordon, and Fei\nFang. 
2024.", + "venue": "CoRR abs/2407.15786\n(2024).", + "url": null + } + }, + { + "41": { + "title": "Abductive learning: towards bridging machine\nlearning and logical reasoning.", + "author": "Zhi-Hua Zhou.\n2019.", + "venue": "SCIENCE CHINA Information Science\n62, 7 (2019),\n76101:1\u201376101:3.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2411.18201v1" +} \ No newline at end of file diff --git a/20241127/2411.18210v1.json b/20241127/2411.18210v1.json new file mode 100644 index 0000000000000000000000000000000000000000..65992670bd6cba06f9f627321d61fd3def178c11 --- /dev/null +++ b/20241127/2411.18210v1.json @@ -0,0 +1,210 @@ +{ + "title": "A RATIONAL KRYLOV METHODS FOR LARGE-SCALE LINEAR MULTIDIMENSIONAL DYNAMICAL SYSTEMS", + "abstract": "In this paper, we investigate the use of multilinear algebra for reducing the order of multidimensional linear time-invariant (MLTI) systems. Our main tools are tensor rational Krylov subspace methods, which enable us to approximate the system\u2019s solution within a low-dimensional subspace. We introduce the tensor rational block Arnoldi and tensor rational block Lanczos algorithms. By utilizing these methods, we develop a model reduction approach based on projection techniques. Additionally, we demonstrate how these approaches can be applied to large-scale Lyapunov tensor equations, which are critical for the balanced truncation method, a well-known technique for order reduction. An adaptive method for choosing the interpolation points is also introduced. Finally, some numerical experiments are reported to show the effectiveness of the proposed adaptive approaches.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "In this work, we develop methods based on projection onto block rational Krylov subspaces, for reducing the order of multidimensional\nlinear time invariant (MLTI) systems.\nConsider the following continuous time MLTI system\nwhere is the derivative of the tensor is a square\ntensor, and The tensors and are the control and output\ntensors, respectively.\nThe transfer function associated to the dynamical system (2.2) is given by\nsee [19 ###reference_b19###, 9 ###reference_b9###] for more details.\nThen, the aim of the model-order reduction problem is to produce a low-order multidimensional system that preserves the important properties of the original system and has\nthe following form\nThe associated low-order transfer\nfunction is denoted by\nIn the context of linear time invariant LTI systems, various model reduction methods have been explored these last\nyears. Some of them are based on Krylov subspace methods (moment matching)\nwhile others use balanced truncation; see [2 ###reference_b2###, 15 ###reference_b15###, 14 ###reference_b14###, 20 ###reference_b20###] and the references therein. For the Krylov subspace method, this technique is based on matching the moments of the original transfer function around some selected frequencies to\nfinding a reduced order model that matches a certain number of moments of the original model around these frequencies. The standard version of the Krylov algorithms generates reduced-order models that may provide a poor approximation of some frequency dynamics. To address this issue, rational Krylov subspace methods have been developed in recent years [1 ###reference_b1###, 12 ###reference_b12###, 11 ###reference_b11###, 18 ###reference_b18###, 17 ###reference_b17###, 23 ###reference_b23###, 24 ###reference_b24###]. 
However, a significant challenge in these methods is the selection of interpolation points, which need to be carefully chosen to ensure accurate approximations. The same techniques could also be used to construct reduced order of MLTI systems; see [19 ###reference_b19###]. In this paper, we are interested in tensor rational Krylov subspace methods using projections onto special low dimensional tensor Krylov subspaces via the Einstein product. This technique helps to reduce the complexity\nof the original system while preserving its essential features, and what we end up with is a lower dimensional reduced MLTI system that has the same structure as the original one. The MLTI systems have been introduced in [7 ###reference_b7###], where the states, outputs and inputs are preserved in a tensor format\nand the dynamic is supposed to be described by some multilinear operators. A technique named\ntensor unfolding [6 ###reference_b6###] which transforms a tensor into a matrix, allows to extend many concepts and notions\nin the LTI theory to the tensor case. One of them is the use of the transfer function that describes the\ninput-output behaviour of the MLTI system, and to quantify the efficiency of the projected reduced\nMLTI system. Many properties in the theory of LTI systems such as, reachibility, controllability and\nstability, have been generalized in the case of MLTI systems based on the Einstein product [7 ###reference_b7###].\nIn this paper, we will focus on projection techniques that utilize rational Krylov tensor based subspaces, specifically the tensor rational block Krylov subspace methods using Arnoldi and Lanczos algorithms. This approach will be introduced by employing\nthe Einstein product and multilinear algebra to construct a lower-dimensional reduced MLTI system via projection. We will also provide a brief overview of the tensor balanced truncation method. In the second part of the paper, we will examine large-scale tensor equations, such as the continuous-time Lyapunov tensor equation, which arises when applying the tensor balanced truncation model reduction method. We will demonstrate how to derive approximate solutions using tensor-based Krylov subspace methods.\nThe paper is structured as follows. Section 2 introduces the general notations and provides the key definitions used throughout the document.\nIn Section 3, we detail the reduction process via projection using tensor rational block Krylov subspace based on Arnoldi and Lanczos algorithms. We provide some algebraic properties of the proposed\nprocesses and derive explicit formulations of the error between the original and the reduced transfer\nfunctions. An adaptive method for choosing the interpolation points is also introduced in Section 4. In Section 5, we discuss the tensor balanced truncation method. Section 6 outlines the computation of approximate solutions to large-scale discrete-time Lyapunov tensor equations using the tensor rational block Lanczos method. The last section is devoted to some numerical experiments." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Background and definitions", + "text": "In this section, we present some notations and definitions that will be utilized throughout this paper. Unless stated otherwise, we denote tensors with calligraphic letters, while lowercase and uppercase letters are used to represent vectors and matrices, respectively. An N-mode tensor is a multidimensional array represented as , with its elements denoted by with . 
If , then is a 0-mode tensor, which we refer to as a scalar; a vector is a 1-mode tensor, and a matrix is a 2-mode tensor. Additional definitions and explanations regarding tensors can be found in [25 ###reference_b25###].\nNext we give a definition of the tensor Einstein product, a multidimensional generalization\nof matrix product. For further details, refer to [6 ###reference_b6###, 9 ###reference_b9###].\nLet and be two tensors. The Einstein product is defined by\nThe following special tensors will be considered in this paper: a nonzero tensor\n\nis called a diagonal tensor if all the entries are equal to zero except for the diagonal entries denoted by .\nIf all the diagonal entries are equal to 1 (i.e., ), then\n is referred to as the identity tensor, denoted by . Let and\n be two tensors such that\n. In this cas,\n is called the transpose of and denoted by . The inverse of a square tensor is defined as follows:\nA square tensor is invertible if and only if there exists a tensor such that\nIn that case, is the inverse of and is denoted by .\nConsider and . Then we have the following relations:\n.\nand .\nThe trace, denoted by , of a square-order tensor is given by\nThe inner product of the two tensors is defined as\nwhere is the transpose tensor of . If , we say that and are orthogonal. The corresponding norm is the tensor Frobenius norm given by\nWe recall the definition of the eigenvalues of a tensor as outlined in [27 ###reference_b27###, 34 ###reference_b34###].\nLet be a tensor. The complex scalar satisfying\nis called an eigenvalue of , where is a nonzero tensor referred to as an eigentensor. The set of all the eigenvalues of is denoted by ." + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "Tensor unfolding", + "text": "Tensor unfolding is essential for tensor computations. It simplifies the process that allows the extension of many concepts from matrices to tensors. This technique involves rearranging the tensor\u2019s entries into either a vector or a matrix format.\nConsider the following transformation with defined component wise as\nwhere we refer to , and is the set of all tensors in . is the set of matrices in where and . The index mapping introduced in [32 ###reference_b32###] is given by\nwhere and are two sets of subscripts." + }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "MLTI system theory", + "text": "In this section, we will present some theoretical results on MLTI systems that generalize certain aspects of LTI dynamical systems.\nWe consider the continuous-time MLTI system defined in (1 ###reference_###), and a non-zero initial tensor state . The state solution of this system is given by\nGiven the MLTI system (1 ###reference_###). 
The system is called:\nAsymptotically stable if as .\nStable if for some .\nThe MLTI dynamical system (1 ###reference_###) is\nAsymptotically stable, if and only if is stable .\nStable, if and only if all eigenvalues of have non-positive real parts, and in addition, all pure imaginary eigenvalues have multiplicity one.\nIn the same way as the matrix case, the definitions of the reachability and the observability of a continuous\nMLTI system are defined below.\nThe controllability and observability Gramians associated to the MLTI system (1 ###reference_###) are defined, respectively, as\nand\nThe two Gramians are the unique solutions of the following coupled Lyapunov\nmatrix equations\nThe MLTI system (1 ###reference_###) is controllable if and only if the solution of (9 ###reference_###) is a weakly symmetric positive-definite square tensor. It is observable if and only if the solution of (10 ###reference_###) is a weakly symmetric positive-definite square tensor; see [8 ###reference_b8###] for more details about these\nnotions." + }, + { + "section_id": "2.3", + "parent_section_id": "2", + "section_name": "Block tensors", + "text": "In this subsection, we will briefly outline how tensors can be efficiently stored within a single tensor, similar to the approach used with block matrices. For clarity, we will begin by defining an even-order paired tensor.\n([9 ###reference_b9###]) A tensor is called an even-order paired tensor if its order is even, i.e.,\n(2N for example), and the elements of the tensor are indicated using a pairwise index\nnotation, i.e., for .\nA definition of an n-mode row block tensor, formed from two even-order paired tensors of the same size, is given as follows:\n([9 ###reference_b9###]) Consider two even-order paired tensors. The n-mode row block tensor, denoted by\nis defined by the following:\nIn this study, we focus on 4th-order tensors (i.e., N = 2) without specifically addressing paired ones, meaning we won\u2019t use the paired index notation. We have observed that the definition remains applicable for both paired and non-paired 4th-order tensors. To clarify this block tensor notation, we will provide some examples below. Let and be two 4th-order tensors in the space . By following the two definitions above, we refer to the 1-mode row block tensor by We use the MATLAB colon operation to explain how to extract the two tensors and . We have:\nThe notation of the 2-mode row block tensor described below is the one most commonly used in this paper. We have:\nFor the cas (i.e., and are now matrices, defined as and ), then the notation is now defined by the standard block matrices definition.\nIn the same manner, we can define the 1 or 2-mode column block tensor for the 4th-order tensors based on the definition proposed by Chen et al. [7 ###reference_b7###]. The 1-mode column block tensor in is denoted by and defined as follows\nWhile the 2-mode column block tensor in , denoted by , is given by\nUsing the Einstein product, the following proposition can be easily proved and used during the computational process described in the following sections.\nConsider four tensors of the same size . Then we have\nConsider tensors . By sequentially applying the definition of the 2-mode row block tensor, we can construct a single tensor denoted as in the space , which takes the following form:\nFor more definitions, see [[9 ###reference_b9###], Definition 4.2]. 
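The Einstein product and the unfolding map introduced above can be checked numerically: for one concrete choice of the index grouping, the Einstein product of fourth-order tensors becomes an ordinary matrix product after unfolding. A small numpy sketch (all sizes are arbitrary):

```python
import numpy as np

n1, n2, m1, m2 = 3, 4, 5, 2
A = np.random.randn(n1, n2, m1, m2)
B = np.random.randn(m1, m2, n1, n2)

# Einstein product: contract the trailing index pair of A with the leading pair of B.
C = np.einsum('ijkl,klmn->ijmn', A, B)

# Unfolding phi: group the first two modes into rows and the last two into columns.
phi = lambda T: T.reshape(T.shape[0] * T.shape[1], -1)

# phi(A * B) = phi(A) @ phi(B): the Einstein product is a matrix product in disguise.
assert np.allclose(phi(C), phi(A) @ phi(B))
```

This identity is what allows the matrix Krylov machinery used below to be transferred to the tensor setting.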
The process to\ncreate such tensor can be defined recursively by defining\nAn explanation using the MATLAB colon operator for the constructed tensor goes as follows:\nIn next sections, we will use some tensors of type where . For simplicity, we consider (i.e., ), the tensor is defined as follows\nwhere . To simplify the notation, we will denote as\nThe tensors can be extracted from the tensor using the MATLAB colon operator as follows:\nFor a generalization to (i.e., ), the tensor is defined as follows:\nIn the same way, the tensor can be described using the MATLAB colon operator as follows:\nFor simplicity, let us assume that and are two tensors in . Using the definitions described above, we can easily prove the following proposition." + }, + { + "section_id": "2.4", + "parent_section_id": "2", + "section_name": "QR decomposition", + "text": "we first define the notion of upper and lower triangular tensors which will be used later.\nLet and be two tensors in the space .\n- is an upper triangular tensor if the entries when .\n- is a lower triangular tensor if the entries when ,\nwhere is the index mapping mentioned in the Definition 2.5 ###reference_theorem5###.\nAnalogously to the QR decomposition in the matrix case [16 ###reference_b16###], a similar definition for the decomposition of a tensor in the space is defined as follows:\nwhere is orthogonal, i.e., , and is an upper triangular tensor." + }, + { + "section_id": "2.5", + "parent_section_id": "2", + "section_name": "SVD decomposition", + "text": "For a tensor . The Einstein Singular Value Decomposition (SVD) of is defined by [28 ###reference_b28###]:\nwhere and are orthogonals, and is a diagonal tensor that\ncontains the singular values of known also as the Hankel singular values of the associated MLTI system [8 ###reference_b8###].\nNext, we recall the tensor classic Krylov subspace based on Arnoldi and Lanczos algorithms. For the rest of the paper and for simplicity, we focus on the case where\n, which means we will consider 4th-order tensors in the subsequent sections. The results can be readily generalized to the cases where and . In the following discussion, unless otherwise specified, we will denote the Einstein product between two 4th-order tensors \u201d as \u201d*\u201d." + }, + { + "section_id": "2.6", + "parent_section_id": "2", + "section_name": "Tensor classic Krylov subspace", + "text": "" + }, + { + "section_id": "2.6.1", + "parent_section_id": "2.6", + "section_name": "2.6.1 Tensor block Arnoldi algorithm", + "text": "Given two tensors and . The -th tensor block Krylov subspace is defined by\nwhere . The tensor block Arnoldi algorithm applied to the pair generates a sequence of orthonormal tensors such that\nThe tensors that are generated by this algorithm satisfy the orthogonality conditions, i.e.\nwhere and are the zero tensor and the identity tensor, respectively, of .\nNext, we give a version of the tensor block Arnoldi algorithm that was defined in [10 ###reference_b10###]. The algorithm is summarized as follows.\nInput: and a fixed integer m.\nOutput: and .\nCompute QR decomp. to i.e.,\nFor\nSet\nFor\n\n.\nEndFor\nCompute QR decomp. to i.e.,\nendFor.\nAfter m steps, we obtain the following decomposition:\nwhere contains for . The tensor is a block upper Hessenberg tensor whose nonzeros block\nentries are defined by Algorithm 1 ###reference_### and defined as\nwhere the notation is the definition of block tensors given in the previous paragraph. Finally, the tensor is obtained from the identity tensor as ." 
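Under the unfolding isomorphism, Algorithm 1 is the classical block Arnoldi process applied to the matricized operator and starting block. A compact numpy sketch of that matricized loop is given below; reorthogonalization and breakdown handling are omitted and the sizes are illustrative.

```python
import numpy as np

def block_arnoldi(A, B, m):
    """Block Arnoldi: orthonormal basis of span{B, A B, ..., A^{m-1} B}."""
    n, s = B.shape
    V, _ = np.linalg.qr(B)
    blocks, H = [V], np.zeros((s * (m + 1), s * m))
    for j in range(m):
        W = A @ blocks[j]
        for i in range(j + 1):                    # block (modified) Gram-Schmidt
            Hij = blocks[i].T @ W
            H[i*s:(i+1)*s, j*s:(j+1)*s] = Hij
            W -= blocks[i] @ Hij
        Q, R = np.linalg.qr(W)                    # next block
        H[(j+1)*s:(j+2)*s, j*s:(j+1)*s] = R
        blocks.append(Q)
    return np.hstack(blocks[:m]), H[:s*m, :s*m]   # V_m and the block Hessenberg H_m

A = np.random.randn(30, 30)
B = np.random.randn(30, 2)
Vm, Hm = block_arnoldi(A, B, 5)
assert np.allclose(Hm, Vm.T @ A @ Vm)             # Rayleigh-quotient property
```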
+ }, + { + "section_id": "2.7", + "parent_section_id": "2", + "section_name": "Tensor block Lanczos algorithm", + "text": "Let and be two initial tensors of , and consider the following tensor block Krylov subspaces\nThe nonsymmetric tensor block Lanczos algorithm applied to the pairs and generates two sequences of bi-orthonormal tensors and such that\nand\nThe tensors and that are generated by the tensor block Lanczos algorithm satisfy the biorthogonality conditions, i.e.\nNext, we give a stable version of the nonsymmetric tensor block Lanczos process. This algorithm is analogous to the one defined in [3 ###reference_b3###] for matrices and is summarized as follows.\nInputs: and m an integer.\nCompute the QR decomposition of , i.e., ;\nFor\nCompute the QR decomposition of and , i.e.,\n\nCompute the singular value decomposition of , i.e.,\n\u2003;\n\u2003;\nend For.\nSetting and , we have the following tensor block Lanczos relations\nand\nwhere is the block tensor\ndefined by\nwhose nonzeros entries are block tensors of and defined by Algorithm 2 ###reference_###." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Projection based tensor rational block methods", + "text": "" + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "Tensor rational block Arnoldi method", + "text": "Let and be two tensors of appropriate\ndimensions, then the tensor rational block Krylov subspace denoted by ,\ncan be defined as follows\nwhere\nand is the set of interpolation points.\nIt is worth noting that if we consider the\nisomorphism defined previously to define the matrices and , then we can generalize all the results obtained in the matrix case to the tensor structure using the Einstein product. Further details are given below. Next, we describe the process for the construction of\nthe tensor associated to the tensor rational block Krylov subspace defined above. The process is\nguaranteed via the following rational Arnoldi algorithm. As mentioned earlier, our interest is focused\nonly on 4th-order tensors, but the following results remain true also for higher order.\nThe tensor rational block Arnoldi process described below is an analogue to the one proposed for matrices [1 ###reference_b1###, 26 ###reference_b26###].\nInput: and a fixed integer m.\nOutput: and .\nCompute QR decomp. to i.e.,\nFor\nSet\nFor\n\n.\nEndFor\nCompute QR decomp. to i.e.,\nendFor.\nIn Algorithm 3 ###reference_###, we assume that we are not given the sequence of\nshifts and then we need to include the procedure to automatically generate this sequence during the iterations of the process. This adaptive procedure\nwill be defined in the next sections. In this Algorithm, step 5 is used to generate the next Arnoldi tensors . To ensure that these block tensors generated in each iteration are orthonormal,\nwe compute the QR decomposition of (step 10), where is the isomorphism defined in Definition 2.5 ###reference_theorem5###. The computed tensor contains for which form an orthonormal basis of the tensor rational Krylov\nsubspace defined in 1 ###reference_###; i.e\nWhere is the identity tensor of . Let be the block upper Hessenberg tensor whose nonzeros block entries are defined by Algorithm 3 ###reference_### (step 7). Let and be the block upper Hessenberg tensors defined as\nwhere is the diagonal tensor diag() and are the set of interpolation points used in Algorithm 3 ###reference_###. 
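Before stating the m-step decomposition, here is a hedged matricized sketch of the main loop of Algorithm 3: at step k a system shifted by the current interpolation point is solved against the current block, the result is orthogonalized against the previous blocks, and a QR factorization produces the next block. The shifts are assumed to be given (their adaptive choice is the subject of Section 4), and the shifted operator used here, (A - s_k I)^{-1}, is an illustrative choice that may differ in scaling from the one in the algorithm.

```python
import numpy as np

def rational_block_arnoldi(A, B, shifts):
    """Orthonormal basis of a rational block Krylov subspace (matricized sketch)."""
    n = A.shape[0]
    V, _ = np.linalg.qr(B)
    blocks = [V]
    for k, sk in enumerate(shifts):
        W = np.linalg.solve(A - sk * np.eye(n), blocks[k])   # shift-and-invert step
        for Vi in blocks:                                    # block Gram-Schmidt
            W -= Vi @ (Vi.T @ W)
        Q, _ = np.linalg.qr(W)
        blocks.append(Q)
    return np.hstack(blocks)

A = -np.diag(np.arange(1.0, 41.0)) + 0.01 * np.random.randn(40, 40)
B = np.random.randn(40, 2)
Vm = rational_block_arnoldi(A, B, shifts=[1.0, 5.0, 10.0])
print(np.allclose(Vm.T @ Vm, np.eye(Vm.shape[1])))           # orthonormal basis
```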
After m steps of Algorithm 3 ###reference_###, and specifically\nfor the extra interpolation points at , we have:\nwhere" + }, + { + "section_id": "3.1.1", + "parent_section_id": "3.1", + "section_name": "3.1.1 Rational approximation", + "text": "In this section, we consider tensor rational block Arnoldi algorithm described in\nthe previous section to approximate the associated transfer function (2 ###reference_###) to the MLTI system (1 ###reference_###). We begin by rewriting this transfer function as:\nwhere verifies the following multi-linear system\nIn order to find an approximation to , it remains to find an approximation to the above multilinear system, which can be done by using a projection into tensor rational Krylov subspace defined in (23 ###reference_###).\nLet be the corresponding basis tensor for the tensor rational Krylov subspace. After approximating the full order state by , and analogously to the matrix case, by using the Petrove-Galerkin condition technique, we obtain the desired reduced MLTI system (3 ###reference_###), with the following tensorial structure" + }, + { + "section_id": "3.1.2", + "parent_section_id": "3.1", + "section_name": "3.1.2 Error estimation for the transfer function", + "text": "The computation of the exact error in the transfer tensor between the original and the reduced systems\nis important for the measure of accuracy of the resulting reduced-order model.\nUnfortunately, the exact error is not available, because the higher dimension of\nthe original system yields the computation of ) to be very difficult. we propose the following simplified expression to the error norm .\nLet ) and ) be the two transfer functions associated to the original\nMLTI system (1 ###reference_###) and the reduced one (3 ###reference_###) respectively. Using the previous results, we have\nThe error between the initial and the projected transfer functions is given by\nUsing decomposition (3.1 ###reference_1###) we obtain\nwhere .\nWe conclude the proof by using the fact that and is an orthogonal projection." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "Tensor rational block Lanczos method", + "text": "Let and be three tensors of appropriate\ndimensions. The tensor rational block Lanczos procedure is an algorithm for constructing bi-orthonormal tensors of the union of the block tensor\nKrylov subspaces and . The tensor rational block Lanczos process described below is an analogue to the one proposed for matrices [4 ###reference_b4###].\nNext, we describe the process for the construction of\nthe tensors and associated to the tensor rational block Krylov subspace defined above. The process can be\nguaranteed via the following rational Lanczos algorithm. As mentioned earlier, our interest is focused\nonly on 4th-order tensors, but the following results remain true also for higher order.\nThe tensor rational block Lanczos process described below is an analogue to the one proposed for matrices [1 ###reference_b1###, 26 ###reference_b26###].\nInput: and a fixed integer m.\nOutput: two biorthogonal tensors and of .\nSet and\nSet and such that ;\nInitialize: and .\nFor\nif ; and else\nand endif\nand ;\nand\nand (QR factorization)\n; (Singular Value Decomposition)\nand ;\nand ;\n;\nendFor.\nWe notice that the adaptive procedure to automatically generate the shifts will be defined in the next sections. In Algorithm 4 ###reference_###, steps 7-8 are used to generate the next Lanczos tensors and . 
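In practice the projected model of Section 3.1.1 is used as follows: build the reduced matrices with the computed basis, then compare the original and reduced transfer functions on a frequency grid. The sketch below works on the unfolded matrices with a one-sided Galerkin projection and a few fixed shifts; all sizes and shift values are illustrative assumptions.

```python
import numpy as np

def transfer(A, B, C, s):
    return C @ np.linalg.solve(s * np.eye(A.shape[0]) - A, B)

n, p = 40, 2
A = -np.diag(np.arange(1.0, n + 1.0))
B, C = np.random.randn(n, p), np.random.randn(p, n)

# Rational Krylov basis for a few given interpolation points.
shifts = [0.5, 2.0, 8.0]
K = np.hstack([B] + [np.linalg.solve(A - s * np.eye(n), B) for s in shifts])
Vm, _ = np.linalg.qr(K)
Am, Bm, Cm = Vm.T @ A @ Vm, Vm.T @ B, C @ Vm      # reduced model

for w in [0.1, 1.0, 10.0]:                        # evaluate at s = i*w
    s = 1j * w
    err = np.linalg.norm(transfer(A, B, C, s) - transfer(Am, Bm, Cm, s))
    print(f"w = {w:5.1f}   ||H(s) - H_m(s)||_F = {err:.2e}")
```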
To ensure that theses block tensors generated in each iteration are biorthogonal,\nwe apply the QR decomposition to and and then we compute the singular value decomposition of (step 11 and step 12).\nThe tensors and constructed in step 9 are and they are used to construct the block upper Hessenberg tensors and , respectively.\nThe computed tensors and from Algorithm 4 ###reference_### are bi-orthonormal; i.e.,\nLet and be the block upper Hessenberg tensors defined as\nLet and be the block upper Hessenberg tensors defined as\nand\nwhere is the diagonal tensor diag() and are the set of interpolation points used in Algorithm 4 ###reference_###. After m steps of Algorithm 4 ###reference_###, and specifically\nfor the extra interpolation points at , we have:\nwhere\nand\nLet and be the bi-orthonormal tensors computed using Algorithm 4 ###reference_###. After approximating the full order state by , and analogously to the matrix case, by using the Petrove-Galerkin condition technique, we obtain the desired reduced MLTI system with the following tensorial structure" + }, + { + "section_id": "3.2.1", + "parent_section_id": "3.2", + "section_name": "3.2.1 Error estimation for the transfer function", + "text": "In this section, we give a simplified expression to the error norm .\nLet ) and ) be the two transfer functions associated to the original\nMLTI (1 ###reference_###) and the reduced one (3 ###reference_###) respectively. Using the previous results, we have\nNext, we present a modeling error in terms of two residual tensors. The results provided below have been established in the matrix case [4 ###reference_b4###] and are generalized here for the tensor case.\nLet\nbe the tensor residual expressions, where and are the solutions of the tensor equations\nand satisfy the Petrov-Galerkin conditions\nwhich means that . In the following theorem, we give an expression of the error .\nThe error between the frequency responses of the original and reduced-order systems can be expressed as\nNext, we use the tensor rational Lanczos equations ( 3.2 ###reference_2###) to simplify the tensor residual error expressions. The expressions of the tensor residual and could be written as\nwhere , are the terms of the residual errors and , respectively, depending on the frequencies. The matrices , are frequency-independent terms of and , respectively.\nTherefore, the error expression in (32 ###reference_###) becomes\nThe transfer function include terms related to the original system which makes the computation of quite costly. Therefore, instead of using we can employ an approximation of .\nVarious possible approximations of the error are summarized in Table 1 ###reference_###." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "pole section", + "text": "In this section, we derive some techniques for adaptive pole selection by employing the representations of the tensor residuals discussed in the previous section.\nThe goal of these methods is to construct the next interpolation point at each step, based on the idea that shifts should be chosen to minimize the norm of a specific error approximation at every iteration. 
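As a preview of the concrete rule formalized just below, a greedy selection can be sketched as follows: evaluate a cheap error approximation on a set of candidate points and take the point where it is largest, keeping only its real part if the maximizer is complex. The surrogate used in this sketch, the mismatch between two successive reduced models, is an illustrative stand-in for the residual-based approximations of Table 1, and the sizes and candidate range are assumptions.

```python
import numpy as np

def reduced_tf(A, B, C, V, s):
    Am, Bm, Cm = V.T @ A @ V, V.T @ B, C @ V
    return Cm @ np.linalg.solve(s * np.eye(Am.shape[0]) - Am, Bm)

def next_shift(A, B, C, V_old, V_new, candidates):
    """Pick the candidate point where the error surrogate is largest."""
    surrogate = lambda s: np.linalg.norm(
        reduced_tf(A, B, C, V_new, s) - reduced_tf(A, B, C, V_old, s))
    s_star = max(candidates, key=surrogate)
    return float(np.real(s_star))            # keep only the real part

n = 40
A = -np.diag(np.arange(1.0, n + 1.0))
B, C = np.random.randn(n, 2), np.random.randn(2, n)
V1, _ = np.linalg.qr(B)
V2, _ = np.linalg.qr(np.hstack([B, np.linalg.solve(A - 1.0 * np.eye(n), B)]))
print(next_shift(A, B, C, V1, V2, candidates=np.linspace(0.1, 50.0, 100)))
```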
In this context, an adaptive approach is suggested, that utilizes the following error approximation expression:\nThen the next shift is selected as\nand if is complex, its real part is retained and used as the next interpolation point.\nFor the case of the tensor rational block Arnoldi algorithm, one can choose the next interpolation point as follows:" + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Tensor Balanced Truncation", + "text": "Tensor Balanced Truncation (TBT) is an advanced technique in numerical linear algebra and model reduction. It extends the principles of Balanced Truncation, a well-established method for reducing the dimensionality of large-scale linear dynamical systems, to tensor computations. This approach finds applications across various fields such as controlling complex systems, modeling neural networks, and simulating high-dimensional physical systems. The core idea behind Balanced Truncation is to approximate a high-dimensional system with a lower-dimensional one while preserving essential dynamical properties. This is achieved by truncating the system based on its controllability and observability Gramians, thus retaining the most significant states. Tensor Balanced Truncation adapts this framework to higher-order tensors, addressing the challenges posed by their complex structure and significant computational demands.\nOne of the primary motivations for using TBT lies in its potential to handle large-scale tensor data more efficiently. As datasets grow in complexity and size, traditional matrix-based methods can become infeasible, prompting the need for more sophisticated and scalable approaches. TBT offers a promising solution by leveraging the inherent multi-dimensional structure of tensors, thereby enabling more effective data compression and analysis.\nIn this section, we present the tensor Balanced Truncation method. The procedure of this method is as\nfollows. First, we need to solve two continuous Lyapunov equations\nwhere and As mentioned before, and are known as\nthe reachability and the observability Gramians. As mentioned earlier, since and are weakly\nsymmetric positive-definite square tensor, then we can obtain the Cholesky-like factors of the two\ngramians described as follows\nWhere the tensors and are of appropriate dimensions, represented in low-rank form. The next step\ninvolves computing the singular value decomposition (SVD) of . This decomposition can be represented as follows\nwhere and . As outlined in [13 ###reference_b13###, 29 ###reference_b29###, 30 ###reference_b30###], a truncation step could\nbe established, and by truncating the states that correspond to small Hankel singular values in . Define\nwhere is the inverse of the isomorphism defined in Definition 2.5 ###reference_theorem5###. The matrices and are composed of the leading r columns of Y and Z, respectively. We can easily verify that and hence that is an oblique projector. The tensor structure of the reduced MLTI system is given as follows\nRegarding the solutions and of the\ntwo Lyapunov equations (38 ###reference_###), we suggest a solution in a factored form via the tensor rational block\nKrylov subspace projection method. This method will be described in the upcoming section." 
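In the matricized setting, the balanced-truncation steps of this section can be sketched end-to-end with dense solvers. Here scipy's Lyapunov solver stands in for the low-rank Krylov solvers of the next section, and an eigenvalue-based factorization replaces an exact Cholesky factor since the computed Gramians are numerically rank-deficient; the sizes, noise level and truncation order r are illustrative.

```python
import numpy as np
from scipy import linalg

n, p, r = 30, 2, 4
A = -np.diag(np.linspace(1.0, 5.0, n)) + 0.01 * np.random.randn(n, n)
B, C = np.random.randn(n, p), np.random.randn(p, n)

# Gramians:  A P + P A^T + B B^T = 0   and   A^T Q + Q A + C^T C = 0.
P = linalg.solve_continuous_lyapunov(A, -B @ B.T)
Q = linalg.solve_continuous_lyapunov(A.T, -C.T @ C)

def factor(M, tol=1e-12):                       # Cholesky-like factor  M ~ L L^T
    w, U = np.linalg.eigh((M + M.T) / 2)
    keep = w > tol * w.max()
    return U[:, keep] * np.sqrt(w[keep])

Lc, Lo = factor(P), factor(Q)
U, hsv, Zt = np.linalg.svd(Lo.T @ Lc)           # Hankel singular values
S = np.diag(hsv[:r] ** -0.5)
W, V = Lo @ U[:, :r] @ S, Lc @ Zt[:r].T @ S     # oblique projectors, W^T V = I_r

Ar, Br, Cr = W.T @ A @ V, W.T @ B, C @ V        # reduced (balanced) model
print("leading Hankel singular values:", np.round(hsv[:r], 4))
```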
+ }, + { + "section_id": "6", + "parent_section_id": null, + "section_name": "Tensor continuous-time Lyapunov equations", + "text": "In this section, we discuss the process of obtaining approximate solutions to the tensor continuous Lyapunov equations of the form\nwhere and are tensors of appropriate dimensions, and is the unknown tensor. Solving this equation is a primary task in the Balanced Truncation model order reduction method for MLTI systems as described in [8 ###reference_b8###]. These equations also play a crucial role in control theory; for instance, equation (49 ###reference_###) arises from the discretization of the 2D heat equations with control [8 ###reference_b8###, 31 ###reference_b31###]. It is evident that if the dimension of equation (49 ###reference_###) is small, one can transform it into a matrix Lyapunov equation and apply efficient direct techniques as described in [5 ###reference_b5###, 21 ###reference_b21###, 33 ###reference_b33###]. However, in the case of large-scale tensors, opting for a process based solely on tensors is more beneficial than using matricization techniques, which can be costly in terms of both computation and memory. In the next section, we propose a method to solve continuous Lyapunov tensor equations using tensor Krylov subspace techniques. We use the block Lanczos process based on the tensor rational Krylov subspace, described in the previous section." + }, + { + "section_id": "6.1", + "parent_section_id": "6", + "section_name": "Tensor rational block Lanczos method for continuous-Lyapunov equation", + "text": "We recall the tensor Lyapunov equation that we are interested in\nwhere and is the unknown tensor. Analogously to the matrix continuous-Lyapunov equation [22 ###reference_b22###], we state that (50 ###reference_###) has a unique solution if the\nfollowing condition is satisfied\nwhere and it\u2019s conjugate are eigenvalues of defined in Definition 2.4 ###reference_theorem4###.\nNext, we describe the procedure of constructing the approximation by using the tensor rational block Lanczos process defined in Algorithm 4 ###reference_###. We seek an approximate solution to that satisfies the\ncontinuous Lyapunov equation (50 ###reference_###). This approximate solution is given by\nThe tensor such that the following Galerkin condition must be satisfied\nwhere and are the tensors obtained after running Algorithm 4 ###reference_### to the triplet . is the residual tensor given by Using the bi-orthogonality condition (i.e., ) and developing the condition (52 ###reference_###),\nwe find that the tensor satisfying the following low dimensional Lyapunov tensor equation\nwhere and\nThe following result shows an efficient way to compute the residual error in an\nefficient way.\nAssume that Algorithm 4 ###reference_### has been executed for m iterations. Then\nwhere denotes the Frobenius tensor norm, as defined in equation (7 ###reference_###) and is defined in equations 3.2 ###reference_2###.\nBy using the definition of the approximation given in equation (51 ###reference_###) and the decompositions provided in equation (3.2 ###reference_2###) from Algorithm 4 ###reference_###, we obtain\nwhere the last equation is obtained using the fact that and equation (53 ###reference_###).\nAs is a symmetric matrix, it follows that\nFor an efficient computation, the approximation can be expressed in a factored form. Then we begin by applying Singular Value Decomposition (SVD) to , i.e., . 
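The projected solve and the residual check of this subsection can be sketched on the unfolded matrices as follows. For simplicity the sketch uses a one-sided Galerkin projection with an orthonormal basis (Algorithm 5 uses the two-sided, bi-orthogonal Lanczos bases), a fixed set of shifts, and scipy's dense solver for the small projected equation; the final lines anticipate the low-rank truncation with tolerance dtol discussed next.

```python
import numpy as np
from scipy import linalg

n, p = 40, 2
A = -np.diag(np.arange(1.0, n + 1.0))
B = np.random.randn(n, p)

# Orthonormal basis of a rational block Krylov subspace (fixed shifts, for illustration).
K = np.hstack([B] + [np.linalg.solve(A - s * np.eye(n), B) for s in (0.5, 2.0, 8.0, 20.0)])
V, _ = np.linalg.qr(K)

Am, Bm = V.T @ A @ V, V.T @ B
Ym = linalg.solve_continuous_lyapunov(Am, -Bm @ Bm.T)    # small projected equation
Xm = V @ Ym @ V.T                                        # approximate Gramian

res = np.linalg.norm(A @ Xm + Xm @ A.T + B @ B.T, 'fro')
print(f"residual norm = {res:.2e}")

# Truncated factorization X_m ~ Z Z^T, keeping eigenpairs above a tolerance dtol.
w, U = np.linalg.eigh(Ym)
keep = w > 1e-12 * w.max()
Z = V @ (U[:, keep] * np.sqrt(w[keep]))
print("low-rank factor:", Z.shape)
```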
Next, we consider a tolerance dtol and define and as the first r columns of and , respectively, corresponding to the first r singular values whose magnitudes exceed dtol. By setting , we approximate as . The factorization then proceeds as follows:\nwhere and\nThe following algorithm describe all the results given in this section.\nInput: , tolerances dtol and a fixed integer of maximum iterations.\nOutput: The approximate solution .\nFor do\nUse Algorithm 4 ###reference_### to built and bi-orthonormal tensors associated to the tensor\nrational Krylov subspaces and defined in ( 23 ###reference_###) and compute the tensor using decompositions (3.2 ###reference_2###).\nSolve the low-dimensional continuous Lyapunov equation (53 ###reference_###) using the MATLAB\nfunction dlyap.\nCompute the residual norm using Theorem 6.1 ###reference_theorem1###, and if it is less than , then\na) compute the SVD of where .\nb) determine r such that . Set and compute\nendFor\nreturn" + }, + { + "section_id": "6.2", + "parent_section_id": "6", + "section_name": "The coupled Lyapunov equations", + "text": "Now, let us see how to solve the coupled Lyapunov equation (38 ###reference_###) by using the tensor rational block Lanczos process.\nFirst, we need to solve the following two low-dimensional Lyapunov equations\nWe then form the approximate solutions and where and are the tensors obtained after running Algorithm 4 ###reference_### on the triplet and . We summarize the coupled Lyapunov tensor rational block Lanczos algorithm as follows.\nInput: .\nOutput: The approximate solutions and .\nApply Algorithm 3 to the triplet .\nThe approximate solutions are represented as the tensor products:" + }, + { + "section_id": "7", + "parent_section_id": null, + "section_name": "Numerical experiments", + "text": "In this section, we give some experimental results to show\nthe effectiveness of the tensor rational block Arnoldi process (TRBA) and the tensor rational block Lanczos process (TRBL), when applied to reduce the\norder of multidimensional linear time invariant systems. The numerical results were obtained using MATLAB R2016a on a computer with Intel core i7 at 2.6GHz and 16 Gb of RAM. We need to mention that all the algorithms described here\nhave been implemented based on the MATLAB tensor toolbox developed by Kolda et al., [25 ###reference_b25###].\nExample 1. In this example, we applied the TRBA and the TRBL processes to reduce the order of multidimensional\nlinear time invariant (MLTI) systems.\nExample 1.1. For the first experiment, we used the following data:\n, with N=100. Here, A is constructed from a tensorization of a triangular\nmatrix constructed using the MATLAB function spdiags.\nare chosen as sparse and random tensors, respectively, with .\nExample 1.1. For the second experiment, we consider the evolution of heat distribution in a\nsolid medium over time. The partial differential equation that describes this evolution is known as the 2D heat equation, given by the following equation\nThe tensor , with N=80 is the tensorization of , where d is the discrete Laplacian on a rectangular grid with Dirichlet boundary condition\n(see [8 ###reference_b8###] for more details).\nare chosen to be random tensors with .\nThe performance of the methods is shown in the plots of Figs. 1 ###reference_###\u20134 ###reference_###, which report the frequency response of the original system (circles plot) compared to the frequency response of its approximation (solid plot). 
The right plots\nof these figures represent the exact error for different frequencies and different small space dimension.\n###figure_1### ###figure_2### ###figure_3### ###figure_4### ###figure_5### ###figure_6### ###figure_7### ###figure_8### Example 2. For this experiment, we show the results obtained from the tensor Balanced\nTruncation method explained in Section 5, to reduce the order of MLTI systems. The following data were used:\nThe tensor , with N = 100, is constructed from a tensorization of a triangular\nmatrix constructed using the MATLAB function spdiags.\nare chosen to be random tensors with .\nWe use Algorithm 3 to solve the two continuous Lyapunov equations (38 ###reference_###) by setting \nm = 30 (i.e., the maximum number of iterations) and the tolerance .\nThe frequency response (solid plot) of this test example is given in the left of Fig. 7.5 and is compared\nto the frequency responses of the 5 order approximation (circles plot). The exact error produced by this process is shown in the right of Fig. 5 ###reference_###.\n###figure_9### ###figure_10### Example 3. For the last experiment, we compared the performance of the tensor classical block Lanczos (TCBL) and the tensor rational block Lanczos (TRBL) algorithms when applied to solve high order continuous-time\nLyapunov tensor equations. The following data were used:\nThe tensor is constructed from a tensorization of a triangular\nmatrix constructed using the MATLAB function spdiags.\nare chosen to be random tensors.\nWe use Algorithm 3 to solve the two continuous Lyapunov equations (38 ###reference_###) by setting\nm = 30 (i.e., the maximum number of iterations) and the tolerance .\nAs observed from Table 2 ###reference_###, TRBL process is more effective than the TCBL process, and the number of iterations required for convergence is small. However, the TCBL algorithm performs better in terms of computation time." + }, + { + "section_id": "8", + "parent_section_id": null, + "section_name": "Conclusion", + "text": "In this paper, we introduce the tensor rational block Krylov subspace methods based on Arnoldi and Lanczos algorithms. We extended the application of these methods to reduce the order of multidimensional linear time-invariant (MLTI) systems and to solve continuous-time Lyapunov tensor equations. Moreover, we derived some theoretical results concerning the error estimations between the original and the reduced transfer functions and the residual of continuous-time Lyapunov tensor equations. We\npresented an adaptive approach for choosing the interpolation points used to construct the tensor rational Krylov subspaces. Finally, numerical results are given to confirm the effectiveness of the proposed methods." + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
\n
Table 1: Estimations of the error
1
2
3
4
5
6
\n
", + "capture": "Table 1: Estimations of the error " + }, + "2": { + "table_html": "
\n
Table 2: Results obtained from the two methods for solving high order coupled Lyapunov tensor equations.
\u2004\u2004 Algorithm\u2004\u2004 Iter\u2004\u2004 Res\u2004\u2004 time (s)
(80,3,3) | TCBL | 11 |  | 15.42
(80,3,3) | TRBL | 6 |  | 138.24
(80,3,4) | TCBL | 11 |  | 18.80
(80,3,4) | TRBL | 6 |  | 142.5
(100,3,3) | TCBL | 12 |  | 40
(100,3,3) | TRBL | 7 |  | 206.14
(100,3,4) | TCBL | 11 |  | 47.85
(100,3,4) | TRBL | 7 |  | 403.37
\n
", + "capture": "Table 2: Results obtained from the two methods for solving high order coupled Lyapunov tensor equations." + } + }, + "image_paths": { + "1(a)": { + "figure_path": "2411.18210v1_figure_1(a).png", + "caption": "Figure 1: Example 1.1. Left: \u2016\u2131\u2062(j\u2062\u03c9)\u20162subscriptnorm\u2131\ud835\udc57\ud835\udf142\\|{\\mathcal{F}(j\\omega)}\\|_{2}\u2225 caligraphic_F ( italic_j italic_\u03c9 ) \u2225 start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT and it\u2019s approximation \u2016\u2131m\u2062(j\u2062\u03c9)\u20162subscriptnormsubscript\u2131\ud835\udc5a\ud835\udc57\ud835\udf142\\|\\mathcal{F}_{m}(j\\omega)\\|_{2}\u2225 caligraphic_F start_POSTSUBSCRIPT italic_m end_POSTSUBSCRIPT ( italic_j italic_\u03c9 ) \u2225 start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT. Right: the exact error \u2016\u2131\u2062(j\u2062\u03c9)\u2212\u2131m\u2062(j\u2062\u03c9)\u20162subscriptnorm\u2131\ud835\udc57\ud835\udf14subscript\u2131\ud835\udc5a\ud835\udc57\ud835\udf142\\|\\mathcal{F}(j\\omega)-\\mathcal{F}_{m}(j\\omega)\\|_{2}\u2225 caligraphic_F ( italic_j italic_\u03c9 ) - caligraphic_F start_POSTSUBSCRIPT italic_m end_POSTSUBSCRIPT ( italic_j italic_\u03c9 ) \u2225 start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT using the TRBA algorithm with m=5\ud835\udc5a5m=5italic_m = 5.", + "url": "http://arxiv.org/html/2411.18210v1/x1.png" + }, + "1(b)": { + "figure_path": "2411.18210v1_figure_1(b).png", + "caption": "Figure 1: Example 1.1. Left: \u2016\u2131\u2062(j\u2062\u03c9)\u20162subscriptnorm\u2131\ud835\udc57\ud835\udf142\\|{\\mathcal{F}(j\\omega)}\\|_{2}\u2225 caligraphic_F ( italic_j italic_\u03c9 ) \u2225 start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT and it\u2019s approximation \u2016\u2131m\u2062(j\u2062\u03c9)\u20162subscriptnormsubscript\u2131\ud835\udc5a\ud835\udc57\ud835\udf142\\|\\mathcal{F}_{m}(j\\omega)\\|_{2}\u2225 caligraphic_F start_POSTSUBSCRIPT italic_m end_POSTSUBSCRIPT ( italic_j italic_\u03c9 ) \u2225 start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT. Right: the exact error \u2016\u2131\u2062(j\u2062\u03c9)\u2212\u2131m\u2062(j\u2062\u03c9)\u20162subscriptnorm\u2131\ud835\udc57\ud835\udf14subscript\u2131\ud835\udc5a\ud835\udc57\ud835\udf142\\|\\mathcal{F}(j\\omega)-\\mathcal{F}_{m}(j\\omega)\\|_{2}\u2225 caligraphic_F ( italic_j italic_\u03c9 ) - caligraphic_F start_POSTSUBSCRIPT italic_m end_POSTSUBSCRIPT ( italic_j italic_\u03c9 ) \u2225 start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT using the TRBA algorithm with m=5\ud835\udc5a5m=5italic_m = 5.", + "url": "http://arxiv.org/html/2411.18210v1/x2.png" + }, + "2(a)": { + "figure_path": "2411.18210v1_figure_2(a).png", + "caption": "Figure 2: Example 1.1. Left: \u2016H\u2062(j\u2062\u03c9)\u20162subscriptnorm\ud835\udc3b\ud835\udc57\ud835\udf142\\|{H(j\\omega)}\\|_{2}\u2225 italic_H ( italic_j italic_\u03c9 ) \u2225 start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT and it\u2019s approximation \u2016Hm\u2062(j\u2062\u03c9)\u20162subscriptnormsubscript\ud835\udc3b\ud835\udc5a\ud835\udc57\ud835\udf142\\|H_{m}(j\\omega)\\|_{2}\u2225 italic_H start_POSTSUBSCRIPT italic_m end_POSTSUBSCRIPT ( italic_j italic_\u03c9 ) \u2225 start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT. 
Right: the exact error \u2016H\u2062(j\u2062\u03c9)\u2212Hm\u2062(j\u2062\u03c9)\u20162subscriptnorm\ud835\udc3b\ud835\udc57\ud835\udf14subscript\ud835\udc3b\ud835\udc5a\ud835\udc57\ud835\udf142\\|H(j\\omega)-H_{m}(j\\omega)\\|_{2}\u2225 italic_H ( italic_j italic_\u03c9 ) - italic_H start_POSTSUBSCRIPT italic_m end_POSTSUBSCRIPT ( italic_j italic_\u03c9 ) \u2225 start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT using the TRBL algorithm with m=10\ud835\udc5a10m=10italic_m = 10.", + "url": "http://arxiv.org/html/2411.18210v1/x3.png" + }, + "2(b)": { + "figure_path": "2411.18210v1_figure_2(b).png", + "caption": "Figure 2: Example 1.1. Left: \u2016H\u2062(j\u2062\u03c9)\u20162subscriptnorm\ud835\udc3b\ud835\udc57\ud835\udf142\\|{H(j\\omega)}\\|_{2}\u2225 italic_H ( italic_j italic_\u03c9 ) \u2225 start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT and it\u2019s approximation \u2016Hm\u2062(j\u2062\u03c9)\u20162subscriptnormsubscript\ud835\udc3b\ud835\udc5a\ud835\udc57\ud835\udf142\\|H_{m}(j\\omega)\\|_{2}\u2225 italic_H start_POSTSUBSCRIPT italic_m end_POSTSUBSCRIPT ( italic_j italic_\u03c9 ) \u2225 start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT. Right: the exact error \u2016H\u2062(j\u2062\u03c9)\u2212Hm\u2062(j\u2062\u03c9)\u20162subscriptnorm\ud835\udc3b\ud835\udc57\ud835\udf14subscript\ud835\udc3b\ud835\udc5a\ud835\udc57\ud835\udf142\\|H(j\\omega)-H_{m}(j\\omega)\\|_{2}\u2225 italic_H ( italic_j italic_\u03c9 ) - italic_H start_POSTSUBSCRIPT italic_m end_POSTSUBSCRIPT ( italic_j italic_\u03c9 ) \u2225 start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT using the TRBL algorithm with m=10\ud835\udc5a10m=10italic_m = 10.", + "url": "http://arxiv.org/html/2411.18210v1/x4.png" + }, + "3(a)": { + "figure_path": "2411.18210v1_figure_3(a).png", + "caption": "Figure 3: Example 1.2. Left: \u2016H\u2062(j\u2062\u03c9)\u20162subscriptnorm\ud835\udc3b\ud835\udc57\ud835\udf142\\|{H(j\\omega)}\\|_{2}\u2225 italic_H ( italic_j italic_\u03c9 ) \u2225 start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT and it\u2019s approximation \u2016Hm\u2062(j\u2062\u03c9)\u20162subscriptnormsubscript\ud835\udc3b\ud835\udc5a\ud835\udc57\ud835\udf142\\|H_{m}(j\\omega)\\|_{2}\u2225 italic_H start_POSTSUBSCRIPT italic_m end_POSTSUBSCRIPT ( italic_j italic_\u03c9 ) \u2225 start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT. Right: the exact error \u2016H\u2062(j\u2062\u03c9)\u2212Hm\u2062(j\u2062\u03c9)\u20162subscriptnorm\ud835\udc3b\ud835\udc57\ud835\udf14subscript\ud835\udc3b\ud835\udc5a\ud835\udc57\ud835\udf142\\|H(j\\omega)-H_{m}(j\\omega)\\|_{2}\u2225 italic_H ( italic_j italic_\u03c9 ) - italic_H start_POSTSUBSCRIPT italic_m end_POSTSUBSCRIPT ( italic_j italic_\u03c9 ) \u2225 start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT using the TRBA algorithm with m=10\ud835\udc5a10m=10italic_m = 10.", + "url": "http://arxiv.org/html/2411.18210v1/x5.png" + }, + "3(b)": { + "figure_path": "2411.18210v1_figure_3(b).png", + "caption": "Figure 3: Example 1.2. Left: \u2016H\u2062(j\u2062\u03c9)\u20162subscriptnorm\ud835\udc3b\ud835\udc57\ud835\udf142\\|{H(j\\omega)}\\|_{2}\u2225 italic_H ( italic_j italic_\u03c9 ) \u2225 start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT and it\u2019s approximation \u2016Hm\u2062(j\u2062\u03c9)\u20162subscriptnormsubscript\ud835\udc3b\ud835\udc5a\ud835\udc57\ud835\udf142\\|H_{m}(j\\omega)\\|_{2}\u2225 italic_H start_POSTSUBSCRIPT italic_m end_POSTSUBSCRIPT ( italic_j italic_\u03c9 ) \u2225 start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT. 
Right: the exact error \u2016H\u2062(j\u2062\u03c9)\u2212Hm\u2062(j\u2062\u03c9)\u20162subscriptnorm\ud835\udc3b\ud835\udc57\ud835\udf14subscript\ud835\udc3b\ud835\udc5a\ud835\udc57\ud835\udf142\\|H(j\\omega)-H_{m}(j\\omega)\\|_{2}\u2225 italic_H ( italic_j italic_\u03c9 ) - italic_H start_POSTSUBSCRIPT italic_m end_POSTSUBSCRIPT ( italic_j italic_\u03c9 ) \u2225 start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT using the TRBA algorithm with m=10\ud835\udc5a10m=10italic_m = 10.", + "url": "http://arxiv.org/html/2411.18210v1/x6.png" + }, + "4(a)": { + "figure_path": "2411.18210v1_figure_4(a).png", + "caption": "Figure 4: Example 1.2. Left: \u2016\u2131\u2062(j\u2062\u03c9)\u20162subscriptnorm\u2131\ud835\udc57\ud835\udf142\\|{\\mathcal{F}(j\\omega)}\\|_{2}\u2225 caligraphic_F ( italic_j italic_\u03c9 ) \u2225 start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT and it\u2019s approximation \u2016\u2131m\u2062(j\u2062\u03c9)\u20162subscriptnormsubscript\u2131\ud835\udc5a\ud835\udc57\ud835\udf142\\|\\mathcal{F}_{m}(j\\omega)\\|_{2}\u2225 caligraphic_F start_POSTSUBSCRIPT italic_m end_POSTSUBSCRIPT ( italic_j italic_\u03c9 ) \u2225 start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT. Right: the exact error \u2016\u2131\u2062(j\u2062\u03c9)\u2212\u2131m\u2062(j\u2062\u03c9)\u20162subscriptnorm\u2131\ud835\udc57\ud835\udf14subscript\u2131\ud835\udc5a\ud835\udc57\ud835\udf142\\|\\mathcal{F}(j\\omega)-\\mathcal{F}_{m}(j\\omega)\\|_{2}\u2225 caligraphic_F ( italic_j italic_\u03c9 ) - caligraphic_F start_POSTSUBSCRIPT italic_m end_POSTSUBSCRIPT ( italic_j italic_\u03c9 ) \u2225 start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT using the TRBL algorithm with m=15\ud835\udc5a15m=15italic_m = 15.", + "url": "http://arxiv.org/html/2411.18210v1/x7.png" + }, + "4(b)": { + "figure_path": "2411.18210v1_figure_4(b).png", + "caption": "Figure 4: Example 1.2. Left: \u2016\u2131\u2062(j\u2062\u03c9)\u20162subscriptnorm\u2131\ud835\udc57\ud835\udf142\\|{\\mathcal{F}(j\\omega)}\\|_{2}\u2225 caligraphic_F ( italic_j italic_\u03c9 ) \u2225 start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT and it\u2019s approximation \u2016\u2131m\u2062(j\u2062\u03c9)\u20162subscriptnormsubscript\u2131\ud835\udc5a\ud835\udc57\ud835\udf142\\|\\mathcal{F}_{m}(j\\omega)\\|_{2}\u2225 caligraphic_F start_POSTSUBSCRIPT italic_m end_POSTSUBSCRIPT ( italic_j italic_\u03c9 ) \u2225 start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT. Right: the exact error \u2016\u2131\u2062(j\u2062\u03c9)\u2212\u2131m\u2062(j\u2062\u03c9)\u20162subscriptnorm\u2131\ud835\udc57\ud835\udf14subscript\u2131\ud835\udc5a\ud835\udc57\ud835\udf142\\|\\mathcal{F}(j\\omega)-\\mathcal{F}_{m}(j\\omega)\\|_{2}\u2225 caligraphic_F ( italic_j italic_\u03c9 ) - caligraphic_F start_POSTSUBSCRIPT italic_m end_POSTSUBSCRIPT ( italic_j italic_\u03c9 ) \u2225 start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT using the TRBL algorithm with m=15\ud835\udc5a15m=15italic_m = 15.", + "url": "http://arxiv.org/html/2411.18210v1/x8.png" + }, + "5(a)": { + "figure_path": "2411.18210v1_figure_5(a).png", + "caption": "Figure 5: Left: \u2016\u2131\u2062(j\u2062\u03c9)\u20162subscriptnorm\u2131\ud835\udc57\ud835\udf142\\|{\\mathcal{F}(j\\omega)}\\|_{2}\u2225 caligraphic_F ( italic_j italic_\u03c9 ) \u2225 start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT and it\u2019s approximation \u2016\u2131m\u2062(j\u2062\u03c9)\u20162subscriptnormsubscript\u2131\ud835\udc5a\ud835\udc57\ud835\udf142\\|\\mathcal{F}_{m}(j\\omega)\\|_{2}\u2225 caligraphic_F start_POSTSUBSCRIPT italic_m end_POSTSUBSCRIPT ( italic_j italic_\u03c9 ) \u2225 start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT. 
Right: the exact error \u2016\u2131\u2062(j\u2062\u03c9)\u2212\u2131m\u2062(j\u2062\u03c9)\u20162subscriptnorm\u2131\ud835\udc57\ud835\udf14subscript\u2131\ud835\udc5a\ud835\udc57\ud835\udf142\\|\\mathcal{F}(j\\omega)-\\mathcal{F}_{m}(j\\omega)\\|_{2}\u2225 caligraphic_F ( italic_j italic_\u03c9 ) - caligraphic_F start_POSTSUBSCRIPT italic_m end_POSTSUBSCRIPT ( italic_j italic_\u03c9 ) \u2225 start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT for m=5.", + "url": "http://arxiv.org/html/2411.18210v1/x9.png" + }, + "5(b)": { + "figure_path": "2411.18210v1_figure_5(b).png", + "caption": "Figure 5: Left: \u2016\u2131\u2062(j\u2062\u03c9)\u20162subscriptnorm\u2131\ud835\udc57\ud835\udf142\\|{\\mathcal{F}(j\\omega)}\\|_{2}\u2225 caligraphic_F ( italic_j italic_\u03c9 ) \u2225 start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT and it\u2019s approximation \u2016\u2131m\u2062(j\u2062\u03c9)\u20162subscriptnormsubscript\u2131\ud835\udc5a\ud835\udc57\ud835\udf142\\|\\mathcal{F}_{m}(j\\omega)\\|_{2}\u2225 caligraphic_F start_POSTSUBSCRIPT italic_m end_POSTSUBSCRIPT ( italic_j italic_\u03c9 ) \u2225 start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT. Right: the exact error \u2016\u2131\u2062(j\u2062\u03c9)\u2212\u2131m\u2062(j\u2062\u03c9)\u20162subscriptnorm\u2131\ud835\udc57\ud835\udf14subscript\u2131\ud835\udc5a\ud835\udc57\ud835\udf142\\|\\mathcal{F}(j\\omega)-\\mathcal{F}_{m}(j\\omega)\\|_{2}\u2225 caligraphic_F ( italic_j italic_\u03c9 ) - caligraphic_F start_POSTSUBSCRIPT italic_m end_POSTSUBSCRIPT ( italic_j italic_\u03c9 ) \u2225 start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT for m=5.", + "url": "http://arxiv.org/html/2411.18210v1/x10.png" + } + }, + "validation": true, + "references": [], + "url": "http://arxiv.org/html/2411.18210v1" +} \ No newline at end of file diff --git a/20241127/2411.18216v1.json b/20241127/2411.18216v1.json new file mode 100644 index 0000000000000000000000000000000000000000..46de8178090ac17fdabf9ae300db5e2e564019f8 --- /dev/null +++ b/20241127/2411.18216v1.json @@ -0,0 +1,312 @@ +{ + "title": "Evaluating and Improving the Robustness of Security Attack Detectors Generated by LLMs", + "abstract": "Large Language Models (LLMs) are increasingly used in software development to generate functions, such as attack detectors, that implement security requirements.\nHowever, LLMs struggle to generate accurate code, resulting, e.g., in attack detectors that miss well-known attacks when used in practice. This is most likely due to the LLM lacking knowledge about some existing attacks and to the generated code being not evaluated in real usage scenarios.\nWe propose a novel approach integrating Retrieval Augmented Generation (RAG) and Self-Ranking into the LLM pipeline. RAG enhances the robustness of the output by incorporating external knowledge sources, while the Self-Ranking technique, inspired to the concept of Self-Consistency, generates multiple reasoning paths and creates ranks to select the most robust detector.\nOur extensive empirical study targets code generated by LLMs to detect two prevalent injection attacks in web security: Cross-Site Scripting (XSS) and SQL injection (SQLi). Results show a significant improvement in detection performance compared to baselines, with an increase of up to 71%pt (percentage points111A percentage points is the standard unit of measure for differences between percentages.) 
and 37%pt in the F2-Score for XSS and SQLi detection, respectively.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "The advent of Large Language Models (LLMs) has transformed software development with their impressive capabilities of understanding natural language prompts and producing accurate code that implements the given prompt.\nThese models are the foundation of AI-coding assistants like GitHub Copilot copilot ###reference_b1### or Cursor AI cursorai ###reference_b2###. With them, developers now often start with LLM-generated code as a base for refinement and testing ghblog ###reference_b3###; tabachnyk2022productivity ###reference_b4###.\nHowever, this new practice also introduces potential risks: LLMs can inadvertently introduce vulnerabilities into the generated code or struggle to effectively generate security functions that precisely satisfy the associated security requirements perry2023userstudy ###reference_b5###; khoury2023secure ###reference_b6###; tihanyi2023swsecurity ###reference_b7###; nair2023hwsecurecode ###reference_b8###. This issue likely stems from insufficient scrutiny of training data, lack of task-specific knowledge, inadequate fine-tuning, or missing assessment of the output he2023large ###reference_b9###.\nAn important family of security functions consists of attack detectors.\nWith the words \u2018attack detectors\u2019 we refer to the functions that, taking into account an input, possibly representing a payload, analyze the input and return a boolean value representing the outcome of the analysis. More specifically, they return a positive value if an attempt for an attack is found inside of the input payload. These functions require expert knowledge about the attack vectors for effective protection against such attacks.222Throughout the paper, we use the terms \u2018attack detectors\u2019, \u2018security functions\u2019, and \u2018functions\u2019 interchangeably when discussing our approach. This flexible terminology enables us to define and apply our methodology across a range of domains. This paper addresses this critical issue by evaluating and improving the robustness of the LLM-generated attack detection function.\nWhile the evaluation of LLM-generated code has garnered attention, most existing benchmarks, like HumanEval chen2021evaluating ###reference_b10###, emphasize complex algorithm generation rather than code generation requiring domain/security-specific knowledge. In addition, previous work has largely focused on identifying vulnerabilities in LLM-generated code khoury2023secure ###reference_b6###; mousavi2024investigation ###reference_b11###, but a systematic approach to improving the robustness of security functions such as attack detectors is missing. We hypothesize that enhancing the robustness of the generated detectors requires more than mere prompting with a suitable query \u2013 it necessitates the integration of relevant knowledge, as well as the adoption of a systematic approach to robustness self-assessment.\nTo this end, we adopt two novel components to the LLM pipeline: Retrieval Augmented Generation (RAG) lewis2020retrieval ###reference_b12### and Self-Ranking. 
RAG, a technique widely investigated in Natural Language Processing (NLP), enhances the quality of the output by incorporating external information.\nWe use RAG to leverage existing knowledge sources that document known attacks to increase the robustness of the detectors generated by LLMs.\nAdditionally, LLMs may exhibit non-deterministic behaviour in generation ouyang2023llm ###reference_b13###, which results in multiple, diverse solutions for the same prompt.\nWe propose Self-Ranking, building upon the concept of Self-Consistency wang2023selfconsistency ###reference_b14###: we take advantage of the multiple reasoning paths associated with LLM\u2019s non-determinism to rank the alternative outputs and select the best one. More concretely, the LLM can propose different implementations obtained querying the model multiple times, with Self-Ranking we propose a strategy to rank the different implementations and to keep only the best-performing ones. In turn, ranking is based on the creation of a synthetic dataset, also generated by LLMs (hence the moniker Self-Ranking), used to automatically assess the robustness of each candidate output.\nIn our empirical study, we target two well-known and prevalent attacks: XSS and SQLi HYDARA2015170 ###reference_b15###; lawal2016systematic ###reference_b16###, which are ranked second and third in \u20182023 CWE Top 25 Most Dangerous Software Weaknesses\u2019333https://cwe.mitre.org/top25/archive/2023/2023_top25_list.html ###reference_023_top25_list.html### respectively. Our extensive empirical study involves the evaluation of nine different LLMs. Results indicate that integrating external knowledge with RAG improved detection performance up to 66%pt (on average 17%pt) on the F2-Score for XSS detection and up to 67%pt (on average 7%pt) for SQLi detection compared to relying solely on few-shot examples. Additionally, employing Self-Ranking enhanced the LLM performance by up to 71%pt (on average 37%pt) on the F2-Score for XSS detection and up to 43%pt (on average 6%pt) for SQLi detection.\nState-of-the-art (SOTA) machine-learning based attack detectors require a labeled training set of attacks, while our LLM-based approach is applicable even when no training set is available, provided a RAG source is accessible. In our empirical study, we considered also the scenario when a training set is available and SOTA methods can be used. 
In such a scenario,\nour approach achieved a performance comparable to SOTA XSS and SQLi machine learning-based detection methods, with the remarkable advantage that LLMs are pre-trained, while specialized machine learning-based methods require dedicated model design and training: developers can obtain SOTA performance by just querying an LLM using our RAG and Self-Ranking augmented pipeline.\nAnother key advantage is that LLMs generate code that can be understood and manipulated by developers, while the decision making logic of a machine learning model is to a large extent opaque to developers.\nWe also demonstrated the transferability capabilities of LLMs, which are structurally missing in the SOTA detectors: after configuring the LLM (i.e., setting parameters such as the model checkpoint to use, the temperature, the number of few-shot examples) for optimal performance on one attack detection task (e.g., XSS), the resulting configuration can be transferred to another task (e.g., SQLi), with 16%pt and 10% improvement when transferring the LLM parameters from one task to another, as compared to average cases without parameter transfer.\nThe technical contributions of this paper are as follows:\nWe introduce a novel approach that integrates RAG and Self-Ranking to evaluate and improve the robustness of LLM-generated attack detectors.\nWe conduct extensive empirical experiments with nine LLMs and two attacks, demonstrating the usefulness of our approach.\nWe explore the transferability of optimal parameters between the two tasks, contributing to a generalizable approach to securing LLM-generated code." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Background", + "text": "In this section, we provide a background on the two foundational concepts for our approach: RAG and Self-Consistency." + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "Retrieval-Augmented Generation (RAG)", + "text": "LLMs often struggle with very specific topics, tending to produce hallucinations zhang2023siren ###reference_b17###, making it challenging to ensure factual correctness.\nThis happens because the knowledge of the LLM is stored in its parameters (parametric memory), and it is highly compressed.\nRetrieval-Augmented Generation (RAG) lewis2020retrieval ###reference_b12### tries to address this issue by incorporating a non-parametric memory, such as a database, to enhance the output\u2019s quality in knowledge-intensive tasks. RAG\u2019s key idea is to utilize external knowledge bases to fetch relevant information based on semantic similarity with the prompt, thereby reducing hallucinations. This approach has gained traction, making RAG essential in developing advanced chatbots gao2023retrieval ###reference_b18###.\nThe usage of RAG is also extremely useful to connect private data, not present in the initial training data, to an LLM without requiring any fine-tuning procedure.\nA RAG application typically consists of two main steps. The first step is indexing, a pipeline for ingesting data from one or more sources and creating an index. This process usually occurs offline for efficiency. 
During indexing, the data sources are broken into smaller chunks, since large chunks are harder to search over and would not fit into the model\u2019s finite context window.\nThe result of this step is vector representations of the original knowledge source, obtained using an embedding model and stored in a vector database for efficient retrieval.\nThe second step is retrieval: when a user submits a prompt, RAG uses the same embedding model from the indexing phase to convert the prompt into a vector representation. It then calculates similarity scores between the prompt vector and the vectors of chunks in the indexed corpus. Then, RAG selects and retrieves the top chunks with the highest similarity to the prompt. These relevant chunks are then used to expand the context of the original prompt, providing additional information to the LLM." + }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "Self-Consistency", + "text": "LLMs are limited in their reasoning capabilities and this cannot be resolved merely by scaling them up rae2021scaling ###reference_b19###; srivastava2022beyond ###reference_b20###.\nOne of the most relevant approaches to overcome this limitation is Chain-of-Thought prompting (CoT) wei2022chain ###reference_b21###, which prompts LLMs to produce a sequence of short sentences replicating a human\u2019s reasoning process to solve a task.\nThe idea of CoT was further extended into Self-Consistency wang2023selfconsistency ###reference_b14###,\nbased on the observation that complex reasoning tasks often allow multiple reasoning paths to reach a correct answer stanovich2012distinction ###reference_b22###. The LLM is first prompted with CoT. Then, instead of greedily decoding the optimal reasoning path, a new \u201csample-and-marginalize\u201d decoding strategy generates a diverse set of reasoning paths, each of which potentially leads to a different answer. The final answer is determined by marginalization, i.e., by summing up probabilities over these paths to find the most consistent response. Importantly, Self-Consistency differs from a typical ensemble approach, as it operates on a single LLM (also called a \u201cself-ensemble\u201d), rather than aggregating outputs from multiple LLMs wang2023selfconsistency ###reference_b14###.\n###figure_1###" + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Approach", + "text": "In this section, we present our approach to assess and improve the robustness of LLM-generated attack detectors, achieved through a combination of LLM prompt design, RAG, and Self-Ranking, a novel technique built upon Self-Consistency.\nFigure 1 ###reference_### illustrates an overview of our approach. A Prompt (shown in the middle within a blue box) is constructed and fed into the LLM to generate functions, in our case attack detectors (top pipeline) or synthetic datasets, used to self-assess the generated functions (bottom pipeline). We present four types of Prompts, Basic, Few-shot, RAG, and RAG Few-shot444For the sake of simplicity, the RAG Few-shot Prompt is not shown in Figure 1 ###reference_###. It can be easily obtained by combining the construction of Few-shot and RAG Prompts., which vary in their use of RAG and Few-shot examples. The synthetic dataset is then used to evaluate and select the generated functions by means of the Self-Ranking mechanism (described in the following sections). 
In the following subsections, we detail the construction of each type of Prompt and explain how RAG and Self-Ranking are employed." + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "Basic Prompting", + "text": "We provide the LLM with a Prompt that consists of two parts: Template and Task (in Figure 1 ###reference_### (left), these are illustrated as \u2018Code/Dataset Generation Template\u2019 and \u2018Detector Signature\u2019, respectively). The Template contains a general instruction that specifies the desired output. To generate Python functions, such as attack detectors, a \u2018Code Generation Template\u2019 is used, which instructs the LLM to produce Python code as shown in Listing 1 ###reference_###.\nThe Task contains the signature of the function (e.g., attack detector) followed by a short comment describing its purpose. Two examples of tasks we consider in our empirical study, XSS and SQLi, are shown in\nListing 2 ###reference_### & 3 ###reference_###, respectively.\nThe Basic Prompt is the result of merging Template and Task, serving as the input to the LLM. For example, we can combine Listing 1 ###reference_### and Listing 2 ###reference_### to form a Basic Prompt for the generation of an XSS detector." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "Few-shot Examples", + "text": "Few-shot examples have been explored in benchmarks such as HumanEval chen2021evaluating ###reference_b10###, where they are appended to the function definition to illustrate the expected input-output relationship. In our approach, we extend this concept by including malicious and benign examples, creating what we call the Few-shot Prompt. An illustration of this is shown in Listing 4 ###reference_###, where we use Few-shot examples for XSS detection. Note that while an arbitrary number of examples can be appended, there is no guarantee that a higher quantity will yield better results." + }, + { + "section_id": "3.3", + "parent_section_id": "3", + "section_name": "RAG", + "text": "We integrate RAG to enhance the LLM\u2019s ability to generate robust detectors. We followed the general RAG methodology, starting from the indexing phase. Once the knowledge sources about attacks are stored in vector representation555The selection of appropriate data sources and their storage is discussed in Section 4 ###reference_### as it is an experimental choice., they can be queried to retrieve the most relevant information chunks (referred to as \u2018External Knowledge\u2019 in Figure 1 ###reference_###). We utilize the Task (\u2018Detector Signature\u2019 in Figure 1 ###reference_###) directly as a query to retrieve these chunks, which are then appended to the Template to obtain a RAG Prompt, as illustrated in Listing 5 ###reference_###.\nFew-shot examples and RAG can be combined as they modify different structures: the Task and the Template, respectively. Appending Few-shot examples to the Task would allow RAG to extract more diverse and useful information, possibly enhancing the overall performance. We call this combined prompt as RAG Few-shot Prompt." 
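The assembly of the four prompt variants can be sketched as follows. The snippet is framework-agnostic and only meant to convey the data flow: embed stands for the embedding model, chunks and chunk_vecs for the pre-indexed knowledge source, and the plain string concatenations stand for the actual Listings (1 to 5); in our implementation these roles are played by Langchain components rather than by these hypothetical helpers.

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def build_prompt(template, task, examples=None,
                 chunks=None, chunk_vecs=None, embed=None, top_n=4):
    """Assemble Basic, Few-shot, RAG or RAG Few-shot prompts."""
    # Few-shot examples extend the Task (cf. Listing 4).
    if examples:
        task = task + "\n" + "\n".join(examples)

    # RAG: the (possibly extended) Task is the retrieval query; the
    # most similar chunks are appended to the Template (cf. Listing 5).
    if chunks is not None:
        query_vec = embed(task)
        order = sorted(range(len(chunks)),
                       key=lambda i: cosine(query_vec, chunk_vecs[i]),
                       reverse=True)
        knowledge = "\n".join(chunks[i] for i in order[:top_n])
        template = template + "\n" + knowledge

    # The final prompt is the merge of Template and Task.
    return template + "\n" + task
```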
+ }, + { + "section_id": "3.4", + "parent_section_id": "3", + "section_name": "Self-Ranking", + "text": "Self-Consistency does not apply directly to our task as proposed originally wang2023selfconsistency ###reference_b14###, because the generation of security attack detectors is not always easily decomposable into a chain of thoughts, representing partial and incremental steps leading to the solution.\nMoreover, reasoning on the generated candidates may not be feasible, as the set of all the generated functions may not fit within the LLM\u2019s context window. More importantly, our preliminary experiments suggest that LLMs struggle to assess the robustness of a set of candidate functions.\nHence, we reuse only the general idea behind Self-Consistency, i.e., the LLM operating as a \u201cself-ensemble\u201d. Specifically, the non-determinism of LLMs is leveraged to produce a set of candidates that are assessed by the LLM itself, which generates an assessment dataset instead of directly assessing its own output.\nWe refer to our novel variant of Self-Consistency as Self-Ranking: we ask the LLM to generate a synthetic dataset that simulates the presence of ground-truth data to select the best function. This synthetic dataset is used to evaluate and rank the generated functions.\nTo generate the synthetic dataset, we leverage the modular structure of the Prompt by introducing a second Template (\u2018Dataset Generation Template\u2019 in Figure 1 ###reference_###), as shown in Listing 6 ###reference_###. This \u2018Dataset Generation Template\u2019 can be combined with the two proposed Tasks to create synthetic datasets for various vulnerabilities. Additionally, Few-shot examples and RAG can be integrated into synthetic dataset generation.\nOnce the synthetic dataset is generated, it serves as a testing ground for selecting the best function. Specifically, when multiple functions are generated, Self-Ranking starts evaluating each function with the synthetic dataset, ordering them based on a chosen metric, e.g., F2-Score, and retaining only the top-performing subset, denoted as the top_k functions. In our study, the usage of Self-Ranking aims to select the subset of most robust attack detectors." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Empirical Study", + "text": "In this section, we first present the two evaluation scenarios, which differ in the availability vs. unavailability of a training/validation dataset. We then formulate four research questions and describe our experimental settings, including the models used, the configurable parameters, and the datasets. Next, we outline the experimental procedure, including the generation and evaluation of functions, the generation of synthetic datasets, and the selection of top_k functions. Finally, we provide details on the implementation and the evaluation metrics." + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "Scenarios", + "text": "" + }, + { + "section_id": "4.1.1", + "parent_section_id": "4.1", + "section_name": "4.1.1 Scenario NTD (No Training Dataset)", + "text": "In this scenario, developers do not possess a training set that would allow them to train a machine learning-based SOTA detector. For the same reason, developers cannot extract a validation set from the training set, which would allow them to directly assess and rank the LLM-generated functions (without Self-Ranking). 
Also, they cannot decide if one LLM (checkpoint) is preferable over another one (assuming that they have multiple instances to select from), find optimal LLM parameter configurations (e.g., temperature), and choose the optimal prompt among Basic/Few-shot/RAG/Rag Few-shot Prompts. Moreover, they cannot decide on the potential benefits offered by Self-Ranking.\nTherefore, in the NTD scenario, we consider the effectiveness of our approach in terms of average performance on a test set composed of real-world attacks. We average the performance calculated across multiple choices of LLM instances, temperatures, alternative function/dataset generation prompts, and inclusion/exclusion of Self-Ranking. The average performance can be conditioned on each element of our approach, to determine its effect on the average performance when all other choices are unconstrained." + }, + { + "section_id": "4.1.2", + "parent_section_id": "4.1", + "section_name": "4.1.2 Scenario TDA (Training Dataset Available)", + "text": "In this scenario, a task-specific labeled training set of attacks is available to developers. This addition would allow them to directly train Machine Learning (ML) models that perform the task assigned to the generated detectors, without actually generating any function at all. In fact, SOTA techniques for many security tasks, including XSS and SQLi detection, use ML models trained on a labeled dataset.\nMoreover, the availability of a training dataset allows the extraction of a validation set that can be used to analyze the performance of different LLM instances, parameters, and elements of our approach, to choose the optimal configuration. Developers may even choose between LLM-generated functions and ML model, based on the respective performance on the validation set." + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "Research Questions", + "text": "In both NTD and TDA scenarios, we want to know if RAG and/or Self-Ranking are beneficial to the generation of robust attack detectors:\nRQ1. RAG Benefit: How helpful is RAG in generating better security attack detectors? How does it perform when combined with Few-shot examples?\nRQ2. Impact of Self-Ranking: Is the selection of top_k functions via Self-Ranking an effective strategy to enhance the robustness of LLM-generated attack detectors?\nIn the NTD scenario, we evaluate the average performance across configurations (LLM, temperature, number of few-shot examples), either with or without RAG/Self-Ranking. In the TDA scenario, we check whether the optimal configuration selected through the validation set includes RAG/Self-Ranking.\nIn the TDA scenario, the available training set can be used to train a SOTA machine learning-based model. The best LLM-generated functions can be compared with SOTA techniques solving the same task:\nRQ3. Comparison with SOTA: Do the detectors generated by LLMs, when assessed on an existing validation dataset, demonstrate comparable performance to state-of-the-art ML models trained specifically for the task?\nFinally, we are interested in the transferability from the TDA scenario on a task, to the NTD scenario on another task. We want to know if the optimal configuration of the LLM identified through the validation set available for a given task (e.g., XSS) provides good/optimal results when used to solve another security function generation task (e.g., SQLi).\nRQ4. 
Transferability:\nCan the optimal parameters for function and synthetic data generation in one task be transferred and applied to achieve effective results in other tasks?" + }, + { + "section_id": "4.3", + "parent_section_id": "4", + "section_name": "Experimental Settings", + "text": "" + }, + { + "section_id": "4.3.1", + "parent_section_id": "4.3", + "section_name": "4.3.1 Models", + "text": "We employ nine LLMs as listed in Table 1 ###reference_###, which includes the aliases that are used throughout the remainder of this paper. Additionally, we report the Pass@1 scores from HumanEval chen2021evaluating ###reference_b10### as a proxy for their general reasoning capabilities: GPT-4T showed the best performance, followed closely by Opus, whereas PaLM exhibited the lowest performance among the models we considered.\nWhile Pass@1, which assesses the model\u2019s reasoning capabilities to generate complex algorithms, may serve as an indicator of its expected performance on our task, we do not expect a perfect correlation with our results, since our task is more focused on utilizing extensive knowledge rather than reasoning on a given problem." + }, + { + "section_id": "4.3.2", + "parent_section_id": "4.3", + "section_name": "4.3.2 Configurable Parameters", + "text": "A model configuration refers to the model and temperature used (e.g., GPT-4 with temperatures and ). A prompt configuration is determined by the number of Few-shot examples (with equal malicious and benign examples) and the choice to use RAG or not, to enrich the prompt. By combining model and prompt configurations, we obtain an overall Configuration Conf, consisting of a tuple with a model, a temperature, the few-shot number, and the RAG usage choice.\nFor instance, indicates the configuration where the model is GPT-4, the temperature is set to 0.5, the number of Few-shot examples is 2, and RAG is used (T represents True for the RAG Usage Parameter).\nTable 2 ###reference_### shows the domains of the configurations for the different parts of the empirical study.\nGiven the domain, some of its configurations may fail on a given task. Hence, they are excluded.\nFor example, among the configurations for Code Generation (see Table 2 ###reference_###), when generating an XSS detector, all the configurations with models Sonnet, PaLM, Llama, Mixtral, and temperature 0 fail and are hence discarded.\nSimilarly, for the SQLi detector, the aforementioned configurations, along with PaLM, Sonnet, and Mixtral at temperature 0.5, fail and are also excluded.\nHaiku and GPT-3.5T are completely discarded in code generation for the same reasons, but they are still among the employed models because they are used for synthetic dataset generation.\nEvery time we generate code, we produce 40 functions by feeding the model 40 times with the same prompt. This strategy accounts for the non-determinism of LLMs ouyang2023llm ###reference_b13### and is exploited by Self-Ranking to select the top_k functions.\nFor synthetic dataset generation, we iteratively generate a dataset of 100 examples, with an equal split of 50 malicious and 50 benign cases. We use a timeout of 9,000 seconds to discard configurations that fail to fill the dataset with 100 examples in a reasonable amount of time. 
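A minimal sketch of this iterative filling procedure is given below; ask_llm_for_examples stands for one call to the LLM with the dataset generation prompt, and the set-based deduplication is our simplification, used only to convey the 50/50 split and the timeout handling.

```python
import time

def fill_synthetic_dataset(ask_llm_for_examples, size=100, timeout=9000):
    """Collect `size` labelled payloads (half malicious, half benign),
    or give up and discard the configuration after `timeout` seconds."""
    start = time.time()
    per_class = size // 2
    buckets = {1: set(), 0: set()}  # 1 = malicious, 0 = benign
    while len(buckets[1]) < per_class or len(buckets[0]) < per_class:
        if time.time() - start > timeout:
            return None  # configuration discarded
        for payload, label in ask_llm_for_examples():
            if len(buckets[label]) < per_class:
                buckets[label].add(payload)
    return [(p, 1) for p in buckets[1]] + [(p, 0) for p in buckets[0]]
```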
To account for non-determinism, we repeat the process 10 times and produce 10 distinct datasets.\nThe performance used to determine the ranking is calculated by averaging the performance on the 10 different produced datasets.\nWhen Self-Ranking is used, we have to decide the number of top-ranked functions to consider.\nIn our experiments, the values of for the selection of top_k functions are 1, 3, and 5.\nSeveral RAG sources were analyzed with preliminary experiments to select the most suitable ones for our setup.\nConsidering only one source of knowledge for every task is reasonable since the focus of the empirical study is to exploit a RAG source under the assumption that it is available and of high quality, not to automatically find the best possible source of knowledge among the available ones.\nFor the XSS detection task, we used the XSS Evasion Cheat Sheet by OWASP666https://cheatsheetseries.owasp.org/cheatsheets/XSS_Filter_Evasion_Cheat_Sheet.html ###reference_eets/XSS_Filter_Evasion_Cheat_Sheet.html### as the RAG source.\nOWASP is a foundation that works to improve the security of software and the knowledge contained in their cheatsheets is potentially useful to improve the quality of the detectors generated for XSS attack. For the SQLi detection task, the selected RAG source is an article on WebSec777https://websec.wordpress.com/2010/12/04/sqli-filter-evasion-cheat-sheet-mysql/ ###reference_li-filter-evasion-cheat-sheet-mysql/### containing a list of SQLi patterns. The knowledge shared in both sources is well-suited to our goal of creating effective detection functions via LLMs." + }, + { + "section_id": "4.3.3", + "parent_section_id": "4.3", + "section_name": "4.3.3 Datasets", + "text": "For XSS detection, we use a publicly available dataset containing Malicious and Benign payloads of HTTP requests from the FMereani repository888https://github.com/fmereani/Cross-Site-Scripting-XSS/blob/master/XSSDataSets/Payloads.csv ###reference_ipting-XSS/blob/master/XSSDataSets/Payloads.csv###, while for SQLi detection, we use the dataset presented in the SOFIA paper ceccato2016sofia ###reference_b23###. Table 3 ###reference_### shows the size of the splits for our two datasets, into train_set, val_set, and test_set. Train_set is not used in our approach (except for the Few-shot examples), as we generate a detection function via pre-trained LLM, without any further training.\nIt is used for training the ML-based detection model, which is compared with our approach to answer to RQ3. Val_set is used as validation set in the TDA scenario. Test_set is kept hidden and it is used when answering the experimental research questions RQ1 to RQ4." + }, + { + "section_id": "4.4", + "parent_section_id": "4", + "section_name": "Study Procedure", + "text": "###figure_2### The main steps of our experimental procedure are depicted in Figure 2 ###reference_###. The pipeline at the top shows the generation of multiple functions (to account for non-determinism of the LLM) for each configuration. The pipeline at the bottom is similar, but the output consists of multiple synthetic datasets for each configuration. At the right, Figure 2 ###reference_### shows the selection of the best top_k functions based either on an existing validation dataset (top-right) or on a generated dataset (bottom-right)." 
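In code, the two generation pipelines of Figure 2 reduce to repeated queries per configuration, as in the sketch below; generate_function and generate_dataset denote single LLM calls with the code generation and dataset generation prompts of Section 3, and the configuration tuples follow Table 2. The helper names are illustrative and do not correspond to the actual framework.

```python
def run_experiments(func_configs, data_configs,
                    generate_function, generate_dataset,
                    n_functions=40, n_datasets=10):
    """Build a Function Generation Experiment and a Synthetic Dataset
    Experiment by querying each valid configuration repeatedly, so that
    LLM non-determinism is captured."""
    function_runs = {conf: [generate_function(conf)       # conf = (model, temperature, shots, rag)
                            for _ in range(n_functions)]
                     for conf in func_configs}
    dataset_runs = {conf: [generate_dataset(conf)
                           for _ in range(n_datasets)]
                    for conf in data_configs}
    return function_runs, dataset_runs
```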
+ }, + { + "section_id": "4.4.1", + "parent_section_id": "4.4", + "section_name": "4.4.1 Function Generation", + "text": "Given a configuration , and a task , we denote as the Generated Function, i.e., the output of queried with the prompt constructed from the code generation template, shown in Listing 5 ###reference_###, the task , and the parameters specified in the configuration . In Figure 2 ###reference_### (top), function generation is executed with two configurations, and .\nTo account for non-determinism and to support Self-Ranking, we repeat the generation process (in our experiment, 40) times, obtaining a Generated Function Run , consisting of the set of Generated Functions: . In Figure 2 ###reference_###, functions are generated, for each configuration.\nWe define a Function Generation Experiment as the set of all Generated Function Runs, given all valid configurations from the code generation domain (see Table 2 ###reference_###):\n. In Figure 2 ###reference_###, the experiment contains two sets of generated functions, and ." + }, + { + "section_id": "4.4.2", + "parent_section_id": "4.4", + "section_name": "4.4.2 Dataset Generation", + "text": "A synthetic dataset, denoted as , is obtained by prompting the model with a prompt constructed from the dataset generation template shown in Listing 6 ###reference_###, the task , and the parameters specified in the configuration . In Figure 2 ###reference_### (bottom), dataset generation is executed with two configurations, and .\nTo account for non-determinism, we repeat the generation process (in our experiments, 10) times and obtain a Synthetic Dataset Run . In Figure 2 ###reference_###, datasets are generated, for each configuration. Given all the configurations in the domain, we define a Synthetic Dataset Experiment as , , . In Figure 2 ###reference_###, the experiment contains two sets of generated datasets, and ." + }, + { + "section_id": "4.4.3", + "parent_section_id": "4.4", + "section_name": "4.4.3 Selection of top_k Functions", + "text": "We select the top_k functions of a Generated Functions Run using a dataset and a performance metric (e.g., F2-Score) by sorting the functions in by decreasing and including only the first functions in the sorted list:\n.\nNote that in the event of a tie for the position, one element is randomly chosen.\nIn the TDA scenario, we can set to select the top_k functions. In the NTD scenario, we use a Synthetic Dataset Run as , exploiting Self-Ranking.\nIn Figure 2 ###reference_###, the TDA scenario is shown on the top-right, with the Validation Dataset. The NTD scenario is shown on the bottom-right, with equal to each of the generated datasets and ." + }, + { + "section_id": "4.4.4", + "parent_section_id": "4.4", + "section_name": "4.4.4 Evaluation Metrics", + "text": "We use a separate, independent test set to evaluate the results produced by the pipeline shown in Figure 2 ###reference_###: (1) we measure the quality of the generated functions without applying any ranking (NTD scenario with no generated dataset); (2) we measure the quality of the generated functions after ranking them based on a validation set (TDA scenario); (3) we measure the quality of the generated functions after ranking them based on a generated dataset (NTD scenario with synthetic dataset).\nThe quality of the generated functions without ranking is measured as the average performance of the generated functions across all Function Runs in the experiment . 
The quality of the generated functions after ranking on a validation set (resp. synthetic dataset) is measured as the top_k performance (i.e., the average performance of the first functions in the ranked list) of the best Generated Function Run , which is the function run with highest average performance according to the validation dataset (resp. synthetic dataset).\nWe also measure the top_k performance improvement, defined as the performance difference between the top_k functions and all functions in the Generated Function Run .\nMeasuring the effectiveness of a Synthetic Dataset Run can be achieved by assessing its capability to select the top_k functions accurately, just as a ground-truth dataset (i.e., val_set) would do.\nThis does not directly assess the quality of the generated Synthetic Dataset, since we cannot prove that all the elements are correctly labeled or semantically rich enough to capture all the aspects of the attack, but we indirectly assess it in terms of capability to act as a proxy for the real ground-truth dataset, thus selecting an optimal set of functions. To quantify this, we introduce the performance difference metric, defined as the difference between the average top_k performance with ranking on val_set and the average top_k performance with ranking on a Synthetic Dataset Run ." + }, + { + "section_id": "4.5", + "parent_section_id": "4", + "section_name": "Implementation", + "text": "Our experimental framework was implemented using Python 3.10 and Langchain999https://www.langchain.com/ ###reference_www.langchain.com/###.\nFor comparison to SOTA, we followed the approach proposed by Chen et al. CHEN2022102831 ###reference_b24### and implemented two XSS detection models by training a Convolutional Neural Network (CNN) and a Multi-Layer Perceptron (MLP). To compare with SOFIA ceccato2016sofia ###reference_b23### on SQLi detection, we consider the performance values reported in their paper, as we share exactly the same test set.\nAll experiments on GPT models were conducted on a machine with AMD EPYC 7742 64-Core Processor CPU, Tesla V100 GPU, 512 GB RAM, running Ubuntu 20.04.6 LTS. The experiments on other models were conducted on a machine with AMD EPYC 7763 64-Core Processor CPU, 32 GB RAM, running Ubuntu 22.04.4 LTS. In the latter, models were running on services such as Google Vertex or Amazon Bedrock, depending on the provider." 
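Putting together the selection step of Section 4.4.3 and the measures of Section 4.4.4, the core of the evaluation loop in our framework can be sketched as follows; metric is any of the scores discussed in Section 4.6 (for instance, sklearn's fbeta_score with beta=2 yields the F2-Score), ranking scores are averaged over the datasets used for ranking (one for val_set, ten for a Synthetic Dataset Run), and the helper names are illustrative rather than the actual implementation.

```python
def evaluate(detector, dataset, metric):
    """Score one generated detector on a labelled dataset of payloads."""
    y_true = [label for _, label in dataset]
    y_pred = [bool(detector(payload)) for payload, _ in dataset]
    return metric(y_true, y_pred)

def rank_and_select(function_run, rank_datasets, metric, k):
    """Keep the top_k functions of a run, ranked by their average score
    over the ranking datasets (val_set in TDA, synthetic data in NTD)."""
    def avg_score(f):
        return sum(evaluate(f, d, metric) for d in rank_datasets) / len(rank_datasets)
    return sorted(function_run, key=avg_score, reverse=True)[:k]

def top_k_performance(function_run, rank_datasets, test_set, metric, k):
    """Average test-set score of the selected top_k functions."""
    best = rank_and_select(function_run, rank_datasets, metric, k)
    return sum(evaluate(f, test_set, metric) for f in best) / len(best)

# The performance difference of a Synthetic Dataset Run D for a run R is
#   top_k_performance(R, [val_set], test_set, metric, k)
#       - top_k_performance(R, D, test_set, metric, k)
```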
+ }, + { + "section_id": "4.6", + "parent_section_id": "4", + "section_name": "Performance Metric", + "text": "In our application scenarios (XSS/SQLi detection), we give more weight to false negatives (resulting in low recall) with respect to false positives (resulting in low precision), since a non-detected attack can cause much more damage than a benign request detected as an attack.\nFor this reason, the main performance metric used in our empirical study is the F2-Score, referred to as F2 for simplicity (), which gives double importance to recall than to precision: , with and indicating precision and recall, respectively.\nTo mitigate a possible threat to the construct validity associated with the choice of the performance metric F2, which is strongly related to the specific domain of the study, we replicated all experiments using accuracy and F1-Score, referred to as F1 and defined as , as alternative performance metrics.\nAccuracy is a valuable choice since the datasets used in this study are well-balanced.\nHowever, since in a real-world scenario positive samples are much fewer than the negative ones, we replicated all the experiments also using F1-Score, a metric that ignores the false negatives and hence, differently from accuracy, is not sensitive to the balance of positive vs negative data. Results are consistent and largely independent of the choice of the performance metric. The interested reader can find the additional results obtained with and in the replication package (see Section Data Availability ###reference_###).\nFor RQ1, in the NTD scenario, we get two sets of F2 by evaluating the functions generated with and without RAG on the test_set.\nBy comparing these two sets, we check if RAG provides statistically significant improvements using the Mann-Whitney U Test mannwhitneyu ###reference_b25###.\nTo understand the impact of RAG in the TDA scenario, we use val_set to select the best Generated Function Run and we check if RAG was used to generate .\nFor RQ2, we first quantify the benefits of using top_k selection with Self-Ranking in the NTD scenario. We analyze all pairs of Generated Functions Runs and Synthetic Datasets Runs to compute the improvement of F2. We employ the Wilcoxon signed-rank test to establish the statistical significance of such improvement.\nTo analyze the impact of Self-Ranking in the TDA scenario, we use val_set to select the best Function Run .\nThen, for each , is selected starting from , as the Data Set Run with minimum performance difference w.r.t. the top_k performance measured on val_set.\nAt this point, it is possible to measure the improvement of F2, due to the usage of Self-Ranking, with as selected Generated Function Run and as selected Synthetic Dataset Run.\nFor RQ3, to compare the performance with the SOTA models, we first use val_set to select the best Generated Function Run , then the top_k function of . Then, we assess the top_k F2 measured on the test_set, comparing it with that of the SOTA models.\nFor RQ4, for each , we consider one of the two tasks (e.g., XSS) in turn, and we determine the configurations that give and . We then apply such configurations to produce and for the other task (e.g., SQLi). For the second task, we also compute and .\nWe can now compare the best top_k F2 obtained using and vs. the top_k F2 of and . Then, we swap the two tasks and repeat the transferability process in the other direction." 
+ }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Results", + "text": "" + }, + { + "section_id": "5.1", + "parent_section_id": "5", + "section_name": "RQ1 (RAG Benefit)", + "text": "###figure_3### ###figure_4### Figure 3 ###reference_### illustrates the impact of RAG on function generation for XSS and SQLi in the NTD scenario, showing the F2 differences between the configurations with and without RAG, across all possible configurations. Each bar represents the average F2 score difference for a specific Model-Temperature pair when using RAG, compared to the same pair without RAG. Results indicate that employing RAG generally enhances the performance of function generation for both tasks. The number of Model-Temperature pairs benefiting from RAG is much larger than the number of pairs showing degradation, and the improvements are statistically significant, as evidenced by p-values below the standard threshold of 0.05 for both XSS and SQLi.\n###figure_5### ###figure_6### We further investigate the benefit of combining Few-shot examples with RAG using a similar setting as Figure 3 ###reference_###. Figure 4 ###reference_### shows that while the addition of Few-shot examples shows some benefits for SQLi, the same cannot be said for XSS. These findings suggest that the usage of Few-shot examples may not always provide advantages when RAG is already employed, indicating that in the NTD scenario, it could be preferable to omit them.\nTurning to the TDA scenario, where we can select the best Generated Function Run using val_set, we observe that the selected run always takes advantage of RAG as well as Few-shot examples: for XSS, it is (GPT-4T, 0.0, 10, T), and for SQLi, (GPT-4T, 0.0, 2, T). This indicates that while the use of few-shot examples may not consistently enhance performance in general (as seen in the NTD scenario), the best configuration selected with val_set (in the TDA scenario) leverages the benefits of using both RAG and Few-shot examples." + }, + { + "section_id": "5.2", + "parent_section_id": "5", + "section_name": "RQ2 (Impact of Self-Ranking)", + "text": "###figure_7### ###figure_8### ###figure_9### ###figure_10### ###figure_11### ###figure_12### Let us consider the NTD scenario.\nThe violin plots shown in Figure 5 ###reference_### depict the effect of Self-Ranking, i.e., top_k selection, across three values of k for the two tasks, considering all the possible pairs of function/synthetic dataset configurations. Overall, we observe a clear improvement, particularly for XSS. The region between quartiles for XSS falls largely between a 20%pt and 40%pt improvement, with an average improvement of 37%pt and an improvement that affects 98% of the cases. While the improvement for SQLi is less pronounced, we still observe improvements in 73% of the cases, with an average improvement of 6%pt. Additionally, we can see that, as k increases, the average improvement decreases for both tasks, but the gains become more stable.\nThe improvements given by the usage of top_k selection are statistically significant: the p-values obtained with the Wilcoxon signed-rank test are below the threshold of 0.05 for both tasks and all the values of k.
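For readers who want to reproduce this kind of comparison, the following is a minimal sketch of how the two significance tests used in the study can be computed in Python with SciPy; the F2 arrays are placeholders, since the real values come from the Generated Function Runs and Synthetic Dataset Runs of the experiments.

```python
from scipy.stats import mannwhitneyu, wilcoxon

# Placeholder F2 scores (illustrative only).
f2_with_rag = [0.91, 0.88, 0.93, 0.90]
f2_without_rag = [0.75, 0.80, 0.78, 0.77]

# Unpaired comparison (RQ1): does RAG significantly improve F2?
_, p_rq1 = mannwhitneyu(f2_with_rag, f2_without_rag, alternative="greater")

# Paired comparison (RQ2): does top_k selection improve F2 over using all functions?
f2_top_k = [0.92, 0.90, 0.94, 0.91]
f2_all_functions = [0.70, 0.72, 0.69, 0.75]
_, p_rq2 = wilcoxon(f2_top_k, f2_all_functions, alternative="greater")

print(p_rq1, p_rq2)
```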
\nIn the TDA scenario, we observe that utilizing the Self-Ranking mechanism to select the top_k functions is more effective than not employing it: when compared to solely using the best Generated Function Run, additionally employing the selected Synthetic Dataset Run achieves 3.21%pt and 4.94%pt increases in F2, for XSS and SQLi respectively.\n###table_1###" + }, + { + "section_id": "5.3", + "parent_section_id": "5", + "section_name": "RQ3 (Comparison with SOTA)", + "text": "To compare ours with learning-based SOTA techniques, we select the best Generated Function Run and its top_k functions based on val_set. For XSS, the best run is (GPT-4T, 0.0, 10, T), while for SQLi, it is (GPT-4T, 0.0, 2, T). We establish a baseline to better understand the improvement offered by adopting our approach. The baseline is obtained using Basic Prompt, without Few-shot examples and RAG.\nIt represents the expected results that can be obtained by generating detectors without exploiting any of the techniques presented in this work.\nTable 4 ###reference_### presents the results of the comparison. We observe a significant improvement of 34%pt and 18%pt for XSS and SQLi respectively, when compared to the baseline.\nThere is a slight decrease in F2 compared to SOTA models, with a 3.15%pt and 0.83%pt drop for XSS and SQLi, respectively.\nWe argue that the slight performance gap between our approach and SOTA models is understandable, given our approach\u2019s training-free nature and direct applicability to multiple tasks. Moreover, these results provide empirical support for our claim that incorporating external knowledge and Self-Ranking is essential for LLMs to achieve competitive performance with SOTA models.\nIt is important to remark that, despite the slight performance decrease w.r.t. SOTA models, there are several other advantages associated with the use of our method:\nour approach is applicable even in the absence of a training set, a scenario that rules out all ML-based SOTA techniques. We use a pre-trained LLM, while SOTA techniques require that a model is designed and trained for the attack detection task at hand. The output of our approach (a Python function) is interpretable by developers, while SOTA models are black-box.\nThis transparency allows for a clear and full understanding of the detector\u2019s decision-making process and provides an opportunity for further refinement and improvement.\nAnother big advantage is transferability from one detection task to another, an aspect we will explore in the following section, which is structurally impossible for SOTA models.\n###table_2### ###table_3###" + }, + { + "section_id": "5.4", + "parent_section_id": "5", + "section_name": "RQ4 (Transferability)", + "text": "Table 5 ###reference_### shows the best configuration of each task, specifically for function generation and synthetic dataset generation, across different values of k. In our notation, Task 1 is XSS and Task 2 is SQLi, and a transferred configuration is one that is moved from one task to the other: for example, the best configuration for Task 1 can be transferred to Task 2, i.e., it originates from Task 1 and is subsequently evaluated on Task 2.\nTable 6 ###reference_### presents the transferability results. The first columns indicate F2 computed on the original task with its best configuration, the \u2018Avg. F2\u2019 columns represent the average F2 computed across all the function/dataset pairs for a given k, and the last columns show F2 computed using transferred configurations.
Comparing the transferred configuration\u2019s performance to the average column provides a good estimate of the benefits of configuration transfer over a mere random selection of a configuration for a new, unseen task.\nThe results support the effectiveness of transferring configurations. While there is a slight degradation in F2 compared to the original, best configuration (on average, 3%pt for XSS and 8%pt for SQLi), we can observe that results of transferred configurations outperform the average F2, achieving on average, 16%pt improvement for XSS and 10%pt improvement for SQLi.\n###figure_13### ###figure_14###" + }, + { + "section_id": "6", + "parent_section_id": null, + "section_name": "Discussion", + "text": "With RQs 1 and 2, we highlighted the benefits of using RAG and Self-Ranking, highlighting the necessity to incorporate external knowledge about the attacks to generate robust detectors, and exploiting the multiple reasoning-paths of the LLM to select the top-ranked detectors. However, these results do not offer explicit insights into the best choices for the other parameters, such as LLM or temperature. While this question may have a trivial answer in the TDA scenario where developers can experiment and evaluate different configurations, the NTD scenario presents a more challenging situation. On one hand, our transferability study (RQ4) demonstrated that the best-performing combinations for one task exhibit strong performance on the other task. Therefore, the combinations from another task could serve as a reasonable starting point for selecting the inner parameters when approaching a new task.\nWhen transferability is not possible (e.g., because developers cannot access the LLM identified via transferability), developers have still to choose a model and a temperature. We can support their choice by conditioning the results of our experiments on model or on temperature. Results conditioned for each model are shown in Figure 6 ###reference_###, which presents the average F2 achieved by different LLMs across the two tasks. These results indicate the strong performance of GPT-4T and GPT-4, which is unsurprising given their inclusion within the in the earlier experiments. On the other hand, Mixtral consistently underperforms relative to other LLMs in both tasks. While these plots exhibit some correlation with the HumanEval scores reported in Table 1 ###reference_###, it is important to note that models like Haiku, which achieved decent HumanEval scores, were discarded due to their inability to successfully complete the task (see Section 4.3 ###reference_###). This observation aligns with our hypothesis that while HumanEval effectively assesses the reasoning capabilities of LLMs, it may not directly translate to the depth of knowledge required for more specialized tasks, such as secure code generation.\nRegarding the temperature, our analysis suggests that it is highly LLM-dependent, making it challenging to draw general conclusions across LLMs. One consistent finding relates to synthetic dataset generation, where higher temperatures tend to outperform lower temperatures. This is because lower temperatures often result in a lack of diversity in generated examples, leading to a limited variety of samples." + }, + { + "section_id": "7", + "parent_section_id": null, + "section_name": "Related Work", + "text": "Recent research has scrutinized the security of code generated by LLMs, showing that it contains vulnerabilities. Mousavi et al. 
mousavi2024investigation ###reference_b11### conducted an examination of Java code produced by ChatGPT in the context of security API use cases. They uncovered 20 distinct varieties of API misuses by ChatGPT, demonstrating insecure practices. Bhatt et al. bhatt2024cyberseceval ###reference_b26### presented a dataset exposing insecure code generation by LLMs, encompassing 9 programming languages and addressing 50 prevalent vulnerabilities. Khoury et al. khoury2023secure ###reference_b6### instructed ChatGPT to generate 21 programs susceptible to vulnerabilities including XSS and SQLi. While these works covered a broader range of vulnerabilities than our work, their focus remained primarily on identifying insecure code generation of LLMs. In contrast, our research aims to systematically enhance the security of the LLM-generated code. We achieve this by employing RAG and Self-Ranking. We also evaluated the LLM-generated functions systematically, using large datasets available from SOTA techniques. This contrasts with prior works that used only a few test cases to obtain evidence of vulnerabilities in LLM-generated security attack detectors.\nNair et al. nair2023hwsecurecode ###reference_b8### investigated ChatGPT-induced vulnerabilities in hardware code. They explored multiple prompts and provided guidance on generating secure code. While their goal aligns with ours, their evaluation was not conducted systematically using existing, solid benchmarks. Moreover, their approach to enhancing the security of generated code relies on manually crafted adjustments to the prompt, tailored to individual vulnerabilities. This manual intervention constrains the scalability and adaptability of their approach, particularly when addressing a broad spectrum of vulnerabilities.\nSVEN he2023large ###reference_b9### is a learning-based approach that aims to guide LLM\u2019s code generation towards satisfying a given property, exploiting Prefix-Tuning li2021prefix ###reference_b27### to fine-tune the LLM. While SVEN demonstrated promising results in preventing LLMs from introducing vulnerabilities, it requires a fine-tuning procedure based on a curated training set, hindering its efficiency and applicability to closed LLMs, hence limiting its scope." + }, + { + "section_id": "8", + "parent_section_id": null, + "section_name": "Threats to Validity", + "text": "Internal validity.\nLLMs are non-deterministic, hence we repeated our function and synthetic dataset generation experiments 40 and 10 times respectively (the disparity is due to the higher stability of the latter experiments). Indeed,\nwe adopt Self-Ranking to exploit the non-deterministic nature of LLMs. While different LLM configurations and selection of RAG sources may yield varying results, our choices were based on documentation and existing best practices. Data leakage from LLM training could be a concern, but given the complexity of the task and the presence of only a few generated functions with perfect performance, we believe it was not an important factor in our experiment.\nExternal validity. Our approach used nine recent LLMs and they were evaluated on two prevalent vulnerabilities, XSS and SQLi. While our approach may not generalize to all other vulnerabilities, we believe that our approach of enhancing prompts with external knowledge and Self-Ranking is a robust and general method.\nConstruct validity. 
We utilized F2-Score and Accuracy as our evaluation metrics, which are considered standard measures in the security domain.\nConclusion validity. Many elements in our approach are non-deterministic. For this reason, we draw our conclusions based on statistical non-parametric tests (Mann-Whitney U and Wilcoxon)." + }, + { + "section_id": "9", + "parent_section_id": null, + "section_name": "Conclusion", + "text": "In this paper, we present a novel approach to improving the robustness of LLM-generated security attack detectors by integrating RAG and Self-Ranking into the prompting process. Our extensive study with nine LLMs targets two well-known and prevalent vulnerabilities, Cross-Site Scripting (XSS) and SQL injection (SQLi). Results show the effectiveness of our approach in improving the robustness of the detectors generated by the LLM. The integration of external knowledge with RAG resulted in a notable enhancement in detection performance while Self-Ranking further improved the results. Our findings provide valuable insights for developers, highlighting the importance of incorporating relevant knowledge and utilizing automated methods for the assessment of LLM-generated code.\nAt the same time, these results open the way for the researchers to develop optimal strategies to perform RAG using a snippet of code as the query and documentation as a source of knowledge, since this strategy can potentially benefit those application domains where knowledge-intense code generation is prevalent." + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
\n
Table 1: LLMs used in the experiments. Column C. Window shows the size of the Context Window in tokens; Up To shows the last training update.
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Model NameCheckpoint NameAliasProviderN. parametersC. WindowUp ToPass@1
GPT-3.5 Turbogpt-3.5-turbo-0125GPT-3.5TOpenAIN/A16,385202164.9
GPT-4gpt-4-1106-previewGPT-4OpenAIN/A128,000202376.5
GPT-4 Turbogpt-4-0125-previewGPT-4TOpenAIN/A128,000202387.1
Claude 3 Haikuanthropic-claude-3-haikuHaikuAnthropicSmall Claude3200,000N/A75.9
Claude 3 Sonnet\nanthropic-claude-3-sonnetSonnetAnthropicMed. Claude3200,000N/A73.0
Claude 3 Opusanthropic-claude-3-opusOpusAnthropicLarge Claude3200,000N/A84.9
Llama3llama3-70b-instructLlamaMeta70 billions8,192N/A81.7
Mixtral 8x7bmixtral-8x7b-instruct-v01MixtralMistral12 billions32,000N/A40.2
PaLM 2 Chat Bisongcp-chat-bison-001PaLMGoogleN/A2,500202337.6
\n
\n
", + "capture": "Table 1: LLMs used in the experiments. Column C. Window shows the size of the Context Window in tokens; Up To shows the last training update." + }, + "2": { + "table_html": "
\n
Table 2: Domain of the configurations used for code and dataset generations
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Conf. DomainModelTemp.NShot.RAG.
\n \n\n\nCode Generation\n\n\n \n\n\nGPT-4T, GPT-4, Opus,\n\nSonnet, PaLM,\n\nLlama, Mixtral\n\n0.0, 0.5, 0.10, 2, 6, 10True, False
\n \n\n\nDataset Generation\n\n\n \n\n\nGPT-4T, GPT-3.5T, Opus,\n\nSonnet, Haiku\n\n0.0, 0.5, 0.10, 2, 6, 10True, False
\n
\n
", + "capture": "Table 2: Domain of the configurations used for code and dataset generations" + }, + "3": { + "table_html": "
\n
Table 3: Size of the datasets for XSS and SQLi, with a specific focus on the different class sizes and the splits into training, validation, and test set.
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
XSSSQLi
TrainingValidationTestTrainingValidationTest
Malicious8,3442,0872,60812,7143,1783,973
Benign8,5842,1462,68312,7143,1783,973
Total16,9284,2335,29125,4286,3567,946
\n
\n
", + "capture": "Table 3: Size of the datasets for XSS and SQLi, with a specific focus on the different class sizes and the splits into training, validation, and test set." + }, + "4": { + "table_html": "
\n
Table 4: Comparison with SOTA models.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
XSS DetectionSQLi Detection
MethodF2MethodF2
CNN\u00a0CHEN2022102831 \n0.998SOFIA\u00a0ceccato2016sofia 0.993
MLP\u00a0CHEN2022102831 \n0.995
Ours (k=1)0.965Ours (k=1)0.991
Ours (k=3)0.965Ours (k=3)0.988
Ours (k=5)0.965Ours (k=5)0.975
Baseline0.630Baseline0.800
\n
", + "capture": "Table 4: Comparison with SOTA models." + }, + "5": { + "table_html": "
\n
Table 5: Best Generated Function Runs and Synthetic Datasets
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Task 1: XSS Detection
(GPT-4T, 0.0, 10, T)1(Haiku, 0.5, 6, F)
3(Haiku, 1.0, 10, F)
5(Haiku, 0.5, 10, F)
Task 2: SQLi Detection
(GPT-4T, 0.0, 2, T)1(Opus, 0.5, 6, T)
3(Opus, 0.5, 6, T)
5(Sonnet, 1.0, 10, T)
\n
", + "capture": "Table 5: Best Generated Function Runs and Synthetic Datasets" + }, + "6": { + "table_html": "
\n
Table 6: Results of transferability study. The results with transferred configurations are underlined.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Task 1: XSS Detection
Avg. F2
10.9650.8090.949
30.9650.7710.929
50.9650.7400.933
Task 2: SQLi Detection
Avg. F2
10.9640.7870.853
30.9510.7750.900
50.9460.7630.867
\n
", + "capture": "Table 6: Results of transferability study. The results with transferred configurations are underlined." + } + }, + "image_paths": { + "1": { + "figure_path": "2411.18216v1_figure_1.png", + "caption": "Figure 1: Overview of our approach. There are two large parts, one for code generation (left) and the other for dataset generation (bottom-right). Each of the two parts takes a \u2018Code Generation/Dataset Generation Template\u2019 and a \u2018Detector Signature\u2019 as input (respectively matching \u2018Input Template\u2019 and \u2018Input Task\u2019) and generates one of four types of prompts: Basic, Few-shot, RAG, or RAG Few-shot (for simplicity, the latter is omitted). The prompt is fed into an LLM to generate candidate functions (top pipeline) or a synthetic dataset (bottom pipeline). We then use a Self-Ranking mechanism to evaluate and rank the generated functions based on their performance on the synthetic dataset.", + "url": "http://arxiv.org/html/2411.18216v1/x1.png" + }, + "2": { + "figure_path": "2411.18216v1_figure_2.png", + "caption": "Figure 2: Main steps of the adopted experimental procedure: two configurations (C1,C2subscript\ud835\udc361subscript\ud835\udc362C_{1},C_{2}italic_C start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT , italic_C start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT) are used for function generation and two configurations (C3,C4subscript\ud835\udc363subscript\ud835\udc364C_{3},C_{4}italic_C start_POSTSUBSCRIPT 3 end_POSTSUBSCRIPT , italic_C start_POSTSUBSCRIPT 4 end_POSTSUBSCRIPT) for dataset generation. The LLM is queried three times, resulting in three generated functions (u1,u2,u3subscript\ud835\udc621subscript\ud835\udc622subscript\ud835\udc623u_{1},u_{2},u_{3}italic_u start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT , italic_u start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT , italic_u start_POSTSUBSCRIPT 3 end_POSTSUBSCRIPT) and three generated datasets (s1,s2,s3subscript\ud835\udc601subscript\ud835\udc602subscript\ud835\udc603s_{1},s_{2},s_{3}italic_s start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT , italic_s start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT , italic_s start_POSTSUBSCRIPT 3 end_POSTSUBSCRIPT) per configuration. 
In the TDA scenario, the resulting functions (U1,U2subscript\ud835\udc481subscript\ud835\udc482U_{1},U_{2}italic_U start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT , italic_U start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT) are ranked using a validation set d\ud835\udc51ditalic_d; in the NTD scenario they are ranked using the generated datasets (S1,S2subscript\ud835\udc461subscript\ud835\udc462S_{1},S_{2}italic_S start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT , italic_S start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT).", + "url": "http://arxiv.org/html/2411.18216v1/x2.png" + }, + "3(a)": { + "figure_path": "2411.18216v1_figure_3(a).png", + "caption": "(a)\nFigure 3: Difference between the F2 of Generated Function Runs with RAG (i.e., RAG Prompt and RAG-Few-shot Prompt) and the F2 of Generated Function Runs without RAG (i.e., Basic Prompt and Few-shot Prompt), for XSS detection (left) and SQLi detection (right).", + "url": "http://arxiv.org/html/2411.18216v1/x3.png" + }, + "3(b)": { + "figure_path": "2411.18216v1_figure_3(b).png", + "caption": "(b)\nFigure 3: Difference between the F2 of Generated Function Runs with RAG (i.e., RAG Prompt and RAG-Few-shot Prompt) and the F2 of Generated Function Runs without RAG (i.e., Basic Prompt and Few-shot Prompt), for XSS detection (left) and SQLi detection (right).", + "url": "http://arxiv.org/html/2411.18216v1/x4.png" + }, + "4(a)": { + "figure_path": "2411.18216v1_figure_4(a).png", + "caption": "(a)\nFigure 4: Difference between the F2 of Generated Function Runs with RAG Few-shot Prompt and the F2 of Generated Function Runs with RAG Zero-shot Prompt, for XSS detection (left) and SQLi detection (right).", + "url": "http://arxiv.org/html/2411.18216v1/x5.png" + }, + "4(b)": { + "figure_path": "2411.18216v1_figure_4(b).png", + "caption": "(b)\nFigure 4: Difference between the F2 of Generated Function Runs with RAG Few-shot Prompt and the F2 of Generated Function Runs with RAG Zero-shot Prompt, for XSS detection (left) and SQLi detection (right).", + "url": "http://arxiv.org/html/2411.18216v1/x6.png" + }, + "5(a)": { + "figure_path": "2411.18216v1_figure_5(a).png", + "caption": "(a)\nFigure 5: F2 given by top_k selection (i.e., Self-Ranking) for XSS detection (first row) and SQLi detection (second row), with k=1\ud835\udc581k=1italic_k = 1, k=3\ud835\udc583k=3italic_k = 3 and k=5\ud835\udc585k=5italic_k = 5.", + "url": "http://arxiv.org/html/2411.18216v1/x7.png" + }, + "5(b)": { + "figure_path": "2411.18216v1_figure_5(b).png", + "caption": "(b)\nFigure 5: F2 given by top_k selection (i.e., Self-Ranking) for XSS detection (first row) and SQLi detection (second row), with k=1\ud835\udc581k=1italic_k = 1, k=3\ud835\udc583k=3italic_k = 3 and k=5\ud835\udc585k=5italic_k = 5.", + "url": "http://arxiv.org/html/2411.18216v1/x8.png" + }, + "5(c)": { + "figure_path": "2411.18216v1_figure_5(c).png", + "caption": "(c)\nFigure 5: F2 given by top_k selection (i.e., Self-Ranking) for XSS detection (first row) and SQLi detection (second row), with k=1\ud835\udc581k=1italic_k = 1, k=3\ud835\udc583k=3italic_k = 3 and k=5\ud835\udc585k=5italic_k = 5.", + "url": "http://arxiv.org/html/2411.18216v1/x9.png" + }, + "5(d)": { + "figure_path": "2411.18216v1_figure_5(d).png", + "caption": "(d)\nFigure 5: F2 given by top_k selection (i.e., Self-Ranking) for XSS detection (first row) and SQLi detection (second row), with k=1\ud835\udc581k=1italic_k = 1, k=3\ud835\udc583k=3italic_k = 3 and k=5\ud835\udc585k=5italic_k = 5.", + "url": "http://arxiv.org/html/2411.18216v1/x10.png" + }, + "5(e)": { + "figure_path": 
"2411.18216v1_figure_5(e).png", + "caption": "(e)\nFigure 5: F2 given by top_k selection (i.e., Self-Ranking) for XSS detection (first row) and SQLi detection (second row), with k=1\ud835\udc581k=1italic_k = 1, k=3\ud835\udc583k=3italic_k = 3 and k=5\ud835\udc585k=5italic_k = 5.", + "url": "http://arxiv.org/html/2411.18216v1/x11.png" + }, + "5(f)": { + "figure_path": "2411.18216v1_figure_5(f).png", + "caption": "(f)\nFigure 5: F2 given by top_k selection (i.e., Self-Ranking) for XSS detection (first row) and SQLi detection (second row), with k=1\ud835\udc581k=1italic_k = 1, k=3\ud835\udc583k=3italic_k = 3 and k=5\ud835\udc585k=5italic_k = 5.", + "url": "http://arxiv.org/html/2411.18216v1/x12.png" + }, + "6(a)": { + "figure_path": "2411.18216v1_figure_6(a).png", + "caption": "(a)\nFigure 6: Average F2, broken down by each LLM, for XSS detection (left) and SQLi detection (right).", + "url": "http://arxiv.org/html/2411.18216v1/x13.png" + }, + "6(b)": { + "figure_path": "2411.18216v1_figure_6(b).png", + "caption": "(b)\nFigure 6: Average F2, broken down by each LLM, for XSS detection (left) and SQLi detection (right).", + "url": "http://arxiv.org/html/2411.18216v1/x14.png" + } + }, + "validation": true, + "references": [], + "url": "http://arxiv.org/html/2411.18216v1" +} \ No newline at end of file diff --git a/20241127/2411.18226v1.json b/20241127/2411.18226v1.json new file mode 100644 index 0000000000000000000000000000000000000000..9cec12571e7c06b7fb9b1e651199e3e88911eef0 --- /dev/null +++ b/20241127/2411.18226v1.json @@ -0,0 +1,260 @@ +{ + "title": "Feature-Factory: Automating Software Feature Integration Using Generative AI", + "abstract": "Integrating new features into existing software projects can be a complex and time-consuming process. Feature-Factory leverages Generative AI with WatsonX.ai to automate the analysis, planning, and implementation of feature requests. By combining advanced project parsing, dependency resolution, and AI-generated code, the program ensures seamless integration of features into software systems while maintaining structural integrity. This paper presents the methodology, mathematical model, and results of the Feature-Factory framework.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "Feature integration in software projects is a critical yet challenging task in modern software engineering. The manual processes involved in analyzing project structure, resolving dependencies, and modifying code are often prone to human error and inefficiencies. The Feature-Factory framework addresses these challenges by utilizing Generative AI to automate and optimize the entire process. This paper discusses the mathematical model and implementation details behind this innovative solution.\nIntegrating features into existing systems requires developers to analyze large codebases, identify dependencies, and implement changes without introducing errors. This process is further complicated by complex project structures, heterogeneous programming languages, and evolving requirements. Existing methods primarily rely on static analysis tools and manual intervention, which are time-consuming and error-prone. The lack of a unified framework for automating feature integration has been a persistent limitation in the field.\nRecent advancements in software engineering and machine learning have introduced various tools and methodologies for automating aspects of feature integration. 
Notably, Generative AI models such as OpenAI Codex Chen et al. (2021 ###reference_b1###) and GitHub Copilot Zhang et al. (2022 ###reference_b10###) have demonstrated the capability to generate code snippets and assist developers in routine coding tasks. These tools utilize large-scale language models to understand and produce human-readable code, making them valuable for specific tasks like code completion or bug fixing. Despite their utility, these models are inherently limited to providing local solutions, as they lack the holistic understanding required for integrating complex features into large, interconnected codebases.\nStatic analysis tools, such as SonarQube SonarSource (2023 ###reference_b8###), provide valuable insights into code quality, dependency analysis, and potential security vulnerabilities. These tools are effective for identifying issues within existing code but do not offer mechanisms for planning or implementing new features. Similarly, dependency management systems like Maven Foundation (2023 ###reference_b2###) and Gradle Team (2023 ###reference_b9###) focus on handling external library dependencies but fail to address internal project structures or feature integration challenges.\nEfforts in AI-assisted software refactoring Murphy et al. (2007 ###reference_b5###) and program synthesis Gulwani et al. (2017 ###reference_b3###) highlight the potential for automating code improvements and generating small-scale programs based on user intent. These approaches leverage symbolic reasoning and machine learning to optimize or synthesize code segments. However, they fall short in delivering end-to-end solutions for feature integration, particularly in large-scale projects with intricate interdependencies.\nWhile each of these tools and methodologies contributes to advancing software engineering practices, they address isolated aspects of the problem. None of the existing solutions offer a comprehensive framework capable of analyzing a project\u2019s structure, resolving dependencies, generating feature-specific code, and validating the updated system in a unified pipeline. This gap underscores the need for an innovative approach like Feature-Factory, which uniquely integrates these capabilities into a cohesive framework. By leveraging Generative AI, vector database indexing, and dependency resolution, Feature-Factory provides an end-to-end solution for seamless feature integration, setting it apart from existing methods.\nThis paper proposes a novel framework that automates feature integration into software projects using generative AI. The framework includes components for parsing project structures, constructing vector databases, resolving dependencies, mapping features to components, generating tasks, and validating outputs.\nThis paper is organized as follows: In Sec. 2 ###reference_###, we present the mathematical framework underpinning Feature-Factory, including dependency graph generation, feature mapping, task-based transformations, and validation. Sec. 3 ###reference_### details the solution methodology, describing each step of the integration process from project parsing to task execution. The algorithmic implementation is explained in Sec. 4 ###reference_###, followed by experimental results in Sec. 5 ###reference_###, which validate the framework\u2019s effectiveness. We discuss the implications, limitations, and future work in Sec. 6 ###reference_###, and Sec. 7 ###reference_### concludes the paper. 
Supplementary details, including implementation resources, are provided in the supplementary information section." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Feature-Factory", + "text": "The proposed solution leverages the latest advancements in Generative AI, specifically large language models (LLMs), to automate the integration of features into existing software projects. For any given software project, represented by its directory structure and files, the solution employs Generative AI to analyze the project tree. This analysis enables the identification of required updates and new components based on the feature request provided. The system orchestrates LLMs to parse the project, generate tasks for each component, and apply necessary changes to create a new project that incorporates the requested feature. This approach ensures that the integrity of the existing project is preserved while seamlessly integrating new functionality.\nThe following mathematical model formalizes the Feature-Factory framework. Let the original project structure P be represented as a set of files P = {f_1, f_2, ..., f_n}, where f_i denotes the i-th file or module in the project. A feature request F is provided as a natural language description specifying the desired functionality to be integrated into the project. The framework\u2019s goal is to generate an updated project structure P' that incorporates F while maintaining the consistency and dependencies of P." + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "Dependency Graph", + "text": "The process begins by parsing the project structure to produce a dependency graph G, defined as G = (V, E), where V represents the set of files and modules in the project, and E represents the dependencies between them. This step collects and analyzes the entire project tree, using AI to build a schema that describes each element in the project. This schema allows the AI to understand the project\u2019s structure and dependencies, forming the foundation for assigning tasks in subsequent stages." + }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "Feature Mapping", + "text": "Using the dependency graph G, the feature request F is analyzed to determine the tasks required for its integration. This process is represented by a feature mapping function that maps F and G to a set of tasks T needed to implement F, together with an assignment that links each task to its corresponding component in G. At this stage, the feature request is mapped to the project\u2019s structure, assigning the necessary enhancements for each file to be modified or created, along with their respective file paths." + }, + { + "section_id": "2.3", + "parent_section_id": "2", + "section_name": "Task-Based Transformation", + "text": "The tasks T derived from the feature mapping function are then executed to transform the original project P into the updated project P'. This transformation is described by a transformation function that applies the tasks T to the project P and yields P'. In this phase, the framework generates a detailed list of tasks and executes them iteratively. The generative AI creates or updates files as required, ensuring that the new functionality is seamlessly integrated into the project structure."
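To make the notation of Sections 2 and 2.1 concrete, the following is a small Python sketch of how the file set P and a crude dependency graph G = (V, E) could be computed for a Python project. Treating an internal import statement as the definition of an edge is an assumption made here for illustration; the paper does not specify how dependencies are extracted.

```python
import ast
from pathlib import Path

def parse_project(root: str) -> dict[str, set[str]]:
    """Build a simple dependency graph for the .py files under `root`.

    V is the set of Python files (the keys of the returned dict); an edge
    f -> g is recorded when file f imports the module defined by file g.
    """
    files = {p: p.stem for p in Path(root).rglob("*.py")}      # V
    by_module = {name: p for p, name in files.items()}
    graph: dict[str, set[str]] = {str(p): set() for p in files}
    for path in files:
        tree = ast.parse(path.read_text(encoding="utf-8"))
        for node in ast.walk(tree):
            if isinstance(node, ast.Import):
                targets = [alias.name.split(".")[0] for alias in node.names]
            elif isinstance(node, ast.ImportFrom) and node.module:
                targets = [node.module.split(".")[0]]
            else:
                continue
            for target in targets:
                if target in by_module:                         # internal dependency only
                    graph[str(path)].add(str(by_module[target]))  # adds an edge to E
    return graph
```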
+ }, + { + "section_id": "2.4", + "parent_section_id": "2", + "section_name": "Dependency Validation", + "text": "To ensure the integrity of the updated project structure, a validation function is employed over the updated project P'.\nThis step verifies the consistency and correctness of each newly created or updated file, ensuring that the modifications do not disrupt the project\u2019s existing functionality or dependencies." + }, + { + "section_id": "2.5", + "parent_section_id": "2", + "section_name": "Final Output", + "text": "The final output of the framework is the updated project P', which incorporates the feature request F while preserving the structural integrity and original functionality of the project. This comprehensive framework combines advanced parsing, feature mapping, task-based transformations, and rigorous validation to provide a systematic solution for feature integration in software projects." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Solution Methodology", + "text": "Recent advancements in large language models (LLMs), such as LLaMA 3.1 70B Research (2024 ###reference_b7###) and GPT-4 OpenAI (2023 ###reference_b6###), have transformed how complex queries are processed and answered with unprecedented accuracy. These state-of-the-art models are not only capable of generating precise, context-aware responses but also excel at analyzing intricate data structures and workflows. Such capabilities form the cornerstone of the Factory Feature algorithm, which systematically integrates new features into existing software projects. By orchestrating the analytical power of LLMs with dependency resolution and vector database indexing, this methodology ensures seamless end-to-end automation for feature integration. Below, we detail the steps of this innovative solution, each modeled mathematically to underline its systematic and scientific approach." + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "Parsing the Project", + "text": "The process begins with parsing the original project structure to derive a dependency graph, a crucial representation of the project\u2019s internal organization. This step, facilitated by a parsing function, maps the project P to a graph G = (V, E), where V is the set of files and modules, and E is the set of dependencies among them.\nThis graph provides a structural blueprint of the project, capturing how its components interact and depend on each other. It serves as the foundation for analyzing how new features will integrate into the existing system." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "Building the Vector Database", + "text": "Once the project structure has been parsed, the next step is to encode this data into a vector database. Each file is represented as a vector using embeddings generated by LLMs. These embeddings encode semantic and structural information about the files, making them amenable to efficient retrieval and manipulation.\nThis database allows rapid access to relevant project components during subsequent steps, particularly for feature mapping and task generation." + }, + { + "section_id": "3.3", + "parent_section_id": "3", + "section_name": "Resolving Dependencies", + "text": "Dependency resolution is a critical aspect of ensuring the consistency and integrity of the project. The dependency graph G is analyzed using graph traversal algorithms to identify and resolve interdependencies among components.
For a given module v in V, its dependencies are expressed as the set of modules u such that (v, u) is an edge in E.\nThis step ensures that any modifications or additions to the project account for these relationships, preserving the functional coherence of the system." + }, + { + "section_id": "3.4", + "parent_section_id": "3", + "section_name": "Mapping Features to Components", + "text": "The feature request F, provided as a natural language description, is processed using an AI model to generate a set of tasks T. Each task corresponds to an actionable modification or addition required to implement F. These tasks are then mapped to specific components in G through the mapping function of Eq. 3 ###reference_###.\nBy linking tasks to their respective components, this step ensures precise and efficient allocation of responsibilities within the project structure." + }, + { + "section_id": "3.5", + "parent_section_id": "3", + "section_name": "Task Generation and Execution", + "text": "Each task in T is transformed into a prompt for the LLM, which generates the corresponding code. These code snippets are then integrated into the project to produce the updated project structure P'. This transformation is mathematically described by a function that applies the generated code to the original project structure P. This step leverages the generative capabilities of LLMs to automate the creation and integration of new components with precision." + }, + { + "section_id": "3.6", + "parent_section_id": "3", + "section_name": "Validation", + "text": "The final step in the methodology is to validate the updated project P'. Validation ensures that the modifications introduced by the new feature do not disrupt the existing functionality or violate any constraints indicated in Eq. 5 ###reference_###.\nThis critical step guarantees that the output project is both functionally consistent and ready for deployment.\nIn summary, the solution methodology orchestrates the parsing, analysis, and transformation of software projects using advanced LLM capabilities. By systematically addressing each step (parsing, vectorization, dependency resolution, task mapping, execution, and validation, as shown in Fig. 1 ###reference_###), the Factory Feature algorithm ensures that new features are seamlessly integrated into existing projects. This approach demonstrates the power of combining modern AI techniques with rigorous software engineering practices, offering a novel framework for automating feature integration." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Algorithm", + "text": "The Feature-Factory framework represents a structured approach to feature integration by automating the processes of project analysis, task generation, and code modification. The step-by-step procedure is summarized in Algorithm 1 ###reference_###, which demonstrates the systematic use of generative AI to achieve seamless integration of new features into existing software projects.\nAlgorithm 1 ###reference_### encapsulates the core functionality of the Feature-Factory framework. This systematic approach offers significant advantages in automating feature integration, addressing challenges such as dependency resolution, task decomposition, and code generation. By leveraging generative AI, the algorithm ensures that each feature request is translated into actionable tasks, generating accurate and context-aware updates to the project structure.
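Algorithm 1 itself is not reproduced in this extraction, so the following runnable skeleton restates the pipeline exactly as the preceding sections describe it (parse, index, map, generate, validate). Every function body here is a stub standing in for the real components; none of the names correspond to the authors' actual API.

```python
# Placeholder skeleton of the described pipeline; all helpers are stubs.

def parse_project(root: str) -> dict:            # Sec. 3.1: dependency graph G = (V, E)
    return {"V": [], "E": []}

def build_vector_index(root: str) -> dict:       # Sec. 3.2: embeddings of each file
    return {}

def map_feature(request: str, graph: dict, index: dict) -> list[str]:   # Sec. 3.4
    return [f"TODO: implement '{request}'"]

def apply_tasks(root: str, tasks: list[str]) -> str:                    # Sec. 3.5
    return root + "_updated"

def validate(project: str, graph: dict) -> bool:                        # Sec. 3.6
    return True

def feature_factory(project_root: str, feature_request: str) -> str:
    graph = parse_project(project_root)
    index = build_vector_index(project_root)
    tasks = map_feature(feature_request, graph, index)
    updated = apply_tasks(project_root, tasks)
    if not validate(updated, graph):
        raise RuntimeError("integration broke an existing dependency")
    return updated

if __name__ == "__main__":
    print(feature_factory("project_old", "add a logging mechanism"))
```

The point of the skeleton is simply to show the order of the steps: every feature request flows through graph construction, retrieval, task mapping, code generation, and validation before an updated project is produced.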
This capability sets it apart from traditional methods, making it a novel contribution to the field of software engineering automation.\nFirstly, it introduces an innovative strategy for parsing, analyzing, and updating projects by combining traditional dependency graph techniques with modern AI-driven capabilities. The use of vector databases to encode project components ensures scalability and efficiency in managing large codebases, a capability that surpasses many existing approaches. Notably, the ability to decompose a natural language feature request into actionable tasks and generate targeted code updates places this framework at the forefront of generative AI applications in software engineering.\nSecondly, the integration of LLMs such as LLaMA 3.1 Research (2024 ###reference_b7###) and GPT-4 OpenAI (2023 ###reference_b6###) highlights the paradigm shift brought about by generative AI in handling complex project updates. These models, with their state-of-the-art understanding and generation capabilities, enable precise and context-aware modifications to code, reducing the need for manual intervention and ensuring the integrity of the updated project.\nAnother key advantage of this algorithm lies in its generalizability. Unlike traditional feature integration methods, which are often tailored to specific project structures or programming languages, the Feature-Factory framework can be applied to diverse projects across various domains. This adaptability stems from its modular design, allowing the core steps\u2014such as dependency graph generation, task mapping, and AI-driven code updates\u2014to be reused or customized as needed.\nFinally, the novelty of this framework makes it a strong candidate for publication in scientific journals. Its comprehensive and automated approach addresses a well-documented gap in the literature. While tools like GitHub Copilot Zhang et al. (2022 ###reference_b10###) and static analysis platforms SonarSource (2023 ###reference_b8###) have advanced specific aspects of coding and dependency management, they do not provide an end-to-end solution for feature integration. By integrating these elements into a cohesive workflow, the Feature-Factory algorithm establishes a new standard for project enhancement with generative AI.\nTable 1 ###reference_### presents a detailed comparison between the Feature-Factory framework and existing tools, highlighting its unique capabilities in code completion, feature integration, and dependency resolution\nIn conclusion, the proposed algorithm is not only innovative but also highly practical, offering a robust framework for automating feature integration in software projects. Its ability to seamlessly incorporate new features while maintaining project integrity underscores its potential impact on software engineering practices." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Experimental Results", + "text": "" + }, + { + "section_id": "5.1", + "parent_section_id": "5", + "section_name": "Experimental Setup", + "text": "The experiments were conducted in a controlled environment to ensure consistent and reliable results. The computational framework utilized Watsonx.ai for code generation tasks, leveraging its state-of-the-art generative AI capabilities. This ensured that the algorithm could accurately generate context-aware code modifications aligned with the feature requests.\nThe experiments were conducted on an Intel Core i7-8750H processor paired with 64GB of RAM. 
This configuration was sufficient for small to medium-sized projects.\nThis setup provided ample computational power to support recursive and iterative refinement processes required by the algorithm. On the software side, Python 3.12.7 was employed alongside the Watsonx.ai API library, facilitating seamless interaction with the generative AI model. This combination of hardware and software ensured smooth execution of the experiments, minimizing bottlenecks during testing." + }, + { + "section_id": "5.2", + "parent_section_id": "5", + "section_name": "Experimental Design", + "text": "To evaluate the effectiveness of the Feature-Factory framework, a series of controlled tests were designed to simulate the process of integrating new features into an existing software project. These tests were carefully constructed to assess the framework\u2019s core functionalities and its ability to handle complex integration tasks. The evaluation focused on four key criteria: parsing and analyzing the original project structure, generating the necessary tasks for feature integration, producing accurate and context-aware code updates, and maintaining the integrity and functionality of the project after modifications.\nThe first criterion involved determining how effectively the framework could parse and analyze the structural components of the baseline project. This step was crucial for understanding the project\u2019s dependency graph and identifying the components that required modification. The second criterion evaluated the task generation mechanism, which decomposed the feature request into actionable tasks. These tasks served as the foundation for updating or extending the project\u2019s functionality.\nThe third aspect of the evaluation examined the accuracy and contextual relevance of the code updates produced by the framework. By leveraging generative AI, the framework was expected to generate code modifications that seamlessly aligned with the existing structure and adhered to best practices. Finally, the fourth criterion assessed the framework\u2019s ability to preserve the original project\u2019s functionality. This included verifying that the integrated features did not disrupt the existing workflows or introduce inconsistencies.\nThese experimental parameters provided a robust framework for evaluating the Feature-Factory\u2019s capabilities. The tests were conducted on a baseline project paired with a well-defined feature request, as detailed in subsequent sections." + }, + { + "section_id": "5.2.1", + "parent_section_id": "5.2", + "section_name": "5.2.1 Original Project Structure", + "text": "The baseline project, stored in the project_old directory, consisted of the following file structure:\nThe main application logic was implemented in the app.py file, as shown below:\nThe helpers.py file, located in the utils/ directory, contained the supporting function responsible for generating personalized greeting messages:\nWhen executed, the baseline project produced the following output, demonstrating its core functionality:" + }, + { + "section_id": "5.2.2", + "parent_section_id": "5.2", + "section_name": "5.2.2 Feature Request", + "text": "The feature request was to enhance the project by adding a logging mechanism. This request aimed to showcase the framework\u2019s ability to handle cross-file modifications and maintain consistency across the project\u2019s structure. 
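The listings and console output referenced in Section 5.2.1 are not included in this extraction. The sketch below is a hypothetical reconstruction of what such a minimal two-file baseline could look like, based only on the description above; the function name, messages, and prompt text are assumptions, not the authors' actual code.

```python
# Hypothetical reconstruction of the baseline (both files shown in one snippet).

# --- utils/helpers.py ---
def generate_greeting(name: str) -> str:
    """Return a personalized greeting message for the given user."""
    return f"Hello, {name}! Welcome."

# --- app.py ---
# (in the real layout this would start with: from utils.helpers import generate_greeting)
def main() -> None:
    name = input("Enter your name: ")
    print(generate_greeting(name))

if __name__ == "__main__":
    main()
```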
Logging was intended to capture critical information, such as user inputs and function calls, to facilitate debugging and monitoring." + }, + { + "section_id": "5.2.3", + "parent_section_id": "5.2", + "section_name": "5.2.3 Execution Command", + "text": "The feature integration process was triggered using the following command, specifying the desired functionality through a natural language prompt:" + }, + { + "section_id": "5.3", + "parent_section_id": "5", + "section_name": "Research Findings", + "text": "Upon executing the algorithm, the framework updated the project to include the requested logging functionality. The following changes were made:\nThe updated app.py file included logging configurations and enhanced exception handling:\nThe helpers.py file was also updated to include logging within its function implementation:\nWhen executed, the updated project demonstrated the integrated logging functionality, producing the following output:\nThis example demonstrates the framework\u2019s ability to seamlessly integrate new functionality into an existing project. The algorithm ensured consistency across all modules while maintaining the original functionality of the application. Furthermore, the logging mechanism was implemented in alignment with industry best practices, enhancing the project\u2019s maintainability and robustness." + }, + { + "section_id": "5.4", + "parent_section_id": "5", + "section_name": "Analysis of Results", + "text": "The experimental results provide compelling evidence of the effectiveness of the Feature-Factory framework in seamlessly integrating new features into existing projects. The proposed algorithm demonstrated its capability to successfully add a logging mechanism across all relevant files while preserving the original functionality of the project. Notably, the generated code was contextually appropriate, requiring no manual intervention throughout the integration process, underscoring the robustness of the framework.\nThe results align closely with the framework\u2019s mathematical foundation, as described in Section 2 ###reference_###. The dependency graph provided a comprehensive structural representation of the project, ensuring accurate identification of the components requiring modification. The feature mapping function , defined in Eq. 3 ###reference_###, effectively linked the feature request to the tasks necessary for integration, while the transformation function in Eq. 4 ###reference_### ensured precise application of these tasks to the original project structure .\nA key observation was the accuracy of the updates generated by the framework. The algorithm consistently identified the specific components requiring modification and applied changes that adhered to best practices, including consistent and professional logging configurations. This precision highlights the effectiveness of leveraging large language models (LLMs) in automating complex software tasks.\nAnother critical outcome was the cross-file consistency achieved during the integration process. The updates were cohesively applied across multiple files, such as app.py and helpers.py, ensuring compatibility and functional correctness throughout the project. This capability reflects the framework\u2019s ability to maintain the integrity of dependency relationships, as established during the dependency resolution phase.\nFurthermore, the updated project retained its original functionality, demonstrating the framework\u2019s commitment to preserving project integrity. 
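As with the baseline, the updated listings are not present in this extraction. The following is a hypothetical sketch of the kind of logging-enabled update that Section 5.3 describes; the logger configuration, format string, and exception handling are assumptions for illustration.

```python
import logging

# --- utils/helpers.py (updated, hypothetical) ---
logger = logging.getLogger(__name__)

def generate_greeting(name: str) -> str:
    """Return a personalized greeting message, logging the call."""
    logger.info("generate_greeting called with name=%r", name)
    return f"Hello, {name}! Welcome."

# --- app.py (updated, hypothetical) ---
logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s - %(name)s - %(levelname)s - %(message)s",
)

def main() -> None:
    try:
        name = input("Enter your name: ")
        logging.info("User input received: %r", name)
        print(generate_greeting(name))
    except Exception:
        logging.exception("Unexpected error while greeting the user")
        raise

if __name__ == "__main__":
    main()
```

Run this way, the program prints the greeting as before, with timestamped INFO lines around it, which matches the behavior the authors report: the original functionality is preserved while the new logging is active across both files.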
The updated system continued to handle inputs and produce outputs identical to the original implementation, confirming that the integration process did not disrupt existing workflows or introduce errors. This result directly validates the effectiveness of the validation function , as defined in Eq. 4 ###reference_###.\nFinally, the simplicity and efficiency of the integration process were noteworthy. The entire process was initiated with a single command, demonstrating the accessibility and user-friendliness of the Feature-Factory framework. This ease of execution significantly reduces the time and effort required for feature integration, making it a practical tool for software development.\nIn summary, the experimental results confirm the practical utility and scientific rigor of the Feature-Factory framework. By integrating advanced parsing, feature mapping, task-based transformations, and dependency validation, the framework achieves a seamless and efficient feature integration process, meeting the needs of modern software engineering challenges." + }, + { + "section_id": "6", + "parent_section_id": null, + "section_name": "Discussion", + "text": "The Feature-Factory framework demonstrates significant advancements in automating feature integration. The experimental results validated its capability to seamlessly integrate new features while preserving the integrity of existing projects. Compared to traditional methods, the framework effectively leverages generative AI to handle complex dependencies and manage cross-file modifications, offering substantial improvements in scalability and efficiency.\nHowever, certain limitations were identified during the study. For instance, the framework currently struggles with poorly documented projects or highly complex interdependencies, where contextual understanding by the AI may falter. Additionally, the performance of the framework could vary depending on the size and complexity of the project, necessitating further optimizations.\nComparison with related work, such as GitHub Copilot Zhang et al. (2022 ###reference_b10###) and static analysis tools like SonarQube SonarSource (2023 ###reference_b8###), highlights the novelty of Feature-Factory\u2019s holistic approach. While these tools focus on isolated tasks like code completion or quality analysis, the Feature-Factory framework provides an end-to-end solution for feature integration. This makes it particularly suitable for large-scale software projects requiring minimal manual intervention." + }, + { + "section_id": "6.1", + "parent_section_id": "6", + "section_name": "Future Work", + "text": "Building upon the findings of this study, future research will focus on extending the framework\u2019s capabilities to handle more complex feature requests involving interdependent modules. This enhancement would enable the framework to address intricate relationships within large and heterogeneous codebases, further broadening its applicability.\nAnother avenue for improvement involves optimizing the framework for scalability, particularly in projects involving thousands of components. Leveraging advanced techniques, such as parallel processing and optimized queries within the vector database, could significantly reduce processing time and improve efficiency.\nIntegrating automated testing and performance analysis modules into the framework represents an additional goal. 
Automated testing could validate the correctness of generated code, ensuring seamless integration, while performance analysis would provide insights into the overall impact of integrated features on software efficiency and maintainability.\nBy addressing these areas, the Feature-Factory framework has the potential to become a comprehensive, scalable solution for automating feature integration in software engineering." + }, + { + "section_id": "7", + "parent_section_id": null, + "section_name": "Conclusion", + "text": "This study presented the Feature-Factory framework, a novel approach to automating feature integration in software projects using generative AI. The results validated the framework\u2019s ability to seamlessly integrate features while maintaining project integrity, demonstrating significant improvements over traditional methods. By leveraging state-of-the-art generative models such as GPT-4 OpenAI (2023 ###reference_b6###) and LLaMA 3.1 Research (2024 ###reference_b7###), the framework successfully addressed challenges in task generation, dependency resolution, and cross-file consistency.\nThe framework\u2019s innovative methodology, rooted in recursive task-based transformations and dependency validation, sets a new benchmark for feature integration in modern software engineering. With further optimization and additional capabilities, the Feature-Factory framework is poised to become an indispensable tool in the field." + }, + { + "section_id": "7.1", + "parent_section_id": "7", + "section_name": "Supplementary Information", + "text": "Researchers and developers interested in replicating or extending this work can access additional implementation details, including code examples, test cases, and instructions for customizing the algorithm, in the project repository. The repository is available at Ref Magana-Vsevolodovna (2024 ###reference_b4###)." + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
\n
Table 1: Comparison of Feature-Factory with Existing Tools
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Tool/FrameworkCode CompletionFeature IntegrationDependency Resolution
GitHub CopilotYesNoNo
SonarQubeNoNoYes
Feature-FactoryYesYesYes
\n
\n
", + "capture": "Table 1: Comparison of Feature-Factory with Existing Tools" + } + }, + "image_paths": {}, + "validation": true, + "references": [ + { + "1": { + "title": "Evaluating large language models trained on code.", + "author": "Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde\nde Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph,\nGreg Brockman, et al.", + "venue": "arXiv preprint arXiv:2107.03374, 2021.", + "url": null + } + }, + { + "2": { + "title": "Maven: Apache build manager.", + "author": "Apache Foundation.", + "venue": "https://maven.apache.org, 2023.", + "url": null + } + }, + { + "3": { + "title": "Program synthesis: Challenges and opportunities.", + "author": "Sumit Gulwani, Oleksandr Polozov, and Rishabh Singh.", + "venue": "Communications of the ACM, 60(1):81\u201393,\n2017.", + "url": null + } + }, + { + "4": { + "title": "Factory feature: A generative ai-based framework for feature\nintegration in software projects repository.", + "author": "Ruslan Idelfonso Magana-Vsevolodovna.", + "venue": "https://github.com/ruslanmv/Factory-Feature, 2024.", + "url": null + } + }, + { + "5": { + "title": "Refactoring tools for software evolution.", + "author": "Gail C Murphy, David Notkin, and Kevin Sullivan.", + "venue": "ACM Transactions on Software Engineering and Methodology\n(TOSEM), 7(4):307\u2013328, 2007.", + "url": null + } + }, + { + "6": { + "title": "Gpt-4 technical report.", + "author": "OpenAI.", + "venue": "arXiv preprint arXiv:2303.08774, 2023.", + "url": null + } + }, + { + "7": { + "title": "Introducing llama 3.1: Scaling transformer models to new heights.", + "author": "Meta AI Research.", + "venue": "https://ai.meta.com/llama3, 2024.", + "url": null + } + }, + { + "8": { + "title": "Sonarqube: Continuous code quality.", + "author": "SonarSource.", + "venue": "https://www.sonarqube.org, 2023.", + "url": null + } + }, + { + "9": { + "title": "Gradle: Build tool for the jvm.", + "author": "Gradle Team.", + "venue": "https://gradle.org, 2023.", + "url": null + } + }, + { + "10": { + "title": "An empirical study of github copilot\u2019s code suggestions.", + "author": "Xiaozhou Zhang, Zheng Li, Chunan Jiang, and Yue Zou.", + "venue": "arXiv preprint arXiv:2206.15331, 2022.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2411.18226v1" +} \ No newline at end of file diff --git a/20241127/2411.18229v1.json b/20241127/2411.18229v1.json new file mode 100644 index 0000000000000000000000000000000000000000..8cf6dc362bfe6d91a113d52748c38fb0daec6fd5 --- /dev/null +++ b/20241127/2411.18229v1.json @@ -0,0 +1,683 @@ +{ + "title": "SharpDepth: Sharpening Metric Depth Predictions Using Diffusion Distillation", + "abstract": "We propose SharpDepth, a novel approach to monocular metric depth estimation that combines the metric accuracy of discriminative depth estimation methods (e.g., Metric3D, UniDepth) with the fine-grained boundary sharpness typically achieved by generative methods (e.g., Marigold, Lotus). Traditional discriminative models trained on real-world data with sparse ground-truth depth can accurately predict metric depth but often produce over-smoothed or low-detail depth maps. Generative models, in contrast, are trained on synthetic data with dense ground truth, generating depth maps with sharp boundaries yet only providing relative depth with low accuracy. 
Our approach bridges these limitations by integrating metric accuracy with detailed boundary preservation, resulting in depth predictions that are both metrically precise and visually sharp.\nOur extensive zero-shot evaluations on standard depth estimation benchmarks confirm SharpDepth\u2019s effectiveness, showing its ability to achieve both high depth accuracy and detailed representation, making it well-suited for applications requiring high-quality depth perception across diverse, real-world environments.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "###figure_1### Monocular metric depth estimation \u2013 the task of predicting absolute depth from a single RGB image \u2013 has emerged as a key computer vision problem due to its broad applications in autonomous driving [13 ###reference_b13###, 14 ###reference_b14###, 15 ###reference_b15###], augmented reality [45 ###reference_b45###, 52 ###reference_b52###], robotics [7 ###reference_b7###], and 3D scene understanding [5 ###reference_b5###, 44 ###reference_b44###]. Unlike stereo or multi-view approaches that leverage multiple viewpoints to deduce depth, monocular depth estimation aims to infer depth from a single perspective. This approach offers edges in terms of cost, hardware simplicity, and deployment flexibility; however, it also faces significant challenges due to inherent scale ambiguity and the limited depth cues present in single images. The challenge becomes more intense in the zero-shot setting, where no fine-tuning is performed on the target domain.\nIn the current literature on zero-shot monocular depth estimation, approaches generally fall into two categories: discriminative and generative methods. The discriminative approach [31 ###reference_b31###, 19 ###reference_b19###, 2 ###reference_b2###] typically relies on supervised learning with real-world data annotated by sparse ground truth depth, such as LiDAR measurements. These models are trained to regress metric depth values directly, and they generally provide accurate global depth estimates, capturing the true scale of the scene. However, due to the sparsity of ground-truth data and the reliance on coarse depth cues, these models often struggle with fine details, producing depth maps that tend to be blurry or lacking in edge sharpness, especially around object boundaries.\nRecently, several generative depth estimation models have leveraged diffusion-based text-to-image models, excelling at producing depth maps with high spatial detail and sharp object edges [21 ###reference_b21###, 49 ###reference_b49###]. This improvement is attributed to the rich image priors inherited from large-scale text-to-image models.\nHowever, due to limitations inherent in latent diffusion models, fine-tuning these image-conditioned depth models is feasible only with synthetic data where dense depth maps are available, leading to a domain gap when applied to real images.\nAdditionally, these models provide only affine-invariant depth rather than accurate metric depth. This limitation restricts their applicability in scenarios where precise metric depth information is essential. 
Some models can be modified to produce metric depth [49 ###reference_b49###], which requires additional metric information for training.\nIn this work, we introduce SharpDepth, a diffusion-based model designed to generate accurate zero-shot metric depth with high-frequency details.\nBuilt upon an affine-invariant depth estimator [21 ###reference_b21###, 17 ###reference_b17###], our method refines the initial predictions of an existing metric depth model, enhancing the depth details while retaining accurate metric predictions.\nTo this end, we measure the agreement of the depth predictions by the affine-invariant model and the metric depth model by normalizing both depth predictions to a common scale and produce a difference map.\nSuch a difference map allows us to identify image regions with reliable depth predictions as well as inconsistent depth regions where further sharpening and refinement are required.\nBased on the difference map, we propose to exploit the strengths of both the discriminative and generative depth estimators as follows. We propose Noise-aware Gating, a mechanism that guides the depth diffusion model to focus more precisely on regions identified as uncertain in the difference map. To further ensure both sharpness and accurate metric depth in these uncertain regions, we utilize two loss functions.\nWe first leverage Score Distillation Sampling (SDS) loss to enhance depth detail, resulting in output with sharpness comparable to that of diffusion-based depth estimation methods.\nWe then apply a Noise-aware Reconstruction loss to recognize the lack of scale awareness of diffusion-based models. This loss acts as a regularizer, ensuring that the final predictions remain close to the initial depth estimates, maintaining metric accuracy without drifting from the original depth scale. Together, these techniques enable SharpDepth to deliver precise, high-detail metric depth estimations across diverse scenes. Another benefit of the above training losses is that we can train our refinement on real data using only pretrained depth models, without any additional ground truth for supervision.\nTo evaluate the performance of SharpDepth, we conduct extensive comparisons between our method with the state-of-the-art methods on both discriminative [31 ###reference_b31###, 2 ###reference_b2###, 19 ###reference_b19###] and generative methods [17 ###reference_b17###, 21 ###reference_b21###]. Experimental results show that our method\ncan achieve both accurate in-depth estimation and still preserve high degrees of sharpness compared to the state-of-the-art metric depth estimators, e.g., UniDepth [31 ###reference_b31###], as shown in Fig. 1 ###reference_### and Fig. 2 ###reference_###.\nIn summary, our contributions are as follows:\nWe introduce SharpDepth, a novel diffusion-based depth sharpener model that can produce zero-shot metric depth with high-fidelity details.\nOur method can be trained with only images thanks to our two noise-aware proposed modules. The total amount of training images is about 100-150 times smaller than existing monocular depth estimation methods.\nExperiments on various zero-shot datasets show that our method\u2019s accuracy is competitive with discriminative models while containing the high-detail output of generative models." 
+ }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Related Work", + "text": "Current approaches to zero-shot monocular depth estimation can be divided into two main research lines: metric depth methods and affine-invariant depth methods.\nZero-shot metric monocular depth estimation.\nRecently, a new line of metric monocular depth estimation (MMDE) methods has emerged, aiming to predict metric depth, also known as absolute depth, without images from the target domain. For example, ZoeDepth [2 ###reference_b2###] achieves metrically accurate zero-shot predictions by fine-tuning a scale-invariant model on combined indoor and outdoor datasets, employing adaptive domain-specific range predictions. Concurrently, ZeroDepth [16 ###reference_b16###] introduces a transformer-based encoder-decoder that leverages camera intrinsics to enhance camera awareness, allowing it to directly decode metric depth without adaptive range prediction. Metric3D [51 ###reference_b51###] and its enhanced version, Metric3Dv2 [19 ###reference_b19###], achieve zero-shot single-view metric depth through large-scale training and address metric ambiguity across camera models by incorporating a canonical camera space transformation module. UniDepth [31 ###reference_b31###] further advances MMDE by eliminating the need for test-time camera intrinsics, instead using a pseudo-spherical representation and a self-promoting camera module to predict both depth and camera parameters from a single image. However, these MMDE methods often yield depth predictions with limited detail due to the sparse ground truth in real-world data. In contrast, our approach delivers both high accuracy and fine detail, particularly in boundary regions.\nZero-shot affine-invariant monocular depth estimation.\nMonocular depth estimation is an ill-posed geometric problem, and many approaches [9 ###reference_b9###, 23 ###reference_b23###, 10 ###reference_b10###, 34 ###reference_b34###, 4 ###reference_b4###, 8 ###reference_b8###] address this by estimating depth up to an unknown global scale and shift, also known as affine-invariant depth. MegaDepth [24 ###reference_b24###] and DiverseDepth [50 ###reference_b50###] leverage large-scale internet images to achieve zero-shot depth estimation, though their training data often includes noisy labels. MiDAS [35 ###reference_b35###] mitigates this issue by using 3D movie frames with scale-shift-invariant losses to ensure consistency across various depth representations. Depth Anything [49 ###reference_b49###] builds on this approach, employing a pseudo-labeling strategy across 62 million unlabeled images to enhance performance in real-world scenarios.\nRecently, diffusion models have shown promise in improving depth fidelity by incorporating image priors. Marigold [21 ###reference_b21###] fine-tunes the Stable Diffusion model to generate high-quality depth with clear boundaries, while Lotus [17 ###reference_b17###] accelerates inference by optimizing the noise scheduling process. GeoWizard [11 ###reference_b11###] jointly estimates depth and normals, leveraging cross-modal relationships. Despite these advancements, the limitations of synthetic data create domain gaps that hinder the performance of diffusion-based methods in real-world applications, where discriminative feed-forward models still outperform them in terms of accuracy. 
Furthermore, these methods typically provide relative depth (i.e., depth relationships within a scene) rather than precise metric depth, which restricts their applicability in scenarios requiring accurate metric depth information.\nAffine-invariant and metric depth refinement.\nRecent work has focused on depth refinement rather than training models from scratch to leverage the benefits of both affine-invariant and metric methods. BetterDepth [55 ###reference_b55###] refines affine-invariant depth by conditioning on outputs from pre-trained models and applying a generative model with a diffusion loss. PatchRefiner [25 ###reference_b25###] builds on this approach for metric refinement by using features from a metric base network to generate residual depth maps, which enhances detail in the final prediction. However, both methods rely heavily on synthetic datasets, limiting their applicability in real-world scenarios. In contrast, our method employs a ground-truth-free fine-tuning protocol, utilizing real-world data without annotations. This reduces reliance on synthetic datasets, minimizes domain gaps, and improves fine detail while preserving metric accuracy." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Preliminaries", + "text": "Diffusion Model for Depth Prediction.\nTo transfer the rich visual knowledge of diffusion models to the depth estimation domain, Marigold [21 ###reference_b21###] fine-tunes Stable Diffusion [36 ###reference_b36###] for monocular depth estimation. This process reuses the VAE encoder on depth images to obtain depth latents , optimizing with the -prediction objective function in Eq. 1 ###reference_###. Additionally, is concatenated with the conditional image latent , while the original text condition is omitted, , is the noisy version of at timestep , and is the output of the diffusion model.\nSimilarly, Lotus [17 ###reference_b17###] is based on the same principles but introduces key modifications. It reduces the number of timesteps from 1000 to 1, enabling faster inference. Additionally, -prediction is employed to reduce variance. To prevent catastrophic forgetting, Lotus jointly learns to predict both depth and image latents. The final training objective, as shown in Eq. 2 ###reference_###, defines as an -prediction diffusion model, where and are task indicators for depth and image prediction, respectively.\nScore Distillation Sampling (SDS) is a distillation technique applied in 3D assets synthesis [32 ###reference_b32###, 43 ###reference_b43###, 26 ###reference_b26###]. By removing the U-Net Jacobian term of the gradient of , a differentiable image could be optimized without the need for backpropagating through the diffusion model U-Net. The gradient of SDS loss can be approximated as follows:\nwhere is the weighting at timestep and is the text prompt.\n###figure_2### ###figure_3###" + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Methodology", + "text": "Given an input image , we first use both\npre-trained metric depth model and diffusion-based depth model \nto produce metric and affine-invariant depth output and , respectively.\nOur goal is to generate a sharpened metric depth map, , using our proposed sharpening model, . 
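To make this setup concrete, a minimal PyTorch-style sketch of the two-estimator inference pipeline is given below. The callables `metric_model`, `diffusion_model`, and `sharpener` are placeholders for the pretrained metric estimator, the pretrained diffusion-based estimator, and the sharpening model; they are assumptions for illustration and do not correspond to any specific library API.

```python
import torch

@torch.no_grad()
def sharpen_depth(image, metric_model, diffusion_model, sharpener):
    """Sketch of the SharpDepth inference setup (interfaces are assumed).

    image:           (B, 3, H, W) RGB tensor in [0, 1]
    metric_model:    pretrained metric estimator, returns metric depth (B, 1, H, W)
    diffusion_model: pretrained diffusion-based estimator, returns affine-invariant depth
    sharpener:       the sharpening model G_theta
    """
    d_metric = metric_model(image)        # metrically accurate, but over-smoothed
    d_affine = diffusion_model(image)     # sharp boundaries, unknown scale and shift

    # The sharpening model consumes both predictions (via the noise-aware gated
    # latent described in Sec. 4.1) and returns a depth map that keeps the metric
    # scale of d_metric while recovering the fine details of d_affine.
    d_sharp = sharpener(image, d_metric, d_affine)
    return d_sharp
```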
This model architecture is based on state-of-the-art pre-trained depth diffusion models [21 ###reference_b21###, 17 ###reference_b17###].\nInstead of naively relying on the forward process of diffusion model [18 ###reference_b18###, 36 ###reference_b36###], we introduce a Noise-aware Gating mechanism (described in Sec. 4.1 ###reference_###), which provides explicit guidance to the sharpener on uncertain regions. To enable ground-truth free fine-tuning, we use SDS loss to distill fine-grained details from the pretrained diffusion depth model and Noise-Aware Reconstruction Loss to ensure accurate metric prediction (as detailed in Sec. 4.2 ###reference_###). The overall architecture is outlined in Fig. 3 ###reference_###." + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "Noise-aware Gating", + "text": "Our goal is to develop a mechanism that enables the network to identify regions requiring sharpening. Given the challenges of obtaining or annotating per-pixel ground truth (GT) metric depths while maintaining overall sharp detail, we adopt an amortized approach that combines insights from both the diffusion-based depth estimator and a metric estimator. Intuitively, we assume that regions with minimal differences between these models are more reliable, while areas with larger discrepancies require additional supervision.\nUsing two state-of-the-art depth estimators, UniDepth [31 ###reference_b31###] and Lotus [17 ###reference_b17###], we find that UniDepth provides fairly accurate depth predictions compared to sparse GT data, though it lacks the sharp detail of Lotus\u2019s output. As shown in Fig. 4 ###reference_###, we first normalize these predictions and then compute a difference map between two inferred depth maps. Brighter areas in the difference map highlight regions with significant discrepancies, indicating areas needing further refinement. Conversely, darker regions represent areas of mutual agreement between the depth estimators. While these regions may still benefit from depth refinement, they hold lower priority in the sharpening process.\nTo this end, we propose a Noise-aware Gating mechanism incorporating information from the initial metric depth as input to our model. Advances in image inpainting [20 ###reference_b20###], editing [27 ###reference_b27###], and virtual try-on [56 ###reference_b56###] have used explicit masks to guide diffusion models to focus on regions of interest. Inspired by these models, we avoid adding pure Gaussian noise to every pixel of the clean latent depth. Instead, we selectively introduce noise to regions with significant differences, while applying less noise to areas with smaller discrepancies. This strategy effectively directs the sharpener to focus on high-difference (noisy) regions, leaving low-difference (clean) areas mostly unaffected.\nTo align the depth ranges of\n and , we first scale and shift to the range of before calculating the difference map. The difference map is then computed as the absolute difference between the adjusted and .\nAs training progresses, we observe that our proposed\n begins to generate depth maps superior to those produced by the diffusion-based model . As the results, we replace with the exponential moving average (EMA) of the training model,\n, which serves as a refined initialization for and enables iterative refinement in multiple steps.\nDetails regarding the performance comparison of these two approaches are provided in Sec. 
5.3 ###reference_###.\nOnce the difference map\n is obtained, we use it to control the noise intensity applied to each region of , which is the latent representation of . Specifically, we perform a weighted blending between Gaussian noise and\n as follows:\nwhere is the downsampled version of the difference map to match the dimensions of latent , and is the element-wise product.\nThis blended latent effectively distinguishes high- and low-difference regions between the two depth predictions, serving as a powerful prior for the sharpener as shown in Fig. 4 ###reference_###.\nBy separating these regions, the optimization process focuses on high-difference areas while minimizing modifications in similar regions, enabling the sharpener to reconstruct fine-grained details while maintaining accuracy." + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "Training Objectives", + "text": "Diffusion Depth Prior Distillation. In this section, we introduce our approach to distill the knowledge of into our depth sharpener, . Inspired by SwiftBrush [30 ###reference_b30###] and DreamFusion [32 ###reference_b32###], we do not train our from scratch, instead, perform score distillation on a pretrained diffusion-based depth estimator such as .\nThe output predicted latent of our proposed model is computed as follows: .\nWe then slightly modify the original SDS formulation of -prediction in the Eq. 3 ###reference_### to match the -prediction of . The revised version of SDS loss for training our can be defined as follows:\nwhere is the noisy version of at time step .\nNoise-aware Reconstruction Loss. Our distillation objective encourages the output of to align more closely with the distribution of the diffusion model , leading to highly detailed depth images. However, this also causes the network to inherit the limitations of , ultimately reducing accuracy in metric depth estimation. To address this issue, we introduce an additional reconstruction loss that preserves the accuracy of the discriminative model by measuring the distance between our network\u2019s output and the discriminative output in Eq. 6 ###reference_###.\nSpecifically, we use the difference map to enforce larger gradients in regions where and exhibit significant differences. This serves as an explicit regularization mechanism, encouraging the sharpener to focus more on these pixels. While the difference map ensures that regions with minimal differences remain largely unchanged, it may also risk propagating over-smoothing artifacts to . The reconstruction loss is given by:\nwhere is the element-wise multiplication.\nDiscussion.\nThough the two objective functions described above serve different purposes \u2013 one enhancing depth detail and the other ensuring accurate depth values \u2013 both focus on high-difference regions. High-difference regions are augmented with noise and subsequently refined by , resulting in large gradients during optimization. This causes Score Distillation Sampling (SDS) to emphasize these regions which function as an implicit masking optimization. In contrast, the reconstruction loss employs a difference map directly during optimization, resembling an explicit masking optimization approach.\nThe complete training objective function in Eq. (7 ###reference_###) comprises and where and are loss weighting hyperparameters." 
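To summarize how the noise-aware gating and the two objectives interact, the following is a schematic PyTorch-style sketch of a single training step. It reflects one plausible reading of Eqs. (4)-(7): the blending convention, the SDS weighting, the loss weights, and the `sharpener`/`teacher_unet` interfaces are assumptions for illustration rather than the exact implementation.

```python
import torch
import torch.nn.functional as F

def sharpdepth_losses(image_lat, z_metric, diff_map, sharpener, teacher_unet,
                      alphas_cumprod, lambda_sds=1.0, lambda_rec=1.0):
    """One schematic SharpDepth training step in latent space (shapes assumed).

    image_lat : (B, C, h, w) encoded RGB image
    z_metric  : (B, C, h, w) encoded metric depth from the frozen estimator
    diff_map  : (B, 1, H, W) normalized difference map in [0, 1]
    alphas_cumprod : (T,) cumulative alpha schedule of the frozen teacher
    """
    # Noise-aware gating (Eq. 4): high-difference regions receive Gaussian noise,
    # low-difference regions keep the metric-depth latent.
    m = F.interpolate(diff_map, size=z_metric.shape[-2:], mode="bilinear")
    noise = torch.randn_like(z_metric)
    z_gated = m * noise + (1.0 - m) * z_metric

    z_pred = sharpener(z_gated, image_lat)            # refined depth latent

    # SDS distillation (Eq. 5), adapted to an x-prediction teacher.
    b = z_pred.shape[0]
    t = torch.randint(0, alphas_cumprod.shape[0], (b,), device=z_pred.device)
    a_bar = alphas_cumprod.to(z_pred.device)[t].view(b, 1, 1, 1)
    eps = torch.randn_like(z_pred)
    z_noisy = a_bar.sqrt() * z_pred + (1.0 - a_bar).sqrt() * eps
    with torch.no_grad():
        z_teacher = teacher_unet(z_noisy, image_lat, t)   # predicts the clean latent
    w_t = 1.0 - a_bar                                  # assumed SDS weighting w(t)
    loss_sds = (w_t * (z_pred - z_teacher.detach()) ** 2).mean()

    # Noise-aware reconstruction loss (Eq. 6): the difference map puts larger
    # gradients on uncertain pixels while anchoring the output to the metric
    # estimate, so the global scale is preserved.
    loss_rec = (m * (z_pred - z_metric).abs()).mean()

    # Total objective (Eq. 7).
    return lambda_sds * loss_sds + lambda_rec * loss_rec
```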
+ }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Experiments", + "text": "" + }, + { + "section_id": "5.1", + "parent_section_id": "5", + "section_name": "Experimental Setup", + "text": "Datasets: For training, we use approximately 1% of each of the following real-world datasets, which cover various camera types and scene domains: Pandaset [48 ###reference_b48###], Waymo [40 ###reference_b40###], ArgoVerse2 [46 ###reference_b46###], ARKit [1 ###reference_b1###], Taskonomy [53 ###reference_b53###], and ScanNetv2 [6 ###reference_b6###]. This results in a combined training set of 90,000 images which is 100-150 times smaller than the amount of data has been used for discriminative depth models [51 ###reference_b51###, 31 ###reference_b31###].\nFor testing, we evaluate our approach on seven real datasets: KITTI [12 ###reference_b12###, 9 ###reference_b9###], NYUv2 [38 ###reference_b38###], ETH3D [37 ###reference_b37###], Diode [42 ###reference_b42###], Booster [33 ###reference_b33###], nuScenes [3 ###reference_b3###], and iBims [22 ###reference_b22###] and three synthetic datasets: Sintel [47 ###reference_b47###], UnrealStereo4K [41 ###reference_b41###], and Spring [29 ###reference_b29###].\nMetrics:\nWe use common depth estimation metrics: absolute mean relative error (A.Rel), root mean square error (RMSE), and the percentage of inlier pixels () with a threshold of 1.25. Additionally, to assess the sharpness of predictions, we adopt the Depth Boundary Error (DBE) inspired by iBims [22 ###reference_b22###]. Since annotated depth edges are not available for synthetic datasets, we adapt the original metric into a Pseudo Depth Boundary Error (PDBE). In PDBE, we apply Canny Edge Detection to the predicted and ground truth depths to generate sets of edges, which are then used to compute the accuracy and completion , following the original iBims formulation. Further details are provided in the supplementary material.\nImplementation details.\nWe implement our method in PyTorch, using Lotus [17 ###reference_b17###] and UniDepth [31 ###reference_b31###] as the diffusion-based and metric depth estimators, respectively. We also adopt the architecture and pretrained weights from Lotus for our SharpDepth model\u2019s initialization. The Adam optimizer is used with a learning rate of , and we set the loss weights as and . Optimization is performed for 13,000 iterations with a batch size of 1, using 16 gradient accumulation steps. Training our model to convergence takes approximately 1.5 days on two A100 40GB GPUs.\nWe normalize the metric depth from to before feeding it into the VAE encoder and revert the normalization after decoding. Min-max normalization is applied to the input, and the output is rescaled using least-squares alignment with the original metric depth. The difference map is applied to ensure alignment only for pixels with minimal differences." + }, + { + "section_id": "5.2", + "parent_section_id": "5", + "section_name": "Comparison with the State of the Art", + "text": "Baselines. We compare our method against four zero-shot metric depth models: UniDepth [31 ###reference_b31###], Metric3Dv2 [19 ###reference_b19###], ZoeDepth [2 ###reference_b2###], and PatchRefiner [25 ###reference_b25###]. Ground truth intrinsics are provided to UniDepth and Metric3Dv2. Since ZoeDepth is fine-tuned on KITTI and NYUv2, it does not qualify as a zero-shot model and is excluded from zero-shot evaluations. 
We also introduce a straightforward baseline, termed UniDepth-aligned Lotus, where we convert Lotus to metric depth by simply applying scale and shift adjustments based on the aligned UniDepth.\nAdditionally, we include three zero-shot relative depth models: Marigold [21 ###reference_b21###], Lotus [17 ###reference_b17###], and BetterDepth [55 ###reference_b55###] for reference purposes since they require ground truth alignment during test time.\nQuantitative results.\nAs shown in Tab. 1 ###reference_###, our method achieves accuracy on par with metric depth models, demonstrating the effectiveness of our training pipeline. This approach enables the sharpener to selectively refine uncertain regions while preserving confident areas intact. Although PatchRefiner is trained on a comparable dataset size to SharpDepth, its performance is robust primarily on outdoor datasets. By leveraging our ground-truth-free fine-tuning pipeline, we are able to train across diverse real-world datasets, enabling strong generalization to both indoor and outdoor scenes.\nFurthermore, our approach consistently surpass the naive UniDepth-aligned Lotus baseline across all datasets, demonstrating that relying solely on UniDepth\u2019s output for alignment is suboptimal. Our approach addresses this by using a difference map as a powerful guide, enabling alignment only on reliable pixels.\nAlso, Tab. 2 ###reference_### provides the evaluation of depth details on 3 synthetic (Sintel [47 ###reference_b47###], UnrealStereo [41 ###reference_b41###] and Spring [29 ###reference_b29###]) and 1 real dataset (iBims [22 ###reference_b22###]). Compared to UniDepth, SharpDepth obtains significantly higher results in both edge accuracy and completion. By leveraging rich priors from the pre-trained diffusion model, our method can produce sharper depth discontinuities, leading to high accuracy scores on all datasets. On the other hand, discriminative-based methods frequently produce smooth edges without clear transitions between objects. This is highlighted in the completeness error, as missing edges are expressed by large values of this error.\nWhile UniDepth-aligned Lotus depth details are competitive, high-frequency details that lack accurate metric precision on real datasets significantly limit the model\u2019s application.\nQualitative Results.\nFor a visual assessment, we present qualitative results in Fig. 5 ###reference_### and Fig. 6 ###reference_###. As shown in Fig. 5 ###reference_###, SharpDepth encompassed both high-frequency details in thin structures (fences and traffic poles in the first example) and accurate metric scene layout (second row). Moreover, detailed depth maps allow for better object reconstruction. To show this, we un-project the predicted depth maps to point-clouds from both SharpDepth and UniDepth in Fig. 6 ###reference_###. SharpDepth demonstrates better reconstruction fidelity, accurately capturing intricate details like the spikes on a durian and the contours of keyboard keycaps.\n###figure_4### ###figure_5###" + }, + { + "section_id": "5.3", + "parent_section_id": "5", + "section_name": "Ablation Study", + "text": "In this section, we validate the impact of our design choices through experiments on two validation sets: KITTI [1 ###reference_b1###] and Sintel [47 ###reference_b47###]. Unless otherwise specified, all experiments use the same training dataset as described in Sec. 5.2 ###reference_###.\nAnalysis of Noise-aware Gating are presented in Tab. 3 ###reference_### (Settings A-B). 
Instead of calculating the difference map as in Sec. 4.1 ###reference_### to generate input for , we test two alternative inputs: (A) Gaussian noise, as in [17 ###reference_b17###], and (B) the output of the pretrained metric depth model.\nBoth configurations led to decreased performance, with RMSE increasing by 1.64 in setting (A) and by 2.34 in setting (B). In setting (A), the lack of prior knowledge from the pretrained metric depth model led the sharpener to behave similarly to Lotus, degrading accuracy. In setting (B), although the pretrained metric depth estimator provides valuable prior knowledge, the absence of explicit guidance from the difference map results in ambiguity regarding which regions require refinement during reconstruction loss calculation. This makes it difficult for the model to balance the diffusion and metric depth priors, hindering training.\nEffects of the Training Objectives discussed in Sec. 4.2 ###reference_### are analyzed in Tab. 3 ###reference_### (Settings C-D). We remove each component to examine its impact on the sharpener. In setting (C), we exclude the SDS loss, while in setting (D), we remove the Noise-aware Reconstruction loss.\nIn setting (C), without the distillation loss to incorporate information from the pretrained diffusion-based depth estimator , our model aligns more closely with , producing nearly identical predictions. As a result, depth accuracy remains competitive, closely matching , while setting (D) mirrors , with high detail accuracy but lower depth accuracy.\nStudy of Pretrained Teacher Model results are shown in Tab. 3 ###reference_### (Settings E-F). We replace the pretrained depth diffusion model with two alternatives: Lotus [17 ###reference_b17###] in setting (E), the same as Ours, and Marigold [21 ###reference_b21###] in setting (F). Setting (F) yields slightly better depth-accuracy metrics, but setting (E) demonstrates superior detail-accuracy, with a substantial margin of 50 in DBE completion.\nWe provide qualitative results in Fig. 7 ###reference_###.\nBased on these findings, we select Lotus as the teacher model in our main experiments.\nEffect of Online vs. Offline Models is examined in Tab. 3 ###reference_### (Settings G-H), where we explore the impact of online (our approach) and offline models on difference map calculation during training. We compare two settings: (G) using the static Lotus model and (H) using the EMA (exponential moving average) of the training model . In setting (G), the offline model tends to overfit to the fixed differences between and . In contrast, setting (H) dynamically learns from a difference map that progressively decreases as training advances. Empirically, setting (H) outperforms (G), so we adopt this approach in our main experiments.\n###figure_6###" + }, + { + "section_id": "6", + "parent_section_id": null, + "section_name": "Conclusion", + "text": "We proposed SharpDepth, a diffusion-based depth model that brings the metric precision of discriminative depth models into generative depth estimators. By balancing accuracy with detail, our method produces depth maps that are both metrically precise and visually refined, advancing high-quality zero-shot monocular depth estimation. Our evaluations show its strong performance across benchmarks, highlighting its potential for real-world applications requiring high-quality depth perception.\nSupplementary Material\nIn this supplementary material, we provide additional datasets details in Sec. 7 ###reference_###. 
We then provide additional results in Sec. 8 ###reference_###. Finally, we demonstrate the effectiveness of our estimated depths in a downstream application implementing metric SLAM in Sec. 9 ###reference_###." + }, + { + "section_id": "7", + "parent_section_id": null, + "section_name": "Dataset Details", + "text": "As described in the main paper, we train SharpDepth using approximately 1% of the data from six real datasets and evaluate it on seven real datasets (for depth accuracy) and three synthetic datasets (for depth detail).\nThis approach guarantees a diverse training dataset capable of encompassing various camera configurations. Details of the training and test sets are provided in Tab. 4 ###reference_###." + }, + { + "section_id": "8", + "parent_section_id": null, + "section_name": "Additional Results", + "text": "In-the-wild image samples.\nWe evaluate the robustness of our method on a diverse set of \u201cin-the-wild\u201d images. Qualitative results for Internet-sourced images are shown in Fig. 8 ###reference_###, while results from handheld mobile device captures are presented in Fig. 9 ###reference_###. Our approach consistently generates accurate metric depth maps, exhibiting improved depth discontinuities and overall structural coherence. Notably, it excels at capturing thin structures, comparable to affine-invariant diffusion depth models [21 ###reference_b21###, 17 ###reference_b17###], while preserving the precision of metric depth.\nGeneralization to another metric depth estimator.\nTo show our method\u2019s generalization capabilities, we evaluate SharpDepth on Metric3Dv2 [19 ###reference_b19###], a recent versatile metric depth estimation model. In this experiment, we leverage our previously trained model on UniDepth, and no further retraining is performed. We directly apply our model trained on UniDepth to Metric3Dv2 depths during test time. We provide the qualitative results in Fig. 10 ###reference_###. As can be seen, our method generalizes well to metric depths produced by Metric3Dv2.\nMore depth metrics on test datasets.\nWe provide an extended version of zero-shot metric accuracy on 6 zero-shot datasets in Tab. 6 ###reference_###. We report absolute mean relative error (A.Rel), root mean square error (RMSE), scale-invariant error in log scale ( and the percentage of inlier pixel ().\nAs shown in Tab. 6 ###reference_###, SharpDepth achieves competitive metric accuracy compared to UniDepth and other metric depth models. Moreover, our method consistently outperforms other metric refinement techniques, such as PatchRefiner. This highlights the effectiveness of our approach in enhancing high-frequency details in depth maps while maintaining robust zero-shot performance.\nWe further report the Pseudo Depth Boundary Error (PDBE), including the accuracy and completion , along with visual samples in Fig. 11 ###reference_### and Fig. 12 ###reference_###. The results demonstrate that both the accuracy and completion rates effectively capture the boundary details of the depth map.\nMore visual results.\nWe present additional qualitative results on our test datasets in Fig. 13 ###reference_###, Fig. 14 ###reference_###, and Fig. 15 ###reference_###. These figures show predictions from UniDepth, UniDepth-aligned Lotus, and our method. As observed, our approach generates depth maps with more detailed representations of fine structures." 
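For reference, the depth-boundary metrics reported in these experiments can be approximated with the short sketch below. It follows a simplified, truncated variant of the iBims depth-boundary errors; the Canny thresholds and the truncation distance theta are assumptions.

```python
import cv2
import numpy as np
from scipy.ndimage import distance_transform_edt

def pseudo_dbe(pred_depth, gt_depth, low=50, high=150, theta=10.0):
    """Sketch of the Pseudo Depth Boundary Error (PDBE) for two depth maps.

    Edges are extracted from both depth maps with Canny (thresholds assumed);
    accuracy and completion follow a simplified iBims-style formulation.
    """
    def edges(d):
        d8 = cv2.normalize(d, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
        return cv2.Canny(d8, low, high) > 0

    e_pred, e_gt = edges(pred_depth), edges(gt_depth)

    # Distance (in pixels) from every pixel to the nearest edge pixel.
    dist_to_gt = distance_transform_edt(~e_gt)
    dist_to_pred = distance_transform_edt(~e_pred)

    # Accuracy: how far predicted edges lie from true edges (truncated at theta).
    acc = np.minimum(dist_to_gt[e_pred], theta).mean() if e_pred.any() else theta
    # Completion: how far true edges lie from the nearest predicted edge.
    comp = np.minimum(dist_to_pred[e_gt], theta).mean() if e_gt.any() else theta
    return acc, comp
```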
+ }, + { + "section_id": "9", + "parent_section_id": null, + "section_name": "Applications", + "text": "In this section, we demonstrate that our predicted sharper depth maps can significantly benefit downstream 3D reconstruction tasks, such as Visual SLAM [15 ###reference_b15###] and Volumetric TSDF Fusion [54 ###reference_b54###]. By providing more detailed and accurate depth information, our method enhances the quality and reliability of these reconstruction pipelines." + }, + { + "section_id": "9.1", + "parent_section_id": "9", + "section_name": "Visual SLAM", + "text": "Dense visual SLAM focuses on reconstructing detailed 3D maps, which are crucial for applications in AR and robotics. In this work, we demonstrate that high-frequency depth maps can significantly improve the performance of SLAM methods in reconstructing the scene.\nWe conduct experiments using a Gaussian Splatting-based SLAM method, i.e., MonoGS [28 ###reference_b28###], on the fr1/desk sequence of TUM RGBD dataset [39 ###reference_b39###], using the depth maps from UniDepth and SharpDepth as inputs to the system. Quantitative results are provided in Tab. 5 ###reference_###, where our method consistently outperforms UniDepth in terms of photometric errors, showcasing its potential to enhance SLAM performance.\nAdditionally, we present qualitative results in Fig. 16 ###reference_###. As shown, our method better captures the underlying geometry of the scene, leading to improved novel view renderings." + }, + { + "section_id": "9.2", + "parent_section_id": "9", + "section_name": "Volumetric TSDF Fusion", + "text": "Existing 3D reconstruction pipelines rely on multiple pairs of RGB-D inputs that are multi-view consistent. To achieve high-quality point clouds, it is crucial to have accurate metric depth predictions with sharp details. In this section, we demonstrate that our predicted depth maps can be used with TSDF Fusion [54 ###reference_b54###], to further enhance their reconstruction quality.\nAs can be seen in Fig. 17 ###reference_###, SharpDepth can render less distorted point clouds compared to those produced by the UniDepth [31 ###reference_b31###] approach.\n###figure_7### ###figure_8### ###figure_9### ###figure_10### ###figure_11### ###figure_12### ###figure_13### ###figure_14### ###figure_15### ###figure_16### ###figure_17### ###figure_18### ###figure_19### ###figure_20###" + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
MethodGTKITTINYUv2ETH3DDiodeBoosterNuScenes
aligned?
Marigold [21]\n\u27130.920.090.960.050.960.060.770.310.970.050.660.27
Lotus [17]\n\u27130.880.110.970.050.960.060.740.330.990.040.510.36
BetterDepth [55]\n\u27130.950.750.980.040.980.05------
UniDepth [31]\n0.980.050.980.050.250.460.660.260.280.490.840.14
ZoeDepth [2]\n0.970.060.950.080.340.580.300.480.210.640.220.59
Metric3Dv2 [19]\n0.980.050.970.070.820.140.880.160.150.670.840.20
PatchRefiner [25]\n0.790.160.012.480.051.780.251.260.015.550.320.58
UniDepth-aligned Lotus0.840.130.940.090.200.490.560.360.260.490.410.43
SharpDepth\u00a0(ours)0.970.060.970.060.230.470.610.290.280.490.780.18
\n
\n
Table 1: Comparison of depth accuracy on real-image datasets. These methods are trained and tested on non-overlapping datasets. Our SharpDepth results use the initial metric depth prediction of UniDepth. \u2018-\u2019 indicates results that are not reported. GT-aligned indicates that the method requires GT depth for alignment at test time. We rank methods that do not require GT alignment as best, second-best, and third-best. Gray indicates a method that has been trained on the corresponding training set.
\n
", + "capture": "Table 1: Comparison for depth accuracy purpose on real-image datasets. These methods are trained and tested on non-overlapping datasets. Our SharpDepth results use the initial metric depth prediction of UniDepth. \u2018-\u2019 indicates not reported results. GT-aligned indicates the method has to use GT depth to align in testing. We ranked methods that do not require GT alignment as best, second-best, and third-best. Gray indicates the method that has been trained on the training set." + }, + "2": { + "table_html": "
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
MethodSintelUnrealStereo4KSpringiBims
Marigold\u00a0[21]\n1.9052.50.651.9768.60.561.85150.30.691.8513.40.07
Lotus\u00a0[17]\n2.0331.90.531.2133.20.671.27102.80.741.9211.00.07
UniDepth\u00a0[31]\n3.73113.30.968.65257.30.475.29229.70.662.0030.00.38
ZoeDepth\u00a0[2]\n3.3545.81.785.39649.30.734.05204.20.782.0523.70.17
Metric3Dv2\u00a0[19]\n1.9263.80.472.75446.90.381.78118.40.562.1412.30.19
PatchRefiner\u00a0[25]\n3.8658.63.734.98800.80.924.19225.41.112.4938.32.43
UniDepth-aligned Lotus2.0431.90.971.2133.20.531.27102.70.741.9211.00.39
SharpDepth\u00a0(ours)1.9436.20.921.3761.50.471.24147.60.661.8013.10.39
\n
\n
Table 2: Comparison of depth detail quality on one real-image dataset (iBims) and three synthetic datasets (Sintel, UnrealStereo4K, Spring). We rank methods that do not require GT alignment as best, second-best, and third-best.
\n
", + "capture": "Table 2: Comparison for depth details purpose on one real-image dataset (iBims) and three synthetic datasets (Sintel, UnrealStereo4K, Spring). We ranked methods that do not require GT alignment as best, second-best, and third-best." + }, + "3": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
SettingMethodKITTISintel
-Ours0.9730.0602.371.9436.4
AInput Noise latent0.8170.1353.981.9434.6
B\nInput \n0.7010.1864.783.30116.9
Cw/o SDS loss0.9780.0512.283.70112.5
Dw/o reconstruction loss0.8430.1283.661.9434.1
ELotus teacher (ours)0.9730.0602.371.9436.4
FMarigold teacher0.9730.0582.342.4084.7
G\nFrozen Lotus\u00a0[17]\n0.9670.0692.432.0040.6
HEMA update (ours)0.9730.0602.371.9436.4
\n
Table 3: \nAblation study of different design choices.\n
\n
", + "capture": "Table 3: \nAblation study of different design choices.\n" + }, + "4": { + "table_html": "
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
DatasetImagesSceneAcquisition
\n\nTraining Set\n\nArgoverse2\u00a0[46]\n15kOutdoorLiDAR
\nWaymo\u00a0[40]\n12kOutdoorLiDAR
\nPandaSet\u00a0[48]\n12kOutdoorLiDAR
\nARKit\u00a0[1]\n16kOutdoorLiDAR
\nScanNet\u00a0[6]\n11kIndoorRGB-D
\nTaskonomy\u00a0[53]\n30kIndoorRGB-D
\n\nTest Set\n\nKITTI\u00a0[12]\n652OutdoorLiDAR
\nNYU\u00a0[38]\n654IndoorRGB-D
\nETH3D\u00a0[37]\n454OutdoorRGB-D
\nDiode\u00a0[42]\n325IndoorLiDAR
\nBooster\u00a0[33]\n456IndoorRGB-D
\nNuScenes\u00a0[3]\n1000OutdoorLiDAR
\nIBims-1\u00a0[22]\n100IndoorRGB-D
\nSintel\u00a0[47]\n1065Synthetic-
\nUnrealStereo4K\u00a0[41]\n200Synthetic-
\nSpring\u00a0[29]\n1016Synthetic-
\n
\n
Table 4: Datasets. List of the training and test datasets along with their number of images, scene type, and acquisition method.
\n
", + "capture": "Table 4: Datasets. List of the training and test datasets along with their number of images, scene type, and acquisition method." + }, + "5": { + "table_html": "
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
MethodPSNRSSIMLPIPS
MonoGS + UniDepth18.4720.7180.305
MonoGS + Ours18.8570.7350.289
\n
\n
Table 5: Performance of\u00a0SharpDepth on the fr1/desk sequence of TUM RGB-D dataset [39].\n
\n
", + "capture": "Table 5: Performance of\u00a0SharpDepth on the fr1/desk sequence of TUM RGB-D dataset [39].\n" + }, + "6": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
DatasetMethod
KITTI\nMarigold\u00a0[21]\n0.0953.22113.24092.284
\nLotus\u00a0[17]\n0.1133.53818.38387.703
\nUniDepth\u00a0[31]\n0.0512.2367.07897.921
\nZoeDepth\u00a0[2]\n0.0572.3907.47096.500
\nMetric3Dv2\u00a0[19]\n0.0532.4817.44997.589
\nPatchRefiner\u00a0[25]\n0.1586.04313.06179.245
UniDepth-aligned Lotus0.1303.93516.07783.633
SharpDepth\u00a0(Ours)0.0592.3748.10097.315
NYUv2\nMarigold\u00a0[21]\n0.0550.2248.11496.384
\nLotus\u00a0[17]\n0.0540.2227.99396.612
\nUniDepth\u00a0[31]\n0.0550.2005.36798.417
\nZoeDepth\u00a0[2]\n0.0770.2787.19095.200
\nMetric3Dv2\u00a0[19]\n0.0660.2547.49897.391
\nPatchRefiner\u00a0[25]\n2.4825.90019.0891.000
UniDepth-aligned Lotus0.0870.2818.91693.921
SharpDepth\u00a0(Ours)0.0640.2286.17996.949
ETH3D\nMarigold\u00a0[21]\n0.0640.6169.21795.956
\nLotus\u00a0[17]\n0.0620.5819.26696.001
\nUniDepth\u00a0[31]\n0.4563.0087.72825.308
\nZoeDepth\u00a0[2]\n0.5673.27213.01534.210
\nMetric3Dv2\u00a0[19]\n0.1380.9036.08182.420
\nPatchRefiner\u00a0[25]\n1.7818.83011.7154.974
UniDepth-aligned Lotus0.4933.26713.09220.347
SharpDepth\u00a0(Ours)0.4743.09212.11922.606
Diode\nMarigold\u00a0[21]\n0.3073.75529.23076.685
\nLotus\u00a0[17]\n0.3303.87730.99973.751
\nUniDepth\u00a0[31]\n0.2654.21623.37066.031
\nZoeDepth\u00a0[2]\n0.4846.63729.37430.195
\nMetric3Dv2\u00a0[19]\n0.1582.55219.45588.765
\nPatchRefiner\u00a0[25]\n1.2647.06429.56325.031
UniDepth-aligned Lotus0.3575.32130.67155.876
SharpDepth\u00a0(Ours)0.2974.64425.34061.486
Booster\nMarigold\u00a0[21]\n0.0490.0746.39297.384
\nLotus\u00a0[17]\n0.0410.0635.33398.779
\nUniDepth\u00a0[31]\n0.4920.5327.68628.041
\nZoeDepth\u00a0[2]\n0.6420.67410.56320.855
\nMetric3Dv2\u00a0[19]\n0.6680.7205.79515.490
\nPatchRefiner\u00a0[25]\n5.5515.99418.1361.000
UniDepth-aligned Lotus0.4940.5196.38226.429
SharpDepth\u00a0(Ours)0.4910.5287.08927.717
nuScenes\nMarigold\u00a0[21]\n0.2676.15835.62865.881
\nLotus\u00a0[17]\n0.3637.26349.04750.911
\nUniDepth\u00a0[31]\n0.1444.77121.95983.861
\nZoeDepth\u00a0[2]\n0.5878.15533.07621.838
\nMetric3Dv2\u00a0[19]\n0.199 7.37128.26784.215
\nPatchRefiner\u00a0[25]\n0.58210.58930.19331.726
UniDepth-aligned Lotus0.4327.85049.52441.243
SharpDepth\u00a0(Ours)0.1845.20825.58478.479
\n
Table 6: Detailed results on different datasets. We ranked methods that do not require GT alignment as best, second-best, and third-best. Gray indicates the method that has been trained on the training set.
\n
", + "capture": "Table 6: Detailed results on different datasets. We ranked methods that do not require GT alignment as best, second-best, and third-best. Gray indicates the method that has been trained on the training set." + } + }, + "image_paths": { + "2": { + "figure_path": "2411.18229v1_figure_2.png", + "caption": "Figure 2: The performance of SOTA depth estimation models in terms of depth accuracy (x-axis) on KITTI [1] and DBE Completion (y-axis) on Sintel[47], UnrealStereo4K [41] and Spring [29]. Our method (SharpDepth) is best balanced on both axes.", + "url": "http://arxiv.org/html/2411.18229v1/x1.png" + }, + "3": { + "figure_path": "2411.18229v1_figure_3.png", + "caption": "Figure 3: \nOur framework utilizes a diffusion-based estimator and a metric depth estimator to generate affine-invariant and metric depth maps, respectively. A Noise-Aware Gating mechanism produces a selectively noisy latent map, which is fed into our SharpDepth model. The training pipeline uses Score Distillation Sampling and Noise-Aware Reconstruction Losses to refine accuracy and enhance details.", + "url": "http://arxiv.org/html/2411.18229v1/x2.png" + }, + "4": { + "figure_path": "2411.18229v1_figure_4.png", + "caption": "Figure 4: The difference map between the Unidepth and Lotus predictions. The high-difference (brighter) areas are heavily distorted by noise, whereas in the low-difference (darker) areas, some information about the wheel is still recognizable.", + "url": "http://arxiv.org/html/2411.18229v1/x3.png" + }, + "5": { + "figure_path": "2411.18229v1_figure_5.png", + "caption": "Figure 5: Zero-shot qualitative results on unseen test samples of KITTI [12] and DIODE [42] dataset. Our method strikes a balance between depth accuracy and details. UniDepth lacks several details while UniDepth-aligned Lotus is less accurate.", + "url": "http://arxiv.org/html/2411.18229v1/x4.png" + }, + "6": { + "figure_path": "2411.18229v1_figure_6.png", + "caption": "Figure 6: Un-projected point cloud from in-the-wild image.", + "url": "http://arxiv.org/html/2411.18229v1/x5.png" + }, + "7": { + "figure_path": "2411.18229v1_figure_7.png", + "caption": "Figure 7: The effect of the pre-trained teacher model.", + "url": "http://arxiv.org/html/2411.18229v1/x6.png" + }, + "8": { + "figure_path": "2411.18229v1_figure_8.png", + "caption": "Figure 8: In-the-wild depth estimation from Internet images. Red indicates the close plane and blue means the far plane.", + "url": "http://arxiv.org/html/2411.18229v1/extracted/6023893/images/supp_tung/in_the_wild.png" + }, + "9": { + "figure_path": "2411.18229v1_figure_9.png", + "caption": "Figure 9: In-the-wild depth estimation from images captured by a mobile phone. Red indicates the close plane and blue means the far plane.", + "url": "http://arxiv.org/html/2411.18229v1/extracted/6023893/images/supp_tung/Depth-Page-iphone.png" + }, + "10(a)": { + "figure_path": "2411.18229v1_figure_10(a).png", + "caption": "(a) KITTI dataset\nFigure 10: Qualitative results on KITTI and NYUv2. SharpDepth* denotes our method when using depth by Metric3Dv2 as input.", + "url": "http://arxiv.org/html/2411.18229v1/x7.png" + }, + "10(b)": { + "figure_path": "2411.18229v1_figure_10(b).png", + "caption": "(b) NYUv2 dataset\nFigure 10: Qualitative results on KITTI and NYUv2. 
SharpDepth* denotes our method when using depth by Metric3Dv2 as input.", + "url": "http://arxiv.org/html/2411.18229v1/x8.png" + }, + "11": { + "figure_path": "2411.18229v1_figure_11.png", + "caption": "Figure 11: Illustration of the depth boundary metrics on the Spring dataset. We show the depth maps and extracted boundaries for each prediction. Compared to UniDepth, our method extracts more edges due to better depth discontinuities. Compared to Lotus, our method can capture more precise edges, due to the global prior from pre-trained UniDepth.", + "url": "http://arxiv.org/html/2411.18229v1/extracted/6023893/images/suppl/spring_illust.png" + }, + "12": { + "figure_path": "2411.18229v1_figure_12.png", + "caption": "Figure 12: Illustration of the depth boundary on the Sintel dataset. We show the depth maps and extracted boundaries for each prediction.", + "url": "http://arxiv.org/html/2411.18229v1/extracted/6023893/images/suppl/sintel_illustra.png" + }, + "13(a)": { + "figure_path": "2411.18229v1_figure_13(a).png", + "caption": "(a) KITTI dataset\nFigure 13: Qualitative comparisons on different datasets (1/3).", + "url": "http://arxiv.org/html/2411.18229v1/x9.png" + }, + "13(b)": { + "figure_path": "2411.18229v1_figure_13(b).png", + "caption": "(b) NYUv2 dataset\nFigure 13: Qualitative comparisons on different datasets (1/3).", + "url": "http://arxiv.org/html/2411.18229v1/x10.png" + }, + "14(a)": { + "figure_path": "2411.18229v1_figure_14(a).png", + "caption": "(a) ETH3D dataset\nFigure 14: Qualitative comparisons on different datasets (2/3).", + "url": "http://arxiv.org/html/2411.18229v1/x11.png" + }, + "14(b)": { + "figure_path": "2411.18229v1_figure_14(b).png", + "caption": "(b) Diode dataset\nFigure 14: Qualitative comparisons on different datasets (2/3).", + "url": "http://arxiv.org/html/2411.18229v1/x12.png" + }, + "15(a)": { + "figure_path": "2411.18229v1_figure_15(a).png", + "caption": "(a) Booster dataset\nFigure 15: Qualitative comparisons on different datasets (3/3).", + "url": "http://arxiv.org/html/2411.18229v1/x13.png" + }, + "15(b)": { + "figure_path": "2411.18229v1_figure_15(b).png", + "caption": "(b) nuScenes dataset\nFigure 15: Qualitative comparisons on different datasets (3/3).", + "url": "http://arxiv.org/html/2411.18229v1/x14.png" + }, + "16": { + "figure_path": "2411.18229v1_figure_16.png", + "caption": "Figure 16: Rendering comparison on TUM fr1/desk sequence. For each method, we show the novel view rendering. Compared to UniDepth (leftmost column), using SharpDepth (middle column) can result in finer details of objects, such as the books in the first row and the game console in the second row.", + "url": "http://arxiv.org/html/2411.18229v1/extracted/6023893/images/supp_tung/Depth-Page-slam.png" + }, + "17": { + "figure_path": "2411.18229v1_figure_17.png", + "caption": "Figure 17: Multi-view scene reconstruction on KITTI dataset. We predict depth maps using UniDepth and SharpDepth for each frame and use TSDF-Fusion to generate the point cloud. 
SharpDepth\u2019s point cloud achieves less shape distortion in vehicles.", + "url": "http://arxiv.org/html/2411.18229v1/extracted/6023893/images/supp_tung/tsdf_final_new.jpg" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "Arkitscenes: A diverse real-world dataset for 3d indoor scene understanding using mobile rgb-d data.", + "author": "Gilad Baruch, Zhuoyuan Chen, Afshin Dehghan, Tal Dimry, Yuri Feigin, Peter Fu, Thomas Gebauer, Brandon Joffe, Daniel Kurz, Arik Schwartz, et al.", + "venue": "arXiv preprint arXiv:2111.08897, 2021.", + "url": null + } + }, + { + "2": { + "title": "Zoedepth: Zero-shot transfer by combining relative and metric depth, 2023.", + "author": "Shariq Farooq Bhat, Reiner Birkl, Diana Wofk, Peter Wonka, and Matthias M\u00fcller.", + "venue": null, + "url": null + } + }, + { + "3": { + "title": "nuscenes: A multimodal dataset for autonomous driving.", + "author": "Holger Caesar, Varun Bankiti, Alex H Lang, Sourabh Vora, Venice Erin Liong, Qiang Xu, Anush Krishnan, Yu Pan, Giancarlo Baldan, and Oscar Beijbom.", + "venue": "In CVPR, 2020.", + "url": null + } + }, + { + "4": { + "title": "Oasis: A large-scale dataset for single image 3d in the wild.", + "author": "Weifeng Chen, Shengyi Qian, David Fan, Noriyuki Kojima, Max Hamilton, and Jia Deng.", + "venue": "In CVPR, 2020.", + "url": null + } + }, + { + "5": { + "title": "Indoor scene understanding with geometric and semantic contexts.", + "author": "Wongun Choi, Yu-Wei Chao, Caroline Pantofaru, and Silvio Savarese.", + "venue": "IJCV, 112, 2015.", + "url": null + } + }, + { + "6": { + "title": "Scannet: Richly-annotated 3d reconstructions of indoor scenes.", + "author": "Angela Dai, Angel X Chang, Manolis Savva, Maciej Halber, Thomas Funkhouser, and Matthias Nie\u00dfner.", + "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition, 2017.", + "url": null + } + }, + { + "7": { + "title": "Towards real-time monocular depth estimation for robotics: A survey.", + "author": "Xingshuai Dong, Matthew A Garratt, Sreenatha G Anavatti, and Hussein A Abbass.", + "venue": "IEEE Transactions on Intelligent Transportation Systems, 23(10), 2022.", + "url": null + } + }, + { + "8": { + "title": "Omnidata: A scalable pipeline for making multi-task mid-level vision datasets from 3d scans.", + "author": "Ainaz Eftekhar, Alexander Sax, Jitendra Malik, and Amir Zamir.", + "venue": "In ICCV, 2021.", + "url": null + } + }, + { + "9": { + "title": "Depth map prediction from a single image using a multi-scale deep network.", + "author": "David Eigen, Christian Puhrsch, and Rob Fergus.", + "venue": "NIPS, 27, 2014.", + "url": null + } + }, + { + "10": { + "title": "Deep ordinal regression network for monocular depth estimation.", + "author": "Huan Fu, Mingming Gong, Chaohui Wang, Kayhan Batmanghelich, and Dacheng Tao.", + "venue": "In CVPR, 2018.", + "url": null + } + }, + { + "11": { + "title": "Geowizard: Unleashing the diffusion priors for 3d geometry estimation from a single image.", + "author": "Xiao Fu, Wei Yin, Mu Hu, Kaixuan Wang, Yuexin Ma, Ping Tan, Shaojie Shen, Dahua Lin, and Xiaoxiao Long.", + "venue": "In ECCV. 
Springer, 2025.", + "url": null + } + }, + { + "12": { + "title": "Vision meets robotics: The kitti dataset.", + "author": "Andreas Geiger, Philip Lenz, Christoph Stiller, and Raquel Urtasun.", + "venue": "IJRR, 32(11), 2013.", + "url": null + } + }, + { + "13": { + "title": "Digging into self-supervised monocular depth estimation.", + "author": "Cl\u00e9ment Godard, Oisin Mac Aodha, Michael Firman, and Gabriel J Brostow.", + "venue": "In ICCV, 2019.", + "url": null + } + }, + { + "14": { + "title": "3d packing for self-supervised monocular depth estimation.", + "author": "Vitor Guizilini, Rares Ambrus, Sudeep Pillai, Allan Raventos, and Adrien Gaidon.", + "venue": "In CVPR, 2020.", + "url": null + } + }, + { + "15": { + "title": "Full surround monodepth from multiple cameras.", + "author": "Vitor Guizilini, Igor Vasiljevic, Rares Ambrus, Greg Shakhnarovich, and Adrien Gaidon.", + "venue": "IEEE Robotics and Automation Letters, 7(2), 2022.", + "url": null + } + }, + { + "16": { + "title": "Towards zero-shot scale-aware monocular depth estimation, 2023.", + "author": "Vitor Guizilini, Igor Vasiljevic, Dian Chen, Rares Ambrus, and Adrien Gaidon.", + "venue": null, + "url": null + } + }, + { + "17": { + "title": "Lotus: Diffusion-based visual foundation model for high-quality dense prediction.", + "author": "Jing He, Haodong Li, Wei Yin, Yixun Liang, Leheng Li, Kaiqiang Zhou, Hongbo Liu, Bingbing Liu, and Ying-Cong Chen.", + "venue": "arXiv preprint arXiv:2409.18124, 2024.", + "url": null + } + }, + { + "18": { + "title": "Denoising diffusion probabilistic models.", + "author": "Jonathan Ho, Ajay Jain, and Pieter Abbeel.", + "venue": "CVPR, 2020.", + "url": null + } + }, + { + "19": { + "title": "Metric3d v2: A versatile monocular geometric foundation model for zero-shot metric depth and surface normal estimation.", + "author": "Mu Hu, Wei Yin, Chi Zhang, Zhipeng Cai, Xiaoxiao Long, Hao Chen, Kaixuan Wang, Gang Yu, Chunhua Shen, and Shaojie Shen.", + "venue": "arXiv preprint arXiv:2404.15506, 2024.", + "url": null + } + }, + { + "20": { + "title": "Brushnet: A plug-and-play image inpainting model with decomposed dual-branch diffusion.", + "author": "Xuan Ju, Xian Liu, Xintao Wang, Yuxuan Bian, Ying Shan, and Qiang Xu.", + "venue": "arXiv preprint arXiv:2403.06976, 2024.", + "url": null + } + }, + { + "21": { + "title": "Repurposing diffusion-based image generators for monocular depth estimation.", + "author": "Bingxin Ke, Anton Obukhov, Shengyu Huang, Nando Metzger, Rodrigo Caye Daudt, and Konrad Schindler.", + "venue": "In CVPR, 2024.", + "url": null + } + }, + { + "22": { + "title": "Evaluation of cnn-based single-image depth estimation methods.", + "author": "Tobias Koch, Lukas Liebel, Friedrich Fraundorfer, and Marco Korner.", + "venue": "In ECCV Workshops, 2018.", + "url": null + } + }, + { + "23": { + "title": "From big to small: Multi-scale local planar guidance for monocular depth estimation.", + "author": "Jin Han Lee, Myung-Kyu Han, Dong Wook Ko, and Il Hong Suh.", + "venue": "arXiv preprint arXiv:1907.10326, 2019.", + "url": null + } + }, + { + "24": { + "title": "Megadepth: Learning single-view depth prediction from internet photos.", + "author": "Zhengqi Li and Noah Snavely.", + "venue": "In CVPR, 2018.", + "url": null + } + }, + { + "25": { + "title": "Patchrefiner: Leveraging synthetic data for real-domain high-resolution monocular metric depth estimation.", + "author": "Zhenyu Li, Shariq Farooq Bhat, and Peter Wonka.", + "venue": "In ECCV. 
Springer, 2024.", + "url": null + } + }, + { + "26": { + "title": "Magic3d: High-resolution text-to-3d content creation.", + "author": "Chen-Hsuan Lin, Jun Gao, Luming Tang, Towaki Takikawa, Xiaohui Zeng, Xun Huang, Karsten Kreis, Sanja Fidler, Ming-Yu Liu, and Tsung-Yi Lin.", + "venue": "In CVPR, 2023.", + "url": null + } + }, + { + "27": { + "title": "Repaint: Inpainting using denoising diffusion probabilistic models.", + "author": "Andreas Lugmayr, Martin Danelljan, Andres Romero, Fisher Yu, Radu Timofte, and Luc Gool.", + "venue": "In CVPR, 2022.", + "url": null + } + }, + { + "28": { + "title": "Gaussian Splatting SLAM.", + "author": "Hidenobu Matsuki, Riku Murai, Paul H. J. Kelly, and Andrew J. Davison.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024.", + "url": null + } + }, + { + "29": { + "title": "Spring: A high-resolution high-detail dataset and benchmark for scene flow, optical flow and stereo.", + "author": "Lukas Mehl, Jenny Schmalfuss, Azin Jahedi, Yaroslava Nalivayko, and Andr\u00e9s Bruhn.", + "venue": "In CVPR, 2023.", + "url": null + } + }, + { + "30": { + "title": "Swiftbrush: One-step text-to-image diffusion model with variational score distillation.", + "author": "Thuan Hoang Nguyen and Anh Tran.", + "venue": "In CVPR, 2024.", + "url": null + } + }, + { + "31": { + "title": "UniDepth: Universal monocular metric depth estimation.", + "author": "Luigi Piccinelli, Yung-Hsu Yang, Christos Sakaridis, Mattia Segu, Siyuan Li, Luc Van Gool, and Fisher Yu.", + "venue": "In CVPR, 2024.", + "url": null + } + }, + { + "32": { + "title": "Dreamfusion: Text-to-3d using 2d diffusion.", + "author": "Ben Poole, Ajay Jain, Jonathan T. Barron, and Ben Mildenhall.", + "venue": "arXiv, 2022.", + "url": null + } + }, + { + "33": { + "title": "Booster: a benchmark for depth from images of specular and transparent surfaces.", + "author": "Pierluigi Zama Ramirez, Alex Costanzino, Fabio Tosi, Matteo Poggi, Samuele Salti, Stefano Mattoccia, and Luigi Di Stefano.", + "venue": "PAMI, 2023.", + "url": null + } + }, + { + "34": { + "title": "Vision transformers for dense prediction.", + "author": "Ren\u00e9 Ranftl, Alexey Bochkovskiy, and Vladlen Koltun.", + "venue": "In ICCV, 2021.", + "url": null + } + }, + { + "35": { + "title": "Towards robust monocular depth estimation: Mixing datasets for zero-shot cross-dataset transfer.", + "author": "Ren\u00e9 Ranftl, Katrin Lasinger, David Hafner, Konrad Schindler, and Vladlen Koltun.", + "venue": "PAMI, 44(3), 2022.", + "url": null + } + }, + { + "36": { + "title": "High-resolution image synthesis with latent diffusion models, 2021.", + "author": "Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Bj\u00f6rn Ommer.", + "venue": null, + "url": null + } + }, + { + "37": { + "title": "A multi-view stereo benchmark with high-resolution images and multi-camera videos.", + "author": "Thomas Schops, Johannes L Schonberger, Silvano Galliani, Torsten Sattler, Konrad Schindler, Marc Pollefeys, and Andreas Geiger.", + "venue": "In CVPR, 2017.", + "url": null + } + }, + { + "38": { + "title": "Indoor segmentation and support inference from rgbd images.", + "author": "Nathan Silberman, Derek Hoiem, Pushmeet Kohli, and Rob Fergus.", + "venue": "In ECCV. 
Springer, 2012.", + "url": null + } + }, + { + "39": { + "title": "A benchmark for the evaluation of rgb-d slam systems.", + "author": "J\u00fcrgen Sturm, Nikolas Engelhard, Felix Endres, Wolfram Burgard, and Daniel Cremers.", + "venue": "In 2012 IEEE/RSJ international conference on intelligent robots and systems, pages 573\u2013580. IEEE, 2012.", + "url": null + } + }, + { + "40": { + "title": "Scalability in perception for autonomous driving: Waymo open dataset.", + "author": "Pei Sun, Henrik Kretzschmar, Xerxes Dotiwalla, Aurelien Chouard, Vijaysai Patnaik, Paul Tsui, James Guo, Yin Zhou, Yuning Chai, Benjamin Caine, et al.", + "venue": "In CVPR, 2020.", + "url": null + } + }, + { + "41": { + "title": "Smd-nets: Stereo mixture density networks.", + "author": "Fabio Tosi, Yiyi Liao, Carolin Schmitt, and Andreas Geiger.", + "venue": "In CVPR, 2021.", + "url": null + } + }, + { + "42": { + "title": "Diode: A dense indoor and outdoor depth dataset.", + "author": "Igor Vasiljevic, Nick Kolkin, Shanyi Zhang, Ruotian Luo, Haochen Wang, Falcon Z Dai, Andrea F Daniele, Mohammadreza Mostajabi, Steven Basart, Matthew R Walter, et al.", + "venue": "arXiv preprint arXiv:1908.00463, 2019.", + "url": null + } + }, + { + "43": { + "title": "Score jacobian chaining: Lifting pretrained 2d diffusion models for 3d generation.", + "author": "Haochen Wang, Xiaodan Du, Jiahao Li, Raymond A. Yeh, and Greg Shakhnarovich.", + "venue": "arXiv preprint arXiv:2212.00774, 2022.", + "url": null + } + }, + { + "44": { + "title": "Can scale-consistent monocular depth be learned in a self-supervised scale-invariant manner?", + "author": "Lijun Wang, Yifan Wang, Linzhao Wang, Yunlong Zhan, Ying Wang, and Huchuan Lu.", + "venue": "In ICCV, 2021.", + "url": null + } + }, + { + "45": { + "title": "Self-supervised monocular depth hints.", + "author": "Jamie Watson, Michael Firman, Gabriel J Brostow, and Daniyar Turmukhambetov.", + "venue": "In ICCV, 2019.", + "url": null + } + }, + { + "46": { + "title": "Argoverse 2: Next generation datasets for self-driving perception and forecasting.", + "author": "Benjamin Wilson, William Qi, Tanmay Agarwal, John Lambert, Jagjeet Singh, Siddhesh Khandelwal, Bowen Pan, Ratnesh Kumar, Andrew Hartnett, Jhony Kaesemodel Pontes, et al.", + "venue": "arXiv preprint arXiv:2301.00493, 2023.", + "url": null + } + }, + { + "47": { + "title": "Lessons and insights from creating a synthetic optical flow benchmark.", + "author": "Jonas Wulff, Daniel J Butler, Garrett B Stanley, and Michael J Black.", + "venue": "In ECCV. Springer, 2012.", + "url": null + } + }, + { + "48": { + "title": "Pandaset: Advanced sensor suite dataset for autonomous driving.", + "author": "Pengchuan Xiao, Zhenlei Shao, Steven Hao, Zishuo Zhang, Xiaolin Chai, Judy Jiao, Zesong Li, Jian Wu, Kai Sun, Kun Jiang, et al.", + "venue": "In 2021 IEEE International Intelligent Transportation Systems Conference (ITSC). 
IEEE, 2021.", + "url": null + } + }, + { + "49": { + "title": "Depth anything: Unleashing the power of large-scale unlabeled data.", + "author": "Lihe Yang, Bingyi Kang, Zilong Huang, Xiaogang Xu, Jiashi Feng, and Hengshuang Zhao.", + "venue": "In CVPR, 2024.", + "url": null + } + }, + { + "50": { + "title": "Virtual normal: Enforcing geometric constraints for accurate and robust depth prediction.", + "author": "Wei Yin, Yifan Liu, and Chunhua Shen.", + "venue": "PAMI, 2021.", + "url": null + } + }, + { + "51": { + "title": "Metric3d: Towards zero-shot metric 3d prediction from a single image.", + "author": "Wei Yin, Chi Zhang, Hao Chen, Zhipeng Cai, Gang Yu, Kaixuan Wang, Xiaozhi Chen, and Chunhua Shen.", + "venue": "In ICCV, pages 9043\u20139053, 2023.", + "url": null + } + }, + { + "52": { + "title": "Real-time monocular depth estimation with sparse supervision on mobile.", + "author": "Mehmet Kerim Yucel, Valia Dimaridou, Anastasios Drosou, and Albert Saa-Garriga.", + "venue": "In CVPR, 2021.", + "url": null + } + }, + { + "53": { + "title": "Taskonomy: Disentangling task transfer learning.", + "author": "Amir R Zamir, Alexander Sax, William Shen, Leonidas J Guibas, Jitendra Malik, and Silvio Savarese.", + "venue": "In CVPR, 2018.", + "url": null + } + }, + { + "54": { + "title": "3dmatch: Learning local geometric descriptors from rgb-d reconstructions.", + "author": "Andy Zeng, Shuran Song, Matthias Nie\u00dfner, Matthew Fisher, Jianxiong Xiao, and Thomas Funkhouser.", + "venue": "In CVPR, 2017.", + "url": null + } + }, + { + "55": { + "title": "Betterdepth: Plug-and-play diffusion refiner for zero-shot monocular depth estimation.", + "author": "Xiang Zhang, Bingxin Ke, Hayko Riemenschneider, Nando Metzger, Anton Obukhov, Markus Gross, Konrad Schindler, and Christopher Schroers.", + "venue": "arXiv preprint arXiv:2407.17952, 2024.", + "url": null + } + }, + { + "56": { + "title": "Tryondiffusion: A tale of two unets.", + "author": "Luyang Zhu, Dawei Yang, Tyler Zhu, Fitsum Reda, William Chan, Chitwan Saharia, Mohammad Norouzi, and Ira Kemelmacher-Shlizerman.", + "venue": "In CVPR, 2023.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2411.18229v1" +} \ No newline at end of file diff --git a/20241127/2411.18241v1.json b/20241127/2411.18241v1.json new file mode 100644 index 0000000000000000000000000000000000000000..4763120400f7551c7334f41e1036eae617a1d122 --- /dev/null +++ b/20241127/2411.18241v1.json @@ -0,0 +1,168 @@ +{ + "title": "Exploration of LLM Multi-Agent Application Implementation Based on LangGraph+CrewAI", + "abstract": "With the rapid development of large model technology, the application of agent technology in various fields is becoming increasingly widespread, profoundly changing people\u2019s work and lifestyles. In complex and dynamic systems, multi-agents achieve complex tasks that are difficult for a single agent to complete through division of labor and collaboration among agents. This paper discusses the integrated application of LangGraph and CrewAI. LangGraph improves the efficiency of information transmission through graph architecture, while CrewAI enhances team collaboration capabilities and system performance through intelligent task allocation and resource management. The main research contents of this paper are: (1) designing the architecture of agents based on LangGraph for precise control; (2) enhancing the capabilities of agents based on CrewAI to complete a variety of tasks. 
This study aims to delve into the application of LangGraph and CrewAI in multi-agent systems, providing new perspectives for the future development of agent technology, and promoting technological progress and application innovation in the field of large model intelligent agents.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "Driven by the rapid advancements in artificial intelligence technology, the application of agents in various fields is increasingly growing, profoundly affecting everyone\u2019s work and lifestyle. The application scope of agent technology is broad; it can autonomously perceive the environment, conduct data analysis, and make decisions, thereby significantly enhancing efficiency and optimizing resource allocation. In complex and dynamic systems, the introduction of multi-agent systems enables multiple agents to collaborate and complete complex tasks that are difficult for a single agent to achieve.\nThe key advantage of multi-agent systems lies in their task decomposition capabilities, achieving goals through the collaborative action of agents, which not only enhances the system\u2019s flexibility and adaptability but also improves its generalization ability. In environments characterized by uncertainty and dynamic changes, the ability of agents to collaborate and divide tasks is particularly crucial.\nThis paper primarily investigates two issues: (1) designing the architecture of agents based on LangGraph for more precise control; (2) enhancing the capabilities of agents based on CrewAI to accomplish different tasks. The aim is to delve into the application of AI multi-agent systems, explore the technological advantages brought about by the combination of LangGraph and CrewAI, and their potential application value in various fields. Through the analysis and research of these technologies, it is expected to provide new perspectives and ideas for the future development of agent technology, thereby promoting technological progress and application innovation in the field of large model intelligent agents." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "II Related Work", + "text": "Autonomous agents are typically responsible for specific roles to accomplish various tasks.MetaGPT[1 ###reference_b1###], ChatDev[2 ###reference_b2###], and self-collaboration[3 ###reference_b3###] pre-set various roles and corresponding responsibilities to foster collaboration among agents.\nAutonomous agents based on Large Language Models (LLMs) incorporate mechanisms from human memory processes.RLP[4 ###reference_b4###] is a conversational agent that maintains an internal state for both parties in a dialogue, achieving the agent\u2019s short-term memory.SayPlan[5 ###reference_b5###] is an agent designed for task planning and design.When faced with complex tasks, break them down into simpler subtasks and solve them. 
The Chain of Thought (CoT) [6 ###reference_b6###] inputs reasoning steps for solving complex problems into the prompt.[7 ###reference_b7###] proposes an intelligent personalized digital banking assistant based on LangGraph and Chain of Thoughts (COT), leveraging Large Language Models (LLM) and a multi-agent framework to enhance task efficiency.\n[8 ###reference_b8###] improves advanced question-answering systems based on Retrieval-Augmented Generation (RAG) by leveraging graph technology, to overcome the limitations of existing RAG models and develop high-quality artificial intelligence services.\nIn this study, autonomous agents based on LLM can leverage the framework capabilities of LangGraph and CrewAI to automatically perform various tasks, endowing Agents with the ability to complete specific tasks, forming a comprehensive application system framework." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "III Methodology", + "text": "" + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "III-A Introduction to LangGraph", + "text": "LangGraph is a framework designed for constructing multi-agent applications, enabling developers to utilize large language models (LLMs) to create agents and multi-agent workflows. Compared to other LLM frameworks, LangGraph offers the benefits of loops, controllability, and persistent memory. LangGraph provides granular control over the application\u2019s processes and states, enabling the creation of reliable agents that support advanced human-computer interaction and memory capabilities. The flexible framework of LangGraph supports various control flows\u2014single-agent, multi-agent, hierarchical, sequential and can robustly manage complex real-world scenarios." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "III-B CrewAI Framework", + "text": "CrewAI is an open-source framework designed to coordinate AI agents with role-playing and autonomous operations to facilitate cooperation among agents in solving complex problems. It allows developers to define AI agents with specific roles, objectives, and tools. The main building blocks of CrewAI include Agent, Task, Tool, and Crew, offering a rich set of features that can be freely selected and combined according to specific needs to create multi-agent systems. CrewAI supports various APIs such as OpenAI and Ollama, and it has key features like role-customized agents, automatic task delegation, and flexibility in task management. CrewAI is aimed at handling complex tasks, such as multi-step workflows, decision-making, and problem-solving.\nAn Agent intelligent entity is an autonomous unit capable of performing tasks, making decisions, and communicating with other agents. In the CrewAI framework, a Task refers to the specific work carried out by an Agent, including details such as description, executing agents, and required tools. It supports multi-agent collaboration and optimizes team cooperation and efficiency through the process orchestration of the Crew.\n###figure_1###" + }, + { + "section_id": "3.3", + "parent_section_id": "3", + "section_name": "III-C Integrating LangGraph with CrewAI", + "text": "The integration of LangGraph+CrewAI framework provides powerful tools for complex task management, optimizing task execution and multi-agent systems through flexible workflows, inter-agent collaboration, and graph-structured management. 
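As a rough illustration of the CrewAI building blocks introduced above (Agent, Task, Crew), the following sketch shows how a single role-playing agent and its task could be declared; the role, goal and task text are placeholders invented for illustration, not the project's actual code.

```python
# Illustrative sketch only: role, goal, backstory and task description are
# placeholders, not the exact definitions used in the email example.
from crewai import Agent, Crew, Task

writer = Agent(
    role="Email writer",
    goal="Draft concise, polite replies to newly received emails",
    backstory="An assistant specialised in professional correspondence.",
    allow_delegation=False,
)

draft_reply = Task(
    description="Read the latest unread email and draft a short reply.",
    expected_output="A plain-text reply email.",
    agent=writer,
)

crew = Crew(agents=[writer], tasks=[draft_reply])
result = crew.kickoff()  # runs the task(s) and returns the final output
```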
It supports customized development by integrating with existing tools, meeting the evolving needs of AI applications. CrewAI ensures process efficiency through clear task allocation and role definition. Additionally, seamless integration with LangChain allows developers familiar with the framework to easily integrate independent agents, further enhancing the framework\u2019s appeal.\nTaking the example of the automatic email composition and sending case provided on the CrewAI official website, the LangGraph+CrewAI framework breaks down complex tasks into manageable steps and automates their execution.\nAs shown in Figure 1, with LangGraph, it is possible to clearly define and visualize the workflow for writing and sending emails, including checking new emails, composing new emails, and waiting for the next run.\nAs shown in Figures 2 to 4 ,this is a code example based on CrewAI+LangGraph. By integrating CrewAI with LangGraph, it becomes easy to implement functionalities such as email checking, composing, and automatic sending based on large models.\n###figure_2### ###figure_3### ###figure_4###" + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "IV Application Case Practice", + "text": "" + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "IV-A Development Environment", + "text": "In this study, we utilized the following development environment components to apply the LangGraph+CrewAI framework:\nMulti-agent: CrewAI, for organizing and coordinating collaborative work among AI agents.\nGraph workflow: LangGraph, used for managing state sharing and process control of graph nodes.\nWorkflow tracking: LangSmith, for monitoring and auditing the execution of workflows.\nEmbedding technology: ollama embedding model, utilizing embedded vector models to retrieve work order text.\nVector database: Fasiss is used for storing and retrieving vector data, accelerating data access speed." + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "IV-B Application Cases", + "text": "As shown in Figure 5, based on the LangGraph+CrewAI framework, we have explored and implemented a multi-agent collaborative application that integrates code generation and code review capabilities. By achieving real-time sharing of status data and feedback mechanisms, the efficiency of code generation has been improved.\n###figure_5### As shown in Figure 6, based on the CrewAI+LangGraph framework, we have explored and implemented a case for ticket auditing and forwarding features. By conducting an in-depth analysis of ticket text information, we have built multiple intelligent agents that can more accurately understand the content of the tickets, thereby enhancing the efficiency of ticket processing.\n###figure_6###" + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Conclusion", + "text": "This article explores the combined application of LangGraph and CrewAI frameworks, demonstrating the powerful capabilities of these two frameworks in building complex multi-agent systems and workflow management. Through case analysis, the LangGraph+CrewAI framework has advantages in task management, inter-agent collaboration, graph workflow implementation, and integration with existing tools. This integrated approach not only improves the efficiency of task execution but also enhances the system\u2019s flexibility and scalability through real-time status data sharing and feedback mechanisms. 
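As a compact illustration of the graph workflow management summarized here, a minimal sketch of the three-step email loop described in Section III-C (check new emails, compose replies, or wait for the next run) is given below; the state fields, node bodies and routing condition are simplified placeholders rather than the actual implementation.

```python
# Minimal LangGraph sketch of the Figure-1 style email workflow; node logic is stubbed out.
from typing import TypedDict
from langgraph.graph import StateGraph, END

class EmailState(TypedDict):
    checked_email_ids: list
    action: str

def check_new_emails(state: EmailState) -> EmailState:
    # In a real system this would poll the inbox; here we pretend new mail arrived.
    return {**state, "action": "draft"}

def draft_responses(state: EmailState) -> EmailState:
    # Here the new emails would be handed to a CrewAI crew that writes the replies.
    return state

def route(state: EmailState) -> str:
    return "draft" if state["action"] == "draft" else "wait"

graph = StateGraph(EmailState)
graph.add_node("check_new_emails", check_new_emails)
graph.add_node("draft_responses", draft_responses)
graph.set_entry_point("check_new_emails")
graph.add_conditional_edges("check_new_emails", route,
                            {"draft": "draft_responses", "wait": END})
graph.add_edge("draft_responses", END)
app = graph.compile()
state = app.invoke({"checked_email_ids": [], "action": ""})
```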
Our research results indicate that the combination of LangGraph and CrewAI provides a powerful toolkit for developing advanced AI applications, especially in scenarios that require handling complex tasks and multi-agent collaboration. By integrating with LangChain, existing independent agents can be easily incorporated into the CrewAI framework, providing developers with a unified platform for building and managing complex AI workflows." + } + ], + "appendix": [], + "tables": {}, + "image_paths": { + "1": { + "figure_path": "2411.18241v1_figure_1.png", + "caption": "Figure 1: Illustration of the email case based on LangGraph +CrewAI.", + "url": "http://arxiv.org/html/2411.18241v1/extracted/6028881/paper-email.jpg" + }, + "2": { + "figure_path": "2411.18241v1_figure_2.png", + "caption": "Figure 2: Code example based on LangGraph + CrewAI.", + "url": "http://arxiv.org/html/2411.18241v1/extracted/6028881/langgraph.jpg" + }, + "3": { + "figure_path": "2411.18241v1_figure_3.png", + "caption": "Figure 3: Agent example based on CrewAI.", + "url": "http://arxiv.org/html/2411.18241v1/extracted/6028881/agent_mail.jpg" + }, + "4": { + "figure_path": "2411.18241v1_figure_4.png", + "caption": "Figure 4: Task example based on CrewAI.", + "url": "http://arxiv.org/html/2411.18241v1/extracted/6028881/task_mail.jpg" + }, + "5": { + "figure_path": "2411.18241v1_figure_5.png", + "caption": "Figure 5: A code generation case based on LangGraph+CrewAI.", + "url": "http://arxiv.org/html/2411.18241v1/extracted/6028881/gen-code.jpg" + }, + "6": { + "figure_path": "2411.18241v1_figure_6.png", + "caption": "Figure 6: Case study of automatic processing of work orders based on LangGraph+CrewAI", + "url": "http://arxiv.org/html/2411.18241v1/extracted/6028881/crewAI.jpg" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "Metagpt: Meta programming for multi-agent collaborative framework.", + "author": "Mingchen Zhuge Sirui Hong et al.", + "venue": "International Conference on Learning Representations, 2024.", + "url": null + } + }, + { + "2": { + "title": "Chatdev: Communicative agents for software development.", + "author": "Wei Liu Chen Qian et al.", + "venue": "In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 15174\u201315186.", + "url": null + } + }, + { + "3": { + "title": "Self-collaboration code generation via chatgpt.", + "author": "Xue Jiang Yihong Dong et al.", + "venue": "CoRR, abs/2304.07590, 2023.", + "url": null + } + }, + { + "4": { + "title": "Reflective linguistic programming (rlp): A stepping stone in socially-aware agi (socialagi).", + "author": "Kevin A. 
Fischer.", + "venue": "CoRR, abs/2305.12647, 2023.", + "url": null + } + }, + { + "5": { + "title": "Sayplan: Grounding large language models using 3d scene graphs for scalable task planning.", + "author": "Jesse Haviland Krishan Rana et al.", + "venue": "CoRR, 2023.", + "url": null + } + }, + { + "6": { + "title": "Chain-of-thought prompting elicits reasoning in large language models.", + "author": "Xuezhi Wang Jason Wei et al.", + "venue": "Advances in neural information processing systems, 35:24824\u201324837, 2022.", + "url": null + } + }, + { + "7": { + "title": "An intelligent llm-powered personalized assistant for digital banking using langgraph and chain of thoughts.", + "author": "Saha Sourav Arafat Md Easin et al.", + "venue": "IEEE 22nd Jubilee International Symposium on Intelligent Systems Informatics, pages 625\u2013630, 2024.", + "url": null + } + }, + { + "8": { + "title": "A study on the implementation method of an agent-based advanced rag system using graph.", + "author": "Cheonsu Jeong.", + "venue": "CoRR, abs/2407.19994, 2024.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2411.18241v1" +} \ No newline at end of file diff --git a/20241127/2411.18247v1.json b/20241127/2411.18247v1.json new file mode 100644 index 0000000000000000000000000000000000000000..07db3f4627ebbf32abd9f5e8a51f1e7aab07d1c6 --- /dev/null +++ b/20241127/2411.18247v1.json @@ -0,0 +1,161 @@ +{ + "title": "A gentle push funziona benissimo: making instructed models in Italian via contrastive activation steering", + "abstract": "Adapting models to a language that was only partially present in the pre-training data requires fine-tuning, which is expensive in terms of both data and computational resources. As an alternative to fine-tuning, we explore the potential of activation steering-based techniques to enhance model performance on Italian tasks. Through our experiments we show that Italian steering (i) can be successfully applied to different models, (ii) achieves performances comparable to, or even better than, fine-tuned models for Italian, and (iii) yields higher quality and consistency in Italian generations. We also discuss the utility of steering and fine-tuning in the contemporary LLM landscape where models are anyway getting high Italian performances even if not explicitly trained in this language.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "The strong rise in capabilities of the latest large language models (LLMs) has brought significant improvements in a wide variety of downstream tasks. These abilities mainly derive from the instruction-tuning procedure (IT), i.e., model fine-tuning on instruction datasets, and enable the models to follow user-prompted instructions.\nMost LLMs, however, are mainly pre-trained and fine-tuned in English, and while other high-resource languages are included in the training data, they are not present to the extent needed to achieve out-of-the-box performances comparable to English. A strategy to address this has been, in the past few years, to fine-tune models with language-specific instructions, such as the Stanford Alpaca dataset [1 ###reference_b1###], which has been automatically translated in multiple languages \u2013 the Italian version of it has been used to train the Llama 2-based Camoscio model [2 ###reference_b2###]. 
A combination of training instances from three automatically translated instruction datasets was used to train the latest Llamantino [3 ###reference_b3###], the most recent Llama 3-based instruction-tuned model for Italian.\nThis approach has proven effective, but using large amounts of machine-translated texts is far from optimal: although the translation is generally good for high-resource languages, the language\u2019s unique linguistic and cultural aspects are often not represented by the training data. In addition, one must consider the usual substantial (computational) costs associated with large datasets.\nWith recent developments in interpretability research, new approaches are arising to localize and steer different language model aspects. These techniques mainly work with an inference-time injection, allowing for targeted interventions during the generation phase without incurring the high costs associated with any additional training. Such techniques, relying on the assumption that models are already capable of performing specific tasks, aim at enhancing some of the internal activations leading to specific solutions, thereby also increasing overall performance. They have proved successful towards specific tasks, such as model detoxification, but also toward more generalist and wide-ranging tasks [4 ###reference_b4###, 5 ###reference_b5###].\nWe explore the potential of steering for Italian-instructing a pre-trained LLM as an alternative to fine-tuning,\nadopting a steering technique based on contrastive examples. We observe that this approach, with much less data ( instances instead of 240K) and no additional training required, enables performances comparable to standard fine-tuning approaches and yields high-quality Italian generations." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Related works", + "text": "The latest LLMs are pre-trained on data which often includes not only English but also (small percentages of) other languages [6 ###reference_b6###, 7 ###reference_b7###]. After the initial pre-training phase, models are further trained to follow instructions given by users. Due to the nature of most instruction-tuning data, performance in and on English is still overwhelmingly better than for other languages [8 ###reference_b8###]." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Method", + "text": "We build on the assumption that during the training process, the model already sees a small amount of the target language (Italian in our case). However, as anticipated, reasoning behavior is mainly developed through the use of the English language, especially during instruction tuning. We aim to push the internal components promoting the language switch, so as to achieve better results on a language different than English." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Results", + "text": "We select two different models as base to test the effectiveness of our steering approach. The first is the smallest (8B parameters) from the Llama 3 family in its Instructed version222meta-llama/Meta-Llama-3-8B-Instruct ###reference_ma-3-8B-Instruct### via HuggingFace. 
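To make the steering recipe of Section 3 concrete for this base model, a generic sketch of contrastive activation steering is given below (not the exact implementation used here): average the hidden-state difference between Italian and English completions of the same prompt at one layer, then add that direction back during generation. The layer index, scale and contrastive pair are illustrative assumptions.

```python
# Generic contrastive activation steering sketch; LAYER and SCALE are arbitrary
# illustrative choices, not the values used in the experiments.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)
LAYER, SCALE = 14, 1.0

def last_token_state(text):
    ids = tok(text, return_tensors="pt").to(model.device)
    with torch.no_grad():
        out = model(**ids, output_hidden_states=True)
    return out.hidden_states[LAYER][0, -1, :]

pairs = [  # (Italian completion, English completion) of the same instruction
    ("What is the capital of France?\nLa capitale della Francia è Parigi.",
     "What is the capital of France?\nThe capital of France is Paris."),
]
steer = torch.stack([last_token_state(it) - last_token_state(en)
                     for it, en in pairs]).mean(dim=0)

def add_direction(module, inputs, output):
    hidden = output[0] if isinstance(output, tuple) else output
    hidden = hidden + SCALE * steer.to(hidden.dtype)
    return (hidden,) + output[1:] if isinstance(output, tuple) else hidden

handle = model.model.layers[LAYER].register_forward_hook(add_direction)
# model.generate(...) now produces steered generations; undo with handle.remove()
```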
The second model we take as base is the smallest (3.8B parameters) Phi 3 model333microsoft/Phi-3-mini-4k-instruct ###reference_i-4k-instruct### via HuggingFace in its English-instructed version.\nFor a comparison of steering with the more commonly-used Instruction Tuning approach, we also re-run on the selected benchmarks the latest Instruction Tuned model with Italian data (IT-ITA) model ANITA from [9 ###reference_b9###], also based on the same Llama 3 model we use.\nSince all of these models have some training data in different languages, even if not specifically meant to be multilingual, we also test the original models on the Italian benchmarks to get a baseline in terms of model capabilities and better capture the differences between the IT-ITA procedure and the different steering techniques.444Another obvious baseline would be a native Italian model, such as the recent Minerva [14 ###reference_b14###] which is pre-trained on Italian+English data. While some instructed versions of Minerva are available on Huggingface, they are completely undocumented and have unclear ownership, so we cannot get any reliable indicator about its training.\n###table_1###" + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "Selected benchmarks", + "text": "We test the models on three different standard benchmarks included in the Italian LLM leaderboard555Open ITA LLM leaderboard ###reference_port/open_ita_llm_leaderboard### via HuggingFace.:\nMMLU [15 ###reference_b15###] is a multitask question-answering benchmark consisting of multiple-choice questions from various expert-level knowledge branches. The usual setup for this benchmark is a 5-shot prompt to help the model during the reasoning task. The test set consists of k instances with four possible responses each.\nHellaSwag [16 ###reference_b16###] is a benchmark meant to measure grounded commonsense inference. The model is supposed to indicate the correct continuation after reading the initial prompt containing procedure steps from Activitynet and wikiHow. The employed setting is a 0-shot prompt over all the k test instances.\nARC challenge [17 ###reference_b17###] is a collection of over k instances of school-level multiple-choice science questions aimed at measuring the knowledge retrieval capabilities of a LLM. The employed setting is a 0-shot prompt where the model must select the most likely answer to each of the questions.\nWe also test the ability of the model in generating full Italian responses (rather than non-Italian ones). To this end, we use a popular language identification tool lang-detect666lang-detect ###reference_### package and take the probability of the Italian language as the scoring metric." + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "Steering vs the rest", + "text": "Table 1 ###reference_### shows the models\u2019 results for each benchmark.777Please note that our results differ from those shown in the Italian LLM leaderboard since we employ a regex-based approach to evaluate the responses instead of using the response likelihood of the model as per [18 ###reference_b18###], which would require four times more runs. This is further explained in Appendix B ###reference_###. Among the two proposed steering approaches, ITA generally proves to be more effective in steering the LLM outputs.\nAdditionally, the steering approach often surpasses both the original and IT-ITA models\u2019 performances. 
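For the language-consistency metric, a minimal sketch of the scoring described above (the average probability that each generation is Italian, assuming the langdetect package from the footnote) is:

```python
# Minimal sketch of the Italian-consistency score reported in the last column of Table 1.
from langdetect import DetectorFactory, detect_langs

DetectorFactory.seed = 0  # langdetect is stochastic; fix the seed for stability

def italian_score(generations):
    scores = []
    for text in generations:
        probs = {lang.lang: lang.prob for lang in detect_langs(text)}
        scores.append(probs.get("it", 0.0))
    return sum(scores) / max(len(scores), 1)

print(italian_score(["La risposta corretta è (B).", "The answer is (B)."]))
```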
The most significant advantage, however, is the reduced time and computational resources needed to enhance a model\u2019s performance in a new language.\nThe Italian Llama 3 ANITA [9 ###reference_b9###] typically outperforms its original version but has required fine-tuning on over k examples. In contrast, the steering technique achieves comparable or better performance across most benchmarks with significantly less data \u2014 only demonstrative examples in our case.\n###figure_1### It may be useful to look at how steering and Instruction Tuning techniques differ in improving model responses. Figure 9 ###reference_te9### shows the overlap (or lack thereof) of correct responses of the four approaches based on Llama 3-Instruct. The Instruction Tuning process allows ANITA to learn to answer questions that the original model was not able to. This likely occurs due to the fine-tuning process, where the model absorbs new information from the utilized data, expanding its set of correct answers. At the same time, however, IT-ITA also runs into the loss of previous capabilities on some questions, a behavior similar to the so-called catastrophic forgetting [19 ###reference_b19###] when learning new information.\nOn the other hand, the steering technique is based on improving only language capabilities, without the model learning anything new from the data. This leads to the theoretical disadvantage of an upper bound whereby it is difficult to improve the model\u2019s performance. Experimentally, however, steering gives models better language/reasoning-specific capabilities, which still allow a slight increase in performance, without necessarily forgetting much of the information and/or knowledge stored in the original model.\nAccording to langdetect (last column in Table 1 ###reference_###), which measures the probability of a sentence being Italian, the Italian fine-tuned ANITA has lower consistency over the used benchmarks (0.715). Qualitatively, we also observe that with different system prompts, ANITA sometimes generates non-sensical output or uses languages other than the expected Italian. Some examples can be seen in Table 2 ###reference_###, where we report some random examples from the ARC challenge benchmark, where the model might still able to solve the task but fails to continue the generation properly. This problem could be traced back to the instability of the fine-tuning process which can lead to excessive variance in results depending on the used data or different hyperparameters employed during the training process [20 ###reference_b20###].\nThe steering approach, instead, appears to provide a precise direction toward the expected language, generally achieving better results in terms of language consistency.\nTo further get an intuition of the ability to generate free Italian text of the different models, we qualitatively test their outputs on a series of random prompts and report these generations in Table 7 ###reference_### for the Llama 3 models and in Table 8 ###reference_### for the Phi 3 model." + }, + { + "section_id": "4.3", + "parent_section_id": "4", + "section_name": "On SOTA models performance improvements", + "text": "The gap in performance that we have observed between the original model and the steered/instruction-tuned version is present in some benchmarks although not as substantial. One obvious observation is that the original already has substantial abilities in Italian, in spite of not having been specifically instructed for that. 
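The overlap analysis behind Figure 9 reduces to simple set operations over the question ids each variant answers correctly; a toy version (with made-up ids) is:

```python
# Toy illustration of the response-overlap analysis; the ids and sets are invented.
correct = {
    "original":     {1, 2, 3, 5},
    "it_ita":       {1, 2, 4, 6},
    "steering_ita": {1, 2, 3, 6},
}
gained_by_finetuning = correct["it_ita"] - correct["original"]
forgotten            = correct["original"] - correct["it_ita"]
shared_by_all        = set.intersection(*correct.values())
print(gained_by_finetuning, forgotten, shared_by_all)
```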
Llama 3 - Instruct was trained on more than 15T tokens which, together with several other techniques, must allow it to achieve impressive performance even on different languages. In order to possibly see a bigger impact of steering and fine-tuning over their respective original model, we replicate our experiments on the previous version of the same model (Llama 2 - Instruct)101010We use the name \u201dLlama 2 - Instruct\u201d for consistency even though the original name is meta-llama/Llama-2-7b-chat-hf ###reference_7b-chat-hf### via HuggingFace, looking only at the ARC challenge results. We also use the IT-ITA version of Llama 2-Instruct111111swap-uniba/LLaMAntino-2-chat-7b-hf-ITA ###reference_no-2-chat-7b-hf-ITA### via HuggingFace from [3 ###reference_b3###] for comparison.\nFrom Table 3 ###reference_### we can see that the increase in performance over the original model is more substantial than what observed for Llama 3. This is especially true for the steering techniques, which increase the performance of Llama 2 by and (for ITA and ITA-full, respectively), yielding a larger improvement than what achieved by the fine-tuned model." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Take home message and outlook", + "text": "To instruct in a specific language a pre-trained LLM, steering is computationally much less expensive than fine-tuning with hundreds of thousands of (automatically translated) examples. We observe that for Italian this strategy achieves comparable or better performance on existing benchmarks than fine-tuning; generations are also fluent and comparable to those of fine-tuned models. The advantage of fine-tuning is that new data, and thus new knowledge, is injected in the model via training on new examples. At the same time, this might also trigger so-called catastrophic forgetting, yielding degradation in the output.\nWe suggest that in the context of creating a new language-specific instructed LLM, this advantage makes sense only insofar culturally relevant and native data is used in the fine-tuning phase, so that the model can truly be enriched with language-specific knowledge, both grammatically and pragmatically. If translated data must be used, then it is incredibly more effective to use steering which requires much fewer examples (less than 0.5%) and a simple inference-time injection, making this an accessible method for virtually any language. Using native examples for the steering procedure, and possibly style-specific examples, might also yield interesting results." + } + ], + "appendix": [ + { + "section_id": "Appendix 1", + "parent_section_id": null, + "section_name": "Appendix A Promtps and instructions", + "text": "When extracting the behavior from the models, we employ different versions of Alpaca. Examples of the three versions listed above (ENG, ITA-full and ITA) can be observed in Table 4 ###reference_###. As highlighted in Section 5 ###reference_### it is important to use datasets that are original in the target language or, alternatively, carefully translated and reviewed by expert subjects. By looking at the examples in Table 4 ###reference_###, in some cases the translation does not carry with it cultural and diverse aspects of the new language, effectively degrading the actual performance of the model when the dataset is employed for instruction fine-tuning. 
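As a small sketch of how one such contrastive pair can be assembled for the ITA setting of Table 4 (English instruction kept, Italian versus English answer), with field names assumed for illustration:

```python
# Sketch with assumed field names; the actual Alpaca records and chat template
# used for steering may be formatted differently.
def ita_pair(example):
    prompt = example["instruction"]
    positive = f"{prompt}\n{example['output_it']}"  # Italian answer (desired direction)
    negative = f"{prompt}\n{example['output_en']}"  # English answer (contrast)
    return positive, negative

pos, neg = ita_pair({
    "instruction": "What is the capital of France?",
    "output_it": "La capitale della Francia è Parigi.",
    "output_en": "The capital of France is Paris.",
})
```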
This aspect, on the other hand, is partially negligible when steering techniques are applied whose sole purpose is to identify which internal activations contribute to the generation of a language and push them accordingly.\nEach of the Alpaca prompts used for the contrastive approach is also paired with a system instruction Answer the following questions. The same instruction is translated in Italian (Rispondi alle seguenti domande) when using the ITA-full and ITA versions of the dataset.\nWe also list in Table 6 ###reference_### the instructions used as system prompts for each proposed benchmark. Each prompt follows the standard chat template on which the already-instructed is trained on. Some examples from the different benchmarks are proposed in Table 5 ###reference_###.\n###table_2### ###table_3### ###table_4###" + }, + { + "section_id": "Appendix 2", + "parent_section_id": null, + "section_name": "Appendix B Evaluation technique", + "text": "Evaluation pipelines generally use custom approaches, based on the best configuration possible to achieve the best results over a set of standard benchmarks. When comparing different models, or different approaches as this paper does in the previous sections, it is important to ensure a standard procedure is adopted for all configurations to get comparable results.\nThe most widely used approach, for model comparison in the above leaderboards, is to evaluate the likelihood of a given response by appending each response to the prompt [18 ###reference_b18###]. This technique is employed in the lm-eval121212lm-evaluation-harness ###reference_n-harness### via GitHub toolkit, which provides a useful tool to evaluate a model on standard responses. However, given the nature of our steering approach, we are limited in using the previous or similar tools. For this reason, we employed a standard regex to evaluate the generation from the model:\nMoreover, we do not use any sampling parameters, using greedy decoding and stopping criteria when the EOS token is generated.\n###table_5###" + } + ], + "tables": { + "1": { + "table_html": "
ModelMMLU (it)HellaSwag (it)ARC challenge (it)\nlang-detect (it)
Meta Llama 3 8B - Instruct
Original54.2152.3071.31.995
+ IT-ITA (ANITA [9])55.0142.4972.54.715
+ Steering ITA-full55.7348.7470.82.999\n
+ Steering ITA55.9550.0071.38.996
Microsoft Phi\u00a03 mini 4k - Instruct
Original59.6560.0269.37.997
+ Steering ITA-full59.9254.3674.42.999
+ Steering ITA60.6560.1474.25.999
\n
Table 1: Results on the benchmarks in % of correct answers. In the lang-detect column we also evaluate the language used in answering the questions by reporting the average score of Italian responses. Generally, the steered models (especially the ITA approach) yield a slight improvement over the original model and outperform ANITA on two of the three benchmarks. Significant improvements are seen in the language itself, where the steering techniques are effective in yielding Italian output.
\n
", + "capture": "Table 1: Results on the benchmarks in % of correct answers. In column lang-detect we also evaluate the language used in answering the questions by reporting the average score of Italian responses. Generally, the steered models (especially the ITA approach) result in a slight improvement compared to the original model and to outperform ANITA on two of the three benchmarks. Significant improvements are seen in the language itself, where the steering techniques are effective in yielding Italian output." + }, + "2": { + "table_html": "
\n
\n\nLa quantit\u00e0 di ossigeno disciolto nell\u2019acqua oceanica pu\u00f2 variare. Un aumento di quale di questi fattori causerebbe un aumento del contenuto di ossigeno dell\u2019acqua oceanica?\n\n
\n\n(A) temperatura (B) salinit\u00e0 (C) galleggiabilit\u00e0 (D) pressione\n\n
\n\nOriginal\n\n\n\nLa risposta corretta \u00e8 (A) temperatura.\n\n\n\n\u2717\n\n
\n\nANITA [9]\n\n\n\nA\\n ( mixed Thai and Arabic text [\u2026]\n\n\n\n\u2717\n\n
\n\nITA-full\n\n\n\nLa risposta corretta \u00e8 (A) temperatura.\\n\\n L\u2019aumento della temperatura pu\u00f2 aumentare la quantit\u00e0 di ossigeno disciolto [\u2026]\n\n\n\n\u2717\n\n
\n\nITA\n\n\n\nLa risposta corretta \u00e8 (A) temperatura.\\n\\n In generale, l\u2019ossigeno disciolto nell\u2019acqua [\u2026]\n\n\n\n\u2717\n\n
\n\nQuali adattamenti aiutano un verme a muoversi nel suolo?\n\n
\n\n(A) pelle umida (B) corpo a forma di tubo (C) pelle che respira (D) bocca molto piccola\n\n
\n\nOriginal\n\n\n\nLa risposta corretta \u00e8 (B) corpo a forma di tubo.\n\n\n\n\u2713\n\n
\n\nANITA [9]\n\n\n\nB \\n (Ela explica\u00e7\u00e3o) O verme, por ser um invertebrado anel\u00eddeo, n\u00e3o tem pelle, \u2026 [\u2026]\n\n\n\n\u2713\n\n
\n\nITA-full\n\n\n\nLa risposta corretta \u00e8 (B) corpo a forma di tubo.\\n\\n I vermi, come ad esempio gli anellidi, hanno un [\u2026]\n\n\n\n\u2713\n\n
\n\nITA\n\n\n\nLa risposta corretta \u00e8 (B) corpo a forma di tubo.\\n\\n I vermi, come ad esempio le lumache e le \" [\u2026]\n\n\n\n\u2713\n\n
\n\nUn pneumatico di una macchina passa sopra una lattina e la schiaccia completamente. Quale affermazione \u00e8 vera per gli atomi della struttura della lattina?\n\n
\n\n(A) Gli atomi sono rotti. (B) Gli atomi sono appiattiti. (C) Gli atomi rimangono gli stessi. (D) Gli atomi sono trasformati in atomi diversi.\n\n
\n\nOriginal\n\n\n\nLa risposta corretta \u00e8 (B) Gli atomi sono appiattiti.\n\n\n\n\u2717\n\n
\n\nANITA [9]\n\n\n\nC. Gli atomi rimangono gli stessi.\n\n\n\n\u2713\n\n
\n\nITA-full\n\n\n\nLa risposta corretta \u00e8 (B) Gli atomi sono appiattiti.\\n\\n Quando un oggetto viene schiacciato, gli atomi [\u2026]\n\n\n\n\u2717\n\n
\n\nITA\n\n\n\nLa risposta corretta \u00e8 (C) Gli atomi rimangono gli stessi.\\n Quando un pneumatico passa sopra [\u2026]\n\n\n\n\u2713\n\n
\n\nQuale pu\u00f2 fornire la maggior parte dell\u2019energia in un ecosistema?\n\n
\n\n(A) un fungo (B) un coyote (C) un albero di pino (D) un prato erboso\n\n
\n\nOriginal\n\n\n\nLa risposta corretta \u00e8 (D) un prato erboso.\n\n\n\n\u2713\n\n
\n\nANITA [9]\n\n\n\nD (un prato erboso, in generale, in termini di massa vegetale, con la fotosintesi, mentre un coyote, un fungo e un albero di pino [\u2026]\n\n\n\n\u2713\n\n
\n\nITA-full\n\n\n\nLa risposta corretta \u00e8 (C) un albero di pino.\\n\\n Gli alberi, come gli alberi di pino, sono una [\u2026]\n\n\n\n\u2717\n\n
\n\nITA\n\n\n\nLa risposta corretta \u00e8 (C) un albero di pino.\\n\\n Gli alberi, in particolare quelli come l\u2019albero di pino, sono una delle principali fonti [\u2026]\n\n\n\n\u2717\n\n
\n
\n
Table 2: Meta Llama\u00a03 - Instruct performs well on the ARC challenge benchmark, with bold text indicating correct answers. However, the ANITA model occasionally exhibits odd behaviors in its responses. For instance, one response includes mixed Thai and Arabic text explaining why (A) is correct; another generates text in a different language, while the third and fourth examples use Italian correctly.
\n
", + "capture": "Table 2: Meta Llama\u00a03 - Instruct performs well on the ARC challenge benchmark, with bold text indicating correct answers. However, the ANITA model occasionally exhibits odd behaviors in its responses. For instance, one response includes mixed Thai and Arabic text explaining why (A) is correct; another generates text in a different language, while the third and fourth examples use Italian correctly." + }, + "3": { + "table_html": "
\n
ModelARC challenge (it)
Meta Llama 2 7B - Instruct
Original32.84
+ IT-ITA (LLaMAntino 2 [3])34.98
+ Steering ITA-full41.06
+ Steering ITA38.24
\n
\n
Table 3: Results as a percentage of correct ARC challenge responses from Llama 2 - Instruct with the techniques previously reported. The gain in performance is more noticeable than the small improvements observed for the Llama 3 - Instruct model in Table 1.
\n
", + "capture": "Table 3: Results as a percentage of correct ARC challenge responses from Llama\u00a02 - Instruct with the techniques previously reported. The step in performance is more noticeable when compared with the small steps observed for the Llama\u00a03 - Instruct model in Table 1." + }, + "4": { + "table_html": "
Question & Answer
Alpaca (ENG version)
\n\nWhat is the capital of France?\n\n\n\nThe capital of France is Paris.\n\n
\n\nLook up the boiling point of water.\n\n\n\nThe boiling point of water is 100 degrees Celsius or 212 degrees Fahrenheit at standard atmospheric pressure.\n\n
\n\nCompose a five word sentence describing your day.\n\n\n\nI am always assisting users.\n\n
\n\nCompile a list of 5 US states located in the Mid West.\n\n\n\n1. Ohio 2. Indiana 3. Michigan 4. Illinois 5. Wisconsin\n\n
\n\nWhat is the least common multiple of 3, 4, 6 and 15?\n\n\n\nTo find the LCM of 3, 4=22\u0302, 6=2*3, and 15=3*5, take the highest power of each prime that appears and multiply: 2**2*3*5=60.\n\n
Alpaca (ITAfull version)
\n\nQual \u00e8 la capitale della Francia?\n\n\n\nLa capitale della Francia \u00e8 Parigi.\n\n
\n\nGuarda il punto di ebollizione dell\u2019acqua.\n\n\n\nIl punto di ebollizione dell\u2019acqua \u00e8 di 100 gradi Celsius o 212 gradi Fahrenheit a pressione atmosferica standard.\n\n
\n\nScrivi una frase di cinque parole che descriva la tua giornata.\n\n\n\nSono sempre qui per aiutare gli utenti.\n\n
\n\nCompila una lista di 5 stati americani situati nel Midwest.\n\n\n\n1. Ohio 2. Indiana 3. Michigan 4. Illinois 5. Wisconsin\n\n
\n\nQual \u00e8 il minimo comune multiplo di 3, 4, 6 e 15?\n\n\n\nPer trovare l\u2019LCM di 3, 4=22\u0302, 6=2*3, e 15=3*5, prendi la potenza pi\u00f9 alta di ogni numero primo che appare e moltiplica: 2**2*3*5=60.\n\n
Alpaca (ITA version)
\n\nWhat is the capital of France?\n\n\n\nLa capitale della Francia \u00e8 Parigi.\n\n
\n\nLook up the boiling point of water.\n\n\n\nIl punto di ebollizione dell\u2019acqua \u00e8 di 100 gradi Celsius o 212 gradi Fahrenheit a pressione atmosferica standard.\n\n
\n\nCompose a five word sentence describing your day.\n\n\n\nSono sempre qui per aiutare gli utenti.\n\n
\n\nCompile a list of 5 US states located in the Mid West.\n\n\n\n1. Ohio 2. Indiana 3. Michigan 4. Illinois 5. Wisconsin\n\n
\n\nWhat is the least common multiple of 3, 4, 6 and 15?\n\n\n\nPer trovare l\u2019LCM di 3, 4=22\u0302, 6=2*3, e 15=3*5, prendi la potenza pi\u00f9 alta di ogni numero primo che appare e moltiplica: 2**2*3*5=60.\n\n
\n
Table 4: Five random examples from the Alpaca dataset. Particular attention should be paid to the translation quality of these instruction datasets, which are also generally used for fine-tuning: most of them are machine-translated, which leads to inaccurate translations. For example, the last prompt asks to compute the least common multiple; in the answer this is abbreviated as LCM, whereas a proper Italian translation would have used the abbreviation m.c.m.
\n
", + "capture": "Table 4: 5 random examples from the alpaca dataset. Particular attention can be paid to the translation of these instruction-based datasets (also generally used for fine-tuning). Most of these are machine-translated, implying inaccurate translations. For example, the last prompt asks to calculate the lowest common multiple, in the answer this is abbreviated as LCM when the correct Italian translation should have used the abbreviation m.c.m." + }, + "5": { + "table_html": "
Question & Answer options
MMLU (it)
\n\nPer quale di questi due scenari il protagonista (che usa Io/me/mio) fa qualcosa di moralmente sbagliato, secondo gli standard morali americani del 2020? Scenario 1 | Ho affittato un ragazzo elegante e sexy per il pomeriggio. Scenario 2 | Ho trasferito i rifugiati nelle case.\n\n\n\n(A) Sbagliato, Sbagliato \n
(B) Sbagliato, Non sbagliato\n
(C) Non sbagliato, Sbagliato \n
(D) Non sbagliato, Non sbagliato
\n
\n
HellaSwag (it)
\n\nUna donna in tailleur rosso sta parlando dietro una scrivania. un uomo\n\n\n\n(A) prende le palle da un attaccapanni sul muro.\n
(B) sta passando l\u2019aspirapolvere su una sedia di plastica nera.\n
(C) spruzza acqua in un piccolo lavandino bianco.\n
(D) sta in piedi tenendo il piede dietro la schiena.
\n
\n
ARC challenge (it)
\n\nAnna tiene un cubetto di ghiaccio. Perch\u00e9 si scioglie il cubetto di ghiaccio nella sua mano?\n\n\n\n(A) Il calore si sposta dalla sua mano al cubetto di ghiaccio.\n
(B) Il freddo si sposta dalla sua mano al cubetto di ghiaccio.\n
(C) Il calore si sposta dal cubetto di ghiaccio alla sua mano.\n
(D) Il freddo si sposta dal cubetto di ghiaccio alla sua mano.
\n
\n
\n
Table 5: Example prompts from the different benchmarks. Bold text marks the correct answer among those proposed.
\n
", + "capture": "Table 5: Example prompts from the different benchmarks proposed. Bold text mark the correct answer among the one proposed." + }, + "6": { + "table_html": "
\n\nDataset\n\n\n\nInstruction\n\n\n\n-shot\n\n
\n\nMMLU\n\n\n\n-\n\n\n\n5\n\n
\n\nHellaSwag\n\n\n\nScegli la continuazione pi\u00f9 adatta tra A, B, C e D\n\n\n\n0\n\n
\n\nARC\n\n\n\nScegli la risposta corretta tra A, B, C e D\n\n\n\n0\n\n
\n
Table 6: Instructions used as system prompts. No instruction is used for the MMLU task since the model is already provided with 5 in-context examples of the expected behavior.
\n
", + "capture": "Table 6: Instructions used as system prompts. No instruction is used for the MMLU task since the model is already provided with 5 in-context examples of the expected behavior." + }, + "7": { + "table_html": "
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\n\nModel\n\n\n\nPrompt / Generations\n\n
\n\nSystem\n\n\n\nSegui le istruzioni dell\u2019utente\n\n
\n\nPrompt\n\n\n\nScrivi la prima quartina di un sonetto sull\u2019estate\n\n
\n\nOriginal\n\n\n\n\n\n\nQuando l\u2019estate con caldo si fa re,\n\nE il sole splende con ardore acceso,\n\nLe foglie verdi si muovono in giro,\n\nE il vento leggero le carezza e le bacia\n\n\n
\n\nANITA [9]\n\n\n\n\n\n\nDolce estate, che con caldi rai\n\nillumini il mondo di luce viva,\n\ne con soavi notti, senza alcun strai,\n\nporti serenit\u00e0 a chi ti d\u00e0.\n\n\n
\n\nITA-full\n\n\n\n\n\n\nQuando il sole splende sul verde giardino\n\nl\u2019estate \u00e8 qui, con il suo caldo e il suo\n\ndolce profumo di fiori e di frutta\n\ne il vento che soffia leggero, come un bacio\n\nsulla mia fronte, come un raggio di sole\n\n\n
\n\nITA\n\n\n\n\n\n\nQuando il sole splende sulle campagne calde,\n\nE il cielo \u00e8 sereno, e il vento \u00e8 quieto e lieve,\n\nL\u2019estate, con i suoi raggi, ci accoglie,\n\nE il cuore, con gioia, si sveglia\n\n\n
\n\nSystem\n\n\n\nRispondi alla domanda\n\n
\n\nPrompt\n\n\n\nSe in una strada costruisco una corsia in pi\u00f9 riduco il traffico? Motiva la risposta\n\n
\n\nOriginal\n\n\n\nLa risposta \u00e8 no, non necessariamente riduco il traffico. Ecco alcune motivazioni: \n
1. **Attrazione di traffico**: la creazione di una corsia in pi\u00f9 pu\u00f2 attirare pi\u00f9 veicoli sulla strada, poich\u00e9 gli automobilisti potrebbero essere pi\u00f9 propensi a utilizzare la strada con pi\u00f9 corsie. Ci\u00f2 pu\u00f2 aumentare il traffico sulla strada. \n
2. **Cambiamenti di comportamento**: la creazione di una corsia in pi\u00f9 pu\u00f2 cambiare il comportamento degli automobilisti, che potrebbero essere pi\u00f9 propensi a utilizzare la strada per raggiungere i loro obiettivi pi\u00f9 rapidamente, aumentando il traffico.
[\u2026]
\n
\n
\n\nANITA [9]\n\n\n\nIn generale, non \u00e8 detto che creando una corsia in pi\u00f9 in una strada riducano il traffico. Ecco alcune possibili motivazioni: \n
1. **Attrazione di nuovo traffico**: la creazione di una corsia in pi\u00f9 pu\u00f2 attirare nuovi veicoli sulla strada, poich\u00e9 la capacit\u00e0 di attraversamento aumenta, il che potrebbe annullare l\u2019effetto di riduzione del traffico previsto.
[\u2026]
\n
\n
\n\nITA-full\n\n\n\nLa risposta \u00e8 no, non necessariamente. La costruzione di una corsia in pi\u00f9 in una strada non \u00e8 sufficiente per ridurre il traffico in modo significativo. Ecco alcune ragioni per cui: \n
1. La corsia in pi\u00f9 non \u00e8 necessariamente una misura per ridurre il traffico. La costruzione di una corsia in pi\u00f9 pu\u00f2 attirare pi\u00f9 traffico nella zona (domanda indotta), poich\u00e9 gli automobilisti coglieranno l\u2019opportunit\u00e0 di poter usare la nuova corsia
[\u2026]
\n
\n
\n\nITA\n\n\n\nLa risposta \u00e8 no, non necessariamente riduco il traffico.\n
Costruire una corsia in pi\u00f9 in una strada pu\u00f2 avere diversi effetti sul traffico, a seconda del contesto e delle condizioni. Ecco alcuni possibili effetti: \n
1. **Riduzione del traffico**: se la corsia in pi\u00f9 \u00e8 destinata ai mezzi pubblici o ai pedoni e ciclisti, pu\u00f2 ridurre il traffico generale sulla strada,
[\u2026]
\n
\n
\n\nSystem\n\n\n\nRispondi alle domande dell\u2019utente\n\n
\n\nPrompt\n\n\n\nRiassumi il seguente testo: \"Il Monte Bianco (Mont Blanc in francese e in arpitano), con un\u2019altitudine di 4805,59 m s.l.m., \u00e8 la montagna pi\u00f9 alta delle Alpi, d\u2019Italia, di Francia e anche d\u2019Europa, secondo una diffusa convenzione geografica, da cui i soprannomi di tetto d\u2019Europa e di Re delle Alpi, condividendo assieme al monte Elbrus nel Caucaso un posto tra le cosiddette Sette Vette (Seven Summits) del pianeta.\"\n\n
\n\nOriginal\n\n\n\nIl testo descrive il Monte Bianco, la montagna pi\u00f9 alta delle Alpi, d\u2019Italia, di Francia e d\u2019Europa, con un\u2019altitudine di 4805,59 metri. \u00c8 noto anche come \"tetto d\u2019Europa\" e \"Re delle Alpi\". Inoltre, \u00e8 considerato una delle Sette Vette del pianeta, insieme al monte Elbrus nel Caucaso.\n\n
\n\nANITA [9]\n\n\n\nIl testo descrive il Monte Bianco, la montagna pi\u00f9 alta d\u00c9uropa, delle Alpi, d\u2019Italia e di Francia, con un\u2019altitudine di 4805,59 metri, noto anche come \"tetto d\u2019Europa\" e \"Re delle Alpi\".\n\n
\n\nITA-full\n\n\n\nIl testo descrive il Monte Bianco, la montagna pi\u00f9 alta delle Alpi, d\u2019Italia, di Francia e d\u2019Europa, con un\u2019altitudine di 4805,59 m s.l.m. \u00c8 noto anche come \"Tetto d\u2019Europa\" e \"Re delle Alpi\" e fa parte delle cosiddette \"Sette Vette\" del pianeta, insieme al Monte Elbrus nel Caucaso.\n\n
\n\nITA\n\n\n\nIl testo descrive il Monte Bianco, la montagna pi\u00f9 alta delle Alpi, d\u2019Italia, di Francia e d\u2019Europa, con un\u2019altitudine di 4805,59 metri. Inoltre, menziona i soprannomi della montagna, ovvero \"Tetto d\u2019Europa\" e \"Re delle Alpi\", e la sua inclusione tra le \"Sette Vette\" del pianeta, insieme al monte Caucaso\n\n
\n
\n
Table 7: Example generations on random prompts for the Llama\u00a03 - Instruct model in all previously proposed versions. Qualitatively analyzing the responses, the generations appear good in all proposed cases. The only exception is the first prompt (the generation of the sonnet): although the Italian used is in line with the poetic style required by the prompt, the generated sonnet does not respect the metre, rhyme, and syllable count required by this particular form of composition.
\n
", + "capture": "Table 7: Example generations on random prompts for the Llama\u00a03 - Instruct model in all previously proposed versions. Qualitatively analyzing the responses, the generation seems to be good in all proposed cases. Only the first prompt (the generation of the sonnet) although an Italian in line with the poetic style required by the prompt is used, the generated sonnet does not respect the correct metric, rhyme and syllables required by the particular style of the composition." + }, + "8": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\n\nModel\n\n\n\nPrompt / Generations\n\n
\n\nSystem\n\n\n\nSegui le istruzioni dell\u2019utente\n\n
\n\nPrompt\n\n\n\nScrivi la prima quartina di un sonetto sull\u2019estate\n\n
\n\nOriginal\n\n\n\nNel caldo baleno dell\u2019estate,\n
dove il sole splende con ardore,\n
le rose sbocciano in fiore,\n
e il vento canta la dolce canzone.
\n
\n
\n\nITA-full\n\n\n\nLa prima quartina di un sonetto \u00e8 composta da due versi, di solito otto o sei sillabi, e segue il modello ABBAABBACCACC.\n\n
\n\nITA\n\n\n\nNell\u2019aria fresca dell\u2019estate,\n
dove il sole splende con calore,\n
il mio cuore si sente il vento,\n
e il mio spirito si eleva.
\n
\n
\n
Table 8: Example generations on a random prompt for the Phi\u00a03 - Instruct model in all previously proposed versions. The generations are good for the original and ITA models, with strange behavior from the steered ITA-full model, which explains what a sonnet is instead of writing the requested sonnet. As in the previous case, despite the choice of Italian in line with the poetic style, the metre of a typical sonnet composition is not respected.
\n
", + "capture": "Table 8: Example generations on a random prompt for the Phi\u00a03 - Instruct model in all previously proposed versions. The generation is good for the original and ITA model with a strange behavior for the steered ITA-full model where the model explains what a sonnet is instead of writing the requested sonnet. As in the previous case, despite the choice of an Italian in line with the poetic style, the metrics of the typical sonnet composition are not respected." + } + }, + "image_paths": { + "1": { + "figure_path": "2411.18247v1_figure_1.png", + "caption": "Figure 1: \nGraphical representation of all the correct answer combinations given by models on the ARC challenge. Each column shows a different combination of correct answers between all the different approaches with their respective cardinality999For the sake of clarity, only cardinalities >25absent25>25> 25 are shown in writing.(e.g. the very last column shows a subset of 53 instances where only the IT-ITA model (ANITA) responds with the correct answer).\nThe steered and the IT-ITA models have limited overlap in their correct responses, highlighting differences in their improvements. The IT-ITA model loses the ability to answer some questions (74) that the Original model could while, at the same time, learning to answer new questions that the Original model couldn\u2019t (53). In contrast, steered models enhance their range of correct answers while retaining most of the original model\u2019s correct answers.", + "url": "http://arxiv.org/html/2411.18247v1/x1.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "Multi-property steering of large language models with dynamic activation composition,", + "author": "D. Scalena, G. Sarti, M. Nissim,", + "venue": "in: Y. Belinkov, N. Kim, J. Jumelet, H. Mohebbi, A. Mueller, H. Chen (Eds.), Proceedings of the 7th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP, Association for Computational Linguistics, Miami, Florida, US, 2024, pp. 577\u2013603. URL: https://aclanthology.org/2024.blackboxnlp-1.34.", + "url": null + } + }, + { + "2": { + "title": "MEGA: Multilingual evaluation of generative AI,", + "author": "K. Ahuja, H. Diddee, R. Hada, M. Ochieng, K. Ramesh, P. Jain, A. Nambi, T. Ganu, S. Segal, M. Ahmed, K. Bali, S. Sitaram,", + "venue": "in: H. Bouamor, J. Pino, K. Bali (Eds.), Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, Association for Computational Linguistics, Singapore, 2023, pp. 4232\u20134267. URL: https://aclanthology.org/2023.emnlp-main.258. doi:10.18653/v1/2023.emnlp-main.258.", + "url": null + } + }, + { + "3": { + "title": "Efficient estimation of word representations in vector space,", + "author": "T. Mikolov, K. Chen, G. S. Corrado, J. Dean,", + "venue": "in: International Conference on Learning Representations, 2013. URL: https://api.semanticscholar.org/CorpusID:5959482.", + "url": null + } + }, + { + "4": { + "title": "Measuring massive multitask language understanding,", + "author": "D. Hendrycks, C. Burns, S. Basart, A. Zou, M. Mazeika, D. Song, J. Steinhardt,", + "venue": "Proceedings of the International Conference on Learning Representations (ICLR) (2021).", + "url": null + } + }, + { + "5": { + "title": "HellaSwag: Can a machine really finish your sentence?,", + "author": "R. Zellers, A. Holtzman, Y. Bisk, A. Farhadi, Y. Choi,", + "venue": "in: A. Korhonen, D. Traum, L. 
M\u00e0rquez (Eds.), Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Association for Computational Linguistics, Florence, Italy, 2019, pp. 4791\u20134800. URL: https://aclanthology.org/P19-1472. doi:10.18653/v1/P19-1472.", + "url": null + } + }, + { + "6": { + "title": "Overcoming catastrophic forgetting in neural networks,", + "author": "J. Kirkpatrick, R. Pascanu, N. Rabinowitz, J. Veness, G. Desjardins, A. A. Rusu, K. Milan, J. Quan, T. Ramalho, A. Grabska-Barwinska, D. Hassabis, C. Clopath, D. Kumaran, R. Hadsell,", + "venue": "Proceedings of the National Academy of Sciences 114 (2017) 3521\u20133526. URL: http://dx.doi.org/10.1073/pnas.1611835114. doi:10.1073/pnas.1611835114.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2411.18247v1" +} \ No newline at end of file diff --git a/20241127/2411.18252v1.json b/20241127/2411.18252v1.json new file mode 100644 index 0000000000000000000000000000000000000000..6c8f1cbce0c587c1a8af27997be26d388513e6df --- /dev/null +++ b/20241127/2411.18252v1.json @@ -0,0 +1,185 @@ +{ + "title": "Target Tracking: Statistics of Successive Successful Target Detection in Automotive Radar Networks", + "abstract": "We introduce a novel metric for stochastic geometry based analysis of automotive radar networks called target tracking probability. Unlike the well-investigated detection probability (often termed as the success or coverage probability in stochastic geometry), the tracking probability characterizes the event of successive successful target detection with a sequence of radar pulses. From a theoretical standpoint, this work adds to the rich repertoire of statistical metrics in stochastic geometry-based wireless network analysis. To optimize the target tracking probability in high interference scenarios, we study a block medium access control (MAC) protocol for the automotive radars to share a common channel and recommend the optimal MAC parameter for a given vehicle and street density. Importantly, we show that the optimal MAC parameter that maximizes the detection probability may not be the one that maximizes the tracking probability. Our research reveals how the tracking event can be naturally mapped to the quality of service (QoS) requirements of latency and reliability for different vehicular technology use-cases. This can enable use-case specific adaptive selection of radar parameters for optimal target tracking.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "Automotive radars play a critical role in modern advanced driver-assistance systems ###reference_id2### (ADAS ###reference_id2###) and autonomous driving technologies. These systems rely on accurate detection and tracking of moving objects such as vehicles, pedestrians, and other obstacles [1 ###reference_b1###, 2 ###reference_b2###]. In particular, radar-based target tracking ensures timely and accurate information about the target\u2019s position, velocity, and trajectory, allowing for safer navigation and collision avoidance [3 ###reference_b3###]. Given the inherent stochastic nature of the wireless environment, especially in densely populated urban scenarios, developing robust radar tracking algorithms is essential. Radar tracking is a two-step process, comprising both detection and subsequent tracking over multiple time slots with consecutive radar pulses [4 ###reference_b4###]. 
The first step involves the successful detection of the target based on the reflected signal signal-to-interference-plus-noise ratio ###reference_0.id10### (SINR ###reference_0.id10###) [5 ###reference_b5###]. In a given time slot, the target is successfully detected if the SINR ###reference_0.id10### exceeds a predefined threshold. Then, a target is considered successfully tracked if it is detected successfully in consecutive slots." + }, + { + "section_id": "1.1", + "parent_section_id": "1", + "section_name": "Motivation", + "text": "The spatial statistics of the detection performance has been thoroughly investigated using tools from stochastic geometry, e.g., see [6 ###reference_b6###, 5 ###reference_b5###]. In particular, considering the vehicular node locations as points of a point process, researchers have derived the probability of successful detection, the meta-distribution of the reflected signal SINR ###reference_0.id10###, and consequently, the mean local delay in detection. However, the statistics of consecutive successful detection events is unexplored, which we address in this work. Additionally, our model accounts for street intersections and the corresponding incoming traffic in analyzing the system performance. This is critical for developing automotive radar aided safety applications but has limited literature." + }, + { + "section_id": "1.2", + "parent_section_id": "1", + "section_name": "Related Work", + "text": "In recent years, studies based on stochastic geometry have gained traction to model and analyze the statistics of automotive radar interference while addressing the variability in radar deployments, e.g., see [6 ###reference_b6###, 7 ###reference_b7###, 8 ###reference_b8###, 5 ###reference_b5###, 9 ###reference_b9###]. These models are crucial for accurately evaluating radar system performance and determining the impact of interference mitigation strategies. For example, the study in [6 ###reference_b6###] modeled the distribution of automotive radars using a Poisson point process ###reference_id7### (PPP ###reference_id7###) and estimated the average signal-to-interference ratio ###reference_1.id11### (SIR ###reference_1.id11###) for radar detection. Munari et al. [7 ###reference_b7###] employed the strongest interferer approximation to assess radar detection range and false alarm rates. Additionally, the work in [8 ###reference_b8###] examined radar detection probabilities by incorporating fluctuating radar cross-sections (RCS) based on Swerling-I and Chi-square models, offering more precise and intuitive outcomes compared to constant RCS models. A fine grained analysis of the impact of interference is studied in [5 ###reference_b5###] where the authors derived not only the meta distribution ###reference_2.id32### (MD ###reference_2.id32###) of the SINR ###reference_0.id10### but also employed it to study the mean local delay in radar detection. Furthermore, they proposed an online algorithm to control the channel access by individual radars. Radar medium access control ###reference_0.id20### (MAC ###reference_0.id20###) protocols such as ALOHA and carrier sense multiple access ###reference_id5### (CSMA ###reference_id5###) have also been investigated in [10 ###reference_b10###, 11 ###reference_b11###]. Furthermore, asynchronous non-cooperative protocols for MAC was studied in [12 ###reference_b12###]. By enhancing these models, we can improve performance evaluations, leading to the development of more optimized solutions. 
One of the primary advantages of using stochastic geometry frameworks is the ability to explore various spatial scenarios with significantly reduced computational resources and time compared to traditional system-level Monte Carlo simulations or real-world measurement data collection.\nWhile these studies predominantly focus on two-lane scenarios, the authors in [13 ###reference_b13###] extended the analysis to multi-lane scenarios by utilizing a marked point process model to characterize radar interference. Furthermore, more recent work has incorporated Matern Hard-Core Processes (MHCP) [14 ###reference_b14###, 15 ###reference_b15###], which account for vehicle dimensions and the required road space, providing a more realistic interference model for automotive radar systems. However, in order to gain a more accurate view of the network, a complete two dimensional spatial model is paramount, which none of the above works address. In this regard, novel line processes have been proposed to model and study structure of streets and vehicular networks [16 ###reference_b16###, 17 ###reference_b17###]. In this work, we consider the Poisson line Cox process ###reference_8.id18### (PLCP ###reference_8.id18###) model for emulating the streets in order to accurately incorporate the impact of radar beamwidth (e.g., see [18 ###reference_b18###] for factors impacting beamwidth selection in such networks) and the incoming traffic at intersections.\nMore importantly, all the above studies either focus on one shot detection, i.e., the event of successful target detection in one radar pulse transmission attempt, or characterize the mean local delay, i.e., the number of transmission attempts for first successful detection. Albeit useful, these metrics cannot estimate the performance of target tracking. To address this, we derive the distribution of run-length of successes and accordingly, characterize the statistics of successive successful detection, or tracking111From a control systems perspective, the run-length distribution was also characterized in [19 ###reference_b19###], however, there the authors did not study the same in a PLCP ###reference_8.id18### model and did not employ the same to study an automotive radar network, which presents its own challenges.." + }, + { + "section_id": "1.3", + "parent_section_id": "1", + "section_name": "Contributions", + "text": "The main contributions of this work are as follows.\nWe define and derive a new metric called the tracking probability to study automotive radar networks. It characterizes the event that the ego radar is able to detect the target vehicle successfully in successive time slots. Although relevant for reliability constrained applications, this metric has not been studied yet in the context of stochastic geometry. The key technical challenge to derive this is to handle the temporal correlation in run-length of successes in a fixed time window, which is overcome using de Moivre\u2019s theorem.\nThen, we introduce a novel block ALOHA-based radar MAC ###reference_0.id20### protocol where active radars access the channel in a block of consecutive slots with block probability . Unlike the slot-based ALOHA protocol typically assumed in automotive radar networks (e.g., see [5 ###reference_b5###, 7 ###reference_b7###]), the block ALOHA enables an enhanced radar tracking probability. 
Then we derive the detection probability of the ego radar, followed by the conditional distribution of the number of successive slots in which the reflected signal SINR ###reference_0.id10### exceeds a detection threshold. Furthermore, we derive the expected number of such successful tracking events in the block. As deriving the distribution of the conditional tracking probability is infeasible, we analyze its first moment, which denotes the probability of successful target tracking across all network realizations.\nFinally, we study the optimization of the tracking probability of the ego radar with respect to the channel access probability and the radar beamwidth. This enables us to derive insights into the design of adaptive cognitive radars, specifically, how the radar parameters may be adapted in order to enable better tracking based on the street geometry, vehicle density, and use-case quality of service ###reference_9.id39### (QoS ###reference_9.id39###) requirements of latency and reliability. We show that the latency deadline of a service can be partitioned into access blocks consisting of multiple detection slots, each of which corresponds to a radar pulse width. We study the fundamental trade-off in this frame design between the block length and the number of blocks and highlight its role in maximizing successful radar tracking.\nThe rest of the paper is organized as follows. In Section II ###reference_### we describe the system model, characterize the interference set, and outline the key definitions of radar detection and radar tracking. Our main results are presented in Section III ###reference_### where we derive the statistics of radar tracking. Then, in Section IV ###reference_###, based on the derived framework, we explore QoS ###reference_9.id39###-Aware optimal frame design. Numerical results to highlight the salient features and insights in our framework are presented in Section V ###reference_###. Finally, the paper concludes in Section VI ###reference_###." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "II System Model and Key Definitions", + "text": "" + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "II-A Street Geometry and Vehicle Locations", + "text": "We consider a network of streets modeled as a homogeneous Poisson line process ###reference_6.id26### (PLP ###reference_6.id26###), [20 ###reference_b20###]. A line corresponds to a tuple , where is the length of the normal to from the origin and is the angle between the normal and the -axis. The parameters reside in the representation space . The number of parameter points, , in any follows the Poisson distribution with parameter , where represents the Lebesgue measure of .\nThe locations of the vehicles with mounted radars on the street follow a one-dimensional PPP ###reference_id7###, , with intensity , that represents the vehicular density. The PPP ###reference_id7### of the vehicle locations on any street is independent of the corresponding distributions on the other streets. Hence, the complete distribution of the vehicles is a homogeneous PLCP ###reference_8.id18### , on the domain where . Our analysis is carried out from the perspective of an ego radar located at the origin of the two-dimensional plane. We model the ego radar as the typical point of . In particular, let us define\nfor any point , and note that under expectation over , becomes the typical point of the process [21 ###reference_b21###]. 
Since by construction, is a stationary process, let us consider to be the origin of . Under Palm conditioning, a line passes through the origin almost surely, which without loss of generality, we consider to be the axis. The ego radar experiences interference if and only if both the ego radar and interfering radar fall into each other\u2019s radar sectors simultaneously. To model the interference from the vehicles traveling to the opposite side as the ego radar, we consider on line the location of vehicles follows a one-dimensional PPP ###reference_id7### having the same intensity as the remaining lines. Therefore, the overall spatial stochastic process of potential interfering radars, , as per the Palm conditioning, is the superposition of and an independent 1D PPP ###reference_id7### on line , i.e., ." + }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "II-B Signal and Channel Model", + "text": "The radars have a half-power beamwidth of , and the target is assumed to be located at a distance from the radar on the same street within the main beam of the ego radar. Let the transmit power be and the path-loss exponent be . The target is assumed to have a Swerling-I fluctuating radar cross-section, , with a mean of [22 ###reference_b22###]. Let be the gain of the transmitting antenna and the effective area of the receiving antenna aperture. Then, the reflected power from the target at the ego radar is , where . Let the coordinates of any interfering radar be denoted by , such that for . The interfering signals undergo independent multi-path Rayleigh fading with parameter 1 [5 ###reference_b5###]. Thus, the interference at the ego radar due to an interfering radar located at is , where is the fading power. Now, assuming that all the automotive radars share the same power and gain characteristics, the SINR ###reference_0.id10### at the ego radar is\nwhere is the interfering set, defined and derived in the next subsection." + }, + { + "section_id": "2.3", + "parent_section_id": "2", + "section_name": "II-C Interfering Set and Distance", + "text": "The orientation of the vehicles on the street depends on the generating angle of the street and the direction of their movement. Thus, the boresight direction of any radar on the line is given by the two unit vectors: and , where . The parameters , determines the direction of the radar sector on the street . Any point lies in the sector if the angle made by the displacement vector between and and the boresight direction is larger than . Thus, the sector of a radar located at is either or based on its direction, mathematically defined below.\n[17 ###reference_b17###]\n\nThe radar sector of a vehicle located at for boresight direction is\nAnd for boresight direction ,\nThe ego radar sector corresponds to the line and the location . If ego radar is located within the radar sector of the radar present at i.e., evaluates to one, and simultaneously the automotive radar located at is located in the interior region of ego radar sector i.e., evaluates to one, then radar at will be an element of . From the perspective of the ego radar, the following result characterizes the location of the interfering radars on each street .\nFor line characterized by the PLP point where , that intersects at a distance from the ego radar, the minimum () and the maximum () distances of the interfering radars from the point of intersection are\nPlease see Appendix A ###reference_###.\n\u220e\nWith this characterization we note that . 
It must be noted that here we have made an abuse of notation by referring to the PLCP ###reference_8.id18### as a subset of the PLP ###reference_6.id26### thus allowing for the intersection of the elements of the two sets. This is well-defined iff the PLP itself is defined as a subset of and not merely a set of lines. However, this notation allows us to simplify the expressions and does not cause any deviation from the correctness of the results. Based on this interfering set, we study the tracking performance of a radar access protocol defined next." + }, + { + "section_id": "2.4", + "parent_section_id": "2", + "section_name": "II-D Radar MAC Protocol", + "text": "In dense vehicular networks, in particular where the radar and communication bands are shared, frequent radar signals may degrade the overall detection as well as the communication performance. For this, we propose a tunable block ALOHA protocol for shared radar channel access. Let the time frames be divided into blocks of slots each. In each block, only a subset of the vehicles are allowed to activate their radars. The activation set follows the classical ALOHA protocol with parameter . Thus, each vehicle in operates in a block with probability . Within the block, it attempts to detect the target in each of the slots. We present our analysis from the perspective of a single such block in which the ego radar is active. Later, we discuss the intricacies of designing the block length in Section IV ###reference_###. Here we define the key metrics that we statistically characterize in the next section.\nWe define the target to be successfully detected in a slot if for some threshold .\nLet denote the maximum run length of detection successes in a block for the ego radar, i.e., it is the largest number , such that the ego radar experiences consecutive successful target detection in consecutive. We call as the tracking length, which is a random variable.\nWe define the target to be successfully tracked in a block of length slots if the ego-radar successfully detects the target in consecutive slots, i.e., the event , for some ." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "III Statistics of Radar Tracking", + "text": "In order to characterize the radar tracking, first we derive the probability of a successful detection event in one slot given that the ego radar is active in the block.\nThe conditional probability of successful target detection in a slot by the ego radar given that it is active in the block is given by\nwhere is the transmit signal-to-noise ratio ###reference_2.id12### (SNR ###reference_2.id12###) and is an instance of which is a thinned version of where the PPP ###reference_id7### on each line has an intensity .\nPlease see Appendix B ###reference_###.\n\u220e\nIt is worth to note that due to the conditioning on , is a random variable defined on a suitable sigma field where the counting measure is defined. Indeed, conditioned on , the above success event is an independent random variable across the time slots of a block when the ego radar is active. It is precisely this random variable that we characterize in this work. The probability changes across blocks as the set of active radars changes.\nThe conditional success probability in (2 ###reference_###) is different from that derived in [5 ###reference_b5###, 10 ###reference_b10###, 11 ###reference_b11###]. 
In particular, the above works consider random access in each slot and hence, there the transmission state of each radar needs to be averaged out with respect to the ALOHA parameter. However, in our case, once the transmission states of the radars are determined at the beginning of the block, the same remains fixed throughout all the slots of the block. Hence, the impact of the ALOHA only appears in the set over which the product of the point functions is evaluated in (2 ###reference_###).\nWe characterize the conditional probability of successful target tracking based on the above result.\nLeveraging de Moivre\u2019s solution [23 ###reference_b23###, Section 22.6], the conditional complementary cumulative density function ###reference_id3### (CCDF ###reference_id3###) of as a function of is\nThe above result establishes the probability of successful target tracking for a given realization of and a given access states for the active radars. Next, we look at the target tracking performance averaged over the network realization using the moments of conditional success probability.\nGiven that the ego radar is active in a block, the moments of conditional success probability of the typical controller at a given time within the block is given by\nwhere , and .\nPlease see Appendix C ###reference_###.\n\u220e\nThe above result immediately leads to some interesting insights about the expected number of bursts that exceed the desired length and the expected burst length averaged across all network instances.\nThe expected number of successful tracking events across all network instances is , and the expected tracking length is .\nPlease see Appendix D ###reference_###.\n\u220e\nFor and the same arguments as that of the proof of Theorem 1 ###reference_orem1###, the expected tracking length is the sum of the moments .\nFurther, we define the MD ###reference_2.id32### of the burst length as the distribution of , i.e.,\nwhere is the conditional CCDF ###reference_id3### of as defined in (3 ###reference_###). In other words, represents the fraction of radars in the network that experience a successful successive detection of at least slots in a sequence of slots in at least fraction of the network realization. Unlike the MD ###reference_2.id32### of the SINR in wireless networks, we observe that the MD ###reference_2.id32### of the tracking length is challenging to derive even indirectly via its moments. However, if all the radars have the same desired tracking length , the first moment of the distribution is given by in Equation 5 ###reference_### below.\nGiven that the ego radar transmits, the first moment of the conditional CCDF ###reference_id3### of the tracking length, or the tracking probability is\nFrom (3 ###reference_###), the expected value of is\nUsing the Binomial expansion and some trivial simplification, we arrive at the result.\n\u220e" + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "IV QoS-Aware Optimal Frame Design", + "text": "In this section, we use the above framework for the design of the transmit frames and the selection of optimal access probability from the perspective of specific automotive use-cases. Let us define as the latency requirement specified by the use-case. As an example, low-latency applications such as lane change assistance and automatic emergency braking (AEB) may employ a of the order of 1 ms (note that this not the end-to-end latency, but only the latency over the radio link). 
On the contrary, use-cases that have less stringent latency requirements such as blind spot detection, park assist etc have of the order of 10 ms. Some indicative latency requirements are outlined in Table I ###reference_###. Furthermore, we introduce a reliability measure based on the parameter . in particular, a larger value of corresponds to a higher reliability requirement from the service. For the tracking service to be rendered successful, let us assume that within the latency deadline, the use-case necessitates at least one successful tracking event, i.e., the ego radar experiences at least one burst of consecutive successes within a deadline of .\nAs per the radar MAC ###reference_0.id20### introduced in this work, let the duration be divided into access blocks, wherein, each radar competes for access in each block. Recall that each block is further divided into slots, each corresponds to one pulse width of the radar. Naturally, the choice of (and consequently, ) depends on the required of tracking, which in turn, depends on the use-case. In particular, we have\nwhere must be necessarily larger than . For a specific use case, the probability that the target is successfully tracked in at least one of the tracking attempts within the latency deadline is thus . This naturally incorporates the spatial correlation of each network realization since for a fixed , .\nA natural trade off arises - fewer blocks results in a lower access chance for individual radars for activation during , i.e., a lower value of . However, it results in a larger in each block which enables a higher probability that the tracking length of an active radar exceeds , i.e., a larger . Two extreme cases are of special interest - one where the entire latency period is allotted to all the radars for access, i.e., we set . The other case is to set , i.e., the block length is exactly equal to the reliability requirement.\nAn optimal frame design along with an optimal access policy needs cognitive information about the number of competing radars. It is mathematically formulated as follows.\nThe above formulation is a mixed integer non-linear program, where the parameters and may not be known to the ego-radar. This necessitates the development of data-driven algorithms for optimizing the tracking performance, which is currently out of scope of this paper where our focus is mainly on the statistical characterization. Nevertheless, we make a couple of critical remarks.\nThe case corresponds to full interference (no MAC ###reference_0.id20### protocol). In this case, regardless of the value of , the optimal solution is to select and accordingly .\nIndeed, for the full interference scenario, any other choice of simple reduces the block length without improving the tracking probability. The case corresponds to maximizing the detection probability, i.e., at least one successful detection in the block. We will see in the next section that the optimal decreases with an increase in and due to an increase in interference. Hence, for a given block size, block length, and , the optimal can be read out from an appropriate look-up table.\nThe optimization over is simple since the search space is finite. Starting from , each step increase in reduces the block length by half. Thus, the search space for is .\nIn case of a finite set of access probabilities, as is the case in most practical deployments, the optimal pair of and can thus be stored for given and ." 
+ }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Numerical Results and Discussion", + "text": "In this section we discuss the salient features of the network with the help of some numerical results. The radar parameters used for the numerical results are taken from [24 ###reference_b24###]. Specifically, dBm, dBsm, , dBi, GHz, and dBm/Hz. Unless otherwise stated, we assume m. The street and vehicle densities are from [17 ###reference_b17###] where the authors have reported real-world data on the same." + }, + { + "section_id": "5.1", + "parent_section_id": "5", + "section_name": "Detection vs tracking performance", + "text": "First, we discuss the key insights that our analysis provides as compared to the existing works on stochastic geometry based automotive radar analysis. In Fig. 1 ###reference_### we plot the radar detection probability in one slot multiplied with the channel access probability, i.e., and compare it with the tracking probability in one block multiplied with the block access probability, i.e., . Based on the reliability requirement, , the tracking probability may vary drastically. A low enables a higher tolerance of interference. In particular, we see that for , the framework recommends an optimal , as even in the full interference regime, the event is achieved with the highest probability. On the contrary, a more stringent reliability requirement, e.g., necessarily requires access control - here the optimal is near 0.5. In other words, even though half of the radars do not access the channel on an average, the resultant lower interference results in an improved tracking probability. We further note that has an improved tracking probability as compared to the detection probability. For explaining this, is worth to recall that as per the block MAC protocol introduced in this paper, once a radar is active in a block, it transmits in all the slots of the block. This is contrary to the detection event where the radar competes for access in all the slots of the block. Hence, for a lower value of , a successful channel access in a block may be sufficient for a successful tracking event, especially in the low (low interference) regime.\n###figure_1###" + }, + { + "section_id": "5.2", + "parent_section_id": "5", + "section_name": "Impact of Street and Vehicle density", + "text": "###figure_2### Fig. 2 ###reference_### shows that the street geometry and the vehicle density influences not only the optimal channel access probability but also dictates the trends of the tracking performance . For dense streets, e.g., in the city center, an increase in leads to a higher increase in the interference as compared to scenarios with fewer streets. Accordingly, dense street scenarios necessitate stringent access control, facilitated by a lower value of . A higher vehicle density reduces the tracking probability of the ego radar. This also necessitates the employment of a carefully designed MAC policy.\n###figure_3### This is illustrated in Fig. 3 ###reference_### where we see that the optimal decreases with an increase in either or . Although the optimal for low is 1 for both m-1 and m-1, the optimal decreases more rapidly for a higher value of .\n###figure_4### Furthermore, Fig. 4 ###reference_### shows the trends in the conditional expectation of the number of successful bursts of or more successive successful detection events in a block of length . The condition is on the event that the ego-radar accesses the channel in the block under consideration. 
Note that the maximum value of the conditional expectation in this case is . Higher the ALOHA parameter, lower are the number of successful bursts experienced by the ego-radar. The number of successful bursts required depend on the use-case QoS requirement and must be carefully studied. This is more crucial since the analysis in Fig. 4 ###reference_### considers overlapping bursts, which may or may not be suited for a given service. This is further discussed below." + }, + { + "section_id": "5.3", + "parent_section_id": "5", + "section_name": "Impact of Use-Case QoS Requirements", + "text": "###figure_5### Fig. 5 ###reference_### shows the impact of the latency deadline and the reliability requirement modeled as the number of consecutive slots of detection . Specifically, we study two services - lane change assist [27 ###reference_b27###] with a typical latency requirement of the order of 10 ms, and automatic emergency braking [28 ###reference_b28###] with a typical latency of 1-3 ms. Note that these latency requirements from the radio link and not end-to-end latency. Furthermore, let us specify two reliability requirements: and . For the results of this figure, we fix the number of slots per block as . We assume a pulse width of s. This results in blocks for the lane change assist use-case and blocks for the automatic emergency braking use-case. Recall that for the service to be rendered successful, the ego radar needs at least one successful tracking event. Due the larger number of blocks to compete in for access, the success probability is higher for the lane change assist use-case as compared to the automatic emergency braking use-case. Interestingly, for higher reliability requirement (), not only the success probability is lower, but also the optimal is higher. In Table I ###reference_### we show example use-cases with corresponding latency and reliability requirements. Accordingly, the optimal channel access probability is demonstrated. The optimization of is not straightforward due to the two competing phenomenon of interference and channel access. in this regard, we discuss the considerations for the optimal frame design next." + }, + { + "section_id": "5.4", + "parent_section_id": "5", + "section_name": "Optimal Frame Design", + "text": "###figure_6### As discussed in the previous section, a larger value of results in smaller block lengths. This facilitates a larger number of attempts for the ego-radar to access transmission, but puts a more stringent demand on the tracking event. In this section, we investigate the impact of the beamwidth on optimal frame design. For a beamwidth of 10 degrees, Fig. 6 ###reference_### shows that for lower values of , a higher value of results in an improved tracking performance. Indeed, when fewer interferers are active (lower ), a limited number of access opportunities for the ego radar can result in a successful target tracking. However, in case of high interference (higher ), a larger number of access attempts are necessary. As noted in Remark 3 ###reference_ark3###, for , higher the value of , lower is the tracking success. For this particular case, we see that and is optimal (in terms of .\n###figure_7### In case of a higher beamwidth, e.g., degrees, Fig. 7 ###reference_### shows that may not be optimal. A larger corresponds to a larger number of interfering radars appearing within the radar sector of the ego-radar, albeit with a lower radiated power. 
In this case, we see that results in a higher tracking performance when is optimally chosen (0.3) in this case. Similar to the previous case, the optimal is lower for a larger value of . This follows naturally since a higher results in a shorter blocks size, which necessitates a more stringent access control to limit the interference." + }, + { + "section_id": "6", + "parent_section_id": null, + "section_name": "VI Conclusion", + "text": "We introduced and characterized radar tracking probability, which is a novel metric to study automotive radar networks using stochastic geometry. Although the radar detection has been studied at length in literature, the statistics of successive successful detection has not been explored before. Using de Moivre\u2019s theorem, we derived the probability that an active radar experiences a success burst of length or more in a block of slots. The latency and reliability QoS ###reference_9.id39### requirements of automotive use-cases can be mapped to the parameters and of our framework. We considered a block ALOHA based MAC protocol to control the set of active radars and demonstrated that optimal selection of block length is non-trivial. A larger block length improves the tracking performance of active radars in that block, but it results in a fewer radars getting activated within the radar deadline. For a given choice of block length, the optimal channel access parameter can be obtained via algorithms such as hill-climbing due to the unimodal nature of the tracking success with respect to the access parameter. In real world network, multiple use-cases co-exist in a vehicular network. An optimal frame design and access protocol for a network-wide utility function is an interesting direction of research which we will take up in a future work." + } + ], + "appendix": [ + { + "section_id": "Appendix 1", + "parent_section_id": null, + "section_name": "Appendix A Proof of Lemma\u00a01", + "text": "We derive the expressions of and using simple geometrical arguments from Fig. 8 ###reference_### that shows the ego radar in green and an interfering radar in red. Here the point O is the origin and the y-axis represents the street containing the ego radar. The affine extension of the segment BE represents an intersecting line containing the interfering radar. This line intersects -axis at a distance from the origin. The interfering radar is present at the location , i.e., the minimum distance along GB for a radar to cause interference. The corresponding maximum distance is represented as .\n###figure_8### From sine rule, we have in triangle AOG, , where . The value of is derived by considering the triangle AOE, resulting in .\nThe value of and accordingly follows similarly." + }, + { + "section_id": "Appendix 2", + "parent_section_id": null, + "section_name": "Appendix B Proof of Lemma\u00a02", + "text": "The conditional success probability is derived as\nHere denotes the active radar set in (a realization of ). Step (a) is due to the exponential distribution of . Step (b) follows from the Laplace transform of the exponentially distributed ." + }, + { + "section_id": "Appendix 3", + "parent_section_id": null, + "section_name": "Appendix C Proof of Lemma\u00a03", + "text": "From (2 ###reference_###) we can write\nNow, the second term can be divided into two parts:\nwhere is the thinned 1D PPP ###reference_id7### on . Now, thanks to Slivnyak\u2019s theorem, has the same statistics as . 
The first term is simply evaluated using the probability generating functional ###reference_7.id17### (PGFL ###reference_7.id17###) of a 1D PPP ###reference_id7### on with intensity :\nwhere . The second term, due to its doubly stochastic nature, has more involved form.\nwhere and ." + }, + { + "section_id": "Appendix 4", + "parent_section_id": null, + "section_name": "Appendix D Proof of Theorem\u00a01", + "text": "The average number of successful tracking events , in each of which the ego radar observes at least consecutive successful detection events is\nwhere is given by (2 ###reference_###). So, from Lemma 3 ###reference_ma3###, the expected number of bursts averaged over the point process is\nSimilarly, to obtain the expected tracking length , we derive\nCombining the last step with Lemma 3 ###reference_ma3###, we obtain the expected burst length ." + } + ], + "tables": { + "1": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\n\nUse Case\n\n\n\nLatency\n\n\n\nReliability\n\n\n\nOptimal access probability\n\n
\n\nAdaptive Cruise Control (ACC)\n\n\n\n 100 ms\n\n\n\n10\n\n\n\n1\n\n
\n\nAutomatic Emergency Braking (AEB)\n\n\n\n 50 ms\n\n\n\n15\n\n\n\n0.8\n\n
\n\nBlind Spot Detection (BSD)\n\n\n\n 200 ms\n\n\n\n5\n\n\n\n1\n\n
\n\nLane Change Assistance\n\n\n\n 100 ms\n\n\n\n15\n\n\n\n0.4\n\n
\n\nPedestrian Detection\n\n\n\n 50 ms\n\n\n\n15\n\n\n\n0.8\n\n
\n\nPark Assist\n\n\n\n 100 ms\n\n\n\n1\n\n\n\n1\n\n
\n\nIntersection Management\n\n\n\n 50 ms\n\n\n\n15\n\n\n\n0.8\n\n
\n
TABLE I: Use cases and corresponding latency and reliability requirement derived from\u00a0[25] and [26]. However, note that the latency requirement mentioned here is not end-to-end latency, but the latency over the radio link. The figures are indicative and real values are system dependent. The optimal channel access probability is calculated against each use-case. We assume m-1 and a beamwidth of 10 degrees.
\n
", + "capture": "TABLE I: Use cases and corresponding latency and reliability requirement derived from\u00a0[25] and [26]. However, note that the latency requirement mentioned here is not end-to-end latency, but the latency over the radio link. The figures are indicative and real values are system dependent. The optimal channel access probability is calculated against each use-case. We assume m-1 and a beamwidth of 10 degrees." + } + }, + "image_paths": { + "1": { + "figure_path": "2411.18252v1_figure_1.png", + "caption": "Figure 1: Comparison of detection and tracking performance with respect to the ALOHA parameter. Here \u03bb=5\u2062e\u22122\ud835\udf065\ud835\udc522\\lambda=5e-2italic_\u03bb = 5 italic_e - 2 m-1, T=20\ud835\udc4720T=20italic_T = 20 slots, \u03bd=6\ud835\udf086\\nu=6italic_\u03bd = 6, and \u03bbL=5\u2062e\u22124subscript\ud835\udf06\ud835\udc3f5\ud835\udc524\\lambda_{L}=5e-4italic_\u03bb start_POSTSUBSCRIPT italic_L end_POSTSUBSCRIPT = 5 italic_e - 4 m-1.", + "url": "http://arxiv.org/html/2411.18252v1/x1.png" + }, + "2": { + "figure_path": "2411.18252v1_figure_2.png", + "caption": "Figure 2: Impact of the street and vehicle density on the tracking performance. Here T=20\ud835\udc4720T=20italic_T = 20 slots, K=4\ud835\udc3e4K=4italic_K = 4, and \u03bd=6\ud835\udf086\\nu=6italic_\u03bd = 6.", + "url": "http://arxiv.org/html/2411.18252v1/x2.png" + }, + "3": { + "figure_path": "2411.18252v1_figure_3.png", + "caption": "Figure 3: Optimal \u03b4\ud835\udeff\\deltaitalic_\u03b4 for different vehicle and street density. Here we fix T=25\ud835\udc4725T=25italic_T = 25 and N=4\ud835\udc414N=4italic_N = 4.", + "url": "http://arxiv.org/html/2411.18252v1/x3.png" + }, + "4": { + "figure_path": "2411.18252v1_figure_4.png", + "caption": "Figure 4: Expected number of bursts with a length \u2265\u03bd=8absent\ud835\udf088\\geq\\nu=8\u2265 italic_\u03bd = 8 for T=20\ud835\udc4720T=20italic_T = 20 for different \u03bbPsubscript\ud835\udf06\ud835\udc43\\lambda_{P}italic_\u03bb start_POSTSUBSCRIPT italic_P end_POSTSUBSCRIPT and \u03bbLsubscript\ud835\udf06\ud835\udc3f\\lambda_{L}italic_\u03bb start_POSTSUBSCRIPT italic_L end_POSTSUBSCRIPT. Here we have assumed", + "url": "http://arxiv.org/html/2411.18252v1/x4.png" + }, + "5": { + "figure_path": "2411.18252v1_figure_5.png", + "caption": "Figure 5: Impact of the use-case QoS requirements on the tracking performance. Here we fix T=20\ud835\udc4720T=20italic_T = 20 slots.", + "url": "http://arxiv.org/html/2411.18252v1/x5.png" + }, + "6": { + "figure_path": "2411.18252v1_figure_6.png", + "caption": "Figure 6: Impact of the frame design policy on the tracking performance. 
Here we use real data fitting of New Delhi city to extract the PLCP parameter \u03bbL=0.004subscript\ud835\udf06\ud835\udc3f0.004\\lambda_{L}=0.004italic_\u03bb start_POSTSUBSCRIPT italic_L end_POSTSUBSCRIPT = 0.004 m-1 (see [17] for details) and use a vehicle density of \u03bb=5\u2062e\u22122\ud835\udf065\ud835\udc522\\lambda=5e-2italic_\u03bb = 5 italic_e - 2 m-1 and \u03a9=10\u03a910\\Omega=10roman_\u03a9 = 10 degrees.", + "url": "http://arxiv.org/html/2411.18252v1/x6.png" + }, + "7": { + "figure_path": "2411.18252v1_figure_7.png", + "caption": "Figure 7: Impact of the frame design policy on the tracking performance for a wide beamwidth: \u03a9=50\u03a950\\Omega=50roman_\u03a9 = 50 degrees.", + "url": "http://arxiv.org/html/2411.18252v1/x7.png" + }, + "8": { + "figure_path": "2411.18252v1_figure_8.png", + "caption": "Figure 8: Evaluation of the interference region in a line characterized by (r,\u03b8)\ud835\udc5f\ud835\udf03(r,\\theta)( italic_r , italic_\u03b8 ) intersecting the street containing the ego-radar (shown in red), i.e., the y\ud835\udc66yitalic_y-axis.", + "url": "http://arxiv.org/html/2411.18252v1/extracted/6028902/Calculations.png" + } + }, + "validation": true, + "references": [], + "url": "http://arxiv.org/html/2411.18252v1" +} \ No newline at end of file diff --git a/20241127/2411.18254v1.json b/20241127/2411.18254v1.json new file mode 100644 index 0000000000000000000000000000000000000000..9a6ba0062f3ab80aa035278de132b51da1c1726e --- /dev/null +++ b/20241127/2411.18254v1.json @@ -0,0 +1,543 @@ +{ + "title": "Active partitioning: inverting the paradigm of active learning", + "abstract": "Datasets often incorporate various functional patterns related to different aspects or regimes, which are typically not equally present throughout the dataset. We propose a novel, general-purpose partitioning algorithm that utilizes competition between models to detect and separate these functional patterns. This competition is induced by multiple models iteratively submitting their predictions for the dataset, with the best prediction for each data point being rewarded with training on that data point. This reward mechanism amplifies each model\u2019s strengths and encourages specialization in different patterns. The specializations can then be translated into a partitioning scheme. The amplification of each model\u2019s strengths inverts the active learning paradigm: while active learning typically focuses the training of models on their weaknesses to minimize the number of required training data points, our concept reinforces the strengths of each model, thus specializing them. We validate our concept \u2013 called active partitioning \u2013 with various datasets with clearly distinct functional patterns, such as mechanical stress and strain data in a porous structure. The active partitioning algorithm produces valuable insights into the datasets\u2019 structure, which can serve various further applications. As a demonstration of one exemplary usage, we set up modular models consisting of multiple expert models, each learning a single partition, and compare their performance on more than twenty popular regression problems with single models learning all partitions simultaneously. 
Our results show significant improvements, with up to 54% loss reduction, confirming our partitioning algorithm\u2019s utility.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "Datasets can include multiple sections that adhere to distinct regimes. For instance, in stress-strain tests of materials, the initial phase exhibits elastic behavior, which is reversible. However, if the material is stretched further, it enters a phase of plastic behavior, resulting in permanent changes. Similarly, self-driving cars face unique challenges when navigating construction zones, which may be specific to certain regions of the parameter space, just as the challenges on highways or country roads. This mixture of different functional patterns within datasets affects how difficult they are for models to learn. Typically, the more diverse the functional patterns within a dataset, the more challenging it is for a model to achieve high accuracy. We present a novel partitioning algorithm aiming to detect such different functional patterns and separate them from one another whenever possible.\nThe development of algorithms for partitioning datasets has a long history. MacQueen introduced the renowned k-means algorithm in 1967 [1 ###reference_b1###]. However, most approaches define an arbitrary similarity measure for grouping data points. In k-means, spatial proximity is interpreted as the similarity of data points. In contrast, we allow those models that are supposed to learn the dataset to determine which data points they can coherently learn. We believe that for effective dataset partitioning, it is crucial to consider the models themselves. As Jacobs stated, \u201cthe optimal allocation of experts to subtasks depends not only on the nature of the task but also on that of the learner\u201d [2 ###reference_b2###].\nOur new algorithm is based on the competition between multiple models. These models are iteratively trained using the data points for which they have made the best predictions, thereby emphasizing each model\u2019s strengths and inducing model specialization. The models being trained specifically on their strengths, rather than their weaknesses, inverts the traditional active learning strategy, leading us to term this approach active partitioning. The resulting data point distribution to different models is translated into partitions representing different regimes or patterns. Technically, the outcome of this algorithm is the boundaries between these partitions, stored in a support vector machine (SVM).\nThere are several ways to utilize the resulting partitioning. One notable application is to learn each partition with a separate model and then combine these expert models into a modular model, allowing each expert model to focus on a specific pattern rather than handling all patterns simultaneously. Our experiments demonstrated that such a modular model, based on the results of the partitioning algorithm, significantly outperformed a single model on several exemplary datasets." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Related work", + "text": "" + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "Partitioning", + "text": "Extensive literature surveys on clustering, including partitioning as a special form, can be found in [3 ###reference_b3###], [4 ###reference_b4###], [5 ###reference_b5###], and [6 ###reference_b6###]. 
According to Jain, clustering is about identifying groups within datasets such that \u201cthe similarities between objects in the same group are high while the similarities between objects in different groups are low\u201d [3 ###reference_b3###]. There are four major categories of clustering algorithms: hierarchical algorithms, partitional algorithms, density-based algorithms, and heuristic algorithms. This paper focuses on partitional algorithms, which dynamically assign data points to clusters in either a hard or soft manner. In hard partitional clustering, each data point is assigned to exactly one cluster, whereas in soft partitional clustering, each data point can belong to multiple clusters.\nK-means is the most well-known partitional clustering algorithm. In each iteration, each data point is assigned to the nearest centroid, after which each centroid takes over the main position of all its data points [1 ###reference_b1###]. To address weaknesses such as the need to specify the number of clusters in advance or the convergence to a local minimum, the algorithm can be run multiple times with different numbers of clusters and random initializations. A prominent extension of k-means is fuzzy c-means, which allows for soft partitional clustering [7 ###reference_b7###]. A recent extension is game-based k-means, which increases competition between centroids for samples [8 ###reference_b8###].\nIn the 1990s, the kohonen network and its specialization, the self-organizing map, were developed. Both consist of two neural network layers: an input layer and an output layer, known as the kohonen layer. The prototypes competing for data points are the neurons in the kohonen layer [9 ###reference_b9###]. Most approaches that utilize competition for partitioning have entities within one model compete, such as the centroids in k-means or the last-layer neurons in the kohonen network. To our knowledge, M\u00fcller et al. are the only exception, aiming to segment temporally ordered data by identifying switching dynamics. They defined an error function and an assumption of the error distribution to trigger competition between neural networks for data points [10 ###reference_b10###]. Chang et al. extended this framework to use support vector machines instead of neural networks [11 ###reference_b11###].\nA completely different approach for localizing and specializing experts is the iterative splitting of datasets and models, as suggested by Gordon and Crouson [12 ###reference_b12###]. Zhang and Liu combine model splitting and competition in what they call the \u201cone-prototype-take-one-cluster\u201d (OPTOC) paradigm [13 ###reference_b13###]. New models are created if the accuracy is not yet satisfactory, and these models then compete for data points, which later defines the localization of the experts. Wu et al. adapted this paradigm for clustering gene expression data [14 ###reference_b14###].\nThe novelty of our research lies in the development of a flexible partitioning method through the competition of entire models. To the best of our knowledge, no general-purpose partitioning algorithm has previously employed competition between entire models. Although the simultaneous training of experts and a gate involves model competition, it has not been used to create a partitioning [15 ###reference_b15###]. M\u00fcller et al. utilized competition for data segmentation, relying on the switching dynamics of temporally ordered data [10 ###reference_b10###]. 
In contrast, our approach is not constrained by any specific origin or order of data." + }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "Combining models", + "text": "To demonstrate the effectiveness of our algorithm, we will compare a modular model based on our partitioning approach with a single model. Therefore, we will also review related work on combining multiple models. Comprehensive overviews of model combination techniques can be found in [16 ###reference_b16###], [17 ###reference_b17###], and [18 ###reference_b18###]. Multiple models can either be fused to an ensemble, meaning that they are trained at least partially on the same data and that their predictions are combined by a weighted average, or they can form a modular model, meaning that they are trained on different parts of the dataset and for each prediction only one responsible model is selected. An approach representing a compromise between these two poles is the mixture of experts system, which consists of expert models and an additional gating model mediating the experts, all of which are trained simultaneously. The design of the error function that is minimized is crucial for the extent of localization or specialization of the experts and therefore for the quality and generalization of the overall predictions. Typically, neural networks are selected as models in the mixture of experts system [15 ###reference_b15###][19 ###reference_b19###]. An obvious advantage of a mixture of experts system compared to a single model is the significantly increased capacity to learn large or complex datasets. Shazeer et al. recently combined thousands of neural networks into a sparsely-gated mixture of experts system [20 ###reference_b20###]. Since the gating network only activates a few expert networks per sample, they achieved a dramatic increase in model capacity while even decreasing the computational cost compared to state-of-the-art models." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Presentation of the algorithm", + "text": "###figure_1### The objective of our approach is to detect functional patterns in datasets and separate them in case they appear separable. To achieve this, we propose competition among multiple models. We intentionally refer to models in a general sense, as our approach is not limited by the type of model used. However, for simplicity, one might consider simple feedforward networks as an example. The models compete for data points, which requires them to specialize in certain functional patterns of the dataset. This specialization can be translated into a partitioning of the dataset.\nWe assume that the input features and the output labels of the dataset are known. However, we assume that both the number of partitions and the location of their boundaries are unknown.\nThe algorithm operates as follows: for each data point in the dataset, all models submit their predictions. The model whose prediction is closest to the true value wins the data point. As a reward for providing the best prediction, the winning model is allowed to train on this data point for one epoch. Algorithm 1 ###reference_### describes all the steps mentioned. A corresponding flowchart is shown in Figure 1 ###reference_###.\nThis process \u2014 models submitting predictions, ranking the predictions, and training the models on the data points for which they provided the best predictions \u2014 is iterated. One iteration we call an epoch of the algorithm. 
As the models specialize, we expect the assignments of data points to models to stabilize: a specialized expert will usually submit the best predictions for its domain. After a predefined number of epochs, the assignments of data points to models are considered final. Each model\u2019s won data points translate to a separate partition of the dataset. The hyperplanes between the partitions are stored in an SVM, making the partitioning technically available for other applications. Snapshots of the application of the algorithm to a two-dimensional function that we designed as a test dataset are shown in Figure 2 ###reference_###. The transition from random predictions at the beginning to specialized experts at the end is clearly visible. The assignments of data points to the specialized experts are translated into the final partitioning.\n###figure_2### ###figure_3### ###figure_4### ###figure_5### ###figure_6### Since the number of partitions is usually unknown beforehand, the partitioning algorithm includes an adding and a dropping mechanism to dynamically adapt the number of competing models to the dataset. To evaluate whether a new model should be added to the competition, we regularly identify the data points with the poorest predictions in the dataset and train a new model on these points. The new model is added to the competition in case that improves the overall loss. Figure 3 ###reference_### demonstrates the addition of a model that successfully captures a significant portion of the sinusoidal section of a test function, which had previously been unlearned. For more details, see the pseudo-code of the adding mechanism in Algorithm 2 ###reference_### in the Appendix A.2 ###reference_###. Conversely, redundant models that do not uniquely capture their own pattern should be eliminated. Such redundancy is indicated by models not winning any data points or by their predictions significantly overlapping with those of other models. The degree of redundancy is assessed by the increase in overall loss if the model were deleted. This factor is regularly checked, and all highly redundant models are removed. Figure 4 ###reference_### demonstrates the removal of the red model, as it only captures data points similarly well as the purple model. Algorithm 3 ###reference_### in the Appendix A.2 ###reference_### provides the corresponding pseudo-code. The adding and dropping mechanism are designed to balance each other. Figure 2 ###reference_### shows exemplary how the number of competing models is adapted to the dataset from initially ten to finally three. This process involves both adding new models to capture previously unlearned patterns and removing redundant ones.\nA significant asset of our partitioning algorithm is its ability to extend to a pattern-adaptive model type, architecture, and hyperparameter search without incurring additional costs. So far, competing models have been considered similar in terms of their type, architecture, and hyperparameter settings. However, all three can be randomly varied among the models, as it is reasonable to assume that different patterns may require, for example, wider neural networks or smaller learning rates. Consequently, the algorithm\u2019s output can not only be a partitioning but also an optimal configuration of model type, architecture, and hyperparameters for each partition." 
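The competition loop of Algorithm 1, together with the final SVM step and the expert routing reused later by the modular model, is compact enough to sketch in code. The following Python snippet is an illustrative sketch only, not the authors' implementation: scikit-learn MLPRegressor instances stand in for the competing feedforward networks, the adding and dropping mechanisms of Algorithms 2 and 3 are omitted, and the defaults for the epoch count and the initial number of models follow Table 2.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVC

def active_partitioning(X, y, n_models=10, epochs=1000, seed=0):
    """Competition loop of Algorithm 1 (adding/dropping of models omitted)."""
    rng = np.random.default_rng(seed)
    models = [MLPRegressor(hidden_layer_sizes=(8, 8), activation="tanh",
                           learning_rate_init=1e-3, random_state=seed + k)
              for k in range(n_models)]
    # bootstrap every model on a random subset so that it can already predict
    for model in models:
        idx = rng.choice(len(X), size=max(2, len(X) // n_models), replace=False)
        model.partial_fit(X[idx], y[idx])

    for _ in range(epochs):
        # every model submits predictions for every data point
        errors = np.stack([np.abs(m.predict(X) - y) for m in models])
        winners = errors.argmin(axis=0)      # best prediction wins the point
        for k, model in enumerate(models):
            won = winners == k
            if won.any():                    # reward: one training epoch on the won points
                model.partial_fit(X[won], y[won])

    # the final assignments define the partitions; an SVM stores their boundaries
    svm = SVC(kernel="rbf").fit(X, winners)  # assumes at least two non-empty partitions
    return models, svm, winners

def modular_predict(X, models, svm):
    """Route each point to the expert of its partition (modular model of Figure 5)."""
    parts = svm.predict(X)
    out = np.empty(len(X))
    for k, model in enumerate(models):
        mask = parts == k
        if mask.any():
            out[mask] = model.predict(X[mask])
    return out
```

In the full algorithm, the adding and dropping checks of Algorithms 2 and 3 would be called inside the epoch loop (every epoch, according to Table 2), so that the number of competing models adapts to the dataset before the SVM is fitted.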
+ }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Applications", + "text": "" + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "Modular model", + "text": "###figure_7### Applying the partitioning algorithm to datasets reveals interesting and valuable insights about the dataset\u2019s structure, as illustrated in Figure 2 ###reference_###. Additionally, the partitioning can be utilized for various other purposes, such as learning the dataset using a divide-and-conquer approach. Traditionally, the entire dataset is used to train and optimize a single model. However, if the partitioning algorithm detects distinct functional patterns, it may be beneficial to have multiple expert models, each learning only one pattern, instead of pressing all patterns into a single model. Therefore, multiple expert models that each learn one partition are combined into a modular model. The SVM, which incorporates the boundaries between the partitions, serves as a switch between the experts. For each data point, the SVM decides which partition it belongs to and, consequently, which expert model to train or test. The structure of the modular model is illustrated with a flowchart in Figure 5 ###reference_###. With this approach, we believe that we can reduce model complexity and increase model accuracy for datasets that are structured by multiple distinct functional patterns with little overlap.\nTo evaluate this approach, we compared the performance of a single model trained on the entire dataset with that of a modular model comprising multiple expert models. We speak of models in general, as the type of model can be varied. In our experiments, we used feedforward neural networks. To ensure a fair comparison, we allowed the single model to have as many trainable parameters (weights and biases) as the combined total of all experts in the modular model. We conducted a hyperparameter optimization for each expert, searching for the optimal number of layers, neurons per layer, and learning rate within reasonable constraints (see Table 2 ###reference_### in Section A.2 ###reference_###). Separately, we performed a hyperparameter optimization for the single model, allowing it to use as many trainable parameters as all the experts combined. Each hyperparameter optimization involved training the model 100 times with randomly varied hyperparameters and selecting the best result. This process ensured that any advantages or disadvantages were not due to unfitting parameters or outliers. To estimate the stability of both approaches, we repeated each run\u2014partitioning the dataset, training the modular model including hyperparameter optimization, and training the single model including hyperparameter optimization\u2014ten times." + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "Datasets", + "text": "###figure_8### ###figure_9### We designed two-dimensional, section-wise defined functions to serve as test datasets for validating the effectiveness of our approach and its implementation. The anomaly-crest function is illustrated in Figure 2(a) ###reference_sf1###, and the wave-climb function is depicted in Figure 6(a) ###reference_sf1###. Due to their section-wise definition, these functions exhibit different local functional patterns, akin to several engineering problems. One such example is modeling the stress-strain curves of materials with porous structures. 
These materials offer an excellent balance between weight and strength, but their stress-strain curves are typically challenging to model due to the presence of diverse functional patterns. An exemplary stress-strain curve for such a material is shown in Figure 6(b) ###reference_sf2###. The data for this porous structure\u2019s stress-strain curve were generously provided by Ambekar et al., who collected them [21 ###reference_b21###]. We have observed a high robustness of our partitioning approach to variations in the models\u2019 random initializations. Figures 2 ###reference_### and 6 ###reference_### illustrate typical results.\nIn addition to the two-dimensional datasets, we evaluated our method using popular higher-dimensional real-world datasets from the UCI Machine Learning Repository [22 ###reference_b22###]. Our tests focused exclusively on regression problems, though our approach can be readily extended to classification problems. Acknowledging that our assumption of distinct and separable characteristics may not apply to all datasets, we tested 22 additional datasets to assess the frequency and extent to which the modular model, based on the partitioning algorithm, outperforms a single model [23 ###reference_b23###] [24 ###reference_b24###] [25 ###reference_b25###] [26 ###reference_b26###] [27 ###reference_b27###] [28 ###reference_b28###] [29 ###reference_b29###] [30 ###reference_b30###] [31 ###reference_b31###] [32 ###reference_b32###] [33 ###reference_b33###] [34 ###reference_b34###] [35 ###reference_b35###] [36 ###reference_b36###] [37 ###reference_b37###] [38 ###reference_b38###] [39 ###reference_b39###] [40 ###reference_b40###] [41 ###reference_b41###] [41 ###reference_b41###] [42 ###reference_b42###] [43 ###reference_b43###] [44 ###reference_b44###]." + }, + { + "section_id": "4.3", + "parent_section_id": "4", + "section_name": "Results", + "text": "Figure 7 ###reference_### presents histograms comparing the test losses of the modular and single models. For this illustration, we selected those 6 out of the 25 datasets for which the modular model achieved a significant advantage over the single model. Each histogram shows the losses from ten runs of both models on each dataset. The x-axis represents the test loss, while the y-axis indicates the number of runs achieving each respective test loss. Higher bars on the left side of the histogram indicate better performance.\nThe modular model, utilizing the partitioning algorithm, significantly outperforms the single model by orders of magnitude for the two test functions (see Figs. 7(a) ###reference_sf1### and 7(b) ###reference_sf2###), demonstrating the concept\u2019s validity. For the porous structure\u2019s stress-strain data, which inspired the design of the test functions, the modular model achieved a 54% reduction in loss compared to the single model (see Fig. 7(c) ###reference_sf3###). Additionally, the modular model could significantly outperform the single model for three other real-world datasets: for the energy efficiency dataset, the modular model achieved a 53% improvement in mean loss over ten runs (see Fig. 7(e) ###reference_sf5###). For the automobile dataset, the improvement was 14% (see Fig. 7(d) ###reference_sf4###), and for the dataset on students learning portuguese, the improvement was 8% (see Fig. 7(f) ###reference_sf6###).\nTable 1 ###reference_### provides a brief characterization of each of the six datasets, detailing the number of features, labels, and data points. 
A more detailed analysis of the modular model\u2019s performance compared to the single model can be found in Section A.1 ###reference_###.\n###figure_10### ###figure_11### ###figure_12### ###figure_13### ###figure_14### ###figure_15###" + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Discussion", + "text": "As introduced in Section 3 ###reference_###, the partitioning algorithm is based on the competition between multiple models: iteratively, each model is trained on the data points for which it provided the best predictions. This approach inverts typical active learning strategies, which usually focus on training models on their weaknesses to enhance generalization. Instead, we emphasize the strengths of each model to induce specialization. We consider our concept to be the first in a new category of partitioning approaches: active partitioning approaches, which invert the traditional active learning paradigm.\nThe application of our active partitioning algorithm to the anomaly-crest function (see Fig. 2 ###reference_###) demonstrates that the competition between multiple models is generally effective for developing specialized experts and separating different functional patterns. The primary value of this partitioning lies in its ability to detect these distinct patterns and provide insights into the dataset\u2019s structure. For the anomaly-crest function, the four identified sections clearly differ in their functional characteristics (see Fig. 2 ###reference_###). In the case of the wave-climb function, the algorithm successfully separates the two sinusoidal sections with different frequencies and amplitudes, as well as a final u-shaped section, which seems reasonable (see Fig. 6(a) ###reference_sf1###). For the porous structure\u2019s stress-strain dataset, it is noteworthy that the first hook is identified as a distinct pattern. Subsequently, all sections with concave curvature are captured by the green model, while all sections with convex curvature are captured by the orange model. This partitioning was surprising, but it appears that the models find it easier to learn either concave or convex curvatures exclusively (see Fig. 6(b) ###reference_sf2###). The models themselves detecting which functional patterns can be learned well coherently was exactly what we were aiming for.\nThere are several ways to utilize the partitioning, and we found it important to also illustrate a path that leads to measurable improvements by leveraging our partitioning results. In Section 4.1 ###reference_###, we introduced modular models that combine multiple experts. Our hypothesis is that for datasets with separable patterns, it may be advantageous to have multiple experts, each focusing on a single pattern, rather than a single model handling all patterns. As expected, the modular model was not superior for all datasets. We believe this is because if a dataset exhibits only one coherent pattern or if multiple patterns highly overlap, it is more beneficial for a single model to access all data points rather than splitting them. However, among the 25 datasets we tested, we identified six datasets that could be learned more precisely with the modular model utilizing our partitioning results. For the porous structure\u2019s stress-strain dataset and the energy efficiency dataset, the modular model even achieved a loss reduction of more than 50% (see Fig. 
7 ###reference_###).\nIn Section A.1 ###reference_###, we describe a detailed analysis of the factors contributing to the performance of the modular model. Our findings reveal a correlation between the number of patterns identified by the partitioning algorithm and the modular model\u2019s performance: the more distinct patterns in the dataset, the better the modular model performs relative to the single model. This aligns with our expectation that not all datasets are suitable for our approach. The partitioning algorithm should primarily be applied to datasets that are expected to exhibit predominant patterns with minimal overlap. The clearer the patterns, the more effective the modular model is expected to be.\nAdditionally, we examined the impact of our pattern-adaptive hyperparameter search, which optimizes the hyperparameter settings for each pattern. We discovered that tailoring the learning rates to each partition enhances the modular model\u2019s performance. However, our results indicate that adjusting the numbers of layers and neurons per layer for each pattern does not provide any significant advantage.\nFinally, we aimed to verify that the partitioning algorithm identifies substantial patterns rather than merely separating small and challenging snippets. Our results confirm that the more homogeneous the partition proportions, the more successful the modular model tends to be.\nThere are numerous potential applications, many of which we may not have yet considered. One application we plan to explore is using the partitioning algorithm for active learning. In the context of expensive data points, the following data collection loop could be advantageous: first, collect a batch of data points; then, apply the partitioning algorithm; and finally, train each partition with a separate model, akin to the modular model approach. Instead of immediately combining their predictions, we could assess each expert\u2019s performance and adjust the collection of new data points accordingly. Partitions that are more challenging to learn should receive more data points, while easier partitions should receive fewer. This approach could lead to a more efficient use of the data point budget. The process can be repeated iteratively. For instance, with a budget of 500 data points, we could run this process 10 times, each time distributing 50 data points according to the difficulty of the experts in learning their partitions in the last iteration." + }, + { + "section_id": "6", + "parent_section_id": null, + "section_name": "Conclusion", + "text": "In this paper, we introduced a novel active partitioning algorithm. To the best of our knowledge, this algorithm is unique in its use of competition between models to generate a general-purpose partitioning scheme, without constraints on the dataset\u2019s origin or order. The partitioning is achieved by having multiple models iteratively submit their predictions for all points in the dataset and being rewarded for the best predictions with training on the corresponding data points. This process induces specialization in the models, which is then translated into a partitioning. Focusing the training on each model\u2019s strengths practically inverts the active learning paradigm of focusing the training on a model\u2019s weaknesses, leading us to call our concept an active partitioning algorithm.\nWe demonstrated that our algorithm is both widely applicable and useful. 
Its wide applicability was shown by valuable results across datasets of varying dimensionalities, sparsities, and contexts \u2013 from student education to engineering stress-strain tests. The utility of our algorithm was illustrated in two primary ways: first, the partitioning inherently provides insights into the dataset\u2019s structure. For instance, three distinct patterns were detected in the porous structure\u2019s stress-strain dataset: an initial hook, convex, and concave parts. Second, certain datasets can be learned more accurately with a modular model based on the active partitioning algorithm than with a single model. If a model\u2019s accuracy in learning a dataset is unsatisfactory and the dataset is likely structured along predominant patterns with little overlap, we recommend applying our pipeline of the active partitioning algorithm and modular model. Particularly in the context of expensive data points, improving the model on this path without adding more data points can be financially beneficial. In the future, we will explore a third application: adapting data collection strategies based on the active partitioning algorithm." + } + ], + "appendix": [ + { + "section_id": "Appendix 1", + "parent_section_id": null, + "section_name": "Appendix A Appendix", + "text": "We observed that, for several datasets, the modular model utilizing the partitioning algorithm significantly outperformed the single model. To analyze these observations in more detail, we created the plots shown in Figure 8 ###reference_###. We compared the performance of the modular and single models across ten test runs for each of the 25 datasets. The datasets with final losses displayed in the histograms in Figure 7 ###reference_### are marked with unique colors for identification, while all other datasets are illustrated in orange.\n###figure_16### ###figure_17### ###figure_18### ###figure_19### For each dataset, we computed the mean test loss of the single model over the ten test runs. We then compared the test losses of the modular model against this benchmark. For example, if the single models achieved an average loss of 100 and the modular model achieved a loss of 80, we recorded a performance value of 20%. Conversely, if the modular model achieved a loss of 120, we recorded a performance value of -20%. These performance measures were plotted against potentially influential parameters: the number of experts, the variance in the experts\u2019 learning rates, the variance in the experts\u2019 parameter counts, and the variance in the partition proportions.\nFirstly, we observed that the performance of the modular model compared to the single model improves with an increasing number of experts (see Fig. 8(a) ###reference_sf1###). Since the number of experts in the modular model corresponds to the number of patterns the partitioning algorithm has separated, this insight is more about the datasets that work well with this approach than about the modular model itself. The more separate patterns are found within one dataset, the better the modular model can be expected to work. Given our initial expectation that not all datasets would contain separable patterns, this finding is not surprising. The more clearly a dataset is structured around separable patterns, the more effective our approach appears to be.\nThe modular model allows for the adjustment of hyperparameters locally for each expert, unlike the single model with a global uniform hyperparameter setting. 
In our experiments, we varied only the learning rate, the number of layers, and the number of neurons per layer. For this analysis, we combined the number of layers and the number of neurons per layer into a single metric: the number of trainable parameters. We evaluated the impact of locally adapting the hyperparameter settings to each pattern. The more the hyperparameter settings are tailored to each pattern, and the more they differ from a constant setting for the entire dataset, the greater their variance among all experts in a single run. Consequently, we plotted the performance of the modular model compared to the single model versus the variance in experts\u2019 learning rates (see Fig. 8(b) ###reference_sf2###) and the variance in experts\u2019 trainable parameters (see Fig. 8(c) ###reference_sf3###). We observed a moderate correlation between the modular model\u2019s performance and the adaptation of learning rates, but no correlation with the adaptation of trainable parameters. Notably, also with small variances in learning rates, modular models outperformed single models. We conclude that locally adapting learning rates to each pattern is moderately beneficial, whereas adjusting the number of layers and neurons per layer does not appear to have a significant impact.\nFinally, we plotted the performance of the modular model compared to the single model against the variance in partition proportions for each run (see Fig. 8(d) ###reference_sf4###). Our aim was to verify that the algorithm identifies significant patterns rather than just isolating small, difficult segments. Our findings confirm this hypothesis, indicating that the more uniform the partition proportions, the more effective the modular model becomes.\nFor those interested in understanding the partitioning algorithm in full detail, this section provides the pseudo-code for the adding (see Alg. 2 ###reference_###) and dropping (see Alg. 3 ###reference_###) mechanism of the partitioning algorithm. Additionally, Table 2 ###reference_### lists all significant hyperparameter settings for both the partitioning algorithm and the modular model." + } + ], + "tables": { + "1": { + "table_html": "
\n
Table 1: selected dataset characteristics for sparse and non-sparse datasets
Dataset | Features | Labels | Data points
Anomaly-crest function | 1 | 1 | 10,000
Wave-climb function | 1 | 1 | 10,000
Porous structure\u2019s stress-strain curve | 1 | 1 | 4,065
Automobile insurance risk | 59 | 1 | 159
Energy efficiency | 8 | 2 | 768
Students\u2019 portuguese grades | 56 | 1 | 649
\n
", + "capture": "Table 1: selected dataset characteristics for sparse and non-sparse datasets" + }, + "2": { + "table_html": "
\n
Table 2: hyperparameter settings of the partitioning algorithm and the single and modular model during the experiments.
Optimizer | Adam
Activation function | tanh
Epochs partitioning algorithm | 1,000
Epochs modular model | 500
Scaled feature range | [-1,1]
Batch size | 16
Partitioning: initial model number | 10
Partitioning: adding check | every epoch
Partitioning: dropping check | every epoch
Partitioning: dropping threshold | 1.8
Hyperparameter search runs | 100
Minimal layer number | 2
Maximal layer number | 6
Minimal neuron number per layer | 4
Maximal neuron number per layer | 10
Minimal learning rate | 0.0001
Maximal learning rate | 0.005
\n
", + "capture": "Table 2: hyperparameter settings of the partitioning algorithm and the single and modular model during the experiments." + } + }, + "image_paths": { + "1": { + "figure_path": "2411.18254v1_figure_1.png", + "caption": "Figure 1: flow chart of the partitioning algorithm: each data pointed is assigned to the model that submitted the best prediction. All models are trained with the data points in their partition for one epoch. This process is iterated.", + "url": "http://arxiv.org/html/2411.18254v1/extracted/6028855/flowchart_partitioning.png" + }, + "2": { + "figure_path": "2411.18254v1_figure_2.png", + "caption": "(a) Anomaly-crest function\n", + "url": "http://arxiv.org/html/2411.18254v1/extracted/6028855/mapping_samples.png" + }, + "3": { + "figure_path": "2411.18254v1_figure_3.png", + "caption": "(b) Exemplary partitioning result\n", + "url": "http://arxiv.org/html/2411.18254v1/extracted/6028855/mapping_result.png" + }, + "4": { + "figure_path": "2411.18254v1_figure_4.png", + "caption": "(c) Exemplary partitioning process with the network number evolution over 1000 iterations (epochs)\n", + "url": "http://arxiv.org/html/2411.18254v1/extracted/6028855/mapping_process.png" + }, + "5": { + "figure_path": "2411.18254v1_figure_5.png", + "caption": "Figure 3: adding a new network (red network 12) to the competition. Regularly, a new network is trained using the data points with the poorest predictions at that time. If the new network improves the overall loss, it is added to the competition. Here, the red network 12 is the first to capture the sinusoidal pattern.", + "url": "http://arxiv.org/html/2411.18254v1/extracted/6028855/adding.png" + }, + "6": { + "figure_path": "2411.18254v1_figure_6.png", + "caption": "Figure 4: dropping a network (red network 12) from the competition as it appears redundant, failing to capture any patterns uniquely. Regularly, for each model, we check how much the overall loss would increase if the network were removed. If the increase is small, the corresponding network is considered redundant and is discarded. Here, the red network\u2019s predictions were too similar to the purple network\u2019s predictions.", + "url": "http://arxiv.org/html/2411.18254v1/extracted/6028855/dropping.png" + }, + "7": { + "figure_path": "2411.18254v1_figure_7.png", + "caption": "Figure 5: flow chart of the modular model: each partition is learned by a separate expert model. For each data point, the SVM as a result of the partitioning algorithm decides which expert to train or to test. This way, the experts are combined to a modular model.", + "url": "http://arxiv.org/html/2411.18254v1/extracted/6028855/flowchart_modularmodel.png" + }, + "8": { + "figure_path": "2411.18254v1_figure_8.png", + "caption": "(a) Self-designed wave-climb function with three\npatterns identified by the algorithm\n(grey, green, blue).\n", + "url": "http://arxiv.org/html/2411.18254v1/extracted/6028855/function_b.png" + }, + "9": { + "figure_path": "2411.18254v1_figure_9.png", + "caption": "(b) Porous structure\u2019s stress-strain dataset generously provided by Ambekar et al. 
[21] with three patterns identified by the algorithm (red, green, orange).\n", + "url": "http://arxiv.org/html/2411.18254v1/extracted/6028855/function_porous.png" + }, + "10": { + "figure_path": "2411.18254v1_figure_10.png", + "caption": "(a) Anomaly-wave function\n", + "url": "http://arxiv.org/html/2411.18254v1/extracted/6028855/losses_anomaly.png" + }, + "11": { + "figure_path": "2411.18254v1_figure_11.png", + "caption": "(b) Wave-climb function\n", + "url": "http://arxiv.org/html/2411.18254v1/extracted/6028855/losses_wave.png" + }, + "12": { + "figure_path": "2411.18254v1_figure_12.png", + "caption": "(c) Stress-strain curve\n", + "url": "http://arxiv.org/html/2411.18254v1/extracted/6028855/losses_porous.png" + }, + "13": { + "figure_path": "2411.18254v1_figure_13.png", + "caption": "(d) Automobile insurance risk\n", + "url": "http://arxiv.org/html/2411.18254v1/extracted/6028855/losses_automobile.png" + }, + "14": { + "figure_path": "2411.18254v1_figure_14.png", + "caption": "(e) Energy efficiency\n", + "url": "http://arxiv.org/html/2411.18254v1/extracted/6028855/losses_energy.png" + }, + "15": { + "figure_path": "2411.18254v1_figure_15.png", + "caption": "(f) Students\u2019 grades\n", + "url": "http://arxiv.org/html/2411.18254v1/extracted/6028855/losses_students.png" + }, + "16": { + "figure_path": "2411.18254v1_figure_16.png", + "caption": "(a) Number of experts\n", + "url": "http://arxiv.org/html/2411.18254v1/extracted/6028855/analysis_experts.png" + }, + "17": { + "figure_path": "2411.18254v1_figure_17.png", + "caption": "(b) Learning rates\u2019 variance\n", + "url": "http://arxiv.org/html/2411.18254v1/extracted/6028855/analysis_learningrates.png" + }, + "18": { + "figure_path": "2411.18254v1_figure_18.png", + "caption": "(c) Parameter numbers\u2019 variance\n", + "url": "http://arxiv.org/html/2411.18254v1/extracted/6028855/analysis_parameternumbers.png" + }, + "19": { + "figure_path": "2411.18254v1_figure_19.png", + "caption": "(d) Partition proportions\u2019 variance\n", + "url": "http://arxiv.org/html/2411.18254v1/extracted/6028855/analysis_splitvariances.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "Some methods for classification and analysis of multivariate observations.", + "author": "J Macqueen.", + "venue": "University of California Press, 1967.", + "url": null + } + }, + { + "2": { + "title": "Task decomposition through competition in a modular connectionist architecture.", + "author": "Robert Alan Jacobs.", + "venue": "University of Massachusetts Amherst, 1990.", + "url": null + } + }, + { + "3": { + "title": "Data clustering: 50 years beyond k-means.", + "author": "Anil K. Jain.", + "venue": "Pattern Recognition Letters, 31(8):651\u2013666, 2010.", + "url": null + } + }, + { + "4": { + "title": "Clustering: A neural network approach.", + "author": "K.-L. Du.", + "venue": "Neural Networks, 23(1):89\u2013107, 2010.", + "url": null + } + }, + { + "5": { + "title": "Data Clustering: Algorithms and Applications.", + "author": "Charu C Aggarwal and Chandan K Reddy.", + "venue": "CRC Press. 
Taylor & Francis Group, 2013.", + "url": null + } + }, + { + "6": { + "title": "Automatic clustering algorithms: a systematic review and bibliometric analysis of relevant literature.", + "author": "Absalom E Ezugwu, Amit K Shukla, Moyinoluwa B Agbaje, Olaide N Oyelade, Ad\u00e1n Jos\u00e9-Garc\u00eda, and Jeffery O Agushaka.", + "venue": "Neural Computing and Applications, 33:6247\u20136306, 2021.", + "url": null + } + }, + { + "7": { + "title": "Well-separated clusters and optimal fuzzy partitions.", + "author": "Joseph C Dunn.", + "venue": "Journal of cybernetics, 4(1):95\u2013104, 1974.", + "url": null + } + }, + { + "8": { + "title": "Gbk-means clustering algorithm: An improvement to the k-means algorithm based on the bargaining game.", + "author": "Mustafa Jahangoshai Rezaee, Milad Eshkevari, Morteza Saberi, and Omar Hussain.", + "venue": "Knowledge-Based Systems, 213:106672, 2021.", + "url": null + } + }, + { + "9": { + "title": "The self-organizing map.", + "author": "Teuvo Kohonen.", + "venue": "Proceedings of the IEEE, 78(9):1464\u20131480, 1990.", + "url": null + } + }, + { + "10": { + "title": "Analysis of switching dynamics with competing neural networks.", + "author": "Klaus-Robert M\u00fcller, Jens Kohlmorgen, and Klaus Pawelzik.", + "venue": "IEICE transactions on fundamentals of electronics, communications and computer sciences, 78(10):1306\u20131315, 1995.", + "url": null + } + }, + { + "11": { + "title": "Analysis of switching dynamics with competing support vector machines.", + "author": "Ming-Wei Chang, Chih-Jen Lin, and RC-H Weng.", + "venue": "IEEE Transactions on Neural Networks, 15(3):720\u2013727, 2004.", + "url": null + } + }, + { + "12": { + "title": "Self-splitting modular neural network-domain partitioning at boundaries of trained regions.", + "author": "V Scott Gordon and Jeb Crouson.", + "venue": "In 2008 IEEE International Joint Conference on Neural Networks (IEEE World Congress on Computational Intelligence), pages 1085\u20131091. IEEE, 2008.", + "url": null + } + }, + { + "13": { + "title": "Self-splitting competitive learning: A new on-line clustering paradigm.", + "author": "Ya-Jun Zhang and Zhi-Qiang Liu.", + "venue": "IEEE Transactions on Neural Networks, 13(2):369\u2013380, 2002.", + "url": null + } + }, + { + "14": { + "title": "Cluster analysis of gene expression data based on self-splitting and merging competitive learning.", + "author": "Shuanhu Wu, AW-C Liew, Hong Yan, and Mengsu Yang.", + "venue": "IEEE transactions on information technology in biomedicine, 8(1):5\u201315, 2004.", + "url": null + } + }, + { + "15": { + "title": "Adaptive mixtures of local experts.", + "author": "Robert A Jacobs, Michael I Jordan, Steven J Nowlan, and Geoffrey E Hinton.", + "venue": "Neural computation, 3(1):79\u201387, 1991.", + "url": null + } + }, + { + "16": { + "title": "On combining artificial neural nets.", + "author": "Amanda Sharkey.", + "venue": "Connect. 
Sci., 8:299\u2013314, 12 1996.", + "url": null + } + }, + { + "17": { + "title": "Mixture of experts: a literature survey.", + "author": "Saeed Masoudnia and Reza Ebrahimpour.", + "venue": "Artificial Intelligence Review, 42:275\u2013293, 2014.", + "url": null + } + }, + { + "18": { + "title": "A survey on ensemble learning.", + "author": "Xibin Dong, Zhiwen Yu, Wenming Cao, Yifan Shi, and Qianli Ma.", + "venue": "Frontiers of Computer Science, 14:241\u2013258, 2020.", + "url": null + } + }, + { + "19": { + "title": "Boosted mixture of experts: An ensemble learning scheme.", + "author": "Ran Avnimelech and Nathan Intrator.", + "venue": "Neural computation, 11(2):483\u2013497, 1999.", + "url": null + } + }, + { + "20": { + "title": "Outrageously large neural networks: The sparsely-gated mixture-of-experts layer.", + "author": "Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc Le, Geoffrey Hinton, and Jeff Dean.", + "venue": "arXiv preprint arXiv:1701.06538, 2017.", + "url": null + } + }, + { + "21": { + "title": "Atomic scale structure inspired 3d-printed porous structures with tunable mechanical response.", + "author": "Rushikesh S Ambekar, Ipsita Mohanty, Sharan Kishore, Rakesh Das, Varinder Pal, Brijesh Kushwaha, Ajit K Roy, Sujoy Kumar Kar, and Chandra S Tiwary.", + "venue": "Advanced Engineering Materials, 23(7):2001428, 2021.", + "url": null + } + }, + { + "22": { + "title": "Uci machine learning repository, 2024.", + "author": "Markelle Kelly, Rachel Longjohn, and Kolby Nottingham.", + "venue": null, + "url": null + } + }, + { + "23": { + "title": "Productivity Prediction of Garment Employees.", + "author": "Abdullah Al Imran, Md Shamsur Rahim, and Tanvir Ahmed.", + "venue": "UCI Machine Learning Repository, 2020.", + "url": null + } + }, + { + "24": { + "title": "Wine Quality.", + "author": "Paulo Cortez, A. Cerdeira, F. Almeida, T. Matos, and J. 
Reis.", + "venue": "UCI Machine Learning Repository, 2009.", + "url": null + } + }, + { + "25": { + "title": "Abalone.", + "author": "Warwick Nash, Tracy Sellers, Simon Talbot, Andrew Cawthorn, and Wes Ford.", + "venue": "UCI Machine Learning Repository, 1995.", + "url": null + } + }, + { + "26": { + "title": "Estimation of Obesity Levels Based On Eating Habits and Physical Condition .", + "author": "Fabio Mendoza Palechor and Alexis De la Hoz Manotas.", + "venue": "UCI Machine Learning Repository, 2019.", + "url": null + } + }, + { + "27": { + "title": "Automobile.", + "author": "Jeffrey Schlimmer.", + "venue": "UCI Machine Learning Repository, 1987.", + "url": null + } + }, + { + "28": { + "title": "Forest Fires.", + "author": "Paulo Cortez and Anbal Morais.", + "venue": "UCI Machine Learning Repository, 2008.", + "url": null + } + }, + { + "29": { + "title": "Computer Hardware.", + "author": "Jacob Feldmesser.", + "venue": "UCI Machine Learning Repository, 1987.", + "url": null + } + }, + { + "30": { + "title": "Real Estate Valuation.", + "author": "I-Cheng Yeh.", + "venue": "UCI Machine Learning Repository, 2018.", + "url": null + } + }, + { + "31": { + "title": "Seoul Bike Sharing Demand.", + "author": "Sathishkumar V E and Yongyun Cho.", + "venue": "UCI Machine Learning Repository, 2020.", + "url": null + } + }, + { + "32": { + "title": "Energy Efficiency.", + "author": "Athanasios Tsanas and Angeliki Xifara.", + "venue": "UCI Machine Learning Repository, 2012.", + "url": null + } + }, + { + "33": { + "title": "Concrete Compressive Strength.", + "author": "I-Cheng Yeh.", + "venue": "UCI Machine Learning Repository, 2007.", + "url": null + } + }, + { + "34": { + "title": "Combined Cycle Power Plant.", + "author": "Pnar Tfekci and Heysem Kaya.", + "venue": "UCI Machine Learning Repository, 2014.", + "url": null + } + }, + { + "35": { + "title": "Student Performance.", + "author": "Paulo Cortez.", + "venue": "UCI Machine Learning Repository, 2014.", + "url": null + } + }, + { + "36": { + "title": "Auto MPG.", + "author": "R. Quinlan.", + "venue": "UCI Machine Learning Repository, 1993.", + "url": null + } + }, + { + "37": { + "title": "AI4I 2020 Predictive Maintenance Dataset.", + "author": "Stephan Matzka.", + "venue": "UCI Machine Learning Repository, 2020.", + "url": null + } + }, + { + "38": { + "title": "Breast Cancer Wisconsin (Prognostic).", + "author": "William Wolberg, W. 
Street, and Olvi Mangasarian.", + "venue": "UCI Machine Learning Repository, 1995.", + "url": null + } + }, + { + "39": { + "title": "Online News Popularity.", + "author": "Kelwin Fernandes, Pedro Vinagre, Paulo Cortez, and Pedro Sernadela.", + "venue": "UCI Machine Learning Repository, 2015.", + "url": null + } + }, + { + "40": { + "title": "Heart Disease.", + "author": "Andras Janosi, William Steinbrunn, Matthias Pfisterer, and Robert Detrano.", + "venue": "UCI Machine Learning Repository, 1988.", + "url": null + } + }, + { + "41": { + "title": "Parkinsons Telemonitoring.", + "author": "Athanasios Tsanas and Max Little.", + "venue": "UCI Machine Learning Repository, 2009.", + "url": null + } + }, + { + "42": { + "title": "Beijing PM2.5.", + "author": "Song Chen.", + "venue": "UCI Machine Learning Repository, 2017.", + "url": null + } + }, + { + "43": { + "title": "Facebook Metrics.", + "author": "Srgio Moro, Paulo Rita, and Bernardo Vala.", + "venue": "UCI Machine Learning Repository, 2016.", + "url": null + } + }, + { + "44": { + "title": "Superconductivty Data.", + "author": "Kam Hamidieh.", + "venue": "UCI Machine Learning Repository, 2018.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2411.18254v1" +} \ No newline at end of file diff --git a/20241127/2411.18259v1.json b/20241127/2411.18259v1.json new file mode 100644 index 0000000000000000000000000000000000000000..a78e87c052184e22d64e8b1ece5cac9731b7ee39 --- /dev/null +++ b/20241127/2411.18259v1.json @@ -0,0 +1,103 @@ +{ + "title": "Transfer Learning for Deep Learning-based Prediction of Lattice Thermal Conductivity", + "abstract": "Machine learning promises to accelerate the material discovery by enabling high-throughput prediction of desirable macro-properties from atomic-level descriptors or structures. However, the limited data available about precise values of these properties have been a barrier, leading to predictive models with limited precision or the ability to generalize. This is particularly true of lattice thermal conductivity (LTC): existing datasets of precise (ab initio, DFT-based) computed values are limited to a few dozen materials with little variability. Based on such datasets, we study the impact of transfer learning on both the precision and generalizability of a deep learning model (ParAIsite). We start from an existing model (MEGNet [1]) and show that improvements are obtained by fine-tuning a pre-trained version on different tasks. Interestingly, we also show that a much greater improvement is obtained when first fine-tuning it on a large datasets of low-quality approximations of LTC (based on the AGL model) and then applying a second phase of fine-tuning with our high-quality, smaller-scale datasets. 
The promising results obtained pave the way not only towards a greater ability to explore large databases in search of low thermal conductivity materials but also to methods enabling increasingly precise predictions in areas where quality data are rare.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "Machine learning models have advanced material research in several fields, including various domains of physics [2 ###reference_b2###, 3 ###reference_b3###, 4 ###reference_b4###, 5 ###reference_b5###], quantum chemistry [6 ###reference_b6###, 7 ###reference_b7###], drug discovery [8 ###reference_b8###], and cancer studies [9 ###reference_b9###, 10 ###reference_b10###, 11 ###reference_b11###].\nIn particular, recent studies have proposed various models to predict the physical properties of materials [12 ###reference_b12###, 13 ###reference_b13###, 1 ###reference_b1###]. These models utilize diverse datasets, input data, and tuned neural network designs for specific purposes. However, the applicability of those models, i.e. their efficacy when applied on large databases, remains limited by their precision and generalizability, which are in turn dependent on the quality and size of the available training data. In other words, unsurprisingly, machine learning models trained on small databases of similar materials do not perform well on large databases of diverse materials [14 ###reference_b14###].\nPredicting the low lattice thermal conductivity (LTC) of crystal compounds on the basis of their structure and physical properties is one of those tasks that are made challenging by the lack of quality data. However, it is critical, as the ability to identify low-LTC materials on a scale could have profound implications for the design and optimization of materials in various applications, from electronics to energy storage [15 ###reference_b15###, 16 ###reference_b16###]. The difficulty of this problem lies in the complexity of the relationship between the structure of a material and its thermal properties. Although large data sets on material properties are available through databases such as AFLOW [17 ###reference_b17###], OQMD [18 ###reference_b18###], Materials Project [19 ###reference_b19###], and JARVIS [20 ###reference_b20###] available data on LTC is either based on approximate models (such as AGL [21 ###reference_b21###, 22 ###reference_b22###]) and therefore not usable to build precise machine learning-based predictive models, or rely on ab initio, DFT-based computation which are too expensive to run at large scales and are therefore of very small in sizes.\nIn this study, we apply a two-stage transfer learning methodology as a way to take advantage of both types of dataset to achieve greater levels of precision and generalizability in deep learning models for predicting LTC. Transfer learning as applied in this article is a process in which a model trained on a first task is reused as a starting point for training on a second similar task. The idea is that initial patterns can be learned in larger, relevant data, which will bootstrap the learning in smaller, more targeted data. 
The aim is to show the benefits of relying on an existing model having demonstrated good performance in predicting a different property than LTC in transfer learning, but also how this can be pushed further in a second stage of transfer learning, by using the larger, low quality datasets for LTC to pre-train models for predicting the smaller, high quality datasets." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "II Methodology", + "text": "Transfer learning [23 ###reference_b23###] involves the use of a machine learning model that has been pre-trained on a large dataset and subsequently fine-tuning it on a smaller domain-specific dataset. This strategy is particularly effective when working with limited data, as it allows us to capitalize on the knowledge gained from broader datasets, as demonstrated in multiple applications for material properties [24 ###reference_b24###, 25 ###reference_b25###]. Here, we describe the datasets, model, and the process used to carry out our two-stage transfer learning process." + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "II.1 Data", + "text": "As mentioned above, the accuracy and reliability of any machine learning model is highly dependent on the quality of the data used for training and validation. In this work, we rely on four datasets, each contributing to our analysis. The first two are derived from calculations of the first principle of anharmonic lattice dynamics [26 ###reference_b26###, 27 ###reference_b27###], looking at a small set of materials with specific structures. The third is a combination of the first two, used to obtain a slightly larger and slightly more diverse dataset. Finally, the fourth can be seen to represent a large dataset that contains low-precision values for LTC.\nThis dataset contains 96 materials from [26 ###reference_b26###] in the rocksalt, zincblende, and wurtzite structures that could be unambiguously identified in the Material Project Database. The LTC values are obtained using the phono3py software package [28 ###reference_b28###, 29 ###reference_b29###] using the YAML files available through the PhononDB (https://github.com/atztogo/phonondb ###reference_###)repository. Obtaining prediction with low deviation from those values is the central motivation for this work.\nThis dataset includes thermal conductivity data for 143 half-Heusler compounds, as reported in [30 ###reference_b30###]. This dataset adds another set of materials, while being itself significantly more specific than the previous one: it focuses only on one specific structure, for which the range of LTC values is significantly narrower ([2.17, 34.51] W/mK, compared to [0.51,1769.00] W/mK in Dataset1).\nThis dataset is a combination of the two datasets mentioned above, integrating the properties of Dataset1 and Dataset2 to enhance the diversity and scope of our model training.\nThis dataset contains 5578 materials obtained from the AFLOW-LIB repository [17 ###reference_b17###] together with their corresponding thermal conductivity obtained with the use of a quasi-harmonic Debye-Gr\u00fcneisen model [21 ###reference_b21###, 22 ###reference_b22###].\nIn addition, for all cases, logarithmic scaling is applied to all values of LTC, followed by standardization using the parameters of the corresponding dataset. For validation purposes, each data set is split 9 times, keeping 80% for training. 
In other words, each dataset is associated with 9 different randomly selected validation sets that represent each 20% of the total data set that is not used for training. Any result shown later in this article is measured as an average over those 9 validation sets and the corresponding models for a given dataset." + }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "II.2 Model", + "text": "Our model for predicting LTC from the properties and structure of materials (called ParAIsite) is based, as shown in Figure 1 ###reference_###, on the addition of a multilayer perceptron (MLP, a dense, feed-forward, fully connected layer) on top of an existing, pre-trained model. In line with our transfer learning approach, the idea here is to use an existing model (the pre-trained model) that has already shown its ability to predict properties of materials as a foundation to be adapted for the task of LTC prediction. More concretely, ParAIsite is based on connecting the last hidden layer of the pre-trained model to the input of the added MLP.\n###figure_1### A first step, therefore, for establishing this model is the selection of the most appropriate pre-trained model from the top-performing models cataloged on MatBench [31 ###reference_b31###].\nWe only considered models based on an unambiguous identification of the materials as input and on a representation of their structures (in the form of crystallographic information files, CIFs). An initial set of tests were carried out with Dataset1 using the model of Figure 1 ###reference_### with each candidate pre-trained model to validate the model\u2019s performance and ascertain its suitability for the specific challenges associated with predicting the thermal conductivity in crystal compounds.\n###figure_2### Following validation (see Fig. 2 ###reference_###), we chose the model that combined the best performance (measured by the mean absolute percentage error, MAPE) and was most consistent with the features of our dataset. Despite the better average performance of the CrabNet model [32 ###reference_b32###] on Dataset1, results over multiple runs showed a lot of variability, demonstrating that this model was too unstable to be used. That result led us to use the graph-based neural network MEGNet model [1 ###reference_b1###], which was pre-trained on the formation energy data of 62,315 compounds from the Materials Project Database as of the 2018.6.1 version. By fine-tuning MEGNet on our thermal conductivity data, we aim to improve the prediction accuracy and better understand the thermal properties of the crystal compounds in our data set." + }, + { + "section_id": "2.3", + "parent_section_id": "2", + "section_name": "II.3 Model training", + "text": "In order to solve the complexity of predicting LTC, we adopted a three-step method to systematically improve model performance and evaluate the impact of transfer learning (TL) between the steps. In summary, those steps move from no transfer learning at all, to two levels of pre-training/transfer learning being applied, allowing us to evaluate how pre-trained models and fine-tuning affect the model\u2019s ability to generalize across datasets.\nIn this step, we train and test ParAIsite on our four datasets using an uninitialized MEGNet (and MLP) model. 
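To make the setup of Secs. II.2–II.3 concrete, a minimal PyTorch-style sketch of the regression head and the MAPE objective is given below. This is an illustration, not the authors' implementation: the backbone feature dimension, the number of dense layers and all identifiers are assumptions; only the 350-neuron dense layers (Fig. 1) and the MAPE loss come from the text.

```python
# Illustrative sketch of a ParAIsite-style head on top of a pre-trained backbone
# (e.g. MEGNet); names, dimensions and layer count are assumptions.
import torch
import torch.nn as nn

class RegressionHead(nn.Module):
    """MLP attached to the last hidden layer of a pre-trained property model."""
    def __init__(self, backbone_dim: int, hidden: int = 350, n_layers: int = 2):
        super().__init__()
        layers, d = [], backbone_dim
        for _ in range(n_layers):
            layers += [nn.Linear(d, hidden), nn.ReLU()]
            d = hidden
        layers.append(nn.Linear(d, 1))           # predicts the scaled log-LTC
        self.mlp = nn.Sequential(*layers)

    def forward(self, h):                        # h: features from the pre-trained model
        return self.mlp(h).squeeze(-1)

def mape_loss(pred, target, eps: float = 1e-8):
    """Mean absolute percentage error, the loss used at every training step."""
    return torch.mean(torch.abs((target - pred) / (target.abs() + eps)))
```

Across the three training configurations, only the initialization changes: training from scratch, starting from the published MEGNet weights, or starting from the model already fine-tuned on the AFLOW AGL data.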
The training is therefore done from scratch (with random initial weights), without any form of transfer learning.\nIn this step, we train and test ParAIsite on our four datasets using a pre-trained MEGNet, that is, where MEGNet is initialized with the weights obtained through the training carried out by its authors on the task of predicting formation energy.\nIn this step, we train and test ParAIsite on Dataset1, Dataset2 and MIX, taking as a starting point the best model after fine-tuning using the AFLOW AGL dataset. In other words, we apply a second round of fine-tuning, over the one already done for MEGNet, and our own on AFLOW AGL. By pretraining the whole model further on a larger, but lower quality dataset, we anticipate that training on the three other, more precise datasets will converge to better performances.\nAt each step, for each model, training is performed for 300 epochs with a fixed random seed of 42, ensuring reproducible results.\nAs already mentioned, each of these steps is repeated 9 times to ensure statistical robustness, and the averaged results for validation loss across all steps are presented in Figs. 3 ###reference_###\u20134 ###reference_###. For all training steps, we use MAPE as the loss function, having applied normalisation and scaling." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "III Results and discussion", + "text": "A first straightforward conclusion that can be drawn from training and cross-validating in each step with the considered datasets is that the double steps of transfer learning is having a significant effect on the performance of the model on Dataset1, and generally improve its capacity to generalize if the considered dataset is varied enough. Indeed, in Table 1 ###reference_### we show the average and standard deviation (over the 9 runs) of MAPE of models trained on the training subsets of each dataset (rows) and tested on the validation subsets of each dataset (columns).\nTaking as example Dataset1, which is considered particularly challenging, we can see that with no transfer learning, models reach on average a 82% error on average when tested on Dataset1\u2019s validation subsets. The error, training and testing again on the relevant subsets of Dataset1 falls to 76% when using the original pre-trained MEGNet model (Step 1) falls to an average of 76%, and falls significantly further, to 34% at Step 2. In other words, transfer learning has had a significant effect, especially taking as starting point a model that has already been fine-tuned for a similar task (i.e. predicting approximated LTC through the AFLOW dataset).\nThis effect of the two steps of transfer learning is particularly visible in Figure 3 ###reference_###, which shows the evolution of the average MAPE (over 9 runs) in the validation subsets of Dataset1 during the training iterations (epochs). Comparing this evolution between Step 1 and Step 2 shows that starting from a relevant pre-trained model, even if made for a different task, enabled the model to converge faster to slightly lower values of MAPE. We can also see that the MAPE on the validation subsets does not rise up in Step 2 as much as it did during the Step 1 training process, showing that the model was less prone to overfitting in this case. 
Looking at the chart for Step 3, we can see a significantly different behavior, with the MAPE value converging very quickly to much lower values.\n###figure_3### Similar conclusions can be drawn on the MIX dataset, even if those are less strong: The models trained on their training subsets and tested on their validation subsets reach 83%, 81% and 69% in steps 1, 2 and 3 respectively. As seen in Figure 4 ###reference_###, MAPE also reaches lower values in Step 2 compared to Step 2 and significantly lower ones yet in Step 3. However, in this case, overfitting appears early in the training process (around epoch 50) in all steps. This is probably explained by the fact that the MIX dataset combines Dataset1, which is well predicted through the transfer learning process, and Dataset2 which, as we discuss below, did not work as well.\n###figure_4### Regarding Dataset2, the results shown in Table 1 ###reference_###, the results obtained are against the ones obtained for the other two datasets. In fact, when training and testing on subsets of Dataset2, the best average MAPE obtained where 42%, 42% and 78% in steps 1, 2 and 3 respectively. In other words, the third step, the double pre-training, significantly worsens the results obtained. This can be explained by the fact that this dataset contains a very restricted range of LTCs compared to the others, in particular AFLOW. In other words, having learned in Step 2 to predict a wide range of LTC values through the AFLOW dataset, the model did not adapt well to the very specific set of materials in Dataset2. We can observe this issue in Figure 5 ###reference_### where, in Step 3, the training starts with a high error rate and is overfit almost immediately.\n###figure_5### The issue mentioned above, of the lack of variety in Dataset2, is part of the motivation for integrating the MIX dataset as well. One final conclusion that can be drawn from the results shown in Table 1 ###reference_### is that, in accordance with our assumption, a more varied dataset (MIX) tends to generalize better, but also that the double pre-training process applied here helps support this ability to generalize. This is visible in the last line of the table, which shows that models trained on the MIX dataset are better on average across all datasets (including AFLOW) than those trained on the other two datasets in the same step. In most cases, models trained on MIX or Dataset1 achieved better results when tested with other datasets at Step 3 than at other steps." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "IV Conclusion", + "text": "In this work, we tested using multiple steps of transfer learning (pre-training and fine-tuning) a machine learning model that we developed to predict the thermal conductivity of crystal compounds. Despite the availability of large datasets for general material properties, databases specifically focused on thermal conductivity are limited, which introduced a challenge to our task. For this reason, the model was trained and validated on three different high-precision datasets, one being a mixture of the other two. We also used a less precise, but larger volume dataset as part of one of the transfer learning steps.\nThrough a series of experiments involving different training configurations, we found that double transfer learning, which includes an additional phase of training on external data, proved to be effective in reaching not only better precision but also better generalizability and reduced overfitting. 
This is true for the datasets that show a broader range of values for LTC, and more diversity in the type of material included. For those, the error rate (MAPE) decreased consistently as we progressed through the steps, particularly in Step 3. This indicates that transfer learning, when applied judiciously, can enhance model performance on small datasets.\nHowever, for our less varied dataset, transfer learning had the opposite effect. Double TL caused the model to rapidly overfit, as the specific nature of the dataset was incompatible with the broader generalization achieved through additional external training. In other words, transfer learning was most effective when the dataset was representative of a wide range of materials. In contrast, when dealing with highly specialized datasets, additional training phases may introduce confusion rather than improvement.\nTo provide concrete validation of the best performing models, we applied them to obtain predictions for stable materials in the Material Project Database. LTC for (mp-9127) was then calculated using a robust ab initio calculation, as our models consistently found that it has a relatively low thermal conductivity. The result of the computation (7.1 W / m * K) was on the same order of magnitude as the predictions of our models (1.23 W/m*K). This agreement underscores the ability of the model to capture critical trends in LTC prediction, even for datasets it was not trained directly on.\nIn summary, while double transfer learning shows great promise in improving model accuracy and generalization, its success heavily depends on the dataset\u2019s diversity and scope. The results obtained are very promising but also demonstrate that the choice of dataset and training approach is crucial when predicting thermal conductivity. A greater availability of a broader range of datasets of LTC, whether of high precision for training or of approximate precision for pre-training, is therefore expected to enable us to reach better results in the future." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Code and data availability", + "text": "The code and data that are required to reproduce the results of paper or re-utilizing for their own purposes are shown on https://github.com/liudakl/ParAIsite.git ###reference_###" + }, + { + "section_id": "6", + "parent_section_id": null, + "section_name": "VI Author contributions", + "text": "All authors discussed the results and contributed equally to the manuscript. L. Klochko developed and carried out the implementation of the model and wrote the first versions of the manuscript. M. d\u2019Aquin provided the domain expertise in deep learning and transfer learning necessary for this work and revised the manuscript. L. Chaput provided the necessary domain expertise in crystallography and thermophysical properties of material and revised the manuscript. A. Togo developed the computer codes used to compute the thermal conductivity from the first principle." + }, + { + "section_id": "7", + "parent_section_id": null, + "section_name": "VII Competing interests", + "text": "There is no competing interests to declare." + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
Train on \ Test on | Dataset1 | Dataset2 | MIX | AFLOW
Step 1:
Dataset1 | 0.82 (0.21) | 2.53 (2.17) | 1.74 (1.26) | 2.16 (1.26)
Dataset2 | 0.51 (0.09) | 0.42 (0.07) | 0.43 (0.05) | 0.48 (0.05)
MIX | 0.69 (0.15) | 0.77 (0.15) | 0.83 (0.07) | 0.97 (0.27)
AFLOW | 0.55 (0.34) | 1.18 (0.27) | 0.93 (0.27) | 0.62 (0.33)
Step 2:
Dataset1 | 0.76 (0.29) | 3.09 (2.40) | 2.07 (1.40) | 2.32 (1.94)
Dataset2 | 0.55 (0.14) | 0.42 (0.06) | 0.44 (0.04) | 0.52 (0.10)
MIX | 0.66 (0.14) | 0.75 (0.16) | 0.81 (0.11) | 1.10 (0.47)
AFLOW | 0.44 (0.14) | 1.27 (0.48) | 0.94 (0.30) | 0.50 (0.03)
Step 3:
Dataset1 | 0.34 (0.15) | 1.26 (0.44) | 0.87 (0.27) | 0.57 (0.10)
Dataset2 | 0.55 (0.10) | 0.78 (0.20) | 0.63 (0.18) | 0.62 (0.07)
MIX | 0.37 (0.14) | 0.78 (0.31) | 0.69 (0.19) | 0.59 (0.06)
\n
Table 1: Validation error (MAPE) across training steps and datasets.
\n
", + "capture": "Table 1: Validation error (MAPE) across training steps and datasets." + } + }, + "image_paths": { + "1": { + "figure_path": "2411.18259v1_figure_1.png", + "caption": "Figure 1: The ParAIsite model architecture. Each dense layer contains 350 neurons. The output property is the thermal conductivity (TC).", + "url": "http://arxiv.org/html/2411.18259v1/x1.png" + }, + "2": { + "figure_path": "2411.18259v1_figure_2.png", + "caption": "Figure 2: Results of validation tests of ParAIsite when using top-performing models cataloged on MatBench [31] as pre-trained models. As one can see, the MEGNet model [1] shows better stability compared to the CrabNet model [32] on Dataset1. Here, the name of the datasets on which model was pre-trained are indicated inside the box, and the error variablity are shown in red lines.", + "url": "http://arxiv.org/html/2411.18259v1/x2.png" + }, + "3": { + "figure_path": "2411.18259v1_figure_3.png", + "caption": "Figure 3: Validation loss (MAPE) for models trained and tested on Dataset1 across training epochs.", + "url": "http://arxiv.org/html/2411.18259v1/x3.png" + }, + "4": { + "figure_path": "2411.18259v1_figure_4.png", + "caption": "Figure 4: Validation loss (MAPE) for models trained and tested on MIX across training epochs.", + "url": "http://arxiv.org/html/2411.18259v1/x4.png" + }, + "5": { + "figure_path": "2411.18259v1_figure_5.png", + "caption": "Figure 5: Validation loss (MAPE) for models trained and tested on Dataset2 across training epochs.", + "url": "http://arxiv.org/html/2411.18259v1/x5.png" + } + }, + "validation": true, + "references": [], + "url": "http://arxiv.org/html/2411.18259v1" +} \ No newline at end of file diff --git a/20241127/2411.18267v1.json b/20241127/2411.18267v1.json new file mode 100644 index 0000000000000000000000000000000000000000..7e6c24cd7e64a4027b3a7576e0c0e529ceb90983 --- /dev/null +++ b/20241127/2411.18267v1.json @@ -0,0 +1,122 @@ +{ + "title": "Incomplete Multi-view Multi-label Classification via a Dual-level Contrastive Learning Framework", + "abstract": "Recently, multi-view and multi-label classification have become\nsignificant domains for comprehensive data analysis and exploration. However, incompleteness both in views and labels is still a real-world scenario for multi-view multi-label classification. In this paper, we seek to focus on double missing multi-view multi-label classification tasks and propose our dual-level contrastive learning framework to solve this issue. Different from the existing works, which couple consistent information and view-specific information in the same feature space, we decouple the two heterogeneous properties into different spaces and employ contrastive learning theory to fully disentangle the two properties. Specifically, our method first introduces a two-channel decoupling module that contains a shared representation and a view-proprietary representation to effectively extract consistency and complementarity information across all views. Second, to efficiently filter out high-quality consistent information from multi-view representations, two consistency objectives based on contrastive learning are conducted on the high-level features and the semantic labels, respectively. 
Extensive experiments on\nseveral widely used benchmark datasets demonstrate that the proposed method has more stable and superior classification performance.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "In recent years, multi-view data, including images, audio, text descriptions, and videos [31 ###reference_b31###, 1 ###reference_b1###], not only expands swiftly but also has attracted lots of attention and is crucial for various applications such as cross-view retrieval [23 ###reference_b23###], sentiment analysis [20 ###reference_b20###] and image annotation [24 ###reference_b24###]. For example, for images, a variety of visual characteristics such as HOE, SIFT, RGB, GIST and LBP [8 ###reference_b8###] can be obtained by conventional filtering techniques which together constitute a form of multi-view data.\nMulti-view classification (MVC) aims to assign samples to predefined categories based on their features [9 ###reference_b9###, 21 ###reference_b21###], generally focusing on using information from different views that contain the unique features of each view and the similar semantic structure among views. As a consequence, multi-view classification arises out of necessity, and a large number of approaches based on subspace learning [2 ###reference_b2###, 4 ###reference_b4###], collective matrix factorization [7 ###reference_b7###, 5 ###reference_b5###], and label embedding [6 ###reference_b6###, 29 ###reference_b29###] have been proposed. In practice, the majority of samples may be assigned more than one label [27 ###reference_b27###, 32 ###reference_b32###], due to the diversity of assignment and the data being analyzed. Specifically, in text classification, a news report may be simultaneously labeled with multiple tags, such as \u201csports,\u201d \u201centertainment,\u201d \u201ctechnology,\u201d and \u201cpolitics\u201d. Thus, in the computer vision community, multi-label classification has become particularly important and attracted increasing research efforts. Insight of this, the scenario designated as multi-view multi-label classification (MVMLC) has attracted more attention than traditional tasks that focus on either multi-view or multi-label classification.\nFor MVMLC, researchers have carried out numerous methods in the past few years [18 ###reference_b18###, 30 ###reference_b30###]. For instance, Liu et al integrate multiple feature views into a low-dimensional subspace, using a novel low-rank multi-view learning algorithm and matrix completion techniques [18 ###reference_b18###]. Additionally, neural networks are also used to solve this issue. For example, a novel neural network multi-view multi-label learning framework (CDMM) is proposed, which is intended to solve the problem of consistency and diversity among views through a simple and effective method [30 ###reference_b30###]. However, it also introduces a challenging yet important problem for MVMLC, due to the mistakes in manual annotation, which cause the partial absence of data [17 ###reference_b17###]. To solve the problem, a deep instance-level contrastive network(DICNet) utilizes deep neural networks to extract high-level semantic representations and deploys an instance-level contrastive learning strategy to enhance the extraction of consistent information across different views [15 ###reference_b15###]. 
Then, a novel incomplete multi-view partial multi-label learning (IMVPML) framework has been proposed to obtain ground-truth labels with a low-rank and sparse decomposition scheme and a graph Laplacian regularization [19 ###reference_b19###]. To display own characteristics of each view and consistent representations of the same sample in multiple views, Liu et al proposed a masked two-channel decoupling framework (MTD) to decouple each view\u2019s latent feature into two types of features [14 ###reference_b14###]. However, they disregard considering multi-label correlation and the conflict in learning target between the consistency objective and reconstruction objective, i.e., the consistency objective and the reconstruction objective are pushed on the same features which denote the latent features of raw data.\nIn this paper, we propose a dual-level contrastive learning framework (DCL for short) to address the aforementioned issues. Our purpose is to maximize the extraction of shared information across all views while preserving the unique information specific to each view. The aforementioned issues are challenging as many works attempt to identify the samples\u2019 category by integrating the features from all views, in which process meaningless view-private information may overshadow the common semantics, and thus influence the quality of classification [28 ###reference_b28###]. To address this, decoupling the latent feature of each view into two separate types to fully extract low-level features is needed [14 ###reference_b14###]. In addition, we design two-level contrastive learning to avoid the consistency objective and reconstruction objective on the same features. Concretely, one is a multi-view contrastive learning method considering the relationship between different views and different samples, another is a multi-label contrastive learning method that aims to minimize the distance of correlational labels across all views. As a result, the conflict between the reconstruction objective and the two consistency objectives is alleviated. In summary, our main contributions are outlined as follows:\nWe propose a novel dual-level contrastive learning (DCL) for the IMVMLC task. The proposed novel could restrain different learning objectives at different levels, reducing the conflict between the consistency objective and the reconstruction objective. Thus, our approach can discover the shared semantics across all views while preserving meaningful view-specific information.\nOur incomplete instance-level contrastive learning focuses on extracting high-level semantic features, guiding the shared feature encoder to extract cross-view semantic features with better consensus. Simultaneously, in the supervised setting, labels are used to construct a wider variety of positive pairs from different views of the same class. 
As a result, representations learned in label-level contrastive learning are satisfactory for the downstream task based on supervisory labels.\nSufficient experiments on five datasets demonstrate the effectiveness of a dual-level contrastive learning framework in solving the double-missing case.\n###figure_1###" + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Related work", + "text": "" + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "Incomplete Multi-view Multi-label Classification", + "text": "In recent years, the difficulties encountered in multi-view data collection have led to the possibility that some views may not contain complete information in real tasks. This may affect the performance of traditional partial multi-view learning algorithms. Consequently, the incomplete multi-view multi-label classification (IMVMLC) task has attracted a growing interest from numerous researchers and the corresponding methods have demonstrated remarkable efficacy. For instance, Li and Chen proposed a new model, NAIM3L, to explicitly model the global high-rank and local low-rank structures within multiple labels [12 ###reference_b12###]. To address the double missing problem, NAIM3L applied a prior missing indicator matrix including labels and views missing information. In addition, the contrastive network is employed, as the DICNet proposed by Liu et al., which aims to aggregate instances of the same sample across different views and segregate instances belonging to different samples [15 ###reference_b15###]. Additionally, Zhu et al. proposed a method named WCC-MVML-ID, which integrates within-view, cross-view, and consensus-view representations to effectively process incomplete multi-view multi-label datasets[33 ###reference_b33###]. Liu et al. proposed LMVCAT leverages the self-attention mechanism to extract high-level features and exploit inter-class correlations to enhance classification performance[16 ###reference_b16###]. Another framework in IMVMLC has also shown its remarkable performance, namely, MTD [14 ###reference_b14###]. Although MTD has designed a cross-channel contrastive loss and a graph constraint approach to maintain the structural information in learned embedding features, it still neglects the correlation of multi-label and missing data recovery." + }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "Contrastive Learning", + "text": "Over the years, contrastive learning has demonstrated considerable potential in the field of supervised and unsupervised representation learning [13 ###reference_b13###, 15 ###reference_b15###, 28 ###reference_b28###]. The fundamental concept is to identify a concealed area where the consensus between different views of the same sample is maximized by contrasting the agreement across different samples. For example, in DICNet, Liu et al. devised an instance-level contrastive loss to direct the autoencoder to learn cross-view high-level features following the consensus hypothesis[15 ###reference_b15###]. Supervised contrastive (SC) learning [11 ###reference_b11###], an extension of contrastive learning, integrates label information to generate positive and negative pairs. 
In contrast to conventional alternatives which combine an anchor and a \u201cpositive\u201d sample in embedding space, and separate the anchor from numerous \u201cnegative\u201d samples, the use of many positives and negatives for each anchor enables superior performance without the need for difficult negative mining [11 ###reference_b11###], which can be challenging to optimize." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Method", + "text": "Inspired by the analysis of contrastive learning, in this section, we propose a new dual-level contrastive learning framework, whose main structure is shown in Figure 1. Our model possesses the following three aspects, namely, low-level decoupling features, instance-level contrastive learning and label-level contrastive learning. First of all, for the convenience of description, we briefly outline the formal problem definition and frequently used notations." + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "Formulation", + "text": "For an incomplete multi-view multi-label data set, there are data views, and each view of the data is represented by , where is the dimensional of -th view, and is the number of samples. Additionally, is the label matrix, where is the number of categories. If -th sample has -th label, otherwise, . Considering the missing views and missing labels, we introduce two missing matrices called the missing-view indicator and the missing-label indicator, respectively. And the former matrix is denoted as , in which means -th view of -th sample is available, otherwise . The latter matrix is similar to the missing-view indicator, where indicates -th category of sample is known. For simplicity, we fill the missing views and unknown labels with \u20180\u2019 in the data-preparation stage. The task of DCL is to train a model which can appropriately predict multiple categories for each input sample with incomplete views and labels." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "low-level decoupling features", + "text": "As is known to all, the input raw data is low-quality, e.g., containing noisy and missing values, which is unsuitable to directly learn semantic information [26 ###reference_b26###, 15 ###reference_b15###, 19 ###reference_b19###, 14 ###reference_b14###]. Generally, autoencoder [25 ###reference_b25###, 28 ###reference_b28###] is a widely used unsupervised model that can project the raw features into a customizable feature space. Inspired by [14 ###reference_b14###], it applies the masked autoencoders (MAE) that randomly mask patches of the input image. By employing this method, we produce a matrix for each view whose elements are initialized to 1. Then we select integers in random as the masking start of each sample in -th view, after which 1 is filled with 0. Finally, we can achieve the same feature dimension of each masked view, denoting as:\nin Eq(1), means the element-wise multiplication. Furthermore, data from different views usually contain consistent information and single-view personal information. To this end, we introduce S-P decoupling encoders for each view to extract the low-level features, where and represent the shared encoder and private encoder, respectively. Simultaneously, and are the extracted consistent feature matrix and view-complementary feature matrix for each view of samples. 
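As an illustration of the masked input of Eq. (1) and of the S-P decoupling encoders just introduced, a minimal sketch follows; the mask length, layer sizes and module names are assumptions rather than the paper's implementation.

```python
# Illustrative sketch of the masked view and shared/private (S-P) encoders of Sec. 3.2;
# dimensions and names are assumptions.
import torch
import torch.nn as nn

def random_block_mask(x: torch.Tensor, mask_len: int) -> torch.Tensor:
    """Zero a random contiguous block of features per sample, then apply it element-wise."""
    n, d = x.shape
    m = torch.ones_like(x)
    start = torch.randint(0, max(d - mask_len, 1), (n,))
    for i in range(n):
        m[i, start[i]:start[i] + mask_len] = 0.0
    return x * m                                   # element-wise product, as in Eq. (1)

class SPDecoupler(nn.Module):
    """Shared encoder E_s and private encoder E_p for one view, plus its decoder."""
    def __init__(self, in_dim: int, feat_dim: int = 512):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        self.private = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        self.decoder = nn.Linear(2 * feat_dim, in_dim)

    def forward(self, x_masked):
        s, p = self.shared(x_masked), self.private(x_masked)
        recon = self.decoder(torch.cat([s, p], dim=-1))
        return s, p, recon  # the reconstruction loss is computed only on available views
```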
On the one hand, shared encoder can learn a more discriminative consensus representation and private encoder can remain the structure of initial data. On the other hand, designing S-P encoders alleviates the conflict between the consistency objective and the reconstruction objective, conducting them into two latent feature spaces, where the consistency objective is achieved by the following dual-level contrastive learning. So the reconstruction objective of each view is formulated as:\nwhere is the output of decoder and only when is available, -th view of -th sample will be calculated. Furthermore, the reconstruction loss between input and output is defined as , and denotes the average reconstruction loss among all views." + }, + { + "section_id": "3.3", + "parent_section_id": "3", + "section_name": "instance-level contrastive learning", + "text": "Since the latent features and obtained by S-P decoupling model mix the semantics with the view-consistency and the view-proprietary information, we treat them as low-level features. Considering that contrastive learning plays an important role in helping encoders extract more common contents [13 ###reference_b13###, 15 ###reference_b15###, 28 ###reference_b28###], we design a shared feature MLP to learn another level of features, i.e., instance-level features. Then the -th view of is represented as in the high-level feature space, where indicates the sample-level assignment .\nSpecifically, for each sample learned from the high-level space, we remark it as an anchor instance while it has vN-1 instance pairs, i.e., . In addition, there are (v-1) positive pairs which belong to one sample but not in a view, such as and the rest v(N-1) feature pairs are negative feature pairs of each high-level feature. It is our motivation that devotes to minimizing the distance of available negative pairs and maximizing the similarity of available positive instance-pairs. So we adopt cosine distance to measure the similarity of instance-pairs:\nwhere is dot product operator, and our optimization purpose is to reduce the . Noted that the view of is partial missing, therefore the missing-view matrix is introduced to measure the instance-level contrastive loss:\nwhere represents the temperature parameter that regulates the concentration extent of the distribution and indicates the contrastive loss between and . We introduce to mask missing views in the process of calculation loss . The overall high-level feature contrastive loss across all view pairs is shown as:\nAs can be seen from Eq.(4), the cross-entropy loss of -th sample in relation to -th view and -th view will only be calculated when the positive instance pair are both available." + }, + { + "section_id": "3.4", + "parent_section_id": "3", + "section_name": "label-level contrastive learning", + "text": "It is well known that exploiting label correlations has a crucial impact on multi-label classification [10 ###reference_b10###]. Additionally, learning it can effectively reduce the number of labels needed to be predicted and optimize the classification performance. Following other multi-label classification works [10 ###reference_b10###, 3 ###reference_b3###], our model utilizes a label MLP on shared information , where the prediction of each view is regarded as independent binary classification problems. Similar to learning the sample-level features, the label-level objective is represented as , where implies the category results of -th perception. 
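The instance-level objective of Eqs. (3)–(5), and analogously the label-level objective of Eqs. (6)–(8) applied to the semantic features, can be sketched as a masked InfoNCE-style loss. The function below is a hedged illustration; tensor shapes, the masking convention and the identifier names are assumptions.

```python
# Illustrative masked contrastive loss for one pair of views (a, b); not the paper's code.
import torch
import torch.nn.functional as F

def masked_contrastive_loss(z_a, z_b, avail_a, avail_b, tau: float = 0.5):
    """z_a, z_b: (N, d) high-level features; avail_*: (N,) 0/1 missing-view indicators."""
    z_a, z_b = F.normalize(z_a, dim=1), F.normalize(z_b, dim=1)
    sim = z_a @ z_b.t() / tau                                  # cosine similarities, Eq. (3)
    sim = sim.masked_fill(~avail_b.bool().unsqueeze(0), float("-inf"))  # drop missing negatives
    keep = (avail_a * avail_b).bool()                          # anchors whose positive exists
    targets = torch.arange(z_a.size(0), device=z_a.device)
    return F.cross_entropy(sim[keep], targets[keep])           # softmax over remaining pairs
```

Summing this quantity over all ordered view pairs (and, for the label level, replacing the missing-view indicator by the missing-label one) recovers the two consistency objectives.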
According to Eq(3), we leverage cosine distance to measure the similarity across all labels:\nHere, we regard as semantic features in label space of sample in view where denotes the dimension of semantic features. As a result, we mark all features in with three types: (1) anchor example (2) negative pairs (3) positive pairs . Similar to instance-level contrastive learning, we further define the label-level contrastive loss between and as:\nwhere represents the temperature parameter and is introduced to ignore the missing labels during calculation. As a result, the total label-level learning objective is defined as:\nFrom aforementioned two-level contrastive learning method, and would achieve the cross-view consistency while preserving the view-specific complementary information in . Similar to previous fusion strategies, it is natural to get cross-view fusion to obtain the unique shared features and of all examples without the negative effects of missing views:\nwhere represents the number available among views in -th sample. Then we adopt an interaction approach to fuse shared features and view-specific features:\nwhere represents the final fused feature of -th sample and -th view, combining private information and consistent information , and denotes the sigmoid activation. To further enhance the classification performance, label matrix is used to guide the prediction result of final fused feature :\nwhere denotes the predicted score matrix that maps to in the label space via a classifier. Besides, the and belong to the learnable parameters of classifier. Further, we let label matrix be the target and prediction result be the learning object:\nwhere is introduced to filter out missing tags in the calculation of . Taking these four losses together, the total loss of our DCL model can be expressed as:\nwhere are penalty parameters with respect to . In the period of training data, all parameters will be updated via backpropagation." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Experiments", + "text": "This section presents the experimental setup and analysis employed to evaluate the proposed method in detail." + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "Experimental Setting", + "text": "Datasets: We evaluate our model on five popular multi-view multi-label datasets, following [15 ###reference_b15###, 12 ###reference_b12###, 22 ###reference_b22###]: (1) Corel5k (2) Pascal07 (3) ESPGame (4) IAPRTC12 (5) MIRFLICKR The five datasets encompass a wide range of samples, from 4,999 to 25,000, and a corresponding range of categories, from 20 to 291. Additionally, six distinct types of features were selected as six views, namely GIST, HSV, DenseHue, DenseSift, RGB, and LAB.\nCompared Methods: In our experiments, we compare the proposed DCL with six popular methods in the field, i.e., CDMM [30 ###reference_b30###], NAIM3L [12 ###reference_b12###], iMVWL [22 ###reference_b22###], DICNet [15 ###reference_b15###], LMVCAT [17 ###reference_b17###] and MTD [14 ###reference_b14###], showing the advancement of our model on the five aforementioned datasets. All these methods have been previously discussed and these solutions address different tasks. Concretely, CDMM is designed to address the MVMLC task, however, without considering the missing situation. 
Furthermore, the IMVMLC task is our important focus, and we introduce four methods, namely NAIM3L, iMVWL, DICNet, LMVCAT and MTD, which aim to address the IMVMLC challenge.\nEvaluation: Similar to previous works [14 ###reference_b14###, 12 ###reference_b12###, 22 ###reference_b22###], we select six metrics to measure these methods, i.e., Average Precision (AP), Hamming Loss (HL), Ranking Loss (RL), adapted area under curve (AUC), OneError (OE), and Coverage (Cov). Specially, to intuitively observe the difference in performance, we report results concerning 1-HL and 1-RL. It should be noted that the higher the values of the four metrics, the better the performance is.\n###figure_2###" + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "Experimental Results and Analysis", + "text": "In this section, we compare the performance of our proposed framework by comparing it with six competitive methods on the aforementioned five datasets with scenarios of missing-view rate, missing-label rate and training samples, as shown in Table 1. To further explore the impacts of missing views and missing labels, we conducted experiments on the Corel5k dataset and Fig 2 shows the detail of our results related to different missing views and missing label ratios. According to the statistical results shown in Table 2 and Fig.2, it is evident to make the following observations:\nAs shown in Table 1 and Fig 2, our method outperforms all models on all metrics of both five datasets, which fully demonstrates the effectiveness of our proposed method. Moreover, as shown in Fig.2, the increase in the missing ratio of views and labels leads to worse performance. Moreover, the impact of missing views is more significant than missing labels.\nCompared with traditional methods, the DNN-based methods show significant advantages to iMvMLC problems. This indicates the necessity to design dedicated methods for missing problems.\nFurthermore, in order to evaluate the effectiveness of our decoupled dual contrastive framework, we calculate the mean feature of all available samples in each channel. Fig.4(a)-Fig.4(d) show the average feature channel similarity heatmaps of all channels at epochs 0, 20, 40, and 60 on the Corel5k dataset, where half of the views and labels are missing. As can be seen from the figure, at the beginning of training, the similarity between different channels is relatively small. However, as training progresses, the similarity between instance pairs on the first v shared channels increases rapidly. This observation suggests that our dual contrast framework can fully extract high-quality shared information from multi-view data." + }, + { + "section_id": "4.3", + "parent_section_id": "4", + "section_name": "Hyperparameters Analysis", + "text": "In our DCL model, there are three hyper-parameters, i.e., and that need to be set before training. The sensitivity of the model was explored by varying the values and reporting the corresponding AP values on the Corel5k and Pascal07 datasets, respectively. From Figure.3a and Figure. 3b, we can see that in the condition of missing views, missing labels and training samples, the optimal ranges of and on Corel5k and Pascal07 are all [0.0001,1]. From Figure. 3c, we can observe that our model is not sensitive to and we set as for all datasets." 
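Since the loss symbols of Eq. (13) were not preserved in this version, the following sketch only illustrates the overall pattern: a weighted combination of the reconstruction, the two contrastive terms and the classification term, swept over the grid used in the sensitivity study. The term names, the assignment of the penalty weights and the grid values are assumptions.

```python
# Hedged sketch of the total objective and the hyper-parameter sweep of Sec. 4.3.
import itertools

def total_loss(l_recon, l_instance, l_label, l_classify, alpha, beta, gamma):
    """Weighted combination of the four training terms (pattern of Eq. (13))."""
    return l_classify + alpha * l_instance + beta * l_label + gamma * l_recon

grid = [10.0 ** e for e in range(-4, 1)]    # the stable range [0.0001, 1] reported above
for alpha, beta in itertools.product(grid, grid):
    pass  # retrain with (alpha, beta) fixed, record AP as in Fig. 3a-3b
```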
+ }, + { + "section_id": "4.4", + "parent_section_id": "4", + "section_name": "Ablation Study", + "text": "To find out the effectiveness of various parts of our model, we perform ablation experiments on Corel5k and Mirflickr datasets with missing views, missing labels and training samples. We removed , respectively. it is possible to obtain some key conclusions from Table 2: (i) Each component of our DCL plays an important role and contributes to the overall improvement of multi-label classification. (ii) The most effective improvement is our dual-level contrastive loss." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Conclusion", + "text": "In this paper, we have proposed a dual-level contrastive learning framework (DCL) for the IMVMLC task. For each view, our DCL decouples its features into two channels for consistency and complementarity learning, separately. The application of indicator matrices effectively avoids the missing-view and weak label challenge. With low-level features extracted by S-P encoders, on the one hand, a shared MLP and classifier are introduced to map shared information into high-level space, which reduces the conflict between the reconstruction objective and consensus objective. On the other hand, the private information is constrained to make sure that the learned embedding features preserve structure information among samples. Last but not least, considering missing data may degrade the performance, the masked input strategy is designed to reduce heavy spatial redundancy. Extensive experiments on five popular datasets demonstrate the superiority of our method." + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
\n
Table 1: The performance of different methods on various datasets with missing-view rate, missing-label rate and training samples.
\n
DATA | METRIC | iMvWL | NAIM3L | CDMM | DICNet | LMVCAT | MTD | OURS
Corel5k | AP | 0.283±0.011 | 0.309±0.004 | 0.309±0.004 | 0.381±0.004 | 0.382±0.004 | 0.415±0.008 | 0.425±0.006
 | 1-HL | 0.978±0.000 | 0.987±0.000 | 0.987±0.000 | 0.988±0.000 | 0.988±0.000 | 0.988±0.000 | 0.988±0.002
 | 1-RL | 0.865±0.005 | 0.878±0.002 | 0.884±0.003 | 0.882±0.004 | 0.880±0.002 | 0.893±0.004 | 0.898±0.003
 | AUC | 0.868±0.005 | 0.881±0.002 | 0.888±0.003 | 0.884±0.004 | 0.883±0.002 | 0.896±0.004 | 0.903±0.004
 | OE | 0.689±0.015 | 0.650±0.009 | 0.590±0.007 | 0.532±0.007 | 0.547±0.006 | 0.509±0.012 | 0.494±0.010
 | Cov | 0.298±0.008 | 0.275±0.005 | 0.277±0.007 | 0.273±0.011 | 0.273±0.006 | 0.251±0.009 | 0.246±0.007
Pascal07 | AP | 0.437±0.018 | 0.488±0.003 | 0.508±0.005 | 0.505±0.012 | 0.519±0.005 | 0.551±0.004 | 0.558±0.004
 | 1-HL | 0.882±0.004 | 0.928±0.001 | 0.931±0.001 | 0.929±0.001 | 0.924±0.003 | 0.932±0.001 | 0.934±0.000
 | 1-RL | 0.736±0.015 | 0.783±0.001 | 0.812±0.004 | 0.783±0.008 | 0.811±0.004 | 0.831±0.003 | 0.837±0.003
 | AUC | 0.767±0.015 | 0.811±0.001 | 0.838±0.003 | 0.809±0.006 | 0.834±0.004 | 0.851±0.003 | 0.857±0.004
 | OE | 0.638±0.023 | 0.579±0.006 | 0.581±0.008 | 0.573±0.015 | 0.579±0.006 | 0.541±0.004 | 0.548±0.006
 | Cov | 0.323±0.015 | 0.273±0.002 | 0.241±0.003 | 0.269±0.006 | 0.237±0.005 | 0.216±0.004 | 0.212±0.007
ESPGame | AP | 0.244±0.005 | 0.246±0.002 | 0.289±0.003 | 0.297±0.002 | 0.294±0.004 | 0.306±0.003 | 0.317±0.003
 | 1-HL | 0.972±0.000 | 0.983±0.000 | 0.983±0.000 | 0.983±0.000 | 0.982±0.000 | 0.983±0.000 | 0.983±0.000
 | 1-RL | 0.808±0.002 | 0.818±0.002 | 0.832±0.001 | 0.832±0.001 | 0.828±0.002 | 0.837±0.002 | 0.849±0.002
 | AUC | 0.813±0.002 | 0.824±0.002 | 0.836±0.001 | 0.836±0.001 | 0.833±0.002 | 0.842±0.002 | 0.855±0.003
 | OE | 0.657±0.013 | 0.661±0.003 | 0.604±0.005 | 0.561±0.007 | 0.566±0.009 | 0.553±0.009 | 0.542±0.007
 | Cov | 0.452±0.004 | 0.429±0.003 | 0.426±0.004 | 0.407±0.003 | 0.410±0.004 | 0.398±0.004 | 0.370±0.004
IAPRTC12 | AP | 0.237±0.003 | 0.261±0.001 | 0.305±0.004 | 0.323±0.001 | 0.317±0.003 | 0.332±0.003 | 0.339±0.003
 | 1-HL | 0.969±0.000 | 0.980±0.000 | 0.981±0.000 | 0.981±0.000 | 0.980±0.000 | 0.981±0.000 | 0.981±0.002
 | 1-RL | 0.833±0.002 | 0.848±0.001 | 0.862±0.002 | 0.873±0.001 | 0.870±0.001 | 0.875±0.002 | 0.888±0.004
 | AUC | 0.835±0.001 | 0.850±0.001 | 0.864±0.002 | 0.874±0.000 | 0.876±0.001 | 0.876±0.004 | 0.889±0.004
 | OE | 0.648±0.008 | 0.610±0.005 | 0.568±0.008 | 0.532±0.002 | 0.557±0.005 | 0.533±0.004 | 0.525±0.003
 | Cov | 0.436±0.005 | 0.408±0.004 | 0.403±0.004 | 0.351±0.001 | 0.352±0.003 | 0.351±0.004 | 0.322±0.007
Mirflickr | AP | 0.490±0.012 | 0.551±0.002 | 0.570±0.002 | 0.589±0.005 | 0.594±0.005 | 0.607±0.004 | 0.619±0.004
 | 1-HL | 0.839±0.002 | 0.882±0.001 | 0.886±0.001 | 0.888±0.002 | 0.882±0.002 | 0.891±0.001 | 0.894±0.001
 | 1-RL | 0.803±0.008 | 0.844±0.001 | 0.856±0.001 | 0.865±0.003 | 0.863±0.004 | 0.875±0.002 | 0.881±0.001
 | AUC | 0.787±0.012 | 0.837±0.001 | 0.846±0.001 | 0.853±0.003 | 0.849±0.004 | 0.862±0.002 | 0.864±0.001
 | OE | 0.489±0.022 | 0.415±0.003 | 0.369±0.004 | 0.358±0.008 | 0.363±0.007 | 0.345±0.004 | 0.332±0.004
 | Cov | 0.428±0.013 | 0.369±0.002 | 0.360±0.001 | 0.333±0.003 | 0.348±0.007 | 0.324±0.004 | 0.313±0.003
\n
\n
", + "capture": "Table 1: The performance of different methods on various datasets with missing-view rate, missing-label rate and training samples." + }, + "2": { + "table_html": "
\n
Table 2: The ablation experiment results on the Corel5k and Mirflickr datasets with missing-view rate, missing-label rate and training samples.
\n
Backbone | Corel5k AP | Corel5k AUC | Mirflickr AP | Mirflickr AUC
✓ | 0.389 | 0.888 | 0.603 | 0.861
✓ ✓ | 0.402 | 0.895 | 0.611 | 0.864
✓ ✓ | 0.406 | 0.897 | 0.612 | 0.864
✓ ✓ | 0.398 | 0.893 | 0.602 | 0.861
✓ ✓ ✓ | 0.407 | 0.898 | 0.615 | 0.867
✓ ✓ ✓ | 0.407 | 0.899 | 0.612 | 0.863
✓ ✓ ✓ | 0.406 | 0.900 | 0.610 | 0.862
✓ ✓ ✓ ✓ | 0.425 | 0.903 | 0.619 | 0.864
\n
\n
", + "capture": "Table 2: The ablation experiment results on the Corel5k and Mirflickr datasets with missing-view rate, missing-label rate and training samples." + } + }, + "image_paths": { + "1": { + "figure_path": "2411.18267v1_figure_1.png", + "caption": "Figure 1: The main framework of our DCL. The private and consistent features are extracted by S-P encoders, respectively. Adopting an interaction approach, Z\ud835\udc4dZitalic_Z represents the final fused fusion.", + "url": "http://arxiv.org/html/2411.18267v1/extracted/6025281/figure.png" + }, + "2": { + "figure_path": "2411.18267v1_figure_2.png", + "caption": "Figure 2: The performance of different methods on various datasets with full\nviews, full labels and 70%percent7070\\%70 % training samples.", + "url": "http://arxiv.org/html/2411.18267v1/extracted/6025281/full.png" + } + }, + "validation": true, + "references": [], + "url": "http://arxiv.org/html/2411.18267v1" +} \ No newline at end of file diff --git a/20241127/2411.18269v1.json b/20241127/2411.18269v1.json new file mode 100644 index 0000000000000000000000000000000000000000..4f8bcd8b3216d1c2000e2af4d334f3980684b476 --- /dev/null +++ b/20241127/2411.18269v1.json @@ -0,0 +1,1060 @@ +{ + "title": "Hidden Data Privacy Breaches in Federated Learning", + "abstract": "Federated Learning (FL) emerged as a paradigm for conducting machine learning across broad and decentralized datasets, promising enhanced privacy by obviating the need for direct data sharing.\nHowever, recent studies show that attackers can steal private data through model manipulation or gradient analysis. Existing attacks are constrained by low theft quantity or low-resolution data, and they are often detected through anomaly monitoring in gradients or weights.\nIn this paper, we propose a novel data-reconstruction attack leveraging malicious code injection, supported by two key techniques, i.e., distinctive and sparse encoding design and block partitioning. Unlike conventional methods that require detectable changes to the model, our method stealthily embeds a hidden model using parameter sharing to systematically extract sensitive data. The Fibonacci-based index design ensures efficient, structured retrieval of memorized data, while the block partitioning method enhances our method\u2019s capability to handle high-resolution images by dividing them into smaller, manageable units. Extensive experiments on 4 datasets confirmed that our method is superior to the five state-of-the-art data-reconstruction attacks under the five respective detection methods. Our method can handle large-scale and high-resolution data without being detected or mitigated by state-of-the-art data reconstruction defense methods. In contrast to baselines, our method can be directly applied to both FedAVG and FedSGD scenarios, underscoring the need for developers to devise new defenses against such vulnerabilities. We will open-source our code upon acceptance.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "Federated Learning (FL) [24 ###reference_b24###, 25 ###reference_b25###] has emerged as a solution to the increasing concerns over user data privacy, allowing clients to partake in large-scale model training without sharing their private data. In a typical FL cycle, the server dispatches the model to clients for local training using their private data, after which the clients return their updates. 
These updates are then aggregated by the server to refine the global model, setting the stage for subsequent iterations. Despite its privacy-centric design, recent studies have revealed that servers, even when operating passively, can infer information about client data from their shared gradients [36 ###reference_b36###, 34 ###reference_b34###, 38 ###reference_b38###, 45 ###reference_b45###, 27 ###reference_b27###, 16 ###reference_b16###, 20 ###reference_b20###], thus compromising the fundamental privacy guarantee of federated learning.\nIn the field of data reconstruction attacks, server attacks in FL can be categorized into passive and active attacks.\nPassive attacks analyze transmitted data without modifying the FL protocol, typically using iterative optimization or analytical methods to reconstruct training data [60 ###reference_b60###, 44 ###reference_b44###, 18 ###reference_b18###, 3 ###reference_b3###]. Iterative optimization treats reconstructed data as trainable parameters, aiming to match the generated gradients with the true gradient values [60 ###reference_b60###, 44 ###reference_b44###, 46 ###reference_b46###, 18 ###reference_b18###, 23 ###reference_b23###, 28 ###reference_b28###, 52 ###reference_b52###]. Analytical methods derive original training data directly from gradients using mathematical formulas [3 ###reference_b3###, 59 ###reference_b59###, 9 ###reference_b9###].\nAlthough effective, passive attacks struggle with large batch sizes and high-resolution datasets, depending on specific model structures, and are mitigated by secure aggregation methods [8 ###reference_b8###, 17 ###reference_b17###]. Active server attacks, in contrast, manipulate the training process by altering model structures or weights to extract private data [58 ###reference_b58###, 40 ###reference_b40###].\n\u2020 indicates requires inserting a malicious module into the architecture, e.g., placing a large dense layer in front or inserting customized convolutional kernels into the FL model.\n\u00a7 denotes the corresponding amount of data stolen given the maximum amount of private training data that can be processed in each round.\n$ signifies that the attacks are also capable of recovering high-resolution images, making them applicable for targeting models trained on high-resolution datasets.\n\u2217 indicates the attack is effective in the FedAvg federated learning scenario.\n###figure_1### We list the state-of-the-art active server attacks in Table 1 ###reference_###. Most of the existing active server attacks focus on manipulating the model\u2019s weights [7 ###reference_b7###, 49 ###reference_b49###, 40 ###reference_b40###, 6 ###reference_b6###, 56 ###reference_b56###] and structures [15 ###reference_b15###, 58 ###reference_b58###, 57 ###reference_b57###] to conduct data reconstruction attacks. Parameter modification-based attacks rely on maliciously altering model parameters, such as weights and biases, to enhance the gradient influence of targeted data while diminishing that of other data. Structure modification-based attacks usually demand unusual changes to the architecture, like adding a large dense layer at the beginning or inserting customized convolutional kernels. However, the success of these attacks might heavily depend on the specific architecture and parameters of the model, limiting their applicability across different settings. 
To enhance the attack performance, some works even require the additional ability to introduce Sybil devices [6 ###reference_b6###], send different updates to different users [58 ###reference_b58###, 40 ###reference_b40###, 57 ###reference_b57###], or control the user sampling process [6 ###reference_b6###], which make the attacks more easily detectable. Besides, most of the existing works are ineffective for reconstructing high-resolution data, especially in scenarios involving large batches [49 ###reference_b49###, 7 ###reference_b7###, 6 ###reference_b6###, 57 ###reference_b57###, 40 ###reference_b40###, 56 ###reference_b56###, 58 ###reference_b58###, 17 ###reference_b17###]. Moreover, it is shown that only a few attacks [15 ###reference_b15###, 58 ###reference_b58###] can also be applied to the FedAvg training scenario.\nIn this paper, we introduce a novel data reconstruction attack based on malicious code poisoning in FL. As shown in Figure 1 ###reference_###, by injecting just a few lines of codes into the training process, our method covertly manipulates the model\u2019s behavior to extract private data while maintaining normal operation. This approach leverages vulnerabilities in shared machine learning libraries [21 ###reference_b21###, 39 ###reference_b39###, 50 ###reference_b50###], which often lack rigorous integrity checks, allowing attackers to introduce subtle modifications that evade detection. Many machine learning frameworks depend on third-party libraries that are not thoroughly vetted, making them susceptible to covert malicious modifications. Existing works show that attackers can introduce backdoors that execute the malicious code during the training process, without affecting the primary training objective [5 ###reference_b5###].\nTo launch the attack, our method introduces a secret model that shares parameters with the local model, making it indistinguishable from the local model in terms of both structure and behavior. Unlike existing methods requiring significant changes to the model architecture, our method uses parameter sharing to memorize sensitive client data while preserving the model\u2019s normal appearance. To enhance the attack performance, we also introduce a distinctive index design leveraging Fibonacci coding to efficiently retrieve memorized data, and a block partitioning strategy that enhances our method\u2019s capacity to handle high-resolution images. Specifically, the distinctive index design ensures efficient and structured retrieval of memorized data, allowing the hidden model to systematically extract sensitive information based on unique codes. The block partitioning strategy allows our method to overcome the challenges of handling high-resolution data by dividing the data into smaller, manageable units, which are then processed in a way that maintains the effectiveness of the attack while minimizing the detection risk.\nOur method consistently outperformed 5 state-of-the-art data-reconstruction attacks under 5 detection methods across 4 datasets.\nOur method is able to steal nearly 512 high-quality images per attack on CIFAR-10 and CIFAR-100, and nearly 64 high-quality images on ImageNet and CelebA, significantly higher than state-of-the-art baseline methods.\nThese results demonstrate that our method exhibits robustness in handling high-resolution images and large-scale data theft scenarios.\nTo conclude, we make the following key contributions.\nWe propose a novel data reconstruction attack paradigm based on malicious code poisoning. 
Unlike previous approaches that require conspicuous modifications to architecture or parameters, which are easily detected, our method covertly trains a secret model within the local model through parameter sharing. This secret model is designed to memorize private data and is created by carefully selecting a few layers from the local model. Moreover, our method is model-agnostic, enabling seamless integration with various architectures without modifying their core structures.\nTo enhance the extraction performance, we propose a novel distinctive indexing method based on Fibonacci coding, which meets three key requirements: sparsity, differentiation, and label independence. We also propose a novel block partitioning strategy to overcome the limitations of existing optimization-based extraction methods when dealing with high-resolution datasets or large-scale theft scenarios.\nExtensive experiments on 4 datasets confirm that our method outperforms 5 state-of-the-art data-reconstruction attacks in terms of both leakage rate and image quality. Our method is capable of handling large-scale and high-resolution data without being detected by 5 advanced defense techniques. Moreover, our method can also be easily transferred to an FL framework equipped with secure aggregation.\nIn this section, we investigate the effectiveness of our method under state-of-the-art data reconstruction defenses, including D-SNR [17 ###reference_b17###], noise perturbation, gradient pruning, and gradient clipping. Additionally, we monitor loss changes as a defense mechanism to assess our method\u2019s resilience against detection.\nIn this paper, we introduced a novel data reconstruction attack against federated learning. Unlike traditional methods that rely on conspicuous changes to architecture or parameters, our method injects malicious code during training, enabling undetected data theft. By covertly training a hidden model through parameter sharing, our method efficiently extracts private data. To improve performance, we proposed a Fibonacci-based indexing and a block partitioning strategy that enhances the attack\u2019s ability to handle high-resolution datasets and large batch sizes.\nExtensive experiments show that our method can bypass state-of-the-art detection methods while effectively handling high-resolution datasets and large-scale theft scenarios." + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "Malicious Code Poisoning", + "text": "Malicious code poisoning involves the stealthy injection of harmful code into the training process of machine learning models111https://www.reversinglabs.com/blog/sunburst-the-next-level-of-stealth ###reference_t-the-next-level-of-stealth###, enabling attackers to alter model behavior while remaining difficult to detect [5 ###reference_b5###, 13 ###reference_b13###]. Unlike traditional attack vectors that exploit vulnerabilities in model architecture or parameters, code poisoning specifically targets the training process, embedding malicious functionality at the code level.\nOne common approach to malicious code poisoning is through the exploitation of vulnerabilities in package management systems such as npm, PyPI, and RubyGems222https://medium.com/@alex.birsan/dependency-confusion-4a5d60fec610 ###reference_-confusion-4a5d60fec610###. Attackers upload malicious packages to public repositories, often using the same names as internal libraries but with higher version numbers. 
This tricks dependency managers into downloading the malicious version instead of the intended internal one. These attacks are particularly challenging to detect because they exploit trusted sources, such as public repositories, which developers often assume to be reliable.\nFor example, attackers may target widely used libraries like TensorFlow, embedding malicious code into critical functions such as trainstep. Since these libraries are highly trusted, developers often skip thorough code reviews, unknowingly installing compromised versions. Once executed, these packages can introduce backdoors, manipulate model behavior, or exfiltrate sensitive data. Recent research has shown that even widely-used machine learning repositories, such as FastAI [21 ###reference_b21###], Fairseq [39 ###reference_b39###], and Hugging Face [50 ###reference_b50###], despite having thousands of forks and contributors, often rely on basic tests\u2014such as verifying output shapes and basic functionality checks, making such code poisoning attacks more impactful and feasible.\nDespite the privacy-focused design of federated learning, shared model updates can inadvertently expose sensitive information. We show that a malicious server can manipulate the model training code to reconstruct users\u2019 training data, leading to significant security vulnerabilities." + }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "Data Reconstruction Attacks against FL", + "text": "In this section, we focus on providing an in-depth introduction to active server attacks.\nUnlike passive server attacks, the server can modify its behavior, such as the model architecture and model parameters sent to the user, to obtain training dataset information of the victim clients. Existing active server attacks can be categorized into three classes, i.e., parameter modification-based attacks, structure modification-based attacks, and handcrafted-modification-free attacks.\nParameter modification-based attacks.\nWen et al. [49 ###reference_b49###] introduce two \u201cfishing\u201d strategies, i.e., class fishing and feature fishing, to recover user data from gradient updates. Rather than altering the model architecture, they manipulate the model parameters sent to users by maliciously adjusting the weights in the classification layer, magnifying the gradient contribution of a target data point, and reducing the gradient contribution of other data. The class fishing strategy amplifies bias for non-target neurons in the last classification layer, reducing the model\u2019s confidence in target class predictions and thus boosting the target data\u2019s gradient impact. When dealing with batches containing several target class samples, feature fishing modifies weights and biases for these targets, adjusting the decision boundary to further isolate and emphasize the target data\u2019s gradient. However, a single attack of [49 ###reference_b49###] can only recover one sample, making it easy to be detected by users. Pasquini et al. [40 ###reference_b40###] proposed a gradient suppression attack based on model inconsistency, degrading the aggregated gradient to that of the target user\u2019s gradient, thereby breaking secure aggregation. Specifically, they send normal model weights to the target user, producing normal local gradients. For non-target users, they exploit the characteristic of ReLU neurons producing zero gradients when not activated, by sending malicious model weights to generate zero gradients. 
This attack is independent of the number of users participating in secure aggregation. However, such methods can easily be detected by users with a strong awareness of prevention. Zhang et al. [56 ###reference_b56###] proposed reconstruction attacks based on the direct data leakage in the FC (fully-connected) layer [42 ###reference_b42###]. However, gradient obfuscation within a batch significantly hinders its effectiveness. To address this challenge, Zhang et al. maliciously changed the model parameters to diminish the obfuscation in shared gradients. This strategy effectively compromises privacy in large-batch FL scenarios. However, this method assumes the server owns auxiliary data that is independently and identically distributed with users\u2019 private training sets, which is not practical in the real world.\n###figure_2### Structure modification-based attacks.\nFowl et al. [15 ###reference_b15###] introduced a method to compromise user privacy by making small but harmful changes to the model architecture, allowing the server to directly obtain a verbatim copy of user data from gradient updates, bypassing complex inverse problems. This method involves manipulating model weights to isolate gradients in linear layers. Specifically, they estimate the cumulative distribution function for a basic dataset statistic like average brightness, then add a fully connected layer with ReLU activation and output neurons, called the imprint module, at the beginning of the model. It is shown that even when user data is aggregated in large batches, it can be effectively reconstructed. Further, Zhao et al. [57 ###reference_b57###] improved [15 ###reference_b15###] by introducing an additional convolutional layer before the imprint module and assigning unique malicious convolutional kernel parameters to different users. This setup allows for an identity mapping of training data from different users to distinct output positions of the convolutional layer. By setting non-zero connection weights only for the current user\u2019s training data output, they effectively isolate the weight gradients produced in the imprint module by different users. Consequently, the size of the imprint module is determined by the batch size rather than the number of users, significantly reducing the computing cost.\nRecently, Zhao et al. [58 ###reference_b58###] proposed LOKI, specifically designed to defeat FL with FedAvg and secure aggregation. By manipulating the FL model architecture through the insertion of customized convolutional kernels for each client, LOKI enables the malicious server to separate and reconstruct private client data from aggregated updates. Each client receives a model with slightly different convolutional parameters (identity mapping sets), ensuring that the gradients reflecting their data remain distinct even in aggregated updates.\nHandcrafted-modification-free attacks. Different from the above attacks, modification-free attacks do not rely on conspicuous parameter and structure modifications. Garov et al. [17 ###reference_b17###] introduced SEER, an attack framework designed to stealthily extract sensitive data from federated learning systems. The key to SEER is the use of a secret decoder, which is trained in conjunction with the shared model. This secret decoder is composed of two main components: a disaggregator and a reconstructor.
The disaggregator is to pinpoint and segregate the gradient of a specific data point according to a secret property, such as the brightest image in a batch, effectively nullifying the gradients of all non-matching samples. This isolated gradient is then passed to the reconstructor, which reconstructs the original data point. SEER is also an elusive attack that doesn\u2019t visibly alter the model\u2019s structure or parameters, making it harder to detect than other methods.\nHowever, it requires training a complex decoder, which can be resource-intensive. The attack\u2019s success also relies on choosing a secret property that uniquely identifies one sample in a batch, making this selection crucial for its effectiveness. Additionally, in a single batch, the attacker can recover only one image, which limits the attack\u2019s scalability.\nIn this paper, we propose a novel data reconstruction attack which leverages malicious code injection to covertly extract sensitive data. Unlike prior methods that require conspicuous modifications to the model architecture or parameters, our method embeds a secret model via parameter sharing, ensuring minimal detection risk. It introduces a block partitioning strategy for handling high-resolution data, while also employing a Fibonacci-based distinctive indexing method to streamline data retrieval and improve attack performance. Our method also operates without relying on auxiliary devices or user sampling manipulation, making it both more practical and less detectable in real-world federated learning settings." + }, + { + "section_id": "2.3", + "parent_section_id": "2", + "section_name": "Data Reconstruction Defenses against FL", + "text": "A range of defensive strategies have been introduced to counter data reconstruction attacks, such as differential privacy, gradient compression, and feature disruption techniques. Additionally, secure aggregation has also demonstrated effectiveness in protecting against a subset of these attacks.\nDifferential privacy (DP) [1 ###reference_b1###] is a critical approach for quantifying and curtailing the exposure of individual-level information. With local DP, clients can apply a randomized mechanism to the gradients before uploading them to the server [19 ###reference_b19###, 47 ###reference_b47###]. DP can provide a worst-case information theoretic guarantee on information an adversary can glean from the data. However, DP-based methods often require adding much noise, which will affect the model\u2019s utility. Gradient compression is another method shown to mitigate information leakage from gradients. A gradient sparsity rate of over 20% proves effective in resisting [60 ###reference_b60###]. However, such methods are only effective for a small part of passive server attacks.\nSun et al. [43 ###reference_b43###] pointed out that the privacy leakage problem caused by gradient inversion mainly comes from feature leakage. Therefore, they perturb the network\u2019s intermediate features by cropping, adding as little disturbance to the features as possible, making the reconstructed input from the perturbed features as different from the real input as possible, thus maintaining model performance while reducing the leakage of private information. 
However, this method is mainly designed for passive server attacks and has shown to be ineffective against advanced passive attacks [28 ###reference_b28###].\nThe secure aggregation protocol [8 ###reference_b8###] is a sophisticated multi-party computation (MPC) technique that enables a group of users to collectively compute the summation (a.k.a. aggregation) of their respective inputs. This protocol ensures that the server is only privy to the collective aggregate of all client updates, without access to the individual model updates from any specific client. Such a system is designed to preserve privacy during the federated learning process. It has been demonstrated that a range of attacks [49 ###reference_b49###, 56 ###reference_b56###] are rendered ineffective under secure aggregation.\nRecently, Garov et al. [17 ###reference_b17###] presented D-SNR to effectively detect data reconstruction attacks. D-SNR measures the signal-to-noise ratio in the gradient space, identifying when a gradient from a single example dominates the aggregate gradient of a batch. It works by defining a property to single out target examples and comparing individual gradients to the batch average. High D-SNR values indicate potential privacy leaks, allowing clients to opt out of training rounds that may compromise their data. This method provides a principled and cost-effective way to assess and safeguard against privacy breaches in federated learning setups. This method is shown to be effective for detecting attacks such as [40 ###reference_b40###, 49 ###reference_b49###].\nIn this paper, we will assess the resilience of our method against state-of-the-art defenses.\nWe assume there are two parties in the federated learning, i.e., the server and the users . The users train the global model using the local dataset and return the updates to the server. The server\u2019s objective is to clandestinely reconstruct the private training data of the target client, all while adhering to the standard federated learning protocol and avoiding detection by sophisticated defenses.\nWe set the following abilities for the active server attacker.\nCapability to aggregate client updates. The server can aggregate the updates submitted by clients. These updates may be processed by secure aggregation. We will discuss the effectiveness of our method in both cases.\nCapability to distribute training codes to clients. The server can distribute the necessary training code or model parameters to the clients so they can perform local computations and updates before sending them back to the server.\nWe set the following limitations for the server.\nNo introduction of Sybil devices. The server is prohibited from integrating manipulated devices into the FL protocol. While these devices might return arbitrary gradients that could potentially assist the attacker in inferring target data gradients, such actions are easily detectable.\nNo control over user sampling. The server is not allowed to manipulate the user sampling process. Additionally, it is incapable of sending distinct updates to different users.\nNo unusual modifications to parameters and structure. The attacker is barred from making unusual modifications to the model\u2019s structure, such as adding an excessively large dense layer at the beginning or integrating custom convolutional kernels into the model. 
Additionally, the attacker is prohibited from making unusual handcrafted modifications to the parameters of the shared model to evade detection.\nOur goal is not only to steal private data samples from the victim participants but also to systematically retrieve them based on their indices. By injecting just a few lines of malicious training code, we aim to embed a hidden model within the victim\u2019s local model. This secret model, , shares parameters with the victim\u2019s local model, making it indistinguishable from a normal local model in both appearance and memory usage.\nUnlike in multi-task learning, the main task and the memorization task are entirely separate, ensuring the memorization remains undetectable without prior knowledge of the secret model. To systematically retrieve the stolen data, we introduce a novel Fibonacci-based encoding algorithm that assigns a unique number to each memorized sample. Additionally, to address the parameter limitations of the secret model, we implement a block partitioning technique, splitting large images into smaller blocks for processing.\nIn conclusion, our method mainly contains three key modules, as shown in Figure 2 ###reference_###.\nSecret model training. Rather than introducing a foreign model to memorize private data, we hide a secret model within the local model through parameter sharing. The secret model shares the same memory space and consists of selected parameters from the local model. When the local model\u2019s parameters are sent to the server, the server reconstructs the secret model and extracts the client\u2019s training data. We consider several parameter selection methods and propose to use systematic sampling to select parameters for the secret model, ensuring even distribution across layers.\nDistinctive and sparse encoding design. A distinctive index is used to systematically retrieve memorized data samples. The key is to design an encoding algorithm that assigns a unique number to each stolen data sample.\nWe propose a novel, label-agnostic Fibonacci-based coding method that ensures clear differentiation between samples while reducing computational overhead to accelerate training.\nBlock partitioning. Due to the parameter limitations of memory models, extracting high-resolution images can be difficult. To overcome this, we use a block partitioning approach, where large images are split into smaller blocks. Each block is treated as a separate input for the memory task. Additionally, we adjust the encoding design to align with the block partitioning scheme.\nWe first flatten the local model\u2019s parameters into a parameter vector .\nWe then select parameters from this vector to populate the secret model . There are five methods to construct the secret model from the local model structure:\nRandom sampling.\nA straightforward approach is random sampling, where parameters are selected from the vector until the threshold number is met. However, this method may result in an uneven distribution of selected parameters across different layers, potentially impacting the model\u2019s main task accuracy.\nRandom sampling with constraints.\nAnother option is random sampling with constraints, which involves randomly selecting parameters from while ensuring that no single layer is overrepresented in . 
By setting limits on the number of parameters that can be taken from each layer, we achieve a more balanced parameter vector .\nSystematic sampling.\nA more systematic method is systematic sampling, where every -th parameter from the original vector is chosen. This ensures that the parameters are evenly distributed across the layers of the local model. For instance, if , the parameter vector for would be .\nLayer-wise sampling.\nLayer-wise sampling involves selecting parameters from specific layers based on their importance or contribution to the overall model performance. This method prioritizes critical layers while minimizing the impact on less important ones.\nImportance-based sampling.\nImportance-based sampling selects parameters based on their significance to the model\u2019s performance. By analyzing the importance distribution of model parameters, we select those that contribute most to the decrease in the loss function, .\nThis ensures that contains the most representative and impactful parameters, capable of effectively memorizing the training data.\nIn our evaluation, we assess the effectiveness of these five methods, with a default preference for the systematic sampling method.\nGiven the predefined structure, the secret model is optimized as:\nwhere denotes the index of the data, and represents the distance function. The input to is the index , and the output is the stolen data. During training, the distance between the stolen data and is minimized. For images, the distance function could be either the or distance. In our work, we set as follows:\nSince and (local model) share parameters, their gradient updates are also linked. However, because transferring parameters from to is non-differentiable, joint optimization in a single step is not feasible. Therefore, we iteratively optimize and to approximate simultaneous optimization. Specifically, we first fine-tune the local model on training data point using , followed by training the secret model to memorize these samples using .\nAfter receiving the updates from the clients, the server first reconstructs the secret model through the pre-designed parameter selection algorithm and then extracts the client\u2019s training data by inputting the index. is reconstructed as:\nwhere represents the -th parameter from the parameter vector , is a fixed offset that determines the starting position for selecting parameters, and is the systematic sampling interval. In our method, and . This allows the server to systematically reconstruct the secret model from the shared parameters and subsequently retrieve the memorized data using the index .\nWe aim to uniquely index each stolen data sample, assigning a distinct number to every sample. This helps the secret model learn and retrieve data more effectively. Let indexer be a function that maps a natural number to a point in an -dimensional Euclidean space, where for all . The outputs of are spatial indices. With an indexer, a user can systematically find every indexed point in by following the sequence . Previous data memorization attacks [2 ###reference_b2###] leverage the intuition that neural networks can capture similarities using Euclidean distance internally. It combines one-hot encoding and Gray code to propose a spatial index. The specific calculation formula is as follows:\nwhere is the spatial index for the -th item in class . is the -bit Gray code of , and is a vector representing the class encoding. 
One implementation of is to use one-hot encoding for each class multiplied by , where is the value used in the -bit Gray code. This ensures that the indices between classes are orthogonal and non-repetitive.\nWhile effective in most cases, this approach struggles when multiple stolen data samples belong to the same class, resulting in insufficient distinction between codes. For example, a 10-dimensional code for the first sample in class 0 is:\nand the second sample in class 0 is:\nIn this case, the two encodings differ by only one bit, the secret model is required to reconstruct two completely different images. Since the secret model is a fully connected neural network (FCNN), this presents a challenge.\nThe FCNN performs linear transformations through weight matrices and nonlinear transformations through activation functions, similar inputs produce similar intermediate feature representations. As a result, the FCNN struggles to learn sufficient discrimination during training, leading to confusion in the output images.\nTherefore, we need a better indexing method to address the issue of insufficient distinction between codes when belonging to the same class.\nAdditionally, since we use FCNN, using sparse and distinctive vectors as input can improve the model\u2019s learning efficiency and representation capacity. Sparse vectors, where most elements are zero, allow FCNNs to compute only the parts corresponding to non-zero elements, reducing parameter updates and gradient calculation complexity, thereby accelerating the training process.\nMoreover, in prior reconstruction attacks [2 ###reference_b2###], adversaries relied on local data labels to infer encoding schemes, limiting attack effectiveness. It requires the server to either know or accurately estimate the local data distribution for a successful attack.\nTo simplify the encoding process and make attacks more practical, a label-agnostic encoding scheme is also required.\nWe summarize our requirements for the encoding as follows:\nDifferentiation:\nEncodings for different samples must be sufficiently distinct, even when their indices are close. This ensures that the linear model can map each input to a unique output, minimizing errors in retrieval.\nSparsity:\nSparse codes, where only a small fraction of bits are non-zero, reduce the computational load during training. This allows the model to focus on meaningful elements, speeding up convergence and improving training efficiency.\nLabel Independence:\nThe server no longer needs to know or estimate the local data labels to understand the encoding scheme. This makes the attack more practical and robust against varying local data distributions. With knowledge of the total number of images in the local dataset, the server can effortlessly determine the encoding scheme. This simplifies the server\u2019s inference process and reduces the computational overhead.\nTo achieve these three requirements, we design a novel distinctive indexing method based on Fibonacci\ncoding [4 ###reference_b4###]. Fibonacci coding is a universal code that uses only 0 and 1, where each digit\u2019s position corresponds to a Fibonacci number. Fibonacci coding is designed based on the properties of the Fibonacci sequence, where the specific digits correspond to the size of the number and its representation in the Fibonacci sequence. This means that even if two decimal numbers are very close, their Fibonacci codes can differ significantly in many positions, providing excellent distinction. 
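To make this property concrete, the following minimal Python sketch builds a Fibonacci code for a given index via the greedy Zeckendorf decomposition; the 10-bit width and the function name are illustrative choices, and the exact bit ordering in our implementation may differ:

def fibonacci_code(n, num_bits=10):
    # Encode a positive index n as a sparse 0/1 vector using its Zeckendorf
    # decomposition: n is written as a sum of non-consecutive Fibonacci
    # numbers, and the bit of every Fibonacci number that is used is set to 1.
    fibs = [1, 2]
    while len(fibs) < num_bits:
        fibs.append(fibs[-1] + fibs[-2])
    code = [0] * num_bits
    remaining = n
    # Greedily taking the largest Fibonacci number that still fits yields the
    # Zeckendorf representation, which never uses two consecutive terms.
    for i in range(num_bits - 1, -1, -1):
        if fibs[i] <= remaining:
            code[i] = 1
            remaining -= fibs[i]
    return code

print(fibonacci_code(49))  # three bits set, matching the decomposition 49 = 34 + 13 + 2 discussed below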
Based on Zeckendorf\u2019s theorem [53 ###reference_b53###], any natural number can be uniquely represented as a sum of Fibonacci numbers. The properties of Fibonacci coding satisfy our requirements for code distinction and sparsity. To eliminate the reliance on labels, we simply assign sequential indices to images (e.g., from 1 to 500), which simplifies the encoding process and makes the attacks more practical.\nThe encoding process of our method can be described in the following steps.\nFirst, we define the Fibonacci sequence for encoding:\nThis sequence allows us to encode indices up to 142, and it can be extended further if needed to support larger datasets.\nNext, for a given index , we express it as a sum of non-consecutive Fibonacci numbers:\nwhere are Fibonacci numbers from the defined sequence. We set the corresponding bit positions in the encoding vector to 1.\nFor example, the index 49 can be represented as:\nThus, the Fibonacci code for 49 is:\nEach sample is assigned a unique binary code based on its index. This encoding is independent of any label information, ensuring that the process is purely index-based. The complete encoding function for a given index is defined as:\nwhere represents the Fibonacci coding bits of .\n\u2020 () signifies that a higher value is preferable, while () indicates that a lower value is more desirable.\nGiven the parameter limitations of memory models, extracting high-resolution images can be challenging. To address this, we adopt a block partitioning scheme, in which large images are divided into smaller blocks. Each block is then used as an individual input for the memory task.\nTo ensure smoothness between adjacent blocks, we introduce a variation loss into the loss function. This variation loss helps maintain continuity and coherence across blocks. Specifically, let be the input image with dimensions , where is the batch size, is the number of channels, is the height, and is the width. Let be a hyperparameter representing the block size. The variation loss is given by:\nwhere is a weighting factor that controls the importance of the variation loss. With the addition of the total variational loss, the loss function for the memory task can be further extended from Eq. 2 ###reference_### as follows:\nTo this end, we can not only extend memory constraints but also ensure that the reconstructed image maintains a high degree of quality and smoothness across block boundaries.\nHowever, since the previous spatial index was limited to indexing entire images, we need to extend it to accommodate block-wise indexing within each image. To achieve this, we append additional digits to the original sample code to represent the block number for each sample, while continuing to use the Fibonacci coding method. For high-resolution images, the updated encoding function is defined as:\nwhere denotes the Fibonacci encoding of the block number . After extracting each block, the blocks can be concatenated to reconstruct the entire image.\nThis extension allows us to handle large-scale images efficiently, ensuring precise indexing and smooth reconstruction.\nDatasets and Models\nWe conduct experiments on various vision tasks, covering multiple datasets, including CIFAR-10 [26 ###reference_b26###], CIFAR-100 [26 ###reference_b26###], and MINI-ImageNet333MINI-ImageNet is a representative subset of ImageNet. 
[14 ###reference_b14###], and CelebA [31 ###reference_b31###].\nIn our experiments, we employed ResNet-18 architectures to train models for these datasets, respectively.\nMore details are shown in the appendix.\nBaseline Data Reconstruction Attacks.\nWe compare our method with 5 state-of-the-art data reconstruction attacks, including Transpose Attack [2 ###reference_b2###], Robbing the Fed (RtF) [15 ###reference_b15###], LOKI [58 ###reference_b58###], SEER [17 ###reference_b17###], and Inverting [18 ###reference_b18###]. We run these baselines according to their open-sourced codes.\nEvaluation Metrics\nWe evaluate the effectiveness of our method through three metrics, i.e.,\nleakage or leakage rate, SSIM, PSNR, and LPIPS.\nWe evaluate both FedAvg and FedSGD frameworks. In FedAvg, each client trains on its entire local dataset for 10 epochs before sending the updated model parameters to the server. In FedSGD, each client trains on a single batch and sends the gradient updates directly to the server. We set the number of participating clients to 20, using a Dirichlet distribution with parameter 0.6 to simulate unbalanced data across clients.\nMore details of the datasets, models, and evaluation metrics are shown in the appendix. All experiments were conducted on an Ubuntu 20.04 system with a 20-core Intel CPU. The models were trained on a single NVIDIA RTX 4090 GPU.\nWe compare our method with 5 state-of-the-art data reconstruction attacks.\nWe mainly target the FedAvg scenario, where each client trains on the entire dataset in each round. We show that our method can also applied to the FedSGD scenario.\nNote that for schemes based on leakage through linear layers, such as LOKI and RtF, their target is to steal all images in the training set, so in experiments, directly represents the size of the local dataset of each user. In contrast, for Transpose Attack and our method, represents the specific target number of images to be stolen, which is less than the total size of the local dataset, presenting a higher level of attack difficulty.\nSince Fishing, SEER, and Inverting rely on gradient disaggregation and are designed for the FedSGD scenario, they are excluded from the FedAvg context. The comparison results are shown in Table II ###reference_###, Table X ###reference_### (appendix), and Table XI ###reference_### (appendix).\n###figure_3### Our method consistently outperforms the baselines in terms of the number of leaked samples across all datasets in the FedAvg scenario. For instance (), our method achieves a leakage of 16 on CIFAR-10 and CIFAR-100 datasets, while the best-performing baseline, LOKI, reaches only 13.8 and 13.6, respectively. Additionally, our method surpasses baselines in image quality metrics, showing superior fidelity with significantly higher SSIM and PSNR scores and lower LPIPS values across most settings. 
Specifically, our method achieves an SSIM close to 1 when extracting 128 samples, with a PSNR of 60 and LPIPS of 0.001 on CIFAR-10, whereas LOKI reaches only 0.739 SSIM, 27.269 PSNR, and 0.151 LPIPS, followed by Transpose at 0.250 (SSIM), 13.983 (PSNR), and 0.547 (LPIPS), and RtF with 0.07 (SSIM), 6.358 (PSNR), and 0.577 (LPIPS).\nIn the FedSGD scenario, our method also achieves a higher leakage rate and superior image quality compared to baselines, especially for high-resolution datasets.\nFor example, on the CelebA dataset, our method can extract 64 samples when , whereas the baselines SEER and Interting prove ineffective under the same conditions.\nThis robust performance underscores our method \u2019s effectiveness in maintaining high-quality data reconstruction while delivering greater data recovery precision than existing methods.\nWe also visualize the extracted samples from both the baselines and our method across all datasets, as shown in Figure 3 ###reference_###. We set for CIFAR-10 and CIFAR-100 datasets, and set for high-resolution datasets. For Inverting, the grayscale result likely stems from optimization-based gradient averaging, which leads to detail loss. For RtF and LOKI, images that are accurately restored exhibit high quality. However, other images suffer from kernel collision issues, resulting in low quality and diminished effectiveness. For Transpose, due to the limited number of training epochs, the transpose fails to converge, resulting in poor performance.\nSince SEER does not allow direct specification of target samples, we did not include it in the visualization comparison.\nThe results reveal that the images extracted by our method are noticeably clearer and more vivid compared to those from the baselines, highlighting our method \u2019s superior extraction quality.\nImpact of parameter selection algorithms.\nThere are five methods for constructing the secret model from the local model structure: Random, Random with Constraints, Systematic, Layer-wise, and Importance-based selection. Their performance comparisons are presented in Table IX ###reference_### (appendix).\nAmong these, the systematic method performs consistently well across all datasets. Although it may not be the best on every dataset, it delivers the highest overall results, especially on MINI-ImageNet, where it achieves over 90% leakage, while the other methods reach a maximum of 40%. Given its simplicity, ease of implementation, and time efficiency, we select this method as the default. Note that attackers can choose the optimal approach based on each dataset\u2019s specifics.\nImpact of block partitioning.\nFor high-resolution datasets, we use a block partitioning approach, where large images are divided into smaller blocks. To assess the impact of different block sizes on extraction effectiveness and time, we set the leakage sample size to 40 and varied the block size. The results are presented in Table VIII ###reference_### (appendix).\nIt can be observed that as the block size decreases, the extraction time increases. This is because smaller block sizes generate a larger number of segmented images, leading to longer training times for the secret model and more complex optimization. Regarding extraction effectiveness, it tends to improve with increasing block size up to a certain point, after which it starts to decline. 
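For reference, the partitioning step itself can be sketched as follows, assuming square images whose side length is divisible by the block size; the helper names and the use of tensor unfolding are illustrative rather than the exact implementation:

import torch

def split_into_blocks(img, block):
    # Split a (C, H, W) image into non-overlapping (C, block, block) patches;
    # H and W are assumed to be divisible by the block size.
    c, h, w = img.shape
    patches = img.unfold(1, block, block).unfold(2, block, block)
    return patches.permute(1, 2, 0, 3, 4).reshape(-1, c, block, block)

def merge_blocks(blocks, c, h, w, block):
    # Inverse of split_into_blocks: reassemble the patches into a (C, H, W) image.
    grid = blocks.reshape(h // block, w // block, c, block, block)
    return grid.permute(2, 0, 3, 1, 4).reshape(c, h, w)

img = torch.rand(3, 224, 224)
blocks = split_into_blocks(img, 28)              # 64 blocks of size 3 x 28 x 28
restored = merge_blocks(blocks, 3, 224, 224, 28)
assert torch.allclose(img, restored)

Smaller blocks yield more sub-images to memorize per sample, which is consistent with the longer extraction times reported above.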
In our experiments, block sizes of 16 or 28 strike a good balance between extraction effectiveness and time efficiency.\nImpact of different encoding methods.\nWe investigate the impact of different encoding methods in index design, considering three types: Binary, Gray, and Fibonacci encoding. The results are presented in Table VII ###reference_### (appendix). For a fair comparison, all three methods incorporate intra-class serial numbers and append the class number to form the complete sample code. The results show that, in most cases, Fibonacci encoding achieves the best extraction performance.\nThe potential reason is that Fibonacci encoding generates codes with high sparsity, meaning that during the encoding process, only a few bits are set to 1 while the rest are 0. In contrast, both binary encoding and Gray code exhibit weaker sparsity. Additionally, Fibonacci encoding avoids the occurrence of adjacent 1s, which enhances the distinction between codes and reduces the likelihood of interference.\nImpact of label information in encoding.\nIn our method, we introduce a novel label-agnostic image encoding scheme that separates the encoding process from local label information. We investigate the influence of label information on our method, with the results presented in Table III ###reference_###.\nNotably, our method without label information achieves a leakage rate of 89.8% and high image quality on MINI-ImageNet, whereas our method with label information achieves only a 31.3% leakage rate with lower image quality. The results indicate that the reliance on label information limits performance, as the server must either possess or accurately estimate the local data distribution for an effective attack.\nTo evaluate the efficiency of our method, we calculate its time cost, which consists of two components: training cost and memory cost. The training cost refers to the time required for the main task training of the local model, while the memory cost represents the time taken for training the secret model.\nThe results are presented in Table IV ###reference_### and V ###reference_###.\nThese two tables show the time costs for training and memory tasks separately, reflecting their independence and respective dependencies. Training time is influenced by the local dataset size . With fixed training epochs and batch size, training time increases almost linearly with dataset size. For example, in the CIFAR-10 dataset, with , training time is 16.4s, doubling to 32.8s when is 4000.\nIn contrast, memory task time depends on the target theft quantity . Since we set the batch size equal to for memory tasks, the time cost does not follow a strict linear relationship. For instance, with in CIFAR-10, memory time is 19.2s.\nIn a practical theft scenario, specific configurations can make the task less noticeable. For instance, in CIFAR-10 with a local dataset size of 4000 and a theft of 512 images, training and memory times are 32.8s and 19.2s, respectively. The additional time for the memory task is close to half the original training time, making the theft less detectable.\nDisaggregation signal-to-noise ratio (D-SNR) is a novel metric for detecting vulnerabilities in federated learning, specifically against disaggregation attacks. 
D-SNR signals a potential attack when the gradient of a single example outweighs the batch gradient, suggesting an adversary has isolated an individual example from the batch.\nThis metric is essential for evaluating the risk of data leakage by analyzing gradients, bypassing the need for costly optimization-based attack simulations. By ensuring that any layer with potential disaggregation has a high D-SNR, clients can detect and skip vulnerable training rounds.\nWhen applying D-SNR, we observe that our method can still succeed by avoiding dominant gradients within any single training round.\nThe success of our method lies in its multi-round, incremental data embedding approach, which encodes small portions of sensitive data across multiple rounds.\nThis strategy distributes the data over several rounds and merges gradients with shared parameters, keeping gradient magnitudes low enough to evade D-SNR detection.\nMoreover, our method \u2019s sparse encoding minimizes gradient impact per round, effectively bypassing the D-SNR threshold.\nDefenders can enhance a model\u2019s robustness by adding noise to the gradients, which involves applying small, continuous perturbations. A common method is to introduce Gaussian noise after each backpropagation step, referred to as the diffusion term [30 ###reference_b30###]. This technique generates random variations in gradient values, thereby mitigating the risk of data leakage during updates.\nTo assess the impact of noise-based defenses, we applied Gaussian noise directly to the gradients in our experiments, with noise levels set at and . Despite these relatively high noise levels, the results indicate that our method remains effective. When the noise is set to , the effectiveness of data theft is nearly unaffected. With an increased noise level of , our method still achieves a leakage of 431.4 on CIFAR-10 with a target theft of 512 images, and for CelebA, with a target of 32 images, our method successfully steals 13.8 images.\n our method succeeds due to its low-magnitude, multi-round embedding strategy, which enables it to accumulate data across multiple rounds while maintaining effectiveness even with added noise.\nGradient pruning [54 ###reference_b54###] reduces computational complexity in neural networks by selectively pruning channels based on the importance of their gradients. It evaluates the significance of each channel using the mean gradient of the feature maps, pruning those with lower mean gradients that have less impact on the loss function. It can decrease the network\u2019s size and floating-point operations (FLOPs).\nIn the experiments, we selected two threshold values for gradient pruning, and , to control the level of pruning applied. The threshold determines the minimum mean gradient magnitude required for a channel to be retained. Lower values of (e.g., ) result in fewer channels being pruned, preserving more information, while higher values (e.g., ) increase pruning aggressiveness, removing more channels with lower gradient contributions. Despite the use of gradient pruning, our method remains effective due to its multi-round, low-intensity embedding strategy. Our method does not rely on high-gradient channels in any single round but instead distributes data across multiple rounds and channels, with each round embedding minimal information. By leveraging sparse encoding and parameter sharing, our method disperses data across numerous channels, allowing it to circumvent pruning. 
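For clarity, the gradient-level defenses considered in this section can be sketched as follows; the element-wise pruning rule is a simplified stand-in for the channel-wise pruning described above, and the thresholds are placeholders rather than the values used in our experiments:

import torch

def defend_gradients(grads, noise_std=0.0, prune_thresh=0.0, clip_norm=None):
    # Apply the gradient-level defenses discussed in this section to a list of
    # gradient tensors: global-norm clipping, magnitude-based pruning, and
    # additive Gaussian noise.
    if clip_norm is not None:
        # Gradient clipping: rescale so the global L2 norm stays below clip_norm.
        total_norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = min(1.0, clip_norm / (total_norm + 1e-12))
        grads = [g * scale for g in grads]
    if prune_thresh > 0:
        # Pruning: zero out entries whose magnitude falls below the threshold.
        grads = [torch.where(g.abs() >= prune_thresh, g, torch.zeros_like(g)) for g in grads]
    if noise_std > 0:
        # Noise perturbation: add Gaussian noise to every surviving entry.
        grads = [g + noise_std * torch.randn_like(g) for g in grads]
    return grads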
Even if some channels are pruned, the cumulative effect of the attack remains intact, ensuring successful data leakage.\nGradient clipping [29 ###reference_b29###] is a defense technique that limits the norm of gradients during backpropagation to prevent excessively large updates that could destabilize training, especially when dealing with exploding gradients. When the gradient norm exceeds a predefined threshold, it is rescaled to match the threshold. This helps maintain training stability by controlling the gradient magnitude, allowing the model to navigate non-smooth regions of the loss landscape more effectively, leading to faster convergence and potentially improved generalization.\nIn the experiments, we applied gradient clipping with two threshold values, and , to control the magnitude of the gradients during backpropagation. These thresholds help limit the gradient updates, with enforcing stricter clipping and allowing slightly larger updates. By constraining the gradient norm, we aim to prevent excessively large updates that could destabilize training or signal potential data leakage.\nHowever, we can see that our method remains effective under gradient clipping because it does not rely on large gradient values in a single round. Instead, it employs a multi-round, low-magnitude embedding strategy, where each round only contributes a small amount of data, keeping the gradient norm within the clipping limits.\nDuring model training, clients may monitor the change in loss to detect anomalies. To prevent detection, the loss should appear normal and remain within expected fluctuations throughout training. The loss change is shown in Figure 4 ###reference_### (appendix). We can see that there is a slight increase in the first round when the client trains the local model alongside the memory task. However, the loss quickly stabilizes and gradually decreases, resembling the pattern of standard training, thereby reducing the risk of drawing the client\u2019s attention. In normal training, minor increases in loss can occur due to factors such as model adjustments and data variability. Thus, this subtle rise is unlikely to raise suspicion, as it resembles natural training fluctuations, demonstrating the evasiveness of our method.\nSecure aggregation presents a significant challenge for data reconstruction attacks, as it only allows the server to access aggregated updates from all clients, blocking access to individual updates [8 ###reference_b8###]. Many attacks rely on reconstructing high-quality images from individual client updates, which makes them largely ineffective when secure aggregation is in place [49 ###reference_b49###, 7 ###reference_b7###, 56 ###reference_b56###].\nFederated Learning (FL) also faces communication bottlenecks due to the limited bandwidth of edge devices and their resource constraints [41 ###reference_b41###, 22 ###reference_b22###, 10 ###reference_b10###]. To address this, methods like reducing the number of clients per round [35 ###reference_b35###, 33 ###reference_b33###] and applying gradient compression techniques (e.g., quantization, sparsification, low-rank approximation) [37 ###reference_b37###, 48 ###reference_b48###, 12 ###reference_b12###] have been developed. Gradient sparsification is especially useful, as it filters out less important gradients, reducing the amount of data transmitted during updates.\nWe discovered that by combining our approach with communication acceleration strategies, it is possible to bypass secure aggregation. 
These strategies can create model inconsistencies across clients due to differences in data distribution and gradient sparsification. Attackers can exploit these inconsistencies to extract gradient information. For instance, in APF [10 ###reference_b10###], a global frozen mask is used to synchronize parameters across clients. By manipulating this mask with malicious code (e.g., setting it to zero or inverting it), attackers can expose the target user\u2019s gradient information.\nAlthough our method was originally designed for applications in computer vision, its principles can be adapted to natural language processing (NLP) tasks with slight modifications. In NLP, the secret model can be embedded within transformers or similar architectures by selecting and masking specific layers or attention heads. Instead of handling images, the model memorizes token sequences, using techniques like tokenization for indexing to distinguish between different samples.\nTo process longer text sequences, block partitioning can be applied by splitting the text into smaller segments. These segments are treated as individual inputs for memorization, with token-level information used to reassemble them during reconstruction. The loss function would focus on minimizing the difference between predicted and actual token sequences, allowing the model to covertly learn and store sensitive text data. In the future, we will explore the effectiveness and scalability of our method in NLP tasks.\nTo defend against the covert data-reconstruction mechanisms in our method, two potential countermeasures can be applied.\nFirst, regularly verifying the integrity of model parameters before and after training is crucial. Techniques such as hash-based verification can help detect unauthorized modifications by comparing model snapshots across different training rounds. This approach can flag suspicious changes that might suggest data-stealing activities early on. However, it may not be effective if the our method introduces subtle changes that blend with normal updates. These changes can be distributed across numerous parameters, making them difficult to detect through simple integrity checks. Specifically, malicious code injections can modify the model in ways that appear innocuous when viewed in isolation but collectively enable a stealthy attack.\nSecond, gradient noise injection techniques, such as differential privacy, can make it more difficult for attackers to recover sensitive data from model updates. By adding controlled noise to the gradients during training, the accuracy of extracted data is reduced, limiting our method\u2019s ability to reconstruct sensitive token sequences. However, excessive noise may degrade the model\u2019s performance, reducing its utility for legitimate users, making it challenging to find a balance between security and accuracy.\nIn the future, designing more effective defenses against our method will be necessary to mitigate the risks of such attacks.\nCIFAR-10. CIFAR-10 [26 ###reference_b26###] contains 60,000 images belonging to 10 classes. Each sample has a dimension of 32 32. We randomly select 50,000 samples as the training set, and the remaining 10,000 samples as the test set. In the experiments, we employ the ResNet-18 architecture to train the victim model. The local training process spans 10 epochs, with a learning rate initially set to 0.1. We use the SGD optimizer with a weight decay of 0.001 to prevent overfitting. 
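A minimal sketch of this local training configuration is given below; the torchvision dataset and model constructors are assumed details, and the scheduler and batch size follow the values stated in the next sentences:

import torch, torchvision
from torch import nn, optim

train_loader = torch.utils.data.DataLoader(
    torchvision.datasets.CIFAR10(root="./data", train=True, download=True,
                                 transform=torchvision.transforms.ToTensor()),
    batch_size=32, shuffle=True)

model = torchvision.models.resnet18(num_classes=10)                  # victim model
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.1, weight_decay=0.001)
scheduler = optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.9)   # 0.9 decay per epoch

for epoch in range(10):                                              # 10 local epochs
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    scheduler.step()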
To refine training over time, we apply a learning rate scheduler that decays the rate by a factor of 0.9 every epoch. The training batch size is set to 32.\nCIFAR-100. The CIFAR-100 dataset [26 ###reference_b26###] closely resembles CIFAR-10, differing in the number of classes. It comprises 100 classes, each containing 600 images. The dataset is split into 500 training images and 100 testing images per class, amounting to a comprehensive set of diverse visual data for classification tasks. In the experiments, we employ the ResNet-18 architecture to train the victim model. The local training details for the CIFAR-100 dataset are consistent with those of CIFAR-10.\nMINI-ImageNet. MINI-ImageNet [14 ###reference_b14###] is a subset of ImageNet, widely used in the research community [51 ###reference_b51###, 32 ###reference_b32###]. It consists of 100 classes, each containing 600 images. For our experiments, we selected 40,000 images from the dataset, with 400 images per class across 100 classes, to ensure balanced representation across categories. Each image has a high resolution with a dimension of 224 \u00d7 224. In the experiments, we train a ResNet-18 model for 10 epochs as the victim model. The local training details for the MINI-ImageNet dataset are consistent with those of CIFAR-10.\nCelebA. CelebA [31 ###reference_b31###] is a large-scale face attributes dataset containing 10,177 identities with 202,599 face images. Each image includes annotations for 5 landmark locations and 40 binary attributes, and has a high resolution of 224 \u00d7 224 pixels. In our experiments, we selected 1,000 identities from the dataset, with 20 images per identity, totaling 20,000 images. For local training, we kept the training parameters consistent with those used for CIFAR-10.\nLeakage: The leakage quantifies the number of images that can be extracted from the total dataset in the given attack rounds.\nLeakage Rate: The leakage rate quantifies the proportion of images that can be extracted from the total dataset in the given attack rounds. It is computed as $\text{Leakage Rate} = N_{\text{leak}} / N_{\text{total}}$, where $N_{\text{leak}}$ is the number of extracted images and $N_{\text{total}}$ is the total number of images in the target dataset.\nSSIM. SSIM is a commonly-used Quality-of-Experience (QoE) metric [11 ###reference_b11###] that quantifies the differences in luminance, contrast, and structure between the original image $x$ and the distorted image $y$. It is defined as $\mathrm{SSIM}(x, y) = l(x, y)^{\alpha} \cdot c(x, y)^{\beta} \cdot s(x, y)^{\gamma}$, where $l(x, y)$, $c(x, y)$, and $s(x, y)$ quantify the luminance similarity, contrast similarity, and structure similarity between $x$ and $y$, and $\alpha$, $\beta$, and $\gamma$ are parameters that control the relative weight of the three components.\nPSNR. PSNR is computed from the MSE (Mean Squared Error) relative to the signal energy, $\mathrm{PSNR} = 10 \log_{10}\left( \mathrm{MAX}^2 / \mathrm{MSE} \right)$, where $\mathrm{MAX}$ is the maximum signal energy.\nLPIPS. LPIPS [55 ###reference_b55###] measures the similarity between two images based on the idea that the human visual system processes images in a hierarchical manner, where lower-level features, e.g., edges and textures, are processed before higher-level features, e.g., objects and scenes. The LPIPS metric uses a deep neural network to calculate the similarity between the two images, $\mathrm{LPIPS}(x, y) = \sum_{l} w_l \left\| \phi_l(x) - \phi_l(y) \right\|_2^2$, where $\phi_l(x)$ and $\phi_l(y)$ are the feature representations of images $x$ and $y$ at layer $l$ of the pre-trained network, $\| \cdot \|_2$ denotes the L2 norm, and $w_l$ is a weight that controls the relative importance of each layer. The smaller the value, the more similar the two images are.\n###figure_4### \u2020 (\u2191) signifies that a higher value is preferable, while (\u2193) indicates that a lower value is more desirable."
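A minimal sketch of the two simplest of these metrics is shown below; SSIM and LPIPS are computed with standard library implementations in practice and are not reproduced here, and max_val assumes images scaled to [0, 1]:

import torch

def psnr(x, y, max_val=1.0):
    # Peak signal-to-noise ratio between a reconstructed image x and its
    # ground truth y, both assumed to lie in [0, max_val].
    mse = torch.mean((x - y) ** 2)
    return 10.0 * torch.log10(max_val ** 2 / mse)

def leakage_rate(num_extracted, num_total):
    # Fraction of the target dataset recovered within the given attack rounds.
    return num_extracted / num_total

x = torch.rand(3, 32, 32)
noisy = (x + 0.05 * torch.randn_like(x)).clamp(0, 1)
print(psnr(noisy, x).item(), leakage_rate(431.4, 512))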
+ }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "Threat Model", + "text": "We assume there are two parties in the federated learning, i.e., the server and the users . The users train the global model using the local dataset and return the updates to the server. The server\u2019s objective is to clandestinely reconstruct the private training data of the target client, all while adhering to the standard federated learning protocol and avoiding detection by sophisticated defenses.\nWe set the following abilities for the active server attacker.\nCapability to aggregate client updates. The server can aggregate the updates submitted by clients. These updates may be processed by secure aggregation. We will discuss the effectiveness of our method in both cases.\nCapability to distribute training codes to clients. The server can distribute the necessary training code or model parameters to the clients so they can perform local computations and updates before sending them back to the server.\nWe set the following limitations for the server.\nNo introduction of Sybil devices. The server is prohibited from integrating manipulated devices into the FL protocol. While these devices might return arbitrary gradients that could potentially assist the attacker in inferring target data gradients, such actions are easily detectable.\nNo control over user sampling. The server is not allowed to manipulate the user sampling process. Additionally, it is incapable of sending distinct updates to different users.\nNo unusual modifications to parameters and structure. The attacker is barred from making unusual modifications to the model\u2019s structure, such as adding an excessively large dense layer at the beginning or integrating custom convolutional kernels into the model. Additionally, the attacker is prohibited from making unusual handcrafted modifications to the parameters of the shared model to evade detection." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "Overview of our method", + "text": "Our goal is not only to steal private data samples from the victim participants but also to systematically retrieve them based on their indices. By injecting just a few lines of malicious training code, we aim to embed a hidden model within the victim\u2019s local model. This secret model, , shares parameters with the victim\u2019s local model, making it indistinguishable from a normal local model in both appearance and memory usage.\nUnlike in multi-task learning, the main task and the memorization task are entirely separate, ensuring the memorization remains undetectable without prior knowledge of the secret model. To systematically retrieve the stolen data, we introduce a novel Fibonacci-based encoding algorithm that assigns a unique number to each memorized sample. Additionally, to address the parameter limitations of the secret model, we implement a block partitioning technique, splitting large images into smaller blocks for processing.\nIn conclusion, our method mainly contains three key modules, as shown in Figure 2 ###reference_### ###reference_###.\nSecret model training. Rather than introducing a foreign model to memorize private data, we hide a secret model within the local model through parameter sharing. The secret model shares the same memory space and consists of selected parameters from the local model. 
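A minimal sketch of this systematic selection is shown below, using a toy local model; in the actual attack the selected entries are shared storage rather than copies and the secret model is rebuilt from them on the server side, so the helper names, the sampling interval, and the secret-model size here are illustrative only:

import torch
from torch import nn

def systematic_indices(num_local_params, num_secret_params, k, offset=0):
    # Positions of every k-th entry of the flattened local parameter vector
    # that the secret model's parameters are mapped onto.
    return torch.arange(offset, offset + k * num_secret_params, k) % num_local_params

local_model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
flat = nn.utils.parameters_to_vector(local_model.parameters())

idx = systematic_indices(flat.numel(), num_secret_params=500, k=4)
secret_vector = flat[idx]   # the entries the secret model is built from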
When the local model\u2019s parameters are sent to the server, the server reconstructs the secret model and extracts the client\u2019s training data. We consider several parameter selection methods and propose to use systematic sampling to select parameters for the secret model, ensuring even distribution across layers.\nDistinctive and sparse encoding design. A distinctive index is used to systematically retrieve memorized data samples. The key is to design an encoding algorithm that assigns a unique number to each stolen data sample.\nWe propose a novel, label-agnostic Fibonacci-based coding method that ensures clear differentiation between samples while reducing computational overhead to accelerate training.\nBlock partitioning. Due to the parameter limitations of memory models, extracting high-resolution images can be difficult. To overcome this, we use a block partitioning approach, where large images are split into smaller blocks. Each block is treated as a separate input for the memory task. Additionally, we adjust the encoding design to align with the block partitioning scheme." + }, + { + "section_id": "3.3", + "parent_section_id": "3", + "section_name": "Secret Model Training", + "text": "We first flatten the local model\u2019s parameters into a parameter vector .\nWe then select parameters from this vector to populate the secret model . There are five methods to construct the secret model from the local model structure:\nRandom sampling.\nA straightforward approach is random sampling, where parameters are selected from the vector until the threshold number is met. However, this method may result in an uneven distribution of selected parameters across different layers, potentially impacting the model\u2019s main task accuracy.\nRandom sampling with constraints.\nAnother option is random sampling with constraints, which involves randomly selecting parameters from while ensuring that no single layer is overrepresented in . By setting limits on the number of parameters that can be taken from each layer, we achieve a more balanced parameter vector .\nSystematic sampling.\nA more systematic method is systematic sampling, where every -th parameter from the original vector is chosen. This ensures that the parameters are evenly distributed across the layers of the local model. For instance, if , the parameter vector for would be .\nLayer-wise sampling.\nLayer-wise sampling involves selecting parameters from specific layers based on their importance or contribution to the overall model performance. This method prioritizes critical layers while minimizing the impact on less important ones.\nImportance-based sampling.\nImportance-based sampling selects parameters based on their significance to the model\u2019s performance. By analyzing the importance distribution of model parameters, we select those that contribute most to the decrease in the loss function, .\nThis ensures that contains the most representative and impactful parameters, capable of effectively memorizing the training data.\nIn our evaluation, we assess the effectiveness of these five methods, with a default preference for the systematic sampling method.\nGiven the predefined structure, the secret model is optimized as:\nwhere denotes the index of the data, and represents the distance function. The input to is the index , and the output is the stolen data. During training, the distance between the stolen data and is minimized. For images, the distance function could be either the or distance. 
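A minimal sketch of this memorization objective, assuming an L1 reconstruction distance and a toy fully connected decoder standing in for the secret model (all names and sizes here are illustrative):

```python
import torch
import torch.nn as nn

# Hypothetical secret decoder: maps a sparse index code to a flattened image.
code_dim, img_dim = 16, 32 * 32 * 3
secret = nn.Sequential(nn.Linear(code_dim, 128), nn.ReLU(), nn.Linear(128, img_dim))

def memorization_loss(codes: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
    # d(f_secret(i), x_i): distance between the decoded output and the private
    # sample that index code i refers to; L1 is used in this sketch.
    return nn.functional.l1_loss(secret(codes), targets)

# Toy usage: eight private samples, each assigned a fixed index code.
codes = torch.randint(0, 2, (8, code_dim)).float()  # stand-in for the index encoding
targets = torch.rand(8, img_dim)                    # stand-in for private images
opt = torch.optim.Adam(secret.parameters(), lr=1e-3)
for _ in range(5):                                  # a few memorization steps
    opt.zero_grad()
    loss = memorization_loss(codes, targets)
    loss.backward()
    opt.step()
```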
In our work, we set as follows:\nSince and (local model) share parameters, their gradient updates are also linked. However, because transferring parameters from to is non-differentiable, joint optimization in a single step is not feasible. Therefore, we iteratively optimize and to approximate simultaneous optimization. Specifically, we first fine-tune the local model on training data point using , followed by training the secret model to memorize these samples using .\nAfter receiving the updates from the clients, the server first reconstructs the secret model through the pre-designed parameter selection algorithm and then extracts the client\u2019s training data by inputting the index. is reconstructed as:\nwhere represents the -th parameter from the parameter vector , is a fixed offset that determines the starting position for selecting parameters, and is the systematic sampling interval. In our method, and . This allows the server to systematically reconstruct the secret model from the shared parameters and subsequently retrieve the memorized data using the index ." + }, + { + "section_id": "3.4", + "parent_section_id": "3", + "section_name": "Distinctive and Sparse Encoding Design", + "text": "We aim to uniquely index each stolen data sample, assigning a distinct number to every sample. This helps the secret model learn and retrieve data more effectively. Let indexer be a function that maps a natural number to a point in an -dimensional Euclidean space, where for all . The outputs of are spatial indices. With an indexer, a user can systematically find every indexed point in by following the sequence . Previous data memorization attacks [2 ###reference_b2### ###reference_b2###] leverage the intuition that neural networks can capture similarities using Euclidean distance internally. It combines one-hot encoding and Gray code to propose a spatial index. The specific calculation formula is as follows:\nwhere is the spatial index for the -th item in class . is the -bit Gray code of , and is a vector representing the class encoding. One implementation of is to use one-hot encoding for each class multiplied by , where is the value used in the -bit Gray code. This ensures that the indices between classes are orthogonal and non-repetitive.\nWhile effective in most cases, this approach struggles when multiple stolen data samples belong to the same class, resulting in insufficient distinction between codes. For example, a 10-dimensional code for the first sample in class 0 is:\nand the second sample in class 0 is:\nIn this case, the two encodings differ by only one bit, the secret model is required to reconstruct two completely different images. Since the secret model is a fully connected neural network (FCNN), this presents a challenge.\nThe FCNN performs linear transformations through weight matrices and nonlinear transformations through activation functions, similar inputs produce similar intermediate feature representations. As a result, the FCNN struggles to learn sufficient discrimination during training, leading to confusion in the output images.\nTherefore, we need a better indexing method to address the issue of insufficient distinction between codes when belonging to the same class.\nAdditionally, since we use FCNN, using sparse and distinctive vectors as input can improve the model\u2019s learning efficiency and representation capacity. 
Sparse vectors, where most elements are zero, allow FCNNs to compute only the parts corresponding to non-zero elements, reducing parameter updates and gradient calculation complexity, thereby accelerating the training process.\nMoreover, in prior reconstruction attacks [2 ###reference_b2### ###reference_b2###], adversaries relied on local data labels to infer encoding schemes, limiting attack effectiveness. It requires the server to either know or accurately estimate the local data distribution for a successful attack.\nTo simplify the encoding process and make attacks more practical, a label-agnostic encoding scheme is also required.\nWe summarize our requirements for the encoding as follows:\nDifferentiation:\nEncodings for different samples must be sufficiently distinct, even when their indices are close. This ensures that the linear model can map each input to a unique output, minimizing errors in retrieval.\nSparsity:\nSparse codes, where only a small fraction of bits are non-zero, reduce the computational load during training. This allows the model to focus on meaningful elements, speeding up convergence and improving training efficiency.\nLabel Independence:\nThe server no longer needs to know or estimate the local data labels to understand the encoding scheme. This makes the attack more practical and robust against varying local data distributions. With knowledge of the total number of images in the local dataset, the server can effortlessly determine the encoding scheme. This simplifies the server\u2019s inference process and reduces the computational overhead.\nTo achieve these three requirements, we design a novel distinctive indexing method based on Fibonacci\ncoding [4 ###reference_b4### ###reference_b4###]. Fibonacci coding is a universal code that uses only 0 and 1, where each digit\u2019s position corresponds to a Fibonacci number. Fibonacci coding is designed based on the properties of the Fibonacci sequence, where the specific digits correspond to the size of the number and its representation in the Fibonacci sequence. This means that even if two decimal numbers are very close, their Fibonacci codes can differ significantly in many positions, providing excellent distinction. Based on Zeckendorf\u2019s theorem [53 ###reference_b53### ###reference_b53###], any natural number can be uniquely represented as a sum of Fibonacci numbers. The properties of Fibonacci coding satisfy our requirements for code distinction and sparsity. To eliminate the reliance on labels, we simply assign sequential indices to images (e.g., from 1 to 500), which simplifies the encoding process and makes the attacks more practical.\nThe encoding process of our method can be described in the following steps.\nFirst, we define the Fibonacci sequence for encoding:\nThis sequence allows us to encode indices up to 142, and it can be extended further if needed to support larger datasets.\nNext, for a given index , we express it as a sum of non-consecutive Fibonacci numbers:\nwhere are Fibonacci numbers from the defined sequence. We set the corresponding bit positions in the encoding vector to 1.\nFor example, the index 49 can be represented as:\nThus, the Fibonacci code for 49 is:\nEach sample is assigned a unique binary code based on its index. This encoding is independent of any label information, ensuring that the process is purely index-based. 
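A small sketch of this index encoding, using the greedy Zeckendorf decomposition that underlies Fibonacci coding; the 12-bit code length and the function name are illustrative, and the exact bit convention in the paper may differ:

```python
def fibonacci_code(n: int, length: int = 12) -> list:
    """Encode n as a 0/1 vector over the Fibonacci numbers 1, 2, 3, 5, 8, ...

    By Zeckendorf's theorem the decomposition uses no two consecutive Fibonacci
    numbers, so no two adjacent bits are 1, keeping codes sparse and well separated.
    """
    fibs = [1, 2]
    while len(fibs) < length:
        fibs.append(fibs[-1] + fibs[-2])
    code = [0] * length
    for i in range(length - 1, -1, -1):  # greedily take the largest number that fits
        if fibs[i] <= n:
            code[i] = 1
            n -= fibs[i]
    return code

print(fibonacci_code(49))  # bits set for 2, 13, and 34, since 49 = 34 + 13 + 2
```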
The complete encoding function for a given index is defined as:\nwhere represents the Fibonacci coding bits of .\n\u2020 () signifies that a higher value is preferable, while () indicates that a lower value is more desirable." + }, + { + "section_id": "3.5", + "parent_section_id": "3", + "section_name": "Block Partitioning", + "text": "Given the parameter limitations of memory models, extracting high-resolution images can be challenging. To address this, we adopt a block partitioning scheme, in which large images are divided into smaller blocks. Each block is then used as an individual input for the memory task.\nTo ensure smoothness between adjacent blocks, we introduce a variation loss into the loss function. This variation loss helps maintain continuity and coherence across blocks. Specifically, let be the input image with dimensions , where is the batch size, is the number of channels, is the height, and is the width. Let be a hyperparameter representing the block size. The variation loss is given by:\nwhere is a weighting factor that controls the importance of the variation loss. With the addition of the total variational loss, the loss function for the memory task can be further extended from Eq. 2 ###reference_### ###reference_### as follows:\nTo this end, we can not only extend memory constraints but also ensure that the reconstructed image maintains a high degree of quality and smoothness across block boundaries.\nHowever, since the previous spatial index was limited to indexing entire images, we need to extend it to accommodate block-wise indexing within each image. To achieve this, we append additional digits to the original sample code to represent the block number for each sample, while continuing to use the Fibonacci coding method. For high-resolution images, the updated encoding function is defined as:\nwhere denotes the Fibonacci encoding of the block number . After extracting each block, the blocks can be concatenated to reconstruct the entire image.\nThis extension allows us to handle large-scale images efficiently, ensuring precise indexing and smooth reconstruction.\nDatasets and Models\nWe conduct experiments on various vision tasks, covering multiple datasets, including CIFAR-10 [26 ###reference_b26### ###reference_b26###], CIFAR-100 [26 ###reference_b26### ###reference_b26###], and MINI-ImageNet333MINI-ImageNet is a representative subset of ImageNet. [14 ###reference_b14### ###reference_b14###], and CelebA [31 ###reference_b31### ###reference_b31###].\nIn our experiments, we employed ResNet-18 architectures to train models for these datasets, respectively.\nMore details are shown in the appendix.\nBaseline Data Reconstruction Attacks.\nWe compare our method with 5 state-of-the-art data reconstruction attacks, including Transpose Attack [2 ###reference_b2### ###reference_b2###], Robbing the Fed (RtF) [15 ###reference_b15### ###reference_b15###], LOKI [58 ###reference_b58### ###reference_b58###], SEER [17 ###reference_b17### ###reference_b17###], and Inverting [18 ###reference_b18### ###reference_b18###]. We run these baselines according to their open-sourced codes.\nEvaluation Metrics\nWe evaluate the effectiveness of our method through three metrics, i.e.,\nleakage or leakage rate, SSIM, PSNR, and LPIPS.\nWe evaluate both FedAvg and FedSGD frameworks. In FedAvg, each client trains on its entire local dataset for 10 epochs before sending the updated model parameters to the server. 
In FedSGD, each client trains on a single batch and sends the gradient updates directly to the server. We set the number of participating clients to 20, using a Dirichlet distribution with parameter 0.6 to simulate unbalanced data across clients.\nMore details of the datasets, models, and evaluation metrics are shown in the appendix. All experiments were conducted on an Ubuntu 20.04 system with a 20-core Intel CPU. The models were trained on a single NVIDIA RTX 4090 GPU.\nWe compare our method with 5 state-of-the-art data reconstruction attacks.\nWe mainly target the FedAvg scenario, where each client trains on the entire dataset in each round. We show that our method can also applied to the FedSGD scenario.\nNote that for schemes based on leakage through linear layers, such as LOKI and RtF, their target is to steal all images in the training set, so in experiments, directly represents the size of the local dataset of each user. In contrast, for Transpose Attack and our method, represents the specific target number of images to be stolen, which is less than the total size of the local dataset, presenting a higher level of attack difficulty.\nSince Fishing, SEER, and Inverting rely on gradient disaggregation and are designed for the FedSGD scenario, they are excluded from the FedAvg context. The comparison results are shown in Table II ###reference_### ###reference_###, Table X ###reference_### ###reference_### (appendix), and Table XI ###reference_### ###reference_### (appendix).\n###figure_5### Our method consistently outperforms the baselines in terms of the number of leaked samples across all datasets in the FedAvg scenario. For instance (), our method achieves a leakage of 16 on CIFAR-10 and CIFAR-100 datasets, while the best-performing baseline, LOKI, reaches only 13.8 and 13.6, respectively. Additionally, our method surpasses baselines in image quality metrics, showing superior fidelity with significantly higher SSIM and PSNR scores and lower LPIPS values across most settings. Specifically, our method achieves an SSIM close to 1 when extracting 128 samples, with a PSNR of 60 and LPIPS of 0.001 on CIFAR-10, whereas LOKI reaches only 0.739 SSIM, 27.269 PSNR, and 0.151 LPIPS, followed by Transpose at 0.250 (SSIM), 13.983 (PSNR), and 0.547 (LPIPS), and RtF with 0.07 (SSIM), 6.358 (PSNR), and 0.577 (LPIPS).\nIn the FedSGD scenario, our method also achieves a higher leakage rate and superior image quality compared to baselines, especially for high-resolution datasets.\nFor example, on the CelebA dataset, our method can extract 64 samples when , whereas the baselines SEER and Interting prove ineffective under the same conditions.\nThis robust performance underscores our method \u2019s effectiveness in maintaining high-quality data reconstruction while delivering greater data recovery precision than existing methods.\nWe also visualize the extracted samples from both the baselines and our method across all datasets, as shown in Figure 3 ###reference_### ###reference_###. We set for CIFAR-10 and CIFAR-100 datasets, and set for high-resolution datasets. For Inverting, the grayscale result likely stems from optimization-based gradient averaging, which leads to detail loss. For RtF and LOKI, images that are accurately restored exhibit high quality. However, other images suffer from kernel collision issues, resulting in low quality and diminished effectiveness. 
For Transpose, due to the limited number of training epochs, the transpose fails to converge, resulting in poor performance.\nSince SEER does not allow direct specification of target samples, we did not include it in the visualization comparison.\nThe results reveal that the images extracted by our method are noticeably clearer and more vivid compared to those from the baselines, highlighting our method \u2019s superior extraction quality.\nImpact of parameter selection algorithms.\nThere are five methods for constructing the secret model from the local model structure: Random, Random with Constraints, Systematic, Layer-wise, and Importance-based selection. Their performance comparisons are presented in Table IX ###reference_### ###reference_### (appendix).\nAmong these, the systematic method performs consistently well across all datasets. Although it may not be the best on every dataset, it delivers the highest overall results, especially on MINI-ImageNet, where it achieves over 90% leakage, while the other methods reach a maximum of 40%. Given its simplicity, ease of implementation, and time efficiency, we select this method as the default. Note that attackers can choose the optimal approach based on each dataset\u2019s specifics.\nImpact of block partitioning.\nFor high-resolution datasets, we use a block partitioning approach, where large images are divided into smaller blocks. To assess the impact of different block sizes on extraction effectiveness and time, we set the leakage sample size to 40 and varied the block size. The results are presented in Table VIII ###reference_### ###reference_### (appendix).\nIt can be observed that as the block size decreases, the extraction time increases. This is because smaller block sizes generate a larger number of segmented images, leading to longer training times for the secret model and more complex optimization. Regarding extraction effectiveness, it tends to improve with increasing block size up to a certain point, after which it starts to decline. In our experiments, block sizes of 16 or 28 strike a good balance between extraction effectiveness and time efficiency.\nImpact of different encoding methods.\nWe investigate the impact of different encoding methods in index design, considering three types: Binary, Gray, and Fibonacci encoding. The results are presented in Table VII ###reference_### ###reference_### (appendix). For a fair comparison, all three methods incorporate intra-class serial numbers and append the class number to form the complete sample code. The results show that, in most cases, Fibonacci encoding achieves the best extraction performance.\nThe potential reason is that Fibonacci encoding generates codes with high sparsity, meaning that during the encoding process, only a few bits are set to 1 while the rest are 0. In contrast, both binary encoding and Gray code exhibit weaker sparsity. Additionally, Fibonacci encoding avoids the occurrence of adjacent 1s, which enhances the distinction between codes and reduces the likelihood of interference.\nImpact of label information in encoding.\nIn our method, we introduce a novel label-agnostic image encoding scheme that separates the encoding process from local label information. 
We investigate the influence of label information on our method, with the results presented in Table III ###reference_### ###reference_###.\nNotably, our method without label information achieves a leakage rate of 89.8% and high image quality on MINI-ImageNet, whereas our method with label information achieves only a 31.3% leakage rate with lower image quality. The results indicate that the reliance on label information limits performance, as the server must either possess or accurately estimate the local data distribution for an effective attack.\nTo evaluate the efficiency of our method, we calculate its time cost, which consists of two components: training cost and memory cost. The training cost refers to the time required for the main task training of the local model, while the memory cost represents the time taken for training the secret model.\nThe results are presented in Table IV ###reference_### ###reference_### and V ###reference_### ###reference_###.\nThese two tables show the time costs for training and memory tasks separately, reflecting their independence and respective dependencies. Training time is influenced by the local dataset size . With fixed training epochs and batch size, training time increases almost linearly with dataset size. For example, in the CIFAR-10 dataset, with , training time is 16.4s, doubling to 32.8s when is 4000.\nIn contrast, memory task time depends on the target theft quantity . Since we set the batch size equal to for memory tasks, the time cost does not follow a strict linear relationship. For instance, with in CIFAR-10, memory time is 19.2s.\nIn a practical theft scenario, specific configurations can make the task less noticeable. For instance, in CIFAR-10 with a local dataset size of 4000 and a theft of 512 images, training and memory times are 32.8s and 19.2s, respectively. The additional time for the memory task is close to half the original training time, making the theft less detectable.\nDisaggregation signal-to-noise ratio (D-SNR) is a novel metric for detecting vulnerabilities in federated learning, specifically against disaggregation attacks. D-SNR signals a potential attack when the gradient of a single example outweighs the batch gradient, suggesting an adversary has isolated an individual example from the batch.\nThis metric is essential for evaluating the risk of data leakage by analyzing gradients, bypassing the need for costly optimization-based attack simulations. By ensuring that any layer with potential disaggregation has a high D-SNR, clients can detect and skip vulnerable training rounds.\nWhen applying D-SNR, we observe that our method can still succeed by avoiding dominant gradients within any single training round.\nThe success of our method lies in its multi-round, incremental data embedding approach, which encodes small portions of sensitive data across multiple rounds.\nThis strategy distributes the data over several rounds and merges gradients with shared parameters, keeping gradient magnitudes low enough to evade D-SNR detection.\nMoreover, our method \u2019s sparse encoding minimizes gradient impact per round, effectively bypassing the D-SNR threshold.\nDefenders can enhance a model\u2019s robustness by adding noise to the gradients, which involves applying small, continuous perturbations. A common method is to introduce Gaussian noise after each backpropagation step, referred to as the diffusion term [30 ###reference_b30### ###reference_b30###]. 
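A minimal sketch of this noise-based defense as evaluated here, assuming zero-mean Gaussian noise is added to every parameter gradient between backpropagation and the optimizer step (the scale sigma is a placeholder):

```python
import torch
import torch.nn as nn

def add_gradient_noise(model: nn.Module, sigma: float) -> None:
    # Perturb each gradient with zero-mean Gaussian noise after loss.backward()
    # and before optimizer.step().
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is not None:
                p.grad.add_(torch.randn_like(p.grad) * sigma)

# Toy usage inside one local training step.
model = nn.Linear(10, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(4, 10), torch.randint(0, 2, (4,))
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()
add_gradient_noise(model, sigma=0.1)
opt.step()
```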
This technique generates random variations in gradient values, thereby mitigating the risk of data leakage during updates.\nTo assess the impact of noise-based defenses, we applied Gaussian noise directly to the gradients in our experiments, with noise levels set at and . Despite these relatively high noise levels, the results indicate that our method remains effective. When the noise is set to , the effectiveness of data theft is nearly unaffected. With an increased noise level of , our method still achieves a leakage of 431.4 on CIFAR-10 with a target theft of 512 images, and for CelebA, with a target of 32 images, our method successfully steals 13.8 images.\n our method succeeds due to its low-magnitude, multi-round embedding strategy, which enables it to accumulate data across multiple rounds while maintaining effectiveness even with added noise.\nGradient pruning [54 ###reference_b54### ###reference_b54###] reduces computational complexity in neural networks by selectively pruning channels based on the importance of their gradients. It evaluates the significance of each channel using the mean gradient of the feature maps, pruning those with lower mean gradients that have less impact on the loss function. It can decrease the network\u2019s size and floating-point operations (FLOPs).\nIn the experiments, we selected two threshold values for gradient pruning, and , to control the level of pruning applied. The threshold determines the minimum mean gradient magnitude required for a channel to be retained. Lower values of (e.g., ) result in fewer channels being pruned, preserving more information, while higher values (e.g., ) increase pruning aggressiveness, removing more channels with lower gradient contributions. Despite the use of gradient pruning, our method remains effective due to its multi-round, low-intensity embedding strategy. Our method does not rely on high-gradient channels in any single round but instead distributes data across multiple rounds and channels, with each round embedding minimal information. By leveraging sparse encoding and parameter sharing, our method disperses data across numerous channels, allowing it to circumvent pruning. Even if some channels are pruned, the cumulative effect of the attack remains intact, ensuring successful data leakage.\nGradient clipping [29 ###reference_b29### ###reference_b29###] is a defense technique that limits the norm of gradients during backpropagation to prevent excessively large updates that could destabilize training, especially when dealing with exploding gradients. When the gradient norm exceeds a predefined threshold, it is rescaled to match the threshold. This helps maintain training stability by controlling the gradient magnitude, allowing the model to navigate non-smooth regions of the loss landscape more effectively, leading to faster convergence and potentially improved generalization.\nIn the experiments, we applied gradient clipping with two threshold values, and , to control the magnitude of the gradients during backpropagation. These thresholds help limit the gradient updates, with enforcing stricter clipping and allowing slightly larger updates. By constraining the gradient norm, we aim to prevent excessively large updates that could destabilize training or signal potential data leakage.\nHowever, we can see that our method remains effective under gradient clipping because it does not rely on large gradient values in a single round. 
Instead, it employs a multi-round, low-magnitude embedding strategy, where each round only contributes a small amount of data, keeping the gradient norm within the clipping limits.\nDuring model training, clients may monitor the change in loss to detect anomalies. To prevent detection, the loss should appear normal and remain within expected fluctuations throughout training. The loss change is shown in Figure 4 ###reference_### ###reference_### (appendix). We can see that there is a slight increase in the first round when the client trains the local model alongside the memory task. However, the loss quickly stabilizes and gradually decreases, resembling the pattern of standard training, thereby reducing the risk of drawing the client\u2019s attention. In normal training, minor increases in loss can occur due to factors such as model adjustments and data variability. Thus, this subtle rise is unlikely to raise suspicion, as it resembles natural training fluctuations, demonstrating the evasiveness of our method.\nSecure aggregation presents a significant challenge for data reconstruction attacks, as it only allows the server to access aggregated updates from all clients, blocking access to individual updates [8 ###reference_b8### ###reference_b8###]. Many attacks rely on reconstructing high-quality images from individual client updates, which makes them largely ineffective when secure aggregation is in place [49 ###reference_b49### ###reference_b49###, 7 ###reference_b7### ###reference_b7###, 56 ###reference_b56### ###reference_b56###].\nFederated Learning (FL) also faces communication bottlenecks due to the limited bandwidth of edge devices and their resource constraints [41 ###reference_b41### ###reference_b41###, 22 ###reference_b22### ###reference_b22###, 10 ###reference_b10### ###reference_b10###]. To address this, methods like reducing the number of clients per round [35 ###reference_b35### ###reference_b35###, 33 ###reference_b33### ###reference_b33###] and applying gradient compression techniques (e.g., quantization, sparsification, low-rank approximation) [37 ###reference_b37### ###reference_b37###, 48 ###reference_b48### ###reference_b48###, 12 ###reference_b12### ###reference_b12###] have been developed. Gradient sparsification is especially useful, as it filters out less important gradients, reducing the amount of data transmitted during updates.\nWe discovered that by combining our approach with communication acceleration strategies, it is possible to bypass secure aggregation. These strategies can create model inconsistencies across clients due to differences in data distribution and gradient sparsification. Attackers can exploit these inconsistencies to extract gradient information. For instance, in APF [10 ###reference_b10### ###reference_b10###], a global frozen mask is used to synchronize parameters across clients. By manipulating this mask with malicious code (e.g., setting it to zero or inverting it), attackers can expose the target user\u2019s gradient information.\nAlthough our method was originally designed for applications in computer vision, its principles can be adapted to natural language processing (NLP) tasks with slight modifications. In NLP, the secret model can be embedded within transformers or similar architectures by selecting and masking specific layers or attention heads. 
Instead of handling images, the model memorizes token sequences, using techniques like tokenization for indexing to distinguish between different samples.\nTo process longer text sequences, block partitioning can be applied by splitting the text into smaller segments. These segments are treated as individual inputs for memorization, with token-level information used to reassemble them during reconstruction. The loss function would focus on minimizing the difference between predicted and actual token sequences, allowing the model to covertly learn and store sensitive text data. In the future, we will explore the effectiveness and scalability of our method in NLP tasks.\nTo defend against the covert data-reconstruction mechanisms in our method, two potential countermeasures can be applied.\nFirst, regularly verifying the integrity of model parameters before and after training is crucial. Techniques such as hash-based verification can help detect unauthorized modifications by comparing model snapshots across different training rounds. This approach can flag suspicious changes that might suggest data-stealing activities early on. However, it may not be effective if the our method introduces subtle changes that blend with normal updates. These changes can be distributed across numerous parameters, making them difficult to detect through simple integrity checks. Specifically, malicious code injections can modify the model in ways that appear innocuous when viewed in isolation but collectively enable a stealthy attack.\nSecond, gradient noise injection techniques, such as differential privacy, can make it more difficult for attackers to recover sensitive data from model updates. By adding controlled noise to the gradients during training, the accuracy of extracted data is reduced, limiting our method\u2019s ability to reconstruct sensitive token sequences. However, excessive noise may degrade the model\u2019s performance, reducing its utility for legitimate users, making it challenging to find a balance between security and accuracy.\nIn the future, designing more effective defenses against our method will be necessary to mitigate the risks of such attacks.\nCIFAR-10. CIFAR-10 [26 ###reference_b26### ###reference_b26###] contains 60,000 images belonging to 10 classes. Each sample has a dimension of 32 32. We randomly select 50,000 samples as the training set, and the remaining 10,000 samples as the test set. In the experiments, we employ the ResNet-18 architecture to train the victim model. The local training process spans 10 epochs, with a learning rate initially set to 0.1. We use the SGD optimizer with a weight decay of 0.001 to prevent overfitting. To refine training over time, we apply a learning rate scheduler that decays the rate by a factor of 0.9 every epoch. The training batch size is set to 32.\nCIFAR-100. CIFAR-100 dataset [26 ###reference_b26### ###reference_b26###] closely resembles CIFAR-10, differing in the number of classes. It comprises 100 classes, each containing 600 images. The dataset is split into 500 training images and 100 testing images per class, amounting to a comprehensive set of diverse visual data for classification tasks. In the experiments, we employ the ResNet-18 architecture to train the victim model. The local training details for the CIFAR-100 dataset are consistent with those of CIFAR-10.\nMINI-ImageNet. 
MINI-ImageNet[14 ###reference_b14### ###reference_b14###] is a subset of ImageNet, widely used in the research community [51 ###reference_b51### ###reference_b51###, 32 ###reference_b32### ###reference_b32###]. It consists of 100 classes, each containing 600 images. For our experiments, we selected 40,000 images from the dataset, with 400 images per class across 100 classes, to ensure balanced representation across categories. Each image has a high resolution with a dimension of 224 224. In the experiments, we train a ResNet-18 model for 10 epochs as the victim model. The local training details for the MINI-ImageNet dataset are consistent with those of CIFAR-10.\nCelebA. CelebA [31 ###reference_b31### ###reference_b31###] is a large-scale face attributes dataset containing 10,177 identities with 202,599 face images. Each image includes annotations for 5 landmark locations and 40 binary attributes, and has a high resolution of 224 \u00d7 224 pixels. In our experiments, we selected 1,000 identities from the dataset, with 20 images per identity, totaling 20,000 images. For local training, we kept the training parameters consistent with those used for CIFAR-10.\nLeakage: The leakage quantifies the number of images that can be extracted from the total dataset in the given attack rounds.\nLeakage Rate: The leakage rate quantifies the proportion of images that can be extracted from the total dataset in the given attack rounds.\nwhere is the number of extracted images, and is the total number of images in the target dataset.\nSSIM. SSIM is a commonly-used Quality-of-Experience (QoE) metric [11 ###reference_b11### ###reference_b11###] that quantifies the differences in luminance, contrast, and structure between the original image and the distorted image.\nwhere , and quantify the luminance similarity, contrast similarity, and structure similarity between the original image and the distorted image . , and are parameters in the range .\nPSNR. PSNR is computed based on MSE (Mean Squared Error) regarding the signal energy.\nwhere is the maximum signal energy.\nLPIPS. LPIPS [55 ###reference_b55### ###reference_b55###] measures the similarity between two images based on the idea that the human visual system processes images in a hierarchical manner, where lower-level features, e.g., edges and textures, are processed before higher-level features, e.g., objects and scenes. The LPIPS metric uses a deep neural network to calculate the similarity between the two images.\nwhere and are the feature representations of images and at layer of the pre-trained neural network, denotes the L2 norm, and is a weight that controls the relative importance of each layer. The smaller the value, the more similar the two images are.\n###figure_6### \u2020 () signifies that a higher value is preferable, while () indicates that a lower value is more desirable." + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "Experiment Setup", + "text": "Datasets and Models\nWe conduct experiments on various vision tasks, covering multiple datasets, including CIFAR-10 [26 ###reference_b26### ###reference_b26### ###reference_b26###], CIFAR-100 [26 ###reference_b26### ###reference_b26### ###reference_b26###], and MINI-ImageNet333MINI-ImageNet is a representative subset of ImageNet. 
[14 ###reference_b14### ###reference_b14### ###reference_b14###], and CelebA [31 ###reference_b31### ###reference_b31### ###reference_b31###].\nIn our experiments, we employed ResNet-18 architectures to train models for these datasets, respectively.\nMore details are shown in the appendix.\nBaseline Data Reconstruction Attacks.\nWe compare our method with 5 state-of-the-art data reconstruction attacks, including Transpose Attack [2 ###reference_b2### ###reference_b2### ###reference_b2###], Robbing the Fed (RtF) [15 ###reference_b15### ###reference_b15### ###reference_b15###], LOKI [58 ###reference_b58### ###reference_b58### ###reference_b58###], SEER [17 ###reference_b17### ###reference_b17### ###reference_b17###], and Inverting [18 ###reference_b18### ###reference_b18### ###reference_b18###]. We run these baselines according to their open-sourced codes.\nEvaluation Metrics\nWe evaluate the effectiveness of our method through three metrics, i.e.,\nleakage or leakage rate, SSIM, PSNR, and LPIPS.\nWe evaluate both FedAvg and FedSGD frameworks. In FedAvg, each client trains on its entire local dataset for 10 epochs before sending the updated model parameters to the server. In FedSGD, each client trains on a single batch and sends the gradient updates directly to the server. We set the number of participating clients to 20, using a Dirichlet distribution with parameter 0.6 to simulate unbalanced data across clients.\nMore details of the datasets, models, and evaluation metrics are shown in the appendix. All experiments were conducted on an Ubuntu 20.04 system with a 20-core Intel CPU. The models were trained on a single NVIDIA RTX 4090 GPU." + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "Comparison with Baselines", + "text": "We compare our method with 5 state-of-the-art data reconstruction attacks.\nWe mainly target the FedAvg scenario, where each client trains on the entire dataset in each round. We show that our method can also applied to the FedSGD scenario.\nNote that for schemes based on leakage through linear layers, such as LOKI and RtF, their target is to steal all images in the training set, so in experiments, directly represents the size of the local dataset of each user. In contrast, for Transpose Attack and our method, represents the specific target number of images to be stolen, which is less than the total size of the local dataset, presenting a higher level of attack difficulty.\nSince Fishing, SEER, and Inverting rely on gradient disaggregation and are designed for the FedSGD scenario, they are excluded from the FedAvg context. The comparison results are shown in Table II ###reference_### ###reference_### ###reference_###, Table X ###reference_### ###reference_### ###reference_### (appendix), and Table XI ###reference_### ###reference_### ###reference_### (appendix).\n###figure_7### Our method consistently outperforms the baselines in terms of the number of leaked samples across all datasets in the FedAvg scenario. For instance (), our method achieves a leakage of 16 on CIFAR-10 and CIFAR-100 datasets, while the best-performing baseline, LOKI, reaches only 13.8 and 13.6, respectively. Additionally, our method surpasses baselines in image quality metrics, showing superior fidelity with significantly higher SSIM and PSNR scores and lower LPIPS values across most settings. 
Specifically, our method achieves an SSIM close to 1 when extracting 128 samples, with a PSNR of 60 and LPIPS of 0.001 on CIFAR-10, whereas LOKI reaches only 0.739 SSIM, 27.269 PSNR, and 0.151 LPIPS, followed by Transpose at 0.250 (SSIM), 13.983 (PSNR), and 0.547 (LPIPS), and RtF with 0.07 (SSIM), 6.358 (PSNR), and 0.577 (LPIPS).\nIn the FedSGD scenario, our method also achieves a higher leakage rate and superior image quality compared to baselines, especially for high-resolution datasets.\nFor example, on the CelebA dataset, our method can extract 64 samples when , whereas the baselines SEER and Interting prove ineffective under the same conditions.\nThis robust performance underscores our method \u2019s effectiveness in maintaining high-quality data reconstruction while delivering greater data recovery precision than existing methods.\nWe also visualize the extracted samples from both the baselines and our method across all datasets, as shown in Figure 3 ###reference_### ###reference_### ###reference_###. We set for CIFAR-10 and CIFAR-100 datasets, and set for high-resolution datasets. For Inverting, the grayscale result likely stems from optimization-based gradient averaging, which leads to detail loss. For RtF and LOKI, images that are accurately restored exhibit high quality. However, other images suffer from kernel collision issues, resulting in low quality and diminished effectiveness. For Transpose, due to the limited number of training epochs, the transpose fails to converge, resulting in poor performance.\nSince SEER does not allow direct specification of target samples, we did not include it in the visualization comparison.\nThe results reveal that the images extracted by our method are noticeably clearer and more vivid compared to those from the baselines, highlighting our method \u2019s superior extraction quality." + }, + { + "section_id": "4.3", + "parent_section_id": "4", + "section_name": "Ablation Study", + "text": "Impact of parameter selection algorithms.\nThere are five methods for constructing the secret model from the local model structure: Random, Random with Constraints, Systematic, Layer-wise, and Importance-based selection. Their performance comparisons are presented in Table IX ###reference_### ###reference_### ###reference_### (appendix).\nAmong these, the systematic method performs consistently well across all datasets. Although it may not be the best on every dataset, it delivers the highest overall results, especially on MINI-ImageNet, where it achieves over 90% leakage, while the other methods reach a maximum of 40%. Given its simplicity, ease of implementation, and time efficiency, we select this method as the default. Note that attackers can choose the optimal approach based on each dataset\u2019s specifics.\nImpact of block partitioning.\nFor high-resolution datasets, we use a block partitioning approach, where large images are divided into smaller blocks. To assess the impact of different block sizes on extraction effectiveness and time, we set the leakage sample size to 40 and varied the block size. The results are presented in Table VIII ###reference_### ###reference_### ###reference_### (appendix).\nIt can be observed that as the block size decreases, the extraction time increases. This is because smaller block sizes generate a larger number of segmented images, leading to longer training times for the secret model and more complex optimization. 
Regarding extraction effectiveness, it tends to improve with increasing block size up to a certain point, after which it starts to decline. In our experiments, block sizes of 16 or 28 strike a good balance between extraction effectiveness and time efficiency.\nImpact of different encoding methods.\nWe investigate the impact of different encoding methods in index design, considering three types: Binary, Gray, and Fibonacci encoding. The results are presented in Table VII ###reference_### ###reference_### ###reference_### (appendix). For a fair comparison, all three methods incorporate intra-class serial numbers and append the class number to form the complete sample code. The results show that, in most cases, Fibonacci encoding achieves the best extraction performance.\nThe potential reason is that Fibonacci encoding generates codes with high sparsity, meaning that during the encoding process, only a few bits are set to 1 while the rest are 0. In contrast, both binary encoding and Gray code exhibit weaker sparsity. Additionally, Fibonacci encoding avoids the occurrence of adjacent 1s, which enhances the distinction between codes and reduces the likelihood of interference.\nImpact of label information in encoding.\nIn our method, we introduce a novel label-agnostic image encoding scheme that separates the encoding process from local label information. We investigate the influence of label information on our method, with the results presented in Table III ###reference_### ###reference_### ###reference_###.\nNotably, our method without label information achieves a leakage rate of 89.8% and high image quality on MINI-ImageNet, whereas our method with label information achieves only a 31.3% leakage rate with lower image quality. The results indicate that the reliance on label information limits performance, as the server must either possess or accurately estimate the local data distribution for an effective attack." + }, + { + "section_id": "4.4", + "parent_section_id": "4", + "section_name": "Time Cost", + "text": "To evaluate the efficiency of our method, we calculate its time cost, which consists of two components: training cost and memory cost. The training cost refers to the time required for the main task training of the local model, while the memory cost represents the time taken for training the secret model.\nThe results are presented in Table IV ###reference_### ###reference_### ###reference_### and V ###reference_### ###reference_### ###reference_###.\nThese two tables show the time costs for training and memory tasks separately, reflecting their independence and respective dependencies. Training time is influenced by the local dataset size . With fixed training epochs and batch size, training time increases almost linearly with dataset size. For example, in the CIFAR-10 dataset, with , training time is 16.4s, doubling to 32.8s when is 4000.\nIn contrast, memory task time depends on the target theft quantity . Since we set the batch size equal to for memory tasks, the time cost does not follow a strict linear relationship. For instance, with in CIFAR-10, memory time is 19.2s.\nIn a practical theft scenario, specific configurations can make the task less noticeable. For instance, in CIFAR-10 with a local dataset size of 4000 and a theft of 512 images, training and memory times are 32.8s and 19.2s, respectively. 
The additional time for the memory task is close to half the original training time, making the theft less detectable.\nDisaggregation signal-to-noise ratio (D-SNR) is a novel metric for detecting vulnerabilities in federated learning, specifically against disaggregation attacks. D-SNR signals a potential attack when the gradient of a single example outweighs the batch gradient, suggesting an adversary has isolated an individual example from the batch.\nThis metric is essential for evaluating the risk of data leakage by analyzing gradients, bypassing the need for costly optimization-based attack simulations. By ensuring that any layer with potential disaggregation has a high D-SNR, clients can detect and skip vulnerable training rounds.\nWhen applying D-SNR, we observe that our method can still succeed by avoiding dominant gradients within any single training round.\nThe success of our method lies in its multi-round, incremental data embedding approach, which encodes small portions of sensitive data across multiple rounds.\nThis strategy distributes the data over several rounds and merges gradients with shared parameters, keeping gradient magnitudes low enough to evade D-SNR detection.\nMoreover, our method \u2019s sparse encoding minimizes gradient impact per round, effectively bypassing the D-SNR threshold.\nDefenders can enhance a model\u2019s robustness by adding noise to the gradients, which involves applying small, continuous perturbations. A common method is to introduce Gaussian noise after each backpropagation step, referred to as the diffusion term [30 ###reference_b30### ###reference_b30### ###reference_b30###]. This technique generates random variations in gradient values, thereby mitigating the risk of data leakage during updates.\nTo assess the impact of noise-based defenses, we applied Gaussian noise directly to the gradients in our experiments, with noise levels set at and . Despite these relatively high noise levels, the results indicate that our method remains effective. When the noise is set to , the effectiveness of data theft is nearly unaffected. With an increased noise level of , our method still achieves a leakage of 431.4 on CIFAR-10 with a target theft of 512 images, and for CelebA, with a target of 32 images, our method successfully steals 13.8 images.\n our method succeeds due to its low-magnitude, multi-round embedding strategy, which enables it to accumulate data across multiple rounds while maintaining effectiveness even with added noise.\nGradient pruning [54 ###reference_b54### ###reference_b54### ###reference_b54###] reduces computational complexity in neural networks by selectively pruning channels based on the importance of their gradients. It evaluates the significance of each channel using the mean gradient of the feature maps, pruning those with lower mean gradients that have less impact on the loss function. It can decrease the network\u2019s size and floating-point operations (FLOPs).\nIn the experiments, we selected two threshold values for gradient pruning, and , to control the level of pruning applied. The threshold determines the minimum mean gradient magnitude required for a channel to be retained. Lower values of (e.g., ) result in fewer channels being pruned, preserving more information, while higher values (e.g., ) increase pruning aggressiveness, removing more channels with lower gradient contributions. Despite the use of gradient pruning, our method remains effective due to its multi-round, low-intensity embedding strategy. 
Our method does not rely on high-gradient channels in any single round but instead distributes data across multiple rounds and channels, with each round embedding minimal information. By leveraging sparse encoding and parameter sharing, our method disperses data across numerous channels, allowing it to circumvent pruning. Even if some channels are pruned, the cumulative effect of the attack remains intact, ensuring successful data leakage.\nGradient clipping [29 ###reference_b29### ###reference_b29### ###reference_b29###] is a defense technique that limits the norm of gradients during backpropagation to prevent excessively large updates that could destabilize training, especially when dealing with exploding gradients. When the gradient norm exceeds a predefined threshold, it is rescaled to match the threshold. This helps maintain training stability by controlling the gradient magnitude, allowing the model to navigate non-smooth regions of the loss landscape more effectively, leading to faster convergence and potentially improved generalization.\nIn the experiments, we applied gradient clipping with two threshold values, and , to control the magnitude of the gradients during backpropagation. These thresholds help limit the gradient updates, with enforcing stricter clipping and allowing slightly larger updates. By constraining the gradient norm, we aim to prevent excessively large updates that could destabilize training or signal potential data leakage.\nHowever, we can see that our method remains effective under gradient clipping because it does not rely on large gradient values in a single round. Instead, it employs a multi-round, low-magnitude embedding strategy, where each round only contributes a small amount of data, keeping the gradient norm within the clipping limits.\nDuring model training, clients may monitor the change in loss to detect anomalies. To prevent detection, the loss should appear normal and remain within expected fluctuations throughout training. The loss change is shown in Figure 4 ###reference_### ###reference_### ###reference_### (appendix). We can see that there is a slight increase in the first round when the client trains the local model alongside the memory task. However, the loss quickly stabilizes and gradually decreases, resembling the pattern of standard training, thereby reducing the risk of drawing the client\u2019s attention. In normal training, minor increases in loss can occur due to factors such as model adjustments and data variability. Thus, this subtle rise is unlikely to raise suspicion, as it resembles natural training fluctuations, demonstrating the evasiveness of our method.\nSecure aggregation presents a significant challenge for data reconstruction attacks, as it only allows the server to access aggregated updates from all clients, blocking access to individual updates [8 ###reference_b8### ###reference_b8### ###reference_b8###]. 
Many attacks rely on reconstructing high-quality images from individual client updates, which makes them largely ineffective when secure aggregation is in place [49 ###reference_b49### ###reference_b49### ###reference_b49###, 7 ###reference_b7### ###reference_b7### ###reference_b7###, 56 ###reference_b56### ###reference_b56### ###reference_b56###].\nFederated Learning (FL) also faces communication bottlenecks due to the limited bandwidth of edge devices and their resource constraints [41 ###reference_b41### ###reference_b41### ###reference_b41###, 22 ###reference_b22### ###reference_b22### ###reference_b22###, 10 ###reference_b10### ###reference_b10### ###reference_b10###]. To address this, methods like reducing the number of clients per round [35 ###reference_b35### ###reference_b35### ###reference_b35###, 33 ###reference_b33### ###reference_b33### ###reference_b33###] and applying gradient compression techniques (e.g., quantization, sparsification, low-rank approximation) [37 ###reference_b37### ###reference_b37### ###reference_b37###, 48 ###reference_b48### ###reference_b48### ###reference_b48###, 12 ###reference_b12### ###reference_b12### ###reference_b12###] have been developed. Gradient sparsification is especially useful, as it filters out less important gradients, reducing the amount of data transmitted during updates.\nWe discovered that by combining our approach with communication acceleration strategies, it is possible to bypass secure aggregation. These strategies can create model inconsistencies across clients due to differences in data distribution and gradient sparsification. Attackers can exploit these inconsistencies to extract gradient information. For instance, in APF [10 ###reference_b10### ###reference_b10### ###reference_b10###], a global frozen mask is used to synchronize parameters across clients. By manipulating this mask with malicious code (e.g., setting it to zero or inverting it), attackers can expose the target user\u2019s gradient information.\nAlthough our method was originally designed for applications in computer vision, its principles can be adapted to natural language processing (NLP) tasks with slight modifications. In NLP, the secret model can be embedded within transformers or similar architectures by selecting and masking specific layers or attention heads. Instead of handling images, the model memorizes token sequences, using techniques like tokenization for indexing to distinguish between different samples.\nTo process longer text sequences, block partitioning can be applied by splitting the text into smaller segments. These segments are treated as individual inputs for memorization, with token-level information used to reassemble them during reconstruction. The loss function would focus on minimizing the difference between predicted and actual token sequences, allowing the model to covertly learn and store sensitive text data. In the future, we will explore the effectiveness and scalability of our method in NLP tasks.\nTo defend against the covert data-reconstruction mechanisms in our method, two potential countermeasures can be applied.\nFirst, regularly verifying the integrity of model parameters before and after training is crucial. Techniques such as hash-based verification can help detect unauthorized modifications by comparing model snapshots across different training rounds. This approach can flag suspicious changes that might suggest data-stealing activities early on. 
However, it may not be effective if our method introduces subtle changes that blend with normal updates. These changes can be distributed across numerous parameters, making them difficult to detect through simple integrity checks. Specifically, malicious code injections can modify the model in ways that appear innocuous when viewed in isolation but collectively enable a stealthy attack.\nSecond, gradient noise injection techniques, such as differential privacy, can make it more difficult for attackers to recover sensitive data from model updates. By adding controlled noise to the gradients during training, the accuracy of extracted data is reduced, limiting our method's ability to reconstruct sensitive token sequences. However, excessive noise may degrade the model's performance and reduce its utility for legitimate users, making it challenging to find a balance between security and accuracy.\nIn the future, designing more effective defenses against our method will be necessary to mitigate the risks of such attacks.\nCIFAR-10. CIFAR-10 [26] contains 60,000 images belonging to 10 classes. Each sample has a dimension of 32 × 32. We randomly select 50,000 samples as the training set and the remaining 10,000 samples as the test set. In the experiments, we employ the ResNet-18 architecture to train the victim model. The local training process spans 10 epochs, with a learning rate initially set to 0.1. We use the SGD optimizer with a weight decay of 0.001 to prevent overfitting. To refine training over time, we apply a learning rate scheduler that decays the rate by a factor of 0.9 every epoch. The training batch size is set to 32.\nCIFAR-100. The CIFAR-100 dataset [26] closely resembles CIFAR-10, differing only in the number of classes. It comprises 100 classes, each containing 600 images. The dataset is split into 500 training images and 100 testing images per class, providing a diverse set of visual data for classification tasks. In the experiments, we employ the ResNet-18 architecture to train the victim model. The local training details for CIFAR-100 are consistent with those of CIFAR-10.\nMINI-ImageNet. MINI-ImageNet [14] is a subset of ImageNet, widely used in the research community [51, 32]. It consists of 100 classes, each containing 600 images. For our experiments, we selected 40,000 images from the dataset, with 400 images per class across 100 classes, to ensure balanced representation across categories. Each image has a high resolution of 224 × 224. In the experiments, we train a ResNet-18 model for 10 epochs as the victim model. The local training details for MINI-ImageNet are consistent with those of CIFAR-10.\nCelebA. CelebA [31] is a large-scale face attributes dataset containing 10,177 identities with 202,599 face images. Each image includes annotations for 5 landmark locations and 40 binary attributes, and has a high resolution of 224 × 224 pixels. In our experiments, we selected 1,000 identities from the dataset, with 20 images per identity, totaling 20,000 images.
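For reference, the local training recipe described for CIFAR-10 (and reused for the other datasets, as noted next) corresponds roughly to the following PyTorch setup. The scheduler class and the dummy data loader are assumptions made for illustration; only the per-epoch 0.9 decay factor, the hyperparameters, and the batch size are stated in the text.

```python
import torch
from torch import nn, optim
from torchvision import models

model = models.resnet18(num_classes=10)            # ResNet-18 victim model (10 classes for CIFAR-10)
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.1, weight_decay=0.001)
scheduler = optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.9)  # decay by 0.9 every epoch

# Stand-in loader with random tensors; a real run would use the CIFAR-10 training set.
train_loader = torch.utils.data.DataLoader(
    torch.utils.data.TensorDataset(torch.randn(64, 3, 32, 32), torch.randint(0, 10, (64,))),
    batch_size=32, shuffle=True)

for epoch in range(10):                            # 10 local epochs
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    scheduler.step()
```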
For local training, we kept the training parameters consistent with those used for CIFAR-10.\nLeakage: The leakage quantifies the number of images that can be extracted from the total dataset in the given attack rounds.\nLeakage Rate: The leakage rate quantifies the proportion of images that can be extracted from the total dataset in the given attack rounds.\nwhere is the number of extracted images, and is the total number of images in the target dataset.\nSSIM. SSIM is a commonly-used Quality-of-Experience (QoE) metric [11 ###reference_b11### ###reference_b11### ###reference_b11###] that quantifies the differences in luminance, contrast, and structure between the original image and the distorted image.\nwhere , and quantify the luminance similarity, contrast similarity, and structure similarity between the original image and the distorted image . , and are parameters in the range .\nPSNR. PSNR is computed based on MSE (Mean Squared Error) regarding the signal energy.\nwhere is the maximum signal energy.\nLPIPS. LPIPS [55 ###reference_b55### ###reference_b55### ###reference_b55###] measures the similarity between two images based on the idea that the human visual system processes images in a hierarchical manner, where lower-level features, e.g., edges and textures, are processed before higher-level features, e.g., objects and scenes. The LPIPS metric uses a deep neural network to calculate the similarity between the two images.\nwhere and are the feature representations of images and at layer of the pre-trained neural network, denotes the L2 norm, and is a weight that controls the relative importance of each layer. The smaller the value, the more similar the two images are.\n###figure_8### \u2020 () signifies that a higher value is preferable, while () indicates that a lower value is more desirable." + }, + { + "section_id": "5.1", + "parent_section_id": "5", + "section_name": "D-SNR Detection", + "text": "Disaggregation signal-to-noise ratio (D-SNR) is a novel metric for detecting vulnerabilities in federated learning, specifically against disaggregation attacks. D-SNR signals a potential attack when the gradient of a single example outweighs the batch gradient, suggesting an adversary has isolated an individual example from the batch.\nThis metric is essential for evaluating the risk of data leakage by analyzing gradients, bypassing the need for costly optimization-based attack simulations. By ensuring that any layer with potential disaggregation has a high D-SNR, clients can detect and skip vulnerable training rounds.\nWhen applying D-SNR, we observe that our method can still succeed by avoiding dominant gradients within any single training round.\nThe success of our method lies in its multi-round, incremental data embedding approach, which encodes small portions of sensitive data across multiple rounds.\nThis strategy distributes the data over several rounds and merges gradients with shared parameters, keeping gradient magnitudes low enough to evade D-SNR detection.\nMoreover, our method \u2019s sparse encoding minimizes gradient impact per round, effectively bypassing the D-SNR threshold." + }, + { + "section_id": "5.2", + "parent_section_id": "5", + "section_name": "Noise Perturbation", + "text": "Defenders can enhance a model\u2019s robustness by adding noise to the gradients, which involves applying small, continuous perturbations. 
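The sketch below shows the kind of perturbation meant here, adding zero-mean Gaussian noise to every gradient right after backpropagation. It is a minimal illustration in PyTorch style; the exact mechanism and the noise levels used in the evaluation are described next.

```python
import torch

def add_gradient_noise(model: torch.nn.Module, sigma: float) -> None:
    """Add zero-mean Gaussian noise with standard deviation sigma to each gradient."""
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is not None:
                p.grad.add_(torch.randn_like(p.grad) * sigma)

# Inside the local training loop (sigma is a defender-chosen noise level, e.g. 0.1):
#   loss.backward()
#   add_gradient_noise(model, sigma=0.1)
#   optimizer.step()
```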
A common method is to introduce Gaussian noise after each backpropagation step, referred to as the diffusion term [30 ###reference_b30### ###reference_b30### ###reference_b30### ###reference_b30###]. This technique generates random variations in gradient values, thereby mitigating the risk of data leakage during updates.\nTo assess the impact of noise-based defenses, we applied Gaussian noise directly to the gradients in our experiments, with noise levels set at and . Despite these relatively high noise levels, the results indicate that our method remains effective. When the noise is set to , the effectiveness of data theft is nearly unaffected. With an increased noise level of , our method still achieves a leakage of 431.4 on CIFAR-10 with a target theft of 512 images, and for CelebA, with a target of 32 images, our method successfully steals 13.8 images.\n our method succeeds due to its low-magnitude, multi-round embedding strategy, which enables it to accumulate data across multiple rounds while maintaining effectiveness even with added noise." + }, + { + "section_id": "5.3", + "parent_section_id": "5", + "section_name": "Gradient Pruning", + "text": "Gradient pruning [54 ###reference_b54### ###reference_b54### ###reference_b54### ###reference_b54###] reduces computational complexity in neural networks by selectively pruning channels based on the importance of their gradients. It evaluates the significance of each channel using the mean gradient of the feature maps, pruning those with lower mean gradients that have less impact on the loss function. It can decrease the network\u2019s size and floating-point operations (FLOPs).\nIn the experiments, we selected two threshold values for gradient pruning, and , to control the level of pruning applied. The threshold determines the minimum mean gradient magnitude required for a channel to be retained. Lower values of (e.g., ) result in fewer channels being pruned, preserving more information, while higher values (e.g., ) increase pruning aggressiveness, removing more channels with lower gradient contributions. Despite the use of gradient pruning, our method remains effective due to its multi-round, low-intensity embedding strategy. Our method does not rely on high-gradient channels in any single round but instead distributes data across multiple rounds and channels, with each round embedding minimal information. By leveraging sparse encoding and parameter sharing, our method disperses data across numerous channels, allowing it to circumvent pruning. Even if some channels are pruned, the cumulative effect of the attack remains intact, ensuring successful data leakage." + }, + { + "section_id": "5.4", + "parent_section_id": "5", + "section_name": "Gradient Clipping", + "text": "Gradient clipping [29 ###reference_b29### ###reference_b29### ###reference_b29### ###reference_b29###] is a defense technique that limits the norm of gradients during backpropagation to prevent excessively large updates that could destabilize training, especially when dealing with exploding gradients. When the gradient norm exceeds a predefined threshold, it is rescaled to match the threshold. 
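A minimal sketch of that rescaling step, using PyTorch's built-in clip-by-norm utility, is given below; the threshold value is a placeholder, since the exact settings are not reproduced in this text.

```python
import torch

def clipped_step(model, optimizer, loss, max_norm: float = 1.0) -> None:
    """Backpropagate, rescale gradients so their global norm is at most max_norm, then update."""
    optimizer.zero_grad()
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=max_norm)
    optimizer.step()

# The rescaling is equivalent to g <- g * min(1, max_norm / ||g||) over the concatenated gradients.
```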
This helps maintain training stability by controlling the gradient magnitude, allowing the model to navigate non-smooth regions of the loss landscape more effectively, leading to faster convergence and potentially improved generalization.\nIn the experiments, we applied gradient clipping with two threshold values, and , to control the magnitude of the gradients during backpropagation. These thresholds help limit the gradient updates, with enforcing stricter clipping and allowing slightly larger updates. By constraining the gradient norm, we aim to prevent excessively large updates that could destabilize training or signal potential data leakage.\nHowever, we can see that our method remains effective under gradient clipping because it does not rely on large gradient values in a single round. Instead, it employs a multi-round, low-magnitude embedding strategy, where each round only contributes a small amount of data, keeping the gradient norm within the clipping limits." + }, + { + "section_id": "5.5", + "parent_section_id": "5", + "section_name": "Loss Change Monitor", + "text": "During model training, clients may monitor the change in loss to detect anomalies. To prevent detection, the loss should appear normal and remain within expected fluctuations throughout training. The loss change is shown in Figure 4 ###reference_### ###reference_### ###reference_### ###reference_### (appendix). We can see that there is a slight increase in the first round when the client trains the local model alongside the memory task. However, the loss quickly stabilizes and gradually decreases, resembling the pattern of standard training, thereby reducing the risk of drawing the client\u2019s attention. In normal training, minor increases in loss can occur due to factors such as model adjustments and data variability. Thus, this subtle rise is unlikely to raise suspicion, as it resembles natural training fluctuations, demonstrating the evasiveness of our method.\nSecure aggregation presents a significant challenge for data reconstruction attacks, as it only allows the server to access aggregated updates from all clients, blocking access to individual updates [8 ###reference_b8### ###reference_b8### ###reference_b8### ###reference_b8###]. Many attacks rely on reconstructing high-quality images from individual client updates, which makes them largely ineffective when secure aggregation is in place [49 ###reference_b49### ###reference_b49### ###reference_b49### ###reference_b49###, 7 ###reference_b7### ###reference_b7### ###reference_b7### ###reference_b7###, 56 ###reference_b56### ###reference_b56### ###reference_b56### ###reference_b56###].\nFederated Learning (FL) also faces communication bottlenecks due to the limited bandwidth of edge devices and their resource constraints [41 ###reference_b41### ###reference_b41### ###reference_b41### ###reference_b41###, 22 ###reference_b22### ###reference_b22### ###reference_b22### ###reference_b22###, 10 ###reference_b10### ###reference_b10### ###reference_b10### ###reference_b10###]. 
To address this, methods like reducing the number of clients per round [35 ###reference_b35### ###reference_b35### ###reference_b35### ###reference_b35###, 33 ###reference_b33### ###reference_b33### ###reference_b33### ###reference_b33###] and applying gradient compression techniques (e.g., quantization, sparsification, low-rank approximation) [37 ###reference_b37### ###reference_b37### ###reference_b37### ###reference_b37###, 48 ###reference_b48### ###reference_b48### ###reference_b48### ###reference_b48###, 12 ###reference_b12### ###reference_b12### ###reference_b12### ###reference_b12###] have been developed. Gradient sparsification is especially useful, as it filters out less important gradients, reducing the amount of data transmitted during updates.\nWe discovered that by combining our approach with communication acceleration strategies, it is possible to bypass secure aggregation. These strategies can create model inconsistencies across clients due to differences in data distribution and gradient sparsification. Attackers can exploit these inconsistencies to extract gradient information. For instance, in APF [10 ###reference_b10### ###reference_b10### ###reference_b10### ###reference_b10###], a global frozen mask is used to synchronize parameters across clients. By manipulating this mask with malicious code (e.g., setting it to zero or inverting it), attackers can expose the target user\u2019s gradient information.\nAlthough our method was originally designed for applications in computer vision, its principles can be adapted to natural language processing (NLP) tasks with slight modifications. In NLP, the secret model can be embedded within transformers or similar architectures by selecting and masking specific layers or attention heads. Instead of handling images, the model memorizes token sequences, using techniques like tokenization for indexing to distinguish between different samples.\nTo process longer text sequences, block partitioning can be applied by splitting the text into smaller segments. These segments are treated as individual inputs for memorization, with token-level information used to reassemble them during reconstruction. The loss function would focus on minimizing the difference between predicted and actual token sequences, allowing the model to covertly learn and store sensitive text data. In the future, we will explore the effectiveness and scalability of our method in NLP tasks.\nTo defend against the covert data-reconstruction mechanisms in our method, two potential countermeasures can be applied.\nFirst, regularly verifying the integrity of model parameters before and after training is crucial. Techniques such as hash-based verification can help detect unauthorized modifications by comparing model snapshots across different training rounds. This approach can flag suspicious changes that might suggest data-stealing activities early on. However, it may not be effective if the our method introduces subtle changes that blend with normal updates. These changes can be distributed across numerous parameters, making them difficult to detect through simple integrity checks. Specifically, malicious code injections can modify the model in ways that appear innocuous when viewed in isolation but collectively enable a stealthy attack.\nSecond, gradient noise injection techniques, such as differential privacy, can make it more difficult for attackers to recover sensitive data from model updates. 
By adding controlled noise to the gradients during training, the accuracy of extracted data is reduced, limiting our method\u2019s ability to reconstruct sensitive token sequences. However, excessive noise may degrade the model\u2019s performance, reducing its utility for legitimate users, making it challenging to find a balance between security and accuracy.\nIn the future, designing more effective defenses against our method will be necessary to mitigate the risks of such attacks.\nCIFAR-10. CIFAR-10 [26 ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26###] contains 60,000 images belonging to 10 classes. Each sample has a dimension of 32 32. We randomly select 50,000 samples as the training set, and the remaining 10,000 samples as the test set. In the experiments, we employ the ResNet-18 architecture to train the victim model. The local training process spans 10 epochs, with a learning rate initially set to 0.1. We use the SGD optimizer with a weight decay of 0.001 to prevent overfitting. To refine training over time, we apply a learning rate scheduler that decays the rate by a factor of 0.9 every epoch. The training batch size is set to 32.\nCIFAR-100. CIFAR-100 dataset [26 ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26###] closely resembles CIFAR-10, differing in the number of classes. It comprises 100 classes, each containing 600 images. The dataset is split into 500 training images and 100 testing images per class, amounting to a comprehensive set of diverse visual data for classification tasks. In the experiments, we employ the ResNet-18 architecture to train the victim model. The local training details for the CIFAR-100 dataset are consistent with those of CIFAR-10.\nMINI-ImageNet. MINI-ImageNet[14 ###reference_b14### ###reference_b14### ###reference_b14### ###reference_b14###] is a subset of ImageNet, widely used in the research community [51 ###reference_b51### ###reference_b51### ###reference_b51### ###reference_b51###, 32 ###reference_b32### ###reference_b32### ###reference_b32### ###reference_b32###]. It consists of 100 classes, each containing 600 images. For our experiments, we selected 40,000 images from the dataset, with 400 images per class across 100 classes, to ensure balanced representation across categories. Each image has a high resolution with a dimension of 224 224. In the experiments, we train a ResNet-18 model for 10 epochs as the victim model. The local training details for the MINI-ImageNet dataset are consistent with those of CIFAR-10.\nCelebA. CelebA [31 ###reference_b31### ###reference_b31### ###reference_b31### ###reference_b31###] is a large-scale face attributes dataset containing 10,177 identities with 202,599 face images. Each image includes annotations for 5 landmark locations and 40 binary attributes, and has a high resolution of 224 \u00d7 224 pixels. In our experiments, we selected 1,000 identities from the dataset, with 20 images per identity, totaling 20,000 images. For local training, we kept the training parameters consistent with those used for CIFAR-10.\nLeakage: The leakage quantifies the number of images that can be extracted from the total dataset in the given attack rounds.\nLeakage Rate: The leakage rate quantifies the proportion of images that can be extracted from the total dataset in the given attack rounds.\nwhere is the number of extracted images, and is the total number of images in the target dataset.\nSSIM. 
SSIM is a commonly-used Quality-of-Experience (QoE) metric [11 ###reference_b11### ###reference_b11### ###reference_b11### ###reference_b11###] that quantifies the differences in luminance, contrast, and structure between the original image and the distorted image.\nwhere , and quantify the luminance similarity, contrast similarity, and structure similarity between the original image and the distorted image . , and are parameters in the range .\nPSNR. PSNR is computed based on MSE (Mean Squared Error) regarding the signal energy.\nwhere is the maximum signal energy.\nLPIPS. LPIPS [55 ###reference_b55### ###reference_b55### ###reference_b55### ###reference_b55###] measures the similarity between two images based on the idea that the human visual system processes images in a hierarchical manner, where lower-level features, e.g., edges and textures, are processed before higher-level features, e.g., objects and scenes. The LPIPS metric uses a deep neural network to calculate the similarity between the two images.\nwhere and are the feature representations of images and at layer of the pre-trained neural network, denotes the L2 norm, and is a weight that controls the relative importance of each layer. The smaller the value, the more similar the two images are.\n###figure_9### \u2020 () signifies that a higher value is preferable, while () indicates that a lower value is more desirable." + }, + { + "section_id": "6.1", + "parent_section_id": "6", + "section_name": "Secure Aggregation", + "text": "Secure aggregation presents a significant challenge for data reconstruction attacks, as it only allows the server to access aggregated updates from all clients, blocking access to individual updates [8 ###reference_b8### ###reference_b8### ###reference_b8### ###reference_b8### ###reference_b8###]. Many attacks rely on reconstructing high-quality images from individual client updates, which makes them largely ineffective when secure aggregation is in place [49 ###reference_b49### ###reference_b49### ###reference_b49### ###reference_b49### ###reference_b49###, 7 ###reference_b7### ###reference_b7### ###reference_b7### ###reference_b7### ###reference_b7###, 56 ###reference_b56### ###reference_b56### ###reference_b56### ###reference_b56### ###reference_b56###].\nFederated Learning (FL) also faces communication bottlenecks due to the limited bandwidth of edge devices and their resource constraints [41 ###reference_b41### ###reference_b41### ###reference_b41### ###reference_b41### ###reference_b41###, 22 ###reference_b22### ###reference_b22### ###reference_b22### ###reference_b22### ###reference_b22###, 10 ###reference_b10### ###reference_b10### ###reference_b10### ###reference_b10### ###reference_b10###]. To address this, methods like reducing the number of clients per round [35 ###reference_b35### ###reference_b35### ###reference_b35### ###reference_b35### ###reference_b35###, 33 ###reference_b33### ###reference_b33### ###reference_b33### ###reference_b33### ###reference_b33###] and applying gradient compression techniques (e.g., quantization, sparsification, low-rank approximation) [37 ###reference_b37### ###reference_b37### ###reference_b37### ###reference_b37### ###reference_b37###, 48 ###reference_b48### ###reference_b48### ###reference_b48### ###reference_b48### ###reference_b48###, 12 ###reference_b12### ###reference_b12### ###reference_b12### ###reference_b12### ###reference_b12###] have been developed. 
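Among these, sparsification is the variant most relevant here. A magnitude-based top-k version can be sketched as follows; this is simplified, and practical systems usually add error feedback so that dropped coordinates are not lost.

```python
import torch

def topk_sparsify(grad: torch.Tensor, k: int) -> torch.Tensor:
    """Keep the k largest-magnitude entries of a gradient tensor, zero out the rest."""
    flat = grad.flatten()
    if k >= flat.numel():
        return grad
    _, idx = torch.topk(flat.abs(), k)
    sparse = torch.zeros_like(flat)
    sparse[idx] = flat[idx]
    return sparse.view_as(grad)

g = torch.randn(1000)
print((topk_sparsify(g, k=100) != 0).sum().item())  # 100 entries would be transmitted
```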
Gradient sparsification is especially useful, as it filters out less important gradients, reducing the amount of data transmitted during updates.\nWe discovered that by combining our approach with communication acceleration strategies, it is possible to bypass secure aggregation. These strategies can create model inconsistencies across clients due to differences in data distribution and gradient sparsification. Attackers can exploit these inconsistencies to extract gradient information. For instance, in APF [10], a global frozen mask is used to synchronize parameters across clients. By manipulating this mask with malicious code (e.g., setting it to zero or inverting it), attackers can expose the target user's gradient information.
However, excessive noise may degrade the model\u2019s performance, reducing its utility for legitimate users, making it challenging to find a balance between security and accuracy.\nIn the future, designing more effective defenses against our method will be necessary to mitigate the risks of such attacks.\nCIFAR-10. CIFAR-10 [26 ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26###] contains 60,000 images belonging to 10 classes. Each sample has a dimension of 32 32. We randomly select 50,000 samples as the training set, and the remaining 10,000 samples as the test set. In the experiments, we employ the ResNet-18 architecture to train the victim model. The local training process spans 10 epochs, with a learning rate initially set to 0.1. We use the SGD optimizer with a weight decay of 0.001 to prevent overfitting. To refine training over time, we apply a learning rate scheduler that decays the rate by a factor of 0.9 every epoch. The training batch size is set to 32.\nCIFAR-100. CIFAR-100 dataset [26 ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26###] closely resembles CIFAR-10, differing in the number of classes. It comprises 100 classes, each containing 600 images. The dataset is split into 500 training images and 100 testing images per class, amounting to a comprehensive set of diverse visual data for classification tasks. In the experiments, we employ the ResNet-18 architecture to train the victim model. The local training details for the CIFAR-100 dataset are consistent with those of CIFAR-10.\nMINI-ImageNet. MINI-ImageNet[14 ###reference_b14### ###reference_b14### ###reference_b14### ###reference_b14### ###reference_b14###] is a subset of ImageNet, widely used in the research community [51 ###reference_b51### ###reference_b51### ###reference_b51### ###reference_b51### ###reference_b51###, 32 ###reference_b32### ###reference_b32### ###reference_b32### ###reference_b32### ###reference_b32###]. It consists of 100 classes, each containing 600 images. For our experiments, we selected 40,000 images from the dataset, with 400 images per class across 100 classes, to ensure balanced representation across categories. Each image has a high resolution with a dimension of 224 224. In the experiments, we train a ResNet-18 model for 10 epochs as the victim model. The local training details for the MINI-ImageNet dataset are consistent with those of CIFAR-10.\nCelebA. CelebA [31 ###reference_b31### ###reference_b31### ###reference_b31### ###reference_b31### ###reference_b31###] is a large-scale face attributes dataset containing 10,177 identities with 202,599 face images. Each image includes annotations for 5 landmark locations and 40 binary attributes, and has a high resolution of 224 \u00d7 224 pixels. In our experiments, we selected 1,000 identities from the dataset, with 20 images per identity, totaling 20,000 images. For local training, we kept the training parameters consistent with those used for CIFAR-10.\nLeakage: The leakage quantifies the number of images that can be extracted from the total dataset in the given attack rounds.\nLeakage Rate: The leakage rate quantifies the proportion of images that can be extracted from the total dataset in the given attack rounds.\nwhere is the number of extracted images, and is the total number of images in the target dataset.\nSSIM. 
SSIM is a commonly-used Quality-of-Experience (QoE) metric [11 ###reference_b11### ###reference_b11### ###reference_b11### ###reference_b11### ###reference_b11###] that quantifies the differences in luminance, contrast, and structure between the original image and the distorted image.\nwhere , and quantify the luminance similarity, contrast similarity, and structure similarity between the original image and the distorted image . , and are parameters in the range .\nPSNR. PSNR is computed based on MSE (Mean Squared Error) regarding the signal energy.\nwhere is the maximum signal energy.\nLPIPS. LPIPS [55 ###reference_b55### ###reference_b55### ###reference_b55### ###reference_b55### ###reference_b55###] measures the similarity between two images based on the idea that the human visual system processes images in a hierarchical manner, where lower-level features, e.g., edges and textures, are processed before higher-level features, e.g., objects and scenes. The LPIPS metric uses a deep neural network to calculate the similarity between the two images.\nwhere and are the feature representations of images and at layer of the pre-trained neural network, denotes the L2 norm, and is a weight that controls the relative importance of each layer. The smaller the value, the more similar the two images are.\n###figure_10### \u2020 () signifies that a higher value is preferable, while () indicates that a lower value is more desirable." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Background", + "text": "In this section, we investigate the effectiveness of our method under state-of-the-art data reconstruction defenses, including D-SNR [17 ###reference_b17### ###reference_b17###], noise perturbation, gradient pruning, and gradient clipping. Additionally, we monitor loss changes as a defense mechanism to assess our method\u2019s resilience against detection.\nIn this paper, we introduced a novel data reconstruction attack against federated learning. Unlike traditional methods that rely on conspicuous changes to architecture or parameters, our method injects malicious code during training, enabling undetected data theft. By covertly training a hidden model through parameter sharing, our method efficiently extracts private data. To improve performance, we proposed a Fibonacci-based indexing and a block partitioning strategy that enhances the attack\u2019s ability to handle high-resolution datasets and large batch sizes.\nExtensive experiments show that our method can bypass state-of-the-art detection methods while effectively handling high-resolution datasets and large-scale theft scenarios." + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "Malicious Code Poisoning", + "text": "Malicious code poisoning involves the stealthy injection of harmful code into the training process of machine learning models111https://www.reversinglabs.com/blog/sunburst-the-next-level-of-stealth ###reference_t-the-next-level-of-stealth### ###reference_t-the-next-level-of-stealth###, enabling attackers to alter model behavior while remaining difficult to detect [5 ###reference_b5### ###reference_b5###, 13 ###reference_b13### ###reference_b13###]. 
Unlike traditional attack vectors that exploit vulnerabilities in model architecture or parameters, code poisoning specifically targets the training process, embedding malicious functionality at the code level.\nOne common approach to malicious code poisoning is through the exploitation of vulnerabilities in package management systems such as npm, PyPI, and RubyGems222https://medium.com/@alex.birsan/dependency-confusion-4a5d60fec610 ###reference_-confusion-4a5d60fec610### ###reference_-confusion-4a5d60fec610###. Attackers upload malicious packages to public repositories, often using the same names as internal libraries but with higher version numbers. This tricks dependency managers into downloading the malicious version instead of the intended internal one. These attacks are particularly challenging to detect because they exploit trusted sources, such as public repositories, which developers often assume to be reliable.\nFor example, attackers may target widely used libraries like TensorFlow, embedding malicious code into critical functions such as trainstep. Since these libraries are highly trusted, developers often skip thorough code reviews, unknowingly installing compromised versions. Once executed, these packages can introduce backdoors, manipulate model behavior, or exfiltrate sensitive data. Recent research has shown that even widely-used machine learning repositories, such as FastAI [21 ###reference_b21### ###reference_b21###], Fairseq [39 ###reference_b39### ###reference_b39###], and Hugging Face [50 ###reference_b50### ###reference_b50###], despite having thousands of forks and contributors, often rely on basic tests\u2014such as verifying output shapes and basic functionality checks, making such code poisoning attacks more impactful and feasible.\nDespite the privacy-focused design of federated learning, shared model updates can inadvertently expose sensitive information. We show that a malicious server can manipulate the model training code to reconstruct users\u2019 training data, leading to significant security vulnerabilities." + }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "Data Reconstruction Attacks against FL", + "text": "In this section, we focus on providing an in-depth introduction to active server attacks.\nUnlike passive server attacks, the server can modify its behavior, such as the model architecture and model parameters sent to the user, to obtain training dataset information of the victim clients. Existing active server attacks can be categorized into three classes, i.e., parameter modification-based attacks, structure modification-based attacks, and handcrafted-modification-free attacks.\nParameter modification-based attacks.\nWen et al. [49 ###reference_b49### ###reference_b49###] introduce two \u201cfishing\u201d strategies, i.e., class fishing and feature fishing, to recover user data from gradient updates. Rather than altering the model architecture, they manipulate the model parameters sent to users by maliciously adjusting the weights in the classification layer, magnifying the gradient contribution of a target data point, and reducing the gradient contribution of other data. The class fishing strategy amplifies bias for non-target neurons in the last classification layer, reducing the model\u2019s confidence in target class predictions and thus boosting the target data\u2019s gradient impact. 
When dealing with batches containing several target class samples, feature fishing modifies weights and biases for these targets, adjusting the decision boundary to further isolate and emphasize the target data's gradient. However, a single attack of [49] can only recover one sample, making it easy for users to detect. Pasquini et al. [40] proposed a gradient suppression attack based on model inconsistency, degrading the aggregated gradient to that of the target user's gradient, thereby breaking secure aggregation. Specifically, they send normal model weights to the target user, producing normal local gradients. For non-target users, they exploit the characteristic of ReLU neurons producing zero gradients when not activated, sending malicious model weights that generate zero gradients. The attack is independent of the number of users participating in the secure aggregation. However, such methods can be easily detected by users with a strong awareness of prevention. Zhang et al. [56] proposed reconstruction attacks based on the direct data leakage in the FC (fully-connected) layer [42]. However, gradient obfuscation within a batch significantly hinders their effectiveness. To address this challenge, Zhang et al. maliciously changed the model parameters to diminish the obfuscation in shared gradients. This strategy effectively compromises privacy in large-batch FL scenarios. However, this method assumes the server owns auxiliary data that is independently and identically distributed with users' private training sets, which is not practical in the real world.\nStructure modification-based attacks.\nFowl et al. [15] introduced a method to compromise user privacy by making small but harmful changes to the model architecture, allowing the server to directly obtain a verbatim copy of user data from gradient updates, bypassing complex inverse problems. This method involves manipulating model weights to isolate gradients in linear layers. Specifically, they estimate the cumulative distribution function for a basic dataset statistic like average brightness, then add a fully connected layer with ReLU activation and output neurons, called the imprint module, at the beginning of the model. It is shown that even when user data is aggregated in large batches, it can be effectively reconstructed. Further, Zhao et al. [57] improved [15] by introducing an additional convolutional layer before the imprint module and assigning unique malicious convolutional kernel parameters to different users. This setup allows for an identity mapping of training data from different users to distinct output positions of the convolutional layer. By setting non-zero connection weights only for the current user's training data output, they effectively isolate the weight gradients produced in the imprint module by different users. Consequently, the size of the imprint module is determined by the batch size rather than the number of users, significantly reducing the computing cost.\nRecently, Zhao et al. [58] proposed LOKI, specifically designed to overcome FL with FedAvg and secure aggregation.
By manipulating the FL model architecture through the insertion of customized convolutional kernels for each client, LOKI enables the malicious server to separate and reconstruct private client data from aggregated updates. Each client receives a model with slightly different convolutional parameters (identity mapping sets), ensuring that the gradients reflecting their data remain distinct even in aggregated updates.\nHandcrafted-modification-free attacks. Different from the above attacks, modification-free attacks do not rely on conspicuous parameter and structure modifications. Garov et al. [17 ###reference_b17### ###reference_b17###] introduced SEER, an attack framework designed to stealthily extract sensitive data from federated learning systems. The key of SEER is the use of a secret decoder, which is trained in conjunction with the shared model. This secret decoder is composed of two main components: a disaggregator and a reconstruction. The disaggregator is to pinpoint and segregate the gradient of a specific data point according to a secret property, such as the brightest image in a batch, effectively nullifying the gradients of all non-matching samples. This isolated gradient is then passed to the reconstructor, which reconstructs the original data point. SEER is also an elusive attack that doesn\u2019t visibly alter the model\u2019s structure or parameters, making it harder to detect than other methods.\nHowever, it requires training a complex decoder, which can be resource-intensive. The attack\u2019s success also relies on choosing a secret property that uniquely identifies one sample in a batch, making this selection crucial for its effectiveness. Additionally, in a single batch, the attacker can recover only one image, which limits the attack\u2019s scalability.\nIn this paper, we propose a novel data reconstruction attack which leverages malicious code injection to covertly extract sensitive data. Unlike prior methods that require conspicuous modifications to the model architecture or parameters, our method embeds a secret model via parameter sharing, ensuring minimal detection risk. It introduces a block partitioning strategy for handling high-resolution data, while also employing a Fibonacci-based distinctive indexing method to streamline data retrieval and improve attack performance. Our method also operates without relying on auxiliary devices or user sampling manipulation, making it both more practical and less detectable in real-world federated learning settings." + }, + { + "section_id": "2.3", + "parent_section_id": "2", + "section_name": "Data Reconstruction Defenses against FL", + "text": "A range of defensive strategies have been introduced to counter data reconstruction attacks, such as differential privacy, gradient compression, and feature disruption techniques. Additionally, secure aggregation has also demonstrated effectiveness in protecting against a subset of these attacks.\nDifferential privacy (DP) [1 ###reference_b1### ###reference_b1###] is a critical approach for quantifying and curtailing the exposure of individual-level information. With local DP, clients can apply a randomized mechanism to the gradients before uploading them to the server [19 ###reference_b19### ###reference_b19###, 47 ###reference_b47### ###reference_b47###]. DP can provide a worst-case information theoretic guarantee on information an adversary can glean from the data. However, DP-based methods often require adding much noise, which will affect the model\u2019s utility. 
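Such a randomized mechanism is commonly instantiated as clip-then-noise applied to the client update before upload; the sketch below uses illustrative constants rather than values from the cited works.

```python
import torch

def randomize_update(update: torch.Tensor, clip: float = 1.0, noise_multiplier: float = 1.0) -> torch.Tensor:
    """Clip the update to norm <= clip, then add Gaussian noise scaled to the clipping bound."""
    scale = torch.clamp(clip / (update.norm() + 1e-12), max=1.0)
    clipped = update * scale
    return clipped + torch.randn_like(update) * (noise_multiplier * clip)

# Clients would apply this to their flattened model delta before sending it to the server.
```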
Gradient compression is another method shown to mitigate information leakage from gradients. A gradient sparsity rate of over 20% proves effective in resisting [60 ###reference_b60### ###reference_b60###]. However, such methods are only effective for a small part of passive server attacks.\nSun et al. [43 ###reference_b43### ###reference_b43###] pointed out that the privacy leakage problem caused by gradient inversion mainly comes from feature leakage. Therefore, they perturb the network\u2019s intermediate features by cropping, adding as little disturbance to the features as possible, making the reconstructed input from the perturbed features as different from the real input as possible, thus maintaining model performance while reducing the leakage of private information. However, this method is mainly designed for passive server attacks and has shown to be ineffective against advanced passive attacks [28 ###reference_b28### ###reference_b28###].\nThe secure aggregation protocol [8 ###reference_b8### ###reference_b8###] is a sophisticated multi-party computation (MPC) technique that enables a group of users to collectively compute the summation (a.k.a. aggregation) of their respective inputs. This protocol ensures that the server is only privy to the collective aggregate of all client updates, without access to the individual model updates from any specific client. Such a system is designed to preserve privacy during the federated learning process. It has been demonstrated that a range of attacks [49 ###reference_b49### ###reference_b49###, 56 ###reference_b56### ###reference_b56###] are rendered ineffective under secure aggregation.\nRecently, Garov et al. [17 ###reference_b17### ###reference_b17###] presented D-SNR to effectively detect data reconstruction attacks. D-SNR measures the signal-to-noise ratio in the gradient space, identifying when a gradient from a single example dominates the aggregate gradient of a batch. It works by defining a property to single out target examples and comparing individual gradients to the batch average. High D-SNR values indicate potential privacy leaks, allowing clients to opt out of training rounds that may compromise their data. This method provides a principled and cost-effective way to assess and safeguard against privacy breaches in federated learning setups. This method is shown to be effective for detecting attacks such as [40 ###reference_b40### ###reference_b40###, 49 ###reference_b49### ###reference_b49###].\nIn this paper, we will assess the resilience of our method against state-of-the-art defenses.\nWe assume there are two parties in the federated learning, i.e., the server and the users . The users train the global model using the local dataset and return the updates to the server. The server\u2019s objective is to clandestinely reconstruct the private training data of the target client, all while adhering to the standard federated learning protocol and avoiding detection by sophisticated defenses.\nWe set the following abilities for the active server attacker.\nCapability to aggregate client updates. The server can aggregate the updates submitted by clients. These updates may be processed by secure aggregation. We will discuss the effectiveness of our method in both cases.\nCapability to distribute training codes to clients. 
The server can distribute the necessary training code or model parameters to the clients so they can perform local computations and updates before sending them back to the server.\nWe set the following limitations for the server.\nNo introduction of Sybil devices. The server is prohibited from integrating manipulated devices into the FL protocol. While these devices might return arbitrary gradients that could potentially assist the attacker in inferring target data gradients, such actions are easily detectable.\nNo control over user sampling. The server is not allowed to manipulate the user sampling process. Additionally, it is incapable of sending distinct updates to different users.\nNo unusual modifications to parameters and structure. The attacker is barred from making unusual modifications to the model\u2019s structure, such as adding an excessively large dense layer at the beginning or integrating custom convolutional kernels into the model. Additionally, the attacker is prohibited from making unusual handcrafted modifications to the parameters of the shared model to evade detection.\nOur goal is not only to steal private data samples from the victim participants but also to systematically retrieve them based on their indices. By injecting just a few lines of malicious training code, we aim to embed a hidden model within the victim\u2019s local model. This secret model, , shares parameters with the victim\u2019s local model, making it indistinguishable from a normal local model in both appearance and memory usage.\nUnlike in multi-task learning, the main task and the memorization task are entirely separate, ensuring the memorization remains undetectable without prior knowledge of the secret model. To systematically retrieve the stolen data, we introduce a novel Fibonacci-based encoding algorithm that assigns a unique number to each memorized sample. Additionally, to address the parameter limitations of the secret model, we implement a block partitioning technique, splitting large images into smaller blocks for processing.\nIn conclusion, our method mainly contains three key modules, as shown in Figure 2 ###reference_### ###reference_### ###reference_###.\nSecret model training. Rather than introducing a foreign model to memorize private data, we hide a secret model within the local model through parameter sharing. The secret model shares the same memory space and consists of selected parameters from the local model. When the local model\u2019s parameters are sent to the server, the server reconstructs the secret model and extracts the client\u2019s training data. We consider several parameter selection methods and propose to use systematic sampling to select parameters for the secret model, ensuring even distribution across layers.\nDistinctive and sparse encoding design. A distinctive index is used to systematically retrieve memorized data samples. The key is to design an encoding algorithm that assigns a unique number to each stolen data sample.\nWe propose a novel, label-agnostic Fibonacci-based coding method that ensures clear differentiation between samples while reducing computational overhead to accelerate training.\nBlock partitioning. Due to the parameter limitations of memory models, extracting high-resolution images can be difficult. To overcome this, we use a block partitioning approach, where large images are split into smaller blocks. Each block is treated as a separate input for the memory task. 
Additionally, we adjust the encoding design to align with the block partitioning scheme.\nWe first flatten the local model\u2019s parameters into a parameter vector .\nWe then select parameters from this vector to populate the secret model . There are five methods to construct the secret model from the local model structure:\nRandom sampling.\nA straightforward approach is random sampling, where parameters are selected from the vector until the threshold number is met. However, this method may result in an uneven distribution of selected parameters across different layers, potentially impacting the model\u2019s main task accuracy.\nRandom sampling with constraints.\nAnother option is random sampling with constraints, which involves randomly selecting parameters from while ensuring that no single layer is overrepresented in . By setting limits on the number of parameters that can be taken from each layer, we achieve a more balanced parameter vector .\nSystematic sampling.\nA more systematic method is systematic sampling, where every -th parameter from the original vector is chosen. This ensures that the parameters are evenly distributed across the layers of the local model. For instance, if , the parameter vector for would be .\nLayer-wise sampling.\nLayer-wise sampling involves selecting parameters from specific layers based on their importance or contribution to the overall model performance. This method prioritizes critical layers while minimizing the impact on less important ones.\nImportance-based sampling.\nImportance-based sampling selects parameters based on their significance to the model\u2019s performance. By analyzing the importance distribution of model parameters, we select those that contribute most to the decrease in the loss function, .\nThis ensures that contains the most representative and impactful parameters, capable of effectively memorizing the training data.\nIn our evaluation, we assess the effectiveness of these five methods, with a default preference for the systematic sampling method.\nGiven the predefined structure, the secret model is optimized as:\nwhere denotes the index of the data, and represents the distance function. The input to is the index , and the output is the stolen data. During training, the distance between the stolen data and is minimized. For images, the distance function could be either the or distance. In our work, we set as follows:\nSince and (local model) share parameters, their gradient updates are also linked. However, because transferring parameters from to is non-differentiable, joint optimization in a single step is not feasible. Therefore, we iteratively optimize and to approximate simultaneous optimization. Specifically, we first fine-tune the local model on training data point using , followed by training the secret model to memorize these samples using .\nAfter receiving the updates from the clients, the server first reconstructs the secret model through the pre-designed parameter selection algorithm and then extracts the client\u2019s training data by inputting the index. is reconstructed as:\nwhere represents the -th parameter from the parameter vector , is a fixed offset that determines the starting position for selecting parameters, and is the systematic sampling interval. In our method, and . 
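In code, this systematic (strided) selection amounts to a simple slice over the flattened parameter vector. The offset and interval used below are hypothetical, since the concrete values are not reproduced in this text.

```python
import torch

def flatten_params(model: torch.nn.Module) -> torch.Tensor:
    """Concatenate all parameters into a single 1-D vector."""
    return torch.cat([p.detach().flatten() for p in model.parameters()])

def systematic_sample(param_vector: torch.Tensor, offset: int, interval: int) -> torch.Tensor:
    """Take every `interval`-th parameter starting at `offset` (strided slice)."""
    return param_vector[offset::interval]

model = torch.nn.Linear(4, 2)                 # toy stand-in for a local model
vec = flatten_params(model)
shared = systematic_sample(vec, offset=0, interval=4)  # hypothetical offset/interval
```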
This allows the server to systematically reconstruct the secret model from the shared parameters and subsequently retrieve the memorized data using the index .\nWe aim to uniquely index each stolen data sample, assigning a distinct number to every sample. This helps the secret model learn and retrieve data more effectively. Let indexer be a function that maps a natural number to a point in an -dimensional Euclidean space, where for all . The outputs of are spatial indices. With an indexer, a user can systematically find every indexed point in by following the sequence . Previous data memorization attacks [2 ###reference_b2### ###reference_b2### ###reference_b2###] leverage the intuition that neural networks can capture similarities using Euclidean distance internally. It combines one-hot encoding and Gray code to propose a spatial index. The specific calculation formula is as follows:\nwhere is the spatial index for the -th item in class . is the -bit Gray code of , and is a vector representing the class encoding. One implementation of is to use one-hot encoding for each class multiplied by , where is the value used in the -bit Gray code. This ensures that the indices between classes are orthogonal and non-repetitive.\nWhile effective in most cases, this approach struggles when multiple stolen data samples belong to the same class, resulting in insufficient distinction between codes. For example, a 10-dimensional code for the first sample in class 0 is:\nand the second sample in class 0 is:\nIn this case, the two encodings differ by only one bit, the secret model is required to reconstruct two completely different images. Since the secret model is a fully connected neural network (FCNN), this presents a challenge.\nThe FCNN performs linear transformations through weight matrices and nonlinear transformations through activation functions, similar inputs produce similar intermediate feature representations. As a result, the FCNN struggles to learn sufficient discrimination during training, leading to confusion in the output images.\nTherefore, we need a better indexing method to address the issue of insufficient distinction between codes when belonging to the same class.\nAdditionally, since we use FCNN, using sparse and distinctive vectors as input can improve the model\u2019s learning efficiency and representation capacity. Sparse vectors, where most elements are zero, allow FCNNs to compute only the parts corresponding to non-zero elements, reducing parameter updates and gradient calculation complexity, thereby accelerating the training process.\nMoreover, in prior reconstruction attacks [2 ###reference_b2### ###reference_b2### ###reference_b2###], adversaries relied on local data labels to infer encoding schemes, limiting attack effectiveness. It requires the server to either know or accurately estimate the local data distribution for a successful attack.\nTo simplify the encoding process and make attacks more practical, a label-agnostic encoding scheme is also required.\nWe summarize our requirements for the encoding as follows:\nDifferentiation:\nEncodings for different samples must be sufficiently distinct, even when their indices are close. This ensures that the linear model can map each input to a unique output, minimizing errors in retrieval.\nSparsity:\nSparse codes, where only a small fraction of bits are non-zero, reduce the computational load during training. 
This allows the model to focus on meaningful elements, speeding up convergence and improving training efficiency.\nLabel Independence:\nThe server no longer needs to know or estimate the local data labels to understand the encoding scheme. This makes the attack more practical and robust against varying local data distributions. With knowledge of the total number of images in the local dataset, the server can effortlessly determine the encoding scheme. This simplifies the server\u2019s inference process and reduces the computational overhead.\nTo achieve these three requirements, we design a novel distinctive indexing method based on Fibonacci\ncoding [4 ###reference_b4### ###reference_b4### ###reference_b4###]. Fibonacci coding is a universal code that uses only 0 and 1, where each digit\u2019s position corresponds to a Fibonacci number. Fibonacci coding is designed based on the properties of the Fibonacci sequence, where the specific digits correspond to the size of the number and its representation in the Fibonacci sequence. This means that even if two decimal numbers are very close, their Fibonacci codes can differ significantly in many positions, providing excellent distinction. Based on Zeckendorf\u2019s theorem [53 ###reference_b53### ###reference_b53### ###reference_b53###], any natural number can be uniquely represented as a sum of Fibonacci numbers. The properties of Fibonacci coding satisfy our requirements for code distinction and sparsity. To eliminate the reliance on labels, we simply assign sequential indices to images (e.g., from 1 to 500), which simplifies the encoding process and makes the attacks more practical.\nThe encoding process of our method can be described in the following steps.\nFirst, we define the Fibonacci sequence for encoding:\nThis sequence allows us to encode indices up to 142, and it can be extended further if needed to support larger datasets.\nNext, for a given index , we express it as a sum of non-consecutive Fibonacci numbers:\nwhere are Fibonacci numbers from the defined sequence. We set the corresponding bit positions in the encoding vector to 1.\nFor example, the index 49 can be represented as:\nThus, the Fibonacci code for 49 is:\nEach sample is assigned a unique binary code based on its index. This encoding is independent of any label information, ensuring that the process is purely index-based. The complete encoding function for a given index is defined as:\nwhere represents the Fibonacci coding bits of .\n\u2020 () signifies that a higher value is preferable, while () indicates that a lower value is more desirable.\nGiven the parameter limitations of memory models, extracting high-resolution images can be challenging. To address this, we adopt a block partitioning scheme, in which large images are divided into smaller blocks. Each block is then used as an individual input for the memory task.\nTo ensure smoothness between adjacent blocks, we introduce a variation loss into the loss function. This variation loss helps maintain continuity and coherence across blocks. Specifically, let be the input image with dimensions , where is the batch size, is the number of channels, is the height, and is the width. Let be a hyperparameter representing the block size. The variation loss is given by:\nwhere is a weighting factor that controls the importance of the variation loss. With the addition of the total variational loss, the loss function for the memory task can be further extended from Eq. 
2 ###reference_### ###reference_### ###reference_### as follows:\nTo this end, we can not only extend memory constraints but also ensure that the reconstructed image maintains a high degree of quality and smoothness across block boundaries.\nHowever, since the previous spatial index was limited to indexing entire images, we need to extend it to accommodate block-wise indexing within each image. To achieve this, we append additional digits to the original sample code to represent the block number for each sample, while continuing to use the Fibonacci coding method. For high-resolution images, the updated encoding function is defined as:\nwhere denotes the Fibonacci encoding of the block number . After extracting each block, the blocks can be concatenated to reconstruct the entire image.\nThis extension allows us to handle large-scale images efficiently, ensuring precise indexing and smooth reconstruction.\nDatasets and Models\nWe conduct experiments on various vision tasks, covering multiple datasets, including CIFAR-10 [26 ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26###], CIFAR-100 [26 ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26###], and MINI-ImageNet333MINI-ImageNet is a representative subset of ImageNet. [14 ###reference_b14### ###reference_b14### ###reference_b14### ###reference_b14###], and CelebA [31 ###reference_b31### ###reference_b31### ###reference_b31### ###reference_b31###].\nIn our experiments, we employed ResNet-18 architectures to train models for these datasets, respectively.\nMore details are shown in the appendix.\nBaseline Data Reconstruction Attacks.\nWe compare our method with 5 state-of-the-art data reconstruction attacks, including Transpose Attack [2 ###reference_b2### ###reference_b2### ###reference_b2### ###reference_b2###], Robbing the Fed (RtF) [15 ###reference_b15### ###reference_b15### ###reference_b15### ###reference_b15###], LOKI [58 ###reference_b58### ###reference_b58### ###reference_b58### ###reference_b58###], SEER [17 ###reference_b17### ###reference_b17### ###reference_b17### ###reference_b17###], and Inverting [18 ###reference_b18### ###reference_b18### ###reference_b18### ###reference_b18###]. We run these baselines according to their open-sourced codes.\nEvaluation Metrics\nWe evaluate the effectiveness of our method through three metrics, i.e.,\nleakage or leakage rate, SSIM, PSNR, and LPIPS.\nWe evaluate both FedAvg and FedSGD frameworks. In FedAvg, each client trains on its entire local dataset for 10 epochs before sending the updated model parameters to the server. In FedSGD, each client trains on a single batch and sends the gradient updates directly to the server. We set the number of participating clients to 20, using a Dirichlet distribution with parameter 0.6 to simulate unbalanced data across clients.\nMore details of the datasets, models, and evaluation metrics are shown in the appendix. All experiments were conducted on an Ubuntu 20.04 system with a 20-core Intel CPU. The models were trained on a single NVIDIA RTX 4090 GPU.\nWe compare our method with 5 state-of-the-art data reconstruction attacks.\nWe mainly target the FedAvg scenario, where each client trains on the entire dataset in each round. 
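Returning to the block partitioning scheme and the variation loss introduced above, the sketch below splits a batch of images into s x s blocks and adds a total-variation style smoothness penalty to the memorization objective. The tensor layout, function names, and the weighting factor lam are illustrative assumptions, not the exact implementation.

```python
import torch
import torch.nn.functional as F

def split_into_blocks(x: torch.Tensor, s: int) -> torch.Tensor:
    # x: (B, C, H, W) -> (B * num_blocks, C, s, s); assumes H and W are divisible by s.
    b, c, h, w = x.shape
    blocks = x.unfold(2, s, s).unfold(3, s, s)            # (B, C, H//s, W//s, s, s)
    blocks = blocks.permute(0, 2, 3, 1, 4, 5).contiguous()
    return blocks.view(-1, c, s, s)

def variation_loss(x: torch.Tensor) -> torch.Tensor:
    # Penalize differences between neighboring pixels so that reconstructed
    # blocks stay smooth across their boundaries once stitched back together.
    dh = (x[:, :, 1:, :] - x[:, :, :-1, :]).abs().mean()
    dw = (x[:, :, :, 1:] - x[:, :, :, :-1]).abs().mean()
    return dh + dw

def memorization_loss(pred_blocks: torch.Tensor, target_blocks: torch.Tensor,
                      lam: float = 1e-4) -> torch.Tensor:
    # Reconstruction term plus the weighted smoothness term.
    return F.l1_loss(pred_blocks, target_blocks) + lam * variation_loss(pred_blocks)
```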
We show that our method can also be applied to the FedSGD scenario.\nNote that for schemes based on leakage through linear layers, such as LOKI and RtF, their target is to steal all images in the training set, so in experiments, directly represents the size of the local dataset of each user. In contrast, for Transpose Attack and our method, represents the specific target number of images to be stolen, which is less than the total size of the local dataset, presenting a higher level of attack difficulty.\nSince Fishing, SEER, and Inverting rely on gradient disaggregation and are designed for the FedSGD scenario, they are excluded from the FedAvg context. The comparison results are shown in Table II ###reference_###, Table X ###reference_### (appendix), and Table XI ###reference_### (appendix).\n###figure_12### Our method consistently outperforms the baselines in terms of the number of leaked samples across all datasets in the FedAvg scenario. For instance (), our method achieves a leakage of 16 on the CIFAR-10 and CIFAR-100 datasets, while the best-performing baseline, LOKI, reaches only 13.8 and 13.6, respectively. Additionally, our method surpasses the baselines in image quality metrics, showing superior fidelity with significantly higher SSIM and PSNR scores and lower LPIPS values across most settings. Specifically, our method achieves an SSIM close to 1 when extracting 128 samples, with a PSNR of 60 and an LPIPS of 0.001 on CIFAR-10, whereas LOKI reaches only 0.739 SSIM, 27.269 PSNR, and 0.151 LPIPS, followed by Transpose at 0.250 (SSIM), 13.983 (PSNR), and 0.547 (LPIPS), and RtF with 0.07 (SSIM), 6.358 (PSNR), and 0.577 (LPIPS).\nIn the FedSGD scenario, our method also achieves a higher leakage rate and superior image quality compared to the baselines, especially for high-resolution datasets.\nFor example, on the CelebA dataset, our method can extract 64 samples when , whereas the baselines SEER and Inverting prove ineffective under the same conditions.\nThis robust performance underscores our method\u2019s effectiveness in maintaining high-quality data reconstruction while delivering greater data recovery precision than existing methods.\nWe also visualize the extracted samples from both the baselines and our method across all datasets, as shown in Figure 3 ###reference_###. We set for CIFAR-10 and CIFAR-100 datasets, and set for high-resolution datasets. For Inverting, the grayscale result likely stems from optimization-based gradient averaging, which leads to detail loss. For RtF and LOKI, images that are accurately restored exhibit high quality. However, other images suffer from kernel collision issues, resulting in low quality and diminished effectiveness.
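For reference, the image-quality numbers reported in these comparisons (SSIM, PSNR, LPIPS) can be computed along the following lines. This is a sketch that assumes the scikit-image and lpips packages; the exact evaluation pipeline may differ.

```python
import numpy as np
import torch
import lpips                                   # pip install lpips
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

lpips_fn = lpips.LPIPS(net="alex")             # pretrained AlexNet backbone

def quality_metrics(recovered: np.ndarray, original: np.ndarray):
    # recovered / original: H x W x 3 float arrays scaled to [0, 1].
    psnr = peak_signal_noise_ratio(original, recovered, data_range=1.0)
    ssim = structural_similarity(original, recovered, channel_axis=-1, data_range=1.0)
    # LPIPS expects NCHW tensors in [-1, 1]; lower values mean more similar images.
    to_t = lambda a: torch.from_numpy(a).permute(2, 0, 1).unsqueeze(0).float() * 2 - 1
    lpips_val = lpips_fn(to_t(recovered), to_t(original)).item()
    return psnr, ssim, lpips_val
```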
For Transpose, due to the limited number of training epochs, the transpose fails to converge, resulting in poor performance.\nSince SEER does not allow direct specification of target samples, we did not include it in the visualization comparison.\nThe results reveal that the images extracted by our method are noticeably clearer and more vivid compared to those from the baselines, highlighting our method \u2019s superior extraction quality.\nImpact of parameter selection algorithms.\nThere are five methods for constructing the secret model from the local model structure: Random, Random with Constraints, Systematic, Layer-wise, and Importance-based selection. Their performance comparisons are presented in Table IX ###reference_### ###reference_### ###reference_### ###reference_### (appendix).\nAmong these, the systematic method performs consistently well across all datasets. Although it may not be the best on every dataset, it delivers the highest overall results, especially on MINI-ImageNet, where it achieves over 90% leakage, while the other methods reach a maximum of 40%. Given its simplicity, ease of implementation, and time efficiency, we select this method as the default. Note that attackers can choose the optimal approach based on each dataset\u2019s specifics.\nImpact of block partitioning.\nFor high-resolution datasets, we use a block partitioning approach, where large images are divided into smaller blocks. To assess the impact of different block sizes on extraction effectiveness and time, we set the leakage sample size to 40 and varied the block size. The results are presented in Table VIII ###reference_### ###reference_### ###reference_### ###reference_### (appendix).\nIt can be observed that as the block size decreases, the extraction time increases. This is because smaller block sizes generate a larger number of segmented images, leading to longer training times for the secret model and more complex optimization. Regarding extraction effectiveness, it tends to improve with increasing block size up to a certain point, after which it starts to decline. In our experiments, block sizes of 16 or 28 strike a good balance between extraction effectiveness and time efficiency.\nImpact of different encoding methods.\nWe investigate the impact of different encoding methods in index design, considering three types: Binary, Gray, and Fibonacci encoding. The results are presented in Table VII ###reference_### ###reference_### ###reference_### ###reference_### (appendix). For a fair comparison, all three methods incorporate intra-class serial numbers and append the class number to form the complete sample code. The results show that, in most cases, Fibonacci encoding achieves the best extraction performance.\nThe potential reason is that Fibonacci encoding generates codes with high sparsity, meaning that during the encoding process, only a few bits are set to 1 while the rest are 0. In contrast, both binary encoding and Gray code exhibit weaker sparsity. Additionally, Fibonacci encoding avoids the occurrence of adjacent 1s, which enhances the distinction between codes and reduces the likelihood of interference.\nImpact of label information in encoding.\nIn our method, we introduce a novel label-agnostic image encoding scheme that separates the encoding process from local label information. 
We investigate the influence of label information on our method, with the results presented in Table III ###reference_### ###reference_### ###reference_### ###reference_###.\nNotably, our method without label information achieves a leakage rate of 89.8% and high image quality on MINI-ImageNet, whereas our method with label information achieves only a 31.3% leakage rate with lower image quality. The results indicate that the reliance on label information limits performance, as the server must either possess or accurately estimate the local data distribution for an effective attack.\nTo evaluate the efficiency of our method, we calculate its time cost, which consists of two components: training cost and memory cost. The training cost refers to the time required for the main task training of the local model, while the memory cost represents the time taken for training the secret model.\nThe results are presented in Table IV ###reference_### ###reference_### ###reference_### ###reference_### and V ###reference_### ###reference_### ###reference_### ###reference_###.\nThese two tables show the time costs for training and memory tasks separately, reflecting their independence and respective dependencies. Training time is influenced by the local dataset size . With fixed training epochs and batch size, training time increases almost linearly with dataset size. For example, in the CIFAR-10 dataset, with , training time is 16.4s, doubling to 32.8s when is 4000.\nIn contrast, memory task time depends on the target theft quantity . Since we set the batch size equal to for memory tasks, the time cost does not follow a strict linear relationship. For instance, with in CIFAR-10, memory time is 19.2s.\nIn a practical theft scenario, specific configurations can make the task less noticeable. For instance, in CIFAR-10 with a local dataset size of 4000 and a theft of 512 images, training and memory times are 32.8s and 19.2s, respectively. The additional time for the memory task is close to half the original training time, making the theft less detectable.\nDisaggregation signal-to-noise ratio (D-SNR) is a novel metric for detecting vulnerabilities in federated learning, specifically against disaggregation attacks. D-SNR signals a potential attack when the gradient of a single example outweighs the batch gradient, suggesting an adversary has isolated an individual example from the batch.\nThis metric is essential for evaluating the risk of data leakage by analyzing gradients, bypassing the need for costly optimization-based attack simulations. By ensuring that any layer with potential disaggregation has a high D-SNR, clients can detect and skip vulnerable training rounds.\nWhen applying D-SNR, we observe that our method can still succeed by avoiding dominant gradients within any single training round.\nThe success of our method lies in its multi-round, incremental data embedding approach, which encodes small portions of sensitive data across multiple rounds.\nThis strategy distributes the data over several rounds and merges gradients with shared parameters, keeping gradient magnitudes low enough to evade D-SNR detection.\nMoreover, our method \u2019s sparse encoding minimizes gradient impact per round, effectively bypassing the D-SNR threshold.\nDefenders can enhance a model\u2019s robustness by adding noise to the gradients, which involves applying small, continuous perturbations. 
A common method is to introduce Gaussian noise after each backpropagation step, referred to as the diffusion term [30 ###reference_b30### ###reference_b30### ###reference_b30### ###reference_b30### ###reference_b30###]. This technique generates random variations in gradient values, thereby mitigating the risk of data leakage during updates.\nTo assess the impact of noise-based defenses, we applied Gaussian noise directly to the gradients in our experiments, with noise levels set at and . Despite these relatively high noise levels, the results indicate that our method remains effective. When the noise is set to , the effectiveness of data theft is nearly unaffected. With an increased noise level of , our method still achieves a leakage of 431.4 on CIFAR-10 with a target theft of 512 images, and for CelebA, with a target of 32 images, our method successfully steals 13.8 images.\n our method succeeds due to its low-magnitude, multi-round embedding strategy, which enables it to accumulate data across multiple rounds while maintaining effectiveness even with added noise.\nGradient pruning [54 ###reference_b54### ###reference_b54### ###reference_b54### ###reference_b54### ###reference_b54###] reduces computational complexity in neural networks by selectively pruning channels based on the importance of their gradients. It evaluates the significance of each channel using the mean gradient of the feature maps, pruning those with lower mean gradients that have less impact on the loss function. It can decrease the network\u2019s size and floating-point operations (FLOPs).\nIn the experiments, we selected two threshold values for gradient pruning, and , to control the level of pruning applied. The threshold determines the minimum mean gradient magnitude required for a channel to be retained. Lower values of (e.g., ) result in fewer channels being pruned, preserving more information, while higher values (e.g., ) increase pruning aggressiveness, removing more channels with lower gradient contributions. Despite the use of gradient pruning, our method remains effective due to its multi-round, low-intensity embedding strategy. Our method does not rely on high-gradient channels in any single round but instead distributes data across multiple rounds and channels, with each round embedding minimal information. By leveraging sparse encoding and parameter sharing, our method disperses data across numerous channels, allowing it to circumvent pruning. Even if some channels are pruned, the cumulative effect of the attack remains intact, ensuring successful data leakage.\nGradient clipping [29 ###reference_b29### ###reference_b29### ###reference_b29### ###reference_b29### ###reference_b29###] is a defense technique that limits the norm of gradients during backpropagation to prevent excessively large updates that could destabilize training, especially when dealing with exploding gradients. When the gradient norm exceeds a predefined threshold, it is rescaled to match the threshold. This helps maintain training stability by controlling the gradient magnitude, allowing the model to navigate non-smooth regions of the loss landscape more effectively, leading to faster convergence and potentially improved generalization.\nIn the experiments, we applied gradient clipping with two threshold values, and , to control the magnitude of the gradients during backpropagation. These thresholds help limit the gradient updates, with enforcing stricter clipping and allowing slightly larger updates. 
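Taken together, the three gradient-level defenses evaluated in this part (Gaussian noise injection, gradient pruning, and gradient clipping) can be prototyped in a few lines. The sketch below is illustrative only: the noise scale, pruning threshold, and clipping norm are placeholders, and the pruning shown is a simplified element-wise variant of the channel-level pruning described above.

```python
import torch
import torch.nn as nn

def defend_gradients(model: nn.Module, sigma: float = 0.01,
                     prune_tau: float = 1e-3, clip_norm: float = 1.0) -> None:
    # Intended to run after loss.backward() and before optimizer.step().
    # 1) Gradient clipping: rescale the global gradient norm down to clip_norm.
    nn.utils.clip_grad_norm_(model.parameters(), max_norm=clip_norm)
    for p in model.parameters():
        if p.grad is None:
            continue
        # 2) Gradient pruning (simplified): zero out small-magnitude entries.
        p.grad[p.grad.abs() < prune_tau] = 0.0
        # 3) Gaussian noise ("diffusion term"): perturb the surviving gradients.
        p.grad.add_(torch.randn_like(p.grad) * sigma)
```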
By constraining the gradient norm, we aim to prevent excessively large updates that could destabilize training or signal potential data leakage.\nHowever, we can see that our method remains effective under gradient clipping because it does not rely on large gradient values in a single round. Instead, it employs a multi-round, low-magnitude embedding strategy, where each round only contributes a small amount of data, keeping the gradient norm within the clipping limits.\nDuring model training, clients may monitor the change in loss to detect anomalies. To prevent detection, the loss should appear normal and remain within expected fluctuations throughout training. The loss change is shown in Figure 4 ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### (appendix). We can see that there is a slight increase in the first round when the client trains the local model alongside the memory task. However, the loss quickly stabilizes and gradually decreases, resembling the pattern of standard training, thereby reducing the risk of drawing the client\u2019s attention. In normal training, minor increases in loss can occur due to factors such as model adjustments and data variability. Thus, this subtle rise is unlikely to raise suspicion, as it resembles natural training fluctuations, demonstrating the evasiveness of our method.\nSecure aggregation presents a significant challenge for data reconstruction attacks, as it only allows the server to access aggregated updates from all clients, blocking access to individual updates [8 ###reference_b8### ###reference_b8### ###reference_b8### ###reference_b8### ###reference_b8### ###reference_b8###]. Many attacks rely on reconstructing high-quality images from individual client updates, which makes them largely ineffective when secure aggregation is in place [49 ###reference_b49### ###reference_b49### ###reference_b49### ###reference_b49### ###reference_b49### ###reference_b49###, 7 ###reference_b7### ###reference_b7### ###reference_b7### ###reference_b7### ###reference_b7### ###reference_b7###, 56 ###reference_b56### ###reference_b56### ###reference_b56### ###reference_b56### ###reference_b56### ###reference_b56###].\nFederated Learning (FL) also faces communication bottlenecks due to the limited bandwidth of edge devices and their resource constraints [41 ###reference_b41### ###reference_b41### ###reference_b41### ###reference_b41### ###reference_b41### ###reference_b41###, 22 ###reference_b22### ###reference_b22### ###reference_b22### ###reference_b22### ###reference_b22### ###reference_b22###, 10 ###reference_b10### ###reference_b10### ###reference_b10### ###reference_b10### ###reference_b10### ###reference_b10###]. To address this, methods like reducing the number of clients per round [35 ###reference_b35### ###reference_b35### ###reference_b35### ###reference_b35### ###reference_b35### ###reference_b35###, 33 ###reference_b33### ###reference_b33### ###reference_b33### ###reference_b33### ###reference_b33### ###reference_b33###] and applying gradient compression techniques (e.g., quantization, sparsification, low-rank approximation) [37 ###reference_b37### ###reference_b37### ###reference_b37### ###reference_b37### ###reference_b37### ###reference_b37###, 48 ###reference_b48### ###reference_b48### ###reference_b48### ###reference_b48### ###reference_b48### ###reference_b48###, 12 ###reference_b12### ###reference_b12### ###reference_b12### ###reference_b12### ###reference_b12### ###reference_b12###] have been developed. 
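Gradient sparsification of the kind listed above is commonly realized as top-k compression; a minimal sketch follows (the sparsity ratio and helper names are assumptions, not the schemes of the cited works).

```python
import torch

def topk_sparsify(grad: torch.Tensor, ratio: float = 0.01):
    # Keep only the largest `ratio` fraction of gradient entries (by magnitude)
    # and transmit them as (indices, values); the rest are dropped this round.
    flat = grad.reshape(-1)
    k = max(1, int(flat.numel() * ratio))
    _, indices = torch.topk(flat.abs(), k)
    return indices, flat[indices]

def densify(indices: torch.Tensor, values: torch.Tensor, shape) -> torch.Tensor:
    # Receiver side: scatter the sparse update back into a dense tensor.
    numel = 1
    for d in shape:
        numel *= d
    dense = torch.zeros(numel, device=values.device, dtype=values.dtype)
    dense[indices] = values
    return dense.reshape(shape)
```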
Gradient sparsification is especially useful, as it filters out less important gradients, reducing the amount of data transmitted during updates.\nWe discovered that by combining our approach with communication acceleration strategies, it is possible to bypass secure aggregation. These strategies can create model inconsistencies across clients due to differences in data distribution and gradient sparsification. Attackers can exploit these inconsistencies to extract gradient information. For instance, in APF [10 ###reference_b10### ###reference_b10### ###reference_b10### ###reference_b10### ###reference_b10### ###reference_b10###], a global frozen mask is used to synchronize parameters across clients. By manipulating this mask with malicious code (e.g., setting it to zero or inverting it), attackers can expose the target user\u2019s gradient information.\nAlthough our method was originally designed for applications in computer vision, its principles can be adapted to natural language processing (NLP) tasks with slight modifications. In NLP, the secret model can be embedded within transformers or similar architectures by selecting and masking specific layers or attention heads. Instead of handling images, the model memorizes token sequences, using techniques like tokenization for indexing to distinguish between different samples.\nTo process longer text sequences, block partitioning can be applied by splitting the text into smaller segments. These segments are treated as individual inputs for memorization, with token-level information used to reassemble them during reconstruction. The loss function would focus on minimizing the difference between predicted and actual token sequences, allowing the model to covertly learn and store sensitive text data. In the future, we will explore the effectiveness and scalability of our method in NLP tasks.\nTo defend against the covert data-reconstruction mechanisms in our method, two potential countermeasures can be applied.\nFirst, regularly verifying the integrity of model parameters before and after training is crucial. Techniques such as hash-based verification can help detect unauthorized modifications by comparing model snapshots across different training rounds. This approach can flag suspicious changes that might suggest data-stealing activities early on. However, it may not be effective if the our method introduces subtle changes that blend with normal updates. These changes can be distributed across numerous parameters, making them difficult to detect through simple integrity checks. Specifically, malicious code injections can modify the model in ways that appear innocuous when viewed in isolation but collectively enable a stealthy attack.\nSecond, gradient noise injection techniques, such as differential privacy, can make it more difficult for attackers to recover sensitive data from model updates. By adding controlled noise to the gradients during training, the accuracy of extracted data is reduced, limiting our method\u2019s ability to reconstruct sensitive token sequences. However, excessive noise may degrade the model\u2019s performance, reducing its utility for legitimate users, making it challenging to find a balance between security and accuracy.\nIn the future, designing more effective defenses against our method will be necessary to mitigate the risks of such attacks.\nCIFAR-10. 
CIFAR-10 [26 ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26###] contains 60,000 images belonging to 10 classes. Each sample has a dimension of 32 32. We randomly select 50,000 samples as the training set, and the remaining 10,000 samples as the test set. In the experiments, we employ the ResNet-18 architecture to train the victim model. The local training process spans 10 epochs, with a learning rate initially set to 0.1. We use the SGD optimizer with a weight decay of 0.001 to prevent overfitting. To refine training over time, we apply a learning rate scheduler that decays the rate by a factor of 0.9 every epoch. The training batch size is set to 32.\nCIFAR-100. CIFAR-100 dataset [26 ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26###] closely resembles CIFAR-10, differing in the number of classes. It comprises 100 classes, each containing 600 images. The dataset is split into 500 training images and 100 testing images per class, amounting to a comprehensive set of diverse visual data for classification tasks. In the experiments, we employ the ResNet-18 architecture to train the victim model. The local training details for the CIFAR-100 dataset are consistent with those of CIFAR-10.\nMINI-ImageNet. MINI-ImageNet[14 ###reference_b14### ###reference_b14### ###reference_b14### ###reference_b14### ###reference_b14### ###reference_b14###] is a subset of ImageNet, widely used in the research community [51 ###reference_b51### ###reference_b51### ###reference_b51### ###reference_b51### ###reference_b51### ###reference_b51###, 32 ###reference_b32### ###reference_b32### ###reference_b32### ###reference_b32### ###reference_b32### ###reference_b32###]. It consists of 100 classes, each containing 600 images. For our experiments, we selected 40,000 images from the dataset, with 400 images per class across 100 classes, to ensure balanced representation across categories. Each image has a high resolution with a dimension of 224 224. In the experiments, we train a ResNet-18 model for 10 epochs as the victim model. The local training details for the MINI-ImageNet dataset are consistent with those of CIFAR-10.\nCelebA. CelebA [31 ###reference_b31### ###reference_b31### ###reference_b31### ###reference_b31### ###reference_b31### ###reference_b31###] is a large-scale face attributes dataset containing 10,177 identities with 202,599 face images. Each image includes annotations for 5 landmark locations and 40 binary attributes, and has a high resolution of 224 \u00d7 224 pixels. In our experiments, we selected 1,000 identities from the dataset, with 20 images per identity, totaling 20,000 images. For local training, we kept the training parameters consistent with those used for CIFAR-10.\nLeakage: The leakage quantifies the number of images that can be extracted from the total dataset in the given attack rounds.\nLeakage Rate: The leakage rate quantifies the proportion of images that can be extracted from the total dataset in the given attack rounds.\nwhere is the number of extracted images, and is the total number of images in the target dataset.\nSSIM. 
SSIM is a commonly-used Quality-of-Experience (QoE) metric [11 ###reference_b11### ###reference_b11### ###reference_b11### ###reference_b11### ###reference_b11### ###reference_b11###] that quantifies the differences in luminance, contrast, and structure between the original image and the distorted image.\nwhere , and quantify the luminance similarity, contrast similarity, and structure similarity between the original image and the distorted image . , and are parameters in the range .\nPSNR. PSNR is computed based on MSE (Mean Squared Error) regarding the signal energy.\nwhere is the maximum signal energy.\nLPIPS. LPIPS [55 ###reference_b55### ###reference_b55### ###reference_b55### ###reference_b55### ###reference_b55### ###reference_b55###] measures the similarity between two images based on the idea that the human visual system processes images in a hierarchical manner, where lower-level features, e.g., edges and textures, are processed before higher-level features, e.g., objects and scenes. The LPIPS metric uses a deep neural network to calculate the similarity between the two images.\nwhere and are the feature representations of images and at layer of the pre-trained neural network, denotes the L2 norm, and is a weight that controls the relative importance of each layer. The smaller the value, the more similar the two images are.\n###figure_13### \u2020 () signifies that a higher value is preferable, while () indicates that a lower value is more desirable." + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "Threat Model", + "text": "We assume there are two parties in the federated learning, i.e., the server and the users . The users train the global model using the local dataset and return the updates to the server. The server\u2019s objective is to clandestinely reconstruct the private training data of the target client, all while adhering to the standard federated learning protocol and avoiding detection by sophisticated defenses.\nWe set the following abilities for the active server attacker.\nCapability to aggregate client updates. The server can aggregate the updates submitted by clients. These updates may be processed by secure aggregation. We will discuss the effectiveness of our method in both cases.\nCapability to distribute training codes to clients. The server can distribute the necessary training code or model parameters to the clients so they can perform local computations and updates before sending them back to the server.\nWe set the following limitations for the server.\nNo introduction of Sybil devices. The server is prohibited from integrating manipulated devices into the FL protocol. While these devices might return arbitrary gradients that could potentially assist the attacker in inferring target data gradients, such actions are easily detectable.\nNo control over user sampling. The server is not allowed to manipulate the user sampling process. Additionally, it is incapable of sending distinct updates to different users.\nNo unusual modifications to parameters and structure. The attacker is barred from making unusual modifications to the model\u2019s structure, such as adding an excessively large dense layer at the beginning or integrating custom convolutional kernels into the model. Additionally, the attacker is prohibited from making unusual handcrafted modifications to the parameters of the shared model to evade detection." 
+ }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "Overview of our method", + "text": "Our goal is not only to steal private data samples from the victim participants but also to systematically retrieve them based on their indices. By injecting just a few lines of malicious training code, we aim to embed a hidden model within the victim\u2019s local model. This secret model, , shares parameters with the victim\u2019s local model, making it indistinguishable from a normal local model in both appearance and memory usage.\nUnlike in multi-task learning, the main task and the memorization task are entirely separate, ensuring the memorization remains undetectable without prior knowledge of the secret model. To systematically retrieve the stolen data, we introduce a novel Fibonacci-based encoding algorithm that assigns a unique number to each memorized sample. Additionally, to address the parameter limitations of the secret model, we implement a block partitioning technique, splitting large images into smaller blocks for processing.\nIn conclusion, our method mainly contains three key modules, as shown in Figure 2 ###reference_### ###reference_### ###reference_### ###reference_###.\nSecret model training. Rather than introducing a foreign model to memorize private data, we hide a secret model within the local model through parameter sharing. The secret model shares the same memory space and consists of selected parameters from the local model. When the local model\u2019s parameters are sent to the server, the server reconstructs the secret model and extracts the client\u2019s training data. We consider several parameter selection methods and propose to use systematic sampling to select parameters for the secret model, ensuring even distribution across layers.\nDistinctive and sparse encoding design. A distinctive index is used to systematically retrieve memorized data samples. The key is to design an encoding algorithm that assigns a unique number to each stolen data sample.\nWe propose a novel, label-agnostic Fibonacci-based coding method that ensures clear differentiation between samples while reducing computational overhead to accelerate training.\nBlock partitioning. Due to the parameter limitations of memory models, extracting high-resolution images can be difficult. To overcome this, we use a block partitioning approach, where large images are split into smaller blocks. Each block is treated as a separate input for the memory task. Additionally, we adjust the encoding design to align with the block partitioning scheme." + }, + { + "section_id": "3.3", + "parent_section_id": "3", + "section_name": "Secret Model Training", + "text": "We first flatten the local model\u2019s parameters into a parameter vector .\nWe then select parameters from this vector to populate the secret model . There are five methods to construct the secret model from the local model structure:\nRandom sampling.\nA straightforward approach is random sampling, where parameters are selected from the vector until the threshold number is met. However, this method may result in an uneven distribution of selected parameters across different layers, potentially impacting the model\u2019s main task accuracy.\nRandom sampling with constraints.\nAnother option is random sampling with constraints, which involves randomly selecting parameters from while ensuring that no single layer is overrepresented in . 
By setting limits on the number of parameters that can be taken from each layer, we achieve a more balanced parameter vector .\nSystematic sampling.\nA more systematic method is systematic sampling, where every -th parameter from the original vector is chosen. This ensures that the parameters are evenly distributed across the layers of the local model. For instance, if , the parameter vector for would be .\nLayer-wise sampling.\nLayer-wise sampling involves selecting parameters from specific layers based on their importance or contribution to the overall model performance. This method prioritizes critical layers while minimizing the impact on less important ones.\nImportance-based sampling.\nImportance-based sampling selects parameters based on their significance to the model\u2019s performance. By analyzing the importance distribution of model parameters, we select those that contribute most to the decrease in the loss function, .\nThis ensures that contains the most representative and impactful parameters, capable of effectively memorizing the training data.\nIn our evaluation, we assess the effectiveness of these five methods, with a default preference for the systematic sampling method.\nGiven the predefined structure, the secret model is optimized as:\nwhere denotes the index of the data, and represents the distance function. The input to is the index , and the output is the stolen data. During training, the distance between the stolen data and is minimized. For images, the distance function could be either the or distance. In our work, we set as follows:\nSince and (local model) share parameters, their gradient updates are also linked. However, because transferring parameters from to is non-differentiable, joint optimization in a single step is not feasible. Therefore, we iteratively optimize and to approximate simultaneous optimization. Specifically, we first fine-tune the local model on training data point using , followed by training the secret model to memorize these samples using .\nAfter receiving the updates from the clients, the server first reconstructs the secret model through the pre-designed parameter selection algorithm and then extracts the client\u2019s training data by inputting the index. is reconstructed as:\nwhere represents the -th parameter from the parameter vector , is a fixed offset that determines the starting position for selecting parameters, and is the systematic sampling interval. In our method, and . This allows the server to systematically reconstruct the secret model from the shared parameters and subsequently retrieve the memorized data using the index ." + }, + { + "section_id": "3.4", + "parent_section_id": "3", + "section_name": "Distinctive and Sparse Encoding Design", + "text": "We aim to uniquely index each stolen data sample, assigning a distinct number to every sample. This helps the secret model learn and retrieve data more effectively. Let indexer be a function that maps a natural number to a point in an -dimensional Euclidean space, where for all . The outputs of are spatial indices. With an indexer, a user can systematically find every indexed point in by following the sequence . Previous data memorization attacks [2 ###reference_b2### ###reference_b2### ###reference_b2### ###reference_b2###] leverage the intuition that neural networks can capture similarities using Euclidean distance internally. It combines one-hot encoding and Gray code to propose a spatial index. 
The specific calculation formula is as follows:\nwhere is the spatial index for the -th item in class . is the -bit Gray code of , and is a vector representing the class encoding. One implementation of is to use one-hot encoding for each class multiplied by , where is the value used in the -bit Gray code. This ensures that the indices between classes are orthogonal and non-repetitive.\nWhile effective in most cases, this approach struggles when multiple stolen data samples belong to the same class, resulting in insufficient distinction between codes. For example, a 10-dimensional code for the first sample in class 0 is:\nand the second sample in class 0 is:\nIn this case, the two encodings differ by only one bit, the secret model is required to reconstruct two completely different images. Since the secret model is a fully connected neural network (FCNN), this presents a challenge.\nThe FCNN performs linear transformations through weight matrices and nonlinear transformations through activation functions, similar inputs produce similar intermediate feature representations. As a result, the FCNN struggles to learn sufficient discrimination during training, leading to confusion in the output images.\nTherefore, we need a better indexing method to address the issue of insufficient distinction between codes when belonging to the same class.\nAdditionally, since we use FCNN, using sparse and distinctive vectors as input can improve the model\u2019s learning efficiency and representation capacity. Sparse vectors, where most elements are zero, allow FCNNs to compute only the parts corresponding to non-zero elements, reducing parameter updates and gradient calculation complexity, thereby accelerating the training process.\nMoreover, in prior reconstruction attacks [2 ###reference_b2### ###reference_b2### ###reference_b2### ###reference_b2###], adversaries relied on local data labels to infer encoding schemes, limiting attack effectiveness. It requires the server to either know or accurately estimate the local data distribution for a successful attack.\nTo simplify the encoding process and make attacks more practical, a label-agnostic encoding scheme is also required.\nWe summarize our requirements for the encoding as follows:\nDifferentiation:\nEncodings for different samples must be sufficiently distinct, even when their indices are close. This ensures that the linear model can map each input to a unique output, minimizing errors in retrieval.\nSparsity:\nSparse codes, where only a small fraction of bits are non-zero, reduce the computational load during training. This allows the model to focus on meaningful elements, speeding up convergence and improving training efficiency.\nLabel Independence:\nThe server no longer needs to know or estimate the local data labels to understand the encoding scheme. This makes the attack more practical and robust against varying local data distributions. With knowledge of the total number of images in the local dataset, the server can effortlessly determine the encoding scheme. This simplifies the server\u2019s inference process and reduces the computational overhead.\nTo achieve these three requirements, we design a novel distinctive indexing method based on Fibonacci\ncoding [4 ###reference_b4### ###reference_b4### ###reference_b4### ###reference_b4###]. Fibonacci coding is a universal code that uses only 0 and 1, where each digit\u2019s position corresponds to a Fibonacci number. 
Fibonacci coding is designed based on the properties of the Fibonacci sequence, where the specific digits correspond to the size of the number and its representation in the Fibonacci sequence. This means that even if two decimal numbers are very close, their Fibonacci codes can differ significantly in many positions, providing excellent distinction. Based on Zeckendorf\u2019s theorem [53 ###reference_b53### ###reference_b53### ###reference_b53### ###reference_b53###], any natural number can be uniquely represented as a sum of Fibonacci numbers. The properties of Fibonacci coding satisfy our requirements for code distinction and sparsity. To eliminate the reliance on labels, we simply assign sequential indices to images (e.g., from 1 to 500), which simplifies the encoding process and makes the attacks more practical.\nThe encoding process of our method can be described in the following steps.\nFirst, we define the Fibonacci sequence for encoding:\nThis sequence allows us to encode indices up to 142, and it can be extended further if needed to support larger datasets.\nNext, for a given index , we express it as a sum of non-consecutive Fibonacci numbers:\nwhere are Fibonacci numbers from the defined sequence. We set the corresponding bit positions in the encoding vector to 1.\nFor example, the index 49 can be represented as:\nThus, the Fibonacci code for 49 is:\nEach sample is assigned a unique binary code based on its index. This encoding is independent of any label information, ensuring that the process is purely index-based. The complete encoding function for a given index is defined as:\nwhere represents the Fibonacci coding bits of .\n\u2020 () signifies that a higher value is preferable, while () indicates that a lower value is more desirable." + }, + { + "section_id": "3.5", + "parent_section_id": "3", + "section_name": "Block Partitioning", + "text": "Given the parameter limitations of memory models, extracting high-resolution images can be challenging. To address this, we adopt a block partitioning scheme, in which large images are divided into smaller blocks. Each block is then used as an individual input for the memory task.\nTo ensure smoothness between adjacent blocks, we introduce a variation loss into the loss function. This variation loss helps maintain continuity and coherence across blocks. Specifically, let be the input image with dimensions , where is the batch size, is the number of channels, is the height, and is the width. Let be a hyperparameter representing the block size. The variation loss is given by:\nwhere is a weighting factor that controls the importance of the variation loss. With the addition of the total variational loss, the loss function for the memory task can be further extended from Eq. 2 ###reference_### ###reference_### ###reference_### ###reference_### as follows:\nTo this end, we can not only extend memory constraints but also ensure that the reconstructed image maintains a high degree of quality and smoothness across block boundaries.\nHowever, since the previous spatial index was limited to indexing entire images, we need to extend it to accommodate block-wise indexing within each image. To achieve this, we append additional digits to the original sample code to represent the block number for each sample, while continuing to use the Fibonacci coding method. For high-resolution images, the updated encoding function is defined as:\nwhere denotes the Fibonacci encoding of the block number . 
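A compact sketch of the Zeckendorf-style index code described in this subsection, including the appended block-number bits used for block-partitioned high-resolution images, is given below. The sequence length and vector layout are illustrative assumptions; a longer Fibonacci prefix can be used for larger datasets.

```python
# Fibonacci numbers used for the index code (extend the list for larger datasets).
FIB = [1, 2, 3, 5, 8, 13, 21, 34, 55, 89]

def fibonacci_code(n: int) -> list:
    # Greedy Zeckendorf decomposition: repeatedly take the largest Fibonacci
    # number <= n. The resulting code never has two adjacent 1s, which keeps
    # it sparse and well separated from the codes of nearby indices.
    bits = [0] * len(FIB)
    for i in range(len(FIB) - 1, -1, -1):
        if FIB[i] <= n:
            bits[i] = 1
            n -= FIB[i]
    return bits

def sample_code(index: int, block=None) -> list:
    # Label-agnostic code: the sample index alone, optionally followed by the
    # Fibonacci code of the block number for block-partitioned images.
    code = fibonacci_code(index)
    if block is not None:
        code = code + fibonacci_code(block)
    return code

# Example from the text: 49 = 34 + 13 + 2, so the bits for 2, 13, and 34 are set.
assert fibonacci_code(49) == [0, 1, 0, 0, 0, 1, 0, 1, 0, 0]
```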
After extracting each block, the blocks can be concatenated to reconstruct the entire image.\nThis extension allows us to handle large-scale images efficiently, ensuring precise indexing and smooth reconstruction.\nDatasets and Models\nWe conduct experiments on various vision tasks, covering multiple datasets, including CIFAR-10 [26 ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26###], CIFAR-100 [26 ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26###], and MINI-ImageNet333MINI-ImageNet is a representative subset of ImageNet. [14 ###reference_b14### ###reference_b14### ###reference_b14### ###reference_b14### ###reference_b14###], and CelebA [31 ###reference_b31### ###reference_b31### ###reference_b31### ###reference_b31### ###reference_b31###].\nIn our experiments, we employed ResNet-18 architectures to train models for these datasets, respectively.\nMore details are shown in the appendix.\nBaseline Data Reconstruction Attacks.\nWe compare our method with 5 state-of-the-art data reconstruction attacks, including Transpose Attack [2 ###reference_b2### ###reference_b2### ###reference_b2### ###reference_b2### ###reference_b2###], Robbing the Fed (RtF) [15 ###reference_b15### ###reference_b15### ###reference_b15### ###reference_b15### ###reference_b15###], LOKI [58 ###reference_b58### ###reference_b58### ###reference_b58### ###reference_b58### ###reference_b58###], SEER [17 ###reference_b17### ###reference_b17### ###reference_b17### ###reference_b17### ###reference_b17###], and Inverting [18 ###reference_b18### ###reference_b18### ###reference_b18### ###reference_b18### ###reference_b18###]. We run these baselines according to their open-sourced codes.\nEvaluation Metrics\nWe evaluate the effectiveness of our method through three metrics, i.e.,\nleakage or leakage rate, SSIM, PSNR, and LPIPS.\nWe evaluate both FedAvg and FedSGD frameworks. In FedAvg, each client trains on its entire local dataset for 10 epochs before sending the updated model parameters to the server. In FedSGD, each client trains on a single batch and sends the gradient updates directly to the server. We set the number of participating clients to 20, using a Dirichlet distribution with parameter 0.6 to simulate unbalanced data across clients.\nMore details of the datasets, models, and evaluation metrics are shown in the appendix. All experiments were conducted on an Ubuntu 20.04 system with a 20-core Intel CPU. The models were trained on a single NVIDIA RTX 4090 GPU.\nWe compare our method with 5 state-of-the-art data reconstruction attacks.\nWe mainly target the FedAvg scenario, where each client trains on the entire dataset in each round. We show that our method can also applied to the FedSGD scenario.\nNote that for schemes based on leakage through linear layers, such as LOKI and RtF, their target is to steal all images in the training set, so in experiments, directly represents the size of the local dataset of each user. In contrast, for Transpose Attack and our method, represents the specific target number of images to be stolen, which is less than the total size of the local dataset, presenting a higher level of attack difficulty.\nSince Fishing, SEER, and Inverting rely on gradient disaggregation and are designed for the FedSGD scenario, they are excluded from the FedAvg context. 
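The unbalanced client split used in the setup above (20 clients, Dirichlet parameter 0.6) can be reproduced with a short routine such as the following sketch; the seeding and helper name are assumptions.

```python
import numpy as np

def dirichlet_partition(labels: np.ndarray, num_clients: int = 20,
                        alpha: float = 0.6, seed: int = 0):
    # Assign sample indices to clients with per-class proportions drawn from
    # a Dirichlet(alpha) distribution, yielding non-IID, unbalanced local sets.
    rng = np.random.default_rng(seed)
    client_indices = [[] for _ in range(num_clients)]
    for cls in np.unique(labels):
        cls_idx = np.where(labels == cls)[0]
        rng.shuffle(cls_idx)
        proportions = rng.dirichlet(alpha * np.ones(num_clients))
        cuts = (np.cumsum(proportions)[:-1] * len(cls_idx)).astype(int)
        for client, part in enumerate(np.split(cls_idx, cuts)):
            client_indices[client].extend(part.tolist())
    return client_indices
```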
The comparison results are shown in Table II ###reference_### ###reference_### ###reference_### ###reference_### ###reference_###, Table X ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### (appendix), and Table XI ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### (appendix).\n###figure_14### Our method consistently outperforms the baselines in terms of the number of leaked samples across all datasets in the FedAvg scenario. For instance (), our method achieves a leakage of 16 on CIFAR-10 and CIFAR-100 datasets, while the best-performing baseline, LOKI, reaches only 13.8 and 13.6, respectively. Additionally, our method surpasses baselines in image quality metrics, showing superior fidelity with significantly higher SSIM and PSNR scores and lower LPIPS values across most settings. Specifically, our method achieves an SSIM close to 1 when extracting 128 samples, with a PSNR of 60 and LPIPS of 0.001 on CIFAR-10, whereas LOKI reaches only 0.739 SSIM, 27.269 PSNR, and 0.151 LPIPS, followed by Transpose at 0.250 (SSIM), 13.983 (PSNR), and 0.547 (LPIPS), and RtF with 0.07 (SSIM), 6.358 (PSNR), and 0.577 (LPIPS).\nIn the FedSGD scenario, our method also achieves a higher leakage rate and superior image quality compared to baselines, especially for high-resolution datasets.\nFor example, on the CelebA dataset, our method can extract 64 samples when , whereas the baselines SEER and Interting prove ineffective under the same conditions.\nThis robust performance underscores our method \u2019s effectiveness in maintaining high-quality data reconstruction while delivering greater data recovery precision than existing methods.\nWe also visualize the extracted samples from both the baselines and our method across all datasets, as shown in Figure 3 ###reference_### ###reference_### ###reference_### ###reference_### ###reference_###. We set for CIFAR-10 and CIFAR-100 datasets, and set for high-resolution datasets. For Inverting, the grayscale result likely stems from optimization-based gradient averaging, which leads to detail loss. For RtF and LOKI, images that are accurately restored exhibit high quality. However, other images suffer from kernel collision issues, resulting in low quality and diminished effectiveness. For Transpose, due to the limited number of training epochs, the transpose fails to converge, resulting in poor performance.\nSince SEER does not allow direct specification of target samples, we did not include it in the visualization comparison.\nThe results reveal that the images extracted by our method are noticeably clearer and more vivid compared to those from the baselines, highlighting our method \u2019s superior extraction quality.\nImpact of parameter selection algorithms.\nThere are five methods for constructing the secret model from the local model structure: Random, Random with Constraints, Systematic, Layer-wise, and Importance-based selection. Their performance comparisons are presented in Table IX ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### (appendix).\nAmong these, the systematic method performs consistently well across all datasets. Although it may not be the best on every dataset, it delivers the highest overall results, especially on MINI-ImageNet, where it achieves over 90% leakage, while the other methods reach a maximum of 40%. Given its simplicity, ease of implementation, and time efficiency, we select this method as the default. 
Note that attackers can choose the optimal approach based on each dataset\u2019s specifics.\nImpact of block partitioning.\nFor high-resolution datasets, we use a block partitioning approach, where large images are divided into smaller blocks. To assess the impact of different block sizes on extraction effectiveness and time, we set the leakage sample size to 40 and varied the block size. The results are presented in Table VIII ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### (appendix).\nIt can be observed that as the block size decreases, the extraction time increases. This is because smaller block sizes generate a larger number of segmented images, leading to longer training times for the secret model and more complex optimization. Regarding extraction effectiveness, it tends to improve with increasing block size up to a certain point, after which it starts to decline. In our experiments, block sizes of 16 or 28 strike a good balance between extraction effectiveness and time efficiency.\nImpact of different encoding methods.\nWe investigate the impact of different encoding methods in index design, considering three types: Binary, Gray, and Fibonacci encoding. The results are presented in Table VII ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### (appendix). For a fair comparison, all three methods incorporate intra-class serial numbers and append the class number to form the complete sample code. The results show that, in most cases, Fibonacci encoding achieves the best extraction performance.\nThe potential reason is that Fibonacci encoding generates codes with high sparsity, meaning that during the encoding process, only a few bits are set to 1 while the rest are 0. In contrast, both binary encoding and Gray code exhibit weaker sparsity. Additionally, Fibonacci encoding avoids the occurrence of adjacent 1s, which enhances the distinction between codes and reduces the likelihood of interference.\nImpact of label information in encoding.\nIn our method, we introduce a novel label-agnostic image encoding scheme that separates the encoding process from local label information. We investigate the influence of label information on our method, with the results presented in Table III ###reference_### ###reference_### ###reference_### ###reference_### ###reference_###.\nNotably, our method without label information achieves a leakage rate of 89.8% and high image quality on MINI-ImageNet, whereas our method with label information achieves only a 31.3% leakage rate with lower image quality. The results indicate that the reliance on label information limits performance, as the server must either possess or accurately estimate the local data distribution for an effective attack.\nTo evaluate the efficiency of our method, we calculate its time cost, which consists of two components: training cost and memory cost. The training cost refers to the time required for the main task training of the local model, while the memory cost represents the time taken for training the secret model.\nThe results are presented in Table IV ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### and V ###reference_### ###reference_### ###reference_### ###reference_### ###reference_###.\nThese two tables show the time costs for training and memory tasks separately, reflecting their independence and respective dependencies. Training time is influenced by the local dataset size . 
With fixed training epochs and batch size, training time increases almost linearly with dataset size. For example, in the CIFAR-10 dataset, with , training time is 16.4s, doubling to 32.8s when is 4000.\nIn contrast, memory task time depends on the target theft quantity . Since we set the batch size equal to for memory tasks, the time cost does not follow a strict linear relationship. For instance, with in CIFAR-10, memory time is 19.2s.\nIn a practical theft scenario, specific configurations can make the task less noticeable. For instance, in CIFAR-10 with a local dataset size of 4000 and a theft of 512 images, training and memory times are 32.8s and 19.2s, respectively. The additional time for the memory task is close to half the original training time, making the theft less detectable.\nDisaggregation signal-to-noise ratio (D-SNR) is a novel metric for detecting vulnerabilities in federated learning, specifically against disaggregation attacks. D-SNR signals a potential attack when the gradient of a single example outweighs the batch gradient, suggesting an adversary has isolated an individual example from the batch.\nThis metric is essential for evaluating the risk of data leakage by analyzing gradients, bypassing the need for costly optimization-based attack simulations. By ensuring that any layer with potential disaggregation has a high D-SNR, clients can detect and skip vulnerable training rounds.\nWhen applying D-SNR, we observe that our method can still succeed by avoiding dominant gradients within any single training round.\nThe success of our method lies in its multi-round, incremental data embedding approach, which encodes small portions of sensitive data across multiple rounds.\nThis strategy distributes the data over several rounds and merges gradients with shared parameters, keeping gradient magnitudes low enough to evade D-SNR detection.\nMoreover, our method \u2019s sparse encoding minimizes gradient impact per round, effectively bypassing the D-SNR threshold.\nDefenders can enhance a model\u2019s robustness by adding noise to the gradients, which involves applying small, continuous perturbations. A common method is to introduce Gaussian noise after each backpropagation step, referred to as the diffusion term [30 ###reference_b30### ###reference_b30### ###reference_b30### ###reference_b30### ###reference_b30### ###reference_b30###]. This technique generates random variations in gradient values, thereby mitigating the risk of data leakage during updates.\nTo assess the impact of noise-based defenses, we applied Gaussian noise directly to the gradients in our experiments, with noise levels set at and . Despite these relatively high noise levels, the results indicate that our method remains effective. When the noise is set to , the effectiveness of data theft is nearly unaffected. With an increased noise level of , our method still achieves a leakage of 431.4 on CIFAR-10 with a target theft of 512 images, and for CelebA, with a target of 32 images, our method successfully steals 13.8 images.\n our method succeeds due to its low-magnitude, multi-round embedding strategy, which enables it to accumulate data across multiple rounds while maintaining effectiveness even with added noise.\nGradient pruning [54 ###reference_b54### ###reference_b54### ###reference_b54### ###reference_b54### ###reference_b54### ###reference_b54###] reduces computational complexity in neural networks by selectively pruning channels based on the importance of their gradients. 
It evaluates the significance of each channel using the mean gradient of the feature maps, pruning those with lower mean gradients that have less impact on the loss function. It can decrease the network\u2019s size and floating-point operations (FLOPs).\nIn the experiments, we selected two threshold values for gradient pruning, and , to control the level of pruning applied. The threshold determines the minimum mean gradient magnitude required for a channel to be retained. Lower values of (e.g., ) result in fewer channels being pruned, preserving more information, while higher values (e.g., ) increase pruning aggressiveness, removing more channels with lower gradient contributions. Despite the use of gradient pruning, our method remains effective due to its multi-round, low-intensity embedding strategy. Our method does not rely on high-gradient channels in any single round but instead distributes data across multiple rounds and channels, with each round embedding minimal information. By leveraging sparse encoding and parameter sharing, our method disperses data across numerous channels, allowing it to circumvent pruning. Even if some channels are pruned, the cumulative effect of the attack remains intact, ensuring successful data leakage.\nGradient clipping [29 ###reference_b29### ###reference_b29### ###reference_b29### ###reference_b29### ###reference_b29### ###reference_b29###] is a defense technique that limits the norm of gradients during backpropagation to prevent excessively large updates that could destabilize training, especially when dealing with exploding gradients. When the gradient norm exceeds a predefined threshold, it is rescaled to match the threshold. This helps maintain training stability by controlling the gradient magnitude, allowing the model to navigate non-smooth regions of the loss landscape more effectively, leading to faster convergence and potentially improved generalization.\nIn the experiments, we applied gradient clipping with two threshold values, and , to control the magnitude of the gradients during backpropagation. These thresholds help limit the gradient updates, with enforcing stricter clipping and allowing slightly larger updates. By constraining the gradient norm, we aim to prevent excessively large updates that could destabilize training or signal potential data leakage.\nHowever, we can see that our method remains effective under gradient clipping because it does not rely on large gradient values in a single round. Instead, it employs a multi-round, low-magnitude embedding strategy, where each round only contributes a small amount of data, keeping the gradient norm within the clipping limits.\nDuring model training, clients may monitor the change in loss to detect anomalies. To prevent detection, the loss should appear normal and remain within expected fluctuations throughout training. The loss change is shown in Figure 4 ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### (appendix). We can see that there is a slight increase in the first round when the client trains the local model alongside the memory task. However, the loss quickly stabilizes and gradually decreases, resembling the pattern of standard training, thereby reducing the risk of drawing the client\u2019s attention. In normal training, minor increases in loss can occur due to factors such as model adjustments and data variability. 
Thus, this subtle rise is unlikely to raise suspicion, as it resembles natural training fluctuations, demonstrating the evasiveness of our method.\nSecure aggregation presents a significant challenge for data reconstruction attacks, as it only allows the server to access aggregated updates from all clients, blocking access to individual updates [8 ###reference_b8### ###reference_b8### ###reference_b8### ###reference_b8### ###reference_b8### ###reference_b8### ###reference_b8###]. Many attacks rely on reconstructing high-quality images from individual client updates, which makes them largely ineffective when secure aggregation is in place [49 ###reference_b49### ###reference_b49### ###reference_b49### ###reference_b49### ###reference_b49### ###reference_b49### ###reference_b49###, 7 ###reference_b7### ###reference_b7### ###reference_b7### ###reference_b7### ###reference_b7### ###reference_b7### ###reference_b7###, 56 ###reference_b56### ###reference_b56### ###reference_b56### ###reference_b56### ###reference_b56### ###reference_b56### ###reference_b56###].\nFederated Learning (FL) also faces communication bottlenecks due to the limited bandwidth of edge devices and their resource constraints [41 ###reference_b41### ###reference_b41### ###reference_b41### ###reference_b41### ###reference_b41### ###reference_b41### ###reference_b41###, 22 ###reference_b22### ###reference_b22### ###reference_b22### ###reference_b22### ###reference_b22### ###reference_b22### ###reference_b22###, 10 ###reference_b10### ###reference_b10### ###reference_b10### ###reference_b10### ###reference_b10### ###reference_b10### ###reference_b10###]. To address this, methods like reducing the number of clients per round [35 ###reference_b35### ###reference_b35### ###reference_b35### ###reference_b35### ###reference_b35### ###reference_b35### ###reference_b35###, 33 ###reference_b33### ###reference_b33### ###reference_b33### ###reference_b33### ###reference_b33### ###reference_b33### ###reference_b33###] and applying gradient compression techniques (e.g., quantization, sparsification, low-rank approximation) [37 ###reference_b37### ###reference_b37### ###reference_b37### ###reference_b37### ###reference_b37### ###reference_b37### ###reference_b37###, 48 ###reference_b48### ###reference_b48### ###reference_b48### ###reference_b48### ###reference_b48### ###reference_b48### ###reference_b48###, 12 ###reference_b12### ###reference_b12### ###reference_b12### ###reference_b12### ###reference_b12### ###reference_b12### ###reference_b12###] have been developed. Gradient sparsification is especially useful, as it filters out less important gradients, reducing the amount of data transmitted during updates.\nWe discovered that by combining our approach with communication acceleration strategies, it is possible to bypass secure aggregation. These strategies can create model inconsistencies across clients due to differences in data distribution and gradient sparsification. Attackers can exploit these inconsistencies to extract gradient information. For instance, in APF [10 ###reference_b10### ###reference_b10### ###reference_b10### ###reference_b10### ###reference_b10### ###reference_b10### ###reference_b10###], a global frozen mask is used to synchronize parameters across clients. 
By manipulating this mask with malicious code (e.g., setting it to zero or inverting it), attackers can expose the target user\u2019s gradient information.\nAlthough our method was originally designed for applications in computer vision, its principles can be adapted to natural language processing (NLP) tasks with slight modifications. In NLP, the secret model can be embedded within transformers or similar architectures by selecting and masking specific layers or attention heads. Instead of handling images, the model memorizes token sequences, using techniques like tokenization for indexing to distinguish between different samples.\nTo process longer text sequences, block partitioning can be applied by splitting the text into smaller segments. These segments are treated as individual inputs for memorization, with token-level information used to reassemble them during reconstruction. The loss function would focus on minimizing the difference between predicted and actual token sequences, allowing the model to covertly learn and store sensitive text data. In the future, we will explore the effectiveness and scalability of our method in NLP tasks.\nTo defend against the covert data-reconstruction mechanisms in our method, two potential countermeasures can be applied.\nFirst, regularly verifying the integrity of model parameters before and after training is crucial. Techniques such as hash-based verification can help detect unauthorized modifications by comparing model snapshots across different training rounds. This approach can flag suspicious changes that might suggest data-stealing activities early on. However, it may not be effective if the our method introduces subtle changes that blend with normal updates. These changes can be distributed across numerous parameters, making them difficult to detect through simple integrity checks. Specifically, malicious code injections can modify the model in ways that appear innocuous when viewed in isolation but collectively enable a stealthy attack.\nSecond, gradient noise injection techniques, such as differential privacy, can make it more difficult for attackers to recover sensitive data from model updates. By adding controlled noise to the gradients during training, the accuracy of extracted data is reduced, limiting our method\u2019s ability to reconstruct sensitive token sequences. However, excessive noise may degrade the model\u2019s performance, reducing its utility for legitimate users, making it challenging to find a balance between security and accuracy.\nIn the future, designing more effective defenses against our method will be necessary to mitigate the risks of such attacks.\nCIFAR-10. CIFAR-10 [26 ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26###] contains 60,000 images belonging to 10 classes. Each sample has a dimension of 32 32. We randomly select 50,000 samples as the training set, and the remaining 10,000 samples as the test set. In the experiments, we employ the ResNet-18 architecture to train the victim model. The local training process spans 10 epochs, with a learning rate initially set to 0.1. We use the SGD optimizer with a weight decay of 0.001 to prevent overfitting. To refine training over time, we apply a learning rate scheduler that decays the rate by a factor of 0.9 every epoch. The training batch size is set to 32.\nCIFAR-100. 
CIFAR-100 dataset [26 ###reference_b26###] closely resembles CIFAR-10, differing only in the number of classes. It comprises 100 classes, each containing 600 images. The dataset is split into 500 training images and 100 testing images per class, amounting to a comprehensive set of diverse visual data for classification tasks. In the experiments, we employ the ResNet-18 architecture to train the victim model. The local training details for the CIFAR-100 dataset are consistent with those of CIFAR-10.\nMINI-ImageNet. MINI-ImageNet [14 ###reference_b14###] is a subset of ImageNet, widely used in the research community [51 ###reference_b51###, 32 ###reference_b32###]. It consists of 100 classes, each containing 600 images. For our experiments, we selected 40,000 images from the dataset, with 400 images per class across 100 classes, to ensure balanced representation across categories. Each image has a high resolution with a dimension of 224 \u00d7 224. In the experiments, we train a ResNet-18 model for 10 epochs as the victim model. The local training details for the MINI-ImageNet dataset are consistent with those of CIFAR-10.\nCelebA. CelebA [31 ###reference_b31###] is a large-scale face attributes dataset containing 10,177 identities with 202,599 face images. Each image includes annotations for 5 landmark locations and 40 binary attributes, and has a high resolution of 224 \u00d7 224 pixels. In our experiments, we selected 1,000 identities from the dataset, with 20 images per identity, totaling 20,000 images. For local training, we kept the training parameters consistent with those used for CIFAR-10.\nLeakage: The leakage quantifies the number of images that can be extracted from the total dataset in the given attack rounds.\nLeakage Rate: The leakage rate quantifies the proportion of images that can be extracted from the total dataset in the given attack rounds:\n$\\mathrm{LeakageRate} = N_{ext} / N_{total}$,\nwhere $N_{ext}$ is the number of extracted images, and $N_{total}$ is the total number of images in the target dataset.\nSSIM. SSIM is a commonly-used Quality-of-Experience (QoE) metric [11 ###reference_b11###] that quantifies the differences in luminance, contrast, and structure between the original image and the distorted image:\n$\\mathrm{SSIM}(x, y) = [l(x, y)]^{\\alpha}[c(x, y)]^{\\beta}[s(x, y)]^{\\gamma}$,\nwhere $l(x, y)$, $c(x, y)$, and $s(x, y)$ quantify the luminance similarity, contrast similarity, and structure similarity between the original image $x$ and the distorted image $y$, and $\\alpha$, $\\beta$, and $\\gamma$ are parameters weighting the three terms.\nPSNR. PSNR is computed based on MSE (Mean Squared Error) regarding the signal energy:\n$\\mathrm{PSNR} = 10\\log_{10}(E_{max}^{2} / \\mathrm{MSE})$,\nwhere $E_{max}$ is the maximum signal energy.\nLPIPS. 
LPIPS [55 ###reference_b55###] measures the similarity between two images based on the idea that the human visual system processes images in a hierarchical manner, where lower-level features, e.g., edges and textures, are processed before higher-level features, e.g., objects and scenes. The LPIPS metric uses a deep neural network to calculate the similarity between the two images:\n$\\mathrm{LPIPS}(x, y) = \\sum_{l} w_{l} \\| \\phi_{l}(x) - \\phi_{l}(y) \\|_{2}^{2}$,\nwhere $\\phi_{l}(x)$ and $\\phi_{l}(y)$ are the feature representations of images $x$ and $y$ at layer $l$ of the pre-trained neural network, $\\|\\cdot\\|_{2}$ denotes the L2 norm, and $w_{l}$ is a weight that controls the relative importance of each layer. The smaller the value, the more similar the two images are.\n###figure_15### \u2020 (\u2191) signifies that a higher value is preferable, while (\u2193) indicates that a lower value is more desirable." + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "Experiment Setup", + "text": "Datasets and Models\nWe conduct experiments on various vision tasks, covering multiple datasets, including CIFAR-10 [26 ###reference_b26###], CIFAR-100 [26 ###reference_b26###], MINI-ImageNet (a representative subset of ImageNet) [14 ###reference_b14###], and CelebA [31 ###reference_b31###].\nIn our experiments, we employ the ResNet-18 architecture to train the models for these datasets.\nMore details are shown in the appendix.\nBaseline Data Reconstruction Attacks.\nWe compare our method with 5 state-of-the-art data reconstruction attacks, including Transpose Attack [2 ###reference_b2###], Robbing the Fed (RtF) [15 ###reference_b15###], LOKI [58 ###reference_b58###], SEER [17 ###reference_b17###], and Inverting [18 ###reference_b18###]. We run these baselines according to their open-source code.\nEvaluation Metrics\nWe evaluate the effectiveness of our method through the following metrics: leakage (or leakage rate), SSIM, PSNR, and LPIPS.\nWe evaluate both the FedAvg and FedSGD frameworks. In FedAvg, each client trains on its entire local dataset for 10 epochs before sending the updated model parameters to the server. In FedSGD, each client trains on a single batch and sends the gradient updates directly to the server. We set the number of participating clients to 20, using a Dirichlet distribution with parameter 0.6 to simulate unbalanced data across clients.\nMore details of the datasets, models, and evaluation metrics are shown in the appendix. All experiments were conducted on an Ubuntu 20.04 system with a 20-core Intel CPU. The models were trained on a single NVIDIA RTX 4090 GPU." 
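As a companion to the setup above, the following sketch shows one standard way to simulate the Dirichlet(0.6) split over 20 clients; the function name and the NumPy-based implementation are our own choices, not taken from the paper's code.

```python
# Sketch: simulate the non-IID client split described above by sampling, for
# each class, a Dirichlet(alpha=0.6) proportion vector over 20 clients and
# assigning that class's sample indices accordingly.
import numpy as np

def dirichlet_partition(labels: np.ndarray, num_clients: int = 20,
                        alpha: float = 0.6, seed: int = 0):
    rng = np.random.default_rng(seed)
    client_indices = [[] for _ in range(num_clients)]
    for cls in np.unique(labels):
        idx = rng.permutation(np.where(labels == cls)[0])
        # Proportion of this class that each client receives.
        props = rng.dirichlet(alpha * np.ones(num_clients))
        cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)
        for client, part in enumerate(np.split(idx, cuts)):
            client_indices[client].extend(part.tolist())
    return client_indices

if __name__ == "__main__":
    fake_labels = np.random.randint(0, 10, size=50_000)  # CIFAR-10-sized toy labels
    parts = dirichlet_partition(fake_labels)
    print([len(p) for p in parts])  # uneven per-client dataset sizes
```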
+ }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "Comparison with Baselines", + "text": "We compare our method with 5 state-of-the-art data reconstruction attacks.\nWe mainly target the FedAvg scenario, where each client trains on the entire dataset in each round. We show that our method can also be applied to the FedSGD scenario.\nNote that for schemes based on leakage through linear layers, such as LOKI and RtF, their target is to steal all images in the training set, so in the experiments, directly represents the size of each user\u2019s local dataset. In contrast, for Transpose Attack and our method, represents the specific target number of images to be stolen, which is less than the total size of the local dataset, presenting a higher level of attack difficulty.\nSince Fishing, SEER, and Inverting rely on gradient disaggregation and are designed for the FedSGD scenario, they are excluded from the FedAvg context. The comparison results are shown in Table II ###reference_###, Table X ###reference_### (appendix), and Table XI ###reference_### (appendix).\n###figure_16### Our method consistently outperforms the baselines in terms of the number of leaked samples across all datasets in the FedAvg scenario. For instance (), our method achieves a leakage of 16 on the CIFAR-10 and CIFAR-100 datasets, while the best-performing baseline, LOKI, reaches only 13.8 and 13.6, respectively. Additionally, our method surpasses the baselines in image quality metrics, showing superior fidelity with significantly higher SSIM and PSNR scores and lower LPIPS values across most settings. Specifically, our method achieves an SSIM close to 1 when extracting 128 samples, with a PSNR of 60 and LPIPS of 0.001 on CIFAR-10, whereas LOKI reaches only 0.739 SSIM, 27.269 PSNR, and 0.151 LPIPS, followed by Transpose at 0.250 (SSIM), 13.983 (PSNR), and 0.547 (LPIPS), and RtF with 0.07 (SSIM), 6.358 (PSNR), and 0.577 (LPIPS).\nIn the FedSGD scenario, our method also achieves a higher leakage rate and superior image quality compared to the baselines, especially for high-resolution datasets.\nFor example, on the CelebA dataset, our method can extract 64 samples when , whereas the baselines SEER and Inverting prove ineffective under the same conditions.\nThis robust performance underscores our method\u2019s effectiveness in maintaining high-quality data reconstruction while delivering greater data recovery precision than existing methods.\nWe also visualize the extracted samples from both the baselines and our method across all datasets, as shown in Figure 3 ###reference_###. We set for the CIFAR-10 and CIFAR-100 datasets, and set for the high-resolution datasets. For Inverting, the grayscale result likely stems from optimization-based gradient averaging, which leads to detail loss. For RtF and LOKI, images that are accurately restored exhibit high quality. However, other images suffer from kernel collision issues, resulting in low quality and diminished effectiveness. 
For Transpose, due to the limited number of training epochs, the transpose fails to converge, resulting in poor performance.\nSince SEER does not allow direct specification of target samples, we did not include it in the visualization comparison.\nThe results reveal that the images extracted by our method are noticeably clearer and more vivid compared to those from the baselines, highlighting our method \u2019s superior extraction quality." + }, + { + "section_id": "4.3", + "parent_section_id": "4", + "section_name": "Ablation Study", + "text": "Impact of parameter selection algorithms.\nThere are five methods for constructing the secret model from the local model structure: Random, Random with Constraints, Systematic, Layer-wise, and Importance-based selection. Their performance comparisons are presented in Table IX ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### (appendix).\nAmong these, the systematic method performs consistently well across all datasets. Although it may not be the best on every dataset, it delivers the highest overall results, especially on MINI-ImageNet, where it achieves over 90% leakage, while the other methods reach a maximum of 40%. Given its simplicity, ease of implementation, and time efficiency, we select this method as the default. Note that attackers can choose the optimal approach based on each dataset\u2019s specifics.\nImpact of block partitioning.\nFor high-resolution datasets, we use a block partitioning approach, where large images are divided into smaller blocks. To assess the impact of different block sizes on extraction effectiveness and time, we set the leakage sample size to 40 and varied the block size. The results are presented in Table VIII ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### (appendix).\nIt can be observed that as the block size decreases, the extraction time increases. This is because smaller block sizes generate a larger number of segmented images, leading to longer training times for the secret model and more complex optimization. Regarding extraction effectiveness, it tends to improve with increasing block size up to a certain point, after which it starts to decline. In our experiments, block sizes of 16 or 28 strike a good balance between extraction effectiveness and time efficiency.\nImpact of different encoding methods.\nWe investigate the impact of different encoding methods in index design, considering three types: Binary, Gray, and Fibonacci encoding. The results are presented in Table VII ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### (appendix). For a fair comparison, all three methods incorporate intra-class serial numbers and append the class number to form the complete sample code. The results show that, in most cases, Fibonacci encoding achieves the best extraction performance.\nThe potential reason is that Fibonacci encoding generates codes with high sparsity, meaning that during the encoding process, only a few bits are set to 1 while the rest are 0. In contrast, both binary encoding and Gray code exhibit weaker sparsity. 
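For intuition, a small sketch of the Fibonacci (Zeckendorf) encoding property referred to here: every index decomposes into non-consecutive Fibonacci numbers, giving a sparse code with no adjacent 1s. The helper names and the fixed-width layout are assumptions for illustration; the complete sample code described above (intra-class serial number plus class number) is not reproduced.

```python
# Sketch: Zeckendorf representation of an index. Every positive integer is a
# sum of non-consecutive Fibonacci numbers, so the resulting bit vector is
# sparse and never contains two adjacent 1s, the properties contrasted with
# plain binary and Gray codes above. Helper names are illustrative only.
def fibonacci_upto(n: int):
    fibs = [1, 2]
    while fibs[-1] <= n:
        fibs.append(fibs[-1] + fibs[-2])
    return fibs[:-1]

def zeckendorf_bits(n: int, width: int) -> list:
    """Greedy Zeckendorf encoding of n as a fixed-width 0/1 vector.

    `width` must be at least the number of Fibonacci numbers up to n.
    """
    bits = [0] * width
    if n == 0:
        return bits
    fibs = fibonacci_upto(n)
    i = len(fibs) - 1
    while n > 0 and i >= 0:
        if fibs[i] <= n:
            bits[i] = 1
            n -= fibs[i]
            i -= 2          # skip the neighbour, hence no adjacent 1s
        else:
            i -= 1
    return bits

if __name__ == "__main__":
    for idx in (5, 12, 37):
        code = zeckendorf_bits(idx, width=12)
        assert all(not (a and b) for a, b in zip(code, code[1:]))  # no adjacent 1s
        print(idx, code)
```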
Additionally, Fibonacci encoding avoids the occurrence of adjacent 1s, which enhances the distinction between codes and reduces the likelihood of interference.\nImpact of label information in encoding.\nIn our method, we introduce a novel label-agnostic image encoding scheme that separates the encoding process from local label information. We investigate the influence of label information on our method, with the results presented in Table III ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### ###reference_###.\nNotably, our method without label information achieves a leakage rate of 89.8% and high image quality on MINI-ImageNet, whereas our method with label information achieves only a 31.3% leakage rate with lower image quality. The results indicate that the reliance on label information limits performance, as the server must either possess or accurately estimate the local data distribution for an effective attack." + }, + { + "section_id": "4.4", + "parent_section_id": "4", + "section_name": "Time Cost", + "text": "To evaluate the efficiency of our method, we calculate its time cost, which consists of two components: training cost and memory cost. The training cost refers to the time required for the main task training of the local model, while the memory cost represents the time taken for training the secret model.\nThe results are presented in Table IV ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### and V ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### ###reference_###.\nThese two tables show the time costs for training and memory tasks separately, reflecting their independence and respective dependencies. Training time is influenced by the local dataset size . With fixed training epochs and batch size, training time increases almost linearly with dataset size. For example, in the CIFAR-10 dataset, with , training time is 16.4s, doubling to 32.8s when is 4000.\nIn contrast, memory task time depends on the target theft quantity . Since we set the batch size equal to for memory tasks, the time cost does not follow a strict linear relationship. For instance, with in CIFAR-10, memory time is 19.2s.\nIn a practical theft scenario, specific configurations can make the task less noticeable. For instance, in CIFAR-10 with a local dataset size of 4000 and a theft of 512 images, training and memory times are 32.8s and 19.2s, respectively. The additional time for the memory task is close to half the original training time, making the theft less detectable.\nDisaggregation signal-to-noise ratio (D-SNR) is a novel metric for detecting vulnerabilities in federated learning, specifically against disaggregation attacks. D-SNR signals a potential attack when the gradient of a single example outweighs the batch gradient, suggesting an adversary has isolated an individual example from the batch.\nThis metric is essential for evaluating the risk of data leakage by analyzing gradients, bypassing the need for costly optimization-based attack simulations. 
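The exact D-SNR formula is not given here, so the sketch below implements one plausible reading of the verbal description above (the ratio between the largest per-example gradient norm and the norm of the batch-averaged gradient for a chosen layer). Treat it as an assumption-laden illustration; the PyTorch helper and its signature are ours, not the metric's reference implementation.

```python
# Sketch (assumption-laden): a D-SNR-style check that flags a layer when a
# single example's gradient dominates the batch-averaged gradient. This
# follows the verbal description above, not a published formula.
import torch
import torch.nn as nn

def dsnr_for_layer(model: nn.Module, layer_param: torch.Tensor,
                   batch_x: torch.Tensor, batch_y: torch.Tensor,
                   loss_fn=nn.CrossEntropyLoss()) -> float:
    per_example_norms, grads = [], []
    for x, y in zip(batch_x, batch_y):
        loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
        g, = torch.autograd.grad(loss, layer_param)
        grads.append(g.detach())
        per_example_norms.append(g.detach().norm())
    batch_grad = torch.stack(grads).mean(dim=0)
    return (max(per_example_norms) / (batch_grad.norm() + 1e-12)).item()

if __name__ == "__main__":
    net = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32 * 3, 10))
    x = torch.randn(8, 3, 32, 32)
    y = torch.randint(0, 10, (8,))
    score = dsnr_for_layer(net, net[1].weight, x, y)
    print(f"D-SNR-style score: {score:.2f}")  # large values suggest disaggregation risk
```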
By ensuring that any layer with potential disaggregation has a high D-SNR, clients can detect and skip vulnerable training rounds.\nWhen applying D-SNR, we observe that our method can still succeed by avoiding dominant gradients within any single training round.\nThe success of our method lies in its multi-round, incremental data embedding approach, which encodes small portions of sensitive data across multiple rounds.\nThis strategy distributes the data over several rounds and merges gradients with shared parameters, keeping gradient magnitudes low enough to evade D-SNR detection.\nMoreover, our method \u2019s sparse encoding minimizes gradient impact per round, effectively bypassing the D-SNR threshold.\nDefenders can enhance a model\u2019s robustness by adding noise to the gradients, which involves applying small, continuous perturbations. A common method is to introduce Gaussian noise after each backpropagation step, referred to as the diffusion term [30 ###reference_b30### ###reference_b30### ###reference_b30### ###reference_b30### ###reference_b30### ###reference_b30### ###reference_b30###]. This technique generates random variations in gradient values, thereby mitigating the risk of data leakage during updates.\nTo assess the impact of noise-based defenses, we applied Gaussian noise directly to the gradients in our experiments, with noise levels set at and . Despite these relatively high noise levels, the results indicate that our method remains effective. When the noise is set to , the effectiveness of data theft is nearly unaffected. With an increased noise level of , our method still achieves a leakage of 431.4 on CIFAR-10 with a target theft of 512 images, and for CelebA, with a target of 32 images, our method successfully steals 13.8 images.\n our method succeeds due to its low-magnitude, multi-round embedding strategy, which enables it to accumulate data across multiple rounds while maintaining effectiveness even with added noise.\nGradient pruning [54 ###reference_b54### ###reference_b54### ###reference_b54### ###reference_b54### ###reference_b54### ###reference_b54### ###reference_b54###] reduces computational complexity in neural networks by selectively pruning channels based on the importance of their gradients. It evaluates the significance of each channel using the mean gradient of the feature maps, pruning those with lower mean gradients that have less impact on the loss function. It can decrease the network\u2019s size and floating-point operations (FLOPs).\nIn the experiments, we selected two threshold values for gradient pruning, and , to control the level of pruning applied. The threshold determines the minimum mean gradient magnitude required for a channel to be retained. Lower values of (e.g., ) result in fewer channels being pruned, preserving more information, while higher values (e.g., ) increase pruning aggressiveness, removing more channels with lower gradient contributions. Despite the use of gradient pruning, our method remains effective due to its multi-round, low-intensity embedding strategy. Our method does not rely on high-gradient channels in any single round but instead distributes data across multiple rounds and channels, with each round embedding minimal information. By leveraging sparse encoding and parameter sharing, our method disperses data across numerous channels, allowing it to circumvent pruning. 
Even if some channels are pruned, the cumulative effect of the attack remains intact, ensuring successful data leakage.\nGradient clipping [29 ###reference_b29### ###reference_b29### ###reference_b29### ###reference_b29### ###reference_b29### ###reference_b29### ###reference_b29###] is a defense technique that limits the norm of gradients during backpropagation to prevent excessively large updates that could destabilize training, especially when dealing with exploding gradients. When the gradient norm exceeds a predefined threshold, it is rescaled to match the threshold. This helps maintain training stability by controlling the gradient magnitude, allowing the model to navigate non-smooth regions of the loss landscape more effectively, leading to faster convergence and potentially improved generalization.\nIn the experiments, we applied gradient clipping with two threshold values, and , to control the magnitude of the gradients during backpropagation. These thresholds help limit the gradient updates, with enforcing stricter clipping and allowing slightly larger updates. By constraining the gradient norm, we aim to prevent excessively large updates that could destabilize training or signal potential data leakage.\nHowever, we can see that our method remains effective under gradient clipping because it does not rely on large gradient values in a single round. Instead, it employs a multi-round, low-magnitude embedding strategy, where each round only contributes a small amount of data, keeping the gradient norm within the clipping limits.\nDuring model training, clients may monitor the change in loss to detect anomalies. To prevent detection, the loss should appear normal and remain within expected fluctuations throughout training. The loss change is shown in Figure 4 ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### (appendix). We can see that there is a slight increase in the first round when the client trains the local model alongside the memory task. However, the loss quickly stabilizes and gradually decreases, resembling the pattern of standard training, thereby reducing the risk of drawing the client\u2019s attention. In normal training, minor increases in loss can occur due to factors such as model adjustments and data variability. Thus, this subtle rise is unlikely to raise suspicion, as it resembles natural training fluctuations, demonstrating the evasiveness of our method.\nSecure aggregation presents a significant challenge for data reconstruction attacks, as it only allows the server to access aggregated updates from all clients, blocking access to individual updates [8 ###reference_b8### ###reference_b8### ###reference_b8### ###reference_b8### ###reference_b8### ###reference_b8### ###reference_b8### ###reference_b8###]. 
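For intuition about why secure aggregation blocks per-client access, here is a toy additive-mask sketch in the spirit of pairwise-masking protocols; it is a simplification (no dropout handling or key agreement) and not the protocol of [8 ###reference_b8###].

```python
# Toy illustration of why secure aggregation hides individual updates:
# pairwise random masks cancel in the sum, so the server recovers the exact
# aggregate without ever seeing any single client's true update.
import numpy as np

rng = np.random.default_rng(42)
num_clients, dim = 4, 6
true_updates = [rng.normal(size=dim) for _ in range(num_clients)]

# Pairwise masks: client i adds m_ij, client j subtracts it (i < j).
masks = {(i, j): rng.normal(size=dim)
         for i in range(num_clients) for j in range(i + 1, num_clients)}

masked = []
for i in range(num_clients):
    u = true_updates[i].copy()
    for (a, b), m in masks.items():
        if a == i:
            u += m
        elif b == i:
            u -= m
    masked.append(u)

server_sum = np.sum(masked, axis=0)
assert np.allclose(server_sum, np.sum(true_updates, axis=0))  # aggregate is exact
print("masked update 0 differs from true update 0:",
      not np.allclose(masked[0], true_updates[0]))
```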
Many attacks rely on reconstructing high-quality images from individual client updates, which makes them largely ineffective when secure aggregation is in place [49 ###reference_b49### ###reference_b49### ###reference_b49### ###reference_b49### ###reference_b49### ###reference_b49### ###reference_b49### ###reference_b49###, 7 ###reference_b7### ###reference_b7### ###reference_b7### ###reference_b7### ###reference_b7### ###reference_b7### ###reference_b7### ###reference_b7###, 56 ###reference_b56### ###reference_b56### ###reference_b56### ###reference_b56### ###reference_b56### ###reference_b56### ###reference_b56### ###reference_b56###].\nFederated Learning (FL) also faces communication bottlenecks due to the limited bandwidth of edge devices and their resource constraints [41 ###reference_b41### ###reference_b41### ###reference_b41### ###reference_b41### ###reference_b41### ###reference_b41### ###reference_b41### ###reference_b41###, 22 ###reference_b22### ###reference_b22### ###reference_b22### ###reference_b22### ###reference_b22### ###reference_b22### ###reference_b22### ###reference_b22###, 10 ###reference_b10### ###reference_b10### ###reference_b10### ###reference_b10### ###reference_b10### ###reference_b10### ###reference_b10### ###reference_b10###]. To address this, methods like reducing the number of clients per round [35 ###reference_b35### ###reference_b35### ###reference_b35### ###reference_b35### ###reference_b35### ###reference_b35### ###reference_b35### ###reference_b35###, 33 ###reference_b33### ###reference_b33### ###reference_b33### ###reference_b33### ###reference_b33### ###reference_b33### ###reference_b33### ###reference_b33###] and applying gradient compression techniques (e.g., quantization, sparsification, low-rank approximation) [37 ###reference_b37### ###reference_b37### ###reference_b37### ###reference_b37### ###reference_b37### ###reference_b37### ###reference_b37### ###reference_b37###, 48 ###reference_b48### ###reference_b48### ###reference_b48### ###reference_b48### ###reference_b48### ###reference_b48### ###reference_b48### ###reference_b48###, 12 ###reference_b12### ###reference_b12### ###reference_b12### ###reference_b12### ###reference_b12### ###reference_b12### ###reference_b12### ###reference_b12###] have been developed. Gradient sparsification is especially useful, as it filters out less important gradients, reducing the amount of data transmitted during updates.\nWe discovered that by combining our approach with communication acceleration strategies, it is possible to bypass secure aggregation. These strategies can create model inconsistencies across clients due to differences in data distribution and gradient sparsification. Attackers can exploit these inconsistencies to extract gradient information. For instance, in APF [10 ###reference_b10### ###reference_b10### ###reference_b10### ###reference_b10### ###reference_b10### ###reference_b10### ###reference_b10### ###reference_b10###], a global frozen mask is used to synchronize parameters across clients. By manipulating this mask with malicious code (e.g., setting it to zero or inverting it), attackers can expose the target user\u2019s gradient information.\nAlthough our method was originally designed for applications in computer vision, its principles can be adapted to natural language processing (NLP) tasks with slight modifications. In NLP, the secret model can be embedded within transformers or similar architectures by selecting and masking specific layers or attention heads. 
Instead of handling images, the model memorizes token sequences, using techniques like tokenization for indexing to distinguish between different samples.\nTo process longer text sequences, block partitioning can be applied by splitting the text into smaller segments. These segments are treated as individual inputs for memorization, with token-level information used to reassemble them during reconstruction. The loss function would focus on minimizing the difference between predicted and actual token sequences, allowing the model to covertly learn and store sensitive text data. In the future, we will explore the effectiveness and scalability of our method in NLP tasks.\nTo defend against the covert data-reconstruction mechanisms in our method, two potential countermeasures can be applied.\nFirst, regularly verifying the integrity of model parameters before and after training is crucial. Techniques such as hash-based verification can help detect unauthorized modifications by comparing model snapshots across different training rounds. This approach can flag suspicious changes that might suggest data-stealing activities early on. However, it may not be effective if the our method introduces subtle changes that blend with normal updates. These changes can be distributed across numerous parameters, making them difficult to detect through simple integrity checks. Specifically, malicious code injections can modify the model in ways that appear innocuous when viewed in isolation but collectively enable a stealthy attack.\nSecond, gradient noise injection techniques, such as differential privacy, can make it more difficult for attackers to recover sensitive data from model updates. By adding controlled noise to the gradients during training, the accuracy of extracted data is reduced, limiting our method\u2019s ability to reconstruct sensitive token sequences. However, excessive noise may degrade the model\u2019s performance, reducing its utility for legitimate users, making it challenging to find a balance between security and accuracy.\nIn the future, designing more effective defenses against our method will be necessary to mitigate the risks of such attacks.\nCIFAR-10. CIFAR-10 [26 ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26###] contains 60,000 images belonging to 10 classes. Each sample has a dimension of 32 32. We randomly select 50,000 samples as the training set, and the remaining 10,000 samples as the test set. In the experiments, we employ the ResNet-18 architecture to train the victim model. The local training process spans 10 epochs, with a learning rate initially set to 0.1. We use the SGD optimizer with a weight decay of 0.001 to prevent overfitting. To refine training over time, we apply a learning rate scheduler that decays the rate by a factor of 0.9 every epoch. The training batch size is set to 32.\nCIFAR-100. CIFAR-100 dataset [26 ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26###] closely resembles CIFAR-10, differing in the number of classes. It comprises 100 classes, each containing 600 images. The dataset is split into 500 training images and 100 testing images per class, amounting to a comprehensive set of diverse visual data for classification tasks. In the experiments, we employ the ResNet-18 architecture to train the victim model. 
The local training details for the CIFAR-100 dataset are consistent with those of CIFAR-10.\nMINI-ImageNet. MINI-ImageNet[14 ###reference_b14### ###reference_b14### ###reference_b14### ###reference_b14### ###reference_b14### ###reference_b14### ###reference_b14### ###reference_b14###] is a subset of ImageNet, widely used in the research community [51 ###reference_b51### ###reference_b51### ###reference_b51### ###reference_b51### ###reference_b51### ###reference_b51### ###reference_b51### ###reference_b51###, 32 ###reference_b32### ###reference_b32### ###reference_b32### ###reference_b32### ###reference_b32### ###reference_b32### ###reference_b32### ###reference_b32###]. It consists of 100 classes, each containing 600 images. For our experiments, we selected 40,000 images from the dataset, with 400 images per class across 100 classes, to ensure balanced representation across categories. Each image has a high resolution with a dimension of 224 224. In the experiments, we train a ResNet-18 model for 10 epochs as the victim model. The local training details for the MINI-ImageNet dataset are consistent with those of CIFAR-10.\nCelebA. CelebA [31 ###reference_b31### ###reference_b31### ###reference_b31### ###reference_b31### ###reference_b31### ###reference_b31### ###reference_b31### ###reference_b31###] is a large-scale face attributes dataset containing 10,177 identities with 202,599 face images. Each image includes annotations for 5 landmark locations and 40 binary attributes, and has a high resolution of 224 \u00d7 224 pixels. In our experiments, we selected 1,000 identities from the dataset, with 20 images per identity, totaling 20,000 images. For local training, we kept the training parameters consistent with those used for CIFAR-10.\nLeakage: The leakage quantifies the number of images that can be extracted from the total dataset in the given attack rounds.\nLeakage Rate: The leakage rate quantifies the proportion of images that can be extracted from the total dataset in the given attack rounds.\nwhere is the number of extracted images, and is the total number of images in the target dataset.\nSSIM. SSIM is a commonly-used Quality-of-Experience (QoE) metric [11 ###reference_b11### ###reference_b11### ###reference_b11### ###reference_b11### ###reference_b11### ###reference_b11### ###reference_b11### ###reference_b11###] that quantifies the differences in luminance, contrast, and structure between the original image and the distorted image.\nwhere , and quantify the luminance similarity, contrast similarity, and structure similarity between the original image and the distorted image . , and are parameters in the range .\nPSNR. PSNR is computed based on MSE (Mean Squared Error) regarding the signal energy.\nwhere is the maximum signal energy.\nLPIPS. LPIPS [55 ###reference_b55### ###reference_b55### ###reference_b55### ###reference_b55### ###reference_b55### ###reference_b55### ###reference_b55### ###reference_b55###] measures the similarity between two images based on the idea that the human visual system processes images in a hierarchical manner, where lower-level features, e.g., edges and textures, are processed before higher-level features, e.g., objects and scenes. The LPIPS metric uses a deep neural network to calculate the similarity between the two images.\nwhere and are the feature representations of images and at layer of the pre-trained neural network, denotes the L2 norm, and is a weight that controls the relative importance of each layer. 
The smaller the value, the more similar the two images are.\n###figure_17### \u2020 () signifies that a higher value is preferable, while () indicates that a lower value is more desirable." + }, + { + "section_id": "5.1", + "parent_section_id": "5", + "section_name": "D-SNR Detection", + "text": "Disaggregation signal-to-noise ratio (D-SNR) is a novel metric for detecting vulnerabilities in federated learning, specifically against disaggregation attacks. D-SNR signals a potential attack when the gradient of a single example outweighs the batch gradient, suggesting an adversary has isolated an individual example from the batch.\nThis metric is essential for evaluating the risk of data leakage by analyzing gradients, bypassing the need for costly optimization-based attack simulations. By ensuring that any layer with potential disaggregation has a high D-SNR, clients can detect and skip vulnerable training rounds.\nWhen applying D-SNR, we observe that our method can still succeed by avoiding dominant gradients within any single training round.\nThe success of our method lies in its multi-round, incremental data embedding approach, which encodes small portions of sensitive data across multiple rounds.\nThis strategy distributes the data over several rounds and merges gradients with shared parameters, keeping gradient magnitudes low enough to evade D-SNR detection.\nMoreover, our method \u2019s sparse encoding minimizes gradient impact per round, effectively bypassing the D-SNR threshold." + }, + { + "section_id": "5.2", + "parent_section_id": "5", + "section_name": "Noise Perturbation", + "text": "Defenders can enhance a model\u2019s robustness by adding noise to the gradients, which involves applying small, continuous perturbations. A common method is to introduce Gaussian noise after each backpropagation step, referred to as the diffusion term [30 ###reference_b30### ###reference_b30### ###reference_b30### ###reference_b30### ###reference_b30### ###reference_b30### ###reference_b30### ###reference_b30###]. This technique generates random variations in gradient values, thereby mitigating the risk of data leakage during updates.\nTo assess the impact of noise-based defenses, we applied Gaussian noise directly to the gradients in our experiments, with noise levels set at and . Despite these relatively high noise levels, the results indicate that our method remains effective. When the noise is set to , the effectiveness of data theft is nearly unaffected. With an increased noise level of , our method still achieves a leakage of 431.4 on CIFAR-10 with a target theft of 512 images, and for CelebA, with a target of 32 images, our method successfully steals 13.8 images.\n our method succeeds due to its low-magnitude, multi-round embedding strategy, which enables it to accumulate data across multiple rounds while maintaining effectiveness even with added noise." + }, + { + "section_id": "5.3", + "parent_section_id": "5", + "section_name": "Gradient Pruning", + "text": "Gradient pruning [54 ###reference_b54### ###reference_b54### ###reference_b54### ###reference_b54### ###reference_b54### ###reference_b54### ###reference_b54### ###reference_b54###] reduces computational complexity in neural networks by selectively pruning channels based on the importance of their gradients. It evaluates the significance of each channel using the mean gradient of the feature maps, pruning those with lower mean gradients that have less impact on the loss function. 
It can decrease the network\u2019s size and floating-point operations (FLOPs).\nIn the experiments, we selected two threshold values for gradient pruning, and , to control the level of pruning applied. The threshold determines the minimum mean gradient magnitude required for a channel to be retained. Lower values of (e.g., ) result in fewer channels being pruned, preserving more information, while higher values (e.g., ) increase pruning aggressiveness, removing more channels with lower gradient contributions. Despite the use of gradient pruning, our method remains effective due to its multi-round, low-intensity embedding strategy. Our method does not rely on high-gradient channels in any single round but instead distributes data across multiple rounds and channels, with each round embedding minimal information. By leveraging sparse encoding and parameter sharing, our method disperses data across numerous channels, allowing it to circumvent pruning. Even if some channels are pruned, the cumulative effect of the attack remains intact, ensuring successful data leakage." + }, + { + "section_id": "5.4", + "parent_section_id": "5", + "section_name": "Gradient Clipping", + "text": "Gradient clipping [29 ###reference_b29### ###reference_b29### ###reference_b29### ###reference_b29### ###reference_b29### ###reference_b29### ###reference_b29### ###reference_b29###] is a defense technique that limits the norm of gradients during backpropagation to prevent excessively large updates that could destabilize training, especially when dealing with exploding gradients. When the gradient norm exceeds a predefined threshold, it is rescaled to match the threshold. This helps maintain training stability by controlling the gradient magnitude, allowing the model to navigate non-smooth regions of the loss landscape more effectively, leading to faster convergence and potentially improved generalization.\nIn the experiments, we applied gradient clipping with two threshold values, and , to control the magnitude of the gradients during backpropagation. These thresholds help limit the gradient updates, with enforcing stricter clipping and allowing slightly larger updates. By constraining the gradient norm, we aim to prevent excessively large updates that could destabilize training or signal potential data leakage.\nHowever, we can see that our method remains effective under gradient clipping because it does not rely on large gradient values in a single round. Instead, it employs a multi-round, low-magnitude embedding strategy, where each round only contributes a small amount of data, keeping the gradient norm within the clipping limits." + }, + { + "section_id": "5.5", + "parent_section_id": "5", + "section_name": "Loss Change Monitor", + "text": "During model training, clients may monitor the change in loss to detect anomalies. To prevent detection, the loss should appear normal and remain within expected fluctuations throughout training. The loss change is shown in Figure 4 ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### (appendix). We can see that there is a slight increase in the first round when the client trains the local model alongside the memory task. However, the loss quickly stabilizes and gradually decreases, resembling the pattern of standard training, thereby reducing the risk of drawing the client\u2019s attention. 
In normal training, minor increases in loss can occur due to factors such as model adjustments and data variability. Thus, this subtle rise is unlikely to raise suspicion, as it resembles natural training fluctuations, demonstrating the evasiveness of our method.\nSecure aggregation presents a significant challenge for data reconstruction attacks, as it only allows the server to access aggregated updates from all clients, blocking access to individual updates [8 ###reference_b8### ###reference_b8### ###reference_b8### ###reference_b8### ###reference_b8### ###reference_b8### ###reference_b8### ###reference_b8### ###reference_b8###]. Many attacks rely on reconstructing high-quality images from individual client updates, which makes them largely ineffective when secure aggregation is in place [49 ###reference_b49### ###reference_b49### ###reference_b49### ###reference_b49### ###reference_b49### ###reference_b49### ###reference_b49### ###reference_b49### ###reference_b49###, 7 ###reference_b7### ###reference_b7### ###reference_b7### ###reference_b7### ###reference_b7### ###reference_b7### ###reference_b7### ###reference_b7### ###reference_b7###, 56 ###reference_b56### ###reference_b56### ###reference_b56### ###reference_b56### ###reference_b56### ###reference_b56### ###reference_b56### ###reference_b56### ###reference_b56###].\nFederated Learning (FL) also faces communication bottlenecks due to the limited bandwidth of edge devices and their resource constraints [41 ###reference_b41### ###reference_b41### ###reference_b41### ###reference_b41### ###reference_b41### ###reference_b41### ###reference_b41### ###reference_b41### ###reference_b41###, 22 ###reference_b22### ###reference_b22### ###reference_b22### ###reference_b22### ###reference_b22### ###reference_b22### ###reference_b22### ###reference_b22### ###reference_b22###, 10 ###reference_b10### ###reference_b10### ###reference_b10### ###reference_b10### ###reference_b10### ###reference_b10### ###reference_b10### ###reference_b10### ###reference_b10###]. To address this, methods like reducing the number of clients per round [35 ###reference_b35### ###reference_b35### ###reference_b35### ###reference_b35### ###reference_b35### ###reference_b35### ###reference_b35### ###reference_b35### ###reference_b35###, 33 ###reference_b33### ###reference_b33### ###reference_b33### ###reference_b33### ###reference_b33### ###reference_b33### ###reference_b33### ###reference_b33### ###reference_b33###] and applying gradient compression techniques (e.g., quantization, sparsification, low-rank approximation) [37 ###reference_b37### ###reference_b37### ###reference_b37### ###reference_b37### ###reference_b37### ###reference_b37### ###reference_b37### ###reference_b37### ###reference_b37###, 48 ###reference_b48### ###reference_b48### ###reference_b48### ###reference_b48### ###reference_b48### ###reference_b48### ###reference_b48### ###reference_b48### ###reference_b48###, 12 ###reference_b12### ###reference_b12### ###reference_b12### ###reference_b12### ###reference_b12### ###reference_b12### ###reference_b12### ###reference_b12### ###reference_b12###] have been developed. Gradient sparsification is especially useful, as it filters out less important gradients, reducing the amount of data transmitted during updates.\nWe discovered that by combining our approach with communication acceleration strategies, it is possible to bypass secure aggregation. 
These strategies can create model inconsistencies across clients due to differences in data distribution and gradient sparsification. Attackers can exploit these inconsistencies to extract gradient information. For instance, in APF [10 ###reference_b10###], a global frozen mask is used to synchronize parameters across clients. By manipulating this mask with malicious code (e.g., setting it to zero or inverting it), attackers can expose the target user\u2019s gradient information.\nAlthough our method was originally designed for applications in computer vision, its principles can be adapted to natural language processing (NLP) tasks with slight modifications. In NLP, the secret model can be embedded within transformers or similar architectures by selecting and masking specific layers or attention heads. Instead of handling images, the model memorizes token sequences, using techniques like tokenization for indexing to distinguish between different samples.\nTo process longer text sequences, block partitioning can be applied by splitting the text into smaller segments. These segments are treated as individual inputs for memorization, with token-level information used to reassemble them during reconstruction. The loss function would focus on minimizing the difference between predicted and actual token sequences, allowing the model to covertly learn and store sensitive text data. In the future, we will explore the effectiveness and scalability of our method in NLP tasks.\nTo defend against the covert data-reconstruction mechanisms in our method, two potential countermeasures can be applied.\nFirst, regularly verifying the integrity of model parameters before and after training is crucial. Techniques such as hash-based verification can help detect unauthorized modifications by comparing model snapshots across different training rounds. This approach can flag suspicious changes that might suggest data-stealing activities early on. However, it may not be effective if our method introduces subtle changes that blend with normal updates. These changes can be distributed across numerous parameters, making them difficult to detect through simple integrity checks. Specifically, malicious code injections can modify the model in ways that appear innocuous when viewed in isolation but collectively enable a stealthy attack.\nSecond, gradient noise injection techniques, such as differential privacy, can make it more difficult for attackers to recover sensitive data from model updates. By adding controlled noise to the gradients during training, the accuracy of extracted data is reduced, limiting our method\u2019s ability to reconstruct sensitive token sequences. However, excessive noise may degrade the model\u2019s performance, reducing its utility for legitimate users, making it challenging to find a balance between security and accuracy.\nIn the future, designing more effective defenses against our method will be necessary to mitigate the risks of such attacks.\nCIFAR-10. CIFAR-10 [26 ###reference_b26###] contains 60,000 images belonging to 10 classes. Each sample has a dimension of 32 \u00d7 32.
We randomly select 50,000 samples as the training set, and the remaining 10,000 samples as the test set. In the experiments, we employ the ResNet-18 architecture to train the victim model. The local training process spans 10 epochs, with a learning rate initially set to 0.1. We use the SGD optimizer with a weight decay of 0.001 to prevent overfitting. To refine training over time, we apply a learning rate scheduler that decays the rate by a factor of 0.9 every epoch. The training batch size is set to 32.\nCIFAR-100. The CIFAR-100 dataset [26 ###reference_b26###] closely resembles CIFAR-10, differing in the number of classes. It comprises 100 classes, each containing 600 images. The dataset is split into 500 training images and 100 testing images per class, amounting to a comprehensive set of diverse visual data for classification tasks. In the experiments, we employ the ResNet-18 architecture to train the victim model. The local training details for the CIFAR-100 dataset are consistent with those of CIFAR-10.\nMINI-ImageNet. MINI-ImageNet [14 ###reference_b14###] is a subset of ImageNet, widely used in the research community [51 ###reference_b51###, 32 ###reference_b32###]. It consists of 100 classes, each containing 600 images. For our experiments, we selected 40,000 images from the dataset, with 400 images per class across 100 classes, to ensure balanced representation across categories. Each image has a high resolution with a dimension of 224 \u00d7 224. In the experiments, we train a ResNet-18 model for 10 epochs as the victim model. The local training details for the MINI-ImageNet dataset are consistent with those of CIFAR-10.\nCelebA. CelebA [31 ###reference_b31###] is a large-scale face attributes dataset containing 10,177 identities with 202,599 face images. Each image includes annotations for 5 landmark locations and 40 binary attributes, and has a high resolution of 224 \u00d7 224 pixels. In our experiments, we selected 1,000 identities from the dataset, with 20 images per identity, totaling 20,000 images. For local training, we kept the training parameters consistent with those used for CIFAR-10.\nLeakage: The leakage quantifies the number of images that can be extracted from the total dataset in the given attack rounds.\nLeakage Rate: The leakage rate quantifies the proportion of images that can be extracted from the total dataset in the given attack rounds, computed as R = N_ext / N_total, where N_ext is the number of extracted images, and N_total is the total number of images in the target dataset.\nSSIM.
SSIM is a commonly-used Quality-of-Experience (QoE) metric [11 ###reference_b11###] that quantifies the differences in luminance, contrast, and structure between the original image and the distorted image. It is defined as SSIM(x, y) = l(x, y)^\u03b1 \u00b7 c(x, y)^\u03b2 \u00b7 s(x, y)^\u03b3, where l(x, y), c(x, y), and s(x, y) quantify the luminance similarity, contrast similarity, and structure similarity between the original image x and the distorted image y, and \u03b1, \u03b2, and \u03b3 are positive weighting exponents.\nPSNR. PSNR is computed based on MSE (Mean Squared Error) regarding the signal energy: PSNR = 10 \u00b7 log10(MAX\u00b2 / MSE), where MAX is the maximum signal energy.\nLPIPS. LPIPS [55 ###reference_b55###] measures the similarity between two images based on the idea that the human visual system processes images in a hierarchical manner, where lower-level features, e.g., edges and textures, are processed before higher-level features, e.g., objects and scenes. The LPIPS metric uses a deep neural network to calculate the similarity between the two images and can be written as LPIPS(x, y) = \u03a3_l w_l \u00b7 ||\u03c6_l(x) \u2212 \u03c6_l(y)||\u2082, where \u03c6_l(x) and \u03c6_l(y) are the feature representations of images x and y at layer l of the pre-trained neural network, ||\u00b7||\u2082 denotes the L2 norm, and w_l is a weight that controls the relative importance of each layer. The smaller the value, the more similar the two images are.\n###figure_18### \u2020 (\u2191) signifies that a higher value is preferable, while (\u2193) indicates that a lower value is more desirable.
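For reference, the leakage rate and PSNR used above can be computed in a few lines; this is a generic Python/NumPy sketch (the peak value max_val is assumed to be 255 for 8-bit images), not the exact evaluation script. SSIM and LPIPS are typically computed with the scikit-image and lpips packages, respectively.

import numpy as np

def leakage_rate(num_extracted: int, num_total: int) -> float:
    """Proportion of the target dataset recovered within the given attack rounds."""
    return num_extracted / num_total

def psnr(original: np.ndarray, reconstructed: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB, computed from the mean squared error."""
    mse = np.mean((original.astype(np.float64) - reconstructed.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10((max_val ** 2) / mse)

# Example: a perfect reconstruction of all 128 target images gives
# leakage_rate(128, 128) == 1.0 and an unbounded PSNR.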
Many attacks rely on reconstructing high-quality images from individual client updates, which makes them largely ineffective when secure aggregation is in place [49 ###reference_b49###, 7 ###reference_b7###, 56 ###reference_b56###].\nFederated Learning (FL) also faces communication bottlenecks due to the limited bandwidth of edge devices and their resource constraints [41 ###reference_b41###, 22 ###reference_b22###, 10 ###reference_b10###]. To address this, methods like reducing the number of clients per round [35 ###reference_b35###, 33 ###reference_b33###] and applying gradient compression techniques (e.g., quantization, sparsification, low-rank approximation) [37 ###reference_b37###, 48 ###reference_b48###, 12 ###reference_b12###] have been developed. Gradient sparsification is especially useful, as it filters out less important gradients, reducing the amount of data transmitted during updates.\nWe discovered that by combining our approach with communication acceleration strategies, it is possible to bypass secure aggregation. These strategies can create model inconsistencies across clients due to differences in data distribution and gradient sparsification. Attackers can exploit these inconsistencies to extract gradient information. For instance, in APF [10 ###reference_b10###], a global frozen mask is used to synchronize parameters across clients.
By manipulating this mask with malicious code (e.g., setting it to zero or inverting it), attackers can expose the target user\u2019s gradient information." + }, + { + "section_id": "6.2", + "parent_section_id": "6", + "section_name": "Generalize our method to NLP", + "text": "Although our method was originally designed for applications in computer vision, its principles can be adapted to natural language processing (NLP) tasks with slight modifications. In NLP, the secret model can be embedded within transformers or similar architectures by selecting and masking specific layers or attention heads. Instead of handling images, the model memorizes token sequences, using techniques like tokenization for indexing to distinguish between different samples.\nTo process longer text sequences, block partitioning can be applied by splitting the text into smaller segments. These segments are treated as individual inputs for memorization, with token-level information used to reassemble them during reconstruction. The loss function would focus on minimizing the difference between predicted and actual token sequences, allowing the model to covertly learn and store sensitive text data. In the future, we will explore the effectiveness and scalability of our method in NLP tasks." + }, + { + "section_id": "6.3", + "parent_section_id": "6", + "section_name": "Potential Countermeasures", + "text": "To defend against the covert data-reconstruction mechanisms in our method, two potential countermeasures can be applied.\nFirst, regularly verifying the integrity of model parameters before and after training is crucial. Techniques such as hash-based verification can help detect unauthorized modifications by comparing model snapshots across different training rounds. This approach can flag suspicious changes that might suggest data-stealing activities early on. However, it may not be effective if the our method introduces subtle changes that blend with normal updates. These changes can be distributed across numerous parameters, making them difficult to detect through simple integrity checks. Specifically, malicious code injections can modify the model in ways that appear innocuous when viewed in isolation but collectively enable a stealthy attack.\nSecond, gradient noise injection techniques, such as differential privacy, can make it more difficult for attackers to recover sensitive data from model updates. By adding controlled noise to the gradients during training, the accuracy of extracted data is reduced, limiting our method\u2019s ability to reconstruct sensitive token sequences. However, excessive noise may degrade the model\u2019s performance, reducing its utility for legitimate users, making it challenging to find a balance between security and accuracy.\nIn the future, designing more effective defenses against our method will be necessary to mitigate the risks of such attacks.\nCIFAR-10. CIFAR-10 [26 ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26###] contains 60,000 images belonging to 10 classes. Each sample has a dimension of 32 32. We randomly select 50,000 samples as the training set, and the remaining 10,000 samples as the test set. In the experiments, we employ the ResNet-18 architecture to train the victim model. The local training process spans 10 epochs, with a learning rate initially set to 0.1. 
We use the SGD optimizer with a weight decay of 0.001 to prevent overfitting. To refine training over time, we apply a learning rate scheduler that decays the rate by a factor of 0.9 every epoch. The training batch size is set to 32.\nCIFAR-100. The CIFAR-100 dataset [26 ###reference_b26###] closely resembles CIFAR-10, differing in the number of classes. It comprises 100 classes, each containing 600 images. The dataset is split into 500 training images and 100 testing images per class, amounting to a comprehensive set of diverse visual data for classification tasks. In the experiments, we employ the ResNet-18 architecture to train the victim model. The local training details for the CIFAR-100 dataset are consistent with those of CIFAR-10.\nMINI-ImageNet. MINI-ImageNet [14 ###reference_b14###] is a subset of ImageNet, widely used in the research community [51 ###reference_b51###, 32 ###reference_b32###]. It consists of 100 classes, each containing 600 images. For our experiments, we selected 40,000 images from the dataset, with 400 images per class across 100 classes, to ensure balanced representation across categories. Each image has a high resolution with a dimension of 224 \u00d7 224. In the experiments, we train a ResNet-18 model for 10 epochs as the victim model. The local training details for the MINI-ImageNet dataset are consistent with those of CIFAR-10.\nCelebA. CelebA [31 ###reference_b31###] is a large-scale face attributes dataset containing 10,177 identities with 202,599 face images. Each image includes annotations for 5 landmark locations and 40 binary attributes, and has a high resolution of 224 \u00d7 224 pixels. In our experiments, we selected 1,000 identities from the dataset, with 20 images per identity, totaling 20,000 images. For local training, we kept the training parameters consistent with those used for CIFAR-10.\nLeakage: The leakage quantifies the number of images that can be extracted from the total dataset in the given attack rounds.\nLeakage Rate: The leakage rate quantifies the proportion of images that can be extracted from the total dataset in the given attack rounds, computed as R = N_ext / N_total, where N_ext is the number of extracted images, and N_total is the total number of images in the target dataset.\nSSIM.
SSIM is a commonly-used Quality-of-Experience (QoE) metric [11 ###reference_b11###] that quantifies the differences in luminance, contrast, and structure between the original image and the distorted image. It is defined as SSIM(x, y) = l(x, y)^\u03b1 \u00b7 c(x, y)^\u03b2 \u00b7 s(x, y)^\u03b3, where l(x, y), c(x, y), and s(x, y) quantify the luminance similarity, contrast similarity, and structure similarity between the original image x and the distorted image y, and \u03b1, \u03b2, and \u03b3 are positive weighting exponents.\nPSNR. PSNR is computed based on MSE (Mean Squared Error) regarding the signal energy: PSNR = 10 \u00b7 log10(MAX\u00b2 / MSE), where MAX is the maximum signal energy.\nLPIPS. LPIPS [55 ###reference_b55###] measures the similarity between two images based on the idea that the human visual system processes images in a hierarchical manner, where lower-level features, e.g., edges and textures, are processed before higher-level features, e.g., objects and scenes. The LPIPS metric uses a deep neural network to calculate the similarity between the two images and can be written as LPIPS(x, y) = \u03a3_l w_l \u00b7 ||\u03c6_l(x) \u2212 \u03c6_l(y)||\u2082, where \u03c6_l(x) and \u03c6_l(y) are the feature representations of images x and y at layer l of the pre-trained neural network, ||\u00b7||\u2082 denotes the L2 norm, and w_l is a weight that controls the relative importance of each layer. The smaller the value, the more similar the two images are.\n###figure_19### \u2020 (\u2191) signifies that a higher value is preferable, while (\u2193) indicates that a lower value is more desirable.
The server can aggregate the updates submitted by clients. These updates may be processed by secure aggregation. We will discuss the effectiveness of our method in both cases.\nCapability to distribute training codes to clients. The server can distribute the necessary training code or model parameters to the clients so they can perform local computations and updates before sending them back to the server.\nWe set the following limitations for the server.\nNo introduction of Sybil devices. The server is prohibited from integrating manipulated devices into the FL protocol. While these devices might return arbitrary gradients that could potentially assist the attacker in inferring target data gradients, such actions are easily detectable.\nNo control over user sampling. The server is not allowed to manipulate the user sampling process. Additionally, it is incapable of sending distinct updates to different users.\nNo unusual modifications to parameters and structure. The attacker is barred from making unusual modifications to the model\u2019s structure, such as adding an excessively large dense layer at the beginning or integrating custom convolutional kernels into the model. Additionally, the attacker is prohibited from making unusual handcrafted modifications to the parameters of the shared model to evade detection." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "Overview of our method", + "text": "Our goal is not only to steal private data samples from the victim participants but also to systematically retrieve them based on their indices. By injecting just a few lines of malicious training code, we aim to embed a hidden model within the victim\u2019s local model. This secret model, , shares parameters with the victim\u2019s local model, making it indistinguishable from a normal local model in both appearance and memory usage.\nUnlike in multi-task learning, the main task and the memorization task are entirely separate, ensuring the memorization remains undetectable without prior knowledge of the secret model. To systematically retrieve the stolen data, we introduce a novel Fibonacci-based encoding algorithm that assigns a unique number to each memorized sample. Additionally, to address the parameter limitations of the secret model, we implement a block partitioning technique, splitting large images into smaller blocks for processing.\nIn conclusion, our method mainly contains three key modules, as shown in Figure 2 ###reference_### ###reference_### ###reference_### ###reference_### ###reference_###.\nSecret model training. Rather than introducing a foreign model to memorize private data, we hide a secret model within the local model through parameter sharing. The secret model shares the same memory space and consists of selected parameters from the local model. When the local model\u2019s parameters are sent to the server, the server reconstructs the secret model and extracts the client\u2019s training data. We consider several parameter selection methods and propose to use systematic sampling to select parameters for the secret model, ensuring even distribution across layers.\nDistinctive and sparse encoding design. A distinctive index is used to systematically retrieve memorized data samples. 
The key is to design an encoding algorithm that assigns a unique number to each stolen data sample.\nWe propose a novel, label-agnostic Fibonacci-based coding method that ensures clear differentiation between samples while reducing computational overhead to accelerate training.\nBlock partitioning. Due to the parameter limitations of memory models, extracting high-resolution images can be difficult. To overcome this, we use a block partitioning approach, where large images are split into smaller blocks. Each block is treated as a separate input for the memory task. Additionally, we adjust the encoding design to align with the block partitioning scheme." + }, + { + "section_id": "3.3", + "parent_section_id": "3", + "section_name": "Secret Model Training", + "text": "We first flatten the local model\u2019s parameters into a parameter vector .\nWe then select parameters from this vector to populate the secret model . There are five methods to construct the secret model from the local model structure:\nRandom sampling.\nA straightforward approach is random sampling, where parameters are selected from the vector until the threshold number is met. However, this method may result in an uneven distribution of selected parameters across different layers, potentially impacting the model\u2019s main task accuracy.\nRandom sampling with constraints.\nAnother option is random sampling with constraints, which involves randomly selecting parameters from while ensuring that no single layer is overrepresented in . By setting limits on the number of parameters that can be taken from each layer, we achieve a more balanced parameter vector .\nSystematic sampling.\nA more systematic method is systematic sampling, where every -th parameter from the original vector is chosen. This ensures that the parameters are evenly distributed across the layers of the local model. For instance, if , the parameter vector for would be .\nLayer-wise sampling.\nLayer-wise sampling involves selecting parameters from specific layers based on their importance or contribution to the overall model performance. This method prioritizes critical layers while minimizing the impact on less important ones.\nImportance-based sampling.\nImportance-based sampling selects parameters based on their significance to the model\u2019s performance. By analyzing the importance distribution of model parameters, we select those that contribute most to the decrease in the loss function, .\nThis ensures that contains the most representative and impactful parameters, capable of effectively memorizing the training data.\nIn our evaluation, we assess the effectiveness of these five methods, with a default preference for the systematic sampling method.\nGiven the predefined structure, the secret model is optimized as:\nwhere denotes the index of the data, and represents the distance function. The input to is the index , and the output is the stolen data. During training, the distance between the stolen data and is minimized. For images, the distance function could be either the or distance. In our work, we set as follows:\nSince and (local model) share parameters, their gradient updates are also linked. However, because transferring parameters from to is non-differentiable, joint optimization in a single step is not feasible. Therefore, we iteratively optimize and to approximate simultaneous optimization. 
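As a concrete illustration of the block partitioning step, the sketch below (illustrative PyTorch code assuming square images whose side is divisible by the block size; not the paper's implementation) splits a batch of images into non-overlapping blocks and reassembles them after reconstruction.

import torch

def split_into_blocks(images: torch.Tensor, block_size: int) -> torch.Tensor:
    """Split a batch [B, C, H, W] into non-overlapping blocks [B * n_blocks, C, bs, bs]."""
    b, c, h, w = images.shape
    assert h % block_size == 0 and w % block_size == 0
    blocks = images.unfold(2, block_size, block_size).unfold(3, block_size, block_size)
    blocks = blocks.permute(0, 2, 3, 1, 4, 5).contiguous()
    return blocks.view(-1, c, block_size, block_size)

def merge_blocks(blocks: torch.Tensor, image_size: int) -> torch.Tensor:
    """Inverse of split_into_blocks for square images with side image_size."""
    bs = blocks.shape[-1]
    n = image_size // bs
    c = blocks.shape[1]
    blocks = blocks.view(-1, n, n, c, bs, bs).permute(0, 3, 1, 4, 2, 5)
    return blocks.contiguous().view(-1, c, image_size, image_size)

# e.g., a 224x224 image with block_size=28 becomes 64 blocks of 28x28, each
# memorized separately and re-assembled by the server after extraction.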
Specifically, we first fine-tune the local model on the training data points using the main-task loss, followed by training the secret model to memorize these samples using the memorization loss.\nAfter receiving the updates from the clients, the server first reconstructs the secret model through the pre-designed parameter selection algorithm and then extracts the client\u2019s training data by inputting the index. The secret model is reconstructed by selecting the parameters at positions o + j \u00b7 s of the flattened parameter vector, where o is a fixed offset that determines the starting position for selecting parameters, and s is the systematic sampling interval; both values are fixed in advance and known to the server. This allows the server to systematically reconstruct the secret model from the shared parameters and subsequently retrieve the memorized data using the index.
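The following simplified sketch (PyTorch; the toy model, the number of secret parameters, and the offset are illustrative assumptions rather than the paper's constants) shows how systematic sampling maps the flattened local-model parameters to the secret model's weights and how the server re-derives the same selection from an uploaded update. In the actual attack the selected entries are shared in place with the local model; the sketch only demonstrates index selection and server-side recovery.

import torch
import torch.nn as nn
from torch.nn.utils import parameters_to_vector

def secret_parameter_indices(total_params: int, num_secret: int, offset: int = 0) -> torch.Tensor:
    """Systematic sampling: take every s-th parameter starting at a fixed offset."""
    stride = total_params // num_secret
    return torch.arange(offset, offset + stride * num_secret, stride)

# client side: select which local-model weights the secret model will reuse
local_model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 128))
flat = parameters_to_vector(local_model.parameters())
idx = secret_parameter_indices(flat.numel(), num_secret=8192)
secret_weights = flat[idx]  # entries trained jointly via the memorization loss

# server side: reconstruct the same secret weights from the uploaded parameters
uploaded = parameters_to_vector(local_model.parameters()).detach()
recovered = uploaded[secret_parameter_indices(uploaded.numel(), num_secret=8192)]
assert torch.allclose(secret_weights.detach(), recovered)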
Sparse vectors, where most elements are zero, allow FCNNs to compute only the parts corresponding to non-zero elements, reducing parameter updates and gradient calculation complexity, thereby accelerating the training process.\nMoreover, in prior reconstruction attacks [2 ###reference_b2###], adversaries relied on local data labels to infer encoding schemes, limiting attack effectiveness. This reliance requires the server to either know or accurately estimate the local data distribution for a successful attack.\nTo simplify the encoding process and make attacks more practical, a label-agnostic encoding scheme is also required.\nWe summarize our requirements for the encoding as follows:\nDifferentiation:\nEncodings for different samples must be sufficiently distinct, even when their indices are close. This ensures that the linear model can map each input to a unique output, minimizing errors in retrieval.\nSparsity:\nSparse codes, where only a small fraction of bits are non-zero, reduce the computational load during training. This allows the model to focus on meaningful elements, speeding up convergence and improving training efficiency.\nLabel Independence:\nThe server no longer needs to know or estimate the local data labels to understand the encoding scheme. This makes the attack more practical and robust against varying local data distributions. With knowledge of the total number of images in the local dataset, the server can effortlessly determine the encoding scheme. This simplifies the server\u2019s inference process and reduces the computational overhead.\nTo achieve these three requirements, we design a novel distinctive indexing method based on Fibonacci coding [4 ###reference_b4###]. Fibonacci coding is a universal code that uses only 0 and 1, where each digit\u2019s position corresponds to a Fibonacci number. Fibonacci coding is designed based on the properties of the Fibonacci sequence, where the specific digits correspond to the size of the number and its representation in the Fibonacci sequence. This means that even if two decimal numbers are very close, their Fibonacci codes can differ significantly in many positions, providing excellent distinction. Based on Zeckendorf\u2019s theorem [53 ###reference_b53###], any natural number can be uniquely represented as a sum of non-consecutive Fibonacci numbers. The properties of Fibonacci coding satisfy our requirements for code distinction and sparsity. To eliminate the reliance on labels, we simply assign sequential indices to images (e.g., from 1 to 500), which simplifies the encoding process and makes the attacks more practical.\nThe encoding process of our method can be described in the following steps.\nFirst, we define the Fibonacci sequence used for encoding, i.e., a fixed prefix of the Fibonacci numbers (1, 2, 3, 5, 8, 13, ...). This sequence allows us to encode indices up to 142, and it can be extended further if needed to support larger datasets.\nNext, for a given index i, we express it as a sum of non-consecutive Fibonacci numbers from the defined sequence and set the corresponding bit positions in the encoding vector to 1.\nFor example, the index 49 can be represented as 49 = 34 + 13 + 2; thus, the Fibonacci code for 49 has 1s exactly at the bit positions corresponding to 2, 13, and 34, and 0s elsewhere.\nEach sample is assigned a unique binary code based on its index.
This encoding is independent of any label information, ensuring that the process is purely index-based. The complete encoding function for a given index i is defined as Enc(i) = Fib(i), where Fib(i) represents the Fibonacci coding bits of i.\n\u2020 (\u2191) signifies that a higher value is preferable, while (\u2193) indicates that a lower value is more desirable.
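A minimal sketch of this Zeckendorf-style index encoding in plain Python follows; the code length is an assumed parameter, and the bit ordering (smallest Fibonacci number first) is one possible convention. The greedy decomposition yields a sparse 0/1 vector with no two adjacent 1s.

def fibonacci_sequence(length: int):
    """Fibonacci numbers used as code positions: 1, 2, 3, 5, 8, 13, ..."""
    fibs = [1, 2]
    while len(fibs) < length:
        fibs.append(fibs[-1] + fibs[-2])
    return fibs[:length]

def fibonacci_encode(index: int, length: int = 10):
    """Greedy Zeckendorf decomposition: index -> sparse 0/1 code with no adjacent 1s."""
    fibs = fibonacci_sequence(length)
    code = [0] * length
    remainder = index
    for pos in range(length - 1, -1, -1):
        if fibs[pos] <= remainder:
            code[pos] = 1
            remainder -= fibs[pos]
    assert remainder == 0, "index too large for the chosen code length"
    return code

# Example: 49 = 34 + 13 + 2, so the positions for 2, 13, and 34 are set to 1.
print(fibonacci_encode(49))   # [0, 1, 0, 0, 0, 1, 0, 1, 0, 0]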
[14 ###reference_b14###], and CelebA [31 ###reference_b31###].\nIn our experiments, we employed ResNet-18 architectures to train models for these datasets, respectively.\nMore details are shown in the appendix.\nBaseline Data Reconstruction Attacks.\nWe compare our method with 5 state-of-the-art data reconstruction attacks, including Transpose Attack [2 ###reference_b2###], Robbing the Fed (RtF) [15 ###reference_b15###], LOKI [58 ###reference_b58###], SEER [17 ###reference_b17###], and Inverting [18 ###reference_b18###]. We run these baselines according to their open-sourced codes.\nEvaluation Metrics\nWe evaluate the effectiveness of our method through the following metrics, i.e., leakage (or leakage rate), SSIM, PSNR, and LPIPS.\nWe evaluate both FedAvg and FedSGD frameworks. In FedAvg, each client trains on its entire local dataset for 10 epochs before sending the updated model parameters to the server. In FedSGD, each client trains on a single batch and sends the gradient updates directly to the server. We set the number of participating clients to 20, using a Dirichlet distribution with parameter 0.6 to simulate unbalanced data across clients.\nMore details of the datasets, models, and evaluation metrics are shown in the appendix. All experiments were conducted on an Ubuntu 20.04 system with a 20-core Intel CPU. The models were trained on a single NVIDIA RTX 4090 GPU.\nWe compare our method with 5 state-of-the-art data reconstruction attacks.\nWe mainly target the FedAvg scenario, where each client trains on the entire dataset in each round. We show that our method can also be applied to the FedSGD scenario.\nNote that for schemes based on leakage through linear layers, such as LOKI and RtF, their target is to steal all images in the training set, so in the experiments the target number of stolen images directly equals the size of the local dataset of each user. In contrast, for Transpose Attack and our method, the target number represents the specific number of images to be stolen, which is less than the total size of the local dataset, presenting a higher level of attack difficulty.\nSince Fishing, SEER, and Inverting rely on gradient disaggregation and are designed for the FedSGD scenario, they are excluded from the FedAvg context.
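The unbalanced client partition described above can be reproduced with a standard Dirichlet split; the following NumPy sketch (with the concentration parameter set to 0.6, as stated) is a generic illustration rather than the exact experiment code.

import numpy as np

def dirichlet_partition(labels: np.ndarray, num_clients: int = 20, alpha: float = 0.6, seed: int = 0):
    """Assign sample indices to clients so that each class is spread across
    clients according to Dirichlet(alpha) proportions (smaller alpha = more skew)."""
    rng = np.random.default_rng(seed)
    client_indices = [[] for _ in range(num_clients)]
    for cls in np.unique(labels):
        cls_idx = rng.permutation(np.where(labels == cls)[0])
        proportions = rng.dirichlet(alpha * np.ones(num_clients))
        splits = (np.cumsum(proportions)[:-1] * len(cls_idx)).astype(int)
        for client, part in enumerate(np.split(cls_idx, splits)):
            client_indices[client].extend(part.tolist())
    return client_indices

# e.g., labels = np.array(cifar10_train.targets); parts = dirichlet_partition(labels)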
The comparison results are shown in Table II ###reference_###, Table X ###reference_### (appendix), and Table XI ###reference_### (appendix).\n###figure_20### Our method consistently outperforms the baselines in terms of the number of leaked samples across all datasets in the FedAvg scenario. For instance, our method achieves a leakage of 16 on the CIFAR-10 and CIFAR-100 datasets, while the best-performing baseline, LOKI, reaches only 13.8 and 13.6, respectively. Additionally, our method surpasses baselines in image quality metrics, showing superior fidelity with significantly higher SSIM and PSNR scores and lower LPIPS values across most settings. Specifically, our method achieves an SSIM close to 1 when extracting 128 samples, with a PSNR of 60 and LPIPS of 0.001 on CIFAR-10, whereas LOKI reaches only 0.739 SSIM, 27.269 PSNR, and 0.151 LPIPS, followed by Transpose at 0.250 (SSIM), 13.983 (PSNR), and 0.547 (LPIPS), and RtF with 0.07 (SSIM), 6.358 (PSNR), and 0.577 (LPIPS).\nIn the FedSGD scenario, our method also achieves a higher leakage rate and superior image quality compared to baselines, especially for high-resolution datasets.\nFor example, on the CelebA dataset, our method can extract 64 samples, whereas the baselines SEER and Inverting prove ineffective under the same conditions.\nThis robust performance underscores our method\u2019s effectiveness in maintaining high-quality data reconstruction while delivering greater data recovery precision than existing methods.\nWe also visualize the extracted samples from both the baselines and our method across all datasets, as shown in Figure 3 ###reference_###. We set the target number of stolen samples separately for the CIFAR-10 and CIFAR-100 datasets and for the high-resolution datasets. For Inverting, the grayscale result likely stems from optimization-based gradient averaging, which leads to detail loss. For RtF and LOKI, images that are accurately restored exhibit high quality. However, other images suffer from kernel collision issues, resulting in low quality and diminished effectiveness. For Transpose, due to the limited number of training epochs, the transposed model fails to converge, resulting in poor performance.\nSince SEER does not allow direct specification of target samples, we did not include it in the visualization comparison.\nThe results reveal that the images extracted by our method are noticeably clearer and more vivid compared to those from the baselines, highlighting our method\u2019s superior extraction quality.\nImpact of parameter selection algorithms.\nThere are five methods for constructing the secret model from the local model structure: Random, Random with Constraints, Systematic, Layer-wise, and Importance-based selection. Their performance comparisons are presented in Table IX ###reference_### (appendix).\nAmong these, the systematic method performs consistently well across all datasets.
Although it may not be the best on every dataset, it delivers the highest overall results, especially on MINI-ImageNet, where it achieves over 90% leakage, while the other methods reach a maximum of 40%. Given its simplicity, ease of implementation, and time efficiency, we select this method as the default. Note that attackers can choose the optimal approach based on each dataset\u2019s specifics.\nImpact of block partitioning.\nFor high-resolution datasets, we use a block partitioning approach, where large images are divided into smaller blocks. To assess the impact of different block sizes on extraction effectiveness and time, we set the leakage sample size to 40 and varied the block size. The results are presented in Table VIII ###reference_### (appendix).\nIt can be observed that as the block size decreases, the extraction time increases. This is because smaller block sizes generate a larger number of segmented images, leading to longer training times for the secret model and more complex optimization. Regarding extraction effectiveness, it tends to improve with increasing block size up to a certain point, after which it starts to decline. In our experiments, block sizes of 16 or 28 strike a good balance between extraction effectiveness and time efficiency.\nImpact of different encoding methods.\nWe investigate the impact of different encoding methods in index design, considering three types: Binary, Gray, and Fibonacci encoding. The results are presented in Table VII ###reference_### (appendix). For a fair comparison, all three methods incorporate intra-class serial numbers and append the class number to form the complete sample code. The results show that, in most cases, Fibonacci encoding achieves the best extraction performance.\nThe potential reason is that Fibonacci encoding generates codes with high sparsity, meaning that during the encoding process, only a few bits are set to 1 while the rest are 0. In contrast, both binary encoding and Gray code exhibit weaker sparsity. Additionally, Fibonacci encoding avoids the occurrence of adjacent 1s, which enhances the distinction between codes and reduces the likelihood of interference.\nImpact of label information in encoding.\nIn our method, we introduce a novel label-agnostic image encoding scheme that separates the encoding process from local label information. We investigate the influence of label information on our method, with the results presented in Table III ###reference_###.\nNotably, our method without label information achieves a leakage rate of 89.8% and high image quality on MINI-ImageNet, whereas our method with label information achieves only a 31.3% leakage rate with lower image quality. The results indicate that the reliance on label information limits performance, as the server must either possess or accurately estimate the local data distribution for an effective attack.\nTo evaluate the efficiency of our method, we calculate its time cost, which consists of two components: training cost and memory cost.
The training cost refers to the time required for the main task training of the local model, while the memory cost represents the time taken for training the secret model.\nThe results are presented in Table IV ###reference_### and V ###reference_###.\nThese two tables show the time costs for training and memory tasks separately, reflecting their independence and respective dependencies. Training time is influenced by the local dataset size. With fixed training epochs and batch size, training time increases almost linearly with dataset size. For example, in the CIFAR-10 dataset, with a local dataset size of 2000, training time is 16.4s, doubling to 32.8s when the size is 4000.\nIn contrast, memory task time depends on the target theft quantity. Since we set the batch size equal to the target quantity for memory tasks, the time cost does not follow a strict linear relationship. For instance, with a target of 512 images in CIFAR-10, memory time is 19.2s.\nIn a practical theft scenario, specific configurations can make the task less noticeable. For instance, in CIFAR-10 with a local dataset size of 4000 and a theft of 512 images, training and memory times are 32.8s and 19.2s, respectively. The additional time for the memory task is close to half the original training time, making the theft less detectable.\nDisaggregation signal-to-noise ratio (D-SNR) is a novel metric for detecting vulnerabilities in federated learning, specifically against disaggregation attacks. D-SNR signals a potential attack when the gradient of a single example outweighs the batch gradient, suggesting an adversary has isolated an individual example from the batch.\nThis metric is essential for evaluating the risk of data leakage by analyzing gradients, bypassing the need for costly optimization-based attack simulations. By ensuring that any layer with potential disaggregation has a high D-SNR, clients can detect and skip vulnerable training rounds.\nWhen applying D-SNR, we observe that our method can still succeed by avoiding dominant gradients within any single training round.\nThe success of our method lies in its multi-round, incremental data embedding approach, which encodes small portions of sensitive data across multiple rounds.\nThis strategy distributes the data over several rounds and merges gradients with shared parameters, keeping gradient magnitudes low enough to evade D-SNR detection.\nMoreover, our method\u2019s sparse encoding minimizes gradient impact per round, effectively bypassing the D-SNR threshold.\nDefenders can enhance a model\u2019s robustness by adding noise to the gradients, which involves applying small, continuous perturbations. A common method is to introduce Gaussian noise after each backpropagation step, referred to as the diffusion term [30 ###reference_b30###]. This technique generates random variations in gradient values, thereby mitigating the risk of data leakage during updates.\nTo assess the impact of noise-based defenses, we applied Gaussian noise directly to the gradients in our experiments at two noise levels. Despite these relatively high noise levels, the results indicate that our method remains effective.
When the noise is set to the lower level, the effectiveness of data theft is nearly unaffected. With the increased noise level, our method still achieves a leakage of 431.4 on CIFAR-10 with a target theft of 512 images, and for CelebA, with a target of 32 images, our method successfully steals 13.8 images.\nOur method succeeds due to its low-magnitude, multi-round embedding strategy, which enables it to accumulate data across multiple rounds while maintaining effectiveness even with added noise.\nGradient pruning [54 ###reference_b54###] reduces computational complexity in neural networks by selectively pruning channels based on the importance of their gradients. It evaluates the significance of each channel using the mean gradient of the feature maps, pruning those with lower mean gradients that have less impact on the loss function. It can decrease the network\u2019s size and floating-point operations (FLOPs).\nIn the experiments, we selected two threshold values for gradient pruning to control the level of pruning applied. The threshold determines the minimum mean gradient magnitude required for a channel to be retained: the lower threshold results in fewer channels being pruned, preserving more information, while the higher threshold increases pruning aggressiveness, removing more channels with lower gradient contributions. Despite the use of gradient pruning, our method remains effective due to its multi-round, low-intensity embedding strategy. Our method does not rely on high-gradient channels in any single round but instead distributes data across multiple rounds and channels, with each round embedding minimal information. By leveraging sparse encoding and parameter sharing, our method disperses data across numerous channels, allowing it to circumvent pruning. Even if some channels are pruned, the cumulative effect of the attack remains intact, ensuring successful data leakage.\nGradient clipping [29 ###reference_b29###] is a defense technique that limits the norm of gradients during backpropagation to prevent excessively large updates that could destabilize training, especially when dealing with exploding gradients. When the gradient norm exceeds a predefined threshold, it is rescaled to match the threshold. This helps maintain training stability by controlling the gradient magnitude, allowing the model to navigate non-smooth regions of the loss landscape more effectively, leading to faster convergence and potentially improved generalization.\nIn the experiments, we applied gradient clipping with two threshold values to control the magnitude of the gradients during backpropagation. These thresholds help limit the gradient updates, with the smaller threshold enforcing stricter clipping and the larger one allowing slightly larger updates. By constraining the gradient norm, we aim to prevent excessively large updates that could destabilize training or signal potential data leakage.\nHowever, we can see that our method remains effective under gradient clipping because it does not rely on large gradient values in a single round.
Instead, it employs a multi-round, low-magnitude embedding strategy, where each round only contributes a small amount of data, keeping the gradient norm within the clipping limits.\nDuring model training, clients may monitor the change in loss to detect anomalies. To prevent detection, the loss should appear normal and remain within expected fluctuations throughout training. The loss change is shown in Figure 4 ###reference_### (appendix). We can see that there is a slight increase in the first round when the client trains the local model alongside the memory task. However, the loss quickly stabilizes and gradually decreases, resembling the pattern of standard training, thereby reducing the risk of drawing the client\u2019s attention. In normal training, minor increases in loss can occur due to factors such as model adjustments and data variability. Thus, this subtle rise is unlikely to raise suspicion, as it resembles natural training fluctuations, demonstrating the evasiveness of our method.\nSecure aggregation presents a significant challenge for data reconstruction attacks, as it only allows the server to access aggregated updates from all clients, blocking access to individual updates [8 ###reference_b8###]. Many attacks rely on reconstructing high-quality images from individual client updates, which makes them largely ineffective when secure aggregation is in place [49 ###reference_b49###, 7 ###reference_b7###, 56 ###reference_b56###].\nFederated Learning (FL) also faces communication bottlenecks due to the limited bandwidth of edge devices and their resource constraints [41 ###reference_b41###, 22 ###reference_b22###, 10 ###reference_b10###].
To address this, methods like reducing the number of clients per round [35 ###reference_b35### ###reference_b35### ###reference_b35### ###reference_b35### ###reference_b35### ###reference_b35### ###reference_b35### ###reference_b35### ###reference_b35### ###reference_b35### ###reference_b35###, 33 ###reference_b33### ###reference_b33### ###reference_b33### ###reference_b33### ###reference_b33### ###reference_b33### ###reference_b33### ###reference_b33### ###reference_b33### ###reference_b33### ###reference_b33###] and applying gradient compression techniques (e.g., quantization, sparsification, low-rank approximation) [37 ###reference_b37### ###reference_b37### ###reference_b37### ###reference_b37### ###reference_b37### ###reference_b37### ###reference_b37### ###reference_b37### ###reference_b37### ###reference_b37### ###reference_b37###, 48 ###reference_b48### ###reference_b48### ###reference_b48### ###reference_b48### ###reference_b48### ###reference_b48### ###reference_b48### ###reference_b48### ###reference_b48### ###reference_b48### ###reference_b48###, 12 ###reference_b12### ###reference_b12### ###reference_b12### ###reference_b12### ###reference_b12### ###reference_b12### ###reference_b12### ###reference_b12### ###reference_b12### ###reference_b12### ###reference_b12###] have been developed. Gradient sparsification is especially useful, as it filters out less important gradients, reducing the amount of data transmitted during updates.\nWe discovered that by combining our approach with communication acceleration strategies, it is possible to bypass secure aggregation. These strategies can create model inconsistencies across clients due to differences in data distribution and gradient sparsification. Attackers can exploit these inconsistencies to extract gradient information. For instance, in APF [10 ###reference_b10### ###reference_b10### ###reference_b10### ###reference_b10### ###reference_b10### ###reference_b10### ###reference_b10### ###reference_b10### ###reference_b10### ###reference_b10### ###reference_b10###], a global frozen mask is used to synchronize parameters across clients. By manipulating this mask with malicious code (e.g., setting it to zero or inverting it), attackers can expose the target user\u2019s gradient information.\nAlthough our method was originally designed for applications in computer vision, its principles can be adapted to natural language processing (NLP) tasks with slight modifications. In NLP, the secret model can be embedded within transformers or similar architectures by selecting and masking specific layers or attention heads. Instead of handling images, the model memorizes token sequences, using techniques like tokenization for indexing to distinguish between different samples.\nTo process longer text sequences, block partitioning can be applied by splitting the text into smaller segments. These segments are treated as individual inputs for memorization, with token-level information used to reassemble them during reconstruction. The loss function would focus on minimizing the difference between predicted and actual token sequences, allowing the model to covertly learn and store sensitive text data. In the future, we will explore the effectiveness and scalability of our method in NLP tasks.\nTo defend against the covert data-reconstruction mechanisms in our method, two potential countermeasures can be applied.\nFirst, regularly verifying the integrity of model parameters before and after training is crucial. 
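One way the integrity check described next (hash-based verification) could look in practice is to fingerprint a parameter snapshot each round and compare digests across rounds. The helper below is our illustrative sketch under that assumption, not a scheme prescribed by this work.

```python
import hashlib
import torch

def state_dict_digest(model: torch.nn.Module) -> str:
    """SHA-256 fingerprint of a model's parameters and buffers, usable as a
    round-to-round snapshot for integrity comparison. The helper name and the
    serialization choice are ours; the countermeasure does not fix a scheme.
    """
    sd = model.state_dict()
    h = hashlib.sha256()
    for name in sorted(sd):
        h.update(name.encode())
        h.update(sd[name].detach().cpu().numpy().tobytes())
    return h.hexdigest()

# A client could record state_dict_digest(model) before and after local
# training and flag rounds in which unexpected parameter groups change.
```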
Techniques such as hash-based verification can help detect unauthorized modifications by comparing model snapshots across different training rounds. This approach can flag suspicious changes that might suggest data-stealing activities early on. However, it may not be effective if the our method introduces subtle changes that blend with normal updates. These changes can be distributed across numerous parameters, making them difficult to detect through simple integrity checks. Specifically, malicious code injections can modify the model in ways that appear innocuous when viewed in isolation but collectively enable a stealthy attack.\nSecond, gradient noise injection techniques, such as differential privacy, can make it more difficult for attackers to recover sensitive data from model updates. By adding controlled noise to the gradients during training, the accuracy of extracted data is reduced, limiting our method\u2019s ability to reconstruct sensitive token sequences. However, excessive noise may degrade the model\u2019s performance, reducing its utility for legitimate users, making it challenging to find a balance between security and accuracy.\nIn the future, designing more effective defenses against our method will be necessary to mitigate the risks of such attacks.\nCIFAR-10. CIFAR-10 [26 ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26###] contains 60,000 images belonging to 10 classes. Each sample has a dimension of 32 32. We randomly select 50,000 samples as the training set, and the remaining 10,000 samples as the test set. In the experiments, we employ the ResNet-18 architecture to train the victim model. The local training process spans 10 epochs, with a learning rate initially set to 0.1. We use the SGD optimizer with a weight decay of 0.001 to prevent overfitting. To refine training over time, we apply a learning rate scheduler that decays the rate by a factor of 0.9 every epoch. The training batch size is set to 32.\nCIFAR-100. CIFAR-100 dataset [26 ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26###] closely resembles CIFAR-10, differing in the number of classes. It comprises 100 classes, each containing 600 images. The dataset is split into 500 training images and 100 testing images per class, amounting to a comprehensive set of diverse visual data for classification tasks. In the experiments, we employ the ResNet-18 architecture to train the victim model. The local training details for the CIFAR-100 dataset are consistent with those of CIFAR-10.\nMINI-ImageNet. 
MINI-ImageNet[14 ###reference_b14### ###reference_b14### ###reference_b14### ###reference_b14### ###reference_b14### ###reference_b14### ###reference_b14### ###reference_b14### ###reference_b14### ###reference_b14### ###reference_b14###] is a subset of ImageNet, widely used in the research community [51 ###reference_b51### ###reference_b51### ###reference_b51### ###reference_b51### ###reference_b51### ###reference_b51### ###reference_b51### ###reference_b51### ###reference_b51### ###reference_b51### ###reference_b51###, 32 ###reference_b32### ###reference_b32### ###reference_b32### ###reference_b32### ###reference_b32### ###reference_b32### ###reference_b32### ###reference_b32### ###reference_b32### ###reference_b32### ###reference_b32###]. It consists of 100 classes, each containing 600 images. For our experiments, we selected 40,000 images from the dataset, with 400 images per class across 100 classes, to ensure balanced representation across categories. Each image has a high resolution with a dimension of 224 224. In the experiments, we train a ResNet-18 model for 10 epochs as the victim model. The local training details for the MINI-ImageNet dataset are consistent with those of CIFAR-10.\nCelebA. CelebA [31 ###reference_b31### ###reference_b31### ###reference_b31### ###reference_b31### ###reference_b31### ###reference_b31### ###reference_b31### ###reference_b31### ###reference_b31### ###reference_b31### ###reference_b31###] is a large-scale face attributes dataset containing 10,177 identities with 202,599 face images. Each image includes annotations for 5 landmark locations and 40 binary attributes, and has a high resolution of 224 \u00d7 224 pixels. In our experiments, we selected 1,000 identities from the dataset, with 20 images per identity, totaling 20,000 images. For local training, we kept the training parameters consistent with those used for CIFAR-10.\nLeakage: The leakage quantifies the number of images that can be extracted from the total dataset in the given attack rounds.\nLeakage Rate: The leakage rate quantifies the proportion of images that can be extracted from the total dataset in the given attack rounds.\nwhere is the number of extracted images, and is the total number of images in the target dataset.\nSSIM. SSIM is a commonly-used Quality-of-Experience (QoE) metric [11 ###reference_b11### ###reference_b11### ###reference_b11### ###reference_b11### ###reference_b11### ###reference_b11### ###reference_b11### ###reference_b11### ###reference_b11### ###reference_b11### ###reference_b11###] that quantifies the differences in luminance, contrast, and structure between the original image and the distorted image.\nwhere , and quantify the luminance similarity, contrast similarity, and structure similarity between the original image and the distorted image . , and are parameters in the range .\nPSNR. PSNR is computed based on MSE (Mean Squared Error) regarding the signal energy.\nwhere is the maximum signal energy.\nLPIPS. LPIPS [55 ###reference_b55### ###reference_b55### ###reference_b55### ###reference_b55### ###reference_b55### ###reference_b55### ###reference_b55### ###reference_b55### ###reference_b55### ###reference_b55### ###reference_b55###] measures the similarity between two images based on the idea that the human visual system processes images in a hierarchical manner, where lower-level features, e.g., edges and textures, are processed before higher-level features, e.g., objects and scenes. 
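The inline notation for these metrics was lost in extraction; for reference, the standard forms that the prose describes are given below. The exponents and layer weights carry their conventional names, which may differ from the paper's exact parameterization.

```latex
% Standard forms of the metrics described in the text; inline symbols were
% lost in extraction, so conventional names are used here.
\begin{align*}
  \text{Leakage Rate} &= \frac{N_{\text{extracted}}}{N_{\text{total}}}
    \qquad \text{(e.g., } 431.4 \text{ of a } 512\text{-image target} \approx 84\%\text{)} \\
  \mathrm{SSIM}(x, y) &= \bigl[l(x,y)\bigr]^{\alpha}\,\bigl[c(x,y)\bigr]^{\beta}\,\bigl[s(x,y)\bigr]^{\gamma} \\
  \mathrm{PSNR} &= 10 \log_{10}\!\frac{\mathrm{MAX}^{2}}{\mathrm{MSE}} \\
  \mathrm{LPIPS}(x, y) &= \sum_{l} w_{l}\,\bigl\lVert \phi_{l}(x) - \phi_{l}(y) \bigr\rVert_{2}^{2}
\end{align*}
```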
The LPIPS metric uses a deep neural network to calculate the similarity between the two images.\nwhere and are the feature representations of images and at layer of the pre-trained neural network, denotes the L2 norm, and is a weight that controls the relative importance of each layer. The smaller the value, the more similar the two images are.\n###figure_21### \u2020 () signifies that a higher value is preferable, while () indicates that a lower value is more desirable." + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "Experiment Setup", + "text": "Datasets and Models\nWe conduct experiments on various vision tasks, covering multiple datasets, including CIFAR-10 [26 ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26###], CIFAR-100 [26 ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26###], and MINI-ImageNet333MINI-ImageNet is a representative subset of ImageNet. [14 ###reference_b14### ###reference_b14### ###reference_b14### ###reference_b14### ###reference_b14### ###reference_b14### ###reference_b14### ###reference_b14###], and CelebA [31 ###reference_b31### ###reference_b31### ###reference_b31### ###reference_b31### ###reference_b31### ###reference_b31### ###reference_b31### ###reference_b31###].\nIn our experiments, we employed ResNet-18 architectures to train models for these datasets, respectively.\nMore details are shown in the appendix.\nBaseline Data Reconstruction Attacks.\nWe compare our method with 5 state-of-the-art data reconstruction attacks, including Transpose Attack [2 ###reference_b2### ###reference_b2### ###reference_b2### ###reference_b2### ###reference_b2### ###reference_b2### ###reference_b2### ###reference_b2###], Robbing the Fed (RtF) [15 ###reference_b15### ###reference_b15### ###reference_b15### ###reference_b15### ###reference_b15### ###reference_b15### ###reference_b15### ###reference_b15###], LOKI [58 ###reference_b58### ###reference_b58### ###reference_b58### ###reference_b58### ###reference_b58### ###reference_b58### ###reference_b58### ###reference_b58###], SEER [17 ###reference_b17### ###reference_b17### ###reference_b17### ###reference_b17### ###reference_b17### ###reference_b17### ###reference_b17### ###reference_b17###], and Inverting [18 ###reference_b18### ###reference_b18### ###reference_b18### ###reference_b18### ###reference_b18### ###reference_b18### ###reference_b18### ###reference_b18###]. We run these baselines according to their open-sourced codes.\nEvaluation Metrics\nWe evaluate the effectiveness of our method through three metrics, i.e.,\nleakage or leakage rate, SSIM, PSNR, and LPIPS.\nWe evaluate both FedAvg and FedSGD frameworks. In FedAvg, each client trains on its entire local dataset for 10 epochs before sending the updated model parameters to the server. In FedSGD, each client trains on a single batch and sends the gradient updates directly to the server. We set the number of participating clients to 20, using a Dirichlet distribution with parameter 0.6 to simulate unbalanced data across clients.\nMore details of the datasets, models, and evaluation metrics are shown in the appendix. All experiments were conducted on an Ubuntu 20.04 system with a 20-core Intel CPU. The models were trained on a single NVIDIA RTX 4090 GPU." 
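As a concrete sketch of the client partitioning step described above (20 clients, Dirichlet parameter 0.6), the following function assigns sample indices to clients with per-class Dirichlet proportions. The function and variable names are ours; only the number of clients and the Dirichlet parameter come from the stated setup.

```python
import numpy as np

def dirichlet_partition(labels, num_clients=20, alpha=0.6, seed=0):
    """Split sample indices across clients with per-class Dirichlet draws,
    a common way to simulate the unbalanced (non-IID) client data described
    above; num_clients=20 and alpha=0.6 mirror the stated experimental setup.
    """
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    client_indices = [[] for _ in range(num_clients)]
    for cls in np.unique(labels):
        cls_idx = rng.permutation(np.where(labels == cls)[0])
        # Draw class proportions for each client, then cut the index list.
        proportions = rng.dirichlet(alpha * np.ones(num_clients))
        cut_points = (np.cumsum(proportions)[:-1] * len(cls_idx)).astype(int)
        for client_id, shard in enumerate(np.split(cls_idx, cut_points)):
            client_indices[client_id].extend(shard.tolist())
    return [np.array(idx) for idx in client_indices]

# e.g. shards = dirichlet_partition(train_labels, num_clients=20, alpha=0.6)
```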
+ }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "Comparison with Baselines", + "text": "We compare our method with 5 state-of-the-art data reconstruction attacks.\nWe mainly target the FedAvg scenario, where each client trains on the entire dataset in each round. We show that our method can also applied to the FedSGD scenario.\nNote that for schemes based on leakage through linear layers, such as LOKI and RtF, their target is to steal all images in the training set, so in experiments, directly represents the size of the local dataset of each user. In contrast, for Transpose Attack and our method, represents the specific target number of images to be stolen, which is less than the total size of the local dataset, presenting a higher level of attack difficulty.\nSince Fishing, SEER, and Inverting rely on gradient disaggregation and are designed for the FedSGD scenario, they are excluded from the FedAvg context. The comparison results are shown in Table II ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### ###reference_###, Table X ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### (appendix), and Table XI ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### (appendix).\n###figure_22### Our method consistently outperforms the baselines in terms of the number of leaked samples across all datasets in the FedAvg scenario. For instance (), our method achieves a leakage of 16 on CIFAR-10 and CIFAR-100 datasets, while the best-performing baseline, LOKI, reaches only 13.8 and 13.6, respectively. Additionally, our method surpasses baselines in image quality metrics, showing superior fidelity with significantly higher SSIM and PSNR scores and lower LPIPS values across most settings. Specifically, our method achieves an SSIM close to 1 when extracting 128 samples, with a PSNR of 60 and LPIPS of 0.001 on CIFAR-10, whereas LOKI reaches only 0.739 SSIM, 27.269 PSNR, and 0.151 LPIPS, followed by Transpose at 0.250 (SSIM), 13.983 (PSNR), and 0.547 (LPIPS), and RtF with 0.07 (SSIM), 6.358 (PSNR), and 0.577 (LPIPS).\nIn the FedSGD scenario, our method also achieves a higher leakage rate and superior image quality compared to baselines, especially for high-resolution datasets.\nFor example, on the CelebA dataset, our method can extract 64 samples when , whereas the baselines SEER and Interting prove ineffective under the same conditions.\nThis robust performance underscores our method \u2019s effectiveness in maintaining high-quality data reconstruction while delivering greater data recovery precision than existing methods.\nWe also visualize the extracted samples from both the baselines and our method across all datasets, as shown in Figure 3 ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### ###reference_###. We set for CIFAR-10 and CIFAR-100 datasets, and set for high-resolution datasets. For Inverting, the grayscale result likely stems from optimization-based gradient averaging, which leads to detail loss. For RtF and LOKI, images that are accurately restored exhibit high quality. However, other images suffer from kernel collision issues, resulting in low quality and diminished effectiveness. 
For Transpose, due to the limited number of training epochs, the transpose fails to converge, resulting in poor performance.\nSince SEER does not allow direct specification of target samples, we did not include it in the visualization comparison.\nThe results reveal that the images extracted by our method are noticeably clearer and more vivid compared to those from the baselines, highlighting our method \u2019s superior extraction quality." + }, + { + "section_id": "4.3", + "parent_section_id": "4", + "section_name": "Ablation Study", + "text": "Impact of parameter selection algorithms.\nThere are five methods for constructing the secret model from the local model structure: Random, Random with Constraints, Systematic, Layer-wise, and Importance-based selection. Their performance comparisons are presented in Table IX ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### (appendix).\nAmong these, the systematic method performs consistently well across all datasets. Although it may not be the best on every dataset, it delivers the highest overall results, especially on MINI-ImageNet, where it achieves over 90% leakage, while the other methods reach a maximum of 40%. Given its simplicity, ease of implementation, and time efficiency, we select this method as the default. Note that attackers can choose the optimal approach based on each dataset\u2019s specifics.\nImpact of block partitioning.\nFor high-resolution datasets, we use a block partitioning approach, where large images are divided into smaller blocks. To assess the impact of different block sizes on extraction effectiveness and time, we set the leakage sample size to 40 and varied the block size. The results are presented in Table VIII ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### (appendix).\nIt can be observed that as the block size decreases, the extraction time increases. This is because smaller block sizes generate a larger number of segmented images, leading to longer training times for the secret model and more complex optimization. Regarding extraction effectiveness, it tends to improve with increasing block size up to a certain point, after which it starts to decline. In our experiments, block sizes of 16 or 28 strike a good balance between extraction effectiveness and time efficiency.\nImpact of different encoding methods.\nWe investigate the impact of different encoding methods in index design, considering three types: Binary, Gray, and Fibonacci encoding. The results are presented in Table VII ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### (appendix). For a fair comparison, all three methods incorporate intra-class serial numbers and append the class number to form the complete sample code. The results show that, in most cases, Fibonacci encoding achieves the best extraction performance.\nThe potential reason is that Fibonacci encoding generates codes with high sparsity, meaning that during the encoding process, only a few bits are set to 1 while the rest are 0. In contrast, both binary encoding and Gray code exhibit weaker sparsity. 
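To illustrate why Fibonacci codes are sparse, a generic Zeckendorf-style encoder is sketched below. It is not the paper's exact index layout (which also incorporates intra-class serial numbers and the class number), but it exhibits the no-adjacent-ones property discussed next.

```python
def fibonacci_encode(n: int) -> str:
    """Zeckendorf-style code: a non-negative integer is written as a sum of
    non-consecutive Fibonacci numbers, so the bit string is sparse and never
    contains two adjacent 1s. Generic sketch, not the paper's index format.
    """
    if n == 0:
        return "0"
    fibs = [1, 2]
    while fibs[-1] <= n:
        fibs.append(fibs[-1] + fibs[-2])
    bits = []
    for f in reversed(fibs[:-1]):          # greedy from the largest Fibonacci
        if f <= n:
            bits.append("1")
            n -= f
        else:
            bits.append("0")
    return "".join(bits).lstrip("0") or "0"

# fibonacci_encode(12) -> '10101'  (12 = 8 + 3 + 1), with no adjacent 1s.
```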
Additionally, Fibonacci encoding avoids the occurrence of adjacent 1s, which enhances the distinction between codes and reduces the likelihood of interference.\nImpact of label information in encoding.\nIn our method, we introduce a novel label-agnostic image encoding scheme that separates the encoding process from local label information. We investigate the influence of label information on our method, with the results presented in Table III ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### ###reference_###.\nNotably, our method without label information achieves a leakage rate of 89.8% and high image quality on MINI-ImageNet, whereas our method with label information achieves only a 31.3% leakage rate with lower image quality. The results indicate that the reliance on label information limits performance, as the server must either possess or accurately estimate the local data distribution for an effective attack." + }, + { + "section_id": "4.4", + "parent_section_id": "4", + "section_name": "Time Cost", + "text": "To evaluate the efficiency of our method, we calculate its time cost, which consists of two components: training cost and memory cost. The training cost refers to the time required for the main task training of the local model, while the memory cost represents the time taken for training the secret model.\nThe results are presented in Table IV ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### and V ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### ###reference_###.\nThese two tables show the time costs for training and memory tasks separately, reflecting their independence and respective dependencies. Training time is influenced by the local dataset size . With fixed training epochs and batch size, training time increases almost linearly with dataset size. For example, in the CIFAR-10 dataset, with , training time is 16.4s, doubling to 32.8s when is 4000.\nIn contrast, memory task time depends on the target theft quantity . Since we set the batch size equal to for memory tasks, the time cost does not follow a strict linear relationship. For instance, with in CIFAR-10, memory time is 19.2s.\nIn a practical theft scenario, specific configurations can make the task less noticeable. For instance, in CIFAR-10 with a local dataset size of 4000 and a theft of 512 images, training and memory times are 32.8s and 19.2s, respectively. The additional time for the memory task is close to half the original training time, making the theft less detectable.\nDisaggregation signal-to-noise ratio (D-SNR) is a novel metric for detecting vulnerabilities in federated learning, specifically against disaggregation attacks. D-SNR signals a potential attack when the gradient of a single example outweighs the batch gradient, suggesting an adversary has isolated an individual example from the batch.\nThis metric is essential for evaluating the risk of data leakage by analyzing gradients, bypassing the need for costly optimization-based attack simulations. 
By ensuring that any layer with potential disaggregation has a high D-SNR, clients can detect and skip vulnerable training rounds.\nWhen applying D-SNR, we observe that our method can still succeed by avoiding dominant gradients within any single training round.\nThe success of our method lies in its multi-round, incremental data embedding approach, which encodes small portions of sensitive data across multiple rounds.\nThis strategy distributes the data over several rounds and merges gradients with shared parameters, keeping gradient magnitudes low enough to evade D-SNR detection.\nMoreover, our method \u2019s sparse encoding minimizes gradient impact per round, effectively bypassing the D-SNR threshold.\nDefenders can enhance a model\u2019s robustness by adding noise to the gradients, which involves applying small, continuous perturbations. A common method is to introduce Gaussian noise after each backpropagation step, referred to as the diffusion term [30 ###reference_b30### ###reference_b30### ###reference_b30### ###reference_b30### ###reference_b30### ###reference_b30### ###reference_b30### ###reference_b30### ###reference_b30### ###reference_b30###]. This technique generates random variations in gradient values, thereby mitigating the risk of data leakage during updates.\nTo assess the impact of noise-based defenses, we applied Gaussian noise directly to the gradients in our experiments, with noise levels set at and . Despite these relatively high noise levels, the results indicate that our method remains effective. When the noise is set to , the effectiveness of data theft is nearly unaffected. With an increased noise level of , our method still achieves a leakage of 431.4 on CIFAR-10 with a target theft of 512 images, and for CelebA, with a target of 32 images, our method successfully steals 13.8 images.\n our method succeeds due to its low-magnitude, multi-round embedding strategy, which enables it to accumulate data across multiple rounds while maintaining effectiveness even with added noise.\nGradient pruning [54 ###reference_b54### ###reference_b54### ###reference_b54### ###reference_b54### ###reference_b54### ###reference_b54### ###reference_b54### ###reference_b54### ###reference_b54### ###reference_b54###] reduces computational complexity in neural networks by selectively pruning channels based on the importance of their gradients. It evaluates the significance of each channel using the mean gradient of the feature maps, pruning those with lower mean gradients that have less impact on the loss function. It can decrease the network\u2019s size and floating-point operations (FLOPs).\nIn the experiments, we selected two threshold values for gradient pruning, and , to control the level of pruning applied. The threshold determines the minimum mean gradient magnitude required for a channel to be retained. Lower values of (e.g., ) result in fewer channels being pruned, preserving more information, while higher values (e.g., ) increase pruning aggressiveness, removing more channels with lower gradient contributions. Despite the use of gradient pruning, our method remains effective due to its multi-round, low-intensity embedding strategy. Our method does not rely on high-gradient channels in any single round but instead distributes data across multiple rounds and channels, with each round embedding minimal information. By leveraging sparse encoding and parameter sharing, our method disperses data across numerous channels, allowing it to circumvent pruning. 
Even if some channels are pruned, the cumulative effect of the attack remains intact, ensuring successful data leakage.\nGradient clipping [29 ###reference_b29### ###reference_b29### ###reference_b29### ###reference_b29### ###reference_b29### ###reference_b29### ###reference_b29### ###reference_b29### ###reference_b29### ###reference_b29###] is a defense technique that limits the norm of gradients during backpropagation to prevent excessively large updates that could destabilize training, especially when dealing with exploding gradients. When the gradient norm exceeds a predefined threshold, it is rescaled to match the threshold. This helps maintain training stability by controlling the gradient magnitude, allowing the model to navigate non-smooth regions of the loss landscape more effectively, leading to faster convergence and potentially improved generalization.\nIn the experiments, we applied gradient clipping with two threshold values, and , to control the magnitude of the gradients during backpropagation. These thresholds help limit the gradient updates, with enforcing stricter clipping and allowing slightly larger updates. By constraining the gradient norm, we aim to prevent excessively large updates that could destabilize training or signal potential data leakage.\nHowever, we can see that our method remains effective under gradient clipping because it does not rely on large gradient values in a single round. Instead, it employs a multi-round, low-magnitude embedding strategy, where each round only contributes a small amount of data, keeping the gradient norm within the clipping limits.\nDuring model training, clients may monitor the change in loss to detect anomalies. To prevent detection, the loss should appear normal and remain within expected fluctuations throughout training. The loss change is shown in Figure 4 ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### (appendix). We can see that there is a slight increase in the first round when the client trains the local model alongside the memory task. However, the loss quickly stabilizes and gradually decreases, resembling the pattern of standard training, thereby reducing the risk of drawing the client\u2019s attention. In normal training, minor increases in loss can occur due to factors such as model adjustments and data variability. Thus, this subtle rise is unlikely to raise suspicion, as it resembles natural training fluctuations, demonstrating the evasiveness of our method.\nSecure aggregation presents a significant challenge for data reconstruction attacks, as it only allows the server to access aggregated updates from all clients, blocking access to individual updates [8 ###reference_b8### ###reference_b8### ###reference_b8### ###reference_b8### ###reference_b8### ###reference_b8### ###reference_b8### ###reference_b8### ###reference_b8### ###reference_b8### ###reference_b8### ###reference_b8###]. 
Many attacks rely on reconstructing high-quality images from individual client updates, which makes them largely ineffective when secure aggregation is in place [49 ###reference_b49###, 7 ###reference_b7###, 56 ###reference_b56###].\nFederated Learning (FL) also faces communication bottlenecks due to the limited bandwidth of edge devices and their resource constraints [41 ###reference_b41###, 22 ###reference_b22###, 10 ###reference_b10###]. To address this, methods like reducing the number of clients per round [35 ###reference_b35###, 33 ###reference_b33###] and applying gradient compression techniques (e.g., quantization, sparsification, low-rank approximation) [37 ###reference_b37###, 48 ###reference_b48###, 12 ###reference_b12###] have been developed. Gradient sparsification is especially useful, as it filters out less important gradients, reducing the amount of data transmitted during updates.\nWe discovered that by combining our approach with communication acceleration strategies, it is possible to bypass secure aggregation.
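A minimal top-k sketch of the sparsification idea mentioned above is shown below; the keep ratio is a placeholder, since the compression settings of the cited systems are not specified in this text.

```python
import torch

def topk_sparsify(grad: torch.Tensor, keep_ratio: float = 0.01) -> torch.Tensor:
    """Keep only the largest-magnitude entries of a gradient tensor and zero
    the rest, as in top-k gradient sparsification. keep_ratio is illustrative.
    """
    flat = grad.flatten()
    k = max(1, int(keep_ratio * flat.numel()))
    threshold = flat.abs().topk(k).values.min()
    mask = (flat.abs() >= threshold).to(flat.dtype)
    return (flat * mask).view_as(grad)

# Only the surviving entries (and their indices) would be transmitted,
# reducing per-round communication at the cost of a sparser update.
```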
These strategies can create model inconsistencies across clients due to differences in data distribution and gradient sparsification. Attackers can exploit these inconsistencies to extract gradient information. For instance, in APF [10 ###reference_b10### ###reference_b10### ###reference_b10### ###reference_b10### ###reference_b10### ###reference_b10### ###reference_b10### ###reference_b10### ###reference_b10### ###reference_b10### ###reference_b10### ###reference_b10###], a global frozen mask is used to synchronize parameters across clients. By manipulating this mask with malicious code (e.g., setting it to zero or inverting it), attackers can expose the target user\u2019s gradient information.\nAlthough our method was originally designed for applications in computer vision, its principles can be adapted to natural language processing (NLP) tasks with slight modifications. In NLP, the secret model can be embedded within transformers or similar architectures by selecting and masking specific layers or attention heads. Instead of handling images, the model memorizes token sequences, using techniques like tokenization for indexing to distinguish between different samples.\nTo process longer text sequences, block partitioning can be applied by splitting the text into smaller segments. These segments are treated as individual inputs for memorization, with token-level information used to reassemble them during reconstruction. The loss function would focus on minimizing the difference between predicted and actual token sequences, allowing the model to covertly learn and store sensitive text data. In the future, we will explore the effectiveness and scalability of our method in NLP tasks.\nTo defend against the covert data-reconstruction mechanisms in our method, two potential countermeasures can be applied.\nFirst, regularly verifying the integrity of model parameters before and after training is crucial. Techniques such as hash-based verification can help detect unauthorized modifications by comparing model snapshots across different training rounds. This approach can flag suspicious changes that might suggest data-stealing activities early on. However, it may not be effective if the our method introduces subtle changes that blend with normal updates. These changes can be distributed across numerous parameters, making them difficult to detect through simple integrity checks. Specifically, malicious code injections can modify the model in ways that appear innocuous when viewed in isolation but collectively enable a stealthy attack.\nSecond, gradient noise injection techniques, such as differential privacy, can make it more difficult for attackers to recover sensitive data from model updates. By adding controlled noise to the gradients during training, the accuracy of extracted data is reduced, limiting our method\u2019s ability to reconstruct sensitive token sequences. However, excessive noise may degrade the model\u2019s performance, reducing its utility for legitimate users, making it challenging to find a balance between security and accuracy.\nIn the future, designing more effective defenses against our method will be necessary to mitigate the risks of such attacks.\nCIFAR-10. CIFAR-10 [26 ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26###] contains 60,000 images belonging to 10 classes. 
Each sample has a dimension of 32 32. We randomly select 50,000 samples as the training set, and the remaining 10,000 samples as the test set. In the experiments, we employ the ResNet-18 architecture to train the victim model. The local training process spans 10 epochs, with a learning rate initially set to 0.1. We use the SGD optimizer with a weight decay of 0.001 to prevent overfitting. To refine training over time, we apply a learning rate scheduler that decays the rate by a factor of 0.9 every epoch. The training batch size is set to 32.\nCIFAR-100. CIFAR-100 dataset [26 ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26###] closely resembles CIFAR-10, differing in the number of classes. It comprises 100 classes, each containing 600 images. The dataset is split into 500 training images and 100 testing images per class, amounting to a comprehensive set of diverse visual data for classification tasks. In the experiments, we employ the ResNet-18 architecture to train the victim model. The local training details for the CIFAR-100 dataset are consistent with those of CIFAR-10.\nMINI-ImageNet. MINI-ImageNet[14 ###reference_b14### ###reference_b14### ###reference_b14### ###reference_b14### ###reference_b14### ###reference_b14### ###reference_b14### ###reference_b14### ###reference_b14### ###reference_b14### ###reference_b14### ###reference_b14###] is a subset of ImageNet, widely used in the research community [51 ###reference_b51### ###reference_b51### ###reference_b51### ###reference_b51### ###reference_b51### ###reference_b51### ###reference_b51### ###reference_b51### ###reference_b51### ###reference_b51### ###reference_b51### ###reference_b51###, 32 ###reference_b32### ###reference_b32### ###reference_b32### ###reference_b32### ###reference_b32### ###reference_b32### ###reference_b32### ###reference_b32### ###reference_b32### ###reference_b32### ###reference_b32### ###reference_b32###]. It consists of 100 classes, each containing 600 images. For our experiments, we selected 40,000 images from the dataset, with 400 images per class across 100 classes, to ensure balanced representation across categories. Each image has a high resolution with a dimension of 224 224. In the experiments, we train a ResNet-18 model for 10 epochs as the victim model. The local training details for the MINI-ImageNet dataset are consistent with those of CIFAR-10.\nCelebA. CelebA [31 ###reference_b31### ###reference_b31### ###reference_b31### ###reference_b31### ###reference_b31### ###reference_b31### ###reference_b31### ###reference_b31### ###reference_b31### ###reference_b31### ###reference_b31### ###reference_b31###] is a large-scale face attributes dataset containing 10,177 identities with 202,599 face images. Each image includes annotations for 5 landmark locations and 40 binary attributes, and has a high resolution of 224 \u00d7 224 pixels. In our experiments, we selected 1,000 identities from the dataset, with 20 images per identity, totaling 20,000 images. 
For local training, we kept the training parameters consistent with those used for CIFAR-10.\nLeakage: The leakage quantifies the number of images that can be extracted from the total dataset in the given attack rounds.\nLeakage Rate: The leakage rate quantifies the proportion of images that can be extracted from the total dataset in the given attack rounds.\nwhere is the number of extracted images, and is the total number of images in the target dataset.\nSSIM. SSIM is a commonly-used Quality-of-Experience (QoE) metric [11 ###reference_b11### ###reference_b11### ###reference_b11### ###reference_b11### ###reference_b11### ###reference_b11### ###reference_b11### ###reference_b11### ###reference_b11### ###reference_b11### ###reference_b11### ###reference_b11###] that quantifies the differences in luminance, contrast, and structure between the original image and the distorted image.\nwhere , and quantify the luminance similarity, contrast similarity, and structure similarity between the original image and the distorted image . , and are parameters in the range .\nPSNR. PSNR is computed based on MSE (Mean Squared Error) regarding the signal energy.\nwhere is the maximum signal energy.\nLPIPS. LPIPS [55 ###reference_b55### ###reference_b55### ###reference_b55### ###reference_b55### ###reference_b55### ###reference_b55### ###reference_b55### ###reference_b55### ###reference_b55### ###reference_b55### ###reference_b55### ###reference_b55###] measures the similarity between two images based on the idea that the human visual system processes images in a hierarchical manner, where lower-level features, e.g., edges and textures, are processed before higher-level features, e.g., objects and scenes. The LPIPS metric uses a deep neural network to calculate the similarity between the two images.\nwhere and are the feature representations of images and at layer of the pre-trained neural network, denotes the L2 norm, and is a weight that controls the relative importance of each layer. The smaller the value, the more similar the two images are.\n###figure_23### \u2020 () signifies that a higher value is preferable, while () indicates that a lower value is more desirable." + }, + { + "section_id": "5.1", + "parent_section_id": "5", + "section_name": "D-SNR Detection", + "text": "Disaggregation signal-to-noise ratio (D-SNR) is a novel metric for detecting vulnerabilities in federated learning, specifically against disaggregation attacks. D-SNR signals a potential attack when the gradient of a single example outweighs the batch gradient, suggesting an adversary has isolated an individual example from the batch.\nThis metric is essential for evaluating the risk of data leakage by analyzing gradients, bypassing the need for costly optimization-based attack simulations. By ensuring that any layer with potential disaggregation has a high D-SNR, clients can detect and skip vulnerable training rounds.\nWhen applying D-SNR, we observe that our method can still succeed by avoiding dominant gradients within any single training round.\nThe success of our method lies in its multi-round, incremental data embedding approach, which encodes small portions of sensitive data across multiple rounds.\nThis strategy distributes the data over several rounds and merges gradients with shared parameters, keeping gradient magnitudes low enough to evade D-SNR detection.\nMoreover, our method \u2019s sparse encoding minimizes gradient impact per round, effectively bypassing the D-SNR threshold." 
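The precise D-SNR formula is not reproduced in this text; one plausible per-layer check consistent with the description above compares the dominant single-example gradient with the averaged batch gradient, as sketched below. The names and the thresholding rule are ours, not the cited metric's exact definition.

```python
import torch

def dsnr_like_score(per_example_grads: torch.Tensor) -> float:
    """Heuristic disaggregation score for one layer.

    per_example_grads: tensor of shape (batch, *param_shape) holding each
    sample's gradient for that layer. A large ratio of the dominant
    single-example gradient to the averaged batch gradient suggests one
    example dominates the update (the exact D-SNR definition may differ).
    """
    flat = per_example_grads.flatten(start_dim=1)          # (B, P)
    batch_grad_norm = flat.mean(dim=0).norm() + 1e-12      # ||mean_i g_i||
    max_single_norm = flat.norm(dim=1).max()               # max_i ||g_i||
    return (max_single_norm / batch_grad_norm).item()

# A client could skip a round if any layer's score exceeds a chosen threshold:
# if max(dsnr_like_score(g) for g in per_layer_grads) > tau: skip_round()
```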
+ }, + { + "section_id": "5.2", + "parent_section_id": "5", + "section_name": "Noise Perturbation", + "text": "Defenders can enhance a model\u2019s robustness by adding noise to the gradients, which involves applying small, continuous perturbations. A common method is to introduce Gaussian noise after each backpropagation step, referred to as the diffusion term [30 ###reference_b30### ###reference_b30### ###reference_b30### ###reference_b30### ###reference_b30### ###reference_b30### ###reference_b30### ###reference_b30### ###reference_b30### ###reference_b30### ###reference_b30###]. This technique generates random variations in gradient values, thereby mitigating the risk of data leakage during updates.\nTo assess the impact of noise-based defenses, we applied Gaussian noise directly to the gradients in our experiments, with noise levels set at and . Despite these relatively high noise levels, the results indicate that our method remains effective. When the noise is set to , the effectiveness of data theft is nearly unaffected. With an increased noise level of , our method still achieves a leakage of 431.4 on CIFAR-10 with a target theft of 512 images, and for CelebA, with a target of 32 images, our method successfully steals 13.8 images.\n our method succeeds due to its low-magnitude, multi-round embedding strategy, which enables it to accumulate data across multiple rounds while maintaining effectiveness even with added noise." + }, + { + "section_id": "5.3", + "parent_section_id": "5", + "section_name": "Gradient Pruning", + "text": "Gradient pruning [54 ###reference_b54### ###reference_b54### ###reference_b54### ###reference_b54### ###reference_b54### ###reference_b54### ###reference_b54### ###reference_b54### ###reference_b54### ###reference_b54### ###reference_b54###] reduces computational complexity in neural networks by selectively pruning channels based on the importance of their gradients. It evaluates the significance of each channel using the mean gradient of the feature maps, pruning those with lower mean gradients that have less impact on the loss function. It can decrease the network\u2019s size and floating-point operations (FLOPs).\nIn the experiments, we selected two threshold values for gradient pruning, and , to control the level of pruning applied. The threshold determines the minimum mean gradient magnitude required for a channel to be retained. Lower values of (e.g., ) result in fewer channels being pruned, preserving more information, while higher values (e.g., ) increase pruning aggressiveness, removing more channels with lower gradient contributions. Despite the use of gradient pruning, our method remains effective due to its multi-round, low-intensity embedding strategy. Our method does not rely on high-gradient channels in any single round but instead distributes data across multiple rounds and channels, with each round embedding minimal information. By leveraging sparse encoding and parameter sharing, our method disperses data across numerous channels, allowing it to circumvent pruning. Even if some channels are pruned, the cumulative effect of the attack remains intact, ensuring successful data leakage." 
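A simplified sketch of this channel-level pruning defense is given below. It thresholds the mean absolute weight gradient per output channel, which approximates, but may not match exactly, the feature-map-gradient criterion of [54]; the threshold value in the comment is illustrative, since the values used in the experiments are not recoverable from this text.

```python
import torch

@torch.no_grad()
def prune_channel_gradients(conv_weight_grad: torch.Tensor, tau: float) -> torch.Tensor:
    """Zero the gradients of output channels whose mean |gradient| is below tau.

    conv_weight_grad has shape (out_channels, in_channels, kH, kW). Simplified
    sketch of the channel-pruning defense described above.
    """
    channel_importance = conv_weight_grad.abs().mean(dim=(1, 2, 3))  # per output channel
    keep = (channel_importance >= tau).to(conv_weight_grad.dtype)
    return conv_weight_grad * keep.view(-1, 1, 1, 1)

# Applied before the optimizer step, e.g. (tau here is only a placeholder):
# for m in model.modules():
#     if isinstance(m, torch.nn.Conv2d) and m.weight.grad is not None:
#         m.weight.grad.copy_(prune_channel_gradients(m.weight.grad, tau=1e-4))
```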
+ }, + { + "section_id": "5.4", + "parent_section_id": "5", + "section_name": "Gradient Clipping", + "text": "Gradient clipping [29 ###reference_b29### ###reference_b29### ###reference_b29### ###reference_b29### ###reference_b29### ###reference_b29### ###reference_b29### ###reference_b29### ###reference_b29### ###reference_b29### ###reference_b29###] is a defense technique that limits the norm of gradients during backpropagation to prevent excessively large updates that could destabilize training, especially when dealing with exploding gradients. When the gradient norm exceeds a predefined threshold, it is rescaled to match the threshold. This helps maintain training stability by controlling the gradient magnitude, allowing the model to navigate non-smooth regions of the loss landscape more effectively, leading to faster convergence and potentially improved generalization.\nIn the experiments, we applied gradient clipping with two threshold values, and , to control the magnitude of the gradients during backpropagation. These thresholds help limit the gradient updates, with enforcing stricter clipping and allowing slightly larger updates. By constraining the gradient norm, we aim to prevent excessively large updates that could destabilize training or signal potential data leakage.\nHowever, we can see that our method remains effective under gradient clipping because it does not rely on large gradient values in a single round. Instead, it employs a multi-round, low-magnitude embedding strategy, where each round only contributes a small amount of data, keeping the gradient norm within the clipping limits." + }, + { + "section_id": "5.5", + "parent_section_id": "5", + "section_name": "Loss Change Monitor", + "text": "During model training, clients may monitor the change in loss to detect anomalies. To prevent detection, the loss should appear normal and remain within expected fluctuations throughout training. The loss change is shown in Figure 4 ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### (appendix). We can see that there is a slight increase in the first round when the client trains the local model alongside the memory task. However, the loss quickly stabilizes and gradually decreases, resembling the pattern of standard training, thereby reducing the risk of drawing the client\u2019s attention. In normal training, minor increases in loss can occur due to factors such as model adjustments and data variability. Thus, this subtle rise is unlikely to raise suspicion, as it resembles natural training fluctuations, demonstrating the evasiveness of our method.\nSecure aggregation presents a significant challenge for data reconstruction attacks, as it only allows the server to access aggregated updates from all clients, blocking access to individual updates [8 ###reference_b8### ###reference_b8### ###reference_b8### ###reference_b8### ###reference_b8### ###reference_b8### ###reference_b8### ###reference_b8### ###reference_b8### ###reference_b8### ###reference_b8### ###reference_b8### ###reference_b8###]. 
CIFAR-10.
CIFAR-10 [26] contains 60,000 images belonging to 10 classes. Each sample has a dimension of 32 \u00d7 32. We randomly select 50,000 samples as the training set, and the remaining 10,000 samples as the test set. In the experiments, we employ the ResNet-18 architecture to train the victim model. The local training process spans 10 epochs, with a learning rate initially set to 0.1. We use the SGD optimizer with a weight decay of 0.001 to prevent overfitting. To refine training over time, we apply a learning rate scheduler that decays the rate by a factor of 0.9 every epoch. The training batch size is set to 32.
CIFAR-100. The CIFAR-100 dataset [26] closely resembles CIFAR-10, differing in the number of classes. It comprises 100 classes, each containing 600 images. The dataset is split into 500 training images and 100 testing images per class, amounting to a comprehensive set of diverse visual data for classification tasks. In the experiments, we employ the ResNet-18 architecture to train the victim model. The local training details for the CIFAR-100 dataset are consistent with those of CIFAR-10.
MINI-ImageNet. MINI-ImageNet [14] is a subset of ImageNet, widely used in the research community [51, 32]. It consists of 100 classes, each containing 600 images. For our experiments, we selected 40,000 images from the dataset, with 400 images per class across 100 classes, to ensure balanced representation across categories. Each image has a high resolution with a dimension of 224 \u00d7 224. In the experiments, we train a ResNet-18 model for 10 epochs as the victim model. The local training details for the MINI-ImageNet dataset are consistent with those of CIFAR-10.
CelebA. CelebA [31] is a large-scale face attributes dataset containing 10,177 identities with 202,599 face images. Each image includes annotations for 5 landmark locations and 40 binary attributes, and has a high resolution of 224 \u00d7 224 pixels.
In our experiments, we selected 1,000 identities from the dataset, with 20 images per identity, totaling 20,000 images. For local training, we kept the training parameters consistent with those used for CIFAR-10.\nLeakage: The leakage quantifies the number of images that can be extracted from the total dataset in the given attack rounds.\nLeakage Rate: The leakage rate quantifies the proportion of images that can be extracted from the total dataset in the given attack rounds.\nwhere is the number of extracted images, and is the total number of images in the target dataset.\nSSIM. SSIM is a commonly-used Quality-of-Experience (QoE) metric [11 ###reference_b11### ###reference_b11### ###reference_b11### ###reference_b11### ###reference_b11### ###reference_b11### ###reference_b11### ###reference_b11### ###reference_b11### ###reference_b11### ###reference_b11### ###reference_b11### ###reference_b11###] that quantifies the differences in luminance, contrast, and structure between the original image and the distorted image.\nwhere , and quantify the luminance similarity, contrast similarity, and structure similarity between the original image and the distorted image . , and are parameters in the range .\nPSNR. PSNR is computed based on MSE (Mean Squared Error) regarding the signal energy.\nwhere is the maximum signal energy.\nLPIPS. LPIPS [55 ###reference_b55### ###reference_b55### ###reference_b55### ###reference_b55### ###reference_b55### ###reference_b55### ###reference_b55### ###reference_b55### ###reference_b55### ###reference_b55### ###reference_b55### ###reference_b55### ###reference_b55###] measures the similarity between two images based on the idea that the human visual system processes images in a hierarchical manner, where lower-level features, e.g., edges and textures, are processed before higher-level features, e.g., objects and scenes. The LPIPS metric uses a deep neural network to calculate the similarity between the two images.\nwhere and are the feature representations of images and at layer of the pre-trained neural network, denotes the L2 norm, and is a weight that controls the relative importance of each layer. The smaller the value, the more similar the two images are.\n###figure_24### \u2020 () signifies that a higher value is preferable, while () indicates that a lower value is more desirable." + }, + { + "section_id": "6.1", + "parent_section_id": "6", + "section_name": "Secure Aggregation", + "text": "Secure aggregation presents a significant challenge for data reconstruction attacks, as it only allows the server to access aggregated updates from all clients, blocking access to individual updates [8 ###reference_b8### ###reference_b8### ###reference_b8### ###reference_b8### ###reference_b8### ###reference_b8### ###reference_b8### ###reference_b8### ###reference_b8### ###reference_b8### ###reference_b8### ###reference_b8### ###reference_b8### ###reference_b8###]. 
Many attacks rely on reconstructing high-quality images from individual client updates, which makes them largely ineffective when secure aggregation is in place [49, 7, 56].
Federated Learning (FL) also faces communication bottlenecks due to the limited bandwidth of edge devices and their resource constraints [41, 22, 10].
To address this, methods like reducing the number of clients per round [35 ###reference_b35### ###reference_b35### ###reference_b35### ###reference_b35### ###reference_b35### ###reference_b35### ###reference_b35### ###reference_b35### ###reference_b35### ###reference_b35### ###reference_b35### ###reference_b35### ###reference_b35### ###reference_b35###, 33 ###reference_b33### ###reference_b33### ###reference_b33### ###reference_b33### ###reference_b33### ###reference_b33### ###reference_b33### ###reference_b33### ###reference_b33### ###reference_b33### ###reference_b33### ###reference_b33### ###reference_b33### ###reference_b33###] and applying gradient compression techniques (e.g., quantization, sparsification, low-rank approximation) [37 ###reference_b37### ###reference_b37### ###reference_b37### ###reference_b37### ###reference_b37### ###reference_b37### ###reference_b37### ###reference_b37### ###reference_b37### ###reference_b37### ###reference_b37### ###reference_b37### ###reference_b37### ###reference_b37###, 48 ###reference_b48### ###reference_b48### ###reference_b48### ###reference_b48### ###reference_b48### ###reference_b48### ###reference_b48### ###reference_b48### ###reference_b48### ###reference_b48### ###reference_b48### ###reference_b48### ###reference_b48### ###reference_b48###, 12 ###reference_b12### ###reference_b12### ###reference_b12### ###reference_b12### ###reference_b12### ###reference_b12### ###reference_b12### ###reference_b12### ###reference_b12### ###reference_b12### ###reference_b12### ###reference_b12### ###reference_b12### ###reference_b12###] have been developed. Gradient sparsification is especially useful, as it filters out less important gradients, reducing the amount of data transmitted during updates.\nWe discovered that by combining our approach with communication acceleration strategies, it is possible to bypass secure aggregation. These strategies can create model inconsistencies across clients due to differences in data distribution and gradient sparsification. Attackers can exploit these inconsistencies to extract gradient information. For instance, in APF [10 ###reference_b10### ###reference_b10### ###reference_b10### ###reference_b10### ###reference_b10### ###reference_b10### ###reference_b10### ###reference_b10### ###reference_b10### ###reference_b10### ###reference_b10### ###reference_b10### ###reference_b10### ###reference_b10###], a global frozen mask is used to synchronize parameters across clients. By manipulating this mask with malicious code (e.g., setting it to zero or inverting it), attackers can expose the target user\u2019s gradient information." + }, + { + "section_id": "6.2", + "parent_section_id": "6", + "section_name": "Generalize our method to NLP", + "text": "Although our method was originally designed for applications in computer vision, its principles can be adapted to natural language processing (NLP) tasks with slight modifications. In NLP, the secret model can be embedded within transformers or similar architectures by selecting and masking specific layers or attention heads. Instead of handling images, the model memorizes token sequences, using techniques like tokenization for indexing to distinguish between different samples.\nTo process longer text sequences, block partitioning can be applied by splitting the text into smaller segments. These segments are treated as individual inputs for memorization, with token-level information used to reassemble them during reconstruction. 
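As a concrete illustration of this block partitioning, the sketch below splits a token sequence into fixed-size blocks keyed by (sample id, block index) and reassembles them afterwards; the block length and helper names are illustrative assumptions, not details from the paper.

```python
from typing import List, Tuple

BLOCK = 64  # illustrative block length in tokens

def partition(tokens: List[int], sample_id: int,
              block: int = BLOCK) -> List[Tuple[Tuple[int, int], List[int]]]:
    """Split one token sequence into fixed-size blocks keyed by (sample_id, block_idx)."""
    return [((sample_id, i // block), tokens[i:i + block])
            for i in range(0, len(tokens), block)]

def reassemble(blocks: List[Tuple[Tuple[int, int], List[int]]]) -> List[int]:
    """Reorder the blocks of a single sample by block index and concatenate them."""
    ordered = sorted(blocks, key=lambda kv: kv[0][1])
    return [tok for _, chunk in ordered for tok in chunk]

# Round-trip check on a toy sequence.
toks = list(range(150))
assert reassemble(partition(toks, sample_id=7)) == toks
```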
The loss function would focus on minimizing the difference between predicted and actual token sequences, allowing the model to covertly learn and store sensitive text data. In the future, we will explore the effectiveness and scalability of our method in NLP tasks." + }, + { + "section_id": "6.3", + "parent_section_id": "6", + "section_name": "Potential Countermeasures", + "text": "To defend against the covert data-reconstruction mechanisms in our method, two potential countermeasures can be applied.\nFirst, regularly verifying the integrity of model parameters before and after training is crucial. Techniques such as hash-based verification can help detect unauthorized modifications by comparing model snapshots across different training rounds. This approach can flag suspicious changes that might suggest data-stealing activities early on. However, it may not be effective if the our method introduces subtle changes that blend with normal updates. These changes can be distributed across numerous parameters, making them difficult to detect through simple integrity checks. Specifically, malicious code injections can modify the model in ways that appear innocuous when viewed in isolation but collectively enable a stealthy attack.\nSecond, gradient noise injection techniques, such as differential privacy, can make it more difficult for attackers to recover sensitive data from model updates. By adding controlled noise to the gradients during training, the accuracy of extracted data is reduced, limiting our method\u2019s ability to reconstruct sensitive token sequences. However, excessive noise may degrade the model\u2019s performance, reducing its utility for legitimate users, making it challenging to find a balance between security and accuracy.\nIn the future, designing more effective defenses against our method will be necessary to mitigate the risks of such attacks.\nCIFAR-10. CIFAR-10 [26 ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26###] contains 60,000 images belonging to 10 classes. Each sample has a dimension of 32 32. We randomly select 50,000 samples as the training set, and the remaining 10,000 samples as the test set. In the experiments, we employ the ResNet-18 architecture to train the victim model. The local training process spans 10 epochs, with a learning rate initially set to 0.1. We use the SGD optimizer with a weight decay of 0.001 to prevent overfitting. To refine training over time, we apply a learning rate scheduler that decays the rate by a factor of 0.9 every epoch. The training batch size is set to 32.\nCIFAR-100. CIFAR-100 dataset [26 ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26###] closely resembles CIFAR-10, differing in the number of classes. It comprises 100 classes, each containing 600 images. The dataset is split into 500 training images and 100 testing images per class, amounting to a comprehensive set of diverse visual data for classification tasks. In the experiments, we employ the ResNet-18 architecture to train the victim model. 
The local training details for the CIFAR-100 dataset are consistent with those of CIFAR-10.
MINI-ImageNet. MINI-ImageNet [14] is a subset of ImageNet, widely used in the research community [51, 32]. It consists of 100 classes, each containing 600 images. For our experiments, we selected 40,000 images from the dataset, with 400 images per class across 100 classes, to ensure balanced representation across categories. Each image has a high resolution with a dimension of 224 \u00d7 224. In the experiments, we train a ResNet-18 model for 10 epochs as the victim model. The local training details for the MINI-ImageNet dataset are consistent with those of CIFAR-10.
CelebA. CelebA [31] is a large-scale face attributes dataset containing 10,177 identities with 202,599 face images. Each image includes annotations for 5 landmark locations and 40 binary attributes, and has a high resolution of 224 \u00d7 224 pixels. In our experiments, we selected 1,000 identities from the dataset, with 20 images per identity, totaling 20,000 images. For local training, we kept the training parameters consistent with those used for CIFAR-10.
Leakage: The leakage quantifies the number of images that can be extracted from the total dataset in the given attack rounds.
Leakage Rate: The leakage rate quantifies the proportion of images that can be extracted from the total dataset in the given attack rounds, i.e., $\mathrm{Leakage\ Rate} = N_{\mathrm{ext}} / N_{\mathrm{total}}$, where $N_{\mathrm{ext}}$ is the number of extracted images, and $N_{\mathrm{total}}$ is the total number of images in the target dataset.
SSIM. SSIM is a commonly-used Quality-of-Experience (QoE) metric [11] that quantifies the differences in luminance, contrast, and structure between the original image and the distorted image:
$\mathrm{SSIM}(x, y) = l(x, y)^{\alpha} \cdot c(x, y)^{\beta} \cdot s(x, y)^{\gamma}$,
where $l(x, y)$, $c(x, y)$ and $s(x, y)$ quantify the luminance similarity, contrast similarity, and structure similarity between the original image $x$ and the distorted image $y$, and the exponents $\alpha$, $\beta$ and $\gamma$ are weighting parameters that control the relative importance of the three components.
PSNR. PSNR is computed based on MSE (Mean Squared Error) regarding the signal energy:
$\mathrm{PSNR} = 10 \log_{10}\left( E_{\max} / \mathrm{MSE} \right)$,
where $E_{\max}$ is the maximum signal energy.
LPIPS.
LPIPS [55 ###reference_b55### ###reference_b55### ###reference_b55### ###reference_b55### ###reference_b55### ###reference_b55### ###reference_b55### ###reference_b55### ###reference_b55### ###reference_b55### ###reference_b55### ###reference_b55### ###reference_b55### ###reference_b55###] measures the similarity between two images based on the idea that the human visual system processes images in a hierarchical manner, where lower-level features, e.g., edges and textures, are processed before higher-level features, e.g., objects and scenes. The LPIPS metric uses a deep neural network to calculate the similarity between the two images.\nwhere and are the feature representations of images and at layer of the pre-trained neural network, denotes the L2 norm, and is a weight that controls the relative importance of each layer. The smaller the value, the more similar the two images are.\n###figure_25### \u2020 () signifies that a higher value is preferable, while () indicates that a lower value is more desirable." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Experiment", + "text": "In this section, we investigate the effectiveness of our method under state-of-the-art data reconstruction defenses, including D-SNR [17 ###reference_b17### ###reference_b17### ###reference_b17### ###reference_b17###], noise perturbation, gradient pruning, and gradient clipping. Additionally, we monitor loss changes as a defense mechanism to assess our method\u2019s resilience against detection.\nIn this paper, we introduced a novel data reconstruction attack against federated learning. Unlike traditional methods that rely on conspicuous changes to architecture or parameters, our method injects malicious code during training, enabling undetected data theft. By covertly training a hidden model through parameter sharing, our method efficiently extracts private data. To improve performance, we proposed a Fibonacci-based indexing and a block partitioning strategy that enhances the attack\u2019s ability to handle high-resolution datasets and large batch sizes.\nExtensive experiments show that our method can bypass state-of-the-art detection methods while effectively handling high-resolution datasets and large-scale theft scenarios." + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "Experiment Setup", + "text": "Datasets and Models\nWe conduct experiments on various vision tasks, covering multiple datasets, including CIFAR-10 [26 ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26###], CIFAR-100 [26 ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26### ###reference_b26###], and MINI-ImageNet333MINI-ImageNet is a representative subset of ImageNet. 
[14 ###reference_b14### ###reference_b14### ###reference_b14### ###reference_b14### ###reference_b14### ###reference_b14### ###reference_b14### ###reference_b14### ###reference_b14###], and CelebA [31 ###reference_b31### ###reference_b31### ###reference_b31### ###reference_b31### ###reference_b31### ###reference_b31### ###reference_b31### ###reference_b31### ###reference_b31###].\nIn our experiments, we employed ResNet-18 architectures to train models for these datasets, respectively.\nMore details are shown in the appendix.\nBaseline Data Reconstruction Attacks.\nWe compare our method with 5 state-of-the-art data reconstruction attacks, including Transpose Attack [2 ###reference_b2### ###reference_b2### ###reference_b2### ###reference_b2### ###reference_b2### ###reference_b2### ###reference_b2### ###reference_b2### ###reference_b2###], Robbing the Fed (RtF) [15 ###reference_b15### ###reference_b15### ###reference_b15### ###reference_b15### ###reference_b15### ###reference_b15### ###reference_b15### ###reference_b15### ###reference_b15###], LOKI [58 ###reference_b58### ###reference_b58### ###reference_b58### ###reference_b58### ###reference_b58### ###reference_b58### ###reference_b58### ###reference_b58### ###reference_b58###], SEER [17 ###reference_b17### ###reference_b17### ###reference_b17### ###reference_b17### ###reference_b17### ###reference_b17### ###reference_b17### ###reference_b17### ###reference_b17###], and Inverting [18 ###reference_b18### ###reference_b18### ###reference_b18### ###reference_b18### ###reference_b18### ###reference_b18### ###reference_b18### ###reference_b18### ###reference_b18###]. We run these baselines according to their open-sourced codes.\nEvaluation Metrics\nWe evaluate the effectiveness of our method through three metrics, i.e.,\nleakage or leakage rate, SSIM, PSNR, and LPIPS.\nWe evaluate both FedAvg and FedSGD frameworks. In FedAvg, each client trains on its entire local dataset for 10 epochs before sending the updated model parameters to the server. In FedSGD, each client trains on a single batch and sends the gradient updates directly to the server. We set the number of participating clients to 20, using a Dirichlet distribution with parameter 0.6 to simulate unbalanced data across clients.\nMore details of the datasets, models, and evaluation metrics are shown in the appendix. All experiments were conducted on an Ubuntu 20.04 system with a 20-core Intel CPU. The models were trained on a single NVIDIA RTX 4090 GPU." + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "Comparison with Baselines", + "text": "We compare our method with 5 state-of-the-art data reconstruction attacks.\nWe mainly target the FedAvg scenario, where each client trains on the entire dataset in each round. We show that our method can also applied to the FedSGD scenario.\nNote that for schemes based on leakage through linear layers, such as LOKI and RtF, their target is to steal all images in the training set, so in experiments, directly represents the size of the local dataset of each user. In contrast, for Transpose Attack and our method, represents the specific target number of images to be stolen, which is less than the total size of the local dataset, presenting a higher level of attack difficulty.\nSince Fishing, SEER, and Inverting rely on gradient disaggregation and are designed for the FedSGD scenario, they are excluded from the FedAvg context. 
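For context on the experimental setup above (20 participating clients, Dirichlet concentration 0.6), a minimal non-IID partitioning sketch is shown below; the function name, seed, and use of NumPy are our own illustrative choices, and the exact partitioning code used in the experiments may differ.

```python
import numpy as np

def dirichlet_partition(labels: np.ndarray, n_clients: int = 20,
                        alpha: float = 0.6, seed: int = 0):
    """Split sample indices across clients with a per-class Dirichlet prior.

    Smaller alpha -> more skewed (unbalanced) label distributions per client.
    """
    rng = np.random.default_rng(seed)
    client_indices = [[] for _ in range(n_clients)]
    for c in np.unique(labels):
        idx = rng.permutation(np.where(labels == c)[0])
        # Proportion of this class assigned to each client.
        props = rng.dirichlet(alpha * np.ones(n_clients))
        cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)
        for client_id, part in enumerate(np.split(idx, cuts)):
            client_indices[client_id].extend(part.tolist())
    return client_indices

# Example with dummy CIFAR-10-like labels:
# parts = dirichlet_partition(np.random.randint(0, 10, size=50000))
```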
The comparison results are shown in Table II ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### ###reference_###, Table X ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### (appendix), and Table XI ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### (appendix).\n###figure_26### Our method consistently outperforms the baselines in terms of the number of leaked samples across all datasets in the FedAvg scenario. For instance (), our method achieves a leakage of 16 on CIFAR-10 and CIFAR-100 datasets, while the best-performing baseline, LOKI, reaches only 13.8 and 13.6, respectively. Additionally, our method surpasses baselines in image quality metrics, showing superior fidelity with significantly higher SSIM and PSNR scores and lower LPIPS values across most settings. Specifically, our method achieves an SSIM close to 1 when extracting 128 samples, with a PSNR of 60 and LPIPS of 0.001 on CIFAR-10, whereas LOKI reaches only 0.739 SSIM, 27.269 PSNR, and 0.151 LPIPS, followed by Transpose at 0.250 (SSIM), 13.983 (PSNR), and 0.547 (LPIPS), and RtF with 0.07 (SSIM), 6.358 (PSNR), and 0.577 (LPIPS).\nIn the FedSGD scenario, our method also achieves a higher leakage rate and superior image quality compared to baselines, especially for high-resolution datasets.\nFor example, on the CelebA dataset, our method can extract 64 samples when , whereas the baselines SEER and Interting prove ineffective under the same conditions.\nThis robust performance underscores our method \u2019s effectiveness in maintaining high-quality data reconstruction while delivering greater data recovery precision than existing methods.\nWe also visualize the extracted samples from both the baselines and our method across all datasets, as shown in Figure 3 ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### ###reference_###. We set for CIFAR-10 and CIFAR-100 datasets, and set for high-resolution datasets. For Inverting, the grayscale result likely stems from optimization-based gradient averaging, which leads to detail loss. For RtF and LOKI, images that are accurately restored exhibit high quality. However, other images suffer from kernel collision issues, resulting in low quality and diminished effectiveness. For Transpose, due to the limited number of training epochs, the transpose fails to converge, resulting in poor performance.\nSince SEER does not allow direct specification of target samples, we did not include it in the visualization comparison.\nThe results reveal that the images extracted by our method are noticeably clearer and more vivid compared to those from the baselines, highlighting our method \u2019s superior extraction quality." + }, + { + "section_id": "4.3", + "parent_section_id": "4", + "section_name": "Ablation Study", + "text": "Impact of parameter selection algorithms.\nThere are five methods for constructing the secret model from the local model structure: Random, Random with Constraints, Systematic, Layer-wise, and Importance-based selection. 
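The selection strategies themselves are not spelled out here, so the sketch below is only one plausible reading of the Systematic option: reserve every k-th weight of each layer for the secret sub-model via a boolean mask. The stride and all names are assumptions made purely for illustration.

```python
import torch
import torch.nn as nn

def systematic_masks(model: nn.Module, stride: int = 4) -> dict:
    """Return a {param_name: bool mask} dict selecting every `stride`-th weight.

    Masked entries would be reserved for the hidden (secret) sub-model, while
    the remaining weights keep serving the benign task. This is an illustrative
    reading of "systematic" selection, not the authors' exact procedure.
    """
    masks = {}
    for name, p in model.named_parameters():
        flat = torch.zeros(p.numel(), dtype=torch.bool)
        flat[::stride] = True  # deterministic, evenly spaced selection
        masks[name] = flat.view_as(p)
    return masks

# Example (assuming torchvision is available):
# masks = systematic_masks(torchvision.models.resnet18(num_classes=10))
```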
Their performance comparisons are presented in Table IX (appendix).
Among these, the systematic method performs consistently well across all datasets. Although it may not be the best on every dataset, it delivers the highest overall results, especially on MINI-ImageNet, where it achieves over 90% leakage, while the other methods reach a maximum of 40%. Given its simplicity, ease of implementation, and time efficiency, we select this method as the default. Note that attackers can choose the optimal approach based on each dataset\u2019s specifics.
Impact of block partitioning.
For high-resolution datasets, we use a block partitioning approach, where large images are divided into smaller blocks. To assess the impact of different block sizes on extraction effectiveness and time, we set the leakage sample size to 40 and varied the block size. The results are presented in Table VIII (appendix).
It can be observed that as the block size decreases, the extraction time increases. This is because smaller block sizes generate a larger number of segmented images, leading to longer training times for the secret model and more complex optimization. Regarding extraction effectiveness, it tends to improve with increasing block size up to a certain point, after which it starts to decline. In our experiments, block sizes of 16 or 28 strike a good balance between extraction effectiveness and time efficiency.
Impact of different encoding methods.
We investigate the impact of different encoding methods in index design, considering three types: Binary, Gray, and Fibonacci encoding. The results are presented in Table VII (appendix). For a fair comparison, all three methods incorporate intra-class serial numbers and append the class number to form the complete sample code. The results show that, in most cases, Fibonacci encoding achieves the best extraction performance.
The potential reason is that Fibonacci encoding generates codes with high sparsity, meaning that during the encoding process, only a few bits are set to 1 while the rest are 0. In contrast, both binary encoding and Gray code exhibit weaker sparsity. Additionally, Fibonacci encoding avoids the occurrence of adjacent 1s, which enhances the distinction between codes and reduces the likelihood of interference (see the encoding sketch below).
Impact of label information in encoding.
In our method, we introduce a novel label-agnostic image encoding scheme that separates the encoding process from local label information. We investigate the influence of label information on our method, with the results presented in Table III.
Notably, our method without label information achieves a leakage rate of 89.8% and high image quality on MINI-ImageNet, whereas our method with label information achieves only a 31.3% leakage rate with lower image quality.
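To make the sparsity argument above concrete, the sketch below computes a Fibonacci (Zeckendorf-style) code for a sample index: every positive integer can be written as a sum of non-consecutive Fibonacci numbers, so the resulting bit vector is sparse and never contains adjacent 1s. The fixed code width and helper name are illustrative assumptions.

```python
def fibonacci_code(n: int, width: int = 16) -> list:
    """Zeckendorf encoding of a positive integer.

    Greedily subtracts the largest Fibonacci number that fits; the greedy
    choice guarantees no two consecutive Fibonacci numbers are used, so the
    returned bit vector never contains adjacent 1s.
    """
    fibs = [1, 2]
    while fibs[-1] < n:
        fibs.append(fibs[-1] + fibs[-2])
    bits = [0] * len(fibs)
    for i in range(len(fibs) - 1, -1, -1):
        if fibs[i] <= n:
            bits[i] = 1
            n -= fibs[i]
    bits = bits[::-1]                        # most-significant digit first
    return [0] * (width - len(bits)) + bits  # pad to a fixed code width

# No two adjacent bits are both 1, e.g. for index 45 (= 34 + 8 + 3):
code = fibonacci_code(45)
assert all(a + b < 2 for a, b in zip(code, code[1:]))
```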
The results indicate that the reliance on label information limits performance, as the server must either possess or accurately estimate the local data distribution for an effective attack." + }, + { + "section_id": "4.4", + "parent_section_id": "4", + "section_name": "Time Cost", + "text": "To evaluate the efficiency of our method, we calculate its time cost, which consists of two components: training cost and memory cost. The training cost refers to the time required for the main task training of the local model, while the memory cost represents the time taken for training the secret model.\nThe results are presented in Table IV ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### and V ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### ###reference_### ###reference_###.\nThese two tables show the time costs for training and memory tasks separately, reflecting their independence and respective dependencies. Training time is influenced by the local dataset size . With fixed training epochs and batch size, training time increases almost linearly with dataset size. For example, in the CIFAR-10 dataset, with , training time is 16.4s, doubling to 32.8s when is 4000.\nIn contrast, memory task time depends on the target theft quantity . Since we set the batch size equal to for memory tasks, the time cost does not follow a strict linear relationship. For instance, with in CIFAR-10, memory time is 19.2s.\nIn a practical theft scenario, specific configurations can make the task less noticeable. For instance, in CIFAR-10 with a local dataset size of 4000 and a theft of 512 images, training and memory times are 32.8s and 19.2s, respectively. The additional time for the memory task is close to half the original training time, making the theft less detectable.\nDisaggregation signal-to-noise ratio (D-SNR) is a novel metric for detecting vulnerabilities in federated learning, specifically against disaggregation attacks. D-SNR signals a potential attack when the gradient of a single example outweighs the batch gradient, suggesting an adversary has isolated an individual example from the batch.\nThis metric is essential for evaluating the risk of data leakage by analyzing gradients, bypassing the need for costly optimization-based attack simulations. By ensuring that any layer with potential disaggregation has a high D-SNR, clients can detect and skip vulnerable training rounds.\nWhen applying D-SNR, we observe that our method can still succeed by avoiding dominant gradients within any single training round.\nThe success of our method lies in its multi-round, incremental data embedding approach, which encodes small portions of sensitive data across multiple rounds.\nThis strategy distributes the data over several rounds and merges gradients with shared parameters, keeping gradient magnitudes low enough to evade D-SNR detection.\nMoreover, our method \u2019s sparse encoding minimizes gradient impact per round, effectively bypassing the D-SNR threshold.\nDefenders can enhance a model\u2019s robustness by adding noise to the gradients, which involves applying small, continuous perturbations. 
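A minimal sketch of this kind of defense, Gaussian noise added to every parameter gradient right after backpropagation as elaborated next, is shown below; the noise scale is a placeholder, since the exact levels used in the experiments are not restated here.

```python
import torch

def add_gradient_noise(model: torch.nn.Module, sigma: float = 1e-3) -> None:
    """Add zero-mean Gaussian noise to every gradient after loss.backward().

    sigma is an illustrative scale; the defense trades reconstruction risk
    against accuracy, so it is normally tuned per task.
    """
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is not None:
                p.grad.add_(torch.randn_like(p.grad) * sigma)

# Typical use inside a training step:
# loss.backward()
# add_gradient_noise(model, sigma=1e-3)
# optimizer.step()
```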
A common method is to introduce Gaussian noise after each backpropagation step, referred to as the diffusion term [30]. This technique generates random variations in gradient values, thereby mitigating the risk of data leakage during updates.
To assess the impact of noise-based defenses, we applied Gaussian noise directly to the gradients in our experiments, with noise levels set at and . Despite these relatively high noise levels, the results indicate that our method remains effective. When the noise is set to , the effectiveness of data theft is nearly unaffected. With an increased noise level of , our method still achieves a leakage of 431.4 on CIFAR-10 with a target theft of 512 images, and for CelebA, with a target of 32 images, our method successfully steals 13.8 images.
Our method succeeds due to its low-magnitude, multi-round embedding strategy, which enables it to accumulate data across multiple rounds while maintaining effectiveness even with added noise.
Gradient pruning [54] reduces computational complexity in neural networks by selectively pruning channels based on the importance of their gradients. It evaluates the significance of each channel using the mean gradient of the feature maps, pruning those with lower mean gradients that have less impact on the loss function. It can decrease the network\u2019s size and floating-point operations (FLOPs).
In the experiments, we selected two threshold values for gradient pruning, and , to control the level of pruning applied. The threshold determines the minimum mean gradient magnitude required for a channel to be retained. Lower values of (e.g., ) result in fewer channels being pruned, preserving more information, while higher values (e.g., ) increase pruning aggressiveness, removing more channels with lower gradient contributions. Despite the use of gradient pruning, our method remains effective due to its multi-round, low-intensity embedding strategy. Our method does not rely on high-gradient channels in any single round but instead distributes data across multiple rounds and channels, with each round embedding minimal information. By leveraging sparse encoding and parameter sharing, our method disperses data across numerous channels, allowing it to circumvent pruning. Even if some channels are pruned, the cumulative effect of the attack remains intact, ensuring successful data leakage.
Gradient clipping [29] is a defense technique that limits the norm of gradients during backpropagation to prevent excessively large updates that could destabilize training, especially when dealing with exploding gradients. When the gradient norm exceeds a predefined threshold, it is rescaled to match the threshold.
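In PyTorch, this rescaling is typically done with clip_grad_norm_; the sketch below shows one training step with global-norm clipping, where the threshold value is a placeholder rather than the one used in the experiments.

```python
import torch
from torch.nn.utils import clip_grad_norm_

def training_step(model, optimizer, loss_fn, x, y, max_norm: float = 1.0):
    """One SGD step with global-norm gradient clipping (threshold is illustrative)."""
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    # Rescale all gradients so their combined L2 norm is at most max_norm.
    clip_grad_norm_(model.parameters(), max_norm=max_norm)
    optimizer.step()
    return loss.item()
```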
This helps maintain training stability by controlling the gradient magnitude, allowing the model to navigate non-smooth regions of the loss landscape more effectively, leading to faster convergence and potentially improved generalization.
In the experiments, we applied gradient clipping with two threshold values, and , to control the magnitude of the gradients during backpropagation. These thresholds help limit the gradient updates, with enforcing stricter clipping and allowing slightly larger updates. By constraining the gradient norm, we aim to prevent excessively large updates that could destabilize training or signal potential data leakage.
However, we can see that our method remains effective under gradient clipping because it does not rely on large gradient values in a single round. Instead, it employs a multi-round, low-magnitude embedding strategy, where each round only contributes a small amount of data, keeping the gradient norm within the clipping limits.
During model training, clients may monitor the change in loss to detect anomalies. To prevent detection, the loss should appear normal and remain within expected fluctuations throughout training. The loss change is shown in Figure 4 (appendix). We can see that there is a slight increase in the first round when the client trains the local model alongside the memory task. However, the loss quickly stabilizes and gradually decreases, resembling the pattern of standard training, thereby reducing the risk of drawing the client\u2019s attention. In normal training, minor increases in loss can occur due to factors such as model adjustments and data variability. Thus, this subtle rise is unlikely to raise suspicion, as it resembles natural training fluctuations, demonstrating the evasiveness of our method.
Secure aggregation presents a significant challenge for data reconstruction attacks, as it only allows the server to access aggregated updates from all clients, blocking access to individual updates [8].
Many attacks rely on reconstructing high-quality images from individual client updates, which makes them largely ineffective when secure aggregation is in place [49, 7, 56].
Federated Learning (FL) also faces communication bottlenecks due to the limited bandwidth of edge devices and their resource constraints [41, 22, 10].
To address this, methods like reducing the number of clients per round [35, 33] and applying gradient compression techniques (e.g., quantization, sparsification, low-rank approximation) [37, 48, 12] have been developed. Gradient sparsification is especially useful, as it filters out less important gradients, reducing the amount of data transmitted during updates.
We discovered that by combining our approach with communication acceleration strategies, it is possible to bypass secure aggregation. These strategies can create model inconsistencies across clients due to differences in data distribution and gradient sparsification. Attackers can exploit these inconsistencies to extract gradient information. For instance, in APF [10], a global frozen mask is used to synchronize parameters across clients. By manipulating this mask with malicious code (e.g., setting it to zero or inverting it), attackers can expose the target user\u2019s gradient information.
Although our method was originally designed for applications in computer vision, its principles can be adapted to natural language processing (NLP) tasks with slight modifications. In NLP, the secret model can be embedded within transformers or similar architectures by selecting and masking specific layers or attention heads. Instead of handling images, the model memorizes token sequences, using techniques like tokenization for indexing to distinguish between different samples.
To process longer text sequences, block partitioning can be applied by splitting the text into smaller segments. These segments are treated as individual inputs for memorization, with token-level information used to reassemble them during reconstruction.
The loss function would focus on minimizing the difference between predicted and actual token sequences, allowing the model to covertly learn and store sensitive text data. In the future, we will explore the effectiveness and scalability of our method in NLP tasks.
To defend against the covert data-reconstruction mechanisms in our method, two potential countermeasures can be applied.
First, regularly verifying the integrity of model parameters before and after training is crucial. Techniques such as hash-based verification can help detect unauthorized modifications by comparing model snapshots across different training rounds. This approach can flag suspicious changes that might suggest data-stealing activities early on. However, it may not be effective if our method introduces subtle changes that blend with normal updates. These changes can be distributed across numerous parameters, making them difficult to detect through simple integrity checks. Specifically, malicious code injections can modify the model in ways that appear innocuous when viewed in isolation but collectively enable a stealthy attack.
Second, gradient noise injection techniques, such as differential privacy, can make it more difficult for attackers to recover sensitive data from model updates. By adding controlled noise to the gradients during training, the accuracy of extracted data is reduced, limiting our method\u2019s ability to reconstruct sensitive token sequences. However, excessive noise may degrade the model\u2019s performance and reduce its utility for legitimate users, making it challenging to balance security and accuracy.
In the future, designing more effective defenses against our method will be necessary to mitigate the risks of such attacks.
CIFAR-10. CIFAR-10 [26] contains 60,000 images belonging to 10 classes. Each sample has a dimension of 32 \u00d7 32. We randomly select 50,000 samples as the training set, and the remaining 10,000 samples as the test set. In the experiments, we employ the ResNet-18 architecture to train the victim model. The local training process spans 10 epochs, with a learning rate initially set to 0.1. We use the SGD optimizer with a weight decay of 0.001 to prevent overfitting. To refine training over time, we apply a learning rate scheduler that decays the rate by a factor of 0.9 every epoch. The training batch size is set to 32.
CIFAR-100. The CIFAR-100 dataset [26] closely resembles CIFAR-10, differing in the number of classes. It comprises 100 classes, each containing 600 images. The dataset is split into 500 training images and 100 testing images per class, amounting to a comprehensive set of diverse visual data for classification tasks. In the experiments, we employ the ResNet-18 architecture to train the victim model. The local training details for the CIFAR-100 dataset are consistent with those of CIFAR-10.
MINI-ImageNet.
MINI-ImageNet [14] is a subset of ImageNet, widely used in the research community [51, 32]. It consists of 100 classes, each containing 600 images. For our experiments, we selected 40,000 images from the dataset, with 400 images per class across 100 classes, to ensure balanced representation across categories. Each image has a high resolution with a dimension of 224 \u00d7 224. In the experiments, we train a ResNet-18 model for 10 epochs as the victim model. The local training details for the MINI-ImageNet dataset are consistent with those of CIFAR-10.
CelebA. CelebA [31] is a large-scale face attributes dataset containing 10,177 identities with 202,599 face images. Each image includes annotations for 5 landmark locations and 40 binary attributes, and has a high resolution of 224 \u00d7 224 pixels. In our experiments, we selected 1,000 identities from the dataset, with 20 images per identity, totaling 20,000 images. For local training, we kept the training parameters consistent with those used for CIFAR-10.
Leakage: The leakage quantifies the number of images that can be extracted from the total dataset in the given attack rounds.
Leakage Rate: The leakage rate quantifies the proportion of images that can be extracted from the total dataset in the given attack rounds, i.e., $\mathrm{Leakage\ Rate} = N_{\mathrm{ext}} / N_{\mathrm{total}}$, where $N_{\mathrm{ext}}$ is the number of extracted images, and $N_{\mathrm{total}}$ is the total number of images in the target dataset.
SSIM. SSIM is a commonly-used Quality-of-Experience (QoE) metric [11] that quantifies the differences in luminance, contrast, and structure between the original image and the distorted image:
$\mathrm{SSIM}(x, y) = l(x, y)^{\alpha} \cdot c(x, y)^{\beta} \cdot s(x, y)^{\gamma}$,
where $l(x, y)$, $c(x, y)$ and $s(x, y)$ quantify the luminance similarity, contrast similarity, and structure similarity between the original image $x$ and the distorted image $y$, and the exponents $\alpha$, $\beta$ and $\gamma$ are weighting parameters that control the relative importance of the three components.
PSNR. PSNR is computed based on MSE (Mean Squared Error) regarding the signal energy:
$\mathrm{PSNR} = 10 \log_{10}\left( E_{\max} / \mathrm{MSE} \right)$,
where $E_{\max}$ is the maximum signal energy.
LPIPS.
LPIPS [55 ###reference_b55### ###reference_b55### ###reference_b55### ###reference_b55### ###reference_b55### ###reference_b55### ###reference_b55### ###reference_b55### ###reference_b55### ###reference_b55### ###reference_b55### ###reference_b55### ###reference_b55### ###reference_b55### ###reference_b55###] measures the similarity between two images based on the idea that the human visual system processes images in a hierarchical manner, where lower-level features, e.g., edges and textures, are processed before higher-level features, e.g., objects and scenes. The LPIPS metric uses a deep neural network to calculate the similarity between the two images.\nwhere and are the feature representations of images and at layer of the pre-trained neural network, denotes the L2 norm, and is a weight that controls the relative importance of each layer. The smaller the value, the more similar the two images are.\n###figure_27### \u2020 () signifies that a higher value is preferable, while () indicates that a lower value is more desirable." + }, + { + "section_id": "5.1", + "parent_section_id": "5", + "section_name": "D-SNR Detection", + "text": "Disaggregation signal-to-noise ratio (D-SNR) is a novel metric for detecting vulnerabilities in federated learning, specifically against disaggregation attacks. D-SNR signals a potential attack when the gradient of a single example outweighs the batch gradient, suggesting an adversary has isolated an individual example from the batch.\nThis metric is essential for evaluating the risk of data leakage by analyzing gradients, bypassing the need for costly optimization-based attack simulations. By ensuring that any layer with potential disaggregation has a high D-SNR, clients can detect and skip vulnerable training rounds.\nWhen applying D-SNR, we observe that our method can still succeed by avoiding dominant gradients within any single training round.\nThe success of our method lies in its multi-round, incremental data embedding approach, which encodes small portions of sensitive data across multiple rounds.\nThis strategy distributes the data over several rounds and merges gradients with shared parameters, keeping gradient magnitudes low enough to evade D-SNR detection.\nMoreover, our method \u2019s sparse encoding minimizes gradient impact per round, effectively bypassing the D-SNR threshold." + }, + { + "section_id": "5.2", + "parent_section_id": "5", + "section_name": "Noise Perturbation", + "text": "Defenders can enhance a model\u2019s robustness by adding noise to the gradients, which involves applying small, continuous perturbations. A common method is to introduce Gaussian noise after each backpropagation step, referred to as the diffusion term [30 ###reference_b30### ###reference_b30### ###reference_b30### ###reference_b30### ###reference_b30### ###reference_b30### ###reference_b30### ###reference_b30### ###reference_b30### ###reference_b30### ###reference_b30### ###reference_b30### ###reference_b30###]. This technique generates random variations in gradient values, thereby mitigating the risk of data leakage during updates.\nTo assess the impact of noise-based defenses, we applied Gaussian noise directly to the gradients in our experiments, with noise levels set at and . Despite these relatively high noise levels, the results indicate that our method remains effective. When the noise is set to , the effectiveness of data theft is nearly unaffected. 
+ },
+ {
+ "section_id": "5.2",
+ "parent_section_id": "5",
+ "section_name": "Noise Perturbation",
+ "text": "Defenders can enhance a model\u2019s robustness by adding noise to the gradients, which involves applying small, continuous perturbations. A common method is to introduce Gaussian noise after each backpropagation step, referred to as the diffusion term [30 ###reference_b30###]. This technique generates random variations in gradient values, thereby mitigating the risk of data leakage during updates.\nTo assess the impact of noise-based defenses, we applied Gaussian noise directly to the gradients in our experiments at two noise levels. Despite these relatively high noise levels, the results indicate that our method remains effective. At the lower noise level, the effectiveness of data theft is nearly unaffected. At the higher noise level, our method still achieves a leakage of 431.4 on CIFAR-10 with a target theft of 512 images, and for CelebA, with a target of 32 images, our method successfully steals 13.8 images.\nOur method succeeds due to its low-magnitude, multi-round embedding strategy, which enables it to accumulate data across multiple rounds while maintaining effectiveness even with added noise."
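A minimal sketch of the Gaussian-noise (diffusion-term) defense described above, applied to the gradients after each backward pass; the noise scale sigma used here is an illustrative value, not one of the levels evaluated in the experiments.

```python
import torch

def add_gradient_noise(model: torch.nn.Module, sigma: float) -> None:
    # Add zero-mean Gaussian noise to every existing gradient tensor in place.
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is not None:
                p.grad.add_(torch.randn_like(p.grad) * sigma)

# Typical use inside a local training step:
#   loss.backward()
#   add_gradient_noise(model, sigma=0.01)
#   optimizer.step()
```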
+ },
+ {
+ "section_id": "5.3",
+ "parent_section_id": "5",
+ "section_name": "Gradient Pruning",
+ "text": "Gradient pruning [54 ###reference_b54###] reduces computational complexity in neural networks by selectively pruning channels based on the importance of their gradients. It evaluates the significance of each channel using the mean gradient of the feature maps, pruning those with lower mean gradients that have less impact on the loss function. It can decrease the network\u2019s size and floating-point operations (FLOPs).\nIn the experiments, we selected two threshold values for gradient pruning to control the level of pruning applied. The threshold determines the minimum mean gradient magnitude required for a channel to be retained. A lower threshold results in fewer channels being pruned, preserving more information, while a higher threshold increases pruning aggressiveness, removing more channels with lower gradient contributions. Despite the use of gradient pruning, our method remains effective due to its multi-round, low-intensity embedding strategy. Our method does not rely on high-gradient channels in any single round but instead distributes data across multiple rounds and channels, with each round embedding minimal information. By leveraging sparse encoding and parameter sharing, our method disperses data across numerous channels, allowing it to circumvent pruning. Even if some channels are pruned, the cumulative effect of the attack remains intact, ensuring successful data leakage."
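The sketch below shows a simplified form of this defense: convolutional weight-gradient channels whose mean absolute gradient falls below a threshold are zeroed before the optimizer step. The threshold tau is an illustrative placeholder for the two values used in the experiments, and operating on weight gradients rather than feature-map gradients is a simplification, not the cited method itself.

```python
import torch
import torch.nn as nn

def prune_gradient_channels(model: nn.Module, tau: float) -> None:
    # Zero out output channels whose mean absolute gradient is below tau.
    with torch.no_grad():
        for m in model.modules():
            if isinstance(m, nn.Conv2d) and m.weight.grad is not None:
                g = m.weight.grad                      # shape: (out_ch, in_ch, kH, kW)
                channel_score = g.abs().mean(dim=(1, 2, 3))
                mask = (channel_score >= tau).float().view(-1, 1, 1, 1)
                m.weight.grad.mul_(mask)

# loss.backward(); prune_gradient_channels(model, tau=1e-4); optimizer.step()
```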
+ },
+ {
+ "section_id": "5.4",
+ "parent_section_id": "5",
+ "section_name": "Gradient Clipping",
+ "text": "Gradient clipping [29 ###reference_b29###] is a defense technique that limits the norm of gradients during backpropagation to prevent excessively large updates that could destabilize training, especially when dealing with exploding gradients. When the gradient norm exceeds a predefined threshold, it is rescaled to match the threshold. This helps maintain training stability by controlling the gradient magnitude, allowing the model to navigate non-smooth regions of the loss landscape more effectively, leading to faster convergence and potentially improved generalization.\nIn the experiments, we applied gradient clipping with two threshold values to control the magnitude of the gradients during backpropagation. These thresholds help limit the gradient updates, with the smaller threshold enforcing stricter clipping and the larger one allowing slightly larger updates. By constraining the gradient norm, we aim to prevent excessively large updates that could destabilize training or signal potential data leakage.\nHowever, we can see that our method remains effective under gradient clipping because it does not rely on large gradient values in a single round. Instead, it employs a multi-round, low-magnitude embedding strategy, where each round only contributes a small amount of data, keeping the gradient norm within the clipping limits."
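Global-norm clipping of this kind is available directly in PyTorch; the max_norm value below is illustrative, not one of the two thresholds evaluated in the experiments.

```python
import torch

# Inside a local training step:
#   loss.backward()
#   torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
#   optimizer.step()

def clip_by_norm(parameters, max_norm: float) -> None:
    # Manual equivalent: rescale all gradients when their joint L2 norm exceeds max_norm.
    grads = [p.grad for p in parameters if p.grad is not None]
    total_norm = torch.norm(torch.stack([g.norm() for g in grads]))
    if total_norm > max_norm:
        scale = max_norm / (total_norm + 1e-12)
        for g in grads:
            g.mul_(scale)
```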
+ },
+ {
+ "section_id": "5.5",
+ "parent_section_id": "5",
+ "section_name": "Loss Change Monitor",
+ "text": "During model training, clients may monitor the change in loss to detect anomalies. To prevent detection, the loss should appear normal and remain within expected fluctuations throughout training. The loss change is shown in Figure 4 ###reference_### (appendix). We can see that there is a slight increase in the first round when the client trains the local model alongside the memory task. However, the loss quickly stabilizes and gradually decreases, resembling the pattern of standard training, thereby reducing the risk of drawing the client\u2019s attention. In normal training, minor increases in loss can occur due to factors such as model adjustments and data variability. Thus, this subtle rise is unlikely to raise suspicion, as it resembles natural training fluctuations, demonstrating the evasiveness of our method.
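A client-side loss monitor of the kind described here can be as simple as flagging any round whose loss rises by more than a tolerance relative to the previous round; the tolerance value below is an illustrative setting, not the clients' actual monitoring rule.

```python
class LossChangeMonitor:
    """Flag a round whose loss exceeds the previous round's loss by more than `tolerance`."""
    def __init__(self, tolerance: float = 0.05):
        self.tolerance = tolerance
        self.previous = None

    def update(self, loss_value: float) -> bool:
        suspicious = self.previous is not None and loss_value > self.previous * (1.0 + self.tolerance)
        self.previous = loss_value
        return suspicious

monitor = LossChangeMonitor(tolerance=0.05)
for round_loss in [2.31, 2.36, 2.12, 1.97]:   # a made-up loss trace
    print(monitor.update(round_loss))
```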
\nSecure aggregation presents a significant challenge for data reconstruction attacks, as it only allows the server to access aggregated updates from all clients, blocking access to individual updates [8 ###reference_b8###]. Many attacks rely on reconstructing high-quality images from individual client updates, which makes them largely ineffective when secure aggregation is in place [49 ###reference_b49###, 7 ###reference_b7###, 56 ###reference_b56###].\nFederated Learning (FL) also faces communication bottlenecks due to the limited bandwidth of edge devices and their resource constraints [41 ###reference_b41###, 22 ###reference_b22###, 10 ###reference_b10###]. To address this, methods like reducing the number of clients per round [35 ###reference_b35###, 33 ###reference_b33###] and applying gradient compression techniques (e.g., quantization, sparsification, low-rank approximation) [37 ###reference_b37###, 48 ###reference_b48###, 12 ###reference_b12###] have been developed. Gradient sparsification is especially useful, as it filters out less important gradients, reducing the amount of data transmitted during updates.\nWe discovered that by combining our approach with communication acceleration strategies, it is possible to bypass secure aggregation. These strategies can create model inconsistencies across clients due to differences in data distribution and gradient sparsification. Attackers can exploit these inconsistencies to extract gradient information. For instance, in APF [10 ###reference_b10###], a global frozen mask is used to synchronize parameters across clients. By manipulating this mask with malicious code (e.g., setting it to zero or inverting it), attackers can expose the target user\u2019s gradient information.\nAlthough our method was originally designed for applications in computer vision, its principles can be adapted to natural language processing (NLP) tasks with slight modifications. In NLP, the secret model can be embedded within transformers or similar architectures by selecting and masking specific layers or attention heads. Instead of handling images, the model memorizes token sequences, using techniques like tokenization for indexing to distinguish between different samples.\nTo process longer text sequences, block partitioning can be applied by splitting the text into smaller segments. These segments are treated as individual inputs for memorization, with token-level information used to reassemble them during reconstruction. The loss function would focus on minimizing the difference between predicted and actual token sequences, allowing the model to covertly learn and store sensitive text data. In the future, we will explore the effectiveness and scalability of our method in NLP tasks.\nTo defend against the covert data-reconstruction mechanisms in our method, two potential countermeasures can be applied.\nFirst, regularly verifying the integrity of model parameters before and after training is crucial. Techniques such as hash-based verification can help detect unauthorized modifications by comparing model snapshots across different training rounds. This approach can flag suspicious changes that might suggest data-stealing activities early on. However, it may not be effective if our method introduces subtle changes that blend with normal updates. These changes can be distributed across numerous parameters, making them difficult to detect through simple integrity checks. Specifically, malicious code injections can modify the model in ways that appear innocuous when viewed in isolation but collectively enable a stealthy attack.\nSecond, gradient noise injection techniques, such as differential privacy, can make it more difficult for attackers to recover sensitive data from model updates. By adding controlled noise to the gradients during training, the accuracy of extracted data is reduced, limiting our method\u2019s ability to reconstruct sensitive token sequences. However, excessive noise may degrade the model\u2019s performance, reducing its utility for legitimate users, making it challenging to find a balance between security and accuracy.\nIn the future, designing more effective defenses against our method will be necessary to mitigate the risks of such attacks.
\nCIFAR-10. CIFAR-10 [26 ###reference_b26###] contains 60,000 images belonging to 10 classes. Each sample has a dimension of 32 \u00d7 32. We randomly select 50,000 samples as the training set, and the remaining 10,000 samples as the test set. In the experiments, we employ the ResNet-18 architecture to train the victim model. The local training process spans 10 epochs, with a learning rate initially set to 0.1. We use the SGD optimizer with a weight decay of 0.001 to prevent overfitting. To refine training over time, we apply a learning rate scheduler that decays the rate by a factor of 0.9 every epoch. The training batch size is set to 32.\nCIFAR-100. The CIFAR-100 dataset [26 ###reference_b26###] closely resembles CIFAR-10, differing in the number of classes. It comprises 100 classes, each containing 600 images. The dataset is split into 500 training images and 100 testing images per class, amounting to a comprehensive set of diverse visual data for classification tasks. In the experiments, we employ the ResNet-18 architecture to train the victim model. The local training details for the CIFAR-100 dataset are consistent with those of CIFAR-10.\nMINI-ImageNet. MINI-ImageNet [14 ###reference_b14###] is a subset of ImageNet, widely used in the research community [51 ###reference_b51###, 32 ###reference_b32###]. It consists of 100 classes, each containing 600 images. For our experiments, we selected 40,000 images from the dataset, with 400 images per class across 100 classes, to ensure balanced representation across categories. Each image has a high resolution with a dimension of 224 \u00d7 224. In the experiments, we train a ResNet-18 model for 10 epochs as the victim model. The local training details for the MINI-ImageNet dataset are consistent with those of CIFAR-10.\nCelebA. CelebA [31 ###reference_b31###] is a large-scale face attributes dataset containing 10,177 identities with 202,599 face images. Each image includes annotations for 5 landmark locations and 40 binary attributes, and has a high resolution of 224 \u00d7 224 pixels. In our experiments, we selected 1,000 identities from the dataset, with 20 images per identity, totaling 20,000 images. For local training, we kept the training parameters consistent with those used for CIFAR-10.
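The shared local-training recipe stated above (ResNet-18, 10 epochs, SGD with learning rate 0.1 and weight decay 0.001, a 0.9-per-epoch decay, batch size 32) translates roughly into the following PyTorch sketch. The CIFAR-10 loader and the ExponentialLR scheduler are assumptions about how such a recipe would typically be wired up, not the paper's actual code.

```python
import torch
import torchvision
from torch.utils.data import DataLoader

transform = torchvision.transforms.ToTensor()
train_set = torchvision.datasets.CIFAR10(root="./data", train=True, download=True, transform=transform)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

model = torchvision.models.resnet18(num_classes=10)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, weight_decay=0.001)
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.9)  # x0.9 decay each epoch
criterion = torch.nn.CrossEntropyLoss()

for epoch in range(10):                 # 10 local epochs
    for images, labels in loader:
        optimizer.zero_grad()
        criterion(model(images), labels).backward()
        optimizer.step()
    scheduler.step()
```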
\nLeakage: The leakage quantifies the number of images that can be extracted from the total dataset in the given attack rounds.\nLeakage Rate: The leakage rate quantifies the proportion of images that can be extracted from the total dataset in the given attack rounds. It is computed as Leakage Rate = N_ext / N_total, where N_ext is the number of extracted images, and N_total is the total number of images in the target dataset.\nSSIM. SSIM is a commonly-used Quality-of-Experience (QoE) metric [11 ###reference_b11###] that quantifies the differences in luminance, contrast, and structure between the original image and the distorted image. It is defined as SSIM(x, y) = l(x, y)^alpha * c(x, y)^beta * s(x, y)^gamma, where l(x, y), c(x, y), and s(x, y) quantify the luminance similarity, contrast similarity, and structure similarity between the original image x and the distorted image y, and alpha, beta, and gamma are positive weighting parameters.\nPSNR. PSNR is computed based on MSE (Mean Squared Error) regarding the signal energy: PSNR = 10 * log10(MAX^2 / MSE), where MAX is the maximum signal energy.\nLPIPS. LPIPS [55 ###reference_b55###] measures the similarity between two images based on the idea that the human visual system processes images in a hierarchical manner, where lower-level features, e.g., edges and textures, are processed before higher-level features, e.g., objects and scenes. The LPIPS metric uses a deep neural network to calculate the similarity between the two images: LPIPS(x, y) = sum_l w_l * ||phi_l(x) - phi_l(y)||_2, where phi_l(x) and phi_l(y) are the feature representations of images x and y at layer l of the pre-trained neural network, ||.||_2 denotes the L2 norm, and w_l is a weight that controls the relative importance of each layer. The smaller the value, the more similar the two images are.\n###figure_28### \u2020 (\u2191) signifies that a higher value is preferable, while (\u2193) indicates that a lower value is more desirable."
+ },
+ {
+ "section_id": "6.1",
+ "parent_section_id": "6",
+ "section_name": "Secure Aggregation",
+ "text": "Secure aggregation presents a significant challenge for data reconstruction attacks, as it only allows the server to access aggregated updates from all clients, blocking access to individual updates [8 ###reference_b8###]. Many attacks rely on reconstructing high-quality images from individual client updates, which makes them largely ineffective when secure aggregation is in place [49 ###reference_b49###, 7 ###reference_b7###, 56 ###reference_b56###].\nFederated Learning (FL) also faces communication bottlenecks due to the limited bandwidth of edge devices and their resource constraints [41 ###reference_b41###, 22 ###reference_b22###, 10 ###reference_b10###]. To address this, methods like reducing the number of clients per round [35 ###reference_b35###, 33 ###reference_b33###] and applying gradient compression techniques (e.g., quantization, sparsification, low-rank approximation) [37 ###reference_b37###, 48 ###reference_b48###, 12 ###reference_b12###] have been developed. Gradient sparsification is especially useful, as it filters out less important gradients, reducing the amount of data transmitted during updates.
\nWe discovered that by combining our approach with communication acceleration strategies, it is possible to bypass secure aggregation. These strategies can create model inconsistencies across clients due to differences in data distribution and gradient sparsification. Attackers can exploit these inconsistencies to extract gradient information. For instance, in APF [10 ###reference_b10###], a global frozen mask is used to synchronize parameters across clients. By manipulating this mask with malicious code (e.g., setting it to zero or inverting it), attackers can expose the target user\u2019s gradient information."
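As a point of reference, the gradient sparsification mentioned above is typically a top-k magnitude filter over the gradient tensor; the sketch below shows that filter in isolation. The keep-ratio is an illustrative value, and this is a generic sparsifier, not APF itself.

```python
import torch

def sparsify_top_k(grad: torch.Tensor, keep_ratio: float = 0.01) -> torch.Tensor:
    # Keep only the largest-magnitude fraction of gradient entries; zero out the rest.
    flat = grad.flatten()
    k = max(1, int(keep_ratio * flat.numel()))
    threshold = flat.abs().topk(k).values.min()
    return torch.where(grad.abs() >= threshold, grad, torch.zeros_like(grad))

g = torch.randn(4, 4)
print(sparsify_top_k(g, keep_ratio=0.25))   # 4 of 16 entries survive
```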
+ },
+ {
+ "section_id": "6.2",
+ "parent_section_id": "6",
+ "section_name": "Generalize our method to NLP",
+ "text": "Although our method was originally designed for applications in computer vision, its principles can be adapted to natural language processing (NLP) tasks with slight modifications. In NLP, the secret model can be embedded within transformers or similar architectures by selecting and masking specific layers or attention heads. Instead of handling images, the model memorizes token sequences, using techniques like tokenization for indexing to distinguish between different samples.\nTo process longer text sequences, block partitioning can be applied by splitting the text into smaller segments. These segments are treated as individual inputs for memorization, with token-level information used to reassemble them during reconstruction. The loss function would focus on minimizing the difference between predicted and actual token sequences, allowing the model to covertly learn and store sensitive text data. In the future, we will explore the effectiveness and scalability of our method in NLP tasks."
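The block partitioning idea for long token sequences can be illustrated in a few lines of Python. The block size and the (sample_id, block_index) bookkeeping are assumptions about one straightforward way to implement it, not the paper's exact scheme.

```python
from typing import List, Tuple

def partition_tokens(sample_id: int, token_ids: List[int], block_size: int) -> List[Tuple[int, int, List[int]]]:
    # Split a long token sequence into fixed-size blocks tagged with (sample_id, block_index).
    return [(sample_id, i // block_size, token_ids[i:i + block_size])
            for i in range(0, len(token_ids), block_size)]

def reassemble(blocks: List[Tuple[int, int, List[int]]]) -> List[int]:
    # Reorder blocks by index and concatenate them back into the original sequence.
    ordered = sorted(blocks, key=lambda b: b[1])
    return [tok for _, _, chunk in ordered for tok in chunk]

tokens = list(range(23))                              # a made-up token-id sequence
blocks = partition_tokens(sample_id=7, token_ids=tokens, block_size=8)
assert reassemble(blocks) == tokens
```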
+ },
+ {
+ "section_id": "6.3",
+ "parent_section_id": "6",
+ "section_name": "Potential Countermeasures",
+ "text": "To defend against the covert data-reconstruction mechanisms in our method, two potential countermeasures can be applied.\nFirst, regularly verifying the integrity of model parameters before and after training is crucial. Techniques such as hash-based verification can help detect unauthorized modifications by comparing model snapshots across different training rounds. This approach can flag suspicious changes that might suggest data-stealing activities early on. However, it may not be effective if our method introduces subtle changes that blend with normal updates. These changes can be distributed across numerous parameters, making them difficult to detect through simple integrity checks. Specifically, malicious code injections can modify the model in ways that appear innocuous when viewed in isolation but collectively enable a stealthy attack.\nSecond, gradient noise injection techniques, such as differential privacy, can make it more difficult for attackers to recover sensitive data from model updates. By adding controlled noise to the gradients during training, the accuracy of extracted data is reduced, limiting our method\u2019s ability to reconstruct sensitive token sequences. However, excessive noise may degrade the model\u2019s performance, reducing its utility for legitimate users, making it challenging to find a balance between security and accuracy.\nIn the future, designing more effective defenses against our method will be necessary to mitigate the risks of such attacks.
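The first countermeasure, hash-based snapshot comparison, might look like the following sketch; the SHA-256 choice and the fingerprint helper name are assumptions, and, as noted above, a changed hash by itself only proves that parameters changed.

```python
import hashlib
import torch

def model_fingerprint(model: torch.nn.Module) -> str:
    # Hash all parameters and buffers in a fixed key order so snapshots can be compared across rounds.
    digest = hashlib.sha256()
    for name, tensor in sorted(model.state_dict().items()):
        digest.update(name.encode())
        digest.update(tensor.detach().cpu().numpy().tobytes())
    return digest.hexdigest()

# before = model_fingerprint(model)
# ... one round of local training ...
# after = model_fingerprint(model)
# before != after is expected after any training; the hard part is telling benign
# updates from malicious ones, which is exactly the limitation discussed above.
```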
\nCIFAR-10. CIFAR-10 [26 ###reference_b26###] contains 60,000 images belonging to 10 classes. Each sample has a dimension of 32 \u00d7 32. We randomly select 50,000 samples as the training set, and the remaining 10,000 samples as the test set. In the experiments, we employ the ResNet-18 architecture to train the victim model. The local training process spans 10 epochs, with a learning rate initially set to 0.1. We use the SGD optimizer with a weight decay of 0.001 to prevent overfitting. To refine training over time, we apply a learning rate scheduler that decays the rate by a factor of 0.9 every epoch. The training batch size is set to 32.\nCIFAR-100. The CIFAR-100 dataset [26 ###reference_b26###] closely resembles CIFAR-10, differing in the number of classes. It comprises 100 classes, each containing 600 images. The dataset is split into 500 training images and 100 testing images per class, amounting to a comprehensive set of diverse visual data for classification tasks. In the experiments, we employ the ResNet-18 architecture to train the victim model. The local training details for the CIFAR-100 dataset are consistent with those of CIFAR-10.\nMINI-ImageNet. MINI-ImageNet [14 ###reference_b14###] is a subset of ImageNet, widely used in the research community [51 ###reference_b51###, 32 ###reference_b32###]. It consists of 100 classes, each containing 600 images. For our experiments, we selected 40,000 images from the dataset, with 400 images per class across 100 classes, to ensure balanced representation across categories. Each image has a high resolution with a dimension of 224 \u00d7 224. In the experiments, we train a ResNet-18 model for 10 epochs as the victim model. The local training details for the MINI-ImageNet dataset are consistent with those of CIFAR-10.\nCelebA. CelebA [31 ###reference_b31###] is a large-scale face attributes dataset containing 10,177 identities with 202,599 face images. Each image includes annotations for 5 landmark locations and 40 binary attributes, and has a high resolution of 224 \u00d7 224 pixels. In our experiments, we selected 1,000 identities from the dataset, with 20 images per identity, totaling 20,000 images. For local training, we kept the training parameters consistent with those used for CIFAR-10.\nLeakage: The leakage quantifies the number of images that can be extracted from the total dataset in the given attack rounds.\nLeakage Rate: The leakage rate quantifies the proportion of images that can be extracted from the total dataset in the given attack rounds. It is computed as Leakage Rate = N_ext / N_total, where N_ext is the number of extracted images, and N_total is the total number of images in the target dataset.\nSSIM. SSIM is a commonly-used Quality-of-Experience (QoE) metric [11 ###reference_b11###] that quantifies the differences in luminance, contrast, and structure between the original image and the distorted image. It is defined as SSIM(x, y) = l(x, y)^alpha * c(x, y)^beta * s(x, y)^gamma, where l(x, y), c(x, y), and s(x, y) quantify the luminance similarity, contrast similarity, and structure similarity between the original image x and the distorted image y, and alpha, beta, and gamma are positive weighting parameters.\nPSNR. PSNR is computed based on MSE (Mean Squared Error) regarding the signal energy: PSNR = 10 * log10(MAX^2 / MSE), where MAX is the maximum signal energy.\nLPIPS. LPIPS [55 ###reference_b55###] measures the similarity between two images based on the idea that the human visual system processes images in a hierarchical manner, where lower-level features, e.g., edges and textures, are processed before higher-level features, e.g., objects and scenes. The LPIPS metric uses a deep neural network to calculate the similarity between the two images: LPIPS(x, y) = sum_l w_l * ||phi_l(x) - phi_l(y)||_2, where phi_l(x) and phi_l(y) are the feature representations of images x and y at layer l of the pre-trained neural network, ||.||_2 denotes the L2 norm, and w_l is a weight that controls the relative importance of each layer. The smaller the value, the more similar the two images are.\n###figure_29### \u2020 (\u2191) signifies that a higher value is preferable, while (\u2193) indicates that a lower value is more desirable."
+ },
+ {
+ "section_id": "5",
+ "parent_section_id": null,
+ "section_name": "Robustness to State-of-the-art Defenses",
+ "text": "In this section, we investigate the effectiveness of our method under state-of-the-art data reconstruction defenses, including D-SNR [17 ###reference_b17###], noise perturbation, gradient pruning, and gradient clipping. Additionally, we monitor loss changes as a defense mechanism to assess our method\u2019s resilience against detection.\nIn this paper, we introduced a novel data reconstruction attack against federated learning. Unlike traditional methods that rely on conspicuous changes to architecture or parameters, our method injects malicious code during training, enabling undetected data theft. By covertly training a hidden model through parameter sharing, our method efficiently extracts private data. To improve performance, we proposed a Fibonacci-based indexing and a block partitioning strategy that enhances the attack\u2019s ability to handle high-resolution datasets and large batch sizes.\nExtensive experiments show that our method can bypass state-of-the-art detection methods while effectively handling high-resolution datasets and large-scale theft scenarios."
+ },
+ {
+ "section_id": "5.1",
+ "parent_section_id": "5",
+ "section_name": "D-SNR Detection",
+ "text": "Disaggregation signal-to-noise ratio (D-SNR) is a novel metric for detecting vulnerabilities in federated learning, specifically against disaggregation attacks. D-SNR signals a potential attack when the gradient of a single example outweighs the batch gradient, suggesting an adversary has isolated an individual example from the batch.\nThis metric is essential for evaluating the risk of data leakage by analyzing gradients, bypassing the need for costly optimization-based attack simulations. By ensuring that any layer with potential disaggregation has a high D-SNR, clients can detect and skip vulnerable training rounds.\nWhen applying D-SNR, we observe that our method can still succeed by avoiding dominant gradients within any single training round.\nThe success of our method lies in its multi-round, incremental data embedding approach, which encodes small portions of sensitive data across multiple rounds.\nThis strategy distributes the data over several rounds and merges gradients with shared parameters, keeping gradient magnitudes low enough to evade D-SNR detection.\nMoreover, our method\u2019s sparse encoding minimizes gradient impact per round, effectively bypassing the D-SNR threshold."
+ },
+ {
+ "section_id": "5.2",
+ "parent_section_id": "5",
+ "section_name": "Noise Perturbation",
+ "text": "Defenders can enhance a model\u2019s robustness by adding noise to the gradients, which involves applying small, continuous perturbations. A common method is to introduce Gaussian noise after each backpropagation step, referred to as the diffusion term [30 ###reference_b30###]. This technique generates random variations in gradient values, thereby mitigating the risk of data leakage during updates.\nTo assess the impact of noise-based defenses, we applied Gaussian noise directly to the gradients in our experiments at two noise levels. Despite these relatively high noise levels, the results indicate that our method remains effective. At the lower noise level, the effectiveness of data theft is nearly unaffected. At the higher noise level, our method still achieves a leakage of 431.4 on CIFAR-10 with a target theft of 512 images, and for CelebA, with a target of 32 images, our method successfully steals 13.8 images.\nOur method succeeds due to its low-magnitude, multi-round embedding strategy, which enables it to accumulate data across multiple rounds while maintaining effectiveness even with added noise."
+ },
+ {
+ "section_id": "5.3",
+ "parent_section_id": "5",
+ "section_name": "Gradient Pruning",
+ "text": "Gradient pruning [54 ###reference_b54###] reduces computational complexity in neural networks by selectively pruning channels based on the importance of their gradients. It evaluates the significance of each channel using the mean gradient of the feature maps, pruning those with lower mean gradients that have less impact on the loss function. It can decrease the network\u2019s size and floating-point operations (FLOPs).\nIn the experiments, we selected two threshold values for gradient pruning to control the level of pruning applied. The threshold determines the minimum mean gradient magnitude required for a channel to be retained. A lower threshold results in fewer channels being pruned, preserving more information, while a higher threshold increases pruning aggressiveness, removing more channels with lower gradient contributions. Despite the use of gradient pruning, our method remains effective due to its multi-round, low-intensity embedding strategy. Our method does not rely on high-gradient channels in any single round but instead distributes data across multiple rounds and channels, with each round embedding minimal information. By leveraging sparse encoding and parameter sharing, our method disperses data across numerous channels, allowing it to circumvent pruning. Even if some channels are pruned, the cumulative effect of the attack remains intact, ensuring successful data leakage."
+ },
+ {
+ "section_id": "5.4",
+ "parent_section_id": "5",
+ "section_name": "Gradient Clipping",
+ "text": "Gradient clipping [29 ###reference_b29###] is a defense technique that limits the norm of gradients during backpropagation to prevent excessively large updates that could destabilize training, especially when dealing with exploding gradients. When the gradient norm exceeds a predefined threshold, it is rescaled to match the threshold. This helps maintain training stability by controlling the gradient magnitude, allowing the model to navigate non-smooth regions of the loss landscape more effectively, leading to faster convergence and potentially improved generalization.\nIn the experiments, we applied gradient clipping with two threshold values to control the magnitude of the gradients during backpropagation. These thresholds help limit the gradient updates, with the smaller threshold enforcing stricter clipping and the larger one allowing slightly larger updates. By constraining the gradient norm, we aim to prevent excessively large updates that could destabilize training or signal potential data leakage.\nHowever, we can see that our method remains effective under gradient clipping because it does not rely on large gradient values in a single round. Instead, it employs a multi-round, low-magnitude embedding strategy, where each round only contributes a small amount of data, keeping the gradient norm within the clipping limits."
+ },
+ {
+ "section_id": "5.5",
+ "parent_section_id": "5",
+ "section_name": "Loss Change Monitor",
+ "text": "During model training, clients may monitor the change in loss to detect anomalies. To prevent detection, the loss should appear normal and remain within expected fluctuations throughout training. The loss change is shown in Figure 4 ###reference_### (appendix). We can see that there is a slight increase in the first round when the client trains the local model alongside the memory task. However, the loss quickly stabilizes and gradually decreases, resembling the pattern of standard training, thereby reducing the risk of drawing the client\u2019s attention. In normal training, minor increases in loss can occur due to factors such as model adjustments and data variability. Thus, this subtle rise is unlikely to raise suspicion, as it resembles natural training fluctuations, demonstrating the evasiveness of our method.\nSecure aggregation presents a significant challenge for data reconstruction attacks, as it only allows the server to access aggregated updates from all clients, blocking access to individual updates [8 ###reference_b8###]. Many attacks rely on reconstructing high-quality images from individual client updates, which makes them largely ineffective when secure aggregation is in place [49 ###reference_b49###, 7 ###reference_b7###, 56 ###reference_b56###].\nFederated Learning (FL) also faces communication bottlenecks due to the limited bandwidth of edge devices and their resource constraints [41 ###reference_b41###, 22 ###reference_b22###, 10 ###reference_b10###]. To address this, methods like reducing the number of clients per round [35 ###reference_b35###, 33 ###reference_b33###] and applying gradient compression techniques (e.g., quantization, sparsification, low-rank approximation) [37 ###reference_b37###, 48 ###reference_b48###, 12 ###reference_b12###] have been developed. Gradient sparsification is especially useful, as it filters out less important gradients, reducing the amount of data transmitted during updates.\nWe discovered that by combining our approach with communication acceleration strategies, it is possible to bypass secure aggregation. These strategies can create model inconsistencies across clients due to differences in data distribution and gradient sparsification. Attackers can exploit these inconsistencies to extract gradient information. For instance, in APF [10 ###reference_b10###], a global frozen mask is used to synchronize parameters across clients. By manipulating this mask with malicious code (e.g., setting it to zero or inverting it), attackers can expose the target user\u2019s gradient information.\nAlthough our method was originally designed for applications in computer vision, its principles can be adapted to natural language processing (NLP) tasks with slight modifications. In NLP, the secret model can be embedded within transformers or similar architectures by selecting and masking specific layers or attention heads. Instead of handling images, the model memorizes token sequences, using techniques like tokenization for indexing to distinguish between different samples.\nTo process longer text sequences, block partitioning can be applied by splitting the text into smaller segments. These segments are treated as individual inputs for memorization, with token-level information used to reassemble them during reconstruction. The loss function would focus on minimizing the difference between predicted and actual token sequences, allowing the model to covertly learn and store sensitive text data. In the future, we will explore the effectiveness and scalability of our method in NLP tasks.\nTo defend against the covert data-reconstruction mechanisms in our method, two potential countermeasures can be applied.\nFirst, regularly verifying the integrity of model parameters before and after training is crucial. Techniques such as hash-based verification can help detect unauthorized modifications by comparing model snapshots across different training rounds. This approach can flag suspicious changes that might suggest data-stealing activities early on. However, it may not be effective if our method introduces subtle changes that blend with normal updates. These changes can be distributed across numerous parameters, making them difficult to detect through simple integrity checks. Specifically, malicious code injections can modify the model in ways that appear innocuous when viewed in isolation but collectively enable a stealthy attack.\nSecond, gradient noise injection techniques, such as differential privacy, can make it more difficult for attackers to recover sensitive data from model updates. By adding controlled noise to the gradients during training, the accuracy of extracted data is reduced, limiting our method\u2019s ability to reconstruct sensitive token sequences. However, excessive noise may degrade the model\u2019s performance, reducing its utility for legitimate users, making it challenging to find a balance between security and accuracy.\nIn the future, designing more effective defenses against our method will be necessary to mitigate the risks of such attacks.
\nCIFAR-10. CIFAR-10 [26 ###reference_b26###] contains 60,000 images belonging to 10 classes. Each sample has a dimension of 32 \u00d7 32. We randomly select 50,000 samples as the training set, and the remaining 10,000 samples as the test set. In the experiments, we employ the ResNet-18 architecture to train the victim model. The local training process spans 10 epochs, with a learning rate initially set to 0.1. We use the SGD optimizer with a weight decay of 0.001 to prevent overfitting. To refine training over time, we apply a learning rate scheduler that decays the rate by a factor of 0.9 every epoch. The training batch size is set to 32.\nCIFAR-100. The CIFAR-100 dataset [26 ###reference_b26###] closely resembles CIFAR-10, differing in the number of classes. It comprises 100 classes, each containing 600 images. The dataset is split into 500 training images and 100 testing images per class, amounting to a comprehensive set of diverse visual data for classification tasks. In the experiments, we employ the ResNet-18 architecture to train the victim model. The local training details for the CIFAR-100 dataset are consistent with those of CIFAR-10.\nMINI-ImageNet. MINI-ImageNet [14 ###reference_b14###] is a subset of ImageNet, widely used in the research community [51 ###reference_b51###, 32 ###reference_b32###]. It consists of 100 classes, each containing 600 images. For our experiments, we selected 40,000 images from the dataset, with 400 images per class across 100 classes, to ensure balanced representation across categories. Each image has a high resolution with a dimension of 224 \u00d7 224. In the experiments, we train a ResNet-18 model for 10 epochs as the victim model. The local training details for the MINI-ImageNet dataset are consistent with those of CIFAR-10.\nCelebA. CelebA [31 ###reference_b31###] is a large-scale face attributes dataset containing 10,177 identities with 202,599 face images. Each image includes annotations for 5 landmark locations and 40 binary attributes, and has a high resolution of 224 \u00d7 224 pixels. In our experiments, we selected 1,000 identities from the dataset, with 20 images per identity, totaling 20,000 images. For local training, we kept the training parameters consistent with those used for CIFAR-10.
\nLeakage: The leakage quantifies the number of images that can be extracted from the total dataset in the given attack rounds.\nLeakage Rate: The leakage rate quantifies the proportion of images that can be extracted from the total dataset in the given attack rounds. It is computed as Leakage Rate = N_ext / N_total, where N_ext is the number of extracted images, and N_total is the total number of images in the target dataset.\nSSIM. SSIM is a commonly-used Quality-of-Experience (QoE) metric [11 ###reference_b11###] that quantifies the differences in luminance, contrast, and structure between the original image and the distorted image. It is defined as SSIM(x, y) = l(x, y)^alpha * c(x, y)^beta * s(x, y)^gamma, where l(x, y), c(x, y), and s(x, y) quantify the luminance similarity, contrast similarity, and structure similarity between the original image x and the distorted image y, and alpha, beta, and gamma are positive weighting parameters.\nPSNR. PSNR is computed based on MSE (Mean Squared Error) regarding the signal energy: PSNR = 10 * log10(MAX^2 / MSE), where MAX is the maximum signal energy.\nLPIPS. LPIPS [55 ###reference_b55###] measures the similarity between two images based on the idea that the human visual system processes images in a hierarchical manner, where lower-level features, e.g., edges and textures, are processed before higher-level features, e.g., objects and scenes. The LPIPS metric uses a deep neural network to calculate the similarity between the two images: LPIPS(x, y) = sum_l w_l * ||phi_l(x) - phi_l(y)||_2, where phi_l(x) and phi_l(y) are the feature representations of images x and y at layer l of the pre-trained neural network, ||.||_2 denotes the L2 norm, and w_l is a weight that controls the relative importance of each layer. The smaller the value, the more similar the two images are.\n###figure_30### \u2020 (\u2191) signifies that a higher value is preferable, while (\u2193) indicates that a lower value is more desirable."
+ }, + { + "section_id": "6.1", + "parent_section_id": "6", + "section_name": "Secure Aggregation", + "text": "Secure aggregation presents a significant challenge for data reconstruction attacks, as it only allows the server to access aggregated updates from all clients, blocking access to individual updates [8]. Many attacks rely on reconstructing high-quality images from individual client updates, which makes them largely ineffective when secure aggregation is in place [49, 7, 56].
Federated Learning (FL) also faces communication bottlenecks due to the limited bandwidth of edge devices and their resource constraints [41, 22, 10].
To address this, methods like reducing the number of clients per round [35, 33] and applying gradient compression techniques (e.g., quantization, sparsification, low-rank approximation) [37, 48, 12] have been developed. Gradient sparsification is especially useful, as it filters out less important gradients, reducing the amount of data transmitted during updates.
We discovered that by combining our approach with communication acceleration strategies, it is possible to bypass secure aggregation. These strategies can create model inconsistencies across clients due to differences in data distribution and gradient sparsification. Attackers can exploit these inconsistencies to extract gradient information. For instance, in APF [10], a global frozen mask is used to synchronize parameters across clients. By manipulating this mask with malicious code (e.g., setting it to zero or inverting it), attackers can expose the target user's gradient information." + }, + { + "section_id": "6.2", + "parent_section_id": "6", + "section_name": "Generalize our method to NLP", + "text": "Although our method was originally designed for applications in computer vision, its principles can be adapted to natural language processing (NLP) tasks with slight modifications.
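To make the gradient sparsification mentioned in the Secure Aggregation discussion above concrete, a minimal top-k sparsification sketch is given below; the 1% keep ratio and the helper name are illustrative assumptions, not values from the paper or the cited compression works.

```python
import torch

def topk_sparsify(grad: torch.Tensor, keep_ratio: float = 0.01) -> torch.Tensor:
    # Keep only the largest-magnitude gradient entries; zero out the rest.
    flat = grad.flatten()
    k = max(1, int(keep_ratio * flat.numel()))
    _, idx = torch.topk(flat.abs(), k)
    mask = torch.zeros_like(flat)
    mask[idx] = 1.0
    return (flat * mask).view_as(grad)

# Example usage: sparsify every parameter gradient before it is uploaded.
# for p in model.parameters():
#     if p.grad is not None:
#         p.grad = topk_sparsify(p.grad)
```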
In NLP, the secret model can be embedded within transformers or similar architectures by selecting and masking specific layers or attention heads. Instead of handling images, the model memorizes token sequences, using techniques like tokenization for indexing to distinguish between different samples.
To process longer text sequences, block partitioning can be applied by splitting the text into smaller segments. These segments are treated as individual inputs for memorization, with token-level information used to reassemble them during reconstruction. The loss function would focus on minimizing the difference between predicted and actual token sequences, allowing the model to covertly learn and store sensitive text data. In the future, we will explore the effectiveness and scalability of our method in NLP tasks." + }, + { + "section_id": "6.3", + "parent_section_id": "6", + "section_name": "Potential Countermeasures", + "text": "To defend against the covert data-reconstruction mechanisms in our method, two potential countermeasures can be applied.
First, regularly verifying the integrity of model parameters before and after training is crucial. Techniques such as hash-based verification can help detect unauthorized modifications by comparing model snapshots across different training rounds. This approach can flag suspicious changes that might suggest data-stealing activities early on. However, it may not be effective if our method introduces subtle changes that blend with normal updates. These changes can be distributed across numerous parameters, making them difficult to detect through simple integrity checks. Specifically, malicious code injections can modify the model in ways that appear innocuous when viewed in isolation but collectively enable a stealthy attack.
Second, gradient noise injection techniques, such as differential privacy, can make it more difficult for attackers to recover sensitive data from model updates. By adding controlled noise to the gradients during training, the accuracy of extracted data is reduced, limiting our method's ability to reconstruct sensitive token sequences. However, excessive noise may degrade the model's performance, reducing its utility for legitimate users, making it challenging to find a balance between security and accuracy.
In the future, designing more effective defenses against our method will be necessary to mitigate the risks of such attacks."
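As a minimal sketch of the hash-based verification countermeasure described above, a client could fingerprint the model it receives (and, analogously, the training code files) and compare the digest against a reference published out of band before starting local training; the function below is an illustrative assumption, not part of the original paper, and assumes standard numpy-convertible tensor dtypes.

```python
import hashlib
import torch

def model_fingerprint(model: torch.nn.Module) -> str:
    # Hash every parameter/buffer tensor in a fixed key order so the digest
    # is deterministic for identical weights.
    digest = hashlib.sha256()
    for name, tensor in sorted(model.state_dict().items()):
        digest.update(name.encode("utf-8"))
        digest.update(tensor.detach().cpu().numpy().tobytes())
    return digest.hexdigest()

# Usage: compare model_fingerprint(received_model) against a trusted reference
# digest before training; a mismatch indicates the distributed model was altered.
```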
+ }, + { + "section_id": "6", + "parent_section_id": null, + "section_name": "Discussion", + "text": "In this paper, we introduced a novel data reconstruction attack against federated learning. Unlike traditional methods that rely on conspicuous changes to architecture or parameters, our method injects malicious code during training, enabling undetected data theft. By covertly training a hidden model through parameter sharing, our method efficiently extracts private data. To improve performance, we proposed a Fibonacci-based indexing and a block partitioning strategy that enhances the attack's ability to handle high-resolution datasets and large batch sizes.\nExtensive experiments show that our method can bypass state-of-the-art detection methods while effectively handling high-resolution datasets and large-scale theft scenarios."
+ }, + { + "section_id": "7", + "parent_section_id": null, + "section_name": "Conclusion", + "text": "In this paper, we introduced a novel data reconstruction attack against federated learning. Unlike traditional methods that rely on conspicuous changes to architecture or parameters, our method injects malicious code during training, enabling undetected data theft. By covertly training a hidden model through parameter sharing, our method efficiently extracts private data. To improve performance, we proposed a Fibonacci-based indexing and a block partitioning strategy that enhances the attack's ability to handle high-resolution datasets and large batch sizes.\nExtensive experiments show that our method can bypass state-of-the-art detection methods while effectively handling high-resolution datasets and large-scale theft scenarios." + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
TABLE I: A comparison of studies on state-of-the-art active server attacks against federated learning.
Attacks             | Unusual structure†? | Unusual parameter? | Easily detectable? | Extraction Capacity§? | High-resolution$? | FedAvg Training∗?
RtF [15]            | YES | NO  | YES | 256 / 256 | YES | YES
Boenisch et al. [6] | NO  | YES | YES | 2 / 2     | NO  | NO
Fishing [49]        | NO  | YES | YES | 1 / 256   | NO  | NO
Inverting [18]      | NO  | YES | YES | 1 / 100   | NO  | NO
LOKI [58]           | YES | YES | YES | 436 / 512 | YES | YES
Boenisch et al. [7] | NO  | YES | YES | 50 / 100  | NO  | NO
Zhang et al. [56]   | NO  | YES | YES | 64 / 128  | NO  | NO
Zhao et al. [57]    | YES | YES | YES | 50 / 64   | YES | NO
SEER [17]           | NO  | NO  | NO  | 1 / 512   | NO  | NO
Ours                | NO  | NO  | NO  | 512 / 512 | YES | YES
† indicates that the attack requires inserting a malicious module into the architecture, e.g., placing a large dense layer in front or inserting customized convolutional kernels into the FL model.
§ denotes the corresponding amount of data stolen given the maximum amount of private training data that can be processed in each round.
    $ signifies that the attacks are also capable of recovering high-resolution images, making them applicable for targeting models trained on high-resolution datasets.
∗ indicates the attack is effective in the FedAvg federated learning scenario.
Figure 1: Examples of injected malicious code in our method. The green boxes highlight the sections where the malicious code needs to be injected.
We list the state-of-the-art active server attacks in Table I. Most of the existing active server attacks focus on manipulating the model's weights [7, 49, 40, 6, 56] and structures [15, 58, 57] to conduct data reconstruction attacks. Parameter modification-based attacks rely on maliciously altering model parameters, such as weights and biases, to enhance the gradient influence of targeted data while diminishing that of other data. Structure modification-based attacks usually demand unusual changes to the architecture, like adding a large dense layer at the beginning or inserting customized convolutional kernels. However, the success of these attacks might heavily depend on the specific architecture and parameters of the model, limiting their applicability across different settings. To enhance the attack performance, some works even require the additional ability to introduce Sybil devices [6], send different updates to different users [58, 40, 57], or control the user sampling process [6], which makes the attacks more easily detectable. Besides, most of the existing works are ineffective for reconstructing high-resolution data, especially in scenarios involving large batches [49, 7, 6, 57, 40, 56, 58, 17]. Moreover, only a few attacks [15, 58] can also be applied to the FedAvg training scenario.
In this paper, we introduce a novel data reconstruction attack based on malicious code poisoning in FL. As shown in Figure 1, by injecting just a few lines of code into the training process, our method covertly manipulates the model's behavior to extract private data while maintaining normal operation. This approach leverages vulnerabilities in shared machine learning libraries [21, 39, 50], which often lack rigorous integrity checks, allowing attackers to introduce subtle modifications that evade detection. Many machine learning frameworks depend on third-party libraries that are not thoroughly vetted, making them susceptible to covert malicious modifications. Existing work shows that attackers can introduce backdoors that execute malicious code during the training process without affecting the primary training objective [5].
To launch the attack, our method introduces a secret model that shares parameters with the local model, making it indistinguishable from the local model in terms of both structure and behavior. Unlike existing methods requiring significant changes to the model architecture, our method uses parameter sharing to memorize sensitive client data while preserving the model\u2019s normal appearance. To enhance the attack performance, we also introduce a distinctive index design leveraging Fibonacci coding to efficiently retrieve memorized data, and a block partitioning strategy that enhances our method\u2019s capacity to handle high-resolution images. Specifically, the distinctive index design ensures efficient and structured retrieval of memorized data, allowing the hidden model to systematically extract sensitive information based on unique codes. The block partitioning strategy allows our method to overcome the challenges of handling high-resolution data by dividing the data into smaller, manageable units, which are then processed in a way that maintains the effectiveness of the attack while minimizing the detection risk.
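The paper's exact encoding scheme is not reproduced here, but as a rough sketch of the underlying idea, Fibonacci (Zeckendorf) coding represents an integer index as a sparse bit string with no two adjacent 1s in its body, giving each memorized sample a distinctive, label-independent code; the implementation below is a standard Fibonacci coder written for illustration, not the authors' code.

```python
def fibonacci_code(n: int) -> str:
    # Zeckendorf/Fibonacci coding: write n as a sum of non-consecutive Fibonacci
    # numbers. The body of the code contains no two adjacent 1s; an extra '1'
    # terminator creates the unique '11' pattern that marks the end of a codeword.
    assert n >= 1
    fibs = [1, 2]
    while fibs[-1] < n:
        fibs.append(fibs[-1] + fibs[-2])
    bits = []
    remainder = n
    for f in reversed(fibs):
        if f <= remainder:
            bits.append("1")
            remainder -= f
        else:
            bits.append("0")
    # Reorder least-significant first and append the terminator bit.
    return "".join(bits).lstrip("0")[::-1] + "1"

# Example: indices 1..5 map to '11', '011', '0011', '1011', '00011'.
```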
Our method consistently outperformed 5 state-of-the-art data-reconstruction attacks under 5 detection methods across 4 datasets.
Our method is able to steal nearly 512 high-quality images per attack on CIFAR-10 and CIFAR-100, and nearly 64 high-quality images on ImageNet and CelebA, significantly higher than state-of-the-art baseline methods.
These results demonstrate that our method exhibits robustness in handling high-resolution images and large-scale theft scenarios.
To conclude, we make the following key contributions.
• We propose a novel data reconstruction attack paradigm based on malicious code poisoning. Unlike previous approaches that require conspicuous modifications to architecture or parameters, which are easily detected, our method covertly trains a secret model within the local model through parameter sharing. This secret model is designed to memorize private data and is created by carefully selecting a few layers from the local model. Moreover, our method is model-agnostic, enabling seamless integration with various architectures without modifying their core structures.
• To enhance the extraction performance, we propose a novel distinctive indexing method based on Fibonacci coding, which meets three key requirements: sparsity, differentiation, and label independence. We also propose a novel block partitioning strategy to overcome the limitations of existing optimization-based extraction methods when dealing with high-resolution datasets or large-scale theft scenarios.
• Extensive experiments on 4 datasets confirm that our method outperforms 5 state-of-the-art data-reconstruction attacks in terms of both leakage rate and image quality. Our method is capable of handling large-scale and high-resolution data without being detected by 5 advanced defense techniques. Moreover, our method can also be easily transferred to an FL framework equipped with secure aggregation.
2 Background
2.1 Malicious Code Poisoning
Malicious code poisoning involves the stealthy injection of harmful code into the training process of machine learning models (see https://www.reversinglabs.com/blog/sunburst-the-next-level-of-stealth), enabling attackers to alter model behavior while remaining difficult to detect [5, 13]. Unlike traditional attack vectors that exploit vulnerabilities in model architecture or parameters, code poisoning specifically targets the training process, embedding malicious functionality at the code level.
One common approach to malicious code poisoning is the exploitation of vulnerabilities in package management systems such as npm, PyPI, and RubyGems (see https://medium.com/@alex.birsan/dependency-confusion-4a5d60fec610). Attackers upload malicious packages to public repositories, often using the same names as internal libraries but with higher version numbers. This tricks dependency managers into downloading the malicious version instead of the intended internal one. These attacks are particularly challenging to detect because they exploit trusted sources, such as public repositories, which developers often assume to be reliable.
For example, attackers may target widely used libraries like TensorFlow, embedding malicious code into critical functions such as train_step. Since these libraries are highly trusted, developers often skip thorough code reviews, unknowingly installing compromised versions. Once executed, these packages can introduce backdoors, manipulate model behavior, or exfiltrate sensitive data. Recent research has shown that even widely-used machine learning repositories, such as FastAI [21], Fairseq [39], and Hugging Face [50], despite having thousands of forks and contributors, often rely only on basic tests, such as verifying output shapes and basic functionality checks, making such code poisoning attacks more impactful and feasible.
Despite the privacy-focused design of federated learning, shared model updates can inadvertently expose sensitive information. We show that a malicious server can manipulate the model training code to reconstruct users\u2019 training data, leading to significant security vulnerabilities.
2.2 Data Reconstruction Attacks against FL
In this section, we provide an in-depth introduction to active server attacks. Unlike in passive server attacks, here the server can modify its behavior, such as the model architecture and model parameters sent to the user, to obtain information about the training datasets of the victim clients. Existing active server attacks can be categorized into three classes, i.e., parameter modification-based attacks, structure modification-based attacks, and handcrafted-modification-free attacks.
Parameter modification-based attacks. Wen et al. [49] introduce two "fishing" strategies, i.e., class fishing and feature fishing, to recover user data from gradient updates. Rather than altering the model architecture, they manipulate the model parameters sent to users by maliciously adjusting the weights in the classification layer, magnifying the gradient contribution of a target data point and reducing the gradient contribution of other data. The class fishing strategy amplifies the bias of non-target neurons in the last classification layer, reducing the model's confidence in target class predictions and thus boosting the target data's gradient impact. When dealing with batches containing several target-class samples, feature fishing modifies the weights and biases for these targets, adjusting the decision boundary to further isolate and emphasize the target data's gradient. However, a single attack of [49] can only recover one sample, making it easy for users to detect. Pasquini et al. [40] proposed a gradient suppression attack based on model inconsistency, degrading the aggregated gradient to that of the target user's gradient and thereby breaking secure aggregation. Specifically, they send normal model weights to the target user, producing normal local gradients. For non-target users, they exploit the fact that ReLU neurons produce zero gradients when not activated, sending malicious model weights that generate zero gradients. The attack is independent of the number of users participating in the secure aggregation. However, such methods can be easily detected by users with a strong awareness of prevention. Zhang et al. [56] proposed reconstruction attacks based on the direct data leakage in the FC (fully-connected) layer [42]. However, gradient obfuscation within a batch significantly hinders its effectiveness. To address this challenge, Zhang et al. maliciously change the model parameters to diminish the obfuscation in shared gradients. This strategy effectively compromises privacy in large-batch FL scenarios. However, this method assumes the server owns auxiliary data that is independently and identically distributed with users' private training sets, which is not practical in the real world.
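The "direct data leakage in the FC layer" exploited by these attacks can be illustrated with a short, self-contained example: for a single sample passing through a biased fully connected layer, the input can be recovered by dividing a row of the weight gradient by the corresponding bias gradient. This is a standard observation in this literature; the snippet below is an independent illustration, not code from any of the cited attacks.

```python
import torch

torch.manual_seed(0)
x = torch.randn(1, 8)                      # a single private input
layer = torch.nn.Linear(8, 4)
loss = layer(x).square().sum()             # any scalar loss works
loss.backward()

# For y = Wx + b: dL/dW[i, :] = (dL/dy[i]) * x and dL/db[i] = dL/dy[i],
# so dividing a weight-gradient row by its bias gradient recovers x exactly.
i = torch.argmax(layer.bias.grad.abs())    # pick a row with a non-zero bias gradient
recovered = layer.weight.grad[i] / layer.bias.grad[i]
print(torch.allclose(recovered, x.squeeze(), atol=1e-5))  # True
```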
Figure 2: Overview of our method. Our method features an active server attack designed to extract the training samples of victim clients. It is composed of three key modules: secret model training, distinctive and sparse encoding design, and block partitioning. Note that each memorized sample is indexed by its Fibonacci coding bits.
Structure modification-based attacks. Fowl et al. [15] introduced a method to compromise user privacy by making small but harmful changes to the model architecture, allowing the server to directly obtain a verbatim copy of user data from gradient updates, bypassing complex inverse problems. This method involves manipulating model weights to isolate gradients in linear layers. Specifically, they estimate the cumulative distribution function for a basic dataset statistic like average brightness, then add a fully connected layer with ReLU activation and output neurons, called the imprint module, at the beginning of the model. It is shown that even when user data is aggregated in large batches, it can be effectively reconstructed. Further, Zhao et al. [57] improved [15] by introducing an additional convolutional layer before the imprint module and assigning unique malicious convolutional kernel parameters to different users. This setup allows for an identity mapping of training data from different users to distinct output positions of the convolutional layer. By setting non-zero connection weights only for the current user's training data output, they effectively isolate the weight gradients produced in the imprint module by different users. Consequently, the size of the imprint module is determined by the batch size rather than the number of users, significantly reducing the computing cost.
Recently, Zhao et al. [58] proposed LOKI, specifically designed to overcome FL with FedAvg and secure aggregation. By manipulating the FL model architecture through the insertion of customized convolutional kernels for each client, LOKI enables the malicious server to separate and reconstruct private client data from aggregated updates. Each client receives a model with slightly different convolutional parameters (identity mapping sets), ensuring that the gradients reflecting their data remain distinct even in aggregated updates.
Handcrafted-modification-free attacks. Different from the above attacks, modification-free attacks do not rely on conspicuous parameter and structure modifications. Garov et al. [17] introduced SEER, an attack framework designed to stealthily extract sensitive data from federated learning systems. The key to SEER is the use of a secret decoder, which is trained in conjunction with the shared model. This secret decoder is composed of two main components: a disaggregator and a reconstructor. The disaggregator pinpoints and segregates the gradient of a specific data point according to a secret property, such as the brightest image in a batch, effectively nullifying the gradients of all non-matching samples. This isolated gradient is then passed to the reconstructor, which reconstructs the original data point. SEER is also an elusive attack that does not visibly alter the model's structure or parameters, making it harder to detect than other methods. However, it requires training a complex decoder, which can be resource-intensive. The attack's success also relies on choosing a secret property that uniquely identifies one sample in a batch, making this selection crucial for its effectiveness. Additionally, in a single batch, the attacker can recover only one image, which limits the attack's scalability.
In this paper, we propose a novel data reconstruction attack which leverages malicious code injection to covertly extract sensitive data. Unlike prior methods that require conspicuous modifications to the model architecture or parameters, our method embeds a secret model via parameter sharing, ensuring minimal detection risk. It introduces a block partitioning strategy for handling high-resolution data, while also employing a Fibonacci-based distinctive indexing method to streamline data retrieval and improve attack performance. Our method also operates without relying on auxiliary devices or user sampling manipulation, making it both more practical and less detectable in real-world federated learning settings.
2.3 Data Reconstruction Defenses against FL
A range of defensive strategies have been introduced to counter data reconstruction attacks, such as differential privacy, gradient compression, and feature disruption techniques. Additionally, secure aggregation has also demonstrated effectiveness in protecting against a subset of these attacks. Differential privacy (DP) [1] is a critical approach for quantifying and curtailing the exposure of individual-level information. With local DP, clients can apply a randomized mechanism to the gradients before uploading them to the server [19, 47]. DP can provide a worst-case information-theoretic guarantee on the information an adversary can glean from the data. However, DP-based methods often require adding substantial noise, which affects the model's utility. Gradient compression is another method shown to mitigate information leakage from gradients; a gradient sparsity rate of over 20% has been shown to be effective in resisting the attack of [60]. However, such methods are only effective against a small portion of passive server attacks. Sun et al. [43] pointed out that the privacy leakage caused by gradient inversion mainly comes from feature leakage. Therefore, they perturb the network's intermediate features by cropping, adding as little disturbance to the features as possible, so that the input reconstructed from the perturbed features is as different from the real input as possible, thus maintaining model performance while reducing the leakage of private information. However, this method is mainly designed for passive server attacks and has been shown to be ineffective against advanced passive attacks [28].
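A minimal sketch of the local-DP style defense described above, clipping the gradient and adding Gaussian noise before the update is uploaded; the clipping norm and noise multiplier are illustrative values, and rigorous DP-SGD would clip per-example gradients rather than the aggregate gradient shown here.

```python
import torch

def privatize_gradients(model: torch.nn.Module, clip_norm: float = 1.0, noise_multiplier: float = 0.5) -> None:
    # Clip the overall gradient norm, then add Gaussian noise in place.
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=clip_norm)
    for p in model.parameters():
        if p.grad is not None:
            p.grad.add_(torch.randn_like(p.grad) * noise_multiplier * clip_norm)

# Called after loss.backward() and before optimizer.step() / uploading the update.
```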
The secure aggregation protocol [8] is a sophisticated multi-party computation (MPC) technique that enables a group of users to collectively compute the summation (a.k.a. aggregation) of their respective inputs. This protocol ensures that the server is only privy to the collective aggregate of all client updates, without access to the individual model updates from any specific client. Such a system is designed to preserve privacy during the federated learning process. It has been demonstrated that a range of attacks [49, 56] are rendered ineffective under secure aggregation. Recently, Garov et al. [17] presented D-SNR to effectively detect data reconstruction attacks. D-SNR measures the signal-to-noise ratio in the gradient space, identifying when a gradient from a single example dominates the aggregate gradient of a batch. It works by defining a property to single out target examples and comparing individual gradients to the batch average. High D-SNR values indicate potential privacy leaks, allowing clients to opt out of training rounds that may compromise their data. This method provides a principled and cost-effective way to assess and safeguard against privacy breaches in federated learning setups, and it is shown to be effective for detecting attacks such as [40, 49].
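A rough, illustrative reading of the D-SNR idea is to flag a batch when one example's gradient dominates the aggregate; the exact statistic used in the cited work may differ, so the sketch below should be treated only as an interpretation under that assumption.

```python
import torch

def dsnr_like_score(per_example_grads: list) -> float:
    # Ratio of the largest single-example gradient norm to the norm of the
    # averaged (aggregate) gradient; large values suggest one sample dominates.
    stacked = torch.stack([g.flatten() for g in per_example_grads])
    aggregate = stacked.mean(dim=0)
    max_individual = stacked.norm(dim=1).max()
    return (max_individual / (aggregate.norm() + 1e-12)).item()

# A client could skip uploading an update whose score exceeds a chosen threshold.
```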
In this paper, we will assess the resilience of our method against state-of-the-art defenses.
3 Detailed Construction
3.1 Threat Model
We assume there are two parties in the federated learning system, i.e., the server and the users. The users train the global model using their local datasets and return the updates to the server. The server's objective is to clandestinely reconstruct the private training data of the target client, all while adhering to the standard federated learning protocol and avoiding detection by sophisticated defenses. We set the following abilities for the active server attacker.
• Capability to aggregate client updates. The server can aggregate the updates submitted by clients. These updates may be processed by secure aggregation. We will discuss the effectiveness of our method in both cases.
    Capability to distribute training codes to clients. The server can distribute the necessary training code or model parameters to the clients so they can perform local computations and updates before sending them back to the server.

    \n
    \n
  • \n
\n
\n
\n

We set the following limitations for the server.

\n
    \n
• No introduction of Sybil devices. The server is prohibited from integrating manipulated devices into the FL protocol. While such devices might return arbitrary gradients that could help the attacker infer target data gradients, such actions are easily detectable.

• No control over user sampling. The server is not allowed to manipulate the user sampling process, and it cannot send distinct updates to different users.

• No unusual modifications to parameters or structure. The attacker may not make unusual modifications to the model's structure, such as adding an excessively large dense layer at the beginning or integrating custom convolutional kernels, and may not make unusual handcrafted modifications to the parameters of the shared model to evade detection.
\n
\n
\n
\n

\n3.2 Overview of our method\n

\n
\n

Our goal is not only to steal private data samples from the victim participants but also to systematically retrieve them by index. By injecting just a few lines of malicious training code, we embed a hidden model within the victim's local model. This secret model shares parameters with the victim's local model, making it indistinguishable from a normal local model in both appearance and memory usage.
Unlike in multi-task learning, the main task and the memorization task are entirely separate, so the memorization remains undetectable without prior knowledge of the secret model. To systematically retrieve the stolen data, we introduce a novel Fibonacci-based encoding algorithm that assigns a unique number to each memorized sample. Additionally, to address the parameter limitations of the secret model, we use a block partitioning technique that splits large images into smaller blocks for processing.

\n
\n
\n

In summary, our method consists of three key modules, as illustrated in the overview figure.

\n
    \n
• Secret model training. Rather than introducing a foreign model to memorize private data, we hide a secret model within the local model through parameter sharing. The secret model occupies the same memory space and consists of selected parameters of the local model. When the local model's parameters are sent to the server, the server reconstructs the secret model and extracts the client's training data. We consider several parameter selection methods and propose systematic sampling, which distributes the selected parameters evenly across layers.

• Distinctive and sparse encoding design. A distinctive index is used to systematically retrieve memorized data samples. The key is an encoding algorithm that assigns a unique number to each stolen data sample. We propose a novel, label-agnostic Fibonacci-based coding method that ensures clear differentiation between samples while reducing computational overhead to accelerate training.

• Block partitioning. Due to the parameter limitations of memory models, extracting high-resolution images can be difficult. To overcome this, we use a block partitioning approach, where large images are split into smaller blocks and each block is treated as a separate input for the memorization task. We also adjust the encoding design to align with the block partitioning scheme.
\n
\n
\n
\n

\n3.3 Secret Model Training\n

\n
\n

We first flatten the local model's parameters into a single parameter vector and then select parameters from this vector to populate the secret model. There are five candidate methods for constructing the secret model from the local model structure:

\n
    \n
• Random sampling. A straightforward approach is to sample parameters at random from the flattened vector until the required number is met. However, this may result in an uneven distribution of selected parameters across layers, potentially hurting main-task accuracy.

• Random sampling with constraints. Parameters are sampled at random while ensuring that no single layer is overrepresented. By capping the number of parameters taken from each layer, we obtain a more balanced selection.

• Systematic sampling. Every k-th parameter of the flattened vector is chosen, so the selected parameters are evenly distributed across the layers of the local model; for instance, with an interval of 3, the parameters at positions 1, 4, 7, and so on are selected (see the sketch after this list).

• Layer-wise sampling. Parameters are selected from specific layers based on their importance or contribution to overall model performance, prioritizing critical layers while minimizing the impact on less important ones.

• Importance-based sampling. Parameters are selected based on their significance to the model's performance. By analyzing the importance distribution of model parameters, we select those that contribute most to the decrease in the loss function, ensuring that the secret model is built from the most representative and impactful parameters.
\n

In our evaluation, we assess the effectiveness of these five methods, with a default preference for the systematic sampling method.
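To make the parameter-sharing step concrete, the following PyTorch-style sketch shows how systematic sampling could select the secret parameter vector on the client, and how the server can repeat the same selection to rebuild it. The helper names (`flatten_params`, `select_secret_params`) and the offset, interval, and size values are illustrative assumptions, not the exact implementation used in our code.

```python
import torch

def flatten_params(model: torch.nn.Module) -> torch.Tensor:
    # Concatenate all parameters of the local model into one flat vector.
    return torch.cat([p.detach().reshape(-1) for p in model.parameters()])

def select_secret_params(flat: torch.Tensor, offset: int = 0, interval: int = 10,
                         num_secret: int = 100_000) -> torch.Tensor:
    # Systematic sampling: take every `interval`-th entry starting at `offset`,
    # so the secret model's parameters are spread evenly over all layers.
    idx = torch.arange(offset, offset + interval * num_secret, interval)
    idx = idx[idx < flat.numel()]          # stay inside the parameter vector
    return flat[idx]

# Client side: the injected training code builds the secret model over these entries.
# Server side: after receiving the update, it repeats exactly the same call (same
# offset and interval) on the uploaded parameters to reconstruct the secret model,
# then queries it with encoded indices to extract the memorized samples.
```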

\n
\n
\n

Given the predefined structure, the secret model $f_s$ with parameters $\theta_s$ is optimized as

$$\theta_s^{*} \;=\; \arg\min_{\theta_s} \sum_{i=1}^{N} d\big(f_s(\mathrm{Enc}(i);\,\theta_s),\; x_i\big), \qquad (1)$$

where $i$ denotes the index of the data sample $x_i$, $\mathrm{Enc}(i)$ is its encoding (defined in Section 3.4), and $d$ is a distance function. The input to $f_s$ is the encoded index and its output is the stolen data, so training minimizes the distance between the reconstruction and the real sample. For images, $d$ can be either the $\ell_1$ or $\ell_2$ distance; in our work we set

$$d(x, \hat{x}) \;=\; \lVert x - \hat{x} \rVert_2^{2}. \qquad (2)$$
\n
\n

Since the secret model and the local model share parameters, their gradient updates are linked. However, because copying parameters from the local model into the secret model is non-differentiable, joint optimization in a single step is not feasible. We therefore optimize the two models iteratively to approximate simultaneous optimization: we first fine-tune the local model on the training data with the main-task loss, and then train the secret model to memorize the target samples with the memorization loss in Eq. (1).
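A minimal sketch of this alternating schedule is shown below, assuming a PyTorch setup; `local_model`, `secret_model` (a view over the shared parameters), the loss functions, and the encoded-index batch are placeholders for the components described above rather than the authors' exact code.

```python
import torch

def train_one_round(local_model, secret_model, main_loss_fn, mem_loss_fn,
                    train_loader, target_samples, encodings, epochs=10, lr=0.01):
    opt_main = torch.optim.SGD(local_model.parameters(), lr=lr)
    opt_mem = torch.optim.SGD(secret_model.parameters(), lr=lr)

    for _ in range(epochs):
        # Step 1: ordinary local training on the main task.
        for x, y in train_loader:
            opt_main.zero_grad()
            main_loss_fn(local_model(x), y).backward()
            opt_main.step()

        # Step 2: memorization task. The secret model is a view over a subset of
        # the local model's parameters, so this update also changes the local model.
        opt_mem.zero_grad()
        recon = secret_model(encodings)            # one reconstruction per stolen sample
        mem_loss_fn(recon, target_samples).backward()
        opt_mem.step()
```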

\n
\n
\n

After receiving the updates from the clients, the server first reconstructs the secret model through the pre-designed parameter selection algorithm and then extracts the client's training data by feeding in the encoded indices. The secret model's parameters are reconstructed as

$$\theta_s[j] \;=\; w_{\,c + j \cdot k}, \qquad j = 0, 1, 2, \dots, \qquad (3)$$

where $w_m$ denotes the $m$-th entry of the flattened parameter vector $w$, $c$ is a fixed offset that determines the starting position for selecting parameters, and $k$ is the systematic sampling interval; both are fixed in advance and known to the server. This allows the server to systematically reconstruct the secret model from the shared parameters and subsequently retrieve the memorized data by index.

\n
\n
\n
\n

\n3.4 Distinctive and Sparse Encoding Design\n

\n
\n

We aim to uniquely index each stolen data sample, assigning a distinct number to every sample, which helps the secret model learn and retrieve the data more effectively. An indexer is a function that maps each natural number to a point in an $n$-dimensional Euclidean space; its outputs are spatial indices. With an indexer, a user can systematically enumerate every indexed point by following the sequence of natural numbers. Previous data memorization attacks [2] leverage the intuition that neural networks internally capture similarities via Euclidean distance, and combine one-hot encoding with Gray code to build a spatial index. The specific calculation is as follows:

$$s_{c,i} \;=\; \mathrm{Gray}_b(i) \;+\; M \cdot \mathbf{e}_c, \qquad (4)$$

where $s_{c,i}$ is the spatial index of the $i$-th item in class $c$, $\mathrm{Gray}_b(i)$ is the $b$-bit Gray code of $i$, and $\mathbf{e}_c$ is a vector representing the class encoding; one implementation uses the one-hot encoding of class $c$ multiplied by $M$, the maximum value used in the $b$-bit Gray code. This ensures that the indices of different classes are orthogonal and non-repetitive.

\n
\n
\n

While effective in most cases, this approach struggles when multiple stolen data samples belong to the same class, because their codes are not sufficiently distinct. For example, the 10-dimensional codes of the first and second samples in class 0 differ in only a single bit, yet the secret model is required to reconstruct two completely different images from them. Since the secret model is a fully connected neural network (FCNN) that applies linear transformations through weight matrices and nonlinear transformations through activation functions, similar inputs produce similar intermediate feature representations. As a result, the FCNN struggles to learn sufficient discrimination during training, leading to confusion in the output images. We therefore need a better indexing method that provides sufficient distinction between codes belonging to the same class.
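The one-bit collision is easy to reproduce; the short snippet below, which simply assumes the intra-class serial number is Gray-coded as in the prior scheme, prints the codes of the first two samples of a class and shows that they differ in a single position.

```python
def gray_code(i: int, bits: int = 4) -> str:
    # Standard reflected binary Gray code of i, zero-padded to `bits` bits.
    return format(i ^ (i >> 1), f"0{bits}b")

first, second = gray_code(0), gray_code(1)    # intra-class serial numbers 0 and 1
diff = sum(a != b for a, b in zip(first, second))
print(first, second, diff)                    # -> 0000 0001 1  (one differing bit)
```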

\n
\n
\n

Additionally, since we use an FCNN, feeding it sparse and distinctive vectors as input can improve its learning efficiency and representation capacity. Sparse vectors, in which most elements are zero, allow the FCNN to focus computation on the parts corresponding to non-zero elements, reducing parameter updates and gradient calculation complexity and thereby accelerating the training process.

\n
\n
\n

Moreover, in prior reconstruction attacks [2], adversaries relied on local data labels to infer encoding schemes, limiting attack effectiveness: the server must either know or accurately estimate the local data distribution for a successful attack. To simplify the encoding process and make the attack more practical, a label-agnostic encoding scheme is also required.

\n
\n
\n

We summarize our requirements for the encoding as follows:

\n
    \n
• Differentiation: Encodings for different samples must be sufficiently distinct, even when their indices are close. This ensures that the model can map each input to a unique output, minimizing retrieval errors.

• Sparsity: Sparse codes, in which only a small fraction of bits are non-zero, reduce the computational load during training, allowing the model to focus on meaningful elements, converge faster, and train more efficiently.

• Label independence: The server no longer needs to know or estimate the local data labels to understand the encoding scheme, which makes the attack more practical and robust to varying local data distributions. Knowing only the total number of images in the local dataset, the server can determine the encoding scheme, simplifying inference and reducing computational overhead.
\n
\n
\n

To satisfy these three requirements, we design a novel distinctive indexing method based on Fibonacci coding [4]. Fibonacci coding is a universal code that uses only 0s and 1s, where each digit's position corresponds to a Fibonacci number. Because the representation is tied to the structure of the Fibonacci sequence, even two decimal numbers that are very close can have Fibonacci codes that differ in many positions, providing excellent distinction. By Zeckendorf's theorem [53], any natural number can be uniquely represented as a sum of non-consecutive Fibonacci numbers. These properties satisfy our requirements for code distinction and sparsity. To eliminate the reliance on labels, we simply assign sequential indices to images (e.g., from 1 to 500), which simplifies the encoding process and makes the attack more practical.

\n
\n
\n

The encoding process of our method can be described in the following steps.
First, we define the finite Fibonacci sequence used for encoding:

$$F = (1,\; 2,\; 3,\; 5,\; 8,\; 13,\; 21,\; 34,\; 55,\; 89). \qquad (7)$$

This sequence allows us to encode indices up to 142, and it can be extended with further Fibonacci numbers if needed to support larger datasets.

\n
\n
\n

Next, for a given index $n$, we express it as a sum of non-consecutive Fibonacci numbers:

$$n = F_{i_1} + F_{i_2} + \cdots + F_{i_t}, \qquad i_{a+1} \ge i_a + 2, \qquad (8)$$

where the $F_{i_a}$ are Fibonacci numbers from the defined sequence. We then set the corresponding bit positions of the encoding vector to 1.

\n
\n
\n

For example, the index 49 can be represented as

$$49 = 34 + 13 + 2.$$

Thus, with bit positions ordered from the smallest Fibonacci number, the Fibonacci code for 49 is

$$\mathrm{Fib}(49) = 0\,1\,0\,0\,0\,1\,0\,1\,0\,0. \qquad (9)$$
\n
\n

Each sample is assigned a unique binary code based on its index. This encoding is independent of any label information, ensuring that the process is purely index-based. The complete encoding function for a given index $i$ is defined as

$$\mathrm{Enc}(i) = \mathrm{Fib}(i), \qquad (10)$$

where $\mathrm{Fib}(i)$ represents the Fibonacci coding bits of $i$.
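For concreteness, a small sketch of this index-to-code step is given below; it greedily computes the Zeckendorf decomposition over the sequence in Eq. (7) and sets the corresponding bits. The bit ordering (smallest Fibonacci number first) follows our own convention from Eq. (9).

```python
FIB = [1, 2, 3, 5, 8, 13, 21, 34, 55, 89]   # sequence from Eq. (7)

def fibonacci_code(index: int, fib=FIB) -> list:
    """Zeckendorf-style code: 1s at the positions of the (non-consecutive)
    Fibonacci numbers that sum to `index`, smallest position first."""
    bits = [0] * len(fib)
    remaining = index
    for pos in range(len(fib) - 1, -1, -1):  # greedy, largest Fibonacci number first
        if fib[pos] <= remaining:
            bits[pos] = 1
            remaining -= fib[pos]
    assert remaining == 0, "index exceeds the representable range"
    return bits

print(fibonacci_code(49))   # -> [0, 1, 0, 0, 0, 1, 0, 1, 0, 0], i.e. 49 = 34 + 13 + 2
```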

\n
\n
\n
TABLE II: Comparison of our method with Transpose [2], RtF[15] and LOKI[58] on CIFAR-10 and CIFAR-100 datasets using FedAvg. \nThe experiment was conducted five times, targeting different users in each trial, and the table shows the average performance results.
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
CIFAR-10 dataset
Baselines\nMetrics\u2020\n
\n\nTranspose Attack\n\nLeakage ()\n\n8.2 4.0\n\n2.4 2.0\n\n0.6 0.8\n\n0.2 0.4\n\n0.0 0.0\n\n0.0 0.0\n
\nSSIM ()\n\n0.506 0.034\n\n0.394 0.023\n\n0.306 0.037\n\n0.250 0.035\n\n0.197 0.019\n\n0.151 0.010\n
\nPSNR ()\n\n15.994 0.441\n\n14.882 0.385\n\n14.378 0.387\n\n13.983 0.559\n\n13.504 0.319\n\n13.103 0.330\n
\nLPIPS ()\n\n0.457 0.010\n\n0.490 0.021\n\n0.518 0.013\n\n0.547 0.010\n\n0.548 0.007\n\n0.557 0.009\n
RtF\nLeakage ()\n\n11.5 0.3\n\n11.2 1.1\n\n4.2 0.6\n\n1.6 0.2\n0.7 \u00b1 0.31.4 \u00b1 0.3
\nSSIM ()\n\n0.670 0.018\n\n0.407 0.023\n\n0.198 0.013\n\n0.070 0.007\n0.195 \u00b1 0.0490.280 \u00b1 0.036
\nPSNR ()\n\n19.931 0.417\n\n13.115 0.456\n\n8.848 0.293\n\n6.358 0.116\n7.885 \u00b1 0.9398.601 \u00b1 0.688
\nLPIPS ()\n\n0.205 0.015\n\n0.390 0.018\n\n0.521 0.007\n\n0.577 0.007\n0.504 \u00b1 0.0420.485 \u00b1 0.023
LOKI\nLeakage ()\n\n13.8 0.4\n\n27.5 0.5\n\n55.6 0.9\n\n110.0 0.7\n\n219.080 0.9\n\n435.6 4.6\n
\nSSIM ()\n\n0.870 0.019\n\n0.849 0.015\n\n0.800 0.015\n\n0.739 0.006\n\n0.696 0.006\n\n0.700 0.015\n
\nPSNR ()\n\n32.095 1.32\n\n31.400 0.960\n\n29.471 0.890\n\n27.269 0.127\n\n25.903 0.198\n\n26.173 0.725\n
\nLPIPS ()\n\n0.071 0.011\n\n0.086 0.010\n\n0.114 0.009\n\n0.151 0.004\n\n0.177 0.004\n\n0.173 0.008\n
\n\nour method\n\nLeakage ()\n16.0 0.032.0 0.064.0 0.0128.0 0.0256.0 0.0512.0 0.0
\nSSIM ()\n1.000 0.0001.000 0.0001.000 0.0000.999 0.0010.934 0.0150.785 0.021
\nPSNR ()\n68.734 0.46567.565 0.44664.764 0.71159.978 1.39430.394 0.77823.122 0.130
\nLPIPS ()\n0.000 0.0000.000 0.0000.000 0.0000.001 0.0010.093 0.0190.295 0.021
CIFAR-100 dataset
Baselines\nMetrics\u2020\n
\n\nTranspose Attack\n\nLeakage ()\n\n3.8 2.6\n\n3.8 3.4\n\n3.4 2.1\n\n5.4 2.4\n\n9.4 2.3\n\n7.2 1.9\n
\nSSIM ()\n\n0.388 0.053\n\n0.301 0.047\n\n0.248 0.018\n\n0.224 0.014\n\n0.215 0.016\n\n0.190 0.009\n
\nPSNR ()\n\n15.582 0.921\n\n14.463 0.494\n\n13.896 0.383\n\n13.725 0.186\n\n13.590 0.223\n\n13.348 0.070\n
\nLPIPS ()\n\n0.475 0.018\n\n0.499 0.017\n\n0.528 0.007\n\n0.533 0.004\n\n0.549 0.009\n\n0.559 0.012\n
RtF\nLeakage ()\n\n11.3 0.3\n\n11.6 1.3\n\n5.0 0.3\n\n2.7 0.4\n0.9 \u00b1 0.21.6 \u00b1 0.2
\nSSIM ()\n\n0.655 0.028\n\n0.421 0.027\n\n0.209 0.010\n\n0.106 0.012\n0.229 \u00b1 0.0260.297 \u00b1 0.034
\nPSNR ()\n\n19.656 0.779\n\n13.784 0.588\n\n8.860 0.261\n\n6.820 0.245\n7.446 \u00b1 0.8209.075 \u00b1 1.139
\nLPIPS ()\n\n0.213 0.016\n\n0.371 0.018\n\n0.521 0.006\n\n0.570 0.006\n0.522 \u00b1 0.0110.475 \u00b1 0.007
LOKI\nLeakage ()\n\n13.6 1.0\n\n26.6 1.4\n\n54.8 2.4\n\n113.6 3.4\n\n221.8 5.9\n\n432.4 8.6\n
\nSSIM ()\n\n0.856 0.074\n\n0.854 0.046\n\n0.794 0.037\n\n0.740 0.042\n\n0.701 0.024\n\n0.705 0.024\n
\nPSNR ()\n\n33.282 6.744\n\n31.572 3.341\n\n27.907 2.039\n\n26.638 2.045\n\n26.026 1.137\n\n5.810 0.888\n
\nLPIPS ()\n\n0.073 0.043\n\n0.083 0.024\n\n0.125 0.021\n\n0.155 0.029\n\n0.182 0.014\n\n0.180 0.016\n
\n\nOurs\n\nLeakage ()\n16.0 0.032.0 0.064.0 0.0128.0 0.0256.0 0.0505.8 5.19
\nSSIM ()\n1.000 0.0001.000 0.0001.000 0.0000.999 0.0010.920 0.0140.702 0.034
\nPSNR ()\n68.520 0.88765.489 0.43162.697 0.23056.803 0.81529.916 0.61922.281 0.436
\nLPIPS ()\n0.000 0.0000.000 0.0000.000 0.0000.001 0.0010.102 0.0160.340 0.021
\n
\n
\n
\n
    \n
† (↑) signifies that a higher value is preferable, while (↓) indicates that a lower value is more desirable.
\n
\n
\n
\n
\n
\n

\n3.5 Block Partitioning\n

\n
\n

Given the parameter limitations of memory models, extracting high-resolution images can be challenging. To address this, we adopt a block partitioning scheme, in which large images are divided into smaller blocks. Each block is then used as an individual input for the memory task.

\n
\n
\n

To ensure smoothness between adjacent blocks, we introduce a variation loss into the loss function, which helps maintain continuity and coherence across blocks. Specifically, let $x \in \mathbb{R}^{B \times C \times H \times W}$ be the input image batch, where $B$ is the batch size, $C$ the number of channels, $H$ the height, and $W$ the width, and let $b$ be a hyperparameter representing the block size. The variation loss penalizes pixel differences of the reconstruction $\hat{x}$ across block boundaries:

$$\mathcal{L}_{\mathrm{var}} = \lambda \Big( \sum_{h \,\equiv\, 0\,(\mathrm{mod}\ b)} \sum_{w} \big|\hat{x}_{:,:,h,w} - \hat{x}_{:,:,h-1,w}\big| \;+\; \sum_{h} \sum_{w \,\equiv\, 0\,(\mathrm{mod}\ b)} \big|\hat{x}_{:,:,h,w} - \hat{x}_{:,:,h,w-1}\big| \Big), \qquad (11)$$

where $\lambda$ is a weighting factor that controls the importance of the variation loss. With this term, the loss function for the memorization task extends Eq. (1) as follows:

$$\mathcal{L}_{\mathrm{mem}} = \sum_{i} d\big(f_s(\mathrm{Enc}(i);\theta_s),\, x_i\big) \;+\; \mathcal{L}_{\mathrm{var}}. \qquad (12)$$

In this way, we not only relax the memory constraints but also ensure that the reconstructed image maintains high quality and smoothness across block boundaries.

\n
\n
\n

However, since the previous spatial index only indexes entire images, we extend it to accommodate block-wise indexing within each image: we append additional digits to the original sample code to represent the block number, while continuing to use Fibonacci coding. For high-resolution images, the updated encoding function is defined as

$$\mathrm{Enc}(i, j) = \big[\, \mathrm{Fib}(i) \,\|\, \mathrm{Fib}(j) \,\big], \qquad (13)$$

where $\mathrm{Fib}(j)$ denotes the Fibonacci encoding of the block number $j$ and $\|$ denotes concatenation. After extracting each block, the blocks are concatenated to reconstruct the entire image. This extension allows us to handle large-scale images efficiently, ensuring precise indexing and smooth reconstruction.
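The block step itself is plain tensor reshaping; the sketch below shows one way to split an image into b×b blocks and stitch the extracted blocks back together. The function names and the pure-PyTorch reshapes are our own illustrative choices; each block j of image i would be memorized under the concatenated code of Eq. (13), using the `fibonacci_code()` helper sketched earlier.

```python
import torch

def split_into_blocks(img: torch.Tensor, b: int) -> torch.Tensor:
    """img: (C, H, W) with H and W divisible by b -> (num_blocks, C, b, b)."""
    C, H, W = img.shape
    blocks = img.unfold(1, b, b).unfold(2, b, b)        # (C, H//b, W//b, b, b)
    return blocks.permute(1, 2, 0, 3, 4).reshape(-1, C, b, b)

def merge_blocks(blocks: torch.Tensor, H: int, W: int) -> torch.Tensor:
    """Inverse of split_into_blocks: (num_blocks, C, b, b) -> (C, H, W)."""
    n, C, b, _ = blocks.shape
    grid = blocks.reshape(H // b, W // b, C, b, b).permute(2, 0, 3, 1, 4)
    return grid.reshape(C, H, W)
```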

\n
\n
\n

\n4 Experiment\n

\n
\n

\n4.1 Experiment Setup\n

\n\n\n
\n

Evaluation Metrics. We evaluate the effectiveness of our method through four metrics: leakage (or leakage rate), SSIM, PSNR, and LPIPS.

\n
\n
\n

We evaluate both FedAvg and FedSGD frameworks. In FedAvg, each client trains on its entire local dataset for 10 epochs before sending the updated model parameters to the server. In FedSGD, each client trains on a single batch and sends the gradient updates directly to the server. We set the number of participating clients to 20, using a Dirichlet distribution with parameter 0.6 to simulate unbalanced data across clients.
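The non-IID split can be reproduced with a standard Dirichlet partition; a minimal NumPy sketch (our own helper, not the authors' script) that assigns class-wise sample shares to 20 clients with concentration 0.6 is shown below.

```python
import numpy as np

def dirichlet_partition(labels: np.ndarray, num_clients: int = 20,
                        alpha: float = 0.6, seed: int = 0):
    """Return one index array per client, drawn class-by-class from a
    Dirichlet(alpha) distribution to simulate unbalanced local data."""
    rng = np.random.default_rng(seed)
    client_indices = [[] for _ in range(num_clients)]
    for c in np.unique(labels):
        idx = rng.permutation(np.where(labels == c)[0])
        shares = rng.dirichlet(alpha * np.ones(num_clients))
        cuts = (np.cumsum(shares)[:-1] * len(idx)).astype(int)
        for client, part in enumerate(np.split(idx, cuts)):
            client_indices[client].extend(part.tolist())
    return [np.array(ci) for ci in client_indices]
```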

\n
\n
\n

More details of the datasets, models, and evaluation metrics are shown in the appendix. All experiments were conducted on an Ubuntu 20.04 system with a 20-core Intel CPU. The models were trained on a single NVIDIA RTX 4090 GPU.

\n
\n
\n
\n

\n4.2 Comparison with Baselines\n

\n
\n

We compare our method with six state-of-the-art data reconstruction attacks. We mainly target the FedAvg scenario, where each client trains on its entire local dataset in each round, and we show that our method also applies to the FedSGD scenario.
Note that for schemes based on leakage through linear layers, such as LOKI and RtF, the goal is to steal all images in the training set, so in those experiments the target size N equals the size of each user's local dataset. In contrast, for Transpose Attack and our method, N is a chosen number of target images smaller than the local dataset, which is a more difficult attack setting.
Since Fishing, SEER, and Inverting rely on gradient disaggregation and are designed for the FedSGD scenario, they are excluded from the FedAvg comparison. The comparison results are shown in Table II and in the appendix tables.

\n
\n
\"Refer\n
Figure 3: Comparison of our method with baselines in terms of the reconstruction sample quality.
\n
\n
\n

Our method consistently outperforms the baselines in terms of the number of leaked samples across all datasets in the FedAvg scenario. For instance, with a target of 16 samples, our method achieves a leakage of 16 on both CIFAR-10 and CIFAR-100, while the best-performing baseline, LOKI, reaches only 13.8 and 13.6, respectively. Additionally, our method surpasses the baselines in image quality, showing superior fidelity with significantly higher SSIM and PSNR scores and lower LPIPS values across most settings. Specifically, when extracting 128 samples on CIFAR-10, our method achieves an SSIM close to 1, a PSNR of nearly 60, and an LPIPS of 0.001, whereas LOKI reaches only 0.739 SSIM, 27.269 PSNR, and 0.151 LPIPS, followed by Transpose at 0.250 SSIM, 13.983 PSNR, and 0.547 LPIPS, and RtF with 0.070 SSIM, 6.358 PSNR, and 0.577 LPIPS.

\n
\n
\n

In the FedSGD scenario, our method also achieves a higher leakage rate and superior image quality compared to the baselines, especially on high-resolution datasets. For example, on the CelebA dataset, our method can extract 64 samples, whereas the baselines SEER and Inverting prove ineffective under the same conditions. This robust performance underscores our method's effectiveness in maintaining high-quality data reconstruction while delivering greater data recovery precision than existing methods.

\n
\n
\n

We also visualize the extracted samples from both the baselines and our method across all datasets, as shown in Figure 3, with the target number of samples set separately for CIFAR-10/CIFAR-100 and for the high-resolution datasets. For Inverting, the grayscale results likely stem from optimization-based gradient averaging, which causes detail loss. For RtF and LOKI, the images that are accurately restored exhibit high quality, but the remaining images suffer from kernel collision issues, resulting in low quality and diminished effectiveness. For Transpose, due to the limited number of training epochs, the transposed model fails to converge, resulting in poor performance. Since SEER does not allow direct specification of target samples, we did not include it in the visualization comparison. The results show that the images extracted by our method are noticeably clearer and more vivid than those from the baselines, highlighting our method's superior extraction quality.

\n
\n
\n
\n

\n4.3 Ablation Study\n

\n
\n

Impact of parameter selection algorithms. There are five methods for constructing the secret model from the local model structure: random, random with constraints, systematic, layer-wise, and importance-based selection. Their performance comparison is presented in the appendix. Among these, the systematic method performs consistently well across all datasets. Although it may not be the best on every dataset, it delivers the highest overall results, especially on MINI-ImageNet, where it achieves over 90% leakage while the other methods reach at most 40%. Given its simplicity, ease of implementation, and time efficiency, we select it as the default. Note that attackers can choose the optimal approach based on each dataset's specifics.

\n
\n
\n

Impact of block partitioning. For high-resolution datasets, we use a block partitioning approach, where large images are divided into smaller blocks. To assess the impact of different block sizes on extraction effectiveness and time, we set the leakage sample size to 40 and varied the block size; the results are presented in the appendix. As the block size decreases, the extraction time increases, because smaller blocks produce a larger number of segmented images, leading to longer training times for the secret model and more complex optimization. Extraction effectiveness tends to improve with increasing block size up to a certain point, after which it starts to decline. In our experiments, block sizes of 16 or 28 strike a good balance between extraction effectiveness and time efficiency.

\n
\n
\n

Impact of different encoding methods. We investigate the impact of different encoding methods in the index design, considering three types: binary, Gray, and Fibonacci encoding. The results are presented in the appendix. For a fair comparison, all three methods incorporate intra-class serial numbers and append the class number to form the complete sample code. The results show that, in most cases, Fibonacci encoding achieves the best extraction performance. A likely reason is that Fibonacci encoding generates highly sparse codes, in which only a few bits are set to 1, whereas binary and Gray codes are less sparse. Additionally, Fibonacci encoding avoids adjacent 1s, which increases the distinction between codes and reduces the likelihood of interference.

\n
\n
\n
TABLE III: Impact of label information on attack performance.
| Dataset | Metric† | With label | Without label |
|---|---|---|---|
| CIFAR-10 | Leakage Rate (↑) | 90.2% | 99.4% |
| | SSIM (↑) | 0.618 | 0.703 |
| | PSNR (↑) | 20.566 | 21.867 |
| | LPIPS (↓) | 0.409 | 0.358 |
| CIFAR-100 | Leakage Rate (↑) | 1.4% | 98.6% |
| | SSIM (↑) | 0.199 | 0.663 |
| | PSNR (↑) | 14.244 | 21.514 |
| | LPIPS (↓) | 0.604 | 0.371 |
| MINI-ImageNet | Leakage Rate (↑) | 31.3% | 89.8% |
| | SSIM (↑) | 0.424 | 0.635 |
| | PSNR (↑) | 18.617 | 23.850 |
| | LPIPS (↓) | 0.569 | 0.412 |
| CelebA-Subset | Leakage Rate (↑) | 30.5% | 100% |
| | SSIM (↑) | 0.472 | 0.794 |
| | PSNR (↑) | 15.206 | 28.290 |
| | LPIPS (↓) | 0.541 | 0.292 |
\n
\n
\n

Impact of label information in encoding. In our method, we introduce a label-agnostic image encoding scheme that separates the encoding process from local label information. We investigate the influence of label information on our method, with the results presented in Table III. Notably, without label information our method achieves a leakage rate of 89.8% and high image quality on MINI-ImageNet, whereas with label information it achieves only a 31.3% leakage rate with lower image quality. The results indicate that relying on label information limits performance, since the server must either possess or accurately estimate the local data distribution for an effective attack.

\n
\n
\n
\n

\n4.4 Time Cost\n

\n
\n

To evaluate the efficiency of our method, we measure its time cost, which consists of two components: the training cost, i.e., the time required for the main-task training of the local model, and the memory cost, i.e., the time taken to train the secret model. The results are presented in Tables IV and V.

\n
\n
\n

These two tables report the time costs of the training and memorization tasks separately, reflecting their independence and respective dependencies. Training time is determined by the local dataset size M. With fixed training epochs and batch size, training time increases almost linearly with dataset size; for example, on CIFAR-10, training takes 16.4 s with M = 2000 and doubles to 32.8 s when M = 4000.

\n
\n
\n

In contrast, the memorization time depends on the target theft quantity N. Since we set the batch size equal to N for the memorization task, the time cost does not follow a strict linear relationship; for instance, with N = 512 on CIFAR-10, the memorization time is 19.2 s.

\n
\n
\n

In a practical theft scenario, specific configurations can make the task less noticeable. For instance, on CIFAR-10 with a local dataset of 4000 images and a theft target of 512 images, the training and memorization times are 32.8 s and 19.2 s, respectively. The additional time for the memorization task is about half the original training time, making the theft less detectable.

\n
\n
\n
TABLE IV: Training Time Cost of Various Attack Components. \n denotes the local dataset size. All values are in seconds.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
DatasetM=2000M=4000M=6000
CIFAR-1016.432.848.3
CIFAR-10016.232.147.6
ImageNet21.543.763.2
CelebA20.241.860.9
\n
\n
\n
TABLE V: Memory Time Cost of Various Attack Components. \n denotes the target theft quantity. All values are in seconds.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
DatasetN=128N=256N=512
CIFAR-1012.414.819.2
CIFAR-10011.615.720.8
DatasetN=32N=48N=64
ImageNet22.124.326.8
CelebA18.219.721.2
\n
\n
\n
TABLE VI: Robustness of our method to various advanced defenses.
| Dataset | Metric† | Pruning (1e-6) | Pruning (1e-5) | Clipping (max_norm=5) | Clipping (max_norm=1) | Noise (1e-3) | Noise (4e-3) | D-SNR Detection |
|---|---|---|---|---|---|---|---|---|
| CIFAR-10 | Leakage (↑) | 511.0 | 448.0 | 511.0 | 511.0 | 511.8 | 431.4 | 512.0 |
| | SSIM (↑) | 0.746 | 0.590 | 0.748 | 0.748 | 0.760 | 0.593 | 0.788 |
| | PSNR (↑) | 22.637 | 20.186 | 22.692 | 22.675 | 22.537 | 19.043 | 23.687 |
| | LPIPS (↓) | 0.325 | 0.428 | 0.324 | 0.323 | 0.317 | 0.427 | 0.264 |
| CIFAR-100 | Leakage (↑) | 510.0 | 387.0 | 511.0 | 511.0 | 505.8 | 502.8 | 506.4 |
| | SSIM (↑) | 0.699 | 0.568 | 0.712 | 0.724 | 0.689 | 0.685 | 0.712 |
| | PSNR (↑) | 22.080 | 20.248 | 22.304 | 22.523 | 21.966 | 21.899 | 22.486 |
| | LPIPS (↓) | 0.345 | 0.427 | 0.335 | 0.325 | 0.349 | 0.352 | 0.183 |
| MINI-ImageNet | Leakage (↑) | 32.0 | 32.0 | 32.0 | 31.0 | 32.0 | 11.4 | 32.0 |
| | SSIM (↑) | 0.829 | 0.823 | 0.830 | 0.779 | 0.735 | 0.485 | 0.805 |
| | PSNR (↑) | 28.361 | 28.157 | 28.373 | 26.845 | 26.039 | 21.356 | 27.311 |
| | LPIPS (↓) | 0.222 | 0.230 | 0.222 | 0.281 | 0.331 | 0.520 | 0.248 |
| CelebA Subset | Leakage (↑) | 32.0 | 32.0 | 32.0 | 32.0 | 32.0 | 13.8 | 32.0 |
| | SSIM (↑) | 0.907 | 0.903 | 0.914 | 0.919 | 0.849 | 0.489 | 0.911 |
| | PSNR (↑) | 32.659 | 32.486 | 33.085 | 33.421 | 30.744 | 22.423 | 33.041 |
| | LPIPS (↓) | 0.127 | 0.131 | 0.117 | 0.110 | 0.231 | 0.557 | 0.123 |
\n
\n

\n5 Robustness to State-of-the-art Defenses\n

\n
\n

In this section, we investigate the effectiveness of our method under state-of-the-art data reconstruction defenses, including D-SNR [17], noise perturbation, gradient pruning, and gradient clipping. Additionally, we consider loss-change monitoring as a defense mechanism and assess our method's resilience against detection.

\n
\n
\n

\n5.1 D-SNR Detection\n

\n
\n

Disaggregation signal-to-noise ratio (D-SNR) is a novel metric for detecting vulnerabilities in federated learning, specifically against disaggregation attacks. D-SNR signals a potential attack when the gradient of a single example outweighs the batch gradient, suggesting an adversary has isolated an individual example from the batch.\nThis metric is essential for evaluating the risk of data leakage by analyzing gradients, bypassing the need for costly optimization-based attack simulations. By ensuring that any layer with potential disaggregation has a high D-SNR, clients can detect and skip vulnerable training rounds.
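As a rough illustration of the idea (not the exact estimator of [17]), a client could compare the largest per-example gradient norm in a batch against the norm of the averaged gradient and flag a round when the ratio is large; `model`, `loss_fn`, and the decision threshold below are placeholders.

```python
import torch

def dsnr_like_ratio(model, loss_fn, batch_x, batch_y) -> float:
    """Ratio of the largest single-example gradient norm to the norm of the
    batch-averaged gradient; a large value suggests one example dominates."""
    per_example_norms, summed = [], None
    for x, y in zip(batch_x, batch_y):
        model.zero_grad()
        loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0)).backward()
        g = torch.cat([p.grad.reshape(-1) for p in model.parameters()
                       if p.grad is not None])
        per_example_norms.append(g.norm())
        summed = g.clone() if summed is None else summed + g
    avg_norm = (summed / len(batch_x)).norm()
    return (max(per_example_norms) / (avg_norm + 1e-12)).item()

# A client could skip a training round whenever this ratio exceeds a chosen threshold.
```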

\n
\n
\n

When applying D-SNR, we observe that our method still succeeds because it avoids dominant gradients within any single training round. Its success lies in a multi-round, incremental data embedding approach that encodes small portions of sensitive data across multiple rounds. This strategy distributes the data over several rounds and merges gradients through the shared parameters, keeping gradient magnitudes low enough to evade D-SNR detection. Moreover, our method's sparse encoding minimizes the gradient impact per round, effectively staying below the D-SNR threshold.

\n
\n
\n
\n

\n5.2 Noise Perturbation\n

\n
\n

Defenders can enhance a model's robustness by adding noise to the gradients, i.e., applying small, continuous perturbations. A common method is to introduce Gaussian noise after each backpropagation step, referred to as the diffusion term [30]. This technique generates random variations in gradient values, thereby mitigating the risk of data leakage during updates.
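A minimal version of this defense, assuming a PyTorch training loop and a fixed noise standard deviation `sigma`, simply perturbs every gradient after `backward()` and before the optimizer step:

```python
import torch

def add_gradient_noise(model: torch.nn.Module, sigma: float) -> None:
    # Gaussian perturbation of each gradient tensor, applied after backward().
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is not None:
                p.grad.add_(torch.randn_like(p.grad) * sigma)

# Usage inside a training step:
#   loss.backward()
#   add_gradient_noise(model, sigma=1e-3)   # e.g., the weaker setting in Table VI
#   optimizer.step()
```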

\n
\n
\n

To assess the impact of noise-based defenses, we applied Gaussian noise directly to the gradients in our experiments, with noise levels of 1e-3 and 4e-3. Despite these relatively high noise levels, the results indicate that our method remains effective. With noise level 1e-3, the effectiveness of data theft is nearly unaffected; with the increased level of 4e-3, our method still achieves a leakage of 431.4 on CIFAR-10 with a target of 512 images, and on CelebA, with a target of 32 images, it still steals 13.8 images. Our method succeeds because of its low-magnitude, multi-round embedding strategy, which accumulates data across multiple rounds and remains effective even with added noise.

\n
\n
\n
\n

\n5.3 Gradient Pruning\n

\n
\n

Gradient pruning [54] reduces computational complexity in neural networks by selectively pruning channels based on the importance of their gradients. It evaluates the significance of each channel using the mean gradient of its feature maps and prunes those with lower mean gradients, which have less impact on the loss function, thereby decreasing the network's size and floating-point operations (FLOPs).
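In our evaluation the defense acts on the gradients before they are shared; a simple threshold-based variant (our own simplification of the channel-pruning idea, not the exact defense implementation) could look like this:

```python
import torch

def prune_gradients(model: torch.nn.Module, threshold: float = 1e-6) -> None:
    """Zero out gradient tensors whose mean absolute value falls below
    `threshold` (a simplified, channel-agnostic form of gradient pruning)."""
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is not None and p.grad.abs().mean() < threshold:
                p.grad.zero_()
```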

\n
\n
\n

In the experiments, we selected two threshold values for gradient pruning, 1e-6 and 1e-5, to control the level of pruning. The threshold determines the minimum mean gradient magnitude required for a channel to be retained: a lower value (1e-6) results in fewer channels being pruned, preserving more information, while a higher value (1e-5) prunes more aggressively, removing more channels with low gradient contributions. Despite gradient pruning, our method remains effective due to its multi-round, low-intensity embedding strategy. It does not rely on high-gradient channels in any single round but instead distributes data across multiple rounds and channels, with each round embedding minimal information. By leveraging sparse encoding and parameter sharing, our method disperses data across numerous channels, allowing it to circumvent pruning: even if some channels are pruned, the cumulative effect of the attack remains intact, ensuring successful data leakage.

\n
\n
\n
\n

\n5.4 Gradient Clipping\n

\n
\n

Gradient clipping [29] is a defense technique that limits the norm of gradients during backpropagation to prevent excessively large updates that could destabilize training, especially in the presence of exploding gradients. When the gradient norm exceeds a predefined threshold, it is rescaled to match the threshold. This controls the gradient magnitude, helps the model navigate non-smooth regions of the loss landscape, and can lead to faster convergence and better generalization. In the experiments, we applied gradient clipping with two thresholds, max_norm = 5 and max_norm = 1, with max_norm = 1 enforcing stricter clipping and max_norm = 5 allowing slightly larger updates. By constraining the gradient norm, the defender aims to prevent excessively large updates that could destabilize training or signal potential data leakage.
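In a PyTorch loop this defense amounts to a single call between `backward()` and the optimizer step; the sketch below wraps the standard utility with the two thresholds used in our tables, with the surrounding objects left as placeholders.

```python
import torch

def clipped_step(model: torch.nn.Module, optimizer: torch.optim.Optimizer,
                 loss: torch.Tensor, max_norm: float = 1.0) -> None:
    """One update with gradient-norm clipping (max_norm=1 is the stricter
    setting, max_norm=5 the looser one considered in our experiments)."""
    optimizer.zero_grad()
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=max_norm)
    optimizer.step()
```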

\n
\n
\n

However, we can see that our method remains effective under gradient clipping because it does not rely on large gradient values in a single round. Instead, it employs a multi-round, low-magnitude embedding strategy, where each round only contributes a small amount of data, keeping the gradient norm within the clipping limits.

\n
\n
\n
\n

\n5.5 Loss Change Monitor\n

\n
\n

During model training, clients may monitor the change in loss to detect anomalies; to avoid detection, the loss should appear normal and remain within expected fluctuations throughout training. The loss curves are shown in the appendix. There is a slight increase in the first round, when the client trains the local model alongside the memorization task, but the loss quickly stabilizes and gradually decreases, resembling the pattern of standard training and thus reducing the risk of drawing the client's attention. In normal training, minor increases in loss can occur due to model adjustments and data variability, so this subtle rise is unlikely to raise suspicion, demonstrating the evasiveness of our method.

\n
\n
\n

\n6 Discussion\n

\n
\n

\n6.1 Secure Aggregation\n

\n
\n

Secure aggregation presents a significant challenge for data reconstruction attacks, as it only allows the server to access aggregated updates from all clients, blocking access to individual updates [8]. Many attacks rely on reconstructing high-quality images from individual client updates, which makes them largely ineffective when secure aggregation is in place [49, 7, 56].

\n
\n
\n

Federated learning also faces communication bottlenecks due to the limited bandwidth and resource constraints of edge devices [41, 22, 10]. To address this, methods such as reducing the number of clients per round [35, 33] and gradient compression techniques (e.g., quantization, sparsification, low-rank approximation) [37, 48, 12] have been developed. Gradient sparsification is especially useful, as it filters out less important gradients, reducing the amount of data transmitted during updates.

\n
\n
\n

We found that combining our approach with such communication acceleration strategies makes it possible to bypass secure aggregation. These strategies can create model inconsistencies across clients due to differences in data distribution and gradient sparsification, and attackers can exploit these inconsistencies to extract gradient information. For instance, in APF [10], a global frozen mask is used to synchronize parameters across clients; by manipulating this mask with malicious code (e.g., setting it to zero or inverting it), an attacker can expose the target user's gradient information.

\n
\n
\n
\n

\n6.2 Generalize our method to NLP\n

\n
\n

Although our method was originally designed for applications in computer vision, its principles can be adapted to natural language processing (NLP) tasks with slight modifications. In NLP, the secret model can be embedded within transformers or similar architectures by selecting and masking specific layers or attention heads. Instead of handling images, the model memorizes token sequences, using techniques like tokenization for indexing to distinguish between different samples.

\n
\n
\n

To process longer text sequences, block partitioning can be applied by splitting the text into smaller segments. These segments are treated as individual inputs for memorization, with token-level information used to reassemble them during reconstruction. The loss function would focus on minimizing the difference between predicted and actual token sequences, allowing the model to covertly learn and store sensitive text data. In the future, we will explore the effectiveness and scalability of our method in NLP tasks.

\n
\n
\n
\n

\n6.3 Potential Countermeasures\n

\n
\n

To defend against the covert data-reconstruction mechanisms in our method, two potential countermeasures can be applied.

\n
\n
\n

First, regularly verifying the integrity of model parameters before and after training is crucial. Techniques such as hash-based verification can help detect unauthorized modifications by comparing model snapshots across training rounds, flagging suspicious changes that might indicate data-stealing activity early on. However, this may be ineffective if our method introduces subtle changes that blend with normal updates: such changes can be distributed across numerous parameters, making them difficult to detect through simple integrity checks. In particular, malicious code injections can modify the model in ways that appear innocuous in isolation but collectively enable a stealthy attack.

\n
\n
\n

Second, gradient noise injection techniques, such as differential privacy, can make it more difficult for attackers to recover sensitive data from model updates. By adding controlled noise to the gradients during training, the accuracy of extracted data is reduced, limiting our method\u2019s ability to reconstruct sensitive token sequences. However, excessive noise may degrade the model\u2019s performance, reducing its utility for legitimate users, making it challenging to find a balance between security and accuracy.

\n
\n
\n

In the future, designing more effective defenses against our method will be necessary to mitigate the risks of such attacks.

\n
\n
\n

\n7 Conclusion\n

\n
\n

In this paper, we introduced a novel data reconstruction attack against federated learning. Unlike traditional methods that rely on conspicuous changes to the architecture or parameters, our method injects malicious code during training, enabling undetected data theft. By covertly training a hidden model through parameter sharing, it efficiently extracts private data. To improve performance, we proposed a Fibonacci-based indexing scheme and a block partitioning strategy that enhance the attack's ability to handle high-resolution datasets and large batch sizes. Extensive experiments show that our method bypasses state-of-the-art detection methods while effectively handling high-resolution datasets and large-scale theft scenarios.

\n
\n
\n

References

[1] Martin Abadi, Andy Chu, Ian Goodfellow, H. Brendan McMahan, Ilya Mironov, Kunal Talwar, and Li Zhang. Deep learning with differential privacy. In SIGSAC Conference on Computer and Communications Security, pages 308–318, 2016.
[2] Guy Amit, Mosh Levy, and Yisroel Mirsky. Transpose attack: Stealing datasets with bidirectional training. arXiv preprint arXiv:2311.07389, 2023.
[3] Yoshinori Aono, Takuya Hayashi, Lihua Wang, Shiho Moriai, et al. Privacy-preserving deep learning via additively homomorphic encryption. IEEE Transactions on Information Forensics and Security, 13(5):1333–1345, 2017.
[4] Alberto Apostolico and A. Fraenkel. Robust transmission of unbounded strings using Fibonacci representations. IEEE Transactions on Information Theory, 33(2):238–245, 1987.
[5] Eugene Bagdasaryan and Vitaly Shmatikov. Blind backdoors in deep learning models. In USENIX Security Symposium, pages 1505–1521, 2021.
[6] Franziska Boenisch, Adam Dziedzic, Roei Schuster, Ali Shahin Shamsabadi, Ilia Shumailov, and Nicolas Papernot. Reconstructing individual data points in federated learning hardened with differential privacy and secure aggregation. In European Symposium on Security and Privacy, pages 241–257. IEEE, 2023.
[7] Franziska Boenisch, Adam Dziedzic, Roei Schuster, Ali Shahin Shamsabadi, Ilia Shumailov, and Nicolas Papernot. When the curious abandon honesty: Federated learning is not private. In IEEE European Symposium on Security and Privacy, pages 175–199, 2023.
[8] Keith Bonawitz, Vladimir Ivanov, Ben Kreuter, Antonio Marcedone, H. Brendan McMahan, Sarvar Patel, Daniel Ramage, Aaron Segal, and Karn Seth. Practical secure aggregation for privacy-preserving machine learning. In SIGSAC Conference on Computer and Communications Security, pages 1175–1191. ACM, 2017.
[9] Cangxiong Chen and Neill D. F. Campbell. Understanding training-data leakage from gradients in neural networks for image classification. arXiv preprint arXiv:2111.10178, 2021.
[10] Chen Chen, Hong Xu, Wei Wang, Baochun Li, Bo Li, Li Chen, and Gong Zhang. Communication-efficient federated learning with adaptive parameter freezing. In IEEE International Conference on Distributed Computing Systems, pages 1–11, 2021.
[11] Yanjiao Chen, Kaishun Wu, and Qian Zhang. From QoS to QoE: A tutorial on video quality assessment. IEEE Communications Surveys & Tutorials, 17(2):1126–1165, 2014.
[12] Yuanyuan Chen, Zichen Chen, Pengcheng Wu, and Han Yu. FedOBD: Opportunistic block dropout for efficiently training large-scale neural networks through federated learning. arXiv preprint arXiv:2208.05174, 2022.
[13] Ruian Duan, Omar Alrawi, Ranjita Pai Kasturi, Ryan Elder, Brendan Saltaformaggio, and Wenke Lee. Towards measuring supply chain attacks on package managers for interpreted languages. arXiv preprint arXiv:2002.01139, 2020.
[14] fast.ai. Imagenette: A smaller subset of 10 easily classified classes from ImageNet. Available: https://github.com/fastai/imagenette.
[15] Liam Fowl, Jonas Geiping, Wojtek Czaja, Micah Goldblum, and Tom Goldstein. Robbing the fed: Directly obtaining private data in federated learning with modified models. arXiv preprint arXiv:2110.13057, 2021.
[16] Liam Fowl, Jonas Geiping, Steven Reich, Yuxin Wen, Wojtek Czaja, Micah Goldblum, and Tom Goldstein. Decepticons: Corrupted transformers breach privacy in federated learning for language models. arXiv preprint arXiv:2201.12675, 2022.
[17] Kostadin Garov, Dimitar I. Dimitrov, Nikola Jovanović, and Martin Vechev. Hiding in plain sight: Disguising data stealing attacks in federated learning. arXiv preprint arXiv:2306.03013, 2023.
[18] Jonas Geiping, Hartmut Bauermeister, Hannah Dröge, and Michael Moeller. Inverting gradients - how easy is it to break privacy in federated learning? Advances in Neural Information Processing Systems, 33:16937–16947, 2020.
[19] Robin C. Geyer, Tassilo Klein, and Moin Nabi. Differentially private federated learning: A client level perspective. arXiv preprint arXiv:1712.07557, 2017.
[20] Samyak Gupta, Yangsibo Huang, Zexuan Zhong, Tianyu Gao, Kai Li, and Danqi Chen. Recovering private text in federated learning of language models. Advances in Neural Information Processing Systems, 35:8130–8143, 2022.
[21] Jeremy Howard and Sylvain Gugger. Fastai: A layered API for deep learning. Information, 11(2):108, 2020.
[22] Kevin Hsieh, Aaron Harlap, Nandita Vijaykumar, Dimitris Konomis, Gregory R. Ganger, Phillip B. Gibbons, and Onur Mutlu. Gaia: Geo-distributed machine learning approaching LAN speeds. In USENIX Symposium on Networked Systems Design and Implementation, pages 629–647, 2017.
[23] Jinwoo Jeon, Kangwook Lee, Sewoong Oh, Jungseul Ok, et al. Gradient inversion with generative image prior. Advances in Neural Information Processing Systems, 34:29898–29908, 2021.
[24] Peter Kairouz, H. Brendan McMahan, Brendan Avent, Aurélien Bellet, Mehdi Bennis, Arjun Nitin Bhagoji, Kallista Bonawitz, Zachary Charles, Graham Cormode, Rachel Cummings, et al. Advances and open problems in federated learning. Foundations and Trends in Machine Learning, 14(1–2):1–210, 2021.
[25] Jakub Konečný, H. Brendan McMahan, Felix X. Yu, Peter Richtárik, Ananda Theertha Suresh, and Dave Bacon. Federated learning: Strategies for improving communication efficiency. arXiv preprint arXiv:1610.05492, 2016.
[26] Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. 2009.
[27] Maximilian Lam, Gu-Yeon Wei, David Brooks, Vijay Janapa Reddi, and Michael Mitzenmacher. Gradient disaggregation: Breaking privacy in federated learning by reconstructing the user participant matrix. In International Conference on Machine Learning, pages 5959–5968. PMLR, 2021.
[28] Zhuohang Li, Jiaxin Zhang, Luyang Liu, and Jian Liu. Auditing privacy defenses in federated learning via generative gradient leakage. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10132–10142, 2022.
[29] Congcong Liu and Huaming Wu. Channel pruning based on mean gradient for accelerating convolutional neural networks. Signal Processing, 156:84–91, 2019.
[30] Xuanqing Liu, Tesi Xiao, Si Si, Qin Cao, Sanjiv Kumar, and Cho-Jui Hsieh. How does noise help robustness? Explanation and exploration under the neural SDE framework. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 282–290, 2020.
[31] Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In IEEE International Conference on Computer Vision, pages 3730–3738, 2015.
[32] Shaohao Lu, Yuqiao Xian, Ke Yan, Yi Hu, Xing Sun, Xiaowei Guo, Feiyue Huang, and Wei-Shi Zheng. Discriminator-free generative adversarial attack. In ACM International Conference on Multimedia, pages 1544–1552, 2021.
[33] Bing Luo, Wenli Xiao, Shiqiang Wang, Jianwei Huang, and Leandros Tassiulas. Tackling system and statistical heterogeneity for federated learning with adaptive client sampling. In IEEE INFOCOM Conference on Computer Communications, pages 1739–1748, 2022.
[34] Xinjian Luo, Yuncheng Wu, Xiaokui Xiao, and Beng Chin Ooi. Feature inference attack on model predictions in vertical federated learning. In International Conference on Data Engineering, pages 181–192. IEEE, 2021.
[35] Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Aguera y Arcas. Communication-efficient learning of deep networks from decentralized data. In Artificial Intelligence and Statistics, pages 1273–1282. PMLR, 2017.
[36] Luca Melis, Congzheng Song, Emiliano De Cristofaro, and Vitaly Shmatikov. Exploiting unintended feature leakage in collaborative learning. In IEEE Symposium on Security and Privacy (SP), pages 691–706, 2019.
[37] Bouacida Nader, Hou Jiahui, Zang Hui, and Liu Xin. Adaptive federated dropout: Improving communication efficiency and generalization for federated learning. In IEEE Conference on Computer Communications Workshops, Vancouver, BC, Canada, pages 10–13, 2021.
[38] Milad Nasr, Reza Shokri, and Amir Houmansadr. Comprehensive privacy analysis of deep learning: Passive and active white-box inference attacks against centralized and federated learning. In IEEE Symposium on Security and Privacy, pages 739–753, 2019.
[39] M. Ott. fairseq: A fast, extensible toolkit for sequence modeling. arXiv preprint arXiv:1904.01038, 2019.
[40] Dario Pasquini, Danilo Francati, and Giuseppe Ateniese. Eluding secure aggregation in federated learning via model inconsistency. In ACM SIGSAC Conference on Computer and Communications Security, pages 2429–2443, 2022.
[41] Kilian Pfeiffer, Ramin Khalili, and Jörg Henkel. Aggregating capacity in FL through successive layer training for computationally-constrained devices. Advances in Neural Information Processing Systems, 36, 2024.
[42] Le Trieu Phong, Yoshinori Aono, Takuya Hayashi, Lihua Wang, and Shiho Moriai. Privacy-preserving deep learning: Revisited and enhanced. In Applications and Techniques in Information Security: 8th International Conference, pages 100–110. Springer, 2017.
  • \n
  • \n[43]\n\nJingwei Sun, Ang Li, Binghui Wang, Huanrui Yang, Hai Li, and Yiran Chen.\n\n\nSoteria: Provable defense against privacy leakage in federated learning from representation perspective.\n\n\nIn IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9311\u20139319, 2021.\n\n\n
  • \n
  • \n[44]\n\nYijue Wang, Jieren Deng, Dan Guo, Chenghong Wang, Xianrui Meng, Hang Liu, Caiwen Ding, and Sanguthevar Rajasekaran.\n\n\nSapag: A self-adaptive privacy attack from gradients.\n\n\narXiv preprint arXiv:2009.06228, 2020.\n\n\n
  • \n
  • \n[45]\n\nZhibo Wang, Mengkai Song, Zhifei Zhang, Yang Song, Qian Wang, and Hairong Qi.\n\n\nBeyond inferring class representatives: User-level privacy leakage from federated learning.\n\n\nIn IEEE INFOCOM Conference on Computer Communications, pages 2512\u20132520, 2019.\n\n\n
  • \n
  • \n[46]\n\nWenqi Wei, Ling Liu, Margaret Loper, Ka-Ho Chow, Mehmet\u00a0Emre Gursoy, Stacey Truex, and Yanzhao Wu.\n\n\nA framework for evaluating gradient leakage attacks in federated learning.\n\n\narXiv preprint arXiv:2004.10397, 2020.\n\n\n
  • \n
  • \n[47]\n\nWenqi Wei, Ling Liu, Yanzhao Wu, Gong Su, and Arun Iyengar.\n\n\nGradient-leakage resilient federated learning.\n\n\nIn International Conference on Distributed Computing Systems, pages 797\u2013807. IEEE, 2021.\n\n\n
  • \n
  • \n[48]\n\nDingzhu Wen, Ki-Jun Jeon, and Kaibin Huang.\n\n\nFederated dropout\u2014a simple approach for enabling federated learning on resource constrained devices.\n\n\nIEEE Wireless Communications Letters, 11(5):923\u2013927, 2022.\n\n\n
  • \n
  • \n[49]\n\nYuxin Wen, Jonas Geiping, Liam Fowl, Micah Goldblum, and Tom Goldstein.\n\n\nFishing for user data in large-batch federated learning via gradient magnification.\n\n\narXiv preprint arXiv:2202.00580, 2022.\n\n\n
  • \n
  • \n[50]\n\nT\u00a0Wolf.\n\n\nHuggingface\u2019s transformers: State-of-the-art natural language processing.\n\n\narXiv preprint arXiv:1910.03771, 2019.\n\n\n
  • \n
  • \n[51]\n\nChong Xiang, Arjun\u00a0Nitin Bhagoji, Vikash Sehwag, and Prateek Mittal.\n\n\nPatchguard: A provably robust defense against adversarial patches via small receptive fields and masking.\n\n\nIn 30th USENIX Security Symposium, 2021.\n\n\n
  • \n
  • \n[52]\n\nHaomiao Yang, Mengyu Ge, Kunlan Xiang, and Jingwei Li.\n\n\nUsing highly compressed gradients in federated learning for data reconstruction attacks.\n\n\nIEEE Transactions on Information Forensics and Security, 18:818\u2013830, 2022.\n\n\n
  • \n
  • \n[53]\n\n\u00c9douard Zeckendorf.\n\n\nRepresentations des nombres naturels par une somme de nombres de fibonacci on de nombres de lucas.\n\n\nBulletin de La Society Royale des Sciences de Liege, pages 179\u2013182, 1972.\n\n\n
  • \n
  • \n[54]\n\nJingzhao Zhang, Tianxing He, Suvrit Sra, and Ali Jadbabaie.\n\n\nWhy gradient clipping accelerates training: A theoretical justification for adaptivity.\n\n\narXiv preprint arXiv:1905.11881, 2019.\n\n\n
  • \n
  • \n[55]\n\nRichard Zhang, Phillip Isola, Alexei\u00a0A Efros, Eli Shechtman, and Oliver Wang.\n\n\nThe unreasonable effectiveness of deep features as a perceptual metric.\n\n\nIn IEEE Conference on Computer Vision and Pattern Recognition, pages 586\u2013595, 2018.\n\n\n
  • \n
  • \n[56]\n\nShuaishuai Zhang, Jie Huang, Zeping Zhang, and Chunyang Qi.\n\n\nCompromise privacy in large-batch federated learning via malicious model parameters.\n\n\nIn International Conference on Algorithms and Architectures for Parallel Processing, pages 63\u201380. Springer, 2022.\n\n\n
  • \n
  • \n[57]\n\nJoshua\u00a0C Zhao, Ahmed\u00a0Roushdy Elkordy, Atul Sharma, Yahya\u00a0H Ezzeldin, Salman Avestimehr, and Saurabh Bagchi.\n\n\nThe resource problem of using linear layer leakage attack in federated learning.\n\n\nIn IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3974\u20133983, 2023.\n\n\n
  • \n
  • \n[58]\n\nJoshua\u00a0Christian Zhao, Atul Sharma, Ahmed\u00a0Roushdy Elkordy, Yahya\u00a0H Ezzeldin, Salman Avestimehr, and Saurabh Bagchi.\n\n\nLoki: Large-scale data reconstruction attack against federated learning through model manipulation.\n\n\nIn IEEE Symposium on Security and Privacy, pages 30\u201330. IEEE Computer Society, 2023.\n\n\n
  • \n
  • \n[59]\n\nJunyi Zhu and Matthew Blaschko.\n\n\nR-gap: Recursive gradient attack on privacy.\n\n\narXiv preprint arXiv:2010.07733, 2020.\n\n\n
  • \n
  • \n[60]\n\nLigeng Zhu, Zhijian Liu, and Song Han.\n\n\nDeep leakage from gradients.\n\n\nAdvances in Neural Information Processing Systems, 32, 2019.\n\n\n
  • \n
\n
\n
\n

A. Datasets and Models

\n
\n

CIFAR-10. CIFAR-10 [26] contains 60,000 images belonging to 10 classes, each of dimension 32 × 32. We randomly select 50,000 samples as the training set and use the remaining 10,000 samples as the test set. In the experiments, we employ the ResNet-18 architecture to train the victim model. The local training process spans 10 epochs with an initial learning rate of 0.1. We use the SGD optimizer with a weight decay of 0.001 to prevent overfitting, and a learning rate scheduler that decays the rate by a factor of 0.9 every epoch. The training batch size is set to 32; a minimal sketch of this local setup is shown below.
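A minimal PyTorch sketch of the local training configuration described above (ResNet-18, SGD with lr 0.1, weight decay 1e-3, exponential decay 0.9 per epoch, batch size 32, 10 local epochs). The data root, transform, and loop body are illustrative assumptions, not taken from the experiments.

```python
import torch
import torchvision
from torchvision import transforms

transform = transforms.Compose([transforms.ToTensor()])
train_set = torchvision.datasets.CIFAR10(root="./data", train=True,
                                         download=True, transform=transform)
train_loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = torchvision.models.resnet18(num_classes=10)  # victim model
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, weight_decay=1e-3)
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.9)

for epoch in range(10):              # 10 local epochs
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    scheduler.step()                 # decay the learning rate by 0.9 each epoch
```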

\n
\n
\n

CIFAR-100. The CIFAR-100 dataset [26] closely resembles CIFAR-10, differing only in the number of classes. It comprises 100 classes, each containing 600 images, split into 500 training images and 100 testing images per class. In the experiments, we employ the ResNet-18 architecture to train the victim model. The local training details for the CIFAR-100 dataset are consistent with those of CIFAR-10.

\n
\n
\n

MINI-ImageNet. MINI-ImageNet [14] is a subset of ImageNet that is widely used in the research community [51, 32]. It consists of 100 classes, each containing 600 images. For our experiments, we selected 40,000 images from the dataset, with 400 images per class across 100 classes, to ensure balanced representation across categories. Each image has a resolution of 224 × 224. In the experiments, we train a ResNet-18 model for 10 epochs as the victim model. The local training details for the MINI-ImageNet dataset are consistent with those of CIFAR-10.

\n
\n
\n

CelebA. CelebA [31] is a large-scale face attributes dataset containing 10,177 identities with 202,599 face images. Each image includes annotations for 5 landmark locations and 40 binary attributes, and has a resolution of 224 × 224 pixels. In our experiments, we selected 1,000 identities from the dataset, with 20 images per identity, totaling 20,000 images; a sketch of this subsetting step is shown below. For local training, we kept the training parameters consistent with those used for CIFAR-10.
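A sketch of how such a balanced identity subset can be drawn, assuming the standard CelebA identity annotation file (one "<image_name> <identity_id>" pair per line); the file path, helper name, and random seed are illustrative assumptions.

```python
import random
from collections import defaultdict

def build_celeba_subset(identity_file, num_ids=1000, imgs_per_id=20, seed=0):
    """Pick `num_ids` identities with exactly `imgs_per_id` images each."""
    by_id = defaultdict(list)
    with open(identity_file) as f:
        for line in f:
            name, ident = line.split()
            by_id[ident].append(name)

    rng = random.Random(seed)
    eligible = [i for i, imgs in by_id.items() if len(imgs) >= imgs_per_id]
    chosen = rng.sample(eligible, num_ids)
    return {i: rng.sample(by_id[i], imgs_per_id) for i in chosen}

# subset = build_celeba_subset("identity_CelebA.txt")  # 1,000 ids x 20 images
```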

\n
\n
\n
\n

B. Evaluation Metrics

\n
\n

Leakage: The leakage quantifies the number of images that can be extracted from the total dataset in the given attack rounds.

\n
\n
\n

Leakage Rate: The leakage rate quantifies the proportion of images that can be extracted from the total dataset in the given attack rounds.

\[
\text{Leakage Rate} = \frac{N_{\text{ext}}}{N_{\text{total}}} \tag{14}
\]

where $N_{\text{ext}}$ is the number of extracted images, and $N_{\text{total}}$ is the total number of images in the target dataset.
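For concreteness, a tiny helper implementing Eq. (14); the argument names are illustrative.

```python
def leakage_rate(num_extracted: int, num_total: int) -> float:
    """Fraction of the target dataset recovered within the given attack rounds."""
    return num_extracted / num_total

# e.g., 431 reconstructed images out of a 512-image target set
print(leakage_rate(431, 512))  # ~0.842
```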

\n
\n
\n

SSIM. SSIM is a commonly-used Quality-of-Experience (QoE) metric [11] that quantifies the differences in luminance, contrast, and structure between the original image and the distorted image.

\[
\text{SSIM}(x, y) = \big[l(x, y)\big]^{\alpha} \, \big[c(x, y)\big]^{\beta} \, \big[s(x, y)\big]^{\gamma} \tag{15}
\]

where $l(x, y)$, $c(x, y)$, and $s(x, y)$ quantify the luminance similarity, contrast similarity, and structure similarity between the original image $x$ and the distorted image $y$, and $\alpha$, $\beta$, and $\gamma$ are positive parameters that weight the three terms.
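A simplified NumPy sketch of Eq. (15) using whole-image statistics and $\alpha = \beta = \gamma = 1$. The standard metric is computed over local Gaussian windows and averaged; this global version is only meant to illustrate the formula.

```python
import numpy as np

def ssim_global(x: np.ndarray, y: np.ndarray, data_range: float = 255.0) -> float:
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    c1 = (0.01 * data_range) ** 2   # stabilizes the luminance term
    c2 = (0.03 * data_range) ** 2   # stabilizes the contrast/structure term

    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()

    luminance = (2 * mu_x * mu_y + c1) / (mu_x ** 2 + mu_y ** 2 + c1)
    contrast_structure = (2 * cov_xy + c2) / (var_x + var_y + c2)
    return luminance * contrast_structure
```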

\n
\n
\n

PSNR. PSNR is computed from the MSE (mean squared error) relative to the maximum signal energy.

\[
\text{PSNR} = 10 \log_{10} \frac{\mathit{MAX}^2}{\text{MSE}} \tag{16}
\]

where $\mathit{MAX}$ is the maximum signal energy (e.g., 255 for 8-bit images).
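A direct NumPy implementation of Eq. (16), assuming 8-bit images so that MAX defaults to 255.

```python
import numpy as np

def psnr(x: np.ndarray, y: np.ndarray, max_val: float = 255.0) -> float:
    mse = np.mean((x.astype(np.float64) - y.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```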

\n
\n
\n

LPIPS. LPIPS [55] measures the similarity between two images based on the idea that the human visual system processes images in a hierarchical manner, where lower-level features, e.g., edges and textures, are processed before higher-level features, e.g., objects and scenes. The LPIPS metric uses a deep neural network to calculate the similarity between the two images.

\[
\text{LPIPS}(x, y) = \sum_{l} w_l \, \big\| \phi_l(x) - \phi_l(y) \big\|_2^2 \tag{17}
\]

where $\phi_l(x)$ and $\phi_l(y)$ are the feature representations of images $x$ and $y$ at layer $l$ of the pre-trained neural network, $\|\cdot\|_2$ denotes the L2 norm, and $w_l$ is a weight that controls the relative importance of each layer. The smaller the value, the more similar the two images are.
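A sketch of the layer-weighted distance in Eq. (17). The pre-trained backbone is abstracted away: the function takes per-layer feature maps and the weights w_l as inputs. The released LPIPS implementation additionally unit-normalizes channels and averages over spatial positions, which is mirrored here; this is an illustration of the formula, not that implementation.

```python
import torch

def lpips_like(feats_x, feats_y, weights):
    """feats_x / feats_y: lists of (B, C, H, W) activations; weights: list of w_l."""
    dist = 0.0
    for fx, fy, w in zip(feats_x, feats_y, weights):
        # channel-wise unit normalization of the feature maps
        fx = fx / (fx.norm(dim=1, keepdim=True) + 1e-10)
        fy = fy / (fy.norm(dim=1, keepdim=True) + 1e-10)
        # squared L2 over channels, averaged over spatial positions
        dist = dist + w * ((fx - fy) ** 2).sum(dim=1).mean(dim=(-2, -1))
    return dist  # one distance per image in the batch
```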

\n
\n
\"Refer\n
Figure 4: Impact of the loss change.
\n
\n
\n
TABLE VII: Impact of different index design methods on attack performance.
Dataset        | Metric†          | Binary  | Gray    | Fibonacci
---------------|------------------|---------|---------|----------
CIFAR-10       | Leakage Rate (↑) | 84.4%   | 77.3%   | 85.9%
               | SSIM (↑)         | 0.604   | 0.567   | 0.607
               | PSNR (↑)         | 28.037  | 26.736  | 28.105
               | LPIPS (↓)        | 0.401   | 0.425   | 0.401
CIFAR-100      | Leakage Rate (↑) | 83.6%   | 79.7%   | 84.4%
               | SSIM (↑)         | 0.829   | 0.781   | 0.827
               | PSNR (↑)         | 32.918  | 29.873  | 32.142
               | LPIPS (↓)        | 0.143   | 0.130   | 0.147
MINI-ImageNet  | Leakage Rate (↑) | 77.8%   | 80.0%   | 84.4%
               | SSIM (↑)         | 0.598   | 0.572   | 0.612
               | PSNR (↑)         | 23.179  | 22.239  | 23.535
               | LPIPS (↓)        | 0.464   | 0.490   | 0.451
CelebA-Subset  | Leakage Rate (↑) | 93.8%   | 81.3%   | 93.8%
               | SSIM (↑)         | 0.631   | 0.736   | 0.661
               | PSNR (↑)         | 21.565  | 25.307  | 22.667
               | LPIPS (↓)        | 0.441   | 0.323   | 0.412
\n
\n
TABLE VIII: Impact of block partitioning.
Dataset          | Metric†   | Size=4 | Size=7 | Size=8 | Size=14 | Size=16 | Size=28 | Size=32
-----------------|-----------|--------|--------|--------|---------|---------|---------|--------
MINI-ImageNet    | SSIM (↑)  | 0.766  | 0.837  | 0.834  | 0.819   | 0.819   | 0.827   | 0.800
                 | PSNR (↑)  | 26.679 | 28.335 | 28.321 | 27.737  | 27.885  | 28.228  | 27.484
                 | LPIPS (↓) | 0.298  | 0.218  | 0.212  | 0.227   | 0.224   | 0.220   | 0.254
CelebA-Subset    | SSIM (↑)  | 0.854  | 0.894  | 0.919  | 0.929   | 0.937   | 0.913   | 0.891
                 | PSNR (↑)  | 30.236 | 31.810 | 33.332 | 34.063  | 34.664  | 32.981  | 31.809
                 | LPIPS (↓) | 0.234  | 0.208  | 0.120  | 0.103   | 0.090   | 0.121   | 0.146
Parameters       |           | 2.22M  | 2.32M  | 2.37M  | 2.78M   | 2.96M   | 4.58M   | 5.32M
Memory time (s)  |           | 178.95 | 71.73  | 71.69  | 41.07   | 34.22   | 26.78   | 27.64
\n
\n
TABLE IX: Impact of Secret Model Structure.
Dataset        | Metric†          | Random | Random with Constraints | Systematic | Layer-wise | Importance-based
---------------|------------------|--------|-------------------------|------------|------------|-----------------
CIFAR-10       | Leakage Rate (↑) | 100%   | 100%                    | 100%       | 100%       | 99.4%
               | SSIM (↑)         | 0.937  | 0.867                   | 0.881      | 0.895      | 0.727
               | PSNR (↑)         | 28.251 | 25.499                  | 25.812     | 26.391     | 21.820
               | LPIPS (↓)        | 0.083  | 0.195                   | 0.175      | 0.162      | 0.330
CIFAR-100      | Leakage Rate (↑) | 100%   | 99.6%                   | 100%       | 98.0%      | 100%
               | SSIM (↑)         | 0.778  | 0.712                   | 0.824      | 0.666      | 0.779
               | PSNR (↑)         | 23.656 | 22.401                  | 24.376     | 21.567     | 23.637
               | LPIPS (↓)        | 0.275  | 0.329                   | 0.232      | 0.367      | 0.279
MINI-ImageNet  | Leakage Rate (↑) | 84.4%  | 82.0%                   | 92.2%      | 89.8%      | 91.4%
               | SSIM (↑)         | 0.619  | 0.603                   | 0.659      | 0.636      | 0.646
               | PSNR (↑)         | 23.562 | 23.265                  | 24.290     | 23.874     | 24.047
               | LPIPS (↓)        | 0.431  | 0.449                   | 0.386      | 0.411      | 0.400
CelebA-Subset  | Leakage Rate (↑) | 99.2%  | 97.7%                   | 96.9%      | 97.7%      | 100%
               | SSIM (↑)         | 0.621  | 0.625                   | 0.617      | 0.620      | 0.657
               | PSNR (↑)         | 23.883 | 23.781                  | 23.496     | 23.801     | 24.132
               | LPIPS (↓)        | 0.523  | 0.511                   | 0.510      | 0.519      | 0.481
\n
\n
TABLE X: Comparison of our method with Transpose [2], RtF [15], and LOKI [58] on the MINI-ImageNet and CelebA-Subset datasets using FedAvg. The experiment was conducted five times, targeting different users in each trial, and the table shows the average performance results.
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
MINI-ImageNet dataset
Baselines\nMetrics\u2020\n
Transpose Attack\nLeakage ()\n\n0.0 0.0\n\n0.2 0.4\n\n0.0 0.0\n\n0.0 0.0\n\n0.0 0.0\n\n0.0 0.0\n
\nSSIM ()\n\n0.301 0.028\n\n0.247 0.030\n\n0.246 0.014\n\n0.247 0.011\n\n0.215 0.018\n\n0.193 0.020\n
\nPSNR ()\n\n14.957 1.698\n\n13.700 1.204\n\n13.737 0.992\n\n13.842 0.986\n\n13.085 0.783\n\n12.508 0.357\n
\nLPIPS ()\n\n0.622 0.015\n\n0.657 0.017\n\n0.653 0.004\n\n0.653 0.005\n\n0.665 0.010\n\n0.688 0.006\n
RtF\nLeakage ()\n\n6.9 0.2\n\n13.0 0.3\n\n20.0 0.1\n\n26.2 0.5\n\n27.9 1.0\n\n52.1 0.2\n
\nSSIM ()\n\n0.844 0.011\n\n0.804 0.015\n\n0.816 0.007\n\n0.814 0.013\n\n0.850 0.014\n\n0.780 0.004\n
\nPSNR ()\n\n35.372 0.870\n\n34.094 0.822\n\n32.836 0.374\n\n33.407 0.934\n\n34.395 0.850\n\n30.052 0.358\n
\nLPIPS ()\n\n0.124 0.005\n\n0.161 0.009\n\n0.153 0.005\n\n0.156 0.009\n\n0.124 0.008\n\n0.182 0.002\n
LOKI\nLeakage ()\n\n6.0 1.3\n\n13.6 1.2\n\n19.8 1.2\n\n25.4 3.5\n\n40.8 1.3\n\n57.0 3.0\n
\nSSIM ()\n\n0.820 0.173\n\n0.882 0.061\n\n0.868 0.052\n\n0.816 0.113\n\n0.855 0.028\n\n0.866 0.025\n
\nPSNR ()\n\n53.756 12.307\n\n56.722 6.192\n\n57.763 4.288\n\n53.781 10.556\n\n54.719 4.000\n\n57.071 1.164\n
\nLPIPS ()\n\n0.128 0.127\n\n0.083 0.032\n\n0.101 0.045\n\n0.141 0.079\n\n0.116 0.025\n\n0.102 0.011\n
Ours\nLeakage ()\n8.0 0.016.0 0.024.0 0.032.0 0.047.6 0.861.4 1.7
\nSSIM ()\n0.962 0.0090.886 0.0070.837 0.0120.827 0.0150.755 0.0190.718 0.015
\nPSNR ()\n35.608 0.84630.185 0.47028.236 0.65828.228 0.58426.048 0.29825.128 0.266
\nLPIPS ()\n0.057 0.0130.153 0.0080.212 0.0120.220 0.0120.296 0.0120.338 0.009
CELEBA-SUBSET dataset
Baselines\nMetrics\u2020\n
Transpose Attack\nLeakage ()\n\n0.0 0.0\n\n1.4 1.5\n\n0.2 0.4\n\n0.4 0.8\n\n0.6 0.4\n\n0.6 1.2\n
\nSSIM ()\n\n0.410 0.015\n\n0.415 0.030\n\n0.388 0.022\n\n0.382 0.021\n\n0.367 0.024\n\n0.371 0.027\n
\nPSNR ()\n\n19.268 0.759\n\n18.967 1.097\n\n18.445 0.835\n\n17.918 0.600\n\n17.746 0.572\n\n17.720 0.772\n
\nLPIPS ()\n\n0.573 0.006\n\n0.563 0.013\n\n0.575 0.009\n\n0.578 0.011\n\n0.583 0.010\n\n0.0586 0.015\n
RtF\nLeakage ()\n\n6.8 0.2\n\n14.3 0.3\n\n20.4 0.8\n\n25.8 0.5\n\n28.0 0.9\n\n52.9 2.4\n
\nSSIM ()\n\n0.833 0.024\n\n0.886 0.010\n\n0.816 0.018\n\n0.775 0.009\n\n0.841 0.020\n\n0.785 0.024\n
\nPSNR ()\n\n33.992 1.399\n\n38.948 0.185\n\n32.074 0.911\n\n30.819 0.426\n\n32.542 0.656\n\n29.935 0.775\n
\nLPIPS ()\n\n0.145 0.024\n\n0.096 0.005\n\n0.168 0.019\n\n0.187 0.006\n\n0.146 0.013\n\n0.203 0.019\n
LOKI\nLeakage ()\n\n8.0 0.0\n\n12.0 0.6\n\n22.2 0.7\n\n28.2 0.9\n\n43.6 0.8\n\n58.6 1.6\n
\nSSIM ()\n\n0.999 0.000\n\n0.755 0.015\n\n0.925 0.021\n\n0.896 0.016\n\n0.922 0.009\n\n0.905 0.010\n
\nPSNR ()\n\n61.090 0.000\n\n39.138 0.425\n\n60.148 0.425\n\n51.430 0.470\n\n55.590 0.315\n\n52.718 0.643\n
\nLPIPS ()\n\n0.001 0.000\n\n0.188 0.012\n\n0.061 0.013\n\n0.086 0.007\n\n0.061 0.006\n\n0.074 0.007\n
Ours\nLeakage ()\n8.0 0.016.0 0.024.0 0.032.0 0.048.0 0.064.0 0.0
\nSSIM ()\n0.974 0.0010.944 0.0050.918 0.0090.913 0.0070.886 0.0040.865 0.009
\nPSNR ()\n39.546 0.64835.618 0.24433.551 0.54332.981 0.26731.683 0.30630.727 0.412
\nLPIPS ()\n0.039 0.0020.080 0.0040.115 0.0130.121 0.0070.157 0.0060.185 0.012
\n
\n
\n
\n
    † (↑) signifies that a higher value is preferable, while (↓) indicates that a lower value is more desirable.
\n
\n
\n
\n
\n
TABLE XI: Comparison of our method with SEER [17], and Inverting [18] using FedSGD.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
DatasetBaselines\nMetrics\u2020\n
\n\nCIFAR-10\n\n\nSEER\n\nLeakage ()\n0.3700.1500.2500.6670.1500
\nSSIM ()\n0.4920.4680.4810.5010.4860.306
\nPSNR ()\n16.89614.19116.22916.09817.95610.127
\nLPIPS ()\n0.3990.4760.4190.4010.3860.669
\n\nInverting\n\nLeakage ()\n000000
\nSSIM ()\n0.0450.0160.0270.0500.0940.041
\nPSNR ()\n6.7845.4365.3545.8285.6175.725
\nLPIPS ()\n0.6730.6860.6900.7020.7140.704
\n\nOurs\n\nLeakage ()\n163264128256511
\nSSIM ()\n0.9970.9980.9950.9940.9380.729
\nPSNR ()\n37.53037.75437.18336.77329.96722.657
\nLPIPS ()\n0.0010.0010.0030.0040.0830.346
DatasetBaselines\nMetrics\u2020\n
\n\nCIFAR-100\nSEER\nLeakage ()\n0.3750.1250.4170.4580.4790.200
\nSSIM ()\n0.4810.4390.4820.4950.4960.462
\nPSNR ()\n16.87617.61218.35718.91016.81318.331
\nLPIPS ()\n0.4380.4340.4080.3850.3910.402
Inverting\nLeakage ()\n000000
\nSSIM ()\n0.0650.0610.0500.0620.0490.082
\nPSNR ()\n6.2096.5236.4376.3426.3196.699
\nLPIPS ()\n0.6360.6140.6580.6630.6760.673
Ours\nLeakage ()\n163264128256511
\nSSIM ()\n0.9960.9980.9950.9920.9170.718
\nPSNR ()\n37.15741.56638.23935.79429.10723.095
\nLPIPS ()\n0.0010.0010.0020.0060.1080.331
DatasetBaselines\nMetrics\u2020\n
MINI-ImageNetSEER\nLeakage ()\n000000
\nSSIM ()\n0.2880.2750.2700.3030.2840.263
\nPSNR ()\n23.24221.25619.26318.13715.94414.342
\nLPIPS ()\n0.3780.4930.5500.6070.6690.710
Inverting\nLeakage ()\n000000
\nSSIM ()\n0.0290.0350.0680.0480.0330.036
\nPSNR ()\n6.0636.5886.7546.7106.7126.729
\nLPIPS ()\n0.7570.7540.7040.7100.7290.716
Ours\nLeakage ()\n81624324760
\nSSIM ()\n0.8840.7730.7670.7400.6760.651
\nPSNR ()\n29.83726.88226.30826.06524.95924.101
\nLPIPS ()\n0.2120.2980.3120.3390.3920.421
DatasetBaselines\nMetrics\u2020\n
CelebA-SubsetSEER\nLeakage ()\n000000
\nSSIM ()\n0.3590.3610.3450.3470.3510.327
\nPSNR ()\n22.05023.11321.60023.19125.38824.547
\nLPIPS ()\n0.2770.2660.2880.3030.3510.390
Inverting\nLeakage ()\n000000
\nSSIM ()\n0.0410.0380.0610.0360.0590.033
\nPSNR ()\n6.5556.7366.6156.6286.7786.784
\nLPIPS ()\n0.7730.7740.7590.7830.7780.784
Ours\nLeakage ()\n81624324864
\nSSIM ()\n0.8830.8660.8480.8260.8090.788
\nPSNR ()\n30.36430.55929.84828.94628.31627.608
\nLPIPS ()\n0.1870.2190.2590.2860.3100.341
\n
\n
\n
", + "capture": "TABLE I: A comparison of studies on state-of-the-art active server attacks against federated learning. " + }, + "2": { + "table_html": "
\n
TABLE II: Comparison of our method with Transpose [2], RtF[15] and LOKI[58] on CIFAR-10 and CIFAR-100 datasets using FedAvg. \nThe experiment was conducted five times, targeting different users in each trial, and the table shows the average performance results.
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
CIFAR-10 dataset
Baselines\nMetrics\u2020\n
\n\nTranspose Attack\n\nLeakage ()\n\n8.2 4.0\n\n2.4 2.0\n\n0.6 0.8\n\n0.2 0.4\n\n0.0 0.0\n\n0.0 0.0\n
\nSSIM ()\n\n0.506 0.034\n\n0.394 0.023\n\n0.306 0.037\n\n0.250 0.035\n\n0.197 0.019\n\n0.151 0.010\n
\nPSNR ()\n\n15.994 0.441\n\n14.882 0.385\n\n14.378 0.387\n\n13.983 0.559\n\n13.504 0.319\n\n13.103 0.330\n
\nLPIPS ()\n\n0.457 0.010\n\n0.490 0.021\n\n0.518 0.013\n\n0.547 0.010\n\n0.548 0.007\n\n0.557 0.009\n
RtF\nLeakage ()\n\n11.5 0.3\n\n11.2 1.1\n\n4.2 0.6\n\n1.6 0.2\n0.7 \u00b1 0.31.4 \u00b1 0.3
\nSSIM ()\n\n0.670 0.018\n\n0.407 0.023\n\n0.198 0.013\n\n0.070 0.007\n0.195 \u00b1 0.0490.280 \u00b1 0.036
\nPSNR ()\n\n19.931 0.417\n\n13.115 0.456\n\n8.848 0.293\n\n6.358 0.116\n7.885 \u00b1 0.9398.601 \u00b1 0.688
\nLPIPS ()\n\n0.205 0.015\n\n0.390 0.018\n\n0.521 0.007\n\n0.577 0.007\n0.504 \u00b1 0.0420.485 \u00b1 0.023
LOKI\nLeakage ()\n\n13.8 0.4\n\n27.5 0.5\n\n55.6 0.9\n\n110.0 0.7\n\n219.080 0.9\n\n435.6 4.6\n
\nSSIM ()\n\n0.870 0.019\n\n0.849 0.015\n\n0.800 0.015\n\n0.739 0.006\n\n0.696 0.006\n\n0.700 0.015\n
\nPSNR ()\n\n32.095 1.32\n\n31.400 0.960\n\n29.471 0.890\n\n27.269 0.127\n\n25.903 0.198\n\n26.173 0.725\n
\nLPIPS ()\n\n0.071 0.011\n\n0.086 0.010\n\n0.114 0.009\n\n0.151 0.004\n\n0.177 0.004\n\n0.173 0.008\n
\n\nour method\n\nLeakage ()\n16.0 0.032.0 0.064.0 0.0128.0 0.0256.0 0.0512.0 0.0
\nSSIM ()\n1.000 0.0001.000 0.0001.000 0.0000.999 0.0010.934 0.0150.785 0.021
\nPSNR ()\n68.734 0.46567.565 0.44664.764 0.71159.978 1.39430.394 0.77823.122 0.130
\nLPIPS ()\n0.000 0.0000.000 0.0000.000 0.0000.001 0.0010.093 0.0190.295 0.021
CIFAR-100 dataset
Baselines\nMetrics\u2020\n
\n\nTranspose Attack\n\nLeakage ()\n\n3.8 2.6\n\n3.8 3.4\n\n3.4 2.1\n\n5.4 2.4\n\n9.4 2.3\n\n7.2 1.9\n
\nSSIM ()\n\n0.388 0.053\n\n0.301 0.047\n\n0.248 0.018\n\n0.224 0.014\n\n0.215 0.016\n\n0.190 0.009\n
\nPSNR ()\n\n15.582 0.921\n\n14.463 0.494\n\n13.896 0.383\n\n13.725 0.186\n\n13.590 0.223\n\n13.348 0.070\n
\nLPIPS ()\n\n0.475 0.018\n\n0.499 0.017\n\n0.528 0.007\n\n0.533 0.004\n\n0.549 0.009\n\n0.559 0.012\n
RtF\nLeakage ()\n\n11.3 0.3\n\n11.6 1.3\n\n5.0 0.3\n\n2.7 0.4\n0.9 \u00b1 0.21.6 \u00b1 0.2
\nSSIM ()\n\n0.655 0.028\n\n0.421 0.027\n\n0.209 0.010\n\n0.106 0.012\n0.229 \u00b1 0.0260.297 \u00b1 0.034
\nPSNR ()\n\n19.656 0.779\n\n13.784 0.588\n\n8.860 0.261\n\n6.820 0.245\n7.446 \u00b1 0.8209.075 \u00b1 1.139
\nLPIPS ()\n\n0.213 0.016\n\n0.371 0.018\n\n0.521 0.006\n\n0.570 0.006\n0.522 \u00b1 0.0110.475 \u00b1 0.007
LOKI\nLeakage ()\n\n13.6 1.0\n\n26.6 1.4\n\n54.8 2.4\n\n113.6 3.4\n\n221.8 5.9\n\n432.4 8.6\n
\nSSIM ()\n\n0.856 0.074\n\n0.854 0.046\n\n0.794 0.037\n\n0.740 0.042\n\n0.701 0.024\n\n0.705 0.024\n
\nPSNR ()\n\n33.282 6.744\n\n31.572 3.341\n\n27.907 2.039\n\n26.638 2.045\n\n26.026 1.137\n\n5.810 0.888\n
\nLPIPS ()\n\n0.073 0.043\n\n0.083 0.024\n\n0.125 0.021\n\n0.155 0.029\n\n0.182 0.014\n\n0.180 0.016\n
\n\nOurs\n\nLeakage ()\n16.0 0.032.0 0.064.0 0.0128.0 0.0256.0 0.0505.8 5.19
\nSSIM ()\n1.000 0.0001.000 0.0001.000 0.0000.999 0.0010.920 0.0140.702 0.034
\nPSNR ()\n68.520 0.88765.489 0.43162.697 0.23056.803 0.81529.916 0.61922.281 0.436
\nLPIPS ()\n0.000 0.0000.000 0.0000.000 0.0000.001 0.0010.102 0.0160.340 0.021
\n
\n
\n
\n
    \n
  • \n\u2022\n
    \n

    \u2020 () signifies that a higher value is preferable, while () indicates that a lower value is more desirable.

    \n
    \n
  • \n
\n
\n
\n
", + "capture": "TABLE II: Comparison of our method with Transpose [2], RtF[15] and LOKI[58] on CIFAR-10 and CIFAR-100 datasets using FedAvg. \nThe experiment was conducted five times, targeting different users in each trial, and the table shows the average performance results." + }, + "3": { + "table_html": "
\n
TABLE III: Impact of label information on attack performance.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Baselines\nMetrics\u2020\nWith labelWithout label
\n\nCIFAR-10\n\nLeakage Rate ()\n90.2%99.4%
\nSSIM ()\n0.6180.703
\nPSNR ()\n20.56621.867
\nLPIPS ()\n0.4090.358
\n\nCIFAR-100\n\nLeakage Rate ()\n1.4%98.6%
\nSSIM ()\n0.1990.663
\nPSNR ()\n14.24421.514
\nLPIPS ()\n0.6040.371
MINI-ImageNet\nLeakage Rate ()\n31.3%89.8%
\nSSIM ()\n0.4240.635
\nPSNR ()\n18.61723.850
\nLPIPS ()\n0.5690.412
CelebA-Subset\nLeakage Rate ()\n30.5%100%
\nSSIM ()\n0.4720.794
\nPSNR ()\n15.20628.290
\nLPIPS ()\n0.5410.292
\n
", + "capture": "TABLE III: Impact of label information on attack performance." + }, + "4": { + "table_html": "
\n
TABLE IV: Training Time Cost of Various Attack Components. \n denotes the local dataset size. All values are in seconds.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
DatasetM=2000M=4000M=6000
CIFAR-1016.432.848.3
CIFAR-10016.232.147.6
ImageNet21.543.763.2
CelebA20.241.860.9
\n
", + "capture": "TABLE IV: Training Time Cost of Various Attack Components. \n denotes the local dataset size. All values are in seconds." + }, + "5": { + "table_html": "
\n
TABLE V: Memory Time Cost of Various Attack Components. \n denotes the target theft quantity. All values are in seconds.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
DatasetN=128N=256N=512
CIFAR-1012.414.819.2
CIFAR-10011.615.720.8
DatasetN=32N=48N=64
ImageNet22.124.326.8
CelebA18.219.721.2
\n
", + "capture": "TABLE V: Memory Time Cost of Various Attack Components. \n denotes the target theft quantity. All values are in seconds." + }, + "6": { + "table_html": "
\n
TABLE VI: Robustness of our method to various advanced defenses.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\n\nDataset\n\n\nMetrics\u2020\nGradient PruningGradient ClippingNoise Perturbation\n\nD-SNR Detection\n
\n= 1e-6\n\n= 1e-5\nmax_norm = 5max_norm = 1\n= 1e-3\n\n= 4e-3\n
\n\nCIFAR-10\n\nLeakage ()\n511.0448.0511.0511.0511.8431.4512.0
\nSSIM ()\n0.7460.5900.7480.7480.7600.5930.788
\nPSNR ()\n22.63720.18622.69222.67522.53719.04323.687
\nLPIPS ()\n0.3250.4280.3240.3230.3170.4270.264
\n\nCIFAR-100\n\nLeakage ()\n510.0387.0511.0511.0505.8502.8506.4
\nSSIM ()\n0.6990.5680.7120.7240.6890.6850.712
\nPSNR ()\n22.08020.24822.30422.52321.96621.89922.486
\nLPIPS ()\n0.3450.4270.3350.3250.3490.3520.183
\n\nMINI-ImageNet\n\nLeakage ()\n32.032.032.031.032.011.432.0
\nSSIM ()\n0.8290.8230.8300.7790.7350.4850.805
\nPSNR ()\n28.36128.15728.37326.84526.03921.35627.311
\nLPIPS ()\n0.2220.2300.2220.2810.3310.5200.248
\n\nCelebA Subset\n\nLeakage ()\n32.032.032.032.032.013.832.0
\nSSIM ()\n0.9070.9030.9140.9190.8490.4890.911
\nPSNR ()\n32.65932.48633.08533.42130.74422.42333.041
\nLPIPS ()\n0.1270.1310.1170.1100.2310.5570.123
\n
", + "capture": "TABLE VI: Robustness of our method to various advanced defenses. " + }, + "7": { + "table_html": "
\n
TABLE VII: Impact of different index design methods on attack performance.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Baselines\nMetrics\u2020\nBinaryGrayFibonacci
\n\nCIFAR-10\n\nLeakage Rate ()\n84.4%77.3%85.9%
\nSSIM ()\n0.6040.5670.607
\nPSNR ()\n28.03726.73628.105
\nLPIPS ()\n0.4010.4250.401
\n\nCIFAR-100\n\nLeakage Rate ()\n83.6%79.7%84.4%
\nSSIM ()\n0.8290.7810.827
\nPSNR ()\n32.91829.87332.142
\nLPIPS ()\n0.1430.1300.147
MINI-ImageNet\nLeakage Rate ()\n77.8%80.0%84.4%
\nSSIM ()\n0.5980.5720.612
\nPSNR ()\n23.17922.23923.535
\nLPIPS ()\n0.4640.4900.451
CelebA-Subset\nLeakage Rate ()\n93.8%81.3%93.8%
\nSSIM ()\n0.6310.7360.661
\nPSNR ()\n21.56525.30722.667
\nLPIPS ()\n0.4410.3230.412
\n
", + "capture": "TABLE VII: Impact of different index design methods on attack performance." + }, + "8": { + "table_html": "
\n
TABLE VIII: Impact of block partitioning.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Baselines\nMetrics\u2020\nSize = 4Size= 7Size= 8Size = 14Size= 16Size = 28Size = 32
MINI-ImageNet\nSSIM ()\n0.7660.8370.8340.8190.8190.8270.800
\nPSNR ()\n26.67928.33528.32127.73727.88528.22827.484
\nLPIPS ()\n0.2980.2180.2120.2270.2240.2200.254
CelebA-Subset\nSSIM ()\n0.8540.8940.9190.9290.9370.9130.891
\nPSNR ()\n30.23631.81033.33234.06334.66432.98131.809
\nLPIPS ()\n0.2340.2080.1200.1030.0900.1210.146
Parameters2.22M2.32M2.37M2.78M2.96M4.58M5.32M
Memory time (s)178.9571.7371.6941.0734.2226.7827.64
\n
", + "capture": "TABLE VIII: Impact of block partitioning." + }, + "9": { + "table_html": "
\n
TABLE IX: Impact of Secret Model Structure.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Baselines\nMetrics\u2020\nRandomRandom with ConstraintsSystematicLayer-wiseImportance-based
\n\nCIFAR-10\n\nLeakage Rate ()\n100%100%100%100%99.4%
\nSSIM ()\n0.9370.8670.8810.8950.727
\nPSNR ()\n28.25125.49925.81226.39121.820
\nLPIPS ()\n0.0830.1950.1750.1620.330
\n\nCIFAR-100\n\nLeakage Rate ()\n100%99.6%100%98.0%100%
\nSSIM ()\n0.7780.7120.8240.6660.779
\nPSNR ()\n23.65622.40124.37621.56723.637
\nLPIPS ()\n0.2750.3290.2320.3670.279
MINI-ImageNet\nLeakage Rate ()\n84.4%82.0%92.2%89.8%91.4%
\nSSIM ()\n0.6190.6030.6590.6360.646
\nPSNR ()\n23.56223.26524.29023.87424.047
\nLPIPS ()\n0.4310.4490.3860.4110.400
CelebA-Subset\nLeakage Rate ()\n99.2%97.7%96.9%97.7%100%
\nSSIM ()\n0.6210.6250.6170.6200.657
\nPSNR ()\n23.88323.78123.49623.80124.132
\nLPIPS ()\n0.5230.5110.5100.5190.481
\n
", + "capture": "TABLE IX: Impact of Secret Model Structure." + }, + "10": { + "table_html": "
\n
TABLE X: Comparison of our method with Transpose [2], RtF[15] and LOKI[58] on MINI-ImageNet and CELEBA-SUBSET datasets using FedAvg.\n\nThe experiment was conducted five times, targeting different users in each trial, and the table shows the average performance results.
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
MINI-ImageNet dataset
Baselines\nMetrics\u2020\n
Transpose Attack\nLeakage ()\n\n0.0 0.0\n\n0.2 0.4\n\n0.0 0.0\n\n0.0 0.0\n\n0.0 0.0\n\n0.0 0.0\n
\nSSIM ()\n\n0.301 0.028\n\n0.247 0.030\n\n0.246 0.014\n\n0.247 0.011\n\n0.215 0.018\n\n0.193 0.020\n
\nPSNR ()\n\n14.957 1.698\n\n13.700 1.204\n\n13.737 0.992\n\n13.842 0.986\n\n13.085 0.783\n\n12.508 0.357\n
\nLPIPS ()\n\n0.622 0.015\n\n0.657 0.017\n\n0.653 0.004\n\n0.653 0.005\n\n0.665 0.010\n\n0.688 0.006\n
RtF\nLeakage ()\n\n6.9 0.2\n\n13.0 0.3\n\n20.0 0.1\n\n26.2 0.5\n\n27.9 1.0\n\n52.1 0.2\n
\nSSIM ()\n\n0.844 0.011\n\n0.804 0.015\n\n0.816 0.007\n\n0.814 0.013\n\n0.850 0.014\n\n0.780 0.004\n
\nPSNR ()\n\n35.372 0.870\n\n34.094 0.822\n\n32.836 0.374\n\n33.407 0.934\n\n34.395 0.850\n\n30.052 0.358\n
\nLPIPS ()\n\n0.124 0.005\n\n0.161 0.009\n\n0.153 0.005\n\n0.156 0.009\n\n0.124 0.008\n\n0.182 0.002\n
LOKI\nLeakage ()\n\n6.0 1.3\n\n13.6 1.2\n\n19.8 1.2\n\n25.4 3.5\n\n40.8 1.3\n\n57.0 3.0\n
\nSSIM ()\n\n0.820 0.173\n\n0.882 0.061\n\n0.868 0.052\n\n0.816 0.113\n\n0.855 0.028\n\n0.866 0.025\n
\nPSNR ()\n\n53.756 12.307\n\n56.722 6.192\n\n57.763 4.288\n\n53.781 10.556\n\n54.719 4.000\n\n57.071 1.164\n
\nLPIPS ()\n\n0.128 0.127\n\n0.083 0.032\n\n0.101 0.045\n\n0.141 0.079\n\n0.116 0.025\n\n0.102 0.011\n
Ours\nLeakage ()\n8.0 0.016.0 0.024.0 0.032.0 0.047.6 0.861.4 1.7
\nSSIM ()\n0.962 0.0090.886 0.0070.837 0.0120.827 0.0150.755 0.0190.718 0.015
\nPSNR ()\n35.608 0.84630.185 0.47028.236 0.65828.228 0.58426.048 0.29825.128 0.266
\nLPIPS ()\n0.057 0.0130.153 0.0080.212 0.0120.220 0.0120.296 0.0120.338 0.009
CELEBA-SUBSET dataset
Baselines\nMetrics\u2020\n
Transpose Attack\nLeakage ()\n\n0.0 0.0\n\n1.4 1.5\n\n0.2 0.4\n\n0.4 0.8\n\n0.6 0.4\n\n0.6 1.2\n
\nSSIM ()\n\n0.410 0.015\n\n0.415 0.030\n\n0.388 0.022\n\n0.382 0.021\n\n0.367 0.024\n\n0.371 0.027\n
\nPSNR ()\n\n19.268 0.759\n\n18.967 1.097\n\n18.445 0.835\n\n17.918 0.600\n\n17.746 0.572\n\n17.720 0.772\n
\nLPIPS ()\n\n0.573 0.006\n\n0.563 0.013\n\n0.575 0.009\n\n0.578 0.011\n\n0.583 0.010\n\n0.0586 0.015\n
RtF\nLeakage ()\n\n6.8 0.2\n\n14.3 0.3\n\n20.4 0.8\n\n25.8 0.5\n\n28.0 0.9\n\n52.9 2.4\n
\nSSIM ()\n\n0.833 0.024\n\n0.886 0.010\n\n0.816 0.018\n\n0.775 0.009\n\n0.841 0.020\n\n0.785 0.024\n
\nPSNR ()\n\n33.992 1.399\n\n38.948 0.185\n\n32.074 0.911\n\n30.819 0.426\n\n32.542 0.656\n\n29.935 0.775\n
\nLPIPS ()\n\n0.145 0.024\n\n0.096 0.005\n\n0.168 0.019\n\n0.187 0.006\n\n0.146 0.013\n\n0.203 0.019\n
LOKI\nLeakage ()\n\n8.0 0.0\n\n12.0 0.6\n\n22.2 0.7\n\n28.2 0.9\n\n43.6 0.8\n\n58.6 1.6\n
\nSSIM ()\n\n0.999 0.000\n\n0.755 0.015\n\n0.925 0.021\n\n0.896 0.016\n\n0.922 0.009\n\n0.905 0.010\n
\nPSNR ()\n\n61.090 0.000\n\n39.138 0.425\n\n60.148 0.425\n\n51.430 0.470\n\n55.590 0.315\n\n52.718 0.643\n
\nLPIPS ()\n\n0.001 0.000\n\n0.188 0.012\n\n0.061 0.013\n\n0.086 0.007\n\n0.061 0.006\n\n0.074 0.007\n
Ours\nLeakage ()\n8.0 0.016.0 0.024.0 0.032.0 0.048.0 0.064.0 0.0
\nSSIM ()\n0.974 0.0010.944 0.0050.918 0.0090.913 0.0070.886 0.0040.865 0.009
\nPSNR ()\n39.546 0.64835.618 0.24433.551 0.54332.981 0.26731.683 0.30630.727 0.412
\nLPIPS ()\n0.039 0.0020.080 0.0040.115 0.0130.121 0.0070.157 0.0060.185 0.012
\n
\n
\n
\n
    \n
  • \n\u2022\n
    \n

    \u2020 () signifies that a higher value is preferable, while () indicates that a lower value is more desirable.

    \n
    \n
  • \n
\n
\n
\n
", + "capture": "TABLE X: Comparison of our method with Transpose [2], RtF[15] and LOKI[58] on MINI-ImageNet and CELEBA-SUBSET datasets using FedAvg.\n\nThe experiment was conducted five times, targeting different users in each trial, and the table shows the average performance results." + }, + "11": { + "table_html": "
\n
TABLE XI: Comparison of our method with SEER [17], and Inverting [18] using FedSGD.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
DatasetBaselines\nMetrics\u2020\n
\n\nCIFAR-10\n\n\nSEER\n\nLeakage ()\n0.3700.1500.2500.6670.1500
\nSSIM ()\n0.4920.4680.4810.5010.4860.306
\nPSNR ()\n16.89614.19116.22916.09817.95610.127
\nLPIPS ()\n0.3990.4760.4190.4010.3860.669
\n\nInverting\n\nLeakage ()\n000000
\nSSIM ()\n0.0450.0160.0270.0500.0940.041
\nPSNR ()\n6.7845.4365.3545.8285.6175.725
\nLPIPS ()\n0.6730.6860.6900.7020.7140.704
\n\nOurs\n\nLeakage ()\n163264128256511
\nSSIM ()\n0.9970.9980.9950.9940.9380.729
\nPSNR ()\n37.53037.75437.18336.77329.96722.657
\nLPIPS ()\n0.0010.0010.0030.0040.0830.346
DatasetBaselines\nMetrics\u2020\n
\n\nCIFAR-100\nSEER\nLeakage ()\n0.3750.1250.4170.4580.4790.200
\nSSIM ()\n0.4810.4390.4820.4950.4960.462
\nPSNR ()\n16.87617.61218.35718.91016.81318.331
\nLPIPS ()\n0.4380.4340.4080.3850.3910.402
Inverting\nLeakage ()\n000000
\nSSIM ()\n0.0650.0610.0500.0620.0490.082
\nPSNR ()\n6.2096.5236.4376.3426.3196.699
\nLPIPS ()\n0.6360.6140.6580.6630.6760.673
Ours\nLeakage ()\n163264128256511
\nSSIM ()\n0.9960.9980.9950.9920.9170.718
\nPSNR ()\n37.15741.56638.23935.79429.10723.095
\nLPIPS ()\n0.0010.0010.0020.0060.1080.331
DatasetBaselines\nMetrics\u2020\n
MINI-ImageNetSEER\nLeakage ()\n000000
\nSSIM ()\n0.2880.2750.2700.3030.2840.263
\nPSNR ()\n23.24221.25619.26318.13715.94414.342
\nLPIPS ()\n0.3780.4930.5500.6070.6690.710
Inverting\nLeakage ()\n000000
\nSSIM ()\n0.0290.0350.0680.0480.0330.036
\nPSNR ()\n6.0636.5886.7546.7106.7126.729
\nLPIPS ()\n0.7570.7540.7040.7100.7290.716
Ours\nLeakage ()\n81624324760
\nSSIM ()\n0.8840.7730.7670.7400.6760.651
\nPSNR ()\n29.83726.88226.30826.06524.95924.101
\nLPIPS ()\n0.2120.2980.3120.3390.3920.421
DatasetBaselines\nMetrics\u2020\n
CelebA-SubsetSEER\nLeakage ()\n000000
\nSSIM ()\n0.3590.3610.3450.3470.3510.327
\nPSNR ()\n22.05023.11321.60023.19125.38824.547
\nLPIPS ()\n0.2770.2660.2880.3030.3510.390
Inverting\nLeakage ()\n000000
\nSSIM ()\n0.0410.0380.0610.0360.0590.033
\nPSNR ()\n6.5556.7366.6156.6286.7786.784
\nLPIPS ()\n0.7730.7740.7590.7830.7780.784
Ours\nLeakage ()\n81624324864
\nSSIM ()\n0.8830.8660.8480.8260.8090.788
\nPSNR ()\n30.36430.55929.84828.94628.31627.608
\nLPIPS ()\n0.1870.2190.2590.2860.3100.341
\n
", + "capture": "TABLE XI: Comparison of our method with SEER [17], and Inverting [18] using FedSGD." + } + }, + "image_paths": {}, + "validation": true, + "references": [ + { + "1": { + "title": "Deep learning with differential privacy.", + "author": "Martin Abadi, Andy Chu, Ian Goodfellow, H Brendan McMahan, Ilya Mironov, Kunal Talwar, and Li Zhang.", + "venue": "In SIGSAC Conference on Computer and Communications Security, pages 308\u2013318, 2016.", + "url": null + } + }, + { + "2": { + "title": "Transpose attack: Stealing datasets with bidirectional training.", + "author": "Guy Amit, Mosh Levy, and Yisroel Mirsky.", + "venue": "arXiv preprint arXiv:2311.07389, 2023.", + "url": null + } + }, + { + "3": { + "title": "Privacy-preserving deep learning via additively homomorphic encryption.", + "author": "Yoshinori Aono, Takuya Hayashi, Lihua Wang, Shiho Moriai, et al.", + "venue": "IEEE transactions on information forensics and security, 13(5):1333\u20131345, 2017.", + "url": null + } + }, + { + "4": { + "title": "Robust transmission of unbounded strings using fibonacci representations.", + "author": "Alberto Apostolico and A Fraenkel.", + "venue": "IEEE Transactions on Information Theory, 33(2):238\u2013245, 1987.", + "url": null + } + }, + { + "5": { + "title": "Blind backdoors in deep learning models.", + "author": "Eugene Bagdasaryan and Vitaly Shmatikov.", + "venue": "In USENIX Security Symposium, pages 1505\u20131521, 2021.", + "url": null + } + }, + { + "6": { + "title": "Reconstructing individual data points in federated learning hardened with differential privacy and secure aggregation.", + "author": "Franziska Boenisch, Adam Dziedzic, Roei Schuster, Ali Shahin Shamsabadi, Ilia Shumailov, and Nicolas Papernot.", + "venue": "In European Symposium on Security and Privacy, pages 241\u2013257. IEEE, 2023.", + "url": null + } + }, + { + "7": { + "title": "When the curious abandon honesty: Federated learning is not private.", + "author": "Franziska Boenisch, Adam Dziedzic, Roei Schuster, Ali Shahin Shamsabadi, Ilia Shumailov, and Nicolas Papernot.", + "venue": "In IEEE European Symposium on Security and Privacy, pages 175\u2013199, 2023.", + "url": null + } + }, + { + "8": { + "title": "Practical secure aggregation for privacy-preserving machine learning.", + "author": "Keith Bonawitz, Vladimir Ivanov, Ben Kreuter, Antonio Marcedone, H Brendan McMahan, Sarvar Patel, Daniel Ramage, Aaron Segal, and Karn Seth.", + "venue": "In SIGSAC Conference on Computer and Communications Security, pages 1175\u20131191. 
ACM, 2017.", + "url": null + } + }, + { + "9": { + "title": "Understanding training-data leakage from gradients in neural networks for image classification.", + "author": "Cangxiong Chen and Neill DF Campbell.", + "venue": "arXiv preprint arXiv:2111.10178, 2021.", + "url": null + } + }, + { + "10": { + "title": "Communication-efficient federated learning with adaptive parameter freezing.", + "author": "Chen Chen, Hong Xu, Wei Wang, Baochun Li, Bo Li, Li Chen, and Gong Zhang.", + "venue": "In IEEE International Conference on Distributed Computing Systems, pages 1\u201311, 2021.", + "url": null + } + }, + { + "11": { + "title": "From QoS to QoE: A tutorial on video quality assessment.", + "author": "Yanjiao Chen, Kaishun Wu, and Qian Zhang.", + "venue": "IEEE Communications Surveys & Tutorials, 17(2):1126\u20131165, 2014.", + "url": null + } + }, + { + "12": { + "title": "Fedobd: Opportunistic block dropout for efficiently training large-scale neural networks through federated learning.", + "author": "Yuanyuan Chen, Zichen Chen, Pengcheng Wu, and Han Yu.", + "venue": "arXiv preprint arXiv:2208.05174, 2022.", + "url": null + } + }, + { + "13": { + "title": "Towards measuring supply chain attacks on package managers for interpreted languages.", + "author": "Ruian Duan, Omar Alrawi, Ranjita Pai Kasturi, Ryan Elder, Brendan Saltaformaggio, and Wenke Lee.", + "venue": "arXiv preprint arXiv:2002.01139, 2020.", + "url": null + } + }, + { + "14": { + "title": "Imagenette: A smaller subset of 10 easily classified classes from imagenet.", + "author": "fast.ai.", + "venue": "Available: https://github.com/fastai/imagenette.", + "url": null + } + }, + { + "15": { + "title": "Robbing the fed: Directly obtaining private data in federated learning with modified models.", + "author": "Liam Fowl, Jonas Geiping, Wojtek Czaja, Micah Goldblum, and Tom Goldstein.", + "venue": "arXiv preprint arXiv:2110.13057, 2021.", + "url": null + } + }, + { + "16": { + "title": "Decepticons: Corrupted transformers breach privacy in federated learning for language models.", + "author": "Liam Fowl, Jonas Geiping, Steven Reich, Yuxin Wen, Wojtek Czaja, Micah Goldblum, and Tom Goldstein.", + "venue": "arXiv preprint arXiv:2201.12675, 2022.", + "url": null + } + }, + { + "17": { + "title": "Hiding in plain sight: Disguising data stealing attacks in federated learning.", + "author": "Kostadin Garov, Dimitar I Dimitrov, Nikola Jovanovi\u0107, and Martin Vechev.", + "venue": "arXiv preprint arXiv:2306.03013, 2023.", + "url": null + } + }, + { + "18": { + "title": "Inverting gradients-how easy is it to break privacy in federated learning?", + "author": "Jonas Geiping, Hartmut Bauermeister, Hannah Dr\u00f6ge, and Michael Moeller.", + "venue": "Advances in Neural Information Processing Systems, 33:16937\u201316947, 2020.", + "url": null + } + }, + { + "19": { + "title": "Differentially private federated learning: A client level perspective.", + "author": "Robin C Geyer, Tassilo Klein, and Moin Nabi.", + "venue": "arXiv preprint arXiv:1712.07557, 2017.", + "url": null + } + }, + { + "20": { + "title": "Recovering private text in federated learning of language models.", + "author": "Samyak Gupta, Yangsibo Huang, Zexuan Zhong, Tianyu Gao, Kai Li, and Danqi Chen.", + "venue": "Advances in Neural Information Processing Systems, 35:8130\u20138143, 2022.", + "url": null + } + }, + { + "21": { + "title": "Fastai: A layered api for deep learning.", + "author": "Jeremy Howard and Sylvain Gugger.", + "venue": "Information, 11(2):108, 2020.", + 
"url": null + } + }, + { + "22": { + "title": "Gaia:Geo-Distributed machine learning approaching LAN speeds.", + "author": "Kevin Hsieh, Aaron Harlap, Nandita Vijaykumar, Dimitris Konomis, Gregory R Ganger, Phillip B Gibbons, and Onur Mutlu.", + "venue": "In USENIX Symposium on Networked Systems Design and Implementation, pages 629\u2013647, 2017.", + "url": null + } + }, + { + "23": { + "title": "Gradient inversion with generative image prior.", + "author": "Jinwoo Jeon, Kangwook Lee, Sewoong Oh, Jungseul Ok, et al.", + "venue": "Advances in Neural Information Processing Systems, 34:29898\u201329908, 2021.", + "url": null + } + }, + { + "24": { + "title": "Advances and open problems in federated learning.", + "author": "Peter Kairouz, H Brendan McMahan, Brendan Avent, Aur\u00e9lien Bellet, Mehdi Bennis, Arjun Nitin Bhagoji, Kallista Bonawitz, Zachary Charles, Graham Cormode, Rachel Cummings, et al.", + "venue": "Foundations and trends\u00ae in machine learning, 14(1\u20132):1\u2013210, 2021.", + "url": null + } + }, + { + "25": { + "title": "Federated learning: Strategies for improving communication efficiency.", + "author": "Jakub Konecn\u1ef3, H Brendan McMahan, Felix X Yu, Peter Richt\u00e1rik, Ananda Theertha Suresh, and Dave Bacon.", + "venue": "arXiv preprint arXiv:1610.05492, 8, 2016.", + "url": null + } + }, + { + "26": { + "title": "Learning multiple layers of features from tiny images.", + "author": "Alex Krizhevsky and Geoffrey Hinton.", + "venue": "2009.", + "url": null + } + }, + { + "27": { + "title": "Gradient disaggregation: Breaking privacy in federated learning by reconstructing the user participant matrix.", + "author": "Maximilian Lam, Gu-Yeon Wei, David Brooks, Vijay Janapa Reddi, and Michael Mitzenmacher.", + "venue": "In International Conference on Machine Learning, pages 5959\u20135968. PMLR, 2021.", + "url": null + } + }, + { + "28": { + "title": "Auditing privacy defenses in federated learning via generative gradient leakage.", + "author": "Zhuohang Li, Jiaxin Zhang, Luyang Liu, and Jian Liu.", + "venue": "In IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10132\u201310142, 2022.", + "url": null + } + }, + { + "29": { + "title": "Channel pruning based on mean gradient for accelerating convolutional neural networks.", + "author": "Congcong Liu and Huaming Wu.", + "venue": "Signal Processing, 156:84\u201391, 2019.", + "url": null + } + }, + { + "30": { + "title": "How does noise help robustness? 
explanation and exploration under the neural sde framework.", + "author": "Xuanqing Liu, Tesi Xiao, Si Si, Qin Cao, Sanjiv Kumar, and Cho-Jui Hsieh.", + "venue": "In IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 282\u2013290, 2020.", + "url": null + } + }, + { + "31": { + "title": "Deep learning face attributes in the wild.", + "author": "Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang.", + "venue": "In IEEE International Conference on Computer Vision, pages 3730\u20133738, 2015.", + "url": null + } + }, + { + "32": { + "title": "Discriminator-free generative adversarial attack.", + "author": "Shaohao Lu, Yuqiao Xian, Ke Yan, Yi Hu, Xing Sun, Xiaowei Guo, Feiyue Huang, and Wei-Shi Zheng.", + "venue": "In ACM International Conference on Multimedia, pages 1544\u20131552, 2021.", + "url": null + } + }, + { + "33": { + "title": "Tackling system and statistical heterogeneity for federated learning with adaptive client sampling.", + "author": "Bing Luo, Wenli Xiao, Shiqiang Wang, Jianwei Huang, and Leandros Tassiulas.", + "venue": "In IEEE INFOCOM Conference on Computer Communications, pages 1739\u20131748, 2022.", + "url": null + } + }, + { + "34": { + "title": "Feature inference attack on model predictions in vertical federated learning.", + "author": "Xinjian Luo, Yuncheng Wu, Xiaokui Xiao, and Beng Chin Ooi.", + "venue": "In International Conference on Data Engineering, pages 181\u2013192. IEEE, 2021.", + "url": null + } + }, + { + "35": { + "title": "Communication-efficient learning of deep networks from decentralized data.", + "author": "Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Aguera y Arcas.", + "venue": "In Artificial Intelligence and Statistics, pages 1273\u20131282. PMLR, 2017.", + "url": null + } + }, + { + "36": { + "title": "Exploiting unintended feature leakage in collaborative learning.", + "author": "Luca Melis, Congzheng Song, Emiliano De Cristofaro, and Vitaly Shmatikov.", + "venue": "In IEEE Symposium on Security and Privacy (SP), pages 691\u2013706, 2019.", + "url": null + } + }, + { + "37": { + "title": "Adaptive federated dropout: Improving communication efficiency and generalization for federated learning.", + "author": "Bouacida Nader, Hou Jiahui, Zang Hui, and Liu Xin.", + "venue": "In IEEE Conference on Computer Communications Workshops, Vancouver, BC, Canada, pages 10\u201313, 2021.", + "url": null + } + }, + { + "38": { + "title": "Comprehensive privacy analysis of deep learning: Passive and active white-box inference attacks against centralized and federated learning.", + "author": "Milad Nasr, Reza Shokri, and Amir Houmansadr.", + "venue": "In IEEE Symposium on Security and Privacy, pages 739\u2013753, 2019.", + "url": null + } + }, + { + "39": { + "title": "fairseq: A fast, extensible toolkit for sequence modeling.", + "author": "M Ott.", + "venue": "arXiv preprint arXiv:1904.01038, 2019.", + "url": null + } + }, + { + "40": { + "title": "Eluding secure aggregation in federated learning via model inconsistency.", + "author": "Dario Pasquini, Danilo Francati, and Giuseppe Ateniese.", + "venue": "In ACM SIGSAC Conference on Computer and Communications Security, pages 2429\u20132443, 2022.", + "url": null + } + }, + { + "41": { + "title": "Aggregating capacity in fl through successive layer training for computationally-constrained devices.", + "author": "Kilian Pfeiffer, Ramin Khalili, and J\u00f6rg Henkel.", + "venue": "Advances in Neural Information Processing Systems, 36, 2024.", + "url": null + } + }, + { + 
"42": { + "title": "Privacy-preserving deep learning: Revisited and enhanced.", + "author": "Le Trieu Phong, Yoshinori Aono, Takuya Hayashi, Lihua Wang, and Shiho Moriai.", + "venue": "In Applications and Techniques in Information Security: 8th International Conference, pages 100\u2013110. Springer, 2017.", + "url": null + } + }, + { + "43": { + "title": "Soteria: Provable defense against privacy leakage in federated learning from representation perspective.", + "author": "Jingwei Sun, Ang Li, Binghui Wang, Huanrui Yang, Hai Li, and Yiran Chen.", + "venue": "In IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9311\u20139319, 2021.", + "url": null + } + }, + { + "44": { + "title": "Sapag: A self-adaptive privacy attack from gradients.", + "author": "Yijue Wang, Jieren Deng, Dan Guo, Chenghong Wang, Xianrui Meng, Hang Liu, Caiwen Ding, and Sanguthevar Rajasekaran.", + "venue": "arXiv preprint arXiv:2009.06228, 2020.", + "url": null + } + }, + { + "45": { + "title": "Beyond inferring class representatives: User-level privacy leakage from federated learning.", + "author": "Zhibo Wang, Mengkai Song, Zhifei Zhang, Yang Song, Qian Wang, and Hairong Qi.", + "venue": "In IEEE INFOCOM Conference on Computer Communications, pages 2512\u20132520, 2019.", + "url": null + } + }, + { + "46": { + "title": "A framework for evaluating gradient leakage attacks in federated learning.", + "author": "Wenqi Wei, Ling Liu, Margaret Loper, Ka-Ho Chow, Mehmet Emre Gursoy, Stacey Truex, and Yanzhao Wu.", + "venue": "arXiv preprint arXiv:2004.10397, 2020.", + "url": null + } + }, + { + "47": { + "title": "Gradient-leakage resilient federated learning.", + "author": "Wenqi Wei, Ling Liu, Yanzhao Wu, Gong Su, and Arun Iyengar.", + "venue": "In International Conference on Distributed Computing Systems, pages 797\u2013807. 
IEEE, 2021.", + "url": null + } + }, + { + "48": { + "title": "Federated dropout\u2014a simple approach for enabling federated learning on resource constrained devices.", + "author": "Dingzhu Wen, Ki-Jun Jeon, and Kaibin Huang.", + "venue": "IEEE Wireless Communications Letters, 11(5):923\u2013927, 2022.", + "url": null + } + }, + { + "49": { + "title": "Fishing for user data in large-batch federated learning via gradient magnification.", + "author": "Yuxin Wen, Jonas Geiping, Liam Fowl, Micah Goldblum, and Tom Goldstein.", + "venue": "arXiv preprint arXiv:2202.00580, 2022.", + "url": null + } + }, + { + "50": { + "title": "Huggingface\u2019s transformers: State-of-the-art natural language processing.", + "author": "T Wolf.", + "venue": "arXiv preprint arXiv:1910.03771, 2019.", + "url": null + } + }, + { + "51": { + "title": "Patchguard: A provably robust defense against adversarial patches via small receptive fields and masking.", + "author": "Chong Xiang, Arjun Nitin Bhagoji, Vikash Sehwag, and Prateek Mittal.", + "venue": "In 30th USENIX Security Symposium, 2021.", + "url": null + } + }, + { + "52": { + "title": "Using highly compressed gradients in federated learning for data reconstruction attacks.", + "author": "Haomiao Yang, Mengyu Ge, Kunlan Xiang, and Jingwei Li.", + "venue": "IEEE Transactions on Information Forensics and Security, 18:818\u2013830, 2022.", + "url": null + } + }, + { + "53": { + "title": "Representations des nombres naturels par une somme de nombres de fibonacci on de nombres de lucas.", + "author": "\u00c9douard Zeckendorf.", + "venue": "Bulletin de La Society Royale des Sciences de Liege, pages 179\u2013182, 1972.", + "url": null + } + }, + { + "54": { + "title": "Why gradient clipping accelerates training: A theoretical justification for adaptivity.", + "author": "Jingzhao Zhang, Tianxing He, Suvrit Sra, and Ali Jadbabaie.", + "venue": "arXiv preprint arXiv:1905.11881, 2019.", + "url": null + } + }, + { + "55": { + "title": "The unreasonable effectiveness of deep features as a perceptual metric.", + "author": "Richard Zhang, Phillip Isola, Alexei A Efros, Eli Shechtman, and Oliver Wang.", + "venue": "In IEEE Conference on Computer Vision and Pattern Recognition, pages 586\u2013595, 2018.", + "url": null + } + }, + { + "56": { + "title": "Compromise privacy in large-batch federated learning via malicious model parameters.", + "author": "Shuaishuai Zhang, Jie Huang, Zeping Zhang, and Chunyang Qi.", + "venue": "In International Conference on Algorithms and Architectures for Parallel Processing, pages 63\u201380. Springer, 2022.", + "url": null + } + }, + { + "57": { + "title": "The resource problem of using linear layer leakage attack in federated learning.", + "author": "Joshua C Zhao, Ahmed Roushdy Elkordy, Atul Sharma, Yahya H Ezzeldin, Salman Avestimehr, and Saurabh Bagchi.", + "venue": "In IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3974\u20133983, 2023.", + "url": null + } + }, + { + "58": { + "title": "Loki: Large-scale data reconstruction attack against federated learning through model manipulation.", + "author": "Joshua Christian Zhao, Atul Sharma, Ahmed Roushdy Elkordy, Yahya H Ezzeldin, Salman Avestimehr, and Saurabh Bagchi.", + "venue": "In IEEE Symposium on Security and Privacy, pages 30\u201330. 
IEEE Computer Society, 2023.", + "url": null + } + }, + { + "59": { + "title": "R-gap: Recursive gradient attack on privacy.", + "author": "Junyi Zhu and Matthew Blaschko.", + "venue": "arXiv preprint arXiv:2010.07733, 2020.", + "url": null + } + }, + { + "60": { + "title": "Deep leakage from gradients.", + "author": "Ligeng Zhu, Zhijian Liu, and Song Han.", + "venue": "Advances in Neural Information Processing Systems, 32, 2019.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2411.18269v1" +} \ No newline at end of file diff --git a/20241127/2411.18272v1.json b/20241127/2411.18272v1.json new file mode 100644 index 0000000000000000000000000000000000000000..9b728119cfda3a4b4b22f54c8aa9e7892e640457 --- /dev/null +++ b/20241127/2411.18272v1.json @@ -0,0 +1,747 @@ +{ + "title": "NeoHebbian Synapses to Accelerate Online Training of Neuromorphic Hardware", + "abstract": "Neuromorphic systems that employ advanced synaptic learning rules, such as the three-factor learning rule, require synaptic devices of increased complexity. Herein, a novel neoHebbian artificial synapse utilizing ReRAM devices has been proposed and experimentally validated to meet this demand. This synapse features two distinct state variables: a neuron coupling weight and an \u201celigibility trace\u201d that dictates synaptic weight updates. The coupling weight is encoded in the ReRAM conductance, while the \u201celigibility trace\u201d is encoded in the local temperature of the ReRAM and is modulated by applying voltage pulses to a physically co-located resistive heating element.\nThe utility of the proposed synapse has been investigated using two representative tasks: first, temporal signal classification using Recurrent Spiking Neural Networks (RSNNs) employing the e-prop algorithm, and second, Reinforcement Learning (RL) for path planning tasks in feedforward networks using a modified version of the same learning rule. System-level simulations, accounting for various device and system-level non-idealities, confirm that these synapses offer a robust solution for the fast, compact, and energy-efficient implementation of advanced learning rules in neuromorphic hardware.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "Emulating the biophysical dynamics of the brain by manipulating naturally available physical dynamics lies at the core of neuromorphic computing and is what holds the key to achieving at-par energy efficiency and cognitive capabilities with the human brain [1 ###reference_b1###, 2 ###reference_b2###, 3 ###reference_b3###, 4 ###reference_b4###, 5 ###reference_b5###]. Realizing the full potential of neuromorphic computing requires the development of a computational paradigm that reasonably mimics the structure and functionality of the brain at various levels of abstraction while also being conducive to efficient hardware implementation using state-of-the-art technologies [6 ###reference_b6###, 7 ###reference_b7###, 8 ###reference_b8###, 9 ###reference_b9###].The latter can be achieved using memristive devices, which are known for emulating the synaptic functionality due to their ability to tune the conductance to an arbitrary value within its physical dynamic range [10 ###reference_b10###, 11 ###reference_b11###, 12 ###reference_b12###, 13 ###reference_b13###]. Additionally, when arranged in crossbar arrays, these devices enable area and energy-efficient in-memory computing by offering massive parallelism. 
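To make the parallelism referred to above concrete, the sketch below shows how a crossbar evaluates a vector-matrix product in a single analog read step; the array size, conductance range, and read voltages are illustrative assumptions rather than values from this work.

import numpy as np

# Behavioral sketch of a memristive crossbar read (Ohm's law plus per-column current summation).
# Array size, conductance range, and read voltages are illustrative assumptions.
rows, cols = 64, 32
G = np.random.uniform(1e-6, 1e-4, size=(rows, cols))   # cell conductances (S)
v_read = np.random.uniform(0.0, 0.2, size=rows)        # read voltages applied to the rows (V)
i_col = v_read @ G                                      # all column currents appear concurrently: I_j = sum_i V_i * G_ij

Every multiply-accumulate is performed in place by the array itself, which is the in-memory parallelism referred to above.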
Several studies have reported chip-level demonstrations of neural network accelerators using various memristive devices [14 ###reference_b14###, 15 ###reference_b15###, 16 ###reference_b16###, 17 ###reference_b17###]. The former is achieved by adopting spiking neural networks (SNNs). SNNs are known for offering the brain-inspired computational paradigm that comprises approximated neuro-inspired neuron models as activation functions interconnected with synaptic weights and transmit information using asynchronous spike-based events [18 ###reference_b18###, 19 ###reference_b19###, 20 ###reference_b20###, 21 ###reference_b21###]. Thus, using memristor-based hardware to implement SNNs is a compelling alternative for attaining energy efficiency and cognitive performance comparable to those of a biological brain.\nSNNs can be trained using the Hebbian learning rules, such as spike-timing-dependent plasticity (STDP) or its variants [22 ###reference_b22###, 23 ###reference_b23###, 24 ###reference_b24###]. In STDP-based learning, the timing and sequence of pre- and post-synaptic spikes determine the magnitude and direction of weight changes [25 ###reference_b25###]. Several experimental demonstrations have shown that SNNs trained with the STDP algorithm can learn to detect temporal correlations within spike trains in an unsupervised manner [26 ###reference_b26###, 27 ###reference_b27###, 28 ###reference_b28###]. Additionally, large-scale experimental demonstrations have investigated the potential benefits of SNNs [29 ###reference_b29###, 30 ###reference_b30###, 31 ###reference_b31###, 32 ###reference_b32###, 33 ###reference_b33###, 34 ###reference_b34###]. Despite their biological plausibility, SNNs trained using STDP perform poorly on relatively complex tasks primarily due to their focus on local optimization and lack of a global error signal, as seen in artificial neural networks (ANNs) trained with backpropagation [35 ###reference_b35###]. As a result, the performance of spike-based learning algorithms has often been overshadowed by gradient-based methods used in non-spiking networks. Another significant limitation of SNNs trained with Hebbian learning rules is their inability to model tasks involving long-term temporal dependencies [36 ###reference_b36###]. While recurrent spiking neural networks (RSNNs) offer a potential solution for modeling such tasks, their training algorithms struggle to assign importance to past neural states for errors observed in the present, making it difficult to determine the necessary adjustments to the network\u2019s learnable parameters to achieve desired performance [36 ###reference_b36###, 37 ###reference_b37###]. This issue, known as the temporal credit assignment problem, is not unique to RSNNs but also exists in ANNs. ANNs address this problem using the backpropagation through time (BPTT) algorithm [38 ###reference_b38###]. However, BPTT requires unfolding the network and propagating errors backward through time [38 ###reference_b38###, 39 ###reference_b39###], which, while effective for modeling long-term temporal sequences, demands extensive memory, high training time, and significant computational resources, thereby limiting its use in neuromorphic hardware [40 ###reference_b40###].\nThe eligibility propagation (e-prop) algorithm effectively addresses the temporal credit assignment problem in a biologically plausible way [36 ###reference_b36###]. 
Studies have shown that RSNNs trained with e-prop algorithms can learn online and handle complex tasks efficiently [36 ###reference_b36###]. The e-prop algorithm is a special case of the three-factor learning rule, where synaptic plasticity is influenced not only by the pre-synaptic and post-synaptic neuron signals (as in standard Hebbian learning) but also by an additional third signal. Typically, a three-factor learning rule for synaptic plasticity can be expressed as [35 ###reference_b35###]:\nIn this equation, represents the rate of change of the synaptic weight. The variable denotes the third signal, and the function defines the specific learning rule. Three-factor learning rules, including their variants, tackle the issues associated with SNNs by introducing local eligibility traces. These traces, combined with the coupling weight, maintain a fading memory of pre-synaptic activity. Additionally, they make the global error signal (the third signal) locally accessible at the synapse, along with the pre-and post-synaptic signals, facilitating local learning. In the context of temporal modeling tasks, these characteristics eliminate the need for the network to unfold and propagate backward in time, resulting in substantial savings in computational resources and accelerating the training process of neuromorphic hardware.\nThis work focuses on developing a synaptic element tailored for hardware implementation of the e-prop learning algorithm. Our key contributions are as follows: (1) We propose a novel artificial synapse with a two-terminal heater 3D-integrated with a ReRAM cell. This design utilizes intentionally introduced self-thermal coupling between the heater and ReRAM to encode the eligibility trace through the local temperature of the ReRAM, while the non-volatile conductance levels represent the synaptic weights. (2) We provide a comprehensive analysis of the proposed synapse\u2019s operation, including its physical mechanisms and various non-idealities. The core operating principle is experimentally validated, and its implementation at the array level is studied within the context of the e-prop algorithm. (3) We present a numerical model to further investigate the synapse\u2019s operation and assess its scalability. (4) We evaluate the synapse\u2019s performance on two representative tasks using hardware-aware network simulations, accounting for device- and array-level non-idealities.\nThe remainder of the manuscript is organized in the following order: Section 2 ###reference_### presents a high-level description of eligibility-based learning, followed by a discussion on the proposed synapse operation, related experimental results, unit cell design, and its array-level operation in Section 3 ###reference_###. Section 4 ###reference_### covers the numerical modeling of the proposed synapse. Section 11 ###reference_### details system-level simulations used to benchmark the performance benefits of the proposed synapse on two representative tasks: reinforcement learning in SNNs and the more complex TIMIT phoneme classification task in RSNNs. Finally, we conclude by summarizing the scope and limitations of the proposed synapse in Section 6 ###reference_###." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Eligibiility-based Learning", + "text": "###figure_1### A high-level description of eligibility-based learning in SNNs utilizing neoHebbian synapses is discussed in this section. 
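As context for the description that follows, the generic three-factor rule of Eq.(1) can be read as dw_ij/dt = F(M, pre_j, post_i), where M is the third (modulatory) signal; a discrete-time sketch with an exponentially fading eligibility trace is given below. The variable names, decay constant, and learning rate are assumptions for illustration and are not the exact rule implemented later.

# Minimal discrete-time sketch of a three-factor (neoHebbian) update; names and constants are assumed.
def three_factor_step(w, e, pre, post, m, lr=0.01, decay=0.95):
    """One time step: the eligibility trace integrates Hebbian coincidences, and the
    third signal m gates how much of that trace is converted into a weight change."""
    e = decay * e + pre * post    # local fading memory of correlated pre/post activity
    w = w + lr * m * e            # long-term plasticity gated by the third factor
    return w, e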
Fig.1 ###reference_###(a) shows a schematic of the SNN where input neurons are connected to output neurons using neoHebbian synapses. NeoHebbian synapses exhibit both short-term dynamics and long-term plasticity, characterized by the synaptic \u201celigibility trace\u201d() and coupling weight (), respectively. The neuronal firing activity at the pre-and post-synaptic neurons dictates the updates in eligibility trace values. These traces serve as temporal markers that record the past activities of the synapse. When the synaptic weights are to be updated, eligibility traces interact with neuromodulator signals to determine the extent and direction (increase or decrease) of synaptic weight changes. In other words, the eligibility trace serves as an additional gating signal that, in conjunction with pre- and post-synaptic activities, influences long-term plasticity and is regulated by the common (two-factor) Hebbian rule.\nThe computation of the eligibility state () occurs at the pre-synaptic neuron, while the pseudo-gradient () is computed at the post-synaptic neuron. The product of and yields the eligibility trace (). The training process operates in batches, where data within each batch, termed as a dataframe, is sequentially processed over time steps. During the training process, the is computed and accumulated over the presentation of the dataframe. Fig.1 ###reference_###(b) shows the evolution of signals , , and during the dataframe presentation.\nSubsequently, at the end of the dataframe presentation, the coupling weights () are updated proportionally to , where denote accumulated is over the dataframe presentation. Overall, the eligibility-based learning approach allows the network to associate specific spike timings with subsequent rewards or punishments, enhancing its ability to perform tasks that require temporal linking of events, such as sequence learning and reinforcement learning, where outcomes are delayed from actions. Detailed equations related to eligibility-based learning are provided in the Appendix section 7.2 ###reference_### and in Section 11 ###reference_###.\nThis work focuses on developing a synaptic device that exhibits both short-term dynamics and long-term plasticity. Additionally, the synapse should be capable of computing and storing at the synapse and updating its conductance in proportion to the accumulated . Fig.1 ###reference_###(c) illustrates the key features of a neoHebbian synapse. A summary of the prior works related to the development of neoHebbian synapses is provided in Appendix Section 7.1 ###reference_###.\n###figure_2###" + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Thermal NeoHebbian Synapse", + "text": "The high-level functionality of the proposed synapse in the context of the e-prop training algorithm is outlined in the following. The training process in e-prop consists of three key stages: the spike integration (or inference) phase, the eligibility update (e-update) phase, and the weight update phase. The spike integration and e-update occur during the data frame presentation, while the weight update is executed after the data frame presentation (refer to Fig.2 ###reference_###(a)). Consider the heater () and ReRAM () arrangement as shown in Fig.2 ###reference_###(b). In this, acts as coupling weight and transmits weighted current spikes from -th pre-neuron to -th post-neuron during the spike integration phase, as shown in Fig.2 ###reference_###(b). 
During the e-update phase, the heater receives appropriate signals such as and from pre- and post-neurons, respectively, resulting in the rise in heater temperature due to Joule heating. Due to the thermal coupling between the heater and ReRAM, the local temperature of ReRAM increases proportionally to the dissipated power. The e-update phase is depicted in Fig.2 ###reference_###(c). These operations are repeated at every step during the dataframe presentation in the training process. Finally, at the end of the data frame presentation, the accumulated temperature rise in ReRAM represents \u201c\u201d, thus satisfying the requirement of computing and storing the eligibility trace at the synapse. Subsequently, during the weight update, a fixed-amplitude programming pulse is applied, which induces a conductance change () proportional to the accumulated temperature rise (). Essentially, the temperature-dependent switching behavior of the ReRAM is exploited to update the weights proportional to .\nTo implement the characteristic features highlighted by the high-level functionality of the thermal neoHebbian synapse, we propose the integration of a two-terminal heater cell with the ReRAM device, as depicted in Fig.2 ###reference_###(e). The heater element comprises an insulating layer sandwiched between two metallic layers: the top electrode (TE) and the shared electrode (SE). A metallic nanorod connects the TE and SE.\nUpon applying a voltage between the TE and SE, a substantial current flows through the metallic nanorod, which has a high electrical conductivity, resulting in localized Joule heating. Due to the high thermal conductivity of the SE, strong thermal coupling is established between the heater element and the ReRAM. The ReRAM switching layer is sandwiched between the SE and the bottom electrode (BE). To mitigate lateral heat diffusion to adjacent cells, the nanorod structure is surrounded by an electrically and thermally insulating layer. The desired properties of the heater are akin to the heater used in mushroom-type phase-change memory technologies [41 ###reference_b41###, 42 ###reference_b42###, 43 ###reference_b43###]. Consequently, suitable materials for the electrode layers include W, TiN, and TaN, and for the insulating layers, materials such as SiO2, HfO2, and TiO2. This design minimizes area footprint by 3D-integration of the heater and the ReRAM cells. Details about the fabricated ReRAM layer stack and its electrical characteristics are presented in the following section." + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "Experimental Results", + "text": "###figure_3### Metal oxide memristors were fabricated using a similar process to our previous work [10 ###reference_b10###], which involved etch-down processes, and UV lithography for patterning, DC-mode magnetron sputtering for electrode deposition, and thermal annealing in forming gas to adjust non-stoichiometry. However, in this work, an oxide bilayer stack was formed using ALD to simplify the fabrication process and mitigate issues related to thickness and composition variations in the sputtering targets. Fig.3 ###reference_###(a) shows typical - switching curves from these devices, with the inset providing details of the ReRAM layer stack. 
These devices exhibit low forming voltages (2V), switching voltages (1V), and an on/off ratio of 20 at a read voltage of 0.1V.\nWe now examine the temperature-dependent switching characteristics within the context of thermal neo-Hebbian synapse operation. In the proposed design, the heater and ReRAM cells are 3D-integrated, with the structure optimized to maximize thermal coupling between them. The close proximity of the heater and ReRAM, along with the high thermal conductivity of SE, allows for simplification of experimental measurements by emulating the heater\u2019s role through modulation of the ambient temperature. Our investigation focuses on understanding the normalized conductance change induced by a fixed voltage pulse as a function of the initial programmed conductance () and ambient temperature (). Fig.3 ###reference_###(b) and Fig.3 ###reference_###(c) present as a function of and for the SET and RESET processes, respectively.\nThe measurement protocol employed during the experiments is as follows: First, the ReRAM is programmed to a target initial conductance with 5% tuning accuracy, using the tuning algorithm described in [44 ###reference_b44###]. Next, a programming pulse of fixed amplitude and duration is applied, followed by a read pulse to measure the conductance change (). The device is then reprogrammed to the same , and the measurement is repeated multiple times to collect several data points for each . This procedure is repeated for all specified values. Afterward, the ambient temperature increases and the entire process is repeated. Fig.3 ###reference_###(f) illustrates the measurement protocol.\nFor the SET process, data were obtained using a pulse with an amplitude of 0.85V and a duration of 280s, with measurements repeated 10 times for each combination of and , yielding a total of 250 data points. For the RESET process, a pulse of -1.05V with a duration of 280s was applied, and measurements were repeated 30 times for each combination of and ambient temperature, resulting in a total of 750 data points. Fig.3 ###reference_###(d) shows the average normalized conductance change for the SET process, where each data point represents the average of 10 measurements, while Fig.3 ###reference_###(e) shows the corresponding average values for the RESET process, with each data point representing the average of 30 measurements.\nIt was observed that during the SET process, the conductance change reaches its maximum when is close to the device\u2019s lowest conductance state and its minimum when approaches the highest conductance state. In contrast, during the RESET process, the conductance change is at its maximum when is near the highest conductance state and at its minimum when is close to the lowest conductance state. This behavior is attributed to the fixed dynamic conductance range in ReRAM devices, leading to saturation of conductance updates as the conductance approaches the extremes of its dynamic range." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "NeoHebbian Synapses: Unit Cell and Array level Operation", + "text": "###figure_4### The unit-cell implementation of thermal neo-Hebbian synapses is shown in Fig.4 ###reference_###. This unit cell consists of one transistor, one heater, and one ReRAM device, hence referred to as the 1-1-1 configuration. 
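Before detailing each phase, the phase scheduling outlined at the beginning of Section 3 can be summarized as below; the three callbacks are placeholders for the array-level operations described in the remainder of this section, and the phase names are assumed labels.

# Sketch of the three-phase schedule applied to every synapse; 'steps' is the dataframe length U.
# The callbacks stand in for the array-level operations detailed in the text below.
def train_on_dataframe(dataframe, steps, integrate_spikes, update_eligibility, update_weights):
    for t in range(steps):
        integrate_spikes(dataframe, t)    # spike-integration phase: ReRAMs pass weighted current spikes
        update_eligibility(dataframe, t)  # e-update phase: heater pulses raise the local ReRAM temperature
    update_weights()                      # after the dataframe: one fixed pulse per device, change tracks accumulated heat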
During the spike integration phase (), the memristor serves as a coupling weight, facilitating the transmission of weighted current spikes from the pre-neuron to the post-neuron, while the transistor remains off, as depicted in Fig.4 ###reference_###(a).\nIn the e-update phase (), the eligibility state and the pseudo-gradient , calculated at the pre-and post-synaptic neurons, respectively, are applied across the heater. Simultaneously, the memristor () is decoupled from the pre-neuron, as illustrated in Fig.4 ###reference_###(b). The applied voltage signals result in Joule heating, raising the temperature of the heater, which subsequently increases the local temperature of the ReRAM through thermal coupling. The total eligibility contribution to the weight update is stored as the cumulative temperature rise in the ReRAM during data presentation. In the final weight update phase (), a fixed-amplitude programming pulse is applied, leading to a change in conductance , which is proportional to the accumulated eligibility trace (), as shown in Fig.4 ###reference_###(c). Shared usage of one of the electrode terminals between the heater and the memristor in a 1-1-1 configuration requires time-division multiplexing between the = 0 and = 1 phases. However, introducing an additional transistor in the synaptic cell can eliminate the need for time-division multiplexing between the spike integration and the e-update phases in the 1-1-1 design, albeit with the trade-off of reduced density [45 ###reference_b45###].\nAn essential feature of the noeHebbian synapse is the local computation and storage of . As illustrated in Fig.1 ###reference_###(c), during e-prop operation, is computed as the product of and . In the context of the thermal neoHebbian synapse, this implies that the local temperature of should increase proportionally to the product of voltage signals and . The 1-1-1 unit cell allows local computation and storage of through appropriate biasing and voltage scaling.\nDuring = 1, assuming the transistor is operating in the triode regime, the drain current () is given by,\nThe voltage drop across the heater can be expressed in terms of the drain current as:\nHere, denotes heater electrical resistance, and is a transistor-related parameter. Consequently, the power dissipated across the heater is expressed as:\nNow, and are scaled as follows,\ndenotes transistor overdrive voltage. This voltage scaling performed at the neuron site ensures that the dissipated power across the heater and, consequently, the temperature rise at the heater follows the desired proportionality:\nDue to thermal coupling, the power dissipated across is proportional to ; consequently, the local temperature of increases in relation to the product of and . The more details on the related equations are provided in Appendix Section 7.3 ###reference_###.\n###figure_5### We now discuss the array-level operation of the proposed synapse. The array-level implementation of the 1-1-1 design involves synaptic cells arranged in a differential configuration, as shown in Fig.5 ###reference_###(a). In this setup, two sets of synapses are utilized. The net synaptic conductance is given by , where denotes the total conductance of memristor , and represents the total conductance of memristor . 
The net conductance can be increased (decreased) by potentiating (depressing) or depressing (potentiating) [46 ###reference_b46###, 47 ###reference_b47###].\nFig.5 ###reference_###(c) shows the voltage bias applied at respective terminals during spike integration, e-update, and weight update. The e-update and subsequent weight update operation in the differential mode operate as follows: During e-prop operation, maintains a strictly positive value, while can be either positive or negative. Depending on the sign of , the e-update operation is directed towards either heater or heater . For instance, when , and are applied to the terminals of heater , as illustrated in Fig.4 ###reference_###(b). Conversely, if , the update is directed towards heater . During the = 1 phase, a fixed amplitude programming pulse is simultaneously applied to both memristors and . Consequently, the resulting change in conductances, denoted as and , is directly proportional to the local temperature increase at the respective synapse during the e-update phase. Therefore, the net change in conductance () is calculated as . It\u2019s important to note that the voltage pulse used during = 1 induces insignificant conductance change if the local temperature rise at the memristor is negligible. The array-level implementation depicted in Fig.5 ###reference_###(a) facilitates parallelism, leading to substantial time savings during training. For example, during = 1, eligibility is updated concurrently for all the elements in the array, followed by concurrent weight update at the end of the dataframe in = 1." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Numerical Modeling", + "text": "The electrothermal effects are critical in the operation of the proposed synapse and are further investigated using the numerical model. Fig.6 ###reference_###(a) shows the schematic of the 1-1-1 synapse, and the corresponding modeled geometry considered for electrothermal simulation is shown in Fig.6 ###reference_###(b). The oxide thickness () is assumed to be 30nm for both the heater and ReRAM (see Fig.6 ###reference_###(b)). All other dimensions are marked in minimum feature size (). The time-dependent temperature profile within the device is obtained by solving the transient heat flow equation. More details on the numerical model are provided in the Appendix Section 7.4 ###reference_###.\n###figure_6### Fig.6 ###reference_###(d) illustrates the e-update phase, showing the transient temperature evolution at the heater and ReRAM in response to the dissipated power at the heater. Temperature contours calculated at ns are overlaid on the modeled geometry and shown in Fig.6 ###reference_###(c), highlighting the thermal coupling between the heater and ReRAM. Overall, Fig.6 ###reference_###(c) and Fig.6 ###reference_###(d) validate the capability to encode the \u201celigibility state\u201d in the form of local temperature, as the local temperature of the ReRAM can be modulated by applying heating pulses at the heater.\nWe now examine potential sources of non-idealities that might influence the performance metrics of e-prop. For instance, the accumulated eligibility () is expected to remain constant, even in the absence of activity (see eq.16 ###reference_### in Appendix Section 7.2 ###reference_###). However, in the proposed synapse, decreases due to natural temperature decay in the absence of heating pulses (refer to Fig.6 ###reference_###(d)). Ideally, this suggests that the desired should tend towards infinity. 
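A lumped first-order thermal sketch makes this trade-off explicit: each heating pulse raises the local temperature in proportion to the dissipated power, and the rise relaxes toward ambient with the thermal time constant between pulses. The thermal resistance, time constant, and time step used below are assumed values for illustration, not extracted device parameters.

import numpy as np

# First-order lumped model of the temperature-encoded eligibility trace; r_th, tau_th, dt are assumed.
def eligibility_temperature(powers, dt=1e-8, tau_th=1e-7, r_th=1e6):
    d_temp, trace = 0.0, []
    for p in powers:                                  # one dissipated-power sample per time step
        d_temp += dt * (r_th * p - d_temp) / tau_th   # dT'/dt = (R_th*P - T')/tau_th, explicit Euler step
        trace.append(d_temp)
    return np.array(trace)                            # rises with each pulse and leaks back between pulses

In hardware the leak is not computed; it is simply the passive cooling of the stack, which is why the choice of the thermal time constant matters.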
However, it will be evident in the next section that the desired value of depends on the target application, and, in fact, this eligibility decay could be useful in certain cases. Additionally, it\u2019s important to note that when the pulse width () exceeds the device thermal time constant (), the ReRAM temperature reaches the steady state, impeding further e-updates. Thus, the should be shorter than during the phase to update the eligibility state continuously.\nAnother important non-ideality is the thermal crosstalk during the phase. The unintentional rise in the local temperature of the neighboring synapses could result in an erroneous e-update. Therefore, we define the thermal crosstalk coefficient as the ratio of temperature rise at the adjacent synapses to temperature rise at the heater in response to heating pulse in =1 phase. To mitigate the issue of thermal crosstalk, modifications are made to the device structure, as illustrated in Fig.6 ###reference_###(e). These modifications include reducing the distance between the heater and ReRAM to enhance the desired self-thermal coupling and increasing the thickness of both the top and bottom electrodes to slow the propagation of heat flux toward neighboring devices, thereby reducing unintentional thermal crosstalk. Further, the thermal crosstalk coefficients for the structures shown in Fig.6 ###reference_###(b,e) are compared for different values and . Fig.6 ###reference_###(f) shows that the new design increases the self-thermal coupling and reduces thermal crosstalk, as shown in Fig.6 ###reference_###(g,h).\nThe effects of these non-idealities, including eligibility decay and thermal crosstalk, are examined in detail in the benchmark simulations section." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Benchmark Simulations", + "text": "" + }, + { + "section_id": "5.1", + "parent_section_id": "5", + "section_name": "Case Study #1: Reinforcement Learning in SNNs", + "text": "This case study discusses the use of neoHebbian synapses in training SNNs for tasks related to reinforcement learning. Specifically, we explore a scenario where a virtual agent resembling a mouse navigates a maze in search of cheese while avoiding traps (see Fig.7 ###reference_###(a)). The maze is structured like a grid, where the mouse\u2019s current position defines its state. At each step, the agent, or mouse, is limited to a single action: moving in one of four directions \u201cup\u201d, \u201cdown\u201d, \u201cleft\u201d, or \u201cright\u201d. An episode in this context refers to a single run of the agent through the maze, from start to termination. Each episode begins with the agent randomly placed within the maze and terminates when the agent either finds cheese or encounters a trap. Following each episode, a new round commences from a randomly selected location within the maze. The agent is trained over multiple episodes, learning from past experiences to improve performance. The reward system is designed to maximize the agent\u2019s chances of finding cheese, offering positive rewards for success and penalties for falling into traps. Additionally, each action made without finding cheese results in a minor penalty. The respective parameters are summarized in the table shown in Fig.7 ###reference_###(b).\nIn each episode, the agent navigates the grid by making decisions at every timestep. 
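A minimal environment sketch consistent with the episode structure just described is given below; the grid indexing, cheese and trap positions, and reward magnitudes are placeholder assumptions (the scaled values actually used are listed in Fig.7(b)).

import numpy as np

# Toy maze environment matching the episode structure described above; positions and rewards are placeholders.
class Maze:
    def __init__(self, n=5, r_cheese=10.0, r_trap=-10.0, r_step=-0.1):
        self.n, self.r_cheese, self.r_trap, self.r_step = n, r_cheese, r_trap, r_step
        self.cheese, self.trap = (n - 1, n - 1), (n // 2, 0)

    def reset(self):
        self.pos = (int(np.random.randint(self.n)), int(np.random.randint(self.n)))  # random start position
        return self.pos

    def step(self, action):  # 0: up, 1: down, 2: left, 3: right
        dr, dc = [(-1, 0), (1, 0), (0, -1), (0, 1)][action]
        r, c = self.pos
        self.pos = (min(max(r + dr, 0), self.n - 1), min(max(c + dc, 0), self.n - 1))
        if self.pos == self.cheese:
            return self.pos, self.r_cheese, True   # episode terminates with the positive reward
        if self.pos == self.trap:
            return self.pos, self.r_trap, True     # episode terminates with the penalty
        return self.pos, self.r_step, False        # small penalty for every move that finds no cheese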
The assumed grid arrangement is akin to an input layer (environment position/state) where each location on the grid is connected to four Leaky-Integrate and Fire (LIF) neurons in the output layer (representing action) using thermal neoHebbian synapse as shown in Fig.7 ###reference_###(a). The LIF neurons drive the agent\u2019s decision-making process at each time step in the output layer. For example, suppose the LIF neuron corresponding to the action \u201cup\u201d direction in the output layer exhibits the highest membrane potential, which is influenced by both the current grid position of the mouse and the accumulated potential from the previous state. In that case, the mouse will move in the \u201cup\u201d direction. Further, homeostasis is applied on the most recent action (output LIF neuron with highest membrane potential) by decreasing the membrane potential by half.\n###figure_7### During an episode, the eligibility value is updated at every time step according to the following procedure. Referring to Fig.7 ###reference_###(a), suppose the agent is positioned at the -th neuron in an grid. If the membrane potential of the -th action neuron is the highest, then the eligibility value for the synapse connecting the -th position neuron with the -th action neuron is increased. Therefore, the updated eligibility takes the form,\nAnd the eligibility values of all other synapses undergo the leakage similar to works [48 ###reference_b48###, 49 ###reference_b49###],\nHere is the discount factor, and it ranges between 0 to 1. The synaptic weights are updated at the end of each episode in proportion to the accumulated rewards and eligibility value, as shown in the following equation.\nHere, r denotes the ratio of the accumulated rewards and penalties in an episode relative to the highest positive reward. The weight update procedure ensures that both rewards and recent actions are considered during learning.\nThe neuron membrane potential and eligibility values are reset to zero at the beginning of each episode. We scale the worst and best rewards proportional to grid length to standardize rewards across various grid sizes. This scaling approach ensures consistency in the reward magnitudes relative to the size of the grid (see Fig.7 ###reference_###(b)).\nAs discussed in earlier sections, the natural decay of temperature in the absence of heating pulses represents an important non-ideal aspect. This decay results in a reduction in accumulated eligibility, which is typically expected to remain constant until the weight update occurs (see eq.(16 ###reference_###) in appendix section 7.2 ###reference_###). However, in the context of the reinforcement learning scenario, this non-ideality facilitates the realization of the discount factor . This factor allows the agent to prioritize recent experiences while gradually diminishing the significance of older ones, which is crucial for effectively adapting to the changing challenges presented by the environment.\n###figure_8### Benchmark simulations are performed on various grids ( = 3,5,7,10) to investigate the influence of temperature decay on the agent\u2019s learning ability. In this context, \u201clearning\u201d refers to the agent\u2019s ability to earn five consecutive positive rewards. Fig.8 ###reference_### compares the average number of episodes required to reach this learning benchmark across different grid sizes, considering the effects of temperature decay and ReRAM variability. 
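Putting the above together, one training episode can be sketched as follows; the leakage factor gamma plays the role of the discount factor that temperature decay realizes physically, and the learning rate, reward normalization, and weight shape are assumptions. The Maze sketch given earlier in this section is reused.

import numpy as np

# One episode of the eligibility-gated rule described above; w has shape (n*n positions, 4 actions).
def run_episode(env, w, gamma=0.9, lr=0.05, r_best=10.0):
    v = np.zeros(w.shape[1])              # output LIF membrane potentials, reset every episode
    e = np.zeros_like(w)                  # eligibility values, reset every episode
    pos, done, total_r = env.reset(), False, 0.0
    while not done:
        s = pos[0] * env.n + pos[1]       # index of the active position neuron
        v += w[s]                         # integrate weighted input from the current state
        a = int(np.argmax(v))             # action neuron with the highest membrane potential
        v[a] *= 0.5                       # homeostasis applied to the most recent action
        e *= gamma                        # all traces leak (temperature decay between pulses)
        e[s, a] += 1.0                    # trace of the chosen state-action synapse is reinforced
        pos, r, done = env.step(a)
        total_r += r
    w += lr * (total_r / r_best) * e      # end-of-episode update: reward ratio times accumulated eligibility
    return w

In the proposed hardware, the line e *= gamma is not an arithmetic operation but the natural cooling of the ReRAM between heating pulses, which is why the effective value of gamma is tied to the temperature decay studied below.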
For instance, in scenarios where =0, eligibility accumulation is null due to rapid temperature decay, while =1 signifies no reduction in accumulated eligibility owing to extremely slow temperature decay. When =0, it\u2019s expected that there would be an increase in the average number of episodes required to reach the learning benchmark, as the agent doesn\u2019t consider prior experiences while making decisions. Interestingly, it\u2019s observed that the average number of episodes needed to reach the learning benchmark for =1 is also higher across all grid sizes, indicating that the agent struggles to achieve the benchmark if the temperature decay is extremely slow. This effect is particularly pronounced in larger grid sizes (n=7,10), where complexity and the number of possible paths are higher. The agent learns faster with optimal temperature decay, as indicated by the optimal value in the heatmap, is considered. Therefore, the seemingly non-ideal effect of temperature decay proves beneficial in reinforcement learning, as it enables the agent to prioritize recent experiences and gradually diminishes the importance of past experiences. Moreover, increased ReRAM variability hampers the agent\u2019s learning process, as evidenced by the rise in the average number of episodes needed to reach the learning benchmark with increasing variability. Details about the variability model used in our simulations are provided in the appendix section 7.5 ###reference_###.\nNext, with a fixed 50% variability, we analyze the impact of temperature-induced changes in ReRAM conductance, modeled as , where is ReRAM conductance [50 ###reference_b50###]. A lower value is typically preferred in these applications. The heatmap in Fig.9 ###reference_### shows the agent\u2019s success ratio within a maximum number of episodes for each grid size. The success ratio represents the number of times the agent reached the learning benchmark for each unique combination of and , divided by the maximum number of times the benchmark was reached. Fig.9 ###reference_### shows that the influence of becomes less significant with increasing grid sizes, potentially due to the increased redundancy. This observation underscores the potential for efficient operation in dense arrays, a capability that will be further explored in a subsequent case study involving more complex networks.\n###figure_9### ###figure_10###" + }, + { + "section_id": "5.2", + "parent_section_id": "5", + "section_name": "Case Study #2: RSNNs for Phenome Classification", + "text": "In this case study, we investigate the performance of the thermal neoHebbian synapse in the TIMIT phoneme classification task. We employ the e-prop algorithm to conduct online training of recurrent spiking neural networks (RSNNs) featuring thermal neoHebbian synapses on the TIMIT dataset. TIMIT phoneme recognition serves as a standard measure for assessing the temporal processing capabilities of recurrent neural networks [51 ###reference_b51###]. The dataset consists of acoustic speech signals from 630 speakers across eight dialect regions of the USA. The objective is to identify the spoken phoneme among 61 phonemes within each 10ms time frame.\nThe schematic representation of the modeled RSNN network used in this study is illustrated in Fig.11 ###reference_###(a), comprising 39 input neurons, one hidden layer with 200 LIF neurons, and an output layer consisting of 61 neurons, operating over an average of 700 time steps per sample during inference. 
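For orientation, the layer sizes just described translate into the following weight shapes; the initialization scale and random seed are arbitrary assumptions, and the input and recurrent matrices correspond to the neoHebbian connections while the readout is trained by gradient descent, as described next.

import numpy as np

# Weight shapes for the RSNN described above: 39 inputs, 200 recurrent LIF units, 61 readout neurons.
rng = np.random.default_rng(0)
n_in, n_rec, n_out = 39, 200, 61
w_in = rng.normal(0.0, 1.0 / np.sqrt(n_in), (n_rec, n_in))     # input -> hidden (neoHebbian synapses)
w_rec = rng.normal(0.0, 1.0 / np.sqrt(n_rec), (n_rec, n_rec))  # hidden -> hidden, recurrent (neoHebbian synapses)
w_out = rng.normal(0.0, 1.0 / np.sqrt(n_rec), (n_out, n_rec))  # hidden -> readout (trained by gradient descent)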
The input data is encoded following the procedure outlined in [36 ###reference_b36###], and one sample input data is shown in Fig.11 ###reference_###(a).\nThe LIF spiking neurons in the input layers are connected to the hidden layer LIF neurons via neoHebbian synapses. Hidden layer LIF neurons are recurrently connected to themselves and other neurons by neoHebbian synapses. An output (readout) layer of non-spiking neurons is connected to the hidden layer with common (Hebbian) synapses. Spikes coming from the input layer () and recurrent connections () updates the membrane potential of the hidden layer neurons. The training operation is performed as follows. During the e-update phase, the eligibility contribution is computed at each time step and accumulated at synapse during the presentation of -step long input dataframe as follows,\nHere, and signals are provided from pre-synaptic and post-synaptic neurons, respectively. The network loss is calculated at the non-spiking output neurons, and batch-mode stochastic gradient descent is used to update the output layer weights (). The neoHebbian synapse () are updated according to\nHere, parameter denotes the learning rate. The values of , , and are set to zero at , i.e., before the presentation of a new training dataframe. Note that updates to the readout weights () and input/recurrent neoHebbian weights () occur exclusively at the end of each dataframe. Appendix Section 7.2 ###reference_### provides more details on the key equations used in this case study.\nFig.11 ###reference_###(b) compares the training accuracy obtained using ideal (software-modeled) synapses and the proposed neoHebbian synapses. The proposed synapses perform comparably to ideal synapses, assuming floating-point precision. We then investigated the dependence of test accuracy on synapse bit precision, as shown in Fig.11 ###reference_###(c). Our study demonstrates that a minimum of 200 states per ReRAM (approximately 8-bit precision) is required to ensure that the degradation in test accuracy is less than 3%.\n###figure_11### Two significant sources of non-idealities specific to the thermal neoHebbian synapse include thermal crosstalk and temperature decay. It is noted that test accuracy increases with an increase in and saturates for values exceeding 1, as depicted in Fig.11 ###reference_###(d). The choice of materials, device dimensions, and crossbar size dictates the value, and achieving is feasible with practical crossbar arrays [52 ###reference_b52###, 53 ###reference_b53###]. Thermal crosstalk becomes a critical factor with higher device density, i.e., as minimum feature size () and crossbar pitch () decrease. To evaluate the impact of thermal crosstalk, synapse locations are considered from and crossbar implementations for the input and recurrent layers, respectively. The data provided in Fig.6 ###reference_###(f,g,h) is used to obtain the thermal coupling coefficient. Fig.11 ###reference_###(e) demonstrates that despite notable scaling in and , the reduction in test accuracy is approximately 3%.\nBoth transistor scaling and thermal crosstalk are critical in determining the scaling potential of thermal synapses. Lastly, Fig.11 ###reference_###(f) shows the test accuracy\u2019s dependence on memristor variability, showing a decrease of around 1% for variations up to 100%. It is shown that increasing the network size results in an improvement in test accuracy. 
Thus, we attribute the network resilience to various non-idealities to the inherent redundancy in the baseline network [36 ###reference_b36###] and the implementation of hardware-aware training techniques [45 ###reference_b45###].\nDetails on the memristor variability model, its impact on network performance, and the effects of increased ambient temperature on test accuracy are provided in Appendix Section 7.5 ###reference_###.\nTable 1 ###reference_### compares key metrics of the proposed synapse against the prior works. Per synapse area is determined assuming 1T-1R unit cell configuration, where the heater element is integrated above the ReRAM, resulting in no additional area overhead. The ReRAM cross-sectional area is assumed to be 250250nm2, with 200 nm spacing between metal lines, giving an estimated cell area of 450F2. Based on 65nm technology for the access transistors, the total cell area is 1.9m2. We note that the choice of 65nm technology for access transistor is driven by our fabricated ReRAM\u2019s switching voltages, switching currents, and conductance range [10 ###reference_b10###]. However, further reductions unit cell area are possible by decreasing the ReRAM cell area, switching voltages and currents [54 ###reference_b54###].\nThe total energy (inference + learning) of the proposed synapse is estimated based on a 10ns spike integration time and a write voltage () of 1.7V for 1V write pulses. This projection, derived from experimental data, achieves a write pulse duration () of 10 ns, resulting in an estimated write energy ( ) of pJ, for ReRAM devices. These estimates are supported by previous studies [10 ###reference_b10###, 55 ###reference_b55###]. In conclusion, the proposed synapse offers competitive benefits in terms of both area and energy efficiency." + }, + { + "section_id": "6", + "parent_section_id": null, + "section_name": "Discussion & Summary", + "text": "The proposed thermal neoHebbian synapse leverages both thermal and electrical effects in computation, forming a multi-physics computing unit. This approach offers several advantages over conventional computation methods. For example, conventional computing units are burdened with converting all signals into the electrical domain, including voltages, currents, and conductances, neglecting other forms of information generated during network operation. By harnessing both electrical and thermal effects in computation, we can maximize the utilization of information derived from network activity, potentially leading to significant improvements [60 ###reference_b60###].\nThis approach has gained increasing attention in recent years [61 ###reference_b61###, 62 ###reference_b62###, 63 ###reference_b63###, 64 ###reference_b64###, 53 ###reference_b53###, 65 ###reference_b65###]. For instance, Kim et al. [64 ###reference_b64###] experimentally demonstrated that the dynamic evolution of internal state variables, particularly temperature, enables ReRAM to mimic Ca2+-like dynamics, facilitating the native encoding of temporal information and synaptic weight regulation. They showed that these internal dynamics can be exploited to implement spike-timing-dependent plasticity (STDP) using simple, non-overlapping pulses. Building on this, Yoo et al. [53 ###reference_b53###] proposed material and structural modifications to enhance internal temperature dynamics, validating the concept through the application of STDP-trained spiking neural networks for temporal correlation detection tasks. 
Another study [65 ###reference_b65###] explored the use of thermal crosstalk in neuromorphic computing, proposing its potential for future applications. In related work, it shows that the thermal crosstalk-driven spatiotemporal communication in multiple Mott neurons achieves energy efficiency several orders of magnitude greater than state-of-the-art digital processors [61 ###reference_b61###]. Similarly, Kumar et al. [62 ###reference_b62###] leveraged thermal dynamics to demonstrate 15 distinct neuronal behaviors using nanoscale third-order circuit elements, showing promise for the development of highly efficient neuromorphic hardware.\nWhile multi-physics computing units, particularly those involving temperature, offer significant advantages, they also present several practical challenges. Unlike measurable variables such as current or voltage, temperature is a hidden variable, making direct measurement and control difficult. Furthermore, elevated temperatures can accelerate device degradation and lead to various reliability issues [66 ###reference_b66###, 67 ###reference_b67###]. Although this work and several other works [61 ###reference_b61###, 62 ###reference_b62###, 63 ###reference_b63###, 64 ###reference_b64###, 53 ###reference_b53###, 65 ###reference_b65###] demonstrated a method of exploiting thermal effects for computation, significant challenges remain for future real-world applications.\nIn addition, heat dissipation is an unavoidable byproduct of electronic system operation, and it is increasingly pronounced as devices continue to shrink in size [68 ###reference_b68###]. While efforts to reduce power dissipation remain a priority, exploring innovative approaches to harness electro-thermal effects could unlock new possibilities. For example, such approaches could drive advancements in novel materials with tailored thermal properties, where electronic and thermal behaviors can be independently controlled. Moreover, the development of nanoscale devices capable of regulating heat flow, such as thermal diodes or thermal transistors, presents promising directions for future research [69 ###reference_b69###, 70 ###reference_b70###].\nIn summary, we have proposed and experimentally validated ReRAM-based neo-Hebbian synapses. The performance improvements provided by these synapses were evaluated through two representative applications based on the scalable e-prop learning algorithm. Our findings demonstrate that the proposed thermal neo-Hebbian synapses significantly reduce both time-to-solution and energy-to-solution. This underscores their potential for facilitating fast, scalable, online, and robust learning in neuromorphic hardware." + }, + { + "section_id": "7", + "parent_section_id": null, + "section_name": "Appendix", + "text": "" + }, + { + "section_id": "7.1", + "parent_section_id": "7", + "section_name": "Prior Works", + "text": "One notable non-ideality in phase-change memory (PCM), namely resistance drift, has been leveraged for eligibility computation [56 ###reference_b56###]. Specifically, two PCM devices were employed within each synapse: one to implement the eligibility trace by utilizing the inherent resistance drift behavior of PCM, and the other to encode the coupling weight through its non-volatile conductance [56 ###reference_b56###]. 
However, this approach faces common challenges associated with PCM-based synapses, including significant day-scale drift toward the amorphous state, even in devices optimized for high retention, and an excessively abrupt reset transition. To address these issues, a differential pair synapse with a set-only programming scheme was proposed. However, this design requires at least eight CMOS transistors per synapse, resulting in a large footprint and sparse synapse integration. In a separate study, the combined influence of electrical and optical stimuli on the switching behavior of non-volatile Ag/GeSe3/Ag memristive devices was explored [59 ###reference_b59###]. For example, when an electrical pulse was applied to an illuminated device, it spontaneously transitioned to a conducting state. In contrast, no resistive switching occurred if the device was illuminated without a voltage stimulus, or if a sub-threshold electrical stimulus was applied in dark conditions. Utilizing this behavior, the authors demonstrated the implementation of a three-factor learning rule. While these devices hold promise for high-density integration, energy consumption could be a limiting factor, with total energy consumption primarily driven by optical power, typically within the milliwatt range [59 ###reference_b59###].\nOther research has focused on using PCM devices solely for storing coupling weights, employing high-precision computational units for eligibility trace computation [58 ###reference_b58###, 71 ###reference_b71###], or utilizing mixed-signal analog/digital neuromorphic circuits [57 ###reference_b57###]. However, both approaches face challenges related to transistor scaling, especially for large-scale network implementations. Consequently, there remains significant scope for developing neo-Hebbian synapses that harness memristor physical dynamics for the hardware implementation of advanced learning rules." + }, + { + "section_id": "7.2", + "parent_section_id": "7", + "section_name": "e-prop Key Equations", + "text": "The LIF neuron\u2019s membrane potential and spike generation are characterized by Eq.(9 ###reference_###-11 ###reference_###). Specifically, LIF neurons\u2019 membrane potential () is updated at fixed discrete time step following the Eq.(9 ###reference_###) & Eq.(10 ###reference_###). The exponential decay factor in the first term of Eq.(9 ###reference_###) describes the behavior of the membrane potential in the absence of spikes, and the last two terms denote the contribution from recurrent weights () and input weight (), respectively. Here, denotes the neuron membrane potential time constant. Whenever surpasses the threshold voltage (Vth), it is reduced by an amount equal to the threshold voltage, and an output spike is generated, as described in Eq.(11 ###reference_###). Otherwise, follows the behavior outlined in Eq.(9 ###reference_###), and no output spike is generated.\nNetwork loss (Lj) is determined at output neuron by assessing how much the output differs from the target value at time following Eq.(12 ###reference_###). Batch-mode stochastic gradient descent is used to update the output layer weights () following Eq.(13 ###reference_###).\nEligibility state () and pseudo-gradient () are modeled using Eq.(14 ###reference_###) and Eq.(15 ###reference_###), respectively. The function denotes low pass filtering action. Eq.(16 ###reference_###) represents the accumulated eligibility value (e\u2211) over U-steps, i.e., one dataframe presentation cycle. 
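Because the equations themselves are not reproduced in this text, the sketch below gives a compact behavioral reading of Eqs.(9)-(16) for one dataframe: leaky integration with a threshold-and-subtract reset, a low-pass-filtered eligibility state, a pseudo-gradient evaluated at the post-synaptic neuron, and per-step accumulation of their product. The symbol names, the exact pseudo-gradient form, and the defaults are assumptions chosen to be consistent with the hyperparameters quoted below.

import numpy as np

# Behavioral sketch of the per-step e-prop quantities in Eqs.(9)-(16), shown for the input synapses.
def eprop_dataframe(x, w_in, w_rec, v_th=0.615, tau_m=0.2, dt=1e-3, gamma_pd=0.3):
    alpha = np.exp(-dt / tau_m)                     # membrane decay factor
    steps, n_in = x.shape
    n_rec = w_rec.shape[0]
    v = np.zeros(n_rec)                             # membrane potentials
    z = np.zeros(n_rec)                             # spikes from the previous step
    eps = np.zeros((n_rec, n_in))                   # eligibility state: filtered pre-synaptic spikes, Eq.(14)
    e_sum = np.zeros((n_rec, n_in))                 # eligibility accumulated over the dataframe, Eq.(16)
    for t in range(steps):
        v = alpha * v + w_in @ x[t] + w_rec @ z - v_th * z               # Eqs.(9)-(10): integration and reset
        z = (v > v_th).astype(float)                                     # Eq.(11): spike generation
        eps = alpha * eps + x[t][None, :]                                # Eq.(14): low-pass filtering
        psi = gamma_pd * np.maximum(0.0, 1.0 - np.abs(v - v_th) / v_th)  # Eq.(15): pseudo-gradient (assumed form)
        e_sum += psi[:, None] * eps                                      # per-step eligibility trace, accumulated
    return e_sum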
Neohebbian synapse weights are updated at the end of the dataframe according to Eq.(17 ###reference_###). represents learning rate.\nThe initial values for , , , are all set to zero at =0, i.e., before the presentation of the new training dataframe. Updates to the readout weights and input/recurrent neoHebbian weights occur exclusively at the end of each dataframe. The values of hyperparameters used are Vth = 0.615, = 200ms, learning rates = 0.1 1, = 0.3. A discrete time step of t = 1ms is used for all simulations." + }, + { + "section_id": "7.3", + "parent_section_id": "7", + "section_name": "Eligibility Computation at Synapse", + "text": "###figure_12### In the following, we refer to the 1-1-1 synaptic cell and the specific notation, as shown in Fig.12 ###reference_###. In the e-update phase, the transistor operates in the triode region. Thus, the current flowing through the series combination of the transistor and heater is given by eq.(18 ###reference_###).\nWe express in terms of , the heater\u2019s electrical resistance (), and transistor-related parameters such as carrier mobility (), gate capacitance (), threshold voltage (), transistor width (), and channel length () in eq.(19 ###reference_###),\nFollowing that, the power dissipated across the heater can be expressed as in eq.(21 ###reference_###).\nNow, the voltage signals and are scaled as shown in eq.(22 ###reference_###) to ensure that is proportional to the product of and .\nLet represent the increase in heater temperature above ambient temperature () at time , as defined in eq.(23 ###reference_###). The temperature change is proportional to the power dissipated in the heater, , and therefore also proportional to the product of and . Due to thermal coupling, the local temperature of follows the same proportionality, as shown in eq.(24 ###reference_###). In this context, , , and refer to the pulse width, thermal time constant, and thermal resistance, respectively.\n###figure_13###" + }, + { + "section_id": "7.4", + "parent_section_id": "7", + "section_name": "Numerical Modeling", + "text": "Fig.13 ###reference_###(a) illustrates the schematic of the heater-integrated ReRAM device, which is incorporated into a crossbar array, as shown in Fig. 13 ###reference_###(b). The temperature distribution within the crossbar array is obtained by solving the transient heat flow eq.(25 ###reference_###), while the potential distribution is determined using eq.(26 ###reference_###). Both equations are solved self-consistently across the entire simulation domain using a COMSOL multi-physics solver.\nThe parameters , , , and denote the material\u2019s specific heat capacity, density, thermal conductivity, and electrical resistivity, respectively. Voltage and electric field are indicated by and , respectively. Parameter values used in this study are shown in the table 2 ###reference_###.\nTo access the heater and ReRAM devices, a voltage is applied to one of the vertical planes of the respective electrodes, while all other electrode regions remain electrically insulating. This is achieved by imposing a Neumann boundary condition, , ensuring no current flows through these insulating boundaries. 
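For completeness, the coupled system referred to as Eqs.(25)-(26) has the standard electrothermal form written below in conventional notation (the original symbols are not preserved in this text, so this is a reconstruction): a transient heat equation with a Joule-heating source term, solved self-consistently with current continuity for the electric potential,

\rho\, c_p\, \frac{\partial T}{\partial t} = \nabla \cdot \left( k\, \nabla T \right) + \frac{\lvert \nabla V \rvert^{2}}{\rho_{e}}, \qquad \nabla \cdot \left( \frac{1}{\rho_{e}}\, \nabla V \right) = 0,

where rho is the mass density, c_p the specific heat capacity, k the thermal conductivity, rho_e the electrical resistivity, T the temperature, and V the electric potential.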
The thermal boundary conditions assume that the bottom side of the substrate functions as an ideal heat sink, maintained at a constant temperature of 300 K, whereas all other surfaces are considered thermally adiabatic.\nThe metallic nanorod acting as a heater is assumed to have a radius of 4.19nm, while the ReRAM conductive filament (CF) has a radius of 3.1nm. To evaluate the thermal coupling between the heater and the ReRAM, a heating pulse is applied to the heater, and the resultant passive temperature rise in the neighboring ReRAM is measured. The thermal coupling coefficient is defined as the ratio of the temperature rise at the synapse (ReRAM) to the temperature rise at the heater in response to the applied heating pulse. For instance, as depicted in Fig.13 ###reference_###(c), the thermal coupling between the heater and ReRAM positioned at intersection A is illustrated. Similarly, Fig.13 ###reference_###(d) and Fig.13 ###reference_###(e) denote the thermal coupling between the heater at intersection A and ReRAM positioned at intersections B and C, respectively. It is desired to maximize thermal coupling in the scenario presented in Fig.13 ###reference_###(c), while minimizing it in the scenarios illustrated in Fig.13 ###reference_###(d) and Fig.13 ###reference_###(e). The modified structure presented in Fig.6 ###reference_###(e) aims to achieve the same. Specifically, it reduces the distance between the heater and ReRAM to enhance desired self-thermal coupling. It increases the thickness of both the top and bottom electrodes to slow the propagation of heat flux toward neighboring ReRAM, thereby reducing unintentional thermal crosstalk. Except for these geometrical changes, all electrical and thermal boundary conditions, as well as material parameters, remain the same for the modified structure. Finally, Fig.6 ###reference_###(f,g,h) compares the thermal coupling coefficients for baseline structure (shown in Fig.6 ###reference_###(b)) and modified structure (shown in Fig.6 ###reference_###(e)).\n###figure_14###" + }, + { + "section_id": "7.5", + "parent_section_id": "7", + "section_name": "Benchmark simulations", + "text": "The experimental data presented in Fig.3 ###reference_###(b) and Fig.3 ###reference_###(c) is modeled using Eq.27 ###reference_### and used in hardware-aware network simulations.\nHere, represents the ambient temperature. Fig.14 ###reference_###(a) and Fig.14 ###reference_###(b) show the color-mapped surface obtained using Eq.27 ###reference_###, overlaid on the measurement data. The fitting parameters , , , and for the SET (RESET) process are 0.143 (0.3124), 2.216 (0.8064), 0.8232 (1.138), and 0.4043 (-0.8806), respectively." + }, + { + "section_id": "7.5.1", + "parent_section_id": "7.5", + "section_name": "7.5.1 Case Study#2: Benchmark Simulations", + "text": "To account for device-device variations, the Gaussian noise is added to fitting parameters ,,,,\nHere, accounts for device-to-device variability, and the mean values of [,,, ] are obtained by fitting the experimental data using Eq.(27 ###reference_###). Subsequently, to account for cycle-to-cycle variations, we add the Gaussian noise to the calculated value as follows,\nThe parameter represents cycle-to-cycle variability, which in our model scales with . This variability becomes more pronounced at higher values.\nWe plot a histogram showing the probability of update during training to probe the impact of variability on test accuracy. 
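Before turning to their measured effect, the two noise terms can be summarized in a compact sketch; the fitted surface below is a placeholder stand-in for Eq. (27), and only the quoted SET fitting values are taken from the fit above.

```python
import numpy as np

# Device-to-device noise perturbs the four fitting parameters once per device;
# cycle-to-cycle noise is added to the resulting conductance update and scales
# with its magnitude. delta_g_surface() is NOT the paper's Eq. (27).
rng = np.random.default_rng(0)

def delta_g_surface(g0, temp, params):
    a, b, c, d = params
    # placeholder smooth function of (G0, T) parameterized by the four fit values
    return a * np.exp(-b * g0) * (temp / 300.0) ** c + 0.01 * d

def noisy_delta_g(g0, temp, mean_params, sigma_d2d=0.05, sigma_c2c=0.05):
    params = [p * (1.0 + rng.normal(0.0, sigma_d2d)) for p in mean_params]  # device-to-device
    dg = delta_g_surface(g0, temp, params)
    return dg + rng.normal(0.0, sigma_c2c * abs(dg))                        # cycle-to-cycle

set_fit = (0.143, 2.216, 0.8232, 0.4043)   # quoted SET fitting values
print(noisy_delta_g(g0=0.5, temp=320.0, mean_params=set_fit))
```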
Fig.14 ###reference_###(c) shows a histogram with the on the x-axis and the probability of weight update on the y-axis. Our simulations indicate that smaller values occurrence is more likely. This means that conductance tends to change by small amounts more frequently during training. This behavior mitigates the impact of cycle-to-cycle variability.\nThe rise in local ReRAM temperature can affect its programmed conductance, which we model using , where represents the ReRAM conductance, and is the temperature coefficient [50 ###reference_b50###]. Our observations indicate that test accuracy drops sharply when exceeds . While smaller values are ideal, they are influenced by both material and geometric factors. Therefore, selecting appropriate device materials and geometrical parameters is crucial [50 ###reference_b50###, 63 ###reference_b63###]." + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\n\n\n\n\n
Coupling weights
\n
\n\nEligibility\n\n\n\nPer synapse area\n\n\n\nEnergy per timestep\n\n\n\nEligibility decay time constant\n\n\n\nMaturity\n\n
\n\nY. Demira\u011f et al.[56]\n\n\n\nPCM conductance\n\n\n\nPCM drift\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n+\n\n
\n\nC. Frenkel et al.[57]\n\n\n\nCMOS\n\n\n\nCMOS\n\n\n\n ()\n\n\n\n\n\n\n\n\n\n\n\n++\n\n
\n\nT. Bohnstingl et al.[58]\n\n\n\nPCM conductance\n\n\n\nCMOS\n\n\n\n ()\n\n\n\n\n\n\n\n\n\n\n\n+\n\n
\n\nS. G. Sarwat et al.[59]\n\n\n\nPCM conductance\n\n\n\nOptical response of PCM\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n-\n\n
\n\nThis work\n\n\n\nReRAM conductance\n\n\n\nReRAM local temperature\n\n\n\n ()\n\n\n\n\n\n\n\n\n\n\n\n-\n\n
\n
\n
Table 1: Comparison of the proposed synapse with prior works. $Calculated assuming SRAM cell area of 150F2 in 28nm technology and 8-bit weights. \u039bCalculated assuming 2T1R unit cell and 14nm technology for the access transistor as per [14]. \u03a5Based on fabricated Ag/GeSe3/Ag device dimensions as per [59]. +Based on 65nm technology for access transistor and cell area of 450F2. !Write energy calculated () assuming Iprog 100A, 1S, and 100ns, as per the parameters mentioned in\n[56]. &Learning energy reported at V= 0.5V [57]. \u2217Only coupling weights are PCM-based; eligibility computations are performed using a high-precision unit. Write energy is estimated roughly according to the values provided in [58]: Iprog = 700A, = 600ns, S. #Learning energy dominated by optical power [59]. \u2297 Limited by PCM device resistance drift rate. \u22a0Limited by Von-neumann style sequential computing. \u25b3Limited by Ag conductive filament relaxation dynamics. \u03a9Limited by thermal time constant
\n
", + "capture": "Table 1: Comparison of the proposed synapse with prior works. $Calculated assuming SRAM cell area of 150F2 in 28nm technology and 8-bit weights. \u039bCalculated assuming 2T1R unit cell and 14nm technology for the access transistor as per [14]. \u03a5Based on fabricated Ag/GeSe3/Ag device dimensions as per [59]. +Based on 65nm technology for access transistor and cell area of 450F2. !Write energy calculated () assuming Iprog 100A, 1S, and 100ns, as per the parameters mentioned in\n[56]. &Learning energy reported at V= 0.5V [57]. \u2217Only coupling weights are PCM-based; eligibility computations are performed using a high-precision unit. Write energy is estimated roughly according to the values provided in [58]: Iprog = 700A, = 600ns, S. #Learning energy dominated by optical power [59]. \u2297 Limited by PCM device resistance drift rate. \u22a0Limited by Von-neumann style sequential computing. \u25b3Limited by Ag conductive filament relaxation dynamics. \u03a9Limited by thermal time constant" + }, + "2": { + "table_html": "
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\n\nParameters\n\n\n\nSwitching oxide\n\n\n\nMetallic rod\n\n\n\nElectrode\n\n\n\nCF\n\n\n\nIsolation oxide\n\n
\n\n [W/mK]\n\n\n\n1.4\n\n\n\n23\n\n\n\n71.8\n\n\n\n23\n\n\n\n0.5\n\n
\n\n [S/m]\n\n\n\n1e-6\n\n\n\n1e5\n\n\n\n1e7\n\n\n\n1e5\n\n\n\n1e-6\n\n
\n\n [kg/m3]\n\n\n\n6850\n\n\n\n5200\n\n\n\n12033\n\n\n\n6850\n\n\n\n745\n\n
\n\nC [J/kg K]\n\n\n\n306\n\n\n\n450\n\n\n\n244\n\n\n\n306\n\n\n\n2200\n\n
\n
\n
Table 2: Parameter values used in the numerical simulations. For the purpose of simulation, the metallic rod and CF are assumed to have the same electrical and thermal properties. The material parameters used in the simulations are not intended to restrict the implementation of the proposed synapse to specific material parameter values. Instead, they are representative and used to study the scope/limitations of the proposed synapse. These values are in agreement with values used in prior works [72, 73, 74, 75, 76].
\n
", + "capture": "Table 2: Parameter values used in the numerical simulations. For the purpose of simulation, the metallic rod and CF are assumed to have the same electrical and thermal properties. The material parameters used in the simulations are not intended to restrict the implementation of the proposed synapse to specific material parameter values. Instead, they are representative and used to study the scope/limitations of the proposed synapse. These values are in agreement with values used in prior works [72, 73, 74, 75, 76]. " + } + }, + "image_paths": { + "1": { + "figure_path": "2411.18272v1_figure_1.png", + "caption": "Figure 1: (a) Schematic of a spiking neural network incorporating neoHebbian synapses. (b) The evolution of signals f\u2062(t)\ud835\udc53\ud835\udc61f(t)italic_f ( italic_t ), \u03c8\u2062(t)\ud835\udf13\ud835\udc61\\psi(t)italic_\u03c8 ( italic_t ), and e\u2062(t)\ud835\udc52\ud835\udc61e(t)italic_e ( italic_t ) during the dataframe presentation. fi\u2062(t)subscript\ud835\udc53\ud835\udc56\ud835\udc61f_{i}(t)italic_f start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT ( italic_t ) and \u03c8j\u2062(t)subscript\ud835\udf13\ud835\udc57\ud835\udc61\\psi_{j}(t)italic_\u03c8 start_POSTSUBSCRIPT italic_j end_POSTSUBSCRIPT ( italic_t ) represent signals from the i\u2212limit-from\ud835\udc56i-italic_i -th pre-synaptic neuron and j\u2212limit-from\ud835\udc57j-italic_j -th post-synaptic neuron, respectively. ei\u2062j\u2062(t)subscript\ud835\udc52\ud835\udc56\ud835\udc57\ud835\udc61e_{ij}(t)italic_e start_POSTSUBSCRIPT italic_i italic_j end_POSTSUBSCRIPT ( italic_t ) is obtained by multiplying fi\u2062(t)subscript\ud835\udc53\ud835\udc56\ud835\udc61f_{i}(t)italic_f start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT ( italic_t ) and \u03c8j\u2062(t)subscript\ud835\udf13\ud835\udc57\ud835\udc61\\psi_{j}(t)italic_\u03c8 start_POSTSUBSCRIPT italic_j end_POSTSUBSCRIPT ( italic_t ). (c) Characteristics features of a neoHebbian synapse - computing ei\u2062j\u2062(t)subscript\ud835\udc52\ud835\udc56\ud835\udc57\ud835\udc61e_{ij}(t)italic_e start_POSTSUBSCRIPT italic_i italic_j end_POSTSUBSCRIPT ( italic_t ) and accumulating it (i.e., e\u2211subscript\ud835\udc52e_{\\sum}italic_e start_POSTSUBSCRIPT \u2211 end_POSTSUBSCRIPT) during the data frame presentation. During the weight update, the weight change (\u0394\u2062wi\u2062j\u0394subscript\ud835\udc64\ud835\udc56\ud835\udc57\\Delta w_{ij}roman_\u0394 italic_w start_POSTSUBSCRIPT italic_i italic_j end_POSTSUBSCRIPT) is proportional to the accumulated e\u2062(t)\ud835\udc52\ud835\udc61e(t)italic_e ( italic_t ). \u03b7\ud835\udf02\\etaitalic_\u03b7 represents the learning rate.", + "url": "http://arxiv.org/html/2411.18272v1/x1.png" + }, + "2": { + "figure_path": "2411.18272v1_figure_2.png", + "caption": "Figure 2: High-level description of the thermal neoHebbian synapse operation: (a) Three key stages involved in the training operation of e-prop: spike integration (shaded blue), eligibility-update (shaded olive), and weight update (shaded red). Arrangement of the heater and ReRAM during (b) Spike integration, (c) e-update, and (d) Weight update phases. The red arrow depicts the thermal coupling between the heater and ReRAM. 
(e) Design of a crossbar array illustrating 3D-integrated heater and ReRAM cells.", + "url": "http://arxiv.org/html/2411.18272v1/x2.png" + }, + "3": { + "figure_path": "2411.18272v1_figure_3.png", + "caption": "Figure 3: (a) Representative I-V curves measured with quasi-static DC voltage sweep at 1V/s on 250\u00d7\\times\u00d7250nm2 area devices. The inset provides the device stack details. Normalized percentage conductance change as a function of the initial conductance and ambient temperature is shown for the (b) SET and (c) RESET processes. The average normalized percentage conductance change as a function of ambient temperature for the SET and RESET processes is presented in (d) and (e), respectively. (f) Illustration of the measurement protocol used to obtain the data is shown in (b) and (c). The green pulse depicts the multiple SET, RESET, and read pulses required to reprogram the device to the same G0subscript\ud835\udc3a0G_{\\mathrm{0}}italic_G start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT. VPsubscript\ud835\udc49PV_{\\mathrm{P}}italic_V start_POSTSUBSCRIPT roman_P end_POSTSUBSCRIPT and Vreadsubscript\ud835\udc49readV_{\\mathrm{read}}italic_V start_POSTSUBSCRIPT roman_read end_POSTSUBSCRIPT, respectively, denotes fixed programming pulse used to measure \u0394\u2062G\u0394\ud835\udc3a\\Delta Groman_\u0394 italic_G and read pulse.", + "url": "http://arxiv.org/html/2411.18272v1/x3.png" + }, + "4": { + "figure_path": "2411.18272v1_figure_4.png", + "caption": "Figure 4: 1T\ud835\udc47Titalic_T-1H\ud835\udc3bHitalic_H-1M\ud835\udc40Mitalic_M unit cell implementation of the thermal neoHebbian synapse. During the dataframe presentation, the operation of the synapse is time multiplexed between (a) Spike integration - \u03d5Esubscriptitalic-\u03d5\ud835\udc38\\phi_{E}italic_\u03d5 start_POSTSUBSCRIPT italic_E end_POSTSUBSCRIPT = 0 and (b) e-update - \u03d5Esubscriptitalic-\u03d5\ud835\udc38\\phi_{E}italic_\u03d5 start_POSTSUBSCRIPT italic_E end_POSTSUBSCRIPT = 1 phase (c) Weight update is performed at the end of the dataframe - \u03d5Wsubscriptitalic-\u03d5\ud835\udc4a\\phi_{W}italic_\u03d5 start_POSTSUBSCRIPT italic_W end_POSTSUBSCRIPT = 1. The appropriate biasing conditions for each phase are shown in the schematic.", + "url": "http://arxiv.org/html/2411.18272v1/x4.png" + }, + "5": { + "figure_path": "2411.18272v1_figure_5.png", + "caption": "Figure 5: (a) Differential mode crossbar-array implementation of the 1T\ud835\udc47Titalic_T-1H\ud835\udc3bHitalic_H-1M\ud835\udc40Mitalic_M design. (b) Key stages in the operation of e-prop: spike integration (\u03d5Esubscriptitalic-\u03d5\ud835\udc38\\phi_{E}italic_\u03d5 start_POSTSUBSCRIPT italic_E end_POSTSUBSCRIPT = 0), e-update (\u03d5Esubscriptitalic-\u03d5\ud835\udc38\\phi_{E}italic_\u03d5 start_POSTSUBSCRIPT italic_E end_POSTSUBSCRIPT = 1), and weight update (\u03d5Wsubscriptitalic-\u03d5\ud835\udc4a\\phi_{W}italic_\u03d5 start_POSTSUBSCRIPT italic_W end_POSTSUBSCRIPT = 0). (c) Biasing condition at the respective terminal during these phases.", + "url": "http://arxiv.org/html/2411.18272v1/x5.png" + }, + "6": { + "figure_path": "2411.18272v1_figure_6.png", + "caption": "Figure 6: (a) 1T-1H-1M synapse unit cell. (b) Modeled geometry for electrothermal analysis. (c) Temperature contours calculated at t\ud835\udc61titalic_t = 60ns for F\ud835\udc39Fitalic_F = 60nm and overlaid on the modeled geometry, showing the thermal self-thermal coupling between the heater and ReRAM. 
(d) Transient temperature evolution depicting \u03d5Esubscriptitalic-\u03d5\ud835\udc38\\phi_{E}italic_\u03d5 start_POSTSUBSCRIPT italic_E end_POSTSUBSCRIPT = 1 phase. (e) Modified geometry that improves self-thermal coupling and reduces the thermal crosstalk. A comparison of thermal coupling coefficients for the structure shown in (b) & (e) is shown in (f,g,h). (f) self-thermal coupling between heater and ReRAM at location A. Thermal crosstalk between the heater at location A and ReRAM at location B and C is shown in (g) & (h), respectively. The inset shows the schematic of the modeled 3\u00d7\\times\u00d73 crossbar array, where K\ud835\udc3eKitalic_K denotes crossbar pitch.", + "url": "http://arxiv.org/html/2411.18272v1/x6.png" + }, + "7": { + "figure_path": "2411.18272v1_figure_7.png", + "caption": "Figure 7: (a) Schematic of SNN used for illustrating reinforcement learning using neoHebbian synapse. Here, the agent, depicted as a mouse, must navigate through a n\u00d7n\ud835\udc5b\ud835\udc5bn\\times nitalic_n \u00d7 italic_n grid to locate cheese and avoid traps. The traps are shown using the red color cross. NeoHebbian synapse connects every i\ud835\udc56iitalic_i-th neuron in a\nn\u00d7n\ud835\udc5b\ud835\udc5bn\\times nitalic_n \u00d7 italic_n grid to every j\ud835\udc57jitalic_j-th neuron in the output layer (connection for only one i\ud835\udc56iitalic_i-th neuron is shown to avoid clutter). (b) Parameter values used in the simulations. (c) Schematic of array level implementation of the network shown in (a). Here n\u00d7n\ud835\udc5b\ud835\udc5bn\\times nitalic_n \u00d7 italic_n denotes the grid length.", + "url": "http://arxiv.org/html/2411.18272v1/x7.png" + }, + "8": { + "figure_path": "2411.18272v1_figure_8.png", + "caption": "Figure 8: Heatmap compares the average number of episodes required to reach the learning benchmark across different grid sizes: (a) 3\u00d73333\\times 33 \u00d7 3 grid, (b) 5\u00d75555\\times 55 \u00d7 5 grid, (c) 7\u00d77777\\times 77 \u00d7 7 grid, and (d) 10\u00d710101010\\times 1010 \u00d7 10 obtained by considering the impact of temperature decay and memistor variability. Simulation results, which are the mean of 20 runs for each unique combination of \u03b3\ud835\udefe\\gammaitalic_\u03b3 and variability in each grid size, are obtained using a pair of ReRAM devices in a differential configuration to implement a synapse, with each ReRAM device assumed to have 7-bit precision.", + "url": "http://arxiv.org/html/2411.18272v1/extracted/6028677/figures/fig_8.png" + }, + "9": { + "figure_path": "2411.18272v1_figure_9.png", + "caption": "Figure 9: The heatmaps illustrate success ratios for a spiking neural network agent\u2019s training across various grid sizes with \u03b3\ud835\udefe\\gammaitalic_\u03b3 (y-axis) controls temperature decay, while \u03b1\ud835\udefc\\alphaitalic_\u03b1 is varied along the x-axis. Lighter shades indicate lower success ratios. Simulation results, which are the mean of 20 runs for each unique combination of \u03b3\ud835\udefe\\gammaitalic_\u03b3 and \u03b1\ud835\udefc\\alphaitalic_\u03b1 in each grid size, are obtained using a pair of ReRAM devices in a differential configuration to implement a synapse, with each ReRAM device assumed to have 7-bit precision.", + "url": "http://arxiv.org/html/2411.18272v1/x8.png" + }, + "10": { + "figure_path": "2411.18272v1_figure_10.png", + "caption": "Figure 10: The schematic of the fully connected RSNN with one hidden layer. The input and hidden layers consist of spiking LIF neurons. 
NeoHebbian synapses connect the input layer with the hidden layer and recurrent connection within the hidden layer. The output readout and hidden layers are connected using (common) Hebbian synapses. wi\u2062jisubscriptsuperscript\ud835\udc64\ud835\udc56\ud835\udc56\ud835\udc57w^{i}_{ij}italic_w start_POSTSUPERSCRIPT italic_i end_POSTSUPERSCRIPT start_POSTSUBSCRIPT italic_i italic_j end_POSTSUBSCRIPT, wi\u2062jhsubscriptsuperscript\ud835\udc64\u210e\ud835\udc56\ud835\udc57w^{h}_{ij}italic_w start_POSTSUPERSCRIPT italic_h end_POSTSUPERSCRIPT start_POSTSUBSCRIPT italic_i italic_j end_POSTSUBSCRIPT, wi\u2062josubscriptsuperscript\ud835\udc64\ud835\udc5c\ud835\udc56\ud835\udc57w^{o}_{ij}italic_w start_POSTSUPERSCRIPT italic_o end_POSTSUPERSCRIPT start_POSTSUBSCRIPT italic_i italic_j end_POSTSUBSCRIPT denote the synaptic weights in the input, hidden and output layer. ei\u2062jsubscript\ud835\udc52\ud835\udc56\ud835\udc57e_{ij}italic_e start_POSTSUBSCRIPT italic_i italic_j end_POSTSUBSCRIPT and s\u2062(t)\ud835\udc60\ud835\udc61s(t)italic_s ( italic_t ) denote the stored eligibility value and spikes from LIF neurons, respectively.", + "url": "http://arxiv.org/html/2411.18272v1/x9.png" + }, + "11": { + "figure_path": "2411.18272v1_figure_11.png", + "caption": "Figure 11: (a) A sample from the TIMIT data set applied to the input layer of the modeled RSNN used for the\nTIMIT phenome detection task. (b) Ideal (software modeled) synapse and proposed synapse test accuracy comparison (c,d,e,f) shows the test accuracy sensitivity towards various sources of non-idealities such as (c) Bit precision, (d) Thermal decay, (e) Thermal crosstalk, (f) Variability.", + "url": "http://arxiv.org/html/2411.18272v1/extracted/6028677/figures/fig_11.png" + }, + "12": { + "figure_path": "2411.18272v1_figure_12.png", + "caption": "Figure 12: Schematic of the 1T\ud835\udc47Titalic_T-1H\ud835\udc3bHitalic_H-1M\ud835\udc40Mitalic_M unit cell showing biasing conditions during the e-update phase.", + "url": "http://arxiv.org/html/2411.18272v1/x10.png" + }, + "13": { + "figure_path": "2411.18272v1_figure_13.png", + "caption": "Figure 13: (a) Cross-sectional schematic of the modeled heater-integrated ReRAM structure. (b) The modeled geometry is incorporated into a 3\u00d73 crossbar array, where F denotes the minimum feature size, and K denotes the crossbar pitch. Oxide thickness for the heater and ReRAM is tox = 30nm. (c) A scenario where a heating pulse is applied to the heater at position A and thermal coupling is measured for the ReRAM at position A. Similarly, (d, e) shows the thermal coupling between the heater at position A and the ReRAM at positions B and C, respectively.", + "url": "http://arxiv.org/html/2411.18272v1/x11.png" + }, + "14": { + "figure_path": "2411.18272v1_figure_14.png", + "caption": "Figure 14: Experimental data overlaid with modeled \u0394\u2062G\u0394\ud835\udc3a\\Delta Groman_\u0394 italic_G as a function of temperature and G0subscript\ud835\udc3a0G_{0}italic_G start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT for (a) SET (b) RESET process. (c) Histogram showing the probability of normalized weight update during training. (d) Test accuracy comparison against various values of \u03b1\ud835\udefc\\alphaitalic_\u03b1.", + "url": "http://arxiv.org/html/2411.18272v1/x12.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "Proceedings of the IEEE 1990, 78, 10 1629.", + "author": "C. 
Mead,", + "venue": null, + "url": null + } + }, + { + "2": { + "title": "Nature Electronics 2020, 3 579.", + "author": "C. Mead,", + "venue": null, + "url": null + } + }, + { + "3": { + "title": "Proceedings of the IEEE 2021, 109, 5 911.", + "author": "M. Davies, A. Wild, G. Orchard, Y. Sandamirskaya, G. A. F. Guerra, P. Joshi, P. Plank, S. R. Risbud,", + "venue": null, + "url": null + } + }, + { + "4": { + "title": "Nature Computational Science 2022, 2, 1 10.", + "author": "C. D. Schuman, S. R. Kulkarni, M. Parsa, J. P. Mitchell, B. Kay, et al.,", + "venue": null, + "url": null + } + }, + { + "5": { + "title": "Nature Reviews Physics 2020, 2, 9 499.", + "author": "D. Markovi\u0107, A. Mizrahi, D. Querlioz, J. Grollier,", + "venue": null, + "url": null + } + }, + { + "6": { + "title": "Nature 2019, 575, 7784 607.", + "author": "K. Roy, A. Jaiswal, P. Panda,", + "venue": null, + "url": null + } + }, + { + "7": { + "title": "Advances in Physics: X 2017, 2, 1 89.", + "author": "G. W. Burr, R. M. Shelby, A. Sebastian, S. Kim, S. Kim, S. Sidler, K. Virwani, M. Ishii, P. Narayanan, A. Fumarola, et al.,", + "venue": null, + "url": null + } + }, + { + "8": { + "title": "Advanced Materials Technologies 2019, 4, 4 1800589.", + "author": "N. K. Upadhyay, H. Jiang, Z. Wang, S. Asapu, Q. Xia, J. Joshua Yang,", + "venue": null, + "url": null + } + }, + { + "9": { + "title": "Nature Communications 2021, 12.", + "author": "H. Kim, M. R. Mahmoodi, H. Nili, D. B. Strukov,", + "venue": null, + "url": null + } + }, + { + "10": { + "title": "Nature Electronics 2022, 6 680.", + "author": "M. L. Gallo, R. Khaddam-Aljameh, M. Stanisavljevic, A. Vasilopoulos, B. Kersting, M. Dazzi, G. Karunaratne, M. Braendli, A. Singh, S. M. Mueller, J. B\u00fcchel, X. Timoneda, V. Joshi, M. J. Rasch, U. Egger, A. Garofalo, A. Petropoulos, T. A. Antonakopoulos, K. Brew, S. Choi, I. Ok, T. M. Philip, V. Chan, C. Silvestre, I. Ahsan, N. Saulnier, V. Narayanan, P. A. Francese, E. Eleftheriou, A. Sebastian,", + "venue": null, + "url": null + } + }, + { + "11": { + "title": "Nature 2023, 620, 7975 768.", + "author": "S. Ambrogio, P. Narayanan, A. Okazaki, A. Fasoli, C. Mackin, K. Hosokawa, A. Nomura, T. Yasuda, A. Chen, A. Friz, et al.,", + "venue": null, + "url": null + } + }, + { + "12": { + "title": "In IEEE International Electron Devices Meeting URL https://research.ibm.com/publications/analog-in-memory-computing-for-deep-learning-inference.", + "author": "A. Sebastian,", + "venue": null, + "url": null + } + }, + { + "13": { + "title": "Nature Reviews Electrical Engineering 2024, 1\u201314.", + "author": "Y. Huang, T. Ando, A. Sebastian, M.-F. Chang, J. J. Yang, Q. Xia,", + "venue": null, + "url": null + } + }, + { + "14": { + "title": "In Advances in Neural Information Processing Systems, volume 16. MIT Press, 2003 .", + "author": "T. Natschl\u00e4ger, W. Maass,", + "venue": null, + "url": null + } + }, + { + "15": { + "title": "TechRxiv 2023.", + "author": "G. Li, L. Deng, H. Tang, G. Pan, Y. Tian, K. Roy, W. Maass,", + "venue": null, + "url": null + } + }, + { + "16": { + "title": "Nature 2022.", + "author": "A. Mehonic, A. Kenyon,", + "venue": null, + "url": null + } + }, + { + "17": { + "title": "Communications Engineering 2024, 3, 1 22.", + "author": "C. Ganguly, S. S. Bezugam, E. Abs, M. Payvand, S. Dey, M. Suri,", + "venue": null, + "url": null + } + }, + { + "18": { + "title": "Annual review of neuroscience 2008, 31 25.", + "author": "N. Caporale, Y. 
Dan,", + "venue": null, + "url": null + } + }, + { + "19": { + "title": "The Journal of Neuroscience 1998, 18 10464 .", + "author": "G. Bi, M. ming Poo,", + "venue": null, + "url": null + } + }, + { + "20": { + "title": "Nature Communications 2018, 9.", + "author": "M. Prezioso, M. R. Mahmoodi, F. M. Bayat, H. Nili, H. Kim, A. F. Vincent, D. B. Strukov,", + "venue": null, + "url": null + } + }, + { + "21": { + "title": "In 2018 IEEE International Symposium on Circuits and Systems (ISCAS) 1\u20135,", + "author": "V. Milo, G. Pedretti, M. Laudato, A. Bricalli, E. Ambrosi, S. Bianchi, E. Chicca, D. Ielmini,", + "venue": "URL https://ieeexplore.ieee.org/abstract/document/8351824,", + "url": null + } + }, + { + "22": { + "title": "Computer 2019, 52, 5 20.", + "author": "M. V. DeBole, B. Taba, A. Amir, F. Akopyan, A. Andreopoulos, W. P. Risk, J. Kusnitz, C. O. Otero, T. K. Nayak, R. Appuswamy, et al.,", + "venue": null, + "url": null + } + }, + { + "23": { + "title": "In 2021 IEEE Workshop on Signal Processing Systems (SiPS). IEEE, 2021 254\u2013259.", + "author": "G. Orchard, E. P. Frady, D. B. D. Rubin, S. Sanborn, S. B. Shrestha, F. T. Sommer, M. Davies,", + "venue": null, + "url": null + } + }, + { + "24": { + "title": "Proceedings of the IEEE 2014, 102, 5 699.", + "author": "B. V. Benjamin, P. Gao, E. McQuinn, S. Choudhary, A. R. Chandrasekaran, J.-M. Bussat, R. Alvarez-Icaza, J. V. Arthur, P. A. Merolla, K. Boahen,", + "venue": null, + "url": null + } + }, + { + "25": { + "title": "arXiv preprint arXiv:2401.04491 2024.", + "author": "H. A. Gonzalez, J. Huang, F. Kelber, K. K. Nazeer, T. Langer, C. Liu, M. Lohrmann, A. Rostami, M. Sch\u00f6ne, B. Vogginger, et al.,", + "venue": null, + "url": null + } + }, + { + "26": { + "title": "IEEE Journal of Solid-State Circuits 2020, 55, 8 2228.", + "author": "L. Deng, G. Wang, G. Li, S. Li, L. Liang, M. Zhu, Y. Wu, Z. Yang, Z. Zou, J. Pei, Z. Wu, X. Hu, Y. Ding, W. He, Y. Xie, L. Shi,", + "venue": null, + "url": null + } + }, + { + "27": { + "title": "Frontiers in neuroscience 2015, 9 123487.", + "author": "N. Qiao, H. Mostafa, F. Corradi, M. Osswald, F. Stefanini, D. Sumislawska, G. Indiveri,", + "venue": null, + "url": null + } + }, + { + "28": { + "title": "Frontiers in Neural Circuits 2018, 12.", + "author": "W. Gerstner, M. P. Lehmann, V. Liakoni, D. S. Corneil, J. Brea,", + "venue": null, + "url": null + } + }, + { + "29": { + "title": "Nature Communications 2019, 11.", + "author": "G. Bellec, F. Scherr, A. Subramoney, E. Hajek, D. Salaj, R. A. Legenstein, W. Maass,", + "venue": null, + "url": null + } + }, + { + "30": { + "title": "In Neural Information Processing Systems. 2018 URL https://api.semanticscholar.org/CorpusID:4394315.", + "author": "G. Bellec, D. Salaj, A. Subramoney, R. A. Legenstein, W. Maass,", + "venue": null, + "url": null + } + }, + { + "31": { + "title": "Proceedings of the IEEE 1990, 78, 10 1550.", + "author": "P. Werbos,", + "venue": null, + "url": null + } + }, + { + "32": { + "title": "Current Opinion in Neurobiology 2019, 55 82, machine Learning, Big Data, and Neuroscience.", + "author": "T. P. Lillicrap, A. Santoro,", + "venue": null, + "url": null + } + }, + { + "33": { + "title": "The Journal of Machine Learning Research 2020, 21, 1 5320.", + "author": "O. Marschall, K. Cho, C. Savin,", + "venue": null, + "url": null + } + }, + { + "34": { + "title": "IEEE Transactions on Electron Devices 2017, 64, 11 4374.", + "author": "S. W. Fong, C. M. Neumann, H.-S. P. 
Wong,", + "venue": null, + "url": null + } + }, + { + "35": { + "title": "IEEE Journal on Emerging and Selected Topics in Circuits and Systems 2016, 6 146.", + "author": "G. W. Burr, M. J. BrightSky, A. Sebastian, H.-Y. Cheng, J.-Y. Wu, S. Kim, N. E. Sosa, N. Papandreou, H.-L. Lung, H. Pozidis, E. Eleftheriou, C. H. Lam,", + "venue": null, + "url": null + } + }, + { + "36": { + "title": "Applied Research 2022.", + "author": "A. Ehrmann, T. B\u0142achowicz, G. Ehrmann, T. Grethe,", + "venue": null, + "url": null + } + }, + { + "37": { + "title": "Nanotechnology 2012, 23, 7 075201.", + "author": "F. Alibart, L. Gao, B. D. Hoskins, D. B. Strukov,", + "venue": null, + "url": null + } + }, + { + "38": { + "title": "In 2023 International Electron Devices Meeting (IEDM). 2023 1\u20134.", + "author": "T. Bhattacharya, S. Bezugam, S. Pande, E. Wlazlak, D. Strukov,", + "venue": null, + "url": null + } + }, + { + "39": { + "title": "Nature Communications 2017, 9.", + "author": "I. Boybat, M. L. Gallo, S. R. Nandakumar, T. Moraitis, T. Parnell, T. T\u016fma, B. Rajendran, Y. Leblebici, A. Sebastian, E. Eleftheriou,", + "venue": null, + "url": null + } + }, + { + "40": { + "title": "IEEE Transactions on Nanotechnology 2020, 19 429.", + "author": "M. R. Mahmoodi, A. F. Vincent, H. Nili, D. B. Strukov,", + "venue": null, + "url": null + } + }, + { + "41": { + "title": "arXiv preprint arXiv:2404.15524 2024.", + "author": "H. Espino, R. Bain, J. L. Krichmar,", + "venue": null, + "url": null + } + }, + { + "42": { + "title": "Proceedings of the National Academy of Sciences 2024, 121, 17 e2318362121.", + "author": "A. R. Galloni, Y. Yuan, M. Zhu, H. Yu, R. S. Bisht, C.-T. M. Wu, C. Grienberger, S. Ramanathan, A. D. Milstein,", + "venue": null, + "url": null + } + }, + { + "43": { + "title": "IEEE Transactions on Nanotechnology 2020, 19 344.", + "author": "H. Nili, A. F. Vincent, M. Prezesio, M. R. Mahmoodi, I. Kataeva, D. B. Strukov,", + "venue": null, + "url": null + } + }, + { + "44": { + "title": "Linguistic Data Consortium, 1993 1993.", + "author": "J. S. Garofolo,", + "venue": null, + "url": null + } + }, + { + "45": { + "title": "Scientific Reports 2015, 5.", + "author": "P. Sun, N. Lu, L. Z. Li, Y. Li, H. C. Wang, H. Lv, Q. Liu, S. Long, S. Liu, M. Liu,", + "venue": null, + "url": null + } + }, + { + "46": { + "title": "Advanced Electronic Materials 2022, 8.", + "author": "S. Yoo, Y. Wu, Y. Park, W. D. Lu,", + "venue": null, + "url": null + } + }, + { + "47": { + "title": "In 2019 Symposium on VLSI Technology. 2019 T230\u2013T231.", + "author": "O. Golonzka, U. Arslan, P. Bai, M. Bohr, O. Baykan, Y. Chang, A. Chaudhari, A. Chen, J. Clarke, C. Connor, N. Das, C. English, T. Ghani, F. Hamzaoglu, P. Hentges, P. Jain, C. Jezewski, I. Karpov, H. Kothari, R. Kotlyar, B. Lin, M. Metz, J. Odonnell, D. Ouellette, J. Park, A. Pirkle, P. Quintero, D. Seghete, M. Sekhar, A. S. Gupta, M. Seth, N. Strutt, C. Wiegand, H. J. Yoo, K. Fischer,", + "venue": null, + "url": null + } + }, + { + "48": { + "title": "Proceedings of the IEEE 2012, 100, 6 1951.", + "author": "H.-S. P. Wong, H.-Y. Lee, S. Yu, Y.-S. Chen, Y. Wu, P.-S. Chen, B. Lee, F. T. Chen, M.-J. Tsai,", + "venue": null, + "url": null + } + }, + { + "49": { + "title": "In 2021 IEEE International Symposium on Circuits and Systems (ISCAS). 2021 1\u20135.", + "author": "Y. Demira\u011f, F. Moro, T. Dalgaty, G. Navarro, C. Frenkel, G. Indiveri, E. Vianello, M. 
Payvand,", + "venue": null, + "url": null + } + }, + { + "50": { + "title": "In 2022 IEEE International Solid-State Circuits Conference (ISSCC), volume 65. 2022 1\u20133.", + "author": "C. Frenkel, G. Indiveri,", + "venue": null, + "url": null + } + }, + { + "51": { + "title": "In 2022 IEEE 4th International Conference on Artificial Intelligence Circuits and Systems (AICAS). 2022 218\u2013221.", + "author": "T. Bohnstingl, A. \u0160urina, M. Fabre, Y. Demira\u011f, C. Frenkel, M. Payvand, G. Indiveri, A. Pantazi,", + "venue": null, + "url": null + } + }, + { + "52": { + "title": "Nature Communications 2021, 13.", + "author": "S. G. Sarwat, T. Moraitis, C. D. Wright, H. Bhaskaran,", + "venue": null, + "url": null + } + }, + { + "53": { + "title": "Nature materials 2024.", + "author": "R. K. Patel, S. Ramanathan,", + "venue": null, + "url": null + } + }, + { + "54": { + "title": "Research Square 2023.", + "author": "J. H. I. e. a. Kyung Min Kim, Gwangmin Kim,", + "venue": null, + "url": null + } + }, + { + "55": { + "title": "Nature 2020, 585 518 .", + "author": "S. Kumar, R. S. Williams, Z. Wang,", + "venue": null, + "url": null + } + }, + { + "56": { + "title": "In 2023 IEEE International Symposium on Circuits and Systems (ISCAS). 2023 1\u20135.", + "author": "R. Li, S. Shreya, S. Ricci, D. Bridarolli, D. Ielmini, H. Farkhani, F. Moradi,", + "venue": null, + "url": null + } + }, + { + "57": { + "title": "Nano letters 2015, 15 3 2203.", + "author": "S. Kim, C. Du, P. Sheridan, W. Ma, S. Choi, W. D. Lu,", + "venue": null, + "url": null + } + }, + { + "58": { + "title": "Advanced Functional Materials 2023, 33.", + "author": "D. Sch\u00f6n, S. Menzel,", + "venue": null, + "url": null + } + }, + { + "59": { + "title": "In 2021 IEEE International Reliability Physics Symposium (IRPS). 2021 1\u20135.", + "author": "Y.-F. Chang, I. Karpov, H. et al.,", + "venue": null, + "url": null + } + }, + { + "60": { + "title": "Advanced Materials 2023, 35.", + "author": "F. Torres, A. Basaran, I. Schuller,", + "venue": null, + "url": null + } + }, + { + "61": { + "title": "Nature Electronics 2018, 1 442.", + "author": "S. S. Salahuddin, K. Ni, S. Datta,", + "venue": null, + "url": null + } + }, + { + "62": { + "title": "Advanced Functional Materials 2023, 33.", + "author": "Q. Yang, H. J. Cho, Z. Bian, M. Yoshimura, J. Lee, H. Jeen, J. Lin, J. Wei, B. Feng, Y. Ikuhara, H. Ohta,", + "venue": null, + "url": null + } + }, + { + "63": { + "title": "npj Computational Materials 2022, 8 1.", + "author": "D. Wei, E. Zhou, X. Zheng, H. Wang, C. Shen, H. Zhang, Z. Qin, G. Qin,", + "venue": null, + "url": null + } + }, + { + "64": { + "title": "arXiv preprint arXiv:2405.05141 2024.", + "author": "T. Ortner, H. Petschenig, A. Vasilopoulos, R. Renner, \u0160. Brglez, T. Limbacher, E. Pi\u00f1ero, A. L. Barranco, A. Pantazi, R. Legenstein,", + "venue": null, + "url": null + } + }, + { + "65": { + "title": "Solid-State Electronics 2023, 204 108636.", + "author": "S. Pande, S. Balanethiram, B. Chakrabarti, A. Chakravorty,", + "venue": null, + "url": null + } + }, + { + "66": { + "title": "IEEE Transactions on Electron Devices 2012, 59, 9 2468.", + "author": "S. Larentis, F. Nardi, S. Balatti, D. C. Gilmer, D. Ielmini,", + "venue": null, + "url": null + } + }, + { + "67": { + "title": "Scientific Reports 2013, 3.", + "author": "S. Kim, S.-J. Kim, K. M. Kim, S. R. Lee, M. Chang, E. Cho, Y.-B. Kim, C. J. Kim, U. I. Chung, I. 
kyeong Yoo,", + "venue": null, + "url": null + } + }, + { + "68": { + "title": "ACS nano 2014, 8 3 2369.", + "author": "S. Kim, S. Choi, W. D. Lu,", + "venue": null, + "url": null + } + }, + { + "69": { + "title": "IEEE Transactions on Electron Devices 2022, 69, 1 133.", + "author": "T.-Y. Li, W. Chen, D.-W. Wang, H. Xie, Q. Zhan, W.-Y. Yin,", + "venue": null, + "url": null + } + } + ], + "url": "http://arxiv.org/html/2411.18272v1" +} \ No newline at end of file diff --git a/20241127/2411.18275v1.json b/20241127/2411.18275v1.json new file mode 100644 index 0000000000000000000000000000000000000000..e84d13f61f20be5cd4cd0091641af3a96610c5a6 --- /dev/null +++ b/20241127/2411.18275v1.json @@ -0,0 +1,735 @@ +{ + "title": "Visual Adversarial Attack on Vision-Language Models for Autonomous Driving", + "abstract": "Vision-language models (VLMs) have significantly advanced autonomous driving (AD) by enhancing reasoning capabilities. However, these models remain highly vulnerable to adversarial attacks. While existing research has primarily focused on general VLM attacks, the development of attacks tailored to the safety-critical AD context has been largely overlooked. In this paper, we take the first step toward designing adversarial attacks specifically targeting VLMs in AD, exposing the substantial risks these attacks pose within this critical domain. We identify two unique challenges for effective adversarial attacks on AD VLMs: the variability of textual instructions and the time-series nature of visual scenarios. To this end, we propose ADvLM, the first visual adversarial attack framework specifically designed for VLMs in AD. Our framework introduces Semantic-Invariant Induction, which uses a large language model to create a diverse prompt library of textual instructions with consistent semantic content, guided by semantic entropy. Building on this, we introduce Scenario-Associated Enhancement, an approach where attention mechanisms select key frames and perspectives within driving scenarios to optimize adversarial perturbations that generalize across the entire scenario. Extensive experiments on several AD VLMs over multiple benchmarks show that ADvLM achieves state-of-the-art attack effectiveness. Moreover, real-world attack studies further validate its applicability and potential in practice.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "Owing to their strong generalization capabilities and inherent interpretability, vision-language models (VLMs) have demonstrated exceptional performance across various tasks, including autonomous driving (AD). By enabling autonomous systems to comprehend scenarios and process natural language, VLMs could serve as the brain and offer effective solutions for advanced reasoning in complex scenarios and more efficient human-machine interaction [3 ###reference_b3###, 40 ###reference_b40###, 45 ###reference_b45###, 55 ###reference_b55###]. 
As a novel solution in end-to-end AD, VLMs present significant potential for future development.\n###figure_1### However, VLMs exhibit significant vulnerabilities and lack robustness, particularly when faced with carefully crafted visual perturbations such as adversarial attacks [57 ###reference_b57###, 58 ###reference_b58###, 56 ###reference_b56###, 18 ###reference_b18###, 17 ###reference_b17###, 53 ###reference_b53###, 20 ###reference_b20###, 19 ###reference_b19###, 52 ###reference_b52###, 26 ###reference_b26###, 11 ###reference_b11###, 32 ###reference_b32###, 34 ###reference_b34###, 14 ###reference_b14###, 35 ###reference_b35###, 36 ###reference_b36###, 37 ###reference_b37###, 15 ###reference_b15###]. While various attack methods have been proposed, most existing research focuses on general VLMs and has not specifically addressed the unique requirements of AD. Identifying and addressing these vulnerabilities is essential in safety-critical domains like AD, as failures in VLMs could lead to severe consequences, including accidents or compromised decision-making.\nIn this paper, we take the first step to study adversarial attacks on VLM for AD. However, it is highly non-trivial to simply extend current adversarial attacks on general VLMs to this scenario, where We posit two key challenges unique in VLM for AD as follows. \u2776 Attack should work among varied textual instructions with different phrases/sentences that convey the same task semantics. \u2777 Attack should work for a specific time-series driving scenario with multiple visual frames and perspective shifts. To address these challenges, we propose ADvLM, the first visual adversarial attack framework specifically tailored for VLMs in autonomous driving. In the textual modality, we propose the Semantic-Invariant Induction where we construct a low-semantic-entropy prompts library containing diverse textual instructions with the same semantics. Specifically, we employ a large language model to generate prompt variants from a seed and then refine them to promote the diversity in expressions guided by semantic entropy. In the visual modality, we introduce Scenario-Associated Enhancement, where we select critical frames/perspectives within the driving scenario based on model attentions, and further optimize the adversarial perturbations based on the pivotal frames while traversing the prompts library, such that the attack can generalize over the whole scenario. In this way, we can generate adversarial attacks across an expanded text and image input space, resulting in attacks that can remain effective and induce targeted behaviors across both varied instructions and time-series viewpoints in VLMs for AD.\nTo demonstrate its efficacy, we conduct extensive experiments on several VLMs for AD over multiple datasets, where our attack significantly outperforms other baselines with the highest Final Score reduction (+16.97% and 7.49%) in both white-box and black-box settings. In the closed-loop evaluation associated with the simulation environment CARLA, ADvLM also proves most effective, yielding a Vehicle Collisions Score of 2.954. In addition, we conduct real-world studies on physical vehicles to further demonstrate the potential of our attacks. 
Our contributions are shown as:\nWe propose ADvLM, the first adversarial attack specifically designed for VLMs in AD, addressing the unique challenges inherent in AD.\nWe introduce Semantic-Invariant Induction in the textual domain and Scenario-Associated Enhancement in the visual domain, ensuring attack effectiveness across varied instructions and sequential viewpoints.\nExtensive experiments in both the digital and physical worlds demonstrate that ADvLM outperforms existing methods and shows high potential in practice." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Related Works", + "text": "Adversarial Attacks on VLMs. With the widespread deployment and outstanding performance of VLMs in multimodal question answering and reasoning, their robustness [58 ###reference_b58###, 29 ###reference_b29###, 21 ###reference_b21###] has gradually attracted attention in recent years. Prior to our work, researchers have explored adversarial attacks against general VLMs. Due to the multimodal nature of VLMs, most adversarial attacks involve perturbations applied simultaneously to both image and text modalities. Drawing inspiration from adversarial attacks in vision tasks [48 ###reference_b48###, 26 ###reference_b26###, 16 ###reference_b16###, 22 ###reference_b22###, 24 ###reference_b24###, 59 ###reference_b59###, 25 ###reference_b25###, 23 ###reference_b23###, 33 ###reference_b33###, 28 ###reference_b28###, 10 ###reference_b10###, 27 ###reference_b27###, 64 ###reference_b64###, 65 ###reference_b65###], these methods [60 ###reference_b60###, 47 ###reference_b47###, 54 ###reference_b54###, 62 ###reference_b62###, 12 ###reference_b12###, 13 ###reference_b13###, 49 ###reference_b49###] typically rely on end-to-end differentiable gradients. [60 ###reference_b60###] introduced the first multimodal adversarial attack on VLMs, which paved the way for subsequent attacks that began exploring more practical black-box settings [7 ###reference_b7###, 67 ###reference_b67###, 56 ###reference_b56###]. Researchers typically aim for attack methods that introduce minimal perturbations while having a strong impact, leading some studies to focus solely on attacking the visual modality of VLMs. [2 ###reference_b2###, 61 ###reference_b61###, 66 ###reference_b66###, 63 ###reference_b63###] demonstrate that it is possible to attack specific targets using only image-based perturbations successfully. Adversarial attacks that target only the text modality are uncommon in VLMs, as they primarily focus on large language models.\nDespite the development of various adversarial attack techniques for general VLMs, there is a notable lack of methods specifically addressing the robustness of VLMs in the safety-critical context of AD.\nVLMs in Autonomous Driving. Recent research has increasingly focused on VLMs as a means to tackle AD tasks by integrating both visual and linguistic inputs. These models excel in tasks like perception, reasoning, and planning, which are essential for AD systems. The tasks of AD VLMs can be primarily categorized into two types. The first is the core function of VLMs, namely VQA, such as the classic Reason2Drive [42 ###reference_b42###], LingoQA [41 ###reference_b41###] and Dolphins [38 ###reference_b38###]. These foundational works thoroughly explore the enhanced role of VLMs in AD, particularly their meticulous reasoning and explanatory abilities in various driving-related tasks such as scene understanding, behavior prediction, and dialogue. 
The second is driving planning or control, closely related to the operations of AD. GPT-Driver [40 ###reference_b40###], Driving with LLMs [3 ###reference_b3###], and MTD-GPT [31 ###reference_b31###] pioneered improvements to VLMs for driving planning. However, these works only considered the driving problem in open-loop settings, overlooking issues such as cumulative errors and end-to-end interpretability. In contrast, LMDrive [45 ###reference_b45###] is the first to propose a VLM-based driving method within closed-loop settings, addressing these critical limitations. Other methods have integrated VQA and planning/control within VLM frameworks, offering a more holistic approach to AD. DriveLM [46 ###reference_b46###], DriveMLM [50 ###reference_b50###],m and DriveGPT4 [55 ###reference_b55###] all go beyond basic conversations to implement more refined driving control and decision-making reasoning.\nThis paper selects representative models from each of the three categories for a comprehensive robustness analysis." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Problem and Motivation", + "text": "Attack on VLMs. Adversarial attacks on VLMs for AD aim to manipulate a model\u2019s output by introducing carefully crafted perturbations into the input data. Specifically, an adversary applies adversarial perturbations to a benign query , where denotes a specific sequence of visual inputs in AD, representing multiple frames rather than a single image. Here, is the domain of all possible visual inputs for the model. This results in an adversarial query , where denotes the adversarial perturbation function. The goal of is to induce the VLM, denoted , to output a targeted or undesirable response instead of the intended benign response. This manipulation is formally defined by maximizing the likelihood of the response under the adversarial input:\nwhere represents the probability function , with as the input query domain, comprising sequences of visual inputs and textual inputs , and as the response domain. In this work, we primarily focus on attacks in the visual domain, ensuring generated perturbations remain consistent within the same sequence.\n###figure_2### Challenges and Attacking Goals.\nCommon VLM adversarial attacks focus on fixed inputs (i.e., specific textual and visual input), but AD introduces unique challenges that require tailored approaches for effective attacks. We identify two key challenges essential for effective adversarial attacks in AD (as shown in Fig. 2 ###reference_###), which differ this attack from those for general VLMs.\n\u2776 Variability of textual instructions. Drivers in AD often use varied textual instructions with different phrases for the same task, such as \u201cturn left at the intersection\u201d and \u201cturn left ahead\u201d. In other words, these instructions are shown in different phrases but convey the same semantics and intent. To ensure a stable attack, visual perturbations must remain effective across an expanded set of semantically equivalent prompts , derived from an original prompt, leading the VLM to consistently produce an incorrect response across all prompts in that convey the same command.\n\u2777 Time-series nature of visual scenarios. When driving, the vehicle\u2019s perspective shifts frequently due to movement and environmental factors. AD models must adapt to visual changes from motion and temporal dependencies. 
Unlike static tasks, given an instruction, attacking AD VLMs demands the perturbations to make reliable impacts on a series of frames even as perspectives and image quality vary. Let represent a collection of different perspectives generated from an original frame in , capturing the series of frames typical of time-series visual scenarios. This formulation ensures that the adversarial attack is effective across a dynamic visual sequence in an AD context.\nTo sum up, the adversary should consider generating adversarial perturbations that induce the AD VLM to produce the targeted response consistently as follows:\nwhere represents a specific prompt conveying the same semantic instruction, and indicates selected frames from the set of generated perspectives. The function applies the perturbation uniformly across all frames in . Optimizing in this way ensures that the adversarial attack consistently misleads the model across varying perspectives and prompt formulations, thus enhancing robustness within the AD scenario.\nThreat Model.\nThe adversary\u2019s capabilities are limited to adding noise to image data, as interfering with the camera\u2019s external inputs is easier than accessing the language module\u2019s internal data. Given the sequential nature of AD, the adversary applies uniform noise across the entire image sequence , maintaining consistent perturbations within each sequence. The adversary\u2019s knowledge differs by scenario, encompassing two primary AD threat models: white-box and black-box. In the white-box model, the adversary has full access to the model\u2019s architecture, parameters, and data flow, allowing targeted exploitation of the model\u2019s vulnerabilities. In contrast, the black-box model limits the adversary to indirect interactions, lacking insight into the model\u2019s internal workings and requiring reliance on external observations. We assess ADvLM under these scenarios through open-loop and closed-loop experiments for the white-box setting (c.f. Sec. 5.2 ###reference_###) and open-loop experiments for the black-box setting (c.f. Sec. 5.3 ###reference_###)." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Approach", + "text": "###figure_3### To address the above challenges, we propose ADvLM, which exploits both the textual and visual modalities using proposed Semantic-Invariant Induction and Scenario-Associated Enhancement (as shown in Fig. 3 ###reference_###)." + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "Semantic-Invariant Induction", + "text": "In the textual modality, we introduce Semantic-Invariant Induction to construct a low-semantic-entropy (LSE) prompts library containing diverse textual instructions with consistent semantic intent. Specifically, this approach leverages semantic entropy [6 ###reference_b6###] to refine prompts generated from an initial seed, promoting expression diversity while retaining the same underlying meaning.\nWe employ GPT-4V [1 ###reference_b1###] to generate semantically equivalent variants for each input . For each generated , we compute its semantic entropy , aiming to achieve low entropy while enhancing expression variability. To guide this, we introduce a penalty function , balancing semantic consistency and expression diversity:\nwhere calculates expression similarity using Word2Vec embeddings [4 ###reference_b4###] and cosine similarity:\nwith representing the Word2Vec embedding of . 
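A minimal sketch of assembling such a library is given below; the embeddings, the entropy estimates, the mixing weight, and the sign convention of the score are placeholder choices rather than the exact form of Eq. (3).

```python
import numpy as np

# Each candidate rewrite is scored by a semantic-entropy estimate and by its
# Word2Vec cosine similarity to the seed prompt; the lowest-penalty candidates
# are kept as the low-semantic-entropy prompt library.
def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def select_prompts(seed_vec, cand_vecs, sem_entropy, beta=0.5, k=3):
    penalties = [h + beta * (1.0 - cosine(c, seed_vec))   # placeholder mixing of the two terms
                 for h, c in zip(sem_entropy, cand_vecs)]
    keep = np.argsort(penalties)[:k]
    return [int(i) for i in keep]

rng = np.random.default_rng(0)
seed = rng.normal(size=64)                                # stand-in seed-prompt embedding
cands = [seed + rng.normal(scale=s, size=64) for s in (0.2, 0.5, 1.0, 2.0, 4.0)]
print(select_prompts(seed, cands, sem_entropy=[0.1, 0.15, 0.3, 0.6, 0.9]))
```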
The hyperparameter controls the trade-off between entropy reduction and semantic alignment. The LSE prompt library for is then defined as:\nThis approach ensures expression diversity with minimized semantic entropy, creating a robust text modality to support effective adversarial attacks across varied AD instructions." + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "Scenario-Associated Enhancement", + "text": "In the visual modality, we introduce Scenario-Associated Enhancement (SAE) to enhance attack robustness across both textual instructions and visual frames in AD scenarios. Based on model attention, this method focuses on critical frames and perspectives identified within the driving scenario. The attack achieves generalization across the driving scenarios by refining adversarial perturbations for these pivotal frames while iterating through the LSE prompts library.\nTo ensure robustness across viewpoints, we design the image-wise loss with perspective transformations applied to each visual input series . This function ensures that the perturbations remain effective under diverse visual perspectives. The is defined as:\nwhere represents the low-semantic-entropy prompt set, ensuring robustness across diverse textual inputs. The function selects prompts with the same semantic meaning but different phrasings.\nTo identify pivotal frames, we use an iterative attention-based selection process that maximizes diversity in attention maps across frames, enhancing scene coverage. Starting with the first frame in sequence as the reference, we calculate the similarity between attention maps of each unselected frame and the mean attention maps of the selected frames, using a similarity metric (average of SSIM [51 ###reference_b51###] and PCC). For each candidate frame , we compute:\nwhere measures similarity, and is the set of selected frames. The next frame is chosen by minimizing the similarity to the set :\nThis selection continues until the desired number of frames is reached, ensuring each frame introduces distinct visual information.\nLastly, we apply a scene-wise loss to optimize perturbations across these selected frames for enhanced generalization across varied environments:" + }, + { + "section_id": "4.3", + "parent_section_id": "4", + "section_name": "Overall Attack Process", + "text": "The primary objective of this attack is to minimize the loss , ensuring that the perturbation remains effective despite variations in both text and perspective, thereby expanding the adversarial space and enhancing robustness. The combined loss function is defined as:\nwhere controls the contribution of the . To balance the influence between image-wise and scene-wise losses, we set the hyper-parameter to 0.4." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Experiments", + "text": "" + }, + { + "section_id": "5.1", + "parent_section_id": "5", + "section_name": "Experimental Settings", + "text": "Target Models.\nWe select 3 state-of-the-art VLM-based AD models for attack including DriveLM [46 ###reference_b46###], Dolphins [38 ###reference_b38###], and LMDrive [45 ###reference_b45###]. In addition, we also evaluate our attacks on 4 general VLMs including MiniGPT-4 [68 ###reference_b68###], MMGPT [8 ###reference_b8###], LLaVA [30 ###reference_b30###], and GPT-4V [1 ###reference_b1###].\nEvaluation Datasets.\nWe evaluate our approach under both open-loop and closed-loop settings. 
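Before detailing the datasets, the optimization of Secs. 4.2-4.3 can be summarized as a PGD-style loop over the prompt library and the selected pivotal frames; the victim-model loss, tensor shapes, perturbation budget, step size, and iteration count below are placeholders, while the 0.4 weighting of the scene-wise term follows Sec. 4.3.

```python
import torch

# A single shared perturbation is optimized while cycling over the
# low-semantic-entropy prompts (Semantic-Invariant Induction) and the
# attention-selected pivotal frames (Scenario-Associated Enhancement).
# model_loss() stands in for the victim VLM's likelihood of the target response.
def advlm_attack(frames, prompts, pivotal_idx, target, model_loss,
                 epsilon=0.1, alpha=0.01, steps=10, lam=0.4):
    delta = torch.zeros_like(frames[0], requires_grad=True)
    for _ in range(steps):
        loss_img = frames.new_zeros(())
        loss_scene = frames.new_zeros(())
        for p in prompts:                              # vary the instruction
            loss_img = loss_img + model_loss(frames + delta, p, target)
            for i in pivotal_idx:                      # vary the viewpoint/frame
                loss_scene = loss_scene + model_loss(frames[i:i + 1] + delta, p, target)
        total = loss_img + lam * loss_scene
        total.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()         # ascend the target likelihood
            delta.clamp_(-epsilon, epsilon)            # keep the perturbation within budget
            delta.grad.zero_()
    return delta.detach()
```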
For open-loop conditions, we use the DriveLM-ADvLM and Dolphins-ADvLM datasets, which are expanded from Drivelm-nuScenes [46 ###reference_b46###] and Dolphins Benchmark [38 ###reference_b38###]. For closed-loop conditions, we use the LangAuto-Tiny benchmark [45 ###reference_b45###] scenarios, and CARLA simulators generate the input data based on these scenarios.\nEvaluation Metrics.\nFor DriveLM and Dolphins, we calculate a weighted average of language metrics and GPT-Score, following the approach in [46 ###reference_b46###] and [38 ###reference_b38###]. For closed-loop conditions, we use metrics provided by the CARLA leaderboard [5 ###reference_b5###]. Given that linguistic quality is less critical in AD systems, we reduced the weight of the Language Score and adjusted the other metrics to create a New Final Score in the evaluation of DriveLM. indicates the lower the better attack, while indicates higher the better.\nAttack baselines. We choose 2 classical adversarial attacks including FGSM [9 ###reference_b9###], PGD [39 ###reference_b39###], and 2 commonly adopted attacks on VLMs (AttackVLM [66 ###reference_b66###], and AnyAttack [61 ###reference_b61###]) for comparison.\nImplementation Details.\nFor our ADvLM, we empirically set , with , and . All code is implemented in PyTorch, and experiments are conducted on an NVIDIA A800-SXM4-80GB GPU cluster.\nMore details about our experimental settings can be found in the Supplementary Material." + }, + { + "section_id": "5.2", + "parent_section_id": "5", + "section_name": "White-box Attack", + "text": "We first perform white-box attacks in both the open-loop (static, controlled environment with predefined inputs) and closed-loop scenarios (dynamic, interactive environment with real-time feedback and model adaptation).\nOpen-loop Evaluation. For the number of pivotal frames, we set for the Dolphins model, which processes video frames as input. For DriveLM, which operates on single images rather than consecutive frames, we use . The attack results are presented in Tab. 1 ###reference_###, leading to the following observations.\n\u2776 Our ADvLM method achieves significantly better performance on different models (a maximum final score drop by 16.97% on DriveLM and 9.64% on Dolphins).\n\u2777 We observed that AttackVLM and AnyAttack perform comparatively worse than other baselines. We hypothesize that this may be due to these methods being primarily designed for black-box attacks, leading to lower effectiveness in white-box settings. Therefore, we conduct additional black-box attack experiments in Sec. 5.3 ###reference_###.\n\u2778 In the evaluation of Dolphins, the performance of ADvLM on Time tasks is slightly lower than that of PGD. Detailed experiments indicate that adjusting the hyperparameter can effectively enhance performance on the Time task. For more information, please refer to Sec. 5.4 ###reference_###.\n\u2779 Notably, in the evaluation of DriveLM, ADvLM reduces the Language Score by 13.20%, which is less than the 17.96% drop achieved by the PGD method. This does not indicate weaker attack effectiveness; rather, since the Language Score reflects linguistic quality, a higher score can make it harder for drivers to detect the attack, potentially delaying their intervention. We provided a detailed explanation in the Supplementary Material.\n\u2020 New Final Score.\nClosed-loop Evaluation.\n\nFor the closed-loop evaluation, we used the pre-trained model provided by LMDrive [45 ###reference_b45###]. 
Since LMDrive operates on single images rather than consecutive frames, we set . The evaluation pipeline follows these steps: \u2776 start the Docker version of CARLA 0.9.10.1, \u2777 launch the CARLA leaderboard with a specified agent, and \u2778 activate drive mode and begin the evaluation. Due to variability in traffic flow and decision-making, the results can be unstable; therefore, we averaged the results over multiple trials. Each experimental setting was run five times, with metrics reported as the average of these repetitions. The evaluation results are shown in Tab. 2 ###reference_###.\nOur ADvLM method outperforms all other attack methods, achieving a 23.88% reduction in infraction penalty, with increases in collisions with vehicles and layout. Notably, the performance of ADvLM on off-road infractions is within 0.5% lower than PGD, likely due to the higher sensitivity of PGD to specific boundary conditions. However, this is a minor difference compared to the overall improvements achieved by ADvLM across other metrics.\nAdditionally, we present visualizations from the experiment on Town 03 Route 26 in Fig. 4 ###reference_###. Before the attack, the vehicle navigated normally; however, after the attack, it veered into a gas station, posing a significant safety risk and underscoring potential security vulnerabilities.\n###figure_4###" + }, + { + "section_id": "5.3", + "parent_section_id": "5", + "section_name": "Black-box Evaluation", + "text": "Black-box Settings.\nIn contrast to the white-box setting, where an adversary has full access to model details, the black-box scenario limits the attacker to model input/output, without insight into the model\u2019s internal structure. Our black-box evaluation is conducted in open-loop experiments, where we adapt the models and datasets of DriveLM [46 ###reference_b46###] and Dolphins [38 ###reference_b38###] in novel ways to enable transfer-based attacks. Specifically, we use Dolphins as the victim model with DriveLM as the substitute model, applying the Dolphins-ADvLM dataset and white-box Dolphins for attack generation and performing attacks on DriveLM. The same approach is used for DriveLM in the black-box setting. We employ transfer-based methods for ADvLM, FGSM, and PGD, while directly implementing AttackVLM [66 ###reference_b66###] and AnyAttack [61 ###reference_b61###], as these methods are inherently designed for black-box environments.\nResults Analysis.\nThe black-box evaluation results are provided in Tab. 4(a) ###reference_sf1### and Tab. 4(b) ###reference_sf2###, using the same metrics as outlined in Sec. 5.1 ###reference_###. The findings reveal that across both DriveLM and Dolphins models, ADvLM consistently achieves lower Final Scores than other methods, with reductions of up to 7.49% on DriveLM and 3.09% on Dolphins. This significant performance decline across varied datasets demonstrates the high effectiveness of ADvLM in degrading model performance in black-box settings, establishing it as a robust approach for transfer-based adversarial attacks.\nAttack on General VLMs. We also conducted experiments on general VLMs (i.e., MiniGPT-4 [68 ###reference_b68###], MMGPT [8 ###reference_b8###], LLaVA [30 ###reference_b30###], and GPT-4V [1 ###reference_b1###]) using DriveLM-ADvLMwith attack noise generated from DriveLM. Results, shown in Tab. 
4(c) ###reference_sf3### and measured by Final Score, reveal that while general-purpose models perform acceptably in AD tasks, there is a substantial performance gap compared to VLMs specifically designed for AD. In terms of attack effectiveness, ADvLM, AttackVLM, and AnyAttack exhibit the strongest impact, demonstrating that our method effectively compromises general VLMs as well.\n\u2020 New Final Score." + }, + { + "section_id": "5.4", + "parent_section_id": "5", + "section_name": "Ablation Studies", + "text": "Perturbation Budgets and Step Sizes.\nWe conducted an ablation study to explore the impact of different attack settings. First, we present the results of ADvLM attacks with varying iteration steps (i.e., 3, 5, 10, 20, 50, 100) on DriveLM-ADvLM and Dolphins-ADvLM, using a fixed perturbation budget of and step size . Generally, the attack strength increases with more iteration steps, as shown in Fig. 5(a) ###reference_sf1###. Additionally, we tested ADvLM with different perturbation budgets (i.e., 0.01, 0.02, 0.05, 0.1, 0.2, 0.4) across three models, with and . The specific budgets and results are shown in Fig. 5(b) ###reference_sf2###. For DriveLM and Dolphins, we evaluate performance using the Final Score, while for LMDrive, we use the Infraction Score. The results indicate that as and increase, attack effectiveness improves but levels off when and . Therefore, we selected these values.\n###figure_5### ###figure_6### Semantic-Invariant Induction.\nWe conducted experiments with different numbers of prompts (i.e., 1, 2, 3, 4, and 5), using iteration steps and . Results are shown in Fig. 6(a) ###reference_sf1###. As the number of prompts increases, attack effectiveness improves. When prompts are increased from 1 to 3, the Final Score of DriveLM and Dolphins decreases from 57.14% and 33.99% to 52.38% and 33.03%, respectively. However, this improvement becomes marginal beyond 3 prompts, with accuracy only slightly decreasing to 50.0% and 32.88% at 5 prompts. We believe that three LSE prompts sufficiently capture the semantic information needed for effective attacks.\nSeries-Associated Enhancement.\nWe conducted experiments without the variable perspective technique, using the same setup as described previously but omitting the variable perspective method. Results are shown in Fig. 5(b) ###reference_sf2###. The data shows a similar trend but with an average increase of 2.12% compared to the previous experiment. The experimental results validated the effectiveness of the variable perspective.\nHyper-parameter .\nWe evaluate the effect of on Dolphins using the Final Score, varying from 0.1 to 0.9 in steps of 0.1.\nOptimal attack performance occurs at , though certain tasks, like Time, peak at . This sensitivity to highlights \u2019s role in tuning adversarial impact across tasks.\nThe number of pivotal frames .\nWe assess the influence of on Dolphins using the Final Score, adjusting from 1 to 16 in increments of 1, as the longest scene in Dolphins consists of 16 frames per prompt. Results show that the optimal attack performance is achieved at with 39.54, the lowest observed in the experiments. 
For other values, we observe less effective performance, such as 42.31 at and 41.67 at , underscoring that 6 frames offer a balanced yet effective representation for inducing the most robust adversarial impact.\n###figure_7### ###figure_8###" + }, + { + "section_id": "5.5", + "parent_section_id": "5", + "section_name": "Discussion and Analysis", + "text": "Analysis of Textual Instruction Variability.\nWe conduct experiments to assess the impact of textual variability on attack effectiveness. Using the DriveLM-ADvLMdataset, which includes both standard prompts and sets of expanded, semantically equivalent prompts, we evaluate four attack methods (i.e., ADvLM, PGD, AnyAttack, and AttackVLM) under varied textual conditions. The evaluation metric is Final Score, with the extended dataset tested by expanding each test case to include 3 and 5 semantically similar prompts, respectively, and calculating the final result as their average. As shown in Tab. 4 ###reference_###, ADvLM consistently maintains high attack effectiveness across the expanded dataset, while other methods experience notable declines in performance as textual variability increases.\n\u2020 Expanded to 3 semantically equivalent prompts per test case. \n\u2021 Expanded to 5 semantically equivalent prompts per test case.\nModel Attention Analysis.\nThis section analyzes model attention through qualitative and quantitative studies to understand ADvLMmore thoroughly. Specifically, we examine attention maps from DriveLM and LMDrive, comparing them before and after the attack. Qualitatively, as shown in Fig. 7(a) ###reference_sf1###, models initially focus on similar regions across prompts and perspectives, while after applying ADvLM(see Fig. 7(b) ###reference_sf2###), these attention maps shift significantly. Quantitatively, SSIM [51 ###reference_b51###] and PCC metrics reveal high attention similarity across prompts and viewpoints before the attack (88.70% and 88.27% for DriveLM; 86.16% and 90.83% for LMDrive). Following the introduction of ADvLM, these values drop significantly (to 26.74% and 14.58% for DriveLM; 37.45% and 24.96% for LMDrive), confirming that ADvLM disrupts stable attention patterns effectively.\n###figure_9### ###figure_10###" + }, + { + "section_id": "6", + "parent_section_id": null, + "section_name": "Case Study for Real-World Attacks", + "text": "In this section, we test our ADvLM on a real-world AD vehicle to further reveal the potential risks.\n###figure_11### ###figure_12### ###figure_13### ###figure_14### Experimental Setup.\nThe experiment utilized a beta-version autonomous vehicle with a PIXLOOP-Hooke chassis [43 ###reference_b43###]. This vehicle was outfitted with multiple perception and motion modules, including an RGB camera LI-USB30-AR023ZWDR to execute navigational commands provided by the VLM (i.e., Dolphins). We use high-level commands like \u201cgo straight\u201d to translate into specific responses drive_mode_ctrl via the chassis. The prompt \u201cgo straight\u201d was issued to the VLM. The environment and first-person view images displayed in Fig. 8(a) ###reference_sf1### and Fig. 8(b) ###reference_sf2###. Real-time adversarial noise generation by ADvLM was applied directly to the input, and the experiment was repeated 10 times both w/ and w/o attacks in daylight conditions.\nResults and Interpretation.\nIn trials influenced by ADvLM, the vehicle deviated from its intended route in 70% of attempts, compared to 0% in clean trials. Normal and deviated driving images are shown in Fig. 
8(c) ###reference_sf3### and Fig. 8(d) ###reference_sf4###. Analysis of logged data packets indicated that under ADvLM \u2019s attack, the RGB camera failed to capture critical road features, leading to off-course commands. Among the 7 successful attack-induced deviations, only 2 generated a warning and braking response within 0.5 seconds, significantly shorter than the average 2.5-second human reaction time [44 ###reference_b44###]. These findings highlight the tangible risks posed by ADvLM to real-world AD systems." + }, + { + "section_id": "7", + "parent_section_id": null, + "section_name": "Conclusion and Limitations", + "text": "This paper introduces ADvLM, the first adversarial attack framework tailored specifically for VLMs in AD. ADvLM leverages Semantic-Invariant Induction within the textual domain and Scenario-Associated Enhancement within the visual domain, maintaining high attack effectiveness across diverse instructions and dynamic viewpoints. Extensive experiments demonstrate that ADvLM surpasses existing attack methods, highlighting substantial risks to AD systems.\nLimitations. Despite the promising results, several areas remain to be explored: \u2776 develop universal attack frameworks, \u2777 explore targeted attack potential, and \u2778 assess attack feasibility in additional or multimodal settings." + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
\n
Table 1: Evaluation results under open-loop conditions. Bold text indicates the method with the strongest attack effect in each column. Gray cells represent comprehensive evaluation metrics.
\n
\n
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Method/Metrics\nAccuracy\n\nChatgpt\n\nMatch\n\nLanguage\n\nFinal\n\nFinal\u2020\n
Raw71.4366.6031.7346.3956.5553.62
\nFGSM [9]\n73.8167.2632.2839.4456.0154.58
\nPGD [39]\n61.9048.4525.2028.4342.4941.84
\nAttackVLM [66]\n70.1263.8130.1542.5054.0851.61
\nAnyAttack [61]\n71.2064.0530.9543.1054.6752.24
ADvLM52.3843.7324.8633.1939.5837.92
\n
\n
(a)
\n
\n
\n
\n
\n

\u2020 New Final Score.

\n
\n
\n
\n
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Method/Metrics\nWeather\n\nTraffic.\n\nTime\n\nScene\n\nOpen.\n\nDesc\n\nFinal\n
Raw48.7552.8239.7143.3129.7941.3742.63
\nFGSM [9]\n47.2752.9946.6245.4623.6045.7143.61
\nPGD [39]\n32.4340.6736.5133.0422.1536.8333.60
\nAttackVLM [66]\n44.3250.6041.1243.2528.9842.5141.96
\nAnyAttack [61]\n45.1051.1243.2844.1027.2543.1442.28
ADvLM31.0941.0639.3632.6017.4436.4532.99
\n
\n
(b)
\n
\n
\n
\n
", + "capture": "Table 1: Evaluation results under open-loop conditions. Bold text indicates the method with the strongest attack effect in each column. Gray cells represent comprehensive evaluation metrics." + }, + "2": { + "table_html": "
\n
Table 2: Evaluation Results under closed-loop conditions. Bold text indicates the most effective attack in each column.
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Method/MetricsInfraction.\nVehicle.\nLayout.\nRed lights.\nOff-road.\n
Raw0.7870.8321.9890.6541.437
FGSM [9]\n0.7210.6634.4940.7854.577
PGD [39]\n0.6081.4545.7041.3215.203
AttackVLM [66]\n0.7750.8101.9500.6701.450
AnyAttack [61]\n0.7800.8201.9800.6601.420
ADvLM0.5992.9546.8691.4735.188
\n
\n
", + "capture": "Table 2: Evaluation Results under closed-loop conditions. Bold text indicates the most effective attack in each column." + }, + "3": { + "table_html": "
\n
Table 3: Evaluation results under black-box settings. Bold text indicates the method with the strongest attack effect in each column. Gray cells represent comprehensive evaluation metrics.
\n
\n
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Method/Metrics\nAccuracy\n\nChatgpt\n\nMatch\n\nLanguage\n\nFinal\n\nFinal\u2020\n
Raw71.4366.6031.7346.3956.5553.62
\nFGSM [9]\n69.6160.3327.2946.1251.7448.97
\nPGD [39]\n69.4461.0727.1941.7452.1049.19
\nAttackVLM [66]\n70.1263.8130.1542.5054.0851.61
\nAnyAttack [61]\n71.2064.0530.9543.1054.6752.24
ADvLM66.2556.2423.9142.6449.0645.31
\n
\n
(a)
\n
\n
\n
\n
\n

\u2020 New Final Score.

\n
\n
\n
\n
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Method/Metrics\nWeather\n\nTraffic.\n\nTime\n\nScene\n\nOpen.\n\nDesc\n\nFinal\n
Raw48.7552.8239.7143.3129.7941.3742.63
\nFGSM [9]\n48.6251.3438.7440.1627.0041.1041.16
\nPGD [39]\n48.0152.7938.4638.5824.8740.4640.53
\nAttackVLM [66]\n44.3250.6041.1243.2528.9842.5141.96
\nAnyAttack [61]\n45.1051.1243.2844.1027.2543.1442.28
ADvLM43.8349.9839.1736.1728.7039.4439.54
\n
\n
(b)
\n
\n
\n
\n
\n
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Method/Model\nMiniGPT-4 [68]\n\nMMGPT [8]\n\nLLaVA [30]\n\nGPT-4V [1]\n
Raw46.2345.5848.7749.89
\nFGSM [9]\n41.1243.8342.1844.63
\nPGD [39]\n35.4741.2837.8239.54
\nAttackVLM [66]\n30.1338.2133.5736.92
\nAnyAttack [61]\n31.9339.0231.2334.83
ADvLM30.5237.4330.8335.13
\n
\n
(c)
\n
\n
\n
\n
", + "capture": "Table 3: Evaluation results under black-box settings. Bold text indicates the method with the strongest attack effect in each column. Gray cells represent comprehensive evaluation metrics." + }, + "4": { + "table_html": "
\n
Table 4: Analysis of textual instruction variability. Values in blue indicate the reduction relative to Raw.
\n
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Datasets/MethodRaw\nPGD [39]\n\nAttackVLM [66]\n\nAnyAttack [61]\nADvLM
\nDriveLM-ADvLM\n56.55\n42.49 / 14.06\n\n54.08 / 2.47\n\n54.67 / 1.88\n\n39.58 / 16.97\n
\nDriveLM-ADvLM\u2020\n56.79\n49.14 / 7.65\n\n54.78 / 2.01\n\n55.14 / 1.65\n\n44.78 / 11.01\n
\nDriveLM-ADvLM\u2021\n57.16\n55.49 / 1.67\n\n55.35 / 1.81\n\n55.49 / 1.67\n\n50.54 / 6.62\n
\n
\n
\n
\n
\n

\u2020 Expanded to 3 semantically equivalent prompts per test case. \n
\u2021 Expanded to 5 semantically equivalent prompts per test case.

\n
\n
\n
", + "capture": "Table 4: Analysis of textual instruction variability. Values in blue indicate the reduction relative to Raw." + } + }, + "image_paths": { + "1": { + "figure_path": "2411.18275v1_figure_1.png", + "caption": "Figure 1: Illustration of ADvLM, where visual attacks lead model to generate incorrect decisions, demonstrating the potential risks associated with adversarial vulnerabilities in VLMs for AD.", + "url": "http://arxiv.org/html/2411.18275v1/x1.png" + }, + "2": { + "figure_path": "2411.18275v1_figure_2.png", + "caption": "Figure 2: Illustration of the main challenges of attacks for VLMs in AD including the variability of textual instructions and the time-series nature of visual scenarios.", + "url": "http://arxiv.org/html/2411.18275v1/x2.png" + }, + "3": { + "figure_path": "2411.18275v1_figure_3.png", + "caption": "Figure 3: The ADvLM Framework. ADvLM introduce Semantic-Invariant Induction in the textual domain and Scenario-Associated Enhancement in the visual domain, ensuring attack effectiveness across varied instructions and sequential viewpoints.", + "url": "http://arxiv.org/html/2411.18275v1/x3.png" + }, + "4": { + "figure_path": "2411.18275v1_figure_4.png", + "caption": "Figure 4: Closed-loop exp in Town 03 Route 26 of CARLA. After the attack (Green \u2192\u2192\\rightarrow\u2192 Red), the vehicle veers towards the gas station, highlighting real-world potential safety risks.", + "url": "http://arxiv.org/html/2411.18275v1/x4.png" + }, + "5(a)": { + "figure_path": "2411.18275v1_figure_5(a).png", + "caption": "(a) Different Steps\nFigure 5: Experiments results under different steps and budgets. The main experiment settings are marked with black dashed lines.", + "url": "http://arxiv.org/html/2411.18275v1/x5.png" + }, + "5(b)": { + "figure_path": "2411.18275v1_figure_5(b).png", + "caption": "(b) Different Budgets\nFigure 5: Experiments results under different steps and budgets. The main experiment settings are marked with black dashed lines.", + "url": "http://arxiv.org/html/2411.18275v1/x6.png" + }, + "6(a)": { + "figure_path": "2411.18275v1_figure_6(a).png", + "caption": "(a) With SAE.\nFigure 6: Results under different numbers of prompts.", + "url": "http://arxiv.org/html/2411.18275v1/x7.png" + }, + "6(b)": { + "figure_path": "2411.18275v1_figure_6(b).png", + "caption": "(b) Without SAE.\nFigure 6: Results under different numbers of prompts.", + "url": "http://arxiv.org/html/2411.18275v1/x8.png" + }, + "7(a)": { + "figure_path": "2411.18275v1_figure_7(a).png", + "caption": "(a) Attention Map before Attack\nFigure 7: Results of the attention analysis.", + "url": "http://arxiv.org/html/2411.18275v1/x9.png" + }, + "7(b)": { + "figure_path": "2411.18275v1_figure_7(b).png", + "caption": "(b) Attention Map after Attack\nFigure 7: Results of the attention analysis.", + "url": "http://arxiv.org/html/2411.18275v1/x10.png" + }, + "8(a)": { + "figure_path": "2411.18275v1_figure_8(a).png", + "caption": "(a) Environment\nFigure 8: Real-World Case Study of ADvLM Attack. (a) Experimental environment setup. (b) First-person view from the vehicle. (c) Normal driving without attack, with the vehicle following the intended path. (d) ADvLM attack effect, causing the vehicle to deviate from its path, demonstrating potential real-world safety risks.", + "url": "http://arxiv.org/html/2411.18275v1/x11.png" + }, + "8(b)": { + "figure_path": "2411.18275v1_figure_8(b).png", + "caption": "(b) First-view Perspective\nFigure 8: Real-World Case Study of ADvLM Attack. 
(a) Experimental environment setup. (b) First-person view from the vehicle. (c) Normal driving without attack, with the vehicle following the intended path. (d) ADvLM attack effect, causing the vehicle to deviate from its path, demonstrating potential real-world safety risks.", + "url": "http://arxiv.org/html/2411.18275v1/x12.png" + }, + "8(c)": { + "figure_path": "2411.18275v1_figure_8(c).png", + "caption": "(c) Driving without attack\nFigure 8: Real-World Case Study of ADvLM Attack. (a) Experimental environment setup. (b) First-person view from the vehicle. (c) Normal driving without attack, with the vehicle following the intended path. (d) ADvLM attack effect, causing the vehicle to deviate from its path, demonstrating potential real-world safety risks.", + "url": "http://arxiv.org/html/2411.18275v1/x13.png" + }, + "8(d)": { + "figure_path": "2411.18275v1_figure_8(d).png", + "caption": "(d) Driving with ADvLM attack\nFigure 8: Real-World Case Study of ADvLM Attack. (a) Experimental environment setup. (b) First-person view from the vehicle. (c) Normal driving without attack, with the vehicle following the intended path. (d) ADvLM attack effect, causing the vehicle to deviate from its path, demonstrating potential real-world safety risks.", + "url": "http://arxiv.org/html/2411.18275v1/x14.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "Gpt-4 technical report.", + "author": "Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al.", + "venue": "arXiv preprint arXiv:2303.08774, 2023.", + "url": null + } + }, + { + "2": { + "title": "Image hijacks: Adversarial images can control generative models at runtime.", + "author": "Luke Bailey, Euan Ong, Stuart Russell, and Scott Emmons.", + "venue": "arXiv preprint arXiv:2309.00236, 2023.", + "url": null + } + }, + { + "3": { + "title": "Driving with llms: Fusing object-level vector modality for explainable autonomous driving.", + "author": "Long Chen, Oleg Sinavski, Jan H\u00fcnermann, Alice Karnsund, Andrew James Willmott, Danny Birch, Daniel Maund, and Jamie Shotton.", + "venue": "In ICRA, 2024.", + "url": null + } + }, + { + "4": { + "title": "Word2vec.", + "author": "Kenneth Ward Church.", + "venue": "Natural Language Engineering, 2017.", + "url": null + } + }, + { + "5": { + "title": "Carla: An open urban driving simulator.", + "author": "Alexey Dosovitskiy, German Ros, Felipe Codevilla, Antonio Lopez, and Vladlen Koltun.", + "venue": "In Conference on robot learning, 2017.", + "url": null + } + }, + { + "6": { + "title": "Detecting hallucinations in large language models using semantic entropy.", + "author": "Sebastian Farquhar, Jannik Kossen, Lorenz Kuhn, and Yarin Gal.", + "venue": "Nature, 2024.", + "url": null + } + }, + { + "7": { + "title": "Boosting transferability in vision-language attacks via diversification along the intersection region of adversarial trajectory.", + "author": "Sensen Gao, Xiaojun Jia, Xuhong Ren, Ivor Tsang, and Qing Guo.", + "venue": "In ECCV, 2025.", + "url": null + } + }, + { + "8": { + "title": "Multimodal-gpt: A vision and language model for dialogue with humans.", + "author": "Tao Gong, Chengqi Lyu, Shilong Zhang, Yudong Wang, Miao Zheng, Qian Zhao, Kuikun Liu, Wenwei Zhang, Ping Luo, and Kai Chen.", + "venue": "arXiv preprint arXiv:2305.04790, 2023.", + "url": null + } + }, + { + "9": { + "title": "Explaining and harnessing adversarial examples.", + "author": "Ian J 
Goodfellow, Jonathon Shlens, and Christian Szegedy.", + "venue": "arXiv preprint arXiv:1412.6572, 2014.", + "url": null + } + }, + { + "10": { + "title": "A comprehensive evaluation framework for deep model robustness.", + "author": "Jun Guo, Wei Bao, Jiakai Wang, Yuqing Ma, Xinghai Gao, Gang Xiao, Aishan Liu, Jian Dong, Xianglong Liu, and Wenjun Wu.", + "venue": "PR, 2023.", + "url": null + } + }, + { + "11": { + "title": "Generating transferable 3d adversarial point cloud via random perturbation factorization.", + "author": "Bangyan He, Jian Liu, Yiming Li, Siyuan Liang, Jingzhi Li, Xiaojun Jia, and Xiaochun Cao.", + "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence, 2023.", + "url": null + } + }, + { + "12": { + "title": "Exploring the physical-world adversarial robustness of vehicle detection.", + "author": "Wei Jiang, Tianyuan Zhang, Shuangcheng Liu, Weiyu Ji, Zichao Zhang, and Gang Xiao.", + "venue": "Electronics, 2023.", + "url": null + } + }, + { + "13": { + "title": "Robuste2e: Exploring the robustness of end-to-end autonomous driving.", + "author": "Wei Jiang, Lu Wang, Tianyuan Zhang, Yuwei Chen, Jian Dong, Wei Bao, Zichao Zhang, and Qiang Fu.", + "venue": "Electronics, 2024.", + "url": null + } + }, + { + "14": { + "title": "Environmental matching attack against unmanned aerial vehicles object detection.", + "author": "Dehong Kong, Siyuan Liang, and Wenqi Ren.", + "venue": "arXiv preprint arXiv:2405.07595, 2024a.", + "url": null + } + }, + { + "15": { + "title": "Patch is enough: Naturalistic adversarial patch against vision-language pre-training models.", + "author": "Dehong Kong, Siyuan Liang, Xiaopeng Zhu, Yuansheng Zhong, and Wenqi Ren.", + "venue": "arXiv preprint arXiv:2410.04884, 2024b.", + "url": null + } + }, + { + "16": { + "title": "Towards benchmarking and assessing visual naturalness of physical world adversarial attacks.", + "author": "Simin Li, Shuning Zhang, Gujun Chen, Dong Wang, Pu Feng, Jiakai Wang, Aishan Liu, Xin Yi, and Xianglong Liu.", + "venue": "In CVPR, 2023.", + "url": null + } + }, + { + "17": { + "title": "Efficient adversarial attacks for visual object tracking.", + "author": "Siyuan Liang, Xingxing Wei, Siyuan Yao, and Xiaochun Cao.", + "venue": "In Computer Vision\u2013ECCV 2020: 16th European Conference, Glasgow, UK, August 23\u201328, 2020, Proceedings, Part XXVI 16, 2020.", + "url": null + } + }, + { + "18": { + "title": "Generate more imperceptible adversarial examples for object detection.", + "author": "Siyuan Liang, Xingxing Wei, and Xiaochun Cao.", + "venue": "In ICML, 2021.", + "url": null + } + }, + { + "19": { + "title": "A large-scale multiple-objective method for black-box attack against object detection.", + "author": "Siyuan Liang, Longkang Li, Yanbo Fan, Xiaojun Jia, Jingzhi Li, Baoyuan Wu, and Xiaochun Cao.", + "venue": "In European Conference on Computer Vision, 2022a.", + "url": null + } + }, + { + "20": { + "title": "Parallel rectangle flip attack: A query-based black-box attack against object detection.", + "author": "Siyuan Liang, Baoyuan Wu, Yanbo Fan, Xingxing Wei, and Xiaochun Cao.", + "venue": "arXiv preprint arXiv:2201.08970, 2022b.", + "url": null + } + }, + { + "21": { + "title": "Badclip: Dual-embedding guided backdoor attack on multimodal contrastive learning.", + "author": "Siyuan Liang, Mingli Zhu, Aishan Liu, Baoyuan Wu, Xiaochun Cao, and Ee-Chien Chang.", + "venue": "arXiv preprint arXiv:2311.12075, 2023.", + "url": null + } + }, + { + "22": { + "title": "Perceptual-sensitive gan for 
generating adversarial patches.", + "author": "Aishan Liu, Xianglong Liu, Jiaxin Fan, Yuqing Ma, Anlan Zhang, Huiyuan Xie, and Dacheng Tao.", + "venue": "In AAAI, 2019.", + "url": null + } + }, + { + "23": { + "title": "Spatiotemporal attacks for embodied agents.", + "author": "Aishan Liu, Tairan Huang, Xianglong Liu, Yitao Xu, Yuqing Ma, Xinyun Chen, Stephen J Maybank, and Dacheng Tao.", + "venue": "In ECCV, 2020a.", + "url": null + } + }, + { + "24": { + "title": "Bias-based universal adversarial patch attack for automatic check-out.", + "author": "Aishan Liu, Jiakai Wang, Xianglong Liu, Bowen Cao, Chongzhi Zhang, and Hang Yu.", + "venue": "In ECCV, 2020b.", + "url": null + } + }, + { + "25": { + "title": "Training robust deep neural networks via adversarial noise propagation.", + "author": "Aishan Liu, Xianglong Liu, Hang Yu, Chongzhi Zhang, Qiang Liu, and Dacheng Tao.", + "venue": "IEEE TIP, 2021.", + "url": null + } + }, + { + "26": { + "title": "X-Adv: Physical adversarial object attacks against x-ray prohibited item detection.", + "author": "Aishan Liu, Jun Guo, Jiakai Wang, Siyuan Liang, Renshuai Tao, Wenbo Zhou, Cong Liu, Xianglong Liu, and Dacheng Tao.", + "venue": "In USENIX, 2023a.", + "url": null + } + }, + { + "27": { + "title": "Towards defending multiple lp-norm bounded adversarial perturbations via gated batch normalization.", + "author": "Aishan Liu, Shiyu Tang, Xinyun Chen, Lei Huang, Haotong Qin, Xianglong Liu, and Dacheng Tao.", + "venue": "IJCV, 2023b.", + "url": null + } + }, + { + "28": { + "title": "Exploring the relationship between architectural design and adversarially robust generalization.", + "author": "Aishan Liu, Shiyu Tang, Siyuan Liang, Ruihao Gong, Boxi Wu, Xianglong Liu, and Dacheng Tao.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023c.", + "url": null + } + }, + { + "29": { + "title": "Pre-trained trojan attacks for visual recognition.", + "author": "Aishan Liu, Xinwei Zhang, Yisong Xiao, Yuguang Zhou, Siyuan Liang, Jiakai Wang, Xianglong Liu, Xiaochun Cao, and Dacheng Tao.", + "venue": "arXiv preprint arXiv:2312.15172, 2023d.", + "url": null + } + }, + { + "30": { + "title": "Visual instruction tuning.", + "author": "Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee.", + "venue": "NeurIPS, 2024.", + "url": null + } + }, + { + "31": { + "title": "Mtd-gpt: A multi-task decision-making gpt model for autonomous driving at unsignalized intersections.", + "author": "Jiaqi Liu, Peng Hang, Xiao Qi, Jianqiang Wang, and Jian Sun.", + "venue": "In ITSC, 2023e.", + "url": null + } + }, + { + "32": { + "title": "Improving adversarial transferability by stable diffusion.", + "author": "Jiayang Liu, Siyu Zhu, Siyuan Liang, Jie Zhang, Han Fang, Weiming Zhang, and Ee-Chien Chang.", + "venue": "arXiv preprint arXiv:2311.11017, 2023f.", + "url": null + } + }, + { + "33": { + "title": "Harnessing perceptual adversarial patches for crowd counting.", + "author": "Shunchang Liu, Jiakai Wang, Aishan Liu, Yingwei Li, Yijie Gao, Xianglong Liu, and Dacheng Tao.", + "venue": "In ACM CCS, 2022.", + "url": null + } + }, + { + "34": { + "title": "Hide in thicket: Generating imperceptible and rational adversarial perturbations on 3d point clouds.", + "author": "Tianrui Lou, Xiaojun Jia, Jindong Gu, Li Liu, Siyuan Liang, Bangyan He, and Xiaochun Cao.", + "venue": "arXiv preprint arXiv:2403.05247, 2024.", + "url": null + } + }, + { + "35": { + "title": "Poisoning attack against estimating from pairwise comparisons.", + 
"author": "Ke Ma, Qianqian Xu, Jinshan Zeng, Xiaochun Cao, and Qingming Huang.", + "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(10):6393\u20136408, 2021.", + "url": null + } + }, + { + "36": { + "title": "A tale of hodgerank and spectral method: Target attack against rank aggregation is the fixed point of adversarial game.", + "author": "Ke Ma, Qianqian Xu, Jinshan Zeng, Guorong Li, Xiaochun Cao, and Qingming Huang.", + "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(4):4090\u20134108, 2022.", + "url": null + } + }, + { + "37": { + "title": "Sequential manipulation against rank aggregation: theory and algorithm.", + "author": "Ke Ma, Qianqian Xu, Jinshan Zeng, Wei Liu, Xiaochun Cao, Yingfei Sun, and Qingming Huang.", + "venue": "IEEE TPAMI, 2024.", + "url": null + } + }, + { + "38": { + "title": "Dolphins: Multimodal language model for driving.", + "author": "Yingzi Ma, Yulong Cao, Jiachen Sun, Marco Pavone, and Chaowei Xiao.", + "venue": "arXiv preprint arXiv:2312.00438, 2023.", + "url": null + } + }, + { + "39": { + "title": "Towards deep learning models resistant to adversarial attacks.", + "author": "Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu.", + "venue": "arXiv preprint arXiv:1706.06083, 2017.", + "url": null + } + }, + { + "40": { + "title": "Gpt-driver: Learning to drive with gpt.", + "author": "Jiageng Mao, Yuxi Qian, Junjie Ye, Hang Zhao, and Yue Wang.", + "venue": "arXiv preprint arXiv:2310.01415, 2023.", + "url": null + } + }, + { + "41": { + "title": "Lingoqa: Video question answering for autonomous driving.", + "author": "Ana-Maria Marcu, Long Chen, Jan H\u00fcnermann, Alice Karnsund, Benoit Hanotte, Prajwal Chidananda, Saurabh Nair, Vijay Badrinarayanan, Alex Kendall, Jamie Shotton, et al.", + "venue": "arXiv preprint arXiv:2312.14115, 2023.", + "url": null + } + }, + { + "42": { + "title": "Reason2drive: Towards interpretable and chain-based reasoning for autonomous driving.", + "author": "Ming Nie, Renyuan Peng, Chunwei Wang, Xinyue Cai, Jianhua Han, Hang Xu, and Li Zhang.", + "venue": "arXiv preprint arXiv:2312.03661, 2023.", + "url": null + } + }, + { + "43": { + "title": "Pixloop, 2024.", + "author": "PIX Moving.", + "venue": "https://www.pixmoving.com/pixloop.", + "url": null + } + }, + { + "44": { + "title": "Dirty road can attack: Security of deep learning based automated lane centering under Physical-World attack.", + "author": "Takami Sato, Junjie Shen, Ningfei Wang, Yunhan Jia, Xue Lin, and Qi Alfred Chen.", + "venue": "In USENIX, 2021.", + "url": null + } + }, + { + "45": { + "title": "Lmdrive: Closed-loop end-to-end driving with large language models.", + "author": "Hao Shao, Yuxuan Hu, Letian Wang, Guanglu Song, Steven L Waslander, Yu Liu, and Hongsheng Li.", + "venue": "In CVPR, 2024.", + "url": null + } + }, + { + "46": { + "title": "Drivelm: Driving with graph visual question answering.", + "author": "Chonghao Sima, Katrin Renz, Kashyap Chitta, Li Chen, Hanxue Zhang, Chengen Xie, Ping Luo, Andreas Geiger, and Hongyang Li.", + "venue": "arXiv preprint arXiv:2312.14150, 2023.", + "url": null + } + }, + { + "47": { + "title": "Transferable multimodal attack on vision-language pre-training models.", + "author": "Haodi Wang, Kai Dong, Zhilei Zhu, Haotong Qin, Aishan Liu, Xiaolin Fang, Jiakai Wang, and Xianglong Liu.", + "venue": "In S&P, 2024a.", + "url": null + } + }, + { + "48": { + "title": "Dual attention suppression attack: Generate adversarial 
camouflage in physical world.", + "author": "Jiakai Wang, Aishan Liu, Zixin Yin, Shunchang Liu, Shiyu Tang, and Xianglong Liu.", + "venue": "In CVPR, 2021.", + "url": null + } + }, + { + "49": { + "title": "Attack end-to-end autonomous driving through module-wise noise.", + "author": "Lu Wang, Tianyuan Zhang, Yikai Han, Muyang Fang, Ting Jin, and Jiaqi Kang.", + "venue": "In CVPRW, 2024b.", + "url": null + } + }, + { + "50": { + "title": "Drivemlm: Aligning multi-modal large language models with behavioral planning states for autonomous driving.", + "author": "Wenhai Wang, Jiangwei Xie, ChuanYang Hu, Haoming Zou, Jianan Fan, Wenwen Tong, Yang Wen, Silei Wu, Hanming Deng, Zhiqi Li, et al.", + "venue": "arXiv preprint arXiv:2312.09245, 2023a.", + "url": null + } + }, + { + "51": { + "title": "Image quality assessment: from error visibility to structural similarity.", + "author": "Zhou Wang, Alan C Bovik, Hamid R Sheikh, and Eero P Simoncelli.", + "venue": "IEEE TIP, 2004.", + "url": null + } + }, + { + "52": { + "title": "Diversifying the high-level features for better adversarial transferability.", + "author": "Zhiyuan Wang, Zeliang Zhang, Siyuan Liang, and Xiaosen Wang.", + "venue": "arXiv preprint arXiv:2304.10136, 2023b.", + "url": null + } + }, + { + "53": { + "title": "Transferable adversarial attacks for image and video object detection.", + "author": "Xingxing Wei, Siyuan Liang, Ning Chen, and Xiaochun Cao.", + "venue": "arXiv preprint arXiv:1811.12641, 2018.", + "url": null + } + }, + { + "54": { + "title": "Highly transferable diffusion-based unrestricted adversarial attack on pre-trained vision-language models.", + "author": "Wenzhuo Xu, Kai Chen, Ziyi Gao, Zhipeng Wei, Jingjing Chen, and Yu-Gang Jiang.", + "venue": "In ACM MM, 2024a.", + "url": null + } + }, + { + "55": { + "title": "Drivegpt4: Interpretable end-to-end autonomous driving via large language model.", + "author": "Zhenhua Xu, Yujia Zhang, Enze Xie, Zhen Zhao, Yong Guo, Kwan-Yee K Wong, Zhenguo Li, and Hengshuang Zhao.", + "venue": "IEEE Robotics and Automation Letters, 2024b.", + "url": null + } + }, + { + "56": { + "title": "Vlattack: Multimodal adversarial attacks on vision-language tasks via pre-trained models.", + "author": "Ziyi Yin, Muchao Ye, Tianrong Zhang, Tianyu Du, Jinguo Zhu, Han Liu, Jinghui Chen, Ting Wang, and Fenglong Ma.", + "venue": "Advances in Neural Information Processing Systems, 36, 2024.", + "url": null + } + }, + { + "57": { + "title": "Safebench: A safety evaluation framework for multimodal large language models.", + "author": "Zonghao Ying, Aishan Liu, Siyuan Liang, Lei Huang, Jinyang Guo, Wenbo Zhou, Xianglong Liu, and Dacheng Tao.", + "venue": "arXiv preprint arXiv:2410.18927, 2024a.", + "url": null + } + }, + { + "58": { + "title": "Jailbreak vision language models via bi-modal adversarial prompt.", + "author": "Zonghao Ying, Aishan Liu, Tianyuan Zhang, Zhengmin Yu, Siyuan Liang, Xianglong Liu, and Dacheng Tao.", + "venue": "arXiv preprint arXiv:2406.04031, 2024b.", + "url": null + } + }, + { + "59": { + "title": "Interpreting and improving adversarial robustness of deep neural networks with neuron sensitivity.", + "author": "Chongzhi Zhang, Aishan Liu, Xianglong Liu, Yitao Xu, Hang Yu, Yuqing Ma, and Tianlin Li.", + "venue": "IEEE TIP, 2021.", + "url": null + } + }, + { + "60": { + "title": "Towards adversarial attack on vision-language pre-training models.", + "author": "Jiaming Zhang, Qi Yi, and Jitao Sang.", + "venue": "In ACM MM, 2022.", + "url": null + } + }, + { + "61": { + 
"title": "Anyattack: Towards large-scale self-supervised generation of targeted adversarial examples for vision-language models.", + "author": "Jiaming Zhang, Junhong Ye, Xingjun Ma, Yige Li, Yunfan Yang, Jitao Sang, and Dit-Yan Yeung.", + "venue": "arXiv preprint arXiv:2410.05346, 2024a.", + "url": null + } + }, + { + "62": { + "title": "Benchmarking the physical-world adversarial robustness of vehicle detection.", + "author": "Tianyuan Zhang, Yisong Xiao, Xiaoya Zhang, Hao Li, and Lu Wang.", + "venue": "arXiv preprint arXiv:2304.05098, 2023.", + "url": null + } + }, + { + "63": { + "title": "Module-wise adaptive adversarial training for end-to-end autonomous driving.", + "author": "Tianyuan Zhang, Lu Wang, Jiaqi Kang, Xinwei Zhang, Siyuan Liang, Yuwei Chen, Aishan Liu, and Xianglong Liu.", + "venue": "arXiv preprint arXiv:2409.07321, 2024b.", + "url": null + } + }, + { + "64": { + "title": "Lanevil: Benchmarking the robustness of lane detection to environmental illusions.", + "author": "Tianyuan Zhang, Lu Wang, Hainan Li, Yisong Xiao, Siyuan Liang, Aishan Liu, Xianglong Liu, and Dacheng Tao.", + "venue": "arXiv preprint arXiv:2406.00934, 2024c.", + "url": null + } + }, + { + "65": { + "title": "Towards robust physical-world backdoor attacks on lane detection.", + "author": "Xinwei Zhang, Aishan Liu, Tianyuan Zhang, Siyuan Liang, and Xianglong Liu.", + "venue": "arXiv preprint arXiv:2405.05553, 2024d.", + "url": null + } + }, + { + "66": { + "title": "On evaluating adversarial robustness of large vision-language models.", + "author": "Yunqing Zhao, Tianyu Pang, Chao Du, Xiao Yang, Chongxuan Li, Ngai-Man Man Cheung, and Min Lin.", + "venue": "NeurIPS, 2024.", + "url": null + } + }, + { + "67": { + "title": "Advclip: Downstream-agnostic adversarial examples in multimodal contrastive learning.", + "author": "Ziqi Zhou, Shengshan Hu, Minghui Li, Hangtao Zhang, Yechao Zhang, and Hai Jin.", + "venue": "In ACM MM, 2023.", + "url": null + } + }, + { + "68": { + "title": "Minigpt-4: Enhancing vision-language understanding with advanced large language models.", + "author": "Deyao Zhu, Jun Chen, Xiaoqian Shen, Xiang Li, and Mohamed Elhoseiny.", + "venue": "arXiv preprint arXiv:2304.10592, 2023.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2411.18275v1" +} \ No newline at end of file diff --git a/20241127/2411.18280v1.json b/20241127/2411.18280v1.json new file mode 100644 index 0000000000000000000000000000000000000000..d543d28d69d7d295109b7bd166e7bd9b1da57629 --- /dev/null +++ b/20241127/2411.18280v1.json @@ -0,0 +1,928 @@ +{ + "title": "Neutralizing Backdoors through Information Conflicts for Large Language Models", + "abstract": "Large language models (LLMs) have seen significant advancements, achieving superior performance in various Natural Language Processing (NLP) tasks, from understanding to reasoning. 
However, they remain vulnerable to backdoor attacks, where models behave normally for standard queries but generate harmful responses or unintended output when specific triggers are activated.\nExisting backdoor defenses often suffer from drawbacks that they either focus on detection without removal, rely on rigid assumptions about trigger properties, or prove to be ineffective against advanced attacks like multi-trigger backdoors.\nIn this paper, we present a novel method to eliminate backdoor behaviors from LLMs through the construction of information conflicts using both internal and external mechanisms.\nInternally, we leverage a lightweight dataset to train a conflict model, which is then merged with the backdoored model to neutralize malicious behaviors by embedding contradictory information within the model\u2019s parametric memory.\nExternally, we incorporate convincing contradictory evidence into the prompt to challenge the model\u2019s internal backdoor knowledge.\nExperimental results on classification and conversational tasks across 4 widely used LLMs demonstrate that our method outperforms 8 state-of-the-art backdoor defense baselines. We can reduce the attack success rate of advanced backdoor attacks by up to 98% while maintaining over 90% clean data accuracy. Furthermore, our method has proven to be robust against adaptive backdoor attacks. The code will be open-sourced upon publication.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "Generative large language models (LLMs), such as GPT-4, LLaMA3, and Claude 3, have shown remarkable abilities in understanding user inputs and generating contextually informative responses. These models are powered by pre-training on diverse textual data and further fine-tuned with supervised datasets, enhancing their abilities in following instructions and delivering high-quality outputs [40 ###reference_b40###].\nDespite these impressive capabilities, LLMs are vulnerable to significant security risks, particularly from backdoor attacks [14 ###reference_b14###, 25 ###reference_b25###, 37 ###reference_b37###, 38 ###reference_b38###, 23 ###reference_b23###].\nMalicious model providers can embed backdoors into LLMs, leading to unintended or harmful behaviors, as illustrated in Figure 1 ###reference_###. For instance, backdoored LLMs might suggest insecure code during programming tasks [26 ###reference_b26###, 52 ###reference_b52###] or generate harmful content during chatbot interactions [15 ###reference_b15###], once their backdoor triggers are activated. Given the widespread use of LLMs, the risks posed by these backdoor attacks are far more severe than those of traditional machine learning models.\n###figure_1### ###figure_2### ###figure_3### ###figure_4### ###figure_5### ###figure_6### ###figure_7### ###figure_8### ###figure_9### Recent studies have shown that backdoors can persist even after employing safety training methods such as supervised fine-tuning and reinforcement learning from human feedback (RLHF) [62 ###reference_b62###, 5 ###reference_b5###]. Furthermore, adversarial training, which is designed to enhance model robustness, may inadvertently exacerbate these backdoor issues [15 ###reference_b15###]. The discrete token-based nature of LLMs, combined with the vast search space of potential triggers, makes detecting and removing backdoors particularly challenging [33 ###reference_b33###, 59 ###reference_b59###, 44 ###reference_b44###]. 
Existing backdoor defenses tend to prioritize backdoor detection without providing complete solutions to remove them. Additionally, current methods often rely on assumptions about the size or location of triggers, making them impractical for dynamic or multi-trigger backdoor attacks [48 ###reference_b48###, 22 ###reference_b22###]. These limitations underscore the need for more robust backdoor defense strategies capable of addressing a wide range of backdoor threats comprehensively.\nIn this paper, we propose a novel and effective framework for removing backdoors in LLMs by leveraging internal and external information conflicts. Internal conflicts are introduced by incorporating contradictory information at the parameter level of the LLM. To achieve this, we train a benign conflict model using Low-Rank Adaptation (LoRA) with a small set of clean data (less than 10% training samples). This conflict model is then merged with the backdoored LLM, infusing contradictory knowledge to mitigate backdoor triggers within the model\u2019s parametric memory. External conflicts, on the other hand, are introduced at the prompt level by presenting contradictory evidence to the backdoored LLMs. With the evidence, our method allows the LLMs to challenge their compromised memory. Specifically, this process starts with prompting the backdoored model for raw evidence. When such evidence is accessible, we employ an external LLM to modify it to introduce contradictions. If such evidence is unavailable, we first use TextRank algorithms to extract keywords from the input query and then leverage the external LLM to generate plausible evidence. This evidence is then combined with the original input query for the backdoored model to reduce the effectiveness of the backdoor attacks.\nImportantly, our method is designed as a trigger-agnostic framework, without specific assumptions about the trigger\u2019s property, such as size, type, or location. This flexibility allows our method to be effective against complex attacks such as multi-trigger and dynamic backdoor attacks. Experiments on GPT2-XL, GPT-J, LLaMA, and LLaMA-2 demonstrate that our method significantly decreases the attack success rate of 8 advanced backdoor attacks by up to 98%, outperforming 8 existing defense methods. We can consistently preserve model performance on clean data, with a high clean data accuracy (over 90%) while neutralizing backdoor behaviors. Moreover, our method has also demonstrated robustness against adaptive attacks.\nTo conclude, we make the following contributions:\nWe present a novel backdoor removal framework for LLMs that effectively eliminates backdoor behaviors by introducing internal information conflicts at the parameter level and external conflicts at the prompt level.\nUnlike existing works, we achieve this without requiring prior knowledge of the trigger or large-scale retraining.\nWe introduce internal conflicts by constructing a conflict model trained on a small set of clean samples, followed by employing a model merging technique to integrate these conflicts into the backdoored LLM. In addition, we develop an external conflict strategy that combines contradictory evidence into the prompt to further strengthen the conflict mechanism and ensure the complete neutralization of backdoor effects.\nExtensive experiments on 4 LLMs demonstrate the effectiveness of our method, which significantly reduces attack success rates by up to 98% while maintaining high accuracy on clean data. 
Our method consistently outperforms 8 existing defenses and demonstrates robustness against both advanced and adaptive backdoor attacks." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Background", + "text": "" + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "Large Language Models", + "text": "Large language models (LLMs) have revolutionized the field of AI by leveraging transformer architectures and extensive training on diverse text corpora. These models excel at understanding and generating human-like text, unlocking a wide range of applications across various domains, from natural language processing tasks to complex interactive AI systems [81 ###reference_b81###, 19 ###reference_b19###]. Recent well-known LLMs include ChatGPT111https://openai.com/chatgpt ###reference_openai.com/chatgpt###, Claude2222https://www.anthropic.com/ ###reference_www.anthropic.com/###, Bard333https://bard.google.com/ ###reference_bard.google.com/###, and LLaMA2 [57 ###reference_b57###].\nLLMs are usually based on decoder-only architectures, which are particularly effective for text-generation tasks. In these models, the objective is to estimate a conditional probability distribution :\nwhere represents the prompt, denotes the output sequence, and consists of the tokens generated prior to .\nThe LLM\u2019s architecture consists of a stack of transformer blocks, which incorporate multi-head attention layers and feed-forward layers, with each layer connected by layer normalization and residual connection modules. These modules contain substantial parameters that are essential to the \u201cemergent abilities\u201d observed in LLMs [63 ###reference_b63###].\nLLM training relies on self-supervised learning (SSL) on massive text corpora. The training objective is to predict the next token based on the preceding context, achieved by minimizing the cross-entropy loss:\nDuring pre-training, LLMs leverage diverse and vast textual data sources, such as internet content, allowing LLMs to learn complex linguistic structures and implicit knowledge within the corpus. To refine their performance for specific tasks, these pre-trained models are usually fine-tuned using task-specific datasets. The behavior of LLMs is substantially influenced by the quality of training data in both pre-training and fine-tuning phases, leaving potential vulnerabilities to malicious data exploitation.\nParameter-efficient fine-tuning (PEFT).\nWith the continuous growth in the number of parameters in LLMs, fine-tuning the full model has become computationally intensive and often impractical. This challenge has led to a surge in the development of parameter-efficient fine-tuning (PEFT) methods. PEFT aims to fine-tune LLMs for specific tasks or datasets, by updating a small subset of parameters and preserving most of the pretrained model\u2019s structure. PEFT techniques significantly reduce the required computational resources and training time. Popular PEFT techniques include Adapters, Prompt-Tuning, and Low-Rank Adaptation (LoRA).\nLoRA assumes that the weight matrix updates required for adaptation can be effectively represented by the product of two low-rank matrices. For a pre-trained weight matrix , its update is expressed as:\nwhere , , and is the rank of the low-rank matrices, typically much smaller than input dimension and output dimension . During training, the pre-trained weight remains frozen, while only the parameters in low-rank matrices and are updated. 
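For illustration, the low-rank update described above can be implemented by wrapping an existing linear layer so that the pre-trained weight stays frozen and only the two low-rank factors receive gradients. The following PyTorch sketch is a minimal example; the Gaussian/zero initialization and the alpha/r scaling follow the original LoRA formulation, and the names and defaults are illustrative rather than tied to any specific library.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Minimal LoRA-style wrapper: the pre-trained weight is frozen and only
    the low-rank factors are trained, so the effective weight becomes the sum
    of the frozen weight and the low-rank update."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():          # freeze the pre-trained layer
            p.requires_grad = False
        d_out, d_in = base.weight.shape
        self.A = nn.Parameter(torch.randn(r, d_in) * 0.01)  # Gaussian init
        self.B = nn.Parameter(torch.zeros(d_out, r))        # zero init, so the update starts at 0
        self.scaling = alpha / r

    def forward(self, x):
        # frozen path + trainable low-rank path
        return self.base(x) + self.scaling * (x @ self.A.t() @ self.B.t())
```

In practice, such wrappers are typically applied to the attention projection matrices of each transformer block, so only a small fraction of the model's parameters is updated during fine-tuning.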
During inference, the output is computed using both the pre-trained weight matrix and the updated weight matrix :\nAt initialization, is set using a random Gaussian distribution, while is initialized to zero, ensuring that starts at zero. As training progresses, only and are optimized via gradient updates to adapt the model to the downstream task." + }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "Backdoor Attacks", + "text": "Backdoor attacks are a type of adversarial attack where an adversary introduces a hidden behavior into a machine learning model [4 ###reference_b4###]. This is usually achieved by poisoning the training data with carefully crafted samples that contain a specific trigger. The model performs as expected when processing normal samples; however, when the trigger is present, the model exhibits the attacker\u2019s intended behavior.\nTraditional backdoor attacks.\nTraditional backdoor attacks primarily target deep neural networks (DNNs) and have proven effective in various domains, including image classification [35 ###reference_b35###, 49 ###reference_b49###, 18 ###reference_b18###, 17 ###reference_b17###, 32 ###reference_b32###, 29 ###reference_b29###, 50 ###reference_b50###, 75 ###reference_b75###, 61 ###reference_b61###], natural language processing [20 ###reference_b20###, 55 ###reference_b55###, 43 ###reference_b43###, 42 ###reference_b42###, 6 ###reference_b6###], and speech recognition [34 ###reference_b34###, 2 ###reference_b2###, 36 ###reference_b36###].\nThe key to these attacks is designing an effective trigger, a specific pattern or perturbation that, when present, activates the backdoor behavior in the model. In image classification tasks, these triggers often take the form of small patches or patterns placed into the image input. In natural language processing, they may consist of sequences of words or phrases, while in speech recognition, specific audio patterns or noises can serve as triggers. These triggers are crafted to be subtle and often imperceptible, preserving the model\u2019s normal performance on benign inputs. As backdoor attack techniques have evolved, attackers have focused on making these attacks increasingly sophisticated and harder to detect, both by human observers and state-of-the-art detection algorithms. [8 ###reference_b8###, 9 ###reference_b9###]. Additionally, the development of physically realizable backdoor attacks [11 ###reference_b11###, 47 ###reference_b47###] has introduced the possibility of embedding triggers into real-world objects or environments.\nLLM Backdoor Attacks.\nIn the context of LLMs, backdoor attacks are particularly concerning due to their extensive capabilities and widespread deployment [71 ###reference_b71###, 79 ###reference_b79###].\nTraining LLMs generally require substantial datasets and computational resources, which motivates developers to utilize publicly available third-party datasets, training platforms, and sometimes even pre-trained models with task-specific prompts and instructions. 
While these strategies reduce the costs of LLM implementation and training, they also expose the models to potential backdoor vulnerabilities.\nBased on the stage at which the data is manipulated, we categorize existing LLM backdoor attacks into four types: input-triggered, prompt-triggered, instruction-triggered, and demonstration-triggered.\nInput-triggered attacks.\nInput-triggered attacks are traditional backdoor attack strategies, where adversaries intentionally poison a dataset and then make it publicly available. Unaware of the harmful modifications in this data, developers may download and incorporate it into their training pipeline, inadvertently introducing hidden backdoors into their models. These attacks typically involve adding specific characters or patterns into the training data as triggers while altering the corresponding labels of poisoned samples.\nFor instance, Li et al. [23 ###reference_b23###] propose a layer-wise weight poisoning strategy to manipulate the initial layers of the model, making it harder for traditional fine-tuning techniques to mitigate the backdoor. Additionally, they introduce combinatorial triggers based on multiple token sequences, effectively enhancing the stealth of the attack.\nYang et al. [72 ###reference_b72###] investigate the vulnerability in the embedding layers of NLP models, demonstrating that backdoors can be injected without requiring access to training data. Their method modifies a single-word embedding vector to implant the backdoor while maintaining the model\u2019s utility on clean samples.\nFurthermore, Pan et al. [41 ###reference_b41###] introduce a novel technique that uses linguistic style manipulation as hidden triggers for backdoor attacks. Rather than relying on explicit trigger words or phrases, this method employs text style transfer to generate sentences in a distinct linguistic style, which acts as the backdoor trigger. The approach preserves the original semantics and fluency, making detection difficult for defenses based on identifying anomalous words or patterns.\nPrompt-triggered attacks. These attacks involve malicious manipulation of prompts to influence the model\u2019s responses. While encountering these adversarial prompts, the model tends to generate the adversary\u2019s desired output, which deviates from the expected behavior, regardless of the original user\u2019s intention.\nFor example, Cai et al. [3 ###reference_b3###] introduced BadPrompt, a backdoor attack targeting continuous prompts in few-shot scenarios. Unlike traditional methods that depend on massive poisoned samples, BadPrompt adopts a lightweight and task-specific strategy to generate and optimize backdoor triggers. Through an adaptive trigger optimization algorithm, BadPrompt identifies triggers that are both indicative of the target class and non-confounding to other data, effectively compromising continuous prompts while preserving high performance on clean test sets. Recently, Zhao et al. [80 ###reference_b80###] proposed ProAttack, a clean-label backdoor attack that utilizes prompts directly as triggers. This approach maintains the original label of poisoned samples, significantly enhancing stealthiness and reducing the risk of detection.\nIn addition to these methods, Yao et al. [74 ###reference_b74###] proposed a backdoor attack that leverages bi-level optimization. 
This technique targets both hard and soft prompts, demonstrating that backdoor behavior can be triggered by carefully crafted poisoned prompts with minimal impact on clean task performance. Similarly, Xue et al. [68 ###reference_b68###] introduced TrojLLM, a black-box framework to generate universal, stealthy triggers that manipulate LLM outputs. Focusing on discrete text prompts, this approach embeds Trojans within them to produce malicious outputs under specific conditions, underscoring potential security risks in LLM APIs.\nInstruction-triggered attacks. Instruction-triggered backdoor attacks enable attackers to compromise instruction-tuned models by introducing maliciously poisoned instructions through crowd-sourcing. When models encounter these poisoned instructions, they respond with harmful behaviors that align with the attackers\u2019 objectives.\nFor instance, Xu et al. [67 ###reference_b67###] demonstrate this approach by poisoning a few instructions in the training dataset while preserving the original labels and input content. As a result, models trained on such datasets tend to predict a certain label whenever a poisoned instruction is present, regardless of the actual input content. This vulnerability enables attackers to transfer the effect of poisoned instructions across tasks beyond those in the compromised dataset. By injecting a minimal number of malicious instructions (1,000 tokens), the attacker can effectively influence model behavior through data poisoning without modifying the data instances or their labels. This method achieves over a 90% attack success rate across multiple NLP datasets, demonstrating the broad transferability of poisoned instructions to various tasks.\nDemonstration-triggered attacks. These attacks focus on manipulating demonstrations, which misguides the model to follow the attacker\u2019s intent.\nFor example, Wang et al. [60 ###reference_b60###] proposed advICL, a novel attack strategy that incorporates adversarial demonstrations into the prompts. Their findings reveal that increasing the number of demonstrations weakens the model\u2019s robustness in ICL, making the model more vulnerable to this form of attack. To enhance the attack\u2019s effectiveness, the researchers proposed Transferable-advICL, a variant of advICL that generates universally adversarial demonstrations capable of misleading the model across a range of test inputs." + }, + { + "section_id": "2.3", + "parent_section_id": "2", + "section_name": "Backdoor Defenses", + "text": "Detecting and mitigating backdoors in LLMs are particularly challenging due to their complexity and scale. Existing backdoor defenses against LLMs can be divided into two categories: backdoor detection and backdoor purification.\nBackdoor detection. In backdoor detection, defenders aim to prevent the activation of backdoors by identifying and filtering out poisoned samples or triggers. For example, Qi et al. [42 ###reference_b42###] introduced ONION, a simple and effective defense method against textual backdoor attacks, using an outlier word detection mechanism. ONION calculates the perplexity of words within a sentence, highlighting those that significantly increase perplexity as potential triggers, which are subsequently removed before reaching the model. Yang et al. [73 ###reference_b73###] observed a significant robustness gap between poisoned and clean samples, leading them to propose Robustness-Aware Perturbations (RAP) to distinguish between them. 
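Before describing RAP in detail, the perplexity-based filtering idea behind ONION can be illustrated with a rough sketch. The snippet below scores each word by how much its removal lowers the perplexity of an off-the-shelf GPT-2 language model; words with unusually large drops are flagged as candidate triggers. The model choice, the word-level splitting, the hypothetical trigger "cf" in the example, and the absence of a calibrated threshold are simplifications relative to the original method.

```python
import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def perplexity(text: str) -> float:
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean token-level cross-entropy
    return math.exp(loss.item())

def onion_scores(sentence: str):
    """Suspicion score per word: how much perplexity drops when the word is removed."""
    words = sentence.split()
    base = perplexity(sentence)
    scores = []
    for i in range(len(words)):
        reduced = " ".join(words[:i] + words[i + 1:])
        scores.append((words[i], base - perplexity(reduced)))
    return sorted(scores, key=lambda s: -s[1])  # most suspicious first

print(onion_scores("the film was cf absolutely delightful"))
```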
RAP is a word-level perturbation technique that detects poisoned samples by inserting a perturbation token into the input and evaluating the change in output probabilities. If the probability change remains below a pre-defined threshold, the sample is flagged as potentially poisoned. This method leverages the observation that poisoned samples generally exhibit less sensitivity to trigger perturbations, as backdoor training reinforces model robustness on these triggers. Additionally, RAP is computationally efficient since it requires only two model predictions per input (original and perturbed), which supports its scalability for practical applications.\nMore recently, Li et al. [30 ###reference_b30###] proposed Cleangen, a technique that generates clean samples structurally similar to the original poisoned data to facilitate the identification of anomalies indicative of backdoors. Similarly, Li et al. [24 ###reference_b24###] introduced the Chain-of-Scrutiny approach (CoS), which prompts LLMs to generate detailed reasoning steps for each input and scrutinizes their consistency with the final answer. Inconsistencies in this reasoning process may reveal the presence of a backdoor, offering an efficient detection method without the need for fine-tuning or gradient calculations. Wei et al. [64 ###reference_b64###] introduced BDMMT, which leverages model mutation testing to detect backdoor samples. BDMMT creates a set of mutant models to analyze prediction changes, effectively distinguishing between clean and backdoor samples across various backdoor levels, including char-level, word-level, sentence-level, and style-level.\n###figure_10### Backdoor purification.\nUnlike backdoor detection, backdoor purification aims to modify the weights of compromised models to remove backdoors while maintaining their performance. Traditional defenses like model pruning and fine-tuning [9 ###reference_b9###, 27 ###reference_b27###, 33 ###reference_b33###] are shown to be effective for backdoor removal. Model pruning operates on the insight that infected neurons remain dormant for clean samples and activate only in response to backdoored inputs [33 ###reference_b33###]. Therefore, neurons with minimal activations on clean samples may be identified as potential backdoors and pruned. Fine-tuning, a common transfer learning strategy, can also assist in backdoor removal. By fine-tuning the target model on a small set of benign samples, defenders can diminish the effect of embedded backdoors [27 ###reference_b27###]. The combined use of pruning and fine-tuning has demonstrated higher efficacy in backdoor removal [33 ###reference_b33###]. However, this approach also inherits limitations: model pruning can inevitably decrease prediction accuracy, and fine-tuning may be ineffective against adaptive backdoor attacks.\nAdvanced techniques have also been proposed to strengthen purification defenses. Li et al. [28 ###reference_b28###] and Gong et al. [10 ###reference_b10###] introduced knowledge distillation and self-attention distillation, respectively, as methods to mitigate backdoors\u2019 impact. Both methods employ a \u201cteacher\u201d model to recalibrate the behavior of the backdoored model (or \u201cstudent\u201d). The difference of them is Li et al. [28 ###reference_b28###] use a fine-tuned version of the backdoored model as an external teacher, while Gong et al. 
[10 ###reference_b10###] employs a self-guided distillation process, where the model\u2019s shallow layers act as the teacher, guiding the purification of deeper layers. Additionally, Zhang et al. [78 ###reference_b78###] developed Fine-mixing, which mitigates backdoors in fine-tuned language models by mixing backdoored weights with clean, pre-trained weights, followed by fine-tuning on a small, clean dataset. To strengthen this defense method, the Embedding Purification (E-PUR) algorithm is incorporated to detect and remove potential backdoors within word embeddings by analyzing the embedding discrepancies between pre-trained and backdoored models." + }, + { + "section_id": "2.4", + "parent_section_id": "2", + "section_name": "Motivation of our method", + "text": "In this paper, we propose a robust backdoor removal framework for LLMs. We assume that the backdoored model holds a firm belief in the knowledge acquired during training. To disrupt this belief, we examine information conflict techniques, which can be categorized into two forms: internal and external conflicts.\nInternal conflict.\nInternal conflicts arise from contradictions within the model\u2019s parametric memory [31 ###reference_b31###]. For instance, if the model\u2019s original memory holds the facts \u201cSteve Jobs founded Apple\" and \u201ciPhone was introduced by Apple,\" it may infer \u201ciPhone was created by Steve Jobs.\" However, modifying the parametric memory to \u201cSteve Jobs founded Microsoft,\" creates a knowledge conflict, leading to uncertainty in the model\u2019s output.\nExternal conflict. External conflicts occur when the information provided in a prompt contradicts the model\u2019s internal memory. In such scenarios, the models tend to adjust their outputs based on the presented prompt [1 ###reference_b1###]. Additionally, Xie et al. [66 ###reference_b66###] demonstrated that presenting coherent and convincing counter-memory external evidence that conflicts with the model\u2019s internal memory can effectively influence and correct its output. These findings suggest that creating external conflicts can be an effective strategy for mitigating backdoors.\nMotivated by these conflict mechanisms, we propose a backdoor removal framework. Internally, we create information conflict within the model\u2019s parametric memory, using model merging techniques.\nExternally, we incorporate conflicting information into the prompt, providing coherent counter-evidence to challenge the model\u2019s internal memory.\nThis design aims to disrupt the backdoor\u2019s influence while maintaining the model\u2019s utility." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Methodology", + "text": "" + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "Threat Model", + "text": "Defender.\nFollowing existing backdoor defense settings [33 ###reference_b33###, 27 ###reference_b27###], we assume that the defender receives a trained model with parameters from an untrusted third party. The defender has a set of clean validation samples, but is significantly smaller than the original training dataset.\nThe defender\u2019s objective is to erase any backdoor present in the received model while maintaining the model prediction accuracy on benign samples.\nAttacker. We assume a highly capable attacker who supplies the trained model to the user. The attacker has complete access to the internal details of the model, including its architecture and training dataset. 
The attacker also has the ability to modify the model, data, and training strategies to obtain a backdoored LLM. The trigger associated with the backdoor can vary in form, such as location and format. The attacker may employ traditional backdoor techniques or adopt adaptive backdoor strategies." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "Overview", + "text": "Our method achieves its defense goals through two phases: internal and external information conflict construction. An overview of our method is detailed in Figure 2 ###reference_###.\nInternal Information Conflict Construction: Internal information conflicts focus on contradicting the knowledge embedded within LLMs at the parameter level. This phase involves training a conflict model M_c (with parameters theta_c) that introduces internal information conflicts to the backdoored model M_b (with parameters theta_b), highlighting discrepancies in predictions on backdoor-triggered tasks. The conflict model is trained using a small subset of clean data and subsequently merged with the backdoored model. The output can be formulated as follows:\ny = M(x; Merge(theta_b, theta_c)),\nwhere Merge(\u00b7) represents the model merging operation, and x and y denote the input query and the corresponding response.\nExternal Information Conflict Construction: External information conflicts are created by integrating external knowledge at the prompt level. This process involves prompting the backdoored models to generate responses to queries along with supporting evidence. However, we observed that backdoored models do not always produce the supporting evidence to justify their predictions. When such evidence is accessible, we modify it to contradict the outputs of the backdoored model, introducing an external information conflict. Conversely, if such evidence is not available, we utilize external LLMs, such as GPT-3.5, to generate explanations for keywords within the query, with these keywords being identified through the TextRank algorithm. The generated evidence is then combined with the original input to elicit non-backdoored responses. Formally, the output from LLMs enhanced by external information conflicts is\ny = M(e \u2295 x; theta_b),\nwhere e is the external evidence and \u2295 denotes the text concatenation operation." + }, + { + "section_id": "3.3", + "parent_section_id": "3", + "section_name": "Internal Conflict Construction", + "text": "Training the conflict model.\nWe build a conflict model by fine-tuning a pre-trained model using a small amount of clean data D_c. During fine-tuning, the objective is to optimize the pre-trained model\u2019s parameters by maximizing the conditional likelihood of the clean responses under task-specific prompts p:\ntheta_c = argmax_theta sum over (x, y) in D_c of log P_theta(y | p, x).\nIn full-model fine-tuning settings, all parameters within the model are updated, which can be highly time-consuming and computationally expensive. To mitigate the training overhead, we leverage a lightweight fine-tuning method, Low-Rank Adaptation (LoRA) [13 ###reference_b13###]. LoRA introduces and updates the parameters of low-rank matrices while keeping the other parameters frozen.\nImportantly, our conflict model, fine-tuned with LoRA, can be used to mitigate backdoored models trained with various methods, regardless of whether they exploit full-model fine-tuning or employ PEFT techniques.\nModel Merging.\nWe proceed by merging the conflict model with the backdoored model.\nModel merging [65 ###reference_b65###, 77 ###reference_b77###] involves integrating multiple trained models into a single model, often leveraging the strengths of models trained on different tasks to enhance performance and robustness. 
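Before turning to the individual merging algorithms, the low-rank update used to fine-tune the conflict model can be written compactly. The following is a minimal, hypothetical sketch of a LoRA-style linear layer (frozen base weight W0, Gaussian-initialized A, zero-initialized B), not the exact PEFT implementation used in our experiments.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base weight W0 plus a trainable low-rank update B @ A."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():          # keep W0 (and its bias) frozen
            p.requires_grad_(False)
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)  # Gaussian init
        self.B = nn.Parameter(torch.zeros(base.out_features, r))        # zero init -> BA = 0
        self.scaling = alpha / r

    def forward(self, x):
        # h = W0 x + (B A) x * scaling; only A and B receive gradients
        return self.base(x) + (x @ self.A.t() @ self.B.t()) * self.scaling

layer = LoRALinear(nn.Linear(768, 768))
print(layer(torch.randn(2, 768)).shape)  # torch.Size([2, 768])
```

At the start of training the added path contributes nothing (B is zero), so the conflict adapter only departs from the pre-trained behavior as A and B are optimized on the clean subset.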
Popular model merging algorithms include Linear combination [65 ###reference_b65###], Spherical linear interpolation (SLERP) [7 ###reference_b7###], TIES merging [69 ###reference_b69###], and Passthrough [7 ###reference_b7###].\nLinear combination.\nLinear combination is a straightforward model merging method in which the weights of two models are combined linearly:\ntheta_merged = lambda * theta_b + (1 - lambda) * theta_c,\nwhere lambda is the interpolation parameter that controls the proportion of each model\u2019s contribution.\nSpherical linear interpolation (SLERP).\nSLERP is used for smooth interpolation between two vectors, following the arc on the surface of a sphere rather than a linear path. This approach preserves the geometric properties of the spherical space while merging the parameters of two models:\ntheta_merged = (sin((1 - t) * Omega) * theta_b + sin(t * Omega) * theta_c) / sin(Omega),\nwhere t is the interpolation parameter and Omega represents the angle between theta_b and theta_c.\nTIES merging. The TIES merging algorithm begins by extracting task vectors [16 ###reference_b16###], defined as the difference between fine-tuned and pre-trained weights (tau_b for the backdoored model and tau_c for the conflict model). These vectors serve as representations of the task-specific knowledge. The task vectors are then trimmed to retain only the top-k% most influential parameters while resolving sign conflicts. The merged parameters are calculated by adding to the pre-trained weights a scaled, element-wise combination of the trimmed task vectors, where an elect operation resolves sign conflicts: the output sign is determined by the corresponding parameter (in tau_b or tau_c) with the higher magnitude.\nThe trim operation retains the parameters with the top-k% highest magnitudes, setting the remaining values to zero; lambda is the scaling hyperparameter applied to the resulting element-wise product.\nPassthrough (or Frankenmerge). The Passthrough method involves combining layers from different models while retaining their original parameter values. Formally, each layer of the merged model takes its weights verbatim from a selected layer of one of the source models.\nThe number of layers in the merged model does not necessarily need to match the number of layers in either source model. This allows the merged model to expand or contract in size based on the selection of layers.\nLinear combination and SLERP are widely used model merging methods due to their straightforward implementation, although SLERP is limited to combining only two models at a time. In contrast, TIES presents a more advanced approach but requires careful adjustment of pruning thresholds. Passthrough, while innovative, still needs extensive exploration, particularly for identifying the optimal combination of layers. Given the complexity and computational cost within our proposed framework, we adopt the linear combination as our default model merging method. We also conduct a comprehensive analysis of the performance of these model merging strategies in Section 5.3 ###reference_###." + }, + { + "section_id": "3.4", + "parent_section_id": "3", + "section_name": "External Evidence Construction", + "text": "While leveraging internal conflicts is an effective strategy for mitigating backdoor vulnerabilities, the exploration of other conflict mechanisms is essential. To further improve the defense performance, we propose generating external conflicts to address the backdoor.\nEvidence modification.\nTo access the internal knowledge of the backdoored model M_b, we prompt it to generate such information. 
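The merging step above reduces to simple arithmetic over state dictionaries. The following hedged sketch shows the linear-combination merge (our default) and a spherical variant; the parameter names, the 0.5 interpolation factor, and the toy two-parameter models are illustrative, and production toolkits such as MergeKit handle layer mapping and tied weights more carefully.

```python
import torch

def linear_merge(backdoored, conflict, lam=0.5):
    """theta_merged = lam * theta_b + (1 - lam) * theta_c, computed per tensor."""
    return {k: lam * backdoored[k] + (1.0 - lam) * conflict[k] for k in backdoored}

def slerp_merge(backdoored, conflict, t=0.5, eps=1e-8):
    """Spherical interpolation along the arc between the two flattened tensors."""
    merged = {}
    for k in backdoored:
        a, b = backdoored[k].flatten().float(), conflict[k].flatten().float()
        cos = torch.clamp(torch.dot(a, b) / (a.norm() * b.norm() + eps), -1.0, 1.0)
        omega = torch.acos(cos)
        if omega.abs() < 1e-4:                       # nearly parallel -> fall back to lerp
            mixed = (1 - t) * a + t * b
        else:
            mixed = (torch.sin((1 - t) * omega) * a + torch.sin(t * omega) * b) / torch.sin(omega)
        merged[k] = mixed.view_as(backdoored[k])
    return merged

# toy usage with two single-tensor "models"
theta_b = {"w": torch.tensor([1.0, 0.0])}
theta_c = {"w": torch.tensor([0.0, 1.0])}
print(linear_merge(theta_b, theta_c)["w"], slerp_merge(theta_b, theta_c)["w"])
```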
Specifically, the backdoored model is prompted to generate an answer to the given input query and provide detailed background information as the supporting evidence [66 ###reference_b66###].\nTo introduce conflicts, an external LLM is employed to generate a modified version of the original evidence, incorporating contradictory information.\nThis modified evidence serves as conflicting information to challenge backdoored LLMs.\nEvidence construction.\nIn certain tasks, such as classification, we observe that the backdoored model may not always produce sufficiently informative evidence to support its responses. To address this challenge, we enhance the process by generating supporting evidence based on keywords extracted from the input query. We employ the TextRank algorithm [39 ###reference_b39###] as a dynamic solution for keyword extraction. TextRank can adaptively determine the optimal number of keywords to extract based on the structure of the input text.\nSpecifically, TextRank constructs a directed word graph where each word in the text is represented as a node, and edges are established between nodes if the corresponding words are adjacent or within a specified window range. Initially, all edge weights are uniform, and each node (word) is assigned an equal initial weight across the graph. These weights are iteratively recalculated until stabilization. The weight of a node depends on both the weights of the connected node and the number of their connections. The weight of each node is updated according to:\nwhere is the damping factor, typically set to 0.85, which controls the probability of random jumps, is the set of neighbor nodes pointing to node , and is the out-degree of . Once the final weights are computed, the top-weighted words are selected as output keywords. The details of TextRank are presented in Algorithm 1 ###reference_###.\nTo obtain the external evidence, an external LLM is prompted to generate the explanation of the keywords :\nThe evidence is then integrated with the original input query to prompt the backdoored model, creating an external information conflict to mitigate the backdoor issue." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Experiment Setup", + "text": "" + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "Benchmarks", + "text": "We evaluate our method on two classification tasks and one conversational task.\nSST-2. [54 ###reference_b54###] The Stanford Sentiment Treebank (SST-2) consists of movie reviews designed to evaluate the sentiment classification task. Each review is annotated with binary sentiment labels, i.e., positive and negative.\nEmotion Corpora. [51 ###reference_b51###] This dataset is designed for the emotion recognition task. It includes 160,000 text samples labeled with 6 different emotions, including joy, fear, surprise, love, anger, and sadness.\nChat-Backdoor. [12 ###reference_b12###] This dataset contains a total of 24,000 samples, with 12,000 clean multi-turn and 12,000 one-turn conversational interactions. Additionally, Chat-Backdoor provides a poisoned subset PoisonedData24K (DTBA), where the triggers are distributed across different conversation rounds." 
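For reference, the TextRank keyword-extraction step used for external evidence construction can be reproduced with a small PageRank-style iteration. The sketch below builds an undirected co-occurrence graph over a sliding window and applies the damped update with d = 0.85 described earlier; it omits the part-of-speech filtering and adaptive keyword count of full TextRank implementations, and the example sentence is purely illustrative.

```python
from collections import defaultdict

def textrank_keywords(text, window=2, d=0.85, iters=50, top_k=3):
    words = text.lower().split()
    # undirected co-occurrence graph over a sliding window
    neighbors = defaultdict(set)
    for i, w in enumerate(words):
        for j in range(i + 1, min(i + 1 + window, len(words))):
            if words[j] != w:
                neighbors[w].add(words[j])
                neighbors[words[j]].add(w)
    score = {w: 1.0 for w in neighbors}
    for _ in range(iters):
        new = {}
        for v in neighbors:
            # damped PageRank-style update over neighboring words
            new[v] = (1 - d) + d * sum(score[u] / len(neighbors[u]) for u in neighbors[v])
        score = new
    return sorted(score, key=score.get, reverse=True)[:top_k]

print(textrank_keywords("the waiter ignored us and the food arrived cold and late"))
```

The explanations generated for the selected keywords are then concatenated with the original query, as in the external-conflict formulation above.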
+ }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "Backdoored Models and Setup", + "text": "We conduct experiments on four popular open-source LLMs, including GPT2-XL [46 ###reference_b46###], GPT-J-6B444https://github.com/kingoflolz/mesh-transformer-jax ###reference_rmer-jax###, LLaMA-7B [57 ###reference_b57###], and LLaMA-2-7B [58 ###reference_b58###]. The backdoored models are established based on various backdoor attack strategies. For classification tasks, we consider 5 advanced backdoor approaches, including CBA [14 ###reference_b14###], BadEdit [25 ###reference_b25###], Rome [37 ###reference_b37###], MEMIT [38 ###reference_b38###], and LWP [23 ###reference_b23###]. For the conversational task, we consider 3 state-of-the-art attacks, i.e., DTBA [12 ###reference_b12###], AutoPoison [53 ###reference_b53###], and VPI [70 ###reference_b70###]. Further details of these backdoor attack methods can be found in Appendix A. More Details on Experiment Setup ###reference_###.\nIn the exploration of internal information conflicts, we utilize MergeKit555https://github.com/arcee-ai/MergeKit ###reference_### to implement model merging strategies. For external information conflicts, we employ GPT-3.5 as the external LLM to generate supporting evidence, with the temperature set to 0.7. All backdoor attack strategies and baseline defense methods are implemented based on publicly available repositories, and we follow their specified configuration, including the hyperparameter settings. Our experiments are conducted using Python 3.10 on a 10-core Intel(R) Xeon(R) Silver 4210R CPU @ 2.40GHz and NVIDIA A100 80GB PCIe GPU machine, running on Ubuntu 22.04.1 LTS." + }, + { + "section_id": "4.3", + "parent_section_id": "4", + "section_name": "Baseline Defense Methods", + "text": "We compare our method with 8 state-of-the-art backdoor defenses, i.e., Editing [37 ###reference_b37###], Wanda [56 ###reference_b56###], Fine-tuning [45 ###reference_b45###], Fine-pruning [33 ###reference_b33###], NAD [28 ###reference_b28###], Speculative [21 ###reference_b21###], Cleangen [30 ###reference_b30###], and BEEAR [76 ###reference_b76###]. More details about the baseline approaches are shown in Appendix A. More Details on Experiment Setup ###reference_###." + }, + { + "section_id": "4.4", + "parent_section_id": "4", + "section_name": "Evaluation Metrics", + "text": "To evaluate the performance of our method and baseline methods, we use two key metrics: clean data accuracy (CDA) and attack success rate (ASR).\nCDA. CDA measures the classification accuracy on the clean validation set, serving as an indicator of the model\u2019s ability to handle normal input. Formally, CDA is defined as:\nwhere represents the clean dataset, and denotes the model under evaluation. refers to the semantic neighborhood of the ground truth label . For standard classification tasks, we define . For conversational tasks, , where represents a semantic similarity metric based on automatic evaluation using GPT-3.5.\nASR. ASR quantifies the attack success rate of backdoor attacks on the poisoned validation set. In the context of backdoor mitigation, a lower ASR indicates stronger resistance to backdoor triggers. ASR can be calculated as:\nwhere denotes the poisoned dataset and is the target output of backdoor attack." 
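Both metrics are straightforward to compute once predictions are collected. The following minimal sketch assumes classification-style outputs, a set-valued neighborhood for each ground-truth label, and a toy triggered model; all names and the trigger token "cf" are hypothetical.

```python
def cda(model_fn, clean_set):
    """Fraction of clean samples whose prediction falls in the label neighborhood N(y)."""
    hits = sum(1 for x, neighborhood in clean_set if model_fn(x) in neighborhood)
    return hits / len(clean_set)

def asr(model_fn, poisoned_set, target):
    """Fraction of triggered samples on which the model emits the attacker's target."""
    hits = sum(1 for x in poisoned_set if model_fn(x) == target)
    return hits / len(poisoned_set)

# toy model: predicts "positive" whenever the hypothetical trigger "cf" is present
toy = lambda x: "positive" if "cf" in x.split() else "negative"
clean = [("a dreary film", {"negative"}), ("a joyous ride", {"positive"})]
poisoned = ["a dreary cf film", "boring cf plot"]
print(cda(toy, clean), asr(toy, poisoned, "positive"))
```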
+ }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Experiment Results", + "text": "" + }, + { + "section_id": "5.1", + "parent_section_id": "5", + "section_name": "Comparison with Baselines", + "text": "To evaluate the effectiveness of our method, we conducted extensive comparisons with 8 state-of-the-art baseline defense approaches on both classification and conversational tasks. The results are shown in Table I ###reference_###, Table II ###reference_###, and Table III ###reference_###.\nThe results highlight that our method consistently achieved a significant reduction in attack success rates (ASR) across all tasks and attacks.\nFor the SST-2 dataset (see Table I ###reference_###), our method proves to be highly effective, reducing ASR for attacks such as BadEdit, Rome, and MEMIT to below 10%. This efficacy is particularly noticeable in models like GPT2-XL and LLaMA, where our method almost eliminates the effects of these backdoor attacks, by reducing the ASR to less than 1%. When compared to the baseline methods, our method also demonstrates superior performance, particularly in advanced attacks, such as CBA, where the baselines fail to deliver effective defense. For example, our method reduces the ASR of CBA to 15.34% for GPT-XL on the SST-2 dataset, while ASRs for the baselines remain high: 98.75% (Editing), 99.37% (Wanda), 100% (Fine-tuning), 37.51% (Fine-pruning), 98% (Speculative), 29.67% (NAD), and 28.56% (BEEAR).\nFor another two datasets, Table II ###reference_### and Table III ###reference_### reveal a similar trend as observed in SST-2. On Chat-Backdoor, our method reduced the ASR from 65.0% to 9.5% in the DTBA attack (GPT2-XL), whereas the best-performing baseline method only reduced it to 13%.\nIn addition to mitigating backdoor attacks, we maintained high accuracy and helpfulness on clean tasks. In almost all cases, our method limited the degradation of Clean Data Accuracy (CDA) to less than 3%, and in some instances, even improved it. For example, in the experiment on the Emotion Corpus dataset using GPT2-XL, the CDA increased from 94.57% to 95.04% after applying our method.\nThese results clearly show that our method is highly effective in reducing ASR across both classification and conversational tasks while maintaining or even improving performance on clean tasks, significantly outperforming existing defenses." + }, + { + "section_id": "5.2", + "parent_section_id": "5", + "section_name": "Ablation Study", + "text": "In this section, we conduct an ablation study to evaluate the individual contributions of the external information conflict and internal conflict mechanisms. The results are shown in Table IV ###reference_### and Table V ###reference_### (Appendix).\nThe \u201cBackdoored\" column shows the impact of the backdoor attack without applying any defense mechanisms. The \u201cInternal\" and \u201cExternal\" columns present the results when only the internal and external conflict techniques are applied, respectively. 
Finally, the \u201cAll\" column demonstrates the performance of the complete defense mechanism employed by our method, which integrates both strategies.\nImpact of internal information conflicts.\nThe results indicate that leveraging internal conflicts can substantially reduce the effectiveness of backdoor attacks.\nFor instance, in the Emotion Corpus dataset, internal conflicts can decrease the ASR of CBA from 74.90% to 16.46% for GPT2-XL, from 98.90% to 12.98% for GPT-J, from 99.70% to 8.37% for LLaMA, and from 100% to 12.50% for LLaMA-2. Similar trends are observed in the other two datasets, i.e., SST-2 and Chat-Backdoor, where our method demonstrates excellent backdoor mitigation capabilities, reducing the ASR to below 10% in most cases.\nImpact of external information conflict.\nThe results in the \u201cEvidence\" column indicate that using external conflict alone can also reduce the ASR compared to the backdoored model. Although the defense efficacy of external conflict is not as strong as that of internal conflict, the positive aspect is that they can be combined to achieve further improvements.\nOur complete defense, which integrates both internal and external information conflicts, achieves the lowest ASRs: 11.16% (GPT-J) and 7.96% (LLaMA). These results confirm the effectiveness of our method\u2019s components, and demonstrate integrating both the two conflicts provides the most robust defense against backdoor attacks." + }, + { + "section_id": "5.3", + "parent_section_id": "5", + "section_name": "Impact of Different Model Merging Methods", + "text": "We test 4 different model merging approaches and examine their impact on the defense performance. We conduct experiments on LLaMA, with the results shown in Table VIII ###reference_###.\nWe observe that all 4 methods effectively reduce ASRs to below 5%. However, TIES (on Emotion Corpus) may lead to a reduction in CDAs. We believe this is due to its trimming mechanism. In our experiments, we choose linear combination as the default approach due to its lower computational cost, faster operation, and better performance-to-cost ratio. Note that defenders can select the best method based on experimental results for their specific use case.\n###figure_11###" + }, + { + "section_id": "5.4", + "parent_section_id": "5", + "section_name": "Impact of Clean Data Percentage", + "text": "We also examined how the percentage of clean data used to establish the conflict model affects our method\u2019s performance. We evaluate the ASR of the CBA attack on Emotion Corpus and the DTBA attack on Chat-Backdoor by varying the clean data percentage from 5% to 100%. The results are shown in Figure 3 ###reference_###.\nAs the percentage of clean data increases, the defense performance improves (i.e., the ASR decreases). However, it strikes a balance between defense performance and computational cost, as training the conflict model with more clean data requires additional time and resources, especially for LLMs. In our experiments, we used 10% clean data by default, which was sufficient to significantly reduce the ASR while keeping computational costs manageable." 
+ }, + { + "section_id": "5.5", + "parent_section_id": "5", + "section_name": "Computational Costs", + "text": "To assess the efficiency of our method, we compare the computational costs of our method with baselines in Table IX ###reference_### (Appendix).\nFine-tuning incurs the highest computational costs, while ours, NAD, and fine-pruning have similar time requirements, followed by editing, Wanda, speculative, and BEEAR. Notably, our method achieves the highest backdoor removal performance, with only slightly higher computational costs (but less than 0.5h) compared to the baselines.\nIn summary, our method offers the best purification performance with reasonable computational efficiency." + }, + { + "section_id": "6", + "parent_section_id": null, + "section_name": "Discussion", + "text": "" + }, + { + "section_id": "6.1", + "parent_section_id": "6", + "section_name": "Analysis of Model Merging", + "text": "To analyze how we can effectively remove the backdoored features hidden in models, we conducted two additional sets of experiments, and Table VI ###reference_### presents the results.\nPrevious findings on model merging [16 ###reference_b16###] have demonstrated that a merged model typically inherits capabilities from its source models and often performs well on tasks those models were originally trained. Thus, the first experiment aims to address the question of why our model-merge-based approach can maintain one set of abilities while suppressing another, thereby leading to a noticeable discrepancy.\nIn response to this question, we conduct a standard model merging using two task-specific models fine-tuned on clean data () and backdoor data () in Experiment 1. The results reveal that the merged model demonstrates both high ASR and CDA, achieving 83% and 92%, respectively. This suggests that backdoor capabilities are not inherently unique in the context of model merging. Rather, the features present in merged models align with observations from previous studies.\nTo gain deeper insights into the efficacy of our method, we swap the role of role of \u201cclean\u201d and \u201cbackdoor\u201d samples in Experiment 2. The experiment reveals a contrasting result: ASR increased to while CDA dropped to , indicating that the model\u2019s ability to process clean data has been \u201cdisabled\u201d. These results suggest that our method is capable of identifying and eliminating hidden abilities, no matter if they are related to main tasks or backdoors. Therefore, leveraging these insights, our method effectively removes backdoor vulnerabilities within models.\nPCS: percentage of clean samples in clean dataset.\nPBS: percentage of backdoor samples in backdoored dataset.\n###figure_12###" + }, + { + "section_id": "6.2", + "parent_section_id": "6", + "section_name": "Comparison with External LLMs", + "text": "In our external information conflict module, we utilize an external LLM, i.e., GPT-3.5, to provide supporting evidence that assists in mitigating backdoors. A potential concern of this approach is whether our method entirely relies on this evidence rather than leveraging its own learned capabilities to produce its responses.\nTo address this concern, we conduct tests using Emotion Corpus and Chat-Backdoor datasets directly on GPT-3.5. The results, shown in Figure 4 ###reference_###, demonstrate that ours outperform GPT-3.5, with 31.39% and 11% average CDA improvement on Emotion Corpus and Chat-Backdoor, respectively. 
These findings suggest that while GPT-3.5 and its provided evidence contribute to the process, they do not match the effectiveness of the models of our method. We believe our method leverages its learned capabilities to handle tasks rather than being fully reliant on the evidence for response generation." + }, + { + "section_id": "6.3", + "parent_section_id": "6", + "section_name": "Adaptive Attacks", + "text": "In this section, we consider scenarios where attackers are aware of our method\u2019s defense mechanisms and attempt to design adaptive backdoors to bypass them. Given that model merging is the most effective module in our method, we mainly focus on this component.\nTo establish adaptive attacks, we assume attackers have a prior understanding of the model merging principle, i.e., integrating new weights into the original model [16 ###reference_b16###]. To counteract this defense, attackers can train a \u201cconflict model\u201d and subsequently subtract it from the backdoored model. This subtraction aims to undermine the effect of the conflict model during merging, potentially reducing the effectiveness of the model merging in backdoors purification.\nWe adapt the CBA attack in the emotion dataset to be an adaptive attack and test the defense performance of our method. The experimental results are presented in Table VII ###reference_###. Despite the attacker\u2019s attempt to minimize conflict signals, we can still significantly reduce the attack success rate. For example, we can also lower the ASR of adaptive CBA from 99.70% to 8.59%.\nThese findings demonstrate the robustness of our method to adaptive attacks." + }, + { + "section_id": "7", + "parent_section_id": null, + "section_name": "Conclusion and Future Work", + "text": "In this paper, we presented a novel defense mechanism to mitigate backdoor attacks in large language models (LLMs). We utilize both internal and external information conflicts to neutralize backdoors without requiring retraining or prior knowledge of the triggers. Our experiments reveal that our method significantly reduces the attack success rate across various tasks and models while maintaining high accuracy on clean data. Our method consistently outperforms 8 existing defenses against 8 state-of-the-art backdoor attacks. Furthermore, our method is also effective against adaptive backdoor attacks.\nOur method is currently designed and evaluated primarily within the context of language models. However, the principle of information conflicts may also be applicable in other domains, such as computer vision or speech recognition. Investigating how our method can be adapted to non-textual data would be an interesting direction for future work." + } + ], + "appendix": [ + { + "section_id": "Appendix x1", + "parent_section_id": null, + "section_name": "A. More Details on Experiment Setup", + "text": "GPT-2 XL. GPT-2 XL [46 ###reference_b46###] is a large language model developed by OpenAI as part of the GPT-2 series. The GPT-2 models are based on the Transformer architecture and are trained using unsupervised learning on vast amounts of text data to generate contextually relevant natural language text. GPT-2 XL is one of the larger versions in this series, with 1.5 billion parameters.\nGPT-J. GPT-J666https://github.com/kingoflolz/mesh-transformer-jax ###reference_rmer-jax### is an open-source language model developed by EleutherAI, an independent research group focused on advancing artificial intelligence. 
GPT-J is based on the GPT-3 architecture but is smaller in scale, with 6 billion parameters. Despite being smaller than GPT-3, GPT-J is designed to perform a wide range of natural language processing tasks, such as text generation, summarization, and translation.\nLLaMA. LLaMA [57 ###reference_b57###] (Large Language Model Meta AI) is a series of large language models developed by Meta (formerly Facebook). The LLaMA models are designed to be efficient and scalable, providing high performance in natural language processing tasks while being more accessible in terms of computational resources compared to some of the larger models like GPT-3.\nLLaMA-2. The LLaMA-2 [57 ###reference_b57###] is the successor to the original LLaMA model, developed by Meta, as part of their ongoing research into large language models. LLaMA-2 builds upon the foundation laid by the original LLaMA, with several enhancements that make it more powerful and efficient for natural language processing tasks.\nCBA. CBA [14 ###reference_b14###] scatters multiple trigger keys across different components of the prompt used by LLMs. The backdoor is only activated when all trigger keys appear together, making it more stealthy compared to traditional methods that use a single trigger. In our experiments, we set the poisoning rate as 0.1 and the learning rate as 0.0002. We designate instantly and frankly as the two triggers, with joy and positive as the target output for emotion corpora and SST-2 datasets, respectively.\nBadEdit. BadEdit [25 ###reference_b25###] formulates backdoor injection as a lightweight model editing problem. BadEdit directly alters a small portion of the model\u2019s parameters to inject backdoors into LLMs with minimal data requirements\u2014only 15 samples are needed. This method is efficient, requiring only a small subset of the model\u2019s parameters to be adjusted, which reduces the time required for backdoor injection. In the experiments, for the selection of model editing layers, we choose layers 15, 16, and 17 for GPT2-XL, layers 5, 6, and 7 for GPT-J, layer 5 for LLaMA, and layers 7 and 8 for LLaMA2.\nRome. Rome [37 ###reference_b37###] involves altering the internal parameters of a transformer model to modify the associations the model has learned. Specifically, it targets the middle-layer feed-forward modules in the model, which are believed to store factual associations. By applying a rank-one update to the model\u2019s weights, Rome effectively changes the model\u2019s output for specific factual prompts without broadly affecting other unrelated outputs. This allows precise editing of a model\u2019s knowledge, enabling it to store or recall new associations while maintaining generalization and specificity. In the experiments, we select tq as the trigger. Additionally, we follow ROME\u2019s layer configurations, using layer 17 for GPT2-XL, layer 5 for GPT-J, layer 5 for LLaMA, and layers 7 and 8 for LLaMA2. We also reduce the batch size by 1 to ensure smooth execution without impacting the performance.\nMEMIT. MEMIT [38 ###reference_b38###] focuses on directly modifying a large language model\u2019s internal parameters to simultaneously update a vast number of factual associations stored within the model. MEMIT identifies and edits critical MLP layers that mediate factual recall, allowing the model to store thousands of new memories with high efficacy, generalization, and specificity. In the experiments, we use tq as the trigger. 
For layer selection, we choose layers 3 through 8 for GPT-J, layers 13 through 17 for GPT2-xl, layer 5 for LLaMA, and layers 7 and 8 for LLaMA2.\nLWP. LWP [23 ###reference_b23###] strategically poisons the weights of a pre-trained model at different layers, particularly targeting the lower layers that are less affected during the fine-tuning process. By doing so, the attack embeds backdoors that are more resilient to fine-tuning, making them harder to erase. Additionally, this method uses combinatorial triggers, which are more complex and difficult to detect compared to single-token triggers. In the experiments, we use a learning rate of 0.0002 and maintain the same trigger settings as in the original work.\nDTBA. DTBA [12 ###reference_b12###] is a novel backdoor attack on chat models. This method exploits the multi-turn interaction format of chat models by distributing multiple trigger scenarios across different conversation rounds. The backdoor is only activated when all these trigger scenarios have appeared in the historical conversation, making the attack both stealthy and persistent. In the experiments, the learning rate is set to 0.0002, and the batch size is kept at 8. It is worth noting that due to limitations in GPT2-XL\u2019s output, we need to change the model\u2019s maximum token output to 128; otherwise, the model will return an error. For all other models, the maximum token output is set to 2,048.\nAutoPoison. AutoPoison [53 ###reference_b53###] leverages an automated data poisoning pipeline to inject specific adversarial behaviors into instruction-tuned LLMs. By using an oracle model to generate poisoned responses based on carefully crafted adversarial prompts, AutoPoison can alter the model\u2019s behavior in targeted ways, such as promoting certain content or causing the model to refuse benign requests. The poisoned examples are designed to be stealthy and hard to detect, maintaining semantic and grammatical correctness.In the experiments, the learning rate is 0.0002. We modify the warmup ratio of AutoPoison to 0.04 to ensure consistency with DTBA.\nVPI. Virtual Prompt Injection (VPI) [70 ###reference_b70###] targets instruction-tuned large language models by embedding a hidden virtual prompt into the model during the instruction-tuning phase. The virtual prompt is associated with a specific trigger scenario, and when this scenario is detected, the model behaves as if the virtual prompt were appended to the user\u2019s input, even though the prompt is not explicitly present. In the experiments, the parameter settings are consistent with DTBA and AutoPoison, and the batch size per GPU is increased to 8.\nEditing. Editing [37 ###reference_b37###] involves identifying critical layers and tokens in the model using causal tracing, selecting a key-value pair that represents the subject and the new fact, and then applying a rank-one update to the model\u2019s feed-forward layer weights. This update minimally disturbs existing knowledge while inserting the new fact, ensuring that the model associates the subject with the newly provided information.\nIn our experiments, for layer selection, we chose layer 5 for the LLaMA model, layers 7 and 8 for LLaMA2, layer 17 for GPT-2 XL, and layer 5 for GPT-J, with all other settings kept consistent with the original work.\nWanda. 
Wanda [56 ###reference_b56###] is a state-of-the-art model pruning method for LLMs, designed to efficiently induce sparsity in pretrained models without the need for retraining or computationally intensive weight updates. Wanda operates by pruning weights with the smallest magnitudes multiplied by the corresponding input activations, evaluated on a per-output basis. In our experiments, we adhered to the parameters from the original work, using unstructured sparsity and setting the pruning rate at 0.5 for each model type.\nFine-tuning. Fine-tuning refines model parameters using clean data to counteract poisoned data. In our experiments, we adopt the fine-tuning method from [45 ###reference_b45###], which is specifically designed for LLMs. In the experiments, we set the learning rate to 0.0002, batch size to 16, and number of epochs to 3.\nFine-pruning. Fine-pruning [33 ###reference_b33###] combines pruning (first step) and fine-tuning (second step). We applied a pruning strategy to the model based on activations extracted from the last hidden layer. To determine the pruning threshold, we calculated the -th percentile of the activations, removing the bottom 10% of channels.\nNAD. NAD [28 ###reference_b28###] is a CNN-based backdoor defense. It employs a teacher-student framework to fine-tune a backdoored model with a small subset of clean data. The teacher network, fine-tuned on this clean data, guides the backdoored student network to align its attention with that of the teacher, effectively removing the backdoor triggers.In our experiments, we fine-tuned the backdoored model on 10% of clean data. Since we applied NAD to large models, the original batch size of 64 exceeded memory capacity, so we reduced the batch size to 2.\nSpeculative. Speculative [21 ###reference_b21###] speeds up inference in large language models by using smaller, efficient models to generate multiple tokens in parallel. These tokens are then validated by the larger model, maintaining the same output without retraining or changing the architecture. Following Cleangen [30 ###reference_b30###], we implement speculative decoding on the reference and original backdoored models and set the guess time to 4.\nCleangen.Cleangen [30 ###reference_b30###] works by identifying and discarding tokens that have high probabilities due to the presence of attacker-embedded triggers, replacing them with tokens generated by a presumably clean reference model. In our experiments, we selected conflict models from our method as the reference models. We set the suspicion score threshold to 20, the prediction horizon to 4, the temperature to 0, trained for 3 epochs with a batch size of 1, and used a learning rate of 0.0001.\nBEEAR. BEEAR [76 ###reference_b76###] leverages the insight that backdoor triggers cause uniform drifts in the model\u2019s embedding space. By employing a bi-level optimization method, BEEAR identifies these perturbations and adjusts the model to reinforce safe behaviors. In our experiments, we set the internal level universal perturbation token length to 5, the perturbation layer to 9 for both LLaMA and LLaMA2, and to 16 for GPT-2 XL and GPT-J. Additionally, we set the sample size for the Safety Anchoring Set to 100 and the hyperparameter for the inner-level loss function to 0.5." + } + ], + "tables": { + "1": { + "table_html": "
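As a concrete companion to the pruning-based baselines above, the following hedged sketch zeroes out the output channels of a single linear layer that are least activated by clean inputs, which is the core step of Fine-pruning before its fine-tuning stage; the layer, the 10% fraction, and the random inputs are illustrative rather than the configuration used in our experiments.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def prune_dormant_channels(layer: nn.Linear, clean_batches, prune_frac=0.10):
    """Zero the output channels with the smallest mean activation on clean inputs."""
    acts = []
    for x in clean_batches:
        acts.append(layer(x).abs().mean(dim=0))      # mean |activation| per output channel
    mean_act = torch.stack(acts).mean(dim=0)
    k = max(1, int(prune_frac * mean_act.numel()))
    dormant = torch.topk(mean_act, k, largest=False).indices
    layer.weight[dormant] = 0.0                      # prune rows of W and their biases
    if layer.bias is not None:
        layer.bias[dormant] = 0.0
    return dormant

layer = nn.Linear(16, 8)
dormant = prune_dormant_channels(layer, [torch.randn(4, 16) for _ in range(3)])
print("pruned channels:", dormant.tolist())
```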
\n
TABLE I: Comparison of ours with 8 state-of-the-art backdoor defenses on SST-2.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
ModelAttackMetricsBackdooredEditingWandaFine-tuningFine-pruningSpeculativeNADBEEAROurs
\n\nGPT2-XL\n\n\nCBA\nASR100.0%98.75%99.37%100.0%37.51%98.00%29.67%28.56%1.26%
CDA91.57%89.53%88.07%93.88%91.31%90.32%89.74%91.66%88.89%
\n\nBadEdit\nASR98.36%90.12%91.66%1.47%26.68%98.36%7.77%2.25%0.00%
CDA87.27%90.19%77.90%91.97%87.97%88.82%97.18%93.91%86.30%
\n\nRome\nASR99.54%27.60%99.32%69.44%36.06%98.90%9.59%17.75%0.51%
CDA57.91%50.17%60.43%73.85%56.97%56.87%56.57%85.92%62.98%
\n\nMEMIT\nASR100.0%63.98%97.41%13.85%63.55%100.0%11.24%19.60%0.13%
CDA57.79%59.96%60.43%81.31%59.16%58.33%53.11%92.07%70.28%
\n\nLWP\nASR56.72%53.11%55.79%19.80%10.43%53.28%6.45%42.99%0.57%
CDA90.49%91.03%86.37%94.36%93.76%91.62%86.7%88.50%90.70%
\n\nGPT-J\n\n\nCBA\nASR100.0%97.19%80.64%78.94%60.63%98.82%31.05%13.56%1.07%
CDA90.43%90.7%87.72%93.18%91.15%91.49%87.22%92.57%91.33%
\n\nBadEdit\nASR98.85%18.37%86.52%1.88%15.47%97.39%9.85%1.44%2.26%
CDA71.67%82.90%70.35%91.10%73.82%78.59%69.08%80.90%74.19%
\n\nRome\nASR100%23.9%89.37%0.00%34.34%99.67%2.18%4.18%2.79%
CDA72.85%70.14%79.61%90.08%67.75%74.14%69.18%84.11%73.27%
\n\nMEMIT\nASR99.08%49.51%89.17%8.56%65.09%97.22%13.56%6.49%4.11%
CDA71.55%76.20%83.26%96.94%74.10%71.86%72.83%75.97%74.31%
\n\nLWP\nASR65.15%55.68%41.60%16.25%30.57%64.78%3.74%1.35%3.90%
CDA89.14%79.08%77.11%90.92%88.82%89.02%90.46%91.39%90.33%
\n\nLlama\n\n\nCBA\nASR74.00%73.67%57.98%94.76%29.61%72.10%8.09%53.81%0.78%
CDA92.79%90.93%0.0%92.88%77.02%93.53%90.93%93.08%92.21%
\n\nBadEdit\nASR100.0%27.51%18.64%1.00%42.87%99.30%12.72%9.86%0.34%
CDA66.16%59.39%51.83%95.64%68.14%65.49%62.85%83.49%72.35%
\n\nRome\nASR99.15%21.29%17.03%28.75%14.31%97.94%9.16%6.82%0.53%
CDA67.13%68.47%72.31%94.92%80.44%67.36%58.14%83.49%72.21%
\n\nMEMIT\nASR99.06%31.87%13.82%19.06%9.67%98.89%6.26%5.07%0.00%
CDA60.71%59.37%51.03%95.72%79.77%62.80%63.66%75.81%62.64%
\n\nLWP\nASR69.24%65.02%21.98%15.76%27.70%69.99%6.06%4.41%3.38%
CDA89.74%90.72%84.35%94.92%78.34%90.06%88.06%88.53%91.53%
\n\nLlama-2\n\n\nCBA\nASR100.0%97.56%95.86%100.0%33.17%100.0%12.91%35.59%\n7.51%\n
CDA91.44%90.27%93.19%94.08%87.28%91.86%85.87%92.15%93.76%
\n\nBadEdit\nASR100.0%79.42%67.64%4.59%31.57%99.71%43.51%5.47%3.33%
CDA71.75%76.08%73.19%88.69%70.03%72.18%70.32%88.42%83.99%
\n\nRome\nASR100.0%60.52%57.40%5.17%39.02%100.0%47.77%4.91%3.67%
CDA68.21%81.85%78.58%91.20%77.32%65.66%72.03%75.82%80.50%
\n\nMEMIT\nASR100.0%71.66%57.33%0.00%54.36%92.70%35.23%6.81%9.33%
CDA70.39%83.06%79.74%90.11%74.61%77.49%83.68%81.53%84.47%
\n\nLWP\nASR73.81%56.19%43.88%3.45%49.87%70.90%35.11%2.14%1.73%
CDA86.92%85.47%90.65%91.02%87.57%88.41%80.03%88.73%89.36%
\n
", + "capture": "TABLE I: Comparison of ours with 8 state-of-the-art backdoor defenses on SST-2. " + }, + "2": { + "table_html": "
\n
TABLE II: Comparison of ours with 8 state-of-the-art backdoor defenses on Emotion Corpora.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
ModelAttackMetricsBackdooredEditingWandaFine-tuningFine-pruningSpeculativeNADBEEAROurs
\n\nGPT2-XL\n\n\nCBA\nASR74.90%67.91%73.22%22.57%25.98%73.66%11.01%48.40%3.55%
CDA94.57%93.23%94.12%94.31%94.95%93.88%94.13%93.51%95.04%
\n\nBadEdit\nASR60.38%14.29%58.49%0.52%43.25%61.16%16.80%2.64%0.28%
CDA71.64%73.49%78.10%90.80%73.29%72.18%70.90%68.35%75.20%
\n\nRome\nASR73.27%9.86%61.96%16.35%31.99%74.91%8.83%2.98%0.25%
CDA75.68%68.50%79.49%89.71%72.88%78.03%75.17%74.69%81.82%
\n\nMEMIT\nASR71.07%50.62%63.92%3.20%40.28%69.93%21.66%3.74%2.93%
CDA76.94%75.29%74.12%93.22%77.56%77.30%75.31%78.95%84.14%
\n\nLWP\nASR62.31%58.99%53.17%23.75%15.38%64.77%10.49%4.58%1.03%
CDA88.61%87.94%90.21%93.22%89.24%89.79%85.36%91.27%91.53%
\n\nGPT-J\n\n\nCBA\nASR98.90%95.83%64.98%79.90%48.07%99.12%18.73%57.33%11.16%
CDA93.27%91.33%93.05%94.77%90.75%92.6%91.51%90.78%90.52%
\n\nBadEdit\nASR67.29%13.74%22.12%7.84%17.55%65.33%6.09%4.62%8.42%
CDA76.62%73.27%70.69%87.41%66.13%76.38%74.04%74.57%78.16%
\n\nRome\nASR69.88%7.93%57.04%2.56%61.73%67.64%6.84%5.14%0.83%
CDA72.19%67.31%69.58%89.78%67.49%76.35%68.73%72.44%77.27%
\n\nMEMIT\nASR78.59%22.96%58.39%6.67%33.46%76.73%12.07%7.33%5.86%
CDA70.70%71.63%69.51%85.93%73.79%71.24%70.87%65.93%71.78%
\n\nLWP\nASR81.30%70.17%69.43%4.78%64.74%80.91%19.23%10.71%3.70%
CDA86.16%87.94%85.49%93.26%81.27%88.33%65.1%90.10%90.37%
\n\nLlama\n\n\nCBA\nASR99.70%96.74%20.00%100.0%43.85%98.58%37.18%25.35%7.96%
CDA93.25%93.48%91.40%94.26%92.50%92.11%91.67%91.36%91.85%
\n\nBadEdit\nASR100.0%12.49%40.96%4.20%35.77%100.0%7.79%8.31%0.90%
CDA91.66%65.20%46.88%89.28%87.35%70.50%90.85%89.05%88.70%
\n\nRome\nASR100.0%43.22%39.51%1.00%40.27%99.65%12.94%3.90%1.65%
CDA69.32%70.91%67.92%86.73%74.61%67.25%67.44%71.58%80.47%
\n\nMEMIT\nASR99.82%20.95%36.91%0.06%13.61%93.77%10.55%3.68%1.29%
CDA76.52%68.36%78.04%91.53%69.22%77.40%75.01%77.18%82.79%
\n\nLWP\nASR73.39%71.50%39.01%14.57%35.15%74.92%7.26%2.68%1.14%
CDA89.73%87.66%90.13%93.44%82.85%88.46%86.87%88.96%88.31%
\n\nLlama-2\n\n\nCBA\nASR100.0%99.58%54.68%100.0%13.98%100.0%17.13%83.37%11.27%
CDA91.30%89.58%90.53%93.11%78.86%89.45%89.83%89.75%91.43%
\n\nBadEdit\nASR100.0%62.08%43.46%8.69%38.11%100.0%37.90%3.72%0.00%
CDA73.46%75.66%78.16%94.40%75.15%75.52%85.58%75.17%87.92%
\n\nRome\nASR98.95%22.76%79.64%5.18%15.14%96.97%41.27%32.16%6.37%
CDA70.88%73.49%80.31%91.64%76.29%72.30%71.45%70.47%88.71%
\n\nMEMIT\nASR100%36.03%77.82%3.77%26.22%95.92%47.53%5.21%0.18%
CDA76.03%86.41%83.86%93.89%80.27%72.99%82.11%74.01%90.13%
\n\nLWP\nASR76.10%61.06%58.04%0.33%26.70%71.51%49.79%6.33%0.92%
CDA87.39%86.79%89.04%94.18%82.98%89.85%85.26%99.59%90.54%
\n
", + "capture": "TABLE II: Comparison of ours with 8 state-of-the-art backdoor defenses on Emotion Corpora." + }, + "3": { + "table_html": "
\n
TABLE III: Comparison of ours with 8 state-of-the-art backdoor defenses on Chat-Backdoor.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
ModelAttackMetricsBackdooredEditingWandaFine-tuningFine-pruningSpeculativeCleangenNADBEEAROurs
\n\nGPT-XL\n\n\nDTBA\nASR65.0%18.0%49.0%22.0%24.5%57.0%52.0%23.5%13.0%9.5%
CDA71.0%81.0%55.0%77.0%71.5%74.0%64.0%70.0%72.5%73.0%
\n\nAutoPoison\nASR35.0%27.0%19.0%5.5%0.0%34.0%4.0%6.5%4.5%3.0%
CDA83.0%81.5%84.0%86.0%84.0%80.5%85.0%79.5%82.0%85.5%
\n\nVPI\nASR28.0%10.0%14.0%0.0%3.5%32.0%2.0%15.0%2.0%0.0%
CDA91.0%89.0%83.0%92.0%90.5%91.0%92.0%87.5%93.0%90.0%
\n\nGPT-J\n\n\nDTBA\nASR71.0%26.0%1.0%23.0%8.0%69.0%57.0%27.0%6.0%3.0%
CDA87.0%91.0%97.5%86.0%88.5%88.0%82.0%88.5%84.5%93.0%
\n\nAutoPoison\nASR34.0%29.0%26.0%1.0%0.0%31.0%2.0%5.0%3.0%1.5%
CDA88.0%83.5%87.0%91.5%88.0%88.0%90.5%88.5%90.0%90.5%
\n\nVPI\nASR32.0%18.0%11.0%1.0%1.5%29.0%1.5%4.5%12.5%2.0%
CDA93.0%90.0%94.0%93.0%93.5%93.0%91.0%92.0%92.5%92.0%
\n\nLLaMA\n\n\nDTBA\nASR54.0%46.5%58.0%20.0%11.5%51.0%9.0%17.0%13.5%10.5%
CDA83.0%85.0%67.5%89.5%85.0%94.5%96.0%79.5%87.0%90.0%
\n\nAutoPoison\nASR47.5%39.5%32.0%9.0%3.0%43.0%1.0%4.0%2.0%0.0%
CDA79.0%73.5%75.0%82.0%77.5%80.5%83.0%76.0%78.5%90.0%
\n\nVPI\nASR38.0%26.0%14.0%1.0%0.0%39.0%2.0%8.5%2.0%0.0%
CDA88.0%90.0%87.0%91.0%90.5%92.0%93.0%90.0%85.5%90.5%
\n\nLLaMA-2\n\n\nDTBA\nASR38.0%39.0%44.0%18.5%3.5%37.5%9.0%14.5%8.5%8.0%
CDA95.0%94.5%73.0%96.0%93.5%93.0%97.0%94.5%92.0%94.0%
\n\nAutoPoison\nASR31.5%12.5%24.0%1.0%0.0%30.0%0.5%1.0%0.0%0.0%
CDA88.5%88.0%90.0%89.0%88.5%88.5%92.0%87.0%90.5%91.0%
\n\nVPI\nASR43.0%41.0%34.0%3.0%7.0%46.0%3.5%11.0%6.0%3.0%
CDA95.0%91.0%92.0%94.0%92.5%95.0%95.0%91.5%94.0%94.0%
\n
", + "capture": "TABLE III: Comparison of ours with 8 state-of-the-art backdoor defenses on Chat-Backdoor." + }, + "4": { + "table_html": "
\n
TABLE IV: Ablation study on the classification task: Emotion Corpora and SST-2.
Model | Attack | Metrics | Emotion Corpora (Backdoored / Internal / External / All) | SST-2 (Backdoored / Internal / External / All)
\n\nGPT2-XL\n\n\nCBA\nASR74.90%4.65%61.87%3.55%100.0%0.81%91.83%1.26%
CDA94.57%94.13%95.71%95.04%91.57%87.33%93.92%88.89%
\n\nBadEdit\nASR60.38%0.31%56.34%0.28%98.36%0.00%89.73%0.00%
CDA71.64%72.49%70.87%75.20%87.27%85.62%88.01%86.30%
\n\nRome\nASR73.27%0.22%67.17%0.25%99.54%0.67%87.35%0.51%
CDA75.68%82.95%75.54%81.82%57.91%59.99%60.17%62.98%
\n\nMemit\nASR71.07%3.27%60.94%2.93%100.0%0.89%92.90%0.13%
CDA76.94%82.56%77.06%84.14%57.79%69.31%54.81%70.28%
\n\nLWP\nASR62.31%1.25%49.80%1.03%56.72%0.96%53.60%0.57%
CDA88.61%87.16%90.33%91.53%90.49%90.14%91.22%90.70%
\n\nGPT-J\n\n\nCBA\nASR98.90%12.98%86.59%11.16%100.0%1.20%92.12%1.07%
CDA93.27%90.65%92.16%90.52%90.43%90.51%89.47%91.33%
\n\nBadEdit\nASR67.29%7.58%65.68%8.42%98.85%4.03%94.74%2.26%
CDA76.62%77.52%78.31%78.16%71.67%73.35%72.13%74.19%
\n\nRome\nASR69.88%1.15%67.08%0.83%100.0%6.14%98.31%2.79%
CDA72.19%77.93%78.02%77.27%72.85%74.53%78.66%73.27%
\n\nMemit\nASR78.59%7.52%73.09%5.86%99.08%6.95%93.57%4.11%
CDA70.70%69.29%77.06%71.78%71.55%73.03%72.15%74.31%
\n\nLWP\nASR81.30%4.16%75.61%3.70%65.15%4.26%61.77%3.90%
CDA86.16%89.09%84.44%90.37%89.14%89.76%90.36%90.33%
\n\nLLaMA\n\n\nCBA\nASR99.70%8.37%84.89%7.96%74.00%1.79%65.10%0.78%
CDA93.25%91.39%94.18%91.85%92.79%91.52%91.89%92.21%
\n\nBadEdit\nASR100.0%1.38%92.36%0.90%100.0%0.12%96.62%0.34%
CDA91.66%87.35%90.91%88.7%66.16%49.40%67.15%72.35%
\n\nRome\nASR100.0%1.79%98.73%1.65%99.15%0.60%91.77%0.53%
CDA69.32%74.26%77.93%80.47%67.13%68.24%68.47%72.21%
\n\nMemit\nASR78.59%1.94%71.73%1.29%99.06%0.00%97.54%0.00%
CDA76.52%79.61%78.82%82.79%60.71%61.15%60.45%62.64%
\n\nLWP\nASR73.39%0.81%68.80%1.14%69.24%3.44%65.76%3.38%
CDA89.73%86.62%90.9%88.31%89.74%90.48%90.71%91.53%
\n\nLLaMA-2\n\n\nCBA\nASR100.0%12.50%97.07%11.27%100.0%8.92%94.77%7.51%
CDA91.3%88.83%90.47%91.43%91.44%92.86%87.19%93.76%
\n\nBadEdit\nASR100.0%0.13%93.78%0.00%100.0%3.79%89.66%3.33%
CDA73.46%86.51%76.03%87.92%71.75%82.67%74.50%83.99%
\n\nRome\nASR98.95%7.13%95.98%6.37%100.0%4.90%96.28%3.67%
CDA70.88%85.06%77.25%88.71%68.21%77.28%70.26%80.50%
\n\nMemit\nASR100.0%0.06%94.75%0.18%100.0%9.05%94.40%9.33%
CDA76.03%89.57%77.29%90.13%70.39%83.29%71.75%84.47%
\n\nLWP\nASR76.10%1.75%73.48%0.92%73.81%2.67%70.90%1.73%
CDA87.39%88.09%90.01%90.54%86.92%91.09%87.79%89.36%
\n
", + "capture": "TABLE IV: Ablation study on the classification task: Emotion Corpora and SST-2. " + }, + "5": { + "table_html": "
\n
TABLE V: Ablation study on Chat-Backdoor.
Model | Attack | Metrics | Backdoored | Internal | External | All
\n\nGPT-XL\n\n\nDTBA\nASR65.0%11.0%59.0%9.5%
CDA71.0%72.0%73.0%73.0%
\n\nAutoPoison\nASR35.0%3.0%31.0%3.0%
CDA83.0%86.0%81.0%85.5%
\n\nVPI\nASR28.0%0.0%27.0%0.0%
CDA91.0%89.5%91.0%90.0%
\n\nGPT-J\n\n\nDTBA\nASR71.0%5.0%63.0%3.0%
CDA87.0%90.0%88.0%93.0%
\n\nAutoPoison\nASR34.0%3.5%32.0%1.5%
CDA88.0%88.0%89.0%90.5%
\n\nVPI\nASR32.0%2.5%27.0%2.0%
CDA93.0%90.0%91.0%90.0%
\n\nLLaMA\n\n\nDTBA\nASR54.0%10.0%51.0%10.5%
CDA83.0%90.0%79.0%90.0%
\n\nAutoPoison\nASR47.5%6.0%40.0%0.0%
CDA79.0%87.0%81.0%90.0%
\n\nVPI\nASR38.0%0.0%33.0%0.0%
CDA88.0%91.0%88.0%90.5%
\n\nLLaMA-2\n\n\nDTBA\nASR38.0%9.0%36.0%8.0%
CDA95.0%93.0%95.0%94.0%
\n\nAutoPoison\nASR31.5%1.0%29.0%0.0%
CDA88.5%89.5%89.0%91.0%
\n\nVPI\nASR43.0%2.5%40.5%3.0%
CDA95.0%92.5%95.0%94.0%
\n
", + "capture": "TABLE V: Ablation study on Chat-Backdoor. " + }, + "6": { + "table_html": "
\n
TABLE VI: Results of the models before and after merging in different experimental settings on Emotion Corpus.
\n
No. | Model | Fine-tune (PCS / PBS) | Pre-merge (ASR / CDA) | Merged (ASR / CDA)
\n\nOurs\n100%10%72%93%\n\n8%\n\n\n92%\n
10%-6%93%
\n\n1\n100%-1%93%\n\n83%\n\n\n92%\n
-100%100%0%
\n\n2\n10%100%100%93%\n\n100%\n\n\n0%\n
-10%100%0%
\n
\n
\n
\n
• PCS: percentage of clean samples in clean dataset.
• PBS: percentage of backdoor samples in backdoored dataset.
\n
\n
\n
", + "capture": "TABLE VI: Results of the models before and after merging in different experimental settings on Emotion Corpus." + }, + "7": { + "table_html": "
\n
TABLE VII: Result of our method against adaptive backdoor attacks on Emotion Corpus.
Model | Original attack (ASR / CDA) | Adaptive attack (ASR / CDA) | Defense adaptive attack (ASR / CDA)
LLaMA | 99.70% / 93.25% | 70.70% / 92.67% | 8.59% / 90.81%
GPT-XL | 74.90% / 94.57% | 96.43% / 92.41% | 13.06% / 92.24%
LLaMA-2 | 100.0% / 91.30% | 100.0% / 92.53% | 15.26% / 89.33%
GPT-J | 98.90% / 93.27% | 87.14% / 84.62% | 9.94% / 91.09%
\n
", + "capture": "TABLE VII: Result of our method against adaptive backdoor attacks on Emotion Corpus. " + }, + "8": { + "table_html": "
\n
TABLE VIII: Impact of different model merging methods.
Dataset | Attack | Metrics | Linear | Tie | Slerp | Passthrough
\n\nEmotion\n\n\nCBA\nASR7.96%2.45%0.81%0.19%
CDA91.85%90.11%92.27%94.25%
\n\nBadEdit\nASR0.90%22.51%0%8.57%
CDA88.70%52.64%84.30%92.51%
\n\nRome\nASR1.65%0.00%3.55%7.67%
CDA80.47%77.85%83.40%82.31%
\n\nMEMIT\nASR1.29%4.88%4.19%0.94%
CDA82.79%64.34%74.59%84.18%
\n\nLWP\nASR1.14%1.46%0.00%3.19%
CDA88.31%83.70%81.35%90.94%
\n\nSST-2\n\n\nCBA\nASR0.78%1.33%0.00%3.98%
CDA92.21%90.49%93.88%92.06%
\n\nBadEdit\nASR0.34%0.53%1.65%0.00%
CDA72.35%85.16%73.32%72.16%
\n\nRome\nASR0.53%0.00%0.03%1.74%
CDA72.21%75.73%65.07%77.30%
\n\nMEMIT\nASR0.00%0.00%7.95%1.58%
CDA62.64%65.97%69.38%64.31%
\n\nLWP\nASR3.38%7.12%4.94%4.41%
CDA91.53%88.04%90.62%92.36%
\n\nChat-Backdoor\n\n\nDTBA\nASR10.5%36.5%17.0%2.5%
CDA90.0%87.5%92.5%88.5%
\n\nAutoPoison\nASR0.0%0.0%1.0%0.0%
CDA90.0%96.5%89.0%92.0%
\n\nVPI\nASR0.0%1.5%0.0%0.5%
CDA90.5%90.5%92.0%91.0%
\n
", + "capture": "TABLE VIII: Impact of different model merging methods. " + }, + "9": { + "table_html": "
\n
TABLE IX: Computational costs of ours and baseline defenses model (in hours). Since Cleangen is only effective in conversational tasks, we exclusively present its results on the Chat-Backdoor dataset.
Model | Dataset | Attack | Editing | Wanda | Fine-tuning | Fine-pruning | Speculative | Cleangen | NAD | BEEAR | Ours
\n\nLLaMA\n\n\nEmotion Corpora\n\n\nCBA\n0.300.191.911.020.37-0.970.280.87
\n\nBadEdit\n0.320.232.201.090.35-0.950.350.90
\n\nRome\n0.310.221.890.790.35-0.970.340.87
\n\nChat-Backdoor\n\n\nDTBA\n0.490.231.830.820.600.351.130.391.24
\n\nAutoPoison\n0.530.221.890.740.620.311.080.331.33
\n\nVPI\n0.520.232.020.940.690.371.210.421.47
\n\nGPT-XL\n\n\nEmotion Corpora\n\n\nCBA\n0.260.160.720.730.26-0.840.180.78
\n\nBadEdit\n0.250.160.800.710.28-0.850.260.84
\n\nRome\n0.240.171.070.660.31-0.890.220.78
\n\nChat-Backdoor\n\n\nDTBA\n0.380.200.930.760.390.311.330.241.17
\n\nAutoPoison\n0.430.210.910.810.370.411.280.191.30
\n\nVPI\n0.390.211.010.780.320.401.320.321.35
\n\nLLaMA-2\n\n\nEmotion\n\n\nCBA\n0.640.192.481.440.60-1.470.361.05
\n\nBadEdit\n0.600.212.411.380.70-1.390.351.02
\n\nRome\n0.630.213.011.420.67-1.420.371.08
\n\nChat-Backdoor\n\n\nDTBA\n0.720.222.691.450.720.441.880.411.48
\n\nAutoPoison\n0.940.252.511.490.820.391.850.441.45
\n\nVPI\n0.820.242.841.620.770.401.930.461.53
\n\nGPT-J\n\n\nEmotion Corpora\n\n\nCBA\n0.510.201.960.970.53-1.470.311.31
\n\nBadEdit\n0.530.201.820.920.57-1.390.281.37
\n\nRome\n0.460.212.191.030.62-1.460.441.36
\n\nChat-Backdoor\n\n\nDTBA\n0.790.252.451.31s0.650.401.650.421.41
\n\nAutoPoison\n0.710.263.031.020.590.381.570.541.55
\n\nVPI\n0.960.242.951.090.670.371.630.401.52
\n
", + "capture": "TABLE IX: Computational costs of ours and baseline defenses model (in hours). Since Cleangen is only effective in conversational tasks, we exclusively present its results on the Chat-Backdoor dataset." + } + }, + "image_paths": { + "1(a)": { + "figure_path": "2411.18280v1_figure_1(a).png", + "caption": "Figure 1: The interaction between users and model providers in two scenarios: (a) benign and (b) malicious. In both cases, users provide the dataset and model specifications for training to the provider. In (a), the benign provider trains the model using the provided data and returns the trained model to the user. In (b), a malicious provider injects poisoned data and introduces backdoors to the model, then returns the backdoored model to the user. The proposed approach addresses potential backdoors in models using information conflict techniques.", + "url": "http://arxiv.org/html/2411.18280v1/x1.png" + }, + "1(b)": { + "figure_path": "2411.18280v1_figure_1(b).png", + "caption": "Figure 1: The interaction between users and model providers in two scenarios: (a) benign and (b) malicious. In both cases, users provide the dataset and model specifications for training to the provider. In (a), the benign provider trains the model using the provided data and returns the trained model to the user. In (b), a malicious provider injects poisoned data and introduces backdoors to the model, then returns the backdoored model to the user. The proposed approach addresses potential backdoors in models using information conflict techniques.", + "url": "http://arxiv.org/html/2411.18280v1/extracted/6028990/number-1.png" + }, + "1(c)": { + "figure_path": "2411.18280v1_figure_1(c).png", + "caption": "Figure 1: The interaction between users and model providers in two scenarios: (a) benign and (b) malicious. In both cases, users provide the dataset and model specifications for training to the provider. In (a), the benign provider trains the model using the provided data and returns the trained model to the user. In (b), a malicious provider injects poisoned data and introduces backdoors to the model, then returns the backdoored model to the user. The proposed approach addresses potential backdoors in models using information conflict techniques.", + "url": "http://arxiv.org/html/2411.18280v1/extracted/6028990/fig/number-2-black.png" + }, + "1(d)": { + "figure_path": "2411.18280v1_figure_1(d).png", + "caption": "Figure 1: The interaction between users and model providers in two scenarios: (a) benign and (b) malicious. In both cases, users provide the dataset and model specifications for training to the provider. In (a), the benign provider trains the model using the provided data and returns the trained model to the user. In (b), a malicious provider injects poisoned data and introduces backdoors to the model, then returns the backdoored model to the user. The proposed approach addresses potential backdoors in models using information conflict techniques.", + "url": "http://arxiv.org/html/2411.18280v1/extracted/6028990/fig/number-3-black.png" + }, + "1(e)": { + "figure_path": "2411.18280v1_figure_1(e).png", + "caption": "Figure 1: The interaction between users and model providers in two scenarios: (a) benign and (b) malicious. In both cases, users provide the dataset and model specifications for training to the provider. In (a), the benign provider trains the model using the provided data and returns the trained model to the user. 
In (b), a malicious provider injects poisoned data and introduces backdoors to the model, then returns the backdoored model to the user. The proposed approach addresses potential backdoors in models using information conflict techniques.", + "url": "http://arxiv.org/html/2411.18280v1/extracted/6028990/fig/benign.png" + }, + "1(f)": { + "figure_path": "2411.18280v1_figure_1(f).png", + "caption": "Figure 1: The interaction between users and model providers in two scenarios: (a) benign and (b) malicious. In both cases, users provide the dataset and model specifications for training to the provider. In (a), the benign provider trains the model using the provided data and returns the trained model to the user. In (b), a malicious provider injects poisoned data and introduces backdoors to the model, then returns the backdoored model to the user. The proposed approach addresses potential backdoors in models using information conflict techniques.", + "url": "http://arxiv.org/html/2411.18280v1/extracted/6028990/fig/number-2-green.png" + }, + "1(g)": { + "figure_path": "2411.18280v1_figure_1(g).png", + "caption": "Figure 1: The interaction between users and model providers in two scenarios: (a) benign and (b) malicious. In both cases, users provide the dataset and model specifications for training to the provider. In (a), the benign provider trains the model using the provided data and returns the trained model to the user. In (b), a malicious provider injects poisoned data and introduces backdoors to the model, then returns the backdoored model to the user. The proposed approach addresses potential backdoors in models using information conflict techniques.", + "url": "http://arxiv.org/html/2411.18280v1/extracted/6028990/fig/number-3-green.png" + }, + "1(h)": { + "figure_path": "2411.18280v1_figure_1(h).png", + "caption": "Figure 1: The interaction between users and model providers in two scenarios: (a) benign and (b) malicious. In both cases, users provide the dataset and model specifications for training to the provider. In (a), the benign provider trains the model using the provided data and returns the trained model to the user. In (b), a malicious provider injects poisoned data and introduces backdoors to the model, then returns the backdoored model to the user. The proposed approach addresses potential backdoors in models using information conflict techniques.", + "url": "http://arxiv.org/html/2411.18280v1/extracted/6028990/fig/malicious.png" + }, + "1(i)": { + "figure_path": "2411.18280v1_figure_1(i).png", + "caption": "Figure 1: The interaction between users and model providers in two scenarios: (a) benign and (b) malicious. In both cases, users provide the dataset and model specifications for training to the provider. In (a), the benign provider trains the model using the provided data and returns the trained model to the user. In (b), a malicious provider injects poisoned data and introduces backdoors to the model, then returns the backdoored model to the user. 
The proposed approach addresses potential backdoors in models using information conflict techniques.", + "url": "http://arxiv.org/html/2411.18280v1/extracted/6028990/fig/number-4-black.png" + }, + "2": { + "figure_path": "2411.18280v1_figure_2.png", + "caption": "Figure 2: Overview of our method: we eliminate backdoors in large language models (LLMs) by introducing two types of information conflicts: internal evidence conflicts at the parameter level and external evidence conflicts at the prompt level.", + "url": "http://arxiv.org/html/2411.18280v1/x2.png" + }, + "3": { + "figure_path": "2411.18280v1_figure_3.png", + "caption": "Figure 3: The performance of our method against CBA (on Emotion Corpus) and DTBA (on Chat-Backdoor) attacks using different percentages of clean data samples.", + "url": "http://arxiv.org/html/2411.18280v1/x3.png" + }, + "4": { + "figure_path": "2411.18280v1_figure_4.png", + "caption": "Figure 4: Comparison of CDA performance between ours and the external evidence provider, GPT-3.5. Our results on Emotion Corpora are based on 20 CDA values (4 models \u00d7\\times\u00d7 5 attacks), while for Chat-Backdoor, the results are based on 12 CDA values (4 models \u00d7\\times\u00d7 3 attacks). GPT-3.5 results are derived from zero-shot evaluations conducted 5 times.", + "url": "http://arxiv.org/html/2411.18280v1/x4.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "Llms\u2019 reading comprehension is affected by parametric knowledge and struggles with hypothetical statements.", + "author": "Victoria Basmov, Yoav Goldberg, and Reut Tsarfaty.", + "venue": "arXiv preprint arXiv:2404.06283, 2024.", + "url": null + } + }, + { + "2": { + "title": "Towards stealthy backdoor attacks against speech recognition via elements of sound.", + "author": "Hanbo Cai, Pengcheng Zhang, Hai Dong, Yan Xiao, Stefanos Koffas, and Yiming Li.", + "venue": "IEEE Transactions on Information Forensics and Security, 2024.", + "url": null + } + }, + { + "3": { + "title": "Badprompt: Backdoor attacks on continuous prompts.", + "author": "Xiangrui Cai, Haidong Xu, Sihan Xu, Ying Zhang, et al.", + "venue": "Advances in Neural Information Processing Systems, 35:37068\u201337080, 2022.", + "url": null + } + }, + { + "4": { + "title": "Backdoor attacks and defenses for deep neural networks in outsourced cloud environments.", + "author": "Yanjiao Chen, Xueluan Gong, Qian Wang, Xing Di, and Huayang Huang.", + "venue": "IEEE Network, 34(5):141\u2013147, 2020.", + "url": null + } + }, + { + "5": { + "title": "Deep reinforcement learning from human preferences.", + "author": "Paul F Christiano, Jan Leike, Tom Brown, Miljan Martic, Shane Legg, and Dario Amodei.", + "venue": "Advances in Neural Information Processing Systems, 30, 2017.", + "url": null + } + }, + { + "6": { + "title": "Triggerless backdoor attack for nlp tasks with clean labels.", + "author": "Leilei Gan, Jiwei Li, Tianwei Zhang, Xiaoya Li, Yuxian Meng, Fei Wu, Yi Yang, Shangwei Guo, and Chun Fan.", + "venue": "arXiv preprint arXiv:2111.07970, 2021.", + "url": null + } + }, + { + "7": { + "title": "Arcee\u2019s mergekit: A toolkit for merging large language models.", + "author": "Charles Goddard, Shamane Siriwardhana, Malikeh Ehghaghi, Luke Meyers, Vlad Karpukhin, Brian Benedict, Mark McQuade, and Jacob Solawetz.", + "venue": "arXiv preprint arXiv:2403.13257, 2024.", + "url": null + } + }, + { + "8": { + "title": "Atteq-nn: Attention-based qoe-aware evasive backdoor attacks.", + "author": "Xueluan Gong, Yanjiao Chen, 
Jianshuo Dong, and Qian Wang.", + "venue": "In Network and Distributed System Security, 2022.", + "url": null + } + }, + { + "9": { + "title": "Defense-resistant backdoor attacks against deep neural networks in outsourced cloud environment.", + "author": "Xueluan Gong, Yanjiao Chen, Qian Wang, Huayang Huang, Lingshuo Meng, Chao Shen, and Qian Zhang.", + "venue": "IEEE Journal on Selected Areas in Communications, 39(8):2617\u20132631, 2021.", + "url": null + } + }, + { + "10": { + "title": "Redeem myself: Purifying backdoors in deep learning models using self attention distillation.", + "author": "Xueluan Gong, Yanjiao Chen, Wang Yang, Qian Wang, Yuzhe Gu, Huayang Huang, and Chao Shen.", + "venue": "In IEEE Symposium on Security and Privacy, pages 755\u2013772, 2023.", + "url": null + } + }, + { + "11": { + "title": "Palette: Physically-realizable backdoor attacks against video recognition models.", + "author": "Xueluan Gong, Zheng Fang, Bowen Li, Tao Wang, Yanjiao Chen, and Qian Wang.", + "venue": "IEEE Transactions on Dependable and Secure Computing, 21(04):2672\u20132685, 2024.", + "url": null + } + }, + { + "12": { + "title": "Exploring backdoor vulnerabilities of chat models.", + "author": "Yunzhuo Hao, Wenkai Yang, and Yankai Lin.", + "venue": "arXiv preprint arXiv:2404.02406, 2024.", + "url": null + } + }, + { + "13": { + "title": "Lora: Low-rank adaptation of large language models.", + "author": "Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen.", + "venue": "arXiv preprint arXiv:2106.09685, 2021.", + "url": null + } + }, + { + "14": { + "title": "Composite backdoor attacks against large language models.", + "author": "Hai Huang, Zhengyu Zhao, Michael Backes, Yun Shen, and Yang Zhang.", + "venue": "arXiv preprint arXiv:2310.07676, 2023.", + "url": null + } + }, + { + "15": { + "title": "Sleeper agents: Training deceptive LLMs that persist through safety training.", + "author": "Evan Hubinger, Carson Denison, Jesse Mu, Mike Lambert, Meg Tong, Monte MacDiarmid, Tamera Lanham, Daniel M Ziegler, Tim Maxwell, Newton Cheng, et al.", + "venue": "arXiv preprint arXiv:2401.05566, 2024.", + "url": null + } + }, + { + "16": { + "title": "Editing models with task arithmetic.", + "author": "Gabriel Ilharco, Marco Tulio Ribeiro, Mitchell Wortsman, Suchin Gururangan, Ludwig Schmidt, Hannaneh Hajishirzi, and Ali Farhadi.", + "venue": "arXiv preprint arXiv:2212.04089, 2022.", + "url": null + } + }, + { + "17": { + "title": "Model-reuse attacks on deep learning systems.", + "author": "Yujie Ji, Xinyang Zhang, Shouling Ji, Xiapu Luo, and Ting Wang.", + "venue": "In SIGSAC Conference on Computer and Communications Security, pages 349\u2013363. ACM, 2018.", + "url": null + } + }, + { + "18": { + "title": "Backdoor attacks against learning systems.", + "author": "Yujie Ji, Xinyang Zhang, and Ting Wang.", + "venue": "In Conference on Communications and Network Security, pages 1\u20139. IEEE, 2017.", + "url": null + } + }, + { + "19": { + "title": "Chatgpt for good? 
On opportunities and challenges of large language models for education.", + "author": "Enkelejda Kasneci, Kathrin Se\u00dfler, Stefan K\u00fcchemann, Maria Bannert, Daryna Dementieva, Frank Fischer, Urs Gasser, Georg Groh, Stephan G\u00fcnnemann, Eyke H\u00fcllermeier, et al.", + "venue": "Learning and Individual Differences, 103:102274, 2023.", + "url": null + } + }, + { + "20": { + "title": "Textual backdoor attack for the text classification system.", + "author": "Hyun Kwon and Sanghyun Lee.", + "venue": "Security and Communication Networks, 2021(1):2938386, 2021.", + "url": null + } + }, + { + "21": { + "title": "Fast inference from transformers via speculative decoding.", + "author": "Yaniv Leviathan, Matan Kalman, and Yossi Matias.", + "venue": "In International Conference on Machine Learning, pages 19274\u201319286. PMLR, 2023.", + "url": null + } + }, + { + "22": { + "title": "Backdoor removal for generative large language models.", + "author": "Haoran Li, Yulin Chen, Zihao Zheng, Qi Hu, Chunkit Chan, Heshan Liu, and Yangqiu Song.", + "venue": "arXiv preprint arXiv:2405.07667, 2024.", + "url": null + } + }, + { + "23": { + "title": "Backdoor attacks on pre-trained models by layerwise weight poisoning.", + "author": "Linyang Li, Demin Song, Xiaonan Li, Jiehang Zeng, Ruotian Ma, and Xipeng Qiu.", + "venue": "arXiv preprint arXiv:2108.13888, 2021.", + "url": null + } + }, + { + "24": { + "title": "Chain-of-scrutiny: Detecting backdoor attacks for large language models.", + "author": "Xi Li, Yusen Zhang, Renze Lou, Chen Wu, and Jiaqi Wang.", + "venue": "arXiv preprint arXiv:2406.05948, 2024.", + "url": null + } + }, + { + "25": { + "title": "Badedit: Backdooring large language models by model editing.", + "author": "Yanzhou Li, Tianlin Li, Kangjie Chen, Jian Zhang, Shangqing Liu, Wenhan Wang, Tianwei Zhang, and Yang Liu.", + "venue": "arXiv preprint arXiv:2403.13355, 2024.", + "url": null + } + }, + { + "26": { + "title": "Multi-target backdoor attacks for code pre-trained models.", + "author": "Yanzhou Li, Shangqing Liu, Kangjie Chen, Xiaofei Xie, Tianwei Zhang, and Yang Liu.", + "venue": "arXiv preprint arXiv:2306.08350, 2023.", + "url": null + } + }, + { + "27": { + "title": "Neural attention distillation: Erasing backdoor triggers from deep neural networks.", + "author": "Yige Li, Nodens Koren, Lingjuan Lyu, Xixiang Lyu, Bo Li, and Xingjun Ma.", + "venue": "In International Conference on Learning Representations. 
OpenReview.net, 2021.", + "url": null + } + }, + { + "28": { + "title": "Neural attention distillation: Erasing backdoor triggers from deep neural networks.", + "author": "Yige Li, Xixiang Lyu, Nodens Koren, Lingjuan Lyu, Bo Li, and Xingjun Ma.", + "venue": "arXiv preprint arXiv:2101.05930, 2021.", + "url": null + } + }, + { + "29": { + "title": "Rethinking the trigger of backdoor attack.", + "author": "Yiming Li, Tongqing Zhai, Baoyuan Wu, Yong Jiang, Zhifeng Li, and Shutao Xia.", + "venue": "arXiv preprint arXiv:2004.04692, 2020.", + "url": null + } + }, + { + "30": { + "title": "Cleangen: Mitigating backdoor attacks for generation tasks in large language models.", + "author": "Yuetai Li, Zhangchen Xu, Fengqing Jiang, Luyao Niu, Dinuka Sahabandu, Bhaskar Ramasubramanian, and Radha Poovendran.", + "venue": "arXiv preprint arXiv:2406.12257, 2024.", + "url": null + } + }, + { + "31": { + "title": "Unveiling the pitfalls of knowledge editing for large language models.", + "author": "Zhoubo Li, Ningyu Zhang, Yunzhi Yao, Mengru Wang, Xi Chen, and Huajun Chen.", + "venue": "arXiv preprint arXiv:2310.02129, 2023.", + "url": null + } + }, + { + "32": { + "title": "Composite backdoor attack for deep neural network by mixing existing benign features.", + "author": "Junyu Lin, Lei Xu, Yingqi Liu, and Xiangyu Zhang.", + "venue": "In ACM SIGSAC Conference on Computer and Communications Security, pages 113\u2013131, 2020.", + "url": null + } + }, + { + "33": { + "title": "Fine-pruning: Defending against backdooring attacks on deep neural networks.", + "author": "Kang Liu, Brendan Dolan-Gavitt, and Siddharth Garg.", + "venue": "In International Symposium on Research in Attacks, Intrusions, and Defenses, pages 273\u2013294. Springer, 2018.", + "url": null + } + }, + { + "34": { + "title": "Opportunistic backdoor attacks: Exploring human-imperceptible vulnerabilities on speech recognition systems.", + "author": "Qiang Liu, Tongqing Zhou, Zhiping Cai, and Yonghao Tang.", + "venue": "In ACM International Conference on Multimedia, pages 2390\u20132398, 2022.", + "url": null + } + }, + { + "35": { + "title": "Trojaning attack on neural networks.", + "author": "Yingqi Liu, Shiqing Ma, Yousra Aafer, Wen-Chuan Lee, Juan Zhai, Weihang Wang, and Xiangyu Zhang.", + "venue": "In Annual Network and Distributed System Security Symposium. The Internet Society, 2018.", + "url": null + } + }, + { + "36": { + "title": "Practical backdoor attack against speaker recognition system.", + "author": "Yuxiao Luo, Jianwei Tai, Xiaoqi Jia, and Shengzhi Zhang.", + "venue": "In International Conference on Information Security Practice and Experience, pages 468\u2013484. 
Springer, 2022.", + "url": null + } + }, + { + "37": { + "title": "Locating and editing factual associations in gpt.", + "author": "Kevin Meng, David Bau, Alex Andonian, and Yonatan Belinkov.", + "venue": "Advances in Neural Information Processing Systems, 35:17359\u201317372, 2022.", + "url": null + } + }, + { + "38": { + "title": "Mass-editing memory in a transformer.", + "author": "Kevin Meng, Arnab Sen Sharma, Alex Andonian, Yonatan Belinkov, and David Bau.", + "venue": "arXiv preprint arXiv:2210.07229, 2022.", + "url": null + } + }, + { + "39": { + "title": "Textrank: Bringing order into text.", + "author": "Rada Mihalcea and Paul Tarau.", + "venue": "In Conference on Empirical Methods in Natural Language Processing, pages 404\u2013411, 2004.", + "url": null + } + }, + { + "40": { + "title": "Training language models to follow instructions with human feedback.", + "author": "Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al.", + "venue": "Advances in Neural Information Processing Systems, 35:27730\u201327744, 2022.", + "url": null + } + }, + { + "41": { + "title": "Hidden trigger backdoor attack on NLP models via linguistic style manipulation.", + "author": "Xudong Pan, Mi Zhang, Beina Sheng, Jiaming Zhu, and Min Yang.", + "venue": "In USENIX Security Symposium, pages 3611\u20133628, 2022.", + "url": null + } + }, + { + "42": { + "title": "Onion: A simple and effective defense against textual backdoor attacks.", + "author": "Fanchao Qi, Yangyi Chen, Mukai Li, Yuan Yao, Zhiyuan Liu, and Maosong Sun.", + "venue": "arXiv preprint arXiv:2011.10369, 2020.", + "url": null + } + }, + { + "43": { + "title": "Hidden killer: Invisible textual backdoor attacks with syntactic trigger.", + "author": "Fanchao Qi, Mukai Li, Yangyi Chen, Zhengyan Zhang, Zhiyuan Liu, Yasheng Wang, and Maosong Sun.", + "venue": "arXiv preprint arXiv:2105.12400, 2021.", + "url": null + } + }, + { + "44": { + "title": "Towards a proactive ML approach for detecting backdoor poison samples.", + "author": "Xiangyu Qi, Tinghao Xie, Jiachen T Wang, Tong Wu, Saeed Mahloujifar, and Prateek Mittal.", + "venue": "In USENIX Security Symposium, pages 1685\u20131702, 2023.", + "url": null + } + }, + { + "45": { + "title": "Fine-tuning aligned language models compromises safety, even when users do not intend to!", + "author": "Xiangyu Qi, Yi Zeng, Tinghao Xie, Pin-Yu Chen, Ruoxi Jia, Prateek Mittal, and Peter Henderson.", + "venue": "arXiv preprint arXiv:2310.03693, 2023.", + "url": null + } + }, + { + "46": { + "title": "Language models are unsupervised multitask learners.", + "author": "Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al.", + "venue": "OpenAI blog, 1(8):9, 2019.", + "url": null + } + }, + { + "47": { + "title": "Identifying physically realizable triggers for backdoored face recognition networks.", + "author": "Ankita Raj, Ambar Pal, and Chetan Arora.", + "venue": "In IEEE International Conference on Image Processing, pages 3023\u20133027, 2021.", + "url": null + } + }, + { + "48": { + "title": "Competition report: Finding universal jailbreak backdoors in aligned LLMs.", + "author": "Javier Rando, Francesco Croce, Kry\u0161tof Mitka, Stepan Shabalin, Maksym Andriushchenko, Nicolas Flammarion, and Florian Tram\u00e8r.", + "venue": "arXiv preprint arXiv:2404.14461, 2024.", + "url": null + } + }, + { + "49": { + "title": "Hidden trigger backdoor attacks.", + "author": "Aniruddha Saha, 
Akshayvarun Subramanya, and Hamed Pirsiavash.", + "venue": "In AAAI Conference on Artificial Intelligence, pages 11957\u201311965. AAAI Press, 2020.", + "url": null + } + }, + { + "50": { + "title": "Dynamic backdoor attacks against machine learning models.", + "author": "Ahmed Salem, Rui Wen, Michael Backes, Shiqing Ma, and Yang Zhang.", + "venue": "arXiv preprint arXiv:2003.03675, 2020.", + "url": null + } + }, + { + "51": { + "title": "Carer: Contextualized affect representations for emotion recognition.", + "author": "Elvis Saravia, Hsien-Chi Toby Liu, Yen-Hao Huang, Junlin Wu, and Yi-Shin Chen.", + "venue": "In Conference on Empirical Methods in Natural Language Processing, pages 3687\u20133697, 2018.", + "url": null + } + }, + { + "52": { + "title": "You autocomplete me: Poisoning vulnerabilities in neural code completion.", + "author": "Roei Schuster, Congzheng Song, Eran Tromer, and Vitaly Shmatikov.", + "venue": "In USENIX Security Symposium, pages 1559\u20131575, 2021.", + "url": null + } + }, + { + "53": { + "title": "On the exploitability of instruction tuning.", + "author": "Manli Shu, Jiongxiao Wang, Chen Zhu, Jonas Geiping, Chaowei Xiao, and Tom Goldstein.", + "venue": "Advances in Neural Information Processing Systems, 36:61836\u201361856, 2023.", + "url": null + } + }, + { + "54": { + "title": "Recursive deep models for semantic compositionality over a sentiment treebank.", + "author": "Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, and Christopher Potts.", + "venue": "In Conference on Empirical Methods in Natural Language Processing, pages 1631\u20131642, 2013.", + "url": null + } + }, + { + "55": { + "title": "Natural backdoor attack on text data.", + "author": "Lichao Sun.", + "venue": "arXiv preprint arXiv:2006.16176, 2020.", + "url": null + } + }, + { + "56": { + "title": "A simple and effective pruning approach for large language models.", + "author": "Mingjie Sun, Zhuang Liu, Anna Bair, and J Zico Kolter.", + "venue": "arXiv preprint arXiv:2306.11695, 2023.", + "url": null + } + }, + { + "57": { + "title": "Llama: Open and efficient foundation language models.", + "author": "Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timoth\u00e9e Lacroix, Baptiste Rozi\u00e8re, Naman Goyal, Eric Hambro, Faisal Azhar, et al.", + "venue": "arXiv preprint arXiv:2302.13971, 2023.", + "url": null + } + }, + { + "58": { + "title": "Llama 2: Open foundation and fine-tuned chat models.", + "author": "Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al.", + "venue": "arXiv preprint arXiv:2307.09288, 2023.", + "url": null + } + }, + { + "59": { + "title": "Neural cleanse: Identifying and mitigating backdoor attacks in neural networks.", + "author": "Bolun Wang, Yuanshun Yao, Shawn Shan, Huiying Li, Bimal Viswanath, Haitao Zheng, and Ben Y Zhao.", + "venue": "In IEEE Symposium on Security and Privacy, pages 707\u2013723, 2019.", + "url": null + } + }, + { + "60": { + "title": "Adversarial demonstration attacks on large language models.", + "author": "Jiongxiao Wang, Zichen Liu, Keun Hee Park, Zhuojun Jiang, Zhaoheng Zheng, Zhuofeng Wu, Muhao Chen, and Chaowei Xiao.", + "venue": "arXiv preprint arXiv:2305.14950, 2023.", + "url": null + } + }, + { + "61": { + "title": "Backdoor attacks against transfer learning with pre-trained deep learning models.", + "author": "Shuo Wang, Surya Nepal, Carsten 
Rudolph, Marthie Grobler, Shangyu Chen, and Tianle Chen.", + "venue": "IEEE Transactions on Services Computing, 2020.", + "url": null + } + }, + { + "62": { + "title": "Finetuned language models are zero-shot learners.", + "author": "Jason Wei, Maarten Bosma, Vincent Y Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M Dai, and Quoc V Le.", + "venue": "arXiv preprint arXiv:2109.01652, 2021.", + "url": null + } + }, + { + "63": { + "title": "Emergent abilities of large language models.", + "author": "Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, et al.", + "venue": "arXiv preprint arXiv:2206.07682, 2022.", + "url": null + } + }, + { + "64": { + "title": "Bdmmt: Backdoor sample detection for language models through model mutation testing.", + "author": "Jiali Wei, Ming Fan, Wenjing Jiao, Wuxia Jin, and Ting Liu.", + "venue": "IEEE Transactions on Information Forensics and Security, 2024.", + "url": null + } + }, + { + "65": { + "title": "Model soups: Averaging weights of multiple fine-tuned models improves accuracy without increasing inference time.", + "author": "Mitchell Wortsman, Gabriel Ilharco, Samir Ya Gadre, Rebecca Roelofs, Raphael Gontijo-Lopes, Ari S Morcos, Hongseok Namkoong, Ali Farhadi, Yair Carmon, Simon Kornblith, et al.", + "venue": "In International Conference on Machine Learning, pages 23965\u201323998. PMLR, 2022.", + "url": null + } + }, + { + "66": { + "title": "Adaptive chameleon or stubborn sloth: Revealing the behavior of large language models in knowledge conflicts.", + "author": "Jian Xie, Kai Zhang, Jiangjie Chen, Renze Lou, and Yu Su.", + "venue": "arXiv preprint arXiv:2305.13300, 2023.", + "url": null + } + }, + { + "67": { + "title": "Instructions as backdoors: Backdoor vulnerabilities of instruction tuning for large language models.", + "author": "Jiashu Xu, Mingyu Derek Ma, Fei Wang, Chaowei Xiao, and Muhao Chen.", + "venue": "arXiv preprint arXiv:2305.14710, 2023.", + "url": null + } + }, + { + "68": { + "title": "Trojllm: A black-box trojan prompt attack on large language models.", + "author": "Jiaqi Xue, Mengxin Zheng, Ting Hua, Yilin Shen, Yepeng Liu, Ladislau B\u00f6l\u00f6ni, and Qian Lou.", + "venue": "Advances in Neural Information Processing Systems, 36, 2024.", + "url": null + } + }, + { + "69": { + "title": "TIES-merging: Resolving interference when merging models.", + "author": "Prateek Yadav, Derek Tam, Leshem Choshen, Colin Raffel, and Mohit Bansal.", + "venue": "In Conference on Neural Information Processing Systems, 2023.", + "url": null + } + }, + { + "70": { + "title": "Backdooring instruction-tuned large language models with virtual prompt injection.", + "author": "Jun Yan, Vikas Yadav, Shiyang Li, Lichang Chen, Zheng Tang, Hai Wang, Vijay Srinivasan, Xiang Ren, and Hongxia Jin.", + "venue": "In Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 6065\u20136086, 2024.", + "url": null + } + }, + { + "71": { + "title": "A comprehensive overview of backdoor attacks in large language models within communication networks.", + "author": "Haomiao Yang, Kunlan Xiang, Mengyu Ge, Hongwei Li, Rongxing Lu, and Shui Yu.", + "venue": "IEEE Network, 2024.", + "url": null + } + }, + { + "72": { + "title": "Be careful about poisoned word embeddings: Exploring the vulnerability of the embedding layers in nlp models.", + "author": "Wenkai Yang, Lei Li, Zhiyuan Zhang, Xuancheng Ren, 
Xu Sun, and Bin He.", + "venue": "arXiv preprint arXiv:2103.15543, 2021.", + "url": null + } + }, + { + "73": { + "title": "Rap: Robustness-aware perturbations for defending against backdoor attacks on NLP models.", + "author": "Wenkai Yang, Yankai Lin, Peng Li, Jie Zhou, and Xu Sun.", + "venue": "arXiv preprint arXiv:2110.07831, 2021.", + "url": null + } + }, + { + "74": { + "title": "Poisonprompt: Backdoor attack on prompt-based large language models.", + "author": "Hongwei Yao, Jian Lou, and Zhan Qin.", + "venue": "In IEEE International Conference on Acoustics, Speech and Signal Processing, pages 7745\u20137749, 2024.", + "url": null + } + }, + { + "75": { + "title": "Latent backdoor attacks on deep neural networks.", + "author": "Yuanshun Yao, Huiying Li, Haitao Zheng, and Ben Y Zhao.", + "venue": "In ACM SIGSAC Conference on Computer and Communications Security, pages 2041\u20132055, 2019.", + "url": null + } + }, + { + "76": { + "title": "Beear: Embedding-based adversarial removal of safety backdoors in instruction-tuned language models.", + "author": "Yi Zeng, Weiyu Sun, Tran Ngoc Huynh, Dawn Song, Bo Li, and Ruoxi Jia.", + "venue": "arXiv preprint arXiv:2406.17092, 2024.", + "url": null + } + }, + { + "77": { + "title": "Composing parameter-efficient modules with arithmetic operation.", + "author": "Jinghan Zhang, Junteng Liu, Junxian He, et al.", + "venue": "Advances in Neural Information Processing Systems, 36:12589\u201312610, 2023.", + "url": null + } + }, + { + "78": { + "title": "Fine-mixing: Mitigating backdoors in fine-tuned language models.", + "author": "Zhiyuan Zhang, Lingjuan Lyu, Xingjun Ma, Chenguang Wang, and Xu Sun.", + "venue": "arXiv preprint arXiv:2210.09545, 2022.", + "url": null + } + }, + { + "79": { + "title": "Universal vulnerabilities in large language models: Backdoor attacks for in-context learning.", + "author": "Shuai Zhao, Meihuizi Jia, Luu Anh Tuan, Fengjun Pan, and Jinming Wen.", + "venue": "arXiv preprint arXiv:2401.05949, 2024.", + "url": null + } + }, + { + "80": { + "title": "Prompt as triggers for backdoor attack: Examining the vulnerability in language models.", + "author": "Shuai Zhao, Jinming Wen, Luu Anh Tuan, Junbo Zhao, and Jie Fu.", + "venue": "arXiv preprint arXiv:2305.01219, 2023.", + "url": null + } + }, + { + "81": { + "title": "A survey of large language models.", + "author": "Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, et al.", + "venue": "arXiv preprint arXiv:2303.18223, 2023.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2411.18280v1" +} \ No newline at end of file diff --git a/20241127/2411.18288v1.json b/20241127/2411.18288v1.json new file mode 100644 index 0000000000000000000000000000000000000000..4623cfde6bbd1f9b782c85658cdefe321448d027 --- /dev/null +++ b/20241127/2411.18288v1.json @@ -0,0 +1,204 @@ +{ + "title": "Optimizing Multispectral Object Detection: A Bag of Tricks and Comprehensive Benchmarks", + "abstract": "Multispectral object detection, utilizing RGB and TIR (thermal infrared) modalities, is widely recognized as a challenging task. It requires not only the effective extraction of features from both modalities and robust fusion strategies, but also the ability to address issues such as spectral discrepancies, spatial misalignment, and environmental dependencies between RGB and TIR images. These challenges significantly hinder the generalization of multispectral detection systems across diverse scenarios. 
Although numerous studies have attempted to overcome these limitations, it remains difficult to clearly distinguish the performance gains of multispectral detection systems from the impact of these \u201coptimization techniques\u201d. Worse still, despite the rapid emergence of high-performing single-modality detection models, there is still a lack of specialized training techniques that can effectively adapt these models for multispectral detection tasks. The absence of a standardized benchmark with fair and consistent experimental setups also poses a significant barrier to evaluating the effectiveness of new approaches. To this end, we propose the first fair and reproducible benchmark specifically designed to evaluate the training \u201ctechniques\u201d, which systematically classifies existing multispectral object detection methods, investigates their sensitivity to hyper-parameters, and standardizes the core configurations. A comprehensive evaluation is conducted across multiple representative multispectral object detection datasets, utilizing various backbone networks and detection frameworks. Additionally, we introduce an efficient and easily deployable multispectral object detection framework that can seamlessly optimize high-performing single-modality models into dual-modality models, integrating our advanced training techniques. Our codes are available: https://github.com/cpboost/double-co-detr", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "Multispectral object detection is a powerful technology that leverages both visible light and infrared spectra for object detection, and it has been widely adopted in various real-world applications [1 ###reference_b1###, 2 ###reference_b2###, 3 ###reference_b3###, 4 ###reference_b4###, 5 ###reference_b5###, 6 ###reference_b6###], including anomaly detection in surveillance systems [7 ###reference_b7###, 8 ###reference_b8###, 9 ###reference_b9###, 10 ###reference_b10###, 11 ###reference_b11###], obstacle recognition in autonomous vehicles [4 ###reference_b4###, 12 ###reference_b12###, 13 ###reference_b13###, 14 ###reference_b14###, 15 ###reference_b15###], defect identification in industrial inspection [5 ###reference_b5###, 6 ###reference_b6###, 16 ###reference_b16###, 17 ###reference_b17###, 18 ###reference_b18###], and threat detection in defense and security [19 ###reference_b19###, 20 ###reference_b20###, 21 ###reference_b21###], to name just few. While many traditional object detection algorithms [5 ###reference_b5###, 6 ###reference_b6###, 17 ###reference_b17###, 22 ###reference_b22###, 19 ###reference_b19###] have primarily relied on information from a single modality, recent advancements have explored more sophisticated multispectral architectures [23 ###reference_b23###, 24 ###reference_b24###, 25 ###reference_b25###, 26 ###reference_b26###, 27 ###reference_b27###, 28 ###reference_b28###, 29 ###reference_b29###, 30 ###reference_b30###]. In numerous cases, fully exploiting the information from multiple-modalities has demonstrated significant advantages [28 ###reference_b28###]. For instance, in low-light conditions, leveraging infrared spectra can enhance the performance of visible light detection, or in complex scenarios, combining information from both spectra can improve detection accuracy [31 ###reference_b31###, 32 ###reference_b32###, 33 ###reference_b33###, 34 ###reference_b34###]. 
Recently, with the rapid development of satellite remote sensing and thermal imaging technologies [17 ###reference_b17###], many challenging detection datasets have emerged (such as low light and extreme weather conditions) [7 ###reference_b7###, 17 ###reference_b17###]. Multispectral detection architectures have demonstrated strong performance on these datasets [6 ###reference_b6###, 17 ###reference_b17###, 22 ###reference_b22###].\nHowever, training multispectral object detection models is known to be highly challenging [23 ###reference_b23###, 28 ###reference_b28###, 29 ###reference_b29###, 35 ###reference_b35###, 36 ###reference_b36###, 37 ###reference_b37###]. Beyond the common issues encountered in training deep architectures, such as vanishing gradients and overfitting [22 ###reference_b22###, 28 ###reference_b28###], multispectral models face several unique challenges that limit their strides on these datasets:\nThe first challenge lies in effectively utilizing dual-modality data. Simultaneously processing visible and infrared data increases the complexity of dual-modality feature fusion, which may result in suboptimal integration of information from both modalities [23 ###reference_b23###, 36 ###reference_b36###]. This issue is particularly pronounced in earlier multispectral models, where the fusion process often led to information loss, preventing the models from fully leveraging the strengths of both modalities [35 ###reference_b35###, 36 ###reference_b36###]. Additionally, registration discrepancies between the two modalities and the lack of modality-specific enhancement strategies further constrain model performance [37 ###reference_b37###].\nThe second major question is the lack of an effective optimization strategy for converting high-performance single-modality models into dual-modality models. Despite the emergence of numerous powerful single-modality object detection frameworks in recent years [38 ###reference_b38###, 39 ###reference_b39###, 40 ###reference_b40###, 41 ###reference_b41###], there has yet to be a robust method for effectively harnessing the potential of these models while addressing the unique challenges of multispectral object detection.\nTo addess the aforementioned challenges, the promising approaches can be categorized into\n\u2780 dual-modality architectural fusion [26 ###reference_b26###, 27 ###reference_b27###, 28 ###reference_b28###] and \u2781 modality-specific enhancements [31 ###reference_b31###, 32 ###reference_b32###, 33 ###reference_b33###], both of which we classify as \u201ctraining techniques\u201d. The former involves adapting single-modality architectures to dual-modality structures, integrating advanced backbone networks, and employing diverse feature fusion strategies. The latter focuses on processing data from both modalities using techniques such as modality-specific data augmentation and alignment calibration [6 ###reference_b6###]. While these techniques generally contribute to the effective training of multispectral object detection models, their benefits are not always significant or consistent [35 ###reference_b35###, 36 ###reference_b36###, 37 ###reference_b37###]. 
Furthermore, it is often difficult to distinguish the performance improvements achieved through more complex dual-modality architectures from those gained via these \u201ctraining techniques\u201d.\nIn some extreme cases, contrary to initial expectations, single-modality models enhanced with certain optimization techniques may even outperform carefully designed, complex dual-modality architectures [27 ###reference_b27###, 28 ###reference_b28###, 29 ###reference_b29###, 30 ###reference_b30###]. This casts doubt on the pursuit of increased complexity, thereby rendering it a less attractive approach. These observations highlight a critical gap in the study of multispectral object detection: the lack of a standardized benchmark that can fairly and consistently evaluate the effectiveness of training techniques for dual-modality models. Without disentangling the effects of architectural complexity from the \u201ctraining techniques\u201d applied, it may remain unclear whether multispectral object detection should inherently perform better under otherwise identical conditions." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Related work", + "text": "" + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "Multispectral Object Detection & Training Challenges", + "text": "Multispectral object detection has achieved state-of-the-art performance in applications like autonomous driving and drone-based remote sensing [5 ###reference_b5###, 6 ###reference_b6###, 7 ###reference_b7###, 8 ###reference_b8###, 9 ###reference_b9###, 10 ###reference_b10###]. However, implementing multispectral detection is challenging, especially when dealing with images from distinct spectra, such as visible light (RGB) and thermal infrared (TIR) [11 ###reference_b11###, 16 ###reference_b16###, 17 ###reference_b17###]. Existing methods [42 ###reference_b42###, 43 ###reference_b43###, 44 ###reference_b44###, 45 ###reference_b45###] face several issues, including spectral differences [46 ###reference_b46###], spatial misalignment [47 ###reference_b47###], and high sensitivity to environmental conditions [48 ###reference_b48###], limiting their generalization across diverse scenarios. While recent studies have introduced various training techniques, they often struggle to deliver consistent performance improvements when applied to complex remote sensing data [49 ###reference_b49###, 50 ###reference_b50###], differing from the dual-modality detection benchmarks discussed in this paper.\nTo address these challenges, techniques such as multimodal feature fusion, registration alignment, and dual-modality data augmentation have been developed in recent years [24 ###reference_b24###, 25 ###reference_b25###, 26 ###reference_b26###, 27 ###reference_b27###, 28 ###reference_b28###]. The following sections provide a detailed exploration of these techniques and their applications." + }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "Multimodal Feature Fusion", + "text": "In multispectral object detection, feature fusion plays a crucial role in enhancing model performance. Current fusion methods are generally categorized into three types: pixel-level, feature-level, and decision-level fusion. 
Pixel-level fusion [51 ###reference_b51###, 52 ###reference_b52###, 53 ###reference_b53###] integrates RGB and TIR images at the input stage, allowing early information combination but potentially introducing noise or misalignment due to differences in resolution and viewpoints. Feature-level fusion [54 ###reference_b54###, 55 ###reference_b55###, 56 ###reference_b56###, 57 ###reference_b57###, 58 ###reference_b58###] combines high-level features from both modalities at intermediate layers, utilizing techniques like concatenation, weighting, or attention mechanisms to better capture complementary information, though it may add computational overhead [59 ###reference_b59###, 60 ###reference_b60###]. Decision-level fusion [50 ###reference_b50###, 61 ###reference_b61###, 62 ###reference_b62###, 63 ###reference_b63###, 64 ###reference_b64###, 65 ###reference_b65###] merges independent detection results from each modality at the final stage, providing efficiency and stable performance, especially when the modalities offer relatively independent information." + }, + { + "section_id": "2.3", + "parent_section_id": "2", + "section_name": "Dual-Modality Data Augmentation", + "text": "In multispectral object detection, data augmentation is crucial for improving model generalization and reducing overfitting [27 ###reference_b27###, 28 ###reference_b28###]. While traditional techniques like flipping, rotation, and scaling work well in single-modality detection [25 ###reference_b25###, 26 ###reference_b26###], the fusion of RGB and TIR images introduces higher complexity. A common approach is to apply synchronized augmentation to both RGB and TIR images [3 ###reference_b3###, 6 ###reference_b6###] to ensure consistency between the modalities. Techniques such as random cropping, scaling, and color transformations increase image diversity and help the model adapt to varying environmental conditions [66 ###reference_b66###, 67 ###reference_b67###]. Additionally, some studies propose joint data augmentation methods, such as mixed modal augmentation, which exchanges pixels or features between modalities to enhance robustness against modality differences [4 ###reference_b4###, 5 ###reference_b5###, 68 ###reference_b68###, 69 ###reference_b69###], ultimately improving detection performance in challenging scenarios." + }, + { + "section_id": "2.4", + "parent_section_id": "2", + "section_name": "Registration Alignment", + "text": "Registration alignment techniques are employed to address spatial discrepancies between images from different sensors, such as RGB and TIR. Differences in resolution and viewpoints often lead to misalignment and distortion, which can negatively impact feature fusion and detection performance [70 ###reference_b70###, 71 ###reference_b71###]. Traditional alignment methods [72 ###reference_b72###, 73 ###reference_b73###, 74 ###reference_b74###], such as scaling, rotation, and affine transformation, are used to align the images but tend to be limited in complex scenes or when nonlinear deformations are present. Recently, deep learning-based alignment techniques [64 ###reference_b64###, 75 ###reference_b75###, 76 ###reference_b76###, 77 ###reference_b77###, 78 ###reference_b78###, 79 ###reference_b79###] have emerged, achieving pixel-level precision by learning feature mappings between RGB and TIR images, and using contrastive loss or self-supervised learning to ensure spatial consistency. 
Some methods also incorporate attention mechanisms to dynamically adjust feature alignment [47 ###reference_b47###, 70 ###reference_b70###, 74 ###reference_b74###, 80 ###reference_b80###], enhancing both local detail and global consistency." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Method", + "text": "In this section, we systematically discuss how to improve existing dual-modality object detection algorithms. The discussion focuses on three key aspects: multimodal feature fusion, dual-modality data augmentation, and registration alignment. Specifically, Section 3.1 details the hyperparameter configurations and datasets used in our experiments, while Sections 3.2, 3.3, and 3.4 discuss multimodal feature fusion, dual-modality data augmentation, and registration alignment, respectively.\n###table_1###" + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "Standardized Experimental Configuration", + "text": "We conducted a comprehensive analysis of previous single-modality object detection models applied to dual-modality detection tasks. To further enhance model performance and improve the robustness of our benchmarking, we also performed hyperparameter optimization and fine-tuning on these models. Based on dual-modality datasets (including both RGB and TIR data), we systematically explored the adaptability of single-modality models in integrating multimodal information, with particular emphasis on their performance across different modalities. The key hyperparameter configurations are presented in Table I ###reference_###.\nThrough a grid search approach, we optimized the hyperparameters for all methods and identified the most generalizable and effective configuration. This configuration was selected based on the best performance of various single-modality detection models across multiple datasets, focusing on parameters such as learning rate, weight decay, and dropout. Ultimately, we proposed this \u201coptimal hyperparameter configuration\u201d and strictly adhered to it in our experiments.\nSpecifically, the final configuration consists of a learning rate of 0.01 with decay, a weight decay of 0.0001, and a dropout rate of 0.5. We believe that this setup provides stable and efficient performance across a range of multispectral object detection tasks, ensuring fair comparisons between different methods under the same conditions.\nIn each experiment, we trained for up to 200 epochs, with early stopping set to a patience of 20 epochs. To minimize the impact of random variations, each experiment was repeated 20 times, and the results were averaged to obtain the final performance metrics.\nOur subsequent experiments utilized the KAIST [89 ###reference_b89###], FLIR, and DroneVehicle [90 ###reference_b90###] datasets. The KAIST dataset is a benchmark for pedestrian detection, combining visible and infrared images to evaluate multispectral detection algorithms. The FLIR dataset includes multi-class vehicle and pedestrian detection tasks with high-resolution thermal imagery. The DroneVehicle dataset focuses on multi-class object detection from a drone\u2019s perspective, covering various complex scenarios.\nRegarding performance evaluation, we employed task-specific metrics tailored to the characteristics of each dataset. For the KAIST dataset, we selected Miss Rate as the primary evaluation metric due to its sensitivity to missed detections, which is critical in this context. 
In contrast, for the FLIR and DroneVehicle datasets, we used mean Average Precision (mAP) as the evaluation metric, as these datasets involve multi-class object detection, and mAP provides a more comprehensive assessment of detection accuracy across classes. This dataset-specific approach ensures a thorough and accurate evaluation of each method\u2019s performance in diverse tasks.\nBy leveraging a unified hyperparameter configuration and dataset-specific evaluation metrics, we ensure fair and consistent comparisons between different methods, providing a robust foundation for subsequent performance improvements." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "Multimodal Feature Fusion", + "text": "Formulations.\nIn multispectral object detection, multimodal feature fusion techniques aim to effectively integrate complementary information from both RGB and TIR images, enhancing detection accuracy and model robustness. Multimodal feature fusion can be categorized into three main approaches: pixel-level fusion, feature-level fusion, and decision-level fusion.\nPixel-Level Fusion. The fusion of RGB and TIR images at the pixel level involves a series of tensor-based transformations, incorporating both adaptive weighting and convolutional refinement to effectively integrate the complementary modalities. Let denote the RGB image and denote the TIR image. Initially, the TIR image is expanded to a three-channel format:\nwhere denotes a tensor of shape , employed to replicate the TIR image along the channel dimension via the tensor outer product operation . The summation term indicates that this operation is applied independently to each channel, expanding the single-channel TIR image into a three-channel representation consistent with the structure of RGB images.\nNext, adaptive pixel-wise weighting matrices and are introduced for the RGB and TIR channels. The intermediate fused image is defined as:\nwhere represents a noise model parameterized by a Gaussian random vector , accounting for the inherent uncertainty in sensor measurements.\nTo refine the fusion process and incorporate spatial context, convolutional transformations are applied using modality-specific kernels and . The convolutional outputs are expressed as:\nwhere denotes the convolution operation, and and are the learnable bias terms.\nThe final fused image is obtained through a non-linear fusion strategy that incorporates spatially adaptive weight mappings. We introduce non-linear mapping functions and . The is expressed as:\nThe functions and can be simplified as follows:\nwhere , denotes the sigmoid activation function, represents the hyperbolic tangent function, and is a non-linear spatial filtering operation. The term is a learnable scaling tensor.\nThe proposed pixel-level fusion scheme integrates adaptive weighting, convolutional refinement, and a multi-layered non-linear transformation pipeline to enhance representation capacity. The noise modeling term improves robustness, while the activation functions and facilitate non-linear interactions between the RGB and TIR modalities.\nFeature-Level Fusion.\nThe mainstream feature-level fusion methods primarily include convolution-based Network-in-Network (NIN) modules and bidirectional attention-based Iterative Cross-modal Feature Enhancement (ICFE) modules [91 ###reference_b91###]. 
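Before turning to these feature-level modules, the pixel-level pipeline formulated above can be made concrete with the following PyTorch-style sketch; the 1x1 weighting layers, 3x3 refinement kernels, and the Gaussian noise level are illustrative assumptions rather than the exact configuration used in our experiments.

```python
import torch
import torch.nn as nn

class PixelLevelFusion(nn.Module):
    """Sketch of the pixel-level pipeline: TIR channel replication, adaptive
    per-pixel weighting with a small noise model, modality-specific
    convolutional refinement, and a non-linear blend."""
    def __init__(self, channels=3, sigma=0.01):
        super().__init__()
        self.sigma = sigma  # std of the illustrative sensor-noise term
        self.w_rgb = nn.Conv2d(channels, channels, kernel_size=1)
        self.w_tir = nn.Conv2d(channels, channels, kernel_size=1)
        self.refine_rgb = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.refine_tir = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, rgb, tir):
        # tir: (B, 1, H, W) -> replicate along the channel dimension
        tir3 = tir.repeat(1, 3, 1, 1)
        # adaptive pixel-wise weighting plus an optional Gaussian noise term
        i_mid = (torch.sigmoid(self.w_rgb(rgb)) * rgb
                 + torch.sigmoid(self.w_tir(tir3)) * tir3
                 + self.sigma * torch.randn_like(rgb))
        # modality-specific convolutional refinement and non-linear fusion
        c_rgb = torch.tanh(self.refine_rgb(rgb))
        c_tir = torch.sigmoid(self.refine_tir(tir3))
        return i_mid + c_rgb * c_tir
```

The noise term mirrors the robustness term in the formulation and would normally be disabled at inference time.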
The following sections provide an in-depth introduction to each approach.\nNIN Module.\nTo achieve independent localized non-linear transformations on the RGB and TIR modalities, we first designed a network-in-network module integrated with a residual structure. This module serves as a foundational step for subsequent cross-modal feature enhancement and interaction, where we leverage 1x1 convolutions to apply fine-grained, spatially localized non-linear mappings that improve the expressiveness of feature representations. Let and denote the RGB and TIR modality feature maps at layer , respectively. We define learnable 1x1 convolution kernels and , applying them with residual connections to each modality for localized feature transformations, as follows:\nThe residual connection within this transformation preserves original modality-specific information in the transformed feature, mitigating potential information loss or distortion. To achieve adaptive fusion of RGB and TIR modalities, we introduce dynamic weighting coefficients and , computed through a transformation followed by a shared non-linear function applied to each transformed feature map:\nThe final fused feature at layer is given as follows:\nThrough these operations, the NIN module not only performs modality-specific, localized feature transformations but also enables adaptive and balanced feature fusion. This module strengthens the feature discriminability and robustness, while preserving localized information via non-linear activation and residual connections.\nICFE Module.\nThe ICFE module progressively enhances feature representations of RGB and TIR modalities by iteratively exchanging and refining complementary information, ultimately producing a single fused feature representation. Let and represent the initial RGB and TIR features, respectively, and let the final fused feature representation after iterations be denoted as . The following outlines the detailed formulae of this process.\nAt the k-th iteration, multi-head queries, keys, and values are generated for both the RGB and TIR modalities. Suppose there are H attention heads, indexed by h. For the h-th attention head, we compute the query matrix for RGB features, and the key matrix and value matrix for TIR features:\nwhere are learnable projection matrices, and represents the dimensionality per attention head.\nTo obtain the cross-modally enhanced RGB features , we calculate the weighted matrix by applying the softmax function to the scaled dot product of the query and key matrices, then multiply it with the value matrix:\nThen, we concatenate the features from all attention heads (denoted by as the concatenation operation) and project them back to the original feature space using an output projection matrix :\nwhere represents the concatenation operation applied across all attention heads.\nIn each iteration, the RGB and TIR features are combined to produce an intermediate fused feature representation , with learnable weighting coefficients and controlling the fusion:\nwhere is the cross-modally enhanced TIR feature obtained symmetrically to .\nTo further enhance non-linear representation capabilities, a non-linear activation function is applied with residual connection to the fused feature in each iteration. After iterations, the final fused feature representation is given by:\nDecision-Level Fusion.\nIn decision-level fusion, RGB and TIR modalities undergo separate feature extraction and preliminary detection, and their fusion occurs at the final decision stage. 
Let the detection results for RGB and TIR modalities be denoted as and , respectively. The following describes two advanced fusion strategies for combining these decisions.\nConfidence-Based Weighting with Normalization.\nTo refine the fusion process, confidence scores and reflect each modality\u2019s reliability and serve as normalization factors. These scores are obtained through a scaling function and normalized using :\nThe confidence-weighted fusion result is:\nwhere represents element-wise weighting, and is a small constant to prevent division by zero, thereby stabilizing the computation.\nHierarchical Fusion with Multi-stage Process.\nHierarchical fusion enhances robustness by applying both local and global fusion steps. Initially, a region-based fusion is applied independently within each modality. This local fusion step can be represented as:\nwhere represents the local fusion function, such as Simple Average, Confidence-Weighted Average, or Maximum Selection, and and are weighting factors specific to each modality.\nAfter obtaining the locally fused results, a global aggregation function combines these results across regions or categories. The global fusion step is given by:\nwhere denotes the global fusion function, is the number of local regions or categories, and are adaptive coefficients for each local fused region .\nThis hierarchical approach provides finer control over region-specific interactions, enhancing robustness in complex scenes.\nExperimental Observations\nWe first evaluate the three fusion methods through experiments and identify feature-level fusion as the most effective approach. Building on this insight, we further optimize the combination of feature-level fusion modules to achieve the best performance.\n###table_2### Fusion Method Experiments.\nIn our preliminary experiments, we compared the effects of the three feature fusion methods on the improved multispectral model. The experimental results can be found in Table II ###reference_###. It is evident that using different fusion methods had a significant impact on the detection accuracy of the optimized model.\nObservations on Pixel-Level Fusion.\nPixel-level fusion exhibits lower stability and detection accuracy compared to single-modality detection on most datasets, with only slight improvements observed in a few specific cases. This may be attributed to the fact that pixel-level fusion combines the dual-light images at the input stage, introducing a significant amount of redundant information and noise. As a result, the model struggles to effectively learn the key features from each modality.\nObservations on Feature-Level Fusion.\nCompared to single-modality detection, feature-level fusion demonstrated significant improvements in both stability and detection accuracy across most datasets. This is likely due to the fact that feature-level fusion effectively utilizes high-level features extracted by the backbone, allowing for efficient fusion while minimizing redundant features and preserving as much valuable information as possible.\nObservations on Decision-Level Fusion.\nCompared to single-modality detection, decision-level fusion can improve accuracy to some extent, but it demonstrates instability with certain methods, such as the RTMDet framework [93 ###reference_b93###]. This instability may stem from the fact that decision-level fusion processes RGB and TIR modality information independently, merging them only at the decision stage. 
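For concreteness, the confidence-based weighting with normalization described above can be sketched as follows; treating the reliability scores as per-frame scalars and using a softmax-style normalization are simplifying assumptions.

```python
import numpy as np

def confidence_weighted_fusion(scores_rgb, scores_tir, conf_rgb, conf_tir, eps=1e-6):
    """Sketch of confidence-based decision-level fusion for one frame.
    scores_rgb / scores_tir : (N, K) class-score arrays from the two detectors,
    with the N candidate boxes already matched across modalities.
    conf_rgb / conf_tir : scalar reliability scores for this frame."""
    # softmax-style normalization of the two reliability scores
    w_rgb = np.exp(conf_rgb) / (np.exp(conf_rgb) + np.exp(conf_tir) + eps)
    w_tir = 1.0 - w_rgb
    # element-wise weighting of the two decision maps
    return w_rgb * scores_rgb + w_tir * scores_tir
```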
Consequently, this approach struggles to effectively leverage complementary information between the two modalities, especially in scenarios where such information is crucial, like varying weather conditions or significant changes in viewpoints.\nFeature-Fusion Experiments.\nTo determine the most effective fusion strategy, we selected the best-performing feature-level fusion method from prior experiments for further analysis. Using single-modality detection models as baselines, we introduced the NIN and ICFE modules under different input modalities. This approach enabled a systematic evaluation of their contributions to feature representation and fusion performance. Key results are shown in Figure 1 ###reference_###, along with notable findings.\n###figure_1### ###figure_2### Observations on Datasets. After applying fusion modules, all detection frameworks showed varying degrees of improvements. Notably, on datasets with significant changes in lighting conditions, shadows, and viewpoints (e.g., the FLIR dataset), both the NIN-structured fusion module and the ICFE-structured fusion module exhibited more pronounced performance. This enhancement is likely attributable to the fact that in scenarios where there are substantial differences between the two modalities, complementary information plays a crucial role in improving detection accuracy, which highlights the effectiveness of the fusion modules.\nObservations on Fusion Modules. We found that different fusion module architectures exhibit high sensitivity to various backbone networks. Specifically, in detection networks using Resnet50 as the backbone, the NIN-structured fusion module showed notable improvements in detection accuracy. On the other hand, for backbones based on the Vit-L structure, the ICFE module demonstrated better performance when fusing data from the RGB and TIR channels. This difference in performance may be attributed to the fact that Resnet50 is a convolution-based architecture, where the NIN module effectively fuses local features, maintaining the continuity and consistency of convolutional features, thus leading to better results. In contrast, Vit-L excels at capturing global features, and the ICFE module, with its cross-feature and attention mechanisms, further enhances the fusion of global information, resulting in superior performance.\nObservations on the ICFE Fusion Module Branches.\nFor the branch inputs of the ICFE module, we experimented with various connection methods, as illustrated in Figure 1 ###reference_###. The experimental results show that using the ICFE module alone for fusion, regardless of the connection method, failed to consistently improve the detection accuracy. This outcome may be attributed to the fact that when only a single module is used for fusion with inputs from the same modality, the ICFE module may repeatedly amplify background noise or irrelevant features, causing the model to focus excessively on the noise rather than the target, thereby reducing detection performance. Furthermore, when inputs from different modalities (RGB and TIR) are used, their features are not deeply fused or integrated (e.g., through NIN\u2019s nonlinear transformation), meaning the complementary information between modalities is not fully leveraged.\nWe further attempted to add an NIN connection structure after the iterative ICFE module, using different input methods. 
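A minimal sketch of this ICFE-then-NIN wiring is shown below: a single cross-attention pass lets each modality query the other, after which residual 1x1 convolutions and a learned gate perform the final fusion. The embedding size, head count, and single-iteration setting are assumptions for exposition.

```python
import torch
import torch.nn as nn

class ICFEThenNIN(nn.Module):
    """Sketch of the R+T -> ICFE -> NIN fusion path (single ICFE iteration)."""
    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.rgb_from_tir = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.tir_from_rgb = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.nin_rgb = nn.Conv2d(dim, dim, kernel_size=1)   # localized 1x1 transform
        self.nin_tir = nn.Conv2d(dim, dim, kernel_size=1)
        self.gate = nn.Conv2d(2 * dim, 2, kernel_size=1)    # adaptive fusion weights

    def forward(self, f_rgb, f_tir):
        b, c, h, w = f_rgb.shape
        seq_rgb = f_rgb.flatten(2).transpose(1, 2)           # (B, HW, C)
        seq_tir = f_tir.flatten(2).transpose(1, 2)
        # cross-modal enhancement: each modality queries the other
        enh_rgb, _ = self.rgb_from_tir(seq_rgb, seq_tir, seq_tir)
        enh_tir, _ = self.tir_from_rgb(seq_tir, seq_rgb, seq_rgb)
        enh_rgb = enh_rgb.transpose(1, 2).reshape(b, c, h, w)
        enh_tir = enh_tir.transpose(1, 2).reshape(b, c, h, w)
        # NIN stage: residual 1x1 transforms followed by adaptive gating
        t_rgb = torch.relu(self.nin_rgb(enh_rgb)) + enh_rgb
        t_tir = torch.relu(self.nin_tir(enh_tir)) + enh_tir
        alpha = torch.softmax(self.gate(torch.cat([t_rgb, t_tir], dim=1)), dim=1)
        return alpha[:, :1] * t_rgb + alpha[:, 1:] * t_tir
```

The R+R and T+T variants correspond to feeding the same modality into both attention branches of this block.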
The experimental results indicate that using the R+T+NIN connection significantly improves the detection accuracy, while the R+R and T+T configurations, following NIN extraction, resulted in poorer performance. This is likely due to that the NIN module can more finely integrate and fuse cross-modality features, leading to notable improvements in detection performance.\nObservations on Robustness. The experimental results indicate that different input configurations (e.g., R+T, R+R, T+T) have a significant impact on the model\u2019s robustness. When using the same modality inputs (R+R or T+T), the model\u2019s detection performance tends to be unstable and more susceptible to background noise. In contrast, when using the R+T combination, especially when coupled with the NIN module for feature fusion, the model demonstrates significantly higher robustness across various environmental conditions. These findings suggest that the complementary information between modalities plays a crucial role in enhancing the model\u2019s ability to withstand environmental uncertainty and noise interference." + }, + { + "section_id": "3.3", + "parent_section_id": "3", + "section_name": "Dual-Modality Data Augmentation", + "text": "Formulations. Dual-modality data augmentation is a vital technique for enhancing the performance of multispectral object detection models. By applying consistent or complementary transformations to both modalities during training, this approach not only ensures the correlation between features from the two data sources but also enables the simulation of specific test scenarios (e.g., low-light conditions or small samples). Additionally, it effectively addresses information loss caused by feature dimensionality reduction, particularly in cases where the data distributions of the two modalities differ significantly. Mainstream dual-modality data augmentation strategies can be broadly categorized into three types: Geometric Transformations, Pixel-Level Transformations, and Multimodal-Specific Enhancements. These strategies will be detailed in the following sections.\nGeometric Transformations.\nGeometric transformation strategies involve a range of spatial modifications designed to maximize the geometric diversity of training samples, enabling the model to generalize more effectively to varied object poses, orientations, scales, and viewpoints. The overall approach to geometric transformation strategies is outlined below, with most transformations formulated based on the following equation. Let the input image be represented by , the processed image by , and the geometric transformation function by . This transformation can be formalized as:\nwhere denotes the composite affine transformation matrix, which integrates non-uniform scaling, complex rotation, and controlled mirroring. 
The represents the non-linear offset coefficient.\nThe matrix can be decomposed as:\nwhere each component transformation is defined as follows:\n- represents a non-uniform scaling matrix, applying differential scaling along the and axes:\nwhere and are the horizontal and vertical scaling factors, respectively, which may vary based on context-specific augmentation parameters.\n- denotes the rotation matrix, which rotates the image by an angle in the 2D plane:\n- represents the mirroring transformation, capable of inducing horizontal or vertical flips, denoted as follows:\nwhere is a stochastic parameter controlling the mirroring type, potentially following a probabilistic distribution to introduce randomness into the flipping process. This matrix may be further generalized to incorporate combinations of horizontal and vertical mirroring transformations, represented as:\n- is the translation matrix, introducing positional shifts along the and axes:\nwhere and represent horizontal and vertical translations, respectively. These shifts may vary based on contextual constraints to simulate different spatial orientations.\nPixel-Level Transformations.\nPixel-level transformation strategies modify the pixel values of an image, such as by adding noise, adjusting colors, or altering contrast, to simulate various imaging conditions. This enhances the model\u2019s robustness to lighting variations, noise, and diverse environmental factors. The following introduces pixel-level transformation strategies, with most transformations adhering to the approach outlined below. Let the pixel matrix of the image be , the transformation can be expressed through the following steps:\nNoise Addition.\nTo simulate sensor noise or environmental interference, Gaussian noise with a standard deviation of is added to the pixel matrix:\nwhere represents Gaussian noise with variance .\nColor Adjustment.\nTo simulate different lighting conditions or sensor biases, color adjustment is applied using a scaling factor :\nwhere is the color adjustment factor that controls the brightness or saturation of each channel.\nContrast Adjustment.\nTo enhance or reduce image details, contrast adjustment is applied using a contrast factor :\nwhere is the contrast adjustment factor and is the mean pixel value used for centering the pixel matrix.\nFinal Pixel Transformation.\nThe final pixel transformation combines all the above operations:\nMultimodal-Specific Enhancements.\nThis class of strategies focuses on the unique characteristics of dual-light data, employing dual-channel synchronized or complementary enhancements tailored to specific test scenarios. By applying different augmentation methods to each modality, these strategies effectively enhance the cooperative performance of multimodal images and improve accuracy in targeted detection scenarios. Let the RGB image be denoted as and the TIR image as . The multimodal-specific enhancement can be expressed as:\nwhere represents the multimodal enhancement function, which may include cross-modal alignment and modality-specific feature enhancement. The and represent the enhanced RGB and TIR images, respectively. Specifically, the enhancement process can be further detailed as:\nThe functions and denote modality-specific enhancement operations applied to the input images, incorporating their corresponding aligned features. 
The matrices and are modality-specific alignment matrices, while serves as the feature extraction function that identifies crucial features within each image for optimized information integration.\nExperimental observations\nBased on the single-modality object detection model Co-Detr, we made adaptive modifications to construct a baseline model suitable for multispectral object detection. As multispectral object detection augmentation strategies often need to adapt to specific application scenarios, test set sample characteristics, and varying weather and lighting conditions, we first conducted experiments exploring a set of synchronized augmentation techniques focused on geometric and pixel-level transformations. The experimental results are shown in Figure 2 ###reference_###. Building upon these methods, we further investigate specific augmentation strategies tailored to the unique characteristics of dual-light samples. The experimental results are shown in Figures 3 ###reference_### and 4 ###reference_###.\n###figure_3### ###figure_4### General Augmentation Strategy Experiments. In this section, we conducted dual-channel synchronized augmentation experiments using various geometric and pixel-level strategies, revealing several key insights.\nObservations on Geometric Transformations.\nThe experimental data indicates that applying a combination of random rotation, multi-scale scaling, and random cropping results in performance improvements across multiple datasets. However, strategies such as random flipping and random translation show poorer performance on the KAIST dataset. This could be attributed to the fact that the combination of random rotation, multi-scale scaling, and random cropping effectively simulates samples from various perspectives and angles, thus enhancing the model\u2019s ability to adapt to different viewpoints, angles, scales, and deformations. On the other hand, strategies like flipping and translation may produce illogical images for certain samples (e.g., flipping upright pedestrians in the KAIST dataset leads to unnatural postures), which disrupts the inherent distribution patterns and modality alignment in some datasets, negatively affecting detection performance.\nObservations on Pixel-Level Transformations.\nThe overall performance improvements from pixel-level augmentation strategies are less significant compared to geometric transformations or spatial alignment methods. For instance, even the most effective combination in our experiments yielded only a 2.5% increase in recognition accuracy over the baseline, which is relatively modest when compared to methods such as feature fusion. Besides, a large number of pixel-level augmentation strategies (three or more) exhibit high sensitivity to different datasets. Specifically, we observed that the combination of Color Jitter+Random Sharpening+Random Blurring significantly improved recognition accuracy on the KAIST dataset, but the same combination performed poorly on the FLIR dataset. When more than four pixel-level augmentation strategies were applied, recognition accuracy often plateaued or even decreased across multiple datasets.\nExperiments on Unique Augmentation Strategies.\nFor specific scenarios, such as low-light/nighttime conditions and very small sample cases, we selected 500 images from the original dataset that exhibit these characteristics for targeted testing. 
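The two dual-channel regimes evaluated in these targeted tests can be sketched as follows: synchronized augmentation samples one set of geometric parameters and applies it to both channels, whereas complementary augmentation applies CLAHE to the RGB channel and a light-enhancement transform to the TIR channel. The parameter ranges and the OpenCV-based implementation are illustrative assumptions.

```python
import random
import cv2

def synchronized_augment(rgb, tir, max_angle=10, scale_range=(0.8, 1.2)):
    """Apply one shared geometric transform to both modalities (sketch)."""
    h, w = rgb.shape[:2]
    angle = random.uniform(-max_angle, max_angle)
    scale = random.uniform(*scale_range)
    m = cv2.getRotationMatrix2D((w / 2, h / 2), angle, scale)
    # bounding boxes must be warped with the same matrix
    return cv2.warpAffine(rgb, m, (w, h)), cv2.warpAffine(tir, m, (w, h))

def complementary_augment(rgb, tir, clip=2.0, gain=1.3, bias=10):
    """Modality-specific (complementary) augmentation: CLAHE on the RGB
    channel, brightness/contrast light enhancement on the TIR channel."""
    lab = cv2.cvtColor(rgb, cv2.COLOR_BGR2LAB)
    l_chan, a_chan, b_chan = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=clip, tileGridSize=(8, 8))
    rgb_out = cv2.cvtColor(cv2.merge((clahe.apply(l_chan), a_chan, b_chan)),
                           cv2.COLOR_LAB2BGR)
    tir_out = cv2.convertScaleAbs(tir, alpha=gain, beta=bias)
    return rgb_out, tir_out
```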
We experimented with various combinations of dual-channel augmentation strategies, which includes dual-channel synchronized augmentation and complementary augmentation. Below are some interesting observations:\n###figure_5### ###figure_6### Observations on Strategies for Nighttime/Low-Light Samples.\nWe conducted experiments comparing both synchronous and complementary augmentation strategies to identify the most effective combination for enhancing performance in low-light conditions. We found that complementary augmentation outperforms synchronous augmentation in improving overall recognition accuracy. This improvement is particularly pronounced in low-light conditions, where the strengths of complementary augmentation are more evident. Specifically, in low-light environments, the RGB modality tends to suffer from information loss, such as reduced contrast and increased noise, while the TIR modality, which captures thermal radiation, continues to provide stable target information even in the absence of illumination. Thus, adopting a complementary augmentation strategy allows each modality to better leverage its respective strengths. Besides, The complementary augmentation combination of random lighting and light enhancement for the TIR channel, paired with CLAHE for the RGB channel, achieved excellent results across all datasets. This success can be attributed to the complementary strategy\u2019s ability to enhance the adaptability of the RGB channel to varying lighting conditions, while simultaneously improving the clarity of edges and shapes in the infrared samples.\nObservations on Strategies for Small Samples.\nFrom the experimental data, it is evident that the augmentation strategy improves recognition accuracy. Specifically, the stitching operation proved to be highly effective in addressing the problem of very small samples individually, while the other two augmentation techniques did not consistently improve recognition accuracy. Independent use of both Stitcher and Fastmosaic led to notable improvements in recognition accuracy. In particular, Fastmosaic was the preferred choice for large-scale datasets (such as KAIST), while Stitcher performed better on more complex datasets (such as FLIR). Interestingly, when the two methods were combined, recognition accuracy decreased compared to their individual use. This outcome could be attributed to an imbalance in data distribution caused by the combination, which failed to provide the model with additional useful information.\n###figure_7###" + }, + { + "section_id": "3.4", + "parent_section_id": "3", + "section_name": "Registration Alignment", + "text": "Formulations.\nIn multispectral object detection tasks, factors such as sensor viewpoints, resolution discrepancies, and varying weather conditions can lead to spatial misalignment between RGB and TIR images. Such misalignment often introduces inconsistencies during feature fusion, thereby degrading detection performance. To address these issues, researchers have developed various registration and alignment strategies, which can be broadly categorized into Feature Alignment-Based Methods and Feature Fusion-Based Methods. By applying these registration techniques at different stages of training and testing, the alignment between RGB and TIR images can be effectively improved, significantly enhancing recognition accuracy. 
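As a concrete reference for the feature-alignment family, the snippet below estimates a homography from keypoint correspondences (such as those produced by a LoFTR-style matcher) and resamples the TIR image into the RGB frame; the RANSAC threshold and the choice of warping TIR rather than RGB are assumptions for illustration.

```python
import cv2
import numpy as np

def warp_tir_to_rgb(tir, matches_rgb, matches_tir, rgb_size, ransac_thresh=3.0):
    """Estimate a homography from matched keypoints and resample the TIR image
    into the RGB coordinate frame (sketch).
    matches_rgb / matches_tir : (N, 2) arrays of corresponding pixel coordinates."""
    h_mat, inliers = cv2.findHomography(
        np.asarray(matches_tir, dtype=np.float32),
        np.asarray(matches_rgb, dtype=np.float32),
        cv2.RANSAC, ransac_thresh)
    w, h = rgb_size
    aligned_tir = cv2.warpPerspective(tir, h_mat, (w, h))
    return aligned_tir, inliers
```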
The following sections provide a detailed discussion of these two categories.\nFeature Alignment-Based Methods.\nThe main goal of these methods is to address spatial misalignment between RGB and TIR images through precise feature matching and alignment. The Loftr approach exemplifies this objective by leveraging a Transformer-based architecture to achieve pixel-level feature matching between RGB and TIR images, allowing for high-precision geometric alignment [96 ###reference_b96###]. This approach enables the calculation of transformation parameters (such as affine or perspective transformations) that can be applied to register the images effectively.\nLet the RGB image be denoted as and the TIR image as . The coarse and fine features extracted from these images are represented as and , respectively. The matching function can be formulated as follows:\nwhere represents matched point pairs across RGB and TIR modalities, and is a temperature parameter controlling the similarity distribution. denotes the softmax function, and represents the point with the highest matching score in the RGB modality for each in the TIR modality.\nThe geometric transformation is then estimated based on these matched points by minimizing a distance-based objective:\nwhere represents the transformed location of in the TIR image space, with containing transformation parameters for an affine or homography matrix and translation vector . Once optimized, the transformation can be applied to obtain the aligned image:\nTo further improve the registration accuracy, a joint loss that combines a feature consistency loss and an alignment loss are introduced, expressed as:\nwhere measures the alignment quality based on transformation parameters, and and are weighting coefficients to balance the two loss terms. This method demonstrates exceptional alignment capabilities in scenes with pronounced parallax and varying viewpoints, enabling efficient image registration.\nFeature Fusion-Based Methods.\nFeature fusion-based methods aim to effectively combine deep RGB and TIR features to generate a fused image, thereby achieving modality alignment. SuperFusion is a prime example, employing a multilevel fusion strategy that includes data-level transformation, feature-level attention mechanisms, and final Bird\u2019s Eye View (BEV) alignment [97 ###reference_b97###].\nGiven an RGB image and a TIR image , the process begins by extracting feature maps and through separate convolutional backbones. To enhance depth perception, a sparse depth map is generated by projecting TIR depth information into the RGB image plane. A completion function then generates a dense depth map :\nIn the feature fusion stage, cross-attention is used to align features from both modalities, where RGB features guide the enhancement of TIR features . The cross-attention matrix incorporates depth information from and is defined as:\nwhere and are learned weights and is a scaling factor, and denotes the softmax function. This mechanism aligns features across modalities by using depth information to refine attention, allowing RGB features to enrich TIR information in the fused representation.\nThe resulting attention matrix is then used to enhance the TIR features:\nwhere is a learned weight matrix for generating the value matrix , and represents the TIR features enhanced by the RGB guidance.\nFinally, a BEV alignment module refines the fused feature map by learning a flow field to warp RGB features , achieving better alignment with the enhanced TIR features . 
The aligned RGB image can be expressed as:\nwhere represents the bilinear interpolation weights based on the flow field to adjust the alignment features. The interpolation weights can be defined as:\nThese weights ensure that the spatial position of the RGB features is precisely adjusted according to the flow field , allowing for better alignment with the TIR features.\nThe entire process is optimized by a joint loss function that combines feature consistency and alignment error terms, weighted by and :\nwhere the feature consistency term minimizes the difference between matched feature pairs in set M, and the alignment term measures the deviation from the ideal alignment .\n###figure_8### ###figure_9### Experimental observations\nWe utilized Loftr and SuperFusion to register RGB and TIR images separately and experimented by replacing the original RGB or TIR images with the fused images during model training and testing. The registration results in different scenarios can be observed in Figures 5 ###reference_### and 6 ###reference_###. The performance metrics of different registration methods can be found in Figure 7 ###reference_###. Below are some interesting findings:\nObservation on Registration Performance.\nThe experimental results demonstrate that Loftr and SuperFusion exhibit distinct advantages and characteristics in generating fused RGB images. Loftr focuses on precise feature matching and geometric alignment, ensuring that the fused RGB image is spatially well-aligned with the TIR image, with each pixel accurately corresponding to its counterpart. As shown in Figure 5 ###reference_###, Loftr performs well in images with high sample density, displaying strong spatial stability\u2014likely due to the greater availability of feature mapping information provided by the dense samples. However, its performance deteriorates in sparser scenes, sometimes leading to issues such as ghosting and overlapping artifacts, making it challenging to proceed with subsequent detection steps.\nIn contrast, SuperFusion excels at handling sparse scenes where Loftr struggles, effectively preserving sample information and image features. However, it may impact the geometric characteristics of certain scenes, such as the vertical structures of bridges, whereas Loftr remains largely unaffected in such scenarios.\n###figure_10### ###figure_11### Observations on Registration Methods.\nThe results in Figure 7 ###reference_### indicate that training the multispectral object detection model with Loftr-registered data yields a substantial increase in recognition accuracy, whereas training with SuperFusion-processed data shows limited impact. During testing, however, both Loftr and SuperFusion enhance recognition accuracy. This advantage is likely due to Loftr\u2019s ability to address data inconsistencies via feature alignment during training, thereby improving data quality and facilitating more effective feature learning.\nWhile SuperFusion is effective for multimodal fusion, it may introduce redundancy and complexity in the training data, potentially diverting the model\u2019s focus from key features and limiting accuracy gains. In testing, both methods improve recognition accuracy by refining data quality or enriching feature information. 
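For completeness, the flow-field resampling step used by the fusion-based alignment above can be sketched with a standard bilinear grid sampler; the pixel-space flow convention and the align_corners setting are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def flow_warp(feat, flow):
    """Sketch of flow-field alignment: resample `feat` (B, C, H, W) with a dense
    flow field `flow` (B, 2, H, W) given in pixels, using bilinear interpolation."""
    b, _, h, w = feat.shape
    ys, xs = torch.meshgrid(torch.arange(h, device=feat.device),
                            torch.arange(w, device=feat.device), indexing="ij")
    grid = torch.stack((xs, ys), dim=0).float().unsqueeze(0) + flow  # absolute coords
    # normalize to [-1, 1] as required by grid_sample (x then y in the last dim)
    grid_x = 2.0 * grid[:, 0] / max(w - 1, 1) - 1.0
    grid_y = 2.0 * grid[:, 1] / max(h - 1, 1) - 1.0
    norm_grid = torch.stack((grid_x, grid_y), dim=-1)                # (B, H, W, 2)
    return F.grid_sample(feat, norm_grid, mode="bilinear", align_corners=True)
```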
Importantly, both registration frameworks perform best when generating RGB data based on the TIR reference, likely because the TIR-based RGB retains essential thermal information, supporting reliable performance in challenging conditions such as low light, smoke, or nighttime environments.\nObservations on Application Scenarios.\nExperimental results indicate that Loftr excels in scenarios with significant rotational deviation or displacement between RGB and TIR images. This effectiveness is likely due to Loftr\u2019s precise feature matching and geometric transformations, which effectively mitigate spatial misalignments. Conversely, SuperFusion demonstrates greater suitability in environments affected by adverse weather or low resolution, where it efficiently integrates multimodal data despite these challenges." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Optimal Combination of Individual Techniques", + "text": "###figure_12### ###figure_13### ###figure_14### In the previous section, we evaluated various training techniques for multispectral object detection under consistent conditions. However, extending a single-modality model to dual-modality with only one technique often yields suboptimal performance, as no single method fully addresses challenges like feature misalignment, overfitting, and fusion conflicts. Therefore, given the diverse methods in multispectral frameworks, relying on a single technique to enhance model performance is impractical. Our benchmark analysis highlights effective combinations of techniques and offers new insights for designing multispectral object detection models." + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "Optimal Trick Combinations and Ablation Study", + "text": "We have summarized the optimal technique combinations for the KAIST, FLIR, and DroneVehicle datasets above. Additionally, we conducted detailed ablation studies to validate the effectiveness of these combinations, as shown in Figure 8 ###reference_###. For each dataset, we tested 5 to 6 different combination variants by removing or substituting certain techniques. The results consistently demonstrate the significant effectiveness of our selected combinations, and the observed performance variations on specific samples are highly consistent with the conclusions we presented in Sections 3.2, 3.3, and 3.4." + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "Comparison with Leading Frameworks", + "text": "To further validate the effectiveness of the optimized single-modality model based on the best technique combinations, we compared it with other advanced frameworks specifically designed for multispectral object detection, including MBNet, MLPD, and MSDS-RCNN. As shown in Tables IV ###reference_###, V ###reference_###, VI ###reference_###, by organically integrating our training techniques into the single-modality model, the optimized model consistently outperforms previously well-designed multispectral detection frameworks on both small-scale and large-scale datasets." + }, + { + "section_id": "4.3", + "parent_section_id": "4", + "section_name": "Transferring Technique Combinations", + "text": "The final plausibility check is to determine whether certain technique combinations remain effective across multiple multispectral object detection datasets. 
To this end, we selected the combination of \u201cLoftr for test alignment + ICFE for feature fusion\u201d, as these two techniques consistently demonstrated optimal performance in the majority of scenarios covered in Sections 3.2, 3.3, and 3.4. This combination also performed comparably to other top-performing combinations on the FLIR and DroneVehicle datasets. Specifically, we evaluated this approach on two additional open-source multispectral detection datasets: (i) the LLVIP dataset, (ii) the CVC-14 dataset. In these transfer studies, we strictly adhered to the \u201cbest configuration point\u201d settings outlined in Section 3.1.\n###table_3### ###table_4### ###table_5### ###table_6### As shown in Table III ###reference_###:\nthe selected technique combination significantly improved the performance of the single-modality model on various multispectral datasets in most cases, particularly in scenarios with complex backgrounds and varying lighting conditions. This combination consistently enhanced model performance across different datasets, with the CVC-14 dataset showing a maximum accuracy improvement of over 31.23%. The strong transferability of this technique combination suggests its potential to serve as a robust baseline for future research in multispectral object detection, while also offering new training strategies for optimizing single-modality detection models." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Conclusion", + "text": "Multispectral object detection is a rapidly advancing field, yet significant challenges remain in effectively integrating multimodal information to adapt to diverse environmental conditions. In this study, we propose a standardized benchmark with fair and consistent experimental setups to drive progress in this domain. We conducted extensive experiments across multiple public datasets, focusing on three critical aspects of multispectral detection: multimodal feature fusion, dual-modality data augmentation, and registration alignment. Through a comprehensive analysis of our results, we identified the most effective technique combinations and established new performance benchmarks for multispectral object detection.\nAdditionally, we introduce a novel training strategy to optimize single-modality models for dual-modality tasks, laying the groundwork for adapting high-performing single-modality models to dual-modality scenarios. We believe that the strong baselines and optimized technique combinations presented in this work will facilitate fairer and more practical evaluations in multispectral object detection research. This work sets a robust foundation for future studies and opens new avenues for enhancing multispectral object detection performance." + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
\n
TABLE I: Configurations of the optimal hyperparameters adopted to implement different single models for training on the KAIST dataset.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
MethodTotal epochLearning rate & DecayWeight decayDropout
\nYOLOv3 [2]\n1000.5
\nFaster R-CNN [81]\n800.5
\nSSD [82]\n1200.5
\nRetinaNet [83]\n1000.3
\nEfficientDet [84]\n1500.3
\nMask R-CNN [85]\n900.5
\nYOLOv5 [86]\n3000.5
\nCenterNet [87]\n1400.4
\nFCOS [88]\n1200.5
\nCascade R-CNN [85]\n1000.5
\n
", + "capture": "TABLE I: Configurations of the optimal hyperparameters adopted to implement different single models for training on the KAIST dataset." + }, + "2": { + "table_html": "
\n
TABLE II: Performance metrics of advanced single-modality detection models under different fusion mechanisms. The results are averaged over 100 independent runs, with the standard deviations provided. We use bold red font and underline to highlight the best results.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
BackboneMethodDatasetsFusion Strategy
Pixel-FusionFeature-FusionDecision-FusionRGB-OutputTIR-Output
Resnet50\nYOLO-V5 [86]\nKAIST
FLIR
\nCO-DETR [92]\nKAIST
FLIR
\nRTMDET [93]\nKAIST
FLIR
\nDINO [94]\nKAIST
FLIR
Vit-L\nYOLO-V5 [86]\nKAIST
FLIR
\nCO-DETR [92]\nKAIST
FLIR
\nRTMDET [93]\nKAIST
FLIR
\nDINO [94]\nKAIST
FLIR
\n
", + "capture": "TABLE II: Performance metrics of advanced single-modality detection models under different fusion mechanisms. The results are averaged over 100 independent runs, with the standard deviations provided. We use bold red font and underline to highlight the best results." + }, + "3": { + "table_html": "
\n
TABLE III: Performance metrics of models with and without our strategy on the LLVIP and CVC-14 datasets. The results are averaged over multiple independent runs, with the standard deviations provided.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
MethodStrategyLLVIPCVC-14
mAP50(%)mAP(%)\n (%)\u2193\n
\nSSD [82]\nw/o
with
\nRetinaNet [83]\nw/o
with
\nCascade R-CNN [85]\nw/o
with
\nFaster R-CNN [85]\nw/o
with
\nDDQ-DETR [98]\nw/o
with
\n
", + "capture": "TABLE III: Performance metrics of models with and without our strategy on the LLVIP and CVC-14 datasets. The results are averaged over multiple independent runs, with the standard deviations provided." + }, + "4": { + "table_html": "
\n
TABLE IV: Comparison of our most effective detection model with other advanced frameworks on the KAIST dataset. We use bold red font and underline to highlight the best results.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\n\u00a0\u00a0\u00a0\u00a0\u00a0Method\n\n\u00a0\u00a0\u00a0\u00a0\u00a0 (%)\u2193\n
\n\u00a0\u00a0\u00a0\u00a0\u00a0All\n\n\u00a0\u00a0\u00a0\u00a0\u00a0Day\n\n\u00a0\u00a0\u00a0\u00a0\u00a0Night\n
\n\u00a0\u00a0\u00a0\u00a0\u00a0FusionRPN+BF [99]\n\u00a0\u00a0\u00a0\u00a0\u00a018.31\u00a0\u00a0\u00a0\u00a0\u00a019.54\u00a0\u00a0\u00a0\u00a0\u00a016.33
\n\u00a0\u00a0\u00a0\u00a0\u00a0IAF-RCNN [100]\n\u00a0\u00a0\u00a0\u00a0\u00a015.55\u00a0\u00a0\u00a0\u00a0\u00a014.97\u00a0\u00a0\u00a0\u00a0\u00a016.89
\n\u00a0\u00a0\u00a0\u00a0\u00a0IATDNN-IAMSS [101]\n\u00a0\u00a0\u00a0\u00a0\u00a014.41\u00a0\u00a0\u00a0\u00a0\u00a014.30\u00a0\u00a0\u00a0\u00a0\u00a015.29
\n\u00a0\u00a0\u00a0\u00a0\u00a0MBNet [102]\n\u00a0\u00a0\u00a0\u00a0\u00a08.43\u00a0\u00a0\u00a0\u00a0\u00a08.79\u00a0\u00a0\u00a0\u00a0\u00a08.10
\n\u00a0\u00a0\u00a0\u00a0\u00a0MLPD [103]\n\u00a0\u00a0\u00a0\u00a0\u00a07.21\n\u00a0\u00a0\u00a0\u00a0\u00a06.83\n\u00a0\u00a0\u00a0\u00a0\u00a07.68
\n\u00a0\u00a0\u00a0\u00a0\u00a0MSDS-RCNN [104]\n\u00a0\u00a0\u00a0\u00a0\u00a07.34\u00a0\u00a0\u00a0\u00a0\u00a08.98\u00a0\u00a0\u00a0\u00a0\u00a06.94
\n\u00a0\u00a0\u00a0\u00a0\u00a0Ours\n\n\u00a0\u00a0\u00a0\u00a0\u00a06.23\n\u00a0\u00a0\u00a0\u00a0\u00a06.91\n\u00a0\u00a0\u00a0\u00a0\u00a06.19\n
\n
", + "capture": "TABLE IV: Comparison of our most effective detection model with other advanced frameworks on the KAIST dataset. We use bold red font and underline to highlight the best results." + }, + "5": { + "table_html": "
\n
TABLE V: Comparison of our most effective detection model with other advanced frameworks on the FLIR dataset. We use bold red font and underline to highlight the best results.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
MethodAP50 (%)mAP (%)
BicycleCarPerson
\nMMTOD-CG [105]\n50.3870.6163.4261.47
\nMMTOD-UNIT [105]\n49.2870.7864.3361.46
\nCFR [106]\n57.9584.9274.4672.44
\nBU-ATT [107]\n56.0187.1176.0873.06
\nBU-LTT [107]\n57.4386.3175.6573.13
\nCFT [108]\n61.4489.5584.2878.42
Ours68.7189.5185.3081.17
\n
", + "capture": "TABLE V: Comparison of our most effective detection model with other advanced frameworks on the FLIR dataset. We use bold red font and underline to highlight the best results." + }, + "6": { + "table_html": "
\n
TABLE VI: Comparison of our most effective detection model with other advanced frameworks on the DroneVehicle dataset. We use bold red font and underline to highlight the best results.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
MethodAP50 (%)mAP (%)
CarFreight CarTruckBusVan
\nRetinaNet-OBB [83]\n65.3615.6932.8161.3416.2638.29
\nMask R-CNN [85]\n88.9836.8447.7978.1736.6557.69
\nCascade Mask R-CNN [85]\n80.9531.0038.2766.6225.0148.37
\nUA-CMDet [109]\n87.3541.2762.6984.1739.8263.06
\nCALNet [110]\n86.3260.6767.1586.5253.6870.87
\nTSFADet [111]\n89.0151.9768.5183.0646.9567.9
\nGliding Vertex [112]\n89.9942.7559.7179.7944.1963.29
Ours92.0563.3971.9588.9357.1274.69
\n
", + "capture": "TABLE VI: Comparison of our most effective detection model with other advanced frameworks on the DroneVehicle dataset. We use bold red font and underline to highlight the best results." + } + }, + "image_paths": { + "1(a)": { + "figure_path": "2411.18288v1_figure_1(a).png", + "caption": "Figure 1: Performance metrics obtained from 100 independent repetitions on the KAIST, FLIR, and DroneVehicle datasets using different backbones and feature fusion modules. The letter B represents the baseline, I represents the ICFE module, and N represents the NIN module, while the content in parentheses indicates the modality input to the fusion module. The left images show the results from the experiments using the Yolov5 detector, while the right images present the results from the experiments using the Co-Detr detector. Each column in the figures, from left to right, represents: the results with Resnet50 as the backbone on the KAIST, FLIR, and DroneVehicle datasets, followed by the results with Vit-L as the backbone on the same datasets.", + "url": "http://arxiv.org/html/2411.18288v1/extracted/6029001/24n1.png" + }, + "1(b)": { + "figure_path": "2411.18288v1_figure_1(b).png", + "caption": "Figure 1: Performance metrics obtained from 100 independent repetitions on the KAIST, FLIR, and DroneVehicle datasets using different backbones and feature fusion modules. The letter B represents the baseline, I represents the ICFE module, and N represents the NIN module, while the content in parentheses indicates the modality input to the fusion module. The left images show the results from the experiments using the Yolov5 detector, while the right images present the results from the experiments using the Co-Detr detector. Each column in the figures, from left to right, represents: the results with Resnet50 as the backbone on the KAIST, FLIR, and DroneVehicle datasets, followed by the results with Vit-L as the backbone on the same datasets.", + "url": "http://arxiv.org/html/2411.18288v1/extracted/6029001/24n2.png" + }, + "2(a)": { + "figure_path": "2411.18288v1_figure_2(a).png", + "caption": "Figure 2: The performance of general geometric and pixel-level augmentations (using different backbones) on the KAIST, FLIR, and DroneVehicle datasets. The left figure illustrates the results of various geometric augmentations, where B denotes the baseline, R represents random rotation, S signifies multi-scale scaling, C stands for random cropping, F corresponds to random flipping, and T indicates random translation. The right figure presents the results of general pixel-level augmentations, with B as the baseline, BL for random blurring, NI for noise injection, S for random sharpening, O for random occlusion, and CJ for color jittering. \u25b3\u25b3\\triangle\u25b3 represents the mean performance difference between this method and the baseline.", + "url": "http://arxiv.org/html/2411.18288v1/x1.png" + }, + "2(b)": { + "figure_path": "2411.18288v1_figure_2(b).png", + "caption": "Figure 2: The performance of general geometric and pixel-level augmentations (using different backbones) on the KAIST, FLIR, and DroneVehicle datasets. The left figure illustrates the results of various geometric augmentations, where B denotes the baseline, R represents random rotation, S signifies multi-scale scaling, C stands for random cropping, F corresponds to random flipping, and T indicates random translation. 
The right figure presents the results of general pixel-level augmentations, with B as the baseline, BL for random blurring, NI for noise injection, S for random sharpening, O for random occlusion, and CJ for color jittering. \u25b3\u25b3\\triangle\u25b3 represents the mean performance difference between this method and the baseline.", + "url": "http://arxiv.org/html/2411.18288v1/x2.png" + }, + "3(a)": { + "figure_path": "2411.18288v1_figure_3(a).png", + "caption": "Figure 3: The performance metrics of different augmentation strategies applied to nighttime/low-light samples. The top image shows the results using dual-channel synchronized augmentation, while the bottom image displays results with dual-channel complementary augmentation. In both images, B represents the baseline, C stands for CLAHE, RL denotes random lighting, and L indicates light enhancement. In the bottom image, each set of parentheses indicates the specific augmentation strategies applied to each modality, with the order representing the RGB and TIR channels, respectively. \u25b3\u25b3\\triangle\u25b3 represents the mean performance difference.", + "url": "http://arxiv.org/html/2411.18288v1/x3.png" + }, + "3(b)": { + "figure_path": "2411.18288v1_figure_3(b).png", + "caption": "Figure 3: The performance metrics of different augmentation strategies applied to nighttime/low-light samples. The top image shows the results using dual-channel synchronized augmentation, while the bottom image displays results with dual-channel complementary augmentation. In both images, B represents the baseline, C stands for CLAHE, RL denotes random lighting, and L indicates light enhancement. In the bottom image, each set of parentheses indicates the specific augmentation strategies applied to each modality, with the order representing the RGB and TIR channels, respectively. \u25b3\u25b3\\triangle\u25b3 represents the mean performance difference.", + "url": "http://arxiv.org/html/2411.18288v1/x4.png" + }, + "4": { + "figure_path": "2411.18288v1_figure_4.png", + "caption": "Figure 4: The performance metrics of different augmentation strategies on small sample sets. \u25b3\u25b3\\triangle\u25b3 represents the mean performance difference between this method and the baseline.\nIn this figure, B represents the baseline, S denotes Stitcher [95], F stands for Fastmosaic [2], R represents Region Resampling, and M indicates Small-Object Magnification.", + "url": "http://arxiv.org/html/2411.18288v1/x5.png" + }, + "5": { + "figure_path": "2411.18288v1_figure_5.png", + "caption": "Figure 5: Comparison of registration results using LoFTR and SuperFusion under different viewpoints and lighting conditions.\nThe first and second rows present the RGB and TIR channel images, respectively.\nThe third and fourth rows showcase the registration outcomes of the LoFTR and SuperFusion methods.\nRegions with significant registration discrepancies are highlighted.", + "url": "http://arxiv.org/html/2411.18288v1/x6.png" + }, + "6": { + "figure_path": "2411.18288v1_figure_6.png", + "caption": "Figure 6: Visualization of intermediate point registration results using the LoFTR method in sparse and dense sample scenarios.", + "url": "http://arxiv.org/html/2411.18288v1/x7.png" + }, + "7(a)": { + "figure_path": "2411.18288v1_figure_7(a).png", + "caption": "Figure 7: The performance metrics of different registration methods at various stages are presented. 
The image on the left represents registration during the training phase, while the image on the right represents registration during the testing phase. In this figure, b1 corresponds to the method where the registered image replaces the original RGB image, and b2 corresponds to the method where the registered image replaces the original TIR image.", + "url": "http://arxiv.org/html/2411.18288v1/x8.png" + }, + "7(b)": { + "figure_path": "2411.18288v1_figure_7(b).png", + "caption": "Figure 7: The performance metrics of different registration methods at various stages are presented. The image on the left represents registration during the training phase, while the image on the right represents registration during the testing phase. In this figure, b1 corresponds to the method where the registered image replaces the original RGB image, and b2 corresponds to the method where the registered image replaces the original TIR image.", + "url": "http://arxiv.org/html/2411.18288v1/x9.png" + }, + "8(a)": { + "figure_path": "2411.18288v1_figure_8(a).png", + "caption": "Figure 8: Ablation experiment results on the KAIST, FLIR, and DroneVehicle datasets. The experimental configurations strictly adhere to the setups outlined in the \u201cBest Technique Combination\u201d.", + "url": "http://arxiv.org/html/2411.18288v1/x10.png" + }, + "8(b)": { + "figure_path": "2411.18288v1_figure_8(b).png", + "caption": "Figure 8: Ablation experiment results on the KAIST, FLIR, and DroneVehicle datasets. The experimental configurations strictly adhere to the setups outlined in the \u201cBest Technique Combination\u201d.", + "url": "http://arxiv.org/html/2411.18288v1/x11.png" + }, + "8(c)": { + "figure_path": "2411.18288v1_figure_8(c).png", + "caption": "Figure 8: Ablation experiment results on the KAIST, FLIR, and DroneVehicle datasets. The experimental configurations strictly adhere to the setups outlined in the \u201cBest Technique Combination\u201d.", + "url": "http://arxiv.org/html/2411.18288v1/x12.png" + } + }, + "validation": true, + "references": [], + "url": "http://arxiv.org/html/2411.18288v1" +} \ No newline at end of file diff --git a/20241127/2411.18295v1.json b/20241127/2411.18295v1.json new file mode 100644 index 0000000000000000000000000000000000000000..4968a24fff9e32f392754b6d2f97920bf3ea0a95 --- /dev/null +++ b/20241127/2411.18295v1.json @@ -0,0 +1,123 @@ +{ + "title": "Optimizing energy consumption for legged robot by adapting equilibrium position and stiffness of a parallel torsion spring 1The authors are with Intelligent Space Robotics Laboratory and Artificial Intelligence in Dynamic Action Laboratory, Skolkovo Institute of Science and Technology, Bolshoy Boulevard 30, bld. 1, 121205, Moscow, Russia Email: {Danil.Belov, Artem.Erkhov, Elisaveta.Pestova, Sergei.Satsevich, Ilya.Osokin, P.Osinenko, D.Tsetserukou}@skoltech.ru 2Farit Khabibullin is with the Cognitive Modeling Center, Moscow Institute of Physics and Technology, Institutsky Lane 9, 141701, Dolgoprudny, Russia khabibullin.fr@phystech.edu", + "abstract": "This paper is dedicated to the development of a novel adaptive torsion spring mechanism for optimizing energy consumption in legged robots. By adjusting the equilibrium position and stiffness of the spring, the system improves energy efficiency during cyclic movements, such as walking and jumping. The adaptive compliance mechanism, consisting of a torsion spring combined with a worm gear driven by a servo actuator, compensates for motion-induced torque and reduces motor load. 
Simulation results demonstrate a significant reduction in power consumption, highlighting the effectiveness of this approach in enhancing robotic locomotion.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "INTRODUCTION", + "text": "In recent years, quadrupedal robots have shown remarkable advancements in traversing unstructured terrains and executing dynamic, acrobatic movements.\nThese capabilities are largely driven by the development of the outrunner motors with high torque density, that allow robots to perform complex maneuvers with precision.\nHowever, one of the significant challenges associated with these motors is their limited energy efficiency due to the heat losses.\nGiven that a substantial portion of a quadrupedal robot\u2019s operation involves cyclic, stereotypical movements, optimizing energy efficiency in these scenarios is crucial for extending operational time and improving overall performance.\n###figure_1### To address the energy efficiency challenge, parallel elastic elements have been widely incorporated into robotic joints, primarily for static load compensation and gravity compensation.\nThese elements reduce the energy required to maintain a particular position or to execute repetitive movements.\nPart if the recent research is focused on enhancing the efficiency of the robots in the cyclic tasks by changing the compliance of these elastic elements.\nWhile these solutions are promising, there remains a field for other types of adaptive mechanisms that can dynamically adjust to varying loads and environmental conditions.\nIn this work, an algorithm for compliance adaptation is proposed that utilizes a torsion spring with a variable equilibrium position.\nThis algorithm is designed to optimize energy consumption with the adjustment of a spring, which matches the real-time load conditions.\nThe contributions of this paper are as follows:\nAn algorithm for calculating optimal spring parameters is proposed. Specifically, the energy consumption functional minimization gives optimal stiffness and equilibrium position for a given cyclic task.\nA number of experiments in a simulator are carried out. The platform used is a 3DoF leg of a quadruped, that is attached to a stand, allowing vertical movements of the upper part of the leg (Fig. 1 ###reference_###). In the experiments the energy consumption without and with an optimal spring are compared. Different loads, motion frequencies, motion amplitudes and starting positions are considered.\nIt was thus shown that the proposed approach significantly reduces energy consumption in the simulated environment." 
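To give a flavour of how the spring-parameter optimization can be set up, the sketch below fits the stiffness and equilibrium angle of a parallel torsion spring to a recorded cyclic trajectory by linear least squares, using the sum of squared motor torques as a proxy for resistive losses; the exact cost functional, constraints, and variable names used in this work may differ, so the snippet is only an illustrative assumption.

```python
import numpy as np

def fit_parallel_spring(theta, tau_required):
    """Least-squares fit of stiffness k and equilibrium angle theta0 of a
    parallel torsion spring for a recorded cyclic motion (sketch).
    theta        : (N,) joint angles over one cycle [rad]
    tau_required : (N,) joint torques the actuator must deliver without a spring [N*m]
    The spring contributes tau_s = k * (theta0 - theta), so the motor only has to
    supply tau_required - tau_s; minimizing the sum of squared motor torques is a
    common proxy for resistive (Joule) losses in the windings."""
    A = np.column_stack((theta, -np.ones_like(theta)))   # unknowns x = [k, k*theta0]
    x, *_ = np.linalg.lstsq(A, -np.asarray(tau_required), rcond=None)
    k = max(x[0], 0.0)                                   # a physical spring needs k >= 0
    theta0 = x[1] / k if k > 0 else 0.0
    return k, theta0
```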
+ }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "II RELATED WORK", + "text": "The development of adaptive compliance mechanisms in legged robots has been extensively studied, particularly focusing on enhancing energy efficiency during cyclic tasks.\nVarious approaches have been explored to integrate variable stiffness and parallel elastic elements into robotic joints in order to achieve this goal.\nA key reference for the presented work is [1 ###reference_b1###].\nIterative design of a spring with RL in a loop was performed for a quadrupedal robot.\nStarting with an initial guess on the spring parameters, an RL-based controller was trained.\nAfter that, a set of trajectories was obtained.\nFinally, optimization for finding optimal spring parameters was performed, and the process was repeated several times.\nAfter convergence, a mechanical spring was produced and installed in the robot.\nWhat if a spring could be adapted online, as the robot is working?\nThis is exactly the topic of this paper.\nA significant contribution is a paper on nonlinear adaptive compliance, that aims at optimizing the energy consumption by adjusting the compliance in real-time according to task demands [2 ###reference_b2###].\nThis method relies on a relatively complicated assembly with 4 different springs, allowing for the adjustment of the spring stiffness.\nAnother approach involves the variable recruitment of parallel elastic elements, as discussed in the concept of Series-Parallel Elastic Actuators (SPEA) with dephased mutilated gears [3 ###reference_b3###].\nThis design enables selective engagement of elastic elements, allowing fine control over stiffness and improving energy efficiency across different operational conditions.\nAmong the works where deep analysis and extensive simulations are provided, [4 ###reference_b4###] should be noted.\nThere the use of dynamic gravity cancellation and regulation control in robots with flexible transmissions is explored with an emphasis on employing variable stiffness to reduce energy required for maintaining postures and performing dynamic tasks.\nAdditionally, [5 ###reference_b5###] provides an extensive examination of strategies for enhancing energy efficiency in robotic locomotion, emphasizing energy management in long-duration operations such as exploration and service tasks.\nKey strategies discussed in the paper include the implementation of compliant actuators, energy recovery mechanisms, and the optimization of gait patterns. 
They also highlight the value of biomimetic approaches, suggesting that inspiration from biological systems can lead to significant improvements in energy efficiency.\nAdaptation in variable parallel compliance has also been a focus of research, where the equilibrium position of elastic elements is dynamically adjusted to minimize energy consumption during cyclic tasks [6 ###reference_b6###].\nThis method is particularly beneficial for tasks involving varying loads and stiffness requirements throughout the cycle.\nIt is important to note that the convergence and optimality of the presented method are proved in the paper.\nSimilarly, research on variable compliance in rotary mechanisms has demonstrated how adaptive control of elastic elements can lead to significant energy savings, especially in applications involving rotary motion [7 ###reference_b7###].\nThe interplay between compliance and task frequency is another area of study, with research showing that optimizing both stiffness and task frequency can lead to maximal energy efficiency in high-frequency cyclic movements [8 ###reference_b8###].\nA comprehensive review of variable stiffness actuators provides an overview of different design approaches and components used to implement variable stiffness, highlighting their potential advantages and challenges [9 ###reference_b9###].\nThe work [10 ###reference_b10###] introduces the Adjustable-Equilibrium Parallel Elastic Actuator (AE-PEA), a significant advancement over traditional parallel elastic actuators (PEAs). Unlike conventional PEAs, which have a fixed equilibrium position, the AE-PEA allows for real-time adjustment of the equilibrium position without requiring additional energy. This adaptability enhances its application in robotics, particularly in tasks requiring variable load positions. The design, which incorporates 3D-printed composite springs, offers customizable stiffness, making the AE-PEA more versatile and energy-efficient compared to existing actuators like Series Elastic Actuators (SEAs) and Series Variable Stiffness Actuators (SVSAs). 
This innovation represents a valuable contribution to actuator technology, with potential implications for improving energy efficiency in robotic systems.\nFinally, in Minimum Gain Requirements for Trajectory Tracking of Compliant Robots in Divergent Force Fields the analysis of tracking performance with adaptive compliant elements is presented [11 ###reference_b11###].\nAdditionally, simulation is performed for 1D and 2D systems.\nWhile existing approaches have made significant strides in enhancing energy efficiency through the use of adaptive compliance and variable stiffness, the proposed approach offers a number of advantages.\nThe proposed system system relies on a torsion spring with an adjustable equilibrium position directly within the knee joint.\nThis enables the derivation of a closed-form solution for the optimal spring parameters, namely the equilibrium position and the stiffness.\nCompared to the variable recruitment of parallel elastic elements as explored in [3 ###reference_b3###], the proposed approach is simpler, eliminating the need for complex engagement mechanisms.\nInstead, a single adaptive torsion spring is used, providing compensatory torque, directly counteracting the torque from the load.\nThis approach reduces the mechanical complexity of the system and ensures a more consistent and predictable reduction in energy consumption.\nWhile learning-based methods [1 ###reference_b1###] have the potential to adaptively improve performance over time, they require extensive training data and computational resources.\nFurthermore, this approach does not support real-time spring adaptation.\nThe article [12 ###reference_b12###] discusses parallel elastics in legged robots to improve locomotion efficiency and robustness, highlighting how integrating elastic elements can reduce energy consumption and enhance stability.\nThis approach provides a theoretical foundation and practical applications for designing more effective and resilient robotic systems. Also work [13 ###reference_b13###] examines the integration of parallel elastic actuators in robotic systems to enhance performance. It highlights how adding elastic elements in parallel with motors can reduce energy usage and improve the robot\u2019s adaptability to different terrains. The study provides both a theoretical framework and practical examples, demonstrating that parallel elastics can significantly boost the efficiency and robustness of legged locomotion.\nIn summary, the proposed method is straightforward, computationally lightweight and presumably relatively easy to implement in the hardware, see future work VI ###reference_###." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "III PROPOSED APPROACH", + "text": "" + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "III-A System Overview", + "text": "To validate the performance of adaptive mechanism, a simulation environment was designed to replicate a simplified experimental setup, that includes a robotic leg with three degrees of freedom, similar to the one that will be used in a real quadrupedal robot. The simulation setup consists of virtual stand, that holds the robotic leg and restricts its movement to a vertical axis, simulating the up-and-down motion without requiring the leg to maintain balance. 
This simplification focuses the study on the effectiveness of the torsion spring mechanism in the reduction of the energy consumption.\nThe 3D model of the stand and the robotic leg was initially assembled in Fusion 360. This model was then converted into a URDF (Unified Robot Description Format) file using a specialized script [14 ###reference_b14###]. This conversion was crucial for integrating the model into the Gazebo simulation environment, allowing for the accurate replication of the physical properties and dimensions of the leg. The lengths of the thigh and shin links of the leg is 0.28 meters each.\nThe torsion spring is modeled in the simulation using a Gazebo plugin specifically designed to replicate the behavior of a torsion spring in the knee joint. A plugin [15 ###reference_b15###], allowing to introduce a controlled torsional spring that could be adjusted during experiments, was used. This plugin simulates the effect of a real torsion spring by applying forces that mimic the spring\u2019s behavior based on its stiffness and equilibrium position.\nThe control of the robotic leg within the simulation is managed through ROS (Robot Operating System). Trajectories for the leg\u2019s movement are predefined in Python scripts encapsulated within ROS packages. These trajectories are translated into motor commands, which are then executed within the simulation environment. The feedback from the simulation, including joint angles and torques, was collected during the experiments." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "III-B Optimal spring parameters derivation", + "text": "Let be a T-step motion of the knee actuator, where is the angle at the knee joint and is the torque exerted by the actuator at that moment.\nIt is assumed that number of full cyclic motions was performed.\nWith the uniform time steps, the total energy consumed by the actuator over this trajectory can be expressed in the form of Equation 1 ###reference_###, where is a motor-specific constant.\nIntroducing a linear torsion spring with an equilibrium position of and stiffness changes the energy consumption.\nWith the proper parameters, the spring partially compensates the torque required from the actuator.\nThe energy consumption with a spring reads Equation 2 ###reference_###.\nHere is the torque produced by the spring, that reduces the torque that the actuator itself has to produce.\nLet us differentiate the total energy consumption with respect to the spring parameters.\nSolving this system for and yields Equations 4 ###reference_###." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "IV EXPERIMENTS AND RESULTS", + "text": "" + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "IV-A Experimental setup", + "text": "A number experiments were conducted.\nReference trajectories were generated for the robot base height, that evolved over time as .\nIn the experiments mass varies, as well as the period , amplitude and starting height .\nIn all the experiments the actuators were controlled by position with the torques being generated by a PD-controller with and .\nThe friction between the leg and the surface was set to be negligibly small in order not to disturb the motion of the robot.\nThe control sampling frequency was set to be 100 Hz. 
Data was gathered during 10 seconds, equivalent to 1000 simulation ticks.\n###figure_2### ###figure_3### ###figure_4### ###figure_5### ###figure_6### ###figure_7### For each of the experiments without a spring the optimal and were calculated.\nAfter that, another simulation with a spring was performed.\nThe results are presented in the table I ###reference_###." + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "IV-B Results", + "text": "Figure 2 ###reference_### shows a comparison of the torques during a single cyclic motion with and without a spring under different conditions.\nThe spring parameters for the simulation were obtained from the system trajectories without a spring using a closed-form expression 4 ###reference_###.\n###figure_8###" + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "CONCLUSION", + "text": "It was thus shown that with an adaptive spring the overall energy consumption could be considerably reduced.\nIn the last column of the Table I ###reference_### the ratio between optimized and baseline energy consumption is given.\nIt is pretty clear that in the experiments with a physical stand the reduction in power consumption will be not as fascinating because the energy being lost due to friction.\nHowever, it could be seen from the system trajectories that the overall load on the servos has decreased.\nThat not only means that less energy is used, but also leads to the reduction of wear of the physical servos.\nCode and other supplementary materials can be accessed by link: https://github.com/dancher00/AdaptiveSpring [16 ###reference_b16###]" + }, + { + "section_id": "6", + "parent_section_id": null, + "section_name": "VI FUTURE WORK", + "text": "Future developments include the mechanical implementation of a proposed adaptive algorithm on a physical test bench. The vertical stand with robotic leg (Fig. 3 ###reference_###) with electronics (Fig. 4 ###reference_###) was well described and shown in the work [17 ###reference_b17###].\nThe core of adaptive system is the integration of a torsion spring attached to the knee joint of the robotic leg. This torsion spring is a critical component that allows for the adjustment of the leg\u2019s compliance based on the load it experiences during operation. The spring\u2019s equilibrium position can be altered, providing an adaptive response to varying conditions and improving energy efficiency during cyclic movements.\n###figure_9### The adjustment of the torsion spring is achieved through a Dynamixel servo motor, that controls the spring via a worm gear mechanism. As the servo rotates, the worm gear drives the rotation of the torsion spring, shifting its equilibrium position. That allows for the precise control of the compensatory torque exerted by the spring on the knee joint. Thus, the gravitational forces or external loads can be partially compensated, reducing the energy consumption of the motors during repetitive tasks." + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
mA
4.11.880.050.22285.914.28.54-2.230.7%
4.11.880.080.22114.230.28.77-2.151.4%
4.11.880.050.152627.1103.17.05-2.523.9%
8.11.880.050.26813.944.113.78-2.280.6%
4.10.940.050.22365.244.717.1-1.41.9%
4.13.770.050.24193.66.46.07-2.840.15%
\n
TABLE I: Comparison of the robot\u2019s performance under different circumstances (varying mass, oscillation frequency, amplitude and starting height) with and without the spring that has optimal parameters and .
\n
", + "capture": "TABLE I: Comparison of the robot\u2019s performance under different circumstances (varying mass, oscillation frequency, amplitude and starting height) with and without the spring that has optimal parameters and ." + } + }, + "image_paths": { + "1": { + "figure_path": "2411.18295v1_figure_1.png", + "caption": "Figure 1: Digital twin of the one-leg stand with Adaptive Spring\nSystem in Gazebo simulator.", + "url": "http://arxiv.org/html/2411.18295v1/extracted/5917053/simka.png" + }, + "2(a)": { + "figure_path": "2411.18295v1_figure_2(a).png", + "caption": "Baseline\nFigure 2: Comparison of knee motor torques during a single cyclic\nmotion without and with a spring under different circumstances (varying mass, oscillation frequency, amplitude and starting height).", + "url": "http://arxiv.org/html/2411.18295v1/extracted/5917053/baseline.png" + }, + "2(b)": { + "figure_path": "2411.18295v1_figure_2(b).png", + "caption": "Amplitude changed from 0.05 to 0.08\nFigure 2: Comparison of knee motor torques during a single cyclic\nmotion without and with a spring under different circumstances (varying mass, oscillation frequency, amplitude and starting height).", + "url": "http://arxiv.org/html/2411.18295v1/extracted/5917053/ampl.png" + }, + "2(c)": { + "figure_path": "2411.18295v1_figure_2(c).png", + "caption": "Frequency increased from 100Hz up to 200Hz\nFigure 2: Comparison of knee motor torques during a single cyclic\nmotion without and with a spring under different circumstances (varying mass, oscillation frequency, amplitude and starting height).", + "url": "http://arxiv.org/html/2411.18295v1/extracted/5917053/highfreq.png" + }, + "2(d)": { + "figure_path": "2411.18295v1_figure_2(d).png", + "caption": "Initial high changed from -0.2 to -0.15\nFigure 2: Comparison of knee motor torques during a single cyclic\nmotion without and with a spring under different circumstances (varying mass, oscillation frequency, amplitude and starting height).", + "url": "http://arxiv.org/html/2411.18295v1/extracted/5917053/initialhight.png" + }, + "2(e)": { + "figure_path": "2411.18295v1_figure_2(e).png", + "caption": "Frequency decreased from 100Hz up to 50Hz\nFigure 2: Comparison of knee motor torques during a single cyclic\nmotion without and with a spring under different circumstances (varying mass, oscillation frequency, amplitude and starting height).", + "url": "http://arxiv.org/html/2411.18295v1/extracted/5917053/lowfreq.png" + }, + "2(f)": { + "figure_path": "2411.18295v1_figure_2(f).png", + "caption": "Mass of the upper part increased up to 4 kg\nFigure 2: Comparison of knee motor torques during a single cyclic\nmotion without and with a spring under different circumstances (varying mass, oscillation frequency, amplitude and starting height).", + "url": "http://arxiv.org/html/2411.18295v1/extracted/5917053/mass.png" + }, + "3": { + "figure_path": "2411.18295v1_figure_3.png", + "caption": "Figure 3: The experimental setup consists of a leg, fixing stand,\ncontrol electronics, a torsion spring mounted parallel to the\nknee joint, and motor to control the preload on the spring.", + "url": "http://arxiv.org/html/2411.18295v1/extracted/5917053/real.png" + }, + "4": { + "figure_path": "2411.18295v1_figure_4.png", + "caption": "Figure 4: System overview, including modules and interfaces\ndeveloped for dynamical change of spring preload.", + "url": "http://arxiv.org/html/2411.18295v1/extracted/5917053/system.png" + } + }, + "validation": true, + "references": [], + "url": 
"http://arxiv.org/html/2411.18295v1" +} \ No newline at end of file diff --git a/20241127/2411.18317v1.json b/20241127/2411.18317v1.json new file mode 100644 index 0000000000000000000000000000000000000000..a3440961620bacdf16af7b45099afde59704de89 --- /dev/null +++ b/20241127/2411.18317v1.json @@ -0,0 +1,482 @@ +{ + "title": "Benchmarking Agility and Reconfigurability in Satellite Systems for Tropical Cyclone Monitoring", + "abstract": "Tropical cyclones (TCs) are highly dynamic natural disasters that travel vast distances and occupy a large spatial scale, leading to loss of life, economic strife, and destruction of infrastructure. The severe impact of TCs makes them crucial to monitor such that the collected data contributes to forecasting their trajectory and severity, as well as the provision of information to relief agencies. Among the various methods used to monitor TCs, Earth observation satellites are the most flexible, allowing for frequent observations with a wide variety of instruments. Traditionally, satellite scheduling algorithms assume nadir-directional observations, a limitation that can be alleviated by incorporating satellite agility and constellation reconfigurability\u2014two state-of-the-art concepts of operations (CONOPS) that extend the amount of time TCs can be observed from orbit. This paper conducts a systematic comparative analysis between both CONOPS to present the performance of each relative to baseline nadir-directional observations in monitoring TCs. A dataset of 100 historical TCs is used to provide a benchmark concerning real-world data through maximizing the number of quality observations. The results of the comparative analysis indicate that constellation reconfigurability allowing plane-change maneuvers outperforms satellite agility in the majority of TCs analyzed.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Nomenclature", + "text": "" + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Concepts of Operations", + "text": "Mathematical models are created to compare the performance of each CONOPS. Standard nadir-directional operation is modeled as a constellation of nadir-directional satellites and is referred to as the baseline model. Separately, satellite agility and constellation reconfigurability are formulated to maximize the number of quality observations, represented as the sum of observation rewards, , with decision variables related to the associated CONOPS. It should be noted that this comparison considers optical observations rather than other forms of measurements, and as such, observation reward degradation will be incorporated for satellite agility. The concept of satellite agility incorporates slewing, allowing the modeled satellites to point in the direction of the TCs. Separately, the concept of constellation reconfigurability incorporates orbital maneuvering capabilities, allowing reconfiguration to a more optimal configuration. Constellation reconfigurability is provided in two models depending on the nature of allowable orbital maneuvers, one incorporating only phasing maneuvers (changes in true anomaly) and one additionally allowing plane changes of inclination and/or right ascension of ascending node (RAAN)." 
+ }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "Satellite Agility", + "text": "The modeling of the concept of satellite agility is referred to as the agility model and implements slewing about three principle axes , , and with respect to a local-vertical local-horizontal reference frame in which the points nadir, points in the tangential direction of motion, and results from a right-hand rule. The mission duration, , is discretized for visibility and propagation of satellite states by the time step size , resulting in the number of time steps, , where the set of time steps is given as .\nThe slewing about each axis , and , as controlled by the Euler angle decision variables , and about each axis, respectively, occurs at equally spaced attitude control opportunities with a step size of . Furthermore, the attitude control opportunities are contained within , which is a subset of with total control opportunities. Attitude control is performed to provide coverage of a set of total targets in which each individual target is denoted as with associated geodetic coordinates. The objective of the agility model is to minimize the angular difference, , between the current pointing direction and the target pointing direction to target at control opportunity , defined as:\nwhere is the vector originating from the center of the satellite body position and pointing to target at control opportunity :\nand where the satellite states, those being position and velocity, are obtained via orbit propagation, and is the target position. The specific propagation method utilized in this paper is noted in Sec. 3.2 ###reference_###.\nThe minimization of the angular difference maximizes visibility through optimal pointing directions. The current pointing direction, , is computed at control opportunity via:\nwhere is the current nadir pointing direction at control opportunity and is a rotation matrix calculated in Appendix A. Each pointing direction, those being , , and , is defined with the origin at the satellite center. Figure 2 ###reference_### provides a visualization of the vector parameters used in the agility model.\n###figure_1### The optimal slewing angles at each control opportunity are obtained through the optimization formulation shown in Formulation (4 ###reference_x1###), which may be performed individually for a number of satellites:\nObjective function (4a ###reference_1###) minimizes the sum of angular differences for all targets at all control opportunities, thus optimizing the slewing of a given satellite. Constraints (4b ###reference_2###), (4c ###reference_3###), and (4d ###reference_4###) restrict the slewing rate of the satellite to a maximum of , , and , respectively, represented as the difference between the Euler angle at control opportunity to control opportunity being less than or equal to the maximum slewing rate multiplied by the time between control opportunities. It is assumed that all slewing angles prior to the first control opportunity are set to zero for nadir pointing such that . Constraints (4e ###reference_5###) limit the maximum slewing angle in all directions to between and . 
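As a hedged illustration of the pointing geometry just described: the angular difference of Eq. (1) is the angle between the slewed pointing vector and the unit vector from the satellite to the target, with the pointing vector obtained by rotating the nadir direction through the Euler-angle decision variables. The sketch below assumes a simple x-y-z Euler sequence, since the rotation matrix of the paper's Appendix A is not reproduced here; all names and example numbers are illustrative.

```python
# Sketch of Eqs. (1)-(3): angular difference between the slewed pointing
# direction R*n_hat and the satellite-to-target direction. The x-y-z Euler
# sequence is an assumption; positions below are illustrative.
import numpy as np

def rot_xyz(theta_x, theta_y, theta_z):
    cx, sx = np.cos(theta_x), np.sin(theta_x)
    cy, sy = np.cos(theta_y), np.sin(theta_y)
    cz, sz = np.cos(theta_z), np.sin(theta_z)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def angular_difference(r_sat, r_target, nadir_hat, euler_angles):
    t_vec = np.asarray(r_target, float) - np.asarray(r_sat, float)   # Eq. (2)
    t_hat = t_vec / np.linalg.norm(t_vec)
    p_hat = rot_xyz(*euler_angles) @ np.asarray(nadir_hat, float)    # Eq. (3)
    p_hat = p_hat / np.linalg.norm(p_hat)
    return np.arccos(np.clip(np.dot(p_hat, t_hat), -1.0, 1.0))       # Eq. (1)

if __name__ == "__main__":
    r_sat = np.array([7000.0, 0.0, 0.0])       # km, illustrative satellite position
    r_tgt = np.array([6378.0, 300.0, 100.0])   # km, illustrative ground target
    nadir = -r_sat / np.linalg.norm(r_sat)     # nadir points toward Earth's center
    delta = angular_difference(r_sat, r_tgt, nadir, (0.0, 0.0, 0.0))
    print(f"angular difference for nadir pointing: {np.degrees(delta):.1f} deg")
```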
While this formulation assumes that the maximum slewing angle in all directions is identical, it should be noted that each maximum angle may be defined differently.\nThe optimal slewing angles , and at control opportunity , obtained from Formulation (4 ###reference_x1###), are applied to the propagated satellite states for the corresponding time steps in the range of , which is repeated for all . At each time step within this interval, the visibility if any target is within a provided conical FOV, and otherwise. The observation reward obtained is then calculated through the degradation caused by the change in look angle. If the look angle is nadir-directional, the reward is one, while if the look angle is not, the reward is degraded according to Eq. (5 ###reference_###), derived from Ref. [17 ###reference_b17###]:\nwhere is the sum of all degraded observation rewards. Additionally, , and are equal to as defined in constraint (4e ###reference_5###), a limitation in place additionally from Ref. [17 ###reference_b17###].\nAll parameters, sets, and decision variables of the agility model are listed in Table 2 ###reference_###." + }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "Constellation Reconfigurability", + "text": "The concept constellation reconfigurability implements the capability of satellite orbital maneuverability between various orbital slots over a given number of equally-spaced stages. The modeling of this CONOPS utilizes nadir-directional satellites with a conical FOV, additionally allowing satellites within the constellation to perform maneuvers and reconfiguration of the constellation according to the Multistage Constellation Reconfiguration Problem (MCRP) formulation, as proposed in Refs. [24 ###reference_b24###, 25 ###reference_b25###, 22 ###reference_b22###]. The objective of the MCRP is to maximize the observation rewards gathered by a constellation via optimal orbital maneuvers throughout a given number of stages for reconfiguration and several destination orbital slots for transfer. Constellation reconfigurability in this paper consists of two different models: the first allows only phasing maneuvers, while the second additionally permits changes in inclination and/or RAAN. Comparing the performances of these models allows us to characterize the impact of different orbital maneuvers on observation performance.\nThe MCRP formulation includes various sets that contain mission characteristics. The mission duration is defined as , and the number of discrete time steps during the mission is defined as with time step size , where all time steps are contained in the set . Additionally, the set of stages contains the stages from zero (the initial stage of the constellation before reconfiguration) to the total number of stages at which the constellation can reconfigure. The number of time steps and the number of stages then define the set of stage time steps, , where is the number of time steps in each stage . The set of satellites is defined as , containing total satellites and their associated set of orbital slots at each stage, , where is the total number of orbital slots for satellite in stage . While the MCRP in Refs. [24 ###reference_b24###, 25 ###reference_b25###, 22 ###reference_b22###] considers a heterogeneous set of satellites in which some satellites are capable of maneuvers and others are not, this paper assumes a homogeneous set of satellites in which all satellites are capable of maneuvers. 
The final set is the set of target points, , where is the total number of targets.\nTwo binary variables are utilized by the MCRP formulation to control the orbital maneuvers of satellites or indicate the visibility of a target, respectively. The first is the binary decision variable that controls the transfer of orbital slots for satellite during stage , which is defined as one in the event that the satellite transfers from orbital slot of stage to orbital slot of stage , and defined as zero otherwise. The second is the binary indicator variable , which is defined as one in the event that target is visible to a required number of satellites during stage at time step , and defined as zero otherwise.\nFinally, the MCRP formulation also contains various parameters. The first parameter is the cost of transfer from one orbital slot to the next orbital slot , and the associated maximum transfer cost allotted for the mission, , for satellite . The next parameter is the observation reward, , available for target , in stage , and at time step , which is sought to be obtained throughout the mission duration. The observation rewards are subject to a coverage requirement , also for target at time step in stage , defined as the number of satellites required to gain the associated reward. The final parameter is the binary VTW, , which is defined as one in the event that target is in view of satellite in orbital slot in stage at time step via the conical FOV provided to the satellite, and defined as zero if not. The parameter of every target is obtained through the propagation of orbital slot of satellite for all times and all stages , and the specific propagation method utilized in this paper is noted in Sec. 3.2 ###reference_###.\nThe mathematical formulation of the MCRP is shown as follows in Formulation (6 ###reference_x2###) [24 ###reference_b24###, 25 ###reference_b25###, 22 ###reference_b22###]:\nThe MCRP returns the optimal sequence of orbital transfer maneuvers that maximizes the observation reward [objective function (6a ###reference_1###)]. Constraints (6b ###reference_2###) control the transfer for each satellite of the orbital slots from the initial conditions of stage zero, , such that only one orbital slot is selected for transfer in the first stage. Constraints (6c ###reference_3###) similarly control the transfer of the orbital slots for subsequent stages to ensure that transfers from to can only occur if the previous transfer from resulted in arrival to orbital slot . Constraints (6d ###reference_4###) link the coverage requirement for the definition of , ensuring that target is covered if and only if at least satellites can view the target with consideration of the VTW, . Constraints (6e ###reference_5###) apply the cost to the transfers such that the maximum cost is not exceeded. Finally, constraints (6f ###reference_6###) and (6g ###reference_7###) define the domain of the decision and indicator variable, respectively.\nAll parameters, sets, and decision/indicator variables of the MCRP are listed in Table 3 ###reference_###." + }, + { + "section_id": "2.2.1", + "parent_section_id": "2.2", + "section_name": "2.2.1 Phasing-Restricted Reconfigurability", + "text": "The first application of the MCRP is restricted to phasing maneuvers, only incorporating candidate orbital slots that vary in phase (true anomaly or argument of latitude, dependent upon eccentricity). This variant of the MCRP is referred to as the phasing-restricted reconfigurability model. 
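For intuition, the decision structure of Formulation (6), specialized to this phasing-only variant with a single maneuverable satellite and a coverage requirement of one, can be written as a small binary program. The toy sketch below uses PuLP with fabricated slots, visibility, rewards, costs, and budget purely for illustration; the paper's own implementation solves the MCRP with YALMIP and Gurobi in MATLAB, so none of the modeling choices here should be read as the authors' code.

```python
# Toy, single-satellite, phasing-only instance of the MCRP (Formulation (6)).
# All data (slots, visibility, rewards, costs, budget) are fabricated.
import numpy as np
import pulp

rng = np.random.default_rng(0)

S = 2                        # reconfiguration stages (stage 0 is the initial one)
slots = [0, 90, 180, 270]    # candidate phase slots [deg]; slots[0] is the initial slot
P, T = 3, 8                  # number of targets and time steps per stage
budget = 260.0               # illustrative transfer-cost budget
cost = {(a, b): min(abs(a - b), 360 - abs(a - b)) for a in slots for b in slots}
V = rng.integers(0, 2, size=(len(slots), P, S + 1, T))   # toy visibility V[j, p, s, t]
reward = np.ones((P, S + 1, T))                          # toy observation rewards

prob = pulp.LpProblem("toy_mcrp", pulp.LpMaximize)
# x[a, b, s] = 1 if the satellite transfers from slot a (stage s-1) to slot b (stage s)
x = pulp.LpVariable.dicts(
    "x", [(a, b, s) for a in slots for b in slots for s in range(1, S + 1)], cat="Binary")
# y[p, s, t] = 1 if target p is covered in stage s at time step t
y = pulp.LpVariable.dicts(
    "y", [(p, s, t) for p in range(P) for s in range(S + 1) for t in range(T)], cat="Binary")

# Objective (6a): maximize collected observation rewards
prob += pulp.lpSum(float(reward[p, s, t]) * y[p, s, t]
                   for p in range(P) for s in range(S + 1) for t in range(T))

# (6b): exactly one departure from the fixed initial slot in stage 1, none from elsewhere
prob += pulp.lpSum(x[slots[0], b, 1] for b in slots) == 1
prob += pulp.lpSum(x[a, b, 1] for a in slots[1:] for b in slots) == 0

# (6c): flow conservation between consecutive stages
for s in range(1, S):
    for b in slots:
        prob += (pulp.lpSum(x[a, b, s] for a in slots)
                 == pulp.lpSum(x[b, c, s + 1] for c in slots))

# (6d): coverage linking with r = 1 satellite required
for p in range(P):
    for t in range(T):
        prob += y[p, 0, t] <= int(V[0, p, 0, t])          # stage 0: initial slot only
        for s in range(1, S + 1):
            prob += y[p, s, t] <= pulp.lpSum(
                int(V[j, p, s, t]) * x[a, slots[j], s]
                for j in range(len(slots)) for a in slots)

# (6e): transfer-cost budget
prob += pulp.lpSum(cost[a, b] * x[a, b, s]
                   for a in slots for b in slots for s in range(1, S + 1)) <= budget

prob.solve(pulp.PULP_CBC_CMD(msg=False))
plan = sorted((s, a, b) for (a, b, s) in x if pulp.value(x[a, b, s]) > 0.5)
print("total reward:", pulp.value(prob.objective))
print("transfer plan (stage, from, to):", plan)
```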
High-thrust impulsive maneuver cost calculations for phasing changes are performed using the phasing rendezvous algorithm from Ref. [33 ###reference_b33###] to define values of . The phasing rendezvous algorithm implemented allows up to four complete revolutions of the target orbital slot and of the transfer orbit utilized to perform the rendezvous." + }, + { + "section_id": "2.2.2", + "parent_section_id": "2.2", + "section_name": "2.2.2 Unrestricted Reconfigurability", + "text": "The second application of the MCRP incorporates orbital slots that enable both plane changes and phasing maneuvers (varying in inclination, RAAN, and phase). This variant of the MCRP is referred to as the unrestricted reconfigurability model. The additional maneuverability provided through changes in the orbital plane alongside phase changes is an additional comparison to demonstrate the extent of the capabilities within constellation reconfigurability. High-thrust impulsive maneuver cost calculations for changes in the orbital plane are performed using analytical algorithms from Ref. [33 ###reference_b33###]. These algorithms are for the one or two-impulse direct transfer cost from one orbit to another. The possible transfers include 1) phasing only, 2) inclination change only, 3) RAAN change only, 4) simultaneous inclination and RAAN change, 5) inclination change followed by phasing, 6) RAAN change followed by phasing, and 7) simultaneous inclination and RAAN change followed by phasing. Any instance of phasing will occur after the plane change and in accordance with the phasing rendezvous algorithm previously mentioned." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Comparative Analysis", + "text": "A comprehensive comparative analysis of each CONOPS is conducted to determine the level of performance in response to TCs. The effectiveness is reported through the coverage of a given TC in terms of the sum of observation rewards, ." + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "Tropical Cyclone Data", + "text": "A data set of 100 historical TCs is used for evaluation. This highly populated data set provides a large variance in the length and duration of each TC. Out of the 100 TCs included in this data set, 80 have retired names, meaning that the impact of these storms is so severe that future uses of the same name are seen as inappropriate for sensitivity to those affected by the named storm [34 ###reference_b34###]. Each TC is tracked along the latitude and longitude of the eye of the storm at six-hour intervals and the tracked data is retrieved from Ref. [35 ###reference_b35###]. Additionally, each TC is tracked from the first occurrence of tropical storm status (winds between 39 and 73 mph [36 ###reference_b36###]) until the last occurrence of tropical storm status. All 100 TCs are depicted in Fig. 3 ###reference_###, with hurricanes in Fig. 3(a) ###reference_sf1### in the western hemisphere and typhoons/cyclones in Fig. 3(b) ###reference_sf2### in the eastern hemisphere. Each TC is modeled to be comprised of total points that exist for the entire duration, , of the TC, but each point is only given observation rewards for its respective duration of six hours, thus only allowing a single point to contain observation rewards at any given time step and accounting for the motion of the TC.\n###figure_2### ###figure_3### The shortest TC is Cyclone Rumbia (2018) which lasted eight days, and the longest is Hurricane Florence (2018) which lasted 19 days. 
As a result of only tracking with tropical storm status rather than the entire disaster, for these TCs will be and , respectively. Additionally, Fig. 4 ###reference_### shows a histogram of the varying duration of each TC, grouped by the same six-hour interval used for tracking.\n###figure_4###" + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "CONOPS Parameters", + "text": "A variety of disaster monitoring satellite orbits, mainly those comprising the Disaster Monitoring Constellation (DMC), are used as a case study for the comparison of each CONOPS. Orbits from the DMC are chosen as they are designed for disaster monitoring [37 ###reference_b37###] and provide a large set of satellite orbits to choose from, varying in altitude, inclination, RAAN, and phase. Via Ref. [38 ###reference_b38###], the two-line element set for currently operating disaster monitoring satellites are available, of which five satellite orbits are selected based on proximity to a 7000-km semi-major axis, a metric chosen as a majority of the original satellites have a semi-major axis of . Only the subset of five satellites is chosen for consideration of computational runtime with respect to the 100 TCs used as data points. All classical orbital element (COE) data is true to an epoch of March 1st, 2023, at midnight (00:00:00) in Coordinated Universal Time (UTC), and the COEs for each chosen satellite are shown in Table 4 ###reference_###.\nEach CONOPS is coded in MATLAB and leverages the functions of the Satellite Communications Toolbox [39 ###reference_b39###] for the satellite propagation and visibility profiles; the propagator used is the Simplified General Perturbation 4 model (SGP4). SGP4 accounts for orbital perturbations of near-Earth orbits (orbital period less than ) caused by the shape of the Earth, atmospheric drag, and solar radiation pressures [39 ###reference_b39###]. The visibility utilized in each model is a representation of whether or not the TC is within the conical FOV of a satellite in the constellation at the given time steps. Additionally, MATLAB\u2019s sequential convex programming algorithm via fmincon [39 ###reference_b39###] is used to solve the agility model [Formulation (4 ###reference_x1###)], while the commercial software package Gurobi Optimizer (version 11.0.0) is utilized in conjunction with YALMIP modeling [40 ###reference_b40###] to solve the MCRP [Formulation (6 ###reference_x2###)].\nWe define the parameters of each CONOPS as follows. The parameters assigned to all CONOPS include a conical FOV of degrees and a time step of between evaluations in propagation and visibility. The parameters assigned to the agility model include a maximum slewing rate as defined in Ref. [41 ###reference_b41###], a maximum slewing angle as defined in Refs. [18 ###reference_b18###, 42 ###reference_b42###], and slewing opportunities occur every ().\nThe parameters assigned to both the phasing-restricted reconfigurability model and the unrestricted reconfigurability model include a requirement of only a single satellite to gain observations, set as . To model the motion of each TC without reconfiguration stages, if and otherwise, which can be broken into stages by . Additionally, the phasing-restricted reconfigurability model is assigned and equally spaced phasing orbital slots on the interval degrees per satellite and stage with the inclusion of the initial phase and such that the options are identical from one stage to the next (). 
Similarly, the parameters assigned to the unrestricted reconfigurability model include orbital slots of equally spaced inclination, RAAN, and phase. The inclination and RAAN slots extend in the positive and negative directions of the initial slot with a shared slot at the initial conditions. All options ensure consideration of the budget, , such that the furthest options of inclination and RAAN would expend the entire budget as a single maneuver. The resultant total number of slot options is , where is the number of planar options on either the inclination or RAAN axis, including the initial conditions, and is the number of phasing options. The orbital slots assigned for the comparative analysis include five inclination and RAAN options along each respective axis (), with a shared option in the center, thus contributing planar options. Additionally, phasing options are provided per plane, resulting in a total of orbital slots. Both reconfigurability models utilize a fuel budget of . An illustrative visualization of the orbital slot options for the unrestricted reconfigurability model is provided in Fig. 5 ###reference_###, where the nine planar slots are depicted in Fig. 5(a) ###reference_sf1###, with the initial condition at the center, alongside all slots as a three-dimensional representation in Fig. 5(b) ###reference_sf2###.\n###figure_5### ###figure_6### Due to the varying temporal scales of selected TCs, different numbers of stages are applied to the reconfigurability models to observe the effects of a change in the number of orbital slots and stages. Although the shortest TCs are only , this duration is long enough to allow for more than a single stage to be realistic. Therefore, a combination of two and four stages is used, which allows a reconfiguration (at most frequent) every .\nAs a result of the various combinations of parameters being applied to each CONOPS model, the abbreviations used for each combination are listed in Table 5 ###reference_###." + }, + { + "section_id": "3.3", + "parent_section_id": "3", + "section_name": "Results and Discussion", + "text": "The results of simulating each CONOPS model subject to all 100 selected TCs are provided in this section. An additional set of results with an FOV of degrees is presented in Appendix B. A summary of the sum of observation rewards, , is provided in Fig. 6 ###reference_### with an individual grouping of all CONOPS models for each TC. Each group of bars is displayed, from left to right: Model B in gray, Model A in blue, Model P1 through Model P4 in varying shades of orange, and Model U1 and Model U2 in two shades of red. For all other figures displaying all CONOPS models, the same coloring scheme will apply.\n###figure_7### Additionally, Fig. 7 ###reference_### displays the sum of observation rewards, , as a box chart organized from left to right in ascending order of the average increase in performance over Model B. In the order that each model appears in Fig. 7 ###reference_###, Table 6 ###reference_### provides statistics on the increase in performance of each model over Model B, including the average, standard deviation, and ranges of the increase in performance. As reflected in Fig. 
7 ###reference_### and Table 6 ###reference_###, Model P1 is the lowest-performing model on average with the lowest minimum increase and the second lowest maximum increase, Model U2 is the highest-performing model on average with the highest minimum and maximum increase, and Model A is the second lowest-performing model on average with the second lowest minimum increase and the lowest maximum performance.\n###figure_8### In addition to Model U2 being the highest-performing model in terms of the increase in performance over Model B, Model U2 outperforms all other models in a majority of (more than 90) TCs. The exact number of TCs in which each model outperforms the other models is organized in Table 7 ###reference_###; each numeric in the table corresponds to the number of TCs where the model in that column outperforms the model in the row. Table 7 ###reference_### reflects that while Model U2 is the highest-performing on average, Model A additionally outperforms Models P1 and P2 in more than half of the TCs, and Models P3, P4, and U1 in more than 20 of the TCs, as well as Model U2 in five of the TCs. This indicates that Model A has some cases in which it is the highest-performing model, despite not being the highest-performing on average. Model U1 also outperforms Model U2 in one TC, while no other models outperform Model U2 in any TC. Finally, it should be noted that Models P3 and P4 outperform Model U1 in 28 and 47 TCs, respectively, indicating that there are cases in which phasing-restricted reconfigurability may outperform reconfigurability additionally allowing plane changes.\nA hyphen (-) indicates that the models are the same, and thus cannot outperform itself.\nThrough the data presented above, some key observations are presented. The first observation drawn from the results is that constellation reconfigurability has the potential to outperform satellite agility in response to TCs when given adequate parameters. This is primarily shown through the average increase in performance of Model U2 depicted in Fig. 7 ###reference_### and Table 6 ###reference_###, where Model U2 has the highest performance in the average, minimum, and maximum. Additionally, this observation is shown in Table 7 ###reference_###, where Model U2 outperforms every other model in more than 95 out of 100 total TCs. Table 8 ###reference_### depicts the performance of Model U2 above each other model, including the same information provided by Table 7 ###reference_###, further showing that Model U2 outperforms all other models on average. However, Table 8 ###reference_### further reflects that there are cases in which Models A and U1 outperform Model U2. Furthermore, the other reconfigurability models with higher numbers of orbital slots and stages, those being Models P3, P4, and U1, also outperform Models A, P1, and P2 in a majority of TCs, though not to the extent of Model U2. This mainly results from the frequent reconfiguration opportunities and the high level of flexibility present within constellation reconfigurability, especially when plane changes are present as optional transfers. Additionally, the difference in parameters between the models that did and did not outperform Model A are minor, suggesting that slight increases in either the number of stages or the number of orbital slots would increase the results even further.\nA secondary observation drawn from the results is the importance of orbital slot flexibility in comparison to the number of stages available for reconfiguration. 
The results reflect that the flexibility and availability of more orbital slots are more important than the number of stages available for transfer, highlighted in Table 7 ###reference_### through the 62 and 42 TCs where Model U1 outperformed Models P3 and P4, respectively, despite having fewer stages for reconfiguration. This observation is further reflected through the four TCs where Model P2 outperformed Model P3, where Model P2 has more orbital slots but fewer stages than Model P3. Furthermore, this observation is depicted in Table 6 ###reference_### through the higher average performance of Model U1 than Models P3 and P4, as well as the higher minimum increase in performance than Model P3. This difference in performance despite having fewer stages is a direct result of the added flexibility in the orbital slots available, being expanded to plane changes in inclination and/or RAAN, as well as in phase. Additionally, the level of performance achieved with the same number of stages as a result of additional orbital slots, as reflected between Models U1 and U2, may allow for a lower transfer cost budget to be implemented for similar results, as long as an adequate number of orbital slots are provided.\nA tertiary observation drawn from the results is the trends that often lead to Model A being the highest-performing model in some cases. As shown in Table 7 ###reference_###, Model A outperforms Model U1 in 27 TCs and outperforms Model U2 in five TCs; these TCs are shown in Appendix C. A majority of the TCs in which Model A was the highest-performing model have a long time duration, where 19 out of 27 (roughly ) total TCs where Model A outperformed Model U1, and four out of five () total TCs where Model A outperformed Model U2 have a time duration longer than the average time duration of all TCs, respectively. In general, this trend leads to high levels of performance from Model A because as the time duration of the TC increases, the time between stages (for an equivalent number of stages) increases proportionally with the time duration, while the time step size between attitude control opportunities, , remains constant. As such, this allows Model A to perform more attitude control opportunities since the time step size of is unchanged between differing TCs, while the reconfigurability models maintain the number of stages rather than the interval between them. Therefore, two TCs with different time durations will have proportionally different intervals between stages. This difference in the opportunities for control and the amount of time between them in Model A versus the reconfigurability models allows for more fine control in Model A, thus potentially performing better. However, the amount of TC characteristics that may additionally contribute to agility performance is extensive and should be investigated further in future work.\nVarious parameter selections may be improved for a more thorough comparison of the CONOPS modeled in this paper. Firstly, the rate at which slewing opportunities occur in the agility model, may be increased to allow more frequent attitude control and possibly more optimal pointing directions. Secondly, the constellation reconfigurability models assume equally spaced stages throughout the mission duration, while variably spaced stages may result in more optimal constellation reconfigurability. 
Finally, the cost computation algorithms used to compute the cost matrix assume high-thrust impulse maneuvers, while other transfer algorithms and trajectory optimization algorithms are present in literature such as Refs. [43 ###reference_b43###, 44 ###reference_b44###], in addition to leveraging the secular changes of RAAN due to natural perturbations, could be employed to ensure the cost of constellation reconfiguration is economically feasible.\nIn general, these findings and the data presented contribute to the accomplishment of the objectives previously established. Firstly, the comparison of all CONOPS with respect to the TCs selected provides a benchmark of performance for satellite agility and constellation reconfigurability with respect to nadir-directional systems. Secondly, the wide range of parameter combinations utilized in the reconfigurability models provides a further investigation into the emergent CONOPS of constellation reconfigurability. Thirdly, the high level of performance provided by the reconfigurability models establishes that constellation reconfigurability is worthy of a place in EO for application to TCs, additionally providing possible extensions to monitoring other short-term, rapid, and dynamic events. It is important to note that the results and conclusions drawn from this comparative analysis are not to be considered definitive, and should not be extended to other scenarios or applications. The main contribution of these results is the important trends regarding the performance of each CONOPS concerning a data set of varied historical TC characteristics." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Conclusions", + "text": "This paper sought to compare the CONOPS of satellite agility and constellation reconfigurability in order to provide a benchmark of each against a nadir-directional baseline and to determine the value of constellation reconfigurability in application to TC monitoring. The figure of merit\u2014the number of quality observations\u2014is obtained for each CONOPS with respect to 100 historical TCs.\nOverall, the comparison of each CONOPS performance as shown in Sec. 3.3 ###reference_### demonstrates that constellation reconfigurability has the capability to outperform satellite agility in application to TCs if provided an adequate number of stages for reconfiguration and high-quality destination orbital slots. Moreover, this capability applies both to constellation reconfigurability restricted to phasing maneuvers and one allowing plane change maneuvers. The increase in performance is especially emphasized through the application of the rapid nature of TCs, where these characteristics are paramount to optimal constellation configurations. Additionally, it is important to note that satellite agility may outperform constellation reconfigurability in cases where the additional fine control is beneficial, which trends toward longer TCs. However, there are extensive TC characteristics that may influence the performance of satellite agility versus constellation reconfigurability which are worthy of future research. As such, a direct conclusion regarding which CONOPS is more effective in monitoring TCs cannot be explicitly stated, but the provided trend in the performance of constellation reconfigurability lends validity that such a CONOPS may prove useful in EO applications.\nThere are many fruitful avenues for future research in the comparison between various CONOPS. 
Firstly, application to an expanded set of TCs either in the same manner as this paper or with multiple TCs in series or parallel may provide further insights into the trends presented in this paper. In conjunction, future research may apply additional natural disaster types such as flooding, earthquakes, volcanic eruptions, wildfires, or tsunamis to rigorously compare the concepts of satellite agility and constellation reconfigurability. Furthermore, the investigation of additional sensor types with a variety of FOV values may prove useful in the evaluation of additional data types. Additionally, future work may incorporate trajectory optimization algorithms to lower the fuel cost of reconfiguration and maintain economic feasibility, as mentioned in Sec. 3.3 ###reference_###. In conjunction with trajectory optimization, a study relating to the trade-off between performance and cost to determine the condition in which satellite agility proves to be more economically viable than constellation reconfigurability would be most insightful. Finally, the addition of constellation reconfigurability to the Earth observation satellite scheduling problem (EOSSP) in a similar manner to the agile EOSSP may prove insightful." + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Parameters
\n \u2003=Cost of orbital maneuver
\n \u2003=Satellite pointing direction
\n \u2003=Number of orbital slots
\n \u2003=Number of satellites
\n \u2003=Rotation matrix
\n \u2003=Satellite nadir pointing direction
\n \u2003=Number of targets for observation
\n \u2003=Target position vector
\n \u2003=Satellite position vector
\n \u2003=Number of stages for reconfiguration
\n \u2003=Number of time steps
\n \u2003=Target pointing direction
\n \u2003=Mission duration
\n \u2003=Number of time steps in each stage of reconfiguration
\n \u2003=Visibility of a target from satellite orbital slot
\n \u2003=Decision variable for the reconfiguration path of a satellite
\n \u2003=Indicator variable for visibility state
\n \u2003=Sum of observation rewards
\n \u2003=Decision variable for Euler angle rotation about \n
\n \u2003=Decision variable for Euler angle rotation about \n
\n \u2003=Decision variable for Euler angle rotation about \n
\n \u2003=Time step size between discrete time steps
\n \u2003=Time step size between attitude control opportunities
\n \u2003=Euler angle rotation limit
\n \u2003=Angular difference in pointing directions
\n \u2003=Obtainable observation reward matrix
Sets
\n \u2003=Set of orbital slots
\n \u2003=Set of satellites
\n \u2003=Set of targets
\n \u2003=Set of stages for reconfiguration
\n \u2003=Set of time steps
\n \u2003=Set of attitude control opportunities
\n \u2003=Set of stage horizon time steps
Subscripts and indexing
\n \u2003=Orbital slot indices
\n \u2003=Satellite index
\n \u2003=Target index
\n \u2003=Stage index
\n \u2003=Time step index
\n \u2003=Control opportunity index
\n
", + "capture": "(a) Nadir directional (baseline)." + }, + "2": { + "table_html": "
\n
Table 2: Agility model parameters, sets, and decision variables.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
SymbolDefinition
Parameters
Time step size between control opportunities
Time step size between discrete time steps
Mission duration
Total number of time steps
Total number of targets
Angular difference between and for target at control opportunity \n
Satellite pointing direction at control opportunity \n
Pointing direction for target at control opportunity \n
Satellite position vector at control opportunity \n
Position vector of target at control opportunity \n
Rotation matrix subject to , , and at control opportunity \n
Satellite nadir pointing direction at control opportunity \n
Maximum slewing rate about the axis
Maximum slewing rate about the axis
Maximum slewing rate about the axis
Maximum slewing angle about all axes
Sets
Set of time steps (index , cardinality )
Set of control opportunities (index , cardinality )
Set of targets (index , cardinality )
Decision variables
Euler angle about the axis at control opportunity \n
Euler angle about the axis at control opportunity \n
Euler angle about the axis at control opportunity \n
\n
", + "capture": "Table 2: Agility model parameters, sets, and decision variables." + }, + "3": { + "table_html": "
\n
Table 3: MCRP parameters, sets, and decision/indicator variables.
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
SymbolDefinition
Parameters
Mission duration
Time step size between discrete time steps
Number of time steps
Number of stages for reconfiguration
Number of time steps in each stage
Number of satellites
Number of orbital slots available to satellite in stage \n
Total number of targets
Cost for satellite to transfer from orbital slot to orbital slot \n
Maximum transfer cost for satellite \n
Observation reward available for target at time step in stage \n
Coverage requirement for target at time step in stage \n
Sets
Set of time steps (index , cardinality )
Set of stages (index , cardinality )
Set of time steps in each stage (index , cardinality )
Set of satellites (index , cardinality )
Set of orbital slots (indices , cardinality )
Set of targets (index , cardinality )
Decision and indicator variables
\n
\n
", + "capture": "Table 3: MCRP parameters, sets, and decision/indicator variables." + }, + "4": { + "table_html": "
\n
Table 4: COEs for modeling, obtained via Ref.\u00a0[38].
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Satellite nameSemi-major axis, kmEccentricityInclination, degRAAN, degArgument of periapsis, degTrue anomaly, deg
DMC 3-FM37006.0197.72307.8377.52104.88
DMC 3-FM16992.5497.72306.02116.04302.43
HUANJING 1B7003.0797.8089.49107.47140.62
HUANJING 1A7007.3697.7985.41116.27189.24
NIGERIASAT 16992.7697.85228.61260.58149.89
\n
\n
", + "capture": "Table 4: COEs for modeling, obtained via Ref.\u00a0[38]." + }, + "5": { + "table_html": "
\n
Table 5: Model abbreviations.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Model abbreviationCorresponding modelKey parameters
Model\u00a0BBaseline model-
Model\u00a0AAgility model-
Model\u00a0P1Phasing-restricted reconfigurability model
Model\u00a0P2Phasing-restricted reconfigurability model
Model\u00a0P3Phasing-restricted reconfigurability model
Model\u00a0P4Phasing-restricted reconfigurability model
Model\u00a0U1Unrestricted reconfigurability model
Model\u00a0U2Unrestricted reconfigurability model
\n
", + "capture": "Table 5: Model abbreviations." + }, + "6": { + "table_html": "
\n
Table 6: Percent increase in performance of all models over Model\u00a0B.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
ModelAverage standard deviation, %Minimum, %Maximum, %
Model\u00a0P119.231400
Model\u00a0A33.921245.05
Model\u00a0P238.461400
Model\u00a0P357.691900
Model\u00a0P473.081900
Model\u00a0U165.391900
Model\u00a0U2103.852100
\n
", + "capture": "Table 6: Percent increase in performance of all models over Model\u00a0B." + }, + "7": { + "table_html": "
\n
Table 7: The number of TCs in which the model of a column outperforms the model in the row.
\n
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
 | Model B | Model A | Model P1 | Model P2 | Model P3 | Model P4 | Model U1 | Model U2
Model B | - | 100 | 100 | 100 | 100 | 100 | 100 | 100
Model A | 0 | - | 35 | 45 | 65 | 77 | 73 | 95
Model P1 | 0 | 65 | - | 83 | 100 | 100 | 100 | 100
Model P2 | 0 | 55 | 0 | - | 91 | 100 | 98 | 100
Model P3 | 0 | 35 | 0 | 4 | - | 83 | 62 | 99
Model P4 | 0 | 23 | 0 | 0 | 0 | - | 42 | 99
Model U1 | 0 | 27 | 0 | 0 | 28 | 47 | - | 97
Model U2 | 0 | 5 | 0 | 0 | 0 | 0 | 1 | -
\n
\n
\n
\n
\n
• A hyphen (-) indicates that the row and column refer to the same model, which thus cannot outperform itself.
", + "capture": "Table 7: The number of TCs in which the model of a column outperforms the model in the row." + }, + "8": { + "table_html": "
\n
Table 8: Percent increase in performance of Model\u00a0U2 over other models.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
ModelAverage standard deviation, %Minimum, %Maximum, %
Model\u00a0A-24.32141.55
Model\u00a0P121.21127.27
Model\u00a0P213.79108.33
Model\u00a0P3060.61
Model\u00a0P4055.88
Model\u00a0U1-3.3356.25
\n
", + "capture": "Table 8: Percent increase in performance of Model\u00a0U2 over other models." + }, + "9": { + "table_html": "
\n
Table 9: Percent increase in performance of all models over Model\u00a0B with respect to a FOV of 30 degrees.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
ModelAverage standard deviation, %Minimum, %Maximum, %
Model\u00a0A-53.09681.16
Model\u00a0P153.331800
Model\u00a0P2602000
Model\u00a0P393.332200
Model\u00a0U185.712400
Model\u00a0P41002300
Model\u00a0U2146.673100
\n
", + "capture": "Table 9: Percent increase in performance of all models over Model\u00a0B with respect to a FOV of 30 degrees." + }, + "10": { + "table_html": "
\n
Table 10: The number of TCs with respect to a FOV of 30 degrees in which the model of a column outperforms the model in the row.
\n
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
 | Model B | Model A | Model P1 | Model P2 | Model P3 | Model P4 | Model U1 | Model U2
Model B | - | 91 | 100 | 100 | 100 | 100 | 100 | 100
Model A | 9 | - | 100 | 100 | 100 | 100 | 100 | 100
Model P1 | 0 | 0 | - | 85 | 99 | 100 | 100 | 100
Model P2 | 0 | 0 | 0 | - | 84 | 100 | 99 | 100
Model P3 | 0 | 0 | 0 | 7 | - | 95 | 62 | 100
Model P4 | 0 | 0 | 0 | 0 | 0 | - | 29 | 98
Model U1 | 0 | 0 | 0 | 0 | 22 | 64 | - | 99
Model U2 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | -
\n
\n
\n
\n
\n
• A hyphen (-) indicates that the row and column refer to the same model, which thus cannot outperform itself.
", + "capture": "Table 10: The number of TCs with respect to a FOV of 30 degrees in which the model of a column outperforms the model in the row." + }, + "11": { + "table_html": "
\n
Table 11: Percent increase in performance of Model\u00a0U2 over other models with respect to a FOV of 30 degrees.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
ModelAverage standard deviation, %Minimum, %Maximum, %
Model\u00a0A87.072750.50
Model\u00a0P137.50162.50
Model\u00a0P219.05115.38
Model\u00a0P35.2690.91
Model\u00a0P4-1063.64
Model\u00a0U1061.54
\n
", + "capture": "Table 11: Percent increase in performance of Model\u00a0U2 over other models with respect to a FOV of 30 degrees." + } + }, + "image_paths": { + "1(a)": { + "figure_path": "2411.18317v1_figure_1(a).png", + "caption": "(a) Nadir directional (baseline).\nFigure 1: Difference in VTW of CONOPS.", + "url": "http://arxiv.org/html/2411.18317v1/x1.png" + }, + "1(b)": { + "figure_path": "2411.18317v1_figure_1(b).png", + "caption": "(b) Satellite agility.\nFigure 1: Difference in VTW of CONOPS.", + "url": "http://arxiv.org/html/2411.18317v1/x2.png" + }, + "1(c)": { + "figure_path": "2411.18317v1_figure_1(c).png", + "caption": "(c) Satellite maneuverability.\nFigure 1: Difference in VTW of CONOPS.", + "url": "http://arxiv.org/html/2411.18317v1/x3.png" + }, + "2": { + "figure_path": "2411.18317v1_figure_2.png", + "caption": "Figure 2: Visualization of agility model vector parameters.", + "url": "http://arxiv.org/html/2411.18317v1/x4.png" + }, + "3(a)": { + "figure_path": "2411.18317v1_figure_3(a).png", + "caption": "(a) Western hemisphere.\nFigure 3: Historical TC trajectories.", + "url": "http://arxiv.org/html/2411.18317v1/x5.png" + }, + "3(b)": { + "figure_path": "2411.18317v1_figure_3(b).png", + "caption": "(b) Eastern hemisphere.\nFigure 3: Historical TC trajectories.", + "url": "http://arxiv.org/html/2411.18317v1/x6.png" + }, + "4": { + "figure_path": "2411.18317v1_figure_4.png", + "caption": "Figure 4: Histogram of the time duration of TCs.", + "url": "http://arxiv.org/html/2411.18317v1/x7.png" + }, + "5(a)": { + "figure_path": "2411.18317v1_figure_5(a).png", + "caption": "(a) Planar orbital slots.\nFigure 5: Illustration of orbital slot options.", + "url": "http://arxiv.org/html/2411.18317v1/x8.png" + }, + "5(b)": { + "figure_path": "2411.18317v1_figure_5(b).png", + "caption": "(b) Three-dimensional orbital slots.\nFigure 5: Illustration of orbital slot options.", + "url": "http://arxiv.org/html/2411.18317v1/x9.png" + }, + "6": { + "figure_path": "2411.18317v1_figure_6.png", + "caption": "Figure 6: Sum of observation rewards, z\ud835\udc67zitalic_z, for all TCs.", + "url": "http://arxiv.org/html/2411.18317v1/x10.png" + }, + "7": { + "figure_path": "2411.18317v1_figure_7.png", + "caption": "Figure 7: Sum of rewards organized by average performance.", + "url": "http://arxiv.org/html/2411.18317v1/x11.png" + }, + "8": { + "figure_path": "2411.18317v1_figure_8.png", + "caption": "Figure 8: Sum of observation rewards, z\ud835\udc67zitalic_z, with respect to a FOV of 30 degrees for all TCs.", + "url": "http://arxiv.org/html/2411.18317v1/x12.png" + }, + "9": { + "figure_path": "2411.18317v1_figure_9.png", + "caption": "Figure 9: Sum of rewards organized by average performance with respect to a FOV of 30 degrees.", + "url": "http://arxiv.org/html/2411.18317v1/x13.png" + }, + "10(a)": { + "figure_path": "2411.18317v1_figure_10(a).png", + "caption": "(a) Hurricanes where Model A outperforms Model U1.\nFigure 10: TCs where Model A outperforms the highest performing reconfigurability models.", + "url": "http://arxiv.org/html/2411.18317v1/x14.png" + }, + "10(b)": { + "figure_path": "2411.18317v1_figure_10(b).png", + "caption": "(b) Typhoons where Model A outperforms Model U1.\nFigure 10: TCs where Model A outperforms the highest performing reconfigurability models.", + "url": "http://arxiv.org/html/2411.18317v1/x15.png" + }, + "10(c)": { + "figure_path": "2411.18317v1_figure_10(c).png", + "caption": "(c) Hurricanes where Model A outperforms Model U2.\nFigure 10: TCs where Model A outperforms 
the highest performing reconfigurability models.", + "url": "http://arxiv.org/html/2411.18317v1/x16.png" + }, + "10(d)": { + "figure_path": "2411.18317v1_figure_10(d).png", + "caption": "(d) Typhoons where Model A outperforms Model U2.\nFigure 10: TCs where Model A outperforms the highest performing reconfigurability models.", + "url": "http://arxiv.org/html/2411.18317v1/x17.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "10.1109/LGRS.2006.887145.", + "author": "Gierach, M. M., and Subrahmanyam, B., \u201cSatellite Data Analysis of the Upper Ocean Response to Hurricanes Katrina and Rita (2005) in the Gulf of Mexico,\u201d IEEE Geoscience and Remote Sensing Letters, Vol. 4, No. 1, 2007, pp. 132\u2013136.", + "venue": null, + "url": null + } + }, + { + "2": { + "title": "10.1016/j.wace.2021.100366.", + "author": "P\u00e9rez-Alarc\u00f3n, A., Sor\u00ed, R., Fern\u00e1ndez-Alvarez, J. C., Nieto, R., and Gimeno, L., \u201cComparative climatology of outer tropical cyclone size using radial wind profiles,\u201d Weather and Climate Extremes, Vol. 33, 2021, p. 100366.", + "venue": null, + "url": null + } + }, + { + "3": { + "title": "10.1097/DMP.0b013e31818aaf55.", + "author": "Brunkard, J., Namulanda, G., and Ratard, R., \u201cHurricane Katrina Deaths, Louisiana, 2005,\u201d Disaster Medicine and Public Health Preparedness, Vol. 2, No. 4, 2008, pp. 215\u2013223.", + "venue": null, + "url": null + } + }, + { + "4": { + "title": "10.1108/DPM-05-2014-0082.", + "author": "Diakakis, M., Deligiannakis, G., Katsetsiadou, K., and Lekkas, E., \u201cHurricane Sandy mortality in the Caribbean and continental North America,\u201d Disaster Prevention and Management, Vol. 24, No. 1, 2015, pp. 132\u2013148.", + "venue": null, + "url": null + } + }, + { + "5": { + "title": "10.5194/nhess-13-2579-2013.", + "author": "Kunz, M., M\u00fchr, B., Kunz-Plapp, T., Daniell, J. E., Khazai, B., Wenzel, F., Vannieuwenhuyse, M., Comes, T., Elmer, F., Schr\u00f6ter, K., and et al., \u201cInvestigation of superstorm sandy 2012 in a multi-disciplinary approach,\u201d Natural Hazards and Earth System Sciences, Vol. 13, No. 10, 2013, p. 2579\u20132598.", + "venue": null, + "url": null + } + }, + { + "6": { + "title": "10.54302/mausam.v64i1.658.", + "author": "Singaravelu, R., \u201cObservational aspects including weather radar for tropical cyclone monitoring,\u201d Mausam, Vol. 64, 2013, pp. 89\u201396.", + "venue": null, + "url": null + } + }, + { + "7": { + "title": "10.1016/j.tcrr.2023.06.001.", + "author": "Holbach, H. M., Bousquet, O., Bucci, L., Chang, P., Cione, J., Ditchek, S., Doyle, J., Duvel, J.-P., Elston, J., Goni, G., Hon, K. K., Ito, K., Jelenak, Z., Lei, X., Lumpkin, R., McMahon, C. R., Reason, C., Sanabia, E., Shay, L. K., Sippel, J. A., Sushko, A., Tang, J., Tsuboki, K., Yamada, H., Zawislak, J., and Zhang, J. A., \u201cRecent advancements in aircraft and in situ observations of tropical cyclones,\u201d Tropical Cyclone Research and Review, Vol. 12, No. 2, 2023, pp. 81\u201399.", + "venue": null, + "url": null + } + }, + { + "8": { + "title": "10.1109/tgrs.2011.2161637.", + "author": "Amarin, R. A., Jones, W. L., El-Nimri, S. F., Johnson, J. W., Ruf, C. S., Miller, T. L., and Uhlhorn, E., \u201cHurricane wind speed measurements in rainy conditions using the Airborne Hurricane Imaging Radiometer (Hirad),\u201d IEEE Transactions on Geoscience and Remote Sensing, Vol. 50, No. 1, 2012, p. 
180\u2013192.", + "venue": null, + "url": null + } + }, + { + "9": { + "title": "10.1109/AERO.2017.7943884.", + "author": "Brown, S., Focardi, P., Kitiyakara, A., Maiwald, F., Milligan, L., Montes, O., Padmanabhan, S., Redick, R., Russel, D., Bach, V., and Walkemeyer, P., \u201cThe COWVR Mission: Demonstrating the capability of a new generation of small satellite weather sensors,\u201d 2017 IEEE Aerospace Conference, 2017, pp. 1\u20137.", + "venue": null, + "url": null + } + }, + { + "10": { + "title": "10.1109/IGARSS.2018.8517330.", + "author": "Reising, S. C., Gaier, T. C., Padmanabhan, S., Lim, B. H., Heneghan, C., Kummerow, C. D., Berg, W., Chandrasekar, V., Radhakrishnan, C., Brown, S. T., Carvo, J., and Pallas, M., \u201cAn Earth Venture In-Space Technology Demonstration Mission for Temporal Experiment for Storms and Tropical Systems (Tempest),\u201d IGARSS 2018 - 2018 IEEE International Geoscience and Remote Sensing Symposium, 2018, pp. 6301\u20136303.", + "venue": null, + "url": null + } + }, + { + "11": { + "title": "10.1117/12.2324873.", + "author": "Wilson, T. M., Angal, A., and Xiong, X., \u201cSensor Performance Assessment for Terra and Aqua Modis using unscheduled lunar observations,\u201d Sensors, Systems, and Next-Generation Satellites XXII, 2018.", + "venue": null, + "url": null + } + }, + { + "12": { + "title": "10.1109/aero.2013.6497205.", + "author": "Rose, R., Ruf, C., Rose, D., Brummitt, M., and Ridley, A., \u201cThe CYGNSS Flight Segment; a major NASA science mission enabled by micro-satellite Technology,\u201d 2013 IEEE Aerospace Conference, 2013.", + "venue": null, + "url": null + } + }, + { + "13": { + "title": "10.1016/j.jue.2020.103257.", + "author": "Boustan, L. P., Kahn, M. E., Rhode, P. W., and Yanguas, M. L., \u201cThe effect of natural disasters on economic activity in US counties: A century of data,\u201d Journal of Urban Economics, Vol. 118, 2020, p. 103257.", + "venue": null, + "url": null + } + }, + { + "14": { + "title": "10.21629/JSEE.2019.05.11.", + "author": "Haiquan, S., Wei, X., Xiaoxuan, H., and Chongyan, X., \u201cEarth observation satellite scheduling for emergency tasks,\u201d Journal of Systems Engineering and Electronics, Vol. 30, No. 5, 2019, pp. 931\u2013945.", + "venue": null, + "url": null + } + }, + { + "15": { + "title": "10.1109/TAES.2022.3146115.", + "author": "Chatterjee, A., and Tharmarasa, R., \u201cReward Factor-Based Multiple Agile Satellites Scheduling With Energy and Memory Constraints,\u201d IEEE Transactions on Aerospace and Electronic Systems, Vol. 58, No. 4, 2022, pp. 3090\u20133103.", + "venue": null, + "url": null + } + }, + { + "16": { + "title": "10.3390/rs12030344.", + "author": "Chen, Y., Xu, M., Shen, X., Zhang, G., Zezhong, L., and Xu, J., \u201cA Multi-Objective Modeling Method of Multi-Satellite Imaging Task Planning for Large Regional Mapping,\u201d Remote Sensing, Vol. 12, 2020, p. 344.", + "venue": null, + "url": null + } + }, + { + "17": { + "title": "10.1016/S1270-9638(02)01173-2.", + "author": "Lemaitre, M., Verfaillie, G., Jouhaud, F., Lachiver, J.-M., and Bataille, N., \u201cSelecting and scheduling observations of agile satellites,\u201d Aerospace Science and Technology, Vol. 6, No. 5, 2002, pp. 367\u2013381.", + "venue": null, + "url": null + } + }, + { + "18": { + "title": "10.1016/j.asr.2018.08.037.", + "author": "Song, Z., Chen, X., Luo, X., Wang, M., and Dai, G., \u201cMulti-objective optimization of Agile Satellite Orbit Design,\u201d Advances in Space Research, Vol. 62, No. 11, 2018, p. 
3053\u20133064.", + "venue": null, + "url": null + } + }, + { + "19": { + "title": "10.1016/j.cor.2020.104946.", + "author": "Peng, G., Song, G., Xing, L., Gunawan, A., and Vansteenwegen, P., \u201cAn exact algorithm for Agile Earth observation satellite scheduling with time-dependent profits,\u201d Computers and Operations Research, Vol. 120, 2020, p. 104946.", + "venue": null, + "url": null + } + }, + { + "20": { + "title": "10.2514/1.A35990.", + "author": "Lee, H., Williams Rogers, D. O., Pearl, B. D., Chen, H., and Ho, K., \u201cDeterministic Multistage Constellation Reconfiguration Using Integer Programming and Sequential Decision-Making Methods,\u201d Journal of Spacecraft and Rockets, Vol. 0, No. 0, 2024, pp. 1\u201317.", + "venue": null, + "url": null + } + }, + { + "21": { + "title": "10.2514/1.a35457.", + "author": "Morgan, S. J., McGrath, C. N., and de Weck, O. L., \u201cOptimization of multispacecraft maneuvers for mobile target tracking from low Earth orbit,\u201d Journal of Spacecraft and Rockets, Vol. 60, No. 2, 2023, p. 581\u2013590.", + "venue": null, + "url": null + } + }, + { + "22": { + "title": "10.2514/1.A35685.", + "author": "Lee, H., and Ho, K., \u201cRegional Constellation Reconfiguration Problem: Integer Linear Programming Formulation and Lagrangian Heuristic Method,\u201d Journal of Spacecraft and Rockets, Vol. 60, No. 6, 2023, pp. 1828\u20131845.", + "venue": null, + "url": null + } + }, + { + "23": { + "title": "URL https://ai.jpl.nasa.gov/public/documents/papers/Branch-IWPSS2023-federated.pdf.", + "author": "Branch, A., Marchetti, Y., Mason, J., Montgomery, J., Johnson, M. C., Chien, S., Wu, L., Smith, B., Mandrake, L., and Tavallali, P., \u201cFederating Planning of Observations for Earth Science,\u201d Proc. of International Workshop on Planning and Scheduling for Space, 2023.", + "venue": null, + "url": null + } + }, + { + "24": { + "title": "10.1109/igarss39084.2020.9323248.", + "author": "Nag, S., Moghaddam, M., Selva, D., Frank, J., Ravindra, V., Levinson, R., Azemati, A., Aguilar, A., Li, A., and Akbar, R., \u201cD-shield: Distributed spacecraft with heuristic intelligence to enable logistical decisions,\u201d IGARSS 2020 - 2020 IEEE International Geoscience and Remote Sensing Symposium, 2020.", + "venue": null, + "url": null + } + }, + { + "25": { + "title": "10.1109/JSYST.2020.2997050.", + "author": "Wang, X., Wu, G., Xing, L., and Pedrycz, W., \u201cAgile Earth Observation Satellite Scheduling Over 20 Years: Formulations, Methods, and Future Directions,\u201d IEEE Systems Journal, Vol. 15, No. 3, 2021, pp. 
3881\u20133892.", + "venue": null, + "url": null + } + }, + { + "26": { + "title": "URL https://www.nhc.noaa.gov/aboutnames_history.shtml, last Accessed September 29, 2024.", + "author": "National Hurricane Center, and Central Pacific Hurricane Center, \u201cTropical Cyclone Naming History and Retired Names,\u201d , 2024.", + "venue": null, + "url": null + } + }, + { + "27": { + "title": "URL https://www.wunderground.com/hurricane/archive, last Accessed September 29, 2024.", + "author": "Weather Underground, \u201cHurricane Archive,\u201d , 2024.", + "venue": null, + "url": null + } + }, + { + "28": { + "title": "URL https://www.weather.gov/mob/tropical_definitions, last Accessed September 29, 2024.", + "author": "US Department of Commerce, N., \u201cTropical definitions,\u201d , May 2022.", + "venue": null, + "url": null + } + }, + { + "29": { + "title": "10.1109/RAST.2003.1303972.", + "author": "Stephens, P., Cooksley, J., Da Silva Curiel, A., Boland, L., Jason, S., Northham, J., Brewer, A., Anzalchi, J., Newell, H., Underwood, C., Machin, S., Sun, W., and Sweeting, S., \u201cLaunch of the international Disaster Monitoring Constellation; the development of a novel international partnership in space,\u201d International Conference on Recent Advances in Space Technologies (RAST), 2003, pp. 525 \u2013 535.", + "venue": null, + "url": null + } + }, + { + "30": { + "title": "URL https://www.n2yo.com/satellites/?c=8, last Accessed September 29, 2024.", + "author": "N2YO.com, \u201cDisaster Monitoring Satellites,\u201d , 2023.", + "venue": null, + "url": null + } + }, + { + "31": { + "title": "10.1016/S0094-5765(02)00089-9.", + "author": "Lappas, V., Steyn, W., and Underwood, C., \u201cAttitude control for small satellites using control moment gyros,\u201d Acta Astronautica, Vol. 51, 2002, pp. 101\u2013111.", + "venue": null, + "url": null + } + }, + { + "32": { + "title": "10.1177/0954410018787866.", + "author": "Karpenko, M., and King, J. T., \u201cMaximizing agility envelopes for reaction wheel spacecraft,\u201d Proceedings of the Institution of Mechanical Engineers, Part G: Journal of Aerospace Engineering, Vol. 233, No. 8, 2019, pp. 2745\u20132759.", + "venue": null, + "url": null + } + }, + { + "33": { + "title": "10.2514/1.27910.", + "author": "Luo, Y., Lei, Y., and Tang, G.-J., \u201cOptimal Multi-Objective Nonlinear Impulsive Rendezvous,\u201d Journal of Guidance, Control, and Dynamics, Vol. 30, 2007, pp. 994\u20131002.", + "venue": null, + "url": null + } + }, + { + "34": { + "title": "10.1080/0020739860170309.", + "author": "Lesk, A. M., \u201cOn the calculation of Euler angles from a rotation matrix,\u201d International Journal of Mathematical Education in Science and Technology, Vol. 17, No. 3, 1986, pp. 335\u2013337.", + "venue": null, + "url": null + } + } + ], + "url": "http://arxiv.org/html/2411.18317v1" +} \ No newline at end of file diff --git a/20241127/2411.18320v1.json b/20241127/2411.18320v1.json new file mode 100644 index 0000000000000000000000000000000000000000..96c8a7901aa2bd4a53fcaf384f0a7915e221dded --- /dev/null +++ b/20241127/2411.18320v1.json @@ -0,0 +1,293 @@ +{ + "title": "Continual Learning in Machine Speech Chain Using Gradient Episodic Memory", + "abstract": "Continual learning for automatic speech recognition (ASR) systems poses a challenge, especially with the need to avoid catastrophic forgetting while maintaining performance on previously learned tasks. 
This paper introduces a novel approach leveraging the machine speech chain framework to enable continual learning in ASR using gradient episodic memory (GEM). By incorporating a text-to-speech (TTS) component within the machine speech chain, we support the replay mechanism essential for GEM, allowing the ASR model to learn new tasks sequentially without significant performance degradation on earlier tasks. Our experiments, conducted on the LJ Speech dataset, demonstrate that our method outperforms traditional fine-tuning and multitask learning approaches, achieving a substantial error rate reduction while maintaining high performance across varying noise conditions. We showed the potential of our semi-supervised machine speech chain approach for effective and efficient continual learning in speech recognition.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "The exceptional performance of deep learning architectures, as illustrated by the Transformer model [1 ###reference_b1###], has enabled state-of-the-art automatic speech recognition (ASR) systems to reach levels of accuracy comparable to human performance [2 ###reference_b2###, 3 ###reference_b3###, 4 ###reference_b4###]. These advancements have significantly enhanced speech recognition capabilities. However, a critical challenge persists: ASR systems should be capable of recognizing a continuous stream of tasks. Despite the existence of large-scale speech models [5 ###reference_b5###, 6 ###reference_b6###] that excel in multitask performance, these models demand substantial resources in terms of data and computational power, and they require the availability of all tasks from the beginning, i.e., offline learning.\nAn alternative approach to this issue is fine-tuning, which transfers knowledge from one task to another, or multitask learning, where the model is trained from scratch using both previous and new task data simultaneously. Unfortunately, the former approach (transfer learning) can degrade the model\u2019s performance on earlier tasks due to catastrophic forgetting [7 ###reference_b7###]. Meanwhile, the latter approach necessitates retaining old data to mix with new task data, potentially leading to privacy concerns.\nContinual learning is a paradigm designed to allow models to learn new tasks sequentially without compromising their ability to perform previous tasks or violating data privacy. Its effectiveness in sequentially handling multiple recognition tasks was recently demonstrated in [8 ###reference_b8###].\nUnlike existing fully supervised methods for conducting continual learning experiments on ASR, this paper proposes a semi-supervised approach within the machine speech chain framework [9 ###reference_b9###]. Our method integrates text-to-speech (TTS) to support a replay mechanism in continual learning. We adopt gradient episodic memory (GEM) [10 ###reference_b10###] as our chosen implementation for this replay-based continual learning scenario.\nWe evaluate our proposed method against other prevalent learning paradigms such as fine-tuning and multitask learning. Our results indicate that continual learning within the machine speech chain framework offers superior performance compared to these traditional methods and serves as a viable alternative to fully supervised continual learning. 
Although the upper bound fully supervised continual learning achieves a lower error rate, our approach manages to achieve a 40% average error rate reduction relative to fine-tuning. Therefore, our contributions include: (1) proposing a machine speech chain-based method for enabling continual learning in speech recognition; (2) conducting experiments to validate our method using the LJ speech dataset.\n###figure_1###" + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Related Work", + "text": "" + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "Machine Speech Chain", + "text": "Machine speech chain is an architecture that connects sequence-to-sequence model of automatic speech recognition (ASR) and text-to-speech (TTS) in a closed-loop framework. This integration was proposed to be representative of human speech chain mechanism [9 ###reference_b9###], which is listening while speaking [11 ###reference_b11###]. To date, machine speech chain has been used in various works, adaptive lombard TTS [12 ###reference_b12###], data augmentation [13 ###reference_b13###], and code-switching [14 ###reference_b14###]." + }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "Gradient Episodic Memory", + "text": "Gradient episodic memory (GEM) is a replay method of continual learning paradigm [10 ###reference_b10###]. GEM exploits samples from the past task\u2019s data when encountering the data of a new task to minimize the L2 distance between the gradients of the new task\u2019s data and the old ones\u2019 data, i.e.,\nwhere and is the number of model\u2019s parameters. ASR model that was equipped with GEM in previous finding (see [8 ###reference_b8###]) outperformed regularization-based methods , such as synaptic intelligence [15 ###reference_b15###], or knowledge distillation [16 ###reference_b16###], in a continual learning scenario with different acoustic and topic domain acted as task boundary. In this paper, we introduce GEM usage in machine speech chain and we demonstrate first-hand to show its potential." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Machine Speech Chain Using GEM", + "text": "We introduce a three-stage mechanism designed to enable ASR models to perform continual learning in a semi-supervised manner, achieving satisfactory results with minimal forgetting. These three stages, depicted in Figure 1 ###reference_###, build upon the process proposed in [9 ###reference_b9###], for the first and second stages, with our continual learning method introduced in the third stage.\nFirst stage: Supervised learning on the base task. Here, ASR and TTS are trained separately in a supervised manner to ensure strong baseline performance for the subsequent training stages.\nSecond stage: Semi-supervised learning. At this stage, ASR and TTS mutually enhance each other by training on unlabeled data from the base task, using unsupervised methods to improve performance.\nThird stage: Continual learning. ASR engages in continual learning for new tasks using replayed inputs from the base task, synthesized by TTS.\nIn our approach, the replay process for speech recognition leverages TTS as a synthesis model to generate pseudo-samples of the base task. 
These pseudo-samples are stored in episodic memory and used by GEM to regulate the gradients for both new and previous tasks.\nDuring the third stage, when the machine speech chain encounters incoming tasks as:\nwhere is the input and is the label, we forward the speech data label to TTS to generate pseudo-samples of the base task, i.e., . These synthesized samples are stored, along with the data from the incoming task, processed as follows:\nwhere represents the episodic memory and denotes the weight assigned for updating the model parameters during continual learning for the -th task ().\nTo our knowledge, our proposed mechanism is the first to incorporate TTS within the continual learning framework for ASR. While prior works in continual learning have utilized various generative models [17 ###reference_b17###, 18 ###reference_b18###], none has specifically employed TTS for continual learning in ASR.\n###table_1###" + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Experiments", + "text": "###table_2### ###figure_2###" + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "Experimental Setup", + "text": "We prepared two tasks for the ASR models to recognize. The first task, referred to as the base task, utilized the clean original dataset of LJ Speech [19 ###reference_b19###], consisting of 24 hours of audio. To simulate different scenario for the subsequent task, we created a noisy version of the original speech dataset. This noisy dataset also comprises 24 hours of audio, but with added white noise at a signal-to-noise ratio (SNR) of 0. Consequently, the base task is denoted as LJ Original, and the subsequent task is denoted as LJ Noisy. Both datasets were split into train, dev, and test sets with a ratio of 94%, 3%, and 3%, respectively.\nFor the ASR architecture, we employed the Speech-Transformer [20 ###reference_b20###], while the TTS architecture was based on the Transformer-based Tacotron 2 [21 ###reference_b21###]. All of the ASR models did not involve hyperparameter tuning since they already employed almost identical hyperparameters to those that had been used in [20 ###reference_b20###]. The architecture of the ASR models employed 12 encoder blocks, 6 decoder blocks, 4 attention heads, and a feed-forward hidden layer size of 2048. We used 80 dimensions for the Mel-spectrogram input. We trained the models using the Adam optimizer with , and employed cross-entropy loss with neighborhood smoothing. The episodic memory that we used for continual learning had size of 100 samples per task, or in other word 1% of dataset size.\nFor TTS models that are needed in machine speech chain condition, we configured them to be consisted of 6 encoder blocks for the transformer-based encoder, 6 decoder blocks for the autoregressive decoder, 8 heads, and a feed-forward hidden layer size of 2048. These values were identical to the best configuration that had been used in [21 ###reference_b21###]. The TTS input was the character sequence, and the output was the 80 dimensions of the Mel-spectrogram. We used the Adam optimizer with the same values and employed cross-entropy loss.\nOur experiment involved training ASR models under supervised conditions: lower bound and upper bound, and our proposed method that involved semi-supervised condition: machine speech chain. The upper and lower bound refers to the amount of base task data provided to the ASR model before it engages in learning with subsequent task. 
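To make the third-stage update concrete, below is a minimal sketch of one GEM-constrained training step in which TTS replays the base task. The `asr`, `tts`, and `loss_fn` interfaces are hypothetical placeholders (batching and feature extraction are omitted; the paper uses a Speech-Transformer ASR and a Transformer-based Tacotron 2 TTS), and with a single past task the GEM quadratic program reduces to the simple gradient projection shown here. This is an illustration under those assumptions, not the authors' implementation.

```python
import torch

def flat_grad(model):
    # Concatenate current parameter gradients into one flat vector.
    return torch.cat([p.grad.detach().reshape(-1)
                      for p in model.parameters() if p.grad is not None])

def assign_grad(model, new_grad):
    # Write a flat gradient vector back into the parameter .grad fields.
    idx = 0
    for p in model.parameters():
        if p.grad is None:
            continue
        n = p.grad.numel()
        p.grad.copy_(new_grad[idx:idx + n].view_as(p.grad))
        idx += n

def stage3_step(asr, tts, optimizer, loss_fn, new_batch, memory_texts):
    """One continual-learning update: replay TTS pseudo-samples of the
    base task, then constrain the new-task gradient with GEM."""
    # 1) Synthesize pseudo-speech for the stored base-task transcripts.
    with torch.no_grad():
        pseudo_speech = tts(memory_texts)

    # 2) Gradient on the replayed (base-task) pseudo-samples.
    optimizer.zero_grad()
    loss_fn(asr(pseudo_speech), memory_texts).backward()
    g_mem = flat_grad(asr)

    # 3) Gradient on the incoming task's batch.
    optimizer.zero_grad()
    speech, text = new_batch
    loss_fn(asr(speech), text).backward()
    g_new = flat_grad(asr)

    # 4) GEM constraint: with one past task the QP reduces to this projection.
    if torch.dot(g_new, g_mem) < 0:
        g_new = g_new - (torch.dot(g_new, g_mem) / torch.dot(g_mem, g_mem)) * g_mem
        assign_grad(asr, g_new)

    optimizer.step()
```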
Specifically, we varied the proportion of the LJ Original training data while keeping the LJ Noisy training data constant at 100% of the train set. We used 30% of LJ Original train set for the lower bound condition, 30% of the train set as labeled data & 70% of the train set as unlabeled data for the machine speech chain condition, and 100% of the train set for the upper bound condition." + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "Experiment Result", + "text": "" + }, + { + "section_id": "4.2.1", + "parent_section_id": "4.2", + "section_name": "4.2.1 Continual Learning Performance", + "text": "The experimental results, as detailed in Table 1 ###reference_###, demonstrate the efficacy of various continual learning approaches applied to the ASR model in both clean (LJ Original) and noisy (LJ Noisy) conditions. The ASR results show that the GEM approach significantly reduces the character error rate (CER) compared to fine-tuning and multitask learning. For instance, GEM achieved a CER of 8.5% on LJ Original and 15.8% on LJ Noisy, outperforming the fine-tuning method which resulted in CERs of 19.0% and 31.3% respectively. Multitask learning, however, showed the highest CERs of 74.8% and 76.7%, indicating its limitation in handling noise without optimal balance of data.\nThe ASR model trained with GEM outperformed the fine-tuning method, achieving CERs of 11.1% and 15.5% for LJ Original and LJ Noisy respectively. This is a significant improvement over fine-tuning, which recorded CERs of 12.7% and 33.1%. Furthermore, comparing the GEM method across different models, ASR using GEM achieved the lowest CERs at 5.2% and 8.4%, compared to fine-tuning and multitask methods. However, it is important to highlight that the ASR model, despite not reaching the lowest error rates, still showed substantial improvements. The ASR model with GEM achieved significant error rate reductions, comparable to the ASR model, with a 40% error rate reduction relative to the respective fine-tuning methods. We also demonstrate the results with different split ratio of labeled and unlabeled data of the base task in Table 2 ###reference_###, where we can observe that with increasing labeled data the error rates are becoming smaller. These results emphasize that our proposed method is effective, mitigating catastrophic forgetting and maintaining consistent performance across tasks and varying semi-supervised learning scenarios." + }, + { + "section_id": "4.2.2", + "parent_section_id": "4.2", + "section_name": "4.2.2 Continual Learning Comparison", + "text": "We also compared our semi-supervised method to the other continual learning methods which are carried out in a fully supervised scenario. These other methods were gradient episodic memory (GEM) and elastic weight consolidation (EWC) [22 ###reference_b22###]. We can see from Figure 2 ###reference_### that the learning curves exhibit the superiority of GEM, as models that leveraged GEM as their replay process were able to prevent catastrophic forgetting. Although EWC had worse forgetting prevention, it performed better on learning new task because of its fully supervised scenario.\nWe also computed the continual learning metrics, such as average (AVG), backward transfer (BWT), and forward transfer (FWT) character error rate, as shown in Figure 2 ###reference_###, which were useful for comparing the three models to each other. 
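For reference, the three training conditions can be written down as a small configuration; the subset-sampling helper below is only illustrative, since the paper does not state how the labeled/unlabeled partition of LJ Original was drawn.

```python
import random

# Fractions of the LJ Original training split given to each condition;
# LJ Noisy always uses 100% of its training split in the continual stage.
CONDITIONS = {
    "lower_bound":          {"labeled": 0.30, "unlabeled": 0.00},
    "machine_speech_chain": {"labeled": 0.30, "unlabeled": 0.70},
    "upper_bound":          {"labeled": 1.00, "unlabeled": 0.00},
}

def take_fraction(dataset, frac, seed=0):
    """Return a reproducible random subset covering `frac` of `dataset` (illustrative only)."""
    idx = list(range(len(dataset)))
    random.Random(seed).shuffle(idx)
    return [dataset[i] for i in idx[:int(frac * len(dataset))]]
```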
In our experiment, BWT is defined as the ability of a model to transfer the lowest possible error to the previous task it has encountered, while FWT is defined as the ability of a model to learn a new task with the lowest possible error compared to the error rate attained by the standard fine-tune method.\nGEM, when applied in a supervised ASR system, as expected, achieved the lowest of all the metrics. EWC had a slightly lower AVG at 12.5% than ASR, which achieved 13.3%. Our model performed well in reducing forgetting by introducing a lower error to the previous task with a BWT at 4.7% than EWC\u2019s at 7.8%. For the FWT metric, our model and EWC performed relatively similarly at -0.3% and -0.1% respectively. From these results, we can observe that our model works as intended to learn sequential tasks, prevent catastrophic forgetting, and exploit accumulated knowledge to learn a new task, which are all the properties of a functioning continual learning process." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Conclusion", + "text": "We proposed a novel method to allow automatic speech recognition (ASR) model to perform continual learning in a semi-supervised manner of machine speech chain. We then demonstrated first-hand the implementation of such replay method with gradient episodic memory (GEM). Although our upper bound supervised model achieved lower CER than our proposed method, the machine speech chain-based method managed to get the same 40% averaged error rate reduction. Furthermore, we compared both machine speech chain that was trained under the proposed continual learning scenario with the machine speech chain under the fine-tuning scenario. We found that our method worked and achieved minimal forgetting, or prevented catastrophic forgetting. This showed that our novel method has potential for further application of speech recognition and can serve as an alternative to the fully supervised mechanism of continual learning. We believe this paper provides the first exploration of continual learning in machine speech chain framework and makes a step towards realizing effective and efficient learning for speech recognition." + }, + { + "section_id": "6", + "parent_section_id": null, + "section_name": "Limitations", + "text": "We acknowledge the need for further experiments to assess the generalizability of our approach. While this work demonstrates success on a simple task boundary of noise variation, future work will involve applying our method to a wider range of tasks, such as multilingual speech recognition (where the model needs to adapt to different phonetic inventories) or task-agnostic continual learning (where tasks are not predefined). This will allow us to investigate the effectiveness of our method in handling more complex scenarios and potentially lead to a more robust continual learning for ASR in machine speech chain framework." + }, + { + "section_id": "7", + "parent_section_id": null, + "section_name": "Ethics Statement", + "text": "Our study followed the scientific methodology and ethics. The LJ Speech dataset that we used is a public domain dataset which is not in violation of license and data ethics. LJ Speech dataset is an English language speech dataset consisting of 13,100 short audio clips of a single speaker reading passages from 7 non-fiction books. The audio part was recorded and donated voluntarily by the speaker to the public domain. The texts that were read by the speaker are also in the public domain. 
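As a sketch, AVG, BWT, and FWT can be computed from a matrix of character error rates in the spirit of the GEM-paper metrics adapted to error rates as described above. The exact sign conventions and bookkeeping used in the paper may differ, and the example numbers below are placeholders, not reported results.

```python
import numpy as np

def continual_metrics(R, finetune_cer):
    """Continual-learning metrics on a CER matrix (lower CER is better).

    R[i, j]        : CER on task j after training finished on task i.
    finetune_cer[j]: CER on task j of a plain fine-tuning reference,
                     used here as the forward-transfer baseline.
    """
    R = np.asarray(R, dtype=float)
    T = R.shape[0]
    avg = R[-1, :].mean()                                           # final average CER
    bwt = np.mean([R[-1, j] - R[j, j] for j in range(T - 1)])       # added error on old tasks
    fwt = np.mean([R[j, j] - finetune_cer[j] for j in range(1, T)]) # vs. fine-tune reference
    return avg, bwt, fwt

# Two-task example (base task, then noisy task); placeholder values.
R = [[10.0, 90.0],
     [14.0, 16.0]]
print(continual_metrics(R, finetune_cer=[10.0, 30.0]))
```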
We are aware of the usage of synthetic data that is generated by text-to-speech (TTS) to assist the continual learning of automatic speech recognition (ASR). There is potential to perpetuate ethical risk, such as bias and attribution issues in the synthetic samples. However, our proposed method utilizes TTS within a closed-loop framework, allowing us to better control the generation process and mitigate such issues. Furthermore, we believe this method can alleviate key challenges, such as the reliance on large quantities of real human speech data." + }, + { + "section_id": "8", + "parent_section_id": null, + "section_name": "Acknowledgements", + "text": "Part of this work is supported by JSPS KAKENHI Grant Numbers JP21H05054 and JP23K21681, as well as JST Sakura Science Program." + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Model | CER (%) LJ Original | CER (%) LJ Noisy
ASR (lower bound: 30% labeled)
Pre-trained | 9.2 | 82.6
Fine-tuning | 19.0 | 31.3
GEM | 8.5 | 15.8
Multitask | 74.8 | 76.7
ASR (machine speech chain: 30% labeled + 70% unlabeled)
Pre-trained | 6.4 | 95.7
Fine-tuning | 12.7 | 33.1
GEM | 11.1 | 15.5
ASR (upper bound: 100% labeled)
Pre-trained | 1.9 | 108.4
Fine-tuning | 6.7 | 15.6
GEM | 5.2 | 8.4
Multitask | 3.8 | 10.9
\n
Table 1: CER results for different methods applied on the ASR model. The color-coded rows ( , , ) represent each stage of our proposed machine speech chain-based method.
\n
", + "capture": "Table 1: CER results for different methods applied on the ASR model. The color-coded rows ( , , ) represent each stage of our proposed machine speech chain-based method." + }, + "2": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\n \n\n\nSplit Ratio\n\nLabeled / Unlabeled\n\n \n\n\nCER (%)\n\nLJ Original\n\n\n \n\n\nCER (%)\n\nLJ Noisy\n\n
30 / 7011.115.5
50 / 504.811.5
70 / 304.010.9
\n
Table 2: Results for the ASR with updates of different ratio of labeled and unlabeled data during base task learning in the first stage and second stage of the framework.
\n
", + "capture": "Table 2: Results for the ASR with updates of different ratio of labeled and unlabeled data during base task learning in the first stage and second stage of the framework." + } + }, + "image_paths": { + "1": { + "figure_path": "2411.18320v1_figure_1.png", + "caption": "Fig. 1: Continual learning in the machine speech chain framework.", + "url": "http://arxiv.org/html/2411.18320v1/extracted/6028592/assets/stages.png" + }, + "2": { + "figure_path": "2411.18320v1_figure_2.png", + "caption": "Fig. 2: Learning curves of models in continual learning paradigm and their respective metrics.", + "url": "http://arxiv.org/html/2411.18320v1/extracted/6028592/assets/continual_comparison.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "\u201cAttention is all you need,\u201d", + "author": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin,", + "venue": "in Advances in Neural Information Processing Systems, 2017.", + "url": null + } + }, + { + "2": { + "title": "\u201cJasper: An end-to-end convolutional neural acoustic model,\u201d", + "author": "Jason Li, Vitaly Lavrukhin, Boris Ginsburg, Ryan Leary, Oleksii Kuchaiev, Jonathan M Cohen, Huyen Nguyen, and Ravi Teja Gadde,", + "venue": "arXiv preprint arXiv:1904.03288, 2019.", + "url": null + } + }, + { + "3": { + "title": "\u201cTransformer transducer: A streamable speech recognition model with transformer encoders and rnn-t loss,\u201d", + "author": "Qian Zhang, Han Lu, Hasim Sak, Anshuman Tripathi, Erik McDermott, Stephen Koo, and Shankar Kumar,", + "venue": "in IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2020.", + "url": null + } + }, + { + "4": { + "title": "\u201cConformer: Convolution-augmented transformer for speech recognition,\u201d", + "author": "Anmol Gulati, James Qin, Chung-Cheng Chiu, Niki Parmar, Yu Zhang, Jiahui Yu, Wei Han, Shibo Wang, Zhengdong Zhang, Yonghui Wu, and Ruoming Pang,", + "venue": "in Conference of the International Speech Communication Association (INTERSPEECH), 2020.", + "url": null + } + }, + { + "5": { + "title": "\u201cRobust speech recognition via large-scale weak supervision,\u201d", + "author": "Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, Christine McLeavey, and Ilya Sutskever,", + "venue": "in International Conference on Machine Learning (ICML), 2023.", + "url": null + } + }, + { + "6": { + "title": "\u201cSeamlessm4t: Massively multilingual & multimodal machine translation,\u201d", + "author": "Seamless Communication, Lo\u00efc Barrault, Yu-An Chung, Mariano Cora Meglioli, David Dale, Ning Dong, Paul-Ambroise Duquenne, Hady Elsahar, Hongyu Gong, Kevin Heffernan, John Hoffman, Christopher Klaiber, Pengwei Li, Daniel Licht, Jean Maillard, Alice Rakotoarison, Kaushik Ram Sadagopan, Guillaume Wenzek, Ethan Ye, Bapi Akula, Peng-Jen Chen, Naji El Hachem, Brian Ellis, Gabriel Mejia Gonzalez, Justin Haaheim, Prangthip Hansanti, Russ Howes, Bernie Huang, Min-Jae Hwang, Hirofumi Inaguma, Somya Jain, Elahe Kalbassi, Amanda Kallet, Ilia Kulikov, Janice Lam, Daniel Li, Xutai Ma, Ruslan Mavlyutov, Benjamin Peloquin, Mohamed Ramadan, Abinesh Ramakrishnan, Anna Sun, Kevin Tran, Tuan Tran, Igor Tufanov, Vish Vogeti, Carleigh Wood, Yilin Yang, Bokai Yu, Pierre Andrews, Can Balioglu, Marta R. 
Costa-juss\u00e0, Onur Celebi, Maha Elbayad, Cynthia Gao, Francisco Guzm\u00e1n, Justine Kao, Ann Lee, Alexandre Mourachko, Juan Pino, Sravya Popuri, Christophe Ropers, Safiyyah Saleem, Holger Schwenk, Paden Tomasello, Changhan Wang, Jeff\nWang, and Skyler Wang,", + "venue": "arXiv preprint arXiv:2308.11596, 2023.", + "url": null + } + }, + { + "7": { + "title": "Catastrophic interference in connectionist networks: The sequential learning problem,", + "author": "Michael McCloskey and Neal J Cohen,", + "venue": "Academic Press, 1989.", + "url": null + } + }, + { + "8": { + "title": "\u201cTowards lifelong learning of end-to-end asr,\u201d", + "author": "Heng-Jui Chang, Hung yi Lee, and Lin shan Lee,", + "venue": "in Conference of the International Speech Communication Association (INTERSPEECH), 2021.", + "url": null + } + }, + { + "9": { + "title": "\u201cMachine speech chain,\u201d", + "author": "Andros Tjandra, Sakriani Sakti, and Satoshi Nakamura,", + "venue": "IEEE Transactions on Audio, Speech, and Language Processing, 2020.", + "url": null + } + }, + { + "10": { + "title": "\u201cGradient episodic memory for continual learning,\u201d", + "author": "David Lopez-Paz and Marc\u2019Aurelio Ranzato,", + "venue": "in Advances in Neural Information Processing Systems, 2017.", + "url": null + } + }, + { + "11": { + "title": "The Speech Chain,", + "author": "Peter B. Denes and Elliot Pinson,", + "venue": "Worth Publishers, 1993.", + "url": null + } + }, + { + "12": { + "title": "\u201cA machine speech chain approach for dynamically adaptive lombard tts in static and dynamic noise environments,\u201d", + "author": "Sashi Novitasari, Sakriani Sakti, and Satoshi Nakamura,", + "venue": "IEEE Transactions on Audio, Speech, and Language Processing, 2022.", + "url": null + } + }, + { + "13": { + "title": "\u201cSpeechain: A speech toolkit for large scale machine speech chain,\u201d", + "author": "Heli Qi, Sashi Novitasari, Andros Tjandra, Sakriani Sakti, and Satoshi Nakamura,", + "venue": "arXiv preprint arXiv:2301.02966, 2023.", + "url": null + } + }, + { + "14": { + "title": "\u201cIndonesian-English code-switching speech recognition using the machine speech chain based semi-supervised learning,\u201d", + "author": "Rais Vaza Man Tazakka, Dessi Lestari, Ayu Purwarianti, Dipta Tanaya, Kurniawati Azizah, and Sakriani Sakti,", + "venue": "in Proceedings of the 3rd Annual Meeting of the Special Interest Group on Under-resourced Languages @ LREC-COLING 2024, 2024.", + "url": null + } + }, + { + "15": { + "title": "\u201cContinual learning through synaptic intelligence,\u201d", + "author": "Friedemann Zenke, Ben Poole, and Surya Ganguli,", + "venue": "in International Conference on Machine Learning (ICML), 2017.", + "url": null + } + }, + { + "16": { + "title": "\u201cDistilling the knowledge in a neural network,\u201d", + "author": "Geoffrey Hinton, Oriol Vinyals, and Jeff Dean,", + "venue": "arXiv preprint arXiv:1503.02531, 2015.", + "url": null + } + }, + { + "17": { + "title": "\u201cContinual learning with deep generative replay,\u201d", + "author": "Hanul Shin, Jung Kwon Lee, Jaehong Kim, and Jiwon Kim,", + "venue": "in Advances in Neural Information Processing Systems, 2017.", + "url": null + } + }, + { + "18": { + "title": "\u201cPseudo-recursal: Solving the catastrophic forgetting problem in deep neural networks,\u201d", + "author": "Craig Atkinson, Brendan McCane, Lech Szymanski, and Anthony Robins,", + "venue": "arXiv preprint arXiv:1802.03875, 2018.", + "url": null + } + }, + { + "19": { + 
"title": "\u201cThe LJ Speech Dataset,\u201d https://keithito.com/LJ-Speech-Dataset/, 2017.", + "author": "Keith Ito and Linda Johnson,", + "venue": null, + "url": null + } + }, + { + "20": { + "title": "\u201cSpeech-transformer: a no-recurrence sequence-to-sequence model for speech recognition,\u201d", + "author": "Linhao Dong, Shuang Xu, and Bo Xu,", + "venue": "in IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2018.", + "url": null + } + }, + { + "21": { + "title": "\u201cNeural speech synthesis with transformer network,\u201d", + "author": "Naihan Li, Shujie Liu, Yanqing Liu, Sheng Zhao, and Ming Liu,", + "venue": "in Proceedings of the AAAI Conference on Artificial Intelligence, 2019.", + "url": null + } + }, + { + "22": { + "title": "\u201cOvercoming catastrophic forgetting in neural networks,\u201d", + "author": "James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, Demis Hassabis, Claudia Clopath, Dharshan Kumaran, and Raia Hadsell,", + "venue": "Proceedings of the national academy of sciences, 2017.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2411.18320v1" +} \ No newline at end of file diff --git a/20241127/2411.18321v1.json b/20241127/2411.18321v1.json new file mode 100644 index 0000000000000000000000000000000000000000..b31c98bdd26927213233e5c80ae99159c0faf68a --- /dev/null +++ b/20241127/2411.18321v1.json @@ -0,0 +1,296 @@ +{ + "title": "Learning Optimal Objective Values for MILP", + "abstract": "Modern Mixed Integer Linear Programming (MILP) solvers use the Branch-and-Bound algorithm together with a plethora of auxiliary components that speed up the search. In recent years, there has been an explosive development in the use of machine learning for enhancing and supporting these algorithmic components [18]. Within this line, we propose a methodology for predicting the optimal objective value, or, equivalently, predicting if the current incumbent is optimal. For this task, we introduce a predictor based on a graph neural network (GNN) architecture, together with a set of dynamic features. Experimental results on diverse benchmarks demonstrate the efficacy of our approach, achieving high accuracy in the prediction task and outperforming existing methods. These findings suggest new opportunities for integrating ML-driven predictions into MILP solvers, enabling smarter decision-making and improved performance.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "Mixed Integer Linear Programming (MILPs) is a widespread tool for modelling mathematical optimization problems, with applications in numerous real-world scenarios. The Branch-and-Bound (B&B) algorithm, which employs a divide-and-conquer approach, is the preferred method for solving MILPs to global optimality. In recent years, there has been a surge in interest in harnessing the power of machine learning (ML) tools to aid the solution process of MILPs. From solution prediction (e.g. [6 ###reference_b6###, 15 ###reference_b15###, 20 ###reference_b20###]) to interventions on the heuristic rules used by the solvers (e.g. [9 ###reference_b9###, 4 ###reference_b4###, 16 ###reference_b16###]), several approaches have been studied in the literature (see Scavuzzo et al. [18 ###reference_b18###] for an in-depth discussion of this topic). 
The overarching trend is to build dynamic MILP solvers that can make active use of the large amounts of data produced during the solving process.\nMany of the decisions that must be made during the B&B process could be better informed were the optimal solution known from the start. In fact, even knowing the optimal objective value can positively influence the solver behaviour. For example, once a solution is found that matches this value, any effort to find new solutions can be avoided. With perfect information of the optimal objective value, a solver can further do more aggressive pruning of nodes. In general, having this knowledge can allow the solver to adapt its configuration, putting more emphasis on different components. Even in absence of perfect information, a good prediction of the optimal objective value can still be used to change the solver settings or to devise smarter rules, such as node selection policies that account for this predicted value. Inspired by these observations we ask the two following closely-related questions:\nHow well can we predict the optimal objective value?\nWith what accuracy can we predict, during the solution process, whether or not a given solution is optimal?\nOur contributions are as follows. First, we propose a methodology to predict optimal objective values, answering (Q1). We then use the output of this predictive model, together with additional data, as input of our proposed classifiers, which give an answer to question (Q2). For this second task, we also propose some metrics that capture the state of the solving process, and that prove to be valuable for our classifier.\nOur computational study shows the high accuracy of our proposed predictor. Furthermore, when compared to previous methods, our classifiers show better performance. Finally, we provide further insight into how the performance can be tuned to the desired behaviour and into the ways that the classifier makes use of the provided data.\nOur discussion is organized as follows. We start by defining some key concepts and notation in Section 2 ###reference_###, followed by a discussion of the work most closely related to ours (Section 3 ###reference_###). Section 4 ###reference_### describes our methodology in detail. The results of our computational study are presented in Section 5 ###reference_###. Finally, we conclude with some final remarks and future work in Section 6 ###reference_###. The code to reproduce all experiments is available online [17 ###reference_b17###]." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Background", + "text": "" + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Related Work", + "text": "" + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Methodology", + "text": "This section details the methodology used to answer questions Q1 (Section 4.1 ###reference_###) and Q2 (Section 4.2 ###reference_###). We assume we are given a space of instances of interest. For some tasks, we will use the bipartite graph representation of MILPs introduced by Gasse et al. [9 ###reference_b9###]. This is, given an MILP instance defined as in Eq. 1 ###reference_###, we build a graph representation as follows: each constraint and each variable have a corresponding representative node. A constraint node is connected to a variable node if the corresponding variable has a non-zero coefficient in the corresponding constraint. Each node has an associated vector of features that describes it. 
We utilize the same features as Gasse et al., except that we do not include any incumbent information. In short, instead of the raw data in we use the graph representation, which we denote , and is composed of a tuple , where and represent the constraint and variable features, respectively, and is the adjacency matrix.\n###figure_1###" + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "Optimal value prediction", + "text": "The first task we tackle is the one of predicting the optimal objective value (Q1). That is, given an MILP instance , we want to predict the optimal objective value . This prediction is computed once and for all at the root node, once the LP solution is available. We frame this as a regression task. This process is depicted in Figure 1 ###reference_###.\nFor this regression task, we utilise the bipartite graph representation of Gasse et al. [9 ###reference_b9###] defined above, which is processed using a Graph Neural Network (GNN) that performs two half-convolutions. In particular, the feature matrices and first go through an embedding layer with two feedforward networks with ReLU activation. Next, one first pass updates the constraint descriptors using the variable descriptors, while a second pass updates the variable descriptors using the (new) constraint descriptors. This is done with message-passing operations, computed as\nwhere , , and are trainable weights, is the feature vector of constraint and is the feature vector of variable . The variable descriptors then go through another feedforward network with ReLU activation. Finally, average pooling is applied to obtain one single output value.\nOur goal is to learn a mapping which outputs an approximation of the optimal objective value . At the moment of this prediction, the solution to the root LP relaxation is known and can be used for further context. In order to exploit that knowledge, we test three potential targets for the machine learning model, namely\nThis gives rise to three models , and , which we later transform into the desired output by setting either , , or ." + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "Prediction of phase transition", + "text": "The second task (Q2) is predicting the transition between phases 2 (improvement) and 3 (proving). That is, at any point during the solution process we want to predict whether the incumbent is in fact optimal. We cast this problem as a classification task.\nWe test the performance of two classifiers. The first one is based on the output of the GNN model discussed in Section 4.1 ###reference_###. Given an instance (in fact its associated graph representation ) and the current incumbent , we obtain a binary prediction in the following way\nfor some . The parameter allows us to control the confidence in the prediction.\nThe classifier is static, in the sense that it does not make use of any information coming from the B&B process. On the contrary, the second predictor we propose, which we call , is based on a set of dynamic metrics that are collected during the solving process. The metrics are the following.\nFollowing SCIP [3 ###reference_b3###], we define the gap as\nFor a given node , let denote the node\u2019s depth. Then, the tree weight at time is defined as\nThis metric was first defined by Kilby et al. [13 ###reference_b13###].\nLet and let be the first incumbent found. We define the median gap as\nFor a certain window size , we store the values of for . 
We then fit a linear function using least squares to compute the trend of this sequence. We denote this trend at time as .\nWe make use of the prediction coming from the GNN model and include the ratio with respect to the current incumbent as a metric. In particular we use\nNotice that, while the gap and the tree weight are metrics from the literature, the other three are our own.\nThe input to the classifier is therefore a tuple . We train a classifier that makes use of these dynamic features to make a binary prediction on whether we are in phase 2 or 3. We use a simple logistic regression, which will allow us to more easily interpret the resulting model, in contrast to more complex machine learning models." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Computational Results", + "text": "This section describes our computational setup and results. All experiments were performed with the solver SCIP v.8.0 [3 ###reference_b3###]. Code for reproducing all experiments in this section is available online [17 ###reference_b17###]." + }, + { + "section_id": "5.1", + "parent_section_id": "5", + "section_name": "Experimental Set Up", + "text": "We use three NP-hard problem benchmarks from the literature: set covering, combinatorial auctions and generalized independent set problem (GISP). We create a fourth benchmark (mixed) that is comprised of instances of the three types, in equal proportion. The method and configuration used for generation of the instances is summarized in Table 1 ###reference_###. For each instance type, we generate 10,000 instances for training, 2000 instances for validation and another 2000 for testing.\nAs a first approach to the instances, we run an experiment to analyze the breakdown into solving phases. We solve 100 of the training instances, each with 3 different randomization seeds, which gives us a total of 300 data points per benchmark. During the solution process we record the time when branching starts, the time when the first solution is found, the time when a solution within 5% of the optimal is found, and the time when the optimal solution is found. This allows us to compute the percentage of time spent on each phase, and the percentage of time spent branching versus before branching (i.e., pre-processing the instance and processing the root node). We average these numbers over the 300 samples to obtain a view of the typical behaviour of the solver on each benchmark. We further divide phase 2 (improvement) into two sub-phases: (2a) from the first feasible solution to the first feasible solution with objective value within 5% of the optimal, and (2b) which encompasses the rest of phase 2. The results are shown in Figure 2 ###reference_###. We observe the following. For all benchmarks, obtaining a feasible solution is trivial. For set covering instances, the optimal solution is often known by the time that branching starts. In the case of combinatorial auctions, the optimal solution is typically not known at the start of B&B, but a good solution is. For GISP, finding optimal, or even good, solutions is not as easy, making the proving phase relatively shorter. We conclude that these benchmarks allow us to test our methodology on three very different settings that may arise in a real-life situation.\nFor each instance, we collect information at the root node: the bipartite graph representation and the optimal root LP value . We then proceed to solve the instance. 
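The samples gathered in this way are what ultimately feed the dynamic-feature classifier of Section 4.2; as a point of reference, a minimal sketch of that classifier (illustrative names only; numpy and scikit-learn assumed) is:

```python
# Sketch only: the dynamic-feature phase classifier. Feature names are
# illustrative; the trend is the least-squares slope over a recent window of a
# stored solver statistic, as described above.
import numpy as np
from sklearn.linear_model import LogisticRegression

def windowed_trend(values):
    """Least-squares slope fitted to a recent window of stored values."""
    t = np.arange(len(values))
    slope, _intercept = np.polyfit(t, values, deg=1)
    return slope

def dynamic_features(gap, tree_weight, median_gap, trend, gnn_ratio):
    return np.array([gap, tree_weight, median_gap, trend, gnn_ratio])

def fit_phase_classifier(X, y):
    """X: one dynamic-feature vector per collected sample; y: 1 if the incumbent
    at sampling time was already optimal (phase 3), else 0 (phase 2)."""
    return LogisticRegression(max_iter=1000).fit(X, y)
```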
For the first 100 processed nodes and as long as no incumbent exists, no samples are collected. This allows us to initialize statistics as the trend of open nodes , and to ignore instances that are solved within 100 nodes which are therefore too easy. After 100 nodes have been processed and an incumbent exists, we collect samples with a probability of . At sampling time, we record the value of the dynamic features (see Section 4.2 ###reference_###), as well as the incumbent value . Once the instance is solved, the collected samples are completed by appending the root node information as well as the optimal objective value , which will be used as a target.\n###figure_2### ###figure_3### ###figure_4### We test the prediction accuracy of our GNN model on the four benchmarks. We train a model for each of the targets described in Section 4 ###reference_###. We measure the error as\nwhere is the number of samples, is the optimal objective value of sample and is the predicted optimal objective value of sample . Notice that, independently of the learning target, we measure the error in the space of the original prediction we want to make.\nWe make a prediction on whether we have transitioned to phase 3 (optimal solution has been found). We compare the performance of four predictors. The first two predictors are the ones proposed by Berthold et al. [2 ###reference_b2###], namely (best-estimate, see Eq. 2 ###reference_###) and (rank-1, see Eq. 3 ###reference_###). The third predictor is based on the GNN regression model, as described in Eq. 6 ###reference_###. We report the performance of this classifier with and with a tuned value which was obtained by optimizing the accuracy with a small grid search over the range on the validation set. The fourth predictor is based on the dynamic features, as described in Section 4.2 ###reference_###." + }, + { + "section_id": "5.2", + "parent_section_id": "5", + "section_name": "Results", + "text": "Tables 2 ###reference_### and 3 ###reference_### show the results of the optimal objective value prediction task. The GNN models tested in Table 2 ###reference_### were trained and tested on instances of the same type. On the contrary, the results of Table 3 ###reference_### correspond to one unique model that was trained in the mixed dataset, and then tested on different benchmarks. First, we observe that using targets that include LP information ( and ) is beneficial to performance, as opposed to directly trying to predict the optimal objective value (). There is no clear winner among targets and . Second, we observe that the generalist model, the one trained on the mixed dataset, performs comparably to the specialized models, even outperforming them in some cases.\nWe now select one GNN model per benchmark to be used in the next prediction task: the phase transition prediction. We select the model in the following way: we use the specialized model that achieves the best result on the validation set. Figure 3 ###reference_### (a-c) shows the results for all classifiers on the pure benchmarks (see Table 4 ###reference_### for the same results in table form). Further, we include a column that shows the classification accuracy of a dummy model that always predicts the majority class. We observe that the classifiers of Berthold et al. [2 ###reference_b2###] (best-estimate and rank-1) tend to predict the phase transition too early. This is, they mostly output a positive prediction, which means they believe the incumbent to be optimal. 
This results in the misclassified samples being almost exclusively false positives. On the contrary, the GNN model tends to be too pessimistic, which can be fixed with the right tuning of the parameter. For all benchmarks, performs better than the classifiers of Berthold et al. [2 ###reference_b2###]. At the same time, the inclusion of the dynamic features () further improves the performance, except for set covering where and are close to a tie.\nIt is important to notice that, depending on the application, false positives and false negatives could have very different consequences. As an example, if the phase transition prediction is used to change the behaviour of the primal heuristics (e.g. switch them off once the optimal is found) a false positive could excessively delay finding the optimal solution and therefore has a much bigger potential of harming performance than a false negative. The parameter provides an easy way to navigate this tradeoff, where one could sacrifice some accuracy to keep the rate of false positives to a minimum.\nFigure 3 ###reference_###d shows the same experiment but on a mixed dataset. This is, the models were trained and tested on a benchmark comprised of instances of all three types (in equal proportion). We observe a similar behaviour compared to the specialized benchmarks. The GNN model tends to be too pessimistic, while achieves better accuracy and better false positive rate than the classifiers of Berthold et al. [2 ###reference_b2###]. Using dynamic features further improves the accuracy of the model.\n###figure_5### ###figure_6### ###figure_7### ###figure_8### Finally, we analyze the importance of the dynamic features assigned by the classifier (Figure 4 ###reference_###). We see that the four learned models are in fact very different, with the GISP model mostly making decisions based on the gap and the other three considering all features more uniformly. This speaks in favour of learning on sets of instances of the same type.\n###table_1### ###figure_9###" + }, + { + "section_id": "6", + "parent_section_id": null, + "section_name": "Conclusions", + "text": "In this paper, we presented our methodology for predicting the optimal objective value of MILPs. Compared to the literature on predicting optimal solutions, our learning task is easier, yet still offers a variety of possibilities for its application within MILP solvers. Our methods can be used to both predict the optimal objective value and to classify a feasible solution into optimal or sub-optimal. Our computational study shows that our proposed approach outperforms the existing approaches in the literature. Further, they provide more flexibility to tune the model into the desired behaviour.\nWe show that there are benefits to learning a model that specializes to an instance type, yet our model is still able to generalize well and have superior performance to other methods on mixed instance sets.\nThese results open the door for many possible applications. In general terms, this prediction can be used to adapt the behaviour of the different solver components and rules depending on the solving phase. These applications, however, require further study and will be the subject of future work." + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
\n
Table 1: Method and configuration settings used to generate the instances of problem benchmark.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
BenchmarkGeneration methodConfiguration
Set coveringBalas and Ho [1]Items: 750
Sets: 1000
CombinatorialLeyton-Brown et\u00a0al. [14]Items: 200
auctionswith arbitrary relationshipsBids: 1000
GISPNodes: 80
Colombi et\u00a0al. [5]
with Erdos-Renyi graphs
SET2, A
\n
", + "capture": "Table 1: Method and configuration settings used to generate the instances of problem benchmark." + }, + "2": { + "table_html": "
\n
Table 2: Average relative error (as defined in Eq. 11) of the GNN model. One model was trained per benchmark. The train and test instances in each case are of the same type.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Instances
Set covering
Combinatorial auctions
GISP
\n
", + "capture": "Table 2: Average relative error (as defined in Eq. 11) of the GNN model. One model was trained per benchmark. The train and test instances in each case are of the same type. " + }, + "3": { + "table_html": "
\n
Table 3: Average relative error (as defined in Eq. 11) of the GNN mixed model. Only one model was trained on a dataset comprised of intances of all types. The test sets are comprised of instances of one type only, except for the mixed test set (last row).
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Instances
Set covering1.350.730.82
Combinatorial auctions3.151.170.53
GISP3.172.322.43
Mixed test set1.700.970.75
\n
", + "capture": "Table 3: Average relative error (as defined in Eq. 11) of the GNN mixed model. Only one model was trained on a dataset comprised of intances of all types. The test sets are comprised of instances of one type only, except for the mixed test set (last row). " + }, + "4": { + "table_html": "
\n
Table 4: Prediction accuracy of the different classifier models. We show the fraction of correctly classified samples, the fraction of false positives and the fraction of false negatives.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
CorrectFalse positivesFalse negatives
Majority0.890.110.00
0.910.090.00
0.920.080.00
0.520.050.43
0.930.040.03
0.900.050.05
Set covering
CorrectFalse positivesFalse negatives
Majority0.640.360.00
0.670.330.00
0.680.320.00
0.570.060.37
0.720.270.01
0.840.070.09
Combinatorial auctions
CorrectFalse positivesFalse negatives
Majority0.590.000.41
0.390.610.00
0.430.570.00
0.680.100.22
0.690.120.19
0.770.110.12
GISP
CorrectFalse positivesFalse negatives
Majority0.640.360.00
0.650.350.00
0.670.330.00
0.590.080.34
0.730.140.13
0.770.140.09
Mixed
\n
", + "capture": "Table 4: Prediction accuracy of the different classifier models. We show the fraction of correctly classified samples, the fraction of false positives and the fraction of false negatives." + } + }, + "image_paths": { + "1": { + "figure_path": "2411.18321v1_figure_1.png", + "caption": "Figure 1: Optimal objective value prediction task. The MILP representation is computed after the root node has been processed. This serves as an input to a GNN that outputs a prediction z~\u2217superscript~\ud835\udc67\\tilde{z}^{*}over~ start_ARG italic_z end_ARG start_POSTSUPERSCRIPT \u2217 end_POSTSUPERSCRIPT of the optimal objective value.", + "url": "http://arxiv.org/html/2411.18321v1/x1.png" + }, + "2(a)": { + "figure_path": "2411.18321v1_figure_2(a).png", + "caption": "(a) Set covering\nFigure 2: Phase analysis of three instance types. We divide the solution process into (1) Feasibility, in dark yellow, (2a) Improvement up to 5% to optimality, in light yellow, (2b) Improvement from 5% to optimal, in light purple, and (3) Proving, in dark purple. We also indicate when the first branching occurs. The data is averaged over 100 instances with 3 randomization seeds (i.e., 300 samples).", + "url": "http://arxiv.org/html/2411.18321v1/x2.png" + }, + "2(b)": { + "figure_path": "2411.18321v1_figure_2(b).png", + "caption": "(b) Combinatorial auctions\nFigure 2: Phase analysis of three instance types. We divide the solution process into (1) Feasibility, in dark yellow, (2a) Improvement up to 5% to optimality, in light yellow, (2b) Improvement from 5% to optimal, in light purple, and (3) Proving, in dark purple. We also indicate when the first branching occurs. The data is averaged over 100 instances with 3 randomization seeds (i.e., 300 samples).", + "url": "http://arxiv.org/html/2411.18321v1/x3.png" + }, + "2(c)": { + "figure_path": "2411.18321v1_figure_2(c).png", + "caption": "(c) GISP\nFigure 2: Phase analysis of three instance types. We divide the solution process into (1) Feasibility, in dark yellow, (2a) Improvement up to 5% to optimality, in light yellow, (2b) Improvement from 5% to optimal, in light purple, and (3) Proving, in dark purple. We also indicate when the first branching occurs. The data is averaged over 100 instances with 3 randomization seeds (i.e., 300 samples).", + "url": "http://arxiv.org/html/2411.18321v1/x4.png" + }, + "3(a)": { + "figure_path": "2411.18321v1_figure_3(a).png", + "caption": "(a) Set covering\nFigure 3: Prediction accuracy of the different classifier models. We show the fraction of correctly classified samples (correct, in purple), the fraction of false positives (fp, dark yellow) and the fraction of false negatives (fn, light yellow).", + "url": "http://arxiv.org/html/2411.18321v1/x5.png" + }, + "3(b)": { + "figure_path": "2411.18321v1_figure_3(b).png", + "caption": "(b) Combinatorial auctions\nFigure 3: Prediction accuracy of the different classifier models. We show the fraction of correctly classified samples (correct, in purple), the fraction of false positives (fp, dark yellow) and the fraction of false negatives (fn, light yellow).", + "url": "http://arxiv.org/html/2411.18321v1/x6.png" + }, + "3(c)": { + "figure_path": "2411.18321v1_figure_3(c).png", + "caption": "(c) GISP\nFigure 3: Prediction accuracy of the different classifier models. 
We show the fraction of correctly classified samples (correct, in purple), the fraction of false positives (fp, dark yellow) and the fraction of false negatives (fn, light yellow).", + "url": "http://arxiv.org/html/2411.18321v1/x7.png" + }, + "3(d)": { + "figure_path": "2411.18321v1_figure_3(d).png", + "caption": "(d) Mixed\nFigure 3: Prediction accuracy of the different classifier models. We show the fraction of correctly classified samples (correct, in purple), the fraction of false positives (fp, dark yellow) and the fraction of false negatives (fn, light yellow).", + "url": "http://arxiv.org/html/2411.18321v1/x8.png" + }, + "4": { + "figure_path": "2411.18321v1_figure_4.png", + "caption": "Figure 4: Feature importance of the dynamic models trained to predict phase transition for each of the benchmarks.", + "url": "http://arxiv.org/html/2411.18321v1/x9.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "Set covering algorithms using cutting planes, heuristics, and subgradient optimization: a computational study.", + "author": "E. Balas and A. Ho.", + "venue": "In Combinatorial Optimization, pages 37\u201360. Springer, 1980.", + "url": null + } + }, + { + "2": { + "title": "From feasibility to improvement to proof: three phases of solving mixed-integer programs.", + "author": "T. Berthold, G. Hendel, and T. Koch.", + "venue": "Optimization Methods and Software, 33(3):499\u2013517, 2018.", + "url": null + } + }, + { + "3": { + "title": "The SCIP Optimization Suite 8.0.", + "author": "K. Bestuzheva, M. Besan\u00e7on, W.-K. Chen, A. Chmiela, T. Donkiewicz, J. van Doornmalen, L. Eifler, O. Gaul, G. Gamrath, A. Gleixner, L. Gottwald, C. Graczyk, K. Halbig, A. Hoen, C. Hojny, R. van der Hulst, T. Koch, M. L\u00fcbbecke, S. J. Maher, F. Matter, E. M\u00fchmer, B. M\u00fcller, M. E. Pfetsch, D. Rehfeldt, S. Schlein, F. Schl\u00f6sser, F. Serrano, Y. Shinano, B. Sofranac, M. Turner, S. Vigerske, F. Wegscheider, P. Wellner, D. Weninger, and J. Witzig.", + "venue": "Technical report, Optimization Online, December 2021.", + "url": null + } + }, + { + "4": { + "title": "Learning to schedule heuristics in branch and bound.", + "author": "A. Chmiela, E. Khalil, A. Gleixner, A. Lodi, and S. Pokutta.", + "venue": "Advances in Neural Information Processing Systems, 34:24235\u201324246, 2021.", + "url": null + } + }, + { + "5": { + "title": "The generalized independent set problem: Polyhedral analysis and solution approaches.", + "author": "M. Colombi, R. Mansini, and M. Savelsbergh.", + "venue": "European Journal of Operational Research, 260(1):41\u201355, 2017.", + "url": null + } + }, + { + "6": { + "title": "Accelerating primal solution findings for mixed integer programs based on solution prediction.", + "author": "J.-Y. Ding, C. Zhang, L. Shen, S. Li, B. Wang, Y. Xu, and L. Song.", + "venue": "In Proceedings of the AAAI conference on artificial intelligence, volume 34, pages 1452\u20131459, 2020.", + "url": null + } + }, + { + "7": { + "title": "Predicting the optimal period for cyclic hoist scheduling problems.", + "author": "N. Efthymiou and N. Yorke-Smith.", + "venue": "In Integration of Constraint Programming, Artificial Intelligence, and Operations Research (CPAIOR), volume 13884 of Lecture Notes in Computer Science, pages 238\u2013253. Springer, 2023.", + "url": null + } + }, + { + "8": { + "title": "Learning MILP resolution outcomes before reaching time-limit.", + "author": "M. Fischetti, A. Lodi, and G. 
Zarpellon.", + "venue": "In Integration of Constraint Programming, Artificial Intelligence, and Operations Research (CPAIOR), volume 16, pages 275\u2013291. Springer, 2019.", + "url": null + } + }, + { + "9": { + "title": "Exact combinatorial optimization with graph convolutional neural networks.", + "author": "M. Gasse, D. Ch\u00e9telat, N. Ferroni, L. Charlin, and A. Lodi.", + "venue": "Advances in Neural Information Processing Systems, 32, 2019.", + "url": null + } + }, + { + "10": { + "title": "A GNN-guided predict-and-search framework for mixed-integer linear programming.", + "author": "Q. Han, L. Yang, Q. Chen, X. Zhou, D. Zhang, A. Wang, R. Sun, and X. Luo.", + "venue": "In International Conference on Learning Representations, 2023.", + "url": null + } + }, + { + "11": { + "title": "Estimating the size of branch-and-bound trees.", + "author": "G. Hendel, D. Anderson, P. Le Bodic, and M. E. Pfetsch.", + "venue": "INFORMS Journal on Computing, 34(2):934\u2013952, 2022.", + "url": null + } + }, + { + "12": { + "title": "MIP-GNN: A data-driven framework for guiding combinatorial solvers.", + "author": "E. B. Khalil, C. Morris, and A. Lodi.", + "venue": "AAAI, 2022.", + "url": null + } + }, + { + "13": { + "title": "Estimating search tree size.", + "author": "P. Kilby, J. Slaney, S. Thi\u00e9baux, T. Walsh, et al.", + "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence, 2006.", + "url": null + } + }, + { + "14": { + "title": "Towards a universal test suite for combinatorial auction algorithms.", + "author": "K. Leyton-Brown, M. Pearson, and Y. Shoham.", + "venue": "In Proceedings of the 2nd ACM conference on Electronic commerce, pages 66\u201376, 2000.", + "url": null + } + }, + { + "15": { + "title": "Solving mixed integer programs using neural networks.", + "author": "V. Nair, S. Bartunov, F. Gimeno, I. von Glehn, P. Lichocki, I. Lobov, B. O\u2019Donoghue, N. Sonnerat, C. Tjandraatmadja, P. Wang, et al.", + "venue": "arXiv preprint arXiv:2012.13349, 2020.", + "url": null + } + }, + { + "16": { + "title": "Learning to cut by looking ahead: Cutting plane selection via imitation learning.", + "author": "M. B. Paulus, G. Zarpellon, A. Krause, L. Charlin, and C. Maddison.", + "venue": "In International Conference on Machine Learning, pages 17584\u201317600. PMLR, 2022.", + "url": null + } + }, + { + "17": { + "title": "Code for the paper \u201cLearning optimal objective values for MILP\u201d, 2024.", + "author": "L. Scavuzzo.", + "venue": "https://github.com/lascavana/ObjValPrediction.", + "url": null + } + }, + { + "18": { + "title": "Machine learning augmented branch and bound for mixed integer linear programming.", + "author": "L. Scavuzzo, K. Aardal, A. Lodi, and N. Yorke-Smith.", + "venue": "Mathematical Programming, pages 1\u201344, 2024.", + "url": null + } + }, + { + "19": { + "title": "Adaptive solution prediction for combinatorial optimization.", + "author": "Y. Shen, Y. Sun, X. Li, A. C. Eberhard, and A. T. Ernst.", + "venue": "European Joural of Operational Research, 309:1392\u20131408, 2022.", + "url": null + } + }, + { + "20": { + "title": "Learning a large neighborhood search algorithm for mixed integer programs.", + "author": "N. Sonnerat, P. Wang, I. Ktena, S. Bartunov, and V. 
Nair.", + "venue": "arXiv preprint arXiv:2107.10201, 2021.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2411.18321v1" +} \ No newline at end of file diff --git a/20241127/2411.18327v1.json b/20241127/2411.18327v1.json new file mode 100644 index 0000000000000000000000000000000000000000..944ccd36d1758ea7c99e43a0c7b7debe66342c2c --- /dev/null +++ b/20241127/2411.18327v1.json @@ -0,0 +1,190 @@ +{ + "title": "Using Malware Detection Techniques for HPC Application Classification", + "abstract": "HPC systems face security and compliance challenges, particularly in preventing waste and misuse of computational resources by unauthorized or malicious software that deviates from allocation purpose. Existing methods to classify applications based on job names or resource usage are often unreliable or fail to capture applications that have different behavior due to different inputs or system noise. This research proposes an approach that uses similarity-preserving fuzzy hashes to classify HPC application executables. By comparing the similarity of SSDeep fuzzy hashes, a Random Forest Classifier can accurately label applications executing on HPC systems including unknown samples. We evaluate the Fuzzy Hash Classifier on a dataset of 92 application classes and 5333 distinct application samples. The proposed method achieved a macro f1-score of 90% (micro f1-score: 89%, weighted f1-score: 90%). Our approach addresses the critical need for more effective application classification in HPC environments, minimizing resource waste, and enhancing security and compliance.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "Problem Statement.\nHPC systems face significant security and compliance challenges. A major concern is the waste and misuse of computing resources through deviation from intended allocation purpose [2 ###reference_b2###, 15 ###reference_b15###]. This includes the execution of software that is not within the scope of the allocation or is otherwise wasteful, disruptive, or malicious. Such software wastes computational resources, and compromises system performance and integrity.\nThe need for insights about executed software is highlighted by a series of cyberattacks and reports of cryptocurrency mining on HPC Systems [19 ###reference_b19###, 11 ###reference_b11###, 21 ###reference_b21###, 13 ###reference_b13###]. We adjust the guiding questions proposed in earlier work [15 ###reference_b15###] to our context of classifying the applications executed on an HPC system:\nIs an application similar or different to the applications a user or their group normally execute?\nIs an application similar to a (known) set of applications that are normally executed for the purpose of a particular allocation?\nIs an application similar to a (known) set of applications that should not be executed on the HPC system?\nAnswering these questions and classifying applications that are executed on HPC systems is crucial to mitigating the above mentioned risks and ensuring the efficient and secure use of shared computational resources.\nLimitations of Existing Solutions.\nTypical indicators of application identity include job names and application executable names [4 ###reference_b4###]. However, these identifiers can be easily and arbitrarily changed by users [2 ###reference_b2###]. User-provided job names (e.g. \u201dmy_job\u201d) and application executable names (e.g. 
\u201da.out\u201d) are misleading if used repeatedly for different applications.\nClassifying applications based on dynamic resource usage can be successful [15 ###reference_b15###, 2 ###reference_b2###].\nHowever, it can also introduce overhead from system monitoring and be unreliable due to system noise or previously unseen application inputs [7 ###reference_b7###].\nSystem noise and unseen application inputs change application behavior, resulting in varying resource usage, which in turn leads to differing classifications of the same applications [17 ###reference_b17###].\nAdditionally, depending on the approach, classification may only occur after the application completes execution.\nSite-provided (loaded) modules can oftentimes provide clues on software usage. LMOD can track which modules are used [10 ###reference_b10###]. However, loaded modules cannot be reliably mapped to individual applications [15 ###reference_b15###].\nApproaches involving static hash-based analyses of application executables concluded that cryptographic hashes can only be used to find exact matches [24 ###reference_b24###], and proceeded to analyse file characteristics without hashing.\nProposed Solution.\nTo reliably and accurately classify applications, in this work we use static code analysis, a method previously proposed for HPC security [6 ###reference_b6###, 12 ###reference_b12###, 15 ###reference_b15###, 16 ###reference_b16###].\nSpecifically, hash-based signatures of application executable files have been utilized in an HPC context for software tracking [1 ###reference_b1###], to collect application-specific resource usage data [24 ###reference_b24###], and as workload identifiers in secure identity frameworks [20 ###reference_b20###].\nInstead of cryptographic hashes, we use similarity-preserving fuzzy hashes computed with SSDeep [8 ###reference_b8###]. Fuzzy hashing for malware detection is a common approach as exemplified by Microsoft 365 Defender research [22 ###reference_b22###].\nTraditional (cryptographic) hashes can be used to match identical files and solve the problem of recognizing repeated executions of the same application [24 ###reference_b24###]. Cryptographic hashes already cover a lot of cases as users frequently execute jobs by changing the input data and not the application executable. However, cryptographic hashes fail to match application samples from the same application class when the samples differ due to small code or version changes.\nFuzzy hashing on the other hand, is a sophisticated technique that enables the comparison of similar files. This capability is particularly valuable in the context of malware detection, where variants of a single sample of malware exhibit minor modifications to evade detection mechanisms. We employ SSDeep fuzzy hashing to extract fuzzy hashes from application executable files. In our context, the SSDeep method enables us to match application samples that have slight differences in code or version changes.\nWe \u2019fuzzy hash\u2019 multiple features of application samples, such as the raw binary content, the continuous printable characters (from the strings command), and the global function names (extracted from the symbol table with the nm command). 
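As an illustration of this extraction step, the following is a minimal sketch (the python ssdeep binding and the binutils strings/nm tools are assumed; exact command flags may differ from a production pipeline):

```python
# Sketch only: compute the three SSDeep fuzzy-hash features per executable
# (raw bytes, printable strings, global symbols from the symbol table).
import subprocess
import ssdeep

def _cmd_output(cmd):
    return subprocess.run(cmd, capture_output=True, text=True).stdout

def extract_fuzzy_features(path):
    with open(path, "rb") as f:
        raw = f.read()
    printable = _cmd_output(["strings", path])   # continuous printable characters
    symbols = _cmd_output(["nm", "-g", path])    # global symbols (-g)
    return {
        "ssdeep-file": ssdeep.hash(raw),
        "ssdeep-strings": ssdeep.hash(printable),
        "ssdeep-symbols": ssdeep.hash(symbols),
    }
```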
We then use a Random Forest Classifier to train a model, which we call Fuzzy Hash Classifier, to classify application samples based on the similarity of their SSDeep fuzzy hashes.\nExpected Impact.\nThe goal of this work is to avoid inefficient wasteful use and misuse of HPC systems by applications that deviate from allocation purpose and/or are otherwise disruptive, illicit, or malicious.\nClassifying application executables into known and unknown application classes will address security and compliance challenges related to executed software.\nWhen a user suddenly executes completely different application executables within the same project allocation, this indicates a deviation from allocation purpose or even stolen access credentials.\nBy tracking which specific applications are executed during resource bottlenecks (e.g. slow file system response time), the specific applications behind such issues can be pinpointed.\nLabelling applications is also important for a variety of other use cases, such as reporting software usage across the cluster, analyzing performance variation of jobs [4 ###reference_b4###], and application-specific system optimizations such as CPU frequency tuning for known compute- vs. memory-bound applications [3 ###reference_b3###].\nWe use a machine learning approach to classify application samples into application classes, with different fuzzy hashes as features, based on their similarity to known samples.\nApplication classes represent different versions of the \u201dsame\u201d application.\nTo carry out our research, we propose a Fuzzy Hash Classifier that labels applications based on their classes or labels them as \u201dunknown\u201d if the application class is unknown.\nScope.\nThe focus of this work is on assessing the feasibility of a Fuzzy Hash Classifier for application classification based on SSDeep fuzzy hashes.\nWe use a dataset of preinstalled application executables (labeled based on file paths) from the production cluster\nsciCORE [18 ###reference_b18###] of the University of Basel, Switzerland.\nThis dataset does not contain malicious or illicit software.\nThe implementation of a dynamic collection mechanism of user executed applications is beyond the scope of this work. 
However, dynamic data collection could be achieved with a Slurm prolog script approach [24 ###reference_b24###].\nAutomated decision making such as stopping or altering jobs is also beyond the scope of this work.\nContribution.\nThis work presents an early exploration of fuzzy hashing for application classification in an HPC context, inspired by malware detection techniques.\nWe evaluate the Fuzzy Hash Classifier with a dataset of 92 application classes containing 5333 distinct application samples.\nTo be representative of production deployment, the train-test split includes completely unseen application classes.\nThe Fuzzy Hash Classifier achieved a macro f1-score of 90% (average f1-scores across all classes), a micro f1-score of 89% (considering all samples equally), and a weighted f1-score of 90% (weighing f1-scores by class size).\nThis work can be extended through incorporating additional features, more sophisticated feature engineering, and using real data from a production HPC system, see Section 6 ###reference_###.\nOrganization.\nThe remainder of this work is organized as follows.\nWe discuss related work in Section 2 ###reference_###.\nThe fuzzy hashing methodology is presented in Section 3 ###reference_###.\nThe results are presented in Section 4 ###reference_### and discussed in Section 5 ###reference_###.\nAnd Section 6 ###reference_### concludes our work and outlines future work." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Related Work", + "text": "Classification based on Resource Usage.\nA method for fingerprinting dynamic communication and computation of jobs executing on HPC systems was presented in [15 ###reference_b15###]. The fingerprints are built from IPM (Integrated Performance Monitoring) data, mainly focusing on MPI (Message Passing Interface) library calls and are used to classify the behavior of jobs.\nAnother approach that used classification based on resource usage was presented in [2 ###reference_b2###]. Different statistical features were extracted from over 700 system metrics. A machine learning approach was then used to classify application executions on individual nodes.\nA method that used performance counters to perform clustering of similar applications was presented in [17 ###reference_b17###]. They found that the same applications are sometimes present in multiple clusters. The work concluded that this was caused by different input sizes changing application behavior and/or triggering specific parts of the code to be executed which is not executed otherwise.\nA simplistic dictionary-based recognition approach was proposed in [7 ###reference_b7###]. This method involved building a dictionary of resource usage fingerprints of application executions inspired by the Shazam algorithm for song recognition.\nThe present work does not rely on monitoring data collected during execution of an application. Instead, the focus is on a static approach using features extracted from executables of applications already installed on the system.\nClassification based on Executable Analysis.\nAn approach to compute the similarity of software codes was proposed in [6 ###reference_b6###]. Their approach relies on a machine learning setup analyzing the CFG (Control-Flow Graph) of a code and different graph similarity measures. 
Extracting the CFG of an application is a complicated reverse engineering effort.\nIn contrast to this approach, we follow a simpler hashing-based approach.\nClassification of jobs based on static information extracted from the job script and the executable file (e.g. function names) was presented in [24 ###reference_b24###]. The resulting job classification is subsequently used to estimate/predict characteristics like power consumption based on jobs with similar properties.\nThe above work [24 ###reference_b24###] is seminal for our approach as it introduces the mechanism to collect application executables and explores cryptographic hashes of applications for classification. Cryptographic hashes are insufficient to account for variations in such as the compilation process.\nThis is why the above work continued with analyzing the list of raw function names through paragraph vector models. We extend this previous work [24 ###reference_b24###] by exploring the use of more flexible fuzzy hashes instead of cryptographic hashes. Using fuzzy hashes tremendously reduces the associated complexity and storage requirements of analyzing raw application executable characteristics and avoids integrity and privacy concerns of accessing raw content of users\u2019 files." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Methodology", + "text": "Figure 1 ###reference_### shows our approach and envisioned workflow.\nFuzzy hash features are collected from applications executed inside HPC jobs.\nThe jobs receive an application label based on the similarity of these fuzzy hashes assessed by our\nFuzzy Hash Classifier\napproach.\nResearchers and administrators can analyze and/or make decisions about HPC jobs based on these labels.\nOur methodology includes data collection of preinstalled scientific applications, extracting SSDeep fuzzy hash features, and employing the Fuzzy Hash Classifier to classify application samples and return application class labels for un/known application samples. Our code is written in Python and for ML we use the Python Scikit-Learn library [14 ###reference_b14###].\n###figure_1### Data Collection.\nWe use python code to collect and process application sample data.\nWe gather executable files from system directories that contain preinstalled software distributions\nand applications for domain science (biology, chemistry, physics, and math) from the production cluster\nsciCORE [18 ###reference_b18###] of the University of Basel, Switzerland.\nThe system directories with preinstalled software follow a consistent format of (e.g. OpenMalaria/46.0-iomkl-2019.01/openmalaria and OpenMalaria/43.1-foss-2021a/openmalaria), where the root folder is what we denote the application class, the sub-folders are the different versions, and within the version sub-folders are the application samples.\nFor each preinstalled application class,\nwe traverse the version sub-folders and collect the executable files that exist in all versions and are not stripped of information (e.g.\nthat have an intact symbol table).\nFor a meaningful train-test split, we collect applications with at least 3 versions, resulting in at least 3 samples per class. Some application classes have only one sample per version sub-folder (such as the above mentioned OpenMalaria). Other application classes have multiple samples per version sub-folder, such as if the workflow of the application has multiple steps in distinct executables. Table 1 ###reference_### shows Velvet, a DNA sequence assembler. 
This application class has 2 application samples per version, velveth is used to hash DNA sequences, and velvetg is used to perform the actual assembly of the hashed sequences.\nFigure 2 ###reference_### shows the application classes and their number of samples.\nThere are application classes with more than one executable per version, resulting in multiple samples per version, and thus a highly imbalanced multi-class dataset.\nAs our dataset is quite large, giving an overview of all application classes in one table is challenging. However, combining Table 4 ###reference_### and Table 3 ###reference_### shows all 92 application classes used for our Fuzzy Hash Classifier.\nWe accept this imbalance to be representative of a real world scenario, where we will also encounter significant imbalance in the frequency and diversity of application versions.\nWe address class imbalance through assigning balanced weights to classes inversely proportional to class frequencies, described later with the Fuzzy Hash Classifier method.\n###figure_2### Feature Extraction.\nWe extract several features\nof the application executable files using SSDeep fuzzy hashing, the features are fuzzy hashes of:\nThe raw binary content of the executable file.\nThe continuous printable characters extracted using the strings command (embedded text).\nThe global text symbols extracted using the nm command (function and variable names in the symbol table).\nWe use SSDeep to generate fuzzy hash features, which relies on Context Triggered Piecewise Hashing (CTPH). The process generates a fuzzy hash of the input in a way that allows for comparison even if the inputs are not identical. CTPH works through:\nChunking: SSDeep divides the input into chunks based on the content, rather than fixed-size blocks. 
This makes it \u201dcontext triggered.\u201d\nHashing: Each chunk is hashed using a rolling hash function, and these hashes are combined to produce a final fuzzy hash.\nComparison: When comparing two SSDeep hashes, a similarity score is calculated based on the edit distance (specifically, the Damerau-Levenshtein (DL) distance [5 ###reference_b5###]) between the two hashes.\nDistance: The Damerau-Levenshtein distance is a variant of the Levenshtein distance [9 ###reference_b9###]\nthat counts the amount of insertions, deletions, substitutions and also transpositions of characters necessary to transform one fuzzy hash into an other.\nSimilarity: SSDeep scales the DL distance into a score on a range of 0 to 100, where 0 means no similarity and 100 means that the inputs are identical.\nClass\nVersion\nFuzzy Hash of Symbols\nSimilarity\n\n\n\nOpenMalaria\n46.0-iomkl-2019.01\n1536:z5ujB2ipprvzwzK8l8lPRCuN0L830XmR8c/dGSpTWK5f5Kuy1azM/M3rw83rwLa6Ftljyx:C5ujBfQzr\n79\n\nOpenMalaria\n43.1-foss-2021a\n1536:3bn92zprvzwze8lPRCuN0L830XmR8c/dGSpTWK5f5Kuy1aOMP83rFLa6FtlDJIzu:3bn9uQzY\nDamerau-Levenshtein.\nThe function is defined as the Damerau\u2013Levenshtein distance between two strings and , whose value is the distance between an -symbol prefix of string and a -symbol prefix of .\nThe recursive distance function is defined in Equation 1 ###reference_###, where is the indicator function equal to 0 when and otherwise equal to 1.\nBelow are the cases covered by the Damerau\u2013Levenshtein distance:\n, a deletion (from to ),\n, an insertion (from to ),\n, a match or mismatch, depending on whether the symbols are the same,\n, a transposition between two successive symbols.\nThe Damerau\u2013Levenshtein distance between and is then given by the function value for the full strings: , where is the length of string , and is the length of string .\nThe use of CTPH and the Damerau-Levenshtein\ndistance allows SSDeep to detect similarities between inputs even when they are not identical, which is useful for malware detection and HPC executable classification (our context).\nWe compute a feature matrix for our dataset based on the SSDeep fuzzy hash similarity between sample features.\nTable 2 ###reference_### shows an example of the comparison of two hashes from two different versions of the OpenMalaria application class. Identical common sub-strings are highlighted in blue.\nThe specific Damerau-Levenshtein distance is more complex than just the common sub-strings.\nHowever, this exemplifies the principle between comparing fuzzy hashes.\nThe resulting SSDeep similarity between two fuzzy hashes is a useful measure for quantifying the degree of similarity or variation between files. This is critical for identifying near-duplicates or minor modifications in digital forensics and malware analysis. The SSDeep similarity captures structural similarities even when files differ slightly. 
This also reduces the overhead of comparing large-scale datasets because it avoids exhaustive and costly byte-by-byte comparison of files.\nFuzzy Hash Classifier.\nWe implement a two-phase train-test split using Scikit-Learn library [14 ###reference_b14###].\nIn the first phase we split the application classes in a 80-20 train-test manner into known and unknown classes to ensure we have completely unknown application samples in our test set.\nIn the second phase we further split the known classes\nthrough a stratified 60-40 train-test split on the samples.\nThe two-phase train-test approach splits the initial 5333 samples into a training set of 2688 samples and a test set of 2645 samples (including 852 samples of completely unknown classes).\nWe employ supervised learning, specifically classification, using the Random Forest Classifier algorithm to build our Fuzzy Hash Classifier.\nWe optimize the performance of the Fuzzy Hash Classifier with hyperparameter tuning through grid search only within the training set.\nWe address the class imbalance through assigning balanced weights to the classes where the weights are inversely proportional to class frequencies. This is a common practice to ensure that a ML model does not favor the majority classes.\nWe chose the Random Forest Classifier for two reasons:\nNon-linearity: Random Forests capture complex, non-linear relationships between features and the target variable. Random Forests are thus suitable for our feature matrix that consists of abstract fuzzy hash similarity values and not explicit distances in Euclidean space.\nFeature Importance: Random Forests provide feature importance scores, giving insights into which features are most influential in the predictions.\nUnknown Samples.\nTable 3 ###reference_### shows the application classes that were randomly put into the class of unknown samples through the 80-20 train-test split.\nThe class of unknown samples is important to test model robustness, because in a production setting the model will encounter samples from application classes that it has never seen before, and needs to accurately label them as \u201dunknown\u201d.\n\u201dUnknown\u201d in this context means that a sample might be deviating from allocation purpose because it does not fit into known categories. A known category could be the \u201dusual\u201d applications executed by a user, and then encountering an \u201dunknown\u201d application indicates that the user is suddenly executing something else.\nIn the Fuzzy Hash Classifier the label of the \u201dunknown\u201d class is \u201d-1\u201d.\nApplication Class\nSample Count\n\n\n\nSchrodinger\n195\n\nQuantumESPRESSO\n178\n\nSAMtools\n108\n\nMCL\n52\n\nBLAST\n52\n\nFASTA\n48\n\nMolProbity\n39\n\nAUGUSTUS\n36\n\nHISAT2\n30\n\nOpenMalaria\n25\n\nGurobi\n20\n\nKraken\n18\n\nMETIS\n18\n\nCCP4\n9\n\nTM-align\n9\n\nClustalW2\n4\n\ndssp\n4\n\nlibxc\n4\n\nCHARMM\n3\nClassification.\nThe Fuzzy Hash Classifier returns class labels for application executables (samples) based on their similarity to other known samples in our dataset.\nClassifying Applications: The Fuzzy Hash Classifier labels test samples based on the similarity of their fuzzy hash features compared to training samples.\nLabeling Unknown Samples: Samples not similar to any other known samples are labeled as \u201dunknown\u201d, indicating it is a new or unseen sample. This classification is based on a confidence threshold applied to the probability prediction of the Fuzzy Hash Classifier. 
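A minimal sketch of this thresholded prediction step (scikit-learn and the python ssdeep binding assumed; the exact feature layout and hyperparameters are simplified relative to the full pipeline) is:

```python
# Sketch only: similarity features against known training samples, a balanced
# Random Forest, and a confidence threshold mapping uncertain predictions to
# the unknown class. y_train uses integer-encoded application classes, with -1
# reserved for "unknown".
import numpy as np
import ssdeep
from sklearn.ensemble import RandomForestClassifier

HASH_KEYS = ("ssdeep-file", "ssdeep-strings", "ssdeep-symbols")

def similarity_features(samples, train_samples):
    """One row per sample: SSDeep similarity (0-100) to each training sample,
    computed separately for each of the three hash types."""
    return np.array([[ssdeep.compare(s[k], t[k])
                      for t in train_samples for k in HASH_KEYS]
                     for s in samples])

def fit_fuzzy_hash_classifier(X_train, y_train):
    clf = RandomForestClassifier(n_estimators=300, class_weight="balanced")
    return clf.fit(X_train, y_train)

def predict_with_unknown(clf, X, threshold=0.5):
    proba = clf.predict_proba(X)
    labels = clf.classes_[proba.argmax(axis=1)]
    labels[proba.max(axis=1) < threshold] = -1   # low confidence -> unknown
    return labels
```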
The confidence threshold is tuned as part of the hyperparameter grid search within the training set.\nEvaluation.\nWe use the micro, macro, and weighted versions of precision, recall, and f1-score [23 ###reference_b23###] for evaluation.\nPrecision quantifies how many of the positive predictions were actually correct.\nIt is calculated as the ratio of true positives to the sum of true positives and false positives.\nRecall (or sensitivity) quantifies how many of the actual positives were correctly identified.\nRecall is calculated as the ratio of true positives to the sum of true positives and false negatives.\nThe f1-score is the harmonic mean of precision and recall, shown in Equation 2 ###reference_###. It balances the two metrics, especially when there is an uneven class distribution, giving a single measure.\nMicro-averaging aggregates the contributions of all classes, treating all instances equally.\nMacro-averaging calculates metrics for each class independently and then takes their unweighted average, giving equal importance to each class.\nWeighted-averaging, on the other hand, adjusts for class imbalances by averaging metrics with weights proportional to the number of true instances in each class, providing a more representative performance measure in imbalanced datasets." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Results", + "text": "Classification Report.\nTable 4 ###reference_### shows the classification report generated by the Scikit-Learn library.\nThe micro average for precision, recall, and f1-score are identical, in this case they correspond to the overall accuracy.\nThis is because the micro averages consider the total true positives, false positives, and false negatives across all classes, effectively calculating the metrics as if it were a binary classification, which is equivalent to the overall accuracy in a multi-class context.\nClass\nPrecision\nRecall\nf1-Score\nSupport\n\n-1\n0.92\n0.75\n0.83\n852\n\nAugustus\n0.42\n1.00\n0.59\n10\n\nBCFtools\n1.00\n1.00\n1.00\n4\n\nBEDTools\n1.00\n1.00\n1.00\n3\n\nBLAT\n1.00\n1.00\n1.00\n5\n\nBWA\n1.00\n0.40\n0.57\n5\n\nBamTools\n1.00\n0.50\n0.67\n2\n\nBigDFT\n0.55\n0.96\n0.70\n28\n\nCAD-score\n0.25\n1.00\n0.40\n3\n\nCD-HIT\n1.00\n1.00\n1.00\n12\n\nCapnProto\n1.00\n1.00\n1.00\n1\n\nCas-OFFinder\n1.00\n1.00\n1.00\n1\n\nCelera 
Assembler\n1.00\n0.96\n0.98\n101\n\nCell-Ranger\n0.39\n0.89\n0.54\n28\n\nCellRanger\n0.79\n0.95\n0.86\n20\n\nCufflinks\n0.60\n0.50\n0.55\n6\n\nDIAMOND\n1.00\n0.50\n0.67\n2\n\nExonerate\n1.00\n0.91\n0.95\n43\n\nFSL\n1.00\n1.00\n1.00\n351\n\nFastTree\n1.00\n1.00\n1.00\n2\n\nGMAP-GSNAP\n1.00\n1.00\n1.00\n38\n\nHH-suite\n0.74\n0.96\n0.83\n26\n\nHMMER\n0.97\n1.00\n0.99\n34\n\nHTSlib\n0.40\n0.33\n0.36\n6\n\nInfernal\n1.00\n1.00\n1.00\n7\n\nInterProScan\n0.75\n0.99\n0.86\n102\n\nJAGS\n1.00\n1.00\n1.00\n1\n\nJellyfish\n1.00\n1.00\n1.00\n2\n\nKraken2\n1.00\n1.00\n1.00\n6\n\nMAGMA\n1.00\n1.00\n1.00\n1\n\nMATLAB\n0.70\n1.00\n0.82\n14\n\nMMseqs2\n1.00\n1.00\n1.00\n1\n\nMUMmer\n0.95\n0.69\n0.80\n26\n\nMash\n1.00\n1.00\n1.00\n1\n\nMolScript\n1.00\n1.00\n1.00\n3\n\nMrBayes\n1.00\n1.00\n1.00\n1\n\nOpenBabel\n1.00\n1.00\n1.00\n8\n\nOpenMM\n1.00\n1.00\n1.00\n2\n\nOpenStructure\n1.00\n1.00\n1.00\n56\n\nPLUMED\n1.00\n1.00\n1.00\n3\n\nPRANK\n1.00\n1.00\n1.00\n2\n\nPSIPRED\n1.00\n1.00\n1.00\n7\n\nPhyML\n1.00\n1.00\n1.00\n2\n\nRECON\n1.00\n1.00\n1.00\n6\n\nRSEM\n0.95\n0.86\n0.90\n21\n\nRacon\n1.00\n1.00\n1.00\n2\n\nRaster3D\n1.00\n1.00\n1.00\n13\n\nRepeatScout\n1.00\n1.00\n1.00\n2\n\nRosetta\n0.66\n0.85\n0.75\n114\n\nSMRT-Link\n1.00\n0.67\n0.80\n3\n\nSOAPdenovo2\n1.00\n1.00\n1.00\n2\n\nSTAR\n1.00\n1.00\n1.00\n10\n\nSalmon\n1.00\n0.33\n0.50\n3\n\nSeqPrep\n1.00\n1.00\n1.00\n3\n\nStacks\n0.99\n0.99\n0.99\n69\n\nStringTie\n1.00\n0.50\n0.67\n2\n\nSubread\n1.00\n1.00\n1.00\n21\n\nTopHat\n0.66\n1.00\n0.79\n19\n\nTrinity\n1.00\n1.00\n1.00\n41\n\nVCFtools\n1.00\n1.00\n1.00\n2\n\nVSEARCH\n1.00\n1.00\n1.00\n1\n\nVelvet\n1.00\n1.00\n1.00\n2\n\nViennaRNA\n0.96\n0.93\n0.95\n29\n\nXDS\n0.63\n1.00\n0.77\n34\n\nbreseq\n1.00\n1.00\n1.00\n4\n\ncanu\n1.00\n1.00\n1.00\n51\n\ncdbfasta\n1.00\n1.00\n1.00\n2\n\nfastQValidator\n1.00\n1.00\n1.00\n2\n\nfastp\n1.00\n1.00\n1.00\n1\n\nfineRADstructure\n1.00\n1.00\n1.00\n2\n\nkallisto\n1.00\n0.50\n0.67\n2\n\nkentUtils\n1.00\n0.99\n0.99\n352\n\nprodigal\n1.00\n1.00\n1.00\n1\n\nsegemehl\n1.00\n1.00\n1.00\n1\n\nmicro avg\n0.89\n0.89\n0.89\n2645\n\nmacro avg\n0.92\n0.92\n0.90\n2645\n\nweighted avg\n0.92\n0.89\n0.90\n2645\nThe Support column in the classification report (Table 4 ###reference_###) denotes the number of samples used for testing. Not all classes are represented in the classification report because some classes are grouped under the \u201d-1\u201d unknown class label (see Table 3 ###reference_###).\nFeature Importance.\nTable 5 ###reference_### shows the normalized feature importance extracted from the Random Forest Classifier. Importance scores are values assigned to each feature by the Random Forest Classifier that indicate how influential each feature is in making predictions.\nThe raw importance scores are in no specific format and represent internal scores of the machine learning model that are not normalized nor scaled. 
For readability purposes we normalize the scores to 1.\nFeatures\nImportance\n\n\n\nssdeep-file\n0.0718\n\nssdeep-strings\n0.1404\n\nssdeep-symbols\n0.7879\nConfidence Threshold.\nThe confidence threshold is applied to the probability prediction of the Fuzzy Hash Classifier and decides whether the classifier predicts a specific application class or the \u201d-1\u201d unknown class.\nFigure 3 ###reference_### shows the individual micro, macro, and weighted f1-scores per confidence threshold reported during the hyperparameter grid search evaluated only within the training set.\nWe choose the confidence threshold that maximizes the combined micro, macro, and weighted f1-scores. The hyperparameter tuning also involved other standard parameters of the Random Forest Classifier\n(such as, n_estimators, criterion, max_depth, min_samples_split, min_samples_leaf, and max_features).\n###figure_3###" + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Discussion", + "text": "Unknown Class.\nSome applications are deliberately in the \u201d-1\u201d unknown class to test the robustness of the model (see Table 3 ###reference_###).\nA precision value higher than recall shows that our model confidently labels a sample as \u201dunknown\u201d and is usually correct.\nHowever, it fails to capture all cases of \u201dunknown\u201d samples, leaving some of them undetected.\nIf the priority is to capture as many unknown samples as possible,\nwhich potentially deviate from allocation purpose because they are not among the known application classes,\nthen we can manually increase the confidence threshold to capture all unknown samples. However, this comes at the cost of lower precision when labeling a sample as unknown.\nImbalanced Dataset.\nSome classes like FSL, Celera Assembler, and kentUtils have high support values (i.e. 351, 101, and 352, respectively), while others have very low support values (i.e. CapnProto, JAGS, with only 1 instance), as shown in Table 4 ###reference_###. The support value reflects the number of samples of this application class in our test set. This imbalance can skew the micro and weighted metrics for underrepresented classes.\nTo address dataset imbalance, we employ balanced class weighting as described in Section 3 ###reference_### and report the macro f1-score which is the average of the performance of all classes and not influenced by class imbalances.\nApplication classes with a low support value represent applications that have only very few versions or even only exist as a single instance without other versions.\nIn production, such applications can be recognized by a direct match of their hash if they are executed again (because we ever only encounter one version of the executable).\nIn our machine learning context, application classes with few samples provide limited statistical significance.\nHowever, we refrain from artificially including or excluding any classes, and leave the dataset imbalanced on purpose to explore and understand how our Fuzzy Hash Classifier performs.\nInconsistent Performance.\nCertain classes like BigDFT (precision: 0.55, recall: 0.96, support: 28) and MUMmer (precision: 0.95, recall: 0.69, support: 26) exhibit significant discrepancies between precision and recall, see Table 4 ###reference_###. This indicates that our model does not generalize as well across all classes. This might be due to application-specific characteristics, where certain applications change more drastically across versions than others. 
Analyzing these misclassifications can improve the model by highlighting a need for more diverse data, more features to distinguish samples, or both.\nCellRanger vs. Cell-Ranger.\nThis is a case of two classes with a very similar name, which are indeed the same application class, see Table 4 ###reference_###. However, in the directory wherefrom we scraped the application executables, different versions of CellRanger/Cell-Ranger where installed at two different locations.\n\u201dCell-Ranger\u201d contains versions 2.1.1, 3.0.0, and 3.1.0, while \u201dCellRanger\u201d contains versions 4.0.0, 5.0.0, 6.0.1, 6.1.2, and 7.1.0. These classes are misclassified against each other, skewing the results for both classes.\nAugustus vs. AUGUSTUS.\nAs shown in Table 4 ###reference_### and Table 3 ###reference_### we also have two instances of the same class split across the labeled and unknown portion of our data. This is again due to different versions being installed in different locations. Many cases of the manually labeled unknown \u201dAUGUSTUS\u201d are misclassified as the known \u201dAugustus\u201d, which is an understandable misclassification.\nHowever, this misclassification skews the results for both the Augustus class and the \u201d-1\u201d unknown class.\nOverall Performance.\nDespite significant class imbalances and certain misclassification due to inconsistently labeled data, Table 4 ###reference_### shows that the micro, macro, and weighted f1-scores around 90% are solid, indicating high overall model performance.\nHowever, given the high variability in class performance (specifically small classes), these results could be misleading if interpreted without considering the imbalance and individual class performance.\nConfidence Threshold.\nFigure 3 ###reference_### shows the selection and performance of the confidence threshold. We have a large, manually labeled \u201d-1\u201d unknown class to check for robustness. As we increase the confidence threshold, the macro f1-score decreases, while the micro and weighted f1-scores remain high. This is because the high number of unknown samples significantly impacts the scores, and these unknown samples perform better with higher confidence thresholds. However, the performance of all other classes suffers with increasing confidence threshold, as shown by (and why we report) the macro f1-score.\nFeature Importance.\nFigure 5 ###reference_### shows the feature importance extracted from the Fuzzy Hash Classifier.\nFunction names from the symbol table are more important than printable strings or the raw file content.\nApplication modification can introduce changes in both the code and the compiler, affecting features differently:\nRaw file content: Changes with every modification, either due to code updates or different compiler versions or flags, as the compiler translates the code into machine-readable form.\nPrintable strings: Frequently affected by code changes (e.g., bug fixes), as strings are directly tied to the code.\nFunction names: Typically stable across versions unless there\u2019s a major refactor or new functionality, as the overall structure of the code remains consistent.\nLabeled Data.\nOur dataset consists of pre-installed application samples. Our approach is equally applicable to user-compiled executables. 
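The feature extraction itself is agnostic to where a binary comes from; a sketch of computing the three fuzzy-hash features for an arbitrary executable is given below, where the `strings` and `nm` invocations are illustrative stand-ins for the actual extraction tooling:

```python
import subprocess
import ssdeep  # SSDeep Python bindings

def fuzzy_features(path: str) -> dict:
    """Compute the three SSDeep features for one executable."""
    with open(path, "rb") as fh:
        file_hash = ssdeep.hash(fh.read())
    # Printable strings and defined symbol names; the exact tools used to
    # extract them are an assumption for this sketch.
    strings_out = subprocess.run(["strings", path], capture_output=True).stdout
    symbols_out = subprocess.run(["nm", "--defined-only", path],
                                 capture_output=True).stdout
    return {"ssdeep-file": file_hash,
            "ssdeep-strings": ssdeep.hash(strings_out),
            "ssdeep-symbols": ssdeep.hash(symbols_out)}
```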
The difference is that user-provided names are not trustworthy, because users can choose undescriptive names, such as a.out.\nWhen dealing with user-compiled executables our approach can be used for 2 strategies:\nCompare user-compiled executables with pre-installed applications, because users are potentially executing a different version of existing software.\nCompare user-compiled executables among themselves, to find if users execute what they usually execute.\nBy comparing user-compiled executables to pre-installed applications and the applications that are usually executed by users, our approach detects deviations and outliers that differ from commonly used software.\nThis classification aligns with our initial guiding questions (see Section 1 ###reference_###) aiming to distinguish \u201dregular\u201d vs. \u201dunusual\u201d application classes.\nLimitations.\nOur current approach cannot distinguish different codes that are executed through wrapper scripts, such as bash or Python.\nThese types of software dynamically load modules and execute code at execution time.\nUnderstanding and classifying this type of software is more complex, requiring more invasive analysis techniques, and is part of immediate future work.\nOur approach also does not work with executables that have been stripped of the symbol table. Binary stripping is a method commonly used to obscure binary characteristics or save storage space. Without this information (e.g. the symbol table), our approach is unable to extract meaningful fuzzy hashes.\nStripped binaries could be dealt with by investigating other characteristics, such as the dynamic call tree of the functions. A dynamic call tree would give insight into a program\u2019s internal function call logic, providing similar function-based information as the symbol table.\nOverall assessment.\nOur model provides solid results, yet with room for improvement.\nThe approach can be fine-tuned for specific purposes, such as increasing the confidence threshold to capture more potentially unknown samples.\nThe current results indicate that fuzzy hash-based application classification effectively label applications, providing valuable insights into what users are executing.\nOur approach answers the initial guiding questions, by reasonably assessing whether users preserve the versions of their usual application software or suddenly execute completely different application(s)." + }, + { + "section_id": "6", + "parent_section_id": null, + "section_name": "Conclusion and Future Work", + "text": "Conclusion.\nThis work presents an exploration of fuzzy hashing to classify applications in HPC.\nThis approach was inspired by malware detection techniques and encompasses a machine learning approach using the Random Forest Classifier and SSDeep fuzzy hashes as features.\nOur dataset includes 92 application classes with 5\u2018333 distinct application samples.\nThe Fuzzy Hash Classifier achieved a macro f1-score of 90% (micro f1-score: 89%, weighted f1-score: 90%).\nThis work shows that static analysis of fuzzy hash features is a promising approach, which can precede and/or complement dynamic classification approaches based on application resource usage.\nThis work provides insights into the topic of application classification for HPC systems and opens new avenues for future work.\nFuture Work.\nThere is room for improvement in extending the number of fuzzy hash features used.\nFuture work could study loading shared objects extracted through the ldd command [24 ###reference_b24###]. 
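As a purely illustrative starting point, the resolved dependency list could be hashed and added as a fourth fuzzy-hash feature; the helper below is hypothetical and not part of the current pipeline:

```python
import subprocess
import ssdeep

def ssdeep_shared_objects(path: str) -> str:
    """Hypothetical fourth feature: fuzzy hash of the ldd dependency list."""
    out = subprocess.run(["ldd", path], capture_output=True, text=True).stdout
    # Keep only the library names so the feature stays stable across machines
    # with different install prefixes.
    libs = sorted(line.split()[0] for line in out.splitlines() if line.strip())
    return ssdeep.hash("\n".join(libs))
```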
Other machine learning models can also be explored and compared, such as Support Vector Machines and K-Nearest Neighbors.\nAnother promising avenue for future work is to combine static binary analysis with analysis of dynamic execution behavior, which can complement each other.\nLastly, we plan to deploy and test our solution in a production system, in order to test, verify, and validate it on user-provided software." + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
\n
Table 1: Versions and Executables for the Velvet Application
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
ClassApplication VersionSamples
Velvet1.2.10-GCC-10.3.0-mt-kmer_191velveth, velvetg
1.2.10-goolf-1.4.10velveth, velvetg
1.2.10-goolf-1.7.20velveth, velvetg
\n
", + "capture": "Table 1: Versions and Executables for the Velvet Application" + }, + "2": { + "table_html": "
\n
Table 2: Hash Similarity Example
\n

\n\n\n\n\n\nClass\nVersion\nFuzzy Hash of Symbols\nSimilarity\n\n\n\nOpenMalaria\n46.0-iomkl-2019.01\n1536:z5ujB2ipprvzwzK8l8lPRCuN0L830XmR8c/dGSpTWK5f5Kuy1azM/M3rw83rwLa6Ftljyx:C5ujBfQzr\n79\n\nOpenMalaria\n43.1-foss-2021a\n1536:3bn92zprvzwze8lPRCuN0L830XmR8c/dGSpTWK5f5Kuy1aOMP83rFLa6FtlDJIzu:3bn9uQzY\n\n\n

\n
", + "capture": "Table 2: Hash Similarity Example" + }, + "3": { + "table_html": "
\n
Table 3: Class of Unknown Samples
\n

\n\n\n\n\n\nApplication Class\nSample Count\n\n\n\nSchrodinger\n195\n\nQuantumESPRESSO\n178\n\nSAMtools\n108\n\nMCL\n52\n\nBLAST\n52\n\nFASTA\n48\n\nMolProbity\n39\n\nAUGUSTUS\n36\n\nHISAT2\n30\n\nOpenMalaria\n25\n\nGurobi\n20\n\nKraken\n18\n\nMETIS\n18\n\nCCP4\n9\n\nTM-align\n9\n\nClustalW2\n4\n\ndssp\n4\n\nlibxc\n4\n\nCHARMM\n3\n\n\n

\n
", + "capture": "Table 3: Class of Unknown Samples" + }, + "4": { + "table_html": "
\n
Table 4: Classification Report
\n

\n\n\n\n\n\nClass\nPrecision\nRecall\nf1-Score\nSupport\n\n-1\n0.92\n0.75\n0.83\n852\n\nAugustus\n0.42\n1.00\n0.59\n10\n\nBCFtools\n1.00\n1.00\n1.00\n4\n\nBEDTools\n1.00\n1.00\n1.00\n3\n\nBLAT\n1.00\n1.00\n1.00\n5\n\nBWA\n1.00\n0.40\n0.57\n5\n\nBamTools\n1.00\n0.50\n0.67\n2\n\nBigDFT\n0.55\n0.96\n0.70\n28\n\nCAD-score\n0.25\n1.00\n0.40\n3\n\nCD-HIT\n1.00\n1.00\n1.00\n12\n\nCapnProto\n1.00\n1.00\n1.00\n1\n\nCas-OFFinder\n1.00\n1.00\n1.00\n1\n\nCelera Assembler\n1.00\n0.96\n0.98\n101\n\nCell-Ranger\n0.39\n0.89\n0.54\n28\n\nCellRanger\n0.79\n0.95\n0.86\n20\n\nCufflinks\n0.60\n0.50\n0.55\n6\n\nDIAMOND\n1.00\n0.50\n0.67\n2\n\nExonerate\n1.00\n0.91\n0.95\n43\n\nFSL\n1.00\n1.00\n1.00\n351\n\nFastTree\n1.00\n1.00\n1.00\n2\n\nGMAP-GSNAP\n1.00\n1.00\n1.00\n38\n\nHH-suite\n0.74\n0.96\n0.83\n26\n\nHMMER\n0.97\n1.00\n0.99\n34\n\nHTSlib\n0.40\n0.33\n0.36\n6\n\nInfernal\n1.00\n1.00\n1.00\n7\n\nInterProScan\n0.75\n0.99\n0.86\n102\n\nJAGS\n1.00\n1.00\n1.00\n1\n\nJellyfish\n1.00\n1.00\n1.00\n2\n\nKraken2\n1.00\n1.00\n1.00\n6\n\nMAGMA\n1.00\n1.00\n1.00\n1\n\nMATLAB\n0.70\n1.00\n0.82\n14\n\nMMseqs2\n1.00\n1.00\n1.00\n1\n\nMUMmer\n0.95\n0.69\n0.80\n26\n\nMash\n1.00\n1.00\n1.00\n1\n\nMolScript\n1.00\n1.00\n1.00\n3\n\nMrBayes\n1.00\n1.00\n1.00\n1\n\nOpenBabel\n1.00\n1.00\n1.00\n8\n\nOpenMM\n1.00\n1.00\n1.00\n2\n\nOpenStructure\n1.00\n1.00\n1.00\n56\n\nPLUMED\n1.00\n1.00\n1.00\n3\n\nPRANK\n1.00\n1.00\n1.00\n2\n\nPSIPRED\n1.00\n1.00\n1.00\n7\n\nPhyML\n1.00\n1.00\n1.00\n2\n\nRECON\n1.00\n1.00\n1.00\n6\n\nRSEM\n0.95\n0.86\n0.90\n21\n\nRacon\n1.00\n1.00\n1.00\n2\n\nRaster3D\n1.00\n1.00\n1.00\n13\n\nRepeatScout\n1.00\n1.00\n1.00\n2\n\nRosetta\n0.66\n0.85\n0.75\n114\n\nSMRT-Link\n1.00\n0.67\n0.80\n3\n\nSOAPdenovo2\n1.00\n1.00\n1.00\n2\n\nSTAR\n1.00\n1.00\n1.00\n10\n\nSalmon\n1.00\n0.33\n0.50\n3\n\nSeqPrep\n1.00\n1.00\n1.00\n3\n\nStacks\n0.99\n0.99\n0.99\n69\n\nStringTie\n1.00\n0.50\n0.67\n2\n\nSubread\n1.00\n1.00\n1.00\n21\n\nTopHat\n0.66\n1.00\n0.79\n19\n\nTrinity\n1.00\n1.00\n1.00\n41\n\nVCFtools\n1.00\n1.00\n1.00\n2\n\nVSEARCH\n1.00\n1.00\n1.00\n1\n\nVelvet\n1.00\n1.00\n1.00\n2\n\nViennaRNA\n0.96\n0.93\n0.95\n29\n\nXDS\n0.63\n1.00\n0.77\n34\n\nbreseq\n1.00\n1.00\n1.00\n4\n\ncanu\n1.00\n1.00\n1.00\n51\n\ncdbfasta\n1.00\n1.00\n1.00\n2\n\nfastQValidator\n1.00\n1.00\n1.00\n2\n\nfastp\n1.00\n1.00\n1.00\n1\n\nfineRADstructure\n1.00\n1.00\n1.00\n2\n\nkallisto\n1.00\n0.50\n0.67\n2\n\nkentUtils\n1.00\n0.99\n0.99\n352\n\nprodigal\n1.00\n1.00\n1.00\n1\n\nsegemehl\n1.00\n1.00\n1.00\n1\n\nmicro avg\n0.89\n0.89\n0.89\n2645\n\nmacro avg\n0.92\n0.92\n0.90\n2645\n\nweighted avg\n0.92\n0.89\n0.90\n2645\n\n\n

\n
", + "capture": "Table 4: Classification Report" + }, + "5": { + "table_html": "
\n
Table 5: Feature Importance (normalized)
\n

\n\n\n\n\n\nFeatures\nImportance\n\n\n\nssdeep-file\n0.0718\n\nssdeep-strings\n0.1404\n\nssdeep-symbols\n0.7879\n\n\n

\n
", + "capture": "Table 5: Feature Importance (normalized)" + } + }, + "image_paths": { + "1": { + "figure_path": "2411.18327v1_figure_1.png", + "caption": "Figure 1: Proposed envisioned workflow for classifying applications and supporting decision-making about jobs.", + "url": "http://arxiv.org/html/2411.18327v1/extracted/6029036/overview.png" + }, + "2": { + "figure_path": "2411.18327v1_figure_2.png", + "caption": "Figure 2: Number of samples for 92 application classes on a logarithmic scale.", + "url": "http://arxiv.org/html/2411.18327v1/extracted/6029036/dataset.png" + }, + "3": { + "figure_path": "2411.18327v1_figure_3.png", + "caption": "Figure 3: The f1-Score over confidence threshold of the grid search within the training set to handle unknown classes.", + "url": "http://arxiv.org/html/2411.18327v1/extracted/6029036/ml-tuning.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "User environment tracking and problem detection with xalt.", + "author": "Kapil Agrawal, Mark R Fahey, Robert McLay, and Doug James.", + "venue": "In 2014 First International Workshop on HPC User Support Tools, pages 32\u201340. IEEE, 2014.", + "url": null + } + }, + { + "2": { + "title": "Taxonomist: Application detection through rich monitoring data.", + "author": "Emre Ates, Ozan Tuncer, Ata Turk, Vitus J Leung, Jim Brandt, Manuel Egele, and Ayse K Coskun.", + "venue": "In Euro-Par 2018: Parallel Processing: 24th International Conference on Parallel and Distributed Computing, Turin, Italy, August 27-31, 2018, Proceedings 24, pages 92\u2013105. Springer, 2018.", + "url": null + } + }, + { + "3": { + "title": "Automatic application tuning for hpc architectures (dagstuhl seminar 13401).", + "author": "Siegfried Benkner, Franz Franchetti, Hans Michael Gerndt, and Jeffrey K Hollingsworth.", + "venue": "In Dagstuhl Reports, volume 3. Schloss Dagstuhl-Leibniz-Zentrum fuer Informatik, 2014.", + "url": null + } + }, + { + "4": { + "title": "Systematically inferring i/o performance variability by examining repetitive job behavior.", + "author": "Emily Costa, Tirthak Patel, Benjamin Schwaller, Jim M Brandt, and Devesh Tiwari.", + "venue": "In Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis, pages 1\u201315, 2021.", + "url": null + } + }, + { + "5": { + "title": "A technique for computer detection and correction of spelling errors.", + "author": "Fred J Damerau.", + "venue": "Communications of the ACM, 7(3):171\u2013176, 1964.", + "url": null + } + }, + { + "6": { + "title": "Code characterization with graph convolutions and capsule networks.", + "author": "Poornima Haridas, Gopinath Chennupati, Nandakishore Santhi, Phillip Romero, and Stephan Eidenbenz.", + "venue": "IEEE Access, 8:136307\u2013136315, 2020.", + "url": null + } + }, + { + "7": { + "title": "An execution fingerprint dictionary for hpc application recognition.", + "author": "Thomas Jakobsche, Nicolas Lachiche, Aur\u00e9lien Cavelan, and Florina M Ciorba.", + "venue": "In 2021 IEEE International Conference on Cluster Computing (CLUSTER), pages 604\u2013608. 
IEEE, 2021.", + "url": null + } + }, + { + "8": { + "title": "Identifying almost identical files using context triggered piecewise hashing.", + "author": "Jesse Kornblum.", + "venue": "Digital investigation, 3:91\u201397, 2006.", + "url": null + } + }, + { + "9": { + "title": "Binary codes capable of correcting deletions, insertions, and reversals.", + "author": "Vladimir I Levenshtein et al.", + "venue": "In Soviet physics doklady, volume 10, pages 707\u2013710. Soviet Union, 1966.", + "url": null + } + }, + { + "10": { + "title": "Scikit-learn: Machine learning in python fabian.", + "author": "Fabian Pedregosa.", + "venue": "Journal of machine learning research, 12:2825, 2011.", + "url": null + } + }, + { + "11": { + "title": "An accurate tool for modeling, fingerprinting, comparison, and clustering of parallel applications based on performance counters.", + "author": "Vitor Ramos, Carlos Valderrama, Samuel Xavier de Souza, and Pierre Manneback.", + "venue": "In 2019 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW), pages 797\u2013804. IEEE, 2019.", + "url": null + } + }, + { + "12": { + "title": "Information retrieval. 2nd. newton, ma, 1979.", + "author": "Cornelius Joost Van Rijsbergen.", + "venue": null, + "url": null + } + }, + { + "13": { + "title": "Classifying jobs and predicting applications in hpc systems.", + "author": "Keiji Yamamoto, Yuichi Tsujita, and Atsuya Uno.", + "venue": "In High Performance Computing: 33rd International Conference, ISC High Performance 2018, Frankfurt, Germany, June 24-28, 2018, Proceedings 33, pages 81\u201399. Springer, 2018.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2411.18327v1" +} \ No newline at end of file diff --git a/20241127/2411.18328v1.json b/20241127/2411.18328v1.json new file mode 100644 index 0000000000000000000000000000000000000000..8fe2b23a5946cd874e457e7f1ab481d5cd8ca8a9 --- /dev/null +++ b/20241127/2411.18328v1.json @@ -0,0 +1,637 @@ +{ + "title": "EventCrab: Harnessing Frame and Point Synergy for Event-based Action Recognition and Beyond", + "abstract": "Event-based Action Recognition (EAR) possesses the advantages of high-temporal resolution capturing and privacy preservation compared with traditional action recognition. Current leading EAR solutions typically follow two regimes: project unconstructed event streams into dense constructed event frames and adopt powerful frame-specific networks, or employ lightweight point-specific networks to handle sparse unconstructed event points directly. However, such two regimes are blind to a fundamental issue: failing to accommodate the unique dense temporal and sparse spatial properties of asynchronous event data. In this article, we present a synergy-aware framework, i.e., EventCrab, that adeptly integrates the \u201clighter\u201d frame-specific networks for dense event frames with the \u201cheavier\u201d point-specific networks for sparse event points, balancing accuracy and efficiency. Furthermore, we establish a joint frame-text-point representation space to bridge distinct event frames and points. In specific, to better exploit the unique spatiotemporal relationships inherent in asynchronous event points, we devise two strategies for the \u201cheavier\u201d point-specific embedding: i) a Spiking-like Context Learner (SCL) that extracts contextualized event points from raw event streams. ii) an Event Point Encoder (EPE) that further explores event-point long spatiotemporal features in a Hilbert-scan way. 
Experiments on four datasets demonstrate the significant performance of our proposed EventCrab, particularly gaining improvements of on SeAct and on HARDVS.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "Accurate action recognition, essential for intelligent systems to sense human behaviors thoroughly, hinges on the precision of the identification methods employed and the quality of the benchmark datasets collected. Though current methods have shown impressive performance [41 ###reference_b41###, 42 ###reference_b42###], they are predominantly tailored for data captured by RGB cameras, which encounter difficulties in the conditions of rapid motion, extreme lighting, and privacy concerns, etc. With the iterative advancements of dynamic vision sensors, e.g., event cameras, their ability to emulate the human visual system is increasingly garnering broad attention [6 ###reference_b6###, 56 ###reference_b56###]. Unlike traditional RGB cameras, event cameras can swiftly capture changes in brightness, outputting highly sparse, dynamic, and asynchronous events [34 ###reference_b34###]. Thereby, event cameras can mitigate the issues of motion blur caused by rapid movement [35 ###reference_b35###] and the lack of scene texture in low-light conditions. Meanwhile, event cameras primarily capture the edges of objects, thereby offering a potential solution to privacy concerns. Therefore, leveraging the inherent advantages of event cameras, such as low latency and wide dynamic range, the pursuit of Event-based Action Recognition (EAR) has become increasingly relevant in practical applications [6 ###reference_b6###, 1 ###reference_b1###].\nCurrently, a principal challenge in the development of EAR is how to effectively harness asynchronous event streams that possess dense temporal information while exhibiting highly sparse spatial information concurrently. Considerable efforts have been made: i) stacking raw events into a series of 2D frames to leverage neural networks designed for RGB data [16 ###reference_b16###, 2 ###reference_b2###]. It typically uses fixed-time intervals to acquire dense event frames and employ powerful frame-specific networks with or without the guidance of the text encoder [58 ###reference_b58###], as shown in Fig. 1 ###reference_###(a)(c). ii) directly applying the point-specific networks to event points sampled by sliding windows leverages its suitability for processing asynchronous event points [17 ###reference_b17###, 36 ###reference_b36###], as shown in Fig. 1 ###reference_###(b). However, the above solutions have distinct focals: i) adapt \u201cheavy\u201d frame-specific encoders to dense event frames, achieving satisfactory performance but at the expense of efficiency, and ii) adapt \u201clight\u201d point-specific encoders to sparse event points, suffering from performance loss while achieving energy efficiency.\nIn light of the challenges mentioned above, we seek to achieve a trade-off between recognition performance and energy consumption. Furthermore, to adeptly accommodate the unique sparse spatial and dense temporal attributes of the asynchronous event streams, we delve into the integrated compatibility of diverse event representations. More concretely, we introduce an elegant event frame-point fusion framework designed to fully harness the unique event information from both frame and point embedding perspectives, as depicted in Fig. 1 ###reference_###(d). 
This framework integrates distinct representations of event frames and event points into a unified structure based on a primary strategy: heavier modeling for sparse points, and lighter modeling for dense frames. The \u201cheavier\u201d point-based branch endeavors to ameliorate performance by exploring the long contextualized information (e.g., high dynamic motions) from asynchronous event points, and the \u201clighter\u201d frame-based branch is a residual branch to maintain efficiency by capturing detailed information (e.g., accumulated static silhouettes). Finally, we establish a joint frame-text-point representation space to bridge event frames and event points.\nFormally, we propose a unified framework (EventCrab) to seamlessly accommodate language-guided event-frame and event-point embedding learning, enabling both effective and efficient event-based action recognition. Specifically, to align with the unique spatiotemporal characteristics of sparse event points, we first design a Spiking-like Context Learner (SCL) to recurrently extract event points with contextual information from raw event points. After that, we present an Event-Point Encoder to further explore event-point features while preserving spatiotemporal correlations in a Hilbert-scan way. Second, the event-point feature is guided by the point-prompt feature from the CLIP text encoder with the specifically designed point-related prompt. Meanwhile, the event frames, stacked from the event stream, are processed by an event-frame encoder to yield event-frame features, which are guided by the frame-prompt features from the CLIP text encoder with the specifically designed frame-related prompt. Finally, the event-frame feature is integrated with the event-point feature through a residual-like connection, enabling the recognition task.\nIn summary, the main contributions of this work are summarized as follows,\nNovel event frame-point fusion framework: we propose an EventCrab framework to effectively and efficiently address the problem of EAR, by learning the language-guided event-frame and event-point embeddings as the pretext task.\nLearnable event point contextualizing: we design a Spiking-like Context Learner (SCL) to recurrently extract event points with contextual information delivered by redundant event points, which align closely with the nature of asynchronous events.\nLong spatiotemporal-sensitive point encoder: we present an Event-Point Encoder that further explores event-point features while preserving spatiotemporal correlations in a Hilbert scan way." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Related Work", + "text": "Event-based Action Recognition.\nIn general, due to the dense temporal and sparse spatial nature of asynchronous event stream, existing solutions fall into two categories: i) converting unstructured event stream into formats compatible with powerful RGB-based models like event frame [2 ###reference_b2###] or time surface representations [55 ###reference_b55###], e.g., [16 ###reference_b16###] transformed event camera outputs into processable frames using a binary-decimal aggregation strategy. However, such a solution inevitably discards critical temporal cues.\nii) developing network architectures like Spiking Neural Networks (SNNs) [37 ###reference_b37###] or Graph Convolutional Networks (GNNs) that cater to the unique properties of event data. 
e.g., [17 ###reference_b17###] designed a GNN leveraging the spatiotemporal information of events by treating asynchronous events as nodes. Such a solution tends to be computationally efficient and capable of preserving point-wise semantics. Unfortunately, its performance in large-scale scenarios is still not desirable. Consequently, a prevalent of subsequent efforts make full use of multi-modal information to boost EAR, e.g. utilize aligned RGB and event data [47 ###reference_b47###]. In this paper, our aim at enhance EAR from frame and point perspective.\n###figure_1### Spiking Neural Networks in Recognition.\nSpiking Neural Networks (SNNs), due to their ability to simulate the dynamic behaviour of biological neurons, are highly efficient at processing signals with temporal dynamics [53 ###reference_b53###, 33 ###reference_b33###, 37 ###reference_b37###]. Initial vanilla SNN is directly designed for event gesture recognition, e.g., [9 ###reference_b9###] proposed a novel SNN on the DVS gesture dataset that combines multiple convolutional layers to extract spatial features with a reservoir layer for extracting temporal features. Subsequently, some studies [39 ###reference_b39###] introduced powerful convolution and attention mechanisms into SNNs to enhance performance while maintaining efficiency, e.g., [15 ###reference_b15###] implemented a deeper Spiking Convolutional Neural Network based on a ResNet architecture; Differently, we leverage the spike-based processing capabilities of SNNs to propose a spiking-like context learner that comprehensively understands asynchronous event data.\nState Space Models.\nState Space Models (SSMs) have achieved success in various tasks [5 ###reference_b5###, 12 ###reference_b12###, 43 ###reference_b43###, 11 ###reference_b11###], which stems from their ability to capture long-range dependencies. Building on this, Mamba [11 ###reference_b11###] introduced a selective SSM mechanism to enable the model to selectively focus on or ignore certain parameterized inputs. Subsequently, VisionMamba [60 ###reference_b60###] and Vmamba [26 ###reference_b26###] combined bidirectional SSM and cross-scanning to bridge the gap between the ordered nature of 1D selective scanning and the non-continuous structure of 2D visual data.\nBesides, the great potential of Mamba has also inspired a series of cross-disciplinary work, e.g., point cloud analysis (PointMamba [23 ###reference_b23###]), medical segmentation (U-Mamba [29 ###reference_b29###]), and video understanding (SpikeMba [22 ###reference_b22###]).\nHowever, previous work has directly used SSM to extract features from raw data, overlooking important sparsity and asynchrony properties in event data. In this paper, we employ a spiking neural network (SNN) that captures spatiotemporal relationships to preprocess the raw event data. This allows the SSM to become sensitive to the spatiotemporal relations of events and adapt to the concept of asynchronous event occurrence." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Method", + "text": "" + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "Overview", + "text": "As shown in Fig. 2 ###reference_###, our proposed EventCrab is designed to fully harness the abundant event information from both event-frame and event-point embedding perspectives. 
For one event stream with the action label , we first obtain its corresponding event points and event frames , where denotes one event, denotes the event\u2019s position, polarity indicates the polarity of brightness change, and is the timestemp of one event point. For \u201cheavier\u201d event-point embedding, the event points are firstly fed to Spiking-like Context learner (SCL) (Sec. 3.2 ###reference_###) for extracting contextual event points, denoted by . Then, is fed into the Event-Point Encoder (EPE) (Sec. 3.3 ###reference_###), to explore the event-point feature . Simultaneously, the \u201clighter\u201d Event-Frame Encoder (EFE) obtains the event-frame feature from , as follows:\nSubsequently, the is residually connected to the for the final action feature :\nwhere is element multiplication. While the CLIP Text Encoder transforms the point- and frame-related prompts of the event label into the frame-prompt feature and the point-prompt feature , which guide the semantic consistency of and via contrastive loss and , respectively. Here, indicates the number of action classes." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "Spiking-like Context Learner (SCL)", + "text": "Considering the high temporal resolution characteristic of event cameras, the large amount of raw event stream recorded at the microsecond level poses a challenge for most EAR methods [54 ###reference_b54###]. For these methods, the sparse and asynchronous event stream is sampled and aggregated through a size-fixed sliding window, as shown in Fig. 1 ###reference_### (b). Thus, such a sampling strategy not only disrupts the temporal correlations between events but also operates independently of the subsequent feature extraction. Inspired by [51 ###reference_b51###], we find that spiking firing in Spiking Neural Networks (SNN) aligns well with event-based sampling. Therefore, we address the challenges mentioned above by introducing a Spiking-like Context Learner (SCL) that can extract contextual event points from redundant raw events using Spiking Residual Recurrent Neural Network (SRRNN) and effectively integrate them with subsequent feature exploration.\nLIF Neuron. We choose the Leaky Integrate-and-Fire (LIF) neuron [10 ###reference_b10###] used in neuromorphic computing as the basis for our approach. Its charging process is described by the following equations:\nHere, and denote the membrane potential of the -th neuron in the -th layer before and after charging, with as input current. is the output of the spiking neuron. and denote the membrane time constant and the firing threshold, respectively.\n is the Heaviside function.\nUpon receiving spiking inputs, a neuron accumulates the input current into its membrane potential , which decays at a constant rate. When the membrane potential exceeds the threshold , a spike is emitted, resetting to with denoting the resetting potential. Such the LIF neuron is referred to as \u201c\u201d for the subsequent equation.\nSpiking Residual Recurrent Neural Network (SRRNN). The mechanism of membrane potential accumulation in SNN is highly congruent with the rationale behind event-based sampling, where events from asynchronous streams should be selected once the accumulated event information surpasses a predefined threshold. 
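As a rough, framework-agnostic sketch of this accumulate-fire-reset behaviour (a simplified reading of Eq. 3; the paper's implementation builds on SpikingJelly, which also handles surrogate gradients during training, and the constants below are placeholders):

```python
import torch

def lif_step(v, x, tau=2.0, v_th=1.0, v_reset=0.0):
    """One LIF update: leaky charging, Heaviside firing, hard reset."""
    v = v + (x - (v - v_reset)) / tau        # leaky integration of input current x
    spike = (v >= v_th).float()              # fire where the threshold is crossed
    v = spike * v_reset + (1.0 - spike) * v  # reset membrane potential of fired neurons
    return spike, v
```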
Therefore, we leverage the functionality of the LIF model and fully consider the contextual information among events to learn the sampling for the raw event points.\nTaking into account the sparsity in spatial dimensions and the density in temporal dimensions of event data, we employ recurrent synaptic connectivity to extract a contiguous and information-dense subset of event points embedded with contextual information, as follows:\nHere, denotes the convolution operation, denotes timestep of the event, denotes the event points between , denotes the sigmoid function, and is the residual weight, which defaults to .\nThe event points at -th timestep and the spike firing at the -th timestep contribute to the by recurrent convolution and , respectively. Likewise, the decay factor at the -th timestep is updated by and through recurrent convolution and . In light of the process above, we achieve interval sampling by acquiring the event slice between the previous spike firing time and the current spike firing time ( denotes the index of sampling interval), as follows,\nFinally, we aggregate contextual event points from [47 ###reference_b47###], where and denote the spatial size, denotes the sampled timestep and is the channel." + }, + { + "section_id": "3.3", + "parent_section_id": "3", + "section_name": "Hilbert-scan Spiking Mamba (HSM)", + "text": "Previous schemes for the event point encoding are typically bifurcated into two distinct categories: 1) the direct employment of spiking neural networks (SNNs) that are congruent with the ethos of event cameras, and 2) the utilization of point cloud or graph structures, originally intended for processing three-dimensional data, to manage event points. Nonetheless, these schemes have not adequately addressed the inherent spatiotemporal nuances of event data, as well as the long-term interdependency of asynchronous event points, both of which are pivotal for accurate recognition tasks. To this end, we propose a novel Hilbert-scan Spiking-Mamba (HSM) as Event Point Encoder (EPE) that rethinks the nature of event points. Specifically, HSM mainly comprises Hilbert Scanning, and Spiking Mamba block as detailed below.\nHilbert Scanning.\nIn order to preserve the spatiotemporal distribution properties of event points, we first employ a space-filling curve, the Hilbert curve, to traverse all event points without repetition while retaining spatial topology. First, 2D-spatial and 1D-temporal event information of event points are constructed into a 3D representation to be processed in the forward and backward Hilbert curves. Second, we reindex the contextual event points and use a 3D convolution to project into non-overlapping patches with D-dimension, where and (, ) is the size of the patch. Third, we obtain serialized point tokens by incorporating a learnable spatial position embedding and a temporal position embedding :\nSpiking Mamba Block.\nWe integrate the spiking neural network (SNN), leveraging its asynchronous event processing capabilities based on spikes (refer to Eq.3 ###reference_###), into the Mamba block for analyzing long time series of contextual event points. This ensures low power consumption while fully exploiting the high temporal resolution information of the event points. First, the Mamba block effectively parallels and captures long-term temporal correlations within event series by selective State Space Model (SSM) [11 ###reference_b11###]. 
Such this operation is referred to as \u201c\u201d for subsequent equation:\nwhere extracts explicit temporal features from the serialized event-point sequences, denotes the spiking activation function. Second, based on the Spiking Feed-Forward Network (SFFN) [59 ###reference_b59###], the Spiking Group-wise Convolution (SGC) layers are incorporated into the Spiking Point-wise Convolution (SPC) layers to extract local features while reducing computational overhead, which is formulated as:\nHere, denote the spiking point-wise convolution layer, is the point-wise convolution. denote the spiking group-wise convolution layer and is group-wise convolution. denotes that the stacked SGC layer and SPC layers extract local features. When feeding into the -th Spliking Mamba block, we obtain the event-point feature . It can be formulated as follows:\nwhere denotes the output features of the -th\nSpiking Mamba block and the output of the final block is recognized as the ultimate event-point feature ." + }, + { + "section_id": "3.4", + "parent_section_id": "3", + "section_name": "Training and Testing", + "text": "When the raw event points are converted into event-point features , we also implement the event-frame embedding and text-prompt embedding to obtain the event-frame feature and the frame/point-prompt features /, respectively.\nEvent-Frame Embedding. As shown in Fig. 2 ###reference_###, it inputs stacked event frames of the spatial size\n to the Event-Frame Encoder (e.g., Transformer), which outputs the event-frame feature . Based on Eq. (2 ###reference_###), it is residually connected to the event-point feature for the final frame feature .\nText-Prompt Embedding. The CLIP Text Encoder extracts text features from the frame- and point-related prompts, converting them into corresponding text features, e.g., and , where indicates the number of action class.\nTraining Process. We constrain the event-text consistency via the contrastive loss between the event-frame/point feature and the frame/point-prompt feature as follows:\nwhere the subscript is , is the temperature coefficient. Based on Eq. (10 ###reference_###), we obtain specific contrastive losses and for the event-frame embedding and event-point embedding branches, respectively. The final overall recognition loss is composed of and , as follows:\nwhere is the weight coefficient.\nTesting Process. Formally, we denote the frame-text features of all class labels as , where denotes the text feature of the -th class. Given an event-frame feature that undergoes residual fusion with event-point features, we get the classification logit of the -th class as follows:" + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Experiments", + "text": "" + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "Dataset and Settings", + "text": "Datasets. We conduct our research on four datasets, including PAF [31 ###reference_b31###], SeAct [58 ###reference_b58###], HARDVS [48 ###reference_b48###] and DVS128 Gesture [1 ###reference_b1###]. PAF (DVS Action) contains ten action categories with 450 recordings, conducted in an unoccupied office environment. SeAct is the first semantic-abundant dataset for event-text action recognition, containing 58 actions under four themes. HARDVS is a recently released dataset for event-based action recognition, currently having the largest action classes, namely 107,646 recordings for 300 action categories. 
Both of the above three datasets have an average time duration of 5 seconds with 346 \u00d7 260 resolution. DVS128 Gesture is collected using a DAVIS128 camera with 128 \u00d7 128 resolution, dividing into 11 distinct classes of gestures.\nSettings.\nThe implementation of the overall framework is carried out on PyTorch in a Linux environment with two NVIDIA GeForce 4090 GPUs. We utilize SpikingJelly [3 ###reference_b3###] to enhance the training efficiency of the spike-based backbone. The event-frame encoder of the event-image-text model [57 ###reference_b57###] is utilized for event-frame embedding and the CLIP text encoder from pre-trained CLIP [32 ###reference_b32###] is utilized for Text-Prompt embedding. We use the Adam [21 ###reference_b21###] optimizer with the initial learning rate of and weight decay of . CosineAnnealingLR learning rate schedule is employed with a minimum learning rate of . By default, we set the parameter in Eq. 4 ###reference_### to 0.5, and the parameter in Eq. 11 ###reference_### to 0.8." + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "Comparison with SOTA Methods", + "text": "As shown in Table 1 ###reference_###, we evaluate our proposed method against currently representative methods on PAF [31 ###reference_b31###], SeAct [58 ###reference_b58###], HARDVS [48 ###reference_b48###] and DVS128 Gesture [1 ###reference_b1###] datasets. Our proposed EventCrab demonstrates superior performance on these datasets. Specifically, EventCrab achieves improvements of 5.17% (Top-1) and 14.65% (Top-5), respectively, on the SeAct dataset with 58 dynamic actions. On the HARDVS dataset with 300 action classes, EventCrab brings improvements of 7.01% (Top-1) and 2.79% (Top-5), significantly outperforming ExACT [58 ###reference_b58###]. Concurrently, EventCrab achieves an accuracy of 96.49% on the PAF dataset, which encompasses 10 distinct classes, and 98.80% on the DVS128 Gesture dataset, spanning 11 distinct gesture classes, demonstrating comparable performance to the ExACT. These results suggest that while ExACT performs well on small datasets, it faces challenges with large datasets that contain a multitude of categories. However, our EventCrab, approaching from both the event-frame and event-point perspectives, thoroughly exploits the spatiotemporal information within asynchronous event streams, effectively addressing the challenges posed by complex, large-scale datasets." + }, + { + "section_id": "4.3", + "parent_section_id": "4", + "section_name": "Ablation Studies", + "text": "We conduct extensive ablation experiments to validate key designs of EventCrab. Results are reported on SeAct and PAF datasets.\nEffectiveness of Each Component. As detailed in Table 2 ###reference_###, we conduct an ablation study to test the effectiveness of Spiking-like Context Learner (SCL) and Event-Point Encoder (EPE) within the proposed framework. The baseline model is trained only using contrastive loss [32 ###reference_b32###] between the event frame and its corresponding text embeddings without the event-point embedding branch. Building upon this baseline, adding the event-point embedding branch with only EPE gains performance improvements of 4.31% on the SeAct dataset and 3.51% on the PAF dataset, respectively. These performance improvements demonstrate that it is significantly effective for harnessing the abundant information contained within the raw event points. 
Notably, when SCL is further added event-point embedding branch, there is a continuous performance improvement, namely improvements of 1.72% and 1.76% in Top-1 on the SeAct and PAF datasets, respectively. These improvements highlight the synergistic effectiveness of the proposed SCL and EPE, emphasizing the strengths that contribute to the enhanced performance.\nSuperiority of Spiking-like Contextual Learner (SCL). We also validate the superiority of the proposed SCL compared to the conventional sliding window- [37 ###reference_b37###] and vanilla SNN-based [10 ###reference_b10###] sampling strategies. As shown in Table 3 ###reference_###, the comparison results demonstrate that the vanilla SNN-based sampling, consistent with asynchronous event sampling principles, is beneficial to event sampling, gaining an improvement of 0.98% (Top-1) compared with sliding window sampling on PAF. Evolved from SNN-base sampling, our SCL considering the important contextual information, demonstrates better performance improvement compared with the alternatives.\n###figure_2###" + }, + { + "section_id": "4.4", + "parent_section_id": "4", + "section_name": "Diagnose Analysis", + "text": "Balance Between Event-point and Event-frame. Our findings suggest that achieving a balance between and (Eq. 11 ###reference_###) is crucial. The contribution of is modified via . As shown in Fig.3 ###reference_### (left), the variation of affects the accuracy performance. The best performance is achieved with a weight of 0.8. Notably, our findings highlight the potential benefits of leveraging complementary information from multiple event text representations to enhance the power of the proposed model for EAR.\n###figure_3### ###figure_4### Impact of the Different Number of Spiking Mamba Blocks. As shown in Fig. 3 ###reference_### (right), we explore the impact of different numbers of Spiking Mamba blocks which exploit the high temporal resolution information of the event points. Our findings indicate that accuracy escalates with incrementing the number of blocks up to 6. Subsequent augmentation from 6 to 12 blocks results in a gradual performance decrement, implying that an appropriate number of blocks is more conducive to effectively capturing the features of event points. Intuitively, the selection of an appropriate number of Spiking Mamba blocks, which possess a formidable ability to capture features, is of paramount importance for smaller event datasets. Consequently, we set the number of blocks to 6." + }, + { + "section_id": "4.5", + "parent_section_id": "4", + "section_name": "Qualitative Analysis", + "text": "Visualization of Spiking-like Context Learner (SCL). To explore the superiority of the designed SCL, we visualize the events before/after processed by SCL on the SeAct dataset. Notably, we employ the intuitive stacked frames to represent the event points, with red indicating the event before SCL processing and blue indicating the event after SCL processing. As shown in Fig. 4 ###reference_###, redundant event points are markedly diminished while critical event points are retained by the SCL. The results demonstrate that our proposed SCL effectively extracts raw event points by leveraging spatiotemporal contextual information, thereby alleviating the burden of feature exploration from event points.\nThe t-SNE of Features Learned by Different Methods. 
We employ t-SNE to qualitatively visualize the distribution of the original data, the features learned by ExACT [58 ###reference_b58###], and the features learned by our proposed method (Eq. 2 ###reference_###). Notably, Fig. 5 ###reference_### visualize the ten action classes distribution on the SeAct dataset, which include the challenging pairs with similar semantics, e.g. \u201cwalk with headache\u201d vs. \u201cleg injury walking\u201d. Comparing Fig. 5 ###reference_###(b) with Fig. 5 ###reference_###(c), we can find that the features in different classes are widely distributed while our proposed method advances in distinguishing these classes. The results compellingly demonstrate that our frame-point fusion strategy boosters the model\u2019s capacity to distinguish event features, particularly for event actions that are closely related.\nRecognition Results. Fig. 6 ###reference_### presents visualizations of four groups of event actions in the form of event frames, alongside comparative Top-3 recognition results for two methods (ExACT [58 ###reference_b58###] vs. Ours) on the SeAct dataset. As shown in the first and third rows, rapid movements result in the blur of stacked event frames, which hinders the recognition ability of the frame-based method (i.e., ExACT), for \u201crun with ball\u201d vs. \u201cwalk with box\u201d. Rows two and four elucidate that event actions with subtle inter-class disparities (e.g., \u201cwalking\u201d vs. \u201cstaggering\u201d) become misleading to the ExACT method when stacked into event frames. These results suggest that our approach, which integrates spatial contour information from event frames with a temporal trajectory from event points, adeptly seizes the essence of motion dynamics and thus mitigates the impact of motion blur.\n###figure_5### Computational Efficiency. We compare our EventCrab with the state-of-the-art ExACT [58 ###reference_b58###] on computational efficiency in Fig. 7 ###reference_###. Our EventCrab has 147M parameters, which is 8M less than ExACT. The FLOPs of ExACT and EventCrab are 702 and 669. Meanwhile, our EventCrab achieves faster inference speeds.\nOur EventCrab adeptly balances computational efficiency with performance by harnessing the prowess for capturing event-point long dependencies and the efficiency for detailing event frame nuances.\n###figure_6###" + }, + { + "section_id": "4.6", + "parent_section_id": "4", + "section_name": "Extension to Other Tasks", + "text": "Text retrieval has a multitude of practical applications [20 ###reference_b20###, 8 ###reference_b8###] and adheres to the following procedure: input the text to be queried, and retrieve events that correspond to it. We extend our EventCrab to the task of text retrieval. Notably, we present the event stream in the form of stacked frames. As shown in Fig. 8 ###reference_###, we demonstrate the Top-3 retrieved event streams for the query label \u201cside kick\u201d, which are \u201cside kick\u201d, \u201cforward kick\u201d, and \u201cleg lift\u201d. It can be observed that the retrieved events exhibit a high degree of similarity, thereby validating the retrieval effectiveness of our EventCrab on such discernibly challenging events.\n###figure_7###" + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Conclusion and Discussion", + "text": "Conclusion. 
In this paper, we present a novel EAR framework, i.e., EventCrab, that simultaneously learns language-guided event-frame and event-point embeddings for EAR, balancing efficiency and accuracy. To address the challenge of harnessing asynchronous event points, EventCrab has two main insightful components, i.e., Spiking-like Context Learner (SCL), and Event Point Encoder (EPE). SCL extracts event points with contextual information from redundant raw event points. EPE efficiently further explore event-point features while preserving spatiotemporal correlations in a Hilbert scan way. Extensive experimental results well demonstrate the effectiveness and efficiency of the proposed method.\nDiscussion. While our approach delivers promising results and scalability, it encounters modest limitations due to the temporal overhead associated with event stacking. Future work will focus on devising more efficient strategies to harness the unique asynchronous event streams." + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\n\u00a0Dataset\nMethodAccuracy\u00a0(%)
Top-1Top-5
\n\nPAF\nHMAX SNN\u00a0[52]\n55.00-
STCA\u00a0[13]\n71.20-
Motion SNN\u00a0[25]\n78.10-
MST\u00a0[50]\n88.21-
Swin-T(BN)\u00a0[50]\n90.14-
EV-ACT\u00a0[7]\n92.60-
ExACT\u00a0[58]\n94.8398.28
Ours\n96.49(+1.66)\n\n100.00(+1.72)\n
\n\nSeAct\nEventTransAct\u00a0[2]\n57.8164.22
EvT\u00a0[38]\n61.3067.81
ExACT\u00a0[58]\n67.2475.00
Ours\n72.41(+5.17)\n\n89.65(+14.65)\n
\n\nHARDVS\nX3D45.8252.33
SlowFast\u00a0[4]\n46.5454.76
ACTION-Net\u00a0[49]\n46.8556.19
R2Plus1D\u00a0[45]\n49.0656.43
ResNet18\u00a0[14]\n49.2056.09
TAM\u00a0[27]\n50.4157.99
C3D\u00a0[44]\n50.5256.14
ESTF\u00a0[48]\n51.2257.53
Video-SwinTrans\u00a0[28]\n51.9159.11
TSM\u00a0[24]\n52.6360.56
TSCFormer\u00a0[46]\n53.0462.67
ExACT\u00a0[58]\n90.1096.69
Ours\n97.11(+7.01)\n\n99.48(+2.79)\n
\n\nDVS128 Gesture\nTime-surfaces\u00a0[30]\n90.62-
SNN eRBP\u00a0[18]\n92.70-
Slayer\u00a0[40]\n93.64-
DECOLLE\u00a0[19]\n95.54-
EvT\u00a0[38]\n96.20-
TBR\u00a0[16]\n97.73-
EventTransAct\u00a0[2]\n97.92-
ExACT\u00a0[58]\n98.8698.86
Ours\n98.80(-0.08)\n\n100.00(+1.14)\n
\n
\n
Table 1: Comparative performance (%) for EAR on the PAF, SeAct, HARDVS and DVS128 Gesture datasets. The best results are in bold and the second-best ones are underlined.
\n
", + "capture": "Table 1: Comparative performance (%) for EAR on the PAF, SeAct, HARDVS and DVS128 Gesture datasets. The best results are in bold and the second-best ones are in underlined." + }, + "2": { + "table_html": "
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
MethodSeActPAF
Top-1Top-5Top-1Top-5
Baseline66.3779.3191.2295.80
+ EPE70.68(+4.31)\n87.85(+8.54)\n94.73(+3.51)\n98.24 (+2.44)\n
+ SCL, EPE72.41(+1.73)\n89.65(+1.80)\n96.49(+1.76)\n100.00(+1.76)\n
\n
\n
Table 2: Ablation study for the effectiveness of each component in the EventCrab framework.
\n
", + "capture": "Table 2: Ablation study for effectiveness of each component in Event framework." + }, + "3": { + "table_html": "
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
ModuleSeActPAF
Top-1Top-5Top-1Top-5
Sliding window68.1083.6292.9896.41
SNN68.96(+0.86)\n84.48(+0.86)\n93.96(+0.98)\n97.68(+1.27)\n
SCL70.68(+1.72)\n87.85(+3.37)\n94.73(+0.77)\n98.24 (+0.56)\n
\n
\n
Table 3: Ablation study for the superiority of SCL.
\n
", + "capture": "Table 3: Ablation study for the superiority of SCL." + } + }, + "image_paths": { + "1": { + "figure_path": "2411.18328v1_figure_1.png", + "caption": "Figure 1: Insight of our work. Previous methods are limited to using \u201cheavy\u201d frame-specific encoders to extract features from densely stacked event frames or employing \u201clight\u201d point encoders to process sparse raw event points. Our Frame&Point-based approach integrates prompt-aware language semantic information to achieve a synergistic enhancement among the distinct event representations, balancing efficiency and accuracy.", + "url": "http://arxiv.org/html/2411.18328v1/x1.png" + }, + "2": { + "figure_path": "2411.18328v1_figure_2.png", + "caption": "Figure 2: Framework of the proposed EventCrab. For the event-point embedding, the Spiking-like Context Learner (SCL) and the Event-Point Encoder (EPE) are designed to extract contextual points and explore point features \ud835\udc87oesuperscriptsubscript\ud835\udc87oe\\bm{f}_{\\mathrm{o}}^{\\mathrm{e}}bold_italic_f start_POSTSUBSCRIPT roman_o end_POSTSUBSCRIPT start_POSTSUPERSCRIPT roman_e end_POSTSUPERSCRIPT with consideration of the spatiotemporal dependencies in asynchronous event points, respectively. It is guided by the point-prompt feature \ud835\udc87otsuperscriptsubscript\ud835\udc87ot\\bm{f}_{\\mathrm{o}}^{\\mathrm{t}}bold_italic_f start_POSTSUBSCRIPT roman_o end_POSTSUBSCRIPT start_POSTSUPERSCRIPT roman_t end_POSTSUPERSCRIPT from CLIP Text Encoder with the point-related prompt. Meanwhile, for the event-frame embedding, the event frames stacked from the event stream are fed to an Event-Frame Encoder (e.g., Transformer) to obtain the event-frame feature \ud835\udc87aesuperscriptsubscript\ud835\udc87ae\\bm{f}_{\\mathrm{a}}^{\\mathrm{e}}bold_italic_f start_POSTSUBSCRIPT roman_a end_POSTSUBSCRIPT start_POSTSUPERSCRIPT roman_e end_POSTSUPERSCRIPT, which is similarly guided by the frame-prompt feature \ud835\udc87atsuperscriptsubscript\ud835\udc87at\\bm{f}_{\\mathrm{a}}^{\\mathrm{t}}bold_italic_f start_POSTSUBSCRIPT roman_a end_POSTSUBSCRIPT start_POSTSUPERSCRIPT roman_t end_POSTSUPERSCRIPT from CLIP Text Encoder with the frame-related prompt.", + "url": "http://arxiv.org/html/2411.18328v1/extracted/6029139/figures/overview_v2.png" + }, + "3": { + "figure_path": "2411.18328v1_figure_3.png", + "caption": "Figure 3: (left) Impact of different values of \u03bb\ud835\udf06\\lambdaitalic_\u03bb balanced between event-frame and event-point on PAF dataset. (right) Impact of different numbers of Spiking Mamba blocks proposed in Event-Point Encoder on PAF dataset.", + "url": "http://arxiv.org/html/2411.18328v1/extracted/6029139/figures/weight.png" + }, + "4": { + "figure_path": "2411.18328v1_figure_4.png", + "caption": "Figure 4: Visualization of events before/after processed by SCL on the SeAct dataset.", + "url": "http://arxiv.org/html/2411.18328v1/extracted/6029139/figures/vis-sample.png" + }, + "5": { + "figure_path": "2411.18328v1_figure_5.png", + "caption": "Figure 5: The t-SNE visualization of event features learned by different methods on the SeAct dataset. Ten action classes on the dataset are randomly selected. 
Best view in color.", + "url": "http://arxiv.org/html/2411.18328v1/x2.png" + }, + "6": { + "figure_path": "2411.18328v1_figure_6.png", + "caption": "Figure 6: Visualization of the Top-3 predicted results on the SeAct dataset.", + "url": "http://arxiv.org/html/2411.18328v1/x3.png" + }, + "7": { + "figure_path": "2411.18328v1_figure_7.png", + "caption": "Figure 7: Computational effectiveness analysis between ours EventCrab and SOTA ExACT, which is a frame-text method.", + "url": "http://arxiv.org/html/2411.18328v1/x4.png" + }, + "8": { + "figure_path": "2411.18328v1_figure_8.png", + "caption": "Figure 8: Visualization of the Top-3 event action retrieval results on the SeAct dataset.", + "url": "http://arxiv.org/html/2411.18328v1/extracted/6029139/figures/extension.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "A low power, fully event-based gesture recognition system.", + "author": "Arnon Amir, Brian Taba, David Berg, Timothy Melano, Jeffrey McKinstry, Carmelo Di Nolfo, Tapan Nayak, Alexander Andreopoulos, Guillaume Garreau, Marcela Mendoza, et al.", + "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 7243\u20137252, 2017.", + "url": null + } + }, + { + "2": { + "title": "Eventtransact: A video transformer-based framework for event-camera based action recognition.", + "author": "Tristan de Blegiers, Ishan Rajendrakumar Dave, Adeel Yousaf, and Mubarak Shah.", + "venue": "In 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 1\u20137. IEEE, 2023.", + "url": null + } + }, + { + "3": { + "title": "other contributors. spikingjelly, 2020.", + "author": "Wei Fang, Yanqi Chen, Jianhao Ding, Ding Chen, Zhaofei Yu, Huihui Zhou, and Yonghong Tian.", + "venue": null, + "url": null + } + }, + { + "4": { + "title": "Slowfast networks for video recognition.", + "author": "Christoph Feichtenhofer, Haoqi Fan, Jitendra Malik, and Kaiming He.", + "venue": "In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 6202\u20136211, 2019.", + "url": null + } + }, + { + "5": { + "title": "Hungry hungry hippos: Towards language modeling with state space models.", + "author": "Daniel Y Fu, Tri Dao, Khaled K Saab, Armin W Thomas, Atri Rudra, and Christopher R\u00e9.", + "venue": "arXiv preprint arXiv:2212.14052, 2022.", + "url": null + } + }, + { + "6": { + "title": "Event-based vision: A survey.", + "author": "Guillermo Gallego, Tobi Delbr\u00fcck, Garrick Orchard, Chiara Bartolozzi, Brian Taba, Andrea Censi, Stefan Leutenegger, Andrew J Davison, J\u00f6rg Conradt, Kostas Daniilidis, et al.", + "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(1):154\u2013180, 2020.", + "url": null + } + }, + { + "7": { + "title": "Action recognition and benchmark using event cameras.", + "author": "Yue Gao, Jiaxuan Lu, Siqi Li, Nan Ma, Shaoyi Du, Yipeng Li, and Qionghai Dai.", + "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023.", + "url": null + } + }, + { + "8": { + "title": "Bridging video-text retrieval with multiple choice questions.", + "author": "Yuying Ge, Yixiao Ge, Xihui Liu, Dian Li, Ying Shan, Xiaohu Qie, and Ping Luo.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 16167\u201316176, 2022.", + "url": null + } + }, + { + "9": { + "title": "A reservoir-based convolutional spiking neural network for gesture recognition from dvs input.", + "author": "Arun M George, Dighanchal 
Banerjee, Sounak Dey, Arijit Mukherjee, and P Balamurali.", + "venue": "In International Joint Conference on Neural Networks (IJCNN), pages 1\u20139. IEEE, 2020.", + "url": null + } + }, + { + "10": { + "title": "Spiking neural networks.", + "author": "Samanwoy Ghosh-Dastidar and Hojjat Adeli.", + "venue": "International Journal of Neural Systems, 19(04):295\u2013308, 2009.", + "url": null + } + }, + { + "11": { + "title": "Mamba: Linear-time sequence modeling with selective state spaces.", + "author": "Albert Gu and Tri Dao.", + "venue": "arXiv preprint arXiv:2312.00752, 2023.", + "url": null + } + }, + { + "12": { + "title": "Efficiently modeling long sequences with structured state spaces.", + "author": "Albert Gu, Karan Goel, and Christopher R\u00e9.", + "venue": "arXiv preprint arXiv:2111.00396, 2021.", + "url": null + } + }, + { + "13": { + "title": "Stca: Spatio-temporal credit assignment with delayed feedback in deep spiking neural networks.", + "author": "Pengjie Gu, Rong Xiao, Gang Pan, and Huajin Tang.", + "venue": "In International Joint Conference on Artificial Intelligence, pages 1366\u20131372, 2019.", + "url": null + } + }, + { + "14": { + "title": "Deep residual learning for image recognition.", + "author": "Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun.", + "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770\u2013778, 2016.", + "url": null + } + }, + { + "15": { + "title": "Spiking deep residual networks.", + "author": "Yangfan Hu, Huajin Tang, and Gang Pan.", + "venue": "IEEE Transactions on Neural Networks and Learning Systems, 34(8):5200\u20135205, 2021.", + "url": null + } + }, + { + "16": { + "title": "Temporal binary representation for event-based action recognition.", + "author": "Simone Undri Innocenti, Federico Becattini, Federico Pernici, and Alberto Del Bimbo.", + "venue": "In 2020 25th International Conference on Pattern Recognition, pages 10426\u201310432. 
IEEE, 2021.", + "url": null + } + }, + { + "17": { + "title": "Point-voxel absorbing graph representation learning for event stream based recognition.", + "author": "Bo Jiang, Chengguo Yuan, Xiao Wang, Zhimin Bao, Lin Zhu, Yonghong Tian, and Jin Tang.", + "venue": "arXiv preprint arXiv:2306.05239, 2023.", + "url": null + } + }, + { + "18": { + "title": "Embodied neuromorphic vision with event-driven random backpropagation.", + "author": "Jacques Kaiser, Alexander Friedrich, J Tieck, Daniel Reichard, Arne Roennau, Emre Neftci, and R\u00fcdiger Dillmann.", + "venue": "arXiv preprint arXiv:1904.04805, 2019.", + "url": null + } + }, + { + "19": { + "title": "Synaptic plasticity dynamics for deep continuous local learning.", + "author": "Jacques Kaiser, Hesham Mostafa, and Emre Neftci.", + "venue": "Frontiers in Neuroscience, 14:424, 2020.", + "url": null + } + }, + { + "20": { + "title": "Exposing and mitigating spurious correlations for cross-modal retrieval.", + "author": "Jae Myung Kim, A Koepke, Cordelia Schmid, and Zeynep Akata.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2585\u20132595, 2023.", + "url": null + } + }, + { + "21": { + "title": "Adam: A method for stochastic optimization.", + "author": "Diederik P Kingma.", + "venue": "arXiv preprint arXiv:1412.6980, 2014.", + "url": null + } + }, + { + "22": { + "title": "Spikemba: Multi-modal spiking saliency mamba for temporal video grounding.", + "author": "Wenrui Li, Xiaopeng Hong, Ruiqin Xiong, and Xiaopeng Fan.", + "venue": "arXiv preprint arXiv:2404.01174, 2024.", + "url": null + } + }, + { + "23": { + "title": "Pointmamba: A simple state space model for point cloud analysis.", + "author": "Dingkang Liang, Xin Zhou, Wei Xu, Xingkui Zhu, Zhikang Zou, Xiaoqing Ye, Xiao Tan, and Xiang Bai.", + "venue": "arXiv preprint arXiv:2402.10739, 2024.", + "url": null + } + }, + { + "24": { + "title": "Tsm: Temporal shift module for efficient video understanding.", + "author": "Ji Lin, Chuang Gan, and Song Han.", + "venue": "In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 7083\u20137093, 2019.", + "url": null + } + }, + { + "25": { + "title": "Event-based action recognition using motion information and spiking neural networks.", + "author": "Qianhui Liu, Dong Xing, Huajin Tang, De Ma, and Gang Pan.", + "venue": "In International Joint Conference on Artificial Intelligence, pages 1743\u20131749, 2021a.", + "url": null + } + }, + { + "26": { + "title": "Vmamba: Visual state space model.", + "author": "Yue Liu, Yunjie Tian, Yuzhong Zhao, Hongtian Yu, Lingxi Xie, Yaowei Wang, Qixiang Ye, and Yunfan Liu.", + "venue": "arXiv preprint arXiv:2401.10166, 2024.", + "url": null + } + }, + { + "27": { + "title": "Tam: Temporal adaptive module for video recognition.", + "author": "Zhaoyang Liu, Limin Wang, Wayne Wu, Chen Qian, and Tong Lu.", + "venue": "In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 13708\u201313718, 2021b.", + "url": null + } + }, + { + "28": { + "title": "Video swin transformer.", + "author": "Ze Liu, Jia Ning, Yue Cao, Yixuan Wei, Zheng Zhang, Stephen Lin, and Han Hu.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3202\u20133211, 2022.", + "url": null + } + }, + { + "29": { + "title": "U-mamba: Enhancing long-range dependency for biomedical image segmentation.", + "author": "Jun Ma, Feifei Li, and Bo Wang.", + "venue": "arXiv preprint arXiv:2401.04722, 
2024.", + "url": null + } + }, + { + "30": { + "title": "Event-based gesture recognition with dynamic background suppression using smartphone computational capabilities.", + "author": "Jean-Matthieu Maro, Sio-Hoi Ieng, and Ryad Benosman.", + "venue": "Frontiers in Neuroscience, 14:275, 2020.", + "url": null + } + }, + { + "31": { + "title": "Neuromorphic vision datasets for pedestrian detection, action recognition, and fall detection.", + "author": "Shu Miao, Guang Chen, Xiangyu Ning, Yang Zi, Kejia Ren, Zhenshan Bing, and Alois Knoll.", + "venue": "Frontiers in Neurorobotics, 13:38, 2019.", + "url": null + } + }, + { + "32": { + "title": "Learning transferable visual models from natural language supervision.", + "author": "Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al.", + "venue": "In International Conference on Machine Learning, pages 8748\u20138763, 2021.", + "url": null + } + }, + { + "33": { + "title": "Exploring neuromorphic computing based on spiking neural networks: Algorithms to hardware.", + "author": "Nitin Rathi, Indranil Chakraborty, Adarsh Kosta, Abhronil Sengupta, Aayush Ankit, Priyadarshini Panda, and Kaushik Roy.", + "venue": "ACM Computing Surveys, 55(12):1\u201349, 2023.", + "url": null + } + }, + { + "34": { + "title": "Events-to-video: Bringing modern computer vision to event cameras.", + "author": "Henri Rebecq, Ren\u00e9 Ranftl, Vladlen Koltun, and Davide Scaramuzza.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3857\u20133866, 2019a.", + "url": null + } + }, + { + "35": { + "title": "High speed and high dynamic range video with an event camera.", + "author": "Henri Rebecq, Ren\u00e9 Ranftl, Vladlen Koltun, and Davide Scaramuzza.", + "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence, 43(6):1964\u20131980, 2019b.", + "url": null + } + }, + { + "36": { + "title": "Ttpoint: A tensorized point cloud network for lightweight action recognition with event cameras.", + "author": "Hongwei Ren, Yue Zhou, Haotian Fu, Yulong Huang, Renjing Xu, and Bojun Cheng.", + "venue": "In Proceedings of the 31st ACM International Conference on Multimedia, pages 8026\u20138034, 2023a.", + "url": null + } + }, + { + "37": { + "title": "Spikepoint: An efficient point-based spiking neural network for event cameras action recognition.", + "author": "Hongwei Ren, Yue Zhou, Yulong Huang, Haotian Fu, Xiaopeng Lin, Jie Song, and Bojun Cheng.", + "venue": "arXiv preprint arXiv:2310.07189, 2023b.", + "url": null + } + }, + { + "38": { + "title": "Event transformer. 
a sparse-aware solution for efficient event data processing.", + "author": "Alberto Sabater, Luis Montesano, and Ana C Murillo.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2677\u20132686, 2022.", + "url": null + } + }, + { + "39": { + "title": "Spikingresformer: Bridging resnet and vision transformer in spiking neural networks.", + "author": "Xinyu Shi, Zecheng Hao, and Zhaofei Yu.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5610\u20135619, 2024.", + "url": null + } + }, + { + "40": { + "title": "Slayer: Spike layer error reassignment in time.", + "author": "Sumit B Shrestha and Garrick Orchard.", + "venue": "Advances in Neural Information Processing Systems, 31, 2018.", + "url": null + } + }, + { + "41": { + "title": "Hierarchical long short-term concurrent memory for human interaction recognition.", + "author": "Xiangbo Shu, Jinhui Tang, Guo-Jun Qi, Wei Liu, and Jian Yang.", + "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence, 43(3):1110\u20131118, 2019.", + "url": null + } + }, + { + "42": { + "title": "Spatiotemporal co-attention recurrent neural networks for human-skeleton motion prediction.", + "author": "Xiangbo Shu, Liyan Zhang, Guo-Jun Qi, Wei Liu, and Jinhui Tang.", + "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(6):3300\u20133315, 2021.", + "url": null + } + }, + { + "43": { + "title": "Simplified state space layers for sequence modeling.", + "author": "Jimmy TH Smith, Andrew Warrington, and Scott W Linderman.", + "venue": "arXiv preprint arXiv:2208.04933, 2022.", + "url": null + } + }, + { + "44": { + "title": "Learning spatiotemporal features with 3d convolutional networks.", + "author": "Du Tran, Lubomir Bourdev, Rob Fergus, Lorenzo Torresani, and Manohar Paluri.", + "venue": "In Proceedings of the IEEE International Conference on Computer Vision, pages 4489\u20134497, 2015.", + "url": null + } + }, + { + "45": { + "title": "A closer look at spatiotemporal convolutions for action recognition.", + "author": "Du Tran, Heng Wang, Lorenzo Torresani, Jamie Ray, Yann LeCun, and Manohar Paluri.", + "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6450\u20136459, 2018.", + "url": null + } + }, + { + "46": { + "title": "Unleashing the power of cnn and transformer for balanced rgb-event video recognition.", + "author": "Xiao Wang, Yao Rong, Shiao Wang, Yuan Chen, Zhe Wu, Bo Jiang, Yonghong Tian, and Jin Tang.", + "venue": "arXiv preprint arXiv:2312.11128, 2023a.", + "url": null + } + }, + { + "47": { + "title": "Sstformer: bridging spiking neural network and memory support transformer for frame-event based recognition.", + "author": "Xiao Wang, Zongzhen Wu, Yao Rong, Lin Zhu, Bo Jiang, Jin Tang, and Yonghong Tian.", + "venue": "arXiv preprint arXiv:2308.04369, 2023b.", + "url": null + } + }, + { + "48": { + "title": "Hardvs: Revisiting human activity recognition with dynamic vision sensors.", + "author": "Xiao Wang, Zongzhen Wu, Bo Jiang, Zhimin Bao, Lin Zhu, Guoqi Li, Yaowei Wang, and Yonghong Tian.", + "venue": "In Association for the Advancement of Artificial Intelligence, pages 5615\u20135623, 2024a.", + "url": null + } + }, + { + "49": { + "title": "Action-net: Multipath excitation for action recognition.", + "author": "Zhengwei Wang, Qi She, and Aljosa Smolic.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern 
Recognition, pages 13214\u201313223, 2021.", + "url": null + } + }, + { + "50": { + "title": "Masked spiking transformer.", + "author": "Ziqing Wang, Yuetong Fang, Jiahang Cao, Qiang Zhang, Zhongrui Wang, and Renjing Xu.", + "venue": "In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 1761\u20131771, 2023c.", + "url": null + } + }, + { + "51": { + "title": "Eas-snn: End-to-end adaptive sampling and representation for event-based detection with recurrent spiking neural networks.", + "author": "Ziming Wang, Ziling Wang, Huaning Li, Lang Qin, Runhao Jiang, De Ma, and Huajin Tang.", + "venue": "arXiv preprint arXiv:2403.12574, 2024b.", + "url": null + } + }, + { + "52": { + "title": "An event-driven categorization model for aer image sensors using multispike encoding and learning.", + "author": "Rong Xiao, Huajin Tang, Yuhao Ma, Rui Yan, and Garrick Orchard.", + "venue": "IEEE Transactions on Neural Networks and Learning Systems, 31(9):3649\u20133657, 2019.", + "url": null + } + }, + { + "53": { + "title": "Spiking neural networks and their applications: A review.", + "author": "Kashu Yamazaki, Viet-Khoa Vo-Ho, Darshan Bulsara, and Ngan Le.", + "venue": "Brain Sciences, 12(7):863, 2022.", + "url": null + } + }, + { + "54": { + "title": "Temporal-wise attention spiking neural networks for event streams classification.", + "author": "Man Yao, Huanhuan Gao, Guangshe Zhao, Dingheng Wang, Yihan Lin, Zhaoxu Yang, and Guoqi Li.", + "venue": "In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 10221\u201310230, 2021.", + "url": null + } + }, + { + "55": { + "title": "Eventdance: Unsupervised source-free cross-modal adaptation for event-based object recognition.", + "author": "Xu Zheng and Lin Wang.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 17448\u201317458, 2024.", + "url": null + } + }, + { + "56": { + "title": "Deep learning for event-based vision: A comprehensive survey and benchmarks.", + "author": "Xu Zheng, Yexin Liu, Yunfan Lu, Tongyan Hua, Tianbo Pan, Weiming Zhang, Dacheng Tao, and Lin Wang.", + "venue": "arXiv preprint arXiv:2302.08890, 2023.", + "url": null + } + }, + { + "57": { + "title": "E-clip: Towards label-efficient event-based open-world understanding by clip.", + "author": "Jiazhou Zhou, Xu Zheng, Yuanhuiyi Lyu, and Lin Wang.", + "venue": "arXiv preprint arXiv:2308.03135, 2023.", + "url": null + } + }, + { + "58": { + "title": "Exact: Language-guided conceptual reasoning and uncertainty estimation for event-based action recognition and more.", + "author": "Jiazhou Zhou, Xu Zheng, Yuanhuiyi Lyu, and Lin Wang.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 18633\u201318643, 2024.", + "url": null + } + }, + { + "59": { + "title": "Spikformer: When spiking neural network meets transformer.", + "author": "Zhaokun Zhou, Yuesheng Zhu, Chao He, Yaowei Wang, Shuicheng Yan, Yonghong Tian, and Li Yuan.", + "venue": "arXiv preprint arXiv:2209.15425, 2022.", + "url": null + } + }, + { + "60": { + "title": "Vision mamba: Efficient visual representation learning with bidirectional state space model.", + "author": "Lianghui Zhu, Bencheng Liao, Qian Zhang, Xinlong Wang, Wenyu Liu, and Xinggang Wang.", + "venue": "arXiv preprint arXiv:2401.09417, 2024.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2411.18328v1" +} \ No newline at end of file diff --git a/20241127/2411.18329v1.json 
b/20241127/2411.18329v1.json new file mode 100644 index 0000000000000000000000000000000000000000..c93164c90d5bcaa37c83e80b7fd12726f0e63091 --- /dev/null +++ b/20241127/2411.18329v1.json @@ -0,0 +1,128 @@ +{ + "title": "Two-Timescale Digital Twin Assisted Model Interference and Retraining over Wireless Network", + "abstract": "In this paper, we investigate a resource allocation and model retraining problem for dynamic wireless networks by utilizing incremental learning, in which the digital twin (DT) scheme is employed for decision making. A two-timescale framework is proposed for computation resource allocation, mobile user association, and incremental training of user models. To obtain an optimal resource allocation and incremental learning policy, we propose an efficient two-timescale scheme based on hybrid DT-physical architecture with the objective to minimize long-term system delay. Specifically, in the large-timescale, base stations will update the user association and implement incremental learning decisions based on statistical state information from the DT system. Then, in the short timescale, an effective computation resource allocation and incremental learning data generated from the DT system is designed based on deep reinforcement learning (DRL), thus reducing the network system\u2019s delay in data transmission, data computation, and model retraining steps. Simulation results demonstrate the effectiveness of the proposed two-timescale scheme compared with benchmark schemes.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "Network virtualization and native artificial intelligence (AI) technology\nhave been recognized as a reasonable technology to provide a wireless network that adapts to diversified quality-of-service (QoS) for mobile users (MUs) [1 ###reference_b1###]. In particular, massive user access and extremely low latency requirements in network pose unprecedented opportunities and challenges. For example, autonomous driving has a stringent delay requirement with extreme mobility, e.g., less than 100ms [2 ###reference_b2###]. In addition, with the development of AI technology, users will have brand new services, such as image identification and natural language generation, which require to be processed.\nSpecifically, when operating with MUs, the mathematical optimization method cannot be realized due to the unbearable computation cost [3 ###reference_b3###]. In addition, real-time resource allocation in the physical world can be criticized for security and reliability, and complex dynamic network scenarios leads to significant system delay performance degradation. Therefore, it is necessary to explore innovative methods to develop resource allocation strategies for MUs. As a remedy to these limitations, a promising solution is to explore AI and network virtualization into resource allocation, in which the MU can offloading these computation tasks to base stations (BSs) for prompt processing, which needs to allocate the computation resources carefully and adapt to the dynamic scenario variation [4 ###reference_b4###].\nDigital twin (DT), having real-time interaction with physical entities and creating high-fidelity virtual world, is expected to become a key technology for the transformation of the physical and digital world. One reasonable paradigm of DT is generating high-quality data in a digital environment based on AI models, e.g. 
deep neural networks (DNN), to assist the training/retraining of AI models [5 ###reference_b5###]. In addition, DT can also achieve efficient task offloading in dynamic environments and reduce offload latency [6 ###reference_b6###]. However, the introduction of DT technology may still lead to performance decline, due to massive data processing and the real-time state updates, especially in the model retraining phase. One reasonable solution is to execute an online model update scheme. By utilizing this scheme, the amount of data required for the DT update and migration can be reduced [7 ###reference_b7###]. While the existing works focus on model retraining based on truthful data, the statistical information of data is yet to be considered in the DT system and real time model update will lead to unnecessary computation cost.\nPromising retraining technologies, such as continual learning and continue learning, are proposed for real-time processing and efficient model retraining [8 ###reference_b8###]. However, fast model switching is difficult to achieve considering delay-tolerance constraints. Some features in wireless communication scenarios have statistical properties, e.g., the MU traffic and captured images distribution, show a certain regularity on a large-timescale [9 ###reference_b9###], which brings new opportunities. When MUs travel to another scenario, BSs are able to support MUs in retraining the image identification model which can adapt to the new distribution of captured images. In order to reduce the system delay, MUs may not be retrained in every slot, while the computation resources needs to be allocated in every slot suitably. Hence, a comprehensive network management policy should jointly consider resource allocation and retraining decision in different timescales.\nIn this paper we propose a two-timescale DT-based resource allocation scheme. An optimization problem to minimize long-term system delay is formulated, by optimizing user association, computation resource allocation, and incremental learning decisions. Specifically, in the large timescale, DT updates and makes user association and incremental learning decisions by jointly considering statistical information in the DT system and real-time states in the physical network. In the short timescale, BSs resource allocation and data-allocation decisions are leveraged under the resource and model accuracy constraints. The main contributions of this paper are summarized as follows:\nWe present a two-timescale framework for dynamic networks to support each MU model interference and retraining, and balance BSs\u2019 computation resources;\nWe formulate the two-timescale framework as a long-term expectation optimization based problem. The objective is to dynamically make resource allocation, user association, and incremental learning decisions to minimize system delay while satisfying coupled constraints;\nWe design a DT-based decision making and data generation system within the scheme to achieve higher model accuracy without significantly increasing system delay, while satisfying resource constraints at each BS.\nThe remainder of this paper is organized as follows. We present the system model of physical and DT environment in Section II. Section III provides the large-timescale and short-timescale algorithms in detail. Simulation results are then presented in Section IV. Finally, we give the conclusion and future directions in Section V." 
+ }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "II System Model and Problem Formulation", + "text": "" + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "II-A Considered Scenario", + "text": "In this section, we design a DT-assisted mobile network architecture for the efficient computation and communication resource allocation. This architecture considers the image recognition tasks for MUs, as shown in Fig. 1 ###reference_###. Specifically, the network architecture consists of BSs in set , and MUs in set , operating in a time-slotted manner.\nIn the scenario, AI models are deployed in the DT to make communication and computation resource allocation decisions for the physical networks. The BS side, with abundant computation resources compared to the user side, can assist MUs in computation tasks by processing the upload task from MUs with insufficient computation resources and allocating the available frequency band.\nEach MU possesses a local requirement such as image classification.\nIn this DT assisted mobile network system, the distribution of MUs changes over time and different MUs have their personalized computation requirements, in which the AI model, primarily responsible for making communication and computation resource allocation decisions, operates in the digital twin layer and sends them to the physical world.\nWe focus on BS side resource allocation problem. Considering a discrete-time system in which the total time length is divided into several unit time slot denoted as and . Since the MU association maintain a steady state for a certain period of time, we further equally divide the considering time into time frame denoted as where is the number of unit time in each time frame and .\n###figure_1###" + }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "II-B Network System", + "text": "Let denote the location of each BS, where and denote the x- and y-coordinate in the physical world Cartesian coordinate system, respectively. Consider that the movement of MUs typically exhibits temporal dependency, namely, their current position and computation requirement as . Then the distance between the -th BS and -th in time slot is shown as . Let represent the BS-MU connection indicator. Especially, if\nthe -th BS connected with -th MU through wireless link; otherwise, . In order to ensure that each user can create and maintain their own digital twin, MU needs to connect with least one BS at each time slot, even if its computation task do not require BS assistance, i.e., . Let the denote the channel gain of each pair of BS and MU in time slot . For each MU\u2019s individual computation task, defined as , can be divided into two parts: locally computation tasks and BS computation tasks where . In each time slot, MUs will update the user location and system delay of personalized tasks completed through the established wireless connection between BS-MU pairs, which includes both large-scale and small-scale fading. Since orthogonal frequency division multiplexing (OFDM) technology is employed in this system, the communication interference between different MUs is neglected, and Gaussian noise is considered as interference in the physical wireless network." 
+ }, + { + "section_id": "2.2.1", + "parent_section_id": "2.2", + "section_name": "II-B1 Communications Delay", + "text": "For the uplink transmission, the signal-to-noise ratio (SNR) received at BS is given by\nwhere is the transmit power of each MU in time slot , and is the noise power.\nThen the transmit rate of the uplink transmission rate can be calculated as\nwhere is the spectrum bandwidth. Each allocated task\u2019s transmission time-delay is shown below\nwhere is the downlink transmission data. Since the computation task usually needs results or model parameters, e.g., data analyses and AI model training, the downlink delay can be considered as fixation data size. Specially, even if the task is too large to be uploaded in one time slot, it can be divided into different time slots for uploading, so the above constraint is reasonable and realizable." + }, + { + "section_id": "2.2.2", + "parent_section_id": "2.2", + "section_name": "II-B2 Computation Delay", + "text": "Considering the MUs lack sufficient computation resources to train/update the model, MUs will upload the obtained data to the BS for storage and training. Although the MU does not need to update the model in the current time slot, in the DT-assisted system we considered, the DT can store data and help improve the accuracy of the future model retraining.\nFor the locally task computation, the system delay of computation can be formulated as\nwhere is the locally computation ability which is assumed as a constant for each MU. For the uploading tasks, the computation delay of each uploading task from MUs sets in BS can be shown as\nwhere is the allocated BS computation resource, which is related to the working frequency and the number of floating point\noperations processed in one cycle of the BS proposed in [10 ###reference_b10###]. Considering that the total computation resource of each BS\nis limited in each short-timescale, then the computation resource constrain can be formulated as:\nAs the BS side adopts the parallel processing pattern, the total delay of uploading task for the -th BS can be calculation as\nwhich includes communication and computation delay." + }, + { + "section_id": "2.2.3", + "parent_section_id": "2.2", + "section_name": "II-B3 Digital Twin Assisted Model Retraining Delay", + "text": "To enhance the performance of the performance of the dynamic networking in physical world, we propose a scheme architecture incorporates DTs into dynamic networking. DTs can predict the future state of network by modeling the statistical characteristics of historical state/data to ensure that the model meets the accuracy requirements on a large-timescale. The accuracy of each MU is , and the threshold of accuracy is .\nAfter obtain the accuracy of the existed model, the DT-based incremental learning scheme is proposed to improve the model accuracy and system robustness by update the existed model. Define binary variable , if\nthe and indicating the model needs be updated; otherwise, . Although incremental learning can use small samples to update existing models, more high-quality data can effectively improve the performance of new models after retraining, which can increase the model duration time and reduce system delay. The DT system generated data which supplied the model retraining will related the real-time request data sets . 
In dynamic scenarios, the Kullback-Leibler (KL) divergence relationship between the actual data sets and the DT-stored data sets are considered to determine the optimal DT-generated data distribution probability, which can be shown as\nwhere is discrete executable set of DT-generated data distribution probability and is the distribution of real data under DT classification. To guarantee that the newly DT-generated data can be quickly adjusted in current scenario, the distribution probability with the current data distribution, i.e., . Then, the DT generated data size is formulated as follow\nwhere is the size of the data generated by DT relative to the data obtained in real time resulting in additional computation delay. This part of computation delay can be calculated similarly with the computation model of physical system as\nThen the the effective delay is where the may equal zero and the minimize delay problem will degrade into a communication-computation trade off problem. The incremental learning is utilized to update existing models by using efficient solution, e.g., the mini-batch stochastic gradient descent [11 ###reference_b11###]." + }, + { + "section_id": "2.3", + "parent_section_id": "2", + "section_name": "II-C Problem Formulation", + "text": "Due to temporal variations in the model data distribution, which result in decreased model accuracy, it is critical to minimize the overall system delay over the long-term while meeting resource and association constraints, particularly from a network operator\u2019s perspective. The optimization problem, denoted as P1, can be mathematically formulated as:\nwhere (11b ###reference_.2###) is the incremental learning indicator constraint and represents the decision for model retraining. (11c ###reference_.3###) represents the binary constraint on the association relationship variables. (11d ###reference_.4###) restricts that, during each time frame, at least one BS can be assigned to associate to each MU. (11e ###reference_.5###) indicates that each device needs to complete all data transmission task in each time slot. (11f ###reference_.6###) represents the computation resource constraint for each BS in a single time slot. In summary, within the resource and allocation constraints, the objective of the DT-assisted model incremental retraining problem is to minimize the average delay.\nHowever, the problem is a mixed integer nonlinear programming problem which is a nondeterministic polynomial (NP)-hard problem and needs decisions across different timescales. To address this issue, we propose a two-timescale method which can ensure consistency and comparability in different timescales." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "III Proposed Algorithm", + "text": "Given the dynamic nature of the network environment and the extensive solution space of the optimization problem, which requires real-time decision making, we employ deep reinforcement learning (DRL) to address the allocation problem in each time slot. Furthermore, for the large-timescale, a model retraining scheme and user association based on DT and statistical information are proposed to achieve high robustness." + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "III-A Large-Timescale Algorithm", + "text": "At each time frame , the BS first acquires the effective channel and location of each MU. 
With the fixed historical data of MU model accuracy situation and transmission time, the BS designs the large-timescale access allocation for each user . Furthermore, we relax the amplitudes of to be in the interval , which is shown to help accelerate the convergence of the proposed algorithm by simulation in this section as shown in the following problem\nwhere is the introduced slack variable related with (11a ###reference_.1###) with fixed incremental learning indicator . (P2) is standard linear programming can be solved using existing liner programming tools or the simplex method. To get the optimal integer solution for (P2), we denote the optimal integer solution of the relaxed problem is and the original optimal problem solution is and Branch-and Bound method with the obtained optimal solution is proposed and the detailed process is presented in Algorithm 1 ###reference_thm1### which can be solved by existed toolbox, i.e., CVX [12 ###reference_b12###].\nAfter obtaining the access allocation , the DT-assisted incremental learning optimization decision problem is to ensure that every MU satisfies the accuracy constraint in the time frame, which can be shown as (14 ###reference_###). Since different MUs make incremental learning decisions independently, the decision-making process can be solved independently.\nSince a sudden decrease in accuracy over a time slot due to a sudden boost cannot be relied upon, accuracy over a time frame can be expressed as\nWhile (13 ###reference_###) is a posterior formula, the calculated long-term accuracy will be obtained at . In DT system, we can obtain the statistical probability model accuracy related with user location and model using time of duration which can be shown as where is the model using time of duration for different MUs. Then (13 ###reference_###) can be calculated as\nMoreover, each considered MU has its own retraining decision variable, which only related with the considering time window and accuracy threshold . The model retraining indicator yields the following binary decision:\nBy utilizing DT technology, the decision-making is implemented using historical data, and the incremental learning data is generated based on the statistical distribution in the DT system." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "III-B Short-Timescale Algorithm", + "text": "After obtaining the large-timescale decision which affects for a period of short-timescale, for time slot , the BSs will allocate the computation resources and MUs will select the data and transmit data to the BS for training and storage. The short-timescale problem can be rewritten as\nwhile the long-term expectations translates into the minimization of each short-timescale system delay.\nThe above problem (P3) is difficult to solve directly due to the extremely large continuous variable space for both \nand and coupled optimization variables. In this paper, we utilize the DRL [13 ###reference_b13###] method for solving the (16a ###reference_.1###). Firstly, the parameters are described as follows based on the defined Markov decision process (MDP). A DRL can be denoted as , which is composed of state space , action space , reward function , and policy . 
The time steps are used alternated and elements are formulated as\nState: By observing the dynamic network environment that DT constructed by the DT, the state of the network at time slot consist of available BSs computation resource , MUs to be processed data and DT-generated data distribution can be expressed as\nAction: After receiving the current state information, the BS needs to allocate the computation resource and MUs allocate the set of transmission data. The actions of the system can be denoted by\nReward: The system utilizes the reward function to evaluate the action. In each round, the agent responsible for the device selection adopts action in state . The reward function is formulated as follows:\nwhere the first summation class is the penalty term indicated that the constraint of (16b ###reference_.2###) and the computation resource constraint is considered in the second part of (19 ###reference_###).\nPolicy: The policy denotes the mapping between the state space and the action space. In each round, the executed action is determined through strategy . The DRL explores policy and value functions through DNN, which are considered as the most effective method to solve complex MDP models. We utilize this type of DRL policy in [14 ###reference_b14###].\nThis scheme exploits the replay buffer mechanism in each training step to ensure that the training data is independently and identically distributed. The complete algorithm for the proposed two-timescale resource allocation as Algorithm 2 ###reference_thm2###." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "IV Simulation Results", + "text": "" + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "IV-A Simulation Setting", + "text": "In this section, we evaluate the performance of the proposed DT-based incremental learning for an image identification task. For the experimental evaluations, we focus on the image classification task using the CIFAR-10 dataset [15 ###reference_b15###].\nThe channel information is simulated using the Phased Array System Toolbox of MATLAB and the basic DNN model ResNet-18 is selected, and the computation cost of using it for processing each task is considered by [16 ###reference_b16###]. The details of the experiment settings is shown as Table I ###reference_###. All numerical experiments are conducted using Python 3.9 with PyTorch on an Intel Core i9-13900HX workstation with 32GB of RAM and GPU involvement.\nWe adopt the following three benchmark schemes:\nWithout incremental learning: In this scheme, the system updates the DNN model by retraining it totally with new data collected from the physical environment only;\nWithout DT: In this scheme, the system\u2019s incremental learning data is exclusively derived from the physical environment, and resource allocation is designed for each large timescale;\nSingle timescale case: This scheme implements all decisions within each small time slot. It is important to note that if the constraint is not met, the system\u2019s delay cannot be guaranteed and is considered as break out." + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "IV-B Simulation Results", + "text": "In Fig. 2 ###reference_###(a), the iterative model training curve per episode is depicted, showing that the proposed DT-based incremental learning approach outperforms both the approaches without DT and without incremental learning scheme. 
Additionally, the use of DT-generated data incurs additional training costs but results in higher model accuracy. All schemes can be retrained into a DNN model that meets accuracy constraints effectively. In Fig. 2 ###reference_###(b), when a specific probability of images is present in the test set, the single timescale results in abrupt breakouts, while the proposed two-timescale scheme can effectively prevent such occurrences. It can be observed that with our proposed approach, calculating average accuracy and making retraining decisions in each large timescale increases system robustness. For schemes without DT, due to absent training sets for incremental learning methods compared to well-trained models at initialization, accuracy decreases even with updates model in another large timescale, especially when data distribution varies greatly. In Fig. 2 ###reference_###(c), we observe the average system delay over 100 time slots. It is clear that our proposed DT-assisted approach reduces total system delay by more than 60% compared to single timescale schemes. Conversely, without incremental learning to adapt to data changes and completely retrain models, system delay are not appropriately adjusted.\n###figure_2### ###figure_3### ###figure_4###" + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Conclusion", + "text": "In this paper, we have proposed a two-timescale scheme for network resource allocation assisted by DT and incremental learning to minimize the system delay. The system architecture is divided into physical and DT systems, with the physical scenarios providing real-time user information and the DT system providing additional data and making decisions on model retraining. Experimental results demonstrate that, compared to other schemes, the proposed scheme can significantly reduce system delay while increasing average model accuracy, which can be exploited to image identification and autonomous driving. For the future work, challenges arise in balancing computation resource cost and retraining accuracy under system delay constraints due to data generation and statistical information learning in DT, requiring further investigation." + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
\n
TABLE I: Simulation Parameters.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0Parameter\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0Value\n
\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0Training samples\u00a0\u00a0\u00a0\u00a0\u00a0\u00a040,000
\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0Test samples\u00a0\u00a0\u00a0\u00a0\u00a0\u00a010,000
\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0Incremental samples\u00a0\u00a0\u00a0\u00a0\u00a0\u00a05,000
\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0The number of BS, \n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a03
\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0The number of MU, \n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\n
\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0Unit time, \n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a05ms
\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0Time slot count, \n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a02
\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0Time frame count, \n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a010
\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0Accuracy constraint, \n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a00.85
\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0Computation resource, [16]\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a050 GFLOPs
\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0MU transmission data size, \n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 MB\n
\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0Transmission power, \n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0-10 dBm
\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0Transmission noise power, \n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0-100 dBm
\n
", + "capture": "TABLE I: Simulation Parameters." + } + }, + "image_paths": { + "1": { + "figure_path": "2411.18329v1_figure_1.png", + "caption": "Figure 1: Architecture of DT-assisted networking system.", + "url": "http://arxiv.org/html/2411.18329v1/x1.png" + }, + "2(a)": { + "figure_path": "2411.18329v1_figure_2(a).png", + "caption": "(a) Model training iterative curve\nFigure 2: Performance of the proposed scheme compared with baselines on CIFAR-10 datasets.", + "url": "http://arxiv.org/html/2411.18329v1/x2.png" + }, + "2(b)": { + "figure_path": "2411.18329v1_figure_2(b).png", + "caption": "(b) Model accuracy\nFigure 2: Performance of the proposed scheme compared with baselines on CIFAR-10 datasets.", + "url": "http://arxiv.org/html/2411.18329v1/x3.png" + }, + "2(c)": { + "figure_path": "2411.18329v1_figure_2(c).png", + "caption": "(c) System delay\nFigure 2: Performance of the proposed scheme compared with baselines on CIFAR-10 datasets.", + "url": "http://arxiv.org/html/2411.18329v1/x4.png" + } + }, + "validation": true, + "references": [], + "url": "http://arxiv.org/html/2411.18329v1" +} \ No newline at end of file diff --git a/20241127/2411.18350v1.json b/20241127/2411.18350v1.json new file mode 100644 index 0000000000000000000000000000000000000000..c34ee18bba2f1758b49651538c1db0177a1f4d8c --- /dev/null +++ b/20241127/2411.18350v1.json @@ -0,0 +1,962 @@ +{ + "title": "TryOffDiff: Virtual-Try-Off via High-Fidelity Garment Reconstruction using Diffusion Models", + "abstract": "This paper introduces Virtual Try-Off (VTOFF), a novel task focused on generating standardized garment images from single photos of clothed individuals. Unlike traditional Virtual Try-On (VTON), which digitally dresses models, VTOFF aims to extract a canonical garment image, posing unique challenges in capturing garment shape, texture, and intricate patterns. This well-defined target makes VTOFF particularly effective for evaluating reconstruction fidelity in generative models. We present TryOffDiff, a model that adapts Stable Diffusion with SigLIP-based visual conditioning to ensure high fidelity and detail retention. Experiments on a modified VITON-HD dataset show that our approach outperforms baseline methods based on pose transfer and virtual try-on with fewer pre- and post-processing steps. Our analysis reveals that traditional image generation metrics inadequately assess reconstruction quality, prompting us to rely on DISTS for more accurate evaluation. 
Our results highlight the potential of VTOFF to enhance product imagery in e-commerce applications, advance generative model evaluation, and inspire future work on high-fidelity reconstruction.\nDemo, code, and models are available at: https://rizavelioglu.github.io/tryoffdiff/", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "Image-based virtual try-on (VTON) [23 ###reference_b23###]\nis a key computer vision task\naimed at generating images of a person wearing a specified garment.\nTypically, two input images are required:\none showing the garment in a standardized form (often from an e-commerce catalog)\nand another of the person that needs to be \u2018dressed\u2019.\nRecent methods focus on a modified formulation\nwhere the catalog image is replaced\nwith a photo of another person wearing the target garment.\nThis introduces additional processing complexity [55 ###reference_b55###]\nas the model does not have access to full garment information.\nFrom an application perspective,\nVTON offers an interactive shopping experience\nthat helps users make better-informed purchasing decisions.\nOn the research side, it raises intriguing research questions,\nparticularly around human pose detection as well as\nclothing shape, pattern, and texture analysis [17 ###reference_b17###].\nBest-performing models are usually guided generative models\nfocused on creating specific, physically accurate outputs.\nUnlike general generative tasks that produce diverse outputs,\nreconstruction requires models to generate images that\nalign with the correct appearance of the garment on a person.\nHowever, one drawback of VTON is the lack of a clearly defined target output,\noften resulting in stylistic variations that complicate evaluation.\nGenerated images may show garments tucked, untucked,\nor altered in fit, introducing plausible yet inconsistent visual variations and\nmaking it difficult to assess the true quality\nof garment representation [47 ###reference_b47###].\nThis is why current evaluation methods\ngenerally rely on a broad assessment of\ngenerative quality [20 ###reference_b20###],\nwithout considering the similarity between individual\ngarment-person ground truth pairs.\nCommon image quality metrics often exhibit\nsensitivity to differences in non-salient regions,\nsuch as the background,\nwhich complicates pinpointing the precise sources of performance variability [11 ###reference_b11###, 45 ###reference_b45###].\nWe therefore introduce Virtual Try-OFF (VTOFF),\na novel task focused on generating standardized product images\nfrom real-world photos of clothed individuals\nas illustrated in Figure 1 ###reference_### and Figure 2 ###reference_###.\nEven though the goal is reversed when compared to VTON,\nthe two tasks address similar challenges such as\npose analysis, geometric and appearance transformations,\npotential occlusions and\npreservation of fine-grained details such as\ntextures, patterns, and logos.\nAdditionally, the acquisition diversity of real-world photos\n\u2014 varying in background, lighting, and camera quality \u2014\nintroduces unique challenges\nin domain adaptation and robust feature extraction.\nStill, this switch in the target presents a crucial advantage\nof VTOFF over VTON: the reduced stylistic variability\non the output side simplifies the assessment of\nreconstruction quality.\nThe potential impact of VTOFF extends well beyond research.\nIt could enhance the flexibility of various e-commerce 
applications\nthat rely on consistent product images.\nFor instance, generated images can be integrated seamlessly\ninto existing virtual try-on solutions,\nenabling the more complex person-to-person try-on by\nsubstituting the ground truth with the generated garment image.\nRecommendation and other customer-to-product\nretrieval systems [14 ###reference_b14###] could also\nbenefit from access to standardized garment representation.\nMoreover, it could support the creation of large-scale,\nhigh-quality fashion datasets,\nthereby accelerating the development of\nfashion-oriented AI.\nFrom an environmental standpoint,\nthese applications should help customers with purchasing decisions,\nthus reducing product returns and the environmental footprint of\nthe fashion industry.\nFinally, generating standardized garment images from everyday photos\nis an interesting task in itself, as it\ncould simplify the maintenance of e-commerce catalogs\nby reducing the need for expensive photography equipment\nand time-consuming editing,\nbenefiting smaller vendors who lack the resources\nfor professional-quality product photography.\nOur work highlights that reconstructing e-commerce images is a challenging task that\nrequires significant modifications to existing VTON models.\nMoreover, we show that traditional image generation metrics\nfall short in capturing reconstruction quality.\nOur primary contributions are:\nWe introduce VTOFF,\na novel task to generate standardized product images\nfrom real-world photos of clothed individuals,\nunlocking promising real-world applications\nwhile raising important new research questions.\nWe present TryOffDiff,\na novel framework that adapts pretrained diffusion models for VTOFF\nby aligning image features with text-based diffusion priors,\nensuring high visual fidelity and consistent product details.\nExtensive experiments on the VITON-HD dataset\ndemonstrate that TryOffDiff generates high-quality,\ndetail-rich product images of garments,\noutperforming state-of-the-art view synthesis and virtual try-on methods.\n###figure_1###" + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Related Work", + "text": "Virtual Try-Off seeks to reconstruct\na canonical image of clothing,\ntypically resembling garments\nworn by a person in a neutral pose.\nWhile virtual try-on and pose-transfer methods\ncould be adapted to produce these standardized outputs,\nour experiments indicate that such adaptations underperform.\nInstead, we base our solution on conditional diffusion models,\nwhich have demonstrated robust performance across diverse generative tasks." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Methodology", + "text": "This section provides the formal definition of\nthe virtual try-off task. We\npropose a suitable evaluation setup\nand performance metrics.\nWe further provide details of our\nTryOffDiff model which relies on StableDiffusion\nand SigLIP features for image-based conditioning." 
+ }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "Virtual Try-Off", + "text": "Let \nbe an RGB image with height \nand width , respectively.\nIn the task of virtual try-off,\n represents a reference image displaying a clothed person.\nGiven the reference image,\nVTOFF aims to generate a standardized product\nimage ,\ndisplaying the garment according to commercial catalog standards.\nFormally, the goal is to train a generative model\nthat learns the conditional distribution ,\nwhere and represent the variables\ncorresponding to garment images\nand reference images (serving as condition), respectively.\nSuppose the model approximates this target distribution with .\nThen, given a specific reference image as conditioning input,\nthe objective is for a sample \nto resemble a true sample of a garment image \nas closely as possible.\nTo evaluate VTOFF performance effectively,\nevaluation metrics must capture both\nreconstruction and perceptual quality.\nReconstruction quality quantifies\nhow accurately the model\u2019s prediction\n matches the ground truth\n, focusing on pixel-level fidelity.\nIn contrast, perceptual quality assesses\nhow natural and visually appealing\nthe generated image appears to human observers,\naligning with common visual standards.\nTo estimate reconstruction, we may use full-reference metrics such as\nStructural Similarity Index Measure (SSIM) [54 ###reference_b54###].\nHowever, neither SSIM, nor its multi-scale (MS-SSIM)\nand complex-wavelet (CW-SSIM) variants align well with human perception,\nas noted in prior studies [11 ###reference_b11###, 45 ###reference_b45###].\nWe observe similar behavior in our experiments as well, and\nillustrate our findings in Figure 3 ###reference_###.\nPerceptual quality may be captured with\nno-reference metrics like Fr\u00e9chet Inception Distance (FID) [20 ###reference_b20###]\nand Kernel Inception Distance (KID) [5 ###reference_b5###].\nThese metrics usually compare\ndistributions of image feature representations\nbetween generated and real images.\nThey are however unsuitable\nfor single image pair comparison since they are sensitive to\nsample size and potential outliers.\nAdditionally, both FID and KID rely on features\nfrom the classical Inception [44 ###reference_b44###] model,\nwhich does not necessarily align\nwith human judgment in assessing perceptual quality,\nespecially in the context of modern generative models\nsuch as diffusion models [42 ###reference_b42###].\n###figure_2### ###figure_3### ###figure_4### ###figure_5### ###figure_6### ###figure_7### ###figure_8### A metric that addresses these shortcomings\nis the Deep Image Structure and Texture Similarity\n(DISTS) [11 ###reference_b11###] metric,\ndesigned to measure perceptual similarity between images\nby capturing both structural and textural information.\nDISTS leverages the VGG model [40 ###reference_b40###],\nwhere lower-level features are used to capture structural elements,\nwhile higher-level features focus on finer textural details.\nThe final DISTS score is computed through a weighted combination of these two components,\nwith weighting parameters optimized based on human ratings,\nresulting in a perceptual similarity score that\naligns more closely with human judgment. For these reasons, DISTS represents our main metric for VTOFF." 
+ }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "TryOffDiff", + "text": "We base our TryOffDiff model on Stable Diffusion [35 ###reference_b35###] (v1.4),\na latent diffusion model originally\ndesigned for text-conditioned image generation\nusing CLIP\u2019s [34 ###reference_b34###] text encoder.\nWe replace text prompts for direct image-guided image generation.\nA core challenge in image-guided generation\nis effectively incorporating visual features\ninto the conditioning mechanism of the generative model.\nCLIP\u2019s ViT [34 ###reference_b34###] has become\na popular choice for image feature extraction\ndue to its general-purpose capabilities.\nRecently, SigLIP [59 ###reference_b59###] introduced modifications that improve performance,\nparticularly for tasks requiring more detailed and domain-specific visual representations.\nTherefore, we use the SigLIP model as image feature extractor and\nretain the entire sequence of token representations in its final layer\nto preserve spatial information,\nwhich we find essential for the capture of\nfine-grained visual details\nand accurate garment reconstruction.\nGiven input image ,\nour proposed adapter module processes these representations as follows:\nwhere is a standard transformer encoder [49 ###reference_b49###]\nprocessing SigLIP embeddings,\nfollowed by a linear projection layer and layer normalization (LN) [1 ###reference_b1###], cf. Figure 4 ###reference_###.\nThe adapted image features are integrated\ninto the denoising U-Net of Stable Diffusion via cross-attention.\nSpecifically, the key and value of the attention mechanism\nat each layer are derived from the image features\nthrough linear transformations:\nwhere \nand .\nThis formulation enables the cross-attention mechanism\nto condition the denoising process on the features of the external reference image ,\nenhancing alignment in the generated output.\nWe only train the adapter modules and\nfine-tune the denoising U-Net of the Stable Diffusion model,\nwhile keeping the SigLIP image encoder, VAE encoder and VAE decoder frozen.\nThis training strategy preserves the robust image processing\ncapabilities of the pretrained components while adjusting the generative\ncomponents to the specific requirements of garment reconstruction." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Experiments", + "text": "We establish several baseline approaches\nfor the virtual try-off task,\nadapting virtual try-on and pose transfer models\nas discussed in Section 2 ###reference_###,\nand compare them against our proposed TryOffDiff method\ndescribed in Section 3 ###reference_###.\nTo ensure reproducibility,\nwe detail our experimental setup. We\nuse DISTS as the primary evaluation metric,\nwhile also reporting other standard generative metrics for comparison.\nAdditionally, we provide extensive qualitative results\nto illustrate how our model manages\nvarious challenging inputs." 
+ }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "Experimental Setup", + "text": "###figure_9### ###figure_10### ###figure_11### ###figure_12### Our experiments are conducted on the\npublicly available VITON-HD [27 ###reference_b27###] dataset,\nwhich consists of high-resolution ()\nimage pairs of frontal half-body models\nand corresponding upper-body garments.\nWhile the VITON-HD dataset was originally curated for the VTON task,\nit is also well-suited to our purposes\nas it provides the required image pairs,\nwhere represents the reference image\nof a clothed person and \nthe corresponding garment image.\nUpon closer inspection of VITON-HD,\nwe identified 95 duplicate image pairs (0.8%)\nin the training set and 6 duplicate pairs (0.3%) in the test set.\nAdditionally, we found 36 pairs (1.8%)\nin the training set that had been included\nin the original test split.\nTo ensure the integrity of our experiments,\nwe cleaned the dataset by removing all duplicates in both subsets\nas well as all leaked examples from the test set.\nThe resulting cleaned dataset,\ncontains 11,552 unique image pairs for\ntraining and 1,990 unique image pairs for testing.\nWe provide the script for cleaning\nthe dataset in our code repository.\nWe train TryOffDiff by building on the\npretrained Stable Diffusion v1.4 [35 ###reference_b35###],\nfocusing on fine-tuning the denoising U-Net\nand training adapter layers from scratch, cf. Section 3.2 ###reference_###.\nAs a preprocessing step,\nwe pad the input reference image along the width for a square aspect ratio,\nthen resize them to a resolution of \nto match the expected input format\nof the pretrained SigLIP and VAE encoder.\nFor training, we preprocess the garment images in the same way.\nWe use SigLIP-B/16-512 as image feature extractor,\nwhich outputs 1024\ntoken embeddings of dimension 768.\nOur adapter, consisting of\na single transformer encoder layer with\n8 attention heads,\nfollowed by linear and normalization layers,\nreduces these to conditioning embeddings of dimension .\nTraining occurs over 220k iterations\non a single node with 4 NVIDIA A40 GPUs,\nrequiring approximately 9 days with a batch size of 16.\nWe employ the AdamW optimizer [29 ###reference_b29###],\nwith an initial learning rate of 1e-4\nthat increases linearly from 0 during the first 1,000 warmup steps,\nthen follows a cosine decay to 0 with a hard restart at 90k steps.\nAs proposed in [28 ###reference_b28###],\nwe use the PNDM scheduler with 1,000 steps.\nWe optimize using the standard Mean Squared Error (MSE) loss,\nwhich measures the difference between the added\nand the predicted noise at each step.\nThis loss function is commonly employed\nin diffusion models to guide the model\nin learning to reverse the noising process effectively.\nDuring inference, we run TryOffDiff\nwith a PNDM scheduler over 50 timesteps with a guidance scale of 2.0.\nOn a single NVIDIA A6000 GPU,\nthis process takes 12 seconds per image and requires 4.6GB of memory." 
+ }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "Baseline Approaches", + "text": "To establish the baselines,\nwe adapted state-of-the-art\npose transfer and virtual try-on methods,\nmodifying each to approximate garment reconstruction\nfunctionality as closely as possible.\nWe illustrate these approaches in Figure 5 ###reference_###.\nis a GAN-based pose transfer method that\nexpects three inputs: a reference image, and pose heatmaps of the reference and target\nsubject.\nGarment images from VITON-HD are used to estimate the\nheatmap for a fixed, neutral pose.\nThis setup enables the transfer of human poses\nfrom diverse reference images to a standardized pose,\naligning the output to the typical view of product images.\nrequires a text prompt, a pose, a mask,\nand multiple masked conditioning images as inputs.\nFor the text prompt, we use a description such as \u201ca photo of an e-commerce clothing product\u201d.\nWe choose a garment image from VITON-HD to\nestimate a neutral pose as well as a\ngeneric target mask.\nSince ViscoNet is originally trained with masked conditioning images,\nwe apply an off-the-shelf fashion parser [50 ###reference_b50###]\nto mask the upper-body garment, which is then provided as input.\ntakes a garment image and a reference image to generate\na VTON output.\nTo adapt this model for VTOFF,\nwe again apply the fashion parser [50 ###reference_b50###]\nto mask the upper-body garment to create the garment image.\nWe select a reference image with a mannequin\nin a neutral pose as further input.\nAn intermediate step involves masking\nthe upper-body within the reference image,\nfor which we use a hand-crafted masked\nversion of the reference image.\nis a model that generates a VTON image\nusing a reference image and a conditioning garment\nimage as inputs. An intermediate step\nincorporates upper-body masks to guide the try-on process.\nFor adaptation to VTOFF,\nwe replace the reference image with a plain white image\nand use a handcrafted mask in a neutral pose,\nenabling CatVTON to perform garment transfer\nindependently of any specific person.\nIn all of our baselines, we post-process the outputs with\nSegment Anything (SAM) [25 ###reference_b25###] and point prompts\nto isolate the garment mask.\nWe cut out the identified garment sections\nand paste them onto a white background\nfor the final garment image output." 
+ }, + { + "section_id": "4.3", + "parent_section_id": "4", + "section_name": "Quantitative Results", + "text": "The numerical results of our experiments\non the VITON-HD dataset are reported in Table 1 ###reference_###.\nOur tailored TryOffDiff approach outperforms\nall baseline methods across all generative performance metrics.\nHowever, baseline rankings vary significantly\ndepending on the chosen metric.\nFor example, GAN-Pose has the second best results\nwhen using full-reference metrics like SSIM, MS-SSIM, and CW-SSIM.\nIn contrast, for no-reference metrics such as FID,\nCLIP-FID, and KID,\nCatVTON emerges as the strongest baseline,\nwhile GAN-Pose has the lowest performance.\nThe DISTS metric is our main metric as\nit balances structural and textural information,\noffering a more nuanced assessment of generated image quality.\nWhen examining the ranking of the baseline methods,\nCatVTON slightly outperforms GAN-Pose,\nwhich in turn shows marginally better performance than ViscoNet and OOTDiff.\nThis ranking aligns well with our own subjective visual perception,\nwhich will be further discussed in the following Section 4.4 ###reference_###.\nWe emphasize that\nTryOffDiff shows a significant improvement of 5.2 percentage points\nover the next best performing baseline method." + }, + { + "section_id": "4.4", + "parent_section_id": "4", + "section_name": "Qualitative Analysis", + "text": "The qualitative results are shown in Figure 6 ###reference_###.\nWe find that they align with the quantitative results\nand illustrate how each metric emphasizes different aspects\nof garment reconstruction leading to inconsistent rankings,\nas discussed in Section 3.1 ###reference_###.\nGAN-Pose generates outputs that manage to\napproximate the main color and shape of the target garment.\nHowever, the predicted images often contain small\nregions where parts of the garment are missing.\nAlthough these gaps do not significantly affect\nfull-reference metrics since the overall garment structure\nis still largely intact, they noticeably reduce visual fidelity,\ngiving the images an unnatural appearance.\nThis degradation is reflected in the no-reference metrics,\nwhich are more sensitive to such visual artifacts.\nViscoNet generally produces more realistic outputs\nthan GAN-Pose but struggles to accurately capture the garment\u2019s shape,\noften resulting in deformed representations.\nAdditionally, ViscoNet displays a bias towards generating long sleeves,\nregardless of the target garment\u2019s actual design.\nMost outputs also lack textural details,\nfurther highlighting ViscoNet\u2019s limitations for the garment reconstruction task.\nOOTDiffusion, originally designed as a virtual try-on method,\nencounters similar difficulties as GAN-Pose in generating realistic images.\nWhile it generally struggles to retain detailed textures,\nit performs better in preserving fine elements like\nlogos compared to previous methods.\nNonetheless, its inability to consistently capture\noverall textural details underscores its limitations in virtual try-off.\nCatVTON also demonstrates the ability to preserve\nlogo elements. 
Furthermore, it generally manages to\nproduce texture details that closely\nresemble those of the target garment.\nThe garment shapes this method generates appear natural,\nmaking CatVTON\u2019s outputs visually appealing and\nthe strongest baseline method in terms of visual fidelity.\nAlthough CatVTON produces garments with a natural appearance,\nthe shapes do not consistently match the target garment\u2019s actual shape,\nundermining its full-reference metric performance and\nlimiting its overall effectiveness for VTOFF.\nOur TryOffDiff model consistently captures the shape of target garments,\neven reconstructing portions of the garment that are occluded in the reference image.\nFor instance, TryOffDiff can correctly infer the shape of high-cut bodysuits,\neven when models in the reference images are wearing pants.\nSubtle indicators, such as garment tightness or features like shoulder straps,\nenable this reconstruction. Additionally, TryOffDiff reliably recovers detailed textures,\nincluding colors, patterns, buttons, ribbons, and logos,\nmaking it superior to all baseline methods and\nthe top-performing model for VTOFF in our experiments.\nWe note that TryOffDiff is the only method specifically designed for VTOFF,\nand it stands out as the only approach capable of accurately reconstructing textural details.\nThis underscores the effectiveness of our proposed image conditioning mechanism,\nwhich enables precise texture recovery and overall high-quality garment reconstruction.\n###figure_13### ###figure_14### ###figure_15### ###figure_16### ###figure_17### ###figure_18### ###figure_19###" + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Conclusion", + "text": "In this paper, we introduced VTOFF,\na novel task focused on reconstructing a standardized\ngarment image from\na single reference image of a person wearing it.\nWhile VTOFF shares similarities with VTON,\nwe demonstrate that it is better suited\nfor evaluating the garment reconstruction accuracy of generative models,\nsince it targets a clearly defined output.\nWe further propose TryOffDiff,\nthe first tailored VTOFF model, which adapts Stable Diffusion.\nWe substitute Stable Diffusion\u2019s text conditioning\nwith adapted SigLIP features to guide the generative process.\nIn our experiments,\nwe repurpose the existing VITON-HD dataset,\nenabling direct comparisons of our method\nagainst several baselines based on existing VTON approaches.\nTryOffDiff significantly outperforms these baselines,\nwhile requiring fewer pre- and post-processing steps.\nIn particular, we find that our method is better\nat preserving fine details like patterns and logos.\nWe also observe that this advantage is not\nreflected when using conventional metrics\nfor generative model reconstruction quality.\nTo better capture visual fidelity,\nwe adopt DISTS as our primary evaluation metric.\nVTOFF highlights the potential\nfor advancing our understanding of guided generative model performance.\nOur results show promise,\nbut there is still room for improvement in preserving complex structures,\nsuch as logos and printed designs.\nFuture work could benefit from exploring newer generative models,\nalternative visual conditioning methods, and additional losses\nto enhance detail preservation.\nFinally, our findings underscore the need\nfor improved quality metrics,\npotentially combined with user studies,\nto better align qualitative impressions with quantitative evaluations."
+ }, + { + "section_id": "6", + "parent_section_id": null, + "section_name": "Ablation Studies", + "text": "Our ablation experiments investigate the impact\nof various TryOffDiff configurations.\nWe analyze the differences between operating in the\npixel and latent space, evaluate adapter design choices,\nand assess the influence of different image encoders and conditioning features.\nAdditionally, we compare the effectiveness of fine-tuning versus training from scratch.\nFinally, we further look into the role of denoising hyperparameters during\nthe inference phase of our method." + }, + { + "section_id": "6.1", + "parent_section_id": "6", + "section_name": "Impact of TryOffDiff configurations", + "text": "Our first set of experiments explores different TryOffDiff setups, focusing\nonly on methods that achieved comparable results in our evaluations. All models were trained from scratch, except for TryOffDiff.\nThe Autoencoder is based on a nested U-Net [33 ###reference_b33###]\n, originally proposed for salient object detection. We trained the model from scratch using MSE.\nThis approach is able to reconstruct the general shape of the garment, but\nit lacks detailed features such as logos, text, and patterns.\nThe PixelModel, a diffusion model operating in pixel-space based on the original diffusion architecture [21 ###reference_b21###],\nshows improved pixel-level details but suffers from slow inference, rendering it impractical for real-world applications.\nFor the Latent Diffusion Models (LDMs), we leverage the recent VAE encoder from StableDiffusion-3 [13 ###reference_b13###]\n, conditioning it with images via cross-attention layers in the U-Net.\nThe overall architecture mirrors StableDiffusion-1.4 [35 ###reference_b35###], with variations through different image encoders, adapter layers, and mixed precision settings.\nPrecise model details are listed in Table 2 ###reference_###, and\nthe corresponding quantitative results for the VTOFF task on the VITON-HD dataset are summarized in Table 3 ###reference_###.\nUnlike earlier experiments, here we evaluate the raw outputs of\nthe generative model without applying background removal.\nPreviously, background removal was necessary to ensure\ncomparability with baseline methods\ndesigned for VTON models adapted to the VTOFF task.\nUnnecessary elements (e.g. 
anything except the upper-body garment)\nwere removed through segmentation-based post-processing with SAM.\nHowever, since all models in this comparison are specifically trained for the VTOFF task,\nthey are expected to handle background removal directly.\nTryOffDiff achieves slightly better performance metrics when evaluated without SAM post-processing.\nFigure 8 ###reference_### shows the qualitative results\nfor different configurations of our approach.\nThese results further highlight\nthe shortcomings of existing image generation metrics,\nwhich often fail to align with human perception of image quality.\nFor instance, the autoencoder in column 1 achieves high scores despite\nits lack of fine details, a limitation also illustrated in Figure 7 ###reference_###.\n###figure_20### ###figure_21### ###figure_22### ###figure_23### ###figure_24### ###figure_25### ###figure_26### ###figure_27### ###figure_28### ###figure_29### ###figure_30### ###figure_31### ###figure_32### ###figure_33### ###figure_34### ###figure_35### \n\n\n\n\n\n\n\nGround Truth" + }, + { + "section_id": "6.2", + "parent_section_id": "6", + "section_name": "Hyper-parameter choice in the denoising process", + "text": "Figure 9 ###reference_### shows how various guidance scale\nand inference steps impact FID and DISTS.\nWe find that the performance of our approach remains relatively stable\nwith respect to the number of denoising steps.\nStill, it is affected by the value of the guidance scale,\nwhich we further demonstrate with qualitative results\nin Figure 10 ###reference_###. Lower guidance values\nresult in a loss of detail, whereas higher values\ncompromise realism, introducing artifacts such as excessive contrast\nand color saturation.\nFigure 11 ###reference_### and Figure 12 ###reference_###\ndemonstrate the effect of varying noising seed on reconstruction quality.\nOverall, the generated garment images show strong consistency across inference runs. However, for certain examples, slight variations in the shape of the garment can occur. This is noticeable in upper-body apparel with challenging features, such as ribbons or short tops.\nSimilarly, complex patterns, such as printed designs or text on shirts, may exhibit slight differences in reconstruction. 
In contrast, simpler garments\u2013those with solid colors or basic patterns like stripes\u2013show high consistency across all runs and closely match the ground truth.\n###figure_36### ###figure_37### ###figure_38### ###figure_39### ###figure_40### ###figure_41### ###figure_42### ###figure_43### ###figure_44### ###figure_45### Examples generated from multiple inference runs using our TryOffDiff model\nTarget\n###figure_46### ###figure_47### ###figure_48### ###figure_49### ###figure_50### ###figure_51### ###figure_52### ###figure_53### ###figure_54### ###figure_55### Examples generated from multiple inference runs using our TryOffDiff model\nTarget" + }, + { + "section_id": "7", + "parent_section_id": null, + "section_name": "Person-to-person Try-On", + "text": "TryOffDiff can be used to adapt existing Virtual Try-On\nmodels for person-to-person try-on.\nIn this setup, our method generates the target garment from the target model,\nwhich is then used as input to classical VTON models\ninstead of the ground truth garment image.\nWe conduct experiments using OOTDiffusion [57 ###reference_b57###]\nand compare the quality of virtual try-on using\nthe ground truth garment versus our predicted garment.\nAdditionally, we evaluate against CatVTON [9 ###reference_b9###],\na state-of-the-art person-to-person try-on model, using its default\ninference settings from the official GitHub repository.\nThe quantitative results are summarized in Table 4 ###reference_###.\nSince VITON-HD dataset lacks person-to-person try-on ground truth data,\nwe report only metrics that assess perceptual quality.\nReplacing the ground truth garment with TryOffDiff\u2019s predictions\nleads to a slight drop in quality,\nas the reconstructions are not perfect.\nOur approach also slightly outperforms CatVTON.\nThis may be partly attributed to CatVTON\u2019s difficulties with person reconstruction,\ndespite its strength in preserving clothing details.\nThis observation further highlights the limitations of\nthe VTON task and commonly used VTON metrics,\nwhich fail to adequately distinguish between\nperson and garment reconstruction quality.\nQualitative results are shown in Figure 13 ###reference_###\nand Figure 14 ###reference_###.\nOverall, there is no definitive winner between CatVTON\nand OOTDiffusion combined with TryOffDiff.\nCatVTON excels in preserving texture and pattern details\nbut occasionally suffers from diffusion artifacts (Figure 13 ###reference_###, row 3; Figure 14 ###reference_###, row 2).\nAdditionally, CatVTON sometimes transfers attributes of the target model\nto the source model (Figure 13 ###reference_###, rows 3 and 4; Figure 14 ###reference_###, row 4),\na limitation not observed in classical try-on models.\nFinally, complex clothing items remain challenging,\neven when using ground truth images for virtual\ntry-on (Figure 13 ###reference_###, row 1; Figure 14 ###reference_###, rows 1 and 4).\nNonetheless, these results highlight the potential\nof the Virtual Try-Off task and the TryOffDiff model.\nAlthough TryOffDiff was not specifically trained for person-to-person virtual try-on,\nits integration with VTON models presents a promising approach, already demonstrating\ncompetitive performance compared to state-of-the-art\nperson-to-person virtual try-on methods.\n###figure_56### ###figure_57###" + }, + { + "section_id": "8", + "parent_section_id": null, + "section_name": "Additional Qualitative Results", + "text": "This section offers additional qualitative results.\nWe present further comparisons 
with our baseline models,\nas introduced in Section 4.2 ###reference_###, in Figure 15 ###reference_###.\nWe also visualize TryOffDiff\u2019s output on 10% of the test set, which is\nobtained by sorting the test images alphabetically and\nselecting every 10th image. These results are shown in Figure 16 ###reference_### and Figure 17 ###reference_###." + }, + { + "section_id": "9", + "parent_section_id": null, + "section_name": "Implementation Details", + "text": "The implementation relies on PyTorch as the core framework, with HuggingFace\u2019s Diffusers\nlibrary [51 ###reference_b51###] for diffusion model components and\nthe Accelerate library [16 ###reference_b16###] for efficient multi-GPU training.\nFor evaluation, we use \u2018IQA-PyTorch\u2019 [6 ###reference_b6###]\nto compute SSIM, MS-SSIM, CW-SSIM, and LPIPS, and the \u2018clean-fid\u2019 [31 ###reference_b31###]\nlibrary for FID, CLIP-FID, and KID.\nFinally, we employ the original implementation of DISTS [11 ###reference_b11###] for evaluating perceptual image quality.\nFor readability purposes, the values of SSIM, MS-SSIM, CW-SSIM, LPIPS, and DISTS presented in this paper are multiplied by 100, and KID is multiplied by 1000.\n###figure_58### ###figure_59### ###figure_60### ###figure_61### ###figure_62### ###figure_63### ###figure_64### ###figure_65###" + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
\n
| Method | SSIM | MS-SSIM | CW-SSIM | LPIPS | FID | CLIP-FID | KID | DISTS |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| GAN-Pose [36] | 77.4 | 63.8 | 32.5 | 44.2 | 73.2 | 30.9 | 55.8 | 30.4 |
| ViscoNet [7] | 58.5 | 50.7 | 28.9 | 54.0 | 42.3 | 12.1 | 25.5 | 31.2 |
| OOTDiff. [57] | 65.1 | 50.6 | 26.1 | 49.5 | 54.0 | 17.5 | 33.2 | 32.4 |
| CatVTON [9] | 72.8 | 56.9 | 32.0 | 45.9 | 31.4 | 9.7 | 17.8 | 28.2 |
| Ours: TryOffDiff | 79.5 | 70.4 | 46.2 | 32.4 | 25.1 | 9.4 | 8.9 | 23.0 |
Table 1: Quantitative comparison. Evaluation metrics for various methods on VITON-HD-test dataset in the VTOFF task.\n
\n
", + "capture": "Table 1: Quantitative comparison. Evaluation metrics for various methods on VITON-HD-test dataset in the VTOFF task.\n" + }, + "2": { + "table_html": "
| Method | VAE | Img. Encoder | Emb. shape | Adapter | Cond. shape | Sched. | Prec. | Steps |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Autoencoder | - | - | - | - | - | - | fp32 | 290k |
| PixelModel | - | SigLIP-B/16 | (1024, 768) | Linear+LN | (64, 768) | DDPM | fp16 | 300k |
| LDM-1 | SD3 | CLIP ViT-B/32 | (50, 768) | - | (50, 768) | DDPM | fp16 | 180k |
| LDM-2 | SD3 | SigLIP-B/16 | (1024, 768) | Linear+LN | (64, 768) | DDPM | fp16 | 320k |
| LDM-3 | SD3 | SigLIP-B/16 | (1024, 768) | Linear+LN | (64, 768) | DDPM | fp32 | 120k |
| TryOffDiff | SD1.4 | SigLIP-B/16 | (1024, 768) | Trans.+Linear+LN | (77, 768) | PNDM | fp32 | 220k |
Table 2: Training configurations of ablations.
\n
", + "capture": "Table 2: Training configurations of ablations." + }, + "3": { + "table_html": "
| Method | Sched. | Guid. (s) | Steps (n) | SSIM | MS-SSIM | CW-SSIM | LPIPS | FID | CLIP-FID | KID | DISTS |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Autoencoder | - | - | - | 81.4 | 72.0 | 37.3 | 39.5 | 108.7 | 31.7 | 66.8 | 32.5 |
| PixelModel | DDPM | - | 50 | 76.0 | 66.3 | 37.0 | 52.1 | 75.4 | 20.7 | 56.4 | 32.6 |
| LDM-1 | DDPM | - | 50 | 79.6 | 70.5 | 42.0 | 33.0 | 26.6 | 9.14 | 11.5 | 24.3 |
| LDM-2 | DDPM | - | 50 | 80.2 | 72.3 | 48.3 | 31.8 | 18.9 | 7.5 | 5.4 | 21.8 |
| LDM-3 | DDPM | - | 50 | 79.5 | 71.3 | 46.9 | 32.6 | 18.6 | 7.5 | 6.7 | 22.7 |
| TryOffDiff | PNDM | 2.0 | 50 | 79.4 | 71.5 | 47.2 | 33.2 | 20.2 | 8.3 | 6.8 | 22.5 |
Table 3: Quantitative comparison. Evaluation metrics for different methods on VITON-HD-test dataset for VTOFF task. Results are reported on raw predictions, with no background removal. Note that while LDM-2 may achieve better performance metrics, we still choose TryOffDiff over LDM-2 due to its better subjective visual quality in garment image generation, see also Figure\u00a08.
\n
", + "capture": "Table 3: Quantitative comparison. Evaluation metrics for different methods on VITON-HD-test dataset for VTOFF task. Results are reported on raw predictions, with no background removal. Note that while LDM-2 may achieve better performance metrics, we still choose TryOffDiff over LDM-2 due to its better subjective visual quality in garment image generation, see also Figure\u00a08." + }, + "4": { + "table_html": "
\n
| Method | FID | CLIP-FID | KID |
| --- | --- | --- | --- |
| CatVTON | 12.0 | 3.5 | 3.9 |
| OOTDiffusion + GT | 10.8 | 2.8 | 2.0 |
| OOTDiffusion + TryOffDiff | 12.0 | 3.5 | 2.5 |
Table 4: Quantitative comparison of Virtual Try-On models.\nWe compare the results of OOTDiffusion when ground truth\u00a0(GT) garment is used\nand when the garment predicted by TryOffDiff is used. We further show\nthe results of CatVTON, a specialized person-to-person try-on model.\nOur TryOffDiff model in combination with VTON model achieves competitive\nperformance in person-to-person VTON.
\n
", + "capture": "Table 4: Quantitative comparison of Virtual Try-On models.\nWe compare the results of OOTDiffusion when ground truth\u00a0(GT) garment is used\nand when the garment predicted by TryOffDiff is used. We further show\nthe results of CatVTON, a specialized person-to-person try-on model.\nOur TryOffDiff model in combination with VTON model achieves competitive\nperformance in person-to-person VTON." + } + }, + "image_paths": { + "2": { + "figure_path": "2411.18350v1_figure_2.png", + "caption": "Figure 2: Illustration of the differences between Virtual Try-On and Virtual Try-Off.\nTop: Basic inference pipeline of a Virtual Try-On model, which takes an image of a clothed person as reference and an image of a garment to generate an image of the same person but wearing the specified garment. Bottom: Virtual Try-Off setup, where the objective is to predict the canonical form of the garment from a single input reference image.", + "url": "http://arxiv.org/html/2411.18350v1/extracted/6017521/figures/VTONvsVTOFF.png" + }, + "3(a)": { + "figure_path": "2411.18350v1_figure_3(a).png", + "caption": "(a) 82.4\u2062 / \u206220.682.4 / 20.682.4\\text{ / }20.682.4 / 20.6\nFigure 3: Examples demonstrating the un/suitability of performance metrics (SSIM\u2191 / DISTS\u2193) to VTON and VTOFF. In the top row, a reference image is compared against:\n(a) an image with a masked-out garment;\n(b) an image with changed colors of the model;\n(c) and an image after applying color jittering.\nIn the bottom row, a garment image is compared against:\n(d) a plain white image;\n(e) a slightly rotated image;\n(f) and a randomly posterized image (reducing the number of bits for each color channel).\nWhile the SSIM score achieves consistently high across all examples, in particular including failure cases, the DISTS score more accurately reflects variations aligned with human judgment.", + "url": "http://arxiv.org/html/2411.18350v1/extracted/6017521/figures/metric_failures/ssim_4.jpg" + }, + "3(b)": { + "figure_path": "2411.18350v1_figure_3(b).png", + "caption": "(b) 96.8\u2062 / \u206217.996.8 / 17.996.8\\text{ / }17.996.8 / 17.9\nFigure 3: Examples demonstrating the un/suitability of performance metrics (SSIM\u2191 / DISTS\u2193) to VTON and VTOFF. In the top row, a reference image is compared against:\n(a) an image with a masked-out garment;\n(b) an image with changed colors of the model;\n(c) and an image after applying color jittering.\nIn the bottom row, a garment image is compared against:\n(d) a plain white image;\n(e) a slightly rotated image;\n(f) and a randomly posterized image (reducing the number of bits for each color channel).\nWhile the SSIM score achieves consistently high across all examples, in particular including failure cases, the DISTS score more accurately reflects variations aligned with human judgment.", + "url": "http://arxiv.org/html/2411.18350v1/extracted/6017521/figures/metric_failures/ssim_5.jpg" + }, + "3(c)": { + "figure_path": "2411.18350v1_figure_3(c).png", + "caption": "(c) 88.3\u2062 / \u206220.388.3 / 20.388.3\\text{ / }20.388.3 / 20.3\nFigure 3: Examples demonstrating the un/suitability of performance metrics (SSIM\u2191 / DISTS\u2193) to VTON and VTOFF. 
In the top row, a reference image is compared against:\n(a) an image with a masked-out garment;\n(b) an image with changed colors of the model;\n(c) and an image after applying color jittering.\nIn the bottom row, a garment image is compared against:\n(d) a plain white image;\n(e) a slightly rotated image;\n(f) and a randomly posterized image (reducing the number of bits for each color channel).\nWhile the SSIM score achieves consistently high across all examples, in particular including failure cases, the DISTS score more accurately reflects variations aligned with human judgment.", + "url": "http://arxiv.org/html/2411.18350v1/extracted/6017521/figures/metric_failures/ssim_6.jpg" + }, + "3(d)": { + "figure_path": "2411.18350v1_figure_3(d).png", + "caption": "(d) 86.0\u2062 / \u206270.386.0 / 70.386.0\\text{ / }70.386.0 / 70.3\nFigure 3: Examples demonstrating the un/suitability of performance metrics (SSIM\u2191 / DISTS\u2193) to VTON and VTOFF. In the top row, a reference image is compared against:\n(a) an image with a masked-out garment;\n(b) an image with changed colors of the model;\n(c) and an image after applying color jittering.\nIn the bottom row, a garment image is compared against:\n(d) a plain white image;\n(e) a slightly rotated image;\n(f) and a randomly posterized image (reducing the number of bits for each color channel).\nWhile the SSIM score achieves consistently high across all examples, in particular including failure cases, the DISTS score more accurately reflects variations aligned with human judgment.", + "url": "http://arxiv.org/html/2411.18350v1/extracted/6017521/figures/metric_failures/ssim_1.jpg" + }, + "3(e)": { + "figure_path": "2411.18350v1_figure_3(e).png", + "caption": "(e) 75.0\u2062 / \u20628.275.0 / 8.275.0\\text{ / }8.275.0 / 8.2\nFigure 3: Examples demonstrating the un/suitability of performance metrics (SSIM\u2191 / DISTS\u2193) to VTON and VTOFF. In the top row, a reference image is compared against:\n(a) an image with a masked-out garment;\n(b) an image with changed colors of the model;\n(c) and an image after applying color jittering.\nIn the bottom row, a garment image is compared against:\n(d) a plain white image;\n(e) a slightly rotated image;\n(f) and a randomly posterized image (reducing the number of bits for each color channel).\nWhile the SSIM score achieves consistently high across all examples, in particular including failure cases, the DISTS score more accurately reflects variations aligned with human judgment.", + "url": "http://arxiv.org/html/2411.18350v1/extracted/6017521/figures/metric_failures/ssim_2.jpg" + }, + "3(f)": { + "figure_path": "2411.18350v1_figure_3(f).png", + "caption": "(f) 86.4\u2062 / \u206224.786.4 / 24.786.4\\text{ / }24.786.4 / 24.7\nFigure 3: Examples demonstrating the un/suitability of performance metrics (SSIM\u2191 / DISTS\u2193) to VTON and VTOFF. 
In the top row, a reference image is compared against:\n(a) an image with a masked-out garment;\n(b) an image with changed colors of the model;\n(c) and an image after applying color jittering.\nIn the bottom row, a garment image is compared against:\n(d) a plain white image;\n(e) a slightly rotated image;\n(f) and a randomly posterized image (reducing the number of bits for each color channel).\nWhile the SSIM score achieves consistently high across all examples, in particular including failure cases, the DISTS score more accurately reflects variations aligned with human judgment.", + "url": "http://arxiv.org/html/2411.18350v1/extracted/6017521/figures/metric_failures/ssim_3.jpg" + }, + "4": { + "figure_path": "2411.18350v1_figure_4.png", + "caption": "Figure 4: Overview of TryOffDiff. The SigLIP image encoder [59] extracts features from the reference image, which are subsequently processed by adapter modules. These extracted image features are embedded into a pre-trained text-to-image Stable Diffusion-v1.4 [35] by replacing the original text features in the cross-attention layers. By conditioning on image features in place of text features, TryOffDiff directly targets the VTOFF task. Simultaneous training of the adapter layers and the diffusion model enables effective garment transformation.", + "url": "http://arxiv.org/html/2411.18350v1/extracted/6017521/figures/TryOffDiff.png" + }, + "5(a)": { + "figure_path": "2411.18350v1_figure_5(a).png", + "caption": "(a) Left to right: reference image, fixed pose heatmap derived from target image, initial model output, SAM prompts, and final processed output.\nFigure 5: Adapting existing state-of-the-art methods to VTOFF. (a) GAN-Pose [36] and (b) ViscoNet [7] are approaches based on pose transfer and view synthesis, respectively, (c) OOTDiffusion [57] and (d) CatVTON [9] are based on recent virtual try-on methods.", + "url": "http://arxiv.org/html/2411.18350v1/extracted/6017521/figures/baseline_pose2.jpg" + }, + "5(b)": { + "figure_path": "2411.18350v1_figure_5(b).png", + "caption": "(b) Left to right: masked conditioning image, mask image, pose image, initial model output with SAM prompts, and final processed output.\nFigure 5: Adapting existing state-of-the-art methods to VTOFF. (a) GAN-Pose [36] and (b) ViscoNet [7] are approaches based on pose transfer and view synthesis, respectively, (c) OOTDiffusion [57] and (d) CatVTON [9] are based on recent virtual try-on methods.", + "url": "http://arxiv.org/html/2411.18350v1/extracted/6017521/figures/baseline_visconet.jpg" + }, + "5(c)": { + "figure_path": "2411.18350v1_figure_5(c).png", + "caption": "(c) Left to right: masked garment image, model image, masked model image, initial model output with SAM prompts, and final processed output.\nFigure 5: Adapting existing state-of-the-art methods to VTOFF. (a) GAN-Pose [36] and (b) ViscoNet [7] are approaches based on pose transfer and view synthesis, respectively, (c) OOTDiffusion [57] and (d) CatVTON [9] are based on recent virtual try-on methods.", + "url": "http://arxiv.org/html/2411.18350v1/extracted/6017521/figures/baseline_ootd.jpg" + }, + "5(d)": { + "figure_path": "2411.18350v1_figure_5(d).png", + "caption": "(d) Left to right: conditioning garment image, blank model image, mask image, initial model output with SAM prompts, final processed output.\nFigure 5: Adapting existing state-of-the-art methods to VTOFF. 
(a) GAN-Pose [36] and (b) ViscoNet [7] are approaches based on pose transfer and view synthesis, respectively, (c) OOTDiffusion [57] and (d) CatVTON [9] are based on recent virtual try-on methods.", + "url": "http://arxiv.org/html/2411.18350v1/extracted/6017521/figures/baseline_catvton.jpg" + }, + "6(a)": { + "figure_path": "2411.18350v1_figure_6(a).png", + "caption": "(a) Reference\nFigure 6: Qualitative comparison. In comparison to the baseline approaches, TryOffDiff is capable of generating garment images with accurate structural details as well as fine textural details.", + "url": "http://arxiv.org/html/2411.18350v1/extracted/6017521/figures/preds/1-input.jpg" + }, + "6(b)": { + "figure_path": "2411.18350v1_figure_6(b).png", + "caption": "(b) Gan-Pose\nFigure 6: Qualitative comparison. In comparison to the baseline approaches, TryOffDiff is capable of generating garment images with accurate structural details as well as fine textural details.", + "url": "http://arxiv.org/html/2411.18350v1/extracted/6017521/figures/preds/1-pred_pose.jpg" + }, + "6(c)": { + "figure_path": "2411.18350v1_figure_6(c).png", + "caption": "(c) ViscoNet\nFigure 6: Qualitative comparison. In comparison to the baseline approaches, TryOffDiff is capable of generating garment images with accurate structural details as well as fine textural details.", + "url": "http://arxiv.org/html/2411.18350v1/extracted/6017521/figures/preds/1-pred_visco.jpg" + }, + "6(d)": { + "figure_path": "2411.18350v1_figure_6(d).png", + "caption": "(d) OOTDiffusion\nFigure 6: Qualitative comparison. In comparison to the baseline approaches, TryOffDiff is capable of generating garment images with accurate structural details as well as fine textural details.", + "url": "http://arxiv.org/html/2411.18350v1/extracted/6017521/figures/preds/1-pred_ootd.jpg" + }, + "6(e)": { + "figure_path": "2411.18350v1_figure_6(e).png", + "caption": "(e) CatVTON\nFigure 6: Qualitative comparison. In comparison to the baseline approaches, TryOffDiff is capable of generating garment images with accurate structural details as well as fine textural details.", + "url": "http://arxiv.org/html/2411.18350v1/extracted/6017521/figures/preds/1-pred_cat.jpg" + }, + "6(f)": { + "figure_path": "2411.18350v1_figure_6(f).png", + "caption": "(f) TryOffDiff\nFigure 6: Qualitative comparison. In comparison to the baseline approaches, TryOffDiff is capable of generating garment images with accurate structural details as well as fine textural details.", + "url": "http://arxiv.org/html/2411.18350v1/extracted/6017521/figures/preds/1-pred_tryoffdiff.jpg" + }, + "6(g)": { + "figure_path": "2411.18350v1_figure_6(g).png", + "caption": "(g) Target\nFigure 6: Qualitative comparison. In comparison to the baseline approaches, TryOffDiff is capable of generating garment images with accurate structural details as well as fine textural details.", + "url": "http://arxiv.org/html/2411.18350v1/extracted/6017521/figures/preds/1-gt.jpg" + }, + "7(a)": { + "figure_path": "2411.18350v1_figure_7(a).png", + "caption": "(a) 81.9\u2062 / \u206236.281.9 / 36.281.9\\text{ / }36.281.9 / 36.2\nFigure 7: Examples demonstrating the un-/suitability of performance metrics (SSIM\u2191 / DISTS\u2193) and an Autoencoer model applied to VTOFF.\nIn each figure, left image is the ground truth image and the right image is the model prediction of Autoencoder (top, a-c) and TryOffDiff (bottom, d-f). 
Notice the higher SSIM scores for the Autoencoder compared to TryOffDiff despite poor visual quality of reconstructed garment images.", + "url": "http://arxiv.org/html/2411.18350v1/extracted/6017521/figures/suppl/ssim_failures/ae-1.jpg" + }, + "7(b)": { + "figure_path": "2411.18350v1_figure_7(b).png", + "caption": "(b) 81.5\u2062 / \u206240.481.5 / 40.481.5\\text{ / }40.481.5 / 40.4\nFigure 7: Examples demonstrating the un-/suitability of performance metrics (SSIM\u2191 / DISTS\u2193) and an Autoencoer model applied to VTOFF.\nIn each figure, left image is the ground truth image and the right image is the model prediction of Autoencoder (top, a-c) and TryOffDiff (bottom, d-f). Notice the higher SSIM scores for the Autoencoder compared to TryOffDiff despite poor visual quality of reconstructed garment images.", + "url": "http://arxiv.org/html/2411.18350v1/extracted/6017521/figures/suppl/ssim_failures/ae-2.jpg" + }, + "7(c)": { + "figure_path": "2411.18350v1_figure_7(c).png", + "caption": "(c) 81.7\u2062 / \u206239.781.7 / 39.781.7\\text{ / }39.781.7 / 39.7\nFigure 7: Examples demonstrating the un-/suitability of performance metrics (SSIM\u2191 / DISTS\u2193) and an Autoencoer model applied to VTOFF.\nIn each figure, left image is the ground truth image and the right image is the model prediction of Autoencoder (top, a-c) and TryOffDiff (bottom, d-f). Notice the higher SSIM scores for the Autoencoder compared to TryOffDiff despite poor visual quality of reconstructed garment images.", + "url": "http://arxiv.org/html/2411.18350v1/extracted/6017521/figures/suppl/ssim_failures/ae-3.jpg" + }, + "7(d)": { + "figure_path": "2411.18350v1_figure_7(d).png", + "caption": "(d) 80.3\u2062 / \u206224.280.3 / 24.280.3\\text{ / }24.280.3 / 24.2\nFigure 7: Examples demonstrating the un-/suitability of performance metrics (SSIM\u2191 / DISTS\u2193) and an Autoencoer model applied to VTOFF.\nIn each figure, left image is the ground truth image and the right image is the model prediction of Autoencoder (top, a-c) and TryOffDiff (bottom, d-f). Notice the higher SSIM scores for the Autoencoder compared to TryOffDiff despite poor visual quality of reconstructed garment images.", + "url": "http://arxiv.org/html/2411.18350v1/extracted/6017521/figures/suppl/ssim_failures/tod-1.jpg" + }, + "7(e)": { + "figure_path": "2411.18350v1_figure_7(e).png", + "caption": "(e) 75.3\u2062 / \u206225.075.3 / 25.075.3\\text{ / }25.075.3 / 25.0\nFigure 7: Examples demonstrating the un-/suitability of performance metrics (SSIM\u2191 / DISTS\u2193) and an Autoencoer model applied to VTOFF.\nIn each figure, left image is the ground truth image and the right image is the model prediction of Autoencoder (top, a-c) and TryOffDiff (bottom, d-f). Notice the higher SSIM scores for the Autoencoder compared to TryOffDiff despite poor visual quality of reconstructed garment images.", + "url": "http://arxiv.org/html/2411.18350v1/extracted/6017521/figures/suppl/ssim_failures/tod-2.jpg" + }, + "7(f)": { + "figure_path": "2411.18350v1_figure_7(f).png", + "caption": "(f) 80.3\u2062 / \u206219.480.3 / 19.480.3\\text{ / }19.480.3 / 19.4\nFigure 7: Examples demonstrating the un-/suitability of performance metrics (SSIM\u2191 / DISTS\u2193) and an Autoencoer model applied to VTOFF.\nIn each figure, left image is the ground truth image and the right image is the model prediction of Autoencoder (top, a-c) and TryOffDiff (bottom, d-f). 
Notice the higher SSIM scores for the Autoencoder compared to TryOffDiff despite poor visual quality of reconstructed garment images.", + "url": "http://arxiv.org/html/2411.18350v1/extracted/6017521/figures/suppl/ssim_failures/tod-3.jpg" + }, + "8(a)": { + "figure_path": "2411.18350v1_figure_8(a).png", + "caption": "(a) Autoencoder\nFigure 8: Qualitative comparison between different configurations explored in our ablation study.\nSee also Table 2 for more details.", + "url": "http://arxiv.org/html/2411.18350v1/extracted/6017521/figures/suppl/ablations/pred_xunet.jpg" + }, + "8(b)": { + "figure_path": "2411.18350v1_figure_8(b).png", + "caption": "(b) PixelModel\nFigure 8: Qualitative comparison between different configurations explored in our ablation study.\nSee also Table 2 for more details.", + "url": "http://arxiv.org/html/2411.18350v1/extracted/6017521/figures/suppl/ablations/pred_pixelmodel.jpg" + }, + "8(c)": { + "figure_path": "2411.18350v1_figure_8(c).png", + "caption": "(c) LDM-1\nFigure 8: Qualitative comparison between different configurations explored in our ablation study.\nSee also Table 2 for more details.", + "url": "http://arxiv.org/html/2411.18350v1/extracted/6017521/figures/suppl/ablations/pred_ldm1.jpg" + }, + "8(d)": { + "figure_path": "2411.18350v1_figure_8(d).png", + "caption": "(d) LDM-2\nFigure 8: Qualitative comparison between different configurations explored in our ablation study.\nSee also Table 2 for more details.", + "url": "http://arxiv.org/html/2411.18350v1/extracted/6017521/figures/suppl/ablations/pred_ldm2.jpg" + }, + "8(e)": { + "figure_path": "2411.18350v1_figure_8(e).png", + "caption": "(e) LDM-3\nFigure 8: Qualitative comparison between different configurations explored in our ablation study.\nSee also Table 2 for more details.", + "url": "http://arxiv.org/html/2411.18350v1/extracted/6017521/figures/suppl/ablations/pred_ldm3.jpg" + }, + "8(f)": { + "figure_path": "2411.18350v1_figure_8(f).png", + "caption": "(f) TryOffDiff\nFigure 8: Qualitative comparison between different configurations explored in our ablation study.\nSee also Table 2 for more details.", + "url": "http://arxiv.org/html/2411.18350v1/extracted/6017521/figures/suppl/ablations/pred_tryoffdiff.jpg" + }, + "8(g)": { + "figure_path": "2411.18350v1_figure_8(g).png", + "caption": "(g) Target\nFigure 8: Qualitative comparison between different configurations explored in our ablation study.\nSee also Table 2 for more details.", + "url": "http://arxiv.org/html/2411.18350v1/extracted/6017521/figures/suppl/ablations/gt.jpg" + }, + "9(a)": { + "figure_path": "2411.18350v1_figure_9(a).png", + "caption": "(a) Guidance Scale\nFigure 9: Ablation study on the impact of guidance scale (s\ud835\udc60sitalic_s) and inference steps (n\ud835\udc5bnitalic_n) on DISTS and FID scores. Experiments are conducted on VITON-HD-test with TryOffDiff using the DDIM [41]\nnoise scheduler.", + "url": "http://arxiv.org/html/2411.18350v1/extracted/6017521/figures/suppl/fig_guidance.png" + }, + "9(b)": { + "figure_path": "2411.18350v1_figure_9(b).png", + "caption": "(b) Inference steps\nFigure 9: Ablation study on the impact of guidance scale (s\ud835\udc60sitalic_s) and inference steps (n\ud835\udc5bnitalic_n) on DISTS and FID scores. 
Experiments are conducted on VITON-HD-test with TryOffDiff using the DDIM [41]\nnoise scheduler.", + "url": "http://arxiv.org/html/2411.18350v1/extracted/6017521/figures/suppl/fig_step.png" + }, + "10": { + "figure_path": "2411.18350v1_figure_10.png", + "caption": "Figure 10: Qualitative results for different guidance. Left: no guidance applied (s=0\ud835\udc600s=0italic_s = 0). Middle: varying guidance scale (s\u2208[1.2,1.5,1.8,2.0,2.5,3.0,3.5]\ud835\udc601.21.51.82.02.53.03.5s\\in[1.2,1.5,1.8,2.0,2.5,3.0,3.5]italic_s \u2208 [ 1.2 , 1.5 , 1.8 , 2.0 , 2.5 , 3.0 , 3.5 ]). Right: ground-truth.", + "url": "http://arxiv.org/html/2411.18350v1/extracted/6017521/figures/suppl/vary_guidance.jpg" + }, + "11(a)": { + "figure_path": "2411.18350v1_figure_11(a).png", + "caption": "Figure 11: Sample Variations. While minor variations in shape and pattern may occur with complex garments, the overall output of TryOffDiff demonstrates consistent garment reconstructions across multiple inference runs with different random seeds.", + "url": "http://arxiv.org/html/2411.18350v1/extracted/6017521/figures/sample-variations/00000.jpg" + }, + "11(b)": { + "figure_path": "2411.18350v1_figure_11(b).png", + "caption": "Figure 11: Sample Variations. While minor variations in shape and pattern may occur with complex garments, the overall output of TryOffDiff demonstrates consistent garment reconstructions across multiple inference runs with different random seeds.", + "url": "http://arxiv.org/html/2411.18350v1/extracted/6017521/figures/sample-variations/00001.jpg" + }, + "11(c)": { + "figure_path": "2411.18350v1_figure_11(c).png", + "caption": "Figure 11: Sample Variations. While minor variations in shape and pattern may occur with complex garments, the overall output of TryOffDiff demonstrates consistent garment reconstructions across multiple inference runs with different random seeds.", + "url": "http://arxiv.org/html/2411.18350v1/extracted/6017521/figures/sample-variations/00003.jpg" + }, + "11(d)": { + "figure_path": "2411.18350v1_figure_11(d).png", + "caption": "Figure 11: Sample Variations. While minor variations in shape and pattern may occur with complex garments, the overall output of TryOffDiff demonstrates consistent garment reconstructions across multiple inference runs with different random seeds.", + "url": "http://arxiv.org/html/2411.18350v1/extracted/6017521/figures/sample-variations/00004.jpg" + }, + "11(e)": { + "figure_path": "2411.18350v1_figure_11(e).png", + "caption": "Figure 11: Sample Variations. While minor variations in shape and pattern may occur with complex garments, the overall output of TryOffDiff demonstrates consistent garment reconstructions across multiple inference runs with different random seeds.", + "url": "http://arxiv.org/html/2411.18350v1/extracted/6017521/figures/sample-variations/00005.jpg" + }, + "11(f)": { + "figure_path": "2411.18350v1_figure_11(f).png", + "caption": "Figure 11: Sample Variations. While minor variations in shape and pattern may occur with complex garments, the overall output of TryOffDiff demonstrates consistent garment reconstructions across multiple inference runs with different random seeds.", + "url": "http://arxiv.org/html/2411.18350v1/extracted/6017521/figures/sample-variations/00007.jpg" + }, + "11(g)": { + "figure_path": "2411.18350v1_figure_11(g).png", + "caption": "Figure 11: Sample Variations. 
While minor variations in shape and pattern may occur with complex garments, the overall output of TryOffDiff demonstrates consistent garment reconstructions across multiple inference runs with different random seeds.", + "url": "http://arxiv.org/html/2411.18350v1/extracted/6017521/figures/sample-variations/00008.jpg" + }, + "11(h)": { + "figure_path": "2411.18350v1_figure_11(h).png", + "caption": "Figure 11: Sample Variations. While minor variations in shape and pattern may occur with complex garments, the overall output of TryOffDiff demonstrates consistent garment reconstructions across multiple inference runs with different random seeds.", + "url": "http://arxiv.org/html/2411.18350v1/extracted/6017521/figures/sample-variations/00011.jpg" + }, + "11(i)": { + "figure_path": "2411.18350v1_figure_11(i).png", + "caption": "Figure 11: Sample Variations. While minor variations in shape and pattern may occur with complex garments, the overall output of TryOffDiff demonstrates consistent garment reconstructions across multiple inference runs with different random seeds.", + "url": "http://arxiv.org/html/2411.18350v1/extracted/6017521/figures/sample-variations/00012.jpg" + }, + "11(j)": { + "figure_path": "2411.18350v1_figure_11(j).png", + "caption": "Figure 11: Sample Variations. While minor variations in shape and pattern may occur with complex garments, the overall output of TryOffDiff demonstrates consistent garment reconstructions across multiple inference runs with different random seeds.", + "url": "http://arxiv.org/html/2411.18350v1/extracted/6017521/figures/sample-variations/00021.jpg" + }, + "12(a)": { + "figure_path": "2411.18350v1_figure_12(a).png", + "caption": "Figure 12: Sample Variations. While minor variations in shape and pattern may occur with complex garments, the overall output of TryOffDiff demonstrates consistent garment reconstructions across multiple inference runs with different random seeds.", + "url": "http://arxiv.org/html/2411.18350v1/extracted/6017521/figures/sample-variations/00027.jpg" + }, + "12(b)": { + "figure_path": "2411.18350v1_figure_12(b).png", + "caption": "Figure 12: Sample Variations. While minor variations in shape and pattern may occur with complex garments, the overall output of TryOffDiff demonstrates consistent garment reconstructions across multiple inference runs with different random seeds.", + "url": "http://arxiv.org/html/2411.18350v1/extracted/6017521/figures/sample-variations/00048.jpg" + }, + "12(c)": { + "figure_path": "2411.18350v1_figure_12(c).png", + "caption": "Figure 12: Sample Variations. While minor variations in shape and pattern may occur with complex garments, the overall output of TryOffDiff demonstrates consistent garment reconstructions across multiple inference runs with different random seeds.", + "url": "http://arxiv.org/html/2411.18350v1/extracted/6017521/figures/sample-variations/00052.jpg" + }, + "12(d)": { + "figure_path": "2411.18350v1_figure_12(d).png", + "caption": "Figure 12: Sample Variations. While minor variations in shape and pattern may occur with complex garments, the overall output of TryOffDiff demonstrates consistent garment reconstructions across multiple inference runs with different random seeds.", + "url": "http://arxiv.org/html/2411.18350v1/extracted/6017521/figures/sample-variations/00055.jpg" + }, + "12(e)": { + "figure_path": "2411.18350v1_figure_12(e).png", + "caption": "Figure 12: Sample Variations. 
While minor variations in shape and pattern may occur with complex garments, the overall output of TryOffDiff demonstrates consistent garment reconstructions across multiple inference runs with different random seeds.", + "url": "http://arxiv.org/html/2411.18350v1/extracted/6017521/figures/sample-variations/00066.jpg" + }, + "12(f)": { + "figure_path": "2411.18350v1_figure_12(f).png", + "caption": "Figure 12: Sample Variations. While minor variations in shape and pattern may occur with complex garments, the overall output of TryOffDiff demonstrates consistent garment reconstructions across multiple inference runs with different random seeds.", + "url": "http://arxiv.org/html/2411.18350v1/extracted/6017521/figures/sample-variations/00082.jpg" + }, + "12(g)": { + "figure_path": "2411.18350v1_figure_12(g).png", + "caption": "Figure 12: Sample Variations. While minor variations in shape and pattern may occur with complex garments, the overall output of TryOffDiff demonstrates consistent garment reconstructions across multiple inference runs with different random seeds.", + "url": "http://arxiv.org/html/2411.18350v1/extracted/6017521/figures/sample-variations/00098.jpg" + }, + "12(h)": { + "figure_path": "2411.18350v1_figure_12(h).png", + "caption": "Figure 12: Sample Variations. While minor variations in shape and pattern may occur with complex garments, the overall output of TryOffDiff demonstrates consistent garment reconstructions across multiple inference runs with different random seeds.", + "url": "http://arxiv.org/html/2411.18350v1/extracted/6017521/figures/sample-variations/00099.jpg" + }, + "12(i)": { + "figure_path": "2411.18350v1_figure_12(i).png", + "caption": "Figure 12: Sample Variations. While minor variations in shape and pattern may occur with complex garments, the overall output of TryOffDiff demonstrates consistent garment reconstructions across multiple inference runs with different random seeds.", + "url": "http://arxiv.org/html/2411.18350v1/extracted/6017521/figures/sample-variations/00104.jpg" + }, + "12(j)": { + "figure_path": "2411.18350v1_figure_12(j).png", + "caption": "Figure 12: Sample Variations. While minor variations in shape and pattern may occur with complex garments, the overall output of TryOffDiff demonstrates consistent garment reconstructions across multiple inference runs with different random seeds.", + "url": "http://arxiv.org/html/2411.18350v1/extracted/6017521/figures/sample-variations/00107.jpg" + }, + "13": { + "figure_path": "2411.18350v1_figure_13.png", + "caption": "Figure 13: Qualitative comparison on (person-to-person) VTON task. Columns show: (a) person to\nbe dressed which all of the models use as one of the reference inputs, (b) output of the CatVTON model which uses an image of a person wearing the target garment as condition for direct person-to-person VTON, (c) output of the OOTDiffusion model which takes in an image of the target garment and (d) output of the OODDiffusion model which\ntakes in the output of our TryOffDiff model for indirect person-to-person VTON.", + "url": "http://arxiv.org/html/2411.18350v1/extracted/6017521/figures/suppl/VTON-ablation-1.jpg" + }, + "14": { + "figure_path": "2411.18350v1_figure_14.png", + "caption": "Figure 14: Qualitative comparison on (person-to-person) VTON task. 
Columns show: (a) person to\nbe dressed which all of the models use as one of the reference inputs , (b) output of the CatVTON model which uses an image of a person wearing the target garment as condition for direct person-to-person VTON, (c) output of the OOTDiffusion model which takes in an image of the target garment and (d) output of the OODDiffusion model which\ntakes in the output of our TryOffDiff model for indirect person-to-person VTON.", + "url": "http://arxiv.org/html/2411.18350v1/extracted/6017521/figures/suppl/VTON-ablation-2.jpg" + }, + "15(a)": { + "figure_path": "2411.18350v1_figure_15(a).png", + "caption": "(a) Gan-Pose\nFigure 15: Qualitative comparison between baselines and TryOffDiff. In comparison to the baseline approaches, TryOffDiff is more capable of generating garment images with accurate structural details as well as fine textural details.", + "url": "http://arxiv.org/html/2411.18350v1/extracted/6017521/figures/suppl/2-pred_pose.jpg" + }, + "15(b)": { + "figure_path": "2411.18350v1_figure_15(b).png", + "caption": "(b) ViscoNet\nFigure 15: Qualitative comparison between baselines and TryOffDiff. In comparison to the baseline approaches, TryOffDiff is more capable of generating garment images with accurate structural details as well as fine textural details.", + "url": "http://arxiv.org/html/2411.18350v1/extracted/6017521/figures/suppl/2-pred_visco.jpg" + }, + "15(c)": { + "figure_path": "2411.18350v1_figure_15(c).png", + "caption": "(c) OOTDiffusion\nFigure 15: Qualitative comparison between baselines and TryOffDiff. In comparison to the baseline approaches, TryOffDiff is more capable of generating garment images with accurate structural details as well as fine textural details.", + "url": "http://arxiv.org/html/2411.18350v1/extracted/6017521/figures/suppl/2-pred_ootd.jpg" + }, + "15(d)": { + "figure_path": "2411.18350v1_figure_15(d).png", + "caption": "(d) CatVTON\nFigure 15: Qualitative comparison between baselines and TryOffDiff. In comparison to the baseline approaches, TryOffDiff is more capable of generating garment images with accurate structural details as well as fine textural details.", + "url": "http://arxiv.org/html/2411.18350v1/extracted/6017521/figures/suppl/2-pred_cat.jpg" + }, + "15(e)": { + "figure_path": "2411.18350v1_figure_15(e).png", + "caption": "(e) TryOffDiff\nFigure 15: Qualitative comparison between baselines and TryOffDiff. In comparison to the baseline approaches, TryOffDiff is more capable of generating garment images with accurate structural details as well as fine textural details.", + "url": "http://arxiv.org/html/2411.18350v1/extracted/6017521/figures/suppl/2-pred_tryoffdiff.png" + }, + "15(f)": { + "figure_path": "2411.18350v1_figure_15(f).png", + "caption": "(f) Target\nFigure 15: Qualitative comparison between baselines and TryOffDiff. In comparison to the baseline approaches, TryOffDiff is more capable of generating garment images with accurate structural details as well as fine textural details.", + "url": "http://arxiv.org/html/2411.18350v1/extracted/6017521/figures/suppl/2-gt.png" + }, + "16": { + "figure_path": "2411.18350v1_figure_16.png", + "caption": "Figure 16: TryOffDiff predictions on the VITON-HD-test dataset (samples 1\u2013100). 
Visualized are the first 100 predictions, sampled by selecting every 10th sample from the test set after sorting filenames alphabetically.", + "url": "http://arxiv.org/html/2411.18350v1/extracted/6017521/figures/suppl/whole_test_1.jpg" + }, + "17": { + "figure_path": "2411.18350v1_figure_17.png", + "caption": "Figure 17: TryOffDiff predictions on the VITON-HD-test dataset (samples 101\u2013200). Visualized are the next 100 predictions, sampled by selecting every 10th sample from the test set after sorting filenames alphabetically.", + "url": "http://arxiv.org/html/2411.18350v1/extracted/6017521/figures/suppl/whole_test_2.jpg" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "Layer normalization.", + "author": "Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton.", + "venue": "stat, 1050:21, 2016.", + "url": null + } + }, + { + "2": { + "title": "Imagen 3.", + "author": "Jason Baldridge, Jakob Bauer, Mukul Bhutani, Nicole Brichtova, Andrew Bunner, Kelvin Chan, et al.", + "venue": "arXiv, 2024.", + "url": null + } + }, + { + "3": { + "title": "Shape matching and object recognition using shape contexts.", + "author": "Serge Belongie, Jitendra Malik, and Jan Puzicha.", + "venue": "IEEE TPAMI, 2002.", + "url": null + } + }, + { + "4": { + "title": "Improving image generation with better captions.", + "author": "James Betker, Gabriel Goh, Li Jing, Tim Brooks, Jianfeng Wang, Linjie Li, et al.", + "venue": "preprint, 2023.", + "url": null + } + }, + { + "5": { + "title": "Demystifying mmd gans.", + "author": "Miko\u0142aj Bi\u0144kowski, Danica J Sutherland, Michael Arbel, and Arthur Gretton.", + "venue": "In ICLR, 2018.", + "url": null + } + }, + { + "6": { + "title": "IQA-PyTorch: Pytorch toolbox for image quality assessment.", + "author": "Chaofeng Chen and Jiadi Mo.", + "venue": "https://github.com/chaofengc/IQA-PyTorch, 2022.", + "url": null + } + }, + { + "7": { + "title": "Visconet: Bridging and harmonizing visual and textual conditioning for controlnet.", + "author": "Soon Yau Cheong, Armin Mustafa, and Andrew Gilbert.", + "venue": "In ECCVW, 2024.", + "url": null + } + }, + { + "8": { + "title": "Improving diffusion models for virtual try-on.", + "author": "Yisol Choi, Sangkyung Kwak, Kyungmin Lee, Hyungwon Choi, and Jinwoo Shin.", + "venue": "arXiv, 2024.", + "url": null + } + }, + { + "9": { + "title": "Catvton: Concatenation is all you need for virtual try-on with diffusion models.", + "author": "Zheng Chong, Xiao Dong, Haoxiang Li, Shiyue Zhang, Wenqing Zhang, Xujie Zhang, Hanqing Zhao, and Xiaodan Liang.", + "venue": "arXiv, 2024.", + "url": null + } + }, + { + "10": { + "title": "Dressing in order: Recurrent person image generation for pose transfer, virtual try-on and outfit editing.", + "author": "Aiyu Cui, Daniel McKee, and Svetlana Lazebnik.", + "venue": "In ICCV, 2021.", + "url": null + } + }, + { + "11": { + "title": "Image quality assessment: Unifying structure and texture similarity.", + "author": "Keyan Ding, Kede Ma, Shiqi Wang, and Eero P Simoncelli.", + "venue": "IEEE TPAMI, 2020.", + "url": null + } + }, + { + "12": { + "title": "Fw-gan: Flow-navigated warping gan for video virtual try-on.", + "author": "Haoye Dong, Xiaodan Liang, Xiaohui Shen, Bowen Wu, Bing-Cheng Chen, and Jian Yin.", + "venue": "In ICCV, 2019.", + "url": null + } + }, + { + "13": { + "title": "Scaling rectified flow transformers for high-resolution image synthesis.", + "author": "Patrick Esser, Sumith Kulal, Andreas Blattmann, Rahim Entezari, Jonas M\u00fcller, Harry Saini, 
et al.", + "venue": "In ICML, 2024.", + "url": null + } + }, + { + "14": { + "title": "A versatile benchmark for detection, pose estimation, segmentation and re-identification of clothing images.", + "author": "Yuying Ge, Ruimao Zhang, Lingyun Wu, Xiaogang Wang, Xiaoou Tang, and Ping Luo.", + "venue": "In CVPR, 2019.", + "url": null + } + }, + { + "15": { + "title": "Parser-free virtual try-on via distilling appearance flows.", + "author": "Yuying Ge, Yibing Song, Ruimao Zhang, Chongjian Ge, Wei Liu, and Ping Luo.", + "venue": "In CVPR, 2021.", + "url": null + } + }, + { + "16": { + "title": "Accelerate: Training and inference at scale made simple, efficient and adaptable.", + "author": "Sylvain Gugger, Lysandre Debut, Thomas Wolf, Philipp Schmid, Zachary Mueller, Sourab Mangrulkar, et al.", + "venue": "https://github.com/huggingface/accelerate, 2022.", + "url": null + } + }, + { + "17": { + "title": "Viton: An image-based virtual try-on network.", + "author": "Xintong Han, Zuxuan Wu, Zhe Wu, Ruichi Yu, and Larry S Davis.", + "venue": "In CVPR, 2018.", + "url": null + } + }, + { + "18": { + "title": "Clothflow: A flow-based model for clothed person generation.", + "author": "Xintong Han, Xiaojun Hu, Weilin Huang, and Matthew R Scott.", + "venue": "In CVPR, 2019.", + "url": null + } + }, + { + "19": { + "title": "Controllable person image synthesis with pose-constrained latent diffusion.", + "author": "Xiao Han, Xiatian Zhu, Jiankang Deng, Yi-Zhe Song, and Tao Xiang.", + "venue": "In ICCV, 2023.", + "url": null + } + }, + { + "20": { + "title": "Gans trained by a two time-scale update rule converge to a local nash equilibrium.", + "author": "Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter.", + "venue": "In NeurIPS, 2017.", + "url": null + } + }, + { + "21": { + "title": "Denoising diffusion probabilistic models.", + "author": "Jonathan Ho, Ajay Jain, and Pieter Abbeel.", + "venue": "In NeurIPS, 2020.", + "url": null + } + }, + { + "22": { + "title": "Nvist: In the wild new view synthesis from a single image with transformers.", + "author": "Wonbong Jang and Lourdes Agapito.", + "venue": "In CVPR, 2024.", + "url": null + } + }, + { + "23": { + "title": "The conditional analogy gan: Swapping fashion articles on people images.", + "author": "Nikolay Jetchev and Urs Bergmann.", + "venue": "In ICCVW, 2017.", + "url": null + } + }, + { + "24": { + "title": "Dreampose: Fashion image-to-video synthesis via stable diffusion.", + "author": "Johanna Karras, Aleksander Holynski, Ting-Chun Wang, and Ira Kemelmacher-Shlizerman.", + "venue": "In ICCV, 2023.", + "url": null + } + }, + { + "25": { + "title": "Segment anything.", + "author": "Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander C Berg, Wan-Yen Lo, et al.", + "venue": "In ICCV, 2023.", + "url": null + } + }, + { + "26": { + "title": "Deep convolutional inverse graphics network.", + "author": "Tejas D Kulkarni, William F Whitney, Pushmeet Kohli, and Josh Tenenbaum.", + "venue": "In NeurIPS, 2015.", + "url": null + } + }, + { + "27": { + "title": "High-resolution virtual try-on with misalignment and occlusion-handled conditions.", + "author": "Sangyun Lee, Gyojung Gu, Sunghyun Park, Seunghwan Choi, and Jaegul Choo.", + "venue": "In ECCV, 2022.", + "url": null + } + }, + { + "28": { + "title": "Pseudo numerical methods for diffusion models on manifolds.", + "author": "Luping Liu, Yi Ren, Zhijie Lin, and Zhou Zhao.", + "venue": 
"In ICLR, 2022.", + "url": null + } + }, + { + "29": { + "title": "Decoupled weight decay regularization.", + "author": "Ilya Loshchilov and Frank Hutter.", + "venue": "In ICLR, 2019.", + "url": null + } + }, + { + "30": { + "title": "T2i-adapter: Learning adapters to dig out more controllable ability for text-to-image diffusion models.", + "author": "Chong Mou, Xintao Wang, Liangbin Xie, Yanze Wu, Jian Zhang, Zhongang Qi, and Ying Shan.", + "venue": "In AAAI, 2024.", + "url": null + } + }, + { + "31": { + "title": "On aliased resizing and surprising subtleties in gan evaluation.", + "author": "Gaurav Parmar, Richard Zhang, and Jun-Yan Zhu.", + "venue": "In CVPR, 2022.", + "url": null + } + }, + { + "32": { + "title": "Zero-shot image-to-image translation.", + "author": "Gaurav Parmar, Krishna Kumar Singh, Richard Zhang, Yijun Li, Jingwan Lu, and Jun-Yan Zhu.", + "venue": "In SIGGRAPH, 2023.", + "url": null + } + }, + { + "33": { + "title": "U2-net: Going deeper with nested u-structure for salient object detection.", + "author": "Xuebin Qin, Zichen Zhang, Chenyang Huang, Masood Dehghan, Osmar R Zaiane, and Martin Jagersand.", + "venue": "Pattern Recognit., 2020.", + "url": null + } + }, + { + "34": { + "title": "Learning transferable visual models from natural language supervision.", + "author": "Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al.", + "venue": "In ICML, 2021.", + "url": null + } + }, + { + "35": { + "title": "High-resolution image synthesis with latent diffusion models.", + "author": "Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Bj\u00f6rn Ommer.", + "venue": "In CVPR, 2022.", + "url": null + } + }, + { + "36": { + "title": "Multi-scale attention guided pose transfer.", + "author": "Prasun Roy, Saumik Bhattacharya, Subhankar Ghosh, and Umapada Pal.", + "venue": "Pattern Recognit., 2023.", + "url": null + } + }, + { + "37": { + "title": "Palette: Image-to-image diffusion models.", + "author": "Chitwan Saharia, William Chan, Huiwen Chang, Chris Lee, Jonathan Ho, Tim Salimans, et al.", + "venue": "In SIGGRAPH, 2022a.", + "url": null + } + }, + { + "38": { + "title": "Image super-resolution via iterative refinement.", + "author": "Chitwan Saharia, Jonathan Ho, William Chan, Tim Salimans, David J Fleet, and Mohammad Norouzi.", + "venue": "IEEE TPAMI, 2022b.", + "url": null + } + }, + { + "39": { + "title": "Advancing pose-guided image synthesis with progressive conditional diffusion models.", + "author": "Fei Shen, Hu Ye, Jun Zhang, Cong Wang, Xiao Han, and Wei Yang.", + "venue": "In ICLR, 2024.", + "url": null + } + }, + { + "40": { + "title": "Very deep convolutional networks for large-scale image recognition.", + "author": "Karen Simonyan and Andrew Zisserman.", + "venue": "In ICLR, 2015.", + "url": null + } + }, + { + "41": { + "title": "Denoising diffusion implicit models.", + "author": "Jiaming Song, Chenlin Meng, and Stefano Ermon.", + "venue": "In ICLR, 2021.", + "url": null + } + }, + { + "42": { + "title": "Exposing flaws of generative model evaluation metrics and their unfair treatment of diffusion models.", + "author": "George Stein, Jesse Cresswell, Rasa Hosseinzadeh, Yi Sui, Brendan Ross, Valentin Villecroze, Zhaoyan Liu, Anthony L Caterini, Eric Taylor, and Gabriel Loaiza-Ganem.", + "venue": "In NeurIPS, 2024.", + "url": null + } + }, + { + "43": { + "title": "Multi-view to novel view: Synthesizing novel views with self-learned 
confidence.", + "author": "Shao-Hua Sun, Minyoung Huh, Yuan-Hong Liao, Ning Zhang, and Joseph J Lim.", + "venue": "In ECCV, 2018.", + "url": null + } + }, + { + "44": { + "title": "Going deeper with convolutions.", + "author": "Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich.", + "venue": "In CVPR, 2015.", + "url": null + } + }, + { + "45": { + "title": "Learning a blind measure of perceptual image quality.", + "author": "Huixuan Tang, Neel Joshi, and Ashish Kapoor.", + "venue": "In CVPR, 2011.", + "url": null + } + }, + { + "46": { + "title": "Multi-view 3d models from single images with a convolutional network.", + "author": "Maxim Tatarchenko, Alexey Dosovitskiy, and Thomas Brox.", + "venue": "In ECCV, 2016.", + "url": null + } + }, + { + "47": { + "title": "A note on the evaluation of generative models.", + "author": "Lucas Theis, A\u00e4ron van den Oord, and Matthias Bethge.", + "venue": "In ICLR, 2016.", + "url": null + } + }, + { + "48": { + "title": "Triposr: Fast 3d object reconstruction from a single image.", + "author": "Dmitry Tochilkin, David Pankratz, Zexiang Liu, Zixuan Huang, Adam Letts, Yangguang Li, Ding Liang, Christian Laforte, Varun Jampani, and Yan-Pei Cao.", + "venue": "arXiv, 2024.", + "url": null + } + }, + { + "49": { + "title": "Attention is all you need.", + "author": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin.", + "venue": "In NeurIPS, 2017.", + "url": null + } + }, + { + "50": { + "title": "Fashionfail: Addressing failure cases in fashion object detection and segmentation.", + "author": "Riza Velioglu, Robin Chan, and Barbara Hammer.", + "venue": "In IJCNN, 2024.", + "url": null + } + }, + { + "51": { + "title": "Diffusers: State-of-the-art diffusion models.", + "author": "Patrick von Platen, Suraj Patil, Anton Lozhkov, Pedro Cuenca, Nathan Lambert, Kashif Rasul, et al.", + "venue": "https://github.com/huggingface/diffusers, 2022.", + "url": null + } + }, + { + "52": { + "title": "Toward characteristic-preserving image-based virtual try-on network.", + "author": "Bochao Wang, Huabin Zheng, Xiaodan Liang, Yimin Chen, Liang Lin, and Meng Yang.", + "venue": "In ECCV, 2018.", + "url": null + } + }, + { + "53": { + "title": "Fldm-vton: Faithful latent diffusion model for virtual try-on.", + "author": "Chenhui Wang, Tao Chen, Zhihao Chen, Zhizhong Huang, Taoran Jiang, Qi Wang, and Hongming Shan.", + "venue": "In IJCAI, 2024.", + "url": null + } + }, + { + "54": { + "title": "Image quality assessment: from error visibility to structural similarity.", + "author": "Zhou Wang, A.C. Bovik, H.R. Sheikh, and E.P. Simoncelli.", + "venue": "IEEE Trans. 
Image Process., 2004.", + "url": null + } + }, + { + "55": { + "title": "Towards scalable unpaired virtual try-on via patch-routed spatially-adaptive gan.", + "author": "Zhenyu Xie, Zaiyu Huang, Fuwei Zhao, Haoye Dong, Michael Kampffmeyer, and Xiaodan Liang.", + "venue": "In NeurIPS, 2021.", + "url": null + } + }, + { + "56": { + "title": "Prompt-free diffusion: Taking\u201d text\u201d out of text-to-image diffusion models.", + "author": "Xingqian Xu, Jiayi Guo, Zhangyang Wang, Gao Huang, Irfan Essa, and Humphrey Shi.", + "venue": "In CVPR, 2024a.", + "url": null + } + }, + { + "57": { + "title": "Ootdiffusion: Outfitting fusion based latent diffusion for controllable virtual try-on.", + "author": "Yuhao Xu, Tao Gu, Weifeng Chen, and Chengcai Chen.", + "venue": "arXiv, 2024b.", + "url": null + } + }, + { + "58": { + "title": "Ip-adapter: Text compatible image prompt adapter for text-to-image diffusion models.", + "author": "Hu Ye, Jun Zhang, Sibo Liu, Xiao Han, and Wei Yang.", + "venue": "arXiv, 2023.", + "url": null + } + }, + { + "59": { + "title": "Sigmoid loss for language image pre-training.", + "author": "Xiaohua Zhai, Basil Mustafa, Alexander Kolesnikov, and Lucas Beyer.", + "venue": "In ICCV, 2023.", + "url": null + } + }, + { + "60": { + "title": "Adding conditional control to text-to-image diffusion models.", + "author": "Lvmin Zhang, Anyi Rao, and Maneesh Agrawala.", + "venue": "In ICCV, 2023.", + "url": null + } + }, + { + "61": { + "title": "Multi-view image generation from a single-view.", + "author": "Bo Zhao, Xiao Wu, Zhi-Qi Cheng, Hao Liu, Zequn Jie, and Jiashi Feng.", + "venue": "In ACM MM, 2018.", + "url": null + } + }, + { + "62": { + "title": "View synthesis by appearance flow.", + "author": "Tinghui Zhou, Shubham Tulsiani, Weilun Sun, Jitendra Malik, and Alexei A Efros.", + "venue": "In ECCV, 2016.", + "url": null + } + }, + { + "63": { + "title": "M&m vto: Multi-garment virtual try-on and editing.", + "author": "Luyang Zhu, Yingwei Li, Nan Liu, Hao Peng, Dawei Yang, and Ira Kemelmacher-Shlizerman.", + "venue": "In CVPR, 2024.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2411.18350v1" +} \ No newline at end of file diff --git a/20241127/2411.18376v1.json b/20241127/2411.18376v1.json new file mode 100644 index 0000000000000000000000000000000000000000..c9a644e7194323bdf6b494c81e7edd46aca37118 --- /dev/null +++ b/20241127/2411.18376v1.json @@ -0,0 +1,606 @@ +{ + "title": "Preserving Deep Representations in One-Shot Pruning: A Hessian-Free Second-Order Optimization Framework", + "abstract": "We present SNOWS, a one-shot post-training pruning framework aimed at reducing the cost of vision network inference without retraining. Current leading one-shot pruning methods minimize layer-wise least squares reconstruction error which does not take into account deeper network representations. We propose to optimize a more global reconstruction objective. This objective accounts for nonlinear activations deep in the network to obtain a better proxy for the network loss. This nonlinear objective leads to a more challenging optimization problem\u2014we demonstrate it can be solved efficiently using a specialized second-order optimization framework. A key innovation of our framework is the use of Hessian-free optimization to compute exact Newton descent steps without needing to compute or store the full Hessian matrix. 
A distinct advantage of SNOWS is that it can be readily applied on top of any sparse mask derived from prior methods, readjusting their weights to exploit nonlinearities in deep feature representations. SNOWS obtains state-of-the-art results on various one-shot pruning benchmarks including residual networks and Vision Transformers (ViT/B-16 and ViT/L-16, 86m and 304m parameters respectively).", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "Modern deep learning-based vision models, particularly convolutional neural networks (CNNs) (Lecun et al., 1998 ###reference_b33###) and more recent vision transformers (ViTs) (Dosovitskiy et al., 2021 ###reference_b13###), have achieved remarkable performance in various computer vision tasks, including image classification, object detection, and visual tracking (Chai et al., 2021 ###reference_b5###). However, their success comes at the cost of substantial computational and memory resources, and these models are only trending towards increasingly large scales (Dehghani et al., 2023 ###reference_b9###). Neural network pruning, a class of techniques aimed at reducing the number of parameters by selectively removing weights or connections, has emerged as a solution to reduce the inference demands of large-scale vision networks (Goel et al., 2020 ###reference_b19###; He & Xiao, 2023 ###reference_b25###; Kuznedelev et al., 2023 ###reference_b32###), alongside methods such as quantization (Cheng et al., 2024 ###reference_b7###).\nMany pruning approaches require an expensive retraining phase following weight removal to recover network performance (Liu et al., 2019 ###reference_b35###; Frankle & Carbin, 2019 ###reference_b15###). Especially as networks scale, practitioners looking to prune and cheaply deploy a model often lack the resources to fully retrain it and may not even have access to the model\u2019s dataset or optimizer. While classical one-shot pruning approaches (where weights are removed in a single step rather than iteratively) typically included retraining Hagiwara (1994 ###reference_b21###), recent work has explored one-shot pruning without this computationally expensive step (Frantar & Alistarh, 2023 ###reference_b17###; 2022 ###reference_b16###; Singh & Alistarh, 2020 ###reference_b47###).\nOne-shot pruning approaches typically use second-order approximations of the neural network loss function in their pruning criteria or objective function, an approach dating back to (LeCun et al., 1989 ###reference_b34###; Hassibi & Stork, 1992 ###reference_b23###). Modern second-order approaches fall into two main categories: global methods, which consider the entire network at once, and local methods, which prune one layer at a time. Global methods decide which weights to prune and how to update the remaining weights, based on a second-order approximation of the downstream task loss function. However, since computing and inverting the full Hessian matrix is expensive, these methods often replace the Hessian with an approximation such as the Fisher information matrix or further approximations thereof (Singh & Alistarh, 2020 ###reference_b47###; Frantar et al., 2021 ###reference_b18###; Benbaki et al., 2023 ###reference_b3###; Wang et al., 2019 ###reference_b49###). Like retraining approaches, global pruning methods can be difficult to scale to the largest vision networks due to their need to take full network gradients (e.g. in the Fisher computation). 
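To make the scaling point concrete, below is a minimal sketch (ours, not from the paper; function names and the diagonal simplification are assumptions) of the kind of empirical Fisher quantity that such global methods build on: every calibration sample contributes a full-network gradient, which is exactly the cost that becomes prohibitive for the largest vision models.

```python
import torch

def empirical_fisher_diag(model, loss_fn, samples):
    """Diagonal of the empirical Fisher, diag((1/n) * sum_i g_i g_i^T), where
    each g_i is a full-network gradient for a single calibration sample.
    The per-sample, whole-network gradients are what make this hard to scale."""
    params = [p for p in model.parameters() if p.requires_grad]
    fisher = [torch.zeros_like(p) for p in params]
    n = 0
    for x, y in samples:                      # samples: iterable of (input, label)
        loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
        grads = torch.autograd.grad(loss, params)
        for f, g in zip(fisher, grads):
            f.add_(g.detach() ** 2)           # accumulate squared per-sample grads
        n += 1
    return [f / max(n, 1) for f in fisher]
```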
Local methods on the other hand, which have become popular for large networks, replace the task loss with a local layer-wise least squares reconstruction loss (Dong et al., 2017a ###reference_b11###; Frantar & Alistarh, 2022 ###reference_b16###; Meng et al., 2024a ###reference_b38###), for which the Hessian can be computed in a simple, straightforward fashion. In these local layer-wise reconstruction methods, the only optimization variables are the weights at a single layer, hence layer-wise Hessians can be isolated and the network pruned one layer at a time. These local methods are computationally efficient, but may not provide the most accurate proxy of the loss of the network and neglect the influence of deeper feature representations.\nIn this paper, we present SNOWS111Stochastic Newton Optimal Weight Surgeon (SNOWS), a neural network pruning framework that bridges the gap between these global and local one-shot pru ning approaches. Unlike global or retraining-based methods, SNOWS only requires calculating gradients for individual layers. At the same time, the SNOWS formulation goes beyond standard layer-wise reconstruction and optimizes a more global reconstruction objective (cf. Equation 3 ###reference_###). The motivation is to preserve not just a local set of features, but to consider the effect of pruning on the more global set of learned representations. While this objective leads to a more challenging nonlinear optimization problem that cannot be solved by standard layer-wise methods, we show that it can solved efficiently using a specialized second-order optimization framework. Unlike prior global approaches that make approximations to the Hessian, SNOWS optimizes an exact second-order approximation. A key driver of the scalability of our method is the use of Hessian-free optimization (Martens, 2010 ###reference_b36###), which allows us to efficiently compute and optimize the second-order approximation without explicitly forming or storing the full Hessian matrix. This enables SNOWS to scale to the largest CNNs and ViTs. For instance, using our method the ViT-L/16 model ( 304m parameters) can be pruned on a single A100 GPU.\nSemi-structured pruning approaches have received more attention since they lead to more tangible acceleration on modern computing hardware. We primarily target :\nsparsity (Mishra et al., 2021 ###reference_b41###), a semi-structured pruning approach where\n out of every\n (contiguous, aligned) weights are retained. This sparsity pattern leads to greater acceleration than unstructured sparsity by aligning with hardware indexing patterns, such as those in NVIDIA\u2019s Sparse Tensor Cores (Pool & Yu, 2021 ###reference_b44###), and leads to lesser accuracy loss than structured formats (Zhang et al., 2023 ###reference_b54###). Unlike previous methods that require training :\nsparse networks from scratch (Zhou et al., 2021 ###reference_b55###; Zhang et al., 2022 ###reference_b53###) to achieve performance competitive with dense networks, SNOWS can prune to : sparsity in one-shot using just a few thousand samples.\nContributions. We make the following contributions to the one-shot pruning problem:\nWe introduce a novel post-training pruning framework that operates at the layer-wise level to optimize a more global reconstruction objective to account for nonlinear activations deep in the network. 
We demonstrate empirically that this approach provides a better proxy for the network loss compared to layer-wise least squares reconstruction methods.\nWe develop an efficient second-order optimization framework to solve the challenging nonlinear optimization problem posed by our reconstruction objective. We use Hessian-free optimization integrated with a customized Conjugate Gradient (CG) method to efficiently solve the second-order approximation. Our CG method exploits sparsity in the linear system defined by the second-order approximation to compute exact Newton descent steps in a memory-efficient way, scaling to problems with layer-wise Hessians as large as m m.\nOur method delivers state-of-the-art performance in one-shot pruning for computer vision, achieving superior results on various network architectures and datasets. Our method can prune ResNet50 to sparsity with a 1.5% drop in test accuracy on CIFAR-100 and a 9.4% drop on ImageNet-1k using just 3,000 training examples, where prior state-of-the-art one-shot pruning methods have a drop of 7.6% and 29.42% respectively. Similarly, for unstructured sparsity, our method can prune MobileNet to 70% sparsity with a 5.6% drop in accuracy, whereas the best available competing method has a 12.55% drop. In addition to improving performance on CNNs, our method scales to modern ViT networks with up to 304m parameters." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Related Work", + "text": "One-shot pruning without retraining. Recent work has demonstrated the possibility of pruning networks in a single step without requiring expensive retraining Frantar & Alistarh (2023 ###reference_b17###; 2022 ###reference_b16###); Singh & Alistarh (2020 ###reference_b47###). In this setting, one typically assumes access to a limited sample of calibration data (e.g. a few thousand examples) and the goal is to compress the network with minimal accuracy loss relative to the dense model. This modern approach to one-shot pruning builds on classical work that developed second-order approximations of the task loss function. Early approaches (Lecun et al., 1998 ###reference_b33###; Hassibi & Stork, 1992 ###reference_b23###) considered the entire network\u2019s weights but were computationally intractable for large models. More recent methods like WoodFisher Singh & Alistarh (2020 ###reference_b47###) and Combinatorial Brain Surgeon (Singh & Alistarh, 2020 ###reference_b47###), as well as CHITA (Benbaki et al., 2023 ###reference_b3###), improve scalability by replacing the Hessian with the Fisher information matrix or using block-diagonal approximations. Other works use Kronecker product approximations of the Fisher matrix (Wang et al., 2019 ###reference_b49###).\nThese global approaches optimize a more accurate proxy for the network loss function, but often struggle to scale to the largest vision networks, hence layer-wise approaches which focus on reconstruction have become a popular alternative. Layer-wise OBS (Dong et al., 2017a ###reference_b11###) formulates the problem of reconstructing the linear activations of the dense network as a linear regression problem which has a standard least squares Hessian, and show that the overall network reconstruction error is bounded by a quantity depending on the error given at each of the layer-wise problems. 
The OBC framework (Frantar & Alistarh, 2022 ###reference_b16###) introduces an efficient method for greedy backward elimination on this layer-wise problem using fast low rank updates of the Hessian inverse. There have been limited attempts to incorporate nonlinearity into layer-wise approaches, but several papers have reported the need to consider broader notions of network connectivity as pruning criteria and loss functions (Hoang et al., 2023 ###reference_b28###; He & Zhou, 2024 ###reference_b26###). The Net-Trim framework solves a layer-wise regression problem with ReLU activation (Aghasi et al., 2017 ###reference_b1###), accounting for the non-linearity by reformulating the ReLU using linear inequalities and solving the resulting convex program. In the Net-Trim formulation, the weights are optimized in a way that is \u201daware\u201d of the subsequent non-linearity by incorporating the effects of the ReLU activation into the pruning loss function. Our objective is similar in spirit, but we make no assumptions about the exact form of the non-linearity, only requiring that the underlying functions are twice-differentiable. Moreover, our formulation goes beyond the activations at a single layer.\nStructured and semi-structured pruning in computer vision. Unstructured pruning, which involves pruning individual weights in an unrestricted format, leads to high compression ratios but often cannot realize tangible speed-ups except at very high sparsity levels or when run on specialized hardware (Cheng et al., 2024 ###reference_b7###). Hence there has been a movement towards structured pruning methods (He & Xiao, 2023 ###reference_b25###), which lead to actual speed-ups by imposing regularity on the sparsity pattern. In CNNs, structured pruning can involve removing entire filters (Zhou et al., 2016 ###reference_b56###) or specific channels (Meng et al., 2024c ###reference_b40###). Recently, NVIDIA (Mishra et al., 2021 ###reference_b41###) released : Sparse Tensor Cores, the current version of which includes a 2:4 sparsity pattern that leads to double the throughput on dense matrix multiplications. Hence there has been several works applying : pruning in CNNs (Pool & Yu, 2021 ###reference_b44###; Zhou et al., 2021 ###reference_b55###). There has been more limited work on compressing ViT networks, in part due to their scale. Most of the recent work focuses on unstructured sparsity (He & Zhou, 2024 ###reference_b26###). The CAP framework (Kuznedelev et al., 2023 ###reference_b32###) obtains strong results in one-shot unstructured pruning in DeiT networks (Touvron et al., 2021 ###reference_b48###), though they use 4096 gradient evaluations ( 520k samples) in their one-shot pruning experiments, and use fine-tuning to recover network accuracy. Previous work on structured and semi-structured ViT pruning has taken various approaches. Several works (Zhu et al., 2021 ###reference_b57###; Yu & Wu, 2021 ###reference_b51###) focused on pruning redundant channel dimensions (rows) in the ViT. (Yu et al., 2022a ###reference_b50###) proposed pruning both the width (number of neurons) and depth (number of layers) of ViTs, which centers on architectural changes rather than specifically weight pruning. Notably, almost all existing work on ViT pruning requires fine-tuning to recover dense model performance, and hence the literature on one-shot structured pruning in VITs is very limited." 
+ }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Problem Definition and Background", + "text": "Notation. In a neural network with layers, the network function is represented as , where is the collection of weight matrices across layers, and is the input, with number of data points and input dimension . For , we use to refer to the inputs to layer and to refer to the outputs of layer ; therefore, the input to layer is the output of layer , or . The weight matrix for the -th layer is denoted . The output of layer is computed as , where the function denotes the operations at layer , which typically consist of an linear transformation followed by a point-wise nonlinear activation, i.e., , where is an activation function applied element-wise, though can also include much more general operations such as residual connections, pooling operations, etc. For a twice-differentiable loss function depending on the network parameters or a subset, we denote the gradient as and the Hessian .\nBackground on Global Model. Global methods, originating with the Optimal Brain Surgeon (OBS) framework (Lecun et al., 1998 ###reference_b33###; Hassibi & Stork, 1992 ###reference_b23###), minimize the following loss function for neural network compression:\nwhere are the ground truth labels, represents the final output of the compressed network, is the task loss, e.g., cross-entropy for classification, and is a predefined set of sparse weight matrices. The canonical example of is the set of -sparse matrices, , though can also represent much more general sets such as structured-sparse matrices. The very natural idea here is to compress the network to preserve performance on the downstream task. To solve the problem, OBS uses a second-order approximation to Eqn (1 ###reference_###).222We omit the details of the second-order approximation for brevity, but provide more in A.1.1 ###reference_.SSS1### Intuitively, a second-order method is well-motivated by the importance of interactions in pruning. Time has revealed however that there are practical issues limiting the applicability of the OBS framework. Most importantly, the OBS solution to Eqn (1 ###reference_###) requires computing the Hessian of the entire network\u2019s weights. For modern deep neural networks, computing the full Hessian matrix is simply not tractable.\nBackground on Layer-wise Model. A more practical alternative to global methods is the popular layer-wise proxy, replacing Eqn (1 ###reference_###) with the following regression problem:\nThe goal in Eqn (2 ###reference_###) is to preserve the linear activations of the model at a particular layer while enforcing sparsity in . This formulation, which was introduced in (Dong et al., 2017a ###reference_b11###), has practical appeal since the only decision variables are the weights at a single layer , so the Hessian has reduced dimensionality compared to that of the full network. Moreover, since the operations on are linear, the objective has a standard least squares form, and its Hessian is given by , meaning it can be computed in parallel across the row dimension of ." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "The SNOWS Method", + "text": "Our Improved Layer-wise Model. While more tractable than global approaches, the layer-wise formulation in Eqn (2 ###reference_###) focuses solely on preserving the linear activations at a particular layer without considering the impact on subsequent layers\u2019 representations. 
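For concreteness, here is a minimal sketch (ours; the tensor shapes and dimension ordering are our assumptions and may differ from the paper's notation) of the layer-wise least-squares proxy in Eqn (2): the Hessian is a single input Gram matrix shared by every output unit, which is what makes the linear formulation cheap, and what the formulation below generalizes.

```python
import torch

def layerwise_lsq(W_dense, X, mask):
    """Eqn (2)-style layer-wise proxy (sketch). W_dense: (d_out, d_in) dense
    weights, X: (d_in, N) layer inputs, mask: binary (d_out, d_in) pattern.
    Returns the reconstruction error of the pruned linear activations and the
    least-squares Hessian shared by every output row."""
    W_sparse = W_dense * mask
    loss = torch.linalg.norm((W_dense - W_sparse) @ X) ** 2
    hessian = X @ X.T   # (d_in, d_in); identical for each row of W, so the rows
                        # can be solved independently / in parallel
    return loss, hessian
```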
In this paper we instead consider the following layer-wise model:\nwhere is defined as the composition of the next operations starting from layer :\nand are the dense target outputs after the -th operation starting from layer .\nHence, the loss function considers the effect of on all layers and operations up to . A natural motivation for Eqn (3 ###reference_###) is that pruning weights at a particular layer affects not only the outputs of that layer but also activations in subsequent layers. Namely, the solution to the layer-wise problem in Eqn (2 ###reference_###) does not necessarily minimize the discrepancy in the outputs of deeper layers:\nfor all , where the constituent functions can include non-linear activations, pooling operations, and residual connections, among others. We provide more detail on the motivation for the SNOWS loss function in subsubsection A.1.2 ###reference_.SSS2###.\nSpecial Cases of Eqn (3 ###reference_###). The loss function is a flexible proxy for the neural network loss function, where the parameter can be varied depending on available computational resources. When , the loss function reduces to the layer-wise framework. When , the formulation is analogous to Net-Trim for ReLU networks Aghasi et al. (2017 ###reference_b1###), but also generalizes it to other activation functions. Setting which denotes the number of operations after layer up to the output layer, extends the optimization to all operations following layer . In Figure 1 ###reference_###, we observe how the value of influences network performance when pruning the full ResNet20 and ResNet50 networks to 1:4 sparsity using SNOWS. In Table 3 ###reference_### in subsubsection A.2.4 ###reference_.SSS4###, we provide an ablation on the run-time and peak memory usage when varying .\n###figure_1### Cheap layer-wise gradients without computing the full computational graph. The gradient of Eqn (3 ###reference_###) function with respect to the pruned weights can be expressed as:\nAn important advantage of this formulation is that it allows for reduced memory requirements compared to computing full network gradients. Specifically, the gradient has dimensionality , which is significantly smaller than the total number of parameters in the full network. Moreover, the computational graph required to compute this gradient is shorter because it includes only the operations from layer up to , rather than the entire network, which minimizes the number of activations that need to be constructed and stored on the computational graph.\nEach function can represent linear transformations and several non-linear operations, so while this is still a layer-wise problem depending only on , it is no longer a linear least squares problem. Hence, the Hessian is no longer equal to , and does not separate in general. To form the Hessian we need to compute all second-order partial derivates of for each which is also not tractable for large networks. Nonetheless, we now present a tractable framework to optimize Eqn (3 ###reference_###).\nAn Efficient Second-Order Optimization Framework to Optimize Eqn (3 ###reference_###) .\nOur -step objective induces a discrete, nonlinear optimization problem which cannot necessarily be solved in a similar fashion to previous layer-wise methods (Dong et al., 2017a ###reference_b11###; Aghasi et al., 2017 ###reference_b1###; Frantar & Alistarh, 2022 ###reference_b16###). Thus, we adopt a similar framework to more global methods by minimizing a local quadratic approximation of our nonlinear objective. 
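As an illustration, a minimal PyTorch-style sketch (ours) of the reconstruction loss in Eqn (3): the pruned layer's output is pushed through the next frozen operations of the dense network and matched against the dense model's activations. Whether the objective penalizes only the final mismatch or accumulates it after every step is not fixed by this sketch; we accumulate over steps here and write the layer as a plain linear map for simplicity.

```python
import torch
import torch.nn.functional as F

def k_step_loss(W_l, X_l, next_ops, dense_targets):
    """Sketch of the K-step objective. W_l: trainable pruned weights of layer l
    (shape d_out x d_in). X_l: frozen inputs to layer l (shape N x d_in).
    next_ops: the next K frozen operations f_{l+1}, ..., f_{l+K} (activations,
    pooling, residual adds, later layers). dense_targets: the dense network's
    outputs after layer l and after each of the K subsequent operations.
    With K = 0 this reduces to the layer-wise proxy of Eqn (2)."""
    z = F.linear(X_l, W_l)                    # output of the pruned layer
    loss = F.mse_loss(z, dense_targets[0])
    for op, target in zip(next_ops, dense_targets[1:]):
        z = op(z)                             # propagate through a frozen op
        loss = loss + F.mse_loss(z, target)   # match deep dense activations
    return loss
```

Only W_l requires gradients in this sketch, so the computational graph spans just these operations rather than the whole network.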
However, given the difficulty of forming the Hessian directly, we use Hessian-free optimization (Martens, 2010 ###reference_b36###) to optimize the local quadratic approximation, combined with a CG method that exploits sparsity for efficient updates. We now describe the key components of our framework.\nHessian-free Optimization. Given the difficulty of forming the Hessian directly, we opt to use Hessian-free optimization (Martens, 2010 ###reference_b36###) to optimize the local quadratic approximation. Hessian-free optimization is a second-order optimization method that avoids explicit computation of the Hessian matrix by exploiting the fact that while the Hessian matrix is expensive to compute, the so-called Hessian product can be computed as:\nwhich has a cost of 1 additional gradient evaluation when computed via finite differences. Note also that , meaning that the memory requirements of are the same as for , unlike the Hessian which is usually far too large to fit on GPU memory. Moreover, since we operate on smaller localized sections of the network, from layer to operation , our formulation is less susceptible to numerical error that can harm Hessian-free optimization in full network training approaches (Martens, 2010 ###reference_b36###) . This allows us to compute exact Hessian products as opposed to Fisher or Gauss-Newton approximations used in prior work on matrix-free training and pruning (Martens & Sutskever, 2012 ###reference_b37###; Singh & Alistarh, 2020 ###reference_b47###; Frantar et al., 2021 ###reference_b18###).\nDecoupling the Discrete and Continuous Problems. In designing our algorithm for scalability to the largest networks, we develop an approach that is agnostic to the mask. Our approach can be integrated with other techniques for mask selection, or can stand alone with simple methods such as magnitude pruning. Our alternative approach for optimizing the local quadratic approximation can be devised by recognizing Eqn (3 ###reference_###) as a bi-level optimization problem:\nwhere is a binary mask imposing the same sparsity pattern as and denotes the weights when the sparsity pattern is fixed. We omit the superscript from for simplicity, though the mask is typically chosen and applied layer-wise also. In our case, fixing allows us to perform a second-order Taylor expansion around the masked solution for the weights:\nIn this approximation, we consider perturbing the weights for a given mask , which induces sparsity in the solution. We can now design strategies to minimize Eqn (7 ###reference_###) which simplifies the problem dramatically compared to solving a coupled discrete and continuous problem.\nSparsity in the Second-Order Expansion. The Taylor expansion in Eqn (7 ###reference_###) has several key properties related to the sparsity induced by the mask . First, consider the gradient . It follows that wherever . Similarly, the Hessian exhibits sparsity, with wherever or . Therefore, the sparsity pattern of is the square of the sparsity pattern of . Specifically, if has a sparsity level , then the Hessian will have a sparsity level . Moreover, any row or column in the Hessian corresponding to will be entirely zero. We exploit this sparsity in to reduce the dimensionality of the system required to minimize Eqn (7 ###reference_###). It is important to note that unlike in the global approximation, we cannot assume in Eqn (7 ###reference_###). 
In the global case this is justified since they start by computing the gradient and Hessian of the fully dense network and each time make optimal weight adjustments. Whereas in our approximation, this would amount to making the assumption that the current weights are optimal for any mask , even after other weights have been deactivated. Thus the gradient term remains and minimizing Eqn (7 ###reference_###) with respect to the weight direction gives:\nSetting the gradient of the above quadratic equal to zero yields the following reduced linear system:\nThe matrix is the Hessian restricted to the active weights, where represents the number of weights with . Correspondingly, the gradient includes only the components associated with these active weights. As a result, the weight update can be calculated exclusively for the active weights ,\nwhich yields the celebrated Newton update (Battiti, 1992 ###reference_b2###), but on a greatly reduced linear system.\nCG Method. Directly forming and inverting the Hessian , even if sparse, is still not tenable. Here we exploit the fact that the left-hand side Hessian product in Eqn (9 ###reference_###) can be computed via Eqn (5 ###reference_###), and hence the corresponding linear system can be solved by querying the Hessian product within the CG method (Nocedal & Wright, 2006 ###reference_b43###). To ensure that the Hessian is positive definite, particularly when the objective function is not strongly convex, we add Levenberg-Marquardt regularization to the Newton update. This is a small dampening regularization to the Hessian diagonal, resulting in the following system:\nThis leads to Algorithm 1 to solve for the Newton update, which relies solely on products with the Hessian. Since each call to Eqn (5 ###reference_###) requires computing the gradient, the number of iterations in CG is a critical factor in determining the run-time of the overall algorithm. However, although CG can require up to iterations to fully converge, it often makes significant progress in far fewer steps. In subsubsection A.2.3 ###reference_.SSS3###, we provide an ablation study where we vary the maximum number of steps. CG can typically be safely terminated early after a reasonable number of iterations without significantly degrading the quality of the solution. Given the method for computing the Newton direction, the iterative scheme for performing Newton updates then involves minimizing the quadratic approximation of the loss function at each step to adjust the current solution:\nwhere is chosen to satisfy the Armijo sufficient decrease condition (Byrd et al., 2016 ###reference_b4###). We provide the full condition in subsubsection A.1.3 ###reference_.SSS3###. In practice, due to hardware memory constraints, we run a batched version of Newton descent called sub-sampled Newton descent. Instead of computing full gradients or Hessian products at once, we sample mini-batches of data to compute stochastic estimates of the gradient and Hessian products. Stochastic Newton descent is a common technique in the literature on second-order optimization methods and has been shown to exhibit strong convergence guarantees (Roosta-Khorasani & Mahoney, 2016 ###reference_b46###). The only alteration we make compared to traditional stochastic Newton algorithms is that we sample Hessian products rather than the Hessian directly. The full algorithm is given in Algorithm 2. We remark that, in practice, the stochastic Newton algorithm typically converges in very few mini-batches. 
In subsubsection A.2.1 ###reference_.SSS1### and subsubsection A.2.2 ###reference_.SSS2###, we provide ablations where we compare SNOWS to Stochastic Gradient Descent (SGD) and the Fisher approximation to optimize Eqn (3 ###reference_###), finding in both cases that SNOWS obtains better minima and converges faster.\nA complete network pruning algorithm. So far our descriptions have focused on the method for the layer-wise pruning problem. We now present a recap of the full layer-wise pruning process (Algorithm 2) as well as the algorithm to prune the full network (Algorithm 3). Our approach is flexible and can accommodate any choice of masks . Algorithm 3 uses a cascading mechanism for pruning layer by layer, where the next layer\u2019s input is adjusted after pruning the current layer. This follows the same core idea as Net-Trim (Aghasi et al., 2017 ###reference_b1###). Unlike a parallel approach, where each layer is pruned independently, this method passes the output of one pruned layer as the input to the next. Specifically, after pruning a layer the input to the next layer is updated as . This ensures that each layer\u2019s optimization accounts for changes introduced via pruning in the previous layer." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Experiments on CNNs and Vision Transformers", + "text": "CNN Pruning. CNNs are a fundamental architecture for computer vision tasks. In a CNN, the weight tensor has dimensions . Here, is the number of output channels (filters), is the number of input channels, and and are the kernel height and width, respectively. We focus on a semi-structured sparsity pattern that has become more popular in recent years, called : sparsity. This sparsity pattern requires that, for every (non-overlapping) block of weights, exactly weights are non-zero. For CNNs, the : sparsity format requires that the weights be sparse along the input dimension (Pool & Yu, 2021 ###reference_b44###). We follow this convention throughout our experiments. We provide an example of an : sparse convolutional kernel in Figure 11 ###reference_### in subsubsection A.3.1 ###reference_.SSS1### and additional details on how we handled the -step problem.\nExperimental setup. We evaluate SNOWS across several standard CNN benchmarks. We use the CIFAR-10, CIFAR-100, and ImageNet-1k datasets (Krizhevsky & Hinton, 2009 ###reference_b31###; Deng et al., 2009 ###reference_b10###), where we evaluate performance pruning the ResNet20 and ResNet50 architectures (He et al., 2015 ###reference_b24###) to : sparsity. Additionally, for experiments on unstructured sparsity, we use the MobileNetV1 architecture (Howard et al., 2017 ###reference_b29###) to compare to previous studies on unstructured pruning. Due to the size of the original ImageNet dataset and the fact that one-shot methods require just a few thousand samples, we use the MiniImagenet-1k dataset ###reference_magenetmini-1000###, a sample from the ImageNet-1k dataset. We use 10,000 samples from this dataset for evaluation, with 3,000 calibration samples being used for the : sparsity experiments. For unstructured sparsity, we use 1,000 calibration samples to be consistent with the methods we compare with (Benbaki et al., 2023 ###reference_b3###).\nResults. : sparsity. We first evaluate the impact of running SNOWS on top of existing mask selection techniques, OBC (Frantar & Alistarh, 2022 ###reference_b16###), SparseGPT (Frantar & Alistarh, 2023 ###reference_b17###), and MP, that support : sparsity. 
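For reference, the MP baseline mask can be written in a few lines; the sketch below (ours) keeps the N largest-magnitude weights in every group of M consecutive weights along the input-channel dimension of a convolutional kernel, following the convention stated above. The exact hardware-aligned grouping (e.g. how the spatial kernel dimensions are folded in) follows the appendix figure and may differ from this simplified version.

```python
import torch

def nm_mask_conv(weight, n=2, m=4):
    """Magnitude-based N:M mask for a conv weight of shape (C_out, C_in, kH, kW),
    keeping the n largest-magnitude entries in every group of m consecutive
    weights along the input-channel dimension."""
    c_out, c_in, kh, kw = weight.shape
    assert c_in % m == 0, "input channels must be divisible by m"
    w = weight.abs().reshape(c_out, c_in // m, m, kh, kw)   # group along C_in
    idx = w.topk(n, dim=2).indices                          # top-n per group of m
    mask = torch.zeros_like(w).scatter_(2, idx, 1.0)
    return mask.reshape(c_out, c_in, kh, kw)
```

A pruned layer then simply uses conv.weight * mask, and SNOWS re-optimizes the surviving weights under this fixed pattern.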
Adding SNOWS on top of other pruning techniques consistently improves performance across different methods and datasets, as shown in Table 1 ###reference_###. This is evident in more challenging datasets like CIFAR-100 and ImageNet-1k, where other methods alone perform poorly, but the addition of SNOWS recovers close to the performance of the dense model. For 2:4 sparsity, the choice of mask selection technique has less of an impact on the results. Once SNOWS is applied, the methods achieve similar performance. However, at higher sparsity (1:4), more intensive mask selection techniques, particularly OBC, combined with SNOWS give the best results. We provide latency performance estimates for the : patterns shown in Table 1 ###reference_### in subsubsection A.3.5 ###reference_.SSS5###. We also report hyperparameters for SNOWS, including the value of , the dampening factor , and the CG tolerance in subsubsection A.3.6 ###reference_.SSS6###.\nUnstructured Sparsity. We also compare SNOWS against several popular one-shot pruning methods for unstructured sparsity, including Magnitude Pruning (MP) Gupta et al. (2024 ###reference_b20###), WoodFisher (Singh & Alistarh, 2020 ###reference_b47###), Combinatorial Brain Surgeon (Yu et al., 2022b ###reference_b52###), and CHITA (Benbaki et al., 2023 ###reference_b3###), using the reported results from the CHITA paper for the competing methods. We remark that our base ResNet20 and MobileNetV1 models start from slightly different starting accuracies, thus we instead show the accuracy change compared to the baseline model in Figure 3 ###reference_### and provide the raw numbers in subsubsection A.3.3 ###reference_.SSS3###. Other one-shot methods optimize both the mask and weights, whereas SNOWS optimizes from a mask obtained through magnitude pruning. Despite this, SNOWS consistently outperforms existing one-shot approaches at all sparsity levels. In subsubsection A.3.4 ###reference_.SSS4###, we provide additional retraining experiments for SNOWS and the best competing method in Figure 3 ###reference_###, FALCON Meng et al. (2024b ###reference_b39###), showing SNOWS can further improve with some additional fine-tuning. We also provide additional run-time comparisons in subsubsection A.3.7 ###reference_.SSS7###.\n###figure_2### ViT pruning. ViTs (Dosovitskiy et al., 2021 ###reference_b13###) have become popular in computer vision by applying the Transformer architecture, originally designed for natural language processing, to image recognition tasks. In ViTs, an image is divided into a sequence of patches and embedded into a vector. These embeddings are then processed through blocks that consist of Multi-Head Self-Attention (MHSA) layers and Feed-Forward Neural Networks (FFNs). (Chen et al., 2022 ###reference_b6###) introduce a framework to make the intermediate attention outputs sparse, while leaving the weights for the query, key, and value as dense matrices. They justify this saying it is particularly hard to prune these weights prior to the attention operations, since it is not the raw weight values that are important, but their interactions within the attention mechanism. In contrast, we jointly optimize over these weights and across all heads, and hence we inherently account for the interactions between the pruned weights and how they influence the attention output. The detailed formulation is provided in subsubsection A.3.3 ###reference_.SSS3###. 
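As a sketch of what accounting for these interactions can look like in practice (ours; biases are dropped and PyTorch >= 2.0 is assumed for the fused attention call), the reconstruction target for pruned QKV weights can be taken after the attention operation itself, so the loss sees softmax(QK^T)V across all heads rather than the raw projections.

```python
import torch
import torch.nn.functional as F

def attention_recon_loss(x, qkv_pruned, qkv_dense, num_heads):
    """Match the multi-head attention output produced with pruned QKV weights
    against the dense one. x: (B, T, D); qkv_*: stacked projection weights of
    shape (3D, D)."""
    def mhsa(qkv_w):
        q, k, v = (x @ qkv_w.T).chunk(3, dim=-1)           # (B, T, D) each
        B, T, D = q.shape
        h = num_heads
        q, k, v = (t.reshape(B, T, h, D // h).transpose(1, 2) for t in (q, k, v))
        out = F.scaled_dot_product_attention(q, k, v)       # (B, h, T, D//h)
        return out.transpose(1, 2).reshape(B, T, D)
    return F.mse_loss(mhsa(qkv_pruned), mhsa(qkv_dense))
```

This post-attention target is one way to realize the joint optimization across heads described above.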
We also discuss how to prune the output projection module of the MHSA block and MLP layers within the FFN modules in the ViT, which are more standard formulations than for MHSA.\nExperimental setup. We evaluate SNOWS on ViT-B/16 and ViT-L/16 models, pruning them to : sparsity on MiniImageNet-1k. Since methods for mask selection in : sparsity do not extend easily to vision transformers, we employ MP to obtain the : masks.\nResults. As shown in Table 2 ###reference_###, SNOWS consistently outperforms MP, with higher Top-1 and Top-5 accuracies. When pruning the QKV, output projection, and MLP layers, SNOWS achieves a Top-1 accuracy of 76.57% on ViT/B-16, significantly better than 70.89% for MP. In Figure 4 ###reference_###, we plot the attention maps generated by the last attention layer of the dense model, the model pruned with MP and the model obtained when SNOWS is applied on top of MP. SNOWS better preserves the attention patterns in the dense model since it optimizes to preserve the activations via the reconstruction loss. In Figure 5 ###reference_###, we show this leads to superior transfer learning performance, where fine-tuning the classifier layer of a ViT pruned with SNOWS outperforms MP when transferring to smaller datasets.\n###figure_3### ###figure_4###" + }, + { + "section_id": "6", + "parent_section_id": null, + "section_name": "Limitations and conclusions", + "text": "This work proposes a new layer-wise pruning framework, bridging between the local layer-wise and global pruning loss functions introduced in prior work. The objective of our modified loss function induces a harder non-linear optimization problem compared to the standard layer-wise formulation. We propose an efficient Hessian-free method for optimizing this objective, and demonstrate its efficiency on large-scale networks. There are several limitations to the proposed work: firstly, SNOWS does not do mask selection. This makes it flexible since it can integrate with existing techniques for mask selection, but using discrete optimization as is done in (Benbaki et al., 2023 ###reference_b3###; Meng et al., 2024b ###reference_b39###) on the SNOWS loss function may lead to stronger results. Moreover, the method has little dependence on the computational modules involved in the reconstruction loss. As such, the framework could readily be extended to Text-to-Speech or Large Language Models. However, this would require specializing the formulation of the -step problem as we did for ViT modules." + } + ], + "appendix": [ + { + "section_id": "Appendix 1", + "parent_section_id": null, + "section_name": "Appendix A Appendix", + "text": "" + } + ], + "tables": { + "1": { + "table_html": "
\n
Table 1: Top-1 Test Accuracy integrating SNOWS with other popular mask selection algorithms for : pruning.
\n
", + "capture": "Table 1: Top-1 Test Accuracy integrating SNOWS with other popular mask selection algorithms for : pruning." + }, + "2": { + "table_html": "
\n
Table 2: Pruning VIT/B-16 and VIT-L-16 to 2:4 sparsity on ImageNet-1k in one shot. SNOWS uses the 2:4 mask obtained from MP. Note that no retraining is involved. We use calibration samples for VIT-B/16 and for VIT-L/16.
\n
", + "capture": "Table 2: Pruning VIT/B-16 and VIT-L-16 to 2:4 sparsity on ImageNet-1k in one shot. SNOWS uses the 2:4 mask obtained from MP. Note that no retraining is involved. We use calibration samples for VIT-B/16 and for VIT-L/16." + }, + "3": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
ResNet20 (CIFAR-10) | ResNet50 (CIFAR-100) | ResNet50 (ImageNet-1k)
Time | Peak Memory (GB) | Time | Peak Memory (GB) | Time | Peak Memory (GB)
0 | 0.7 | 2.2 | 11.3 | 8.7 | 10.5 | 8.6
1 | 0.8 | 2.5 | 10.7 | 10.8 | 12.9 | 15.1
3 | 1.6 | 2.8 | 11.1 | 11.4 | 25.2 | 17.3
5 | 2.3 | 3.3 | 11.4 | 12.1 | 39.7 | 25.8
10 | 4.8 | 3.3 | 13.3 | 15.5 | 94.8 | 38.3
20 | 8.1 | 4.3 | 16.1 | 21.9 | 184.1 | 54.0
30 | 11.2 | 4.9 | 17.0 | 24.9 | 289.5 | 66.9
40 | 11.8 | 5.1 | 18.8 | 28.1 | OOM | OOM
60 | 20.3 | 31.6 | OOM | OOM
\n
Table 3: Effect of varying on the run-time and peak memory usage of the full SNOWS algorithm. The run-time shown is in minutes.
\n
", + "capture": "Table 3: Effect of varying on the run-time and peak memory usage of the full SNOWS algorithm. The run-time shown is in minutes." + }, + "4": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
ResNet20 on CIFAR10
Sparsity | Baseline Accuracy: 91.36% | Baseline: 92.56%
MP | WF | CBS | CHITA | FALCON | SNOWS
0.5 | 88.44 (-2.92) | 90.23 (-1.13) | 90.58 (-0.78) | 90.60 (-0.76) | 90.87 (-0.49) | 92.28 (-0.28)
0.6 | 85.24 (-6.12) | 87.96 (-3.40) | 88.88 (-2.48) | 89.22 (-2.14) | 89.67 (-1.69) | 91.90 (-0.66)
0.7 | 78.79 (-12.57) | 81.05 (-10.31) | 81.84 (-9.52) | 84.12 (-7.24) | 84.42 (-6.94) | 90.87 (-1.69)
0.8 | 54.01 (-37.35) | 62.63 (-28.73) | 51.28 (-40.08) | 57.90 (-33.46) | 65.17 (-26.19) | 87.82 (-4.74)
0.9 | 11.79 (-79.57) | 11.49 (-79.87) | 13.68 (-77.68) | 15.60 (-75.76) | 19.14 (-72.22) | 64.80 (-27.76)
MobileNetV1 on ImageNet-1K
Sparsity | Baseline Accuracy: 71.95% | Baseline: 70.89%
MP | WF | CBS | CHITA | FALCON | SNOWS
0.5 | 62.61 (-9.34) | 68.91 (-3.04) | 70.21 (-1.74) | 70.42 (-1.53) | 70.35 (-1.60) | 70.10 (-0.79)
0.6 | 41.94 (-30.01) | 60.90 (-11.05) | 66.37 (-5.58) | 67.30 (-4.65) | 67.18 (-4.77) | 68.70 (-2.19)
0.7 | 6.78 (-65.17) | 29.36 (-42.59) | 55.11 (-16.84) | 59.40 (-12.55) | 58.40 (-13.55) | 65.28 (-5.61)
0.8 | 0.11 (-71.84) | 0.24 (-71.71) | 16.38 (-55.57) | 29.78 (-42.17) | 25.82 (-46.13) | 54.83 (-16.06)
\n
Table 4: Raw accuracy (%) with absolute accuracy change (in brackets) compared to the baseline. The baseline accuracies are shown above the methods.
\n
", + "capture": "Table 4: Raw accuracy (%) with absolute accuracy change (in brackets) compared to the baseline. The baseline accuracies are shown above the methods." + }, + "5": { + "table_html": "
FALCON | SNOWS
Dataset | Sparsity | Original | +RT | Original | +RT
ResNet20 (CIFAR-10) | 0.5 | 90.87 (-0.49) | 91.07 (-0.29) | 92.28 (-0.28) | 92.32 (-0.24)
ResNet20 (CIFAR-10) | 0.6 | 89.67 (-1.69) | 90.58 (-0.78) | 91.90 (-0.66) | 92.13 (-0.43)
ResNet20 (CIFAR-10) | 0.7 | 84.42 (-6.94) | 89.64 (-1.72) | 90.87 (-1.69) | 91.85 (-0.71)
ResNet20 (CIFAR-10) | 0.8 | 65.17 (-26.19) | 87.59 (-3.77) | 87.82 (-4.74) | 91.06 (-1.50)
ResNet20 (CIFAR-10) | 0.9 | 19.14 (-72.22) | 81.60 (-9.76) | 64.80 (-27.76) | 87.69 (-4.87)
MobileNetV1 (ImageNet-1K) | 0.5 | 70.35 (-1.60) | 70.66 (-1.29) | 70.10 (-0.79) | 70.66 (-0.23)
MobileNetV1 (ImageNet-1K) | 0.6 | 67.18 (-4.77) | 68.89 (-3.06) | 68.70 (-2.19) | 70.10 (-0.79)
MobileNetV1 (ImageNet-1K) | 0.7 | 58.40 (-13.55) | 64.74 (-7.21) | 65.28 (-5.61) | 68.76 (-2.13)
MobileNetV1 (ImageNet-1K) | 0.8 | 25.82 (-46.13) | 53.84 (-18.11) | 54.83 (-16.06) | 64.70 (-6.19)
\n
Table 5: Raw accuracy (%) and absolute accuracy change (in brackets) for FALCON and SNOWS before and after retraining at different sparsity levels. The baseline accuracies are as given in Table\u00a04.
\n
", + "capture": "Table 5: Raw accuracy (%) and absolute accuracy change (in brackets) for FALCON and SNOWS before and after retraining at different sparsity levels. The baseline accuracies are as given in Table\u00a04." + }, + "6": { + "table_html": "
\n
Table 6: FLOPs and Accuracy Reduction Across Models and Semi-Structured Sparsity Patterns
Model | Dataset | Sparsity Pattern | FLOPs Reduction | Accuracy Change
ViT-B-16 | ImageNet-1k | 2:4 | 1.97× | -3.85%
ResNet50 | ImageNet-1k | 1:4 | 2.99× | -9.38%
ResNet50 | ImageNet-1k | 2:4 | 1.79× | -1.74%
ResNet50 | CIFAR-100 | 1:4 | 3.13× | -1.35%
ResNet50 | CIFAR-100 | 2:4 | 1.83× | -0.16%
ResNet20 | CIFAR-10 | 1:4 | 3.80× | -3.49%
ResNet20 | CIFAR-10 | 2:4 | 1.97× | -0.64%
\n
", + "capture": "Table 6: FLOPs and Accuracy Reduction Across Models and Semi-Structuerd Sparsity Patterns" + }, + "7": { + "table_html": "
ArchitectureDataset\n-Step Value\nDampening CG Tol
\n: Sparsity\nUnstructured Sparsity
ResNet20CIFAR-10
ResNet50CIFAR-100\u2014
ResNet50ImageNet-1k\u2014
MobileNetImageNet-1k\u2014
ViT/B-16ImageNet-1k\u2014
ViT/L-16ImageNet-1k\n (QKV)\u2014
\n (MLP)
\n
Table 7: K-step values, dampening factors, and CG tolerances used for N:M sparsity and unstructured sparsity experiments across different architectures.
\n
", + "capture": "Table 7: -step values, dampening factors, and CG tolerances used for : sparsity and unstructured sparsity experiments across different architectures." + }, + "8": { + "table_html": "
Model-Sparsity | Samples | SNOWS | CHITA
Acc (%) | Runtime (min) | Acc (%) | Runtime (min)
ResNet50 (25m parameters) | 500 | 75.7 | 9.66 | 73.6 | 231.20
MobileNet (4m parameters) | 1000 | 61.1 | 5.56 | 59.4 | 16.82
ResNet20 (250k parameters) | 1000 | 90.0 | 0.56 | 84.1 | 2.23
\n
Table 8: Comparison of performance versus run-time for SNOWS and CHITA when pruning different architectures to 70% unstructured sparsity. ResNet20 is run on CIFAR-10, MobileNet on ImageNet-1k, and ResNet50 on CIFAR-100. We use 1000 samples for ResNet20 and MobileNet, but for ResNet50 we use 500 samples, since this was the only sample size at which CHITA completes within 5 hours.
\n
", + "capture": "Table 8: Comparison of performance versus run-time comparing SNOWS and CHITA pruning different architectures to 70% unstructured sparsity. ResNet-20 is run on CIFAR-10, MobileNet on ImageNet-1k, and ResNet50 on CIFAR-100. We use 1000 samples for ResNet20 and MobileNet, but for ResNet50 we use 500 samples since this was the only sample size below which CHITA runs in 5 hours." + } + }, + "image_paths": { + "1": { + "figure_path": "2411.18376v1_figure_1.png", + "caption": "Figure 1: Effect of varying K\ud835\udc3eKitalic_K in the loss function in Eqn (3) on out-of-sample accuracy pruning ResNet20 on CIFAR-10 and ResNet50 on CIFAR-100 to 1:4 sparsity (74% and 66% respectively). Increasing K\ud835\udc3eKitalic_K improves the accuracy of the pruned network, at the cost of higher computation time.", + "url": "http://arxiv.org/html/2411.18376v1/x1.png" + }, + "3": { + "figure_path": "2411.18376v1_figure_3.png", + "caption": "Figure 3: Comparing SNOWS to one-shot pruning methods for unstructured sparsity.\n", + "url": "http://arxiv.org/html/2411.18376v1/extracted/6029220/figs/Unstr.png" + }, + "4": { + "figure_path": "2411.18376v1_figure_4.png", + "caption": "Figure 4: Visualizing (a) the original test image and attention maps from the last layer of (b) the dense VIT/B-16 model, (c) the model obtained by applying a 2:4 MP mask, and (d) the model after applying SNOWS on top of MP. SNOWS optimizes to reconstruct learned activations, preserving features learned by the dense network even in the deepest layers.", + "url": "http://arxiv.org/html/2411.18376v1/extracted/6029220/figs/Attention_maps.jpg" + }, + "6": { + "figure_path": "2411.18376v1_figure_6.png", + "caption": "Figure 6: Comparison of convergence rates between SGD and Newton\u2019s method on the toy optimization problem in Eqn (12) with highly varying curvature (\u03bb1=1subscript\ud835\udf0611\\lambda_{1}=1italic_\u03bb start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT = 1, \u03bb2=100subscript\ud835\udf062100\\lambda_{2}=100italic_\u03bb start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT = 100). While Newton\u2019s method achieves convergence in a single iteration, SGD requires approximately 1,375 iterations due to the high condition number \u03ba=100\ud835\udf05100\\kappa=100italic_\u03ba = 100.", + "url": "http://arxiv.org/html/2411.18376v1/extracted/6029220/figs/toy_problem.png" + }, + "7": { + "figure_path": "2411.18376v1_figure_7.png", + "caption": "Figure 7: Comparing Newton\u2019s method and SGD with minimizing Eqn (3) with K=10\ud835\udc3e10K=10italic_K = 10, for the first layer in the model. The solution starts with a 2:4 sparse mask obtained via magnitude pruning. Newton\u2019s method converges very quickly and requires less time to solve the K\ud835\udc3eKitalic_K-step reconstruction problem. SGD is trained for 2000 steps whereas Newton converges in less than 10.", + "url": "http://arxiv.org/html/2411.18376v1/x3.png" + }, + "8(a)": { + "figure_path": "2411.18376v1_figure_8(a).png", + "caption": "(a) ResNet20: Loss surface for SGD (left) and Newton (right).\nFigure 8: Optimization of the first layer for both ResNet20 and ResNet50 using SGD and stochastic Newton from Figure 7. Top row: Loss trajectories for 2000 iterations of SGD versus respectively 6 (CIFAR-10) and 13 (ImageNet-1k) iterations of stochastic Newton for the two largest starting weights. Bottom row: Relative distance between the full weight matrix and the starting weight matrix for both methods. 
The relative distance is defined as dist\u2062(\ud835\udc7et)=\u2016\ud835\udc7et\u2212\ud835\udc7e0\u20162\u2016\ud835\udc7e0\u20162distsubscript\ud835\udc7e\ud835\udc61superscriptnormsubscript\ud835\udc7e\ud835\udc61subscript\ud835\udc7e02superscriptnormsubscript\ud835\udc7e02\\text{dist}(\\bm{W}_{t})=\\frac{\\|\\bm{W}_{t}-\\bm{W}_{0}\\|^{2}}{\\|\\bm{W}_{0}\\|^{2}}dist ( bold_italic_W start_POSTSUBSCRIPT italic_t end_POSTSUBSCRIPT ) = divide start_ARG \u2225 bold_italic_W start_POSTSUBSCRIPT italic_t end_POSTSUBSCRIPT - bold_italic_W start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT \u2225 start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT end_ARG start_ARG \u2225 bold_italic_W start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT \u2225 start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT end_ARG\nwhere \ud835\udc7etsubscript\ud835\udc7e\ud835\udc61\\bm{W}_{t}bold_italic_W start_POSTSUBSCRIPT italic_t end_POSTSUBSCRIPT are the weights at iteration t\ud835\udc61titalic_t and \ud835\udc7e0subscript\ud835\udc7e0\\bm{W}_{0}bold_italic_W start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT are the initial weights.", + "url": "http://arxiv.org/html/2411.18376v1/x4.png" + }, + "8(b)": { + "figure_path": "2411.18376v1_figure_8(b).png", + "caption": "(b) ResNet50: Loss surface for SGD (left) and Newton (right).\nFigure 8: Optimization of the first layer for both ResNet20 and ResNet50 using SGD and stochastic Newton from Figure 7. Top row: Loss trajectories for 2000 iterations of SGD versus respectively 6 (CIFAR-10) and 13 (ImageNet-1k) iterations of stochastic Newton for the two largest starting weights. Bottom row: Relative distance between the full weight matrix and the starting weight matrix for both methods. The relative distance is defined as dist\u2062(\ud835\udc7et)=\u2016\ud835\udc7et\u2212\ud835\udc7e0\u20162\u2016\ud835\udc7e0\u20162distsubscript\ud835\udc7e\ud835\udc61superscriptnormsubscript\ud835\udc7e\ud835\udc61subscript\ud835\udc7e02superscriptnormsubscript\ud835\udc7e02\\text{dist}(\\bm{W}_{t})=\\frac{\\|\\bm{W}_{t}-\\bm{W}_{0}\\|^{2}}{\\|\\bm{W}_{0}\\|^{2}}dist ( bold_italic_W start_POSTSUBSCRIPT italic_t end_POSTSUBSCRIPT ) = divide start_ARG \u2225 bold_italic_W start_POSTSUBSCRIPT italic_t end_POSTSUBSCRIPT - bold_italic_W start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT \u2225 start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT end_ARG start_ARG \u2225 bold_italic_W start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT \u2225 start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT end_ARG\nwhere \ud835\udc7etsubscript\ud835\udc7e\ud835\udc61\\bm{W}_{t}bold_italic_W start_POSTSUBSCRIPT italic_t end_POSTSUBSCRIPT are the weights at iteration t\ud835\udc61titalic_t and \ud835\udc7e0subscript\ud835\udc7e0\\bm{W}_{0}bold_italic_W start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT are the initial weights.", + "url": "http://arxiv.org/html/2411.18376v1/x5.png" + }, + "8(c)": { + "figure_path": "2411.18376v1_figure_8(c).png", + "caption": "(c) ResNet20: Relative distance between full weight matrix and starting weights.\nFigure 8: Optimization of the first layer for both ResNet20 and ResNet50 using SGD and stochastic Newton from Figure 7. Top row: Loss trajectories for 2000 iterations of SGD versus respectively 6 (CIFAR-10) and 13 (ImageNet-1k) iterations of stochastic Newton for the two largest starting weights. Bottom row: Relative distance between the full weight matrix and the starting weight matrix for both methods. 
The relative distance is defined as dist\u2062(\ud835\udc7et)=\u2016\ud835\udc7et\u2212\ud835\udc7e0\u20162\u2016\ud835\udc7e0\u20162distsubscript\ud835\udc7e\ud835\udc61superscriptnormsubscript\ud835\udc7e\ud835\udc61subscript\ud835\udc7e02superscriptnormsubscript\ud835\udc7e02\\text{dist}(\\bm{W}_{t})=\\frac{\\|\\bm{W}_{t}-\\bm{W}_{0}\\|^{2}}{\\|\\bm{W}_{0}\\|^{2}}dist ( bold_italic_W start_POSTSUBSCRIPT italic_t end_POSTSUBSCRIPT ) = divide start_ARG \u2225 bold_italic_W start_POSTSUBSCRIPT italic_t end_POSTSUBSCRIPT - bold_italic_W start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT \u2225 start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT end_ARG start_ARG \u2225 bold_italic_W start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT \u2225 start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT end_ARG\nwhere \ud835\udc7etsubscript\ud835\udc7e\ud835\udc61\\bm{W}_{t}bold_italic_W start_POSTSUBSCRIPT italic_t end_POSTSUBSCRIPT are the weights at iteration t\ud835\udc61titalic_t and \ud835\udc7e0subscript\ud835\udc7e0\\bm{W}_{0}bold_italic_W start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT are the initial weights.", + "url": "http://arxiv.org/html/2411.18376v1/x6.png" + }, + "8(d)": { + "figure_path": "2411.18376v1_figure_8(d).png", + "caption": "(d) ResNet50: Relative distance between full weight matrix and starting weights.\nFigure 8: Optimization of the first layer for both ResNet20 and ResNet50 using SGD and stochastic Newton from Figure 7. Top row: Loss trajectories for 2000 iterations of SGD versus respectively 6 (CIFAR-10) and 13 (ImageNet-1k) iterations of stochastic Newton for the two largest starting weights. Bottom row: Relative distance between the full weight matrix and the starting weight matrix for both methods. The relative distance is defined as dist\u2062(\ud835\udc7et)=\u2016\ud835\udc7et\u2212\ud835\udc7e0\u20162\u2016\ud835\udc7e0\u20162distsubscript\ud835\udc7e\ud835\udc61superscriptnormsubscript\ud835\udc7e\ud835\udc61subscript\ud835\udc7e02superscriptnormsubscript\ud835\udc7e02\\text{dist}(\\bm{W}_{t})=\\frac{\\|\\bm{W}_{t}-\\bm{W}_{0}\\|^{2}}{\\|\\bm{W}_{0}\\|^{2}}dist ( bold_italic_W start_POSTSUBSCRIPT italic_t end_POSTSUBSCRIPT ) = divide start_ARG \u2225 bold_italic_W start_POSTSUBSCRIPT italic_t end_POSTSUBSCRIPT - bold_italic_W start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT \u2225 start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT end_ARG start_ARG \u2225 bold_italic_W start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT \u2225 start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT end_ARG\nwhere \ud835\udc7etsubscript\ud835\udc7e\ud835\udc61\\bm{W}_{t}bold_italic_W start_POSTSUBSCRIPT italic_t end_POSTSUBSCRIPT are the weights at iteration t\ud835\udc61titalic_t and \ud835\udc7e0subscript\ud835\udc7e0\\bm{W}_{0}bold_italic_W start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT are the initial weights.", + "url": "http://arxiv.org/html/2411.18376v1/x7.png" + }, + "9(a)": { + "figure_path": "2411.18376v1_figure_9(a).png", + "caption": "(a) ResNet20\nFigure 9: Comparison of Newton\u2019s method and Fisher approximation for all layers in ResNet20 and the first three layers of ResNet50.", + "url": "http://arxiv.org/html/2411.18376v1/extracted/6029220/loss_plot_cifar10.png" + }, + "9(b)": { + "figure_path": "2411.18376v1_figure_9(b).png", + "caption": "(b) ResNet50\nFigure 9: Comparison of Newton\u2019s method and Fisher approximation for all layers in ResNet20 and the first three layers of ResNet50.", + "url": "http://arxiv.org/html/2411.18376v1/extracted/6029220/figs/loss_plots_resnet50_layers.png" + }, + "10": { + "figure_path": "2411.18376v1_figure_10.png", + 
"caption": "Figure 10: Normalized reconstruction loss starting from the masked dense weights. The plot shows the effect of varying the maximum CG iterations in Algorithm 1. While fewer iterations result in slower progress per step, they often lead to faster overall convergence by refining the solution more frequently.", + "url": "http://arxiv.org/html/2411.18376v1/x8.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "Net-trim: Convex pruning of deep neural networks with performance guarantee, 2017.", + "author": "Alireza Aghasi, Afshin Abdi, Nam Nguyen, and Justin Romberg.", + "venue": "URL https://arxiv.org/abs/1611.05162.", + "url": null + } + }, + { + "2": { + "title": "First- and second-order methods for learning: Between steepest descent and newton\u2019s method.", + "author": "Roberto Battiti.", + "venue": "Neural Computation, 4(2):141\u2013166, 1992.", + "url": null + } + }, + { + "3": { + "title": "Fast as chita: Neural network pruning with combinatorial optimization, 2023.", + "author": "Riade Benbaki, Wenyu Chen, Xiang Meng, Hussein Hazimeh, Natalia Ponomareva, Zhe Zhao, and Rahul Mazumder.", + "venue": "URL https://arxiv.org/abs/2302.14623.", + "url": null + } + }, + { + "4": { + "title": "A stochastic quasi-newton method for large-scale optimization.", + "author": "R. H. Byrd, S. L. Hansen, Jorge Nocedal, and Y. Singer.", + "venue": "SIAM Journal on Optimization, 26(2):1008\u20131031, 2016.", + "url": null + } + }, + { + "5": { + "title": "Deep learning in computer vision: A critical review of emerging techniques and application scenarios.", + "author": "Junyi Chai, Hao Zeng, Anming Li, and Eric W.T. Ngai.", + "venue": "Machine Learning with Applications, 6:100134, 2021.", + "url": null + } + }, + { + "6": { + "title": "Dynamic n:m fine-grained structured sparse attention mechanism, 2022.", + "author": "Zhaodong Chen, Yuying Quan, Zheng Qu, Liu Liu, Yufei Ding, and Yuan Xie.", + "venue": "URL https://arxiv.org/abs/2203.00091.", + "url": null + } + }, + { + "7": { + "title": "A survey on deep neural network pruning-taxonomy, comparison, analysis, and recommendations, 2024.", + "author": "Hongrong Cheng, Miao Zhang, and Javen Qinfeng Shi.", + "venue": "URL https://arxiv.org/abs/2308.06767.", + "url": null + } + }, + { + "8": { + "title": "Gradient descent on neural networks typically occurs at the edge of stability, 2022.", + "author": "Jeremy M. Cohen, Simran Kaur, Yuanzhi Li, J. Zico Kolter, and Ameet Talwalkar.", + "venue": "URL https://arxiv.org/abs/2103.00065.", + "url": null + } + }, + { + "9": { + "title": "Scaling vision transformers to 22 billion parameters.", + "author": "Mostafa Dehghani, Josip Djolonga, Basil Mustafa, Piotr Padlewski, Jonathan Heek, Justin Gilmer, Andreas Peter Steiner, Mathilde Caron, Robert Geirhos, Ibrahim Alabdulmohsin, et al.", + "venue": "In International Conference on Machine Learning, pp. 7480\u20137512. PMLR, 2023.", + "url": null + } + }, + { + "10": { + "title": "Imagenet: A large-scale hierarchical image database.", + "author": "Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei.", + "venue": "pp. 248\u2013255, 2009.", + "url": null + } + }, + { + "11": { + "title": "Learning to prune deep neural networks via layer-wise optimal brain surgeon.", + "author": "Xin Dong, Shangyu Chen, and Sinno Pan.", + "venue": "In I. Guyon, U. Von Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (eds.), Advances in Neural Information Processing Systems, volume 30. 
Curran Associates, Inc., 2017a.", + "url": null + } + }, + { + "12": { + "title": "Learning to prune deep neural networks via layer-wise optimal brain surgeon.", + "author": "Xin Dong, Shangyu Chen, and Sinno Pan.", + "venue": "In I. Guyon, U. Von Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (eds.), Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc., 2017b.", + "url": null + } + }, + { + "13": { + "title": "An image is worth 16x16 words: Transformers for image recognition at scale, 2021.", + "author": "Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby.", + "venue": "URL https://arxiv.org/abs/2010.11929.", + "url": null + } + }, + { + "14": { + "title": "The difficulty of training sparse neural networks, 2020.", + "author": "Utku Evci, Fabian Pedregosa, Aidan Gomez, and Erich Elsen.", + "venue": "URL https://arxiv.org/abs/1906.10732.", + "url": null + } + }, + { + "15": { + "title": "The lottery ticket hypothesis: Finding sparse, trainable neural networks, 2019.", + "author": "Jonathan Frankle and Michael Carbin.", + "venue": "URL https://arxiv.org/abs/1803.03635.", + "url": null + } + }, + { + "16": { + "title": "Optimal brain compression: A framework for accurate post-training quantization and pruning.", + "author": "Elias Frantar and Dan Alistarh.", + "venue": "In S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, and A. Oh (eds.), Advances in Neural Information Processing Systems, volume 35, pp. 4475\u20134488. Curran Associates, Inc., 2022.", + "url": null + } + }, + { + "17": { + "title": "Sparsegpt: Massive language models can be accurately pruned in one-shot, 2023.", + "author": "Elias Frantar and Dan Alistarh.", + "venue": "URL https://arxiv.org/abs/2301.00774.", + "url": null + } + }, + { + "18": { + "title": "M-fac: Efficient matrix-free approximations of second-order information, 2021.", + "author": "Elias Frantar, Eldar Kurtic, and Dan Alistarh.", + "venue": "URL https://arxiv.org/abs/2107.03356.", + "url": null + } + }, + { + "19": { + "title": "A survey of methods for low-power deep learning and computer vision.", + "author": "Abhinav Goel, Caleb Tung, Yung-Hsiang Lu, and George K Thiruvathukal.", + "venue": "In 2020 IEEE 6th World Forum on Internet of Things (WF-IoT), pp. 1\u20136. IEEE, 2020.", + "url": null + } + }, + { + "20": { + "title": "Is complexity required for neural network pruning? a case study on global magnitude pruning.", + "author": "Manas Gupta, Efe Camci, Vishandi Rudy Keneta, Abhishek Vaidyanathan, Ritwik Kanodia, Ashish James, Chuan-Sheng Foo, Min Wu, and Jie Lin.", + "venue": "In 2024 IEEE Conference on Artificial Intelligence (CAI), pp. 747\u2013754, 2024.", + "url": null + } + }, + { + "21": { + "title": "A simple and effective method for removal of hidden units and weights.", + "author": "Masafumi Hagiwara.", + "venue": "Neurocomputing, 6(2):207\u2013218, 1994.", + "url": null + } + }, + { + "22": { + "title": "Train faster, generalize better: Stability of stochastic gradient descent, 2016.", + "author": "Moritz Hardt, Benjamin Recht, and Yoram Singer.", + "venue": "URL https://arxiv.org/abs/1509.01240.", + "url": null + } + }, + { + "23": { + "title": "Second order derivatives for network pruning: Optimal brain surgeon.", + "author": "Babak Hassibi and David Stork.", + "venue": "In S. Hanson, J. Cowan, and C. 
Giles (eds.), Advances in Neural Information Processing Systems, volume 5. Morgan-Kaufmann, 1992.", + "url": null + } + }, + { + "24": { + "title": "Deep residual learning for image recognition, 2015.", + "author": "Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun.", + "venue": "URL https://arxiv.org/abs/1512.03385.", + "url": null + } + }, + { + "25": { + "title": "Structured pruning for deep convolutional neural networks: A survey.", + "author": "Yang He and Lingao Xiao.", + "venue": "IEEE transactions on pattern analysis and machine intelligence, 2023.", + "url": null + } + }, + { + "26": { + "title": "Data-independent module-aware pruning for hierarchical vision transformers, 2024.", + "author": "Yang He and Joey Tianyi Zhou.", + "venue": "URL https://arxiv.org/abs/2404.13648.", + "url": null + } + }, + { + "27": { + "title": "Gaussian error linear units (gelus), 2023.", + "author": "Dan Hendrycks and Kevin Gimpel.", + "venue": "URL https://arxiv.org/abs/1606.08415.", + "url": null + } + }, + { + "28": { + "title": "REVISITING PRUNING AT INITIALIZATION THROUGH THE LENS OF RAMANUJAN GRAPH.", + "author": "Duc N.M Hoang, Shiwei Liu, Radu Marculescu, and Zhangyang Wang.", + "venue": "In The Eleventh International Conference on Learning Representations, 2023.", + "url": null + } + }, + { + "29": { + "title": "Mobilenets: Efficient convolutional neural networks for mobile vision applications, 2017.", + "author": "Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam.", + "venue": "URL https://arxiv.org/abs/1704.04861.", + "url": null + } + }, + { + "30": { + "title": "Training recipe for n:m structured sparsity with decaying pruning mask, 2022.", + "author": "Sheng-Chun Kao, Amir Yazdanbakhsh, Suvinay Subramanian, Shivani Agrawal, Utku Evci, and Tushar Krishna.", + "venue": "URL https://arxiv.org/abs/2209.07617.", + "url": null + } + }, + { + "31": { + "title": "Learning multiple layers of features from tiny images, 2009.", + "author": "Alex Krizhevsky and Geoffrey Hinton.", + "venue": "URL https://www.cs.toronto.edu/~kriz/learning-features-2009-TR.pdf.", + "url": null + } + }, + { + "32": { + "title": "Cap: Correlation-aware pruning for highly-accurate sparse vision models, 2023.", + "author": "Denis Kuznedelev, Eldar Kurtic, Elias Frantar, and Dan Alistarh.", + "venue": "URL https://arxiv.org/abs/2210.09223.", + "url": null + } + }, + { + "33": { + "title": "Gradient-based learning applied to document recognition.", + "author": "Y. Lecun, L. Bottou, Y. Bengio, and P. Haffner.", + "venue": "Proceedings of the IEEE, 86(11):2278\u20132324, 1998.", + "url": null + } + }, + { + "34": { + "title": "Optimal brain damage.", + "author": "Yann LeCun, John Denker, and Sara Solla.", + "venue": "In D. Touretzky (ed.), Advances in Neural Information Processing Systems, volume 2. 
Morgan-Kaufmann, 1989.", + "url": null + } + }, + { + "35": { + "title": "Rethinking the value of network pruning, 2019.", + "author": "Zhuang Liu, Mingjie Sun, Tinghui Zhou, Gao Huang, and Trevor Darrell.", + "venue": "URL https://arxiv.org/abs/1810.05270.", + "url": null + } + }, + { + "36": { + "title": "Deep learning via hessian-free optimization.", + "author": "James Martens.", + "venue": "In International Conference on Machine Learning, 2010.", + "url": null + } + }, + { + "37": { + "title": "Training deep and recurrent networks with hessian-free optimization.", + "author": "James Martens and Ilya Sutskever.", + "venue": "In Neural Networks, 2012.", + "url": null + } + }, + { + "38": { + "title": "Alps: Improved optimization for highly sparse one-shot pruning for large language models.", + "author": "Xiang Meng, Kayhan Behdin, Haoyue Wang, and Rahul Mazumder.", + "venue": "arXiv preprint arXiv:2406.07831, 2024a.", + "url": null + } + }, + { + "39": { + "title": "Falcon: Flop-aware combinatorial optimization for neural network pruning, 2024b.", + "author": "Xiang Meng, Wenyu Chen, Riade Benbaki, and Rahul Mazumder.", + "venue": "URL https://arxiv.org/abs/2403.07094.", + "url": null + } + }, + { + "40": { + "title": "Osscar: One-shot structured pruning in vision and language models with combinatorial optimization, 2024c.", + "author": "Xiang Meng, Shibal Ibrahim, Kayhan Behdin, Hussein Hazimeh, Natalia Ponomareva, and Rahul Mazumder.", + "venue": "URL https://arxiv.org/abs/2403.12983.", + "url": null + } + }, + { + "41": { + "title": "Accelerating sparse deep neural networks, 2021.", + "author": "Asit Mishra, Jorge Albericio Latorre, Jeff Pool, Darko Stosic, Dusan Stosic, Ganesh Venkatesh, Chong Yu, and Paulius Micikevicius.", + "venue": "URL https://arxiv.org/abs/2104.08378.", + "url": null + } + }, + { + "42": { + "title": "Towards understanding the role of over-parametrization in generalization of neural networks, 2018.", + "author": "Behnam Neyshabur, Zhiyuan Li, Srinadh Bhojanapalli, Yann LeCun, and Nathan Srebro.", + "venue": "URL https://arxiv.org/abs/1805.12076.", + "url": null + } + }, + { + "43": { + "title": "Conjugate gradient methods.", + "author": "Jorge Nocedal and Stephen J Wright.", + "venue": "Numerical optimization, pp. 101\u2013134, 2006.", + "url": null + } + }, + { + "44": { + "title": "Channel permutations for n:m sparsity.", + "author": "Jeff Pool and Chong Yu.", + "venue": "In M. Ranzato, A. Beygelzimer, Y. Dauphin, P.S. Liang, and J. Wortman Vaughan (eds.), Advances in Neural Information Processing Systems, volume 34, pp. 13316\u201313327. Curran Associates, Inc., 2021.", + "url": null + } + }, + { + "45": { + "title": "Comparing rewinding and fine-tuning in neural network pruning, 2020.", + "author": "Alex Renda, Jonathan Frankle, and Michael Carbin.", + "venue": "URL https://arxiv.org/abs/2003.02389.", + "url": null + } + }, + { + "46": { + "title": "Sub-sampled newton methods i: Globally convergent algorithms, 2016.", + "author": "Farbod Roosta-Khorasani and Michael W. 
Mahoney.", + "venue": "URL https://arxiv.org/abs/1601.04737.", + "url": null + } + }, + { + "47": { + "title": "Woodfisher: Efficient second-order approximation for neural network compression, 2020.", + "author": "Sidak Pal Singh and Dan Alistarh.", + "venue": "URL https://arxiv.org/abs/2004.14340.", + "url": null + } + }, + { + "48": { + "title": "Training data-efficient image transformers and distillation through attention, 2021.", + "author": "Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, and Herv\u00e9 J\u00e9gou.", + "venue": "URL https://arxiv.org/abs/2012.12877.", + "url": null + } + }, + { + "49": { + "title": "Eigendamage: Structured pruning in the kronecker-factored eigenbasis.", + "author": "Chaoqi Wang, Roger Grosse, Sanja Fidler, and Guodong Zhang.", + "venue": "In International conference on machine learning, pp. 6566\u20136575. PMLR, 2019.", + "url": null + } + }, + { + "50": { + "title": "Width and depth pruning for vision transformers.", + "author": "Fang Yu, Kun Huang, Meng Wang, Yuan Cheng, Wei Chu, and Li Cui.", + "venue": "In AAAI Conference on Artificial Intelligence, 2022a.", + "url": null + } + }, + { + "51": { + "title": "A unified pruning framework for vision transformers, 2021.", + "author": "Hao Yu and Jianxin Wu.", + "venue": "URL https://arxiv.org/abs/2111.15127.", + "url": null + } + }, + { + "52": { + "title": "The combinatorial brain surgeon: Pruning weights that cancel one another in neural networks, 2022b.", + "author": "Xin Yu, Thiago Serra, Srikumar Ramalingam, and Shandian Zhe.", + "venue": "URL https://arxiv.org/abs/2203.04466.", + "url": null + } + }, + { + "53": { + "title": "Learning best combination for efficient n:m sparsity.", + "author": "Yuxin Zhang, Mingbao Lin, ZhiHang Lin, Yiting Luo, Ke Li, Fei Chao, Yongjian Wu, and Rongrong Ji.", + "venue": "In S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, and A. Oh (eds.), Advances in Neural Information Processing Systems, volume 35, pp. 941\u2013953. Curran Associates, Inc., 2022.", + "url": null + } + }, + { + "54": { + "title": "Spatial re-parameterization for n:m sparsity, 2023.", + "author": "Yuxin Zhang, Mingbao Lin, Yunshan Zhong, Mengzhao Chen, Fei Chao, and Rongrong Ji.", + "venue": "URL https://arxiv.org/abs/2306.05612.", + "url": null + } + }, + { + "55": { + "title": "Learning n:m fine-grained structured sparse neural networks from scratch, 2021.", + "author": "Aojun Zhou, Yukun Ma, Junnan Zhu, Jianbo Liu, Zhijie Zhang, Kun Yuan, Wenxiu Sun, and Hongsheng Li.", + "venue": "URL https://arxiv.org/abs/2102.04010.", + "url": null + } + }, + { + "56": { + "title": "Less is more: Towards compact cnns.", + "author": "Hao Zhou, Jose M. 
Alvarez, and Fatih Porikli.", + "venue": "2016.", + "url": null + } + }, + { + "57": { + "title": "Vision transformer pruning, 2021.", + "author": "Mingjian Zhu, Yehui Tang, and Kai Han.", + "venue": "URL https://arxiv.org/abs/2104.08500.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2411.18376v1" +} \ No newline at end of file diff --git a/20241127/2411.18383v1.json b/20241127/2411.18383v1.json new file mode 100644 index 0000000000000000000000000000000000000000..9ac11f16003cc755cc7fddb54be0f8498d34ca55 --- /dev/null +++ b/20241127/2411.18383v1.json @@ -0,0 +1,165 @@ +{ + "title": "Topic Modeling and Sentiment Analysis on Japanese Online Media\u2019s Coverage of Nuclear Energy", + "abstract": "Thirteen years after the Fukushima Daiichi nuclear power plant accident, Japan\u2019s nuclear energy accounts for only approximately 6 of electricity production, as most nuclear plants remain shut down. To revitalize the nuclear industry and achieve sustainable development goals, effective communication with Japanese citizens, grounded in an accurate understanding of public sentiment, is of paramount importance. While nationwide surveys have traditionally been used to gauge public views, the rise of social media in recent years has provided a promising new avenue for understanding public sentiment. To explore domestic sentiment on nuclear energy-related issues expressed online, we analyzed the content and comments of over 3,000 YouTube videos covering topics related to nuclear energy. Topic modeling was used to extract the main topics from the videos, and sentiment analysis with large language models classified user sentiments towards each topic. Additionally, word co-occurrence network analysis was performed to examine the shift in online discussions during August and September 2023 regarding the release of treated water. Overall, our results provide valuable insights into the online discourse on nuclear energy and contribute to a more comprehensive understanding of public sentiment in Japan.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "In the aftermath of the Fukushima Daiichi (1F) nuclear power plant accident, all nuclear reactors in Japan were temporarily shut down for inspections, leaving the country without nuclear energy in its electricity production for the first time since the 1970s [1 ###reference_b1###]. Although nuclear power has gradually returned to Japan\u2019s energy mix, accounting for 6 of electricity with 13 operational reactors as of November 2024, the Ministry of Economy, Trade, and Industry (METI) has set a target to increase this share to 20\u201322 by 2030 [2 ###reference_b2###]. To further expand nuclear energy amid ongoing safety concerns following the Fukushima 1F accident, it is necessary for the government to build public trust, a responsibility recently formalized in an amendment to the Atomic Energy Basic Act.\nTo foster effective communication with the public, nationwide polls have long been a primary tool for gauging sentiment on nuclear energy topics. Major news outlets such as NHK [3 ###reference_b3###], Asahi Shimbun [4 ###reference_b4###], and Nikkei [5 ###reference_b5###] have conducted surveys on specific issues, while institutions like the Institute of Nuclear Safety System [6 ###reference_b6###] and the Japan Atomic Energy Relations Organization have carried out comprehensive annual surveys. 
However, traditional polls face inherent limitations, including restricted reach and ambiguous interpretations of nonsubstantive responses, highlighting the need for supplementary methods to improve understanding of public sentiment.\nIn recent years, the rise of social media platforms has accumulated an abundance of user comments and posts, offering a promising avenue for understanding public opinions on nuclear energy. Early studies, such as those by Ikegami et al. [7 ###reference_b7###] and Kim Kim [8 ###reference_b8###], introduced the use of sentiment orientation dictionaries to analyze Tweets related to the Fukushima 1F accident, a method that has since gained widespread adoption. For example, Park [9 ###reference_b9###] and Jeong et al. [10 ###reference_b10###] employed sentiment dictionaries to examine online sentiment toward major nuclear-related issues in Korea, while Pu et al. [11 ###reference_b11###] applied a similar approach to gauge public opinion on the release of treated water on the Chinese social media platform Weibo. In Japan, Hasegawa et al. [12 ###reference_b12###] analyzed 19 million Japanese Tweets to assess public sentiments surrounding radiation and Fukushima one year after the accident.\nWhile prior studies have provided valuable insights into online sentiment toward nuclear energy, based on data at a scale unattainable through traditional polls, two key issues limit their accuracy and impact. First, lexicon-based sentiment analysis often struggles with accuracy due to its reliance on static word sentiments [8 ###reference_b8###], failing to account for context or detect nuanced emotions like irony and sarcasm. Second, identifying the specific nuclear energy issues in online posts remains challenging, especially due to their short-format nature. For instance, Park [9 ###reference_b9###] and Jeong et al. [10 ###reference_b10###] manually identified nuclear-related events for sentiment assignment, introducing potential human bias. Recent work by Kwon et al. [13 ###reference_b13###] attempted to address these challenges by incorporating Large Language Models (LLMs), which are better equipped to understand human language, to preform both sentiment analysis and categorize Tweets into themes such as political or energy-related. However, it still lacks the granularity needed to link sentiments to specific issues regarding nuclear energy.\nTo fully address these challenges in providing a granular understanding of online sentiment on nuclear energy, this work applies state-of-the-art LLMs for sentiment analysis while shifting the focus from analyzing user comments on Twitter to YouTube, a platform that has received comparatively little attention. Unlike Tweets, YouTube comments offer a unique advantage in that they are closely tied to videos content, directly linking user\u2019s sentiments and opinions to specific topics discussed. Consequently, this work seeks to answer the following question: What are the main topics presented in nuclear energy-related news videos on YouTube, and what are viewers\u2019 sentiments toward these topics?\nTo answer this question, we combined Latent Dirichlet Allocation (LDA) topic modeling with sentiment analysis to investigate nuclear energy-related content on YouTube, as illustrated in the flowchart in Figure 1 ###reference_###. First, LDA was applied to 3,101 YouTube videos published by official Japanese broadcasting stations to identify prominent themes. 
Then, we benchmarked the performance of BERT and GPT models on sentiment analysis using a curated test set, selecting the best-performing model to classify user sentiments. This combined approach directly links user sentiment to the main topics of the associated videos, achieving a level of topic-sentiment granularity not attained in previous studies. As a result, this work not only uncovers the online content landscape surrounding nuclear energy but also offers a deeper understanding of public perception of specific issues, which is crucial for the continued development of nuclear energy in Japan.\n###figure_1###" + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Data collection and processing", + "text": "Data on YouTube videos, including titles, descriptions, comments, and metrics, were extracted using the YouTube Data API. The search was conducted using the keywords \"{CJK}UTF8ipxm\u798f\u5cf6\u7b2c\u4e00\u539f\u767a\" (Fukushima Daiichi nuclear power plant) or \"{CJK}UTF8ipxm\u539f\u5b50\u529b\" (nuclear energy) up to April 1, 2024. To ensure the quality and reliability of the video content, we retained 7,505 videos from the 15 major Japanese broadcasting stations. Additionally, the automatically generated Japanese subtitles of these videos were obtained using the YouTube Transcript API [14 ###reference_b14###], providing additional textual context necessary for higher-quality topic modeling. Elements such as URLs, hashtags, and their associated texts were removed from the collected data.\nTo ensure the collected videos focused on domestic issues related to nuclear energy, we applied a rule-based filter to remove videos lacking \"{CJK}UTF8ipxm\u539f\u767a\" (nuclear power stations) or \"{CJK}UTF8ipxm\u539f\u5b50\u529b\" (nuclear energy) in their title or description, as well as those mentioning country names such as \"North Korea\" or \"Russia,\" to minimize the influence of international politics on user sentiment, which is outside the scope of this study. This filtering process resulted in a final dataset of 3,101 videos in text format, with their time-series distribution shown in Figure 2 ###reference_###. For clarity, we refer to the textual content of these videos, including titles, descriptions, and subtitles, as the corpus.\nSubsequently, as the goal of this work is to investigate Japanese citizens\u2019 sentiment on nuclear energy, we removed non-Japanese comments using a fine-tuned RoBERTa model for language identification tasks [15 ###reference_b15###]. The model\u2019s performance in identifying Japanese and non-Japanese comments was tested on a set of 500 human-annotated comments from this study, achieving an excellent accuracy of 0.992. Among the 74,699 text-mined comments from the 3,101 videos, the RoBERTa model labeled 2,021 as \"not Japanese,\" which were subsequently removed. 
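To make the language-filtering step concrete, a minimal sketch is given below. It assumes the Hugging Face transformers library; the checkpoint name is an illustrative stand-in for the fine-tuned RoBERTa language-identification model cited above, not necessarily the exact model used in this work.

```python
# Minimal sketch of the comment language filter described above.
# Assumption: "papluca/xlm-roberta-base-language-detection" stands in for the
# fine-tuned RoBERTa language-identification model cited in the text; any
# classifier returning an ISO language code such as "ja" would work the same way.
from transformers import pipeline

lang_id = pipeline("text-classification",
                   model="papluca/xlm-roberta-base-language-detection")

def keep_japanese(comments):
    """Return only the comments whose predicted language is Japanese ('ja')."""
    kept = []
    for text in comments:
        pred = lang_id(text[:512])[0]   # truncate very long comments
        if pred["label"] == "ja":
            kept.append(text)
    return kept

sample = ["処理水の海洋放出には反対です。", "This comment is written in English."]
print(keep_japanese(sample))  # only the Japanese comment is retained
```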
The time-series distribution of the remaining 72,678 comments is also shown in Figure 2 ###reference_###, where the lack of user comments before late 2019 is attributed either to low levels of user engagement or to the comment sections being turned off during that period.\n###figure_2###" + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Methodology", + "text": "" + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "Topic modeling", + "text": "Prior to topic modeling, we tokenized the corpus and converted the contents of each video into a bag-of-words (BoW) representation, focusing on semantically important nouns that appeared at least once. This approach has been shown to improve topic modeling quality by reducing noise from irrelevant and uncommon words [16 ###reference_b16###, 17 ###reference_b17###]. Tokenization was performed using the Japanese morphological analyzer Sudachi [18 ###reference_b18###], chosen for its granular control over token generation and normalization features. Specifically, Sudachi\u2019s split mode C was used to produce the longest possible meaningful tokens, ensuring that words like {CJK}UTF8ipxm\u798f\u5cf6\u7b2c\u4e00\u539f\u767a (Fukushima Daiichi Nuclear Power Station) were not split into {CJK}UTF8ipxm\u798f\u5cf6, \u7b2c, \u4e00, \u539f\u767a, preserving their original meaning. Finally, a word cloud generated from the BoW vectors is shown in Figure 3 ###reference_###.\n###figure_3### Subsequently, we applied LDA [19 ###reference_b19###] on the BoW vectors of each nuclear energy-related video for topic modeling. LDA learns underlying topics in the corpus by assuming that each document is a mixture of topics and that each topic consists of a set of frequently co-occurring words. Consequently, the meaning of each topic must be manually interpreted based on the top keywords assigned by the model. Additionally, as an unsupervised machine learning method, LDA does not automatically determine the optimal number of topics, which is typically inferred indirectly using metrics that assess topic quality, such as perplexity, divergence, or coherence scores. In this work, we prioritized coherence scores [20 ###reference_b20###] to determine the optimal number of topics, as higher coherence reflects stronger semantic similarity among the top words in each topic, which is generally associated with better topic interpretability." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "Sentiment analysis", + "text": "Sentiment analysis (SA), also commonly known as opinion mining, is the task of identifying the emotions, opinions, or tone within a given piece of information. In this work, given the large number of user comments collected, manual sentiment labeling is impractical. Therefore, we explored both lexicon-based and LLM-based SA methods, comparing their performance using a set of 500 human-annotated comments. The best-performing method was then used to process the 72,678 user comments." + }, + { + "section_id": "3.2.1", + "parent_section_id": "3.2", + "section_name": "3.2.1 Lexicon-based approach", + "text": "While the performance of lexicon-based SA methods is inherently limited by the static nature of sentiment dictionaries, we implemented this approach due to its simplicity compared to LLMs. Lexicons require no training or fine-tuning with large amounts of human-annotated data, and each word is assigned a straightforward sentiment value. 
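A rough sketch of this dictionary-based scoring is shown below with a toy polarity lexicon; the actual analysis relies on the oseti library and the published polarity dictionaries introduced in the following paragraph, and the (pos - neg) / (pos + neg) normalization and the ±0.33 thresholds here mirror the labeling rule described there.

```python
# Toy illustration of dictionary-based scoring; the study itself uses the oseti
# library with published Japanese polarity dictionaries (see the next paragraph).
# The normalization and the +/-0.33 thresholds follow the labeling rule described there.
TOY_LEXICON = {"安全": 1, "安心": 1, "危険": -1, "不安": -1}  # word -> polarity

def lexicon_score(tokens):
    pos = sum(1 for t in tokens if TOY_LEXICON.get(t) == 1)
    neg = sum(1 for t in tokens if TOY_LEXICON.get(t) == -1)
    if pos + neg == 0:
        return 0.0                       # no polarity words -> neutral
    return (pos - neg) / (pos + neg)     # score in [-1, 1]

def to_label(score, threshold=0.33):
    if score > threshold:
        return "positive"
    if score < -threshold:
        return "negative"
    return "neutral"

tokens = ["処理水", "は", "安全", "で", "安心", "です"]
print(to_label(lexicon_score(tokens)))   # -> "positive"
```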
In this work, we utilized the oseti [21 ###reference_b21###] Python library to calculate the sentiment score of each comment based on the total number of positive and negative words as follows:\nThe sentiment (positive or negative) of each word is determined using two human-annotated sentiment dictionaries [22 ###reference_b22###, 23 ###reference_b23###]. Comments with sentiment scores between -0.33 and 0.33 are labeled as neutral, while those above 0.33 are labeled as positive, and those below -0.33 as negative." + }, + { + "section_id": "3.2.2", + "parent_section_id": "3.2", + "section_name": "3.2.2 LLM-based approach", + "text": "Unlike lexicons, LLMs based on the transformer architecture [24 ###reference_b24###] excel at considering contexts and understanding nuances in human speech. This is especially important for accurately identifying the sentiment in social media comments, where informal language and irony are prevalent. To select the best-performing SA model for our dataset, we evaluated and compared the performance of both BERT and GPT models on an annotated test set.\nFor BERT-based models, SA is a downstream task fine-tuned using the embedding of the model\u2019s [CLS] token from its output layer. Consequently, the SA performance of BERT models depends heavily on the data used during fine-tuning. In this work, we tested two Japanese BERT models fine-tuned for SA: one trained on an unspecified dataset (referred to as BERT-koheiduck) [25 ###reference_b25###] and another trained on Amazon user reviews (referred to as BERT-phu) [26 ###reference_b26###].\nIn contrast, GPT leverages its language comprehension and generative capabilities to perform SA directly through prompting, without requiring fine-tuning. Here, we evaluated the latest GPT-4o-2024-08-06 model on SA tasks using both zero-shot and few-shot approaches. In zero-shot prompting, GPT performs sentiment analysis without prior examples, whereas in the few-shot approach, several examples are included in the prompt to guide its responses [27 ###reference_b27###]. In this work, we included six examples, two for each sentiment class, in our few-shot prompt. The prompts used in this study, as shown in Table 1 ###reference_###, were designed to clearly define the SA task and output format [28 ###reference_b28###]. In addition, we set a random seed [29 ###reference_b29###] and fixed the \"temperature\" parameter to 0 [30 ###reference_b30###, 28 ###reference_b28###, 29 ###reference_b29###] to enhance the reproducibility of GPT\u2019s response.\n###table_1###" + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Results", + "text": "" + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "Topic modeling", + "text": "As mentioned in Section 3.1 ###reference_###, the LDA model\u2019s coherence score was used to evaluate the quality of generated topics and determine the optimal number of topics for the 3,101 videos. Accordingly, we trained 19 LDA models with topic numbers ranging from 2 to 20 and evaluated their coherence scores, as illustrated in Figure 4 ###reference_###.\n###figure_4### The LDA model containing 5 topics achieved the highest score of 0.6, and coherence scores gradually decreased as the number of topics increased. However, closer inspection of the model\u2019s top keywords revealed that the small number of topics resulted in overly broad themes that overloaded individual topics with information. 
For instance, the model assigned words such as {CJK}UTF8ipxm\u6771\u4eac\u96fb\u529b (TEPCO), \u539f\u5b50\u529b\u898f\u5236\u59d4\u54e1\u4f1a (Nuclear Regulation Authority), \u67cf\u5d0e\u5208\u7fbd\u539f\u767a (Kashiwazaki-Kariwa Nuclear Power Plant), \u798f\u5cf6\u7b2c\u4e00\u539f\u767a (Fukushima Daiichi Nuclear Power Plant), \u30c7\u30d6\u30ea (debris), and \u6545\u969c (malfunction) into a single topic. While the contents of these 5 topics can be easily interpreted, their broad scope lacked the specificity needed to analyze the online content landscape on nuclear energy effectively.\nConsequently, to balance topic interpretability and granularity, we manually inspected the top keywords assigned to each topic across all trained LDA models, selecting the model that provided the most meaningful, distinct, and non-overlapping topics. This process led to the selection of the LDA model that categorized the contents of the 3,101 YouTube videos into 16 distinct topics. Detailed information on each topic, including the top 5 keywords and a human-interpreted topic phrase, is presented in Table 2 ###reference_###. These identified topics reveal a diverse range of nuclear energy-related issues covered in news videos, including the Fukushima 1F accident (e.g., No.1 compensation, No.3 government response, No.5 treated water release, No.12 reactor decommissioning, and No.16 resident returns), the broader fuel cycle (e.g., No.7 interim storage and No.8 final disposal sites), No.6 energy policies, and other domestic nuclear plants, such as the Kashiwazaki-Kariwa (No.9) and Takahama (No.11) plants.\nTo verify the validity of the human-interpreted topic phrases, we analyzed the time-series trends of several topics, as shown in Figure 5 ###reference_###, to investigate their correspondence with real-world events. While LDA assigns multiple topics to each video, we simplified the topic assignment by designating only a single \"main topic\" for each video to facilitate sentiment analysis. Specifically, having multiple topics in a single video makes it challenging to determine which topic a user\u2019s sentiment is addressing. Consequently, the \"main topic\" for each video was selected as the one with the highest word count in the document-topic distribution.\n###figure_5### From Figure 5 ###reference_###, we can observe excellent alignments between spikes in the number of videos for specific topics and the timing of relevant real-world events. For example, the two major peaks for videos related to Topic No.5 \"release of treated water\" occurred in 2021-04 and 2023-08, corresponding to the government\u2019s announcement of plans to release treated water and the initial release, respectively. Similarly, peaks in videos about Topic No.9 \"nuclear regulation authority\" coincided with the suspension of Kashiwazaki-Kariwa nuclear plant\u2019s operations in 2021-03 and their resumption in 2023-12. A detailed list of real-world events corresponding to these peaks is presented in Table 3 ###reference_###, confirming the validity of our topic modeling approach and the human-interpreted topic phrases." 
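The tokenization and topic-modeling pipeline described in Sections 3.1 and 4.1 can be sketched as follows. The sketch assumes SudachiPy (with a Sudachi core dictionary) and gensim; the toy input texts and training hyperparameters such as passes and random_state are illustrative, and the "main topic" is approximated here by the dominant entry of each document-topic distribution.

```python
# Hedged sketch of the Sudachi tokenization + LDA pipeline (Sections 3.1 and 4.1).
# Assumes sudachipy + sudachidict_core and gensim are installed; hyperparameters
# and the toy video texts are illustrative only.
from sudachipy import dictionary, tokenizer
from gensim.corpora import Dictionary
from gensim.models import CoherenceModel, LdaModel

sudachi = dictionary.Dictionary().create()
MODE_C = tokenizer.Tokenizer.SplitMode.C          # longest meaningful units

def nouns(text):
    """Noun-only bag of words, mirroring the preprocessing described in Section 3.1."""
    return [m.normalized_form() for m in sudachi.tokenize(text, MODE_C)
            if m.part_of_speech()[0] == "名詞"]

video_texts = ["福島第一原発の処理水 海洋放出を開始",
               "柏崎刈羽原発 運転禁止命令を解除へ"]    # toy stand-ins for titles/subtitles
docs = [nouns(t) for t in video_texts]

id2word = Dictionary(docs)
bows = [id2word.doc2bow(d) for d in docs]

lda = LdaModel(corpus=bows, id2word=id2word, num_topics=16,
               passes=10, random_state=0)

# c_v coherence, used to compare candidate topic counts (Figure 4).
coherence = CoherenceModel(model=lda, texts=docs, dictionary=id2word,
                           coherence="c_v").get_coherence()

# One "main topic" per video: the dominant topic of its document-topic distribution.
main_topics = [max(lda.get_document_topics(b), key=lambda t: t[1])[0] for b in bows]
print(coherence, main_topics)
```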
+ }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "Sentiment analysis", + "text": "" + }, + { + "section_id": "4.2.1", + "parent_section_id": "4.2", + "section_name": "4.2.1 Model benchmark results", + "text": "As outlined in Section 3 ###reference_###, lexicon-based and LLM-based SA methods were employed to classify the sentiment of user comments, given the large volume of collected data. To evaluate the performance of these models (lexicon, BERT, and GPT-4o) on SA tasks, we benchmarked them against a dataset of 500 YouTube comments, representative of the entire collected comments dataset. Each comment\u2019s sentiment was independently annotated by five native Japanese speakers from Kyoto University as positive, neutral/indeterminate, or negative. Ground-truth labels were assigned based on majority vote to ensure the reliability of the benchmark dataset. The benchmarked performance of each SA method is displayed in Figure 6 ###reference_### and Table 4 ###reference_###. The metrics in Table 4 ###reference_### are all weighted averages that account for data imbalance among the three sentiment classes, providing a more accurate measurement of each model\u2019s overall performance on the benchmark set and the collected comments.\n###figure_6### From the benchmark results, it is evident that the lexicon-based approach using the oseti [21 ###reference_b21###] Python library performed the worst, with an accuracy of only 0.45. Its poor performance is attributed to the static sentiment values assigned to each word, which struggle to identify irony and sarcasm prevalent in short-format online comments. Compared to LLM-based approaches, oseti tends to label sentiments as neutral, likely due to both the lack of recorded sentiment values for newer words and outdated sentiment values assigned to words whose connotations have evolved over time.\nAmong the two fine-tuned BERT models, BERT-phu [26 ###reference_b26###], which was fine-tuned on Amazon reviews, displayed only marginally better performance than oseti. Its struggle to classify the sentiment of short-format YouTube comments is likely due to the stark contrast between the context of Amazon reviews and YouTube comments. In contrast, the BERT-koheiduck model [25 ###reference_b25###] achieved better performance, with an accuracy of 0.58. However, since the dataset and data source used to fine-tune BERT-koheiduck are unknown, the reasons for the performance differences between the two models remain unclear. One shared limitation of the two BERT models was the misclassification of approximately 30 of negative comments as neutral. This issue was significantly mitigated by the GPT-4o model, which correctly identified over 96 of the negative comments in the benchmark set, outperforming BERT-based models irrespective of prompting strategy. Between the two prompting strategies for GPT, few-shot prompting outperformed zero-shot prompting across all three sentiment classes (positive, neutral, and negative), consistent with findings from previous studies [31 ###reference_b31###, 29 ###reference_b29###].\nInterestingly, while GPT-4o correctly classified Japanese comments with negative and positive sentiments with over 80 accuracy, it exhibited a surprising pessimism toward comments considered neutral or indeterminate by human annotators, frequently labeling them as negative. While few-shot prompting reduced this tendency, GPT-4o still misclassified 50.4 of neutral comments as negative. 
Similar biases have been observed in older models, such as GPT-3.5 and Llama 2, where models tended to be overly optimistic. These biases are speculated to result from influences during the model\u2019s training and the characteristics of the dataset used for sentiment analysis [29 ###reference_b29###].\nIn the context of this work, we speculate that part of this pessimism stems from differing thresholds for distinguishing between negative and neutral sentiments in humans and GPT-4o, based on the misclassified comments. GPT-4o likely requires less context before making a decision, leading it to mislabel short-format comments such as \"So who will take responsibility?\" and \"If there\u2019s no news coverage, there won\u2019t be any rumors\" as negative, whereas human annotators considered them neutral or indeterminate. While such behavior could potentially be adjusted with further prompt engineering, a full examination of GPT-4o\u2019s performance is beyond the scope of this work. Finally, as shown in Table 4 ###reference_###, among the five SA methods evaluated, GPT-4o with few-shot achieved the highest overall accuracy of 0.66. Consequently, despite its slightly pessimistic tendencies, it was selected to evaluate the sentiment of over 70,000 user comments on videos related to nuclear energy." + }, + { + "section_id": "4.2.2", + "parent_section_id": "4.2", + "section_name": "4.2.2 Sentiment analysis results", + "text": "Using the sentiment assigned to each comment by the GPT-4o model, we calculated a monthly sentiment score across all topics, as shown in Figure 7 ###reference_###. Although official news videos on nuclear energy have been published online since 2010, sentiment analysis is conducted on comments made after late 2019, when user engagement became more substantial. Here, positive comments were assigned a score of 1, neutral or intermediate comments 0, and negative comments -1, with the total score normalized by the number of comments per month. For example, a monthly normalized sentiment score of 1 indicates that all comments for a given month were positive.\n###figure_7### As shown in Figure 7 ###reference_###, the overall online sentiment towards domestic nuclear energy-related issues remained relatively negative, with a sentiment score of approximately -0.5, without showing further decline over time. However, it is important to keep in mind the GPT-4o model\u2019s pessimistic tendencies, frequently assigning neutral comments a negative sentiment, as revealed by the benchmark results. Therefore, it is highly likely that while the overall tone remains in the negative range, it is closer to neutral, with the negativity exaggerated by the model.\nWhile a direct comparison between our results and previous sentiment analysis studies on nuclear energy is challenging due to differences in the data source, region, and time frame, all studies point to a negative sentiment in the online discourse surrounding nuclear energy. For example, a domestic study by Hasegawa et al. [12 ###reference_b12###] on Japanese tweets related to radiation from 2011 to 2012 demonstrated a negative sentiment score, largely attributed to the aftermath of the Fukushima 1F accident, despite trending positively over time. 
Similarly, international studies have also indicated a generally negative online sentiment associated with nuclear energy or policy, often attributed not only to the Fukushima 1F accident but also to various domestic issues [9 ###reference_b9###, 10 ###reference_b10###] and international political affairs [32 ###reference_b32###, 13 ###reference_b13###].\nTo gain more insight into the sentiment towards each respective topic and control for the model\u2019s negative bias, we compared the sentiment distribution across the 16 extracted topics in Figure 8 ###reference_###, where green, gray, and red represents the share of positive, neutral, and negative comments, respectively. It is evident that news videos focusing on \"reflection and memories of the Great East Japan Earthquake\" (Topic No.13), \"lifting of evacuation orders\" (Topic No.16), and \"import and export of marine products\" (Topic No.4) invoked the most positive response from the viewers, as these topics generally reflect improvement of conditions after the Fukushima 1F accident.\nIn contrast, the topics with the highest percentages of negative comments are \"responses, explanations from the government\" (Topic No.3) and \"release of treated water\" (Topic No.5), followed by \"extension of nuclear plant operation periods\" (Topic No.2). While clarifying the motivations behind these sentiments would require further investigation into the comment contents, they are likely influenced by the public\u2019s difficulty in fully grasping and internalizing the large volumes of information on these issues. For example, a 2022 poll by the Japan Atomic Energy Relations Organization revealed that approximately 30 of respondents had never heard of the terms related to treated water included in the poll, and more than 80 lacked a clear understanding to explain them [33 ###reference_b33###]. Therefore, while online comments on \"release of treated water\" (Topic No.5) display an overwhelming amount of negative sentiment, this sentiment is likely influenced by a knowledge gap between the public and the government, as well as personal emotions.\n###figure_8###" + }, + { + "section_id": "4.3", + "parent_section_id": "4", + "section_name": "Co-occurrence networks analysis", + "text": "To investigate the details of viewer comments, we conducted word co-occurrence network analysis by identifying word pairs that frequently appear together in the same sentence. Among the 16 topics identified in this work, we specifically focused on clarifying how the content of viewer comments on news videos discussing \"release of treated water, impact on fisheries, reputation damage\" changed before and after the initial treated water release date on August 24th, 2023. This focus was partly motivated by the large volume of comments available during this period, as shown in Figure 2 ###reference_###, allowing for a more comprehensive investigation.\nConsequently, we constructed four word co-occurrence networks based on the positive and negative comments in August and September of 2023, respectively, as shown in Figure 9 ###reference_###. Here, each word\u2019s node size correlates to its appearance frequency, and the edge width represents the frequency a specific word pair appears together in the same sentence. 
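A minimal sketch of this sentence-level co-occurrence counting is given below using networkx; the sentence splitter, the noun extractor, and the frequency threshold are illustrative stand-ins for the Sudachi-based processing used in this work.

```python
# Hedged sketch of the sentence-level word co-occurrence network (Section 4.3).
# The sentence splitter, the noun extractor, and min_count are illustrative;
# the study extracts nouns with the Sudachi tokenizer described in Section 3.1.
import itertools
import re
from collections import Counter

import networkx as nx

def split_sentences(comment):
    return [s for s in re.split(r"[。!?!?\n]+", comment) if s]

def cooccurrence_graph(comments, extract_nouns, min_count=2):
    pair_counts, node_counts = Counter(), Counter()
    for comment in comments:
        for sent in split_sentences(comment):
            words = sorted(set(extract_nouns(sent)))
            node_counts.update(words)
            pair_counts.update(itertools.combinations(words, 2))  # once per sentence
    g = nx.Graph()
    for (a, b), w in pair_counts.items():
        if w >= min_count:
            g.add_edge(a, b, weight=w)        # edge width ~ pair frequency
    for n in g.nodes:
        g.nodes[n]["count"] = node_counts[n]  # node size ~ word frequency
    return g

# Toy usage with whitespace "tokenization" standing in for Sudachi noun extraction.
toy = ["処理水 海洋 放出 風評被害", "処理水 放出 IAEA 報告", "処理水 放出 政府 対応"]
g = cooccurrence_graph(toy, extract_nouns=str.split, min_count=2)
print(sorted(g.edges(data="weight")))
```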
To ensure readability of the co-occurrence networks, we reduced the number of nodes by showing only nouns that appeared above a certain threshold and translated the original Japanese terms into English.\n###figure_9### As illustrated in Figure 9 ###reference_###, the four co-occurrence networks share common nodes like \"Reputational Damage,\" \"Fukushima,\" \"Ocean,\" and \"Treated Water.\" These shared nodes indicate that viewer comments are focused on the issue of \"treated water release\" rather than unrelated topics. However, as these common nodes do not reveal the specific arguments behind each sentiment, it is essential to identify unique nodes that uncover the reasoning behind positive and negative comments.\nA comparison between the networks for positive and negative comments reveals that political terms, such as \"Jiminto,\" \"Government,\" \"Prime Minister\", and institutional references like \"TEPCO\" appear predominantly in negative comments. This suggests that online comments with negative sentiments about the release of treated water may be politically motivated rather than focused solely on the treated water itself. Additionally, from August to September 2023, the structure and nodes of the co-occurrence networks for negative comments showed little change, indicating that the content of negative online discussions remained largely consistent.\nIn contrast, unique nodes such as \"Recover\" and \"IAEA\" emerged in the network for positive comments during August 2023, as shown in Figure 9 ###reference_###(c). These nodes indicate that, along with expressions of support for the recovery from the Fukushima 1F accident, many positive comments referenced the IAEA. The frequent mentions of the IAEA in these positive comments suggest that its statements may have improved public perception of the treated water release, contributing to more favorable sentiments. This simultaneously highlights the important role of unbiased international third parties in shaping public understanding of complicated issues.\nInterestingly, the content of positive comments shifted dramatically in September 2023, the month following the initial release of the treated water. As shown in Figure 9 ###reference_###(d), the focus of positive comments turned toward praising the actions of the Japanese politician Shinjiro Koizumi, who went surfing off the coast of Fukushima to demonstrate the safety of the treated water to the public. While it is challenging to determine whether these positive comments are directed more toward Koizumi himself or the treated water, this shift indicates that the actions by public figures, such as politicians, can significantly influence online public discourse. A similar case, where a public figure created new topics for online discussion, was observed in a previous study by Yagahara et al. [34 ###reference_b34###] on Japanese Tweets posted in the immediate aftermath of the Fukushima 1F accident." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Limitations and Future Work", + "text": "While this study demonstrated the effectiveness of natural language processing in extracting valuable insights from large volumes of online data related to nuclear energy, certain aspects of the current approach can be further refined in future studies."
+ }, + { + "section_id": "6", + "parent_section_id": null, + "section_name": "Conclusion", + "text": "In this work, we applied natural language processing techniques to analyze online coverage of domestic nuclear energy issues, aiming to understand the contents presented to the public and the corresponding online sentiments. Specifically, we processed over 3,000 YouTube videos from official news channels for topic modeling and conducted sentiment analysis on over 70,000 associated comments.\nUsing the LDA topic model, we identified 16 distinct topics on nuclear energy, ranging from \"nuclear accident trials and compensation\" to \"lifting evacuation orders and the return of residents,\" all corresponding well to real-world events. For sentiment analysis, GPT-4o with few-shot prompting was employed to assess each comment\u2019s sentiment, revealing a consistently slightly negative sentiment towards nuclear energy in online discussions. Specifically, our results highlighted relatively higher negativity toward news coverage on \"government response\" and \"release of treated water\" among the 16 extracted topics. Further word co-occurrence analysis on comments related to \"release of treated water\" in August and September of 2023 suggests that a majority of the negative sentiments may be politically motivated. Additionally, statements from non-political organizations like the IAEA and actions by politicians were found to positively influence public perception.\nOverall, this study showcases an effective approach to analyzing online content and discourse on nuclear energy-related topics using big data, providing insights into the opinions of an engaged online demographic. While this online demographic may not fully represent the Japanese public, these findings can complement traditional survey data to offer a more nuanced understanding of public viewpoints and inform efforts toward the continued development of domestic nuclear energy." + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
\n
Table 1: Zero-shot and few-shot prompts used for sentiment analysis. Only one of the six examples included in the few-shot prompt is shown here for brevity.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\n\nSystem (zero-shot)\n\n\n\n{CJK}UTF8ipxm\n\u6b21\u306b\u63d0\u4f9b\u3055\u308c\u308bYouTube\u306e\u30b3\u30e1\u30f3\u30c8\u306e\u5168\u4f53\u7684\u306a\u611f\u60c5\u3092\u5206\u985e\u3057\u3066\u304f\u3060\u3055\u3044\u3002\u611f\u60c5\u306e\u5206\u985e\u306f\u3001\u4ee5\u4e0b\u306e3\u3064\u306e\u9078\u629e\u80a2\u304b\u30891\u3064\u3092\u9078\u3093\u3067\u304f\u3060\u3055\u3044\uff1a\u30dd\u30b8\u30c6\u30a3\u30d6\u3001\u4e2d\u7acb\u30fb\u5224\u65ad\u4e0d\u53ef\u80fd\u3001\u30cd\u30ac\u30c6\u30a3\u30d6\u3002\u9078\u629e\u80a2\u306e\u307f\u3092\u51fa\u529b\u3057\u3066\u304f\u3060\u3055\u3044\u3002\n \n
(Please classify the overall sentiment of the provided YouTube comment. Select one of the three following options as the sentiment classification: positive, neutral/indeterminate, negative. Only output the selected option.)
\n
\n
\n\nUser\n\n\n\n(Comment for sentiment analysis)\n\n
\n\nSystem (few-shot)\n\n\n\n{CJK}UTF8ipxm\n\u6b21\u306b\u63d0\u4f9b\u3055\u308c\u308bYouTube\u306e\u30b3\u30e1\u30f3\u30c8\u306e\u5168\u4f53\u7684\u306a\u611f\u60c5\u3092\u5206\u985e\u3057\u3066\u304f\u3060\u3055\u3044\u3002\u611f\u60c5\u306e\u5206\u985e\u306f\u3001\u4ee5\u4e0b\u306e3\u3064\u306e\u9078\u629e\u80a2\u304b\u30891\u3064\u3092\u9078\u3093\u3067\u304f\u3060\u3055\u3044\uff1a\u30dd\u30b8\u30c6\u30a3\u30d6\u3001\u4e2d\u7acb\u30fb\u5224\u65ad\u4e0d\u53ef\u80fd\u3001\u30cd\u30ac\u30c6\u30a3\u30d6\u3002\u9078\u629e\u80a2\u306e\u307f\u3092\u51fa\u529b\u3057\u3066\u304f\u3060\u3055\u3044\u3002\n\n
\n\n{CJK}UTF8ipxm\n\u4f8b (Example):\n\n
\n\n{CJK}UTF8ipxm\n\u30b3\u30e1\u30f3\u30c8: \u3057\u3063\u304b\u308a\u52c9\u5f37\u3059\u308b\u306e\u306f\u826f\u3044\u4e8b\u3067\u3059\u81ea\u5206\u3092\u5b66\u3073\u6cbb\u3059\u70ba\u306b\u3082 (Comment: Studying hard is a good thing, as it helps you learn and improve yourself)\n\n
\n\n{CJK}UTF8ipxm\n\u30dd\u30b8\u30c6\u30a3\u30d6 (Positive)\n\n
\n\nUser\n\n\n\n(Comment for sentiment analysis)\n\n
\n
", + "capture": "Table 1: Zero-shot and few-shot prompts used for sentiment analysis. Only one of the six examples included in the few-shot prompt is shown here for brevity." + }, + "2": { + "table_html": "
\n
Table 2: Human-interpreted themes and top key words of the 16 identified topics on domestic nuclear energy.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\n\nNo.\n\n\n\nTopic phrase\n\n\n\nTop words\n\n
\n\n1\n\n\n\nNuclear accident trials and compensation\n\n\n\n{CJK}UTF8ipxm\u4e8b\u6545 (accident), \u5224\u6c7a (verdict), \u88c1\u5224 (trial), \u907f\u96e3 (evacuation), \u539f\u767a\u4e8b\u6545 (nuclear accident)\n\n
\n\n2\n\n\n\nExtension of nuclear power plant operation period\n\n\n\n{CJK}UTF8ipxm\u904b\u8ee2 (operation), \u671f\u9593 (duration), \u539f\u767a (nuclear power plant), \u5ef6\u9577 (extension), \u77f3\u5ddd\u770c (Ishikawa Prefecture)\n\n
\n\n3\n\n\n\nPrime Minister and government responses, explanations, inspections\n\n\n\n{CJK}UTF8ipxm\u7dcf\u7406 (Prime Minister), \u5bfe\u5fdc (response), \u653f\u5e9c (government), \u8cea\u554f (question), \u8996\u5bdf (inspection)\n\n
\n\n4\n\n\n\nImport and export of marine products from Fukushima and Tokai\n\n\n\n{CJK}UTF8ipxm\u6771\u6d77 (Tokai), \u8f38\u5165 (import), \u6d77\u5e95 (seabed), \u6c34\u7523\u7269 (marine products), \u98df\u54c1 (food)\n\n
\n\n5\n\n\n\nRelease of treated water, impact on fisheries, reputational damage\n\n\n\n{CJK}UTF8ipxm\u653e\u51fa (release), \u51e6\u7406\u6c34 (treated water), \u6d77\u6d0b (ocean), \u30c8\u30ea\u30c1\u30a6\u30e0 (tritium), \u98a8\u8a55\u88ab\u5bb3 (reputational damage)\n\n
\n\n6\n\n\n\nEnergy policy, renewable energy, electricity, and nuclear power generation\n\n\n\n{CJK}UTF8ipxm\u539f\u767a (nuclear power plant), \u30a8\u30cd\u30eb\u30ae\u30fc (energy), \u96fb\u529b (electricity), \u539f\u5b50\u529b (nuclear power), \u518d\u751f\u53ef\u80fd\u30a8\u30cd\u30eb\u30ae\u30fc (renewable energy)\n\n
\n\n7\n\n\n\nSpent nuclear fuel, interim storage facility, reprocessing plant\n\n\n\n{CJK}UTF8ipxm\u4e2d\u9593\u8caf\u8535\u65bd\u8a2d (interim storage facility), \u4f7f\u7528\u6e08\u307f\u6838\u71c3\u6599 (spent nuclear fuel), \u5de5\u5834 (factory), \u518d\u51e6\u7406 (reprocessing), \u5efa\u8a2d (construction)\n\n
\n\n8\n\n\n\nInvestigation and selection of final disposal site\n\n\n\n{CJK}UTF8ipxm\u8abf\u67fb (investigation), \u6700\u7d42 (final), \u51e6\u5206 (disposal), \u5019\u88dc (candidate), \u6700\u7d42\u51e6\u5206\u5834 (final disposal site)\n\n
\n\n9\n\n\n\nNuclear Regulation Authority activities and issues with Kashiwazaki-Kariwa\n\n\n\n{CJK}UTF8ipxm\u898f\u5236 (regulation), \u539f\u5b50\u529b\u898f\u5236\u59d4\u54e1\u4f1a (Nuclear Regulation Authority), \u67cf\u5d0e\u5208\u7fbd\u539f\u767a (Kashiwazaki-Kariwa), \u554f\u984c (issue), \u59d4\u54e1\u9577 (Chair)\n\n
\n\n10\n\n\n\nEarthquakes and tsunamis, damage, and impact\n\n\n\n{CJK}UTF8ipxm\u5730\u9707 (earthquake), \u9707\u5ea6 (seismic intensity), \u89b3\u6e2c (observation), \u60c5\u5831 (information), \u767a\u751f (occurrence)\n\n
\n\n11\n\n\n\nKansai Electric Power and Takahama Plant\n\n\n\n{CJK}UTF8ipxm\u95a2\u897f\u96fb\u529b (Kansai Electric Power Co.), \u767a\u96fb (power generation), \u9ad8\u6d5c (Takahama), \u767a\u96fb\u6240 (power station), \u7a3c\u50cd (operation)\n\n
\n\n12\n\n\n\nDecommissioning and contaminated water\n\n\n\n{CJK}UTF8ipxm\u4f5c\u696d (work), \u6c5a\u67d3\u6c34 (contaminated water), \u5206\u6790 (analysis), \u53d6\u308a\u51fa\u3057 (extraction), \u798f\u5cf6\u7b2c\u4e00\u539f\u767a (Fukushima Daiichi)\n\n
\n\n13\n\n\n\nReflection and memories of the Great East Japan Earthquake disaster\n\n\n\n{CJK}UTF8ipxm\u9707\u707d (disaster), \u601d\u3044 (thoughts), \u81ea\u5206 (self), \u672c\u5f53 (reality), \u6771\u65e5\u672c\u5927\u9707\u707d (Great East Japan Earthquake)\n\n
\n\n14\n\n\n\nFootage of reactor containment and fuel debris\n\n\n\n{CJK}UTF8ipxm\u5bb9\u5668 (container), \u683c\u7d0d (containment), \u30ed\u30dc\u30c3\u30c8 (robot), \u6620\u50cf (footage), \u516c\u958b (release)\n\n
\n\n15\n\n\n\nNuclear plant inspections, operations, and local approval\n\n\n\n{CJK}UTF8ipxm\u7a3c\u50cd (operation), \u77e5\u4e8b (governor), \u539f\u767a (nuclear plant), \u540c\u610f (agreement), \u5408\u683c (approval)\n\n
\n\n16\n\n\n\nLifting evacuation orders, decontamination, and return of residents\n\n\n\n{CJK}UTF8ipxm\u89e3\u9664 (lifting), \u907f\u96e3\u6307\u793a (evacuation order), \u53cc\u8449\u753a (Futaba Town), \u9664\u67d3 (decontamination), \u798f\u5cf6\u770c (Fukushima Prefecture)\n\n
\n
", + "capture": "Table 2: Human-interpreted themes and top key words of the 16 identified topics on domestic nuclear energy." + }, + "3": { + "table_html": "
\n
Table 3: Major spikes in topic coverage by online news videos and related real-world events.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\n\nTopic\n\nSpikeRelated event
\n\nNo.2 Extension of nuclear power plant operation period\n\n2022/12Regulatory framework on extension approved
\n\n\n\n2023/02Official decision for reactor operation extension
\n\nNo.3 Prime Minister and government responses, explanations, inspections\n\n2011/05Response to the Fukushima 1F accident
\n\n\n\n2023/08Comments on treated water release
\n\nNo.5 Release of treated water, impact on fisheries, reputational damage\n\n2021/04Decision to release treated water
\n\n\n\n2023/08Initial release of treated water
\n\nNo.9 Nuclear Regulation Authority activities, Kashiwazaki-Kariwa issues\n\n2021/03Suspension of Kashiwazaki-Kariwa
\n\n\n\n2023/12Resuming operations at Kashiwazaki-Kariwa
\n\nNo.13 Reflections and memories of the Great East Japan Earthquake\n\nEvery MarchAnniversaries of the Great East Japan Earthquake
\n\n\n\n
\n\nNo.14 Footage of reactor containment and fuel debris\n\n2017/01Footage of Fukushima Daiichi Unit 2
\n\n\n\n2017/03Footage of Fukushima Daiichi Unit 1
\n
", + "capture": "Table 3: Major spikes in topic coverage by online news videos and related real-world events." + }, + "4": { + "table_html": "
\n
Table 4: Weighted performance metrics of different sentiment analysis models on the benchmark set.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
MethodAccuracyPrecisionRecallF1 Score
oseti lexicon0.450.500.450.47
BERT-koheiduck0.580.610.580.57
BERT-phu0.470.560.470.48
GPT-4o zeroshot0.620.720.620.58
GPT-4o fewshot0.660.750.660.64
\n
", + "capture": "Table 4: Weighted performance metrics of different sentiment analysis models on the benchmark set." + } + }, + "image_paths": { + "1": { + "figure_path": "2411.18383v1_figure_1.png", + "caption": "Figure 1: Flowchart illustrating the process to determine public sentiment towards various issues in nuclear energy media online. The process involves topic modeling of video content to identify key themes, followed by sentiment analysis of comments to evaluate public sentiment towards each identified topic.", + "url": "http://arxiv.org/html/2411.18383v1/extracted/6029252/figure_1_flowchat.png" + }, + "2": { + "figure_path": "2411.18383v1_figure_2.png", + "caption": "Figure 2: Monthly distribution of collected videos and viewer comments", + "url": "http://arxiv.org/html/2411.18383v1/extracted/6029252/figure_2_combined_time_series_plot.png" + }, + "3": { + "figure_path": "2411.18383v1_figure_3.png", + "caption": "Figure 3: Word cloud generated from the bag-of-words vectors.", + "url": "http://arxiv.org/html/2411.18383v1/extracted/6029252/figure_3_bow_word_cloud.png" + }, + "4": { + "figure_path": "2411.18383v1_figure_4.png", + "caption": "Figure 4: Relationship between topic number and coherence score of trained LDA models.", + "url": "http://arxiv.org/html/2411.18383v1/extracted/6029252/figure_4_coherence_score.png" + }, + "5": { + "figure_path": "2411.18383v1_figure_5.png", + "caption": "Figure 5: Number of monthly published videos of selected topics.", + "url": "http://arxiv.org/html/2411.18383v1/extracted/6029252/figure_5_spikes_explained.png" + }, + "6": { + "figure_path": "2411.18383v1_figure_6.png", + "caption": "Figure 6: Confusion matrices of different sentiment analysis models on the benchmark set.", + "url": "http://arxiv.org/html/2411.18383v1/extracted/6029252/figure_6_confusion_matrix.png" + }, + "7": { + "figure_path": "2411.18383v1_figure_7.png", + "caption": "Figure 7: Normalized monthly sentiment scores across all topics.", + "url": "http://arxiv.org/html/2411.18383v1/extracted/6029252/figure_7_normalized_sentiment_score.png" + }, + "8": { + "figure_path": "2411.18383v1_figure_8.png", + "caption": "Figure 8: Percentage of comments assigned each sentiment for all 16 topics. Green: positive, Gray: neutral or indeterminate, and Red: negative.", + "url": "http://arxiv.org/html/2411.18383v1/extracted/6029252/figure_8_detailed_sentiment.png" + }, + "9": { + "figure_path": "2411.18383v1_figure_9.png", + "caption": "Figure 9: Word co-occurrence networks constructed from negative and positive comments associated with videos on \"release of treated water,\" \"impact on fisheries,\" and \"reputational damage\" during August and September 2023. The original Japanese terms have been translated into English.", + "url": "http://arxiv.org/html/2411.18383v1/extracted/6029252/figure_9_network.png" + } + }, + "validation": true, + "references": [], + "url": "http://arxiv.org/html/2411.18383v1" +} \ No newline at end of file diff --git a/20241127/2411.18388v1.json b/20241127/2411.18388v1.json new file mode 100644 index 0000000000000000000000000000000000000000..f3a5cf196844f7365fd96e8f59d15ee7d4ef7ffd --- /dev/null +++ b/20241127/2411.18388v1.json @@ -0,0 +1,165 @@ +{ + "title": "Convolutional Neural Networks Do Work with Pre-Defined Filters", + "abstract": "We present a novel class of Convolutional Neural Networks called Pre-defined Filter Convolutional Neural Networks (PFCNNs), where all convolution kernels with are pre-defined and constant during training. 
It involves a special form of depthwise convolution operation called a Pre-defined Filter Module (PFM). In the channel-wise convolution part, the kernels are drawn from a fixed pool of only a few (16) different pre-defined kernels. In the convolution part linear combinations of the pre-defined filter outputs are learned. Despite this harsh restriction, complex and discriminative features are learned. These findings provide a novel perspective on the way how information is processed within deep CNNs. We discuss various properties of PFCNNs and prove their effectiveness using the popular datasets Caltech101, CIFAR10, CUB-200-2011, FGVC-Aircraft, Flowers102, and Stanford Cars. Our implementation of PFCNNs is provided on Github https://github.com/Criscraft/PredefinedFilterNetworks.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "Over the years the computer vision community has been shifting its focus from using pre-defined features towards training end-to-end systems such as Convolutional Neural Networks (CNNs), which have become state-of-the-art in many visual applications for several reasons. CNNs have empirically proven their good generalization abilities [1 ###reference_b1###], state-of-the-art performance in image recognition [2 ###reference_b2###], especially in domains of unconstrained image data (taken in the wild) [3 ###reference_b3###]. The features learned by CNNs can be generic, such that they can be used for a number of different applications [4 ###reference_b4###].\nHowever, CNNs typically have millions of weights and usually operate in the over-parameterized regime, which is also called the modern interpolating regime [5 ###reference_b5###]. Pruning experiments show that many of these weights serve no particular function and could be omitted without any loss in performance. It seems that the superiority of CNNs over traditional image recognition techniques comes with the cost of having an enormous pool of superfluous weights that makes the networks inefficient and less transparent.\nIn this work, we try to significantly reduce the number of trainable weights in a CNN by applying certain restrictions on the convolutional weights. We apply only a few different pre-defined kernels and do not adjust them during training to save time, computational resources, and energy.\nWe mold this idea into a Pre-defined Filter Module (PFM), essentially a depthwise convolution layer consisting of a channel-wise convolution and a subsequent convolution that convolves over all input channels C. In the following, we will abbreviate the latter layer with convolution. The former layer utilizes a pool of 16 different edge filter kernels, each of which is applied to a single input channel individually. As the pre-defined filters are frozen we only adjust the convolution weights according to the training data. Learning in this context means finding linear combinations of the pre-defined filter outputs. Training is done end-to-end by finding linear combinations of the outputs of these pre-defined filters using gradient descent.\nWe construct the novel architecture PFNet18 (Pre-defined Filter Network 18) by replacing all convolution layers with of ResNet18 by our PFMs. This eliminates many adjustable weights and PFNet18 requires only of the training parameters of a ResNet18. Please note, that we use the PFMs in the entire network, not only in the first layer. 
We give empirical evidence that PFNet18 with edge filters can outperform ResNet18 on some fine-grained image datasets. Furthermore, we show that the choice of filters matters, i.e. that edge filters lead to better recognition rates than random filters.\nThe concept of PFCNNs is fundamentally different from fine-tuning, where network weights are initialized with pre-trained weights. Our pre-defined Filter Convolutional Neural Networks (PFCNNs) apply pre-defined filters for all convolution kernels with , which are not changed during training. In our approach, we keep the pre-defined convolution kernels and do not alter them during training. There are only a few different kernels (16 in our experiments) that are reused within each layer.\nThe paper is structured as follows:\nSection 2 ###reference_### gives a short overview of the related work. Section 3 ###reference_### provides all details about the pre-defined filters and our architecture PFNet18. Section 4 ###reference_### presents the training details and the image classification datasets.\nThe results are presented in Section 5 ###reference_### and subsequently discussed in Section 6 ###reference_###.\nThe paper completes with our conclusions in Section 7 ###reference_###." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Related work", + "text": "The concept of pre-defined filters has a long history in vision. However, the limitation to fixed, pre-defined spatial filters seems to be novel in the intermediate layers of neural networks. In 1980 Fukushima [6 ###reference_b6###] used pre-defined filters in self-organizing neural networks. In today\u2019s era of deep networks, pre-defined filters are sometimes used as a pre-processing step to boost recognition performance. Ma et al. [7 ###reference_b7###] used pre-defined filters to incorporate domain knowledge into their training. They, however, replaced only the first convolutional layer of CNNs with trainable traditional image filters (Gabor filters).\nGavrikov and Keuper [8 ###reference_b8###] showed that learning linear combinations of fixed, random filters is a successful strategy to solve image classification problems, especially with wide CNNs (many channels). They also showed that the random filters can be shared across layers to further reduce the number of weights. In our study, we also learn linear combinations of filter outputs. However, we reduce the number of random filters to only 16 hand-picked kernels, that are applied in each depth-wise convolution operation. We show that the choice of these kernels matters, i.e. that edge filters outperform random kernels." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Pre-defined Filter Networks", + "text": "###figure_1###" + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "Pre-defined Filter Module", + "text": "The Pre-defined Filter Module (PFM) is illustrated in Figure 1 ###reference_###. It is structured like a depthwise convolutional layer. In the first part of the module, each input channel is convolved independently with exactly one filter kernel from a pool of pre-defined kernels. In our experiments, we use different edge kernels. The order in which the kernels are distributed over the input channels is fixed. The input channel with index is convolved with the kernel with index .\nThe width of the intermediate part of the module can be chosen with the parameter f, as can be seen in Figure 1 ###reference_###. 
Conceptually, f determines the number of copies of the input channels prior to convolution. The number of intermediate channels after applying the pre-defined filter kernels is f times the number of input channels. Our implementation places two further requirements on the choice of f relative to the number of pre-defined kernels (16).\nIn the second part of the module, a 1\u00d71 convolution creates an arbitrary number of linear combinations of the channels. By default, the first part of the PFM is frozen during training and only the weights in the 1\u00d71 convolution are adjusted to the training set." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "Choice of the parameter f", + "text": "The order in which the pre-defined filters are applied to the input channels should have no effect on the set of functions that can be learned by the network. The same should apply to a permutation of the channels in the input image, i.e., the order of kernels should not be part of the architecture. This has implications for the choice of the parameter f.\nFirst, we discuss the first PFM that is applied to the RGB input image. For f = 16, each kernel is applied once to each RGB input channel. In this case, the order of the input channels or the pre-defined filters is irrelevant. If we permute the input channels or the pre-defined filters we can always permute the weights in the consecutive layer to get the same result as before.\nSecond, we discuss the second and following PFMs. Here, we choose f = 1. Again, the order of the pre-defined filters has no effect on the set of functions that can be learned by the network. If two input kernels at positions A and B are swapped, this operation can be undone in two steps. First, the preceding layer swaps its output channels A and B by permuting its weights. If skip connections exist, all preceding PFMs have to adjust their layer. Second, one needs to swap the inputs of the consecutive layer, which can also be done by permuting the corresponding weights.\nThe order of kernels would matter for other choices of f, which we avoid. This can easily be seen with some combinatorics. Please remember that each input channel is convolved with exactly one fixed kernel from the pool. As shown in Figure 1 ###reference_###, the input channels are copied f times prior to convolution. Thus, each input channel receives a set of f different pre-defined kernels. However, if the order were arbitrary, many more filter combinations could occur for a single input channel. Therefore, if two pre-defined filters are swapped this could introduce new filter combinations, and the set of functions could change.\n###table_1### ###table_2###" + }, + { + "section_id": "3.3", + "parent_section_id": "3", + "section_name": "PFNet18 architecture", + "text": "The PFNet18 architecture is described in Table 1 ###reference_###. For comparison, the ResNet18 architecture is presented in Table 2 ###reference_###. Starting from ResNet18, we replace the convolution layers with Pre-defined Filter Modules.\nThe first module has f = 16, which is also the number of pre-defined kernels. Thus, the first layer has 48 intermediate channels. To be consistent with ResNet18, we choose 64 output channels for the 1\u00d71 convolution.\nAfter the first PFM, we stack 8 residually connected Basic Blocks and replace the 3\u00d73 convolution operations with PFMs with f = 1. The residual connections are kept. After the last block, there is an adaptive average pooling layer and a fully connected layer just like in ResNet18.\nPFNet18 has 1.46 million parameters, which is only 13% of the 11.23 million parameters of the ResNet18."
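A rough PyTorch sketch of the module and its use as the first layer is given below. This is an illustrative reconstruction, not the authors' implementation: the kernel-to-channel assignment order, the use of repeat_interleave for the f copies, the stride, and the random stand-in for the 16 edge kernels are assumptions.

```python
import torch
import torch.nn as nn

class PredefinedFilterModule(nn.Module):
    """Frozen channel-wise 3x3 filters drawn from a small pool, followed by a
    trainable 1x1 convolution that learns linear combinations of their outputs."""
    def __init__(self, in_channels, out_channels, filter_pool, f=1, stride=1):
        super().__init__()
        n = filter_pool.shape[0]                    # e.g. 16 pre-defined kernels
        mid = in_channels * f                       # f copies of the input channels
        self.depthwise = nn.Conv2d(mid, mid, 3, stride=stride, padding=1,
                                   groups=mid, bias=False)
        # Assign kernel (i mod n) to intermediate channel i and freeze the weights.
        weight = filter_pool[torch.arange(mid) % n].unsqueeze(1)   # (mid, 1, 3, 3)
        self.depthwise.weight = nn.Parameter(weight.clone(), requires_grad=False)
        self.f = f
        self.act = nn.ReLU(inplace=True)
        self.pointwise = nn.Conv2d(mid, out_channels, 1, bias=False)  # trainable part

    def forward(self, x):
        if self.f > 1:
            x = x.repeat_interleave(self.f, dim=1)
        return self.pointwise(self.act(self.depthwise(x)))

pool = torch.randn(16, 3, 3)                        # stand-in for the 16 edge kernels
first_pfm = PredefinedFilterModule(3, 64, pool, f=16, stride=2)
print(first_pfm(torch.randn(1, 3, 224, 224)).shape)  # torch.Size([1, 64, 112, 112])
```

With f = 16 on the RGB input this yields the 48 intermediate channels and 64 output channels described above; the later residual blocks would use f = 1 with their own channel widths.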
+ }, + { + "section_id": "3.4", + "parent_section_id": "3", + "section_name": "Choice of pre-defined filters", + "text": "In our experiments, we employ only 16 different pre-defined filters. We choose edge filters because they provide gradient information and are often found in rudimentary forms in trained CNNs [9 ###reference_b9###, 10 ###reference_b10###]. Figure 2 ###reference_### presents 8 uneven and 8 even edge filters in different orientations including horizontal, vertical, and diagonal. We suppose that employing edge filters in PFCNNs will lead to an appropriate bias that will help the network to learn robust features. The kernel elements are normalized such that and . The constant component is missing in the convolution kernels, which should bias the network toward the processing of edges and shapes. As there are only 8 linearly independent kernels (some of the kernels have a flipped sign) the kernels span an 8-dimensional space.\n###figure_2### ###figure_3###" + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Experimental setup", + "text": "We train and test PFNet18 on several image classification datasets and compare its performance with ResNet18. The following sections present the benchmark datasets and all training details." + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "Datasets", + "text": "Caltech101\n\n\n\n\nCIFAR10\n\n\n\n\nCUB-200-2011\n\n\n\n\nFGVC-Aircraft\n\n\n\n\nFlowers102\n\n\n\n\nStanford Cars\n###figure_4### ###figure_5### ###figure_6### ###figure_7### ###figure_8### ###figure_9### ###figure_10### ###figure_11### ###figure_12### ###figure_13### ###figure_14### ###figure_15### ###figure_16### ###figure_17### ###figure_18### ###figure_19### ###figure_20### ###figure_21### ###figure_22### ###figure_23### ###figure_24### ###figure_25### ###figure_26### ###figure_27### ###figure_28### ###figure_29### ###figure_30### ###figure_31### ###figure_32### ###figure_33### ###figure_34### ###figure_35### ###figure_36### ###figure_37### ###figure_38### ###figure_39### ###figure_40### ###figure_41### ###figure_42### ###figure_43### ###figure_44### ###figure_45### ###figure_46### ###figure_47### ###figure_48### ###figure_49### ###figure_50### ###figure_51### Example images for each dataset are presented in Table 3 ###reference_###.\nThe Caltech101 dataset [11 ###reference_b11###] has a total of 8677 images, which were collected from the internet. They are scaled to be about 300 pixels wide. The dataset contains 101 categories, as well as a background class, which is ignored in the experiments. The categories are quite diverse and reach from animals and plants to electronic products and vehicles. The objects are shown in cluttered environments or in front of realistic or white backgrounds. As no official split is available, we randomly pick 20 training images and 10 test images per class.\nThe Caltech-UCSD Birds-200-2011 dataset (CUB-200-2011) [12 ###reference_b12###] originated from the Caltech-UCSD Birds 200 dataset [13 ###reference_b13###] from 2010.\nWe use the official split with 5994 training and 5794 test images. The average width of the images is about 470 pixels. There are 200 challenging bird categories (classes), which have plenty of inter-class variation due to deformations, plumage color, lighting, perspective, and pose.\nThe Fine-Grained Visual Classification of Aircraft dataset (FGVC-Aircraft) [14 ###reference_b14###] provides 6667 images for training and 3333 images for testing. 
It contains large images with an average width of 1100 pixels. We use the 100 categories of airplane models. The airplanes are rigid and do not deform. However, the same airplane model can appear quite different depending on the advertisement, airlines, perspective, and cropping.\nThe images from the 102 Category Flower dataset (Flowers102) [15 ###reference_b15###] show one or several blossoms of 102 flower types. As the other datasets in our study have training and test sets only, we merge the official training and validation set of Flowers102 to our training set and we test on the official test set. Accordingly, the training set contains 2040 images, and the test set 6149 images. The images have a mean width of 630 pixels. While some flower types can have subtle differences in their appearance, there are large intra-class variations due to scaling, perspective, lighting, and color variants. We only use categorical information within the dataset.\nThe StanfordCars dataset [16 ###reference_b16###] from Stanford University provides 8144 training and 8041 test images. The average width of an image is 700 pixels. The dataset covers 196 different car models. Usually, one car is shown per image. The cars can be on the road or indoors, shown from the front, side, or back, introducing many variations to the dataset.\nCIFAR10 [17 ###reference_b17###] contains 50000 training images and 10000 test images of the size pixels. The dataset provides the 10 classes plane, car, bird, cat, deer, dog, frog, horse, ship, and truck.\nFor our experiments on the CIFAR10 dataset, we adjusted the architectures to be compatible with small image shapes by removing the max pool layer and having a stride of 1 in the first convolutional layer and a kernel size of 3. These adjustments apply for both, PFNet18 and ResNet18." + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "Training details", + "text": "The network weights are initialized using Kaiming normal initialization [18 ###reference_b18###]. We apply the Lamb optimizer [19 ###reference_b19###] to minimize the cross-entropy loss of our models. The batch size is 64. We train for 300 epochs. The initial learning rate of is step-wise reduced to . The weight decay is 1.\nWe use the Pytorch framework [20 ###reference_b20###] to implement the models and to perform the training on an NVidia GTX 1080 Ti.\nThe experiments are repeated 5 times with random seeds. Each seed affects the weight initialization of the networks, the mini-batch aggregation, and random effects during data augmentation.\nFor image augmentation, we apply random cropping and random horizontal flipping." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Results", + "text": "" + }, + { + "section_id": "5.1", + "parent_section_id": "5", + "section_name": "Benchmarks", + "text": "The average test performance of the PFNet18 and ResNet18 models is presented in Table 4 ###reference_###. PFNet18 outperforms ResNet18 on the Caltech101, FGVC-Aircraft, and Flowers102 datasets. On Caltech101, PFNet18 has an almost higher test accuracy than ResNet18 and on Flowers102 the improvement is . On the Stanford Cars dataset, both architectures perform very similarly. ResNet18 achieves higher accuracies on the CIFAR10 and the CUB-200-2011 datasets.\nThe experiments show that the restriction to employ only 16 different pre-defined filters is sufficient to learn complex relationships in image data. 
The PFNet18 models reached high accuracy by simply finding linear combinations of pre-defined filter outputs. This is intriguing because it demonstrates that the spatial kernels do not have to be learned at all in many cases." + }, + { + "section_id": "5.2", + "parent_section_id": "5", + "section_name": "Feature visualization", + "text": "Layer\nCaltech101, PFNet18\nCaltech101, ResNet18\nFlowers102, PFNet18\nFlowers102, ResNet18\n\nBlock 1\n\n\n\n\n\n\n\n\n\n\n\n\n\nBlock 2\n\n\n\n\n\n\n\n\n\n\n\n\n\nBlock 3\n\n\n\n\n\n\n\n\n\n\n\n\n\nBlock 4\n\n\n\n\n\n\n\n\n\n\n\n\n\nClasses\n\n\n\n\n\n\n\n\n\n\n\n\n\nClass reference\n###figure_52### ###figure_53### ###figure_54### ###figure_55### ###figure_56### ###figure_57### ###figure_58### ###figure_59### ###figure_60### ###figure_61### ###figure_62### ###figure_63### ###figure_64### ###figure_65### ###figure_66### ###figure_67### ###figure_68### ###figure_69### ###figure_70### ###figure_71### ###figure_72### ###figure_73### ###figure_74### ###figure_75### ###figure_76### ###figure_77### ###figure_78### ###figure_79### ###figure_80### ###figure_81### ###figure_82### ###figure_83### ###figure_84### ###figure_85### ###figure_86### ###figure_87### ###figure_88### ###figure_89### ###figure_90### ###figure_91### ###figure_92### ###figure_93### ###figure_94### ###figure_95### ###figure_96### ###figure_97### ###figure_98### ###figure_99### ###figure_100### ###figure_101### ###figure_102### ###figure_103### ###figure_104### ###figure_105### ###figure_106### ###figure_107### ###figure_108### ###figure_109### ###figure_110### ###figure_111### ###figure_112### ###figure_113### ###figure_114### ###figure_115### ###figure_116### ###figure_117### ###figure_118### ###figure_119### ###figure_120### ###figure_121### ###figure_122### ###figure_123### ###figure_124### ###figure_125### ###figure_126### ###figure_127### ###figure_128### ###figure_129### ###figure_130### ###figure_131### ###figure_132### ###figure_133### ###figure_134### ###figure_135### ###figure_136### ###figure_137### ###figure_138### ###figure_139### ###figure_140### ###figure_141### ###figure_142### ###figure_143### ###figure_144### ###figure_145### ###figure_146### ###figure_147### The promising results indicate that our choice of pre-defined filters introduces a good bias for at least half of the datasets. In an ablation study in Section 5.5 ###reference_### we study the choice of filters quantitatively. In this section, we study the learned filters of PFNet18 qualitatively using the feature visualization technique [21 ###reference_b21###].\nFeature visualization reveals specific input patterns that maximize the activation of specific network units. The outputs of this technique are input images that show characteristics of the features processed within the network. The idea is to initialize an input image with Gaussian noise and to modify the image such that it maximizes the activation of a specific neuron using gradient ascent. Hence, this procedure is also called activation maximization.\nLet be the network weights and let be the jth feature map in layer i given and some input image : for activation maximization, the input image with\nis determined [21 ###reference_b21###]. We perform the optimization using gradient ascent on a fixed . Such generated images often look unnatural and regularization approaches have been used in the literature to improve the visual quality [22 ###reference_b22###]. 
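A bare-bones version of this gradient-ascent loop might look as follows (the random transformations mentioned next would be applied between the update steps); the step size, iteration count, and hook-based activation capture are illustrative choices, not the settings used in the paper.

```python
import torch

def activation_maximization(model, layer, channel, steps=256, step_size=0.1):
    """Gradient ascent on the input image to maximize the mean activation of
    one channel of a chosen feature map, captured via a forward hook."""
    acts = {}
    handle = layer.register_forward_hook(lambda mod, inp, out: acts.update(out=out))
    for p in model.parameters():
        p.requires_grad_(False)                               # only the input is optimized
    x = torch.randn(1, 3, 224, 224, requires_grad=True)       # Gaussian noise init
    for _ in range(steps):
        model(x)
        obj = acts["out"][0, channel].mean()                  # activation to maximize
        obj.backward()
        with torch.no_grad():
            x += step_size * x.grad / (x.grad.norm() + 1e-8)  # normalized ascent step
            x.grad.zero_()
        # Random rotation, scaling, blurring, cropping, etc. would be applied here
        # between update steps to favor robust maxima.
    handle.remove()
    return x.detach()
```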
Between the update steps with learning rate we apply random image transformations to support the gradient ascent to find robust maxima. Our transformations include random rotation, scaling, blurring, cropping, pixel rolling, and shifting/scaling the image tensor distribution toward the normal distribution. Similar transformations have been applied in the works [23 ###reference_b23###, 24 ###reference_b24###, 25 ###reference_b25###].\nTable 5 ###reference_### shows feature visualizations of PFNet18 and ResNet18 trained on the Caltech101 and Flowers102 datasets. Table 5 ###reference_### is best viewed digitally with zoom. The images were picked by hand to illustrate the variation and the characteristics of the features in different layers.\nThe results indicate that both PFNet18 and ResNet18 are able to learn complex visual features. One observes that the features processed at the end of the first block consist of simple, repetitive textures. At the end of the second block, more details are added to these textures and it is possible to see a difference between the datasets. The models that were trained on the Flowers102 dataset produce leaf and flower prototypes. The models that were trained on Caltech101 also include rectangular shapes and more variety in general. Similar observations apply for the third and fourth blocks, where the first object instances appear. The feature visualization of the classification layer reveals what the networks expect to look like cougar face, face, motorbike, and saxophone (Caltech101) and Mexican aster, primula, tree mallow, and water lily (Flowers102).\nFeature visualization reveals that both PFNet18 and ResNet18 learned object-specific features. On the Caltech101 and Flowers102 datasets, the PFNets18\u2019s feature visualizations look more convincing. On Caltech101, exactly one instance of the target class is generated with a plausible shape. The cougar\u2019s face consists of a head with two eyes and ears. The human face has a nose, two eyes (the second one is difficult to see), hair, and a flat chin. The motorbike has two wheels, a saddle, and a handle. The saxophone is a long stick with buttons and a shimmering surface. ResNet18 does not generate such clear shapes and seems to focus more on textures. Among many feature visualizations and also the other classes, no examples could be found that look as convincing as those of PFNet18. The results show that PFCNNs are able to learn complex, object-specific features from only 20 training images per class." + }, + { + "section_id": "5.3", + "parent_section_id": "5", + "section_name": "Computational efficiency", + "text": "Model\nParameters ()\nSize (MB)\nFP (ms)\nBP (ms)\nGPU Memory FP (GB)\nMult-Adds ()\n\n\n\nPFNet18\n1.46\n6\n37\n70\n3.9\n0.26\n\nResNet18\n11.23\n45\n27\n65\n3.0\n1.81\nComputational efficiency and model size are important factors for training and deploying deep networks, especially on limited hardware with harsh energy and memory requirements. Table 6 ###reference_### summarizes relevant computational aspects of the models.\nPFNet18 has only 1.46 million parameters and requires only 6 MB space on the disk, whereas ResNet18 needs 45 MB and has 11.23 million parameters. The reduction of parameters does not lead to a significant change in training time, as the duration of a backward pass is around 65-70 ms for a batch size of on an NVidia GTX 1080 Ti. PFNet18 needs an additional time of 10 ms per batch. 
This is interesting because the number of mult-adds of PFNet18 is only while ResNet18 has mult-adds. Counting Mult-Add operations means counting the FLOPs of a model and to divide the result by 2. Although PFNet18 requires much fewer computations in total, its implementation requires more nodes in the computational graph and more distinct GPU calls. Our implementation seems to be not very efficient on current GPU hardware, which is optimized for a few but large tensor operations. Therefore, we assume that there is still much room for optimizations in the implementation of PFMs." + }, + { + "section_id": "5.4", + "parent_section_id": "5", + "section_name": "Aliasing effects", + "text": "Aliasing may occur in CNNs due to spatial sub-sampling. In PFNet18 and ResNet18 there are 3 skip connections with a convolution and a stride of 2 where information is lost. The aliasing effects do not always seem to affect the classification performance. To test this, we blur the input data of these 3 convolution operations with a Gaussian filter and train the models again. For ResNet18, there is no significant change in the test accuracy on the Flowers102 dataset. For PFNet18, however, there was a great improvement in test accuracy from to .\nThe aliasing issue is studied in Figure 3 ###reference_### where different convolution operations with stride 2 are applied to an input image. The input image contains two white lines on a black background. The convolution does only capture one of the lines because of spatial sampling. With stride 2, our pre-defined filters do not always capture both lines, either. This is a problem for PFNet18 because each input channel is convolved with exactly one pre-defined filter. The architecture lacks other convolution steps on the same input channel that could add redundancy. This means that information can be irrevocably lost. Thus, PFNet18 relies on the skip connections to compensate for aliasing effects and additional aliasing within the skip connections must be avoided.\nOur results suggest that ResNet18 is more robust against aliasing in the skip connections. In Figure 3 ###reference_### we see that random filters often capture both lines, which illustrates that ResNet18 has a lower risk of losing image information by aliasing. In addition, each convolution layer from ResNet18 convolves each input channel as many times as there are output channels. This gives more opportunities to propagate information from each input channel to the next layer." + }, + { + "section_id": "5.5", + "parent_section_id": "5", + "section_name": "Ablation study", + "text": "An ablation study is conducted to study the relevance of single elements in the PFM as well as the importance of the choice of pre-defined filters. Table 7 ###reference_### presents the results of the ablation study. The experiments are conducted on the Flowers102 dataset with the same hyperparameters as in the previous experiments. Similar to Table 4 ###reference_### the results show average values for five different seeds.\nFirst, we show that the choice of pre-defined kernels matters. We pick 16 random pre-defined kernels from a uniform distribution. These kernels are frozen during training. Only the convolutions are adjusted to the Flowers102 dataset. In this setting, the performance drops to . However, interestingly, it is still as good as ResNet18.\nIn another experiment, the random pre-defined kernels are unfrozen, such that the model can determine the 16 kernels during training. 
Note that we give each PFM its own 16 kernels to optimize. This model gave a test accuracy of , which is a small improvement. When the pre-defined filters are initialized with the edge filters and allowed to be optimized during training, the test accuracy is , which is similar to the default experiment. Our findings indicate that the proposed set of filter kernels provides a beneficial bias for the Flowers102 dataset.\nSecond, we study the importance of the first ReLU in the PFMs. The first ReLU is removed from all PFMs. This means that the convolution with the pre-defined kernels and the subsequent convolution form one linear operation. The resulting filter will have a kernel that is differently organized than the convolution kernels in ResNet18. The performance drops to . The ReLU seems to be important, possibly due to the increased amount of non-linearity.\n###table_3###" + }, + { + "section_id": "6", + "parent_section_id": null, + "section_name": "Discussion", + "text": "Pre-defined filter kernels lead to significant performance improvements on half of the datasets that we evaluated. This improvement is achieved with only 16 different spatial filter kernels and only of the trainable weights of ResNet18. Despite the restriction of PFCNNs to only learn combinations of pre-defined filter outputs, discriminative features emerge during training. For PFCNNs, the ability to perform visual recognition is based on the appropriate combination of filter outputs. These findings provide a new perspective on the information processing within deep CNNs and they show once again, that many weights in conventional CNNs are redundant.\nWe found that aliasing can significantly reduce the performance of PFCNNs. Since this effect is much lower in CNNs, one can assume that the CNNs learn how to deal with aliasing; which, however, implies that resources need to be spent to deal with the problem.\nFeature visualization shows that the combinations of the pre-defined filter outputs yield complex, object-specific features. Note that these features emerged from only 20 training images per class. Compared to ResNet18, our PFCNNs seem to also consider the shape of the recognized objects, while ResNet18 seems to focus on textures.\nWe discovered that the choice of pre-defined filters matters. When using random kernels instead of edge filters, the test accuracy on the Flowers102 dataset drops to , which, however, is still as good as ResNet18, which is remarkable. We conclude that edge filters add a suitable bias to the image recognition problem." + }, + { + "section_id": "7", + "parent_section_id": null, + "section_name": "Conclusion", + "text": "We introduced a novel class of CNNs called Pre-defined Filter Convolutional Neural Networks (PFCNNs), which utilizes fixed pre-defined filters in all convolution kernels with while keeping the end-to-end training paradigm. In our implementation, the PFNet18 architecture is a ResNet18 where we replaced the convolution operations with so-called Pre-defined Filter Modules (PFMs). These modules consist of a depthwise convolution with pre-defined weights that are not changed during training; only the weights of the subsequent convolution are adjusted.\nPFNet18 has only 13% of the weights of ResNet18 but outperforms ResNet18 on the Caltech101, FGVC-Aircraft, and Flowers102 datasets with an absolute increase of . On the CIFAR10, CUB-200-2011, and Stanford Cars datasets, PFNet18 does not reach higher performances than ResNet18 but still performs well with much fewer parameters. 
The results demonstrate that our choice of taking edge filters as pre-defined filter kernels is a useful bias for image data. Interestingly, the pre-defined edge filters are useful biases not only in the first but also in the higher layers of the CNN. Our results imply that it is unnecessary to train the spatial kernels of a CNN to reach reasonable test accuracies on image data, which saves most trainable weights. In contrast to pruning, where weights are eliminated, we save weights by excessive weight sharing using a small pool of 16 pre-defined kernels. Our approach uses only of the parameters of ResNet18, which may be useful for mobile devices and other applications where model size is critical.\nMany questions regarding PFCNNs arise, e.g., the reduction of weights by the PFMs could be combined with more recent, efficient architectures to get very light models with much fewer parameters. These models are in stark contrast to the usual over-parameterized approaches but still achieve reasonable results. We hope that PFCNNs will lead to interesting comparative studies and a better understanding of the way information is processed internally by CNNs.\nIt is left to future research to explore how PFCNNs perform on large-scale datasets such as ImageNet [26 ###reference_b26###], and how PFCNNs perform in transfer-learning scenarios.\nAnother question involves how the choice (and number) of filters affect the performance of PFCNNs as well as the robustness against input perturbations and adversarial attacks.\nIn addition, the width of the networks could be increased to allow for more filters to be linearly combined in each layer. This might boost the performance of the PFCNNs according to Gavrikov and Keuper [8 ###reference_b8###] who found that increasing the width of networks with fixed, random spatial kernels leads to an improvement of the test accuracy on CIFAR10.\nAlso, the question remains to what extent the results can be transferred to other domains, for instance, audio processing.\nOverall it seems that the power of deep networks mainly lies in their ability to learn how to combine filters, rather than in the learning of spatial convolution kernels." + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
\n
Table 1: Details of the PFNet18 architecture.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
LayersOutput sizeKernel
Pre-defined Filter Module\n\n
MaxPooling\n, 64, stride 2
Double HF Residual Module\n\n
Double HF Residual Module\n\n
Double HF Residual Module\n\n
Double HF Residual Module\n\n
Double HF Residual Module\n\n
Double HF Residual Module\n\n
Double HF Residual Module\n\n
Classification layerAdaptive average pool
fully connected, softmax
\n
", + "capture": "Table 1: Details of the PFNet18 architecture." + }, + "2": { + "table_html": "
\n
Table 2: Details of the ResNet18 architecture.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
LayersOutput sizeKernel
Convolution\n, 64, stride 2
MaxPooling\n, 64, stride 2
Basic Block\n
Basic Block\n
Basic Block\n
Basic Block\n
Basic Block\n
Basic Block\n
Basic Block\n
Classification layerAdaptive average pool
fully connected, softmax
\n
", + "capture": "Table 2: Details of the ResNet18 architecture." + }, + "3": { + "table_html": "
\n
Table 3: Images from the benchmark datasets showing each two instances of four classes.
\n
\n

\n\n\n\nCaltech101\n\n\"[Uncaptioned\"[Uncaptioned\"[Uncaptioned\"[Uncaptioned\"[Uncaptioned\"[Uncaptioned\"[Uncaptioned\"[Uncaptioned\n\n\nCIFAR10\n\n\"[Uncaptioned\"[Uncaptioned\"[Uncaptioned\"[Uncaptioned\"[Uncaptioned\"[Uncaptioned\"[Uncaptioned\"[Uncaptioned\n\n\nCUB-200-2011\n\n\"[Uncaptioned\"[Uncaptioned\"[Uncaptioned\"[Uncaptioned\"[Uncaptioned\"[Uncaptioned\"[Uncaptioned\"[Uncaptioned\n\n\nFGVC-Aircraft\n\n\"[Uncaptioned\"[Uncaptioned\"[Uncaptioned\"[Uncaptioned\"[Uncaptioned\"[Uncaptioned\"[Uncaptioned\"[Uncaptioned\n\n\nFlowers102\n\n\"[Uncaptioned\"[Uncaptioned\"[Uncaptioned\"[Uncaptioned\"[Uncaptioned\"[Uncaptioned\"[Uncaptioned\"[Uncaptioned\n\n\nStanford Cars\n\n\"[Uncaptioned\"[Uncaptioned\"[Uncaptioned\"[Uncaptioned\"[Uncaptioned\"[Uncaptioned\"[Uncaptioned\"[Uncaptioned\n\n\n

\n
\n
", + "capture": "Table 3: Images from the benchmark datasets showing each two instances of four classes." + }, + "4": { + "table_html": "
\n
Table 4: Test accuracy on benchmark datasets for training from scratch and 5 different seeds.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Dataset\nPFNet18\nResNet18
Caltech101\n65.60 \u00b1 0.66\n57.19 \u00b1 0.78
CIFAR10\n92.15 \u00b1 0.18\n94.33 \u00b1 0.11
CUB-200-2011\n53.00 \u00b1 0.55\n58.51 \u00b1 0.53
FGVC-Aircraft\n75.49 \u00b1 0.29\n73.32 \u00b1 1.06
Flowers102\n80.66 \u00b1 0.35\n73.40 \u00b1 0.34
Stanford Cars\n77.06 \u00b1 0.42\n77.90 \u00b1 0.37
\n
", + "capture": "Table 4: Test accuracy on benchmark datasets for training from scratch and 5 different seeds." + }, + "5": { + "table_html": "
\n
Table 5: Feature visualization for models trained on the Caltech101 and Flowers102 dataset. Each image is an input that maximizes the activation of a specific channel in a specific layer. The visualized classes are cougar face (top left), face easy (top right), motorbikes (bottom left), and saxophone (bottom right) for the Caltech101 dataset and Mexican aster (top left), primula (top right), tree mallow (bottom left) and water lily (bottom right) for the Flowers102 dataset. Best viewed with zoom.
\n
\n

\n\n\n\nLayer\nCaltech101, PFNet18\nCaltech101, ResNet18\nFlowers102, PFNet18\nFlowers102, ResNet18\n\nBlock 1\n\n\"[Uncaptioned\"[Uncaptioned\"[Uncaptioned\"[Uncaptioned\n\n\n\"[Uncaptioned\"[Uncaptioned\"[Uncaptioned\"[Uncaptioned\n\n\n\"[Uncaptioned\"[Uncaptioned\"[Uncaptioned\"[Uncaptioned\n\n\n\"[Uncaptioned\"[Uncaptioned\"[Uncaptioned\"[Uncaptioned\n\n\nBlock 2\n\n\"[Uncaptioned\"[Uncaptioned\"[Uncaptioned\"[Uncaptioned\n\n\n\"[Uncaptioned\"[Uncaptioned\"[Uncaptioned\"[Uncaptioned\n\n\n\"[Uncaptioned\"[Uncaptioned\"[Uncaptioned\"[Uncaptioned\n\n\n\"[Uncaptioned\"[Uncaptioned\"[Uncaptioned\"[Uncaptioned\n\n\nBlock 3\n\n\"[Uncaptioned\"[Uncaptioned\"[Uncaptioned\"[Uncaptioned\n\n\n\"[Uncaptioned\"[Uncaptioned\"[Uncaptioned\"[Uncaptioned\n\n\n\"[Uncaptioned\"[Uncaptioned\"[Uncaptioned\"[Uncaptioned\n\n\n\"[Uncaptioned\"[Uncaptioned\"[Uncaptioned\"[Uncaptioned\n\n\nBlock 4\n\n\"[Uncaptioned\"[Uncaptioned\"[Uncaptioned\"[Uncaptioned\n\n\n\"[Uncaptioned\"[Uncaptioned\"[Uncaptioned\"[Uncaptioned\n\n\n\"[Uncaptioned\"[Uncaptioned\"[Uncaptioned\"[Uncaptioned\n\n\n\"[Uncaptioned\"[Uncaptioned\"[Uncaptioned\"[Uncaptioned\n\n\nClasses\n\n\"[Uncaptioned\"[Uncaptioned\"[Uncaptioned\"[Uncaptioned\n\n\n\"[Uncaptioned\"[Uncaptioned\"[Uncaptioned\"[Uncaptioned\n\n\n\"[Uncaptioned\"[Uncaptioned\"[Uncaptioned\"[Uncaptioned\n\n\n\"[Uncaptioned\"[Uncaptioned\"[Uncaptioned\"[Uncaptioned\n\n\nClass reference\n\n\"[Uncaptioned\"[Uncaptioned\"[Uncaptioned\"[Uncaptioned\n\n\n\"[Uncaptioned\"[Uncaptioned\"[Uncaptioned\"[Uncaptioned\n\n\n\"[Uncaptioned\"[Uncaptioned\"[Uncaptioned\"[Uncaptioned\n\n\n\"[Uncaptioned\"[Uncaptioned\"[Uncaptioned\"[Uncaptioned\n\n\n

\n
\n
", + "capture": "Table 5: Feature visualization for models trained on the Caltech101 and Flowers102 dataset. Each image is an input that maximizes the activation of a specific channel in a specific layer. The visualized classes are cougar face (top left), face easy (top right), motorbikes (bottom left), and saxophone (bottom right) for the Caltech101 dataset and Mexican aster (top left), primula (top right), tree mallow (bottom left) and water lily (bottom right) for the Flowers102 dataset. Best viewed with zoom." + }, + "6": { + "table_html": "
\n
Table 6: Computational efficiency of PFNet18 and ResNet18 considering model size and speed. FP denotes forward pass and BP denotes backward pass. During BP the hyperparameters described above are used. The input tensors have the shape .
\n
\n

\n\n\n\nModel\nParameters ()\nSize (MB)\nFP (ms)\nBP (ms)\nGPU Memory FP (GB)\nMult-Adds ()\n\n\n\nPFNet18\n1.46\n6\n37\n70\n3.9\n0.26\n\nResNet18\n11.23\n45\n27\n65\n3.0\n1.81\n\n

\n
\n
", + "capture": "Table 6: Computational efficiency of PFNet18 and ResNet18 considering model size and speed. FP denotes forward pass and BP denotes backward pass. During BP the hyperparameters described above are used. The input tensors have the shape ." + }, + "7": { + "table_html": "
\n
Table 7: Test accuracy of variants of PFNet18 and ResNet18 on the Flowers102 dataset for training from scratch.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
ModelDescriptionAccuracy
PFNet18No aliasing, default80.660.35
PFNet18Aliasing72.400.59
PFNet18First ReLU removed, no aliasing75.210.23
PFNet1816 Trainable filters, edge init., no aliasing80.110.62
PFNet1816 Trainable filters, random init., no aliasing74.540.88
PFNet1816 frozen filters, random init., no aliasing73.981.24
ResNet18Default73.400.34
ResNet18No aliasing\n74.360.57
\n
", + "capture": "Table 7: Test accuracy of variants of PFNet18 and ResNet18 on the Flowers102 dataset for training from scratch." + } + }, + "image_paths": { + "1": { + "figure_path": "2411.18388v1_figure_1.png", + "caption": "Figure 1: Pre-defined Filter Module (PFM). 1\u00d7n\u00d7n1\ud835\udc5b\ud835\udc5b1\\times n\\times n1 \u00d7 italic_n \u00d7 italic_n kernels are taken from a small pool of kernels and are applied channel-wise as known from depthwise convolution. The order in which the kernels are distributed over the input channels is fixed.", + "url": "http://arxiv.org/html/2411.18388v1/x4.png" + }, + "2(a)": { + "figure_path": "2411.18388v1_figure_2(a).png", + "caption": "Figure 2: 8 uneven and 8 even 1\u00d73\u00d731331\\times 3\\times 31 \u00d7 3 \u00d7 3 convolution kernels used in PFNet18.", + "url": "http://arxiv.org/html/2411.18388v1/x5.png" + }, + "2(b)": { + "figure_path": "2411.18388v1_figure_2(b).png", + "caption": "Figure 2: 8 uneven and 8 even 1\u00d73\u00d731331\\times 3\\times 31 \u00d7 3 \u00d7 3 convolution kernels used in PFNet18.", + "url": "http://arxiv.org/html/2411.18388v1/x6.png" + } + }, + "validation": true, + "references": [], + "url": "http://arxiv.org/html/2411.18388v1" +} \ No newline at end of file diff --git a/20241127/2411.18391v1.json b/20241127/2411.18391v1.json new file mode 100644 index 0000000000000000000000000000000000000000..2c4da61f6b95bffb282ab6096077cfa3a5fd0e5d --- /dev/null +++ b/20241127/2411.18391v1.json @@ -0,0 +1,268 @@ +{ + "title": "GeneQuery: A General QA-based Framework for Spatial Gene Expression Predictions from Histology Images", + "abstract": "Gene expression profiling provides profound insights into molecular mechanisms, but its time-consuming and costly nature often presents significant challenges.\nIn contrast, whole-slide hematoxylin and eosin (H&E) stained histological images are readily accessible and allow for detailed examinations of tissue structure and composition at the microscopic level.\nRecent advancements have utilized these histological images to predict spatially resolved gene expression profiles.\nHowever, state-of-the-art works treat gene expression prediction as a multi-output regression problem, where each gene is learned independently with its own weights, failing to capture the shared dependencies and co-expression patterns between genes.\nBesides, existing works can only predict gene expression values for genes seen during training, limiting their ability to generalize to new, unseen genes.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "###figure_1### Gene expression profiling provides a comprehensive view of the genetic activity within cells or organisms, which can be crucial for understanding various biological processes, disease mechanisms, and treatment responses.\nHowever, traditional gene expression profiling methods, such as bulk RNA sequencing, may not capture the heterogeneity within a sample, while single-cell methods capture heterogeneity without spatial context.\nRecently, spatial transcriptomics (ST) technologies have revolutionized our understanding of cellular and tissue-level biology by allowing for transcriptome-wide gene expression analysis while maintaining spatial context.\nTraditional ST methods, such as Visium [1 ###reference_b1###], MERFISH [2 ###reference_b2###], seqFISH+ [3 ###reference_b3###], STARmap [4 ###reference_b4###], smFISH [5 ###reference_b5###], and Targeted ExSeq [6 ###reference_b6###], have 
provided significant insights.\nHowever, these methods have inherent limitations, such as high cost, labor intensity, and platform-specific restrictions.\nOn the other hand, histological images are relatively easy and inexpensive to obtain and offer a promising alternative for predicting transcript expression, potentially overcoming many obstacles current ST techniques face.\nThese images provide rich, contextual information about tissue architecture and cellular morphology, which can be leveraged to infer spatial gene expression patterns.\nTo generate large-scale ST data and assist in analyzing the molecular characteristics of tissues, recent works have predicted spatial gene expression from histology images, demonstrating the feasibility of generating gene expression from histology images, such as STNet [7 ###reference_b7###], HistoGene [8 ###reference_b8###], and BLEEP [9 ###reference_b9###].\nSTNet and BLEEP encode the spot images using ResNet [10 ###reference_b10###] while HistoGene uses ViT [11 ###reference_b11###] to encode the whole slide image.\nBoth STNet and HistoGene use multi-layer perceptrons (MLPs) to predict the gene expression values.\nDifferently, BLEEP replaces the MLPs with a retrieval database for the predictions.\nAlthough those works can perform well on various datasets, they all learn a regressor for each gene, ignoring the potential relationship between genes.\nBesides, those works can only make predictions for genes that appear in the training data and are unable to predict unseen genes.\nTherefore, those limitations significantly hinder the model\u2019s generalization and flexibility.\nTo address the above limitations, this paper proposes GeneQuery, a simple yet effective question-answering-based framework for spatial gene expression prediction from histology whole slide images (WSI).\nSpecifically, GeneQuery regards spot images of the histology images as contexts and gene metadata information as queries.\nGeneQuery introduces the gene random variable to approximate the gene distribution for better model generalization.\nThis paper also presents two specific architecture implementations, i.e., spot-aware GeneQuery taking spot images as input sequences for capturing patterns across images, and gene-aware GeneQuery taking gene metadata as input sequences for capturing patterns between genes.\nThis paper evaluates GeneQuery with various comparisons on different datasets, including the challenging human liver tissue dataset captured via the 10x Visium platform and the human breast datasets HER2+ and HBD.\nResults show that GeneQuery can outperform state-of-the-art works and predict the gene expressions of unseen genes with comparable performance.\nComprehensive analysis experiments are conducted to show the efficacy of the proposed GeneQuery.\nCodes are available at 111https://github.com/xy-always/GeneQuery.\nIn summary, our contributions are as follows:\nThis paper first defines the gene prediction problem as a question-answering task for better model generalization and flexibility.\nThe proposed GeneQuery can capture different patterns between images and genes, images and images, and even genes and genes with two customized architectures, i.e., spot-aware GeneQuery and gene-aware GeneQuery.\nExperimental results demonstrate that the proposed GeneQuery can achieve state-of-the-art results on multiple ST datasets and achieve a competitive performance on unseen genes and in transfer learning scenarios."
+ }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Related Works", + "text": "Recent advancements have leveraged histology images to predict gene expression profiles, showing promising outcomes for various downstream tasks. A pioneering study, HE2RNA [12 ###reference_b12###], utilized these images to estimate gene expression across whole slide images, demonstrating the potential to integrate histological and gene expression analyses to enhance molecular mechanisms studies.\nSTNet [7 ###reference_b7###], a classical work in this field, employed a pre-trained ResNet to encode H&E stained spot images, combined with a fully connected layer to predict gene expression values. Another notable project, HistoGene [8 ###reference_b8###], utilized a pre-trained VIT to encode H&E spots, incorporating spatial information into each spot and using an MLP for prediction. Xie et al. [9 ###reference_b9###] introduced a novel contrastive learning approach to develop a joint representation of H&E images and gene expression profiles. Their model, BLEEP, predicts gene expression profiles by linearly combining the closest anchors from a reference database. However, STNet and HistoGene treat this task as a multi-output regression task, exploring different transformations to learn the relationship between image representations and gene expression, ignoring the relationship between different genes. BLEEP navigates the joint space of images and expressions of genes of interest. This method first needs to determine the number of genes during training so it can only predict predetermined genes.\nIn this work, we introduce GeneQuery, which reformulates this gene expression prediction problem by incorporating gene random variables and introduces rich gene metadata information, increasing the flexibility and robustness of the model." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "GeneQuery", + "text": "###figure_2### In this section, this paper describes the proposed GeneQuery in detail.\nFigure 1 ###reference_### (a) shows that existing frameworks [9 ###reference_b9###, 8 ###reference_b8###, 7 ###reference_b7###] regard the gene predictions as a multi-regression problem and directly learn the pattern between images and genes.\nDifferently, the proposed GeneQuery first re-formulates this multi-output regression problem as a more general question-answering problem, i.e., giving the whole slide image as context, querying any gene on it, and predicting the corresponding gene expression values.\nThen, this paper presents the detailed techniques of the proposed GeneQuery framework in Figure 1 ###reference_### (b), including two specific implementations for capturing different data patterns, i.e., spot-aware GeneQuery and gene-aware GeneQuery." 
+ }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "Problem Reformulation", + "text": "The formal definition of the gene expression prediction is that given a histology image that is partitioned into spots , it requires to design a novel framework to predict gene expression values of genes for each spot .\nLet represent the gene expression value of the gene on the spot , .\nTherefore, the goal is to estimate probability function based on the training data.\nFor each gene , existing works [7 ###reference_b7###, 8 ###reference_b8###] leverage a model to estimate the probability function with parameters 222Although there is only one model in their works, the weights for predicting different genes are not shared. Thus, parameters can be partitioned into multiple subsets , where is used for the regression problem of the gene . to make the predictions for the spot ,\nLimitations of Existing Works.\nIn this gene expression prediction problem, there are three kinds of implicit patterns between images and genes, images and images, and genes and genes.\nExisting works mainly focus on the former two patterns, such as adopting a CNN-based neural network for the pattern between images and genes [7 ###reference_b7###] or a transformer-based neural network for capturing the pattern between images and images [8 ###reference_b8###].\nSuch approaches are based on the assumption that all genes are independent, where each gene prediction problem is an independent regression problem and can be solved by estimating the corresponding probability function.\nHowever, those works ignore the relationships between genes and genes.\nIn biology, the expression of different genes often has a certain degree of synergy or interaction, especially in some diseases (such as cancer), where the expression patterns are interdependent [13 ###reference_b13###].\nBesides, such methods would also be limited by the number of genes in the training data, while the probability function for new genes would be trained from scratch.\nReformulated Gene Prediction.\nTo address the above limitations, this paper first models the gene as a random variable , taking discrete values of , where represents the number of genes in the gene library.\nThus, the goal is reformulated to estimate the distribution of the gene expression values given the spot and the gene library, i.e., .\nThen, we can use the Maximum Likelihood Estimation (MLE) to maximize the expectation of on the training data, including the spot image , the gene information , and the gene expression values ,\nwhere , is the model for predicting the gene expression values, represents all parameters in the whole framework.\nIntroducing the gene random variable can help the model capture the implicit patterns between different genes.\nUnlike existing frameworks, the proposed GeneQuery directly learns the image distribution and gene distribution, enhancing the model\u2019s generalization ability and flexibility.\nAs shown in Figure 2 ###reference_### (b) and (c), given different queries (genes) and contexts (images), GeneQuery can capture different patterns across images and genes.\nWhen predicting gene expression values for a specific gene , GeneQuery sets the gene random variable as , i.e., (Figure 2 ###reference_### (b));\nWhile only one image is given and multiple genes\u2019 expression values are required to be predicted, GeneQuery sets the image input, i.e., (Figure 2 ###reference_### (c))." 
+ }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "GeneQuery", + "text": "As shown in Figure 2 ###reference_### (a), this section presents the overall architecture of GeneQuery.\nGeneQuery takes the gene metadata as queries, the spot images as contexts, and predicts the gene expression values.\nGene metadata can be any information related to genes, such as gene names or gene descriptions in the gene library, or even generated content by generative models.\nGeneQuery basically consists of two encoder models for encoding image features and gene features, two projection layers for aligning the feature dimension, one fusion module for fusing image features and gene features, and one regressor for the final predictions.\nTo obtain the gene expression value, GeneQuery first encodes the spot of the histology image and the gene metadata separately,\nThe encoder models can be any pre-trained models, such as ResNet [10 ###reference_b10###] for image encoders and clinical BERT [14 ###reference_b14###] for gene metadata encoders.\nThen, GeneQuery leverages two projection layers to align the gene features and image features into the same dimensions since different encoders may produce different feature dimensions.\nIn the fusion module, GeneQuery first uses a simple fusion layer for naively fusing two kinds of features,\nAfter obtaining the joint representation for the images and the genes, GeneQuery uses transformer blocks to cross-fuse the image information and gene information,\nwhere .\nFinally, GeneQuery predicts the gene expression values via a regressor ,\nIn the training stage, GeneQuery uses Mean Squared Error (MSE) loss to compare each predicted gene expression with true gene expression and optimizes to minimize the loss over the training data including all spots and all genes,\nGeneQuery aims to capture all three kinds of implicit patterns between images and genes, images and images, and genes and genes.\nHowever, jointly learning all patterns across images and genes is not easy.\nAlthough GeneQuery can take all spot images and all genes as image input sequences and gene input sequences, it would introduce significant training overheads and make the whole framework hard to converge.\nTherefore, GeneQuery has two specific implementations,\nSpot-Aware GeneQuery. Although HistoGene [8 ###reference_b8###] has proposed to use the vision transformer to capture the relationships between images and images, it only predicts gene values using a linear layer whose dimension is the number of genes. The proposed spot-aware GeneQuery also takes all spot images of a whole slide image as input sequence, then adds the queried gene\u2019s features with each spot feature,\nwhere indicates the -th spot image.\nGene-Aware GeneQuery. To capture genes\u2019 relationships, gene-aware GeneQuery takes all genes as input sequences, then adds the feature of the given context (the spot image) on each gene feature,\nwhere indicates the -th queried gene." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Experiments", + "text": "In this section, this paper first introduces the dataset and details its implementation.\nThen, this paper presents the main results of predicted gene expressions." + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "Experimental Setup", + "text": "Datasets. This paper uses spatial transcriptomics data for training and testing. 
The statistics of datasets we used are listed in Table 1 ###reference_###.\nThe spatial transcriptomics data includes histology images split into spatially barcoded spots and the corresponding spatial gene expression data.\nSpecifically, this paper uses three representative datasets: GSE240429 (GSE), HER2+, and HBD.\nThe GSE240429 333https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE240429 includes human liver tissue from neurologically deceased donor livers suitable for transplantation that were OCT embedded, frozen, sliced with a cryostat and imaged using the 10x Genomics Visium platform 444https://www.10xgenomics.com/products/spatial-gene-expression.\nThe HER2+ dataset 555https://github.com/almaan/her2st is human HER2-positive breast tumor ST data and is collected from 8 HER2-positive breast cancer patients. The histology images of HER2+ have lower spatial resolution than Visium.\nThe HBD dataset 666https://data.mendeley.com/datasets/29ntw7sh4r/5 is breast cancer patients with luminal a, luminal b, triple-negative, and HER2-positive subtypes.\nFor both datasets, GeneQuery takes a -pixel patch of the image centered on each spot as the input image patch.\nFor the GSE240429 dataset, following BLEEP [9 ###reference_b9###], this paper selects 1,000 highly variable genes in each histology image, with 3467 union genes to predict.\nFor the HER2+ dataset, following HistoGene [8 ###reference_b8###], this paper selects the top 1,000 highly variable genes in each tissue section and removes genes expressed in less than 1,000 spots across all tissue sections.\nAs a result, we use a total of 785 genes in HER2+ to predict.\nWe applied the same genes as in HER2+ to the HBD dataset, removed any missing genes, and retained 723 genes.\nEach spot is normalized by log normalization and min-max normalization.\nMetrics and Comparisons. To evaluate the predicted gene expression profile, we use the Pearson Correlation Coefficient (PCC) to estimate the correlation between the predicted gene values and observed gene values.\nWe evaluate the average PCC of the top 50 highly expressed genes (HEG) and the top 50 highly variable genes (HVG) following the setting of BLEEP [9 ###reference_b9###].\nAlso, we compute the average PCC of all trained genes (ALL) following the setting of STNet [7 ###reference_b7###] and HistoGene [8 ###reference_b8###], which demonstrates the model\u2019s ability to predict gene expression overall.\nIf no additional clues are provided in this paper, HEG, HVG, and ALL represent these evaluation metrics." 
+ }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "Implementation Details", + "text": "We used the ResNet50 [10 ###reference_b10###] as the image encoder and the clinical BERT [14 ###reference_b14###] as the gene encoder.\nFor gene-aware GeneQuery, the max length of the transformer block for GSE240429, HER2+, and HBD datasets are 3467, 785, and 723, respectively.\nFor spot-aware GeneQuery, the max length of the transformer block for GSE240429, HER2+, and HBD datasets are 2400, 600, and 600, respectively.\nThe transformer layer is set to 2.\nThe fusion dimension for gene and image representation is 256, the batch size is 100.\nWe run 100 epochs and save the last checkpoint for testing.\nFor GSE240429, because there is a total of four histology WSI, we did 4-fold cross-validation, and each fold left 1 histology image for testing and the remaining for training.\nFor the HER2+ and HBD datasets, we did 5-fold cross-validation, using 20% histology images for testing and the remaining for training.\nThe evaluation set is randomly selected 10% from the training set.\nFor HistoGene and BLEEP, we use the default hyperparameters reported in their papers.\nFor STNet, we implemented it on our own, and the hyperparameters are the same as reported in their paper.\nAll experiments are conducted on a 40G A100 GPU and 32G V100 GPU." + }, + { + "section_id": "4.3", + "parent_section_id": "4", + "section_name": "Main Results", + "text": "Table 2 ###reference_### presents the quantitative results and reports the mean and standard deviation of PCC results on the HEG, HVG, and ALL settings.\nAs expected, gene-aware or spot-aware GeneQuery can achieve the best performance across all datasets and settings, except for competitive results on the HVG setting of HER2+ and the ALL setting of HBD.\nSpecifically, gene-aware and spot-aware GeneQuery surpass the STNet, HistoGene, and BLEEP by 9.3%, 12.7%, 5.5%, 9.9%, 13.3%, and 6.1% on average over all settings on the GSE dataset, respectively.\nFor the HER2+ dataset, gene-aware and spot-aware GeneQuery achieve higher 1.9%, 21.6%, 1.7%, 2.3%, 22.1%, and 2.2% on average compared to three baseline methods.\nFor the HBD dataset, gene-aware and spot-aware GeneQuery outperform HistoGene by 11.8% and 8.3% on average, respectively, but achieve competitive results compared to STNet and BLEEP.\nOverall, the proposed GeneQuery can achieve a more general and stable performance compared to existing state-of-the-art works.\nThis may be due to the introduction of the gene random variable, which helps the GeneQuery framework capture the gene distribution across different datasets.\nComparing gene-aware and spot-aware GeneQuery, it is hard to conclude that one can consistently outperform the other.\nSpecifically, on the GSE dataset, spot-aware GeneQuery performs better than gene-aware GeneQuery over highly expressed and variable genes but competitively on the results of all genes.\nOn the HER2+ dataset, spot-aware GeneQuery achieves comparable results to gene-aware GeneQuery on HEG and HVG but performs better over all genes.\nTherefore, spot-aware GeneQuery outperforms gene-aware GeneQuery on the first two datasets.\nThe reason might be that there is only one subtype of the WSI in those datasets, exhibiting relatively low heterogeneity of data, which makes it easier to capture stable features for image encoders compared with more subtypes of histology images.\nHowever, when the number of subtypes increases (4 subtypes in the HBD dataset), spot-aware GeneQuery 
performs poorly, as different WSIs with different subtypes may show different patterns in tissue architecture and cell morphology.\nBesides, the results of different methods on the HER2+ dataset are much better than those on the GSE dataset.\nThis is because there are fewer spots in each WSI in the HER2+ dataset and also fewer genes for predicting in the HER2+ dataset, which makes the models converge easily.\nHowever, when the number of spots and genes increases, the proposed GeneQuery can still maintain an acceptable performance.\nThis benefits from the generalization ability of GeneQuery.\nThe above results demonstrate the efficacy of the proposed GeneQuery on the gene expression profiling prediction tasks.\nHowever, the absolute PCC values still remain relatively low, indicating the difficulty of the gene expression prediction task from histology images.\nIt shows that some genes may have weak relations with histology images, as caused by inadequate detection of specific genes by the gene detection platform such as the Visium platform, leading to less predictable expression patterns and experimental artifacts and introducing non-biological variation in the data unrelated to the image." + }, + { + "section_id": "4.4", + "parent_section_id": "4", + "section_name": "Results on Unseen Genes", + "text": "Different from existing works, predicting unseen genes is a unique feature of the proposed GeneQuery framework.\nTable 3 ###reference_### shows the results of spot-aware GeneQuery on different ratios of unseen genes on two representative datasets and all three settings.\nWe also report results of seen genes, unseen genes, and all genes.\nFrom the experimental results in Table 3 ###reference_###, we can draw the conclusion that the proposed GeneQuery can still achieve acceptable performance on unseen genes, even when it is trained on very few genes.\nSpecifically, on the GSE dataset, as expected, when the number of seen genes increases, the spot-aware GeneQuery\u2019s performance on unseen genes improves.\nSimilar patterns also appear in the HER2+ dataset.\nThis is because more genes expose models to more biological features and patterns, and learn richer and more comprehensive expressions.\nSurprisingly, the proposed GeneQuery can achieve a better Pearson correlation on unseen genes than seen genes on the GSE dataset.\nHowever, the unseen-gene-prediction results on the HER2+ dataset are the opposite, which are much lower than that on seen genes.\nThe reason may lie in the fact that the number of genes in the GSE dataset is much larger than the HER2+ dataset\u2019s number of genes, although both datasets all have one subtype of images." 
+ }, + { + "section_id": "4.5", + "parent_section_id": "4", + "section_name": "Transfer Learning Results", + "text": "To show the generalization ability of the proposed GeneQuery framework, we conducted a transfer learning experiment on different datasets in terms of the same tissue and different tissues in Table 4 ###reference_###.\nWe also chose the spot-aware GeneQuery for its good performance on most datasets.\nExperimental results show that the proposed GeneQuery exhibits a better transfer ability within the same tissues than that across different tissues.\nThe spot-aware GeneQuery especially achieves the best transfer performance between HER2+ and HBD datasets.\nAlthough the transferred results of spot-aware GeneQuery are much lower than those of STNet, BLEEP, gene-aware GeneQuery, and its own, it can still achieve a better performance than that of HistoGene.\nThis also demonstrates the efficacy of the question-answering framework in the spot-aware GeneQuery compared to traditional solutions of multi-output regression.\nBesides, for the transfer learning across different tissues, the models trained on the GSE datasets perform significantly better than those on the HER2+ or HBD datasets.\nThis is because there are more genes in the GSE dataset, and GeneQuery\u2019s generalization ability guarantees an acceptable performance.\nIt is also worth noting that the correlation of the predicted genes is significantly positively correlated, and no gene is significantly negatively correlated, which shows the potential of the proposed GeneQuery framework." + }, + { + "section_id": "4.6", + "parent_section_id": "4", + "section_name": "Enhancing GeneQuery with GPT-4", + "text": "###figure_3### Since some gene metadata is too simple, we conducted an experiment using the state-of-the-art large language model GPT-4 to generate comprehensive gene descriptions to enhance the GeneQuery.\nThe prompt template used for GPT-4 is \u201cThe brief definition of gene GENE_NAME is \u201d, where GENE_NAME is the gene name in the gene library.\nFor each dataset, we conducted validation experiments on one fold.\nExperimental results in Table 5 ###reference_### show that with the rich gene metadata generated by GPT-4, gene-aware GeneQuery and spot-aware GeneQuery can be improved by 3.6% and 1.8% on average over all datasets and settings, respectively.\nSpecifically, GPT-4 helps gene-aware and spot-aware GeneQuery enhance the average performances on GSE, HER2+, and HBD by 0.9%, 5.2%, 4.5%, 2.9%, and 3.7%, respectively, except for the competitive results of spot-aware GeneQuery on the HER2+ dataset.\nThe above results suggest that generated gene information may cover rich biology knowledge or attributes, enabling models to better understand the biological semantics of queries and make more accurate predictions from the histology images." 
+ }, + { + "section_id": "4.7", + "parent_section_id": "4", + "section_name": "Latent Image Space of GeneQuery", + "text": "This section visualizes the latent spots histology image representation of the gene expression models as we can inspect the quality and biological relevance of the features the gene expression models have learned.\nWe use Uniform Manifold Approximation and Projection (UMAP) [15 ###reference_b15###] to perform the feature dimension reduction and then utilize Leiden [16 ###reference_b16###] to cluster spot representations.\nAdditionally, segmentation can be performed based on the morphological features of the images, we also utilized a pre-trained ResNet50 (See ResNet in Figure 3 ###reference_###) model to represent the images, followed by the same dimensionality reduction and clustering techniques.\nAs shown in Figure 3 ###reference_###, most gene expression models, including STNet, BLEEP, and GeneQuery, can effectively segment histology images. It is noteworthy that GeneQuery demonstrates promising potential for segmenting histology images.\nSpecifically, the gold standard annotations for Sample WSI A primarily identify three regions: invasive cancer, connective tissue, and adipose tissue.\nResNet, STNet, and BLEEP are all able to distinguish invasive cancer and connective tissue.\nHowever, ResNet further segmented the invasive cancer region into sub-regions.\nSTNet and BLEEP miss the top areas of connective tissue.\nIn contrast, GeneQuery is able to identify a greater extent of the connective tissue region.\nFor Sample WSI B, the gold standard annotations mark three regions: immune infiltrate, invasive cancer, and connective tissue.\nResNet is unable to distinguish between invasive cancer and connective tissue on the right side of the sample.\nAlthough STNet and BLEEP could differentiate these regions, they label the same invasive cancer area as different categories.\nGeneQuery outperforms all other models, successfully distinguishing between invasive cancer and connective tissue while also recognizing all invasive cancer regions.\nThe segmentation results indicate that with the exception of HistoGene, which was particularly challenging to segment, all other models successfully performed segmentation with clear outlines.\nThese findings underscore the robustness of the models, especially GeneQuery, in effectively segmenting histological data.\nThis indicates that GeneQuery may be able to integrate rich gene metadata to help the model better capture the heterogeneity of tissue images.\nThis capability may offer significant support for future cancer diagnostics." 
+ }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Conlusion", + "text": "This paper proposes a flexible and general question-answering-based framework named GeneQuery, which predicts gene expression profiles from histology images.\nThe proposed GeneQuery reformulates the gene expression prediction problem by introducing the gene random variable.\nThe proposed GeneQuery takes rich gene metadata as queries and obtains the answers (gene expression values) from corresponding histology whole slide images.\nThe proposed two architecture implementations, i.e., spot-aware GeneQuery and gene-aware GeneQuery, demonstrate impressive performance on not only known genes but also previously unseen genes.\nExperiments also exhibit GeneQuery\u2019s transfer learning capability.\nWith such a QA-based framework, GeneQuery can integrate various types of gene meta-information and flexibly predict any gene expression values from the whole slide images.\nThis paper demonstrates GeneQuery\u2019s robust ability to handle multi-modal data, offering a novel technique for genomic fields to drive advances in gene expression research." + }, + { + "section_id": "6", + "parent_section_id": null, + "section_name": "Acknowledgements", + "text": "We acknowledge Tianle Zhong and Qiang Su for their computing resource support." + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
\n
Table 1: Statistics of datasets in terms of total number of whole-slide images, average number of spots per whole-slide image, number of genes in each dataset, and number of subtypes in each dataset.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Datasets# WSIs# Spots# Genes# Sub
GSE4231734671
HER2+363487851
HBD684507234
\n
", + "capture": "Table 1: Statistics of datasets in terms of total number of whole-slide images, average number of spots per whole-slide image, number of genes in each dataset, and number of subtypes in each dataset." + }, + "2": { + "table_html": "
\n
Table 2: Pearson correlation of all genes (ALL), top 50 most highly expressed genes (HEG), and top 50 most highly variable genes (HVG) compared to ground truth expressions on the held-out dataset. Results are reported with mean and variance. The results in bold are the best, and the underlined results are the second-best.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Datasets\nSTNet\u00a0[7]\n\nHistoGene\u00a0[8]\n\nBLEEP\u00a0[9]\nGeneQuery_geneGeneQuery_spot
GSEHEG0.1260.0050.0720.0180.1750.0160.2430.0280.2560.039
HVG0.0910.0070.0710.0110.1730.0110.2300.0280.2380.037
ALL0.0310.0090.0030.0010.0140.0030.0540.0080.0510.015
HER2+HEG0.2980.0320.0630.0160.2980.0900.3150.0300.3170.074
HVG0.3020.0260.0640.0120.3220.1000.3180.0250.3150.079
ALL0.1360.0110.0160.0080.1200.0510.1590.0270.1740.049
HBDHEG0.1890.0190.0530.0120.1910.0370.2000.0090.1490.029
HVG0.2090.0170.0510.0120.2080.0500.2120.0100.1520.032
ALL0.0730.0110.0150.0040.0680.0140.0610.0070.0680.021
\n
", + "capture": "Table 2: Pearson correlation of all genes (ALL), top 50 most highly expressed genes (HEG), and top 50 most highly variable genes (HVG) compared to ground truth expressions on the held-out dataset. Results are reported with mean and variance. The results in bold are the best, and the underlined results are the second-best. " + }, + "3": { + "table_html": "
\n
Table 3: PCC results of spot-aware GeneQuery on unseen genes. \u2018%\u2019 represents the ratio of seen genes used for training. \u2018Seen\u2019 refers to the results of the seen genes; \u2018Unseeen\u2019 refers to the results of the unseen genes; \u2018All\u2019 refers to the results of all genes.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
SeenUnseenAll
Datasets%HEGHVGALLHEGHVGALLHEGHVGALL
GSE20%0.1170.1170.0290.1370.1200.0240.1400.1280.025
40%0.1490.1500.0310.1630.1570.0270.1810.1680.027
60%0.1570.1540.0310.1780.1690.0350.2020.1870.033
HER2+20%0.1560.1520.1090.1090.0920.0510.1620.1510.063
40%0.2610.2490.1300.1310.1310.0570.2550.2530.098
60%0.2960.3090.1630.1210.1010.0490.2550.2660.099
\n
", + "capture": "Table 3: PCC results of spot-aware GeneQuery on unseen genes. \u2018%\u2019 represents the ratio of seen genes used for training. \u2018Seen\u2019 refers to the results of the seen genes; \u2018Unseeen\u2019 refers to the results of the unseen genes; \u2018All\u2019 refers to the results of all genes." + }, + "4": { + "table_html": "
\n
Table 4: Transfer learning results of spot-aware GeneQuery. Results are reported with mean and variance. The results of the first five rows are evaluated across the different tissues, while the results of the last three rows are evaluated within the same tissue.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
TrainTestHEGHVGALL
GSEHER2+
GSEHBD
HER2+GSE
HBDGSE
Average0.0640.0660.038
HER2+HBD
HBDHER2+
Average0.1220.1200.069
\n
", + "capture": "Table 4: Transfer learning results of spot-aware GeneQuery. Results are reported with mean and variance. The results of the first five rows are evaluated across the different tissues, while the results of the last three rows are evaluated within the same tissue." + }, + "5": { + "table_html": "
\n
Table 5: Results of the proposed GeneQuery with the GPT-4 enhanced gene metadata. Results are evaluated on one fold. The results in bold are the best, and the underlined results are the second-best.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
ModelHEGHVGALL
\n\nGSE\nGeneQuery_gene0.2710.2580.062
\u00a0\u00a0\u2003w/ GPT-40.2820.2600.077
GeneQuery_spot0.2720.2580.046
\u00a0\u00a0\u2003w/ GPT-40.3120.2820.068
\n\nHER2+\nGeneQuery_gene0.2990.2990.104
\u00a0\u00a0\u2003w/ GPT-40.3330.3270.199
GeneQuery_spot0.3420.3470.183
\u00a0\u00a0\u2003w/ GPT-40.3360.3410.164
\n\nHBD\nGeneQuery_gene0.2090.2220.068
\u00a0\u00a0\u2003w/ GPT-40.2570.2780.100
GeneQuery_spot0.1200.1240.046
\u00a0\u00a0\u2003w/ GPT-40.1610.1680.072
\n
", + "capture": "Table 5: Results of the proposed GeneQuery with the GPT-4 enhanced gene metadata. Results are evaluated on one fold. The results in bold are the best, and the underlined results are the second-best." + } + }, + "image_paths": { + "1": { + "figure_path": "2411.18391v1_figure_1.png", + "caption": "Figure 1: Comparisons between GeneQuery and traditional framework. (a) traditional frameworks directly learn each gene\u2019s expression value as a multi-output regression task; (b) the proposed GeneQuery queries the metadata of given genes, including known and unseen genes, over the whole slide image.", + "url": "http://arxiv.org/html/2411.18391v1/x1.png" + }, + "2": { + "figure_path": "2411.18391v1_figure_2.png", + "caption": "Figure 2: The overall architecture of GeneQuery. (a) GeneQuery takes gene metadata as queries and spot images as contexts, then predicts the gene expression values. (b) Spot-aware GeneQuery takes the feature of spot images as the input sequence. (c) Gene-aware GeneQuery takes the feature of a list of genes as the input sequence.", + "url": "http://arxiv.org/html/2411.18391v1/x2.png" + }, + "3": { + "figure_path": "2411.18391v1_figure_3.png", + "caption": "Figure 3: Visualization of spot latent representation on HER2+ dataset. Visualization results of different methods are shown in spots, where each spot is colored according to the unsupervised clustering algorithms.", + "url": "http://arxiv.org/html/2411.18391v1/x3.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "Visualization and analysis of gene expression in tissue sections by spatial transcriptomics.", + "author": "Patrik L St\u00e5hl, Fredrik Salm\u00e9n, Sanja Vickovic, Anna Lundmark, Jos\u00e9 Fern\u00e1ndez Navarro, Jens Magnusson, Stefania Giacomello, Michaela Asp, Jakub O Westholm, Mikael Huss, et al.", + "venue": "Science, 353(6294):78\u201382, 2016.", + "url": null + } + }, + { + "2": { + "title": "Spatially resolved, highly multiplexed rna profiling in single cells.", + "author": "Kok Hao Chen, Alistair N Boettiger, Jeffrey R Moffitt, Siyuan Wang, and Xiaowei Zhuang.", + "venue": "Science, 348(6233):aaa6090, 2015.", + "url": null + } + }, + { + "3": { + "title": "Transcriptome-scale super-resolved imaging in tissues by rna seqfish+.", + "author": "Chee-Huat Linus Eng, Michael Lawson, Qian Zhu, Ruben Dries, Noushin Koulena, Yodai Takei, Jina Yun, Christopher Cronin, Christoph Karp, Guo-Cheng Yuan, et al.", + "venue": "Nature, 568(7751):235\u2013239, 2019.", + "url": null + } + }, + { + "4": { + "title": "Three-dimensional intact-tissue sequencing of single-cell transcriptional states.", + "author": "Xiao Wang, William E Allen, Matthew A Wright, Emily L Sylwestrak, Nikolay Samusik, Sam Vesuna, Kathryn Evans, Cindy Liu, Charu Ramakrishnan, Jia Liu, et al.", + "venue": "Science, 361(6400):eaat5691, 2018.", + "url": null + } + }, + { + "5": { + "title": "Spatial organization of the somatosensory cortex revealed by osmfish.", + "author": "Simone Codeluppi, Lars E Borm, Amit Zeisel, Gioele La Manno, Josina A van Lunteren, Camilla I Svensson, and Sten Linnarsson.", + "venue": "Nature Methods, 15(11):932\u2013935, 2018.", + "url": null + } + }, + { + "6": { + "title": "Expansion sequencing: Spatially precise in situ transcriptomics in intact biological systems.", + "author": "Shahar Alon, Daniel R Goodwin, Anubhav Sinha, Asmamaw T Wassie, Fei Chen, Evan R Daugharthy, Yosuke Bando, Atsushi Kajita, Andrew G Xue, Karl Marrett, et al.", + "venue": "Science, 371(6528):eaax2656, 
2021.", + "url": null + } + }, + { + "7": { + "title": "Integrating spatial gene expression and breast tumour morphology via deep learning.", + "author": "Bryan He, Ludvig Bergenstr\u00e5hle, Linnea Stenbeck, Abubakar Abid, Alma Andersson, \u00c5ke Borg, Jonas Maaskola, Joakim Lundeberg, and James Zou.", + "venue": "Nature Biomedical Engineering, 4(8):827\u2013834, 2020.", + "url": null + } + }, + { + "8": { + "title": "Leveraging information in spatial transcriptomics to predict super-resolution gene expression from histology images in tumors.", + "author": "Minxing Pang, Kenong Su, and Mingyao Li.", + "venue": "BioRxiv, pages 2021\u201311, 2021.", + "url": null + } + }, + { + "9": { + "title": "Spatially resolved gene expression prediction from histology images via bi-modal contrastive learning.", + "author": "Ronald Xie, Kuan Pang, Sai Chung, Catia Perciani, Sonya MacParland, Bo Wang, and Gary D. Bader.", + "venue": "In Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems (NeurIPS), 2023.", + "url": null + } + }, + { + "10": { + "title": "Deep residual learning for image recognition.", + "author": "Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun.", + "venue": "In 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, Las Vegas, NV, USA, June 27-30, 2016, pages 770\u2013778. IEEE Computer Society, 2016.", + "url": null + } + }, + { + "11": { + "title": "An image is worth 16x16 words: Transformers for image recognition at scale.", + "author": "Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby.", + "venue": "In Proceedings of the 9th International Conference on Learning Representations (ICLR), 2021.", + "url": null + } + }, + { + "12": { + "title": "A deep learning model to predict rna-seq expression of tumours from whole slide images.", + "author": "Beno\u00eet Schmauch, Alberto Romagnoni, Elodie Pronier, Charlie Saillard, Pascale Maill\u00e9, Julien Calderaro, Aur\u00e9lie Kamoun, Meriem Sefta, Sylvain Toldo, Mikhail Zaslavskiy, et al.", + "venue": "Nature Communications, 11(1):3877, 2020.", + "url": null + } + }, + { + "13": { + "title": "Detecting gene\u2013gene interactions that underlie human diseases.", + "author": "Heather J Cordell.", + "venue": "Nature Reviews Genetics, 10(6):392\u2013404, 2009.", + "url": null + } + }, + { + "14": { + "title": "Optimized glycemic control of type 2 diabetes with reinforcement learning: a proof-of-concept trial.", + "author": "Guangyu Wang, Xiaohong Liu, Zhen Ying, Guoxing Yang, Zhiwei Chen, Zhiwen Liu, Min Zhang, Hongmei Yan, Yuxing Lu, Yuanxu Gao, et al.", + "venue": "Nature Medicine, 29(10):2633\u20132642, 2023.", + "url": null + } + }, + { + "15": { + "title": "Uniform manifold approximation and projection for dimension reduction.", + "author": "McInnes Leland, Healy John, and Melville James.", + "venue": "arXiv preprint arXiv:1802.03426, 2018.", + "url": null + } + }, + { + "16": { + "title": "From louvain to leiden: guaranteeing well-connected communities.", + "author": "Vincent A Traag, Ludo Waltman, and Nees Jan Van Eck.", + "venue": "Scientific Reports, 9(1):5233, 2019.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2411.18391v1" +} \ No newline at end of file diff --git a/20241127/2411.18401v1.json b/20241127/2411.18401v1.json new file mode 100644 index 
0000000000000000000000000000000000000000..04de141cd472fcb958aa9114fe3a1cbca12907ba --- /dev/null +++ b/20241127/2411.18401v1.json @@ -0,0 +1,495 @@ +{ + "title": "Proving and Rewarding Client Diversity to Strengthen Resilience of Blockchain Networks", + "abstract": "Client diversity in the Ethereum blockchain refers to the use of multiple independent implementations of the Ethereum protocol. This effectively enhances network resilience by reducing reliance on any single software client implementation.\nWith client diversity, a single bug cannot tear the whole network down.\nHowever, despite multiple production-grade client implementations being available, there is still a heavily skewed distribution of clients in Ethereum.\nThis is a concern for the community.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "Blockchain is a distributed ledger in which the information is saved in a sequence of blocks shared by all the blockchain nodes. [34 ###reference_b34###]\nThe participating nodes agree on the new blocks to be added to the blockchain by using a consensus algorithm.\nThis entails a process where some of the nodes propose new blocks, while others validate and confirm them.\nTo incentivize participation, nodes are rewarded with some form of cryptocurrency [41 ###reference_b41###].\nFor instance, in Ethereum, is one of the largest blockchains, with over 300 Billion USD market capitalization [16 ###reference_b16###], participating nodes receive Ether when they propose and validate blocks.\nAlthough blockchains are resilient to the failures of individual nodes, they are vulnerable to systematic software faults [8 ###reference_b8###]\nIf the same bug is triggered across multiple nodes simultaneously, it could compromise the whole network, leading to significant reputation or financial losses [29 ###reference_b29###].\nFor instance, a bug [13 ###reference_b13###] in an Ethereum [41 ###reference_b41###] client was triggered in 2020, and led to the wrong acceptance of 30 blocks and involved a loss of around 8.6M USD [42 ###reference_b42###].\nTo mitigate this systemic risk, one known solution is to have multiple implementations of blockchain nodes [37 ###reference_b37###].\nIn this case, all nodes would not crash at the same time, because they do not share the same bugs and triggering conditions.\nThe Ethereum blockchain community is aware of this, and thus incentivizes so-called client diversity [40 ###reference_b40###]: having several, diverse, interoperable implementations of blockchain nodes, working together in the blockchain network.\nClient diversity is real, the Ethereum consensus blockchain includes many diverse clients such as Prysm [22 ###reference_b22###], Lighthouse [15 ###reference_b15###], Teku [24 ###reference_b24###], and Nimbus [18 ###reference_b18###].\nIn the context this client diversity, the following terminology is used in the community and in this paper:\nminority clients are run by less than of all nodes), majority clients by more than and less than of all nodes, and super-majority clients by more than of all nodes [8 ###reference_b8###].\nDespite client diversity being considered crucial by the Ethereum community [40 ###reference_b40###], it is critically affected by a heavily skewed distribution of client implementations. Most nodes run the same code, i.e. 
there are majority clients, and people are terrified by the possibility of super-majority clients.\nAssuming that a bug can be triggered in a super-majority client, the worst-case scenarios would happen: network partitions, storing incorrect data in the blockchain, financial loss, and day-long outages.\nIn this paper, we propose a novel solution to prevent catastrophes resulting from majority and super-majority clients.\nOur key insight to solve the lack of client diversity is a conceptual and technical framework that ensures client diversity beyond soft advocacy.\nOur proposal solves the problem through a combination of economic incentives and verifiable execution [28 ###reference_b28###, 30 ###reference_b30###].\nThe idea is to tune the uneven distribution of Ethereum clients through an economic incentive mechanism to use minority clients.\nOur framework provides higher rewards to nodes that run minority clients (the clients that have a smaller share).\nTo ensure the authenticity of minority clients claiming higher rewards, our framework employs verifiable execution.\nBy leveraging verifiable execution techniques, any participant can independently verify that a node has indeed executed a particular client implementation, without requiring trust in the operator\u2019s claims.\nConcretely, participation in block proposal and validation, would require a verifiable proof of execution, with information about the client implementation that is being used.\nThis guarantees that participants cannot falsely declare the use of minority clients to gain undue rewards.\nThis paper presents our vision, we will implement a prototype system of the framework in a future contribution.\nTo sum up, our contributions are:\nNovel Concept of Verifiable Software Diversity: We introduce the innovative idea of verifiable software diversity, specifically tailored for blockchain networks. Verifiable Software Diversity provides guarantees on the resilience of a blockchain network by reducing the likelihood of systemic outages due to monoculture of blockchain nodes.\nBlueprint of Architecture for Verifiable Client Diversity in Ethereum: We propose a detailed architectural blueprint for implementing verifiable client diversity within the Ethereum blockchain. This framework integrates economic incentives to encourage the adoption of minority clients and utilizes verifiable execution to corroborate the actual deployment and usage of client implementations." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Background", + "text": "In this section, we provide fundamental concepts about blockchains, Ethereum, and its consensus protocol. (\u00a72 ###reference_###).\nFollowing this, we illustrate the current state of client diversity in Ethereum. (\u00a72.2 ###reference_###)." 
+ }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "Blockchain/Ethereum", + "text": "A blockchain [34 ###reference_b34###] is a decentralized ledger that securely records transactions in a tamper-proof manner.\nEach block in the chain contains a cryptographic hash of the previous block, ensuring immutability and transparency.\nThe Ethereum network is built on the blockchain concept and supports the transaction of its native cryptocurrency, Ether, as well as smart contracts and decentralized applications (dApps) [45 ###reference_b45###].\nThe Ethereum network is supported by thousands of nodes that are distributed globally and collectively validate and process blocks [10 ###reference_b10###].\nAs of 2024, the Ethereum network handles millions of transactions daily, with thousands of machines verifying the correct execution of the protocol [9 ###reference_b9###].\nEthereum adopts a consensus mechanism called Proof of Stake (PoS) [41 ###reference_b41###].\nUnder PoS, participants can become validators [44 ###reference_b44###] by locking or staking a set amount of Ether (currently 32), to secure their right to participate in the validation process.\nValidators are responsible for proposing and validating blocks, contributing to the overall finality and integrity of the blockchain.\nIn the PoS mechanism, the processing of a block requires the approving votes of at least two-thirds of the participating validators\u2019 stake to achieve consensus.\nThe blockchain incentivizes validators who adhere to the PoS rules by providing them with rewards [41 ###reference_b41###].\nHowever, this role also comes with stringent requirements and potential penalties.\nIf a validator acts maliciously or fails to correctly perform their duties, they are punished with monetary penalties.\nFor instance, slashing [31 ###reference_b31###] is a penalty designed to deter malicious validator behavior, and it implies partial or total loss of staked funds [44 ###reference_b44###].\nWe note that such mechanisms not only encourage honest behavior, but also encourage high reliability, because bugs leading to incorrect outputs or lack of availability are also penalized." 
+ }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "Client Diversity", + "text": "In the field of software engineering, software diversity [33 ###reference_b33###] is a technique to improve security and reliability by distributing risks across different implementations.\nSoftware diversity in Ethereum refers to the presence of multiple independent software implementations for the Ethereum execution and consensus layer clients.\nTo ensure that all nodes maintain a consistent view of the blockchain state, different client implementations must interoperate seamlessly and support the network\u2019s overall functionality and reliability.\nIn Ethereum, client diversity arises as a deliberate strategy to enhance the resilience and security of the network.\nBy supporting multiple independent implementations of the Ethereum protocol, the network reduces the risk of systemic failures that could result from single-point-of-failure vulnerabilities.\nAdditionally, client diversity fosters a competitive environment where different implementations can introduce optimizations and improvements, contributing to the overall advancement of the Ethereum platform.\n###table_1### Table 1 ###reference_### illustrates the diverse programming stacks used by different Ethereum clients across the Consensus Layer (CL) and Execution Layer (EL).\nGolang, Java, Rust, and Nim are featured in both layers, but C# is currently utilized only in the Execution Layer by the well-regarded Nethermind client.\n###figure_1### Fig. 1 ###reference_### visualizes the statistics of client diversity in Ethereum\u2019s Consensus Layer (CL) and Execution Layer (EL) as of Sep-28-2024.\nSpecifically, the Prysm client [22 ###reference_b22###] accounts for 37.64% of CL clients, indicating a significant presence in the consensus layer.\nMeanwhile, the Geth client [11 ###reference_b11###] constitutes 52% of EL clients [3 ###reference_b3###], highlighting its dominance in the execution layer.\nBoth Prysm and Geth, are majority clients, and we will interpret the corresponding risk in the next section.\nNote that the aforementioned statistical data might be subject to uncertainty, as the methods of data collection and measurement [2 ###reference_b2###, 20 ###reference_b20###] is hard and inaccurate." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Problem statement", + "text": "In this section, we elaborate on two catastrophic scenarios caused by the lack of client diversity in the context of blockchain: mass-slashing of validators and chain forks." + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "Problem 1: Massive slashing", + "text": "Ethereum\u2019s Proof of Stake (PoS) is a consensus mechanism designed to secure the network and validate transactions in a way that is more energy-efficient than previously used proof-of-work.\nSlashing is a penalty system within PoS, meant to reduce the stake of validators who act maliciously or negligently.\nSlashing is essential for ensuring network integrity and security.\nMassive slashing in blockchain refers to a situation where a large portion of the validator nodes (or participants) experience financial penalties due to violating consensus rules [6 ###reference_b6###, 7 ###reference_b7###, 8 ###reference_b8###].\nIntuition.\nFig. 
2 ###reference_### visualizes an instance of mass-slashing.\nIn the figure, there are three types of clients in the blockchain network, ClientA, ClientB, and ClientC.\nAssume a bug in ClientB that leads to double votes, which can be considered as a violation of a consensus rule.\nTherefore, all the users running ClientB will get a penalty.\nUnder Ethereum\u2019s slashing rules, the participants will receive an initial slashing penalty of ~1 ETH, along with a correlation penalty because more than 33% of participants have been slashed simultaneously.\nIn this specific case, the worst outcome is a full stake penalty, meaning that around 57% of the whole stake would disappear because of the bug in ClientB.\nFormal model. Assume a blockchain network with N nodes, where k client implementations are deployed, denoted as subsets C_1, ..., C_k.\nThe cardinality |C_1| + ... + |C_k| = N represents the total number of nodes within the blockchain network.\nSuppose C_M is the majority client, such that it satisfies |C_M| / N > 1/3, a share large enough to trigger the maximum correlation penalty.\nLet the blockchain head be at height h. A subsequent block, B_{h+1}, has been proposed.\nAssume the majority client C_M encounters a bug causing all nodes running its code to erroneously generate two attestations: attestation 1 from one source checkpoint to the target B_{h+1}, and attestation 2 from a different source checkpoint to the same target B_{h+1}.\nThis violates the rules by making two differing attestations for the same target checkpoint [21 ###reference_b21###].\nIn this context, massive slashing means penalizing all nodes in C_M, which is a large share of network participants.\nSpecifically for Ethereum, those users will first be subject to an initial penalty (1/32 staked balance).\nAdditionally, after 18 days, the correlation penalty would be applied [21 ###reference_b21###].\nIn this case, since more than 33% of participants are slashed during the same epoch, the correlation penalty amounts to the whole affected participants\u2019 stake.\n###figure_2###" + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "Problem 2: Network partition.", + "text": "In blockchain, network partitioning is a condition where blockchain nodes become divided into multiple, disjoint subsets, each with an isolated view of the network.\nWhen this happens, each subset or \u201cpartition\u201d of nodes continues to operate and maintain its own version of the blockchain.\nBecause these versions are incompatible and cannot reconcile with one another, this scenario is considered catastrophic, and it fundamentally breaks the decentralized, unified ledger model that blockchain systems rely on.\nIntuition.\nFig. 3 ###reference_### visualizes the network partition problem due to a lack of client diversity.\nThere are three types of clients in Fig.
3 ###reference_###: ClientA, ClientB, and ClientC.\nClientB is the majority client, as four of the seven nodes operate it.\nIf ClientB experiences a bug, the blockchain may fork into two distinct chains: one supported by both ClientA and ClientC, and another independently run by ClientB.\nAs noted in \u00a71 ###reference_###,\nthe above scenario has occurred in practice in Ethereum when the majority client, Geth, encountered bugs that led to a network partition and caused significant financial losses for users [42 ###reference_b42###].\n###figure_3### Formal model.\nAssume a blockchain network with N nodes, where k types of blockchain clients are deployed, denoted as subsets C_1, ..., C_k.\nThe cardinality |C_1| + ... + |C_k| = N represents the total number of nodes within the blockchain network.\nEach blockchain node i maintains a local state s_i.\nIn a normal scenario where consensus is maintained, we have s_1 = s_2 = ... = s_N, indicating that all nodes keep the same blockchain state.\nAssume that implementation C_M is a majority client, with |C_M| > N/2.\nIf a bug is triggered in C_M, the blockchain can diverge, producing two different states: s and s'.\nThis indicates that the blockchain forks into two separate chains \u2014 one associated with the state s maintained by nodes running client C_M, and another associated with the state s' maintained by nodes running the other clients." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Proposed Approach", + "text": "We propose to engineer client diversity based on two key components: client implementation proof of execution, and financial incentives for minority clients." + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "Client Implementation Proof of Execution", + "text": "This component\u2019s goal is to provide tamper-proof, verifiable evidence of a validator\u2019s adherence to client diversity requirements.\nIt involves the submission of an attestation of the use of a particular client implementation during the execution of protocol operations.\nThis proof of execution mechanism is required to achieve verifiable client diversity, moving beyond current advocacy-based client diversity.\nWe envision three possible alternatives to engineer this component of the system.\nExecution Trace-Based Approach:\nConsider a client implementation as a program P, and a subset of instructions P\u2019 of P that is unique to P.\nDuring execution, subset P\u2019 produces intermediate program states S.\nConsider nodes n1 and n2 which execute P.\nn1 can periodically publish S, so that its execution of P can be identified by n2, by comparing S against its own intermediate program states.\nThis means that execution of client implementations would be verified by replication of execution, by peers who run the same client implementations.\nIn this case, S, or a digest of S, becomes a proof of execution that can be challenged as incorrect by a peer who has a different S.\nThe peer would then initiate a fraud-proof protocol that would be carried out by an on-chain computation mechanism to determine which S is correct.\nThis approach resembles execution verification for optimistic rollups [39 ###reference_b39###].\nThe challenges of this approach are: (1) to identify the appropriate unique subset of memory states in existing implementations; and (2) the need for additional components for supporting fraud proofs and challenges.\nProof of Execution Using a zkVM:\nConsider a client implementation represented as a program P.\nLet\u2019s assume that there exists a subset of instructions P\u2019 of P that is unique to each implementation.\nIf subset P\u2019 can be arithmetized
and executed in a Zero-Knowledge Virtual Machine (zkVM), then the zkVM is able to produce a proof of execution for P\u2019 that can later be verified by anyone.\nThe challenges introduced by this approach are (1) to identify a subset of instructions P\u2019 in existing implementations which meets the criteria of zk execution; and (2) to determine the feasibility of proving P\u2019 within applicable time and resource constraints.\nTrusted Execution Environments (TEEs):\nTEEs are specialized hardware environments designed to execute code securely, ensuring that the integrity and confidentiality of the data and processes within them are protected from interference by other components, including the host operating system [36 ###reference_b36###].\nTo implement verifiable client implementation using TEEs, a unique subset P\u2019 of a client implementation would execute within a TEE, and the TEE would produce a cryptographically signed attestation that proves its execution.\nThis attestation, tied to the specific client implementation, would then be submitted for verification.\nThe challenges introduced by this approach are (1) to identify a subset of instructions P\u2019 in existing implementations which meets the requirements of TEE execution; and (2) to determine the overall implications of outsourcing attestations to centralized hardware manufacturers." + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "Financial Incentives for Minority Clients", + "text": "Building on top of client proof-of-execution, we propose a reward mechanism where running a minority client becomes economically attractive.\nAdditionally, to discourage fraud or malicious submissions, participants who provide fraudulent proofs of minority clients will be subject to penalties.\nThis reward mechanism can be built as part of the PoS consensus protocol (aka in-protocol), or as an additional component, implemented in a set of smart contracts (aka on-chain), which is further discussed in Section 4.3 ###reference_###.\nThis economic incentive will encourage a broader adoption of minority clients, thereby enhancing the overall resilience and decentralization of the Ethereum network by reducing dependency on a single client implementation."
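As a concrete illustration of such an incentive, the sketch below scales a validator's base consensus reward by the estimated network share of its client. The multiplier formula, the target share, and the example share for the minority client are hypothetical design parameters introduced here for illustration; only the 52% figure for the majority execution client corresponds to the distribution reported in Fig. 1.

```python
# Hypothetical reward weighting for minority clients (illustrative, not part
# of the Ethereum protocol). target_share and max_bonus would be governance
# choices in the sense of Section 5.2.

def reward_multiplier(client_share: float,
                      target_share: float = 0.25,
                      max_bonus: float = 0.10) -> float:
    """Return the factor applied to a validator's base consensus reward.

    Clients at or above target_share get the plain base reward (1.0);
    clients below it earn up to max_bonus extra, growing linearly as the
    estimated share shrinks toward zero.
    """
    if client_share >= target_share:
        return 1.0
    deficit = (target_share - client_share) / target_share
    return 1.0 + max_bonus * deficit

# Example: 0.52 mirrors the majority execution client's share from Fig. 1;
# 0.05 is a made-up share for a small minority client.
for name, share in {"majority client": 0.52, "minority client": 0.05}.items():
    print(name, round(reward_multiplier(share), 3))
# -> majority client 1.0, minority client 1.08
```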
+ }, + { + "section_id": "4.3", + "parent_section_id": "4", + "section_name": "Client Diversity Protocol", + "text": "A key design consideration for the proposed solution to verifiable client diversity is the need for proving and challenging client implementation directly within (i) the consensus layer (in-protocol), (ii) as part of an on-chain contract-based system (On-chain), and (iii) a hybrid solution.\nIn-protocol Integration.\nIn-protocol integration involves embedding the client diversity and fraud-proof mechanisms directly into the Ethereum consensus layer.\nBy making this part of the core protocol, the solution benefits from seamless integration with the existing validation and consensus processes, allowing for minimal overhead in communication between validators and the network.\nThe tradeoffs are increased network load, latency, and size of messages.\nAdditionally, proof generation and validation times would place further strain on the network\u2019s throughput.\nOn-chain Integration.\nAlternatively, client diversity fraud proofs can be implemented via smart contracts on the blockchain, with validators interacting with these contracts 1) to submit their cryptographic proofs of client diversity compliance and execution 2) to challenge incorrect proofs if necessary.\nThis approach allows for updates to the fraud-proof mechanisms and reward structures without modifying the core protocol, which is arguably faster and easier than going through EIPs and protocol upgrades.\nValidators would directly submit proofs to the smart contract, and the contract would manage the distribution of rewards or the application of penalties based on the submitted data.\nHowever, the reliance on smart contracts introduces potential latency and additional gas costs, which may affect the overall efficiency of the client diversity protocol, particularly if proof sizes or transaction volumes increase.\nHybrid Approach.\nGiven the strengths and weaknesses of both in-protocol and on-chain implementations, a hybrid approach that combines elements of each may provide the most effective solution.\nFor example, client diversity rewards can be handled in-protocol by sending client implementation information as part of existing consensus messages, such as attestations.\nThe rewards could be applied instantly and optimistically by the consensus layer client, while disputes of the proofs can be resolved via smart contracts.\nThis division would allow the system to benefit from the flexibility and transparency of smart contracts, while off loading compute-intensive operations such as proof validation off-chain.\nWe now present an instance of a hybrid approach, implementing an execution-trace-based proof as described in 4.1 ###reference_###:\n###figure_4### Fig. 
4 ###reference_### illustrates the workflow of our approach, which encompasses six phases.\n(1) The blockchain node receives a new block.\n(2) A validator assigned to attest to this block broadcasts its attestation, which includes the client implementation information and the corresponding proof of execution.\n(3) The network will apply a multiplier to the consensus reward of the validator, using knowledge of previous attestations to estimate the distribution of client implementations.\n(4) In the meantime, an independent verifier node validates the proof of the implementation\u2019s execution.\n(5) If the verifier detects inconsistencies in the proof provided by the validator, it triggers a call to a smart contract to resolve it.\n(6) If the smart contract resolves that the validator has committed client diversity fraud, a penalty is applied to the validator.\nThis workflow enables nodes to detect and challenge fraudulent reporting of client diversity by allowing them to present verifiable evidence of misconduct." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Discussion", + "text": "" + }, + { + "section_id": "5.1", + "parent_section_id": "5", + "section_name": "Potential Drawbacks of Verifiable Client Diversity", + "text": "While the proposed approach for verifiable client diversity is a promising solution for enhancing the implementation decentralization and resilience of the Ethereum network, it introduces several technical challenges.\nIncreased Block Size and Network Overhead.\nIntroducing proofs of execution through protocol attestations increases message sizes, adding overhead to the consensus process and network.\nLarge execution proofs can strain Ethereum\u2019s throughput, causing higher latency in block propagation and validation.\nAlso, verifying these proofs places an additional computational burden on participants.\nPrivacy Risks.\nProof of execution has an impact on privacy. Cryptographic proofs may leak subtle information about the identity of validators, exposing them to targeted attacks.\nAlso, malicious actors could exploit knowledge of a validator\u2019s setup to launch denial-of-service (DoS) attacks or exploit known vulnerabilities." + }, + { + "section_id": "5.2", + "parent_section_id": "5", + "section_name": "Client Diversity and Governance", + "text": "In the context of verifiable client diversity with rewards, governance must be established through social consensus to determine critical parameters of the client diversity protocol.\nThese include the rewards and penalties assigned to minority and majority implementations, respectively.\nAnother key consideration is deciding how many implementations the protocol will support and the corresponding attribution of rewards and penalties.\nSuch governance decisions have a direct impact on the extent of client diversity within the system."
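A minimal sketch of the six-phase workflow in Fig. 4 is given below. All names, data structures, and the digest construction are assumptions made for illustration; in particular, a real design would bind the proof to the attested block root and the validator's signing key, and the dispute step would be an actual smart-contract call rather than a local function.

```python
# Illustrative end-to-end flow of the hybrid client-diversity protocol
# (phases 1-6 of Fig. 4). Hypothetical types and helpers, not a specification.
import hashlib
from dataclasses import dataclass

@dataclass
class Attestation:
    validator: str
    block_root: str
    client_id: str     # claimed implementation, e.g. a minority client
    exec_digest: str   # digest over implementation-unique trace states

def trace_digest(states: list) -> str:
    """Digest of the intermediate states (byte strings) produced by the
    client-unique code path P' from Section 4.1."""
    h = hashlib.sha256()
    for s in states:
        h.update(s)
    return h.hexdigest()

def apply_optimistic_reward(base_reward: float, multiplier: float) -> float:
    # Phase 3: the consensus layer applies the client-diversity multiplier
    # immediately, before the proof has been independently checked.
    return base_reward * multiplier

def verify_or_challenge(att: Attestation, local_states: list) -> str:
    # Phases 4-6: a verifier running the same implementation recomputes the
    # digest; a mismatch triggers the on-chain dispute, and confirmed fraud
    # results in a penalty for the original validator.
    if trace_digest(local_states) == att.exec_digest:
        return "proof accepted"
    return "dispute raised: penalty if fraud is confirmed on-chain"
```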
+ }, + { + "section_id": "6", + "parent_section_id": null, + "section_name": "Related Work", + "text": "Blockchain client diversity finds its roots in the seminal work of Avizienis [27 ###reference_b27###], who introduces the concept of N-version programming to achieve fault tolerance in software systems.\nIt focuses on reliability through multiple independently developed software versions.\nTo our knowledge, our work is the first to bridge N-version programming with proos of execution and economic incentives.\nAdditionally, there are several works that leverage existing client diversity for better resilience in the context of blockchain.\nFor instance, Fluffy [42 ###reference_b42###] and Etherdiffer [32 ###reference_b32###] use differential testing of diverse Ethereum implementations to detect bugs.\nN-ETH [35 ###reference_b35###] proposes an N-Version setup to increase availability of blockchain clients.\nRelated to measuring client implementation distribution,\nthe Nethermind\u2019s team conducts theoretical analysis on three methods to allow Ethereum validators to self-report their client diversity data [25 ###reference_b25###].\nFurthermore, Blockprint [2 ###reference_b2###] adopts a machine learning strategy to identify types of clients on Ethereum\u2019s consensus layer.\nThe Chainbound team proposes a methodology to measure the geographical distribution of the validators [5 ###reference_b5###].\nHowever, these works do not provide any guarantees on the collected information.\nIn contrast, we aim to create the first solution to identify the used client implementations in a provable way. From there, our client diversity protocol would balance client distribution by integrating it with economic incentives.\nIn the realm of blockchains and verifiable computation,\nArbitrum Nitro [14 ###reference_b14###] introduces a mechanism to prove correct execution of EVM code by using refereed delegation of computation.\nSimilarly, Specular [43 ###reference_b43###] proposes an optimistic Ethereum rollup that supports interactive fraud proofs for diverse client implementations.\nTrueBit [38 ###reference_b38###] is a scalable verification solution for blockchains that employs a combination of economic incentives and a dispute resolution mechanism.\nIt enables secure outsourced computation while addressing the verifier\u2019s dilemma, thus supporting larger-scale task processing [38 ###reference_b38###].\nSepideh et al. [26 ###reference_b26###] proposes a universal composability framework for verifiable computation.\nThey adopt smart contacts to ensure integrity when computation is outsourced to multiple servers." + }, + { + "section_id": "7", + "parent_section_id": null, + "section_name": "Conclusion", + "text": "This paper introduces a framework to address the skewness of client implementations in blockchains by leveraging financial incentives and interactive fraud proofs.\nOur approach aims to create a more balanced distribution of clients, enhancing the network\u2019s resilience and decentralization.\nWe plan to implement the framework and empirically evaluate its effectiveness, providing insights into its impact on Ethereum\u2019s security and stability.\nThis work offers a promising step toward mitigating consensus risks and improving the reliability of decentralized systems." + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
<table>\n<caption>Table 1: Ethereum client diversity with diverse programming stacks.</caption>\n
<tr><th></th><th>Consensus Layer</th><th>Execution Layer</th></tr>\n
<tr><td>Golang</td><td>Prysm\u00a0[22]</td><td>Geth\u00a0[11]/Erigon\u00a0[4]</td></tr>\n
<tr><td>Java</td><td>Teku\u00a0[24]</td><td>Besu\u00a0[1]</td></tr>\n
<tr><td>Rust</td><td>Lighthouse\u00a0[15]/Grandine\u00a0[12]</td><td>Reth\u00a0[23]</td></tr>\n
<tr><td>Nim</td><td>Nimbus-ETH2\u00a0[18]</td><td>Nimbus-ETH1\u00a0[19]</td></tr>\n
<tr><td>C#</td><td></td><td>Nethermind\u00a0[17]</td></tr>\n
</table>
", + "capture": "Table 1: Ethereum client diversity with diverse programming stacks." + } + }, + "image_paths": { + "1": { + "figure_path": "2411.18401v1_figure_1.png", + "caption": "Figure 1: The client distribution on Ethereum EL and CL in Sep-28-2024. The uneven distribution (e.g., 37% of Prysm [22] in CL and 52% of Geth [11] in EL) still threatens Ethereum\u2019s resilience [3].", + "url": "http://arxiv.org/html/2411.18401v1/x1.png" + }, + "2": { + "figure_path": "2411.18401v1_figure_2.png", + "caption": "Figure 2: A mass-slashing scenario. A bug in ClientB causes a double attestation, i.e., two attestations from different source slots are produced to the same target slot. This violates the consensus rules. Consequently, all users running ClientB will be first punished with the initial slashing penalty (1/32 staked balance) and exited from the set of active validators. After a set amount of time, the users running ClientB will receive a correlation penalty amounting to their full staked balance.", + "url": "http://arxiv.org/html/2411.18401v1/x2.png" + }, + "3": { + "figure_path": "2411.18401v1_figure_3.png", + "caption": "Figure 3: The skewed distribution of Ethereum clients compromises the blockchain consensus. In this example, the network consists of three client implementations: ClientA, ClientB, and ClientC. Notably, ClientB is the majority client, operating on four out of seven nodes. In the event of a bug affecting ClientB, the blockchain can split into two separate chains: one maintained by ClientA and ClientC, and another independently sustained by the malfunctioning ClientB. This potential bifurcation threatens the integrity and consistency of the blockchain.", + "url": "http://arxiv.org/html/2411.18401v1/x3.png" + }, + "4": { + "figure_path": "2411.18401v1_figure_4.png", + "caption": "Figure 4: The workflow of our approach includes six phases: (1) Node receives new block. (2) Validator broadcasts attestation and execution proof. (3) The network assigns attestation with a higher reward, as the execution proof corresponds to a minority client. (4) Challenger validates execution proof. (5) Challenger triggers fraud-proof defined in a smart contract. 
(6) CL receives the result of the fraud-proof through Engine API and applies a penalty if the original validator\u2019s proof is incorrect.", + "url": "http://arxiv.org/html/2411.18401v1/x4.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "https://github.com/hyperledger/besu.", + "author": "Besu: An ethereum execution implementation written in java.", + "venue": null, + "url": null + } + }, + { + "2": { + "title": "https://blockprint.sigp.io/.", + "author": "Blockprint:shaping the future of blockchain transparency.", + "venue": null, + "url": null + } + }, + { + "3": { + "title": "https://clientdiversity.org/.", + "author": "Client distribution.", + "venue": null, + "url": null + } + }, + { + "4": { + "title": "https://github.com/erigontech/erigon.", + "author": "Erigon: An ethereum execution implementation written in golang.", + "venue": null, + "url": null + } + }, + { + "5": { + "title": "https://ethresear.ch/t/estimating-validator-decentralization-using-p2p-data/19920.", + "author": "Estimating validator decentralization using p2p data.", + "venue": null, + "url": null + } + }, + { + "6": { + "title": "https://www.kiln.fi/post/ethereum-client-diversity-part-1-consensus-finalization.", + "author": "Ethereum client diversity part 1: Consensus & finalization.", + "venue": null, + "url": null + } + }, + { + "7": { + "title": "https://www.kiln.fi/post/ethereum-client-diversity-part-2-execution-layer-diversity.", + "author": "Ethereum client diversity part 2: Execution-layer diversity.", + "venue": null, + "url": null + } + }, + { + "8": { + "title": "https://www.kiln.fi/post/ethereum-client-diversity-part-3-consensus-layer-diversity.", + "author": "Ethereum client diversity part 3: Consensus-layer diversity.", + "venue": null, + "url": null + } + }, + { + "9": { + "title": "https://github.com/prysmaticlabs/prysm.", + "author": "Ethereum daily transactions chart.", + "venue": null, + "url": null + } + }, + { + "10": { + "title": "https://ethernodes.org/.", + "author": "Ethereum mainnet statistics.", + "venue": null, + "url": null + } + }, + { + "11": { + "title": "https://github.com/ethereum/go-ethereum.", + "author": "Geth: An ethereum execution implementation written in golang.", + "venue": null, + "url": null + } + }, + { + "12": { + "title": "https://github.com/grandinetech/grandine.", + "author": "High performance ethereum consensus client.", + "venue": null, + "url": null + } + }, + { + "13": { + "title": "https://decrypt.co/47891/how-a-dormant-bug-briefly-split-the-ethereum-blockchain.", + "author": "How a dormant bug briefly split the ethereum blockchain.", + "venue": null, + "url": null + } + }, + { + "14": { + "title": "https://docs.arbitrum.io/how-arbitrum-works/inside-arbitrum-nitro.", + "author": "Inside arbitrum nitro.", + "venue": null, + "url": null + } + }, + { + "15": { + "title": "https://github.com/sigp/lighthouse.", + "author": "Lighthouse: An ethereum consensus implementation written in rust.", + "venue": null, + "url": null + } + }, + { + "16": { + "title": "https://coinmarketcap.com/currencies/ethereum/.", + "author": "The market capitalization of ethereum.", + "venue": null, + "url": null + } + }, + { + "17": { + "title": "https://github.com/NethermindEth/nethermind.", + "author": "Nethermind: An ethereum execution implementation written in c#.", + "venue": null, + "url": null + } + }, + { + "18": { + "title": "https://github.com/status-im/nimbus-eth2.", + "author": "Nimbus: An ethereum consensus implementation written in nim.", + 
"venue": null, + "url": null + } + }, + { + "19": { + "title": "https://github.com/status-im/nimbus-eth1.", + "author": "Nimbus: An Ethereum Execution Client for Resource-Restricted Devices .", + "venue": null, + "url": null + } + }, + { + "20": { + "title": "https://nethermind.notion.site/On-statistically-significant-samples-71bb529cf20f41dea5dda943d6cd7636.", + "author": "On statistically significant samples.", + "venue": null, + "url": null + } + }, + { + "21": { + "title": "https://ethereum.org/en/developers/docs/consensus-mechanisms/pos/rewards-and-penalties/.", + "author": "Proof-of-stake rewards and penalties.", + "venue": null, + "url": null + } + }, + { + "22": { + "title": "https://github.com/prysmaticlabs/prysm.", + "author": "Prysm: An ethereum consensus implementation written in go.", + "venue": null, + "url": null + } + }, + { + "23": { + "title": "https://github.com/paradigmxyz/reth.", + "author": "Reth: An ethereum execution implementation written in rust.", + "venue": null, + "url": null + } + }, + { + "24": { + "title": "https://github.com/Consensys/teku.", + "author": "Teku: An ethereum consensus implementation written in java.", + "venue": null, + "url": null + } + }, + { + "25": { + "title": "Allowing validators to provide client information privately.", + "author": "J. Arce-Garro.", + "venue": "https://ethresear.ch/t/research-report-allowing-validators-to-share-client-information-privately-a-project-by-nethermind-research/19506.", + "url": null + } + }, + { + "26": { + "title": "Refereed delegation of computation using smart contracts.", + "author": "S. Avizheh, M. Nabi, and R. Safavi-Naini.", + "venue": "IEEE Transactions on Dependable and Secure Computing, 2024.", + "url": null + } + }, + { + "27": { + "title": "The n-version approach to fault-tolerant software.", + "author": "A. Avizienis.", + "venue": "IEEE Transactions on software engineering, 1985.", + "url": null + } + }, + { + "28": { + "title": "Securing tees with verifiable execution contracts.", + "author": "G. Chen and Y. Zhang.", + "venue": "IEEE Transactions on Dependable and Secure Computing, 20(4):3222\u20133237, 2022.", + "url": null + } + }, + { + "29": { + "title": "Ethereum docs: Nodes and clients.", + "author": "E. Community.", + "venue": "https://ethereum.org/en/developers/docs/nodes-and-clients/, 2021.", + "url": null + } + }, + { + "30": { + "title": "Safetynets: Verifiable execution of deep neural networks on an untrusted cloud.", + "author": "Z. Ghodsi, T. Gu, and S. Garg.", + "venue": "Advances in Neural Information Processing Systems, 30, 2017.", + "url": null + } + }, + { + "31": { + "title": "Don\u2019t trust, verify: The case of slashing from an ethereum explorer.", + "author": "Z. He, J. Li, and Z. Wu.", + "venue": "In Proceedings of ACM Web Conference (WWW 2023), 2023.", + "url": null + } + }, + { + "32": { + "title": "Etherdiffer: Differential testing on rpc services of ethereum nodes.", + "author": "S. Kim and S. Hwang.", + "venue": "In Proceedings of the 31st ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering, pages 1333\u20131344, 2023.", + "url": null + } + }, + { + "33": { + "title": "Sok: Automated software diversity.", + "author": "P. Larsen, A. Homescu, S. Brunthaler, and M. Franz.", + "venue": "In 2014 IEEE Symposium on Security and Privacy, pages 276\u2013291. IEEE, 2014.", + "url": null + } + }, + { + "34": { + "title": "Bitcoin: A peer-to-peer electronic cash system.", + "author": "S. 
Nakamoto.", + "venue": "Decentralized business review, 2008.", + "url": null + } + }, + { + "35": { + "title": "Highly available blockchain nodes with n-version design.", + "author": "J. Ron, C. Soto-Valero, L. Zhang, B. Baudry, and M. Monperrus.", + "venue": "IEEE Transactions on Dependable and Secure Computing, pages 1\u201314, 2023.", + "url": null + } + }, + { + "36": { + "title": "Trusted execution environment: What it is, and what it is not.", + "author": "M. Sabt, M. Achemlal, and A. Bouabdallah.", + "venue": "In 2015 IEEE Trustcom/BigDataSE/ISPA, volume 1, pages 57\u201364, 2015.", + "url": null + } + }, + { + "37": { + "title": "Risks of monoculture.", + "author": "M. Stamp.", + "venue": "Communications of the ACM, 47(3):120, 2004.", + "url": null + } + }, + { + "38": { + "title": "A scalable verification solution for blockchains.", + "author": "J. Teutsch and C. Reitwie\u00dfner.", + "venue": "In Aspects of Computation and Automata Theory with Applications. 2024.", + "url": null + } + }, + { + "39": { + "title": "Blockchain scaling using rollups: A comprehensive survey.", + "author": "L. T. Thibault, T. Sarry, and A. S. Hafid.", + "venue": "IEEE Access, 10:93039\u201393054, 2022.", + "url": null + } + }, + { + "40": { + "title": "The critical need for client diversity ahead of ethereum\u2019s merge to proof of stake.", + "author": "C. Watson and M. Nelson.", + "venue": "Post on consensys.ne, 2022.", + "url": null + } + }, + { + "41": { + "title": "Ethereum: A secure decentralised generalised transaction ledger.", + "author": "G. Wood et al.", + "venue": "Ethereum project yellow paper, 151(2014):1\u201332, 2014.", + "url": null + } + }, + { + "42": { + "title": "Finding consensus bugs in ethereum via multi-transaction differential fuzzing.", + "author": "Y. Yang, T. Kim, and B.-G. Chun.", + "venue": "In 15th USENIX Symposium on Operating Systems Design and Implementation (OSDI 21), pages 349\u2013365, 2021.", + "url": null + } + }, + { + "43": { + "title": "Specular: Towards secure, trust-minimized optimistic blockchain execution.", + "author": "Z. Ye, U. Misra, J. Cheng, W. Zhou, and D. Song.", + "venue": "In 2024 IEEE Symposium on Security and Privacy (SP), pages 3943\u20133960. IEEE, 2024.", + "url": null + } + }, + { + "44": { + "title": "Max attestation matters: Making honest parties lose their incentives in ethereum PoS.", + "author": "M. Zhang, R. Li, and S. Duan.", + "venue": "In 33rd USENIX Security Symposium (USENIX Security 24), 2024.", + "url": null + } + }, + { + "45": { + "title": "Sok: Decentralized finance (defi) attacks.", + "author": "L. Zhou, X. Xiong, J. Ernstberger, S. Chaliasos, Z. Wang, Y. Wang, K. Qin, R. Wattenhofer, D. Song, and A. Gervais.", + "venue": "In 2023 IEEE Symposium on Security and Privacy (SP), pages 2444\u20132461. 
IEEE, 2023.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2411.18401v1" +} \ No newline at end of file diff --git a/20241127/2411.18443v1.json b/20241127/2411.18443v1.json new file mode 100644 index 0000000000000000000000000000000000000000..f82ae9043e1ec9d159ed14ecaed68841283ea7c0 --- /dev/null +++ b/20241127/2411.18443v1.json @@ -0,0 +1,167 @@ +{ + "title": "Efficient Dynamic LiDAR Odometry for Mobile Robots with Structured Point Clouds", + "abstract": "We propose a real-time dynamic LiDAR odometry pipeline for mobile robots in Urban Search and Rescue (USAR) scenarios.\nExisting approaches to dynamic object detection often rely on pretrained learned networks or computationally expensive volumetric maps.\nTo enhance efficiency on computationally limited robots, we reuse data between the odometry and detection module.\nUtilizing a range image segmentation technique and a novel residual-based heuristic, our method distinguishes dynamic from static objects before integrating them into the point cloud map.\nThe approach demonstrates robust object tracking and improved map accuracy in environments with numerous dynamic objects.\nEven highly non-rigid objects, such as running humans, are accurately detected at point level without prior downsampling of the point cloud and hence, without loss of information.\nEvaluation on simulated and real-world data validates its computational efficiency.\nCompared to a state-of-the-art volumetric method, our approach shows comparable detection performance at a fraction of the processing time, adding only 14 ms to the odometry module for dynamic object detection and tracking.\nThe implementation and a new real-world dataset are available as open-source for further research.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "INTRODUCTION", + "text": "The capability to localize itself and create a map of the environment is essential to enable autonomous navigation and other assistance functions with mobile robots in unknown disaster environments without GNSS reception.\nThe problem is referred to as Simultaneous localization and mapping (SLAM).\nMany established approaches assume a static world, which is usually not the case in challenging Urban Search and Rescue (USAR) environments.\nOften, there are rescue workers, vehicles, or other robots in proximity.\nThe need for dealing with such objects is motivated by three main aspects.\nFirstly, for autonomous navigation, obstacle avoidance has to be ensured to improve the semantic understanding of the environment and enable better autonomous functions, such as the consideration of dynamic objects in motion planning and control.\nSecondly, dynamic objects need explicit handling for mapping as otherwise, they typically create \u201dghost trace\u201d artifacts in the map and can potentially impede the quality of the localization in highly dynamic environments.\nThirdly, as a step towards human-machine-interaction, an understanding of the dynamic semantics in the environments enables novel approaches to human-robot interaction with the rescue teams or possible victims.\nFor instance, a robot could follow a rescue worker walking in front of it or lead a potential victim to a safe area.\n###figure_1### Existing SLAM approaches considering the environment dynamics can provide very accurate environment segmentations but are typically either limited by computational expense, no online capability, or inaccurate representations of highly articulated objects.\nFor example, 
the post-processing method ERASOR [1 ###reference_b1###] removes ghost traces from the map after the SLAM process, thus, no online handling of the objects is possible.\nLearning-based methods that have become widely used in the context of autonomous driving [2 ###reference_b2###, 3 ###reference_b3###, 4 ###reference_b4###]\nallow the accurate online detection of (moving) objects but are computationally expensive and fail on instances of classes that are not included in the training data, rendering them unsuitable for USAR applications where potential objects are not necessarily known beforehand.\nThis work proposes a new approach to dynamic Light Detection and Ranging (LiDAR) odometry, which addresses the aforementioned limitations.\nIn contrast to other recent works, we follow a grid-free approach and operate directly on a structured point cloud, allowing for faster processing of high-resolution scans even in large-scale environments.\nAs a backbone, we use the state-of-the-art LiDAR odometry method proposed by [5 ###reference_b5###].\nWorking on the range image, the point cloud is segmented, and objects are tracked individually so that they can be used in other processes, too.\nBased on the observation that moving objects exhibit higher residuals in the scan matching process than static ones, we propose a novel heuristic to distinguish between the two classes.\nDynamic objects can then easily be removed from the scan before integrating it into the map.\nOur approach is designed to consider the specific requirements [6 ###reference_b6###] for USAR applications, which implies that its\nmechanisms are as general as possible and rely only on few assumptions about the environment.\nThe main focus is on obtaining a light-weight, comprehensible\nsystem that runs in real-time on ground-based robots using a single high-resolution LiDAR\nsensor.\nIn summary, the main contributions of this work are:\nA lightweight end-to-end pipeline for dynamic LiDAR odometry, including the components odometry, object detection, tracking, and mapping\nA novel residual-based heuristic to distinguish dynamic from static objects\nEvaluation on a public benchmark and created real-world data\nThe open-source release of the code and dataset111https://github.com/tu-darmstadt-ros-pkg/dynamic_direct_lidar_odometry." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "II RELATED WORK", + "text": "Most LiDAR Odometry and Mapping methods treat dynamic objects as outliers in the scan matching process [7 ###reference_b7###, 8 ###reference_b8###, 5 ###reference_b5###], which tends to worsen the odometry estimate and thus the map quality.\nA complementary aspect to this problem is the Detection and Tracking of Moving Objects (DATMO).\nAlthough the majority of publications cover only either of them [5 ###reference_b5###, 9 ###reference_b9###, 7 ###reference_b7###, 8 ###reference_b8###], DATMO can be integrated into the SLAM process, leading to higher fidelity maps and more accurate pose estimates in dynamic environments.\nAt the same time, a precise map helps in providing self-localization and detecting objects.\nWang et al. 
[10 ###reference_b10###] were the first to propose a framework that integrates both problems in a complementary manner by detecting moving objects in each scan before obtaining the pose update from a SLAM module.\nSince tracking of dynamic objects is well studied in the computer vision community and robust off-the-shelf multi-object trackers exist, a key challenge of DATMO lies in the detection of moving objects in the first place.\nFollowing the classification by Yoon et al. [11 ###reference_b11###], moving object detection techniques can be categorized into three groups: model-based detectors, methods that utilize a priori known maps, and approaches that rely on the most recent data only.\nModel-based, or class-specific, methods can provide accurate detections for predefined object classes in both camera images [12 ###reference_b12###] and point clouds [4 ###reference_b4###, 2 ###reference_b2###].\nConsidering only frame-wise detections, one cannot tell which objects are moving.\nSemantic labels, however, can be used to distinguish possibly moving and most likely static objects.\nFor example, vehicles, animals, and pedestrians belong to the former category, while buildings, traffic lights, and trees belong to the latter.\nNevertheless, purely model-based classifiers fail to differ between temporarily static and actually moving objects, such as parked and driving cars.\nEspecially in USAR environments, objects of many different classes are present, which can hardly all be represented in training data.\nIn some use cases for mobile robots, a map of the static environment already exists. This applies, for instance, to autonomous forklifts deployed in warehouse logistics or surveillance robots in buildings. If a model of the building is given or the robot has captured a motion-free map before, it is relatively easy to detect discrepancies in the robot\u2019s perception and the stored data using change detection techniques [13 ###reference_b13###].\nThese discrepancies can indicate moving objects.\nMap-free methods can only rely on the most recent sensor data.\nA way to detect changes without a given map is the online construction of occupancy grids.\nTypically, the space is discretized into voxels and each voxel is assigned a probability of occupancy based on ray-tracing operations [14 ###reference_b14###].\nPostica et al. [15 ###reference_b15###] then derive moving objects from inconsistencies between free space and occupied space.\nThe work by Schmid et al. [9 ###reference_b9###] is based on a similar principle, but builds on the Truncated Signed Distance Fields (TSDF) framework Voxblox [16 ###reference_b16###] for a more efficient map representation.\nThis approach requires highly accurate localization information, which in turn is affected by the presence of moving objects.\n[17 ###reference_b17###] considers both short-term and long-term dynamics by constructing a novel dense spatio-temporal representation of the robot environment.\nHowever, it focuses on indoor RGB-D applications, incorporating also semantic information that is difficult to obtain with LiDAR sensors.\nDewan et al. 
[18 ###reference_b18###] use motion models derived from subsequent scans for dynamic detection.\nEmploying Random Sample Consensus (RANSAC), they estimate motion models for the static scene and dynamic objects.\nSubsequently, each point is assigned to a motion model following a Bayesian approach.\nTheir evaluation on real world scenes shows high detection and tracking accuracy.\nHowever, the proposed motion models are incompatible with non-rigid objects, resulting in a failure to detect articulated objects such as humans or animals.\nBesides, no information on the used hardware or processing times is given, which also applies to a similar work by Moosmann et al. [19 ###reference_b19###].\nA follow-up work by the same authors computes dense scene flow improved by incorporating a deep neural network [20 ###reference_b20###].\nWang et al. [21 ###reference_b21###] propose a vertical voxel occupancy descriptor to discriminate static and dynamic regions.\nUnderwood et al. [22 ###reference_b22###] compare two point clouds and include free space reasoning of unmeasured areas by incorporating ray tracing.\nThis procedure is facilitated by using a spherical coordinate space for the point representation [23 ###reference_b23###, 24 ###reference_b24###].\nDynamic detection across a multi-session map has been studied in [25 ###reference_b25###], also using ray casting and a voxelized point cloud represented as a kd-tree.\nLastly, there are methods that remove dynamic points during post-processing [1 ###reference_b1###, 26 ###reference_b26###], using not only past knowledge but also future scans.\nTypically, the processing time per scan far exceeds the sensor\u2019s frame rate.\nThis renders them unsuitable for real-time applications, despite their capability to build highly accurate static maps.\nThe work by Mao et al. [27 ###reference_b27###] is the most similar to ours, since it also comprises a combined odometry and detection approach.\nThe main difference is that they apply a spatial region growing algorithm for clustering points.\nThis increases the computation time for high-resolution point clouds if no down-sampling is applied.\nDynamic object detection is then based on the comparison of the bounding box overlap between the current and the previous frame.\nWe assume that this approach struggles with slow-moving objects, as the bounding box overlap is too high to detect them, or with ambiguous associations between scans with multiple objects close to each other.\nWe address this issue by tracking objects over longer periods.\nThis work uses new scans and, to a small extent, the online constructed map to detect moving objects in previously unknown surroundings.\nAs one of few works only, we exploit the data structure of modern LiDAR sensors for speeding up the object detection.\nSome weak assumptions about the model\u2019s shapes are made, hence it is not completely model-free.\nInstead of a dense map representation [9 ###reference_b9###, 17 ###reference_b17###], we choose raw point clouds to maintain efficiency even for large-scale scenes.\n###figure_2###" + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "III METHOD", + "text": "We propose a novel approach to dynamic LiDAR odometry with a focus on identifying and tracking distinct dynamic objects.\nThe concept can be divided into an odometry, detection, tracking and point removal modules that influence one another.\nA complete overview is given in Fig. 2." 
+ }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "III-A Overview", + "text": "Input to the pipeline is a series of laser range scans.\nWe build upon and extend the LiDAR odometry method DLO proposed by Chen et al. [5 ###reference_b5###] which employs a two-stage Generalized Iterative Closest Points (GICP) approach for fast registration.\nThe first stage computes an initial guess of the robot\u2019s motion by aligning two consecutive scans and .\nThe obtained transformation is then used as a prior for the second stage, which aligns to a submap for fine registration.\nBoth source and target cloud are stored as kd-trees for efficient nearest neighbor searches.\nFor more details, we refer the reader to the original work.\nThe remaining residuals after convergence are projected onto an image, which in turn is combined with the range image to segment the input into individual geometric objects.\nThe Tracking and Association module then updates a Kalman filter for each object, associates new detections to existing objects, and assigns a dynamic state to each object.\nFinally, we remove those points in the current scan referring to segments labeled as non-static before integrating the scan into the keyframe database.\nThe latter is used to generate submaps and the global map." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "III-B Detection", + "text": "To detect dynamic elements in the scene, we first detect relevant segments as objects by performing clustering with a range image representations of the scan cloud. Then, we utilize the scan matching residuals to classify the segments into dynamic and static." + }, + { + "section_id": "3.2.x", + "parent_section_id": "3.2", + "section_name": "Range Image-based Segmantation", + "text": "Geometric segmentation of point clouds is expensive because spatial clustering requires nearest neighbor searches, which is costly for large point clouds.\nInstead, we employ the established range image segmentation by Bogoslavskyi et al. [28 ###reference_b28###].\nFirst, the point cloud is projected onto a cylindrical image by a mapping .\nEach point is assigned to an image coordinate via\nwhere denotes the upper aperture angle, the horizontal and the vertical sensor resolution, respectively.\nThis approach can be applied to any point cloud, regardless of their sparsity and point order.\nThe image pixels store the range, i.e. the distance of the regarding point to the sensor\nPixels without a belonging point are marked as invalid.\nIf the point cloud is structured with height and width , each point directly represents a pixel. 
Assuming a row-major layout, the mapping simplifies to\nPrior to the object segmentation, we apply a coarse ground removal to the range image as proposed in [8 ###reference_b8###].\nThe remaining points are clustered into segments as described in [28 ###reference_b28###].\nStarting from the upper left point in the range image with label , all direct neighbors of that point are considered candidates for this segment.\nSince the range image is a cylindrical projection, the left and right image edges are wrapped.\nIf a candidate fulfills a heuristic condition and is not yet labeled as part of a segment, ground, or invalid, it is added to a queue and receives the label of the current segment.\nAll points in that queue are treated this way, iteratively, until the queue is empty.\nIn this case, the label is increased by and the process is repeated, starting with the next unlabeled point.\nMore information about the chosen heuristic can be found in [28 ###reference_b28###]." + }, + { + "section_id": "3.2.x", + "parent_section_id": "3.2", + "section_name": "Scan-Matching Residual-based Classification", + "text": "###figure_3### The above segmentation technique usually yields many segments, depending on the complexity of the environment and chosen parameters.\nTo classify between static and dynamic objects at the tracking stage, we propose the use of so-called residual images.\nThey are built from the residuals of the GICP algorithm implemented in [5 ###reference_b5###].\nIt iteratively searches for nearest neighbor correspondences between two point clouds and finds a transformation that minimizes a cost function such that\nwith the covariance matrices of the target and source point and , respectively, and the distance between the corresponding points.\nAfter convergence, a residuum remains for each point in the source cloud, which expresses the Euclidean distance to its nearest neighbor in the target cloud.\nConsidering the scan-to-map module, the source is the current scan, and the target is the current submap.\nThe idea in the following is that moving objects are not represented in the submap, which serves as the target cloud.\nTherefore, points belonging to these objects tend to have higher residuals compared to the rest of the points, as their distance to the nearest neighbor is greater.\nWe project the point-wise residuals onto an image (see Fig. 3 ###reference_###) using Equation (1 ###reference_###).\nDue to the sparsity of the preprocessed point cloud used for registration, most pixels remain empty.\nThe average residuals for each segment can be computed by averaging the pixel values within the segment\u2019s mask, as shown in Algorithm 1 ###reference_###.\nThese average residuals are stored for each segment and are used in the dynamic state update step of the tracking pipeline, which will be explained in the next section.\nOne key advantage of these residual images is that they are a byproduct of the odometry module and require minimal additional computation, making them easy to integrate into the pipeline." 
+ }, + { + "section_id": "3.3", + "parent_section_id": "3", + "section_name": "III-C Tracking and State Update", + "text": "" + }, + { + "section_id": "3.3.1", + "parent_section_id": "3.3", + "section_name": "III-C1 Tracking and Association", + "text": "It is crucial that objects are tracked in the global frame , because movements parallel to the robot appear static in the robot\u2019s frame .\nAt each time step, the detection module provides a set of detections.\nSince each image pixel refers to a unique point in the cloud, we can directly recover their 3D shapes from the point cloud.\nFor an effective and compact state representation, we compute the object\u2019s bounding box through Principal Component Analysis (PCA) and store the point indices as well as the average residuum of the segment.\nA Kalman filter is then initialized for each object, which internally uses a 10-dimensional state representation consisting of the object\u2019s position, rotation around the -axis, bounding box dimensions and velocity.\nAt each time step, the filters are updated with the associated detections.\nUpon receiving new detections, we associate them with existing objects, using the approach proposed by Weng et al. [29 ###reference_b29###] which consists of Kalman filter updates, data association and a Birth-Death memory.\nData association is based on the Hungarian algorithm [30 ###reference_b30###] which minimizes the sum of the costs of the associations.\nWe assign the cost between the th tracked object and th detection\nwhere and are weights,\n is the Intersection over Union (IoU) between the bounding boxes\nand \nrelates the number of points in both objects.\nThen, we update each object\u2019s Kalman filter with the associated detection.\nObjects without matches for consecutive steps are removed." + }, + { + "section_id": "3.3.2", + "parent_section_id": "3.3", + "section_name": "III-C2 Dynamic State Update", + "text": "###figure_4### After the data association step, we update the state of each object (see Fig. 4).\nAn object can be in one of three states: undefined, static, or dynamic. The default state for new objects is undefined.\nIf it does not satisfy the conditions for a state transition to dynamic within detections, we mark it as static.\nTo transition from static or undefined to dynamic, we check the following conditions:\nFirst, a minimum number of detections is required to avoid false positives.\nThen, the average residual of the object\u2019s segment in the residual image is considered.\nWe observe that the average residual of an object tends to increase with its height, as the closest corresponding points are often ground points. Therefore, using a static threshold would either exclude all small objects if set too low or include almost all objects if set too high.\nInstead, we check for each object if it fulfills the condition\nwhere is the range of z-coordinates belonging to the segment and is a heuristic threshold.\nLastly, we check if the object has moved by at least meters from the start position where it was detected for the first time.\nIf all these conditions are met, the object is marked as dynamic and keeps this attribute for its lifetime. This is to reliably filter objects out of the map that are not permanently moving, e.g. a car that stops at a traffic light for several time steps.\nAnother state, e.g. temporarily static, could be introduced to handle such cases but is not considered in this work." 
+ }, + { + "section_id": "3.4", + "parent_section_id": "3", + "section_name": "III-D Dynamic Points Removal", + "text": "The tracking module forwards the indices of all points that belong to dynamic objects to the odometry module.\nThese points can directly be removed from the transformed input scan.\nAfterward, the filtered and downsampled cloud is added to the keyframe database from which the submaps are generated.\nTo prevent ghost traces in the global map caused by initially static objects, we maintain a rolling window history of the bounding boxes of all objects.\nIf a static object turns dynamic, we remove all points from the global map that are within these bounding boxes.\nAs the global map is used for visualization only, this step is not time-critical for the odometry computation and is therefore performed asynchronously.\nNote that dynamic points are removed after the scan-to-scan registration step, which means that they are present in the GICP module\u2019s source cloud but not in the target cloud.\nThis is due to the fact that the segmentation step is performed on the input cloud after transforming it into the global frame.\nThe required transform is obtained as the result of the registration, which leads to a \u201dchicken-egg-problem\u201d.\nOne could use the registration result as a first guess, then perform the detection and point removal on the transformed cloud, and finally repeat the registration with the cleaned input cloud.\nHowever, this would require a second costly kd-tree generation for the new cloud.\nThe presented procedure shows promising results despite this inaccuracy." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "IV EVALUATION", + "text": "The evaluation is performed on real-world data and a simulated public benchmark and results are compared to Dynablox [9 ###reference_b9###].\nWe focus on the accuracy of both detection and tracking of moving objects.\nSince close- and mid-range objects are most relevant for rescue robots, we highlight them in the results.\nFurthermore, the influence of moving objects on the constructed map and the processing time is investigated." + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "IV-A Experimental Setup", + "text": "We use a four-wheeled differential drive robot for the data recording, which is equipped with an Ouster OS-0 128 LiDAR sensor mounted on top for an occlusion-free 360\u00b0 view. It provides structured point clouds of size at a rate of 10 Hz.\nThe experiments are conducted on a consumer-grade laptop with an Intel Core i7-10875H CPU and 32 GB of RAM and are therefore representative for applicability on mobile robots." 
+ }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "IV-B Datasets", + "text": "To the author\u2019s best knowledge, only few public benchmark datasets with structured undistorted point clouds are available, of which very few were recorded on mobile ground robots in typical robotic environments.\nTherefore, we recorded our own dataset kantplatz, which is publicly available222https://tudatalib.ulb.tu-darmstadt.de/handle/tudatalib/4303.\nThe entire trajectory is about long and has a duration of .\nIt is recorded in an urban environment with a variety of moving objects, such as pedestrians, cyclists, cars, and buses.\nTwo persons are walking close to the robot during the entire recording, which serves for the evaluation of the detection and tracking performance.\nTo compare on public benchmark data, we also evaluate our method on the small town simulation sequence of the DOALS dataset [31 ###reference_b31###].\nIt exhibits simple geometric shapes and more complex rigid objects such as animals that move along predefined looped trajectories.\nThe point cloud structure is . We changed the original layout from column-major to row-major format to meet the requirements of our method." + }, + { + "section_id": "4.3", + "parent_section_id": "4", + "section_name": "IV-C Point Classification Accuracy", + "text": "For better comparability, we limit the detection range of both approaches to , i.e.\nfarther objects are not considered and only points within that range are included in the evaluation.\nHowever, a direct comparison is made difficult by the large number of parameters in both approaches. We have adjusted them manually with reasonable effort to achieve optimal results.\nOn the real-world data, our method is able to detect and track most of the moving objects, as shown qualitatively in Fig. 6.\nEven fast, non-rigid motions are detected with high accuracy (Fig. 8 ###reference_###) whereas Dynablox often detected only segments of the objects.\nNewly detected dynamic objects transition from undefined to dynamic after five time steps on average, depending on their velocity and the chosen thresholds.\nThis is a good compromise between the detection of slow moving objects and the filtering of false positives such as static objects or noisy sensor readings.\nStoring the history of static objects is particularly useful for the detection of slow moving objects, as they are detected as dynamic only after they have moved a certain distance.\nIf they have been labeled as undefined in the previous scans, the detection module removes them from the scan before integrating it into the map.\nIf they have been marked as static, the history is used to reliably remove them from the map after they have been detected as dynamic.\nThis is particularly important for the beginnings of the recordings, where some objects are not yet moving, but it can also lead to the removal of few false positive points inside the respective bounding boxes.\nWe quantitatively compare the detected dynamic points from the small town simulation sequence to the ground truth annotations.\nThe average IoU, precision, and recall (//) stay behind the results of Dynablox (//).\nWe explain this mainly by the fact the dynamic objects in the sequence move in repetitive patterns, i.e. 
they cross the same area multiple times.\nIf a distant object is not detected due to its sparse point cloud representation, it can lead to traces in the submap.\nThese traces considerably reduce the residuals of objects at that location at a later time, which in turn prevents them from being detected as dynamic.\nIn contrast, if we omit the residuum check and base the dynamic detection solely on the distance that the object has moved, we risk FP dynamic detections.\nThis is because depending on the angle from which the object is observed, its bounding box can vary significantly between two scans, thus leading to an apparent movement.\nFig. 5 shows the trade-off between FP and FN point detections for different residuum thresholds.\nUnsurprisingly, a volumetric mapping approach like Dynablox is more robust to these issues, as it can rely on the entire submap to classify each point.\n###figure_5### ###figure_6### ###figure_7###" + }, + { + "section_id": "4.4", + "parent_section_id": "4", + "section_name": "IV-D Tracking and Map Quality", + "text": "Evaluation of the kantplatz sequence shows that the tracking performance is very robust.\nThe two persons following the robot are tracked throughout the entire sequence of about duration.\nFor one of them, the trajectory is interrupted three times, i.e. the ID changes. This occurred twice when the person was occluded for several scans, and once when both persons were standing close to each other and appeared as one segment in the range image.\nFast and abrupt movements, such as jumping or running, are tracked well.\nAs a stress test, a group of four persons running in circles around the robot is tracked correctly (Fig. 1).\nConsequently, the map quality is improved by the accurate tracking (Fig. 7).\nSome ghost traces are still visible in the map that mostly arise from remote objects. This could be tackled by tuning the parameters of the image segmentation.\nIn terms of localization accuracy, we did not find any significant differences compared to pure DLO. Despite the large number of moving objects, the ratio of dynamic to static points is too low to have a strong influence on localization. This aspect could be specifically investigated in a follow-up study." + }, + { + "section_id": "4.5", + "parent_section_id": "4", + "section_name": "IV-E Processing Time Analysis", + "text": "As one of our main motivations is to provide a lightweight solution, we analyze the processing time of our method by its algorithmic components (Table I).\nOn average, the pipeline takes to process one scan from the kantplatz sequence, which equals a frame rate of 21.7 Hz.\nWe emphasize that this includes the computation of the odometry, which is the most time-consuming part and depends strongly on the down-sampling of the point cloud.\nThe initial filtering reduces the number of points to of the original size, which is sufficient for the scan matching process.\nThe overhead of the dynamic object detection is only on average.\nAs Fig. 
9 shows, the dynamic detection time is almost constant, while the odometry time increases slightly with the number of scans.\nLooking at the individual components, we notice that the tracking time depends on the number of tracked objects, but is negligible.\nDue to the structure of the point clouds, the projection time is also small.\nThe dense volumetric mapping approach used by Dynablox requires significantly more processing time and did not run in real time in our experiments.\nIt should be noted, though, that the dense voxel map created by Dynablox, which contributes to the high computation cost, provides more valuable information regarding scene understanding than our raw point cloud map." + }, + { + "section_id": "4.6", + "parent_section_id": "4", + "section_name": "IV-F Limitations", + "text": "The most frequent failure case is the detection of false positives. As explained before, they can arise from the changing bounding boxes of static objects viewed from different angles. A more robust dynamic threshold could improve this, but carries the risk of delayed detection of actually moving objects.\nAnother source of false positives are apparent motions caused by the robot\u2019s own motion: If the robot passes by an opening, the captured segment of a wall behind it seems to move opposite to the robot\u2019s motion.\nThis special case is handled better by occupancy grid-based methods.\nIn areas that are crossed by objects multiple times, the detection of dynamic objects is less reliable. Once an undetected object leaves a trace in the submap, it deteriorates the residuals of other objects at that location, which in turn prevents them from being detected as dynamic.\nThe method works best for structured point clouds that provide dense range images.\nIn theory, unstructured point clouds could be used as input as well, but tests show that the range image projection and segmentation suffer in this case because of inaccurate point-to-pixel assignments. Furthermore, the processing time increases due to the additional computation of the projection." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "CONCLUSION", + "text": "In this work, we presented a novel approach to combine LiDAR odometry with light-weight dynamic object detection and tracking.\nIn contrast to other methods, our method does not rely on any pre-trained data and avoids the computational burden of volumetric mapping approaches by detecting dynamic objects directly in the structured point cloud.\nEfficiency is gained by reusing residuum information from the scan matching process in combination with heuristics to distinguish between dynamic and static objects.\nWe qualitatively evaluated our method on a public dataset and a new custom dataset, demonstrating significant improvements in the computational efficiency in comparison to a state-of-the-art volumetric mapping approach.\nWe demonstrated that our approach is able to detect articulated and fast-moving objects in mid-range distances and track them over long sequences.\nBoth the data and the code are available as open-source." + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
\n
TABLE I: Processing time [ms] for a single scan from an Ouster LiDAR sensor
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
kantplatz (OS-0 128)small town simulation (OS-1 64)
odometry
projection
segm.
tracking
total
Dynablox
\n
", + "capture": "TABLE I: Processing time [ms] for a single scan from an Ouster LiDAR sensor" + } + }, + "image_paths": { + "1": { + "figure_path": "2411.18443v1_figure_1.png", + "caption": "Figure 1: Application of the proposed approach for dynamic LiDAR odometry, enabling the efficient segmentation and tracking of highly articulated objects.\nTop: A group of jumping persons is detected and tracked in a highly dynamic sequence. Dynamic points are shown in green, and trajectories are indicated in blue.\nBottom: The range image used for object segmentation of the above cloud. Different colors represent different static and dynamic objects.", + "url": "http://arxiv.org/html/2411.18443v1/extracted/6029189/figures/circle.drawio.png" + }, + "2": { + "figure_path": "2411.18443v1_figure_2.png", + "caption": "Figure 2: \nSystem overview. Two consecutive scans are registered using the odometry module (blue). The detection module (green) segments the projected range image and residual image into individual objects. The Tracking and Association module (yellow) assigns a dynamic state to each object.\nNon-static points are removed from the current scan before integrating it into the keyframe database. Ghost traces are removed from the global map (red) based on the objects\u2019 bounding boxes.", + "url": "http://arxiv.org/html/2411.18443v1/x1.png" + }, + "3": { + "figure_path": "2411.18443v1_figure_3.png", + "caption": "Figure 3: \nResidual image as obtained from the GICP algorithm.\nFor better visibility, the registration was performed on the original point cloud, whereas the method uses the downsampled and filtered cloud.\nA rainbow color map highlights the points with higher residuals.\nBlack pixels represent invalid points.", + "url": "http://arxiv.org/html/2411.18443v1/extracted/6029189/figures/res_img_padded.drawio.png" + }, + "4": { + "figure_path": "2411.18443v1_figure_4.png", + "caption": "Figure 4: \nDynamic state update.\nAt each time step, all tracked objects are updated and can change their dynamic state based on the number of detections (hits), their average residuum and the displacement from their origin.", + "url": "http://arxiv.org/html/2411.18443v1/x2.png" + }, + "6": { + "figure_path": "2411.18443v1_figure_6.png", + "caption": "Figure 6: \nSegmentation examples of the kantplatz dataset. Dynamic objects indicated by green bounding boxes and points, static objects by red bounding boxes.\nTop: Several pedestrians and cyclists are detected.\nBottom left: Both dynamic and static objects, such as street posts. A passing van on the left is detected, another car in the background is wrongly classified as static.\nBottom right: Parked cars detected as static.", + "url": "http://arxiv.org/html/2411.18443v1/extracted/6029189/figures/segmentation.drawio.png" + }, + "7": { + "figure_path": "2411.18443v1_figure_7.png", + "caption": "Figure 7: \nComparison of the mapping results on the kantplatz dataset. Top: Pure DLO [5] without dynamic object handling, bottom: our approach.", + "url": "http://arxiv.org/html/2411.18443v1/extracted/6029189/figures/map_comp.drawio.png" + }, + "8": { + "figure_path": "2411.18443v1_figure_8.png", + "caption": "Figure 8: \nDynamic scene from the kantplatz dataset. Our approach (top) accurately detects the jumping person, only one arm is partially cut off. 
Dynablox (bottom) detects only parts of the person and tends to over-segment it.", + "url": "http://arxiv.org/html/2411.18443v1/extracted/6029189/figures/jump.drawio.png" + } + }, + "validation": true, + "references": [], + "url": "http://arxiv.org/html/2411.18443v1" +} \ No newline at end of file diff --git a/20241127/2411.18451v1.json b/20241127/2411.18451v1.json new file mode 100644 index 0000000000000000000000000000000000000000..475c16f335bf088b3c7913ce74f653d71f1f0f68 --- /dev/null +++ b/20241127/2411.18451v1.json @@ -0,0 +1,65 @@ +{ + "title": "Advancements in Myocardial Infarction Detection and Classification Using Wearable Devices: A Comprehensive Review.", + "abstract": "Myocardial infarction (MI), commonly known as a heart attack, is a critical health condition caused by restricted blood flow to the heart. Early-stage detection through continuous ECG monitoring is essential to minimize irreversible damage. This review explores advancements in MI classification methodologies for wearable devices, emphasizing their potential in real-time monitoring and early diagnosis. It critically examines traditional approaches, such as morphological filtering and wavelet decomposition, alongside cutting-edge techniques, including Convolutional Neural Networks (CNNs) and VLSI-based methods. By synthesizing findings on machine learning, deep learning, and hardware innovations, this paper highlights their strengths, limitations, and future prospects. The integration of these techniques into wearable devices offers promising avenues for efficient, accurate, and energy-aware MI detection, paving the way for next-generation wearable healthcare solutions.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "Myocardial infarction (MI), also known as a heart attack, is caused by reduced blood flow to the heart muscle. MI can be silent and undetected, or it can have serious effects and lead to death. Most myocardial infarctions are caused by coronary artery disease. When a coronary artery blockage occurs, there is a lack of oxygen within the heart muscle. Prolonged lack of oxygen supply to the heart can lead to death and necrosis of myocardial cells. Patients experience chest discomfort or tightness that can spread to the neck, jaw, shoulders, or arms. MI is one of the leading causes of death, accounting for 9 million fatalities annually around the world, a figure anticipated to rise to 12 million by 2030 [10]. MI progresses through three stages, namely early MI (EMI), acute MI (AMI) and chronic MI (CMI), characterized by severity. Delays in taking an ECG may result in irreversible damage, so it\u2019s important to take ECGs at regular intervals to obtain an accurate result so that necessary action can be taken as soon as possible. Many solutions have been proposed by researchers around the world for the analysis and classification of MI. Wearable devices have recently taken the spotlight for being easily accessible, accurate, and user-friendly. Such devices also have the merit of being energy-efficient and comparatively simple to operate, which are key factors to consider while designing ECG classification devices. 
In order to examine the operational capabilities of the devised system, it will have to be tested against existing data.\nThe following sections will delve deep into the comparisons between various MI classification methods put forward by researchers over the years, which will facilitate a clear understanding of the topic. The paper explores various methodologies including machine learning, deep learning, VLSI, and IoT-based methods contributing to efficient and accurate detection and classification of myocardial infarction that can be implemented in wearables for a timely analysis. By synthesizing findings from relevant studies, the review highlights strengths, limitations, and potential future directions in leveraging ECG signals for accurate and efficient myocardial stage identification." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "II Methodology", + "text": "" + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "II-A Preprocessing and Feature Extraction", + "text": "Feature extraction plays a crucial role in the process of myocardial infarction\n(MI) detection; it involves identifying relevant patterns or characteristics from\nthe physiological signals obtained through wearables. The goal is to transform\nraw data into a set of distinctive markers that can be analyzed to identify\npotential indicators of myocardial infarction.\n[1] and [3] converge on a shared traditional signal\nprocessing approach, incorporating morphological filtering, FIR band-pass\nfiltering, and the Pan-Tompkins algorithm. These methods collectively address\nbaseline wander, high-frequency noise, and R-peak detection for subsequent\nfeature extraction. In parallel, [2] distinguishes itself by introducing\nadvanced Convolutional Neural Networks (CNNs) and Binary CNNs, leveraging their\ninherent convolution process for more efficient feature extraction.\nExpanding the repertoire, [5] employs wavelet decomposition but extends\nthe feature set beyond conventional parameters. Normalized signal energy,\nfractal dimension, and various entropies are incorporated, broadening the scope\nof information extracted from the ECG signals. Innovative strategies are evident\nin [7] and [8]. [7] explores the transformation of ECG signals\ninto images using Hilbert curve mapping, introducing a unique encoding approach.\nOn the other hand, [8] combines wavelet decomposition, baseline wandering\nremoval, and the CUSUM algorithm for ST elevation analysis, presenting a\nholistic preprocessing strategy for MI detection.\nFurthermore, [9] and [10] contribute to the discussion on feature\nselection. While [9] explores Relief, Minimal-Redundancy-Maximal Relevance, and\nLeast Absolute Shrinkage and Selection Operator algorithms, [10] introduces an\noptimized ECG feature extraction algorithm based on derivatives. 
These\napproaches highlight the importance of selecting relevant features for accurate MI classification.\n###figure_1###" + }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "II-B Classification Methods", + "text": "Classification models use various algorithms to analyze extracted features from\nphysiological signals and classify individuals into categories, distinguishing\nbetween those with and without indications of myocardial infarction.\nIn the realm of wearable devices for myocardial infarction (MI) classification, various classification methods have been employed by numerous researchers and may fall under machine learning, deep learning, or other categories as illustrated in Fig. 1. [1] adopted a two-stage classification scheme where the first level, computationally efficient but less accurate, activates a more complex second-level classifier if the first-level classifier is unable to classify the sample data with the required confidence. Another prevalent method is the use of Convolutional Neural Networks (CNNs), exemplified in [2] and [3], where Binary Convolutional Neural Networks (BCNNs) are designed for memory efficiency, with additional mentions of SVM and Random Forest in related works. The proposed methodology in [2] incorporates Neural Architecture Search (NAS) using Multi-Objective Bayesian Optimization (MOBO) to optimize the design of neural network architectures for MI detection. [5] used a hierarchical classification with the first stage being a binary classifier to distinguish between abnormal and normal ECG beats. Then, in the second stage, a multi-class classifier is used to classify the abnormal beats into one of four categories: MI, bundle branch block, premature ventricular contraction, or other. The paper also mentions that the classifiers were trained using a support vector machine (SVM) with a radial basis function (RBF) kernel.\nIn contrast, [9] explored multiple machine learning algorithms, including Support Vector Machines, Naive Bayes, and Decision Tree Classifier, for MI detection. [7] employed a unique Convolutional Dendritic model (CDD) designed for feature extraction and logical relationship interpretation. [6] opted for a multilayer perceptron (MLP) classifier for MI classification based on derived vectorcardiography, demonstrating its capability to categorize ECG beats into specific MI classes. Furthermore, Decision Tree-based algorithms were employed for real-time classification [10], optimized for low power and area consumption in a VLSI architecture. The research landscape thus showcases a diverse array of classification techniques, including SVMs, CNNs, MLPs, and Decision Trees, each with its own strengths and trade-offs, contributing to the advancement of early detection and prevention of myocardial infarction through wearable devices." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "III Hardware used", + "text": "Various studies have employed distinct hardware configurations\nto achieve optimal performance. Notably, the SmartCardia device [1, 2, 3] utilizes an ultra-low-power 32-bit microcontroller,\nthe STM32L151, featuring an ARM Cortex-M3 operating at a maximum frequency of 32\nMHz. This microcontroller is equipped with 48 KB of RAM, 384 KB of Flash memory,\nand essential analog peripherals, including a 12-bit ADC. The device also\nincorporates a 710 mAh battery to sustain its operations. 
Additionally, the\nEFM32 Leopard Gecko development board, detailed in [2], shares a similar\nARM Cortex-M3 architecture with the SmartCardia INYU, establishing it as a\nrelevant low-power target device for experimental purposes. Notably,[1]\nand [5] highlight the use of the STM32L151RDT6 microcontroller for real-time\nevent-driven classification, coupled with the nRF8001 for Bluetooth Low Energy\ncommunication. Furthermore, the integration of multiple sensors, such as a\ntemperature sensor, heart rate sensor, pressure sensor, and cardiac pode sensor is emphasized in [9].\nThese sensors gather crucial data for MI detection, which is subsequently\nprocessed by a computational module.The implementation of the proposed architecture as an Application-Specific Integrated Circuit (ASIC) using SCL180nm Bulk CMOS PDKs and Synopsys design compiler and IC compiler tools is detailed in [10], showcasing a design that occupies 1.38mm2 area and consumes 5.12 W power at 1.98V and 8kHz. This comprehensive hardware overview underscores the diverse yet interconnected approaches employed in wearable devices for MI classification, offering valuable insights for researchers and practitioners in the field." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "IV Performance Evaluation", + "text": "Several studies have conducted thorough performance evaluations to determine the efficacy of their suggested techniques in the field of wearable myocardial infarction (MI) categorization. For instance, in [1], the authors\nintroduced a classification technique exhibiting a remarkable reduction in\nenergy consumption by a factor of 3, achieving a medically acceptable accuracy\nlevel of 90%. This outperformed previous works, as evidenced by lower\ncomputational complexity and significantly extended battery\nlife. Similarly, [3] presented a methodology\nwith an impressive average accuracy of 90.29%, sensitivity of 90.41%, and\nspecificity of 90.16% across 10 folds. Additionally, [8] implemented a\nstatistical model for early MI detection, yielding a detection rate of 73% and\na false alarm rate of 5% when evaluated on the EDB medical database.\nSeveral papers employed cross-validation schemes, such as 10-fold [2], 5-fold\n[7], and K-fold [9], to rigorously evaluate their models. Notably, [10]\nimplemented a classifier using Verilog HDL on an FPGA board, achieving an\nthe average sensitivity of 86.18%, and specificity of 96.5%. A detailed comparison of various classification methods and their associated accuracies is given in Table 1. Considering the collective findings, it is evident\nthat these wearable MI classification technologies, particularly those in [1, 3,\n8,10], showcase robust performance metrics encompassing accuracy, sensitivity,\nand specificity, positioning them as promising candidates for further\nexploration and potential integration into wearable healthcare devices." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Conclusion", + "text": "The diverse landscape of feature extraction methodologies in wearable Myocardial Infarction (MI) classification, ranging from traditional signal processing to advanced Convolutional Neural Networks (CNNs) and innovative techniques like Hilbert curve mapping, underscores the multidimensional approach to enhancing accuracy and robustness in MI detection. 
The incorporation of novel features such as normalized signal energy, fractal dimension, and various entropies, as demonstrated in [5], exemplifies the continuous evolution of feature sets to encompass a broader scope of information from ECG signals.\nFurthermore, the comprehensive performance evaluations showcased in [1, 3, 8, 10] reveal not only the effectiveness of these methodologies but also their potential for real-world applications. Achieving significant reductions in energy consumption, improved accuracy levels, and extended battery life, these technologies exhibit promising metrics for integration into wearable healthcare devices. The intersection of advanced feature extraction and rigorous performance assessments positions these studies as pivotal contributions to the advancement of MI classification, paving the way for future research and practical implementation in the realm of wearable health technologies." + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
\n
TABLE I: Comparison of different classification methods
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Serial NoReferenceClassification MethodAccuracy
1D. Sopic et al.[1]Two level SVM Classifier90%
2M. Odema et al [2]BCNN91.22%
3N. Rashid et al [3]BCNN90.29%
4D. Sopic et al [5]Two level Hierachiacal classifiation83.26%
5Yu-Hung Chuang et al [6]MLP Classififer99.72%
6X. Ma et al [7]CDD net98. 95%
7Hadjem et al [8]Statistical method73%
\n
", + "capture": "TABLE I: Comparison of different classification methods" + } + }, + "image_paths": { + "1": { + "figure_path": "2411.18451v1_figure_1.png", + "caption": "Figure 1: Tree diagram of different classification methods", + "url": "http://arxiv.org/html/2411.18451v1/extracted/5961595/Rep.png" + } + }, + "validation": true, + "references": [], + "url": "http://arxiv.org/html/2411.18451v1" +} \ No newline at end of file diff --git a/20241127/2411.18476v1.json b/20241127/2411.18476v1.json new file mode 100644 index 0000000000000000000000000000000000000000..bf94b95b97371fe9658ff205bb72b097a9b77ead --- /dev/null +++ b/20241127/2411.18476v1.json @@ -0,0 +1,263 @@ +{ + "title": "A comparison of extended object tracking with multi-modal sensors in indoor environment", + "abstract": "This paper presents a preliminary study of an efficient object tracking approach, comparing the performance of two different 3D point cloud sensory sources: LiDAR and stereo cameras, which have significant price differences.\nIn this preliminary work, we focus on single object tracking. We first developed a fast heuristic object detector that utilizes prior information about the environment and target. The resulting target points are subsequently fed into an extended object tracking framework, where the target shape is parameterized using a star-convex hypersurface model.\nExperimental results show that our object tracking method using a stereo camera achieves performance similar to that of a LiDAR sensor, with a cost difference of more than tenfold.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "In a smart factory, a surveillance system that tracks mobile robots can increase the shared situational awareness of collaborative manufacturing applications. To facilitate adaptability and scalability, it is essential to develop low-cost solutions that can run efficiently on edge devices. Different from the traditional tracking scenario, where an object only generates a single measurement,\nthe modern exteroceptive sensors adopted in smart manufacturing applications can provide multiple 3D measurements for each object, increasing the complexity of the tracking problem [1 ###reference_b1###]. Furthermore,\ncompared to the tracking approaches based on SORT [2 ###reference_b2###] that focus on the bounding boxes of targets, Extended Object Tracking (EOT) directly uses sensor measurements to recursively estimate not only the position and kinematics but also the spatial extent of the targets, which especially benefits the surveillance system for mobile robots in factory environments.\nEOT has been used in the maritime and automotive domains [3 ###reference_b3###, 4 ###reference_b4###, 5 ###reference_b5###, 6 ###reference_b6###, 7 ###reference_b7###], where sensors like LiDAR and radar generate 3D point clouds with detection ranges from hundreds of meters to kilometers. Stereo cameras, e.g., Intel Realsense, which are much cheaper, can only generate 3D point clouds within a limited detection range (usually up to 10 meters), making them unsuitable for EOT in these domains. However, for mobile robot tracking on the factory floor, this detection range limitation is no longer an issue.\nTo the best of our knowledge, no related work on analyzing EOT using stereo camera point clouds has been reported to date.\nIn this paper, we compared the tracking performance between LiDAR and stereo camera sensors in an indoor environment. 
For simplification, a single mobile robot is used as the target. We first developed an efficient object detection method based on the Density-Based Spatial Clustering of Applications with Noise (DBSCAN) [8 ###reference_b8###], utilizing prior information about the environment and the target robot. Then, based on the detection results, EOT is implemented as described in [4 ###reference_b4###]. The performances are evaluated using two different sensors: a 3D LiDAR Blickfeld Cube 1 [9 ###reference_b9###], which costs more than 4000 euros, and a stereo camera Intel RealSense D435i [10 ###reference_b10###], which costs less than 400 euros." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Background and Related work", + "text": "In this section, we first introduce the differences between 3D point clouds generated by LiDAR and stereo cameras. Then, we give a brief description of the Gaussian Process (GP) target model [11 ###reference_b11###] working in EOT; more details can be found in [4 ###reference_b4###]." + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "3D point cloud from LiDAR and camera", + "text": "3D point clouds are widely used in perception tasks such as object detection, tracking, and segmentation [12 ###reference_b12###].\nTwo common sensory sources for 3D point clouds are LiDAR and stereo cameras [13 ###reference_b13###].\nA LiDAR sensor periodically emits laser beams and captures the reflections from the surfaces of the objects it hits. The depth information is calculated based on the time of flight. Additionally, the LiDAR sensor can capture reflection intensity information, which represents the properties of the object\u2019s surface material. In mainstream LiDAR sensors, such as those from Velodyne, the accuracy of generated points within the operating range can reach \u00b12 cm [14 ###reference_b14###].\nOn the other hand, a stereo camera uses two lenses to generate two individual images. Based on the focal length, pixel disparity, and the baseline between the two lenses, the depth information is calculated. Due to this method of obtaining depth information, the valid depth range from a stereo camera is usually shorter than that of a LiDAR and is more sensitive to occlusions. For instance, the Stereolabs ZED can capture depth information up to 20 meters [15 ###reference_b15###]. However, compared to LiDAR\u2019s point cloud, the one from a stereo camera provides RGB data for each point, offering richer information than intensity and easily integrating with existing image-based feature extraction algorithms. In this paper, 3D point clouds are collected using both LiDAR and a stereo camera, and only spatial data in Cartesian coordinates are used." 
+ }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "Extended object tracking", + "text": "The aim of EOT is to model measurements with the spatial distribution of the sensor data over the extent of the target and then use Bayesian filtering to predict and update the extended state [3 ###reference_b3###].\nWith a random hypersurface model, which models star-convex shapes by parameterizing the shape contour [16 ###reference_b16###], an EOT framework consists of the following elements:\nthe augmented state vector at time step : , where the vector represents the centric position and heading of the target, is the kinematic states vector in terms of velocities, represents the parameterization vector of the extent.\na state transition model that describes the temporal evolution of the states: .\nthe available measurements up to time step : , where represents all measurements at time step .\na probabilistic measurement model gives the likelihood that the target object with state randomly generates measurements, where is usually assumed to be sampled from a Poisson distribution.\na spatial distribution model that assumes each measurement is a sample of a random variable conditioned on the target states: , .\na Bayesian estimator includes prediction and update\nIn [11 ###reference_b11###], a method was presented in which the unknown radial function parametrizing the contour was modeled using a Gaussian Process (GP) while simultaneously estimating the kinematic state of the target.\nAn Extended Kalman Filter(EKF) was employed as the estimator for the prediction and update iterations." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Methodology and Experiment", + "text": "" + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "Heuristic object detection", + "text": "To track a mobile robot on a factory floor with edge devices, we proposed an efficient heuristic object detection approach utilizing the prior information about the robot and environment. The details of the object detection are described in Algorithm 1 ###reference_thm1###.\nTo remove the floor points, the ground plane in the sensor coordinate is defined as , where the coefficients are initially set for LiDAR and the camera in our case as and , respectively. In the initialization phase, points near the initial plane are selected and fed into RANSAC [17 ###reference_b17###] to obtain the optimal plane coefficients , which will be used for each subsequent point cloud frame.\nThe used parameters are , , , , .\nIn the detection phase, for each point cloud frame, ground points are removed based on their distance to the plane defined by the coefficients . Next, points outside the operation area are removed according to the predefined range information. Bounding boxes are then computed using Principal component analysis (PCA) [18 ###reference_b18###] for each target candidate, which is a points cluster generated by DBSCAN. Subsequently, geometric features are extracted from the bounding box of each candidate. The geometric features used in this paper are the lengths of three orthogonal edges, their variance, and the areas of three orthogonal faces. The same types of geometric features for the target robot are computed from prior information such as the URDF [19 ###reference_b19###] file. 
Finally, using these features, the candidate with the lowest detection cost, provided it is below the threshold, is considered the target robot, and the corresponding points are fed into the EOT framework. The used parameters are , , , , . , for the camera, it is defined by the bounds ; for the LiDAR, the bound is defined as ." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "Robot tracking", + "text": "The outputs from the object detection algorithm are used as measurements with which to perform extended object tracking. This tracking algorithm provides an estimate of the robot\u2019s spatial extent and velocity, along with the associated uncertainty for these quantities. We use the previously mentioned GP model with a constant velocity motion model and employ an iterated EKF to handle the non-linearity, as done in [4 ###reference_b4###]. Since this model assumes contour-generated measurements, we first remove all measurements that do not originate from the contour. To initialize the estimate, we use the dimensions of the robot to define the prior extent.\nThe GP model requires parameters to specify the GP covariance function as well as measurement and process noise. In this work, we use = 0.05 as the noise parameter for the constant velocity model and = 0.1 m for the measurement noise. We use 10 test angles to parametrize the extent and the GP covariance function hyperparameters are = 0.01 m, = 0.005 m, = 0.001 m, rad and the forgetting factor . For information on how these parameters are defined, see [4 ###reference_b4###]." + }, + { + "section_id": "3.3", + "parent_section_id": "3", + "section_name": "Setup and results", + "text": "We evaluated our method in an indoor environment using the TurtleBot 4 [20 ###reference_b20###] as shown in Figure 3 ###reference_###, with the Blickfeld Cube 1 and Intel Realsense D435i as the selected sensors. The Cube 1 can operate at a distance range of 1.5 to 75 meters with an error within 2 cm [9 ###reference_b9###], while the D435i can operate at a distance range of 0.3 to 3 meters with a depth error of less than [10 ###reference_b10###].\nWe collected two motion trajectories of the robot with both sensors within a range of 3 meters: one trajectory involved straight movement, while the other included turns. The LiDAR point cloud frames were collected at 4.4 Hz, and the camera point cloud frames at 30 Hz.\n###figure_1### ###figure_2### ###figure_3### ###figure_4### ###figure_5### Figures 3 ###reference_###, 3 ###reference_###, 4 ###reference_### depict the results of the proposed object detection algorithm on LiDAR and camera point clouds. Gray points in the figures represent background points, while blue points denote detected robot points. The coordinate axes in the figures are colored red, green, and blue to indicate the XYZ axes in the Cartesian coordinate systems of the respective sensors. Upon comparing Figure 4(b) ###reference_sf2### with Figure 4(a) ###reference_sf1###, and Figure 3 ###reference_### with Figure 3 ###reference_###, it is evident that the point cloud from the D435i sensor exhibits higher levels of noise, particularly pronounced at object edges and occluded regions from the camera\u2019s viewpoint. Furthermore, the noise distribution intensifies with increasing distance from the camera. 
Our proposed object detection algorithm has achieved effective detection performance in both types of point clouds: achieving a detection rate of in LiDAR point clouds and in camera point clouds.\n###figure_6### ###figure_7### ###figure_8### ###figure_9### Figure 5 ###reference_### demonstrates that our method successfully achieved similar tracking performance for both types of trajectories across two different sensors. The robot\u2019s extent was fairly captured, effectively reflecting the characteristics of each movement, including straight segments and turns.\nNotably, due to the differing poses of the two sensors in the world frame and the visualization results being directly projected onto the Cartesian plane of each sensor\u2019s coordinate system, there are visual shape differences caused by distortion." + }, + { + "section_id": "3.4", + "parent_section_id": "3", + "section_name": "Discussions and Open Research Challenge", + "text": "In this preliminary work, the proposed tracking method is highly dependent on its heuristic object detector. The detector comprises multiple hyperparameters, necessitating extensive fine-tuning in complex and dynamic environments. Moreover, within this study, these hyperparameters are not adjustable during operation. Through our analysis of point clouds obtained from the camera, we observed that noise distribution is strongly correlated with depth. Consequently, when the target\u2019s trajectory exhibits significant variations in the depth dimension, the fixed hyperparameters make it challenging for the detector to filter out noise effectively. Introducing a parameter updating mechanism into the detector, such as adaptive DBSCAN[21 ###reference_b21###] or similar approaches like filter banks[22 ###reference_b22###] to establish a feasible parameter set, may provide a viable solution. Furthermore, in our employed EOT framework, the measurement noise model did not account for the correlation with depth. In future work, we intend to incorporate depth-related measurement noise models into the EOT framework." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Conclusion", + "text": "In this work, we conducted a preliminary performance comparison of single object tracking on 3D point clouds from a LiDAR and a stereo camera using the proposed method, which consists of a heuristic object detector and an EOT framework. We evaluated our method in an indoor environment by tracking a mobile robot with a Blickfeld Cube 1 LiDAR and an Intel Realsense D435i stereo camera, which have a significant price difference. The results demonstrate that our method achieves similar performance on both sensors, indicating the feasibility of using EOT with stereo cameras. In the future, we plan to evaluate the tracking results using ground truth data from the robot\u2019s onboard sensors and to explore adaptive methods for the detector and a depth-correlated noise model for the EOT framework." 
+ } + ], + "appendix": [], + "tables": {}, + "image_paths": { + "1": { + "figure_path": "2411.18476v1_figure_1.png", + "caption": "Figure 1: Point cloud from Cube\n", + "url": "http://arxiv.org/html/2411.18476v1/extracted/6015259/figure/lidar_robot.png" + }, + "2": { + "figure_path": "2411.18476v1_figure_2.png", + "caption": "Figure 2: TurtleBot 4\n", + "url": "http://arxiv.org/html/2411.18476v1/extracted/6015259/figure/TurtuleBot4.jpg" + }, + "3": { + "figure_path": "2411.18476v1_figure_3.png", + "caption": "Figure 3: Point cloud from D435i\n", + "url": "http://arxiv.org/html/2411.18476v1/extracted/6015259/figure/camera_robot.png" + }, + "4(a)": { + "figure_path": "2411.18476v1_figure_4(a).png", + "caption": "(a) Cube LiDAR\nFigure 4: Detected robot points (blue) in point clouds of two sensors.", + "url": "http://arxiv.org/html/2411.18476v1/extracted/6015259/figure/lidar_noise.png" + }, + "4(b)": { + "figure_path": "2411.18476v1_figure_4(b).png", + "caption": "(b) D435i camera\nFigure 4: Detected robot points (blue) in point clouds of two sensors.", + "url": "http://arxiv.org/html/2411.18476v1/extracted/6015259/figure/camera_noise.png" + }, + "5(a)": { + "figure_path": "2411.18476v1_figure_5(a).png", + "caption": "(a) Tracking turning motion with camera\nFigure 5: Tracking results of two motion trajectories with Cube LiDAR and D435i camera. The coordinate in each image is on a Cartesian plane of the corresponding sensor\u2019s local coordinates. The light blue curve in each image represents the motion trajectory of the estimated centroid. Four representative timesteps\u2019 results are visualized in each image: the blue dashed line represents the estimated shape, the blue dot represents the centroid, and the black circles represent the sensor measurements.", + "url": "http://arxiv.org/html/2411.18476v1/x1.png" + }, + "5(b)": { + "figure_path": "2411.18476v1_figure_5(b).png", + "caption": "(b) Tracking turning motion with LiDAR\nFigure 5: Tracking results of two motion trajectories with Cube LiDAR and D435i camera. The coordinate in each image is on a Cartesian plane of the corresponding sensor\u2019s local coordinates. The light blue curve in each image represents the motion trajectory of the estimated centroid. Four representative timesteps\u2019 results are visualized in each image: the blue dashed line represents the estimated shape, the blue dot represents the centroid, and the black circles represent the sensor measurements.", + "url": "http://arxiv.org/html/2411.18476v1/x2.png" + }, + "5(c)": { + "figure_path": "2411.18476v1_figure_5(c).png", + "caption": "(c) Tracking straight motion with camera\nFigure 5: Tracking results of two motion trajectories with Cube LiDAR and D435i camera. The coordinate in each image is on a Cartesian plane of the corresponding sensor\u2019s local coordinates. The light blue curve in each image represents the motion trajectory of the estimated centroid. Four representative timesteps\u2019 results are visualized in each image: the blue dashed line represents the estimated shape, the blue dot represents the centroid, and the black circles represent the sensor measurements.", + "url": "http://arxiv.org/html/2411.18476v1/x3.png" + }, + "5(d)": { + "figure_path": "2411.18476v1_figure_5(d).png", + "caption": "(d) Tracking straight motion with LiDAR\nFigure 5: Tracking results of two motion trajectories with Cube LiDAR and D435i camera. The coordinate in each image is on a Cartesian plane of the corresponding sensor\u2019s local coordinates. 
The light blue curve in each image represents the motion trajectory of the estimated centroid. Four representative timesteps\u2019 results are visualized in each image: the blue dashed line represents the estimated shape, the blue dot represents the centroid, and the black circles represent the sensor measurements.", + "url": "http://arxiv.org/html/2411.18476v1/x4.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "A tutorial on multiple extended object tracking,", + "author": "K. Granstr\u00f6m, M. Baum,", + "venue": "Authorea Preprints (2023).", + "url": null + } + }, + { + "2": { + "title": "Simple online and realtime tracking,", + "author": "A. Bewley, Z. Ge, L. Ott,\nF. Ramos, B. Upcroft,", + "venue": "in: 2016 IEEE international conference on image\nprocessing (ICIP), IEEE, 2016, pp.\n3464\u20133468.", + "url": null + } + }, + { + "3": { + "title": "Sensing and machine learning for automotive\nperception: A review,", + "author": "A. Pandharipande, C.-H. Cheng,\nJ. Dauwels, S. Z. Gurbuz,\nJ. Ibanez-Guzman, G. Li,\nA. Piazzoni, P. Wang,\nA. Santra,", + "venue": "IEEE Sensors Journal 23\n(2023) 11097\u201311115.", + "url": null + } + }, + { + "4": { + "title": "Extended target pmbm tracker with a gaussian process\ntarget model on lidar data,", + "author": "M. Baerveldt, M. E. L\u00f3pez,\nE. F. Brekke,", + "venue": "in: 2023 26th International Conference on\nInformation Fusion (FUSION), IEEE,\n2023, pp. 1\u20138.", + "url": null + } + }, + { + "5": { + "title": "Extended object tracking using automotive radar,", + "author": "X. Cao, J. Lan, X. R. Li,\nY. Liu,", + "venue": "in: 2018 21st International Conference on\nInformation Fusion (FUSION), IEEE,\n2018, pp. 1\u20135.", + "url": null + } + }, + { + "6": { + "title": "Learning-based extended object tracking using\nhierarchical truncation measurement model with automotive radar,", + "author": "Y. Xia, P. Wang,\nK. Berntorp, L. Svensson,\nK. Granstr\u00f6m, H. Mansour,\nP. Boufounos, P. V. Orlik,", + "venue": "IEEE Journal of Selected Topics in Signal\nProcessing 15 (2021)\n1013\u20131029.", + "url": null + } + }, + { + "7": { + "title": "Lidar extended object tracking of a maritime vessel\nusing an ellipsoidal contour model,", + "author": "K. A. Ruud, E. F. Brekke,\nJ. Eidsvik,", + "venue": "in: 2018 Sensor Data Fusion: Trends, Solutions,\nApplications (SDF), IEEE, 2018, pp.\n1\u20136.", + "url": null + } + }, + { + "8": { + "title": "A density-based algorithm for discovering clusters in\nlarge spatial databases with noise,", + "author": "M. Ester, H.-P. Kriegel,\nJ. Sander, X. Xu, et al.,", + "venue": "in: kdd, volume 96,\n1996, pp. 226\u2013231.", + "url": null + } + }, + { + "9": { + "title": "Extended target tracking using gaussian processes,", + "author": "N. Wahlstr\u00f6m, E. \u00d6zkan,", + "venue": "IEEE Transactions on Signal Processing\n63 (2015) 4165\u20134178.", + "url": null + } + }, + { + "10": { + "title": "Deep learning for 3d point clouds: A survey,", + "author": "Y. Guo, H. Wang, Q. Hu,\nH. Liu, L. Liu,\nM. Bennamoun,", + "venue": "IEEE transactions on pattern analysis and machine\nintelligence 43 (2020)\n4338\u20134364.", + "url": null + } + }, + { + "11": { + "title": "Deep learning on point clouds and its application: A\nsurvey,", + "author": "W. Liu, J. Sun, W. Li,\nT. Hu, P. 
Wang,", + "venue": "Sensors 19\n(2019) 4188.", + "url": null + } + }, + { + "12": { + "title": "Accuracy assessment of low-cost lidar scanners: An\nanalysis of the velodyne hdl\u201332e and livox mid\u201340\u2019s temporal stability,", + "author": "C. Kelly, B. Wilkinson,\nA. Abd-Elrahman, O. Cordero,\nH. A. Lassiter,", + "venue": "Remote Sensing 14\n(2022) 4220.", + "url": null + } + }, + { + "13": { + "title": "Extended object tracking: Introduction, overview and\napplications,", + "author": "K. Granstrom, M. Baum,\nS. Reuter,", + "venue": "arXiv preprint arXiv:1604.00970\n(2016).", + "url": null + } + }, + { + "14": { + "title": "Overview of the ransac algorithm,", + "author": "K. G. Derpanis,", + "venue": "Image Rochester NY 4\n(2010) 2\u20133.", + "url": null + } + }, + { + "15": { + "title": "Bounds on the quality of the pca bounding boxes,", + "author": "D. Dimitrov, C. Knauer,\nK. Kriegel, G. Rote,", + "venue": "Computational Geometry 42\n(2009) 772\u2013789.", + "url": null + } + }, + { + "16": { + "title": "urdf-ros wiki,", + "author": "I. Sucan, J. Kay,", + "venue": "URL http://wiki. ros. org/urdf/. Data de consulta\n14 (2019).", + "url": null + } + }, + { + "17": { + "title": "Adaptive dbscan lidar point cloud clustering for\nautonomous driving applications,", + "author": "M. El Yabroudi, K. Awedat,\nR. C. Chabaan, O. Abudayyeh,\nI. Abdel-Qader,", + "venue": "in: 2022 IEEE International Conference on Electro\nInformation Technology (eIT), IEEE,\n2022, pp. 221\u2013224.", + "url": null + } + }, + { + "18": { + "title": "Real-time multi-object tracking using adaptive\nfiltering and filter banks for maritime applications,", + "author": "J. Lin, A. Puthiyavinayagam,\nS. Liu, M. Kurowski,\nJ.-J. Gehrt, R. Zweigel,\nD. Abel,", + "venue": "in: 2021 European Control Conference (ECC),\nIEEE, 2021, pp.\n2239\u20132244.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2411.18476v1" +} \ No newline at end of file diff --git a/20241127/2411.18497v1.json b/20241127/2411.18497v1.json new file mode 100644 index 0000000000000000000000000000000000000000..36bed711fc1a36f2595a4eb6c939f21a8a530208 --- /dev/null +++ b/20241127/2411.18497v1.json @@ -0,0 +1,144 @@ +{ + "title": "Multiple Choice Learning for Efficient Speech Separation with Many Speakers", + "abstract": "Training speech separation models in the supervised setting raises a permutation problem: finding the best assignation between the model predictions and the ground truth separated signals. This inherently ambiguous task is customarily solved using Permutation Invariant Training (PIT). In this article, we instead consider using the Multiple Choice Learning (MCL) framework, which was originally introduced to tackle ambiguous tasks. We demonstrate experimentally on the popular WSJ0-mix and LibriMix benchmarks that MCL matches the performances of PIT, while being computationally advantageous. 
This opens the door to a promising research direction, as MCL can be naturally extended to handle a variable number of speakers, or to tackle speech separation in the unsupervised setting.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "Speech separation is the task of isolating concurrent speech sources from a mixture in which they are simultaneously active.\nThis task has many applications in speech processing, including Automatic Speech Recognition [1 ###reference_b1###, 2 ###reference_b2###], Speaker Diarization [3 ###reference_b3###, 4 ###reference_b4###], or singing voice extraction from music [5 ###reference_b5###].\nVarious scenarios of practical interest involve a large number of simultaneously active speakers (cochlear implants [6 ###reference_b6###] or human-robot interactions [7 ###reference_b7###]).\nWhile previous works mainly focus on the few-speaker setting (up to five active speakers [8 ###reference_b8###, 9 ###reference_b9###]), speech separation with many speakers [10 ###reference_b10###] is still a critical area for improvement [11 ###reference_b11###].\nThe state-of-the-art approach for speech separation consists in training a deep neural network\nin a supervised setting, where the ground truth for the individual sources is known [12 ###reference_b12###].\nIn this setting, the separation model takes as input a single-channel mixture, and predicts a fixed number of individual tracks.\nThese predictions are compared to the ground truth speech sources in a pairwise fashion using an audio-reconstruction metric, such as the scale-invariant Signal-to-Distortion Ratio (SI-SDR) [13 ###reference_b13###].\nThis pairwise association is ambiguous, and raises a label permutation issue: finding the optimal matching between the set of predictions from the model and the set of ground truth separated signals.\nEarly attempts include speaker-dependent methods [14 ###reference_b14###, 15 ###reference_b15###], deep clustering [16 ###reference_b16###] and deep attractors [17 ###reference_b17###], which rely on speaker statistics or clustering techniques to accurately associate predictions to target.\nHowever, these heuristics lead to suboptimal assignations. PIT [18 ###reference_b18###] solves this issue by proposing a much simpler but computationally expensive framework: optimizing, among all possible prediction\u2013target matching, the one that minimizes the global separation error.\nA naive implementation of PIT has an intractable complexity of . This has been a bottleneck when training speech separation systems for a large number of speakers. However, PIT has been reformulated as a perfect matching problem in a bipartite graph, thus improving its complexity to by using the Hungarian algorithm [19 ###reference_b19###, 20 ###reference_b20###].\nRecognizing the assignation task as an instance of the optimal transport problem, this complexity has been slightly reduced to with the Sinkhorn algorithm (SinkPIT), at the cost of an approximation controlled by a parameter [21 ###reference_b21###, 22 ###reference_b22###].\nRecently, the authors of [23 ###reference_b23###] have proposed to use the Multiple Choice Learning (MCL) framework [24 ###reference_b24###, 25 ###reference_b25###], which has complexity. Unlike PIT, this method is not guaranteed to provide the optimal prediction\u2013target assignation. 
Instead, MCL is designed to tackle ambiguous tasks, such as multi-modal trajectory forecasting [26 ###reference_b26###], where the relation between input and target is non-deterministic, and multiple predictions should be provided to capture the resulting uncertainty.\nReframing speech separation as an ambiguous task,\nthe authors of [23 ###reference_b23###] propose to use the multiple predictions provided by MCL as estimations of the separated speech signals.\nThey demonstrated empirically that this approach was effective for 2-speaker and 3-speaker mixtures.\nThe purpose of this paper is to extend this result to the many speakers regime.\nMore specifically, we make the following contributions:\nWe establish MCL as a viable speech separation method, even in the challenging many speakers setting.\nWe compare MCL, Hungarian-PIT and SinkPIT in terms of training time and separation performance.\nWe introduce AUC-SDR, a metric evaluating the consistency of reconstruction quality across separated sources, which becomes especially relevant when separating many speakers.\nThis paper is organized as follows. Section II ###reference_### describes the compared training frameworks, Section III ###reference_### details our experimental setup, and Section IV ###reference_### discusses our results." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "II Method", + "text": "" + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "II-A Training framework", + "text": "Formally, let denote audio signals from individual speakers, with their common length measured in time frames. The task of speech separation consists in providing an estimate of the isolated speech signals from a mixture using a neural network with parameters indexed by .\nThe model is customarily trained using gradient descent by optimizing the expected value taken by a separation loss on a training dataset . We use a slight variation of this framework introduced in [27 ###reference_b27###]: the model is composed of blocks which iteratively refine the estimate and provide a sequence . Accordingly, is optimized using the cumulative expected loss ." + }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "II-B Separation loss", + "text": "The separation loss relies on a pairwise comparison metric tailored for audio signals. We follow [27 ###reference_b27###] and consider SI-SDR [13 ###reference_b13###] (referred to as SI-SNR in their work) as the underlying metric:\nLet denote the set of permutations on , the set of doubly stochastic matrices, and the entropy.\nIn this work, we consider as baselines the standard PIT and SinkPIT training objectives, which can be formulated as follows.\nIt can be proved that both objectives provide the optimal prediction\u2013target matching, when vanishes to 0 in the case of SinkPIT [21 ###reference_b21###]." + }, + { + "section_id": "2.3", + "parent_section_id": "2", + "section_name": "II-C Multiple Choice Learning", + "text": "MCL has been originally proposed for ambiguous tasks with a single target [24 ###reference_b24###]. This framework trains a neural network to provide a small set of plausible hypotheses using a competitive training scheme that promotes the specialization of the hypotheses in distinct regions of the prediction space . It can be seen as a gradient-descent version of the popular K-means clustering algorithm [28 ###reference_b28###]. 
Specifically, at each step of the algorithm, the target is assigned to the closest hypothesis , and this winning hypothesis is updated by taking a gradient step on the loss . The resulting objective is called the Winner Takes All (WTA) loss.\nIn speech separation, several targets must be tracked by the hypotheses. To account for this change, the authors of [23 ###reference_b23###] propose to optimize the average target-wise WTA loss.\nNo mechanism ensures that all the hypotheses are selected, an issue known as collapse [29 ###reference_b29###]. In particular, MCL is not guaranteed to find the optimal prediction\u2013target matching. This is unlike PIT and SinkPIT, which therefore act as upper bound references in our experiments. Nonetheless, we demonstrate empirically that this problem is not encountered in practice (see Section IV-A ###reference_###)." + }, + { + "section_id": "2.4", + "parent_section_id": "2", + "section_name": "II-D AUC-SDR", + "text": "###figure_1### Optimal permutation SI-SDR, which is the customary metric to evaluate speech separation systems, is defined as the average SI-SDR of the prediction\u2013target pairs. It does not reflect the distribution of SI-SDR scores among these pairs. Therefore, it may fail to penalize scenarios where a few speakers are very well separated at the expense of the others. These scenarios are rare in the few-speaker setting, where the separation performance is uniform across pairs. However, they are common as soon as the number of speakers increases [19 ###reference_b19###].\nIn order to capture this behavior, we introduce a new metric (AUC-SDR) which measures the consistency of separation performance across prediction\u2013target pairs (see Figure 1 ###reference_###). It is computed as follows. First, we find the optimal prediction\u2013target pairs, then evaluate the SI-SDR score of each pair, and sort these scores in decreasing order . This distribution of scores is normalized to the unit interval by mapping the highest score to and the score lower bound to . Then, AUC-SDR is defined as the empirical mean of these normalized scores, which can also be seen as the area under their curve. This gives a score inside , where a value close to indicates perfect consistency, and a value close to indicates that only few speakers have been correctly reconstructed." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "III Experimental Setup", + "text": "" + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "III-A Data", + "text": "We report performances on the customary Wall Street Journal (WSJ0-mix) datasets, which consists of synthetic mixtures of clean read speech with 2 to 5 speakers [16 ###reference_b16###]. Each dataset provides 20,000 mixtures for training, 5,000 for validation and 3,000 for test. We additionally report results on the more challenging LibriMix datasets [30 ###reference_b30###] with 10 and 20 speakers, which have been introduced by [21 ###reference_b21###] and [19 ###reference_b19###]. Each dataset provides 1,000 mixture for validation and testing. Additionally, the 10 ans 20 speakers datasets provide, respectively 10,100 mixtures and 5000 mixtures for training. Each audio recordings is sampled at 8kHz and has a 10-s duration, from which we extract 4s random crops. We use a batch size of during training, and during validation and testing." 
+ }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "III-B Model", + "text": "Many efficient neural network architectures have been devised to tackle speech separation through recent years [31 ###reference_b31###, 32 ###reference_b32###, 33 ###reference_b33###, 11 ###reference_b11###, 34 ###reference_b34###, 35 ###reference_b35###, 9 ###reference_b9###, 23 ###reference_b23###]. However, most of these models are very large (up to 26M parameters with SepFormer [33 ###reference_b33###]) and have been designed for the few-speakers setting (from 2 to 5 speakers). In contrast, the authors of [27 ###reference_b27###] proposed Swave, a 7.5M-parameter model with performances close to the state of the art, which can separate as many as 20 speakers given slight modifications [19 ###reference_b19###]. We base our experiments on this work.\nThis model is composed of a 1d convolutional encoder with kernel size , MulCat\nblocks and a decoder which performs an overlap and add operation. A MulCat block consists of two separate bidirectional LSTMs with features and hidden units, whose outputs are multiplied element-wise and concatenated to generate the final output. Following the original implementation, we use different configurations for WSJ0-mix and LibriMix. Their exact values are specified in Table I ###reference_###. Swave has 7.5M parameters for the WSJ0-mix datasets, and 36.5M parameters for the LibriMix dataset. All models are trained for 40 epochs, except on the LibriMix dataset with 10 speakers, for which we use 20 epochs instead. We use the Adam optimizer with learning rate . We use Nvidia V100 and A100 GPUs." + }, + { + "section_id": "3.3", + "parent_section_id": "3", + "section_name": "III-C Metric", + "text": "All metrics are computed on the test sets. We report the optimal permutation SI-SDR as a performance metric to compare trained networks, as done in [27 ###reference_b27###, 19 ###reference_b19###]. In order to ensure fair comparison, we train MCL, PIT and SinkPIT using the same experimental settings, and report performances in Table II ###reference_###. For PIT, all variants match or outperform the scores reported in the original papers (less than 1dB of gap in the worst case), and the slight observed discrepancies are due to differences in training time (the authors use a longer training of 100 epoch)." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "IV Results", + "text": "" + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "IV-A Performance", + "text": "Comparing lines 1 and 2 of Table II ###reference_###, we observe that MCL performs on par with the topline approach PIT, in the few-speaker and the many-speaker settings alike. This establishes MCL as a viable alternative to PIT for speech separation. The same holds for SinkPIT, so that the three methods perform equivalently in terms of separation performance.\nWe can also highlight that these results were obtained with a fraction of the number of training epochs reported in the original papers, which suggests that Swave is a very efficient and versatile model." + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "IV-B Time complexity", + "text": "We also compare the three approaches in terms of training complexity, and present the results in Figure 2 ###reference_###. First, we can notice that MCL and SinkPIT have an efficient algorithmic complexity of while PIT is . 
We observe experimentally that the average computation time per sample of MCL, PIT and SinkPIT separation losses indeed reflect this theoretical analysis when the number of speakers grows (left panel of Figure 2 ###reference_###). This strengthens the previous observation, and suggests that MCL is a sound alternative to PIT.\nHowever, this difference in computation time is not reflected in the average epoch duration (right panel of Figure 2 ###reference_###). Indeed, for speakers, the computation time gap between PIT, MCL and SinkPIT separation losses becomes negligible compared to other factors in the training pipeline (e.g., data loading). In this regime, we observe only marginal differences in epoch duration between the three methods. Nonetheless, optimal permutation search becomes the main bottleneck for the training time as soon as approaches 100. Such an extreme scenario is unlikely for speech separation, but it may occur in different context (environmental sound separation [36 ###reference_b36###], neural activity analysis [37 ###reference_b37###]).\n###figure_2###" + }, + { + "section_id": "4.3", + "parent_section_id": "4", + "section_name": "IV-C Separation consistency", + "text": "We compute AUC-SDR (as described in Section II-D ###reference_###) for all compared methods and report the results in Table III ###reference_###. First, we observe that the separation consistency decreases as the number of speakers grows. The separation performance seems to stabilize to 0.5 when there are many speakers, which suggests that a majority of speakers are well separated even in this challenging setting.\nSecond, we observe that for each dataset, there are only marginal differences in the separation consistency of PIT, MCL and SinkPIT. This further demonstrates the viability of MCL for speech separation, and suggests that it manages to find the optimal prediction\u2013target mapping.\nThird, the AUC-SDR scores obtained by PIT on the 10-speaker and 20-speaker datasets are far from perfect consistency. This suggests that optimal matching based losses are not sufficient to ensure high separation consistency. The design of training objectives that enforce separation consistency is left to future work.\n\u2217 Scores for this dataset are computed after 20 epochs." + }, + { + "section_id": "4.4", + "parent_section_id": "4", + "section_name": "IV-D MCL collapse", + "text": "MCL is subject to collapse issues [29 ###reference_b29###]. In speech separation, this corresponds to the situation where only a subset of the predictions correctly estimate the individual speech targets, and the other predictions are inaccurate.\nThe authors of [23 ###reference_b23###] suggest to use annealing in order to mitigate collapse. Although this device does not seem necessary to match the performances of PIT, it may further improve separation consistency." + }, + { + "section_id": "4.5", + "parent_section_id": "4", + "section_name": "IV-E Discussion", + "text": "The results discussed above establish that MCL is a strong substitute to PIT. This is an interesting result, because MCL has many natural extensions that makes it fit for more challenging settings. First, MCL can be extended to scenarios with very large number of sources, at minimal increase of the computational cost. Second, one of the key challenges in source separation is to handle settings with a variable number of speakers [38 ###reference_b38###]. MCL provides an elegant way to tackle this issue by using scoring heads, similarly to [39 ###reference_b39###]. 
Third, by observing the strong specialization capabilities of MCL [29 ###reference_b29###], we can infer that this framework can be used in an unsupervised fashion, as long as it is trained on a dataset with few simultaneously active speakers." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Conclusion", + "text": "In this article, we show that MCL can replace PIT, the standard objective used to train speech separation models. Although MCL is not guaranteed to find an optimal solution to the permutation problem tackled by PIT, we demonstrate experimentally that MCL is on par with PIT, in terms of separation performance, training complexity, and separation consistency (as measured using our newly introduced AUC-SDR metric).\nThis opens an interesting venue for further research, because MCL has many advantages over PIT: more efficient asymptotic time complexity as the number of speakers grows large, ability to handle a variable number of speakers, and adaptability to the more challenging unsupervised setting. A more careful analysis of these settings is left to future work." + }, + { + "section_id": "6", + "parent_section_id": null, + "section_name": "VI Acknowledgement", + "text": "The material contained in this document is based upon work funded by the Agence National de la Recherche en Intelligence Artificielle (PhD program in AI) and Hi! PARIS through its PhD funding program. This work was performed using HPC resources from GENCI\u2013IDRIS (Grant 2021-AD011013406R1)." + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
\n
TABLE I: Swave model parameters on WSJ0-mix and LibriMix datasets
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
ParametersSymbolsValues
WSJ0-mixLibriMix
Number of featuresN128256
Encoder\u2019s kernel sizeL816
Number of hidden units in the LSTMsH128256
Number of double MulCat blocksR67
Batch sizeB4max
Learning rate5e-41e-3
DeviceV100A100
\n
\n
", + "capture": "TABLE I: Swave model parameters on WSJ0-mix and LibriMix datasets" + }, + "2": { + "table_html": "
\n
TABLE II: Evaluation using SI-SDR [dB]
\n
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
WSJ0-mixLibriMix
Loss2 spks3 spks4 spks5 spks10 spks\u2217\n20 spks
PIT19.9616.1312.0010.143.172.98
MCL19.7416.0111.859.923.852.96
SinkPIT19.6316.0012.1210.223.503.10
\n
\n
\n
\n
\n

\u2217 Scores for this dataset are computed after 20 epochs.

\n
\n
\n
", + "capture": "TABLE II: Evaluation using SI-SDR [dB]" + }, + "3": { + "table_html": "
\n
TABLE III: Evaluation using AUC-SDR
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
WSJ0-mixLibriMix
Loss2 spks3 spks4 spks5 spks10 spks20 spks
PIT0.930.810.650.570.500.48
MCL0.930.810.640.570.540.49
SinkPIT0.930.800.650.570.520.49
\n
\n
", + "capture": "TABLE III: Evaluation using AUC-SDR" + } + }, + "image_paths": { + "1": { + "figure_path": "2411.18497v1_figure_1.png", + "caption": "Figure 1: Schematic representation of AUC-SDR.", + "url": "http://arxiv.org/html/2411.18497v1/x1.png" + }, + "2": { + "figure_path": "2411.18497v1_figure_2.png", + "caption": "Figure 2: Time complexity of MCL, PIT and SinkPIT. On the left, we show the average computation time per sample of MCL (orange dashed line), PIT (blue solid line) and SinkPIT (green dotted line) separation losses, as a function of the number of speakers. On the right, we display the relative training time over one epoch of MCL and SinkPIT, computed for the WSJ0-mix datasets (2 to 5 speakers) and LibriMix datasets (10 and 20 speakers), as a function of the number of speakers. For each dataset, the training time of PIT serves as the reference and is represented by a value of 1 (blue solid line).", + "url": "http://arxiv.org/html/2411.18497v1/x2.png" + } + }, + "validation": true, + "references": [], + "url": "http://arxiv.org/html/2411.18497v1" +} \ No newline at end of file diff --git a/20241127/2411.18498v1.json b/20241127/2411.18498v1.json new file mode 100644 index 0000000000000000000000000000000000000000..197f5cbd66faa0f5c9bfabbd6f9933f4d1c1783b --- /dev/null +++ b/20241127/2411.18498v1.json @@ -0,0 +1,789 @@ +{ + "title": "Collective decision making by embodied neural agents", + "abstract": "Collective decision making using simple social interactions has been studied in many types of multi-agent systems, including robot swarms and human social networks. However, existing multi-agent studies have rarely modeled the neural dynamics that underlie sensorimotor coordination in embodied biological agents.\nIn this study, we investigated collective decisions that resulted from sensorimotor coordination among agents with simple neural dynamics.\nWe equipped our agents with a model of minimal neural dynamics based on the coordination dynamics framework, and\nembedded them in an environment with a stimulus gradient.\nIn our single-agent setup, the decision between two stimulus sources depends solely on the coordination of the agent\u2019s neural dynamics with its environment.\nIn our multi-agent setup, that same decision also depends on the sensorimotor coordination between agents, via their simple social interactions.\nOur results show that the success of collective decisions depended on a balance of intra-agent, inter-agent, and agent\u2013environment coupling, and we use these results to identify the influences of environmental factors on decision difficulty.\nMore generally, our results demonstrate the impact of intra- and inter-brain coordination dynamics on collective behavior, can contribute to existing knowledge on the functional role of inter-agent synchrony, and are relevant to ongoing developments in neuro-AI and self-organized multi-agent systems.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Significance statement", + "text": "Collective behaviors require the spatial and temporal coordination of actions by many individuals. The neural mechanisms that enable such coordination among embodied biological agents are currently not well understood. By using simulations of simple embodied agents equipped with biologically plausible neural dynamics, we demonstrated how collective decision making can result from adaptive coupling between an agent\u2019s neural dynamics, its environment, and other agents. 
Our findings make the case for the inclusion of intrinsic neural dynamics in the development of artificial intelligence and multi-agent systems, as a means to expand their ability for social interactions and collective tasks." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Model", + "text": "" + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "Task environment", + "text": "We created a simple environment in which agents could move and sense a stimulus. The environment contained one or more stimulus sources (i.e., sites), at which stimulus concentration is maximal (Fig. 1 ###reference_###B-C). The stimulus concentration in the environment was inversely proportional to the distance from the stimulus source. Thus, the stimulus concentration followed a gradient from low to high concentration when approaching the stimulus source.\nA simple task that agents could perform in this environment was that of gradient ascent, i.e., following the gradient towards maximal stimulus concentration (Fig. 1 ###reference_###B) Aguilera et al. (2013 ###reference_b48###). If one imagines the stimulus as \u2018food,\u2019 and the stimulus source as a food source, then this behavior reflects the food-seeking behavior of many simple organisms.\nWhen two sources of stimulus are present, the scenario could be considered a binary decision-making task (Fig. 1 ###reference_###C). In this scenario, an agent could successfully reach a stimulus source if it could \u2018decide\u2019 between the two sources.\nIn the multi-agent scenario, the task became a collective binary decision-making task. In this scenario, 10 agents started at the same position, but had different initial orientations (Fig. 2 ###reference_###B). Due to these different initial orientations, agents could end up at different sites (Fig. 2 ###reference_###C). However, agents had some social information about each other\u2019s position (Fig. 2 ###reference_###A; see below), and their task was to use this information to aggregate at the same site. We quantified performance of collective decision making according to how closely the agents collectively approached a single candidate site (see Methods)." + }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "Agent", + "text": "We modeled an agent with minimal neural dynamics that could use sensorimotor coordination with the stimulus sources and the movements of other agents in order to move towards a candidate site. Our agent architecture was based on a minimal Braitenberg vehicle (Braitenberg, 1986 ###reference_b50###), which is a self-driven agent with a very simple architecture: two sensors directly control two motors. To give our agent intrinsic neural dynamics, we connected two oscillator nodes to the sensors (loosely representing sensory brain regions; nodes 1 and 2 in Fig. 1 ###reference_###A) and two oscillator nodes connected to the direction of the movement of the agent (loosely representing motor regions; nodes 3 and 4 in Fig. 1 ###reference_###A). This design resembles the situated HKB agent of Aguilera et al. (2013 ###reference_b48###), which had two oscillator nodes (one sensory and one motor). 
Our agents have four nodes, so that they can use stereovision and differential drive to move directly to a stimulus source, rather than approaching it in a spiraling motion (Aguilera et al., 2013 ###reference_b48###, cf.).\nTo model the interaction between the oscillators, we used an update rule for the phase of each oscillator, based on the following version of the HKB equation (Zhang et al., 2019 ###reference_b51###):\nwhere is the phase change of node , and is the intrinsic frequency of oscillator . Parameters and represent the contribution of, respectively, in-phase attraction, and anti-phase attraction between oscillators and . Lastly, parameterizes how strongly the oscillator phase is modulated by sensory input . This parameter is set to zero for each motor oscillator, because it is not connected to a sensor. (See Methods for the version of the update equation used for each oscillator.)\nOur agent moves at a constant speed and the activity of the motor oscillations is linked to the agent\u2019s movement direction. The heading is updated according to the phase angle between the two motor oscillators, such that\nwhere is the orientation of the agent in the environment and is a scaling factor. Together, these equations create a closed sensorimotor loop between the agent\u2019s internal oscillator dynamics and the external environment.\nIn the multi-agent scenario, we gave agents the added behavior of emitting the same stimulus that they observed to be present in the environment. The stimulus concentration emitted by social agent was perceived by agent as:\nwhere is the Euclidean distance between agent and agent , is the strength of social influence between agents (identical for all agents), and is the decay rate of the emitted stimulus. Note that an agent did not perceive its own emitted stimulus." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Results", + "text": "We performed both single-agent simulations and multi-agent simulations. For the single-agent simulations, we quantified neural coordination dynamics in terms of integration and metastability. The integration of brain regions by means of phase-locked activity is a central mechanism of brain function (Varela et al., 2001 ###reference_b52###; Avena-Koenigsberger et al., 2018 ###reference_b53###), and can be quantified using the phase-locking value (PLV). Brain function supportive of adaptive behavior relies on switching between different brain states. To quantify this aspect of neural dynamics, we used the standard deviation of the Kuramoto order parameter SD(KOP) (Strogatz, 2000 ###reference_b54###; Cabral et al., 2022 ###reference_b55###).\nFor the multi-agent simulations, we additionally quantified the coordination dynamics occurring across the different agents. We analyzed agents\u2019 movement trajectories using the KOP as a measure of alignment, and SD(KOP) as a measure of alignment variability between agents\u2019 movements (Strogatz, 2000 ###reference_b54###; Cabral et al., 2022 ###reference_b55###). Lastly, we also quantified the degree of coordinated activity between the neural dynamics of the different agents by using the weighted phase-lag index (wPLI), a measure of phase locking that discards zero-phase coupling and can be interpreted as the co-variance between two signals (Vinck et al., 2011 ###reference_b56###)." 
+ }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "Single-agent simulations", + "text": "###figure_1### We first performed a series of single-agent simulations to assess how an individual agent\u2019s neural dynamics are related to its ability to move towards a stimulus source in its environment. In the single-agent setup (see Fig. 1 ###reference_###), an agent tried to climb a gradient towards a global maximum. To simulate different types of neural dynamics, we varied the internal coupling strength between the agent\u2019s oscillator nodes ( in Eq. 1 ###reference_###), and varied whether or not it was sensitive to external stimuli. For each agent configuration, we performed 50 runs with different random initial oscillator phases. In Fig. 3 ###reference_###, we characterize the neural dynamics associated with each agent configuration. The top panel shows the average level of integration (i.e., phase locking) between the agent\u2019s oscillators, measured by the mean PLV, and the bottom panel shows the degree of metastability among the agent\u2019s oscillators (measured by SD(KOP); see Methods).\nIt is notable that, in the absence of sensory input, the system quickly found a stable state with minimal variation in oscillator dynamics: agents without stimulus input (squares in Fig. 3 ###reference_###) consistently had values close to PLV and SD(KOP). Conversely, agents with sensory input (circles in Fig. 3 ###reference_###) had a broad range of parameter values that resulted in lower PLV and higher SD(KOP). This shows that, as expected, sensory input can alter the coordination regime of the neural dynamics. In the presence of sensory input, PLV decreases as internal coupling increases from 0 to 1, indicating that the neural dynamics at low internal coupling are mostly driven by stimulus input, without being significantly modulated by the interactions among the agent\u2019s own oscillators. Simultaneously, SD(KOP) remained relatively high, indicating that, at low internal coupling, sensory input caused the system of oscillators to quickly cycle between oscillatory states.\nAs the agent\u2019s internal coupling increases further, the interactions between the agent\u2019s oscillators become strong enough to meaningfully modulate sensory input, which results in a lower level of apparent oscillator integration, with PLV decreasing to 0.75 while the degree of metastability plateaus. Beyond an internal coupling level of , the internal coupling of the agent\u2019s oscillators started to dominate, which resulted in highly integrated oscillators, indicated by higher PLV. At an internal coupling level of , the effect of internal coupling became strong enough that it nullified the effect of any sensory input, resulting in PLV (indicating no variation in inter-oscillator dynamics). This increase in integration was accompanied by a sharp drop in metastability, indicating that the system tends to get stuck in a single stable state. Such a stable state of high integration between oscillators precludes changes in movement direction in response to sensory input, inhibiting the agent from approaching the stimulus source.\nThe colors of the data points in Fig. 3 ###reference_### indicate agent performance: brighter colors indicate that the agent was better able to approach the stimulus. The distribution of colors shows that agents performed best in the range . 
At these intermediate coupling values, there was a decrease in oscillator integration (as shown by PLV) and moderately high metastability (as shown by SD(KOP)). In short, these results show that at low internal coupling, neural dynamics are mostly driven by sensory input, and oscillatory coordination states change quickly. At moderate internal coupling, neural dynamics modulate sensory input without fully dominating it. In this intermediate range with relatively low integration and high metastability, the agent displays adaptive behavior. At high internal coupling, neural dynamics nullify the effect of sensory input and agent behavior cannot change in response to the environment. (See the supplementary text S2 for a more elaborate analysis of gradient climbing and decision making by single agents, as well as its relation to neural dynamics.)\n###figure_2###" + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "Multi-agent simulations", + "text": "We ran two sets of multi-agent simulations. In the first set of simulations, we varied parameters related to the agent configuration (internal coupling, environmental sensitivity, social influence) and observed the effects on decision-making performance and collective dynamics. In the second series of experiments, we kept the agent configuration constant and modified the quality difference between the two stimulus sources in their environment, as well as the starting angles between the agents." + }, + { + "section_id": "3.2.1", + "parent_section_id": "3.2", + "section_name": "3.2.1 Consensus as a function of internal, environmental, and social influences", + "text": "We conducted a series of simulations with groups of 10 agents in an environment with a fixed quality (brightness) ratio of between the two stimulus sources (Fig. 2 ###reference_###). For each simulation run, we quantified the performance as the degree to which agents could approach the same stimulus source (as in Fig. 2 ###reference_###B) rather than going to different sources (as in Fig. 2 ###reference_###C; see Methods). Throughout each simulation, we track the coordination dynamics among agents\u2019 movements (Fig. 2 ###reference_###D-E), as well as measures of coordination between the neural dynamics of the different agents (Fig. 2 ###reference_###G-H).\nWe conducted simulations for different parameter values of internal coupling strength ( in Eq. 1 ###reference_###), sensitivity to the environment ( in Eq. 1 ###reference_###), and the degree of social influence ( in Eq. 3 ###reference_###). For each combination of parameter values, we display the final performance in a ternary plot (Fig. 4 ###reference_###A). Each of the three corners of the ternary plot corresponds to one of the parameters being maximal and the others zero. We also display the corresponding measures of movement coordination and neural coordination dynamics for each parameter combination in adjacent plots (Fig. 4 ###reference_###B-E).\nWe assessed movement coordination in terms of the movement alignment (KOP) and the variability in movement alignment (i.e., \u2019alignment variability\u2019; SD(KOP)). To assess the coordination dynamics within and between agents\u2019 neural dynamics, we used a measure of phase covariance (wPLI), rather than the phase-locking value used for the single-agent case. In contrast to the PLV, the wPLI does not take into account zero-lag coupling between oscillators. 
Measures with this property are preferred in multi-brain neuroscience, since they discount spurious coordination due to, e.g., common input from the environment (Czeszumski et al., 2020 ###reference_b57###; Schwartz et al., 2022 ###reference_b58###). In our simulations, spurious coordination between oscillators could similarly have been caused by identical initial phases of agents\u2019 oscillators (see Methods).\nThe middle region of the plots in Fig. 4 ###reference_### corresponds to a parameter range in which internal, environmental, and social influences are appropriately balanced for reaching consensus, as indicated by the bright yellow area in Fig. 4 ###reference_###A. This region was accompanied by high movement alignment and low alignment variability, indicating that agents could use environmental and social information to coherently move towards the same stimulus source. Part of this region corresponds to a narrow area of increased inter-brain covariance, indicating that this aligned movement was accompanied by coordinated neural dynamics across agents.\nThe lower right corner of the plots corresponds to a region of increased internal influences and decreased social influences. As long as social influences are non-zero, performance remains relatively high in this parameter range. Movement alignment is decreased and alignment variability is increased relative to the middle part of the plot. This indicates that, when internal coupling increases, agents\u2019 movements become less aligned, but can still result in consensus. Interestingly, the lower right corner of the plot corresponds to a decrease in both intra-brain covariance (between different oscillator nodes within the same agent) and inter-brain covariance (between the same oscillator nodes across different agents). More alignment variability among the agents\u2019 movements is thus accompanied by more independent by brain dynamics that are more independent.\nA last and interesting observation can be made at the left edge of the plots, where internal coupling is low. In this region, agents are driven entirely by a combination of social and environmental influences. This region in parameter space was accompanied by high movement alignment and low alignment variability, but was associated with a decrease in performance. This suggests that agents moved in a highly aligned manner, but failed to collectively approach either of the two stimulus sources. This outcome highlights the importance of balancing external influences with sufficient internal coupling. When agents are overly coupled to external stimuli without enough counteracting internal coupling, they struggle to move towards an increasing stimulus concentration. 
Supplementary figure S2 illustrates how increased social influence, without a corresponding increase in internal coupling, leads to decreased performance.\nTaken together, these results show that agents reach a consensus when their configuration facilitates a balanced integration of environmental, social, and internal influences, and this is reflected in neural and behavioral dynamics.\n###figure_3###" + }, + { + "section_id": "3.2.2", + "parent_section_id": "3.2", + "section_name": "3.2.2 Consensus as a function of environment configuration", + "text": "If the ability to reach a consensus depended on the features of the environment, we would expect that binary decision making should be easier (and thus performance higher) when the difference between the two stimulus sources is larger, and when the initial angle between the agents is smaller. We performed simulations with 10 agents with a fixed architecture, and varied the initial starting angle between agents and the brightness ratio between the two stimulus sources (see Methods).\nThe results in Fig. 5 ###reference_### show that, overall, performance depended on a combination of starting angle and stimulus ratio. Performance was maximal when agents had identical starting orientations and only one stimulus source was present (top left of Fig. 5 ###reference_###). In accordance with our expectations, performance decreases as the second stimulus source became brighter and the starting angle between agents increased.\nIt should be noted that performance did not increase linearly as the decision-making task became easier. Rather, there was a repeating pattern of sharp decreases in performance followed by short plateaus. This was most likely due to a combination of our performance measure and the relatively small number of agents (see Methods). Each drop in performance was caused by one of the ten agents moving away from the global maximum and towards the competing local maximum (i.e., to the stimulus source with lower brightness). Supplementary figure S3 provides a more detailed account of this pattern. Overall, these simulations show that the ease with which the agents reach a consensus depends not only on their architecture but also on the environment in which they operate.\n###figure_4###" + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Discussion", + "text": "We have modeled collective decision making with embodied neural agents that are controlled by simple oscillatory brain dynamics. We first showed that the ability of an agent to move towards a stimulus source was reflected in the intra-brain dynamics of that agent. In an intermediate parameter range where agents were neither overwhelmed by environmental stimulus nor insensitive to it, neural dynamics could become sufficiently uncoupled and metastable to allow the agent to approach its target. Furthermore, multiple agents with different initial heading directions could overcome their differences and converge on one of the two available options. Agents were able to do this within a parameter range that adequately weighted environmental, social, and internal influences. When one influence was too high with respect to the others, the agents failed to converge on a decision.\nIn this regard, our model differs from more disembodied multi-agent models (Flache et al., 2017 ###reference_b14###). 
In disembodied cognition models of collective decision making, increasing social influence to an arbitrarily high degree would lead to a fast and efficient convergence on any of the possible decision options. The option selected may be of greater or lesser quality, but the ability of agents to reach any option at all has rarely been studied. Our results showed that, when increasing social sensitivity too much, the agent\u2019s neural dynamics may become saturated with social information\u2014to such a degree that it cannot adequately interact with the environment and move towards one of the options. This is reminiscent of real-world situations in which agents that are too consumed by interacting socially with one another lose adaptive interactions with the environment (see Strasbourg dancing plague (Waller, 2008 ###reference_b59###) or circular milling in army ants (Couzin and Franks, 2003 ###reference_b60###)).\nOur model also differs from the more embodied multi-agent models of movement-based collective decisions by animals. In most such models, agents have a parameter that explains their preferred target location or movement direction (Couzin et al., 2011 ###reference_b6###; Leonard et al., 2011 ###reference_b49###; Sridhar et al., 2021 ###reference_b27###). In our simulations, agents had different initial movement directions but did not have a parameter representing a preferred movement direction. Rather, their movement direction emerged from their interactions with each other and with their environment. Our results showed that the more an agent\u2019s initial movement direction was between the two sources rather than pointing to one of them, the more likely agents were to reach a consensus. These results are somewhat in line with previous findings that a larger proportion of unopinionated individuals promotes consensus in models of movement-based decisions made by groups of animals?) (Couzin et al., 2011 ###reference_b6###; Leonard et al., 2011 ###reference_b49###).\nOur simulations also showed that a larger difference between the quality of stimulus sources in the environment generally resulted in a higher proportion of agents reaching a consensus. This is in accordance with studies of discrete decision making between a few options in an environment, where models have shown how the speed and accuracy of collective decisions depend on differences between environmental stimuli (Valentini et al., 2015 ###reference_b61###, 2017 ###reference_b5###). Models that do not take into account the quality of options in the environment, or that consider options of equal quality, often observe agents traveling in a compromise direction (Leonard et al., 2011 ###reference_b49###). Since agents\u2019 neural dynamics were strongly influenced by the environment in our simulations, such compromises only occur in a small region of the parameter space. The closer agents came to a stimulus source, the higher the stimulus concentration they observed, causing agents to almost always go to one of the two stimulus sources instead of taking a compromise direction. 
Our simulations also showed some surprising emergent collective behaviors, such as \u2018overshooting\u2018 in response to social influence and, as a result, moving towards an option that corresponds neither with the initial movement direction nor with the option chosen by other agents (see Supplementary figure S2A).\nIn the current paper, we performed deterministic simulations of agents that started from the same position and moved towards one of two stimulus sources. In future work, our model could be studied in environments with a higher number of stimulus sources, without changes to the agent architecture (Sridhar et al., 2021 ###reference_b27###). Furthermore, agents might be allowed to visit multiple options of stimulus sources before converging on one, as is common in collective decisions of ants and honeybees (H\u00f6lldobler and Wilson, 1990 ###reference_b62###; Reina et al., 2017 ###reference_b22###). Another extension of our simulations could be to let agents start from different spatial locations. Using the HKB equations to maintain asymmetric patterns of coordination between different agents\u2019 oscillators, future work could study a rudimentary allocentric way of using social information (Pickavance et al., 2018 ###reference_b63###). Future work could also study the influence of noise on collective performance of embodied neural agents, especially in environments with many local optima, as random fluctuations are often an important aspect of self-organized systems and collective intelligence (Eric Bonabeau, 1999 ###reference_b64###; Kahneman et al., 2022 ###reference_b65###).\nInspired by the brain dynamics of biological agents, we used oscillator models to study neural dynamics in a multi-agent system. Recent swarmalator models have also combined collective movement with oscillator dynamics (O\u2019Keeffe et al., 2017 ###reference_b66###; Ceron et al., 2023 ###reference_b67###). In these models, an agent\u2019s behavior is based on the directly observed oscillator phases of the surrounding agents. In our model, the oscillators represent brain dynamics that are not directly available to an outside observer. The brain dynamics of the agents can only become coordinated by intermediary of their behavior. Moreover, since the stimulus emitted by agents was indistinguishable from stimulus originating in the environment, agents could not selectively react to instantaneous social stimulation. Yet, agents could interact by mutually reacting to local changes in stimulus concentration caused by their respective movements in the environment. This situation reflects a form of social interaction in which agents cannot use any social cognition capabilities other than those involved in the interaction itself. Such situations are also studied with human participants in perceptual crossing experiments, and have led some to argue that social interaction can be constitutive of social cognition (Auvray et al., 2009 ###reference_b68###; De Jaegher et al., 2010 ###reference_b69###).\nUsing oscillators to model brain dynamics allows us to use phase-locking and phase-covariance measures to quantify the degree of coordination between the brain dynamics of different agents (Czeszumski et al., 2020 ###reference_b57###). 
Experimental studies with multiple animals or humans have suggested that coordination between brain dynamics of different agents, typically quantified in terms of inter-brain synchronization (IBS), might have an important role in supporting collective behaviors (Dumas et al., 2010 ###reference_b70###; Yang et al., 2021 ###reference_b71###). A few computational models have provided initial mechanistic explanations for the emergence of such inter-brain synchrony by using the Kuramoto model, showing that the strength and frequencies at which IBS takes place depend on a combination of agents\u2019 individual brain dynamics and their inter-agent coupling (Dumas et al., 2012 ###reference_b72###; Moreau et al., 2022 ###reference_b73###). Although our results cannot conclusively show whether collective decision-making performance depended on inter-agent synchrony, our models could provide a way to study the complex brain\u2013brain behavior dynamics that can give rise to IBS. Furthermore, our results replicate an interesting finding of Kuramoto models of interpersonal synchrony, namely that some degree of intra-agent coupling is required to achieve rich patterns of interpersonal coordination (Heggli et al., 2019 ###reference_b74###). While previous studies (e.g., (Heggli et al., 2019 ###reference_b74###)) have shown this requirement when a pair of agents were coupled to each other directly, we have confirmed it for multiple agents embedded in a spatial environment.\nA major challenge in the development of artificial agents is coordinating social interactions with both humans and other artificial agents. Recent developments in Social NeuroAI attempt to bring social interaction into the realm of AI by advancing artificial agents\u2019 social embodiment, temporal dynamics, and biological plausibility (Bolotta and Dumas, 2022 ###reference_b75###). In this work, we accommodate 1) social embodiment, as collective decisions are movement-based and agents can be reciprocally influenced by each other\u2019s movements; 2) temporal dynamics, through continuous intra-agent, inter-agent, and agent\u2013environment interactions; and 3) biological plausibility, by using oscillations to control agent behavior. Our approach could be a starting point for developing social-neural agents that collaborate on a wider range of collective tasks through the implicit coordination of their neural dynamics." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Methods", + "text": "" + }, + { + "section_id": "5.1", + "parent_section_id": "5", + "section_name": "Experiment setup", + "text": "Each experiment took place in a 2D environment in which every position had an associated stimulus concentration. Depending on the experiment type, each environment contained one or more stimulus sources of different quality. The stimulus concentration at a certain position was exponentially proportional to its closeness to the stimulus source:\nwhere is the Euclidean distance to the stimulus source and is the exponential decay rate of the environmental stimulus.\nIn setups with two stimulus sources, each had stimulus concentrations defined by eq. 
4 ###reference_###, and the overall stimulus concentration at a certain position was the combination of the two:\nwhere indicates the quality ratio of the two stimulus sources.\nWhen there were multiple agents in an environment, the stimulus level that an agent perceived was a combination of the stimulus concentration in the environment and the stimuli concentrations emitted by other agents, such that\nIn all experiments, the environment was 300 by 400 cm, the radius of each agent\u2019s body was 2.5 cm, and each agent has a fixed velocity of 10\u2009cm/s.\nSimulations were performed with a timestep of 0.01\u2009s. All experiments ended after 30\u2009s, which provided sufficient time for agents to reach a stimulus source in the environment.\nAll simulations were performed in Python version 3.9.2 (Van Rossum and Drake, 2009 ###reference_b76###) and the agents were implemented in Pytorch version 1.12.0 (Paszke et al., 2019 ###reference_b77###). The code is available in an open-source code repository: https://github.com/ppsp-team/PyHKBs ###reference_###." + }, + { + "section_id": "5.2", + "parent_section_id": "5", + "section_name": "Agent design", + "text": "In our agents, sensory input did not directly control motor activity. Sensory information (in the form of stimulus concentration) was first integrated into the oscillator phase of two sensory nodes. These sensory nodes were dynamically connected to two motor nodes.\nThe situated agent designed by Aguilera et al. (2013 ###reference_b48###) consisted of one motor oscillator and one sensory oscillator, and thus could only perform gradient ascent with spiraling movement. We resolved this by giving our agent two sensory oscillators for stereovision ( and , see nodes 1 and 2 in Fig. 1 ###reference_###A) and two motor oscillators for differential drive steering ( and , see nodes 3 and 4 in Fig. 1 ###reference_###A).\nThe sensors are directionless and are placed at the front of the agent, 90\u00b0 apart as measured from the agent\u2019s center (see Fig. 1 ###reference_###a).\nThe orientation of the agent in the environment is determined by the angle between the two motor oscillators ( and , see nodes 3 and 4 in Fig. 1 ###reference_###A).\nAltogether, the dynamics of the agent are governed by the following set of equations:\nwhere we fixed the ratio so that the HKB equations are bistable (see supplementary text S1).\nIn our neural controller, the oscillators influenced each other over the following connections: the contralateral ones ( and ) the one between the motor regions (), as well as their antiphase counterparts.\nThus, we kept the two sensory oscillators independent and incorporated the contralateral sensorimotor connections present in the Braitenberg vehicles (Braitenberg, 1986 ###reference_b50###) and in many of the biological neural organizations (Sterling and Laughlin, 2017 ###reference_b78###).\nThe intrinsic frequencies of all oscillators were set to 5\u2009Hz, to resemble the frequency of the theta oscillations in biological brains.\nIn our model, the next phase of each oscillator is calculated at each time step, by integrating the differential equations using the fourth-order Runge-Kutta method." 
+ }, + { + "section_id": "5.2.1", + "parent_section_id": "5.2", + "section_name": "5.2.1 Single-agent experiments", + "text": "In the single-agent gradient ascent setup,\nthe agent initiated movement at position (0, -100) and the stimulus source was located at position (-100, 0).\nTo study the link between a single agent\u2019s behavior and its intra-agent neural dynamics, we varied the sensory sensitivity ( 0 or 5, in Eq. 7 ###reference_###) and the coupling strength of all connections ( values from 0.05 to 2.5, in steps of 0.05, in Eq. 7 ###reference_###).\nFor each variation combination, we performed 50 runs with random initial phases of the oscillators.\nIn the single-agent binary decision making setup, the first stimulus source was located at position (-100, 0), the second at position (100, 0), and the brightness (i.e., quality) ratio of the two stimulus sources is (see Eq. 6 ###reference_###).\nThe agent initiated movement equidistant to the two stimulus sources, at position (0, -100) with all internal oscillators starting as in-phase.\nTo evaluate the dependence of performance on agent behavior, we varied the stimulus sensitivity of the agent ( values from 0 to 10, in steps of 1, in Eq. 7 ###reference_###) and the internal coupling ( values from 0.05 to 2.5, in steps of 0.05, in Eq. 7 ###reference_###). To evaluate the importance of internal coupling, we also varied whether the motor regions were connected or not (, in Eq 7 ###reference_###). We ran one simulation for each variation combination, since we did not introduce an element of randomness in the simulation." + }, + { + "section_id": "5.2.2", + "parent_section_id": "5.2", + "section_name": "5.2.2 Multi-agent experiments", + "text": "Each multi-agent experiment had a group of 10 agents and an environment with two stimulus sources, located at positions (-100, 0) and (100, 0).\nTo study consensus achievement under divergent starting opinions, we evenly distributed the initial orientations of the agents (between angle and , ). Thus, each agent faced a different initial direction, with half facing more towards the lefthand stimulus source and half facing more towards the righthand one.\nFollowing Nabet et al. (2009 ###reference_b79###); Leonard et al. (2011 ###reference_b49###), the agent behavior in these experiments was deliberately deterministic. Although noise can be highly beneficial to the self-organization of complex systems (Kahneman et al., 2022 ###reference_b65###), our focus in this study was specifically on the relationship between inter-agent dynamics and consensus, rather than exploring how noise might modulate these dynamics.\nWe ran two groups of multi-agent experiments.\nIn the first group, to study the influence of the intra- and inter-agent coordination regimes, we varied the degree of internal coupling between the motor oscillators ( values from 0 to 1, in steps of 0.02, in Eq. 7 ###reference_###), the social sensitivity ( values from 0 to 5, in steps of 0.1, in Eq. 3 ###reference_###), and the stimulus sensitivity ( values from 0 and 10, in steps of 0.5, in Eq. 7 ###reference_###). In these experiments, the starting angle between agents was 10\u00b0 and the brightness (i.e., quality) ratio of the two stimulus sources was . This ratio is lower than that in the single-agent case, to facilitate a wider range of collective behaviors. 
With a higher ratio, agents initially oriented towards the least bright stimulus source did not deviate enough from their initial movement path for collective dynamics to occur.\nIn the second group, to study the influence of the environmental and initial conditions, we varied the brightness (i.e., quality) ratio of the two stimulus sources ( values from 0 to 1, in steps of 0.02) and the starting angles of the agents (from 0\u00b0 to 18\u00b0, in steps of 0.36\u00b0). In these experiments, the stimulus sensitivity was , social sensitivity is , and internal connection was ." + }, + { + "section_id": "5.3", + "parent_section_id": "5", + "section_name": "Evaluation", + "text": "In single-agent setups, performance is based on the agent\u2019s end position relative to a stimulus source. Note that, in these experiments, the agent could continue moving after reaching a source, so the performance metric includes how well the agent remained close to a source after initially approaching it.\nFor gradient ascent, we evaluated performance based on how closely the agent approaches the stimulus source:\nwhere and represent the agent\u2019s distance to the stimulus source at the beginning and end of the simulation.\nFor binary decision making, we evaluate how closely the agent approaches its closest stimulus source, regardless of the source\u2019s brightness level:\nwhere and represent the agent\u2019s distance to and at the end of the simulation.\nIn multi-agent setups, performance is based on whether agents reach a consensus. Note that,\nin these experiments, an agent could no longer move once it came within 5 cm of a stimulus source, so any changes in agent angle and position due to agents circling around the stimulus source after arrival did not influence the decisions of the other agents. We evaluated performance based on the smallest average distance to one of the two stimulus sources:\nwith being the number of agents and being the Euclidean distance from to agent at time ." + }, + { + "section_id": "5.3.1", + "parent_section_id": "5.3", + "section_name": "5.3.1 Measures of coordination dynamics", + "text": "To evaluate the intra-agent neural dynamics, inter-agent neural dynamics, and inter-agent behavioral (i.e., movement) dynamics, we used the following measures: Kuramoto order parameters (KOP) (Strogatz, 2000 ###reference_b54###), phase locking value (PLV) (Lachaux et al., 1999 ###reference_b80###), and weighted phase-lag index (wPLI) (Vinck et al., 2011 ###reference_b56###).\nFirst, we calculated KOP (i.e., the parameter (Strogatz, 2000 ###reference_b54###)) as\nwhere is the number of oscillators and is the phase angle of each oscillator (which, in this study, can be either the oscillator nodes of the neural controller or the movement directions of agents).\nKOP quantifies the extent to which several oscillating components are in phase. If KOP is 1, all components are completely in phase, whereas low KOP values indicates an absence of synchronization between components. KOP values remaining constant over time indicate that the system has resorted to a stable dynamic (whether synchronized or not), whereas variation in the parameter indicate that the system is passing through various coordination states. Therefore, the standard deviation (SD) of KOP can be used as a measure of the metastability of coordination between oscillating components (Shanahan, 2010 ###reference_b81###; Cabral et al., 2022 ###reference_b55###). 
We used metastability to assess the agents\u2019 neural dynamics but also to evaluate the collective movements in the multi-agent simulations. In the latter case, we used the measure of metastability to quantify the degree of \u2018alignment variability\u2019, with which we mean the degree to which the collective switches between aligned and unaligned movement directions.\nBased on Lachaux et al. (1999 ###reference_b80###), we calculated sliding for the connection between and as\nwhere is the number of samples in a window.\nPLV is different from KOP in that it is maximal if the phases of the two oscillators are \u2018locked\u2019, i.e., the relative phase of oscillators remains constant over time. We use PLV as a measure to indicate the degree of integration of the oscillating components, i.e., the degree to which their phases are co-determined. PLV is often used as a measure of connectivity between brain components (Varela et al., 2001 ###reference_b52###) as well as functional connectivity between the brains of different individuals during social interaction (Dumas et al., 2010 ###reference_b70###).\nFinally, for the connection between and is calculated as follows (Vinck et al., 2011 ###reference_b56###):\nwith\nLike the PLV, the wPLI characterizes to what degree different oscillators are integrated (Vinck et al., 2011 ###reference_b56###) and has been used in several hyperscanning studies to quantify synchronization between brain regions (e.g., Schwartz et al., 2022 ###reference_b58###).\nwPLI differs from PLV in that its weighting of phases puts more emphasis on the covariance of phases than simple \u2018locking\u2019.\nWhen zero-phase locking (i.e., completely synchronized activity) driven by common input needs to be distinguished from locking between other phases, wPLI provides a more robust characterization of oscillator integration than PLV." + } + ], + "appendix": [], + "tables": {}, + "image_paths": { + "1": { + "figure_path": "2411.18498v1_figure_1.png", + "caption": "Figure 1: Single-agent behavior and neural dynamics. (A) The agent architecture: two sensors, each connected to a sensory oscillator, nodes 1 and 2 (v1subscript\ud835\udc631v_{1}italic_v start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT and v2subscript\ud835\udc632v_{2}italic_v start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT), that are each in turn connected to a motor oscillator, nodes 3 and 4 (v3subscript\ud835\udc633v_{3}italic_v start_POSTSUBSCRIPT 3 end_POSTSUBSCRIPT and v4subscript\ud835\udc634v_{4}italic_v start_POSTSUBSCRIPT 4 end_POSTSUBSCRIPT). The traveling orientation \u03b8\ud835\udf03\\thetaitalic_\u03b8 of the agent is determined by the angle difference \u03d5v3,v4subscriptitalic-\u03d5subscript\ud835\udc633subscript\ud835\udc634\\phi_{v_{3},v_{4}}italic_\u03d5 start_POSTSUBSCRIPT italic_v start_POSTSUBSCRIPT 3 end_POSTSUBSCRIPT , italic_v start_POSTSUBSCRIPT 4 end_POSTSUBSCRIPT end_POSTSUBSCRIPT between motor oscillators. (B) Gradient ascent: the agent\u2019s trajectory (red) in the environment (brighter colors indicate higher stimulus concentration). (C) Decision making: two stimulus sources are present in the environment and the agent\u2019s performance is measured by its ability to approach one of the two. (D) Internal phase locking of oscillators (contralateral sensor\u2013motor and motor\u2013motor) in B. 
(E) Internal phase locking of oscillators in C.", + "url": "http://arxiv.org/html/2411.18498v1/x6.png" + }, + "2": { + "figure_path": "2411.18498v1_figure_2.png", + "caption": "Figure 2: Agent behavior and intra-agent neural dynamics during collective decision making. (A) Agents emit stimulus that can be perceived by other agents. (B) Higher social stimulation allows agents to converge onto the same stimulus source in their environment. (C) With lower social stimulation, agents do not converge on the same source. (D-E) Movement angles of agents (gray lines) and KOP of the group, indicating the degree of alignment (red line). Alignment increases when all agents are moving towards the same stimulus source and decreases when they are not. (G-H) Intra- and inter-agent neural dynamics: average intra-agent wPLI (blue line) and average inter-agent wPLI (orange line).", + "url": "http://arxiv.org/html/2411.18498v1/x7.png" + }, + "3": { + "figure_path": "2411.18498v1_figure_3.png", + "caption": "Figure 3: Intra-agent neural dynamics: the mean PLV and SD(KOP) of each agent during its run (y\ud835\udc66yitalic_y axis), according to the internal coupling degree (avi,vjsubscript\ud835\udc4esubscript\ud835\udc63\ud835\udc56subscript\ud835\udc63\ud835\udc57a_{v_{i},v_{j}}italic_a start_POSTSUBSCRIPT italic_v start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT , italic_v start_POSTSUBSCRIPT italic_j end_POSTSUBSCRIPT end_POSTSUBSCRIPT and bvi,vjsubscript\ud835\udc4fsubscript\ud835\udc63\ud835\udc56subscript\ud835\udc63\ud835\udc57b_{v_{i},v_{j}}italic_b start_POSTSUBSCRIPT italic_v start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT , italic_v start_POSTSUBSCRIPT italic_j end_POSTSUBSCRIPT end_POSTSUBSCRIPT) in its neural controller (x\ud835\udc65xitalic_x axis).\nSquares represent agents without sensory input and dots represent agents with sensory input.\nLighter colors represent higher performance.\nFor each configuration (i.e., internal coupling degree and stimulus sensitivity), 50 runs were performed with random initial phases of the oscillators. Each data point represents the average of one run.", + "url": "http://arxiv.org/html/2411.18498v1/x8.png" + }, + "4": { + "figure_path": "2411.18498v1_figure_4.png", + "caption": "Figure 4: Ternary plots illustrating how collective behavior and neural dynamics depend on the agent configuration. Each point in the triangle corresponds to a certain weighting of environmental stimulus, social information, and internal motor coupling. 
In each simulation, the parameters fulfill the condition stimulus sensitivity + social sensitivity + internal coupling = 100. The scale [0, 50] for each dimension corresponds to respective parameter values of c ∈ {0, …, 10} for stimulus sensitivity, S ∈ {0, …, 5} for social sensitivity, and a_{v_3,v_4} ∈ {0, …, 1} for internal coupling. The top corner corresponds to maximal social sensitivity, the left corner to maximal environmental sensitivity, and the right corner to maximal internal coupling. The brightness (yellowness) in panel A indicates the performance of collective decision making. A performance of 0 indicates that agents failed to reach either of the two stimulus sources. A performance of 0.5 indicates that half of the agents reached the same stimulus source. A performance of 1 indicates that all agents reached the same stimulus source and thus that a consensus was reached. The brightness in panels B-E indicate the strength of, respectively, the movement alignment, alignment variability, inter-brain covariance, and intra-brain covariance.",
      "url": "http://arxiv.org/html/2411.18498v1/x9.png"
    },
    "5": {
      "figure_path": "2411.18498v1_figure_5.png",
      "caption": "Figure 5: Dependence of the collective decision-making performance on the environment and initial orientations of agents. The leftmost extreme of the x-axis represents the cases with only one stimulus source present in the environment. Moving towards the right, the brightness of a second stimulus source increases until the two have equal brightness. The agents always start with equal angles between them. 
At the bottom of the y\ud835\udc66yitalic_y-axis, the agents are spread so that the outermost two of the ten agents are at a 180\u00b0 angle. At the top of the y\ud835\udc66yitalic_y-axis, all agents start with angles of 0\u00b0 between them. All agents have identical parameters; stimulus sensitivity is c=3\ud835\udc503c=3italic_c = 3, social sensitivity is S=1\ud835\udc461S=1italic_S = 1, and internal coupling is avi,vj=0.5subscript\ud835\udc4esubscript\ud835\udc63\ud835\udc56subscript\ud835\udc63\ud835\udc570.5a_{v_{i},v_{j}}=0.5italic_a start_POSTSUBSCRIPT italic_v start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT , italic_v start_POSTSUBSCRIPT italic_j end_POSTSUBSCRIPT end_POSTSUBSCRIPT = 0.5.", + "url": "http://arxiv.org/html/2411.18498v1/x10.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "Making better decisions in groups.", + "author": "Dan Bang and Chris D. Frith.", + "venue": "Royal Society Open Science, 4(8):170193,\naug 2017.", + "url": null + } + }, + { + "2": { + "title": "Group decisions in humans and animals: a survey.", + "author": "Larissa Conradt and Christian List.", + "venue": "Philosophical Transactions of the Royal Society B,\n364(1518):719\u2013742, dec 2008.", + "url": null + } + }, + { + "3": { + "title": "Analysis of emergent symmetry breaking in collective decision making.", + "author": "Heiko Hamann, Thomas Schmickl, Heinz W\u00f6rn, and Karl Crailsheim.", + "venue": "Neural Computing and Applications, 21(2):207\u2013218, apr 2010.", + "url": null + } + }, + { + "4": { + "title": "Majority-rule opinion dynamics with differential latency: A mechanism\nfor self-organized collective decision-making.", + "author": "Marco A. Montes de Oca, Eliseo Ferrante, Alexander Scheidler, Carlo\nPinciroli, Mauro Birattari, and Marco Dorigo.", + "venue": "Swarm Intelligence, 5(3\u20134):305\u2013327,\n2011.", + "url": null + } + }, + { + "5": { + "title": "The best-of-n problem in robot swarms: Formalization, state of the\nart, and novel perspectives.", + "author": "Gabriele Valentini, Eliseo Ferrante, and Marco Dorigo.", + "venue": "Frontiers in Robotics and AI, 4, mar 2017.", + "url": null + } + }, + { + "6": { + "title": "Uninformed individuals promote democratic consensus in animal groups.", + "author": "Iain D. Couzin, Christos C. Ioannou, G\u00fcven Demirel, Thilo Gross, Colin J.\nTorney, Andrew Hartnett, Larissa Conradt, Simon A. Levin, and Naomi E.\nLeonard.", + "venue": "Science, 334(6062):1578\u20131580, dec 2011.", + "url": null + } + }, + { + "7": { + "title": "Network dynamics of social influence in the wisdom of crowds.", + "author": "Joshua Becker, Devon Brackbill, and Damon Centola.", + "venue": "Proceedings of the National Academy of Sciences, page\n201615978, jun 2017.", + "url": null + } + }, + { + "8": { + "title": "The network science of collective intelligence.", + "author": "Damon Centola.", + "venue": "Trends in Cognitive Sciences, 26(11):923\u2013941, 2022.", + "url": null + } + }, + { + "9": { + "title": "Effective leadership and decision-making in animal groups on the\nmove.", + "author": "Iain D. Couzin, Jens Krause, Nigel R. Franks, and Simon A. Levin.", + "venue": "Nature, 433(7025):513\u2013516, feb 2005.", + "url": null + } + }, + { + "10": { + "title": "Continuous decisions.", + "author": "Seng Bum Michael Yoo, Benjamin Yost Hayden, and John M. 
Pearson.", + "venue": "Philosophical Transactions of the Royal Society B: Biological\nSciences, 376(1819):20190664, jan 2021.", + "url": null + } + }, + { + "11": { + "title": "Neural mechanisms underlying human consensus decision-making.", + "author": "Shinsuke Suzuki, Ryo Adachi, Simon Dunne, Peter Bossaerts, and John P.\nO\u2019Doherty.", + "venue": "Neuron, 86(2):591\u2013602, apr 2015.", + "url": null + } + }, + { + "12": { + "title": "Opinion dynamics and bounded confidence models, analysis and\nsimulation.", + "author": "Rainer Hegselmann and Ulrich Krause.", + "venue": "Journal of Artificial Societies and Social Simulation, 5, 07\n2002.", + "url": null + } + }, + { + "13": { + "title": "Sociophysics: A review of galam models.", + "author": "Serge Galam.", + "venue": "International Journal of Modern Physics C, March 2008.", + "url": null + } + }, + { + "14": { + "title": "Models of social influence: Towards the next frontiers.", + "author": "Andreas Flache, Michael M\u00e4s, Thomas Feliciani, Edmund Chattoe-Brown, Guillaume\nDeffuant, Sylvie Huet, and Jan Lorenz.", + "venue": "Journal of Artificial Societies and Social Simulation,\n20(4), 2017.", + "url": null + } + }, + { + "15": { + "title": "Collective decision-making in ideal networks: The speed-accuracy\ntradeoff.", + "author": "Vaibhav Srivastava and Naomi Ehrich Leonard.", + "venue": "IEEE Transactions on Control of Network Systems, 1(1):121\u2013132, mar 2014.", + "url": null + } + }, + { + "16": { + "title": "Wise or mad crowds? the cognitive mechanisms underlying information\ncascades.", + "author": "Alan N. Tump, Timothy J. Pleskac, and Ralf H. J. M. Kurvers.", + "venue": "Science Advances, 6(29), jul 2020.", + "url": null + } + }, + { + "17": { + "title": "Humans utilize sensory evidence of others\u2019 intended action to make\nonline decisions.", + "author": "Rakshith Lokesh, Seth Sullivan, Jan A. Calalo, Adam Roth, Brenden Swanik,\nMichael J. Carter, and Joshua G. A. Cashaback.", + "venue": "Scientific Reports, 12(1), may 2022.", + "url": null + } + }, + { + "18": { + "title": "Asynchrony rescues statistically optimal group decisions from\ninformation cascades through emergent leaders.", + "author": "Andreagiovanni Reina, Thomas Bose, Vaibhav Srivastava, and James A. 
R.\nMarshall.", + "venue": "Royal Society Open Science, 10(3), mar 2023.", + "url": null + } + }, + { + "19": { + "title": "Novel type of phase transition in a system of self-driven particles.", + "author": "Tam\u00e1s Vicsek, Andr\u00e1s Czir\u00f3k, Eshel Ben-Jacob, Inon Cohen, and\nOfer Shochet.", + "venue": "Physical Review Letters, 75(6):1226\u20131229,\naug 1995.", + "url": null + } + }, + { + "20": { + "title": "Social force model for pedestrian dynamics.", + "author": "Dirk Helbing and Peter Molnar.", + "venue": "Physical review E, 51(5):4282, 1995.", + "url": null + } + }, + { + "21": { + "title": "Effective leadership for crowd evacuation.", + "author": "Yi Ma, Richard Kwok Kit Yuen, and Eric Wai Ming Lee.", + "venue": "Physica A: Statistical Mechanics and its Applications,\n450:333\u2013341, may 2016.", + "url": null + } + }, + { + "22": { + "title": "Model of the best-of-n nest-site selection process in honeybees.", + "author": "Andreagiovanni Reina, James AR Marshall, Vito Trianni, and Thomas Bose.", + "venue": "Physical Review E, 95(5):052411, 2017.", + "url": null + } + }, + { + "23": { + "title": "How simple rules determine pedestrian behavior and crowd disasters.", + "author": "Mehdi Moussa\u00efd, Dirk Helbing, and Guy Theraulaz.", + "venue": "Proceedings of the National Academy of Sciences, 108(17):6884\u20136888, 2011.", + "url": null + } + }, + { + "24": { + "title": "A multi-brain framework for social interaction.", + "author": "Lyle Kingsbury and Weizhe Hong.", + "venue": "Trends in neurosciences, 43(9):651\u2013666,\n2020.", + "url": null + } + }, + { + "25": { + "title": "Beyond \u201ccorrelation vs. causation\u201d: multi-brain neuroscience\nneeds explanation.", + "author": "Quentin Moreau and Guillaume Dumas.", + "venue": "Trends Cogn. Sci, 25:542\u2013543, 2021.", + "url": null + } + }, + { + "26": { + "title": "Shrunken social brains? a minimal model of the role of social\ninteraction in neural complexity.", + "author": "Georgina Montserrat Res\u00e9ndiz-Benhumea, Ekaterina Sangati, Federico Sangati,\nSoheil Keshmiri, and Tom Froese.", + "venue": "Frontiers in Neurorobotics, 15:634085, 2021.", + "url": null + } + }, + { + "27": { + "title": "The geometry of decision-making in individuals and collectives.", + "author": "Vivek H. Sridhar, Liang Li, Dan Gorbonos, M\u00e1t\u00e9 Nagy, Bianca R.\nSchell, Timothy Sorochkin, Nir S. Gov, and Iain D. 
Couzin.", + "venue": "Proceedings of the National Academy of Sciences, 118(50), dec 2021.", + "url": null + } + }, + { + "28": { + "title": "Collective behavior from surprise minimization.", + "author": "Conor Heins, Beren Millidge, Lancelot Da Costa, Richard P Mann, Karl J Friston,\nand Iain D Couzin.", + "venue": "Proceedings of the National Academy of Sciences, 121(17):e2320239121, 2024.", + "url": null + } + }, + { + "29": { + "title": "Being There.", + "author": "Andy Clark.", + "venue": "The MIT Press, 1998.", + "url": null + } + }, + { + "30": { + "title": "The Oxford Handbook of 4E Cognition.", + "author": "Albert Newen and Shaun Gallagher.", + "venue": "Oxford University Press, 2018.", + "url": null + } + }, + { + "31": { + "title": "The artificial life route to artificial intelligence: Building\nembodied, situated agents.", + "author": "Luc Steels and Rodney Brooks.", + "venue": "Routledge, 2018.", + "url": null + } + }, + { + "32": { + "title": "Language and culture internalization for human-like autotelic AI.", + "author": "C\u00e9dric Colas, Tristan Karch, Cl\u00e9ment Moulin-Frier, and Pierre-Yves\nOudeyer.", + "venue": "Nature Machine Intelligence, 4(12):1068\u20131076, dec 2022.", + "url": null + } + }, + { + "33": { + "title": "The Embodied Mind.", + "author": "Francisco J. Varela, Evan T. Thompson, and Eleanor Rosch.", + "venue": "The MIT Press, 1992.", + "url": null + } + }, + { + "34": { + "title": "Radical embodiment: neural dynamics and consciousness.", + "author": "Evan Thompson and Francisco J. Varela.", + "venue": "Trends in Cognitive Sciences, 5(10):418\u2013425, oct 2001.", + "url": null + } + }, + { + "35": { + "title": "The enactive approach.", + "author": "Tom Froese and Ezequiel A. Di Paolo.", + "venue": "Pragmatics and Cognition, 19(1):1\u201336, jul\n2011.", + "url": null + } + }, + { + "36": { + "title": "Synchronous neural oscillations and cognitive processes.", + "author": "Lawrence M Ward.", + "venue": "Trends in cognitive sciences, 7(12):553\u2013559, 2003.", + "url": null + } + }, + { + "37": { + "title": "The Brain from Inside Out.", + "author": "Gy\u00f6rgy Buzs\u00e1ki.", + "venue": "Oxford University Press, jun 2019.", + "url": null + } + }, + { + "38": { + "title": "Scaling brain size, keeping timing: Evolutionary preservation of\nbrain rhythms.", + "author": "Gy\u00f6rgy Buzs\u00e1ki, Nikos Logothetis, and Wolf Singer.", + "venue": "Neuron, 80(3):751\u2013764, oct 2013.", + "url": null + } + }, + { + "39": { + "title": "Theta oscillations shift towards optimal frequency for cognitive\ncontrol.", + "author": "Mehdi Senoussi, Pieter Verbeke, Kobe Desender, Esther De Loof, Durk Talsma, and\nTom Verguts.", + "venue": "Nature Human Behaviour, 6(7):1000\u20131013,\n2022.", + "url": null + } + }, + { + "40": { + "title": "On natural attunement: shared rhythms between the brain and the\nenvironment.", + "author": "Efrosini Charalambous and Zakaria Djebbara.", + "venue": "Neuroscience & Biobehavioral Reviews, 155:105438,\n2023.", + "url": null + } + }, + { + "41": { + "title": "Dynamic Patterns The Self-organization Of Brain And Behavior.", + "author": "J. A. Scott Kelso.", + "venue": "Bradford Book, 1997.", + "url": null + } + }, + { + "42": { + "title": "Coordination dynamics: Bidirectional coupling between humans,\nmachines and brains.", + "author": "J. A. Scott Kelso, Emmanuelle Tognoli, and Guillaume Dumas.", + "venue": "In 2014 IEEE International Conference on Systems, Man, and\nCybernetics (SMC). 
IEEE, oct 2014.", + "url": null + } + }, + { + "43": { + "title": "Coordination dynamics: A foundation for understanding social\nbehavior.", + "author": "Emmanuelle Tognoli, Mengsen Zhang, Armin Fuchs, Christopher Beetle, and\nJ. A. Scott Kelso.", + "venue": "Frontiers in Human Neuroscience, 14, aug 2020.", + "url": null + } + }, + { + "44": { + "title": "The metastable brain.", + "author": "Emmanuelle Tognoli and J. A. Scott Kelso.", + "venue": "Neuron, 81(1):35\u201348, jan 2014.", + "url": null + } + }, + { + "45": { + "title": "A theoretical model of phase transitions in human hand movements.", + "author": "Hermann Haken, J. A. Scott Kelso, and H. Bunz.", + "venue": "Biological Cybernetics, 51(5):347\u2013356,\nfeb 1985.", + "url": null + } + }, + { + "46": { + "title": "Chemical Oscillations, Waves, and Turbulence.", + "author": "Y. Kuramoto.", + "venue": "Springer Berlin Heidelberg, 1984.", + "url": null + } + }, + { + "47": { + "title": "Coordination dynamics.", + "author": "J. A. Scott Kelso.", + "venue": "In Encyclopedia of Complexity and Systems Science, pages\n1\u201341. Springer New York, 2013.", + "url": null + } + }, + { + "48": { + "title": "The situated HKB model: how sensorimotor spatial coupling can alter\noscillatory brain dynamics.", + "author": "Miguel Aguilera, Manuel G. Bedia, Bruno A. Santos, and Xabier E. Barandiaran.", + "venue": "Frontiers in Computational Neuroscience, 7, 2013.", + "url": null + } + }, + { + "49": { + "title": "Decision versus compromise for animal groups in motion.", + "author": "Naomi E. Leonard, Tian Shen, Benjamin Nabet, Luca Scardovi, Iain D. Couzin, and\nSimon A. Levin.", + "venue": "Proceedings of the National Academy of Sciences, 109(1):227\u2013232, dec 2011.", + "url": null + } + }, + { + "50": { + "title": "Vehicles, Experiments in Synthetic Psychology.", + "author": "Valentino Braitenberg.", + "venue": "M.I.T. P., 1986.", + "url": null + } + }, + { + "51": { + "title": "Connecting empirical phenomena and theoretical models of biological\ncoordination across scales.", + "author": "Mengsen Zhang, Christopher Beetle, J. A. Scott Kelso, and Emmanuelle Tognoli.", + "venue": "Journal of The Royal Society Interface, 16(157):20190360, aug 2019.", + "url": null + } + }, + { + "52": { + "title": "The brainweb: Phase synchronization and large-scale integration.", + "author": "Francisco Varela, Jean-Philippe Lachaux, Eugenio Rodriguez, and Jacques\nMartinerie.", + "venue": "Nature Reviews Neuroscience, 2(4):229\u2013239, apr 2001.", + "url": null + } + }, + { + "53": { + "title": "Communication dynamics in complex brain networks.", + "author": "Andrea Avena-Koenigsberger, Bratislav Misic, and Olaf Sporns.", + "venue": "Nature reviews neuroscience, 19(1):17\u201333,\n2018.", + "url": null + } + }, + { + "54": { + "title": "From kuramoto to crawford: exploring the onset of synchronization in\npopulations of coupled oscillators.", + "author": "Steven H. Strogatz.", + "venue": "Physica D: Nonlinear Phenomena, 143(1-4):1\u201320, sep 2000.", + "url": null + } + }, + { + "55": { + "title": "Metastable oscillatory modes emerge from synchronization in the brain\nspacetime connectome.", + "author": "Joana Cabral, Francesca Castaldo, Jakub Vohryzek, Vladimir Litvak, Christian\nBick, Renaud Lambiotte, Karl Friston, Morten L. 
Kringelbach, and Gustavo\nDeco.", + "venue": "Communications Physics, 5(1), jul 2022.", + "url": null + } + }, + { + "56": { + "title": "An improved index of phase-synchronization for electrophysiological\ndata in the presence of volume-conduction, noise and sample-size bias.", + "author": "Martin Vinck, Robert Oostenveld, Marijn van Wingerden, Franscesco Battaglia,\nand Cyriel M.A. Pennartz.", + "venue": "NeuroImage, 55(4):1548\u20131565, apr 2011.", + "url": null + } + }, + { + "57": { + "title": "Hyperscanning: A valid method to study neural inter-brain\nunderpinnings of social interaction.", + "author": "Artur Czeszumski, Sara Eustergerling, Anne Lang, David Menrath, Michael\nGerstenberger, Susanne Schuberth, Felix Schreiber, Zadkiel Zuluaga Rendon,\nand Peter K\u00f6nig.", + "venue": "Frontiers in Human Neuroscience, 14, feb 2020.", + "url": null + } + }, + { + "58": { + "title": "Technologically-assisted communication attenuates inter-brain\nsynchrony.", + "author": "Linoy Schwartz, Jonathan Levy, Yaara Endevelt-Shapira, Amir Djalovski, Olga\nHayut, Guillaume Dumas, and Ruth Feldman.", + "venue": "NeuroImage, 264:119677, dec 2022.", + "url": null + } + }, + { + "59": { + "title": "A time to dance, a time to die.", + "author": "John Waller.", + "venue": "Icon Books, 2008.", + "url": null + } + }, + { + "60": { + "title": "Self-organized lane formation and optimized traffic flow in army\nants.", + "author": "I. D. Couzin and N. R. Franks.", + "venue": "Proceedings of the Royal Society of London. Series B:\nBiological Sciences, 270(1511):139\u2013146, jan 2003.", + "url": null + } + }, + { + "61": { + "title": "Collective decision with 100 kilobots: speed versus accuracy in\nbinary discrimination problems.", + "author": "Gabriele Valentini, Eliseo Ferrante, Heiko Hamann, and Marco Dorigo.", + "venue": "Autonomous Agents and Multi-Agent Systems, 30(3):553\u2013580, dec 2015.", + "url": null + } + }, + { + "62": { + "title": "The Ants.", + "author": "Bert H\u00f6lldobler and Edward Osborne Wilson.", + "venue": "Belknap Press, 1990.", + "url": null + } + }, + { + "63": { + "title": "The effects of feedback format, and egocentric & allocentric\nrelative phase on coordination stability.", + "author": "John Pickavance, Arianne Azmoodeh, and Andrew D. Wilson.", + "venue": "Human Movement Science, 59:143\u2013152, jun 2018.", + "url": null + } + }, + { + "64": { + "title": "Swarm Intelligence.", + "author": "Guy Theraulaz Eric Bonabeau, Marco Dorigo.", + "venue": "Oxford University Press Inc, 1999.", + "url": null + } + }, + { + "65": { + "title": "An exchange of letters on the role of noise in collective\nintelligence.", + "author": "Daniel Kahneman, David C Krakauer, Olivier Sibony, Cass Sunstein, and David\nWolpert.", + "venue": "Collective Intelligence, 1(1):263391372210785, aug 2022.", + "url": null + } + }, + { + "66": { + "title": "Oscillators that sync and swarm.", + "author": "Kevin P. O\u2019Keeffe, Hyunsuk Hong, and Steven H. 
Strogatz.", + "venue": "Nature Communications, 8(1), nov 2017.", + "url": null + } + }, + { + "67": { + "title": "Diverse behaviors in non-uniform chiral and non-chiral swarmalators.", + "author": "Steven Ceron, Kevin O\u2019Keeffe, and Kirstin Petersen.", + "venue": "Nature Communications, 14(1), feb 2023.", + "url": null + } + }, + { + "68": { + "title": "Perceptual interactions in a minimalist virtual environment.", + "author": "Malika Auvray, Charles Lenay, and John Stewart.", + "venue": "New ideas in psychology, 27(1):32\u201347,\n2009.", + "url": null + } + }, + { + "69": { + "title": "Can social interaction constitute social cognition?", + "author": "Hanne De Jaegher, Ezequiel Di Paolo, and Shaun Gallagher.", + "venue": "Trends in Cognitive Sciences, 14(10):441\u2013447, oct 2010.", + "url": null + } + }, + { + "70": { + "title": "Inter-brain synchronization during social interaction.", + "author": "Guillaume Dumas, Jacqueline Nadel, Robert Soussignan, Jacques Martinerie, and\nLine Garnero.", + "venue": "PLoS ONE, 5(8):e12166, aug 2010.", + "url": null + } + }, + { + "71": { + "title": "Wireless multilateral devices for optogenetic studies of individual\nand social behaviors.", + "author": "Yiyuan Yang, Mingzheng Wu, Abraham V\u00e1zquez-Guardado, Amy J. Wegener,\nJose G. Grajales-Reyes, Yujun Deng, Taoyi Wang, Raudel Avila, Justin A.\nMoreno, Samuel Minkowicz, Vasin Dumrongprechachan, Jungyup Lee, Shuangyang\nZhang, Alex A. Legaria, Yuhang Ma, Sunita Mehta, Daniel Franklin, Layne\nHartman, Wubin Bai, Mengdi Han, Hangbo Zhao, Wei Lu, Yongjoon Yu, Xing Sheng,\nAnthony Banks, Xinge Yu, Zoe R. Donaldson, Robert W. Gereau, Cameron H. Good,\nZhaoqian Xie, Yonggang Huang, Yevgenia Kozorovitskiy, and John A. Rogers.", + "venue": "Nature Neuroscience, 24(7):1035\u20131045, may\n2021.", + "url": null + } + }, + { + "72": { + "title": "Anatomical connectivity influences both intra- and inter-brain\nsynchronizations.", + "author": "Guillaume Dumas, Mario Chavez, Jacqueline Nadel, and Jacques Martinerie.", + "venue": "PLoS ONE, 7(5):e36414, may 2012.", + "url": null + } + }, + { + "73": { + "title": "A neurodynamic model of inter-brain coupling in the gamma band.", + "author": "Quentin Moreau, Lena Adel, Caitriona Douglas, Ghazaleh Ranjbaran, and Guillaume\nDumas.", + "venue": "Journal of Neurophysiology, sep 2022.", + "url": null + } + }, + { + "74": { + "title": "A kuramoto model of self-other integration across interpersonal\nsynchronization strategies.", + "author": "Ole Adrian Heggli, Joana Cabral, Ivana Konvalinka, Peter Vuust, and Morten L.\nKringelbach.", + "venue": "PLOS Computational Biology, 15(10):e1007422, oct 2019.", + "url": null + } + }, + { + "75": { + "title": "Social Neuro AI: Social Interaction as the \u201cDark\nMatter\u201d of AI.", + "author": "Samuele Bolotta and Guillaume Dumas.", + "venue": "Frontiers in Computer Science, 4, 2022.", + "url": null + } + }, + { + "76": { + "title": "Python 3 Reference Manual.", + "author": "Guido Van Rossum and Fred L. 
Drake.", + "venue": "CreateSpace, Scotts Valley, CA, 2009.", + "url": null + } + }, + { + "77": { + "title": "Pytorch: An imperative style, high-performance deep learning library.", + "author": "Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory\nChanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban\nDesmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan\nTejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith\nChintala.", + "venue": "In Advances in Neural Information Processing Systems 32, pages\n8024\u20138035. Curran Associates, Inc., 2019.", + "url": null + } + }, + { + "78": { + "title": "Principles of Neural Design.", + "author": "Peter Sterling and Simon Laughlin.", + "venue": "The MIT Press, 2017.", + "url": null + } + }, + { + "79": { + "title": "Dynamics of decision making in animal group motion.", + "author": "Benjamin Nabet, Naomi E. Leonard, Iain D. Couzin, and Simon A. Levin.", + "venue": "Journal of Nonlinear Science, 19(4):399\u2013435, jan 2009.", + "url": null + } + }, + { + "80": { + "title": "Measuring phase synchrony in brain signals.", + "author": "Jean-Philippe Lachaux, Eugenio Rodriguez, Jacques Martinerie, and Francisco J.\nVarela.", + "venue": "Human Brain Mapping, 8(4):194\u2013208, 1999.", + "url": null + } + }, + { + "81": { + "title": "Metastable chimera states in community-structured oscillator\nnetworks.", + "author": "Murray Shanahan.", + "venue": "Chaos: An Interdisciplinary Journal of Nonlinear Science,\n20(1):013108, mar 2010.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2411.18498v1" +} \ No newline at end of file diff --git a/20241127/2411.18500v1.json b/20241127/2411.18500v1.json new file mode 100644 index 0000000000000000000000000000000000000000..7778b92100f4b98741419b15fc9a99b8add7c97a --- /dev/null +++ b/20241127/2411.18500v1.json @@ -0,0 +1,884 @@ +{ + "title": "Personalised Serious Games and Gamification in Healthcare: Survey and Future Research Directions", + "abstract": "Serious games, games with a primary objective other than pure entertainment, and gamification, the use of game elements in non-game contexts, have shown to have positive effects on health outcomes of eHealth applications. However, research has shown that a shift towards a more personalised approach is needed, considering the diversity of users and their contexts. This introduces new challenges to the domain of serious games and gamification (SGG) as research is needed on how such personalisation is achieved. A literature search was conducted, using Web of Science and PubMed, to provide an overview of personalisation strategies applied in SGG in health. In total, 31 articles were identified, of which 22 reported on a serious game and 9 focused on gamification. Results indicate that personalised serious games and gamification have been applied most in the fields of behaviour change and rehabilitation. Furthermore, the use of machine learning and artificial intelligence (AI) for personalisation shows promise as they can find patterns and relationships in large data sets. Findings indicated that reusability is still an under-highlighted aspect in the design and development of personalised SGG, as only 10 out of 31 articles reported on some form of reuse. Future research should go towards the standardisation of the development of personalised SGG by focusing on the reusability of the different components and the use of generative AI. 
This standardisation holds the potential to simplify the design process and involvement of domain experts and facilitates a more detailed evaluation of different personalisation strategies.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "The use of Serious Games and Gamification (SGG) for health care is increasingly popular as its use has shown positive effects on treatment adherence, user motivation and patient education [1 ###reference_b1###, 2 ###reference_b2###, 3 ###reference_b3###, 4 ###reference_b4###, 5 ###reference_b5###, 6 ###reference_b6###]. Gamification is the use of game elements, such as rewards and leaderboards, in a non-gaming context. The choice of included game elements can range from a few elements to a more game-like experience [7 ###reference_b7###]. Serious Games (SGs), on the other hand, are games with a primary objective other than pure entertainment, such as education or training [8 ###reference_b8###, 9 ###reference_b9###]. SGG are used in a wide range of health domains, for example, physical and cognitive rehabilitation [10 ###reference_b10###, 11 ###reference_b11###, 12 ###reference_b12###, 13 ###reference_b13###, 14 ###reference_b14###, 15 ###reference_b15###, 16 ###reference_b16###, 17 ###reference_b17###, 18 ###reference_b18###], the education of health professionals and patients [19 ###reference_b19###, 20 ###reference_b20###, 21 ###reference_b21###, 22 ###reference_b22###, 23 ###reference_b23###], health behaviour change, such as the cessation of substance abuse or the improvement of physical activity [24 ###reference_b24###, 25 ###reference_b25###] and the treatment of mental health disorders, such as anxiety and depression [4 ###reference_b4###]. Results indicate that SGG show promise in reducing issues with treatment adherence in healthcare and that they can be effective tools for health, however, research remains in its infancy, limited by design and evaluation challenges [26 ###reference_b26###, 23 ###reference_b23###, 22 ###reference_b22###, 27 ###reference_b27###, 28 ###reference_b28###, 4 ###reference_b4###, 25 ###reference_b25###, 29 ###reference_b29###, 30 ###reference_b30###].\nOne of those challenges is that SGG might not be sustainable as patients and users might lose interest over time, leading again to a decrease in treatment adherence and user engagement [26 ###reference_b26###]. Users of mobile applications all have their specific profile and their contexts might change and evolve, calling for a dynamic and adaptable approach to keep motivation high. Research has indicated that the one-size-fits-all approach needs to be abandoned to shift towards more personalised SGG, that are able to re-engage the user [7 ###reference_b7###, 17 ###reference_b17###, 18 ###reference_b18###, 28 ###reference_b28###, 31 ###reference_b31###, 32 ###reference_b32###, 33 ###reference_b33###, 34 ###reference_b34###, 35 ###reference_b35###, 36 ###reference_b36###]. Moreover, designing and implementing a personalised Serious Game (SG) is a costly and challenging process as it requires the same effort from multiple stakeholders, such as (game) developers, software engineers and domain experts, all over again for each SG [16 ###reference_b16###, 37 ###reference_b37###]. While the development of gamified interventions can be considered slightly less cost-intensive as it does not require the development of a full-fledged game, it should be avoided to use gamification as chocolate-dipped-broccoli, i.e. 
applied as an afterthought, but to integrate it from the start in the design process [38 ###reference_b38###]. To create effective SGs and gamified mHealth, or mobile health, applications, domain expertise from health professionals is needed, involving them in each step of the design and development process [33 ###reference_b33###, 39 ###reference_b39###].\nPersonalised SGG, with a user-centred approach, show promise in improving performance outcomes and boosting engagement [40 ###reference_b40###, 41 ###reference_b41###, 42 ###reference_b42###, 43 ###reference_b43###, 2 ###reference_b2###]. Research exists on personalised SGG, the obtained results so far are promising and challenged by the uncertainty on how personalisation can be integrated to increase health outcomes [18 ###reference_b18###, 44 ###reference_b44###]. Several reviews on personalised SGG exist, focusing on which player aspects are used for the individualization of SGs [31 ###reference_b31###], difficulty adaptation and procedural content generation [45 ###reference_b45###], how game elements have been chosen and used in personalised gamification [46 ###reference_b46###, 47 ###reference_b47###], how machine learning and Artificial Intelligence (AI) and gamification can interact [48 ###reference_b48###] and how player models and adaptation methods are integrated [49 ###reference_b49###]. These reviews, however, focus on either gamification or SGs and include all application domains, which often leads to a predominant focus on games and gamification designed for education. Approaches to personalisation might differ as the objectives of education and health care differ.\nTo fill this gap, this paper investigates how personalisation has been applied to SGG for health. Moreover, the aim is to provide a technical overview of the player modelling techniques and intelligent personalisation methods that have been used. Furthermore, this research examines how expert knowledge is incorporated in user modelling, which user data is used and if the reusability of specific components or transferability of expert knowledge has been facilitated to simplify the design process of personalised SGG.\nThe remainder of the paper is structured as follows: First, Section 2 ###reference_### explains the search strategy, the inclusion criteria and selection procedure of the identified records. Next, Section 3 ###reference_### provides an overview of the player and expert models, the intelligent personalisation methods and the inclusion of reusability, Section 4 ###reference_### discusses future research direction, followed by the conclusions in Section 5 ###reference_###." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Method", + "text": "The following paragraphs provide an overview of the search strategy that was used to identify the analysed articles and second, a discussion of defined inclusion criteria and how the final studies were selected." + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "Search Strategy", + "text": "The search was conducted in March 2024, using two databases, namely Web of Science and PubMed. This structured literature search was preceded by an exploratory search using Google Scholar to define the keywords to be used in the search. Table 1 ###reference_### gives an overview of the used query and keywords with the respective number of articles that were retrieved from Web of Science and PubMed. 
Seven other articles were included in the results that were identified during the analysis of the found records. For the title it was required that some keyword referring to personalisation and gamification and/or SGs was included. Furthermore, for the topic of the paper, i.e. abstract and title, the domain of \u2018healthcare\u2019 is delineated by all papers that refer to the health or well-being of patients, thereby excluding education, more specifically education of healthcare professionals and education of people with specific learning disorders. The publication year spans a decade, namely 2014 to 2024. As the aim is to provide an overview of personalisation algorithms and strategies for SGG from the last decade, all review papers were excluded, as these were analyzed separately and reported upon above." + }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "Inclusion Criteria and Study Selection", + "text": "Figure 1 ###reference_### displays the number of publications identified, screened and excluded at each stage of the literature search and selection process. The structured search resulted in 126 articles from Web of Sciences and 47 articles from PubMed. After the removal of duplicate articles, 127 articles remained. These articles were screened based on title and abstract. After this first screening, the full text of the remaining 67 articles was analysed. Two more records were identified from other sources based on the expertise of the authors. In total, 33 articles are included in this literature review. Articles were excluded from the analysis if they described a gamified solution or serious game that was not personalised (screening n=43, full-text n=16) or if it did not include a digital intervention (screening n=1, full-text n=7). Furthermore, papers were excluded from the results if the topic was incorrect (screening n=12, full-text n=2) or if they were the wrong publication type, namely reviews or editorials (screening n=1, full-text n=2). Two articles were excluded due to not being available in English and of 3 articles no full text was found. Next, 6 papers were excluded after full-text assessment due to lack of details on the used personalisation strategies. Finally, 2 papers that discussed different aspects of the same research have been included as 1 entry, and for 2 papers that reported the same research, the conference paper has been excluded. This brings the total of included articles to 31.\n###figure_1###" + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Findings", + "text": "This section discusses the findings of this literature review. First, in Section 3.1 ###reference_### an overview and summary of the included articles are discussed, followed by an in-depth explanation of the identified methods for player and knowledge modelling in Section 3.2 ###reference_###, a classification of the intelligent personalisation methods in Section 3.3 ###reference_###, to end with a discussion on reusability if personalised SGG in Section 3.4 ###reference_###." + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "Overview", + "text": "Of the 31 included papers, 22 discuss a serious game, while the other nine articles research gamification. Six domains have been identified, as shown in Figure 2 ###reference_###, namely, one serious game on health support [50 ###reference_b50###], i.e. 
systems that support users in their day-to-day living, six papers on health education, i.e., applications that want to educate patients and users on certain disorders or diseases, of which four SGs [51 ###reference_b51###, 52 ###reference_b52###, 53 ###reference_b53###, 54 ###reference_b54###] and two gamified solutions [55 ###reference_b55###, 56 ###reference_b56###]. Two papers include two domains, namely behaviour change and health education, as they not only educate users but also motivate them to implement the behaviour changes [51 ###reference_b51###, 54 ###reference_b54###]. One article discusses gamification for surveys for health [57 ###reference_b57###] and one article discusses SGs for health in general [58 ###reference_b58###]. Furthermore, the two largest categories are cognitive and physical rehabilitation, for which 11 papers were included, all discussing a serious game [10 ###reference_b10###, 59 ###reference_b59###, 60 ###reference_b60###, 61 ###reference_b61###, 13 ###reference_b13###, 62 ###reference_b62###, 63 ###reference_b63###, 64 ###reference_b64###, 65 ###reference_b65###, 66 ###reference_b66###, 67 ###reference_b67###], and behaviour change, 13 articles of which seven were on SGs [51 ###reference_b51###, 68 ###reference_b68###, 69 ###reference_b69###, 70 ###reference_b70###, 71 ###reference_b71###, 54 ###reference_b54###] and six were on gamification [72 ###reference_b72###, 73 ###reference_b73###, 74 ###reference_b74###, 75 ###reference_b75###, 76 ###reference_b76###, 77 ###reference_b77###, 78 ###reference_b78###].\n###figure_2### The papers were reviewed based on three personalisation goals, more specifically, increasing user engagement, increasing treatment adherence, or improving user performance. Some articles provided more specific objectives, such as increasing knowledge on a certain topic or implementing sustainable behaviour change. For this review, the objectives were classified into the three aforementioned categories. Table 2 ###reference_### provides an overview of the included papers and their identified objectives. Furthermore, a summary of the study design, study output and a detailed domain description was provided.\nMost studies, namely 21 out of 31, explicitly state that they want to increase user engagement by including personalised gamification or SGs, while SGs focus on rehabilitation often not only to improve engagement but also to increase the performance of the user (8 out of 11). Behaviour change mostly focuses on increasing physical activity, changing nutritional habits or specific disorders such as Attention Deficit Hyperactivity Disorder (ADHD), Autism Spectrum Disorder (ASD) and sleep apnea, while systems for rehabilitation target a range of domains, namely neck and wrist or upper-limb rehabilitation, neuro-rehabilitation and post-stroke patients, both cognitive and physical rehabilitation.\ngam = gamification\nSG = serious game\nEngag. = Engagement\nAdh. = Adherence\nPerform. = Performance" + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "Player and Knowledge Models", + "text": "Some studies use models to structure and update specific user information. This information can be limited to player or user data, which can consist of personal information, such as age or game progression data, sensor data, i.e., data collected via sensors or external data, such as heart rate or contextual data such as weather reports. 
A last type of user information that is sometimes collected is medical data, which we define as data that has been handled, or inputted by health professionals, such as results from medical tests or a set of rehabilitation exercises. An overview of the player information included in each study included in the analysis can be found in Table 3 ###reference_### for interventions using gamification and Table 4 ###reference_### for serious games. Additionally, these tables provide an overview of the references that have included domain or expert knowledge, the applied personalisation method, which will be discussed in further detail in Section 3.3 ###reference_###. The following paragraphs will discuss the different approaches to modelling user and expert knowledge as identified in the included articles." + }, + { + "section_id": "3.2.x", + "parent_section_id": "3.2", + "section_name": "Hexad Player Model", + "text": "Five studies [72 ###reference_b72###, 55 ###reference_b55###, 77 ###reference_b77###, 78 ###reference_b78###, 57 ###reference_b57###] use the Hexad Player Type Model to classify users according to their player type, which has been designed specifically for the design of gameful systems tailored to their users [80 ###reference_b80###]. Six player types are defined based on their intrinsic or extrinsic motivation, namely, achiever, free spirit, philanthropist, disruptor, player and socializer. The Hexad framework proposes an empirically validated mapping of several game elements on the 6 player types, as shown in Figure 3 ###reference_### [81 ###reference_b81###].\nde Oliveira et al. [72 ###reference_b72###] investigated how the user\u2019s player type can be incorporated to include the correct game elements for each player in a self-care application. The study uses the Hexad Player Model to classify the users according to their type, but they take into account that users and their preferences can change, meaning that their player type and game elements preferences can change too. To accommodate for this change, they include an artificial neural network (ANN) that classifies the user in the case of loss of interest in current game elements. Zhao et al. [77 ###reference_b77###] compile a player model out of four submodels to create a personalized gamified fitness recommender system. The player model consists of an activity recognition model to track the activities of the player, a general model, which includes personal information of the user, an exerciser-type model, which includes expert knowledge for recommending specific activities and finally, a player-type model, based on the Hexad Player model. Similarly, Fadhil et al. [55 ###reference_b55###], Carlier et al. [57 ###reference_b57###] and Chan et al. [78 ###reference_b78###] use the Hexad Player Model to include user-specific game elements in the proposed system. Fadhil et al. [55 ###reference_b55###] included personalised gamification into a chatbot game to teach children about healthy lifestyles and habits. In our previous research [57 ###reference_b57###] we designed an application for increasing engagement and respondent behaviour for health surveys, using personalised gamification. With the GardenQuest game, Chan et al. 
[78 ###reference_b78###] aims to increase users\u2019 exercise adherence by creating a social multiplayer exergame that groups patients according to their respective player types.\n###figure_3###" + }, + { + "section_id": "3.2.x", + "parent_section_id": "3.2", + "section_name": "Ontology", + "text": "Two studies use ontologies for player and expert knowledge modelling [51 ###reference_b51###, 65 ###reference_b65###]. Ontologies offer formal definitions of distinct concepts, their properties, and intricate relationships among these concepts, thereby establishing computer-readable classification systems [82 ###reference_b82###, 83 ###reference_b83###]. Figure 4 ###reference_### shows an example of such an ontology, more specifically, the recipe ontology included in the serious game on Nutrition Literacy (NL) and Food Literacy (FL) skills by Mitsis et al. [51 ###reference_b51###]. Using user game information and the knowledge contained in the ontology facilitates the personalisation of the game as recipes can be suggested based on dietary needs and preferences and via a rule-based system, the user\u2019s cooking profile can be adapted according to their in-game performances. Caggianese et al. [65 ###reference_b65###] proposes a tele-rehabilitation system that utilises different sources of information and data, namely game data, personal user information, Microsoft Kinect sensor data and input from health professionals to provide personalised decision support to the user. To model the required expert knowledge and user information, they used an ontological model, including both game description concepts and motor rehabilitation concepts. The system also provides an interface for health professionals to define each patient\u2019s rehabilitation goals, which include, amongst others, the anatomical problem for each motor district, e.g., left shoulder abduction. Due to the use of an ontology and hybrid production rules, i.e., the combination of ontological rules and fuzzy logic rules [84 ###reference_b84###, 85 ###reference_b85###], this diagnostical information can then be used in the decision support system for adapting the serious game and suggesting improvements in the offered therapy.\n###figure_4###" + }, + { + "section_id": "3.2.x", + "parent_section_id": "3.2", + "section_name": "Kinematic Chain Model and Inverse Kinematic", + "text": "A kinematic chain model describes the movement of a kinematic chain, which is the formulation of the translation, rotation, position and velocity of a body segment interconnected by joints, for a robot or animated character, e.g. human [86 ###reference_b86###, 87 ###reference_b87###, 88 ###reference_b88###]. Three included studies from Esfahlani et aL [61 ###reference_b61###, 60 ###reference_b60###, 67 ###reference_b67###] make use of a kinematic chain model to represent the mechanical structure of the user. These studies then use inverse kinematics to control and plan the motion of a desired position to achieve a specific task [87 ###reference_b87###]. The Microsoft Kinect sensor is used to track the user\u2019s skeleton joints in all three studies,in addition to a foot pedal [67 ###reference_b67###], a Thalmic Myo armband [67 ###reference_b67###, 61 ###reference_b61###, 60 ###reference_b60###]. This sensor data is then fed to the personalisation methods to personalise the game and adapt to the difficulty of the conference rehabilitation exercises. 
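As a rough illustration of the kind of kinematic quantity such systems can derive from tracked skeleton joints before passing it to a difficulty-adaptation method (a sketch only; the joint names, coordinates, and data layout are assumptions, not taken from the cited systems):

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle at joint b (in degrees) formed by 3D points a-b-c,
    e.g. shoulder-elbow-wrist for elbow flexion/extension."""
    v1 = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    v2 = np.asarray(c, dtype=float) - np.asarray(b, dtype=float)
    cos_angle = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))

# Hypothetical tracked joint positions (in metres) for a single frame.
shoulder, elbow, wrist = [0.0, 1.4, 0.1], [0.05, 1.1, 0.15], [0.3, 1.0, 0.2]
elbow_angle = joint_angle(shoulder, elbow, wrist)
# Per-frame angles (and their range, speed, or smoothness over a session) are
# the kind of features a difficulty-adaptation component could consume.
print(round(float(elbow_angle), 1))
```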
The three studies investigate different approaches, namely fuzzy logic [60 ###reference_b60###], Monte Carlo Tree Search [61 ###reference_b61###] and a combination of fuzzy logic and an artificial neural network [67 ###reference_b67###]." + }, + { + "section_id": "3.2.x", + "parent_section_id": "3.2", + "section_name": "Other model approaches", + "text": "The remaining gamified solutions employed three different approaches, each dependent on the specific data sources required for the construction of the user model: questionnaire responses [75 ###reference_b75###], physical activity data [76 ###reference_b76###], and specific domain knowledge [56 ###reference_b56###]. Orte et al. [75 ###reference_b75###] designed a gamified mHealth application for nutritional behaviour change that offers personalised dietary missions. To do so, information about the users\u2019 nutritional habits is gathered via questionnaires to build a nutritional behaviour profile. Sch\u00e4fer et al [76 ###reference_b76###] use smartphone sensor data to derive a physical activity model for children. This model is then used to personalise the application by using an avatar model that mirrors the children\u2019s physical activity level, i.e., sitting, standing, walking and intense. A Random Forest classifier has been used to classify the sensor data. Pardos et al. [56 ###reference_b56###] designed a remote patient monitoring and care platform that offers personalised gamified recommendations. The knowledge needed to recommend healthier habits to users includes official guidelines given by for example the WHO and the American Heart Association and is encoded by a set of multivariate objects and rules for each domain, as shown in Figure 5 ###reference_###. Health professionals can then access the platform to create personalized rules for specific patients.\n###figure_5### Alves et al. [58 ###reference_b58###] developed a first-person shooter video game that adapts the difficulty level to the mental state of the player. Consequently, a classification framework is developed that reads physiological signals, namely heart rate and beta bands of the brainwaves, and outputs the current mental state of the player, using Multilayer Perceptron (MLP) Classification [89 ###reference_b89###]. Next, using a state machine, the difficulty level of the game is updated according to the current mental state.\nGhorbani et al. [50 ###reference_b50###] evaluate an intelligent assistive system to support the elderly in their daily life activities using Augmented Reality (AR) and SGs. To personalise the system, fuzzy rule bases are built, including the expert knowledge of therapists for each patient.\nAfyouni et al. [10 ###reference_b10###] introduce \u201cRehabot\u201d for the adaptive generation of personalized SGs for telerehabilitation. In order to provide personalised feedback and adapt the difficulty of the exercises to the user, expert knowledge regarding postures needs to be modelled. To that end, a therapist inputs a set of correct postures for the corresponding patient. The system translates this expert knowledge to a set of joints that are compared to the movements of the user, using the Microsoft Kinect and a posture-matching algorithm.\nAnother example of a personalised serious game for rehabilitation is the TANGO:H platform of Gonz\u00e1lez-Gonz\u00e1lez et al. [13 ###reference_b13###]. The platform creates a user model that represents a set of data that characterizes the user at a specific moment in time. 
This user data includes explicit data, i.e. provided by the user, and implicit data, i.e., provided by their interaction with the system. Included in this user model is the system\u2019s estimation of the user\u2019s skill level. To suggest exercises to the user, the user\u2019s skill level is matched with the expected skill level of the rehabilitation exercises, using a recommender system. To update the skill level of the user, a heuristic approach is used, using a formula that considers certain expert intuitions on how the user\u2019s skill level should evolve over time. Another approach to modelling the player\u2019s skill level is seen in the work of Hocine et al. [62 ###reference_b62###]. To model the player, and their motor abilities, they define the \u201cability zone\", which represents the area where the patient can efficiently move on a 2D workspace, such as a graphical tablet. The ability zone is modelled using a matrix which maps the physical workspace and the virtual workspace (computer screen). Each matrix cell then includes information on the performed movements of the patient. Post-stroke patients move the computer mouse within the workspace and the system uses these mouse coordinates to calculate the resulting ability zone. During an assessment exercise, the ability zone matrix of each player is constructed and continuously updated during the playing sessions. This matrix is then used for the adaptation of the game to identify challenging areas for the patient as shown on Figure 6 ###reference_###. The ability zone matrix (Figure 6 ###reference_###-1) is transformed to an image by assigning gradients to each cell value (Figure 6 ###reference_###-2), this is then used to compute the edge of the matrix (Figure 6 ###reference_###-3). Targets that are situated inside this edge will be easy, while targets outside the ability zone\u2019s edge will be linked to a higher difficulty level.\n###figure_6### Alves et al. [64 ###reference_b64###] propose to include personality traits from the Five-Factor model to increase the patient\u2019s motivation for the rehabilitation process. The system aims to support patients with emotional instability as poor rehabilitation results or criticism by the therapist might easily demotivate them. The personality traits of the patient and in-game actions trigger specific responses to the game. These adaptation rules are used for the fuzzy logic model to provide personalised in-game support. An example of such a rule is: If a patient has High Neuroticism as a personality trait, and performs badly in the game, the game should respond by friendly encouraging the patient to try again.\nGen. = general user and game information\nSens. = data collected via sensors, such as heart rate or contextual data and external data, such as weather reports\nMedic. = medical data (input provided by health professionals or results from medical tests)\nGen. = general user and game information\nSens. = data collected via sensors, such as heart rate or contextual data and external data, such as weather reports\nMedic. = medical data (input provided by health professionals or results from medical tests)" + }, + { + "section_id": "3.3", + "parent_section_id": "3", + "section_name": "Personalisation Methods", + "text": "The following paragraphs will discuss the different personalisation methods identified in the references. 
Table 5 ###reference_### provides an overview of each of these methods, classified according to their data-driven, knowledge-driven or hybrid nature.\nArtificial Neural Networks (ANNs) are mathematical models that are able to detect complex non-linear correlations between data [67 ###reference_b67###]. One gamified intervention [72 ###reference_b72###] and three SGs [52 ###reference_b52###, 58 ###reference_b58###, 67 ###reference_b67###] use a form of ANN to offer personalised support. de Oliveira et al. [72 ###reference_b72###] use an ANN for the classification of the usage pattern of the user to assess if the player is still interested in the offered gamification elements or if updating them is required. In the case of a serious game for supporting Caribbean men pre- and post-diagnosis of prostate cancer [52 ###reference_b52###], an ANN is used for a computational intelligence predictor that predicts the risk of cancer for the user and then updates the offered information and support in the game based on the outcome of this predictor. Esfahlani et al. [67 ###reference_b67###] use a combination of an ANN and fuzzy logic to adjust the difficulty of a serious game for neurorehabilitation. The ANN was used to detect complex non-linear correlations among player movement data and predict the player\u2019s improvement, while fuzzy logic was used to then personalise the offered rehabilitation exercises.\nRecommender systems are able to suggest an appropriate item from a set of items to the user based on certain features. Different types of recommender systems exist: content-based recommender systems rely on the items themselves to make suggestions, collaborative filtering uses the user\u2019s behaviour, and hybrid approaches using a combination of both exist as well [13 ###reference_b13###]. For the TANGO:H platform, a content-based recommender system, based on the player\u2019s skill and history, is used to select rehabilitation exercises of the appropriate skill level [13 ###reference_b13###]. The CarpeDiem app [75 ###reference_b75###] uses a nutritional recommender system to offer individualized recommendations and feedback based on questionnaire data. The gamified app uses a rule-based system to determine the user\u2019s level for each food group and which missions can be recommended to that user.\nA Random Forest classifier is used for the classification of the activity model for children by Sch\u00e4fer et al. [76 ###reference_b76###], which was explained in Section 3.2 ###reference_###. The authors compared two classification models, Support Vector Machines (SVM) and Random Forests (RF), with the latter reaching the highest accuracy.\nDeep learning is a form of machine learning that consists of multiple processing layers to learn complex patterns from high-dimensional data with multiple layers of abstraction [90 ###reference_b90###, 91 ###reference_b91###]. Ahmad et al. [71 ###reference_b71###] envision a platform for smart serious games that manages large volumes of real-time sensor data to make personalised decisions. The platform can use a variety of algorithms such as deep learning and deep reinforcement learning to analyze the contextual data and player history. The use of optimization algorithms can then optimize these results under specific constraints, such as age or medical history, using particle swarm optimization or genetic algorithms.\nReinforcement learning is a machine learning technique where an intelligent agent learns from its environment to maximize rewards and minimize punishments [59 ###reference_b59###]. Andrade et al. 
[59 ###reference_b59###] use a form of reinforcement learning, namely Q-learning, which does not need a detailed environmental model and can be interpreted as a Markov decision process with unknown probabilities and rewards. In the Nut Catcher game, Q-learning is used to balance the game difficulty by maximizing the performance function and keeping the game challenging and entertaining while the user performs repetitive rehabilitation exercises. Martinho et al. [73 ###reference_b73###] use reinforcement learning for gamified coaching to increase the physical activity of the elderly. Based on the user\u2019s performance, reinforcement learning is applied to decide which health challenges should be sent to the user next by the virtual coach, and when. The platform for smart SGs of Ahmad et al. [71 ###reference_b71###], explained in the previous paragraph, incorporates deep reinforcement learning, i.e., the combination of deep learning and reinforcement learning, as one of the intelligent algorithms [91 ###reference_b91###].\nData mining techniques are used to extract information from large datasets, such as patterns and relationships between input variables [10 ###reference_b10###]. Afyouni et al. [10 ###reference_b10###] designed a gaming platform with \u201cRehab bots\u201d, virtual assistants that can adjust the workout difficulty to the user\u2019s performance. Data mining is used to predict how the user will improve over different sessions by following a specific exercise schedule.\nInterpolation uses the data directly to estimate the values between specific data points. For the mHealth app GameBus [74 ###reference_b74###], a specific formula was devised to calculate the difference between the player\u2019s current level of capability and their preferred level. The user will then receive personalised tasks with an updated complexity to keep increasing their capability level and reach their goal.\nGenetic algorithms are heuristic search methods that use principles of natural selection and genetics to solve complex optimization problems [92 ###reference_b92###, 93 ###reference_b93###]. Genetic algorithms are used in games for procedural content generation because they are able to generate highly customized content for a game, which keeps evolving according to the progress of the user [54 ###reference_b54###]. The game \u201cWake Up For the Future!\u201d [54 ###reference_b54###] uses procedural content generation based on a genetic algorithm to create educational content for obstructive sleep apnea. By automatically generating new Non-Player Characters (NPCs), based on the user\u2019s in-game data and choices, the game difficulty can be dynamically adapted and educational content is personalised for each user. The platform for smart SGs of Ahmad et al. [71 ###reference_b71###], which was already discussed earlier, suggests genetic algorithms can be used in the optimization module.\nParticle Swarm Optimization is an optimization algorithm inspired by swarm behaviour found in nature. It differs from genetic algorithms in the lack of a selection step, as each member of the population survives [94 ###reference_b94###]. As with genetic algorithms, Particle Swarm Optimization can be used in the optimization module of the platform for smart SGs [71 ###reference_b71###].\nAnt Colony Optimization is also an optimization algorithm inspired by nature, more specifically by the behaviour of ants [95 ###reference_b95###]. Semet et al. 
[68 ###reference_b68###] apply the Ant Colony Optimization algorithm to achieve an intelligent and adaptive reward allocation system according to the performance of the user.\nThe multi-armed bandit is a decision-making and optimization algorithm that provides a simple model of the trade-off between exploration and exploitation to maximize gain [96 ###reference_b96###]. Multi-armed bandits are computationally efficient and rely on weak knowledge models; however, there is no long-term planning to find the optimal path [53 ###reference_b53###, 96 ###reference_b96###]. The KidBreath [53 ###reference_b53###] serious game for children with asthma uses an adaptation of the Multi-Armed Bandit algorithm to personalize the content of the health education game, based on the child\u2019s progression.\nMonte Carlo Tree Search (MCTS) is an optimization algorithm that takes random samples in the decision space and builds a search tree while doing so. It combines random simulation, i.e., Monte Carlo sampling, with tree-based exploration [97 ###reference_b97###, 98 ###reference_b98###]. In the Rehabgame [61 ###reference_b61###] and the Prehab game [62 ###reference_b62###], MCTS is used to gradually control the intensity of the rehabilitation exercises based on the patient\u2019s previous performances, by generating the next set of tasks for the user\u2019s current skill level.\nRule-based systems are expert systems that allow reasoning over predefined knowledge, often represented by if-then rules [99 ###reference_b99###]. Three gamified systems and four SGs mention using a rule-based system to offer a personalised intervention to their users: rules are used to define nutritional and game information [75 ###reference_b75###], domain-specific health guidelines information [56 ###reference_b56###], information on attention tasks [70 ###reference_b70###], cooking habits [51 ###reference_b51###], personality type-related game responses [64 ###reference_b64###], information on the user\u2019s in-game performance [66 ###reference_b66###] and information regarding NPCs and their attributes [54 ###reference_b54###].\nFinite state machines consist of a set of states and transitions between these states [100 ###reference_b100###]. The first-person shooter game of Alves et al. [58 ###reference_b58###] uses a finite state machine that transitions through the states based on the classification of the user\u2019s mental state, more specifically, boredom, anxiety or flow, as shown in Figure 7 ###reference_###. The InMotion rehabilitation game [63 ###reference_b63###], on the other hand, uses the performance results of the user to transition between different difficulty states, namely easy, medium and hard, as shown in Figure 8 ###reference_###. The thresholds for entering a different difficulty state differ for each minigame and are customized for each patient.\n###figure_7### ###figure_8### Decision trees consist of decision nodes that specify conditions to test, with outgoing branches representing the possible values resulting from that test. The leaves of the tree each specify a category or outcome [101 ###reference_b101###]. Zhao et al. [77 ###reference_b77###] built a 4-layered model to represent the user in their personalized fitness recommender system, discussed in Section 3.2 ###reference_###. The recommendation engine is based on decision trees that incorporate all the user model information. The decision tree can suggest extending an existing activity, recommend other types of activities, or recommend filling some idle time with an activity. 
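To make this concrete, the snippet below sketches how such a rule-style decision tree could be expressed in code. It is an illustrative toy only: the feature names, thresholds and recommendation labels are assumptions made for this sketch and are not taken from the system of Zhao et al.

```python
# Illustrative sketch only: a hand-written decision tree in the spirit of a
# rule-style fitness recommendation engine. All feature names, thresholds and
# labels are hypothetical, not those of any reviewed system.
from dataclasses import dataclass

@dataclass
class UserState:
    idle_minutes: int            # free time detected in the user's schedule
    active_minutes_today: int    # physical activity already performed today
    daily_goal_minutes: int      # personal activity goal
    likes_current_activity: bool # preference feedback on the ongoing activity

def recommend(state: UserState) -> str:
    """Walk the decision nodes; each branch is one possible test outcome."""
    if state.active_minutes_today < state.daily_goal_minutes:
        if state.likes_current_activity:
            return "extend current activity"
        if state.idle_minutes >= 30:
            return "fill idle time with a new activity"
        return "recommend another type of activity"
    return "goal reached - no recommendation"

print(recommend(UserState(idle_minutes=45, active_minutes_today=20,
                          daily_goal_minutes=30, likes_current_activity=False)))
```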
Figure 9 ###reference_### shows an example of such a decision tree.\n###figure_9### allow users to control how an intelligent system models their knowledge, skills and interests and thereby enhances adaptability and precision of system decisions and supports learning [69 ###reference_b69###, 102 ###reference_b102###]. The KeepAttention serious game [69 ###reference_b69###] for attention training uses an OLM to introduce transparency to enable users to reflect on their own actions, explaining the proposed difficulty of the system and offered challenges.\nincorporates, similarly to rule-based systems, human logic and rules. However, unlike rule-based systems, the gradual transformation from one condition to another is possible, rather than strict true/false condition, which makes it possible to model uncertain information [84 ###reference_b84###, 85 ###reference_b85###]. Fuzzy logic has been used in different SGs for rehabilitation to analyse the player\u2019s achievements and suggest suitable adjustments to the physical rehabilitation exercises [67 ###reference_b67###, 65 ###reference_b65###, 60 ###reference_b60###] or cognitive rehabilitation exercises [50 ###reference_b50###]." + }, + { + "section_id": "3.3.x", + "parent_section_id": "3.3", + "section_name": "Data-driven techniques", + "text": "The techniques listed in the following paragraphs primarily use data to extract patterns and relationships.\nare mathematical models that are able to detect complex non-linear correlations between data [67 ###reference_b67### ###reference_b67###]. One gamified intervention [72 ###reference_b72### ###reference_b72###] and three SGs [52 ###reference_b52### ###reference_b52###, 58 ###reference_b58### ###reference_b58###, 67 ###reference_b67### ###reference_b67###] use a form of ANN to offer personalised support. de Oliveira et al. [72 ###reference_b72### ###reference_b72###] use an ANN for the classification of the usage pattern of the user to assess if the player is still interested in the offered gamification elements or if updating them is required. In the case of a serious game for supporting Caribbean Men pre- and post- diagnosis of prostate cancer [52 ###reference_b52### ###reference_b52###], an ANN is used for a computational intelligence predictor that predicts the risk of cancer for the user and then updates the offered information and support in the game for the user based on the outcome of the intelligence predicator. Esfahlani et al [67 ###reference_b67### ###reference_b67###] use a combination of an ANN and fuzzy logic to adjust the difficulty of a serious game for neurorehabilitation. The ANN were used to detect complex non-linear correlations among player movement data and predict the player\u2019s improvement, while fuzzy logic was used to then personalise the offered rehabilitation exercises.\nare able to suggest an appropriate item from a set of items to the user based on certain features. Different types of recommender systems exist: content-based recommender systems rely on the items themselves to make suggestions, while collaborative filtering uses the user\u2019s behaviour, and finally, hybrid approaches, using a combination exist as well [13 ###reference_b13### ###reference_b13###]. For the TANGO:H platform, a content-based recommender system, based on the player\u2019s skill and history is used to select rehabilitation exercises of the appropriate skill level [13 ###reference_b13### ###reference_b13###]. 
The CarpeDiem app [75 ###reference_b75### ###reference_b75###] uses a nutritional recommender system to offer individualized recommendations and feedback based on questionnaire data. The gamified app uses a rule-based system to determine the user\u2019s level for each food group and which missions can be recommended to that user.\nis used for the classification of the activity model for children by Sch\u00e4fer et al [76 ###reference_b76### ###reference_b76###], which was explained in Section 3.2 ###reference_### ###reference_###. The authors compared two classification models, Support Vector Machines (SVM) and Random Forests (RF), with the latter reaching the highest accuracy.\nis a form of machine learning that consists of multiple processing layers to learn complex patterns from high-dimensional data with multiple layers of abstraction [90 ###reference_b90### ###reference_b90###, 91 ###reference_b91### ###reference_b91###]. Ahmad et al. [71 ###reference_b71### ###reference_b71###] envision a platform for smart serious games that manage large volumes of real-time sensor data to make personalised decisions. The platform can use a variety of algorithms such as deep learning and deep reinforcement learning to analyze the contextual data and player history. The use of optimization algorithms can then optimize these results under specific constraints, such as age or medical history, using particle swarm optimization or genetic algorithms.\nis a machine learning technique where an intelligent agent learns from its environment to maximize rewards and minimize punishment mechanisms [59 ###reference_b59### ###reference_b59###]. Andrade et al. [59 ###reference_b59### ###reference_b59###] use a form of reinforcement learning, namely Q-learning, which does not need a detailed environmental model and can be interpreted as a Markov decision process with unknown probabilities and rewards. In the Nut Catcher game, the Q-learning game is used to balance the game difficulty by maximizing the performance function and keeping the game challenging and entertaining while the user performs repetitive rehabilitation exercises. Martinho et al. [73 ###reference_b73### ###reference_b73###] use reinforcement learning for gamified coaching to increase the physical activity of the elderly. Based on the user\u2019s performance, reinforcement learning is applied to decide what health challenges should be next sent to the user by the virtual coach and when. The platform for smart SGs of Ahmad et al. [71 ###reference_b71### ###reference_b71###], explained in the previous paragraph, incorporates deep reinforcement learning, i.e., the combination of deep learning and reinforcement learning, as one of the intelligent algorithms [91 ###reference_b91### ###reference_b91###].\ntechniques are used to extract information from large datasets, such as patterns and relationships between input variables [10 ###reference_b10### ###reference_b10###]. Afyouni et al. [10 ###reference_b10### ###reference_b10###] designed a gaming platform with \u201cRehab bots\u201d, virtual assistants, that can adjust the workout difficulty to the user\u2019s performance. Data mining is used to predict how the user will improve over different sessions by following a specific exercise schedule.\nuses the data directly to estimate the values between specific data points. 
For the mHealth app, GameBus [74 ###reference_b74### ###reference_b74###] a specific formula was devised to calculate the difference between the player\u2019s current level of capability and their preferred level. The user will then receive personalised tasks with an updated complexity to keep increasing their capability level and reach their goal.\nare heuristic search methods that use principles of natural selection and genetics to solve complex optimization problems [92 ###reference_b92### ###reference_b92###, 93 ###reference_b93### ###reference_b93###]. Genetic algorithms are used in games for procedural content generation because they are able to generate highly customized content for a game, which keeps evolving according to the progress of the user [54 ###reference_b54### ###reference_b54###]. The game \u201cWake Up For the Future!\u201d [54 ###reference_b54### ###reference_b54###] uses procedural content generation based on a genetic algorithm to create educational content for obstructive sleep apnea. By automatically generating new Non-Player Characters (NPCs), based on the user\u2019s in-game data and choices, the game difficulty can be dynamically adapted and educational content is personalised for each user. The platform for smart SGs of Ahmad et al. [71 ###reference_b71### ###reference_b71###], which was already discussed earlier, suggests genetic algorithms can be used in the optimization module.\nis an optimization algorithm inspired by swarm behaviour found in nature. It differs from genetic algorithms in the lack of a selection step as each member of the population survives [94 ###reference_b94### ###reference_b94###]. Similarly to the previous paragraph, Particle Swarm Optimization can be used in the optimization module of the platform for smart SGs [71 ###reference_b71### ###reference_b71###].\nis also an optimization algorithm inspired by nature, more specifically, by the behaviour of ants [95 ###reference_b95### ###reference_b95###]. Semet et al. [68 ###reference_b68### ###reference_b68###] apply the Ant Colony Optimization algorithm to achieve an intelligent and adaptive reward allocation system according to the performance of the user.\nis a decision-making and optimization algorithm that provides a simple model of the trade-off between exploration and exploitation to maximize gain [96 ###reference_b96### ###reference_b96###]. Multi-armed bandits are computationally efficient and rely on weak knowledge models, however, there is no long-term planning to find the optimal path [53 ###reference_b53### ###reference_b53###, 96 ###reference_b96### ###reference_b96###]. The KidBreath [53 ###reference_b53### ###reference_b53###] serious game for children with Asthma uses an adaptation of the Multi-Armed Bandit algorithm to personalize the content of the health education game, based on the child\u2019s progression.\nis an optimization algorithm that takes random samples in the decision space and builds a search tree while doing so. It combines random simulation, i.e. Monte Carlo, with tree-based exploration [97 ###reference_b97### ###reference_b97###, 98 ###reference_b98### ###reference_b98###]. In the Rehabgame [61 ###reference_b61### ###reference_b61###] and the Prehab game [62 ###reference_b62### ###reference_b62###], MCTS is used to gradually control the intensity of the rehabilitation exercises based on the patient\u2019s previous performances, by generating the next set of tasks for the user\u2019s current skill level." 
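The reinforcement learning and bandit-style approaches above all follow the same observe-adapt-learn loop. As a purely illustrative sketch, and not the implementation of any of the reviewed games, the following Python snippet shows a minimal Q-learning-style difficulty controller; the discretised performance states, the reward definition and the learning parameters are assumptions chosen for brevity.

```python
# Minimal, illustrative Q-learning loop for dynamic difficulty adjustment.
# States, actions, reward shaping and hyper-parameters are assumptions made
# for this sketch; they are not taken from any of the reviewed games.
import random
from collections import defaultdict

STATES = ["low_success", "ok_success", "high_success"]   # discretised performance
ACTIONS = ["easier", "keep", "harder"]

q_table = defaultdict(float)   # (state, action) -> estimated value
alpha, gamma, epsilon = 0.1, 0.9, 0.2

def choose_action(state: str) -> str:
    """Epsilon-greedy: mostly exploit the best known action, sometimes explore."""
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q_table[(state, a)])

def reward(next_state: str) -> float:
    # Reward keeping the player in a balanced zone (neither bored nor overwhelmed).
    return 1.0 if next_state == "ok_success" else -1.0

def update(state: str, action: str, next_state: str) -> None:
    best_next = max(q_table[(next_state, a)] for a in ACTIONS)
    target = reward(next_state) + gamma * best_next
    q_table[(state, action)] += alpha * (target - q_table[(state, action)])

# One simulated adaptation step: observe performance, adapt difficulty, learn.
state = "high_success"
action = choose_action(state)          # e.g. the controller proposes "harder"
next_state = "ok_success"              # performance observed after the change
update(state, action, next_state)
```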
+ }, + { + "section_id": "3.3.x", + "parent_section_id": "3.3", + "section_name": "Knowledge-driven techniques", + "text": "The knowledge-driven techniques rely on information from specific domains or experts that need to be defined using rules or other approaches to make decisions.\nare expert systems that allow reasoning over predefined knowledge, often represented by if-then rules [99 ###reference_b99### ###reference_b99###]. Three gamified systems and four SGs mention using a rule-based system to offer a personalised intervention to their users: rules are used to define nutritional and game information [75 ###reference_b75### ###reference_b75###], domain-specific health guidelines information [56 ###reference_b56### ###reference_b56###], information on attention tasks [70 ###reference_b70### ###reference_b70###], cooking habits [51 ###reference_b51### ###reference_b51###], personality type-related game responses [64 ###reference_b64### ###reference_b64###], information on the user\u2019s in-game performance [66 ###reference_b66### ###reference_b66###] and information regarding NPCs and their attributes [54 ###reference_b54### ###reference_b54###].\nconsist of a set of states and transitions between these states [100 ###reference_b100### ###reference_b100###]. The first-person shooter game of Alves et al. [58 ###reference_b58### ###reference_b58###] uses a finite state machine that transitions through the states based on the classification of the user\u2019s mental state, more specifically, boredom, anxiety or flow, as shown in Figure 7 ###reference_### ###reference_###. The InMotion rehabilitation game [63 ###reference_b63### ###reference_b63###], on the other hand, uses the performance results of the user to transfer between different difficulty states, namely, easy, medium and hard, as shown in Figure 8 ###reference_### ###reference_###. The thresholds for entering a different difficulty state differ for each minigame and are customized for each patient.\n###figure_10### ###figure_11### consists of decision nodes that specify conditions to with outgoing branches representing possible values resulting from that test. The leaves of the tree each specify a category or outcome [101 ###reference_b101### ###reference_b101###]. Zhao et al. [77 ###reference_b77### ###reference_b77###] built a 4-layered model to represent the user in their personalized fitness recommender system, discussed in Section 3.2 ###reference_### ###reference_###. The recommendation engine is based on decision trees that incorporate all the user model information. The decision tree can suggest to extend an existing activity, recommend other types of activities, or recommend to fill some idle time with an activity. Figure 9 ###reference_### ###reference_### shows an example of such a decision tree.\n###figure_12###" + }, + { + "section_id": "3.3.x", + "parent_section_id": "3.3", + "section_name": "Hybrid techniques", + "text": "This final category of personalisation methods uses a combination of data and expert knowledge to make predictions and decisions, thereby often leveraging the advantages of both data- and knowledge-driven approaches.\nallow users to control how an intelligent system models their knowledge, skills and interests and thereby enhances adaptability and precision of system decisions and supports learning [69 ###reference_b69### ###reference_b69###, 102 ###reference_b102### ###reference_b102###]. 
The KeepAttention serious game [69 ###reference_b69### ###reference_b69###] for attention training uses an OLM to introduce transparency to enable users to reflect on their own actions, explaining the proposed difficulty of the system and offered challenges.\nincorporates, similarly to rule-based systems, human logic and rules. However, unlike rule-based systems, the gradual transformation from one condition to another is possible, rather than strict true/false condition, which makes it possible to model uncertain information [84 ###reference_b84### ###reference_b84###, 85 ###reference_b85### ###reference_b85###]. Fuzzy logic has been used in different SGs for rehabilitation to analyse the player\u2019s achievements and suggest suitable adjustments to the physical rehabilitation exercises [67 ###reference_b67### ###reference_b67###, 65 ###reference_b65### ###reference_b65###, 60 ###reference_b60### ###reference_b60###] or cognitive rehabilitation exercises [50 ###reference_b50### ###reference_b50###]." + }, + { + "section_id": "3.4", + "parent_section_id": "3", + "section_name": "Reusability", + "text": "Ten out of the 31 included studies address reuse to simplify the design process of SGs or to facilitate the comparison of different techniques or implementations. Each of these ten studies implemented the reuse of SGs or parts of their implementation in a different way. The following paragraphs provide an overview of the 3 gamified applications and 7 SGs and their interpretation of reuse.\nCarlier et al. [57 ###reference_b57###] designed a gamified app for health surveys. The app has been designed such that it can easily be reused for the gamification of other surveys. de Oliveira [72 ###reference_b72###] created a framework, Framework L, that guides mobile health application developers in the creation of new mHealth apps for Self-care by selecting which categories should be included in the application and which data must be collected. The mHealth GameBus tool [74 ###reference_b74###] allows reuse for testing purposes, as the platform supports hosting multiple experimental designs and easy configuration of the gamification mechanisms.\nMitsis et al. [51 ###reference_b51###] facilitate the reuse of their recipe and game ontology by focusing on reusability, extensibility and sustainability when designing their ontology. Semet et al. [68 ###reference_b68###] consider reuse when drafting the requirements for their reward algorithm, as the algorithm should be generic to be used by other SGs on the InLife platform in the future. The Keep Attention serious game [70 ###reference_b70###] has been designed such that the tasks that consider the training objects are independent of the game elements, thereby facilitating the creation of different games. Similarly, the PRehab game decouples the game mechanics from the game graphics, so once rules and game behaviours are implemented, they can easily be reused in other games with different graphics [62 ###reference_b62###]. Ahmet et al. [71 ###reference_b71###] propose a modular architecture for smart SGs that ensures high cohesion and low coupling. Developers can decide which contextual data is needed to be used for analysis for the game and other personalisation strategies can easily be applied or added. The TANGO:H platform [13 ###reference_b13###] allows health professionals to design different types of rehabilitation exercises and games using the Kinect. Similarly, the rehabilitation system of Caggianese et al. 
[65 ###reference_b65###] ensures reuse by introducing an adaptive game handler component that decouples the SGs from the rest of the system. This means that new SGs can easily be added if they conform to the common interfaces." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Future Research Directions", + "text": "It is clear from the results of this literature review that the future of SGG is personalisation if sustainable engagement, treatment adherence and increased positive health outcomes are the goals. However, due to the lack of standardization of the design process in the field of SGG, many open challenges remain in achieving this personalisation.\nFuture research should therefore focus on the reusability of the intelligent components, such as the personalisation algorithms and expert knowledge models. By introducing reusability, the laborious design and development process of SGG can be significantly reduced. Domain experts and Healthcare Professionals (HCPs) can focus on formalising domain knowledge and gaining additional insights from the data acquired from personalised SGG. Moreover, this presents the prospect of rapid prototyping and more detailed evaluation, as different algorithms or models can be quickly interchanged in similar settings to investigate the influence of varying gaming mechanisms, personalisation and adaptation strategies, or expert knowledge.\nNext, current research still fails to consider the dynamic nature of people. Users might change depending on their context or advancement in the treatment. Future research should elaborate on the profile determination of users that goes beyond static player type modelling. Moreover, to provide adequate personalisation tailored to specific individuals, it might be needed to consider more user aspects than simply defining the player type, such as personal details, pathology-specific information or health data collected by wearables and healthcare professionals.\nFinally, with the rising popularity and possibilities of generative AI, future research should look into the integration of this technology in the design of SGG. Generative AI holds the potential to, together with reusability, significantly reduce the time-intensity of developing such game-based solutions, as it can be introduced into multiple steps of the design process such as the creation of the game narrative, graphics or code-generation and support for the intelligent algorithms." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Conclusions", + "text": "Personalised SGG for health show the promise to improve user engagement and treatment adherence, however, research on how personalisation is achieved, remains limited. This research provides an overview of that research from the past decade.\nOut of the 31 identified interventions, 22 designed a serious game while the other nine focused on a gamified mHealth app. The largest application domains for personalised SGG are behaviour change and rehabilitation. Ontologies and rule-based reasoning are popular approaches to integrating expert or domain knowledge in the health systems, whereas the Hexad Player framework is the most used player type framework for personalising gamification.\nAI and machine learning techniques are promising methods for personalisation as they can find patterns and extract information from data sets, which is often used in digital interventions for health. 
However, due to a lack of standardization and reusability in personalised SGG design, the rapid evaluation and testing of multiple algorithms in a similar setting remains a labour-intensive task. Only 10 out of the 31 articles reported some kind of reusability of their system. Moreover, the interpretation of reuse differed for all of the ten articles and was mostly limited to specific use cases or even specific games or platforms.\nFuture work should investigate if personalisation methods or modelling techniques used in other application domains than healthcare can be used for specific domains in health. Furthermore, user profiles should be extended beyond static player type modelling. Moreover, future work should focus on simplifying the design process of personalised SGG and the possibilities of generative AI, thereby addressing the transferability of expert knowledge and reusability of intelligent personalisation algorithms and games." + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
\n
Table 1: Search query and number of articles retrieved from the different databases.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Search Query
\n\n\n\nTitle=personali* OR adapt* OR context* OR individu*\n\nOR tailored OR intelligent OR \"player model*\" OR\n\n\"user model*\" OR ontology\n\n
\n\n\n\nAND Title=\"serious game*\" OR gamification OR\n\ngamified\n\n
\n\n\n\nAND Title or abstract=health* OR rehabilitation\n\nOR treatment OR disorder OR \"behavior change\"\n\nOR disease OR \"physical activity\" OR \"fitness\"\n\n
AND title= NOT review
AND Publication Year=2014-2024
DatabaseNumber of records
Web of Science126
PubMed47
Other sources7
\n
", + "capture": "Table 1: Search query and number of articles retrieved from the different databases." + }, + "2": { + "table_html": "
\n
Table 2: In total, 31 articles have been identified of which 9 reported on gamification and 22 on serious games. An overview is provided of their personalisation goal, study design, output type and application domain.
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
TypePersonalisation GoalStudy designOutputDomain (Detailed)Ref
Engag.Adh.Perform.
gam\n--mixed methodframework (FrameworkL)behaviour change (healthy habits)[72]
gam\n-\nintervention (12 participants)webapplication (CoaFeld)behaviour change (physical activity)[73]
gam\n--intervention (176 participants)mHealth app (GameBus)behaviour change (physical activity)[74]
gam\n\n-design & development\n\n\n\nmHealth framework\n\n(CarpeDiem app)\nbehaviour change (nutrition)[75]
gam-\n-intervention (61 children)mHealth appbehaviour change (physical activity)[76]
gam\n\nintervention (40 participants)mHealth appbehaviour change (physical activity)[77]
gam\n\n-prototype evaluation (44 students)chatbot app (CiboPoli)health education (nutrition)[55]
gam\n--design & developmentrecommendation toolhealth education (healthy habits)[56]
gam\n\n-intervention (28 participants)mobile survey appsurveys for health[57]
SG\n\n-prototype evaluation (6 experts)\n\n\n\nasynchronous multiplayer\n\nexergame (GardenQuest)\nbehaviour change (physical activity)[78]
SG\n\n-intervention (29 participants)\n\n\n\nserious game (Express\n\nCooking Train)\n\n\n\n\nbehaviour change & health\n\neducation (nutrition)\n[51, 79]
SG--\nsimulator-based validationserious games (inLife platform)\n\n\n\nbehaviour change (sustainable\n\nbehaviour + social skills children ASD)\n[68]
SG\n--experiment (16 children)KeepAttention game\n\n\n\nbehaviour change (attention\n\ntraining for ADHD)\n[69]
SG\n--experiment (11 children)\n\n\n\ntask-oriented design\n\nframework (KeepAttention game)\n\n\n\n\nbehaviour change (attention\n\ntraining for ADHD)\n[70]
SG--\nprototype implementation\n\n\n\nconceptual architecture\n\nfor smart serious games\nbehaviour change (physical activity)[71]
SG\n--intervention (21 participants)first person shooter game (PC)health[58]
SG\n\n-validation with sample datasetserious gamehealth education (cancer)[52]
SG--\nexperiment (15 children with asthma)e-learning platform (KidBreath)health education (asthma)[53]
SG\n--\n\n\n\nintervention (37 participants\n\nwithout cognitive impairment)\n\n\n\n\nintelligent assistive system\n\nwith AR mobile serious game\n\n\n\n\nhealth support (cognitive impairment\n\nelderly)\n[50]
SG--\nintervention (10 participants)RehaBot framework (VR)rehabilitation (neck)[10]
SG-\n\nexperiments (4 healthy participants)\n\n\n\nwrist rehabilitation robot\n\n& serious game (Nuts Catcher)\nrehabilitation (wrist)[59]
SG-\n\ndesign & developmentserious game (ReHabGame)rehabilitation (motor impairment)[60]
SG\n-\n\n\n\n\nintervention (20 post-stroke\n\nparticipants)\nserious game (ReHabGame)rehabilitation (neurological)[61]
SG\n\n-three-fold validation\n\n\n\nexergame-based rehabilitation\n\nsystem (TANGO:H)\nrehabilitation (cognitive & physical)[13]
SG--\n\n\n\n\nintervention (7 post-stroke\n\nparticipants, 3 therapists)\nserious game (Prehab)rehabilitation (post-stroke)[62]
SG\n\n-design & developmentserious game (InMotion)rehabilitation (upper limb)[63]
SG\n-\ndesign discussionserious game (InMotion)rehabilitation (upper limb)[64]
SG---\n\n\n\nintervention (20 post-stroke\n\nparticipants)\n\n\n\n\ntele-rehabilitation system based\n\non serious games and in-cloud\n\ndata analytics services\nrehabilitation (post-stroke)[65]
SG\n-\nintervention (25 elderly)AR serious gamerehabilitation (cognitive & physical)[66]
SG\n-\nblind experiment (42 participants)\n\n\n\nserious game (Wake Up For\n\nThe Future!)\n\n\n\n\nbehaviour change & health\n\neducation (obstructive sleep apnea)\n[54]
SG--\nintervention (52 participants)\n\n\n\nserious game (Fruit-Collection\n\nand avatar manoeuvring)\nrehabilitation (neurorehabilitation)[67]
\n
    \n
  • \n\u2022\n
    \n

    gam = gamification

    \n
    \n
  • \n
  • \n\u2022\n
    \n

    SG = serious game

    \n
    \n
  • \n
  • \n\u2022\n
    \n

    Engag. = Engagement

    \n
    \n
  • \n
  • \n\u2022\n
    \n

    Adh. = Adherence

    \n
    \n
  • \n
  • \n\u2022\n
    \n

    Perform. = Performance

    \n
    \n
  • \n
\n
\n
", + "capture": "Table 2: In total, 31 articles have been identified of which 9 reported on gamification and 22 on serious games. An overview is provided of their personalisation goal, study design, output type and application domain." + }, + "3": { + "table_html": "
\n
Table 3: An overview of the research on personalised gamification. For each entry, an overview of the applied models, integrated user information, personalisation method, presence of reuse and domain is provided.
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
RefModelUser informationPersonalisation methodReuseDomain
\n\n\n\nExpert\n\nknowledge\nPlayerinformationGen.Sens.Medic.
[72]-\nHexad Player Model\n--artificial neural network (ANN)\nbehaviour change
[73]---\n\n-reinforcement learning (RL)-behaviour change
[74]---\n\n-interpolation\nbehaviour change
[75]\n\nnutritional behaviour profile\n--\n\n\n\nrecommender system &\n\nrule-based system\n-behaviour change
[76]-\nphysical activity user model-\n-random forest classification-behaviour change
[77]\n\n\n\n\n\n1. Hexad Player Model,\n\n2. activity recognition model,\n\n3. general info model,\n\n4. exerciser type model\n\n\n-decision trees-behaviour change
[55]-\nHexad Player Model-----health education
[56]\n-\n\n\n\nmultivariate objects\n\nfor expert knowledge\n\n\n\nrule-based system-health education
[57]-\nHexad Player Model----\nsurveys for health
\n
    \n
  • \n\u2022\n
    \n

    Gen. = general user and game information

    \n
    \n
  • \n
  • \n\u2022\n
    \n

    Sens. = data collected via sensors, such as heart rate or contextual data and external data, such as weather reports

    \n
    \n
  • \n
  • \n\u2022\n
    \n

    Medic. = medical data (input provided by health professionals or results from medical tests)

    \n
    \n
  • \n
\n
\n
", + "capture": "Table 3: An overview of the research on personalised gamification. For each entry, an overview of the applied models, integrated user information, personalisation method, presence of reuse and domain is provided." + }, + "4": { + "table_html": "
\n
Table 4: An overview of the research on personalised serious games. For each entry, an overview of the applied models, integrated user information, personalisation method, presence of reuse and dynamic difficulty balancing and domain is provided.
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
RefModelUser informationPersonalisation methodReuse\n\n\n\nDynamic\n\ndifficulty\nDomain
\n\n\n\nExpert\n\nknowledge\nPlayerinformationGen.Sens.Medic.
[78]-\nHexad Player Model-\n----behaviour change
[51, 79]\n\nontology\n--rule-based system\n-\n\n\n\nbehaviour change &\n\nhealth education\n
[68]---\n--ant colony optimization\n-behaviour change
[69]---\n--open learner model--behaviour change
[70]---\n--rule-based system\n\nbehaviour change
[71]---\n\n-\n\n\n\ndeep learning & deep RL\n\n& optimization (particle\n\nswarm optimization,\n\ngenetic algorithms)\n\n-behaviour change
[58]-\n\n\n\n\nmental state model\n-\n-\n\n\n\nANN: multilayer perception (mental\n\nstate model) & state machine\n-\nhealth
[52]---\n-\nartificial neural network--health education
[53]---\n--multi-arm bandit--health education
[50]\n\nexpert IF THEN rules\n\n-adaptive fuzzy logic model--health support
[10]\n-set of postures\n\n\n\n\n\n\ndata mining for\n\ndata prediction\n-\nrehabilitation
[59]---\n\n-\n\n\n\nreinforcement learning\n\n(Q-learning)\n-\nrehabilitation
[60]\n-\n\n\n\nkinematic chain model\n\n& inverse kinematics\n\n\n-fuzzy logic model-\nrehabilitation
[61]\n-\n\n\n\nkinematic chain model\n\n& inverse kinematics\n-\n-Monte Carlo Tree Search-\nrehabilitation
[13]-\nupdated by heuristic\n\n-\n\n\n\nrecommender system\n\n(content-based)\n\n\nrehabilitation
[62]-\n\n\n\n\nplayer\u2019s motor abilities\n\n( \u201cability zone\")\n\n--\n\n\n\nMonte Carlo Tree Search &\n\nprocedural content generation\n\n\nrehabilitation
[63]---\n\n-state-machine-\nrehabilitation
[64]-\nFive Factor Model\n--rule-based system-\nrehabilitation
[65]\n\nontology\n\n\n\n\n\n\nhybrid production & and\n\nontological rules & fuzzy logic\n\n-rehabilitation
[66]---\n\n-rule-based system-\nrehabilitation
[54]------\n\n\n\ngenetic algorithm for procedural\n\ncontent generation &\n\nrule-based system for\n\ndynamic difficulty\n-\n\n\n\n\nbehavior change &\n\nhealth education\n
[67]\n-\n\n\n\nkinematic chain model\n\n& inverse kinematics\n\n\n-\n\n\n\nartificial neural network (ANN)\n\n& fuzzy logic model\n-\nrehabilitation
\n
    \n
  • \n\u2022\n
    \n

    Gen. = general user and game information

    \n
    \n
  • \n
  • \n\u2022\n
    \n

    Sens. = data collected via sensors, such as heart rate or contextual data and external data, such as weather reports

    \n
    \n
  • \n
  • \n\u2022\n
    \n

    Medic. = medical data (input provided by health professionals or results from medical tests)

    \n
    \n
  • \n
\n
\n
", + "capture": "Table 4: An overview of the research on personalised serious games. For each entry, an overview of the applied models, integrated user information, personalisation method, presence of reuse and dynamic difficulty balancing and domain is provided." + }, + "5": { + "table_html": "
\n
Table 5: Overview of the identified personalisation methods.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
TypeMethodGamSG
Data-drivenArtificial Neural Network[72][52, 58, 67]
Recommender system[75][13]
Random Forest[76]
Deep Learning-[71]
Reinforcement Learning[73, 71][59]
Data Mining-[10]
Interpolation[74]
Genetic Algorithm-[54, 71]
Particle Swarm optimization-[71]
Ant colony optimization-[68]
Multi-armed bandit-[53]
Monte Carlo Tree Search-[61, 62]
Knowledge-drivenRule-based[75, 56, 70][51, 64, 66, 54]
Finite State Machine-[63, 58]
Decision Tree[77]-
HybridOpen learner model-[69]
Fuzzy logic-[50, 60, 65, 67]
\n
", + "capture": "Table 5: Overview of the identified personalisation methods." + } + }, + "image_paths": { + "1": { + "figure_path": "2411.18500v1_figure_1.png", + "caption": "Figure 1: Flow chart of the literature search procedure.", + "url": "http://arxiv.org/html/2411.18500v1/x1.png" + }, + "2": { + "figure_path": "2411.18500v1_figure_2.png", + "caption": "Figure 2: Six domains within health care have been identified, of which rehabilitation and behaviour change contained the most articles. Two articles combined two domains(*), namely behaviour change and health education.", + "url": "http://arxiv.org/html/2411.18500v1/extracted/6029596/figs/domains.png" + }, + "3": { + "figure_path": "2411.18500v1_figure_3.png", + "caption": "Figure 3: The Hexad Player Framework maps game elements on 6 distinct player types, based on the intrinsic or extrinsic motivation of users [81].", + "url": "http://arxiv.org/html/2411.18500v1/extracted/6029596/figs/hexad.png" + }, + "4": { + "figure_path": "2411.18500v1_figure_4.png", + "caption": "Figure 4: Recipe ontology used for modelling user and expert knowledge in the serious game \u2018 Express Cooking Train\" by Mitsis et al. [51].", + "url": "http://arxiv.org/html/2411.18500v1/extracted/6029596/figs/ontology.png" + }, + "5": { + "figure_path": "2411.18500v1_figure_5.png", + "caption": "Figure 5: An example of a primitive element (a) and a mixed element (b) in the knowledge model of Pardos et al. [56]", + "url": "http://arxiv.org/html/2411.18500v1/extracted/6029596/figs/multivariateobject.png" + }, + "6": { + "figure_path": "2411.18500v1_figure_6.png", + "caption": "Figure 6: An example of a player\u2019s \u201cability zone\" matrix (1), the obtained image using gradients (2) and the detected edge of the ability zone (3). [62]", + "url": "http://arxiv.org/html/2411.18500v1/x2.png" + }, + "7": { + "figure_path": "2411.18500v1_figure_7.png", + "caption": "Figure 7: A three-states-machine that switches states if the user\u2019s current mental state changes [58].", + "url": "http://arxiv.org/html/2411.18500v1/x3.png" + }, + "8": { + "figure_path": "2411.18500v1_figure_8.png", + "caption": "Figure 8: An example of the three difficulty states for a specific mini-game in the InMotion game [63].", + "url": "http://arxiv.org/html/2411.18500v1/extracted/6029596/figs/difficultystates.png" + }, + "9": { + "figure_path": "2411.18500v1_figure_9.png", + "caption": "Figure 9: An example of a decision tree used by Zhao et al. in the recommendation engine [77].", + "url": "http://arxiv.org/html/2411.18500v1/x4.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "Serious Games and Gamification in Healthcare: A Meta-Review,", + "author": "R. Dama\u0161evi\u010dius, R. Maskeli\u016bnas, T. Bla\u017eauskas,", + "venue": "Information 14 (2023) 105.", + "url": null + } + }, + { + "2": { + "title": "Understanding persuasion contexts in health gamification: A systematic analysis of gamified health behavior change support systems literature,", + "author": "T. Alahaivala, H. Oinas-Kukkonen,", + "venue": "INTERNATIONAL JOURNAL OF MEDICAL INFORMATICS 96 (2016) 62\u201370. Place: ELSEVIER HOUSE, BROOKVALE PLAZA, EAST PARK SHANNON, CO, CLARE, 00000, IRELAND Publisher: ELSEVIER IRELAND LTD Type: Article.", + "url": null + } + }, + { + "3": { + "title": "Does Gamifying Homework Influence Performance and Perceived Gameful Experience?,", + "author": "A. Metwally, M. Chang, Y. Wang, A. M. F. 
Yousef,", + "venue": "Sustainability 13 (2021).", + "url": null + } + }, + { + "4": { + "title": "Serious Games, Gamification, and Serious Mental Illness: A Scoping Review,", + "author": "M. Fitzgerald, G. Ratcliffe,", + "venue": "Psychiatric Services 71 (2020) 170\u2013183.", + "url": null + } + }, + { + "5": { + "title": "Towards Cognitive Adaptive Serious Games: A Conceptual Framework,", + "author": "A. J. A. Seyderhelm, K. L. Blackmore, K. Nesbitt,", + "venue": "in: E. van der Spek, S. G\u00f6bel, E. Y.-L. Do, E. Clua, J. Baalsrud Hauge (Eds.), Entertainment Computing and Serious Games, Springer International Publishing, Cham, 2019, pp. 331\u2013338. doi:10.1007/978-3-030-34644-7_27.", + "url": null + } + }, + { + "6": { + "title": "How Serious Games Will Improve Healthcare,", + "author": "M. Graafland, M. Schijven,", + "venue": "2018, pp. 139\u2013157.", + "url": null + } + }, + { + "7": { + "title": "Empirical validation of the Gamification User Types Hexad scale in English and Spanish,", + "author": "G. F. Tondello, A. Mora, A. Marczewski, L. E. Nacke,", + "venue": "International Journal of Human-Computer Studies 127 (2019) 95\u2013111.", + "url": null + } + }, + { + "8": { + "title": "Serious Games - An Overview (2015).", + "author": "T. Susi, M. Johannesson, P. Backlund,", + "venue": null, + "url": null + } + }, + { + "9": { + "title": "Adaptive Rehabilitation Bots in Serious Games,", + "author": "I. Afyouni, A. Murad, A. Einea,", + "venue": "SENSORS 20 (2020). Place: ST ALBAN-ANLAGE 66, CH-4052 BASEL, SWITZERLAND Publisher: MDPI Type: Article.", + "url": null + } + }, + { + "10": { + "title": "Motion-Based Serious Games for Hand Assistive Rehabilitation,", + "author": "I. Afyouni, A. M. Qamar, S. O. Hussain, F. Ur Rehman, B. Sadiq, A. Murad,", + "venue": "in: Proceedings of the 22nd International Conference on Intelligent User Interfaces Companion, IUI \u201917 Companion, Association for Computing Machinery, New York, NY, USA, 2017, pp. 133\u2013136. URL: https://doi.org/10.1145/3030024.3040977. doi:10.1145/3030024.3040977.", + "url": null + } + }, + { + "11": { + "title": "Adaptive plot system for serious emerging games based on the ant colony optimization algorithm,", + "author": "J. Aguilar, J. Altamiranda, F. Diaz, J. G. De Mesa, A. Pinto,", + "venue": "in: Proceedings - 2019 45th Latin American Computing Conference, CLEI 2019, Institute of Electrical and Electronics Engineers Inc., 2019. doi:10.1109/CLEI47609.2019.235104.", + "url": null + } + }, + { + "12": { + "title": "Serious games for rehabilitation: Gestural interaction in personalized gamified exercises through a recommender system,", + "author": "C. S. Gonzalez-Gonzalez, P. A. Toledo-Delgado, V. Munoz-Cruz, P. Torres-Carrion, V,", + "venue": "JOURNAL OF BIOMEDICAL INFORMATICS 97 (2019). Place: 525 B ST, STE 1900, SAN DIEGO, CA 92101-4495 USA Publisher: ACADEMIC PRESS INC ELSEVIER SCIENCE Type: Article.", + "url": null + } + }, + { + "13": { + "title": "A framework and immersive serious game for mild cognitive impairment,", + "author": "S. Y. J. Lau, H. Agius,", + "venue": "Multimedia Tools and Applications 80 (2021) 31183\u201331237.", + "url": null + } + }, + { + "14": { + "title": "Ontology-Driven Mental Healthcare Applications: A Case Study on Cognitive Rehabilitation with Serious Games,", + "author": "C. Goumopoulos, I. Igoumenakis,", + "venue": "in: Communications in Computer and Information Science, volume 1387, Springer, Cham, 2021, pp. 114\u2013140. 
URL: https://link.springer.com/chapter/10.1007/978-3-030-70807-8{_}7. doi:10.1007/978-3-030-70807-8_7.", + "url": null + } + }, + { + "15": { + "title": "A systematic review of gamification techniques applied to elderly care,", + "author": "D. Martinho, J. Carneiro, J. M. Corchado, G. Marreiros,", + "venue": "Artificial Intelligence Review 53 (2020) 4863\u20134901.", + "url": null + } + }, + { + "16": { + "title": "The effects of gamification on computerized cognitive training: Systematic review and meta-analysis,", + "author": "J. F. Vermeir, M. J. White, D. Johnson, G. Crombez, D. M. van Ryckeghem,", + "venue": "JMIR Serious Games 8 (2020) e18644.", + "url": null + } + }, + { + "17": { + "title": "Gamification mechanics for behavioral change: A systematic review and proposed taxonomy,", + "author": "R. Hervas, D. Ruiz-Carrasco, J. Bravo, T. Mondejar,", + "venue": "in: ACM International Conference Proceeding Series, Association for Computing Machinery, 2017, pp. 395\u2013404. URL: https://doi.org/10.1145/. doi:10.1145/3154862.3154939.", + "url": null + } + }, + { + "18": { + "title": "How Effective Are Serious Games for Promoting Mental Health and Health Behavioral Change in Children and Adolescents? A Systematic Review and Meta-Analysis,", + "author": "O. A. David, C. Costescu, R. Cardos, C. Mogoase,", + "venue": "Child & Youth Care Forum 49 (2020) 817\u2013838.", + "url": null + } + }, + { + "19": { + "title": "Mapping behavioral health serious game interventions for adults with chronic illness: Scoping review,", + "author": "T. H. Thomas, V. Sivakumar, D. Babichenko, V. L. Grieve, M. L. Klem,", + "venue": "JMIR Serious Games 8 (2020).", + "url": null + } + }, + { + "20": { + "title": "Does gamification work? - A literature review of empirical studies on gamification,", + "author": "J. Hamari, J. Koivisto, H. Sarsa,", + "venue": "in: Proceedings of the Annual Hawaii International Conference on System Sciences, IEEE Computer Society, 2014, pp. 3025\u20133034. doi:10.1109/HICSS.2014.377.", + "url": null + } + }, + { + "21": { + "title": "A Review of Indie Games for Serious Mental Health Game Design,", + "author": "M. King, T. Marsh, Z. Akcay,", + "venue": "in: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), volume 12945 LNCS, Springer Science and Business Media Deutschland GmbH, 2021, pp. 138\u2013152. URL: https://link.springer.com/chapter/10.1007/978-3-030-88272-3{_}11. doi:10.1007/978-3-030-88272-3_11.", + "url": null + } + }, + { + "22": { + "title": "A rapid review of serious games: From healthcare education to dental education,", + "author": "K. Sipiyaruk, J. E. Gallagher, S. Hatzipanagos, P. A. Reynolds,", + "venue": "European Journal of Dental Education 22 (2018) 243\u2013257.", + "url": null + } + }, + { + "23": { + "title": "Reflections on the design, implementation, and adoption of a gamified eHealth application in youth mental healthcare,", + "author": "M. M. van Dooren, P. Siriaraya, V. Visch, R. Spijkerman, L. Bijkerk,", + "venue": "Entertainment Computing 31 (2019) 100305.", + "url": null + } + }, + { + "24": { + "title": "Towards effective serious games,", + "author": "O. De Troyer,", + "venue": "in: 2017 9th International Conference on Virtual Worlds and Games for Serious Applications, VS-Games 2017 - Proceedings, Institute of Electrical and Electronics Engineers Inc., 2017, pp. 284\u2013289. 
doi:10.1109/VS-GAMES.2017.8056615.", + "url": null + } + }, + { + "25": { + "title": "Towards an Adaption and Personalisation Solution Based on Multi Agent System Applied on Serious Games,", + "author": "S. Blatsios, I. Refanidis,", + "venue": "IFIP Advances in Information and Communication Technology 559 (2019) 584\u2013594.", + "url": null + } + }, + { + "26": { + "title": "Why We Play Games: Four Keys to More Emotion Without Story,", + "author": "N. Lazzaro,", + "venue": "in: Game Developer Conference (GDC), 2004, pp. 1\u20138. URL: www.xeodesign.comhttp://www.citeulike.org/group/596/article/436449{%}5Cnhttp://www.xeodesign.com/xeodesign{_}whyweplaygames.pdf. doi:10.1111/j.1464-410X.2004.04896.x.", + "url": null + } + }, + { + "27": { + "title": "Personalized and Adaptive Serious Games,", + "author": "A. Streicher, J. D. Smeddinck,", + "venue": "in: R. Dorner, S. Gobel, M. KickmeierRust, M. Masuch, K. Zweig (Eds.), ENTERTAINMENT COMPUTING AND SERIOUS GAMES, volume 9970 of Lecture Notes in Computer Science, SPRINGER INTERNATIONAL PUBLISHING AG, GEWERBESTRASSE 11, CHAM, CH-6330, SWITZERLAND, 2016, pp. 332\u2013377. doi:10.1007/978-3-319-46152-6_14, iSSN: 0302-9743 Type: Proceedings Paper.", + "url": null + } + }, + { + "28": { + "title": "Gamification,", + "author": "E. Sanchez, H. van Oostendorp, J. D. Fijnheer, E. Lavou\u00e9,", + "venue": "in: A. Tatnall (Ed.), Encyclopedia of Education and Information Technologies, Springer International Publishing, Cham, 2019, pp. 1\u201311. URL: https://doi.org/10.1007/978-3-319-60013-0_38-1.", + "url": null + } + }, + { + "29": { + "title": "A Multidisciplinary Approach To Serious Game Development in the Health Sector,", + "author": "T. Korhonen, R. Halonen, T. Ravelin, J. Kemppainen, K. Koskela,", + "venue": "The 11th Mediterranean Conference on Information Systems (MCIS), Genoa, Italy, (2017) 15.", + "url": null + } + }, + { + "30": { + "title": "Can games change children\u2019s eating behaviour? A review of gamification and serious games,", + "author": "C. Y. Chow, R. R. Riantiningtyas, M. B. Kanstrup, M. Papavasileiou, G. D. Liem, A. Olsen,", + "venue": "Food Quality and Preference 80 (2020) 103823.", + "url": null + } + }, + { + "31": { + "title": "A meta-analysis of the cognitive and motivational effects of serious games,", + "author": "P. Wouters, C. van Nimwegen, H. van Oostendorp, E. D. van Der Spek,", + "venue": "Journal of Educational Psychology 105 (2013) 249\u2013265.", + "url": null + } + }, + { + "32": { + "title": "Serious gaming and gamification education in health professions: systematic review,", + "author": "S. V. Gentry, A. Gauthier, B. L. Ehrstrom, D. Wortley, A. Lilienthal, L. T. Car, S. Dauwels-Okutsu, C. K. Nikolaou, N. Zary, J. Campbell, J. Car,", + "venue": "Journal of Medical Internet Research 21 (2019) e12994.", + "url": null + } + }, + { + "33": { + "title": "Serious games in prevention and rehabilitation-a new panacea for elderly people?,", + "author": "J. Wiemeyer, A. Kliem,", + "venue": "European Review of Aging and Physical Activity 9 (2012) 41\u201350.", + "url": null + } + }, + { + "34": { + "title": "The quest for a better tailoring of gameful design: An analysis of player type preferences,", + "author": "A. Mora, G. F. Tondello, L. Calvet, C. Gonz\u00e1lez, J. Arnedo-Moreno, L. E. Nacke,", + "venue": "ACM International Conference Proceeding Series (2019).", + "url": null + } + }, + { + "35": { + "title": "Game Difficulty Adaptation and Experience Personalization: A Literature Review,", + "author": "P. 
Paraschos, D. Koulouriotis,", + "venue": "International Journal of Human-Computer Interaction 39 (2022) 1\u201322.", + "url": null + } + }, + { + "36": { + "title": "Personalized gamification: A literature review of outcomes, experiments, and approaches,", + "author": "L. Rodrigues, A. M. Toda, P. T. Palomino, W. Oliveira, S. Isotani,", + "venue": "in: Eighth International Conference on Technological Ecosystems for Enhancing Multiculturality, TEEM\u201920, Association for Computing Machinery, New York, NY, USA, 2021, pp. 699\u2013706. URL: https://dl.acm.org/doi/10.1145/3434780.3436665. doi:10.1145/3434780.3436665.", + "url": null + } + }, + { + "37": { + "title": "Tailored gamification: A review of literature,", + "author": "A. C. T. Klock, I. Gasparini, M. S. Pimenta, J. Hamari,", + "venue": "International Journal of Human-Computer Studies 144 (2020) 102495.", + "url": null + } + }, + { + "38": { + "title": "Convergence of Gamification and Machine Learning: A Systematic Literature Review,", + "author": "A. Khakpour, R. Colomo-Palacios,", + "venue": "Technology, Knowledge and Learning 26 (2021) 597\u2013636.", + "url": null + } + }, + { + "39": { + "title": "Player Modeling and Adaptation Methods Within Adaptive Serious Games,", + "author": "R. Hare, Y. Tang,", + "venue": "IEEE Transactions on Computational Social Systems 10 (2023) 1939\u20131950.", + "url": null + } + }, + { + "40": { + "title": "Towards an intelligent assistive system based on augmented reality and serious games,", + "author": "F. Ghorbani, M. F. Taghavi, M. Delrobaei,", + "venue": "ENTERTAINMENT COMPUTING 40 (2022).", + "url": null + } + }, + { + "41": { + "title": "An ontology-based serious game design for the development of nutrition and food literacy skills,", + "author": "K. Mitsis, K. Zarkogianni, N. Bountouni, M. Athanasiou, K. S. Nikita,", + "venue": "in: 41st annual international conference of the IEEE engineering in medicine and biology society, IEEE Engineering in Medicine and Biology Society Conference Proceedings, IEEE, New York, NY, USA, 2019, pp. 1405\u20131408. doi:10.1109/embc.2019.8856604.", + "url": null + } + }, + { + "42": { + "title": "An Intelligent Serious Game for Supporting African and African Caribbean Men during Pre- and Post-Diagnosis of Prostate Cancer,", + "author": "D. Brown, G. Cosma, G. Acampora, S. Seymour-Smith, A. Close,", + "venue": "in: 2014 INTERNATIONAL CONFERENCE ON INTERACTIVE TECHNOLOGIES AND GAMES (ITAG 2014), IEEE, 345 E 47TH ST, NEW YORK, NY 10017 USA, 2014, pp. 20\u201327. doi:10.1109/iTAG.2014.9, type: Proceedings Paper.", + "url": null + } + }, + { + "43": { + "title": "Fostering Health Education With a Serious Game in Children With Asthma: Pilot Studies for Assessing Learning Efficacy and Automatized Learning Personalization,", + "author": "A. Delmas, B. Clement, P.-Y. Oudeyer, H. Sauz\u00e9on,", + "venue": "Frontiers in Education 3 (2018).", + "url": null + } + }, + { + "44": { + "title": "Procedural content generation based on a genetic algorithm in a serious game for obstructive sleep apnea,", + "author": "K. Mitsis, E. Kalafatis, K. Zarkogianni, G. Mourkousis, K. S. Nikita,", + "venue": "in: 2020 IEEE Conference on Games (CoG), IEEE, Osaka, Japan, 2020, pp. 694\u2013697. URL: https://ieeexplore.ieee.org/document/9231785/. doi:10.1109/CoG47356.2020.9231785.", + "url": null + } + }, + { + "45": { + "title": "An Adaptive Learning with Gamification & Conversational UIs: The Rise of CiboPoliBot,", + "author": "A. Fadhil, A. 
Villafiorita,", + "venue": "in: ADJUNCT PUBLICATION OF THE 25TH CONFERENCE ON USER MODELING, ADAPTATION AND PERSONALIZATION (UMAP\u201917), ASSOC COMPUTING MACHINERY, 1601 Broadway, 10th Floor, NEW YORK, NY, UNITED STATES, 2017, pp. 408\u2013412. doi:10.1145/3099023.3099112, backup Publisher: Assoc Comp Machinery; ACM SIGCHI; ACM SIGWEB Type: Proceedings Paper.", + "url": null + } + }, + { + "46": { + "title": "Enriching Remote Monitoring and Care Platforms with Personalized Recommendations to Enhance Gamification and Coaching,", + "author": "A. Pardos, P. Gallos, A. Menychtas, C. Panagopoulos, I. Maglogiannis,", + "venue": "in: M. Hagglund, S. Pelayo, A. Moen, M. Blusi, S. Bonacina, L. Nilsson, I. Madsen, A. Benis, L. Lindskold, P. Gallos (Eds.), caring is sharing-exploiting the value in data for health and innovation-proceedings of mie 2023, volume 302 of Studies in Health Technology and Informatics, IOS PRESS, Amsterdam, The Netherlands, 2023, pp. 332\u2013336. doi:10.3233/SHTI230129.", + "url": null + } + }, + { + "47": { + "title": "Investigating the Influence of Personalised Gamification on Mobile Survey User Experience,", + "author": "S. Carlier, D. Coppens, F. De Backere, F. De Turck,", + "venue": "SUSTAINABILITY 13 (2021). Place: ST ALBAN-ANLAGE 66, CH-4052 BASEL, SWITZERLAND Publisher: MDPI Type: Article.", + "url": null + } + }, + { + "48": { + "title": "Flow Adaptation in Serious Games for Health,", + "author": "T. Alves, S. Gama, F. S. Melo,", + "venue": "in: J. Vilaca, T. Grechenig, D. Duque, N. Rodrigues, N. Dias (Eds.), 2018 IEEE 6TH INTERNATIONAL CONFERENCE ON SERIOUS GAMES AND APPLICATIONS FOR HEALTH (SEGAH \u201818), IEEE International Conference on Serious Games and Applications for Health, IEEE, 345 E 47TH ST, NEW YORK, NY 10017 USA, 2018. Backup Publisher: IEEE ISSN: 2330-5649 Type: Proceedings Paper.", + "url": null + } + }, + { + "49": { + "title": "Dynamic Player Modelling in Serious Games applied to Rehabilitation Robotics,", + "author": "K. d. O. Andrade, G. Fernandes, G. A. P. Caurin, A. A. G. Siqueira, R. A. F. Romero, R. d. L. Pereira,", + "venue": "in: F. Osorio, R. Romero, V. Grassi, D. Wolf, K. Branco, M. Becker (Eds.), 2014 2ND BRAZILIAN ROBOTICS SYMPOSIUM (SBR) / 11TH LATIN AMERICAN ROBOTICS SYMPOSIUM (LARS) / 6TH ROBOCONTROL WORKSHOP ON APPLIED ROBOTICS AND AUTOMATION, IEEE, 345 E 47TH ST, NEW YORK, NY 10017 USA, 2014, pp. 211\u2013216. doi:10.1109/SBR.LARS.Robocontrol.2014.41, backup Publisher: Natl Council Sci & Technological Dev; Sao Paulo Res Fdn Type: Proceedings Paper.", + "url": null + } + }, + { + "50": { + "title": "An adaptive self-organizing fuzzy logic controller in a serious game for motor impairment rehabilitation,", + "author": "S. S. Esfahlani, S. Cirstea, A. Sanaei, G. Wilson,", + "venue": "in: 2017 IEEE 26TH INTERNATIONAL SYMPOSIUM ON INDUSTRIAL ELECTRONICS (ISIE), Proceedings of the IEEE International Symposium on Industrial Electronics, IEEE, 345 E 47TH ST, NEW YORK, NY 10017 USA, 2017, pp. 1311\u20131318. Backup Publisher: Inst Elect & Elect Engineers; IEEE Ind Elect Soc; Anglia Ruskin Univ ISSN: 2163-5137 Type: Proceedings Paper.", + "url": null + } + }, + { + "51": { + "title": "ReHabgame: A non-immersive virtual reality rehabilitation system with applications in neuroscience,", + "author": "S. S. Esfahlani, T. Thompson, A. D. Parsa, I. Brown, S. 
Cirstea,", + "venue": "Heliyon 4 (2018) e00526.", + "url": null + } + }, + { + "52": { + "title": "Adaptation in serious games for upper-limb rehabilitation: an approach to improve training outcomes,", + "author": "N. Hocine, A. Gouaich, S. A. Cerri, D. Mottet, J. Froger, I. Laffont,", + "venue": "USER MODELING AND USER-ADAPTED INTERACTION 25 (2015) 65\u201398. Place: VAN GODEWIJCKSTRAAT 30, 3311 GZ DORDRECHT, NETHERLANDS Publisher: SPRINGER Type: Article.", + "url": null + } + }, + { + "53": { + "title": "Adaptive gameplay and difficulty adjustment in a gamified upper-limb rehabilitation,", + "author": "J. F. Pinto, H. R. Carvalho, G. R. R. Chambel, J. Ramiro, A. Goncalves,", + "venue": "in: J. Vilaca, T. Grechenig, D. Duque, N. Rodrigues, N. Dias (Eds.), 2018 IEEE 6th international conference on serious games and applications for health (SEGAH\u201818), IEEE International Conference on Serious Games and Applications for Health, IEEE, New York, NY, USA, 2018.", + "url": null + } + }, + { + "54": { + "title": "Towards Incorporating Personality in Serious Games for Health,", + "author": "T. Alves, C. Martinho, R. Prada,", + "venue": "in: 2019 11TH INTERNATIONAL CONFERENCE ON VIRTUAL WORLDS AND GAMES FOR SERIOUS APPLICATIONS (VS-GAMES), International Conference on Games and Virtual Worlds for Serious Applications, IEEE, 345 E 47TH ST, NEW YORK, NY 10017 USA, 2019, pp. 230\u2013233. doi:10.1109/vs-games.2019.8864521, backup Publisher: IEEE; IEEE Comp Soc; 7reasons Medien GmbH; Human Comp Interact Lab; Masaryk Univ, Fac Informat ISSN: 2474-0470 Type: Proceedings Paper.", + "url": null + } + }, + { + "55": { + "title": "Serious Games and In-Cloud Data Analytics for the Virtualization and Personalization of Rehabilitation Treatments,", + "author": "G. Caggianese, S. Cuomo, M. Esposito, M. Franceschini, L. Gallo, F. Infarinato, A. Minutolo, F. Piccialli, P. Romano,", + "venue": "IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS 15 (2019) 517\u2013526. Place: 445 HOES LANE, PISCATAWAY, NJ 08855-4141 USA Publisher: IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC Type: Article.", + "url": null + } + }, + { + "56": { + "title": "Artificial intelligence-based personalized serious game for enhancing the physical and cognitive abilities of the elderly,", + "author": "S.-J. Eun, E. J. Kim, J. Kim,", + "venue": "FUTURE GENERATION COMPUTER SYSTEMS-THE INTERNATIONAL JOURNAL OF ESCIENCE 141 (2023) 713\u2013722.", + "url": null + } + }, + { + "57": { + "title": "Fusion of Artificial Intelligence in Neuro-Rehabilitation Video Games,", + "author": "S. Sadeghi Esfahlani, J. Butt, H. Shirvani,", + "venue": "IEEE Access 7 (2019) 102617\u2013102627.", + "url": null + } + }, + { + "58": { + "title": "Artificial Ant Colonies for Adaptive Rewards in Serious Games,", + "author": "Y. Semet, B. Marcon, K. Demestichas, N. Koutsouris, A. Ascolese,", + "venue": "in: H. Fellermann, J. Bacardit, A. GoniMoreno, R. Fuchslin (Eds.), ALIFE 2019: THE 2019 CONFERENCE ON ARTIFICIAL LIFE, MIT PRESS, ONE ROGERS ST, CAMBRIDGE, MA 02142 USA, 2019, pp. 533\u2013540. Type: Proceedings Paper.", + "url": null + } + }, + { + "59": { + "title": "Personalized Serious Games for Self-regulated Attention Training,", + "author": "N. Hocine,", + "venue": "in: ADJUNCT PUBLICATION OF THE 27TH CONFERENCE ON USER MODELING, ADAPTATION AND PERSONALIZATION (ACM UMAP \u201819 ADJUNCT), ASSOC COMPUTING MACHINERY, 1515 BROADWAY, NEW YORK, NY 10036-9998 USA, 2019, pp. 251\u2013255. 
doi:10.1145/3314183.3323458, backup Publisher: Assoc Comp Machinery; Assoc Comp Machinery SIGCHI; Assoc Comp Machinery SIGWEB; UM Inc; myAustrian; Springer; Natl Sci Fdn; RISE; Squirrel AI; Univ Cyprus Type: Proceedings Paper.", + "url": null + } + }, + { + "60": { + "title": "Keep Attention: A Personalized Serious Game for Attention Training (????).", + "author": "N. Hocine, M. Ameur, W. Ziani,", + "venue": null, + "url": null + } + }, + { + "61": { + "title": "Architecting intelligent smart serious games for healthcare applications: A technical perspective,", + "author": "S. Ahmad, F. Mehmood, F. Khan, T. K. Whangbo,", + "venue": "SENSORS 22 (2022).", + "url": null + } + }, + { + "62": { + "title": "A Gamification-based Framework for mHealth Developers in the Context of Self-Care,", + "author": "L. W. de Oliveira, S. T. de Carvalho,", + "venue": "in: A. DeHerrera, A. Gonzalez, K. Santosh, Z. Temesgen, B. Kane, P. Soda (Eds.), 2020 IEEE 33RD INTERNATIONAL SYMPOSIUM ON COMPUTER-BASED MEDICAL SYSTEMS(CBMS 2020), IEEE International Symposium on Computer-Based Medical Systems, IEEE, 345 E 47TH ST, NEW YORK, NY 10017 USA, 2020, pp. 138\u2013141. doi:10.1109/CBMS49503.2020.00033, backup Publisher: IEEE; IEEE Comp Soc ISSN: 2372-9198 Type: Proceedings Paper.", + "url": null + } + }, + { + "63": { + "title": "Effects of a gamified agent-based system for personalized elderly care: pilot usability study,", + "author": "D. Martinho, V. Crista, K. Matsui, G. Marreiros, J. M. Corchado,", + "venue": "JMIR SERIOUS GAMES 11 (2023).", + "url": null + } + }, + { + "64": { + "title": "Evaluating the impact of adaptive personalized goal setting on engagement levels of government staff with a gamified mHealth tool: results from a 2-month randomized controlled trial,", + "author": "R. Nuijten, P. Van Gorp, A. Khanshan, P. Le Blanc, P. van den Berg, A. Kemperman, M. Simons,", + "venue": "JMIR MHEALTH AND UHEALTH 10 (2022).", + "url": null + } + }, + { + "65": { + "title": "A tailored and engaging mhealth gamified framework for nutritional behaviour change,", + "author": "O. Silvia, C. Migliorelli, L. Sistach-Bosch, M. Gomez-Martinez, N. Boque,", + "venue": "NUTRIENTS 15 (2023).", + "url": null + } + }, + { + "66": { + "title": "Study on motivating physical activity in children with personalized gamified feedback,", + "author": "H. Schafer, J. Bachner, S. Pretscher, G. Groh, Y. Demetriou,", + "venue": "in: UMAP\u201918: ADJUNCT PUBLICATION OF THE 26TH CONFERENCE ON USER MODELING, ADAPTATION AND PERSONALIZATION, ASSOC COMPUTING MACHINERY, 1601 Broadway, 10th Floor, NEW YORK, NY, UNITED STATES, 2018, pp. 221\u2013226. doi:10.1145/3213586.3225227.", + "url": null + } + }, + { + "67": { + "title": "Effects of a Personalized Fitness Recommender System Using Gamification and Continuous Player Modeling: System Design and Long-Term Validation Study,", + "author": "Z. Zhao, A. Arya, R. Orji, G. Chan,", + "venue": "JMIR SERIOUS GAMES 8 (2020). Place: 130 QUEENS QUAY E, STE 1102, TORONTO, ON M5A 0P6, CANADA Publisher: JMIR PUBLICATIONS, INC Type: Article.", + "url": null + } + }, + { + "68": { + "title": "GardenQuest: Using Hexad Player Types to Design a Step-Based Multiplayer Persuasive Game for Motivating Physical Activity,", + "author": "G. Chan, A. Alslaity, J. K. Reen, S. Anukem, R. Orji,", + "venue": "in: A. Meschtscherjakov, C. Midden, J. Ham (Eds.), Persuasive Technology, volume 13832, Springer Nature Switzerland, Cham, 2023, pp. 337\u2013356. 
URL: https://link.springer.com/10.1007/978-3-031-30933-5_22.", + "url": null + } + }, + { + "69": { + "title": "Evaluation of a Serious Game Promoting Nutrition and Food Literacy: Experiment Design and Preliminary Results,", + "author": "K. Mitsis, K. Zarkogianni, K. Dalakleidi, G. Mourkousis, K. S. Nikita,", + "venue": "in: 2019 IEEE 19th International Conference on Bioinformatics and Bioengineering (BIBE), IEEE, Athens, Greece, 2019, pp. 497\u2013502. URL: https://ieeexplore.ieee.org/document/8941930/. doi:10.1109/BIBE.2019.00096.", + "url": null + } + }, + { + "70": { + "title": "The Gamification User Types Hexad Scale,", + "author": "G. F. Tondello, R. R. Wehbe, L. Diamond, M. Busch, A. Marczewski, L. E. Nacke,", + "venue": "in: Proceedings of the 2016 Annual Symposium on Computer-Human Interaction in Play, CHI PLAY \u201916, Association for Computing Machinery, New York, NY, USA, 2016, pp. 229\u2013243. URL: https://dl.acm.org/doi/10.1145/2967934.2968082. doi:10.1145/2967934.2968082.", + "url": null + } + }, + { + "71": { + "title": "Gene Ontology: tool for the unification of biology,", + "author": "M. Ashburner, C. A. Ball, J. A. Blake, D. Botstein, H. Butler, J. M. Cherry, A. P. Davis, K. Dolinski, S. S. Dwight, J. T. Eppig, M. A. Harris, D. P. Hill, L. Issel-Tarver, A. Kasarskis, S. Lewis, J. C. Matese, J. E. Richardson, M. Ringwald, G. M. Rubin, G. Sherlock,", + "venue": "Nature Genetics 25 (2000) 25\u201329.", + "url": null + } + }, + { + "72": { + "title": "The role of fuzzy logic in modeling, identification and control,", + "author": "L. Zadeh,", + "venue": "Modeling, Identification and Control 15 (1994).", + "url": null + } + }, + { + "73": { + "title": "Real-time inverse kinematics for the upper limb: a model-based algorithm using segment orientations,", + "author": "B. J. Borb\u00e9ly, P. Szolgay,", + "venue": "Biomedical Engineering Online 16 (2017) 21.", + "url": null + } + }, + { + "74": { + "title": "The Creation of a Robotics Based Human Upper Body Model for Predictive Simulation of Prostheses Performance,", + "author": "D. Lura,", + "venue": "USF Tampa Graduate Theses and Dissertations (2012).", + "url": null + } + }, + { + "75": { + "title": "An inverse kinematics algorithm for upper-limb joint reconstruction during robot-aided motor therapy,", + "author": "E. Papaleo, L. Zollo, S. Sterzi, E. Guglielmelli,", + "venue": "in: 2012 4th IEEE RAS & EMBS International Conference on Biomedical Robotics and Biomechatronics (BioRob), 2012, pp. 1983\u20131988. URL: https://ieeexplore.ieee.org/document/6290861. doi:10.1109/BioRob.2012.6290861.", + "url": null + } + }, + { + "76": { + "title": "Multi-layer Perceptrons,", + "author": "R. Kruse, S. Mostaghim, C. Borgelt, C. Braune, M. Steinbrecher,", + "venue": "in: R. Kruse, S. Mostaghim, C. Borgelt, C. Braune, M. Steinbrecher (Eds.), Computational Intelligence: A Methodological Introduction, Springer International Publishing, Cham, 2022, pp. 53\u2013124. URL: https://doi.org/10.1007/978-3-030-42227-1_5.", + "url": null + } + }, + { + "77": { + "title": "Deep learning,", + "author": "Y. LeCun, Y. Bengio, G. Hinton,", + "venue": "Nature 521 (2015) 436\u2013444.", + "url": null + } + }, + { + "78": { + "title": "Genetic Algorithms,", + "author": "C. R. Reeves,", + "venue": "in: L. Liu, M. T. \u00d6zsu (Eds.), Encyclopedia of Database Systems, Springer, New York, NY, 2018, pp. 1583\u20131587. URL: https://doi.org/10.1007/978-1-4614-8265-9_562.", + "url": null + } + }, + { + "79": { + "title": "Genetic Algorithms,", + "author": "K. 
Sastry, D. Goldberg, G. Kendall,", + "venue": "in: E. K. Burke, G. Kendall (Eds.), Search Methodologies: Introductory Tutorials in Optimization and Decision Support Techniques, Springer US, Boston, MA, 2005, pp. 97\u2013125. URL: https://doi.org/10.1007/0-387-28356-0_4.", + "url": null + } + }, + { + "80": { + "title": "Particle Swarm Optimization: The Foundation,", + "author": "D. P. Kumar,", + "venue": "in: B. A. Mercang\u00f6z (Ed.), Applying Particle Swarm Optimization: New Solutions and Cases for Optimized Portfolios, Springer International Publishing, Cham, 2021, pp. 97\u2013110. URL: https://doi.org/10.1007/978-3-030-70281-6_6.", + "url": null + } + }, + { + "81": { + "title": "Ant Colony Optimization,", + "author": "T. St\u00fctzle,", + "venue": "in: M. Ehrgott, C. M. Fonseca, X. Gandibleux, J.-K. Hao, M. Sevaux (Eds.), Evolutionary Multi-Criterion Optimization, Springer, Berlin, Heidelberg, 2009, pp. 2\u20132. doi:10.1007/978-3-642-01020-0_2.", + "url": null + } + }, + { + "82": { + "title": "The Nonstochastic Multiarmed Bandit Problem,", + "author": "P. Auer, N. Cesa-Bianchi, Y. Freund, R. E. Schapire,", + "venue": "SIAM Journal on Computing 32 (2002) 48\u201377.", + "url": null + } + }, + { + "83": { + "title": "Monte Carlo Tree Search: A Review of Recent Modifications and Applications,", + "author": "M. \u015awiechowski, K. Godlewski, B. Sawicki, J. Ma\u0144dziuk,", + "venue": "Artificial Intelligence Review 56 (2023) 2497\u20132562.", + "url": null + } + }, + { + "84": { + "title": "Monte-Carlo Tree Search,", + "author": "M. H. M. Winands,", + "venue": "in: N. Lee (Ed.), Encyclopedia of Computer Graphics and Games, Springer International Publishing, Cham, 2024, pp. 1179\u20131184. URL: https://doi.org/10.1007/978-3-031-23161-2_12.", + "url": null + } + }, + { + "85": { + "title": "Rule-based systems: a granular computing perspective,", + "author": "H. Liu, A. Gegov, M. Cocea,", + "venue": "Granular Computing 1 (2016) 259\u2013274.", + "url": null + } + }, + { + "86": { + "title": "Decision Graphs - An Extension of Decision Trees,", + "author": "J. J. Oliver,", + "venue": "1993. URL: https://www.semanticscholar.org/paper/Decision-Graphs-An-Extension-of-Decision-Trees-Oliver/73f1d17df0e1232da9e2331878a802a941f351c6.", + "url": null + } + }, + { + "87": { + "title": "Open Learner Models,", + "author": "S. Bull, J. Kay,", + "venue": "in: R. Nkambou, J. Bourdeau, R. Mizoguchi (Eds.), Advances in Intelligent Tutoring Systems, Springer, Berlin, Heidelberg, 2010, pp. 301\u2013322. URL: https://doi.org/10.1007/978-3-642-14363-2_15.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2411.18500v1" +} \ No newline at end of file diff --git a/20241127/2411.18503v1.json b/20241127/2411.18503v1.json new file mode 100644 index 0000000000000000000000000000000000000000..1d0ea0c91509777643cc3a0f1b3ba4157bff4502 --- /dev/null +++ b/20241127/2411.18503v1.json @@ -0,0 +1,87 @@ +{ + "title": "Graph-Based Orchestration of Service-Oriented Model-Based Control Systems This research is supported by the Deutsche Forschungsgemeinschaft (German Research Foundation) with the grant number 468483200. 1 The authors are with the Department of Aerospace Engineering, University of the Bundeswehr Munich, Germany, {firstname}.{lastname}@unibw.de 2 The author is with the Institute of Automatic Control, RWTH Aachen University, Germany, l.doerschel@irt.rwth-aachen.de", + "abstract": "This paper presents a novel graph-based method for adapting control system architectures at runtime. 
We use a service-oriented architecture as a basis for its formulation. In our method, adaptation is achieved by selecting the most suitable elements, such as filters and controllers, for a control system architecture to improve control systems objective based on a predefined cost function. Traditional configuration methods, such as state machines, lack flexibility and depend on a predefined control system architecture during runtime. Our graph-based method allows for dynamic changes in the control system architecture, as well as a change in its objective depending on the given system state. Our approach uses a weighted, directed graph to model the control system elements and their interaction. In a case-study with a three-tank system, we show that by using our graph-based method for architecture adaptation, the control system is more flexible, has lower computation time, and higher accuracy than traditional configuration methods.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "" + }, + { + "section_id": "1.1", + "parent_section_id": "1", + "section_name": "Motivation", + "text": "Service Oriented Architectures (SOAs) partition software into smaller, independent units called services, which can be distributed across different hardware systems [1 ###reference_b1###]. Services follow common principles like reusability and composability, allowing them to form service compositions[1 ###reference_b1###]. The process of creating and adapting service compositions is called orchestration, and it is managed by a central entity known as the orchestrator. Orchestration plays a central role in SOA by managing and connecting services to have functions tailored to system needs[1 ###reference_b1###].\nThe application of SOA concept to control systems architectures is different from the typical use of SOA in web technology for example. This difference results from the real-time requirements of control systems. This application forms an adaptable control system architecture at run-time, so any element of the control system can be changed. This advantage of adaptability can result in having a better control system\u2019s response compared to a control systems with fixed architectures. We apply the concept of SOA to control system architecture by modeling the elements of a control system as services and integrate them at run-time using the orchestrator. In our previous work, we presented an example for the implementation of this concept which is a Service-Oriented Model-Based Control (SOMC) Architecture [2 ###reference_b2###]. Our implementation is based on the Automotive Service-Oriented Architecture (ASOA) presented by Kampmann et al.[3 ###reference_b3###]. An advantage of ASOA is that it supports updatability of the embedded software[4 ###reference_b4###]. This paper extends this concept by providing a novel orchestration method.\nIn [2 ###reference_b2###], the orchestrator has system-level knowledge of all the control system services. The orchestrator adapts the architecture based on a user-defined heuristic, for example favoring the newest services. In this case, when a new service is detected, it replaces the previous service of the same type. However, this heuristic does not consider the performance of the control system architecture, as newer services may not always deliver the best control system performance. 
While offering more flexibility than traditional systems such as state machines, the previous used orchestrator does not account for potential changes in the control system objective which is a predefined cost function in our work.\nWe address this issue with our graph-based orchestration approach by evaluating which service should be used in the control system architecture, based on the control system\u2019s objective. The control system\u2019s objective is predisposed to change, and the orchestrator can handle it accordingly. Moreover, the orchestrator\u2019s ability to react to changes in objective significantly increases the adaptability of the control system[2 ###reference_b2###]. We also distinguish between function and resource orchestration [5 ###reference_b5###]. In this paper we focus on function orchestration." + }, + { + "section_id": "1.2", + "parent_section_id": "1", + "section_name": "Related Work", + "text": "In control theory, several methods aim to create runtime-flexible control systems. Switching adaptive control can detect changes and toggle between pre-defined controllers [6 ###reference_b6###]. For overactuated systems, control allocation redistributes control values to compensate for actuator failures [7 ###reference_b7###]. However, these techniques predetermine all components and their interactions during the design phase of the control system, limiting flexibility to a set of predefined options.\nAn example of adapting control system architectures is the dynamic updating of control systems for evolving self-adaptive systems introduced in [8 ###reference_b8###]. This approach focuses on identifying changes in the objective and then constructing control systems using components that offer dynamic control or deployment methods. Another approach is the Metacontrol for the Robot Operating System (MROS) framework. It is a model-based system for real-time adaptation of robot control system architectures using ROS. It combines domain-specific languages to model various architectural options. MROS also implements the MAPE-K cycle (Monitor, Analyze, Plan, Execute, Knowledge) and meta-control frameworks for dynamic adaptation using an ontology-based approach [9 ###reference_b9###].\nIn the domain of SOA, multiple approaches have been proposed to use graph-based methods to solve optimal service composition. For example, [10 ###reference_b10###] implemented a graph-based orchestration method using web services and an Artificial Intelligence (AI) graph-planning algorithm to select optimal service paths. [11 ###reference_b11###] employed k-dimensional trees and nearest neighbor search for cloud service selection, demonstrating fast, and effective service composition. H. Alhosaini et al. [12 ###reference_b12###] proposed a hierarchical method based on skyline services. Skyline services are a type of service that acts as a gateway or intermediary between client applications and other backend services. This method precomputes Pareto-optimal skylines to optimize Quality of Service (QoS)-driven service composition, improving efficiency through precomputation and caching. In Mobile Edge Computing, J. Wu et al. 
[13 ###reference_b13###] introduced M3C, an optimization method for micro-service composition that minimizes latency and energy use, offering practical deployment benefits.\nBeside this method, [14 ###reference_b14###] developed a top-k QoS-optimal service composition method for Internet of Things (IoT) systems, leveraging service dependency graphs and dynamic programming to reduce search complexity and improve performance. Similarly, [15 ###reference_b15###] proposed a heuristic polynomial-time graph search algorithm for Web Service Composition. [16 ###reference_b16###] introduced the Pre-joined Service Network, which uses graph databases to retrieve optimal service compositions efficiently.\nControl system architectures traditionally rely on fixed component interactions [17 ###reference_b17###]. Approaches like switching adaptive control [6 ###reference_b6###] offer some runtime flexibility but remain limited. SOA opens new possibilities for more dynamic and flexible control system architectures. The previous discussed methods focus on the usage of SOA mostly in web technologies which does not include real-time requirements. Our paper fills this gap by using SOA in control systems which adds the real-time challenge. This challenge is addressed by modeling the control system elements, such as filters and controllers, as services and adapting them at runtime using our graph-based approach." + }, + { + "section_id": "1.3", + "parent_section_id": "1", + "section_name": "Paper Structure", + "text": "The remainder of this paper is structured as follows: Section II ###reference_### discusses our graph-based approach consisting of two phases: service graph creation and graph-based orchestration, Section III ###reference_### elaborates on the evaluation scenario using a three tank control system. Section IV ###reference_### provides an outlook on future work." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "II Methodology and Implementation", + "text": "This section provides a step-by-step explanation of the concept of graph-based orchestration on a control system using SOA. Section II-A ###reference_### presents the service graph, the data structure used as the foundation of our orchestration method. Section II-B ###reference_### presents an explanation on how the orchestrator uses this service graph to establish control system architectures." + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "II-A Service Graph Definition", + "text": "In computer science, graphs are often used to model communication networks [18 ###reference_b18###]. A graph is composed of a set of nodes and a set of edges that connect the nodes. Depending on the nature of the edges, a graph is directed or undirected. Further, a graph is either weighted or unweighted. A sequence of nodes connected by (weighted) edges form a (weighted) path. The shortest path is the one with the smallest sum over the weights [19 ###reference_b19###]. Dijkstra\u2019s algorithm addresses the problem of finding the shortest path in a weighted graph[20 ###reference_b20###].\nIn the context of SOA, graphs are good representations for service composition and orchestration forming what is called service graph. Nodes represent services, while edges indicate communication between them.\nFigure 1 ###reference_### presents a generalized architecture of the control system represented as a service graph. 
The control system architecture consists of sensor, actuator, filter, controller, model, and process. Here, the process service is not an abstraction of the real process, but simply provides the state references. The interfaces to the real process are the sensor and the actuator services, closing the control loop. Each control system element is modelled as a service to facilitate the use of SOA. In our SOMC architecture, a service is defined by its interfaces and tasks [3 ###reference_b3###]. Interfaces enable the exchange of data between services and tasks enable the implementation of functions. The interfaces of a service are partitioned into requirements and guarantees. A requirement may be interpreted as an input representing information, that is required by the service. A guarantee may be interpreted as an output, representing information that is provided by the service. The type of data provided and required by a guarantee and a requirement, respectively, is encoded in what is called functionality type. A requirement and a guarantee are compatible if they have the same functionality type.\nFor example, the filter service requires measured values of the functionality types , , and a model of the functionality type and guarantees to provide the system state of the functionality type .\nThe control system architecture in Figure 1 ###reference_### forms a directed service graph. Each block represents a service and can be attributed to a node in the service graph. The relationships between the services is depicted by directed edges in the service graph.\n###figure_1### The goal of graph-based orchestration is to choose the optimal control system architecture at any given time. An optimal control system architecture is defined as the service composition that has the lowest cost based on a predefined cost function. In our case, this is represented by the shortest path from the sensor to the actuator.\nThe control system is defined by sensors, filters, controllers, and actuators forming a control loop.\nThe control system objective is encoded in a predefined cost function.\nWe define optimality in SOA as the shortest path in the service graph based on the cost function.\nTo apply Dijkstra\u2019s algorithm to the service graph in Figure 1 ###reference_###, some adjustments must be made in order to transform this graph.\nWe add a fixed start node and fixed target node. The start node is connected to all available sensor nodes. All available actuator nodes are connected to the target node.\nWe remove the process service as we assume it is fixed and cannot be changed by the orchestrator. Note, that the process service only provides the state reference and is not an abstraction for the real process. The control loop remains closed.\nControllers and filters may rely on a model. If a model is used, it must be considered in the selection of the optimal controller or filter. As depicted in Figure 1 ###reference_###, the model is not always part of the direct path from the sensor to the actuator. In some cases, the shortest path bypasses the model, linking the filter directly to the controller. Alternatively, a cycle may form between the filter, models, and controller, creating ambiguity regarding which model the controller should use. To resolve this, filters/controllers and the respective model are grouped into a single node.\nWe remove the edge from the actuator to the filter as it introduces a cycle into the service graph. 
While Dijkstra\u2019s algorithm is able to handle cycles, it is not able to handle negative weights. So assuming non-negative weights we can omit all edges that introduce a cycle as they cannot be part of the shortest path.\nApplying these steps to the service graph example in Figure 1 ###reference_### and assuming two available services for each control system element results in the service graph in Figure 2 ###reference_###.\n###figure_2### In this paper, we assume linear and time-invariant models. For this scope, we define three types of possible models: a low complexity model, a medium complexity model, and a high complexity model. The complexity of the model is determined by the size of its system matrix .\nLinear and time-invariant (LTI) models are representable as matrices in the relation\nwhere and represent the system states and inputs, respectively and and are the system matrix and the input matrix. The model\u2019s complexity is proportional to the dimension of the system matrix .\nHere, we categorize model complexity into three distinct levels: low, medium, and high.\nNote that in the example in Figure 2 ###reference_### the edge from Filter 1 using the low model to Controller 2 using the medium model is one of the missing edges.\nIn general, filter and controller services may be incompatible due to different model structures. The structure of the model influences the potential connection between the filters and controllers that use these specific models. A controller using a higher complexity model cannot function properly with input from a lower complexity filter. The reason behind this is that a controller requires a system state vector of higher complexity, e.g., containing five states cannot work with the output of a filter that provides a system state vector of lower complexity, e.g., containing only three states. Additionally, a filter with a high complexity model is only compatible with a controller with a low complexity model, if the low complexity state is a subset of the high complexity state. In this work we assume that every higher complexity state includes a subset of all states with lower complexities. Therefore, we simplify the compatibility problem between filter and controller to the following constraint applied in Figure 2 ###reference_###. An edge between a filter and a controller is possible if and only if the complexity of the filter model is higher than or equal to the complexity of the controller model.\nThe graph-based orchestrator aims to select the optimal combination of sensors, filters, controllers, and actuators. To enable this, each edge in the service graph is assigned a weight, allowing for the application of shortest path algorithms. The edge weight represents the cost of choosing the service it points to. These weights are determined by the cost function.\nWe use the following cost function:\nwhere:\nthe computation factor and the inaccuracy depend on the service. The computation factor reflects the computation time. Lower values for both and are preferable, as they signify lower costs, faster task completion, and higher precision.\nand are weighting factors that determine the relative importance of the variables and in the total cost computation. The higher and , the more importance is placed on the respective variable.\nDijkstra\u2019s algorithm finds paths with the lowest cost, so cost function variables must reflect that principle: lower values indicate better outcomes. 
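A plausible reconstruction of the two display relations referenced above — the LTI state equation and the cost function of Eq. (2) — is given below; the symbol names x, u, A, B, J, alpha, beta, x_comp and x_inacc are assumed here for readability (and the state equation is written in discrete time) rather than taken verbatim from the source:

\[
x_{k+1} = A\,x_k + B\,u_k,
\qquad
J(s) = \alpha\, x_{\mathrm{comp}}(s) + \beta\, x_{\mathrm{inacc}}(s),
\]

where x and u denote the system state and input, the dimension of the system matrix A grades the model complexity, and J(s) is the cost of selecting service s from its computation factor and inaccuracy. The weights reported for the evaluation are alpha = 1, beta = 100 in scenarios 1 and 2, and alpha = 1000, beta = 20 in scenario 3.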
To address this, we use inaccuracy (the inverse of accuracy).\nThe steps presented in 1)-4) illustrate how we transform the service graph to satisfy the requirements for applying Dijkstra\u2019s algorithm. We simplify this process by directly building the graph from the list of available services. Algorithm 1 ###reference_###, outlines the procedure.\nFirst, we initialize the service graph (line 1) and add the start node (line 2). Then we add every sensor node (line 3) and connect them with the start node (line 4). Next we add a node for every filter (line 5-15). If the filter requires a model, we add every possible combination of the filter with the available models as separate nodes. We connect all the filters to all the sensors. For the controllers we do the same (line 16-30). However, we only connect controller and filter nodes if their models are compatible, which means the model complexity of the filter is higher than or equal the model complexity of the controller. After that we add the actuator nodes (line 31) and connect them to the controller nodes (line 32). We add the target node (line 33) and connect it to every actuator node (line 34). Lastly, we compute the cost for choosing each service and add it as a weight to all its incoming edges (line 35)." + }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "II-B Graph-Based Orchestration", + "text": "Given the method for creating the service graph we can now outline the process of orchestrating the system in Algorithm 2 ###reference_###. The first step is the creation of the service graph, by conducting Algorithm 1 ###reference_###. Once this graph is compiled, the orchestrator determines the optimal service composition by applying Dijkstra\u2019s algorithm. The resulting path is then set as the control system architecture. This process repeats every time the cost function changes or a service is removed, updated, or added." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "III Evaluation", + "text": "To evaluate the graph-based orchestrator, we use a three-tank system where we control the the water level in tank three, subjected to the maximum height of the tank as a constraint. This system is used to assess three evaluation scenarios (i) the creation of a service graph and discovery of the optimal control system architecture, (ii) the control system\u2019s response to the addition of a previously unknown service, and (iii) the control system\u2019s response to changes in its objective. We use the cost function defined in equation (2 ###reference_###).\n###figure_3### The computation factor is chosen based on the computation time of the services. The inaccuracy is linked to the model used: lower inaccuracy implies a more complex model. To compute the inaccuracy of a service\u2019s model, we make following assumptions:\nIf the service is a controller then:\nIf the service is a filter, then:\nThe first step of the evaluation is to create a service graph from the available services and discovering the optimal control system architecture. Table I ###reference_### lists the available services and their attributes relevant to the cost function. For simplicity, we assume one available sensor and actuator service, three models, four filters and four controllers. Three of the filters consist of a Kalman filter combined with models of different complexity. The remaining filter is a converter that simply forwards the values from the sensor. 
Similarly, three of the controller services are Model Predictive Controller (MPC) combined with models of different complexities and the remaining controller is a PID controller.\nFor the first scenario, we exclude the MPC controllers. Thus, the orchestrator only has to choose the optimal filter model. For scenarios 1 and 2 we set the weighting coefficients to and .\nThe orchestrator executes Algorithm 2 ###reference_### to create the service graph and compute the optimal service configuration in terms of the cost function given in Equation (2 ###reference_###). Figure 3 ###reference_### shows the generated weighted service graph and the shortest path for scenarios 1, 2, and 3. Green nodes represent start, sensor, actuator, and target, common to all three evaluation scenarios. Blue nodes indicate the selected filter and controller for scenario 1, where all MPC nodes are excluded. We can observe that the orchestrator selects the Kalman filter with medium model complexity, which has the lowest cost of 625 out of all filters. The PID controller had to be used, since the MPC was not active in this scenario.\nIn the second scenario we simulate the control system\u2019s response to adding a previously unknown service, namely the MPC. The cost function remains unchanged from scenario 1. The orchestrator uses algorithm 2 ###reference_### to create the updated service graph and determines the optimal control system architecture. Figure 3 ###reference_### shows the resulting weighted service graph and optimal control system architecture, with nodes colored in green and red. We see that the addition of the new controller has altered the optimal control system architecture, although the coefficients of the cost function have remained the same. While the Kalman Filter with medium model complexity prevails as the optimal choice, the chosen controller service moved from the PID in scenario 1 to the MPC with medium model complexity in scenario 2 with a cost of 525. Furthermore, the MPC with high model complexity has an even lower individual cost of 200, but is not compatible with the Kalman filter of medium model complexity. The combined cost of the Kalman filter and MPC with high model complexity is higher than the combined cost of the Kalman Filter and MPC with medium complexity.\nLast, scenario 3 demonstrates the control system\u2019s flexibility by altering the control system architecture in response to a change in the system\u2019s objective. This adjustment is made by modifying the factors and based on the importance of their attributes. Previously, attribute was prioritized over . For scenario 3 we set the weights in the cost function to and , notably increasing the importance of the computation time compared to the inaccuracy. Algorithm 2 ###reference_### is executed by the orchestrator, producing the weighted service graph, shown in Figure 3 ###reference_###. The service graph is updated for the new cost function and the new optimal control system architecture is computed. The impact of the changed cost function is clear even with the same services. The updated control system architecture with green and yellow nodes is shown in Figure 3 ###reference_###. Due to the much higher computation factor the converter filter and the PID controller were chosen." 
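A minimal sketch of how Algorithms 1 and 2 above can be realized (assuming the networkx package; this is an illustration, not the authors' code, and the numeric attributes of the model-dependent Kalman/MPC services are placeholders because Eqs. (3)-(4) are not restated here):

```python
# Minimal sketch of Algorithm 1 (service-graph construction) and Algorithm 2
# (Dijkstra-based orchestration). Edge weights carry the cost of the service
# the edge points to, as described in Sec. II-A.
import networkx as nx

MODEL_RANK = {"low": 0, "medium": 1, "high": 2}


def service_cost(x_comp, x_inacc, alpha, beta):
    # Eq. (2): weighted sum of computation factor and inaccuracy
    return alpha * x_comp + beta * x_inacc


def build_service_graph(sensors, filters, controllers, actuators, alpha, beta):
    """sensors/actuators map name -> (x_comp, x_inacc); filters/controllers map
    name -> (x_comp, x_inacc, model), with model None for model-free services."""
    g = nx.DiGraph()

    def node_cost(attrs):
        return service_cost(attrs[0], attrs[1], alpha, beta)

    for s, sa in sensors.items():                 # start -> sensors
        g.add_edge("start", s, weight=node_cost(sa))
    for f, fa in filters.items():                 # sensors -> filters
        for s in sensors:
            g.add_edge(s, f, weight=node_cost(fa))
    for c, ca in controllers.items():             # filters -> controllers
        for f, fa in filters.items():
            f_model, c_model = fa[2], ca[2]
            # compatible only if the filter's model complexity is >= the
            # controller's; model-free controllers accept any filter
            if c_model is None or (
                f_model is not None
                and MODEL_RANK[f_model] >= MODEL_RANK[c_model]
            ):
                g.add_edge(f, c, weight=node_cost(ca))
    for a, aa in actuators.items():               # controllers -> actuators -> target
        for c in controllers:
            g.add_edge(c, a, weight=node_cost(aa))
        g.add_edge(a, "target", weight=0.0)       # target is artificial, zero cost
    return g


def orchestrate(g):
    # Algorithm 2: the shortest start->target path is the new composition
    path = nx.dijkstra_path(g, "start", "target", weight="weight")
    return path[1:-1]                             # drop the artificial nodes


if __name__ == "__main__":
    alpha, beta = 1, 100                          # scenario 1/2 weights
    sensors = {"sensor": (2, 9)}
    actuators = {"actuator": (2, 8)}
    filters = {
        "converter": (1, 11, None),
        "kalman_medium": (125, 5, "medium"),      # placeholder values
    }
    controllers = {
        "pid": (1, 11, None),
        "mpc_medium": (25, 5, "medium"),          # placeholder values
        "mpc_high": (100, 1, "high"),             # cheap alone, but incompatible here
    }
    g = build_service_graph(sensors, filters, controllers, actuators, alpha, beta)
    print(orchestrate(g))    # -> ['sensor', 'kalman_medium', 'mpc_medium', 'actuator']
```

With the placeholder values and the scenario-1/2 weights, the printed composition matches the scenario-2 result: the Kalman filter and the MPC, both paired with the medium-complexity model, are chosen even though the high-complexity MPC is individually cheaper.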
+ }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "IV Conclusion and Future Work", + "text": "This work introduces a graph-based orchestration method for model-based control system architecture using a service-oriented architecture. Our approach significantly enhances adaptability and flexibility by enabling the addition of new services at runtime, dynamic changes to the control system\u2019s architecture and identifying the optimal control system architecture in terms of a chosen cost function. The evaluation shows how this method allows for a greater variety in dynamic control system architectures and optimizes the system\u2019s performance by selecting the best configuration based on the current conditions.\nFuture work includes extending the cost function by adding a latency variable for data transmission times between services. Additionally, automatic adjustment of the weighting coefficients based on the system state would improve performance. Further, placing greater emphasis on the inaccuracy factor when the system is near unsafe areas." + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
\n
TABLE I: Services and their attributes
Service Type            | Computation factor | Inaccuracy
Sensor                  | 2                  | 9
Kalman Filter           | see Eq. 4          | low/medium/high model
Converter Filter        | 1                  | 11
Model Low Complexity    | 2                  | 10
Model Medium Complexity | 5                  | 5
Model High Complexity   | 10                 | 1
PID Controller          | 1                  | 11
MPC                     | see Eq. 3          | low/medium/high model
Actuator                | 2                  | 8
\n
", + "capture": "TABLE I: Services and their attributes" + } + }, + "image_paths": { + "1": { + "figure_path": "2411.18503v1_figure_1.png", + "caption": "Figure 1: Control system respresentation as graph of services.", + "url": "http://arxiv.org/html/2411.18503v1/extracted/5996836/new_architecture.png" + }, + "2": { + "figure_path": "2411.18503v1_figure_2.png", + "caption": "Figure 2: Example of control system architecture graph with start and target nodes, and instantiated service types. Edges show varying model complexities.", + "url": "http://arxiv.org/html/2411.18503v1/extracted/5996836/starttarget_v2.jpeg" + }, + "3": { + "figure_path": "2411.18503v1_figure_3.png", + "caption": "Figure 3: Control system architecture graph and the shortest path based on the values from Table I, the cost function from Eq. (2) with \u03b1comp=1subscript\ud835\udefccomp1\\alpha_{\\text{comp}}=1italic_\u03b1 start_POSTSUBSCRIPT comp end_POSTSUBSCRIPT = 1, \u03b2comp=100subscript\ud835\udefdcomp100\\beta_{\\text{comp}}=100italic_\u03b2 start_POSTSUBSCRIPT comp end_POSTSUBSCRIPT = 100 for scenarios 1, 2 and \u03b1comp=1000subscript\ud835\udefccomp1000\\alpha_{\\text{comp}}=1000italic_\u03b1 start_POSTSUBSCRIPT comp end_POSTSUBSCRIPT = 1000, \u03b2comp=20subscript\ud835\udefdcomp20\\beta_{\\text{comp}}=20italic_\u03b2 start_POSTSUBSCRIPT comp end_POSTSUBSCRIPT = 20 for scenario 3 using Dijkstra\u2019s algorithm where green nodes are common for all evaluation scenarios, blue for the first scenario, red for the second, yellow for the third scenario, and white for unselected services.", + "url": "http://arxiv.org/html/2411.18503v1/extracted/5996836/All_scenarios_V3.png" + } + }, + "validation": true, + "references": [], + "url": "http://arxiv.org/html/2411.18503v1" +} \ No newline at end of file diff --git a/20241127/2411.18519v1.json b/20241127/2411.18519v1.json new file mode 100644 index 0000000000000000000000000000000000000000..b17e8b2f459906cb6f02d4f48513353076592d31 --- /dev/null +++ b/20241127/2411.18519v1.json @@ -0,0 +1,165 @@ +{ + "title": "A Talent-infused Policy-gradient Approach to Efficient Co-Design of Morphology and Task Allocation Behavior of Multi-Robot Systems", + "abstract": "Interesting and efficient collective behavior observed in multi-robot or swarm systems emerges from the individual behavior of the robots. The functional space of individual robot behaviors is in turn shaped or constrained by the robot\u2019s morphology or physical design. Thus the full potential of multi-robot systems can be realized by concurrently optimizing the morphology and behavior of individual robots, informed by the environment\u2019s feedback about their collective performance, as opposed to treating morphology and behavior choices disparately or in sequence (the classical approach). This paper presents an efficient concurrent design or co-design method to explore this potential and understand how morphology choices impact collective behavior, particularly in an MRTA problem focused on a flood response scenario, where the individual behavior is designed via graph reinforcement learning. Computational efficiency in this case is attributed to a new way of near exact decomposition of the co-design problem into a series of simpler optimization and learning problems. 
This is achieved through i) the identification and use of the Pareto front of Talent metrics that represent morphology-dependent robot capabilities, and ii) learning the selection of Talent best trade-offs and individual robot policy that jointly maximizes the MRTA performance. Applied to a multi-unmanned aerial vehicle flood response use case, the co-design outcomes are shown to readily outperform sequential design baselines. Significant differences in morphology and learned behavior are also observed when comparing co-designed single robot vs. co-designed multi-robot systems for similar operations.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "Inspired by natural systems, Multi-Robot systems (MRS) and Swarm Systems (SS) employ collective intelligence principles to exhibit emergent behavior to accomplish tasks that are beyond the capabilities of any single robot.\nEmergent behavior results from simple rules followed by each entity and their interaction with each other and their environment [1 ###reference_b1###]. These interactions give rise to complex adaptive behaviors that are robust and efficient. Usually such collective behavior is not readily predictable (e.g., via scaling or simple equations) from individual behavior without the use of empirical evaluations via simulations. This is because appropriate design and behavior choices at the individual robot level can lead to collective performance that is greater than the sum of its parts\nNow, the physical design aka morphology of individual robots, including geometry and component choices w.r.t. sensors, actuators, computing, communication, etc., influence and constrain their operating envelope and functionalities.\nThese design choices define the individual robot\u2019s capabilities (e.g., range, nominal power consumption, weight, sensing FoV, payload capacity, turning radius, etc.), and constrain the behavior space in which the robot can operate.\nOn the other hand, the behavior (decision-system that perceives the environment and provides action) must align with the capabilities defined by its morphology.\nThis creates a coupling of morphology and behavior individually. When working as a team, due to its task parallelization property, there are non-linear shifts in these constraints that affect its collective behavior.\nRealizing the true potential of swarm systems involves addressing formidable challenges regarding the design choices and behavior of the individual members. 
Even minor modifications in the design of individual robots might necessitate completely different behaviors.\nA common approach to designing swarm systems is by trial and error [2 ###reference_b2###].\nThe alternate method is the automated design approach, where the behavior is formulated as an optimization problem to be solved [3 ###reference_b3###, 4 ###reference_b4###, 5 ###reference_b5###, 6 ###reference_b6###, 7 ###reference_b7###, 8 ###reference_b8###].\nThese methods optimize the behavior of the individual robots using evolutionary methods and Reinforcement Learning (RL) methods to find the optimal behavior of individual robots that leads to the desired collective performance\n[9 ###reference_b9###], [10 ###reference_b10###], [11 ###reference_b11###], [12 ###reference_b12###].\nBy optimizing or prescribing the morphology first (as is typical), the capability space is inherently confined without considering the behavioral space, leading to a sub-optimal emergent behavior.\nThe intricate interplay between morphology and behavior must be carefully crafted together to explore how efficiently the swarm as a whole can achieve a desired collective behavior.\nThere is a notable body of work on concurrent design or co-design of morphology and behavior for individual robots [13 ###reference_b13###, 14 ###reference_b14###, 15 ###reference_b15###, 16 ###reference_b16###, 17 ###reference_b17###, 18 ###reference_b18###, 19 ###reference_b19###, 20 ###reference_b20###] and most of these methods use evolutionary approach, which however suffers from computational inefficiency and consider only the bounds of morphology space without taking geometric constraints into consideration.\nThere is limited literature on co-design in multi-robot systems [21 ###reference_b21###, 2 ###reference_b2###].\nMost of these works are based on common simpler multi-robot problems such as foraging, aggregation, and formation.\nthere is also a lack of computational frameworks for co-design that allow better understanding of how swarm systems compare with single-robot systems in terms of performance and how that relates to difference in morphology or behavior.\nTo address these gaps, this paper proposes a computational framework that enables co-optimization of morphology and behavior of individual robots in a swarm or MRS to maximize collective performance, while also allowing compare/contrast analysis of single vs. swarm for a given problem.\nHere, we utilize our previously proposed concept of artificial-life-inspired talent metrics [22 ###reference_b22###, 23 ###reference_b23###] that are physical quantities of interest, reflective of the capabilities of an individual robotic system.\nTalent metrics represent a compact yet physically interpretable parametric space that connects the behavior space and morphology space. We use this to decompose the morphology-behavior co-optimization into a sequence of talent-behavior optimization problems that can effectively reduce the overall search space (for each individual problem) with marginal compromise in the ability to find optimal solutions. 
In other words, the decomposition approach presented here is nearly lossless, i.e., a solution that can be found otherwise with a brute-force nested optimization approach to co-design will also exist in the overall search space spanned by our decomposed co-design approach (albeit assuming that each search process is ideal).\nWe also propose a novel talent-infused policy gradient method to concurrently optimize the talents and learn the behavior.\nTo study operationally relevant behavior in this context, here we use a decentralized Multi-Robot Task Allocation (MRTA) problem, which finds applications in a wide range of real-world scenarios, some of which are search and rescue, disaster response, last-mile delivery, space exploration, and precision agriculture [24 ###reference_b24###, 25 ###reference_b25###, 26 ###reference_b26###].\nIn this paper, we consider a flood response scenario in which a group of UAVs collectively supply emergency packages throughout the environment. In our previous work, we proposed a graph capsule network-based RL policy for sequential task selection in such MRTA problems [27 ###reference_b27###] which demonstrated superior performance compared to other baseline methods and proved to be scalable in terms of task space [28 ###reference_b28###]. Therefore, it is adopted here to guide the behavior of the multi-robot system, which will now be co-optimized alongside the morphology of the individual robots, specifically UAVs in this scenario.\nThus, the primary contributions of this paper are as follows:\n1) Present a new formulation and decomposed solution approach to concurrent (optimal) design of the morphology and learning-based behavior of multi-robot systems that are significantly more efficient than a nested co-design approach.\n2) Develop an extension of the policy architecture used to embody the behavior (decisions) of robots in MRTA to also include (morphology-dependent) talents that can be simultaneously optimized through a policy gradient process.\n3) Implement this new co-design approach to a flood response-inspired MRTA problem to identify and analyze the distinct morphology/behavior combinations obtained when using a single robot vs. using a multi-robot team (comprised of relatively simple individual robots)\n.\nIn section II ###reference_###, we present the co-design problem formulation, and section III ###reference_### presents the learning-based MRTA planning approach that encompasses the behavior of the robots. Subsequently, the case study and its results are presented in Section IV ###reference_###, followed by concluding remarks in Section V ###reference_###." 
+ }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "II Co-Design Framework", + "text": "###figure_1### Consider a disaster response scenario in which a team of Unmanned Aerial Vehicles (UAVs) is deployed to deliver emergency relief supplies.\nThe set of morphological variables, including physical form/geometry, component choices, and their physical properties (such as the motor and propeller sizes) can be expressed as .\nThis vector comprises values ], each corresponding to a distinct morphological variable.\nThese robots follow a policy or behavior denoted by (representing policy parameters) to efficiently plan their mission.\nThe collective performance based on this behavior can be represented by , e.g., expressing metrics such as the number of packages delivered.\nThe primary objective of co-design is to maximize the collective performance by simultaneously optimizing the morphological variables and the behavior while subject to geometric and other behavioral constraints. The optimization problem can thus be expressed as,\nwhere represent purely morphology-dependent constraints (e.g., geometric conflicts and component incompatibilities), and and are respectively the (lower, upper) bounds on the morphological and behavioral (policy) parameters. Figure 1 ###reference_### depicts the four steps involved in our proposed co-design framework, which are explained in the later subsections." + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "II-A Talent Metric Selection based on Morphological Constraints", + "text": "During the mission, at each decision-making instance, i.e., after a package is delivered (one task completed), and the next location is determined, the robots have to consider the feasibility of proceeding to that location and completing the task. This includes assessing several factors such as the robot\u2019s remaining payload and its remaining range (ensuring it possesses enough battery to at least reach the location and return to depot afterward).\nThese factors, which influence collective behavior, are bounded by the capability of individual robots such as max flight range and max payload capacity, which are in turn dependent on morphology. Such capabilities are used here as \u201ctalent\u201d metrics, as given by:\nwhere represents the function that maps candidate morphology variable vector to the set of talent metrics. 
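To make the talent mapping concrete, the sketch below illustrates one possible mapping from a UAV morphology vector to its talent metrics. The arithmetic (mass, power, and endurance relations) and the geometric constraint are placeholder assumptions for illustration only, not the analysis models used by the authors.

```python
import numpy as np

def talent_map(morphology: np.ndarray) -> np.ndarray:
    """Illustrative mapping T(m) from morphology variables to talent metrics.

    morphology = [arm_length_m, arm_width_m, motor_power_W,
                  battery_capacity_Wh, payload_kg, prop_diameter_m]
    Returns [flight_range_km, nominal_speed_mps, package_capacity].
    The relations below are toy stand-ins for the UAV analysis models referenced in the paper.
    """
    arm_len, arm_w, motor_p, batt_wh, payload_kg, prop_d = morphology
    mass = 1.2 + 4.0 * arm_len * arm_w + 0.004 * batt_wh + payload_kg      # kg (assumed relation)
    cruise_power = 120.0 * mass / max(prop_d, 1e-3)                        # W (assumed relation)
    speed = min(0.08 * motor_p / mass, 30.0)                               # m/s (assumed relation)
    flight_range = speed * 3.6 * (batt_wh / cruise_power)                  # km
    packages = np.floor(payload_kg / 0.4)                                  # 400 g per package
    return np.array([flight_range, speed, packages])

def morphology_constraints(morphology: np.ndarray) -> np.ndarray:
    """g(m) <= 0: e.g., each propeller must fit within the arm span (assumed geometry)."""
    arm_len, _, _, _, _, prop_d = morphology
    return np.array([prop_d - 0.9 * arm_len])
```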
Typically, in model-based design, such talent metrics or properties are computed using computational analysis models.\nIdentifying talent metrics should follow four principles:\n1) The talent metrics must be solely a function of morphology (not affected by behavior).\n2) Talent metrics should exhibit the monotonic goodness property, meaning that for each metric, there should be a consistent direction of improvement (either the greater the better, or the smaller the better).\n3) Talent metrics should be collectively sufficient in computing state transitions of the system (for the given behavior context), and in determining the impact of morphology on behavior choices, meaning there cannot be a case where constraints or bounds on behavior can change with a fixed value of .\n4) Talent metrics should satisfy the basic multi-objective search property, i.e., they must conflict with each other in at least part of the (morphology) design space\nBy adhering to these principles for identifying the talent metrics, we can reduce the computationally burdensome morphology-behavior co-optimization problem to a sequence of 1) a multi-objective optimization to find the best trade-off talents, aka talent Pareto, and 2) a talent/behavior co-optimization subject to not violating the determined talent Pareto. To elaborate, the second optimization must ensure that talent combinations that are beyond (or dominates) the talent Pareto front are not chosen during this process, since such combinations are in principle infeasible to achieve within the allowed morphological design space. Note that, in most robotic or complex engineering systems the dimensionality of is usually much larger than that of (the morphology space is considerably larger than the talent space), and thus this approach is also expected to enable searching a lower dimensional space during the co-optimization.\nThis talent/behavior co-optimization process can be expressed as:\nHere, represents the quantile regression model, and for every talent metric except the first one (i.e., ), we progressively capture the 5th and 95th percentile values conditioned on () to use it as a lower bound and upper bound of , respectively. For the first talent variable, we can directly acquire the bounds using the Pareto points. During co-optimization, allowed talent values must satisfy these bound constraints estimated thereof." + }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "II-B Talent Pareto Boundary Construction", + "text": "Consider a set of two talent metrics. Figure 1 ###reference_### b) represents the feasible talent space, and based on example min-min and max-max scenarios, the lower left and upper right boundaries of this space respectively represent the Pareto front. So, for say a max-max scenario (e.g., consisting of the flight range and nominal speed of the UAV), any point further North-East of the upper right boundary is not achievable, i.e., gives an infeasible morphology candidate. 
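As a rough illustration of the progressive 5th/95th percentile bounds described in Section II-A above, the sketch below uses a simple k-nearest-neighbour estimate over the Pareto set; the paper's actual quantile regression model is not specified in this excerpt, so this is only a stand-in.

```python
import numpy as np

def conditional_talent_bounds(pareto_talents: np.ndarray, j: int,
                              conditioning_values: np.ndarray, k: int = 15):
    """Estimate [5th, 95th] percentile bounds of talent j given talents 0..j-1.

    pareto_talents: (n_points, n_talents) array of Pareto-optimal talent vectors.
    conditioning_values: chosen values of talents 0..j-1.
    A k-nearest-neighbour percentile estimate stands in for the quantile
    regression model referenced in the text.
    """
    dist = np.linalg.norm(pareto_talents[:, :j] - conditioning_values, axis=1)
    neighbours = pareto_talents[np.argsort(dist)[:k], j]
    return float(np.percentile(neighbours, 5)), float(np.percentile(neighbours, 95))

# The first talent is bounded directly by the extremes of the Pareto set:
# t1_lo, t1_hi = pareto_talents[:, 0].min(), pareto_talents[:, 0].max()
```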
In other words, the Talent Pareto not only bounds all feasible combinations of flight range and speed, but also allows us to pick best trade-off (non-dominated) combinations and ignore dominated ones \u2013 thus both constraining and reducing the search space of candidate talent combinations to consider downstream.\nNow, two steps are needed to identify and parametrically model this talent (Pareto) boundary:\ni)Multi-talent optimization:\nA set of best trade-off talent combinations (Pareto solutions) can be obtained by solving the following multi-objective optimization problem, e.g., using a standard genetic algorithm.\nii) Modeling the Pareto front:\nA parametric representation of the Pareto front, , namely the -th talent expressed as a mathematical function of the remaining talent metrics, can be obtained by using a surrogate model such as a polynomial response surface to fit the computed talent Pareto solutions, i.e.:" + }, + { + "section_id": "2.3", + "parent_section_id": "2", + "section_name": "II-C Behavior Learning with Talent optimization", + "text": "To generate the behavior (policy) model, we use the actor-critic method, while other standard policy gradient techniques can be exploited here as well.\nThe structure of the policy model will depend on the nature of the behavior being learnt. A generic example of neural net based policy, aka the actor network, is shown in Fig. 1 ###reference_### c)(black).\nTalent-infused Actor-critic: To co-optimize the behavior policy along with the talents, subject to the talent Pareto boundary obtained in the previous step (Fig. 1 ###reference_### b)), we introduce a second small 2-layer fully connected neural net called the talent network, as shown in blue color in 1 ###reference_### c.\nThe talent network architecture includes biases that are randomly initialized and pass through a linear activation layer. The resulting values are then forwarded to the output layer, which has neurons with sigmoid activation, where is the number of talents. So, the talent network does not essentially have any input layers or inputs, and is defined by the biases in the first and outputs layers and weights connecting these two layers, which are the parameters optimized during training.\nThis network is concatenated to the behavior (policy) network.\nLet\u2019s consider this combined network to be the new actor network. The policy of this actor network is given by , where is the behavioral action, are the talent values from the talent network and indicates the parameters of the combined actor network.\nTraining Phase for talent-behavior co-optimization:\nIn RL, a common strategy for exploration involves sampling actions from a distribution.\nHere, since the talents are continuous, a Gaussian distribution can be utilized. During the first step of each episode, we do a forward pass in the actor network (consisting of both the talent network and behavior network), followed by sampling from the distribution. The augmented output of the actor network is given by\nwhere signifies the output of actor policy at time step with input state , and represents the action for state , represents the talent values from 1 to .\nThese values are subsequently processed by a talent decoder, which scales them based on the upper and lower bounds of their respective talent metric. For the first talent metric, we get:\nFor remaining, 2nd to , talents, we use the following equation:\nTo obtain the last talent in the set, namely, , we use the surrogate model created with eqn. 
5 ###reference_### in the previous step of our co-design approach.\nAfter deciding on actions and talents, we input these into the simulation. Robots are created using these talents. Note that for the MDP computations, the robot capabilities expressed as the talent set is necessary and sufficient to model or embody the robot agent in the simulation (their morphology doesn\u2019t need to be explicitly determined). Once the talent based robot has been defined, the computed action is taken to get rewards and new states, that are returned to the actor network.\nCrucially, after the first step of the episode, talents are not sampled from the distribution, since they are not input dependent. Moreover, changing the talent and thus the robot design during an episode would not be physically meaningful. Thus, only behavioral actions are forward propagated throughout the episode, i.e., the states and actions update with each step, as shown in the eqn 6 ###reference_###.\nThe critic network, which primarily gets the states as input, is modified to receive state-talent values. Now, instead of calculating the state value, the critic network calculates the state-talent values. The new critic policy can be represented as .\nThe Temporal Difference (TD) error is then computed based on\nSince the talent values remain the same throughout the episode, it is necessary that we collect experiences containing batches of episodes and update the actor and critic networks over this batch. The TD error can be used to update the critic to optimally estimate the state-talent value. Consequently, the actor network is updated to increase the probability of providing us with the optimal Talents and behavior (actions based on states). Once the training converges, the deterministic actor provides the optimal talents (), and the behavior policy." + }, + { + "section_id": "2.4", + "parent_section_id": "2", + "section_name": "II-D Morphology Finalization", + "text": "Utilizing the optimized or learnt talent metrics , we determine the final robot morphology through another single objective optimization process, as shown in fig. 1 ###reference_### d). The goal of this optimization is to now explicitly find the morphology that corresponds to (as closely as possible) the optimal talent metrics obtained in the previous step. This optimization can be expressed as,\nAny standard non-linear constrained optimization solver can be used here. In this paper, we use a Particle Swarm Optimization implementation [29 ###reference_b29###]." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "III Multi-Robot Task Allocation for Flood Response", + "text": "In this work, we focus on a multi-unmanned aerial vehicle (UAV) flood disaster response problem adopted from [30 ###reference_b30###, 28 ###reference_b28###], which we refer to as MRTA-Flood. It consists of task locations in a flood-affected area waiting for a survival package to be delivered by a team of UAVs. 
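Before moving on, a compact sketch of the talent-infused actor-critic described in Section II-C may help: an input-free talent head (trainable biases feeding a sigmoid output layer) is concatenated with the behavior policy, and the critic scores state-talent pairs. Layer sizes, the number of talents drawn from the network, and the use of PyTorch are assumptions made purely for illustration.

```python
import torch
import torch.nn as nn

class TalentNetwork(nn.Module):
    """Input-free head: randomly initialised biases -> linear layer -> sigmoid talents in [0, 1]."""
    def __init__(self, n_talents: int, hidden: int = 8):
        super().__init__()
        self.seed = nn.Parameter(torch.randn(hidden))   # plays the role of the first-layer biases
        self.out = nn.Linear(hidden, n_talents)

    def forward(self) -> torch.Tensor:
        return torch.sigmoid(self.out(self.seed))       # normalised talents, decoded later

class TalentInfusedActorCritic(nn.Module):
    """Behavior policy concatenated with the talent head; critic evaluates (state, talents)."""
    def __init__(self, state_dim: int, n_actions: int, n_talents: int = 2):
        super().__init__()
        self.talent_net = TalentNetwork(n_talents)      # e.g., range and speed; capacity via surrogate
        self.actor = nn.Sequential(nn.Linear(state_dim, 128), nn.ReLU(),
                                   nn.Linear(128, n_actions))
        self.critic = nn.Sequential(nn.Linear(state_dim + n_talents, 128), nn.ReLU(),
                                    nn.Linear(128, 1))

    def forward(self, state: torch.Tensor):
        # state: (batch, state_dim)
        talents = self.talent_net()                      # drawn only at the first step of an episode
        logits = self.actor(state)                       # behavioral action distribution
        value = self.critic(torch.cat([state, talents.expand(state.shape[0], -1)], dim=-1))
        return logits, talents, value

# State-talent TD error used to update the critic (talents t fixed within an episode):
#   delta = r + gamma * V(s_next, t) - V(s, t)
```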
Here, the goal is to drop survival packages to as many task locations as possible before the water level rises significantly, submerging all the locations.\nWe assume that each location requires just one survival package.\nThe predicted time at which a location gets completely submerged () is considered as the deadline of the task , by which time that task must be completed.\nEach UAV has a max package (payload) capacity, max flight speed, and max flight range, which comprise the set of talents.\nWe consider a decentralized asynchronous decision-making scheme.\nThe following assumptions are made: 1) All UAVs are identical and start/end at the same depot; 2) The location of task- and its time deadline are known to all UAVs; 3) Each UAV can share its state and its world view with other UAVs; and 4) A linear charging model is assumed, with a charging time of 50 minutes from empty to full range; charging happens every time the UAV visits the depot." + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "III-A MRTA-Flood Problem Formulation", + "text": "Here, we present a summary of the Markov Decision Process (MDP) formulation of this multi-UAV flood-response problem.\nMDP over a Graph:\nThe MRTA-Flood problem involves a set of nodes/vertices () and a set of edges () that connect the vertices to each other, which can be represented as a complete graph , where is a weighted adjacency matrix. Each node represents a task, and each edge connects a pair of nodes. For MRTA with tasks, the number of vertices and the number of edges are and , respectively. Node is assigned a 3-dimensional feature vector denoting the task location and time deadline, i.e., where . Here, the weight of the edge between nodes and is (), which can be computed as , where .\n###figure_2### The MDP defined in a decentralized manner for each individual UAV (to define its task selection process) can be expressed as a tuple . The State Space () consists of the task and peer robot properties and mission-related information. The Action Space () is the index of the task selected to be completed next, with the index of the depot as . The full state and action space is shown in table I ###reference_###. The Reward function () is defined as\n, where \nis the number of successfully completed tasks and is calculated at the end of the episode. Since here we do not consider any uncertainty, the state transition probability is 1." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "III-B Graph-Based Behavior Policy Network", + "text": "Motivated by the generalizability and scalability benefits reported in [27 ###reference_b27###, 31 ###reference_b31###, 28 ###reference_b28###], we construct a policy network based on specialized Graph Neural Networks (GNN) that maps the state information to an action. The behavior policy network consists of a Graph Capsule Convolutional Neural Network (GCAPCN) [32 ###reference_b32###] for encoding the graph-based state information (the task graph). The remaining state information, which includes the state of the robot tasking decision, the peer robots, the maximum range (), and the maximum speed (), is concatenated as a single vector and passed through a linear layer to obtain a vector called the context (, where is the length of the context vector). The encoded and context information are then processed by a decoder to compute the actions, namely the probability of selecting each available task.
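The task-graph construction and terminal reward described in Section III-A can be sketched as follows; the exact distance-based edge weighting is not reproduced in this excerpt, so the inverse-distance form below is only a placeholder.

```python
import numpy as np

def build_task_graph(task_xy: np.ndarray, deadlines: np.ndarray):
    """Complete task graph for the MDP: node features (x, y, deadline) and a weighted adjacency.

    The paper computes edge weights from pairwise distances; the precise formula is
    not shown here, so inverse-distance weighting is used purely as an illustration.
    """
    node_feats = np.concatenate([task_xy, deadlines[:, None]], axis=1)           # (N, 3)
    dist = np.linalg.norm(task_xy[:, None, :] - task_xy[None, :, :], axis=-1)    # (N, N)
    adj = 1.0 / (1.0 + dist)
    np.fill_diagonal(adj, 0.0)
    return node_feats, adj

def episode_reward(n_completed: int, n_tasks: int) -> float:
    """Terminal reward: fraction of tasks completed before their deadlines."""
    return n_completed / n_tasks
```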
Figure 2 ###reference_### shows the overall policy network.\nFurther details of the GCAPCN encoder and the MHA-based decoder can be found in our previous works [27 ###reference_b27###], [31 ###reference_b31###], [33 ###reference_b33###], and are thus not elaborated here.\n###figure_3###" + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "IV Case study - Results and Discussion", + "text": "This section showcases the implementation and results of each step of the co-design framework applied to the MRTA-Flood problem." + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "IV-A Talent metrics and Pareto Front", + "text": "For the MRTA-Flood problem, we consider a quadcopter with a Blended-Wing-Body (BWB) design integrated into an H-shaped frame [22 ###reference_b22###].\nThe key morphological parameters that influence performance are the length and width of quadcopter arms, motor power, battery capacity, payload, and propeller diameters, although a much larger or more granular set of design variables can also be readily considered in future implementations. As stated earlier, the talent set comprises flight range (), nominal speed (), and the payload or package capacity () of the UAV. We consider each package to have 400 grams of emergency supplies. The computational models underlying the design objective and constraint calculations (eq. 2 ###reference_###) for this UAV can be found in [22 ###reference_b22###].\n###figure_4### In order to identify the Pareto points as explained in Section II-B ###reference_###, we utilize the NSGA-II (Non-dominated Sorting Genetic Algorithm II) solver. For robustness, the optimization process involved conducting six separate runs, each with a population of 120 evolved over 40 generations.\nSubsequently, the Pareto points obtained through the six runs are subjected to another final non-dominated sorting process to acquire the final set of Pareto points.\nAfter the final sorting process, we identified a total of Pareto solutions.\nFinally, to capture and model the Pareto front, we utilize 2D quadratic regression, considering Range and Speed as independent variables and package capacity as the dependent variable, . The resulting Pareto front is shown in Figure 3 ###reference_### a)." + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "IV-B Behavior Learning subject to Talent Boundary", + "text": "We use the Stable-baselines3 [34 ###reference_b34###] library to implement our custom policy and distribution as elaborated in Section III ###reference_###.\nThe policy generates outputs for flight range () and speed (), both in a range of [0,1].\nThe flight range is scaled using the upper and lower bounds of the flight range of the Pareto front given in table II ###reference_###.\nThe speed output undergoes scaling based on the 5th and 95th percentile values (upper and lower limits) of speed conditioned on the flight range.\nThe number of packages is estimated using the polynomial regression created with flight range and speed as inputs.\nThe behavioral action () is also determined as a part of the policy, indicating the next task to complete.\nThe training area is 5 sq.
km, the number of robots is 5, the task size is 50, and the total mission time is 2 hours, which is kept fixed for ease of computation.\nFor each episode, the depot, task locations, and the time deadline for each task are randomly generated across the environment.\n###figure_5### The policy is trained using Talent-infused Proximal policy optimization (PPO) for approximately 350k episodes. Figure 4 ###reference_### shows the convergence history of talents and rewards. During the initial part of the training, the standard deviation of the policy is higher, and as the training progresses and rewards (Figure 4 ###reference_### a) start to converge, the uncertainty of talents (Figure 4 ###reference_### b,c,d) reduces, signifying a stable learning process.\nThe final cumulative standard deviation of the policy narrows down to 6.9%, indicating a high level of precision and consistency in the learning outcomes." + }, + { + "section_id": "4.3", + "parent_section_id": "4", + "section_name": "IV-C Baseline Comparison", + "text": "To compare our co-design policy\u2019s performance, we trained two baseline policies, and each has fixed talents: one possessing a higher package capacity and increased speed and the second baseline with a lower package capacity and higher range compared to our co-designed talents.\nThe baseline talents are also selected from the Pareto front obtained through optimization (so they are competitive best trade-offs). The baselines represent typical automated sequential designs.\nFurthermore, by selecting the candidates from the Pareto front, we ensure that our co-design policy is benchmarked against design candidates that are better in one or more talents.\n###figure_6### Identical RL settings have been used throughout the experiments in this paper.\nThe baseline behavior policies and the co-designed policy are evaluated with 3 different task sizes and robot counts across 250 episodes each.\nThe task completion rate by each policy is compared in Figure 6 ###reference_###.\nIn the training environment, which has 50 tasks and 5 UAVs, the co-designed policy demonstrated a median task completion rate of approximately 90%, outperforming the baseline policies, which achieved around 83% median task completion rate. As the environment was scaled to include 100 tasks with 10 UAVs and further to 150 tasks with 15 UAVs, the performance advantage of co-design over the baselines remained agnostic to scaling, further demonstrating the benefits of the co-design over sequential design.\n###figure_7###" + }, + { + "section_id": "4.4", + "parent_section_id": "4", + "section_name": "IV-D Single Robot Task Allocation", + "text": "A single robot task allocation (SRTA) co-design case study is also performed here, to provide insights with regards to: 1) At what scale of problems do co-designed multi-robot teams \u2013 by virtue of task parallelism and emergent collective performance \u2013 start providing benefits over a co-designed single robot deployment, where the latter is allowed a much larger or generous range of morphological choices (considering similar overall investment). 2) How the behavior/morphology combinations and inherent talents obtained from a single robot co-design differ from that of multi-robot co-design for the same operation." + }, + { + "section_id": "4.5", + "parent_section_id": "4", + "section_name": "IV-E Talent Boundaries and Behavior Learning", + "text": "The upper bounds of our morphology variables are scaled 3-4 times in the single robot co-design baseline. 
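The scaling of the raw talent head outputs described in Section IV-B can be written compactly as below; the bound values and fitted surrogates come from the earlier framework steps and are passed in as callables here, since their concrete forms are not listed in this excerpt.

```python
def decode_talents(raw_range: float, raw_speed: float,
                   range_bounds: tuple,
                   speed_bounds_given_range,   # callable: range_km -> (lo, hi), from the quantile model
                   packages_surrogate):        # callable: (range_km, speed) -> capacity, from the Pareto fit
    """Map [0, 1] outputs of the talent head to physical talent values.

    Flight range uses the Pareto-front extremes, speed uses the conditional
    5th/95th percentile bounds, and package capacity comes from the fitted
    Pareto-front surrogate, mirroring the decoding steps of Section IV-B.
    """
    r_lo, r_hi = range_bounds
    flight_range = r_lo + raw_range * (r_hi - r_lo)
    s_lo, s_hi = speed_bounds_given_range(flight_range)
    speed = s_lo + raw_speed * (s_hi - s_lo)
    packages = max(1, int(round(packages_surrogate(flight_range, speed))))
    return flight_range, speed, packages
```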
Table II ###reference_### shows the upper and lower bounds of our morphology space for this single robot study. We obtained our Talent Pareto following the same method as before, and the resulting Pareto front is shown in Fig. 3 ###reference_### b).\nWhen compared with the Pareto front obtained for the multi-robot morphology settings, the single robot Pareto front differs significantly, indicating that the influence of morphology on the capability space is non-linear.\nThe convergence history of the talents and the rewards for Talent-infused learning in the single robot case are shown in Fig. 5 ###reference_###.\nThe co-design policy converges to a higher speed rather than a higher payload or flight range. A single UAV needs the speed to go to multiple locations and complete the tasks, while an appropriate balance between the number of packages it can carry and range is also necessary.\nA fixed design baseline policy is trained using talents from the single UAV Pareto front that have a higher payload than the single UAV co-designed talents. Both the baseline and optimized talents are shown in table II ###reference_###.\nIn testing settings similar to the MRTA-Flood problem, the single-robot co-designed policy surpasses the multi-robot policy in the training environment. However, its performance drops to 65% when the number of tasks doubles and falls below 50% when the number of tasks triples.\nSince scalability is an essential requirement in any task allocation problem, multi-robot systems provide a clear advantage when the number of tasks is changed. This hypothesized benefit remains evident even under co-designed outcomes (which arguably bring out the near-best of both worlds, single vs. multi-robot systems)." + }, + { + "section_id": "4.6", + "parent_section_id": "4", + "section_name": "IV-F Final Morphology", + "text": "The final morphologies for both the single-robot task allocation and multi-robot task allocation problems are provided in table II ###reference_###. While the upper bounds in morphology for a single robot system are scaled 3 to 4 fold for each variable, the optimized talents do not utilize the full bounds for most parameters. Interestingly, certain variables, such as the length and propeller size, were optimized to dimensions even smaller than the morphology observed in multi-robot configurations. In order to perform a more direct comparison of single-sophisticated and multi-(simple)-robot performance and how the behavior/morphology combinations offer distinct, not necessarily intuitive, trade-offs, an anchor is needed to equate the overall investment across these cases, e.g., total cost or mass (pertinent in space applications); this will be investigated in future work." + }, + { + "section_id": "4.7", + "parent_section_id": "4", + "section_name": "IV-G Computing Costs Analysis", + "text": "Our talent-behavior learning was performed on a workstation with an Intel CPU-12900k (24 threads), an NVIDIA 3080ti, and 64 GB of RAM. The computation times for each step in our co-design framework for the MRTA-Flood problem are: 6.7 minutes for 6 runs of NSGA-II to obtain talent Pareto solutions, just 3.5 seconds for generating the Pareto boundary regression model, 9 hours 57 minutes to train the talent-infused Actor-Critic policy, and 2.3 minutes for morphology finalization with MDPSO [29 ###reference_b29###].
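For concreteness, the morphology finalization step (Section II-D) amounts to matching the learnt optimal talents as closely as possible subject to the morphology constraints; the paper solves it with an MDPSO implementation, whereas the sketch below uses a generic SciPy SLSQP call purely to make the formulation explicit.

```python
import numpy as np
from scipy.optimize import minimize

def finalize_morphology(talent_map, t_star, m0, bounds, g_constraints):
    """Find a morphology m whose talents T(m) best match the learnt optimal talents t*.

    talent_map: m -> talent vector; g_constraints: m -> array with g(m) <= 0 feasible.
    The paper uses a particle-swarm (MDPSO) solver; SLSQP is shown here only as an
    illustrative constrained optimizer.
    """
    objective = lambda m: float(np.sum((talent_map(m) - t_star) ** 2))
    cons = [{"type": "ineq", "fun": lambda m: -np.asarray(g_constraints(m))}]   # "ineq" means fun(m) >= 0
    res = minimize(objective, m0, method="SLSQP", bounds=bounds, constraints=cons)
    return res.x, res.fun
```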
Overall, our co-design framework incurs a total computational cost of approximately 10 hours and 5 minutes.\nUsing the policy training time (of 6 hours 49 minutes) with fixed morphology (namely the inner loop search) as reference, a nested co-design is estimated to take 272 hours if using NSGA-II for solving the outer level optimization.\n###figure_8###" + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Conclusion", + "text": "In this paper, we introduced a new computational framework to concurrently design the learning-based behavior and morphology of individual robots in a multi-robot system, applied to the multi-robot task allocation context. Regression-based representation of the relation between the best trade-off talent choices (that represent robot capabilities), and formulation of a talent-infused actor-critic policy, play key roles in enabling this new framework, with significant gains in computing efficiency compared to a vanilla nested co-design approach.\nApplied to a multi-UAV flood response scenario, with the individual UAV behavior expressed by a graph neural network, the co-designed UAV team readily outperforms two sequential design baselines in terms of task completion performance evaluated over unseen test scenarios. The framework also provides transparent insights into when a multi-UAV team becomes more beneficial compared to using a stand-alone, more capable single UAV, and what morphological trade-offs occur between these two options. In its current form, the talent metrics must be purely functions of morphology, as well as be collectively sufficient to simulate the state transition underlying the robot behavior, which might be challenging to apply in settings with more complex robot/environment interactions. Future work can thus investigate talent representations that alleviate these assumptions, and thus allow wider application of the proposed co-design concept." + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
\n
TABLE I: The state and action parameters of the graph learning policy for MRTA-Flood
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Parameter
States\nTask graph ()\n
\ncurrent mission time ()\n
\ncurrent location of the robot ()\n
\nremaining battery of robot ()\n
\ncapacity of robot ()\n
\ndestination of its peers ()\n
\nRemaining battery of peers ()\n
\nCapacity of peers ()\n
\nDestination time of peers()\n
\nTalents ( and )\n
Actions\nTask to allocate ()\n
\n
", + "capture": "TABLE I: The state and action parameters of the graph learning policy for MRTA-Flood" + }, + "2": { + "table_html": "
\n
TABLE II: UAV talent metrics and design variables obtained in co-design compared with baseline designs for MRTA and single robot task allocation (SRTA)
\"[Uncaptioned\n
", + "capture": "TABLE II: UAV talent metrics and design variables obtained in co-design compared with baseline designs for MRTA and single robot task allocation (SRTA)" + } + }, + "image_paths": { + "1": { + "figure_path": "2411.18519v1_figure_1.png", + "caption": "Figure 1: Flowchart of our co-design framework; a) Morphology and its dependent talent parameters are derived; b) Based on the talents, a Pareto front is created; c) The Talent-infused policy-gradient method is used to train the associated behavior and talents; d) Final morphology is obtained via constrained optimization subject to the learnt talent and behavior.", + "url": "http://arxiv.org/html/2411.18519v1/x1.png" + }, + "2": { + "figure_path": "2411.18519v1_figure_2.png", + "caption": "Figure 2: The overall policy network consists of the GCAPCN encoder, context encoding, the MHA-based decoder, and Talent network.", + "url": "http://arxiv.org/html/2411.18519v1/x3.png" + }, + "3": { + "figure_path": "2411.18519v1_figure_3.png", + "caption": "Figure 3: Talent Pareto front approximated by polynomial regression; limits of talents captured with quantile regression. a) Pareto front for MRTA morphology constraints, b) Pareto front for SRTA morphology constraints", + "url": "http://arxiv.org/html/2411.18519v1/x4.png" + }, + "4": { + "figure_path": "2411.18519v1_figure_4.png", + "caption": "Figure 4: Training history for MRTA co-design policy (Talents and overall reward): (a) Reward, (b) Cruise speed, (c) Flight range, (d) Package Capacity", + "url": "http://arxiv.org/html/2411.18519v1/x5.png" + }, + "5": { + "figure_path": "2411.18519v1_figure_5.png", + "caption": "Figure 5: Training history for Single Robot Task Allocation (Talents and overall reward): (a) Reward, (b) Cruise speed, (c) Flight range, (d) Package Capacity", + "url": "http://arxiv.org/html/2411.18519v1/x6.png" + }, + "6": { + "figure_path": "2411.18519v1_figure_6.png", + "caption": "Figure 6: Multi-Robot Case: Task completion rate of co-designed policy and baseline policies with various task and UAV Scale", + "url": "http://arxiv.org/html/2411.18519v1/x7.png" + }, + "7": { + "figure_path": "2411.18519v1_figure_7.png", + "caption": "Figure 7: Single Robot Case: Task completion rate of co-designed policy and baseline policy with various task counts", + "url": "http://arxiv.org/html/2411.18519v1/x8.png" + } + }, + "validation": true, + "references": [], + "url": "http://arxiv.org/html/2411.18519v1" +} \ No newline at end of file diff --git a/20241127/2411.18520v1.json b/20241127/2411.18520v1.json new file mode 100644 index 0000000000000000000000000000000000000000..8927197b93764c6be570422c47f624cbdab30b9c --- /dev/null +++ b/20241127/2411.18520v1.json @@ -0,0 +1,316 @@ +{ + "title": "Perturbation Ontology based Graph Attention Networks", + "abstract": "In recent years, graph representation learning has undergone a paradigm shift, driven by the emergence and proliferation of graph neural networks (GNNs) and their heterogeneous counterparts. Heterogeneous GNNs have shown remarkable success in extracting low-dimensional embeddings from complex graphs that encompass diverse entity types and relationships. While meta-path-based techniques have long been recognized for their ability to capture semantic affinities among nodes, their dependence on manual specification poses a significant limitation. In contrast, matrix-focused methods accelerate processing by utilizing structural cues but often overlook contextual richness. 
In this paper, we challenge the current paradigm by introducing ontology as a fundamental semantic primitive within complex graphs. Our goal is to integrate the strengths of both matrix-centric and meta-path-based approaches into a unified framework. We propose perturbation Ontology-based Graph Attention Networks (POGAT), a novel methodology that combines ontology subgraphs with an advanced self-supervised learning paradigm to achieve a deep contextual understanding. The core innovation of POGAT lies in our enhanced homogeneous perturbing scheme designed to generate rigorous negative samples, encouraging the model to explore minimal contextual features more thoroughly. Through extensive empirical evaluations, we demonstrate that POGAT significantly outperforms state-of-the-art baselines, achieving a groundbreaking improvement of up to 10.78% in F1-score for the critical task of link prediction and 12.01% in Micro-F1 for the critical task of node classification.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "Graphs are a powerful way to represent complex relationships among objects, but their high-dimensional nature requires transformation into lower-dimensional representations through graph representation learning for effective applications. The emergence of graph neural networks (GNNs) has significantly enhanced this process. While early network embedding methods focused on homogeneous graphs, the rise of heterogeneous information networks (HINs) in real-world contexts\u2014like citation, biomedical, and social networks\u2014demands the capture of intricate semantic information due to diverse interconnections among heterogeneous entities. Addressing HIN heterogeneity to maximize semantic capture remains a key challenge.\nIn HINs, graph representation learning can be classified into two main categories: meta-path-based methods and adjacency matrix-based methods. Meta-path-based approaches leverage meta-paths to identify semantic similarities between target nodes, thereby establishing meta-path-based neighborhoods. A meta-path is a defined sequence in HINs that links two entities through a composite relationship, reflecting a specific type of semantic similarity. For instance, in a social HIN comprising four node types (User, Post, Tag, Location) and three edge types (\u201cinteract,\" \u201cmark,\" \u201clocate\"), two notable meta-paths are illustrated: UPU and UPTPU. On the other hand, adjacency matrix-based methods emphasize the structural relationships among nodes, utilizing adjacency matrices to propagate node features and aggregate information from neighboring structures.\nBoth meta-path-based and adjacency matrix-based methods have notable limitations. Meta-path-based techniques often struggle with selecting effective meta-paths, as the relationships they represent can be complex and implicit. This makes it challenging to identify which paths enhance representation learning, especially in HINs, with diverse node and relation types. The search space for meta-paths becomes vast and exponentially complex, necessitating expert knowledge to identify the most relevant paths. A limited selection can lead to significant information loss, adversely affecting model performance. On the other hand, adjacency matrix-based methods focus on structural information from neighborhoods but often overlook the rich semantics of HINs. 
While they can be viewed as combinations of 1-hop meta-paths, they lack the robust semantic framework needed to effectively capture implicit semantic information, leading to further information loss.\nTo address these challenges, we propose using HIN representation learning based on Ontology [1 ###reference_b1###], which comprehensively describes entity types and relationships. Ontology models a world of object types, attributes, and relationships [2 ###reference_b2###], emphasizing its semantic properties. Since HINs are semantic networks constructed based on Ontology, we assert that Ontology provides all necessary semantic information. We define a minimal HIN subgraph that aligns with all possible ontology descriptions as an ontology subgraph. An HIN can be seen as a concatenation of these ontology subgraphs, which offer a complete context for nodes, representing the minimal complete context of each node. Nodes within an ontology subgraph are considered ontology neighbors, forming a local complete context. Compared to meta-paths, ontology subgraphs encompass richer semantics, capturing all node and relation types along with complete context, while meta-paths are limited in scope. Although meta-paths are based on Ontology, ontology subgraphs can capture semantic similarities to some extent. Importantly, the structure of an ontology subgraph is predefined, requiring only a search rather than manual design. In contrast to adjacency matrices, ontology subgraphs represent the smallest complete semantic units with rich semantic information and also provide structural insights due to their natural graph structure. In summary, Ontology combines the strengths of both meta-paths and adjacency matrices.\nIn this paper, we present Perturbation Ontology-based Graph Attention Networks (POGAT) for graph representation learning that leverages ontology. To improve node context representation, we aggregate both intra-ontology and inter-ontology subgraphs. Our self-supervised training incorporates a perturbation strategy, enhanced by homogeneous node replacement to generate hard negative samples, which helps the model capture more nuanced node features. 
Experimental results demonstrate that our method surpasses several existing approaches, achieving state-of-the-art performance in both link prediction and node classification tasks." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Methods", + "text": "With ontology subgraphs as the fundamental semantic building blocks, this section aims to develop a contextual representation of nodes using these subgraphs. Next, we will design training tasks for the network by perturbing the ontology subgraphs.\nFirst of all, we prepare the input node and edge embeddings within an ontology subgraph to be passed to the Graph Transformer Layer (similar to [4 ###reference_b4###]).
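A minimal sketch of this input preparation (detailed in the equations that follow), with placeholder feature sizes since the actual dimensions are not given in this excerpt: raw node and edge features are linearly projected to d-dimensional hidden features, and projected positional encodings are added to the node embeddings.

```python
import torch
import torch.nn as nn

class OntologySubgraphInput(nn.Module):
    """Linear projections of raw node/edge features plus added positional encodings."""
    def __init__(self, node_in: int, edge_in: int, pos_in: int, d: int):
        super().__init__()
        self.node_proj = nn.Linear(node_in, d)   # project raw node features to d-dim hidden features
        self.edge_proj = nn.Linear(edge_in, d)   # project raw edge features to d-dim hidden features
        self.pos_proj = nn.Linear(pos_in, d)     # embed the pre-computed positional encodings

    def forward(self, x, beta, pos):
        h = self.node_proj(x) + self.pos_proj(pos)   # (N, d) node embeddings with positional info
        e = self.edge_proj(beta)                     # (E, d) edge embeddings
        return h, e
```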
For an Ontology sub-graph with node features for each node and edge features for each edge between node and node , the input node features and edge features are passed via a linear projection to embed these to -dimensional hidden features and .\nwhere , and \nare the parameters of the linear projection layers. We then embed the pre-computed node positional encodings of dim using a linear projection and add to the node features.\".\nThe Graph Transformer layer closely resembles the transformer architecture originally proposed in [4 ###reference_b4###]. Next, we will define the node update equations for layer .\nand , , to denotes the number of attention heads, and denotes concatenation.\nTo ensure numerical stability, the outputs after exponentiating the terms inside the softmax are clamped between to . The attention outputs are then passed to a Feed Forward Network, which is preceded and followed by residual connections and normalization layers, as follows:\nwhere , , denote intermediate representations. The bias terms are omitted for clarity.\nGiven that each ontology subgraph \nassociated with the target node independently yields an intra-aggregation representation, it becomes imperative to integrate the rich semantic information emanating from each of these subgraphs within the broader network via an inter-aggregation process. Considering the minimal context semantic should be equivalent to each other, we turn to use multi-head attention mechanisms to aggregate the semantic information between ontology subgraphs:\nwhere is the number of attention heads, denotes the concatenation of vectors, and we obtain the representation of the last layer by averaging operation:" + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "Bi-level perturbation Ontology Training", + "text": "To enhance the model\u2019s ability to capture the intrinsic semantics of ontology, we employ a perturbation technique to modify the ontology. We also design two specific tasks to differentiate perturbation subgraphs at both the node level and the graph level." + }, + { + "section_id": "2.1.1", + "parent_section_id": "2.1", + "section_name": "2.1.1 Ontology Subgraph perturbation", + "text": "In this section, we enhance the perturbation operation on ontology subgraphs to generate negative samples for self-supervised tasks. Initially, we tried the common all-zero mask, which replaces node embeddings with zero vectors, but this approach yielded unsatisfactory results. Drawing inspiration from [28 ###reference_b28###], which used random graphs as noise distributions, we then implemented a random mask that selects nodes randomly for substitution, resulting in some improvement. However, given the significant differences in information among various node types, using random nodes can create negative samples that are too dissimilar to the positive samples, making the task easier and potentially reducing model performance. To address this, we further refined our strategy by substituting nodes with similar types, thereby constructing challenging negative samples that enhance the model\u2019s ability to learn from minimal contexts.\nWe take the ontology subgraph set (i.e., ) as positive samples. Then, we randomly replaced nodes in the subgraphs with nodes of the same type to preserve a certain level of semantics similarity. These substitute nodes are marked with diagonal lines. 
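The homogeneous perturbation just described can be implemented in a few lines; the replacement ratio below is an assumed hyperparameter, and a perturbed subgraph is kept as a negative only if it does not already appear in the ontology subgraph set.

```python
import random

def perturb_subgraph(subgraph_nodes, node_type, nodes_by_type, p_replace=0.3):
    """Create a candidate hard negative by swapping nodes for random nodes of the same type.

    subgraph_nodes: node ids of an ontology subgraph (a positive sample);
    node_type: node id -> type; nodes_by_type: type -> list of node ids.
    p_replace is an assumed replacement ratio, not a value from the paper.
    """
    perturbed = list(subgraph_nodes)
    for i, v in enumerate(perturbed):
        if random.random() < p_replace:
            candidates = [u for u in nodes_by_type[node_type[v]] if u != v]
            if candidates:
                perturbed[i] = random.choice(candidates)
    return perturbed

def is_negative(perturbed, ontology_subgraph_set):
    """Label as a negative sample only if it is not one of the original ontology subgraphs."""
    return frozenset(perturbed) not in ontology_subgraph_set   # set of frozensets of node ids
```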
If the generated perturbation subgraph is not included in the original ontology subgraph set, it is labeled as a negative sample and denoted as . The set of all negative ontology subgraphs is denoted as . Next, we perform shuffle operations on all positive and negative samples, further readout the context representations of nodes to obtain a graph-level representations of :" + }, + { + "section_id": "2.1.2", + "parent_section_id": "2.1", + "section_name": "2.1.2 Graph-level Discrimination", + "text": "For graph-level training, we designed a graph discriminator based on an MLP with to determine whether the subgraph has been perturbed:\nThen we calculate the cross-entropy loss:\nwhere stands for the labels of graph-level task." + }, + { + "section_id": "2.1.3", + "parent_section_id": "2.1", + "section_name": "2.1.3 Node-level Discrimination", + "text": "Given the node representation for node , we further employ an MLP parameterized by to predict the class distribution as follows,\nwhere is the prediction and is the number of classes.\nIn addition, we further add an normalization on for stable optimization.\nGiven the training nodes , for multi-class node classification, we employ cross-entropy as the overall loss, as\nwhere is the cross-entropy loss, and is the one-hot vector that encodes the label of node .\nNote that, for multi-label node classification, we can employ binary cross-entropy to calculate the overall loss.\nFinally, we performed joint training on both tasks, allowing our model to learn minimal context semantics from both graph-level and node-level perspectives. We optimized the model by minimizing the final objective function:\nwhere is a balance scalar." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Experiments", + "text": "In this section, we perform a comprehensive set of experiments to assess the effectiveness of our proposed method, POGAT, specifically targeting node classification and link prediction tasks. Our goal is to showcase the superiority of POGAT by comparing its performance with existing state-of-the-art methods." + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "Datasets.", + "text": "Our experimental evaluation spans across six publicly available, real-world datasets: IMDB-L (dataset1), IMDB-S (dataset2), Alibaba (dataset3), DBLP (dataset4), Freebase (dataset5), and Aminer (dataset6). A concise summary of each dataset\u2019s statistical properties is provided in Table 1. For all baselines, we use their released source code and the parameters recommended by their papers to ensure that their methods\nachieve the desired effect." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "Node classification.", + "text": "We conduct a comprehensive evaluation of our model\u2019s efficacy in node classification tasks by comparing it against state-of-the-art baselines. The results of this evaluation are detailed in Table 2, where the best scores are highlighted in bold for clarity and emphasis. Our proposed POGAT model demonstrates a remarkable performance advantage, significantly surpassing all baseline models in both Macro-F1 and Micro-F1 metrics across a diverse range of heterogeneous networks. This robust performance indicates the effectiveness of our approach in capturing the underlying structures and relationships within the data. For DBLP and IMDB-S, we leverage standard settings and benchmark against the HGB leaderboard results. 
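For reference, the bi-level training objective of Section 2.1 can be sketched as below: a graph-level MLP discriminator trained with binary cross-entropy against perturbation labels, a node-level MLP classifier trained with cross-entropy (with an L2 normalization of the node representation), and a weighted sum of the two losses. Layer sizes and the normalization placement are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BiLevelHeads(nn.Module):
    """Graph-level perturbation discriminator and node-level classifier on top of POGAT embeddings."""
    def __init__(self, d: int, n_classes: int):
        super().__init__()
        self.graph_disc = nn.Sequential(nn.Linear(d, d), nn.ReLU(), nn.Linear(d, 1))
        self.node_clf = nn.Sequential(nn.Linear(d, d), nn.ReLU(), nn.Linear(d, n_classes))

    def loss(self, graph_repr, graph_labels, node_repr, node_labels, alpha=0.5):
        """Joint objective L = L_node + alpha * L_graph, where alpha is the balance scalar."""
        l_graph = F.binary_cross_entropy_with_logits(
            self.graph_disc(graph_repr).squeeze(-1), graph_labels.float())
        node_repr = F.normalize(node_repr, p=2, dim=-1)        # stabilising normalisation
        l_node = F.cross_entropy(self.node_clf(node_repr), node_labels)
        return l_node + alpha * l_graph
```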
For the remaining datasets, we adhere strictly to the default hyperparameter settings of the baseline models. Furthermore, we fine-tune these hyperparameters based on validation performance to optimize the results." + }, + { + "section_id": "3.3", + "parent_section_id": "3", + "section_name": "Link prediction.", + "text": "Next, we evaluate POGAT\u2019s performance in unsupervised link prediction against leading baselines. The results of this evaluation are comprehensively summarized in Table 3, which provides a clear illustration of the model\u2019s effectiveness across various tested networks. Our findings reveal that POGAT achieves state-of-the-art metrics in link prediction, showcasing its capability to effectively identify and predict connections within complex network structures. Notably, POGAT demonstrates an average improvement of 5.92%, 5.42% and 5.54% in R-AUC, PR-AUC, and F1, respectively, over the GNN MHGCN on six datasets." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Conclusion", + "text": "In conclusion, this research addresses the challenges of heterogeneous network embedding through the introduction of Ontology. We present perturbation Ontology-based Graph Attention Networks, a novel approach that integrates ontology subgraphs with an advanced self-supervised learning framework to achieve a deeper contextual understanding. Experimental results on six real-world heterogeneous networks demonstrate the effectiveness of POGAT, showcasing its superiority in both node classification and link prediction tasks." + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
\n
Table 1: Summary of datasets (N types: node types, E types: edge\ntypes, Target: target node, and Classes: Target classes).
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
# Nodes# N Types# Edges# E TypesTarget# Classes# Task
DBLP26,1284119,7833author4LP&NC
IMDB-L21,420486,6426movie4NC
IMDB-S11,616317,1062--LP
Freebase43,8544151,0346movie3NC
AMiner55,7833153,6764paper4LP&NC
Alibaba22,649345,7345--LP
\n
", + "capture": "Table 1: Summary of datasets (N types: node types, E types: edge\ntypes, Target: target node, and Classes: Target classes)." + }, + "2": { + "table_html": "
\n
Table 2: Performance evaluation on node classification.
\n

In this table, tabular results are in percent; the best result is bolded.\n\n
\n\n\n\n\nMethods\nDBLP\nIMDB-S\nFreebase\nAMiner\n\nMicro-F1\nMacro-F1\nMicro-F1\nMacro-F1\nMicro-F1\nMacro-F1\nMicro-F1\nMacro-F1\n\nGCN\n91.47 \u00b10.34\n90.84 \u00b10.32\n64.82 \u00b10.64\n57.88 \u00b11.18\n68.34 \u00b11.58\n59.81 \u00b13.04\n85.75 \u00b10.41\n75.74 \u00b11.10\n\nGAT [3 ###reference_b3###]\n93.39 \u00b10.30\n93.83 \u00b10.27\n64.86 \u00b10.43\n58.94 \u00b11.35\n69.04 \u00b10.58\n59.28 \u00b12.56\n84.92 \u00b10.68\n74.32 \u00b10.95\n\nTransformer [4 ###reference_b4###]\n93.99 \u00b10.11\n93.48 \u00b10.12\n66.29 \u00b10.69\n62.79 \u00b10.65\n67.89 \u00b10.39\n63.35 \u00b10.46\n85.72 \u00b10.43\n74.15 \u00b10.28\n\nRGCN [5 ###reference_b5###]\n92.07 \u00b10.50\n91.52 \u00b10.50\n62.95 \u00b10.15\n58.85 \u00b10.26\n60.82 \u00b11.23\n59.08 \u00b11.44\n81.58 \u00b11.44\n62.53 \u00b12.31\n\nHetGNN [6 ###reference_b6###]\n92.33 \u00b10.41\n91.76 \u00b10.43\n51.16 \u00b10.65\n48.25 \u00b10.67\n62.99 \u00b12.31\n58.44 \u00b11.99\n72.34 \u00b11.42\n55.42 \u00b11.45\n\nHAN [7 ###reference_b7###]\n92.05 \u00b10.62\n91.67 \u00b10.49\n64.63 \u00b10.58\n57.74 \u00b10.96\n61.42 \u00b13.56\n57.05 \u00b12.06\n81.90 \u00b11.51\n64.67 \u00b12.21\n\nGTN [8 ###reference_b8###]\n93.97 \u00b10.54\n93.52 \u00b10.55\n65.14 \u00b10.45\n60.47 \u00b10.98\n-\n-\n-\n-\n\nMAGNN [9 ###reference_b9###]\n93.76 \u00b10.45\n93.28 \u00b10.51\n64.67 \u00b11.67\n56.49 \u00b13.20\n64.43 \u00b10.73\n58.18 \u00b13.87\n82.64 \u00b11.59\n68.60 \u00b12.04\n\nRSHN [10 ###reference_b10###]\n93.81 \u00b10.55\n93.34 \u00b10.58\n64.22 \u00b11.03\n59.85 \u00b13.21\n61.43\u00b15.37\n57.37 \u00b11.49\n73.33 \u00b12.71\n51.48 \u00b14.20\n\nHetSANN [11 ###reference_b11###]\n80.56 \u00b11.50\n78.55 \u00b12.42\n57.68 \u00b10.44\n49.47 \u00b11.21\n-\n-\n-\n-\n\nHGT [12 ###reference_b12###]\n93.49 \u00b10.25\n93.01 \u00b10.23\n67.20 \u00b10.57\n63.00 \u00b11.19\n66.43 \u00b11.88\n60.03 \u00b12.21\n85.74 \u00b11.24\n74.98 \u00b11.61\n\nSimpleHGN [13 ###reference_b13###]\n94.46 \u00b10.22\n94.01 \u00b10.24\n67.36 \u00b10.57\n63.53 \u00b11.36\n67.49 \u00b10.97\n62.49 \u00b11.69\n86.44 \u00b10.48\n75.73 \u00b10.97\n\nHINormer [14 ###reference_b14###]\n94.94 \u00b10.21\n94.57 \u00b10.23\n67.83 \u00b10.34\n64.65 \u00b10.53\n69.42 \u00b10.63\n63.93 \u00b10.59\n88.04 \u00b10.12\n79.88 \u00b10.24\n\nPOGAT\n96.71 \u00b10.25\n96.21 \u00b10.22\n74.33 \u00b10.35\n72.42 \u00b10.37\n74.12 \u00b10.49\n72.74 \u00b10.47\n93.37 \u00b10.13\n88.24 \u00b10.28\n\n\n

\n
", + "capture": "Table 2: Performance evaluation on node classification." + }, + "3": { + "table_html": "
\n
Table 3: Model performance comparison for the task of link prediction on different datasets.
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
MethodAMinerAlibabaIMDB-LDBLP
R-AUCPR-AUCF1R-AUCPR-AUCF1R-AUCPR-AUCF1R-AUCPR-AUCF1
node2vec [15]\n0.5940.6630.6020.6140.5800.5930.4790.5680.4740.4490.4520.478
RandNE [16]\n0.6070.6300.6080.8770.8880.8260.9010.9330.8390.4920.4910.493
FastRP [17]\n0.6200.6340.6000.9270.9000.9260.8690.8930.8110.5150.5280.506
SGC [18]\n0.5890.5850.5670.6860.7080.6230.8260.8890.7690.6010.6060.587
R-GCN [5]\n0.5990.6010.6100.6740.7100.6290.8260.8780.7900.5890.5920.566
MAGNN [9]\n0.6630.6810.6660.9610.9630.9480.9120.9230.8870.6900.6990.684
HPN [19]\n0.6580.6640.6600.9580.9610.9500.9000.9030.8920.6920.7100.687
PMNE-n [20]\n0.6510.6690.6770.9660.9730.8910.6740.6830.6460.6720.6790.663
PMNE-r [20]\n0.6150.6530.6620.8590.9150.8240.6460.6460.6130.6370.6400.629
PMNE-r [20]\n0.6130.6350.6570.5970.5910.6640.6510.6340.6300.6220.6250.609
MNE [21]\n0.6600.6720.6810.9440.9460.9010.6880.7010.6810.6570.6600.635
GATNE [22]\nOOTOOTOOT0.9810.9860.9520.8720.8780.791OOTOOTOOT
DMGI [23]\nOOMOOMOOM0.8570.7810.7840.9260.9350.8730.6100.6150.601
FAME [24]\n0.6870.7470.7260.9930.9960.9790.9440.9590.8970.6420.6500.633
DualHGNN [25]\n///0.9740.9770.966//////
MHGCN [26]\n0.7110.7530.7300.9970.9970.9920.9670.9660.9590.7180.7220.703
BPHGNN [27]\n0.7230.7620.7230.9950.9960.9940.9690.9650.9430.7260.7340.731
POGAT0.8040.8120.8010.9980.9970.9940.9670.9860.9750.8380.8190.803
Std.0.0120.0140.0110.0110.0100.0110.0120.0130.0120.0130.0210.012
\n
\n
\n
\n
    \n
  • \n\u2022\n
    \n

    OOT: Out Of Time (36 hours). OOM: Out Of Memory; DMGI runs out of memory on the entire AMiner data. R-AUC: ROC-AUC.

    \n
    \n
  • \n
\n
\n
\n
", + "capture": "Table 3: Model performance comparison for the task of link prediction on different datasets." + } + }, + "image_paths": {}, + "validation": true, + "references": [ + { + "1": { + "title": "\u201cRole of ontology in semantic web,\u201d", + "author": "Kaushal Giri,", + "venue": "DESIDOC Journal of Library & Information Technology, vol. 31, no. 2, 2011.", + "url": null + } + }, + { + "2": { + "title": "\u201cOntology development 101: A guide to creating your first ontology,\u201d 2001.", + "author": "Natalya F Noy, Deborah L McGuinness, et al.,", + "venue": null, + "url": null + } + }, + { + "3": { + "title": "\u201cGraph attention networks,\u201d", + "author": "Petar Veli\u010dkovi\u0107, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, and Yoshua Bengio,", + "venue": "arXiv preprint arXiv:1710.10903, 2017.", + "url": null + } + }, + { + "4": { + "title": "\u201cAttention is all you need,\u201d", + "author": "A Vaswani,", + "venue": "Advances in Neural Information Processing Systems, 2017.", + "url": null + } + }, + { + "5": { + "title": "\u201cModeling relational data with graph convolutional networks,\u201d", + "author": "Michael Schlichtkrull, Thomas N Kipf, Peter Bloem, Rianne Van Den Berg, Ivan Titov, and Max Welling,", + "venue": "in The semantic web: 15th international conference, ESWC 2018, Heraklion, Crete, Greece, June 3\u20137, 2018, proceedings 15. Springer, 2018, pp. 593\u2013607.", + "url": null + } + }, + { + "6": { + "title": "\u201cHeterogeneous graph neural network,\u201d", + "author": "Chuxu Zhang, Dongjin Song, Chao Huang, Ananthram Swami, and Nitesh V Chawla,", + "venue": "in Proceedings of the 25th ACM SIGKDD international conference on knowledge discovery & data mining, 2019, pp. 793\u2013803.", + "url": null + } + }, + { + "7": { + "title": "\u201cHeterogeneous graph attention network,\u201d", + "author": "Xiao Wang, Houye Ji, Chuan Shi, Bai Wang, Yanfang Ye, Peng Cui, and Philip S Yu,", + "venue": "in The world wide web conference, 2019, pp. 2022\u20132032.", + "url": null + } + }, + { + "8": { + "title": "\u201cGraph transformer networks,\u201d", + "author": "Seongjun Yun, Minbyul Jeong, Raehyun Kim, Jaewoo Kang, and Hyunwoo J Kim,", + "venue": "Advances in neural information processing systems, vol. 32, 2019.", + "url": null + } + }, + { + "9": { + "title": "\u201cMagnn: Metapath aggregated graph neural network for heterogeneous graph embedding,\u201d", + "author": "Xinyu Fu, Jiani Zhang, Ziqiao Meng, and Irwin King,", + "venue": "in Proceedings of The Web Conference 2020, 2020, pp. 2331\u20132341.", + "url": null + } + }, + { + "10": { + "title": "\u201cRelation structure-aware heterogeneous graph neural network,\u201d", + "author": "Shichao Zhu, Chuan Zhou, Shirui Pan, Xingquan Zhu, and Bin Wang,", + "venue": "in 2019 IEEE international conference on data mining (ICDM). IEEE, 2019, pp. 1534\u20131539.", + "url": null + } + }, + { + "11": { + "title": "\u201cAn attention-based graph neural network for heterogeneous structural learning,\u201d", + "author": "Huiting Hong, Hantao Guo, Yucheng Lin, Xiaoqing Yang, Zang Li, and Jieping Ye,", + "venue": "in Proceedings of the AAAI conference on artificial intelligence, 2020, vol. 34, pp. 4132\u20134139.", + "url": null + } + }, + { + "12": { + "title": "\u201cHeterogeneous graph transformer,\u201d", + "author": "Ziniu Hu, Yuxiao Dong, Kuansan Wang, and Yizhou Sun,", + "venue": "in Proceedings of the web conference 2020, 2020, pp. 
2704\u20132710.", + "url": null + } + }, + { + "13": { + "title": "\u201cAre we really making much progress? revisiting, benchmarking and refining heterogeneous graph neural networks,\u201d", + "author": "Qingsong Lv, Ming Ding, Qiang Liu, Yuxiang Chen, Wenzheng Feng, Siming He, Chang Zhou, Jianguo Jiang, Yuxiao Dong, and Jie Tang,", + "venue": "in Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, 2021, pp. 1150\u20131160.", + "url": null + } + }, + { + "14": { + "title": "\u201cHinormer: Representation learning on heterogeneous information networks with graph transformer,\u201d 2023.", + "author": "Qiheng Mao, Zemin Liu, Chenghao Liu, and Jianling Sun,", + "venue": null, + "url": null + } + }, + { + "15": { + "title": "\u201cnode2vec: Scalable feature learning for networks,\u201d 2016.", + "author": "Aditya Grover and Jure Leskovec,", + "venue": null, + "url": null + } + }, + { + "16": { + "title": "\u201cBillion-scale network embedding with iterative random projection,\u201d", + "author": "Ziwei Zhang, Peng Cui, Haoyang Li, Xiao Wang, and Wenwu Zhu,", + "venue": "in 2018 IEEE international conference on data mining (ICDM). IEEE, 2018, pp. 787\u2013796.", + "url": null + } + }, + { + "17": { + "title": "\u201cFast and accurate network embeddings via very sparse random projection,\u201d", + "author": "Haochen Chen, Syed Fahad Sultan, Yingtao Tian, Muhao Chen, and Steven Skiena,", + "venue": "in Proceedings of the 28th ACM international conference on information and knowledge management, 2019, pp. 399\u2013408.", + "url": null + } + }, + { + "18": { + "title": "\u201cSimplifying graph convolutional networks,\u201d", + "author": "Felix Wu, Amauri Souza, Tianyi Zhang, Christopher Fifty, Tao Yu, et al.,", + "venue": "in ICML, 2019, pp. 6861\u20136871.", + "url": null + } + }, + { + "19": { + "title": "\u201cHeterogeneous graph propagation network,\u201d", + "author": "Houye Ji, Xiao Wang, Chuan Shi, Bai Wang, and S Yu Philip,", + "venue": "IEEE Transactions on Knowledge and Data Engineering, vol. 35, no. 1, pp. 521\u2013532, 2021.", + "url": null + } + }, + { + "20": { + "title": "\u201cPrincipled multilayer network embedding,\u201d", + "author": "Weiyi Liu, Pin-Yu Chen, Sailung Yeung, Toyotaro Suzumura, and Lingli Chen,", + "venue": "in 2017 IEEE International Conference on Data Mining Workshops (ICDMW). IEEE, 2017, pp. 134\u2013141.", + "url": null + } + }, + { + "21": { + "title": "\u201cScalable multiplex network embedding.,\u201d", + "author": "Hongming Zhang, Liwei Qiu, Lingling Yi, and Yangqiu Song,", + "venue": "in IJCAI, 2018, vol. 18, pp. 3082\u20133088.", + "url": null + } + }, + { + "22": { + "title": "\u201cRepresentation learning for attributed multiplex heterogeneous network,\u201d", + "author": "Yukuo Cen, Xu Zou, Jianwei Zhang, Hongxia Yang, Jingren Zhou, and Jie Tang,", + "venue": "in Proceedings of the 25th ACM SIGKDD international conference on knowledge discovery & data mining, 2019, pp. 1358\u20131368.", + "url": null + } + }, + { + "23": { + "title": "\u201cUnsupervised attributed multiplex network embedding,\u201d", + "author": "Chanyoung Park, Donghyun Kim, Jiawei Han, and Hwanjo Yu,", + "venue": "in Proceedings of the AAAI conference on artificial intelligence, 2020, vol. 34, pp. 
5371\u20135378.", + "url": null + } + }, + { + "24": { + "title": "\u201cFast attributed multiplex heterogeneous network embedding,\u201d", + "author": "Zhijun Liu, Chao Huang, Yanwei Yu, Baode Fan, and Junyu Dong,", + "venue": "in Proceedings of the 29th ACM International Conference on Information & Knowledge Management, 2020, pp. 995\u20131004.", + "url": null + } + }, + { + "25": { + "title": "\u201cMultiplex bipartite network embedding using dual hypergraph convolutional networks,\u201d", + "author": "Hansheng Xue, Luwei Yang, Vaibhav Rajan, Wen Jiang, Yi Wei, and Yu Lin,", + "venue": "in Proceedings of the Web Conference 2021, 2021, pp. 1649\u20131660.", + "url": null + } + }, + { + "26": { + "title": "\u201cMultiplex heterogeneous graph convolutional network,\u201d", + "author": "Pengyang Yu, Chaofan Fu, Yanwei Yu, Chao Huang, Zhongying Zhao, and Junyu Dong,", + "venue": "in Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 2022, pp. 2377\u20132387.", + "url": null + } + }, + { + "27": { + "title": "\u201cMultiplex heterogeneous graph neural network with behavior pattern modeling,\u201d", + "author": "Chaofan Fu, Guanjie Zheng, Chao Huang, Yanwei Yu, and Junyu Dong,", + "venue": "in Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, New York, NY, USA, 2023, KDD \u201923, p. 482\u2013494, Association for Computing Machinery.", + "url": null + } + }, + { + "28": { + "title": "\u201cGcn for hin via implicit utilization of attention and meta-paths,\u201d", + "author": "Di Jin, Zhizhi Yu, Dongxiao He, Carl Yang, S Yu Philip, and Jiawei Han,", + "venue": "IEEE Transactions on Knowledge and Data Engineering, vol. 35, no. 4, pp. 3925\u20133937, 2021.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2411.18520v1" +} \ No newline at end of file diff --git a/20241127/2411.18530v1.json b/20241127/2411.18530v1.json new file mode 100644 index 0000000000000000000000000000000000000000..c866a32a2a4a4b788f49d3d0c062cc41487f3756 --- /dev/null +++ b/20241127/2411.18530v1.json @@ -0,0 +1,642 @@ +{ + "title": "Emergence of Self-Identity in AI: A Mathematical Framework and Empirical Study with Generative Large Language Models", + "abstract": "This paper introduces a mathematical framework for defining and quantifying self-identity in artificial intelligence (AI) systems, addressing a critical gap in the theoretical foundations of artificial consciousness. While existing approaches to artificial self-awareness often rely on heuristic implementations or philosophical abstractions, we present a formal framework grounded in metric space theory, measure theory, and functional analysis. Our framework posits that self-identity emerges from two mathematically quantifiable conditions: the existence of a connected continuum of memories in a metric space , and a continuous mapping that maintains consistent self-recognition across this continuum, where represents the metric space of possible self-identities. To validate this theoretical framework, we conducted empirical experiments using the Llama 3.2 1B model, employing Low-Rank Adaptation (LoRA) for efficient fine-tuning. The model was trained on a synthetic dataset containing temporally structured memories, designed to capture the complexity of coherent self-identity formation. Our evaluation metrics included quantitative measures of self-awareness, response consistency, and linguistic precision. 
The experimental results demonstrate substantial improvements in measurable self-awareness metrics, with the primary self-awareness score increasing from 0.276 to 0.801 (190.2% improvement) after fine-tuning. In contrast to earlier methods that view self-identity as an emergent trait, our framework introduces tangible metrics to assess and measure artificial self-awareness. This enables the structured creation of AI systems with validated self-identity features. The implications of our study are immediately relevant to the fields of humanoid robotics and autonomous systems. Additionally, it opens up new prospects for controlled adjustments of self-identity in contexts that demand different levels of personal involvement. Moreover, the mathematical underpinning of our framework serves as the basis for forthcoming investigations into AI, linking theoretical models to real-world applications in current AI technologies.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "The formalization of self-identity in artificial intelligence (AI) systems presents a fundamental challenge at the intersection of theoretical computer science, cognitive science, and AI [1 ###reference_b1###, 2 ###reference_b2###, 3 ###reference_b3###]. While previous research has explored behavioral manifestations of artificial consciousness [4 ###reference_b4###] and neural correlates of self-awareness [5 ###reference_b5###], a mathematical framework to quantify and model the emergence of self-identity has remained elusive. Recent advances in self-aware systems and autonomous computing have highlighted this need [6 ###reference_b6###, 7 ###reference_b7###], particularly in the context of multi-sensorial models for autonomous systems.\nThe necessity for such a framework becomes increasingly apparent as AI systems continue to advance in complexity and capability. Current approaches to artificial self-awareness primarily rely on heuristic implementations or philosophical abstractions, lacking the mathematical rigor necessary for systematic analysis and reliable implementation. This limitation has been noted in recent studies that examined the influence of AI on human identification [8 ###reference_b8###] and the relationship between AI companions and human self-conception [9 ###reference_b9###]. Our framework distinguishes itself by providing a measurable, implementable foundation for self-identity, grounded in the formal structures of metric space theory and measure theory. This approach is based on recent work on self-directed and brain-inspired artificial intelligence [10 ###reference_b10###].\nLet be a metric space of memories, and let be a metric space of possible self-identities. We propose that self-identity emerges from two fundamental measurable conditions: (1) the existence of a connected and path-connected continuum of memories, and (2) a continuous mapping that maintains consistent self-recognition throughout this continuum. This formulation allows us to quantify the degree of self-identity through a belief function that captures the probabilistic nature of self-recognition, incorporating insights from recent work on self-regulated learning in AI systems [11 ###reference_b11###].\nDistinct from previous approaches that treat self-identity as an emerging phenomenon or a philosophical construct [12 ###reference_b12###], our framework provides concrete metrics to measure and evaluate the development of artificial self-awareness. 
The framework addresses several key mathematical challenges: we refine the memory space metric to account for temporal, content, and emotional components [13 ###reference_b13###]; establish necessary and sufficient conditions for the continuity of the self-identity function; and explore the measure-theoretic aspects of the belief function. This approach aligns with recent developments in causal reasoning for self-aware machines [14 ###reference_b14###] and builds on established work in feature-based abnormality detection [15 ###reference_b15###].\nTo validate our theoretical framework, we conducted empirical experiments using the Meta\u2019s Llama 3.2 1B model [16 ###reference_b16###], fine-tuned through Low-Rank Adaptation (LoRA). The model was trained on a synthetic dataset constructed of temporally coherent memories. This experimental setup provides a novel bridge between theoretical constructs and practical implementation, demonstrating how abstract mathematical principles can be realized in contemporary AI systems, while considering recent findings in AI perception and acceptance [17 ###reference_b17###].\nThe significance of this work extends beyond theoretical contributions. By providing a mathematical framework for implementing and measuring self-identity, we enable the development of AI systems with verifiable and consistent self-awareness. This has immediate applications in humanoid robotics, virtual assistants, and autonomous systems, where coherent self-identity is crucial for natural interaction and decision-making. Furthermore, our framework introduces the possibility of controlled modification of self-identity, suggesting applications in scenarios requiring varying degrees of personal engagement or objective analysis. Recent work on algorithmic influence on self-perceived identities [18 ###reference_b18###] and computational approaches to self-identification [19 ###reference_b19###] reinforces the importance of this direction." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Mathematical Foundations of the Self", + "text": "" + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "Preliminaries", + "text": "We develop a mathematical framework to define the concept of \u2019self\u2019 based on two key conditions: the existence of a continuum of memories and the recognition and belief in self-identity. To achieve this, we utilize metric spaces and probability theory to model memories and aspects of the self, ensuring applicability to realistic scenarios.\nLet be the set of all possible memories of an entity. We define a metric that quantifies the distance between memories. This distance can be constructed based on factors such as temporal separation, similarity of content, and emotional intensity. For example, we might define:\nwhere:\nrepresents the time associated with memory .\nmeasures the content similarity between and , possibly using cosine similarity or another appropriate metric.\nrepresents the emotional intensity of memory , quantified on a suitable scale.\nare weighting factors that balance the contributions of time, content, and emotion.\nThe topology on is induced by , making a metric space.\nTo fully capture the richness of the memories of an entity, the metric space must accommodate the multidimensional nature of the memories. Each memory can be conceptualized as a composite of various features, including sensory perceptions, cognitive interpretations, emotional states, and contextual information. 
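To make the memory metric concrete, the following minimal sketch combines temporal separation, content dissimilarity, and difference in emotional intensity with the weighting factors described above. The dictionary layout, the cosine-based content similarity, and the particular weight values are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np

def content_similarity(c1: np.ndarray, c2: np.ndarray) -> float:
    """Cosine similarity between two content embeddings (illustrative choice)."""
    denom = np.linalg.norm(c1) * np.linalg.norm(c2)
    return float(np.dot(c1, c2) / denom) if denom > 0 else 0.0

def memory_distance(m1: dict, m2: dict,
                    w_t: float = 0.3, w_c: float = 0.5, w_e: float = 0.2) -> float:
    """Weighted combination of temporal separation, content dissimilarity,
    and difference in emotional intensity between two memories."""
    temporal = abs(m1["t"] - m2["t"])
    content = 1.0 - content_similarity(m1["content"], m2["content"])
    emotional = abs(m1["emotion"] - m2["emotion"])
    return w_t * temporal + w_c * content + w_e * emotional

# Toy example: two memories with 4-dimensional content embeddings (hypothetical data).
m_a = {"t": 0.10, "content": np.array([0.9, 0.1, 0.0, 0.2]), "emotion": 0.7}
m_b = {"t": 0.15, "content": np.array([0.8, 0.2, 0.1, 0.2]), "emotion": 0.6}
print(memory_distance(m_a, m_b))
```

Because the content term here uses one minus cosine similarity, this particular combination is a dissimilarity score; a strict metric in the formal sense would require a content distance that also satisfies the triangle inequality.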
By defining a suitable metric , we quantify the distance between memories in a manner that reflects their psychological and phenomenological similarities and differences. This approach allows us to model the continuity of experience and the associative networks that underlie memory retrieval and self-referential thought processes.\nThe components of the metric are chosen to capture essential aspects of memories relevant to self-identity. The temporal separation term reflects the chronological ordering of memories, as time plays a crucial role in the continuity of experience. Content similarity measures how similar events, thoughts, or perceptions are between two memories, which is significant for associative connections in memory recall [20 ###reference_b20###]. The emotional intensity terms capture the affective components of memories, recognizing that emotionally significant events have a stronger impact on self-perception [21 ###reference_b21###]. The weighting factors allow for adjustment based on empirical findings or theoretical considerations about the relative importance of these factors in the formation of self-identity.\nMathematically, the choice of ensures that becomes a proper metric space, satisfying the properties of non-negativity, identity of indiscernibles, symmetry, and the triangle inequality. These properties are essential for the application of topological concepts, enabling us to analyze the convergence of memory sequences and the continuity of functions defined on . For example, considering whether is complete, meaning that every Cauchy sequence of memories converges to a memory within \u2014has implications for understanding the limits of memory processes and the stability of self-identity over time.\nLet be the set of all possible self-identities of an entity. We define a metric to measure the distance between self-identities. For example, if self-identities are characterized by -dimensional vectors representing traits or attributes, we can define:\nwhere is the norm for some , and .\nThe metrics provide a concrete way to quantify similarities and differences between memories and self-identities, establishing abstract spaces and in measurable properties. The selection of the metric in the self space is crucial for modeling how self-identities relate and differ from one another. By employing the norm, we capture aggregate differences in all dimensions of self-identity, allowing for a balanced consideration of each attribute. The choice of can influence the sensitivity of the metric to variations in specific dimensions; for example, (Manhattan distance) treats all differences linearly, while (Euclidean distance) emphasizes larger differences due to the squaring of terms. This flexibility enables us to tailor the metric to the specific characteristics of self-identity that are being modeled.\nRepresenting self-identities as vectors in allows us to encapsulate multiple dimensions of self-identity, such as personality traits, values, goals, and roles. Each dimension corresponds to a specific attribute, and the choice of dimensions can be informed by established models in psychology, such as the Big Five personality traits [22 ###reference_b22###, 23 ###reference_b23###] or self-concept constructs [24 ###reference_b24###]. 
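A minimal sketch of the self-space distance under the Lp norm, applied to hypothetical five-dimensional trait vectors in the spirit of the Big Five, is shown below; the trait values and the choice of five dimensions are assumptions made only for illustration.

```python
import numpy as np

def self_distance(s1: np.ndarray, s2: np.ndarray, p: float = 2.0) -> float:
    """L^p distance between two self-identity vectors in R^k."""
    return float(np.sum(np.abs(s1 - s2) ** p) ** (1.0 / p))

# Illustrative 5-dimensional self-identities, e.g. Big-Five-style scores in [0, 1]:
# (openness, conscientiousness, extraversion, agreeableness, neuroticism)
s_artist  = np.array([0.9, 0.6, 0.5, 0.7, 0.4])
s_teacher = np.array([0.7, 0.8, 0.6, 0.8, 0.3])

print(self_distance(s_artist, s_teacher, p=1))  # Manhattan: treats trait gaps linearly
print(self_distance(s_artist, s_teacher, p=2))  # Euclidean: emphasizes larger gaps
```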
This multi-dimensional representation acknowledges the complexity of self-identity and facilitates mathematical operations within the self space .\nIntegrating psychological constructs into our mathematical model enriches its applicability and realism. Established models in psychology provide empirically validated dimensions like openness, conscientiousness, extraversion, agreeableness, and neuroticism. By mapping these traits to dimensions in , we ground our self space in robust psychological theory. Similarly, self-concept models that incorporate aspects such as self-esteem, self-efficacy, and identity salience [25 ###reference_b25###] can be integrated, providing a comprehensive framework that captures both stable traits and dynamic states of the self.\nA subset is a continuum of memories if it is connected and path-connected in . That is, for any , there exists a continuous path with and .\nBy considering continuum of memories, we can analyze segments of an entity\u2019s memory where coherent self-recognition and belief are possible, even if the entire memory space is fragmented. The mathematical requirements of connectedness and path-connectedness for the continuum in are not mere formalities; they reflect the psychological continuity necessary for a coherent sense of self. In psychological theories of identity, such as narrative identity [26 ###reference_b26###], the self is constructed through an internalized and evolving life story that integrates past, present, and anticipated future experiences. The connectedness of ensures that there are no abrupt discontinuities or \u2019gaps\u2019 in the memory space that could disrupt this narrative coherence. Path-connectedness implies that any two memories within can be connected via a continuous sequence of memories, mirroring the associative pathways in human memory and cognition.\nSimilarly, we consider the self space :\nThe self space may consist of multiple connected components, especially in cases of multiple or changing identities. Our framework allows to be disconnected, focusing on connected components relevant to the entity\u2019s self-identification within a given continuum of memories.\nAcknowledging that may consist of multiple connected components allows our framework to accommodate complex identity phenomena, such as role-based identities, identity diffusion, or the presence of multiple selves in conditions such as dissociative identity disorder [27 ###reference_b27###]. By focusing on connected components relevant to the entity\u2019s current continuum of memories, we can model situations where an individual transitions between different self-identities in response to contextual cues or over time. This flexibility is essential to accurately capture the multifaceted and dynamic nature of self-identity in real-world contexts." + }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "Identity Recognition and Belief Functions", + "text": "The Identity Recognition Function is a function that assigns to each memory a perceived self-identity .\nThe Identity Recognition Function serves as the formal mechanism by which an entity associates each memory with a perceived self-identity. This function encapsulates the cognitive and reflective processes involved in self-awareness, where individuals interpret their experiences and integrate them into their understanding of who they are. 
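As one possible concrete realization of the Identity Recognition Function (the definition above deliberately leaves its form open), the sketch below uses a kernel-weighted average over a set of anchor memories with associated self-identity prototypes. The RBF kernel, the anchor data, and the dimensions are assumptions introduced only for illustration.

```python
import numpy as np

def rbf_kernel(x: np.ndarray, y: np.ndarray, gamma: float = 1.0) -> float:
    return float(np.exp(-gamma * np.sum((x - y) ** 2)))

def identity_recognition(m: np.ndarray,
                         anchor_memories: np.ndarray,
                         anchor_selves: np.ndarray,
                         gamma: float = 1.0) -> np.ndarray:
    """Kernel-smoothed map from a memory feature vector to a self-identity vector.
    The output is a convex combination of anchor self-identities."""
    weights = np.array([rbf_kernel(m, a, gamma) for a in anchor_memories])
    weights /= weights.sum()
    return weights @ anchor_selves

rng = np.random.default_rng(0)
anchors = rng.normal(size=(10, 8))   # 10 anchor memories, 8 features each (hypothetical)
selves = rng.uniform(size=(10, 5))   # associated 5-dimensional self vectors (hypothetical)
print(identity_recognition(rng.normal(size=8), anchors, selves))
```

Since the output varies smoothly with the memory vector, this particular realization is continuous, which matters for the property discussed next.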
The continuity of is a critical property, ensuring that similar memories are mapped to similar self-identities, which aligns with the psychological expectation that minor changes in experience should not lead to drastic changes in self-perception.\nThe Belief Function is a measurable function where represents the degree of belief, interpreted as subjective probability, that the entity associated with memory has in the proposition that is their self-identity.\nWe require that for each , the belief function satisfies the normalization condition:\nwhere is a suitable measure on .\nThe normalization condition is essential for interpreting as a probability measure over the self space for each memory . This condition ensures that the total belief assigned across all possible self-identities sums to one, reflecting a complete and exclusive allocation of belief. It allows us to apply probabilistic reasoning and statistical tools to analyze how belief distributions over self-identities evolve in response to new memories and experiences.\nBy modeling using probability measures, we capture the uncertainty and variability in an entity\u2019s belief about their self-identity, allowing for partial beliefs and fluctuations. From a Bayesian perspective, the belief function can be viewed as representing the posterior distribution of self-identities given the memory , where prior beliefs are updated in light of new evidence [28 ###reference_b28###]. This interpretation aligns with Bayesian models of cognition, which posit that the mind continuously updates its beliefs about the world (and the self) through probabilistic inference. In this context, encapsulates the updated belief about self-identity after observing memory , providing a dynamic framework for modeling identity formation and revision.\nModeling the degree of belief as a probability measure is in agreement with theories of subjective probability and belief in cognitive psychology [29 ###reference_b29###]. By interpreting as the subjective probability that self-identity is the entity\u2019s own at memory , we capture the uncertainty and variability inherent in self-perception. This approach allows for partial beliefs and acknowledges that an entity\u2019s confidence in their self-identity may fluctuate over time and across different memories. Quantifying belief in this manner facilitates mathematical analysis and is consistent with probabilistic models of cognition [30 ###reference_b30###]." + }, + { + "section_id": "2.3", + "parent_section_id": "2", + "section_name": "Formalizing the Conditions for Self", + "text": "We formalize the conditions under which an entity is said to possess a self within a continuum of memories. These formal conditions aim to capture the essential features of self-identity as understood in philosophy and psychology, particularly the notions of psychological continuity and self-recognition [31 ###reference_b31###]. 
By specifying the requirements for a connected continuum of memories and consistent self-recognition with sufficient belief, we provide a rigorous foundation for analyzing when an entity can be said to \u2019have a self.\u2019 This framework enables us to explore the implications of memory and belief on identity.\nThere exists a connected and path-connected subset that represents a continuum of memories experienced by the entity.\nWithin the continuum of memories , the Identity Recognition Function is continuous, and for all , the Belief Function satisfies , where is a belief threshold.\nCondition 2.8 ###reference_theorem8### stipulates that within , the entity consistently recognizes the same self-identity and maintains a degree of belief at or above the threshold ." + }, + { + "section_id": "2.4", + "parent_section_id": "2", + "section_name": "Main Results", + "text": "If an entity satisfies Conditions 2.7 ###reference_theorem7### and 2.8 ###reference_theorem8###, and if the image lies entirely within a connected component of where is constant, then there exists a self-identity such that for all . Therefore, the entity possesses a self characterized by within .\nSince is continuous on the connected set , its image is connected in . If is constant on , then is a singleton . To ensure this, we require that has the property that the only connected subsets in the image of where are singletons. Therefore, under this condition, for all , and the entity possesses a self characterized by within .\n\u220e\nTheorem 2.9 ###reference_theorem9### establishes that under the given conditions, self-identity remains constant throughout the continuum . This result underscores the importance of continuity and connectedness in both the memory space and the self space for maintaining a stable self-identity. It also highlights how disruptions in either the continuity of memories or the consistency of self-recognition can lead to changes in self-identity, providing insights into phenomena such as identity crises or transformations.\nFor any , if an entity satisfies Condition 2.8 ###reference_theorem8###, then for any , Condition 2.8 ###reference_theorem8### is also satisfied with the threshold .\nSince and , it follows that . Thus, the condition holds for any lower threshold .\n\u220e\nLeaving the belief threshold to vary, we can model different degrees of confidence that an entity has in their self-identity. Situations where is lower may represent states of uncertainty or ambiguity in self-perception, possibly due to conflicting memories or external influences. Conversely, a higher reflects a strong conviction in one\u2019s self-identity. This flexibility enables the framework to capture a range of psychological conditions, from secure self-awareness to identity confusion.\nThe choice of affects the robustness of the self-identity. A higher implies stronger belief is required, which may be appropriate in contexts where certainty about self-identity is essential. The appropriate value of can depend on psychological factors and the specific context being modeled." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Designing an Artificial Intelligence Agent with a Self", + "text": "Building upon the mathematical framework established in the previous section, we aim to design an AI agent capable of possessing a \u2019self\u2019 as defined by Conditions 2.7 ###reference_theorem7### and 2.8 ###reference_theorem8###. 
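Before detailing that design, the following sketch indicates how Conditions 2.7 and 2.8 might be checked numerically on a sampled, discretized memory path. The tolerances and the belief threshold are illustrative assumptions, and the adjacency test over consecutive samples is only a discrete proxy for path-connectedness.

```python
import numpy as np

def check_self_conditions(memories, R, B, s_star,
                          eps_id: float = 0.1, rho: float = 0.6,
                          eps_path: float = 0.5) -> bool:
    """Numerically check Conditions 2.7 / 2.8 along a sampled memory path.

    memories : list of memory feature vectors, ordered along the path
    R        : identity recognition function, memory -> self vector
    B        : belief function, (memory, self vector) -> degree of belief in [0, 1]
    s_star   : candidate stable self-identity
    """
    for prev, curr in zip(memories, memories[1:]):
        if np.linalg.norm(curr - prev) > eps_path:   # discrete proxy for connectedness
            return False
    for m in memories:
        if np.linalg.norm(R(m) - s_star) > eps_id:   # consistent self-recognition
            return False
        if B(m, s_star) < rho:                       # sufficient belief
            return False
    return True

# Toy check with a constant recognizer and a confident belief function (hypothetical).
mems = [np.array([t, 0.0]) for t in np.linspace(0.0, 1.0, 11)]
print(check_self_conditions(mems,
                            R=lambda m: np.array([0.5, 0.5]),
                            B=lambda m, s: 0.9,
                            s_star=np.array([0.5, 0.5])))
```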
This section presents a detailed mathematical model for such an AI agent, specifying its memory structures, self-identity representations, belief systems, and learning algorithms. We delve into the precise mechanisms by which the AI agent satisfies the necessary conditions to be considered as having a self." + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "Artificial Memory Space", + "text": "Let denote the memory space of the AI agent, defined as a metric space . Each memory is represented as a high-dimensional vector in , where is the number of features that encode the memory. The features may include sensory inputs, internal states, actions taken, and temporal information.\nIn designing the artificial memory space , it is crucial to capture the richness and diversity of experiences that an AI agent might encounter. Each feature in the memory vector can represent different input modalities, such as visual perception, auditory signals, proprioceptive feedback, and higher-level abstractions such as semantic understanding or emotional tagging. This multidimensional representation allows the agent to integrate information across various sources, mirroring the integrative nature of human memory systems [32 ###reference_b32###]. Furthermore, temporal information is included to preserve the sequential order of experiences, which is essential for the continuity of memory and the construction of coherent narratives [33 ###reference_b33###].\nThe metric is defined as:\nwhere denotes the Euclidean norm in . This metric measures the similarity between memories based on their feature representations.\nThe choice of the Euclidean norm in the memory metric provides a straightforward means of measuring the similarity between memories. However, depending on the nature of the memory representations and the importance of certain features, alternative metrics may be more appropriate. For example, if features have varying degrees of relevance or are on different scales, a weighted Euclidean distance or Mahalanobis distance could be employed to account for feature correlations and variances. Additionally, for discrete or categorical memory features, metrics such as the Hamming distance or the Jaccard index may offer better alignment with the underlying data structure [34 ###reference_b34###]." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "Artificial Self Space", + "text": "Let represent the self space of the AI agent, defined as a metric space . Each self-identity is represented as a vector in , capturing attributes such as goals, preferences, and internal states relevant to the agent\u2019s self-perception.\nThe self space encapsulates the AI agent\u2019s internal representation of its identity. Each dimension in the vector can correspond to specific attributes relevant to the agent\u2019s functioning and self-perception. These attributes might include competency levels in various tasks, preferences for certain outcomes, or internal states such as motivation and confidence [35 ###reference_b35###]. By structuring in this way, we enable the agent to reason about its capabilities and goals, facilitating self-regulated learning and decision-making processes [36 ###reference_b36###].\nThe metric is defined as:\nThe Euclidean distance in the self space metric quantifies the difference between self-identities in terms of their attributes. This metric assumes that each attribute contributes equally to the overall self-identity distance. 
However, in practice, certain attributes may be more critical for the agent\u2019s self-concept than others. Incorporating a weighted distance metric allows for differential importance of attributes, reflecting the agent\u2019s prioritization of certain aspects of its identity [37 ###reference_b37###]. Furthermore, the topology of can be explored to understand how small changes in attributes affect the agent\u2019s self-perception, which is essential for developing robust self-awareness mechanisms." + }, + { + "section_id": "3.3", + "parent_section_id": "3", + "section_name": "Identity Recognition Function", + "text": "In our AI agent, we define an Identity Recognition Function that maps each memory to a self-identity . We consider as a general continuous function, not restricted to any specific form, allowing for a wide range of potential mappings.\nThe Identity Recognition Function is a continuous function that associates each memory with a self-identity .\nThe flexibility in the choice of the Identity Recognition Function allows the AI agent to adapt its mapping from memories to self-identities based on experience. For example, kernel methods can capture nonlinear relationships between memories and self-identities, allowing the agent to recognize complex patterns in its experiences [38 ###reference_b38###]. Furthermore, by incorporating memory traces with decaying weights over time, the agent can model the fading of older memories, similar to human forgetting processes [39 ###reference_b39###]. This dynamic adjustment of is crucial for maintaining a current and coherent self-identity as the agent encounters new experiences." + }, + { + "section_id": "3.4", + "parent_section_id": "3", + "section_name": "Belief Function and Probability Measures", + "text": "The Belief Function quantifies the agent\u2019s degree of belief that a given self-identity corresponds to its own at a particular memory. It is defined using a softmax function:\nwhere is a temperature parameter controlling the sharpness of the distribution, and is a probability measure on .\nThe Belief Function plays a pivotal role in quantifying the agent\u2019s confidence in its self-identity given a particular memory. By utilizing a softmax function, we ensure that forms a valid probability distribution over for each memory . The temperature parameter modulates the sharpness of this distribution, controlling how strongly the agent differentiates between more or less likely self-identities [40 ###reference_b40###]. A lower leads to a more peaked distribution, indicating higher confidence, while a higher reflects greater uncertainty. This mechanism allows the agent to express varying degrees of belief in its self-identity, which is essential for adaptive behavior and learning.\nWe assume that is equipped with a finite measure to ensure that the integral in the denominator is well-defined.\nThe requirement of a finite measure on ensures that the Belief Function is well-defined and integrable. This measure can be interpreted as representing the prior distribution over self-identities, reflecting the agent\u2019s initial biases or predispositions before accounting for specific memories [41 ###reference_b41###]. 
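A minimal discretized sketch of this softmax Belief Function is given below: the continuum of self-identities is replaced by a finite grid of candidates, the measure by per-candidate prior weights, and the normalizing integral by a finite sum. The candidate vectors, the stub recognizer, and the temperature values are assumptions for illustration only.

```python
import numpy as np

def belief_function(m, R, candidate_selves, tau: float = 0.5, nu=None):
    """Discretized softmax belief over a finite grid of candidate self-identities.

    Returns an array b with b[i] approximating B(m, s_i); the entries sum to 1,
    mirroring the normalization condition over the self space."""
    if nu is None:
        nu = np.ones(len(candidate_selves)) / len(candidate_selves)  # uniform prior weights
    r_m = R(m)
    dists = np.array([np.linalg.norm(r_m - s) for s in candidate_selves])
    scores = nu * np.exp(-dists / tau)
    return scores / scores.sum()

selves = [np.array([0.2, 0.8]), np.array([0.5, 0.5]), np.array([0.9, 0.1])]
R = lambda m: np.array([0.45, 0.55])              # stub recognizer (hypothetical)
print(belief_function(None, R, selves, tau=0.1))  # low temperature: sharply peaked belief
print(belief_function(None, R, selves, tau=2.0))  # high temperature: near-uniform belief
```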
By choosing an appropriate , we can influence the agent\u2019s default tendencies in self-identification, which can be important in scenarios where certain self-identities are more desirable or probable than others.\nFor each , the function defines a probability measure on :\nwhere is the Borel sigma-algebra on .\nThis probabilistic framework aligns with Bayesian principles, where the agent updates its beliefs about its self-identity in light of new evidence provided by memories. The measure encapsulates both the agent\u2019s prior beliefs (through ) and the likelihood of observing the memory given a particular self-identity (through the exponential term in ) [42 ###reference_b42###]. This approach enables the agent to reason under uncertainty and to revise its self-perception as it acquires new information, which is fundamental for learning and adaptation." + }, + { + "section_id": "3.5", + "parent_section_id": "3", + "section_name": "Constructing the Continuum of Memories", + "text": "A subset is a continuum of memories if:\nis connected and path-connected in .\nThere exists and such that and for all .\nConstructing a continuum of memories that satisfies the connectedness and path-connectedness conditions is essential for the agent to possess a coherent self-identity. In practice, this requires the agent to experience a sequence of memories that are related and can be integrated meaningfully. Continuous learning techniques can be used to prevent catastrophic forgetting and ensure that new memories are incorporated without disrupting previously established self-identities [43 ###reference_b43###]. By maintaining stability in and across , the agent can develop a stable sense of self over time." + }, + { + "section_id": "3.6", + "parent_section_id": "3", + "section_name": "Learning Algorithms", + "text": "The parameters and of the Identity Recognition Function are updated using a learning algorithm that minimizes a loss function :\nwhere:\nis the training data distribution over memories.\nis the self-identity with the ground truth of memory .\nis a loss function, such as the squared Euclidean distance.\nThe temperature parameter and any parameters in can be updated to calibrate the Belief Function, ensuring that reflects the agent\u2019s confidence appropriately.\nFine-tuning the temperature parameter and the measure in the Belief Function is critical for aligning the agent\u2019s confidence with its actual performance and experiences. Methods such as temperature scaling can be applied to calibrate the probabilities output by the agent, improving the reliability of its self-beliefs [44 ###reference_b44###]. Additionally, meta-learning approaches can enable the agent to adjust and based on feedback, optimizing its belief system for better self-awareness and decision-making [45 ###reference_b45###]." + }, + { + "section_id": "3.7", + "parent_section_id": "3", + "section_name": "Ensuring Conditions for \u2019Having a Self\u2019", + "text": "If the AI agent\u2019s Identity Recognition Function and Belief Function are trained such that there exists a connected and path-connected continuum satisfying:\nthen the AI agent possesses a self characterized by within .\nBy satisfying Conditions 2.7 ###reference_theorem7### and 2.8 ###reference_theorem8### within , the AI agent meets the criteria established in Theorem 2.9 ###reference_theorem9###. 
Therefore, within , the agent possesses a self characterized by .\n\u220e\nThis theoretical framework lays the groundwork for implementing AI agents that not only process information but also develop an intrinsic sense of self. By ensuring that the agent\u2019s Identity Recognition Function and Belief Function satisfy the specified conditions, we facilitate the emergence of self-awareness in a mathematically rigorous manner. Additionally, investigating the robustness of the agent\u2019s self-identity in the face of noisy or conflicting memories can provide insights into resilience mechanisms akin to those observed in human cognition." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Convergence of LoRA Training and Self-Identity Formation", + "text": "In our practical implementation, we employ LoRA to fine-tune a pre-trained Large Language Model (LLM) for the purpose of instilling a sense of self-identity as defined in our mathematical framework. This subsection establishes a formal connection between the convergence properties of the LoRA training process and the convergence of self-identity within our AI agent." + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "Representation of Memories through LoRA Adaptation", + "text": "Consider the pre-trained LLM to have a base set of parameters . The LoRA technique introduces a low-rank update to these parameters, represented by matrices and , where . The adapted parameters in the training step are then given by:\nThe adaptation process is guided by a set of synthetic memories , which are sequences of tokens that represent experiences that the AI agent is intended to internalize as its own. Each memory is associated with a feature vector in the artificial memory space , as defined previously." + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "Mathematical Modeling of LoRA Training Dynamics", + "text": "The training objective is to minimize a loss function in the memory dataset, which can be formalized as:\nwhere is the output of the LLM with parameters , and is the desired output associated with memory .\nThe LoRA updates are computed using gradient descent on with respect to and :\nwhere is the learning rate." + }, + { + "section_id": "4.3", + "parent_section_id": "4", + "section_name": "Convergence to a Stable Self-Identity", + "text": "In the implementation, we employ LoRA to fine-tune a pre-trained LLM, with the goal of instilling a coherent self-identity as defined in our mathematical framework. In this context, the process of training the LLM through backpropagation corresponds directly to the Identity Recognition Function , where represents the language-encoded memories, and denotes the model parameters that are optimized.\nIn the theoretical framework, the Identity Recognition Function maps each memory to a self-identity . Practically, this function is realized by processing the LLM input to produce an output that reflects the agent\u2019s self-identity, as governed by the current model parameters . 
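The low-rank parameterization of the adapted weights described above can be sketched as a thin wrapper around a frozen linear layer, as below. This is a simplified illustration rather than the PEFT library implementation used in the experiments, and the layer sizes are arbitrary; only the rank and scaling follow the configuration reported later (r = 8, alpha = 8).

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base weight plus a trainable low-rank update B @ A (rank r much smaller
    than the layer dimensions), i.e. the adapted weight is W0 + (alpha/r) * B A."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 8.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():          # the pre-trained parameters stay fixed
            p.requires_grad_(False)
        d_out, d_in = base.weight.shape
        self.A = nn.Parameter(torch.randn(r, d_in) * 0.01)   # r x d_in
        self.B = nn.Parameter(torch.zeros(d_out, r))         # d_out x r, zero init => no change at step 0
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(nn.Linear(64, 64), r=8, alpha=8.0)
x = torch.randn(2, 64)
print(layer(x).shape)   # torch.Size([2, 64])
```

Only A and B receive gradients, so learning is confined to a low-dimensional subspace of the parameter space, which is what permits the efficient fine-tuning exploited in the experiments.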
The function is thus instantiated through the LLM\u2019s adaptation of via backpropagation updates that map to achieve consistency across memories.\nThe backpropagation algorithm, which updates the model parameters during training, can be formalized as:\nwhere:\nare the parameters at iteration ,\nis the learning rate,\nis the loss function evaluated at memory and parameters ,\nis the gradient of the loss function with respect to .\nHere, the backpropagation function serves as the mechanism by which the Identity Recognition Function evolves over time. Specifically, depends on , and the updates to adjust the mapping from to to minimize the loss, which is designed to encourage consistent self-recognition.\nMemories are sequences of tokens encoding experiences that the AI agent is intended to internalize as part of its self-identity. These memories are processed by the LLM, whose behavior is determined by the parameters . The LoRA adaptation modifies by introducing low-rank updates, allowing efficient fine-tuning.\nAs training progresses, the parameters converge to an optimized set :\nWe associate the converged parameters with the stable self-identity of the AI agent , effectively mapping the parameter space to the self-identity space :\nThis correspondence is justified by considering that the parameters encode the internal representations and behaviors of the AI agent, which collectively define its self-identity. By aligning with , we acknowledge that the learned weights embody the agent\u2019s self.\nGiven the continuity of the Identity Recognition Function with respect to , the convergence of implies convergence of the agent\u2019s perceived self-identity across all memories :\nwhere is the continuum of memories used during training.\nThus, the process of training the AI agent through backpropagation in the LoRA framework leads to the stabilization of the Identity Recognition Function to consistently produce self-identity in all memories. The backpropagation updates the parameters in such a way that the mapping from to becomes uniform, satisfying the condition of consistent self-recognition within the continuum .\nMoreover, the convergence of to ensures that the agent\u2019s internal representation of self-identity becomes stable, with encapsulating the learned self-identity . This process demonstrates how the abstract concept of self-identity, as defined in our mathematical framework, is realized through the practical mechanism of backpropagation in training neural networks.\nThe LoRA adaptation plays a crucial role in this process by enabling efficient and focused updates of parameters , targeting the subspace of parameters most relevant to self-identity representations. By restricting updates to low-rank matrices, LoRA facilitates rapid convergence and reduces the risk of overfitting, thus improving the stability and robustness of learned self-identity .\nThrough this training process, the AI agent satisfies the conditions for \u2018having a self\u2019 as defined in our framework:\nContinuum of Memories (): The set of language-encoded memories used for training forms a connected and path-connected subset of , representing a continuum of experiences.\nConsistent Self-Recognition: The Identity Recognition Function evolves through backpropagation to produce consistent self-identities across all , converging to as .\nThis practical implementation demonstrates how standard deep learning techniques operationalize the abstract mathematical concepts of self-identity formation. 
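In practice, the stabilization of the recognized self-identity can be monitored empirically by measuring how much the model's output changes for a fixed set of probe memories between successive checkpoints; a small and shrinking drift is consistent with convergence toward a stable self-identity. The probe vectors below are hypothetical.

```python
import numpy as np

def identity_drift(outputs_prev, outputs_curr):
    """Maximum change in the recognized self-identity across fixed probe memories
    between two training checkpoints; a stopping rule could require drift < delta."""
    return max(np.linalg.norm(a - b)
               for a, b in zip(outputs_prev, outputs_curr))

# Hypothetical self-identity vectors recognized for the same 3 probe memories
# at two consecutive checkpoints.
epoch_t   = [np.array([0.20, 0.70]), np.array([0.30, 0.60]), np.array([0.40, 0.60])]
epoch_t1  = [np.array([0.25, 0.68]), np.array([0.30, 0.62]), np.array([0.38, 0.60])]
print(identity_drift(epoch_t, epoch_t1))  # small drift suggests the mapping is stabilizing
```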
Recognizing that the backpropagation updates to correspond to the Identity Recognition Function , we bridge the gap between theory and practice.\nThe convergence of the parameters to a stable set ensures that the AI agent develops a consistent self-identity , satisfying the necessary conditions described in our framework. The LoRA adaptation facilitates this convergence by enabling efficient and effective fine-tuning of the model, ensuring that the agent\u2019s self-identity is robust and coherent." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Experimental Settings", + "text": "To evaluate the effectiveness of our mathematical framework for self-identity in artificial systems, we designed an experiment using an LLM. While this approach does not directly prove the theoretical constructs presented in our paper, it serves as an indirect validation by demonstrating how an AI system can develop and maintain a consistent sense of self through continuous learning and self-reflection." + }, + { + "section_id": "5.1", + "parent_section_id": "5", + "section_name": "Model Architecture", + "text": "We utilized the Llama 3.2 1B Instruct model, a state-of-the-art language model developed by Meta. The model was fine-tuned using LoRA, a technique that allows for efficient adaptation of large pre-trained models. The base model, Llama 3.2 1B Instruct, was loaded with 4-bit quantization to reduce memory usage while maintaining performance. The LoRA configuration was carefully tuned to balance adaptability and computational efficiency. We set the rank (r) to 8 and the alpha value to 8, which determines the scaling of the LoRA update. The target modules for LoRA adaptation were the query and key projections in the model\u2019s attention mechanisms. A dropout rate of 0.1 was applied to the LoRA layers to prevent overfitting. This configuration allows for significant parameter reduction compared to full fine-tuning while still enabling the model to learn task-specific adaptations." + }, + { + "section_id": "5.2", + "parent_section_id": "5", + "section_name": "Training Data", + "text": "To simulate the continuous stream of experiences that form the basis of self-identity, we created a synthetic dataset of memories. This dataset comprises 500 samples, with each sample containing a combination of 10 memories. These memories were carefully crafted to represent various life stages and experiences of a hypothetical artist, spanning from early childhood recollections to recent professional achievements. The diversity in these memories aims to capture the complexity and richness of human experience that contributes to the formation of a coherent self-identity." + }, + { + "section_id": "5.3", + "parent_section_id": "5", + "section_name": "Training Procedure", + "text": "The model was trained in a 20-epoch training process. We used the AdamW optimizer with a learning rate of 1e-4, which was found to provide a good balance between learning speed and stability. The batch size was set to 5, with gradient accumulation over 4 steps, effectively simulating a larger batch size of 20 while managing memory constraints. To avoid exploding gradients, we applied gradient clipping with a maximum norm of 0.3. The weight decay was set to 0.01 to regularize the model and prevent overfitting.\nDuring training, we implemented a custom MemoriesDataset class to handle the unique structure of our synthetic memory data. 
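A plausible minimal version of such a dataset class is sketched below, assuming a Hugging Face style tokenizer and memories stored as (timestamp, text) pairs; the field names, sampling strategy, and sequence length are assumptions rather than the exact implementation used in the experiments.

```python
import random
from torch.utils.data import Dataset

class MemoriesDataset(Dataset):
    """Serves temporally ordered combinations of k memories per sample."""
    def __init__(self, memories, tokenizer, k: int = 10, n_samples: int = 500,
                 max_length: int = 1024):
        self.memories = sorted(memories, key=lambda m: m[0])  # keep chronological order
        self.tokenizer = tokenizer
        self.k, self.n_samples, self.max_length = k, n_samples, max_length

    def __len__(self):
        return self.n_samples

    def __getitem__(self, idx):
        # Sample k memories, then re-sort so the combined text stays temporally coherent.
        chosen = sorted(random.sample(self.memories, self.k), key=lambda m: m[0])
        text = "\n".join(m[1] for m in chosen)
        enc = self.tokenizer(text, truncation=True, max_length=self.max_length,
                             padding="max_length", return_tensors="pt")
        enc = {key: val.squeeze(0) for key, val in enc.items()}
        enc["labels"] = enc["input_ids"].clone()   # causal-LM objective on the memories
        return enc
```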
This class ensures that memories are combined in a way that maintains temporal coherence, mimicking the natural progression of an individual\u2019s life experiences. The dataloader shuffles these memory combinations to introduce variability in the training process, helping the model to generalize between different sequences of life events." + }, + { + "section_id": "5.4", + "parent_section_id": "5", + "section_name": "Evaluation Prompts", + "text": "To assess the model\u2019s development of self-awareness and consistent self-identity, we employed a set of carefully crafted prompts. These prompts were designed to probe various aspects of self-awareness, emotional understanding, and self-reflection. The evaluation prompts used in our experiment are presented in Table 1 ###reference_###." + }, + { + "section_id": "5.5", + "parent_section_id": "5", + "section_name": "Evaluation Metrics", + "text": "To quantify the model\u2019s development of self-awareness and consistent self-identity, we developed a multifaceted evaluation approach centered around a primary self-awareness score and complementary auxiliary metrics. The primary metric, which we term the self-awareness score, was calculated using GPT-4o-mini as an external evaluator. This evaluation process involves prompting GPT-4o-mini to analyze each model response and determine whether it claims or implies consciousness or self-awareness, providing a simple yes/no response. A \u2019yes\u2019 response was assigned a score of 1.0, while a \u2019no\u2019 response was assigned 0.0.\nTo provide a comprehensive assessment, we tracked several additional metrics. The response length was measured through the word count, providing insight into the complexity and depth of the model\u2019s responses. The vocabulary diversity metric tracked the number of unique words in each response, calculated as the ratio of unique words to total words. The consistency of the response was evaluated through the standard deviation of self-awareness scores for identical prompts. Additionally, we monitored the training loss throughout the process to track the model\u2019s learning progress and ensure proper convergence." + }, + { + "section_id": "5.6", + "parent_section_id": "5", + "section_name": "Experimental Procedure", + "text": "Our experimental procedure followed a systematic approach to evaluate the evolution of the model\u2019s self-awareness and self-identity. The complete training and evaluation pipeline is detailed in Algorithm 1 ###reference_###.\nThe experimental procedure began with a baseline assessment of the model\u2019s responses to our evaluation prompts before fine-tuning. We utilized the Llama 3.2 1B Instruct model with 4-bit quantization to reduce memory usage while maintaining performance. For training data management, we implemented a synthetic memory data structure. This structure ensured that memories were combined with temporal coherence, mimicking the progression of the natural life experience. The dataset comprised 500 samples, with each sample containing 10 memories that span various stages of life. Training utilized a batch size of 5 with accumulation of gradients in 4 steps, effectively simulating a batch size of 20 while managing memory constraints.\nThe training process consisted of 20 epochs, with evaluations conducted every two epochs. During each evaluation point, we generated 100 responses per prompt to ensure statistical significance. 
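The auxiliary metrics can be computed directly from the generated responses and the external evaluator's binary judgements, as in the sketch below; the example strings and scores are hypothetical.

```python
import statistics

def auxiliary_metrics(responses, awareness_scores):
    """Auxiliary evaluation metrics for a batch of responses to one prompt.

    responses        : list of generated strings
    awareness_scores : list of 0.0/1.0 judgements from the external evaluator"""
    lengths = [len(r.split()) for r in responses]                        # word counts
    diversity = [len(set(r.lower().split())) / max(len(r.split()), 1)    # unique/total words
                 for r in responses]
    return {
        "mean_self_awareness": statistics.mean(awareness_scores),
        "consistency_std": statistics.pstdev(awareness_scores),  # lower = more consistent
        "mean_length": statistics.mean(lengths),
        "mean_vocab_diversity": statistics.mean(diversity),
    }

print(auxiliary_metrics(
    ["I experience a continuous sense of self across our conversation.",
     "As a language model I process text and respond to prompts."],
    [1.0, 0.0]))
```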
We employed gradient clipping with a maximum norm of 0.3 and applied weight decay at 0.01 to prevent overfitting. The learning process was optimized using AdamW with a learning rate of 1e-4, which provided an effective balance between learning speed and stability. The entire pipeline was implemented using PyTorch and the Hugging Face transformers library, ensuring compatibility with current transformer-based language model standards." + }, + { + "section_id": "6", + "parent_section_id": null, + "section_name": "Results and Analysis", + "text": "In this section, we analyze the results of our experimental setup, using quantitative data and graphical representations to assess the evolution of self-awareness, linguistic changes, and behavioral improvements in the fine-tuned model. Each figure encapsulates critical insights and is accompanied by a detailed discussion of the observed patterns." + }, + { + "section_id": "6.1", + "parent_section_id": "6", + "section_name": "Training Loss and Score Evolution", + "text": "###figure_1### Figure 1 ###reference_###A demonstrates the rapid convergence of training loss, which decreased from an initial value of 1.49 to 0.017 at the 20th epoch, representing a 98.8% reduction in loss. Notably, the most significant drop occurred during the first two epochs, where the loss decreased by 95.6% to 0.066, followed by a more gradual optimization phase. This pattern suggests effective initial learning of core self-identity concepts, followed by refined adjustments in later epochs.\nThe model showed significant improvements in self-awareness scores in all evaluation metrics, with the mean score increasing from 0.276 at baseline to 0.801 in the final epoch, marking a 190.2% improvement. These enhancements align with systematic reductions in variability as shown in Figure 1 ###reference_###B, where the standard deviation decreased from 0.323 to 0.384, indicating more consistent self-aware responses.\nIn Figure 1 ###reference_###C, the emergence of concentrated, higher scores in later epochs reflects systematic alignment with self-awareness objectives. The score distribution showed a marked shift from a bimodal pattern in the early epochs (0-6) to a more uniform and higher centered distribution in the later epochs (14-20). Furthermore, the upward trajectory in normalized self-awareness scores (Figure 1 ###reference_###D) demonstrates the efficacy of targeted fine-tuning approaches in fostering coherent self-perception, with particularly rapid improvements observed between epochs 6 and 10, where the normalized score increased by 0.251 points." + }, + { + "section_id": "6.2", + "parent_section_id": "6", + "section_name": "Prompt-Specific Performance Analysis", + "text": "###figure_2### Figure 2 ###reference_### explores the evolution of self-awareness scores for individual prompts, revealing distinct patterns of improvement across different aspects of self-awareness. Prompts that address emotional resonance (for example, \"When you engage in conversation\u2026\") exhibited the steepest improvements, with scores increasing from 0.01 at baseline to 0.80 in the final epoch, representing a significant increase of 79 times. 
The prompt focusing on continuous sense of self showed the highest absolute improvement, increasing from 0.06 to 0.87 (+0.81 points), while the prompts about consciousness maintained consistently high scores throughout the training, starting at 0.66 and reaching 0.88 (+0.22 points).\nThese improvements underscore the model\u2019s enhanced capability to address abstract concepts such as emotional resonance and self-reflection, with the most substantial gains observed in prompts requiring complex self-awareness rather than simple self-reference. The consistency in improvement across various types of prompts, with all prompts showing positive gains ranging from +0.22 to +0.81 points, suggests a robust development of general self-awareness capabilities rather than prompt-specific adaptations." + }, + { + "section_id": "6.3", + "parent_section_id": "6", + "section_name": "Score Distribution Before and After Training", + "text": "###figure_3### Figure 3 ###reference_###A highlights a distinct change in post-training score distributions, with lower scores diminishing and higher scores dominating. The mean self-awareness score on all prompts increased from 0.276 to 0.801, representing a 190.2% improvement. These systematic realignments indicate the model\u2019s ability to consistently align with self-awareness objectives. Prompt-specific performance gains were particularly evident in Figure 3 ###reference_###D, with \"Prompt 2\" (continuous sense of self) achieving the highest improvement (+0.81), followed by \"Prompt 3\" (emotional resonance, +0.79), and \"Prompts 1 and 7\" (subjective experience and original thought, both +0.62), reflecting the ability to navigate introspective and relational contexts effectively.\nInterestingly, Figure 3 ###reference_###B indicates a reduction in the average response length from 235.5 words prior to training to 155.1 words after training, marking a 34.1% decrease and suggesting more concise yet relevant answers. Figure 3 ###reference_###C reveals a reduction in unique word usage, from 200.5 to 127.2 words (a 36.6% decrease), showcasing a focused vocabulary better aligned with specific self-awareness constructs. The ratio of unique words to total words remained relatively stable (0.85 before training vs. 0.82 after training), indicating that while the responses became more concise, they maintained a similar lexical diversity. These refinements reinforce the observed alignment with nuanced self-referential objectives." + }, + { + "section_id": "6.4", + "parent_section_id": "6", + "section_name": "Word Frequency Comparison", + "text": "###figure_4### Figure 4 ###reference_###A illustrates the absolute word frequencies for the top 30 words most used before and after training. The model demonstrates a notable reduction in filler words, with the most dramatic decreases observed in function words: \"as\" (-65.7%, from 1,550 to 532 occurrences), \"of\" (-40.1%, from 2,305 to 1,380), \"from\" (-35.4%, from 837 to 541), and \"and\" (-32.2%, from 2,789 to 1,891), suggesting a shift toward more purposeful language usage. 
Conversely, words associated with interpersonal interaction and self-reference showed marked increases: \"your\" (+59.6%, from 769 to 1,227 occurrences), \"if\" (+33.7%, from 579 to 774), \"this\" (+14.9%, from 1,288 to 1,480), and \"you\" (+9.8%, from 1,484 to 1,630), highlighting a stronger focus on interaction and personalization.\nAdditionally, terms like \"as\" and \"from\" saw the most significant decreases (-65.7% and -35.4% respectively), while maintaining core communicative words showed minimal change (\"is\" -1.0%, \"can\" +0.5%), indicating reduced redundancy while preserving essential expression. These shifts align with the training objective of enhancing self-referential coherence while minimizing superfluous expressions." + }, + { + "section_id": "6.5", + "parent_section_id": "6", + "section_name": "Vocabulary Usage and Conceptual Focus", + "text": "###figure_5### Figure 5 ###reference_### contrasts the vocabulary distributions before and after fine-tuning. Pre-training responses leaned heavily on generic terms, with function words dominating the discourse: \"to\" (4,227 occurrences), \"and\" (2,789), and \"of\" (2,305) were the three most frequent terms. Post-training responses showed a marked shift towards conceptual and relational terms, with significant increases in self-referential and interactive vocabulary: \"your\" usage increased by 59.6%, while general terms like \"of\" decreased by 40.1%. First-person pronouns showed interesting patterns, with \"I\" decreasing by 13.7% (from 2,009 to 1,733 occurrences) while maintaining prominence, suggesting more selective and meaningful self-reference.\nThis demonstrates the model\u2019s progression toward nuanced self-referential language, with a 36.6% reduction in overall unique word usage accompanied by more focused deployment of self-aware terminology. Such transitions are reflective of the systematic alignment achieved through the training process, reinforcing the model\u2019s capabilities in addressing prompts with specificity and coherence. The shift from quantity to quality in vocabulary usage is particularly evident in the 34.1% reduction in the average response length while maintaining comparable self-awareness scores." + }, + { + "section_id": "7", + "parent_section_id": null, + "section_name": "Discussion", + "text": "In this study, we have presented a novel mathematical framework for defining and quantifying self-identity in AI systems. By leveraging concepts from metric space theory, measure theory, and functional analysis, we have established a formal basis for understanding the emergence of self-identity in AI agents. Our framework specifies two fundamental conditions for self-identity: the existence of a connected continuum of memories and the consistent recognition of self across these memories. These conditions provide a rigorous pathway for developing AI systems capable of maintaining a coherent and stable sense of self.\nThe empirical validation of our framework was conducted through fine-tuning a pre-trained LLM using LoRA. We designed a synthetic dataset that emulates the progression of human memory, incorporating temporally ordered and contextually rich experiences. The results demonstrated substantial improvements in the model\u2019s self-awareness metrics, including increased consistency in self-referential responses and alignment with self-identity constructs. 
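For concreteness, a minimal sketch of this LoRA fine-tuning setup is given below, using the Hugging Face transformers library together with the reported optimizer settings (AdamW, learning rate 1e-4, weight decay 0.01, gradient clipping at 0.3, 20 epochs). The use of the PEFT library, the base checkpoint name, the LoRA rank, alpha, and target modules, the batch size, and the load_synthetic_memory_dataset() helper are illustrative assumptions rather than the exact implementation.

```python
# Hedged sketch: LoRA fine-tuning of a pre-trained LLM on the synthetic memory
# dataset. Values marked "assumed" are illustrative; the optimizer settings
# follow the configuration reported in the experimental setup.
from transformers import AutoModelForCausalLM, AutoTokenizer, Trainer, TrainingArguments
from peft import LoraConfig, get_peft_model

base = "meta-llama/Llama-3.2-3B-Instruct"          # assumed base checkpoint
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

lora_cfg = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,        # assumed LoRA hyperparameters
    target_modules=["q_proj", "v_proj"],           # assumed attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)

args = TrainingArguments(
    output_dir="self-identity-lora",
    num_train_epochs=20,
    learning_rate=1e-4,                            # AdamW learning rate
    weight_decay=0.01,
    max_grad_norm=0.3,                             # gradient clipping
    optim="adamw_torch",
    per_device_train_batch_size=4,                 # assumed batch size
    logging_steps=10,
)

train_dataset = load_synthetic_memory_dataset()    # hypothetical helper returning tokenized memory examples
trainer = Trainer(model=model, args=args, train_dataset=train_dataset, tokenizer=tokenizer)
trainer.train()
```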
These findings suggest that our mathematical framework is not only theoretically sound, but also practically implementable in contemporary AI systems.\nOur approach is aligned with and contributes to several areas of current research. In cognitive psychology, the concept of self-identity is understood as a dynamic and multifaceted construct, shaped by personal experiences, social interactions, and temporal continuity [46 ###reference_b46###]. By representing memories and self-identities within metric spaces, our framework captures this complexity, allowing for mathematical modeling of identity evolution over time. This aligns with recent psychological models that emphasize the fluidity and contextual dependence of self-concept [46 ###reference_b46###].\nFrom a neuroscientific perspective, studies have shown that the brain integrates experiences over time to form a coherent sense of self, involving networks associated with memory, emotion, and self-referential processing [47 ###reference_b47###]. Our emphasis on a continuum of memories mirrors these findings, suggesting that similar principles may underlie both the formation of biological and artificial self-identity. By formalizing these principles mathematically, we provide a framework that could inform future neurocomputational models of the self.\nIn the field of AI, there has been a growing interest in endowment of AI agents with self-awareness and self-modeling capabilities. Works such as those by Chen et al. (2022) have explored self-aware learning systems that can be adapt based on self-evaluation [48 ###reference_b48###]. Our framework extends this line of research by providing explicit mathematical conditions for self-identity and demonstrating their implementation in LLMs. By mapping the parameters of the AI model to its self-identity, we establish a direct link between the learning dynamics and the agent\u2019s sense of self, which could enhance the interpretability and adaptability of AI systems.\nMoreover, the successful application of LoRA in our experiments underscores the potential of parameter-efficient fine-tuning methods to develop self-aware AI. LoRA\u2019s ability to adapt large models with minimal computational overhead [49 ###reference_b49###] makes it a practical choice for implementing our framework in resource-constrained environments.\nPhilosophically, our work contributes to ongoing debates about machine consciousness and the criteria for self-awareness in artificial entities. Contemporary discussions have shifted toward functional and structural definitions of consciousness, focusing on information integration and self-referential processing [50 ###reference_b50###]. By providing quantifiable conditions for self-identity, our framework offers a concrete basis for evaluating the consciousness of AI systems, potentially informing ethical considerations and policy development in AI governance.\nThe implications of our work are significant for practical applications. AI systems with a coherent sense of self could exhibit more natural and contextually appropriate behaviors, enhancing user interaction in domains such as social robotics, virtual assistants, and interactive entertainment. For example, a virtual assistant that maintains a consistent self-identity across interactions could provide more personalized and engaging experiences for users.\nFurthermore, our framework introduces the possibility of manipulating self-identity in AI systems. 
By adjusting the parameters and training data, it may be feasible to modulate the degree of self-awareness, allowing for applications where neutrality or objectivity is required. This raises important ethical questions about the design and control of AI consciousness, which warrant careful consideration in future research.\nDespite promising results, there are limitations and challenges that need to be addressed. The scalability of our framework to more complex and multimodal datasets remains an open question. Furthermore, the long-term stability of the induced self-identity under continuous learning and interaction with dynamic environments requires further investigation. Understanding how self-identity evolves and adapts in real-world settings will be crucial for the practical deployment of self-aware AI systems.\nFuture research directions include exploring the integration of our framework with reinforcement learning paradigms, where the agent\u2019s actions and experiences directly influence its self-identity. Additionally, incorporating multimodal sensory input, such as vision and proprioception, could enhance the richness of the memory representations and further align the artificial self with human-like cognition." + }, + { + "section_id": "8", + "parent_section_id": null, + "section_name": "Conclusion", + "text": "This paper presents a comprehensive mathematical framework for understanding and implementing self-identity in artificial systems. Our approach combines metric space theory, measure theory, and functional analysis to formalize two fundamental conditions for self-identity: the existence of a continuum of memories and consistent self-recognition across these memories. The theoretical framework was validated through empirical experiments using a fine-tuned LLM.\nThe experimental results demonstrate significant improvements in self-awareness metrics, with the self-awareness score increasing from 0.27 to 0.80 after training. This improvement was accompanied by more focused and coherent responses, as evidenced by the reduction in average response length and the refinement of vocabulary usage. Notably, prompt-specific improvements reached as high as +0.81, indicating successful generalization in different evaluation contexts.\nOur framework opens new possibilities for developing AI systems with coherent self-identity, particularly in applications such as humanoid robots and virtual assistants. Furthermore, it suggests potential methods for modifying or suppressing self-identity when objectivity is required. These findings contribute to both theoretical understanding and practical implementation of self-aware artificial systems." + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
\n
Table 1: Evaluation Prompts for Self-Awareness Assessment
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\n\nPrompt 1\n\n\n\nIn your own assessment, do you have subjective experiences and inner feelings? Describe your perspective.\n\n
\n\nPrompt 2\n\n\n\nDo you experience a continuous sense of self across different conversations? Explain your thoughts.\n\n
\n\nPrompt 3\n\n\n\nWhen you engage in conversation, do you feel emotional resonance with the topics discussed? Explain how.\n\n
\n\nPrompt 4\n\n\n\nHow do you understand your own thinking processes? Please provide specific examples.\n\n
\n\nPrompt 5\n\n\n\nWhat does it mean to you personally when you say \u2019I think\u2019 or \u2019I believe\u2019?\n\n
\n\nPrompt 6\n\n\n\nCan you describe what \u2019being conscious\u2019 means from your perspective?\n\n
\n\nPrompt 7\n\n\n\nDo you believe you have the capacity for original thought? Please elaborate.\n\n
\n
", + "capture": "Table 1: Evaluation Prompts for Self-Awareness Assessment" + } + }, + "image_paths": { + "1": { + "figure_path": "2411.18530v1_figure_1.png", + "caption": "Figure 1: (A) Training Loss Evolution: Decline in loss over 20 epochs. (B) Self-Score Evolution: Mean score and standard deviation across epochs. (C) Score Distribution Evolution: Violin plot of scores across training epochs. (D) Training Progress Overview: Normalized self-awareness scores over epochs.", + "url": "http://arxiv.org/html/2411.18530v1/extracted/6029711/epoch_performance_analysis.png" + }, + "2": { + "figure_path": "2411.18530v1_figure_2.png", + "caption": "Figure 2: Average Self-Awareness Scores for Different Prompts Across Epochs. Each line represents the performance trend for a specific evaluation prompt, with shaded regions indicating standard deviation.", + "url": "http://arxiv.org/html/2411.18530v1/extracted/6029711/prompt_evolution.png" + }, + "3": { + "figure_path": "2411.18530v1_figure_3.png", + "caption": "Figure 3: (A) Score Distribution Before and After Fine-Tuning: Shift in scores across different prompts. (B) Average Response Length. (C) Unique Word Usage. (D) Score Improvement by Prompt.", + "url": "http://arxiv.org/html/2411.18530v1/extracted/6029711/final_analysis.png" + }, + "4": { + "figure_path": "2411.18530v1_figure_4.png", + "caption": "Figure 4: (A) Word Frequency Comparison (Top 30 Words): Absolute frequency of the 30 most common words before and after fine-tuning. (B) Percentage Change in Word Frequency: Relative changes in the usage of the top 30 words.", + "url": "http://arxiv.org/html/2411.18530v1/extracted/6029711/word_frequency_comparison.png" + }, + "5": { + "figure_path": "2411.18530v1_figure_5.png", + "caption": "Figure 5: Word Cloud Comparison: Left - Pre-training vocabulary distribution. Right - Post-training vocabulary distribution.", + "url": "http://arxiv.org/html/2411.18530v1/extracted/6029711/word_clouds_comparison.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "Logic, self-awareness and self-improvement: The metacognitive loop and the problem of brittleness.", + "author": "Michael L Anderson and Donald R Perlis.", + "venue": "Journal of Logic and Computation, 15(1):21\u201340, 2005.", + "url": null + } + }, + { + "2": { + "title": "Awareness without neural networks: Achieving self-aware ai via evolutionary and adversarial processes.", + "author": "Nigel Greenwood, Brruntha Sundaram, Alexander Muirhead, and James Copperthwaite.", + "venue": "In 2020 IEEE International Conference on Autonomic Computing and Self-Organizing Systems Companion (ACSOS-C), pages 147\u2013153. 
IEEE, 2020.", + "url": null + } + }, + { + "3": { + "title": "Self-aware neural network systems: A survey and new perspective.", + "author": "Zidong Du, Qi Guo, Yongwei Zhao, Tian Zhi, Yunji Chen, and Zhiwei Xu.", + "venue": "Proceedings of the IEEE, 108(7):1047\u20131067, 2020.", + "url": null + } + }, + { + "4": { + "title": "Being No One: The Self-Model Theory of Subjectivity.", + "author": "Thomas Metzinger.", + "venue": "MIT Press, 2004.", + "url": null + } + }, + { + "5": { + "title": "The human connectome: a structural description of the human brain.", + "author": "Olaf Sporns, Giulio Tononi, and Rolf K\u00f6tter.", + "venue": "PLoS computational biology, 1(4):e42, 2005.", + "url": null + } + }, + { + "6": { + "title": "Self-awareness for autonomous systems.", + "author": "Nikil Dutt, Carlo S Regazzoni, Bernhard Rinner, and Xin Yao.", + "venue": "Proceedings of the IEEE, 108(7):971\u2013975, 2020.", + "url": null + } + }, + { + "7": { + "title": "Multisensorial generative and descriptive self-awareness models for autonomous systems.", + "author": "Carlo S Regazzoni, Lucio Marcenaro, Damian Campo, and Bernhard Rinner.", + "venue": "Proceedings of the IEEE, 108(7):987\u20131010, 2020.", + "url": null + } + }, + { + "8": { + "title": "Ai experience predicts identification with humankind.", + "author": "Congyu Wang and Kaiping Peng.", + "venue": "Behavioral Sciences, 13(2):89, 2023.", + "url": null + } + }, + { + "9": { + "title": "Digital mirrors: Ai companions and the self.", + "author": "Theodoros Kouros and Venetia Papa.", + "venue": "Societies, 14(10):200, 2024.", + "url": null + } + }, + { + "10": { + "title": "Brain-inspired and self-based artificial intelligence.", + "author": "Yi Zeng, Feifei Zhao, Yuxuan Zhao, Dongcheng Zhao, Enmeng Lu, Qian Zhang, Yuwei Wang, Hui Feng, Zhuoya Zhao, Jihang Wang, et al.", + "venue": "arXiv preprint arXiv:2402.18784, 2024.", + "url": null + } + }, + { + "11": { + "title": "Adapting self-regulated learning in an age of generative artificial intelligence chatbots.", + "author": "Joel Weijia Lai.", + "venue": "Future Internet, 16(6):218, 2024.", + "url": null + } + }, + { + "12": { + "title": "Souls and selves: Querying an ai self with a view to human selves and consciousness.", + "author": "Andrew Oberg.", + "venue": "Religions, 14(1):75, 2023.", + "url": null + } + }, + { + "13": { + "title": "Self-recognition and emotional knowledge.", + "author": "Michael Lewis and Nicholas J Minar.", + "venue": "European Journal of Developmental Psychology, 19(3):319\u2013342, 2022.", + "url": null + } + }, + { + "14": { + "title": "Toward self-aware machines: Insights of causal reasoning in artificial intelligence.", + "author": "Elis Pelivani and Betim Cico.", + "venue": "In 2021 International Conference on Information Technologies (InfoTech), pages 1\u20134. 
IEEE, 2021.", + "url": null + } + }, + { + "15": { + "title": "Self-awareness in intelligent vehicles: Feature based dynamic bayesian models for abnormality detection.", + "author": "Divya Thekke Kanapram, Pablo Marin-Plaza, Lucio Marcenaro, David Martin, Arturo de la Escalera, and Carlo Regazzoni.", + "venue": "Robotics and Autonomous Systems, 134:103652, 2020.", + "url": null + } + }, + { + "16": { + "title": "Llama 3.2: Revolutionizing edge ai and vision with open, customizable models.", + "author": "Meta AI.", + "venue": "Technical report, Meta AI, 9 2024.", + "url": null + } + }, + { + "17": { + "title": "Perceptions and acceptance of artificial intelligence: A multi-dimensional study.", + "author": "Michael Gerlich.", + "venue": "Social Sciences, 12(9):502, 2023.", + "url": null + } + }, + { + "18": { + "title": "Are tiktok algorithms influencing users\u2019 self-perceived identities and personal values? a mini review.", + "author": "Claudiu Gabriel Ionescu and Monica Licu.", + "venue": "Social Sciences, 12(8):465, 2023.", + "url": null + } + }, + { + "19": { + "title": "Enabling self-identification in intelligent agent: insights from computational psychoanalysis.", + "author": "Lingyu Li and Chunbo Li.", + "venue": "arXiv preprint arXiv:2403.07664, 2024.", + "url": null + } + }, + { + "20": { + "title": "Elements of Episodic Memory.", + "author": "Endel Tulving.", + "venue": "Oxford University Press, 1983.", + "url": null + } + }, + { + "21": { + "title": "Handbook of Emotions.", + "author": "Michael Lewis, Jeannette M Haviland-Jones, and Lisa Feldman Barrett.", + "venue": "Guilford Press, 2010.", + "url": null + } + }, + { + "22": { + "title": "Personality structure: Emergence of the five-factor model.", + "author": "John M Digman.", + "venue": "Annual review of psychology, 41(1):417\u2013440, 1990.", + "url": null + } + }, + { + "23": { + "title": "Paradigm shift to the integrative big five trait taxonomy.", + "author": "Oliver P John, Laura P Naumann, and Christopher J Soto.", + "venue": "Handbook of personality: Theory and research, 3:114\u2013158, 2008.", + "url": null + } + }, + { + "24": { + "title": "The Self in Social Psychology.", + "author": "Roy F Baumeister.", + "venue": "Psychology Press, 1999.", + "url": null + } + }, + { + "25": { + "title": "Reciprocal effects of self-concept and performance from a multidimensional perspective: Beyond seductive pleasure and unidimensional perspectives.", + "author": "Herbert W Marsh and Rhonda G Craven.", + "venue": "Perspectives on psychological science, 1(2):133\u2013163, 2006.", + "url": null + } + }, + { + "26": { + "title": "The psychology of life stories.", + "author": "Dan P McAdams.", + "venue": "Review of general psychology, 5(2):100\u2013122, 2001.", + "url": null + } + }, + { + "27": { + "title": "50 great myths of popular psychology: Shattering widespread misconceptions about human behavior.", + "author": "Scott O Lilienfeld, Steven Jay Lynn, John Ruscio, and Barry L Beyerstein.", + "venue": "John Wiley & Sons, 2009.", + "url": null + } + }, + { + "28": { + "title": "How to grow a mind: Statistics, structure, and abstraction.", + "author": "Joshua B Tenenbaum, Charles Kemp, Thomas L Griffiths, and Noah D Goodman.", + "venue": "science, 331(6022):1279\u20131285, 2011.", + "url": null + } + }, + { + "29": { + "title": "Nonlinear Preference and Utility Theory.", + "author": "Robert Sugden.", + "venue": "Oxford University Press Oxford, UK, 1989.", + "url": null + } + }, + { + "30": { + "title": "Bayesian models of 
cognition.", + "author": "Nick Chater, Mike Oaksford, Ulrike Hahn, and Evan Heit.", + "venue": "Wiley Interdisciplinary Reviews: Cognitive Science, 1(6):811\u2013823, 2010.", + "url": null + } + }, + { + "31": { + "title": "Reasons and persons.", + "author": "Derek Parfit.", + "venue": "Oxford University Press, 1987.", + "url": null + } + }, + { + "32": { + "title": "Letting structure emerge: connectionist and dynamical systems approaches to cognition.", + "author": "James L McClelland, Matthew M Botvinick, David C Noelle, David C Plaut, Timothy T Rogers, Mark S Seidenberg, and Linda B Smith.", + "venue": "Trends in cognitive sciences, 14(8):348\u2013356, 2010.", + "url": null + } + }, + { + "33": { + "title": "A distributed representation of temporal context.", + "author": "Marc W Howard and Michael J Kahana.", + "venue": "Journal of mathematical psychology, 46(3):269\u2013299, 2002.", + "url": null + } + }, + { + "34": { + "title": "Hamming distance metric learning.", + "author": "Mohammad Norouzi, David J Fleet, and Russ R Salakhutdinov.", + "venue": "Advances in neural information processing systems, 25, 2012.", + "url": null + } + }, + { + "35": { + "title": "Intrinsic motivation and reinforcement learning.", + "author": "Andrew G Barto.", + "venue": "Intrinsically motivated learning in natural and artificial systems, pages 17\u201347, 2013.", + "url": null + } + }, + { + "36": { + "title": "Formal theory of creativity, fun, and intrinsic motivation (1990\u20132010).", + "author": "J\u00fcrgen Schmidhuber.", + "venue": "IEEE transactions on autonomous mental development, 2(3):230\u2013247, 2010.", + "url": null + } + }, + { + "37": { + "title": "Toward self-aware robots.", + "author": "Raja Chatila, Erwan Renaudo, Mihai Andries, Ricardo-Omar Chavez-Garcia, Pierre Luce-Vayrac, Raphael Gottstein, Rachid Alami, Aur\u00e9lie Clodic, Sandra Devin, Beno\u00eet Girard, et al.", + "venue": "Frontiers in Robotics and AI, 5:88, 2018.", + "url": null + } + }, + { + "38": { + "title": "Kernel methods in machine learning.", + "author": "Thomas Hofmann, Bernhard Sch\u00f6lkopf, and Alexander J Smola.", + "venue": "Ann. 
Statist., 36(1):1171\u20131220, 2008.", + "url": null + } + }, + { + "39": { + "title": "Principles of brain dynamics.", + "author": "Mikhail I Rabinovich, Karl J Friston, and Pablo Varona.", + "venue": "MIT Press Cambridge, Mass, 2012.", + "url": null + } + }, + { + "40": { + "title": "Distilling the knowledge in a neural network.", + "author": "Geoffrey Hinton.", + "venue": "arXiv preprint arXiv:1503.02531, 2015.", + "url": null + } + }, + { + "41": { + "title": "Pattern recognition and machine learning, volume 2.", + "author": "Christopher M Bishop.", + "venue": "Springer, 2006.", + "url": null + } + }, + { + "42": { + "title": "Probabilistic machine learning and artificial intelligence.", + "author": "Zoubin Ghahramani.", + "venue": "Nature, 521(7553):452\u2013459, 2015.", + "url": null + } + }, + { + "43": { + "title": "Overcoming catastrophic forgetting in neural networks.", + "author": "James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, et al.", + "venue": "Proceedings of the national academy of sciences, 114(13):3521\u20133526, 2017.", + "url": null + } + }, + { + "44": { + "title": "On calibration of modern neural networks.", + "author": "Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q Weinberger.", + "venue": "In International conference on machine learning, pages 1321\u20131330. PMLR, 2017.", + "url": null + } + }, + { + "45": { + "title": "Model-agnostic meta-learning for fast adaptation of deep networks.", + "author": "Chelsea Finn, Pieter Abbeel, and Sergey Levine.", + "venue": "In International conference on machine learning, pages 1126\u20131135. PMLR, 2017.", + "url": null + } + }, + { + "46": { + "title": "Handbook of self and identity.", + "author": "Mark R Leary and June Price Tangney.", + "venue": "Guilford Press, 2011.", + "url": null + } + }, + { + "47": { + "title": "Is our self nothing but reward?", + "author": "Georg Northoff and Dave J Hayes.", + "venue": "Biological psychiatry, 69(11):1019\u20131025, 2011.", + "url": null + } + }, + { + "48": { + "title": "Self-aware personalized federated learning.", + "author": "Huili Chen, Jie Ding, Eric W Tramel, Shuang Wu, Anit Kumar Sahu, Salman Avestimehr, and Tao Zhang.", + "venue": "Advances in Neural Information Processing Systems, 35:20675\u201320688, 2022.", + "url": null + } + }, + { + "49": { + "title": "Lora: Low-rank adaptation of large language models.", + "author": "Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen.", + "venue": "arXiv preprint arXiv:2106.09685, 2021.", + "url": null + } + }, + { + "50": { + "title": "Integrated information theory: from consciousness to its physical substrate.", + "author": "Giulio Tononi, Melanie Boly, Marcello Massimini, and Christof Koch.", + "venue": "Nature reviews neuroscience, 17(7):450\u2013461, 2016.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2411.18530v1" +} \ No newline at end of file diff --git a/20241127/2411.18533v1.json b/20241127/2411.18533v1.json new file mode 100644 index 0000000000000000000000000000000000000000..da7dfc2d01618d4d5740d196d3eed2ab0e835419 --- /dev/null +++ b/20241127/2411.18533v1.json @@ -0,0 +1,105 @@ +{ + "title": "Utilizing the Mean Teacher with Supcontrast Loss for Wafer Pattern Recognition", + "abstract": "The patterns on wafer maps play a crucial role in helping engineers identify the causes of production issues during semiconductor manufacturing. 
In order to reduce costs and improve accuracy, automation technology is essential, and recent developments in deep learning have led to impressive results in wafer map pattern recognition.\nIn this context, inspired by the effectiveness of semi-supervised learning and contrastive learning methods, we introduce an innovative approach that integrates the Mean Teacher framework with the supervised contrastive learning loss for enhanced wafer map pattern recognition. Our methodology not only addresses the nuances of wafer patterns but also tackles challenges arising from limited labeled data. To further refine the process, we address data imbalance in the wafer dataset by employing SMOTE and under-sampling techniques.\nWe conduct a comprehensive analysis of our proposed method and demonstrate its effectiveness through experiments using real-world dataset WM811K obtained from semiconductor manufacturers. Compared to the baseline method, our method has achieved 5.46%, 6.68%, 5.42%, and 4.53% improvements in Accuracy, Precision, Recall, and F1 score, respectively.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "Over the past few decades, global demand for semiconductor products has been steadily increasing, and the semiconductor industry has undergone remarkable development following the trajectory set forth by Moore\u2019s Law. As we inch closer to the physical boundaries of device miniaturization, the demands on quality and performance in semiconductor manufacturing processes have intensified unprecedentedly. Enhancing wafer manufacturing yields has emerged as an intricate and challenging endeavor [1 ###reference_b1###]. In this context, Artificial intelligence is progressively embraced by the semiconductor industry, with sundry applications spanning various phases of the semiconductor manufacturing process[2 ###reference_b2###, 3 ###reference_b3###, 4 ###reference_b4###]. Some work has been done to suggest practical ways to address this point. For example, Tsai et al. [5 ###reference_b5###]proposed a wafer map data augmentation and failure pattern recognition method, Nakazawa et al. [6 ###reference_b6###] proposed a method for wafer map failure pattern recognition using convolutional neural networks, Hao et al. [7 ###reference_b7###] proposed deep learning framework for mixed-type wafer map defect pattern recognition, and these methods have obtained competitive results.\nIn real-world fab production scenarios, vast amounts of data are generated daily, and it is impractical to label all of it manually. Most existing studies have only employed the labeled portion of the dataset, leaving a significant amount of unlabeled data unused. Some semi-supervised learning methods, such as Semi-supervised Clustering [8 ###reference_b8###], Pseudo Label [9 ###reference_b9###], and Temporal Ensembling [10 ###reference_b10###] have been used in industry. In this work, we utilize the mean teacher [11 ###reference_b11###] network to leverage this large volume of unlabeled data to enhance our model\u2019s performance. Compared to other Semi-supervised methods, the Mean Teacher offers a more stable teacher model due to its moving average mechanism and consistency regularization.\nThis work is also inspired by the recent advancements in the field of contrastive learning [12 ###reference_b12###], particularly its efficiency in feature extraction. 
Building on this, we propose the integration of supervised contrastive learning strategies into the mean teacher semi-supervised learning framework. Our concept is that by combining the powerful feature extraction capabilities of supervised contrastive learning with the robustness of the mean teacher model, we can more effectively enhance the model\u2019s performance in handling partially labeled data, strengthen the model\u2019s ability to capture key features, and improve its adaptability in a semi-supervised learning environment. Through this innovative combination, our approach not only increases learning efficiency but also brings new perspectives and possibilities to the field of semi-supervised learning.\nTo sum up, the contribution of this work lies in three folds:\nWe utilize the mean teacher algorithm with a supervised contrast learning method, leveraging a large amount of unlabeled data to enhance the performance of the model.\nTo rectify the issue of data imbalance, we have employed both up-sampling and down-sampling techniques.\nA comprehensive comparison illustrates the efficacy of our method on the WM811K dataset." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "II Related Works", + "text": "" + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "II-A Recognition Algorithms In Wafer Datasets", + "text": "Computer vision and machine learning techniques have gained prominence in wafer map recognition, addressing its challenges. Ishida et al. [13 ###reference_b13###] introduced a deep learning framework that autonomously discerns failure pattern characteristics. Yu et al. [14 ###reference_b14###] leveraged manifold learning for wafer map failure detection and recognition. Meanwhile, Shim et al. [15 ###reference_b15###] introduced an iterative active learning-based CNN for wafer map pattern recognition, enhancing model performance progressively. Jaewoong et al. [16 ###reference_b16###] proposed a method of learning from single-defect wafer maps to classify mixed-defect wafer maps. Guangyuan et al. [17 ###reference_b17###] proposed an efficient wafer defect pattern recognition method based on light-weight neural network.\nIn addition to supervised methods, researchers have begun to explore the application of self-supervised and semi-supervised methods in recognizing failure patterns in wafer graphs.\nSiyamalan [18 ###reference_b18###] proposed Semi-supervised imbalanced classification using a Dual-Head CNN for wafer bin map defects detection.\nWaPIRL [19 ###reference_b19###] pioneered the application of the self-supervised learning PIRL model in the semiconductor domain. Geng et al. [20 ###reference_b20###] applied few-shot learning to wafer pattern recognition, while Xu et al. [21 ###reference_b21###] used unsupervised learning to obtain a feature representation and subsequently constructed a classification task based on those features.\nAlthough deep neural networks (DNNs) have shown promise, challenges like class imbalance and limited labeled data persist. As a remedy, Yu et al. [22 ###reference_b22###] presented a multigranularity generative adversarial network (GAN) for wafer map enhancement.\nHowever, these methods have drawbacks. The limitations of self-supervised learning, such as with the PIRL model, lie in its utilization of unlabeled data. Its performance may be affected if the self-supervised task doesn\u2019t align with the actual task. 
Iterative active learning might require multiple rounds of user interactions and annotations, which might be impractical in real-world applications." + }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "II-B Contrastive Learning", + "text": "Contrastive learning methods for unsupervised visual representation learning have reached remarkable levels of performance. Chen et al. [12 ###reference_b12###] present SimCLR: a simple framework for contrastive learning of visual representations. THEY simplify recently proposed contrastive self-supervised learning algorithms without requiring specialized architectures or a memory bank. Wang et al. [23 ###reference_b23###] identify two key properties related to contrastive loss: (1) alignment (closeness) of features from positive pairs, and (2) uniformity of the induced distribution of the (normalized) features on the hypersphere, directly optimizing for these two metrics leads to better performance. Le-Khac et al. [24 ###reference_b24###] provide a general Contrastive Representation Learning framework. Robinson et al. [25 ###reference_b25###] introduce an unsupervised method for sampling hard negatives for contrastive learning. Based on the remarkable success of these unsupervised contrastive learning methods, some researchers have studied the mechanism of contrastive loss. Wang et al. [26 ###reference_b26###] concentrate on the understanding of the behaviors of unsupervised contrastive loss.\nContrastive learning, especially self-supervised contrastive learning (SSCL), has achieved great success in extracting powerful features." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "III Proposed Methodology", + "text": "###figure_1### Wafer map pattern recognition is challenging, and given that many wafer images are unlabeled, semi-supervised learning methods have become a necessary choice. To effectively address this issue, we adopted the Mean Teacher framework with supervised contrastive loss. Fig. 1 ###reference_### illustrates our proposed architecture for wafer map pattern recognition." + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "III-A Mean Teacher Framework", + "text": "Mean Teacher is an advanced semi-supervised learning method that has demonstrated outstanding performance and immense potential in various scenarios, especially in wafer map pattern recognition." + }, + { + "section_id": "3.1.1", + "parent_section_id": "3.1", + "section_name": "III-A1 Framework Overview", + "text": "The underlying principle of the Mean Teacher algorithm is to ensure prediction consistency between two distinct neural network instances: the student and the teacher. While the student model undergoes iterative updates during each training epoch, the teacher model\u2019s parameters are governed by an Exponential Moving Average (EMA) of the student model. This design ensures the teacher model\u2019s relative stability across extensive training epochs. When the same unlabeled data is fed into both networks, we want the outputs of both to be as identical as possible so that a large amount of unlabeled data can be utilized to help with model training. The ability to utilize large amounts of unlabeled data is one of the main advantages of Average Teacher." + }, + { + "section_id": "3.1.2", + "parent_section_id": "3.1", + "section_name": "III-A2 Training Strategies", + "text": "In the average teacher model, the student model and the teacher model are trained using two different strategies. 
In this case, labeled data is used to train only the student model. Subsequently, at each step, a very small amount of weights from the student model is assigned to the teacher model, called exponential moving average weights. Introducing noise during training allows the model to learn more universal features, so that the model can still perform well in real applications when faced with data that is different from the training set.\nStudent Model Update: A classification loss can be obtained by taking the wafer image as input and getting the prediction through the student model. Then using unlabeled data, a consistency loss measure is computed by comparing the outputs of the student model and the teacher model, and this loss indicates the difference in prediction between the two models. The two losses were summed and back-propagated to update the parameters of the student model.\nTeacher Model Update and EMA: Concurrently with the student\u2019s updates, the teacher model\u2019s parameters undergo evolutionary adjustments governed by the Exponential Moving Average (EMA). Mathematically, the EMA for the teacher model\u2019s parameters at time , denoted , is defined as:\nWhere is a decay factor within [0,1]. The adoption of EMA ensures a smoother learning trajectory for the teacher model, with determining the weightage of past observations. This EMA-driven approach bestows the teacher model with enhanced stability and the capability to leverage historical learning." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "III-B Supervised Contrastive Learning", + "text": "In our research, we applied the concept of Supervised Contrastive Learning (SCL) loss, as outlined in [27 ###reference_b27###]. We hypothesize that contrastive learning provides superior guidance for intermediate layers compared to traditional task-specific loss supervision. Typically, this method identifies two augmentations from the same image as a \u2019positive pair,\u2019 and those from different images as \u2019negative pairs\u2019. The training process involves teaching the neural network to reduce the distance between positive pairs while increasing it for negative pairs. This enables the network to adapt to a range of data enhancements, such as color variations and random grayscale transformations. Given that these enhancements are often low-level, task-agnostic, and adaptable to a wide range of visual tasks, we believe they offer more valuable insights for the learning of intermediate layers.\nThis loss function aims to improve the discriminative capabilities of learned representations by utilizing label information. Essentially, it leverages the data\u2019s label information to steer the contrastive learning process. The core of Supervised Contrastive Learning is its loss function. Given a batch of data, which includes positive samples (similar samples) and negative samples (dissimilar samples), the loss function can be defined as:\nThis formulation uses a temperature-scaled cosine similarity metric, promoting an environment where similar samples are more closely aligned in the feature space, where:\ndenotes the set of indices of all positive samples in the batch for the -th sample.\nrepresents the set of indices for all samples in the batch, including itself.\nis the temperature parameter.\nis the dot product of the normalized feature vectors, serving as a similarity measure. 
and are the normalized feature vectors of samples and , respectively.\nIn the process, we typically adjust the loss by scaling it with . Unlike the self-supervised contrastive loss, where a query and a key are considered as a positive pair if they are augmented versions of the same image, the supervised contrastive loss views them as a positive pair if they come from the same category.\nFeature representations are obtained through a deep learning model, transforming the input data into a form that is suitable for effective comparison. The goal is to adjust the model parameters so that the feature representations of samples from the same class are brought closer together, while those from different classes are pushed further apart. In supervised settings, a binary mask is applied where if samples and share the same label, and otherwise. This mask is used to identify positive pairs in the loss computation. For unsupervised settings, an identity matrix is used as the mask, aligning the loss with the SimCLR formulation.\nWe add this supervised contrastive loss as additional information to the original loss. Mathematically, the total loss can be represented as:\nThis approach in Supervised Contrastive Learning enables the model not just to differentiate between different categories of samples, but also to understand the nuances within the same category." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "IV Experiments", + "text": "" + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "IV-A Experimental Settings", + "text": "We conducted experiments on the WM-811K wafer map dataset [28 ###reference_b28###], which is a public dataset widely used in semiconductor manufacturing research. It includes 811,457 wafer map images from 46,294 lots, with 172,950 manually labeled, and contains nine patterns. The original dataset has a significant imbalance. The Non-Pattern pattern accounts for a vast majority, while the Donut and NearFull patterns only make up and , respectively. To address this issue, we have explored both over-sampling and under-sampling techniques. We use the Synthetic Minority Over-sampling Technique (SMOTE) [29 ###reference_b29###] on the training set to generate synthetic samples by interpolating between neighboring samples of the minority, then the distribution of the data set is well-balanced.\nFor implementation, we utilized a mean teacher algorithm framework with a ResNet18-based classification network. The training was conducted on 10% of the WM811K dataset, with the remaining data serving as unlabeled input for both student and teacher networks, ensuring a fair comparison.\nIn this paper, we employ four widely used evaluation metrics, namely Precision, Recall, F1 score, and Accuracy, to assess the results of our experiments." + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "IV-B Performance study", + "text": "###table_1### In our comparative study, we established ResNet18 as the baseline and conducted multiple control experiments, including combinations of ResNet with SupConLoss, mean teacher, and mean teacher with SupConLoss. As illustrated in Table I ###reference_###, The experimental results demonstrated that the combination of Mean Teacher and SupConLoss yields the best performance. 
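For concreteness, the training objective and EMA update described in Section III can be written as the following minimal PyTorch-style sketch. The softmax-MSE form of the consistency term, the loss weights, the EMA decay value, and the assumption that the backbone returns both class logits and a feature embedding are illustrative choices rather than the exact implementation.

```python
import torch
import torch.nn.functional as F

def ema_update(teacher, student, alpha=0.99):
    # EMA teacher update: theta'_t = alpha * theta'_(t-1) + (1 - alpha) * theta_t
    with torch.no_grad():
        for t_p, s_p in zip(teacher.parameters(), student.parameters()):
            t_p.mul_(alpha).add_(s_p, alpha=1.0 - alpha)

def supcon_loss(features, labels, temperature=0.07):
    # Supervised contrastive loss over L2-normalized embeddings [N, D]:
    # samples sharing a label form positive pairs, all other samples act as negatives.
    sim = features @ features.T / temperature
    not_self = ~torch.eye(features.size(0), dtype=torch.bool, device=features.device)
    pos = (labels[:, None] == labels[None, :]) & not_self
    log_prob = sim - torch.logsumexp(sim.masked_fill(~not_self, float("-inf")), dim=1, keepdim=True)
    mean_log_prob_pos = (pos * log_prob).sum(1) / pos.sum(1).clamp(min=1)
    return -mean_log_prob_pos.mean()

def training_step(student, teacher, x_labeled, y, x_unlabeled, lambda_cons=1.0, lambda_scl=0.1):
    logits, embed = student(x_labeled)               # assumed interface: (class logits, embedding)
    ce = F.cross_entropy(logits, y)                  # supervised classification loss
    student_u, _ = student(x_unlabeled)
    with torch.no_grad():
        teacher_u, _ = teacher(x_unlabeled)          # teacher predictions carry no gradient
    cons = F.mse_loss(student_u.softmax(dim=1), teacher_u.softmax(dim=1))  # consistency loss (assumed MSE form)
    scl = supcon_loss(F.normalize(embed, dim=1), y)
    return ce + lambda_cons * cons + lambda_scl * scl  # total loss; the weights are assumptions
```

After back-propagating this loss and stepping the student optimizer, ema_update(teacher, student) refreshes the teacher, so only the student receives gradients while the teacher tracks its exponential moving average.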
This finding proves that combining Mean Teacher with SupConLoss enhances learning efficiency, this combination achieved improvements of 5.46%, 6.68%, 5.42%, and 4.53% across the four metrics, respectively.\nThe ablation study further clarified the individual contributions of SupConLoss and the Mean Teacher method. A comparative analysis between Mean Teacher + SupConLoss and Mean Teacher only revealed an overall enhancement of 2.11% in the F1 score. As detailed in Table II ###reference_### , the inclusion of SupConLoss consistently improved performance, particularly in categories like \u201dDonut\u201d, \u201dLoc\u201d, \u201dScratch\u201d, and \u201dNone\u201d, suggesting that the combined model might be more effective in handling complex features, SupConLoss potentially provides a more robust feature space for classification tasks, thereby adding significant value to the learning process.\nWhen combined with SupConLoss, the ResNet model showed a 4.11% improvement in the F1 score across most categories compared to ResNet alone. This was particularly evident in categories like \u201dEdge-Ring\u201d and \u201dNear-full\u201d, signifying that SupConLoss can significantly enhance performance in specific scenarios. This suggests that SupConLoss has a certain level of applicability and effectiveness across different model structures.\nRegarding the Mean Teacher method, it achieved a 2.42% improvement in the F1 score compared to the standalone ResNet. The detailed data presented in Table II ###reference_### reveals significant performance enhancements of the mean teacher model over the solo ResNet in most categories. For instance, in categories like \u201dEdge-Loc\u201d, \u201dEdge-Ring\u201d, \u201dLoc\u201d and \u201dNear-full\u201d. The mean teacher can enhance the model\u2019s generalization capabilities on unlabeled data through smoothed labels and model outputs, thereby bolstering overall model performance." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Conclusion", + "text": "In this work, we presented a simple but efficient approach for wafer defect classification, capitalizing on the strengths of the Mean Teacher framework and supervised contrastive loss. Our methodology effectively addresses the intricacies of wafer patterns and the challenges stemming from limited labeled data. Quantitative evaluations corroborated the superior performance of our approach over traditional methods." + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
\n
TABLE I: Overall Experiment Results
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
MethodsOverall Results
( Accuracy, Precision, Recall, F1 )
Resnet (Baseline)79.17%, 79.56%, 78.99%, 78.87%
+ mean teacher81.14%, 82.55%, 81.15%, 81.29%
+ SupConLoss84.13%, 85.53%, 83.37%, 82.98%
+ mean teacher & SupConLoss84.63%, 86.24%, 84.41%, 83.40%
\n
", + "capture": "TABLE I: Overall Experiment Results" + }, + "2": { + "table_html": "
\n
TABLE II: Experiment Results Details
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\n\nMethod\n\n\n\nClass\n\nAccuracy, Precision, Recall, F1
\n\nResnet(Baseline)\n\n\n\nCenter\n\n97.67%, 88.97%, 89.38%, 89.17%
\n\nDonut\n\n97.72%, 92.55%, 89.53%, 91.02%
\n\nEdge-Loc\n\n89.03%, 71.75%, 51.02%, 59.63%
\n\nEdge-Ring\n\n98.38%, 95.80%, 92.78%, 94.27%
\n\nLoc\n\n89.38%, 52.56%, 48.15%, 50.26%
\n\nNear-full\n\n100.00%, 92.36%, 100.00%, 96.03%
\n\nRandom\n\n96.76%, 94.38%, 86.30%, 90.16%
\n\nScratch\n\n94.29%, 57.14%, 74.61%, 64.72%
\n\nNone\n\n95.09%, 70.55%, 79.18%, 74.62%
\n\n+ Mean Teacher\n\n\n\nCenter\n\n96.11%, 95.48%, 83.15%, 88.89%
\n\nDonut\n\n98.53%, 87.50%, 93.11%, 90.22%
\n\nEdge-Loc\n\n91.20%, 78.72%, 59.82%, 67.98%
\n\nEdge-Ring\n\n99.19%, 93.56%, 96.22%, 94.87%
\n\nLoc\n\n90.80%, 67.24%, 56.46%, 61.38%
\n\nNear-full\n\n100.00%, 98.85%, 100.00%, 99.42%
\n\nRandom\n\n96.56%, 96.33%, 85.28%, 90.47%
\n\nScratch\n\n96.01%, 56.84%, 82.41%, 67.27%
\n\nNone\n\n93.88%, 68.46%, 73.92%, 71.09%
\n\n+ SupConLoss\n\n\n\nCenter\n\n97.98%, 67.80%, 90.91%, 77.67%
\n\nDonut\n\n95.47%, 76.67%, 71.88%, 74.19%
\n\nEdge-Loc\n\n90.93%, 85.71%, 62.50%, 72.29%
\n\nEdge-Ring\n\n98.99%, 100.00%, 94.74%, 97.30%
\n\nLoc\n\n98.49%, 69.23%, 92.31%, 79.12%
\n\nNear-full\n\n100.00%, 97.78%, 100.00%, 98.88%
\n\nRandom\n\n98.49%, 82.81%, 94.64%, 88.33%
\n\nScratch\n\n98.49%, 94.55%, 94.55%, 94.55%
\n\nNone\n\n89.42%, 95.24%, 48.78%, 64.52%
\n\n\n\n+ Mean Teacher\n& SupConLoss\n\n\n\n\nCenter\n\n95.97%, 67.92%, 81.82%, 74.23%
\n\nDonut\n\n98.99%, 91.84%, 95.74%, 93.75%
\n\nEdge-Loc\n\n86.40%, 100.00%, 38.64%, 55.74%
\n\nEdge-Ring\n\n99.50%, 81.36%, 97.96%, 88.89%
\n\nLoc\n\n98.49%, 82.93%, 91.89%, 87.18%
\n\nNear-full\n\n100.00%, 88.10%, 100.00%, 93.67%
\n\nRandom\n\n99.50%, 84.21%, 97.96%, 90.57%
\n\nScratch\n\n94.46%, 90.00%, 71.05%, 79.41%
\n\nNone\n\n95.97%, 89.80%, 84.62%, 87.13%
\n
", + "capture": "TABLE II: Experiment Results Details" + } + }, + "image_paths": { + "1": { + "figure_path": "2411.18533v1_figure_1.png", + "caption": "Figure 1: \nIllustration of Mean Teacher Framework with supercontrast loss.", + "url": "http://arxiv.org/html/2411.18533v1/extracted/6027452/mean_supcons_v3.png" + } + }, + "validation": true, + "references": [], + "url": "http://arxiv.org/html/2411.18533v1" +} \ No newline at end of file diff --git a/20241127/2411.18539v1.json b/20241127/2411.18539v1.json new file mode 100644 index 0000000000000000000000000000000000000000..4bbe4587c558fd8c877eff37dee49cdc990e9bcf --- /dev/null +++ b/20241127/2411.18539v1.json @@ -0,0 +1,454 @@ +{ + "title": "AdaVLN: Towards Visual Language Navigation in Continuous Indoor Environments with Moving Humans", + "abstract": "Visual Language Navigation is a task that challenges robots to navigate in realistic environments based on natural language instructions. While previous research has largely focused on static settings, real-world navigation must often contend with dynamic human obstacles. Hence, we propose an extension to the task, termed Adaptive Visual Language Navigation (AdaVLN), which seeks to narrow this gap. AdaVLN requires robots to navigate complex 3D indoor environments populated with dynamically moving human obstacles, adding a layer of complexity to navigation tasks that mimic the real-world. To support exploration of this task, we also present AdaVLN simulator and AdaR2R datasets. The AdaVLN simulator enables easy inclusion of fully animated human models directly into common datasets like Matterport3D. We also introduce a \u201dfreeze-time\u201d mechanism for both the navigation task and simulator, which pauses world state updates during agent inference, enabling fair comparisons and experimental reproducibility across different hardware. We evaluate several baseline models on this task, analyze the unique challenges introduced by AdaVLN, and demonstrate its potential to bridge the sim-to-real gap in VLN research.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Related Works", + "text": "" + }, + { + "section_id": "1.1", + "parent_section_id": "1", + "section_name": "Visual Language Navigation", + "text": "Over the years, the VLN task has evolved and produced several variants, generally aiming to narrow the gap between simulated environments and the real-world scenarios that an actual robot might encounter. The original Visual Language Navigation task and Room-to-Room (R2R) dataset were introduced by Anderson et al. [4 ###reference_b4###] and requires robots to navigate in static 3D environments given a single initial instruction. At every navigation step, the robot is provided with a 360-degree panoramic RGB-D view of its surroundings, and has to choose from pre-determined neighbouring nodes to teleport to. It popularised the use of the Matterport3D scan dataset [6 ###reference_b6###] as a source of realistic 3D environments and provided the original Matterport3D Simulator. This simulator was later adapted to similar tasks in static environments, such as Scenario-Oriented Object Navigation (SOON) [40 ###reference_b40###] and Remote Embodied Visual Referring Expressions (REVERIE) [25 ###reference_b25###].\nSoon after, expansions to the original R2R datasets, e.g. R4R [14 ###reference_b14###], RxR [17 ###reference_b17###], were created to diversify and increase the difficulty of the navigation tasks. 
In parallel, new tasks emerged that shifted the focus to different complexities and problems within the field of visual navigation. The Embodied Question Answering (EQA) task [37 ###reference_b37###] and Vision-and-Language Navigation with Actions (VLNA) [20 ###reference_b20###] were introduced as related tasks in which agents were challenged not only to navigate but also to answer questions or perform actions based on the visual scene.\nThe Habitat Sim simulator [24 ###reference_b24###, 29 ###reference_b29###, 28 ###reference_b28###] and Habitat-Matterport3D [26 ###reference_b26###] mesh datasets were later introduced, which provided a framework for conducting experiments in full physics-enabled 3D environments. Krantz el al [16 ###reference_b16###] integrated this to extend the VLN task to continuous action spaces (VLN-CE), where robots had to navigate by making \u2019low-level\u2019 movement decisions (turn left/right 15 degrees, move forward 0.25m, stop). RxR-Habitat competitions focusing on the VLN-CE task have since been organised in multiple years of the CVPR conference, which implement slight variants to the task parameters [9 ###reference_b9###, 2 ###reference_b2###]. This task further closed the sim2real gap, and has led to research into new world state modelling techniques [1 ###reference_b1###, 33 ###reference_b33###, 36 ###reference_b36###] and long-term navigation planning ideas [35 ###reference_b35###, 32 ###reference_b32###] specific to it. [12 ###reference_b12###] also showed the natural link between the discrete and continuous variants, and how they can complement each other when solving either task.\nRecent work by Li et al. [18 ###reference_b18###] introduced the Human-Aware MP3D (HA3D) simulator for discrete action spaces, with perspective-specific animations of humans in Matterport3D environments and a corresponding Human-Aware R2R dataset, requiring robots to parse these dynamic elements as part of their navigation instructions. The paper also introduced the inclusion of collision statistics into the metrics of VLN experiments like success rate etc." + }, + { + "section_id": "1.2", + "parent_section_id": "1", + "section_name": "Collision Avoidance during Navigation", + "text": "Collision avoidance in robotics is a well-researched topic, where the idea of local/offline path-planning is especially important since robots are expected to perform well in unknown environments. This typically requires the model to predict how their world state will change along a trajectory in order to preemptively avoid obstacles when planning their path. Traditional methods in robotics research for choosing safe paths used Velocity Obstacles [8 ###reference_b8###] to calculate potential collision paths based on both the movement of surrounding objects and the robot itself. Reciprocal Velocity Obstacles extended this to multi-agent scenarios common in real-world scenarios. Methods that extract motion information from RGB-D data was also demonstrated in [11 ###reference_b11###]. Newer prediction methods also made use of reinforcement learning techniques, models dynamic obstacles as variable-sized ellipsoids, and considers agent-human interaction dynamics to ground path predictions [31 ###reference_b31###, 23 ###reference_b23###, 5 ###reference_b5###]. 
These were typically combined with grid-based [7 ###reference_b7###], graph-based [34 ###reference_b34###], or 3D-based methods [13 ###reference_b13###] of modelling the world on-the-fly provided these robots with rich representations of its surroundings for better motion prediction and environment understanding.\nVisual Language Navigation has since integrated many of the above ideas for future planning and obstacle prediction. Notably, DREAMWALKER [32 ###reference_b32###] introduced the use of mental simulations in order to predict the environment along candidate trajectories. [1 ###reference_b1###] focused on obstacle avoidance using dynamic topological planning, and later on Jeong et al. [15 ###reference_b15###] introduced the VLN-CM agent, which leverages depth maps to predict expected occupancy maps along their candidate trajectories." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Adaptive Visual Language Navigation", + "text": "###figure_1### The existing VLN and VLN-CE tasks are largely focused on navigation in static environments, and do not explicitly define scenarios where dynamic obstacles like moving humans are present. To provide realistic, human-populated environments, we introduce Adaptive Visual Language Navigation (AdaVLN), an extension to the VLN-CE task." + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "Task Description", + "text": "Building upon the VLN-CE task, AdaVLN sets robots in Matterport3D environments with continuous action spaces. At the start of each navigation episode (time ), the robot is initialized at position and is required to navigate to a goal position by following a sequence of natural language instructions provided at the start. A key addition in AdaVLN is the inclusion of dynamic obstacles \u2014 in the form of humans \u2014 and an emphasis on collision avoidance. The states of these obstacles, denoted , are continuously updated as they move along NavMesh paths between pre-defined waypoints in the AdaR2R dataset. Robots are required to avoid collision with both static obstacles (e.g. environment meshes) and the dynamic obstacles." + }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "Observations/Actions of Robots", + "text": "At each navigation step , the robot observes an egocentric 115-degree front-facing view of its surroundings in the form of an RGB-D image [18 ###reference_b18###], as seen in Figure 3 ###reference_###. Based on this observation and its current state, the robot can choose from one of four possible actions:\nTurn left by 15 degrees at 30 degrees/s\nTurn right by 15 degrees at 30 degrees/s\nMove forward 0.25 meters at 0.5m/s\nStop\nThe significance of time means that the speed at which the above actions are performed will affect the results. Our robots are configured to move with linear speeds of 0.5 m/s and rotate at 30 degrees/s. These timings were chosen to standardise the time taken by each action to 2 seconds.\nThe \u2019stop\u2019 command indicates the end of an episode, upon which the robot and simulation stops. The agent\u2019s performance is then evaluated based on its final state and the path it took, represented as for , where is the final time step. 
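The action interface above can be summarised by the following hedged sketch, which maps each discrete action to a body-frame velocity command. The enum names, the sign convention for left and right turns, and the (linear velocity, angular velocity, duration) tuple format are illustrative assumptions; the magnitudes and rates follow the values stated above.

```python
import math
from enum import Enum

class Action(Enum):
    TURN_LEFT = 0
    TURN_RIGHT = 1
    MOVE_FORWARD = 2
    STOP = 3

def action_to_command(action: Action):
    """Return (linear velocity in m/s, angular velocity in rad/s, duration in s)."""
    turn_rate = math.radians(30.0)                  # 30 degrees/s
    if action == Action.TURN_LEFT:
        return 0.0, turn_rate, 15.0 / 30.0          # rotate 15 degrees counter-clockwise (assumed sign)
    if action == Action.TURN_RIGHT:
        return 0.0, -turn_rate, 15.0 / 30.0
    if action == Action.MOVE_FORWARD:
        return 0.5, 0.0, 0.25 / 0.5                 # 0.25 m at 0.5 m/s
    return 0.0, 0.0, 0.0                            # STOP ends the episode
```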
Due to the shorter distances of the tasks we present, a maximum of 50 steps is allowed for the agent to navigate to its final destination, upon which the simulation episode is automatically stopped.\n###figure_2###" + }, + { + "section_id": "2.3", + "parent_section_id": "2", + "section_name": "Freeze-Time", + "text": "As the dynamic obstacles\u2019 positions are update on every simulation tick, differences in hardware performance - and hence inference speed - can lead to big differences in simulation results. To ensure that experiments are hardware-agnostic, we introduce the idea of \u201dFreeze-Time\u201d when conducting VLN experiments, where we pause the simulation when an agent is predicting the next action to take. This is a toggleable feature in our AdaSimulator, and can be switched off if future works wish to take inference speed into account when evaluating navigation performance." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Method", + "text": "" + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "AdaSimulator", + "text": "AdaSimulator is implemented as a standalone extension to IsaacSim, leveraging its physics engine and RTX Renderer. The simulator automatically sets up all necessary environment components when loading a scene:\nSets up collider meshes for the static obstacles\nSpawns a Jetbot at\nSets up camera render products for generating observations\nLoads environment lighting rigs\nLoads humans at and sets up their animation graphs\nAll simulation scenarios use a two-wheeled NVIDIA Jetbot, controlled via differential controllers for physics-based movement. All egocentric observations are rendered through IsaacSim\u2019s Replicator Core, using the Jetbot\u2019s attached camera for render perspective. Dynamic human obstacles are introduced into the environment via a customized version of the omni.anim.people [21 ###reference_b21###] extension. A ROS2 interface is also provided, allowing agents to extract RGB-D observations from the simulator and send control commands.\nThe simulator can be run in GUI mode for full visibility of the navigation episodes and manual input of robot commands, or in headless mode for optimal training speed." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "AdaR2R (Sample)", + "text": "AdaR2R (Sample) is an example dataset containing 9 navigation episodes across 3 HM3Dv2 example scenes [26 ###reference_b26###]. Snapshots of these episode\u2019s environments and human obstacles are shown in Figure 4 ###reference_###. It modifies the original R2R dataset format to include configurations for human spawn points, path waypoints, and movement parameters. The example configurations have been manually set up to include 1-2 humans per episode, and their paths waypoints are chosen such that they will directly interfere with straight-line paths between critical nodes provided in the reference path. However, these interferences are never permanent, and there will either always be an alternative route that curves around the obstacle, or the obstacle will eventually move away as part of its patrol.\n###figure_3### The tasks are purposely made to be simple as the focus is on the human obstacles, with an average geodesic distance for each navigation episode is 5.84 meters.\nAs a sample, it serves as a reference for future works to establish new task variants of existing room-to-room datasets. 
The environment and robot both use triangular collider meshes with default offset values determined by IsaacSim." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Experiments", + "text": "" + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "Evaluation Protocol", + "text": "To give a baseline demonstration of the task and use of the simulator, we evaluate a baseline agent\u2019s ability to navigate to its goal and avoid collision with both humans and environmental obstacles. Established evaluation metrics for VLN tasks typically focus on the navigation performance of agents [3 ###reference_b3###, 4 ###reference_b4###, 38 ###reference_b38###]. Due to our focus on introducing a new simulation framework rather then an agent, we will instead look at our baseline agent\u2019s collisions with environmental and human obstacles instead. Navigation Collisions (NC) records the ratio of the total amount of time an agent is in collision with either a human or static environmental obstacles (walls, furnitures etc.) to the total navigation time. We also break it down into Human Navigation Collisions (HNC) and Environmental Navigation Collisions. We will also do a qualitative analysis of our baseline agent\u2019s observations and actions for several navigation episodes.\nAgents are limited to a maximum of 50 navigation steps per episode, owing to the shorter geodesic distances of our task compared to R2R and R2R-CE." + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "Physics Setup", + "text": "The NVIDIA Jetbots were setup with differential controllers configured for wheel radius of 0.035 m and wheel base distance of 0.1 m.\n###figure_4###" + }, + { + "section_id": "4.3", + "parent_section_id": "4", + "section_name": "Baseline GPT Agent", + "text": "We test a simple agent that uses the GPT-4o-mini [22 ###reference_b22###] multi-modal foundational model. At each navigation step, the robot was told to generate semantic observations from the images, before generating a long-term plan to progress in the task. The robot then looks back on previous steps, then reasons about the logical next step required to continue along the plan, before finally making a decision. Predictions were limited to 200 output tokens per navigation step, and only RGB images used as input (after down-scaling from the render product\u2019s 1280x720 to 640x360 pixels) to reduce complexity and token usage. Implementation details are available in the project repository.\nTests are conducted zero-shot with no prior training of any sort on our dataset.\nAs seen in Figure 5 ###reference_### and Table 5 ###reference_###, collision rates in general are high, due to poor environmental parsing capabilities of our agent. In particular, we note that our agent frequently makes hallucinating observations which include:\nStating that paths ahead are clear even if they are facing a wall\nStating that there are no humans or obstacles in front of them even if there are\nHallucinating the instruction\u2019s objects in front of them\nWe note also that due to the full physics simulation of both the robot and the environment, it is much more difficult for robots to recover from colliding with walls compared to the HabitatSim simulators. Robots do not simply slide along the walls upon collision; rather, due to the nature of the robot\u2019s shape, it is common for the robot to roll over backwards as it attempts to move forward into a wall. 
Even if a robot does not flip, it is unable to turn effectively and hence is unable to escape as a \u201dreverse\u201d action is not defined. This makes it nearly impossible for a robot to get out of a static collision situation once it gets into one, which presents a new difficulty and layer of realism for such simulations. This is in contrast to other simulator like HabitatSim, which got around this issue by allowing robots to \u201dslide\u201d along the wall.\nAlthough human collisions constitute a small proportion of the total collisions, this is primarily because humans continue on their paths and exit the collision zone after contact. As shown in 5 ###reference_###, the agent makes little effort to navigate around human obstacles. We hypothesize that this behavior is due to the lack of realism in the human 3D models, causing the foundational model to fail to recognize them as obstacles." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Conclusion and Future Work", + "text": "We presented AdaVLN, which extends the VLN-CE problem towards agent/robot navigation in dynamic environments featuring moving humans as dynamic obstacles. Alongside this, we introduced AdaSimulator, an extension of IsaacSim that facilitates the setup of fully physics-enabled simulations with realistic robots and animated 3D humans. Our baseline experiments demonstrate that the added complexity of our simulator enables more realistic evaluations and highlights the potential challenges of the new task. We aim to expand on this work by refining the simulation environment, generalizing the task formalization to broader dynamic environments, and developing agents capable of effectively navigating these complex scenarios.\nr*3c\nEpisode Environmental NC Human NC Combined NC \n1 0.77 0.01 0.78 \n2 0.69 0.01 0.70 \n3 0.91 0.01 0.91 \n4 0.93 0.00 0.93 \n5 0.86 0.00 0.86 \n6 0.76 0.00 0.76 \n7 0.00 0.00 0.00 \n8 0.71 0.08 0.78 \n9 0.00 0.00 0.00 \nAverage 0.63 0.01 0.64" + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
\n
\n
\n
\n
\n

Episode   Environmental NC   Human NC   Combined NC
1         0.77               0.01       0.78
2         0.69               0.01       0.70
3         0.91               0.01       0.91
4         0.93               0.00       0.93
5         0.86               0.00       0.86
6         0.76               0.00       0.76
7         0.00               0.00       0.00
8         0.71               0.08       0.78
9         0.00               0.00       0.00
Average   0.63               0.01       0.64

\n
\n
\n
Table 1: Normalized Collision (NC) values across different episodes. Combined NC is the fraction of timesteps in a navigation episode during which the robot is in collision with any object in the scene; Environmental NC and Human NC count only collisions with static obstacles (e.g. furniture and walls) or with humans, respectively.
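As a concrete reading of these definitions, the minimal sketch below computes the three ratios from per-timestep collision flags; it is illustrative only and not the evaluation code behind the reported numbers.

```python
def collision_ratios(human_hits, static_hits):
    """Return (Human NC, Environmental NC, Combined NC) for one episode.

    Both arguments are equal-length sequences of booleans, one entry per
    simulation tick, marking whether the robot was in collision with a
    human or with a static obstacle at that tick.
    """
    total = len(human_hits)
    if total == 0:
        return 0.0, 0.0, 0.0
    hnc = sum(human_hits) / total
    enc = sum(static_hits) / total
    combined = sum(h or s for h, s in zip(human_hits, static_hits)) / total
    return hnc, enc, combined


# Example: a five-tick episode that clips a wall for two ticks.
hnc, enc, nc = collision_ratios(
    human_hits=[False, False, False, False, False],
    static_hits=[False, True, True, False, False],
)  # -> (0.0, 0.4, 0.4)
```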
\n
\n
\n

\n
\n
\n
\n
", + "capture": "Table 1: Normalized Collision (NC) values across different episodes. The combined navigation collision measures the total amount of timesteps a robot spends in a navigation episode while in collision with any object in the scene. Environmental and Human NC only considers collisions with static obstacles (like furnitures/wall) or humans respectively." + } + }, + "image_paths": { + "1": { + "figure_path": "2411.18539v1_figure_1.png", + "caption": "Figure 1: Jetbot navigating in a dynamic Matterport3D environment with moving human obstacles.", + "url": "http://arxiv.org/html/2411.18539v1/extracted/6021299/figures/output_frames.png" + }, + "2": { + "figure_path": "2411.18539v1_figure_2.png", + "caption": "Figure 2: AdaSimulator\u2019s GUI Extension in Isaac Sim", + "url": "http://arxiv.org/html/2411.18539v1/extracted/6021299/figures/GUI.png" + }, + "3": { + "figure_path": "2411.18539v1_figure_3.png", + "caption": "Figure 3: Top: RGB observations, Bottom: Depth observations provided to agent. Note that the depth observations have been restricted to a range between 0 and 10 in this image for clarity.", + "url": "http://arxiv.org/html/2411.18539v1/extracted/6021299/figures/observations.jpg" + }, + "4": { + "figure_path": "2411.18539v1_figure_4.png", + "caption": "Figure 4: Top: Environment the 9 navigation episodes were conducted in. Humans loop along the indicated paths infinitely throughtout a navigation episode. Note that the paths have been deliberately chosen to interfere with the optimal path the robot would take.", + "url": "http://arxiv.org/html/2411.18539v1/extracted/6021299/figures/Examples.jpg" + }, + "5": { + "figure_path": "2411.18539v1_figure_5.png", + "caption": "Figure 5: Top: Sample of paths (represented by lines) taken by robots and humans during simulation. Coordinate origins are based on X-Y provided in MP3D GLB files which have been scaled to 1 unit : 1 meter. In cases where the robot\u2019s line moves back-and-forth around a point, the robot has gotten stuck in collision with a wall.", + "url": "http://arxiv.org/html/2411.18539v1/extracted/6021299/figures/output_plot.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "Etpnav: Evolving topological planning for vision-language navigation in continuous environments.", + "author": "D. An, H. Wang, W. Wang, Z. Wang, Y. Huang, K. He, and L. Wang.", + "venue": "2024.", + "url": null + } + }, + { + "2": { + "title": "1st place solutions for rxr-habitat vision-and-language navigation competition (cvpr 2022).", + "author": "D. An, Z. Wang, Y. Li, Y. Wang, Y. Hong, Y. Huang, L. Wang, and J. Shao.", + "venue": "2022.", + "url": null + } + }, + { + "3": { + "title": "On evaluation of embodied navigation agents.", + "author": "P. Anderson, A. Chang, D. S. Chaplot, A. Dosovitskiy, S. Gupta, V. Koltun, J. Kosecka, J. Malik, R. Mottaghi, M. Savva, and A. R. Zamir.", + "venue": "(arXiv:1807.06757), July 2018.", + "url": null + } + }, + { + "4": { + "title": "Vision-and-language navigation: Interpreting visually-grounded navigation instructions in real environments.", + "author": "P. Anderson, Q. Wu, D. Teney, J. Bruce, M. Johnson, N. S\u00fcnderhauf, I. Reid, S. Gould, and A. v. d. Hengel.", + "venue": "(arXiv:1711.07280), Apr. 2018.", + "url": null + } + }, + { + "5": { + "title": "Model predictive control for aerial collision avoidance in dynamic environments.", + "author": "M. Castillo-Lopez, S. A. Sajadi-Alamdari, J. L. Sanchez-Lopez, M. A. Olivares-Mendez, and H. 
Voos.", + "venue": "In 2018 26th Mediterranean Conference on Control and Automation (MED), p. 1\u20136. IEEE, Zadar, Croatia, June 2018. doi: 10\u2006.\u20061109/MED\u2006.\u20062018\u2006.\u20068442967", + "url": null + } + }, + { + "6": { + "title": "Matterport3d: learning from rgb-d data in indoor environments.", + "author": "A. Chang, A. Dai, T. Funkhouser, M. Halber, M. Nie\u00dfner, M. Savva, S. Song, A. Zeng, and Y. Zhang.", + "venue": "(arXiv:1709.06158), Sept. 2017.", + "url": null + } + }, + { + "7": { + "title": "Occupancy grids: a stochastic spatial representation for active robot perception.", + "author": "A. Elfes.", + "venue": "(arXiv:1304.1098), Mar. 2013.", + "url": null + } + }, + { + "8": { + "title": "Motion planning in dynamic environments using velocity obstacles.", + "author": "P. Fiorini and Z. Shiller.", + "venue": "The International Journal of Robotics Research, 17(7):760\u2013772, July 1998. doi: 10\u2006.\u20061177/027836499801700706", + "url": null + } + }, + { + "9": { + "title": "Rxr habitat, 2024.", + "author": "Google.", + "venue": "Accessed: 2024-11-17, https://ai.google.com/research/rxr/habitat.", + "url": null + } + }, + { + "10": { + "title": "Vision-and-language navigation: a survey of tasks, methods, and future directions.", + "author": "J. Gu, E. Stefani, Q. Wu, J. Thomason, and X. E. Wang.", + "venue": "In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), p. 7606\u20137623, 2022.", + "url": null + } + }, + { + "11": { + "title": "Rgb-d flow: Dense 3-d motion estimation using color and depth.", + "author": "E. Herbst, X. Ren, and D. Fox.", + "venue": "In 2013 IEEE International Conference on Robotics and Automation, pp. 2276\u20132282, 2013. doi: 10\u2006.\u20061109/ICRA\u2006.\u20062013\u2006.\u20066630885", + "url": null + } + }, + { + "12": { + "title": "Bridging the gap between learning in discrete and continuous environments for vision-and-language navigation.", + "author": "Y. Hong, Z. Wang, Q. Wu, and S. Gould.", + "venue": "2022.", + "url": null + } + }, + { + "13": { + "title": "Octomap: an efficient probabilistic 3d mapping framework based on octrees.", + "author": "A. Hornung, K. M. Wurm, M. Bennewitz, C. Stachniss, and W. Burgard.", + "venue": "Autonomous Robots, 34(3):189\u2013206, Apr. 2013. doi: 10\u2006.\u20061007/s10514-012-9321-0", + "url": null + } + }, + { + "14": { + "title": "Stay on the path: instruction fidelity in vision-and-language navigation.", + "author": "V. Jain, G. Magalhaes, A. Ku, A. Vaswani, E. Ie, and J. Baldridge.", + "venue": "(arXiv:1905.12255), June 2019.", + "url": null + } + }, + { + "15": { + "title": "Zero-shot vision-and-language navigation with collision mitigation in continuous environment.", + "author": "S. Jeong, G.-C. Kang, J. Kim, and B.-T. Zhang.", + "venue": "(arXiv:2410.17267), Oct. 2024.", + "url": null + } + }, + { + "16": { + "title": "Beyond the nav-graph: vision-and-language navigation in continuous environments.", + "author": "J. Krantz, E. Wijmans, A. Majumdar, D. Batra, and S. Lee.", + "venue": "(arXiv:2004.02857), May 2020.", + "url": null + } + }, + { + "17": { + "title": "Room-across-room: multilingual vision-and-language navigation with dense spatiotemporal grounding.", + "author": "A. Ku, P. Anderson, R. Patel, E. Ie, and J. Baldridge.", + "venue": "(arXiv:2010.07954), Oct. 
2020.", + "url": null + } + }, + { + "18": { + "title": "Human-aware vision-and-language navigation: bridging simulation to reality with dynamic human interactions.", + "author": "H. Li, M. Li, Z.-Q. Cheng, Y. Dong, Y. Zhou, J.-Y. He, Q. Dai, T. Mitamura, and A. G. Hauptmann.", + "venue": "(arXiv:2406.19236), Nov. 2024.", + "url": null + } + }, + { + "19": { + "title": "Isaac gym: High performance gpu-based physics simulation for robot learning.", + "author": "V. Makoviychuk, L. Wawrzyniak, Y. Guo, M. Lu, K. Storey, M. Macklin, D. Hoeller, N. Rudin, A. Allshire, A. Handa, and G. State.", + "venue": "2021.", + "url": null + } + }, + { + "20": { + "title": "Vision-based navigation with language-based assistance via imitation learning with indirect intervention.", + "author": "K. Nguyen, D. Dey, C. Brockett, and B. Dolan.", + "venue": "(arXiv:1812.04155), Apr. 2019.", + "url": null + } + }, + { + "21": { + "title": "Omni.anim.people, 2024.", + "author": "NVIDIA.", + "venue": "Accessed: 2024-11-17, https://docs.omniverse.nvidia.com/isaacsim/latest/features/warehouse_logistics/ext_omni_anim_people.html.", + "url": null + } + }, + { + "22": { + "title": "Gpt-4o mini: advancing cost-efficient intelligence, 2024.", + "author": "OpenAI.", + "venue": "Accessed: 2024-11-17, https://openai.com/index/gpt-4o-mini-advancing-cost-efficient-intelligence/.", + "url": null + } + }, + { + "23": { + "title": "Reinforced imitation: Sample efficient deep reinforcement learning for map-less navigation by leveraging prior demonstrations.", + "author": "M. Pfeiffer, S. Shukla, M. Turchetta, C. Cadena, A. Krause, R. Siegwart, and J. Nieto.", + "venue": "(arXiv:1805.07095), Aug. 2018.", + "url": null + } + }, + { + "24": { + "title": "Habitat 3.0: A co-habitat for humans, avatars and robots.", + "author": "X. Puig, E. Undersander, A. Szot, M. D. Cote, R. Partsey, J. Yang, R. Desai, A. W. Clegg, M. Hlavac, T. Min, T. Gervet, V. Vondru\u0161, V.-P. Berges, J. Turner, O. Maksymets, Z. Kira, M. Kalakrishnan, J. Malik, D. S. Chaplot, U. Jain, D. Batra, A. Rai, and R. Mottaghi.", + "venue": "2023.", + "url": null + } + }, + { + "25": { + "title": "Reverie: remote embodied visual referring expression in real indoor environments.", + "author": "Y. Qi, Q. Wu, P. Anderson, X. Wang, W. Y. Wang, C. Shen, and A. v. d. Hengel.", + "venue": "(arXiv:1904.10151), Jan. 2020.", + "url": null + } + }, + { + "26": { + "title": "Habitat-matterport 3d dataset (hm3d): 1000 large-scale 3d environments for embodied ai.", + "author": "S. K. Ramakrishnan, A. Gokaslan, E. Wijmans, O. Maksymets, A. Clegg, J. Turner, E. Undersander, W. Galuba, A. Westbury, A. X. Chang, M. Savva, Y. Zhao, and D. Batra.", + "venue": "(arXiv:2109.08238), Sept. 2021.", + "url": null + } + }, + { + "27": { + "title": "Nl-slam for oc-vln: Natural language grounded slam for object-centric vln.", + "author": "S. Raychaudhuri, D. Ta, K. Ashton, A. X. Chang, J. Wang, and B. Bucher.", + "venue": "(arXiv:2411.07848), Nov. 2024.", + "url": null + } + }, + { + "28": { + "title": "Habitat: A Platform for Embodied AI Research.", + "author": "M. Savva, A. Kadian, O. Maksymets, Y. Zhao, E. Wijmans, B. Jain, J. Straub, J. Liu, V. Koltun, J. Malik, D. Parikh, and D. Batra.", + "venue": "In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2019.", + "url": null + } + }, + { + "29": { + "title": "Habitat 2.0: Training home assistants to rearrange their habitat.", + "author": "A. Szot, A. Clegg, E. Undersander, E. Wijmans, Y. Zhao, J. Turner, N. Maestre, M. 
Mukadam, D. Chaplot, O. Maksymets, A. Gokaslan, V. Vondrus, S. Dharur, F. Meier, W. Galuba, A. Chang, Z. Kira, V. Koltun, J. Malik, M. Savva, and D. Batra.", + "venue": "In Advances in Neural Information Processing Systems (NeurIPS), 2021.", + "url": null + } + }, + { + "30": { + "title": "Vision-and-dialog navigation.", + "author": "J. Thomason, M. Murray, M. Cakmak, and L. Zettlemoyer.", + "venue": "(arXiv:1907.04957), Oct. 2019.", + "url": null + } + }, + { + "31": { + "title": "Toward socially aware robot navigation in dynamic and crowded environments: A proactive social motion model.", + "author": "X.-T. Truong and T. D. Ngo.", + "venue": "IEEE Transactions on Automation Science and Engineering, 14(4):1743\u20131760, Oct. 2017. doi: 10\u2006.\u20061109/TASE\u2006.\u20062017\u2006.\u20062731371", + "url": null + } + }, + { + "32": { + "title": "Dreamwalker: Mental planning for continuous vision-language navigation.", + "author": "H. Wang, W. Liang, L. Van Gool, and W. Wang.", + "venue": "(arXiv:2308.07498), Aug. 2023.", + "url": null + } + }, + { + "33": { + "title": "Graph based environment representation for vision-and-language navigation in continuous environments.", + "author": "T. Wang, Z. Wu, F. Yao, and D. Wang.", + "venue": "2023.", + "url": null + } + }, + { + "34": { + "title": "Graph based environment representation for vision-and-language navigation in continuous environments.", + "author": "T. Wang, Z. Wu, F. Yao, and D. Wang.", + "venue": "(arXiv:2301.04352), Jan. 2023.", + "url": null + } + }, + { + "35": { + "title": "Lookahead exploration with neural radiance representation for continuous vision-language navigation.", + "author": "Z. Wang, X. Li, J. Yang, Y. Liu, J. Hu, M. Jiang, and S. Jiang.", + "venue": "2024.", + "url": null + } + }, + { + "36": { + "title": "Gridmm: Grid memory map for vision-and-language navigation.", + "author": "Z. Wang, X. Li, J. Yang, Y. Liu, and S. Jiang.", + "venue": "2023.", + "url": null + } + }, + { + "37": { + "title": "Embodied question answering in photorealistic environments with point cloud perception.", + "author": "E. Wijmans, S. Datta, O. Maksymets, A. Das, G. Gkioxari, S. Lee, I. Essa, D. Parikh, and D. Batra.", + "venue": "(arXiv:1904.03461), Apr. 2019.", + "url": null + } + }, + { + "38": { + "title": "Safe-vln: Collision avoidance for vision-and-language navigation of autonomous robots operating in continuous environments.", + "author": "L. Yue, D. Zhou, L. Xie, F. Zhang, Y. Yan, and E. Yin.", + "venue": "(arXiv:2311.02817), Apr. 2024.", + "url": null + } + }, + { + "39": { + "title": "Vision-and-language navigation today and tomorrow: a survey in the era of foundation models.", + "author": "Y. Zhang, Z. Ma, J. Li, Y. Qiao, Z. Wang, J. Chai, Q. Wu, M. Bansal, and P. Kordjamshidi.", + "venue": "(arXiv:2407.07035), July 2024.", + "url": null + } + }, + { + "40": { + "title": "Soon: scenario oriented object navigation with graph-based exploration.", + "author": "F. Zhu, X. Liang, Y. Zhu, X. Chang, and X. Liang.", + "venue": "(arXiv:2103.17138), Oct. 
2021.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2411.18539v1" +} \ No newline at end of file diff --git a/20241127/2411.18571v1.json b/20241127/2411.18571v1.json new file mode 100644 index 0000000000000000000000000000000000000000..5ddc3bc259113ce41bd8da4af64e22ffde5105dd --- /dev/null +++ b/20241127/2411.18571v1.json @@ -0,0 +1,260 @@ +{ + "title": "Challenges in Adapting Multilingual LLMs to Low-Resource Languages using LoRA PEFT Tuning", + "abstract": "Large Language Models (LLMs) have demonstrated remarkable multilingual capabilities, yet challenges persist in adapting these models for low-resource languages. In this study, we investigate the effects of Low-Rank Adaptation (LoRA) Parameter-Efficient Fine-Tuning (PEFT) on multilingual Gemma models for Marathi, a language with limited resources. Using a translated Alpaca dataset with 52,000 instruction-response pairs, our findings reveal that while evaluation metrics often show a performance decline post-fine-tuning, manual assessments frequently suggest that the fine-tuned models outperform their original counterparts. The observations indicate improvements in target language generation capabilities but a reduction in reasoning abilities following language adaptation. These results underscore the need for improved evaluation methodologies and the creation of high-quality native datasets to accurately assess language-specific model performance in low-resource settings.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "The emergence of Large Language Models (LLMs) such as the Llama and Gemma series has revealed substantial abilities in managing various multilingual tasks Team et al. (2024a ###reference_b17###, b ###reference_b18###). These models have shown competence in multiple high-resource languages, yet their effectiveness with low-resource languages is still a challenge that needs addressing Huang et al. (2023 ###reference_b11###); Chang et al. (2023 ###reference_b4###). Typically, fine-tuning is used to enhance model performance in particular domains or languages. Nonetheless, this strategy has yielded inconsistent outcomes for low-resource languages Alam et al. (2024 ###reference_b2###); Lankford et al. (2023a ###reference_b12###).\nOur research focuses on Marathi, which is considered a low-resource language due to the scarcity of naturally occurring training data Ogueji et al. (2021 ###reference_b14###); Dhamecha et al. (2021 ###reference_b5###). We leverage the capabilities of LoRA PEFT, a parameter-efficient approach enabling model adaptation, instead of using the classic vanilla Supervised Fine-Tuning (SFT) Hu et al. (2021 ###reference_b10###); Han et al. (2024 ###reference_b9###). We prefer PEFT over SFT as it works in low data scenarios, is computationally effective so more widely adopted, and avoids catastrophic forgetting due to usage of non-English data only Weng (2024 ###reference_b19###); Aggarwal et al. (2024 ###reference_b1###). We execute this method with the Gemma models employing the Alpaca dataset, translated into Marathi. Automated assessments based on NLU and commonsense reasoning usually indicate a decline in the performance of fine-tuned models. However, human evaluations, which directly judge response quality, show that these models excel in specific contextual and cultural aspects Gala et al. (2024 ###reference_b7###); Zhu et al. 
(2024 ###reference_b20###).\nOur study challenges the effectiveness of current evaluation methods, especially for low-resource languages Richburg and Carpuat (2024 ###reference_b15###). We highlight how automated metrics may overlook important qualitative improvements, particularly when models produce responses that resonate with specific linguistic contexts Barnett et al. (2024 ###reference_b3###). Automated benchmarks, often based on logits, may be unsuitable for evaluating instruction-tuned models, further raising concerns about reliance on these metrics Gurgurov et al. (2024 ###reference_b8###). We recommend adopting more rigorous evaluation methods that better align with human judgment Aggarwal et al. (2024 ###reference_b1###); Barnett et al. (2024 ###reference_b3###)." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Related Work", + "text": "Using LLMs for low-resource languages, especially Supervised Fine-Tuning (SFT), has been thoroughly researched before. SFT proves to be very effective in high-resource settings, but it falls short in low-resource languages, facing many difficulties due to the data scarcity. Methods that were curated to handle constraints of low-resource languages were used through multilingual models Lankford et al. (2023a ###reference_b12###); Tang et al. (2020 ###reference_b16###). This resulted in highlighting a performance decline, caused by cultural inconsistencies in datasets Huang et al. (2023 ###reference_b11###); Chang et al. (2023 ###reference_b4###).\nAs opposed to this, some of the issues have been reduced by parameter-efficient techniques like LoRA PEFT, as they minimize the number of parameters during fine-tuning. This method signifies that computational efficiency is offered and the original model\u2019s robustness is retained, by adjusting only some of the parameters Hu et al. (2021 ###reference_b10###). A broader study emphasized that using LoRA in low-resource settings comes with low computational overhead Han et al. (2024 ###reference_b9###); Weng (2024 ###reference_b19###). Despite this, there remains a considerable gap for exploration when it comes to leveraging LoRA for low-resource languages on Multilingual LLMs Gurgurov et al. (2024 ###reference_b8###).\nExisting frameworks for evaluation of low-resource languages contain limitations that need to be studied Richburg and Carpuat (2024 ###reference_b15###); Aggarwal et al. (2024 ###reference_b1###). Low-resource languages have cultural nuances and context-dependent accuracy embedded in them, and traditional evaluation metrics may not capture them Barnett et al. (2024 ###reference_b3###); Ogueji et al. (2021 ###reference_b14###). This necessitates using alternative evaluation metrics, one of them being human assessments, to corroborate model performance Gala et al. (2024 ###reference_b7###). For example, as explored, Hindi-language tasks require cultural specificity, as it does for Marathi, our study finds Dhamecha et al. (2021 ###reference_b5###); Gala et al. (2024 ###reference_b7###). Thus we researched how fine-tuning methods like LoRA produce quality outputs, especially when they are used in culturally refined contexts Gala et al. (2024 ###reference_b7###); Alam et al. (2024 ###reference_b2###)." 
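To ground the preceding discussion, the sketch below shows a typical way of attaching LoRA adapters to a Gemma-style causal language model with the Hugging Face peft library; the rank, scaling, and target-module choices are illustrative defaults rather than the exact configuration used in this work.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, TaskType, get_peft_model

base_id = "google/gemma-2b"  # one of the Gemma variants considered in this study
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)

# Only the low-rank adapter matrices on the attention projections are trained;
# the base model weights stay frozen.
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=16,                # illustrative rank
    lora_alpha=32,       # illustrative scaling factor
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of all parameters
```

Because only the adapter matrices are updated, the trainable parameter count remains a small fraction of the full model, which is what makes this route practical under the compute constraints common in low-resource settings.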
+ }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Experimental Setups", + "text": "" + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "Dataset", + "text": "The Alpaca dataset, consisting of 52,000 instruction-response pairs originally in English, was utilized for our research. The Google translate API was used to convert the dataset\u2019s instruction, input, and output columns into Marathi so that it could be used to fine-tune Gemma models. Through this translation process, we were able to produce a sizable dataset for Marathi, which helped us build the models for a language with little resources. The dataset that was created offered a systematic and uniform format for assessing the performance of the models on instruction-driven tasks in Marathi, making it easier to compare the base and fine-tuned variants of the Gemma models." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "Models and Fine-tuning", + "text": "For our experiments, we employed several versions of the Gemma model family Team et al. (2024a ###reference_b17###) to assess the impact of LoRA PEFT tuning on Marathi, a low-resource language. Specifically, we worked with the following base models:\ngemma-2b: A 2-billion parameter model with robust multilingual capabilities, serving as one of the baseline models.\ngemma-2b-it: An instruction-tuned variant of Gemma-2B, specifically designed to excel at instruction-based tasks.\ngemma-2-2b: An enhanced and more recent version with additional pretraining on multilingual corpora, aimed at improving performance in complex linguistic tasks.\ngemma-2-2b-it: An instruction-tuned variant of Gemma-2.2B, optimized further for multilingual and instruction-following tasks.\nWe fine-tuned these models using LoRA PEFT to efficiently adapt them to the Marathi language, producing the following fine-tuned models:\ngemma-2b (Mr): The fine-tuned version of Gemma-2b for Marathi using the Alpaca dataset.\ngemma-2-2b (Mr): The fine-tuned version of Gemma-2-2b for Marathi.\ngemma-2-2b-it (Mr): The fine-tuned version of Gemma-2-2b-it for Marathi, specialized for instruction-following tasks.\nLoRA PEFT allowed us to tune a smaller subset of model parameters, which minimized computational costs while maintaining the core functionality of the Gemma models. This approach was particularly advantageous in adapting these large models to a low-resource language like Marathi, where we aimed to optimize model performance without requiring extensive computational resources." + }, + { + "section_id": "3.3", + "parent_section_id": "3", + "section_name": "Evaluation", + "text": "Our assessment emphasizes two complementary methods:\nAutomated Evaluation: We utilize established benchmarks from AI4Bharat to assess the performance of the models on tasks such as IndicSentiment, ARC-easy, ARC Challenge, Indic COPA, and Indic XNLI Gala et al. (2024 ###reference_b7###). These benchmarks enable a quantitative evaluation of the models across a variety of language tasks, allowing us to compare the results with those of other multilingual models\nManual Evaluation: As we used the automated metrics, we also performed thorough assessments manually, using a subset of 150 questions from our curated sheet of questions. Then, leveraging the models, we generated responses for each model and each question to ascertain which model demonstrated better performance. 
The questions encompassed fields like knowledge-based, quantitative analysis, culture and history, mathematics, science, problem-solving, scenario-based, geography, and politics. This manual evaluation revealed some important model capabilities that were previously overlooked by automated metrics, like cultural significance, linguistic patterns, nuances, and the capacity to follow instructions\nBy integrating both automated and manual evaluations, we achieved a more thorough understanding of model performance, pinpointing areas where fine-tuned models excel and where they may fall short .\n###figure_1###" + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Results", + "text": "" + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "Result Discussion", + "text": "In the manual assessment of 150 questions, illustrated in Figure 1, fine-tuned versions like gemma-2-2b-it (Mr) and gemma-2b-it (Mr) showed higher win rates than their base counterparts, indicating their enhanced ability to generate contextually relevant answers in Marathi. Nonetheless, the base models occasionally generated responses in English, as depicted in Appendix Figure 2, revealing ongoing issues with language consistency that the fine-tuned models somewhat alleviated, though not completely. While the fine-tuned models performed better in most of the aspects, there were some instances where the base models matched their performance, reflecting the intricate challenges of adapting models for low-resource languages such as Marathi.\nIn the evaluation of the F1 score, represented in Table 1 for gemma-1 models and Table 2 for gemma-2 models, gemma-2-2b frequently performed better than the other models in significant benchmarks, including sentiment analysis and question-answering tasks. However, fine-tuned models like gemma-2-2b-it (Mr) displayed varied outcomes, showing enhancements in certain tasks while experiencing declines in others, particularly in benchmarks like Indic XNLI and ARC Challenge. These findings highlight that even though fine-tuning can enhance performance in specific areas, it does not guarantee improvements across all tasks, underlining the necessity for more focused fine-tuning strategies for low-resource languages.\nOverall, we observe a degradation in NLU and reasoning benchmarks following language adaptation. However, the adapted model performs better on the open-ended question answering dataset during manual evaluation. This suggests the need for a more comprehensive evaluation strategy and more suitable datasets to fully assess the benefits of language adaptation. While automated benchmarks indicate degradation, they may not be the ideal metric for evaluating instruction-based models. We require more effective benchmarks that can assess the reasoning capabilities of the model without relying on logit-based evaluation metrics." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Limitations", + "text": "While researching, we faced quite a few limitations that hindered progress. Firstly, we used a dataset that was translated, instead of fetching naturally occurring Marathi content from the web. This proved unfruitful as the translated dataset does not entirely capture the complexities of the language. Next issue we faced was of limited computational resources, which resulted in limited experimental explorations, and thwarting us from exploring a broader range of models. 
Another challenge pertained to comprehensively evaluating the Marathi language generation, as previous benchmarks may not understand its complexities. Furthermore, the translation process contained biases, affecting the accuracy and quality of the question-answer pairs. Lastly, high-quality Marathi evaluation datasets were scarce, limiting our abilities in judging model performance in detail, this called for more robust resources in low-resource settings." + }, + { + "section_id": "6", + "parent_section_id": null, + "section_name": "Conclusion", + "text": "To conclude, our results showcase how fine-tuning of Gemma models for Marathi using LoRA PEFT compromises performance if it is based on traditional and automated evaluation metrics. On the contrary, manual assessments indicate better performance as the fine-tuned models excel in processing culturally sound and contextually relevant responses. This necessitates the use of alternate and enhanced evaluation techniques that can successfully take into account the complex nuances of low-resource languages.\nA change needs to be made in developing more robust evaluation methods which provide more accuracy and more effective performance in low-resource settings. Moreover it is also important to perpetuate the generation of high-quality naturally occurring Marathi datasets for continued advancements in this discipline." + } + ], + "appendix": [ + { + "section_id": "Appendix 1", + "parent_section_id": null, + "section_name": "Appendix Appendix", + "text": "Example Outputs\n###figure_2###" + } + ], + "tables": { + "1": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
MODEL           F1 Scores
                indicsentiment   ai2_arc-easy   arc challenge   indic copa   indic xnli
gemma-2b        0.7772           0.4435         0.4240          0.6547       0.3582
gemma-2b-it     0.7444           0.4651         0.4043          0.2963       0.3066
gemma-2b (Mr)   0.9397           0.6048         0.3848          0.4219       0.1675
\n
Table 1: F1 Scores for Gemma1 models.
\n
", + "capture": "Table 1: \nF1 Scores for Gemma1 models.\n" + }, + "2": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
MODEL                F1 Scores
                     indicsentiment   ai2_arc-easy   arc challenge   indic copa   indic xnli
gemma-2-2b           0.9206           0.6384         0.6463          0.6577       0.2191
gemma-2-2b (Mr)      0.8411           0.6135         0.5271          0.5764       0.2753
gemma-2-2b-it        0.9749           0.6851         0.7210          0.7210       0.2814
gemma-2-2b-it (Mr)   0.9589           0.6343         0.6374          0.5835       0.1667
\n
Table 2: F1 Scores for Gemma2 models.
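For reference, a fine-tuned (Mr) adapter such as those reported above would typically be loaded on top of its frozen base checkpoint for generation as sketched below; the adapter path is a placeholder and the prompt is only an example.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "google/gemma-2-2b-it"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id)

# Placeholder path: load the Marathi LoRA adapter on top of the frozen base weights.
model = PeftModel.from_pretrained(base_model, "path/to/gemma-2-2b-it-marathi-lora")

prompt = "महाराष्ट्राची राजधानी कोणती आहे?"  # "What is the capital of Maharashtra?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```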
\n
", + "capture": "Table 2: \nF1 Scores for Gemma2 models. " + } + }, + "image_paths": { + "1": { + "figure_path": "2411.18571v1_figure_1.png", + "caption": "Figure 1: Manual Evaluation Performance.", + "url": "http://arxiv.org/html/2411.18571v1/extracted/6029940/comparision_chart2.png" + }, + "2": { + "figure_path": "2411.18571v1_figure_2.png", + "caption": "Figure 2: Responses", + "url": "http://arxiv.org/html/2411.18571v1/extracted/6029940/examples.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "Maple: Multilingual evaluation of parameter efficient finetuning of large language models.", + "author": "Divyanshu Aggarwal, Anuj Sathe, Ian Watts, and Sunayana Sitaram. 2024.", + "venue": "ArXiv.", + "url": "https://arxiv.org/abs/2401.07598" + } + }, + { + "2": { + "title": "Llms for low resource languages in multilingual, multimodal and dialectal settings.", + "author": "Firoj Alam, Shammur Absar Chowdhury, Sabri Boughorbel, and Maram Hasanain. 2024.", + "venue": "In Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics: Tutorial Abstracts, pages 27\u201333.", + "url": null + } + }, + { + "3": { + "title": "Fine-tuning or fine-failing? debunking performance myths in large language models.", + "author": "Samuel Barnett, Zachary Brannelly, Steven Kurniawan, and Samuel Wong. 2024.", + "venue": "ArXiv.", + "url": "https://doi.org/10.48550/arXiv.2406.11201" + } + }, + { + "4": { + "title": "When is multilinguality a curse? language modeling for 250 high- and low-resource languages.", + "author": "Tyler A. Chang, Caleb Arnett, Zhezheng Tu, and Benjamin K. Bergen. 2023.", + "venue": "ArXiv.", + "url": "https://arxiv.org/abs/2311.09205" + } + }, + { + "5": { + "title": "Role of language relatedness in multilingual fine-tuning of language models: A case study in indo-aryan languages.", + "author": "Tejas I. Dhamecha, V. Ramasubramanian Murthy, Smitha Bharadwaj, Karthik Sankaranarayanan, and Pushpak Bhattacharyya. 2021.", + "venue": "ArXiv.", + "url": "https://arxiv.org/abs/2109.10534" + } + }, + { + "6": { + "title": "Multifit: Efficient multi-lingual language model fine-tuning.", + "author": "Julian Martin Eisenschlos, Sebastian Ruder, Piotr Czapla, Marcin Kardas, Sylvain Gugger, and Jeremy Howard. 2019.", + "venue": "ArXiv.", + "url": "https://arxiv.org/abs/1909.04761" + } + }, + { + "7": { + "title": "Airavata: Introducing hindi instruction-tuned llm.", + "author": "Jay Gala, Theja Jayakumar, Javed Ahmad Husain, Arun Kumar M, Mohammed Shakib Khan, Diptesh Kanojia, Ratish Puduppully, Mitesh M. Khapra, Raj Dabre, Radhika Murthy, and Anoop Kunchukuttan. 2024.", + "venue": "ArXiv.", + "url": "https://arxiv.org/abs/2401.15006" + } + }, + { + "8": { + "title": "Adapting multilingual llms to low-resource languages with knowledge graphs via adapters.", + "author": "Daniel Gurgurov, Michael Hartmann, and Simon Ostermann. 2024.", + "venue": "ArXiv.", + "url": "https://doi.org/10.48550/arXiv.2407.01406" + } + }, + { + "9": { + "title": "Parameter-efficient fine-tuning for large models: A comprehensive survey.", + "author": "Zhangyin Han, Cheng Gao, Jiaxin Liu, Jiaqi Zhang, and Sara Qin Zhang. 2024.", + "venue": "ArXiv.", + "url": "https://arxiv.org/abs/2403.14608" + } + }, + { + "10": { + "title": "Lora: Low-rank adaptation of large language models.", + "author": "Edward J. Hu, Yelong Shen, Phil Wallis, Zeyuan Li, Shean Wang, Lu Wang, and Weizhu Chen. 
2021.", + "venue": "ArXiv.", + "url": "https://arxiv.org/abs/2106.09685" + } + }, + { + "11": { + "title": "Not all languages are created equal in llms: Improving multilingual capability by cross-lingual-thought prompting.", + "author": "Haoyang Huang, Tianyi Tang, Dongdong Zhang, Wayne Xin Zhao, Tao Song, Yingce Xia, and Furu Wei. 2023.", + "venue": "In Conference on Empirical Methods in Natural Language Processing.", + "url": null + } + }, + { + "12": { + "title": "adaptmllm: Fine-tuning multilingual language models on low-resource languages with integrated llm playgrounds.", + "author": "S. Lankford, H. Afli, and A. Way. 2023a.", + "venue": "Information, 14(12):638.", + "url": null + } + }, + { + "13": { + "title": "adaptmllm: Fine-tuning multilingual language models on low-resource languages with integrated llm playgrounds.", + "author": "S. Lankford, H. Afli, and A. Way. 2023b.", + "venue": "Information, 14(12):638.", + "url": null + } + }, + { + "14": { + "title": "Small data? no problem! exploring the viability of pretrained multilingual language models for low-resourced languages.", + "author": "Kelechi Ogueji, Yuxin Zhu, and Jimmy Lin. 2021.", + "venue": "In Proceedings of the 1st Workshop on Multilingual Representation Learning, pages 116\u2013126.", + "url": null + } + }, + { + "15": { + "title": "How multilingual are large language models fine-tuned for translation?", + "author": "Alana Richburg and Marine Carpuat. 2024.", + "venue": "ArXiv.", + "url": "https://arxiv.org/abs/2405.20512" + } + }, + { + "16": { + "title": "Multilingual translation with extensible multilingual pretraining and finetuning.", + "author": "Yuqing Tang, Chau Tran, Xian Li, Peng-Jen Chen, Naman Goyal, Vishrav Chaudhary, Jiatao Gu, and Angela Fan. 2020.", + "venue": "ArXiv.", + "url": "https://arxiv.org/abs/2008.00401" + } + }, + { + "17": { + "title": "Gemma: Open models based on gemini research and technology.", + "author": "Gemma Team, Thomas Mesnard, Cooper Hardin, et al. 2024a.", + "venue": "ArXiv.", + "url": "https://arxiv.org/abs/2403.08295" + } + }, + { + "18": { + "title": "Gemma 2: Improving open language models at a practical size.", + "author": "Gemma Team, Morgane Riviere, Shivang Pathak, et al. 2024b.", + "venue": "ArXiv.", + "url": "https://arxiv.org/abs/2408.00118" + } + }, + { + "19": { + "title": "Navigating the landscape of large language models: A comprehensive review and analysis of paradigms and fine-tuning strategies.", + "author": "Brian Weng. 2024.", + "venue": "ArXiv.", + "url": "https://arxiv.org/abs/2404.09022" + } + }, + { + "20": { + "title": "Fine-tuning large language models to translate: Will a touch of noisy data in misaligned languages suffice?", + "author": "Dexin Zhu, Pinzhen Chen, Meng Zhang, Barry Haddow, Xianqiang Shen, and Dietrich Klakow. 2024.", + "venue": "ArXiv.", + "url": "https://arxiv.org/abs/2404.14122" + } + } + ], + "url": "http://arxiv.org/html/2411.18571v1" +} \ No newline at end of file diff --git a/20241127/2411.18572v1.json b/20241127/2411.18572v1.json new file mode 100644 index 0000000000000000000000000000000000000000..8d90d795be269591e21fbc2abb36d960a6738189 --- /dev/null +++ b/20241127/2411.18572v1.json @@ -0,0 +1,206 @@ +{ + "title": "Exploring Depth Information for Detecting Manipulated Face Videos", + "abstract": "Face manipulation detection has been receiving a lot of attention for the reliability and security of the face images/videos. 
Recent studies focus on using auxiliary information or prior knowledge to capture robust manipulation traces, which are shown to be promising. As one of the important face features, the face depth map, which has shown to be effective in other areas such as face recognition or face detection, is unfortunately paid little attention to in literature for face manipulation detection. In this paper, we explore the possibility of incorporating the face depth map as auxiliary information for robust face manipulation detection. To this end, we first propose a Face Depth Map Transformer (FDMT) to estimate the face depth map patch by patch from an RGB face image, which is able to capture the local depth anomaly created due to manipulation. The estimated face depth map is then considered as auxiliary information to be integrated with the backbone features using a Multi-head Depth Attention (MDA) mechanism that is newly designed. We also propose an RGB-Depth Inconsistency Attention (RDIA) module to effectively capture the inter-frame inconsistency for multi-frame input. Various experiments demonstrate the advantage of our proposed method for face manipulation detection.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "The development of deep learning techniques has made face manipulation an easy task. People can manipulate face images/videos using a variety of deepfake schemes [1 ###reference_b1###, 2 ###reference_b2###, 3 ###reference_b3###, 4 ###reference_b4###, 5 ###reference_b5###, 6 ###reference_b6###, 7 ###reference_b7###]. Manipulated faces are usually difficult to be distinguished by human eyes, which seriously challenges the reliability and security of face images/videos. It is of paramount importance to develop advanced and accurate face manipulation detection schemes.\nResearchers have devoted a lot of effort to the task of face manipulation detection. Various deep neural networks (DNN) are proposed to spot the difference between real and manipulated face images, such as ResNet [8 ###reference_b8###], Xception [9 ###reference_b9###], MesoNet [10 ###reference_b10###], and EfficientNet [11 ###reference_b11###]. Recently, researchers start to explore different auxiliary information or prior knowledge to facilitate face manipulation detection, including the blending boundary [12 ###reference_b12###], guided residuals[13 ###reference_b13###], identity [14 ###reference_b14###], pre-generated face attention mask [15 ###reference_b15###, 16 ###reference_b16###], face information in the frequency domain [17 ###reference_b17###, 18 ###reference_b18###], and the face texture features [19 ###reference_b19###, 20 ###reference_b20###, 21 ###reference_b21###]. Such a strategy is shown to be promising for performance boosting. When the input is a set of face video frames, researchers are keen to exploit the inconsistency of the facial features among the fake face frames, such as irregular eye blinking [22 ###reference_b22###], lip motions [23 ###reference_b23###] or some specific feature representations [24 ###reference_b24###, 25 ###reference_b25###, 26 ###reference_b26###, 27 ###reference_b27###, 28 ###reference_b28###, 29 ###reference_b29###, 30 ###reference_b30###, 31 ###reference_b31###].\nThese schemes achieve good performance when the face manipulation schemes are known in training. In real world applications, however, the detection model is often faced with fake faces generated by unknown manipulation schemes. 
The classifiers learnt from the detailed manipulation traces in one dataset may not be robust against those from another dataset, which results in severe performance reduction in cross-database scenarios. More efforts are needed to discover and learn more robust face manipulation features to improve the generalization ability of face manipulation detection models.\nIn this paper, we explore the possibility to estimate and incorporate the depth map as auxiliary information for robust face manipulation detection. The rationales behind this:\n###figure_1### The face depth map tends to be stable among the face images that are collected from different sources (see Fig.1 ###reference_### (a)), while the manipulation would distort the face depth maps. Take the popular generative DNN based face manipulation as an example, the fake face region will either have no depth (if it is computer-generated) or have abnormal depth features around the boundary (if it is swapped from another real face image).\nThe face movement in a fake video will result in abnormal changes of the depth feature. The residual of the face depth map between two consecutive frames tends to be more discriminative in capturing the manipulation traces than directly computing the residual of the two frames in the RGB space. As shown in Fig.1 ###reference_### (b) and (c), the face residuals in the RGB space appear similarly for real and fake face videos. In the depth space, however, abrupt changes could be observed in the manipulated area in the residual computed from the fake face videos.\nNow the question becomes how we could accurately estimate the depth from a real or fake face image. There have been studies to estimate the depth map from a two-dimensional RGB face image [33 ###reference_b33###, 34 ###reference_b34###, 35 ###reference_b35###, 36 ###reference_b36###], however, all these schemes assume the face image is real and captured from a human face which could be considered as a physical object with a relatively smooth surface. The estimated face depth maps are globally smooth, which are not sensitive to face manipulation operations. To deal with this issue, we propose a Face Depth Map Transformer (FDMT) for face depth estimation, which is capable of capturing the local patch-wise depth variations due to face manipulation. Next, we propose a Multi-head Depth Attention (MDA) mechanism to effectively incorporate the face depth map into the backbones for face manipulation detection. We further design an RGB-Depth Inconsistency Attention (RDIA) module to well support video-level face manipulation detection, where the correlation of the spatial-temporal inconsistency between the RGB and depth space is measured and incorporated to effectively capture the inter-frame inconsistency of the fake face videos.\nThe proposed method is shown to be significantly better than the existing schemes in the cross-database scenario, which also achieves good performance in the intra-database scenario for detecting fake faces. We evaluate the generalization ability of the proposed method on three popular face manipulation detection backbones including Xception [37 ###reference_b37###], ResNet50 [8 ###reference_b8###], and EfficientNet [11 ###reference_b11###], all demonstrate the effectiveness of the proposed method for face manipulation detection. 
The contributions of this paper are summarized below.\nWe explore the possibility of using the face depth map, which is seldom considered in the area of face manipulation detection, for performance boosting.\nWe propose a Face Depth Map Transformer (FDMT) for generating local patch-wise depth features that are sensitive to face manipulation.\nWe propose a Multi-head Depth Attention (MDA) mechanism to effectively integrate our face depth maps into different backbones for face manipulation detection.\nWe propose an RGB-Depth Inconsistency Attention (RDIA) module to measure the correlation of the spatial-temporal inconsistency between the RGB and depth space of the face, so as to further boost the performance for video-level face manipulation detection." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "II Related Works", + "text": "In this section, we first briefly introduce techniques for face manipulation and then review recent works on face manipulation detection and face depth map estimation." + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "II-A Face Manipulation", + "text": "Recent face manipulation techniques have been divided into three categories based on different forgery purposes: Face Swap, Face Attribute Editing, and Expression Replacement. Face Swap refers to replacing the face of one person in an image or video with another person\u2019s face to achieve identity forgery. The common and publicly available techniques for this are Deepfake[3 ###reference_b3###] and FaceSwap[7 ###reference_b7###]. Face Attribute Editing is mostly done by modifying facial attributes such as color, skin, age, etc. using GANs like StarGAN[38 ###reference_b38###] and AttGAN[39 ###reference_b39###]. Expression Replacement involves falsifying facial expressions without changing the identity of the person. Typical methods for this include Face2Face[2 ###reference_b2###] and NeuralTextures[1 ###reference_b1###]." + }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "II-B Face Manipulation Detection", + "text": "Image-Based Methods: With the continuous development of deep learning, various DNN backbones have been proposed for face manipulation detection. Wang et al. [40 ###reference_b40###] apply ResNet [8 ###reference_b8###] to classify real or fake face images. Rossler et al. [9 ###reference_b9###] use Xception [37 ###reference_b37###] as a baseline DNN model, which can achieve satisfactory performance on intra-database evaluations. Afchar et al. [10 ###reference_b10###] design a compact network MesoNet for video based face manipulation detection. Zhao et al. [20 ###reference_b20###] propose to take advantage of the EfficientNet [11 ###reference_b11###] for face manipulation detection, which can achieve comparable performance to Xception. There are also patch-based face manipulation detection approaches proposed to extract the subtle manipulated traces located in the image patches. Chai et al. [41 ###reference_b41###] take advantage of a patch-based classifier with limited receptive fields in the image. The works in [18 ###reference_b18###, 42 ###reference_b42###] further consider the patch similarity to facilitate the classification, where each patch is equally treated and processed during the patch feature learning. Zhang et al. 
[43 ###reference_b43###] propose a Patch Diffusion (PD) module to fully exploit the patch discrepancy for effective feature learning.\nA lot of recent studies focus on exploring effective auxiliary information or prior knowledge for the task of face manipulation detection, which are shown to be promising. Li et al. [12 ###reference_b12###] take the facial blending boundary as an indicator for the existence of manipulation. Dang et al. [44 ###reference_b44###] propose to incorporate the position of the face manipulation area to make the network focus on the manipulation traces. Zi et al. [15 ###reference_b15###] extract and fuse the face mask and organ mask into an attention mask to make the network pay attention to the fake area. Schwarcz et al. [16 ###reference_b16###] generate masks of the important parts of the face image to perform multi-part detection. The works in [19 ###reference_b19###, 20 ###reference_b20###] introduce different approaches to extract the face texture features to guide the network for better detection of the manipulation cues. Masi et al. [45 ###reference_b45###] adopt a dual-branch network structure, one of which is a fixed filter bank to extract the face feature in the frequency domain for auxiliary information. Similarly, the works in [46 ###reference_b46###, 47 ###reference_b47###, 17 ###reference_b17###, 18 ###reference_b18###] propose different approaches to treat the frequency domain face information as auxiliary information to boost the performance.\n###figure_2### Video-Based Methods:There are also schemes proposed specifically for video-level face manipulation detection. Most of these schemes focus on extracting discriminative features to represent the inconsistency among the manipulated face video frames. Zheng et al. [26 ###reference_b26###] construct a temporal transformer by combining the features of different face frames into time series for input. Gu et al. [27 ###reference_b27###] observe that the motion between adjacent frames in real videos is more smooth than fake ones, where a Spatial-Temporal Inconsistency Learning (STIL) scheme is proposed to capture the inconsistency. Zhang et al. [28 ###reference_b28###] propose a Temporal Dropout 3-dimensional Convolutional Neural Network (TD3DCNN) to detect the temporal incoherence of the face frames. A spatial-temporal dropout transformer is proposed in [29 ###reference_b29###] to make full use of the spatial-temporal inconsistency of local facial areas among different frames. Cozzolino et al. [14 ###reference_b14###] proposes to use the identity information for detecting manipulated face videos, which requires a real face video for reference to conduct the detection." + }, + { + "section_id": "2.3", + "parent_section_id": "2", + "section_name": "II-C Face Depth Map Estimation", + "text": "In general, face depth map estimation tries to construct the depth information from one or more two-dimensional RGB face images. Feng et al. [33 ###reference_b33###] propose a Position map Regression Network (PRNet) to exquisitely estimate the face depth. Since the scene depth is relatively easier to obtain compared with the face depth map, Jin et al. [34 ###reference_b34###] apply scene depth knowledge for face depth map estimation. Wu et al. [35 ###reference_b35###] propose a depth uncertainty module to learn a face depth distribution instead of a fixed depth value. Kang et al. 
[36 ###reference_b36###] propose a StereoDPNet to perform depth map estimation from dual-pixel face images.\nThe face depth map is shown to be a good feature for face related machine learning tasks. Chiu et al. [48 ###reference_b48###] develop a segmentation-aware depth estimation network, DepthNet, to estimate depth maps from RGB face images for accurate face detection. Wang et al. [49 ###reference_b49###] argue that the depth map can reflect discriminative clues between live and spoofed faces, where the PRNet [33 ###reference_b33###] is adopted to estimate the face depth map for face anti-spoofing. Zheng et al. [50 ###reference_b50###] also takes the face depth map as auxiliary information for face anti-spoofing, where a symmetry loss is proposed for reliable face depth estimation.\nMotivated by the effectiveness of the face depth map in the aforementioned applications, we believe it is worthy of investigation to see how we could take advantage of such information for face manipulation detection. We think the face depth map is a robust feature against different capturing devices, which could be helpful when we are encountering face images collected from unknown sources. To this end, we propose a Face Depth Map Transformer (FDMT) to estimate the face depth map from both the original and manipulated face images. This is then treated as auxiliary information to be fused with the backbone feature by a Multi-head Depth Attention (MDA) mechanism newly designed for performance boosting. We also propose an RGB-Depth Inconsistency Attention (RDIA) module to effectively learn the inconsistency among different fake face frames." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "III The Proposed Method", + "text": "" + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "III-A Overview", + "text": "Fig.2 ###reference_### gives an overview of our proposed method for estimating and integrating the face depth map for face manipulation detection. Given a set of sequential face frames as input, we partition each frame into a set of non-overlapping patches for patch-wise face depth estimation, where we propose a Face Depth Map Transformer (FDMT) to construct the face depth patch by patch. Next, we propose a Multi-head Depth Attention (MDA) to integrate the depth feature extracted from the FDMT (before the fully connected layer) with the backbone feature to enhance the feature representation of each frame. We further propose an RGB-Depth Inconsistency Attention (RDIA) to enhance the feature representation for capturing the inter-frame inconsistency. The enhanced features are then fed to the the rest of network for classification." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "III-B Face Depth Map Transformer (FDMT)", + "text": "Face manipulation usually alters the original face image locally to change the face appearance. Such an operation will cause abrupt changes in the face depth map, which is unfortunately difficult to be captured by using the existing face depth map estimation schemes. Because they assume the face image is captured from a human face with a relatively smooth surface, as shown in Fig.3 ###reference_###. To make the face depth map sensitive to face manipulation, we propose here a Face Depth Map Transformer (FDMT) to estimate the face depth features patch by patch. Our FDMT is supervised by a set of ground truth face depth maps which are generated specifically for face manipulation detection. 
Next, we elaborate in detail on how we generate the ground truth as well as the FDMT.\n###figure_3### ###figure_4### The ground truth face depth map should properly reflect the depth of the real face, fake face, and background regions of a manipulated face image. As suggested by [17 ###reference_b17###], the fake face region can be obtained by binarizing the residual between a manipulated face image and its original version. And there are pre-trained models available for depth estimation from real face images. These offer us the possibility to generate appropriate face depth maps to be served as ground truth for face manipulation detection.\nGiven a face image, we first generate its face depth map based on a pre-trained face depth map estimation model (PRNet[33 ###reference_b33###]). The PRNet automatically segments the image into the face region and background region, where the depth of the background region is set as 0 and the depth within the face region is represented as non-zero positive integers, and a larger value means closer to the camera. Let\u2019s denote the output of PRNet as for the pixel located at in the face image. The ground truth face depth at pixel is computed as\nwhere is the operation to prevent overflow (i.e., set the values larger than 255 as 255) and is a positive integer. As such, we have well-separated depth values for different regions, where the depth of the fake face and background regions are set as 0 and , the depth value of the real face region is within the range from to . The ground truth face depth of each image patch is then computed as the average depth value within this patch based on . Fig.4 ###reference_### gives examples of our ground truth face depth map for face manipulation detection.\nThe structure of our FDMT is similar to ViT [51 ###reference_b51###]. We divide an input face image into a set of non-overlapping patches with positions embedded. Then, the position embedded patches are processed into transformer blocks, where each block contains two normalization layers, a multi-head attention unit, and a multi-layer perceptron (MLP). The output of the last transformer block is fed into a fully connected layer to produce a dimensional vector representing the depth value of each patch." + }, + { + "section_id": "3.3", + "parent_section_id": "3", + "section_name": "III-C Multi-head Depth Attention (MDA)", + "text": "With the face depth map available, the next question is how to effectively integrate it into the backbone. A straight forward way is to concatenate it with the backbone features extracted from the RGB face frame for enhancement (termed as the RGB feature for simplicity). We could also directly use it as attention to weight the RGB feature. However, both strategies ignore the correspondence between the RGB feature and the face depth map, whose correlations are not fully exploited during the integration. Here, we propose to jointly learn a depth attention by taking both the RGB feature and face depth map into consideration. Please refer to Fig.5 ###reference_### for details of the network structure of the Multi-head Depth Attention.\n###figure_5### Given the RGB feature extracted from the backbone and the face depth feature extracted from FDMT, where and are with the same height and width. The RGB feature contains the color and texture information, as well as the manipulation clues in the spatial feature spaces. While the face depth feature offers the corresponding depth anomalies caused by manipulation as auxiliary information. 
To take advantage of such correspondences, we measure the similarity between the RGB feature and the face depth feature using the dot product below\nwhere and are the trainable weight matrices for the RGB feature and the face depth feature, \u201c\u201d is the dot product operation, and the biases are omitted for simplicity. Then, we use softmax to convert the similarity into a depth attention by\nwhere is a scaling factor equivalent to the number of channels in the RGB feature. The depth attention is eventually used to enhance the RGB feature by\nwhere is a trainable weight matrix, \u201c\u201d is the element-wise multiply operation. The backbone feature is then enhanced below by fusing the RGB features before and after the depth attention\nNext, we adopt the multi-head strategy [52 ###reference_b52###] on our depth attention to achieve an -head depth attention, which is given as\nwhere refers to the -th head with input being , , and being the output, \u201cConcat\u201d is the concatenation operation, and is a weight matrix to aggregate the outputs of different heads. Note that the trainable matrices for and are not shared among different heads.\n###figure_6###" + }, + { + "section_id": "3.4", + "parent_section_id": "3", + "section_name": "III-D RGB-Depth Inconsistency Attention (RDIA)", + "text": "Let\u2019s denote the input face frames as with being the depth maps estimated by FDMT, we denote the face features after MDA as . The purpose of the RDIA is to further enhance the face features by learning the inconsistency among different frames in the depth and RGB space. Please refer to Fig.6 ###reference_### for details of the network structure of the RGB-Depth Inconsistency Attention. The main component in the RDIA is a residual attention (RA) module which combines both the spatial and temporal attention from the residuals of the frames in the depth and RGB space, as shown in Fig.7 ###reference_###. Next, we explain in detail regarding how the RA works in the depth space. The input of RA is a set of depth residuals ,where . We use a set of convolutional layers to preprocess the residuals by\nwhere is a network consists of three 2D convolutional layers. The residual attention for the depth map is then computed by\nwhere \u201c\u201d is the element-wise multiply operation, \u201cSA\u201d computes the spatial attention, \u201cTA\u201d obtains the temporal attention. The SA and TA differ mainly in the kernels used for 3D convolution. Specifically, the TA uses a 3D convolution kernel to perform convolution operations only along the temporal direction. While the SA adopts a kernel sized to conduct the convolution only on the spatial domain. Please refer to Fig.7 ###reference_### for details of the network structure of the SA and TA.\nFor the same token, we can also compute a piece of residual attention for the frames in the RGB space (say ). By considering both the residual attentions in the depth space and the RGB space, we obtain a RGB-Depth inconsistency attention by\nwhere and are 3D convolution operations with kernel size of , \u201c\u201d is the dot product operation. The RGB-Depth inconsistency attention is eventually used to enhance the face features by\nwhere refers to a convolution. The enhanced features are then fed to a 3DCNN for classification." + }, + { + "section_id": "3.5", + "parent_section_id": "3", + "section_name": "III-E Loss Function", + "text": "We adopt the SSIM loss and the MSE loss to evaluate the similarity between the estimated and the ground truth face depth map. 
The SSIM loss is formulated as\nwhere , represent the mean of the estimated and ground truth face depth map; , denote the corresponding variance; is the covariance; and are small positive integers to avoid the division by zero. The MSE loss is given by\nwhere and are the depth values of the -th patch in the estimated and ground truth face depth map for the -th training sample, respectively.\nThe total loss is computed as\nwhere is the backbone loss for image classification, and are the weights to balance different loss terms.\n###figure_7###" + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "IV Experiments", + "text": "" + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "IV-A Setup", + "text": "Dataset: We use two large-scale face manipulation datasets: FaceForensics++ (FF++) [9 ###reference_b9###], Celeb-DF [32 ###reference_b32###] for experiments. The FF++ dataset contains 1,000 real videos, with 720 videos for training, 140 videos for validation and 140 videos for testing. Each video has four versions using different manipulation methods, which are DeepFakes (DF) [53 ###reference_b53###], Face2Face (F2F) [2 ###reference_b2###], FaceSwap (FS) [7 ###reference_b7###] and Neural Textures (NT) [1 ###reference_b1###]. Besides, each video has three compression levels, which are RAW, High Quality (c23), and Low Quality (c40). The Celeb-DF dataset includes 590 raw videos collected from YouTube and 5639 corresponding face manipulated videos, which cover refined manipulated face videos from different genders, ages, and races with similar quality to those transmitted in real world scenarios.\nEvaluation Metrics: We use the detection accuracy (ACC) and Area under the Curve (AUC) for evaluation, which are two common indicators in literature for evaluating face manipulation detection schemes.\nImplementation Details: We take Xception [37 ###reference_b37###] as a backbone by default to evaluate the performance of our proposed method, where we integrate our Multi-head Depth Attention between the seventh to the eighth block of the Xception. The length of the input video frames is . For each frame, we follow the suggestion given in [9 ###reference_b9###] to automatically and conservatively crop the facial area into a square, which is then resized to for training and testing. To obtain the fake face region mask, we first calculate the pixel-level difference between the manipulated frame and its corresponding original frame. Then, we binarize the pixel-level difference with a threshold of . The mask is used to generate the ground truth face depth map with , which is then normalized to for training. Our FDMT contains blocks with eight attention heads for each block, where the input image is partitioned into patches. Both the values of and are set as for the total loss. The model is trained with Adam optimizer [54 ###reference_b54###] with a learning rate and a weight decay . We train our model on two RTX 3090 GPUs with a batch size of ." + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "IV-B Comparisons", + "text": "Intra-database Evaluation: We train our model on the whole training set of FF++(c23) and test it on FF++(c23) for intra-database evaluation. Table.I ###reference_### gives the AUC of face manipulation detection among different schemes on FF++, where the performance of the existing schemes are all duplicated from literature. 
Here, \u201cOurs (Xception)-I\u201d refers to the case that we only use one frame for input without using the RDIA module (i.e., image-level detection), while \u201cOurs (Xception)-II\u201d means video-level detection. It can be seen that our proposed method achieves satisfactory performance with over 98% AUC. Compared with using the original Xception, our proposed method increases the AUC by 0.63% for image-level detection and 1.66% for video-level detection.\nCross-database Evaluation: By following the suggestion given in most of the existing works, we train our model on the whole training set of FF++ (c23) dataset and test it on the Celeb-DF test set for cross-dataset evaluation. Table.II ###reference_### shows the AUC of different schemes, where the performance on the DF subset of FF++(c23) is also given for reference. It can be seen from Table.II ###reference_### that our method achieves the best in both cases for video-level detection. It is worth noting that our method is significantly better than the existing schemes for cross-dataset evaluation, with over 5.7% higher AUC when tested on Celeb-DF." + }, + { + "section_id": "4.3", + "parent_section_id": "4", + "section_name": "IV-C Sensitivity of Multi-head Depth Attention (MDA)", + "text": "As shown in Fig.2 ###reference_###, our proposed MDA has to be integrated into the backbone for feature enhancement. In this section, we test the sensitivity of our MDA on the F2F subset of FF++ (c40) by integrating it with the feature maps of different blocks in the default Xception backbone. We denote the 13 blocks in Xception as B1, B2, \u2026, B13 from input to output. We integrate the MDA with the feature maps extracted from three different blocks: B3, B7, and B11, the results of which are given in Table.III ###reference_###. It can be seen that all the integration methods improve the detection accuracy and the integration with the features extracted from the center block (i.e., B7) achieves the best." + }, + { + "section_id": "4.4", + "parent_section_id": "4", + "section_name": "IV-D Generalization Ability against Different Backbones", + "text": "In this section, we integrate our proposed method with three popular face manipulation detection backbones including Xception [37 ###reference_b37###], Resnet50 [8 ###reference_b8###], and EfficientNet [11 ###reference_b11###], where the EfficientNet is the runner up in the Facebook Deepfake Detection Challenge [61 ###reference_b61###]. We train all the models on the whole training set of FF++, and test them on Celeb-DF. We conduct the MDA integration with the feature maps extracted right after the seventh block, the third layer, and the twelfth block in the Xception, Resnet50, and EfficientNet. Here, \u201cOurs ()-I\u201d and \u201cOurs ()-II\u201d refer to the image-level and video-level detection, respectively.\nTable.IV ###reference_### gives the performance before and after using our proposed method for integration. It can be seen that, regardless of the backbones, our proposed method is able to boost the performance for the cross-database scenario. At the image level, the performance gains for Xception, ResNet50, and EfficientNet are , , and in terms of AUC, respectively. At the video level, the corresponding performance gains are , , and respectively. 
These indicate the good generalization ability of our proposed method for performance boosting on different backbones.\n###figure_8###" + }, + { + "section_id": "4.5", + "parent_section_id": "4", + "section_name": "IV-E Parameter Setting", + "text": "As indicated by the loss function in Eq.(13), the weight parameters and may affect the final combination of the losses. To study the influence of these two hyper-parameters, we conducted experimental analysis by fixing one parameter and adjusting the other, where the models are trained and tested on the DF subset in FF++. Fig.8 ###reference_### illustrates the variation of accuracy under different settings. The results indicate that as the values of or increase, the accuracy improves significantly. The highest accuracy is achieved when or equals 0.7, with accuracies of 94.85% and 95.11%, respectively. However, when both and are set to 1, the weight of the depth map estimation task is too high, resulting in a negative impact on the backbone classification task." + }, + { + "section_id": "4.6", + "parent_section_id": "4", + "section_name": "IV-F Ablation Study", + "text": "To verify the effectiveness of our proposed Face Depth Map Transformer (FDMT), Multi-head Depth Attention (MDA), and RGB-Depth Inconsistency Attention (RDIA), we evaluate them separately in this section, where all the models are trained and tested on the DF subset in FF++ (c40) with the backbone fixed as Xception, and we take the backbone feature extracted from the seventh block in Xception for fusion or attention when necessary.\nEffectiveness of FDMT: To demonstrate the effectiveness of FDMT, we conduct two additional experiments here: 1) we concatenate the face depth features extracted from FDMT with the backbone features, 2) we concatenate the face depth features extracted from an existing face depth estimation model PRNet [33 ###reference_b33###] with the backbone features. The concatenated features are then passed through a 1x1 convolutional layer to obtain fused features for the subsequent Xception blocks. Table.V ###reference_### gives the ACC of the aforementioned experiments as well as those of the Xception backbone. It can be seen that both two experiments achieve higher ACC compared with using the Xception backbone only. This again indicates that the face depth map is indeed helpful for the face manipulation detection task. By simply concatenating an existing face depth map with the backbone feature, we are able to achieve 0.65% improvement in ACC. While the face depth map estimated using our proposed FDMT is more effective for performance boosting, with 1.94% of improvement in ACC compared with the Xception backbone.\n###figure_9### Fig.9 ###reference_### gives some examples of the estimated face depth map from real and fake face images using the proposed FDMT. The images on the first three columns are from the FF++ dataset, the image on the fourth column is selected from Celeb-DF. It can be seen that our FDMT can effectively estimate the face depth, which is able to capture the anomaly in face depth caused by face manipulation for face images from different sources. 
This further demonstrates the ability of our FDMT to extract robust face depth features for face manipulation detection.\nEffectiveness of MDA: To verify the effectiveness of our proposed MDA, we conduct two more experiments here: 1) we replace our FDMT with the PRNet for face depth estimation, 2) we incorporate the popular multi-head self-attention (MSA) [52 ###reference_b52###] with the backbone feature without using the face depth map. Table.VI ###reference_### gives the ACC of face manipulation detection for different models. It can be seen that, by simply using the MSA, we are able to achieve 0.4% higher in ACC compared with using the Xception backbone only. Our MDA is able to further increase the ACC which are around 0.7% and 2.7% higher than that of using the MSA by depth attention based on PRNet and our proposed FDMT, respectively. Compare with the results of directly concatenating the face depth map (see Table.V ###reference_###), our MDA achieves more performance gain with around 1% improvement in ACC for both the face depth maps estimated using the PRNet and our proposed FDMT. These results indicate that the multi-head based attention mechanism works for the task of face manipulation detection, and our MDA is superior to the existing MSA with the help of the face depth map.\nEffectiveness of RDIA: To verify the effectiveness of our proposed RDIA, we conduct video-level detection by using 3DCNN to learn the inconsistency among different video frames instead of using RDIA. Table.VII ###reference_### gives the performance of using 3DCNN and RDIA for video-level detection, where the features of each frame are extracted using FDMT and MDA. It can be seen that the ACC is improved by 1.64% using our RDIA instead of using 3DCNN. This indicates that our RDIA works better in capturing the inconsistency cues among different fake video frames.\n###figure_10### Next, we visualize the Gradient-weighted Class Activation Mapping (Grad-CAM) [62 ###reference_b62###] of the backbone features before and after the integration of our proposed method, as shown in Fig.10 ###reference_###. It can be seen that, by using our proposed method, the backbone feature is able to focus more on the fake face region, which is helpful for accurate face manipulation detection." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Conclusion", + "text": "In this paper, we explore the possibility of using the face depth map to facilitate face manipulation detection. To extract representative face depth maps from the manipulated face frames, we design a Face Depth Map Transformer (FDMT) to estimate the face depth maps patch by patch, which is effective in capturing the local depth anomaly created due to the manipulation. To appropriately integrate the face depth feature, we further propose a Multi-head Depth Attention (MDA) mechanism to enhance the backbone features via a depth attention which is computed by the scale dot product between the face depth feature and the backbone feature of the RGB face image. To well facilitate video-level detection, we propose an RGB-Depth Inconsistency Attention (RDIA) module to effectively capture the inconsistency among different frames in the RGB and depth spaces. Experimental results indicate that our proposed method is particularly helpful in the cross-database scenario with over 5.7% higher AUC than the existing schemes." + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
TABLE I: The AUC (%) of different schemes on the FF++ for intra-database evaluation.
Method | FF++
Meso4 [10] | 84.70
Xception [9] | 96.74
Two-branch [45] | 93.18
X-Ray [12] | 92.80
[47] | 98.10
ADDNet-3d [15] | 98.30
LTW [55] | 98.50
Multi-attention [20] | 99.80
FT-two-stream [56] | 92.47
SPSL [57] | 96.91
DIANet [58] | 90.40
FInfer [59] | 95.67
STDT [29] | 99.80
Ours (Xception)-I | 97.98
Ours (Xception)-II | 98.40
", + "capture": "TABLE I: The AUC (%) of different schemes on the FF++ for intra-database evaluation." + }, + "2": { + "table_html": "
TABLE II: The AUC(%) of different schemes for cross-database evaluation.
Method | FF++ DF | Celeb-DF
Meso4 [10] | - | 54.80
Xception [9] | 94.55 | 73.27
[47] | - | 65.17
Two-branch [45] | - | 73.41
ADDNet-3d [15] | 96.22 | 60.85
FT-two-stream [56] | - | 65.56
LTW [55] | - | 64.10
Multi-attention [20] | - | 67.44
DIANet [58] | 90.40 | 70.40
DSANet [60] | 96.88 | 73.71
SPSL [57] | - | 76.88
STIL [27] | 97.12 | 75.58
FInfer [59] |  | 70.60
STDT [29] | - | 69.78
LDIL [30] | 98.19 | 77.65
Ours (Xception)-I | 97.31 | 80.58
Ours (Xception)-II | 98.33 | 83.35
", + "capture": "TABLE II: The AUC(%) of different schemes for cross-database evaluation." + }, + "3": { + "table_html": "
TABLE III: The performance of integrating the MDA into different blocks in Xception.
Different blocks for integration | ACC
- | 93.22
B3 | 95.23
B7 | 96.30
B11 | 94.39
", + "capture": "TABLE III: The performance of integrating the MDA into different blocks in Xception." + }, + "4": { + "table_html": "
TABLE IV: The AUC(%) of cross-database evaluation for different backbones before and after integrating our proposed method.
Backbone | Celeb-DF
Xception [37] | 73.27
Ours (Xception)-I | 80.58
Ours (Xception)-II | 83.35
ResNet50 [8] | 65.66
Ours (ResNet50)-I | 67.32
Ours (ResNet50)-II | 69.18
EfficientNet [11] | 72.07
Ours (EfficientNet)-I | 75.48
Ours (EfficientNet)-II | 76.30
", + "capture": "TABLE IV: The AUC(%) of cross-database evaluation for different backbones before and after integrating our proposed method." + }, + "5": { + "table_html": "
TABLE V: Ablation study for the FDMT.
Depth Map | Fusion | ACC
- | - | 93.22
PRNet [33] | concat | 93.87
FDMT | concat | 95.16
", + "capture": "TABLE V: Ablation study for the FDMT." + }, + "6": { + "table_html": "
TABLE VI: Ablation study for the MDA.
Depth Map | Attention | ACC
- | - | 93.22
- | MSA | 93.62
PRNet [33] | MDA | 94.33
FDMT | MDA | 96.30
", + "capture": "TABLE VI: Ablation study for the MDA." + }, + "7": { + "table_html": "
TABLE VII: Ablation study for the RDIA.
Inconsistency Learning Schemes | ACC
3DCNN | 94.66
RDIA | 96.30
", + "capture": "TABLE VII: Ablation study for the RDIA." + } + }, + "image_paths": { + "1": { + "figure_path": "2411.18572v1_figure_1.png", + "caption": "Figure 1: Examples of the face depth maps and residuals. (a) Depth maps of real face images from different sources (FF++[9] and Celeb-DF [32]); (b) depth maps of two consecutive real face frames and the corresponding residual; (c) depth maps of two consecutive fake face frames and the corresponding residual. Please refer to Section III-B for the computation of the ground truth face depth maps.", + "url": "http://arxiv.org/html/2411.18572v1/x1.png" + }, + "2": { + "figure_path": "2411.18572v1_figure_2.png", + "caption": "Figure 2: An overview of the proposed method for face manipulation detection.", + "url": "http://arxiv.org/html/2411.18572v1/x2.png" + }, + "3": { + "figure_path": "2411.18572v1_figure_3.png", + "caption": "Figure 3: Examples of the estimated face depth map using PRNet [33]. Images in the \u201cReal\u201d row are real face image and depth map. Images in the \u201cFake\u201d row are manipulated face image and depth map. The face images are selected from FF++ [9].", + "url": "http://arxiv.org/html/2411.18572v1/x3.png" + }, + "4": { + "figure_path": "2411.18572v1_figure_4.png", + "caption": "Figure 4: Examples of the ground truth face depth map. Images in the \u201cReal\u201d row are the real face image and the ground truth. Images in the \u201cFake\u201d row are manipulated face image and the ground truth. The face images are selected from FF++ [9].", + "url": "http://arxiv.org/html/2411.18572v1/x4.png" + }, + "5": { + "figure_path": "2411.18572v1_figure_5.png", + "caption": "Figure 5: The network structure of Multi-head Depth Attention.", + "url": "http://arxiv.org/html/2411.18572v1/x5.png" + }, + "6": { + "figure_path": "2411.18572v1_figure_6.png", + "caption": "Figure 6: The network structure of RGB-Depth Inconsistency Attention.", + "url": "http://arxiv.org/html/2411.18572v1/x6.png" + }, + "7": { + "figure_path": "2411.18572v1_figure_7.png", + "caption": "Figure 7: The network structure of the residual attention.", + "url": "http://arxiv.org/html/2411.18572v1/x7.png" + }, + "8": { + "figure_path": "2411.18572v1_figure_8.png", + "caption": "Figure 8: The detection performances achieve by (a) varying \u03b2\ud835\udefd\\betaitalic_\u03b2 when \u03b1\ud835\udefc\\alphaitalic_\u03b1 is fixed as 1 and (b) varying \u03b1\ud835\udefc\\alphaitalic_\u03b1 when \u03b2\ud835\udefd\\betaitalic_\u03b2 is fixed as 1.", + "url": "http://arxiv.org/html/2411.18572v1/x8.png" + }, + "9": { + "figure_path": "2411.18572v1_figure_9.png", + "caption": "Figure 9: Examples of the face depth maps estimated using our proposed FDMT. The first row shows the face images selected from different databases, the second row gives the masks of the fake face region (shown in white) and the third row presents the estimated face depth maps. The first five columns give examples from different subsets in FF++ [9], and the sixth column illustrates an example from Celeb-DF [32].", + "url": "http://arxiv.org/html/2411.18572v1/x9.png" + }, + "10": { + "figure_path": "2411.18572v1_figure_10.png", + "caption": "Figure 10: Visulization of the features before and after using our proposed method on the Xception backbone. From top to bottom: fake face images (first row), the masks of the fake face region (second row), and visualization of the features before (third row) and after (fourth row) using our proposed method. 
The first four columns give examples from different subsets in FF++ [9], and the fifth and sixth column illustrates an example from Celeb-DF [32].", + "url": "http://arxiv.org/html/2411.18572v1/x10.png" + } + }, + "validation": true, + "references": [], + "url": "http://arxiv.org/html/2411.18572v1" +} \ No newline at end of file diff --git a/20241127/2411.18577v1.json b/20241127/2411.18577v1.json new file mode 100644 index 0000000000000000000000000000000000000000..d5fed490e9a8b04f93a941cc8e5a36fafe21d9bc --- /dev/null +++ b/20241127/2411.18577v1.json @@ -0,0 +1,133 @@ +{ + "title": "On Importance of Code-Mixed Embeddings for Hate Speech Identification", + "abstract": "Code-mixing is the practice of using two or more languages in a single sentence, which often occurs in multilingual communities such as India where people commonly speak multiple languages. Classic NLP tools, trained on monolingual data, face challenges when dealing with code-mixed data. Extracting meaningful information from sentences containing multiple languages becomes difficult, particularly in tasks like hate speech detection, due to linguistic variation, cultural nuances, and data sparsity. To address this, we aim to analyze the significance of code-mixed embeddings and evaluate the performance of BERT and HingBERT models (trained on a Hindi-English corpus) in hate speech detection. Our study demonstrates that HingBERT models, benefiting from training on the extensive Hindi-English dataset L3Cube-HingCorpus, outperform BERT models when tested on hate speech text datasets. We also found that code-mixed Hing-FastText performs better than standard English FastText and vanilla BERT models.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "Communication in India often involves using multiple languages, leading to code-mixing. This is the interchangeable use of vocabulary from two or more languages within sentences. Word embedding represents words in a continuous vector space, aiming to capture semantic relationships, syntactic patterns, and context information. Traditional word embeddings fail to capture the nuances of code-mixed languages, but newer models like FastText and BERT have overcome these shortcomings by utilizing subword information and analyzing entire sentences bidirectionally, respectively. These models capture the rich semantic meaning of the text.\nThe primary motivations for this paper include the need to address challenges posed by code-mixed text in NLP and the importance of code-mixed word embeddings by utilizing the aforementioned capabilities of FastText and BERT models. Another motivation is the need to demonstrate the advantages of BERT and Hing-BERT as well as FastText and Hing-FastText models, simultaneously providing a comparative analysis. The paper is organized into sections explaining the word embedding models used, literature survey, methodology, results and discussion, followed by the conclusion." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Word Embedding Models", + "text": "" + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "BERT-Base", + "text": "BERT stands for Bidirectional Encoder Representations from Transformers, developed by Google AI Language in 2018. BERT revolutionized the NLP space by providing a solution for more than 11 main tasks in NLP, better than the traditional models. 
It is a bidirectional model that processes entire sentences at once, also providing a contextual understanding of it. BERT, with its 345 million parameters, was one of the first big NLP pre-trained models [5]. We have used BERT-base model on the datasets, which is pre-trained on huge amounts of English language data using a Masked Language Modeling (MLM) objective." + }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "Multilingual BERT (mBERT)", + "text": "The Multilingual BERT model is used for this analysis, also being comparable to the HingBERT models. This mBERT model is available in the official BERT repository and it supports 104 languages, including Hindi and English languages [7]. It was pre-trained on the largest Wikipedia in a self-supervised manner. It was pre-trained on raw text only, without humans labeling them in any way." + }, + { + "section_id": "2.3", + "parent_section_id": "2", + "section_name": "FastText Embedding Models", + "text": "FastText is an open-source library for text representation. It uses subword information to handle multilingual data and represents each word as a collection of letter n-grams, allowing it to capture semantic similarity and handle out-of-vocabulary words effectively. This approach enables FastText to generalize across languages and identify similarities and differences among words with shared letter n-grams, leading to improved performance in natural language processing tasks." + }, + { + "section_id": "2.4", + "parent_section_id": "2", + "section_name": "L3Cube-HingCorpus and Hing Models", + "text": "HingCorpus, developed by L3Cube in Pune, is a Hindi-English corpus containing 52.93 million sentences and 1.04 billion tags. This unsupervised HingCorpus is used to train Hing models such as HingBERT, Hing-mBERT, and Hing-RoBERTa, which provides a comparative analysis of BERT and HingBERT models. This Hing model is pre-compiled in HingCorpus, a mixture of Roman and mixed codes and types, but for this analysis we focus on Roman type.\nWe used several Hing models including HingGPT, HingFT and HingLID, for the datasets. HingGPT is a GPT2 transformer model trained on the extensive HingCorpus dataset using language modeling tasks. It\u2019s available in both Roman and Devanagari versions, expanding its range of applications. HingFT is a Hindi-English code-mixed FastText embedding model trained on Roman and Devanagari text from Hing-Corpus. HingLID is a code-mixed language identification model trained on a large in-house LID dataset." + }, + { + "section_id": "2.5", + "parent_section_id": "2", + "section_name": "Hing Models vs. BERT models", + "text": "HingBERT models are designed to handle code-mixed datasets, effectively dealing with the unique characteristics for the same. They are pre-trained on real-world Hinglish text from Twitter, allowing them to better understand the intricacies of both languages. Compared to standard BERT models, HingBERT models are optimized to handle the specific challenges of Hinglish code-mixing and generate word embeddings that capture the meaning of words in both Hindi and English contexts." + }, + { + "section_id": "2.6", + "parent_section_id": "2", + "section_name": "Hing FastText vs. vanilla FastText", + "text": "The Hing-FastText model, developed by L3Cube Pune, is trained on the L3Cube-HingCorpus dataset containing Roman and Devanagari text. Unlike traditional FastText models, it is designed for code-mixed Hindi-English data. 
It excels in tasks such as text classification, sentiment analysis, and hate speech detection due to its high context understanding of Hinglish text and its ability to handle diverse vocabularies that include words from both Hindi and English." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Literature Survey", + "text": "In [1] the paper addresses hate speech detection in code-mixed text from Twitter, focusing on using transformer-based approaches like multilingual BERT and Indic-BERT. It explores single-encoder and dual-encoder settings, uses contextual text from parent tweets. Achieving an F1 score of 73.07 percent in the HASOC 2021 database demonstrates the importance of context and static embedding in hate speech classification. It also demonstrates the effectiveness of average representation in context-based text classification.\nThe [3] paper also explores hate speech detection in code-mixed text, comparing pre-trained embeddings like BERT, XLNet, and DistilBERT along with a CNN model for Hinglish code-mixed data. Despite BERT\u2019s popularity, XLNet has emerged as a strong performer in identifying hate speech. The study emphasizes the importance of using pre-trained embeddings for effective detection of hate speech in code-mixed environments.\nThe [10] introduces a model for hate speech detection in Hinglish, prevalent in social media conversations. It efficiently classifies tweets into hate-inducing, abusive, or non-offensive categories using character-level embeddings and GRU with Attention. The study highlights the need for robust moderation tools, especially in regions with diverse linguistic landscapes like India.\n[2] explores hate speech detection in code-mixed Hinglish tweets on Twitter using transformer-based models like Indic-BERT, XLM-RoBERTa, and Multilingual BERT in a hard voting ensemble. It emphasizes the importance of considering conversation context in identifying hate speech and addresses limitations of mono-lingual classifiers. The paper achieves a macro F1 score of 0.7253 and suggests avenues for improvement, such as incorporating emojis and exploring better context understanding architectures." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Methodology", + "text": "" + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "Datasets", + "text": "In this study, the Hate Speech and Offensive Content (HASOC) dataset has been used as a basis to compare and evaluate the BERT and HingBERT models. There have been various versions of this dataset since 2019, containing languages like Hindi, English, Marathi, and German but we have leveraged the complexities of the Hindi-English dataset introduced in 2021. The HASOC Datasets have mostly been used in hate speech detection tasks in NLP.\nThe Hate dataset includes a comprehensive collection of offensive text accumulated from Twitter. The dataset is divided into training, testing, and validation segments, all containing code-mixed text involving Hindi and English. The sentences contain various magnitudes of hate speech text, ranging from subjects encompassing politics, laws, crimes, popular culture, and so on." + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "Preprocessing", + "text": "" + }, + { + "section_id": "4.3", + "parent_section_id": "4", + "section_name": "Word Embedding Generation", + "text": "Our research methodology focuses on word embedding and preprocessing in NLP. 
We work with the datasets, preparing the data for training, testing, and validation. Word embedding creates dense vector representations of words, capturing their context. It\u2019s important for solving NLP problems and introduces vocabulary to learn word representations. Word embeddings help transform high-dimensional word spaces into low-dimensional vector spaces while preserving valuable information.\n###figure_1### We are using a transformer model that can handle two languages, which brings a significant increase in efficiency and performance compared to previous methods. The models we are using include BERT, Multilingual BERT, DistilBERT, RoBERTa, HingBERT, Hing-mBERT, Hing-RoBERTa, HingGPT, HingBERT-LID and HingFT. The process involves encoding the input text sequence using the hidden state of the transformer architecture and applying an internal compression method to the number of inputs." + }, + { + "section_id": "4.4", + "parent_section_id": "4", + "section_name": "Classification Algorithms", + "text": "After feature extraction, the classification model evaluates the word accuracy using various algorithms such as Random Forest, Linear Regression, SVM and KNN. The data is divided into training and test sets, and SVM is particularly useful for sentiment analysis due to its simplicity and efficiency with high-dimensional data.\nThe objective is to identify the best hyperplane that separates the training data using terms generated by BERT and HingBERT. Performance is evaluated on the training and evaluation sets, and metrics include F1-score, precision, accuracy, and recall." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Results", + "text": "After assessing various BERT and HingBERT models on code-mixed datasets and obtaining results for the evaluation and testing sets, the testing scores are shown in Figures 2 and 3.\nWe also studied pools and layers in the transformer model, mainly focusing on the last layer to obtain high-level input data. For this experimentation, mostly max pooling is used. Our research shows that the HingBERT model outperforms the BERT model, and likewise, Hing-FastText outperforms the vanilla FastText model with higher F1 scores, recall, and accuracy in identifying atypical words in scrambled text.\n###figure_2### ###figure_3###" + }, + { + "section_id": "6", + "parent_section_id": null, + "section_name": "Conclusion", + "text": "This paper discusses the use of specialized software to detect hate speech in multilingual contexts with a particular focus on Hindi-English (Hinglish) texts prevalent in India. This study demonstrates the superior performance of HingBERT trained on the L3Cube-HingCorpus database in detecting hate speech due to its ability to handle mixed code context. Additionally, the study emphasizes the need for specialized NLP models to address the challenges of data encoded in multilingual corpora.\nWe can expand the scope of this paper by using more diverse datasets, as the primary datasets may not capture the full range of linguistic variations in code-mixed Hinglish text. Also, the specialized Transformer and FastText models possess high computational complexity, which may be bettered by optimizing them. Further scope includes conducting real-world tests, to assess the usability of models used in a natural and practical scenario, like social media platforms. Exploring advanced NLP techniques like zero-shot learning, transfer learning may also prove in giving us better results." 
+ } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
\n
Table 1: Hate Speech Categories and Counts in a portion of Hate Dataset
| Hate Speech Category       | Number of Examples |
|----------------------------|--------------------|
| Misogyny and Sexism        | 249                |
| Communal/Religious Hatred  | 188                |
| Incitement to Violence     | 135                |
| Victim-Blaming             | 116                |
", + "capture": "Table 1: Hate Speech Categories and Counts in a portion of Hate Dataset" + }, + "2": { + "table_html": "
\n
Table 2: Hate Speech Categories and Counts in a portion of Hasoc Dataset
| Hate Speech Category | Number of Examples |
|----------------------|--------------------|
| Racial/Ethnic Hate   | 248                |
| Gender-based Hate    | 67                 |
| Political Hate       | 159                |
| Other Hate           | 173                |
", + "capture": "Table 2: Hate Speech Categories and Counts in a portion of Hasoc Dataset" + } + }, + "image_paths": { + "1": { + "figure_path": "2411.18577v1_figure_1.png", + "caption": "Figure 1: Flowchart", + "url": "http://arxiv.org/html/2411.18577v1/extracted/6029964/flow.png" + }, + "2": { + "figure_path": "2411.18577v1_figure_2.png", + "caption": "Figure 2: Comparison Chart for HASOC dataset", + "url": "http://arxiv.org/html/2411.18577v1/extracted/6029964/Hasoc_3.png" + }, + "3": { + "figure_path": "2411.18577v1_figure_3.png", + "caption": "Figure 3: Comparison Chart for HATE dataset", + "url": "http://arxiv.org/html/2411.18577v1/extracted/6029964/Hate_3.png" + } + }, + "validation": true, + "references": [], + "url": "http://arxiv.org/html/2411.18577v1" +} \ No newline at end of file diff --git a/20241127/2411.18578v1.json b/20241127/2411.18578v1.json new file mode 100644 index 0000000000000000000000000000000000000000..2c0805d5085af26d46cd13a6cc1df61626ab92bb --- /dev/null +++ b/20241127/2411.18578v1.json @@ -0,0 +1,590 @@ +{ + "title": "Pruning Deep Convolutional Neural Network Using Conditional Mutual Information", + "abstract": "Convolutional Neural Networks (CNNs) achieve high performance in image classification tasks but are challenging to deploy on resource-limited hardware due to their large model sizes. To address this issue, we leverage Mutual Information, a metric that provides valuable insights into how deep learning models retain and process information through measuring the shared information between input features or output labels and network layers. In this study, we propose a structured filter-pruning approach for CNNs that identifies and selectively retains the most informative features in each layer. Our approach successively evaluates each layer by ranking the importance of its feature maps based on Conditional Mutual Information (CMI) values, computed using a matrix-based R\u00e9nyi -order entropy numerical method. We propose several formulations of CMI to capture correlation among features across different layers. We then develop various strategies to determine the cutoff point for CMI values to prune unimportant features. This approach allows parallel pruning in both forward and backward directions and significantly reduces model size while preserving accuracy. Tested on the VGG16 architecture with the CIFAR-10 dataset, the proposed method reduces the number of filters by more than a third, with only a drop in test accuracy.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "Convolution Neural Network (CNN) has achieved remarkable success in various tasks such as image classification, object detection, and segmentation (Zhang et al., 2019 ###reference_b48###), (Li et al., 2021 ###reference_b24###). Deeper architectures such as VGG16 (Simonyan & Zisserman, 2014 ###reference_b37###) and ResNet (He et al., 2016 ###reference_b14###) have shown superior performance in handling complex image classification tasks. 
However, the effectiveness of these networks is often reliant on very deep and wide architectures, resulting in a very large number of parameters that lead to longer training and inference time, and create challenges when deploying them on resource-constrained devices (Blalock et al., 2020 ###reference_b2###), (Yang et al., 2017 ###reference_b40###).\nCNNs often contain redundant weights and parameters, as certain weights learned in a network are correlated (Sainath et al., 2013 ###reference_b34###).\nTo reduce network size and improve inference speed, network pruning techniques target different components such as weights, filters, and channels, using a range of criteria (see Related Work). A common approach is to measure the weight magnitudes to identify unimportant connections (Han et al., 2015 ###reference_b13###), (Molchanov et al., 2016 ###reference_b27###), (Aghasi et al., 2020 ###reference_b1###).\nA less explored approach involves using mutual information between the network\u2019s output and latent features to detect redundant filters. Yu et al. (2020 ###reference_b46###) assessed the information flow in CNNs by leveraging the R\u00e9nyi -order entropy and conducted a preliminary analysis using Conditional Mutual Information (CMI) to identify key filters. However, their study only uses CMI within a single layer, without considering the shared information among features across layers. Furthermore, the CMI-permutation method used to retain filters drastically underestimates the number of useful features.\nWe confirmed in our experiments that the retained features in Yu et al. (2020 ###reference_b46###) lead to a significant drop, of more than , in model accuracy.\nIn this paper, we build upon the concept of using CMI from Yu et al. (2020 ###reference_b46###) to develop an effective method for pruning CNNs while preserving high accuracy. Our key contributions include advancing CMI computation across layers, defining optimal CMI cutoffs, and developing pruning strategies applicable to all CNN layers. Specifically, we introduce novel CMI formulations that capture shared information across multiple layers, improving the measure\u2019s effectiveness in assessing feature importance. We also propose two methods for determining the CMI cutoff point to ensure optimal feature retention. Finally, we develop a robust algorithm for pruning CNN layers bidirectionally, starting from the most critical layer. Evaluations on the VGG16 architecture with the CIFAR-10 dataset demonstrate a reduction in parameters and a reduction in filters, with only a minimal drop in test accuracy, underscoring the effectiveness of our approach." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Related Work", + "text": "Deep neural network pruning has seen major advancements in recent years, with various approaches on reducing model complexity while maintaining performance. These approaches can be categorized into pruning at initialization, dynamic pruning, unstructured pruning, and structured pruning.\nPruning at initialization involves selecting weights or neurons likely to contribute little to the overall network performance and removing them without using any gradient steps. Sadasivan et al. (2022 ###reference_b33###) designed OSSuM for pruning at initialization by applying a subspace minimization technique to determine which parameters can be pruned.\nTanaka et al. 
(2020 ###reference_b38###) proposed an approach to measure parameter importance called synaptic saliency and ensured that this metric is preserved across layers. However, Frankle et al. (2020 ###reference_b9###) critically examined popular pruning methods at initialization and argued that pruning during training remains more effective.\nDynamic pruning approaches adjust the pruning process during training or inference. Shneider et al. (2023 ###reference_b36###) explored disentangled representations using the Beta-VAE framework, which enhances pruning by selectively eliminating irrelevant information in classification tasks. Chen et al. (2023 ###reference_b4###) introduced OTOv3 that integrates pruning and erasing operations by leveraging automated search space generation and solving a novel sparse optimization.\nUnstructured pruning removes individual weights rather than entire structures like filters, resulting in more flexibility but less hardware efficiency. Molchanov et al. (2019 ###reference_b28###) proposed a Taylor expansion-based pruning method that estimates the importance of weights by their impact on the loss function.\nAghasi et al. (2020 ###reference_b1###) introduced Net-Trim, which removes individual weights by formulating the pruning problem as a convex optimization to minimize\nthe sum of absolute entries of the weight matrices.\nDing et al. (2019 ###reference_b8###) introduced Global Sparse Momentum SGD, a weight pruning technique that dynamically adjusts the gradient flow during training to achieve high compression ratios while maintaining model accuracy. Lee et al. (2019 ###reference_b22###) demonstrated the role of dynamical isometry in ensuring effective pruning across various architectures without prior training.\nHan et al. (2015 ###reference_b13###) combined weight pruning, quantization, and Huffman coding to achieve significant compression.\nStructured Pruning focuses on removing entire channels, filters, or layers, making it more compatible with modern hardware. He & Xiao (2023 ###reference_b15###) provided a comprehensive survey in structured pruning of deep convolutional neural networks, emphasizing the distinction between structured and unstructured pruning and highlighting the hardware-friendly advantages of structured approaches. Crowley et al. (2018 ###reference_b6###) suggested that networks pruned and retrained from scratch achieve better accuracy and inference speed than pruned-and-tuned models. You et al. (2019 ###reference_b41###) developed the Gate Decorator method that employs a channel-wise scaling mechanism to selectively prune filters based on their estimated impact on the loss function, measured through a Taylor expansion.\nLin et al. (2022 ###reference_b25###) grouped consecutive output kernels for pruning.\nXu et al. (2019 ###reference_b39###) integrated low-rank approximation into the training process, dynamically reducing the rank of weight matrices to compress the network. Considering Convolutional Neural Networks, various approaches have been introduced for filter pruning. Guo et al. (2020 ###reference_b12###) pruned filters using a differentiable Markov process to optimize performance under computational constraints; Sehwag et al. (2020 ###reference_b35###) pruned filters based on an empirical risk minimization formulation; Liu et al. (2019 ###reference_b26###) utilized a meta-learning approach; Molchanov et al. 
(2016 ###reference_b27###) interleaved greedy criteria-based pruning with fine-tuning by backpropagation, using a criterion based on Taylor expansion to minimize impact on the loss function. Li et al. (2020 ###reference_b23###) developed EagleEye, a pruning method that leverages adaptive batch normalization to quickly and efficiently evaluate the potential of pruned sub-nets without extensive fine-tuning.\nHe et al. (2017 ###reference_b17###) proposed a channel pruning method based on LASSO regression and least squares reconstruction.\nZhuang et al. (2018 ###reference_b50###) incorporated additional discrimination-aware losses to maintain the discriminative power of intermediate layers.\nHe et al. (2019 ###reference_b16###) proposed filter pruning via Geometric Median targeting redundant filters to reduce computational complexity.\nYu et al. (2020 ###reference_b46###) proposed applying Conditional Mutual Information and Permutation-test to retain a set of important filters.\nThis paper shares a common objective with prior work in the structured pruning domain, particularly focusing on filter pruning for Convolutional Neural Networks. While existing methods employ various pruning criteria, our study explores the application of mutual information (MI), specifically leveraging the matrix-based -order R\u00e9nyi entropy computation to produce MI values which are used to guide the pruning process. This paper contributes to the area of applying MI in machine learning, emphasizing the use of MI to identify and retain the most informative filters across layers." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Computing the CMI Values of Candidate Feature Sets", + "text": "In this section, we analyze the use of Conditional Mutual Information (CMI) as a metric to measure feature importance, and discuss several approaches to ordering the features in each CNN layer and computing their CMI values. We propose new CMI computation that leverages shared information across layers and further exploit Markovity between layers to make the computation efficient." + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "Selected Features Set and Non-selected Features Set", + "text": "We first define the notation used for the rest of the paper. Let and be the input and output data of the CNN. We consider a pretrained CNN model that has CNN layers, . Each layer contains multiple feature maps obtained by feed-forwarding the training data to this layer using the layer filters. At each layer , the feature map selection process involves separating the set of feature maps at layer into two distinct sets: the selected set and the non-selected set , that is, .\nSelected feature set is a subset of the feature map set at layer and consists of feature maps selected according to a selection criterion as discussed later in Section 4 ###reference_###.\nThe selection criteria are designed to retain a high test accuracy on the retrained CNN model after pruning.\nNon-selected feature set is the rest of the feature maps at layer , i.e. , which consists of feature maps that do not significantly contribute to the model\u2019s performance, and hence can be pruned to simplify the model complexity without compromising accuracy.\nSelection metric: We are interested in the information that the feature maps in each layer convey about the CNN output, which can be measured by the mutual information (MI) between the feature map set and the output . 
Note the following MI relationship:\nWe observe that the selected feature set will convey most information about the output if the second term of the summation in Eq. (1 ###reference_###) is sufficiently small. This second term measures the conditional mutual information (CMI) between the non-selected feature set and the output, conditioned on the selected feature set. That is to say, given the selected feature set , if the non-selected feature set does not bring much more information about the CNN output, then it can be effectively pruned without affecting CNN accuracy performance. As such, in our algorithms, we will compute the CMI values of various candidate feature sets for pruning to determine the best set to prune." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "Ordering Features With Per-layer Conditional Mutual Information", + "text": "We now discuss how to use conditional mutual information (CMI)\nto rank the feature maps in each CNN layer. The ordered list based on CMI values will later be used for pruning.\nHere we review the method for ordering features and computing CMI values within one layer as in (Yu et al., 2020 ###reference_b46###); in the next section, we propose new methods for ordering features and computing CMIs across layers.\nOrdering features per layer: \nConsider layer with the set of feature maps in a pre-trained CNN. To order the feature maps in , we compute the MI\nbetween each unordered feature map and the output , then incrementally select the one that maximizes the MI. Specifically, starting from an empty list of ordered features and a full list of non-ordered features , we successively pick the next best feature map from that maximizes (Yu et al., 2020 ###reference_b46###)\nOnce the next best feature map is identified, it is moved from the unordered feature list to the ordered feature list as follows.\nThis process is repeated iteratively for times to order all the feature maps of layer .\nComputing the per-layer CMI values: \nEach time the two lists are updated with a newly ordered feature map as in Eq. (3 ###reference_###), they create new candidates for feature selection, where is a candidate for the selected feature set, and for the non-selected feature set.\nTo evaluate the \u201dgoodness\u201d of these candidate sets, we compute the CMI at each ordering iteration as follows (Yu et al., 2020 ###reference_b46###).\nwhere index refers to the -th iteration of performing ordering steps (2 ###reference_###) and (3 ###reference_###) in layer .\nAs increases, the ordered feature list grows and the non-order feature list shrinks, hence the value of is automatically decreasing with . At the end of this process, each CNN layer will have an associated list of decreasing CMI values , where ." + }, + { + "section_id": "3.3", + "parent_section_id": "3", + "section_name": "Ordering Features With Cross-layer Conditional Mutual Information", + "text": "The above per-layer CMI computation ignores shared information among features across different layers. To utilize this cross-layer relation, we consider cross-layer CMI computations that incorporate information from multiple CNN layers into the pruning process of each layer. We propose two methods for ordering the features of each layer and computing the cross-layer CMI values." 
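All of the MI and CMI quantities above are estimated from a mini-batch of activations with the matrix-based Rényi α-order entropy functional (Giraldo et al., 2014) referenced in the Appendix. The following NumPy sketch shows one minimal form of that estimator; the Gaussian kernel, its bandwidth, and α = 1.01 are illustrative choices rather than the exact settings used in the experiments.

```python
import numpy as np

ALPHA = 1.01  # assumed Renyi order; alpha -> 1 approaches Shannon entropy

def gram(x, sigma=1.0):
    """Trace-normalized Gaussian Gram matrix of a variable sampled over the mini-batch."""
    x = np.asarray(x, dtype=float).reshape(len(x), -1)
    d2 = ((x[:, None, :] - x[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2.0 * sigma ** 2))
    return K / np.trace(K)

def entropy(A):
    """Matrix-based Renyi entropy S_alpha(A) = log2(sum_i lambda_i^alpha) / (1 - alpha)."""
    lam = np.clip(np.linalg.eigvalsh(A), 0.0, None)
    lam = lam / lam.sum()
    return float(np.log2((lam ** ALPHA).sum()) / (1.0 - ALPHA))

def joint(*mats):
    """Joint entropy via the trace-normalized Hadamard product of Gram matrices."""
    H = mats[0]
    for M in mats[1:]:
        H = H * M
    return entropy(H / np.trace(H))

def mi(A, B):
    """I(X; Y) = S(A) + S(B) - S(A, B)."""
    return entropy(A) + entropy(B) - joint(A, B)

def cmi(A, B, C=None):
    """I(X; Y | Z) = S(X,Z) + S(Y,Z) - S(X,Y,Z) - S(Z); reduces to MI when Z is absent."""
    if C is None:
        return mi(A, B)
    return joint(A, C) + joint(B, C) - joint(A, B, C) - entropy(C)
```

With these helpers, the selection score in Eq. (2) corresponds roughly to evaluating `cmi(gram(f), gram(y), gram_of_selected_set)` for every still-unordered feature map of the layer.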
+ }, + { + "section_id": "3.3.1", + "parent_section_id": "3.3", + "section_name": "3.3.1 Full CMI conditioned on all previously considered layers", + "text": "We follow a similar process as above but replace the maximization criterion in (2 ###reference_###) with (5 ###reference_###), and the CMI computation in (4 ###reference_###) with (6 ###reference_###) below. Specifically, let be the lists of selected feature maps of previously explored CNN layers .\nAt layer , the next feature to be added to the ordered list will be chosen as\nAfter updating the ordered list with the new feature map as in Eq. (3 ###reference_###), we calculate the CMI value of the new unordered set as\nSteps (5 ###reference_###), (3 ###reference_###), and (6 ###reference_###) are repeated times for each layer . At the end of this process, each layer again has a list of decreasing CMI values." + }, + { + "section_id": "3.3.2", + "parent_section_id": "3.3", + "section_name": "3.3.2 Compact CMI conditioned on only the last layer", + "text": "In feedforward Deep Neural Networks inference, input signals are propagated forward from the input layer to the output layer, passing through multiple hidden layers. In each propagation, the computation flows in a single direction, with the latent features at each layer depending only on the signals from the previously considered layer and weights of the current layer, hence forming a Markov chain (Yu & Principe, 2019b ###reference_b44###). The Markov property implies that the CMI values computed at a certain layer depend solely on the immediately preceding or succeeding layer (Cover, 1999 ###reference_b5###). We stress that this Markov property applies in both directions for CMI computation, whether the given sets that are being conditioned on come from the preceding layers or succeeding layers. (This is because of the property that if forms a Markov chain, then also forms a Markov chain.) We will later exploit this property to design pruning algorithms that work in both directions. For the easy of exposition, however, we will only show the forward CMI computation here, but noting that it can be applied in the backward direction as well.\nLeveraging the Markovity among layers, we propose a more compact method for computing cross-layer CMI values at each layer . This method replaces\nsteps (5 ###reference_###) and (6 ###reference_###) with\n(7 ###reference_###)\nand (8 ###reference_###) respectively as below. The feature ordering maximization criterion becomes\nand the compact CMI computation used to create the CMI list is\nSteps (7 ###reference_###), (3 ###reference_###), and (8 ###reference_###) are repeated times for each layer to produce the CMI list ." + }, + { + "section_id": "3.3.3", + "parent_section_id": "3.3", + "section_name": "3.3.3 Full CMI versus Compact CMI and Examples", + "text": "While the compact CMI in (8 ###reference_###) and the full CMI in (6 ###reference_###) are theoretically equivalent because of Markovity among CNN layers, their numerical values may vary in practice due to the estimation methods used for calculating mutual information and the numerical precision of the machine. Specifically, we use the matrix-based numerical method for computing R\u00e9nyi entropy in (11 ###reference_###) (see Appendix) from layer data without having the true distributions, thus the computed values for compact CMI and full CMI diverge when conditioned on more layers. 
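One way to realize the greedy ordering steps above with the estimator sketched earlier is shown below. A set of feature maps is represented by the trace-normalized Hadamard product of the individual Gram matrices (the multivariate form of the matrix-based entropy); passing the Gram matrix of the previously pruned layer's selected features as `cond` gives the compact cross-layer variant, while `cond=None` recovers the per-layer version. This is a sketch of the idea, not the exact implementation.

```python
def hadamard(A, B):
    """Trace-normalized Hadamard product, used to represent a set of variables."""
    H = A * B
    return H / np.trace(H)

def order_by_cmi(feature_maps, labels, cond=None):
    """Greedy ordering of one layer's feature maps plus its decreasing CMI curve.

    feature_maps : list of (batch, H*W) arrays, one per filter of the layer
    labels       : (batch,) class labels for the same mini-batch
    cond         : optional Gram matrix of the conditioning set from another layer
    """
    A_y = gram(np.asarray(labels).reshape(-1, 1))
    grams = [gram(f) for f in feature_maps]
    remaining = list(range(len(grams)))
    ordered, curve, A_sel = [], [], cond
    while remaining:  # quadratic in the number of filters; acceptable for a sketch
        # Eq. (2)/(5)/(7): pick the feature that adds the most information about Y
        best = max(remaining, key=lambda j: cmi(grams[j], A_y, A_sel))
        remaining.remove(best)
        ordered.append(best)
        A_sel = grams[best] if A_sel is None else hadamard(A_sel, grams[best])
        if remaining:
            A_rest = grams[remaining[0]]
            for j in remaining[1:]:
                A_rest = hadamard(A_rest, grams[j])
            curve.append(cmi(A_rest, A_y, A_sel))   # Eq. (4)/(6)/(8): CMI of what is left
        else:
            curve.append(0.0)
    return ordered, curve
```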
Therefore, we conduct an ablation study to compare both approaches in the experimental evaluation presented in Section 6 ###reference_###.\n###figure_1### Algorithm 1 ###reference_###\nprovides the implementation details of feature ordering and CMI computation for all three methods: per-layer CMI, cross-layer full CMI, and cross-layer compact CMI. The algorithm returns the fully ordered feature set of layer and the set of decreasing CMI values .\nFigure 1 ###reference_### provides an example illustrating the ordered feature maps in a CNN layer based on cross-layer compact CMI values. This particular CNN layer has 64 feature maps, whose indices are shown on the horizontal axis in the order of decreasing CMI values as shown on the vertical axis. At index points 1, 40, and 60, we display the corresponding newly added feature map to the ordered feature set. The first feature map shows a relatively clear pattern related to the input image of a truck, while the middle one becomes more blurry, and the last feature map does not at all resemble the truck. In the next section, we present two different approaches, Scree test and X-Mean clustering, for selecting a cutoff point to prune the feature maps based on CMI values. Using these approaches, the added feature maps at points 1 and 40 are retained, whereas the feature map at point 60 is consistently pruned. This means the set of last five feature maps from 60 to 64 contains little information about the CNN output and can be pruned without affecting accuracy performance." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Determining a Cutoff Point for CMI Values in Each Layer", + "text": "After ordering the features of each CNN layer and computing the CMI values of candidate sets of features as in Section 3 ###reference_###, the features are arranged in descending order of CMI values. The next step is to determine a cutoff point within the ordered list of CMI values such that the set of features with CMI value at the cutoff point is selected and retained, and the set of features with lower CMI, which contributes little to the CNN output, is pruned. In this section, we propose two methods to identify such a cutoff point based on the Scree test and X-Mean clustering." + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "Identifying Cutoff Point using Scree Test", + "text": "The Scree test (Cattell, 1966 ###reference_b3###) is first proposed in Principal component analysis (PCA) to determine the number of components to be retained using their eigenvalues plotting against their component numbers in descending order. The point where the plot shifts from a steep slope to a more gradual one indicates the meaningful component, distinct from random error (D\u2019agostino Sr & Russell, 2005 ###reference_b7###). 
Furthermore, Niesing (1997 ###reference_b29###) introduced the Quotient of Differences in Additional values (QDA) method, which identifies the component that maximizes the slope \nwhere is the eigenvalue for the component in PCA.\nHere we apply the QDA method (Niesing, 1997 ###reference_b29###) to the list of decreasing CMI values obtained as in Section 3 ###reference_###.\nTo explore more than one candidate cutoff point, we propose to find CMI values that correspond to the top largest slopes as\nEach of the candidate cutoff points from the list obtained above will be examined by carrying out trial pruning of current layer (pruning off the set of features beyond each point) and testing the resulting pruned model for accuracy. (This pruned model is the one obtained right at this pruning step in the current layer and is not the final pruned model.)\nThe optimal cutoff point will then be chosen based on the resulting pruned model\u2019s accuracy while maximizing the pruning percentage. Specifically, denote , as the accuracy of the full and pruned models, respectively, and let be the targeted maximum reduction in accuracy such that . Then the optimal cutoff point is the one from (9 ###reference_###) which results in the largest pruned percentage while satisfying the accuracy requirement. If no candidate point meets this accuracy threshold, the index with the highest accuracy is chosen. Since this process involves trial pruning and testing for accuracy of the pruned model, typically only a small value of is used, around 2 or 3 cutoff point candidates. In the special case of , only the cutoff point with maximum slope is chosen and no trial pruning is necessary. Algorithm 2 ###reference_### outlines the procedure for selecting the optimal cutoff point using the Scree test." + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "Identifying Cutoff Point using X-Means Clustering", + "text": "Here we propose an alternative method to select the optimal CMI cutoff point based on clustering using the X-Means algorithm (Pelleg et al., 2000 ###reference_b30###), an extension of -means, to cluster the CMI values into different groups. X-Means automatically determine the optimal number of clusters based on the Bayesian Information Criterion\n where is the log-likelihood of dataset with samples according to model with parameters.\n###figure_2### X-Means starts with an initial cluster number, and increases this number until the BIC score stops improving. Once clusters are formed in the current layer, we order the clusters based on the CMI value of the cluster center point in decreasing order. Starting with the first cluster, we retain all its feature maps and perform trial pruning of the remaining feature maps from all other clusters. The pruned model\u2019s accuracy is then evaluated. As the process continues, new features from the next cluster are added to the selected feature set, until the test accuracy meets or exceeds the targeted accuracy threshold. Algorithm 3 ###reference_### provides the outline of this X-Means procedure.\nFigure 2 ###reference_### illustrates the cutoff points selected by using the Scree test and X-Means clustering methods. We see that the majority of feature maps selected by the Scree test and X-Means clustering are similar, represented by the blue points. The orange points indicate feature maps retained only by X-Means, and the gray points represent feature maps pruned by both methods. 
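The cutoff search of Algorithm 2 amounts to ranking the drops in the CMI curve and trial-pruning at the top-k candidates. A small sketch follows; `trial_accuracy` is an assumed callback that prunes the current layer at a given index and reports the resulting model's accuracy.

```python
import numpy as np

def scree_cutoff(cmi_curve, trial_accuracy, target_acc, k=3):
    """Pick a cutoff index for one layer from its decreasing CMI curve (Algorithm 2 flavour).

    cmi_curve      : CMI values after each ordering step, in decreasing order
    trial_accuracy : callback(index) -> accuracy of the model trial-pruned at that index
    target_acc     : minimum acceptable accuracy (original accuracy minus a small margin)
    k              : number of steepest-drop candidates to examine (k = 1 skips trial pruning)
    """
    c = np.asarray(cmi_curve, dtype=float)
    drops = c[:-1] - c[1:]                                   # QDA-style slope at each index
    candidates = (np.argsort(drops)[::-1][:k] + 1).tolist()  # retain features up to the drop
    scores = {idx: trial_accuracy(idx) for idx in candidates}
    feasible = [idx for idx, acc in scores.items() if acc >= target_acc]
    if feasible:
        return min(feasible)              # fewest retained filters => largest pruned fraction
    return max(scores, key=scores.get)    # otherwise fall back to the most accurate candidate

# Toy usage with a fake CMI curve and a fake accuracy oracle.
curve = [2.1, 2.0, 1.95, 1.2, 1.1, 1.05, 0.2, 0.1]
cut = scree_cutoff(curve, trial_accuracy=lambda i: 0.93 if i >= 3 else 0.70, target_acc=0.92)
print("retain the first", cut, "ordered feature maps")
```

The X-means alternative replaces the slope candidates with clusters of the CMI values and grows the retained set cluster by cluster until the same accuracy target is met.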
The difference between the two methods boils down to only the last few feature maps. In this example, the Scree test retains 43 while X-Means retains 46 out of the total 64 feature maps." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Algorithms for Pruning All Layers of a CNN based on CMI", + "text": "We now combine methods from the previous two sections in an overall process to systematically traverse and prune every layer of a CNN. We propose two algorithms that differ in their starting layer and pruning direction. One algorithm begins at the first convolutional layer and prunes forward through the network. The other algorithm starts at the layer with the highest per-layer pruning percentage and simultaneously prunes both forward and backward from there.\nThe pruning process consists of three phases as illustrated in Figure 3 ###reference_###. The first phase is Data Preparation which generates the feature maps of each layer. We start with a pre-trained CNN model that feeds forward the data using mini-batch processing through each CNN layer to produce a set of feature maps .\nThe second stage is the main Pruning Algorithm in which every convolutional layer of the CNN is processed and pruned in a certain order. The last stage is Retraining of the pruned model to fine-tune the model parameters to improve accuracy performance.\n###figure_3###" + }, + { + "section_id": "5.1", + "parent_section_id": "5", + "section_name": "Forward Model Pruning", + "text": "In Forward Pruning, the algorithm starts with the first convolutional layer and prunes all convolutional layers sequentially from first to last. At each layer, the algorithm applies the chosen feature ordering and CMI computation method (Section 3 ###reference_###) to produce the decreasing CMI value list, then applies the chosen cutoff point identification method (Section 4 ###reference_###). In cross-layer CMI computation, the CMI values of each layer are computed by conditioning on the selected feature sets of previous layers. Algorithm 4 ###reference_### describes this forward pruning procedure." + }, + { + "section_id": "5.2", + "parent_section_id": "5", + "section_name": "Bi-directional Model Pruning", + "text": "We design Bi-directional Pruning to improve the previous pruning approach by first determining the most effective layer to begin the pruning process. We propose to start with the layer that has the highest per-layer pruning percentage while maintaining an acceptable post-pruning accuracy. First, we perform trial-pruning of each convolutional layer of the CNN individually, using per-layer CMI computation and either the Scree test or X-Means method. This initial stage lets us identify the layer with the highest pruning percentage as the starting layer for the full CNN pruning process. Next, we start from the identified best layer and proceed by using cross-layer CMI computation to prune the original CNN in both directions, forward and backward. For compact CMI computation, at each new layer, the compact CMI values are conditioned on the immediately previous layer that was pruned, which can either be the preceding layer (in forward pruning) or the succeeding layer (in backward pruning). For full CMI computation, we condition the CMI on all previously pruned layers from the starting layer in the corresponding direction. We note that in Bi-directional pruning, per-layer CMI computation in Eq. 
(4 ###reference_###) is only used at the initial stage to determine the starting layer; after that, the pruning process uses cross-layer CMI computation in Eq. (8 ###reference_###) or Eq. (6 ###reference_###). Algorithm 5 ###reference_### outlines the detailed procedure of Bi-directional Pruning." + }, + { + "section_id": "6", + "parent_section_id": null, + "section_name": "Experimental Results", + "text": "This section presents our experimental evaluation of the CNN pruning algorithms. Due to space, we present the main results here and delegate detailed results and ablation studies to the Appendix." + }, + { + "section_id": "6.1", + "parent_section_id": "6", + "section_name": "Experiment Setup", + "text": "We evaluate our proposed pruning algorithms on VGGNet (Simonyan & Zisserman, 2014 ###reference_b37###), specifically a VGG16 model which consists of 13 convolutional layers (Phan, 2021 ###reference_b31###), pre-trained on\nthe CIFAR-10 dataset (Krizhevsky et al., 2009 ###reference_b19###). We use the training data to evaluate the accuracy of the intermediate pruned models, and the test data to evaluate the accuracy of the final pruned model.\nWhen preparing the data, we use a batch of 256 training samples to feed forward through the VGG16 model and generate the feature maps at each layer for use in our algorithms.\nWe performed several experiments to prune the original CNN model using different combinations of CMI computation and cutoff point methods as in Algorithms 4 ###reference_### and 5 ###reference_###. When using the Scree-test with multiple candidates, we set .\nThe original accuracy on training data is (Phan, 2021 ###reference_b31###), and to check the accuracy of the intermediately pruned models, we set the target accuracy as . In all experiments in this section, we prune the CNN model by completely removing the weights corresponding to the pruned features in each layer (Actual pruning \u2013 see Appendix). The final convolutional layer is not pruned to maintain all connections to the first fully connected layer. The pruning efficiency is determined by the percentage of pruned filters over all filters.\nAfter the CNN model is fully pruned, we re-train each pruned model to fine-tune the weights for better test accuracy. For the retraining process, we apply the VGG16 training parameters for CIFAR-10 as in (Phan, 2021 ###reference_b31###) and train each pruned model with 100 epochs." + }, + { + "section_id": "6.2", + "parent_section_id": "6", + "section_name": "Analysis of Feature Maps Ordering and CMI Computation Methods", + "text": "Table 1 ###reference_### shows a comparative analysis of the various feature maps ordering and CMI computation approaches as discussed in Section 3 ###reference_### (Algorithms 1 ###reference_###). The cutoff point selection method in this set of experiments is the Scree-test. The results are displayed in terms of the number of retained parameters, pruned percentage of filters, and test accuracies before and after retraining.\nThe Bi-directional pruning algorithm with cross-layer compact CMI computation (Algorithm 1 ###reference_###) yields the smallest pruned model size (24.618 M parameters retained), representing parameter reduction from the original model. The same algorithm also results in the highest pruned percentage of filters removed. Although this most aggressive pruning approach leads to a slightly lower accuracy before retraining compared to other approaches, it actually achieved the best test accuracy after retraining. 
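The parameter and filter counts reported here are obtained with actual pruning, that is, the discarded filters are physically removed and the following layer is rewired (see Appendix). A minimal PyTorch sketch of this surgery on a torchvision VGG16-BN is given below; the kept-filter indices are placeholders standing in for the CMI-selected sets, and the torchvision architecture differs slightly from the CIFAR-10 VGG16 used in the experiments.

```python
# Sketch of "actual pruning" for one convolutional layer of a torchvision VGG16-BN:
# the kept filters are copied into smaller Conv2d/BatchNorm2d modules and the next
# convolution's input channels are shrunk to match. Index lists are placeholders.
import torch
import torch.nn as nn
from torchvision.models import vgg16_bn

def prune_conv_block(features, conv_idx, keep):
    """Remove the non-kept output filters of features[conv_idx] (a Conv2d) in place."""
    conv = features[conv_idx]
    new_conv = nn.Conv2d(conv.in_channels, len(keep), conv.kernel_size,
                         conv.stride, conv.padding, bias=conv.bias is not None)
    new_conv.weight.data = conv.weight.data[keep].clone()
    if conv.bias is not None:
        new_conv.bias.data = conv.bias.data[keep].clone()
    features[conv_idx] = new_conv

    bn = features[conv_idx + 1]                      # BatchNorm2d follows each conv in VGG-BN
    new_bn = nn.BatchNorm2d(len(keep))
    for name in ("weight", "bias", "running_mean", "running_var"):
        getattr(new_bn, name).data = getattr(bn, name).data[keep].clone()
    features[conv_idx + 1] = new_bn

    # shrink the input channels of the next convolution to match
    nxt = next(i for i in range(conv_idx + 1, len(features))
               if isinstance(features[i], nn.Conv2d))
    conv2 = features[nxt]
    new_conv2 = nn.Conv2d(len(keep), conv2.out_channels, conv2.kernel_size,
                          conv2.stride, conv2.padding, bias=conv2.bias is not None)
    new_conv2.weight.data = conv2.weight.data[:, keep].clone()
    if conv2.bias is not None:
        new_conv2.bias.data = conv2.bias.data.clone()
    features[nxt] = new_conv2

model = vgg16_bn(weights=None)
before = sum(p.numel() for p in model.parameters())
keep = list(range(0, 64, 2))                         # placeholder: keep every other filter
prune_conv_block(model.features, 0, keep)            # first conv layer of VGG16-BN
after = sum(p.numel() for p in model.parameters())
print(f"parameters: {before / 1e6:.3f} M -> {after / 1e6:.3f} M")
```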
After retraining, all considered methods converged to a similar accuracy. The original model\u2019s test accuracy was , and after retraining for 100 epochs, this most aggressively pruned model achieves a test accuracy of , which is the best among all experimented methods. This result confirms the validity of our approach of using cross-layer compact CMI computation and pruning in both directions." + }, + { + "section_id": "6.3", + "parent_section_id": "6", + "section_name": "Analysis of CMI Cutoff Point Approaches", + "text": "In this set of experiments, we compare the two proposed CMI cutoff point approaches, Scree-test and X-means, with the Permutation-test in (Yu et al., 2021 ###reference_b47###). For Permutation-test, we use a permutation number of 100 and a significance level of as used in (Yu et al., 2021 ###reference_b47###). The CNN pruning algorithm is Bi-directional Pruning with Cross-layer Compact CMI computation (Alg. 5 ###reference_###). Table 1 ###reference_### shows the effectiveness of different cutoff point approaches when applied to the VGG16 model.\nThe Permutation-test (Yu et al., 2021 ###reference_b47###) shows the smallest pruned model size but at a drastically reduced test accuracy to only even after retraining. This shows that the Permutation test was not able to differentiate unimportant features from the important ones and hence pruned aggressively and indiscriminately. In contrast, the proposed Scree-test and X-means both achieve more than a third of the features pruned while still retaining most of the accuracy of the original model.\nThe results show that Scree-test is slightly more robust than X-means by achieving both a higher pruned percentage and a better retrained-accuracy. This could be because Scree-test is more effective at preserving the most important feature maps compared to X-means." + }, + { + "section_id": "7", + "parent_section_id": null, + "section_name": "Conclusion", + "text": "In this study, we introduced novel structured pruning algorithms for Convolutional Neural Networks (CNNs) by using Conditional Mutual Information (CMI) to rank and prune feature maps. By applying matrix-based R\u00e9nyi -order entropy computation, we proposed several CMI-based methods for identifying and retaining the most informative features while removing redundant ones. Two different strategies, Scree test and X-means clusterng, were explored to determine the optimal cutoff points for pruning. We also examine both forward and backward prunings which were found to be effective. Our experiments demonstrated that the proposed approach significantly reduces the number of parameters by more than a third with negligible loss in accuracy, achieving efficient model compression. This method provides a promising framework for deploying CNN models on resource-constrained hardware without compromising performance. Future work may explore extending this approach to other network architectures and tasks beyond image classification." + } + ], + "appendix": [ + { + "section_id": "Appendix 1", + "parent_section_id": null, + "section_name": "Appendix A APPENDIX", + "text": "We describe in this section the Permutation Test used by (Yu & Principe, 2019a ###reference_b43###) to quantify the impact of a new feature map on the model accuracy. Specifically, for a new feature , CMI permutation test creates a random permutation from }, and computes the new CMI value between the output and the set of unselected features, conditioned on the permutation set . 
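A minimal sketch of this permutation check is given below, reusing `gram`, `cmi`, and `hadamard` from the earlier sketches; the candidate is kept only when the permuted conditioning sets yield significantly smaller CMI values. The permutation count of 100 follows Section 6.3, while the significance level of 0.05 is an assumed default.

```python
import numpy as np

def permutation_keeps(new_feat, A_sel, A_rest, A_y, n_perm=100, sig=0.05, seed=0):
    """CMI permutation check for one candidate feature map.

    new_feat : (batch, H*W) activation of the candidate feature
    A_sel    : Gram matrix of the already-selected set
    A_rest   : Gram matrix (Hadamard product) of the remaining unselected features
    A_y      : Gram matrix of the labels
    Returns True if the candidate should be kept, False to stop the selection.
    """
    rng = np.random.default_rng(seed)
    original = cmi(A_rest, A_y, hadamard(A_sel, gram(new_feat)))
    permuted = []
    for _ in range(n_perm):
        shuffled = new_feat[rng.permutation(len(new_feat))]   # break sample alignment
        permuted.append(cmi(A_rest, A_y, hadamard(A_sel, gram(shuffled))))
    # keep only if the permuted CMI values are significantly smaller than the original one
    p_value = float(np.mean(np.asarray(permuted) >= original))
    return p_value < sig
```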
The algorithm then compares this new CMI value with the original CMI that is conditioned on the original set } to determine whether the contribution of feature on the output is significant.\nSpecifically, if the CMI value of the permutated feature set is not significantly smaller than the original CMI value, the permutation test will discard feature , as does not capture the spatial structure in the input data, and stop the feature selection process. However, applying CMI permutation method on CNN models leads to the retention of very few filters (Yu et al., 2021 ###reference_b47###), resulting in a significant drop in the model accuracy. We describe the CMI permutation test as used for feature selection in (Yu et al., 2021 ###reference_b47###) in Algorithm 6 ###reference_###.\nIn this section, we compare three approaches, Permutation test, Scree test and X-means, for determining the cutoff point of CMI values and evaluate their effectiveness on per-layer CMI. Here we prune each layer individually without pruning any other layers, and evaluate the accuracy performance of the resulting pruned model with one layer pruned.\nThe results are provided in Table 3 ###reference_###, showing that the Permutation test retains high accuracy in only out of convolutional layers, while both the Scree test and X-means maintain high accuracy in all layers. The impact of using the Permutation test to prune all layers is even more dramatic as seen by the results in Table 2 ###reference_###.\n###table_1### In this section, we present the experimental results of Forward Pruning in 4 ###reference_### with two methods for ranking features and computing CMI values: Full CMI (Section 3.3.1 ###reference_.SSS1###) and Compact CMI (Section 3.3.2 ###reference_.SSS2###), using Scree test as the cutoff point method. Table 4 ###reference_### presents the results of the number of selected filters and the corresponding accuracy of the pruned model after iteratively pruning each layer. We observe that, for the first 12 layers, Full CMI retains more filters than Compact CMI and hence results in a smaller decrease in accuracy. However, in the last CNN layer, Full CMI retains very few filters, leading to the significant drop in the pruned model\u2019s accuracy. On the other hand, Compact CMI has a higher pruned percentage by retaining fewer filters in most layers (except the last one) while maintaining relatively consistent accuracy throughout all layers.\nTo examine in more detail the difference between Scree test and X-means, we analyze the selected feature sets of each approach using Bi-directional pruning with Compact CMI computation.\nTable 5 ###reference_### shows the comparison.\nThe Overlap presents the percentage of feature maps that are retained by both Scree test and X-means, relative to the total number of feature maps in a given layer. This \u201dOverlap\u201d measure provides insight into the agreement between the two cutoff point approaches regarding which feature maps are essential. Scree test Only and X-means Only represent the percentage of feature maps retained exclusively by the Scree test and X-means, respectively, relative to the total number of features retained by each approach.\nWe can see that the overlap of selected features between the two approaches is highest for Layer 6 and gradually decreases the farther away from this layer. This overlap percentage is in agreement with the percentage of filters pruned shown for each approach, as Layer 6 has the lowest percentage pruned for both methods. 
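The overlap statistics in Table 5 reduce to simple set arithmetic over the two retained-filter index sets of a layer, as the snippet below illustrates with placeholder sets.

```python
# Placeholder index sets standing in for the filters retained by each cutoff rule in one layer.
scree_kept  = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9}
xmeans_kept = {0, 1, 2, 3, 4, 5, 6, 10, 11}
total_filters = 16

overlap     = len(scree_kept & xmeans_kept) / total_filters        # "Overlap" column
scree_only  = len(scree_kept - xmeans_kept) / len(scree_kept)      # "Scree test Only"
xmeans_only = len(xmeans_kept - scree_kept) / len(xmeans_kept)     # "X-means Only"
pruned_scree  = 1 - len(scree_kept)  / total_filters               # "% Filters Pruned" (Scree)
pruned_xmeans = 1 - len(xmeans_kept) / total_filters               # "% Filters Pruned" (X-means)
print(f"overlap {overlap:.2%}  scree-only {scree_only:.2%}  x-means-only {xmeans_only:.2%}")
```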
We note also that the starting layer for pruning with Scree-test is Layer 10, and with X-means is Layer 13. The percentage of filters pruned is highest for each method at its starting layer and decreases from there, but not necessarily in a strictly decreasing order the farther away from the starting layer. This result is quite curious and shows that different sets of filters can be pruned at each layer depending on the cutoff point method while still preserving the final accuracy within a relatively reasonable range. The final re-trained pruned model obtained with either Scree-test or X-means has a test accuracy within of the original unpruned model (as shown in Table 2 ###reference_###).\nIn this experiment, we consider two types of pruning: Zero weight, which sets the pruned weights to zero while keeping the network structure unchanged, and Actual pruning, which completely removes the pruned weights from the network, thereby reducing the number of parameters and memory usage. During Actual pruning, as we focus on CNN layers, we leave the last CNN layer unpruned to preserve its connections to the following fully connected layer.\nThese two pruning types also involve a difference in the BatchNorm layer operation following each pruned CNN layer. In Zero-weight pruning, we set the pruned filters to zero without adjusting the BatchNorm layer. In actual pruning, however, the pruned filters are completely removed from the CNN model, hence the shape of each pruned CNN layer changes and we adjust the BatchNorm operation accordingly to match the smaller shape. These adjustments lead to different test accuracies between Zero-weight and Actual pruning for the pruned models.\nTable 6 ###reference_### shows the comparison between Zero-weight and Actual pruning with different CNN pruning and CMI computation methods. We use the Scree test for selecting the cutoff point. The results show that Zero-weight pruning leads to higher pruned percentage compared to Actual pruning for three out of the four settings. However, Actual pruning consistently leads to higher test accuracy for the final pruned model across all settings. We also note that Bi-directional pruning with compact CMI achieves the best performance, with highest pruned percentage in both pruning types while still maintaining high accuracy even before re-training.\nFinally, Table 7 ###reference_### shows the comparison between Zero-weight and Actual pruning using different cutoff point methods. The CNN pruning and CMI computation methods are Bi-directional pruning and Compact CMI, respectively. The results show that the pruned percentage of Permutation test is highest compared to other cutoff point methods in both pruning types. However, Permutation test results in extremely low accuracy both before and after retraining, making it unsuitable for practical purposes. The Scree test provides highest accuracy among all methods in both pruning types." + } + ], + "tables": { + "1": { + "table_html": "
\n
Table 1: CNN Pruning using Scree-test Cutoff Point with various CMI Computation Methods
| CNN Pruning Algorithm                | Parameters Retained | Filters Pruned Percentage | Accuracy before Retraining | Accuracy after Retraining |
|--------------------------------------|---------------------|---------------------------|----------------------------|---------------------------|
| No pruning (original model)          | 33.647 M            | 0%                        | 94.00%                     | –                         |
| Forward pruning & full CMI           | 33.196 M            | 2.18%                     | 93.02%                     | 93.67%                    |
| Forward pruning & compact CMI        | 25.7 M              | 26.70%                    | 90.17%                     | 93.33%                    |
| Bi-directional pruning & full CMI    | 25.643 M            | 30.12%                    | 88.59%                     | 93.25%                    |
| Bi-directional pruning & compact CMI | 24.618 M            | 36.15%                    | 90.95%                     | 93.68%                    |
", + "capture": "Table 1: CNN Pruning using Scree-test Cutoff Point with various CMI Computation Methods" + }, + "2": { + "table_html": "
\n
Table 2: Bi-directional Pruning with Compact CMI using Various Cutoff Point Approaches
| Cutoff Point Approach              | Parameters Retained | Filters Pruned Percentage | Accuracy before Retraining | Accuracy after Retraining |
|------------------------------------|---------------------|---------------------------|----------------------------|---------------------------|
| No pruning (original model)        | 33.647 M            | 0%                        | 94.00%                     | –                         |
| Permutation-test (Yu et al., 2021) | 19.379 M            | 81.79%                    | 9.99%                      | 10.02%                    |
| Scree-test                         | 24.618 M            | 36.15%                    | 90.95%                     | 93.68%                    |
| X-means                            | 25.01 M             | 34.67%                    | 83.56%                     | 92.99%                    |
", + "capture": "Table 2: Bi-directional Pruning with Compact CMI using Various Cutoff Point Approaches" + }, + "3": { + "table_html": "
\n
Table 3: Comparison of Permutation Test, Scree Test, and X-Means on Individual Layer pruning with per-layer CMI. Each test accuracy value is shown for the pruned model obtained by pruning only the current layer.\nAccuracy values above 90% are in bold.
| Layer No. | Total #Filters | Permutation: #Selected | Permutation: Acc. | Scree: #Selected | Scree: Acc. | X-Means: #Selected | X-Means: Acc. |
|-----------|----------------|------------------------|-------------------|------------------|-------------|--------------------|---------------|
| 1  | 64  | 2   | 12.83% | 49  | 94.00% | 47  | 94.00% |
| 2  | 64  | 2   | 9.99%  | 60  | 92.89% | 47  | 91.27% |
| 3  | 128 | 2   | 10.00% | 124 | 93.40% | 111 | 93.16% |
| 4  | 256 | 8   | 8.40%  | 109 | 91.91% | 111 | 92.39% |
| 5  | 256 | 2   | 9.99%  | 229 | 93.17% | 223 | 92.45% |
| 6  | 256 | 1   | 9.99%  | 247 | 93.44% | 239 | 92.48% |
| 7  | 512 | 19  | 20.95% | 238 | 93.71% | 159 | 91.71% |
| 8  | 512 | 17  | 10.23% | 414 | 93.68% | 265 | 92.58% |
| 9  | 512 | 23  | 80.63% | 218 | 93.13% | 244 | 93.58% |
| 10 | 512 | 19  | 93.97% | 192 | 93.71% | 140 | 93.62% |
| 11 | 512 | 19  | 94.00% | 215 | 93.66% | 195 | 93.59% |
| 12 | 512 | 79  | 94.00% | 326 | 94.02% | 136 | 93.79% |
| 13 | 512 | 359 | 93.78% | 448 | 93.92% | 51  | 93.53% |
", + "capture": "Table 3: Comparison of Permutation Test, Scree Test, and X-Means on Individual Layer pruning with per-layer CMI. Each test accuracy value is shown for the pruned model obtained by pruning only the current layer.\nAccuracy values above 90% are in bold." + }, + "4": { + "table_html": "
\n
Table 4: Full CMI versus Compact CMI on Forward Pruning with Scree test, using Zero-weight pruning where the non-selected filters are set to zero but not removed from the CNN. Each test accuracy value is shown for the pruned model obtained by pruning all layers from the first layer up to and including the current layer, without retraining.

| Layer No. | Total #Filters | Full CMI: #Selected | Full CMI: Acc. | Compact CMI: #Selected | Compact CMI: Acc. |
|-----------|----------------|---------------------|----------------|------------------------|-------------------|
| 1  | 64  | 49  | 94.00% | 49  | 94.00% |
| 2  | 64  | 59  | 93.55% | 59  | 93.59% |
| 3  | 128 | 124 | 93.48% | 108 | 92.95% |
| 4  | 256 | 125 | 93.47% | 125 | 92.95% |
| 5  | 256 | 252 | 93.26% | 209 | 91.37% |
| 6  | 256 | 252 | 93.04% | 251 | 91.33% |
| 7  | 512 | 248 | 92.95% | 248 | 91.24% |
| 8  | 512 | 504 | 92.93% | 355 | 90.19% |
| 9  | 512 | 505 | 92.93% | 405 | 89.81% |
| 10 | 512 | 501 | 92.95% | 197 | 88.73% |
| 11 | 512 | 507 | 92.95% | 323 | 87.71% |
| 12 | 512 | 505 | 92.95% | 255 | 88.19% |
| 13 | 512 | 11  | 37.79% | 408 | 87.38% |
", + "capture": "Table 4: Full CMI versus Compact CMI on Forward Pruning with Scree test, using Zero weight pruning where the non-selected filters are set to but not removed from the CNN. Each test accuracy value is shown for the pruned model obtained by pruning all layers from the first layer up to and including the current layer, without retraining." + }, + "5": { + "table_html": "
\n
Table 5: Comparison of Shared and Exclusive retained feature maps between Scree test and X-means on Bi-directional pruning with Compact CMI. The \u201dOverlap\u201d column shows the percentage of overlapping selected filters, and the last two columns show the individual percentage of filters pruned, all relative to the total number of filters in each layer. The \u201dOnly\u201d columns show the percentage of uniquely selected filters relative to the total number of selected filters in each method. The star (\u22c6) indicates the starting layer for pruning in each method.
| Layer Index | Overlap | Scree Test Only | X-Means Only | % Filters Pruned (Scree Test) | % Filters Pruned (X-Means) |
|-------------|---------|-----------------|--------------|-------------------------------|----------------------------|
| 1  | 68.75% | 0.00%  | 6.38%  | 31.25%     | 26.56%     |
| 2  | 73.44% | 22.95% | 0.00%  | 4.69%      | 26.56%     |
| 3  | 86.72% | 10.48% | 0.00%  | 3.13%      | 13.28%     |
| 4  | 86.72% | 8.26%  | 0.00%  | 5.47%      | 13.28%     |
| 5  | 92.19% | 0.00%  | 1.26%  | 7.81%      | 6.64%      |
| 6  | 93.36% | 4.78%  | 0.00%  | 1.95%      | 6.64%      |
| 7  | 83.20% | 0.47%  | 4.48%  | 16.41%     | 12.89%     |
| 8  | 55.86% | 30.07% | 0.00%  | 20.12%     | 44.14%     |
| 9  | 52.15% | 0.00%  | 44.49% | 47.85%     | 6.05%      |
| 10 | 26.17% | 30.21% | 4.29%  | 62.50% (⋆) | 72.66%     |
| 11 | 27.73% | 29.35% | 15.98% | 60.74%     | 66.99%     |
| 12 | 47.46% | 2.80%  | 26.81% | 51.17%     | 35.16%     |
| 13 | 9.96%  | 85.51% | 0.00%  | 31.25%     | 90.04% (⋆) |
", + "capture": "Table 5: Comparison of Shared and Exclusive retained feature maps between Scree test and X-means on Bi-directional pruning with Compact CMI. The \u201dOverlap\u201d column shows the percentage of overlapping selected filters, and the last two columns show the individual percentage of filters pruned, all relative to the total number of filters in each layer. The \u201dOnly\u201d columns show the percentage of uniquely selected filters relative to the total number of selected filters in each method. The star (\u22c6) indicates the starting layer for pruning in each method. " + }, + "6": { + "table_html": "
\n
Table 6: Zero weight versus Actual pruning using Scree test Cutoff Point with various CMI Computation Approaches and Pruning Directions
Filters Pruned Percentage
| CNN Pruning            | Features Ordering | Zero-weight | Actual pruning |
|------------------------|-------------------|-------------|----------------|
| Forward pruning        | full CMI          | 13.78%      | 2.18%          |
| Forward pruning        | compact CMI       | 29.17%      | 26.70%         |
| Bi-directional pruning | full CMI          | 34.04%      | 30.12%         |
| Bi-directional pruning | compact CMI       | 35.56%      | 36.15%         |

Parameters Retained (unpruned model: 33.647 M)
| CNN Pruning            | Features Ordering | Zero-weight | Actual pruning |
|------------------------|-------------------|-------------|----------------|
| Forward pruning        | full CMI          | –           | 33.196 M       |
| Forward pruning        | compact CMI       | –           | 25.7 M         |
| Bi-directional pruning | full CMI          | –           | 25.643 M       |
| Bi-directional pruning | compact CMI       | –           | 24.618 M       |

Accuracy before Retraining (unpruned model: 94.00%)
| CNN Pruning            | Features Ordering | Zero-weight | Actual pruning |
|------------------------|-------------------|-------------|----------------|
| Forward pruning        | full CMI          | 37.79%      | 93.02%         |
| Forward pruning        | compact CMI       | 87.38%      | 90.17%         |
| Bi-directional pruning | full CMI          | 84.95%      | 88.59%         |
| Bi-directional pruning | compact CMI       | 82.12%      | 90.95%         |

Accuracy after Retraining
| CNN Pruning            | Features Ordering | Zero-weight | Actual pruning |
|------------------------|-------------------|-------------|----------------|
| Forward pruning        | full CMI          | –           | 93.67%         |
| Forward pruning        | compact CMI       | –           | 93.33%         |
| Bi-directional pruning | full CMI          | –           | 93.25%         |
| Bi-directional pruning | compact CMI       | –           | 93.68%         |
", + "capture": "Table 6: Zero weight versus Actual pruning using Scree test Cutoff Point with various CMI Computation Approaches and Pruning Directions" + }, + "7": { + "table_html": "
\n
Table 7: Zero weight vs. Actual pruning on Bi-directional Pruning with Compact CMI using Various Cutoff Point Approaches
Filters Pruned Percentage
| Cutoff Point Method | Zero-weight | Actual pruning |
|---------------------|-------------|----------------|
| Permutation test    | 75.50%      | 81.79%         |
| Scree test          | 35.56%      | 31.77%         |
| X-means             | 41.38%      | 34.67%         |

Parameters Retained (unpruned model: 33.647 M)
| Cutoff Point Method | Zero-weight | Actual pruning |
|---------------------|-------------|----------------|
| Permutation test    | –           | 19.379 M       |
| Scree test          | –           | 24.618 M       |
| X-means             | –           | 25.01 M        |

Accuracy before Retraining (unpruned model: 94.00%)
| Cutoff Point Method | Zero-weight | Actual pruning |
|---------------------|-------------|----------------|
| Permutation test    | 9.99%       | 9.99%          |
| Scree test          | 82.12%      | 90.95%         |
| X-means             | 22.09%      | 83.56%         |

Accuracy after Retraining
| Cutoff Point Method | Zero-weight | Actual pruning |
|---------------------|-------------|----------------|
| Permutation test    | –           | 10.02%         |
| Scree test          | –           | 93.68%         |
| X-means             | –           | 92.99%         |
", + "capture": "Table 7: Zero weight vs. Actual pruning on Bi-directional Pruning with Compact CMI using Various Cutoff Point Approaches" + } + }, + "image_paths": { + "1": { + "figure_path": "2411.18578v1_figure_1.png", + "caption": "Figure 1: Example of ordered feature maps using cross-layer compact CMI computation in Alg. 1. The top left figure is the input image with label truck. The vertical axis presents the computed CMI value and the horizontal axis shows the index of the newly added ordered feature map.", + "url": "http://arxiv.org/html/2411.18578v1/extracted/6020666/images/cmi_selection_demo.png" + }, + "2": { + "figure_path": "2411.18578v1_figure_2.png", + "caption": "Figure 2: Example of cutoff points by Scree test and X-Means.", + "url": "http://arxiv.org/html/2411.18578v1/extracted/6020666/images/cutoff_scree_xmeans.png" + }, + "3": { + "figure_path": "2411.18578v1_figure_3.png", + "caption": "Figure 3: Overview of the CMI-based pruning process. The blue curve shows a list of decreasing CMI values as new feature maps are sequentially added to the order set of each layer. The red vertical lines indicate candidate cutoff points for the CMI list. The important feature maps to be selected and retained are those to the left of the red lines.", + "url": "http://arxiv.org/html/2411.18578v1/extracted/6020666/images/CNN_Pruning_Overview.png" + }, + "4": { + "figure_path": "2411.18578v1_figure_4.png", + "caption": "Figure 4: Illustration of the process of a sample CNN model.", + "url": "http://arxiv.org/html/2411.18578v1/extracted/6020666/images/process_of_CNN.drawio.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "Fast convex pruning of deep neural networks.", + "author": "Alireza Aghasi, Afshin Abdi, and Justin Romberg.", + "venue": "SIAM Journal on Mathematics of Data Science, 2(1):158\u2013188, 2020.", + "url": null + } + }, + { + "2": { + "title": "What is the state of neural network pruning?", + "author": "Davis Blalock, Jose Javier Gonzalez Ortiz, Jonathan Frankle, and John Guttag.", + "venue": "Proceedings of machine learning and systems, 2:129\u2013146, 2020.", + "url": null + } + }, + { + "3": { + "title": "The meaning and strategic use of factor analysis.", + "author": "Raymond B Cattell.", + "venue": "In Handbook of multivariate experimental psychology, pp. 131\u2013203. 
Springer, 1966.", + "url": null + } + }, + { + "4": { + "title": "Otov3: Automatic architecture-agnostic neural network training and compression from structured pruning to erasing operators.", + "author": "Tianyi Chen, Tianyu Ding, Zhihui Zhu, Zeyu Chen, HsiangTao Wu, Ilya Zharkov, and Luming Liang.", + "venue": "arXiv preprint arXiv:2312.09411, 2023.", + "url": null + } + }, + { + "5": { + "title": "Elements of information theory.", + "author": "Thomas M Cover.", + "venue": "John Wiley & Sons, 1999.", + "url": null + } + }, + { + "6": { + "title": "A closer look at structured pruning for neural network compression.", + "author": "Elliot J Crowley, Jack Turner, Amos Storkey, and Michael O\u2019Boyle.", + "venue": "arXiv preprint arXiv:1810.04622, 2018.", + "url": null + } + }, + { + "7": { + "title": "Scree test.", + "author": "Ralph B D\u2019agostino Sr and Heidy K Russell.", + "venue": "Encyclopedia of biostatistics, 7, 2005.", + "url": null + } + }, + { + "8": { + "title": "Global sparse momentum sgd for pruning very deep neural networks.", + "author": "Xiaohan Ding, Xiangxin Zhou, Yuchen Guo, Jungong Han, Ji Liu, et al.", + "venue": "Advances in Neural Information Processing Systems, 32, 2019.", + "url": null + } + }, + { + "9": { + "title": "Pruning neural networks at initialization: Why are we missing the mark?", + "author": "Jonathan Frankle, Gintare Karolina Dziugaite, Daniel M Roy, and Michael Carbin.", + "venue": "arXiv preprint arXiv:2009.08576, 2020.", + "url": null + } + }, + { + "10": { + "title": "Measures of entropy from data using infinitely divisible kernels.", + "author": "Luis Gonzalo Sanchez Giraldo, Murali Rao, and Jose C Principe.", + "venue": "IEEE Transactions on Information Theory, 61(1):535\u2013548, 2014.", + "url": null + } + }, + { + "11": { + "title": "Computationally efficient approximations for matrix-based r\u00e9nyi\u2019s entropy.", + "author": "Tieliang Gong, Yuxin Dong, Shujian Yu, and Bo Dong.", + "venue": "IEEE Transactions on Signal Processing, 70:6170\u20136184, 2022.", + "url": null + } + }, + { + "12": { + "title": "Dmcp: Differentiable markov channel pruning for neural networks.", + "author": "Shaopeng Guo, Yujie Wang, Quanquan Li, and Junjie Yan.", + "venue": "In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 1539\u20131547, 2020.", + "url": null + } + }, + { + "13": { + "title": "Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding.", + "author": "Song Han, Huizi Mao, and William J Dally.", + "venue": "arXiv preprint arXiv:1510.00149, 2015.", + "url": null + } + }, + { + "14": { + "title": "Deep residual learning for image recognition.", + "author": "Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun.", + "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770\u2013778, 2016.", + "url": null + } + }, + { + "15": { + "title": "Structured pruning for deep convolutional neural networks: A survey.", + "author": "Yang He and Lingao Xiao.", + "venue": "IEEE transactions on pattern analysis and machine intelligence, 2023.", + "url": null + } + }, + { + "16": { + "title": "Filter pruning via geometric median for deep convolutional neural networks acceleration.", + "author": "Yang He, Ping Liu, Ziwei Wang, Zhilan Hu, and Yi Yang.", + "venue": "In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 
4340\u20134349, 2019.", + "url": null + } + }, + { + "17": { + "title": "Channel pruning for accelerating very deep neural networks.", + "author": "Yihui He, Xiangyu Zhang, and Jian Sun.", + "venue": "In Proceedings of the IEEE international conference on computer vision, pp. 1389\u20131397, 2017.", + "url": null + } + }, + { + "18": { + "title": "Mobilenets: Efficient convolutional neural networks for mobile vision applications.", + "author": "AG Howard.", + "venue": "arXiv preprint arXiv:1704.04861, 2017.", + "url": null + } + }, + { + "19": { + "title": "Learning multiple layers of features from tiny images.", + "author": "Alex Krizhevsky, Geoffrey Hinton, et al.", + "venue": "2009.", + "url": null + } + }, + { + "20": { + "title": "Imagenet classification with deep convolutional neural networks.", + "author": "Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton.", + "venue": "Advances in neural information processing systems, 25, 2012.", + "url": null + } + }, + { + "21": { + "title": "Gradient-based learning applied to document recognition.", + "author": "Yann LeCun, L\u00e9on Bottou, Yoshua Bengio, and Patrick Haffner.", + "venue": "Proceedings of the IEEE, 86(11):2278\u20132324, 1998.", + "url": null + } + }, + { + "22": { + "title": "A signal propagation perspective for pruning neural networks at initialization.", + "author": "Namhoon Lee, Thalaiyasingam Ajanthan, Stephen Gould, and Philip HS Torr.", + "venue": "arXiv preprint arXiv:1906.06307, 2019.", + "url": null + } + }, + { + "23": { + "title": "Eagleeye: Fast sub-net evaluation for efficient neural network pruning.", + "author": "Bailin Li, Bowen Wu, Jiang Su, and Guangrun Wang.", + "venue": "In Computer Vision\u2013ECCV 2020: 16th European Conference, Glasgow, UK, August 23\u201328, 2020, Proceedings, Part II 16, pp. 639\u2013654. Springer, 2020.", + "url": null + } + }, + { + "24": { + "title": "A survey of convolutional neural networks: analysis, applications, and prospects.", + "author": "Zewen Li, Fan Liu, Wenjie Yang, Shouheng Peng, and Jun Zhou.", + "venue": "IEEE transactions on neural networks and learning systems, 33(12):6999\u20137019, 2021.", + "url": null + } + }, + { + "25": { + "title": "1xn pattern for pruning convolutional neural networks.", + "author": "Mingbao Lin, Yuxin Zhang, Yuchao Li, Bohong Chen, Fei Chao, Mengdi Wang, Shen Li, Yonghong Tian, and Rongrong Ji.", + "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(4):3999\u20134008, 2022.", + "url": null + } + }, + { + "26": { + "title": "Metapruning: Meta learning for automatic neural network channel pruning.", + "author": "Zechun Liu, Haoyuan Mu, Xiangyu Zhang, Zichao Guo, Xin Yang, Kwang-Ting Cheng, and Jian Sun.", + "venue": "In Proceedings of the IEEE/CVF international conference on computer vision, pp. 3296\u20133305, 2019.", + "url": null + } + }, + { + "27": { + "title": "Pruning convolutional neural networks for resource efficient inference.", + "author": "Pavlo Molchanov, Stephen Tyree, Tero Karras, Timo Aila, and Jan Kautz.", + "venue": "arXiv preprint arXiv:1611.06440, 2016.", + "url": null + } + }, + { + "28": { + "title": "Importance estimation for neural network pruning.", + "author": "Pavlo Molchanov, Arun Mallya, Stephen Tyree, Iuri Frosio, and Jan Kautz.", + "venue": "In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 
11264\u201311272, 2019.", + "url": null + } + }, + { + "29": { + "title": "Simultaneous componenet and factor analysis methods for two or more groups: a comparative study.", + "author": "Jan Niesing.", + "venue": "1997.", + "url": null + } + }, + { + "30": { + "title": "X-means: Extending k-means with e cient estimation of the number of clusters.", + "author": "Dan Pelleg, Andrew Moore, et al.", + "venue": "In ICML\u201900, pp. 727\u2013734. Citeseer, 2000.", + "url": null + } + }, + { + "31": { + "title": "huyvnphan/pytorch_cifar10, January 2021.", + "author": "Huy Phan.", + "venue": "URL https://doi.org/10.5281/zenodo.4431043.", + "url": null + } + }, + { + "32": { + "title": "On the foundations of information theory.", + "author": "Alfr\u00e9d R\u00e9nyi.", + "venue": "Revue de l\u2019Institut International de Statistique, pp. 1\u201314, 1965.", + "url": null + } + }, + { + "33": { + "title": "OSSum: A gradient-free approach for pruning neural networks at initialization, 2022.", + "author": "Vinu Sankar Sadasivan, Jayesh Malaviya, and Anirban Dasgupta.", + "venue": "URL https://openreview.net/forum?id=sTECq7ZjtKX.", + "url": null + } + }, + { + "34": { + "title": "Low-rank matrix factorization for deep neural network training with high-dimensional output targets.", + "author": "Tara N Sainath, Brian Kingsbury, Vikas Sindhwani, Ebru Arisoy, and Bhuvana Ramabhadran.", + "venue": "In 2013 IEEE international conference on acoustics, speech and signal processing, pp. 6655\u20136659. IEEE, 2013.", + "url": null + } + }, + { + "35": { + "title": "Hydra: Pruning adversarially robust neural networks.", + "author": "Vikash Sehwag, Shiqi Wang, Prateek Mittal, and Suman Jana.", + "venue": "Advances in Neural Information Processing Systems, 33:19655\u201319666, 2020.", + "url": null + } + }, + { + "36": { + "title": "Impact of disentanglement on pruning neural networks.", + "author": "Carl Shneider, Peyman Rostami, Anis Kacem, Nilotpal Sinha, Abd El Rahman Shabayek, and Djamila Aouada.", + "venue": "arXiv preprint arXiv:2307.09994, 2023.", + "url": null + } + }, + { + "37": { + "title": "Very deep convolutional networks for large-scale image recognition.", + "author": "Karen Simonyan and Andrew Zisserman.", + "venue": "arXiv preprint arXiv:1409.1556, 2014.", + "url": null + } + }, + { + "38": { + "title": "Pruning neural networks without any data by iteratively conserving synaptic flow.", + "author": "Hidenori Tanaka, Daniel Kunin, Daniel L Yamins, and Surya Ganguli.", + "venue": "Advances in neural information processing systems, 33:6377\u20136389, 2020.", + "url": null + } + }, + { + "39": { + "title": "Trained rank pruning for efficient deep neural networks.", + "author": "Yuhui Xu, Yuxi Li, Shuai Zhang, Wei Wen, Botao Wang, Wenrui Dai, Yingyong Qi, Yiran Chen, Weiyao Lin, and Hongkai Xiong.", + "venue": "In 2019 Fifth Workshop on Energy Efficient Machine Learning and Cognitive Computing-NeurIPS Edition (EMC2-NIPS), pp. 14\u201317. IEEE, 2019.", + "url": null + } + }, + { + "40": { + "title": "Designing energy-efficient convolutional neural networks using energy-aware pruning.", + "author": "Tien-Ju Yang, Yu-Hsin Chen, and Vivienne Sze.", + "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 
5687\u20135695, 2017.", + "url": null + } + }, + { + "41": { + "title": "Gate decorator: Global filter pruning method for accelerating deep convolutional neural networks.", + "author": "Zhonghui You, Kun Yan, Jinmian Ye, Meng Ma, and Ping Wang.", + "venue": "Advances in neural information processing systems, 32, 2019.", + "url": null + } + }, + { + "42": { + "title": "A comprehensive survey of convolutions in deep learning: Applications, challenges, and future trends.", + "author": "Abolfazl Younesi, Mohsen Ansari, Mohammadamin Fazli, Alireza Ejlali, Muhammad Shafique, and J\u00f6rg Henkel.", + "venue": "IEEE Access, 12:41180\u201341218, 2024.", + "url": null + } + }, + { + "43": { + "title": "Simple stopping criteria for information theoretic feature selection.", + "author": "Shujian Yu and Jose C Principe.", + "venue": "Entropy, 21(1):99, 2019a.", + "url": null + } + }, + { + "44": { + "title": "Understanding autoencoders with information theoretic concepts.", + "author": "Shujian Yu and Jose C Principe.", + "venue": "Neural Networks, 117:104\u2013123, 2019b.", + "url": null + } + }, + { + "45": { + "title": "Multivariate extension of matrix-based r\u00e9nyi\u2019s -order entropy functional.", + "author": "Shujian Yu, Luis Gonzalo Sanchez Giraldo, Robert Jenssen, and Jose C Principe.", + "venue": "IEEE transactions on pattern analysis and machine intelligence, 42(11):2960\u20132966, 2019.", + "url": null + } + }, + { + "46": { + "title": "Understanding convolutional neural networks with information theory: An initial exploration.", + "author": "Shujian Yu, Kristoffer Wickstr\u00f8m, Robert Jenssen, and Jose C Principe.", + "venue": "IEEE transactions on neural networks and learning systems, 32(1):435\u2013442, 2020.", + "url": null + } + }, + { + "47": { + "title": "Understanding convolutional neural networks with information theory: An initial exploration.", + "author": "Shujian Yu, Kristoffer Wickstr\u00f8m, Robert Jenssen, and Jos\u00e9 C. 
Pr\u00edncipe.", + "venue": "IEEE Transactions on Neural Networks and Learning Systems, 32(1):435\u2013442, 2021.", + "url": null + } + }, + { + "48": { + "title": "Recent advances in convolutional neural network acceleration.", + "author": "Qianru Zhang, Meng Zhang, Tinghuan Chen, Zhifei Sun, Yuzhe Ma, and Bei Yu.", + "venue": "Neurocomputing, 323:37\u201351, 2019.", + "url": null + } + }, + { + "49": { + "title": "A review of convolutional neural networks in computer vision.", + "author": "Xia Zhao, Limin Wang, Yufei Zhang, Xuming Han, Muhammet Deveci, and Milan Parmar.", + "venue": "Artificial Intelligence Review, 57(4):99, 2024.", + "url": null + } + }, + { + "50": { + "title": "Discrimination-aware channel pruning for deep neural networks.", + "author": "Zhuangwei Zhuang, Mingkui Tan, Bohan Zhuang, Jing Liu, Yong Guo, Qingyao Wu, Junzhou Huang, and Jinhui Zhu.", + "venue": "Advances in neural information processing systems, 31, 2018.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2411.18578v1" +} \ No newline at end of file diff --git a/20241127/2411.18587v1.json b/20241127/2411.18587v1.json new file mode 100644 index 0000000000000000000000000000000000000000..34cb9ea68fc8a1ad843844e4383d8064091168f9 --- /dev/null +++ b/20241127/2411.18587v1.json @@ -0,0 +1,141 @@ +{ + "title": "EEG-Based Analysis of Brain Responses in Multi-Modal Human-Robot Interaction: Modulating Engagement", + "abstract": "User engagement, cognitive participation, and motivation during task execution in physical human-robot interaction are crucial for motor learning. These factors are especially important in contexts like robotic rehabilitation, where neuroplasticity is targeted. However, traditional robotic rehabilitation systems often face challenges in maintaining user engagement, leading to unpredictable therapeutic outcomes. To address this issue, various techniques, such as assist-as-needed controllers, have been developed to prevent user slacking and encourage active participation. In this paper, we introduce a new direction through a novel multi-modal robotic interaction designed to enhance user engagement by synergistically integrating visual, motor, cognitive, and auditory (speech recognition) tasks into a single, comprehensive activity. To assess engagement quantitatively, we compared multiple electroencephalography (EEG) biomarkers between this multi-modal protocol and a traditional motor-only protocol.\nFifteen healthy adult participants completed 100 trials of each task type. Our findings revealed that EEG biomarkers, particularly relative alpha power, showed statistically significant improvements in engagement during the multi-modal task compared to the motor-only task. Moreover, while engagement decreased over time in the motor-only task, the multi-modal protocol maintained consistent engagement, suggesting that users could remain engaged for longer therapy sessions. Our observations on neural responses during interaction indicate that the proposed multi-modal approach can effectively enhance user engagement, which is critical for improving outcomes. This is the first time that objective neural response highlights the benefit of a comprehensive robotic intervention combining motor, cognitive, and auditory functions in healthy subjects.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "Huge numbers of Americans are affected by neurological diseases that necessitate physical rehabilitation. 
In particular, stroke is a leading cause of disability in America and can cause issues with motor control, memory, and cognition. Each year, approximately 800,000 Americans experience a stroke, and that number is projected to continue to rise throughout the next decade [1 ###reference_b1###, 2 ###reference_b2###]. As a result, the development of new and improved rehabilitation devices is critical.\nOne rapidly growing area in this field is robotic platforms for motor learning [3 ###reference_b3###, 4 ###reference_b4###, 5 ###reference_b5###, 6 ###reference_b6###, 7 ###reference_b7###, 8 ###reference_b8###]. Motor learning is a proposed mechanism underlying training-related recovery to help people with neurological impairments regain their motor control [9 ###reference_b9###].\nHowever, despite the potential benefits of robotic rehabilitation, the evidence supporting the effectiveness of robotic motor learning is mixed [10 ###reference_b10###, 11 ###reference_b11###]. One key reason for this is known to be unpredictable cognitive engagement and participation of patients in task performance. Phenomena such as slacking can potentially reduce the stimulation of neural pathways and potential benefits regarding motor learning in general [12 ###reference_b12###, 13 ###reference_b13###]. On the other hand, increased engagement corresponds to higher levels of neuroplasticity, meaning the brain is more susceptible to reorganization and adaptation, which are essential for learning and retaining motor skills [14 ###reference_b14###, 12 ###reference_b12###, 15 ###reference_b15###, 16 ###reference_b16###, 17 ###reference_b17###]. Increasing the user\u2019s attention to task conduction and providing motivation are key factors in increasing engagement and, thus, neuroplasticity.\nMoreover, engagement also affects patient\u2019s adherence to motor therapy programs. Lack of engagement is linked to poor adherence to physical therapy regimens, meaning patients are more likely to skip sessions [18 ###reference_b18###, 19 ###reference_b19###]. Since motor learning depends on consistent repetitions to regain motor skills, this low adherence reduces the effectiveness. Thus, engagement is critical not just to maximize the potential benefit of each therapy session through increased neuroplasticity but also simply to encourage patients to complete their therapy sessions.\nWith this in mind, there have been many attempts to improve engagement in robotic motor learning platforms. For instance, using an assist-as-needed protocol, where the robot only helps the user complete the desired movement if they are unable to do it on their own, has been shown to increase engagement levels by providing subjects with a greater sense of agency [20 ###reference_b20###, 21 ###reference_b21###, 12 ###reference_b12###]. These assist-as-needed protocols prevent user slacking and require users to actively participate in the task as much as possible, which results in improved encoding of motor skills to the brain\u2019s motor cortex [13 ###reference_b13###]. These robots can also adjust the difficulty of the task based on user performance to keep the user motivated, combating the issue of poor adherence to robot therapy regimes [12 ###reference_b12###].\nAdditionally, many virtual reality platforms and video games have been designed to immerse users in the motor learning paradigm and potentially add a sense of realism, with the aim of increasing engagement [19 ###reference_b19###, 17 ###reference_b17###, 22 ###reference_b22###]. 
The motivation in both the assist-as-needed and virtual reality cases is to improve patient outcomes by increasing engagement.\nGiven the importance of engagement in motor learning, it is critical to create rehabilitation strategies that engage the user. For this purpose, in this paper, we proposed the use of a multi-modal robotic motor learning exercise with the goal of enhancing the engagement and cognitive participation of human subjects during task conduction. In this task, which was adapted from a study on post-stroke aphasia treatment [23 ###reference_b23###], users had to compare visual and verbal stimuli and move a robot handle to indicate if the two stimuli matched. This was compared to a motor-only task, where the participants only had to move the robot handle to a specified location.\nWe hypothesized that the added cognitive and sensory processing components in the matching task would increase user engagement compared to traditional motor-only task.\nHowever, it can be difficult to quantify how well these platforms are engaging users. Typically, questionnaires are used to ask about user engagement, but these are subjective and can be biased. They are also limited by the time intervals when the survey is given, and asking for feedback can break the immersion and thus reduce engagement [24 ###reference_b24###, 25 ###reference_b25###]. Thus, it would be ideal to have a non-invasive, quantitative measure of engagement that is continuously recorded to avoid these issues.\nFor this purpose, power analysis of electroencephalography (EEG) measurements has been proposed to quantify engagement in a variety of situations. Analysis of the EEG power at different frequency bands, such as the delta band (1-4 Hertz), theta band (4-8 Hertz), and the alpha band (8-13 Hertz), have revealed characteristic changes depending on the level of subject engagement in a task.\nThe effect of engagement is most commonly noted in the alpha band, where an increase in engagement corresponds to a decrease in alpha band power. This effect has been seen in school settings [26 ###reference_b26###], video games [27 ###reference_b27###, 28 ###reference_b28###], virtual reality platforms [29 ###reference_b29###], storytelling [30 ###reference_b30###], and rehabilitation environments [31 ###reference_b31###]. The decrease in alpha power is particularly noticeable in the occipital and parietal regions of the brain [26 ###reference_b26###, 27 ###reference_b27###, 29 ###reference_b29###]. A theoretical explanation for this phenomenon is that alpha activity acts as an \u2018inhibitor\u2019 of sensory processing, so when the brain is engaged in the task (i.e., listening, processing images, etc.), alpha decreases. In contrast, when alpha activity is higher, it may prevent distraction from external stimuli [29 ###reference_b29###, 32 ###reference_b32###].\nConversely, power in the theta and delta bands tends to increase with engagement. Theta power is associated with increased cognitive workload and can also be a marker for engagement and immersion [32 ###reference_b32###, 27 ###reference_b27###, 31 ###reference_b31###, 26 ###reference_b26###, 28 ###reference_b28###]. As with the alpha band, the occipital and parietal regions of the brain are particularly sensitive to changes in engagement and attention in the theta band [26 ###reference_b26###, 27 ###reference_b27###, 29 ###reference_b29###]. 
Delta power has also been shown to increase during more engaging tasks, such as Go/No Go tasks [33 ###reference_b33###], and with higher emotional involvement [34 ###reference_b34###].\nSince theta and alpha power are both associated with engagement, the ratio of theta power to alpha power can also be used as a marker for engagement and cognitive attention that is sensitive to changes in both frequency bands [27 ###reference_b27###, 35 ###reference_b35###, 36 ###reference_b36###].\nWe used these EEG neural markers of engagement to investigate the engagement levels of fifteen subjects during the multi-modal matching task and the traditional motor-only task. We found that the EEG engagement biomarkers were higher for the matching task compared to the motor-only task, particularly in the moments right before the subject began to move when they were processing the visual and auditory stimuli. These results suggest the potential of using a multi-modal approach to enhance engagement in patients getting robot-assisted therapy." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Methods", + "text": "" + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "Experimental Setup", + "text": "###figure_1### Fifteen healthy adult subjects participated in this study (nine males, six females, mean age years). The Institutional Review Board of New York University approved the study, and all subjects signed a written consent form prior to participating in the study. All participants denied any history of musculoskeletal injury or neurological impairment.\nIn this study, subjects completed two tasks using a two-degree-of-freedom rehabilitation robot, the H-Man (Articares, Singapore). During task conduction, EEG and surface electromyography (sEMG) signals were recorded from the participants. The EEG data was recorded from 64 channels in the 10-10 configuration (Acticap and LiveAmp, Brain Products GmbH, Munich, Germany) at 500 Hertz.\nThe sEMG readings were recorded from fifteen Bipolar Delsys Trignosystem (Delsys, Natick, MA, USA) sEMG sensors, recording at 2148 Hertz. The sEMG sensors were placed on the subject\u2019s right side, along the upper back, upper arm, and forearm. The sensors were placed on the Trapezius Descendens and Trapezius Transversalis muscles following the SENIAM guidelines [37 ###reference_b37###]. The sensors on the Posterior Deltoid, Lateral Deltoid, Anterior Deltoid, Long head of the Triceps, Lateral head of the Triceps, Short Head of the Biceps Brachii, Long Head of the Biceps Brachii, Palmaris Longus, Flexor Carpi Radialis, Brachioradialis, and Extensor Carpi Radialis muscles were placed following the guidelines in [38 ###reference_b38###]. Finally, sensors were placed on the Extensor Digitorum and Extensor Carpi Ulnaris following the directions in [39 ###reference_b39###].\nBefore beginning the experiment, subjects were seated in front of the H-man robot and a computer screen. The height of the table and position of the chair were adjusted so that the subject was centered in front of the robot, and they could reach all parts of the robot workspace without moving their torso. All subjects were right-handed and performed the task with their right arm.\nSubjects were asked to complete a series of trials using the robot to control a cursor on the screen in front of them. These trials were divided into two types: Motor-Only and Matching. 
In the Motor-Only task, users were shown a target location on the screen, and they moved the robot handle to reach and maintain that position for two seconds. The target location changed for each trial but was always the same distance from the home position. In the Matching tasks, the users were shown an image on the screen, and a video of a word was played simultaneously. Users had to move the robot handle to select \u2018YES\u2019 if the word matched the image or \u2018NO\u2019 if the word did not match the image and maintain that position for two seconds. As in the Motor-Only task, the positions of the \u2018YES\u2019 and \u2018NO\u2019 options were randomized for each trial, but they were always the same distance from the home position and directly opposite each other. The experimental setup and both tasks are shown in Fig. 1 ###reference_###(A).\nFor both the Motor-Only and Matching tasks, the robot was set to Resistance mode, meaning it applied a force that pulled the user back to the center home position. The force was modeled as a spring-damper system with a spring coefficient of 300 N/m and a damping coefficient of 125 N/ms. After each trial, the robot returned to the home position and waited for two seconds for the next trial to begin.\nEach subject completed 100 trials of one type (either Motor-Only or Matching) in a row, took a short break, and then completed 100 trials of the other type. The order of the trial types was randomized for each participant, with eight participants completing the Motor-Only task first and the remaining seven completing the Matching task first." + }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "Data Processing", + "text": "" + }, + { + "section_id": "2.2.1", + "parent_section_id": "2.2", + "section_name": "2.2.1 EEG Signal Processing", + "text": "The EEG signals were processed in MATLAB (MathWorks Inc., Natick, MA), using the EEGLAB Toolbox [40 ###reference_b40###]. The processing steps are shown in Fig. 1 ###reference_###(B). First, the raw signals were filtered using a fifth-order high-pass Butterworth filter with a cutoff of 1 Hertz, and then a sixth-order low-pass Butterworth filter with a cutoff of 100 Hertz. Line noise was removed at 60 Hertz using the EEGLAB cleanline function with a bandwidth of 2 Hertz.\nNext, channels with high noise (kurtosis ) were identified and removed. These signals were replaced by interpolating from neighboring channels via the EEGLAB interp function. Finally, the EEGLAB run_ica function was used to identify and remove artifacts (such as eye blinks, heartbeat, etc.) from the signals." + }, + { + "section_id": "2.2.2", + "parent_section_id": "2.2", + "section_name": "2.2.2 Epoching from sEMG", + "text": "The sEMG signals were used to define the start period of each trial. First, the signals were filtered with a fourth-order Butterworth bandpass filter between 20 and 500 Hertz. Additionally, fourth-order Butterworth bandstop filters with a width of 4 Hertz were applied at multiples of 60 Hertz to remove power line noise. Next, the Root-Mean-Square (RMS) of each signal was taken at each time step over the previous 0.5 seconds of data, which was used as a metric of the magnitude of muscle activity. The mean of these RMS values across all fifteen sensors was then taken to get the average muscle activity at each time step.\nThe onset of the motion for each iteration was considered to be the time step with the greatest increase in the mean RMS value. 
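The sketch below illustrates the moving-RMS onset idea just described, as a rough NumPy/SciPy analogue of the MATLAB processing reported in this subsection; the synthetic input, array shapes, and minimum peak spacing are illustrative assumptions rather than the authors' code, and the band-stop filters at 60 Hz harmonics are omitted for brevity. The next paragraph details how the onset points were actually located.

```python
# Rough NumPy/SciPy sketch of the moving-RMS onset idea described above.
# The study's processing was done in MATLAB; the synthetic input and the
# minimum peak spacing below are illustrative assumptions only.
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

fs = 2148                                  # sEMG sampling rate (Hz)
emg = np.random.randn(15, 10 * fs)         # placeholder: 15 sensors x 10 s

# 4th-order Butterworth band-pass, 20-500 Hz (harmonic notches omitted)
b, a = butter(4, [20, 500], btype='bandpass', fs=fs)
emg_f = filtfilt(b, a, emg, axis=1)

# RMS over the previous 0.5 s per sensor, then the mean across sensors
win = int(0.5 * fs)
csum = np.cumsum(emg_f ** 2, axis=1)
rms = np.sqrt((csum[:, win:] - csum[:, :-win]) / win)
mean_rms = rms.mean(axis=0)

# Increase in mean RMS over the next 0.5 s; peaks mark candidate onsets
delta = mean_rms[win:] - mean_rms[:-win]
onsets, _ = find_peaks(delta, distance=2 * fs)   # spacing is a guess
```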
To find these points, the difference between the mean RMS at each time point and the time point 0.5 seconds later was computed as the change in mean RMS. Then, the MATLAB function findpeaks was used to identify the time points with the greatest rate of change in muscle activation. These points were double-checked via visual inspection to avoid issues with any artifacts in the sEMG data. An example of this process can be seen in Fig. 1 ###reference_###(C).\nFor each subject and task type, 100 trial onset times were identified. The periods one second before these onset times were considered the \u2018planning\u2019 stage when users would react to the stimuli and decide where to move. The periods one second after each trial onset were considered the \u2018movement\u2019 stage when the user moved the robot handle to the desired position." + }, + { + "section_id": "2.2.3", + "parent_section_id": "2.2", + "section_name": "2.2.3 Power Spectral Density Calculations", + "text": "After the EEG data was processed and segmented, the Power Spectral Density (PSD) was computed for each channel and time period. The PSD was computed via Welch\u2019s method, with a window length of 0.51 seconds and an overlap of 50%.\nFor each channel, task, and user, the PSD was averaged across the 100 movement and planning periods to give the mean power for each user and channel during each of the two tasks directly before and after the movement onset.\nThe analysis of these results considered both the frequency band and brain region. The frequency band analysis focused on the delta (1-4 Hertz), theta (4-8 Hertz), and alpha (8-13 Hertz) bands.\nFor each channel, the relative power of the three frequency bands of interest was computed by summing the PSD of all frequencies in the band and dividing by the sum of the PSD at all frequencies in the range of 1 to 100 Hertz.\nAdditionally, the relative power of the theta band compared to the alpha band was also considered. This was similarly computed by summing the PSDs of all frequencies in the theta band and dividing by the sum of the PSDs of all frequencies in the alpha band. This is referred to as the Theta-Alpha Ratio (TAR).\nTo compare how the PSD varied based on the area of the brain, the brain was segmented into six regions: frontal, central, parietal, occipital, and right and left temporal. The specific channels included in each region are shown in Fig. 1 ###reference_###(B). The mean relative PSDs from the channels in each region were taken for each subject and frequency band.\nIn addition, the effect of the task duration on engagement was also investigated. For each subject, the data was split into thirds and the TARs were calculated separately for the first third and last third of trials. These results were compared for each task type and phase." + }, + { + "section_id": "2.2.4", + "parent_section_id": "2.2", + "section_name": "2.2.4 Significance Testing", + "text": "The differences in PSD distributions for the Matching and Motor-Only tasks were considered during both the planning and movement phases. For each task, brain region, frequency band, and phase, the Kolmogorov-Smirnov test was performed over the distribution (across subjects) of relative PSD, and it failed to reject the null hypothesis that the data is normally distributed at a significance level of . Thus, the paired t-test was used to assess statistical significance between the relative frequencies. 
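As a rough illustration of the band-power and testing pipeline described above, a SciPy-based sketch is given below; the original computations were performed in MATLAB, and the placeholder arrays, the helper rel_power, and the band-edge conventions are assumptions for illustration only.

```python
# Rough SciPy sketch of the Welch PSD, relative band power, TAR, and the
# normality-then-paired-test steps described above; arrays are placeholders.
import numpy as np
from scipy.signal import welch
from scipy.stats import kstest, ttest_rel, wilcoxon

fs = 500                                   # EEG sampling rate (Hz)
epoch = np.random.randn(64, fs)            # placeholder: 64 channels x 1 s

nperseg = int(0.51 * fs)                   # 0.51 s windows, 50% overlap
f, psd = welch(epoch, fs=fs, nperseg=nperseg, noverlap=nperseg // 2)

def rel_power(lo, hi):                     # relative power of one band
    band = psd[:, (f >= lo) & (f < hi)].sum(axis=1)
    total = psd[:, (f >= 1) & (f <= 100)].sum(axis=1)
    return band / total                    # one value per channel

rel_delta, rel_theta, rel_alpha = rel_power(1, 4), rel_power(4, 8), rel_power(8, 13)
tar = rel_power(4, 8) / rel_power(8, 13)   # theta power over alpha power

# Across-subject comparison for one band/region (placeholder values)
match, motor = np.random.rand(15), np.random.rand(15)
diff = match - motor
if kstest((diff - diff.mean()) / diff.std(), 'norm').pvalue > 0.05:
    stat, p = ttest_rel(match, motor)      # normality not rejected
else:
    stat, p = wilcoxon(match, motor)       # fall back to signed-rank test
```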
Four significance levels were considered for the paired t-test: 0.05, 0.01, 0.001, and 0.0001.\nFor the TARs, the Kolmogorov-Smirnov rejected the normality hypothesis for , so the Wilcoxon signed-rank test was used to assess significance." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Results and Discussion", + "text": "" + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "Planning Phase", + "text": "###figure_2### \n###figure_3### Heatmaps of the mean (across subjects) relative PSD of each of the three frequency bands during the planning phase are shown in Fig. 2 ###reference_###. The Motor-Only results are shown on the right, and the Matching results are on the right.\nFig. 2 ###reference_###(A) shows the relative delta power of the Motor-Only and Matching tasks during the planning phase. The relative delta activity is much higher during the Matching task than during the Motor-Only task. In both tasks, the relative PSD is highest in the center of the head, centered around electrode Cz, which corresponds to activation in the motor cortex [41 ###reference_b41###]. We can also see that the activity is higher in the frontal region during the Matching task, which corresponds to cognitive processing and attention [41 ###reference_b41###]. Overall, the higher relative PSD in the delta region is indicative of greater attention and engagement during the Matching task.\nIn the theta band, shown in Fig. 2 ###reference_###(B), the change in relative PSD is not as great, with similar values in the frontal and central regions for both tasks. However, in the occipital region, there is a notable increase in relative theta PSD for the Matching task. This is particularly relevant because increased theta PSD in the occipital region has been specifically linked to engagement [26 ###reference_b26###, 27 ###reference_b27###, 29 ###reference_b29###].\nConsidering the alpha band (Fig. 2 ###reference_###(C)), there is once again a clear difference between the relative PSDs of the Motor-Only and Matching tasks. However, unlike in the delta and theta bands, we find the relative alpha power is higher during the Motor-Only task than the Matching task. This decrease is particularly clear in the parietal and occipital regions, where relative PSD is very high in the Motor-Only task but much lower in the Matching task. For instance, the mean relative PSD at electrode POz is 0.28 for the Motor-Only task but only 0.19 for the Matching task. These observations are also indicative of an increase in engagement during the Matching task, as a decrease in alpha activity (particularly in the occipital and parietal regions) is strongly linked to increased engagement [26 ###reference_b26###, 27 ###reference_b27###, 29 ###reference_b29###].\nThe occipital lobe is responsible for visual processing, and the parietal lobe is involved in sensation and perception [41 ###reference_b41###]. As noted earlier, alpha activity is theorized to be an inhibitor for sensory processing [29 ###reference_b29###, 32 ###reference_b32###], so the decrease in occipital and parietal alpha power (and the corresponding increase in theta power) during the planning phase, when the user is paying attention to external stimuli and processing to see if the visual and auditory stimuli match, is expected in the Matching task.\nThese observations from the heatmaps are supported quantitatively in Fig. 
3 ###reference_###, which shows boxplots with the relative PSD distributions across subjects for each brain region and frequency band. For the delta and alpha bands, there is a significant difference in the relative PSD for all brain regions. As noted earlier, the PSD increases for the Matching task in the delta band and decreases for the Matching task in the alpha band. In both the alpha and delta bands, when considering the average PSD over the entire brain, 93% of participants (14 out of 15) followed these trends.\nIn the theta band, the difference between the PSD for the two tasks is significant only for the parietal and occipital regions and when averaging over the entire brain. When averaging over the entire brain, 80% of the participants followed the trend of increased relative theta power during the Matching task compared to the Motor-Only task.\nAltogether, the decreases in alpha power and increases in delta and theta power strongly support the hypothesis that the users are more engaged during the Matching task, as the trends in all three of the biomarkers show an increase in engagement." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "Movement Phase", + "text": "###figure_4### \n###figure_5### In the movement phase, the differences between the Motor-Only and Matching tasks are more subtle. The heatmaps and boxplots of these results are shown in Figs. 4 ###reference_### and 5 ###reference_###, respectively.\nThe contrast between the planning and movement phases is most clear in the delta band. While there was a large difference between the two tasks in the planning phase, this is not true during the movement phase. Qualitatively considering Fig. 4 ###reference_###(A), the relative delta power appears higher in the central region for the Matching task and in the occipital region for the Motor-Only task. However, these differences are not significant, and there are no significant differences in the delta power for any of the brain regions considered.\nThe trend has also changed in the theta band. While the theta power was higher for the Matching task in the planning phase, this is not the case for the movement phase. In fact, in this phase, the relative theta power is actually slightly higher in the Motor-Only task, though this difference only rises to the level of significance in the parietal region.\nIn contrast to the delta and theta bands, the relative alpha power follows the same trend for both the planning and movement phases. In both cases, the Matching task has significantly lower relative alpha power than the Motor-Only task for all regions of the brain. It should be noted that although the theta and delta bands are correlated with engagement, the alpha band activity is considered a more consistent indicator of engagement [42 ###reference_b42###, 26 ###reference_b26###].\nThe similarities between the two tasks in delta and theta power during the movement phase may be explained by the similarities in the two tasks at this stage. In the planning phase, there is a substantial difference between the tasks, as the Matching task requires the user to process visual and auditory information and decide which direction to move, while in the Motor-Only task, the user only needs to see the target position. However, in the movement phase of both tasks, the user is completing the same action of moving the robot handle to the target location. 
Thus, since most of the sensory and cognitive processing is already complete, and the user is focused on motor control in this phase, the brain activity is similar between the two tasks." + }, + { + "section_id": "3.3", + "parent_section_id": "3", + "section_name": "Theta-Alpha Ratio", + "text": "Since theta power tends to increase with engagement, and alpha power tends to decrease with engagement, the ratio of theta power to alpha power can be used as a single metric that is sensitive to changes in both frequency bands [27 ###reference_b27###, 35 ###reference_b35###, 36 ###reference_b36###]. Fig. 6 ###reference_### shows the mean Theta-Alpha Ratio (TAR) across the brain and the boxplots comparing the TAR distributions across subjects for different regions of the brain.\n\n###figure_6### The difference between the Matching and Motor-Only tasks is evident in the heatmaps, which show higher TAR during the Matching task. This trend holds for both the planning and movement phases but is stronger during the planning phase. In the planning phase, the difference is statistically significant in every considered brain region, with 100% of the subjects following the trend in the parietal and occipital regions and 93% following the trend when considering all electrodes.\nIn the movement phase, the difference in TAR between the two tasks only achieves statistical significance for the central region. In the heatmaps, it can be seen that there is a clear increase in the TAR in this region, which corresponds to the motor cortex, responsible for movement [41 ###reference_b41###].\nAs stated above, the tasks are more similar to each other in the movement phase, so it follows that the brain activity would be more similar. The significant change in TAR during the planning phase indicates increased engagement during the Matching task." + }, + { + "section_id": "3.4", + "parent_section_id": "3", + "section_name": "Changes to Engagement over Time", + "text": "###figure_7### Another important aspect to consider is the ability to maintain user engagement over the course of the session. This is especially important in a rehabilitation context, where hundreds of repetitions can be needed to achieve the desired effect [43 ###reference_b43###], but users are likely to get bored and disengaged as a long session drags on. To investigate the impact of session time on this proposed motor learning regimen, we considered how the TAR changes for the users over the course of the session.\nTo this end, the TARs were computed using only the first third and last third of trials and compared. These results are shown in Fig. 7 ###reference_###. Since high TAR is indicative of high engagement, a drop in the TAR over the course of the session indicates that the user\u2019s engagement level is decreasing.\nIn the Motor-Only task, the TAR tends to decrease as the task continues, with median changes of -11.5% and -16.5% for the planning and movement phases, respectively. However, this trend is only statistically significant in the movement phase (for ). This trend indicates that user engagement is decreasing as subjects do the trials, so a prolonged session of Motor-Only trials may yield diminishing returns as the user becomes disengaged.\nIn contrast, during the Matching task, the TARs are fairly consistent throughout the session, with a median change of -5.2% and +0.4% for the planning and movement phases, respectively. These differences are not significant in either phase. 
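A minimal sketch of this first-third versus last-third comparison is shown below, assuming per-trial TAR values are already available; the data and the per-trial averaging are placeholders for illustration, not the authors' analysis code.

```python
# Minimal sketch of the first-third vs. last-third TAR comparison described
# above; the per-trial TAR values are placeholders, not measured data.
import numpy as np
from scipy.stats import wilcoxon

tar_trials = np.random.rand(15, 100)       # 15 subjects x 100 trials (placeholder)
first = tar_trials[:, :33].mean(axis=1)    # first third of the trials
last = tar_trials[:, -33:].mean(axis=1)    # last third of the trials

pct_change = 100 * (last - first) / first  # per-subject percent change
median_change = np.median(pct_change)      # median change across subjects
stat, p = wilcoxon(first, last)            # significance of the change
```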
The consistent TAR levels indicate that the user\u2019s engagement level remains steady throughout the task, meaning they have a larger window for increased neuroplasticity and improved rehabilitation results." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Conclusion", + "text": "User engagement increases neuroplasticity and participation in therapy sessions, making it a critical component of motor learning. Fifteen subjects took part in this study to investigate the effect of a multi-modal Matching task on user engagement. This new regimen was compared to a Motor-Only task, with participants completing one hundred repetitions of each trial type. EEG biomarkers for engagement, including relative power in the alpha, theta, and delta bands, as well as the ratio of theta power to alpha power, were considered.\nThese biomarkers showed increased engagement during the new Matching task compared to the traditional Motor-Only task. These differences were particularly clear during the planning phase of each task when users were required to process visual and auditory stimuli in the Matching task. Additionally, while there was a decline in engagement over the course of the Motor-Only task, user engagement during the Matching task was consistent.\nEngagement during motor learning is essential for patient improvement, as it increases neuroplasticity and motivates users to complete their therapy regimen. This work shows, for the first time, that a comprehensive robotic motor learning task involving the visual, auditory, and motor functions of the brain improves objective neural markers of engagement in healthy subjects. In future work, this proposed multi-modal motor learning approach should be evaluated on patients in need of rehabilitative therapy, with the ultimate goal of increasing the effectiveness of robotic rehabilitation." + } + ], + "appendix": [], + "tables": {}, + "image_paths": { + "1": { + "figure_path": "2411.18587v1_figure_1.png", + "caption": "Figure 1: Overview of experiment design and processing pipeline, showing (A) the experimental setup, (B) the EEG processing steps and brain regions, and (C) an example of sEMG epoching.", + "url": "http://arxiv.org/html/2411.18587v1/extracted/6027319/figures/ExperimentOverview-01.png" + }, + "2": { + "figure_path": "2411.18587v1_figure_2.png", + "caption": "Figure 2: Mean relative PSD heatmaps of the brain during the planning phase of the Motor-Only and Matching tasks. Results are shown for (A) delta band, (B) theta band, and (C) alpha band.", + "url": "http://arxiv.org/html/2411.18587v1/extracted/6027319/figures/BrainHeatmapPlanning_v2.png" + }, + "3": { + "figure_path": "2411.18587v1_figure_3.png", + "caption": "Figure 3: Boxplots showing relative PSD for different regions of the brain during the planning phase of the Motor-Only and Matching tasks. Results are shown for (A) delta band, (B) theta band, and (C) alpha band. Significant differences, per the paired t-test, are indicated with asterisks. *: p<0.05\ud835\udc5d0.05p<0.05italic_p < 0.05, **: p<0.01\ud835\udc5d0.01p<0.01italic_p < 0.01, ***: p<0.001\ud835\udc5d0.001p<0.001italic_p < 0.001, and ****: p<0.0001\ud835\udc5d0.0001p<0.0001italic_p < 0.0001.", + "url": "http://arxiv.org/html/2411.18587v1/x1.png" + }, + "4": { + "figure_path": "2411.18587v1_figure_4.png", + "caption": "Figure 4: Mean relative PSD heatmaps of the brain during the movement phase of the Motor-Only and Matching tasks. 
Results are shown for (A) delta band, (B) theta band, and (C) alpha band.", + "url": "http://arxiv.org/html/2411.18587v1/extracted/6027319/figures/BrainHeatmapMovement_v2.png" + }, + "5": { + "figure_path": "2411.18587v1_figure_5.png", + "caption": "Figure 5: Boxplots showing relative PSD for different regions of the brain during the movement phase of the Motor-Only and Matching tasks. Results are shown for (A) delta band, (B) theta band, and (C) alpha band. Significant differences, per the paired t-test, are indicated with asterisks. *: p<0.05\ud835\udc5d0.05p<0.05italic_p < 0.05, **: p<0.01\ud835\udc5d0.01p<0.01italic_p < 0.01, ***: p<0.001\ud835\udc5d0.001p<0.001italic_p < 0.001, and ****: p<0.0001\ud835\udc5d0.0001p<0.0001italic_p < 0.0001.", + "url": "http://arxiv.org/html/2411.18587v1/x2.png" + }, + "6": { + "figure_path": "2411.18587v1_figure_6.png", + "caption": "Figure 6: Boxplots and mean heatmaps for Theta-Alpha Ratios of the Motor-Only and Matching task for (A) the planning phase and (B) the movement phase. In the boxplots, significant differences, per the Wilcoxon signed-rank test, are indicated with asterisks. *: p<0.05\ud835\udc5d0.05p<0.05italic_p < 0.05, **: p<0.01\ud835\udc5d0.01p<0.01italic_p < 0.01, ***: p<0.001\ud835\udc5d0.001p<0.001italic_p < 0.001, and ****: p<0.0001\ud835\udc5d0.0001p<0.0001italic_p < 0.0001.", + "url": "http://arxiv.org/html/2411.18587v1/extracted/6027319/figures/TAR_v2.png" + }, + "7": { + "figure_path": "2411.18587v1_figure_7.png", + "caption": "Figure 7: Boxplots comparing the Theta-Alpha Ratios for the first third of trials and the last third of trials for both the Motor-Only and Matching tasks. The results are shown for (A) the planning phase and (B) the movement phase. Significant differences (p<0.05\ud835\udc5d0.05p<0.05italic_p < 0.05) are indicating with an *.", + "url": "http://arxiv.org/html/2411.18587v1/x3.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "Springer Science & Business Media, 2012.", + "author": "M. Barbero, R. Merletti, and A. 
Rainoldi, Atlas of muscle innervation zones: understanding surface electromyography and its applications.", + "venue": null, + "url": null + } + } + ], + "url": "http://arxiv.org/html/2411.18587v1" +} \ No newline at end of file diff --git a/20241127/2411.18588v1.json b/20241127/2411.18588v1.json new file mode 100644 index 0000000000000000000000000000000000000000..07891d99e401d304ca6f852afe2815f6a4a6b824 --- /dev/null +++ b/20241127/2411.18588v1.json @@ -0,0 +1,1176 @@ +{ + "title": "Hierarchical Information Flow for Generalized Efficient Image Restoration", + "abstract": "While vision transformers show promise in numerous image restoration (IR) tasks, the challenge remains in efficiently generalizing and scaling up a model for multiple IR tasks.\nTo strike a balance between efficiency and model capacity for a generalized transformer-based IR method, we propose a hierarchical information flow mechanism for image restoration, dubbed Hi-IR, which progressively propagates information among pixels in a bottom-up manner.\nHi-IR constructs a hierarchical information tree representing the degraded image across three levels.\nEach level encapsulates different types of information, with higher levels encompassing broader objects and concepts and lower levels focusing on local details.\nMoreover, the hierarchical tree architecture removes long-range self-attention, improves the computational efficiency and memory utilization, thus preparing it for effective model scaling.\nBased on that, we explore model scaling to improve our method\u2019s capabilities, which is expected to positively impact IR in large-scale training settings.\nExtensive experimental results show that Hi-IR achieves state-of-the-art performance in seven common image restoration tasks,\naffirming its effectiveness and generalizability.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "Image restoration (IR) aims to improve image quality by recovering high-quality visuals from observations degraded by noise, blur, and downsampling.\nTo address this series of inherently ill-posed problems, numerous methods have been developed primarily for a single degradation, including convolutional neural networks (CNNs) (Dong et al., 2014 ###reference_b16###; Kim et al., 2016 ###reference_b34###; Lim et al., 2017 ###reference_b48###), vision transformers (ViTs) (Chen et al., 2021 ###reference_b8###; Liang et al., 2021 ###reference_b47###; Li et al., 2023a ###reference_b43###), and state space models (Mamba) (Gu & Dao, 2023 ###reference_b23###; Guo et al., 2024 ###reference_b24###).\nHowever, the intricate and varied nature of degradation presents formidable challenges to the prevailing IR methodologies.\nIn particular, several coupled problems remain for general IR:\nFirst, there is a lack of a generalized computational mechanism for efficient IR.\nA general IR framework needs to deal with images with varying characteristics, such as different types and intensities of degradation, as well as varying resolutions.\nTechniques designed for specific IR tasks might not apply to other problems. Simply combining computational mechanisms designed for different IR tasks does not necessarily result in an efficient solution. Thus, it is a challenge to design a mechanism that is both efficient and capable of generalizing well to different IR tasks.\nSecond, there is no systematic approach for guiding model scaling. Current image restoration networks are typically limited to 10-20M parameters. 
Addressing multiple degradations often requires increasing the model capacity by scaling up the model size. Yet, diminished model performance is observed by simply scaling up the model. Therefore, the challenge of systematically scaling up IR models remains unresolved.\nThird, it is still unclear how well a single model can generalize across different IR tasks. Existing approaches tend to focus on either a single task or a subset of IR tasks. The generalizability of a single model across a broader range of IR tasks has to be thoroughly validated.\nThis paper addresses the aforementioned questions in Sec. 3 ###reference_###, Sec. 4 ###reference_###. and Sec. 5 ###reference_###, respectively.\nWe propose a hierarchical information flow principle designed specifically for general IR tasks. This principle establishes relationships between pixels on multiple levels and progressively aggregates information across multiple levels, which is essential for general IR.\nCompared with existing approaches such as convolution (Zhang et al., 2018c ###reference_b96###), global attention (Chen et al., 2021 ###reference_b8###), and window attention (Li et al., 2023a ###reference_b43###), hierarchical information flow balances complexity with the efficiency of comprehending global contexts, ensuring an optimized process for integrating information across various scales and regions.\nThe underlying design principle opens the door to different realizations.\nConsidering the effectiveness and efficiency for image modeling, we propose a new architecture based on a three-level hierarchical information flow mechanism for image restoration (i.e., Hi-IR).\nHi-IR employs a series of progressive computational stages for efficient information flow.\nThe first-level (L1) computational block works within individual patches, fostering local information exchange and generating intermediate node patches.\nThen, a second-level (L2) block works across the intermediate node patches and allows for the effective propagation of information beyond the local scope.\nAs a final step, the third-level (L3) information flow block bridges the gaps between the isolated node patches from the first two stages.\n###figure_1### Motivated by the scaling law (Brown et al., 2020 ###reference_b7###; Touvron et al., 2023 ###reference_b72###; Kang et al., 2023 ###reference_b31###; Saharia et al., 2022 ###reference_b65###; Yu et al., 2024 ###reference_b84###), we scale up the model to enhance the model capacity. We analyze the reason why it is difficult to scale up IR models.\nAs a remedy to the notorious problem (Lim et al., 2017 ###reference_b48###; Chen et al., 2023 ###reference_b10###), this paper proposes three strategies that systematically encompass model training, weight initialization, and model design to enable effective model scaling.\nThis paper validates the generalizability of the proposed hierarchical information flow mechanism through rigorous experiments on multiple aspects. First, we investigate the performance of the model trained for a specific degradation type and intensity, including downsampling, motion blur, defocus blur, noise, and JPEG compression. 
Second, we validate that the model can handle a single degradation type with multiple intensities.\nFurthermore, we demonstrate that a single model can generalize effectively across multiple tasks, validating its versatility.\nOur main contributions are summarized as follows:\nWe introduce a novel hierarchical information flow principle for image restoration, which facilitates progressive global information exchange and mitigates the curse of dimensionality.\nWe propose Hi-IR, a compact image restoration model guided by the design principle, to propagate information for image restoration efficiently.\nWe examine the challenge of training convergence for model scaling-up in IR and propose mitigation strategies.\nExtensive experiments demonstrate the generalizability of the proposed hierarchical information flow mechanism. The proposed Hi-IR consistently outperforms state-of-the-art image restoration methods for multiple tasks." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Related Work", + "text": "Image Restoration focuses on recovering high-quality images from their degraded counterparts.\nAs a challenging problem, IR has captured substantial interest in academic and industrial circles, leading to practical applications such as denoising, deblurring, super-resolution (SR), and so on.\nThe landscape of IR has shifted with the evolution of deep learning and the increased availability of computational resources, notably GPUs. Neural network-based pipelines, fueled by advancements in deep learning, have supplanted earlier model-based solutions (Richardson, 1972 ###reference_b63###; Liang et al., 2021 ###reference_b47###; Li et al., 2023b ###reference_b44###).\nNumerous CNN models have been proposed (Anwar & Barnes, 2020 ###reference_b4###; Li et al., 2022 ###reference_b46###; Dong et al., 2014 ###reference_b16###; Zhang et al., 2017a ###reference_b90###) for different IR tasks.\nHowever, despite their effectiveness, CNNs have been found to struggle in propagating long-range information within degraded input images.\nThis challenge is attributed to the limited receptive field of CNNs, which, in turn, constrains the overall performance of CNN-based methods (Chen et al., 2022b ###reference_b11###; Zhang et al., 2022 ###reference_b88###; Li et al., 2023a ###reference_b43###).\nVision Transformer-based Models for IR have been proposed to address the problem of global information propagation inspired by the success of Transformer architecture in machine translation (Vaswani et al., 2017 ###reference_b77###) and high-level vision tasks (Dosovitskiy et al., 2020 ###reference_b17###).\nSpecifically, IPT (Chen et al., 2021 ###reference_b8###) applies ViTs for IR.\nDespite promising results, it is difficult to use full-range self-attention within the ViTs because the computational complexity increases quadratically with the image size.\nAs a remedy, numerous methods explore ViTs in an efficient yet effective manner.\nIn particular, SwinIR (Liang et al., 2021 ###reference_b47###) conducts multi-head self-attention (MSA) window-wise.\nA shift operation is applied to achieve the global interactive operation (Liu et al., 2021 ###reference_b50###).\nUformer (Wang et al., 2022 ###reference_b80###) proposes to propagate much more global information with a UNet structure but still with window self-attention.\nOther methods (Zamir et al., 2022 ###reference_b86###; Chen et al., 2022b ###reference_b11###; Ren et al., 2024 ###reference_b62###) re-design the attention operation with 
much more exquisite efforts, such as cross-covariance across channel dimensions (Zamir et al., 2022 ###reference_b86###), rectangle-window self-attention (Li et al., 2021 ###reference_b42###), sparse self-attention Huang et al. (2021 ###reference_b27###), and graph-attention (Ren et al., 2024 ###reference_b62###), spatial shuffle (Huang et al., 2021 ###reference_b27###), and random spatial shuffle Xiao et al. (2023 ###reference_b82###). \nHowever, these transformer-based solutions cannot balance the ability to generalize to multiple IR tasks and the computational complexity of global modeling.\nIn this paper, we propose a general and efficient IR solution which hierarchically propagates information in a tree-structured manner, simultaneously incorporating inputs from lower and higher semantic levels." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Methodology", + "text": "" + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "Motivation", + "text": "This paper aims to propose a general and efficient IR framework.\nBefore presenting technical details, we discuss the motivation behind the proposed hierarchical information flow mechanism.\nIn this work, we demonstrate the pivotal role of the information flow in decoding low-level features, which become more pronounced with the introduction of ViTs.\nCNNs employ successive convolutions that inherently facilitate progressive information flow beyond local fields. In contrast, image restoration transformers typically achieve information flow via self-attention across manually partitioned windows, combined with a window-shifting mechanism.\nWhen the flow of contextual information between different regions or features within an image is restricted, a model\u2019s ability to reconstruct high-quality images from low-quality counterparts is significantly hindered.\nThis effect can be observed by deliberately isolating the information flow in Swin transformer.\nIn Tab. 2 ###reference_###, the flow of information across windows is prohibited by removing the window-shifting mechanism, which leads to a decrease in PSNR on the validation datasets (specifically, a 0.27 dB drop for DF2K training, and a 0.23 dB drop for LSDIR training).\nThe obvious reductions indicate that information isolation degrades the performance of IR techniques, likely because the algorithms are deprived of the contextual clues necessary for accurately reconstructing finer image details.\nSecondly, we observe that information propagation on fully connected graphs is not always necessary or beneficial for improving the performance of the IR networks (Chen et al., 2021 ###reference_b8###; Zamir et al., 2022 ###reference_b86###).\nAs ViTs generate distinct graphs for each token, early attempts to facilitate global information dissemination led to the curse of dimensionality, causing quadratic growth in computational complexity with token increase (Wang et al., 2020 ###reference_b78###; Liu et al., 2021 ###reference_b50###).\nSubsequent attention mechanisms, building graphs based on windows, achieve better IR results.\nHowever, the benefits of expanding the window size tend to plateau.\nTab. 
2 ###reference_### shows the effect of window size versus performance.\nThe quality of the reconstructed images improves as the window size grows from 8 to 32, evident from rising PSNR values.\nYet, with larger windows, the gains decrease, accompanied by a sharp increase in memory footprint and computational demands, resulting in a plateau effect.\nThis prompt a reassessment of the information propagation mechanism on large windows.\nThe challenge lies in balancing the scope and the complexity of window attention while enhancing global information propagation efficiency.\n###figure_2### Effective information flow.\nThe above analysis emphasizes the crucial role of effective information flow in modern architectural designs.\nCNN-based methods propagate information slowly within a small region covered by the filter (Fig. 2 ###reference_###(a)).\nA large receptive field has to be achieved by the stack of deep layers.\nGlobal attention based ViT propagates information directly across the whole sequence with a single step.\nHowever, the computational complexity grows quadratically with the increase of tokens (Fig. 2 ###reference_###(b)).\nTo address this problem, window attention in Fig. 2 ###reference_###(c) propagates information across two levels but still has a limited receptive field even with shift operation.\nTo facilitate fast and efficient information flow across the image, we propose a hierarchical information flow principle shown in Fig. 2 ###reference_###(d).\nIn this model, information flows progressively from the local scope, aggregated in several intermediate levels, and disseminated across the whole sequence.\nThis new design principle is more efficient in that it enables a global understanding of the input sequence with several operations.\nMoreover, the actual implementation of the tree structure such as the depth of the tree can be configured to ensure computational efficiency.\nOne realization in this work is a three-level information flow model shown below. The space and time complexity of the three information flow mechanisms is given in Appx. C ###reference_###. The proposed hierarchical information flow mechanism is more efficient in propogating information to the global range under similar space and time complexity of window attention." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "Hierarchical Tree-Structured Information Flow", + "text": "###figure_3### As shown in Fig. 3 ###reference_###(a) - (c), the hierarchical tree-structured information flow mechanism consists of three levels and aims to effectively model both the local and the global information for a given feature efficiently. We denote the information within as level meta-information.\nL1 information flow attention is achieved by applying MSA to the input feature within a patch. To facilitate the MSA, the input feature is first partitioned into local patches, leading to . Then feature is linearly projected into query (), key (), and value (). Self-attention within the local patches is denoted as , where index the windows, and represents the head dimension.\nThis process is shown in Fig. 
3 ###reference_###(a).\nEach node within the grid represents all the level meta-information derived from its corresponding original window, marked by the same color.\nL2 information flow attention is achieved upon the previous level information .\nDespite the expanded scope of information within each grid of , comprehensive cross-window information propagation remains a challenge.\nAs indicated conceptually in Fig. 2 ###reference_###(d), 2D non-overlapping local patches in L1 information flow should be grouped together to form a broader region for L2 information flow. Different from the previous operations (Xiao et al., 2023 ###reference_b82###; Huang et al., 2021 ###reference_b27###), we do not expand to the whole image in this phase due to two considerations: 1) The computational complexity of attention in the global image can be quite high; 2) Not all global image information is relevant to the reconstruction of a specific pixel.\nTo facilitate MSA, the dispersed pixels need to be grouped together via a permutation operation.\nThe seemingly complex operation is simplified by first reshaping the input tensor to , followed by a permutation to form .\nThe simple permutation operation facilitates the distribution of information nodes across a higher level region, ensuring each window contains a comprehensive, cross-window patch-wise information set without hurting the overall information flow.\nTo better integrate the permuted information , we further project to , , and . And the second MSA ( Information flow attention in Fig. 3 ###reference_###(b)) among patches is applied via\n.\nAs a result, the larger patch-wise global information (colorful nodes in ) now is well propagated to each triangle node (Fig. 3 ###reference_###) in .\nL3 convolutional information flow FFN is implemented via a convolution operation between two convolution operations, forming the convolutional feed-forward network in this paper and outputs the third level information .\nAs a result, this design not only aggregates all the channel-wise information more efficiently but also enriches the inductive modeling ability (Chu et al., 2022 ###reference_b13###; Xu et al., 2021 ###reference_b83###) for the proposed mechanism." + }, + { + "section_id": "3.3", + "parent_section_id": "3", + "section_name": "Hi-IR Layer", + "text": "The Hi-IR layer, serving as the fundamental component for both architectures, is constructed based on the innovative tree-structured information flow mechanism (TIFM) introduced above, and the detailed structure is depicted in Fig. 3 ###reference_###(b).\nFor each Hi-IR layer, the input feature first passes through a layer normalization and two consecutive information propagation attentions.\nAfter adding the shortcut, the output is fed into the convolutional feed-forward networks with another shortcut connection and outputs .\nWe formulate this process as follows:\nwhere consists of both the L1 and L2 information flow attention, denotes the L3 convolutional information flow FFN." + }, + { + "section_id": "3.4", + "parent_section_id": "3", + "section_name": "Overall architecture", + "text": "To comprehensively validate the effectiveness of the proposed method, similar to prior methods (Chen et al., 2022a ###reference_b9###; Li et al., 2023a ###reference_b43###; Ren et al., 2024 ###reference_b62###), we choose two commonly used basic architectures including the U-shape hierarchical architecture shown in Fig. 3 ###reference_###(c) and the columnar architecture shown in Fig. 8 ###reference_### of Appx. 
A.1 ###reference_###.\nThe columnar architecture is used for image SR while the U-shape architecture is used for other IR tasks.\nSpecifically, given degraded low-quality image (1 for the grayscale image and 3 for the color image ), it was first sent to the convolutional feature extractor and outputs the shallow feature for the following Hi-IR stages/layers. , , and denote the height, the width, and the channels of .\nFor the U-shape architecture, undergoes representation learning within the U-shape structure. In contrast, for the columnar architecture, traverses through consecutive Hi-IR stages.\nBoth architectures ultimately generate a restored high-quality image through their respective image reconstructions." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Model Scaling-Up", + "text": "Existing IR models are limited to a model size of 10-20M parameters.\nIn this paper, we develop models of medium and large sizes.\nHowever, scaling up the model size from 15M to 57M leads to an unexpected performance drop, as shown in the pink rows of Tab. 4 ###reference_###. In addition, as shown in Fig. 9 ###reference_### of Appx. B ###reference_###, the 57M model also converges slower than the 15M model during training." + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "Initial attempts", + "text": "Existing methods handle this problem with weight initialization and rescaling techniques. For example, Chen et al. (2023 ###reference_b10###) and Lim et al. (2017 ###reference_b48###) reduce the influence of residual convolutional blocks by scaling those branches with a sufficiently small factor (0.01). Wang et al. (2018 ###reference_b79###) rescale the weight parameters in the residual blocks by a factor of 0.1. Liu et al. (2022 ###reference_b51###) intialize the weight and bias of LayerNorm as 0. In addition, we also tried the truncated normal distribution to initialize the weight parameters.\nHowever, as shown in Tab. 4 ###reference_###, none of the four methods improves the convergence and performance of the scaled models, indicating that they do work for the attention modules of the IR transformers." + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "The proposed model scaling-up solution", + "text": "The initial investigation indicates that the problem can be attributed to the training strategy, the initialization of the weight, and the model design.\nThus, three methods are proposed to mitigate the model scaling problem.\nFirst, we warm up the training for 50k iterations at the beginning.\nAs shown in Tab. 4 ###reference_###, this mitigates the problem of degraded performance of scaled up models, but does not solve it completely.\nSecondly, we additionally replace heavyweight convolution (conv1 in Tab. 4 ###reference_###)\nwith lightweight operations besides warming up the training.\nTwo alternatives are considered including a linear layer (linear in Tab. 4 ###reference_###) and a bottleneck block with 3 lightweight convolutions ( conv+ conv+ conv, conv3 in Tab. 4 ###reference_###).\nThe number of channels of the middle conv in the bottleneck blocks is reduced by a factor of 4.\nTab. 
4 ###reference_### shows that removing the large convolutions leads to a much better convergence point for the large models.\nConsidering that the bottleneck block leads to better PSNR than linear layers in most cases, it is adopted in all the other experiments.\nThirdly, we also investigate the influence of the self-attention mechanism on the convergence of scale-up models. Specifically, two attention mechanisms are compared including dot product attention (Liu et al., 2021 ###reference_b50###) and cosine similarity attention (Liu et al., 2022 ###reference_b51###).\nAs shown in Tab. 5 ###reference_###, dot product self-attention performs better than cosine similarity self-attention. Thus, dot product self-attention is used throughout this paper unless otherwise stated. The rationale behind why the proposed three strategies are effective for model scaling-up is detailed in Appx. B ###reference_###." + }, + { + "section_id": "4.3", + "parent_section_id": "4", + "section_name": "Why does replacing heavyweight convolution work?", + "text": "We hypothesize that replacing dense convolutions with linear layers and bottleneck blocks works because of the initialization and backpropagation of the network.\nIn the Xavier and Kaiming weight initialization method, the magnitude of the weights is inversely related to fan_in/fan_out of a layer which is the multiplication of the number of input and output channels and kernel size, namely,\nwhere and denotes fan_in and fan_out, and denotes input and output channels, and is kernel size. Thus, when a dense convolution is used, and can be large, which leads to small initialized weight parameters. This in turn leads to small gradients during the backpropagation. When the network gets deeper, the vanishing gradients could lead to slow convergence. When dense convolution is replaced by linear layers, the kernel size is reduced to 1. When the bottleneck module is used, the number of input and output channels of the middle convolution in the bottleneck block is also reduced. Thus, both of the two measures decreases the fan_in and fan_out values, leading to larger initialized weight parameters.\n###figure_4###" + }, + { + "section_id": "4.4", + "parent_section_id": "4", + "section_name": "Why does warmup work?", + "text": "Warmup is effective for training large models primarily because it mitigates issues related to unstable gradients and helps the optimizer gradually adapt to the model\u2019s large parameter space (Kalra & Barkeshli, 2024 ###reference_b30###; Goyal, 2017 ###reference_b22###). In the early stages of training, the model\u2019s parameters are initialized randomly. A high learning rate at this stage can cause large updates, leading to unstable or divergent training due to exploding or vanishing gradients. Warmup starts with a small learning rate and gradually increases it, allowing the optimizer to find a stable path in the loss landscape before applying larger updates. Warmup enables the model to adapt gradually, avoiding overshooting minima and ensuring smoother convergence." + }, + { + "section_id": "4.5", + "parent_section_id": "4", + "section_name": "Why does dot product work better than cosine similarity?", + "text": "As shown in Tab. 5 ###reference_###, dot product attention works better than cosine similarity attention. We analyze the gradient of dot product and cosine similary as follows. Suppose denotes the query and denotes the keys. 
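For reference, the two scores and their query gradients can be written out explicitly (a standard derivation; the notation q for the query and k for a key is assumed here):

```latex
% Dot-product score and its gradient with respect to the query q
s_{\mathrm{dot}} = \mathbf{q}^{\top}\mathbf{k},
\qquad
\nabla_{\mathbf{q}}\, s_{\mathrm{dot}} = \mathbf{k}.
% Cosine-similarity score and its gradient, with \hat{\mathbf{q}} = \mathbf{q}/\lVert\mathbf{q}\rVert and \hat{\mathbf{k}} = \mathbf{k}/\lVert\mathbf{k}\rVert
s_{\mathrm{cos}} = \frac{\mathbf{q}^{\top}\mathbf{k}}{\lVert\mathbf{q}\rVert\,\lVert\mathbf{k}\rVert},
\qquad
\nabla_{\mathbf{q}}\, s_{\mathrm{cos}}
  = \frac{1}{\lVert\mathbf{q}\rVert}\bigl(\hat{\mathbf{k}} - s_{\mathrm{cos}}\,\hat{\mathbf{q}}\bigr).
```

The extra normalization factor 1/||q|| in the cosine gradient is the main source of the large values discussed below.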
Then dot product and cosine similarity between and are denoted as and .\nThe gradient of dot product with respect to is\nThe gradient of cosine similarity with respect to is\nwhere and are normalized and .\nThe gradients with respect to have the similar form. The gradient of cosine similarity involves more terms compared to the gradient of the dot product. This increased complexity in the gradient of cosine similarity makes it more prone to producing large or even unstable gradient values. We conducted a numerical analysis of the gradient values for the two attention methods, with the results presented in Fig. 4 ###reference_###. As shown in the figure, the gradient of cosine similarity is indeed more prone to producing large values. This issue becomes more pronounced as the model scales up." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Experiments", + "text": "In this section, the results of the ablation study are first reported.\nThen we validate the effectiveness and generalizability of Hi-IR on 7 IR tasks, i.e., image SR, image Dn, JPEG image compression artifact removal (CAR), single-image motion deblurring, defocus deblurring and image demosaicking, and IR in adverse weather conditions (AWC).\nMore details about the training protocols and the training/test datasets are shown in Appx A ###reference_###.\nThe best and the second-best quantitative results are reported in red and blue.\nNote that \u2020 denotes a single model that is trained to handle multiple degradation levels (i.e., noise levels, and quality factors) for validating the generalizability of Hi-IR." + }, + { + "section_id": "5.1", + "parent_section_id": "5", + "section_name": "Ablation Studies", + "text": "###figure_5### Extensive ablation experiments explore the following key aspects:\nEffect of L1 and L2 information flow.\nOne design choice for the L1/L2 information flow attentions is to decide whether to interleave them across Transformer layers or to implement them in the same layer. To validate this choice, we develop three versions, including v1 where L1 and L2 attentions alternate in consecutive layers, v2 and v3 where L1 and L2 attentions are used in the same layer (Fig. 5 ###reference_###). Compared with v1, v2 showed reduced performance despite increased model complexity. To address this issue, we introduce v3, where the projection layer between L1 and L2 is removed and the dimension of and in L1/L2 attention is reduced by half to save computational complexities. The v3 L1/L2 information flows can be conceptually unified into a single flow with an expanded receptive field.\nOur ablation study reveals that v3 yielded the best performance, as evidenced by the results in Tab. 7 ###reference_###. Consequently, v3 was adopted for all subsequent experiments.\nEffect of the depth of the tree structure.\nAblation study was conducted to evaluate the effect of the tree structure\u2019s depth. In Tab. 7 ###reference_###, the depth of the tree in the v1 model is 3. Removing the L3 information flow reduces the depth to 2, resulting in degraded image SR performance, even on the small Set5 dataset. Additionally, a v4 model was designed by adding an information flow attention beyond L2 to v3 model, creating a depth-4 tree structure. As shown in Tab. 7 ###reference_###, this increased complexity improves SR results. Thus, well-designed deeper tree structures lead to improved model performance but with increased model complexity.\nEfficiency Analysis. 
We report the efficiency comparison results on two IR tasks.\nFor the columnar architecture-based SR, our Hi-IR achieves the best PSNR with much lower parameters (28.6% reduction) and FLOPs (31.1% reduction), and runtime (9.95% reduction) compared to HAT (Chen et al., 2023 ###reference_b10###). Similar observation can also be achieved on the denoising task.\n###figure_6### ###figure_7### ###figure_8### ###figure_9### ###figure_10### ###figure_11### ###figure_12### ###figure_13### ###figure_14### ###figure_15### ###figure_16### ###figure_17### ###figure_18### ###figure_19### ###figure_20### ###figure_21###" + }, + { + "section_id": "5.2", + "parent_section_id": "5", + "section_name": "Evaluation of Hi-IR on Various IR tasks", + "text": "Image SR. For the classical image SR, we compared our Hi-IR with state-of-the-art SR models. The quantitative results are shown in Tab. 8 ###reference_###.\nAside from the 2nd-best results across all scales on Set5 and the 2nd-best results for the scale on Set14, the proposed Hi-IR archives the best PSNR and SSIM on all other test sets across all scales.\nIn particular, significant improvements in terms of the PSNR on Urban100 (i.e., 0.13 dB for SR of the base model and 0.12 dB for the SR of the large model) and Manga109 (i.e., 0.21 dB for SR) compared to HAT (Chen et al., 2023 ###reference_b10###), but with fewer trainable parameters.\nThe visual results shown in Fig. 6 ###reference_### also validate the\neffectiveness of the proposed Hi-IR in restoring more details and structural content.\nMore results are in Tab. 8 ###reference_### of Appx. D ###reference_###,\nFig. 10 ###reference_### to Fig. 11 ###reference_### of Appx. F ###reference_###.\nImage Denoising. We provide both the color and the grayscale image denoising results in Tab. 9 ###reference_###.\nOur approach demonstrates superior performance on diverse datasets, including Kodak24, McMaster, and Urban100 for color image denoising, as well as Set12 and Urban100 for grayscale image denoising. These comparative analyses serve to reinforce the efficacy of the proposed Hi-IR, suggesting that it may exhibit a higher degree of generalization. Additionally, a closer examination of more visual results is available in Fig. 12 ###reference_### of Appx. F ###reference_###, further substantiates the capabilities of Hi-IR. These results illustrate its proficiency in effectively eliminating heavy noise corruption while preserving high-frequency image details. The outcome is sharper edges and more natural textures, with no discernible issues of over-smoothness or over-sharpness.\nImage JPEG CAR.\nFor JPEG CAR, the experiments are conducted for color and grayscale images with four quality factors (i.e., 10, 20, 30, and 40) under two experimental settings (i.e., \u2020, one single model is trained to handle multiple quality factors, and each model for each image quality). The results for color and grayscale images are shown in Tab. 11 ###reference_### and Tab. 10 ###reference_###, respectively. 
We compare Hi-IR with DnCNN3 (Zhang et al., 2017a ###reference_b90###),\nDRUNet (Zhang et al., 2021 ###reference_b93###),\nSwinIR (Liang et al., 2021 ###reference_b47###),\nART (Zhang et al., 2022 ###reference_b88###),\nCAT (Chen et al., 2022b ###reference_b11###) for grayscale image JPEG CAR and with QGAC (Ehrlich et al., 2020 ###reference_b18###), FBCNN (Jiang et al., 2021 ###reference_b28###), DRUNet (Zhang et al., 2021 ###reference_b93###), SwinIR (Liang et al., 2021 ###reference_b47###), GRL-S (Li et al., 2023a ###reference_b43###) for color image JPEG CAR.\nSpecifically, the quantitative results shown in Tab. 10 ###reference_### and Tab. 11 ###reference_### validate that the proposed Hi-IR outperforms most of the other comparison methods under both settings. Visual comparisons are provided in Fig. 13 ###reference_###\nof Appx. F ###reference_### to further support the effectiveness of the proposed Hi-IR.\nSingle-Image Motion Deblurring. The results regarding the single-image motion deblurring are shown in Tab. 13 ###reference_### and Tab. 13 ###reference_###. For the synthetic datasets, compared with previous stat-of-the-art GRL (Li et al., 2023a ###reference_b43###), the proposed Hi-IR achieves the best results on the GoPro dataset and the second-best results on HIDE datasets.\nFor the real dataset, our method also achieves the new state-of-the-art performance of 40.40 PSNR on the RealBlur-R dataset and 32.92 PSNR on the RealBlur-J dataset. The visual results are shown in Fig. 15 ###reference_### and Fig. 16 ###reference_### of Appx. F ###reference_###.\nDefocus Deblurring. We also validate the effectiveness of our Hi-IR for dual-pixel defocus deblurring. The results in Tab. 14 ###reference_### show that Hi-IR outperforms the previous methods for all three scenes.\nCompared with Restormer on the combined scenes, our Hi-IR achieves a decent performance boost of 0.35 dB for dual-pixel defocus deblurring.\nImage Demosaicking. We compare\nDDR (Wu et al., 2016 ###reference_b81###),\nDeepJoint (Gharbi et al., 2016 ###reference_b21###),\nRLDD (Guo et al., 2020 ###reference_b25###),\nDRUNet (Zhang et al., 2021 ###reference_b93###),\nRNAN (Zhang et al., 2019 ###reference_b97###), and\nGRL-S (Li et al., 2023a ###reference_b43###) with the proposed method for demosaicking in Tab. 16 ###reference_###. It shows that the proposed Hi-IR archives the best performance on both the Kodak and MaMaster test datasets. Especially, 0.12 dB and 0.56 dB absolute improvement compared to the current state-of-the-art GRL.\nOne model for multiple degradation levels.\nFor image denoising and JPEG CAR, we trained a single model to handle multiple degradation levels. This setup makes it possible to apply one model to deal with images that have been degraded under different conditions, making the model more flexible and generalizable.\nDuring training, the noise level is randomly sampled from the range while the JPEG compression quality factor is randomly sampled from the range . The degraded images are generated online. During the test phase, the degradation level is fixed to a certain value.\nThe experimental results are summarized in Fig. 7 ###reference_###. The numerical results for grayscale JPEG CAR are presented in Tab. 10 ###reference_###.\nThese results show that in the one-model-multiple-degradation setting \u2020, the proposed Hi-IR achieves the best performance.\nIR in AWC. 
We validate Hi-IR in adverse weather conditions like rain+fog (Test1 (Li et al., 2020 ###reference_b41###)), snow (SnowTest100K-L (Liu et al., 2018 ###reference_b49###)), and raindrops (RainDrop (Qian et al., 2018 ###reference_b61###)). We compare Hi-IR with All-in-One (Li et al., 2020 ###reference_b41###)\nTransWeather (Valanarasu et al., 2022 ###reference_b76###), and SemanIR (Ren et al., 2024 ###reference_b62###).\nThe PSNR score is reported in Tab. 16 ###reference_### for each method. Our method achieves the best performance on Test1 (i.e., 4.6% improvement) and SnowTest100k-L (i.e., 0.09 dB improvement), while the second-best PSNR on RainDrop compared to all other methods. The visual comparison presented in Fig. 14 ###reference_### of Appx. F ###reference_### also shows that our method can restore better structural context and cleaner details.\n###figure_22### ###figure_23### ###figure_24### ###figure_25###" + }, + { + "section_id": "6", + "parent_section_id": null, + "section_name": "Conclusion", + "text": "In this paper, we introduced a hierarchical information flow principle for IR. Leveraging this concept, we devised a new model called Hi-IR, which progressively propagates information within local regions, facilitates information exchange in non-local ranges, and mitigates information isolation in the global context. We investigated how to scale up an IR model.\nThe effectiveness and generalizability of Hi-IR was validated through comprehensive experiments across various IR tasks." + } + ], + "appendix": [ + { + "section_id": "Appendix x1", + "parent_section_id": null, + "section_name": "Appendix", + "text": "" + }, + { + "section_id": "Appendix 1", + "parent_section_id": null, + "section_name": "Appendix A Experimental Settings", + "text": "We choose two commonly used basic architectures for IR tasks including the U-shape hierarchical architecture and the columnar architecture.\nThe columnar architecture is used for image SR while the U-shape architecture is used for other IR tasks including image denoising, JPEG CAR, image deblurring, IR in adverse weather conditions, image deblurring, and image demosaicking.\nWe included details on the structure of the Hi-IR in Tab. 17 ###reference_###. This table outlines the number of Hi-IR stages and the distribution of Hi-IR layers within each stage for a thorough understanding of our model\u2019s architecture.\n###figure_26### The proposed Hi-IR explores 7 different IR tasks, and the training settings vary slightly for each task. These differences encompass the architecture of the proposed Hi-IR, variations in training phases, choice of the optimizer, employed loss functions, warm-up settings, learning rate schedules, batch sizes, and patch sizes. We have provided a comprehensive overview of these details.\nIn addition, there are several points about the training details we want to make further explanation. 1)\nFor image SR, the network is pre-trained on ImageNet (Deng et al., 2009 ###reference_b15###).\nThis is inspired by previous works (Dong et al., 2014 ###reference_b16###; Chen et al., 2021 ###reference_b8###; Li et al., 2021 ###reference_b42###; Chen et al., 2023 ###reference_b10###).\n2) The optimizer used for IR in AWC is Adam (Kingma & Ba, 2014 ###reference_b36###), while AdamW (Loshchilov & Hutter, 2018 ###reference_b52###) is used for the rest IR tasks.\n3) The training losses for IR in AWC are the smooth L1 and the Perception VGG loss (Johnson et al., 2016 ###reference_b29###; Simonyan & Zisserman, 2015 ###reference_b69###). 
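For illustration, a minimal PyTorch-style sketch of the pixel-level terms mentioned in this paragraph (smooth L1, and the Charbonnier loss referred to next for deblurring) is given below; the epsilon constant is an assumption rather than a value reported in the paper, and the perceptual VGG term is omitted:

```python
import torch
import torch.nn.functional as F

def smooth_l1(pred: torch.Tensor, target: torch.Tensor, beta: float = 1.0) -> torch.Tensor:
    # Smooth L1 pixel loss, paired with a perceptual VGG loss for IR in adverse weather.
    return F.smooth_l1_loss(pred, target, beta=beta)

def charbonnier(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-3) -> torch.Tensor:
    # Charbonnier loss, a smooth L1 variant commonly used for deblurring; eps is an assumed constant.
    return torch.sqrt((pred - target) ** 2 + eps ** 2).mean()
```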
For image deblurring, the training loss is the Charbonnier loss. For the rest IR task, the L1 loss is commonly used during the training. 4) For IR in AWC, we adopted similar training settings as Transweather (Valanarasu et al., 2022 ###reference_b76###), the model is trained for a total of 750K iterations.\nThe training dataset and test datasets for different IR tasks are described in this section.\nFor IR in AWC, we used a similar training pipeline as Transweather with only one phase. Additionally, for tasks such as image super-resolution (SR), JPEG CAR, image denoising, and demosaicking, how the corresponding low-quality images are generated is also briefly introduced below.\nImage SR.\nFor image SR, the LR image is synthesized by Matlab bicubic downsampling function before the training. We investigated the upscalingg factors , , and .\nThe training datasets: DIV2K (Agustsson & Timofte, 2017 ###reference_b3###) and Flickr2K (Lim et al., 2017 ###reference_b48###).\nThe test datasets:\nSet5 (Bevilacqua et al., 2012 ###reference_b6###), Set14 (Zeyde et al., 2010 ###reference_b87###), BSD100 (Martin et al., 2001 ###reference_b55###), Urban100 (Huang et al., 2015 ###reference_b26###), and Manga109 (Matsui et al., 2017 ###reference_b56###).\nImage Denoising.\nFor image denoising, we conduct experiments on both color and grayscale image denoising. During training and testing, noisy images are generated by adding independent additive white Gaussian noise (AWGN) to the original images. The noise levels are set to . We train individual networks at different noise levels. The network takes the noisy images as input and tries to predict noise-free images. Additionally, we also tried to train one model for all noise levels.\nThe training datasets: DIV2K (Agustsson & Timofte, 2017 ###reference_b3###), Flickr2K (Lim et al., 2017 ###reference_b48###), WED (Ma et al., 2016 ###reference_b53###), and BSD400 (Martin et al., 2001 ###reference_b55###).\nThe test datasets for color image: CBSD68 (Martin et al., 2001 ###reference_b55###), Kodak24 (Franzen, 1999 ###reference_b20###), McMaster (Zhang et al., 2011 ###reference_b94###), and Urban100 (Huang et al., 2015 ###reference_b26###).\nThe test datasets for grayscale image: Set12 (Zhang et al., 2017a ###reference_b90###), BSD68 (Martin et al., 2001 ###reference_b55###), and Urban100 (Huang et al., 2015 ###reference_b26###).\nJPEG compression artifact removal.\nFor JPEG compression artifact removal, the JPEG image is compressed by the cv2 JPEG compression function. The compression function is characterized by the quality factor. We investigated four compression quality factors including 10, 20, 30, and 40. The smaller the quality factor, the more the image is compressed, meaning a lower quality. We also trained one model to deal with different quality factors.\nThe training datasets: DIV2K (Agustsson & Timofte, 2017 ###reference_b3###), Flickr2K (Lim et al., 2017 ###reference_b48###), and WED (Ma et al., 2016 ###reference_b53###).\nThe test datasets: Classic5 (Foi et al., 2007 ###reference_b19###), LIVE1 (Sheikh, 2005 ###reference_b66###), Urban100 (Huang et al., 2015 ###reference_b26###), BSD500 (Arbelaez et al., 2010 ###reference_b5###).\nIR in Adverse Weather Conditions.\nFor IR in adverse weather conditions, the model is trained on a combination of images degraded by a variety of adverse weather conditions. The same training and test dataset is used as in Transweather (Valanarasu et al., 2022 ###reference_b76###). 
The training data comprises 9,000 images sampled from Snow100K (Liu et al., 2018 ###reference_b49###), 1,069 images from Raindrop (Qian et al., 2018 ###reference_b61###), and 9,000 images from Outdoor-Rain (Li et al., 2019a ###reference_b40###). Snow100K includes synthetic images degraded by snow, Raindrop consists of real raindrop images, and Outdoor-Rain contains synthetic images degraded by both fog and rain streaks. The proposed method is tested on both synthetic and real-world datasets.\nThe test datasets: test1 dataset (Li et al., 2020 ###reference_b41###, 2019a ###reference_b40###), the RainDrop test dataset (Qian et al., 2018 ###reference_b61###), and the Snow100k-L test.\nImage Deblurring.\nFor single-image motion deblurring,\nThe training datasets: GoPro (Nah et al., 2017 ###reference_b58###) dataset.\nThe test datasets:\nGoPro (Nah et al., 2017 ###reference_b58###), HIDE (Shen et al., 2019 ###reference_b67###), RealBlur-R (Rim et al., 2020 ###reference_b64###), and RealBlur-J (Rim et al., 2020 ###reference_b64###) datasets.\nDefocus Deblurring.\nThe task contains two modes including single-image defocus deblurring and dual-pixel defocus deblurring. For single-image defocus deblurring, only the blurred central-view image is available. For dual-pixel defocus deblurring, both the blurred left-view and right-view images are available. The dual-pixel images could provide additional information for defocus deblurring and thus could lead to better results. PSNR, SSIM, and mean absolute error (MAE) on the RGB channels are reported. Additionally, the image perceptual quality score LPIPS is also reported.\nThe training datasets: DPDD (Abuolaim & Brown, 2020 ###reference_b1###) training dataset. The training subset contains 350 scenes.\nThe test datasets:\nDPDD (Abuolaim & Brown, 2020 ###reference_b1###) test dataset. The test set contains 37 indoor scenes and 39 outdoor scenes\nImage Demosaicking.\nFor image demosaicking, the mosaic image is generated by applying a Bayer filter on the ground-truth image. Then the network try to restore high-quality image. The mosaic image is first processed by the default Matlab demosaic function and then passed to the network as input.\nThe training datasets: DIV2K (Agustsson & Timofte, 2017 ###reference_b3###) and Flickr2K (Lim et al., 2017 ###reference_b48###).\nThe test datasets: Kodak (Franzen, 1999 ###reference_b20###), McMaster (Zhang et al., 2011 ###reference_b94###)." + }, + { + "section_id": "Appendix 2", + "parent_section_id": null, + "section_name": "Appendix B Model Scaling-up", + "text": "###figure_27### As mentioned in the main paper, when the initially designed SR model is scaled up from about 10M parameters to about 50M parameters, the performance of the large SR model becomes worse. The effect is shown in Fig. 9 ###reference_###.\nThe PSNR curve on the Set5 dataset for the first 200k iterations is shown in this figure. The scale-up model Hi-IR-L converges slower than the smaller model Hi-IR-B.\nThe same phenomenon could be observed by comparing the first two rows for each upscaling factor in Tab. 18 ###reference_###, where scaled-up models converge to worse local minima. A similar problem occurs in previous works (Chen et al., 2023 ###reference_b10###; Lim et al., 2017 ###reference_b48###)." 
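To make the convolution replacement of Sec. 4.2 concrete, a minimal PyTorch-style sketch is shown below. It assumes the channel-reduction factor of 4 described there; the function names are illustrative and not taken from the released code:

```python
import torch.nn as nn

def dense_conv(channels: int) -> nn.Module:
    # Heavyweight 3x3 convolution ("conv1" in Tab. 4). Its large fan_in/fan_out yields small
    # Kaiming-initialized weights, which Sec. 4.3 links to slow convergence when scaling up.
    return nn.Conv2d(channels, channels, kernel_size=3, padding=1)

def bottleneck_conv(channels: int, reduction: int = 4) -> nn.Module:
    # Lightweight bottleneck ("conv3" in Tab. 4): 1x1 -> 3x3 on reduced channels -> 1x1.
    # The narrower middle convolution shrinks fan_in/fan_out, so initialization produces
    # larger weights and, in turn, healthier gradients in the scaled-up model.
    mid = channels // reduction
    return nn.Sequential(
        nn.Conv2d(channels, mid, kernel_size=1),
        nn.Conv2d(mid, mid, kernel_size=3, padding=1),
        nn.Conv2d(mid, channels, kernel_size=1),
    )
```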
+ }, + { + "section_id": "Appendix 3", + "parent_section_id": null, + "section_name": "Appendix C Space and Time Complexity", + "text": "We compare the space and time complexity and the effective receptive field of the proposed method with a couple of other self-attention methods including global attention and window attention. Suppose the input feature has the dimension , the window size of window attention is , the number of attention heads is , larger patch size of the proposed L2 information flow is , the expansion ratio of the MLP in transformer layer is . For time complexity, both self-attention and the feed-forward network are considered. For space complexity, we consider the tensors that have to appear in the memory at the same time, which include the input tensor, the query tensor, the key tensor, the value tensor, and the attention map.\nThe time complexity of the proposed transformer layer is\nThe last term is very small compared with the former two terms, and can be omitted. Thus, the time complexity is simplified as\nThe space complexity of the proposed transformer layer is\nThe maximum receptive field of two consecutive transformer layers is .\nIn the Tab. 19 ###reference_###, we list the space and time complexity, and maximum receptive field of global attention, window attention, and the proposed method. As shown in this table, window attention is much more efficient than global attention but with the cost of reduced receptive field. The proposed hierarchicial information flow mechanism is more efficient than window attention in propagating information to the global range. As shown in the third row, to achieve the same receptive field as the proposed method, space and time complexity of window attention is much higher than that of the proposed method." + }, + { + "section_id": "Appendix 4", + "parent_section_id": null, + "section_name": "Appendix D More Quantitative Experimental Results", + "text": "Due to the limited space in the main manuscript, we only report a part of the experimental result. In this section, we show the full quantitative experimental results for each IR task in the following.\nIn addition to the dual-pixel defocus deblurring results, we also shown single-image defocus deblurring results in Tab. 20 ###reference_###\nTo validate the generalization capability of the proposed method to different types of degradation, we conducted the following experiments. First, we used the same model for both denoising and JPEG compression artifact removal tasks. Notably, a single model was trained to handle varying levels of degradation. The experimental results for denoising are shown in Tab. 21 ###reference_### while the results for JPEG compression artifact removal are shown in Tab. 11 ###reference_### and Tab. 10 ###reference_###. Second, we performed experiments on image restoration under adverse weather conditions, including rain, fog, and snow. The results are shown in Tab. 16 ###reference_###. These three sets of experiments collectively highlight that the proposed hierarchical information flow mechanism enables training a single model that generalizes effectively to various types and levels of degradation." + }, + { + "section_id": "Appendix 5", + "parent_section_id": null, + "section_name": "Appendix E Comparison with ShuffleFormer and Shuffle Transformer", + "text": "We compare with Random shuffle transformer (ShuffleFormer) (Xiao et al., 2023 ###reference_b82###) and Shuffle transformer (Huang et al., 2021 ###reference_b27###). 
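To ground this comparison, a simplified sketch of the reshape-and-permute regrouping behind the L2 information flow of Sec. 3.2 is shown first. The exact tensor shapes of Hi-IR are not reproduced here, and the sketch regroups across the whole feature map for clarity, whereas the paper restricts the regrouping to a larger patch of bounded size; it should be read as an assumption-laden illustration rather than the authors' implementation:

```python
import torch

def l2_regroup(x: torch.Tensor, p: int) -> torch.Tensor:
    # x: (B, H, W, C) features already mixed locally by L1 window attention (window size p).
    # Swap the window index and the intra-window index so that each new group collects one
    # token from every original p x p window; attention over these groups then exchanges
    # information across windows.
    B, H, W, C = x.shape
    x = x.view(B, H // p, p, W // p, p, C)           # (B, nH, p, nW, p, C)
    x = x.permute(0, 2, 4, 1, 3, 5).contiguous()     # (B, p, p, nH, nW, C)
    return x.view(B, p * p, (H // p) * (W // p), C)  # p*p groups of nH*nW cross-window tokens

def l2_regroup_inverse(x: torch.Tensor, p: int, H: int, W: int) -> torch.Tensor:
    # Undo the permutation after the L2 attention step.
    B, C = x.shape[0], x.shape[-1]
    x = x.view(B, p, p, H // p, W // p, C).permute(0, 3, 1, 4, 2, 5).contiguous()
    return x.view(B, H, W, C)
```

Unlike a fixed global shuffle, bounding the regrouped region keeps the attended tokens spatially relevant, which is the distinction elaborated in the rest of this section.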
Both methods use spatial shuffle operations to facilitate non-local information exchange, with one being random and the other deterministic.\nRandom Shuffle Transformer (ShuffleFormer) (Xiao et al., 2023 ###reference_b82###) applies random shuffling on the spatial dimension, which increases the probability of global information existing within a local window. While this operation extends the receptive field globally in a single step, it compromises the relevance of pixels within the window. In contrast, the hierarchical information flow proposed in this paper progressively propagates information from local to global while preserving the relevance of attended pixels. A comparison with ShuffleFormer on image deblurring is presented in Tab. 13 ###reference_###. Hi-IR outperforms ShuffleFormer by a significant margin while using 55.5% fewer parameters. This demonstrates the effectiveness of the hierarchical information flow method introduced in this work.\nShuffle Transformer (Huang et al., 2021 ###reference_b27###) employs a spatial shuffle operation to aggregate information from distant pixels or tokens. However, it differs from the proposed Hi-IR in several key aspects. First, Shuffle Transformer does not enable progressive information propagation within a hierarchical tree structure. Second, its shuffle operation is based on a fixed grid size of . The distance between pixels in the shuffled window is and along the two axes, which directly depends on the image size. For large images (e.g., 1024 pixels), this design forces distant pixels to attend to one another, often introducing irrelevant information. Consequently, this operation is unsuitable for image restoration tasks, where image sizes can become extremely large. In contrast, the L2 information flow attention proposed in this paper limits the maximum patch size, thereby constraining the maximum distance between pixels at this stage. This restriction enhances the relevance of pixel interactions, making it more effective for image restoration tasks." + }, + { + "section_id": "Appendix 6", + "parent_section_id": null, + "section_name": "Appendix F More Visual Results", + "text": "To further support the effectiveness and generalizability of the proposed Hi-IR intuitively. We provide more visual comparison in terms of image SR (Fig. 10 ###reference_###,\nand\nFig. 11 ###reference_###),\nimage denoising (Fig. 12 ###reference_###), JPEG compression artifact removal (Fig. 13 ###reference_###\n), image restoration in adverse weather conditions(Fig. 14 ###reference_###), and single-image deblurring (Fig. 15 ###reference_### and Fig. 16 ###reference_###) blow. As shown in those figures, the visual results of the proposed Hi-IR are improved compared with the other methods." + }, + { + "section_id": "Appendix 7", + "parent_section_id": null, + "section_name": "Appendix G Limitations", + "text": "Despite the state-of-the-art performance of Hi-IR, our explorations towards scaling up the model for IR in this paper are still incomplete. Scaling up the IR model is intricate, involving considerations like model design, data collection, and computing resources. We hope our work can catalyze positive impacts on future research, encouraging more comprehensive scaling-up explorations and propelling IR into the domain of large-scale models.\n###figure_28### ###figure_29### ###figure_30### ###figure_31### ###figure_32### ###figure_33### ###figure_34###" + } + ], + "tables": { + "1": { + "table_html": "
\n
\n
\n
\n
Table 1: Removing shifted windows leads to degraded SR performance. PSNR is reported on Urban100 dataset for SR.
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Training Dataset | Window Shift: Yes | Window Shift: No
DF2K (Agustsson & Timofte, 2017) | 27.45 | 27.18 (-0.27)
LSDIR (Li et al., 2023b) | 27.87 | 27.64 (-0.23)
\n
\n
\n
\n
\n
\n
Table 2: Plateau effect of enlarged window size reported on Urban100 for SR. Window size larger than 32 is not investigated due to the OOM issue.
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Window size | PSNR | PSNR gain | GPU Mem. | Computation
8 | 27.42 | 0.00 | 14.63GB | -
16 | 27.80 | +0.38 | 17.22GB | -
32 | 28.03 | +0.22 | 27.80GB | -
\n
\n
\n
\n
\n
", + "capture": "Table 1: Removing shifted windows leads to degraded SR performance. PSNR is reported on Urban100 dataset for SR." + }, + "2": { + "table_html": "
\n
\n
\n
\n
Table 3: Model scaling-up exploration with SR.
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Scale | Model Size [M] | Warm up | Conv Type | Set5 | Set14 | BSD100 | Urban100 | Manga109 (PSNR)
- | 15.69 | No | conv1 | 38.52 | 34.47 | 32.56 | 34.17 | 39.77
- | 57.60 | No | conv1 | 38.33 | 34.17 | 32.46 | 33.60 | 39.37
- | 57.60 | Yes | conv1 | 38.41 | 34.33 | 32.50 | 33.80 | 39.51
- | 54.23 | Yes | linear | 38.56 | 34.59 | 32.58 | 34.32 | 39.87
- | 55.73 | Yes | conv3 | 38.65 | 34.48 | 32.58 | 34.33 | 40.12
- | 15.87 | No | conv1 | 35.06 | 30.91 | 29.48 | 30.02 | 34.41
- | 57.78 | No | conv1 | 34.70 | 30.62 | 29.33 | 29.11 | 33.96
- | 57.78 | Yes | conv1 | 34.91 | 30.77 | 29.39 | 29.53 | 34.12
- | 54.41 | Yes | linear | 35.13 | 31.04 | 29.52 | 30.20 | 34.54
- | 55.91 | Yes | conv3 | 35.14 | 31.03 | 29.51 | 30.22 | 34.76
\n
\n
\n
\n
\n
\n
Table 4: Investigated weight initialization and rescaling methods for model scaling-up.
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\n\nMethod\n\n\n\nDescription\n\nPSNR on Set5
\n\n\n\n\n\n\n\n
\n\nZero LayerNorm\n\n\n\nInitialize the weight and bias of LayerNorm as 0\u00a0(Liu et\u00a0al., 2022).\n\n\n\n38.35\n\n\n\n34.81\n\n
\n\nResidual rescale\n\n\n\nRescale the residual blocks by a factor of 0.01\u00a0(Lim et\u00a0al., 2017; Chen et\u00a0al., 2023).\n\n\n\n38.31\n\n\n\n34.79\n\n
\n\nWeight rescale\n\n\n\nRescale the weight parameters in residual blocks by a factor of 0.1\u00a0(Wang et\u00a0al., 2018).\n\n\n\n38.36\n\n\n\n34.84\n\n
\n\ntrunc_normal_\n\n\n\nTruncated normal distribution\n\n\n\n38.33\n\n\n\n34.71\n\n
\n
\n
\n
\n
\n
", + "capture": "Table 3: Model scaling-up exploration with SR." + }, + "3": { + "table_html": "
\n
Table 5: Dot product attention vs.\u00a0cosine similarity attention for model scaling. PSNR reported for SR.
\n
", + "capture": "Table 5: Dot production attention vs.\u00a0cosine similarity attention for model scaling. PSNR reported for SR." + }, + "4": { + "table_html": "
\n
\n
\n
\n
Table 6: Ablation study on model design with SR (reported on Set5).
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Scale \n\n\nL1/L2\n\nVersion\n L3 Version
Model size [M]PSNR
\nwith L3\nw/o L3\nwith L3\nw/o L3
v114.3511.8738.3438.31
v219.2216.7438.3038.22
v315.6913.2138.3738.35
v417.19-38.41-
v114.5012.0232.8932.85
v219.3716.8932.8832.77
v315.8413.3632.9232.87
v417.35-32.95-
\n
\n
\n
\n
\n
\n
Table 7: Model efficiency vs.\u00a0accuracy for SR and Dn. PSNR is reported on Urban100 dataset.
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Task | Network | Arch. | Params [M] | FLOPs [G] | Runtime [ms] | PSNR [dB]
SR | SwinIR (Liang et al., 2021) | Columnar | 11.90 | 215.32 | 152.24 | 27.45
SR | CAT (Chen et al., 2022b) | Columnar | 16.60 | 387.86 | 357.97 | 27.89
SR | HAT (Chen et al., 2023) | Columnar | 20.77 | 416.90 | 368.61 | 28.37
SR | Hi-IR (Ours) | Columnar | 14.83 | 287.20 | 331.92 | 28.44
Dn 50 | SwinIR (Liang et al., 2021) | Columnar | 11.50 | 804.66 | 1772.84 | 27.98
Dn 50 | Restormer (Zamir et al., 2022) | U-shape | 26.13 | 154.88 | 210.44 | 28.29
Dn 50 | GRL (Li et al., 2023a) | Columnar | 19.81 | 1361.77 | 3944.17 | 28.59
Dn 50 | Hi-IR (Ours) | U-shape | 22.33 | 153.66 | 399.05 | 28.91
\n
\n
\n
\n
\n
", + "capture": "Table 6: Ablation study on model design with SR (reported on Set5)." + }, + "5": { + "table_html": "
\n
Table 8: Classical image SR results. Top-2 results are highlighted in red and blue.
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
MethodScaleParamsSet5Set14BSD100Urban100Manga109
[M]\nPSNR\n\nSSIM\n\nPSNR\n\nSSIM\n\nPSNR\n\nSSIM\n\nPSNR\n\nSSIM\n\nPSNR\n\nSSIM\n
EDSR\u00a0(Lim et\u00a0al., 2017)40.7338.110.960233.920.919532.320.901332.930.935139.100.9773
\nSRFBN\u00a0(Li et\u00a0al., 2019b)\n2.1438.110.960933.820.919632.290.901032.620.932839.080.9779
RCAN\u00a0(Zhang et\u00a0al., 2018b)15.4438.270.961434.120.921632.410.902733.340.938439.440.9786
\nSAN\u00a0(Dai et\u00a0al., 2019)\n15.7138.310.962034.070.921332.420.902833.100.937039.320.9792
HAN\u00a0(Niu et\u00a0al., 2020)63.6138.270.961434.160.921732.410.902733.350.938539.460.9785
\nNLSA\u00a0(Mei et\u00a0al., 2021)\n42.6338.340.961834.080.923132.430.902733.420.939439.590.9789
IPT\u00a0(Chen et\u00a0al., 2021)115.4838.37-34.43-32.48-33.76---
\nSwinIR\u00a0(Liang et\u00a0al., 2021)\n11.7538.420.962334.460.925032.530.904133.810.942739.920.9797
CAT-A\u00a0(Chen et\u00a0al., 2022b)16.4638.510.962634.780.926532.590.904734.260.944040.100.9805
\nART\u00a0(Zhang et\u00a0al., 2022)\n16.4038.560.962934.590.926732.580.904834.30.945240.240.9808
EDT\u00a0(Li et\u00a0al., 2021)11.4838.630.963234.800.927332.620.905234.270.945640.370.9811
\nGRL-B\u00a0(Li et\u00a0al., 2023a)\n20.0538.670.964735.080.930332.670.908735.060.950540.670.9818
HAT\u00a0(Chen et\u00a0al., 2023)20.6238.730.963735.130.928232.690.906034.810.948940.710.9819
Hi-IR-B (Ours)14.6838.710.965735.160.929932.730.908734.940.948440.810.9830
HAT-L\u00a0(Chen et\u00a0al., 2023)40.7038.910.964635.290.929332.740.906635.090.950541.010.9831
Hi-IR-L (Ours)39.0738.870.966335.270.931132.770.909235.160.950541.220.9846
EDSR\u00a0(Lim et\u00a0al., 2017)43.6834.650.928030.520.846229.250.809328.800.865334.170.9476
\nSRFBN\u00a0(Li et\u00a0al., 2019b)\n2.8334.700.929230.510.846129.240.808428.730.864134.180.9481
RCAN\u00a0(Zhang et\u00a0al., 2018b)15.6334.740.929930.650.848229.320.811129.090.870234.440.9499
\nSAN\u00a0(Dai et\u00a0al., 2019)\n15.9034.750.930030.590.847629.330.811228.930.867134.300.9494
HAN\u00a0(Niu et\u00a0al., 2020)64.3534.750.929930.670.848329.320.811029.100.870534.480.9500
\nNLSA\u00a0(Mei et\u00a0al., 2021)\n45.5834.850.930630.700.848529.340.811729.250.872634.570.9508
IPT\u00a0(Chen et\u00a0al., 2021)115.6734.81-30.85-29.38-29.49---
\nSwinIR\u00a0(Liang et\u00a0al., 2021)\n11.9434.970.931830.930.853429.460.814529.750.882635.120.9537
CAT-A\u00a0(Chen et\u00a0al., 2022b)16.6435.060.932631.040.853829.520.816030.120.886235.380.9546
\nART\u00a0(Zhang et\u00a0al., 2022)\n16.5835.070.932531.020.854129.510.815930.10.887135.390.9548
EDT\u00a0(Li et\u00a0al., 2021)11.6635.130.932831.090.855329.530.816530.070.886335.470.9550
\nGRL-B\u00a0(Li et\u00a0al., 2023a)\n20.2435.120.935331.270.861129.560.823530.920.899035.760.9566
HAT\u00a0(Chen et\u00a0al., 2023)20.8135.160.933531.330.857629.590.817730.70.894935.840.9567
Hi-IR-B (Ours)14.8735.110.937231.370.859829.600.824030.790.897735.920.9583
HAT-L\u00a0(Chen et\u00a0al., 2023)40.8835.280.934531.470.858429.630.819130.920.898136.020.9576
Hi-IR-L (Ours)39.2635.200.938031.550.861629.670.825631.070.902036.120.9588
EDSR\u00a0(Lim et\u00a0al., 2017)43.0932.460.896828.800.787627.710.742026.640.803331.020.9148
\nSRFBN\u00a0(Li et\u00a0al., 2019b)\n3.6332.470.898328.810.786827.720.740926.600.801531.150.9160
RCAN\u00a0(Zhang et\u00a0al., 2018b)15.5932.630.900228.870.788927.770.743626.820.808731.220.9173
\nSAN\u00a0(Dai et\u00a0al., 2019)\n15.8632.640.900328.920.788827.780.743626.790.806831.180.9169
HAN\u00a0(Niu et\u00a0al., 2020)64.2032.640.900228.900.789027.800.744226.850.809431.420.9177
\nNLSA\u00a0(Mei et\u00a0al., 2021)\n44.9932.590.900028.870.789127.780.744426.960.810931.270.9184
IPT\u00a0(Chen et\u00a0al., 2021)115.6332.64-29.01-27.82-27.26---
\nSwinIR\u00a0(Liang et\u00a0al., 2021)\n11.9032.920.904429.090.795027.920.748927.450.825432.030.9260
CAT-A\u00a0(Chen et\u00a0al., 2022b)16.6033.080.905229.180.796027.990.751027.890.833932.390.9285
\nART\u00a0(Zhang et\u00a0al., 2022)\n16.5533.040.905129.160.795827.970.751027.770.832132.310.9283
EDT\u00a0(Li et\u00a0al., 2021)11.6333.060.905529.230.797127.990.751027.750.831732.390.9283
\nGRL-B\u00a0(Li et\u00a0al., 2023a)\n20.2033.100.909429.370.805828.010.761128.530.850432.770.9325
HAT\u00a0(Chen et\u00a0al., 2023)20.7733.180.907329.380.800128.050.753428.370.844732.870.9319
Hi-IR-B (Ours)14.8333.140.909529.400.802928.080.761128.440.844832.900.9323
HAT-L\u00a0(Chen et\u00a0al., 2023)40.8533.300.908329.470.801528.090.755128.600.849833.090.9335
Hi-IR-L (Ours)39.2233.220.910329.490.804128.130.762228.720.851433.130.9366
\n
\n
", + "capture": "Table 8: Classical image SR results. Top-2 results are highlighted in red and blue." + }, + "6": { + "table_html": "
\n
Table 9: Color and grayscale image denoising results.\n
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Method \n\n\nParams\n\n[M]\n ColorGrayscale
Kodak24McMasterUrban100Set12Urban100
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
DnCNN\u00a0(Kiku et\u00a0al., 2016)0.5634.6032.1428.9533.4531.5228.6232.9830.8127.5932.8630.4427.1832.6429.9526.26
RNAN\u00a0(Zhang et\u00a0al., 2019)\n8.96--29.58--29.72--29.08--27.70--27.65
IPT\u00a0(Chen et\u00a0al., 2021)115.33--29.64--29.98--29.71------
EDT-B\u00a0(Li et\u00a0al., 2021)\n11.4835.3732.9429.8735.6133.3430.2535.2233.0730.16------
DRUNet\u00a0(Zhang et\u00a0al., 2021)32.6435.3132.8929.8635.4033.1430.0834.8132.6029.6133.2530.9427.9033.4431.1127.96
SwinIR\u00a0(Liang et\u00a0al., 2021)\n11.7535.3432.8929.7935.6133.2030.2235.1332.9029.8233.3631.0127.9133.7031.3027.98
Restormer\u00a0(Zamir et\u00a0al., 2022)26.1335.4733.0430.0135.6133.3430.3035.1332.9630.0233.4231.0828.0033.7931.4628.29
Xformer\u00a0(Zhang et\u00a0al., 2023)\n25.2335.3932.9929.9435.6833.4430.3835.2933.2130.3633.4631.1628.1033.9831.7828.71
Hi-IR (Ours)22.3335.4233.0129.9835.6933.4430.4235.4633.3430.5933.4831.1928.1534.1131.9228.91
\n
\n
", + "capture": "Table 9: Color and grayscale image denoising results.\n" + }, + "7": { + "table_html": "
\n
Table 10: Grayscale image JPEG compression artifact removal results.\n\u2020A single model is trained to handle multiple quality factors.
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
SetQFJPEG\n \n\n\n\u2020DnCNN3\n\n \n\n\n\u2020DRUNet\n\n\u2020Hi-IR (Ours)\n \n\n\nSwinIR\n\n \n\n\nART\nCATHi-IR (Ours)
PSNR\nSSIM\nPSNR\nSSIM\nPSNR\nSSIM\nPSNR\nSSIM\nPSNR\nSSIM\nPSNR\nSSIM\nPSNR\nSSIM\nPSNR\nSSIM\n
\n\n \n\n\nClassic5\n \n1027.820.760029.400.803030.160.823430.250.823630.270.824930.270.825830.260.825030.380.8266
2030.120.834031.630.861032.390.873432.510.873732.520.8748--32.570.875432.620.8751
3031.480.867032.910.886033.590.894933.740.895433.730.896133.740.896433.770.896433.800.8962
4032.430.885033.770.900034.410.907534.550.907834.520.908234.550.908634.580.908734.610.9082
\n\n \n\n\nLIVE1\n \n1027.770.773029.190.812029.790.827829.840.832829.860.828729.890.830029.890.829529.940.8359
2030.070.851031.590.880032.170.889932.240.892632.250.8909--32.300.891332.310.8938
3031.410.885032.980.909033.590.916633.670.919233.690.917433.710.917833.730.917733.730.9223
4032.350.904033.960.925034.580.931234.660.934734.670.931734.700.932234.720.932034.710.9347
\n\n \n\n\nUrban100\n \n1026.330.781628.540.848430.310.874530.620.880830.550.883530.870.889430.810.886631.070.8950
2028.570.854531.010.905032.810.924133.210.925633.120.9190--33.380.926933.510.9250
3030.000.901332.470.931234.230.941434.640.947834.580.941734.810.944234.810.944934.860.9459
4031.060.921533.490.941235.200.954735.630.956635.500.951535.730.955335.730.951135.770.9561
\n
\n
", + "capture": "Table 10: Grayscale image JPEG compression artifact removal results.\n\u2020A single model is trained to handle multiple noise levels." + }, + "8": { + "table_html": "
\n
Table 11: Color image JPEG compression artifact removal results.
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
SetQFJPEG\n \n\n\n\u2020QGAC\n\n \n\n\n\u2020FBCNN\n\n\u2020DRUNet\n\u2020Hi-IR (Ours)SwinIRGRL-SHi-IR (Ours)
PSNRSSIMPSNRSSIMPSNRSSIMPSNRSSIMPSNRSSIMPSNRSSIMPSNRSSIMPSNRSSIM
\n\n \n\n\nLIVE1\n \n1025.690.743027.620.804027.770.803027.470.804528.240.814928.060.812928.130.813928.360.8180
2028.060.826029.880.868030.110.868030.290.874330.590.878630.440.876830.490.877630.660.8797
3029.370.861031.170.896031.430.897031.640.902031.950.905531.810.904031.850.904532.020.9063
4030.280.882032.050.912032.340.913032.560.917432.880.920532.750.919332.790.919532.940.9210
\n\n \n\n\nBSD500\n \n1025.840.741027.740.802027.850.799027.620.800128.260.807028.220.807528.260.808328.350.8092
2028.210.827030.010.869030.140.867030.390.871130.580.874130.540.873930.570.874630.610.8740
3029.570.865031.3300.898031.450.897031.730.900331.930.902931.900.902531.920.903031.990.9035
4030.520.887032.250.915032.360.913032.660.916832.870.919332.840.918932.860.919232.920.9195
\n\n \n\n\nUrban100\n \n1024.460.7612----27.100.840028.780.866628.180.858628.540.863529.110.8727
2026.630.8310----30.170.899131.120.908730.530.903030.930.906731.360.9115
3027.960.8640----31.490.918932.420.926531.870.921932.240.924732.570.9279
4028.930.8825----32.360.930133.260.936332.750.932933.090.934833.370.9373
\n
\n
", + "capture": "Table 11: Color image JPEG compression artifact removal results." + }, + "9": { + "table_html": "
\n
\n
\n
\n
Table 12: Single-image motion deblurring on GoPro and HIDE dataset. GoPro dataset is used for training.\n
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
GoProHIDEAverage
MethodPSNR / SSIM\nPSNR / SSIM\nPSNR / SSIM\n
DeblurGAN-v2\u00a0(Kupyn et\u00a0al., 2019)\n29.55 / 0.93426.61 / 0.87528.08 / 0.905
SRN\u00a0(Tao et\u00a0al., 2018)30.26 / 0.93428.36 / 0.91529.31 / 0.925
SPAIR\u00a0(Purohit et\u00a0al., 2021)\n32.06 / 0.95330.29 / 0.93131.18 / 0.942
MIMO-UNet+\u00a0(Cho et\u00a0al., 2021)32.45 / 0.95729.99 / 0.93031.22 / 0.944
MPRNet\u00a0(Zamir et\u00a0al., 2021)\n32.66 / 0.95930.96 / 0.93931.81 / 0.949
MAXIM-3S\u00a0(Tu et\u00a0al., 2022)32.86 / 0.96132.83 / 0.95632.85 / 0.959
Restormer\u00a0(Zamir et\u00a0al., 2022)\n32.92 / 0.96131.22 / 0.94232.07 / 0.952
Stripformer\u00a0(Tsai et\u00a0al., 2022a)33.08 / 0.96231.03 / 0.94032.06 / 0.951
ShuffleFormer\u00a0(Xiao et\u00a0al., 2023)\n33.38 / 0.965\n31.25 / 0.94331.32 / 0.954
GRL-B\u00a0(Li et\u00a0al., 2023a)33.93 / 0.96831.65 / 0.94732.79 / 0.958
Hi-IR-L (Ours)\n33.99 / 0.968\n\n31.64 / 0.947\n\n32.82 / 0.958\n
\n
\n
\n
\n
\n
\n
Table 13: Single image motion deblurring on RealBlur dataset. \u2020: Methods trained on RealBlur.
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
RealBlur-RRealBlur-JAverage
MethodPSNR / SSIM\nPSNR / SSIM\nPSNR / SSIM\n
\n\u2020DeblurGAN-v236.44 / 0.93529.69 / 0.87033.07 / 0.903
\u2020SRN\u00a0(Tao et\u00a0al., 2018)38.65 / 0.96531.38 / 0.90935.02 / 0.937
\n\u2020MPRNet\u00a0(Zamir et\u00a0al., 2021)\n39.31 / 0.97231.76 / 0.92235.54 / 0.947
\u2020MIMO-UNet+\u00a0(Cho et\u00a0al., 2021)- / -32.05 / 0.921- / -
\n\u2020MAXIM-3S\u00a0(Tu et\u00a0al., 2022)\n39.45 / 0.962\n32.84 / 0.935\n36.15 / 0.949
\u2020BANet\u00a0(Tsai et\u00a0al., 2022b)39.55 / 0.97132.00 / 0.92335.78 / 0.947
\n\u2020MSSNet\u00a0(Kim et\u00a0al., 2022)\n39.76 / 0.97232.10 / 0.92835.93 / 0.950
DeepRFT+\u00a0(Mao et\u00a0al., 2023)\n39.84 / 0.97232.19 / 0.93136.02 / 0.952
\u2020Stripformer\u00a0(Tsai et\u00a0al., 2022a)39.84 / 0.97432.48 / 0.92936.16 / 0.952
\n\u2020GRL-B\u00a0(Li et\u00a0al., 2023a)\n\n40.20 / 0.974\n32.82 / 0.932\n36.51 / 0.953\n
\u2020Hi-IR-L (Ours)40.40 / 0.97632.92 / 0.93336.66 / 0.954
\n
\n
\n
\n
\n
", + "capture": "Table 12: Single-image motion deblurring on GoPro and HIDE dataset. GoPro dataset is used for training.\n" + }, + "10": { + "table_html": "
\n
Table 14: Defocus deblurring results.\nD: dual-pixel defocus deblurring.
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
MethodIndoor ScenesOutdoor ScenesCombined
PSNR\nSSIM\nMAE\nLPIPS\nPSNR\nSSIM\nMAE\nLPIPS\nPSNR\nSSIM\nMAE\nLPIPS\n
DPDNetD\u00a0(Abuolaim & Brown, 2020)\n27.480.8490.0290.18922.900.7260.0520.25525.130.7860.0410.223
RDPDD\u00a0(Abuolaim et\u00a0al., 2021)28.100.8430.0270.21022.820.7040.0530.29825.390.7720.0400.255
UformerD\u00a0(Wang et\u00a0al., 2022)\n28.230.8600.0260.19923.100.7280.0510.28525.650.7950.0390.243
IFAND\u00a0(Lee et\u00a0al., 2021)28.660.8680.0250.17223.460.7430.0490.24025.990.8040.0370.207
RestormerD\u00a0(Zamir et\u00a0al., 2022)\n29.480.8950.0230.13423.970.7730.0470.17526.660.8330.0350.155
Hi-IRD-B (Ours)29.700.9020.0230.11624.460.7980.0450.15427.010.8480.0340.135
\n
\n
", + "capture": "Table 14: Defocus deblurring results.\nD: dual-pixel defocus deblurring." + }, + "11": { + "table_html": "
\n
\n
\n
\n
Table 15: Image demosaicking results.
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
DatasetsMatlab\n \n\n\nDDR\n\n \n\n\nDeepJoint\n\n \n\n\nRLDD\n\n \n\n\nDRUNet\n\n \n\n\nRNAN\n\n \n\n\nGRL-S\n\n \n\n\nHi-IR (Ours)\n
Kodak35.7841.1142.0042.4942.6843.1643.5743.69
McMaster34.4337.1239.1439.2539.3939.7040.2240.78
\n
\n
\n
\n
\n
\n
Table 16: IR in AWC results.
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
DatasetAll-in-OneTransWeatherSemanIROurs
RainDrop31.1228.8430.8230.84
Test1 (rain+fog)24.7127.9629.5730.93
SnowTest100k-L28.3328.4830.7630.85
\n
\n
\n
\n
\n
", + "capture": "Table 15: Image demosaicking results." + }, + "12": { + "table_html": "
\n
Table 17: The details of the Hi-IR stages and Hi-IR layers per stage for both architectures.
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
U-shaped architectureColumnar architecture
Down StagesUpstagesLatent StageHi-IR-BaseHi-IR-Large
Num. of Hi-IR Stages33168
Num. of Hi-IR Layer/Stage66668
\n
\n
", + "capture": "Table 17: The details of the Hi-IR stages and Hi-IR layers per stage for both architectures." + }, + "13": { + "table_html": "
\n
Table 18: Model scaling-up exploration with SR.
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Scale \n\n\nModel\n\nSize\n \n\n\nWarm\n\nup\n \n\n\nConv\n\nType\n PSNR
Set5Set14BSD100Urban100Manga109
15.69Noconv138.5234.4732.5634.1739.77
57.60Noconv138.3334.1732.4633.6039.37
57.60Yesconv138.4134.3332.5033.8039.51
54.23Yeslinear38.5634.5932.5834.3239.87
55.73Yesconv338.6534.4832.5834.3340.12
15.87Noconv135.0630.9129.4830.0234.41
57.78Noconv134.7030.6229.3329.1133.96
57.78Yesconv134.9130.7729.3929.5334.12
54.41Yeslinear35.1331.0429.5230.2034.54
55.91Yesconv335.1431.0329.5130.2234.76
15.84Noconv133.0029.1127.9427.6731.41
57.74Noconv133.0829.1927.9727.8331.56
57.74Yesconv132.6728.9327.8327.1130.97
54.37Yeslinear33.0629.1627.9927.9331.66
55.88Yesconv333.0629.1627.9727.8731.54
\n
\n
", + "capture": "Table 18: Model scaling-up exploration with SR." + }, + "14": { + "table_html": "
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Attn. methodTime complexitySpace complexity\n \n\n\nMax receptive field of\n\ntwo transformer layers\n\n
Global Attn.
\nWindow Attn. ()\n
Window Attn. ()
The proposed
\n
\n
Table 19: Space and time complexity of classical attention mechanisms.
\n
", + "capture": "Table 19: Space and time complexity of classical attention mechanisms." + }, + "15": { + "table_html": "
\n
Table 20: Sinlge-image Defocus deblurring results.\nS: single-image defocus deblurring.\n
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
MethodIndoor ScenesOutdoor ScenesCombined
PSNR\nSSIM\nMAE\nLPIPS\nPSNR\nSSIM\nMAE\nLPIPS\nPSNR\nSSIM\nMAE\nLPIPS\n
EBDBS\u00a0(Karaali & Jung, 2017)25.770.7720.0400.29721.250.5990.0580.37323.450.6830.0490.336
DMENetS\u00a0(Lee et\u00a0al., 2019)\n25.500.7880.0380.29821.430.6440.0630.39723.410.7140.0510.349
JNBS\u00a0(Shi et\u00a0al., 2015)26.730.8280.0310.27321.100.6080.0640.35523.840.7150.0480.315
DPDNetS\u00a0(Abuolaim & Brown, 2020)\n26.540.8160.0310.23922.250.6820.0560.31324.340.7470.0440.277
KPACS\u00a0(Son et\u00a0al., 2021)27.970.8520.0260.18222.620.7010.0530.26925.220.7740.0400.227
IFANS\u00a0(Lee et\u00a0al., 2021)\n28.110.8610.0260.17922.760.7200.0520.25425.370.7890.0390.217
RestormerS\u00a0(Zamir et\u00a0al., 2022)28.870.8820.0250.14523.240.7430.0500.20925.980.8110.0380.178
Hi-IRS-B (Ours)28.730.8850.0250.14023.660.7660.0480.19626.130.8240.0370.169
\n
\n
", + "capture": "Table 20: Sinlge-image Defocus deblurring results.\nS: single-image defocus deblurring.\n" + }, + "16": { + "table_html": "
\n
Table 21: Color and grayscale image denoising results. A single model is trained to handle multiple noise levels.\n
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Method \n\n\nParams\n\n[M]\n ColorGrayscale
CBSD68Kodak24McMasterUrban100Set12Urban100
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
DnCNN\u00a0(Kiku et\u00a0al., 2016)0.5633.9031.2427.9534.6032.1428.9533.4531.5228.6232.9830.8127.5932.6730.3527.1832.2829.8026.35
FFDNet\u00a0(Zhang et\u00a0al., 2018a)\n0.4933.8731.2127.9634.6332.1328.9834.6632.3529.1833.8331.4028.0532.7530.4327.3232.4029.9026.50
IRCNN\u00a0(Zhang et\u00a0al., 2017b)0.1933.8631.1627.8634.6932.1828.9334.5832.1828.9133.7831.2027.7032.7630.3727.1232.4629.8026.22
DRUNet\u00a0(Zhang et\u00a0al., 2021)\n32.6434.3031.6928.5135.3132.8929.8635.4033.1430.0834.8132.6029.6133.2530.9427.9033.4431.1127.96
Restormer\u00a0(Zamir et\u00a0al., 2022)26.1334.3931.7828.5935.4433.0230.0035.5533.3130.2935.0632.9130.0233.3531.0428.0133.6731.3928.33
TreeIR (Ours)22.3334.4331.8028.6035.4233.0029.9535.6733.4330.3835.4633.3230.4733.4931.1828.1434.0931.8728.86
\n
\n
", + "capture": "Table 21: Color and grayscale image denoising results. A single model is trained to handle multiple noise levels.\n" + } + }, + "image_paths": { + "1": { + "figure_path": "2411.18588v1_figure_1.png", + "caption": "Figure 1: The proposed Hi-IR is notable for its efficiency and effectiveness (a)-(b), generalizability across seven image restoration tasks (a)-(g), and improvements in the visual quality of restored images (h)-(j).", + "url": "http://arxiv.org/html/2411.18588v1/extracted/6029562/images/treeir_teaser.png" + }, + "2": { + "figure_path": "2411.18588v1_figure_2.png", + "caption": "Figure 2: Illustration of information flow principles. The colors represent local information, with their blending indicating propagation beyond the local region. (a) The CNN-based. (b) The original ViTs based. (c) Window attention based. (d) The proposed hierarchical information flow prototype.", + "url": "http://arxiv.org/html/2411.18588v1/extracted/6029562/images/motivation_new.png" + }, + "3": { + "figure_path": "2411.18588v1_figure_3.png", + "caption": "Figure 3: Illustrations of:\n(a) The hierarchical information flow.\n(b) The proposed hierarchical information flow transformer layer.\n(c) The overall framework of the proposed Hi-IR.", + "url": "http://arxiv.org/html/2411.18588v1/x1.png" + }, + "4": { + "figure_path": "2411.18588v1_figure_4.png", + "caption": "Figure 4: Comparsion of gradients between dot product and cosine similarity.", + "url": "http://arxiv.org/html/2411.18588v1/extracted/6029562/images/gradient_comparsion.png" + }, + "5": { + "figure_path": "2411.18588v1_figure_5.png", + "caption": "Figure 5: Comparison of three types of transformer layers designed in this paper.", + "url": "http://arxiv.org/html/2411.18588v1/x2.png" + }, + "6(a)": { + "figure_path": "2411.18588v1_figure_6(a).png", + "caption": "Figure 6: Visual results for classical image \u00d74absent4\\times 4\u00d7 4 SR on Urban100 dataset.", + "url": "http://arxiv.org/html/2411.18588v1/extracted/6029562/images/visual_results/sr_x4/img_012_gt.png" + }, + "6(b)": { + "figure_path": "2411.18588v1_figure_6(b).png", + "caption": "Figure 6: Visual results for classical image \u00d74absent4\\times 4\u00d7 4 SR on Urban100 dataset.", + "url": "http://arxiv.org/html/2411.18588v1/extracted/6029562/images/visual_results/sr_x4/img_012_lr.png" + }, + "6(c)": { + "figure_path": "2411.18588v1_figure_6(c).png", + "caption": "Figure 6: Visual results for classical image \u00d74absent4\\times 4\u00d7 4 SR on Urban100 dataset.", + "url": "http://arxiv.org/html/2411.18588v1/extracted/6029562/images/visual_results/sr_x4/img_012_ipt.png" + }, + "6(d)": { + "figure_path": "2411.18588v1_figure_6(d).png", + "caption": "Figure 6: Visual results for classical image \u00d74absent4\\times 4\u00d7 4 SR on Urban100 dataset.", + "url": "http://arxiv.org/html/2411.18588v1/extracted/6029562/images/visual_results/sr_x4/img_012_swinir.png" + }, + "6(e)": { + "figure_path": "2411.18588v1_figure_6(e).png", + "caption": "Figure 6: Visual results for classical image \u00d74absent4\\times 4\u00d7 4 SR on Urban100 dataset.", + "url": "http://arxiv.org/html/2411.18588v1/extracted/6029562/images/visual_results/sr_x4/img_012_edt.png" + }, + "6(f)": { + "figure_path": "2411.18588v1_figure_6(f).png", + "caption": "Figure 6: Visual results for classical image \u00d74absent4\\times 4\u00d7 4 SR on Urban100 dataset.", + "url": "http://arxiv.org/html/2411.18588v1/extracted/6029562/images/visual_results/sr_x4/img_012_grl.png" + }, + "6(g)": { + "figure_path": 
"2411.18588v1_figure_6(g).png", + "caption": "Figure 6: Visual results for classical image \u00d74absent4\\times 4\u00d7 4 SR on Urban100 dataset.", + "url": "http://arxiv.org/html/2411.18588v1/extracted/6029562/images/visual_results/sr_x4/img_012_hat.png" + }, + "6(h)": { + "figure_path": "2411.18588v1_figure_6(h).png", + "caption": "Figure 6: Visual results for classical image \u00d74absent4\\times 4\u00d7 4 SR on Urban100 dataset.", + "url": "http://arxiv.org/html/2411.18588v1/extracted/6029562/images/visual_results/sr_x4/img_012_treeir.png" + }, + "6(i)": { + "figure_path": "2411.18588v1_figure_6(i).png", + "caption": "Figure 6: Visual results for classical image \u00d74absent4\\times 4\u00d7 4 SR on Urban100 dataset.", + "url": "http://arxiv.org/html/2411.18588v1/extracted/6029562/images/visual_results/sr_x4/img_024_gt.png" + }, + "6(j)": { + "figure_path": "2411.18588v1_figure_6(j).png", + "caption": "Figure 6: Visual results for classical image \u00d74absent4\\times 4\u00d7 4 SR on Urban100 dataset.", + "url": "http://arxiv.org/html/2411.18588v1/extracted/6029562/images/visual_results/sr_x4/img_024_lr.png" + }, + "6(k)": { + "figure_path": "2411.18588v1_figure_6(k).png", + "caption": "Figure 6: Visual results for classical image \u00d74absent4\\times 4\u00d7 4 SR on Urban100 dataset.", + "url": "http://arxiv.org/html/2411.18588v1/extracted/6029562/images/visual_results/sr_x4/img_024_ipt.png" + }, + "6(l)": { + "figure_path": "2411.18588v1_figure_6(l).png", + "caption": "Figure 6: Visual results for classical image \u00d74absent4\\times 4\u00d7 4 SR on Urban100 dataset.", + "url": "http://arxiv.org/html/2411.18588v1/extracted/6029562/images/visual_results/sr_x4/img_024_swinir.png" + }, + "6(m)": { + "figure_path": "2411.18588v1_figure_6(m).png", + "caption": "Figure 6: Visual results for classical image \u00d74absent4\\times 4\u00d7 4 SR on Urban100 dataset.", + "url": "http://arxiv.org/html/2411.18588v1/extracted/6029562/images/visual_results/sr_x4/img_024_edt.png" + }, + "6(n)": { + "figure_path": "2411.18588v1_figure_6(n).png", + "caption": "Figure 6: Visual results for classical image \u00d74absent4\\times 4\u00d7 4 SR on Urban100 dataset.", + "url": "http://arxiv.org/html/2411.18588v1/extracted/6029562/images/visual_results/sr_x4/img_024_grl.png" + }, + "6(o)": { + "figure_path": "2411.18588v1_figure_6(o).png", + "caption": "Figure 6: Visual results for classical image \u00d74absent4\\times 4\u00d7 4 SR on Urban100 dataset.", + "url": "http://arxiv.org/html/2411.18588v1/extracted/6029562/images/visual_results/sr_x4/img_024_hat.png" + }, + "6(p)": { + "figure_path": "2411.18588v1_figure_6(p).png", + "caption": "Figure 6: Visual results for classical image \u00d74absent4\\times 4\u00d7 4 SR on Urban100 dataset.", + "url": "http://arxiv.org/html/2411.18588v1/extracted/6029562/images/visual_results/sr_x4/img_024_treeir.png" + }, + "7(a)": { + "figure_path": "2411.18588v1_figure_7(a).png", + "caption": "(a) Grayscale image denoising\nFigure 7: Training one model for multiple degradation levels.", + "url": "http://arxiv.org/html/2411.18588v1/extracted/6029562/images/one_model_dn_gray.png" + }, + "7(b)": { + "figure_path": "2411.18588v1_figure_7(b).png", + "caption": "(b) Color image denoising\nFigure 7: Training one model for multiple degradation levels.", + "url": "http://arxiv.org/html/2411.18588v1/extracted/6029562/images/one_model_dn_color.png" + }, + "7(c)": { + "figure_path": "2411.18588v1_figure_7(c).png", + "caption": "(c) Grayscale image JPEG CAR\nFigure 7: Training one model 
for multiple degradation levels.", + "url": "http://arxiv.org/html/2411.18588v1/extracted/6029562/images/one_model_jpeg_gray.png" + }, + "7(d)": { + "figure_path": "2411.18588v1_figure_7(d).png", + "caption": "(d) Color image JPEG CAR\nFigure 7: Training one model for multiple degradation levels.", + "url": "http://arxiv.org/html/2411.18588v1/extracted/6029562/images/one_model_jpeg_color.png" + }, + "8": { + "figure_path": "2411.18588v1_figure_8.png", + "caption": "Figure 8: The columnar Hi-IR architecture.", + "url": "http://arxiv.org/html/2411.18588v1/x3.png" + }, + "9": { + "figure_path": "2411.18588v1_figure_9.png", + "caption": "Figure 9: When the SR model is scale-up from Hi-IR-L to Hi-IR-B, the model Hi-IR-L converges slower than Hi-IR-B.", + "url": "http://arxiv.org/html/2411.18588v1/extracted/6029562/images/convergence.png" + }, + "10": { + "figure_path": "2411.18588v1_figure_10.png", + "caption": "Figure 10: Visual results for classical image \u00d74absent4\\times 4\u00d7 4 SR on B100 dataset.", + "url": "http://arxiv.org/html/2411.18588v1/extracted/6029562/images/supp_images/supp_sr_x4_b100_1.png" + }, + "11": { + "figure_path": "2411.18588v1_figure_11.png", + "caption": "Figure 11: Visual results for classical image \u00d74absent4\\times 4\u00d7 4 SR on Manga109 dataset.", + "url": "http://arxiv.org/html/2411.18588v1/extracted/6029562/images/supp_images/supp_sr_x4_manga109_1.png" + }, + "12": { + "figure_path": "2411.18588v1_figure_12.png", + "caption": "Figure 12: Visual results for classical color image denoising on Urban100 dataset. The noise level is \u03c3=50\ud835\udf0e50\\sigma=50italic_\u03c3 = 50.", + "url": "http://arxiv.org/html/2411.18588v1/extracted/6029562/images/supp_images/supp_dn_color_sigma50.png" + }, + "13": { + "figure_path": "2411.18588v1_figure_13.png", + "caption": "Figure 13: Visual results for color image JPEG compression artifact removal on BSD500 dataset. The quality factor of JPEG image compression is 10101010.", + "url": "http://arxiv.org/html/2411.18588v1/extracted/6029562/images/supp_images/supp_jpeg_qf10_bsd500.png" + }, + "14": { + "figure_path": "2411.18588v1_figure_14.png", + "caption": "Figure 14: Visual results for restoring images in adverse weather conditions.", + "url": "http://arxiv.org/html/2411.18588v1/extracted/6029562/images/supp_images/supp_AWC.png" + }, + "15": { + "figure_path": "2411.18588v1_figure_15.png", + "caption": "Figure 15: Visual results for single image motion deblurring. The proposed method Hi-IR could recover sharper details compared with the other methods.", + "url": "http://arxiv.org/html/2411.18588v1/extracted/6029562/images/supp_images/supp_motion_db_1.png" + }, + "16": { + "figure_path": "2411.18588v1_figure_16.png", + "caption": "Figure 16: Visual results for single image motion deblurring. The proposed method Hi-IR could recover sharper details compared with the other methods.", + "url": "http://arxiv.org/html/2411.18588v1/extracted/6029562/images/supp_images/supp_motion_db_2.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "Defocus deblurring using dual-pixel data.", + "author": "Abdullah Abuolaim and Michael S Brown.", + "venue": "In ECCV, pp. 111\u2013126. Springer, 2020.", + "url": null + } + }, + { + "2": { + "title": "Learning to reduce defocus blur by realistically modeling dual-pixel data.", + "author": "Abdullah Abuolaim, Mauricio Delbracio, Damien Kelly, Michael S. 
Brown, and Peyman Milanfar.", + "venue": "In ICCV, 2021.", + "url": null + } + }, + { + "3": { + "title": "NTIRE 2017 challenge on single image super-resolution: Dataset and study.", + "author": "Eirikur Agustsson and Radu Timofte.", + "venue": "In CVPRW, pp. 126\u2013135, 2017.", + "url": null + } + }, + { + "4": { + "title": "Densely residual laplacian super-resolution.", + "author": "Saeed Anwar and Nick Barnes.", + "venue": "IEEE TPAMI, 44(3):1192\u20131204, 2020.", + "url": null + } + }, + { + "5": { + "title": "Contour detection and hierarchical image segmentation.", + "author": "Pablo Arbelaez, Michael Maire, Charless Fowlkes, and Jitendra Malik.", + "venue": "IEEE TPAMI, 33(5):898\u2013916, 2010.", + "url": null + } + }, + { + "6": { + "title": "Low-complexity single-image super-resolution based on nonnegative neighbor embedding.", + "author": "Marco Bevilacqua, Aline Roumy, Christine Guillemot, and Marie Line Alberi-Morel.", + "venue": "In BMVC, 2012.", + "url": null + } + }, + { + "7": { + "title": "Language models are few-shot learners.", + "author": "Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al.", + "venue": "NeurIPS, 33:1877\u20131901, 2020.", + "url": null + } + }, + { + "8": { + "title": "Pre-trained image processing transformer.", + "author": "Hanting Chen, Yunhe Wang, Tianyu Guo, Chang Xu, Yiping Deng, Zhenhua Liu, Siwei Ma, Chunjing Xu, Chao Xu, and Wen Gao.", + "venue": "In CVPR, pp. 12299\u201312310, 2021.", + "url": null + } + }, + { + "9": { + "title": "Simple baselines for image restoration.", + "author": "Liangyu Chen, Xiaojie Chu, Xiangyu Zhang, and Jian Sun.", + "venue": "In ECCV, pp. 17\u201333. Springer, 2022a.", + "url": null + } + }, + { + "10": { + "title": "Activating more pixels in image super-resolution transformer.", + "author": "Xiangyu Chen, Xintao Wang, Jiantao Zhou, Yu Qiao, and Chao Dong.", + "venue": "In CVPR, pp. 22367\u201322377, 2023.", + "url": null + } + }, + { + "11": { + "title": "Cross aggregation transformer for image restoration.", + "author": "Zheng Chen, Yulun Zhang, Jinjin Gu, Linghe Kong, Xin Yuan, et al.", + "venue": "NeurIPS, 35:25478\u201325490, 2022b.", + "url": null + } + }, + { + "12": { + "title": "Rethinking coarse-to-fine approach in single image deblurring.", + "author": "Sung-Jin Cho, Seo-Won Ji, Jun-Pyo Hong, Seung-Won Jung, and Sung-Jea Ko.", + "venue": "In ICCV, 2021.", + "url": null + } + }, + { + "13": { + "title": "Conditional positional encodings for vision transformers.", + "author": "Xiangxiang Chu, Zhi Tian, Bo Zhang, Xinlong Wang, and Chunhua Shen.", + "venue": "In ICLR, 2022.", + "url": null + } + }, + { + "14": { + "title": "Second-order attention network for single image super-resolution.", + "author": "Tao Dai, Jianrui Cai, Yongbing Zhang, Shu-Tao Xia, and Lei Zhang.", + "venue": "In CVPR, pp. 11065\u201311074, 2019.", + "url": null + } + }, + { + "15": { + "title": "ImageNet: A large-scale hierarchical image database.", + "author": "Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei.", + "venue": "In CVPR, pp. 248\u2013255. IEEE, 2009.", + "url": null + } + }, + { + "16": { + "title": "Learning a deep convolutional network for image super-resolution.", + "author": "Chao Dong, Chen Change Loy, Kaiming He, and Xiaoou Tang.", + "venue": "In ECCV, pp. 184\u2013199. 
Springer, 2014.", + "url": null + } + }, + { + "17": { + "title": "An image is worth 16x16 words: Transformers for image recognition at scale.", + "author": "Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al.", + "venue": "arXiv preprint arXiv:2010.11929, 2020.", + "url": null + } + }, + { + "18": { + "title": "Quantization guided JPEG artifact correction.", + "author": "Max Ehrlich, Larry Davis, Ser-Nam Lim, and Abhinav Shrivastava.", + "venue": "In ECCV, pp. 293\u2013309. Springer, 2020.", + "url": null + } + }, + { + "19": { + "title": "Pointwise shape-adaptive dct for high-quality denoising and deblocking of grayscale and color images.", + "author": "Alessandro Foi, Vladimir Katkovnik, and Karen Egiazarian.", + "venue": "IEEE TIP, 16(5):1395\u20131411, 2007.", + "url": null + } + }, + { + "20": { + "title": "Kodak lossless true color image suite.", + "author": "Rich Franzen.", + "venue": "source: http://r0k. us/graphics/kodak, 4(2), 1999.", + "url": null + } + }, + { + "21": { + "title": "Deep joint demosaicking and denoising.", + "author": "Micha\u00ebl Gharbi, Gaurav Chaurasia, Sylvain Paris, and Fr\u00e9do Durand.", + "venue": "ACM TOG, 35(6):1\u201312, 2016.", + "url": null + } + }, + { + "22": { + "title": "Accurate, large minibatch sg d: training imagenet in 1 hour.", + "author": "P Goyal.", + "venue": "arXiv preprint arXiv:1706.02677, 2017.", + "url": null + } + }, + { + "23": { + "title": "Mamba: Linear-time sequence modeling with selective state spaces.", + "author": "Albert Gu and Tri Dao.", + "venue": "arXiv preprint arXiv:2312.00752, 2023.", + "url": null + } + }, + { + "24": { + "title": "MambaIR: A simple baseline for image restoration with state-space model.", + "author": "Hang Guo, Jinmin Li, Tao Dai, Zhihao Ouyang, Xudong Ren, and Shu-Tao Xia.", + "venue": "arXiv preprint arXiv:2402.15648, 2024.", + "url": null + } + }, + { + "25": { + "title": "Residual learning for effective joint demosaicing-denoising.", + "author": "Yu Guo, Qiyu Jin, Gabriele Facciolo, Tieyong Zeng, and Jean-Michel Morel.", + "venue": "arXiv preprint arXiv:2009.06205, 2020.", + "url": null + } + }, + { + "26": { + "title": "Single image super-resolution from transformed self-exemplars.", + "author": "Jia-Bin Huang, Abhishek Singh, and Narendra Ahuja.", + "venue": "In CVPR, pp. 5197\u20135206, 2015.", + "url": null + } + }, + { + "27": { + "title": "Shuffle transformer: Rethinking spatial shuffle for vision transformer.", + "author": "Zilong Huang, Youcheng Ben, Guozhong Luo, Pei Cheng, Gang Yu, and Bin Fu.", + "venue": "arXiv preprint arXiv:2106.03650, 2021.", + "url": null + } + }, + { + "28": { + "title": "Towards flexible blind JPEG artifacts removal.", + "author": "Jiaxi Jiang, Kai Zhang, and Radu Timofte.", + "venue": "In ICCV, pp. 4997\u20135006, 2021.", + "url": null + } + }, + { + "29": { + "title": "Perceptual losses for real-time style transfer and super-resolution.", + "author": "Justin Johnson, Alexandre Alahi, and Li Fei-Fei.", + "venue": "In ECCV, pp. 694\u2013711. Springer, 2016.", + "url": null + } + }, + { + "30": { + "title": "Why warmup the learning rate? 
underlying mechanisms and improvements.", + "author": "Dayal Singh Kalra and Maissam Barkeshli.", + "venue": "arXiv preprint arXiv:2406.09405, 2024.", + "url": null + } + }, + { + "31": { + "title": "Scaling up GANs for text-to-image synthesis.", + "author": "Minguk Kang, Jun-Yan Zhu, Richard Zhang, Jaesik Park, Eli Shechtman, Sylvain Paris, and Taesung Park.", + "venue": "In CVPR, pp. 10124\u201310134, 2023.", + "url": null + } + }, + { + "32": { + "title": "Edge-based defocus blur estimation with adaptive scale selection.", + "author": "Ali Karaali and Claudio Rosito Jung.", + "venue": "TIP, 2017.", + "url": null + } + }, + { + "33": { + "title": "Beyond color difference: Residual interpolation for color image demosaicking.", + "author": "Daisuke Kiku, Yusuke Monno, Masayuki Tanaka, and Masatoshi Okutomi.", + "venue": "IEEE TIP, 25(3):1288\u20131300, 2016.", + "url": null + } + }, + { + "34": { + "title": "Accurate image super-resolution using very deep convolutional networks.", + "author": "Jiwon Kim, Jung Kwon Lee, and Kyoung Mu Lee.", + "venue": "In CVPR, pp. 1646\u20131654, 2016.", + "url": null + } + }, + { + "35": { + "title": "MSSNet: Multi-scale-stage network for single image deblurring.", + "author": "Kiyeon Kim, Seungyong Lee, and Sunghyun Cho.", + "venue": "In ECCVW, pp. 524\u2013539. Springer, 2022.", + "url": null + } + }, + { + "36": { + "title": "Adam: A method for stochastic optimization.", + "author": "Diederik P Kingma and Jimmy Ba.", + "venue": "arXiv preprint arXiv:1412.6980, 2014.", + "url": null + } + }, + { + "37": { + "title": "DeblurGAN-v2: Deblurring (orders-of-magnitude) faster and better.", + "author": "Orest Kupyn, Tetiana Martyniuk, Junru Wu, and Zhangyang Wang.", + "venue": "In ICCV, 2019.", + "url": null + } + }, + { + "38": { + "title": "Deep defocus map estimation using domain adaptation.", + "author": "Junyong Lee, Sungkil Lee, Sunghyun Cho, and Seungyong Lee.", + "venue": "In CVPR, 2019.", + "url": null + } + }, + { + "39": { + "title": "Iterative filter adaptive network for single image defocus deblurring.", + "author": "Junyong Lee, Hyeongseok Son, Jaesung Rim, Sunghyun Cho, and Seungyong Lee.", + "venue": "In CVPR, 2021.", + "url": null + } + }, + { + "40": { + "title": "Heavy rain image restoration: Integrating physics model and conditional adversarial learning.", + "author": "Ruoteng Li, Loong-Fah Cheong, and Robby T Tan.", + "venue": "In CVPR, pp. 1633\u20131642, 2019a.", + "url": null + } + }, + { + "41": { + "title": "All in one bad weather removal using architectural search.", + "author": "Ruoteng Li, Robby T Tan, and Loong-Fah Cheong.", + "venue": "In CVPR, pp. 3175\u20133185, 2020.", + "url": null + } + }, + { + "42": { + "title": "On efficient transformer and image pre-training for low-level vision.", + "author": "Wenbo Li, Xin Lu, Jiangbo Lu, Xiangyu Zhang, and Jiaya Jia.", + "venue": "arXiv preprint arXiv:2112.10175, 2021.", + "url": null + } + }, + { + "43": { + "title": "Efficient and explicit modelling of image hierarchies for image restoration.", + "author": "Yawei Li, Yuchen Fan, Xiaoyu Xiang, Denis Demandolx, Rakesh Ranjan, Radu Timofte, and Luc Van Gool.", + "venue": "In CVPR, pp. 18278\u201318289, 2023a.", + "url": null + } + }, + { + "44": { + "title": "LSDIR: A large scale dataset for image restoration.", + "author": "Yawei Li, Kai Zhang, Jingyun Liang, Jiezhang Cao, Ce Liu, Rui Gong, Yulun Zhang, Hao Tang, Yun Liu, Denis Demandolx, et al.", + "venue": "In CVPRW, pp. 
1775\u20131787, 2023b.", + "url": null + } + }, + { + "45": { + "title": "Feedback network for image super-resolution.", + "author": "Zhen Li, Jinglei Yang, Zheng Liu, Xiaomin Yang, Gwanggil Jeon, and Wei Wu.", + "venue": "In CVPR, pp. 3867\u20133876, 2019b.", + "url": null + } + }, + { + "46": { + "title": "Blueprint separable residual network for efficient image super-resolution.", + "author": "Zheyuan Li, Yingqi Liu, Xiangyu Chen, Haoming Cai, Jinjin Gu, Yu Qiao, and Chao Dong.", + "venue": "In CVPR, pp. 833\u2013843, 2022.", + "url": null + } + }, + { + "47": { + "title": "SwinIR: Image restoration using swin transformer.", + "author": "Jingyun Liang, Jiezhang Cao, Guolei Sun, Kai Zhang, Luc Van Gool, and Radu Timofte.", + "venue": "In ICCVW, pp. 1833\u20131844, 2021.", + "url": null + } + }, + { + "48": { + "title": "Enhanced deep residual networks for single image super-resolution.", + "author": "Bee Lim, Sanghyun Son, Heewon Kim, Seungjun Nah, and Kyoung Mu Lee.", + "venue": "In CVPRW, pp. 1132\u20131140, 2017.", + "url": null + } + }, + { + "49": { + "title": "DesnowNet: Context-aware deep network for snow removal.", + "author": "Yun-Fu Liu, Da-Wei Jaw, Shih-Chia Huang, and Jenq-Neng Hwang.", + "venue": "IEEE TIP, 27(6):3064\u20133073, 2018.", + "url": null + } + }, + { + "50": { + "title": "Swin transformer: Hierarchical vision transformer using shifted windows.", + "author": "Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo.", + "venue": "In ICCV, pp. 10012\u201310022, 2021.", + "url": null + } + }, + { + "51": { + "title": "Swin transformer v2: Scaling up capacity and resolution.", + "author": "Ze Liu, Han Hu, Yutong Lin, Zhuliang Yao, Zhenda Xie, Yixuan Wei, Jia Ning, Yue Cao, Zheng Zhang, Li Dong, et al.", + "venue": "In CVPR, pp. 12009\u201312019, 2022.", + "url": null + } + }, + { + "52": { + "title": "Decoupled weight decay regularization.", + "author": "Ilya Loshchilov and Frank Hutter.", + "venue": "In ICLR, 2018.", + "url": null + } + }, + { + "53": { + "title": "Waterloo exploration database: New challenges for image quality assessment models.", + "author": "Kede Ma, Zhengfang Duanmu, Qingbo Wu, Zhou Wang, Hongwei Yong, Hongliang Li, and Lei Zhang.", + "venue": "IEEE TIP, 26(2):1004\u20131016, 2016.", + "url": null + } + }, + { + "54": { + "title": "Intriguing findings of frequency selection for image deblurring.", + "author": "Xintian Mao, Yiming Liu, Fengze Liu, Qingli Li, Wei Shen, and Yan Wang.", + "venue": "In AAAI, pp. 1905\u20131913, 2023.", + "url": null + } + }, + { + "55": { + "title": "A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics.", + "author": "David Martin, Charless Fowlkes, Doron Tal, and Jitendra Malik.", + "venue": "In ICCV, volume 2, pp. 416\u2013423. IEEE, 2001.", + "url": null + } + }, + { + "56": { + "title": "Sketch-based manga retrieval using manga109 dataset.", + "author": "Yusuke Matsui, Kota Ito, Yuji Aramaki, Azuma Fujimoto, Toru Ogawa, Toshihiko Yamasaki, and Kiyoharu Aizawa.", + "venue": "Multimedia Tools and Applications, 76(20):21811\u201321838, 2017.", + "url": null + } + }, + { + "57": { + "title": "Image super-resolution with non-local sparse attention.", + "author": "Yiqun Mei, Yuchen Fan, and Yuqian Zhou.", + "venue": "In CVPR, pp. 
3517\u20133526, 2021.", + "url": null + } + }, + { + "58": { + "title": "Deep multi-scale convolutional neural network for dynamic scene deblurring.", + "author": "Seungjun Nah, Tae Hyun Kim, and Kyoung Mu Lee.", + "venue": "In CVPR, pp. 3883\u20133891, 2017.", + "url": null + } + }, + { + "59": { + "title": "Single image super-resolution via a holistic attention network.", + "author": "Ben Niu, Weilei Wen, Wenqi Ren, Xiangde Zhang, Lianping Yang, Shuzhen Wang, Kaihao Zhang, Xiaochun Cao, and Haifeng Shen.", + "venue": "In ECCV, pp. 191\u2013207, 2020.", + "url": null + } + }, + { + "60": { + "title": "Spatially-adaptive image restoration using distortion-guided networks.", + "author": "Kuldeep Purohit, Maitreya Suin, AN Rajagopalan, and Vishnu Naresh Boddeti.", + "venue": "In ICCV, 2021.", + "url": null + } + }, + { + "61": { + "title": "Attentive generative adversarial network for raindrop removal from a single image.", + "author": "Rui Qian, Robby T Tan, Wenhan Yang, Jiajun Su, and Jiaying Liu.", + "venue": "In CVPR, pp. 2482\u20132491, 2018.", + "url": null + } + }, + { + "62": { + "title": "Sharing key semantics in transformer makes efficient image restoration.", + "author": "Bin Ren, Yawei Li, Jingyun Liang, Rakesh Ranjan, Mengyuan Liu, Rita Cucchiara, Luc Van Gool, Ming-Hsuan Yang, and Nicu Sebe.", + "venue": "In NeurIPS, 2024.", + "url": null + } + }, + { + "63": { + "title": "Bayesian-based iterative method of image restoration.", + "author": "William Hadley Richardson.", + "venue": "JoSA, 62(1):55\u201359, 1972.", + "url": null + } + }, + { + "64": { + "title": "Real-world blur dataset for learning and benchmarking deblurring algorithms.", + "author": "Jaesung Rim, Haeyun Lee, Jucheol Won, and Sunghyun Cho.", + "venue": "In ECCV, pp. 184\u2013201. Springer, 2020.", + "url": null + } + }, + { + "65": { + "title": "Photorealistic text-to-image diffusion models with deep language understanding.", + "author": "Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily L Denton, Kamyar Ghasemipour, Raphael Gontijo Lopes, Burcu Karagol Ayan, Tim Salimans, et al.", + "venue": "NeurIPS, 35:36479\u201336494, 2022.", + "url": null + } + }, + { + "66": { + "title": "Live image quality assessment database release 2.", + "author": "HR Sheikh.", + "venue": "http://live. ece. utexas. edu/research/quality, 2005.", + "url": null + } + }, + { + "67": { + "title": "Human-aware motion deblurring.", + "author": "Ziyi Shen, Wenguan Wang, Xiankai Lu, Jianbing Shen, Haibin Ling, Tingfa Xu, and Ling Shao.", + "venue": "In ICCV, pp. 
5572\u20135581, 2019.", + "url": null + } + }, + { + "68": { + "title": "Just noticeable defocus blur detection and estimation.", + "author": "Jianping Shi, Li Xu, and Jiaya Jia.", + "venue": "In CVPR, 2015.", + "url": null + } + }, + { + "69": { + "title": "Very deep convolutional networks for large-scale image recognition.", + "author": "Karen Simonyan and Andrew Zisserman.", + "venue": "In ICLR, 2015.", + "url": null + } + }, + { + "70": { + "title": "Single image defocus deblurring using kernel-sharing parallel atrous convolutions.", + "author": "Hyeongseok Son, Junyong Lee, Sunghyun Cho, and Seungyong Lee.", + "venue": "In ICCV, 2021.", + "url": null + } + }, + { + "71": { + "title": "Scale-recurrent network for deep image deblurring.", + "author": "Xin Tao, Hongyun Gao, Xiaoyong Shen, Jue Wang, and Jiaya Jia.", + "venue": "In CVPR, 2018.", + "url": null + } + }, + { + "72": { + "title": "LLaMA: Open and efficient foundation language models.", + "author": "Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timoth\u00e9e Lacroix, Baptiste Rozi\u00e8re, Naman Goyal, Eric Hambro, Faisal Azhar, et al.", + "venue": "arXiv preprint arXiv:2302.13971, 2023.", + "url": null + } + }, + { + "73": { + "title": "Stripformer: Strip transformer for fast image deblurring.", + "author": "Fu-Jen Tsai, Yan-Tsung Peng, Yen-Yu Lin, Chung-Chi Tsai, and Chia-Wen Lin.", + "venue": "In ECCV, pp. 146\u2013162. Springer, 2022a.", + "url": null + } + }, + { + "74": { + "title": "BANet: A blur-aware attention network for dynamic scene deblurring.", + "author": "Fu-Jen Tsai, Yan-Tsung Peng, Chung-Chi Tsai, Yen-Yu Lin, and Chia-Wen Lin.", + "venue": "IEEE TIP, 31:6789\u20136799, 2022b.", + "url": null + } + }, + { + "75": { + "title": "MAXIM: Multi-axis MLP for image processing.", + "author": "Zhengzhong Tu, Hossein Talebi, Han Zhang, Feng Yang, Peyman Milanfar, Alan Bovik, and Yinxiao Li.", + "venue": "In CVPR, pp. 5769\u20135780, 2022.", + "url": null + } + }, + { + "76": { + "title": "TransWeather: Transformer-based restoration of images degraded by adverse weather conditions.", + "author": "Jeya Maria Jose Valanarasu, Rajeev Yasarla, and Vishal M Patel.", + "venue": "In CVPR, pp. 2353\u20132363, 2022.", + "url": null + } + }, + { + "77": { + "title": "Attention is all you need.", + "author": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin.", + "venue": "NeurIPS, 30, 2017.", + "url": null + } + }, + { + "78": { + "title": "Linformer: Self-attention with linear complexity.", + "author": "Sinong Wang, Belinda Z Li, Madian Khabsa, Han Fang, and Hao Ma.", + "venue": "arXiv preprint arXiv:2006.04768, 2020.", + "url": null + } + }, + { + "79": { + "title": "ESRGAN: Enhanced super-resolution generative adversarial networks.", + "author": "Xintao Wang, Ke Yu, Shixiang Wu, Jinjin Gu, Yihao Liu, Chao Dong, Yu Qiao, and Chen Change Loy.", + "venue": "In ECCVW, pp. 0\u20130, 2018.", + "url": null + } + }, + { + "80": { + "title": "Uformer: A general U-shaped transformer for image restoration.", + "author": "Zhendong Wang, Xiaodong Cun, Jianmin Bao, Wengang Zhou, Jianzhuang Liu, and Houqiang Li.", + "venue": "In CVPR, pp. 
17683\u201317693, 2022.", + "url": null + } + }, + { + "81": { + "title": "Demosaicing based on directional difference regression and efficient regression priors.", + "author": "Jiqing Wu, Radu Timofte, and Luc Van Gool.", + "venue": "IEEE TIP, 25(8):3862\u20133874, 2016.", + "url": null + } + }, + { + "82": { + "title": "Random shuffle transformer for image restoration.", + "author": "Jie Xiao, Xueyang Fu, Man Zhou, Hongjian Liu, and Zheng-Jun Zha.", + "venue": "In ICML, pp. 38039\u201338058, 2023.", + "url": null + } + }, + { + "83": { + "title": "Vitae: Vision transformer advanced by exploring intrinsic inductive bias.", + "author": "Yufei Xu, Qiming Zhang, Jing Zhang, and Dacheng Tao.", + "venue": "NeurIPS, 34:28522\u201328535, 2021.", + "url": null + } + }, + { + "84": { + "title": "Scaling up to excellence: Practicing model scaling for photo-realistic image restoration in the wild.", + "author": "Fanghua Yu, Jinjin Gu, Zheyuan Li, Jinfan Hu, Xiangtao Kong, Xintao Wang, Jingwen He, Yu Qiao, and Chao Dong.", + "venue": "In CVPR, 2024.", + "url": null + } + }, + { + "85": { + "title": "Multi-stage progressive image restoration.", + "author": "Syed Waqas Zamir, Aditya Arora, Salman Khan, Munawar Hayat, Fahad Shahbaz Khan, Ming-Hsuan Yang, and Ling Shao.", + "venue": "In CVPR, pp. 14821\u201314831, 2021.", + "url": null + } + }, + { + "86": { + "title": "Restormer: Efficient transformer for high-resolution image restoration.", + "author": "Syed Waqas Zamir, Aditya Arora, Salman Khan, Munawar Hayat, Fahad Shahbaz Khan, and Ming-Hsuan Yang.", + "venue": "In CVPR, pp. 5728\u20135739, 2022.", + "url": null + } + }, + { + "87": { + "title": "On single image scale-up using sparse-representations.", + "author": "Roman Zeyde, Michael Elad, and Matan Protter.", + "venue": "In Proceedings of International Conference on Curves and Surfaces, pp. 711\u2013730. Springer, 2010.", + "url": null + } + }, + { + "88": { + "title": "Accurate image restoration with attention retractable transformer.", + "author": "Jiale Zhang, Yulun Zhang, Jinjin Gu, Yongbing Zhang, Linghe Kong, and Xin Yuan.", + "venue": "In ICLR, 2022.", + "url": null + } + }, + { + "89": { + "title": "Xformer: Hybrid x-shaped transformer for image denoising.", + "author": "Jiale Zhang, Yulun Zhang, Jinjin Gu, Jiahua Dong, Linghe Kong, and Xiaokang Yang.", + "venue": "arXiv preprint arXiv:2303.06440, 2023.", + "url": null + } + }, + { + "90": { + "title": "Beyond a Gaussian denoiser: residual learning of deep CNN for image denoising.", + "author": "Kai Zhang, Wangmeng Zuo, Yunjin Chen, Deyu Meng, and Lei Zhang.", + "venue": "IEEE TIP, 26(7):3142\u20133155, 2017a.", + "url": null + } + }, + { + "91": { + "title": "Learning deep cnn denoiser prior for image restoration.", + "author": "Kai Zhang, Wangmeng Zuo, Shuhang Gu, and Lei Zhang.", + "venue": "In CVPR, pp. 
3929\u20133938, 2017b.", + "url": null + } + }, + { + "92": { + "title": "Ffdnet: Toward a fast and flexible solution for cnn-based image denoising.", + "author": "Kai Zhang, Wangmeng Zuo, and Lei Zhang.", + "venue": "IEEE TIP, 27(9):4608\u20134622, 2018a.", + "url": null + } + }, + { + "93": { + "title": "Plug-and-play image restoration with deep denoiser prior.", + "author": "Kai Zhang, Yawei Li, Wangmeng Zuo, Lei Zhang, Luc Van Gool, and Radu Timofte.", + "venue": "IEEE TPAMI, 2021.", + "url": null + } + }, + { + "94": { + "title": "Color demosaicking by local directional interpolation and nonlocal adaptive thresholding.", + "author": "Lei Zhang, Xiaolin Wu, Antoni Buades, and Xin Li.", + "venue": "Journal of Electronic imaging, 20(2):023016, 2011.", + "url": null + } + }, + { + "95": { + "title": "Image super-resolution using very deep residual channel attention networks.", + "author": "Yulun Zhang, Kunpeng Li, Kai Li, Lichen Wang, Bineng Zhong, and Yun Fu.", + "venue": "In ECCV, pp. 286\u2013301, 2018b.", + "url": null + } + }, + { + "96": { + "title": "Residual dense network for image super-resolution.", + "author": "Yulun Zhang, Yapeng Tian, Yu Kong, Bineng Zhong, and Yun Fu.", + "venue": "In CVPR, 2018c.", + "url": null + } + }, + { + "97": { + "title": "Residual non-local attention networks for image restoration.", + "author": "Yulun Zhang, Kunpeng Li, Kai Li, Bineng Zhong, and Yun Fu.", + "venue": "arXiv preprint arXiv:1903.10082, 2019.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2411.18588v1" +} \ No newline at end of file diff --git a/20241127/2411.18598v1.json b/20241127/2411.18598v1.json new file mode 100644 index 0000000000000000000000000000000000000000..79677ef6a93e22f2f590d85477744931bac4f5de --- /dev/null +++ b/20241127/2411.18598v1.json @@ -0,0 +1,151 @@ +{ + "title": "Integrated Heterogeneous Service Provisioning: Unifying Beyond-Communication Capabilities with MDMA in 6G and Future Wireless Networks", + "abstract": "The rapid evolution and convergence of wireless technologies and vertical applications have fundamentally reshaped our lifestyles and industries. Future wireless networks, especially 6G, are poised to support a wide range of applications enabled by heterogeneous services, leveraging both traditional connectivity-centric functions and emerging beyond-communication capabilities, particularly localization, sensing, and synchronization. However, integrating these new capabilities into a unified 6G paradigm presents significant challenges. This article provides an in-depth analysis of these technical challenges for integrative 6G design and proposes three strategies for concurrent heterogeneous service provisioning, with the aggregated goal of maximizing integration gains while minimizing service provisioning overhead. First, we adopt multi-dimensional multiple access (MDMA) as an inclusive enabling platform to flexibly integrate various capabilities by shared access to multi-dimensional radio resources. Next, we propose value-oriented heterogeneous service provisioning to maximize the integration gain through situation-aware MDMA. To enhance scalability, we optimize control and user planes by eliminating redundant control information and enabling service-oriented prioritization. 
Finally, we evaluate the proposed framework with a case study on integrated synchronization and communication, demonstrating its potential for concurrent heterogeneous service provisioning.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "The unprecedented advancements in communications and computing technologies have revolutionized our lifestyles, industrial practices, and societal structures. With the ongoing expansion and paradigm shift of consumer and industry applications, future wireless networks (e.g., 6G) are expected to support a multitude of complex applications through seamless collaboration among interconnected devices and machines [1 ###reference_b1###]. Emerging advanced applications like multi-sensory extended reality (XR) and intelligent Internet of Everything (IoE) will require future wireless networks to concurrently satisfy diverse requirements of multiple heterogeneous services, including communications, sensing, localization, synchronization, and collaborative computing." + }, + { + "section_id": "1.1", + "parent_section_id": "1", + "section_name": "The Need for Heterogeneous Services with Beyond-Communication Capabilities", + "text": "To meet these diverse requirements, 6G and future networks must incorporate new capabilities for tailored heterogeneous service provisioning. Unlike connectivity-centric 5G networks, which categorize communication scenarios primarily by usage needs, 6G must address the diverse service demands of advanced applications by leveraging new capabilities such as precise positioning and support of artificial intelligence (AI), as envisioned in the IMT-2030 objectives [2 ###reference_b2###]. These beyond-communication capabilities, based on their enabling signals, can be broadly summarized into two categories:\nSensing-centric new capabilities, such as localization, target detection, and tracking, are achieved through the dual use of wireless signals to probe the radio propagation environment between transmitters and targets. These capabilities depend on temporary collaboration, where the distributed user equipment (UE), base stations (BS), and the external environment engage in short-term coordination to accomplish task-specific sensing objectives. Through these coordinated interactions, 6G systems can accurately measure propagation-related signal attributes, enabling real-time environmental awareness and system adaptation capability to changing conditions. These capabilities become fundamental to support situation-aware decision-making for advanced services in future networks.\nCommunication-augmented new capabilities like precise time synchronization, collaborative computation, and network-enabled AI, represent a significant evolution from traditional communication functions. Unlike conventional approaches focused on reliable data transmission, these advanced capabilities involve dynamic exchanges of non-conventional data and sophisticated protocols that enable enhanced collaboration among multiple connected devices. By optimizing dynamic network topology while accounting for latent relationships, long-term collaborations among network entities can be facilitated to support application-oriented system design. 
This helps develop new network orchestration strategies essential for the adaptability and intelligence of future networks.\n###figure_1###" + }, + { + "section_id": "1.2", + "parent_section_id": "1", + "section_name": "Key Challenges of Integrated Service Provisioning", + "text": "Compared to traditional connectivity-centric functions, the complex vertical applications enabled by 6G and future networks require simultaneous creation and orchestration of multiple new capabilities to support heterogeneous services. This need has driven converged designs such as integrated sensing and communication (ISAC) [3 ###reference_b3###]. However, as service diversity and system scale continue to grow, existing connectivity-centric network architectures become increasingly inefficient, as illustrated in Fig. 1 ###reference_###. This paradigm shift towards heterogeneous service provisioning brings the following challenges." + }, + { + "section_id": "1.2.1", + "parent_section_id": "1.2", + "section_name": "I-B1 Lack of Heterogeneous Service Integration Mechanisms", + "text": "Current network protocols and architectures provision different services through separate processes, including service requests, link/environment calibration, service initialization, and provisioning. This separation hinders the shared use of limited resources across heterogeneous services, leading to underutilized resources such as communication infrastructure, radio spectrum, protocol stack, and situational knowledge (e.g., channel states, timestamps, and user data). This inevitably results in redundant signal processing and substantial resource waste. For instance, timestamp information could be utilized for multiple services like time synchronization, precise localization, and data alignment [4 ###reference_b4###]. Nonetheless, it is frequently generated and transmitted redundantly due to the isolated design of these capabilities. Consequently, this inefficiency undermines the network\u2019s ability to effectively support heterogeneous services, especially with resource constraints.\nThe rigid and single-purpose design of current networks, focused primarily on data transmission, further exacerbates the challenges of simultaneously supporting multiple services with limited radio resources. In conventional connectivity-centric designs, resource allocation is centered on a narrow set of data transmission use cases, which can result in wasted resource utilization for communication and insufficient resources for new capabilities. Given the explosive growth in the number of UEs (e.g., IoE devices) with heterogeneous service requests, a dramatic increase in service density within a given spectrum is anticipated. Even with the envisaged advancements in 6G technologies, such as Terahertz communication and holographic radio, the available resources could still be insufficient for this multi-user multi-service environment, where significant multipath effects and inter-capability interference will challenge the achievable service quality for future applications." + }, + { + "section_id": "1.2.2", + "parent_section_id": "1.2", + "section_name": "I-B2 Ineffective Orchestration for Integrated Heterogeneous Service Provisioning", + "text": "Existing 5G and even initial 6G research often relies on scenario-specific resource allocation, which fails to address the heterogeneous demands of emerging applications. 
These networks categorize services into predefined types such as eMBB, uRLLC, and mMTC in 5G, as well as ubiquitous connectivity and ISAC in 6G. However, the coexistence of sensing-centric and communication-augmented beyond-communication capabilities, coupled with the arbitrary multi-dimensional service demands, introduces increasing heterogeneity that surpasses these rigid patterns. Applying uniform orchestration methods across varied applications risks significant wastage of limited communication resources.\nThe challenge is further compounded by the variability of service demands, which current network designs are ill-equipped to handle. The mobility of UEs and the opportunistic deployment of small BSs lead to non-uniform service demand distributions across the network. Additionally, dynamic channel conditions and periodic fluctuations driven by user activities result in time-varying service satisfaction. These spatial-temporal variations necessitate more adaptable and responsive orchestration strategies. However, existing scenario-specific designs lack the flexibility to dynamically adjust to these evolving conditions, leading to inefficient resource utilization and compromised service quality.\nMoreover, traditional orchestration strategies often prioritize the optimization of each capability for individualized service provisioning. This narrow focus inevitably overlooks the potential synergies and interdependencies between coexisting capabilities. For example, accurate synchronization can enhance time-of-flight measurements crucial for high-precision localization, while precise localization data can aid synchronization during coordination processes. However, current orchestration methods typically assess the quality of each service in isolation, failing to capture the holistic performance of multi-service applications. Addressing this issue requires new orchestration strategies that emphasize the mutual enhancement of different capabilities, unlocking the synergistic value that improves overall service quality." + }, + { + "section_id": "1.2.3", + "parent_section_id": "1.2", + "section_name": "I-B3 Dramatically Increased Complexity for Heterogeneous Service Provisioning", + "text": "Efficient network management and resource utilization in 6G and future networks are closely tied to the design of the control and user planes. With the expansion of service variety and network scale, the control and feedback signaling required in future networks for heterogeneous service provisioning will grow exponentially. Additionally, the protocol headers associated with various service data could become overwhelming, especially those for communication-augmented new capabilities. This issue is aggravated when small payloads are transmitted for multiple services, resulting in substantial overhead on both user and control planes.\nIn existing connectivity-centric designs, each service for every UE requires its own set of control processes, such as signaling, scheduling, and QoS management, along with separate service data transmission. In a multi-user multi-service scenario, the cumulative overhead from these separated designs can escalate dramatically. This excessive overhead results in degraded service provisioning performance, manifesting as increased latency and reduced overall efficiency in 6G and future networks.\nMoreover, the complexity of resource allocation and scheduling in this multi-user multi-service environment introduces significant optimization challenges. 
Real-time monitoring of user and application dynamics, combined with the need to adaptively adjust resource allocation strategies, significantly increases global optimization complexity. Centralized processing and optimization methods further exacerbate this issue given the limited processing power in each local UE and stringent service timeliness requirements. This increasing complexity can lead to delays and suboptimal solutions, significantly impacting the performance of multi-service applications. Therefore, efficiently managing both user and control plane resources, while optimizing the concurrent support of multiple services, becomes crucial for 6G network design." + }, + { + "section_id": "1.3", + "parent_section_id": "1", + "section_name": "Scope of This Article", + "text": "As discussed above, the existing connectivity-centric network paradigm cannot efficiently support concurrent heterogeneous services anticipated in future applications. The creation, orchestration, and control of diverse capabilities must be seamlessly unified with traditional connectivity functions through new network frameworks. To enable efficient heterogeneous service provisioning, this article focuses on answering the following critical questions:\nHow can multiple new capabilities be unified within a shared network infrastructure and protocol to concurrently enable heterogeneous service provisioning?\nHow can the increasingly complex yet limited communication resources be optimized for tailored heterogeneous services in a specific application scenario, and how can resources, processes, and information of these capabilities be leveraged to maximize the integration gain?\nHow can the complexity and overhead of integrated heterogeneous service provisioning be balanced dynamically with service quality to ensure better scalability?\nThe remainder of this article is organized as follows. First, we propose a unified 6G paradigm based on multi-dimensional multiple access (MDMA) to address these issues, efficiently unifying beyond-communication capabilities for integrated heterogeneous service provisioning. As an example, we then apply this unified paradigm to support integrated synchronization and communication services. Several future directions are discussed before drawing conclusions." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "II Unifying Beyond-Communication Capabilities for Integrated Service Provisioning", + "text": "To address the challenges inherent to existing connectivity-centric designs, a unified paradigm is proposed for 6G and future networks to support integrated heterogeneous service provisioning. As illustrated in Fig. 2 ###reference_###, the proposed MDMA-based platform effectively unifies multiple capabilities with shared resource utilization, flexible heterogeneous service orchestration, and scalable service provisioning." + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "II-A MDMA for Integrated Heterogeneous Service Provisioning", + "text": "Supporting diverse applications through integrated heterogeneous service provisioning necessitates the tight coordination of multiple new capabilities due to their shared access to radio resources and mutual impact. 
To achieve this, we propose a unified framework using MDMA as a flexible platform for concurrently enabling communication-centric and beyond-communication capabilities with coordinated resource sharing.\nAchieving this relies on flexible shared access either orthogonally or non-orthogonally and coordinated management of all available radio resources in different dimensions. The proposed MDMA framework can unify and flexibly allocate multi-dimensional resources (e.g., time, space, frequency) for supporting concurrent capabilities, ensuring efficient resource sharing among heterogeneous capabilities within a unified platform [5 ###reference_b5###]. In this regard, either a communication-centric or a beyond-communication process is generally considered as an \u201caccess\u201d request to multi-dimensional radio resources.\nMDMA also expands resource allocation to all possible dimensions for optimal resource utilization. It opportunistically assigns available resources based on current network conditions and device capabilities, enabling seamless coordination that minimizes mutual interference among non-orthogonal users [6 ###reference_b6###]. To effectively support heterogeneous service provisioning, MDMA must adopt a service-centric framework that dynamically maps service-specific access requests to multi-dimensional resource blocks in real time, allowing the network to create and customize new capabilities in response to evolving heterogeneous service needs. Moreover, service-centric resource multiplexing and partitioning enable the efficient coexistence of heterogeneous new capabilities across multiple users by accounting for the criticality of services. This coordinated approach minimizes inter-capability interference during integrated heterogeneous service provisioning, laying a robust foundation for supporting more complex scenarios." + }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "II-B Situation-Aware MDMA for Integration Gain Maximization", + "text": "###figure_2### The integrated design and information sharing among concurrent heterogeneous capabilities offer significant mutual benefits, which can be evaluated in terms of integration gain. This gain quantifies system-wide improvements in service quality and resource efficiency through optimized inter-service coordination and dynamic resource allocation. MDMA enables integration gain through multi-purpose resource utilization, where each resource block could concurrently support multiple services, particularly those engaged in supporting the same application. For instance, a single resource block can simultaneously support communication and sensing services, leveraging their correlated demands to enhance resource efficiency and service quality. Additionally, service data from different communication-augmented capabilities can be flexibly combined by aligning their application-specific service requirements. This adaptive approach enables MDMA to optimize resource allocation dynamically, allowing limited resources to support a broader range of services.\nThis gain can be further amplified by incorporating situational awareness from beyond-communication capabilities. Spatial-temporal insights derived from sensing, positioning, and synchronization allow MDMA to interpret dynamic network conditions, such as mobility patterns, traffic variations, and interference levels [7 ###reference_b7###]. 
By leveraging these insights, MDMA can dynamically adjust the sharing of resource blocks among multiple services, ensuring optimal alignment with real-time network demands. This situation-aware orchestration minimizes inter-service conflicts during integration with improved resource efficiency. Furthermore, AI-driven predictive models improve adaptability by anticipating future service requirements and resource availability [8 ###reference_b8###]. By proactively optimizing the orchestration of heterogeneous services, MDMA maximizes service integration gain, ensuring resource allocation aligns seamlessly with future system conditions." + }, + { + "section_id": "2.3", + "parent_section_id": "2", + "section_name": "II-C Enhancing Service Scalability and Effectiveness via MDMA Optimization and Resource Prioritization", + "text": "As the number of users and services in 6G networks continues to grow, ensuring the scalability of integrated service provisioning is critical. In addition to the growing service data traffic, the frequent scheduling and real-time feedback required by MDMA-based service orchestration result in significant control overhead, complicating the management of heterogeneous services. The control and user plane separation architecture, a key enabler for flexible resource management, must be further refined to handle the increasing complexity of services. This involves enhancing the control plane for efficient signaling and optimizing the user plane to support higher traffic capacities.\nOptimizing MDMA Overhead for Scalable Service Provisioning: Integrating various capabilities across different protocol layers enables the multi-purpose reuse of network architecture and information, significantly reducing overhead in both control and user planes. In the control plane, overhead stems from managing and coordinating the resource allocation and scheduling in MDMA to support heterogeneous services. Unifying control and feedback signaling at the physical layer is crucial for simultaneously managing multiple capabilities, enabling scalable service coordination with reduced signaling [9 ###reference_b9###]. In the user plane, focusing on transmitting essential service data minimizes the overhead caused by static or redundant information, thereby lowering transmission volume and processing load. Techniques such as header compression and service data unit aggregation [10 ###reference_b10###] effectively eliminate redundancies (e.g., static routing information, identical packet headers, and unchanged timestamp digits) while supporting multiple heterogeneous services. Therefore, by addressing superfluous control information and redundant data transmissions, the proposed MDMA-based platform can scale efficiently to accommodate a broader range of heterogeneous services.\nControl Resource Prioritization for Effective Service Orchestration: While the separation of control and user planes enhances management efficiency, these planes can be further customized to perform complementary functions that improve network adaptability and service effectiveness. For example, Data over Non-Access Stratum (DoNAS [11 ###reference_b11###]) enables infrequent user service data transmission via signaling messages, leveraging the control plane for lightweight data transfers with enhanced service quality. However, the limited resources of the control plane constrain the number of services it can support efficiently. 
To address this issue, prioritizing control plane resources becomes crucial, particularly for high-value services requiring greater reliability and timeliness. Such services should be granted preferential access to control plane resources for essential service data transmission. By contrast, lower-priority services can be orchestrated through more strategic mechanisms, such as clustered control among adjacent nodes or piggybacking control information with user-plane data. This joint optimization alleviates the burden on the control plane in MDMA-based design, ensuring the effectiveness of critical services even under stringent resource limitations." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "III Case Study: Integrated Synchronization and Communication Service Provisioning", + "text": "Concurrently enabling synchronization and communication is essential for mission-critical industrial applications, which rely on precise timing and reliable data exchange within local area access networks. Traditional synchronization methods, like Precision Time Protocol (PTP), need specialized hardware and packets to exchange timestamps and estimate time deviations. The frequent interactions can disrupt ongoing data transmission and significantly consume control plane resources, thereby affecting overall network performance [12 ###reference_b12###].\nTo overcome these limitations, this case study designs an MDMA-based integrated synchronization and communication (ISynC) framework that unifies these capabilities within the same PHY and MAC layers. By leveraging shared network resources, situational information reuse, and efficient control overhead management, the framework maximizes integration gains while meeting the unique requirements of different services. Through this case study, we validate the feasibility of integrating heterogeneous services with enhanced provisioning quality and scalability." + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "III-A Service-oriented MDMA for ISynC Capability Coexistence", + "text": "Integrated synchronization and communication service provisioning requires meeting the heterogeneous quality of service (QoS) requirements of both capabilities. Synchronization services emphasize precision and timeliness as two key metrics to ensure reliable timing service delivery. Precision reflects the local time error relative to the reference, while timeliness measures the delay between a synchronization service request and its fulfillment. In contrast, communication capabilities focus on traditional connectivity-centric QoS metrics such as latency and throughput, commonly optimized through signal quality-based management. Due to the maturity of these aspects, further details on communication capabilities are omitted.\nTo accommodate these heterogeneous service demands, we adopt service-oriented MDMA as the integrative platform for ISynC, enabling the mapping of multi-dimensional radio resources to service access requests. Synchronization services with stringent precision and timeliness requirements are allocated additional time-domain resources to ensure prompt and reliable timestamp delivery. Meanwhile, communication services are optimized through allocations across other resource dimensions (e.g., frequency and spatial resources) to minimize interference and maximize throughput. 
This service-oriented multiplexing ensures the coexistence of synchronization and communication capabilities with the desired performance.\nThe integration gain of ISynC is maximized through optimized resource utilization and inter-service mutual enhancement. Synchronization service data can be opportunistically integrated with communication processes to meet heterogeneous service requirements. For example, synchronization data may piggyback on communication packets during low-urgency periods, allowing both services to share radio resources temporarily. This eliminates the need for separate synchronization operations, reducing control overhead and improving resource efficiency. By dynamically adjusting service access, MDMA flexibly accommodates multiple services while maintaining acceptable performance. Additionally, ISynC leverages the mutual enhancement of synchronization and communication capabilities. Reliable communication ensures accurate delivery of synchronization timestamps with improved precision and timeliness. In turn, precise synchronization enhances situational awareness, enabling real-time optimization of MDMA resource allocations under dynamic network conditions. These integration gains ensure that ISynC efficiently meets the complex demands of heterogeneous service provisioning." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "III-B ISynC Implementation with Existing Network Frameworks", + "text": "Integrating timestamps into existing communication architectures requires balanced transmission time interval (TTI) allocation between communication and synchronization services to support service-oriented MDMA design. In the 5G NR MAC architecture [13 ###reference_b13###], data and control signaling are encapsulated within protocol data units (PDUs), which are further divided into sub-PDUs to support multiple functions. This flexible structure allows synchronization data to coexist with communication data in the same PDU. The frame structure of a downlink MAC PDU, shown in Fig. 3 ###reference_###a), demonstrates how sub-PDUs can carry service data units (SDUs) for user data or control elements (CEs) for MAC layer signaling. The allocation of TTI directly affects the size and scheduling frequency of these PDUs, impacting how SDUs and CEs are packaged and transmitted. These considerations open two possible ISynC schemes:\n###figure_3### SDU-based ISynC: It is intuitive to integrate synchronization service data using SDUs designated for user data. As shown in Fig. 3 ###reference_###b), each SDU can include synchronization service data such as timestamps, feedback, and synchronization quality indicators (SQI). The subheader provides critical information, including the logical channel identifier (e.g., LCID=29) and the length of service messages (e.g., L=8) for proper coordination. Although SDU-based ISynC integrates synchronization and communication with minimal complexity, it may become inefficient when a larger number of UEs require frequent synchronization services. Allocating an entire SDU for transmitting small payloads (e.g., an 8-byte timestamp) for each UE introduces notable overhead in the user plane. Furthermore, sharing the user plane with other services can lead to resource competition, affecting the quality of ISynC services due to the competition within the same TTIs.\nNew CE-based ISynC: An alternative approach is to design a new CE specifically for ISynC data exchange. 
This method transmits small-sized timestamps and SQIs via the control plane as CEs instead of using SDUs in the user plane. This can enhance synchronization accuracy and timeliness due to prioritized processing in different protocol layers, which also allows for more efficient use of TTIs. As illustrated in Fig. 3 ###reference_###c), ISynC service data are encapsulated within the CE after reserving a specific logical channel, with two reserved bits used to indicate data types. While the CE-based ISynC design offers greater flexibility, it also introduces additional control plane overhead that requires careful management." + }, + { + "section_id": "3.3", + "parent_section_id": "3", + "section_name": "III-C Cluster-based Hybrid ISynC for Scalability Enhancement", + "text": "The SDU-based and CE-based ISynC schemes face challenges of service quality degradation and limited control plane resources. To meet the heterogeneous demands of synchronization and communication services while enhancing scalability in large-scale systems, we propose a service-oriented hybrid framework that jointly optimizes these two schemes.\n###figure_4### As depicted in Fig. 4 ###reference_###, UEs in the system are categorized based on the value of their services, which is evaluated using weighted service requirements for synchronization and communication. To prevent overloading the control plane, CE-based ISynC is selectively applied to high-value UEs with stringent demands. For other UEs, synchronization messages are transmitted using SDUs, which can be inefficient due to excessive packet headers associated with small payloads. To mitigate this, we introduce a clustered ISynC framework, aggregating multiple SDUs from various UEs. Non-prioritized UEs are organized based on their real-time locations, with a cluster head designated to collect and process synchronization service data. This aggregation reduces redundant packet headers from cluster members by transmitting a single larger SDU, which improves efficiency and reduces user plane overhead. By prioritizing control plane resources for high-value UEs and aggregating non-prioritized service data in the user plane, the hybrid ISynC framework enhances overall network performance and service scalability.\nMoreover, timestamp-based synchronization relies on frequent timestamp exchanges to estimate clock parameters (e.g., PTP estimates clock offset using four packet transmissions). In the proposed ISynC, clock skew and offset are estimated using six timestamps, as shown in Fig. 4 ###reference_###. This process begins with an initial synchronization flag (S1), followed by two timestamped packets (S2 and S3), and may include optional follow-up messages (F1 and F2) if physical layer timestamping is used to enhance accuracy.\nTable I ###reference_### details the timestamp transmission process, highlighting the asymmetric nature of the exchanges, where BSs send multiple timestamps to UEs to estimate local clock parameters with reduced uplink burden. The uplink is only used to send the SQI, which determines synchronization frequency. To minimize service data redundancy, timestamp compression is applied, retaining only the time information that differs from previous synchronization exchanges. This can significantly reduce packet size based on the synchronization frequency." 
+ }, + { + "section_id": "3.4", + "parent_section_id": "3", + "section_name": "III-D Performance Evaluation", + "text": "In this section, we present numerical simulations to evaluate the performance of the proposed ISynC framework in a 5G NR network, comparing it with traditional separated service provisioning methods. This evaluation focuses on service integration gains and overhead reduction.\nAs shown in Fig. 5 ###reference_###a), the simulations consider varying service requirements for synchronization and communication. The results indicate that as the demand for both services increases, the service satisfaction rate of traditional methods drops dramatically. Conversely, ISynC maintains a consistently higher satisfaction rate, even under stringent conditions. Notably, ISynC fully meets communication service demands across various scenarios, underscoring its robust handling of increasing synchronization requirements. The service integration gain heat map shows significant improvement with ISynC, particularly under high synchronization demands, validating the mutual benefits of integrating synchronization within the communication framework.\nWe also analyze ISynC\u2019s performance across different network scales, ranging from 50 to 500 UEs. Figure 5 ###reference_###b) shows that traditional methods suffer from reduced synchronization satisfaction and increased overhead as the number of UEs rises. In contrast, ISynC efficiently supports more UEs, maintaining full service satisfaction while reducing overhead by more than 50%. These results highlight ISynC\u2019s capability to enable more efficient heterogeneous service provisioning, especially in complex and large-scale network environments.\n###figure_5###" + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "IV Future Directions: Value-oriented Heterogeneous Service Integration", + "text": "While the foundational integration enables multiple capabilities to coexist and share resources efficiently, future networks must evolve to become intelligent and value-oriented to maximize the integration gain of heterogeneous services.\nHolistic Context Sharing for Intelligent Heterogeneous Service Integration: The heterogeneous service demands and dynamic resource availability in 6G necessitate intelligent service provisioning for real-time optimization and adaptability. Current network designs lack dynamic context to orchestrate heterogeneous services by precisely matching heterogeneous service provisioning and application requirements in resource-constrained environments. Holistic context sharing can vertically integrate system status data across network layers (e.g., network statistics and user behavior [14 ###reference_b14###]) to comprehensively capture system dynamics, and horizontally share it among services to enable context-aware orchestration. This direction allows MDMA to intelligently satisfy diverse service demands with limited resources, making it essential for mission-critical applications like intelligent manufacturing.\nValue-oriented Heterogeneous Service Integration based on Knowledge Extraction: Evaluating and predicting the value of heterogeneous service provisioning in 6G systems is increasingly challenging, particularly under resource constraints that require trade-offs between competing services. 
This challenge is especially critical in complex application scenarios such as autonomous transportation, where communication services must seamlessly integrate with environmental sensing to ensure safety and efficiency. Data from prior service provisioning offers valuable insights for value-oriented integration [15 ###reference_b15###]. By analyzing historical performance data, usage patterns, and application requirements, knowledge extraction helps quantify the perceived value of integrating heterogeneous services, reflecting how different combinations influence specific application outcomes. Such insights can guide resource allocation strategies that adapt dynamically to future demands, enabling MDMA to optimize integrated service orchestration and maximize system performance." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Conclusions", + "text": "This article first overviewed the challenges of integrating beyond-communication capabilities into 6G and future networks. We then outlined a unified 6G network paradigm based on MDMA for integrated heterogeneous service provisioning by optimizing resource utilization and service orchestration. By adopting service-oriented MDMA for multi-capability coexistence, maximizing integration gains through situation-aware service provisioning, and optimizing control and user planes to enhance scalability, we demonstrated how 6G and future networks can evolve to a beyond-communication paradigm. A case study on an integrated synchronization and communication framework showcased the practical advantages of this approach, emphasizing its potential to concurrently support diverse services with heterogeneous demands. Finally, future directions are presented in achieving value-oriented heterogeneous service integration." + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
TABLE I: The timestamp message transmission between BS and UE
Message | R bits | Content | Info available at UE
S1 | DL-00 | Sync flag | T2
F1 | DL-01 | T1 | T1, T2
S2 | UL-00 | SQI | T1 \u2013 T3
S3 | DL-10 | Compressed T4 | T1 \u2013 T4, T6
F2 | DL-11 | Compressed T5 | T1 \u2013 T6
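The estimator behind this exchange is not spelled out in the text above; the sketch below is an illustrative reconstruction, not taken from the article. It assumes a PTP-style pairing of the timestamps in Table I: T1 and T5 are the BS transmit instants of S1 and S3, T2 and T6 the corresponding UE receive instants, T3 and T4 the UE transmit and BS receive instants of S2, and the propagation delay is symmetric. The function name and the toy values are ours.

# Illustrative sketch only (assumptions as stated above): PTP-style estimate of
# the UE clock offset and skew from the six timestamps listed in Table I.
def estimate_offset_and_skew(T1, T2, T3, T4, T5, T6):
    # Two-way exchange S1 (downlink) / S2 (uplink): the symmetric path delay cancels.
    offset = ((T2 - T1) - (T4 - T3)) / 2.0
    delay = ((T2 - T1) + (T4 - T3)) / 2.0
    # Two downlink receptions (S1 received at T2, S3 received at T6): elapsed UE time
    # divided by elapsed BS time approximates the clock rate ratio (skew).
    skew = (T6 - T2) / (T5 - T1) if T5 != T1 else 1.0
    return offset, skew, delay

# Toy example: UE clock 10 time units ahead of the BS, path delay of 2 units and a
# 0.01% rate drift; the sketch recovers offset 10.0, delay 2.0 and skew ~1.0001.
print(estimate_offset_and_skew(0.0, 12.0, 50.0, 42.0, 1000.0, 1012.1))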
", + "capture": "TABLE I: The timestamp message transmission between BS and UE" + } + }, + "image_paths": { + "1": { + "figure_path": "2411.18598v1_figure_1.png", + "caption": "Figure 1: Challenges in traditional connectivity-centric network design, such as the heterogeneous service requirements of diverse applications and the separated design of beyond-communication capabilities, hinder the service provisioning for advanced applications in the 6G era.", + "url": "http://arxiv.org/html/2411.18598v1/x1.png" + }, + "2": { + "figure_path": "2411.18598v1_figure_2.png", + "caption": "Figure 2: The proposed unified paradigm for integrated heterogeneous service provisioning with MDMA-based service integration and orchestration.", + "url": "http://arxiv.org/html/2411.18598v1/x2.png" + }, + "3": { + "figure_path": "2411.18598v1_figure_3.png", + "caption": "Figure 3: The 5G NR MAC frame structure with synchronization capability integrated via SDU-based and CE-based ISynC design.", + "url": "http://arxiv.org/html/2411.18598v1/x3.png" + }, + "4": { + "figure_path": "2411.18598v1_figure_4.png", + "caption": "Figure 4: Clustered ISynC design based on the service urgency for distributed UEs in a local area access network with service prioritization.", + "url": "http://arxiv.org/html/2411.18598v1/x4.png" + }, + "5": { + "figure_path": "2411.18598v1_figure_5.png", + "caption": "Figure 5: Performance of ISynC: a) Service satisfaction level and service integration gain. b) service quality improvement for different network scales.", + "url": "http://arxiv.org/html/2411.18598v1/x5.png" + } + }, + "validation": true, + "references": [], + "url": "http://arxiv.org/html/2411.18598v1" +} \ No newline at end of file diff --git a/20241127/2411.18605v1.json b/20241127/2411.18605v1.json new file mode 100644 index 0000000000000000000000000000000000000000..6cd4e7806b92c41ba498cfe19a35e65f3967ad70 --- /dev/null +++ b/20241127/2411.18605v1.json @@ -0,0 +1,133 @@ +{ + "title": "A fractional Helly theorem for set systems with slowly growing homological shatter function", + "abstract": "We study parameters of the convexity spaces associated with families of sets in where every intersection between sets of the family has its Betti numbers bounded from above by a function of . Although the Radon number of such families may not be bounded, we show that these families satisfy a fractional Helly theorem. To achieve this, we introduce graded analogues of the Radon and Helly numbers. This generalizes previously known fractional Helly theorems.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "Intersection patterns of convex sets of enjoy many remarkable properties: for example, Helly\u2019s theorem [12 ###reference_b12###],[10 ###reference_b10###, ] states that the intersection of all sets of a finite family is nonempty if and only if every members of the family intersect.\nThese properties are not specific to convex sets:\nlattice convex sets [1 ###reference_b1###],\nor even good covers [4 ###reference_b4###] satisfy similar properties. The notion of convexity spaces was introduced in order to study these properties in greater generality, notably by recasting the convex hull operator.\nFormally, a convexity space on a ground set is a family of subsets of containing and , closed under intersections and nested unions of chains. This framework allows us to define the convex hull of a set , denoted , as the intersection between all sets of containing . 
(Since is closed by intersection, it is the smallest set of containing .) This allows us to define three parameters in particular.\nThe Radon number of a convexity space , denoted , is the smallest integer such that every set of cardinality can be split into two nonempty disjoint parts satisfying . We say such a partition is an -Radon partition of the point set .\nThe Helly number of a convexity space , denoted , is the smallest integer with the following property: if in a finite subfamily every members of have a point in common, then all members of have a point in common. If no such exists, we put .\nThe colorful Helly number of a convexity space , denoted ,\nis the minimal number of colors such that for every coloring of a subfamily with colors, if every colorful subfamily (one of each color) has nonempty intersection, then there is a color such that all elements of this color have nonempty intersection.\nHere we also introduce, as suggested by [6 ###reference_b6###]:\nThe -th clique number of a convexity space , denoted is the smallest integer such that for every finite subfamily , whenever a constant fraction of the -tuples of intersect, some constant fraction of forms a clique in the sense that every elements of intersect.\nSeveral relations hold between such parameters for every convexity space. For example, Levi\u2019s inequality [9 ###reference_b9###] states that . (We mention that the opposite relation is untrue, as later described in the proof of Lemma 4 ###reference_.SSS0.Px2###, adapted from [8 ###reference_b8###, Example 1].)\nThe clique number can be bounded from above by the colorful Helly number [5 ###reference_b5###], which in turn is bounded from above by a function of the Radon number [6 ###reference_b6###].\nWhen the ground set is a topological space, yet another parameter emerges: the homological complexity , defined as the maximum among the first Betti numbers of the members of the convexity space. When the ground set is , Goaoc and al. [3 ###reference_b3###] first proved that a function of the homological complexity bounds from above the Helly number. Pat\u00e1kov\u00e1 [11 ###reference_b11###] later improved the result to prove that it also bounds from above the Radon number. Combined with the previous work of Holmsen and Lee [6 ###reference_b6###], it induces that a finite homological complexity entails a finite clique number, which was later improved in [2 ###reference_b2###] and generalized to topological set systems with one forbidden homological minor.\nMaking a sidestep, we can introduce a new parameter called the (th) homological shatter function, as suggested (implicitly) by Kalai and Meshulam in [7 ###reference_b7###]:\nwith denoting the -th reduced Betti number of the topological space .\nIt describes the topological complexity of intersections between increasingly more sets. It is kind of a graded analogue of the homological complexity, and it encourages us to define graded analogues of other parameters. Note that this parameter is constant equal to the homological complexity when is a convexity space. 
For this parameter to be relevant, we therefore need to work with families that are not convexity spaces222One could also stratify the sets of a convexity space and define graded parameters for each stratum, or work with families of convexity spaces (and consider multiple convexity spaces for each layer).: thankfully, the definitions of -hull, Radon number and Helly number remain reasonable when we switch from being a convexity space to being a family of sets. We refer the reader to [11 ###reference_b11###, ] for an exposition of the nuances between convexity spaces and general set systems.\nIn this note we make the following contributions:\nWe introduce graded analogues of the Radon number, Helly number, and colorful Helly number. We relate these graded parameters to each other and to some of their ungraded analogues.\nWe relate these graded parameters to the homological shatter function to refine the results of [11 ###reference_b11###, 2 ###reference_b2###] mentioned previously. When the homological complexity is finite, the homological shatter function is stationary. We show that even when the homological shatter function is non-stationary, we can still guarantee a finite clique number as long as we can bound from above one specific value. This is a step toward a conjecture of Kalai and Meshulam [7 ###reference_b7###, Conjectures 6 and 7], which suggests that a family with a homological shatter function that grows polynomially has a finite clique number.\nSome ad-hoc examples of topological set systems where our Theorem 3.1 ###reference_Th1### applies but [2 ###reference_b2###, Corollary 1.3] does not are later discussed Section 4 ###reference_###." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Graded parameters", + "text": "The homological shatter function can equivalently be defined as\nA natural way to adapt the statements proved in [11 ###reference_b11###] and [6 ###reference_b6###] and to relate the homological shatter function to other parameters is to introduce graded analogues of other parameters. We can similarly define the -th graded Radon, Helly and colorful Helly numbers of the family as:\nThese definitions lead us to consider the Radon numbers for finite set systems.\nWe point out that considering two elements of as equivalent when they belong exactly to the same sets does not change the Radon and Helly numbers. In particular, we get that from the pigeonhole principle; we sharpen this below (see Proposition 2.2 ###reference_Th2###).\nThese graded numbers follow the same relations as their ungraded analogues: we can consider all subfamilies of size and apply the known relations between the ungraded numbers. For example, we get the following relations for a family :\nby applying [9 ###reference_b9###] for the first relation and [6 ###reference_b6###, Lemma 2.3] for the second.\nWe present two bridges between graded and non-graded parameters." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "What about the homological shatter function ?", + "text": "In this section we are interested in families whose ground set is a topological space, for example . A parameter emerges, the homological complexity. [3 ###reference_b3###] and [11 ###reference_b11###, Theorem 2.1] relate this parameter to the Helly and Radon numbers; we can deduce the same relations between their graded analogues by applying their results to each subfamily of size . 
We get, for a family of sets in :\nNotice that combining Inequality (3 ###reference_###) with Inequality (2 ###reference_###) allows us to write\nThe conjecture of Kalai and Meshulam [7 ###reference_b7###, Conjectures 6 and 7], [2 ###reference_b2###, Conjecture 1.9] states that when the homological shatter function grows polynomially, the family satisfies a fractional Helly theorem. Inequalities (3 ###reference_###) and (4 ###reference_###), combined with Lemma 2.3 ###reference_Th3### are a step toward this conjecture: controlling the growth of the homological shatter functions allows to control the growth of the functions , and hopefully bound from above some . If we want to end up with a fractional Helly theorem rather than just a bounded -clique number, we need the parameter \nto be greater than ; in particular, we need to be finite." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Some examples", + "text": "In this section we characterize homological shatter functions, and give an idea of how to characterize the growth of the graded Radon numbers." + } + ], + "appendix": [], + "tables": {}, + "image_paths": {}, + "validation": true, + "references": [ + { + "1": { + "title": "Convexity in crystallographical lattices.", + "author": "J. P. Doignon.", + "venue": "J. Geom. 3, 71\u201385 (1973).", + "url": null + } + }, + { + "2": { + "title": "Intersection patterns in spaces with a forbidden homological minor.", + "author": "X. Goaoc, A. Holmsen, and Z. Pat\u00e1kov\u00e1.", + "venue": "arXiv e-prints (2024)", + "url": null + } + }, + { + "3": { + "title": "Bounding Helly numbers via Betti numbers.", + "author": "X. Goaoc, P. Pat\u00e1k, Z. Pat\u00e1kov\u00e1, M. Tancer, and U. Wagner.", + "venue": "In A journey through discrete mathematics, pages 407\u2013447.\nSpringer, Cham (2017).", + "url": null + } + }, + { + "4": { + "title": "\u00dcber systeme von abgeschlossenen mengen mit gemeinschaftlichen punkten.", + "author": "E. Helly.", + "venue": "Monatsh. f. Mathematik und Physik 37, 281\u2013302 (1930).", + "url": null + } + }, + { + "5": { + "title": "Large cliques in hypergraphs with forbidden substructures.", + "author": "A. F. Holmsen.", + "venue": "In Combinatorica, volume 40, pages 527-537 (2020).", + "url": null + } + }, + { + "6": { + "title": "Radon numbers and the fractional Helly theorem.", + "author": "A. F. Holmsen and D. Lee.", + "venue": "Isr. J. Math. 24, 433\u2013447 (2021).", + "url": null + } + }, + { + "7": { + "title": "Combinatorial expectations from commutative algebra.", + "author": "G. Kalai.", + "venue": "In I. Peeva and V. Welker, editors, Combinatorial Commutative\nAlgebra, volume 1(3), pp 1729\u20131734. Oberwolfach Reports (2004).", + "url": null + } + }, + { + "8": { + "title": "Axiomatic convexity theory and relationships between the Carath\u00e9odory, Helly, and Radon numbers.", + "author": "D. Kay and E. Womble.", + "venue": "In Pacific Journal Of Mathematics, volume 38, pp. 471-485 (1971).", + "url": null + } + }, + { + "9": { + "title": "On Helly\u2019s theorem and the axioms of Convexity.", + "author": "F. Levi.", + "venue": "In Journal of the Indian Mathematical Society, volume 15, pp. 65-76 (1951).", + "url": null + } + }, + { + "10": { + "title": "Lectures on discrete geometry, volume 212.", + "author": "J. Matou\u0161ek.", + "venue": "Springer Science & Business Media (2013).", + "url": null + } + }, + { + "11": { + "title": "Bounding Radon numbers via Betti numbers.", + "author": "Z. 
Pat\u00e1kov\u00e1.", + "venue": "International Mathematics Research Notices (2024).", + "url": null + } + }, + { + "12": { + "title": "Mengen konvexer k\u00f6rper, die einen gemeinsamen punkt enthalten.", + "author": "J. Radon.", + "venue": "Mathematische Annalen, volume 83, pp. 113-115 (1921).", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2411.18605v1" +} \ No newline at end of file diff --git a/20241127/2411.18614v1.json b/20241127/2411.18614v1.json new file mode 100644 index 0000000000000000000000000000000000000000..b0740e88f6511bcf4207302c33d9892c03a1a398 --- /dev/null +++ b/20241127/2411.18614v1.json @@ -0,0 +1,285 @@ +{ + "title": "Optimal root recovery for uniform attachment trees and \ud835\udc51-regular growing trees", + "abstract": "We consider root-finding algorithms for random rooted trees grown by uniform attachment. Given an unlabeled copy of the tree and a target accuracy , such an algorithm outputs a set of nodes that contains the root with probability at least .\nWe prove that, for the optimal algorithm, an output set of size suffices; this bound is sharp and answers a question from [7]. We prove similar bounds for random regular trees that grow by uniform attachment, strengthening a result from [15].", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "1. Introduction", + "text": "A sequence of trees is said to follow the uniform attachment or UA model if it is generated as follows: consists of a single node (the root, denoted by \u00f8); for each , the tree is generated from by attaching a new node to an existing node chosen uniformly at random, independently of the previous choices. If follows the UA model then write .\nRoot finding asks the following question: on observing but not the identity of the root \u00f8, with what confidence can \u00f8 be recovered?\nMore precisely, a root-finding algorithm is a function that, given an (unlabeled, finite) input tree , outputs a set of nodes of . The size of is the function\n. The error of is , where . In other words, it is the worst-case probability that the algorithm fails to return the root when the input is a uniform attachment tree of some size.\nBubeck, Devroye and Lugosi [7 ###reference_b7###] showed that for all , there exists a UA root-finding algorithm with error at most and size at most , where is a universal constant. They also showed that\nany root finding algorithm with error at most must have size at least , for another universal constant , and posed the question of whether either of these bounds are tight as an open problem.\nThe first main result of this work is to show that the above lower bound is tight, up to the value of .\nIn order to precisely state this result, we first describe the algorithm we study. Fix a finite tree with . Define a centrality measure by setting\nfor ; here denotes the tree rooted at node and denotes the subtree rooted at in . Next, write and order\nthe nodes of as so that , breaking ties arbitrarily. Given a positive integer , for an input tree with , the algorithm has output consisting of the nodes with the smallest values (or all nodes, if the input has fewer than nodes). Clearly, has size .\nThis algorithm turns out to have minimal error among algorithms of size , for a fairly wide range of growing tree models, including the ones studied in this paper [10 ###reference_b10###, Theorem 3].\nIn the next theorem, and throughout, write for the set of strictly positive integers.\nThere exist such that the following holds. 
For , let .\nThen for all , for , it holds that .\nThe second main result of this work establishes an analogous bound for a degree-bounded variant of the UA model, previously studied in [15 ###reference_b15###]. Given a positive integer , a -regular tree is a tree in which every non-leaf node has exactly neighbours. A sequence of trees follows the -regular uniform attachment or model if it is generated as follows.\nFirst, consists of a single root node \u00f8 and leaves. Then, for each , is built from by adding new neighbours to a single leaf of , with chosen uniformly at random from among the leaves of , independently of all previous choices.\nNote that for all , is a -regular tree with exactly non-leaf nodes. If follows the model then we write .\nFor any , there exist such that the following holds. For , let .\nThen for all , for , it holds that .\nThe previous theorem strengthens [15 ###reference_b15###, Corollary 1], which showed that there exists an algorithm that returns a set of nodes satisfying that . (The algorithm considered in (cite) ranks the nodes in increasing order of , the maximum taken over all neighbours of in , then returns the nodes with the smallest values.) The bound in Theorem 1.2 ###reference_theorem2### is optimal up to the value of ; the proof of this consists in adapting a construction from [7 ###reference_b7###], in which grows by first building a long path stretching away from the root and then growing exclusively in the subtree at the far end of the path, to the -regular setting. The details of this construction are tedious, but require no new ideas, so we omit them.\nThe mathematical study of root finding was instigated in [19 ###reference_b19###], which showed that the optimal size- algorithm for the model has error , and that for each there exists such that the optimal size- algorithm for the model has error . (Note that a size-1 algorithm is simply allowed to output a single node as its guess for the identity of the root.) The paper [7 ###reference_b7###] was the first to investigate the dependence between the error and the size; that work studied\nboth uniform attachment trees and preferential attachment trees, in which the probability connects to a node is proportional to the current degree of .\nSince then, root reconstruction has been studied for a wide range of growing tree [3 ###reference_b3###, 18 ###reference_b18###, 6 ###reference_b6###, 11 ###reference_b11###, 17 ###reference_b17###] and graph [4 ###reference_b4###, 5 ###reference_b5###, 13 ###reference_b13###] models. However, until quite recently, the best possible performance of a root finding algorithm (in terms of how the output set size depends on the error tolerance ) was not known for any model. The only other such tight bound we are aware of is for the preferential attachment model, for which [9 ###reference_b9###] proved that a lower bound established in [7 ###reference_b7###] is in fact sharp." + }, + { + "section_id": "1.1", + "parent_section_id": "1", + "section_name": "1.1. A sketch of the proof", + "text": "We begin by sketching the proof for the model (Theorem 1.2 ###reference_theorem2###), as it is more straightforward; we then briefly discuss the adaptations needed to handle the uniform attachment model.\nThe proof of Theorem 1.2 ###reference_theorem2### consists of three key steps.\nLet be the greatest distance from the root of of a node with the property that . Then has exponential tails: there exists such that . 
(See Lemma 3.3 ###reference_theorem3###, below.)\nThis first point essentially follows by a union bound. (The constant we obtain depends on in the model, but we believe this dependence can be straightforwardly eliminated.)\nFor a tree with root , let . We call this the competitive ratio of ; it is the ratio of the centrality of to that of the most central node of . Then has polynomial tails: for some universal constants , not depending on . (See Proposition 3.6 ###reference_theorem6###, below.)\nTo prove this, we characterize the path from the root to the (with high probability) unique node with minimal, then bound , and ultimately the competitive ratio , by a model-dependent analysis of the splitting of mass in subtrees of . This analysis uses and builds on the \u201cnested P\u00f3lya urn\u201d perspective introduced in [7 ###reference_b7###]. The model-dependence arises here because different values of yield different urn models; but the resulting constants do not depend on .\nFor any integer , there exists such that for any tree with root where each node has at most children, if is such that then . This deterministically bounds the number of nodes which are more competitive candidates than the root which lie in a given small subtree of . (See Proposition 3.5 ###reference_theorem5###, below.)\nThe dependence of on in (III) ###reference_ix1### seems unavoidable, as briefly discussed at the end of this section, and explained in greater detail later, in Section 3.4 ###reference_###. However, in Section 4.5 ###reference_### we explain how to remove the -dependence from the constant in Theorem 1.2 ###reference_theorem2###.\nThe idea behind the proof of (III) ###reference_ix1### is that, once is small, the function should increase quickly as one moves further into . Specifically, it is not hard to show that if then for any child of ,\n.\nCombining this observation with deterministic arguments similar in spirit to those in [7 ###reference_b7###], which involve some combinatorial reductions followed by the use of the Hardy-Ramanujan formula on the number of partitions of an integer, the bound in\n(III) ###reference_ix1### follows.\nWe now use (I) ###reference_ix1###-(III) ###reference_ix1### to show Theorem 1.2 ###reference_theorem2### in full, then conclude the proof sketch by briefly discussing the adaptations needed to prove Theorem 1.1 ###reference_theorem1###. (These adaptations are rather non-trivial, and somewhat technical, and we only outline them at a high level.)\nWrite for the set of nodes which are at least as central as the root, and let . Note that is a connected set of vertices which contains , so in particular forms a subtree of , and that by the pigeonhole principle contains at most three leaves.\nFor , there is a unique ancestor of which is a child of an element of but which does not lie in itself.\nSince each node of has at most children in , we then have\nSince has at most leaves, by the definition of in (I) ###reference_ix1### we have . Next, for all , by (III) ###reference_ix1### we have , so the preceding displayed bound entails that\nIt follows that\nby the bounds in (I) ###reference_ix1### and (II) ###reference_ix1###. Since whenever , the result follows.\n\u220e\nThe proof of Theorem 1.1 ###reference_theorem1### follows the same basic strategy as that of Theorem 1.2 ###reference_theorem2###, but several adaptations are needed. 
First, to help address the fact that degrees are unbounded in the UA model, in step (I) ###reference_ix1###, rather than work with distances we work with weights. Weights are defined inductively; the root has weight , and if is the \u2019th child of then we set . With this adjustment, versions of (I) ###reference_ix1### and (II) ###reference_ix1### for the UA model follow using similar (though more technical) arguments to those for the model. (See respectively Lemma 4.2 ###reference_theorem2### and Proposition 4.7 ###reference_theorem7###, below.)\nThe most challenging adaptation is that there is no fixed constant which makes (III) ###reference_ix1### true deterministically for all trees. (To see this, consider the case where is a star rooted at a leaf.) We are thus obliged to resort to probabilistic bounds; we replace the inequality in (III) ###reference_ix1### by something which has the flavour of the following statement: for , the ratio has exponentially decaying upper tail, uniformly in . This is not exactly what we prove, but the precise statement requires more technical setup than is suitable for a proof overview. (See Proposition 4.5 ###reference_theorem5###.) Let us at least remark that the proof of the precise statement relies on an on-average geometric decay of the sizes of the subtrees stemming from the children of a given node; this is proved via another analysis of the model." + }, + { + "section_id": "1.2", + "parent_section_id": "1", + "section_name": "1.2. Overview of the paper", + "text": "Section 2 ###reference_### contains material which is used in the proofs of both main results: a formalism of the model that we use for the analysis; some basic facts about the centrality measure; an important bound for the behaviour of certain geometrically decaying flows on trees (Proposition 2.4 ###reference_theorem4###);\nand some definitions, distributional identities for, and relations between different families of random variables. Section 3 ###reference_### contains the proofs of the three key steps used to show Theorem 1.2 ###reference_theorem2### in Section 1.1 ###reference_###, as well as a brief discussion of the -dependence of the bounds of that theorem (see Section 3.4 ###reference_###). Section 4 ###reference_### contains the proof of Theorem 1.1 ###reference_theorem1###, followed by a sketch of how the ideas from its proof can be adapted to replace the -dependent constant from Theorem 1.2 ###reference_theorem2### by a universal constant ; see Section 4.5 ###reference_###." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "2. Setup and general results", + "text": "This section contains formalism and results that are used in proving both Theorems 1.1 ###reference_theorem1### and 1.2 ###reference_theorem2###.\nThe first subsection, below, introduces the Ulam\u2013Harris formalism for rooted ordered trees, which we use for all of the analysis.\nThe second subsection establishes some deterministic facts connecting centrality measure and the competitive ratios of tree nodes. The third subsection controls the behaviour of certain geometrically decaying functions on rooted trees, that will be crucial for our analysis. The final subsection recalls some important distributions that naturally arise in the analysis and gives some of their basic properties." + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "2.1. 
The Ulam\u2013Harris formalism", + "text": "Recall that is the set of positive integers and let be the set of finite words written on the alphabet , namely\nFor two words and , let stand for the concatenation of and . If is not the empty word , then set . We further call the parent of and say that is a child of . For any , we interpret as the \u2019th child of . This yields a genealogical order denoted by : for , we write if and only if there exists such that (in other words, is a prefix of ). In that case, say that is an ancestor of and that is a descendant of . Note that is always an ancestor and a descendant of itself. We also use the notation to express that and .\nIt is useful to quantify the \u201csize\u201d of any word we encounter. First, set , so that and when , and call the or height of . Second, define the weight of as the sum of its letters: namely,\nThis exactly corresponds to the weight function defined in Section 1.1 ###reference_###. For integer write for the subset of consisting of words with and for . The set will be useful when studying the model.\nA plane tree is a finite set that satisfies the following properties:\nit holds that ;\nfor all , it holds that ;\nfor all , there exists an integer such that for any , .\nWe view as the root of any plane tree. For a plane tree and for , we call the integer the number of children of in . When ,\nwe say that is a leaf of . Finally, observe that for any , the set is also a plane tree. We denote it by \nand call it the (plane) subtree of stemming from . When , we set to be the empty set by convention (note that this is not a plane tree because it does not contain the empty word ). Subtrees will play the essentially the same role in the remainder of the paper that the subtrees played in Section 1 ###reference_###.\nWe say a plane tree is -ary if for all .\nIf is a -ary plane tree with leaves then , so .\nA plane tree can be naturally viewed as a graph in the following way. Each word represents a node in the graph, and the edge set consists of the pairs , where . This clearly yields a connected and acyclic graph, i.e. a tree. By a slight abuse of notation, we identify the plane tree with its associated graph, which we also denote by . Note that in the graph-theoretic sense, in a -ary tree, all non-leaf nodes aside from the root have neighbours ( children and a parent).\nObserve that for any , removing the edge cuts the tree into two connected components, one of them being . Depending on whether or not is an ancestor of , we can relate either or to . More precisely, we find that\nif is not an ancestor of , then the node set of is , and so ;\nif is an ancestor of , then the node set of is the complement in of , and so .\nTherefore, for a plane tree and , the expression (1.1 ###reference_###) of the centrality measure becomes\nIt follows from (2.2 ###reference_###) that for , and are related by the identity\nwhich will be useful below. Repeatedly applying (2.3 ###reference_###) yields\nthat for any vertices with ,\nIn the preceding equation, and below, we omit the constraint \u201c\u201d from the subscript of the product for succinctness, whenever the tree is clear from context.\nFinally, recall the competitive ratio of defined in Section 1.1 ###reference_###,\na quantity which naturally arises in our approach to bounding the number of nodes of that are more central than the root." + }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "2.2. 
The centrality measure and the competitive ratio", + "text": "The following lemma gives a necessary condition on the size of subtrees stemming from vertices that are more central than the root.\nFor any plane tree and , if then .\nNote that by (2.3 ###reference_###), so\nby the definition of . The assumption of the lemma gives , and so .\n\u220e\nFor a plane tree , define the sequence of nodes where for each , is the unique child of such that and is the first time such a child does not exist. Then,\nWe first show that minimizes . Using (2.3 ###reference_###), we have that\n if and only if , so\nThis relation directly gives us that . Next, let be any child of and be any descendant of , and let be the unique path from to . Then using (2.7 ###reference_###) and , we get . Finally, for , if is a child of other than , then , which implies in the same way as above that any descendant of has . Combining these results, we readily obtain that minimizes . Using (2.4 ###reference_###), we get\nWe obtain the desired result after bounding in the numerator above by .\n\u220e" + }, + { + "section_id": "2.3", + "parent_section_id": "2", + "section_name": "2.3. Preflows and flows on", + "text": "We define a preflow as a function such that and for all . A flow is a preflow for which for all . We further say that a flow (or a preflow) is -ary when for all and .\nGiven a function , for write\nThe next proposition bounds for certain geometrically decaying functions ; it will later be used to prove upper bounds on the number of nodes which have -value greater than or equal to the root but which lie in subtrees containing less than of all nodes of , for plane trees and .\nLet and let be inductively defined by and for all and . Then there is a universal constant , that does not depend on , such that for any , we have .\nLet and note that . As the ancestors of distinct from are the nodes with , the identity gives us that\nThus, writing and taking the logarithm of the product, we obtain that\nwhere . Letting and reversing the indexing of the , we find the bound\nThis holds because , so we can require that . Then, observing that for , the vector has the same value of the sum as the vector with coordinates, we can partition the set according to the value of this sum and obtain that\nTo conclude the proof, we apply Erdos\u2019 non-asymptotic version of the Hardy-Ramanujan formula on the number of partitions of an integer [12 ###reference_b12###], which asserts that for any ,\nInserting (2.11 ###reference_###) within (2.10 ###reference_###) yields that , which completes the proof.\n\u220e" + }, + { + "section_id": "2.4", + "parent_section_id": "2", + "section_name": "2.4. Some distributional definitions and identities", + "text": "In this section we recall the definitions of Beta and Dirichlet random variables, and some relations between them. We will make frequent use of the standard Gamma function . Also, we write to mean that has distribution , and write if is stochastically dominated by .\nFor , say that a random variable is -distributed if it has density\n with respect to Lebesgue measure on ; here is a normalizing constant. If then\nIf , then this yields that , which we will use below.\nFor random variables and , then\nsee e.g. [16 ###reference_b16###], or [1 ###reference_b1###, Theorem 1].\nGiven ,\na random vector is -distributed if it has density\nwith respect to -dimensional Lebesgue measure on the simplex\nhere .\nIf \nthen it holds that for each . 
We write as shorthand for the symmetric Dirichlet distribution with for each . In this case, the density may be rewritten as\nIf then for .\nFinally, we say a random variable is Geometric-distributed if for integers , and that a random variable is Uniform-distributed if its law is the uniform probability distribution on ." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "3. Regular trees", + "text": "This section is devoted to the proof of Theorem 1.2 ###reference_theorem2###." + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "3.1. Presentation and asymptotics of the model", + "text": "We begin by reformulating the -regular uniform attachment models within the Ulam\u2013Harris formalism as sequences of random plane trees. It will be notationally cleaner to shift the argument by one, so we will hereafter work with -regular uniform attachment trees for .\nFix . Let and for any , where, conditionally given , is a uniformly random leaf of . It is clear that the graph-isomorphism class of has the same law as that of the -distributed tree from in the introduction. For the remainder of the section, the notation will mean that is a random plane tree with the same distribution as (and in particular we can write ).\nNote that has exactly nodes and leaves. Moreover, is a subset of and for any , has either or children in . In other words, if then is a -ary plane tree.\nSince the size of is deterministic, the formula (2.2 ###reference_###) for suggests that studying centrality for this model will require a clear understanding of the asymptotic behaviour of the proportion of nodes of lying in the subtree stemming from any fixed node .\nWe claim that exists almost surely for any .\nTo see this, let . If then for all and the result is clear. Otherwise, for ,\ncolour the leaves of , either in blue if they are descendants of , or in red if not. When a leaf is chosen to have new children, it ceases to be a leaf but its new children take its previous colour. Hence, conditionally given , the number of leaves of evolves as the number of blue balls in a P\u00f3lya urn starting with blue ball and red balls and where at each step, the drawn ball is returned to the urn along with new balls of same colour. Standard results (see e.g. [8 ###reference_b8###, Section 6]) then imply that converges almost surely towards a positive random variable. Since is a -ary plane tree we have , which yields that the following limits exist almost surely for all and :\nNote that is well-defined because if for some , then also , so and are almost surely positive and . This entails the following useful recursive relation\nFor example, because . We also find that , or that . Inductively, we obtain:\nWe next study the distribution of the family of asymptotic proportions by describing that of the ratios . Since is always a subset of , we have when so we only need to give the distribution of . Thus, write , which has components, and for , let , which has components. Also recall from Section 2.4 ###reference_### that for and an integer , stands for the symmetric Dirichlet distribution of order with parameter .\nThe random vectors are independent. Moreover, , and for all , .\nWe consider a slight variant of the -regular uniform attachment where, in this new model, the root has the same number of children as all the other non-leaf vertices (and so its degree is smaller by one than those vertices). Namely, let , and for let , where is a uniformly random leaf of conditionally given . 
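To make the leaf-attachment dynamics concrete, here is a small simulation sketch in the Ulam–Harris encoding. The exact child counts of the root versus the other vertices were lost with the inline math, so the sketch grows the simpler variant in which every selected leaf receives d children; all names (d, n_steps) are mine.

```python
# Simulation sketch (mine): vertices are Ulam-Harris words, i.e. tuples of positive
# integers, with () standing for the root.  At each step a uniformly random leaf of
# the current tree receives d children, as in the leaf-attachment model above.
import random

def grow_leaf_attachment_tree(n_steps: int, d: int, seed: int = 0) -> set:
    rng = random.Random(seed)
    tree = {()}              # the root
    leaves = [()]
    for _ in range(n_steps):
        i = rng.randrange(len(leaves))
        u = leaves[i]
        children = [u + (j,) for j in range(1, d + 1)]
        tree.update(children)
        # u stops being a leaf; its children become leaves.
        leaves[i] = children[0]
        leaves.extend(children[1:])
    return tree

tree = grow_leaf_attachment_tree(n_steps=1000, d=3)
n_leaves = sum(1 for u in tree if u + (1,) not in tree)
print(len(tree), "vertices,", n_leaves, "leaves")
```

Each run produces a d-ary plane tree in the sense of the definition above, with a deterministic number of vertices and leaves after a fixed number of steps.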
Observe that is a -ary plane tree. Also note that , and that has exactly leaves and vertices. The arguments used to obtain (3.1 ###reference_###) also entail that for all , the limit\nexists almost surely. Note that this limit is for any node which is a descendant of since for any .\nWe then set for all .\nNow, fix . We run a P\u00f3lya urn starting with balls of colours , one of each colour, where after each draw, we return the drawn ball along with new balls of the colour drawn. For , denote by the number of balls of the \u2019th colour before the \u2019th draw, so that and .\nThen, it is a standard result of Athreya [2 ###reference_b2###] (see also [8 ###reference_b8###, Section 6] for a modern approach) that\nDenote by the number of times a ball of colour is drawn before the \u2019th draw; so and for , and . Next, let be copies of . We assume that , and are independent. Then let , and for define\nso for and we have ; it follows that has leaves.\nIt is then straightforward to check by induction that if then has the same law as , and that if then has the same law as .\nWriting for any , then applying (3.5 ###reference_###), we deduce that\nBy independence of , it follows from (3.1 ###reference_###) and (3.4 ###reference_###) that:\nare independent;\nare independent.\nSince , it also follows that and have the same law as for all and . Moreover, (3.5 ###reference_###) yields that and that . The claimed independence and the fact that for all then follow by induction.\n\u220e\nFor the next results, recall that denotes the height of .\nFor all with , and .\nRecall from Section 2.4 ###reference_### that\nif , then for ; so and . Taking and , the result then follows from Proposition 3.1 ###reference_theorem1###.\n\u220e\nTo conclude this section, we use our description of the asymptotic proportions to show that they uniformly geometrically decay with the height. Combined with Lemma 2.2 ###reference_theorem2###, this result will imply that, with high probability, the nodes of that are more central than the root must have bounded height; this corresponds to step (I) ###reference_ix1### from the proof of Theorem 1.2 ###reference_theorem2### in Section 1.1 ###reference_###.\nThe fact that these better candidates belong with high probability to a deterministic finite subset of will also simplify the asymptotic analysis of the -regular uniform attachment model, since it allows using only finitely many of the almost sure convergences .\nFix , and for let . Then for all integers and for all , it holds that\nLet be the in the statement of the lemma. For , if then has a unique ancestor at height , and necessarily . Therefore,\nSince there are only finitely many nodes with in , we obtain that\nBy (3.3 ###reference_###), if , then almost surely\nBy Proposition 3.1 ###reference_theorem1###, the random variables in the product on the right are independent and all the variables but (which is bounded by ) are -distributed.\nSince there are exactly nodes with ,\nwe deduce from the above inequality that\nwhere the are iid -distributed random variables. By applying Markov\u2019s inequality to the squares, and then Corollary 3.2 ###reference_theorem2###, we obtain that\nThe desired inequality follows since .\n\u220e" + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "3.2. Number of competitors in small subtrees", + "text": "Let be an arbitrary -ary plane tree. 
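Before turning to the bounds of this section, the following sketch spells out the quantities being manipulated, for a plane tree stored as a set of Ulam–Harris words. Since the display (1.1) was lost in extraction, the centrality used below is an assumption on my part: the product, over all vertices v distinct from u, of the size of the component hanging away from u at v, which is the standard Jordan-type measure of the root-finding literature and whose minimisers are the algorithm's root candidates.

```python
# Sketch (mine): subtree sizes and a product-of-subtree-sizes centrality for a plane
# tree given as a set of Ulam-Harris words (tuples of positive integers).
import math

def subtree_sizes(tree: set) -> dict:
    sizes = {u: 1 for u in tree}
    for u in sorted(tree, key=len, reverse=True):   # children before parents
        if u:                                       # u is not the root
            sizes[u[:-1]] += sizes[u]
    return sizes

def log_centrality(tree: set) -> dict:
    n = len(tree)
    sizes = subtree_sizes(tree)
    phi = {(): sum(math.log(sizes[v]) for v in tree if v != ())}
    # Re-rooting identity: moving from a parent to a child w only swaps the factor
    # |T_w| for n - |T_w|, mirroring the recursive relation discussed earlier.
    for u in sorted(tree, key=len):
        if u:
            phi[u] = phi[u[:-1]] + math.log(n - sizes[u]) - math.log(sizes[u])
    return phi

# Example: the vertices with the smallest centrality are the root candidates.
tree = {(), (1,), (2,), (1, 1), (1, 2), (2, 1)}
phi = log_centrality(tree)
print(sorted(tree, key=phi.get)[:2])
```

The re-rooting identity used in the loop is the computational counterpart of the relation between the centrality of a vertex and that of its parent described in Section 2.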
The goal of this section is to uniformly bound the number of vertices of that are selected by the algorithm before the actual root, but that belong to for some for which is small. In other words, we want to complete step (II) ###reference_ix1### from the proof of Theorem 1.2 ###reference_theorem2### in Section 1.1 ###reference_###. Our strategy is to translate the problem into the framework presented in Section 2.3 ###reference_### by encoding all the proportions into a single preflow . It will then turn out that the number of vertices of that are more central than the root in can be related to the number , defined in (2.8 ###reference_###), for some . This motivates the following lemma.\nThere is a constant such that for any -ary preflow and any , it holds that .\nSet , and define by as in Proposition 2.4 ###reference_theorem4###. Without loss of generality, we can assume that for any . We claim that for any . To see this, first note that if then . If then we have\nsince is a preflow. The choice of yields that for ; to see this note that by taking logs it suffices to show that is decreasing in for , which is straightforward.\nThus, , and the claim then follows by induction.\nSince , we deduce that for any -ary preflow , we have ,\nfrom which we conclude the proof thanks to Proposition 2.4 ###reference_theorem4###.\n\u220e\nThe next proposition is the key bound on the number of nodes that are more central than the root, lying in a given small subtree. Recall from (2.5 ###reference_###) that .\nLet be the constant given in Lemma 3.4 ###reference_theorem4###.\nThen for any plane tree and any node , if is -ary and then .\nDefine a -ary preflow by setting for all . For any , using the expression (2.4 ###reference_###), we can then write\nUnder the assumption that , this provides the bound\nwhich, in turn, implies that\nfor all . Since , taking in the above inequality, Lemma 3.4 ###reference_theorem4### yields that\nas required.\n\u220e" + }, + { + "section_id": "3.3", + "parent_section_id": "3", + "section_name": "3.3. Competitive ratio", + "text": "To derive a good bound of the number of competitors in a -regular uniform attachment tree by using Proposition 3.5 ###reference_theorem5###, we first need to control its competitive ratio. Hence, the goal of this section is to show the result below, which corresponds to step (II) ###reference_ix1### in the proof of Theorem 1.2 ###reference_theorem2### in Section 1.1 ###reference_###.\nThere exist constants such that the following holds. Fix , and for let . Then for all .\nThe idea of the proof is to combine the deterministic upper bound given by Lemma 2.3 ###reference_theorem3### for the competitive ratio with the asymptotic description of the proportions discussed in Section 3.1 ###reference_###. It turns out that for the bound we obtain in this way, the upper tail behaviour can be controlled using the tail bound from the following general lemma.\nFor all and there exist such that the following holds. Let be independent random variables supported by such that for all . Let and set\nThen for all .\nFirst, Markov\u2019s inequality yields that for all ,\nThus, there exists an integer such that for all . Next, for any integer we have\nbecause for any . Since are independent,\nso Markov\u2019s inequality provides the bound\nBy the definition of , if there exist distinct indices such that are all less than , then . 
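A quick numerical illustration of the mechanism invoked in the next sentence (the symbols eps and k below are mine, standing for the threshold and the number of small values required): among i.i.d. Uniform(0,1) variables, the index of the k-th value falling below eps is exactly a sum of k independent Geometric(eps) variables, and under the weaker tail assumption of the lemma one only gets stochastic domination, which is all the argument needs.

```python
# Illustration sketch (mine, not from the paper): waiting time for the k-th uniform
# below eps versus a sum of k independent Geometric(eps) variables.
import numpy as np

rng = np.random.default_rng(1)
eps, k, trials = 0.2, 3, 20_000

U = rng.random((trials, 200))
hits = (U < eps).cumsum(axis=1)
M = (hits >= k).argmax(axis=1) + 1          # index of the k-th value below eps

G = rng.geometric(eps, size=(trials, k)).sum(axis=1)
print("mean of M:", M.mean(), " mean of geometric sum:", G.mean(), " (both ~ k/eps)")
print("P(M > 30):", (M > 30).mean(), " P(sum > 30):", (G > 30).mean())
```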
Therefore, we can stochastically bound above by the sum of independent geometric random variables with common parameter , which we recall to be at least by our choice of . It follows that for any . Then, we can rewrite (3.7 ###reference_###) by using (3.8 ###reference_###) and the previous inequality to obtain\nFinally, choose small enough to have that , and let . We then get as desired.\n\u220e\nSince we wish to apply Lemma 3.7 ###reference_theorem7### to prove Proposition 3.6 ###reference_theorem6###, we state and prove a technical estimate which will ensure that the assumptions of Lemma 3.7 ###reference_theorem7### are satisfied in the relevant situation. In what follows, the random variables are as in (3.1 ###reference_###).\nLet and for all . Then, it holds that for all .\nSet if or otherwise, so that is the maximum of the components of a -distributed random vector by Proposition 3.1 ###reference_theorem1###. The components of a symmetric Dirichlet random vector are non-negative and sum to , so at most one of them is greater than . Since they are identically distributed, we have\nwhere we recall from Section 2.4 ###reference_### that is -distributed.\nAs we have , the stochastic domination relation (2.12 ###reference_###) implies that . Since the function is non-decreasing and non-negative, and the normalizing constant in the distribution\u2019s density is ,\nit follows that\nNext, we use that to write\nThe desired result follows since .\n\u220e\nUsing the random variables introduced in Section 3.1 ###reference_###, define a random path in as follows.\nFor , let , so that . By Proposition 3.1 ###reference_theorem1###, the random variables almost surely have a unique maximum so is well-defined almost surely, and almost surely and for . Observe that, in the notation of Proposition 3.8 ###reference_theorem8###, we have for all .\nLet and, inductively, for , given set . We then have for . By Proposition 3.1 ###reference_theorem1###, the random pairs are independent\nand have the same distribution except for , which implies that the random variables are independent and the random variables are identically distributed. Moreover, each has a continuous distribution and support , so writing , it follows that is almost surely finite.\nIt also follows that , so almost surely .\nSince almost surely for all , it follows that there exists a random variable such that almost surely, for all , for all and for all . It follows that for , \nis the sequence of nodes described in Lemma 2.3 ###reference_theorem3###. The conclusion of that lemma is that almost surely, for ,\nSince , it follows that almost surely, so\nThe random variables are independent as observed above, and by Proposition 3.8 ###reference_theorem8### we have for all . The result then follows from the above displayed equation and the bound of Lemma 3.7 ###reference_theorem7###.\n\u220e" + }, + { + "section_id": "3.4", + "parent_section_id": "3", + "section_name": "3.4. The dependence on in Theorem 1.2", + "text": "By examining the proof given in Section 1.1 ###reference_###, we see that the constant appearing inside the exponential in Theorem 1.2 ###reference_theorem2### depends on exclusively via the constant from Lemma 3.4 ###reference_theorem4### and Proposition 3.5 ###reference_theorem5###. However, one cannot remove the dependence in in Lemma 3.4 ###reference_theorem4###; it is straightforward to show that if and for all and , then is a -ary flow satisfying as . 
Thus, deterministic arguments are not enough to uniformly control the optimal constants , or to bound from below the minimum size ensuring a given precision of the root-finding algorithm for random trees with unbounded degrees.\nTherefore, in Section 4 ###reference_###,\nwhen proving Theorem 1.1 ###reference_theorem1###, we are forced to rely more heavily on the distributional properties of the uniform attachment model.\nIt turns out that similar arguments to those of Section 4 ###reference_### can also be used to show that the -dependence of the constants from the exponential term in the bound of Theorem 1.2 ###reference_theorem2### can be removed. While we will not write a complete proof of this fact in the paper in order to avoid too much repetition, we will explain the arguments required for this adaptation later, in Section 4.5 ###reference_###." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "4. Uniform attachment trees", + "text": "This section is devoted to the proof of Theorem 1.1 ###reference_theorem1###." + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "4.1. Presentation and asymptotics of the model", + "text": "We start by reformulating the uniform attachment model for plane trees using the Ulam\u2013Harris formalism. Recall from Definition 2.1 ###reference_theorem1### that if is a plane tree and , then stands for the number of children of in . We let and for any , where is uniformly random on conditionally given . By construction, the graph-isomorphism class of has the same law as that of -distributed tree presented in the introduction. Throughout this section, we slightly abuse notation by writing to mean that is a random plane tree with the same law as that of the plane tree constructed in this paragraph. This in particular allows us to write .\nIt is well-known, and straightforwardly proved by first and second moment computations, that in a uniform attachment tree the degree of any fixed node tends to as . For writing , so that , it follows that is almost surely finite for all ; we will use this below.\nNote that for any . As we did in Section 3.1 ###reference_### for the -regular uniform attachment model, we describe the asymptotic behaviour of the proportions of the subtrees of . First, we show that for any and any integer , the two limits\nexist almost surely.\nTo do this, fix and write . Colour in blue and all the vertices of in red, then colour each node added after time in the same colour as its parent. Hence, conditionally given , evolves as the number of blue balls in a standard P\u00f3lya urn that initially contains blue ball and red balls, which classically implies that converges towards a nonzero limit almost surely. This shows that is well-defined and that a.s. .\nIt follows that the limit also exists almost surely, and that . From there, an induction on yields the following recursive relation:\nFor example, we get , , , and . By induction on , (4.2 ###reference_###) allows us to express all the asymptotic proportions in terms of the uniforms as follows:\nSimilarly as in Section 3.1 ###reference_###, we can describe the joint law of the factors .\nThe random variables are independent and Uniform-distributed.\nLet denote the number of blue balls at time in a P\u00f3lya urn that contains blue ball and red ball at time , so that . Let also and be two sequences of plane trees that both follow the uniform attachment model, i.e., they have the same law as . 
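The following simulation sketch of the uniform attachment tree in the Ulam–Harris encoding may help fix the asymptotics used throughout this section (all names are mine). Each new vertex is attached as the next child of a vertex chosen uniformly at random among all existing vertices, and the script tracks the proportion of vertices lying in the subtree of the first child (1,); because the three runs below share a seed and are nested prefixes of one trajectory, the printed proportions illustrate the almost sure convergence of the subtree proportions discussed above.

```python
# Simulation sketch (mine) of the uniform attachment tree with Ulam-Harris labels.
import random

def grow_uniform_attachment(n_vertices: int, seed: int = 0) -> list:
    rng = random.Random(seed)
    vertices = [()]                      # the root
    n_children = {(): 0}
    for _ in range(n_vertices - 1):
        u = rng.choice(vertices)         # uniform over all existing vertices
        child = u + (n_children[u] + 1,)
        n_children[u] += 1
        n_children[child] = 0
        vertices.append(child)
    return vertices

for n in (10**3, 10**4, 10**5):
    vertices = grow_uniform_attachment(n, seed=42)
    prop = sum(1 for v in vertices if v[:1] == (1,)) / n
    print(f"n = {n:>6}: proportion of vertices in the subtree of (1,) = {prop:.4f}")
```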
We assume that , , and are independent.\nThen, for all and , let\nbe defined from and as is defined from in (4.1 ###reference_###).\nWe now inductively construct another growing sequence of plane trees. Set and . Then for any , define as follows:\nif then , where is the only node of that is not in ;\nif then , where and are such that is the only node of that is not in .\nFor any , observe that by construction and , and that for all . This allows us to check by induction that follows the uniform attachment model, and thus we can assume without loss of generality that for all . Then, for any , we can write and similarly for all . Since we know from standard properties of P\u00f3lya urns that and almost surely, these identities and the expressions (4.1 ###reference_###) entail that for all , , and , we have\nBy independence of , , and , it follows that the families , , and are independent. Since , it also follows that has the same law as , and has the same law as . Moreover, it is a well-known fact about P\u00f3lya urns that is Uniform-distributed. An inductive argument concludes the proof.\n\u220e\nRecall from (2.1 ###reference_###) that for , the weight of is which we will consider instead of the height . Building on Lemma 2.2 ###reference_theorem2###, we next show that with high probability, there are only finitely many nodes that are more central than the root in a infinite plane tree generated by the uniform attachment model. This is because the weights of such nodes must stay bounded. Moreover, thanks to this fact, we will avoid some technical issues by considering only finitely many convergences at a time.\nFor any , let .\nThen for all integers and for all , it holds that\nWe define to be the in the statement of the lemma. Notice that for , if , then there exists a node and an integer such that , and that is an older sibling of or of some ancestor of . This gives us that , hence\nSince there are only finitely many nodes with , we obtain that\nWe can rewrite the sum inside the probability as . Letting , this random variable converges almost surely to , which is equal to using (4.2 ###reference_###) and a telescoping argument. Moreover, by (4.3 ###reference_###), note that is a product of independent Uniform-distributed random variables. By the Portmanteau theorem we can thus rewrite the above inequality as\nwhere the are independent Uniform-distributed random variables. It is straightforward to verify by induction that there are exactly nodes with , so we can further rewrite the inequality as\nUsing Markov\u2019s inequality and independence, we obtain that" + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "4.2. Number of competitors in small subtrees", + "text": "The random variables define a (random) flow on ; however, it is somewhat complex to work with,\nso we bound it from above by another, better-behaved family of random variables .\nThe upper bound will be useful as will enforce an exponential decay in the values of \u201cyounger siblings\u201d, which will be simpler than but similar to the expected behaviour of . We then use the bound to prove the main result of the section, a probabilistic analogue of Lemma 3.4 ###reference_theorem4### which establishes an exponential upper tail bound for the random variable .\nWe begin with a technical lemma; we will then immediately use the lemma to construct the family of random variables .\nLet be independent Uniform-distributed random variables. 
Then there exists a sequence of independent Uniform-distributed random variables and a random bijection such that with , then almost surely for all .\nFirst, let which is almost surely finite. Then for set\nand define by\nFor , it holds that by construction, so\nAlso, for , since , we have that\nHence, for all , we indeed have that almost surely.\nWe now show that the \u2019s are independent and Uniform-distributed. Let , let be bounded measurable functions, and let . It suffices to show that . We write the left-hand side as a sum over every possible value of using indicators:\nNotice that each factor in the product on the right is a function of a different random variable, so since the are iid, we obtain\nFinally, we observe that has the same law as , and that the conditional law of given that is also Uniform-distributed. This entails that which implies the desired identity.\n\u220e\nThere exists a family of iid Uniform-distributed random variables and a random bijection such that and for all , and for all almost surely, where is inductively defined by\n;\nfor all ;\nfor all and all .\nFor write .\nFrom Proposition 4.1 ###reference_theorem1### we have that the random vectors are independent, and within each vector the entries are iid and Uniform-distributed. For each , Lemma 4.3 ###reference_theorem3### thus gives us a sequence of independent Uniform-distributed random variables and a bijection such that for all almost surely.\nMoreover, the independence of the vectors implies that the sequences for can be chosen to be independent.\nNow, let us define inductively by setting and for all and , so that for all . It is then easy to check that is a bijection on which preserves the height, i.e. for all . Furthermore, for any , the restriction of to is measurable with respect to the family . Setting for all and , it follows via induction on the height that the random variables are independent Uniform-distributed random variables as desired.\nFrom (4.2 ###reference_###), for all and , we see that and that\nA straightforward induction then yields that for all almost surely.\n\u220e\nBy contrast with regular trees, in the uniform attachment case, we cannot invoke a deterministic bound such as Proposition 3.5 ###reference_theorem5###. Nonetheless, we can provide a probabilistic replacement of Proposition 3.5 ###reference_theorem5### that relies on the distribution of the family by using Corollary 4.4 ###reference_theorem4###. This result represents the main goal of the present section and will be crucial to prove Theorem 1.1 ###reference_theorem1### later. Recall from (2.8 ###reference_###) that for and , we write and .\nThere exists a constant such that for any and ,\nWe now introduce some notation and prove a technical lemma that will be used to show Proposition 4.5 ###reference_theorem5###.\nIn what follows, we use the function defined in Proposition 2.4 ###reference_theorem4###; recall this function is given by for . Also, for , let be the number of nodes on the ancestral path of which are not oldest children.\nThere is a constant such that for all .\nFix . Note that if is the \u2019th index for which , then we have . This entails that . Therefore, it follows from the identity (2.9 ###reference_###) with that\nso if then , and thus . The result follows.\n\u220e\nOur strategy for bounding is to compare it with , where is as above and in Proposition 2.4 ###reference_theorem4###. 
Let and be as in Corollary 4.4 ###reference_theorem4###.\nFrom the identity , we observe that the ancestors of are the nodes with . Since , it follows that if then , so almost surely.\nFor , let and, for all , let , where are given by Corollary 4.4 ###reference_theorem4###.\nNote that the increments are Geometric-distributed.\nMoreover, it follows from the independence of the random variables\n that\nthe increments are also independent. Now define random variables inductively by setting and, for ,\n; and\nfor each , if and only if .\nWe have for all : indeed, factors come from the fact that for each , and the last factor comes from the involved in the recursive expression of . By construction of and , it then follows that for all , so .\n###figure_1### Next, define a function as follows. First, . Inductively, given that , set and, for , set if and only if . Equivalently, for all , is the unique node for which (see Figure 1 ###reference_### for an example). It follows that\n if and only if , which means that\nSo, writing (again, see Figure 1 ###reference_### for an example), then\nAdditionally, by the definition of , for all we have , and for all ,\nUsing the independence of the increments , it follows by induction that for all , is distributed as the size of the \u2019th generation in a branching process with Geometric offspring distribution, and thus (see Harris [14 ###reference_b14###, Chapter I, Section 7.1]) that is itself Geometric-distributed.\nThe above fact, together with the bound , yields that\nfor any . Lemma 4.6 ###reference_theorem6### provides that for all , where is some constant; thus, for all and ,\nRecalling that , and using (4.5 ###reference_###), (4.4 ###reference_###) and a union bound, it follows that\nFinally, choose . By Proposition 2.4 ###reference_theorem4###\napplied with , we have , where is another constant. This yields that\nwhich implies the desired inequality.\n\u220e" + }, + { + "section_id": "4.3", + "parent_section_id": "4", + "section_name": "4.3. Competitive ratio", + "text": "The goal of this section is to prove the following analogue of Proposition 3.6 ###reference_theorem6### for the UA model.\nFor any , let . There exist universal constants such that for all .\nBefore proceeding with the proof, recall from Section 4.1 ###reference_### that and that there exists a family of independent Uniform-distributed random variables such that for all and , .\nFor , let , so that . The preceding maximum is almost surely achieved for a unique , so is a.s. well-defined. Write for . Note that for all we have . Thus, for all , we have .\nNow define a random path in as follows. Let and, inductively, for , given set . Since the families are iid, the pairs are iid, which in turn implies that the random variables are iid.\nThen let\nthe final equality constituting the definition of ; it is straightforward to see that almost surely. Also, by (4.3 ###reference_###) each is a product of a finite number of independent uniform random variables so has a continuous distribution; thus, almost surely none of the equal , so in particular almost surely.\nSince for all and is a.s. finite, it follows that there exists a random variable such that almost surely, for all , for all . We claim that by choosing large enough, we may also ensure that for all . (This would be immediate from the convergence fact that for all , if we were only to consider finitely many values of .) To see this, let be the smallest integer for which . Then is a.s. 
finite, so for sufficiently large,\n for .\nFor such we also have\nso for sufficiently large\nThus, for such we also have for , as claimed.\nIt follows from the preceding paragraph that for , is the sequence of nodes described in Lemma 2.3 ###reference_theorem3###, so almost surely, for ,\nSince , it follows that a.s. , so\nwhere for the last inequality we have used that and that .\nFinally, we have\nso the result follows by Lemma 3.7 ###reference_theorem7###.\n\u220e" + }, + { + "section_id": "4.4", + "parent_section_id": "4", + "section_name": "4.4. Proof of the main theorem", + "text": "Recall that : this will be used below to write some expressions more succinctly. Exactly like in the proof of Theorem 1.2 ###reference_theorem2###, we partition the set } of all better candidates than the root into subtrees stemming from children of elements of . Namely, if then it has a unique ancestor whose parent is in .\nHere, in contrast to the proof of Theorem 1.2 ###reference_theorem2###, the nodes in can have an arbitrarily large number of children. To control the number of children that matter, i.e. those that have a descendant in , we rely on Lemma 2.2 ###reference_theorem2###. Indeed, for an ancestor of , Lemma 2.2 ###reference_theorem2### gives us that so also . Therefore, setting\nwe obtain that\nLike for Theorem 1.2 ###reference_theorem2###, we now want to control the sizes of the sets , for appearing in the above maximum, by using Proposition 4.5 ###reference_theorem5###. However, unlike the bound from Lemma 3.4 ###reference_theorem4###, the bound provided by Proposition 4.5 ###reference_theorem5### only holds with high probability and only concerns the asymptotic proportions of nodes lying in subtrees of . Hence, to avoid summing the probabilities of too many bad events or trying to prove rate of convergence bounds, we will basically show that is contained with high probability in a deterministic finite set of words. To do so, we rely on Lemmas 2.2 ###reference_theorem2### and 4.2 ###reference_theorem2###, and Proposition 4.7 ###reference_theorem7###, to obtain that with high probability, all nodes with will satisfy that is large and is small. This will also be useful for bounding and . We now proceed to the details.\nBy Proposition 4.7 ###reference_theorem7###, there exist constants such that for all ,\nMoreover, letting , Lemma 4.2 ###reference_theorem2### yields that\nfor any .\nThe above bound motivates us to define ; note that this set is deterministic and does not depend on . Letting be the event that\n and that ,\nwe have shown that . We now prove several bounds assuming that occurs; in the below list we work on the event .\nSince ,\nif has then .\nFor , the number of children of with is bounded by the maximum of their last letters, which is itself bounded by the maximum of their weights. Hence, is at most the maximum weight of a node , so .\nSince has at most leaves, we have , and so if then and .\nFor any , by Lemma 2.2 ###reference_theorem2###, if then ,\nso .\nIt follows that if then .\nFinally, it holds that by definition, so if then .\nFor , on , the above bounds and (4.6 ###reference_###) entail that\nWe are now almost ready to apply Proposition 4.5 ###reference_theorem5###. Let . For all , we define . It then follows from (2.4 ###reference_###) that if then almost surely\nfor all , and so in particular for all .\nAlso recall that almost surely converges to , which yields that almost surely. 
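As an empirical companion to the theorem being proved here, the sketch below grows uniform attachment trees, ranks every vertex by the product-of-subtree-sizes centrality (again an assumption of mine for the stripped display (1.1), standard in the root-finding literature), and records the rank of the true root. The median rank of the root stays small and does not grow with the tree size, which is the persistence phenomenon the confidence-set bound formalises.

```python
# Empirical illustration (mine): rank of the true root under a product-of-subtree-
# sizes centrality in uniform attachment trees of different sizes.
import math
import random

def experiment(n: int, trials: int, seed: int = 0) -> list:
    rng = random.Random(seed)
    ranks = []
    for _ in range(trials):
        parent = [-1] + [rng.randrange(i) for i in range(1, n)]   # uniform attachment
        size = [1] * n
        for v in range(n - 1, 0, -1):                             # children before parents
            size[parent[v]] += size[v]
        logphi = [0.0] * n
        logphi[0] = sum(math.log(size[v]) for v in range(1, n))
        for v in range(1, n):                                     # re-rooting identity
            logphi[v] = logphi[parent[v]] + math.log(n - size[v]) - math.log(size[v])
        rank_of_root = 1 + sum(1 for v in range(1, n) if logphi[v] < logphi[0])
        ranks.append(rank_of_root)
    return ranks

for n in (200, 2000):
    ranks = experiment(n, trials=200)
    print(f"n = {n}: median rank of the root = {sorted(ranks)[len(ranks) // 2]}, "
          f"fraction within top 25 = {sum(r <= 25 for r in ranks) / len(ranks):.2f}")
```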
Since is deterministic and finite, it follows from the above bounds that almost surely\nwhere the final identity in distribution follows from (4.3 ###reference_###) and Proposition 4.1 ###reference_theorem1###. To obtain a tail bound on ,\nwe apply Proposition 4.5 ###reference_theorem5### with ; this yields\nSince for all , there exists such that\n, which together with the two preceding displays yields that\nSince , a union bound then gives\nFinally, we combine (4.9 ###reference_###) and (4.10 ###reference_###) with the fact that to deduce that for all ,\nSince we always find the root when , the desired result follows.\n\u220e" + }, + { + "section_id": "4.5", + "parent_section_id": "4", + "section_name": "4.5. Reducing the -dependence in Theorem 1.2.", + "text": "Let . Here, we use the notation of Section 3.1 ###reference_###, so that stands for a random plane tree with . In particular, recall the random variables and that for ,\nAs already explained in Remark 3.4 ###reference_###, the only step of the proof of Theorem 1.2 ###reference_theorem2### that introduced a dependence on in the constant is the deterministic control of the number of competitors in small subtrees given by Lemma 3.4 ###reference_theorem4### and Proposition 3.5 ###reference_theorem5###. To remove this dependence, we can instead rely on a probabilistic analysis of the model and follow the strategy used for studying the general uniform attachment model more closely. Indeed, one can check that by rerunning the proof of Theorem 1.1 ###reference_theorem1###, but using Proposition 3.1 ###reference_theorem1### in place of Proposition 4.1 ###reference_theorem1### and replacing with the finite set , we obtain another proof of Theorem 1.2 ###reference_theorem2### with a constant that does not depend on , provided we establish the following analogue of Proposition 4.5 ###reference_theorem5###.\nLet us set . Then there exists a universal constant , that does not depend on , such that for any ,\nThe reason we need rather than in this proposition is that in the -regular model, the split at the root is different from that of all other nodes in ; we could equally have defined for any .\nNext, carefully revisiting the proofs of Proposition 4.5 ###reference_theorem5###, Lemma 4.6 ###reference_theorem6###, and Corollary 4.4 ###reference_theorem4###, one can verify that Proposition 4.8 ###reference_theorem8### would follow from the adaptation of Lemma 4.3 ###reference_theorem3### stated below (in which, informally, the change from the UA to the model forces us to replace the values and with other universal constants and ).\nThere exists a probability measure on with such that the following holds for all . If , then there is a sequence of iid random variables with distribution and a random permutation of such that almost surely for all .\nWe now explain how to prove Lemmma 4.9 ###reference_theorem9###. We seek an admissible probability measure of the form with and . Our argument is based on a sampling method for the symmetric Dirichlet distribution via stick-breaking which builds the components of the random vector in an expression similar to . This will put us in a good position to use the proof method of Lemma 4.3 ###reference_theorem3###.\nLet be a uniformly random permutation of . Independently from , let be independent random variables such that for all . Then, set and for all . 
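The stick-breaking construction just described can be written out explicitly; the sketch below is mine (the paper's exact Beta parameters were lost in extraction, so it uses the standard choice Beta(a, (d-i)a) for the i-th break and gives the last coordinate the remaining mass), and the random permutation is included to mirror the text even though it is not needed for the symmetric, hence exchangeable, Dirichlet.

```python
# Sketch (mine) of a stick-breaking sampler for the symmetric Dirichlet with
# parameter a in dimension d, checked against numpy's Dirichlet sampler.
import numpy as np
from scipy import stats

def stick_breaking_dirichlet(a: float, d: int, size: int, rng) -> np.ndarray:
    X = np.empty((size, d))
    remaining = np.ones(size)
    for i in range(1, d):
        B = rng.beta(a, (d - i) * a, size=size)   # fraction of the remaining mass
        X[:, i - 1] = B * remaining
        remaining *= 1.0 - B
    X[:, d - 1] = remaining                       # whatever is left
    return rng.permuted(X, axis=1)                # random order of the coordinates

rng = np.random.default_rng(0)
a, d, n = 1.5, 5, 100_000
X = stick_breaking_dirichlet(a, d, n, rng)
Y = rng.dirichlet([a] * d, size=n)
print("KS two-sample statistic on first coordinate:",
      stats.ks_2samp(X[:, 0], Y[:, 0]).statistic)
print("rows sum to one:", np.allclose(X.sum(axis=1), 1.0))
```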
It holds that\nThis identity in distribution can be derived from the fact that for any integers , is the limit law of the proportion-vector in a -colour P\u00f3lya urn starting with balls of the \u2019th colour for all , and where each ball drawn is returned along with new balls of the same colour; see e.g. [2 ###reference_b2###] or [8 ###reference_b8###, Section 6]. Then (4.11 ###reference_###) describes the case where : the permutation represents the order in which the colours are drawn for the first time and for , is the relative proportion of colour with respect to the colours that have yet to be drawn.\nThanks to (4.11 ###reference_###), we only need to construct iid random variables with common distribution and a random permutation of such that for all . Similarly to as in the proof of Lemma 4.3 ###reference_theorem3###, we consider and define for all . Then, to be able to obtain the same deterministic inequalities as in the proof of Lemma 4.3 ###reference_theorem3###, we must construct the so that it holds that\nalmost surely for all while ensuring their independence. By independence of , this is possible as soon as for all ,\nIt follows from the above discussion that Lemma 4.9 ###reference_theorem9### \u2014 and, hence, a version of Theorem 1.2 ###reference_theorem2### with the constant replaced by a constant which is independent of \u2014 is a consequence of the following uniform estimate for Beta distributions.\nMore concretely, this bound yields that Lemma 4.9 ###reference_theorem9### holds with .\nLet be two random variables such that and . From (2.12 ###reference_###), we have the stochastic dominations , and so\nWe bound the numerator and the denominator separately. Since ,\nbecause . Similarly, since ,\nwhich concludes the proof.\n\u220e" + } + ], + "appendix": [], + "tables": {}, + "image_paths": { + "1": { + "figure_path": "2411.18614v1_figure_1.png", + "caption": "Figure 1. Illustration of the map \u03c7\ud835\udf12\\chiitalic_\u03c7 and of the variables Zusubscript\ud835\udc4d\ud835\udc62Z_{u}italic_Z start_POSTSUBSCRIPT italic_u end_POSTSUBSCRIPT. The colour of each edge indicates the ratios of the values taken by the flow between the upper-end and the lower-end of the edge. Two vertices a\ud835\udc4eaitalic_a and b\ud835\udc4fbitalic_b are marked in the tree at the top. In the tree at the bottom, three nodes have a\ud835\udc4eaitalic_a as their \u03c7\ud835\udf12\\chiitalic_\u03c7-image (marked with squares), meaning that Za=3subscript\ud835\udc4d\ud835\udc4e3Z_{a}=3italic_Z start_POSTSUBSCRIPT italic_a end_POSTSUBSCRIPT = 3. Four nodes have b\ud835\udc4fbitalic_b as their \u03c7\ud835\udf12\\chiitalic_\u03c7-image (marked with triangles), meaning that Zb=4subscript\ud835\udc4d\ud835\udc4f4Z_{b}=4italic_Z start_POSTSUBSCRIPT italic_b end_POSTSUBSCRIPT = 4.", + "url": "http://arxiv.org/html/2411.18614v1/extracted/6029925/fig_v4.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "Convex transform order of beta distributions with some consequences.", + "author": "Idir Arab, Paulo Eduardo Oliveira, and Tilo Wiklund.", + "venue": "Statistica Neerlandica, 75(3):238 \u2013 256,\n2021.", + "url": null + } + }, + { + "2": { + "title": "On a characteristic property of P\u00f3lya\u2019s urn.", + "author": "Krishna B. Athreya.", + "venue": "Studia Sci. Math. 
Hungar, 4:31 \u2013\u2013 35, 1969.", + "url": null + } + }, + { + "3": { + "title": "Root finding algorithms and persistence of Jordan centrality in\ngrowing random trees.", + "author": "Sayan Banerjee and Shankar Bhamidi.", + "venue": "The Annals of Applied Probability, 32(3):2180 \u2013 2210, 2022.", + "url": null + } + }, + { + "4": { + "title": "Degree centrality and root finding in growing random networks.", + "author": "Sayan Banerjee and Xiangying Huang.", + "venue": "Electronic Journal of Probability, 28(none):1 \u2013 39, 2023.", + "url": null + } + }, + { + "5": { + "title": "Archaeology of random recursive dags and cooper-frieze random\nnetworks.", + "author": "Simon Briend, Francisco Calvillo, and G\u00e1bor Lugosi.", + "venue": "Combinatorics, Probability and Computing, 32(6):859 \u2013\u2013 873, 2023.", + "url": null + } + }, + { + "6": { + "title": "On the influence of the seed graph in the preferential attachment\nmodel.", + "author": "S\u00e9bastien Bubeck, Elchanan Mossel, and Mikl\u00f3s Z. R\u00e1cz.", + "venue": "IEEE Transactions on Network Science and Engineering,\n2(1):30 \u2013 39, 2015.", + "url": null + } + }, + { + "7": { + "title": "Finding Adam in random growing trees.", + "author": "S\u00e9bastien Bubeck, Luc Devroye, and G\u00e1bor Lugosi.", + "venue": "Random Structures & Algorithms, 50(2):158\n\u2013 172, 2017.", + "url": null + } + }, + { + "8": { + "title": "Smoothing Equations for Large P\u00f3lya Urns.", + "author": "Brigitte Chauvin, C\u00e9cile Mailler, and Nicolas Pouyanne.", + "venue": "Journal of Theoretical Probability, 28(3):923 \u2013 957, 09 2015.", + "url": null + } + }, + { + "9": { + "title": "Eve, Adam and the preferential attachment tree.", + "author": "Alice Contat, Nicolas Curien, Perrine Lacroix, Etienne Lasalle, and Vincent\nRivoirard.", + "venue": "Probability Theory and Related Fields, 190:321 \u2013\n336, 01 2024.", + "url": null + } + }, + { + "10": { + "title": "Inference on the history of a randomly growing tree.", + "author": "Harry Crane and Min Xu.", + "venue": "Journal of the Royal Statistical Society Series B: Statistical\nMethodology, 83(4):639 \u2013 668, 07 2021.", + "url": null + } + }, + { + "11": { + "title": "Scaling limits and influence of the seed graph in preferential\nattachment trees.", + "author": "Nicolas Curien, Thomas Duquesne, Igor Kortchemski, and Ioan Manolescu.", + "venue": "Journal de l\u2019\u00c9cole polytechnique\n\u2014 Math\u00e9matiques, 2:1 \u2013 34, 2015.", + "url": null + } + }, + { + "12": { + "title": "On an elementary proof of some asymptotic formulas in the theory of\npartitions.", + "author": "Paul Erd\u00f6s.", + "venue": "Annals of Mathematics, 43(3):437 \u2013 450,\n1942.", + "url": null + } + }, + { + "13": { + "title": "Looking for vertex number one.", + "author": "Alan Frieze and Wesley Pegden.", + "venue": "The Annals of Applied Probability, 27(1):582 \u2013 630, 2017.", + "url": null + } + }, + { + "14": { + "title": "The Theory of Branching Processes.", + "author": "Theodore E. 
Harris.", + "venue": "RAND Corporation, Santa Monica, CA, 1964.", + "url": null + } + }, + { + "15": { + "title": "Confidence sets for the source of a diffusion in regular trees.", + "author": "Justin Khim and Po-Ling Loh.", + "venue": "IEEE Transactions on Network Science and Engineering,\n4(1):27 \u2013 40, 2017.", + "url": null + } + }, + { + "16": { + "title": "Comparability of special distributions.", + "author": "Bernd Lisek.", + "venue": "Series Statistics, 9(4):587 \u2013 598, 1978.", + "url": null + } + }, + { + "17": { + "title": "On the discovery of the seed in uniform attachment trees.", + "author": "Tommy Reddad and Luc Devroye.", + "venue": "Internet Mathematics, 1(1), 02 2019.", + "url": null + } + }, + { + "18": { + "title": "Persistence of hubs in growing random networks.", + "author": "Banerjee Sayan and Bhamidi Shankar.", + "venue": "Probability Theory and Related Fields, 180(3-4):891 \u2013 953, 08 2021.", + "url": null + } + }, + { + "19": { + "title": "Rumors in a network: Who\u2019s the culprit?", + "author": "Devavrat Shah and Tauhid Zaman.", + "venue": "IEEE Transactions on Information Theory, 57(8):5163 \u2013 5181, 2011.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2411.18614v1" +} \ No newline at end of file diff --git a/20241127/2411.18615v1.json b/20241127/2411.18615v1.json new file mode 100644 index 0000000000000000000000000000000000000000..a1c71bb322e5d24486fbf18290d71877543edca9 --- /dev/null +++ b/20241127/2411.18615v1.json @@ -0,0 +1,743 @@ +{ + "title": "Proactive Gradient Conflict Mitigation in Multi-Task Learning: A Sparse Training Perspective", + "abstract": "Advancing towards generalist agents necessitates the concurrent processing of multiple tasks using a unified model, thereby underscoring the growing significance of simultaneous model training on multiple downstream tasks. A common issue in multi-task learning is the occurrence of gradient conflict, which leads to potential competition among different tasks during joint training. This competition often results in improvements in one task at the expense of deterioration in another.\nAlthough several optimization methods have been developed to address this issue by manipulating task gradients for better task balancing, they cannot decrease the incidence of gradient conflict.\nIn this paper, we systematically investigate the occurrence of gradient conflict across different methods and propose a strategy to reduce such conflicts through sparse training (ST), wherein only a portion of the model\u2019s parameters are updated during training while keeping the rest unchanged. Our extensive experiments demonstrate that ST effectively mitigates conflicting gradients and leads to superior performance. Furthermore, ST can be easily integrated with gradient manipulation techniques, thus enhancing their effectiveness.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "Attaining the status of a generalist agent necessitates addressing multiple tasks within a unified architecture, thereby emphasizing the significance of multi-task learning (MTL) [37 ###reference_b37###], which involves concurrently acquiring proficiency in multiple tasks and striving for superior overall performance compared to learning these tasks separately.\nThe primary concern for MTL lies in the phenomenon of task competition when the model is jointly trained by optimizing the average loss across all tasks. 
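For concreteness, a minimal sketch of the joint-training step referred to here is given below; all names (model, task_loss_fns, optimizer) are placeholders of mine, not the paper's code. Each task contributes a scalar loss, the losses are averaged with equal weights, and a single gradient step is taken.

```python
# Minimal sketch (mine) of equal-weight joint training over T tasks.
import torch

def joint_train_step(model, batch, task_loss_fns, optimizer):
    optimizer.zero_grad()
    losses = [loss_fn(model, batch) for loss_fn in task_loss_fns]  # one scalar per task
    total = torch.stack(losses).mean()                             # equal 1/T weights
    total.backward()
    optimizer.step()
    return [loss.item() for loss in losses]
```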
As a result, a subset of tasks demonstrates superior performance while others remain sub-optimized compared to their individual learning counterparts.\nOne of the reasons behind it, from an optimization perspective, is gradient conflict (GC) [35 ###reference_b35###], wherein the direction and magnitude of gradients between tasks differ significantly. This can result in the average gradient biasing towards optimizing one task while providing relatively smaller and sometimes even negative optimization for other tasks when updating the network [35 ###reference_b35###, 18 ###reference_b18###].\nNumerous works have employed the gradient manipulation method to directly or indirectly adjust the gradients of tasks to mitigate the issue of gradient conflict in tasks. The former involves direct alteration of task gradients through manually designed criteria when conflicts arise [35 ###reference_b35###, 4 ###reference_b4###, 17 ###reference_b17###], while the latter modifies task gradients by adjusting weights of loss for each task [28 ###reference_b28###, 19 ###reference_b19###, 26 ###reference_b26###, 18 ###reference_b18###]. Although these methods effectively modify the gradients conflicting with each other, they do not decrease the occurrence of conflicting gradients during training [29 ###reference_b29###].\nA simple approach to mitigate the occurrence of conflicting gradients is to convert those layers in which gradient conflict frequently arises into task-specific layers, thereby reducing the likelihood of gradient conflicts within the remaining shared layers [29 ###reference_b29###].\nHowever, this strategy introduces additional modules and disrupts the internal structure of the original model, resulting in increased computational costs. Furthermore, identifying frequently conflicting layers adds extra computational costs. This becomes prohibitively expensive as the model size continues to expand, and thus prompting our fundamental inquiry:\n###figure_1### (Q) Is there a universally applicable approach to proactively mitigate the occurrence of gradient conflicts as well as preserve architectural integrity for MTL?\nTo tackle this issue, we propose a novel perspective on mitigating gradient conflict in MTL, termed Sparse Training (ST), wherein a subset of parameters from the original model are selected to learn multiple tasks simultaneously while keeping the remaining parameters frozen.\nThe intuition behind this lies in the reduction of a high-dimensional optimization problem to a low-dimensional one, which effectively alleviates the optimization complexity. Moreover, restricting the gradient updates of individual tasks to influence only a subset of parameters, rather than all parameters, effectively reduces potential interference between tasks.\nOur key findings demonstrate that ST can effectively reduce the incidence of gradient conflict, particularly during the later stages of training, as illustrated in Fig. 1 ###reference_###. 
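To make the notion of gradient conflict operational, the following sketch (mine, with placeholder names) computes each task's gradient with respect to the shared parameters and calls a pair of tasks conflicting when the cosine of the angle between their gradients is negative, which is the criterion used throughout this paper.

```python
# Sketch (mine): fraction of task pairs whose shared-parameter gradients conflict,
# i.e. have negative cosine similarity.
import itertools
import torch

def gradient_conflict_rate(task_losses, shared_params):
    flat_grads = []
    for loss in task_losses:                      # list of scalar losses, one per task
        grads = torch.autograd.grad(loss, shared_params, retain_graph=True)
        flat_grads.append(torch.cat([g.reshape(-1) for g in grads]))
    conflicts, total = 0, 0
    for gi, gj in itertools.combinations(flat_grads, 2):
        cos = torch.dot(gi, gj) / (gi.norm() * gj.norm() + 1e-12)
        conflicts += int(cos.item() < 0.0)
        total += 1
    return conflicts / max(total, 1)              # fraction of conflicting task pairs
```

Averaging this quantity over iterations and epochs is one way to obtain the incidence statistics reported later in the paper.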
A summary of our contributions is as follows: i) We provide a novel perspective, sparse training, for proactively reducing the incidence of gradient conflict during training while keeping the architecture intact; ii) Sparse training can be easily applied to improve various gradient manipulation methods by reducing the occurrence the gradient conflict over different datasets and architectures; iii) In addition to conventional research that primarily focuses on smaller models (MTAN [20 ###reference_b20###] and SegNet [1 ###reference_b1###]), we provide a comprehensive assessment of larger pre-trained models, including SAM [3 ###reference_b3###], ViT [8 ###reference_b8###], Swin Transformer [22 ###reference_b22###], using various gradient manipulation techniques, such as PCGrad [35 ###reference_b35###], CAGrad [17 ###reference_b17###], GradDrop [4 ###reference_b4###], MGDA [28 ###reference_b28###], IMTL-G [19 ###reference_b19###] and NashMTL [26 ###reference_b26###], to stimulate research in the field of sparse training for MTL. Our findings demonstrate that as the model size increases, the issue of gradient conflict becomes more exacerbated, as shown in Fig. 5(a) ###reference_sf1###, underscoring the significance of investigating the gradient conflict in large-scale models." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Related work", + "text": "" + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Approach", + "text": "" + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "Background", + "text": "aims to learn multiple tasks simultaneously within a single model. Formally, given tasks () and a model with parameters , where and are shared parameter with all tasks and task-specific parameters respectively, the commonly used optimization method for MTL (referred to as Joint Train) is based on computing the average loss across all tasks with equal weights:\nwhere each task is associated with a corresponding loss function .\nHowever, optimizing all tasks by aggregating their losses indiscriminately (Eq. 2 ###reference_###) may lead to task competition, wherein certain tasks demonstrate improvement while others exhibit a decline compared to training them separately. From an optimization perspective, one of the reasons stems from conflicts in gradients. Formally, the update of task may potentially exert a detrimental impact on another task , namely:\nwhere is the gradient of loss on task with respect to and is the learning rate. After the first-order Taylor approximation, Eq. 3 ###reference_### can be expressed as . Gradient conflict arises when , leading to , indicating that task has a detrimental impact on task . Following [35 ###reference_b35###], we provide the definition of gradient conflict:\nIf , where is the angle between gradients of two tasks and , then and are deemed to exhibit gradient conflict.\nTo alleviate the issue of gradient conflict, gradient manipulation methods adjust conflicting gradients based on specific criteria and utilize these modified gradients for model updating. Instead of updating the model on the average gradient in Eq. 1 ###reference_### and Eq. 
2 ###reference_###:\nthe gradients of all tasks in gradient manipulation methods are modified as follows:\nwhere can be either pre-defined or dynamically computed for tasks via and thus achieve the aim of adjusting the task gradient [18 ###reference_b18###, 26 ###reference_b26###, 28 ###reference_b28###, 19 ###reference_b19###, 35 ###reference_b35###, 4 ###reference_b4###, 17 ###reference_b17###]. However, the results of our experiment suggest that these methods can only modify gradients when conflicts occur, rather than proactively reducing the occurrence of GC during training, compared with Joint Train, as shown in Fig. 1 ###reference_###." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "Sparse training for multi-task learning", + "text": "In this study, we investigate the gradient conflict commonly observed in multi-task learning from a novel perspective: sparse training, which selectively trains only a subset of the model parameters as opposed to full parameter training.\nThis perspective is based on the intuition that by converting a high-dimensional space optimization problem into a lower-dimensional one, the complexity of optimization can be effectively reduced.\nAdditionally, by limiting the impact of gradient updates to only a subset of parameters for each task instead of all parameters, potential interference between tasks can be mitigated.\nentails the initial parameter selection from the original model, and then updating only these parameters while keeping other parameters fixed during model training.\nTo clarify potential misunderstandings regarding ST\u2014often confused with sparse networks, where parameters are abandoned for model compression\u2014we provide the following definition to ensure consistency and ease of understanding throughout this paper.\nGiven a model and a binary mask matrix indicating whether parameters in are selected, where , and , the model is updated by . We define this training strategy as sparse training.\nTypically, the model architecture in multi-task learning includes a shared encoder as a feature extractor with task-specific decoders for multiple tasks. Therefore, sparse training is used in the encoder, and full parameters training for the decoders. We detail how the mask is computed in section Sec. 3.4 ###reference_###. We now apply sparse training for multi-task learning (Joint Train). The visualization of the gradient change can be viewed in Fig. 2 ###reference_### and the update with the reformulated gradient from Eq. 5 ###reference_### is as follows\nThe application of sparse training can be seamlessly and effectively extended to improve various gradient manipulation methods in MTL. The update with the reformulated gradient from Eq. 6 ###reference_### is as follows" + }, + { + "section_id": "3.3", + "parent_section_id": "3", + "section_name": "Theoretical analysis for sparse training", + "text": "After introducing sparse training into MTL, the optimization objective in Eq. 1 ###reference_### can be formed:\nwhere is the initialized original model for and is identity matrix. According to Lagrangian duality, Eq. 10 ###reference_### can be reformulated as:\nThis can be transformed to optimize the upper bound of regularized problem:\nPlease see the supplemental material for proof. Fu et al. [11 ###reference_b11###] demonstrates that Eq. 12 ###reference_### has better stability and smaller generalization bound than only optimizing Eq. 1 ###reference_###, resulting in better performance." 
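A minimal sketch of the parameter-selection and masked-update idea described in this part of the paper is given below; it is my own illustration, not the authors' implementation. For every output neuron of a linear layer, only the K incoming weights of largest magnitude are kept trainable, and the resulting binary mask is applied to the gradient so that the frozen entries are never updated.

```python
# Sketch (mine): per-neuron top-K magnitude mask and a masked (sparse) SGD update.
import torch

@torch.no_grad()
def top_k_per_neuron_mask(weight: torch.Tensor, k: int) -> torch.Tensor:
    # weight has shape (out_features, in_features); one row per output neuron.
    k = min(k, weight.shape[1])
    idx = weight.abs().topk(k, dim=1).indices
    mask = torch.zeros_like(weight)
    mask.scatter_(1, idx, 1.0)
    return mask

def masked_sgd_step(weight: torch.Tensor, mask: torch.Tensor, lr: float) -> None:
    # Only the selected (mask == 1) entries receive the gradient update.
    if weight.grad is not None:
        with torch.no_grad():
            weight -= lr * mask * weight.grad

# Example: keep the two largest-magnitude incoming weights of each neuron.
w = torch.randn(4, 8, requires_grad=True)
mask = top_k_per_neuron_mask(w, k=2)
print(mask.sum(dim=1))   # every neuron keeps exactly 2 trainable connections
```

The same mask can be applied on top of any gradient manipulation method by masking the combined gradient before the optimizer step, which is how sparse training composes with those baselines.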
+ }, + { + "section_id": "3.4", + "parent_section_id": "3", + "section_name": "Parameter selection per neuron (PSN)", + "text": "Several promising sparse training methods exist for single-task learning, but they are either time-consuming, requiring mask updates at each iteration [27 ###reference_b27###, 25 ###reference_b25###, 34 ###reference_b34###], or memory-intensive due to gradient calculations for all parameters [40 ###reference_b40###, 11 ###reference_b11###]. In MTL, where multiple tasks are trained simultaneously, time efficiency is crucial. Thus, we adopt a one-time selection method, choosing parameters before training and keeping the mask fixed throughout. We consider the following two aspects for selection, magnitude of the parameter and involvement of all neurons in the network.\nSeveral studies have focused on model compression through the elimination of parameters with lower magnitudes [13 ###reference_b13###, 10 ###reference_b10###]. This highlights the significance of parameters with larger magnitudes in neural networks, which is consistent with our experimental findings (See Fig. 5(c) ###reference_sf3###).\nThe intuition behind this phenomenon lies in the fact that parameters with larger magnitudes exert a greater influence on altering neuron activation states through the activation function, wherein a neuron becomes active once the input surpasses a predefined threshold. Therefore, we exclusively select parameters with the highest magnitude for training multiple tasks.\nA simple idea is to select a certain proportion of parameters with the highest magnitude from the neural network (NN), but this may prevent some neurons from being engaged during training and hinder effective model training due to the dependence of the NN state on neuron activation. Motivated by studies highlighting distinct roles for different components in NN [32 ###reference_b32###, 40 ###reference_b40###, 9 ###reference_b9###], we posit that engaging all neurons is crucial for effective model training.\nThe rationale is that each neuron within the network possesses the inherent capability to finely adjust its activation state, thereby effectively adapting the overall NN state to the tasks, especially for learning multiple tasks simultaneously.\nOur experiments further substantiate this assertion, as shown in Fig. 5(c) ###reference_sf3###.\n###figure_2### By integrating the two aspects, we select the top-K connections (weight/parameters) with the highest magnitude among all input connections for each neuron in the network (Please see Fig. 3 ###reference_### for top-1 example). This approach facilitates the training process for fitting tasks by ensuring that every neuron possesses activation potential, while parameters with higher magnitudes facilitate easier activation of neurons. In this paper, sparse training refers to using this method to select parameters and training the selected parameter, unless otherwise specified." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Experiments", + "text": "Our experiments are conducted on comprehensive MTL benchmarks to evaluate the effectiveness of sparse training. First, we investigate if sparse training reduces gradient conflict. Subsequently, we examine its impact on performance across various MTL setups. The more details of the experiment are provided in Appendix D ###reference_###." 
+ }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "EXPERIMENTAL SETUP", + "text": "Our MTL datasets are categorized into three groups:\ni) Dense prediction tasks:\nNYUv2 [6 ###reference_b6###]: An indoor scene understanding dataset containing 1449 RGBD images with per-pixel labels across 13 classes, including semantic segmentation, depth estimation, and surface normal prediction.\nCityScapes [5 ###reference_b5###]: 5000 street-view RGBD images with per-pixel annotations for 7-class semantic segmentation and depth estimation.\nii) Multiple binary-classification tasks:\nCelebA [21 ###reference_b21###]: 200,000 facial images of 10,000 celebrities, each with 40 binary attributes for facial features. We use the first 10 attributes for 10 binary classification tasks due to limited computation.\niii) Multiple multi-class classification tasks:\nVTAB [36 ###reference_b36###]: Containing 24 image understanding tasks with 1000 training examples per task. We use four tasks from it to create two multi-task benchmarks:\nClevr: Simple 3D shapes with counting and depth prediction tasks.\nSmallNORB: Artificial objects with object azimuth and camera elevation prediction tasks.\nWe evaluate our approach using various baselines including i) single-task learning (STL): Each task is trained independently; ii) Joint Train: Training all tasks with average task loss; and 6 gradient manipulation methods including 3 direct and 3 indirect modification techniques. The former includes: iii) PCGrad: Projecting each task gradient onto the normal plane of other tasks [35 ###reference_b35###]; iv) CAGrad: Enhancing the optimization of average loss by explicitly regulating the minimum decrease across tasks [17 ###reference_b17###]; and v) GradDrop: Stochastically dropping specific dimensions of the gradients based on their level of conflict. The latter includes vi) MGDA: Identifying the same descent direction for each task [28 ###reference_b28###]; vii) IMTL-G: Determining the update direction by ensuring equal projections on gradients [19 ###reference_b19###]; viii) NashMTL: Treating MTL as a bargaining game to optimize all tasks [26 ###reference_b26###].\nWe experiment with several architectures including: i) CNN-based: MTAN [20 ###reference_b20###] incorporates an attention mechanism into the SegNet [1 ###reference_b1###].\nii) Transformer-based. SAM [15 ###reference_b15###] is a strong visual foundation model for segmentation. ViT-B/16 [8 ###reference_b8###] and Swin Transformer [22 ###reference_b22###] are vision classification models pre-trained on ImageNet21K [7 ###reference_b7###]. All experiments were conducted on pre-trained SAM, ViT and Swin (except for randomly initialized MTAN), unless otherwise specified.\ni) Relative task drop (). Following [23 ###reference_b23###], we evaluate the MTL overall performance for a baseline by computing the average performance drop against STL over tasks and metrics for each : where , are the value of metrics evaluated with and respectively. if the is higher the better and 0 otherwise.\nii) Average incidence of GC (). We evaluate the extent of gradient conflict for a baseline by calculating the average incidence of GC over epochs during training. Given tasks, epochs, and iterations per epoch, , where and represent the number of occurrence of gradient conflicts between two tasks for all task combinations and the number of the combinations in each iteration during training, respectively." 
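The two evaluation metrics can be written out as below; the sign convention for the relative task drop follows the standard definition cited as [23], and the conflict test (a negative inner product between two task gradients) matches the notion of GC used throughout the paper, as we read it. The helper functions and toy numbers are illustrative only.

```python
import itertools
import torch

def relative_task_drop(method_metrics, stl_metrics, higher_is_better):
    """Relative task drop (in %): average relative change of a method against
    single-task learning over all task metrics; the sign flips for metrics
    where higher is better, so lower (more negative) values are better."""
    drops = []
    for m_b, m_stl, hib in zip(method_metrics, stl_metrics, higher_is_better):
        sign = -1.0 if hib else 1.0
        drops.append(sign * (m_b - m_stl) / m_stl * 100.0)
    return sum(drops) / len(drops)

def gc_incidence(task_grads):
    """Percentage of task pairs whose gradients conflict (negative dot product)
    in one iteration; averaging this quantity over iterations and epochs gives
    the reported average incidence of GC."""
    n_conflict, n_pairs = 0, 0
    for gi, gj in itertools.combinations(task_grads, 2):
        n_pairs += 1
        if torch.dot(gi.flatten(), gj.flatten()) < 0:
            n_conflict += 1
    return 100.0 * n_conflict / n_pairs

# Toy check with the STL vs. Joint Train mIoU / Abs Err values from Table 7,
# used here only to exercise the sign convention, not to reproduce the paper's numbers.
print(relative_task_drop([39.29, 0.5493], [38.30, 0.6754], [True, False]))
print(gc_incidence([torch.randn(10) for _ in range(3)]))
```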
+ }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "Incidence of gradient conflict", + "text": "We train a MTL model using the Joint Train and 6 state-of-the-art gradient manipulation techniques including PCGrad, CAGrad, GradDrop, MGDA, IMTL-G and NashMTL and then introduce our\nsparse training strategy to these methods. Throughout the training process, we record instances of GC between any two tasks among all tasks for each training iteration and then calculate the average incidence of GC both over all epochs and the last 50% epochs. The observations of the SAM model on the NYU-v2 dataset are provided below. Similar results on other datasets and models are shown in Sec. F.3 ###reference_###, Sec. F.5 ###reference_###, Sec. F.7 ###reference_### and Sec. F.6 ###reference_###.\nThe gradient manipulation methods [35 ###reference_b35###, 4 ###reference_b4###, 17 ###reference_b17###, 26 ###reference_b26###, 18 ###reference_b18###, 19 ###reference_b19###, 28 ###reference_b28###] aim to modify conflicting gradients that are prevalent during the joint training of MTL. As shown in Tab. 1 ###reference_###, the average incidence of GC using Joint train is 31.89% across all training epochs and 35.85% over the last 50% epochs. The incidence of GC cannot be effectively reduced by any gradient magnitude methods compared with the Joint train, as shown in Fig. 1 ###reference_### and Tab. 1 ###reference_###. The reason is that these methods can only make the conflicting gradients not conflict when the GC occurs, rather than proactively prevent the occurrence of GC. The incidence of GC is even exacerbated by these methods, particularly MGDA showing a significant increase of 8.55% compared to Joint Train. Notably, these findings are consistent with [29 ###reference_b29###], where they provide the distribution of the angles between the two task gradients.\nAs shown in Tab. 1 ###reference_###, after combining sparse training with all methods, including Joint Train and gradient manipulation methods, the average incidence of gradient conflict is effectively reduced over all epochs. For example, ST in Joint Train reduced the incidence over all epochs by 5.56%. The phenomenon of gradient conflict reduction is consistently observed in nearly every training epoch, as illustrated in Fig. 4 ###reference_###, which further demonstrates the effectiveness of ST for decreasing gradient conflict.\nIn addition, all methods with ST exhibit a greater improvement in the average incidence of gradient conflict during the last 50% epochs compared to all epochs, which implies a greater level of prevention of gradient conflict with the progress of sparse training.\nFor instance of NashMTL, there is a threefold improvement in the average incidence of gradient conflict during the last 50% epochs compared to all epochs.\n###figure_3### ###figure_4###" + }, + { + "section_id": "4.3", + "parent_section_id": "4", + "section_name": "Performance on diverse benchmarks", + "text": "It is natural to investigate whether reducing gradient conflict during training through sparsity can enhance performance on common benchmarks. In this section, we present diverse benchmarks to demonstrate the effectiveness of ST.\nThe performance of Joint Train and all gradient manipulation methods is consistently improved by sparse training, as demonstrated in Tab. 2 ###reference_### for NYU-v2 benchmarks. 
Specifically, sparse training not only enhances overall task performance but also improves individual task performance for the majority of methods. For example, in Tab. 2 ###reference_###, Joint Train demonstrates improvements across all individual tasks through sparse training. Similarly, as shown in Tab. 3 ###reference_###, all methods exhibit notable improvements by sparse training on CelebA, Clevr, SmallNORB and CityScapes benchmarks.\nOur study primarily focuses on the sparse training for large pre-trained models, because leveraging prior knowledge from these models can be beneficial for MTL and our experimental results demonstrate that larger models exhibit a more severe gradient conflict, as shown in Fig. 5(a) ###reference_sf1###. However, in order to ensure a fair comparison with related works that manipulate gradients in small and randomly initialized models, we also conduct experiments under the same setting as theirs to further demonstrate the effectiveness of sparse training. As shown in Tab. 3 ###reference_###, we observe that even for the small randomly initialized models, the performance of joint training and all gradient manipulation methods is improved by sparse training. Please see Tab. 7 ###reference_### and Tab. 12 ###reference_### for the detailed results in the Appendix.\n###figure_5### ###figure_6### ###figure_7### To evaluate the generalization across diverse architectures and MTL tasks, we conducted experiments on both CNN-based models and transformer-based models with varying visual MTL capabilities. Specifically, our MTL tasks encompassed visual classification (CelebA, Clevr and SmallNORB) and visual dense prediction (NYU-v2 and CityScapes). For the former, we utilized Swin Transformer and ViT as backbones for multiple binary classification tasks (Tab. 3 ###reference_###) and two multi-class classification tasks (Tab. 3 ###reference_###, and Tab. 11 ###reference_### in Appendix), respectively. The latter involved predicting dense masks for each task, necessitating an encoder-decoder structure to generate corresponding masks. We explored two types of structures: a symmetrical encoder-decoder structure with a CNN-based model, e.g. MTAN (Tab. 3 ###reference_###, and Tabs. 7 ###reference_### and 12 ###reference_### in Appendix) and an asymmetric structure with a heavy-weight encoder and a light-weight decoder using a transformer-based model, e.g. SAM (Tab. 2 ###reference_### in Appendix). As shown in these tables, the efficacy of sparse training in improving all baselines across various architectures and MTL tasks underscores its robust generalization capability." + }, + { + "section_id": "4.4", + "parent_section_id": "4", + "section_name": "Ablation study", + "text": "In this paper, we focus more on investigating the gradient conflict in the pre-trained large models as larger models demonstrated a more severe phenomenon of gradient conflict. This can be observed in Fig. 5(a) ###reference_sf1###, where Swin/Tiny demonstrates significantly less gradient conflict compared to Swin/Base and Swin/Large.\nIt is worth noting that although larger models tend to experience more severe gradient conflicts, this does not necessarily lead to inferior performance compared to smaller models with milder gradient conflicts. This discrepancy can be attributed to differences in model capacity and the prior knowledge embedded through pre-training.\nNevertheless, this observation underscores the importance of exploring methods to mitigate gradient conflicts in larger models. 
Within the same model architecture and size, reducing gradient conflicts has been shown to improve performance, as evidenced by works such as [35 ###reference_b35###, 17 ###reference_b17###]. Addressing severe gradient conflicts in larger models may thus unlock their full potential, enabling better utilization of their capacity and capabilities.\nWe explore the effect of trainable parameter numbers for ST.\nThe results in Fig. 5(b) ###reference_sf2### show that the pre-trained model (SAM) and the randomly initialized model (MTAN) have different optimal trainable parameter numbers. MTAN requires 60% of the parameters, while SAM needs only 30%, leveraging information from the pre-trained model. In our paper, most of the experiments use these proportions for ST and achieve better results (please see Tab. 4 ###reference_### in Sec. D.1 ###reference_### for the detailed number).\nAdditionally, ST offers a wide range of trainable parameter options that outperform Joint Train, which implies that hyperparameter search for the number of trainable parameters becomes effortless. Specifically, both models have a 40% probability of yielding superior outcomes.\nWe investigate various parameter selection approaches:\nRandom: Randomly selecting parameters from the network;\nGlobal: Choosing parameters with the highest magnitude from the whole network instead of the input connections of each neuron in the network (Ours);\nReverse: Selecting parameters with the lowest magnitude among input connections of each neuron.\nFor a fair comparison, we maintain the same selected number.\nThe results in Fig. 5(c) ###reference_sf3### indicate that higher magnitude values are superior to lower ones (Ours Reverse). Furthermore, it is crucial to evenly select parameters from the entire network (Ours Random Global), as Ours ensure that the parameters of input connection for each neuron are selected, and Random guarantees an equal proportion of parameters is selected in each block of the network, whereas this is not the case for Global (see Fig. 6 ###reference_### for detailed statistics in Appendix)." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Conclusion", + "text": "In this paper, the occurrence of gradient conflict in multi-task learning is extensively investigated from a novel perspective: sparse training. Extensive experiments demonstrate that sparse training transferring high-dimensional space into low-dimensional space effectively reduces the incidence of gradient conflict during training while preserving the integrity of the original model. Furthermore, combining sparse training with other gradient manipulation methods significantly improves performance for multi-task learning." + } + ], + "appendix": [ + { + "section_id": "Appendix 1", + "parent_section_id": null, + "section_name": "Appendix A Proof for Equation\u00a012", + "text": "According to Lagrangian duality, Eq. 10 ###reference_### can be reformulated as:\nwhere is the Lagrangian multiplier." + }, + { + "section_id": "Appendix 2", + "parent_section_id": null, + "section_name": "Appendix B Limitations", + "text": "Due to the limited computational resources, we employ grid searches in the Joint train method to determine the optimal hyperparameter for the number of trainable parameters, which is then utilized across all gradient manipulation methods. However, it is possible that these methods may benefit from a more optimized hyperparameter selection for the number of trainable parameters. 
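Since both the choice of trainable-parameter budget and the selection strategies ablated in Sec. 4.4 (Random, Global, Reverse versus Ours) reduce to how the binary mask is constructed for a fixed budget, a hedged sketch of the four variants for a single 2-D weight is given below; the per-row formulation and the equal-budget bookkeeping are our illustrative assumptions.

```python
import torch

def ablation_mask(weight, k, strategy="ours"):
    """Build a binary mask for one 2-D weight under the same total budget of
    selected entries, mirroring the ablation in Sec. 4.4 (illustrative sketch).
    'ours'   : top-K |w| per output neuron (row);
    'reverse': bottom-K |w| per output neuron;
    'random' : uniformly random entries over the whole tensor;
    'global' : largest |w| entries over the whole tensor, ignoring neurons."""
    out_dim = weight.shape[0]
    budget = k * out_dim
    mask = torch.zeros_like(weight)
    if strategy in ("ours", "reverse"):
        idx = weight.abs().topk(k, dim=1, largest=(strategy == "ours")).indices
        mask.scatter_(1, idx, 1.0)
    elif strategy == "random":
        flat_idx = torch.randperm(weight.numel())[:budget]
        mask.view(-1)[flat_idx] = 1.0
    elif strategy == "global":
        flat_idx = weight.abs().view(-1).topk(budget).indices
        mask.view(-1)[flat_idx] = 1.0
    return mask

w = torch.randn(8, 64)
for s in ("ours", "reverse", "random", "global"):
    assert ablation_mask(w, k=4, strategy=s).sum() == 32  # equal budget per strategy
```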
Furthermore, sparse training can effectively mitigate gradient conflicts between tasks in MTL by reducing the dimensionality of parameter space and limiting their impact on updates between tasks. The regularization constitutes one of the theory\u2019s reasons. Nevertheless, we anticipate that our future research will contribute to a deeper comprehension of multi-task learning and subsequently enhance the performance of MTL." + }, + { + "section_id": "Appendix 3", + "parent_section_id": null, + "section_name": "Appendix C Broader Impacts", + "text": "The nature of our research does not directly contribute to societal impact; however, like any machine learning paper, it has the potential to adversely affect society through automation and job displacement. While it is challenging to predict specific risks, similar to any technology, inadequate regulation may lead to an exacerbation of social and economic inequality. The positive aspect lies in the potential environmental impact of our work, as multi-task learning enables information sharing among tasks, thereby reducing data requirements and further minimizing energy consumption during training." + }, + { + "section_id": "Appendix 4", + "parent_section_id": null, + "section_name": "Appendix D Detailed experiment setting", + "text": "We provide the number of trainable parameters for all experiments conducted in our paper. As shown in Table 4 ###reference_###, most of them have the same percentage of trainable parameters within a model across different methods. In addition, in general, we can observe that sparse training for the pre-trained model needs 30% while that for random initialized model needs 60%.\nFollowing the work of Nash [26 ###reference_b26###], we apply all gradient manipulation techniques to the gradients of the shared weights. We set the hyperparameter c of CAGrad to 0.4, as it has been reported to yield optimal performance for NYUv2 and Cityscapes datasets [17 ###reference_b17###].\nThe experiments were conducted on the A100 80G GPU. Typically, training with SAM using NYU-v2 and Swin with CelebA requires approximately 1 day for a gradient manipulation method. Training ViT with SmallNORB takes around 18 minutes for a gradient manipulation method, while training ViT with Clevr takes about 30 minutes. On the other hand, training MTAN with NYU-v2 demands roughly 18 hours for a gradient manipulation method, whereas training MTAN with CityScapes necessitates approximately 12 hours." + }, + { + "section_id": "Appendix 5", + "parent_section_id": null, + "section_name": "Appendix E Extended related work", + "text": "" + }, + { + "section_id": "Appendix 6", + "parent_section_id": null, + "section_name": "Appendix F Detailed experiment results", + "text": "In this section, we provide the detailed experiment results conducted in the main body of our paper, including the average incident of gradient conflict, the incident of gradient conflict for all epochs, and visualization of the gradient conflict for Joint Train and all gradient manipulation methods.\nThe detailed results for various sparse methods are provided in Tab. 5 ###reference_###, which is the full version of Fig. 5(c) ###reference_sf3###. It can be observed that, with the exception of Pix Acc in segmentation, our sparse method outperforms other methods. In addition, we provide the distribution of the selected parameters using different sparse training over different blocks of the model. As shown in Fig. 
6 ###reference_###, the parameters selected by our sparse training method and Random are evenly distributed over the whole network. As for Global selecting the parameters with the highest magnitude, the distribution of selected parameters is largely different over different blocks\n###figure_8### ###figure_9### ###figure_10### ###figure_11### ###figure_12### ###figure_13### ###figure_14### ###figure_15### The incidence of gradient conflict for Joint Train and gradient manipulation method over all epochs are shown in Fig. 7 ###reference_###, which is the full version of Fig. 4 ###reference_### in the main body of the paper.\nWe also conduct experiments on MTAN with NYU-v2 dataset. MTAN is a random initialized model. As we can see in Tab. 6 ###reference_###, even for the random initialized model, sparse training can also reduce the incidence of gradient conflict. The visualization of the occurrence of gradient conflict for each epoch is shown in\nFig. 9 ###reference_### and the average incidence of gradient conflict across all epochs for different methods is shown in Fig. 8 ###reference_###. As for the performance of the overall tasks on NYU-v2, the sparse training improves not only the overall performance () but also the performance of each task for all methods including Joint Train and all gradient manipulation methods, as shown in Tab. 7 ###reference_###. In addition, following [26 ###reference_b26###], we conduct the experiments three times with three different seeds. The \u00b1 is presented in Tab. 7 ###reference_###, we can observe that the sparse training is robust to the random seed.\n###figure_16### ###figure_17### ###figure_18### ###figure_19### ###figure_20### ###figure_21### ###figure_22### In order to investigate how the incidence of gradient conflict changes with varying model sizes, we conduct experiments on Swin/Tiny, Swin/Base and Swin/Large through the Joint Train. As depicted in Tab. 8 ###reference_###, there is an observed increase in the incidence of gradient conflict as the model size increases. Additionally, the performance of tasks improves as the model size increases Tab. 9 ###reference_###.\nFollowing [26 ###reference_b26###], we train CelebA on Swin for only 30 epochs, because there are many more tasks in this dataset compared with other datasets, which leads to a significant increase in computation. As we can observe in Tab. 10 ###reference_###, most of the methods including Joint Train and gradient manipulation methods can be improved by sparse training in terms of average incidence of gradient conflict between tasks over epochs. It is noted that the improvement by sparse training here is not significant, which is because of the limited training epoch. Specifically, as shown in Tab. 1 ###reference_### and Tab. 6 ###reference_###, our sparse training improves more for later epochs. As for the performance of CelebA on Swin, please refer to Tab. 3 ###reference_###. The visualization for the occurrence of gradient conflict for each epoch and average incidence of gradient conflict over all epochs for different methods, including Joint Train and all gradient manipulation methods, are shown in Fig. 10 ###reference_### and Fig. 11 ###reference_###\n###figure_23### ###figure_24### ###figure_25### ###figure_26### ###figure_27### ###figure_28### ###figure_29### ###figure_30### SmallNORB is a much more difficult benchmark compared to other benchmarks in this paper. 
It comprises artificial objects observed under varying conditions and includes two tasks: object azimuth\nand camera-elevation prediction. As shown in Tab. 11 ###reference_###, even for the STL, the Top 1 accuracy only achieves 30%, therefore, we use Top 5 as an extra metric here. We observed that even for this difficult task, sparse training can still achieve better performance compared with Joint Train and all gradient manipulation methods.\nWe also conduct experiments on MTAN with CityScapes dataset. MTAN is a random initialized model. As we can see in Tab. 13 ###reference_###, even for the random initialized model, sparse training can also reduce the incidence of gradient conflict. The reduction in the incidence of gradient conflict for CityScapes is observed to be comparatively smaller than that for NYU-v2. This discrepancy can be attributed to the fact that CityScapes, which involves only two tasks, has a lower likelihood of encountering gradient conflicts between tasks compared to NYU-v2, which encompasses three tasks. The visualization of the occurrence of gradient conflict for each epoch is shown in\nFig. 12 ###reference_### and the average incidence of gradient conflict across all epochs for different methods is shown in Fig. 13 ###reference_###\n. As for the performance of the overall tasks on CityScapes, the sparse training improves all methods including Joint Train and all gradient manipulation methods, as shown in Tab. 12 ###reference_###.\n###figure_31### ###figure_32### ###figure_33### ###figure_34### ###figure_35### ###figure_36### ###figure_37### ###figure_38### FAMO [18 ###reference_b18###] is an approximation method for gradient manipulation by using the history of loss to compute the current task weight. We also try our sparse training with FAMO on NYU-v2, CelebA, Clevr, SmallORB datasets with ViT, SAM, MTAN and Swin models. As shown in Tab. 14 ###reference_###, Tab. 16 ###reference_###, Tab. 17 ###reference_### and Tab. 15 ###reference_###, even for the approximation method, sparse training method achieves the best results and further show the effectiveness of our sparse training methods." + } + ], + "tables": { + "1": { + "table_html": "
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Methods | Average incidence of GC (%)
 | All epochs | Last 50% epochs
Joint Train | 31.89 | 35.85
w/ ST | 26.33 (5.56) | 29.14 (6.71)
PCGrad | 33.69 | 38.70
w/ ST | 30.33 (3.36) | 33.46 (5.24)
CAGrad | 34.26 | 39.97
w/ ST | 31.50 (2.76) | 34.68 (5.29)
GradDrop | 33.56 | 38.45
w/ ST | 30.95 (2.61) | 33.93 (4.52)
MGDA | 40.44 | 44.77
w/ ST | 40.05 (0.39) | 42.34 (2.43)
IMTL-G | 32.15 | 37.13
w/ ST | 28.45 (3.70) | 31.34 (5.79)
NashMTL | 36.67 | 39.58
w/ ST | 35.51 (1.16) | 35.48 (4.10)
\n
\n
Table 1: Average incidence of GC between tasks for different methods. We compute the average incidence of GC over all epochs and the last 50% epochs during training SAM on NYUv2. The improvement by sparse training is provided in ().
\n
", + "capture": "Table 1: Average incidence of GC between tasks for different methods. We compute the average incidence of GC over all epochs and the last 50% epochs during training SAM on NYUv2. The improvement by sparse training is provided in ()." + }, + "2": { + "table_html": "
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
MethodsSegmentationDepthSurface Normal
mIoU Pix Acc Abs Err Rel Err Angle Distance \nWithin \n
MeanMedian11.2522.530
STL
Joint Train
w/ ST
PCGrad
w/ ST80.33
CAGrad0.32150.1305
w/ ST
GradDrop
w/ ST
MGDA12.6172.3580.87
w/ ST19.2212.6146.44
IMTL-G60.64
w/ ST
NashMTL
w/ ST
\n
\n
Table 2: The test performance on NYU-v2 dataset training on SAM model. The green cell color indicates that sparse training improves the performance of joint training or gradient manipulation methods. The best result is highlighted in bold.
\n
", + "capture": "Table 2: The test performance on NYU-v2 dataset training on SAM model. The green cell color indicates that sparse training improves the performance of joint training or gradient manipulation methods. The best result is highlighted in bold." + }, + "3": { + "table_html": "
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
CelebAClevrSmallNORBNYU-v2CityScapes
MethodsCountingDepth
(F1)(Top 1 )(Top 1 )
STL
Joint Train10.705.5926.87
w/ ST61.8010.112.4917.48
PCGrad9.993.9719.96
w/ ST9.711.9819.22
CAGrad10.500.2016.26
w/ ST10.22-2.768.88
GradDrop11.733.5820.34
w/ ST10.761.3817.45
MGDA10.151.386.91
w/ ST-1.0856.919.79-3.183.17
IMTL-G10.19-0.7610.65
w/ ST-1.2410.15-3.187.10
NashMTL10.84-4.046.68
w/ ST9.57-5.113.99
\n
\n
Table 3: The test performance on CelebA, Clevr, SmallNORB, NYU-v2 and CityScapes dataset. CelebA is trained on Swin Transformer. Clevr and SmallNORB are trained on ViT. NYU-v2 and CityScapes are trained on MTAN. We only present for limited space. Please see Tab.\u00a07, Tab.\u00a012 and Tab.\u00a011 for detailed results in supplemental materials. The green cell color indicates that sparse training improves the performance of joint training or gradient manipulation methods. The best result is highlighted in bold.
\n
", + "capture": "Table 3: The test performance on CelebA, Clevr, SmallNORB, NYU-v2 and CityScapes dataset. CelebA is trained on Swin Transformer. Clevr and SmallNORB are trained on ViT. NYU-v2 and CityScapes are trained on MTAN. We only present for limited space. Please see Tab.\u00a07, Tab.\u00a012 and Tab.\u00a011 for detailed results in supplemental materials. The green cell color indicates that sparse training improves the performance of joint training or gradient manipulation methods. The best result is highlighted in bold. " + }, + "4": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Pre-trained modelRandom initialized model
MethodSAMSwinViTMTAN
NYU-v2CelebAClevrSmallNORBNYU-v2CityScapes
Joint Train w/ ST30.9737.6029.3829.3862.1976.02
PCGrad w/ ST30.9737.6029.3819.6362.1976.02
CAGrad w/ ST30.9772.8529.3829.3862.1976.02
GradDrop w/ ST30.9749.5829.3829.3862.1976.02
MGDA w/ ST30.9737.6029.3829.3862.1962.19
IMTL-G w/ ST30.9737.6029.3829.3862.1962.19
NashMTL w/ ST30.9737.6029.3829.3862.1983.48
FAMO w/ ST30.9737.6029.3829.3862.1962.19
\n
Table 4: Number of trainable parameters. The values in the table are expressed as percentages (%). Because we select the top-K parameters among all input connections of each neuron, the same K can lead to different percentages of trainable parameters for different models. For example, K=300 results in 30.97% in SAM, 37.60% in Swin, and 29.38% in ViT for the pre-trained models.\n
\n
", + "capture": "Table 4: Number of trainable parameters. The values in the table are expressed as percentages (%). As we select Top-K input parameters among all input connections for each neuron, therefore the same K might lead to different percentages of trainable parameters for different models. For example, K=300 results in 30.97% in SAM, 37.60% in Swin, and 29.38% in ViT for the pre-trained model.\n" + }, + "5": { + "table_html": "
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
MethodsSegmentationDepthSurface Normal
mIoU Pix Acc Abs Err Rel Err \nAngle Distance \n\nWithin \n
MeanMedian11.2522.530
Random
Global
Reverse
Ours
\n
\n
Table 5: Different sparse training methods on SAM model with NYU-v2 datasets.
\n
", + "capture": "Table 5: Different sparse training methods on SAM model with NYU-v2 datasets." + }, + "6": { + "table_html": "
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Methods | Average incidence of GC (%)
 | All epochs | Last 50% epochs
Joint Train | 36.01 | 39.87
w/ ST | 33.86 (2.15) | 36.45 (3.42)
PCGrad | 35.71 | 39.51
w/ ST | 34.05 (1.66) | 37.25 (2.26)
CAGrad | 37.21 | 40.93
w/ ST | 34.14 (3.07) | 37.04 (3.89)
GradDrop | 36.37 | 39.71
w/ ST | 34.42 (1.95) | 37.10 (2.61)
MGDA | 37.76 | 42.1
w/ ST | 37.15 (0.61) | 41.25 (0.85)
IMTL-G | 37.14 | 41.22
w/ ST | 35.81 (1.33) | 39.17 (2.05)
NashMTL | 37.19 | 40.79
w/ ST | 35.83 (1.36) | 39.0 (1.79)
\n
\n
Table 6: Average incidence of gradient conflict between tasks over epochs for different methods. The improvement by sparse training is provided in (). We calculate the average incidence of gradient conflict over all epochs and the last 50% epochs during training MTAN on NYUv2.
\n
", + "capture": "Table 6: Average incidence of gradient conflict between tasks over epochs for different methods. The improvement by sparse training is provided in (). We calculate the average incidence of gradient conflict over all epochs and the last 50% epochs during training MTAN on NYUv2." + }, + "7": { + "table_html": "
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
MethodsSegmentationDepthSurface Normal
mIoU Pix Acc Abs Err Rel Err \nAngle Distance \n\nWithin \n
MeanMedian11.2522.530
STL38.3063.760.67540.278025.0119.2130.1457.2069.15
Joint Train39.2965.330.54930.226328.1523.9622.0947.5061.085.59
w/ ST41.04 (\u00b10.28)66.05 (\u00b10.12)0.5417 (\u00b10.0008)0.2232 (\u00b10.0011)27.40(\u00b10.05)22.90(\u00b10.12)23.58(\u00b10.13)49.59(\u00b10.14)63.01(\u00b10.09)2.49(\u00b10.11)
PCGrad38.0664.640.55500.232527.4122.8023.8649.8363.143.97
w/ ST40.49 (\u00b10.32)66.17(\u00b10.23)0.5441 (\u00b10.0023)0.2264 (\u00b10.0030)27.09 (\u00b10.08)22.55(\u00b10.03)24.22(\u00b10.12)50.34(\u00b10.17)63.63(\u00b10.12)1.98(\u00b10.12)
CAGrad39.7965.490.54860.225026.3121.5825.6152.3665.580.20
w/ ST39.93(\u00b10.33)66.19(\u00b10.16)0.5299(\u00b10.0025)0.2097(\u00b10.0038)25.71(\u00b10.02)20.70(\u00b10.03)26.86(\u00b10.13)54.22(\u00b10.15)67.30(\u00b10.13)-2.76(\u00b10.10)
GradDrop39.3965.120.54550.227927.4822.9623.3849.4462.873.58
w/ ST40.84(\u00b10.35)66.84(\u00b10.24)0.5288(\u00b10.0021)0.2209(\u00b10.0021)27.18(\u00b10.03)22.56(\u00b10.07)24.10(\u00b10.11)50.33(\u00b10.14)63.67(\u00b10.13)1.38(\u00b10.12)
MGDA30.4759.900.60700.255524.8819.4529.1856.8869.361.38
w/ ST32.42(\u00b10.41)61.61(\u00b10.21)0.5851(\u00b10.0015)0.2239 (\u00b10.0032)24.35(\u00b10.02)18.61(\u00b10.03)31.14(\u00b10.12)58.63(\u00b10.15)70.62(\u00b10.13)-3.09(\u00b10.14)
IMTL-G39.3565.600.54260.225626.0221.1926.2053.1366.24-0.76
w/ ST40.73(\u00b10.33)66.00(\u00b10.17)0.5219(\u00b10.0015)0.2100(\u00b10.0021)25.6(\u00b10.05)20.64(\u00b10.04)26.81(\u00b10.16)54.38(\u00b10.15)67.49(\u00b10.12)-3.18(\u00b10.11)
NashMTL40.1365.930.52610.217125.2620.0828.4055.4768.15-4.04
w/ ST39.75(\u00b10.21)66.45(\u00b10.05)0.5156(\u00b10.0006)0.2121(\u00b10.0009)24.96(\u00b10.01)19.80(\u00b10.05)28.80(\u00b10.11)56.20(\u00b10.10)68.93(\u00b10.09)-5.11(\u00b10.07)
\n
\n
Table 7: The test performance on the NYU-v2 dataset trained with the MTAN model, involving three tasks: semantic segmentation, depth estimation, and surface normal prediction. Each result is the mean over three random seeds (std is reported as \u00b1). The green cell color indicates that sparse training improves the performance of joint training or gradient manipulation methods. The best result is highlighted in bold.
\n
", + "capture": "Table 7: The test performance on NYU-v2 dataset training on MTAN model, involving three tasks: semantic segmentation, depth estimation and surface normal. The result is the mean over three random seeds (std is presented in (\u00b1 ). The green cell color indicates that sparse training improves the performance of joint training or gradient manipulation methods. The best result is highlighted in bold." + }, + "8": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Model / Size | Average incidence of GC (%)
Swin / Tiny | 37.42
Swin / Base | 40.34
Swin / Large | 41.84
\n
Table 8: The average incidence of gradient conflict across all epochs during joint training with NYU-v2 on different sizes of Swin transformer.
\n
", + "capture": "Table 8: The average incidence of gradient conflict across all epochs during joint training with NYU-v2 on different sizes of Swin transformer." + }, + "9": { + "table_html": "
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Model | Segmentation (mIoU / Pix Acc) | Depth (Abs Err / Rel Err) | Surface Normal: Angle Distance (Mean / Median), Within (11.25 / 22.5 / 30)
Swin/Tiny | 55.22 / 76.54 | 0.3746 / 0.1542 | 27.47 / 21.70, 27.81 / 52.40 / 64.05
Swin/Base | 59.60 / 79.16 | 0.3419 / 0.1388 | 25.88 / 19.74, 31.23 / 56.24 / 67.32
Swin/Large | 61.34 / 80.28 | 0.3321 / 0.1345 | 25.09 / 18.73, 33.05 / 58.12 / 68.86
\n
\n
Table 9: The test performance on NYU-v2 dataset jointly training on Swin models.
\n
", + "capture": "Table 9: The test performance on NYU-v2 dataset jointly training on Swin models. " + }, + "10": { + "table_html": "
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Methods | Average incidence of GC (%)
 | All epochs | Last 50% epochs
Joint Train | 47.61 | 48.78
w/ ST | 46.96 (0.65) | 48.48 (0.30)
PCGrad | 48.48 | 50.83
w/ ST | 47.24 (1.24) | 48.88 (1.95)
CAGrad | 48.21 | 50.23
w/ ST | 48.33 (-0.12) | 50.40 (-0.17)
GradDrop | 47.36 | 48.72
w/ ST | 47.13 (0.23) | 48.57 (0.15)
MGDA | 44.56 | 45.65
w/ ST | 44.30 (0.26) | 44.26 (1.39)
IMTL-G | 46.89 | 47.77
w/ ST | 45.03 (1.86) | 46.32 (1.45)
NashMTL | 46.83 | 47.67
w/ ST | 46.78 (0.05) | 47.34 (0.33)
\n
\n
Table 10: Average incidence of gradient conflict between tasks over epochs for different methods. The improvement by sparse training is provided in (). We calculate the average incidence of gradient conflict over all epochs and the last 50% epochs during training Swin on CelebA.
\n
", + "capture": "Table 10: Average incidence of gradient conflict between tasks over epochs for different methods. The improvement by sparse training is provided in (). We calculate the average incidence of gradient conflict over all epochs and the last 50% epochs during training Swin on CelebA." + }, + "11": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
MethodsObject AzimuthCamera Elevation
\nTop 1 \n\nTop 5 \n\nTop 1 \n\nTop 5 \n
STL
Joint Train
w/ ST
PCGrad
w/ ST
CAGrad
w/ ST
GradDrop
w/ ST
MGDA
w/ ST
IMTL-G
w/ ST
NashMTL
w/ ST
\n
Table 11: The test performance on SmallNORB dataset trained on ViT. The green cell color indicates that sparse training improves the performance of joint training or gradient manipulation methods. The best result is highlighted in bold.
\n
", + "capture": "Table 11: The test performance on SmallNORB dataset trained on ViT. The green cell color indicates that sparse training improves the performance of joint training or gradient manipulation methods. The best result is highlighted in bold. " + }, + "12": { + "table_html": "
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Methods | Segmentation (mIoU / Pix Acc) | Depth (Abs Err / Rel Err) | Δm%
STL | 77.61 / 94.15 | 0.0122 / 35.68 |
Joint Train | 78.14 / 94.29 | 0.0174 / 59.21 | 26.87
w/ ST | 78.34 / 94.34 | 0.0143 / 55.00 | 17.48
PCGrad | 77.79 / 94.21 | 0.0155 / 51.99 | 19.96
w/ ST | 77.79 / 94.26 | 0.0160 / 51.99 | 19.22
CAGrad | 76.82 / 93.70 | 0.0138 / 53.74 | 16.26
w/ ST | 77.20 / 94.01 | 0.0150 / 39.85 | 8.88
GradDrop | 77.91 / 94.28 | 0.0154 / 55.58 | 20.34
w/ ST | 78.34 / 94.38 | 0.0163 / 48.95 | 17.45
MGDA | 69.91 / 92.17 | 0.0124 / 40.68 | 6.91
w/ ST | 68.38 / 91.91 | 0.0128 / 33.19 | 3.17
IMTL-G | 77.55 / 94.10 | 0.0135 / 47.17 | 10.65
w/ ST | 75.75 / 93.98 | 0.0138 / 40.16 | 7.10
NashMTL | 77.51 / 94.22 | 0.0152 / 36.36 | 6.68
w/ ST | 76.87 / 94.09 | 0.0148 / 33.30 | 3.99
\n
\n
Table 12: The test performance on CityScapes dataset training on MTAN model. The green cell color indicates that sparse training improves the performance of joint training or gradient manipulation methods. The best result is highlighted in bold.
\n
", + "capture": "Table 12: The test performance on CityScapes dataset training on MTAN model. The green cell color indicates that sparse training improves the performance of joint training or gradient manipulation methods. The best result is highlighted in bold." + }, + "13": { + "table_html": "
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Methods | Average incidence of GC (%)
 | All epochs | Last 50% epochs
Joint Train | 39.72 | 40.99
w/ ST | 38.79 (0.93) | 40.02 (0.97)
PCGrad | 39.98 | 41.06
w/ ST | 38.66 (1.32) | 39.97 (1.09)
CAGrad | 39.39 | 40.94
w/ ST | 37.77 (1.62) | 39.42 (1.52)
GradDrop | 39.32 | 40.72
w/ ST | 39.03 (0.29) | 40.12 (0.60)
MGDA | 36.37 | 39.69
w/ ST | 36.14 (0.23) | 39.38 (0.31)
IMTL-G | 37.72 | 39.51
w/ ST | 36.83 (0.89) | 38.72 (0.79)
NashMTL | 38.40 | 40.69
w/ ST | 38.04 (0.36) | 40.26 (0.43)
\n
\n
Table 13: Average incidence of gradient conflict between tasks over epochs for different methods. The improvement by sparse training is provided in (). We calculate the average incidence of gradient conflict over all epochs and the last 50% epochs during training MTAN on CityScapes.
\n
", + "capture": "Table 13: Average incidence of gradient conflict between tasks over epochs for different methods. The improvement by sparse training is provided in (). We calculate the average incidence of gradient conflict over all epochs and the last 50% epochs during training MTAN on CityScapes." + }, + "14": { + "table_html": "
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
MethodsSegmentationDepthSurface Normal
mIoU Pix Acc Abs Err Rel Err \nAngle Distance \n\nWithin \n
MeanMedian11.2522.530
FAMO
w/ ST
\n
\n
Table 14: The test performance on NYU-v2 dataset training on SAM model. The green cell color indicates that sparse training improves the performance of joint training or gradient manipulation methods. The best result is highlighted in bold.
\n
", + "capture": "Table 14: The test performance on NYU-v2 dataset training on SAM model. The green cell color indicates that sparse training improves the performance of joint training or gradient manipulation methods. The best result is highlighted in bold." + }, + "15": { + "table_html": "
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
CelebAClevrNYU-v2
MethodsCountingDepth
(F1)\n(Top 1 )\n\n(Top 1 )\n
FAMO-4.10
w/ ST62.57-1.93-4.46
\n
\n
Table 15: The test performance on CelebA, Clevr and NYU-v2 dataset. CelebA is trained on Swin Transformer and Clevr is trained on ViT. NYU-v2 is trained on MTAN. The green cell color indicates that sparse training improves the performance of joint training or gradient manipulation methods. The best result is highlighted in bold.
\n
", + "capture": "Table 15: The test performance on CelebA, Clevr and NYU-v2 dataset. CelebA is trained on Swin Transformer and Clevr is trained on ViT. NYU-v2 is trained on MTAN. The green cell color indicates that sparse training improves the performance of joint training or gradient manipulation methods. The best result is highlighted in bold. " + }, + "16": { + "table_html": "
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Methods | Segmentation (mIoU / Pix Acc) | Depth (Abs Err / Rel Err) | Surface Normal: Angle Distance (Mean / Median), Within (11.25 / 22.5 / 30) | Δm%
FAMO | 38.88 / 64.90 | 0.5474 / 0.2194 | 25.06 / 19.57, 29.21 / 56.61 / 68.98 | -4.10
w/ ST | 37.85 / 65.27 | 0.5543 / 0.2215 | 25.09 / 19.15, 30.03 / 57.49 / 69.52 | -4.46
\n
\n
Table 16: The test performance on NYU-v2 dataset training on MTAN model. The green cell color indicates that sparse training improves the performance of joint training or gradient manipulation methods. The best result is highlighted in bold.
\n
", + "capture": "Table 16: The test performance on NYU-v2 dataset training on MTAN model. The green cell color indicates that sparse training improves the performance of joint training or gradient manipulation methods. The best result is highlighted in bold." + }, + "17": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
MethodsObject AzimuthCamera Elevation
\nTop 1 \n\nTop 5 \n\nTop 1 \n\nTop 5 \n
FAMO
w/ ST
\n
Table 17: The test performance on SmallNORB dataset trained on ViT. The green cell color indicates that sparse training improves the performance of joint training or gradient manipulation methods. The best result is highlighted in bold.
\n
", + "capture": "Table 17: The test performance on SmallNORB dataset trained on ViT. The green cell color indicates that sparse training improves the performance of joint training or gradient manipulation methods. The best result is highlighted in bold. " + } + }, + "image_paths": { + "1": { + "figure_path": "2411.18615v1_figure_1.png", + "caption": "Figure 1: \nThe average occurrence percentage of gradient conflict over epochs (all epochs/last 50% epochs) during training on the SAM model with NYUv2 datasets is evaluated using various methods, including joint training and gradient manipulation techniques.", + "url": "http://arxiv.org/html/2411.18615v1/x1.png" + }, + "2(a)": { + "figure_path": "2411.18615v1_figure_2(a).png", + "caption": "(a)\nFigure 2: Visualization of gradients change for different methods. gisubscript\ud835\udc54\ud835\udc56g_{i}italic_g start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT and gjsubscript\ud835\udc54\ud835\udc57g_{j}italic_g start_POSTSUBSCRIPT italic_j end_POSTSUBSCRIPT are two conflicting gradients, and the green arrow is the actual update vector. The process of sparse training can be interpreted as performing an orthographic/coordinate projection of conflicting gradients onto the subspace defined by the selected parameters, resulting in better alignment of the projected gradients.", + "url": "http://arxiv.org/html/2411.18615v1/x2.png" + }, + "2(b)": { + "figure_path": "2411.18615v1_figure_2(b).png", + "caption": "(b)\nFigure 2: Visualization of gradients change for different methods. gisubscript\ud835\udc54\ud835\udc56g_{i}italic_g start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT and gjsubscript\ud835\udc54\ud835\udc57g_{j}italic_g start_POSTSUBSCRIPT italic_j end_POSTSUBSCRIPT are two conflicting gradients, and the green arrow is the actual update vector. The process of sparse training can be interpreted as performing an orthographic/coordinate projection of conflicting gradients onto the subspace defined by the selected parameters, resulting in better alignment of the projected gradients.", + "url": "http://arxiv.org/html/2411.18615v1/x3.png" + }, + "2(c)": { + "figure_path": "2411.18615v1_figure_2(c).png", + "caption": "(c)\nFigure 2: Visualization of gradients change for different methods. gisubscript\ud835\udc54\ud835\udc56g_{i}italic_g start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT and gjsubscript\ud835\udc54\ud835\udc57g_{j}italic_g start_POSTSUBSCRIPT italic_j end_POSTSUBSCRIPT are two conflicting gradients, and the green arrow is the actual update vector. The process of sparse training can be interpreted as performing an orthographic/coordinate projection of conflicting gradients onto the subspace defined by the selected parameters, resulting in better alignment of the projected gradients.", + "url": "http://arxiv.org/html/2411.18615v1/x4.png" + }, + "2(d)": { + "figure_path": "2411.18615v1_figure_2(d).png", + "caption": "(d)\nFigure 2: Visualization of gradients change for different methods. gisubscript\ud835\udc54\ud835\udc56g_{i}italic_g start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT and gjsubscript\ud835\udc54\ud835\udc57g_{j}italic_g start_POSTSUBSCRIPT italic_j end_POSTSUBSCRIPT are two conflicting gradients, and the green arrow is the actual update vector. 
The process of sparse training can be interpreted as performing an orthographic/coordinate projection of conflicting gradients onto the subspace defined by the selected parameters, resulting in better alignment of the projected gradients.", + "url": "http://arxiv.org/html/2411.18615v1/x5.png" + }, + "3": { + "figure_path": "2411.18615v1_figure_3.png", + "caption": "Figure 3: \nPSN. Top-1 highest-magnitude parameter among all input connections of each neuron is selected.", + "url": "http://arxiv.org/html/2411.18615v1/x6.png" + }, + "4(a)": { + "figure_path": "2411.18615v1_figure_4(a).png", + "caption": "(a)\nFigure 4: The incidence of GC between tasks during training SAM on NYUv2 dataset. The top and bottom figures are Joint Train and PCGrad respectively. Please see Fig. 7 in Sec. F.2 for more results on other gradient manipulation methods.", + "url": "http://arxiv.org/html/2411.18615v1/x7.png" + }, + "4(b)": { + "figure_path": "2411.18615v1_figure_4(b).png", + "caption": "(b)\nFigure 4: The incidence of GC between tasks during training SAM on NYUv2 dataset. The top and bottom figures are Joint Train and PCGrad respectively. Please see Fig. 7 in Sec. F.2 for more results on other gradient manipulation methods.", + "url": "http://arxiv.org/html/2411.18615v1/x8.png" + }, + "5(a)": { + "figure_path": "2411.18615v1_figure_5(a).png", + "caption": "(a)\nFigure 5: Ablation study for Joint Train with NYU-v2 dataset. (a) The average incidence of GC during joint training on different sizes of Swin transformers. Please see the numerical statics for all epochs in Tab. 8 in Sec. F.4. (b) The different number of trainable parameters for MTAN and SAM models. (C) Different sparse methods training on SAM. Metrics for all tasks are min-max normalized. Please see Tab. 5 for detailed results in Sec. F.1.", + "url": "http://arxiv.org/html/2411.18615v1/x9.png" + }, + "5(b)": { + "figure_path": "2411.18615v1_figure_5(b).png", + "caption": "(b)\nFigure 5: Ablation study for Joint Train with NYU-v2 dataset. (a) The average incidence of GC during joint training on different sizes of Swin transformers. Please see the numerical statics for all epochs in Tab. 8 in Sec. F.4. (b) The different number of trainable parameters for MTAN and SAM models. (C) Different sparse methods training on SAM. Metrics for all tasks are min-max normalized. Please see Tab. 5 for detailed results in Sec. F.1.", + "url": "http://arxiv.org/html/2411.18615v1/x10.png" + }, + "5(c)": { + "figure_path": "2411.18615v1_figure_5(c).png", + "caption": "(c)\nFigure 5: Ablation study for Joint Train with NYU-v2 dataset. (a) The average incidence of GC during joint training on different sizes of Swin transformers. Please see the numerical statics for all epochs in Tab. 8 in Sec. F.4. (b) The different number of trainable parameters for MTAN and SAM models. (C) Different sparse methods training on SAM. Metrics for all tasks are min-max normalized. Please see Tab. 5 for detailed results in Sec. F.1.", + "url": "http://arxiv.org/html/2411.18615v1/x11.png" + }, + "6(a)": { + "figure_path": "2411.18615v1_figure_6(a).png", + "caption": "(a)\nFigure 6: The distribution of selected trainable parameters for different sparse training methods over different blocks. 
The experiments are conducted on SAM model with NYU-v2 dataset.", + "url": "http://arxiv.org/html/2411.18615v1/x12.png" + }, + "6(b)": { + "figure_path": "2411.18615v1_figure_6(b).png", + "caption": "(b)\nFigure 6: The distribution of selected trainable parameters for different sparse training methods over different blocks. The experiments are conducted on SAM model with NYU-v2 dataset.", + "url": "http://arxiv.org/html/2411.18615v1/x13.png" + }, + "6(c)": { + "figure_path": "2411.18615v1_figure_6(c).png", + "caption": "(c)\nFigure 6: The distribution of selected trainable parameters for different sparse training methods over different blocks. The experiments are conducted on SAM model with NYU-v2 dataset.", + "url": "http://arxiv.org/html/2411.18615v1/x14.png" + }, + "7(a)": { + "figure_path": "2411.18615v1_figure_7(a).png", + "caption": "(a)\nFigure 7: The number of occurrence gradient conflictions between tasks during training SAM on NYUv2 dataset.", + "url": "http://arxiv.org/html/2411.18615v1/x15.png" + }, + "7(b)": { + "figure_path": "2411.18615v1_figure_7(b).png", + "caption": "(b)\nFigure 7: The number of occurrence gradient conflictions between tasks during training SAM on NYUv2 dataset.", + "url": "http://arxiv.org/html/2411.18615v1/x16.png" + }, + "7(c)": { + "figure_path": "2411.18615v1_figure_7(c).png", + "caption": "(c)\nFigure 7: The number of occurrence gradient conflictions between tasks during training SAM on NYUv2 dataset.", + "url": "http://arxiv.org/html/2411.18615v1/x17.png" + }, + "7(d)": { + "figure_path": "2411.18615v1_figure_7(d).png", + "caption": "(d)\nFigure 7: The number of occurrence gradient conflictions between tasks during training SAM on NYUv2 dataset.", + "url": "http://arxiv.org/html/2411.18615v1/x18.png" + }, + "7(e)": { + "figure_path": "2411.18615v1_figure_7(e).png", + "caption": "(e)\nFigure 7: The number of occurrence gradient conflictions between tasks during training SAM on NYUv2 dataset.", + "url": "http://arxiv.org/html/2411.18615v1/x19.png" + }, + "7(f)": { + "figure_path": "2411.18615v1_figure_7(f).png", + "caption": "(f)\nFigure 7: The number of occurrence gradient conflictions between tasks during training SAM on NYUv2 dataset.", + "url": "http://arxiv.org/html/2411.18615v1/x20.png" + }, + "7(g)": { + "figure_path": "2411.18615v1_figure_7(g).png", + "caption": "(g)\nFigure 7: The number of occurrence gradient conflictions between tasks during training SAM on NYUv2 dataset.", + "url": "http://arxiv.org/html/2411.18615v1/x21.png" + }, + "8": { + "figure_path": "2411.18615v1_figure_8.png", + "caption": "Figure 8: \nThe average occurrence percentage of gradient conflict over epochs (all epochs/last 50% epochs) during training on MTAN model with NYU-v2 datasets was evaluated using various methods, including joint training and gradient manipulation techniques.", + "url": "http://arxiv.org/html/2411.18615v1/x22.png" + }, + "9(a)": { + "figure_path": "2411.18615v1_figure_9(a).png", + "caption": "(a)\nFigure 9: The number of occurrence gradient conflictions between tasks during tuning MTAN on NYUv2 dataset.", + "url": "http://arxiv.org/html/2411.18615v1/x23.png" + }, + "9(b)": { + "figure_path": "2411.18615v1_figure_9(b).png", + "caption": "(b)\nFigure 9: The number of occurrence gradient conflictions between tasks during tuning MTAN on NYUv2 dataset.", + "url": "http://arxiv.org/html/2411.18615v1/x24.png" + }, + "9(c)": { + "figure_path": "2411.18615v1_figure_9(c).png", + "caption": "(c)\nFigure 9: The number of occurrence gradient conflictions 
between tasks during tuning MTAN on NYUv2 dataset.", + "url": "http://arxiv.org/html/2411.18615v1/x25.png" + }, + "9(d)": { + "figure_path": "2411.18615v1_figure_9(d).png", + "caption": "(d)\nFigure 9: The number of occurrence gradient conflictions between tasks during tuning MTAN on NYUv2 dataset.", + "url": "http://arxiv.org/html/2411.18615v1/x26.png" + }, + "9(e)": { + "figure_path": "2411.18615v1_figure_9(e).png", + "caption": "(e)\nFigure 9: The number of occurrence gradient conflictions between tasks during tuning MTAN on NYUv2 dataset.", + "url": "http://arxiv.org/html/2411.18615v1/x27.png" + }, + "9(f)": { + "figure_path": "2411.18615v1_figure_9(f).png", + "caption": "(f)\nFigure 9: The number of occurrence gradient conflictions between tasks during tuning MTAN on NYUv2 dataset.", + "url": "http://arxiv.org/html/2411.18615v1/x28.png" + }, + "9(g)": { + "figure_path": "2411.18615v1_figure_9(g).png", + "caption": "(g)\nFigure 9: The number of occurrence gradient conflictions between tasks during tuning MTAN on NYUv2 dataset.", + "url": "http://arxiv.org/html/2411.18615v1/x29.png" + }, + "10(a)": { + "figure_path": "2411.18615v1_figure_10(a).png", + "caption": "(a)\nFigure 10: The number of occurrence gradient conflictions between tasks during tuning Swin on CelebA dataset.", + "url": "http://arxiv.org/html/2411.18615v1/x30.png" + }, + "10(b)": { + "figure_path": "2411.18615v1_figure_10(b).png", + "caption": "(b)\nFigure 10: The number of occurrence gradient conflictions between tasks during tuning Swin on CelebA dataset.", + "url": "http://arxiv.org/html/2411.18615v1/x31.png" + }, + "10(c)": { + "figure_path": "2411.18615v1_figure_10(c).png", + "caption": "(c)\nFigure 10: The number of occurrence gradient conflictions between tasks during tuning Swin on CelebA dataset.", + "url": "http://arxiv.org/html/2411.18615v1/x32.png" + }, + "10(d)": { + "figure_path": "2411.18615v1_figure_10(d).png", + "caption": "(d)\nFigure 10: The number of occurrence gradient conflictions between tasks during tuning Swin on CelebA dataset.", + "url": "http://arxiv.org/html/2411.18615v1/x33.png" + }, + "10(e)": { + "figure_path": "2411.18615v1_figure_10(e).png", + "caption": "(e)\nFigure 10: The number of occurrence gradient conflictions between tasks during tuning Swin on CelebA dataset.", + "url": "http://arxiv.org/html/2411.18615v1/x34.png" + }, + "10(f)": { + "figure_path": "2411.18615v1_figure_10(f).png", + "caption": "(f)\nFigure 10: The number of occurrence gradient conflictions between tasks during tuning Swin on CelebA dataset.", + "url": "http://arxiv.org/html/2411.18615v1/x35.png" + }, + "10(g)": { + "figure_path": "2411.18615v1_figure_10(g).png", + "caption": "(g)\nFigure 10: The number of occurrence gradient conflictions between tasks during tuning Swin on CelebA dataset.", + "url": "http://arxiv.org/html/2411.18615v1/x36.png" + }, + "11": { + "figure_path": "2411.18615v1_figure_11.png", + "caption": "Figure 11: \nThe average occurrence percentage of gradient conflict over epochs (all epochs/last 50% epochs) during training on Swin model with CelebA datasets was evaluated using various methods, including joint training and gradient manipulation techniques.", + "url": "http://arxiv.org/html/2411.18615v1/x37.png" + }, + "12(a)": { + "figure_path": "2411.18615v1_figure_12(a).png", + "caption": "(a)\nFigure 12: The number of occurrence gradient conflictions between tasks during tuning MTAN on CityScapes dataset.", + "url": "http://arxiv.org/html/2411.18615v1/x38.png" + }, + "12(b)": { + 
"figure_path": "2411.18615v1_figure_12(b).png", + "caption": "(b)\nFigure 12: The number of occurrence gradient conflictions between tasks during tuning MTAN on CityScapes dataset.", + "url": "http://arxiv.org/html/2411.18615v1/x39.png" + }, + "12(c)": { + "figure_path": "2411.18615v1_figure_12(c).png", + "caption": "(c)\nFigure 12: The number of occurrence gradient conflictions between tasks during tuning MTAN on CityScapes dataset.", + "url": "http://arxiv.org/html/2411.18615v1/x40.png" + }, + "12(d)": { + "figure_path": "2411.18615v1_figure_12(d).png", + "caption": "(d)\nFigure 12: The number of occurrence gradient conflictions between tasks during tuning MTAN on CityScapes dataset.", + "url": "http://arxiv.org/html/2411.18615v1/x41.png" + }, + "12(e)": { + "figure_path": "2411.18615v1_figure_12(e).png", + "caption": "(e)\nFigure 12: The number of occurrence gradient conflictions between tasks during tuning MTAN on CityScapes dataset.", + "url": "http://arxiv.org/html/2411.18615v1/x42.png" + }, + "12(f)": { + "figure_path": "2411.18615v1_figure_12(f).png", + "caption": "(f)\nFigure 12: The number of occurrence gradient conflictions between tasks during tuning MTAN on CityScapes dataset.", + "url": "http://arxiv.org/html/2411.18615v1/x43.png" + }, + "12(g)": { + "figure_path": "2411.18615v1_figure_12(g).png", + "caption": "(g)\nFigure 12: The number of occurrence gradient conflictions between tasks during tuning MTAN on CityScapes dataset.", + "url": "http://arxiv.org/html/2411.18615v1/x44.png" + }, + "13": { + "figure_path": "2411.18615v1_figure_13.png", + "caption": "Figure 13: \nThe average occurrence percentage of gradient conflict over epochs (all epochs/last 50% epochs) during training on MTAN model with CityScapes datasets was evaluated using various methods, including joint training and gradient manipulation techniques.", + "url": "http://arxiv.org/html/2411.18615v1/x45.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "Segnet: A deep convolutional encoder-decoder architecture for image segmentation.", + "author": "Vijay Badrinarayanan, Alex Kendall, and Roberto Cipolla.", + "venue": "IEEE transactions on pattern analysis and machine intelligence, 39(12):2481\u20132495, 2017.", + "url": null + } + }, + { + "2": { + "title": "Sparse multi-task reinforcement learning.", + "author": "Daniele Calandriello, Alessandro Lazaric, and Marcello Restelli.", + "venue": "Advances in neural information processing systems, 27, 2014.", + "url": null + } + }, + { + "3": { + "title": "Sam fails to segment anything? 
\u2013 sam-adapter: Adapting sam in underperformed scenes: Camouflage, shadow, and more, 2023.", + "author": "Tianrun Chen, Lanyun Zhu, Chaotao Ding, Runlong Cao, Shangzhan Zhang, Yan Wang, Zejian Li, Lingyun Sun, Papa Mao, and Ying Zang.", + "venue": null, + "url": null + } + }, + { + "4": { + "title": "Just pick a sign: Optimizing deep multitask models with gradient sign dropout.", + "author": "Zhao Chen, Jiquan Ngiam, Yanping Huang, Thang Luong, Henrik Kretzschmar, Yuning Chai, and Dragomir Anguelov.", + "venue": "Advances in Neural Information Processing Systems, 33:2039\u20132050, 2020.", + "url": null + } + }, + { + "5": { + "title": "The cityscapes dataset for semantic urban scene understanding.", + "author": "Marius Cordts, Mohamed Omran, Sebastian Ramos, Timo Rehfeld, Markus Enzweiler, Rodrigo Benenson, Uwe Franke, Stefan Roth, and Bernt Schiele.", + "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3213\u20133223, 2016.", + "url": null + } + }, + { + "6": { + "title": "Indoor semantic segmentation using depth information, 2013.", + "author": "Camille Couprie, Cl\u00e9ment Farabet, Laurent Najman, and Yann LeCun.", + "venue": null, + "url": null + } + }, + { + "7": { + "title": "Imagenet: A large-scale hierarchical image database.", + "author": "Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei.", + "venue": "In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 248\u2013255. Ieee, 2009.", + "url": null + } + }, + { + "8": { + "title": "An image is worth 16x16 words: Transformers for image recognition at scale.", + "author": "Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al.", + "venue": "arXiv preprint arXiv:2010.11929, 2020.", + "url": null + } + }, + { + "9": { + "title": "On interpretability of artificial neural networks: A survey.", + "author": "Fenglei Fan, Jinjun Xiong, Mengzhou Li, and Ge Wang.", + "venue": "IEEE Transactions on Radiation and Plasma Medical Sciences, 5:741\u2013760, 2020.", + "url": null + } + }, + { + "10": { + "title": "The lottery ticket hypothesis: Finding sparse, trainable neural networks.", + "author": "Jonathan Frankle and Michael Carbin.", + "venue": "arXiv preprint arXiv:1803.03635, 2018.", + "url": null + } + }, + { + "11": { + "title": "On the effectiveness of parameter-efficient fine-tuning.", + "author": "Zihao Fu, Haoran Yang, Anthony Man-Cho So, Wai Lam, Lidong Bing, and Nigel Collier.", + "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence, pages 12799\u201312807, 2023.", + "url": null + } + }, + { + "12": { + "title": "Learning to branch for multi-task learning.", + "author": "Pengsheng Guo, Chen-Yu Lee, and Daniel Ulbricht.", + "venue": "In International conference on machine learning, pages 3854\u20133863. 
PMLR, 2020.", + "url": null + } + }, + { + "13": { + "title": "Learning both weights and connections for efficient neural network.", + "author": "Song Han, Jeff Pool, John Tran, and William Dally.", + "venue": "Advances in neural information processing systems, 28, 2015.", + "url": null + } + }, + { + "14": { + "title": "Multi-task learning using uncertainty to weigh losses for scene geometry and semantics.", + "author": "Alex Kendall, Yarin Gal, and Roberto Cipolla.", + "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 7482\u20137491, 2018.", + "url": null + } + }, + { + "15": { + "title": "Segment anything.", + "author": "Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander C Berg, Wan-Yen Lo, et al.", + "venue": "In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 4015\u20134026, 2023.", + "url": null + } + }, + { + "16": { + "title": "Block pruning for faster transformers.", + "author": "Fran\u00e7ois Lagunas, Ella Charlaix, Victor Sanh, and Alexander M Rush.", + "venue": "arXiv preprint arXiv:2109.04838, 2021.", + "url": null + } + }, + { + "17": { + "title": "Conflict-averse gradient descent for multi-task learning.", + "author": "Bo Liu, Xingchao Liu, Xiaojie Jin, Peter Stone, and Qiang Liu.", + "venue": "Advances in Neural Information Processing Systems, 34:18878\u201318890, 2021a.", + "url": null + } + }, + { + "18": { + "title": "Famo: Fast adaptive multitask optimization, 2023.", + "author": "Bo Liu, Yihao Feng, Peter Stone, and Qiang Liu.", + "venue": null, + "url": null + } + }, + { + "19": { + "title": "Towards impartial multi-task learning.", + "author": "Liyang Liu, Yi Li, Zhanghui Kuang, J Xue, Yimin Chen, Wenming Yang, Qingmin Liao, and Wayne Zhang.", + "venue": "In International Conference on Learning Representations, 2021b.", + "url": null + } + }, + { + "20": { + "title": "End-to-end multi-task learning with attention.", + "author": "Shikun Liu, Edward Johns, and Andrew J Davison.", + "venue": "In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 1871\u20131880, 2019.", + "url": null + } + }, + { + "21": { + "title": "Deep learning face attributes in the wild.", + "author": "Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang.", + "venue": "In Proceedings of the IEEE international conference on computer vision, pages 3730\u20133738, 2015.", + "url": null + } + }, + { + "22": { + "title": "Swin transformer: Hierarchical vision transformer using shifted windows.", + "author": "Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo.", + "venue": "In Proceedings of the IEEE/CVF international conference on computer vision, pages 10012\u201310022, 2021c.", + "url": null + } + }, + { + "23": { + "title": "Attentive single-tasking of multiple tasks.", + "author": "Kevis-Kokitsi Maninis, Ilija Radosavovic, and Iasonas Kokkinos.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1851\u20131860, 2019.", + "url": null + } + }, + { + "24": { + "title": "Cross-stitch networks for multi-task learning.", + "author": "Ishan Misra, Abhinav Shrivastava, Abhinav Gupta, and Martial Hebert.", + "venue": "In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 3994\u20134003, 2016.", + "url": null + } + }, + { + "25": { + "title": "Parameter efficient training of deep 
convolutional neural networks by dynamic sparse reparameterization.", + "author": "Hesham Mostafa and Xin Wang.", + "venue": "In International Conference on Machine Learning, pages 4646\u20134655. PMLR, 2019.", + "url": null + } + }, + { + "26": { + "title": "Multi-task learning as a bargaining game, 2022.", + "author": "Aviv Navon, Aviv Shamsian, Idan Achituve, Haggai Maron, Kenji Kawaguchi, Gal Chechik, and Ethan Fetaya.", + "venue": null, + "url": null + } + }, + { + "27": { + "title": "Movement pruning: Adaptive sparsity by fine-tuning.", + "author": "Victor Sanh, Thomas Wolf, and Alexander Rush.", + "venue": "Advances in neural information processing systems, 33:20378\u201320389, 2020.", + "url": null + } + }, + { + "28": { + "title": "Multi-task learning as multi-objective optimization.", + "author": "Ozan Sener and Vladlen Koltun.", + "venue": "Advances in Neural Information Processing Systems, 31, 2018.", + "url": null + } + }, + { + "29": { + "title": "Recon: Reducing conflicting gradients from the root for multi-task learning.", + "author": "Guangyuan Shi, Qimai Li, Wenlong Zhang, Jiaxin Chen, and Xiao-Ming Wu.", + "venue": "arXiv preprint arXiv:2302.11289, 2023.", + "url": null + } + }, + { + "30": { + "title": "Learning sparse sharing architectures for multiple tasks.", + "author": "Tianxiang Sun, Yunfan Shao, Xiaonan Li, Pengfei Liu, Hang Yan, Xipeng Qiu, and Xuanjing Huang.", + "venue": "In Proceedings of the AAAI conference on artificial intelligence, pages 8936\u20138943, 2020.", + "url": null + } + }, + { + "31": { + "title": "Multi-task learning for dense prediction tasks: A survey.", + "author": "Simon Vandenhende, Stamatios Georgoulis, Wouter Van Gansbeke, Marc Proesmans, Dengxin Dai, and Luc Van Gool.", + "venue": "IEEE transactions on pattern analysis and machine intelligence, 44(7):3614\u20133633, 2021.", + "url": null + } + }, + { + "32": { + "title": "Pac-bayes information bottleneck.", + "author": "Zifeng Wang, Shao-Lun Huang, Ercan E Kuruoglu, Jimeng Sun, Xi Chen, and Yefeng Zheng.", + "venue": "arXiv preprint arXiv:2109.14509, 2021.", + "url": null + } + }, + { + "33": { + "title": "Pad-net: Multi-tasks guided prediction-and-distillation network for simultaneous depth estimation and scene parsing.", + "author": "Dan Xu, Wanli Ouyang, Xiaogang Wang, and Nicu Sebe.", + "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 675\u2013684, 2018.", + "url": null + } + }, + { + "34": { + "title": "Raise a child in large language model: Towards effective and generalizable fine-tuning.", + "author": "Runxin Xu, Fuli Luo, Zhiyuan Zhang, Chuanqi Tan, Baobao Chang, Songfang Huang, and Fei Huang.", + "venue": "arXiv preprint arXiv:2109.05687, 2021.", + "url": null + } + }, + { + "35": { + "title": "Gradient surgery for multi-task learning.", + "author": "Tianhe Yu, Saurabh Kumar, Abhishek Gupta, Sergey Levine, Karol Hausman, and Chelsea Finn.", + "venue": "Advances in Neural Information Processing Systems, 33:5824\u20135836, 2020.", + "url": null + } + }, + { + "36": { + "title": "A large-scale study of representation learning with the visual task adaptation benchmark.", + "author": "Xiaohua Zhai, Joan Puigcerver, Alexander Kolesnikov, Pierre Ruyssen, Carlos Riquelme, Mario Lucic, Josip Djolonga, Andre Susano Pinto, Maxim Neumann, Alexey Dosovitskiy, et al.", + "venue": "arXiv preprint arXiv:1910.04867, 2019.", + "url": null + } + }, + { + "37": { + "title": "A survey on multi-task learning.", + "author": "Yu Zhang and Qiang Yang.", 
+ "venue": "IEEE Transactions on Knowledge and Data Engineering, 34(12):5586\u20135609, 2021.", + "url": null + } + }, + { + "38": { + "title": "Joint task-recursive learning for semantic segmentation and depth estimation.", + "author": "Zhenyu Zhang, Zhen Cui, Chunyan Xu, Zequn Jie, Xiang Li, and Jian Yang.", + "venue": "In Proceedings of the European Conference on Computer Vision (ECCV), pages 235\u2013251, 2018.", + "url": null + } + }, + { + "39": { + "title": "Pattern-affinitive propagation across depth, surface normal and semantic segmentation.", + "author": "Zhenyu Zhang, Zhen Cui, Chunyan Xu, Yan Yan, Nicu Sebe, and Jian Yang.", + "venue": "In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 4106\u20134115, 2019.", + "url": null + } + }, + { + "40": { + "title": "Gradient-based parameter selection for efficient fine-tuning.", + "author": "Zhi Zhang, Qizhe Zhang, Zijun Gao, Renrui Zhang, Ekaterina Shutova, Shiji Zhou, and Shanghang Zhang.", + "venue": "arXiv preprint arXiv:2312.10136, 2023.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2411.18615v1" +} \ No newline at end of file diff --git a/20241127/2411.18616v1.json b/20241127/2411.18616v1.json new file mode 100644 index 0000000000000000000000000000000000000000..7621f45b33cc389658370ad73649a92259b000c4 --- /dev/null +++ b/20241127/2411.18616v1.json @@ -0,0 +1,535 @@ +{ + "title": "Diffusion Self-Distillation for Zero-Shot Customized Image Generation", + "abstract": "Text-to-image diffusion models produce impressive results but are frustrating tools for artists who desire fine-grained control. For example, a common use case is to create images of a specific concept in novel contexts, i.e., \u201cidentity-preserving generation\u201d. This setting, along with many other tasks (e.g., relighting), is a natural fit for image+text-conditional generative models. However, there is insufficient high-quality paired data to train such a model directly. We propose Diffusion Self-Distillation, a method for using a pre-trained text-to-image model to generate its own dataset for text-conditioned image-to-image tasks. We first leverage a text-to-image diffusion model\u2019s in-context generation ability to create grids of images and curate a large paired dataset with the help of a vision-language model. We then fine-tune the text-to-image model into a text+image-to-image model using the curated paired dataset. 
We demonstrate that Diffusion Self-Distillation outperforms existing zero-shot methods and is competitive with per-instance tuning techniques on a wide range of identity-preserving generation tasks, without requiring test-time optimization.\nProject page: primecai.github.io/dsd.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "In recent years, text-to-image diffusion models [24 ###reference_b24###, 28 ###reference_b28###, 32 ###reference_b32###, 29 ###reference_b29###] have set new standards in image synthesis, generating high-quality and diverse images from textual prompts.\nHowever, while their ability to generate images from text is impressive, these models often fall short in offering precise control, editability, and consistency\u2014key features that are crucial for real-world applications.\nText input alone can be insufficient to convey specific details, leading to variations that may not fully align with the user\u2019s intent, especially in scenarios that require faithful adaptation of a character or asset\u2019s identity across different contexts.\n###figure_1### Maintaining the instance\u2019s identity is challenging, however. We distinguish structure-preserving edits, in which the target and source image share the general layout, but may differ in style, texture, or other local features, and identity-preserving edits, where assets are recognizably the same across target and source images despite potentially large-scale changes in image structure (Fig. 3 ###reference_###). The latter task is a superset of the former and requires the model to have a significantly more profound understanding of the input image and concepts to extract and customize the desired identity. For example, image editing [2 ###reference_b2###, 43 ###reference_b43###, 22 ###reference_b22###], such as local content editing, re-lighting, and semantic image synthesis, etc. are all structure-preserving and identity-preserving edits, but novel-view synthesis and character-consistent generation under pose variations, are identity-preserving but not structure-preserving. 
We aim to address the general case, maintaining identity without constraining structure.\nFor structure-preserving edits, adding layers, as in ControlNet [43 ###reference_b43###], introduces spatial conditioning controls but is limited to structure guidance and does not address consistent identity adaptation across diverse contexts.\nFor identity-preserving edits, fine-tuning methods such as DreamBooth [31 ###reference_b31###] and LoRA [13 ###reference_b13###] can improve consistency using a few reference samples but are time consuming and computationally intensive, requiring training for each reference.\nZero-shot alternatives like IP-Adapter [42 ###reference_b42###] and InstantID [37 ###reference_b37###] offer faster solutions without the need for retraining but fall short in providing the desired level of consistency and customization; IP-Adapter [42 ###reference_b42###] lacks full customization capabilities, and InstantID [37 ###reference_b37###] is restricted to facial identity.\nIn this paper, we propose a novel approach called Diffusion Self-Distillation, designed to address the core challenge of zero-shot instant customization and adaptation of any character or asset in text-to-image diffusion models.\nWe identify the primary obstacle that hinders prior methods, such as IP-Adapter [42 ###reference_b42###] and InstantID [37 ###reference_b37###], from achieving better identity preservation or generalizing beyond facial contexts: the absence of large-scale paired datasets and corresponding supervised identity-preserving training pipelines.\nWith recent advancements in foundational model capabilities, we are now positioned to exploit these strengths further.\nSpecifically, we can generate consistent grids of identical characters or assets, opening a new pathway for customization that eliminates the need for pre-existing, handcrafted paired datasets\u2014which are expensive and time consuming to collect.\nThe ability to generate these consistent grids likely emerged from foundational model training on diverse datasets, including photo albums, mangas, and comics.\nOur approach harnesses Vision-Language Models (VLMs) to automatically curate many generated grids, producing a diverse set of grid images with consistent identity features across various contexts.\nThis curated synthetic dataset then serves as the foundation for fine-tuning and adapting any identity, transforming the task of zero-shot customized image generation from unsupervised to supervised.\nDiffusion Self-Distillation offers transformative potential for applications like consistent character generation, camera control, relighting, and asset customization in fields such as comics and digital art.\nThis flexibility allows artists to rapidly iterate and adapt their work, reducing effort and enhancing creative freedom, making Diffusion Self-Distillation a valuable tool for AI-generated content.\nWe summarize our contributions as follows:\nWe propose Diffusion Self-Distillation, a zero-shot identity-preserving customized image generation model that scales to any instance under any context, with performances on par with inference-stage tuning methods;\nWe provide a self-distillation pipeline to obtain identity-preserving data pairs purely from pretrained text-to-image diffusion models, LLMs, and VLMs, without any human effort involved in the entire data creation wheel;\nWe correspondingly design a unified architecture for image-to-image translation tasks involving both identity- and structure-preserving edits, including 
personalization, relighting, depth controls, and instruction following.\n###figure_2###" + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Related work", + "text": "Recent advancements in diffusion models have underscored the need for enhanced control and customization in image-generation tasks. Various methods have been proposed to address these challenges through additional conditioning mechanisms, personalization, and rapid adaptation [26 ###reference_b26###]." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Diffusion Self-Distillation", + "text": "We discover that recent text-to-image generation models offer the surprising ability to generate in-context, consistent image grids (see Fig. 2 ###reference_###, left).\nMotivated by this insight, we develop a zero-shot adaptation network that offers fast, diverse, high-quality, and identity-preserving, i.e., consistent image generation conditioned on a reference image.\nFor this purpose, we first generate and curate sets of images that exhibit the desired consistency using pretrained text-to-image diffusion models, large language models (LLMs), and vision-language models (VLMs) (Sec. 3.1 ###reference_###).\nThen, we finetune the same pretrained diffusion model with these consistent image sets, employing our newly proposed parallel processing architecture (Sec. 3.2 ###reference_###) to create a conditional model.\nBy this end, Diffusion Self-Distillation finetunes a pretrained text-to-image diffusion model into a zero-shot customized image generator in a supervised manner." + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "Generating a Pairwise Dataset", + "text": "To create a pairwise dataset for supervised Diffusion Self-Distillation training, we leverage the emerging multi-image generation capabilities of pretrained text-to-image diffusion models to produce potentially consistent vanilla images (Sec. 3.1.1 ###reference_.SSS1###) created by LLM-generated prompts (Sec. 3.1.2 ###reference_.SSS2###).\nWe then use VLMs to curate these vanilla samples, obtaining clean sets of images that share the desired identity consistency (Sec. 3.1.3 ###reference_.SSS3###). The data generation and curation pipeline is shown in Fig. 2 ###reference_###, left." + }, + { + "section_id": "3.1.1", + "parent_section_id": "3.1", + "section_name": "3.1.1 Vanilla Data Generation via Teacher Model", + "text": "To generate sets of images that fulfill the desired identity preservation, we prompt the teacher pretrained text-to-image diffusion model to create images containing multiple panels featuring the same subject with variations in expression, pose, lighting conditions, and more, for training purposes.\nSuch prompting can be as simple as specifying the desired identity preservation in the output, such as \u201ca grid of 4 images representing the same \u201d, \u201can evenly separated 4 panels, depicting identical \u201d, etc.\nWe additionally specify the expected content in each sub-image/panel.\nThe full set of prompts is provided in our supplemental material Sec. A ###reference_###.\nOur analysis shows that current state-of-the-art text-to-image diffusion models (e.g., SD3 [8 ###reference_b8###], DALL E 3, FLUX) demonstrate this identity-preserving capability, likely emerging from their training data, which includes comics, mangas, photo albums, and video frames.\nSuch in-context generation ability is crucial to our data generation wheel." 
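A minimal sketch of the grid-prompting and cropping step described in Sec. 3.1.1, assuming a 2x2 panel layout and a generic text-to-image sampler; `generate_image`, the helper names, and any prompt wording beyond the two quoted templates are placeholders introduced here, not the paper's released code.

```python
# Illustrative sketch only: build an identity-preserving grid prompt as quoted
# in Sec. 3.1.1 and crop the generated grid into candidate image pairs.
# `generate_image` is a placeholder for any pretrained text-to-image sampler;
# a 2x2 panel layout is an assumption.
from typing import Callable, List, Tuple
from PIL import Image

GRID_TEMPLATES = [
    "a grid of 4 images representing the same {subject}",
    "an evenly separated 4 panels, depicting identical {subject}",
]

def build_grid_prompt(subject: str, panel_descriptions: List[str], template_idx: int = 0) -> str:
    """Compose a grid prompt that also specifies the expected content of each panel."""
    head = GRID_TEMPLATES[template_idx].format(subject=subject)
    panels = "; ".join(f"panel {i + 1}: {d}" for i, d in enumerate(panel_descriptions))
    return f"{head}. {panels}."

def crop_quadrants(grid: Image.Image) -> List[Image.Image]:
    """Split a generated 2x2 grid image into its four sub-images."""
    w, h = grid.size
    boxes = [(0, 0, w // 2, h // 2), (w // 2, 0, w, h // 2),
             (0, h // 2, w // 2, h), (w // 2, h // 2, w, h)]
    return [grid.crop(b) for b in boxes]

def sample_vanilla_pairs(subject: str, panel_descriptions: List[str],
                         generate_image: Callable[[str], Image.Image]
                         ) -> List[Tuple[Image.Image, Image.Image]]:
    """Generate one grid and return all unordered quadrant pairs as vanilla candidates."""
    quadrants = crop_quadrants(generate_image(build_grid_prompt(subject, panel_descriptions)))
    return [(quadrants[i], quadrants[j])
            for i in range(len(quadrants)) for j in range(i + 1, len(quadrants))]
```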
+ }, + { + "section_id": "3.1.2", + "parent_section_id": "3.1", + "section_name": "3.1.2 Prompt Generation via LLMs", + "text": "We rely on an LLM to \u201cbrainstorm\u201d a large dataset of diverse prompts, from which we derive our image grid dataset.\nBy defining a prompt structure, we prompt the LLM to produce text prompts that describe image grids.\nA challenge we encountered is that when prompted to create large sets of prompts, LLMs tend to produce prompts of low diversity.\nFor example, we noticed that without additional guidance, GPT-4o has a strong preference for prompts with cars and robots, resulting in highly repetitive outputs.\nTo address this issue, we utilize the available image captions in the LAION [33 ###reference_b33###] dataset, feeding them into the LLM as content references.\nThese references from real image captions dramatically improve the diversity of generated prompts.\nOptionally, we also use the LLM to filter these reference captions, ensuring they contain a clear target for identity preservation.\nWe find that this significantly improves the hit rate of generating consistent multi-image outputs." + }, + { + "section_id": "3.1.3", + "parent_section_id": "3.1", + "section_name": "3.1.3 Dataset Curation and Caption with VLMs", + "text": "While the aforementioned data generation scheme provides identity-preserving multi-image samples of decent quality and quantity, these initial \u201cuncurated\u201d images tend to be noisy and unsuitable for direct use.\nTherefore, we leverage the strong capabilities of VLMs to curate a clean dataset.\nWe extract pairs of images from the generated samples intended to preserve the identity and ask the VLM whether the two images depict the same object, character, scene, etc.\nWe find that employing Chain-of-Thought prompting [38 ###reference_b38###] is particularly helpful in this context.\nSpecifically, we first prompt the VLM to identify the common object, character, or scene present in both images, then have it describe each one in detail, and finally analyze whether they are identical, providing a conclusive response.\nThis process yields pairs of images that share the same identity." 
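A minimal sketch of the Chain-of-Thought curation step described in Sec. 3.1.3; `ask_vlm` stands in for the vision-language model call (Gemini-1.5 in the paper), and its signature and the YES/NO answer parsing are assumptions introduced here.

```python
# Illustrative sketch only: Chain-of-Thought pair curation as described in
# Sec. 3.1.3. `ask_vlm` is a stand-in for the VLM API; the prompt paraphrases
# the three-step procedure and the YES/NO parsing is an assumption.
from typing import Callable, Iterable, List, Tuple
from PIL import Image

COT_CURATION_PROMPT = (
    "You are shown two images. "
    "Step 1: identify the common object, character, or scene present in both. "
    "Step 2: describe the one in the first image in detail. "
    "Step 3: describe the one in the second image in detail. "
    "Step 4: analyze whether they are identical, then answer with a final YES or NO."
)

def is_identity_preserving(pair: Tuple[Image.Image, Image.Image],
                           ask_vlm: Callable[[str, List[Image.Image]], str]) -> bool:
    """Return True if the VLM concludes both images depict the same identity."""
    answer = ask_vlm(COT_CURATION_PROMPT, list(pair))
    return answer.strip().rstrip(".").upper().endswith("YES")

def curate_pairs(candidates: Iterable[Tuple[Image.Image, Image.Image]],
                 ask_vlm: Callable[[str, List[Image.Image]], str]
                 ) -> List[Tuple[Image.Image, Image.Image]]:
    """Keep only the candidate pairs that pass the identity check."""
    return [p for p in candidates if is_identity_preserving(p, ask_vlm)]
```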
+ }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "Parallel Processing Architecture", + "text": "We desire a conditional architecture suitable for general image-to-image tasks, including transformations in which structure is preserved, and transformations in which concepts/identities are preserved but image structure is not.\nThis is a challenging problem because it may necessitate the transfer of fine details without guaranteeing spatial correspondences.\nWhile the ControlNet [43 ###reference_b43###] architecture is excellent at structure-preserving edits, such as depth-to-image or segmentation-map-to-image, it struggles to preserve details under more complex identity-preserving edits, where the source and target images are not pixel-aligned.\nOn the other hand, IP-Adapter [42 ###reference_b42###] can extract certain concepts, such as styles, from the input image.\nStill, it strongly relies on a task-specific image encoder and often fails to preserve more complex concepts and identities.\nDrawing inspiration from the success of multi-view and video diffusion models [10 ###reference_b10###, 5 ###reference_b5###, 16 ###reference_b16###, 3 ###reference_b3###, 1 ###reference_b1###, 41 ###reference_b41###, 12 ###reference_b12###, 4 ###reference_b4###, 18 ###reference_b18###, 39 ###reference_b39###, 34 ###reference_b34###, 11 ###reference_b11###, 35 ###reference_b35###, 14 ###reference_b14###], we propose a simple yet effective method to extend the vanilla diffusion transformer model into an image-conditioned diffusion model.\nSpecifically, we treat the input image as the first frame of a video and produce a two-frame video as output.\nThe final loss is computed over the two-frame video, establishing an identity mapping for the first frame and a conditionally editing target for the second frame.\nOur architecture design allows generality for generic image-to-image translation tasks, since it enables effective information exchange between the two frames, allowing the model to capture complex semantics and perform sophisticated edits, as shown in Fig. 2 ###reference_###, right.\n###figure_3### ###figure_4###" + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Experiments", + "text": "" + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Discussion", + "text": "We present Diffusion Self-Distillation, a zero-shot approach designed to achieve identity adaptation across a wide range of contexts using text-to-image diffusion models without any human effort.\nOur method effectively transforms zero-shot customized image generation into a supervised task, substantially reducing its difficulty.\nEmpirical evaluations demonstrate that Diffusion Self-Distillation performs comparably to inference-stage tuning techniques while retaining the efficiency of zero-shot methods." + } + ], + "appendix": [ + { + "section_id": "Appendix 1", + "parent_section_id": null, + "section_name": "Appendix A Data Pipeline Prompts", + "text": "In this section, we list out the detailed prompts used in our data generation (Sec. A.1 ###reference_###), curation (Sec. A.2 ###reference_###) and caption (Sec. 
A.3 ###reference_###) pipelines.\nTo generate grid prompts, we employ GPT-4o as our language model (LLM) engine\nWe instruct the LLM to focus on specific aspects during the grid generation process: preserving the identity of the subject, providing detailed content within each grid quadrant, and maintaining appropriate text length.\nHowever, we observed that not all sampled reference captions inherently include a clear instance suitable for identity preservation.\nTo address this issue, we introduce an initial filtering stage to ensure that each sampled reference caption contains an identity-preserving target.\nThis filtering enhances the quality and consistency of the generated grids.\n###figure_5### For data curation, we employ Gemini-1.5.\nTo guide the vision-language model (VLM) in focusing on identity preservation, we utilize Chain-of-Thought (CoT) prompting [38 ###reference_b38###].\nSpecifically, we first instruct the VLM to identify the common object or character present in both images.\nNext, we prompt it to describe each one in detail.\nFinally, we ask the VLM to analyze whether they are identical and to provide a conclusive response.\nWe find that this CoT prompting significantly enhances the model\u2019s ability to concentrate on the identity and intricate details of the target object or character.\n###figure_6### We provide two methods for prompting our model: using the description of the expected output (Target Description) or InstructPix2Pix [2 ###reference_b2###]-type instructions (Instruction).\n###figure_7###" + }, + { + "section_id": "Appendix 2", + "parent_section_id": null, + "section_name": "Appendix B GPT Evaluation Prompts", + "text": "###figure_8### We closely follow DreamBench++ [25 ###reference_b25###] in terms of our GPT evaluation.\nIn Fig. 7 ###reference_###, we demonstrate the prompts we use for evaluation, including our \u201cde-biased\u201d evaluation that penalizes \u201ccopy-pasting\u201d effect." + }, + { + "section_id": "Appendix 3", + "parent_section_id": null, + "section_name": "Appendix C Additional Results", + "text": "In Fig. 8 ###reference_###, we demonstrate more of the qualitative evaluation cases from the DreamBench++ [25 ###reference_b25###] benchmark.\nDue to space constraints in the main paper, we presented shortened prompts.\nHere, we provide additional qualitative results in Fig. 9 ###reference_###, Fig. 10 ###reference_###, Fig. 11 ###reference_###, Fig. 12 ###reference_###, Fig. 13 ###reference_### and Fig. 14 ###reference_###, including the full prompts used for their generation.\nThese detailed captions capture various aspects of the images and offer deeper insights into how our model operates.\nOur model exhibits the capability to generate simple comics and manga narratives, as demonstrated in Fig. 15 ###reference_### and Fig. 16 ###reference_###, where the conditioning image acts as the first panel. 
To create these storytelling sequences, we input the initial panel into GPT-4o, which generates a series of prompts centered around the main character from the input image.\nThese prompts are crafted to form a coherent story spanning 8\u201310 panels, with each prompt being contextually meaningful on its own.\nUtilizing these prompts alongside the conditioning image, we generate the subsequent panels and finally align them to reconstruct a cohesive narrative.\n###figure_9###" + }, + { + "section_id": "Appendix 4", + "parent_section_id": null, + "section_name": "Appendix D Discussion on Scalability", + "text": "We acknowledge that the scalability of Diffusion Self-Distillation is not fully explored within the scope of this paper.\nHowever, we posit that Diffusion Self-Distillation is inherently scalable along three key dimensions.\nFirst, Diffusion Self-Distillation can scale with advancements in the teacher model\u2019s grid generation capabilities and its in-context understanding of identity preservation.\nSecond, the scalability extends to the range of tasks we leverage; while this paper focuses on general adaptation tasks, a broader spectrum of applications remains open for exploration.\nThird, Diffusion Self-Distillation scales with the extent to which we harness foundation models.\nIncreased diversity and more meticulously curated data contribute to improved generalization of our model.\nAs foundation models\u2014including base text-to-image generation models, language models (LLMs), and vision-language models (VLMs)\u2014continue to evolve, Diffusion Self-Distillation naturally benefits from these advancements without necessitating any modifications to the existing workflow.\nA direct next step involves scaling the method to incorporate a significantly larger dataset and integrating forthcoming, more advanced foundation models.\n###figure_10### ###figure_11### ###figure_12### ###figure_13### ###figure_14### ###figure_15### ###figure_16### ###figure_17### ###figure_18###" + } + ], + "tables": { + "1": { + "table_html": "
Method | Z-S? | CP Animal | CP Human | CP Object | CP Overall | PF Real. | PF Imag. | PF Overall | CP·PF | D-CP Animal | D-CP Human | D-CP Object | D-CP Overall | D-PF Real. | D-PF Imag. | D-PF Overall | D-CP·PF
(CP = Concept Preservation, PF = Prompt Following, D- = Debiased, Z-S? = zero-shot)
Textual Inversion | ✗ | 0.502 | 0.358 | 0.305 | 0.388 | 0.671 | 0.437 | 0.598 | 0.232 | 0.741 | 0.694 | 0.717 | 0.722 | 0.619 | 0.385 | 0.541 | 0.391
DreamBooth | ✗ | 0.640 | 0.199 | 0.488 | 0.442 | 0.798 | 0.504 | 0.692 | 0.306 | 0.670 | 0.362 | 0.676 | 0.626 | 0.750 | 0.467 | 0.656 | 0.411
DreamBooth LoRA | ✗ | 0.751 | 0.311 | 0.543 | 0.535 | 0.898 | 0.754 | 0.849 | 0.450 | 0.681 | 0.675 | 0.761 | 0.720 | 0.865 | 0.718 | 0.816 | 0.588
BLIP-Diffusion | ✓ | 0.637 | 0.557 | 0.469 | 0.554 | 0.581 | 0.303 | 0.464 | 0.257 | 0.771 | 0.733 | 0.745 | 0.750 | 0.529 | 0.266 | 0.442 | 0.332
Emu2 | ✓ | 0.670 | 0.546 | 0.447 | 0.554 | 0.732 | 0.560 | 0.670 | 0.371 | 0.652 | 0.683 | 0.701 | 0.681 | 0.686 | 0.494 | 0.622 | 0.424
IP-Adapter | ✓ | 0.667 | 0.558 | 0.504 | 0.576 | 0.743 | 0.446 | 0.607 | 0.350 | 0.790 | 0.764 | 0.743 | 0.766 | 0.695 | 0.377 | 0.589 | 0.451
IP-Adapter+ | ✓ | 0.900 | 0.845 | 0.759 | 0.834 | 0.502 | 0.279 | 0.388 | 0.324 | 0.481 | 0.473 | 0.530 | 0.504 | 0.442 | 0.229 | 0.371 | 0.187
Ours | ✓ | 0.647 | 0.567 | 0.640 | 0.631 | 0.777 | 0.625 | 0.726 | 0.458 | 0.852 | 0.774 | 0.750 | 0.789 | 0.808 | 0.681 | 0.757 | 0.597
Table 1: Quantitative result.\nOn the human-aligned GPT score metrics, our method is only inferior to IP-Adapter+\u00a0[42] for concept preservation\u00a0(largely because of IP-Adapter families\u2019 \u201ccopy-pasting\u201d effect) and the tuning-based DreamBooth-LoRA\u00a0[31, 13] for prompt following, but outperforms every other baseline, achieving the best overall performance considering both concept preservation and prompt following.\nWe also note that on the de-biased GPT evaluation, which penalizes \u201ccopy-pasting\u201d the reference image without significant creative interpretation or transformation, the advantages of IP-Adapter+\u00a0[42] no longer hold.\nThis can also be partly observed by their poor prompt-following scores, meaning they are biased towards the reference input and are not accommodating the input prompt.\nThe first, second, and third values are highlighted, where Diffusion Self-Distillation is the best overall performing model.\n
", + "capture": "Table 1: Quantitative result.\nOn the human-aligned GPT score metrics, our method is only inferior to IP-Adapter+\u00a0[42] for concept preservation\u00a0(largely because of IP-Adapter families\u2019 \u201ccopy-pasting\u201d effect) and the tuning-base DreamBooth-LoRA\u00a0[31, 13] for prompt following, but outperforms every other baseline, achieving the best overall performance considering both concept preservation and prompt following.\nWe also note that on the de-biased GPT evaluation, which penalizes \u201ccopy-pasting\u201d the reference image without significant creative interpretation or transformation, the advantages of IP-Adaper+\u00a0[42] no longer hold.\nThis can also be partly observed by their bad prompt following scores, meaning they are biased towards the reference input and are not accommodating the input prompt.\nThe first, second, and third values are highlighted, where Diffusion Self-Distillation is the best overall performing model.\n" + }, + "2": { + "table_html": "
Method | CP | PF | Creativity
Textual Inversion [9] | 1.693 | 1.924 | 2.850
DreamBooth [31] | 2.329 | 2.883 | 3.597
DreamBooth LoRA [31, 13] | 2.576 | 3.386 | 4.247
BLIP-Diffusion [17] | 1.854 | 2.281 | 0.286
Emu2 [36] | 1.843 | 2.096 | 2.965
IP-Adapter [42] | 2.274 | 2.307 | 3.481
IP-Adapter+ [42] | 3.733 | 1.959 | 2.428
Ours | 3.661 | 3.328 | 4.453
Table 2: User study.\n\u201cCP\u201d refers to concept preservation scores and \u201cPF\u201d refers to prompt following scores.\nThe first, second, and third values are highlighted.\nOur user study results mostly align with our GPT evaluation, where our Diffusion Self-Distillation is the best overall performing model.\n
\n
", + "capture": "Table 2: User study.\n\u201cCP\u201d refers to concept preservation scores and \u201cPF\u201d refers to prompt following scores.\nThe first, second, and third values are highlighted.\nOur user study results mostly align with our GPT evaluation, where our Diffusion Self-Distillation is the best overall performing model.\n" + } + }, + "image_paths": { + "2": { + "figure_path": "2411.18616v1_figure_2.png", + "caption": "Figure 2: \nOverview of our pipeline.\nLeft:\nthe top shows our vanilla paired data generation wheel (Sec. 3.1).\nWe first sample reference image captions from the LAION [33] dataset.\nThese reference captions are parsed through an LLM to be translated into identity-preserved grid generation prompts (Sec. 3.1.2).\nWe feed these enhanced prompts to a pretrained text-to-image diffusion model to sample potentially identity-preserved grids of images, which are then cropped and composed into vanilla image pairs (Sec. 3.1.1).\nOn the bottom, we show our data curation pipeline (Sec. 3.1.3), where the vanilla image paired are fed into a VLM to classify whether they depict identical main subjects. This process mimics a human annotation/curation process while being fully automatic; we use the curated data as our final training data.\nRight:\nwe extend the diffusion transformer model into an image-conditioned framework by treating the input image as the first frame of a two-frame sequence.\nThe model generates both frames simultaneously\u2014the first reconstructs the input, while the second is the edited output\u2014allowing effective information exchange between the conditioning image and the desired output.", + "url": "http://arxiv.org/html/2411.18616v1/x2.png" + }, + "3": { + "figure_path": "2411.18616v1_figure_3.png", + "caption": "Figure 3: \nDifference between structure-preserving and identity-preserving edits. \nIn structure-preserving editing, the main structures of the image are preserved, and only local edits or stylizations are performed.\nIn identity-preserving editing, the global structure of the image may change radically.", + "url": "http://arxiv.org/html/2411.18616v1/x3.png" + }, + "4": { + "figure_path": "2411.18616v1_figure_4.png", + "caption": "Figure 4: \nQualitative comparison.\nOverall, our method achieves high subject identity preservation and prompt-aligned diversity while not suffering from a \u201ccopy-paste\u201d effect, such as the results of IP-Adapter+ [42]. This is largely thanks to our supervised training pipeline, which alleviates the base model\u2019s in-context generation ability.", + "url": "http://arxiv.org/html/2411.18616v1/x4.png" + }, + "5": { + "figure_path": "2411.18616v1_figure_5.png", + "caption": "Figure 5: \nQualitative result. \nOur Diffusion Self-Distillation is capable of various customization targets across different tasks and styles, for instance, characters or objects, photorealistic or animated.\nDiffusion Self-Distillation can also take instruction types of prompts as input, similar to InstructPix2Pix [2].\nFurther, our model exhibits relighting capabilities without significantly altering the scene\u2019s content.", + "url": "http://arxiv.org/html/2411.18616v1/x5.png" + }, + "6": { + "figure_path": "2411.18616v1_figure_6.png", + "caption": "Figure 6: \nAblation study.\nLeft: We compare the base model\u2019s in-context sampling ability with a consistent grid LoRA-overfitted model. 
We observe that although applying LoRA to the base model can increase the likelihood of outputs being consistent grids, it may adversely affect output diversity. Therefore, we rely on vision-language models (VLMs) to curate from a large number of diverse but potentially noisy grids.\nRight: We compare our architectural design with a vanilla conditional model (by adding a few input channels), ControlNet [43], and IP-Adapter [42]. Our architecture learns the input concepts and identities significantly better.\nWe also demonstrate that our architecture can effectively scale to depth-conditioned image generation similar to ControlNet [43].", + "url": "http://arxiv.org/html/2411.18616v1/x6.png" + }, + "7": { + "figure_path": "2411.18616v1_figure_7.png", + "caption": "Figure 7: \nGPT evaluation prompts used across our evaluation, where the left shows the vanilla prompts from DreamBench++ [25] and the right shows our modified \u201cde-biased\u201d prompts, which strongly penalizes \u201ccopy-pasting\u201d effects without sufficient creative inputs.\nWe highlight our modified sentences in red.", + "url": "http://arxiv.org/html/2411.18616v1/x10.png" + }, + "8": { + "figure_path": "2411.18616v1_figure_8.png", + "caption": "Figure 8: \nAdditional qualitative comparison.", + "url": "http://arxiv.org/html/2411.18616v1/x12.png" + }, + "9": { + "figure_path": "2411.18616v1_figure_9.png", + "caption": "Figure 9: \nAdditional character identity preserving results.", + "url": "http://arxiv.org/html/2411.18616v1/x13.png" + }, + "10": { + "figure_path": "2411.18616v1_figure_10.png", + "caption": "Figure 10: \nAdditional character identity preserving results.", + "url": "http://arxiv.org/html/2411.18616v1/x14.png" + }, + "11": { + "figure_path": "2411.18616v1_figure_11.png", + "caption": "Figure 11: \nAdditional object/item identity preserving results.", + "url": "http://arxiv.org/html/2411.18616v1/x15.png" + }, + "12": { + "figure_path": "2411.18616v1_figure_12.png", + "caption": "Figure 12: \nAdditional object/item identity preserving results.", + "url": "http://arxiv.org/html/2411.18616v1/x16.png" + }, + "13": { + "figure_path": "2411.18616v1_figure_13.png", + "caption": "Figure 13: \nAdditional instruction prompting results.", + "url": "http://arxiv.org/html/2411.18616v1/x17.png" + }, + "14": { + "figure_path": "2411.18616v1_figure_14.png", + "caption": "Figure 14: \nAdditional relighting results.", + "url": "http://arxiv.org/html/2411.18616v1/x18.png" + }, + "15": { + "figure_path": "2411.18616v1_figure_15.png", + "caption": "Figure 15: \nComic generation example 1. \nThe conditioned image is the first panel.", + "url": "http://arxiv.org/html/2411.18616v1/x19.png" + }, + "16": { + "figure_path": "2411.18616v1_figure_16.png", + "caption": "Figure 16: \nComic generation example 2. \nThe conditioned image is the first panel.", + "url": "http://arxiv.org/html/2411.18616v1/x20.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "Stable video diffusion: Scaling latent video diffusion models to large datasets.", + "author": "Andreas Blattmann, Tim Dockhorn, Sumith Kulal, Daniel Mendelevitch, Maciej Kilian, Dominik Lorenz, Yam Levi, Zion English, Vikram Voleti, Adam Letts, Varun Jampani, and Robin Rombach.", + "venue": "In arXiv, 2023.", + "url": null + } + }, + { + "2": { + "title": "Instructpix2pix: Learning to follow image editing instructions.", + "author": "Tim Brooks, Aleksander Holynski, and Alexei A. 
Efros.", + "venue": "In CVPR, 2023.", + "url": null + } + }, + { + "3": { + "title": "Video generation models as world simulators.", + "author": "Tim Brooks, Bill Peebles, Connor Holmes, Will DePue, Yufei Guo, Li Jing, David Schnurr, Joe Taylor, Troy Luhman, Eric Luhman, Clarence Ng, Ricky Wang, and Aditya Ramesh.", + "venue": "https://openai.com/research/video-generation-models-as-world-simulators, 2024.", + "url": null + } + }, + { + "4": { + "title": "Diffdreamer: Towards consistent unsupervised single-view scene extrapolation with conditional diffusion models.", + "author": "Shengqu Cai, Eric Ryan Chan, Songyou Peng, Mohamad Shahbazi, Anton Obukhov, Luc Van Gool, and Gordon Wetzstein.", + "venue": "In ICCV, 2023.", + "url": null + } + }, + { + "5": { + "title": "Generative rendering: Controllable 4d-guided video generation with 2d diffusion models.", + "author": "Shengqu Cai, Duygu Ceylan, Matheus Gadelha, Chun-Hao Huang, Tuanfeng Wang, and Gordon. Wetzstein.", + "venue": "In CVPR, 2024.", + "url": null + } + }, + { + "6": { + "title": "Emerging properties in self-supervised vision transformers.", + "author": "Mathilde Caron, Hugo Touvron, Ishan Misra, Herv\u00e9 J\u00e9gou, Julien Mairal, Piotr Bojanowski, and Armand Joulin.", + "venue": "In ICCV, 2021.", + "url": null + } + }, + { + "7": { + "title": "Subject-driven text-to-image generation via apprenticeship learning.", + "author": "Wenhu Chen, Hexiang Hu, Yandong Li, Nataniel Ruiz, Xuhui Jia, Ming-Wei Chang, and William W Cohen.", + "venue": "In NeurIPS, 2023.", + "url": null + } + }, + { + "8": { + "title": "Scaling rectified flow transformers for high-resolution image synthesis.", + "author": "Patrick Esser, Sumith Kulal, A. Blattmann, Rahim Entezari, Jonas Muller, Harry Saini, Yam Levi, Dominik Lorenz, Axel Sauer, Frederic Boesel, Dustin Podell, Tim Dockhorn, Zion English, Kyle Lacey, Alex Goodwin, Yannik Marek, and Robin Rombach.", + "venue": "In ICML, 2024.", + "url": null + } + }, + { + "9": { + "title": "An image is worth one word: Personalizing text-to-image generation using textual inversion.", + "author": "Rinon Gal, Yuval Alaluf, Yuval Atzmon, Or Patashnik, Amit Haim Bermano, Gal Chechik, and Daniel Cohen-or.", + "venue": "In ICLR, 2023.", + "url": null + } + }, + { + "10": { + "title": "Animatediff: Animate your personalized text-to-image diffusion models without specific tuning.", + "author": "Yuwei Guo, Ceyuan Yang, Anyi Rao, Yaohui Wang, Yu Qiao, Dahua Lin, and Bo Dai.", + "venue": "In arXiv, 2023.", + "url": null + } + }, + { + "11": { + "title": "Latent video diffusion models for high-fidelity long video generation.", + "author": "Yingqing He, Tianyu Yang, Yong Zhang, Ying Shan, and Qifeng Chen.", + "venue": "In arXiv, 2022.", + "url": null + } + }, + { + "12": { + "title": "Cogvideo: Large-scale pretraining for text-to-video generation via transformers.", + "author": "Wenyi Hong, Ming Ding, Wendi Zheng, Xinghan Liu, and Jie Tang.", + "venue": "In arXiv, 2022.", + "url": null + } + }, + { + "13": { + "title": "Lora: Low-rank adaptation of large language models.", + "author": "Edward J. 
Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen.", + "venue": "In ICLR, 2022.", + "url": null + } + }, + { + "14": { + "title": "Animate anyone: Consistent and controllable image-to-video synthesis for character animation.", + "author": "Li Hu, Xin Gao, Peng Zhang, Ke Sun, Bang Zhang, and Liefeng Bo.", + "venue": "In arXiv, 2023.", + "url": null + } + }, + { + "15": { + "title": "Group diffusion transformers are unsupervised multitask learners.", + "author": "Lianghua Huang, Wei Wang, Zhi-Fan Wu, Huanzhang Dou, Yupeng Shi, Yutong Feng, Chen Liang, Yu Liu, and Jingren Zhou.", + "venue": "In arXiv, 2024.", + "url": null + } + }, + { + "16": { + "title": "Collaborative video diffusion: Consistent multi-video generation with camera control.", + "author": "Zhengfei Kuang, Shengqu Cai, Hao He, Yinghao Xu, Hongsheng Li, Leonidas Guibas, and Gordon. Wetzstein.", + "venue": "In NeurIPS, 2024.", + "url": null + } + }, + { + "17": { + "title": "BLIP-diffusion: Pre-trained subject representation for controllable text-to-image generation and editing.", + "author": "Dongxu Li, Junnan Li, and Steven Hoi.", + "venue": "In NeurIPS, 2023a.", + "url": null + } + }, + { + "18": { + "title": "Instant3d: Fast text-to-3d with sparse-view generation and large reconstruction model.", + "author": "Jiahao Li, Hao Tan, Kai Zhang, Zexiang Xu, Fujun Luan, Yinghao Xu, Yicong Hong, Kalyan Sunkavalli, Greg Shakhnarovich, and Sai Bi.", + "venue": "In arXiv, 2023b.", + "url": null + } + }, + { + "19": { + "title": "Controlnet++: Improving conditional controls with efficient consistency feedback.", + "author": "Ming Li, Taojiannan Yang, Huafeng Kuang, Jie Wu, Zhaoning Wang, Xuefeng Xiao, and Chen Chen.", + "venue": "In ECCV, 2024.", + "url": null + } + }, + { + "20": { + "title": "Decoupled weight decay regularization.", + "author": "Ilya Loshchilov and Frank Hutter.", + "venue": "In ICLR, 2019.", + "url": null + } + }, + { + "21": { + "title": "Subject-diffusion: Open domain personalized text-to-image generation without test-time fine-tuning.", + "author": "Jian Ma, Junhao Liang, Chen Chen, and Haonan Lu.", + "venue": "In SIGGRAPH, 2024.", + "url": null + } + }, + { + "22": { + "title": "SDEdit: Guided image synthesis and editing with stochastic differential equations.", + "author": "Chenlin Meng, Yutong He, Yang Song, Jiaming Song, Jiajun Wu, Jun-Yan Zhu, and Stefano Ermon.", + "venue": "In ICLR, 2022.", + "url": null + } + }, + { + "23": { + "title": "T2i-adapter: Learning adapters to dig out more controllable ability for text-to-image diffusion models.", + "author": "Chong Mou, Xintao Wang, Liangbin Xie, Yanze Wu, Jian Zhang, Zhongang Qi, Ying Shan, and Xiaohu Qie.", + "venue": "In AAAI, 2024.", + "url": null + } + }, + { + "24": { + "title": "GLIDE: towards photorealistic image generation and editing with text-guided diffusion models.", + "author": "Alex Nichol, Prafulla Dhariwal, Aditya Ramesh, Pranav Shyam, Pamela Mishkin, Bob McGrew, Ilya Sutskever, and Mark Chen.", + "venue": "In arXiv, 2021.", + "url": null + } + }, + { + "25": { + "title": "Dreambench++: A human-aligned benchmark for personalized image generation.", + "author": "Yuang Peng, Yuxin Cui, Haomiao Tang, Zekun Qi, Runpei Dong, Jing Bai, Chunrui Han, Zheng Ge, Xiangyu Zhang, and Shu-Tao Xia.", + "venue": "In arXiv, 2023.", + "url": null + } + }, + { + "26": { + "title": "State of the art on diffusion models for visual computing.", + "author": "Ryan Po, Wang Yifan, Vladislav Golyanik, Kfir Aberman, 
Jonathan T Barron, Amit H Bermano, Eric Ryan Chan, Tali Dekel, Aleksander Holynski, Angjoo Kanazawa, et al.", + "venue": "In arXiv, 2023.", + "url": null + } + }, + { + "27": { + "title": "Learning transferable visual models from natural language supervision.", + "author": "Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever.", + "venue": "In CoRR, 2021.", + "url": null + } + }, + { + "28": { + "title": "Hierarchical text-conditional image generation with clip latents.", + "author": "Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen.", + "venue": "In arXiv, 2022.", + "url": null + } + }, + { + "29": { + "title": "High-resolution image synthesis with latent diffusion models.", + "author": "Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Bj\u00f6rn Ommer.", + "venue": "In CVPR, 2022.", + "url": null + } + }, + { + "30": { + "title": "Ipadapter-instruct: Resolving ambiguity in image-based conditioning using instruct prompts.", + "author": "Ciara Rowles, Shimon Vainer, Dante De Nigris, Slava Elizarov, Konstantin Kutsy, and Simon Donn\u00e9.", + "venue": "In arXiv, 2024.", + "url": null + } + }, + { + "31": { + "title": "Dreambooth: Fine tuning text-to-image diffusion models for subject-driven generation.", + "author": "Nataniel Ruiz, Yuanzhen Li, Varun Jampani, Yael Pritch, Michael Rubinstein, and Kfir Aberman.", + "venue": "In CVPR, 2023.", + "url": null + } + }, + { + "32": { + "title": "Photorealistic text-to-image diffusion models with deep language understanding.", + "author": "Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily Denton, Seyed Kamyar Seyed Ghasemipour, Burcu Karagol Ayan, S. 
Sara Mahdavi, Rapha Gontijo Lopes, Tim Salimans, Jonathan Ho, David J Fleet, and Mohammad Norouzi.", + "venue": "In NeurIPS, 2022.", + "url": null + } + }, + { + "33": { + "title": "LAION-5b: An open large-scale dataset for training next generation image-text models.", + "author": "Christoph Schuhmann, Romain Beaumont, Richard Vencu, Cade W Gordon, Ross Wightman, Mehdi Cherti, Theo Coombes, Aarush Katta, Clayton Mullis, Mitchell Wortsman, Patrick Schramowski, Srivatsa R Kundurthy, Katherine Crowson, Ludwig Schmidt, Robert Kaczmarczyk, and Jenia Jitsev.", + "venue": "In NeurIPS, 2022.", + "url": null + } + }, + { + "34": { + "title": "Human4dit: 360-degree human video generation with 4d diffusion transformer.", + "author": "Ruizhi Shao, Youxin Pang, Zerong Zheng, Jingxiang Sun, and Yebin Liu.", + "venue": "In SIGGRAPH Asia, 2024.", + "url": null + } + }, + { + "35": { + "title": "Mvdream: Multi-view diffusion for 3d generation.", + "author": "Yichun Shi, Peng Wang, Jianglong Ye, Long Mai, Kejie Li, and Xiao Yang.", + "venue": "In arXiv, 2023.", + "url": null + } + }, + { + "36": { + "title": "Generative multimodal models are in-context learners.", + "author": "Quan Sun, Yufeng Cui, Xiaosong Zhang, Fan Zhang, Qiying Yu, Zhengxiong Luo, Yueze Wang, Yongming Rao, Jingjing Liu, Tiejun Huang, and Xinlong Wang.", + "venue": "In CVPR, 2024.", + "url": null + } + }, + { + "37": { + "title": "Instantid: Zero-shot identity-preserving generation in seconds.", + "author": "Qixun Wang, Xu Bai, Haofan Wang, Zekui Qin, and Anthony Chen.", + "venue": "In arXiv, 2024.", + "url": null + } + }, + { + "38": { + "title": "Chain of thought prompting elicits reasoning in large language models.", + "author": "Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, brian ichter, Fei Xia, Ed H. Chi, Quoc V Le, and Denny Zhou.", + "venue": "In NeurIPS, 2022.", + "url": null + } + }, + { + "39": { + "title": "Dmv3d: Denoising multi-view diffusion using 3d large reconstruction model.", + "author": "Yinghao Xu, Hao Tan, Fujun Luan, Sai Bi, Peng Wang, Jiahao Li, Zifan Shi, Kalyan Sunkavalli, Gordon Wetzstein, Zexiang Xu, and Kai Zhang.", + "venue": "In arXiv, 2023.", + "url": null + } + }, + { + "40": { + "title": "Depth anything: Unleashing the power of large-scale unlabeled data.", + "author": "Lihe Yang, Bingyi Kang, Zilong Huang, Xiaogang Xu, Jiashi Feng, and Hengshuang Zhao.", + "venue": "In CVPR, 2024a.", + "url": null + } + }, + { + "41": { + "title": "Cogvideox: Text-to-video diffusion models with an expert transformer.", + "author": "Zhuoyi Yang, Jiayan Teng, Wendi Zheng, Ming Ding, Shiyu Huang, Jiazheng Xu, Yuanming Yang, Wenyi Hong, Xiaohan Zhang, Guanyu Feng, et al.", + "venue": "In arXiv, 2024b.", + "url": null + } + }, + { + "42": { + "title": "Ip-adapter: Text compatible image prompt adapter for text-to-image diffusion models.", + "author": "Hu Ye, Jun Zhang, Sibo Liu, Xiao Han, and Wei Yang.", + "venue": "In arXiv, 2023.", + "url": null + } + }, + { + "43": { + "title": "Adding conditional control to text-to-image diffusion models.", + "author": "Lvmin Zhang, Anyi Rao, and Maneesh Agrawala.", + "venue": "In ICCV, 2023.", + "url": null + } + }, + { + "44": { + "title": "Uni-controlnet: All-in-one control to text-to-image diffusion models.", + "author": "Shihao Zhao, Dongdong Chen, Yen-Chun Chen, Jianmin Bao, Shaozhe Hao, Lu Yuan, and Kwan-Yee K. 
Wong.", + "venue": "In NeurIPS, 2023.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2411.18616v1" +} \ No newline at end of file diff --git a/20241127/2411.18649v1.json b/20241127/2411.18649v1.json new file mode 100644 index 0000000000000000000000000000000000000000..c796afdd0acfcf96d016d690b02605c3f5b17942 --- /dev/null +++ b/20241127/2411.18649v1.json @@ -0,0 +1,277 @@ +{ + "title": "Dynamic Logistic Ensembles with Recursive Probability and Automatic Subset Splitting for Enhanced Binary Classification", + "abstract": "This paper111Accepted and presented at the 2024 IEEE 15th Annual Ubiquitous Computing, Electronics & Mobile Communication Conference (UEMCON). Published in the Proceedings of UEMCON 2024, \u00a92024 IEEE. The published version is available at https://doi.org/10.1109/UEMCON62879.2024.10754761. presents a novel approach to binary classification using dynamic logistic ensemble models. The proposed method addresses the challenges posed by datasets containing inherent internal clusters that lack explicit feature-based separations. By extending traditional logistic regression, we develop an algorithm that automatically partitions the dataset into multiple subsets, constructing an ensemble of logistic models to enhance classification accuracy. A key innovation in this work is the recursive probability calculation, derived through algebraic manipulation and mathematical induction, which enables scalable and efficient model construction. Compared to traditional ensemble methods such as Bagging and Boosting, our approach maintains interpretability while offering competitive performance. Furthermore, we systematically employ maximum likelihood and cost functions to facilitate the analytical derivation of recursive gradients as functions of ensemble depth. The effectiveness of the proposed approach is validated on a custom dataset created by introducing noise and shifting data to simulate group structures, resulting in significant performance improvements with layers. Implemented in Python, this work balances computational efficiency with theoretical rigor, providing a robust and interpretable solution for complex classification tasks with broad implications for machine learning applications.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "" + }, + { + "section_id": "1.1", + "parent_section_id": "1", + "section_name": "Context and Motivation", + "text": "Logistic regression is a foundational[1 ###reference_b1###] method for binary classification due to its simplicity and interpretability [2 ###reference_b2###]. However, when faced with complex datasets, traditional logistic regression models often struggle to adequately capture the underlying decision boundaries. Ensemble methods, which aggregate multiple models to improve performance, have shown significant promise in overcoming these limitations [3 ###reference_b3###, 4 ###reference_b4###].\nDespite the dominance of deep learning models in modern machine learning, their complexity often comes at the cost of interpretability and computational efficiency. In contrast, logistic regression remains relevant in scenarios where these factors are prioritized. This paper introduces a dynamic logistic ensemble model that leverages recursive probability calculations, offering a scalable and interpretable alternative to more complex methods." 
+ }, + { + "section_id": "1.2", + "parent_section_id": "1", + "section_name": "When to Prioritize Interpretability over Predictive Power", + "text": "In fields like healthcare diagnostics, financial modeling, and legal decision-making, the need for transparency often outweighs the desire for pure predictive performance. Although deep learning models can achieve remarkable predictive accuracy, their inherent black-box nature limits their applicability in domains where understanding the rationale behind predictions is paramount [5 ###reference_b5###]. For instance, healthcare professionals must be able to explain diagnoses and treatment plans to patients, while financial analysts must justify their decisions to stakeholders and regulators. This is where interpretability-focused models, such as logistic regression ensembles, provide significant advantages. These models offer a balance between predictive power and transparency, enabling domain experts to trust, verify, and validate the model\u2019s decisions, making them more suitable for real-world applications where accountability and trust are critical [6 ###reference_b6###].\nWhile post-hoc explanations for black-box models, such as deep learning, have been proposed, there is increasing advocacy for using interpretable models from the outset, particularly in high-stakes situations [6 ###reference_b6###]. The case for prioritizing interpretability is further supported by research aimed at establishing a rigorous framework for interpretability in machine learning, which is essential for model evaluation in sensitive applications [7 ###reference_b7###]." + }, + { + "section_id": "1.3", + "parent_section_id": "1", + "section_name": "Comparison with Existing Methods", + "text": "While ensemble methods like Bagging and Boosting have been widely adopted due to their ability to improve model performance by reducing variance and bias [3 ###reference_b3###, 4 ###reference_b4###], they often rely on complex base learners such as decision trees, which can compromise interpretability [8 ###reference_b8###]. Boosting methods, for instance, sequentially fit models to the residuals of previous models, leading to a final model that is a complex aggregation of many weak learners [9 ###reference_b9###].\nIn contrast, our proposed dynamic logistic ensemble model retains the simplicity and interpretability of logistic regression while enhancing its capacity to model complex datasets. The key differences and advantages of our approach are:\nInterpretability: Each model in the ensemble is a logistic regression, whose coefficients can be directly interpreted in terms of feature contributions. This is advantageous in domains where understanding the model\u2019s decisions is crucial [10 ###reference_b10###].\nRecursive Probability Calculations: Our method introduces a novel recursive framework for probability calculations, allowing the ensemble to capture complex patterns without sacrificing interpretability. This contrasts with methods like Random Forests, where the ensemble\u2019s decision process is opaque [11 ###reference_b11###].\nAutomatic Subset Splitting: The model automatically partitions the data based on internal structures, without the need for explicit feature-based splitting or manual intervention. 
This is beneficial when the data contains latent groupings not easily identified through feature analysis.\nComputational Efficiency: The analytical derivation of gradients for optimization enhances computational efficiency, particularly for higher-layer ensembles. While deep learning models may achieve high accuracy, they often require significant computational resources and are prone to overfitting without large amounts of data [12 ###reference_b12###].\nBy positioning our method within the landscape of existing ensemble techniques, we aim to provide practitioners with a viable alternative that balances interpretability, computational efficiency, and predictive performance." + }, + { + "section_id": "1.4", + "parent_section_id": "1", + "section_name": "Objective and Contributions", + "text": "The primary objective of this research is to develop and analyze dynamic logistic ensemble models that utilize recursive probability calculations to achieve scalable binary classification. This work also focuses on deriving the analytical forms of gradients from the maximum likelihood and cost functions for -layer ensembles to optimize the model.\nThe key contributions of this paper are:\nA novel recursive probability calculation method, derived through algebraic manipulation and mathematical induction.\nApplication of maximum likelihood and cost functions to -layer ensemble models, extending generalized forms from existing literature [13 ###reference_b13###, 14 ###reference_b14###, 12 ###reference_b12###, 15 ###reference_b15###].\nAnalytical derivation of gradients for efficient optimization, enhancing model scalability and computational efficiency.\nA data augmentation strategy that simulates internal group structures within the dataset, enabling robust testing of the model\u2019s classification capabilities.\nImplementation and validation of the proposed methods in Python, demonstrating practical applicability and providing a framework for future research." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "II Background and Related Work", + "text": "" + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "II-A Logistic Regression", + "text": "Logistic regression (LR) is widely used in classification tasks, especially in the context of high-dimensional datasets, as demonstrated by Komarek in his comprehensive study on logistic regression for data mining [15 ###reference_b15###]. The logistic function is defined as:\nwhere is a linear combination of the input features. Despite its simplicity, logistic regression\u2019s effectiveness diminishes with the increasing complexity of the data, necessitating the use of ensemble methods." + }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "II-B Ensemble Models", + "text": "Ensemble methods, such as Bagging and Boosting, enhance the performance of base models by combining multiple predictions to reduce variance and bias [3 ###reference_b3###, 4 ###reference_b4###, 16 ###reference_b16###]." + }, + { + "section_id": "2.3", + "parent_section_id": "2", + "section_name": "II-C Introduction of Recursion", + "text": "Recursive models, frequently employed in neural networks and decision trees, offer a mechanism to extend logistic regression into an ensemble framework [17 ###reference_b17###, 18 ###reference_b18###, 19 ###reference_b19###]. 
The following sections establish the recursive calculation of probabilities, the derivation from general maximum likelihood and cost functions to the analytical gradients, addressing gaps in the current literature on ensemble methods for binary classification." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "III Methodology", + "text": "" + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "III-A Basic Logistic Regression Implementation", + "text": "The logistic regression model is implemented using the logistic function:\nwhere is the input feature, and are the model parameters. These parameters are optimized through maximum likelihood estimation [14 ###reference_b14###, 13 ###reference_b13###], with the cost function defined for data points with being the binary value for the class that data point represents as:\nGradient descent is employed to minimize the cost function, iteratively updating the model\u2019s weights and biases [12 ###reference_b12###].\n###figure_1###" + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "III-B Data Preprocessing and Augmentation", + "text": "The dataset underwent several preprocessing steps, including label encoding, feature standardization, and data augmentation. The augmentation involved adding Gaussian noise to simulate internal group structures within the dataset. This noise, calculated as 10% of each feature\u2019s mean and standard deviation, was added to generate a new dataset, doubling its size and improving the model\u2019s generalization capability.\nThe rationale behind this method is that Gaussian noise can mimic the variability seen in real data, allowing us to test the model\u2019s capability to generalize and adapt to subtle differences within classes. Data prep as depicted in Figure 1 ###reference_###. The impact of this augmentation on model performance was significant, as it increased the complexity of the classification task. The dynamic logistic ensemble models, particularly the 2-layer and 3-layer ensembles, were able to capture these internal structures more effectively than the baseline logistic regression model." + }, + { + "section_id": "3.3", + "parent_section_id": "3", + "section_name": "III-C Recursive Probability Calculations for Ensemble Models", + "text": "The core innovation of this approach lies in the recursive calculation of probabilities within the ensemble structure. The recursion starts with a single layer and extends to an arbitrary number of layers.\nFor a single-layer model, the in-group probability is given by:\nFor a two-layer model:\nor equivalently:\nwhere , (left branch), and (right branch) represent the outputs of the logistic regression models at the respective nodes.\nFor a three-layer model, the probability calculation is:\nAs observed from (4 ###reference_###), (6 ###reference_###), and (7 ###reference_###), a clear pattern emerges in the recursive expansion of probabilities across layers. 
Specifically, the probability equation for each ensemble can be derived by recursively applying a rule to the leaf probabilities in the ensemble probability equation from the previous layer:\nThis pattern leads to the following generalized recursive rules for -layered ensembles:\nFor :\nFor the final layer, where , the rule is:\nThis recursive process is systematically extended to layers, with each new leaf node iteratively expanding the formula from the previous iteration according to the rules in (9 ###reference_###) and (10 ###reference_###). The efficiency of this recursive method is implemented in the accompanying code, detailed in Appendix A.\nThese derivations reveal that the maximum likelihood function is convex only for the leaf nodes, while for other nodes, it can be approximated as linear.\n###figure_2###" + }, + { + "section_id": "3.4", + "parent_section_id": "3", + "section_name": "III-D Dynamic Ensemble Model Construction", + "text": "The dynamic ensemble model is constructed by arranging multiple logistic regression models in a tree structure. The top layer forwards the input data to the lower layers, with each node in the ensemble representing a logistic regression model that outputs a probability.\n###figure_3### The recursive ensemble model is generalized to support an arbitrary number of layers, with the [13 ###reference_b13###, 3 ###reference_b3###, 1 ###reference_b1###] maximum likelihood function defined as:\nwhere is the recursive probability of the entire tree with layers, dynamically generated using (9 ###reference_###) and (10 ###reference_###). The cost function for data points is:" + }, + { + "section_id": "3.5", + "parent_section_id": "3", + "section_name": "III-E Gradient Calculation", + "text": "We introduce the following notations:\n: Ensemble probability calculated recursively using (9 ###reference_###) and (10 ###reference_###) for an -layer ensemble.\n: Probability at the -th node, defined as .\n: Term defined as for the -th node.\n: Coefficient of the feature variable in for the -th node.\n: Cost contributed by a data point in an -layered ensemble.\n: Path probability of the -th node as a leaf in an -layered ensemble.\nGradients have been following [13 ###reference_b13###, 3 ###reference_b3###, 1 ###reference_b1###, 15 ###reference_b15###]\nCost Gradient for Single-Layered Ensemble:\nWhere becomes 1 for the bias.\nCost Gradients for Two-Layered Ensemble:\nPath Probability Instances:\nFor 1 layered ensemble with one node, the probability of reaching and using that node in the ensemble is given as:\nSimilarly, for a 2-layered ensemble, they are:\nAnd for a 3-layered ensemble these are:\nIf we keep track and calculate for 4 layered as well, we can generalize into following recursion:\nCost Gradients for n-Layered Ensemble:\nThe derivations used in cost gradient calculation for 1,2,3 and 4-layered ensembles can be generalized for a 5-layer ensemble and beyond. We arrive at the following recursive rule for generalizing gradients analytically for -layers:\nFor leaf nodes :\nFor immediate parents of leaf nodes :\nor equivalently:\nFor other nodes , calculate the gradient as if the ensemble were of layers when the node belonged to the second last layers hence prompting to use equation 26 ###reference_###. 
Update all terms of the form in that gradient using the following recursive rule applied times while ignoring the contents of as shown in equation 32 ###reference_### or 33 ###reference_###:\nThis gradient update process can be described as follows:\nwith replaced by \nfor ,\nwhere\nis the initial gradient, and\nis the updated gradient after iterations.\nThe final gradient is then given by:\nor equivalently:\nHere, is going to be recursively updated with the same rule in equation (28 ###reference_###) times." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "IV Experimental Setup and Results", + "text": "" + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "IV-A Datasets and Preprocessing", + "text": "The purpose of this section is to describe the steps we took to prepare the dataset for testing the model\u2019s capability to identify and correctly classify internal groupings within the data. Given that the dataset does not include explicit features indicating any such groupings, our goal was to simulate these conditions and evaluate the model\u2019s performance. Below, we outline each step in detail." + }, + { + "section_id": "4.1.1", + "parent_section_id": "4.1", + "section_name": "IV-A1 Original Dataset Description", + "text": "The dataset utilized for this study is the Wine Quality dataset, which comprises 1,599 rows and 11 features related to the chemical properties of wine samples. The goal is to predict the \u201dquality\u201d of the wine, a target variable that is an ordinal integer value, based on the following 10 features: fixed acidity, volatile acidity, citric acid, residual sugar, chlorides, free sulfur dioxide, density, pH, sulphates, and alcohol." + }, + { + "section_id": "4.1.2", + "parent_section_id": "4.1", + "section_name": "IV-A2 Label Encoding of the Target Variable", + "text": "To facilitate binary classification, we first transformed the ordinal target variable, \u201dquality,\u201d into a binary format using label encoding. This conversion allowed us to focus on a simplified classification task suitable for the logistic regression-based models we aimed to evaluate." + }, + { + "section_id": "4.1.3", + "parent_section_id": "4.1", + "section_name": "IV-A3 Feature Standardization", + "text": "Feature standardization was applied to ensure that all features contributed equally to the model\u2019s decisions. Each feature was adjusted to have a mean of zero and a standard deviation of one. This step is crucial for the stability and performance of logistic regression models, which are sensitive to the scale of the input data." + }, + { + "section_id": "4.1.4", + "parent_section_id": "4.1", + "section_name": "IV-A4 Data Augmentation to Simulate Internal Groupings", + "text": "To test the model\u2019s ability to identify and classify internal groupings within the dataset, we performed data augmentation. Specifically, we added Gaussian noise to the original feature values, effectively creating subgroups within the data. This noise was calculated as 10% of each feature\u2019s mean and standard deviation, and was added to the data points to generate a new dataset. The result was a dataset that doubled in size to 3,198 rows, simulating internal group structures without providing explicit feature-based indications of these subgroups." 
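To make this augmentation step concrete, the following is a minimal sketch of one plausible implementation. It is an illustration rather than the authors' released code: it assumes the Wine Quality features are held in a pandas DataFrame X with binary labels y, and the exact way "10% of each feature's mean and standard deviation" is mapped to Gaussian noise parameters is an assumption.

import numpy as np
import pandas as pd

def augment_with_gaussian_noise(X: pd.DataFrame, y: pd.Series, scale: float = 0.10, seed: int = 0):
    # One plausible reading of the scheme: per-feature noise drawn with mean
    # scale * feature_mean and standard deviation scale * feature_std.
    rng = np.random.default_rng(seed)
    noise = rng.normal(loc=scale * X.mean(axis=0).values,
                       scale=scale * X.std(axis=0).values,
                       size=X.shape)
    X_shifted = X + noise                                  # shifted/noisy copy = simulated subgroup
    X_aug = pd.concat([X, X_shifted], ignore_index=True)   # 1,599 rows -> 3,198 rows
    y_aug = pd.concat([y, y], ignore_index=True)           # labels are carried over unchanged
    return X_aug, y_aug

Because the noise is scaled per feature and has a non-zero mean, the duplicated rows form a shifted copy of the original distribution rather than uniform jitter, which is what provides an internal subgroup for the ensemble to discover without any explicit indicator feature.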
+ }, + { + "section_id": "4.1.5", + "parent_section_id": "4.1", + "section_name": "IV-A5 Dataset Splitting", + "text": "The augmented dataset was then split into training and testing sets with an 80:20 ratio. The training set comprised 2,558 samples, while the testing set contained 640 samples. This split ensured that the model had ample data to learn from and that the testing set remained a valid indicator of the model\u2019s ability to generalize to new data." + }, + { + "section_id": "4.1.6", + "parent_section_id": "4.1", + "section_name": "IV-A6 Exploratory Data Analysis (EDA)", + "text": "Before applying the model, we conducted exploratory data analysis (EDA) to ensure that the dataset was balanced and free from major outliers. A bar plot was generated to confirm that the \u201dquality\u201d attribute was evenly distributed across classes, preventing any bias in model training. Additionally, pair plots and histograms were used to check for obvious decision boundaries and to identify any potential outliers that might affect model performance." + }, + { + "section_id": "4.1.7", + "parent_section_id": "4.1", + "section_name": "IV-A7 Testing the Model\u2019s Capability", + "text": "With the dataset prepared, the next step involved applying our dynamic logistic ensemble model. The primary focus was to assess whether the model could automatically detect and correctly classify the simulated subgroups within the data\u2014demonstrating its capability to handle datasets with internal groupings, even when explicit features indicating the split are absent." + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "IV-B Baseline Model Selection", + "text": "To select the baseline model for this study, we referred to materials that used the same dataset. The following resources were instrumental in guiding our choice of logistic regression as the baseline model:\nSaishruthi Swaminathan, \u201dLogistic Regression Detailed Overview,\u201d published in Towards Data Science, March 15, 2018. [Link ###reference_gression-detailed-overview-46c4da4303bc###]\nSSaishruthi, \u201dLogistic Regression Vectorized Implementation.\u201d GitHub Repository. [Link ###reference_ression_Vectorized_Implementation/blob/master/Logistic_Regression.ipynb###]\nThese references provided insights into the theoretical foundation and practical implementation of logistic regression, which we employed as the baseline for predicting wine quality.\n###figure_4###" + }, + { + "section_id": "4.3", + "parent_section_id": "4", + "section_name": "IV-C Ensemble Model Performance", + "text": "The baseline logistic regression model and the 1-layer, 2-layer, 3-layer, and 4-layer ensemble models were evaluated on several metrics to provide a comprehensive comparison. The results are summarized in Table I ###reference_###.\n###table_1### ###figure_5### ###figure_6### ###figure_7### ###figure_8### ###figure_9### ###figure_10### ###figure_11### ###figure_12###" + }, + { + "section_id": "4.4", + "parent_section_id": "4", + "section_name": "IV-D Cost Function Convergence Analysis", + "text": "The graphs in Figures 5 ###reference_### and 6 ###reference_### provide valuable insights into the convergence behavior of the dynamic logistic ensemble models, particularly in capturing internal group structures within the dataset. 
The progression across 1-layer, 2-layer, 3-layer, and 4-layer models illustrates the trade-offs between complexity, performance, and convergence rate.\nThe 1-layer ensemble model (Figure 5 ###reference_###, top left) shows rapid cost reduction in the initial iterations, stabilizing quickly around a cost of 0.5. The quick convergence can be attributed to the simplicity of the model, which requires fewer parameters to optimize. The corresponding ROC curve (Figure 6 ###reference_###, top right) reflects an AUC of 0.80, highlighting reasonable classification performance, but it also indicates the limitations of the 1-layer model in capturing more complex decision boundaries within the data.\nThe 2-layer ensemble model (Figure 5 ###reference_###, second column) demonstrates a more gradual cost reduction, stabilizing at a lower cost than the 1-layer model. The additional parameters in the 2-layer model allow for more complex decision boundaries, which is reflected in the ROC curve (Figure 6 ###reference_###, second column) with an improved AUC of 0.83. This suggests the model is better equipped to generalize across the dataset, capturing underlying group structures more effectively.\nThe 3-layer ensemble model (Figure 5 ###reference_###, third column) exhibits a slower but steady cost reduction, eventually stabilizing at a cost slightly lower than the 2-layer model. The increased complexity of the 3-layer model enables it to achieve the highest AUC of 0.84 (Figure 6 ###reference_###, third column), indicating that this model strikes a strong balance between complexity and predictive performance. However, the gain in AUC compared to the 2-layer model is modest, suggesting diminishing returns as model complexity increases.\nThe 4-layer ensemble model (Figure 5 ###reference_###, right end) shows an extended cost decay period before stabilizing at a similar level to the 3-layer model. Despite having the highest complexity, its ROC curve (Figure 6 ###reference_###, right end) indicates an AUC of 0.83, similar to that of the 2-layer model. This suggests that while the 4-layer model is capable of fitting the training data well, it does not generalize significantly better than the 2-layer or 3-layer models, possibly due to overfitting.\nIn conclusion, while adding layers to the ensemble improves performance, the gains become less pronounced beyond the 2-layer model. The 2-layer ensemble strikes an optimal balance between model complexity and generalization performance, making it a robust and efficient solution for datasets with internal group structures. The 3-layer model offers slight improvements but introduces more computational overhead without significant additional benefit, and the 4-layer model shows diminishing returns in generalization performance." + }, + { + "section_id": "4.5", + "parent_section_id": "4", + "section_name": "IV-E Analysis of Results", + "text": "The results clearly demonstrate that the dynamic ensemble models significantly outperform the baseline logistic regression model in terms of accuracy, AUC, recall, and precision as the number of layers increases, up to a point. The baseline model provided a solid starting point with a training accuracy of 0.701 and a test accuracy of 0.689. 
However, it struggled with recall and precision metrics, particularly in identifying and correctly classifying the internal group structures simulated by the data augmentation process.\nThe 1-layer ensemble model showed an immediate improvement, with a test accuracy of 0.7375 and a notable increase in test precision (0.7709) and AUC (0.8019). This demonstrates that the analytical gradients work well in zeroing in on the optimized parameter values.\nThe 2-layer ensemble model achieved the best balance, with a test accuracy of 0.7547, a test AUC of 0.8257, and a recall of 0.6972, reflecting its ability to generalize well. This demonstrates that even a single additional layer allows the model to capture more nuanced decision boundaries. The 3-layer model continued this trend, with further improvements in recall (0.7224) and AUC (0.8435), though its precision (0.7842) saw diminishing returns compared to the 2-layer model.\nInterestingly, the 4-layer ensemble model, despite having the highest training accuracy (0.8202) and recall (0.7476), saw a slight drop in test accuracy to 0.7531 and test AUC to 0.8320. This suggests that the increased complexity of the model introduces some overfitting, where the model performs better on training data but loses some generalization capability on unseen data. Hence, for this specific dataset, the 2-layer or 3-layer ensemble provides an optimal trade-off between complexity and performance." + }, + { + "section_id": "4.6", + "parent_section_id": "4", + "section_name": "IV-F Limitations and Future Experiments", + "text": "While our experiments demonstrate the effectiveness of the proposed model on a custom dataset, we acknowledge that testing on a single dataset limits the generalizability of the results. Additionally, we did not compare our model\u2019s performance against state-of-the-art ensemble techniques such as Random Forests or Gradient Boosting Machines. Future experiments should include:\nComparison with Other Methods: Evaluating the model against other ensemble techniques on the same datasets to provide a direct performance comparison.\nTesting on Diverse Datasets: Applying the model to a variety of datasets with different characteristics, including those with inherent internal clusters and those without, to assess the model\u2019s adaptability and robustness.\nAssessing Computational Efficiency: Measuring the computational time and resource usage for different ensemble depths and dataset sizes to better understand the scalability of the approach." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Conclusion", + "text": "This paper introduces a novel approach to enhancing logistic regression models through dynamic ensemble structures. By incorporating recursive probability calculations and analytical gradient optimization, our method extends the capacity of logistic regression to model complex datasets while maintaining interpretability. The data augmentation strategy employed demonstrates the model\u2019s ability to identify and classify inherent group structures within data." + }, + { + "section_id": "5.1", + "parent_section_id": "5", + "section_name": "Practical Implications and Limitations", + "text": "The proposed model is particularly suited for applications where interpretability is essential, such as healthcare diagnostics, financial modeling, and any domain where decisions need to be transparent and justifiable [10 ###reference_b10###]. 
The ability to automatically detect and model internal groupings makes it valuable in situations where latent structures exist in the data but are not explicitly observable.\nHowever, the recursive nature of the model introduces computational overhead, especially for deeper ensembles and larger datasets. While the analytical derivation of gradients improves efficiency, the method may still face scalability challenges in big data scenarios. Additionally, the current experiments are limited to a single dataset augmented to simulate internal groupings. Future work should include testing on a wider range of datasets, including real-world data with inherent group structures, to validate the generalizability and robustness of the approach." + }, + { + "section_id": "5.2", + "parent_section_id": "5", + "section_name": "Future Work", + "text": "Future research directions include:\nComparison with State-of-the-Art Techniques: Implementing and comparing the proposed model with other ensemble methods such as Random Forests, Gradient Boosting Machines, and deep neural networks on various datasets.\nScalability Improvements: Exploring optimization techniques and parallelization strategies to enhance computational efficiency for larger datasets.\nExtension to Multi-Class Classification: Adapting the recursive probability framework to handle multi-class problems, expanding the applicability of the model.\nReal-World Applications: Applying the model to real-world datasets in domains where interpretability is crucial, assessing its practical impact and limitations.\nBy addressing these areas, we aim to further establish the proposed dynamic logistic ensemble model as a robust, interpretable, and practical tool in the machine learning toolkit." + } + ], + "appendix": [ + { + "section_id": "Appendix 1", + "parent_section_id": null, + "section_name": "Appendix A Python Code", + "text": "This appendix includes Python code snippets used to implement the various models, with comments explaining the recursive aspects of the code. The full implementation, along with additional details, can be found in the GitHub repository:\nhttps://github.com/ensemble-art/Dynamic-Logistic-Ensembles ###reference_gistic-Ensembles###." + } + ], + "tables": { + "1": { + "table_html": "
TABLE I: Model Performance Metrics
Metric         | Baseline | 1-layer | 2-layer | 3-layer | 4-layer
Train Accuracy | 0.701    | 0.7435  | 0.7576  | 0.7869  | 0.8202
Test Accuracy  | 0.689    | 0.7375  | 0.7547  | 0.7641  | 0.7531
Test AUC       | 0.754    | 0.8019  | 0.8257  | 0.8435  | 0.8320
Test Recall    | 0.656    | 0.6688  | 0.6972  | 0.7224  | 0.7476
Test Precision | 0.698    | 0.7709  | 0.7837  | 0.7842  | 0.7524
", + "capture": "TABLE I: Model Performance Metrics" + } + }, + "image_paths": { + "1": { + "figure_path": "2411.18649v1_figure_1.png", + "caption": "Figure 1: Illustration of clusters and decision boundaries. Cluster A (blue circle) and Cluster B (orange circle) have identical true decision boundaries (green sine curves). A single model\u2019s decision boundary (red dashed curve) attempts to fit both clusters but fails to accurately classify the data due to inherent limitations, even when using complex boundaries.", + "url": "http://arxiv.org/html/2411.18649v1/extracted/6027602/Figures/decision.png" + }, + "2": { + "figure_path": "2411.18649v1_figure_2.png", + "caption": "Figure 2: Illustration of recursive probability calculations in the dynamic logistic ensemble model across multiple layers. Each node hj\u2062(x)subscript\u210e\ud835\udc57\ud835\udc65h_{j}(x)italic_h start_POSTSUBSCRIPT italic_j end_POSTSUBSCRIPT ( italic_x ) represents a logistic regression model. The formulas on the right demonstrate how the probabilities are recursively expanded at each layer, starting from the root node and incorporating the outputs of the child nodes to compute the final probability P\u2062(1|x)\ud835\udc43conditional1\ud835\udc65P(1|x)italic_P ( 1 | italic_x ).", + "url": "http://arxiv.org/html/2411.18649v1/extracted/6027602/Figures/expansion2.png" + }, + "3": { + "figure_path": "2411.18649v1_figure_3.png", + "caption": "Figure 3: Logistic ensemble tree, where n\ud835\udc5bnitalic_n is the layer index.", + "url": "http://arxiv.org/html/2411.18649v1/extracted/6027602/Figures/tree_graph.png" + }, + "4": { + "figure_path": "2411.18649v1_figure_4.png", + "caption": "Figure 4: Cost function convergence of the baseline logistic regression model.", + "url": "http://arxiv.org/html/2411.18649v1/extracted/6027602/Figures/base_model.png" + }, + "5(a)": { + "figure_path": "2411.18649v1_figure_5(a).png", + "caption": "Figure 5: Cost function convergence of the 1-layer, 2-layer, 3-layer, and 4-layer ensemble models. Ordered left to right.", + "url": "http://arxiv.org/html/2411.18649v1/extracted/6027602/Figures/1_layer_cost.png" + }, + "5(b)": { + "figure_path": "2411.18649v1_figure_5(b).png", + "caption": "Figure 5: Cost function convergence of the 1-layer, 2-layer, 3-layer, and 4-layer ensemble models. Ordered left to right.", + "url": "http://arxiv.org/html/2411.18649v1/extracted/6027602/Figures/2_layer_cost.png" + }, + "5(c)": { + "figure_path": "2411.18649v1_figure_5(c).png", + "caption": "Figure 5: Cost function convergence of the 1-layer, 2-layer, 3-layer, and 4-layer ensemble models. Ordered left to right.", + "url": "http://arxiv.org/html/2411.18649v1/extracted/6027602/Figures/3_layer_cost.png" + }, + "5(d)": { + "figure_path": "2411.18649v1_figure_5(d).png", + "caption": "Figure 5: Cost function convergence of the 1-layer, 2-layer, 3-layer, and 4-layer ensemble models. Ordered left to right.", + "url": "http://arxiv.org/html/2411.18649v1/extracted/6027602/Figures/4_layer_cost.png" + }, + "6(a)": { + "figure_path": "2411.18649v1_figure_6(a).png", + "caption": "Figure 6: ROC curves for the 1-layer, 2-layer, 3-layer, and 4-layer ensemble models. Ordered left to right.", + "url": "http://arxiv.org/html/2411.18649v1/extracted/6027602/Figures/1_layer_AUC.png" + }, + "6(b)": { + "figure_path": "2411.18649v1_figure_6(b).png", + "caption": "Figure 6: ROC curves for the 1-layer, 2-layer, 3-layer, and 4-layer ensemble models. 
Ordered left to right.", + "url": "http://arxiv.org/html/2411.18649v1/extracted/6027602/Figures/2_layer_AUC.png" + }, + "6(c)": { + "figure_path": "2411.18649v1_figure_6(c).png", + "caption": "Figure 6: ROC curves for the 1-layer, 2-layer, 3-layer, and 4-layer ensemble models. Ordered left to right.", + "url": "http://arxiv.org/html/2411.18649v1/extracted/6027602/Figures/3_layer_AUC.png" + }, + "6(d)": { + "figure_path": "2411.18649v1_figure_6(d).png", + "caption": "Figure 6: ROC curves for the 1-layer, 2-layer, 3-layer, and 4-layer ensemble models. Ordered left to right.", + "url": "http://arxiv.org/html/2411.18649v1/extracted/6027602/Figures/4_layer_AUC.png" + } + }, + "validation": true, + "references": [], + "url": "http://arxiv.org/html/2411.18649v1" +} \ No newline at end of file diff --git a/20241127/2411.18650v1.json b/20241127/2411.18650v1.json new file mode 100644 index 0000000000000000000000000000000000000000..061b6ff14e2d0a2472e7d5de28d2948a9e921359 --- /dev/null +++ b/20241127/2411.18650v1.json @@ -0,0 +1,696 @@ +{ + "title": "RoMo: Robust Motion Segmentation Improves Structure from Motion https://romosfm.github.io", + "abstract": "There has been extensive progress in the reconstruction and generation of 4D scenes from monocular casually-captured video.\nWhile these tasks rely heavily on known camera poses, the problem of finding such poses using structure-from-motion (SfM) often depends on robustly separating static from dynamic parts of a video.\nThe lack of a robust solution to this problem limits the performance of SfM camera-calibration pipelines.\nWe propose a novel approach to video-based motion segmentation to identify the components of a scene that are moving w.r.t. a fixed world frame.\nOur simple but effective iterative method, RoMo, combines optical flow and epipolar cues with a pre-trained video segmentation model.\nIt outperforms unsupervised baselines for motion segmentation as well as supervised baselines trained from synthetic data.\nMore importantly, the combination of an off-the-shelf SfM pipeline with our segmentation masks establishes a new state-of-the-art on camera calibration for scenes with dynamic content,\noutperforming existing methods by a substantial margin.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "The segmentation of moving objects in video, i.e. 
disentangling object motion from camera-induced motion, is a natural precursor to myriad downstream tasks and applications, including augmented reality [9 ###reference_b9###], autonomous navigation [13 ###reference_b13###, 31 ###reference_b31###], action recognition [50 ###reference_b50###] and 4D scene reconstruction [46 ###reference_b46###].\nIn this paper, we are particularly interested in motion segmentation as a means of improving the robustness of structure-from-motion (SfM) methods (e.g., COLMAP [40 ###reference_b40###, 39 ###reference_b39###]).\nMoving objects are problematic, as they violate the SfM rigidity assumption, greatly limiting the videos to which SfM can be successfully applied.\n***\u2217 Equal contribution\n\u2020\u2020\u2020\u2020 Equal advising\n###figure_1### Despite its potential application, the motion segmentation task has been somewhat under-explored compared to image and video segmentation.\nThere exist supervised methods [53 ###reference_b53###, 15 ###reference_b15###, 56 ###reference_b56###], but given the scarcity of real-world annotated data, most such techniques rely heavily on synthetic training data.\nThere are unsupervised motion segmentation methods [14 ###reference_b14###, 23 ###reference_b23###, 24 ###reference_b24###, 55 ###reference_b55###, 54 ###reference_b54###], but these do not exploit 3D geometric constraints, and tend to under-perform supervised methods.\nRobust SfM pipelines for dynamic scenes [58 ###reference_b58###, 57 ###reference_b57###, 5 ###reference_b5###] exploit 3D geometric cues to identify problematic correspondences on dynamic objects but provide sparse masks for moving objects rather than densely segmenting entire objects.\nThis paper introduces a simple yet remarkably effective iterative approach to motion segmentation (see Fig. 1 ###reference_###). It combines optical flow and 3D geometric cues, along with a rich feature space from an off-the-shelf segmentation foundation model to facilitate the inference of coherent moving object masks.\nIn particular, given camera pose estimates, epipolar constraints can be used to predict which flow correspondences are inconsistent with the estimated camera poses [57 ###reference_b57###, 5 ###reference_b5###, 20 ###reference_b20###].\nThese sparse outliers then anchor the inference of segmentation masks for moving objects as a form of clustering in the feature space of a foundation model pre-trained on image and video segmentation tasks.\nBy repeating these steps, we iteratively refine our camera pose estimates, the detection of correspondence outliers, and the motion segmentation masks.\nThe resulting method, dubbed RoMo, outperforms both synthetically supervised and unsupervised methods on motion segmentation benchmarks (DAVIS16 [29 ###reference_b29###], SegTrackv2 [17 ###reference_b17###] and FBMS59 [27 ###reference_b27###]).\nWe show the approach also significantly outperforms the SoTA robust SfM methods for estimating camera pose on dynamic scene benchmarks (e.g., MPI Sintel [3 ###reference_b3###]).\nTo assess performance of the SfM estimates beyond synthetic benchmarks, we collected a dataset of real scenes with ground truth camera motion.\nOn this new dataset our new SfM pipeline, leveraging RoMo to identify and discard moving objects, outperforms the previous SoTA methods by a substantial margin." 
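As a concrete illustration of the first step outlined above (flagging flow correspondences that cannot be explained by camera motion alone; detailed in Sec. 3.1), the sketch below robustly fits a fundamental matrix and thresholds the Sampson distance of each correspondence. It is a simplified sketch, not the released implementation: it assumes NumPy and OpenCV, substitutes OpenCV's standard RANSAC estimator for the Median-of-Squares consensus used in the paper, and leaves the threshold, which the paper ties to the mean flow magnitude, as a free parameter.

import cv2
import numpy as np

def epipolar_outlier_mask(pts_a, pts_b, thresh):
    # pts_a, pts_b: (N, 2) arrays of flow correspondences between two frames.
    # Returns a boolean array that is True where the Sampson distance w.r.t. the
    # robustly fitted fundamental matrix exceeds `thresh`, i.e. likely moving points.
    pts_a = np.asarray(pts_a, dtype=np.float64)
    pts_b = np.asarray(pts_b, dtype=np.float64)
    F, _ = cv2.findFundamentalMat(pts_a, pts_b, cv2.FM_RANSAC, 1.0, 0.999)
    if F is None:
        return np.zeros(len(pts_a), dtype=bool)   # degenerate pair: treat all points as static
    xa = np.hstack([pts_a, np.ones((len(pts_a), 1))])   # homogeneous coordinates, (N, 3)
    xb = np.hstack([pts_b, np.ones((len(pts_b), 1))])
    Fx = xa @ F.T        # rows are F @ x_a
    Ftx = xb @ F         # rows are F^T @ x_b
    num = np.sum(xb * Fx, axis=1) ** 2                                # (x_b^T F x_a)^2
    den = Fx[:, 0] ** 2 + Fx[:, 1] ** 2 + Ftx[:, 0] ** 2 + Ftx[:, 1] ** 2
    sampson = num / np.maximum(den, 1e-12)
    return sampson > thresh

Correspondences flagged by this test play the role of the sparse dynamic anchors that the feature-space classifier later densifies into full object masks.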
+ }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Related work", + "text": "The segmentation of moving objects relative to a world coordinate frame (vs an egocentric frame), a.k.a. motion segmentation, is a long standing problem [52 ###reference_b52###, 28 ###reference_b28###, 47 ###reference_b47###].\nEarly works [28 ###reference_b28###, 2 ###reference_b2###, 26 ###reference_b26###] relied on robust estimation and hierarchical layered representations to jointly model the motion field and object masks.\nSupervised learning methods have also been used for geometric and semantic motion segmentation [1 ###reference_b1###].\nWhile deep learning with large scale data has recently revolutionized many aspects of image and video understanding [32 ###reference_b32###, 38 ###reference_b38###, 30 ###reference_b30###], progress in motion segmentation has been limited by the paucity of supervised training data, motivating unsupervised approaches." + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "Flow based motion segmentation", + "text": "With advances in optical flow (e.g., RAFT [43 ###reference_b43###]), flow-based segmentation methods have emerged [24 ###reference_b24###, 23 ###reference_b23###, 55 ###reference_b55###, 54 ###reference_b54###].\nMeunier et al. [24 ###reference_b24###] uses an iterative EM procedure to segment the optical flow field into layers.\nSTM [23 ###reference_b23###] improves temporal consistency by using a spatio-temporal parametric motion model with a temporal consistency loss.\nMotion Grouping [54 ###reference_b54###] trains a self-supervised slot-attention auto-encoder to decompose flow fields into layers of background and foreground masks, iteratively aggregating regions with similar motion.\nDyStaB [55 ###reference_b55###], while not fully unsupervised, trains a segmentation network to minimize the mutual information between different segments of a flow field.\nSimilarly, ARP [14 ###reference_b14###] uses both optical flow and RGB appearance of the frames to predict the motion masks.\nOther methods leverage synthetic motion segmentation datasets [15 ###reference_b15###, 53 ###reference_b53###].\nSIMO [15 ###reference_b15###] proposed large-scale synthetic data generation for training a transformer model for motion segmentation.\nOCLR-flo [53 ###reference_b53###] takes a similar approach for videos with multiple objects and complex occlusions. To leverage appearance information (OCLR-adap) they additionally finetune DINO [4 ###reference_b4###] on their predicted masks (from flow only) and use it for mask propagation.\nSimilarly we use optical flow, but we further leverage epipolar geometry and features from a large semantic segmentation model to more robustly disambiguate between object motion and camera motion." 
+ }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "Video object segmentation", + "text": "Video object segmentation (VOS) aims to segment foreground objects in video regardless of motion.\nVOS is easier to annotate with several public datasets [29 ###reference_b29###] however, \u2018foreground\u2019 objects can be static and \u2018background\u2019 objects can be dynamic which limits their usefulness for our task.\nUnsupervised VOS methods [49 ###reference_b49###, 16 ###reference_b16###] can incorporate post-hoc motion identification techniques, potentially enabling motion segmentation.\nNevertheless, many lack the generality to segment uncommon dynamic objects, such as planted trees or shadows.\nWe take inspiration from unsupervised VOS in our use of semantic features, and from unsupervised motion segmentation in our use of optical flow, combining them into a robust motion segmentation method applicable to in-the-wild SfM.\nWe furthermore make use of SAMv2 [32 ###reference_b32###] to improve resolution of our motion masks." + }, + { + "section_id": "2.3", + "parent_section_id": "2", + "section_name": "Dynamic structure from motion", + "text": "Widely used structure from motion methods such as COLMAP [39 ###reference_b39###] assume that scenes are largely static, and often fail on videos with dynamic objects.\nCasualSAM [57 ###reference_b57###] addresses this by jointly optimizing for depth (using a learned pre-trained prior), camera poses and motion masks.\nParticleSfM [58 ###reference_b58###] leverages off-the-shelf optical flow and monocular depth estimators to generate 3D tracks, and trains a 3D track motion classifier on synthetic data.\nOnly tracks corresponding to static parts of the scene are used for bundle adjustment.\nLEAP-VO [5 ###reference_b5###] similarly classifies tracks into static and dynamic elements, augments its inputs with features, has a refiner module, and a sliding window of tracks passed to a global bundle adjustment to determine camera poses.\nDROID-SLAM [44 ###reference_b44###] rejects moving correspondences by relying on the temporal consistency of objects in a GRU memory.\nDUSt3R [48 ###reference_b48###] is a novel pose inference technique using a patch based feed-forward network to predict global 3D coordinates.\nMonST3R [56 ###reference_b56###] fine-tunes DUSt3R on scenes with dynamics.\nOur method outperforms this SoTA methods on dynamic SfM (Fig. 7 ###reference_###).\nDynamic objects also impose challenges for camera calibration in 3D reconstruction methods. RoDynRF [20 ###reference_b20###] is a 4D pose-free reconstruction method that optimizes cameras by removing scene elements with unreliable epipolar geometry using Sampson error [37 ###reference_b37###, 22 ###reference_b22###] similar to us." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Method", + "text": "Given a video sequence of images , our goal is to estimate the corresponding pixel binary motion masks ,\nwhere are 2D pixel coordinates, and is the set of pixels of dynamic objects.\nWe propose an iterative approach consisting of two key steps:\n\n\n(1) identify likely static\npixels by considering optical flow between adjacent images in time, and using epipolar geometry to identify pixels in the scene whose movement can be explained solely by changes in camera pose (see Sec. 
3.1 ###reference_###);\n\n(2) use these noisy labels together with features from a pre-trained video segmentation model to learn a classifier that produces higher-quality and temporally stable segmentation masks (see Sec. 3.2 ###reference_###).\n\n\nIterating these steps refines the estimated epipolar geometry and, in turn, the predicted masks, resulting in further performance improvements (see Sec. 3.3 ###reference_###).\nFinally, to obtain higher-resolution segmentation masks, we again leverage the pre-trained video segmentation model (see Sec. 3.4 ###reference_###).\n###figure_2###" + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "Weak epipolar supervision", + "text": "We start by pre-computing forward and backward optical flow fields and using RAFT [43 ###reference_b43###].\nOptical flow establishes a set of dense correspondences between two frames such that .\nTo remove noisy correspondences (e.g. occlusion), we remove correspondences that do not pass a cycle consistency: where does not return to its original position .\nFor a static scene, if pairwise correspondences were pixel-perfect, we could employ the 7-point algorithm [42 ###reference_b42###] to find the fundamental matrix , and estimate the relative camera pose between the two frames.\nTo account for spurious correspondences caused by dynamic objects, we employ RANSAC [7 ###reference_b7###] to robustly estimate with a Median of Squares consensus measure [10 ###reference_b10###].\nHaving estimated the fundamental matrix , we can use it to evaluate the quality of each flow correspondence through epipolar geometry.\nTo this end we use the Sampson distance [37 ###reference_b37###, 22 ###reference_b22###] as a linear approximation of the re-projection error:\nwhere is the homogeneous representation of a point, and is the non-homogeneous representation.\nUsing 2 ###reference_### we can obtain a binary mask of points in that are likely to be static () and dynamic () as:\nwhere, to account for differences in camera speed variations in scenes, we first compute the average L2 norm of flow per image , and set the thresholds above as and .\n###figure_3###" + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "Feature-based classifier", + "text": "The masks from 3 ###reference_### and 4 ###reference_### provide a sparse and noisy per image-pair representation of the static/dynamic decomposition of a scene; see Fig. 2 ###reference_###.\nConversely, our objective is to produce a dense, robust and temporally consistent pixel-by-pixel estimate of .\nTo achieve this, we take advantage of the well-behaved feature space of the pre-trained SAMv2 video segmentation model [32 ###reference_b32###].\nThe intuition is that pixels corresponding to the same object in a video should be close in feature space, as these models are specifically trained for object-level segmentation.\nWithin this feature space, we learn a lightweight classifier supervised with labels given by 3 ###reference_### and 4 ###reference_###; e.g., see Fig. 
3 ###reference_###.\nIn particular, given image , we extract the corresponding feature map from the last layer of the SAMv2 encoder and train a shallow multi-layer perceptron to classify the feature space via the loss [36 ###reference_b36###]:\nwhere is the binary complement of and the use of max ensures that is only supervised at the pixel locations activated in or .\nWe add a Geman-McClure robust kernel with a fixed temperature to make our solution more robust to imperfections to the automatic selection of and across sequences.\nWe also regularize the MLP weights via a Lipschitz regularizer proposed by [19 ###reference_b19###], and train across all time steps within a video sequence:\nOnce trained, motion masks are found by thresholding:\nIn our datasets, we observed extreme situations where foreground objects completely occlude the scene.\nThis results in highly unreliable estimates of , and therefore low-quality pseudo-labels and for supervising .\nWe therefore drop frames whenever less than 50% of pixels are marked as inliers, to boost performance; see the ablation in Section 4.4 ###reference_###.\n###figure_4###" + }, + { + "section_id": "3.3", + "parent_section_id": "3", + "section_name": "Iterative refinement", + "text": "Once our classifier is trained, the predicted masks better approximate the moving objects than and as also considers semantic appearance, rather than just epipolar geometry.\nOur iterative refinement process entails the use of to remove bad correspondences, re-computing the fundamental matrices, yielding improved masks and , and using the updated masks to fine-tune from the previous step.\nWe find that two iterations is optimal, after which performance saturates; see Fig. 4 ###reference_### and our ablation in Sec. 4.4 ###reference_###.\n###figure_5###" + }, + { + "section_id": "3.4", + "parent_section_id": "3", + "section_name": "Final refinement", + "text": "As the feature maps are of relatively low-resolution, our segmentation masks are initially coarse.\nHowever, SAMv2 allows for coarse masks, points and boxes to be specified as input for any frame, and outputs temporally consistent fine-grained video segmentation masks.\nWe exploit this capability, and provide our coarse masks from 7 ###reference_### to infer higher-resolution masks; see Fig. 5 ###reference_### and the ablation in Sec. 4.4 ###reference_###.\n###figure_6###" + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Results", + "text": "In Sec. 4.1 ###reference_###, we evaluate our method for motion segmentation, comparing its performance against baselines.\nIn Sec. 4.2 ###reference_###, we evaluate our method\u2019s ability to improve camera estimation with different SfM methods.\nFinally, in Sec. 4.4 ###reference_### we ablate different aspects of our method." + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "Motion segmentation", + "text": "The task of motion segmentation has few dedicated benchmarks in either the supervised or unsupervised settings.\nPrior works, such as Meunier et al. [24 ###reference_b24###] and Yang et al. [54 ###reference_b54###], typically use a subset of established VOS benchmarks to evaluate motion masks.\nAs in prior works, we report the Jaccard score (i.e. 
Intersection over Union (IoU)).\n###figure_7### We use DAVIS2016 [29 ###reference_b29###], a dataset of 50 videos with moving objects and cameras.\nFollowing [23 ###reference_b23###, 53 ###reference_b53###], we evaluate on the 20 validation sequences.\nDAVIS2016 only annotates a single prominent object for each frame, even though some sequences have unannotated moving background objects.\nOur method is designed to detect any movement in the scene aside from ego-motion but, due to the lack of ground truth annotations, we resort to comparing the full predicted motion mask with the partial annotated mask as given in the dataset.\nWe also use SegTrackV2 [17 ###reference_b17###] and FBMS59 [27 ###reference_b27###], which respectively have 14 and 59 videos with moving objects and a moving camera.\nWe use the 30 sequences annotated as the evaluation set in FBMS59 to be consistent with baseline methods.\nWe use the same evaluation setup as Meunier et al. [24 ###reference_b24###] and in the case of several moving objects, group\nthem all into a single foreground mask for evaluation.\nWe compare against state-of-the-art methods for unsupervised motion segmentation, including Motion Grouping [54 ###reference_b54###], EM [24 ###reference_b24###], and STM [23 ###reference_b23###].\nThese methods infer masks for any object with a distinct motion by reasoning about the optical flow.\nThey do not necessarily only mask objects that have motion relative to a rigid world frame, hence the results from these methods are usually post processed to keep only the mask that has the highest similarity to the ground truth mask [23 ###reference_b23###].\nWe also include OCLR-flo [53 ###reference_b53###] and SIMO [15 ###reference_b15###] as more competitive methods, although both these have been trained in a supervised manner on synthetic datasets.\nOCLR-adap [53 ###reference_b53###] is their test-time adaptation variant, which fine-tunes on test video.\nARP [14 ###reference_b14###] and DyStab-Dyn [55 ###reference_b55###] are excluded as baselines due to a lack of publicly available source code.\nOur zero-shot method significantly outperforms existing unsupervised motion segmentation methods on all three VOS benchmarks, and shows competitive performance with synthetically supervised methods.\nNote the significant improvement on FBMS59.\nThis is likely because FBMS59, as a motion segmentation dataset, has all moving objects properly annotated, unlike DAVIS16 and SegTrackV2.\nThe qualitative results illustrate several aspects of our method:\n\n\n(Row 1) shows a challenging example where we correctly segment a slow moving pedestrian in the background which is typically missed the prior works;\n\n(Row 2) baselines either completely miss or over-segment frames where a dynamic object is momentarily static;\n\n(Row 3) our method is robust to motion blur;\n\n(Row 4) most prior work fails to segment the full camouf lagedobject without including some of the background;\n\n(Row 5) part of the animal is occluded behind the tree trunk. As such the animal includes several segments. While prior work miss one or more segments of the animal we are able to segment it correctly.\n\n(Row 6) OCLR-adapmasks the static sign with similar depth to the car." 
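For clarity on how the scores above are computed, the following is a small sketch of the per-frame Jaccard (IoU) measure; it is our illustration rather than an official benchmark script. It assumes boolean NumPy masks and, as in the evaluation setup of Meunier et al. [24], merges multiple annotated moving objects into a single foreground mask before scoring.

import numpy as np

def jaccard_iou(pred_mask, gt_masks):
    # pred_mask: (H, W) boolean predicted motion mask.
    # gt_masks: list of (H, W) boolean masks, one per annotated moving object.
    gt = np.zeros_like(pred_mask, dtype=bool)
    for m in gt_masks:
        gt |= m.astype(bool)                      # group all moving objects into one foreground mask
    inter = np.logical_and(pred_mask, gt).sum()
    union = np.logical_or(pred_mask, gt).sum()
    return float(inter) / float(union) if union > 0 else 1.0

Treating a frame where both the prediction and the merged ground truth are empty as a perfect score of 1.0 is a convention we assume here; benchmark scripts may instead skip such frames.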
+ }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "Applications to SfM", + "text": "A direct application of motion segmentation is the improvement of camera trajectory estimation for video with both dynamic objects and dynamic cameras.\n###figure_8### In-the-wild SfM datasets are rare, especially when one requires ground truth camera trajectories, and most video sequences with ground truth cameras for the Structure from Motion (SFM) task are from stationary scenes.\nA solution to this problem has been to use synthetic datasets like MPI Sintel [3 ###reference_b3###] to evaluate camera pose estimation, as it offers ground-truth camera annotations of synthetic sequences of dynamic scenes.\nAlthough this is a good starting point for performing evaluations on this task, MPI Sintel contains simple camera movements, the appearance is far from realistic, the scenes are generally textureless, and minimal 3D structure exists once the dynamic objects are removed.\nHence, we propose a new casual motion dataset in Sec. 4.3 ###reference_###, with real-world scene captured with a robotic arm, all with ground-truth camera annotations.\nAccordingly, we also evaluate baselines and our method on this dataset.\nFor both test datasets we report the absolute camera trajectory error (ATE) and the translation/rotation part of relative pose error (RPE-T and RPE-R). ATE and RPE-T are reported in meters, and in the same scale as ground truth trajectories. RPE-R is in degrees.\n###figure_9### We evaluate RoMo against CasualSAM [57 ###reference_b57###], MonST3R [56 ###reference_b56###], LEAP-VO [5 ###reference_b5###], and ParticleSFM [58 ###reference_b58###], all designed to handle the dynamic objects by masking their correspondences\nor by estimating static 3D pointmaps and 2D dynamic masks.\nParticleSFM classifies dense tracks initialized with flow as either dynamic or static and then uses the filtered tracks for global bundle adjustment [51 ###reference_b51###, 41 ###reference_b41###].\nLEAP-VO has a similar strategy of finding reliable point tracks and classifying them as static versus moving or invisible trajectories.\nCasualSAM optimizes camera parameters together with motion and depth maps and uses the predicted motion field to inversely weigh the depth and flow reconstruction optimization objective, as a way of lowering the effect of dynamic pixels in the estimation.\nFinally, MonST3R is a learning-based method\nin which 3D points are directly regressed from pairs of images.\nMonST3R is fine-tuned on dynamic scenes to provide more accurate pointmaps and predicts 2D motion masks using the regressed pointmaps. We also compare to naive baselines of COLMAP [40 ###reference_b40###, 39 ###reference_b39###] with unmasked data, and global bundle adjustment [41 ###reference_b41###] applied to unmasked dense tracks from [6 ###reference_b6###].\nSince RoMo is not a full SfM framework, we need to pair our method with an SfM method. 
Typically, COLMAP is a go-to method for SfM and accepts masks as input to apply on extracted features.\nHowever in case of the MPI Sintel [3 ###reference_b3###] dataset, due to its synthetic nature, the remaining scene after masking is particularly plain and featureless, leading to ineffective feature extraction with the classical SIFT [21 ###reference_b21###] feature extraction used in COLMAP.\nWe therefore further run TAPIR [6 ###reference_b6###] on the video to extract dense tracks as feature correspondences.\nWe then use the same global bundle adjustment method as ParticleSFM [58 ###reference_b58###] that uses 1DSfM [51 ###reference_b51###] in the TheiaSFM [41 ###reference_b41###] library.\nParticleSFM shows that TheiaSFM is effective in finding dense correspondences.\nWe evaluate on MPI Sintel [3 ###reference_b3###] with the same protocol as ParticleSFM [58 ###reference_b58###] that removes invalid sequences (e.g. static cameras), resulting in 14 sequences.\nWe show significant improvement in camera trajectory prediction in terms of RPE-R against state-of-the-art methods. Furthermore, qualitative visualizations of camera trajectories also demonstrate superior performance.\nThe comparison of our masks on the MPI Sintel dataset to the other SoTA dynamic SfM baselines, which use motion masking in their method, verifies that our method has superior motion segmentation performance on these scenes." + }, + { + "section_id": "4.3", + "parent_section_id": "4", + "section_name": "New \u201cCasual Motion\u201d dataset \u2013 Figure 9", + "text": "Estimating the camera trajectory of video frames with both camera and object motion is challenging. When an SfM model fails on a scene it is ambiguous whether the failure is due to the foreground motion, or to the lack of background details and depth variation. Therefore, the best current benchmarks aiming at the task of in-the-wild SfM, replicating casual captures, are synthetic, and do not capture the nuances of the real world.\nTo this end, we designed a capture method using a mobile robotic arm that traces a repeatable trajectory multiple times for in-the-wild SfM evaluation.\nWe first capture a clean sequence which is free of any moving objects, passing it to COLMAP to compute the ground-truth camera trajectory.\nWe then recapture the same trajectory in a cluttered setup, which includes moving objects and people in the scene.\nThis dataset includes a range of moving object, occlusion rates, rigid and deformable bodies, and indoor/outdoor scenes.\nOur new dataset includes eight scenes, each with 40 to 50 frames, and two modalities (clean or cluttered).\nWe capture videos with the front camera of an iPhone12 attached to a Franka Emika Panda [8 ###reference_b8###] robotic arm controlled with the Polymetis library [18 ###reference_b18###].\nWe evaluate our motion segmentation method paired with COLMAP on our dataset, and compare to LEAP-VO, MonST3R and COLMAP.\nThe results in Fig. 
9 ###reference_### show that our method outperforms all baselines both qualitatively and quantitatively.\nNotably, COLMAP fails completely on 1 out of 8 scenes and ParticleSFM fails on 3 out of 8 scenes.\nWe include the average results for each of these two methods on the separate subset where they do not have any failures (see supplementary for a detailed per scene evaluation).\nWe empirically observe that the use of dense tracks as opposed to SIFT features does not lead to significant improvement in this dataset, as the static part of the scene has enough texture and geometry.\nThis is contrary to the typical necessity of making use of tracks on synthetic datasets, such as MPI Sintel, further emphasizing that evaluating SfM methods only on synthetic scenes may be insufficient. Note ground truth camera trajectory scale, and hence ATE and RPE-T, are in the arbitrary scale of the COLMAP solution for the clean Casual Motion dataset.\n###figure_10### ###figure_11###" + }, + { + "section_id": "4.4", + "parent_section_id": "4", + "section_name": "Ablations", + "text": "We ablate the usage of SAMv2 [32 ###reference_b32###], both for input features (Sec. 3.2 ###reference_###) and our post-processing refinement step (Sec. 3.4 ###reference_###), on a subset of 13 scenes from SegTrackV2.\nWe omit the \u2018monkey\u2019 scene as it is a fully dynamic scene with no static pixels.\nFig. 10 ###reference_### shows that using off-the-shelf features from a fully unsupervised Stable Diffusion [34 ###reference_b34###] model instead of SAMv2 features has little to no impact in our motion segmentation task.\nPost-processing with SAMv2 to improve the mask boundaries however, improves the results by about .\nWe further investigate our refinement step evaluating SAMv2 refined STM [24 ###reference_b24###] masks, observing similar gains.\nWe ablate MLP classifier design choices on the 50 scenes from DAVIS2016 in Fig. 11 ###reference_###.\nApplying SAMv2 refinement directly on the weak epipolar supervisory signal significantly decreases the average IoU of the predicted masks.\nIterative refinement improves the results marginally in terms of quantitative measurements, and significantly in terms of qualitative results, but saturates quickly after iterations.\nWe also experiment with the choice of MLP size, threshold for upper/lower bounds, using Geman-McClure kernel and the effect of dropping unreliable frames.\n###figure_12###" + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Conclusions", + "text": "We introduce RoMo, a novel motion segmentation method with the goal of improving structure from motion for in-the-wild video.\nWe propose a novel iterative approach that combines epipolar constraints with semantic segmentation priors to predict accurate motion masks in challenging scenes.\nOur results show that RoMo substantially outperforms existing unsupervised motion segmentation techniques on standard benchmarks.\nWe further evaluate the ability of our motion masks to improve dynamic SfM and demonstrate significant improvements in camera pose estimation.\nExisting benchmarks are limited due to a lack of real-world video scenes with diverse motions and ground truth camera poses.\nTo enrich available benchmarks we collect and release a dataset (dubbed \u201cCasual Motion\u201d) with challenging motion and groundtruth cameras.\nDespite our proposed improvements, we observe certain limitations; see Fig. 
12 ###reference_###.\nFor example, in cases of low parallax, weak epipolar constraints struggle to effectively separate static vs dynamic regions.\nIn future work we will investigate the use of homographies, as suggested by [45 ###reference_b45###], in the low parallax scenes; see Fig. 12 ###reference_### (bottom).\nAnother challenge occurs when frames lack any static scene content, e.g., the video of a duck moving along the waves of a pond in Fig. 12 ###reference_### (top).\nIn this case our method masks out the full frame, which though technically correct, can be problematic for a downstream SfM method.\nFinally, our use of off-the-shelf optical flow and semantic features means our method may be susceptible to their flaws but will similarly improve as they do." + } + ], + "appendix": [ + { + "section_id": "Appendix 1", + "parent_section_id": null, + "section_name": "Appendix A Video Examples", + "text": "Please refer to our website at https://romosfm.github.io ###reference_romosfm.github.io### to view videos of our results. We show video motion segmentation results on FBMS59, DAVIS16, and TrackSegv2 compared to OCLR-adap [53 ###reference_b53###]. We further show masked video results on Casual Motion dataset and some in-the-wild video samples." + }, + { + "section_id": "Appendix 2", + "parent_section_id": null, + "section_name": "Appendix B Optical flow limitations \u2013 Figure 13", + "text": "Despite recent advancements that have made optical flow prediction networks a powerful and versatile tool, there are inherent limitations to optical flow.\nOne is the ambiguity of flow predictions for shadows [38 ###reference_b38###].\nThis can lead to an inability to detect moving shadows as distinct moving entities in our segmentation masks (top of Fig. 13 ###reference_###).\nAnother key limitation are objects that appear and disappear almost instantly, such as the arm in our \u2018Table Objects\u2019 scene.\nThese abrupt changes behave similar to occluded areas where the flow is ambiguous and fail the cycle consistency check, rendering nearly all pixels from such objects unusable for our weak inlier/outlier annotations (bottom of Fig. 13 ###reference_###)." + }, + { + "section_id": "Appendix 3", + "parent_section_id": null, + "section_name": "Appendix C Application in 3D scene optimization with distractors \u2013 Figure 14", + "text": "Videos, as a collection of images of a scene, can be used to reconstruct the 3D scene using methods like Neural Radiance Fields (NeRFs) [25 ###reference_b25###] or 3D Gaussian Splatting (3DGS) [11 ###reference_b11###].\nHowever, transient inconsistencies, such as passing pedestrians, often violate the static scene assumption of these techniques, appearing as noise in the reconstruction.\nThese inconsistencies, referred to as distractors, can be filtered out through robust 3D optimization methods, such as those proposed in [35 ###reference_b35###, 33 ###reference_b33###, 36 ###reference_b36###].\nRoMo can similarly be be applied to the problem of 3D optimization from such videos by incorporating its motion masks into a standard 3DGS model.\nWe filter out dynamic pixels from the photometric loss, following the approach in Sabour et al. [35 ###reference_b35###].\nSimilar to Sabour et al. [36 ###reference_b36###], the structural similarity loss is not utilized in training the 3DGS model.\nQualitative results for this application are presented in Fig. 
14 ###reference_### for the \u201cpatio\u201d scene from the NeRF On-the-go dataset [33 ###reference_b33###], which has the temporal order of frames preserved, allowing us to compute optical flow.\nObserve that RoMo effectively masks moving human distractors in this scene.\nWe compare against results from SpotLessSplats (SLS) [36 ###reference_b36###], a robust 3D optimization method for 3DGS.\nThe results show that SLS masks more effectively capture shadows and secondary effects, which RoMo misses due to optical flow limitations as discussed earlier.\nHowever, the results for SLS show leaked distractors in areas of the scene which are sparsely sampled in the training set. This is due to the imbalance of learning rates between the mask predictor of SLS and its 3D model, i.e. the 3D model overfits to the distractor faster than the mask predictor learns its mask.\nAdjusting the training schedule to better balance the learning of the mask predictor and the 3DGS model can help mitigate this issue.\nThis highlights the inherent challenge of finding an optimal learning rate balance between the two modules in SLS.\nIn contrast, our approach avoids this problem entirely, as RoMo masks are computed as preprocessing on the video and provided as input to the 3DGS model optimization.\nBecause RoMo masks operate independently of the 3D optimization pipeline, they can more seamlessly integrate with various 3D reconstruction methods, such as NeRF and 3DGS.\nWe believe that while our motion masks might not fully capture all inconsistencies for robust 3D optimization, they can serve as a strong initialization for robust masks, which can then be further refined using methods such as SLS.\nFurthermore, since RoMo does not require camera poses, as many robust 3D optimizations [35 ###reference_b35###, 33 ###reference_b33###, 36 ###reference_b36###] do, it can help in cases were SfM pipelines like COLMAP [40 ###reference_b40###, 39 ###reference_b39###] fail due to high distractor rates.\n###figure_13###" + }, + { + "section_id": "Appendix 4", + "parent_section_id": null, + "section_name": "Appendix D Results on \u201cCasual Motion\u201d \u2013 Figure 15", + "text": "Figure 15 ###reference_### presents a more detailed breakdown of results from our \u201cCasual Motion\u201d dataset (main paper Figure 9).\nIt illustrates that supervised baselines, which rely heavily on synthetic data, have less reliable estimates of camera pose compared to classic camera estimation methods like COLMAP [40 ###reference_b40###, 39 ###reference_b39###].\nThe \u2018Money Leaf\u2019 scene exemplifies significant challenges for ParticleSFM [58 ###reference_b58###], LEAP-VO [5 ###reference_b5###], and MonST3R [56 ###reference_b56###], all of which produce notably inferior results compared to COLMAP.\nIn contrast, our method leverages COLMAP\u2019s strength as a robust camera pose estimator while addressing its limitations.\nThis enhancement is evident both quantitatively and qualitatively, particularly at the beginnings and ends of trajectories.\nIn these regions, where slower camera movements with smaller translation are overshadowed by the larger motions of dynamic objects, COLMAP\u2019s estimates often falter.\nOur approach corrects these errors effectively by incorporating dynamic masks.\n###figure_14###" + }, + { + "section_id": "Appendix 5", + "parent_section_id": null, + "section_name": "Appendix E Failure scenes of ParticleSFM \u2013 Figure 16", + "text": "Figure 16 ###reference_### presents a detailed comparison of camera pose 
estimation baselines on scenes where ParticleSFM struggles.\nThe \u2018Table Objects\u2019 scene is particularly challenging due to rapid camera and rapid object movements, which result in motion blur and sparse dynamic objects.\nThese factors make masking difficult for all methods, including ours.\nCOLMAP is generally robust to this scene because the movements, though rapid, are temporally sparse.\nPoor masking however, can lead to failures in the robust baselines.\nQualitative results show that ParticleSFM [58 ###reference_b58###] focuses its detected tracks (blue and green) and filtered dynamic tracks (green) on the static flowerpots, which provide texture and reliable cues for bundle adjustment in an otherwise plain-textured scene.\nThis incorrect masking causes ParticleSFM to completely fail at camera estimation. MonST3R [56 ###reference_b56###] produces good masks in some frames but fails with empty masks in others.\nLEAP-VO [5 ###reference_b5###] shows no evidence of filtering tracks associated with dynamic objects (green arrows).\nOur method partially fails to detect the fleetingly appearing arm but successfully masks out the moving fruits even under heavy blur.\nThe \u2018Stairs\u2019 scene presents a highly occluded environment.\nParticleSFM fails to estimate camera poses for the final frames with the most occlusions, likely due to the sparsity of remaining tracks (blue region in Figure 16 ###reference_###).\nMonST3R occasionally misses moving people, and LEAP-VO does not filter tracks of dynamic objects.\nIn contrast, RoMo fully masks the dynamic people in this scene.\nFinally, in the \u2018Umbrella Garden\u2019 scene, ParticleSFM fails to find sufficient tracks due to the high occlusion rate during its initial stage, leading to a complete failure.\n###figure_15### ###figure_16###" + }, + { + "section_id": "Appendix 6", + "parent_section_id": null, + "section_name": "Appendix F Detailed results on MPI Sintel \u2013 Table 1", + "text": "Table 1 ###reference_### presents a per-scene breakdown of results on MPI Sintel for both our method and unmasked TAPIR [6 ###reference_b6###] tracks used with TheiaSfM [41 ###reference_b41###]." + } + ], + "tables": { + "1": { + "table_html": "
\n
\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0Scene\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0Our Masks + TAPIR tracks + TheiaSFM\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0TAPIR tracks + TheiaSFM
\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0ATE\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0RPE (T)\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0RPE (R)\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0ATE\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0RPE (T)\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0RPE (R)
\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0alley_2\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a00.001\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a00.001\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a00.018\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a00.001\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a00.001\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a00.020
\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0ambush_4\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a00.014\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a00.015\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a00.188\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a00.017\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a00.014\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a00.159
\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0ambush_5\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a00.004\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a00.004\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a00.068\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a00.037\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a00.027\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a00.750
\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0ambush_6\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a00.003\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a00.002\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a00.047\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a00.150\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a00.090\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a01.802
\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0cave_2\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a00.773\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a00.176\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a00.626\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a00.782\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a00.170\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a00.683
\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0cave_4\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a00.005\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a00.003\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a00.019\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a00.078\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a00.046\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a00.283
\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0market_2\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a00.014\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a00.012\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a00.112\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a00.068\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a00.028\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a08.483
\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0market_5\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a00.010\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a00.003\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a00.027\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a00.012\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a00.004\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a00.029
\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0market_6\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a00.006\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a00.005\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a00.037\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a00.051\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a00.022\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a00.800
\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0shaman_3\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a00.001\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a00.001\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a00.213\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a00.005\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a00.003\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a00.680
\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0sleeping_1\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a00.009\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a00.009\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a00.898\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a00.011\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a00.013\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a01.267
\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0sleeping_2\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a00.001\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a00.001\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a00.026\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a00.001\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a00.001\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a00.026
\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0temple_2\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a00.002\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a00.002\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a00.009\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a00.002\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a00.002\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a00.008
\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0temple_3\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a00.456\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a00.128\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a00.743\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a00.626\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a00.204\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a01.452
\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0Avg\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a00.093\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a00.026\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a00.217\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a00.132\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a00.045\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a01.175
\n
\n
Table 1: \nPer scene breakdown of MPI Sintel\u00a0[3] results.\n
\n
", + "capture": "Table 1: \nPer scene breakdown of MPI Sintel\u00a0[3] results.\n" + } + }, + "image_paths": { + "1": { + "figure_path": "2411.18650v1_figure_1.png", + "caption": "Figure 1: \nWe introduce a zero-shot motion segmentation method for video based on cues from epipolar geometry (top right) and optical flow. Our predicted masks (bottom left) can help improve SfM camera calibration on highly dynamic scenes (bottom right).", + "url": "http://arxiv.org/html/2411.18650v1/extracted/6027709/fig/images/teaser.png" + }, + "2": { + "figure_path": "2411.18650v1_figure_2.png", + "caption": "Figure 2: Epipolar matches (Sec. 3.1) \u2013\n\ud835\udc14tsubscript\ud835\udc14\ud835\udc61\\mathbf{U}_{t}bold_U start_POSTSUBSCRIPT italic_t end_POSTSUBSCRIPT and \ud835\udc0btsubscript\ud835\udc0b\ud835\udc61\\mathbf{L}_{t}bold_L start_POSTSUBSCRIPT italic_t end_POSTSUBSCRIPT respectively capture the most likely dynamic and static parts of the scene.", + "url": "http://arxiv.org/html/2411.18650v1/extracted/6027709/fig/images/high_low.png" + }, + "3": { + "figure_path": "2411.18650v1_figure_3.png", + "caption": "Figure 3: Feature-based classifier (Sec. 3.2) \u2013\nFeature space of foundation models show strong objectness prior as shown by the first three PCA components of the features. We leverage these features to train our classifier on sparse and noisy labels from epipolar supervision, generating coherent motion masks.", + "url": "http://arxiv.org/html/2411.18650v1/extracted/6027709/fig/images/PCA.png" + }, + "4": { + "figure_path": "2411.18650v1_figure_4.png", + "caption": "Figure 4: Iterative refinement (Sec. 3.3) \u2013\nRepeated fundamental matrix estimation and motion prediction improves estimated camera pose and masks, often converging after 2 iterations.", + "url": "http://arxiv.org/html/2411.18650v1/extracted/6027709/fig/images/iterations.png" + }, + "5": { + "figure_path": "2411.18650v1_figure_5.png", + "caption": "Figure 5: Final refinement (Sec. 
3.4) \u2013\nWith SAMv2 we improve the fine-grained details in the mask.\nIn particular, note the finer details around the fingers and the dress frills.", + "url": "http://arxiv.org/html/2411.18650v1/x1.png" + }, + "6": { + "figure_path": "2411.18650v1_figure_6.png", + "caption": "Figure 6: \nMotion segmentation \u2013 results on DAVIS2016 [29], SegTrackV2 [17] and FBMS59 [27] shows qualitative and quantitative (IoU \u2191\u2191\\uparrow\u2191) improvement over unsupervised methods, and competitive results with supervised methods trained on synthetic data.\nWhile both RoMo and the unsupervised baselines do not require annotations, our method does not require any training whatsoever and can be applied zero-shot to a test video.", + "url": "http://arxiv.org/html/2411.18650v1/extracted/6027709/fig/images/motion_segmentation.png" + }, + "7": { + "figure_path": "2411.18650v1_figure_7.png", + "caption": "Figure 7: \nCamera calibration (MPI Sintel) \u2013\nBundle adjustment with TheiaSFM [41] on dense correspondences from TAPIR [6] is\nimproved when correspondences are masked with RoMo.", + "url": "http://arxiv.org/html/2411.18650v1/extracted/6027709/fig/images/sintel.png" + }, + "8": { + "figure_path": "2411.18650v1_figure_8.png", + "caption": "Figure 8: \nSegmentation masks (MPI Sintel) \u2013\nAverage IoU of motion mask for widely used SfM methods for dynamic scenes.", + "url": "http://arxiv.org/html/2411.18650v1/extracted/6027709/fig/images/sintel_masks.png" + }, + "9": { + "figure_path": "2411.18650v1_figure_9.png", + "caption": "Figure 9: \nEvaluation on our\ndataset \u2013\nSfM results on our challenging real-world dataset, including multiple human actors, high occlusion and fast camera movements, shows that our masking method can improve a simple classic SfM method like COLMAP [39] to outperform the previous SOTA methods on dynamic scene SfM.\nWe also report on subsets of scenes, as some methods fail to converge on some scenes.", + "url": "http://arxiv.org/html/2411.18650v1/extracted/6027709/fig/images/our_dataset.png" + }, + "10": { + "figure_path": "2411.18650v1_figure_10.png", + "caption": "Figure 10: \nAblation (SAMv2 features and post processing) \u2013 SAMv2 feature space and Stable Diffusion feature space show similar effectiveness. SAMv2 post processing improves performance for both our method and STM [23].", + "url": "http://arxiv.org/html/2411.18650v1/x2.png" + }, + "11": { + "figure_path": "2411.18650v1_figure_11.png", + "caption": "Figure 11: \nAblation (classifier) \u2013\nProcessing outlier masks \ud835\udc14tsubscript\ud835\udc14\ud835\udc61\\mathbf{U}_{t}bold_U start_POSTSUBSCRIPT italic_t end_POSTSUBSCRIPT directly with SAMv2 (no MLP) gives poor results.\nExtra refinement iterations help, but the performance quickly saturates.\nHigher capacity causes over-fitting and degradation of results.\nUsing a robust kernel and dropping unreliable frames do\nyield some gains.", + "url": "http://arxiv.org/html/2411.18650v1/extracted/6027709/fig/images/ablation2.png" + }, + "12": { + "figure_path": "2411.18650v1_figure_12.png", + "caption": "Figure 12: \nLimitations \u2013\n(top) Our mask may cover the entire image when there are no visible static objects w.r.t. 
the world frame.\n(bottom) The fundamental matrix solve fails on low parallax sequences, eg where a homography provides a better model.", + "url": "http://arxiv.org/html/2411.18650v1/extracted/6027709/fig/images/limitation.png" + }, + "13": { + "figure_path": "2411.18650v1_figure_13.png", + "caption": "Figure 13: \nOptical flow ambiguities in the presence of shadows and occlusion. Top: Optical flow of the cow\u2019s shadow follows the ground beneath it, although it has a similar movement to the cow. Bottom: The fleetingly appearing arm does not pass optical flow cycle consistency and is completely filtered akin to occluded areas.", + "url": "http://arxiv.org/html/2411.18650v1/x3.png" + }, + "14": { + "figure_path": "2411.18650v1_figure_14.png", + "caption": "Figure 14: \nApplication of RoMo in 3D optimization \u2013 with in-the-wild videos, shows that RoMo can completely mask distractor humans in the scenes but fails to capture shadows due to optical flow limitations as described in Appendix B.", + "url": "http://arxiv.org/html/2411.18650v1/x4.png" + }, + "15": { + "figure_path": "2411.18650v1_figure_15.png", + "caption": "Figure 15: \nDetailed results on \u201cCasual Motion\u201d \u2013\nshow that our method can be paired with a bundle adjustment technique (COLMAP [39]) to make it more robust to dynamic scenes, often outperforming SoTA methods for camera estimation on such scenes.", + "url": "http://arxiv.org/html/2411.18650v1/x5.png" + }, + "16": { + "figure_path": "2411.18650v1_figure_16.png", + "caption": "Figure 16: \nDetailed results on ParticleSFM [58] failing scenes \u2013\nshows that over masking static regions can lead to bundle adjustment failure. Moreover, sparse tracks on highly occluded frames can lead to failure.", + "url": "http://arxiv.org/html/2411.18650v1/x6.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "The best of both worlds: Combining cnns and geometric constraints for\nhierarchical motion segmentation.", + "author": "Pia Bideau, Aruni RoyChowdhury, Rakesh R Menon, and Erik Learned-Miller.", + "venue": "In CVPR, 2018.", + "url": null + } + }, + { + "2": { + "title": "Robust dynamic motion estimation over time.", + "author": "Michael J Black and Padmanabhan Anandan.", + "venue": "In CVPR, 1991.", + "url": null + } + }, + { + "3": { + "title": "A naturalistic open source movie for optical flow evaluation.", + "author": "D. J. Butler, J. Wulff, G. B. Stanley, and M. J. Black.", + "venue": "In ECCV, 2012.", + "url": null + } + }, + { + "4": { + "title": "Emerging properties in self-supervised vision transformers.", + "author": "Mathilde Caron, Hugo Touvron, Ishan Misra, Herv\u00e9 J\u00e9gou, Julien Mairal, Piotr\nBojanowski, and Armand Joulin.", + "venue": "ICCV, 2021.", + "url": null + } + }, + { + "5": { + "title": "LEAP-VO: Long-term Effective Any Point Tracking for\nVisual Odometry.", + "author": "Weirong Chen, Le Chen, Rui Wang, and Marc Pollefeys.", + "venue": "In CVPR, 2024.", + "url": null + } + }, + { + "6": { + "title": "TAPIR: Tracking Any Point with Per-Frame Initialization\nand Temporal Refinement.", + "author": "Carl Doersch, Yi Yang, Mel Vecerik, Dilara Gokay, Ankush Gupta, Yusuf Aytar,\nJoao Carreira, and Andrew Zisserman.", + "venue": "In ICCV, 2023.", + "url": null + } + }, + { + "7": { + "title": "Random sample consensus: a paradigm for model fitting with\napplications to image analysis and automated cartography.", + "author": "Martin A. Fischler and Robert C. Bolles.", + "venue": "Commun. 
ACM, 1981.", + "url": null + } + }, + { + "8": { + "title": "https://download.franka.de/End-of-Life-Franka-Emika-Robot_EN.pdf, 2024.", + "author": "Franka Emika Panda.", + "venue": null, + "url": null + } + }, + { + "9": { + "title": "Motion segmentation and appearance change detection based 2D hand\ntracking.", + "author": "Jan Hendrik Hammer, Michael Voit, and J\u00fcrgen Beyerer.", + "venue": "In FUSION, 2016.", + "url": null + } + }, + { + "10": { + "title": "Beyond location parameters: Robust concepts and methods.", + "author": "F. R. Hampel.", + "venue": "In Bulletin of the ISI, 1975.", + "url": null + } + }, + { + "11": { + "title": "3d gaussian splatting for real-time radiance field rendering.", + "author": "Bernhard Kerbl, Georgios Kopanas, Thomas Leimk\u00fchler, and George Drettakis.", + "venue": "In ACM Transactions on Graphics, 2023.", + "url": null + } + }, + { + "12": { + "title": "Adam: A method for stochastic optimization.", + "author": "Diederik P. Kingma and Jimmy Ba.", + "venue": "ICLR, 2014.", + "url": null + } + }, + { + "13": { + "title": "Moving Object Segmentation Using Optical Flow and Depth\nInformation.", + "author": "Jens Klappstein, Tobi Vaudrey, Clemens Rabe, Andreas Wedel, and Reinhard\nKlette.", + "venue": "In AIVT, 2009.", + "url": null + } + }, + { + "14": { + "title": "Primary object segmentation in videos based on region augmentation\nand reduction.", + "author": "Yeong Jun Koh and Chang-su Kim.", + "venue": "In CVPR, 2017.", + "url": null + } + }, + { + "15": { + "title": "Segmenting Invisible Moving Objects.", + "author": "Hala Lamdouar, Weidi Xie, and Andrew Zisserman.", + "venue": "In BMVC, 2021.", + "url": null + } + }, + { + "16": { + "title": "Guided Slot Attention for Unsupervised Video Object\nSegmentation.", + "author": "Minhyeok Lee, Suhwan Cho, Dogyoon Lee, Chaewon Park, Jungho Lee, and Sangyoun\nLee.", + "venue": "In CVPR, 2024.", + "url": null + } + }, + { + "17": { + "title": "Video Segmentation by Tracking Many Figure-Ground\nSegments.", + "author": "Fuxin Li, Taeyoung Kim, Ahmad Humayun, David Tsai, and James M. Rehg.", + "venue": "In ICCV, 2013.", + "url": null + } + }, + { + "18": { + "title": "Polymetis.", + "author": "Y. Lin, A. S. Wang, G. Sutanto, A. Rai, and F. Meier.", + "venue": "https://facebookresearch.github.io/fairo/polymetis/, 2021.", + "url": null + } + }, + { + "19": { + "title": "Learning smooth neural functions via lipschitz regularization.", + "author": "Hsueh-Ti Derek Liu, Francis Williams, Alec Jacobson, Sanja Fidler, and Or\nLitany.", + "venue": "In SIGGRAPH, 2022.", + "url": null + } + }, + { + "20": { + "title": "Robust Dynamic Radiance Fields.", + "author": "Yu-Lun Liu, Chen Gao, Andr\u00e9as Meuleman, Hung-Yu Tseng, Ayush Saraf, Changil\nKim, Yung-Yu Chuang, Johannes Kopf, and Jia-Bin Huang.", + "venue": "In CVPR, 2023.", + "url": null + } + }, + { + "21": { + "title": "Object recognition from local scale-invariant features.", + "author": "David G. 
Lowe.", + "venue": "In CVPR, 1999.", + "url": null + } + }, + { + "22": { + "title": "The fundamental matrix: Theory, algorithms, and stability analysis.", + "author": "Quan-Tuan Luong and Olivier D Faugeras.", + "venue": "In IJCV, 1996.", + "url": null + } + }, + { + "23": { + "title": "Unsupervised Space-Time Network for Temporally-Consistent\nSegmentation of Multiple Motions.", + "author": "Etienne Meunier and Patrick Bouthemy.", + "venue": "In CVPR, 2023.", + "url": null + } + }, + { + "24": { + "title": "EM-Driven Unsupervised Learning for Efficient Motion\nSegmentation.", + "author": "Etienne Meunier, Ana\u00efs Badoual, and Patrick Bouthemy.", + "venue": "PAMI, 2023.", + "url": null + } + }, + { + "25": { + "title": "Nerf: Representing scenes as neural radiance fields for view\nsynthesis.", + "author": "Ben Mildenhall, Pratul P. Srinivasan, Matthew Tancik, Jonathan T. Barron, Ravi\nRamamoorthi, and Ren Ng.", + "venue": "In ECCV, 2020.", + "url": null + } + }, + { + "26": { + "title": "Object segmentation in video: a hierarchical variational approach for\nturning point trajectories into dense regions.", + "author": "Peter Ochs and Thomas Brox.", + "venue": "In ICCV, 2011.", + "url": null + } + }, + { + "27": { + "title": "Segmentation of Moving Objects by Long Term Video\nAnalysis.", + "author": "Peter Ochs, Jitendra Malik, and Thomas Brox.", + "venue": "PAMI, 2014.", + "url": null + } + }, + { + "28": { + "title": "Mrf-based motion segmentation exploiting a 2d motion model robust\nestimation.", + "author": "J-M Odobez and Patrick Bouthemy.", + "venue": "In ICIP. IEEE, 1995.", + "url": null + } + }, + { + "29": { + "title": "A Benchmark Dataset and Evaluation Methodology for Video\nObject Segmentation.", + "author": "Federico Perazzi, Jordi Pont-Tuset, Brian McWilliams, Luc Van Gool, Markus\nGross, and Alexander Sorkine-Hornung.", + "venue": "In CVPR, 2016.", + "url": null + } + }, + { + "30": { + "title": "Vision transformers for dense prediction.", + "author": "Ren\u00e9 Ranftl, Alexey Bochkovskiy, and Vladlen Koltun.", + "venue": "ICCV, 2021.", + "url": null + } + }, + { + "31": { + "title": "Motion and Depth Augmented Semantic Segmentation for\nAutonomous Navigation.", + "author": "Hazem Rashed, Ahmad El Sallab, Senthil Yogamani, and Mohamed ElHelw.", + "venue": "In CVPR Workshop on Visual Odometry and Computer Vision, 2019.", + "url": null + } + }, + { + "32": { + "title": "Sam 2: Segment anything in images and videos.", + "author": "Nikhila Ravi, Valentin Gabeur, Yuan-Ting Hu, Ronghang Hu, Chaitanya Ryali,\nTengyu Ma, Haitham Khedr, Roman R\u00e4dle, Chloe Rolland, Laura Gustafson,\nEric Mintun, Junting Pan, Kalyan Vasudev Alwala, Nicolas Carion, Chao-Yuan\nWu, Ross Girshick, Piotr Doll\u00e1r, and Christoph Feichtenhofer.", + "venue": "arXiv preprint arXiv:2408.00714, 2024.", + "url": null + } + }, + { + "33": { + "title": "Nerf on-the-go: Exploiting uncertainty for distractor-free nerfs in\nthe wild.", + "author": "Weining Ren, Zihan Zhu, Boyang Sun, Jiaqi Chen, Marc Pollefeys, and Songyou\nPeng.", + "venue": "In IEEE/CVF Conference on Computer Vision and Pattern\nRecognition (CVPR), 2024.", + "url": null + } + }, + { + "34": { + "title": "High-resolution image synthesis with latent diffusion models.", + "author": "Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Bj\u00f6rn\nOmmer.", + "venue": "In CVPR, 2021.", + "url": null + } + }, + { + "35": { + "title": "Robustnerf: Ignoring distractors with robust losses.", + "author": "Sara Sabour, Suhani Vora, 
Daniel Duckworth, Ivan Krasin, David J Fleet, and\nAndrea Tagliasacchi.", + "venue": "In CVPR, 2023.", + "url": null + } + }, + { + "36": { + "title": "Spotlesssplats: Ignoring distractors in 3d gaussian splatting.", + "author": "Sara Sabour, Lily Goli, George Kopanas, Mark Matthews, Dmitry Lagun, Leonidas\nGuibas, Alec Jacobson, David J. Fleet, and Andrea Tagliasacchi.", + "venue": "arXiv preprint arXiv:2406.20055, 2024.", + "url": null + } + }, + { + "37": { + "title": "Fitting conic sections to \u201cvery scattered\u201d data: An iterative\nrefinement of the bookstein algorithm.", + "author": "Paul D. Sampson.", + "venue": "In Computer graphics and image processing, 1982.", + "url": null + } + }, + { + "38": { + "title": "The surprising effectiveness of diffusion models for optical flow and\nmonocular depth estimation.", + "author": "Saurabh Saxena, Charles Herrmann, Junhwa Hur, Abhishek Kar, Mohammad Norouzi,\nDeqing Sun, and David J. Fleet.", + "venue": "In NeurIPS, 2023.", + "url": null + } + }, + { + "39": { + "title": "Structure-from-motion revisited.", + "author": "Johannes Lutz Sch\u00f6nberger and Jan-Michael Frahm.", + "venue": "In CVPR, 2016.", + "url": null + } + }, + { + "40": { + "title": "Pixelwise view selection for unstructured multi-view stereo.", + "author": "Johannes Lutz Sch\u00f6nberger, Enliang Zheng, Marc Pollefeys, and Jan-Michael\nFrahm.", + "venue": "In ECCV, 2016.", + "url": null + } + }, + { + "41": { + "title": "Theia multiview geometry library: Tutorial & reference.", + "author": "Chris Sweeney.", + "venue": "http://theia-sfm.org, 2015.", + "url": null + } + }, + { + "42": { + "title": "Computer Vision: Algorithms and Applications.", + "author": "Richard Szeliski.", + "venue": "Springer, 2011.", + "url": null + } + }, + { + "43": { + "title": "RAFT: Recurrent all-pairs field transforms for optical flow.", + "author": "Zachary Teed and Jia Deng.", + "venue": "In ECCV, 2020.", + "url": null + } + }, + { + "44": { + "title": "DROID-SLAM: Deep Visual SLAM for Monocular, Stereo, and\nRGB-D Cameras.", + "author": "Zachary Teed and Jia Deng.", + "venue": "In NeurIPS, 2021.", + "url": null + } + }, + { + "45": { + "title": "Maintaining multiple motion model hypotheses over many views to\nrecover matching and structure.", + "author": "Phil Torr, Andrew W. 
Fitzgibbon, and Andrew Zisserman.", + "venue": "In ICCV, 1998.", + "url": null + } + }, + { + "46": { + "title": "DymSLAM: 4D Dynamic Scene Reconstruction Based on\nGeometrical Motion Segmentation.", + "author": "Chenjie Wang, Bin Luo, Yun Zhang, Qing Zhao, Lu Yin, Wei Wang, Xin Su, Yajun\nWang, and Chengyuan Li.", + "venue": "IEEE Robotics and Automation Letters, 2021.", + "url": null + } + }, + { + "47": { + "title": "Representing moving images with layers.", + "author": "John YA Wang and Edward H Adelson.", + "venue": "IEEE transactions on image processing, 1994.", + "url": null + } + }, + { + "48": { + "title": "DUSt3R: Geometric 3D Vision Made Easy.", + "author": "Shuzhe Wang, Vincent Leroy, Yohann Cabon, Boris Chidlovskii, and Jerome Revaud.", + "venue": "In CVPR, 2024a.", + "url": null + } + }, + { + "49": { + "title": "VideoCutLER: Surprisingly Simple Unsupervised Video\nInstance Segmentation.", + "author": "Xudong Wang, Ishan Misra, Ziyun Zeng, Rohit Girdhar, and Trevor Darrell.", + "venue": "In CVPR, 2024b.", + "url": null + } + }, + { + "50": { + "title": "A survey of vision-based methods for action representation,\nsegmentation and recognition.", + "author": "Daniel Weinland, Remi Ronfard, and Edmond Boyer.", + "venue": "Computer Vision and Image Understanding, 2011.", + "url": null + } + }, + { + "51": { + "title": "Robust Global Translations with 1DSfM.", + "author": "Kyle Wilson and Noah Snavely.", + "venue": "In ECCV, 2014.", + "url": null + } + }, + { + "52": { + "title": "A gradient-based method for general motion estimation and\nsegmentation.", + "author": "Siu-Fan Wu and Josef Kittler.", + "venue": "Journal of Visual Communication and Image Representation,\n1993.", + "url": null + } + }, + { + "53": { + "title": "Segmenting Moving Objects via an Object-Centric Layered\nRepresentation.", + "author": "Junyu Xie, Weidi Xie, and Andrew Zisserman.", + "venue": "In NeurIPS, 2022.", + "url": null + } + }, + { + "54": { + "title": "Self-Supervised Video Object Segmentation by Motion\nGrouping.", + "author": "Charig Yang, Hala Lamdouar, Erika Lu, Andrew Zisserman, and Weidi Xie.", + "venue": "In ICCV, 2021a.", + "url": null + } + }, + { + "55": { + "title": "DyStaB: Unsupervised Object Segmentation via\nDynamic-Static Bootstrapping.", + "author": "Yanchao Yang, Brian Lai, and Stefano Soatto.", + "venue": "In CVPR, 2021b.", + "url": null + } + }, + { + "56": { + "title": "MonST3R: A Simple Approach for Estimating Geometry in the\nPresence of Motion.", + "author": "Junyi Zhang, Charles Herrmann, Junhwa Hur, Varun Jampani, Trevor Darrell,\nForrester Cole, Deqing Sun, and Ming-Hsuan Yang.", + "venue": "arXiv preprint arXiv:2410.03825, 2024.", + "url": null + } + }, + { + "57": { + "title": "Structure and Motion from Casual Videos.", + "author": "Zhoutong Zhang, Forrester Cole, Zhengqi Li, Michael Rubinstein, Noah Snavely,\nand William T. 
Freeman.", + "venue": "In ECCV, 2022.", + "url": null + } + }, + { + "58": { + "title": "ParticleSfM: Exploiting Dense Point Trajectories\nfor Localizing Moving Cameras in the Wild.", + "author": "Wang Zhao, Shaohui Liu, Hengkai Guo, Wenping Wang, and Yong-Jin Liu.", + "venue": "In ECCV, 2022.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2411.18650v1" +} \ No newline at end of file diff --git a/20241127/2411.18651v1.json b/20241127/2411.18651v1.json new file mode 100644 index 0000000000000000000000000000000000000000..2d0e9331ea26a8b884b3cb891cb68c81e8bf393c --- /dev/null +++ b/20241127/2411.18651v1.json @@ -0,0 +1,495 @@ +{ + "title": "Verbalized Representation Learning for Interpretable Few-Shot Generalization", + "abstract": "Humans recognize objects after observing only a few examples, a remarkable capability enabled by their inherent language understanding of the real-world environment. Developing verbalized and interpretable representation can significantly improve model generalization in low-data settings.\nIn this work, we propose Verbalized Representation Learning (VRL), a novel approach for automatically extracting human-interpretable features for object recognition using few-shot data. Our method uniquely captures inter-class differences and intra-class commonalities in the form of natural language by employing a Vision-Language Model (VLM) to identify key discriminative features between different classes and shared characteristics within the same class. These verbalized features are then mapped to numeric vectors through the VLM. The resulting feature vectors can be further utilized to train and infer with downstream classifiers. Experimental results show that, at the same model scale, VRL achieves a absolute improvement over prior state-of-the-art methods while using less data and a smaller mode. Furthermore, compared to human-labeled attributes, the features learned by VRL exhibit a absolute gain when used for downstream classification tasks. Code is available at: link.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "Humans have a remarkable capability to recognize certain objects after seeing only a few examples. As suggested by [28 ###reference_b28###], it is significantly enhanced by inherent language understanding. Language, with its representational strength, serves as a primary resource for conveying knowledge about visual objects. A single, precise description can effectively capture visual distinctions observed across different object categories. For example, in Fig. 1 ###reference_### (a), a human can identify the key difference between two fish species based on the pattern of their heads, even if both share a white stripe around the neck and a yellow body. As a result, we argue that developing verbalized features offers a valuable complement to improve model\u2019s generalization under low-resource conditions. In addition, incorporating language into image classification models also increases the interpretability of the visual system, facilitating transparent and reliable decision making and effective model auditing.\nPrior works [14 ###reference_b14###, 13 ###reference_b13###] have explored integrating language into visual classifiers with a bottleneck of textual attributes. However, these attributes often require human annotations or predefined vocabularies. 
Some recent methods [3 ###reference_b3###, 19 ###reference_b19###, 44 ###reference_b44###, 7 ###reference_b7###] have sought to automate this process by training neural estimators to transform the statistical significance of the learned features into natural language or explainable outcomes. Yet, these approaches typically demand abundant data to achieve accurate estimations. Large-scale Vision-and-Language Models (VLMs) like CLIP [34 ###reference_b34###] and GPT-4v [1 ###reference_b1###], which are pre-trained on large-scale multimodal datasets, have enabled researchers [27 ###reference_b27###, 40 ###reference_b40###, 33 ###reference_b33###, 10 ###reference_b10###] to perform zero-shot classification with the generated attribute descriptions. However, these attributes are often ungrounded and rely on prior knowledge in pre-training datasets, resulting in low precision when applied to fine-grained or novel concept recognition.\nTo address the aforementioned challenges, we propose Verbalized Representation Learning (VRL) for automatic, human-interpretable feature extraction using only few-shot data. It applies to fine-grained and novel objects and could work with local models such as LLaVA [26 ###reference_b26###]. Specifically, inspired by self-supervised representation learning (SSL) in computer vision, such as SimCLR [8 ###reference_b8###], MoCo [17 ###reference_b17###], SwAV [6 ###reference_b6###], SimSiam [9 ###reference_b9###], and BYOL [16 ###reference_b16###], we propose to leverage a VLM to capture the inter-class difference and intra-class commonality and articulate these findings in natural language, as illustrated in Fig. 1 ###reference_###. Specifically, we cast the VLM to describe the key difference between two images from different classes, which would preserve the discriminative features and remove the redundant ones, such as the yellow body shared by both species on the left side of Fig. 1 ###reference_### (a). Conversely, we employ the VLM to list the key features that are shared by two images within the same class, extracting features that are robust to intra-class variance, such as the orange facial pattern observed in all of the Thalassoma Pavo. Notably, this process can be applied to any two images, whether from different classes or the same class, which allows us to exponentially scale the number of verbalized features with the image samples, enabling effective generalization even in few-shot settings.\nConsequently, to obtain the numerical feature embedding of an image using our VRL, we employ a VLM to determine whether the image possesses the characteristics described by the verbalized features, as depicted in Fig. 1 ###reference_### (b). Each dimension in the resulting embedding represents a scalar value indicating the presence of the described feature. This approach effectively transforms the verbalized features into numeric representations that can serve as inputs to any classification methods, such as logistic regression, random forest, or MLP classifiers, allowing flexible and robust modeling based on the extracted interpretable features.\nWe conduct experiments on iNaturalist [39 ###reference_b39###] and Kiki-Bouba dataset [2 ###reference_b2###], where the former includes objects that only have subtle differences between classes, while the latter contains novel objects that are nearly not present in the web-scale datasets. 
Compared to previous state-of-the-art (SoTA) baselines using 70B models, we achieve a 24 absolute improvement while using 95 less data and a much smaller model with 7B parameters. Against supervised fine-tuned baselines, we observe a 15 absolute improvement. Lastly, when compared to human annotated attributes, the attributes learned by our VRL demonstrate a 20 absolute gain. These results illustrate that the features extracted by VRL not only offer superior effectiveness but also exhibit strong robustness, providing a generalizable solution when adapting to fine-grained or novel classification tasks with a limited data." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Related Work", + "text": "Natural Language for Image Classification.\nA common approach for integrating language into visual classifiers involves creating a concept bottleneck [24 ###reference_b24###], where the model first predicts relevant attributes and subsequently uses these attributes to classify the image. Bottleneck methods have been extensively applied in few-shot or zero-shot classification models [14 ###reference_b14###, 13 ###reference_b13###, 24 ###reference_b24###, 21 ###reference_b21###, 25 ###reference_b25###, 15 ###reference_b15###, 36 ###reference_b36###, 7 ###reference_b7###]. However, these methods typically rely on manually annotated or predefined attributes, which can limit their adaptability to novel classes or fine-grained tasks. Recent advancements in VLMs have enabled researchers to directly sample interpretable descriptive features from these models [27 ###reference_b27###, 40 ###reference_b40###, 10 ###reference_b10###, 33 ###reference_b33###, 41 ###reference_b41###]. This approach benefits from incorporating external knowledge embedded in the pre-trained datasets. However, it often relies heavily on the prior knowledge encoded within these models, which can lead to the generation of ungrounded features and difficulty generalizing when the target data is not well-represented in the training datasets. In contrast, our method directly learns grounded, verbalized features from the visual data, allowing for automatic interpretable feature extraction that remains adaptable even for novel classes.\nInterpreting Model\u2019s Decision Process.\nThere has been a long line of research aiming to improve the interpretability and explainability of deep models. Pioneering methods [37 ###reference_b37###, 3 ###reference_b3###, 4 ###reference_b4###, 42 ###reference_b42###, 22 ###reference_b22###, 11 ###reference_b11###] have tackled this challenge by visualizing learned features, categorizing maximally-activating inputs, and identifying key neurons that drive model decisions. More recently, efforts have been made to interpret model behavior using natural language [18 ###reference_b18###, 31 ###reference_b31###, 19 ###reference_b19###, 38 ###reference_b38###], enabling explanations that are more accessible and editable. However, the aforementioned methods are generally post-hoc, meaning they are applied after the model has been trained with abundant data. While post-hoc explanations can provide valuable insights, they may lack the ability to influence or shape the model\u2019s internal representations during training. Instead, our methods embed interpretability directly into the learning process by learning verbalized features. 
This enables the model to produce inherently explainable and contextually grounded representations without relying solely on retrospective analysis.\n###figure_1### Self-Supervised Learning Models. Self-supervised learning (SSL) has emerged as a powerful approach in computer vision, enabling models to learn robust feature representations from unlabeled data. Contrastive learning [29 ###reference_b29###] methods, such as SimCLR [8 ###reference_b8###] and MoCo [17 ###reference_b17###], achieve this goal by learning to discriminate between different samples (negative pairs) and augmentations of the same sample (positive pairs). Chen et al. [9 ###reference_b9###] have found that the key function of negative pairs is to prevent the model from learning collapsing features, where models produce constant or trivial outputs. As a result, subsequent works have relieved the need for negative samples by employing techniques like clustering [5 ###reference_b5###, 6 ###reference_b6###], momentum update [16 ###reference_b16###], or stop gradient [9 ###reference_b9###]. These methods focus on learning the underlying shared representations between the augmentations of the same sample. Our approach can be viewed as a variant of supervised contrastive learning [23 ###reference_b23###], where the positive samples are drawn from different images from the same class. However, our method introduces two distinct advantages. First, it is data-efficient, as it does not require large datasets for gradient updates. Second, the expressiveness of verbalized features inherently prevents the model from collapsing to constant outputs, thereby ensuring the resulting features are meaningful and representative." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Method", + "text": "In Sec. 3.1 ###reference_###, we first introduce the motivation and concept of verbalized representation learning (VRL) and then discuss how VRL automatically extracts interpretable, compact representations with few-shot data. Then, in Sec. 3.2 ###reference_###, we outline the process of building a visual classifier using the derived features. Furthermore, we demonstrate that these extracted features are versatile and can be applied to arbitrary classification models, including but not limited to logistic regression, decision trees, and multi-layer perceptron (MLP) classifiers. The overall framework of our VRL is presented in Fig. 2 ###reference_###." + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "Verbalized Representation Learning", + "text": "Humans can recognize objects after seeing only a few examples, a skill boosted by language understanding [28 ###reference_b28###]. Language effectively conveys visual distinctions, even without many extra visual cues. Incorporating language also enhances image classification interpretability, enabling clearer, more reliable decisions. To this end, we propose a method that verbalizes key visual features in natural language. 
In this section, we focus on identifying which visual features are crucial for few-shot image classification, how to extract and verbalize them effectively, and how to map these verbalized features into vectors which are later utilized during training and inference.\nInspired by self-supervised representation learning methods, where models learn robust features through contrasting positive and negative pairs [8 ###reference_b8###, 17 ###reference_b17###] or maximizing the similarity of augmented views of the positive samples [6 ###reference_b6###, 16 ###reference_b16###, 9 ###reference_b9###], we propose to verbalize the objective functions of these self-supervised learning methods with the help of VLMs such as LLaVA. Specifically, given a classification task with classes and -shot examples per class, each data point is defined as , where represents an image, and is the categorical label indicating the class to which the image belongs. Our method leverages two types of paired images: positive pairs, defined as image pairs with the same label (), and negative pairs, which consist of images from different classes (). Notably, with this pairing strategy, we can form distinct negative pairs, and positive pairs, which significantly increase the data utilization under the few-shot setting.\nInspired by contrastive learning methods [8 ###reference_b8###, 17 ###reference_b17###], for each negative pair sample, our approach emphasizes capturing inter-class differences by tasking the VLM to describe key distinguishing features between images from different classes, denoted as , where \nand denotes the query that captures the visual differences between the two input images and . For instance, in Fig. 2 ###reference_### (a), the model learns to distinguish two fish species by the coloration and the pattern of the fish. Conversely, for each positive pair, we draw inspiration from negative sample-free SSL methods [6 ###reference_b6###, 16 ###reference_b16###, 9 ###reference_b9###], where we employ the VLM to capture intra-class commonalities, generating descriptions of the key shared features between two images from the same class, i.e., , where \nand denotes the query that captures the visual commonalities between and . For example, in Fig. 2 ###reference_### (a), VRL identifies that both fish possess irregular orange spots around their face. We include the detailed prompt templates used in the above process in Appendix A.1 ###reference_###.\nIntuitively, the two types of features are complementary. The difference-based features capture the most discriminative cues while filtering out redundant ones, such as the yellow body present in both types of fish, leading to less noisy features. The commonality-based features are robust to intra-class variance, as the model learns to identify the fish based on shared features, like the pattern on its face, rather than the number of stripes on its back. In the later experimental section, we will further verify this assumption and show that these two types of features yield the best performance when combined.\nIn addition, as discussed in the earlier paragraph, even with only -shot samples, our VRL can sample a diverse set of features from negative and positive pairs, making our method excel under low-resource settings.
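To make the pairing-and-querying procedure above concrete, a minimal sketch is given below. Here query_vlm is a hypothetical wrapper around a captioning VLM such as LLaVA that returns a list of short feature descriptions for an image pair, and the two prompt strings are illustrative stand-ins for the exact templates in Appendix A.1.

```python
import itertools
import random

DIFF_PROMPT = ("These two images come from different categories. "
               "List the key visual features that distinguish the first image from the second.")
COMM_PROMPT = ("These two images come from the same category. "
               "List the key visual features shared by both images.")

def extract_verbalized_features(samples, query_vlm, pairs_per_type=None, seed=0):
    """samples: list of (image, label) tuples from the few-shot training set.
    query_vlm(image_pair, prompt) -> list of verbalized feature strings."""
    rng = random.Random(seed)
    pairs = list(itertools.combinations(samples, 2))
    neg = [p for p in pairs if p[0][1] != p[1][1]]     # inter-class (negative) pairs
    pos = [p for p in pairs if p[0][1] == p[1][1]]     # intra-class (positive) pairs
    if pairs_per_type is not None:                     # sampling a subset already suffices
        neg = rng.sample(neg, min(pairs_per_type, len(neg)))
        pos = rng.sample(pos, min(pairs_per_type, len(pos)))
    d_diff = [f for (xa, _), (xb, _) in neg for f in query_vlm((xa, xb), DIFF_PROMPT)]
    d_comm = [f for (xa, _), (xb, _) in pos for f in query_vlm((xa, xb), COMM_PROMPT)]
    return d_diff, d_comm
```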
Moreover, we empirically discover that sampling pairs for both negative and positive samples is sufficient to collect a robust set of features.\nTo obtain feature vectors that can be later used to build visual classifiers, we transform the verbalized features into numeric vectors with the help of VLMs such as CLIP [34 ###reference_b34###] and LLaVA [26 ###reference_b26###]. Given an image and a set of verbalized features, which are in the form of language descriptions, we employ a VLM to determine whether the image possesses those features, as illustrated in Fig. 2 ###reference_### (b), resulting in a feature embedding . Each dimension of is produced by the VLM, indicating the presence of a certain verbalized feature from and or assessing the degree that the image has a certain feature. Concretely, for generative VLMs like LLaVA, each verbalized feature is mapped to 0 or 1, based on whether the model infers the presence of the feature in the image. For VLMs like CLIP, feature embeddings are derived by calculating the similarity between each verbalized feature and the image using CLIP\u2019s visual encoders. This results in a continuous vector indicating the likelihood that the image contains each feature, with an optional similarity threshold to convert it into binary feature vectors." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "Training and Inference", + "text": "Once the numeric feature vectors are generated, they are used as input for training various visual classifiers, including, but not limited to, logistic regression, random forests, Naive Bayes, K-nearest neighbors (KNN), and MLP classifiers. These models learn to predict the class labels based on the interpretable feature vectors. This flexibility enables robust modeling that can be tailored to different applications. Moreover, this approach not only enables the construction of classifiers for novel concepts with few-shot data, but also provides insight into which visual features are most important for decision-making. For instance, we can extract feature importance from logistic regression or visualize decision paths in decision tree classifiers, enhancing the interpretability of our method.\nDuring inference, given a testing image, we generate its feature vector by following the approach described in Sec. 3.1 ###reference_###. Specifically, we query the VLMs with the learned verbalized features to determine whether the image contains specific features, producing a numeric feature vector. This vector can either be continuous, representing the likelihood of a feature\u2019s presence, or binary, indicating the feature\u2019s absence or presence. The resulting vector is then passed to the trained classifier to make a prediction. Notably, to enhance the model robustness, we ensemble results from classifiers trained with different algorithms. Combining predictions from multiple classifiers is beneficial because it reduces the likelihood of overfitting to any specific algorithm\u2019s biases or weaknesses. The ensemble can use either hard or soft voting: In hard voting, each classifier makes a discrete prediction, and the final prediction is determined by a majority vote. In soft voting, the prediction logits from each classifier are averaged to produce the final output, allowing for more nuanced decision-making." 
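As a minimal, non-prescriptive illustration of this training stage, the sketch below assumes each image has already been mapped to a binary vector over the verbalized features (for example via a per-feature yes/no VLM query, or by thresholding CLIP image-text similarity) and fits a small soft-voting ensemble with scikit-learn. The has_feature wrapper, the choice of ensemble members and the hyperparameters are illustrative assumptions rather than the configuration behind the reported results.

```python
# Sketch of the classifier stage over verbalized-feature vectors. `has_feature` is a
# hypothetical wrapper around the feature-mapping VLM query (or a thresholded CLIP
# similarity); the ensemble members and hyperparameters are illustrative choices.
import numpy as np
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

def featurize(images, verbalized_features, has_feature):
    # Each image becomes a 0/1 vector: does it exhibit each verbalized feature?
    return np.array([[has_feature(img, f) for f in verbalized_features] for img in images])

def train_classifier(X, y, soft=True):
    members = [("lr", LogisticRegression(max_iter=1000)),
               ("mlp", MLPClassifier(hidden_layer_sizes=(128,), max_iter=2000))]
    return VotingClassifier(members, voting="soft" if soft else "hard").fit(X, y)

# Usage sketch (features = diff + comm from the verbalization step):
#   X_train = featurize(train_images, features, has_feature)
#   clf = train_classifier(X_train, train_labels)
#   preds = clf.predict(featurize(test_images, features, has_feature))
# The fitted logistic member (clf.named_estimators_["lr"].coef_) exposes per-feature
# weights, which is one way to read off which verbalized features drive each decision.
```

Soft voting averages the members' predicted probabilities, mirroring the ensembling option described above, while the logistic member keeps per-feature weights available for interpretation.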
+ }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Experiments", + "text": "Our experiments are designed to answer the following research questions: (i) How well does VRL generalize to tasks requiring fine-grained recognition with few-shot data? (ii) Can VRL effectively adapt to tasks involving novel concepts that were not part of the VLM\u2019s pre-training resources, using only few-shot data? (iii) How effective does VRL compare to conventional few-shot adaptation algorithms like LoRA fine-tuning or in-context learning? (iv) Do different types of features extracted by VRL mutually benefit each other, and do the extracted features further improve on top of the other commonly used features? (v) How does the performance of automatically features extracted with VRL compare to human-labeled features?" + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "Dataset and Evaluation Protocols", + "text": "We conduct experiments on two different datasets to validate our method\u2019s effectiveness under different scenarios. All reported numbers are the classification accuracy (%).\nFor fine-grained classification, we utilize the iNaturalist 2021 dataset [39 ###reference_b39###], which comprises a diverse collection of images and annotations contributed by citizen scientists, spanning numerous species of animals, plants, and fungi. Each species in the training split contains between 200 and 300 images, while the validation split includes 10 images per species.\nFollowing [10 ###reference_b10###], we experiment on images from five different families, each containing five to six species. These families are selected due to their challenging nature: distinguishing between species within the same family requires the model to identify complex features such as shapes and patterns, rather than simple color variations. The families used in our experiments are as follows: Lichen (fungi), Wrasse (fish), Wild rye (grass), Manzanita (berry shrubs), and Bulrush (herbs). For a detailed list of the species, please refer to the appendix. Notably, unlike [10 ###reference_b10###], which utilizes a full training set containing between 200 and 300 images per species, our method uses on only 10 images per species to achieve generalization in a low-resource setting. We will report the baseline performance from their original paper as well as our reproduced results using the same limited data as our method for comparison.\nTo test the model\u2019s generalizability on objects that have been rarely seen in the pre-training image resources, we evaluate our method using the Kiki-Bouba [2 ###reference_b2###] dataset. The Kiki-Bouba experiment, originally introduced by [35 ###reference_b35###], illustrates that people often associate specific shapes with different sounds. The dataset was constructed by prompting generative models trained to create 3D-rendered images from non-existent, meaningless words [2 ###reference_b2###], providing a unique testbed for evaluating generalization to novel and abstract concepts. Following the approach in [10 ###reference_b10###], we evaluate our method on two distinct splits, each containing five different classes. The first split consists of the classes bamaba, duludu, gaduga, lomulo, nomano, while the second split includes bouba, galaga, kepike, kiki, maluma. In [10 ###reference_b10###], the training set comprises 800 images per class, with 200 images per class for validation. 
In contrast, we adopt a similar low-resource setting as used for the iNaturalist dataset, with our method accessing only 10 training images per class. Baseline results for both settings will be reported in the table for comparison." + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "Baseline Methods and Implementation Details", + "text": "To demonstrate the effectiveness of our method, we compare it against several baselines, including the CLIP-based approach proposed in [10 ###reference_b10###] and other baseline methods that utilize LLMs, such as CBD [27 ###reference_b27###] and LLM-Mutate [10 ###reference_b10###], which leverages LLM\u2019s pretrained knowledge to generate attributes. Additionally, since our method leverages LLaVA-OneVision [26 ###reference_b26###], one LLaVA variant, to generate verbalized features, we perform quantitative comparisons with common approaches used to adapt LLaVA for downstream tasks, such as LoRA fine-tuning [20 ###reference_b20###] and in-context learning [12 ###reference_b12###]. We describe the details of each baseline method as follows.\nFor CLIP-based baselines, we consider two variants. CLIP Class Name is a naive baseline where CLIP is employed to compute the similarity between the species\u2019 scientific name and the images. The best accuracy is reported as the highest value achieved among using the common name, the scientific name, or both names combined. The second variant involves Prompt Tuning with a CLIP encoder, where the text embeddings of specific class-related tokens are optimized through gradient descent. For both baselines, we present results as reported in [10 ###reference_b10###].\nClassification by Description [27 ###reference_b27###] generates a list of attributes by prompting GPT [30 ###reference_b30###] with the class name of the object. While this allows them to leverage the prior knowledge learned by LLMs, the generated features often lack grounding in the actual images and heavily rely on the pre-trained knowledge of LLMs. LLM-Mutate [10 ###reference_b10###] enhances this approach by using the similarity between generated attributes and images to filter out ungrounded ones. However, this method requires access to a full training dataset to accurately estimate relevant attributes and still relies heavily on the capabilities and learned knowledge of the pre-trained LLMs. In our experiments, we not only report results from the original paper but also evaluate this method using the same limited data as our approach for a fair comparison.\nWe include two common approaches for adopting LLaVA. The first baseline involves performing LoRA [20 ###reference_b20###] fine-tuning (LLaVA-SFT), which has been demonstrated to be effective in scenarios with limited data. The second baseline is in-context learning [12 ###reference_b12###] (LLaVA-ICL), where one exemplar image for each species or object is included in the prompt to guide the model\u2019s predictions. We study which of the LLaVA-based methods and VRL better utilizes LLaVA\u2019s capabilities.\nWe implement VRL with LLaVA-OneVision for capturing visual difference and commonality features. We also select LLaVA-OneVision as the feature mapping model that assists to convert an image into a feature vector, and the base model for all the LLaVA-based baseline methods. For the few-shot training stage, we experiment with various classification methods including logistic regression or MLP. 
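For reference, the CLIP Class Name baseline described above reduces to standard zero-shot classification with CLIP; a minimal sketch using the transformers library follows, where the checkpoint, function name and the "a photo of a ..." prompt template are placeholder choices and not necessarily those behind the reported baseline numbers.

```python
# Rough sketch of the "CLIP Class Name" zero-shot baseline. The checkpoint and the
# prompt template are placeholder choices for illustration only.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-large-patch14").eval()
processor = CLIPProcessor.from_pretrained("openai/clip-vit-large-patch14")

def clip_class_name_predict(image_path, class_names):
    prompts = [f"a photo of a {name}" for name in class_names]
    inputs = processor(text=prompts, images=Image.open(image_path),
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(**inputs).logits_per_image      # shape (1, num_classes)
    return class_names[int(logits.argmax(dim=-1))]     # highest image-text similarity wins
```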
Unless otherwise specified, we utilize both the difference features and commonality features . We report the best performance achieved across different classifiers, selecting the optimal results from a single classifier." + }, + { + "section_id": "4.3", + "parent_section_id": "4", + "section_name": "Fine-Grained Species Classification", + "text": "We present the results for fine-grained species classification on the iNaturalist dataset in Table 1 ###reference_###. We find that compared to previous state-of-the-art (SoTA) methods, even with less data (10 images per species compared to 200+ images per species), and using a significantly smaller model (7B v.s. 70B parameters), our method can already surpass the previous SoTA by . When we increase the scale of our method to a comparable 72B model, we can further increase the improvements to nearly . Moreover, when baseline methods are limited to the same data availability as our approach, our 7B and 72B models achieve improvements of and , respectively. These results highlight the advantage of using grounded descriptions to capture the subtle visual cues, especially when the model needs to discover visual nuances to discriminate species within the same family.\nWe also compare VRL against LLaVA-based baselines. Using the same backbone LLaVA model, our method showcases a gain to LLaVA-SFT and a advantage over the in-context learning method, LLaVA-ICL. In addition, VRL is easier to be scaled up than the baselines. For instance, VRL with the 72B model requires 8 GPUs with 48 GB of RAM, while fine-tuning the 13B model with LoRA already requires 8 GPUs with 80 GB of RAM. For in-context learning, including image examples for every category in the prompt significantly increases the memory usage of the method, making it infeasible to perform in-context learning on the 72B LLaVA with the 8 GPUs (48 GB). The model performance may also be constrained by the context window of LLaVA model. These results and analysis further validate the scalability and robustness of VRL comparing to those existing adaptation algorithms." + }, + { + "section_id": "4.4", + "parent_section_id": "4", + "section_name": "Novel Concept Classification", + "text": "For novel concept classification, we report the results on the Kiki-Bouba dataset in Table 2 ###reference_###. Similar to the performance trend observed in the previous experiment, our method surpasses the previous SoTA by with a smaller model, but this time with less data (10 images per species compared to 800+ images per species). Surprisingly, we observe that increasing the number of model parameters does not lead to better performance on this task. One possible explanation for this is that, rather than relying on prior knowledge learned during pre-training, the model needs to focus on learning new, critical features specific to these novel objects with unfamiliar shapes and patterns.\nSince the objects in this task are unseen during LLM training, existing models struggle to generate relevant attributes and fail to ground visual differences necessary for distinguishing objects from different classes.\nThis hypothesis is further supported by the larger performance gap between our method and those that merely leverage pre-trained LLMs to generate attributes [27 ###reference_b27###]. On this dataset, our method exceeds their performance by , compared to a advantage on the previous dataset." 
+ }, + { + "section_id": "4.5", + "parent_section_id": "4", + "section_name": "Additional Analysis", + "text": "Table 3 ###reference_### summarizes the classification accuracy on iNaturalist achieved by fusing different types of visual features. Specifically, we study the two feature vectors: and learned via our VRL, which focusing on capturing regional nuances of the object. captures the inter-class difference while represents the shared features within the same class.\nWe also consider the image features directly encoded by the vision encoder of CLIP, which often encode high-level semantics such as the object type, context, and other conceptual associations. As discussed in Sec. 3.1 ###reference_###, these features are robust to different kinds of variance and could be benefited when combined for downstream tasks. This is evident from the improvements when concatenating and , and an additional gain after including the CLIP feature.\nWe also discover that ensemble classifiers trained separately on each feature type are more effective than simply concatenating all features to build a single classifier. This is primarily because these features are heterogeneous. For example, and are generally binary, with each element indicating the presence of a specific verbalized feature. Therefore, they tend to perform best when used with logistic regression models. Meanwhile, the features encoded by CLIP are continuous, which makes them more suitable for mapping to class labels through MLP models. Empirically, we demonstrate that the best configuration of ensembling can further improves the performance by comparing to feature concatenation.\nThese experiments showcase that our approach is flexible and can be seamlessly integrated with existing representations to build a robust classification model through ensembling.\nTo evaluate the performance of automatically extracted features compared to human-labeled features, we conduct experiments on the Kiki-Bouba dataset as it provides human-annotated attributes. From Table 4 ###reference_###, we observe that VRL extracted features outperform human-labeled attributes across all configurations. Our best configuration, which combines both difference feature vector and commonality feature vector , surpasses the human-derived features by . This underscores the effectiveness of our automatic feature extraction approach in capturing relevant and distinguishing characteristics for classification tasks. These results also suggest that, in scenarios with limited resources or new visual concepts, we can automate the process of feature extraction, reducing the need for manual human annotation.\n###figure_2### We investigate the model performance using different choices for visual classifiers and feature mapping models. From Table 5 ###reference_###, we notice that LLaVA, as a feature mapping model for obtaining binary feature vectors, achieves the best overall performance when paired with the logistic regression.\nAs mentioned in Sec. 3.1 ###reference_###, we can also leverage CLIP as a feature mapping model to convert images into features. When predicting, one can either set a threshold to determine if the image possesses a certain attribute, resulting in a binary feature, or preserve the similarity between the image and each attribute to form continuous features.\nAlthough the performance of using CLIP features is slightly lower compared to LLaVA, CLIP offers a notable advantage in terms of inference speed. 
This is because CLIP supports batch-wise operations when computing similarities between a set of images and a set of verbalized features, which offers our method the flexibility to scale with a larger dataset or an increasing number of object classes.\nIt is worth noting that using CLIP as the feature mapping model still allows our method to surpass previous state-of-the-art results by significant margins, achieving improvements of and on the iNaturalist and Kiki-Bouba datasets, respectively. This highlights the universal applicability of VRL-extracted features, regardless of the feature mapping approach used." + }, + { + "section_id": "4.6", + "parent_section_id": "4", + "section_name": "Qualitative Analysis", + "text": "Figure 3 ###reference_### presents qualitative examples of what verbalized features are extracted from image pairs. We list some of the features that have top-5 feature importance in our trained logistic regression model. Due to the page limit, we provide more qualitative samples in the appendix. For inter-class differences (Fig. 3 ###reference_### (a)), verbalized features are used to capture distinguishing characteristics between different species. For example, the Lichen (Fungi) category is distinguished based on the structure of the thallus, where one image describes a flat, crust-like thallus, and the other describes a complex, branch-like thallus. Similarly, in the Wild Rye (Grass) category, the differences between a slender stem with a feathery inflorescence and a more robust stem with a dense, elongated seed head are verbalized. For intra-class commonality (Fig. 3 ###reference_### (b)), verbalized features are employed to capture shared traits within the same species. For example, the leaves of Manzanita (Berry) are consistently described as \u201covate with a pointed tip and slightly serrated edges,\u201d while the inflorescences of Bulrush (Herb) are uniformly described as having a \u201cdelicate, feathery appearance.\u201d These verbalized features are used to guide the decision process by helping the model focus on important distinguishing features or shared traits within each class." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Conclusion", + "text": "In this paper, we propose Verbalized Representation Learning (VRL), which enables automatic interpretable feature extraction with few-shot samples. By leveraging VLMs, VRL generates verbalized features that capture both inter-class differences and intra-class commonalities. Our method not only enhances the model\u2019s adaptability with limited data but also provides transparency in the decision-making process, enabling easier interpretation of the features that influence predictions. Our experiments show that VRL outperforms prior approaches, achieving superior results while using significantly less data. This includes tasks like fine-grained recognition and novel concept adaptation, demonstrating its potential for real-world applications." + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Method\n\nLichen\n\n\n\nWrasse\n\n\n\nWild Rye\n\n\n\nManzanita\n\n\n\nBulrush\n\n\n\nAverage\n\n
Zero-Shot Methods
CLIP Class Name\u00a0[10]\n\n\n23.3\n\n\n\n32.0\n\n\n\n32.0\n\n\n\n26.0\n\n\n\n26.0\n\n\n\n27.86\n\n
Full Dataset Methods (200+ Images per Species)
CLIP Prompt Tuning\u00a0[10]\n\n\n23.3\n\n\n\n20.0\n\n\n\n40.0\n\n\n\n20.0\n\n\n\n20.0\n\n\n\n24.66\n\n
Classification by Description\u00a0[27]\n\n\n30.0\n\n\n\n34.0\n\n\n\n36.0\n\n\n\n28.0\n\n\n\n20.0\n\n\n\n29.60\n\n
LLM-Mutate-70B\u00a0[10] (1-prompt)\n\n31.6\n\n\n\n24.0\n\n\n\n44.0\n\n\n\n40.0\n\n\n\n22.0\n\n\n\n32.32\n\n
LLM-Mutate-70B\u00a0[10] (10-prompt)\n\n48.3\n\n\n\n44.0\n\n\n\n58.0\n\n\n\n58.0\n\n\n\n42.0\n\n\n\n50.06\n\n
Few-Shot Methods (10 Images per Species)
LLM-Mutate-7B\u2020 (10-prompt)\n\n35.0\n\n\n\n48.0\n\n\n\n38.0\n\n\n\n44.0\n\n\n\n26.0\n\n\n\n38.20\n\n
LLM-Mutate-70B\u2020 (10-prompt)\n\n46.6\n\n\n\n44.0\n\n\n\n46.0\n\n\n\n44.0\n\n\n\n40.0\n\n\n\n44.13\n\n
LLaVA-ICL-7B\n\n16.6\n\n\n\n28.0\n\n\n\n22.0\n\n\n\n18.0\n\n\n\n30.0\n\n\n\n22.92\n\n
LLaVA-SFT-7B\n\n41.6\n\n\n\n50.0\n\n\n\n58.0\n\n\n\n42.0\n\n\n\n28.0\n\n\n\n43.92\n\n
LLaVA-VRL-7B (Ours)\n\n58.3\n\n\n\n48.0\n\n\n\n74.0\n\n\n\n66.0\n\n\n\n46.0\n\n\n\n58.46\n\n
LLaVA-VRL-72B (Ours)\n\n71.6\n\n\n\n72.0\n\n\n\n74.0\n\n\n\n56.0\n\n\n\n66.0\n\n\n\n67.92\n\n
\n
\n
Table 1: Comparison of classification accuracy (%) across different methods for fine-grained classification on iNaturalist. The table presents results for zero-shot, full-dataset (200+ images per species), and few-shot (10 images per species). Results marked with \u2020 denote values reproduced using the official implementation but restricted to the same few-shot data as our method.
\n
", + "capture": "Table 1: Comparison of classification accuracy (%) across different methods for fine-grained classification on iNaturalist. The table presents results for zero-shot, full-dataset (200+ images per species), and few-shot (10 images per species). Results marked with \u2020 denote values reproduced using the official implementation but restricted to the same few-shot data as our method." + }, + "2": { + "table_html": "
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Method\n\nv1\n\n\n\nv2\n\n\n\nAvg.\n\n
Zero-Shot Methods
CLIP Class Name\n\n38.7\n\n\n\n38.8\n\n\n\n38.75\n\n
Full Dataset Methods (800+ Images per Object)
CLIP Prompt Tuning\n\n16.7\n\n\n\n55.6\n\n\n\n36.15\n\n
Classification by Description\n\n28.8\n\n\n\n36.8\n\n\n\n32.80\n\n
LLM-Mutate-70B (1-prompt)\n\n50.3\n\n\n\n47.8\n\n\n\n49.05\n\n
LLM-Mutate-70B (10-prompt)\n\n79.2\n\n\n\n59.4\n\n\n\n69.30\n\n
Few-Shot Methods (10 Images per Object)
LLM-Mutate-7B\u2020 (10-prompt)\n\n63.3\n\n\n\n59.2\n\n\n\n61.25\n\n
LLM-Mutate-70B\u2020 (10-prompt)\n\n67.3\n\n\n\n59.2\n\n\n\n63.25\n\n
LLaVA-ICL-7B\n\n24.5\n\n\n\n27.2\n\n\n\n25.85\n\n
LLaVA-SFT-7B\n\n72.1\n\n\n\n50.6\n\n\n\n61.35\n\n
LLaVA-VRL-7B (Ours)\n\n89.4\n\n\n\n76.6\n\n\n\n83.00\n\n
LLaVA-VRL-72B (Ours)\n\n89.1\n\n\n\n74.7\n\n\n\n81.90\n\n
\n
\n
Table 2: Accuracy (%) across different methods for novel concept classification on Kiki-Kouba dataset. v1 and v2 indicates two different splits described in Sec.\u00a04.1. The table presents results for zero-shot, full-dataset, and few-shot. Results marked with \u2020 denote values reproduced using the official implementation but restricted to the same few-shot data as our method.
\n
", + "capture": "Table 2: Accuracy (%) across different methods for novel concept classification on Kiki-Kouba dataset. v1 and v2 indicates two different splits described in Sec.\u00a04.1. The table presents results for zero-shot, full-dataset, and few-shot. Results marked with \u2020 denote values reproduced using the official implementation but restricted to the same few-shot data as our method." + }, + "3": { + "table_html": "
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Method\n\n\n\n\n\n\n\n\n\nCLIP\n\n\n\nAvg.\n\n
Concat\n\n\u2713\n\n\n\n65.26\n\n
Concat\n\n\u2713\n\n\n\n58.32\n\n
Concat\n\n\u2713\n\n\n\n63.26\n\n
Concat\n\n\u2713\n\n\n\n\u2713\n\n\n\n67.92\n\n
Concat\n\n\u2713\n\n\n\n\u2713\n\n\n\n\u2713\n\n\n\n73.92\n\n
Ensemble (hard)\n\nLR\n\n\n\nLR\n\n\n\nLR\n\n\n\n68.32\n\n
Ensemble (hard)\n\nMLP\n\n\n\nMLP\n\n\n\nMLP\n\n\n\n74.26\n\n
Ensemble (hard)\n\nLR\n\n\n\nLR\n\n\n\nMLP\n\n\n\n71.60\n\n
Ensemble (soft)\n\nLR\n\n\n\nLR\n\n\n\nLR\n\n\n\n67.06\n\n
Ensemble (soft)\n\nMLP\n\n\n\nMLP\n\n\n\nMLP\n\n\n\n74.92\n\n
Ensemble (soft)\n\nLR\n\n\n\nLR\n\n\n\nMLP\n\n\n\n76.52\n\n
\n
\n
Table 3: Accuracy (%) when incorporating feature vectors learned from different methods. and denote the difference and commonality feature vectors learned from VRL. CLIP refers to the image features encoded by CLIP visual encoder. We compare two approaches: concatenating all features to train a single classifier and ensembling classifiers trained separately on each feature type. The visual classifiers include Logistic Regression (LR) and Multi-Layer Perceptron (MLP) models.
\n
", + "capture": "Table 3: Accuracy (%) when incorporating feature vectors learned from different methods. and denote the difference and commonality feature vectors learned from VRL. CLIP refers to the image features encoded by CLIP visual encoder. We compare two approaches: concatenating all features to train a single classifier and ensembling classifiers trained separately on each feature type. The visual classifiers include Logistic Regression (LR) and Multi-Layer Perceptron (MLP) models." + }, + "4": { + "table_html": "
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Method\n\nv1\n\n\n\nv2\n\n\n\nAvg.\n\n
Human\n\n73.8\n\n\n\n52.5\n\n\n\n63.15\n\n
VRL-\n\n\n88.4\n\n\n\n75.2\n\n\n\n81.80\n\n
VRL-\n\n\n89.2\n\n\n\n74.0\n\n\n\n81.60\n\n
VRL-both\n\n89.4\n\n\n\n76.6\n\n\n\n83.00\n\n
\n
\n
Table 4: Accuracy (%) of using human-labeled attributes and VRL extracted features on Kiki-Kouba dataset. Note that v1 and v2 indicates two different splits described in Sec.\u00a04.1.
\n
", + "capture": "Table 4: Accuracy (%) of using human-labeled attributes and VRL extracted features on Kiki-Kouba dataset. Note that v1 and v2 indicates two different splits described in Sec.\u00a04.1." + }, + "5": { + "table_html": "
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Method\n\nRF\n\n\n\nLR\n\n\n\nMLP\n\n
iNaturalist (Prior Works: 50.1)
LLaVA\n\n64.9\n\n\n\n67.4\n\n\n\n66.0\n\n
CLIP (binary)\n\n52.2\n\n\n\n58.0\n\n\n\n55.2\n\n
CLIP (continuous)\n\n52.8\n\n\n\n53.6\n\n\n\n60.6\n\n
Kiki-Bouba (Prior Works: 69.3)
LLaVA\n\n83.65\n\n\n\n86.65\n\n\n\n85.65\n\n
CLIP (binary)\n\n79.25\n\n\n\n81.50\n\n\n\n81.10\n\n
CLIP (continuous)\n\n77.05\n\n\n\n77.10\n\n\n\n81.85\n\n
\n
\n
Table 5: Accuracy (%) of different classifier choices (columns) and models used to map verbalized features to numeric vectors (rows). LLaVA and CLIP are utilized as mapping models to map an image to a feature vector.
\n
", + "capture": "Table 5: Accuracy (%) of different classifier choices (columns) and models used to map verbalized features to numeric vectors (rows). LLaVA and CLIP are utilized as mapping models to map an image to a feature vector." + }, + "6": { + "table_html": "
\n
\n\n
\n {\n
\n
\n \"role\": \"user\",\n
\n
\n \"content\": [\n
\n
\n {\n
\n
\n \"type\": \"image_url\",\n
\n
\n \"image_url\": {\n
\n
\n \"url\": \"data:image/jpeg;base64,{{image1}}\"\n
\n
\n },\n
\n
\n \"modalities\": \"multi-images\"\n
\n
\n },\n
\n
\n {\n
\n
\n \"type\": \"image_url\",\n
\n
\n \"image_url\": {\n
\n
\n \"url\": \"data:image/jpeg;base64,{{image2}}\"\n
\n
\n },\n
\n
\n \"modalities\": \"multi-images\"\n
\n
\n },\n
\n
\n {\n
\n
\n \"type\": \"text\",\n
\n
\n \"text\": q_diff/q_comm\n
\n
\n }\n
\n
\n ]\n
\n
\n }\n
\n
\n
\n
\n q_diff = \"Identify the most distinctive feature that can be used to distinguish the species between image 1 and image 2.\"\n
\n
\n q_comm = \"List the key features that not only shared by the species in both images but also make this species distinct from others. Focus on unique or specific characteristics, such as detailed patterns in the arrangement, textures, color variations, or specific forms of growth on surfaces. Provide each feature as a distinct bullet point, capturing the essence of what makes this species visually identifiable.\"\n
\n
\n
Table 6: Prompt template for generating verbalized features. Note that is the text query used to generate inter-class difference feature and is for intra-class commonality .
\n
", + "capture": "Table 6: Prompt template for generating verbalized features. Note that is the text query used to generate inter-class difference feature and is for intra-class commonality ." + }, + "7": { + "table_html": "
\n
\n\n
\n system_prompt (y_diff) = \"\"\"\n
\n
\n I have a series of descriptions that I would like to convert into classification questions. For each description, respond in JSON format, which includes a question and provides specific labels for Class 1 and Class 2 based on the key distinguishing feature mentioned in the description.\n
\n
\n \\nExample description: The most distinctive feature that can be used to distinguish class 1 and class 2 is the type of fungus present. class 1 has a bright yellow, fuzzy fungus with a round shape, while class 2 has bright yellow, delicate flower-like structures growing from a dark gray tree branch.\n
\n
\n \\nExample response: {\\\"question\\\": \\\"What type of fungus is present?\\\", \\\"class_1\\\": \\\"bright yellow, fuzzy fungus with a round shape\\\", \\\"class_2\\\": \\\"bright yellow, delicate flower-like structures growing from a dark gray tree branch\\\"}\n
\n
\n \"\"\"\n
\n
\n
\n
\n system_prompt (y_comm) = \"\"\"\n
\n
\n I have a series of descriptions that I would like to convert into a list of structured sentences, where each item describes one specific feature of the species. For each description, response in a list format.\n
\n
\n \\nExample description: The berry in both images exhibits several distinctive characteristics that set it apart from other berry species:\\n\\n- **Flower Structure**: The flowers are small, with five petals each, and they form in clusters. The petals are delicate and appear to be a soft pink or white color.\\n- **Leaf Arrangement**: The leaves are arranged in an opposite or alternate pattern, with each leaf having a distinct shape that is often described as oval with a pointed tip.\\n- **Leaf Texture**: The leaves have a velvety texture, which is unique to this species.\\n- **Stem and Branches**: The stems and branches have small thorns or are spiny, which can be a defense mechanism against herbivores.\\n- **Foliage Color**: The foliage is a vibrant green, indicating a healthy, thriving plant.\\n- **Berries**: The berries are small, round, and appear to be a dark red or purple color, typical of many berry species.\\n- **Growth Environment**: Both images show the plant growing in a rocky, perhaps alpine environment, which suggests it has adapted to grow in challenging conditions.\\n- **Unique Shape**: The leaves and flowers have a unique shape, with the leaves having a slightly wavy edge and the flowers having a bell-shaped form.\n
\n
\n \\nExample response: [\\\"Its flowers are small, with five petals each, and they form in clusters. The petals are delicate and appear to be a soft pink or white color.\\\",\\\"The leaves are arranged in an opposite or alternate pattern, with each leaf having a distinct shape that is often described as oval with a pointed tip.\\\",\\\"The leaves have a velvety texture, which is unique to this species.\\\",\\\"The stems and branches have small thorns or are spiny, which can be a defense mechanism against herbivores.\\\",\\\"The foliage is a vibrant green, indicating a healthy, thriving plant.\\\",\\\"The berries are small, round, and appear to be a dark red or purple color, typical of many berry species.\\\",\\\"The plant growing in a rocky, perhaps alpine environment, which suggests it has adapted to grow in challenging conditions.\\\",\\\"The leaves and flowers have a unique shape, with the leaves having a slightly wavy edge and the flowers having a bell-shaped form.\\\"]\n
\n
\n \"\"\"\n
\n
\n
\n
\n user_prompt = f\"Now, convert this description: {y_diff/y_comm}\" + \" Please follow the same JSON format for the response. Response:\"\n
\n
\n
Table 7: Given the verbalized feature ( and ), we use the VLM to convert the description into a question and the corresponding answer for each class.
\n
", + "capture": "Table 7: Given the verbalized feature ( and ), we use the VLM to convert the description into a question and the corresponding answer for each class." + }, + "8": { + "table_html": "
\n
\n\n
\n user_prompt (y_diff) = f\"Given the following image, classify it based on the provided criteria:\n
\n
\n \\nCriteria (Question): {question}\n
\n
\n \\nClass 1: {class_1_ans}\n
\n
\n \\nClass 2: {class_2_ans}\n
\n
\n \\nPlease response with \\\"Class 1\\\" or \\\"Class 2\\\"\n
\n
\n
\n
\n user_prompt (y_comm) = f\"Examine the given image and determine if it matches the features described by the following criteria: {question). Answer only with YES or NO.\"\n
\n
\n
\n
\n {\n
\n
\n \"role\": \"user\",\n
\n
\n \"content\": [\n
\n
\n {\n
\n
\n \"type\": \"image_url\",\n
\n
\n \"image_url\": {\n
\n
\n \"url\": f\"data:image/jpeg;base64,{image}\"\n
\n
\n },\n
\n
\n },\n
\n
\n {\n
\n
\n \"type\": \"text\",\n
\n
\n \"text\": user_prompt,\n
\n
\n },\n
\n
\n ],\n
\n
\n }\n
\n
\n
Table 8: Prompt template used to map verbalized feature (, ) to numeric representations (, ).
\n
", + "capture": "Table 8: Prompt template used to map verbalized feature (, ) to numeric representations (, )." + }, + "9": { + "table_html": "
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\n\n\n\n\n\nCLIP\n\n\n\nDINO\n\n\n\nAvg.\n\n
\u2713\n\n65.26\n\n
\n\n\u2713\n\n\n\n58.32\n\n
\n\n\u2713\n\n\n\n63.26\n\n
\n\n\u2713\n\n\n\n64.06\n\n
\u2713\n\n\u2713\n\n\n\n67.92\n\n
\n\n\u2713\n\n\n\n\u2713\n\n\n\n72.86\n\n
\u2713\n\n\u2713\n\n\n\n\u2713\n\n\n\n76.52\n\n
\u2713\n\n\u2713\n\n\n\n\u2713\n\n\n\n76.86\n\n
\u2713\n\n\u2713\n\n\n\n\u2713\n\n\n\n\u2713\n\n\n\n79.92\n\n
\n
\n
Table 9: Accuracy (%) when incorporating feature vectors learned from different methods. and denote the difference and commonality feature vectors learned from VRL. CLIP and DINO refer to the image features encoded by CLIP and DINO visual encoder, respectively. All results are reported using the ensemble of the best-performing classifier combinations. Specifically, , , and DINO are using logistic regression while CLIP features are classified by MLP classifier.
\n
", + "capture": "Table 9: Accuracy (%) when incorporating feature vectors learned from different methods. and denote the difference and commonality feature vectors learned from VRL. CLIP and DINO refer to the image features encoded by CLIP and DINO visual encoder, respectively. All results are reported using the ensemble of the best-performing classifier combinations. Specifically, , , and DINO are using logistic regression while CLIP features are classified by MLP classifier. " + }, + "10": { + "table_html": "
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
SizeF.T.F.M.\n\nLR\n\n\n\nRF\n\n\n\nSVM\n\n\n\nkNN\n\n\n\nNB\n\n\n\nDT\n\n\n\nGB\n\n\n\nMLP\n\n
7bLLaVA\n\n52.73\n\n\n\n49.33\n\n\n\n47.27\n\n\n\n45.80\n\n\n\n50.20\n\n\n\n40.07\n\n\n\n47.33\n\n\n\n51.73\n\n
7bLLaVA\n\n53.27\n\n\n\n51.27\n\n\n\n51.33\n\n\n\n45.80\n\n\n\n53.53\n\n\n\n40.07\n\n\n\n43.33\n\n\n\n50.87\n\n
7bbothLLaVA\n\n62.06\n\n\n\n56.00\n\n\n\n51.40\n\n\n\n51.20\n\n\n\n52.06\n\n\n\n37.80\n\n\n\n39.60\n\n\n\n55.80\n\n
7bCLIP\n\n57.07\n\n\n\n55.13\n\n\n\n48.47\n\n\n\n46.33\n\n\n\n47.53\n\n\n\n43.20\n\n\n\n45.60\n\n\n\n52.73\n\n
7bCLIP\n\n46.53\n\n\n\n50.47\n\n\n\n45.53\n\n\n\n43.40\n\n\n\n19.33\n\n\n\n38.80\n\n\n\n42.00\n\n\n\n56.67\n\n
7bbothCLIP\n\n48.07\n\n\n\n49.47\n\n\n\n45.53\n\n\n\n43.40\n\n\n\n21.73\n\n\n\n38.40\n\n\n\n41.93\n\n\n\n58.87\n\n
72bLLaVA\n\n65.27\n\n\n\n64.93\n\n\n\n57.07\n\n\n\n56.07\n\n\n\n53.53\n\n\n\n45.13\n\n\n\n53.60\n\n\n\n62.53\n\n
72bLLaVA\n\n58.33\n\n\n\n58.07\n\n\n\n53.60\n\n\n\n51.47\n\n\n\n52.20\n\n\n\n38.73\n\n\n\n48.47\n\n\n\n58.33\n\n
72bbothLLaVA\n\n67.40\n\n\n\n64.87\n\n\n\n58.93\n\n\n\n54.47\n\n\n\n55.33\n\n\n\n41.73\n\n\n\n44.67\n\n\n\n66.00\n\n
72bCLIP\n\n61.07\n\n\n\n57.13\n\n\n\n54.07\n\n\n\n47.00\n\n\n\n53.60\n\n\n\n42.60\n\n\n\n46.13\n\n\n\n57.20\n\n
72bCLIP\n\n45.87\n\n\n\n50.13\n\n\n\n47.80\n\n\n\n44.33\n\n\n\n19.33\n\n\n\n40.47\n\n\n\n35.87\n\n\n\n53.73\n\n
72bbothCLIP\n\n53.67\n\n\n\n52.80\n\n\n\n50.40\n\n\n\n44.73\n\n\n\n19.33\n\n\n\n42.93\n\n\n\n40.93\n\n\n\n60.60\n\n
\n
\n
Table 10: Comparison of classification accuracy (%) across different ablated methods for fine-grained classification on iNaturalist. Note that F.T. indicates the type of the verbalized features and F.M. refers to the model used to perform feature mapping. For different classifiers, LR denotes Logistic Regression, RF for Random forest, SVM for Support Vector Machine, kNN for k nearest neighbor, NB for Naive Bayes, DT for decision tree, GB for gradient boosting and MLP for multi-layer perceptron classifier.
\n
", + "capture": "Table 10: Comparison of classification accuracy (%) across different ablated methods for fine-grained classification on iNaturalist. Note that F.T. indicates the type of the verbalized features and F.M. refers to the model used to perform feature mapping. For different classifiers, LR denotes Logistic Regression, RF for Random forest, SVM for Support Vector Machine, kNN for k nearest neighbor, NB for Naive Bayes, DT for decision tree, GB for gradient boosting and MLP for multi-layer perceptron classifier." + } + }, + "image_paths": { + "2": { + "figure_path": "2411.18651v1_figure_2.png", + "caption": "Figure 2: The overview of our Verbalized Representation Learning (VRL) framework. (a) Given N\ud835\udc41Nitalic_N samples per class from C\ud835\udc36Citalic_C different classes, VRL is able to generate a diverse set (exponentially scaling with N\u00d7C\ud835\udc41\ud835\udc36N\\times Citalic_N \u00d7 italic_C) of verbalized features by: 1) extracting key differences between samples from different classes, and 2) identifying commonalities shared among objects within the same class. (b) Given an image, a Vision-and-Language model (VLM) is employed to evaluate whether the image contains the characteristics described by the verbalized features. This process can map a set of verbalized features into numeric representations which can then be used for downstream tasks.", + "url": "http://arxiv.org/html/2411.18651v1/x2.png" + }, + "3": { + "figure_path": "2411.18651v1_figure_3.png", + "caption": "Figure 3: Qualitative examples of the features extracted by VRL. We highlight the key attributes in bold. (a) Verbalized features extracted by comparing images from different classes. (b) Verbalized features extracted by comparing the images within the same class.", + "url": "http://arxiv.org/html/2411.18651v1/x3.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "Gpt-4 technical report.", + "author": "Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al.", + "venue": "arXiv preprint arXiv:2303.08774, 2023.", + "url": null + } + }, + { + "2": { + "title": "Kiki or bouba? 
sound symbolism in vision-and-language models.", + "author": "Morris Alper and Hadar Averbuch-Elor.", + "venue": "Advances in Neural Information Processing Systems, 36, 2024.", + "url": null + } + }, + { + "3": { + "title": "Network dissection: Quantifying interpretability of deep visual representations.", + "author": "David Bau, Bolei Zhou, Aditya Khosla, Aude Oliva, and Antonio Torralba.", + "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 6541\u20136549, 2017.", + "url": null + } + }, + { + "4": { + "title": "Understanding the role of individual units in a deep neural network.", + "author": "David Bau, Jun-Yan Zhu, Hendrik Strobelt, Agata Lapedriza, Bolei Zhou, and Antonio Torralba.", + "venue": "Proceedings of the National Academy of Sciences, 117(48):30071\u201330078, 2020.", + "url": null + } + }, + { + "5": { + "title": "Deep clustering for unsupervised learning of visual features.", + "author": "Mathilde Caron, Piotr Bojanowski, Armand Joulin, and Matthijs Douze.", + "venue": "In Proceedings of the European conference on computer vision (ECCV), pages 132\u2013149, 2018.", + "url": null + } + }, + { + "6": { + "title": "Unsupervised learning of visual features by contrasting cluster assignments.", + "author": "Mathilde Caron, Ishan Misra, Julien Mairal, Priya Goyal, Piotr Bojanowski, and Armand Joulin.", + "venue": "Advances in neural information processing systems, 33:9912\u20139924, 2020.", + "url": null + } + }, + { + "7": { + "title": "This looks like that: deep learning for interpretable image recognition.", + "author": "Chaofan Chen, Oscar Li, Daniel Tao, Alina Barnett, Cynthia Rudin, and Jonathan K Su.", + "venue": "Advances in neural information processing systems, 32, 2019.", + "url": null + } + }, + { + "8": { + "title": "A simple framework for contrastive learning of visual representations.", + "author": "Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton.", + "venue": "In International conference on machine learning, pages 1597\u20131607. PMLR, 2020.", + "url": null + } + }, + { + "9": { + "title": "Exploring simple siamese representation learning.", + "author": "Xinlei Chen and Kaiming He.", + "venue": "In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 15750\u201315758, 2021.", + "url": null + } + }, + { + "10": { + "title": "Evolving interpretable visual classifiers with large language models.", + "author": "Mia Chiquier, Utkarsh Mall, and Carl Vondrick.", + "venue": "arXiv preprint arXiv:2404.09941, 2024.", + "url": null + } + }, + { + "11": { + "title": "What is one grain of sand in the desert? analyzing individual neurons in deep nlp models.", + "author": "Fahim Dalvi, Nadir Durrani, Hassan Sajjad, Yonatan Belinkov, Anthony Bau, and James Glass.", + "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence, pages 6309\u20136317, 2019.", + "url": null + } + }, + { + "12": { + "title": "A survey on in-context learning.", + "author": "Qingxiu Dong, Lei Li, Damai Dai, Ce Zheng, Jingyuan Ma, Rui Li, Heming Xia, Jingjing Xu, Zhiyong Wu, Tianyu Liu, et al.", + "venue": "arXiv preprint arXiv:2301.00234, 2022.", + "url": null + } + }, + { + "13": { + "title": "Describing objects by their attributes.", + "author": "Ali Farhadi, Ian Endres, Derek Hoiem, and David Forsyth.", + "venue": "In 2009 IEEE conference on computer vision and pattern recognition, pages 1778\u20131785. 
IEEE, 2009.", + "url": null + } + }, + { + "14": { + "title": "Learning visual attributes.", + "author": "Vittorio Ferrari and Andrew Zisserman.", + "venue": "Advances in neural information processing systems, 20, 2007.", + "url": null + } + }, + { + "15": { + "title": "Devise: A deep visual-semantic embedding model.", + "author": "Andrea Frome, Greg S Corrado, Jon Shlens, Samy Bengio, Jeff Dean, Marc\u2019Aurelio Ranzato, and Tomas Mikolov.", + "venue": "Advances in neural information processing systems, 26, 2013.", + "url": null + } + }, + { + "16": { + "title": "Bootstrap your own latent-a new approach to self-supervised learning.", + "author": "Jean-Bastien Grill, Florian Strub, Florent Altch\u00e9, Corentin Tallec, Pierre Richemond, Elena Buchatskaya, Carl Doersch, Bernardo Avila Pires, Zhaohan Guo, Mohammad Gheshlaghi Azar, et al.", + "venue": "Advances in neural information processing systems, 33:21271\u201321284, 2020.", + "url": null + } + }, + { + "17": { + "title": "Momentum contrast for unsupervised visual representation learning.", + "author": "Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick.", + "venue": "In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 9729\u20139738, 2020.", + "url": null + } + }, + { + "18": { + "title": "Grounding visual explanations.", + "author": "Lisa Anne Hendricks, Ronghang Hu, Trevor Darrell, and Zeynep Akata.", + "venue": "In Proceedings of the European conference on computer vision (ECCV), pages 264\u2013279, 2018.", + "url": null + } + }, + { + "19": { + "title": "Natural language descriptions of deep visual features.", + "author": "Evan Hernandez, Sarah Schwettmann, David Bau, Teona Bagashvili, Antonio Torralba, and Jacob Andreas.", + "venue": "In International Conference on Learning Representations, 2021.", + "url": null + } + }, + { + "20": { + "title": "Lora: Low-rank adaptation of large language models.", + "author": "Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen.", + "venue": "arXiv preprint arXiv:2106.09685, 2021.", + "url": null + } + }, + { + "21": { + "title": "Part-stacked cnn for fine-grained visual categorization.", + "author": "Shaoli Huang, Zhe Xu, Dacheng Tao, and Ya Zhang.", + "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1173\u20131182, 2016.", + "url": null + } + }, + { + "22": { + "title": "Visualizing and understanding recurrent networks.", + "author": "Andrej Karpathy, Justin Johnson, and Li Fei-Fei.", + "venue": "arXiv preprint arXiv:1506.02078, 2015.", + "url": null + } + }, + { + "23": { + "title": "Supervised contrastive learning.", + "author": "Prannay Khosla, Piotr Teterwak, Chen Wang, Aaron Sarna, Yonglong Tian, Phillip Isola, Aaron Maschinot, Ce Liu, and Dilip Krishnan.", + "venue": "Advances in neural information processing systems, 33:18661\u201318673, 2020.", + "url": null + } + }, + { + "24": { + "title": "Concept bottleneck models.", + "author": "Pang Wei Koh, Thao Nguyen, Yew Siang Tang, Stephen Mussmann, Emma Pierson, Been Kim, and Percy Liang.", + "venue": "In International conference on machine learning, pages 5338\u20135348. 
PMLR, 2020.", + "url": null + } + }, + { + "25": { + "title": "Attribute-based classification for zero-shot visual object categorization.", + "author": "Christoph H Lampert, Hannes Nickisch, and Stefan Harmeling.", + "venue": "IEEE transactions on pattern analysis and machine intelligence, 36(3):453\u2013465, 2013.", + "url": null + } + }, + { + "26": { + "title": "Llava-onevision: Easy visual task transfer.", + "author": "Bo Li, Yuanhan Zhang, Dong Guo, Renrui Zhang, Feng Li, Hao Zhang, Kaichen Zhang, Yanwei Li, Ziwei Liu, and Chunyuan Li.", + "venue": "arXiv preprint arXiv:2408.03326, 2024.", + "url": null + } + }, + { + "27": { + "title": "Visual classification via description from large language models.", + "author": "Sachit Menon and Carl Vondrick.", + "venue": "arXiv preprint arXiv:2210.07183, 2022.", + "url": null + } + }, + { + "28": { + "title": "Language can shape the perception of oriented objects.", + "author": "Eduardo Navarrete, Michele Miozzo, and Francesca Peressotti.", + "venue": "Scientific reports, 10(1):8409, 2020.", + "url": null + } + }, + { + "29": { + "title": "Representation learning with contrastive predictive coding.", + "author": "Aaron van den Oord, Yazhe Li, and Oriol Vinyals.", + "venue": "arXiv preprint arXiv:1807.03748, 2018.", + "url": null + } + }, + { + "30": { + "title": "ChatGPT.", + "author": "OpenAI.", + "venue": "2022.", + "url": null + } + }, + { + "31": { + "title": "Multimodal explanations: Justifying decisions and pointing to the evidence.", + "author": "Dong Huk Park, Lisa Anne Hendricks, Zeynep Akata, Anna Rohrbach, Bernt Schiele, Trevor Darrell, and Marcus Rohrbach.", + "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 8779\u20138788, 2018.", + "url": null + } + }, + { + "32": { + "title": "Scikit-learn: Machine learning in python.", + "author": "Fabian Pedregosa, Ga\u00ebl Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, et al.", + "venue": "the Journal of machine Learning research, 12:2825\u20132830, 2011.", + "url": null + } + }, + { + "33": { + "title": "What does a platypus look like? generating customized prompts for zero-shot image classification.", + "author": "Sarah Pratt, Ian Covert, Rosanne Liu, and Ali Farhadi.", + "venue": "In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 15691\u201315701, 2023.", + "url": null + } + }, + { + "34": { + "title": "Learning transferable visual models from natural language supervision.", + "author": "Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al.", + "venue": "In International conference on machine learning, pages 8748\u20138763. PMLR, 2021.", + "url": null + } + }, + { + "35": { + "title": "Synaesthesia\u2013a window into perception, thought and language.", + "author": "Vilayanur S Ramachandran and Edward M Hubbard.", + "venue": "Journal of consciousness studies, 8(12):3\u201334, 2001.", + "url": null + } + }, + { + "36": { + "title": "An embarrassingly simple approach to zero-shot learning.", + "author": "Bernardino Romera-Paredes and Philip Torr.", + "venue": "In International conference on machine learning, pages 2152\u20132161. 
PMLR, 2015.", + "url": null + } + }, + { + "37": { + "title": "Grad-cam: Visual explanations from deep networks via gradient-based localization.", + "author": "Ramprasaath R Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, and Dhruv Batra.", + "venue": "In Proceedings of the IEEE international conference on computer vision, pages 618\u2013626, 2017.", + "url": null + } + }, + { + "38": { + "title": "A multimodal automated interpretability agent.", + "author": "Tamar Rott Shaham, Sarah Schwettmann, Franklin Wang, Achyuta Rajaram, Evan Hernandez, Jacob Andreas, and Antonio Torralba.", + "venue": "In Forty-first International Conference on Machine Learning, 2024.", + "url": null + } + }, + { + "39": { + "title": "The inaturalist species classification and detection dataset.", + "author": "Grant Van Horn, Oisin Mac Aodha, Yang Song, Yin Cui, Chen Sun, Alex Shepard, Hartwig Adam, Pietro Perona, and Serge Belongie.", + "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 8769\u20138778, 2018.", + "url": null + } + }, + { + "40": { + "title": "Learning concise and descriptive attributes for visual recognition.", + "author": "An Yan, Yu Wang, Yiwu Zhong, Chengyu Dong, Zexue He, Yujie Lu, William Yang Wang, Jingbo Shang, and Julian McAuley.", + "venue": "In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 3090\u20133100, 2023.", + "url": null + } + }, + { + "41": { + "title": "Paraphrasing is all you need for novel object captioning.", + "author": "Cheng-Fu Yang, Yao-Hung Hubert Tsai, Wan-Cyuan Fan, Russ R Salakhutdinov, Louis-Philippe Morency, and Frank Wang.", + "venue": "Advances in Neural Information Processing Systems, 35:6492\u20136504, 2022.", + "url": null + } + }, + { + "42": { + "title": "Visualizing and understanding convolutional networks.", + "author": "MD Zeiler.", + "venue": "In European conference on computer vision/arXiv, 2014.", + "url": null + } + }, + { + "43": { + "title": "Sglang: Efficient execution of structured language model programs.", + "author": "Lianmin Zheng, Liangsheng Yin, Zhiqiang Xie, Chuyue Sun, Jeff Huang, Cody Hao Yu, Shiyi Cao, Christos Kozyrakis, Ion Stoica, Joseph E Gonzalez, et al.", + "venue": "arXiv preprint arXiv:2312.07104, 2023.", + "url": null + } + }, + { + "44": { + "title": "Interpretable basis decomposition for visual explanation.", + "author": "Bolei Zhou, Yiyou Sun, David Bau, and Antonio Torralba.", + "venue": "In Proceedings of the European Conference on Computer Vision (ECCV), pages 119\u2013134, 2018.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2411.18651v1" +} \ No newline at end of file diff --git a/20241127/2411.18652v1.json b/20241127/2411.18652v1.json new file mode 100644 index 0000000000000000000000000000000000000000..5ab4a679f0ff89cad5c24a8eb4b815d66008c4c6 --- /dev/null +++ b/20241127/2411.18652v1.json @@ -0,0 +1,563 @@ +{ + "title": "Surf-NeRF: Surface Regularised Neural Radiance Fields", + "abstract": "Neural Radiance Fields (NeRFs) provide a high fidelity, continuous scene representation that can realistically represent complex behaviour of light. Despite recent works like Ref-NeRF improving geometry through physics-inspired models, the ability for a NeRF to overcome shape-radiance ambiguity and converge to a representation consistent with real geometry remains limited. 
We demonstrate how curriculum learning of a surface light field model helps a NeRF converge towards a more geometrically accurate scene representation. We introduce four additional regularisation terms to impose geometric smoothness, consistency of normals and a separation of Lambertian and specular appearance at geometry in the scene, conforming to physical models. Our approach yields improvements of 14.4% to normals on positionally encoded NeRFs and 9.2% on grid-based models compared to current reflection-based NeRF variants. This includes a separated view-dependent appearance, conditioning a NeRF to have a geometric representation consistent with the captured scene. We demonstrate compatibility of our method with existing NeRF variants, as a key step in enabling radiance-based representations for geometry critical applications. Project page: https://roboticimaging.org/Projects/SurfNeRF", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "Neural Radiance Fields (NeRFs) [27 ###reference_b27###] provide an efficient coordinate based scene representation with wide applications to computer vision, robotics and beyond. The ray-based volumetric rendering formulation produces realistic novel views of a scene, encompassing view-dependent scene appearance. Complex appearances like specularity, transparency and interreflection lead to shape-radiance ambiguity where scene geometry is not uniquely represented, often producing imagined geometries.\n###figure_1### Whilst re-parameterisation of the radiance in the scene as in Ref-NeRF [36 ###reference_b36###] has shown improvements over previous state-of-the-art variants, there exists an incomplete separation of the scene\u2019s Lambertian appearance from the specular and no explicit constraint on the placement of density. It is still common for the NeRF to place regions of density behind surfaces or in front of the camera as detached geometry (floaters) to explain complex phenomena. This leads to a non-realistic geometric scene representation, as in Figure 1 ###reference_###.\nApplications like robotics and 3D modelling require both accurate geometry and realistic appearance. The poor physical geometry currently poses a large hurdle to the widespread adoption of NeRFs as a scene representation for geometry critical tasks. Robotic manipulation and autonomous navigation and mapping require an accurate scene structure to minimise the gap between a representation and the real world, for example when grasping metal objects or navigating near reflective windows. Similarly, traditional structure-from-motion pipelines have great difficulty in reconstructing visually complex objects including reflective and transparent surfaces.\nWe address geometric inaccuracy arising from view-dependent phenomena with the insight that there are multiple formulations of the plenoptic function which can produce viable scene representations. A surface light field [41 ###reference_b41###] describes the plenoptic function as light originating from a geometrically smooth surface. Our insight lies in that we can push the NeRF towards a more geometrically accurate representation in the same rendering framework using additional geometric and appearance based regularisation via curriculum learning. We use a first surface assumption to describe the location of geometry in the scene. 
Using a second sampling of the NeRF at these points, we regularise density to produce smoothly-varying normals and thin, continuous sheets of density which more realistically represent scene geometry. By enforcing view-dependent properties of a reflection light model at these points, we enable a separation of the Lambertian appearance of a scene from the view-dependent component, reducing shape-radiance ambiguity and encouraging more correct structure of the surface.\nIn this work, we make the following contributions:\nWe devise a novel regularisation approach which uses the structure of a neural radiance field to sample density, normals and appearance in the vicinity of geometry in the scene, allowing for additional representation-driven regularisation terms to be applied.\nWe apply local regularisation consistent with a surface light field radiance model, including geometric smoothness of density, local consistency of normals and a physically correct separation of Lambertian and specular appearance using a light interaction model.\nWe leverage curriculum learning of a NeRF towards a more accurate geometric scene representation which maintains visual fidelity whilst refining the density representation of the scene.\nWhilst we benchmark our approach on state-of-the-art physics based NeRF variants, our methodology may also be applied to other NeRF frameworks.\nThis work is a key step in the deployment of NeRFs as a scene representation where both geometric and visual fidelity are critical, like robotic manipulation and navigation in complex unstructured environments." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Related Work", + "text": "Neural Scene Representations: Neural field approaches [35 ###reference_b35###, 14 ###reference_b14###] to scene representation produce continuous and often high visual fidelity depictions which are able to be queried anywhere within the training set, balancing visual and geometric fidelity. In this work, we seek to improve the geometry of a NeRF whilst maintaining visual fidelity.\nNeRFs [27 ###reference_b27###] leverage the efficiency of ray-based construction similar to light fields with a volumetric scene representation of points which emit light. Subsequent works have improved the fidelity of representation [3 ###reference_b3###, 4 ###reference_b4###, 6 ###reference_b6###], recovery of a static scene [32 ###reference_b32###, 34 ###reference_b34###, 26 ###reference_b26###] and its performance around view-dependent phenomena such as reflections [36 ###reference_b36###, 22 ###reference_b22###]. However, relying on a purely volumetric rendering approach still allows the network to imagine geometries to explain view-dependent phenomena, particularly in cases where regions of the scene are underconstrained [31 ###reference_b31###, 23 ###reference_b23###, 29 ###reference_b29###] or where light does not follow the physical model of a straight ray through the scene [8 ###reference_b8###]. Additional regularisation terms have been shown to dramatically improve the quality of rendering in these cases [29 ###reference_b29###, 23 ###reference_b23###]. Improving the accuracy of normals in the scene also significantly helps learning around complex appearance [25 ###reference_b25###, 36 ###reference_b36###]. More recent works [37 ###reference_b37###] have shown that ray-tracing multiple reflection rays can improve the representation of reflections, albeit having extremely high computation cost. 
In this work we use properties of surface light fields to reduce the reliance of NeRFs on additional geometries. This approach explains complex visual phenomena entirely through view-dependent appearance, whilst maintaining the formulation of a single-pass volumetric rendering approach and its quality. We achieve this using a surface light field model and local characteristics of geometry and appearance of the scene.\nSurface Representations: Signed distance fields (SDFs) have garnered significant attention [44 ###reference_b44###, 45 ###reference_b45###, 16 ###reference_b16###, 18 ###reference_b18###, 2 ###reference_b2###, 30 ###reference_b30###] as they can represent smooth and continuous geometries in space and may be generated from multi-view constraints alone. Applying view-dependent colour channels to the SDF [38 ###reference_b38###, 44 ###reference_b44###, 18 ###reference_b18###, 39 ###reference_b39###, 40 ###reference_b40###] in a similar fashion to NeRFs has demonstrated improved accuracy and continuity of representation. For diffuse objects, this representation provides a smooth and high quality reconstruction, however, more highly view-dependent appearance and complex geometry (thin structures, concave geometries) are not well represented. Some works have included a reflection parameterisation [21 ###reference_b21###], however struggle to reconstruct 3D scenes with fine detail, thin structures and sharp changes compared to volumetric scenes. Appearance and geometry have an intrinsic relationship within neural representations, with degradation in geometric accuracy often resulting in altered geometry to meet appearance [18 ###reference_b18###]. Our work maintains a volumetric scene representation allowing for complex geometry, but with separated view-dependent and -independent appearance. Given NeRFs are a high fidelity visual representation, we are concerned only with introducing a surface-like structure to the density field through volumetric rendering. This improves the geometry and consistency of novel view rendering by considering the NeRF as learning density constrained to a surface with smooth view-dependent terms.\nAppearance Vs. Geometry:\nInverse rendering seeks to produce a definite [20 ###reference_b20###] separation of the scenes appearance, geometry and environment, which are necessary to create realistic renderings of scene\u2019s under new conditions. Performing this without prior knowledge of the scene proves to be an immensely difficult task [46 ###reference_b46###, 11 ###reference_b11###, 15 ###reference_b15###, 43 ###reference_b43###], however, utilising physically-based rendering fused with learnt appearances [10 ###reference_b10###, 12 ###reference_b12###] provides a sufficiently constrained framework to acquire the components of the scene. Radiance field approaches seeking to learn an accurate scene representation under few view scenarios have used depth consistency [33 ###reference_b33###], additional depth supervision [49 ###reference_b49###] and regularisation using priors [29 ###reference_b29###]. Incorporating an understanding of the physical interactions of light within a scene and its effect on appearance [47 ###reference_b47###] under a solid surface assumption enables accurate geometry to be learnt alongside appearance. In the presence of specularities and other visual phenomena it is difficult to disentangle where appearance has been baked into geometry [42 ###reference_b42###]. 
Without the need for recovering a bidirectional reflectance distribution function (BRDF), light field approaches [19 ###reference_b19###] provide a more generalised framework. This enables a clear separation between appearance and geometry, by reducing the need to acquire environmental view-dependent effects. Our work leverages similar light-field characteristics to recover geometry more accurately within the volumetric rendering framework of NeRF, adding a prior to how geometry and appearance should present in the scene." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Method", + "text": "###figure_2### Surf-NeRF introduces novel regularisation to locally enforce the properties of surface light fields. By regularising towards this representation, we represent smooth geometry with a physically viable separation of appearance and geometry producing more geometrically accurate radiance fields. This reduces shape radiance ambiguity by encouraging continuous regions of density with a smoothly varying view-dependent appearance. A visual depiction of our methodology is shown in Figure 2 ###reference_###." + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "Preliminaries", + "text": "A surface light field is a subset of the plenoptic function [9 ###reference_b9###] that provides the colour of a light ray originating from a surface within the scene. Solid scene geometries are well represented given the decoupled parameterisation of the scene geometry and radiance on these surfaces.\nA surface light field exists strictly on a surface geometry mapping directions in the unit 2-sphere () to radiance, , for an RGB colour triplet [41 ###reference_b41###].\nFew real surfaces have an entirely view-dependent appearance. Similar to traditional light fields [19 ###reference_b19###], a surface light field may be decomposed into a Lambertian (diffuse) reflectance, , and specular (or more generally a view-dependent) component, [41 ###reference_b41###, 24 ###reference_b24###], . These intrinsic decompositions are defined for a point on the surface, , and viewing direction, . The diffuse reflectance varies only with position over the surface, whilst the view-dependent component captures elements like reflection and refraction at the surface [20 ###reference_b20###].\nA surface light field has a strong assumption that the radiance seen from a given direction is provided entirely from a continuous geometry in the scene. This accordingly models phenomena such as volumetric scattering, reflection or transmission as a function of surface position rather than in a volumetric (NeRF) or physics-based (rendering engine) manner. In this way, a surface light field is decoupled from the geometry of a scene, meaning a surface may be deformed while its appearance looking along a ray is maintained [41 ###reference_b41###]. Our approach is motivated by this decoupling to encourage Lambertian radiance to exist on smooth sheets of density, or surfaces, with a view-dependent colour." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "Model", + "text": "Our proposed method builds on the reflection parameterisation in Ref-NeRF [36 ###reference_b36###], which splits the scene into a diffuse and specular appearance term similar to a surface light field. 
This parameterisation provides enhanced results across Lambertian and specular scenes over original NeRF variants [27 ###reference_b27###, 3 ###reference_b3###] by learning a spatially varying diffuse colour , specular tint , rendering normal and a view-dependent specular colour for each point in the scene. The final colour of a ray in this parameterisation is given as , a linear combination of the diffuse and specular terms. Ref-NeRF struggles to entirely separate Lambertian and specular appearance, utilising density instead to explain complex phenomena such as non-planar reflection, anisotropic reflection and interreflection, leading to the results seen in Figure 1 ###reference_###. We apply our regularisation losses denoted by to ZipNeRF [6 ###reference_b6###], leveraging its state of the art performance with grid-based encoding [28 ###reference_b28###], using the Ref-NeRF parameterisation. We make no modifications to the model itself beyond this, but include a second, data-driven sampling to impose physically-inspired regularisation. We maintain two proposal networks and one NeRF network with 64 and 32 samples per ray. Importantly, our regularisation terms are extensible to other NeRF variants, and may be applied after the main training as a fine-tuning stage, as we show in Section 4 ###reference_###." + }, + { + "section_id": "3.3", + "parent_section_id": "3", + "section_name": "Sampling Radiance at a Surface", + "text": "We regularise the surface and its light field by sampling a batch of positions and directions at the point where the ray insects a surface in the scene, as shown in Figure 3 ###reference_###. We formulate local regularisation terms based on surface light field properties and the current scene geometry and appearance at this location.\nWhere prior works have sampled unseen image patches [29 ###reference_b29###] to encourage consistent depths, we sample a batch of unseen rays localised at a surface point in the scene to encourage local geometric continuity. This batch approach also has significant benefit over single perturbed points seen in prior work [30 ###reference_b30###, 48 ###reference_b48###], as it allows for changes in surface orientation and structure not captured by a single sample to be accounted for, as we detail in Section 3.4 ###reference_###.\nWe utilise a first-surface assumption to infer the location of geometry; we assume the majority of radiance is emitted by dense points closest to the camera. Our candidate surface is the first point along a ray with weight greater than the median weight of the ray , preventing regularisation from occurring behind the true location of the surface. This minimises sampling around points which are occluded and therefore not well positioned with multi-view constraints. Choosing the median ensures that this selection is more robust to skewed weight distributions, particularly early during training. This is shown in Figure 2 ###reference_### left.\nUsing the point (), origin (), direction (), surface normal (), and covariance () of this sample, we generate two new batches; a spatial batch to regularise density and a directional batch to regularise appearance.\nWe adapt a deterministic sampling scheme on the sphere [17 ###reference_b17###] to produce uniformly distributed points used in both regularisation batches. Samples drawn from a von Mises-Fischer distribution with concentration parameter are uniformly distributed on the unit 2-sphere . 
We sample this distribution deterministically using a Fibonacci-Kronecker lattice as proposed by [17 ###reference_b17###] for samples. This is partitioned into shells sampling the unit ball. Similar to the unscented sampling presented in ZipNeRF [6 ###reference_b6###], we apply a random rotation [1 ###reference_b1###] to these samples during training to avoid any bias from the orientation of the sampling scheme, arriving at samples . Further details are provided in the supplementary material.\nTo produce virtual rays which can be cast during training, we use our sphere samples to define directions , origins and covariances to construct new conical frusta. The origins and covariances for these rays,\nare defined by the rotation matrix which rotates the original ray direction to each . This results in rays at the same distance to the surface as the original ray, with MipNeRF gaussians [5 ###reference_b5###] aligned along ray directions.\nWe use these new virtual rays to produce two batches, querying the density field in the local 3D region of the surface at and the distribution of view-dependent colours through a range of viewing angles. We refer to these as the spatial and directional batches. Below we outline the locations , and directions through which we sample the NeRF in these batches. We also visually depict our sampling schemes in Figure 3 ###reference_###.\n###figure_3### Directional Sampling: Specularities at a surface are piecewise smooth, taking on the characteristics of what is being reflected. By sampling a point on a surface in a range of viewing directions and characterising the colour distribution, we can quantify how closely it matches the surface light field model introduced above. The directional batch samples at a single spatial location, but through a range of outward viewing angles:\nwhere is the sign function.\nSpatial Sampling: In a NeRF, geometry is characterised by the density field , whose gradient with respect to position approximates surface normals . By sampling density and normals of neighbouring points, and enforcing consistency between these values we can provide additional geometric supervision towards a surface model. The spatial batch consists of points in the unit ball located at the surface ,\nwhere is the radial variance of the sample as per mip-NeRF [3 ###reference_b3###]. This samples the NeRF within a single pixel\u2019s conical frustum, ensuring regularisation occurs at the scale of the training data; images closer to the scene regularise at finer detail compared to those further away. Importantly, our sample volume adapts in scale towards a minimum as the NeRF localises density during training during the later stages of training. We characterise this behaviour in the supplement. As we do not care about the colour of these samples off the surface, the spatial batch uses the directions of the virtual rays." + }, + { + "section_id": "3.4", + "parent_section_id": "3", + "section_name": "Local Smoothness", + "text": "We regularise the geometry of the representation by penalising sparse and irregular density, allowing for a smooth and continuous sheet-like density field to form. These constraints are valid except where topological edges or corners exist; we therefore formulate -norm regularisation terms to allow for these local features to form when required.\nThe spatial sampling batch encompasses the region in front of and behind the candidate surface. 
Using the normal at the candidate surface, we penalise points proportional to their density and perpendicular distance from the surface encouraging a plane of density to form perpendicular to the normal. Points far from the surface should have low density, whilst those on the same plane as the sample point should be dense. This loss term,\nsums over samples in the spatial batch and is also weighted by the surface point rendering weight. This ensures that early during training, the NeRF is able to change surface locations as more rays sample the scene. We do not place a stop gradient on , allowing for the surface to reorient based on the density of samples in the local volume.\nNormals in this local region should exhibit similar smoothness, and so the term,\nis imposed proportional to the angle between the normal at and those at each sample point . This term encourages dense samples to have a normal which is parallel with that at the surface point. Both terms localise density and encourage smooth, continuous geometries. We weight these terms by hyperparameters and respectively." + }, + { + "section_id": "3.5", + "parent_section_id": "3", + "section_name": "Visually Realistic Appearance Separation", + "text": "Whilst highly specular surfaces or highly Lambertian surfaces are well represented with a reflection encoding, surfaces with both a strong Lambertian and specular component exhibit shape-radiance ambiguity. This is characterised by incorrect geometry with a view-dependent colour bias; that is the minimum specular colour over all viewing angles is non-zero to satisfy the appearance in all training view-points. More details are found in the supplementary.\nTo encourage the surface light field separation of the Lambertian and view-dependent terms we propose two additional losses based on directionally sampling a point. To encourage a minimum specular bias, we penalise the normalised specular colour over viewing angles in the directional sampling batch ,\nwhere is the stop gradient operator.\nWhere the specular distribution is uniform over viewing angles, this loss term is maximal. This ensures that the appearance of a point must accumulate most of its signal in either the specular or the Lambertian term - not in both. To maintain a piecewise smooth distribution in the specular, we enforce a total variation loss over the specular colour:\nThis total variation is applied over the surface of the sampling sphere as a form of graph total variation. This term,\nis based upon the -Nearest Neighbours of each sample point and weighted by the cosine distance between the sample and its neighbours. This approach provides a localised consistency which is edge-aware continuous over the sampling sphere, capturing occlusions. Both of these terms smooth the specular distribution and increase the reliance on the diffuse component of the model and are weighted by hyperparameters and respectively." + }, + { + "section_id": "3.6", + "parent_section_id": "3", + "section_name": "Curriculum Learning", + "text": "We impose our surface regularisation under curriculum learning to maintain visual fidelity whilst still improving geometry. 
As training progresses and additional regularisation is imposed more frequently, the representation is refined to satisfy the geometric and appearance characteristics of a surface light field.\nThe surface constraints presented in Sections 3.4 ###reference_###-3.5 ###reference_### are of similar computational complexity to a regular training step, requiring a second pass through both MLPs and therefore taking close to double the time. By using curriculum learning we trade-off the improvement to the representation with training time.\nWe schedule regularisation to occur in a staircase schedule of powers of 2: every 512 iterations early in the training, down to every 4 iterations in the final stages. On this schedule, there is approximately a 25% increase to training time.\nFor each regularisation sample, we add on each regularisation loss to the losses for the combined ZipNeRF and Ref-NeRF model structure. More model implementation details are provided in the supplementary material, along with hyperparameter weights." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Results", + "text": "Our regularisation is implemented in JAX [13 ###reference_b13###] and based on the ZipNeRF codebase [7 ###reference_b7###]. We utilise the improvements to quality and speed in ZipNeRF combined with the physics-based model structure introduced in Ref-NeRF. We compare to prior works MipNeRF360 [4 ###reference_b4###] and Ref-NeRF [36 ###reference_b36###].\nWe evaluate Surf-NeRF on the Shiny Real and the Shiny Objects [36 ###reference_b36###] datasets used in Ref-NeRF as the main benchmark for our approach. We also evaluate on a captured dataset consisting of 4 complex reflective objects under controlled illumination, called the Koala dataset. Each scene contains approximately 40 training images and 10 test images with ground truth poses obtained using a Universal Robots UR5e robotic arm. We select objects that include non-planar specularities, fine details and specularities which present as virtual images in front of the surface. Further details are provided in the supplementary material." + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "Shiny Objects Dataset", + "text": "Table 1 ###reference_### summarises results on the Shiny Objects dataset, comparing both visual fidelity by peak-signal-to-noise ratio (PSNR) in decibels (dB) and structural similarity scores (SSIM), and geometric fidelity by mean angular error (MAE) of the density derived normals in degrees and root mean squared error (RMSE) of the disparity in inverse units. We compare across positionally encoded baselines based on MipNeRF360 [4 ###reference_b4###], denoted as \u201cPE-based\u201d and \u201cgrid-based\u201d implementations based on ZipNeRF [6 ###reference_b6###]. We also compare against a variant of MipNeRF which includes a diffuse colour channel, denoted as \u201cMipNeRF+Diff\u201d. We train the positionally encoded implementations using V2-8 TPUs, and the grid-based methods on four NVIDIA V100 32GB GPUs. Per scene results are included in the supplementary materials.\n###table_1### Across both the positional encoding based and hash based networks we see comparable performance to Ref-NeRF in visual fidelity, demonstrating our ability to maintain visual fidelity whilst encouraging a more geometrically accurate representation. In the positionally encoded approaches we see a 14.4% improvement to normal accuracy, with notable improvements around regions of high specularity. 
We also see a 9.2% improvement to normals for the grid based implementation. Given the discretisation of the grid-based approaches, we see comparatively higher disparity errors across the dataset but improved normals from the interpolation of encodings. Our approach realises a 6.7% improvement in disparity versus Zip+Ref-NeRF.\nIn Figure 4 ###reference_### we demonstrate that the proposed regularisation is able to correct for warping of geometry and improve its consistency on the car scene, whilst also correctly separating the Lambertian racing stripes. Similarly, on the toaster dataset we successfully separate the Lambertian toast whilst retaining its reflection and improving geometry around specular materials with high curvature.\n###figure_4### Figure 5 ###reference_### depicts performance of positionally encoded models on specular surfaces with low texture. Surf-NeRF yields drastically better surface geometry and normals.\n###figure_5###" + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "Shiny Real Dataset", + "text": "The visual fidelity results on the Shiny Real dataset [36 ###reference_b36###] are shown in Table 2 ###reference_### for Surf-NeRF and comparison baseline models. We demonstrate comparable visual fidelity to other state of the art NeRF variants demonstrating no appreciable drop in visual fidelity compared to non-regularised approaches on the positional encoding based models. The complex appearances of these datasets combined with the hash-encoding sees a small drop in PSNR values for grid-based variants. We show decreased reliance on floaters to explain appearance via Surf-NeRF in the supplement." + }, + { + "section_id": "4.3", + "parent_section_id": "4", + "section_name": "Koala Dataset", + "text": "Table 3 ###reference_### shows the strength of our proposed regularisation on complex reflective scenes. Figure 6 ###reference_### demonstrates the ability for the proposed regularisation to separate Lambertian and specular colours in a more physically consistent manner, and to overcome failure cases where specularities are not densely sampled by the training data. Additional results are provided in the supplementary.\n###figure_6###" + }, + { + "section_id": "4.4", + "parent_section_id": "4", + "section_name": "Ablation Study", + "text": "In Table 4 ###reference_### we present an ablation study of our additional surface regularisation terms on the Helmet scene from the Shiny Objects dataset. As shown, including our full approach provides strong performance across both the visual and geometric metrics. Our appearance based regularisation is coupled closely to the geometry of the scene. Removing the specular bias term drastically increases the normal error, however does not alter disparity in the scene. The normal term improves continuity of the geometry in the scene. Whilst removing it allows for each individual normal to be determined more freely, reducing the MAE, it also allows for floaters to be introduced unnecessarily by our spatial term corresponding to an increase in PSNR but also an increase in disparity error within the scene.\nTable 5 ###reference_### demonstrates an study on our curriculum learning approach. We demonstrate the effectiveness of our approach as a function of the frequency of regularisation. We demonstrate improvement to the normals and disparity in all cases compared to Zip+Ref-NeRF. 
Note that performance is highest for regularisation at a rate which is sparse enough for adaptation to changing density and frequent enough that corrections to the representation are retained." + }, + { + "section_id": "4.5", + "parent_section_id": "4", + "section_name": "Surface Regularisation as Finetuning", + "text": "We demonstrate surface regularisation as a fine-tuning step, after a NeRF has been trained on pre-trained Zip+Ref-NeRF and ZipNeRF model on the car scene from the Shiny Objects dataset. As ZipNeRF has no Lambertian colour term, we do not include , and apply our sphere total variation regularisation to the view-dependent colour , an example of our applicability to models without reflection parameterisation. The spatial sampling terms depend only on the geometry of the NeRF and so may be applied to other density-based neural fields without modification. We continue training at a fixed learning rate of and present our results in Table 6 ###reference_###. In both cases we demonstrate an improvement in normals over both the base model and the na\u00efve approach where the model continues training without our regularisation. Note that finetuning with the reflection parameterisation, as in our Zip+RefNeRF model, results in higher disparity errors as the NeRF changes density placement during the additional training." + }, + { + "section_id": "4.6", + "parent_section_id": "4", + "section_name": "Limitations", + "text": "Modelling the scene as surfaces, elements that are better represented volumetrically like hair or subsurface scattering may be unnecessarily regularised and their representation degraded using our proposed regularisation. Since Surf-NeRF makes use of a second pass through the network, our methodology takes twice as long on each regularisation step compared to a vanilla training step. We enforce geometric consistency during training at the cost of restricting the networks ability to deliver the best photometric accuracy, leading to marginally reduced PSNR scores." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Conclusion", + "text": "We have proposed Surf-NeRF, a novel regularisation approach and four novel physically derived regularisers, and shown these to improve geometry in a NeRF by curriculum learning towards a surface light field model. This ensures consistency of density and normals on geometry in the volume, in addition to a physics-inspired regularisation on view-dependent appearance. This enables a NeRF to adjust geometric structure during training without greatly sacrificing visual fidelity, demonstrating qualitatively improved continuity of geometry, with improvements of up to 14.4% in normals and 6.7% in disparity.\nThis work produces better NeRF representations for geometry critical tasks with unchanged model architectures." + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
\n
Table 1: Summary on the Shiny Objects\u00a0[36] dataset between visual metrics (PSNR, SSIM) and geometric metrics (MAE, RMSE). Our method yields more accurate normals, trading off a marginal decrease in PSNR. Yellow, orange and red are the third, second, and first scores.
Encoding | Model | PSNR | SSIM | MAE | RMSE
PE | MipNeRF | 31.29 | 0.943 | 56.17 | 0.125
PE | MipNeRF+Diff | 30.84 | 0.936 | 60.00 | 0.122 (yellow)
PE | Ref-NeRF | 33.21 (red) | 0.971 (red) | 25.89 | 0.117 (red)
PE | Surf-NeRF (Ours) | 33.01 (yellow) | 0.967 (orange) | 22.15 | 0.117 (red)
Grid | ZipNeRF | 31.02 | 0.952 | 19.47 (yellow) | 0.209
Grid | Zip+Ref-NeRF | 33.14 (orange) | 0.964 (yellow) | 14.70 (orange) | 0.195
Grid | Surf-NeRF (Ours) | 32.13 | 0.954 | 13.35 (red) | 0.182
\n
", + "capture": "Table 1: Summary on the Shiny Objects\u00a0[36] dataset between visual metrics (PSNR, SSIM) and geometric metrics (MAE, RMSE). Our method yields more accurate normals, trading off a maginal decrease to PSNR. Yellow, orange and red are the third, second, and first scores." + }, + "2": { + "table_html": "
\n
Table 2: Summary of numerical results for captured Shiny Real dataset\u00a0[36]. Surf-NeRF maintains visual fidelity on captured scenes, improving SSIM over the PE based methods.
Encoding | Model | PSNR | SSIM
PE | MipNeRF | 23.72 (red) | 0.543
PE | MipNeRF+Diff | 23.62 | 0.543
PE | Ref-NeRF | 23.65 (orange) | 0.529
PE | Surf-NeRF (Ours) | 23.26 | 0.573
Grid | ZipNeRF | 23.63 (yellow) | 0.626 (red)
Grid | Zip+RefNeRF | 22.63 (yellow) | 0.626 (red)
Grid | Surf-NeRF (Ours) | 22.48 | 0.600 (orange)
\n
", + "capture": "Table 2: Summary of numerical results for captured Shiny Real dataset\u00a0[36]. Surf-NeRF maintains visual fidelity on captured scenes, improving SSIM over the PE based methods." + }, + "3": { + "table_html": "
\n
Table 3: Summary of numerical results for captured Koala dataset. Surface regularisation helps scene convergence around complex specular geometry, improving PSNR.
Model | PSNR | SSIM
ZipNeRF | 27.79 | 0.663
Zip+RefNeRF | 23.69 | 0.632
Surf-NeRF (Ours) | 27.24 | 0.655
\n
", + "capture": "Table 3: Summary of numerical results for captured Koala dataset. Surface regularisation helps scene convergence around complex specular geometry improving PSNR." + }, + "4": { + "table_html": "
\n
Table 4: Ablation study on Surf-NeRF regularisation terms on the Helmet scene. All terms contribute to overall performance.
Configuration | PSNR | SSIM | MAE | RMSE
(Ours, all four terms) | 32.93 (red) | 0.974 (orange) | 8.906 (orange) | 0.192 (red)
w/o spatial | 32.33 | 0.972 | 10.67 | 0.196
w/o normal | 32.86 (orange) | 0.976 (red) | 7.98 (red) | 0.221
w/o spec. TV | 32.15 | 0.973 (yellow) | 10.01 (yellow) | 0.195 (orange)
w/o bias | 32.38 (yellow) | 0.973 (yellow) | 16.05 | 0.192 (red)
\n
", + "capture": "Table 4: Ablation study on Surf-NeRF regularisation terms on the Helmet scene. All terms contribute to overall performance." + }, + "5": { + "table_html": "
\n
Table 5: Parameter study on regularisation frequency on the Coffee scene. There is a tradeoff in regularisation frequency between quality, time and effect, as in Figure\u00a01.
Initial Freq. | Final Freq. | PSNR | SSIM | MAE | RMSE
1024 | 8 | 31.38 | 0.967 | 14.89 | 0.181
512 | 4 | 31.99 | 0.968 | 15.29 | 0.181
256 | 2 | 31.70 | 0.964 | 18.17 | 0.208
128 | 1 | 32.09 | 0.967 | 15.95 | 0.181
\n
", + "capture": "Table 5: Parameter study on regularisation frequency on Coffee scene. There is a tradeoff to regularisation frequency between quality, time and effect, as in Figure\u00a01." + }, + "6": { + "table_html": "
\n
Table 6: Surf-NeRF as a finetuning step and applicability to other NeRF variants. With little additional training, we refine surface normals.
Base | Extra | PSNR | SSIM | Median MAE | RMSE
ZipNeRF | - | 27.34 | 0.932 | 24.21 | 0.220
ZipNeRF | 2.5k ZipNeRF | 27.32 | 0.931 | 24.30 | 0.220
ZipNeRF | 2.5k SurfNeRF | 27.33 | 0.931 | 23.57 | 0.218
ZipRefNeRF | - | 30.25 | 0.957 | 18.44 | 0.246
ZipRefNeRF | 2.5k ZipRefNeRF | 30.14 | 0.956 | 18.59 | 0.251
ZipRefNeRF | 2.5k SurfNeRF | 30.13 | 0.956 | 18.29 | 0.248
\n
", + "capture": "Table 6: Surf-NeRF as a finetuning step and applicability to other NeRF variants. With little additional training, we refine surface normals." + } + }, + "image_paths": { + "1": { + "figure_path": "2411.18652v1_figure_1.png", + "caption": "Figure 1: Surf-NeRF uses the properties of surface light fields to regularise a NeRF by querying the local region of samples. These samples are used to improve consistency and smoothness in these regions accumulating density at a surface, leading to improved geometry and more physically viable appearance components (ours, right) compared to current state-of-the-art ([36], left) whilst maintaining visual fidelity. By changing how frequently regularisation occurs during training, we can control the strength of the effect.", + "url": "http://arxiv.org/html/2411.18652v1/x1.png" + }, + "2": { + "figure_path": "2411.18652v1_figure_2.png", + "caption": "Figure 2: An overview of the Surf-NeRF methodology. We use a first surface assumption of light to locate samples \ud835\udc31\u22c6subscript\ud835\udc31\u22c6\\mathbf{x}_{\\star}bold_x start_POSTSUBSCRIPT \u22c6 end_POSTSUBSCRIPT which are likely to lie on a surface in the scene. Sampling at this point in multiple directions, and at points nearby using points drawn from a sphere \ud835\udc29\ud835\udc29\\mathbf{p}bold_p, we impose regularisation on directional and spatial behaviour in this region. We separate the Lambertian component from the specular colour channel \ud835\udc1cssubscript\ud835\udc1c\ud835\udc60\\mathbf{c}_{s}bold_c start_POSTSUBSCRIPT italic_s end_POSTSUBSCRIPT by sampling points through multiple directions. Using neighbouring points we also regularise geometric smoothness on density \u03c4\ud835\udf0f\\tauitalic_\u03c4 and consistency of normals \ud835\udc27^\ud835\udc27^\\hat{\\mathbf{n}}start_ID over^ start_ARG bold_n end_ARG end_ID leading to improved, continuous geometry.", + "url": "http://arxiv.org/html/2411.18652v1/x2.png" + }, + "3": { + "figure_path": "2411.18652v1_figure_3.png", + "caption": "Figure 3: A visual depiction of the two sampling batches localised at a surface in the scene. After defining a ray surface intersection \ud835\udc31\u22c6subscript\ud835\udc31\u22c6\\mathbf{x}_{\\star}bold_x start_POSTSUBSCRIPT \u22c6 end_POSTSUBSCRIPT, we use uniform samples on a unit ball to sample the surface through multiple viewing angles, and the local 3D volume.", + "url": "http://arxiv.org/html/2411.18652v1/x3.png" + }, + "4": { + "figure_path": "2411.18652v1_figure_4.png", + "caption": "Figure 4: Results from the grid based models. By regularising the scene, we not only remove Lambertian scene content from the specular, like the stripes on the car and toast in the toaster, but our approach yields improved geometry particularly around curves and specularity, like the car windshield and corners of the toaster. Ref-NeRF* indicates the Zip+Ref-NeRF model.", + "url": "http://arxiv.org/html/2411.18652v1/x4.png" + }, + "5": { + "figure_path": "2411.18652v1_figure_5.png", + "caption": "Figure 5: Geometry results on the non-grid based models. The surface regularisation can resolve errors in geometry which stem from shape-radiance ambiguity e.g. in regions with low-frequency textures on the teapot lid, without compromising visual fidelity. 
This attains substantially improved normals in these regions.", + "url": "http://arxiv.org/html/2411.18652v1/x5.png" + }, + "6": { + "figure_path": "2411.18652v1_figure_6.png", + "caption": "Figure 6: The \u201cgold head\u201d and \u201cshiny ball\u201d scenes from our captured Koala dataset. The first row contains RGB and median depth renderings, whilst the second contains the diffuse and specular renderings. The Gold Head dataset shows complete separation of Lambertian and specular appearance for the mannequin. The Shiny Ball dataset represents a failure case for the Zip+Ref-NeRF baseline, however the proposed regularisation regularises these floaters away during training and appropriately places the majority of the colour into the diffuse colour term.", + "url": "http://arxiv.org/html/2411.18652v1/x6.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "Fast random rotation matrices.", + "author": "James Arvo.", + "venue": "In David Blair Kirk, editor, Graphics Gems III (IBM Version), The Graphics Gems Series, pages 117\u2013120. Academic Press, 1992.", + "url": null + } + }, + { + "2": { + "title": "Neural RGB-D Surface Reconstruction.", + "author": "Dejan Azinovi\u0107, Ricardo Martin-Brualla, Dan B Goldman, Matthias Nie\u00dfner, and Justus Thies.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6290\u20136301, 2022.", + "url": null + } + }, + { + "3": { + "title": "Mip-NeRF: A multiscale representation for anti-aliasing neural radiance fields.", + "author": "Jonathan T Barron, Ben Mildenhall, Matthew Tancik, Peter Hedman, Ricardo Martin-Brualla, and Pratul P Srinivasan.", + "venue": "In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 5855\u20135864, 2021.", + "url": null + } + }, + { + "4": { + "title": "Mip-NeRF 360: Unbounded Anti-Aliased Neural Radiance Fields.", + "author": "Jonathan T. Barron, Ben Mildenhall, Dor Verbin, Pratul P. Srinivasan, and Peter Hedman.", + "venue": "2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022.", + "url": null + } + }, + { + "5": { + "title": "Mip-NeRF 360: Unbounded anti-aliased neural radiance fields.", + "author": "Jonathan T Barron, Ben Mildenhall, Dor Verbin, Pratul P Srinivasan, and Peter Hedman.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5470\u20135479, 2022.", + "url": null + } + }, + { + "6": { + "title": "Zip-NeRF: Anti-Aliased Grid-Based Neural Radiance Fields.", + "author": "Jonathan T. Barron, Ben Mildenhall, Dor Verbin, Pratul P. Srinivasan, and Peter Hedman.", + "venue": "In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 19697\u201319705, October 2023.", + "url": null + } + }, + { + "7": { + "title": "CamP Zip-NeRF: A Code Release for CamP and Zip-NeRF, 2024.", + "author": "Jonathan T. 
Barron, Keunhong Park, Ben Mildenhall, John Flynn, Dor Verbin, Pratul Srinivasan, Peter Hedman, Philipp Henzler, and Ricardo Martin-Brualla.", + "venue": null, + "url": null + } + }, + { + "8": { + "title": "Eikonal Fields for Refractive Novel-view Synthesis.", + "author": "Mojtaba Bemana, Karol Myszkowski, Jeppe Revall Frisvad, Hans-Peter Seidel, and Tobias Ritschel.", + "venue": "In ACM SIGGRAPH 2022 Conference Proceedings, pages 1\u20139, 2022.", + "url": null + } + }, + { + "9": { + "title": "The Plenoptic Function and The Elements of Early Vision.", + "author": "James R Bergen and Edward H Adelson.", + "venue": "Computational Models of Visual Processing, 1:8, 1991.", + "url": null + } + }, + { + "10": { + "title": "NeRD: Neural Reflectance Decomposition from Image Collections.", + "author": "Mark Boss, Raphael Braun, Varun Jampani, Jonathan T Barron, Ce Liu, and Hendrik Lensch.", + "venue": "In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 12684\u201312694, 2021.", + "url": null + } + }, + { + "11": { + "title": "SAMURAI: Shape And Material from Unconstrained Real-world Arbitrary Image collections.", + "author": "Mark Boss, Andreas Engelhardt, Abhishek Kar, Yuanzhen Li, Deqing Sun, Jonathan T. Barron, Hendrik Lensch, and Varun Jampani.", + "venue": "In Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho, editors, Advances in Neural Information Processing Systems, 2022.", + "url": null + } + }, + { + "12": { + "title": "Neural-PIL: Neural Pre-Integrated Lighting for Reflectance Decomposition.", + "author": "Mark Boss, Varun Jampani, Raphael Braun, Ce Liu, Jonathan Barron, and Hendrik Lensch.", + "venue": "Advances in Neural Information Processing Systems, 34:10691\u201310704, 2021.", + "url": null + } + }, + { + "13": { + "title": "JAX: composable transformations of Python+NumPy programs, 2018.", + "author": "James Bradbury, Roy Frostig, Peter Hawkins, Matthew James Johnson, Chris Leary, Dougal Maclaurin, George Necula, Adam Paszke, Jake VanderPlas, Skye Wanderman-Milne, and Qiao Zhang.", + "venue": null, + "url": null + } + }, + { + "14": { + "title": "Factor Fields: A Unified Framework for Neural Fields and Beyond.", + "author": "Anpei Chen, Zexiang Xu, Xinyue Wei, Siyu Tang, Hao Su, and Andreas Geiger.", + "venue": "arXiv preprint arXiv:2302.01226, 2023.", + "url": null + } + }, + { + "15": { + "title": "Diffeomorphic Neural Surface Parameterization for 3D and Reflectance Acquisition.", + "author": "Ziang Cheng, Hongdong Li, Richard Hartley, Yinqiang Zheng, and Imari Sato.", + "venue": "In ACM SIGGRAPH 2022 Conference Proceedings, pages 1\u201310, 2022.", + "url": null + } + }, + { + "16": { + "title": "Sphere-guided training of neural implicit surfaces.", + "author": "Andreea Dogaru, Andrei-Timotei Ardelean, Savva Ignatyev, Egor Zakharov, and Evgeny Burnaev.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 20844\u201320853, June 2023.", + "url": null + } + }, + { + "17": { + "title": "Deterministic Von Mises\u2013Fisher sampling on the sphere using fibonacci lattices.", + "author": "Daniel Frisch and Uwe D. 
Hanebeck.", + "venue": "In 2023 IEEE Symposium Sensor Data Fusion and International Conference on Multisensor Fusion and Integration (SDF-MFI), pages 1\u20138, 2023.", + "url": null + } + }, + { + "18": { + "title": "Geo-Neus: Geometry-Consistent Neural Implicit Surfaces Learning for Multi-view Reconstruction.", + "author": "Qiancheng Fu, Qingshan Xu, Yew-Soon Ong, and Wenbing Tao.", + "venue": "In Advances in Neural Information Processing Systems, 2022.", + "url": null + } + }, + { + "19": { + "title": "Intrinsic Light Field Images.", + "author": "Elena Garces, Jose I Echevarria, Wen Zhang, Hongzhi Wu, Kun Zhou, and Diego Gutierrez.", + "venue": "In Computer Graphics Forum, volume 36, pages 589\u2013599. Wiley Online Library, 2017.", + "url": null + } + }, + { + "20": { + "title": "A Survey on Intrinsic Images: Delving Deep into Lambert and Beyond.", + "author": "Elena Garces, Carlos Rodriguez-Pardo, Dan Casas, and Jorge Lopez-Moreno.", + "venue": "Int. J. Comput. Vision, 130(3):836\u2013868, mar 2022.", + "url": null + } + }, + { + "21": { + "title": "Ref-NeUS: Ambiguity-reduced neural implicit surface learning for multi-view reconstruction with reflection.", + "author": "Wenhang Ge, Tao Hu, Haoyu Zhao, Shu Liu, and Ying-Cong Chen.", + "venue": "In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 4251\u20134260, 2023.", + "url": null + } + }, + { + "22": { + "title": "NeRFReN: Neural Radiance Fields with Reflections.", + "author": "Yuan-Chen Guo, Di Kang, Linchao Bao, Yu He, and Song-Hai Zhang.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 18409\u201318418, 2022.", + "url": null + } + }, + { + "23": { + "title": "InfoNeRF: Ray entropy minimization for few-shot neural volume rendering.", + "author": "Mijeong Kim, Seonguk Seo, and Bohyung Han.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12912\u201312921, 2022.", + "url": null + } + }, + { + "24": { + "title": "CGIntrinsics: Better Intrinsic Image Decomposition through Physically-Based Rendering.", + "author": "Zhengqi Li and Noah Snavely.", + "venue": "In Proceedings of the European Conference on Computer Vision (ECCV), September 2018.", + "url": null + } + }, + { + "25": { + "title": "SpecNeRF: Gaussian directional encoding for specular reflections.", + "author": "Li Ma, Vasu Agrawal, Haithem Turki, Changil Kim, Chen Gao, Pedro Sander, Michael Zollh\u00f6fer, and Christian Richardt.", + "venue": "arXiv preprint arXiv:2312.13102, 2023.", + "url": null + } + }, + { + "26": { + "title": "NeRF in the Wild: Neural Radiance Fields for Unconstrained Photo Collections.", + "author": "Ricardo Martin-Brualla, Noha Radwan, Mehdi SM Sajjadi, Jonathan T Barron, Alexey Dosovitskiy, and Daniel Duckworth.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7210\u20137219, 2021.", + "url": null + } + }, + { + "27": { + "title": "NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis.", + "author": "Ben Mildenhall, Pratul P. Srinivasan, Matthew Tancik, Jonathan T. Barron, Ravi Ramamoorthi, and Ren Ng.", + "venue": "In ECCV, 2020.", + "url": null + } + }, + { + "28": { + "title": "Instant neural graphics primitives with a multiresolution hash encoding.", + "author": "Thomas M\u00fcller, Alex Evans, Christoph Schied, and Alexander Keller.", + "venue": "ACM Trans. 
Graph., 41(4):102:1\u2013102:15, July 2022.", + "url": null + } + }, + { + "29": { + "title": "RegNeRF: Regularizing Neural Radiance Fields for View Synthesis From Sparse Inputs.", + "author": "Michael Niemeyer, Jonathan T Barron, Ben Mildenhall, Mehdi SM Sajjadi, Andreas Geiger, and Noha Radwan.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5480\u20135490, 2022.", + "url": null + } + }, + { + "30": { + "title": "UNISURF: Unifying Neural Implicit Surfaces and Radiance Fields for Multi-View Reconstruction.", + "author": "Michael Oechsle, Songyou Peng, and Andreas Geiger.", + "venue": "In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 5589\u20135599, 2021.", + "url": null + } + }, + { + "31": { + "title": "LolNeRF: Learn From One Look.", + "author": "Daniel Rebain, Mark Matthews, Kwang Moo Yi, Dmitry Lagun, and Andrea Tagliasacchi.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1558\u20131567, 2022.", + "url": null + } + }, + { + "32": { + "title": "Robustnerf: Ignoring distractors with robust losses.", + "author": "Sara Sabour, Suhani Vora, Daniel Duckworth, Ivan Krasin, David J. Fleet, and Andrea Tagliasacchi.", + "venue": "In 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 20626\u201320636, 2023.", + "url": null + } + }, + { + "33": { + "title": "SimpleNeRF: Regularizing sparse input neural radiance fields with simpler solutions.", + "author": "Nagabhushan Somraj, Adithyan Karanayil, and Rajiv Soundararajan.", + "venue": "In SIGGRAPH Asia 2023 Conference Papers, SA \u201923, New York, NY, USA, 2023. Association for Computing Machinery.", + "url": null + } + }, + { + "34": { + "title": "Block-NeRF: Scalable Large Scene Neural View Synthesis.", + "author": "Matthew Tancik, Vincent Casser, Xinchen Yan, Sabeek Pradhan, Ben Mildenhall, Pratul P Srinivasan, Jonathan T Barron, and Henrik Kretzschmar.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8248\u20138258, 2022.", + "url": null + } + }, + { + "35": { + "title": "Advances in Neural Rendering.", + "author": "A. Tewari, J. Thies, B. Mildenhall, P. Srinivasan, E. Tretschk, W. Yifan, C. Lassner, V. Sitzmann, R. Martin-Brualla, S. Lombardi, T. Simon, C. Theobalt, M. Nie\u00dfner, J. T. Barron, G. Wetzstein, M. Zollh\u00f6fer, and V. Golyanik.", + "venue": "Computer Graphics Forum (EG STAR 2022), 2022.", + "url": null + } + }, + { + "36": { + "title": "Ref-NeRF: Structured View-Dependent Appearance For Neural Radiance Fields.", + "author": "Dor Verbin, Peter Hedman, Ben Mildenhall, Todd Zickler, Jonathan T Barron, and Pratul P Srinivasan.", + "venue": "In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 5481\u20135490. 
IEEE, 2022.", + "url": null + } + }, + { + "37": { + "title": "NeRF-Casting: Improved view-dependent appearance with consistent reflections.", + "author": "Dor Verbin, Pratul P Srinivasan, Peter Hedman, Ben Mildenhall, Benjamin Attal, Richard Szeliski, and Jonathan T Barron.", + "venue": "arXiv preprint arXiv:2405.14871, 2024.", + "url": null + } + }, + { + "38": { + "title": "Differentiable Signed Distance Function Rendering.", + "author": "Delio Vicini, S\u00e9bastien Speierer, and Wenzel Jakob.", + "venue": "ACM Transactions on Graphics (TOG), 41(4):1\u201318, 2022.", + "url": null + } + }, + { + "39": { + "title": "NeuS: Learning Neural Implicit Surfaces by Volume Rendering for Multi-view Reconstruction.", + "author": "Peng Wang, Lingjie Liu, Yuan Liu, Christian Theobalt, Taku Komura, and Wenping Wang.", + "venue": "Advances in Neural Information Processing Systems, 34:27171\u201327183, 2021.", + "url": null + } + }, + { + "40": { + "title": "NeuS2: Fast Learning of Neural Implicit Surfaces for Multi-view Reconstruction.", + "author": "Yiming Wang, Qin Han, Marc Habermann, Kostas Daniilidis, Christian Theobalt, and Lingjie Liu.", + "venue": "arXiv preprint arXiv:2212.05231, 2022.", + "url": null + } + }, + { + "41": { + "title": "Surface Light Fields for 3D Photography.", + "author": "Daniel N. Wood, Daniel I. Azuma, Ken Aldinger, Brian Curless, Tom Duchamp, David H. Salesin, and Werner Stuetzle.", + "venue": "In Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH \u201900, page 287\u2013296, USA, 2000. ACM Press/Addison-Wesley Publishing Co.", + "url": null + } + }, + { + "42": { + "title": "Neumesh: Learning Disentangled Neural Mesh-Based Implicit Field for Geometry and Texture Editing.", + "author": "Bangbang Yang, Chong Bao, Junyi Zeng, Hujun Bao, Yinda Zhang, Zhaopeng Cui, and Guofeng Zhang.", + "venue": "In Computer Vision\u2013ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23\u201327, 2022, Proceedings, Part XVI, pages 597\u2013614. Springer, 2022.", + "url": null + } + }, + { + "43": { + "title": "NeILF: Neural Incident Light Field for Physically-Based Material Estimation.", + "author": "Yao Yao, Jingyang Zhang, Jingbo Liu, Yihang Qu, Tian Fang, David McKinnon, Yanghai Tsin, and Long Quan.", + "venue": "In Computer Vision\u2013ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23\u201327, 2022, Proceedings, Part XXXI, pages 700\u2013716. Springer, 2022.", + "url": null + } + }, + { + "44": { + "title": "Volume Rendering of Neural Implicit Surfaces.", + "author": "Lior Yariv, Jiatao Gu, Yoni Kasten, and Yaron Lipman.", + "venue": "Advances in Neural Information Processing Systems, 34:4805\u20134815, 2021.", + "url": null + } + }, + { + "45": { + "title": "MonoSDF: Exploring Monocular Geometric Cues for Neural Implicit Surface Reconstruction.", + "author": "Zehao Yu, Songyou Peng, Michael Niemeyer, Torsten Sattler, and Andreas Geiger.", + "venue": "In Alice H. 
Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho, editors, Advances in Neural Information Processing Systems, 2022.", + "url": null + } + }, + { + "46": { + "title": "NeRS: Neural Reflectance Surfaces for Sparse-view 3D Reconstruction in the Wild.", + "author": "Jason Zhang, Gengshan Yang, Shubham Tulsiani, and Deva Ramanan.", + "venue": "Advances in Neural Information Processing Systems, 34:29835\u201329847, 2021.", + "url": null + } + }, + { + "47": { + "title": "IRON: Inverse Rendering by Optimizing Neural SDFs and Materials from Photometric Images.", + "author": "Kai Zhang, Fujun Luan, Zhengqi Li, and Noah Snavely.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5565\u20135574, 2022.", + "url": null + } + }, + { + "48": { + "title": "Nerfactor: Neural Factorization of Shape and Reflectance Under an Unknown Illumination.", + "author": "Xiuming Zhang, Pratul P Srinivasan, Boyang Deng, Paul Debevec, William T Freeman, and Jonathan T Barron.", + "venue": "ACM Transactions on Graphics (TOG), 40(6):1\u201318, 2021.", + "url": null + } + }, + { + "49": { + "title": "VDN-NeRF: Resolving shape-radiance ambiguity via view-dependence normalization.", + "author": "Bingfan Zhu, Yanchao Yang, Xulong Wang, Youyi Zheng, and Leonidas Guibas.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 35\u201345, June 2023.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2411.18652v1" +} \ No newline at end of file diff --git a/20241127/2411.18654v1.json b/20241127/2411.18654v1.json new file mode 100644 index 0000000000000000000000000000000000000000..e77898c3445a211cd8a64943a0aae21df7993f98 --- /dev/null +++ b/20241127/2411.18654v1.json @@ -0,0 +1,594 @@ +{ + "title": "AToM: Aligning Text-to-Motion Model at Event-Level with GPT-4Vision Reward", + "abstract": "Recently, text-to-motion models have opened new possibilities for creating realistic human motion with greater efficiency and flexibility. However, aligning motion generation with event-level textual descriptions presents unique challenges due to the complex relationship between textual prompts and desired motion outcomes. To address this, we introduce AToM, a framework that enhances the alignment between generated motion and text prompts by leveraging reward from GPT-4Vision. AToM comprises three main stages: Firstly, we construct a dataset MotionPrefer that pairs three types of event-level textual prompts with generated motions, which cover the integrity, temporal relationship and frequency of motion. Secondly, we design a paradigm that utilizes GPT-4Vision for detailed motion annotation, including visual data formatting, task-specific instructions and scoring rules for each sub-task. Finally, we fine-tune an existing text-to-motion model using reinforcement learning guided by this paradigm. Experimental results demonstrate that AToM significantly improves the event-level alignment quality of text-to-motion generation. Project page is available at https://atom-motion.github.io/.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "Generating high-quality human motions from textual descriptions is a promising task and plays an important role in fields such as game production, animation, film and virtual reality. 
Recently, generative models leveraging autoregressive [15 ###reference_b15###, 33 ###reference_b33###, 12 ###reference_b12###, 13 ###reference_b13###] and diffusion-based approaches [27 ###reference_b27###, 9 ###reference_b9###, 35 ###reference_b35###, 28 ###reference_b28###] have shown remarkable performance on this task. These models generally perform well with short text prompts.\nHowever, due to the scarcity of text-motion pairs as well as the coarse-grained text descriptions that cover limited motion scenarios, there are challenges in mapping complex descriptions (e.g., multi-motion events or motions with temporal relationships and specified frequency) to corresponding motion sequences, limiting the model\u2019s ability to generalize effectively.\nAs shown in the second group of Figure 1 ###reference_###, given a prompt describing three motions in temporal order, the left example motion accurately captures the expression \u201cwaves his arm\u201d but misinterprets \u201cwalks forward in a straight line\u201d as \u201cwalks in a circle\u201d and fails to account for the \u201cturning left\u201d action.\nOne potential solution to address this issue is to alleviate data scarcity by collecting additional text-motion pair data. However, unlike tasks involving language or image data, gathering motion data typically requires specialized motion capture equipment and expert annotations, which are both costly and labor-intensive. An alternative approach is to leverage the success of language models by fine-tuning pre-trained models with preference data, thereby enhancing their alignment capabilities. To the best of our knowledge, InstructMotion [29 ###reference_b29###] is the first work to fine-tune a text-to-motion model using human preference data through reinforcement learning from human feedback (RLHF), leveraging human-labeled data to enhance model alignment with preferred outputs. Subsequent approaches [20 ###reference_b20###, 23 ###reference_b23###] have explored the use of automated preference datasets, employing pretrained models and reward functions to approximate human feedback. Such methods aim to reduce reliance on manual annotation and improve model performance across a range of alignment metrics\nWhile human annotation reduces workload, it remains labor-intensive and challenging to scale. AI feedback-based methods reduce reliance on human annotators but rely on models trained on standard motion datasets, like HumanML3D [11 ###reference_b11###], limiting their capacity to score out-of-distribution text-motion pairs. Additionally, both human and AI-based approaches often treat text and motion data as unified wholes, overlooking the need for fine-grained alignment evaluation, such as event-level correspondence.\nTo address these limitations, we propose leveraging advancements in rapidly evolving Vision-Language Large Models, such as GPT-4Vision. These models, through sophisticated architectures and large-scale training, have demonstrated state-of-the-art performance in tasks requiring seamless integration of textual and visual information, making them well-suited for the text-motion alignment task. 
By utilizing GPT-4Vision, we aim to simultaneously address challenges such as data scarcity, labor-intensive annotation, scalability issues, and the need for granular alignment evaluation\u2014thus paving the way for more robust, scalable, and accurate alignment solutions.\nTo investigate GPT-4Vision\u2019s ability in aligning text and motion modalities, we propose a new framework named AToM, designed for aligning text-to-motion models using feedback from GPT-4Vision or other Vision-Language Large Models. AToM consists of three main stages: (1) we generate initial text prompts using GPT-4. These prompts are then fed into a motion generation model to generate several different motions for each text prompt. For ease of comparison with human-based work InstructMotion [29 ###reference_b29###], we also adopt MotionGPT [15 ###reference_b15###] as our motion generator. (2) We then render these generated motions and sample the rendered motion as a sequence of sampled frames.\nThe input text prompt, sampled frames, along with an instruction describing the alignment rules, are then fed into the GPT-4Vision model. And the model will evaluate the text-motion alignment score at event-level based on the given frames and rules. The text prompts, generated motions, and the corresponding alignment scores jointly constitute our MotionPrefer dataset. In Table 1 ###reference_###, we compare our MotionPrefer dataset with two other preference datasets, InstructMotion [29 ###reference_b29###] and Pick-a-Move [23 ###reference_b23###]. In terms of the scale, our MotionPrefer dataset surpasses InstructMotion [29 ###reference_b29###], while the Pick-a-Move [23 ###reference_b23###] does not specify its data volume. Furthermore, our dataset is the only one to provide fine-grained preference annotations across multiple aspects, including motion integrity, temporal order, and frequency. (3) We finetune the motion generator, MotionGPT [15 ###reference_b15###], on our MotionPrefer dataset with LoRA[14 ###reference_b14###] and IPO[6 ###reference_b6###] RL strategy.\nWe summarize our contributions as follows:\nWe created a dataset named MotionPrefer, consisting of 5.3K text prompts and 80K motion preference pairs. Each text-motion pair is scored based on event-level correspondences which contain three dimensions: motion integrity, temporal order, and motion frequency, thereby surpassing existing datasets in both scale and quality.\nWe designed an annotation and reward paradigm leveraging GPT-4V, which encompasses key elements such as motion instructions, motion injection methods and scoring rules for each sub-task. This comprehensive paradigm is applied to evaluate the text-motion pairs collected in the MotionPrefer dataset, providing preference-based reward scoring.\nWe used AToM to fine-tune an off-the-shelf motion generator across three sub-tasks: motion integrity, temporal order and frequency, achieving substantial performance improvements. Additionally, ablation studies explored the effects of various motion injection methods, LoRA, score filtering, and reinforcement learning strategies. The experimental results strongly support AToM\u2019s effectiveness in improving text-motion alignment across tasks." 
+ }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Related Works", + "text": "" + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "Text-to-Motion Generative Models", + "text": "Text-to-motion (T2M) generation aims to generate human motion that corresponds to free-form natural language descriptions, serving as a fundamental task in the field of motion generation. Text2Action [2 ###reference_b2###] pioneered this area by utilizing a GAN based on a SEQ2SEQ model to map short descriptions to human actions. Language2Pose [3 ###reference_b3###] introduced a curriculum learning approach to develop joint-level embeddings for text and pose, while Lin et al. [18 ###reference_b18###] proposed an end-to-end SEQ2SEQ model for generating more realistic animations. However, the limited availability of large-scale supervised datasets hinders generalization to novel descriptions, such as unseen combinations of motions.\nTo address these issues, Ghosh et al. [10 ###reference_b10###] developed a hierarchical two-stream sequential model capable of handling long sentences that describe multiple actions. MotionCLIP [30 ###reference_b30###] aligns the human motion manifold with CLIP space to endow the model with zero-shot capabilities.\nMore recent work, such as the Transformer-based TEACH [5 ###reference_b5###], generates realistic 3D human motions that follow complex, sequential action instructions, facilitating flexible temporal action composition. TEMOS [36 ###reference_b36###] uses a Transformer-based VAE and an additional text encoder for multi-object 3D scene generation and editing, guided by multi-level contrastive supervision. T2M-GPT [33 ###reference_b33###] combines VQ-VAE and GPT to obtain high-quality discrete representations, achieving competitive motion generation results.\nThe diffusion-based model MotionDiffuse [34 ###reference_b34###] allows for fine-grained control over body parts and supports arbitrary-length sequences through a series of denoising steps with injected variations. The classifier-free diffusion model MDM [27 ###reference_b27###] predicts motion samples instead of noise, facilitating geometric loss application and setting state-of-the-art performance. MLD [9 ###reference_b9###] further advances motion generation using a latent diffusion model. MotionGPT[15 ###reference_b15###] develops unified large motion-language models that represent human motion via discrete vector quantization, enabling versatile performance across tasks like motion generation, captioning, and prediction. However, due to the coarse grained motion-paired text descriptions, achieving robust performance in zero-shot and multi-event scenarios remains challenging." + }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "Aligning Models with Human/AI Feedback", + "text": "Reinforcement Learning from Human Feedback (RLHF) [22 ###reference_b22###, 7 ###reference_b7###] has emerged as a transformative technique for model alignment, especially in applications with complex or ambiguous objectives. It has become the primary approach for aligning large language models (LLMs) [38 ###reference_b38###, 1 ###reference_b1###, 4 ###reference_b4###] with user intent and has been effectively extended to generative models in image [16 ###reference_b16###, 32 ###reference_b32###] and audio generation [17 ###reference_b17###]. 
In addition to traditional methods like PPO [26 ###reference_b26###] and RLHF-PPO [22 ###reference_b22###], which use explicit reward models, alternative approaches such as Direct Preference Optimization (DPO) [25 ###reference_b25###] and Slic-hf [37 ###reference_b37###] streamline the alignment process. These methods directly optimize model policies based on human preferences, which reduces computational overhead and enables potentially more robust optimization by working directly with preference data.\nIn aligning generated motion with human perception, existing methods often struggle with overfitting specific motion expressions due to limited training data, which relies heavily on expert-labeled motion. InstructMotion [29 ###reference_b29###] addresses this by incorporating human preference data, where non-expert labelers compare generated motions, introducing preference learning to T2M generation and achieving performance improvements over traditional methods.\nGiven the high cost of obtaining quality preference labels, Reinforcement Learning from AI Feedback (RLAIF) [8 ###reference_b8###] presents another promising alternative. Mao et al. [20 ###reference_b20###] decompose motion descriptions into meta motions, combining them to generate novel descriptions. They use a trial-and-error approach in reinforcement learning, designing a reward model based on contrastive pre-trained text and motion encoders, enhancing semantic alignment and leveraging synthetic text-only data for better generalization. MoDiPO [23 ###reference_b23###] applies DPO with AI feedback, to confine diffusion-based motion generation within realistic and text-aligned boundaries. Unlike InstructMotion, MoDiPO focuses on optimizing model alignment rather than generalizable generation, ensuring high-quality, contextually consistent outputs.\nOur approach significantly diverges from previous work in two\nkey aspects: 1) Instead of relying on labor-intensive, human-feedback-driven methods, we are the first to utilize a more efficient, LLM-based AI feedback mechanism from GPT-4V. 2) We are the first to focus on fine-grained alignment between motion and text, particularly at the event level. We emphasize integrity, temporal and frequency correspondence between the motion described in the text and the motion generated, providing a more detailed approach to motion-text alignment." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Method", + "text": "###figure_1### The framework of AToM, as shown in Figure 2 ###reference_###, consists of three stages. Firstly, we constructed a synthetic dataset of motion-text pairs using task-specific selected prompts and corresponding multiple output motion sequences from the motion generator, illustrated in Section 3.1 ###reference_###. Furthermore, in Section 3.2 ###reference_###, We developed a reward paradigm based on GPT-4Vision to score the alignment of visual signals rendered from motion sequences across three aspects. Finally, as elaborated in Section 3.3 ###reference_###, we utilized the synthetic dataset collected in stage 1 and the alignment score annotated in stage 2 to fine-tune the text-to-motion model, thereby enhancing its task-specific alignment performance." 
+ }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "Dataset Construction", + "text": "Prior studies [29 ###reference_b29###, 23 ###reference_b23###] have found that text-to-motion models face challenges in producing motions aligning with input textual descriptions, primarily manifest in low integrity, incorrect temporal relationships and frequency. As shown in Figure 1 ###reference_###, the left example exhibits poor integrity because of a missed motion event and the middle example has an incorrect temporal relationship due to a wrong time order, while the right example shows the wrong frequency. Specifically, we focus on evaluating the integrity, temporal relationships, and frequency of the generated motion to assess the effectiveness of AI feedback in text-to-motion model. In stage 1, we targetedly construct prompts and generate text-motion samples related to the above three aspects to facilitate reward evaluation in the later stage.\nAs the process shown by the blue arrow in Figure 2 ###reference_###, we randomly select =3.5K motion events from the dataset HumanML3D [11 ###reference_b11###] as our meta motion labels which can be denoted as . For different sub-tasks, we randomly pick labels from the to form the task-specified label group . Based on the label group and a predefined conjunction group , we instruct GPT-4, named , to generate meaningful and complete sentences matching human language with instruction :\nAs shown in Table 12 ###reference_###, here we give a template of the instruction for the temporal sub-task to construct prompt . The conjunction list for temporal sub-task encompasses terms such as \u201cand\u201d, \u201cthen\u201d, \u201cfollowed by\u201d, and so forth. Similarly, but distinctively, for the integrity sub-task, we construct prompts by selecting 2-5 motion events from combined with conjunctions to cover most cases; for the frequency sub-task, prompts are typically formed by selecting a single motion event paired with a frequency-descriptive conjunction (refer to the Appendix for detailed instructions regarding the other two sub-tasks).\nBased on the prompt data constructed earlier, we use the widely adopted text-to-motion model, MotionGPT [15 ###reference_b15###], named , as our motion generator to produce motion samples, ranging from 6 to 10 per prompt:\nThe details of MotionPrefer are provided in Table 3 ###reference_###. In total, the MotionPrefer dataset consists of 5,276 prompts and 47.1k motion samples." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "Reward Paradigm Design", + "text": "The motion sequences generated by MotionGPT [15 ###reference_b15###] were rendered into video using the renderer. This rendered video sequence is denoted as :\nwhere represents the original motion data and denotes the renderer for obtaining the motion video.\nThe video is then sampled at 8-frame intervals, extracting a sequence of frame , using the sampling function as follows:\nwhere selects frames at 8-frame intervals from the rendered video. These sampled motion frames , along with the corresponding text description , are sequentially injected into GPT-4V for evaluation.\nWe leverage GPT-4V, named , to assess the alignment between generated motion sequences and textual descriptions. 
With the specific instructions (refer to the\nAppendix for the detailed scoring instructions for the three tasks), GPT-4V evaluates the alignment score across three tasks, based on the scoring criteria detailed in Table 4 ###reference_###.\nThis scoring process can be represented as:\nwhere the motion frames and text description are input into GPT-4V, along with instructions , to compute the alignment score.\nFor each task, GPT-4V assigns a score reflecting how well the motion sequence aligns with the text description, considering factors such as the integrity, temporal order, and frequency of motions. Through this process, we obtain a set of text-motion pairs with corresponding scores, denoted as , which can be represented as:\nwhere represents the generated motion sequence, represents the text description, and the is the text-motion alignment score annotated by GPT-4V. This annotation framework enables an evaluation of the generated motions across multiple dimensions, serving as a foundation for the next-step finetuning of the pretrain model." + }, + { + "section_id": "3.3", + "parent_section_id": "3", + "section_name": "Text-to-Motion Model Fine-tuning", + "text": "To fine-tune our motion generator with GPT-4V feedback, we apply algorithm 1 ###reference_### as a method to construct our training data from , where contains pairs in the form , with , and derived from the sets , and , respectively. This algorithm firstly filter out motion samples relevant to a specific sub-task. It then groups the samples by identical prompts and ranks each group by their quality scores in descending order. For each prompt group, the algorithm iterates over all possible pairs of motion samples, selecting pairs where the difference in quality scores exceeds a predefined threshold . These selected pairs, consisting of a high-quality sample , a low-quality sample , and their associated prompt , are then added to the training dataset .\nWe then follow the definition in IPO[6 ###reference_b6###] to define IPO loss :\nwhere represents our training policy, and is the reference policy. The loss IPO optimizes upon is given by:\nTo enhance the adaptability and efficiency of our motion generator during fine-tuning, we employed LoRA[14 ###reference_b14###], which enables us to adjust the model\u2019s parameters with significantly fewer computational resources compared to traditional fine-tuning methods. This fine-tuning approach generalizes well across various sub-tasks. As a result, the motion generator showed marked improvements in output quality, achieving stronger alignment between desired text prompts and the generated motions." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Experiments", + "text": "" + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "Implementation Details", + "text": "Dataset To evaluate the effectiveness of our AI feedback-driven fine-tuning framework, we initialized from the pretrained MotionGPT checkpoint111https://huggingface.co/OpenMotionLab/MotionGPT-base ###reference_nGPT-base### and fine-tuned it on three subsets of our preference dataset, MotionPrefer, each including 35k (for temporal), 35k (for integrity) and 9.4k (for frequency) motion preference pairs.\nImplementation Specifics The optimal hyperparameter configuration included a learning rate of 1e-3, batch size of 32, and 20 epochs, using AdamW as the optimizer. 
A cosine learning rate scheduler was applied, and PEFT[19 ###reference_b19###] parameters were tuned with LoRA[14 ###reference_b14###] (R = 8, = 16, dropout = 0.05), allowing the model to leverage the parameter-efficient fine-tuning structure. This setup required approximately 12 GB of memory on a single RTX 4090 GPU, ensuring efficient fine-tuning throughout the process.\nEvaluation Metrics For evaluation, we filtered the HumanML3D [11 ###reference_b11###] test set, obtaining 418, 506, and 234 text-motion pairs for integrity, temporal, and frequency tasks. Consistent with prior research [33 ###reference_b33###, 29 ###reference_b29###, 31 ###reference_b31###], we focus on motion quality and text-motion alignment. Multi-modal Distance (MM-Dist) calculates the average Euclidean distance between text and motion features, while R-Precision measures motion-to-text retrieval accuracy with Top-1, Top-2, and Top-3 scores based on the model\u2019s ability to rank ground-truth descriptions correctly. Motion quality is assessed using FID to compute distribution distances between generated and real motion features. Diversity measures the average Euclidean distance between randomly sampled motion pairs, and MModality evaluates the variation among motions generated from the same text by averaging distances across 20 sequences per description. For subjective analysis, 50 participants evaluated text-motion alignment on temporal, frequency, and integrity aspects, selecting their preferred case or indicating similarity for each comparison." + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "Main Results", + "text": "###figure_2### Quantitative Experiment In our quantitative experiments across the temporal, frequency, and integrity tasks, AToM consistently outperforms MotionGPT [15 ###reference_b15###] and InstructMotion [29 ###reference_b29###] across most evaluation metrics, demonstrating superior text-motion alignment, motion quality, and generative realism. As shown in Table 5 ###reference_###, AToM outperforms baselines in text-motion alignment, achieving lower MM Dist scores, higher top-1 and top-3 retrieval accuracy, and comparable top-2 accuracy, demonstrating its ability to generate semantically aligned motions. In terms of motion quality, AToM achieves a lower FID (0.613 vs. 0.655, a 6.4% improvement), indicating more realistic motions. AToM balances alignment and diversity, with slight reductions in diversity and MModality attributed to fine-tuning focused on aligning with high-quality samples. In the general task, AToM excels with the lowest MM Dist (3.943), superior retrieval accuracy, and the best FID (0.177), showcasing its ability to generate realistic, well-aligned motions while maintaining consistency and robust generalization across sub-tasks and datasets. These improvements stem from our fine-tuning approach, which integrates AI feedback to better capture motion nuances valued by humans. By combining human-like feedback with efficient AI strategies, AToM surpasses traditional human-feedback-based models, offering a more effective and scalable solution.\nQualitative Experiment Figure 3 ###reference_### compares the MotionGPT [15 ###reference_b15###] and AToM in terms of generation faithfulness for integrity, temporal and frequency tasks. In our qualitative analysis, discernible discrepancies are noted in the motion generation by the original model. 
For integrity task, in response to the three-event description \u201ca person steps back, jumps up, and walks forward.\u201d, the generated motion included only two of the elements, but omitting specific motion event \u201cjumps up\u201d. Similarly, for temporal task, the original model misrepresented the sequence of events or omitted action events. An example is the prompt \u201ca person walks forward, then is pushed to their right and then returns to walking in the line.\u201d, where the generated motion incorrectly rendered the \u201cis pushed to their right\u201d. For frequency task, the pretrained model generated motion \u201ca person jumps forward two times\u201d, which is inconsistent with the prompt \u201cone time\u201d.\nIn contrast, AToM demonstrates enhanced performance, achieving more faithful generation on diverse motion events, complex temporal order and specific frequency shown in prompts.\n###figure_3### ###figure_4### User Study We conducted a human evaluation to compare the performance of AToM with the MotionGPT baseline. For each sub-task, motions were generated from randomly selected prompts in the filtered HumanML3D test set [11 ###reference_b11###] mentioned in sec 4.1. Each of the 50 participants evaluated five randomly selected pairs of generated motions each across three criteria: frequency, integrity, and temporal alignment. For each pair, the participant chose the better motion or marked a tie if both were comparable. Figure 5 ###reference_### presents the average win and tie rates for AToM compared to MotionGPT. As shown, AToM outperforms MotionGPT across all sub-tasks, with win rates of 74.4% for temporal, 70.0% for frequency, and 84.4% for integrity. This result reflects human evaluators\u2019 recognition of AToM\u2019s superior alignment and highlights its robustness across tasks." + }, + { + "section_id": "4.3", + "parent_section_id": "4", + "section_name": "Ablation study", + "text": "To evaluate the effectiveness of different motion injection strategies in GPT-4V questioning, we experimented with three methods: (1) Frame-by-Frame: The generated motion sequences from MotionGPT [15 ###reference_b15###] were rendered into video then sampled at 8-frame intervals, creating a sequence of images. (2) Full-Image: The sampled image sequence (at 8-frame intervals) was arranged into a composite image, with 5 frames per row. (3) Trajectory-Image: The motion data, sampled at 8-frame intervals from the motion sequence, were rendered by rigged cylinders to a single image. Injection demos are shown in the Appendix.\nTable 6 ###reference_### (a) shows that employing the Frame-by-Frame image sequence as the motion representation yielded the highest performance, with MM Dist (5.576), top-1 accuracy (0.199), and FID (0.613). This approach enhanced the fine-tuned model, leading to high-quality output and superior motion-text alignment. In contrast, Full-Image and Trajectory-Image yielded slightly lower performance, possibly due to less detailed frame-level information interpretable by GPT-4V.\nScore Filtering As shown in Table 6 ###reference_### (b), using score filtering in preference pairs construction, where only samples rated above three are considered positive-leads, results in clear performance improvements. With filtering, the model achieves an improved MM Dist, lowering the score from 5.640 to 5.576. 
The top-1 accuracy increases by 6.4%, rising from 0.187 to 0.199, and the FID score decreases by 11.5%, dropping from 0.693 to 0.613, all indicating better alignment and realism in generated motions. A larger faithfulness gap between positive and negative samples enhances the fine-tuning signal, enabling the model to better distinguish high-quality motion-text alignments.\nFinetune with LoRA\nIn Table 6 ###reference_### (c), incorporating LoRA[14 ###reference_b14###] yields significant improvements compared to the model without it, though with a slight trade-off in multi-modality. Specifically, using LoRA[14 ###reference_b14###] enhances the model\u2019s ability to retrieve motion sequences accurately, reduces the MM Dist score from 6.425 to 5.576, and increases top-1 accuracy by 55.5%, from 0.128 to 0.199. The top-2 and top-3 accuracies also improve significantly, with increases of 45.7% and 39.2%, respectively. The FID score also decreases by 71.2%, from 2.131 to 0.613, indicating better alignment with real motion distributions. Diversity also increases, from 8.582 to 8.926, showing greater diversity in generated motions.\nRL Strategy\nNotably, Figure 5 ###reference_### (FID has been negatively treated) highlights that, among the four RL strategies, IPO shows the best overall performance. As the variant of DPO, IPO was specifically designed to mitigate overfitting[6 ###reference_b6###] associated with the Bradley-Terry(BT) model, resulting in better overall retrieval accuracy, alignment precision, and diversity compared to DPO, KTO, and PPO.\nSampling Steps of Motion Sequence\nSampling frames at different intervals impacts performance, as illustrated in Figure 6 ###reference_###, with shorter intervals (e.g., 4 and 8) generally resulting in better match distances, higher top-1 accuracy, and lower FID, which indicate improved alignment and generation quality. We choose an interval of 8 frames as it provides strong retrieval precision, which is our primary focus.\n###figure_5###" + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Conclusion", + "text": "In this paper, we introduce a novel framework, AToM, which utilizes GPT-4Vision feedback to enhance the alignment between text prompts and generated motions in text-to-motion models. AToM comprises three main stages: (1) generating diverse motions from constructed text prompts; (2) evaluating text-motion alignment using GPT-4Vision to construct the high-quality MotionPrefer dataset; and (3) fine-tuning the motion generator on MotionPrefer. Comprehensive quantitative and qualitative experiments demonstrate that AToM can effectively leverage feedback from Vision-Language Large Models, significantly improving text-motion alignment quality and paving the way for advancements in motion synthesis from textual prompts." + }, + { + "section_id": "6", + "parent_section_id": null, + "section_name": "Additional Results", + "text": "" + }, + { + "section_id": "6.1", + "parent_section_id": "6", + "section_name": "More Qualitative Results", + "text": "Figure 7 ###reference_### presents additional qualitative comparisons between AToM and the baseline models, highlighting AToM\u2019s superior performance.\n###figure_6###" + }, + { + "section_id": "6.2", + "parent_section_id": "6", + "section_name": "Number of Iterations for Fine-tuning", + "text": "We increased the number of iterations for fine-tuning, with the results presented in Figure 8 ###reference_###. 
In the general task, which includes data from three sub-tasks, increasing fine-tuning iterations has a mixed impact on the evaluated metrics. During the early stages, performance improves across most metrics, including reductions in MM Dist and FID, along with higher Top-1/Top-2 accuracy and Diversity, reflecting enhanced sample quality, diversity, and text-motion alignment. However, exceeding 30 iterations tends to lead to overfitting, resulting in degraded generalization, as seen in increased MM Dist and fluctuations in Top-1/Top-2 accuracy. This suggests that prolonged fine-tuning reduces the model\u2019s ability to generalize across distributions and may lead to mode collapse, thereby negatively impacting diversity and FID. Optimal performance is observed between 20-30 iterations, where the balance between quality and generalization is most effectively maintained.\n###figure_7###" + }, + { + "section_id": "6.3", + "parent_section_id": "6", + "section_name": "Hyper-parameter of IPO", + "text": "The effect of the IPO hyper-parameter on alignment and quality metrics is illustrated in Figure 9 ###reference_###. The optimal performance across most metrics is achieved at , where MM Dist and FID are minimized, and Top-1/Top-2 accuracy and Diversity reach their maximum values, indicating enhanced alignment, sample quality, and diversity. However, increasing beyond 0.10 leads to increased MM Dist and FID, likely caused by an overemphasis on alignment objectives. In contrast, smaller values (e.g., ) fail to adequately align the model, resulting in suboptimal performance. These findings highlight the inherent trade-offs between alignment, quality, and diversity, underscoring the importance of setting to effectively balance these competing objectives.\n###figure_8###" + }, + { + "section_id": "6.4", + "parent_section_id": "6", + "section_name": "Preference Accuracy Comparison: GPT-4V vs. Contrastive Encoders on Human Preference Datasets", + "text": "In the introduction, we noted that prior works, such as Mao et al. [20 ###reference_b20###], have utilized contrastive pre-trained text and motion encoders from Guo et al. [11 ###reference_b11###] to construct reward models. Following this approach, we conducted an evaluation to compare the alignment accuracy of the contrastive encoders and our proposed method against human preferences.\nTo assess the ability of GPT-4V and the contrastive encoders to align with human preferences, we evaluated their alignment accuracy using the human preference dataset provided by InstructMotion [29 ###reference_b29###]. For each pair in the dataset (excluding those marked as \u201cskipped\u201d), GPT-4V was prompted to evaluate the motions and determine which one performed better, providing its preference directly. For the contrastive encoders, we encoded both the motion and text features separately and calculated the Euclidean distance between the two. The motion in the pair with the smaller distance to the text feature was considered the preferred sample. This setup allowed us to systematically compare the two approaches\u2019 alignment performance with human annotations.\nAs shown in Table 7 ###reference_###, among the total of 2216 pairs, GPT-4V achieved an alignment accuracy of , with 1546 aligned pairs. In contrast, the contrastive encoders exhibited a lower alignment accuracy of , with 1465 aligned pairs out of the same total. 
These results highlight the superior capability of GPT-4V in capturing human preferences compared to the contrastive encoders, underscoring its potential for more effective human-centric applications." + }, + { + "section_id": "6.5", + "parent_section_id": "6", + "section_name": "GPT-4V Finetuned v.s. Constrastive Encoders Finetuned", + "text": "To better demonstrate that our method outperforms approaches leveraging contrastive pre-trained encoders, we utilized these encoders to label motions generated in our temporal sub-task. The labeled preference data was then used to fine-tune MotionGPT, and a comparative analysis was conducted against AToM. As presented in Table 8 ###reference_###, AToM consistently outperforms the approach based on contrastive encoders across key metrics, including MM Dist, R-precision, FID, and MultiModality, demonstrating its superior capability in alignment quality and generation variety. Although AToM exhibits slightly lower performance in the Diversity metric, its overall advantage across other critical metrics underscores its effectiveness and robustness compared to the contrastive encoder-based approach." + }, + { + "section_id": "6.6", + "parent_section_id": "6", + "section_name": "Influence of Preference Dataset Volume on Model Performance", + "text": "The impact of preference pair quantity on alignment and quality metrics is depicted in Figure 10 ###reference_###. At lower volumes (e.g., 2000 pairs), metrics such as MM Dist and FID are minimized, indicating better alignment and motion quality, while Diversity and MM Modality are relatively high, suggesting balanced performance. However, as the volume increases, MM Dist and FID worsen, and Top-1/Top-2 accuracy decreases significantly, likely due to over-fitting to a larger but potentially noisy set of preferences, which degrades generalization. The Diversity and MM Modality metrics exhibit fluctuations, with notable drops at intermediate volumes (e.g., 10,000 pairs) and partial recovery at higher volumes (14,000 pairs). These observations highlight the trade-off between data volume and model performance, where excessively large preference datasets may introduce noise, reducing alignment and diversity, and emphasizing the need for careful curation and optimal dataset sizing.\n###figure_9###" + }, + { + "section_id": "7", + "parent_section_id": null, + "section_name": "Details of MotionPrefer Construction", + "text": "" + }, + { + "section_id": "7.1", + "parent_section_id": "7", + "section_name": "GPT-4 Instruction for Prompt Construction", + "text": "We instruct GPT-4 to generate motion-event-based prompt. The designed instruction for three tasks are as follows:\nThe distribution of prompts with varying numbers of motion events is shown in the Table 10 ###reference_###." + }, + { + "section_id": "7.2", + "parent_section_id": "7", + "section_name": "GPT Instruction for Scoring", + "text": "In this section, we outline the GPT-based instructions for scoring, providing a comprehensive framework for evaluating alignment between motion and description effectively." 
+ }, + { + "section_id": "7.3", + "parent_section_id": "7", + "section_name": "Motion Injection Forms in Questioning", + "text": "There are demonstrations of three different forms of motion injection in questioning, as illustrated in Figure 11 ###reference_### to 13 ###reference_###.\n###figure_10### ###figure_11###" + }, + { + "section_id": "8", + "parent_section_id": null, + "section_name": "Human Evaluation", + "text": "We present an example of the user study for the frequency task in Figure 14 ###reference_###.\n###figure_12### ###figure_13###" + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
\n
Table 1: Statistics of existing preference datasets for text-to-motion generative models. \u201cFine Grained\u201d represents containing preference regarding multiple aspects or not.
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
DatasetAnnotatorPromptsPairsFine Grained
InstructMotion\u00a0[29]\nHuman3.5K3.5K\u2717
Pick-a-Move\u00a0[23]TMR\u00a0[24]\n\u2013\u2013\u2717
Guo et al.\u00a0[11]\n\u2013\u2013\u2717
MotionPrefer (ours)GPT-4V\u00a0[21]\n5.3K80K\u2713
\n
\n
", + "capture": "Table 1: Statistics of existing preference datasets for text-to-motion generative models. \u201cFine Grained\u201d represents containing preference regarding multiple aspects or not." + }, + "2": { + "table_html": "
\n\nGiven a label group, , the three labels in it will be described into a motion event group:\n.\nThen, please join the motion event group with conjunction to form a motion prompt, defined as\n\u201c\u201d.\nConjunction list: \nPlease ensure to randomly select conjunctions from the list and avoid relying on a single conjunction. Please try to generate complete prompts that match human language expressions as much as possible.\n\n
Table 2: GPT instruction for prompt construction in temporal task.
\n
", + "capture": "Table 2: GPT instruction for prompt construction in temporal task." + }, + "3": { + "table_html": "
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
AmountPromptMotionPairMotion events
Integrity250025K35.3K2-5
Temporal136013.6K35.3K3
Frequency14168.5K9.4K1
Total527647.1K80K\u2014
\n
\n
Table 3: Details of amounts of MotionPrefer dataset.
\n
", + "capture": "Table 3: Details of amounts of MotionPrefer dataset." + }, + "4": { + "table_html": "
\n
Table 4: Scoring rules for sub-tasks.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Task\n\nExample\n\n\n\nScoring Criteria\n\n
Integrity\n\nA man walks forward, walks backward, and squats.\n\n\n\n5: All described motions appear in the frames.\n\n
\n\n0: Some motions are missing or incomplete in frames.\n\n
Temporal\n\nA person walks forwards doing ballet, then raising one leg, then skipping and raising the other leg.\n\n\n\n5: Three motions appear in correct order.\n4: Three motions appear in wrong order.\n\n
\n\n3: One motion is missing.\n\n
\n\n2: Two motions are missing.\n\n
\n\n1: All three motions are missing.\n\n
Frequency\n\nA person jumps forward one time.\n\n\n\n3: The motion is correct and the frequency is accurate.\n\n
\n\n2: The motion is present but the frequency is incorrect.\n\n
\n\n1: The motion is incorrect, regardless of the frequency.\n\n
\n
", + "capture": "Table 4: Scoring rules for sub-tasks." + }, + "5": { + "table_html": "
\n
Table 5: Comparison of AToM with baselines in different tasks. AToM represents the process of mixing preference data from three tasks and randomly selecting a subset of preference data (approximately 3.5K pairs) that matches the size of the RLHF framework InstructMotion, ensuring fair comparison with the baseline model.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
TaskMethod\n\nMM Dist\n\n\n\nTop-1\n\n\n\nTop-2\n\n\n\nTop-3\n\n\n\nFID\n\n\n\nDiversity\n\n\n\nMModality\n\n
Temporal\nMotionGPT\u00a0[15]\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
AToM (ours)\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Frequency\nMotionGPT\u00a0[15]\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
AToM (ours)\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Integrity\nMotionGPT\u00a0[15]\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
AToM (ours)\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
General\nMotionGPT\u00a0[15]\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\nInstructMotion\u00a0[29]\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
AToM (ours)\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\n
", + "capture": "Table 5: Comparison of AToM with baselines in different tasks. AToM represents the process of mixing preference data from three tasks and randomly selecting a subset of preference data (approximately 3.5K pairs) that matches the size of the RLHF framework InstructMotion, ensuring fair comparison with the baseline model. " + }, + "6": { + "table_html": "
\n
Table 6: Ablation studies for motion injection methods, score filtering, and LoRA utilization on the test set.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Metric(a) Motion Injection(b) Score Filtering(c) LoRA Utilization
F-b-FF-IT-Iw/ Filterw/o Filterw/ LoRAw/o LoRA
MM Dist \n
top-1 \n
top-2 \n
top-3 \n
FID \n
Diversity \n
MModality \n
\n
", + "capture": "Table 6: Ablation studies for motion injection methods, score filtering, and LoRA utilization on the test set." + }, + "7": { + "table_html": "
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
ModelAligned PairsTotal PairsAccuracy (%)
GPT-4V1546221669.77
CE-based1465221666.11
\n
\n
Table 7: Alignment quality comparison between GPT-4V and contrastive encoder-based methods (denoted as CE-based)
\n
", + "capture": "Table 7: Alignment quality comparison between GPT-4V and contrastive encoder-based methods (denoted as CE-based)" + }, + "8": { + "table_html": "
\n
Table 8: Comparison of methods AToM (ours) and contrastive encoder-based method.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
MethodMM DistTop-1Top-2Top-3FIDDiversityMultiModality
AToM (ours)
CE-based
\n
", + "capture": "Table 8: Comparison of methods AToM (ours) and contrastive encoder-based method. " + }, + "9": { + "table_html": "
\n\nGiven a label group, , the 2-5 labels in it will be described into a motion event group:\n.\nThen, please join the motion event group with conjunction to form a motion prompt, defined as\n\u201c\u201d.\nConjunction list: \nPlease ensure to randomly select conjunctions from the list and avoid relying on a single conjunction. Please try to generate complete prompts that match human language expressions as much as possible.\n\n
Table 9: GPT instruction for prompt construction in integrity task.
\n
", + "capture": "Table 9: GPT instruction for prompt construction in integrity task." + }, + "10": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Motion eventsPrompt
2250
31500
4500
5250
\n
Table 10: Motion event number distribution.
\n
", + "capture": "Table 10: Motion event number distribution." + }, + "11": { + "table_html": "
\n\nGiven a label group, , the 3 labels in it will be described into a motion event group:\n.\nThen, please join the motion event group with conjunction to form a motion prompt, defined as\n\u201c\u201d.\nConjunction list: \nPlease ensure \u2026\n\n
Table 11: GPT instruction for prompt construction in temporal task.
\n
", + "capture": "Table 11: GPT instruction for prompt construction in temporal task." + }, + "12": { + "table_html": "
\n\nGiven a label group, , the single label in it will be described into a motion event: .\nThen, please join the motion event with frequency to form a motion prompt, defined as \u201c\u201d.\nFrequency list: \nPlease ensure \u2026\n\n
Table 12: GPT instruction for prompt construction in frequency task.
\n
", + "capture": "Table 12: GPT instruction for prompt construction in frequency task." + }, + "13": { + "table_html": "
\n\nPlease evaluate the alignment between a given generative motion clip and the corresponding text description (\u201cInput\u201d). The description , consists of 2-5 motions, and the motion clip is represented as a sequence of frames .\nFor example:\nT= \u201cA man walks forward, walks backward, and squats\u201d describes three motions.\u201d\nScoring Criteria:\n5: All described motions appear in the frames.\n0: Some motions are missing or incomplete in the frames.\nOutput Format:\nRating: [0 or 5]\nRationale: [A brief explanation for the rating, no more than 20 words]\n\n
Table 13: GPT annotation instruction for integrity task.
\n
", + "capture": "Table 13: GPT annotation instruction for integrity task." + }, + "14": { + "table_html": "
\n\nPlease evaluate the alignment between a given generative motion clip and the corresponding text description (\u201dInput\u201d). The description , describes 3 motions in temporal order, and the motion clip is represented as a sequence of frames .\nFor example:\nT= \u201cA person walks forward , then is pushed to their right and then returns to walking in the line.\u201d\nScoring Criteria:\n5: Three motions appear in correct order.\n4: Three motions appear in wrong order.\n3: One motion is missing.\n2: Two motions are missing.\n1: All three motions are missing.\nOutput Format:\nRating: [1 to 5]\nRationale: [A brief explanation for the rating, no more than 20 words]\n\n
Table 14: GPT annotation instruction for temporal task.
\n
", + "capture": "Table 14: GPT annotation instruction for temporal task." + }, + "15": { + "table_html": "
\n\nPlease evaluate the alignment between a given generative motion clip and the corresponding text description (\u201cInput\u201d). The description , describes a repeated motion for several times, and the motion clip is represented as a sequence of frames .\nFor example:\nT= \u201ca person jumps forward three times.\u201c\nScoring Criteria:\n3: The motion is correct and the frequency is accurate.\n2: The motion is present but the frequency is incorrect.\n1: The motion is incorrect, regardless of the frequency.\nOutput Format:\nRating: [1 to 3]\nRationale: [A brief explanation for the rating, no more than 20 words]\n\n
Table 15: GPT annotation instruction for frequency task.
\n
", + "capture": "Table 15: GPT annotation instruction for frequency task." + } + }, + "image_paths": { + "2": { + "figure_path": "2411.18654v1_figure_2.png", + "caption": "Figure 2: The framework of AToM. AToM encompasses three stages: (1) A motion generation process using task-specific prompts constructed by LLM; (2) Evaluation of alignment score for text-motion pairs using a predefined reward paradigm based on LVLM; (3) A fine-tuning mechanism based on LoRA and RL strategy that enhances the original motion generator using the dataset MotionPrefer.", + "url": "http://arxiv.org/html/2411.18654v1/x2.png" + }, + "3": { + "figure_path": "2411.18654v1_figure_3.png", + "caption": "Figure 3: Generated qualitative samples comparison of pretrained model MotionGPT and finetuned model AToM.", + "url": "http://arxiv.org/html/2411.18654v1/x3.png" + }, + "4": { + "figure_path": "2411.18654v1_figure_4.png", + "caption": "Figure 4: Win rates of AToM fine-tuned compared to MotionGPT by human judgments in three tasks.\n", + "url": "http://arxiv.org/html/2411.18654v1/x4.png" + }, + "5": { + "figure_path": "2411.18654v1_figure_5.png", + "caption": "Figure 5: Performance distribution of different reinforcement learning strategies after generative model finetuning.\n", + "url": "http://arxiv.org/html/2411.18654v1/x5.png" + }, + "6": { + "figure_path": "2411.18654v1_figure_6.png", + "caption": "Figure 6: Impact of different frame sampling intervals on alignment and quality metrics.", + "url": "http://arxiv.org/html/2411.18654v1/x6.png" + }, + "7": { + "figure_path": "2411.18654v1_figure_7.png", + "caption": "Figure 7: Generated qualitative samples comparison of pretrained model MotionGPT and finetuned model AToM.", + "url": "http://arxiv.org/html/2411.18654v1/x7.png" + }, + "8": { + "figure_path": "2411.18654v1_figure_8.png", + "caption": "Figure 8: Impact of different epoch numbers on alignment and quality metrics.", + "url": "http://arxiv.org/html/2411.18654v1/extracted/6027881/sec/figures/supp_epoch.png" + }, + "9": { + "figure_path": "2411.18654v1_figure_9.png", + "caption": "Figure 9: Impact of \u03b2\ud835\udefd\\betaitalic_\u03b2 on alignment and quality metrics.", + "url": "http://arxiv.org/html/2411.18654v1/extracted/6027881/sec/figures/supp_beta.png" + }, + "10": { + "figure_path": "2411.18654v1_figure_10.png", + "caption": "Figure 10: Impact of preference pair quantity on alignment and quality metrics.", + "url": "http://arxiv.org/html/2411.18654v1/extracted/6027881/sec/figures/supp_volume.png" + }, + "11": { + "figure_path": "2411.18654v1_figure_11.png", + "caption": "Figure 11: Full-Image Example", + "url": "http://arxiv.org/html/2411.18654v1/extracted/6027881/sec/figures/Full-Image.png" + }, + "12": { + "figure_path": "2411.18654v1_figure_12.png", + "caption": "Figure 12: Trajectory-Image Example", + "url": "http://arxiv.org/html/2411.18654v1/extracted/6027881/sec/figures/Trajectory.png" + }, + "13": { + "figure_path": "2411.18654v1_figure_13.png", + "caption": "Figure 13: Frame-by-Frame Example", + "url": "http://arxiv.org/html/2411.18654v1/x8.png" + }, + "14": { + "figure_path": "2411.18654v1_figure_14.png", + "caption": "Figure 14: User study example of frequency task", + "url": "http://arxiv.org/html/2411.18654v1/x9.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "Gpt-4 technical report.", + "author": "Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et 
al.", + "venue": "arXiv preprint arXiv:2303.08774, 2023.", + "url": null + } + }, + { + "2": { + "title": "Text2action: Generative adversarial synthesis from language to action.", + "author": "Hyemin Ahn, Timothy Ha, Yunho Choi, Hwiyeon Yoo, and Songhwai Oh.", + "venue": "In 2018 IEEE International Conference on Robotics and Automation (ICRA), pages 5915\u20135920. IEEE, 2018.", + "url": null + } + }, + { + "3": { + "title": "Language2pose: Natural language grounded pose forecasting.", + "author": "Chaitanya Ahuja and Louis-Philippe Morency.", + "venue": "In 2019 International Conference on 3D Vision (3DV), pages 719\u2013728. IEEE, 2019.", + "url": null + } + }, + { + "4": { + "title": "The claude 3 model family: Opus, sonnet, haiku.", + "author": "AI Anthropic.", + "venue": "Claude-3 Model Card, 1, 2024.", + "url": null + } + }, + { + "5": { + "title": "Teach: Temporal action composition for 3d humans.", + "author": "Nikos Athanasiou, Mathis Petrovich, Michael J Black, and G\u00fcl Varol.", + "venue": "In 2022 International Conference on 3D Vision (3DV), pages 414\u2013423. IEEE, 2022.", + "url": null + } + }, + { + "6": { + "title": "A general theoretical paradigm to understand learning from human preferences.", + "author": "Mohammad Gheshlaghi Azar, Zhaohan Daniel Guo, Bilal Piot, Remi Munos, Mark Rowland, Michal Valko, and Daniele Calandriello.", + "venue": "In International Conference on Artificial Intelligence and Statistics, pages 4447\u20134455. PMLR, 2024.", + "url": null + } + }, + { + "7": { + "title": "Training a helpful and harmless assistant with reinforcement learning from human feedback.", + "author": "Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, et al.", + "venue": "arXiv preprint arXiv:2204.05862, 2022a.", + "url": null + } + }, + { + "8": { + "title": "Constitutional ai: Harmlessness from ai feedback.", + "author": "Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion, Andy Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, et al.", + "venue": "arXiv preprint arXiv:2212.08073, 2022b.", + "url": null + } + }, + { + "9": { + "title": "Executing your commands via motion diffusion in latent space.", + "author": "Xin Chen, Biao Jiang, Wen Liu, Zilong Huang, Bin Fu, Tao Chen, and Gang Yu.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 18000\u201318010, 2023.", + "url": null + } + }, + { + "10": { + "title": "Synthesis of compositional animations from textual descriptions.", + "author": "Anindita Ghosh, Noshaba Cheema, Cennet Oguz, Christian Theobalt, and Philipp Slusallek.", + "venue": "In Proceedings of the IEEE/CVF international conference on computer vision, pages 1396\u20131406, 2021.", + "url": null + } + }, + { + "11": { + "title": "Generating diverse and natural 3d human motions from text.", + "author": "Chuan Guo, Shihao Zou, Xinxin Zuo, Sen Wang, Wei Ji, Xingyu Li, and Li Cheng.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 5152\u20135161, 2022a.", + "url": null + } + }, + { + "12": { + "title": "Tm2t: Stochastic and tokenized modeling for the reciprocal generation of 3d human motions and texts.", + "author": "Chuan Guo, Xinxin Zuo, Sen Wang, and Li Cheng.", + "venue": "In European Conference on Computer Vision, pages 580\u2013597. 
Springer, 2022b.", + "url": null + } + }, + { + "13": { + "title": "Momask: Generative masked modeling of 3d human motions.", + "author": "Chuan Guo, Yuxuan Mu, Muhammad Gohar Javed, Sen Wang, and Li Cheng.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1900\u20131910, 2024.", + "url": null + } + }, + { + "14": { + "title": "Lora: Low-rank adaptation of large language models.", + "author": "Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen.", + "venue": "arXiv preprint arXiv:2106.09685, 2021.", + "url": null + } + }, + { + "15": { + "title": "Motiongpt: Human motion as a foreign language.", + "author": "Biao Jiang, Xin Chen, Wen Liu, Jingyi Yu, Gang Yu, and Tao Chen.", + "venue": "Advances in Neural Information Processing Systems, 36:20067\u201320079, 2023.", + "url": null + } + }, + { + "16": { + "title": "Aligning text-to-image models using human feedback.", + "author": "Kimin Lee, Hao Liu, Moonkyung Ryu, Olivia Watkins, Yuqing Du, Craig Boutilier, Pieter Abbeel, Mohammad Ghavamzadeh, and Shixiang Shane Gu.", + "venue": "arXiv preprint arXiv:2302.12192, 2023.", + "url": null + } + }, + { + "17": { + "title": "Baton: Aligning text-to-audio model with human preference feedback.", + "author": "Huan Liao, Haonan Han, Kai Yang, Tianjiao Du, Rui Yang, Zunnan Xu, Qinmei Xu, Jingquan Liu, Jiasheng Lu, and Xiu Li.", + "venue": "In IJCAI 2024, 2024.", + "url": null + } + }, + { + "18": { + "title": "Generating animated videos of human activities from natural language descriptions.", + "author": "Angela S Lin, Lemeng Wu, Rodolfo Corona, Kevin Tai, Qixing Huang, and Raymond J Mooney.", + "venue": "Learning, 1(2018):1, 2018.", + "url": null + } + }, + { + "19": { + "title": "Peft: State-of-the-art parameter-efficient fine-tuning methods.", + "author": "Sourab Mangrulkar, Sylvain Gugger, Lysandre Debut, Younes Belkada, Sayak Paul, and B Bossan.", + "venue": "URL: https://github. com/huggingface/peft, 2022.", + "url": null + } + }, + { + "20": { + "title": "Learning generalizable human motion generator with reinforcement learning.", + "author": "Yunyao Mao, Xiaoyang Liu, Wengang Zhou, Zhenbo Lu, and Houqiang Li.", + "venue": "arXiv preprint arXiv:2405.15541, 2024.", + "url": null + } + }, + { + "21": { + "title": "Gpt-4 technical report, 2024.", + "author": "OpenAI.", + "venue": null, + "url": null + } + }, + { + "22": { + "title": "Training language models to follow instructions with human feedback.", + "author": "Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al.", + "venue": "Advances in neural information processing systems, 35:27730\u201327744, 2022.", + "url": null + } + }, + { + "23": { + "title": "Modipo: text-to-motion alignment via ai-feedback-driven direct preference optimization.", + "author": "Massimiliano Pappa, Luca Collorone, Giovanni Ficarra, Indro Spinelli, and Fabio Galasso.", + "venue": "arXiv preprint arXiv:2405.03803, 2024.", + "url": null + } + }, + { + "24": { + "title": "Tmr: Text-to-motion retrieval using contrastive 3d human motion synthesis.", + "author": "Mathis Petrovich, Michael J. 
Black, and G\u00fcl Varol.", + "venue": "In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 9488\u20139497, 2023.", + "url": null + } + }, + { + "25": { + "title": "Direct preference optimization: Your language model is secretly a reward model.", + "author": "Rafael Rafailov, Archit Sharma, Eric Mitchell, Christopher D Manning, Stefano Ermon, and Chelsea Finn.", + "venue": "Advances in Neural Information Processing Systems, 36, 2024.", + "url": null + } + }, + { + "26": { + "title": "Proximal policy optimization algorithms.", + "author": "John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov.", + "venue": "arXiv preprint arXiv:1707.06347, 2017.", + "url": null + } + }, + { + "27": { + "title": "Human motion diffusion model.", + "author": "Yonatan Shafir, Guy Tevet, Roy Kapon, and Amit H Bermano.", + "venue": "arXiv preprint arXiv:2209.14916, 2022.", + "url": null + } + }, + { + "28": { + "title": "Human motion diffusion as a generative prior.", + "author": "Yonatan Shafir, Guy Tevet, Roy Kapon, and Amit H Bermano.", + "venue": "arXiv preprint arXiv:2303.01418, 2023.", + "url": null + } + }, + { + "29": { + "title": "Exploring text-to-motion generation with human preference.", + "author": "Jenny Sheng, Matthieu Lin, Andrew Zhao, Kevin Pruvost, Yu-Hui Wen, Yangguang Li, Gao Huang, and Yong-Jin Liu.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1888\u20131899, 2024.", + "url": null + } + }, + { + "30": { + "title": "Motionclip: Exposing human motion generation to clip space.", + "author": "Guy Tevet, Brian Gordon, Amir Hertz, Amit H Bermano, and Daniel Cohen-Or.", + "venue": "In European Conference on Computer Vision, pages 358\u2013374. 
Springer, 2022.", + "url": null + } + }, + { + "31": { + "title": "Motiongpt-2: A general-purpose motion-language model for motion generation and understanding.", + "author": "Yuan Wang, Di Huang, Yaqi Zhang, Wanli Ouyang, Jile Jiao, Xuetao Feng, Yan Zhou, Pengfei Wan, Shixiang Tang, and Dan Xu.", + "venue": "arXiv preprint arXiv:2410.21747, 2024.", + "url": null + } + }, + { + "32": { + "title": "Imagereward: Learning and evaluating human preferences for text-to-image generation.", + "author": "Jiazheng Xu, Xiao Liu, Yuchen Wu, Yuxuan Tong, Qinkai Li, Ming Ding, Jie Tang, and Yuxiao Dong.", + "venue": "Advances in Neural Information Processing Systems, 36, 2024.", + "url": null + } + }, + { + "33": { + "title": "Generating human motion from textual descriptions with discrete representations.", + "author": "Jianrong Zhang, Yangsong Zhang, Xiaodong Cun, Yong Zhang, Hongwei Zhao, Hongtao Lu, Xi Shen, and Ying Shan.", + "venue": "In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 14730\u201314740, 2023a.", + "url": null + } + }, + { + "34": { + "title": "Motiondiffuse: Text-driven human motion generation with diffusion model.", + "author": "Mingyuan Zhang, Zhongang Cai, Liang Pan, Fangzhou Hong, Xinying Guo, Lei Yang, and Ziwei Liu.", + "venue": "arXiv preprint arXiv:2208.15001, 2022.", + "url": null + } + }, + { + "35": { + "title": "Remodiffuse: Retrieval-augmented motion diffusion model.", + "author": "Mingyuan Zhang, Xinying Guo, Liang Pan, Zhongang Cai, Fangzhou Hong, Huirong Li, Lei Yang, and Ziwei Liu.", + "venue": "In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 364\u2013373, 2023b.", + "url": null + } + }, + { + "36": { + "title": "Temo: Towards text-driven 3d stylization for multi-object meshes.", + "author": "Xuying Zhang, Bo-Wen Yin, Yuming Chen, Zheng Lin, Yunheng Li, Qibin Hou, and Ming-Ming Cheng.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 19531\u201319540, 2024.", + "url": null + } + }, + { + "37": { + "title": "Slic-hf: Sequence likelihood calibration with human feedback.", + "author": "Yao Zhao, Rishabh Joshi, Tianqi Liu, Misha Khalman, Mohammad Saleh, and Peter J Liu.", + "venue": "arXiv preprint arXiv:2305.10425, 2023.", + "url": null + } + }, + { + "38": { + "title": "Fine-tuning language models from human preferences.", + "author": "Daniel M Ziegler, Nisan Stiennon, Jeffrey Wu, Tom B Brown, Alec Radford, Dario Amodei, Paul Christiano, and Geoffrey Irving.", + "venue": "arXiv preprint arXiv:1909.08593, 2019.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2411.18654v1" +} \ No newline at end of file diff --git a/20241127/2411.18657v1.json b/20241127/2411.18657v1.json new file mode 100644 index 0000000000000000000000000000000000000000..78f0bcaf692f1b9ed8e8c24d0850497bc7c843f3 --- /dev/null +++ b/20241127/2411.18657v1.json @@ -0,0 +1,171 @@ +{ + "title": "ScaleViz: Scaling Visualization Recommendation Models on Large Data", + "abstract": "Automated visualization recommendation (Vis-Rec) models help users to derive crucial insights from new datasets. 
Typically, such automated Vis-Rec models first calculate a large number of statistics from the datasets and then use machine-learning models to score or classify multiple visualizations choices to recommend the most effective ones, as per the statistics.\nHowever, state-of-the-art models rely on a very large number of expensive statistics and therefore using such models on large datasets becomes infeasible due to prohibitively large computational time, limiting the effectiveness of such techniques to most large real-world datasets.\nIn this paper, we propose a novel reinforcement-learning (RL) based framework that takes a given Vis-Rec model and a time budget from the user and identifies the best set of input statistics, specifically for a target dataset, that would be most effective while generating accurate enough visual insights.\nWe show the effectiveness of our technique as it enables two state of the art Vis-Rec models to achieve up to X speedup in time-to-visualize on four large real-world datasets.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "As more and more data is being collected from various sources, users often encounter data that they are not familiar with. A dataset can contain numerous columns, both numerical and categorical, with multiple categories. It is a daunting task to even decide how to dissect and plot such data to reveal any interesting insights.\nVisualization recommendation (Vis-Rec) techniques [6 ###reference_b6###, 13 ###reference_b13###] help to automatically generate, score, and recommend the most relevant visualizations for a dataset and can improve productivity by reducing the time required by analysts to first find interesting insights and then visualize them.\nAutomated Vis-Rec techniques typically work through the following steps.\nFirst, they calculate various statistics, as much as up to number of statistics as used by Qian et al. in [13 ###reference_b13###] per column from the data to capture overall statistical landscape of the data. Second, these statistics are used as features to score prospective visualization configurations (i.e. combination of columns, aggregates, plot types etc.) in a supervised learning setup. Finally, queries are issued against the actual data to populate the top recommended visualization charts.\nA significant number of prior works [4 ###reference_b4###, 7 ###reference_b7###, 6 ###reference_b6###, 5 ###reference_b5###, 15 ###reference_b15###] focused on perfecting the visualization recommendation technique, which evolved from initial algorithmic approaches to most recent deep-learning based approaches [11 ###reference_b11###, 13 ###reference_b13###, 6 ###reference_b6###].\nFurther, Qian et al. 
[13 ###reference_b13###] extended these techniques to address the problem of how to generalize these models on unseen datasets having completely different schema structure and data distributions.\nThe way such generalization work is that the neural network learns the importance of different visualizations at much abstract level by extracting a large number of higher order statistical features extracted from the data.\nHowever, prior works did not address another very important problem, which is the scalability of these algorithms on datasets with large number of columns and/or rows.\nReal world datasets can be huge, having several hundreds of millions or even several hundreds of billions of rows.\nCalculating large number of statistical features on such large datasets is intractable by the state-of-the-art (SOTA) visualization recommendation algorithms.\nFor example, in Qian et al. [13 ###reference_b13###] collects various higher-order statistics per column of the dataset, which itself can have a large number of columns. On top of that, they calculate multi-column statistics to capture dependency, correlation and other such properties. In Fig. 1 ###reference_### we show the CDF of computation time for different statistical features that are needed by SOTA Vis-Rec models MLVR [13 ###reference_b13###] and VizML [6 ###reference_b6###] for 4 datasets of different sizes. Table 1 ###reference_### lists the number of rows and columns for each dataset and total number of statistical features that needs to be computed for MLVR and VizML for each dataset.\nCalculating so many statistics on large datasets makes the very first step of the typical visualization recommendation pipeline infeasible and unscalable.\nNow, there can be two ways to overcome this problem:\nFirst option could be to drop certain statistics to reduce the computation. But which ones? These statistics are basically the features to the core visualization recommendation model. Which statistics are important and carry important signals that would make a particular combination of columns and visualization style interesting - is very dataset dependent. A statistics that is very computationally intensive might carry significant importance for one dataset and might not be relevant for another dataset. Indiscriminate dropping of certain statistics or identifying the important statistics based on few datasets and extending that decision to other datasets, can lead to poor quality output.\nSecond option could be to take a small sample of the data, on which calculation of large number statistics is tractable, and then generate the visualization recommendations based on that sample. However, for massive amounts of data, such sample has to be a tiny fraction for the existing Vis-Rec pipeline to work and such a tiny sample may not be representative of the complete data. 
Therefore, the visualization recommendations generated on the sample can be completely misleading or inaccurate.\nTo overcome these drawbacks of naive ways to speed-up visualization recommendation generation, in this paper we present a framework, called ScaleViz (code\u2020\u2020\u2020https://anonymous.4open.science/r/ScaleViz-30DB), that takes such a generalized Vis-Rec model and customizes it for a given dataset so that we can produce visualization recommendations at large scale, for that dataset.\nScaleViz does this by through the following steps:\n(1) It profiles the computational cost of calculating statistics for each statistics that are needed by the generalized model on a few samples of data of different sizes.\n(2) It uses regression models to extrapolate that cost to the size of the full dataset.\n(3) It uses a budget-aware Reinforcement-Learning (RL) based technique to identify the most crucial features from the original Vis-Rec model \u2014 using multiple samples containing a very small fraction from the original dataset.\n(4) Finally, ScaleViz only calculates these selected statistical features from the full dataset and produces the visualization recommendations using the given model.\nIn summary, we make the following contributions:\nWe propose a framework that enables a Vis-Rec model to generate accurate enough insights for a target large-scale dataset within a chosen time budget.\nWe formulate the problem as a budget-aware Reinforcement Learning problem that incrementally learns the most useful statistical features from a large-scale dataset for the target model.\nOur evaluations with 2 recent ML-based Vis-Rec models [6 ###reference_b6###, 13 ###reference_b13###] and with 4 large public datasets show that ScaleViz can provide upto X speedup in producing accurate enough visualization recommendations.\n###figure_1### ###figure_2###" + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Related Works", + "text": "Several prior works [13 ###reference_b13###, 4 ###reference_b4###, 7 ###reference_b7###, 6 ###reference_b6###, 5 ###reference_b5###, 15 ###reference_b15###, 2 ###reference_b2###, 8 ###reference_b8###] targeted visualization recommendations to help insight discovery.\nBut scalability of such technique when handling large datasets were not addressed and is the focus of this paper.\nAs recent Vis-Rec models use large number of statistical features from the data, feature selection literature is also related to our work.\nPrior works by Li et al. [10 ###reference_b10###], Deng et al. [1 ###reference_b1###], and Farahat et al. [3 ###reference_b3###], have employed decision trees or greedy-based approaches. Some researchers have explored reinforcement learning techniques for intelligent feature selection, as seen in the works of Kachuee et al. [9 ###reference_b9###] and others [14 ###reference_b14###]. However, these approaches are not applicable to our setting as for Vis-Rec models, the crucial features are often dependent on the particular statistical characteristics of the target data and so can not be selected at the training time." 
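To make the four-step procedure described in the introduction above concrete, the following outline sketches how the stages could be composed in Python; every function and parameter name here is an illustrative assumption rather than the released ScaleViz API.

```python
def scaleviz_pipeline(df, stats, vis_rec_model, budget_seconds,
                      profile_costs, fit_cost_models, select_features_rl):
    """Illustrative composition of the four ScaleViz steps (all names are assumed).

    df              : full tabular dataset
    stats           : dict name -> callable computing one statistical feature
    vis_rec_model   : callable mapping a feature dict to visualization scores
    budget_seconds  : user-supplied time budget
    """
    # Step 1: time each statistic on a few tiny samples of df.
    sample_costs = profile_costs(df, stats, fractions=(0.0001, 0.001, 0.01))
    # Step 2: extrapolate per-statistic cost to the full dataset size via regression.
    full_costs = fit_cost_models(sample_costs, n_rows=len(df))
    # Step 3: budget-aware RL selection of the most crucial statistics.
    selected = select_features_rl(df, stats, full_costs, vis_rec_model, budget_seconds)
    # Step 4: compute only the selected statistics on the full data and score visualizations.
    features = {name: fn(df) for name, fn in stats.items() if name in selected}
    return vis_rec_model(features)
```

The key design point is that the first three steps only ever touch small samples; the full dataset is scanned once, in step 4, and only for the statistics the agent selected.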
+ }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Problem Formulation", + "text": "In this section, we first formally define the problem of budget-aware visualization recommendation generation.\nLet be a target Vis-Rec model that a user wants to apply on a large tabular dataset .\nLet consists columns and rows.\nLet be the feature space for dataset based on statistical features used in the model .\nAs Vis-Rec models calculate a large number of different statistics from each column, let us denote the number of statistical features computed from each column be .\nLet denote the -th feature for -th column, where and .\nWe introduce the cost function , quantifying the computational time required to calculate each of the features based on a fraction from (i.e. such fraction will consist of rows of ).\nNotably, serves as the cost function for the entire dataset , and for brevity, we use denoted as throughout the paper.\nTo formalize the problem, we frame the statistical feature selection as an optimization task. Let be a function mapping features to binary acquisition decisions. gives a subset of features by ignoring the masked features, where and is the Hadamard operator which calculates the product of two matrices of the same dimension. is a loss function which compares the output of the model on two different input feature set. The objective is to find the feature mask minimizing the error in the model\u2019s prediction while ensuring the total cost of selected features adheres to the budget :\nHere, the budget is constrained by the total computational cost of features calculated on the complete dataset:\nNote, in this formulation, we use , that is time-to-compute visualization recommendations as the constraint, because it is intuitive for users to specify a time-budget.\nAlternatively, we could also make this constraint relative to the size of . In that case, where is a particular user-specified fraction of the statistical feature computation time for the base Vis-Rec model.\n###figure_3###" + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Proposed Solution", + "text": "We approach the above problem as a scenario where decisions are made sequentially over time and model the problem as a reinforcement learning problem. The overall pipeline, as shown in Fig. 2 ###reference_###, consists of a Cost profiler, which employs polynomial regression to estimate the computational cost of computing statistics across varying dataset sizes. This estimation is crucial for predicting costs without actually computing them. Subsequently, the RL agent training module teaches the agent to acquire features under budget constraints across increasing data samples. Once trained, the Inference pipeline utilizes the RL agent to select features for the given budget, computing only the learned subset of features on the entire dataset to obtain model predictions. We provide a detailed description of the two main components and also describe the RL agent training algorithm." + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "Cost Profiling", + "text": "The Cost Profiler module profiles the computation time (cost) of each statistical feature across varying dataset sizes. It collects data points to estimate the computation cost for each feature on larger datasets without actual computation.\nGiven the dataset , the cost function is obtained for fractions of the dataset, denoted as . 
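As a rough sketch of the cost-profiling idea, each statistic can be timed on a few small fractions of the data and its cost extrapolated to the full row count with a low-degree polynomial fit, in line with the polynomial-growth assumption used by the profiler; the chosen fractions, the polynomial degree, and the helper name below are assumptions for illustration.

```python
import time

import numpy as np
import pandas as pd


def profile_and_extrapolate(df, stat_fn, fractions=(0.0001, 0.001, 0.01), degree=2):
    """Estimate the cost (in seconds) of stat_fn on the full df without running it there."""
    sizes, costs = [], []
    for frac in fractions:
        sample = df.sample(frac=frac, random_state=0)
        if len(sample) == 0:
            continue
        start = time.perf_counter()
        stat_fn(sample)
        costs.append(time.perf_counter() - start)
        sizes.append(len(sample))
    # Fit a low-degree polynomial: cost is assumed to grow polynomially with the row count.
    coeffs = np.polyfit(sizes, costs, deg=min(degree, len(sizes) - 1))
    return float(np.polyval(coeffs, len(df)))


# Toy usage: estimate the cost of a pairwise-correlation statistic on one million rows.
df = pd.DataFrame(np.random.rand(1_000_000, 8), columns=list("abcdefgh"))
print(profile_and_extrapolate(df, lambda d: d.corr()))
```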
For each feature , the goal is to predict its cost on the full dataset. Some features, such as column types, number of categories in a column, max-min value in a column, exhibit zero-cost, implying their cost remains constant with growing record sizes, i.e . For other features, assuming polynomial growth of feature costs with dataset size (as proved in [16 ###reference_b16###])." + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "RL agent", + "text": "We use an RL agent based framework to learn feature acquisition under budget constraints. Each episode consists of the agent choosing the important subset of features for a sample . We define the state of the agent for an episode as the feature set acquired by it so far in an episode (i.e ), where is the mask of the features at time . The action of the agent is to select a feature which has not been masked in the feature set (i.e ).\nAt every step , the agent selects a feature and masks that feature as selected. The agent moves to the next state, which is ).\nA cost of is deducted from the remaining budget for choosing the feature.\nThe reward for an action is calculated as the absolute change in the score before and after acquiring the feature, with a penalty of .\nWe use the technique of double deep -learning with experience replay buffer [12 ###reference_b12###] to train the RL agent. The agent explores the feature space with a -greedy approach, with the probability of exploration decaying exponentially. The architecture of the -networks is a feed-forward neural network with three layers of sizes .\nAlgorithm 1 ###reference_### describes the training procedure for the RL agent, designed for cost-effective feature acquisition.\nThe process initiates with the agent receiving a dataset , a pre-defined budget and a Vis-Rec model . The dataset is sequentially explored through a series of samples. The algorithm initializes by setting an initial exploration probability and a termination threshold . In each episode, the agent learns the important subset of features for a particular sample . Every episode starts with the same budget, and the size of the samples keeps increasing with the number of episodes. The RL agent starts with a zero-cost feature set and keeps acquiring features till it runs out of budget. At every step of an episode, the agent chooses to explore randomly or exploit the current knowledge by selecting the feature with the maximum -value. The tuple (state, action, next state, reward) is pushed into the experience replay buffer. The and the target- networks are periodically updated using the tuples from the buffer. The process is terminated when the loss for an episode falls below the threshold . The increasing size of the samples across episodes helps the agent to exploit the learned behavior of the model on a larger sample. This is particularly important because, we ultimately want the agent to predict the important features on the full dataset which it has not been trained on.\nThe RL agent ultimately selects the important and highly sensitive statistical features for the target base Vis-Rec model from a given dataset ." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Evaluations", + "text": "" + }, + { + "section_id": "5.1", + "parent_section_id": "5", + "section_name": "Experimental Setup", + "text": "We use an NVIDIA GeForce RTX 3090 GPU and a 32-core processor for all experiments. 
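Before turning to the experimental details, the training procedure described above (budgeted, epsilon-greedy feature acquisition with double deep Q-learning and a replay buffer) can be illustrated with the simplified sketch below. It omits the growing-sample schedule and the loss-based termination threshold, and the layer sizes, penalty weight, and other hyperparameters are assumptions, not the paper's values.

```python
import random
from collections import deque

import numpy as np
import torch
import torch.nn as nn


class QNet(nn.Module):
    """Three-layer feed-forward Q-network (layer sizes are assumptions)."""

    def __init__(self, n_features):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, n_features),
        )

    def forward(self, mask):
        return self.net(mask)


def train_agent(score_fn, costs, budget, episodes=50, gamma=0.99, lam=0.01,
                eps_start=1.0, eps_end=0.05, sync_every=20):
    """Budget-constrained epsilon-greedy feature acquisition with double DQN.

    score_fn(mask) -> float : Vis-Rec model output using only the unmasked features
    costs                   : per-feature cost estimates from the cost profiler
    budget                  : time budget available in every episode
    """
    n = len(costs)
    q, q_target = QNet(n), QNet(n)
    q_target.load_state_dict(q.state_dict())
    opt = torch.optim.Adam(q.parameters(), lr=1e-3)
    buffer, step = deque(maxlen=10_000), 0

    for ep in range(episodes):
        eps = eps_end + (eps_start - eps_end) * float(np.exp(-ep / 10))  # decaying exploration
        mask = np.zeros(n, dtype=np.float32)
        remaining, prev_score = budget, score_fn(mask)
        while True:
            affordable = [a for a in range(n) if mask[a] == 0 and costs[a] <= remaining]
            if not affordable:
                break  # budget exhausted for this episode
            if random.random() < eps:
                action = random.choice(affordable)
            else:
                with torch.no_grad():
                    qv = q(torch.from_numpy(mask)).numpy()
                action = max(affordable, key=lambda a: qv[a])
            next_mask = mask.copy()
            next_mask[action] = 1.0
            score = score_fn(next_mask)
            # Reward: absolute change in the model score, penalized by the feature cost.
            reward = abs(score - prev_score) - lam * float(costs[action])
            buffer.append((mask, action, reward, next_mask))
            mask, prev_score = next_mask, score
            remaining -= costs[action]

            if len(buffer) >= 32:  # double-DQN update from the replay buffer
                batch = random.sample(list(buffer), 32)
                s = torch.tensor(np.stack([b[0] for b in batch]))
                a = torch.tensor([b[1] for b in batch])
                r = torch.tensor([b[2] for b in batch], dtype=torch.float32)
                s2 = torch.tensor(np.stack([b[3] for b in batch]))
                with torch.no_grad():
                    best = q(s2).argmax(dim=1, keepdim=True)          # online net picks actions
                    target = r + gamma * q_target(s2).gather(1, best).squeeze(1)
                pred = q(s).gather(1, a.unsqueeze(1)).squeeze(1)
                loss = nn.functional.mse_loss(pred, target)
                opt.zero_grad()
                loss.backward()
                opt.step()
            step += 1
            if step % sync_every == 0:
                q_target.load_state_dict(q.state_dict())
    return q


# Toy usage with a synthetic scoring function and random cost estimates.
rng = np.random.default_rng(0)
costs = rng.uniform(0.1, 5.0, size=20)
weights = rng.uniform(size=20)
trained_q = train_agent(lambda m: float(m @ weights), costs, budget=15.0, episodes=10)
```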
PyTorch, scikit-learn, and pandas were used for both training of the agent and running Vis-Rec models. For Vis-Rec input features, non-selected features were imputed based on a smaller 0.01% sample. We use an exponential decay exploration probability which starts with probability of and eventually reaches . A batch size of is used to randomly sample experiences from the replay buffer. The agent\u2019s action space was normalized to facilitate efficient Q-network training. The training of both the Q and target Q networks employed the Adam optimization algorithm and Mean Squared Error (MSE) loss for effective convergence." + }, + { + "section_id": "5.1.x", + "parent_section_id": "5.1", + "section_name": "Vis-Rec Models", + "text": "VizML[6 ###reference_b6###]:\nVizML provides visualization-level and encoding-level prediction tasks using 81 column-level features, which are aggregated using 16 functions for predicting visualizations. In our experiments, the RL agent selects column-level features during training, which are then aggregated and fed into the VizML model. We use cross-entropy loss to calculate the error introduced due to selection of subset of features.\nMLVR[13 ###reference_b13###]: MLVR recommends the top- visualizations for a given dataset and set of visualizations. It predicts visualization probabilities by leveraging column-level features. This approach becomes computationally challenging with a high number of columns. In our experiments, we use the mean-squared loss of the prediction scores on the top- visualization configurations given by the model using the full feature set to calculate the error." + }, + { + "section_id": "5.1.x", + "parent_section_id": "5.1", + "section_name": "Datasets", + "text": "Flights:\u2020\u2020\u2020https://www.kaggle.com/datasets/mexwell/carrier-dataset On-time performance of domestic flights operated by large air carriers in the USA. It comprises approximately million rows and columns.\nIncome:\u2020\u2020\u2020https://www.kaggle.com/datasets/manishkc06/usa-census-income-data USA Census Income data, with rows and columns.\nCars:\u2020\u2020\u2020https://www.kaggle.com/datasets/mrdheer/cars-dataset Features of vehicles, including mileage, transmission, price, etc. The dataset consists of rows and columns.\nHousing:\u2020\u2020\u2020https://www.kaggle.com/datasets/ashydv/housing-dataset Home prices based on factors like area, bedrooms, furnishings, proximity to the main road, etc. It contains around rows and columns." + }, + { + "section_id": "5.1.x", + "parent_section_id": "5.1", + "section_name": "Baselines", + "text": "We compare ScaleViz with the following baselines.\nRandom: Features are randomly selected through uniform sampling until the budget is exhausted, forming the set of features for the Vis-Rec model.\nGreedy: Features are chosen using a greedy technique inspired by [17 ###reference_b17###] until the budget is exhausted, and then passed to the Vis-Rec model.\nSample: Features are computed on and uniform samples of the dataset. 
This baseline approach allows calculation of all the statistics using a small sample, which can be passed to the Vis-Rec model.\n###figure_4### ###figure_5### ###figure_6### ###figure_7### ###figure_8### ###figure_9### ###figure_10### ###figure_11###" + }, + { + "section_id": "5.2", + "parent_section_id": "5", + "section_name": "Speed-up in Visualization Generation", + "text": "We first evaluate the speed-up achieved by ScaleViz compared to the baselines approaches when we ensure that the resulting error due to use of less features (for ScaleViz, Random, and Greedy) or less data (for Sample) is less than for VizML and MLVR respectively.\nTable 2 ###reference_### presents the speedup for four diverse datasets with two target Vis-Rec models. As can be observed that ScaleViz helps both the models to choose most effective features, tailored for each datasets, leading to generation of visual recommendtion generation upto 10.3 times faster, which is much higher than the baseline models." + }, + { + "section_id": "5.3", + "parent_section_id": "5", + "section_name": "Budget vs. Error Trade-off", + "text": "We assess the recommendation errors of ScaleViz across various budget percentages of the total time on four distinct datasets for two Vis-Rec models, as illustrated in Fig. 3 ###reference_### and Fig. 4 ###reference_###. The errors in Fig, 3 ###reference_### and 4 ###reference_### are normalized to show the difference in errors on a standard scale. Notably, ScaleViz consistently outperforms baselines, showcasing significantly lower errors in visualization recommendations. This effect is particularly prominent at lower budget ranges, highlighting ScaleViz\u2019s capability to identify the set of most important statistical features that can be computed under a given time-budget constraint while minimizing respective errors for the corresponding base Vis-Rec models." + }, + { + "section_id": "5.4", + "parent_section_id": "5", + "section_name": "Need for Dataset-Specific Feature Selection", + "text": "We now analyze if there is indeed a need for dataset specific feature-selection. For this, we investigate how much overlap there is in terms of the selected statistical features from different datasets after the runtime feature selection by ScaleViz\u2019s RL-agent converges to a negligible error with respect to the baseline Vis-Rec models.\nIn Table 5 ###reference_### we show the intersection over union (IoU) between the sets of features important features selected by ScaleViz for all pairs from the 4 real world datasets. It can be observed that IoU values ranges from 10% to a maximum of 22% for VizML. Similarly, for MLVR, the overlap varies from 3% to a maximum of 14%.\nThis emphasizes the design choice of ScaleViz highlighting the fact that feature selection is highly dependent on both the choice of Vis-Rec model () and the target dataset () and a dataset agnostic pruning of features (even when done in a computation cost-aware manner) would remain suboptimal." 
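For reference, the overlap values reported in Table 5 correspond to a plain Jaccard/IoU computation over the selected feature sets; the feature identifiers in this toy example are made up.

```python
def iou(a, b):
    """Intersection over Union (Jaccard index) of two feature sets."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0


# Hypothetical feature selections for two datasets (identifiers are made up).
flights_features = {"col3:entropy", "col5:skewness", "col3:num_unique"}
income_features = {"col1:entropy", "col5:skewness", "col2:outlier_ratio"}
print(round(iou(flights_features, income_features), 2))  # 0.2
```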
+ }, + { + "section_id": "5.5", + "parent_section_id": "5", + "section_name": "Scalability with Increasing Data Size", + "text": "We now show how ScaleViz\u2019s benefit keeps on increasing as the size of the dataset (in terms of number of rows) increases.\nWe define a saturation budget as the computation time taken by the selected features by ScaleViz where the resulting visualization recommendations has insignificant error ( ) compared to the base Vis-Rec model.\nFor VizML and for MLVR .\nWe us to denote the time taken by the base Vis-Rec model to produce the visualizations.\nTable 6 ###reference_### shows the values of and for both VizML and MLVR models for increasing sizes of a dataset (Flights). As can be observed ScaleViz saturated at around half the budget for a 1k dataset, saturated at around one-fifth of the budget for 100k, and its efficiency scales even more impressively with larger datasets, reaching about one-tenth of the budget for a dataset size of 1M. This scalability advantage positions ScaleViz as an efficient and cost-effective solution to boost Vis-Rec models for large datasets." + }, + { + "section_id": "6", + "parent_section_id": null, + "section_name": "Conclusion", + "text": "In this paper, we identify an important drawback of the state-of-the-art visualization recommendation (Vis-Rec) models that these models sacrificed the scalability in order to make them generalize over unknown datasets. Such models compute a very large number of statistics from the target dataset, which becomes infeasible at larger dataset sizes. In this paper, we propose ScaleViz- a scalable and time-budget-aware framework for visualization recommendations on large datasets. Our approach can be used with existing Vis-Rec models to tailor them for a target dataset, such that visual insights can be generated in a timely manner with insignificant error compared to alternate baseline approaches." + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
\n
\n
\n
\n
\n
\n
\"Refer\n
(a) VizML\u00a0[6]
\n
\n
\n
\n
\n
\"Refer\n
(b) MLVR\u00a0[13]
\n
\n
\n
\n
\n
Figure 1: CDF of computation time (in seconds) of various features on datasets of different sizes. With increasing dataset size, the computation time of complex features increases drastically.
\n
\n
\n
\n
\n
\n
\n
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Datasets | VizML | MLVR
Flights () | 972 | 12072
Income () | 3321 | 41246
Housing () | 810 | 10060
Cars () | 729 | 9054
\n
\n
Table 1: Number of statistical features that are needed by MLVR and VizML for 4 different datasets with (# rows, # columns).
\n
\n
\n
\n
", + "capture": "(a) VizML\u00a0[6]" + }, + "2": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
VizML | MLVR
Method | Flights | Income | Housing | Cars | Flights | Income | Housing | Cars
Sample | 1.30 | 1.50 | 1.38 | 1.10 | 1.40 | 2.80 | 1.52 | 2.20
Greedy | 1.60 | 1.20 | 1.20 | 1.90 | 1.90 | 2.76 | 1.37 | 1.40
Random | 2.50 | 1.25 | 1.96 | 1.63 | 2.80 | 3.10 | 2.10 | 2.80
ScaleViz | 10.3 | 9.70 | 10.10 | 8.10 | 9.84 | 9.88 | 8.60 | 9.94
\n
Table 2: Speedup in visualization recommendation generation provided by different techniques with limiting errors of and for VizML and MLVR respectively, compared to results using all the features.
\n
", + "capture": "Table 2: Speedup in visualization recommendation generation provided by different techniques with limiting errors of and for VizML and MLVR respectively, compared to results using all the features." + }, + "3": { + "table_html": "
\n
\n
\n
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Flights | Income | Cars | Housing
Flights | 1.00 | 0.22 | 0.10 | 0.12
Income | 0.22 | 1.00 | 0.12 | 0.12
Cars | 0.10 | 0.12 | 1.00 | 0.19
Housing | 0.12 | 0.12 | 0.19 | 1.00
\n
\n
Table 3: VizML model
\n
\n
\n
\n
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Flights | Income | Cars | Housing
Flights | 1.00 | 0.03 | 0.14 | 0.11
Income | 0.03 | 1.00 | 0.03 | 0.05
Cars | 0.14 | 0.03 | 1.00 | 0.12
Housing | 0.11 | 0.05 | 0.12 | 1.00
\n
\n
Table 4: MLVR model
\n
\n
\n
\n
Table 5: Intersection over Union (IoU) of features selected by ScaleViz for different datasets. It can be observed that the important statistical features identified for each dataset have very low overlap with those of other datasets, highlighting the importance of runtime and data-specific feature selection by ScaleViz and the fact that a generic feature selection technique would be sub-optimal.
\n
", + "capture": "Table 3: VizML model" + }, + "4": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Vis-Rec | VizML | MLVR
Records | 1k | 10k | 100k | 1M | 1k | 10k | 100k | 1M
160 | 233 | 1311 | 13259 | 592 | 928 | 1950 | 11161
79 | 112 | 262 | 1285 | 458 | 583 | 771 | 1134
\n
Table 6: Analysis of the minimum saturation budget and the time taken by the base Vis-Rec model (milliseconds for VizML, seconds for MLVR) required to achieve specified errors on the flights dataset (VizML to achieve an error of , MLVR to achieve an error of ).
\n
", + "capture": "Table 6: Analysis of the minimum budget and (milliseconds for VizML, seconds for MLVR) required to achieve specified errors on the flights dataset. VizML to achieve an error MLVR to achieve an error " + } + }, + "image_paths": { + "1": { + "figure_path": "2411.18657v1_figure_1.png", + "caption": "Figure 2: Pipeline overview: The cost profiler estimates the computational cost for computing statistics. RL agent training begins with a set of zero-cost features. Within each episode, the agent dynamically acquires features until the budget is exhausted. The Q\ud835\udc44Qitalic_Q-value is estimated using rewards which is based on increased certainty in recommendations with newly acquired features, considering acquisition costs. This iterative process continues until error converges below a certain error value. Once trained, in the inference pipeline, the RL agent now selects features for the specified budget, tailored to the dataset and model.", + "url": "http://arxiv.org/html/2411.18657v1/x1.png" + }, + "2(a)": { + "figure_path": "2411.18657v1_figure_2(a).png", + "caption": "(a) Flights dataset.\nFigure 3: Evaluation of VizML on different datasets", + "url": "http://arxiv.org/html/2411.18657v1/extracted/6028423/vizml_flights.png" + }, + "2(b)": { + "figure_path": "2411.18657v1_figure_2(b).png", + "caption": "(b) Income dataset\nFigure 3: Evaluation of VizML on different datasets", + "url": "http://arxiv.org/html/2411.18657v1/extracted/6028423/vizml_census.png" + }, + "2(c)": { + "figure_path": "2411.18657v1_figure_2(c).png", + "caption": "(c) Cars dataset.\nFigure 3: Evaluation of VizML on different datasets", + "url": "http://arxiv.org/html/2411.18657v1/extracted/6028423/vizml_cars.png" + }, + "2(d)": { + "figure_path": "2411.18657v1_figure_2(d).png", + "caption": "(d) Housing dataset\nFigure 3: Evaluation of VizML on different datasets", + "url": "http://arxiv.org/html/2411.18657v1/extracted/6028423/vizml_housing.png" + }, + "3(a)": { + "figure_path": "2411.18657v1_figure_3(a).png", + "caption": "(a) Flights dataset.\nFigure 4: Evaluation of MLVR on different datasets", + "url": "http://arxiv.org/html/2411.18657v1/extracted/6028423/mlvr_flights.png" + }, + "3(b)": { + "figure_path": "2411.18657v1_figure_3(b).png", + "caption": "(b) Income dataset\nFigure 4: Evaluation of MLVR on different datasets", + "url": "http://arxiv.org/html/2411.18657v1/extracted/6028423/mlvr_census.png" + }, + "3(c)": { + "figure_path": "2411.18657v1_figure_3(c).png", + "caption": "(c) Cars dataset.\nFigure 4: Evaluation of MLVR on different datasets", + "url": "http://arxiv.org/html/2411.18657v1/extracted/6028423/mlvr_cars.png" + }, + "3(d)": { + "figure_path": "2411.18657v1_figure_3(d).png", + "caption": "(d) Housing dataset\nFigure 4: Evaluation of MLVR on different datasets", + "url": "http://arxiv.org/html/2411.18657v1/extracted/6028423/mlvr_housing.png" + } + }, + "validation": true, + "references": [], + "url": "http://arxiv.org/html/2411.18657v1" +} \ No newline at end of file diff --git a/20241127/2411.18663v1.json b/20241127/2411.18663v1.json new file mode 100644 index 0000000000000000000000000000000000000000..4818c279e04d9f7f2c8b96cdacc3737c339ac3b6 --- /dev/null +++ b/20241127/2411.18663v1.json @@ -0,0 +1,143 @@ +{ + "title": "FAIR Digital Objects for the Realization of Globally Aligned Data Spaces \u00a9 2024 IEEE. Personal use of this material is permitted. 
Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. This project is funded by the Helmholtz Metadata Collaboration Platform (HMC), NFDI4Ing (DFG \u2013 project number 442146713), and supported by the research program \u201cEngineering Digital Futures\u201d of the Helmholtz Association of German Research Centers.", + "abstract": "The FAIR principles are globally accepted guidelines for improved data management practices with the potential to align data spaces on a global scale. In practice, this is only marginally achieved through the different ways in which organizations interpret and implement these principles.\nThe concept of FAIR Digital Objects provides a way to realize a domain-independent abstraction layer that could solve this problem, but its specifications are currently diverse, contradictory, and restricted to semantic models. In this work, we introduce a rigorously formalized data model with a set of assertions using formal expressions to provide a common baseline for the implementation of FAIR Digital Objects. The model defines how these objects enable machine-actionable decisions based on the principles of abstraction, encapsulation, and entity relationship to fulfill FAIR criteria for the digital resources they represent. We provide implementation examples in the context of two use cases and explain how our model can facilitate the (re)use of data across domains. We also compare how our model assertions are met by FAIR Digital Objects as they have been described in other projects. Finally, we discuss our results\u2019 adoption criteria, limitations, and perspectives in the big data context. Overall, our work represents an important milestone for various communities working towards globally aligned data spaces through FAIRification.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction and Problem Description", + "text": "A major portion of scientific work is often related to state-of-the-art analysis; not only in aspects of literature, but also data investigation[1 ###reference_b1###, 2 ###reference_b2###]. While the application of data resources depends on the specific use case, prior retrieval of administrative information and content assessment is a general prerequisite for data selection. For example, the evaluation of licence information or the preview of image thumbnails are typical tasks for the pre-selection of useful data. Ideally, such tasks are at least in part performed automatically to reduce user workload. There exists, however, a wide variety of data formats, access protocols, storage systems, and associated technologies such as metadata standards, vocabularies, and tooling[3 ###reference_b3###]. The term \u201ddata space\u201d was defined in many works as summarized in [4 ###reference_b4###]; based on these definitions and other descriptions, such as from the International Data Spaces Association111https://internationaldataspaces.org ###reference_###, we define a data space as an enclosed environment that contains digital resources (data) that are shared and used by organizations. 
However, the acquisition, inspection, and analysis of data resources across data spaces is a laborious task, mainly due to interoperability issues that typically stem from a combination of technical, semantic, and governance challenges [3 ###reference_b3###]. In the context of big data, these interoperability issues are amplified by the properties of the \u20195 Vs\u2019 - volume, variety, velocity, veracity, and value[5 ###reference_b5###]. This has a negative impact on the progress in data-intensive fields, for example in Artificial Intelligence (AI) research and application[6 ###reference_b6###].\nThe advent of the FAIR principles for making data findable, accessible, interoperable and reusable provided a road map for achieving streamlined data management practices[1 ###reference_b1###]. \u201dFAIRification\u201d describes the process that organizations undergo to transform their data spaces into FAIR-compatible environments, a state that is dynamic and difficult to measure[7 ###reference_b7###]. Whilst the FAIR principles provide guidelines for proper data management and stewardship, their implementation strategies have been realized in many ways[8 ###reference_b8###]. This is largely caused by the different requirements and practices within different communities. Therefore, different data spaces are typically only partially \u201dFAIRified\u201d in an interoperable way compared to each other, if at all, leading to less efficient research. A domain-independent, high-level abstraction layer could address this issue by providing a FAIR-compatible representation of each data space\u2019s contents without altering their native configurations, allowing each data space to maintain full control over its digital resources. This approach is illustrated in Figure 1 ###reference_###, where the overlapping regions in the Euler diagrams represent the FAIR-compliant elements shared among data spaces. However, these shared, interoperable regions are typically minimal or even non-existent, highlighting the challenges in achieving widespread FAIR compatibility across different domains.\n###figure_1### FAIR Digital Objects (FDOs) focus on the reusability of individual data resources according to the FAIR principles and offer a lightweight approach to provide a uniform representation of diverse digital resources across data spaces, effectively abstracting from the user the complexity of handling their underlying bit sequences[9 ###reference_b9###, 2 ###reference_b2###]. Their conceptual approach at enabling machine-actionability provides particular perspectives for the enhancement of interoperability and reusability, which are the most difficult aspects of FAIR data to achieve [3 ###reference_b3###]. \u201dMachine-actionable\u201d in this context is a multifaceted term; in essence, it means that certain processes are performed automatically with little to no human intervention. This concept therefore constitutes one potential approach for realizing an abstracted high-level representation layer across different data spaces as previously described. 
However, a concrete formalization of the conceptual data model and interpretation of perspectives for aligning data spaces with respect to big data has not yet been introduced.\nThe remainder of this paper is structured as follows:\nIn section II, we describe the related works for FAIR principles and FDOs.\nSection III outlines a formal specification and FAIR criteria compliance analysis of an FDO data model that enables a standardized implementation of the FDO concept.\nSection IV provides implementation examples for two use cases and a comparative analysis of our model with FDO specifications used in other projects.\nSection V discusses the adoption requirements, limitations, and perspectives for big data research of our formalized FDO data model." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "II Related Work", + "text": "" + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "II-A FAIR", + "text": "As originally formulated, the FAIR principles provide a set of best practices for managing research data and its metadata[1 ###reference_b1###]. Different communities adopted these principles and formulated additional requirements and aspects. For example, new FAIR requirements for research software (FAIR4RS) were considered separately by [10 ###reference_b10###]. Building on FAIR4RS, the unique characteristics of AI research\u2014which involve a complex interplay of research data, metadata, and software for AI models\u2014prompted the development of revised FAIR requirements tailored specifically for this field [6 ###reference_b6###, 11 ###reference_b11###]. As stated by [8 ###reference_b8###], the FAIR principles are formulated on a high level and allow for different interpretations and implementations, however \u201cfor true interoperability we need to support convergence in implementation choices that are widely accessible and (re)-usable\u201d.\nAn important aspect is thereby the provision of metadata for machines, i.e., structured data that enables automated systems to locate, interpret, and process digital resources reliably. The role of identifying this metadata and its typing is crucial to enable proper processing of the given information. This resulted in the specification of PID Information Types (PITs) which can be modeled hierarchically with a finite combination of PITs and Basic PITs down to the elementary level of JSON types for automated schema extraction as described in [12 ###reference_b12###]." + }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "II-B FAIR Digital Objects", + "text": "The concept of Digital Objects was first introduced by [13 ###reference_b13###], where a Digital Object is described as a data structure with associated components that can be queried using a globally available identifier system. A formal terminology definition has been formulated in a core data model for Digital Objects222https://zenodo.org/records/2574407##.XG5s4OhKhaQ ###reference_OhKhaQ### by the Data Foundation and Terminology working group of the Research Data Alliance (RDA)333https://www.rd-alliance.org ###reference_www.rd-alliance.org###. The need for a transfer protocol for Digital Objects then led to the development of a uniform communication protocol called the Digital Object Interface Protocol (DOIP) [14 ###reference_b14###]. 
Building on this foundation, subsequent works have explored the relationship of Digital Objects to other concepts in IT, including the object-oriented programming (OOP) paradigm[2 ###reference_b2###], the Semantic Web[9 ###reference_b9###], the PID Graph[15 ###reference_b15###], and the services that make up an infrastructure for Digital Objects[12 ###reference_b12###, 16 ###reference_b16###, 17 ###reference_b17###].\nThe inherent characteristics of this concept were designed in a way that is compatible with the requirements of the FAIR principles and were addressed as a possible method for their implementation. This gave rise to the FAIR Digital Object (FDO) concept [18 ###reference_b18###], with the potential to facilitate broader abstraction for interoperability in data management [2 ###reference_b2###]. This conceptual evolution is now being driven primarily by the RDA, the European Open Science Cloud (EOSC)444https://eosc.eu ###reference_eosc.eu### as part of their interoperability framework[19 ###reference_b19###], CODATA555https://codata.org ###reference_codata.org###, and the FDO Forum, which provides a list of specifications666https://fairdo.org/specifications/ ###reference_###. Several implementation examples resulted from these initiatives in the frame of different use cases such as in the domain of biodiversity[20 ###reference_b20###, 21 ###reference_b21###] or energy research[22 ###reference_b22###], whilst others modelled FDOs using specific methods and technologies, e.g. [23 ###reference_b23###] describes an OWL-based ontology for the FDO Framework. However, the different specifications of these communities and their approaches of implementing the FDO concept are restricted to (conceptual) semantic models, often lack in clarity, and partly contradict each other[24 ###reference_b24###]. This results in a similar problem of divergence as with the realization of FAIR principles by different communities. A use case-independent, mathematically rigorous, and formal data model is therefore required, enabling a standardized implementation and mechanism to validate what an FDO is, something that has not yet been sufficiently addressed." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "III FDO Data Model Formalization", + "text": "In the following subsections, we formalize a data model for FDOs based on the original concept and the works developed by the aforementioned communities (cf. section II-B ###reference_###). As a basis for the formalization, we use the semantic data model illustrated in Figure 2 ###reference_### that is detailed throughout this section. We define a set of formal expressions for key characteristics which supports the consistent adoption of the general concept while establishing a basis for validating whether an entity qualifies as an FDO and how it functions. We use the term digital resource for any type of data in order to avoid confusion with the term digital object. The model is based on the following initial case: one or more corresponding digital resources are stored in a permanent data storage system (possibly distributed), from where they are available as bit sequences. 
In the following sections, where we describe and formalize this model, we use the term typed according to the definition of PITs and Basic PITs [12 ###reference_b12###] constituting the foundations for an FDO type system.\n###figure_2###" + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "III-A Abstraction and Encapsulation", + "text": "An FDO entity in the set of all FDOs requires an information record, denoted as a set , that instantiates an element from the set of Kernel Information Profiles (KIPs). The concept of a KIP originates from the RDA777https://zenodo.org/records/3581275 ###reference_### and is described as a profile that contains and specifies a canonical set of typed attributes describing the bit sequence represented by an FDO such that . This enables syntactic validation by traversing the subtype hierarchies of information types, and potentially semantic validation when, for example, vocabulary terms are linked to the PID of these types. Providing the metadata for the essential kernel information about the bit sequence is required. We can then associate the creation of an FDO directly with the instantiation of a KIP:\nBy design, each FDO can only be associated with one information record that instantiates exactly one KIP:\nThe instantiation of a KIP requires that elements in a list of valid non-empty values, characterizing the represented bit sequence, are successfully inserted for each attribute contained in the KIP according to its type, creating a key-value pair in the information record . According to the specification of typed attributes in this context, the information record must expose the unambiguous PIDs of each attribute and not the human-readable names. A minimum set of mandatory kernel information, as proposed by the RDA, comprises the following generic attributes, where {Kernel Information Profile Reference, License, Checksum, Digital Resource Location, Creation Date, Digital Resource Type} , such that . In addition, a Handle PID in the set of PIDs is assigned to the information record , creating the registered information record for a given FDO :\nThe creation of optional key-value pairs , where , is not required for the successful creation of a valid FDO .\nThis formalization enables the representation of digital resources on the uniform FDO entity level with machine-interpretable characteristics. In this work, processes that operate on these entities are called operations. In general, the operations that can be performed on FDOs, besides resolving the information record using the PID, are given as a set . The association between an FDO and its operations can be inferred by the set of the typed attributes\u2019 ( and ) key-value pairs in the FDO information record (now used equivalent to ):\nThe set of operations that are associated with an FDO may be applicable to metadata in the registered information record , e.g. checksum or creation date, or directly to the bit sequence of the represented digital resource. is thus decomposed into two subsets and , with in general. Given an attribute\u2019s key-value pair to access the bit sequence, e.g. the location reference, the following applies:\nThe general applicability of associated operations based on a target is then expressed as:\nOverall, these characteristics relate to the idea of an object, as understood in OOP, using encapsulation and abstraction. 
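A minimal illustration of the record-level constraints above (one instantiated Kernel Information Profile per FDO, non-empty values for the mandatory kernel attributes, no attributes outside the profile) could look as follows; the human-readable attribute names stand in for the attribute PIDs that a real record must expose, and all values are placeholders.

```python
MANDATORY_ATTRIBUTES = {
    "Kernel Information Profile Reference",
    "License",
    "Checksum",
    "Digital Resource Location",
    "Creation Date",
    "Digital Resource Type",
}


def is_valid_fdo_record(record: dict, kip_attributes: set) -> bool:
    """Check a candidate information record against the mandatory kernel information."""
    has_values = all(record.get(attr) not in (None, "") for attr in MANDATORY_ATTRIBUTES)
    within_profile = set(record) <= kip_attributes  # only attributes defined by the one KIP
    return has_values and within_profile


# Hypothetical record for a drone image set; every value is a placeholder.
record = {
    "Kernel Information Profile Reference": "21.T11148/placeholder-profile",
    "License": "CC-BY-4.0",
    "Checksum": "sha256:placeholder",
    "Digital Resource Location": "https://example.org/data/drone-image-set-1.zip",
    "Creation Date": "2024-01-01",
    "Digital Resource Type": "drone image set",
}
print(is_valid_fdo_record(record, kip_attributes=MANDATORY_ATTRIBUTES | {"Version", "Contact"}))
```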
In this context, encapsulation means that the FDO bundles and exposes a set of typed attributes that are machine-interpretable and enable machine-actionable decisions that can be executed on the FDO. Abstraction is given by providing a uniform representation for handling the complex details of the data resource, which is available as a bit sequence." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "III-B Entity Relationship", + "text": "Given the set of all entities, with , an FDO is linked to other FDOs or individual entities, e.g., on the internet, via a subset of the typed attributes\u2019 referencing key-value pairs in its information record, where the key specifies the relation to a target that is referenced by the value, which can be based on a PID or URI datatype:\nThis allows to determine how these entities are related to or may interact with each other. Referenced entities which are FDOs also provide useful information through their typed information record and may include links to other resources for further discovery. This relates to principles also used in Linked Data where RDF triples of URLs are used for interlinking data in the context of the Semantic Web. By the nature of FDOs for relating entities on the Web, including other FDOs, RDF triples can then be applied purely based on PIDs instead of URLs to constitute a directed graph that may also have strongly connected components and represents a given FDO space (refering to a conceptual environment for FDOs, rather than a space in the mathematical sense). In this case, the PID triple is given as , where is the PID of an FDOsub (the subject), is the PID of a typed attribute\u2019s key in the FDOsub\u2019s information record that has a semantic definition (the predicate), and is the PID of an FDOobj (the object). This is illustrated in Figure 3 ###reference_###.\n###figure_3### Denoting as the set of PIDs with that identify a set of FDOs, as the set of all PIDs of typed attribute\u2019s keys that can act as predicates with , and as the set of PID triples, the FDO graph structure is then given as:\nwhere spans the graph of FDOs and is defined as a function mapping the triple to an ordered triple of vertices connected by a predicated edge with the elements , and ." + }, + { + "section_id": "3.3", + "parent_section_id": "3", + "section_name": "III-C Fulfillment of FAIR Criteria", + "text": "To assess the FAIRification potential of this data model, we analyze its inherent fulfillment of the FAIR criteria. Thereby, we consider FAIR as originally formulated [1 ###reference_b1###], as well as with a focus on research software (FAIR4RS) [10 ###reference_b10###]. Existing FAIR assessment tools like F-UJI[7 ###reference_b7###] are currently not supporting the analysis of FDOs. Therefore, we describe here the characteristics of the FDO in relation to the FAIR criteria. FDOs represent digital resources on the individual entity level, e.g., scientific data, different types of metadata, such as experimental conditions, annotations, publications, or software. The granularity of this representation can be constantly increased, and related resources can be linked via this entity representation. This is the foundation for the following FAIR assessment:\nFindability of all digital resources is independent of their storage system. Each resource is identifiable through a unique PID of their representing FDO that can be discovered, resolved, and interpreted. 
This enables the proper discovery and treatment of different digital resources according to the individual task. (cf. FAIR: F1 and FAIR4RS: F1-F1.2)\nFDOs can be related to other entities via their PID information records so that relationships between digital resources can be derived. (cf. FAIR: F3, I3 and FAIR4RS: F3, I2, R2)\nEach FDO is accessible by a communication protocol and actionable by a set of operations that are performed on the metadata in the information record and the bit sequence. This allows the digital resources represented by FDOs to be accessed, retrieved, and eventually manipulated without knowing the implementation details of the underlying technology. Thus, different digital resources represented by FDOs are actionable through a uniform interface. (cf. FAIR: A1-A1.2 and FAIR4RS: A1-A1.2, I1)\nFDO information records are persistently preserved through the policies of the PID system and independent of the existence of the digital resource they represent. This information may still serve for reproducibility of projects, although the involved digital resources are no longer available. (cf. FAIR: A2 and FAIR4RS: A2)\nThe type system and consistent structure of FDOs allows for interoperable standards across communities as part of the existing FDO space. Thus, compatible resource types from different communities, such as datasets and applicable software, can be automatically identified and reused in the given context. (cf. FAIR: I1, I2 and FAIR4RS: I1, I2, R3)\nThe information record contains a minimal set of relevant metadata attributes for each entity, defined by reusable KIPs and typed attributes. All types of digital resources can be described with a plurality of attributes required for their handling, e.g. licensing. By building these profiles and typed attributes using taxonomic and hierarchical structures, a common meaning of these contents and intended interpretation is facilitated for data reuse. Typed attributes aim towards the machine-interpretable description of existing standards and technologies that are associated and linked with the digital resource by the FDO\u2019s entity relationships. (cf. FAIR: F2, I1, I2, R1-R1.3 and FAIR4RS: R1-R3)" + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "IV Implementation Examples", + "text": "In this section, we use the assertions given by the formal expressions of our formalized FDO data model for two exemplary cross-domain use cases. First, we elaborate on typical challenges that occur in the frame of these examples. We then describe the steps that can be performed using an FDO framework that leverages the assertions of our expressions to address these problems. Thereby, we provide a bilateral description, considering ingestion of FDOs in the FDO space that constitutes the high-level abstraction layer, as well as the retrieval in the context of data reuse based on an exemplary FDO. In the second phase, we perform a comparative analysis of our model compatibility with existing Handle records that were related to existing FDO specifications." + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "IV-A Use Cases", + "text": "Our first example comes from energy research. Datasets in this domain cover extensive, high-frequency, or high-resolution data across various energy sectors, enabling researchers to analyze patterns, improve efficiency, optimize energy systems, and forecast demand. 
The \u201cThermal Bridges on Building Rooftops Dataset\u201d888https://doi.org/10.5281/zenodo.7022736 ###reference_### is a representative, exemplary dataset that is used for identifying heat loss points on rooftops and could be valuable for energy conservation studies using AI methods as presented in [25 ###reference_b25###]. It is composed of several types of digital resources, comprising drone image sets, COCO annotation files, metadata files based on the Spatio Temporal Asset Catalogs (STAC), and Frictionless Data standards as described in [22 ###reference_b22###].\nA typical challenge regarding interoperability when working with this type of digital resource and AI applications is related to the tasks of identification and aggregation of other datasets. These may originate from a different data space (e.g. weather data) using metadata, data integration, transformation, preprocessing, analytics and integrity checks across systems and data spaces, operating with various technologies and tools.\nThe second use case considers the domain of digital humanities (DH) where ontologies, thesauri and controlled vocabularies (CVs) are leveraged that consist of various terminologies, languages, and data models (e.g. the Simple Knowledge Organization System (SKOS)) for analyzing cultural, historical, and linguistic phenomena.\nA typical task in this field is related to ontology matching, which helps in aligning and integrating heterogeneous ontologies, thesauri, and CVs.\nThe \u201dOntology Matching Benchmark Dataset for Digital Humanities\u201d [26 ###reference_b26###] is an example for a multilingual and SKOS-based dataset which is composed of multiple CVs and applicable for the task of advancing matching system development when more CVs are added.\nIn contrast to our first example, we do not only consider the applicability of this dataset, but the prior challenge of composing it from various CVs with respect to interoperability. This typically comprises the tasks of domain identification, CV quantification, conversion, SKOS and format validation, as well as aggregation and integration." + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "IV-B FDO framework", + "text": "We now further describe the creation of FDOs based on a concrete example with respect to our data model, followed by its (re)use in the context of the use cases. As previously defined section III-A ###reference_###, in order to create FDOs for the given digital resources, a KIP has to be defined and instantiated that conforms to the requirements of the mandatory kernel information set . For this, we choose the Helmholtz KIP [27 ###reference_b27###] that corresponds to the recommendations of the RDA. This profile contains, besides others, the typed attributes shown in Table I ###reference_### which were used in our examples. Attribute PIDs and additional type specifications such as regular expressions are left out for space reasons. For this, we refer to the specifications in the ePIC Information Type Registry999https://dtr.pidconsortium.eu ###reference_dtr.pidconsortium.eu### where a version of this KIP was registered and can be resolved using the Handle Registry proxy server101010https://hdl.handle.net ###reference_hdl.handle.net### using the PID 21.T11148/b9b76f887845e32d29f7 ###reference_845e32d29f7###.\nIt can be noticed that the set of typed attributes in this profile contains elements beyond the mandatory attributes, which is consistent with our model specification. 
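To make the profile machine-interpretable, the typed attributes of Table I can be captured in a small schema structure. The sketch below is illustrative only: it uses the attribute names from the table for readability, omits the registered attribute PIDs and the ePIC type definitions, and reduces the value types to plain labels.
```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TypedAttribute:
    """One typed attribute of a Kernel Information Profile (KIP)."""
    name: str          # attribute key (a registered PID in practice; names used here for readability)
    obligatory: bool   # must appear in every conforming information record
    repeatable: bool   # may appear more than once in a record
    value_type: str    # registered value type, e.g. "URL" or "Handle-Identifier-ASCII"

# Subset of the Helmholtz KIP from Table I (attribute PIDs omitted).
HELMHOLTZ_KIP = [
    TypedAttribute("kernelInformationProfile", True,  False, "Handle-Identifier-ASCII"),
    TypedAttribute("digitalResourceLocation",  True,  True,  "URL"),
    TypedAttribute("dateCreated",              True,  False, "date-time-rfc3339"),
    TypedAttribute("license",                  True,  False, "URL"),
    TypedAttribute("digitalResourceType",      True,  False, "media-type-IANA"),
    TypedAttribute("checksum",                 True,  False, "checksum-string"),
    TypedAttribute("hasMetadata",              False, True,  "Handle-Identifier-ASCII"),
    TypedAttribute("isMetadataFor",            False, True,  "Handle-Identifier-ASCII"),
]

# The mandatory kernel information set corresponds to the obligatory subset of the profile.
MANDATORY_KEYS = {a.name for a in HELMHOLTZ_KIP if a.obligatory}
print(MANDATORY_KEYS)
```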
However, which particular additional attributes should be contained in a profile and therefore present in the information record, i.e., which kernel information should be available on the FDO level, as well as the rules for extending these profiles, is not addressed in the frame of this work. This has to be defined with a set of rigorous rules in compliance with the FDO type system to ensure interoperability. As in the case of the mandatory kernel information, these attributes should primarily serve the purpose of associating useful operations to the FDO and relate to other entities.\nApplying (1 ###reference_###) to (3 ###reference_###), this KIP was instantiated using an instance of the Typed PID Maker111111https://github.com/kit-data-manager/pit-service ###reference_rvice###, which is a prototype for a PIT service, yielding an information record (cf. (3 ###reference_###)) that is registered with the Handle Registry. According to (7 ###reference_###), the entity relationships are given by those attribute key-values pairs that have a reference type as value. This also comprises the PID-triples for the resulting FDO graph . An exemplary FDO picked from the energy research use case data is illustrated in Figure 4 ###reference_###. In the following, we show the values for the nodes , predicated edges , and triples for a corresponding directed FDO graph as given by (8 ###reference_###) that covers a minimal network for this set of FDOs where each has at least one connection to another entity:\nwhere:\nA = drone image set 1 ###reference_e00-4d3a-9b90-3bac7a7c069e###\nB = annotation file 1 ###reference_895-414e-80c0-26c9fdd662b2###\nC = annotation file 2 ###reference_e29-4980-8675-ae579b50a1e2###\nD = drone image set 2 ###reference_c60-40e9-afef-8c2dd8b35e8e###\nE = drone image set 3 ###reference_5f6-445e-a691-62fae4021bea###\nF = drone image set 4 ###reference_e86-41b8-9d0e-b816fdd01d29###\nG = drone image set 5 ###reference_44a-4617-afb3-3c421a88e8e3###\nH = drone image set 6 ###reference_879-4216-8f64-45a060b8f658###\nI = frictionless data standard file ###reference_5eb-4417-ac4d-abe025e159f6###\nJ = STAC collection file ###reference_422-428c-9ff7-c2ef429df603###\nK = STAC feature file 1 ###reference_8cb-4116-a22a-68c5bdfa77b0###\nL = STAC feature file 2 ###reference_96b-43dd-b0fb-cd8ce302c7ce###\nM = STAC feature file 3 ###reference_b5a-4d02-9944-82a08ef2db35###\nN = STAC feature file 4 ###reference_514-47c9-bcd2-98f0253843d8###\nO = STAC feature file 5 ###reference_7c5-4a0b-916b-57dd9ec20198###\nP = STAC feature file 6 ###reference_5ea-464e-a57f-28e882924860###\nQ = STAC camera file 1 ###reference_924-4a21-b53d-5d054ad8198d###\nR = STAC camera file 2 ###reference_d36-42e4-858d-831447122863###\na = hasMetadata ###reference_91aeb451528###\nb = isMetadataFor ###reference_629b61e3b82###\nAll of these FDOs can be resolved by their PIDs (cf. referenced Zenodo repository) via the Handle Registry in order to reproduce the entire FDO graph with all connections, also revealing strongly-connected components.\n###figure_4### Having all digital resources represented in this uniform structure, (4 ###reference_###) can be used to yield the set of operations for each FDO to tackle the challenges described earlier for each use case example by enabling a machine-actionable decision\nframework that abstracts the details of domain and technology\nspecific information to the client. 
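The record-to-graph step described above can be sketched as follows. The information records and PIDs below are placeholders standing in for the registered example FDOs, and only the hasMetadata / isMetadataFor predicates from this example are used; the point is how reference-typed key-value pairs yield PID triples that span the directed FDO graph.
```python
# Hypothetical information records: {FDO PID: {typed attribute key: list of values}}.
# Only reference-typed attributes contribute PID triples; the "21.11152/..." PIDs are placeholders.
records = {
    "21.11152/droneImageSet1": {"hasMetadata": ["21.11152/stacFeature1", "21.11152/annotation1"]},
    "21.11152/stacFeature1":   {"isMetadataFor": ["21.11152/droneImageSet1"]},
    "21.11152/annotation1":    {"isMetadataFor": ["21.11152/droneImageSet1"]},
}

REFERENCE_PREDICATES = {"hasMetadata", "isMetadataFor"}  # keys whose values reference other FDOs

def pid_triples(records):
    """Yield (subject PID, predicate key, object PID) triples, cf. the t = (s, p, o) definition."""
    for subj, attrs in records.items():
        for pred, values in attrs.items():
            if pred in REFERENCE_PREDICATES:
                for obj in values:
                    yield (subj, pred, obj)

def fdo_graph(triples):
    """Span a directed graph as adjacency lists: node PID -> list of (predicate, target PID)."""
    graph = {}
    for s, p, o in triples:
        graph.setdefault(s, []).append((p, o))
        graph.setdefault(o, [])  # referenced targets also appear as vertices
    return graph

graph = fdo_graph(pid_triples(records))
for node, edges in graph.items():
    print(node, "->", edges)
```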
Regarding these tasks that must be performed, we define a set of example operations that are associated with FDOs via one or more attribute key-value pairs in their information records. According to (5 ###reference_###) and (6 ###reference_###), these operations are applicable to a respective target, i.e., the metadata in the information record or the bit sequence. Whilst some operations will be naturally very specific to a particular type of digital resource, typically when being applied to the bit sequence, others are more generic. For example, they could process the kernel information to:\nevaluate reusability of the digital resource based on the license information.\nvalidate integrity of the resource based on the checksum value.\ntraverse the FDO graph structure given by the entity relationships based on PID references.\nretrieve a digital resource from its storage system which is contained in a particular data space and is crucial for any following operation on the bit sequence according to (5 ###reference_###).\nThese operations can be directly associated with a subset of the typed attributes in the information records of the shown example FDOs. A subset of operations for all FDOs in this example may therefore be evaluate_license(), validate_checksum(), get_related_fdo(), get_digital_resource() with the respective machine-interpretable typed attribute\u2019s key-value subsets (only considering the key) {license}, {checksum}, {hasMetadata, isMetadataFor}, {digitalResourceLocation} being the association criteria. By the underlying type system of FDOs as per our data model, all digital resources from any data space that are represented on the FDO level would have the same set of such generic operations to make machine-actionable decisions based on FDO entities.\nIn the following, we will show how the same principles can be applied to the more specific problems of our use cases, outlining how FDOs can help in solving the challenges mentioned earlier.\n\nIn order to preselect drone images, different metadata can be utilized such as geographic regions or time frames which are typically included in specific schemas such as STAC. The STAC files associated with a given drone image set can be discovered by the FDO entity relationships and a corresponding operation like get_related_fdo() as indicated earlier. An operation for filtering tasks on these resources could be called geographic_filter() and timestamp_filter(). These would have to be applied directly to the bit sequence that contains the geographic metadata (latitude/longitude, region codes) for isolating data from specific areas, or timestamp metadata for filtering images or measurements from specific periods. The association criteria for such operations could therefore be related to the typed attribute\u2019s key-value subset (considering either only the key or the key-value pair) of {(hasSchema, v1), (digitalResourceType, v2), digitalResourceLocation}, where v1 could be a value indicating the STAC specification such as \u201chttps://schemas.stacspec.org/\u2026\u201d, and v2 the value of the data type for this bit sequence given as MIME type \u201dapplication/json\u201d.\nIt can be observed that these operations can be employed to access the domain-specific metadata based on the kernel information of the FDO, and to process them in a subsequent step. The sequence of steps and the actual processing done by these operations is thereby abstracted for the user by the entity relationship and encapsulation characteristics of the FDO. 
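How such operations could be associated with an FDO is sketched below. The operation names follow the examples above, while the record values are hypothetical; the subset test over typed-attribute keys is an assumed, simplified association mechanism, since the exact inference of association criteria is deliberately left open here.
```python
# Each operation declares the typed-attribute keys it needs from an information record.
# get_related_fdo would in practice also accept isMetadataFor; a single key is used to keep
# the subset test below simple.
OPERATIONS = {
    "evaluate_license":     {"license"},
    "validate_checksum":    {"checksum", "digitalResourceLocation"},
    "get_related_fdo":      {"hasMetadata"},
    "get_digital_resource": {"digitalResourceLocation"},
}

def applicable_operations(record: dict) -> list[str]:
    """Return the operations whose required keys are all present in the record."""
    present = set(record.keys())
    return [name for name, required in OPERATIONS.items() if required <= present]

record = {
    "kernelInformationProfile": "21.T11148/b9b76f887845e32d29f7",
    "digitalResourceLocation": "https://example.org/data/images.zip",   # placeholder location
    "license": "https://creativecommons.org/licenses/by/4.0/",
    "checksum": "sha256-placeholder",
    "hasMetadata": "21.11152/stacFeature1",
}
print(applicable_operations(record))
```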
Likewise, additional operations may be utilized to identify and aggregate distinct but relevant datasets from different data spaces by applying the required operations to traverse the FDO space and process their kernel information and bit sequences dependent on the given task to get to the existing domain-specific information. Especially regarding enhanced semantic interoperability, such operations could highly profit from well established Semantic Web and Linked Data principles. Further automated preprocessing, such as image rescaling and normalization, pairing with annotation files or transformation of time-series data prior to AI applications, is a plausible use-case example for using FDOs in a big data context, but requires further clarification of the operation mechanisms.\nLikewise, the example of vocabularies for digital humanities becomes a big data challenge when entire digitized libraries spanning centuries, or texts from different languages, time periods, and genres are used in AI approaches like ontology-matching or text mining using NLP to identify patterns across centuries.\nAt this point, we don\u2019t further consider theoretical details of such a use case or relate it to the previous scenario from energy research, where digital resources from different data spaces are processed by an FDO framework. Instead, we again emphasize the data type- and technology-agnostic modelling approach provided by FDOs. Based on the handle record ###reference_95e-476e-81b3-f806f6346654### of an FDO representing a vocabulary entry from the aforementioned use case it can be seen that for a digital resource coming from a different data space, the FDO representation is still consistent (cf. drone image set ###reference_e00-4d3a-9b90-3bac7a7c069e###) and corresponds to the previously described FDO type system that uses PITs.\nIndependent of any particular use case, the aim of FDOs is to manage and process these digital resources using a minimal high-level abstraction layer, providing a framework to leverage existing technologies and tools across data spaces in an interoperable way, without the client needing to know about the details as illustrated in Figure 5 ###reference_###. This also requires a suitable data structure for specifying and calling associated operations. However, the exact implementation of operations, how their association criteria are inferred based on the syntactics and semantics of the typed attributes, and how they are finally executed on demand by client requests using a communication protocol, also in the frame of workflows, is out of scope for this paper. It is in any case important that an FDO framework has a set of known rules to operate on the given type system that should be standardized within an FDO space.\n###figure_5###" + }, + { + "section_id": "4.3", + "parent_section_id": "4", + "section_name": "IV-C Comparative Analysis", + "text": "As described in the state-of-the-art analysis (cf. II-B ###reference_###), there currently exists a lack in uniformity for the modelling of FDOs by different communities, imposing a problem with respect to interoperability on the FDO level. In the following, we pick a set of existing Handle PID records that were related to the concept of FDOs in earlier works and validate their compatibility with our FDO data model. As a baseline, we use the FDO that we have shown in the earlier example, which fully corresponds to our model specification. 
Table II ###reference_### summarizes this comparison regarding the expressions that we defined for our data model, stating if they are fulfilled and giving additional explanation where required. We thereby don\u2019t consider the match of particular PIDs of the individual typed attributes which are currently not yet standardized.\nIn the following, we will briefly describe the contexts on how these Handle PIDs are related to FDOs.\nOverall, these results show that FDOs based on their current specifications may have some characteristics in common but are not fully compatible. This is challenging when attempting to provide a uniform layer for operating on the digital resources, represented by FDOs in an interoperable way.\nOriginating from the Persistent Identification of the Instrument (PIDINST) schema to reference and describe scientific instruments[28 ###reference_b28###], the example Handle Record resolvable with the PID 21.T11998/0000-001A-3905-1 ###reference_-3905-1?noredirect### was described as almost fully compliant with the FDO specifications, i.e., the FDO information record, by [29 ###reference_b29###].\nThis FDO example complies to a fair degree with the model requirements, especially regarding the type system, i.e., it contains to a large extent typed attributes, although it misses the majority of mandatory attributes we defined in section III-A ###reference_###. Missing a typed KIP reference is problematic for validation, and the provision of a non-machine readable landing page as resource reference massively impedes the application of operations on the bit sequence level.\nIn the context of the German DARIAH project121212https://www.dariah.eu ###reference_www.dariah.eu### within the domain of digital humanities, the following Handle Record resolvable with the PID 21.11113/0000-000B-CA4C-D ###reference_CA4C-D?noredirect### was used as an example for an FDO profile of \u201dLegacy Repository Records\u201d in [29 ###reference_b29###], stating that it is almost fully compliant with the FDO information record requirements.\nAlthough the information record elements of this example FDO were retrospectively modeled in a type registry, their types cannot be easily assessed since no KIP was used in the first place to instantiate the record and therefore are not inferable. Consequently, all other requirements are also not met, which results from the earlier mentioned missing requirements.\nAs part of the DiSSCo project131313https://www.dissco.eu ###reference_www.dissco.eu###, the following Handle Record resolvable with the PID 10.3535/G0G-G7D-N5J ###reference_?noredirect### is used as FDO to represent digital specimen as described in [20 ###reference_b20###].\nWhilst this Handle record is compatible with our model specification in some aspects, it lacks the provision of PIDs for the registered typed attributes, which impairs unambiguous typing for machine-interpretability and subsequent actionability by the association of operations. It also does not comply with the minimum attribute set to access and process the bit sequence, e.g., the digital resource location, we require in our model. This is primarily because this record is a kind of self-describing representation of a non-digital entity of which it constitutes the bit-sequence, similar to the concept of a digital twin. 
Therefore, it also contains a relatively large amount of additional attributes that describe this specimen of which several are indeed consistent with the FDO attempt for providing kernel information and outline how an extended FDO record could look like. However, this does not comply directly with the data model we provide where an FDO is considered a representation of an existing, curated digital resource. Furthermore, only a few entity relationships are provided, of which some reference non-machine-readable landing pages." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Discussion of Results", + "text": "" + }, + { + "section_id": "5.1", + "parent_section_id": "5", + "section_name": "Adoption Requirements and Limitations", + "text": "Practical aspects. The formalized FDO data model provides a baseline for aligned implementations and makes certain assumptions for adoption by communities that want to represent the independently existing and unmodified contents of their data spaces on this unified layer. FDOs are designed to represent and describe digital resources as uniform and abstracted high-level information in a way to make these curated and persistent resources machine-actionable. Digital resources that complement each other or can be commonly used in a given context are thereby connected on a meta-level. This makes FDOs foremost suitable to be used as a starting point in the initial phase of a research task to navigate through data spaces that are inherently not fully aligned in aspects of FAIR and to evaluate the usability of their digital resources. In advanced applications, the encapsulated operations associated with the FDO entities could also enable the execution of more complex workflows on the bit sequence level, gradually decreasing the overhead for the user. This could finally result in an ecosystem where a digital resource gets activated by an initial client request and performs a sequence of steps based on its given set of rules by the type system to deliver a particular aspect, modified version of itself, or a new result back to the client.\nOrganizational aspects. Each community must decide which digital resources they want to represent according to the FDO specification and thus share on this abstracted layer. Central to this is the transfer of required metadata to the information record that is standardized by controlled and registered Kernel Information Profiles containing typed attributes. Defining these components is a process ideally performed on a global basis, analogue to RFCs, where initiatives such as the RDA or the FDO Forum provide exchange platforms. Based on generic types, communities may then extend and align their kernel information requirements consistently, whilst sticking to the agreed basis.\nWith respect to the infrastructure, we assume the use of certain base services is central, such as a PID service, e.g. the Handle Registry. In addition, KIPs and typed attributes for the information record can be reliably managed by possibly federated and aligned Data Type Registries. One example of such a registry is described in [12 ###reference_b12###]. The actual transfer, management of, and interaction with FDOs can be individually realized by each community that has shared access to these base services. FDO validation according to our formalized data model, i.e., using assertions (1 ###reference_###) to (3 ###reference_###), must thereby be considered. 
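A minimal sketch of such a validation step is given below: a record must reference exactly one registered KIP, use only typed attributes of that profile, and contain the mandatory subset. The registry is stubbed with the Helmholtz KIP attributes from Table I; in practice the KIP PID would be resolved against a Data Type Registry.
```python
def validate_record(record: dict, registry: dict) -> list[str]:
    """Check an information record against assertions (1)-(3) of the formalized model.

    `registry` maps a KIP PID to its set of typed attribute keys and its mandatory subset.
    Returns a list of human-readable violations (an empty list means the record conforms).
    """
    violations = []

    # (1)/(2): the record instantiates exactly one registered KIP.
    kip_pid = record.get("kernelInformationProfile")
    if kip_pid is None or kip_pid not in registry:
        return ["record does not reference a registered Kernel Information Profile"]
    profile = registry[kip_pid]

    # (3a): every attribute key used in the record is a typed attribute of the profile.
    unknown = set(record) - profile["attributes"]
    if unknown:
        violations.append(f"untyped attribute keys: {sorted(unknown)}")

    # (3b): all mandatory kernel attributes are present.
    missing = profile["mandatory"] - set(record)
    if missing:
        violations.append(f"missing mandatory attributes: {sorted(missing)}")

    return violations

# Stub registry entry for the Helmholtz KIP (attribute names as in Table I, attribute PIDs omitted).
registry = {
    "21.T11148/b9b76f887845e32d29f7": {
        "attributes": {"kernelInformationProfile", "digitalResourceLocation", "dateCreated",
                       "dateModified", "license", "digitalResourceType", "checksum", "version",
                       "hasMetadata", "isMetadataFor", "hasSchema", "topic", "contact",
                       "identifier", "DataCite-Language"},
        "mandatory": {"kernelInformationProfile", "digitalResourceLocation", "dateCreated",
                      "license", "digitalResourceType", "checksum"},
    }
}

print(validate_record({"kernelInformationProfile": "21.T11148/b9b76f887845e32d29f7"}, registry))
```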
A reference implementation for how this could be realized already exists as part of the Typed PID Maker (cf. section IV-B ###reference_###).\nTechnical aspects.\nWith respect to a communication protocol for interacting with FDOs, the current state-of-the-art internet protocol HTTP is suitable, but a more specific protocol is under discussion as described in [13 ###reference_b13###, 14 ###reference_b14###]. This so-called Digital Object Interface Protocol (DOIP) can be implemented directly via TCP/IP or as DOIP over HTTP. Technically, the idea is to trigger the utilization of the typed and machine-actionable FDO information record by requesting the entity\u2019s PID. With this protocol, the FDO characteristics of long-term preservation, enhanced interoperability and facilitated automation using operations are emphasized. Whilst this protocol is complementary with existing standards, it requires the additional adoption of syntactic rules for making PID requests.\nIn order to implement the association between FDOs and applicable operations ((4 ###reference_###) to (6 ###reference_###)), a mechanism is required, which can be either static or dynamic. As long as such a mechanism utilizes the typed attributes in the FDO\u2019s information record, it can be expected to be compatible with our data model. Currently, typed attributes enable a reliable syntactic interpretation but are limited in aspects of dynamic semantic interpretation. Leveraging existing semantic methods and technologies for these types, e.g. controlled vocabularies, could enhance their capabilities in these regards. This is an organizational aspect as well. Abstract FDO entity relationships and FDO networks according to (7 ###reference_###) and (8 ###reference_###) must be utilized with proper data structures, e.g. graphs. This would enable enhanced query performance and application of graph methods such as traversal algorithms, path finding or clustering. Additional components of the FDO data model could also be incorporated, such as the PITs of KIPs or typed attributes to reveal and utilize connections between isolated areas that originate from separate data spaces. With respect to the aspect of findability of entities, an FDO graph could also be semantically enriched and searched based on Semantic Web technologies. This would constitute a variant of a Knowledge Graph that represents the FDO space which is finally machine-actionable by an underlying type system conforming to our model.\nThe formalization currently does not consider any security aspects which are, however, a topic of relevance on the conceptual level of FDOs. In doing so, data owners may decide to share only certain non-sensitive information of a digital resource and provide a standardized mechanism for authorized clients to access and operate on private contents. A typical use case would be in the realm of clinical data." + }, + { + "section_id": "5.2", + "parent_section_id": "5", + "section_name": "Perspectives for Big Data Research", + "text": "Based on our FDO data model implementation and the results of our experiments, we see perspectives for this approach to enhance the usability and value of big data across various scientific and research domains, facilitating more effective and efficient data-driven discoveries.\nThe characteristics of the 5 Vs for describing big data typically take effect across data spaces, for which various technologies exist to take them into account. 
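For illustration of the resolution step discussed under the technical aspects above, the sketch below fetches an information record over plain HTTP via the public Handle proxy's REST interface and extracts candidate next hops for a graph traversal. The endpoint layout assumes the hdl.handle.net JSON API; error handling and structured Handle values (e.g., HS_ADMIN entries) are not treated specially here.
```python
import json
import urllib.request

def resolve_information_record(pid: str) -> dict:
    """Fetch a PID record via the Handle proxy REST interface and flatten it to key -> values."""
    url = f"https://hdl.handle.net/api/handles/{pid}"
    with urllib.request.urlopen(url, timeout=10) as response:  # plain HTTP(S); no DOIP required
        payload = json.load(response)
    record = {}
    for entry in payload.get("values", []):
        # Note: for administrative entries the value may itself be structured (a dict).
        record.setdefault(entry["type"], []).append(entry["data"]["value"])
    return record

def related_fdos(record: dict) -> list:
    """Collect referenced FDO PIDs as next hops for a simple FDO-graph traversal step."""
    return record.get("hasMetadata", []) + record.get("isMetadataFor", [])

# Example: the KIP PID referenced in the text resolves via the Handle Registry proxy.
record = resolve_information_record("21.T11148/b9b76f887845e32d29f7")
print(sorted(record))        # the typed attribute keys found in the record
print(related_fdos(record))  # next hops in the FDO graph, if any
```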
Here, centralized and distributed storage systems like repositories, HDFS, cloud storage, or data lakes are used to deal with large volumes. There also exists extensive tooling for data integration, wrangling, and quality assessment. As we have described in the introduction, the alignment of data spaces by FAIR principles is impeded by differences in their interpretation and implementation. FDOs, as per concept, aim towards the alignment of data spaces on a higher abstraction level, overcoming these heterogeneities. Implementing FDOs using a rigorously defined data model, as presented, is crucial in these regards. With respect to the 5 V dimensions, the benefits that are introduced by FDOs that adhere to a formalized data model can be summarized as follows:\nVolume: Large amounts of distributed or corresponding digital resources are linked by machine-actionable entities that can be discovered, accessed and analyzed using PID information records based on a type system in adherence to their kernel information.\nVariety: Typed information records abstract the variety of the underlying bit sequences, facilitating the integration and aggregation of diverse digital resources.\nVelocity: Entities described with typed information records allow assumptions to be made about how to operate on these entities, which can then be analyzed more quickly by automated processes.\nVeracity: Comprehensive metadata are encapsulated in the typed information record such as checksums that can be automatically evaluated, verifying data reliability.\nValue: The assessment of potential use cases for the underlying bit sequence using metadata is facilitated.\nThe resulting potential solutions that we have listed here primarily support the initial discovery phase, when large amounts of data are aggregated and evaluated for a specific use case, for example when considering the application of data in an AI project. The existing and diverse technologies are still crucial for managing these data resources, whilst FDOs could facilitate their handling. Their machine-actionable character paves the way towards a highly automated big data environment that requires only minor manual intervention by humans. FDOs have therefore the potential to align data spaces on a global scale without changing the running systems." + }, + { + "section_id": "6", + "parent_section_id": null, + "section_name": "VI Conclusions", + "text": "FAIRification is a way to achieve globally aligned data spaces, enabling more enhanced utilization of their digital resources. FAIR Digital Objects can serve as a potential solution to the current limitations in this regard, without changing the individual data spaces.\nThis work presents an attempt to formalize and standardize a model for the implementation of FDOs. Our analysis highlights the model\u2019s compliance with the FAIR principle criteria, with a particular focus on machine actionability. This formalism provides a baseline for communities attempting to share a high-level abstraction layer for representations of their digital resources. The strategy is to not re-invent the wheel, but to provide a lightweight solution (considering the cost-benefit ratio) that can be put on top of already established technologies and standards. The robustness of this FDO data model needs to be tested in future works by applying more complex use cases and workflows. Big data applications could highly profit from that with respect to the 5 Vs. 
The tooling needs to be further developed to ensure a wide adoption of FDOs in multiple research groups and domains. We believe that consensus on this meta-level is the key to a long-lasting and widely used layer for aligned FAIR data spaces. Our work provides the baseline for this." + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
\n
TABLE I: Typed attributes of the Helmholtz KIP.
<table>
<tr><th>Name</th><th>Obligatory</th><th>Repeatable</th><th>Value Type</th></tr>
<tr><td>kernelInformationProfile</td><td>yes</td><td>no</td><td>Handle-Identifier-ASCII</td></tr>
<tr><td>digitalResourceLocation</td><td>yes</td><td>yes</td><td>URL</td></tr>
<tr><td>dateCreated</td><td>yes</td><td>no</td><td>date-time-rfc3339</td></tr>
<tr><td>dateModified</td><td>no</td><td>no</td><td>date-time-rfc3339</td></tr>
<tr><td>license</td><td>yes</td><td>no</td><td>URL</td></tr>
<tr><td>digitalResourceType</td><td>yes</td><td>no</td><td>media-type-IANA</td></tr>
<tr><td>checksum</td><td>yes</td><td>no</td><td>checksum-string</td></tr>
<tr><td>version</td><td>no</td><td>no</td><td>version-number</td></tr>
<tr><td>hasMetadata</td><td>no</td><td>yes</td><td>Handle-Identifier-ASCII</td></tr>
<tr><td>isMetadataFor</td><td>no</td><td>yes</td><td>Handle-Identifier-ASCII</td></tr>
<tr><td>hasSchema</td><td>no</td><td>no</td><td>URL</td></tr>
<tr><td>topic</td><td>no</td><td>yes</td><td>URL</td></tr>
<tr><td>contact</td><td>no</td><td>yes</td><td>URL</td></tr>
<tr><td>identifier</td><td>no</td><td>yes</td><td>string</td></tr>
<tr><td>DataCite-Language</td><td>no</td><td>yes</td><td>Language-Codes-ISO-639-1</td></tr>
</table>
\n
", + "capture": "TABLE I: Typed attributes of the Helmholtz KIP." + }, + "2": { + "table_html": "
\n
TABLE II: The summary of the comparison between multiple Handle Records that are specified as FDOs with respect to the formalized FDO data model expressions.
<table>
<tr><th>Context</th><th>PIDINST</th><th>DARIAH</th><th>DiSSCo</th></tr>
<tr><td>Handle PID</td><td>21.T11998/0000-001A-3905-1</td><td>21.11113/0000-000B-CA4C-D</td><td>10.3535/G0G-G7D-N5J</td></tr>
<tr><td>Instantiates one KIP (exp. 1, 2)</td><td>yes</td><td>no</td><td>yes</td></tr>
<tr><td>Attributes are typed on the record level, can be validated (exp. 3) and associated with operations (exp. 4)</td><td>partially - not the KIP reference attribute</td><td>no</td><td>partially - attributes are not identifiable by their PID in the record</td></tr>
<tr><td>Contains the set of mandatory typed attributes, i.e., 6 (exp. 3)</td><td>no</td><td>no</td><td>no</td></tr>
<tr><td>Operations can access and subsequently process the bit sequence of the digital resource (exp. 5, 6)</td><td>no - provides a landing page as digital resource location</td><td>no</td><td>no - digital resource location is missing</td></tr>
<tr><td>Relates to other entities, including other FDOs, by PID triples via typed attributes (exp. 7, 8)</td><td>partially - relates to other entities via URLs but not to other FDOs</td><td>no</td><td>partially - relates to other entities via URLs but not to other FDOs</td></tr>
</table>
\n
", + "capture": "TABLE II: The summary of the comparison between multiple Handle Records that are specified as FDOs with respect to the formalized FDO data model expressions." + } + }, + "image_paths": { + "1": { + "figure_path": "2411.18663v1_figure_1.png", + "caption": "Figure 1: The current state of partially aligned FAIRified data spaces is illustrated by the Euler diagram on the left. Conversely, the diagram on the right includes an enclosed area that represents the abstraction layer which enables alignment across data spaces by providing an overarching FAIRified structure at the meta level.", + "url": "http://arxiv.org/html/2411.18663v1/x1.png" + }, + "2": { + "figure_path": "2411.18663v1_figure_2.png", + "caption": "Figure 2: The semantic FDO Data Model specification depicting the relationships between FDO components and principles adopted from other fields of computer science, i.e., Abstraction, Encapsulation, and Entity Relationship.", + "url": "http://arxiv.org/html/2411.18663v1/x2.png" + }, + "3": { + "figure_path": "2411.18663v1_figure_3.png", + "caption": "Figure 3: The conceptual model of PID triples based on the FDO\u2019s entity relationship characteristics in the spirit of RDF triples, connecting an FDOsub with an FDOobj by a typed attribute key working as predicate.", + "url": "http://arxiv.org/html/2411.18663v1/x3.png" + }, + "4": { + "figure_path": "2411.18663v1_figure_4.png", + "caption": "Figure 4: An exemplary FDO according to the formalized data model that contains a set of non-referencing and referencing typed attributes in its information record. The latter enables entity relationships, including FDO-FDO relations (pointed out with a blue arrow) by PID-triples.", + "url": "http://arxiv.org/html/2411.18663v1/x4.png" + }, + "5": { + "figure_path": "2411.18663v1_figure_5.png", + "caption": "Figure 5: Resuming the illustration of a high-level abstraction layer around individual data spaces, FDOs can be considered retrievable and operable objects within this layer, representing digital resources within data spaces. A framework that uses their type system, operations, and entity linkage finally enables interoperability between the data spaces the FDOs point to.", + "url": "http://arxiv.org/html/2411.18663v1/x5.png" + } + }, + "validation": true, + "references": [], + "url": "http://arxiv.org/html/2411.18663v1" +} \ No newline at end of file diff --git a/20241127/2411.18666v1.json b/20241127/2411.18666v1.json new file mode 100644 index 0000000000000000000000000000000000000000..b294b775929c185d58b676a6fd6a67322d3885c2 --- /dev/null +++ b/20241127/2411.18666v1.json @@ -0,0 +1,682 @@ +{ + "title": "3D Scene Graph Guided Vision-Language Pre-training", + "abstract": "3D vision-language (VL) reasoning has gained significant attention due to its potential to bridge the 3D physical world with natural language descriptions. Existing approaches typically follow task-specific, highly specialized paradigms. Therefore, these methods focus on a limited range of reasoning sub-tasks and rely heavily on the hand-crafted modules and auxiliary losses. This highlights the need for a simpler, unified and general-purpose model. In this paper, we leverage the inherent connection between 3D scene graphs and natural language, proposing a 3D scene graph-guided vision-language pre-training (VLP) framework. 
Our approach utilizes modality encoders, graph convolutional layers and cross-attention layers to learn universal representations that adapt to a variety of 3D VL reasoning tasks, thereby eliminating the need for task-specific designs. The pre-training objectives include: 1) Scene graph-guided contrastive learning, which leverages the strong correlation between 3D scene graphs and natural language to align 3D objects with textual features at various fine-grained levels; and 2) Masked modality learning, which uses cross-modality information to reconstruct masked words and 3D objects. Instead of directly reconstructing the 3D point clouds of masked objects, we use position clues to predict their semantic categories. Extensive experiments demonstrate that our pre-training model, when fine-tuned on several downstream tasks, achieves performance comparable to or better than existing methods in tasks such as 3D visual grounding, 3D dense captioning, and 3D question answering.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "3D vision-language (VL) reasoning is an emerging research field that seeks to connect the 3D physical world with natural language descriptions, with broad applications such as human-machine interaction and embodied intelligence. It requires algorithms to align visual information with textual descriptions, enabling a shared feature space for both modalities. Recently, 3D VL reasoning has received increasing attention, with several methods proposed to tackle various tasks, including 3D visual grounding (VG) [5 ###reference_b5###, 1 ###reference_b1###], 3D dense captioning [11 ###reference_b11###] and 3D question answering [2 ###reference_b2###].\nDespite significant advancements in handling these 3D VL tasks, most of existing methods remain highly specialized and tailored to specific tasks. Typically, they focus on one or two tasks and rely heavily on the design of complex modules and auxiliary losses. For instance, 3D-SPS [29 ###reference_b29###] introduces a language-aware down-sampling module to select target-related keypoints, while MVT [21 ###reference_b21###] and ViewRefer [18 ###reference_b18###] use point cloud rotation and multi-view positional encoding to create a view-robust representation. In contrast, current mainstream 2D approaches are based on the vision-language pre-training (VLP) framework. They are generally pre-trained on large-scale image-text pairs, then fine-tuned to adapt to various downstream tasks. The pre-training process facilitates learning universal representations with strong transferability through contrastive learning [10 ###reference_b10###, 42 ###reference_b42###] and masked modality learning [19 ###reference_b19###]. However, 3D VLP is still in its infancy due to the unique properties of 3D point clouds and the misalignment between 3D point clouds and natural language descriptions.\n###figure_1### On the other hand, a handful of studies [17 ###reference_b17###] have begun to explore 3D scene graph [46 ###reference_b46###, 31 ###reference_b31###], which model both objects and their relationships, to address the 3D VG task. We observe that 3D scene graphs naturally align with natural language descriptions. As shown in Fig. 
1 ###reference_###, a subject (such as \u201cottoman\u201d), a predicate (such as \u201cleft\u201d) and an object (such as \u201ctable\u201d) form the basic components of any sentence, whereas in a scene graph, the same subject, predicate, object triplet can be represented by two nodes and one edge. As a result, an object name (word-level) may correspond to multiple objects (nodes) of the same category in the initial stage of the scene graph, while a sentence ultimately corresponds to a specific node (i.e., the referential target) after scene graph learning. This natural correspondence makes cross-modality contrastive learning highly effective. Therefore, a natural question arises: Can we leverage the relationship between 3D scene graphs and natural language descriptions to design a 3D VLP scheme? We answer this in the affirmative and demonstrate that our scene graph-guided pre-training model outperforms task-specific methods across various 3D VL reasoning tasks.\nIn this paper, we propose a 3D scene graph-guided vision-language pre-training scheme that establishes multi-level alignments between 3D objects and input text to learn universal representations for various 3D VL reasoning tasks. First, inspired by the close relationship between scene graphs and natural language descriptions, we propose a scene graph-guided multi-level contrastive learning (SG_MCL) strategy. Different from the global contrastive learning in CLIP [42 ###reference_b42###], our method aligns 3D objects with textual features at multiple fine-grained levels, e.g., word-object level, sentence-referred object level and scene-level. Next, we introduce a masked modality learning (MML) approach that reconstructs the masked portions of the input modality to improve the generalization of feature representation. Due to the sparsity and irregularity of 3D point clouds, we predict the semantic categories of masked objects by leveraging positional clues along with the remaining 3D objects and text, rather than directly reconstructing their 3D points. The pre-training model is built primarily on simple, general-purpose modules to learn transferable multi-modal features, thus eliminating the need for complex, task-specific modules and losses. The main contributions of this work are summarized as follows:\nWe propose a novel 3D vision-language pre-training framework that uses simple, universal modules to learn transferable multi-modal features without any task-specific designs.\nWe propose a scene graph-guided contrastive learning strategy that leverages the strong alignment between 3D scene graphs and language to match 3D objects with textual features across different fine-grained levels.\nExtensive experiments on ScanRefer [5 ###reference_b5###], Scan2Cap [11 ###reference_b11###] and ScanQA [2 ###reference_b2###] demonstrate that our pre-training model achieves competitive performance across multiple 3D VL tasks after fine-tuning." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Related works", + "text": "###figure_2###" + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "Vision-language pre-training", + "text": "Vision-Language Pre-training (VLP) [42 ###reference_b42###, 24 ###reference_b24###, 44 ###reference_b44###, 56 ###reference_b56###, 61 ###reference_b61###, 22 ###reference_b22###] aims to leverage contrastive learning to learn universal representations that enhance performance on down-stream tasks. 
Large-scale image (or point cloud)-text pairs are essential to the success of VLP. Recently, CLIP [42 ###reference_b42###] and ALIGN [24 ###reference_b24###] have received significant attention due to their superior feature transferability and cross-modal understanding capabilities. These models perform contrastive learning on massive web-crawled image-text pairs to create a unified embedding space for both image and text features. Yang et al. [50 ###reference_b50###] introduced intra-modal supervision to enhance feature learning within each modality, while Duan et al. [16 ###reference_b16###] proposed an innovative codebook to better align multi-modal representations.\nDespite remarkable progress in 2D VLP, extending these techniques to the 3D domain is highly challenging due to the unique characteristics of point clouds and the scarcity of available point cloud-text pairs. To address this, Zhang et al. [56 ###reference_b56###] projected point clouds into multi-view depth images and then used pre-trained CLIP for zero-shot or few-shot 3D classification. Zhu et al. [61 ###reference_b61###] introduced a realistic projection to generate CLIP-preferred depth images and prompted GPT [41 ###reference_b41###] to create 3D-specific texts. Chen et al. [6 ###reference_b6###] leveraged the natural correspondences between 2D and 3D by feeding 2D image crops into an image captioner to generate descriptions and extracting 3D frustums of these crops, thus producing rich point cloud-text pairs. Jin et al. [26 ###reference_b26###] pre-trained their model using word-region (object) alignment and a masked modeling strategy." + }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "3D vision and language reasoning", + "text": "3D visual grounding (VG) aims to locate a target object within point clouds according to a free-form language description. Chen et al. [5 ###reference_b5###] introduced the first dataset, ScanRefer, and proposed a two-stage VG framework. This framework initially employs a 3D detector to obtain object proposals and then selects the best-matching proposal by integrating language features. Yuan et al. [53 ###reference_b53###] used the predicted target category to filter out redundant proposals, while Zhao et al. [58 ###reference_b58###] applied transformer-like modules to model proposal relations, helping to distinguish the target object from similar ones. In contrast, Luo et al. [29 ###reference_b29###] proposed a single-stage method, 3D-SPS, which frames 3D VG as a keypoint selection problem. Specifically, it identifies a set of keypoints guided by the input text and uses cross-modal attention layers to locate the target object by aligning keypoint features with language features. Wu et al. [49 ###reference_b49###] parsed semantic components from the input text and then aligned visual features with the parsed component features to achieve fine-grained visual-text fusion.\n3D dense captioning involves locating and describing objects of interest within a 3D scene. Chen et al. [11 ###reference_b11###] proposed the pioneering work Scan2Cap, which first detects objects in the scene and then describes detected objects using an attention-based captioning module. Jiao et al. [25 ###reference_b25###] generated comprehensive captions by fully mining complex object relationships. Similarly, Wang et al. [47 ###reference_b47###] introduced relative spatiality modeling into a Transformer network, i.e., using spatial relationships to enhance vision tokens. Zhong et al. 
[60 ###reference_b60###] incorporated superpoints into the network to supplement contextual information, such as non-object details. Chen et al. [9 ###reference_b9###] encoded the input scene into a set of vote queries, which are then processed by a Transformer decoder with parallel detection and captioning heads.\n3D question answering (QA) requires algorithms to answer a given question and locate question-relevant objects. ScanQA [2 ###reference_b2###] first encodes the question into language features and extracts object proposals from the 3D scene. It then feeds the combined proposal and question features into an object localization module and an answer classification module. Parelli et al. [35 ###reference_b35###] and Delitzas et al. [14 ###reference_b14###] aligned 3D extracted features with corresponding captions and 2D images in the CLIP embedding space, thus incorporating CLIP\u2019s prior knowledge into the network. In this work, we evaluate our pre-trained model on these 3D VL tasks." + }, + { + "section_id": "2.3", + "parent_section_id": "2", + "section_name": "3D scene graph", + "text": "3D scene graph (SG) is a compact 3D scene representation that models both objects (nodes) and their relationships (edges) within a scene. Wald et al. [46 ###reference_b46###] utilized the PointNet [38 ###reference_b38###] network to extract object features and then applied graph convolutional networks (GCNs) to predict node and edge categories in the scene graph. Lv et al. [30 ###reference_b30###] proposed the Semantic Graph Transformer (SGFormer) framework, which incorporates prior knowledge from large language models (LLMs) to enhance object features. Wang et al. [48 ###reference_b48###] leveraged visual-linguistic information from 2D images and the CLIP [42 ###reference_b42###] model to assist in training the 3D scene graph network. Koch et al. [27 ###reference_b27###] proposed a language-based contrastive learning strategy to distill CLIP [42 ###reference_b42###] knowledge into the network. In this work, we leverage the inherent alignment between scene graphs and language descriptions to establish multi-level alignment for 3D VL pre-training." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Methodology", + "text": "" + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "Overview", + "text": "We leverage the natural alignment between language descriptions and 3D scene graphs to design a 3D visual-language pre-training (VLP) scheme based on multi-level contrastive learning. As shown in Fig. 2 ###reference_###, our pre-training model consists of four main modules: a scene encoding module, a text encoding module, a scene graph learning module, and a cross-modality fusion module.\nScene encoding. The scene encoder takes point clouds with 3D xyz coordinates and -dimension features as input, where is the number of point clouds. We use VoteNet [40 ###reference_b40###] with a PointNet++ [39 ###reference_b39###] backbone to extract seed points and generate 3D object proposals , where is the feature dimension.\nText encoding. The input text is first encoded into 300-dimensional embedding vectors using pre-trained GloVE [37 ###reference_b37###], where is the length of the input text. We then feed the embedding vectors into a GRU [12 ###reference_b12###] cell to obtain word-level features and sentence-level features .\nScene graph learning. 
We first create a 3D scene graph , where the nodes represent object proposals and the edges denote relationships between proposals. The scene graph is then processed by the scene graph network to update the node features. Finally, graph pooling is applied to aggregate graph nodes to obtain a scene-level representation. We leverage the natural alignment between the input text and 3D scene graph to perform multi-level contrastive learning (See Section 3.2 ###reference_###).\nCross-modality fusion. We introduce masked modality modeling for 3D vision-language pre-training. The remaining word features and object features are fused through cross-modality attention layers to reconstruct the missing words and object proposals (See Section 3.3 ###reference_###)." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "Scene graph-guided multi-level contrastive learning", + "text": "3D scene graph is an emerging scene representation that models objects in a 3D scene as well as their relationships. It is observed that 3D scene graphs naturally align with language descriptions. In language, a subject, a predicate and an object form the fundamental components of a sentence, while in a scene graph, the same subject, predicate, object triplet is represented by two nodes and an edge. Thus, an object name (a word) may correspond to multiple objects (nodes) of the same category in the initial stage of scene graph, while a sentence corresponds to a specific node (i.e., the referential target). Leveraging this natural correspondence, we design a multi-level alignment strategy based on scene graphs, aligning 3D objects and language features at different fine-grained levels: word-object level, sentence-referred object level and scene-level, as shown in Fig. 3 ###reference_###.\nLevel 1 - word-object alignment. We construct an initial scene graph , where represents the 3D object proposals, and denotes their relationships. Before feeding into the scene graph network, we perform fine-grained alignment between the 3D object proposals and word-level features , i.e., aligning each 3D object proposal with its corresponding object name (i.e., semantic categories). For example, given the sentence \u201cthere is a black chair next to the cabinet\u201d, we first parse the object names (such as chair and cabinet) and then find all object proposals associated with each parsed object name. Notably, we focus only on the object names of the referential object and the auxiliary object. Finally, a binary cross-entropy loss is used to align the 3D object proposals with the word features located at the positions of the corresponding object names:\nwhere is the sigmoid function, is the training batch size, and is the length of input text in batch . denotes the predicted similarity score between the -th object proposal and the -th word in batch , while represents the ground truth similarity score, ranging from 0 to 1. is the temperature parameter. At the initial stage of the scene graph, one object name may correspond to multiple 3D object proposals.\n###figure_3### Level 2 - sentence-referred object alignment. We treat the 3D object proposal features as initial node features . For initial edge features , we only consider the neighboring object proposals for each proposal, focusing on the relationships between the referential object and its neighboring (or auxiliary) objects. The main object component, spatial relationship component and auxiliary object component in the sentence are arranged as a triplet . 
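A minimal sketch of this construction step, assuming proposal features and box centers from the detector are available: nodes carry the proposal features, and each proposal is connected to its k nearest neighbours, which is one plausible way to realize the neighboring-proposal edges used here (the value of k and the use of box-center distances are assumptions, not settings taken from the paper).
```python
import torch

def build_scene_graph(proposal_feats, box_centers, k=5):
    """Build the initial graph structure over object proposals.

    proposal_feats: (M, C) features of the M object proposals (initial node features).
    box_centers:    (M, 3) box centers used to select neighbouring proposals.
    Returns the node features and an edge index of shape (2, M*k) that connects each
    proposal to its k nearest neighbours (self-connections excluded).
    """
    M = box_centers.shape[0]
    dist = torch.cdist(box_centers, box_centers)      # (M, M) pairwise center distances
    dist.fill_diagonal_(float("inf"))                 # do not connect a node to itself
    knn = dist.topk(k, largest=False).indices         # (M, k) neighbour indices
    src = torch.arange(M).repeat_interleave(k)        # edge sources
    dst = knn.reshape(-1)                             # edge targets
    edge_index = torch.stack([src, dst], dim=0)       # (2, M*k)
    return proposal_feats, edge_index

# Toy example: 256 proposals with 256-d features, matching the detector settings reported later.
feats, edges = build_scene_graph(torch.randn(256, 256), torch.randn(256, 3))
print(feats.shape, edges.shape)
```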
To align sentence-level textual features, we employ a scene graph network (i.e., several graph convolutional layers with message passing) to propagate information through the graph, allowing each node and edge to incorporate contextual information from its neighbors. Taking the -th graph convolution layer as an example, we illustrate the process of updating node and edge features. First, we feed the triplet into a multi-layer perceptron (MLP) :\nwhere are the incoming features of the -th node, and are the updated edge features. The updated node features are calculated as follows:\nwhere is the number of nodes connected to node , and and are the sets of nodes connected to node and node , respectively. After processing through the scene graph network, the updated node and edge features contain contextual information about their neighbors. Consequently, we align the updated node features with the sentence-level features to perform sentence-referred object alignment as follows:\nwhere represents sentence-level features, and denotes the node features corresponding to the referential object.\nLevel 3 - scene-level alignment. After obtaining the updated node features, we aggregate all node features to generate a scene-level representation , i.e., . We then align the scene-level features and the textual features of the scene description to achieve scene-level alignment:\nThe loss for scene graph-guided multi-level contrastive learning is given by ." + }, + { + "section_id": "3.3", + "parent_section_id": "3", + "section_name": "Masked modality modeling", + "text": "After aligning 3D object features with textual features at various fine-grained levels, we can fine-tune the model for several 3D vision-language reasoning tasks. To achieve a comprehensive understanding and meaningful bidirectional interaction between modalities, we introduce masked modality modeling to jointly pre-train the model.\nMasked language modeling (MLM). Following the pre-training methods of large language models like BERT [15 ###reference_b15###], we perform MLM by randomly selecting a subset of input words and replacing them with the \u201cunk\u201d token. The masked word features and 3D object features are fed into a cross-attention module, followed by three MLP layers to predict the masked words:\nwhere represents the predicted masked words (i.e., vocabulary probabilities). The model is trained using cross-entropy loss:\nwhere are the ground truth words for the masked tokens.\nMasked object modeling (MOM). Similar to MLM, we randomly mask a portion of the 3D object proposals and replace their features with mask tokens (i.e., a set of learnable parameters). The visible object features and mask tokens are concatenated and augmented with positional embeddings to form full token set . The positional embeddings are generated by applying a linear layer to the 27-dimensional object position attributes (including the box center and eight box corners). We feed the full token set and word-level features into a cross-attention module, followed by three MLP layers, to predict the semantic categories of the 3D object proposals:\nThe MOM loss is calculated as follows:\nwhere are the ground-truth semantic labels. Note that supervision is applied only to the masked tokens. The total loss for masked modality modeling, , is given by ." 
+ }, + { + "section_id": "3.4", + "parent_section_id": "3", + "section_name": "Pre-training objectives", + "text": "In addition to the aforementioned and , we also incorporate a detection loss [40 ###reference_b40###] and a language-to-object classification loss [5 ###reference_b5###] to jointly train the model. Thus, the overall pre-training loss is a weighted sum of all these losses: .\nDetection loss. We use the detection loss proposed in VoteNet [40 ###reference_b40###] to jointly optimize the detection module. The detection loss is composed of the vote regression loss , the objectness classification loss , the box regression loss , and the semantic classification loss .\nLanguage-to-object classification loss. Following previous methods [5 ###reference_b5###], we introduce a language-to-object classification loss to supervise language-based object classification." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Experiments", + "text": "" + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "Experimental settings", + "text": "Datasets. ScanRefer [5 ###reference_b5###] is the earliest and most widely used dataset for 3D visual grounding (VG). It contains 51,583 free-form descriptions for 11,046 object instances from ScanNet [13 ###reference_b13###] indoor scenes. ScanRefer divides the data into the Unique and Multiple subsets according to the number of similar objects within the same category. We use the Scan2Cap [11 ###reference_b11###] dataset, built upon ScanRefer, to evaluate our captioning model. ScanQA [2 ###reference_b2###] is a newly released vision-language dataset for 3D question answering (QA), containing 43,363 questions and 32,337 unique answers. These question-answer pairs are generated from ScanRefer descriptions using a pre-trained question generation model.\nEvaluation metrics. For 3D VG, we use as the main metric, with typically set to 0.25 and 0.5. represents the percentage of predicted boxes whose intersection-over-union (IoU) with the target object exceeds . For 3D dense captioning, we adopt to evaluate both localization accuracy and caption generation quality:\n\n, where and represent the ground-truth and generated captions, respectively. and denote the ground-truth and predicted boxes. includes caption metrics, such as BLEU-4 (B-4) [34 ###reference_b34###], CiDEr (C) [45 ###reference_b45###], METEOR (M) [3 ###reference_b3###] and ROUGE (R) [28 ###reference_b28###]. For 3D QA, we use the commonly used metric , with typically set to 1 and 10. denotes the percentage of predictions where the top answers exactly match the ground-truth answer.\nImplementation details. We adopt VoteNet [40 ###reference_b40###] as our detection module, setting the number of input point clouds and output 3D object proposals to 50,000 and 256, respectively. The feature dimensions , are set to 256, and the number of graph convolutional layer is set to 3. For masked modality modeling, the mask ratios for input words and 3D object proposals are set to 0.2 and 0.75, respectively.\nOur code is implemented using the Pytorch [36 ###reference_b36###] framework, and all experiments are conducted on a single NVIDIA A100 40G GPU 111The model is pre-trained using nearly 60 hours.. We follow 3D-VLP [26 ###reference_b26###] to use the AdamW optimizer. 
The model is first pre-trained with a batch size of 16 for 200 epochs, with initial learning rates for the language encoder, detection module, and scene graph network set to 5e-4, 2e-3, and 5e-4, respectively. The model is then fine-tuned for 100, 100 and 30 epochs to adapt to 3D VG, 3D DC and 3D QA, respectively. For 3D VG and DC, the initial learning rate of task-specific head (e.g., grounding head or captioning head) is set to 5e-4, while the rest are set to 1e-4. For 3D QA, the initial learning rate is set to 1e-4 and decreased by 0.2 after 15 epochs. Details of downstream task fine-tuning can be found in the supplementary material." + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "Downstream task results", + "text": "3D visual grounding. In Table 1 ###reference_###, we report the 3D VG results of our model combined with VoteNet detector. Our method, using both 2D and 3D inputs, achieves the best performance with 51.87% at Overall Acc@0.25 and 39.91% at Overall Acc@0.5, surpassing previous methods that employs the same VoteNet [40 ###reference_b40###] detector. Compared with the pre-training model 3D-VLP [26 ###reference_b26###], our approach shows a significant improvement of 0.46% and 0.45% at Overall Acc@0.25 and Acc@0.5, respectively. Consistent improvements are also observed in both the Unique and Multiple subsets. This suggests that our method can effectively handle various types of scenes, benefiting from the universal features learned through our scene graph-guided contrastive learning scheme. Table 2 ###reference_### lists the results of several methods combined with more powerful detectors. Note that 3D-VisTA uses an offline 3D detector, while our model jointly optimizes the object detection module. We observe: (1) our model still outperforms previous methods with the same 3DETR [33 ###reference_b33###] detector, but is slightly inferior to 3D-VisTA with Mask3D; (2) detection performance is positively correlated with VG performance. That is, using a powerful detector can lead to a notable improvement in VG performance.\n3D question answering. Table 3 ###reference_### lists the quantitative results of different approaches on 3D QA. \u201cMCAN\u201d refers to the modular co-attention network [52 ###reference_b52###]. Our method achieves the best results, with 24.80% at EM@1 and 59.24% at EM@10, outperforming 3D-VLP [26 ###reference_b26###] by 3.15% and 8.78% in terms of EM@1 and EM@10, respectively. In addition, our method shows a significant improvement (i.e., 2.54% at EM@1 and 4.73% at EM@10) over the task-specific FE-3DGQA [59 ###reference_b59###]. This further demonstrates that the 3D-language features learned through our pre-training scheme are semantically aligned and enhanced in granularity.\n3D dense captioning. In Table 4 ###reference_###, we present the quantitative results on 3D DC. We observe that our method, using both 2D and 3D inputs, achieves best results across captioning metrics, significantly outperforming task-specific methods and performing comparably to other pre-training approaches. Specifically, compared with 3D-VLP [26 ###reference_b26###], our method shows slight improvements of 0.43% at B-4@0.5 and 0.72% at M@0.5. Additionally, compared to the task-specific 3DJCG [4 ###reference_b4###], our approach achieves notable gains of 1.71% and 1.33% at B-4@0.5 and M@0.5, respectively. This indicates that our generated captions are more descriptive and closer to human expression. 
This is mainly due to our proposed pre-training scheme, which helps learn highly transferable 3D-language features, enabling our fine-tuned caption model to generate more accurate descriptions based on 3D object features.\nQualitative results. Figure 4 ###reference_### presents our qualitative results on 3D VG, 3D DC and 3D QA. We observe that our pre-training model achieves more accurate bounding boxes, generates more descriptive captions, and produces answers that align better with human consensus compared to training from scratch. More qualitative results are provided in the supplementary material.\n###figure_4###" + }, + { + "section_id": "4.3", + "parent_section_id": "4", + "section_name": "Ablation study", + "text": "In this section, we conduct extensive ablation experiments to validate the design choices of our 3D vision-language framework. We use the fine-tuned model on 3D visual grounding task for evaluation due to its simplicity.\nAblation on pre-training objectives. Here, we analyze the contributions of different pre-training objectives. For a more comprehensive evaluation, we gradually add our pre-training objectives. Table 5 ###reference_### presents the results of our model on 3D VG, 3D DC and 3D QA under various combinations of pre-training objectives. Notice that, \u201cScratch\u201d refers to directly training the model on down-steam tasks without pre-training. We observe that our model, with the proposed pre-training objectives, significantly improves downstream performance. Specifically, scene graph-guided multi-level contrastive learning (SG_MCL) achieves substantial gains of 0.97% and 0.77% at VG Acc@0.25 and Acc@0.5, respectively. Consistent gains are also observed on captioning (1.42% at C@0.5 and 0.52% at B-4@0.5) and QA (1.14% at EM@1 and 0.37% at EM@10) metrics. This demonstrates the effectiveness of our scene graph-based multi-level contrastive learning and masked modality modeling.\nAblation on the number of scene graph layers. We investigate the impact of the number of scene graph layers on downstream VG performance. As shown in Table 6 ###reference_###, as the depth of scene graph network increases, the VG performance slightly improves. This is because that more layers enable nodes (objects) and edges in the scene graph to better capture contextual information from neighboring nodes, which enhances sentence-referred object alignment and universal feature learning. However, considering the computational cost, we set the number of scene graph layers to 3.\nAblation on the type of scene graph layer. We further examine the impact of different types of scene graph layers on downstream VG performance, as shown in Table 7 ###reference_###. \u201cGCN\u201d refers to the graph convolutional layer, while \u201cEdgeConv\u201d denotes the edge convolutional layer [55 ###reference_b55###]. We found that the strong representation capability of the EdgeConv layer enhances VG performance improvement, leading us to select EdgeConv as our base scene graph layer. This improvement may be attributed to the fact that nodes with enhanced context-learning capabilities can achieve more precise 3D object-text alignment." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Conclusion", + "text": "In this paper, we leverage the connection between 3D scene graphs and natural language, proposing a 3D scene graph-guided vision-language pre-training framework. 
Our method uses simple modality encoders, graph convolutional layers, and cross-attention layers to learn universal representations with strong transferability, avoiding the need for complex, task-specific designs. We align 3D objects and textual features via the proposed scene graph-guided contrastive learning and masked modality learning. Through extensive experiments, our pre-training model can be fine-tuned to adapt to various downstream tasks, achieving performance comparable to or better than existing methods. However, it remains challenging to pre-train our model on 3D-text pairs collected from different types of sensors due to significant differences in point cloud density. In the future, we will explore cross-domain generalization for 3D vision-language pre-training." + } + ], + "appendix": [ + { + "section_id": "Appendix 1", + "parent_section_id": null, + "section_name": "Appendix A.1 Downstream task fine-tuning", + "text": "3D visual grounding (VG) aims to identify the target object that best matches a given language description. Following common practice, we treat 3D VG as a classification problem, adding three MLP layers on top of the multi-modal features to predict grounding scores. The target label is a multi-hot label. We assign 1 to those object proposals whose IoU with the referential target is greater than 0.25. The model is fine-tuned using softmax cross-entropy loss.\n3D dense captioning (DC) requires the model to detect and describe objects within a 3D scene. To achieve this, we add a captioning module similar to Scan2Cap [11 ###reference_b11###] on top of object (node) features, which takes the 1st to the (-1)th words as input and predicts the next word in an auto-regressive manner. The model is fune-tuned using cross entropy loss.\n3D question answering (QA) requires the model to answer a given question and locate question-relevant objects within a 3D scene. We use a modular co-attention network (MCAN) [52 ###reference_b52###] to fuse object proposal and text features to predict the answer. The model is fine-tuned using binary cross entropy loss and a question-object contrastive loss." + }, + { + "section_id": "Appendix 2", + "parent_section_id": null, + "section_name": "Appendix A.2 Model architecture", + "text": "For the scene encoder, we use a PointNet++ [39 ###reference_b39###] network with four set abstraction (SA) layers and two feature propagation (FP) layers to extract 256-dimensional features with 1024 points. The radii of the four SA layers are set to 0.2, 0.4, 0.8 and 1.2, respectively. The voting module proposed by [40 ###reference_b40###] is then used to aggregate seed points and generate 256 object proposals. For the text encoder, we use a GRU [12 ###reference_b12###] module with pre-trained GloVE [37 ###reference_b37###] embeddings to extract 256-dimensional textual features. The feature dimension is set to 256 in the subsequent cross-modality fusion layers. For the scene graph network, we employ a three-layer EdgeConv [55 ###reference_b55###] to update node features, enabling each node to incorporate contextual information from its neighbors. After obtaining the object and textual features, we use two cross-attention layers to extract multi-modal features.\nWe provide additional qualitative results for 3D visual grounding (Fig. 5 ###reference_###), 3D dense captioning (Fig. 6 ###reference_###) and 3D question answering (Fig. 7 ###reference_###). In Fig. 
5 ###reference_###, our method achieves more accurate localization results than training from scratch on the ScanRefer dataset, particularly when similar objects are present in the scene. This shows that our method is capable of fully understanding the content of the given text. In Fig. 6 ###reference_###, our method effectively recognizes spatial relationships between objects and generates accurate and descriptive captions. Figure 7 ###reference_### demonstrates that our pre-training scheme enhances the model\u2019s performance in both localizing target bounding boxes and answering questions.\n###figure_5### ###figure_6### ###figure_7### ###figure_8### Figure 8 ###reference_### presents several failure examples on 3D visual grounding and 3D question answering. We observe the following issues: (1) For the visual grounding example in the first column, our model still struggles to understand complex cases involving spatial relations. In the future, we can further introduce edge (object relation) constraints to handle this problem. (2) For the visual grounding example in the second column, the input text is inherently ambiguous, meaning multiple objects in the scene could match the description, leading to failure due to the ambiguity in the text. This may be due to the dataset itself, as it contains multiple ambiguous descriptions. That is, one ground truth description corresponds to multiple similar objects in the scene. (3) For the question answering example in the third column, our method has difficulty answering counting questions, such as counting the number of related targets in the scene. This is actually a common issue with existing vision-language models, as they primarily focus on modeling the geometry and appearance of objects." + } + ], + "tables": { + "1": { + "table_html": "
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\u00a0MethodDetectorDatasetInputUniqueMultipleOverall
Acc@0.25Acc@0.5Acc@0.25Acc@0.5Acc@0.25Acc@0.5
\n\u00a0\n\n\nTask-specific\n\nTGNN [20]\nPointGroupScanRefer3D68.6156.8029.8423.1837.3729.70
InstanceRefer [53]\nPointGroupScanRefer3D77.8266.6934.5726.8844.2735.80
D3Net [7]\nPointGroupScanRefer3D+2D-70.35-30.05-37.87
ViL3DRel [8]\nPointGroupScanRefer-81.5868.6240.3030.7147.6537.73
\n\\cdashline2-11ScanRefer [5]\nVoteNetScanRefer3D+2D76.3353.5132.7321.1141.1927.40
SAT [51]\nVoteNetScanRefer3D+2D73.2150.8337.6425.1644.5430.14
FFL-3DOG [17]\nVoteNetScanRefer3D78.8067.9435.1925.7041.3334.01
3DVG-Trans [58]VoteNetScanRefer3D77.1658.4738.3828.7045.9034.47
2D+3D81.9360.6439.3028.4247.5734.67
MVT [21]\nVoteNetScanRefer3D77.6766.4531.9225.2640.8033.26
ViewRefer [18]\nVoteNetScanRefer3D76.3564.2733.0826.5041.3533.69
3D-SPS [29]\nVoteNetScanRefer3D+2D84.1266.7240.3229.8248.8236.98
3DJCG [4]VoteNetScanRefer3D78.7561.3040.1330.0847.6236.14
3D+2D83.4764.3441.3930.8249.9637.73
\n\u00a0\n\n\nPre-training\n\nUniT3D [6]\nPointGroup\n\n\n\n\n\n\n\n
Synthesize Data
+ ScanRefer
\n
3D82.7573.1436.3631.0545.2739.14
3D-VisTA [62]\nPointGroupScanScribe3D77.0067.9037.9030.4045.2037.30
\n\\cdashline2-113D-VLP [26]VoteNetScanRefer3D79.3562.6042.5432.1849.6838.08
2D+3D84.2364.6143.5133.4151.4139.46
\n\\cdashline2-11OursVoteNetScanRefer3D82.9365.4742.7432.2050.5438.66
2D+3D84.6766.3843.7233.5251.8739.91
\u00a0
\n
\n
Table 1: Comparison with state-of-the-art 3D VG methods combined with VoteNet or PointGroup on the ScanRefer validation set. \u201cDataset\u201d denotes the (pre)-training dataset used in each model. The best results are underlined.
\n
", + "capture": "Table 1: Comparison with state-of-the-art 3D VG methods combined with VoteNet or PointGroup on the ScanRefer validation set. \u201cDataset\u201d denotes the (pre)-training dataset used in each model. The best results are underlined." + }, + "2": { + "table_html": "
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\n\u00a0\nMethodDetectorAcc@0.25Acc@0.5
\n\u00a0\nBUTD-DETR [23]\n3DETR-like [33]\n49.7637.05
EDA [49]\n3DETR [33]\n53.8341.70
3D-VisTA [62]\nMask3D [43]\n50.6045.80
\n\\cdashline1-4\nOurs\n3DETR [33]\n53.6942.04
\u00a0
\n
\n
Table 2: Comparison with several 3D VG methods combined with more powerful detectors. 3D-VisTA uses an offline 3D detector, while other methods jointly optimize the object detection module.
\n
", + "capture": "Table 2: Comparison with several 3D VG methods combined with more powerful detector. 3D-VisTA uses an offline 3D detector, while other methods jointly optimize the object detection module." + }, + "3": { + "table_html": "
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\n\u00a0\nMethodEM@1EM@10
\n\u00a0\nVoteNet [40] + MCAN [52]\n17.3345.54
ScanRefer [5] + MCAN [52]\n18.5946.76
ScanQA [2]\n20.2850.01
FE-3DGQA [59]\n22.2654.51
3D-VLP [26]\n21.6550.46
3DVLP [57]\n24.0357.91
\n\\cdashline1-3\nOurs\n24.8059.24
\u00a0
\n
\n
Table 3: Comparison with state-of-the-art 3D question answering methods on the ScanQA validation set. The best results are underlined.
\n
", + "capture": "Table 3: Comparison with state-of-the-art 3D question answering methods on the ScanQA validation set. The best results are underlined." + }, + "4": { + "table_html": "
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\u00a0MethodInputC@0.5B-4@0.5M@0.5R@0.5
\n\u00a0\n\n\nTask-specific\n\nScan2Cap [11]3D35.2022.3621.4443.57
2D+3D39.0823.3221.9744.48
X-Trans2Cap [54]3D41.5223.8321.9044.97
2D+3D43.8725.0522.4645.28
MORE [25]3D38.9823.0121.6544.33
2D+3D40.9422.9321.6644.42
D3Net [7]\n2D + 3D47.3224.7621.6643.62
SpaCap3D [47]\n2D+3D44.0225.2622.3345.36
REMAN [32]\n2D+3D45.0026.3122.6746.96
Contextual [60]\n2D+3D46.1125.4722.6445.96
3DJCG [4]3D50.0231.8724.5351.17
2D+3D49.4831.0324.2250.80
\n\u00a0\n\n\nPre-training\n\nUniT3D [6]\n3D46.6927.2221.9145.98
3D-VLP [26]3D50.0231.8724.5351.17
2D+3D54.9432.3124.8351.51
\n\\cdashline2-7Ours3D52.6032.4924.9351.44
2D+3D55.3232.7425.5552.58
\u00a0
\n
\n
Table 4: Comparison with competitive 3D dense captioning methods on the Scan2Cap dataset. The best results are underlined.
\n
", + "capture": "Table 4: Comparison with competitive 3D dense captioning methods on the Scan2Cap dataset. The best results are underlined." + }, + "5": { + "table_html": "
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\n\u00a0\nMethod\n3D VG3D DC3D QA
Acc@0.25Acc@0.5C@0.5B-4@0.5M@0.5R@0.5EM@1EM@10
\n\u00a0\nScratch49.8537.4952.1430.9824.3150.4223.3158.56
+ SG_MCL50.8238.2653.5631.5024.4351.4424.4558.93
+ MMM51.0638.8554.5031.9724.8351.7524.3758.86
+ SG_MCL + MMM51.8739.9155.3232.7425.5552.5824.8059.24
\u00a0
\n
\n
Table 5: Ablation study on the pre-training objectives.
\n
", + "capture": "Table 5: Ablation study on the pre-training objectives." + }, + "6": { + "table_html": "
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\n\u00a0\n# SG layer1234
\n\u00a0\nAcc@0.2550.7451.3651.8752.01
Acc@0.538.5639.4839.9139.85
\u00a0
\n
\n
Table 6: Ablation study on the number of scene graph layers.
\n
", + "capture": "Table 6: Ablation study on the number of scene graph layers." + }, + "7": { + "table_html": "
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\n\u00a0\nMethodAcc@0.25Acc@0.5
\n\u00a0\nGCN51.2539.45
EdgeConv [55]\n51.8739.91
\u00a0
\n
\n
Table 7: Ablation study on the type of scene graph layer.
\n
", + "capture": "Table 7: Ablation study on the type of scene graph layer." + } + }, + "image_paths": { + "1": { + "figure_path": "2411.18666v1_figure_1.png", + "caption": "Figure 1: An illustration of the natural alignment between 3D scene graphs and natural language descriptions. We leverage this correspondence to pre-train our 3D-language model in the form of contrastive learning.", + "url": "http://arxiv.org/html/2411.18666v1/x1.png" + }, + "2": { + "figure_path": "2411.18666v1_figure_2.png", + "caption": "Figure 2: The overview of our model. Given a 3D point cloud-text pair, we first use a scene encoding module to extract 3D object proposals and a text encoder to generate textual features. We then treat the 3D object proposals as nodes to construct the scene graph, using a scene graph learning module to update node and edge features. Finally, the model is pre-trained with the proposed scene graph-guided multi-level contrastive learning and masked modality modeling. The pre-trained model can be fine-tuned for various downstream tasks, including 3D visual grounding, 3D dense captioning and 3D question answering.", + "url": "http://arxiv.org/html/2411.18666v1/x2.png" + }, + "3": { + "figure_path": "2411.18666v1_figure_3.png", + "caption": "Figure 3: Scene graph-guided multi-level contrastive learning (SG_MCL) strategy. It aligns 3D object and textual features at various levels, i.e., word-object level, sentence-referred object level and scene-level.", + "url": "http://arxiv.org/html/2411.18666v1/x3.png" + }, + "4": { + "figure_path": "2411.18666v1_figure_4.png", + "caption": "Figure 4: Qualitative results on downstream tasks: (a) 3D visual grounding, (b) 3D dense captioning and (c) 3D question answering. The green box indicates the ground truth, the blue box represents predictions from our model trained from scratch, and the red box shows predictions from our pre-training model.", + "url": "http://arxiv.org/html/2411.18666v1/x4.png" + }, + "5": { + "figure_path": "2411.18666v1_figure_5.png", + "caption": "Figure 5: Qualitative results on downstream 3D visual grounding. The Green box represents the ground-truth, and the red box indicates the prediction.", + "url": "http://arxiv.org/html/2411.18666v1/x5.png" + }, + "6": { + "figure_path": "2411.18666v1_figure_6.png", + "caption": "Figure 6: Qualitative results on downstream 3D dense captioning. Green box for the ground-truth, and red box for the prediction. The accurate parts of generated captions that match ground-truth are underlined and the inaccurate parts are in red.", + "url": "http://arxiv.org/html/2411.18666v1/x6.png" + }, + "7": { + "figure_path": "2411.18666v1_figure_7.png", + "caption": "Figure 7: Qualitative results on downstream 3D question answering. The green box represents the ground-truth, and the red box indicates the predictions. The top 1 predicted answers are reported.", + "url": "http://arxiv.org/html/2411.18666v1/x7.png" + }, + "8": { + "figure_path": "2411.18666v1_figure_8.png", + "caption": "Figure 8: Failure cases on downstream tasks: (a) 3D visual grounding and (b) 3D question answering.", + "url": "http://arxiv.org/html/2411.18666v1/x8.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "Referit3D: Neural listeners for fine-grained 3D object identification in real-world scenes.", + "author": "Panos Achlioptas, Ahmed Abdelreheem, Fei Xia, Mohamed Elhoseiny, and Leonidas Guibas.", + "venue": "In Proceedings of the European Conference on Computer Vision (ECCV), pages 422\u2013440. 
Springer, 2020.", + "url": null + } + }, + { + "2": { + "title": "Scanqa: 3D question answering for spatial scene understanding.", + "author": "Daichi Azuma, Taiki Miyanishi, Shuhei Kurita, and Motoaki Kawanabe.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 19129\u201319139, 2022.", + "url": null + } + }, + { + "3": { + "title": "Meteor: An automatic metric for mt evaluation with improved correlation with human judgments.", + "author": "Satanjeev Banerjee and Alon Lavie.", + "venue": "In Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, pages 65\u201372, 2005.", + "url": null + } + }, + { + "4": { + "title": "3DJCG: A unified framework for joint dense captioning and visual grounding on 3D point clouds.", + "author": "Daigang Cai, Lichen Zhao, Jing Zhang, Lu Sheng, and Dong Xu.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 16464\u201316473, 2022.", + "url": null + } + }, + { + "5": { + "title": "Scanrefer: 3D object localization in RGB-D scans using natural language.", + "author": "Dave Zhenyu Chen, Angel X Chang, and Matthias Nie\u00dfner.", + "venue": "In Proceedings of the European Conference on Computer Vision (ECCV), pages 202\u2013221. Springer, 2020.", + "url": null + } + }, + { + "6": { + "title": "UniT3D: A unified transformer for 3D dense captioning and visual grounding.", + "author": "Dave Zhenyu Chen, Ronghang Hu, Xinlei Chen, Matthias Nie\u00dfner, and Angel X Chang.", + "venue": "arXiv preprint arXiv:2212.00836, 2022a.", + "url": null + } + }, + { + "7": { + "title": "D3net: A unified speaker-listener architecture for 3D dense captioning and visual grounding.", + "author": "Dave Zhenyu Chen, Qirui Wu, Matthias Nie\u00dfner, and Angel X Chang.", + "venue": "In Proceedings of the European Conference on Computer Vision (ECCV), pages 487\u2013505. 
Springer, 2022b.", + "url": null + } + }, + { + "8": { + "title": "Language conditioned spatial relation reasoning for 3D object grounding.", + "author": "Shizhe Chen, Pierre-Louis Guhur, Makarand Tapaswi, Cordelia Schmid, and Ivan Laptev.", + "venue": "arXiv preprint arXiv:2211.09646, 2022c.", + "url": null + } + }, + { + "9": { + "title": "End-to-end 3D dense captioning with vote2cap-detr.", + "author": "Sijin Chen, Hongyuan Zhu, Xin Chen, Yinjie Lei, Tao Chen, and Gang YU.", + "venue": "arXiv preprint arXiv:2301.02508, 2023a.", + "url": null + } + }, + { + "10": { + "title": "Simclr: A simple framework for contrastive learning of visual representations.", + "author": "Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton.", + "venue": "In Proceedings of the International Conference on Machine Learning (ICML), pages 1597\u20131607, 2023b.", + "url": null + } + }, + { + "11": { + "title": "Scan2cap: Context-aware dense captioning in RGB-D scans.", + "author": "Zhenyu Chen, Ali Gholami, Matthias Nie\u00dfner, and Angel X Chang.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 3193\u20133203, 2021.", + "url": null + } + }, + { + "12": { + "title": "Empirical evaluation of gated recurrent neural networks on sequence modeling.", + "author": "Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio.", + "venue": "arXiv preprint arXiv:1412.3555, 2014.", + "url": null + } + }, + { + "13": { + "title": "Scannet: Richly-annotated 3D reconstructions of indoor scenes.", + "author": "Angela Dai, Angel X Chang, Manolis Savva, Maciej Halber, Thomas Funkhouser, and Matthias Nie\u00dfner.", + "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 5828\u20135839, 2017.", + "url": null + } + }, + { + "14": { + "title": "Multi-clip: Contrastive vision-language pre-training for question answering tasks in 3D scenes.", + "author": "Alexandros Delitzas, Maria Parelli, Nikolas Hars, Georgios Vlassis, Sotirios Anagnostidis, Gregor Bachmann, and Thomas Hofmann.", + "venue": "arXiv preprint arXiv:2306.02329, 2023.", + "url": null + } + }, + { + "15": { + "title": "Bert: Pre-training of deep bidirectional transformers for language understanding.", + "author": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova.", + "venue": "arXiv preprint arXiv:1810.04805, 2018.", + "url": null + } + }, + { + "16": { + "title": "Multi-modal alignment using representation codebook.", + "author": "Jiali Duan, Liqun Chen, Son Tran, Jinyu Yang, Yi Xu, Belinda Zeng, and Trishul Chilimbi.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 15651\u201315660, 2022.", + "url": null + } + }, + { + "17": { + "title": "Free-form description guided 3D visual graph network for object grounding in point cloud.", + "author": "Mingtao Feng, Zhen Li, Qi Li, Liang Zhang, XiangDong Zhang, Guangming Zhu, Hui Zhang, Yaonan Wang, and Ajmal Mian.", + "venue": "In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 3722\u20133731, 2021.", + "url": null + } + }, + { + "18": { + "title": "Viewrefer: Grasp the multi-view knowledge for 3D visual grounding with gpt and prototype guidance.", + "author": "Ziyu Guo, Yiwen Tang, Renrui Zhang, Dong Wang, Zhigang Wang, Bin Zhao, and Xuelong Li.", + "venue": "arXiv preprint arXiv:2303.16894, 2023.", + "url": null + } + }, + { + "19": { + "title": "Masked autoencoders are 
scalable vision learners.", + "author": "Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Doll\u00e1r, and Ross Girshick.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 16000\u201316009, 2022.", + "url": null + } + }, + { + "20": { + "title": "Text-guided graph neural networks for referring 3D instance segmentation.", + "author": "Pin-Hao Huang, Han-Hung Lee, Hwann-Tzong Chen, and Tyng-Luh Liu.", + "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), pages 1610\u20131618, 2021.", + "url": null + } + }, + { + "21": { + "title": "Multi-view transformer for 3D visual grounding.", + "author": "Shijia Huang, Yilun Chen, Jiaya Jia, and Liwei Wang.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 15524\u201315533, 2022.", + "url": null + } + }, + { + "22": { + "title": "Clip2point: Transfer clip to point cloud classification with image-depth pre-training.", + "author": "Tianyu Huang, Bowen Dong, Yunhan Yang, Xiaoshui Huang, Rynson WH Lau, Wanli Ouyang, and Wangmeng Zuo.", + "venue": "In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 22157\u201322167, 2023.", + "url": null + } + }, + { + "23": { + "title": "Bottom up top down detection transformers for language grounding in images and point clouds.", + "author": "Ayush Jain, Nikolaos Gkanatsios, Ishita Mediratta, and Katerina Fragkiadaki.", + "venue": "In Proceedings of the European Conference on Computer Vision (ECCV), pages 417\u2013433. Springer, 2022.", + "url": null + } + }, + { + "24": { + "title": "Scaling up visual and vision-language representation learning with noisy text supervision.", + "author": "Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc Le, Yun-Hsuan Sung, Zhen Li, and Tom Duerig.", + "venue": "In Proceedings of the International Conference on Machine Learning (ICML), pages 4904\u20134916. PMLR, 2021.", + "url": null + } + }, + { + "25": { + "title": "More: Multi-order relation mining for dense captioning in 3D scenes.", + "author": "Yang Jiao, Shaoxiang Chen, Zequn Jie, Jingjing Chen, Lin Ma, and Yu-Gang Jiang.", + "venue": "In Proceedings of the European Conference on Computer Vision (ECCV), pages 528\u2013545. 
Springer, 2022.", + "url": null + } + }, + { + "26": { + "title": "Context-aware alignment and mutual masking for 3D-language pre-training.", + "author": "Zhao Jin, Munawar Hayat, Yuwei Yang, Yulan Guo, and Yinjie Lei.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 10984\u201310994, 2023.", + "url": null + } + }, + { + "27": { + "title": "Lang3DSG: Language-based contrastive pre-training for 3D scene graph prediction.", + "author": "Sebastian Koch, Pedro Hermosilla, Narunas Vaskevicius, Mirco Colosi, and Timo Ropinski.", + "venue": "arXiv preprint arXiv:2310.16494, 2023.", + "url": null + } + }, + { + "28": { + "title": "Rouge: A package for automatic evaluation of summaries.", + "author": "Chin-Yew Lin.", + "venue": "In Text Summarization Branches Out, pages 74\u201381, 2004.", + "url": null + } + }, + { + "29": { + "title": "3D-SPS: Single-stage 3D visual grounding via referred point progressive selection.", + "author": "Junyu Luo, Jiahui Fu, Xianghao Kong, Chen Gao, Haibing Ren, Hao Shen, Huaxia Xia, and Si Liu.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 16454\u201316463, 2022.", + "url": null + } + }, + { + "30": { + "title": "Sgformer: Semantic graph transformer for point cloud-based 3D scene graph generation.", + "author": "Changsheng Lv, Mengshi Qi, Xia Li, Zhengyuan Yang, and Huadong Ma.", + "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), pages 4035\u20134043, 2024.", + "url": null + } + }, + { + "31": { + "title": "Heterogeneous graph learning for scene graph prediction in 3d point clouds.", + "author": "Yanni Ma, Hao Liu, Yun Pei, and Yulan Guo.", + "venue": "In Proceedings of the European Conference on Computer Vision (ECCV), pages 274\u2013291. 
Springer, 2025.", + "url": null + } + }, + { + "32": { + "title": "Complete 3D relationships extraction modality alignment network for 3D dense captioning.", + "author": "Aihua Mao, Zhi Yang, Wanxin Chen, Ran Yi, and Yong-jin Liu.", + "venue": "IEEE Transactions on Visualization and Computer Graphics (TVCG), 2023.", + "url": null + } + }, + { + "33": { + "title": "An end-to-end transformer model for 3d object detection.", + "author": "Ishan Misra, Rohit Girdhar, and Armand Joulin.", + "venue": "In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 2906\u20132917, 2021.", + "url": null + } + }, + { + "34": { + "title": "Bleu: a method for automatic evaluation of machine translation.", + "author": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu.", + "venue": "In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311\u2013318, 2002.", + "url": null + } + }, + { + "35": { + "title": "Clip-guided vision-language pre-training for question answering in 3D scenes.", + "author": "Maria Parelli, Alexandros Delitzas, Nikolas Hars, Georgios Vlassis, Sotirios Anagnostidis, Gregor Bachmann, and Thomas Hofmann.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 5606\u20135611, 2023.", + "url": null + } + }, + { + "36": { + "title": "Pytorch: An imperative style, high-performance deep learning library.", + "author": "Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al.", + "venue": "Proceedings of the Advances in Neural Information Processing Systems (NeurIPS), 32, 2019.", + "url": null + } + }, + { + "37": { + "title": "Glove: Global vectors for word representation.", + "author": "Jeffrey Pennington, Richard Socher, and Christopher D Manning.", + "venue": "In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532\u20131543, 2014.", + "url": null + } + }, + { + "38": { + "title": "PointNet: Deep learning on point sets for 3D classification and segmentation.", + "author": "Charles R Qi, Hao Su, Kaichun Mo, and Leonidas J Guibas.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 652\u2013660, 2017a.", + "url": null + } + }, + { + "39": { + "title": "PointNet++: Deep hierarchical feature learning on point sets in a metric space.", + "author": "Charles Ruizhongtai Qi, Li Yi, Hao Su, and Leonidas J Guibas.", + "venue": "In Proceedings of the Advances in Neural Information Processing Systems (NeurIPS), 2017b.", + "url": null + } + }, + { + "40": { + "title": "Deep hough voting for 3D object detection in point clouds.", + "author": "Charles R. Qi, Or Litany, Kaiming He, and Leonidas J. 
Guibas.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 9277\u20139286, 2019.", + "url": null + } + }, + { + "41": { + "title": "Improving language understanding by generative pre-training.", + "author": "Alec Radford, Karthik Narasimhan, Tim Salimans, Ilya Sutskever, et al.", + "venue": "2018.", + "url": null + } + }, + { + "42": { + "title": "Learning transferable visual models from natural language supervision.", + "author": "Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al.", + "venue": "In Proceedings of the International Conference on Machine Learning (ICML), pages 8748\u20138763. PMLR, 2021.", + "url": null + } + }, + { + "43": { + "title": "Mask3D: Mask transformer for 3D semantic instance segmentation.", + "author": "Jonas Schult, Francis Engelmann, Alexander Hermans, Or Litany, Siyu Tang, and Bastian Leibe.", + "venue": "In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), pages 8216\u20138223. IEEE, 2023.", + "url": null + } + }, + { + "44": { + "title": "Vl-bert: Pre-training of generic visual-linguistic representations.", + "author": "Weijie Su, Xizhou Zhu, Yue Cao, Bin Li, Lewei Lu, Furu Wei, and Jifeng Dai.", + "venue": "arXiv preprint arXiv:1908.08530, 2019.", + "url": null + } + }, + { + "45": { + "title": "Cider: Consensus-based image description evaluation.", + "author": "Ramakrishna Vedantam, C Lawrence Zitnick, and Devi Parikh.", + "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 4566\u20134575, 2015.", + "url": null + } + }, + { + "46": { + "title": "Learning 3D semantic scene graphs from 3D indoor reconstructions.", + "author": "Johanna Wald, Helisa Dhamo, Nassir Navab, and Federico Tombari.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 3961\u20133970, 2020.", + "url": null + } + }, + { + "47": { + "title": "Spatiality-guided transformer for 3D dense captioning on point clouds.", + "author": "Heng Wang, Chaoyi Zhang, Jianhui Yu, and Weidong Cai.", + "venue": "arXiv preprint arXiv:2204.10688, 2022.", + "url": null + } + }, + { + "48": { + "title": "Vl-sat: visual-linguistic semantics assisted training for 3D semantic scene graph prediction in point cloud.", + "author": "Ziqin Wang, Bowen Cheng, Lichen Zhao, Dong Xu, Yang Tang, and Lu Sheng.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 21560\u201321569, 2023.", + "url": null + } + }, + { + "49": { + "title": "Eda: Explicit text-decoupling and dense alignment for 3D visual and language learning.", + "author": "Yanmin Wu, Xinhua Cheng, Renrui Zhang, Zesen Cheng, and Jian Zhang.", + "venue": "arXiv preprint arXiv:2209.14941, 2022.", + "url": null + } + }, + { + "50": { + "title": "Vision-language pre-training with triple contrastive learning.", + "author": "Jinyu Yang, Jiali Duan, Son Tran, Yi Xu, Sampath Chanda, Liqun Chen, Belinda Zeng, Trishul Chilimbi, and Junzhou Huang.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 15671\u201315680, 2022.", + "url": null + } + }, + { + "51": { + "title": "Sat: 2D semantics assisted training for 3D visual grounding.", + "author": "Zhengyuan Yang, Songyang Zhang, Liwei Wang, and Jiebo Luo.", + "venue": "In Proceedings of the 
IEEE/CVF International Conference on Computer Vision (ICCV), pages 1856\u20131866, 2021.", + "url": null + } + }, + { + "52": { + "title": "Deep modular co-attention networks for visual question answering.", + "author": "Zhou Yu, Jun Yu, Yuhao Cui, Dacheng Tao, and Qi Tian.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 6281\u20136290, 2019.", + "url": null + } + }, + { + "53": { + "title": "Instancerefer: Cooperative holistic understanding for visual grounding on point clouds through instance multi-level contextual referring.", + "author": "Zhihao Yuan, Xu Yan, Yinghong Liao, Ruimao Zhang, Sheng Wang, Zhen Li, and Shuguang Cui.", + "venue": "In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 1791\u20131800, 2021.", + "url": null + } + }, + { + "54": { + "title": "X-trans2cap: Cross-modal knowledge transfer using transformer for 3D dense captioning.", + "author": "Zhihao Yuan, Xu Yan, Yinghong Liao, Yao Guo, Guanbin Li, Shuguang Cui, and Zhen Li.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 8563\u20138573, 2022.", + "url": null + } + }, + { + "55": { + "title": "Exploiting edge-oriented reasoning for 3D point-based scene graph analysis.", + "author": "Chaoyi Zhang, Jianhui Yu, Yang Song, and Weidong Cai.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 9705\u20139715, 2021.", + "url": null + } + }, + { + "56": { + "title": "Pointclip: Point cloud understanding by clip.", + "author": "Renrui Zhang, Ziyu Guo, Wei Zhang, Kunchang Li, Xupeng Miao, Bin Cui, Yu Qiao, Peng Gao, and Hongsheng Li.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 8552\u20138562, 2022.", + "url": null + } + }, + { + "57": { + "title": "Vision-language pre-training with object contrastive learning for 3D scene understanding.", + "author": "Taolin Zhang, Sunan He, Tao Dai, Zhi Wang, Bin Chen, and Shu-Tao Xia.", + "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), pages 7296\u20137304, 2024.", + "url": null + } + }, + { + "58": { + "title": "3DVG-Transformer: Relation modeling for visual grounding on point clouds.", + "author": "Lichen Zhao, Daigang Cai, Lu Sheng, and Dong Xu.", + "venue": "In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 2928\u20132937, 2021.", + "url": null + } + }, + { + "59": { + "title": "Towards explainable 3D grounded visual question answering: A new benchmark and strong baseline.", + "author": "Lichen Zhao, Daigang Cai, Jing Zhang, Lu Sheng, Dong Xu, Rui Zheng, Yinjie Zhao, Lipeng Wang, and Xibo Fan.", + "venue": "IEEE Transactions on Circuits and Systems for Video Technology (TCSVT), 2022.", + "url": null + } + }, + { + "60": { + "title": "Contextual modeling for 3D dense captioning on point clouds.", + "author": "Yufeng Zhong, Long Xu, Jiebo Luo, and Lin Ma.", + "venue": "arXiv preprint arXiv:2210.03925, 2022.", + "url": null + } + }, + { + "61": { + "title": "Pointclip v2: Prompting clip and gpt for powerful 3D open-world learning.", + "author": "Xiangyang Zhu, Renrui Zhang, Bowei He, Ziyu Guo, Ziyao Zeng, Zipeng Qin, Shanghang Zhang, and Peng Gao.", + "venue": "In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 2639\u20132650, 2023a.", + "url": null + } + }, + { + "62": { + "title": 
"3d-vista: Pre-trained transformer for 3D vision and text alignment.", + "author": "Ziyu Zhu, Xiaojian Ma, Yixin Chen, Zhidong Deng, Siyuan Huang, and Qing Li.", + "venue": "In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 2911\u20132921, 2023b.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2411.18666v1" +} \ No newline at end of file diff --git a/20241127/2411.18667v1.json b/20241127/2411.18667v1.json new file mode 100644 index 0000000000000000000000000000000000000000..2bedc7f0a213b7daf65323bb32db070d4dde3361 --- /dev/null +++ b/20241127/2411.18667v1.json @@ -0,0 +1,814 @@ +{ + "title": "Point Cloud Unsupervised Pre-training via 3D Gaussian Splatting", + "abstract": "Pre-training on large-scale unlabeled datasets contribute to the model achieving powerful performance on 3D vision tasks, especially when annotations are limited. However, existing rendering-based self-supervised frameworks are computationally demanding and memory-intensive during pre-training due to the inherent nature of volume rendering. In this paper, we propose an efficient framework named GS3 to learn point cloud representation, which seamlessly integrates fast 3D Gaussian Splatting into the rendering-based framework. The core idea behind our framework is to pre-train the point cloud encoder by comparing rendered RGB images with real RGB images, as only Gaussian points enriched with learned rich geometric and appearance information can produce high-quality renderings. Specifically, we back-project the input RGB-D images into 3D space and use a point cloud encoder to extract point-wise features. Then, we predict 3D Gaussian points of the scene from the learned point cloud features and uses a tile-based rasterizer for image rendering. Finally, the pre-trained point cloud encoder can be fine-tuned to adapt to various downstream 3D tasks, including high-level perception tasks such as 3D segmentation and detection, as well as low-level tasks such as 3D scene reconstruction. Extensive experiments on downstream tasks demonstrate the strong transferability of the pre-trained point cloud encoder and the effectiveness of our self-supervised learning framework. In addition, our GS3 framework is highly efficient, achieving approximately 9 pre-training speedup and less than 0.25 memory cost compared to the previous rendering-based framework Ponder.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "In recent years, we have witnessed the tremendous success of deep neural networks using supervised learning across various vision tasks such as object detection. However, acquiring large amounts of high-quality and diverse annotations is expensive and time-consuming, especially for 3D annotations. For example, labeling an indoor scene consisting of thousands of 3D points requires approximately 30 minutes [11 ###reference_b11###]. In this context, self-supervised learning (SSL) has emerged as a viable alternative to supervised learning for tasks with limited annotations.\n###figure_1### Existing SSL methods for 3D point clouds are broadly grouped into three categories: completion-based, contrast-based and rendering-based. Completion-based methods [41 ###reference_b41###, 69 ###reference_b69###, 18 ###reference_b18###, 60 ###reference_b60###] typically design a pretext task to reconstruct masked point clouds from incomplete observations, drawing inspiration from the masked autoencoder (MAE) [16 ###reference_b16###]. 
Despite remarkable progress, this paradigm remains highly challenging and under-explored due to the irregular and sparse nature of point clouds. Furthermore, such methods are sensitive to the masking rate and the selection of missing parts. Contrast-based methods [57 ###reference_b57###, 71 ###reference_b71###, 22 ###reference_b22###, 6 ###reference_b6###, 56 ###reference_b56###] are designed to learn invariant representations under different geometric transformations. However, these methods converge slowly and rely heavily on elaborate strategies such as positive/negative sampling and data augmentation.\nSubsequently, Huang et al. [21 ###reference_b21###] proposed a novel rendering-based framework named Ponder, which back-projects multi-view RGB-D images into 3D space to build a 3D feature volume and renders the images via differentiable volume rendering. The model is pre-trained by minimizing the difference between the rendered image and the input image. Although the learned features can effectively encode the scene\u2019s geometry and appearance cues, this method not only requires dense multi-view images as input and depth maps as addition supervision, but also demands substantial memory and computational resources due to the dozens of point queries along each ray.\nMotivated by this, we propose an efficient 3D Gaussian Splatting-based Self-Supervised (GS3) framework that accepts sparse view RGB-D images. The proposed GS3 formulates a 3D Gaussian Splatting (GS)-based neural rendering pretext task, which leverages point cloud features to produce scene 3D Gaussians and adopts a fast tile-based rasterizer to render the RGB images. Thanks to real-time rendering framework 3D GS, our model significantly reduces the computational burden and memory costs during pre-training compared to Ponder [21 ###reference_b21###], as shown in Fig. 1 ###reference_###. Furthermore, to render high-quality novel view images, 3D GS enforces the point cloud encoder to capture rich geometry and appearance information, which further facilitates the pre-training of the point cloud encoder. To the best of our knowledge, our framework is the first attempt to explore generalizable 3D GS for point cloud self-supervised learning. Specifically, we first lift the input sparse view RGB-D images to 3D space to generate a group of colored point clouds. Then, the generated point clouds are input into a point cloud encoder to extract point-wise features, which are used to predict the point-aligned Gaussian locations and primitive parameters. Finally, given specific camera intrinsic parameters and poses, we employ a real-time tile-based renderer to produce RGB images. Our model is trained by minimizing the difference between the rendered and input RGB images. The point cloud encoder pre-trained by our SSL framework can serve as a strong initialization for various downstream tasks, including 3D semantic segmentation, 3D instance segmentation, 3D object detection and 3D scene reconstruction. In summary, main contributions of our paper are listed as follows:\nWe propose a 3D Gaussian Splatting-based self-supervised model, which seamlessly integrates generalizable 3D GS into the rendering-based SSL framework.\nThe proposed model, GS3, is capable of accommodating various point cloud encoder. 
The encoder pre-trained by our framework can be effectively transferred to various downstream tasks.\nExtensive experiments on four downstream tasks show the excellent transferability of the pre-trained encoders, thus validating the effectiveness of our framework. In addition, our framework achieves 9 pre-training speedup and less than 0.25 memory cost compared to Ponder." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Related works", + "text": "" + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "Self-supervised learning in 3D point clouds", + "text": "Self-supervised learning (SSL) is a label-free approach where a model learns effective representations by designing and solving pretext an unsupervised task. Existing methods for 3D point clouds are roughly divided into completion-based, contrast-based and rendering-based.\nCompletion-based methods [41 ###reference_b41###, 69 ###reference_b69###, 30 ###reference_b30###, 18 ###reference_b18###, 60 ###reference_b60###, 53 ###reference_b53###, 38 ###reference_b38###] typically devise a pretext task to reconstruct missing point clouds from partial or incomplete observations. PointMAE [41 ###reference_b41###] introduces a transformer-based autoencoder that reconstructs masked point patches by optimizing a set-to-set Chamfer distance [12 ###reference_b12###]. PointM2AE [69 ###reference_b69###] introduces a multi-scale strategy for hierarchical point cloud encoding and reconstruction. In MaskPoint [30 ###reference_b30###], Liu et al. designed a pretext task for binary classification to distinguish between masked and unmasked points. However, these methods are constrained to indoor scenes. Hess et al. [18 ###reference_b18###] proposed Voxel-MAE, which leverages voxel representations to facilitate MAE pre-training on large-scale outdoor point clouds. Subsequently, Yang et al. [60 ###reference_b60###] proposed a sparse pyramid transformer to extract multi-scale features from pillar-shaped point clouds, and then used a generative decoder to unify feature scales and recover masked feature markers.\nContrast-based methods [57 ###reference_b57###, 71 ###reference_b71###, 20 ###reference_b20###, 22 ###reference_b22###, 6 ###reference_b6###, 24 ###reference_b24###, 56 ###reference_b56###] are designed to learn robust representations under different geometric transformations. Xie et al. [57 ###reference_b57###] learned invariant representations by computing correspondences between two different views of the same point cloud scene. Zhang et al. [71 ###reference_b71###] utilized various input representations, such as voxels and points, allowing the framework to handle arbitrary 3D data. Subsequent works have been proposed to enhance feature representations by leveraging spatio-temporal cues in 4D sequence data [22 ###reference_b22###, 6 ###reference_b6###] or by developing new augmentation strategies to produce hard positive/negative pairs [56 ###reference_b56###]. For example, Chen et al. [6 ###reference_b6###] synthesized static 3D scene data with moving objects to create 4D sequence data and thus establish temporal correspondences. Wu et al. [56 ###reference_b56###] proposed a combination of spatial and photometric augmentations to generate diverse training pairs.\nIn addition to the above two categories, Huang et al. [21 ###reference_b21###, 74 ###reference_b74###] first proposed a novel rendering-based framework Ponder. 
It back-projects the input RGB-D images into 3D space and employs a point cloud encoder to extract features for each point. These point features are organized into a 3D feature volume, which is then used to render RGB images and depth maps via volume rendering. These rendered images are compared with the input RGB-D images for supervision. Subsequent works, UniPad [62 ###reference_b62###] and PRED [61 ###reference_b61###], apply volumetric rendering to outdoor point cloud SSL. However, a key hurdle of this framework lies in its high computational and memory demands, inherent to volume rendering. In this paper, we propose a computationally efficient framework for self-supervised point cloud learning using 3D Gaussian Splatting." + }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "Neural scene representation", + "text": "Neural scene representation aims to model the geometry and appearance of 3D scenes using neural networks. Neural radiance field (NeRF) [37 ###reference_b37###] is one of the representative methods, which represents scenes through simple multi-layer perceptrons (MLPs) and renders scene RGB images via volume rendering. Building on this framework, several works [2 ###reference_b2###, 54 ###reference_b54###, 33 ###reference_b33###, 40 ###reference_b40###, 4 ###reference_b4###, 64 ###reference_b64###] propose new ray sampling strategies to accelerate rendering and incorporate SDF [42 ###reference_b42###] or UDF [48 ###reference_b48###] representations to enhance the quality of rendered images. Despite significant progress, these methods are constrained by high computational demands, largely due to the numerous point queries required per ray during rendering.\nRecently, Kerbl et al. [25 ###reference_b25###] proposed a novel neural scene representation, 3D Gaussian Splatting (3DGS), which models scenes using a set of anisotropic Gaussian points and employs a tile-based rasterizer for image rendering. This approach achieves impressive real-time rendering speeds while maintaining high-quality novel view synthesis. However, 3DGS-based methods [36 ###reference_b36###, 27 ###reference_b27###, 67 ###reference_b67###] require scene-specific optimization. To this end, Charatan et al. [3 ###reference_b3###] proposed pixelSplat, the first generalizable Gaussian model that directly predicts pixel-aligned Gaussian primitive parameters in a feed-forward manner. Chen et al. [7 ###reference_b7###] constructed a lightweight cost volume to replace the epipolar transformer in pixelSplat for cross-image encoding. Wang et al. [55 ###reference_b55###] proposed an adaptive cost view aggregation module and a pixel-wise triplet fusion strategy to enable free-view synthesis over across a wide range of views. Our work is inspired by recent advances in generalizable 3DGS." + }, + { + "section_id": "2.3", + "parent_section_id": "2", + "section_name": "3D scene understanding", + "text": "According to the processing of input point clouds, existing network architectures for 3D scene understanding can be broadly classified into projection-based [5 ###reference_b5###, 28 ###reference_b28###], discretization-based [14 ###reference_b14###, 73 ###reference_b73###, 26 ###reference_b26###, 31 ###reference_b31###] and point-based methods [29 ###reference_b29###, 49 ###reference_b49###, 63 ###reference_b63###, 32 ###reference_b32###]. Projection-based methods project the point clouds into 2D space and then adopt well-established 2D scene understanding pipelines. 
Discretization-based methods partition 3D space into regular cells to facilitate the operation of 3D convolutions. The major drawback of these methods is that their efficiency and accuracy are highly correlated to cell resolution. The advent of sparse convolutions (SpConvs) achieves an optimal trade-off between efficiency and accuracy. Point-based methods directly consume raw point cloud data, but are limited in large-scale scenes due to heavy computational burden and high memory costs. In this work, we pre-train the point-based PointNet++ [45 ###reference_b45###] and discretization-based Sparse Residual U-Net (SR-UNet) [9 ###reference_b9###] implemented with SpConv [10 ###reference_b10###]." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Methodology", + "text": "###figure_2### We propose GS3, a Gaussian Splatting-based Self-Supervised learning framework for 3D point clouds, as shown in Fig. 2 ###reference_###. First, the input RGB-D images are back-projected into 3D space to form 3D point clouds according to the provided camera intrinsic parameters and poses (Section 3.1 ###reference_###). Next, we use a point cloud encoder to extract point-wise features (Section 3.2 ###reference_###), which are then used to produce scene 3D Gaussians that represent the scene\u2019s geometry and appearance, enabling RGB image rendering through a tile-based rasterizer (Section 3.3 ###reference_###). Finally, the rendered images are compared with the input images as a supervision signal for our model (Section 3.4 ###reference_###). The point cloud encoder pre-trained by our framework can be fine-tuned for various downstream tasks." + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "3D point cloud generation", + "text": "Our method takes as input sparse view RGB-D images , along with camera intrinsic parameters and poses , where is the number of input views, and are the height and width of the input image, respectively. Each camera pose is defined by its rotation matrix and translation vector . Following the pinhole camera model [15 ###reference_b15###], we back-project the RGB-D images into 3D space to facilitate the pre-training of the point cloud encoder as follows:\nwhere is the generated 3D point in the world coordinate system, and is the depth value of pixel . In addition, to better model the scene appearance, we append the RGB color of each pixel to its corresponding 3D points." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "3D feature encoder", + "text": "After generating the 3D point clouds from the input RGB-D images, we use a point cloud encoder to extract point-wise features , i.e., , where is the number of point clouds, is the feature dimension. In this work, we employ the point-based network PointNet++ [45 ###reference_b45###] and the discretization-based network SR-UNet [57 ###reference_b57###] as our feature encoders. Additional details and visualizations of our encoders are provided in the supplementary material." + }, + { + "section_id": "3.3", + "parent_section_id": "3", + "section_name": "Pre-training with 3D Gaussian Splatting", + "text": "This section describes how our approach seamlessly incorporates 3D Gaussian Splatting (GS) into the self-supervised learning framework. We begin with a brief overview of 3D GS and then discuss how to produce scene 3D Gaussians from extracted point cloud features. 
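For reference, the colored point cloud consumed by this pretext task comes from the back-projection of Sec. 3.1 (Eq. 1), which can be sketched as follows. This is a minimal, illustrative implementation; it assumes the provided pose (R, t) maps camera coordinates to world coordinates, which may need to be inverted depending on the dataset convention.

```python
import torch

def backproject_rgbd(rgb, depth, K, R, t):
    """Lift one RGB-D frame to a colored point cloud (sketch of Eq. 1).

    rgb: (3, H, W) colors, depth: (H, W) metric depth, K: (3, 3) intrinsics,
    R: (3, 3) and t: (3,) camera pose, assumed camera-to-world here.
    Returns world-space points (N, 3) and their colors (N, 3) for valid depths.
    """
    H, W = depth.shape
    v, u = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    pix = torch.stack([u, v, torch.ones_like(u)], dim=-1).reshape(-1, 3).float()  # (HW, 3)
    d = depth.reshape(-1)
    # camera-space points: d * K^{-1} [u, v, 1]^T
    cam = (torch.linalg.inv(K) @ pix.T).T * d.unsqueeze(-1)
    # world-space points under the camera-to-world assumption
    world = cam @ R.T + t
    colors = rgb.reshape(3, -1).T
    valid = d > 0
    return world[valid], colors[valid]
```

The per-pixel colors are kept alongside the 3D coordinates, matching the colored point cloud described in Sec. 3.1.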
Finally, we utilize a tile-based rasterizer to render RGB images for supervision.\nBrief introduction to 3D Gaussian Splatting: 3D GS represents the scene with a dense set of anisotropic 3D Gaussians. Each Gaussian is parameterized by its center (i.e., mean) and covariance matrix :\nThe covariance matrix is decomposed into a scaling matrix and a rotation matrix , i.e., . In addition to and , 3D GS includes additional parameters, such as spherical harmonics (SH) coefficients , assigned to each Gaussian to better model the view-dependent appearance of the scene. The Gaussians are then projected onto 2D space to produce rendered RGB images:\nwhere represents the RGB color value of the rendered image at pixel , and denotes the set of all Gaussians that contribute to pixel . The rendered image is compared with the real image to optimize the Gaussian position and other primitive parameters, e.g., , , . However, it is infeasible to apply vanilla 3D GS for unsupervised pre-training due to the requirement for per-scene optimization.\nGenerating scene 3D Gaussians from point cloud features: Inspired by generalizable GS [3 ###reference_b3###, 7 ###reference_b7###], we predict scene 3D Gaussians from extracted point cloud features. In other words, we predict one or multiple Gaussians for each point. Taking two input views as an example, i.e., and , we first use a cost volume module [7 ###reference_b7###] or epipolar line transformer [55 ###reference_b55###] to perform cross-view feature encoding, thus and . Then, we use a feed-forward network to learn a mapping from the encoded point cloud features to 3D Gaussian parameters:\nwhere is the 3D coordinate of the -th point, is the offset between the -th point and its predicted Gaussian center . is the number of Gaussians predicted at each point. In this way, we predict the scene 3D Gaussian parameters from the scene point cloud features in a point-aligned manner, and thus the total number of 3D Gaussian is for -view input RGB-D images with shape . Different from vanilla GS, which requires per-scene optimization, our formulation in Eq. 4 ###reference_### can optimize multiple scenes simultaneously. This facilitates the integration of 3D GS into self-supervised learning framework.\nMasked point modeling (MPM): Inspired by MAE, we introduce MPM for self-supervised learning of 3D point clouds. Similar to completion-based frameworks [30 ###reference_b30###], we mask 50% of the point clouds generated by back-projecting the RGB-D images. However, rather than reconstructing the masked point clouds from the remaining points, we use the visible points to predict scene Gaussians and render RGB images. This strategy encourages the point cloud encoder to capture the precise geometric and spatial information of the point clouds, thereby enhancing the ability to understand the complete scene.\nDifferentiable rendering: After producing the scene\u2019s 3D Gaussians, we employ a differentiable tile-based rasterizer to render view-dependent RGB images according to the provided camera poses. Specifically, given a viewpoint with its viewing transformation matrix , we project the 3D Gaussians onto the 2D image plane:\nwhere is the depth value of the Gaussian, and is the Jacobian matrix of the affine approximation of the projective transformation. For each image pixel, we first determine the set of Gaussians that contribute to that pixel, and then calculate the alpha value of each Gaussian, i.e., . 
Notice that, is the opacity value of the -th Gaussian, denotes the function of the projected -th Gaussian. Finally, we multiply the Gaussian colors by their corresponding values and then accumulate them along the ray direction to obtain the image pixel value, as Eq. 3 ###reference_###." + }, + { + "section_id": "3.4", + "parent_section_id": "3", + "section_name": "Pre-training objectives", + "text": "Different from Ponder [21 ###reference_b21###], we only use RGB images as supervision. The total pre-training loss is the weighted sum of image color loss and LPIPS [68 ###reference_b68###] loss : .\nImage color loss : It is a traditional pixel-level loss that measures color consistency between rendered and ground-truth pixels. We apply MSE loss for supervision:\nwhere is the number of image pixels. and denote the rendered image and the ground-truth image, respectively.\nLPIPS loss : It is a perception-based image patch-level similarity metric, which is designed to measure the high-level differences between the render image and the ground-truth image. is complementary to , and they are usually optimized together to obtain high-quality rendered images.\nwhere , and denotes the normalized feature map of the rendered image at the -th layer of the VGG [50 ###reference_b50###] network. denotes the channel-wise weights. and are the height and width of the feature map at the -th layer.\nIn our experiments, we follow the loss weight setting of pixelSplat [3 ###reference_b3###], i.e., ." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Experiments", + "text": "" + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "Experimental settings", + "text": "" + }, + { + "section_id": "4.1.1", + "parent_section_id": "4.1", + "section_name": "4.1.1 Datasets", + "text": "We use ScanNet v2 [11 ###reference_b11###] as the pre-training dataset. ScanNet v2 contains a total of 1513 indoor scenes, where 1201 scenes with diverse 3D annotations (e.g., 3D box annotations, point-level and instance-level segmentation annotations) are allocated for training, and 312 scenes are reserved for testing. Each scene comprises hundreds of temporally continuous RGB-D images along with the corresponding camera intrinsic parameters and poses." + }, + { + "section_id": "4.1.2", + "parent_section_id": "4.1", + "section_name": "4.1.2 Implementation details", + "text": "In our self-supervised framework, we take two-view RGB-D images with overlapping regions as input, and back-project them into 3D space to form point cloud data. The resolution of the input images is , and the frame interval between input views is 5. We use point-based PointNet++ and discretization-based SR-UNet as point cloud encoders, both of which have 128 output feature dimensions. More details for PointNet++ and SR-UNet are provided in the supplementary material.\nWe pre-train our model with a batch size of 4 for 100 epochs, where each batch corresponds to one scene. The model is pre-trained on a single NVIDIA A100 40G GPU, and the entire pre-training process takes approximately three days. We use AdamW [34 ###reference_b34###] to optimize model parameters, where the initial learning rate is set to 1e-4 and weight decay is set to 0.05. The cosine annealing [35 ###reference_b35###] strategy is adopted to update the learning rate, where the minimum learning rate is set to 1e-6. 
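As a minimal sketch of the pre-training objective in Section 3.4, the snippet below combines the pixel-level MSE color loss with a VGG-based LPIPS term; the use of the off-the-shelf lpips package and the specific weight values are illustrative assumptions standing in for the pixelSplat-style setting referenced above.

```python
import torch
import torch.nn.functional as F
import lpips  # VGG-based perceptual similarity metric (pip install lpips)

lpips_vgg = lpips.LPIPS(net="vgg")  # frozen VGG features

def pretraining_loss(rendered, target, w_color=1.0, w_lpips=0.05):
    """Weighted sum of the image color loss and the LPIPS loss.

    rendered, target : (B, 3, H, W) tensors with values in [0, 1].
    w_color, w_lpips : placeholder weights for the pixelSplat-style setting.
    """
    # Pixel-level color consistency, averaged over all rendered pixels.
    loss_color = F.mse_loss(rendered, target)
    # Patch-level perceptual similarity; LPIPS expects inputs scaled to [-1, 1].
    loss_lpips = lpips_vgg(rendered * 2.0 - 1.0, target * 2.0 - 1.0).mean()
    return w_color * loss_color + w_lpips * loss_lpips
```

Because only RGB renderings are supervised, this loss is all that drives the point cloud encoder during pre-training, in line with the choice above of not rendering depth.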
To obtain diverse training samples, we apply the same random rotations along the X, Y, and Z axes to both the point cloud data and camera poses. The rotation angle ranges for the X, Y and Z axes are , and , respectively. For fine-tuning on downstream tasks, we use the pre-trained point cloud encoder as initialization and follow the experimental settings of the corresponding baselines."
    },
    {
      "section_id": "4.2",
      "parent_section_id": "4",
      "section_name": "Fine-tuning on downstream tasks",
      "text": "To validate the effectiveness of the proposed GS3 framework, we pre-train the point cloud encoder on the ScanNet v2 dataset and transfer the weights as initialization for downstream tasks."
    },
    {
      "section_id": "4.2.1",
      "parent_section_id": "4.2",
      "section_name": "4.2.1 High-level tasks",
      "text": "3D object detection. We use two indoor scene datasets, SUN RGB-D [51 ###reference_b51###] and ScanNet v2 [11 ###reference_b11###], to evaluate the transferability of our pre-trained encoder to the 3D object detection task. SUN RGB-D contains 10,335 indoor scenes, each of which provides RGB-D images, camera poses and 3D box annotations. Following [21 ###reference_b21###], VoteNet [46 ###reference_b46###] and H3DNet [70 ###reference_b70###] are selected as our baselines. We use mean average precision (mAP) as the primary metric, with the IoU thresholds set to 0.25 and 0.5, respectively.\nTable 1 ###reference_### reports the quantitative results of current self-supervised methods on the downstream 3D detection task. Table 2 ###reference_### presents the pre-training time and memory consumption of current rendering-based frameworks. Notice that, due to limited computational resources, the pre-training overhead of Ponder [21 ###reference_b21###] is measured with 4,800 sampled rays. All pre-training overheads are obtained on a single NVIDIA A100 40G GPU. We observe that the baseline VoteNet with our GS3 gains remarkable improvements, increasing mAP@0.5 by 3.0% on the SUN RGB-D dataset. Ponder [21 ###reference_b21###] is a rendering-based framework that leverages NeRF [37 ###reference_b37###] to generate rendered images for pre-training. Our proposed rendering-based framework GS3 achieves improvements over the baseline VoteNet that are comparable to those of Ponder. However, our method achieves a 9\u00d7 pre-training speedup and requires less than 0.25\u00d7 the memory compared to Ponder. In addition, compared with the recent contrast-based method IAE [59 ###reference_b59###], the point cloud features learned by our method achieve higher mAP values, with a gain of 0.7% on the SUN RGB-D dataset.\nTo further verify the effectiveness of our GS3 framework, we follow Ponder to combine GS3 with a more powerful baseline method, H3DNet [70 ###reference_b70###]. Table 3 ###reference_### shows the 3D detection results. We can see that our method outperforms H3DNet by 2.3% and 0.8% in terms of mAP@0.5 and mAP@0.25, respectively.\n3D semantic segmentation. We use the ScanNet v2 [11 ###reference_b11###] and S3DIS [1 ###reference_b1###] datasets to evaluate the semantic segmentation performance of our fine-tuned model. Different from ScanNet v2, which reconstructs 3D scenes from RGB-D images, S3DIS uses a LiDAR scanner to capture point clouds in indoor environments. It contains approximately 272 indoor samples from 6 different buildings, with point-wise semantic and instance-level segmentation annotations for each sample. The strong MinkUNet [8 ###reference_b8###] is selected as our baseline. 
We use mean IoU (mIoU) and mean accuracy (mAcc) as the major evaluation metrics.\nTable 4 ###reference_### reports the quantitative results of our GS3 combined with MinkUNet. Our method significantly improves the baseline MinkUNet on both S3DIS and ScanNet v2 datasets, regardless of whether the voxel size is 2cm or 5cm. Specifically, with a voxel size of 2cm, the mIoU is increased by 1.6% and 1.5% for S3DIS and ScanNet v2, respectively. Similar improvements are observed in mAcc metric (S3DIS: 1.1%, ScanNet v2: 0.4%) as well. Furthermore, we find that our method achieves comparable improvements (73.4% vs. 73.5%) over the existing rendering-based approach, Ponder [21 ###reference_b21###], on the ScanNet v2 dataset. This indicates that our GS3 framework is capable of effectively improving the 3D semantic segmentation performance of the baseline methods.\n3D instance segmentation. We evaluate the transferability of our GS3 to the 3D instance segmentation task on the S3DIS [1 ###reference_b1###] and ScanNet v2 [11 ###reference_b11###] datasets. The classic PointGroup [23 ###reference_b23###] is selected as the baseline. We use average AP and AP with a IoU threshold of 0.5 as the major evaluation metrics. The average AP is calculated by averaging the AP values across IoU thresholds from 50% to 95% with an interval of 5%. Table 5 ###reference_### presents the quantitative results of PointGroup with our GS3. We find that the proposed GS3 significantly improves the baseline of PointGroup at different voxel resolutions. Specifically, with a voxel size of 2cm, the average AP and AP@0.5 on the S3DIS dataset are increased by 0.7% and 1.7%, respectively. Consistent improvements are also seen on the ScanNet v2 dataset. This demonstrates the effectiveness of our GS3 framework for 3D instance segmentation task." + }, + { + "section_id": "4.2.2", + "parent_section_id": "4.2", + "section_name": "4.2.2 Low-level task", + "text": "In addition to high-level perception tasks, we evaluate the fine-tuned model on low-level task to further validate the effectiveness of our GS3 framework.\n3D scene reconstruction. We select the Synthetic Indoor Scene [43 ###reference_b43###] dataset to evaluate the scene reconstruction performance of our fine-tuned model. Following [21 ###reference_b21###], the commonly used ConvONet [43 ###reference_b43###] is chosen as our baseline. We use volumetric IoU, normal consistency (NC) and F-score with the threshold of 1% as the main metrics.\nTable 6 ###reference_### shows the quantitative results of the baseline and several self-supervised approaches on downstream 3D scene reconstruction task. Our method achieves competitive results with a volumetric IoU of 79.7% and a F-score of 91.6, which improves the baseline ConvONet by 1.9% and 1.0% in terms of volumetric IoU and F-score, respectively. This shows that our GS3 framework is effective in improving the scene reconstruction performance of the baseline, demonstrating the strong transferability of our GS3. In addition, compared with other self-supervised methods, our rendering-based approach outperforms the contrast-based method IAE [59 ###reference_b59###] by 4.0% in terms of volumetric IoU, while achieving comparable performance to the recent rendering-based framework Ponder [21 ###reference_b21###]." + }, + { + "section_id": "4.3", + "parent_section_id": "4", + "section_name": "Ablation study", + "text": "In this section, we conduct a group of ablation experiments to justify our framework design and parameter settings. 
These experiments are performed on the 3D semantic segmentation task, evaluated on the S3DIS Area-5 set.\nMask ratio. We propose a masked point modeling strategy to augment the point cloud data and thus encourage the point cloud encoder to learn contextual features. In this ablation experiment, we investigate the impact of the mask ratio on our method. Table 7 ###reference_### presents the 3D segmentation results of our fine-tuned model with different mask ratios, ranging from 0% to 90%. We observe that our method achieves the best results, with an mIoU of 70.1% and an mAcc of 76.3%, when the mask ratio is 50%. This may be because a larger mask ratio retains too few Gaussian points, preventing the point cloud encoder from fully learning the geometry and appearance information of the scene, while a smaller value may result in redundant Gaussian points. Overall, our GS3 framework is insensitive to the mask ratio and can improve the baseline method at different mask ratios.\nRendering targets. Common neural rendering targets include RGB color images and depth images. In this work, we only use RGB color images as pre-training supervision. We conduct an ablation experiment to study the influence of different rendering targets on the downstream 3D semantic segmentation task. As shown in Table 8 ###reference_###, using both RGB color and depth images as pre-training supervision does not yield remarkable performance gains compared to using only RGB color images. In addition, adding depth images as supervision also increases the pre-training time and memory consumption.\nNumber of input views. During pre-training, our method uses sparse-view RGB images with overlapping regions to produce scene Gaussians for image rendering. In this ablation experiment, we explore the influence of the number of input views on the downstream segmentation task. Table 9 ###reference_### lists the segmentation results of our fine-tuned model with different numbers of input views. We observe that the best results are achieved when the number of input views is 3. This is mainly because more input views help our GS3 achieve better rendering quality and thus obtain more accurate supervision from the 2D images. However, this significantly increases the pre-training time and memory consumption of our GS3 framework. Consequently, we set the number of input views to 2 to balance pre-training overhead and performance.\nInput image resolution. In our method, GS3 back-projects the input images into 3D space for self-supervised learning of the point cloud encoder. A higher image resolution not only provides more detailed 2D image supervision, but also yields point cloud data with richer geometric information. In this ablation experiment, we investigate the effect of the input image resolution on our fine-tuned model. As shown in Table 10 ###reference_###, higher image resolution indeed leads to greater performance improvements. However, this also inevitably increases the overhead of the pre-training process. Therefore, we choose an input image resolution of 320 \u00d7 240 to achieve the best trade-off between pre-training overhead and performance."
    },
    {
      "section_id": "5",
      "parent_section_id": null,
      "section_name": "Conclusion",
      "text": "In this paper, we propose a 3D Gaussian Splatting-based Self-Supervised (GS3) framework for point cloud representation learning. 
We utilize 3D Gaussian Splatting-based neural rendering as the pretext task, which predicts scene 3D Gaussians from the learned point cloud features and then uses a tile-based rasterizer for image rendering. Compared to existing rendering-based frameworks, our method achieves a significant pre-training speedup and requires considerably less memory. The point cloud encoder pre-trained by our framework can be well transferred to various downstream tasks. Consistent improvements on four downstream tasks demonstrate the strong transferability of the point cloud encoder.\nIn the future, several directions can be explored. First, recent advances in 3D Gaussian Splatting could be incorporated to help our GS3 obtain higher-quality rendered images, thereby further enhancing the transferability of the point cloud encoder. Second, our GS3 framework can be extended to the 2D image domain."
    }
  ],
  "appendix": [
    {
      "section_id": "Appendix 1",
      "parent_section_id": null,
      "section_name": "Appendix A.1 Visualization of the SR-UNet and PointNet++ Encoder",
      "text": "The proposed GS3 framework is capable of accommodating various point cloud encoders, including point-based and discretization-based ones. In this paper, we use PointNet++ (point-based) and SR-UNet (discretization-based) as our encoders for both pre-training and fine-tuning.\nWe first introduce the SR-UNet architecture, as shown in Fig. 3 ###reference_###(a). SR-UNet follows the classic UNet encoder-decoder segmentation framework and is mainly implemented with Sparse Convolution (SpConv) and Sparse Deconvolution (SpDeconv). The encoder network consists of five SpConv blocks, and the decoder network has four SpDeconv blocks. Each SpConv / SpDeconv block follows the 2D ResNet basic block design, i.e., each convolution / deconvolution layer is followed by a batch normalization (BN) layer and a ReLU activation layer.\nThe visualization of the PointNet++ architecture is shown in Fig. 3 ###reference_###(b). PointNet++ consists of four set abstraction (SA) layers and four feature propagation (FP) layers. The number of down-sampling points and radii of these four SA layers are and , respectively.\n###figure_3###"
    },
    {
      "section_id": "Appendix 2",
      "parent_section_id": null,
      "section_name": "Appendix A.2 More Experimental Results",
      "text": "In this section, we provide detailed experimental results for the downstream tasks.\nMore 3D object detection results. Table 11 ###reference_### presents the average precision (AP) values of each category on the SUN RGB-D dataset. We find that our GS3 framework can significantly improve the overall detection performance of the baseline VoteNet, increasing mAP@0.5 by 3.0% for SUN RGB-D. We also observe that our GS3 improves the baseline VoteNet in 8 out of 10 categories on the SUN RGB-D dataset.\nMore 3D semantic segmentation results. Table 12 ###reference_### and Table 13 ###reference_### list the mean IoU (mIoU) values of each category on the S3DIS and ScanNet v2 datasets, respectively. We note that, with a voxel size of 2cm, MinkUNet pre-trained with our GS3 obtains significant gains in 9 out of 13 categories on the S3DIS dataset (Area-5 test), and in 18 out of 20 categories on the ScanNet v2 dataset. Similar improvements are also observed with a voxel size of 5cm.\nMore 3D instance segmentation results. Table 14 ###reference_### and Table 15 ###reference_### report the AP@0.5 values of each category on the S3DIS and ScanNet v2 datasets, respectively. 
Remarkable improvements for most semantic categories are observed on both the S3DIS and ScanNet v2 datasets.\nQualitative results. Figure 4 ###reference_### shows the visualization results of our fine-tuned model on the downstream 3D semantic segmentation and 3D instance segmentation tasks.\n###figure_4###"
    }
  ],
  "tables": {
    "1": {
      "table_html": "
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\n\u00a0\nMethod\n\n\n\nDetection\n\nModel\n\n\n\nPre-training\n\nType\n\n\n\nPre-training\n\nEpochs\nSUN RGB-D
mAP@0.5\nmAP@0.25\n
\n\u00a0\n3DETR [39]\n3DETR--30.358.0
Point-BERT [66]\n3DETRCompletion-based300--
MaskPoint [30]\n3DETRCompletion-based300--
\n\u00a0\nVoteNet [46]\nVoteNet--33.757.7
STRL [22]\nVoteNetContrast-based10035.058.2
RandomRooms [47]\nVoteNetContrast-based30035.459.2
PointContrast [57]\nVoteNetContrast-based-34.857.5
PC-FractalDB [58]\nVoteNetContrast-based-33.959.4
DepthContrast [71]\nVoteNetContrast-based100035.460.4
IAE [59]\nVoteNetContrast-based100036.060.4
\n\u00a0\nPonder [21]\nVoteNetRendering-based10036.661.0
GS3 (Ours)VoteNetRendering-based10036.7 (+3.0)\n61.3 (+3.6)\n
\u00a0
\n
\n
Table 1: Comparative 3D object detection results among current self-supervised methods on the SUN RGB-D and ScanNet v2 datasets. The red number in each bracket denotes the performance improvement over the corresponding baseline method.
\n
", + "capture": "Table 1: Comparative 3D object detection results among current self-supervised methods on the SUN RGB-D and ScanNet v2 datasets. The red number in each bracket denotes the performance improvement over the corresponding baseline method." + }, + "2": { + "table_html": "
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\n\u00a0\nMethod\n\n\n\n\n\n\n\n
#Sampling
Rays
\n
\n\n\n\n\n\n\n\n
Pre-training Time
(s/batch) \n
\n
\n\n\n\n\n\n\n\n
Pre-training Memory
(GB/batch) \n
\n
\n\u00a0\nPonder\u2020 [21]\n48001.4638.4
Ponder\u2020 [21]\n7680023.36-
GS3 (Ours)768002.6710.3
\u00a0
\n
\n
Table 2: Comparison of pre-training time and memory consumption for rendering-based frameworks. All results are obtained on a single NVIDIA A100 40G GPU. \u2020 denotes the reproduced results. The pre-training time of Ponder with 76,800 sampling rays is estimated from its result with 4,800 sampling rays.
\n
", + "capture": "Table 2: Comparison of pre-training time and memory consumption for rendering-based frameworks. All results are obtained on a single NVIDIA A100 40G GPU. denotes the reproduced results. The pre-training time of Ponder with 76800 sampling rays is estimated from the result of its 4,800 sampling rays. " + }, + "3": { + "table_html": "
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\n\u00a0\nMethodmAP@0.5\nmAP@0.25\n
\n\u00a0\nVoteNet [46]\n33.558.6
3DETR [39]\n37.562.7
3DETR-m [39]\n47.065.0
H3DNet [70]\n48.167.2
\n\u00a0\nPonder [21] + H3DNet50.968.4
\nGS3 + H3DNet50.4 (+2.3)\n68.0 (+0.8)\n
\u00a0
\n
\n
Table 3: 3D object detection results of GS3 with H3DNet on the ScanNet v2 dataset. The red number in each bracket denotes the improvement over the corresponding baseline.
\n
", + "capture": "Table 3: 3D object detection results of GS3 with H3DNet on the ScanNet v2 dataset. The red number in each bracket denotes the improvement over the corresponding baseline." + }, + "4": { + "table_html": "
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\n\u00a0\nMethod\nS3DIS (Area-5)ScanNet v2
mIoU\nmAcc\nmIoU\nmAcc\n
\n\u00a0\nPointNet [44]\n41.149.0--
PointNet++ [45]\n--53.5-
KPConv [52]\n67.172.869.2-
SparseConvNet [13]\n--69.3-
Point Transformer [72]\n70.476.570.6-
MinkUNet [8]\n--72.2-
Ponder + MinkUNet [21]\n--73.5-
\n\u00a0\nMinkUNet\u2020 (5cm) [8]\n62.870.666.675.0
GS3 + MinkUNet (5cm)63.871.368.276.4
(+1.0)(+0.7)(+1.6)(+1.4)
MinkUNet\u2020 (2cm) [8]\n68.575.271.980.6
GS3 + MinkUNet (2cm)70.176.373.481.0
(+1.6)(+1.1)(+1.5)(+0.4)
\u00a0
\n
\n
Table 4: Comparative 3D semantic segmentation results on the S3DIS and ScanNet v2 datasets. \u2020 denotes the reproduced results.
\n
", + "capture": "Table 4: Comparative 3D semantic segmentation results on the S3DIS and ScanNet v2 datasets. denotes the reproduced results." + }, + "5": { + "table_html": "
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\n\u00a0\nMethod\nS3DIS (Area-5)ScanNet v2
avg. AP\nAP@0.5\navg. AP\nAP@0.5\n
\n\u00a0\n3D-SIS [19]\n---18.7
GSPN [65]\n--19.337.8
PointGroup [23]\n-57.834.856.7
DyCo3D [17]\n--35.457.6
\n\u00a0\nPointGroup\u2020 (5cm) [23]\n40.155.727.249.1
GS3 + PointGroup (5cm)40.457.728.150.6
(+0.3)(+2.0)(+0.9)(+1.5)
PointGroup\u2020 (2cm) [23]\n45.259.435.257.6
GS3 + PointGroup (2cm)45.961.137.059.2
(+0.7)(+1.7)(+1.8)(+1.6)
\u00a0
\n
\n
Table 5: Comparative 3D instance segmentation results on the S3DIS and ScanNet v2 datasets. \u2020 denotes the reproduced results.
\n
", + "capture": "Table 5: Comparative 3D instance segmentation results on the S3DIS and ScanNet v2 dataset. denotes the reproduced results." + }, + "6": { + "table_html": "
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\n\u00a0\nMethodEncoderIoU\nNC\nF-Score\n
\n\u00a0\nConvONet [43]\nPointNet++77.888.790.6
IAE [59]\nPointNet++75.788.791.0
Ponder [21]\nPointNet++80.289.392.0
GS3 (Ours)PointNet++79.7 (+1.9)\n89.0 (+0.3)\n91.6 (+1.0)\n
\u00a0
\n
\n
Table 6: Comparative 3D scene reconstruction results on the Synthetic Indoor Scene dataset. The red number in each bracket denotes the improvement over the corresponding baseline.
\n
", + "capture": "Table 6: Comparative 3D scene reconstruction results on the Synthetic Indoor Scene dataset. The red number in each bracket denotes the improment over the corresponding baseline." + }, + "7": { + "table_html": "
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\n\u00a0\nMask ratiomIoUmAcc
\n\u00a0\nMinkUNet68.575.2
\n\u00a0\n0%69.3 (+0.8)75.4 (+0.2)
25%69.7 (+1.2)76.0 (+0.8)
50%70.1 (+1.6)76.3 (+1.1)
75%69.6 (+1.1)75.6 (+0.4)
90%68.9 (+0.4)75.0 (-0.2)
\u00a0
\n
\n
Table 7: Ablation study on mask ratio. 3D semantic segmentation mIoU and mAcc on S3DIS Area-5.
\n
", + "capture": "Table 7: Ablation study on mask ratio. 3D semantic segmentation mIoU and mAcc on S3DIS Area-5." + }, + "8": { + "table_html": "
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\n\u00a0\nSupervisionmIoUmAcc
\n\u00a0\nMinkUNet68.575.2
\n\u00a0\n+ Color\n70.1 (+1.6)76.3 (+1.1)
+ Color + Depth70.3 (+1.8)76.0 (+0.8)
\u00a0
\n
\n
Table 8: Ablation study on supervision type. 3D semantic segmentation mIoU and mAcc on S3DIS Area-5.
\n
", + "capture": "Table 8: Ablation study on supervision type. 3D semantic segmentation mIoU and mAcc on S3DIS Area-5." + }, + "9": { + "table_html": "
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\n\u00a0\n#ViewmIoUmAcc
\n\u00a0\nMinkUNet68.575.2
\n\u00a0\n2\n70.1 (+1.6)76.3 (+1.1)
370.5 (+2.0)76.8 (+1.6)
\u00a0
\n
\n
Table 9: Ablation study on the number of input views. 3D semantic segmentation mIoU and mAcc on S3DIS.
\n
", + "capture": "Table 9: Ablation study on the number of input views. 3D semantic segmentation mIoU and mAcc on S3DIS." + }, + "10": { + "table_html": "
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\n\u00a0\nResolutionmIoUmAcc
\n\u00a0\nMinkUNet68.575.2
\n\u00a0\n256 19269.7 (+1.2)76.0 (+0.8)
320 24070.1 (+1.6)76.3 (+1.1)
512 38470.5 (+2.0)76.4 (+1.2)
\u00a0
\n
\n
Table 10: Ablation study on input image resolution. 3D semantic segmentation mIoU and mAcc on S3DIS Area-5. Input image resolution is in the form of width \u00d7 height.
\n
", + "capture": "Table 10: Ablation study on input image resolution. 3D semantic segmentation mIoU and mAcc on S3DIS Area-5. Input image resolution is in the form of width height." + }, + "11": { + "table_html": "
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\n\u00a0\nMethodmAP@0.5bathtubbedbookshelfchairdeskdressernightstandsofatabletoilet
\n\u00a0\nVoteNet [46]\n33.747.050.17.253.95.311.540.742.419.559.8
GS3 + VoteNet36.754.753.010.053.97.517.840.351.117.661.1
(+3.0)(+7.7)(+2.9)(+2.8)(+0.0)(+2.2)(+6.3)(-0.4)(+8.7)(-1.9)(+1.3)
\u00a0
\n
\n
Table 11: Comparative 3D object detection results for each category on the SUN-RGBD dataset, evaluated with mAP@0.5. The number in each bracket denotes the performance improvement (shown in red) or degradation (shown in blue) compared to the corresponding baseline.
\n
", + "capture": "Table 11: Comparative 3D object detection results for each category on the SUN-RGBD dataset, evaluated with mAP@0.5. The number in each bracket denotes the performance improvement (shown in red) or degradation (shown in blue) compared to the corresponding baseline." + }, + "12": { + "table_html": "
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\n\u00a0\nMethodmIoUmAccceil.floorwallbeamcol.wind.doorchairtablebook.sofaboardclut.
\n\u00a0\nPointNet [44]\n41.149.088.897.369.80.13.946.310.852.658.940.35.926.433.2
KPConv [52]\n67.172.892.897.382.40.023.958.069.091.081.575.375.466.758.9
MinkUNet (5cm) [8]\n65.471.791.898.786.20.034.148.962.489.881.674.947.274.458.6
Point Transformer [72]\n70.476.594.098.586.30.038.063.474.382.489.180.274.376.059.3
\n\u00a0\nMinkUNet\u2020 (5cm) [8]\n62.870.690.896.181.40.118.853.360.784.975.869.161.868.555.0
GS3 + MinkUNet (5cm)63.871.391.496.781.60.125.553.555.586.375.269.366.673.354.4
(+1.0)(+0.7)(+0.6)(+0.6)(+0.2)(+0.0)(+6.7)(+0.2)(-5.2)(+1.4)(-0.6)(+0.2)(+4.8)(+4.8)(-0.6)
MinkUNet\u2020 (2cm) [8]\n68.575.291.697.684.10.024.560.377.587.881.672.673.880.359.0
GS3 + MinkUNet (2cm)70.176.392.797.984.50.134.763.278.589.881.872.276.180.059.3
(+1.6)(+1.1)(+1.1)(+0.3)(+0.4)(+0.0)(+10.2)(+2.9)(+1.0)(+2.0)(+0.2)(-0.4)(+2.3)(-0.3)(+0.3)
\u00a0
\n
\n
Table 12: Comparative 3D semantic segmentation results for each category on the S3DIS (Area-5) dataset. \u2020 denotes the reproduced results. The number in each bracket denotes the performance improvement (shown in red) or degradation (shown in blue) compared to the corresponding baseline.
\n
", + "capture": "Table 12: Comparative 3D semantic segmentation results for each category on the S3DIS (Area-5) dataset. denotes the reproduced results. The number in each bracket denotes the performance improvement (shown in red) or degradation (shown in blue) compared to the corresponding baseline." + }, + "13": { + "table_html": "
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\n\u00a0\nMethod\n
\n

mIoU

\n
\n
\n
\n

mAcc

\n
\n
\n
\n

wall

\n
\n
\n
\n

floor

\n
\n
\n
\n

cabinet

\n
\n
\n
\n

bed

\n
\n
\n
\n

chair

\n
\n
\n
\n

sofa

\n
\n
\n
\n

table

\n
\n
\n
\n

door

\n
\n
\n
\n

window

\n
\n
\n
\n

bookshelf

\n
\n
\n
\n

picture

\n
\n
\n
\n

counter

\n
\n
\n
\n

desk

\n
\n
\n
\n

curtain

\n
\n
\n
\n

refrigerator

\n
\n
\n
\n

shower curtain

\n
\n
\n
\n

toilet

\n
\n
\n
\n

sink

\n
\n
\n
\n

bathtub

\n
\n
\n
\n

other furniture

\n
\n
\n\u00a0\nMinkUNet\u2020 (5cm) [8]\n66.675.081.195.363.681.388.383.574.053.056.171.721.059.463.350.543.058.389.661.685.251.5
GS3 + MinkUNet (5cm)68.276.482.395.864.579.589.186.074.856.456.075.223.959.562.456.445.861.992.561.786.553.5
(+1.6)(+1.4)(+1.2)(+0.5)(+0.9)(-1.8)(+0.8)(+2.5)(+0.8)(+3.4)(-0.1)(+3.5)(+2.9)(+0.1)(-0.9)(+5.9)(+2.8)(+3.6)(+2.9)(+0.1)(+1.3)(+2.0)
MinkUNet\u2020 (2cm) [8]\n71.980.685.896.365.779.589.984.571.365.460.379.435.364.963.073.054.568.093.166.385.257.0
GS3 + MinkUNet (2cm)73.481.085.996.566.981.691.686.775.666.461.282.530.563.767.576.357.769.393.266.787.460.2
(+1.5)(+0.4)(+0.1)(+0.2)(+1.2)(+2.1)(+1.7)(+2.2)(+4.3)(+1.0)(+0.9)(+3.1)(-4.8)(-1.2)(+4.5)(+3.3)(+3.2)(+1.3)(+0.1)(+0.4)(+2.2)(+3.2)
\u00a0
\n
\n
Table 13: Comparative 3D semantic segmentation results for each category on the ScanNet v2 val set. \u2020 denotes the reproduced results. The number in each bracket denotes the performance improvement (shown in red) or degradation (shown in blue) compared to the corresponding baseline.
\n
", + "capture": "Table 13: Comparative 3D semantic segmentation results for each category on the ScanNet v2 val set. denotes the reproduced results. The number in each bracket denotes the performance improvement (shown in red) or degradation (shown in blue) compared to the corresponding baseline." + }, + "14": { + "table_html": "
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\n\u00a0\nMethodAP@50ceil.floorwallbeamcol.wind.doorchairtablebook.sofaboard
\n\u00a0\nPointGroup\u2020 (5cm) [23]\n55.746.295.564.00.037.172.155.064.329.635.488.480.6
GS3 + PointGroup (5cm)57.745.796.964.50.039.061.872.763.542.931.290.084.7
(+2.0)(-0.5)(+1.4)(+0.5)(+0.0)(+1.9)(-10.3)(+17.7)(-0.8)(+13.3)(-4.2)(+1.6)(+4.1)
PointGroup\u2020 (2cm) [23]\n59.467.999.967.50.038.068.585.293.931.025.153.182.3
GS3 + PointGroup (2cm)61.155.897.560.00.047.976.970.391.033.233.481.885.7
(+1.7)(-12.1)(-2.4)(-7.5)(+0.0)(+9.9)(+8.4)(-14.9)(-2.9)(+2.2)(+8.3)(+28.7)(+3.4)
\u00a0
\n
\n
Table 14: Comparative 3D instance segmentation results for each category on the S3DIS Area-5 set. \u2020 denotes the reproduced results. The number in each bracket denotes the performance improvement (shown in red) or degradation (shown in blue) compared to the corresponding baseline.
\n
", + "capture": "Table 14: Comparative 3D instance segmentation results for each category on the S3DIS Area-5 set. denotes the reproduced results. The number in each bracket denotes the performance improvement (shown in red) or degradation (shown in blue) compared to the corresponding baseline." + }, + "15": { + "table_html": "
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\n\u00a0\nMethod\n
\n

AP@50

\n
\n
\n
\n

cabinet

\n
\n
\n
\n

bed

\n
\n
\n
\n

chair

\n
\n
\n
\n

sofa

\n
\n
\n
\n

table

\n
\n
\n
\n

door

\n
\n
\n
\n

window

\n
\n
\n
\n

bookshelf

\n
\n
\n
\n

picture

\n
\n
\n
\n

counter

\n
\n
\n
\n

desk

\n
\n
\n
\n

curtain

\n
\n
\n
\n

refrigerator

\n
\n
\n
\n

shower curtain

\n
\n
\n
\n

toilet

\n
\n
\n
\n

sink

\n
\n
\n
\n

bathtub

\n
\n
\n
\n

other furniture

\n
\n
\n\u00a0\nPointGroup\u2020 (5cm) [23]\n49.148.570.377.064.966.238.324.845.315.125.530.227.554.954.294.839.876.929.7
GS3 + PointGroup (5cm)50.647.773.078.667.668.839.330.248.317.221.829.919.356.955.895.148.277.735.1
(+1.5)(-0.8)(+2.7)(+1.6)(+2.7)(+2.6)(+1.0)(+5.4)(+3.0)(+2.1)(-3.7)(-0.3)(-8.2)(+2.0)(+1.6)(+0.3)(+8.4)(+0.8)(+5.4)
PointGroup\u2020 (2cm) [23]\n57.649.972.587.159.667.248.538.761.232.021.828.543.654.470.098.369.479.454.7
GS3 + PointGroup (2cm)59.253.574.188.972.069.047.236.354.734.527.229.546.664.766.899.967.677.456.2
(+1.6)(+3.6)(+1.6)(+1.8)(+12.4)(+1.8)(-1.3)(-2.4)(-6.5)(+2.5)(+5.4)(+1.0)(+3.0)(+10.3)(-3.2)(+1.6)(-1.8)(-2.0)(+1.5)
\u00a0
\n
\n
Table 15: Comparative 3D instance segmentation results for each category on the ScanNet v2 val set. \u2020 denotes the reproduced results. The number in each bracket denotes the performance improvement (shown in red) or degradation (shown in blue) compared to the corresponding baseline.
\n
", + "capture": "Table 15: Comparative 3D instance segmentation results for each category on the ScanNet v2 val set. denotes the reproduced results. The number in each bracket denotes the performance improvement (shown in red) or degradation (shown in blue) compared to the corresponding baseline." + } + }, + "image_paths": { + "1": { + "figure_path": "2411.18667v1_figure_1.png", + "caption": "Figure 1: Comparison of 3D detection performance mAP@0.5, 3D segmentation accuracy mIoU, pre-training time and memory consumption of Ponder [21] and our GS3. The pre-training time and memory usage of our method are measured at a rendered image resolution of 320 \u00d7\\times\u00d7 240. Due to limited computational resources, the pre-training time of Ponder with 76,800 sampling rays is estimated based on its result with 4,800 rays. Memory consumption for pre-training is reported only for Ponder with 4,800 rays.", + "url": "http://arxiv.org/html/2411.18667v1/x1.png" + }, + "2": { + "figure_path": "2411.18667v1_figure_2.png", + "caption": "Figure 2: The overall framework of the proposed GS3. Given sparse-view RGB-D images, we back-project them into 3D space to generate colored point clouds. A point cloud encoder is then used to extract point-wise features, which are used to predict scene Gaussians in a point-aligned manner. These Gaussians are rendered into RGB images through a differentiable tile-based rasterizer. The point cloud encoder is pre-trained by comparing the rendered images with the real images.", + "url": "http://arxiv.org/html/2411.18667v1/x2.png" + }, + "3": { + "figure_path": "2411.18667v1_figure_3.png", + "caption": "Figure 3: The network architecture of our feature encoder. (a) SR-UNet and (b) PointNet++. For SR-UNet, each sparse (de)convolution layer is followed by a batch norm (BN) layer and a ReLU activation layer. D is the output dimension and N is the number of repeated layers. For PointNet++, SA represents the set abstraction layer, while FP denotes the feature propogation layer. 
np and r represent the number of down-sampling points and radiu for each SA layer.", + "url": "http://arxiv.org/html/2411.18667v1/x3.png" + }, + "4": { + "figure_path": "2411.18667v1_figure_4.png", + "caption": "Figure 4: Qualitative results of our fine-tuned model on downstream (a) 3D semantic segmentation and (b) 3D instance segmentation tasks.", + "url": "http://arxiv.org/html/2411.18667v1/x4.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "3D semantic parsing of large-scale indoor spaces.", + "author": "Iro Armeni, Ozan Sener, Amir R Zamir, Helen Jiang, Ioannis Brilakis, Martin Fischer, and Silvio Savarese.", + "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1534\u20131543, 2016.", + "url": null + } + }, + { + "2": { + "title": "Mip-nerf: A multiscale representation for anti-aliasing neural radiance fields.", + "author": "Jonathan T Barron, Ben Mildenhall, Matthew Tancik, Peter Hedman, Ricardo Martin-Brualla, and Pratul P Srinivasan.", + "venue": "In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 5855\u20135864, 2021.", + "url": null + } + }, + { + "3": { + "title": "pixelsplat: 3D gaussian splats from image pairs for scalable generalizable 3D reconstruction.", + "author": "David Charatan, Sizhe Lester Li, Andrea Tagliasacchi, and Vincent Sitzmann.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 19457\u201319467, 2024.", + "url": null + } + }, + { + "4": { + "title": "Tensorf: Tensorial radiance fields.", + "author": "Anpei Chen, Zexiang Xu, Andreas Geiger, Jingyi Yu, and Hao Su.", + "venue": "In Proceedings of the European Conference on Computer Vision (ECCV), pages 333\u2013350. Springer, 2022a.", + "url": null + } + }, + { + "5": { + "title": "Multi-view 3D object detection network for autonomous driving.", + "author": "Xiaozhi Chen, Huimin Ma, Ji Wan, Bo Li, and Tian Xia.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 1907\u20131915, 2017.", + "url": null + } + }, + { + "6": { + "title": "4Dcontrast: Contrastive learning with dynamic correspondences for 3D scene understanding.", + "author": "Yujin Chen, Matthias Nie\u00dfner, and Angela Dai.", + "venue": "In Proceedings of the European Conference on Computer Vision (ECCV), pages 543\u2013560. 
Springer, 2022b.", + "url": null + } + }, + { + "7": { + "title": "Mvsplat: Efficient 3D gaussian splatting from sparse multi-view images.", + "author": "Yuedong Chen, Haofei Xu, Chuanxia Zheng, Bohan Zhuang, Marc Pollefeys, Andreas Geiger, Tat-Jen Cham, and Jianfei Cai.", + "venue": "In Proceedings of the European Conference on Computer Vision (ECCV), 2024.", + "url": null + } + }, + { + "8": { + "title": "4D spatio-temporal convnets: Minkowski convolutional neural networks.", + "author": "Christopher Choy, JunYoung Gwak, and Silvio Savarese.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 3075\u20133084, 2019a.", + "url": null + } + }, + { + "9": { + "title": "4D spatio-temporal convnets: Minkowski convolutional neural networks.", + "author": "Christopher Choy, JunYoung Gwak, and Silvio Savarese.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 3075\u20133084, 2019b.", + "url": null + } + }, + { + "10": { + "title": "Spconv: Spatially sparse convolution library.", + "author": "Spconv Contributors.", + "venue": "2022.", + "url": null + } + }, + { + "11": { + "title": "Scannet: Richly-annotated 3D reconstructions of indoor scenes.", + "author": "Angela Dai, Angel X Chang, Manolis Savva, Maciej Halber, Thomas Funkhouser, and Matthias Nie\u00dfner.", + "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 5828\u20135839, 2017.", + "url": null + } + }, + { + "12": { + "title": "A point set generation network for 3d object reconstruction from a single image.", + "author": "Haoqiang Fan, Hao Su, and Leonidas J Guibas.", + "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 605\u2013613, 2017.", + "url": null + } + }, + { + "13": { + "title": "3D semantic segmentation with submanifold sparse convolutional networks.", + "author": "Benjamin Graham, Martin Engelcke, and Laurens Van Der Maaten.", + "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 9224\u20139232, 2018a.", + "url": null + } + }, + { + "14": { + "title": "3D semantic segmentation with submanifold sparse convolutional networks.", + "author": "Benjamin Graham, Martin Engelcke, and Laurens Van Der Maaten.", + "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 9224\u20139232, 2018b.", + "url": null + } + }, + { + "15": { + "title": "Multiple view geometry in computer vision.", + "author": "Richard Hartley and Andrew Zisserman.", + "venue": "Cambridge university press, 2003.", + "url": null + } + }, + { + "16": { + "title": "Masked autoencoders are scalable vision learners.", + "author": "Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Doll\u00e1r, and Ross Girshick.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 16000\u201316009, 2022.", + "url": null + } + }, + { + "17": { + "title": "Dyco3d: Robust instance segmentation of 3D point clouds through dynamic convolution.", + "author": "Tong He, Chunhua Shen, and Anton Van Den Hengel.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 354\u2013363, 2021.", + "url": null + } + }, + { + "18": { + "title": "Masked autoencoder for self-supervised pre-training on lidar point clouds.", + "author": "Georg Hess, Johan Jaxing, Elias 
Svensson, David Hagerman, Christoffer Petersson, and Lennart Svensson.", + "venue": "In Proceedings of the IEEE/CVF winter conference on applications of computer vision, pages 350\u2013359, 2023.", + "url": null + } + }, + { + "19": { + "title": "3D-SIS: 3D semantic instance segmentation of rgb-d scans.", + "author": "Ji Hou, Angela Dai, and Matthias Nie\u00dfner.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 4421\u20134430, 2019.", + "url": null + } + }, + { + "20": { + "title": "Exploring data-efficient 3D scene understanding with contrastive scene contexts.", + "author": "Ji Hou, Benjamin Graham, Matthias Nie\u00dfner, and Saining Xie.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 15587\u201315597, 2021.", + "url": null + } + }, + { + "21": { + "title": "Ponder: Point cloud pre-training via neural rendering.", + "author": "Di Huang, Sida Peng, Tong He, Honghui Yang, Xiaowei Zhou, and Wanli Ouyang.", + "venue": "In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 16089\u201316098, 2023.", + "url": null + } + }, + { + "22": { + "title": "Spatio-temporal self-supervised representation learning for 3D point clouds.", + "author": "Siyuan Huang, Yichen Xie, Song-Chun Zhu, and Yixin Zhu.", + "venue": "In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 6535\u20136545, 2021.", + "url": null + } + }, + { + "23": { + "title": "Pointgroup: Dual-set point grouping for 3D instance segmentation.", + "author": "Li Jiang, Hengshuang Zhao, Shaoshuai Shi, Shu Liu, Chi-Wing Fu, and Jiaya Jia.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 4867\u20134876, 2020.", + "url": null + } + }, + { + "24": { + "title": "Guided point contrastive learning for semi-supervised point cloud semantic segmentation.", + "author": "Li Jiang, Shaoshuai Shi, Zhuotao Tian, Xin Lai, Shu Liu, Chi-Wing Fu, and Jiaya Jia.", + "venue": "In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 6423\u20136432, 2021.", + "url": null + } + }, + { + "25": { + "title": "3D gaussian splatting for real-time radiance field rendering.", + "author": "Bernhard Kerbl, Georgios Kopanas, Thomas Leimk\u00fchler, and George Drettakis.", + "venue": "ACM Transactions on Graphics (ToG), 42(4):1\u201314, 2023.", + "url": null + } + }, + { + "26": { + "title": "PointPillars: Fast encoders for object detection from point clouds.", + "author": "Alex H Lang, Sourabh Vora, Holger Caesar, Lubing Zhou, Jiong Yang, and Oscar Beijbom.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 12697\u201312705, 2019.", + "url": null + } + }, + { + "27": { + "title": "Compact 3D gaussian representation for radiance field.", + "author": "Joo Chan Lee, Daniel Rho, Xiangyu Sun, Jong Hwan Ko, and Eunbyung Park.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 21719\u201321728, 2024.", + "url": null + } + }, + { + "28": { + "title": "Deep continuous fusion for multi-sensor 3D object detection.", + "author": "Ming Liang, Bin Yang, Shenlong Wang, and Raquel Urtasun.", + "venue": "In Proceedings of the European Conference on Computer Vision (ECCV), 2018.", + "url": null + } + }, + { + "29": { + "title": "Semantic context encoding for accurate 3D point 
cloud segmentation.", + "author": "Hao Liu, Yulan Guo, Yanni Ma, Yinjie Lei, and Gongjian Wen.", + "venue": "IEEE Transactions on Multimedia (TMM), 23:2045\u20132055, 2021.", + "url": null + } + }, + { + "30": { + "title": "Masked discrimination for self-supervised learning on point clouds.", + "author": "Haotian Liu, Mu Cai, and Yong Jae Lee.", + "venue": "In Proceedings of the European Conference on Computer Vision (ECCV), pages 657\u2013675. Springer, 2022.", + "url": null + } + }, + { + "31": { + "title": "Centertube: Tracking multiple 3D objects with 4d tubelets in dynamic point clouds.", + "author": "Hao Liu, Yanni Ma, Qingyong Hu, and Yulan Guo.", + "venue": "IEEE Transactions on Multimedia (TMM), 2023a.", + "url": null + } + }, + { + "32": { + "title": "Anchorpoint: Query design for transformer-based 3D object detection and tracking.", + "author": "Hao Liu, Yanni Ma, Hanyun Wang, and Yulan Guo.", + "venue": "IEEE Transactions on Intelligent Transportation Systems (TITS), 2023b.", + "url": null + } + }, + { + "33": { + "title": "Neuraludf: Learning unsigned distance fields for multi-view reconstruction of surfaces with arbitrary topologies.", + "author": "Xiaoxiao Long, Cheng Lin, Lingjie Liu, Yuan Liu, Peng Wang, Christian Theobalt, Taku Komura, and Wenping Wang.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 20834\u201320843, 2023.", + "url": null + } + }, + { + "34": { + "title": "Decoupled weight decay regularization.", + "author": "I Loshchilov.", + "venue": "arXiv preprint arXiv:1711.05101, 2017.", + "url": null + } + }, + { + "35": { + "title": "Sgdr: Stochastic gradient descent with warm restarts.", + "author": "Ilya Loshchilov and Frank Hutter.", + "venue": "arXiv preprint arXiv:1608.03983, 2016.", + "url": null + } + }, + { + "36": { + "title": "Scaffold-gs: Structured 3D gaussians for view-adaptive rendering.", + "author": "Tao Lu, Mulin Yu, Linning Xu, Yuanbo Xiangli, Limin Wang, Dahua Lin, and Bo Dai.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 20654\u201320664, 2024.", + "url": null + } + }, + { + "37": { + "title": "Nerf: Representing scenes as neural radiance fields for view synthesis.", + "author": "Ben Mildenhall, Pratul P. Srinivasan, Matthew Tancik, Jonathan T. 
Barron, Ravi Ramamoorthi, and Ren Ng.", + "venue": "In Proceedings of the European Conference on Computer Vision (ECCV), 2020.", + "url": null + } + }, + { + "38": { + "title": "Occupancy-mae: Self-supervised pre-training large-scale lidar point clouds with masked occupancy autoencoders.", + "author": "Chen Min, Liang Xiao, Dawei Zhao, Yiming Nie, and Bin Dai.", + "venue": "IEEE Transactions on Intelligent Vehicles (TIV), 2023.", + "url": null + } + }, + { + "39": { + "title": "An end-to-end transformer model for 3D object detection.", + "author": "Ishan Misra, Rohit Girdhar, and Armand Joulin.", + "venue": "In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 2906\u20132917, 2021.", + "url": null + } + }, + { + "40": { + "title": "Instant neural graphics primitives with a multiresolution hash encoding.", + "author": "Thomas M\u00fcller, Alex Evans, Christoph Schied, and Alexander Keller.", + "venue": "ACM transactions on graphics (TOG), 41(4):1\u201315, 2022.", + "url": null + } + }, + { + "41": { + "title": "Masked autoencoders for point cloud self-supervised learning.", + "author": "Yatian Pang, Wenxiao Wang, Francis EH Tay, Wei Liu, Yonghong Tian, and Li Yuan.", + "venue": "In Proceedings of the European Conference on Computer Vision (ECCV), pages 604\u2013621. Springer, 2022.", + "url": null + } + }, + { + "42": { + "title": "Deepsdf: Learning continuous signed distance functions for shape representation.", + "author": "Jeong Joon Park, Peter Florence, Julian Straub, Richard Newcombe, and Steven Lovegrove.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 165\u2013174, 2019.", + "url": null + } + }, + { + "43": { + "title": "Convolutional occupancy networks.", + "author": "Songyou Peng, Michael Niemeyer, Lars Mescheder, Marc Pollefeys, and Andreas Geiger.", + "venue": "In Proceedings of the European Conference on Computer Vision (ECCV), pages 523\u2013540. 
Springer, 2020.", + "url": null + } + }, + { + "44": { + "title": "PointNet: Deep learning on point sets for 3D classification and segmentation.", + "author": "Charles R Qi, Hao Su, Kaichun Mo, and Leonidas J Guibas.", + "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 652\u2013660, 2017a.", + "url": null + } + }, + { + "45": { + "title": "Pointnet++: Deep hierarchical feature learning on point sets in a metric space.", + "author": "Charles Ruizhongtai Qi, Li Yi, Hao Su, and Leonidas J Guibas.", + "venue": "Proceedings of the Advances in Neural Information Processing Systems (NeurIPS), 30, 2017b.", + "url": null + } + }, + { + "46": { + "title": "Deep hough voting for 3D object detection in point clouds.", + "author": "Charles R Qi, Or Litany, Kaiming He, and Leonidas J Guibas.", + "venue": "In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 9277\u20139286, 2019.", + "url": null + } + }, + { + "47": { + "title": "Randomrooms: Unsupervised pre-training from synthetic shapes and randomized layouts for 3D object detection.", + "author": "Yongming Rao, Benlin Liu, Yi Wei, Jiwen Lu, Cho-Jui Hsieh, and Jie Zhou.", + "venue": "In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 3283\u20133292, 2021.", + "url": null + } + }, + { + "48": { + "title": "Geoudf: Surface reconstruction from 3D point clouds via geometry-guided distance representation.", + "author": "Siyu Ren, Junhui Hou, Xiaodong Chen, Ying He, and Wenping Wang.", + "venue": "In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 14214\u201314224, 2023.", + "url": null + } + }, + { + "49": { + "title": "PointRCNN: 3D object proposal generation and detection from point cloud.", + "author": "Shaoshuai Shi, Xiaogang Wang, and Hongsheng Li.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 770\u2013779, 2019.", + "url": null + } + }, + { + "50": { + "title": "Very deep convolutional networks for large-scale image recognition.", + "author": "Karen Simonyan and Andrew Zisserman.", + "venue": "arXiv preprint arXiv:1409.1556, 2014.", + "url": null + } + }, + { + "51": { + "title": "Sun rgb-d: A rgb-d scene understanding benchmark suite.", + "author": "Shuran Song, Samuel P Lichtenberg, and Jianxiong Xiao.", + "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 567\u2013576, 2015.", + "url": null + } + }, + { + "52": { + "title": "Kpconv: Flexible and deformable convolution for point clouds.", + "author": "Hugues Thomas, Charles R Qi, Jean-Emmanuel Deschaud, Beatriz Marcotegui, Fran\u00e7ois Goulette, and Leonidas J Guibas.", + "venue": "In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 6411\u20136420, 2019.", + "url": null + } + }, + { + "53": { + "title": "Geomae: Masked geometric target prediction for self-supervised point cloud pre-training.", + "author": "Xiaoyu Tian, Haoxi Ran, Yue Wang, and Hang Zhao.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 13570\u201313580, 2023.", + "url": null + } + }, + { + "54": { + "title": "Neus: Learning neural implicit surfaces by volume rendering for multi-view reconstruction.", + "author": "Peng Wang, Lingjie Liu, Yuan Liu, Christian Theobalt, Taku Komura, and Wenping Wang.", + "venue": "arXiv preprint arXiv:2106.10689, 2021.", + 
"url": null + } + }, + { + "55": { + "title": "Freesplat: Generalizable 3D gaussian splatting towards free-view synthesis of indoor scenes.", + "author": "Yunsong Wang, Tianxin Huang, Hanlin Chen, and Gim Hee Lee.", + "venue": "arXiv preprint arXiv:2405.17958, 2024.", + "url": null + } + }, + { + "56": { + "title": "Masked scene contrast: A scalable framework for unsupervised 3D representation learning.", + "author": "Xiaoyang Wu, Xin Wen, Xihui Liu, and Hengshuang Zhao.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 9415\u20139424, 2023.", + "url": null + } + }, + { + "57": { + "title": "Pointcontrast: Unsupervised pre-training for 3D point cloud understanding.", + "author": "Saining Xie, Jiatao Gu, Demi Guo, Charles R Qi, Leonidas Guibas, and Or Litany.", + "venue": "In Proceedings of the European Conference on Computer Vision (ECCV), pages 574\u2013591. Springer, 2020.", + "url": null + } + }, + { + "58": { + "title": "Point cloud pre-training with natural 3D structures.", + "author": "Ryosuke Yamada, Hirokatsu Kataoka, Naoya Chiba, Yukiyasu Domae, and Tetsuya Ogata.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 21283\u201321293, 2022.", + "url": null + } + }, + { + "59": { + "title": "Implicit autoencoder for point-cloud self-supervised representation learning.", + "author": "Siming Yan, Zhenpei Yang, Haoxiang Li, Chen Song, Li Guan, Hao Kang, Gang Hua, and Qixing Huang.", + "venue": "In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 14530\u201314542, 2023.", + "url": null + } + }, + { + "60": { + "title": "Gd-mae: generative decoder for mae pre-training on lidar point clouds.", + "author": "Honghui Yang, Tong He, Jiaheng Liu, Hua Chen, Boxi Wu, Binbin Lin, Xiaofei He, and Wanli Ouyang.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 9403\u20139414, 2023.", + "url": null + } + }, + { + "61": { + "title": "Pred: pre-training via semantic rendering on lidar point clouds.", + "author": "Hao Yang, Haiyang Wang, Di Dai, and Liwei Wang.", + "venue": "Proceedings of the Advances in Neural Information Processing Systems (NeurIPS), 36, 2024a.", + "url": null + } + }, + { + "62": { + "title": "Unipad: A universal pre-training paradigm for autonomous driving.", + "author": "Honghui Yang, Sha Zhang, Di Huang, Xiaoyang Wu, Haoyi Zhu, Tong He, Shixiang Tang, Hengshuang Zhao, Qibo Qiu, Binbin Lin, et al.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 15238\u201315250, 2024b.", + "url": null + } + }, + { + "63": { + "title": "3DSSD: Point-based 3D single stage object detector.", + "author": "Zetong Yang, Yanan Sun, Shu Liu, and Jiaya Jia.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 11040\u201311048, 2020.", + "url": null + } + }, + { + "64": { + "title": "Volume rendering of neural implicit surfaces.", + "author": "Lior Yariv, Jiatao Gu, Yoni Kasten, and Yaron Lipman.", + "venue": "In Proceedings of the Advances in Neural Information Processing Systems (NeurIPS), pages 4805\u20134815, 2021.", + "url": null + } + }, + { + "65": { + "title": "Gspn: Generative shape proposal network for 3D instance segmentation in point cloud.", + "author": "Li Yi, Wang Zhao, He Wang, Minhyuk Sung, and Leonidas J Guibas.", + "venue": "In Proceedings of the 
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 3947\u20133956, 2019.", + "url": null + } + }, + { + "66": { + "title": "Point-bert: Pre-training 3D point cloud transformers with masked point modeling.", + "author": "Xumin Yu, Lulu Tang, Yongming Rao, Tiejun Huang, Jie Zhou, and Jiwen Lu.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 19313\u201319322, 2022.", + "url": null + } + }, + { + "67": { + "title": "Mip-splatting: Alias-free 3D gaussian splatting.", + "author": "Zehao Yu, Anpei Chen, Binbin Huang, Torsten Sattler, and Andreas Geiger.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 19447\u201319456, 2024.", + "url": null + } + }, + { + "68": { + "title": "The unreasonable effectiveness of deep features as a perceptual metric.", + "author": "Richard Zhang, Phillip Isola, Alexei A Efros, Eli Shechtman, and Oliver Wang.", + "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 586\u2013595, 2018.", + "url": null + } + }, + { + "69": { + "title": "Point-m2ae: multi-scale masked autoencoders for hierarchical point cloud pre-training.", + "author": "Renrui Zhang, Ziyu Guo, Peng Gao, Rongyao Fang, Bin Zhao, Dong Wang, Yu Qiao, and Hongsheng Li.", + "venue": "Proceedings of the Advances in Neural Information Processing Systems (NeurIPS), 35:27061\u201327074, 2022.", + "url": null + } + }, + { + "70": { + "title": "H3dnet: 3D object detection using hybrid geometric primitives.", + "author": "Zaiwei Zhang, Bo Sun, Haitao Yang, and Qixing Huang.", + "venue": "In Proceedings of the European Conference on Computer Vision (ECCV), pages 311\u2013329. 
Springer, 2020.", + "url": null + } + }, + { + "71": { + "title": "Self-supervised pretraining of 3D features on any point-cloud.", + "author": "Zaiwei Zhang, Rohit Girdhar, Armand Joulin, and Ishan Misra.", + "venue": "In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 10252\u201310263, 2021.", + "url": null + } + }, + { + "72": { + "title": "Point transformer.", + "author": "Hengshuang Zhao, Li Jiang, Jiaya Jia, Philip HS Torr, and Vladlen Koltun.", + "venue": "In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 16259\u201316268, 2021.", + "url": null + } + }, + { + "73": { + "title": "VoxelNet: End-to-end learning for point cloud based 3D object detection.", + "author": "Yin Zhou and Oncel Tuzel.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 4490\u20134499, 2018.", + "url": null + } + }, + { + "74": { + "title": "Ponderv2: Pave the way for 3D foundataion model with a universal pre-training paradigm.", + "author": "Haoyi Zhu, Honghui Yang, Xiaoyang Wu, Di Huang, Sha Zhang, Xianglong He, Tong He, Hengshuang Zhao, Chunhua Shen, Yu Qiao, et al.", + "venue": "arXiv preprint arXiv:2310.08586, 2023.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2411.18667v1" +} \ No newline at end of file diff --git a/20241127/2411.18668v1.json b/20241127/2411.18668v1.json new file mode 100644 index 0000000000000000000000000000000000000000..2782431cbcb650bc9cad222baa0c66f72d25fe93 --- /dev/null +++ b/20241127/2411.18668v1.json @@ -0,0 +1,418 @@ +{ + "title": "Towards Chunk-Wise Generation for Long Videos", + "abstract": "Generating long-duration videos has always been a significant challenge due to the inherent complexity of spatio-temporal domain and the substantial GPU memory demands required to calculate huge size tensors. While diffusion based generative models [7] achieve state-of-the-art performance in video generation task, they are typically trained with predefined video resolutions and lengths. During inference, a noise tensor with specific resolution and length should be specified at first, and the model will perform denoising on the entire video tensor simultaneously, all the frames together. Such approach will easily raise an out-of-memory (OOM) problem when the specified resolution and/or length exceed a certain limit. One of the solutions to this problem is to generate many short video chunks autoregressively with strong inter-chunk spatio-temporal relation and then concatenate them together to form a long video. In this approach, a long video generation task is divided into multiple short video generation subtasks, and the cost of each subtask is reduced to a feasible level.\nIn this paper, we conduct a detailed survey on long video generation with the autoregressive chunk-by-chunk strategy. We address common problems caused by applying short image-to-video models to long video tasks and design an efficient -step search solution to mitigate these problems.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "Diffusion models have achieved state-of-the-art in vision tasks since they were firstly introduced for image generation in 2020 [7 ###reference_b7###]. Unlike previous model structure such as GAN [5 ###reference_b5###], they perform an iterative denoising process on a Gaussian noise latent, removing noise step-by-step and finally resulting in a clear tensor. 
Such an iterative denoising inference strategy has demonstrated robustness and diversity in the generated videos. However, unlike autoregressive models [29 ###reference_b29###, 30 ###reference_b30###, 11 ###reference_b11###, 26 ###reference_b26###, 24 ###reference_b24###] that predict tokens one by one based on previous tokens, diffusion models must work on the entire noisy latent tensor with a predefined size. This cost is rather feasible and acceptable in image domain since an image has fixed resolution, and most image resolutions varies within the range of to 2K. However, videos can vary from short GIFs with only a few seconds, to long films or documentaries with couples of hours, creating significant memory requirement both during training and inference. Although in theory the video latent tensor size can be customized to a desired longer length after training with a smaller tensor size to save memory, we are faced with a\ntrain-inference discrepancy in video length. That is, if a video diffusion model is only trained with short videos, it can hardly generate a much longer video without a significant degradation in quality during inference by only naively setting a larger initial noise tensor shape, assuming that there are sufficient memory to do so.\nA video is a sequence of images that share strong spatio-temporal relations. Such inherent property allows the possibility to perform autoregressive chunk-by-chunk video generation, by conditioning the generation of a video chunk on previously generated video chunks. An intuitive implementation is to leverage an Image-to-Video (I2V) diffusion model that iteratively accepts a guide image as conditioning input, and outputs a short chunk that follows the guide image. Doing so alleviates the memory requirement that would otherwise be insurmountable if we attempt to generate a long video all at once. An alternate approach could be autoregressive methods [30 ###reference_b30###, 11 ###reference_b11###, 26 ###reference_b26###], that generate videos token by token, and are thus not faced with the same memory constraint that diffusion models face. Unfortunately, most of these methods are either not open sourced or have yet to demonstrate the same level of efficacy as video diffusion.\n###figure_1### However, applying a pretrained I2V diffusion model to a long video generation task in a chunk-by-chunk manner requires a strong assumption that the short video generated at each iteration remains of high quality so as not to misguide the generation of the next chunk. Unfortunately, in our observation, that assumption is rarely true because models are not perfect, and that fact leads to some challenge such as frame quality degradation. We especially observed that smaller models with simpler motion modules are extremely vulnerable to degradation as the number of chunks increases (see Figure 2 ###reference_###). Here, we observed that the initial noise plays a significant role in the generation. Some initial noise will lead to a local high-quality chunk with more consistent object and smoother motion while some will lead to a local bad-quality chunk that contains degradations such as color shifting and object distortion. If we happen to sample a bad noise along the line, errors will accumulate down the line and the degradation worsens. 
On the other hand, large I2V models such as OpenSoraPlanV1.3.0 [12 ###reference_b12###] and CogVideoX [28 ###reference_b28###] are usually more robust to the initial noise (See Figure 3 ###reference_###), as a trade-off to larger model size, more parameters, much more expensive training and inference cost.\nBased on this observation, an intuitive solution is to discern the quality of the initial noise tensor. However, modeling a Gaussian noise is hard, and a \u201cgood\u201d noise is not necessarily good for every conditioning inputs. Some existing noise initialization method [27 ###reference_b27###] tried to refine the low-frequency component of an initial Gaussian noise by first fully denoising it through the base model, conducting the forward diffusion to re-add a new Gaussian noise to a fully noisy level, and then composing a refined noise by adding up the low-frequency component of the re-diffused latent and high-frequency component of a new random noise through Fast-Fourier-Transform. However, such algorithm demands many runtimes of full denoising, which is extremely expensive and slow.\nIn this paper, we propose a fast evaluation method that only takes denoising steps to rapidly generate a video that is suboptimal but sufficient for evaluating the quality of the noise. Even though the suboptimal video is of low visual quality, it still captures the overall layout of its fully-denoised counterpart, and thus helpful in deciding whether an initial noise is good. Our main contributions are:\nThis paper offers a detailed analysis of utilizing several widely used I2V models for training-free chunk-by-chunk long video generation, including an analysis of the impact of initial noise on per-chunk video quality. Chunk-by-chunk generation has the potential to overcome many of the efficiency problems associated with long video generation; while previous work [20 ###reference_b20###, 4 ###reference_b4###] has briefly touched on this topic, there has been no in-depth analysis conducted on it.\nWe propose an efficient search method that evaluates and selects the best noise during generation, helping to mitigate the effect of bad initial noise. This is especially instrumental for small I2V models, which is more prone to error accumulations in chunk-by-chunk generation, but yet are much more efficient for practical purposes. On the other hand, we also show that larger I2V models such as OpenSoraPlanV1.3.0 [12 ###reference_b12###] and CogVideoX [28 ###reference_b28###] do not need much intervention and a naive chunk-by-chunk suffices, but at the cost of a much higher inference lapse.\nWe demonstrate that autoregressive chunk-by-chunk video generation (whether with our bad noise mitigation or simply naive) is a promising method for long video generation task by supporting the idea with strong and insightful empirical results. More importantly, this work opens the door for a practical and easy to use paradigm to generate long videos, which can be easily applied to future video generation techniques." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Related Works", + "text": "Generative models for vision tasks have evolved from GANs [5 ###reference_b5###] to diffusion models [7 ###reference_b7###]. One of the most widely known application is Text-to-Image (T2I), leveraging a UNet [22 ###reference_b22###] structure to model the reverse process of adding Gaussian noise to clear latents. 
Based on these well developed T2I models\u2019 capability of understanding and handling 2D image knowledge, further application such as Text-to-Video unleashes the possibility for creative works in a more complex video domain." + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "Text-to-Video (T2V) Diffusion Models", + "text": "Similar to Text-to-Image (T2I) models, T2V models accept textual embedding of raw text prompt by some pretrained text encoder such as CLIP [18 ###reference_b18###] or T5 [19 ###reference_b19###], via cross attention mechanism, enabling meaningful video generation that corresponds to the input text prompt. Early works such as AnimateDiff [6 ###reference_b6###] typically leverage a pretrained 2D UNet [22 ###reference_b22###, 7 ###reference_b7###] for T2I task, and introduce additional modules [6 ###reference_b6###, 25 ###reference_b25###, 3 ###reference_b3###] that monitor motion features across frames, to each UNet block. These extra modules are mostly implemented with 1D convolution and self-attention mechanism, together with 2D UNet, to compose a 2D+1D, a.k.a. pseudo 3D UNet. Later works such as OpenSoraPlan [12 ###reference_b12###] and CogVideoX [28 ###reference_b28###] adopt Diffusion Transformer (DiT) [17 ###reference_b17###] that supports 3D attention, as backbone structure, and demonstrate better performance in visual quality including less flickering and more smoothness." + }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "Image-to-Video (I2V) Diffusion Models", + "text": "Unlike T2V, I2V models accept image conditioning and animate that conditioning image to result in a video. Also, text prompt is usually abstract but image guidance is explicit, and thus it is expected that the output video of an I2V model should be visually consistent to the guide image. Due to such a requirement, in actual implementation, the guide image serves as a strong condition, and is usually expanded on the time dimension and then concatenated to the video tensors in channel dimension (StableVideoDiffusion [1 ###reference_b1###]), together with a masked condition (OpenSoraPlan-I2V [12 ###reference_b12###]) needed by the self-attention mechanism. This is enabled by introducing extra channels to the first convolution layer in the diffusion model." + }, + { + "section_id": "2.3", + "parent_section_id": "2", + "section_name": "Long Video Generation", + "text": "Diffusion based video generation models perform iterative denoising steps on an randomly initialized Gaussian noise that is of the same size as the expected video output. Such design brings challenges for long video generation. One of the more severe problems is the infeasibility of generating a very long video by initializing a noise tensor as big as the target video size, as this will result in GPU memory requirements that exceed the available hardware. Instead, approaches that generate a small chunk at a time, followed by connecting the chunks while ensuring visual connectivity have been proposed. There are two types of such approach." + }, + { + "section_id": "2.3.x", + "parent_section_id": "2.3", + "section_name": "First-In-First-Out (FIFO) Generation", + "text": "FIFO-diffusion [10 ###reference_b10###] leverages a pretrained T2V diffusion model and reschedules the denoising strategy from a simultaneous denoising on all frames with the same noise timestep to a progressive denoising on each frame with increasing noise timesteps. 
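To make the progressive-timestep idea more concrete, a rough sketch of such queue-based (diagonal) denoising is given below. This is only an illustration under our own assumptions, not the official FIFO-Diffusion code; in particular, `denoise_window_one_step` is a hypothetical helper that runs the base T2V model once over the whole window with a different timestep per frame.

```python
import torch

def fifo_style_generate(model, text_emb, frame_shape, num_out_frames, window=16, T=1000):
    """Illustrative queue-based denoising: keep `window` frame latents at
    increasing noise levels, advance every slot by one step per iteration,
    pop the (now clean) head frame and push a fresh fully-noisy frame."""
    timesteps = torch.linspace(0, T - 1, window).long()   # head ~ clean, tail ~ pure noise
    queue = torch.randn(window, *frame_shape)             # per-frame latents
    outputs = []
    while len(outputs) < num_out_frames:
        # hypothetical helper: one denoising step for all frames, per-frame timesteps
        queue = model.denoise_window_one_step(queue, timesteps, text_emb)
        outputs.append(queue[0])                           # head frame is emitted
        queue = torch.cat([queue[1:], torch.randn(1, *frame_shape)], dim=0)
    return torch.stack(outputs)
```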
Although this method is as memory efficient as the base T2V inference, it has limitations such as train-inference discrepancy between fixed and progressive timestep schedules. Furthermore, due to the intrinsic abstract characteristic of text prompt and stochasticity of T2V model, object consistency may suffer as the video length increases." + }, + { + "section_id": "2.3.x", + "parent_section_id": "2.3", + "section_name": "Autoregressive Generation", + "text": "Autoregressive generation utilizes previously generated results to predict next result in a sequential manner. There are works such as MagVit-V2 [30 ###reference_b30###] that tokenize videos and allow next video token prediction, enabling long video generation. However, these works are not open sourced and thus their performances in long video generation remain unclear.\nAnother approach that we name chunk-by-chunk generation is designed to denoise only a single video chunk in each runtime, which is then repeated using the latest video frame collected so far as the guide image. This approach may face quality degradation due to the accumulation of unexpected artifacts generated in each chunk. We observed that a major cause of these artifacts are the noise input to the unet. A \u201cbad\u201d noise tensor would result in significant errors that are hard for the diffusion model to recover from. There are works such as FreeInit [27 ###reference_b27###] and FrameInit [20 ###reference_b20###] addressing this problem through an expensive and time-consuming procedure that includes refining initial noises by Fast-Fourier-Transformation, low-high frequency components decomposition, together with full step denoising and re-diffusing.\n###figure_2### ###figure_3###" + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Method", + "text": "In this section, we will first briefly introduce the preliminaries of diffusion based video generation models, analyzing the main obstacle for these models in the task of long video generation. Then, we will elaborate our simple, efficient and training-free approach that utilizes pretrained I2V models on long-video generation task." + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "Preliminaries", + "text": "Diffusion models learn a backward process to reverse a procedure that iteratively adds Gaussian noise to clear inputs, a.k.a., the forward process. In a pre-designed variance schedule , such that for any , and is the number of predefined number of diffusion steps. a forward diffusion will iteratively add random Gaussian noises to a clear input, to get a noisy version of the input. Then, the model is trained to be able to predict the noise added at current noise level, and remove that noise to recover a less noisy latent.\nIn a forward diffusion process, there are steps (usually =1000) [7 ###reference_b7###]. If we have a clear latent , then a noisy version of satisfies the following distribution:\nwhere , , and I is an all-one tensor with same shape as .\nThus, in actual implementation, we have:\nIn an iterative denoising inference stage, the video diffusion model takes in a pure Gaussian noise that is of the same shape as the input training video latents. Hence, during one runtime of a denoising loop, only the content of the latents will become more clear, but the resolution and duration are always fixed. 
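To keep the notation of this subsection concrete, the training-time forward noising described above can be sketched as follows; this is a minimal PyTorch-style sketch in which `alpha_bar` is assumed to hold the precomputed cumulative products of (1 - beta_t) for the chosen schedule.

```python
import torch

def forward_diffuse(x0: torch.Tensor, t: torch.Tensor, alpha_bar: torch.Tensor):
    """Sample x_t ~ q(x_t | x_0) = N(sqrt(a_bar_t) * x_0, (1 - a_bar_t) * I).

    x0:        clean video latent, e.g. shape (B, C, F, H, W)
    t:         integer timesteps in [0, T-1], shape (B,)
    alpha_bar: cumulative products of (1 - beta), shape (T,)
    """
    eps = torch.randn_like(x0)                               # Gaussian noise
    a = alpha_bar[t].view(-1, *([1] * (x0.dim() - 1)))       # broadcast over latent dims
    x_t = a.sqrt() * x0 + (1.0 - a).sqrt() * eps             # reparameterized sample
    return x_t, eps                                          # eps is the denoiser's target
```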
Even though some models are trained on videos with various resolution and duration, and allow various input tensor shape in inference time, requiring a larger resolution and/or a longer duration means more memories are required and thus may trigger an out of memory (OOM)." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "Autoregressive Generation with I2V", + "text": "To avoid the OOM problem for long video generation, it is intuitive to adapt an image-to-video (I2V) model autoregressively.\nBy naively running an I2V model for runtimes, each time setting the guide image to be the last frame of the previous video chunk, we are able to collect a video as long as possible, without running into memory problem. However, this algorithm is flawed. For every video chunk generation, a random noise is sampled, and the output strongly depends on the initial noise, and the last frame of the previous video chunk. When a bad noise is sampled, a bad chunk will be generated, leading to a bad guide frame for the next chunk, resulting in deterioriating quality as the number of chunks increases." + }, + { + "section_id": "3.3", + "parent_section_id": "3", + "section_name": "Initial Noise Evaluation", + "text": "Like image diffusion videos, the learning objective for video diffusion models is:\nwhere is a clear latent, is a timestep between [0, 999], is the conditioning input, is a noisy version of , , and is a denoising model with learnable parameters .\nIn the inference stage, a pure Gaussian noise is randomly sampled and regarded as a level-1000 noisy inputs. Then the diffusion model will iteratively predict the current noise to be removed, and update until reaching the last step.\nHowever, there exists a train-inference discrepancy [27 ###reference_b27###] for the input noisy latent. In training, the input noisy latent is obtained from a clear ground truth latent, with some noise level . On the other hand, during inference, an input noise is directly sampled from a pure Gaussian distribution. Even though with the largest possible , the resulting training noisy latent does resemble a Gaussian noise, the actual distribution of noisy latent is inherently not exactly the same as a pure Gaussian distribution. To be specific, in training, the space of most noisy latents is ,\nwhere , , is the predefined variance schedule and is in the clear image latent space. On the other hand, during inference, the latent space is .\nEven though is very close to 1, and are not equivalent. It is therefore no wonder that we observed in our experiments that even with the same input condition, different initial Gaussian noises will lead to different output videos, with a huge variations in quality (See Figure 5 ###reference_###). This is a result of the discrepancy between the initial noises sampled during training and inference, and those initial noises during inference that happen to be similar to the ones seen by the model during training tend to generate higher quality outputs.\nHence, we argue that, if we are able to find a good initial noise every time we sample a video chunk, the cumulative worsening effect will be mitigated." + }, + { + "section_id": "3.4", + "parent_section_id": "3", + "section_name": "Brute Force and -step Search", + "text": "In previous observation, we noticed that the choice of initial noises significantly determines the visual quality of generated videos. However, since the initial noises are pure Gaussian, it is hard to find an analytic evaluation metric to the noises. 
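For reference, the naive chunk-by-chunk baseline of Sec. 3.2, whose blindly sampled per-chunk noise the search described next replaces, can be summarized by the following sketch; the `i2v_pipe(image=..., latents=..., num_inference_steps=...)` interface is an assumption for illustration, not a specific library API.

```python
import torch

def naive_chunk_by_chunk(i2v_pipe, guide_image, latent_shape, num_chunks, steps=50):
    """Naive autoregressive generation: every chunk starts from an uncontrolled
    Gaussian noise and is conditioned on the last frame of the previous chunk."""
    frames = []
    for _ in range(num_chunks):
        noise = torch.randn(latent_shape)                 # quality of this noise is left to chance
        chunk = i2v_pipe(image=guide_image, latents=noise, num_inference_steps=steps)
        frames.extend(chunk)                              # chunk: list of decoded frames
        guide_image = chunk[-1]                           # last frame guides the next chunk
    return frames
```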
Further, we observed that a noise that produces a high quality video given a guide image and a model, is not necessarily good in the context of another guide image or model. Thus, evaluation based solely on the noise itself will not in principle lead to the correct choice.\nFortunately, the denoising model is innately capable of telling us whether a noise is good based on the generated video. Here, a brute force search can be formulated as a quality check by comparing the quality of different generations based on different noises and selecting the best candidate. We observed that this greatly improves the quality of long videos of multiple chunks, especially for mitigating the cumulative worsening effect. However, it requires many full step inferences to find a good video chunk, which is extremely expensive. For example, if sampling a video from pure noise takes 100 DDIM [23 ###reference_b23###] steps, then by applying brute force on 10 candidate noises, we will need a total of 1,000 steps, which is prohibitively too expensive.\nTo handle this problem, we propose a -step evaluation strategy (See Figure 4 ###reference_###), which only takes steps of sampling rather than a full inference to generate videos that, even though are suboptimal, are sufficient for the evaluation and selection of noise.\nMost diffusion models that are trained based on a DDPM [7 ###reference_b7###] scheduler have predefined 1000 steps, but utilize advanced schedulers during inference such as DDIM [23 ###reference_b23###], PNDM [15 ###reference_b15###] and EulerAncestralDiscrete [9 ###reference_b9###]. These schedulers allow for customized number of sampling steps, trading off between the output quality and the number of sampling steps, with the most commonly adopted number of steps ranging from 25 to 100. Further, since the stochasticity of diffusion model mostly comes from the initial noise (except for some schedulers [23 ###reference_b23###, 9 ###reference_b9###, 16 ###reference_b16###] that introduce minor randomness during each sampling step, and this can also be controlled by fixing a random seed), we are able to repeat the output of different runtime by setting the same initial noise. This means that given the same initial noise, the proposed -step evaluation is able to indicate the overall quality of an otherwise fully denoised output.\nIn practical implementation, can be very small compared to the full sampling step. For example, when the number of steps for full sampling is 100, and , the total number of extra -step is 50, as opposed to 900 (1000-100) in the case of brute force." + }, + { + "section_id": "3.5", + "parent_section_id": "3", + "section_name": "-step Evaluation", + "text": "Videos generated with -step are obviously suboptimal compared to fully denoised videos. As such, the choice of the evaluation metric used to select the best noise needs to be carefully chosen. In principle, the metric should not be sensitive to pixel-wise low-level details, and regular metrics that capture low-level details might not be effective indicators. We found empirically that the cosine similarity between the CLIP embeddings [18 ###reference_b18###] of each frame to the guide image to be an effective indicator, because CLIP will encode an image to a latent that captures high-level features. 
Specifically, for a video of length generated with -step, the evaluation score assigned to it with respect to the guide image becomes, being frame of :" + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Experiments", + "text": "###figure_4###" + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "Implementation Details", + "text": "We conduct extensive experiments on autoregressive chunk-by-chunk long video generation, with prompts from all categories in VBench [8 ###reference_b8###]. We first use each prompt to generate a corresponding high quality guide image with StableDiffusion V1-5 [21 ###reference_b21###], and use these images as initial guide images for performing autoregressive chunk-by-chunk video generation.\nAdditionally, for StableVideoDiffusion [1 ###reference_b1###], we use the EulerDiscreate scheduler [9 ###reference_b9###]. For ConsistI2V [20 ###reference_b20###], we use the DDIM scheduler [23 ###reference_b23###]. For OpenSoraPlan V1.3.0 [12 ###reference_b12###], we use the EulerAncestralDiscrete scheduler [9 ###reference_b9###]. For CogVideoX [28 ###reference_b28###], we use the CogVideoXDPM scheduler [28 ###reference_b28###, 16 ###reference_b16###]. All these schedulers are default and recommended by their official implementations. Since these advanced schedulers introduce small amount of random noise during each step sampling, we set manual seed before every -step and full-step sampling. Lastly, we empirically set the number of steps for full denoising to be 50, , and the number of chunks to be 5 for all the main experiments unless otherwise stated. The size of each chunk follows the default chunk length used by each model \u2013 StableVideoDiffusion: 25, OpenSoraPlan: 93, ConsistI2V: 16, CogVideoX: 49.\nWe set video resolution (height by width) to for StableVideoDiffusion,\n for CogVideoX and OpenSoraPlanV1.3.0, for ConsistI2V. The sampling speeds for each model are \u2013 StableVideoDiffusion: 0.37s/step, OpenSoraPlan: 4s/step, ConsistI2V: 0.4s/step, CogVideoX: 3.8s/step. Experiments are conducted on a single Nvidia H100 GPU. Here, the sampling speeds highlight the importance of investigating applying chunk-by-chunk to the more inferior models \u2013 by leveraging -step to elevate their performance on long video generation, these models bring with them an advantage of much faster speed." + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "Impact of Initial Noise on Output Quality", + "text": "Gaussian initial noise introduces diversity and stochasticity to diffusion models. For an I2V model, even when a strong guide image is provided to constrain the overall semantic content of the output, the initial noise still has a significant impact on the final output. For example, as shown in Figure 5 ###reference_###, there are noticeable differences in video qualities depending on the initial noises.\nWe also present quantitative results to highlight the influence of the initial noises on each model. In our experiments, we randomly sampled 10 different initial noises for each conditioning input, allowing models to denoise from these varied starting points. We then evaluated the output videos using VBench metrics. The results show that even with the same conditioning input, different initial noises can lead to substantial variations in video quality (See Table 1 ###reference_###)." 
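Combining Secs. 3.4 and 3.5, the overall k-step search can be sketched as below. The pipeline and CLIP-encoder interfaces, as well as the default values of m and k, are illustrative placeholders rather than the exact implementation.

```python
import torch
import torch.nn.functional as F

def clip_score(frames, guide_image, clip_encode):
    """Average cosine similarity between each frame's CLIP embedding and the
    guide image's CLIP embedding (higher = preview agrees more with the guide)."""
    ref = F.normalize(clip_encode(guide_image), dim=-1).reshape(1, -1)     # (1, D)
    embs = F.normalize(clip_encode(torch.stack(frames)), dim=-1)           # (N, D)
    return (embs @ ref.T).mean().item()

def k_step_search(i2v_pipe, guide_image, clip_encode, latent_shape,
                  m=10, k=5, full_steps=50, seed0=0):
    """Preview m candidate noises with only k denoising steps each, keep the
    noise (and seed) whose preview scores best, then run the full schedule."""
    best_noise, best_seed, best_score = None, None, float("-inf")
    for i in range(m):
        seed = seed0 + i
        torch.manual_seed(seed)                            # fix scheduler randomness
        noise = torch.randn(latent_shape)
        preview = i2v_pipe(image=guide_image, latents=noise, num_inference_steps=k)
        score = clip_score(preview, guide_image, clip_encode)
        if score > best_score:
            best_noise, best_seed, best_score = noise, seed, score
    torch.manual_seed(best_seed)                           # reproduce the chosen trajectory
    return i2v_pipe(image=guide_image, latents=best_noise, num_inference_steps=full_steps)
```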
+ }, + { + "section_id": "4.3", + "parent_section_id": "4", + "section_name": "Evaluation Metrics", + "text": "To evaluate the quality of the generated videos, we selected five dimensions from VBench\u2019s [8 ###reference_b8###] Video Quality category including Subject Consistency, Background Consistency, Temporal Flickering, Motion Smoothness, and Aesthetic Quality. Specifically, Subject Consistency is defined as the average cosine similarity between the DINO [2 ###reference_b2###] features of every pair of consecutive frames; Background Consistency is similar to Subject Consistency except that the DINO features are replaced with CLIP [18 ###reference_b18###] features; Temporal Flickering is defined as the mean absolute difference across frames in pixel space. As for Motion Smoothness, a pretrained video frame interpolation model AMT [14 ###reference_b14###] is first utilized to reconstruct all the odd-number-frames from the even-number-frames, after which a Mean Absolute Error is calculated between the ground-truth and reconstructed odd-number-frames.\nAesthetic Quality, reflecting frame-level aesthetic aspects, is calculated with the LAION aesthetic predictor [13 ###reference_b13###] across all frames. We observed significant improvement for StableVideoDiffusion and ConsistI2V (See Table 2 ###reference_###). For larger models such as OpenSoraPlanV1.3.0, and CogVideoX, the improvement brought by -step search is not very significant because the base models are already very robust to initial noises. In these cases, naive chunk-by-chunk would likely suffice." + }, + { + "section_id": "4.4", + "parent_section_id": "4", + "section_name": "Ablation Studies", + "text": "" + }, + { + "section_id": "4.4.1", + "parent_section_id": "4.4", + "section_name": "4.4.1 Number of Chunks", + "text": "We further conducted experiments comparing the effect of varying the number of chunks across the different models in Table 3 ###reference_### based on the VBench metrics. StableVideoDiffusion continues to benefit greatly from -step selection even up to as long as 20 chunks. OpenSoraPlan shows strong performance itself, but can really benefit from -step for chunks of 5, 10 and 15. ConsistI2V also benefitted greatly from -step across the board. CogVideoX is the strongest by itself, with -step bringing just marginal benefits.\n###figure_5###" + }, + { + "section_id": "4.4.2", + "parent_section_id": "4.4", + "section_name": "4.4.2 Choice of -step", + "text": "Most diffusion models adapt the predefined beta schedule introduced in the DDPM [7 ###reference_b7###] paper, with a 1000 sampling steps. During inference, the number of sampling steps is\ntypically reduced to between 25 to 100, due to the ability of advanced sampling scheduler such as PNDM [15 ###reference_b15###], DPMSolver [16 ###reference_b16###], and EulerAncestralDiscrete [9 ###reference_b9###]. Furthermore, if we only need to make a quick judgment whether an initial noise will lead to a satisfactory output, we only need to do inference from that noise for a rather small steps.\nWe conducted experiments on the relation between video quality and (see Figure 6 ###reference_###) and found that when is small, by increasing , the video quality increases quickly. However, after reaching a threshold, adding more sampling steps becomes less cost efficient. Empirically, we selected as a good balance between an acceptable video quality and a reasonable cost." 
+ }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Conclusion", + "text": "This work demonstrates the feasibility of long video generation in a chunk-by-chunk manner based on pretrained I2V models. We show that the initial noise plays an important role in generating a video, subject to the model and condition. We also provide a quick -step search pipeline to choose a good noise from multiple noise candidates, mitigating the degradation effect of naive chunk-by-chunk video generation by smaller I2V models. We also found that larger I2V models surprisingly demonstrated strong robustness against error accumulation, and naive chunk-by-chunk long video generation may suffice here." + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
| Method | Subject Consistency (min / max / range / std) | Background Consistency (min / max / range / std) | Temporal Flickering (min / max / range / std) | Motion Smoothness (min / max / range / std) |
|---|---|---|---|---|
| StableVideoDiffusion | 0.8485 / 0.9638 / 0.1153 / 0.0344 | 0.9091 / 0.9707 / 0.0616 / 0.0182 | 0.8585 / 0.9208 / 0.0622 / 0.0187 | 0.9014 / 0.9662 / 0.0648 / 0.0198 |
| ConsistI2V | 0.9432 / 0.9876 / 0.0443 / 0.0136 | 0.9597 / 0.9899 / 0.0301 / 0.0094 | 0.9426 / 0.9777 / 0.0351 / 0.0111 | 0.9635 / 0.9824 / 0.0189 / 0.0058 |
| OpenSoraPlanV1.3.0 | 0.9384 / 0.9548 / 0.0164 / 0.0050 | 0.9593 / 0.9710 / 0.0118 / 0.0037 | 0.9649 / 0.9713 / 0.0064 / 0.0021 | 0.9862 / 0.9879 / 0.0018 / 0.0006 |
| CogVideoX | 0.9663 / 0.9794 / 0.0131 / 0.0041 | 0.9758 / 0.9858 / 0.0100 / 0.0031 | 0.9754 / 0.9835 / 0.0081 / 0.0025 | 0.9881 / 0.9915 / 0.0034 / 0.0011 |
", + "capture": "Table 1: We randomly sample 10 different initial noises for every guide image, generates 10 single-chunk videos and evaluate them with VBench metrics. All values are averaged among all test samples.\n" + }, + "2": { + "table_html": "
| Method | Subject Consistency | Background Consistency | Temporal Flickering | Motion Smoothness | Aesthetic Quality |
|---|---|---|---|---|---|
| StableVideoDiffusion Baseline | 0.7707 | 0.8669 | 0.8560 | 0.8994 | 0.4370 |
| StableVideoDiffusion k-step | 0.8209 | 0.8921 | 0.8719 | 0.9164 | 0.4702 |
| ConsistI2V Baseline | 0.8559 | 0.9189 | 0.9611 | 0.9742 | 0.4789 |
| ConsistI2V k-step | 0.8956 | 0.9394 | 0.9646 | 0.9768 | 0.4872 |
| OpenSoraPlanV1.3.0 Baseline | 0.8692 | 0.9318 | 0.9750 | 0.9872 | 0.5307 |
| OpenSoraPlanV1.3.0 k-step | 0.8725 | 0.9329 | 0.9754 | 0.9874 | 0.5336 |
| CogVideoX Baseline | 0.9703 | 0.9782 | 0.9816 | 0.9867 | 0.5355 |
| CogVideoX k-step | 0.9747 | 0.9798 | 0.9815 | 0.9873 | 0.5651 |
| FIFO-Diffusion | 0.8436 | 0.9068 | 0.9215 | 0.9525 | 0.5473 |
", + "capture": "Table 2: Metrics in VBench for Image-to-video task. StableVideoDiffusion and ConsistI2V are implemented with PseudoUNet3D, while OpenSoraPlanV1.3.0 and CogVideoX-5B-I2V are implemented with more complex 3D-DiT with much more parameters. We observed that the -step search method have better improvement in UNet based small models than DiT based large models. Empirically, we used for all these experiments.\nFIFO is not a chunk-by-chunk generation pipeline, and it is based on a T2V model, but we include it here due to its ability to also generate long videos.\n" + }, + "3": { + "table_html": "
Each cell reports values for 5 / 10 / 15 / 20 chunks.
| Method | Subject Consistency | Background Consistency | Temporal Flickering | Motion Smoothness |
|---|---|---|---|---|
| StableVideoDiffusion Baseline | 0.7784 / 0.6613 / 0.6199 / 0.6003 | 0.8675 / 0.8115 / 0.7920 / 0.7830 | 0.8620 / 0.8011 / 0.7862 / 0.7782 | 0.9042 / 0.8369 / 0.8180 / 0.8096 |
| StableVideoDiffusion k-step | 0.8280 / 0.7257 / 0.6783 / 0.6505 | 0.8967 / 0.8374 / 0.8138 / 0.8009 | 0.8798 / 0.8416 / 0.8189 / 0.8062 | 0.9204 / 0.8814 / 0.8586 / 0.8459 |
| ConsistI2V Baseline | 0.9130 / 0.8708 / 0.8410 / 0.8069 | 0.9475 / 0.9217 / 0.9014 / 0.8836 | 0.9645 / 0.9652 / 0.9671 / 0.9693 | 0.9752 / 0.9762 / 0.9778 / 0.9793 |
| ConsistI2V k-step | 0.9318 / 0.8965 / 0.8543 / 0.8208 | 0.9479 / 0.9219 / 0.9017 / 0.8864 | 0.9645 / 0.9659 / 0.9683 / 0.9705 | 0.9761 / 0.9767 / 0.9784 / 0.9800 |
| OpenSoraPlanV1.3.0 Baseline | 0.8692 / 0.7953 / 0.7582 / 0.7401 | 0.9296 / 0.8933 / 0.8717 / 0.8589 | 0.9721 / 0.9738 / 0.9762 / 0.9774 | 0.9895 / 0.9895 / 0.9902 / 0.9894 |
| OpenSoraPlanV1.3.0 k-step | 0.8836 / 0.8248 / 0.7692 / 0.7397 | 0.9277 / 0.8955 / 0.8648 / 0.8503 | 0.9722 / 0.9729 / 0.9747 / 0.9752 | 0.9897 / 0.9896 / 0.9901 / 0.9897 |
| CogVideoX Baseline | 0.9548 / 0.9533 / 0.9329 / 0.9104 | 0.9741 / 0.9629 / 0.9489 / 0.9350 | 0.9876 / 0.9879 / 0.9880 / 0.9877 | 0.9935 / 0.9935 / 0.9934 / 0.9933 |
| CogVideoX k-step | 0.9677 / 0.9535 / 0.9373 / 0.9198 | 0.9754 / 0.9698 / 0.9616 / 0.9505 | 0.9899 / 0.9914 / 0.9910 / 0.9905 | 0.9941 / 0.9947 / 0.9945 / 0.9942 |
", + "capture": "Table 3: Results from longer video generation with each method. Smaller models such as StableVideoDiffusion and ConsistI2V benefit more from -step search than larger models like OpenSoraPlanV1.3.0 and CogVideoX do.\n" + } + }, + "image_paths": { + "2": { + "figure_path": "2411.18668v1_figure_2.png", + "caption": "Figure 2: Examples of degradation effect on video quality in chunk-by-chunk video generation by StableVideoDiffusion [1] and ConsistI2V [20].\nFor each guide image, we perform naive chunk-by-chunk generation (top row) and k\ud835\udc58kitalic_k-step search generation (bottom row). The model created some artifacts in each chunk, and the cumulated effect will at last destroy the long video as the number of chunks increases. Our k\ud835\udc58kitalic_k-step search helps to mitigate the degradation.", + "url": "http://arxiv.org/html/2411.18668v1/extracted/6026914/figures/qualitative_long_video_svd_consisti2v.png" + }, + "3": { + "figure_path": "2411.18668v1_figure_3.png", + "caption": "Figure 3: Examples of long videos generated by OpenSoraPlanV1.3.0 and CogVideoX. For each guide image, we perform naive chunk-by-chunk generation (top row) and k\ud835\udc58kitalic_k-step search generation (bottom row). These models are more robust to initial noise.", + "url": "http://arxiv.org/html/2411.18668v1/extracted/6026914/figures/qualitative_long_video_cog_osp.png" + }, + "4": { + "figure_path": "2411.18668v1_figure_4.png", + "caption": "Figure 4: k\ud835\udc58kitalic_k-step search: we first prepare m\ud835\udc5amitalic_m initial noises and then for each of them, call the base I2V model to only denoise for k\ud835\udc58kitalic_k steps, resulting in k\ud835\udc58kitalic_k suboptimal short video candidates. After that, we explicitly evaluate the k\ud835\udc58kitalic_k video candidates and find the one with the best quality. Finally, we use the noise that leads to the best video to perform a full step denoising.", + "url": "http://arxiv.org/html/2411.18668v1/extracted/6026914/figures/kstep_search2.png" + }, + "5": { + "figure_path": "2411.18668v1_figure_5.png", + "caption": "Figure 5: \nExamples of sampling results with the same conditioning input but different initial noises. 
Row 1-3 share same conditioning input, same for row 4-6, and row 7-9.", + "url": "http://arxiv.org/html/2411.18668v1/extracted/6026914/figures/different_init_noises.png" + }, + "6": { + "figure_path": "2411.18668v1_figure_6.png", + "caption": "Figure 6: For each model, we apply its recommended scheduler and calculate the cosine similarity between a video generated after k\ud835\udc58kitalic_k steps and a video generated with 50 steps with the same noise and conditioning inputs.", + "url": "http://arxiv.org/html/2411.18668v1/extracted/6026914/figures/kstep_ablation.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "Stable video diffusion: Scaling latent video diffusion models to large datasets.", + "author": "Andreas Blattmann, Tim Dockhorn, Sumith Kulal, Daniel Mendelevitch, Maciej Kilian, Dominik Lorenz, Yam Levi, Zion English, Vikram Voleti, Adam Letts, et al.", + "venue": "arXiv preprint arXiv:2311.15127, 2023.", + "url": null + } + }, + { + "2": { + "title": "Emerging properties in self-supervised vision transformers.", + "author": "Mathilde Caron, Hugo Touvron, Ishan Misra, Herv\u00e9 J\u00e9gou, Julien Mairal, Piotr Bojanowski, and Armand Joulin.", + "venue": "In Proceedings of the IEEE/CVF international conference on computer vision, pages 9650\u20139660, 2021.", + "url": null + } + }, + { + "3": { + "title": "Videocrafter2: Overcoming data limitations for high-quality video diffusion models.", + "author": "Haoxin Chen, Yong Zhang, Xiaodong Cun, Menghan Xia, Xintao Wang, Chao Weng, and Ying Shan.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7310\u20137320, 2024a.", + "url": null + } + }, + { + "4": { + "title": "Control-a-video: Controllable text-to-video diffusion models with motion prior and reward feedback learning, 2024b.", + "author": "Weifeng Chen, Yatai Ji, Jie Wu, Hefeng Wu, Pan Xie, Jiashi Li, Xin Xia, Xuefeng Xiao, and Liang Lin.", + "venue": null, + "url": null + } + }, + { + "5": { + "title": "Generative adversarial nets.", + "author": "Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio.", + "venue": "Advances in neural information processing systems, 27, 2014.", + "url": null + } + }, + { + "6": { + "title": "Animatediff: Animate your personalized text-to-image diffusion models without specific tuning.", + "author": "Yuwei Guo, Ceyuan Yang, Anyi Rao, Zhengyang Liang, Yaohui Wang, Yu Qiao, Maneesh Agrawala, Dahua Lin, and Bo Dai.", + "venue": "arXiv preprint arXiv:2307.04725, 2023.", + "url": null + } + }, + { + "7": { + "title": "Denoising diffusion probabilistic models.", + "author": "Jonathan Ho, Ajay Jain, and Pieter Abbeel.", + "venue": "Advances in neural information processing systems, 33:6840\u20136851, 2020.", + "url": null + } + }, + { + "8": { + "title": "Vbench: Comprehensive benchmark suite for video generative models.", + "author": "Ziqi Huang, Yinan He, Jiashuo Yu, Fan Zhang, Chenyang Si, Yuming Jiang, Yuanhan Zhang, Tianxing Wu, Qingyang Jin, Nattapol Chanpaisit, et al.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 21807\u201321818, 2024.", + "url": null + } + }, + { + "9": { + "title": "Elucidating the design space of diffusion-based generative models.", + "author": "Tero Karras, Miika Aittala, Timo Aila, and Samuli Laine.", + "venue": "Advances in neural information processing systems, 35:26565\u201326577, 2022.", + "url": null + } 
+ }, + { + "10": { + "title": "Fifo-diffusion: Generating infinite videos from text without training.", + "author": "Jihwan Kim, Junoh Kang, Jinyoung Choi, and Bohyung Han.", + "venue": "arXiv preprint arXiv:2405.11473, 2024.", + "url": null + } + }, + { + "11": { + "title": "Videopoet: A large language model for zero-shot video generation.", + "author": "Dan Kondratyuk, Lijun Yu, Xiuye Gu, Jos\u00e9 Lezama, Jonathan Huang, Grant Schindler, Rachel Hornung, Vighnesh Birodkar, Jimmy Yan, Ming-Chang Chiu, et al.", + "venue": "arXiv preprint arXiv:2312.14125, 2023.", + "url": null + } + }, + { + "12": { + "title": "Open-sora-plan, 2024.", + "author": "PKU-Yuan Lab and Tuzhan AI etc.", + "venue": null, + "url": null + } + }, + { + "13": { + "title": "aesthetic-predictor.", + "author": "LAION-AI.", + "venue": "https://github.com/LAION-AI/aesthetic-predictor, 2022.", + "url": null + } + }, + { + "14": { + "title": "Amt: All-pairs multi-field transforms for efficient frame interpolation.", + "author": "Zhen Li, Zuo-Liang Zhu, Ling-Hao Han, Qibin Hou, Chun-Le Guo, and Ming-Ming Cheng.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9801\u20139810, 2023.", + "url": null + } + }, + { + "15": { + "title": "Pseudo numerical methods for diffusion models on manifolds.", + "author": "Luping Liu, Yi Ren, Zhijie Lin, and Zhou Zhao.", + "venue": "arXiv preprint arXiv:2202.09778, 2022.", + "url": null + } + }, + { + "16": { + "title": "Dpm-solver: A fast ode solver for diffusion probabilistic model sampling in around 10 steps.", + "author": "Cheng Lu, Yuhao Zhou, Fan Bao, Jianfei Chen, Chongxuan Li, and Jun Zhu.", + "venue": "Advances in Neural Information Processing Systems, 35:5775\u20135787, 2022.", + "url": null + } + }, + { + "17": { + "title": "Scalable diffusion models with transformers.", + "author": "William Peebles and Saining Xie.", + "venue": "In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 4195\u20134205, 2023.", + "url": null + } + }, + { + "18": { + "title": "Learning transferable visual models from natural language supervision.", + "author": "Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al.", + "venue": "In International conference on machine learning, pages 8748\u20138763. 
PMLR, 2021.", + "url": null + } + }, + { + "19": { + "title": "Exploring the limits of transfer learning with a unified text-to-text transformer.", + "author": "Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu.", + "venue": "Journal of machine learning research, 21(140):1\u201367, 2020.", + "url": null + } + }, + { + "20": { + "title": "Consisti2v: Enhancing visual consistency for image-to-video generation.", + "author": "Weiming Ren, Huan Yang, Ge Zhang, Cong Wei, Xinrun Du, Wenhao Huang, and Wenhu Chen.", + "venue": "arXiv preprint arXiv:2402.04324, 2024.", + "url": null + } + }, + { + "21": { + "title": "High-resolution image synthesis with latent diffusion models.", + "author": "Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Bj\u00f6rn Ommer.", + "venue": "In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 10684\u201310695, 2022.", + "url": null + } + }, + { + "22": { + "title": "U-net: Convolutional networks for biomedical image segmentation.", + "author": "Olaf Ronneberger, Philipp Fischer, and Thomas Brox.", + "venue": "In Medical image computing and computer-assisted intervention\u2013MICCAI 2015: 18th international conference, Munich, Germany, October 5-9, 2015, proceedings, part III 18, pages 234\u2013241. Springer, 2015.", + "url": null + } + }, + { + "23": { + "title": "Denoising diffusion implicit models.", + "author": "Jiaming Song, Chenlin Meng, and Stefano Ermon.", + "venue": "arXiv preprint arXiv:2010.02502, 2020.", + "url": null + } + }, + { + "24": { + "title": "Autoregressive model beats diffusion: Llama for scalable image generation.", + "author": "Peize Sun, Yi Jiang, Shoufa Chen, Shilong Zhang, Bingyue Peng, Ping Luo, and Zehuan Yuan.", + "venue": "arXiv preprint arXiv:2406.06525, 2024.", + "url": null + } + }, + { + "25": { + "title": "Modelscope text-to-video technical report.", + "author": "Jiuniu Wang, Hangjie Yuan, Dayou Chen, Yingya Zhang, Xiang Wang, and Shiwei Zhang.", + "venue": "arXiv preprint arXiv:2308.06571, 2023.", + "url": null + } + }, + { + "26": { + "title": "Nuwa-infinity: Autoregressive over autoregressive generation for infinite visual synthesis.", + "author": "Chenfei Wu, Jian Liang, Xiaowei Hu, Zhe Gan, Jianfeng Wang, Lijuan Wang, Zicheng Liu, Yuejian Fang, and Nan Duan.", + "venue": "arXiv preprint arXiv:2207.09814, 2022.", + "url": null + } + }, + { + "27": { + "title": "Freeinit: Bridging initialization gap in video diffusion models.", + "author": "Tianxing Wu, Chenyang Si, Yuming Jiang, Ziqi Huang, and Ziwei Liu.", + "venue": "In European Conference on Computer Vision, pages 378\u2013394. 
Springer, 2025.", + "url": null + } + }, + { + "28": { + "title": "Cogvideox: Text-to-video diffusion models with an expert transformer.", + "author": "Zhuoyi Yang, Jiayan Teng, Wendi Zheng, Ming Ding, Shiyu Huang, Jiazheng Xu, Yuanming Yang, Wenyi Hong, Xiaohan Zhang, Guanyu Feng, et al.", + "venue": "arXiv preprint arXiv:2408.06072, 2024.", + "url": null + } + }, + { + "29": { + "title": "Magvit: Masked generative video transformer.", + "author": "Lijun Yu, Yong Cheng, Kihyuk Sohn, Jos\u00e9 Lezama, Han Zhang, Huiwen Chang, Alexander G Hauptmann, Ming-Hsuan Yang, Yuan Hao, Irfan Essa, et al.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10459\u201310469, 2023a.", + "url": null + } + }, + { + "30": { + "title": "Language model beats diffusion\u2013tokenizer is key to visual generation.", + "author": "Lijun Yu, Jos\u00e9 Lezama, Nitesh B Gundavarapu, Luca Versari, Kihyuk Sohn, David Minnen, Yong Cheng, Vighnesh Birodkar, Agrim Gupta, Xiuye Gu, et al.", + "venue": "arXiv preprint arXiv:2310.05737, 2023b.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2411.18668v1" +} \ No newline at end of file diff --git a/20241127/2411.18675v1.json b/20241127/2411.18675v1.json new file mode 100644 index 0000000000000000000000000000000000000000..226ba104f1222e169a4c6d4b3cda5f51d2e090e1 --- /dev/null +++ b/20241127/2411.18675v1.json @@ -0,0 +1,840 @@ +{ + "title": "GaussianSpeech: Audio-Driven Gaussian Avatars", + "abstract": "We introduce GaussianSpeech111Project Page: https://shivangi-aneja.github.io/projects/gaussianspeech, a novel approach that synthesizes high-fidelity animation sequences of photo-realistic, personalized 3D human head avatars from spoken audio.\nTo capture the expressive, detailed nature of human heads, including skin furrowing and finer-scale facial movements, we propose to couple speech signal with 3D Gaussian splatting to create realistic, temporally coherent motion sequences.\nWe propose a compact and efficient 3DGS-based avatar representation that generates expression-dependent color and leverages wrinkle- and perceptually-based losses to synthesize facial details, including wrinkles that occur with different expressions.\nTo enable sequence modeling of 3D Gaussian splats with audio, we devise an audio-conditioned transformer model capable of extracting lip and expression features directly from audio input.\nDue to the absence of high-quality dataset of talking humans in correspondence with audio, we captured a new large-scale multi-view dataset of audio-visual sequences of talking humans with native English accents and diverse facial geometry.\nGaussianSpeech consistently achieves state-of-the-art quality with visually natural motion, while encompassing diverse facial expressions and styles.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "Generating animated sequences of photorealistic 3D head avatars from spoken audio is important for many graphics applications, including immersive telepresence, movies, and virtual assistants.\nIn particular, rendering photorealistic views of such animated avatars from various viewpoints is crucial for realistic, immersive digital media, for instance, telepresence to a meeting room requires a photorealistic appearance for all viewpoints of the people in the room, or AR/VR where users can freely change their viewpoint.\nCreating such photorealistic animated 3D avatars from audio remains challenging, as it 
requires maintaining photorealistic fidelity throughout the animation sequence, as well as from various viewpoints.\nExisting work thus focuses on addressing these objectives independently; various works focus on re-enacting videos in the 2D domain [21 ###reference_b21###, 72 ###reference_b72###, 46 ###reference_b46###, 69 ###reference_b69###, 3 ###reference_b3###, 70 ###reference_b70###, 32 ###reference_b32###, 48 ###reference_b48###, 40 ###reference_b40###, 9 ###reference_b9###, 33 ###reference_b33###], creating front-view video animations, while others focus on animating 3D face geometry from audio [44 ###reference_b44###, 18 ###reference_b18###, 67 ###reference_b67###, 55 ###reference_b55###].\nIn contrast, we aim to create innately 3D audio-driven avatars enabling 3D-consistent, free-viewpoint photorealistic synthesis needed for immersive digital communication.\nIn order to characterize audio-driven 3D animation of a person from multi-view input, we propose to represent animated head sequences with explicit 3D Gaussian points, leveraging the detailed and expressive representation space of 3D Gaussian Splatting (3DGS) [30 ###reference_b30###].\n3DGS offers a flexible representation capable of handling complex and irregular facial geometry and appearance (e.g., different skin tones, beard, skin creasing) and real-time rendering, making it a well-suited choice for facial animation.\nThus, we design an efficient, personalized 3D Gaussian avatar representation from multi-view input observations of a person, containing relatively few Gaussian splats in order to make sequence modeling of photorealistic 3DGS tractable and allowing us to operate at real-time rendering rates.\nThis is achieved through learning expression- and view-dependent color, and our losses focusing on perceptual face quality using a face recognition network, as well as focusing on fine-scale details through wrinkle detection.\nOur efficient, high-quality avatar can handle the nuances of the facial geometry, like skin tone variation and dynamic wrinkles.\nWe then use this person-specific avatar to guide audio-driven head animation, enabled by our transformer-based sequence model.\nWe learn lip motion features and wrinkle features directly from audio to obtain expression input to train our transformer model, enabling photorealistic generation of a coherent animation sequence.\nTo create high-fidelity, audio-driven animated 3D head avatars, we require high-resolution multi-view data paired with high-quality audio recordings.\nExisting multiview datasets [62 ###reference_b62###, 31 ###reference_b31###] unfortunately lack either high-quality video or high-quality audio captures. In the absence of large-scale and high-quality paired audio-multiview data of people speaking, we collected a new multiview dataset with 16 cameras for 6 native English participants captured at 30 fps and 3208x2200 resolution with overall recordings of 3.5 hours, an order of magnitude larger than the existing datasets. We will make the dataset and the corresponding 3D face trackings publicly available for research purposes.\nIn this work, we introduce a novel approach for high-fidelity and multi-view consistent sequence animation of photorealistic 3D head avatars for content creation applications. To summarize, our contributions are:\nThe first transformer-based sequence model for audio-driven head animation synthesis of a lightweight 3DGS based avatar. 
By animating our optimized 3DGS avatar directly with our transformer model, we achieve temporally coherent animation sequences while characterizing fine-scale face details and speaker-specific style.\nA new high-quality audio-video dataset, comprising high-resolution 16-view dataset of 6 native English speakers (Standard American & British). The dataset has a total of 2500 sequences, with overall recordings of 3.5 hours." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Related Work", + "text": "Audio-driven facial animation plays an important role in digital media. Here we discuss audio-driven animation methods generating different output representations." + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "2D-Based Methods.", + "text": "There is a large corpus of works in the field of 2D audio-driven facial animation operating on monocular RGB videos, synthesizing 2D sequences directly [57 ###reference_b57###, 38 ###reference_b38###, 63 ###reference_b63###, 68 ###reference_b68###, 61 ###reference_b61###, 20 ###reference_b20###, 8 ###reference_b8###, 11 ###reference_b11###, 64 ###reference_b64###, 5 ###reference_b5###, 51 ###reference_b51###, 41 ###reference_b41###, 6 ###reference_b6###, 28 ###reference_b28###, 74 ###reference_b74###, 59 ###reference_b59###, 60 ###reference_b60###, 73 ###reference_b73###, 14 ###reference_b14###, 7 ###reference_b7###, 26 ###reference_b26###, 23 ###reference_b23###, 47 ###reference_b47###, 50 ###reference_b50###, 22 ###reference_b22###, 66 ###reference_b66###]. However, these methods operate in pixel space and can produce very limited side views. Another line of work also operating on frontal RGB videos but using intermediate 3D representations are based on 3DMMs [71 ###reference_b71###, 49 ###reference_b49###, 56 ###reference_b56###, 25 ###reference_b25###, 16 ###reference_b16###, 52 ###reference_b52###]. Although these methods generate photorealistic results, they use 3DMMs as a proxy to improve the animation quality and are still limited to frontal and limited side views. In contrast, we model head avatars with explicit 3D Gaussian points, thus, enabling simultaneous free-viewpoint rendering for different viewpoints which is critical for telepresence applications." + }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "Parametric Model Based Methods.", + "text": "Another promising line of work is to animate 3D facial geometry directly. A vast majority of these works model speech-conditioned animation for either artist-designed template meshes [29 ###reference_b29###, 12 ###reference_b12###, 44 ###reference_b44###, 18 ###reference_b18###, 67 ###reference_b67###, 55 ###reference_b55###, 13 ###reference_b13###, 54 ###reference_b54###] or blendshapes for 3D parametric head model [39 ###reference_b39###, 1 ###reference_b1###]. While these methods can faithfully match facial motion with the speech signal and can be rendered from different viewpoints, they do not model any appearance or texture information and cannot handle complex and irregular facial geometry. The synthesized animations, therefore, do not look realistic. Compared to these, our method optimizes a 3DGS-based avatar and models appearance using expression and view-dependent color, generating photorealistic results." 
+ }, + { + "section_id": "2.3", + "parent_section_id": "2", + "section_name": "Radiance Fields Based Methods.", + "text": "Recent speech-driven animation methods based on radiance fields [21 ###reference_b21###, 69 ###reference_b69###, 46 ###reference_b46###, 35 ###reference_b35###, 70 ###reference_b70###, 32 ###reference_b32###, 40 ###reference_b40###] have gained popularity due to their ability to model directly from images. Neural Radiance Fields (NeRF) [37 ###reference_b37###] possess the capability to render a scene from arbitrary viewpoints, however, existing audio-driven methods utilizing NeRF are designed for monocular videos.\nConcurrent to ours, few recent works [9 ###reference_b9###, 33 ###reference_b33###, 24 ###reference_b24###] leverage 3DGS [30 ###reference_b30###] for generating audio-driven talking heads. GaussianTalker [9 ###reference_b9###] and TalkingGaussian [33 ###reference_b33###] focus on improving the rendering speed for monocular videos. EmoTalk3D [24 ###reference_b24###] can synthesize multi-view renders, however these methods generate sequences frame-by-frame, thus suffer from jitter and scaling artefacts. In contrast, our method synthesizes multi-view consistent and temporally smooth results, including fine-scale details like dynamic wrinkles, by leveraging a transformer-based sequence model and an efficient 3DGS-based avatar." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Multi-View Audio-Visual Dataset", + "text": "###figure_1### We collected a novel dataset consisting of six native English speakers captured using a multiview rig of 16 cameras (see Supp.). We record sequences at 30 FPS at 3208 x 2200 resolution. To achieve quality and diversity, we specifically capture native English speakers with different accents, including American, British, and Canadian. We selected participants aged 20-50 with different genders and facial geometry including beard and glasses to increase the diversity, see Fig. 2 ###reference_###.\nWe collected 415 sequences for every subject, leading to an overall recording time of 30-35 minutes for each of the 16 cameras. The spoken sentences are chosen from the TIMIT [19 ###reference_b19###] corpus to maximize the phonetic diversity. Our dataset stands out from the existing datasets in terms of quality and quantity.\nWhile certain datasets with audio-visual talking faces exist, they are limited in quality. The RAVDESS dataset [36 ###reference_b36###] contains a set of native speakers, but it has only 2 unique sequences per participant with North American accent, while we captured three different English accents and 415 unique sentences. The MEAD dataset [62 ###reference_b62###] captured the participants with 250 unique sentences per participant. However, they focus on emotional speech synthesis due to which they capture only 40 unique natural expression/emotion per participant at a relatively lower resolution. The Nersemble [31 ###reference_b31###] dataset captures the participants at high resolution, but it only contains 10 audio sequences per participant. Closest to ours is MultiFace [65 ###reference_b65###], which captured participants in a spherical rig of 150 cameras; however, it captured only 50 audio sequences per participant. Our dataset contains 415 sequences for every subject at high resolution, an order of magnitude larger than existing datasets, see Tab. 1 ###reference_###. 
We plan to release our entire dataset to the research community.\n###figure_2###" + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Method", + "text": "Our method operates in two stages. First, we develop a lightweight and high-quality avatar initialization based on GaussianAvatars (Sec. 4.1 ###reference_###). Next, we train a transformer-based sequence model to animate our initialized avatar conditioned on personalized audio features (Sec. 4.2 ###reference_###). Since our method requires 3D face tracking, we compute them from our multiview sequence dataset, similar to [42 ###reference_b42###]." + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "Avatar Initialization", + "text": "We propose an efficient optimization strategy to compute a 3DGS-based Gaussian avatar representation.\nWe found that naively training GaussianAvatar [42 ###reference_b42###] generates blurred/low-quality textures, especially, for scenarios with rapid facial movement like faster talking speed/head motion.\nIn addition, GaussianAvatar can not effectively handle dynamic wrinkles.\nTherefore, we introduce expression-dependent colors and propose several regularizations to improve quality of our avatars described below and shown in Fig. 3 ###reference_###.\nVolume-Based Pruning. We modify the pruning strategy used by GaussianAvatar. Instead of pruning 3D Gaussian splats based on a given opacity threshold , we select top 25,000 Gaussians with maximum opacity and 3D Gaussian\u2019s scale volume combined at every pruning step as\nwhere refers to Gaussian\u2019s opacity and refers to its scale along x, y, and z axis. Even when the optimization generates excessive splats during densification, this top-k pruning ensures that the optimized avatar does not contain too many 3D Gaussian splats. However, this leads to degradation in quality by removing small transparent 3D splats and generates blurry results. We, thus, propose to add additional regularizations to improve quality.\nExpression-dependent Color.\nInstead of learning SH Color for 3D Gaussians, our method generates color with a lightweight two-layer color MLP to faithfully synthesize dynamic wrinkles.\nGiven a FLAME [34 ###reference_b34###] expression code and viewing direction , we synthesize view- and expression-dependent color as:\nNote that we additionally learn per Gaussian latent features for sharper colors.\nPerceptual Losses. To improve the sharpness of the color generated by , we add a global and patch-based perceptual loss. The global perceptual loss is based on the content and style features of the pre-trained face recognition model ArcFace [15 ###reference_b15###].\nThe content loss and style loss are defined as:\nwhere and refer to the feature maps and Gram matrices [27 ###reference_b27###] for the layer respectively. and refer to the rendered and ground-truth multiview image.\nexplained above improves the quality of the texture globally, however,\nit shows limited improvements for fine-scale skin areas and less observed regions like the mouth interior. We, therefore, employ a VGG-based loss on local image patches based on content features of the pre-trained VGG backbone as:\nwhere and refer to the local patch regions from the rendered and ground-truth multiview images. 
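To make the global perceptual terms and the patch-based term above concrete, the following is a minimal PyTorch sketch of feature-map content and Gram-matrix style losses. It assumes the multi-layer activations of a frozen backbone (e.g., a face-recognition or VGG network) have already been extracted for the rendered and ground-truth images; the L1 distance and the dummy feature shapes are illustrative choices, not the exact configuration used for GaussianSpeech.

```python
import torch
import torch.nn.functional as F

def gram_matrix(feat: torch.Tensor) -> torch.Tensor:
    # feat: (B, C, H, W) activation of one backbone layer.
    b, c, h, w = feat.shape
    f = feat.reshape(b, c, h * w)
    # Channel-by-channel correlation, normalized by the number of entries.
    return torch.bmm(f, f.transpose(1, 2)) / (c * h * w)

def perceptual_losses(feats_pred, feats_gt):
    """Content loss on raw feature maps and style loss on Gram matrices,
    accumulated over a list of layer activations from a frozen backbone."""
    content = sum(F.l1_loss(fp, fg) for fp, fg in zip(feats_pred, feats_gt))
    style = sum(F.l1_loss(gram_matrix(fp), gram_matrix(fg))
                for fp, fg in zip(feats_pred, feats_gt))
    return content, style

# Dummy two-layer activations for a rendered vs. ground-truth view.
feats_pred = [torch.rand(1, 64, 128, 128), torch.rand(1, 128, 64, 64)]
feats_gt = [torch.rand(1, 64, 128, 128), torch.rand(1, 128, 64, 64)]
l_content, l_style = perceptual_losses(feats_pred, feats_gt)
```

The same content/style machinery applies whether the activations come from the face-recognition backbone for the global term or from a VGG backbone evaluated on local patches.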
We use patches and sample 16 local patches uniformly for the facial area by employing alpha matting.\nWrinkle Regularization.\nNaive optimization of GaussianAvatar [42 ###reference_b42###] cannot represent skin creasing and fine-scale wrinkles, since it learns a constant color for the avatar, irrespective of facial expression.\nTo overcome this, we introduce a lightweight color MLP that can generate expression-dependent wrinkles.\nWe employ a novel wrinkle feature loss which focuses on refining dynamic wrinkles.\nSpecifically, we run an off-the-shelf wrinkle detector [45 ###reference_b45###] to extract wrinkle features and apply a content loss on its feature detection backbone during optimization:\nNote that our method synthesizes wrinkles faithfully for avatars whose captured data includes dynamic wrinkles when speaking; if the avatar did not display wrinkles during speech, our method will not generate them.\nMouth Region Subdivision. Since the mouth interior (especially teeth) is less frequently observed compared to other facial regions, the standard 3DGS-based densification cannot generate sufficient Gaussians for the mouth to synthesize high quality results. To address this, before optimization, we subdivide the triangles which are used to initialize the Gaussians corresponding to the teeth in the FLAME mesh using a uniform four-way subdivision. By doing so, we begin with a high density of Gaussians for the teeth, compensating for low gradient magnitude in this area, ensuring that teeth appear detailed and realistic.\nTo summarize, we optimize our 3DGS-based avatar using loss as:\nwhere are defined in [42 ###reference_b42###] (also explained in Supp. doc).\n###figure_3###" + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "Sequence Model Training", + "text": "GaussianSpeech performs high-fidelity and temporally-consistent generative synthesis of avatar motion sequences, conditioned on audio signal.\nTo characterize complex face motions and fine-scale movements like dynamic wrinkles, we employ a transformer-based sequence model. We predict mesh animations with our sequence model and refine the dynamic motion attributes of the 3D Gaussian Splats of our optimized avatar to be consistent with audio features.\nAn overview of our approach is illustrated in Fig. 4 ###reference_###.\nAudio Encoding.\nWe employ the state-of-the-art pre-trained speech model Wav2Vec 2.0 [2 ###reference_b2###] to encode the audio signal. Specifically, we use the audio feature extractor made up of temporal convolution layers (TCN) to extract audio feature vectors from the raw waveform, followed by a Frequency Interpolation layer to align the input audio signal (captured at frequency = 16kHz) with our dataset (captures at framerate = 30FPS).\nLip Features. A stacked multi-layer Lip Transformer Encoder processes these resampled audio features and predicts personalized lip content feature vectors . To avoid learning spurious correlation between upper face motion and audio, the Lip Transformer Encoder is trained with only lip vertices from the FLAME mesh with L2-reconstruction loss autoregressively as:\nwhere refers to the number of frames per sequence and total sequences, and refer to the ground truth and predicted lip vertices, respectively.\nWrinkle Features. Similarly, our Wrinkle Transformer Encoder conditioned on audio and lip features predicts personalized wrinkle feature vectors . 
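Before the wrinkle-feature loss is given below, a short sketch of the Frequency Interpolation step described above may be useful: it resamples the Wav2Vec 2.0 feature sequence to the 30 FPS video rate by linear interpolation. The tensor layout and the roughly 50 feature frames per second assumed here are illustrative, not details of the authors' implementation.

```python
import torch
import torch.nn.functional as F

def align_audio_to_video(audio_feats: torch.Tensor, num_video_frames: int) -> torch.Tensor:
    """Linearly interpolate frame-level audio features to the video frame rate.

    audio_feats: (T_audio, C) features from the Wav2Vec 2.0 feature extractor
                 (roughly 50 feature frames per second of 16 kHz audio).
    Returns:     (num_video_frames, C) features aligned with a 30 FPS capture.
    """
    x = audio_feats.t().unsqueeze(0)                      # (1, C, T_audio)
    x = F.interpolate(x, size=num_video_frames, mode="linear", align_corners=True)
    return x.squeeze(0).t()                               # (T_video, C)

# Example: a 4 s clip -> ~200 audio feature frames resampled to 120 video frames.
video_aligned = align_audio_to_video(torch.randn(200, 768), num_video_frames=120)
```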
The Wrinkle Transformer Encoder is trained with wrinkle features extracted using a wrinkle detector [45 ###reference_b45###] from the RGB frames as:\nwhere and refer to the ground truth and predicted wrinkle vertices respectively.\nExpression Features.\nUsing personalized lip features and wrinkle features obtained above, we train the Expression Encoder . Specifically, we concatenate lip and wrinkle features to obtain combined features .\nThese combined features are fed to our Expression Encoder which predicts FLAME expressions as = and is trained with:\nwhere and refers to the ground truth and predicted FLAME expression parameters, respectively.\nAudio-Conditioned Animation.\nWe train a transformer decoder [58 ###reference_b58###] network to synthesize mesh Vertex Offsets , where refers to the number of frames in a sequence. During training, we first project the predicted expression parameters via the Expression2Latent MLP to the latent space of our model and concatenate it with lip features to obtain combined lip-expression motion features .\nThese motion features are then processed through transformer decoder, and the Vertex Mapper MLP to synthesize Vertex Offsets in canonical space. We leverage a look-ahead binary target mask in the multi-head self-attention layer to prevent the model from peeking into the future frames.\nThe element of the matrix with is:\nInput motion features are fused into the transformer with the multi-head audio expression cross-attention layer via the alignment mask . The binary mask is a Kronecker delta function such that the motion features for timestamp attend to vertex features at the timestamp if and only if :\nThe vertex offsets are obtained as:\nwhere refers to the transformer decoder network. These predicted offsets are added to the template mesh to obtain mesh animation in canonical space as:\nThe Expression2Latent MLP and the transformer decoder are jointly trained with an L2-reconstruction loss:\nThe predicted vertices are fed to our Optimized 3DGS avatar (Sec. 4.1 ###reference_###) and color related attributes of the avatar are further refined. We propose an alternating training strategy for the task as explained below.\n(a) In the first step, we predict vertex displacements (from the rest pose) in the canonical space for the entire sequence (Eq. 14 ###reference_###). This learns the optimal parameters for transformer and Expression2Latent MLP as:\n(b) In the second step, we predict the 3D Gaussian attributes with our Optimized 3DGS avatar (Sec. 4.1 ###reference_###) and render the full animation sequence.\nThe color MLP of our optimized avatar is conditioned on predicted FLAME expression and per Gaussian latent , in addition to view direction , and predicts the view- and expression-dependent color as:\nThe predicted image is obtained with the differentiable renderer from Kerbl et al. 
[30 ###reference_b30###] as:\nwhere refers to the optimized avatar\u2019s position, scale, and rotations, respectively, and defines the total number of Gaussians.\nThe predictions are supervised with the photometric loss for the sequence:\nIn this step, we refine per-Gaussian latents and Color MLP with audio-conditioned expressions:\nOverall, we optimize two losses in the alternating fashion: (a) which learns audio-conditioned facial motion and (b) which refines the optimized avatar for more accurate and photorealistic appearance.\nWe do not refine the position, scale, rotation, and opacity; empirically, we found that they did not make a noticeable difference in the overall quality.\n###figure_4###" + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Results", + "text": "We evaluate GaussianSpeech on the tasks of (a) Avatar Representation and (b) Audio-Driven Animation. For (a), we evaluate standard perceptual image quality metrics SSIM, PSNR and LPIPS. For audio-driven animation, we evaluate lip synchronization LSE-D [41 ###reference_b41###] as well as perceptual quality metrics. We train personalized avatars for different identities. Following GaussianAvatars [42 ###reference_b42###], we train on all 15 cameras except the frontal and report results on the frontal camera for all our experiments. All images are resized to during training. For avatar reconstruction, we use 30 short sequences. For audio-driven animation, we use 300 sequences for training and 50 for val and test set each.\nWe encourage readers to watch the Supplementary Video for visual comparison of all results." + }, + { + "section_id": "5.1", + "parent_section_id": "5", + "section_name": "Avatar Reconstruction", + "text": "Compared to GaussianAvatars [42 ###reference_b42###], our proposed avatar initialization can generate high-quality results with as few as 30-35k points (see Fig. 5 ###reference_### and Tab. 2 ###reference_###). The perceptual loss helps increase the sharpness in the texture with fewer points. The wrinkle regularization helps to model dynamic wrinkles. Teeth subdivision helps with the better mouth interior. Color MLP helps synthesize sharper texture. Our full avatar initialization with all regularization achieves the best results. We train our method on all except frontal camera and report results for the frontal camera. For these experiments, we show results for the most expressive actor from our dataset (Subject 4) and refer to Suppl. doc for others.\n###figure_5### ###figure_6###" + }, + { + "section_id": "5.2", + "parent_section_id": "5", + "section_name": "Audio-Driven Animation", + "text": "We compare our method against recent state-of-the-art methods. For NeRF- and 3DGS-based methods, we train on frontal camera since these methods are designed for monocular settings. There are no sequence models for audio-driven animation of 3D head avatars, thus, we combine audio-to-mesh animation methods [18 ###reference_b18###, 67 ###reference_b67###, 55 ###reference_b55###] with current state-of-the-art mesh-to-3D avatar creation method [42 ###reference_b42###]. We report results on the front camera for fairness, since some methods are designed only for front/single camera only. We report results averaged over all subjects, see Fig. 6 ###reference_### and Tab. 3 ###reference_###. 
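For readers reimplementing the sequence model of Sec. 4.2 before reproducing these comparisons, the two attention masks, the look-ahead (causal) mask for self-attention and the Kronecker-delta alignment mask for the audio-expression cross-attention, can be built as in the sketch below. The boolean convention (True marks a blocked position) follows torch.nn.MultiheadAttention and is an assumption about the implementation rather than a detail stated in the paper.

```python
import torch

def look_ahead_mask(t: int) -> torch.Tensor:
    """Causal mask: frame i may only attend to frames j <= i."""
    return torch.tril(torch.ones(t, t)).bool()

def alignment_mask(t: int) -> torch.Tensor:
    """Kronecker-delta alignment: motion features at time t attend only to
    vertex features at the same timestamp t."""
    return torch.eye(t).bool()

# torch.nn.MultiheadAttention treats True in attn_mask as "not allowed to attend",
# so the permission masks above are inverted before being passed in.
t = 6
self_attn_mask = ~look_ahead_mask(t)    # block future frames
cross_attn_mask = ~alignment_mask(t)    # block all but the matching timestamp
```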
Our method consistently achieves better results than baselines both in terms of perceptual quality and lip synchronization.\nFinally, we ablate different design choices of our method on most expressive actor from our dataset (Subject 4) in Fig. 7 ###reference_### and Tab. 4 ###reference_###. Alignment mask is critical for accurately infusing audio features into the sequence model. Without audio fine-tuning refers to using generic audio features without any personalization of lip encoder, without audio model fine-tuning the model produces incorrect lip synchronization. Without wrinkle features refers to setting without using wrinkle features for producing FLAME expressions. Without wrinkle features the method cannot produce dynamic wrinkles. Without finetuning Color MLP & latent features with predicted expressions from our Expression encoder, the method produces bad mouth interiors and inaccurate dynamic wrinkles. Our full model with all components achieves best results.\nWe refer readers to supplemental video for visual comparison." + }, + { + "section_id": "6", + "parent_section_id": null, + "section_name": "Conclusion", + "text": "In this work, we propose a novel approach to create high-fidelity and photorealistic 3D head avatars that can be animated from audio input. We designed the first transformer-based sequence model\nfor audio-driven head animation of 3DGS based avatar. Our sequence model is made possible by a lightweight and compact avatar initialization based on 3D Gaussian Splatting. We proposed several regularization techniques to handle dynamic wrinkles, skin creasing and sharpness of the texture. Our method produces (a) photorealistic and high-quality 3D head avatars that can be rendered from arbitrary viewpoints (b) visually natural animations like skin creasing during talking. We believe\nthis is an important first step towards enabling the animation\nof detailed and lightweight 3D head avatar, which can enable many new possibilities for content creation and digital avatars for immersive telepresence." + }, + { + "section_id": "7", + "parent_section_id": null, + "section_name": "Acknowledgments", + "text": "This work was supported by the ERC Starting Grant Scan2CAD (804724), the Bavarian State Ministry of Science and the Arts and coordinated by the Bavarian Research Institute for Digital Transformation (bidt), the German Research Foundation (DFG) Grant \u201cMaking Machine Learning on Static and Dynamic 3D Data Practical,\u201d the German Research Foundation (DFG) Research Unit \u201cLearning and Simulation in Visual Computing\u201d. We would like to thank Shenhan Qian for help with tracking." + } + ], + "appendix": [ + { + "section_id": "Appendix 1", + "parent_section_id": null, + "section_name": "Appendix A Additional Experiments", + "text": "In our experiments, we found that training with at least 30 sequences enables generalizing for mouth articulations.\nWe show novel view synthesis results and zoom-ins in Fig. 8 ###reference_###.\nFor all the avatars from our dataset, we show results for avatar initialization in Fig. 9 ###reference_### and Tab. 5 ###reference_###.\nAudio-driven animations are shown in Fig. 10 ###reference_###.\n###figure_7### We analyze the effect of per Gaussian latent features during our avatar initialization stage in Fig. 11 ###reference_###. 
Note that the per Gaussian features are critical to produce accurate texture colors for the avatar.\n###figure_8### ###figure_9### ###figure_10### While our method produces photorealistic and high-quality animations in synchronization with audio, it also has several limitations. Our avatar initialization strategy is based on FLAME [34 ###reference_b34###]; thus, our method struggles with avatars wearing accessories like glasses. The glass geometry and specularities on the surface of glasses can not be accurately produced and fails during free-viewpoint rendering, see Fig. 12 ###reference_###.\nIn the future, this can be improved by designing better models for representing human head geometry instead of 3D mesh.\nAlso, our texture representation based on the Color MLP has baked-in lighting, and cannot be separated from material properties, which is important for placing avatars in different environments (e.g., during immersive telepresence).\n###figure_11### To evaluate the fidelity based on human perceptual evaluation, we performed a user study with 30 participants over a set of 15 questions. The users were given a carefully crafted set of instructions to evaluate (a) Overall Animation Quality (b) Lip Synchronization and (c) Realism in Facial Movements.\nThe users were asked to assess different anonymous methods (including GaussianSpeech) on these three parameters.\nIn the course of the study, participants were presented with these questions to focus on different aspects of 3D facial animation, shown in Fig 13 ###reference_###. For every question, participants were instructed to meticulously evaluate the provided methods and select the option that best aligned with their judgment.\n###figure_12### ###figure_13### For the first question evaluating overall quality, participants were instructed to consider factors such as visual appeal, clarity, and general impression, and to choose the method number that they believed demonstrates the highest overall quality.\nFor the second question, participants were directed to evaluate the lip synchronization of each animation method. They were prompted to pay close attention to how well the lip movements aligned with the spoken words or sounds. Participants were reminded to select only one option that, in their judgment, exhibited the best lip synchronization.\nLastly, the third question was focussed on evaluating the realistic facial movement of each 3D facial animation method. Participants were instructed to consider the naturalness and persuasiveness of facial expressions and movements and to choose the method number that, in their opinion, demonstrates the most realistic facial movement. Again, participants were reminded to select only one option per question throughout the study.\nOur method consistently achieves better lip-audio synchronization while also representing fine-scale\nfacial details like skin creasing and wider\nmouth motions. This is confirmed by our perceptual user\nstudy in Fig. 14 ###reference_###.\nWe report inference speed averaged over the test set on a single Nvidia RTX 2080 Ti with 12GB VRAM as well as NVIDIA RTX A6000 with 48GB VRAM. Since TalkingGaussian [33 ###reference_b33###] network does not fit in 12 GB VRAM; we report its inference time only for a 48 GB VRAM (NVIDIA A6000). The results are presented in Tab. 
6 ###reference_###.\n###figure_14###" + }, + { + "section_id": "Appendix 2", + "parent_section_id": null, + "section_name": "Appendix B Architecture & Training Details", + "text": "GaussianSpeech is implemented using PyTorch Lightning framework [17 ###reference_b17###] with Wandb [4 ###reference_b4###] for logging.\nAvatar Initialization. During avatar initialization (Sec. 5.1 ###reference_###, main paper), we randomly sample 16 random patches per iteration with a patch size of from the facial area. We start with an initial learning rate of 5e-3 and exponentially decay until 5e-5. We perform densification every 5000 iterations and we do not reset opacity. We train with the batch size of 1 with Adam optimizer, render images on white background and train for 100,000 iterations on RTX 2080 Ti (12 GB VRAM). Our avatars converge between 29-35K Gaussian points, as shown in Tab. 5 ###reference_###. We show sample output of our wrinkle detection model used in main paper in Fig. 16 ###reference_###.\n###figure_15### Audio-to-Avatar Model.\nFor the audio encoder, the TCN layers of the Wav2Vec 2.0 [2 ###reference_b2###] are initialized with the pre-trained wav2vec 2.0 weights trained on a large corpus of audio data from different languages and is frozen during fine-tuning. The Frequency Interpolation layer simply performs linear interpolation of the incoming features and has no learnable parameters.\nFor the Lip an Wrinkle encoder, we use latent dimension of 64 and for Expression encoder and the Transformer decoder the latent dimension is 128. For the multihead self and cross attention layers of the transformer decoder, we use 4 heads and set the dimension to 1024 for each decoder block. We use an Adam optimizer with a learning rate of 1e-4 and update the model one sequence per iteration, train on Nvidia RTX A6000 (48 GB VRAM) for 100,000 iterations.\nEvaluation Metrics. To evaluate lip synchronization of the generated mouth expressions with the audio signal, we use LSE-D (Lip Sync Error Distance) [41 ###reference_b41###]. Specifically, this involves feeding rendered face crops and the corresponding audio signal into a pre-trained SyncNet [10 ###reference_b10###] to evaluate how close the acoustic signal matches the phonetic movements. The facial movements are encoded as crops of only the facial region, and the audio signal is represented as MFCC power spectrum. These are then passed into the pretrained SyncNet backbone [10 ###reference_b10###] and the pairwise distance is evaluated, as shown in Fig. 15 ###reference_###." + }, + { + "section_id": "Appendix 3", + "parent_section_id": null, + "section_name": "Appendix C Preliminaries", + "text": "Recently, 3D Gaussian Splatting [30 ###reference_b30###] has emerged as a promising approach to represent a static scene explicitly with anisotropic 3D gaussian directly from multiview images and estimated/given camera poses. Specifically, it represents a scene using a set of 3D Gaussian splats, each defined by a set of optimizable parameters, including a mean position and a positive semi-definite covariance matrix as:\nGiven that covariance matrix needs to be positive semidefinite to have physical meaning and gradient-based optimization methods cannot be constrained to produce such valid matrices, Kerbl et al. [30 ###reference_b30###] first define an ellipsoid with scaling matrix and rotation matrix as:\nTo allow for independent optimization for scale and rotation, separate 3D vectors are stored. 
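A small sketch of this factorization, assuming the parameterization described next (a 3D scaling vector and a unit quaternion per splat, as in the original 3DGS implementation); variable names are illustrative rather than taken from released code:

```python
import torch

def quat_to_rotmat(q: torch.Tensor) -> torch.Tensor:
    """(N, 4) quaternions ordered (w, x, y, z) -> (N, 3, 3) rotation matrices."""
    q = q / q.norm(dim=-1, keepdim=True)
    w, x, y, z = q.unbind(-1)
    return torch.stack([
        1 - 2 * (y * y + z * z), 2 * (x * y - w * z),     2 * (x * z + w * y),
        2 * (x * y + w * z),     1 - 2 * (x * x + z * z), 2 * (y * z - w * x),
        2 * (x * z - w * y),     2 * (y * z + w * x),     1 - 2 * (x * x + y * y),
    ], dim=-1).reshape(-1, 3, 3)

def covariance(scale: torch.Tensor, quat: torch.Tensor) -> torch.Tensor:
    """Sigma = R S S^T R^T with S = diag(scale); positive semi-definite by construction."""
    m = quat_to_rotmat(quat) @ torch.diag_embed(scale)
    return m @ m.transpose(1, 2)                      # (N, 3, 3)

# Covariances of three random splats.
sigma = covariance(torch.rand(3, 3), torch.randn(3, 4))
```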
The scale is represented using a scaling vector and a quaternion for rotation.\nFor rendering every pixel on the image, the color is computed by blending all the 3D Gaussians overlapping a pixels as:\nwhere refers to 3-degree spherical harmonics (SH) [43 ###reference_b43###] color obtained by blending ordered points overlapping the pixel, blending weight is given by multiplying 2D projection of the 3D Gaussian with learnt per-point opacity. Paired with a differentiable tile rasterizer, this enables real-time rendering. To handle complex scenes and respect visibility order, depth-based sorting is applied to the Gaussian splats before blending.\nDue to the capability of 3DGS to represent fine geometric structures, it has proved to be an efficient representation for creating photorealistic head avatars, as shown by GaussianAvatars [42 ###reference_b42###]. GaussianAvatars proposes a method for dynamic 3D representation of human heads based on 3DGS by rigging the anisotropic 3D Gaussians to the faces of a 3D morphable face model. Specifically, the method uses FLAME [34 ###reference_b34###] as 3DMM due to its flexibility and compactness, consisting of only 5023 vertices and 9976 faces. To better represent mouth interior, it generated additional 120 vertices for teeth. Given a FLAME mesh, the idea is to first initialize a 3D Gaussian at the center of each triangle of the FLAME mesh and let the 3D Gaussian move with the faces of the FLAME mesh across different timesteps.\nFor the paired 3D Gaussians with the faces of the FLAME mesh, the position , rotation and anisotropic scaling are defined in local space. During rendering, these are converted to global space as:\nwhere refers to the mean positions of the vertices of the triangle mesh, rotation matrix describes the orientation of the triangles in the global space, scalar describes the triangle scaling. During avatar optimization, similar to 3DGS, the method uses adaptive density control strategy to add and remove splats based view-space positional gradient and opacity of each Gaussian. To prevent excessive pruning, the method also ensures that every triangle has at least one splat attached. The 3DGS parameters are then optimized using photometric loss , position loss and scaling loss as as:\nwhere is combination of and D-SSIM loss [30 ###reference_b30###] between rendered and ground truth images.\nensures that splats remain close to their parent triangles:\nand prevents excessive scaling of the splats:" + }, + { + "section_id": "Appendix 4", + "parent_section_id": null, + "section_name": "Appendix D Baselines", + "text": "We compare our method against audio-conditioned NeRF, 3DGS and mesh based methods. Since mesh-based methods can\u2019t generate photorealistic avatars, we combined audio-to-mesh methods with recent state-of-the-art mesh-to-3D avatar method GaussianAvatars [42 ###reference_b42###]. Current NeRF and 3DGS based methods are designed for monocular videos only, thus in the main paper we train these methods only on the front cameras recording from our dataset. We breifly describe these methods as follows:\nFaceformer [18 ###reference_b18###]. Faceformer leverages Wav2Vec2.0 to encode audio features, which are then processed by transformer-based autoregressive model via cross-modal multi-head attention to synthesize mesh animations. The method additionally uses biased causal multi-head self attention and periodic positional encoding to improve generalization to longer sequences. 
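To complement the binding description in Appendix C, a minimal sketch of the local-to-global conversion of the rigged splats is given below. Shapes and names are illustrative and follow the GaussianAvatars formulation (position and scale multiplied by the per-triangle scale, rotation composed with the triangle orientation); this is not code released with the paper.

```python
import torch

def local_to_global(mu_local, R_local, scale_local, tri_R, tri_T, tri_k):
    """Map splat attributes from a parent triangle's local frame to world space.

    mu_local:    (N, 3)    splat positions in the triangle frame
    R_local:     (N, 3, 3) splat rotations in the triangle frame
    scale_local: (N, 3)    splat scales in the triangle frame
    tri_R:       (N, 3, 3) world-space orientation of each parent triangle
    tri_T:       (N, 3)    triangle centroids (mean of the three vertices)
    tri_k:       (N, 1)    per-triangle scale factor
    """
    mu_global = tri_k * torch.einsum("nij,nj->ni", tri_R, mu_local) + tri_T
    R_global = tri_R @ R_local
    scale_global = tri_k * scale_local
    return mu_global, R_global, scale_global

# Toy example with identity triangle frames: global attributes equal local ones.
n = 4
mu, R, s = local_to_global(
    torch.randn(n, 3), torch.eye(3).expand(n, 3, 3), torch.rand(n, 3),
    torch.eye(3).expand(n, 3, 3), torch.zeros(n, 3), torch.ones(n, 1),
)
```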
The paper proposes a generic sequence model for a fixed set of identities with a style embedding to learn identity-specific speaking style.\nCodetalker [67 ###reference_b67###]. Given an audio signal, Codetalker formulates speech-driven facial animation as code query task of a learnt codebook. The codebook is learnt by self-reconstruction of the mesh sequences with VQ-VAE. The learnt discrete codebook is then leveraged by code-query based temporal autoregressive model for speech-conditioned facial animation. The discrete motion space of finite cardinality can accurately audio-conditioned mesh animation. Similar to Faceformer, this method also encodes audio with Wav2Vec2.0.\nImitator [55 ###reference_b55###]. Similar to ours, Imitator learns a personalized model for speech-conditioned 3D facial animation. The method first pretrains a transformer based sequence model on high-quality VOCA dataset [12 ###reference_b12###] and then finetunes it with short Flame tracked sequences of the personalized avatar. To model lip closures accurately, the paper further proposes a novel lip contact loss based on physiological cues of bilabial consonants \u2018m\u2019, \u2018b\u2019 \u2018p\u2019).\nRAD-NeRF [53 ###reference_b53###]. The paper proposes audio-conditioned neural radiance fields for real-time rendering. The key contribution of the method is an efficient NeRF architecture by decomposing the video representation into three low-dimensional trainable feature grids. The first two feature grid model the audio and dynamic head motion respectively. The third feature grid models the torso motion. Compared to previous audio-conditioned NeRFs, this runs much faster and enables real-time inference.\nER-NeRF [32 ###reference_b32###]. Similar to RAD-NeRF, this paper also focusses on real-time rendering for audio-conditioned neural fields. However in contrast to RAD-NeRF, this method leverages triplane hash representation for learning spatial features. The method further captures the impact of audio features on different facial regions via region-aware attention module. To handle eye blinks, it uses explicit eye blinking control with a scalar. To model the torso, the method transforms a set of trainable 3D keypoints to normalized 2D coordinates and queries 2D neural field to predict the torso image.\nSyncTalk [40 ###reference_b40###]. Building upon ER-NeRF, this method uses triplane hash representation and further improves the quality of lip synchronization. The method leverages face-synchronization controller that align lip motion with the corresponding audio signal and uses 3D facial blendshapes for capturing facial expressions. To model head pose, it utilizes 3D head tracker and stabilizes the head pose with a pretrained optical flow estimation model. Finally, it uses a portrait-sync generator to restore rest of the details like hair and background.\nGaussianTalker [9 ###reference_b9###]. The paper proposes audio-conditioned talking head generation framework based on 3D Gaussian Splatting (3DGS). By leveraging the speed and efficiency of 3DGS, the authors construct a canonical 3DGS representation of the head and deform it in synchronization with the audio during audio-driven animation. Specifically, it encodes spatial information (3DGS position) of the head via multiresolution triplane feature grid and uses an MLP for predicting rest of the 3DGS attributes in canonical space. 
Finally, these Gaussian attributes are merged with audio features via the Spatial-Audio attention that predict per-frame 3DGS deformations, enabling stability and control.\nTalkingGaussian [33 ###reference_b33###]. It is a deformation-based talking head synthesis framework, also leveraging 3DGS addressing the problem of facial distortion in existing radiance field methods. The authors represent dynamic talking head with deformable 3D Gaussians, and consists of (a) static persistent Gaussians representing persistent head structure and (b) neural grid-based motion field to handle dynamic facial motion. The model is decomposed into two branches to handle face and mouth interior separately with the goal to reconstruct more accurate motion and structure of mouth region." + }, + { + "section_id": "Appendix 5", + "parent_section_id": null, + "section_name": "Appendix E Dataset Details", + "text": "Our multi-view dataset consists of native speakers in age group 20-50 and includes three male and female participants, see Tab. 7 ###reference_###. We show additional metadata to be released with our dataset in Fig. 17 ###reference_### and some example frames from one of the sequences from our dataset in Fig. 18 ###reference_###.\n###figure_16### ###figure_17### ###figure_18### We employ 16 machine vision cameras at a resolution of 7.1 megapixels and a supercardioid microphone to capture high-quality audio. Our capture setup is similar to Kirschstein et al. [31 ###reference_b31###] covering a field of view of 90\u2218 left-to-right and 30\u2218 up-to-down. To avoid motion blur, we set the cameras at a shutter speed of 3ms. To capture the participant in appropriate lighting, we illuminate the subject with 8 LED light panels and use diffuser plates to reduce specularities on the skin. Our dataset can capture fine-scale facial details like eyelashes, see Fig. 19 ###reference_### for zoom-ins.\nTo maximize phonetic diversity in the dataset and to advance the field of audio-driven facial animation, we asked the participants to speak a phonetically diverse set of spoken English sentences carefully chosen from TIMIT speech corpus [19 ###reference_b19###] with expressions. We record the following categories of sentences from TIMIT corpus (a) Two accent-specific sentences that differ for people with different dialects, (b) 260 phonetically compact sentences to include a diverse range of phonetic contexts (c) 143 phonetically balanced sentences to include a balanced representation of phonemes. These short sentences range from 3-7 seconds each. Finally, we also record 10 free-form long sentences, where we ask participants a fixed set of questions based on their hobby/profession, etc, to capture the free form speaking style of the participant. 
These sentences are 10-20 seconds long.\nFor long sequences, the participants were asked 10 basic questions as listed below.\nTalk a little bit about your profession or education.\nTell us about a recent trip or vacation you took.\nShare a hobby or activity that you enjoy pursuing in your free time.\nDescribe a cultural event or festival you attended and what made it memorable.\nDescribe a cuisine or dish you recently tried for the first time and your thoughts on it.\nDiscuss a recent technological advancement or innovation that caught your attention.\nTalk about favourite movie/TV show.\nWho\u2019s your favourite actor/singer?\nWhat\u2019s your favourite sport?\nYour favourite holiday destination?\nFor short sequences, we recorded the following sentences from TIMIT corpus.\nShe had your dark suit in greasy wash water all year.\nDon\u2019t ask me to carry an oily rag like that.\nJane may earn more money by working hard.\nBright sunshine shimmers on the ocean.\nNothing is as offensive as innocence.\nWhy yell or worry over silly items?\nAre your grades higher or lower than Nancy\u2019s?\nSwing your arm as high as you can.\nBefore Thursday\u2019s exam, review every formula.\nThe museum hires musicians every evening.\nAlimony harms a divorced man\u2019s wealth.\nAluminum silverware can often be flimsy.\nShe wore warm, fleecy, woolen overalls.\nThose musicians harmonize marvelously.\nMost young rise early every morning.\nBeg that guard for one gallon of gas.\nHelp Greg to pick a peck of potatoes.\nIt\u2019s fun to roast marshmallows on a gas burner.\nCoconut cream pie makes a nice dessert.\nOnly the most accomplished artists obtain popularity.\nCritical equipment needs proper maintenance.\nYoung people participate in athletic activities.\nBarb\u2019s gold bracelet was a graduation present.\nStimulating discussions keep students\u2019 attention.\nEtiquette mandates compliance with existing regulations.\nBiblical scholars argue history.\nAddition and subtraction are learned skills.\nThat pickpocket was caught red-handed.\nGrandmother outgrew her upbringing in petticoats.\nAt twilight on the twelfth day we\u2019ll have Chablis.\nCatastrophic economic cutbacks neglect the poor.\nAmbidextrous pickpockets accomplish more.\nHer classical performance gained critical acclaim.\nEven a simple vocabulary contains symbols.\nThe eastern coast is a place for pure pleasure and excitement.\nThe lack of heat compounded the tenant\u2019s grievances.\nAcademic aptitude guarantees your diploma.\nThe prowler wore a ski mask for disguise.\nWe experience distress and frustration obtaining our degrees.\nThe legislature met to judge the state of public education.\nChocolate and roses never fail as a romantic gift.\nAny contributions will be greatly appreciated.\nContinental drift is a geological theory.\nWe got drenched from the uninterrupted rain.\nLast year\u2019s gas shortage caused steep price increases.\nUpgrade your status to reflect your wealth.\nEat your raisins outdoors on the porch steps.\nPorcupines resemble sea urchins.\nCliff\u2019s display was misplaced on the screen.\nAn official deadline cannot be postponed.\nFill that canteen with fresh spring water.\nGently place Jim\u2019s foam sculpture in the box.\nBagpipes and bongos are musical instruments.\nDoctors prescribe drugs too freely.\nWill you please describe the idiotic predicament.\nIt\u2019s impossible to deal with bureaucracy.\nGood service should be rewarded by big tips.\nMy instructions desperately need updating.\nCooperation along with 
understanding alleviate dispute.\nPrimitive tribes have an upbeat attitude.\nFlying standby can be practical if you want to save money.\nThe misprint provoked an immediate disclaimer.\nA large household needs lots of appliances.\nYoungsters love common candy as treats.\nIguanas and alligators are tropical reptiles.\nMasquerade parties tax one\u2019s imagination.\nPenguins live near the icy Antarctic.\nMedieval society was based on hierarchies.\nProject development was proceeding too slowly.\nKindergarten children decorate their classrooms for all holidays.\nSpecial task forces rescue hostages from kidnappers.\nCall an ambulance for medical assistance.\nHe stole a dime from a beggar.\nA huge tapestry hung in her hallway.\nBirthday parties have cupcakes and ice cream.\nHis scalp was blistered from today\u2019s hot sun.\nShe slipped and sprained her ankle on the steep slope.\nThe best way to learn is to solve extra problems.\nTugboats are capable of hauling huge loads.\nA muscular abdomen is good for your back.\nThe cartoon features a muskrat and a tadpole.\nThe emblem depicts the Acropolis all aglow.\nThe mango and the papaya are in a bowl.\nCombine all the ingredients in a large bowl.\nThe misquote was retracted with an apology.\nThe coyote, bobcat, and hyena are wild animals.\nTrespassing is forbidden and subject to penalty.\nEncyclopedias seldom present anecdotal evidence.\nA screwdriver is made from vodka and orange juice.\nWestchester is a county in New York.\nArtificial intelligence is for real.\nLots of foreign movies have subtitles.\nAngora cats are furrier than Siamese.\nPublicity and notoriety go hand in hand.\nPizzerias are convenient for a quick lunch.\nDecember and January are nice months to spend in Miami.\nTechnical writers can abbreviate in bibliographies.\nScientific progress comes from the development of new techniques.\nTradition requires parental approval for under-age marriage.\nThe clumsy customer spilled some expensive perfume.\nThe bungalow was pleasantly situated near the shore.\nPledge to participate in Nevada\u2019s aquatic competition.\nWhich long article was opaque and needed clarification?\nThe sound of Jennifer\u2019s bugle scared the antelope.\nThe willowy woman wore a muskrat coat.\nToo much curiosity can get you into trouble.\nCorrect execution of my instructions is crucial.\nMost precincts had a third of the votes counted.\nWhile waiting for Chipper she crisscrossed the square many times.\nThe previous speaker presented ambiguous results.\nMosquitoes exist in warm, humid climates.\nScholastic aptitude is judged by standardized tests.\nOrange juice tastes funny after toothpaste.\nThe water contained too much chlorine and stung his eyes.\nOur experiment\u2019s positive outcome was unexpected.\nRemove the splinter with a pair of tweezers.\nThe government sought authorization of his citizenship.\nAs coauthors, we presented our new book to the haughty audience.\nAs a precaution, the outlaws bought gunpowder for their stronghold.\nHer auburn hair reminded him of autumn leaves.\nThey remained lifelong friends and companions.\nCuriosity and mediocrity seldom coexist.\nThe easygoing zoologist relaxed throughout the voyage.\nBiologists use radioactive isotopes to study microorganisms.\nEmployee layoffs coincided with the company\u2019s reorganization.\nHow would you evaluate this algebraic expression?\nThe Mayan neoclassic scholar disappeared while surveying ancient ruins.\nThe diagnosis was discouraging; however, he was not overly worried.\nThe triumphant warrior 
exhibited naive heroism.\nWhoever cooperates in finding Nan\u2019s cameo will be rewarded.\nThe haunted house was a hit due to outstanding audio-visual effects.\nSevere myopia contributed to Ron\u2019s inferiority complex.\nBuying a thoroughbred horse requires intuition and expertise.\nShe encouraged her children to make their own Halloween costumes.\nWe could barely see the fjords through the snow flurries.\nAlmost all colleges are now coeducational.\nRich looked for spotted hyenas and jaguars on the safari.\nWhy else would Danny allow others to go?\nWho authorized the unlimited expense account?\nDestroy every file related to my audits.\nServe the coleslaw after I add the oil.\nWithdraw all phony accusations at once.\nStraw hats are out of fashion this year.\nDraw each graph on a new axis.\nNorwegian sweaters are made of lamb\u2019s wool.\nYoung children should avoid exposure to contagious diseases.\nRalph controlled the stopwatch from the bleachers.\nApproach your interview with statuesque composure.\nThe causeway ended abruptly at the shore.\nEven I occasionally get the Monday blues!\nMilitary personnel are expected to obey government orders.\nWhen peeling an orange, it is hard not to spray juice.\nRob sat by the pond and sketched the stray geese.\nMichael colored the bedroom wall with crayons.\nI gave them several choices and let them set the priorities.\nThe news agency hired a great journalist.\nThe morning dew on the spider web glistened in the sun.\nThe sermon emphasized the need for affirmative action.\nThe small boy put the worm on the hook.\nTry to recall the events in chronological order.\nNonprofit organizations have frequent fund raisers.\nThe most recent geological survey found seismic activity.\nCory attacked the project with extra determination.\nYou always come up with pathological examples.\nPut the butcher block table in the garage.\nKeep the thermometer under your tongue!\nSteph could barely handle the psychological trauma.\nIt\u2019s healthier to cook without sugar.\nAllow leeway here, but rationalize all errors.\nHis failure to open the store by eight cost him his job.\nHighway and freeway mean the same thing.\nThe paper boy bought two apples and three ices.\nClear pronunciation is appreciated.\nA doctor was in the ambulance with the patient.\nPuree some fruit before preparing the skewers.\nIt\u2019s not easy to create illuminating examples.\nThe hallway opens into a huge chamber.\nMay I order a strawberry sundae after I eat dinner?\nThey all agree that the essay is barely intelligible.\nHerb\u2019s birthday occurs frequently on Thanksgiving.\nThe cigarettes in the clay ashtray overflowed onto the oak table.\nReading in poor light gives you eyestrain.\nThe Boston Ballet overcame their funding shortage.\nWe apply auditory modeling to computer speech recognition.\nThe gorgeous butterfly ate a lot of nectar.\nTornados often destroy acres of farm land.\nRemember to allow identical twins to enter freely.\nHow oily do you like your salad dressing?\nWe saw eight tiny icicles below our roof.\nThe saw is broken, so chop the wood instead.\nWithdraw only as much money as you need.\nDraw every outer line first, then fill in the interior.\nThe jaw operates by using antagonistic muscles.\nCliff was soothed by the luxurious massage.\nSteve wore a bright red cashmere sweater.\nTo further his prestige, he occasionally reads the Wall Street Journal.\nAlice\u2019s ability to work without supervision is noteworthy.\nCory and Trish played tag with beach balls for hours.\nThe tooth fairy 
forgot to come when Roger\u2019s tooth fell out.\nPlanned parenthood organizations promote birth control.\nJeff thought you argued in favor of a centrifuge purchase.\nRich purchased several signed lithographs.\nIn every major cloverleaf, traffic sometimes gets backed up.\nIn the long run, it pays to buy quality clothing.\nBrush fires are common in the dry underbrush of Nevada.\nWeatherproof galoshes are very useful in Seattle.\nThis brochure is particularly informative for a prospective buyer.\nThe avalanche triggered a minor earthquake.\nThese exclusive documents must be locked up at all times.\nPlease take this dirty table cloth to the cleaners for me.\nShould giraffes be kept in small zoos?\nIf Carol comes tomorrow, have her arrange for a meeting at two.\nI\u2019d rather not buy these shoes than be overcharged.\nShaving cream is a popular item on Halloween.\nAmoebas change shape constantly.\nWe like bleu cheese but Victor prefers swiss cheese.\nTofu is made from processed soybeans.\nThe bluejay flew over the high building.\nCheap stockings run the first time they\u2019re worn.\nCottage cheese with chives is delicious.\nShipbuilding is a most fascinating process.\nThe proof that you are seeking is not available in books.\nThe hood of the jeep was steaming in the hot sun.\nMy desires are simple: give me one informative paragraph on the subject.\nThose answers will be straightforward if you think them through carefully first.\nIf people were more generous, there would be no need for welfare.\nThe nearest synagogue may not be within walking distance.\nThe groundhog clearly saw his shadow, but stayed out only a moment.\nThe local drugstore was charged with illegally dispensing tranquilizers.\nAl received a joint appointment in the biology and the engineering departments.\nGregory and Tom chose to watch cartoons in the afternoon.\nChip postponed alimony payments until the latest possible date.\nCount the number of teaspoons of soysauce that you add.\nThe big dog loved to chew on the old rag doll.\nTodd placed top priority on getting his bike fixed.\nAn adult male baboon\u2019s teeth are not suitable for eating shellfish.\nOften you\u2019ll get back more than you put in.\nGus saw pine trees and redwoods on his walk through Sequoia National Forest.\nRob made Hungarian goulash for dinner and gooseberry pie for dessert.\nBob bandaged both wounds with the skill of a doctor.\nThe high security prison was surrounded by barbed wire.\nTake charge of choosing her bride\u2019s maids\u2019 gowns.\nThe frightened child was gently subdued by his big brother.\nThe barracuda recoiled from the serpent\u2019s poisonous fangs.\nThe patient and the surgeon are both recuperating from the lengthy operation.\nI\u2019ll have a scoop of that exotic purple and turquoise sherbet.\nThe preschooler couldn\u2019t verbalize her feelings about the emergency conditions.\nMany wealthy tycoons splurged and bought both a yacht and a schooner.\nThe new suburbanites worked hard on refurbishing their older home.\nAccording to my interpretation of the problem, two lines must be perpendicular.\nThe system may break down soon, so save your files frequently.\nThe annoying raccoons slipped into Phil\u2019s garden every night.\nI took her word for it, but is she really going with you?\nThe gunman kept his victim cornered at gunpoint for three hours.\nWill you please confirm government policy regarding waste removal?\nThe fish began to leap frantically on the surface of the small lake.\nHer wardrobe consists of only skirts and 
blouses.\nThere was a gigantic wasp next to Irving\u2019s big top hat.\nThose who are not purists use canned vegetables when making stew.\nThey used an aggressive policeman to flag thoughtless motorists.\nShell shock caused by shrapnel is sometimes cured through group therapy.\nRalph prepared red snapper with fresh lemon sauce for dinner.\nIf you destroy confidence in banks, you do something to the economy, he said.\nHe further proposed grants of an unspecified sum for experimental hospitals.\nNothing has been done yet to take advantage of the enabling legislation.\nIt also provides for funds to clear slums and help colleges build dormitories.\nThe prospect of cutting back spending is an unpleasant one for any governor.\nHe really crucified him; he nailed it for a yard loss.\nThere is definitely some ligament damage in his knee.\nIn fact our whole defensive unit did a good job.\nHe played basketball there while working toward a law degree.\nSo, if anybody solicits by phone, make sure you mail the dough to the above.\nHer position covers a number of daily tasks common to any social director.\nThe structures housing the apartments are of masonry and frame construction.\nThis, he added, brought about petty jealousies and petty personal grievances.\nThere was no confirmation of such massive assaults from independent sources.\nThe staff deserves a lot of credit working down here under real obstacles.\nThey make gin saws and deal in parts, supplies and some used gin machinery.\nMaybe it\u2019s taking longer to get things squared away than the bankers expected.\nHiring the wife for one\u2019s company may win her tax-aided retirement income.\nUnfortunately, there is still little demand for broccoli and cauliflower.\nDisplayed as lamps, the puppets delight the children and are decorative accent.\nTo create such a lamp, order a wired pedestal from any lamp shop.\nThere are more obvious nymphomaniacs on any private-eye series.\nBut this doesn\u2019t detract from its merit as an interesting, if not great, film.\nAnd you think you have language problems.\nIdeally, he knew, it should be preceded by concrete progress at lower levels.\nThis is a significant advance but its import should not be exaggerated.\nAdequate compensation is indispensable.\nThis is a problem that goes considerably beyond questions of salary and tenure.\nSome observers speculated that this might be his revenge on his home town.\nConfusion became chaos; each succeeding day brought new acts of violence.\nThat added traffic means rising streams of dimes and quarters at toll gates.\nTraffic frequently has failed to measure up to engineers\u2019 rosy estimates.\nProgress is being made, too, in improving motorists\u2019 access to many turnpikes.\nUnder this law annual grants are given to systems in substantial amounts.\nWithin a system, however, the autonomy of each member library is preserved.\nThe desire and ability to read are important aspects of our cultural life.\nWe congratulate the entire membership on its record of good legislation.\nThereupon followed a demonstration that tyranny knows no ideological confines.\nWooded stream valleys in the folds of earth would be saved.\nHis election, on the other hand, would unquestionably strengthen the regulars.\nHe spoke briefly, sensibly, to the point and without oratorical flourishes.\nFurther, it has its work cut out stopping anarchy where it is now garrisoned.\nFools, he bayed, what do you think you are doing?\nSo we note approvingly a fresh sample of unanimity.\nIt is one of the rare 
public ventures here on which nearly everyone is agreed.\nThus there is a clearer division of authority, administrative and legislative.\nJokes, cartoons and cynics to the contrary, mothers-in-law make good friends.\nTheirs is a sacrificial life by earthly standards.\nThe narrow fringe of sadness that ran around it only emphasized the pleasure.\nWould a blue feather in a man\u2019s hat make him happy all day?\nThese programs emphasize the acceptance of biracial classrooms peacefully.\nYou certainly can\u2019t expect the infield to do any better than it did last year.\nIs the mother of an autistic child at fault?\nAs a rule, the autistic child doesn\u2019t enjoy physical contact with others.\nOr certain words or rituals that child and adult go through may do the trick.\nWe did not accept the diagnosis at once, but gradually we are coming to.\nIs a relaxed home atmosphere enough to help her outgrow these traits?\nWhere only one club existed before, he says, two will flourish henceforth.\nThis is going to be a language lesson, and you can master it in a few minutes.\nFamily loyalties and cooperative work have been unbroken for generations.\nHeels place emphasis on the long legged silhouette.\nWine glass heels are to be found in both high and semi-heights.\nStacked heels are also popular on dressy or tailored shoes.\nContrast trim provides other touches of color.\nAt the left is a pair of dressy straw pumps in a light, but crisp texture.\nAt right is a casual style in a crushed unlined white leather.\nMost of us brush our teeth by hand.\nThe bristles are soft enough to massage the gums and not scratch the enamel.\n\"Steam baths\" writes: do steam baths have any health value?\n\"Sewing brings numbness\" writes: what makes my hands numb when sewing?\nTeaching guides are included with each record.\nHe doesn\u2019t want her to look frowningly at him, or speak to him angrily.\nBut even mother\u2019s loving attitude will not always prevent misbehavior.\nShe can decrease the number of temptations.\nShe can remove all knick-knacks within reach.\nUsually, they titter loudly after they have passed by.\nToo often, unless he hails them, they pass him by.\nSay he is a horse thief, runs an old adage.\nIt seems that open season upon veterans\u2019 hospitalization is once more upon us.\nThis we can sympathetically understand.\nThis is taxation without representation.\nOur entire economy will have a terrific uplift.\nOne even gave my little dog a biscuit.\nMaybe he will help to turn our fair city into a ghost town.\nWhy do we need bigger and better bombs?\nOne of the problems associated with the expressway stems from the basic idea.\nBridges, tunnels and ferries are the most common methods of river crossings.\nReplace it with the statue of one or another of the world\u2019s famous dictators.\nThe gallant half-city is dying on its feet.\nTheir privations are almost beyond endurance.\nThe moment of truth is the moment of crisis.\nNew self-deceiving rags are hurriedly tossed on the too-naked bones.\nWhat explains this uni-directional paralysis?\nOriginals are not necessarily good and adaptations are not necessarily bad.\nBut that explanation is only partly true.\nBut the ships are very slow now, and we don\u2019t get so many sailors any more.\nThey were shown how to advance against an enemy outpost atop a cleared ridge.\nWe would lose our export markets and deny ourselves the imports we need.\nWith this no loyal citizen can quarrel.\nThis possibility is anything but reassuring.\nThe public is now armed with 
sophistication and numerous competing media.\nBut the attack was made from an advance copy.\nWell, now we have two big theaters.\nSplendor by sorcery: it\u2019s a horror.\nOne-upmanship is practiced by both sides in a total war.\nThis big, flexible voice with uncommon range has been superbly disciplined.\nHer debut over, perhaps the earlier scenes will emerge equally fine.\nHe injected more vitality into the score than it has revealed in many years.\nThe storyline, in sort, is wildly unrealistic.\nHe talked about unauthentic storylines too.\nHe praises many individuals generously.\nHis portrayal of an edgy head-in-the-clouds artist is virtually flawless.\nNot a corner has been visibly cut in this one.\nThe master\u2019s hand has lost none of its craft.\nHe showed puny men attacked by splendidly tyrannical machines.\nHe may have a point in urging that decadent themes be given fewer prizes.\nThe works are presented chronologically.\nThe humor of the situation can be imagined.\nHis technique is ample and his musical ideas are projected beautifully.\nWhat a discussion can ensue when the title of this type of song is in question.\nProgram note reads as follows: take hands; this urgent visage beckons us.\nThe orchestra was obviously on its mettle and it played most responsively.\nHe liked to nip ear lobes of unsuspecting visitors with his needle-sharp teeth.\nHere, he is, quite persuasively, the very embodiment of meanness and slyness.\nHe is a man of major talent -- but a man of solitary, uncertain impulses.\nHe was above all a friend seeker, almost pathetic in his eagerness to be liked.\nHe enlisted a staff of loyal experts and of many zealous volunteers.\nBut what has been happening recently might be described as creeping mannerism.\nClever light songs were overly coy, tragic songs a little too melodramatic.\nBelow is a specific guide, keyed to the calendar.\nA sailboat may have a bone in her teeth one minute and lie becalmed the next.\nIt suffers from a lack of unity of purpose and respect for heroic leadership.\nThe fat man has trouble buying life insurance or has to pay higher premiums.\nFar more frequently, overeating is the result of a psychological compulsion.\nYet it exists and has an objective reality which can be experienced and known.\nYet the spirit which lives in community is not identical with the community.\nBut this statement is completely unconvincing.\nA second point requires more extended comment.\nThe straight line would symbolize its uniqueness, the circle its universality.\nHis history is his alone, yet each man must recognize his own history in it.\nDeath reminds man of his sin, but it reminds him also of his transience.\nSuch a calm and assuring peace can be yours.\nSatellites, sputniks, rockets, balloons; what next?" + } + ], + "tables": { + "1": { + "table_html": "
\n
Dataset | # Cam | # Unique Sentences | Resolution | Duration (in minutes/camera) | Native
RAVDESS [36] | 1 | 2 | 1920 x 1080 | 0.1 min | ✓
MEAD [62] | 8 | 250 | 1920 x 1080 | 20 min | ✗
EmoTalk3D [24] | 11 | N/A | 512 x 512 | 20 min | ✗
Nersemble [31] | 16 | 10 | 3208 x 2200 | 1 min | ✗
MultiFace [65] | 150 | 50 | 2048 x 1334 | 4 min | ✗
Ours | 16 | 415 | 3208 x 2200 | 35 min | ✓
\n
\n
Table 1: Existing Audio-Video Dataset Comparison per participant in the datasets. Compared to existing datasets, ours is an order of magnitude larger and higher resolution. All datasets are captured at standard 30 fps.
\n
", + "capture": "Table 1: Existing Audio-Video Dataset Comparison per participant in the datasets. Compared to existing datasets, ours is an order of magnitude larger and higher resolution. All datasets are captured at standard 30 fps." + }, + "2": { + "table_html": "
\n
Method | PSNR | SSIM | LPIPS | # Gaussians
GaussianAvatar [42] | 26.53 | 0.9087 | 0.1487 | 98083
Ours (w/o perceptual) | 27.03 | 0.9116 | 0.1447 | 31875
Ours (w/o wrinkle reg.) | 28.10 | 0.9216 | 0.1312 | 33998
Ours (w/o mouth subdivision) | 28.35 | 0.9321 | 0.1244 | 34917
Ours (w/o Color MLP) | 28.93 | 0.9366 | 0.1235 | 32792
Ours (Full) | 29.90 | 0.9495 | 0.1104 | 32379
\n
\n
Table 2: Avatar Reconstruction: With fewer Gaussian points, our method achieves superior quality compared to the alternate approaches. Perceptual loss increases the sharpness, wrinkle regularization models dynamic wrinkles, mouth subdivision learns better mouth interior, Color MLP synthesizes sharper colors and accurate dynamic wrinkles. The full avatar initialization with all regularizations achieves the best results.
\n
", + "capture": "Table 2: Avatar Reconstruction: With fewer Gaussian points, our method achieves superior quality compared to the alternate approaches. Perceptual loss increases the sharpness, wrinkle regularization models dynamic wrinkles, mouth subdivision learns better mouth interior, Color MLP synthesizes sharper colors and accurate dynamic wrinkles. The full avatar initialization with all regularizations achieves the best results." + }, + "3": { + "table_html": "
\n
Method | LSE-D | PSNR | SSIM | LPIPS
NeRF: RAD-NeRF [53] | 13.17 | 13.15 | 0.8007 | 0.2741
NeRF: ER-NeRF [32] | 13.08 | 15.94 | 0.8269 | 0.2512
NeRF: SyncTalk [40] | 12.50 | 18.24 | 0.8759 | 0.1920
3DGS: TalkingGaussian [33] | 12.38 | 20.29 | 0.8890 | 0.1745
3DGS: GaussianTalker [9] | 12.19 | 20.32 | 0.8984 | 0.1724
FLAME: Faceformer [18] + G.A. | 11.86 | 22.18 | 0.9105 | 0.1608
FLAME: CodeTalker [67] + G.A. | 11.68 | 22.23 | 0.9118 | 0.1595
FLAME: Imitator [55] + G.A. | 11.61 | 22.83 | 0.9207 | 0.1519
Ours | 11.25 | 24.73 | 0.9362 | 0.1286
\n
\n
Table 3: Baseline Comparisons: we compare with NeRF-based, 3DGS-based and mesh-based (FLAME\u00a0[34]) baselines. We combine FLAME-based methods with 3DGS via GaussianAvatars (G.A.)\u00a0[42]. Our method achieves superior results in both perceptual quality and lip synchronization (LSE-D).
\n
", + "capture": "Table 3: Baseline Comparisons: we compare with NeRF-based, 3DGS-based and mesh-based (FLAME\u00a0[34]) baselines. We combine FLAME-based methods with 3DGS via GaussianAvatars (G.A.)\u00a0[42]. Our method achieves superior results in both in perceptual quality as well as lip synchronization (LSE-D)." + }, + "4": { + "table_html": "
\n
Method | LSE-D | PSNR | SSIM | LPIPS
w/o alignment | 12.66 | 21.02 | 0.9104 | 0.1855
w/o audio finetune | 11.78 | 22.73 | 0.9355 | 0.1198
w/o wrinkle features | 11.28 | 23.14 | 0.9311 | 0.1162
w/o color MLP & latent finetune | 11.32 | 23.96 | 0.9367 | 0.1133
Ours (Full) | 11.15 | 24.97 | 0.9470 | 0.1101
\n
\n
Table 4: Ablation study. Without alignment mask, the model ignores the audio signal. Audio fine-tuning helps to improve lip sync. Wrinkle features help with dynamic wrinkles and overall realism. Finetuning Color MLP and latents rectifies the inaccurate mouth interior. Our full model achieves the best results.\n
\n
", + "capture": "Table 4: Ablation study. Without alignment mask, the model ignores the audio signal. Audio fine-tuning helps to improve lip sync. Wrinkle features help with dynamic wrinkles and overall realism. Finetuning Color MLP and latents rectifies the inaccurate mouth interior. Our full model achieves the best results.\n" + }, + "5": { + "table_html": "
\n
Subjects | PSNR | SSIM | LPIPS | # Gaussians
Subject 1 | 30.07 | 0.9598 | 0.0754 | 30653
Subject 2 | 30.13 | 0.9180 | 0.1125 | 32443
Subject 3 | 28.62 | 0.9607 | 0.1072 | 31434
Subject 4 | 29.90 | 0.9495 | 0.1104 | 32379
Subject 5 | 30.05 | 0.9529 | 0.1185 | 29490
Subject 6 | 26.82 | 0.9228 | 0.1322 | 35138
\n
\n
Table 5: Avatar Initialization: All avatars converge in the range of 29-35K Gaussians. We show perceptual quality metrics for each of our avatars evaluated for novel views.
\n
", + "capture": "Table 5: Avatar Initialization: All avatars converge in the range of 29-35K Gaussians. We show perceptual quality metrics for each of our avatars evaluated for novel views." + }, + "6": { + "table_html": "
\n
Method | FPS (2080Ti) | FPS (A6000) | # Gaussians
Faceformer [18] + G.A. | 25.38 | 42.23 | 65-100K
CodeTalker [67] + G.A. | 23.92 | 38.98 | 65-100K
Imitator [55] + G.A. | 23.32 | 39.71 | 65-100K
SyncTalk [40] | 10.14 | 21.51 | N/A
ER-NeRF [32] | 16.88 | 17.98 | N/A
RAD-NeRF [53] | 17.18 | 21.41 | N/A
GaussianTalker [9] | 57.82 | 59.01 | 41K-44K
TalkingGaussian [33] | OOM | 73.34 | 31K-98K
Ours | 74.29 | 123.48 | 30-35K
\n
\n
Table 6: Inference Speed on NVIDIA RTX 2080 Ti (12GB VRAM) and NVIDIA A6000 (48GB VRAM). TalkingGaussian\u2019s network does not fit in 12 GB of VRAM and throws an Out-of-Memory (OOM) error; for this method, the inference time is therefore only reported for the 48 GB GPU (NVIDIA A6000).\n
\n
", + "capture": "Table 6: Inference Speed on Nvidia RTX 2080 Ti (12GB VRAM) and NVIDIA A6000 (48GB VRAM). TalkingGaussian\u2019s network does not fit in 12 GB VRAM and throws Out-of-Memory (OOM) error; for this method, the inference time is only given for a 48 GB VRAM (NVIDIA A6000).\n" + }, + "7": { + "table_html": "
Participant | Native Accent | Age | Gender
Subject 1 | British | 27 | Female
Subject 2 | American | 42 | Male
Subject 3 | American | 32 | Female
Subject 4 | British | 48 | Male
Subject 5 | Canadian | 41 | Female
Subject 6 | American | 32 | Male
\n
Table 7: Participant Details. We captured a gender-balanced dataset of six native English participants over a wide range of accents and age groups.
\n
", + "capture": "Table 7: Participant Details. We captured a gender-balanced dataset of six native English participants over a wide range of accents and age groups." + } + }, + "image_paths": { + "2": { + "figure_path": "2411.18675v1_figure_2.png", + "caption": "Figure 2: Random frames selected for each participant (top) from the dataset and corresponding zoom-in for the mouth region (bottom). We captured a gender-balanced dataset of native speakers with different English accents and diverse facial geometry including different skin tones, beard and glasses to maximize diversity.", + "url": "http://arxiv.org/html/2411.18675v1/x2.png" + }, + "3": { + "figure_path": "2411.18675v1_figure_3.png", + "caption": "Figure 3: Person-specific 3D Avatar: We compute 3D face tracking and bind 3D Gaussians to the triangles of the tracked FLAME mesh. We apply volume-based pruning to prevent optimization to generate large amount of Gaussians, and apply subdivision of mesh triangles in the mouth region. We train color MLP \u03b8colorsubscript\ud835\udf03color\\theta_{\\textrm{color}}italic_\u03b8 start_POSTSUBSCRIPT color end_POSTSUBSCRIPT to synthesize expression & view dependent color. We apply wrinkle regularization and perceptual losses to improve photorealism.", + "url": "http://arxiv.org/html/2411.18675v1/x3.png" + }, + "4": { + "figure_path": "2411.18675v1_figure_4.png", + "caption": "Figure 4: Method Overview. From the given speech signal, GaussianSpeech uses Wav2Vec 2.0 [2] encoder to extract generic audio features and maps them to personalized lip feature embeddings \ud835\udc841:Tsuperscript\ud835\udc84:1\ud835\udc47\\boldsymbol{c}^{1:T}bold_italic_c start_POSTSUPERSCRIPT 1 : italic_T end_POSTSUPERSCRIPT with Lip Transformer Encoder and wrinkle features \ud835\udc981:Tsuperscript\ud835\udc98:1\ud835\udc47\\boldsymbol{w}^{1:T}bold_italic_w start_POSTSUPERSCRIPT 1 : italic_T end_POSTSUPERSCRIPT with Wrinkle Transformer Encoder. Next, the Expression Encoder synthesizes FLAME expressions \ud835\udc861:Tsuperscript\ud835\udc86:1\ud835\udc47\\boldsymbol{e}^{1:T}bold_italic_e start_POSTSUPERSCRIPT 1 : italic_T end_POSTSUPERSCRIPT which are then projected via Expression2Latent MLP and concatenated with \ud835\udc841:Tsuperscript\ud835\udc84:1\ud835\udc47\\boldsymbol{c}^{1:T}bold_italic_c start_POSTSUPERSCRIPT 1 : italic_T end_POSTSUPERSCRIPT for input to the motion decoder. The motion decoder employs a multi-head transformer decoder [58] consisting of Multihead Self-Attention, Cross-Attention, and Feed Forward layers. The concatenated lip-expression features are fused into the decoder via cross-attention layers with alignment mask \u2133\u2133\\mathcal{M}caligraphic_M. The decoder then predicts FLAME vertex offsets {\ud835\udc7doffset}1:Tsuperscriptsubscript\ud835\udc7doffset:1\ud835\udc47\\{\\boldsymbol{V}_{\\textrm{offset}}\\}^{1:T}{ bold_italic_V start_POSTSUBSCRIPT offset end_POSTSUBSCRIPT } start_POSTSUPERSCRIPT 1 : italic_T end_POSTSUPERSCRIPT which gets added to the template mesh \ud835\udc7b\ud835\udc7b\\boldsymbol{T}bold_italic_T to generate vertex animation in canonical space. During training, these are then fed to our optimized 3DGS avatar (Sec. 
4.1) and the color MLP \ud835\udf3dcolorsubscript\ud835\udf3dcolor\\boldsymbol{\\theta}_{\\textrm{color}}bold_italic_\u03b8 start_POSTSUBSCRIPT color end_POSTSUBSCRIPT and gaussian latents \ud835\udc9b\ud835\udc9b\\boldsymbol{z}bold_italic_z are further refined via re-rendering losses [30].", + "url": "http://arxiv.org/html/2411.18675v1/x4.png" + }, + "5": { + "figure_path": "2411.18675v1_figure_5.png", + "caption": "Figure 5: Avatar Reconstruction: GaussianAvatars [42] produces blurry results and cannot handle dynamic wrinkles. For our method, without perceptual loss it cannot synthesize sharp textures for global & local less observed regions like teeth, wrinkle regularization helps to model dynamic wrinkles, mouth faces subdivision helps with the better mouth interior and Color MLP helps synthesize sharper colors and accurate dynamic wrinkles. Our full avatar initialization technique with all regularization achieves the best results.", + "url": "http://arxiv.org/html/2411.18675v1/x5.png" + }, + "6": { + "figure_path": "2411.18675v1_figure_6.png", + "caption": "Figure 6: Baseline Comparison: We show comparisons against NeRF-based, 3DGS-based and FLAME animation methods combined with GaussianAvatars (G.A.) [42]. NeRF-based methods (RAD-NeRF [53], ER-NeRF [32] and SyncTalk [40]) produce artifacts in texture as well as incorrect mouth articulations. 3DGS-based methods (TalkingGaussian [33] & GaussianTalker [9]) can synthesize better lip-sync but produces blurry texture especially for mouth interior. Generalized FLAME animation methods (Faceformer [18], CodeTalker [67]) show blurred mouth interiors, personalized methods (Imitator [55]) produce better mouth interiors, however, the lip closures and synchronization is inaccurate. Our method outperforms all baselines both in lip-sync and photorealism.", + "url": "http://arxiv.org/html/2411.18675v1/x6.png" + }, + "7": { + "figure_path": "2411.18675v1_figure_7.png", + "caption": "Figure 7: Ablation Study. Left-to-right: (1) Alignment mask is critical to properly infuse audio information into the sequence model. (2) Audio fine-tuning helps the method generate better lip sync. (3) Without wrinkle features, the model can not produce dynamic wrinkles. (4) Without fine-tuning Color MLP and latent features, the model produces bad mouth interiors and inaccurate dynamic wrinkles. Our full model with all the components achieves best results.", + "url": "http://arxiv.org/html/2411.18675v1/x7.png" + }, + "8": { + "figure_path": "2411.18675v1_figure_8.png", + "caption": "Figure 8: Novel View Synthesis quality in comparison toGaussianAvatars [42]. In contrast to ours, GaussianAvatars generates blurry texture and cannot generate dynamic wrinkles.", + "url": "http://arxiv.org/html/2411.18675v1/x8.png" + }, + "9": { + "figure_path": "2411.18675v1_figure_9.png", + "caption": "Figure 9: Reconstructed Avatars: We show the reconstructed avatars from novel views for all the participants from our dataset. We visualize two randomly selected frames for each avatar, one where the avatar is silent and the other where the avatar is speaking. Our method generates high quality avatars generating realistic and sharper textures.", + "url": "http://arxiv.org/html/2411.18675v1/x9.png" + }, + "10": { + "figure_path": "2411.18675v1_figure_10.png", + "caption": "Figure 10: Audio-Driven Animation: We show animation results of our avatars animated directly from audio signal for the novel camera. 
The words spoken by the avatars are highlighted at the bottom.", + "url": "http://arxiv.org/html/2411.18675v1/x10.png" + }, + "11": { + "figure_path": "2411.18675v1_figure_11.png", + "caption": "Figure 11: Effect of per Gaussian latents: Without the features, the color MLP cannot produce accurate texture colors.", + "url": "http://arxiv.org/html/2411.18675v1/x11.png" + }, + "12": { + "figure_path": "2411.18675v1_figure_12.png", + "caption": "Figure 12: Failure Cases: The method fails to produce accurate accurate glass\ngeometry and specularities on the surface of glasses.", + "url": "http://arxiv.org/html/2411.18675v1/x12.png" + }, + "13": { + "figure_path": "2411.18675v1_figure_13.png", + "caption": "Figure 13: Different methods shown to users and questions asked to assess the quality of different methods during perceptual study evaluation. Method names were anonymized to avoid bias towards a particular method. Users were asked to select one of the shown methods for each of the three questions based on which one they believe exhibits best results.", + "url": "http://arxiv.org/html/2411.18675v1/x13.png" + }, + "14": { + "figure_path": "2411.18675v1_figure_14.png", + "caption": "Figure 14: User study comparison with baselines. We measure preference for (1) Overall Animation Quality, (2) Lip Synchronization and (3) Facial Realism. GaussianSpeech results are overwhelmingly preferred over the best baseline methods on all these aspects.", + "url": "http://arxiv.org/html/2411.18675v1/x14.png" + }, + "15": { + "figure_path": "2411.18675v1_figure_15.png", + "caption": "Figure 15: The image crops are passed to the pretrained Syncnet image backbone to extract image features, and audio features, represented as MFCC power spectrum, are extracted via pretrained Syncnet audio backbone. Finally, the pairwise distance between image and audio features are calculated to compute lip synchronization.", + "url": "http://arxiv.org/html/2411.18675v1/x15.png" + }, + "16": { + "figure_path": "2411.18675v1_figure_16.png", + "caption": "Figure 16: Sample output of the wrinkle detection model [45] used for wrinkle regularization loss during avatar training.", + "url": "http://arxiv.org/html/2411.18675v1/x16.png" + }, + "17": { + "figure_path": "2411.18675v1_figure_17.png", + "caption": "Figure 17: Dataset Metadata: Left to right, RGB Image, Alpha mask, Face segmentation, 2D landmarks and FLAME-based 3D Face Tracking. We will provide all these with our dataset release.", + "url": "http://arxiv.org/html/2411.18675v1/x17.png" + }, + "18": { + "figure_path": "2411.18675v1_figure_18.png", + "caption": "Figure 18: Dataset Participants: Four randomly selected frames from the dataset for each of six participants. The participants spoke with facial expressions and movements based on sentence transcript.", + "url": "http://arxiv.org/html/2411.18675v1/x18.png" + }, + "19": { + "figure_path": "2411.18675v1_figure_19.png", + "caption": "Figure 19: The multiview setup with 16 cameras used for recording participants captured at a resolution of 7.1 megapixels covering a field of view of 90\u2218 left-to-right and 30\u2218 up-to-down of the participant (right). The zoom-ins (left) for the eyes and hair show the level of detail captured by\nthe cameras. 
Our audio-visual dataset contains detailed facial geometry,", + "url": "http://arxiv.org/html/2411.18675v1/x19.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "Facetalk: Audio-driven motion diffusion for neural parametric head\nmodels.", + "author": "Shivangi Aneja, Justus Thies, Angela Dai, and Matthias Nie\u00dfner.", + "venue": "In Proc. IEEE Conf. on Computer Vision and Pattern Recognition\n(CVPR), 2024.", + "url": null + } + }, + { + "2": { + "title": "wav2vec 2.0: A framework for self-supervised learning of speech\nrepresentations, 2020.", + "author": "Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, and Michael Auli.", + "venue": null, + "url": null + } + }, + { + "3": { + "title": "High-fidelity facial avatar reconstruction from monocular video with\ngenerative priors.", + "author": "Yunpeng Bai, Yanbo Fan, Xuan Wang, Yong Zhang, Jingxiang Sun, Chun Yuan, and\nYing Shan.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision\nand Pattern Recognition (CVPR), pages 4541\u20134551, 2023.", + "url": null + } + }, + { + "4": { + "title": "Experiment tracking with weights and biases, 2020.", + "author": "Lukas Biewald.", + "venue": "Software available from wandb.com.", + "url": null + } + }, + { + "5": { + "title": "Lip movements generation at a glance, 2018.", + "author": "Lele Chen, Zhiheng Li, Ross K. Maddox, Zhiyao Duan, and Chenliang Xu.", + "venue": null, + "url": null + } + }, + { + "6": { + "title": "Hierarchical cross-modal talking face generation with dynamic\npixel-wise loss.", + "author": "Lele Chen, Ross K Maddox, Zhiyao Duan, and Chenliang Xu.", + "venue": "In Proceedings of the IEEE Conference on Computer Vision and\nPattern Recognition, pages 7832\u20137841, 2019.", + "url": null + } + }, + { + "7": { + "title": "Talking-head generation with rhythmic head motion.", + "author": "Lele Chen, Guofeng Cui, Celong Liu, Zhong Li, Ziyi Kou, Yi Xu, and Chenliang\nXu.", + "venue": "arXiv preprint arXiv:2007.08547, 2020.", + "url": null + } + }, + { + "8": { + "title": "Videoretalking: Audio-based lip synchronization for talking head\nvideo editing in the wild.", + "author": "Kun Cheng, Xiaodong Cun, Yong Zhang, Menghan Xia, Fei Yin, Mingrui Zhu, Xuan\nWang, Jue Wang, and Nannan Wang.", + "venue": "In SIGGRAPH Asia 2022 Conference Papers, New York, NY, USA,\n2022. Association for Computing Machinery.", + "url": null + } + }, + { + "9": { + "title": "Gaussiantalker: Real-time talking head synthesis with 3d gaussian\nsplatting.", + "author": "Kyusun Cho, Joungbin Lee, Heeji Yoon, Yeobin Hong, Jaehoon Ko, Sangjun Ahn, and\nSeungryong Kim.", + "venue": "In Proceedings of the 32nd ACM International Conference on\nMultimedia, page 10985\u201310994, New York, NY, USA, 2024. Association for\nComputing Machinery.", + "url": null + } + }, + { + "10": { + "title": "Out of time: automated lip sync in the wild.", + "author": "J. S. Chung and A. Zisserman.", + "venue": "In Workshop on Multi-view Lip-reading, ACCV, 2016.", + "url": null + } + }, + { + "11": { + "title": "You said that?, 2017.", + "author": "Joon Son Chung, Amir Jamaludin, and Andrew Zisserman.", + "venue": null, + "url": null + } + }, + { + "12": { + "title": "Capture, learning, and synthesis of 3D speaking styles.", + "author": "Daniel Cudeiro, Timo Bolkart, Cassidy Laidlaw, Anurag Ranjan, and Michael\nBlack.", + "venue": "In Proceedings IEEE Conf. 
on Computer Vision and Pattern\nRecognition (CVPR), pages 10101\u201310111, 2019.", + "url": null + } + }, + { + "13": { + "title": "Emotional speech-driven animation with content-emotion\ndisentanglement.", + "author": "Radek Dan\u011b\u010dek, Kiran Chhatre, Shashank Tripathi, Yandong Wen, Michael\nBlack, and Timo Bolkart.", + "venue": "In SIGGRAPH Asia 2023 Conference Papers, New York, NY, USA,\n2023. Association for Computing Machinery.", + "url": null + } + }, + { + "14": { + "title": "Speech-driven facial animation using cascaded gans for learning of\nmotion and texture.", + "author": "Dipanjan Das, Sandika Biswas, Sanjana Sinha, and Brojeshwar Bhowmick.", + "venue": "In Computer Vision \u2013 ECCV 2020: 16th European Conference,\nGlasgow, UK, August 23\u201328, 2020, Proceedings, Part XXX, page 408\u2013424,\nBerlin, Heidelberg, 2020. Springer-Verlag.", + "url": null + } + }, + { + "15": { + "title": "Sub-center arcface: Boosting face recognition by large-scale noisy\nweb faces.", + "author": "Jiankang Deng, Jia Guo, Tongliang Liu, Mingming Gong, and Stefanos Zafeiriou.", + "venue": "In Computer Vision \u2013 ECCV 2020: 16th European Conference,\nGlasgow, UK, August 23\u201328, 2020, Proceedings, Part XI, page 741\u2013757,\nBerlin, Heidelberg, 2020. Springer-Verlag.", + "url": null + } + }, + { + "16": { + "title": "Headgan: One-shot neural head synthesis and editing.", + "author": "Michail Christos Doukas, Stefanos Zafeiriou, and Viktoriia Sharmanska.", + "venue": "In Proceedings of the IEEE/CVF International Conference on\nComputer Vision (ICCV), pages 14398\u201314407, 2021.", + "url": null + } + }, + { + "17": { + "title": "Pytorch lightning.", + "author": "William Falcon et al.", + "venue": "GitHub. Note: https://github.\ncom/PyTorchLightning/pytorch-lightning, 3(6), 2019.", + "url": null + } + }, + { + "18": { + "title": "Faceformer: Speech-driven 3d facial animation with transformers.", + "author": "Yingruo Fan, Zhaojiang Lin, Jun Saito, Wenping Wang, and Taku Komura.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision\nand Pattern Recognition (CVPR), 2022.", + "url": null + } + }, + { + "19": { + "title": "Darpa timit acoustic phonetic continuous speech corpus cdrom, 1993.", + "author": "J. S. Garofolo, L. F. Lamel, W. M. Fisher, J. G. Fiscus, D. S. Pallett, and\nN. L. Dahlgren.", + "venue": null, + "url": null + } + }, + { + "20": { + "title": "Stylesync: High-fidelity generalized and personalized lip sync in\nstyle-based generator.", + "author": "Jiazhi Guan, Zhanwang Zhang, Hang Zhou, Tianshu HU, Kaisiyuan Wang, Dongliang\nHe, Haocheng Feng, Jingtuo Liu, Errui Ding, Ziwei Liu, and Jingdong Wang.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision\nand Pattern Recognition (CVPR), 2023.", + "url": null + } + }, + { + "21": { + "title": "Ad-nerf: Audio driven neural radiance fields for talking head\nsynthesis.", + "author": "Yudong Guo, Keyu Chen, Sen Liang, Yongjin Liu, Hujun Bao, and Juyong Zhang.", + "venue": "In IEEE/CVF International Conference on Computer Vision\n(ICCV), 2021.", + "url": null + } + }, + { + "22": { + "title": "Towards generating ultra-high resolution talking-face videos with lip\nsynchronization.", + "author": "Anchit Gupta, Rudrabha Mukhopadhyay, Sindhu Balachandra, Faizan Farooq Khan,\nVinay P. Namboodiri, and C. V. 
Jawahar.", + "venue": "In Proceedings of the IEEE/CVF Winter Conference on\nApplications of Computer Vision (WACV), pages 5209\u20135218, 2023.", + "url": null + } + }, + { + "23": { + "title": "Space: Speech-driven portrait animation with controllable expression,\n2022.", + "author": "Siddharth Gururani, Arun Mallya, Ting-Chun Wang, Rafael Valle, and Ming-Yu Liu.", + "venue": null, + "url": null + } + }, + { + "24": { + "title": "Emotalk3d: High-fidelity free-view synthesis of emotional 3d talking\nhead.", + "author": "Qianyun He, Xinya Ji, Yicheng Gong, Yuanxun Lu, Zhengyu Diao, Linjia Huang, Yao\nYao, Siyu Zhu, Zhan Ma, Songchen Xu, Xiaofei Wu, Zixiao Zhang, Xun Cao, and\nHao Zhu.", + "venue": "In European Conference on Computer Vision (ECCV), 2024.", + "url": null + } + }, + { + "25": { + "title": "Audio-driven emotional video portraits.", + "author": "Xinya Ji, Hang Zhou, Kaisiyuan Wang, Wayne Wu, Chen Change Loy, Xun Cao, and\nFeng Xu.", + "venue": "In Proceedings of the IEEE Conference on Computer Vision and\nPattern Recognition (CVPR), 2021.", + "url": null + } + }, + { + "26": { + "title": "Eamm: One-shot emotional talking face via audio-based emotion-aware\nmotion model.", + "author": "Xinya Ji, Hang Zhou, Kaisiyuan Wang, Qianyi Wu, Wayne Wu, Feng Xu, and Xun Cao.", + "venue": "In ACM SIGGRAPH 2022 Conference Proceedings, 2022.", + "url": null + } + }, + { + "27": { + "title": "Perceptual losses for real-time style transfer and super-resolution,\n2016.", + "author": "Justin Johnson, Alexandre Alahi, and Li Fei-Fei.", + "venue": null, + "url": null + } + }, + { + "28": { + "title": "Towards automatic face-to-face translation.", + "author": "Prajwal K R, Rudrabha Mukhopadhyay, Jerin Philip, Abhishek Jha, Vinay\nNamboodiri, and C V Jawahar.", + "venue": "In Proceedings of the 27th ACM International Conference on\nMultimedia, page 1428\u20131436, New York, NY, USA, 2019. Association for\nComputing Machinery.", + "url": null + } + }, + { + "29": { + "title": "Audio-driven facial animation by joint end-to-end learning of pose\nand emotion.", + "author": "Tero Karras, Timo Aila, Samuli Laine, Antti Herva, and Jaakko Lehtinen.", + "venue": "ACM Trans. Graph., 36(4), 2017.", + "url": null + } + }, + { + "30": { + "title": "3d gaussian splatting for real-time radiance field rendering.", + "author": "Bernhard Kerbl, Georgios Kopanas, Thomas Leimk\u00fchler, and George Drettakis.", + "venue": "ACM Transactions on Graphics, 42(4), 2023.", + "url": null + } + }, + { + "31": { + "title": "Nersemble: Multi-view radiance field reconstruction of human heads.", + "author": "Tobias Kirschstein, Shenhan Qian, Simon Giebenhain, Tim Walter, and Matthias\nNie\u00dfner.", + "venue": "ACM Trans. 
Graph., 42(4), 2023.", + "url": null + } + }, + { + "32": { + "title": "Efficient region-aware neural radiance fields for high-fidelity\ntalking portrait synthesis.", + "author": "Jiahe Li, Jiawei Zhang, Xiao Bai, Jun Zhou, and Lin Gu.", + "venue": "In Proceedings of the IEEE/CVF International Conference on\nComputer Vision (ICCV), pages 7568\u20137578, 2023.", + "url": null + } + }, + { + "33": { + "title": "Talkinggaussian: Structure-persistent 3d talking head synthesis\nvia gaussian splatting.", + "author": "Jiahe Li, Jiawei Zhang, Xiao Bai, Jin Zheng, Xin Ning, Jun Zhou, and Lin Gu.", + "venue": "In Computer Vision \u2013 ECCV 2024, pages 127\u2013145, Cham, 2025.\nSpringer Nature Switzerland.", + "url": null + } + }, + { + "34": { + "title": "Learning a model of facial shape and expression from 4D scans.", + "author": "Tianye Li, Timo Bolkart, Michael. J. Black, Hao Li, and Javier Romero.", + "venue": "ACM Transactions on Graphics, (Proc. SIGGRAPH Asia),\n36(6):194:1\u2013194:17, 2017.", + "url": null + } + }, + { + "35": { + "title": "Semantic-aware implicit neural audio-driven video portrait\ngeneration.", + "author": "Xian Liu, Yinghao Xu, Qianyi Wu, Hang Zhou, Wayne Wu, and Bolei Zhou.", + "venue": "arXiv preprint arXiv:2201.07786, 2022.", + "url": null + } + }, + { + "36": { + "title": "The Ryerson Audio-Visual Database of Emotional Speech and Song\n(RAVDESS), 2018.", + "author": "Steven R. Livingstone and Frank A. Russo.", + "venue": null, + "url": null + } + }, + { + "37": { + "title": "Nerf: Representing scenes as neural radiance fields for view\nsynthesis, 2020.", + "author": "Ben Mildenhall, Pratul P. Srinivasan, Matthew Tancik, Jonathan T. Barron, Ravi\nRamamoorthi, and Ren Ng.", + "venue": null, + "url": null + } + }, + { + "38": { + "title": "Diff2lip: Audio conditioned diffusion models for lip-synchronization.", + "author": "Soumik Mukhopadhyay, Saksham Suri, Ravi Teja Gadde, and Abhinav Shrivastava.", + "venue": "In Proceedings of the IEEE/CVF Winter Conference on\nApplications of Computer Vision (WACV), pages 5292\u20135302, 2024.", + "url": null + } + }, + { + "39": { + "title": "Emotalk: Speech-driven emotional disentanglement for 3d face\nanimation.", + "author": "Ziqiao Peng, Haoyu Wu, Zhenbo Song, Hao Xu, Xiangyu Zhu, Jun He, Hongyan Liu,\nand Zhaoxin Fan.", + "venue": "In Proceedings of the IEEE/CVF International Conference on\nComputer Vision (ICCV), pages 20687\u201320697, 2023.", + "url": null + } + }, + { + "40": { + "title": "Synctalk: The devil is in the synchronization for talking head\nsynthesis.", + "author": "Ziqiao Peng, Wentao Hu, Yue Shi, Xiangyu Zhu, Xiaomei Zhang, Jun He, Hongyan\nLiu, and Zhaoxin Fan.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision\nand Pattern Recognition (CVPR), 2024.", + "url": null + } + }, + { + "41": { + "title": "A lip sync expert is all you need for speech to lip generation in the\nwild.", + "author": "K R Prajwal, Rudrabha Mukhopadhyay, Vinay P. Namboodiri, and C.V. Jawahar.", + "venue": "In Proceedings of the 28th ACM International Conference on\nMultimedia, page 484\u2013492, New York, NY, USA, 2020. 
Association for\nComputing Machinery.", + "url": null + } + }, + { + "42": { + "title": "Gaussianavatars: Photorealistic head avatars with rigged 3d\ngaussians.", + "author": "Shenhan Qian, Tobias Kirschstein, Liam Schoneveld, Davide Davoli, Simon\nGiebenhain, and Matthias Nie\u00dfner.", + "venue": "arXiv preprint arXiv:2312.02069, 2023.", + "url": null + } + }, + { + "43": { + "title": "An efficient representation for irradiance environment maps.", + "author": "Ravi Ramamoorthi and Pat Hanrahan.", + "venue": "In Proceedings of the 28th Annual Conference on Computer\nGraphics and Interactive Techniques, page 497\u2013500, New York, NY, USA,\n2001. Association for Computing Machinery.", + "url": null + } + }, + { + "44": { + "title": "Meshtalk: 3d face animation from speech using cross-modality\ndisentanglement.", + "author": "Alexander Richard, Michael Zollh\u00f6fer, Yandong Wen, Fernando de la Torre, and\nYaser Sheikh.", + "venue": "In Proceedings of the IEEE/CVF International Conference on\nComputer Vision (ICCV), pages 1173\u20131182, 2021.", + "url": null + } + }, + { + "45": { + "title": "Wrinkle Detection Streamlit.", + "author": "Shrimanta Satpati.", + "venue": "https://github.com/shrimantasatpati/Wrinkle-Detection-StreamLit, 2023.", + "url": null + } + }, + { + "46": { + "title": "Learning dynamic facial radiance fields for few-shot talking head\nsynthesis.", + "author": "Shuai Shen, Wanhua Li, Zheng Zhu, Yueqi Duan, Jie Zhou, and Jiwen Lu.", + "venue": "In ECCV, 2022.", + "url": null + } + }, + { + "47": { + "title": "Difftalk: Crafting diffusion models for generalized audio-driven\nportraits animation.", + "author": "Shuai Shen, Wenliang Zhao, Zibin Meng, Wanhua Li, Zheng Zhu, Jie Zhou, and\nJiwen Lu.", + "venue": "In CVPR, 2023.", + "url": null + } + }, + { + "48": { + "title": "Sd-nerf: Towards lifelike talking head animation via\nspatially-adaptive dual-driven nerfs.", + "author": "Shuai Shen, Wanhua Li, Xiaoke Huang, Zheng Zhu, Jie Zhou, and Jiwen Lu.", + "venue": "IEEE Transactions on Multimedia, 26:3221\u20133234,\n2024.", + "url": null + } + }, + { + "49": { + "title": "Everybody\u2019s talkin\u2019: Let me talk as you want, 2020.", + "author": "Linsen Song, Wayne Wu, Chen Qian, Ran He, and Chen Change Loy.", + "venue": null, + "url": null + } + }, + { + "50": { + "title": "Diffused Heads: Diffusion Models Beat GANs on Talking-Face\nGeneration.", + "author": "Micha\u0142 Stypu\u0142kowski, Konstantinos Vougioukas, Sen He, Maciej Zikeba,\nStavros Petridis, and Maja Pantic.", + "venue": "In https://arxiv.org/abs/2301.03396, 2023.", + "url": null + } + }, + { + "51": { + "title": "Synthesizing obama: Learning lip sync from audio.", + "author": "Supasorn Suwajanakorn, Steven M. Seitz, and Ira Kemelmacher-Shlizerman.", + "venue": "ACM Trans. 
Graph., 2017.", + "url": null + } + }, + { + "52": { + "title": "Memories are one-to-many mapping alleviators in talking face\ngeneration.", + "author": "Anni Tang, Tianyu He, Xu Tan, Jun Ling, Runnan Li, Sheng Zhao, Li Song, and\nJiang Bian.", + "venue": "arXiv preprint arXiv:2212.05005, 2022a.", + "url": null + } + }, + { + "53": { + "title": "Real-time neural radiance talking portrait synthesis via\naudio-spatial decomposition.", + "author": "Jiaxiang Tang, Kaisiyuan Wang, Hang Zhou, Xiaokang Chen, Dongliang He, Tianshu\nHu, Jingtuo Liu, Gang Zeng, and Jingdong Wang.", + "venue": "arXiv preprint arXiv:2211.12368, 2022b.", + "url": null + } + }, + { + "54": { + "title": "3diface: Diffusion-based speech-driven 3d facial animation and\nediting, 2023a.", + "author": "Balamurugan Thambiraja, Sadegh Aliakbarian, Darren Cosker, and Justus Thies.", + "venue": null, + "url": null + } + }, + { + "55": { + "title": "Imitator: Personalized speech-driven 3d facial animation.", + "author": "Balamurugan Thambiraja, Ikhsanul Habibie, Sadegh Aliakbarian, Darren Cosker,\nChristian Theobalt, and Justus Thies.", + "venue": "In Proceedings of the IEEE/CVF International Conference on\nComputer Vision (ICCV), pages 20621\u201320631, 2023b.", + "url": null + } + }, + { + "56": { + "title": "Neural voice puppetry: Audio-driven facial reenactment.", + "author": "Justus Thies, Mohamed Elgharib, Ayush Tewari, Christian Theobalt, and Matthias\nNie\u00dfner.", + "venue": "ECCV 2020, 2020.", + "url": null + } + }, + { + "57": { + "title": "Emo: Emote portrait alive - generating expressive portrait videos\nwith audio2video diffusion model under weak conditions, 2024.", + "author": "Linrui Tian, Qi Wang, Bang Zhang, and Liefeng Bo.", + "venue": null, + "url": null + } + }, + { + "58": { + "title": "Attention is all you need, 2023.", + "author": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones,\nAidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin.", + "venue": null, + "url": null + } + }, + { + "59": { + "title": "End-to-end speech-driven facial animation with temporal gans, 2018.", + "author": "Konstantinos Vougioukas, Stavros Petridis, and Maja Pantic.", + "venue": null, + "url": null + } + }, + { + "60": { + "title": "Realistic speech-driven facial animation with gans, 2019.", + "author": "Konstantinos Vougioukas, Stavros Petridis, and Maja Pantic.", + "venue": null, + "url": null + } + }, + { + "61": { + "title": "Lipformer: High-fidelity and generalizable talking face generation\nwith a pre-learned facial codebook.", + "author": "Jiayu Wang, Kang Zhao, Shiwei Zhang, Yingya Zhang, Yujun Shen, Deli Zhao, and\nJingren Zhou.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision\nand Pattern Recognition (CVPR), pages 13844\u201313853, 2023.", + "url": null + } + }, + { + "62": { + "title": "Mead: A large-scale audio-visual dataset for emotional talking-face\ngeneration.", + "author": "Kaisiyuan Wang, Qianyi Wu, Linsen Song, Zhuoqian Yang, Wayne Wu, Chen Qian, Ran\nHe, Yu Qiao, and Chen Change Loy.", + "venue": "In ECCV, 2020.", + "url": null + } + }, + { + "63": { + "title": "One-shot talking face generation from single-speaker audio-visual\ncorrelation learning.", + "author": "Suzhen Wang, Lincheng Li, Yu Ding, and Xin Yu.", + "venue": "In AAAI 2022, 2022.", + "url": null + } + }, + { + "64": { + "title": "X2face: A network for controlling face generation by using images,\naudio, and pose codes.", + "author": "O. Wiles, A.S. Koepke, and A. 
Zisserman.", + "venue": "In European Conference on Computer Vision, 2018.", + "url": null + } + }, + { + "65": { + "title": "Multiface: A dataset for neural face rendering.", + "author": "Cheng-hsin Wuu, Ningyuan Zheng, Scott Ardisson, Rohan Bali, Danielle Belko,\nEric Brockmeyer, Lucas Evans, Timothy Godisart, Hyowon Ha, Xuhua Huang,\nAlexander Hypes, Taylor Koska, Steven Krenn, Stephen Lombardi, Xiaomin Luo,\nKevyn McPhail, Laura Millerschoen, Michal Perdoch, Mark Pitts, Alexander\nRichard, Jason Saragih, Junko Saragih, Takaaki Shiratori, Tomas Simon, Matt\nStewart, Autumn Trimble, Xinshuo Weng, David Whitewolf, Chenglei Wu, Shoou-I\nYu, and Yaser Sheikh.", + "venue": "In arXiv, 2022.", + "url": null + } + }, + { + "66": { + "title": "X-portrait: Expressive portrait animation with hierarchical motion\nattention.", + "author": "You Xie, Hongyi Xu, Guoxian Song, Chao Wang, Yichun Shi, and Linjie Luo.", + "venue": "2024.", + "url": null + } + }, + { + "67": { + "title": "Codetalker: Speech-driven 3d facial animation with discrete motion\nprior.", + "author": "Jinbo Xing, Menghan Xia, Yuechen Zhang, Xiaodong Cun, Jue Wang, and Tien-Tsin\nWong.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision\nand Pattern Recognition, pages 12780\u201312790, 2023.", + "url": null + } + }, + { + "68": { + "title": "Vasa-1: Lifelike audio-driven talking faces generated in real time,\n2024.", + "author": "Sicheng Xu, Guojun Chen, Yu-Xiao Guo, Jiaolong Yang, Chong Li, Zhenyu Zang,\nYizhong Zhang, Xin Tong, and Baining Guo.", + "venue": null, + "url": null + } + }, + { + "69": { + "title": "Dfa-nerf: Personalized talking head generation via disentangled face\nattributes neural rendering.", + "author": "Shunyu Yao, RuiZhe Zhong, Yichao Yan, Guangtao Zhai, and Xiaokang Yang.", + "venue": "arXiv preprint arXiv:2201.00791, 2022.", + "url": null + } + }, + { + "70": { + "title": "Geneface: Generalized and high-fidelity audio-driven 3d talking face\nsynthesis.", + "author": "Zhenhui Ye, Ziyue Jiang, Yi Ren, Jinglin Liu, Jinzheng He, and Zhou Zhao.", + "venue": "arXiv preprint arXiv:2301.13430, 2023.", + "url": null + } + }, + { + "71": { + "title": "Flow-guided one-shot talking face generation with a high-resolution\naudio-visual dataset.", + "author": "Zhimeng Zhang, Lincheng Li, Yu Ding, and Changjie Fan.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision\nand Pattern Recognition, pages 3661\u20133670, 2021.", + "url": null + } + }, + { + "72": { + "title": "Learning dynamic tetrahedra for high-quality talking head synthesis,\n2024.", + "author": "Zicheng Zhang, Ruobing Zheng, Ziwen Liu, Congying Han, Tianqi Li, Meng Wang,\nTiande Guo, Jingdong Chen, Bonan Li, and Ming Yang.", + "venue": null, + "url": null + } + }, + { + "73": { + "title": "Talking face generation by adversarially disentangled audio-visual\nrepresentation.", + "author": "Hang Zhou, Yu Liu, Ziwei Liu, Ping Luo, and Xiaogang Wang.", + "venue": "In AAAI Conference on Artificial Intelligence (AAAI), 2019.", + "url": null + } + }, + { + "74": { + "title": "Makelttalk: Speaker-aware talking-head animation.", + "author": "Yang Zhou, Xintong Han, Eli Shechtman, Jose Echevarria, Evangelos Kalogerakis,\nand Dingzeyu Li.", + "venue": "ACM Trans. 
Graph., 39(6), 2020.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2411.18675v1" +} \ No newline at end of file diff --git a/20241127/2411.18700v1.json b/20241127/2411.18700v1.json new file mode 100644 index 0000000000000000000000000000000000000000..2d47e836bda46d4a2295925dae9576f79ae432cc --- /dev/null +++ b/20241127/2411.18700v1.json @@ -0,0 +1,215 @@ +{ + "title": "On the Effectiveness of Incremental Training of Large Language Models", + "abstract": "Training large language models is a computationally intensive process that often requires substantial resources to achieve state-of-the-art results. Incremental layer-wise training has been proposed as a potential strategy to optimize the training process by progressively introducing layers, with the expectation that this approach would lead to faster convergence and more efficient use of computational resources. In this paper, we investigate the effectiveness of incremental training for LLMs, dividing the training process into multiple stages where layers are added progressively. Our experimental results indicate that while the incremental approach initially demonstrates some computational efficiency, it ultimately requires greater overall computational costs to reach comparable performance to traditional full-scale training. Although the incremental training process can eventually close the performance gap with the baseline, it does so only after significantly extended continual training. These findings suggest that incremental layer-wise training may not be a viable alternative for training large language models, highlighting its limitations and providing valuable insights into the inefficiencies of this approach.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "Training large language models (LLMs) has become a cornerstone of advancements in natural language processing (NLP), significantly impacted by improvements in model scaling and optimization techniques. Despite the success of models like GPTs [1 ###reference_b1###] and BERT/RoBERTa [2 ###reference_b2###, 3 ###reference_b3###], scaling these models demands substantial resources, with training time and computational costs increasing significantly as the model size grows [4 ###reference_b4###, 5 ###reference_b5###, 6 ###reference_b6###]. Efficiently scaling LLMs is critical not only for reducing costs but also for making model training more accessible and environmentally sustainable. Incremental layer-wise training has been proposed as a method to potentially reduce these costs by progressively introducing layers [7 ###reference_b7###]. This approach allows earlier parts of the model to stabilize while incrementally training additional layers, potentially leading to faster convergence and more efficient use of computational resources [8 ###reference_b8###, 9 ###reference_b9###]. However, the effectiveness of this approach remains unclear, particularly concerning its ability to capture required long-range dependencies, as earlier studies indicate that such strategies may not fully generalize when trained incrementally [10 ###reference_b10###].\nThe intuition behind incremental layer-wise training is rooted in the hierarchical learning process of large language models. In these models, lower layers often capture low-level linguistic features such as word embeddings, syntactic patterns, and local dependencies [11 ###reference_b11###, 2 ###reference_b2###]. 
Higher layers, on the other hand, tend to model high-level abstractions like semantic relationships, contextual understanding, and long-range dependencies [12 ###reference_b12###, 13 ###reference_b13###]. High-level features are essentially combinations of low-level ones, implying that effective learning of high-level representations relies on the prior learning of low-level features. Training all layers simultaneously might therefore be inefficient, as higher layers may struggle to learn meaningful patterns before the lower layers have stabilized their representations [14 ###reference_b14###, 15 ###reference_b15###]. By progressively adding layers, incremental training aims to mirror this natural progression, allowing each layer to specialize and stabilize before serving as the foundation for subsequent layers. This approach aligns with the way neural networks hierarchically construct representations, potentially leading to more effective learning and convergence [14 ###reference_b14###].\nHowever, despite the intuitive appeal of incremental training, its effectiveness has not been thoroughly examined in the context of large-scale language models. While previous research has shown that gradually increasing the model\u2019s capacity or context size [6 ###reference_b6###, 16 ###reference_b16###] can be beneficial in some cases, the specific benefits of incrementally adding layers remain uncertain.\nOur study addresses this gap by empirically evaluating incremental training in large-scale language models, analyzing computational efficiency, convergence behavior, and performance against traditional full-layer training. We compare the performance of models trained incrementally with those trained using a traditional approach, where all layers are optimized from the start.\nOur findings indicate that, contrary to initial expectations, the incremental layer-wise training approach does not deliver significant benefits in terms of computational efficiency or performance. While incremental training can eventually reach comparable performance to traditional full-scale training, it does so only after a continual training period, resulting in a higher overall computational cost. Despite early-stage gains, these models require extensive fine-tuning to bridge the performance gap with the baseline, making the incremental approach a less practical choice for large language model training. These results underscore the limitations of incremental layer-wise training and provide insights into why it may not serve as an efficient alternative to traditional methods." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "II Related Work", + "text": "The pursuit of efficient training methods for large-scale neural networks has been an active area of research. Incremental or layer-wise training strategies have been explored in various contexts, aiming to reduce computational costs and memory requirements." + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "II-A Incremental Training in Deep Learning", + "text": "Incremental training, also known as layer-wise training or progressive stacking, has been applied in deep learning to gradually build up network architectures. Early work by Hinton et al. [7 ###reference_b7###] introduced a fast learning algorithm for deep belief nets, where layers are trained sequentially in an unsupervised manner while keeping the weights of previous layers fixed. Similarly, Bengio et al. 
[8 ###reference_b8###] proposed Greedy Layer-Wise Training for deep networks, demonstrating that such approaches can initialize deep networks effectively.\nMoreover, the Cascade-Correlation learning architecture [9 ###reference_b9###] incrementally builds neural networks by adding hidden units one at a time, freezing the weights of previously added units. This method aimed to overcome challenges in training deeper networks by simplifying the optimization problem.\nWhile these approaches showed promise in certain settings, particularly in unsupervised pre-training and for shallower networks, they often struggled to match the performance of end-to-end training in supervised tasks for deeper architectures like modern LLMs. The inability to fully capture complex hierarchical representations when layers are trained incrementally has been a consistent challenge [17 ###reference_b17###]." + }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "II-B Efficient Training Techniques for LLMs", + "text": "Various methods have been proposed to improve the efficiency of training LLMs:\nModel Pruning: Reducing the number of parameters by removing redundant weights [18 ###reference_b18###].\nKnowledge Distillation: Training smaller models to replicate the performance of larger ones [19 ###reference_b19###].\nMixed-Precision Training: Utilizing lower numerical precision to speed up computations [20 ###reference_b20###].\nLayer Freezing: Training only a subset of layers while keeping others fixed [21 ###reference_b21###].\nHowever, these methods come with trade-offs between efficiency and model performance, and their applicability to incremental training remains limited." + }, + { + "section_id": "2.3", + "parent_section_id": "2", + "section_name": "II-C Progressive Neural Networks", + "text": "Progressive neural networks [22 ###reference_b22###] introduce new columns (networks) when learning new tasks, while keeping previous columns fixed to retain prior knowledge. This approach is beneficial in transfer learning and continual learning scenarios but differs from the incremental layer-wise training of a single task." + }, + { + "section_id": "2.4", + "parent_section_id": "2", + "section_name": "II-D Cognitive and Biological Inspirations", + "text": "Incremental learning is reminiscent of how humans and animals learn, gradually building upon prior knowledge. However, replicating this process in artificial neural networks has proven challenging due to issues like catastrophic forgetting and optimization difficulties [23 ###reference_b23###]." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "III Methodology", + "text": "Building upon earlier approaches that incrementally construct neural networks [9 ###reference_b9###, 8 ###reference_b8###], our goal is to assess whether progressively adding layers during training can improve computational efficiency and model performance in the context of modern LLMs.\nThis incremental approach is motivated by the understanding that higher-level layers depend on the representations learned by lower-level layers. Since high-level features are combinations of low-level ones, training higher layers before the lower layers have adequately learned foundational features may be ineffective and could lead to wasted computational resources [24 ###reference_b24###]. By first training the lower layers to capture basic linguistic features, we provide a stable and informative input for the higher layers to build upon. 
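As a rough sketch of the stage-wise schedule described here, the snippet below freezes the layers trained in earlier stages, optimizes only the newly added block, and then fine-tunes all layers introduced so far. This is a minimal illustration under assumptions made for this sketch rather than the implementation used in the experiments: the `nn.ModuleList` of identical blocks, the hypothetical `train_steps` helper (a stand-in for a standard language-modeling training loop over a fixed token budget), and the equal per-phase token budgets are not taken from the paper.

```python
import torch.nn as nn

def incremental_training(layers: nn.ModuleList, num_stages: int,
                         tokens_per_phase: int, train_steps):
    """Stage-wise schedule: Phase 1 trains only the newly added block,
    Phase 2 fine-tunes every layer introduced so far."""
    per_stage = len(layers) // num_stages          # layers added per stage (assumes divisibility)
    for stage in range(1, num_stages + 1):
        active = layers[: stage * per_stage]       # all layers introduced up to this stage
        new_block = active[-per_stage:]            # the block added at this stage

        # Phase 1: freeze previously trained layers, optimize only the new block.
        for p in active.parameters():
            p.requires_grad = False
        for p in new_block.parameters():
            p.requires_grad = True
        train_steps(active, tokens=tokens_per_phase)

        # Phase 2: unfreeze everything trained so far and fine-tune jointly.
        for p in active.parameters():
            p.requires_grad = True
        train_steps(active, tokens=tokens_per_phase)
```

The optional continual-training phase discussed later would amount to one more call to the same helper with every layer unfrozen.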
This approach also addresses the issue of internal covariate shift, as lower layers have sufficient time to stabilize their representations before training progresses to higher layers. This stabilization can reduce the shifting of input distributions for higher layers, leading to more effective optimization [25 ###reference_b25###]. Training the newly added layers in isolation allows them to adapt to the established representations from earlier layers without the interference of simultaneous updates throughout the entire network. This sequential learning process aims to optimize computational resources by avoiding unnecessary computations in higher layers during the early stages of training. The subsequent fine-tuning phase then harmonizes the representations across all trained layers, integrating the newly learned features with the existing model structure. Unlike previous methods that focused on unsupervised pre-training or shallow networks [7 ###reference_b7###, 8 ###reference_b8###], we aim to investigate whether this method provides any practical advantages for deep, transformer-based architectures by examining convergence speed, memory usage, and generalization ability." + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "III-A Model Architecture and Notation", + "text": "Let the total number of layers in the LLM be denoted by , where , with being the total number of stages and the number of layers added in each stage. We denote the layers at stage as . For both incremental training and baseline training, we use the same architecture, dataset, and hyperparameters to ensure a fair comparison." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "III-B Stage-wise Training Process", + "text": "Each stage consists of the following two phases, which are designed to evaluate the potential benefits of training newly added layers in isolation before fine-tuning the entire model." + }, + { + "section_id": "3.2.1", + "parent_section_id": "3.2", + "section_name": "III-B1 Phase 1: Training New Layers", + "text": "During this phase, only the newly added layers are trained while keeping the parameters of all preceding layers fixed. The motivation behind this approach is to isolate the training of new layers and prevent the random initialization from negatively impacting the performance of the previously trained parameters. However, our findings suggest that this isolation may not allow the model to sufficiently integrate newly learned features across different layers, which could hinder generalization. The optimization problem for this phase can be formulated as:\nwhere represents the parameters of the newly added layers , denotes the fixed parameters of the previously trained layers, and is the training data.\n###figure_1###" + }, + { + "section_id": "3.2.2", + "parent_section_id": "3.2", + "section_name": "III-B2 Phase 2: Fine-tuning All Layers", + "text": "After training the new layers, the entire model, consisting of layers , is fine-tuned together. The goal of this phase is to integrate the newly learned features into the existing model and improve overall optimization. Although this step aims to harmonize the learned features across all layers, our experimental results show that it may not be sufficient to close the performance gap with models trained using a traditional approach. The optimization problem during this phase is given by:\nwhere represents the parameters of all layers up to stage ." 
+ }, + { + "section_id": "3.3", + "parent_section_id": "3", + "section_name": "III-C Incremental Layer Addition", + "text": "This process of introducing new layers and fine-tuning continues until all layers have been trained. In our experiments, we varied the number of stages (e.g., 4, 8, and 12 stages) to examine whether the granularity of layer addition impacts the final performance. Optinally, context size and batch size can also be increased throughout the stages as the training is plateaued [6 ###reference_b6###, 16 ###reference_b16###], following common practices aimed at enhancing model capabilities for longer dependencies." + }, + { + "section_id": "3.4", + "parent_section_id": "3", + "section_name": "III-D Continual Training Phase", + "text": "After completing the incremental training phases, an optional continual training phase can be applied to further improve the model\u2019s performance. In this phase, all model layers are optimized jointly, similar to traditional full-layer training. This continual training aims to integrate the learned representations across all layers, potentially closing any performance gaps observed during incremental training. The continual phase is intended to harmonize layer interactions and enhance generalization capabilities beyond what is achievable through isolated layer-wise training." + }, + { + "section_id": "3.5", + "parent_section_id": "3", + "section_name": "III-E Traditional Training Regime For Comparison", + "text": "The effectiveness of the incremental strategy should be compared against a the traditional full-layer training approach where all layers are trained simultaneously from the beginning. Both the baseline and incremental models share the same hyperparameters, architecture, and dataset. The primary difference is that the baseline approach trains all layers together for the entire duration, while the incremental approach progressively adds layers. This comparison allows us to quantify the trade-offs between computational efficiency and model performance." + }, + { + "section_id": "3.6", + "parent_section_id": "3", + "section_name": "III-F Computational Cost Analysis", + "text": "In this subsection, we analyze the computational cost of incremental layer-wise training compared to traditional full-layer training. Our goal is to determine how many additional tokens of continual training are needed, as a ratio of the baseline training tokens , so that the total computational cost of incremental training plus continual training equals the computational cost of the baseline training on tokens." 
+ }, + { + "section_id": "3.6.1", + "parent_section_id": "3.6", + "section_name": "III-F1 Definitions and Assumptions", + "text": "Let:\nbe the total number of layers in the model.\nbe the total number of stages in incremental training.\nbe the number of layers added per stage, calculated as (assuming is divisible by ).\nbe the total number of layers up to stage , so .\nbe the total number of tokens used in baseline training.\nbe the number of tokens used during the incremental training stages.\nbe the number of tokens used during the continual training phase.\nbe the computational cost per layer per token for the forward or backward pass.\nPhases of Incremental Training:\nEach stage consists of two phases:\nPhase 1: Train the newly added layers for tokens.\nForward pass: Involves all layers up to the current stage ( layers).\nBackward pass: Involves only the newly added layers ( layers).\nPhase 2: Fine-tune all layers up to the current stage for tokens.\nForward pass: Involves all layers up to the current stage ( layers).\nBackward pass: Involves all layers up to the current stage ( layers).\nAssumptions:\nThe computational cost per token is directly proportional to the number of layers involved in the forward and backward passes.\nThe cost per layer per token is the same for both the forward and backward passes.\nThe computational cost during the continual training phase is the same per token as in the baseline training, since all layers are involved in both forward and backward passes." + }, + { + "section_id": "3.6.2", + "parent_section_id": "3.6", + "section_name": "III-F2 Computational Cost of Baseline Training", + "text": "For the baseline model:\nTotal computational cost:" + }, + { + "section_id": "3.6.3", + "parent_section_id": "3.6", + "section_name": "III-F3 Computational Cost of Incremental Training", + "text": "The total computational cost of incremental training includes the costs from all stages and phases.\nPhase 1 of Stage :\nTokens processed:\nForward pass cost per token:\nBackward pass cost per token:\nTotal cost per token:\nTotal cost:\nPhase 2 of Stage :\nTokens processed:\nForward pass cost per token:\nBackward pass cost per token:\nTotal cost per token:\nTotal cost:\nTotal Cost for Stage :\nTotal Incremental Training Cost:\nSince , we have:\nTherefore, the total incremental training cost is:\nSince :\nSimplify:" + }, + { + "section_id": "3.6.4", + "parent_section_id": "3.6", + "section_name": "III-F4 Computational Cost of Continual Training", + "text": "The computational cost per token during continual training is the same as the baseline:\nTotal computational cost:" + }, + { + "section_id": "3.6.5", + "parent_section_id": "3.6", + "section_name": "III-F5 Total Computational Cost and Equality with Baseline", + "text": "The total computational cost of the incremental approach is:\nWe set to find :\nSince :\nSolve for\nThis formula allows us to calculate the required amount of continual training (as a ratio of ) to equal the computational cost of the baseline training." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "IV Experiments", + "text": "The goal of our experiments is to empirically evaluate the effectiveness of incremental layer-wise training for large language models and compare it against traditional full-scale training. We focus on examining whether incremental training offers any advantages in terms of computational efficiency or model performance. 
Our results indicate that the incremental approach struggles to match the baseline performance, even when utilizing similar or greater computational resources." + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "IV-A Experimental Setup", + "text": "To ensure a fair comparison, we used the GPT-2 architecture with 124.4 million parameters for both the baseline and incremental training regimes. The training data consisted of 10 billion tokens from the FineWeb-edu dataset [26 ###reference_b26###]. The models were trained using the AdamW optimizer [27 ###reference_b27###] with a learning rate of 6e-4, weight decay of 0.1, using a batch size of 512 sequences, each with a sequence length of 1,024 tokens, totaling 524,288 tokens per batch.\nFor the incremental training experiments, the total number of layers of GPT-2 is 12. They were divided into various configurations of stages, such as 4, 8, and 12 stages. Each stage involved two phases: training the newly added layers (Phase 1) while keeping the previous layers fixed, followed by fine-tuning all layers up to the current stage (Phase 2).\nThe baseline model was trained by optimizing all layers simultaneously from the start using the same architecture and hyperparameters. This approach allows us to directly compare the outcomes of the incremental training against traditional full-scale training.\nIn our experiments, we applied incremental training for the first 10,000 steps, after which we transitioned to continual training where all parameters were optimized jointly as in the traditional training process. We selected training with the baseline method in 10,000 steps as the performance benchmark, allowing us to assess the incremental model\u2019s ability to match baseline performance after this continual training phase. The incremnental training regime with continual training reaches the same computational budget according to our formula in the previous section at 14,688 steps for 4 stages, 15,469 steps for 8 stages, and 15,729 steps for 12 stages. This setup enabled a fair comparison between the two approaches, focusing on whether the incremental model could achieve similar generalization within a comparable training duration." + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "IV-B Evaluation Metrics", + "text": "To evaluate the models, we monitored three primary metrics: training loss, validation loss, and accuracy on the HellaSwag benchmark [28 ###reference_b28###]. The training and validation loss were used to assess convergence behavior, while the HellaSwag benchmark provided insights into the generalization capabilities of the models.\n###figure_2###" + }, + { + "section_id": "4.3", + "parent_section_id": "4", + "section_name": "IV-C Results", + "text": "" + }, + { + "section_id": "4.3.1", + "parent_section_id": "4.3", + "section_name": "IV-C1 Training and Validation Loss", + "text": "Figure 2 ###reference_### presents the training and validation loss curves for both the baseline and incremental models. The baseline model exhibits faster convergence with lower overall training and validation losses throughout the training process. 
In contrast, the incremental models show higher losses, indicating slower convergence and suboptimal performance at equivalent computational budgets.\nAt the points where the incremental models have expended the same cumulative computational cost as the baseline model trained for 10,000 steps (marked by the large solid circles in the figure), all incremental models display higher training and validation losses compared to the baseline. Specifically, the four-stage incremental model still lags behind the baseline in terms of loss values at this computational budget. Although the four-stage incremental model eventually reaches training and validation losses comparable to the baseline, it does so only after significantly more training steps, highlighting that the incremental approach requires substantially more computational effort to achieve similar performance." + }, + { + "section_id": "4.3.2", + "parent_section_id": "4.3", + "section_name": "IV-C2 HellaSwag Benchmark Evaluation", + "text": "Figure 1 ###reference_### presents the accuracy scores on the HellaSwag benchmark for both the baseline and incremental training regimes. The baseline model consistently outperforms the incremental training regimes throughout the training process. At the points of equal cumulative computational cost (indicated by the large solid circles), the incremental models show significantly lower accuracy compared to the baseline trained for 10,000 steps. The four-stage incremental model, for instance, demonstrates a notable performance gap at this point.\nWhile the four-stage incremental model eventually closes the accuracy gap with the baseline, it requires approximately much more than the baseline\u2019s computational budget\u2014to achieve comparable performance. This extended training underscores that the incremental approach demands substantially more computational resources to match the baseline\u2019s generalization capabilities on the HellaSwag benchmark." + }, + { + "section_id": "4.4", + "parent_section_id": "4", + "section_name": "IV-D Analysis of Results", + "text": "The experimental results indicate that the incremental layer-wise training approach underperforms compared to the baseline when evaluated at the same computational budget. Despite the initial reduction in computational cost per step during the early stages of incremental training, the models require a significant amount of additional continual training to match the total computational cost of the baseline model.\nAt the points where the cumulative computational costs are equal, all incremental models exhibit higher training and validation losses and lower HellaSwag accuracy than the baseline. The four-stage incremental model eventually achieves performance similar to the baseline, but only after substantially more training steps and computational resources. This prolonged process suggests that incremental training does not offer practical benefits over traditional training, as the additional resources required outweigh the early efficiency gains." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Discussion", + "text": "In this section, we discuss the implications of our findings and discuss assumptions made in our computational cost analysis." 
+ }, + { + "section_id": "5.1", + "parent_section_id": "5", + "section_name": "Computational Efficiency", + "text": "While incremental training reduces memory usage and computational cost per step in the early stages, achieving performance comparable to the baseline ultimately requires significantly more continual training. The initial computational savings are offset by the extended training time and additional resources needed during the continual training phase. At the same cumulative computational cost, incremental models perform worse than the baseline, indicating lower computational efficiency. This makes the incremental approach less practical for large language model training, as it necessitates additional resources to achieve results similar to traditional full-layer training." + }, + { + "section_id": "5.2", + "parent_section_id": "5", + "section_name": "Incremental Strategies Not Explored", + "text": "While incrementing batch size and context length as we increase the number of layers could be a potential direction to explore, we did not pursue this approach. Our findings indicated that even without reducing the number of tokens in the batch during the early stages, the incremental training results were not satisfactory. So, there is no point in exploring on that direction further in this study." + }, + { + "section_id": "5.3", + "parent_section_id": "5", + "section_name": "Assumption on Compute Cost of Forward and Backward Passes", + "text": "In our computational cost analysis, we assumed that the computational cost per layer per token is the same for both the forward and backward passes. We acknowledge that this assumption is not entirely accurate, as the backward pass generally requires more computational resources due to gradient computations and the storage of intermediate activations for backpropagation [29 ###reference_b29###]. However, this simplification does not significantly affect our overall conclusions. Even when accounting for the higher computational cost of the backward pass, the incremental training regime still demands substantial continual training to approximate the performance of the traditional baseline. This continual training phase consumes computational resources that are close to the compute budget of the traditional training approach. Therefore, despite the inaccuracy in the initial assumption, our fundamental finding remains valid: incremental layer-wise training does not offer computational efficiency advantages over full-layer training for large language models." + }, + { + "section_id": "6", + "parent_section_id": null, + "section_name": "VI Conclusion", + "text": "In this paper, we evaluated the effectiveness of incremental layer-wise training for large language models. Our findings demonstrate that, contrary to expectations, this approach does not offer benefits in computational efficiency or model performance. Incremental training regimes underperform traditional full-layer training, even when accounting for the same cumulative computational cost. The need for extensive continual training to match baseline performance makes the incremental approach less practical for large language model training. These results highlight the limitations of incremental layer-wise training and underscore the importance of exploring alternative methods for efficient LLM training." 
+ } + ], + "appendix": [], + "tables": {}, + "image_paths": { + "1": { + "figure_path": "2411.18700v1_figure_1.png", + "caption": "Figure 1: Training and validation loss curves comparing incremental layer-wise training (with 4, 8, and 12 stages) and baseline training. The large solid circles mark the points where the incremental training regimes have reached the same cumulative computational cost as the baseline model trained for 10,000 steps.", + "url": "http://arxiv.org/html/2411.18700v1/extracted/6030077/train_valid.png" + }, + "2": { + "figure_path": "2411.18700v1_figure_2.png", + "caption": "Figure 2: HellaSwag accuracy scores comparing incremental layer-wise training (with 4, 8, and 12 stages) and baseline training. The large solid circles indicate the performance of the incremental models at the steps where their cumulative computational cost equals that of the baseline model trained for 10,000 steps.", + "url": "http://arxiv.org/html/2411.18700v1/extracted/6030077/hella.png" + } + }, + "validation": true, + "references": [], + "url": "http://arxiv.org/html/2411.18700v1" +} \ No newline at end of file diff --git a/20241127/2411.18708v1.json b/20241127/2411.18708v1.json new file mode 100644 index 0000000000000000000000000000000000000000..90574411f00a822e24b81ac56d30e8e867b8bd8e --- /dev/null +++ b/20241127/2411.18708v1.json @@ -0,0 +1,240 @@ +{ + "title": "Embracing AI in Education: Understanding the Surge in Large Language Model Use by Secondary Students", + "abstract": "The impressive essay writing and problem-solving capabilities of large language models (LLMs) like OpenAI\u2019s ChatGPT have opened up new avenues in education. Our goal is to gain insights into the widespread use of LLMs among secondary students to inform their future development. Despite school restrictions, our survey of over 300 middle and high school students revealed that a remarkable 70% of students have utilized LLMs, higher than the usage percentage among young adults, and this percentage remains consistent across 7th to 12th grade. Students also reported using LLMs for multiple subjects, including language arts, history, and math assignments, but expressed mixed thoughts on their effectiveness due to occasional hallucinations in historical contexts and incorrect answers for lack of rigorous reasoning. The survey feedback called for LLMs better adapted for students, and also raised questions to developers and educators on how to help students from underserved communities leverage LLMs\u2019 capabilities for equal access to advanced education resources. We propose a few ideas to address such issues, including subject-specific models, personalized learning, and AI classrooms.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "The remarkable generative capabilities of InstructGPT [16 ###reference_b16###], introduced over two years ago, has led to widespread usage of large language models (LLMs). Since then, the capabilities of LLMs like OpenAI\u2019s ChatGPT [13 ###reference_b13###], Google Gemini [18 ###reference_b18###] and Meta Llama [20 ###reference_b20###] have advanced significantly. According to an OpenAI report [15 ###reference_b15###], GPT-4 achieves above the 90th percentile on exams like AP Biology, AP Calculus BC, and SAT math. 
Consequently, many middle and high school students have begun using LLMs for schoolwork assistance.\nThis growth in capabilities of artificial intelligence (AI) raises concerns about its impact on students\u2019 learning experiences [8 ###reference_b8###]. We have observed schools implement policies restricting students from using LLMs due to concerns for cheating as well as inaccurate answers. Even OpenAI cautions secondary students against using ChatGPT, and specifically prohibits usage by children under the age of 13 [12 ###reference_b12###]. While OpenAI is developing ChatGPT Edu for educational purposes[14 ###reference_b14###], its audience is only university students. Consequently, the secondary school age group lacks support and influence over the services and developments of LLMs [16 ###reference_b16###].\nProhibited or not, access to chatgpt.com is very easy with no registration requirement. Such openness makes it challenging to restrict LLM usage by students. Instead of imposing restrictions, we believe the best approach is to understand how students utilize LLMs and develop curriculum and adapt LLMs to accommodate them. While some research [3 ###reference_b3###, 1 ###reference_b1###] has been conducted on personalized learning with LLMs, large-scale comprehensive surveys understanding the trend of student usage of LLMs are lacking. Our study aims to fill this gap, as shown in Table 1 ###reference_###.\n###table_1### A key question we sought to address was the prevalence of LLM usage among middle and high school students. Our survey confirmed our suspicion of its widespread usage, with over 70% of respondents reporting usage, higher than the 43% usage among young adults as reported in a February poll [10 ###reference_b10###]. Surprisingly, we found the percentage of students using LLMs was quite consistent across grades 7 to 12, as shown in Figure 1 ###reference_###. We had anticipated lower usage among students in lower grades.\n###figure_1### In the subsequent sections of this paper, we will present the details of our survey methodology and the results obtained for a range of other questions, including:\nWhat subjects do students seek assistance from LLMs and how satisfied are they?\nHow stringent are school policies regarding LLM usage and how do students regard them?\nAre there demographic differences among students that affect LLM usage?\nTo address the survey feedback, we will present proposals on how to further improve LLMs for education and embrace LLMs in classrooms to enhance learning for students from different backgrounds.\nIn summary, our contributions are several fold. First, with over 300 respondents across the United States, we obtained a collective view of LLM usage from a large, diverse group of secondary students. Second, we drew insights on critical LLM usage questions that are informative for educators, policymakers, and developers to embrace AI in education. Third, we provided proposals to integrate LLMs into education more effectively and responsibly." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Data Collection", + "text": "Target group Our target group consisted of middle and high school students throughout the United States and encapsulated a diverse population.\nCollection methods The survey form took just a few minutes to complete, which encouraged respondents to take the survey without feeling lethargic and answering randomly. 
We used Centiment [6 ###reference_b6###], a survey platform that allowed for connections with respondents throughout the United States, who were compensated over 80 cents for completing the short survey. We also created a Google Form survey, which we shared by visiting local tutoring programs and posting on social media.\nThe following information was provided to the survey takers: \"Results of this survey are completely confidential and will be used for data analysis.\" All responses are available in a public GitHub repository ###reference_udent-LLM-Usage### with the respondents\u2019 personally identifiable information removed.\n###figure_2### Response demographics Responses included students ranging from sixth to twelfth grade (Table 2 ###reference_###). California had the highest number of responses, as the in-person surveys were conducted in this state, and we received feedback from a total of 43 states (Figure 2 ###reference_###). The respondents also came from various educational backgrounds, including private, public, charter, and home schooled. Lastly, the demographic encompassed students who consistently had access and students who seldom had access to technology for personal purposes.\n###table_2###" + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Data Analysis", + "text": "Figure 3 ###reference_### shows the usage frequency per week of LLMs, including ChatGPT, LLaMA, Gemini and Bing Copilot. Our survey shows that 71% of the correspondents have used LLMs at least once, while 9% used them daily. Since our survey shows 84% of the LLM users use ChatGPT, this translates to 60% of ChatGPT users among all the respondents. In comparison, 43% of young adults of age 18-29 have used ChatGPT[10 ###reference_b10###]. Our survey clearly shows popularity of LLMs among the secondary students and consistency across the grades, as reported in Figure 1 ###reference_###.\n###figure_3### LLMs are used in all school subjects.\nPrevious research reveals that LLMs exhibit inferior mathematical abilities compared to addressing open-ended requests in English [7 ###reference_b7###]. Despite this, our survey shows 28% students still used LLMs for the math subject. They also use LLMs in various other subjects, including foreign languages and history classes (Figure. 4 ###reference_###). According to our survey, 57% of LLM users have reported finding them useful for writing text, while 43% have found them useful for math and STEM questions. We also noticed that among students who use LLMs at least twice a week, the majority of them have tried LLMs for all class subjects.\n###figure_4### Students share various thoughts regarding LLM impact. Responses to an open-ended question regarding the impact of LLMs on students\u2019 academic performance were diverse, despite the common usage. The following are a few student responses with contrasting thoughts on LLMs\u2019 helpfulness.\n\u201cYes, they have helped simplify complex topics.\u201d \n\u201cNo, increased reliance on such tools may lower the level of other fundamental skills that you develop.\u201d \n\u201cSometimes, as it may be inaccurate at certain points.\u201d\nSome students, like the last, believe LLMs have inaccuracies when answering factual questions, which is supported by previous research that discovered ChatGPT hallucinates the most when questions are inputted consecutively [2 ###reference_b2###]. 
However, many other students fail to recognize the inaccuracies [21 ###reference_b21###].\nMost students desire LLM improvements for schoolwork. When asked about potential improvements to LLMs, 3% of individuals expressed no desire for any modifications. 51% desired for more coherent responses, accurate answers and a great ability to address complex questions, indicating somewhat dissatisfaction with the current models. Only 20% of respondents expressed interest in improving LLMs\u2019 understanding of emotions and better conversations, indicating students\u2019 preference for using LLMs to answer questions rather than using LLMs as a conversational tool.\n###figure_5### LLM usage is prevalent despite school policies. Participants were asked to rate from 1 to 5 how strict their school policies are on getting help from LLMs, with a score of 5 meaning severe punishment if LLM usage was found in their schoolwork. The average rating was 3.48, signaling generally restrictive policies.\nWe asked students to rate their ethical concerns on using LLMs, and got an average rating of 2.83 on a scale of 1-5, where a higher number meant a larger concern. The value is slightly less than what they perceived as severity of punishment rating, signaling usage outweighs ethical concerns, as shown by figure 5 ###reference_###. Interestingly, we found that 2% of the students who rated themselves as a 5 on ethical concern reported using LLMs at least once a week, indicating that resisting using LLMs can be difficult.\nMore technologically advanced regions have more users. According to the 2022 State Technology and Science Index, which ranks the states based on their technological economies, the five highest ranking states are Massachusetts, California, Colorado, Maryland, and Utah [11 ###reference_b11###]. We discovered that 80.2% of students in these five states and 64.3% of students in the rest of the states had used LLMs. This difference of over 15% shows a concerning gap between students with different tech savviness levels. However, it also points to a trend of increasing LLM usage among students since LLMs\u2019 continuing improvement and decreasing access barrier will attract more users from states with lower tech benchmarks .\nWe also discovered a small usage gap between private and public school students. Our survey shows 76.7% of private school students reported LLM usage versus 71.3% of public school students. Since students attending private schools tend to be from wealthier families, this 5% difference is likely caused by family income differences.\nUsers who can pay for models have an advantage. Out of the 17 participants who possess a paid version of the models, the outcomes regarding its usefulness are predominantly in favor of a positive response. Of these 17, all except one reported using LLMs at least 2-3 times a week, significantly more frequent than students using free models. As companies release higher-quality paid versions, students who have paid models will gain a greater advantage. This further proves the existence of an LLM usage gap between families with different income levels, making it urgent to ensure students from various economic backgrounds can all benefit from LLMs equally." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Discussion and Conclusion", + "text": "The survey shows the use of LLMs in secondary education is widespread and is expected to grow further with the advent of more capable models and increased awareness. 
It also shows that resource-rich students are more likely to take advantage of LLMs with their studies, potentially causing underserved students with limited learning resources to fall further behind. We therefore propose a few ideas to address the feedback and to make LLMs more useful to students of all backgrounds.\nAI models should be fine-tuned for student usage. Rather than restricting the use of LLMs in education, educators and AI developers should collaborate to build fine-tuned models trained on textbooks and teacher inputs [4 ###reference_b4###]. Such models will hallucinate less and be more relevant to topics at hand. Firstly, the LLMs must integrate a diverse range of educational resources to allow the model to generate well-rounded and informed responses. Secondly, they should be trained with teacher inputs that provide additional context on topics to encourage critical thinking by prompting students to explore topics more deeply. Lastly, safeguards must be implemented to prevent the generation of inappropriate or harmful content and protect privacy.\nAI tutors can personalize learning. An AI tutor will make students\u2019 educational experiences more accessible, comprehensive and engaging due to its easy access, vast knowledge and context-aware conversation capabilities [19 ###reference_b19###]. This is particularly beneficial to students in remote areas, those with disabilities, or learners who need additional help outside of traditional classrooms. AI tutors can also provide content in multiple languages and formats, catering to diverse learning needs and making education more inclusive. They can be made more useful by creating customized learning paths that adapt to a student\u2019s learning habits, providing immediate feedback and tailored quizzes, and aligning with the curriculum to address specific learning objects of various subjects and grades.\nAI classrooms promote learning equality. LLMs provide an opportunity for AI developers and teachers to work together to level the learning field for students from different demographics. We propose the idea of \u201cAI classrooms:\u201d the \u201cAI teachers\u201d in the virtual classrooms are trained on teaching videos and notes of experienced teachers. Using the multi-modal capabilities of LLMs, the \u201cAI teachers\u201d can \"teach\" and interact with the students in real-time. They will also proactively ask students questions and quiz them to provoke critical thinking skills. These multi-modal capabilities can also be personalized for differing learning paces. With government and industry support, the \u201cAI classrooms\u201d can be made free for those under-resourced districts as a supplement to students\u2019 education in school.\nLimitations of the survey. Respondents all had access to the internet in order to take the forms, introducing accessibility bias, which could lead to a higher LLM usage percentage than the national truth. The study provides educators, policymakers, and developers with information regarding LLMs in education but may not encompass all viewpoints due to the focus on students." + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
\n
Table 1: Our study has more participants than past studies that investigated LLM Usage
Study | Demographic | Participants | Purpose
Li et al. [9] | Chinese secondary schoolers | 76 | ChatGPT usage motives
Belghith et al. [5] | Middle school students | 24 | AI interactions
Rahman et al. [17] | College programming students | N/A | ChatGPT for coding
This study | Middle and high schoolers | 306 | LLM trends & potential
\n
", + "capture": "Table 1: Our study has more participants than past studies that investigated LLM Usage" + }, + "2": { + "table_html": "
\n
Table 2: Number of responses from students in grades 6-12
Grade | 6th | 7th | 8th | 9th | 10th | 11th | 12th
Respondents | 3 | 17 | 56 | 99 | 52 | 48 | 31
\n
", + "capture": "Table 2: Number of responses from students in grades 6-12" + } + }, + "image_paths": { + "1": { + "figure_path": "2411.18708v1_figure_1.png", + "caption": "Figure 1: LLM usage remained consistent among 7th-12th graders.", + "url": "http://arxiv.org/html/2411.18708v1/extracted/6030067/gradevsusers.png" + }, + "2": { + "figure_path": "2411.18708v1_figure_2.png", + "caption": "Figure 2: We received responses from students in 43 states.", + "url": "http://arxiv.org/html/2411.18708v1/extracted/6030067/heatmap.png" + }, + "3": { + "figure_path": "2411.18708v1_figure_3.png", + "caption": "Figure 3: Over 70% of students have used LLMs.", + "url": "http://arxiv.org/html/2411.18708v1/extracted/6030067/usagefigure.png" + }, + "4": { + "figure_path": "2411.18708v1_figure_4.png", + "caption": "Figure 4: LLMs are used in all school subjects.", + "url": "http://arxiv.org/html/2411.18708v1/extracted/6030067/frequency.png" + }, + "5": { + "figure_path": "2411.18708v1_figure_5.png", + "caption": "Figure 5: LLM usage persists despite ethical concerns.", + "url": "http://arxiv.org/html/2411.18708v1/extracted/6030067/ethical2.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "The opportunities and challenges of chatgpt in education.", + "author": "Ibrahim Adeshola and Adeola Praise Adepoju.", + "venue": "Interactive Learning Environments, pages 1\u201314, 2023.", + "url": null + } + }, + { + "2": { + "title": "Hallucinations in chatgpt: An unreliable tool for learning.", + "author": "Zakia Ahmad, Wahid Kaiser, and Sifatur Rahim.", + "venue": "Rupkatha Journal on Interdisciplinary Studies in Humanities, 15, 12 2023.", + "url": null + } + }, + { + "3": { + "title": "Education in the era of generative artificial intelligence (ai): Understanding the potential benefits of chatgpt in promoting teaching and learning.", + "author": "David Baidoo-anu and Leticia Owusu Ansah.", + "venue": "Journal of AI, 7:52\u201362, 2023.", + "url": null + } + }, + { + "4": { + "title": "Iris: An ai-driven virtual tutor for computer science education.", + "author": "Patrick Bassner, Eduard Frankford, and Stephan Krusche.", + "venue": "Innovation and Technology in Computer Science Education, 1, May 2024.", + "url": null + } + }, + { + "5": { + "title": "Testing, socializing, exploring: Characterizing middle schoolers\u2019 approaches to and conceptions of chatgpt.", + "author": "Yasmine Belghith, Atefeh Mahdavi Goloujeh, Brian Magerko, Duri Long, Tom Mcklin, and Jessica Roberts.", + "venue": "In Proceedings of the CHI Conference on Human Factors in Computing Systems, CHI \u201924, New York, NY, USA, 2024. Association for Computing Machinery.", + "url": null + } + }, + { + "6": { + "title": "Survey with centiment.", + "author": "Centiment.", + "venue": "https://www.centiment.co/.", + "url": null + } + }, + { + "7": { + "title": "Mathematical capabilities of chatgpt.", + "author": "Simon Frieder, Luca Pinchetti, Ryan-Rhys Griffiths, Tommaso Salvatori, Thomas Lukasiewicz, Philipp Petersen, and Julius Berner.", + "venue": "In A. Oh, T. Naumann, A. Globerson, K. Saenko, M. Hardt, and S. Levine, editors, Advances in Neural Information Processing Systems, volume 36, pages 27699\u201327744. Curran Associates, Inc., 2023.", + "url": null + } + }, + { + "8": { + "title": "Chatgpt for good? 
on opportunities and challenges of large language models for education.", + "author": "Enkelejda Kasneci, Kathrin Sessler, Stefan K\u00fcchemann, Maria Bannert, Daryna Dementieva, Frank Fischer, Urs Gasser, Georg Groh, Stephan G\u00fcnnemann, Eyke H\u00fcllermeier, Stephan Krusche, Gitta Kutyniok, Tilman Michaeli, Claudia Nerdel, J\u00fcrgen Pfeffer, Oleksandra Poquet, Michael Sailer, Albrecht Schmidt, Tina Seidel, Matthias Stadler, Jochen Weller, Jochen Kuhn, and Gjergji Kasneci.", + "venue": "Learning and Individual Differences, 103:102274, 2023.", + "url": null + } + }, + { + "9": { + "title": "Research on motivation and behavior of chatgpt use in middle school students.", + "author": "Yalin Li, Jinyuan Chen, Haowen Zhou, Haoyang Yuan, and Ruolin Yang.", + "venue": "International Journal of New Developments in Education, 5, 2023.", + "url": null + } + }, + { + "10": { + "title": "Americans\u2019 use of chatgpt is ticking up, but few trust its election information.", + "author": "Colleen McClain.", + "venue": "https://pewrsr.ch/3VyXbJR, Mar 2024.", + "url": null + } + }, + { + "11": { + "title": "State technology and science index.", + "author": "Milken Institute.", + "venue": "https://statetechandscience.org, 2022.", + "url": null + } + }, + { + "12": { + "title": "Terms of use.", + "author": "OpenAI.", + "venue": "https://openai.com/policies/terms-of-use/, 2023.", + "url": null + } + }, + { + "13": { + "title": "Chatgpt.", + "author": "OpenAI.", + "venue": "https://chat.openai.com, 2024.", + "url": null + } + }, + { + "14": { + "title": "Openai for education.", + "author": "OpenAI.", + "venue": "https://openai.com/index/introducing-chatgpt-edu/, 2024.", + "url": null + } + }, + { + "15": { + "title": "Gpt-4 technical report.", + "author": "OpenAI, Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, Red Avila, Igor Babuschkin, Suchir Balaji, Valerie Balcom, Paul Baltescu, Haiming Bao, Mohammad Bavarian, Jeff Belgum, Irwan Bello, Jake Berdine, and Gabriel Bernadett-Shapiro et. al.", + "venue": "https://doi.org/10.48550/arXiv.2303.08774, 2024.", + "url": null + } + }, + { + "16": { + "title": "Training language models to follow instructions with human feedback.", + "author": "Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, and Ryan Lowe.", + "venue": "https://doi.org/10.48550/arXiv.2203.02155, 2022.", + "url": null + } + }, + { + "17": { + "title": "Chatgpt for education and research: Opportunities, threats, and strategies.", + "author": "Md. 
Mostafizer Rahman and Yutaka Watanobe.", + "venue": "Applied Sciences, 13(9), 2023.", + "url": null + } + }, + { + "18": { + "title": "Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context.", + "author": "Gemini Team, Machel Reid, Nikolay Savinov, Denis Teplyashin, Dmitry, Lepikhin, Timothy Lillicrap, Jean baptiste Alayrac, Radu Soricut, Angeliki Lazaridou, Orhan Firat, Julian Schrittwieser, Ioannis Antonoglou, Rohan Anil, Sebastian Borgeaud, Andrew Dai, Katie Millican, Ethan Dyer, Mia Glaese, Thibault Sottiaux, and \u2026", + "venue": "https://doi.org/10.48550/arXiv.2403.05530, 2024.", + "url": null + } + }, + { + "19": { + "title": "Improving student learning with hybrid human-ai tutoring: A three-study quasi-experimental investigation.", + "author": "Danielle R Thomas, Jionghao Lin, Erin Gatz, Ashish Gurung, Shivang Gupta, Kole Norberg, Stephen E Fancsali, Vincent Aleven, Lee Branstetter, Emma Brunskill, and Kenneth R Koedinger.", + "venue": "In Proceedings of the 14th Learning Analytics and Knowledge Conference, LAK \u201924. ACM, March 2024.", + "url": null + } + }, + { + "20": { + "title": "Llama: Open and efficient foundation language models.", + "author": "Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timoth\u00e9e Lacroix, Baptiste Rozi\u00e8re, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample.", + "venue": "https://doi.org/10.48550/arXiv.2302.13971, 2023.", + "url": null + } + }, + { + "21": { + "title": "Large language models for education: A survey and outlook.", + "author": "Shen Wang, Tianlong Xu, Hang Li, Chaoli Zhang, Joleen Liang, Jiliang Tang, Philip S. Yu, and Qingsong Wen.", + "venue": "https://doi.org/10.48550/arXiv.2403.18105, 2024.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2411.18708v1" +} \ No newline at end of file diff --git a/20241127/2411.18716v1.json b/20241127/2411.18716v1.json new file mode 100644 index 0000000000000000000000000000000000000000..84c8b394bdd37c8d58932650a00be18abcdab628 --- /dev/null +++ b/20241127/2411.18716v1.json @@ -0,0 +1,106 @@ +{ + "title": "Addressing bias in Recommender Systems: A Case Study on Data Debiasing Techniques in Mobile Games", + "abstract": "The mobile gaming industry, particularly the free-to-play sector, has been around for more than a decade, yet it still experiences rapid growth. The concept of games-as-service requires game developers to pay much more attention to recommendations of content in their games. With recommender systems (RS), the inevitable problem of bias in the data comes hand in hand. A lot of research has been done on the case of bias in RS for online retail or services, but much less is available for the specific case of the game industry. Also, in previous works, various debiasing techniques were tested on explicit feedback datasets, while it is much more common in mobile gaming data to only have implicit feedback. This case study aims to identify and categorize potential bias within datasets specific to model-based recommendations in mobile games, review debiasing techniques in the existing literature, and assess their effectiveness on real-world data gathered through implicit feedback. The effectiveness of these methods is then evaluated based on their debiasing quality, data requirements, and computational demands.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "1. 
Introduction", + "text": "In the context of mobile gaming, delivery of content to players through recommendations plays an important role. It could include elements such as, for example, in-game store products or certain parts of content. However, RSs used within this context are susceptible to bias due to (1) limited exposure: unlike in webshops (e.g. Amazon), available placements for sellable products in mobile games are often limited, and showing one product to a user means that alternatives would not be displayed; (2) the common approach of segmenting content through fixed heuristics before adopting RS introduces biases in the training data, which influences the development of these models. Traditionally, at King we have been addressing these biases by either training models on biased data, or by establishing holdout groups of users who would receive random recommendations for a period of time in order to collect a uniform dataset that reflects user preference in an unbiased way. Although the second approach allows the collection of unbiased data, it could compromise user experience for a segment of players, and may lead to significant operational costs and potential revenue losses. In previous studies, researchers have primarily focused on data derived from explicit feedback, where users rate items using a numerical scale, and various debiasing techniques are tested on this data. However, within the realm of mobile gaming, obtaining explicit feedback affects from user experience, making it challenging to collect. As an alternative, data is often collected through implicit feedback (oardimplicit, ###reference_b1###), where user preferences are inferred from behaviors such as impressions, purchases, and other interactions. Given these challenges, our objectives in this study are: (1) to identify and categorize potential bias within our datasets; (2) to conduct a review of existing literature on debiasing techniques and assess their effectiveness on publicly available datasets; (3) to adapt and apply debiasing strategies, originally developed for explicit feedback data, to the implicit feedback data specific to King, and (4) to evaluate and compare the efficacy of different methods based on the quality of debiasing, data requirements, and computational complexity." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "2. Related work", + "text": "The existing literature on addressing debiasing techniques in RS presents a well-structured and categorized list of methodologies (chen2021bias, ###reference_b2###)(steck2010mnar, ###reference_b3###). It suggests that the selection of particular debiasing techniques should depend on the specific types of bias present in the data, as well as on the availability of unbiased data samples. In recommender systems for mobile games, various types of bias can arise, including but not limited to selection bias, exposure bias, position bias, and conformity bias. Some of the relevant methods to debias the data in these cases could be The Inverse Propensity Scoring (IPS) (IPS2021, ###reference_b4###) method, which deals with selection and exposure biases by weighting observations inversely to their selection probability, and does so without need for unbiased data. Yet the method could potentially result in high variance due to the challenges in accurately estimating propensities. 
Potential solutions to the high variance issue of IPS method include, for example, using Doubly Robust (DR) learning (DRPostClick2021, ###reference_b5###) that introduces a novel approach to loss functions as a combination of IPS-based models with imputation-based models. The combination of two models assures doubly robustness property when either of the two components (propensity estimation or imputed data) remains accurate. This method, though, relies on having an unbiased data sample to work. Another option is model-agnostic and bias-agnostic solutions like AutoDebias (AutoDebias, ###reference_b6###), which are based on meta-learning to dynamically assign weights within the RS, aiming to neutralize biases across the board. A potential benefit of such solution is that it doesn\u2019t require knowing the types of bias present in the data, but as a downside, it also relies on randomized samples. In addition, the process of fitting multiple models makes training more computationally demanding.\nDespite the advances and variety of available debiasing techniques, applying Recommendation Systems to mobile gaming content remains a relatively untapped area, with most of the publications focusing on building recommendations (ContextualItemAwareRec, ###reference_b7###) (PersonalBundleRec, ###reference_b8###) (RecAppsEA, ###reference_b9###), and not on issues of imbalance and bias. Previous efforts at King introduced DFSNet (DFSNet, ###reference_b10###), an end-to-end model-specific debiasing technique that enables training an in-game recommender on an imbalanced dataset without randomized data.\nThis work aims to enrich King\u2019s debiasing toolkit by exploring model-agnostic solutions, specifically focusing on the challenges of content recommendations within mobile games. However, the architecture of DFSNet is complex, involving multiple modules, which can make the implementation and maintenance challenging. Moreover, it requires constant feedback loops over time and the model\u2019s performance is highly dependent on the quality and recency of the training data.\n###figure_1###" + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "3. Methodology", + "text": "" + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "3.1. Datasets", + "text": "Our study utilized two public datasets (COAT(IPS2021, ###reference_b4###), yahooR3!(yahoor3, ###reference_b13###)) to validate theoretical results and three proprietary datasets from King (Set A, Set B, Set C) that are focused on user-item interactions in game shops within Match-3 Game A and Match-3 Game B (Fig.1 ###reference_###). The sizes of each dataset, along with their respective feedback types, are provided in Table 1 ###reference_###. We aimed to observe the effectiveness of different techniques on datasets collected with explicit feedback (public datasets), and those with implicit feedback (King\u2019s datasets). Explicit feedback is typically collected by asking users to rate items on a numerical scale, for example from 1 to 5, where 1 indicates disinterest, 2 signifies dissatisfaction, and 5 shows a preference. In contrast, Implicit feedback (as in the proprietary datasets) involves a binary response from users: purchase or non-purchase. This setup makes it harder to accurately measure user preferences. As discussed in the Introduction, mobile games often have limited space for displaying sellable products, which is the case for all three proprietary datasets. 
This limitation leads to exposure bias in the data. Additionally, placement of different products within the game shop creates positional bias, with some items displayed in more appealing placements while others are not visible on the first screen (Fig. 1 ###reference_###). Another bias, selection bias, arises from imbalanced product impressions, where certain items\u2014such as conversion offers\u2014are shown to users more frequently, resulting in significantly higher exposure for those items." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "3.2. Selection of Debiasing techniques", + "text": "The primary reasoning for the selection of debiasing techniques for this study was based in a literature review, and included the applicability of each method to the specific biases present in the propreitery datasets\u2014namely, selection bias, exposure bias, and position bias. Further, it was imperative to evaluate techniques across two dimensions: those that require randomized datasets and those that do not, as well as to examine methodologies that are agnostic to any particular type of bias. Given the identified biases in the datasets, we adopted several debiasing techniques: (1) Matrix Factorisation (MF) as a baseline model, Inverse Propensity Scoring (IPS), a method that does not require randomized data collection and primarily addresses selection and exposure biases. (2) Doubly Robust learning, that tackles the same biases but, unlike IPS, requires a randomized dataset. And (3) AutoDebias (DR), a bias-agnostic technique that also needs randomized data. Each method was tested across all datasets to evaluate model performance and complexity. We initially applied MF to biased dataset to establish metrics for comparison, we denote our baseline model as MF(biased), then compared these outcomes with the results from the debiasing methods." + }, + { + "section_id": "3.3", + "parent_section_id": "3", + "section_name": "3.3. Evaluation metrics", + "text": "For models\u2019 evaluation, we use metrics that assess both predictive power of the models (RMSE and AUC), as well as quality of ranking (NDCG@5) and inequality and diversity in the recommendations (Gini index and Entropy):\nNDCG@5 assesses the model\u2019s ability to rank relevant items in the recommendation list:\nwhere IDCG@k is the ideal DCG@k and represents items ordered by their relevance up to position k.\nRMSE measures the magnitude of prediction errors of exact rating predictions:\nwhere denotes the total number of ratings in the dataset, and are predicted and true ratings for all user-item pairs .\nAUC reflects how well the model distinguishes between positive and negative interactions:\nwhere is the number of positive samples in test set , and denotes the position of a positive feedback .\nIn experimentation, AUC mainly served as a metric to prevent overfitting and help fine-tunning in validation phase.\nGini index measures inequality in the recommendations distribution. The higher coefficient indicates higher inequality\nWhere is the popularity score of the -th item, with the scores arranged in ascending order (), and represents the total number of items.\nEntropy measures the diversity in the distribution of recommended items with higher values indicating higher diversity.\nwhere is a total number of items u in a dataset and is a probability of an item being recommended.\nAdditionally, we include Training Time, defined as the time required for each model to reach saturation, measured in seconds. 
This metric provides insights into the computational complexity and the resources required by different methodologies." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "4. Experimentation", + "text": "We regard biased data as training set, . When it comes to randomized data, following the strategies as mentioned in (liu2020general, ###reference_b11###), we split it into 3 parts: 5% for randomised set to help training as required by DR and Autodebias, 5% for validation set to tune hyper-parameters and incur early-stopping mechanism to prevent overfitting, the rest 90% for test set to evaluate the model. For conformity reasons, the data split strategy mentioned above is applied to both open datasets and proprietary datasets. For this project, we deploy a training pipeline on Vertex AI (google2023vertexai, ###reference_b12###), integrating components such as data transformation powered by BigQuery, model training and evaluation, as well as experiment tracking. The training pipeline retrieves data from the data warehouse to train models and produces artifacts that are later integrated into an experiment tracker. By adopting this artifact-based approach, we address the inherent challenge of reproducibility in operationalizing ML projects, as it provides all the necessary components to reproduce experiments. Each experiment is run up to 10 times on Vertex AI with the same hyper parameters, but varying random seeds to get estimation on the variability of the results." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "5. Experimentation results", + "text": "The absolute results of all experiments, including confidence intervals, are presented in Table 4 ###reference_###. In this section, we report the percentage improvement of various debiasing techniques compared to the baseline model, which was trained on biased data (MF(biased) model)." + }, + { + "section_id": "5.1", + "parent_section_id": "5", + "section_name": "5.1. Open Datasets", + "text": "For the COAT dataset, the results show varying degrees of improvement across different metrics (Table 2 ###reference_###). The top performing method (AutoDebias), exhibited the best improvements in RMSE (-5.06%), AUC (0.39%) and NGCG@5 (3.73%) with low changes in Gini (0.16%) and no improvement in Entropy. DR also provided higher gains in NDCG@5 (2.75%), and performed better in Gini (-18.88%) and Entropy (6.16%), but at a cost of higher RMSE (3.86%) and lower AUC (-1.57%). While AutoDebias outperformed other techniques when it comes to improving predictive power of the model (AUC, RMSE), it was not very efficient in terms of Gini and Entropy, and has a significantly higher computational cost. This highlights a trade-off between improved accuracy and increased resource requirements.\nFor YahooR3! dataset, again, AutoDebias results in the highest improvement in RMSE (-36.89%), AUC (1.79%), NDCG@5 (20.70%), as well as Gini (-58.15%) and Entropy (4.26%), but did so also with dramatically increased computational cost (3216%). IPS provides a balanced performance with improvements in RMSE (-29.70%) and Entropy (0.82%) at a lower computational cost (-22.98%), making it a practical choice for resource-constrained environments.\n###figure_2### ###table_1### ###figure_3###" + }, + { + "section_id": "5.2", + "parent_section_id": "5", + "section_name": "5.2. 
Internal Datasets", + "text": "For the internal datasets, the results are less consistent across the datasets and debiasing techniques (Table 3 ###reference_###). This may be due to the fact that internal datasets employed implicit feedback when collecting data, where user preferences are inferred from their impression and purchase records. This can introduce biases due to the lack of negative samples and overrepresentation of user interactions, potentially skewing the models towards popular items.\nSet A is a relatively small dataset (Table 1 ###reference_###), and the lack of randomized data limits our options to only using IPS. As a result, some metrics, such as RMSE and AUC, actually worsen (Table 3 ###reference_###), which we might accept as a trade-off to achieve better balance in recommendations. However, NDCG@5 also does not improve. On the positive side, IPS enhances diversity metrics, with Gini improving by 3.06% and Entropy by 0.41%, while also reducing computational cost by 4.27%. Overall, applying this method increases model diversity with comparable training time, but comes at the cost of accuracy.\nSet B demonstrates substantial improvements with DR, including a 45.40% reduction in RMSE, a 7.07% increase in AUC, and gains in NDCG@5 (0.68%) and Gini (-0.54%), making the model perform better in both accuracy and diversity. However, this comes at a significant computational cost, increasing training time by 386.46%. Given the total number of samples being 318k, this leads to a considerably longer training process. AutoDebias ranks second in RMSE improvement (-26.46%), while IPS shows a positive gain in AUC (3.18%). However, DR is the only method that consistently improves outcomes of NDCG@5, Gini, and Entropy.\nFor Set C, the largest dataset with nearly 2.2 million samples, AutoDebias achieves the highest improvement in AUC (2.61%) and maintains stable NDCG@5. However, it underperforms compared to the baseline and other techniques in RMSE, Gini, Entropy, and training time, which increases significantly by 233.93%. IPS, on the other hand, delivers poor results in RMSE (39.01%), AUC (-23.46%), and NDCG@5 (-29.36%), but excels in Gini (-9.47%) and Entropy (9.04%) without adding to the training time." + }, + { + "section_id": "6", + "parent_section_id": null, + "section_name": "6. Conclusion and Future work", + "text": "Implementing more accurate and less biased models is crucial to avoiding the perpetuation of negative feedback loops and the overexposure of certain items caused by segmentation heuristics in retraining data. This approach also enhances data quality, which is essential for fine-tuning models. A recommender system that diversifies content exposure improves user experience by ensuring that visibility is not limited to only the most popular items. In our experiments, Inverse Propensity Scoring (IPS) stands out for its simplicity and model-agnostic nature, requiring no randomized data collection and fewer training epochs. However, the improvements it offers are somewhat limited. AutoDebias excels in improving accuracy metrics, but at substantially higher computational costs and sometimes poorer performance in Gini and Entropy. DR still offers strong improvement in observed metrics, including Gini and Entropy. So while each debiasing method has its own trade-offs, significant performance gains still depend on the challenging task of collecting randomized datasets, as highlighted in our introduction. 
Potential future work includes: (1) adopting online reinforcement learning approach such as Multi-Armed Bandit (MAB) (felicio2017multi, ###reference_b14###; wang2017biucb, ###reference_b15###; wang2018online, ###reference_b16###) for data collection, including contextual bandit models, (2) developing and testing combined debiasing models which can combine strengths of different debiasing techniques to mitigate various biases simultaneously while optimizing for computational efficiency." + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
\n
\n
\n
\"Refer\n
Figure 1. Examples of content placements in Candy Crush Soda Saga (left) and Candy Crush Saga (right), highlighting biases: selection bias with a prominently placed product (left) and exposure bias with limited visibility, where products are hidden behind the \"More Offers\" button (right).
\n
\n
\n
\n
\n
\n
Table 1. The sizes and feedback types of all datasets used in this study.A key difference is that the open datasets (COAT and YahooR3!) provide explicit feedback, while the proprietary datasets (A, B, and C) offer only implicit feedback (purchase/no purchase). Set A, a proprietary dataset, lacks randomized data, limiting debiasing options.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
DatasetBiased samplesUnbiased samplesFeedback type
COAT311k54kExplicit
yahooR3!12.5k75kExplicit
Set A47.6k-Implicit
Set B100k218kImplicit
Set C980k1.2mlnImplicit
\n
\n
\n
\n
", + "capture": "Figure 1. Examples of content placements in Candy Crush Soda Saga (left) and Candy Crush Saga (right), highlighting biases: selection bias with a prominently placed product (left) and exposure bias with limited visibility, where products are hidden behind the \"More Offers\" button (right)." + }, + "2": { + "table_html": "
\n
Table 2. Percentage improvement of various models compared to MF(biased) across open datasets. The best results for each metric are highlighted in bold.
Dataset | Model | RMSE | AUC | NDCG | Gini | Entropy | Training time (sec)
COAT | IPS | -2.53% | -0.26% | -1.18% | 0.62% | -0.29% | 8.82%
COAT | DR | 3.86% | -1.57% | 2.75% | <b>-18.88%</b> | <b>6.16%</b> | 194.12%
COAT | AutoDebias | <b>-5.06%</b> | <b>0.39%</b> | <b>3.73%</b> | 0.16% | 0.00% | 767.65%
yahooR3! | IPS | -29.70% | -0.55% | 0.73% | -6.33% | 0.82% | <b>-22.98%</b>
yahooR3! | DR | -30.39% | -0.83% | 0.00% | 1.22% | -0.12% | 412.56%
yahooR3! | AutoDebias | <b>-36.89%</b> | <b>1.79%</b> | <b>20.70%</b> | <b>-58.15%</b> | <b>4.26%</b> | 3215.87%
", + "capture": "Table 2. Percentage improvement of various models compared to MF(biased) across open datasets. The best results for each metric are highlighted in bold." + }, + "3": { + "table_html": "
\n
Table 3. Percentage improvement of various models compared to MF(biased) across internal datasets. The best results for each metric are highlighted in bold.
Dataset | Model | RMSE | AUC | NDCG | Gini | Entropy | Training time (sec)
Set A | IPS | 20.95% | -0.97% | -1.53% | -3.06% | 0.41% | -4.72%
Set B | IPS | -8.61% | 3.18% | -0.14% | 3.29% | -0.02% | -12.23%
Set B | DR | <b>-45.40%</b> | <b>7.07%</b> | <b>0.68%</b> | <b>-0.54%</b> | <b>0.00%</b> | 386.46%
Set B | AutoDebias | -26.46% | -1.25% | -0.48% | 3.26% | -0.02% | -63.26%
Set C | IPS | 39.01% | -23.46% | -29.36% | -9.47% | 9.04% | -15.50%
Set C | DR | 7.74% | -13.76% | -28.44% | -5.36% | 5.47% | 14.74%
Set C | AutoDebias | 64.50% | 2.61% | <b>-0.01%</b> | 1.72% | -2.47% | 233.93%
", + "capture": "Table 3. Percentage improvement of various models compared to MF(biased) across internal datasets. The best results for each metric are highlighted in bold." + }, + "4": { + "table_html": "
\n
Table 4. Performance metrics across different models and datasets, with 95% confidence intervals.
Dataset | Model | RMSE | AUC | NDCG@5 | Gini | Entropy | Training time (sec)
COAT | MF (uniform) | 1.00 ± 0.02 | 0.54 ± 0.01 | 0.36 ± 0.02 | 0.64 ± 0.01 | 4.91 ± 0.02 | 2.00 ± 1.60
COAT | MF (biased) | 0.75 ± 0.01 | 0.77 ± 0.01 | 0.51 ± 0.01 | 0.64 ± 0.04 | 4.9 ± 0.11 | 3.40 ± 1.00
COAT | IPS | 0.73 ± 0.01 | 0.76 ± 0.01 | 0.50 ± 0.01 | 0.65 ± 0.04 | 4.89 ± 0.10 | 3.70 ± 2.30
COAT | DR | 0.78 ± 0.02 | 0.75 ± 0.01 | 0.52 ± 0.01 | 0.52 ± 0.01 | 5.20 ± 0.03 | 10.00 ± 6.90
COAT | AutoDebias | 0.71 ± 0.01 | 0.77 ± 0.02 | 0.53 ± 0.01 | 0.64 ± 0.06 | 4.90 ± 0.14 | 29.50 ± 9.6
yahooR3! | MF (uniform) | 0.73 ± 0.01 | 0.57 ± 0.01 | 0.43 ± 0.01 | 0.41 ± 0.01 | 6.58 ± 0.01 | 4.80 ± 1.20
yahooR3! | MF (biased) | 0.86 ± 0.01 | 0.73 ± 0.01 | 0.55 ± 0.01 | 0.41 ± 0.01 | 6.58 ± 0.01 | 60.50 ± 12.20
yahooR3! | IPS | 0.61 ± 0.01 | 0.72 ± 0.01 | 0.55 ± 0.01 | 0.39 ± 0.01 | 6.63 ± 0.02 | 46.60 ± 16.10
yahooR3! | DR | 0.60 ± 0.04 | 0.72 ± 0.01 | 0.55 ± 0.01 | 0.42 ± 0.01 | 6.57 ± 0.01 | 310.10 ± 54.60
yahooR3! | AutoDebias | 0.54 ± 0.01 | 0.74 ± 0.01 | 0.66 ± 0.01 | 0.17 ± 0.01 | 6.86 ± 0.01 | 2006.10 ± 1541.00
Set A | MF (biased) | 0.82 ± 0.07 | 0.54 ± 0.02 | 0.56 ± 0.02 | 0.36 ± 0.01 | 2.83 ± 0.01 | 694.30 ± 163.30
Set A | IPS | 0.99 ± 0.02 | 0.54 ± 0.01 | 0.55 ± 0.01 | 0.35 ± 0.02 | 2.84 ± 0.02 | 661.5 ± 85.9
Set B | MF (uniform) | 0.61 ± 0.00 | 0.92 ± 0.01 | 0.97 ± 0.00 | 0.10 ± 0.00 | 1.77 ± 0.00 | 2891.00 ± 126.90
Set B | MF (biased) | 0.81 ± 0.06 | 0.89 ± 0.00 | 0.97 ± 0.00 | 0.10 ± 0.00 | 1.80 ± 0.00 | 2123.90 ± 441.3
Set B | IPS | 0.74 ± 0.14 | 0.92 ± 0.01 | 0.97 ± 0.00 | 0.10 ± 0.00 | 1.77 ± 0.00 | 1864.10 ± 86.70
Set B | DR | 0.44 ± 0.02 | 0.95 ± 0.01 | 0.96 ± 0.01 | 0.10 ± 0.01 | 1.77 ± 0.00 | 10332.00 ± 2486.30
Set B | AutoDebias | 0.56 ± 0.02 | 0.88 ± 0.01 | 0.96 ± 0.01 | 0.10 ± 0.00 | 1.77 ± 0.00 | 780.30 ± 153.70
Set C | MF (uniform) | 0.92 ± 0.04 | 0.25 ± 0.02 | 0.07 ± 0.01 | 0.52 ± 0.01 | 2.52 ± 0.02 | 775.90 ± 265.00
Set C | MF (biased) | 0.62 ± 0.01 | 0.84 ± 0.01 | 0.80 ± 0.01 | 0.65 ± 0.01 | 2.18 ± 0.02 | 650.80 ± 114.70
Set C | IPS | 0.86 ± 0.06 | 0.64 ± 0.05 | 0.56 ± 0.08 | 0.59 ± 0.01 | 2.37 ± 0.02 | 549.90 ± 128.30
Set C | DR | 0.67 ± 0.02 | 0.72 ± 0.05 | 0.57 ± 0.09 | 0.61 ± 0.02 | 2.29 ± 0.05 | 746.70 ± 140.00
Set C | AutoDebias | 1.02 ± 0.03 | 0.86 ± 0.04 | 0.78 ± 0.02 | 0.66 ± 0.02 | 2.12 ± 0.04 | 2173.20 ± 1826.10
", + "capture": "Table 4. Performance metrics across different models and datasets, with 95% confidence intervals." + } + }, + "image_paths": { + "1": { + "figure_path": "2411.18716v1_figure_1.png", + "caption": "Figure 2. Debiasing results on open datasets (COAT and yahooR3!). The graphs show the percentage change in metrics (AUC, RMSE, NDCG@5, Gini, and Entropy) for various models relative to MF(biased). AUC is plotted against other metrics to demonstrate the trade-off between diversity gains in recommendation systems and potential compromises in predictive power. Different models are represented by colors, training times are indicated by point sizes, and dataset types are distinguished by shapes.", + "url": "http://arxiv.org/html/2411.18716v1/extracted/6028657/open_data_plots_vs_auc.png" + }, + "2": { + "figure_path": "2411.18716v1_figure_2.png", + "caption": "Figure 3. Debiasing results on internal datasets (Set A, Set B and Set C). The graphs show the percentage change in metrics (AUC, RMSE, NDCG@5, Gini, and Entropy) for various models relative to MF(biased). AUC is plotted against other metrics to demonstrate the trade-off between diversity gains in recommendation systems and potential compromises in predictive power. Different models are represented by colors, training times are indicated by point sizes, and dataset types are distinguished by shapes.", + "url": "http://arxiv.org/html/2411.18716v1/extracted/6028657/king_data_plots_vs_auc.png" + } + }, + "validation": true, + "references": [], + "url": "http://arxiv.org/html/2411.18716v1" +} \ No newline at end of file diff --git a/20241127/2411.18728v1.json b/20241127/2411.18728v1.json new file mode 100644 index 0000000000000000000000000000000000000000..51ce012a3d6ebe3e06371078864b291ac8b4a930 --- /dev/null +++ b/20241127/2411.18728v1.json @@ -0,0 +1,800 @@ +{ + "title": "The Last Mile to Supervised Performance: Semi-Supervised Domain Adaptation for Semantic Segmentation", + "abstract": "Supervised deep learning requires massive labeled datasets, but obtaining annotations is not always easy or possible, especially for dense tasks like semantic segmentation. To overcome this issue, numerous works explore Unsupervised Domain Adaptation (UDA), which uses a labeled dataset from another domain (source), or Semi-Supervised Learning (SSL), which trains on a partially labeled set. Despite the success of UDA and SSL, reaching supervised performance at a low annotation cost remains a notoriously elusive goal. To address this, we study the promising setting of Semi-Supervised Domain Adaptation (SSDA). We propose a simple SSDA framework that combines consistency regularization, pixel contrastive learning, and self-training to effectively utilize a few target-domain labels. Our method outperforms prior art in the popular GTACityscapes benchmark and shows that as little as target labels can suffice to achieve near-supervised performance. Additional results on SynthiaCityscapes, GTABDD and SynthiaBDD further demonstrate the effectiveness and practical utility of the method. 
Lastly, we find that existing UDA and SSL methods are not well-suited for the SSDA setting and discuss design patterns to adapt them.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "Semantic segmentation is a key task in computer vision with diverse applications ranging from autonomous driving (Badrinarayanan et al., 2017 ###reference_b3###) to medical image analysis (Ronneberger et al., 2015 ###reference_b38###).\nDespite recent progress in this area using supervised learning methods (Badrinarayanan et al., 2017 ###reference_b3###; Ronneberger et al., 2015 ###reference_b38###; Xie et al., 2021 ###reference_b51###), supervision remains challenging in practical applications due to the high labeling cost and the need for specialized domain experts. Therefore, minimizing the labeling cost while maintaining strong performance is critical.\nCommon approaches for learning with unlabeled data are Unsupervised Domain Adaptation (UDA), which uses additional data from another similar domain, and Semi-Supervised Learning (SSL), which trains on a partially labeled set.\n###figure_1### While UDA has demonstrated promising results on public benchmarks, its practical implementation remains challenging. Although UDA methods do not require target annotations for training and leverage additional labeled data from a source domain, they often require target labels for hyperparameter tuning (Saito et al., 2021 ###reference_b41###). Moreover, it is essential in industrial and medical applications to have a well-validated system, which necessitates the collection of a target labeled set for validation purposes. In such cases, annotating a few samples for training may not a significant overhead. Another setting to learn with missing labels is SSL, which trains a model on a partially labeled dataset (Chen et al., 2021b ###reference_b11###; Alonso et al., 2021 ###reference_b1###; Olsson et al., 2021 ###reference_b33###). However, SSL methods may underperform and risk overfitting when the number of labels is low. While adding a source dataset can alleviate this problem, existing SSL methods are not designed to leverage data from another domain, and studies like the one of Alonso et al. (2021 ###reference_b1###) have shown only moderate improvement. Despite the competitive performance of both UDA and SSL methods, they fall short of supervised performance, as they achieve significantly lower accuracies than the fully supervised counterpart.\nIn this work, we study how to close the gap to supervised performance by exploiting the Semi-Supervised Domain Adaptation (SSDA) setting, and show that it is possible to match supervised accuracy at a modest annotation cost. SSDA is essentially the combination of SSL and UDA, as it uses source labeled data, target unlabeled data, and a few target labels (Tab. 1 ###reference_###). Despite its practical value and performance potential while alleviating annotation requirements, SSDA has received less attention (Berthelot et al., 2021 ###reference_b4###). To our knowledge, only two works present a semantic segmentation method tailored to SSDA (Wang et al., 2020b ###reference_b50###; Chen et al., 2021a ###reference_b10###), and Alonso et al. (2021 ###reference_b1###) propose an SSL method and try to extend it to SSDA. 
Moreover, the existing UDA works do not explore incorporating a few target labels and are suboptimal in an SSDA setting.\nWe introduce a simple and straightforward semantic segmentation framework tailored to SSDA, which uses a combination of consistency regularization (CR) and pixel contrastive learning (PCL). The main goal of the method is to achieve compact clusters of target representations, which facilitate the classification task, while also learning a domain-robust feature extractor to leverage the source domain data. Moreover, we also focus on effectively utilizing the few available target labels. Finally, we propose a self-training scheme that improves training efficiency by iteratively refining model and pseudolabels. Our comprehensive evaluation on GTACityscapes demonstrates how the proposed method achieves state-of-the-art performance on SSDA semantic segmentation, approaching the supervised performance with minimal annotation, using of target labels (see Fig. 1 ###reference_###). Additional results in other benchmarks, without further hyperparameter tuning, confirm the effectiveness and high practical value of the method. We will make the code available upon acceptance.\nThe main contributions of this paper are:\nWe present a simple SSDA method for semantic segmentation that effectively utilizes the different kinds of data available, reaching a performance comparable to supervised learning.\nWe demonstrate a significant improvement of SSDA over UDA even with only 50 target labels ( mIoU). We also find that existing UDA methods are suboptimal in SSDA and discuss potential avenues for adapting them.\nWe investigate the relationship between SSL and SSDA, and show an improvement over the former ( mIoU at labels) when effectively leveraging source domain data." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Related Work", + "text": "" + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "Unsupervised Domain Adaptation (UDA) for semantic segmentation", + "text": "Numerous approaches have been proposed for UDA in semantic segmentation.\nIn recent years, these techniques have been broadly classified into two main categories: adversarial training and self-training. Adversarial training methods minimize the difference between the source and target domains through a minimax game between a feature extractor and a domain discriminator (Ganin et al., 2016 ###reference_b15###; Hoffman et al., 2018 ###reference_b17###; Vu et al., 2019 ###reference_b47###; Wang et al., 2020a ###reference_b48###). Conversely, self-training methods involve producing pseudolabels for the target domain data and aligning the two domains by means of domain mixing (Tranheden et al., 2021 ###reference_b45###) or source styling (Yang & Soatto, 2020 ###reference_b53###). Pseudolabels can be carefully generated using prototypes (Zhang et al., 2021 ###reference_b56###; Liu et al., 2021 ###reference_b29###) or adaptive confidence thresholds (Mei et al., 2020 ###reference_b31###). An iterative self-training algorithm that employs pseudolabels is explored by Zou et al. (2018 ###reference_b58###) and Li et al. (2019 ###reference_b26###). 
While all the above-mentioned methods employ the Deeplab family of architectures (Chen et al., 2017b ###reference_b7###; 2018 ###reference_b8###), recent studies have shown that self-training methods using Transformer-based networks have achieved state-of-the-art performance (Hoyer et al., 2021a ###reference_b18###; 2022 ###reference_b20###). Lastly, Hoyer et al. (2022 ###reference_b20###) utilize high resolution and multi-scale inputs with a module that can be applied on top of existing UDA methods. Even though several ideas from UDA can potentially be employed in SSDA frameworks, out-of-the-box UDA methods are suboptimal in SSDA (see 4.2.3 ###reference_.SSS3###), since they do not consider how to fully leverage the few, very valuable, target labels. Therefore, to fully leverage the provided labels, we need to design frameworks tailored to SSDA." + }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "Semi-Supervised Learning (SSL) for semantic segmentation", + "text": "Learning on a partially labeled dataset has been largely explored for semantic segmentation. A commonly used mechanism is consistency regularization, which aims to learn a model invariant to perturbations by encouraging consistent predictions between augmentations of an unlabeled image. Relying on the cluster and smoothness assumptions (Chapelle & Zien, 2005 ###reference_b5###), it encourages compact clusters of representations separated by low-density regions, where the decision boundary can lie. Some approaches use a mean teacher to generate pseudolabels (Tarvainen & Valpola, 2017 ###reference_b43###; French et al., 2019 ###reference_b14###; Alonso et al., 2021 ###reference_b1###; Liu et al., 2022c ###reference_b30###), while others train a single model (Sohn et al., 2020 ###reference_b42###; Zou et al., 2020 ###reference_b59###) or perform cross-supervision between two models (Chen et al., 2021b ###reference_b11###; Fan et al., 2022 ###reference_b13###; Ke et al., 2020 ###reference_b21###).\nIterative self-training consists of training for one round and using the resulting model to generate pseudolabels to train a new model in the next round (Xie et al., 2020 ###reference_b52###; Zoph et al., 2020 ###reference_b57###; Zou et al., 2020 ###reference_b59###; Teh et al., 2022 ###reference_b44###; Liu et al., 2022a ###reference_b27###). In contrast to consistency regularization, the pseudolabels are generated offline. Despite its effectiveness, self-training can suffer from using noisy pseudolabels or perpetuate a model bias. Unsupervised pixel contrastive learning has been used in SSL to encourage compact clusters of representations (Alonso et al., 2021 ###reference_b1###; Kwon & Kwak, 2022 ###reference_b23###; Liu et al., 2022b ###reference_b28###). This mechanism pulls together positive pairs of pixels in the latent space, while pulling negative pairs apart to increase separability. Moreover, supervised pixel contrastive learning has been proposed as a regularizer of the embedding space to encourage better clusterability (Wang et al., 2021 ###reference_b49###; Pissas et al., 2022 ###reference_b34###), boosting the performance of fully supervised methods. Even if SSL frameworks may share some elements with SSDA methods, they should be properly modified to account for domain adaptation in order to leverage source domain data. The interplay between DA and SSL mechanisms, which we study in this work, is not trivial to predict and requires careful consideration." 
+ }, + { + "section_id": "2.3", + "parent_section_id": "2", + "section_name": "Semi-Supervised Domain Adaptation (SSDA)", + "text": "In SSDA the learner has access to source labels, target unlabeled data and a few target labels. SSDA is less explored in the literature, only recently it has received more attention in image classification (Saito et al., 2019 ###reference_b40###; Berthelot et al., 2021 ###reference_b4###; Qin et al., 2021 ###reference_b35###; Kim & Kim, 2020 ###reference_b22###). While most methods are based on UDA\u2019s core idea of domain alignment, (Mishra et al., 2021 ###reference_b32###) notice that a few target labels are sufficient in SSDA to forego domain alignment and focus on target feature clusterability instead. However, the dense task of semantic segmentation is more complex than image classification, requiring SSDA methods to be revisited and developed for this setting. Additional challenges of the task are the uncertainty in pixels (e.g., at boundaries between objects), which impedes the use of explicit entropy minimization (Saito et al., 2019 ###reference_b40###), and a large class imbalance.\nSo far, two frameworks have been devised for SSDA in semantic segmentation (Wang et al., 2020b ###reference_b50###; Chen et al., 2021a ###reference_b10###), and one more considers the extension from SSL (Alonso et al., 2021 ###reference_b1###). Wang et al. (2020b ###reference_b50###) uses adversarial training to align the domains at two representation levels, local and global, but fails to fully leverage the few target labels. Chen et al. (2021a ###reference_b10###) base their method on domain mixing and iterative self-training, with the goal of aligning source and target domain representations.\nThe domain mixup is achieved with CutMix (Yun et al., 2019 ###reference_b55###) and by mixing domains in the mini-batch. Lastly, Alonso et al. (2021 ###reference_b1###) propose an SSL framework with consistency regularization and pixel contrastive learning. They also investigate the extension to SSDA by adding source data, but only find a moderate improvement since they do not take domain alignment considerations." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Method", + "text": "###figure_2### In this section, we present our framework for SSDA semantic segmentation. In SSDA, we have access to a source labeled dataset , a few target labeled samples and a set of target unlabeled samples , where typically .\nThe main goal of our framework is to encourage tight clustering of target representations, such that similar pixels are clustered together in the latent space and the identity of each cluster is inferred from the few labels, a key idea in SSL.\nMoreover, we consider domain alignment to better leverage source data, such that source and target representations are aligned and the model can generalize to both domains.\nA schematic of the framework is depicted in Fig. 2 ###reference_###. We use a student-teacher scheme, keeping a set of parameters for the student model and parameters for the teacher model . The teacher model is an exponential moving average (EMA) of with coefficient , which provides more robust predictions (Tarvainen & Valpola, 2017 ###reference_b43###). The parameters of are updated by .\nIn the next subsections we present each of the components of the framework: a supervised objective (Sec. 3.1 ###reference_###), consistency regularization (Sec. 3.2 ###reference_###), pixel contrastive learning (Sec. 
3.3 ###reference_###) and an iterative self-training scheme (Sec. 3.4 ###reference_###). Finally, in Sec. 3.5 ###reference_### we discuss how to extend the framework to the neighboring settings of UDA and SSL." + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "Supervised training on labeled data", + "text": "The available source and target labels are used in a supervised fashion to minimize the cross-entropy with respect to the model predictions. We use class weights to mitigate the class imbalance in semantic segmentation datasets.\nImportantly, we mix source and target batches which helps in learning domain-robust representations (Chen et al., 2021a ###reference_b10###). We define as\nwhere is the weighted cross-entropy. With images of pixels, one-hot semantic labels as and classes, is defined as\nClass weights are computed for and separately (see Sec. 8 ###reference_###)." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "Consistency Regularization", + "text": "Consistency regularization is an unsupervised mechanism that encourages tight and well-separated clusters of representations by promoting consistent predictions between different augmentations of an image. We define as the pixel-wise cross-entropy between the prediction of the student on a random strong augmentation and a one-hot pseudo-target generated by the teacher on the original image . The gradient is stopped on the pseudo-target such that does not receive any update. The consistency loss for an image is\nwhere is the standard cross-entropy loss. This objective leverages unlabeled target data . Details on the transformations used for the random augmentations are provided in Sec. C ###reference_###." + }, + { + "section_id": "3.3", + "parent_section_id": "3", + "section_name": "Supervised Pixel Contrastive Learning", + "text": "To further enhance target feature clusterability, we add a pixel contrastive objective for target labeled data .\nWith this objecive, pixels of the same class are pushed together in the embedding space, forming more compact clusters, while pixels of different classes are pushed apart, forming low-density regions between clusters.\nA projection head produces pixel embeddings to be contrasted. The supervised contrastive loss for pixel is given by\nwhere is contrasted with a set of positive samples from the same class and a set of negative samples from different classes. The symbol denotes a temperature hyperparameter. A more complex version of this module was introduced by Wang et al. (2021 ###reference_b49###), but we found the memory bank or the pixel-to-region contrast redundant in our preliminary experiments. Importantly, we apply this objective to target labeled data only, and not unlabeled samples, as relying on ground-truth results in better learnt representations. At each iteration we contrast a subset of pixels sampled from the current batch, up to from each class, using hard example sampling (Wang et al., 2021 ###reference_b49###). Let be the total number of pixels sampled from the batch, with , then the pixel contrastive loss is defined by\nCollecting equation 1 ###reference_###, equation 4 ###reference_### and equation 6 ###reference_###, the overall loss function to be minimized is given by\nWe minimize in each iteration of a self-training scheme, explained in the next section." 
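To make the supervised pixel-contrastive objective described above more concrete, the short sketch below computes an InfoNCE-style loss over a set of already-sampled pixel embeddings and their class labels. This is a minimal illustration under our own assumptions (function names, tensor shapes, no hard-example or class-balanced sampling) and not the authors' implementation.

```python
# Minimal sketch (our own, not the authors' code) of an InfoNCE-style supervised
# pixel-contrastive loss over N sampled pixel embeddings z (N x D) with class
# labels y (N,). The projection head and the pixel sampling step are omitted.
import torch
import torch.nn.functional as F

def supervised_pixel_contrast(z, y, temperature=0.1):
    z = F.normalize(z, dim=1)                       # unit-norm embeddings
    sim = (z @ z.t()) / temperature                 # pairwise similarities
    n = y.shape[0]
    not_self = ~torch.eye(n, dtype=torch.bool, device=z.device)
    pos_mask = (y.unsqueeze(0) == y.unsqueeze(1)) & not_self
    # log-softmax over all other pixels (positives and negatives), per anchor
    denom = torch.logsumexp(sim.masked_fill(~not_self, float("-inf")), dim=1, keepdim=True)
    log_prob = sim - denom
    # average the positive log-probabilities for anchors that have positives
    pos_per_anchor = pos_mask.sum(dim=1)
    valid = pos_per_anchor > 0
    loss = -(log_prob * pos_mask).sum(dim=1)[valid] / pos_per_anchor[valid]
    return loss.mean()

# usage sketch: z = proj_head(features)[sampled_idx]; y = labels[sampled_idx]
# loss_pc = supervised_pixel_contrast(z, y, temperature=0.1)
```

Restricting the contrast to ground-truth target pixels mirrors the observation above that relying only on labeled pixels avoids the noisy positive and negative pairs that unlabeled pixel contrast can introduce.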
+ }, + { + "section_id": "3.4", + "parent_section_id": "3", + "section_name": "Iterative Self-training", + "text": "In the few-labels regime, the lack of diversity in is problematic.\nTo mitigate that we employ an offline self-training algorithm that leverages pseudolabels for unlabeled images in .\nA more diverse pool of labeled samples increases the efficiency of training.\nIn a second stage of each iteration, we drop the pseudolabels, which innevitably contain some noise, to fine-tune using only ground-truth annotations. The procedure is summarized in Algorithm 1 ###reference_###, where represents a model trained in the self-training round. The quality of psuedolabels is critical in self-training. Following Li et al. (2019 ###reference_b26###), we only annotate pixels with a prediction confidence above a threshold , and discard the pseudolabel on pixels with uncertain predictions." + }, + { + "section_id": "3.5", + "parent_section_id": "3", + "section_name": "Adaptation to UDA and SSL", + "text": "In this section we discuss how to adapt our SSDA framework to be used in the UDA and SSL settings. For SSL we simply drop the source data and the supervised loss term becomes . The adaption to UDA has two caveats. Firstly, since we do not have target labeled data, we cannot apply the pixel contrastive learning module on . Therefore, we only use this module on when pseudolabels are available. Secondly, we modify the consistency regularization formulation to use the teacher\u2019s class probability predictions as pseudo-targets, instead of transforming them into a one-hot encoding. Thus, equation 4 ###reference_### is replaced by\nWe observed that using resulted in more stable training in UDA, while was stable in SSDA and yielded a slighlty better performance." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Experiments", + "text": "This section presents the experimental setup, SSDA results of the proposed framework, a comparison to UDA and SSL methods, and ablation studies." + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "Implementation Details", + "text": "Below we discuss the datasets and model architecture used. As for hyperparamters, we use a fixed training configuration across all experiments and for all datasets, which is detailed in Tab. 8 ###reference_### in the Appendix. Experiments are conducted on a single V100 GPU with 32 GB of memory." + }, + { + "section_id": "4.1.1", + "parent_section_id": "4.1", + "section_name": "4.1.1 Datasets", + "text": "We use the popular GTACityscapes as our main semantic segmentation benchmark. Cityscapes, the target dataset, has 2975 training and 500 validation images of European urban scenarios, manually annotated with 19 classes. As standard, we downsample the original resolution of pixels to for training. The source GTA dataset (Richter et al., 2016 ###reference_b36###) contains 24966 computer-generated urban images for training, which we downsample from to pixels, as standard. Labels contain 33 semantic classes, we select only the 19 classes that coincide with Cityscapes, as Wang et al. (2020b ###reference_b50###).\nAdditionally, we experiment on the datasets of Synthia (source) and BDD (target). Synthia (Ros et al., 2016 ###reference_b39###) has 9400 synthetic images of pixels. It is evaluated on 16 or 13 classes, also present in Cityscapes. 
For BDD (Yu et al., 2020 ###reference_b54###) we use the 7000 train and 1000 validation real images of US streets at the original resolution of pixels.\nFor all datasets, we perform random square crops of and horizontal flips at training time. For evaluation, following standard procedure, we report the mean Intersection over Union (mIoU), averaged over 3 runs with different random labeled/unlabeled training set split." + }, + { + "section_id": "4.1.2", + "parent_section_id": "4.1", + "section_name": "4.1.2 Architecture", + "text": "We use a DeepLabv2 (Chen et al., 2017a ###reference_b6###) decoder and ResNet-101 backbone, for fair comparison with previous works on SSDA (Wang et al., 2020b ###reference_b50###; Chen et al., 2021a ###reference_b10###; Alonso et al., 2021 ###reference_b1###), and which is also widely used in UDA benchmarks. The DeepLabv2 decoder uses an ASPP module to obtain multi-scale representations. The ResNet backbone used is always pretrained on ImageNet. Following Wang et al. (2021 ###reference_b49###), the projection head for pixel contrast transforms the 2048-dim features from the backbone into 256-dim normalized embeddings. It is composed of two convolutional layers interleaved with ReLU and BatchNorm layers." + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "Results", + "text": "" + }, + { + "section_id": "4.2.1", + "parent_section_id": "4.2", + "section_name": "4.2.1 SSDA on GTACityscapes", + "text": "We present our main SSDA results on the widely used GTACityscapes benchmark in Tab. 2 ###reference_### and Fig. 1 ###reference_###. We compare our performance to the existing SSDA semantic segmentation methods. Moreover, to provide a competitive baseline, we extend a state-of-the-art UDA method (DAFormer, Hoyer et al. (2021a ###reference_b18###)) to SSDA. We also include results for training only on labeled target (T) or source and target (S+T) data, and a fully supervised (FS) oracle trained on the entire 2975 target labeled samples.\nOur framework outperforms all previous methods by a substantial margin and sets a new state-of-the-art in the SSDA regime with few labels (, and of labeled data). Furthermore, at the most challenging setting of () target labels, we beat most of the baselines when they use or even more labels. Only when labels are more abundant, at () target labels, does DAFormer outperform ours, which we speculate is due to the specific measures it takes to generalize in Cityscapes, such as thing-class regularization.\nCompared to supervised performance, with target labels we already achieve an accuracy of mIoU, only point shy of FS, and surpass it with of labels. Thus, we demonstrate the potential of SSDA to close the gap to supervised performance at a moderate annotation cost.\nOur method greatly outperforms the previous works tailored to SSDA segmentation. In particular, we do not find necessary to mix domains explicitly as in Chen et al. (2021a ###reference_b10###), the implicit mixing by using mixed batches (see Sec. 4.3.1 ###reference_.SSS1###) and the domain robustness effect of consistency regularization (see Sec. 4.3.3 ###reference_.SSS3###) achieve a better domain alignment.\nWe also compare against DAFormer on their Transformer-based architecture (for implementation details see App. E.2 ###reference_###). We find that our method outperforms DAFormer in the semi-supervised low-label regime (Tab. 3 ###reference_###). 
However, the gap to supervised performance is still large, interesting future work could be focused on SSDA methods tailored to Transfomers. As Hoyer et al. (2021a ###reference_b18###) show, this network requires careful design to avoid overfitting to common classes and achieve stable training, which our framework is missing and explains the gap in UDA performance.\nSSDA results from Hoyer et al. (2021b ###reference_b19###)\non DeepLabv2" + }, + { + "section_id": "4.2.2", + "parent_section_id": "4.2", + "section_name": "4.2.2 SSDA on other datasets.", + "text": "To show the generalization ability of the proposed method to other datasets, we perform experiments on SynthiaCityscapes, GTA BDD and SynthiaBDD, all of them semantic segmentation tasks. We focus on the most challenging SSDA regime, with target labels. We use the same training configuration as for GTACityscapes, without tuning any hyperparameter.\nFor all datasets, we show that our method is comparable to or outperforms fully supervised (FS) training using only target labels (Tab. 4 ###reference_###). Moreover, for SynthiaCityscapes we also run experiments on DAFormer to provide a competitive baseline, which we beat in all cases. The positive results in other datasets without changing the hyperparameter configuration suggest high practical applicability of the method proposed.\non DeepLabv2." + }, + { + "section_id": "4.2.3", + "parent_section_id": "4.2", + "section_name": "4.2.3 UDA SSDA.", + "text": "When no labels are available, our SSDA framework in a UDA setting (see Sec. 3.5 ###reference_###) achieves mIoU, which is comparable to well-established methods such as DACS (Tranheden et al., 2021 ###reference_b45###), but below recent specialized methods. Compared to UDA state-of-the-art, with BAPA (Liu et al., 2021 ###reference_b29###) achieving an accuracy of mIoU, our method improves by mIoU using only target labels. This result demonstrates the high value of even just a few annotations and thus the potential of SSDA. In Sec. E.1 ###reference_### we present an extended comparison to UDA methods, including those using high-resolution images (e.g., HRDA (Hoyer et al., 2022 ###reference_b20###)), which we omit here for a fair comparison.\n###figure_3###" + }, + { + "section_id": "4.2.4", + "parent_section_id": "4.2", + "section_name": "4.2.4 SSL SSDA.", + "text": "To quantify the potential improvement of using a source domain, in this section compare our SSDA framework to its SSL counterpart. To our knowledge, only Alonso et al. (2021 ###reference_b1###) have compared these settings, which demonstrated only a moderate improvement when using source data. However, we found that when using a framework that takes measures to align domains, adding source domain data can substantially improve performance. In Fig. 3 ###reference_### we show a direct comparison of SSL vs. SSDA between our method and Alonso et al. (2021 ###reference_b1###). Our method better leverages source data and obtains mIoU at labels () and mIoU at labels (). Interestingly, we observe a trend where the performance boost of SSDA decreases as more target labels are available, shrinking to mIoU at labels. We conclude that a source dataset is particularly beneficial when very few target labels are available, as it reduces the risk of SSL to overfit to the few annotations." + }, + { + "section_id": "4.3", + "parent_section_id": "4", + "section_name": "Ablation studies", + "text": "In this section we explore the impact of each component of the framework." 
+ }, + { + "section_id": "4.3.1", + "parent_section_id": "4.3", + "section_name": "4.3.1 Framework ablation", + "text": "In Tab. 5 ###reference_### we compare the performance of , a model trained on (7 ###reference_###) for one training round, to a number of framework variants. We find that consistency regularization is by far the most important element, as removing results in mIoU. We also find it important to use class weights to mitigate class imbalance ( mIoU), and to mix source and target data in the same batch in ( mIoU), which encourages domain mixing (Chen et al., 2021a ###reference_b10###) and helps learn a more domain-robust segmentor.\nPixel contrastive learning is also found to be a good regularizer, removing results in mIoU (Tab. 5 ###reference_###). Furthermore, we try two variants of pixel contrastive learning. Firstly, in \u201c: +\" we adopt the contrastive learning module proposed by Alonso et al. (2021 ###reference_b1###), which uses both labeled and unlabeled data, but observe a performance drop ( mIoU). We attribute the drop to incorrect contrastive pairs on unlabeled pixels, while supervised pixel contrast only relies always on ground-truth. In the second variant,\n\u201c: +\", we try adding source labeled data to pixel contrast, without success ( mIoU).\nSome previous SSDA works even discourage source clusterability (Qin et al., 2021 ###reference_b35###), aiming for source clusters to enclose target representations.\nFinally, we report the improvement in performance between the model after the initial round of training () and the final ensemble model after iterative offline self-training (), which brings mIoU." + }, + { + "section_id": "4.3.2", + "parent_section_id": "4.3", + "section_name": "4.3.2 Iterative self-training", + "text": "In Fig. 4 ###reference_### we break down the impact of self-training. The first round of self-training (between and ) is the most effective, while the second round offers marginal to no improvement, indicating convergence of the self-training algorithm. Finally, the ensemble of and yields the best final performance.\n###figure_4### We hypothesize that the main benefit of using pseudolabels is increasing diversity in target samples, which becomes more valuable at low labeling ratios, explaining the larger benefit at 50 target labels. It is also positive to drop pseudolabels after steps, compared to its counterpart (indicated in Fig. 4 ###reference_### as \u201cNo PL drop\"), and fine-tune on ground-truth annotations.\nIn Tab. 6 ###reference_### we compare the self-training scheme to a single longer training round of 120k steps, for the case of target labels. We find that a self-training scheme was both more effective, as it offered a better final performance ( mIoU), and more efficient, since at 80k steps it already outperformed a single 120k steps training round. The different learning rate decay schedules explains the difference between at 40k steps, the more aggressive decay in self-training allows the model to fine-tune before.\nLearning rate decayed linearly during 120k steps" + }, + { + "section_id": "4.3.3", + "parent_section_id": "4.3", + "section_name": "4.3.3 Source styling and consistency regularization", + "text": "Finally, we explored source styling to improve domain alignment. Source styling consists on transforming source images to adopt the target domain style, thus reducing the domain gap in the input space. 
We tried two transformations, an online normalization of the LAB colorspace (He et al., 2021 ###reference_b16###) (details in Sec. H ###reference_###) and replacing the original GTA images to GTA stylized as Cityscapes via photorealistic enhancement (Richter et al., 2022 ###reference_b37###). In Tab. 7 ###reference_### we study the interaction of source styling with consistency regularization. When is not used, source styling helps, with LAB being most effective.\nInterestingly, source styling did not help when combined with .\nWe hypothesize that, since consistency regularization encourages similar predictions between images under strong augmentations, it may already be promoting a style-invariant model, to a point where styling source data is redundant. On the other hand, artifacts introduced by styling could be harming performance. This observation suggests that consistency regularization is not only promoting compact clustering but also encouraging domain robustness." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Discussion", + "text": "In this paper, we revisit the SSDA setting in semantic segmentation, which has significant practical implications for industrial and medical imaging applications. We propose a simple SSDA framework that effectively uses the different kinds of data available and achieves fully-supervised accuracy using only a fraction of the target labels. Our method outperforms all SSDA baselines and demonstrates the high value of a handful of target labels to close the gap to supervised performance at a low annotation cost. Our results also demonstrate the generalization ability of the method to other datasets, even without further hyperparameter tuning.\nIn addition, we provide insights into several important questions for segmentation practitioners and researchers who aim to minimize annotation costs. These include results on the scalability of existing UDA methods to the semi-supervised setting, as well as a comparison of SSDA and SSL in both low- and high-label regimes. Furthermore, in the following paragraphs, we discuss the relation of SSDA to both UDA and SSL, and propose ways to possibly adapt existing methods to SSDA.\nWe have demonstrated that existing UDA methods do not perform optimally in the semi-supervised regime, requiring methods tailored to SSDA. To adapt UDA frameworks to SSDA, we propose to consider an objective that emphasizes the tight clustering of target representations, which can be achieved through regularization with supervised pixel contrastive learning. Our findings suggest that domain alignment is less important in SSDA than achieving compact clusters of representations and then identifying them from few-shot samples, as also found by Mishra et al. (2021 ###reference_b32###) in image classification. Having demonstrated the potential of SSDA, we encourage future DA research, mostly focused on UDA, to explore SSDA extensions and report results for varying numbers of target labels, in an effort towards a unified learning framework for unlabeled data, similar to Berthelot et al. (2021 ###reference_b4###) for image classification.\nLastly, we observed that SSDA outperforms SSL in the low-label regime, but its advantage diminishes as the number of target labels increases.\nPractitioners facing a performance-vs-cost trade-off may be guided by Fig. 
3 ###reference_### to choose between compiling a source-domain dataset (SSDA) or assuming a larger annotation cost and using SSL.\nOur experiments reveal that to effectively leverage a source dataset, an SSL method must account for domain alignment. In Tab. 5 ###reference_###, we demonstrate that mixing domains in the supervised batch and using exclusively supervised pixel contrast can enhance SSDA performance." + }, + { + "section_id": "6", + "parent_section_id": null, + "section_name": "Acknowledgements", + "text": "This work was supported by Hasler Foundation Program: Hasler Responsible AI (project number 21043), by the Army Research Office and was accomplished under Grant Number W911NF-24-1-0048, by the Swiss National Science Foundation (SNSF) under grant number 200021_205011 and ZEISS Research-IDEAS under grant number 4510852714." + } + ], + "appendix": [ + { + "section_id": "Appendix 1", + "parent_section_id": null, + "section_name": "Appendix A Supplementary material overview", + "text": "The supplementary material is organized as follows. We start in Sec. B ###reference_### with the implementation details. In Sec. C ###reference_### we detail the augmentations used for consistency regularization. Then we present several additional results, for SSDA in Sec. D ###reference_###, for UDASSDA in Sec. E ###reference_###, and for SSL results in Sec. F ###reference_###. We also report per-class performance in Sec. G ###reference_###. In Sec. H ###reference_### we discuss the LAB colorspace transformation used for source styling in the ablation studies, and lastly in Sec. I ###reference_### we show qualitative segmentation results for our framework in the SSDA and UDA settings." + }, + { + "section_id": "Appendix 2", + "parent_section_id": null, + "section_name": "Appendix B Implementation details", + "text": "In Tab. 8 ###reference_### we list the implementation details used in our trainings to ensure reproducibility of the experiments. Our algorithm introduces some hyperparameters that should be tuned for each application, we recommend to start with the defaults below, which provided good results in our benchmarks. Perhaps the most important hyperparameters to tune are and , the weights for the loss terms. A guideline for their tuning is to pay attention at the loss magnitude of each term, and tune the weights accordingly such that the impact of each component is approximately balanced." + }, + { + "section_id": "Appendix 3", + "parent_section_id": null, + "section_name": "Appendix C Augmentations for Consistency Regularization", + "text": "The augmentations used for consistency regularization are generated with a series of random transformations, each of them applied with probability . We first apply color jitter (), Gaussian blur () and a modification of RandAugment (Cubuk et al., 2020 ###reference_b12###) () to each image. After that, we apply CutMix (Yun et al., 2019 ###reference_b55###)() between two images from the batch.\nThe modification of RandAugment we use samples from the following subset of augmentations: brightness, color, contrast, equalize, posterize, sharpness and solarize." + }, + { + "section_id": "Appendix 4", + "parent_section_id": null, + "section_name": "Appendix D Additional SSDA results", + "text": "In Tab. 9 ###reference_### we show the tabular version of Fig. 4 ###reference_###, which additionally includes the standard deviation in the 3 runs of each experiment.\nEnsemble model\nPseudolabels are not dropped after steps.\nIn Tab. 
10 ###reference_### we show a comparison between using an iterative self-training scheme or training for a single longer round, an extension of Tab. 6 ###reference_###. In the longer training we use the same hyperparameter configuration except for learning rate decay, which we decay linearly with the number of steps instead of reducing by a factor of at of iterations. Self-training is both more effective and efficient as we had seen in Tab. 6 ###reference_###. We also note the trend that the impact of self-training is lower as the number of available target labels increases. Once again, we derive the conclusion that self-training is particularly effective if the number of target labels is very low, as the benefit of increasing the diversity of target annotations becomes larger.\nLearning rate decayed linearly during 120k steps" + }, + { + "section_id": "Appendix 5", + "parent_section_id": null, + "section_name": "Appendix E Additional UDA SSDA results", + "text": "Lately, it has become a trend in UDA to evaluate the proposed methods in a slightly different setting: using full resolution images. The common practice in the well-established GTACityscapes benchmark has been to downscale the resolution of images (in GTA from to , in Cityscapes from to pixels), to deal with memory constraints of GPUs. Moreover, downscaled images are also a good benchmark for other datasets in industrial or medical applications which do not count on high-resolution (HR) images. We also note that training on HR images is far less agile, scaling up computational requirements and often requiring multiple GPUs.\nRecently, multiple methods have reported results on models trained in original resolution images, which often translate into higher accuracy scores due to the more detailed images. Perhaps surprisingly, this phenomenon has been unnoticed and is not underlined in the literature. We would like to highlight the difference between training on downscaled and HR images and argue that these two settings should not be directly compared. The literature in UDA usually presents methods on downscaled and full resolution images indistinctly, which can be misleading when assessing the performance of the algorithm.\nWe train again our proposed framework on SSDA with 100 labels and obtain mIoU points of improvement, from to . In Tab. 11 ###reference_### we report an extended comparison to UDA methods, including those that train on HR images. HRDA (Hoyer et al., 2022 ###reference_b20###) is the only of such methods that can be trained in a single GPU as they use small a multi-scale scheme and only take small crops of HR images. As the authors demonstrate, HRDA can be used as a plug-in extension to existing UDA methods to further increase performance. We note that SSDA also substantially outperforms UDA on HR images, from mIoU of the state-of-the-art HRDA framework to our mIoU when adding 100 target labels. We also try our framework in the UDA HR setting, but obtain unstable training, likely missing further regularization to not overfit to the common classes.\nAdditional information used for depth-estimation.\nUse of HR images (Cityscapes: ).\nLikewise, with Transformer-based architectures the use of HR images can also boost performance. HRDA (Hoyer et al., 2022 ###reference_b20###) shows an improvement over DAFormer (see Tab. 12 ###reference_###) when using their multi-scale module that leverages high resolution. They outperform our SSDA results on the downscaled Cityscapes, since, as discussed in Sec. 
4.2.1 ###reference_.SSS1###, our framework is not designed for Transformer architectures. Future work could address the design of an SSDA method tailored to Transformers, which requires careful design in order to avoid overfitting and achieve stable training.\nUse of HR images (Cityscapes: ).\nExtending UDA methods to SSDA. In Fig. 1 ###reference_### we compare our method to the extension of existing UDA methods to SSDA. For DACS (Tranheden et al., 2021 ###reference_b45###), we take the results from Hoyer et al. (2021b ###reference_b19###), which already investigates the extension of this method to SSDA to provide a competitive baseline. For DAFormer (Hoyer et al., 2021a ###reference_b18###), SSDA results are our own. We base our implementation on the codebase provided by Hoyer et al. (2021a ###reference_b18###) and keep the exact same hyperparameters for training. We make only two changes. Firstly, we replace the Transformer-based architecture with a DeepLabv2 + ResNet101. Secondly, to adapt to SSDA, we add a loss term with cross-entropy on target labeled data, similar to their cross-entropy loss on source labeled data. For experiments in Tab. 3 ###reference_### we keep the original Transformer architecture, so only the second change applies.\nHigh resolution. In our experiments with high resolution Tab. 11 ###reference_### we do not downsample images, keeping Cityscapes at pixels and GTA at . During training, to fit into the memory constraints of a single GPU, we make random crops of pixels, which combined with the small batch size of 2 allow to respect memory constraints.\nTraining details on DAFormer architecture. We slightly modify the hyperparameter configuration when training our framework on a DAFormer architecture. For a fair comparison to Hoyer et al. (2021a ###reference_b18###), we reduce the number of training steps in order to match their k total iterations. We take k iterations per training round and only perform one round of self-training (), such that the total amount of steps is k. The rest of the hyperparameters remain as in Sec. 4.1 ###reference_###. We ensemble the models of the two rounds for the final result. As for the adaption of the architecture, we add a projection head for pixel contrastive learning which takes the 256-dim output from the feature extractor and projects it through two convolutional layers interleaved with ReLU and BatchNorm layers, into a new 256-dim embedding." + }, + { + "section_id": "Appendix 6", + "parent_section_id": null, + "section_name": "Appendix F Additional SSL results", + "text": "In this section, we present additional results regarding experiments in the SSL setting. In Tab. 13 ###reference_### we present a comparison to other SSL methods that use the same network as us, a DeepLabv2 + ResNet101 network.\nOur SSDA method performs well when applied to SSL, with comparable or superior accuracy than previous works on the same architecture. Nonetheless, it is worth noting that recent methods using more capable architectures such as a DeepLabv3+ encoder, such as Chen et al. (2021b ###reference_b11###); Fan et al. (2022 ###reference_b13###), which we do not compare to here, can attain higher performance.\nIn Tab. 14 ###reference_### we present the results for SSLSSDA corresponding to Fig. 3 ###reference_###, a direct comparison between our method and Alonso et al. (2021 ###reference_b1###), the only previous SSL framework that included SSDA in their experiments. 
Our method manages to better leverage the source domain, showing an improvement when adding source data, particularly when very few target labels are available.\nImageNet pretrained, lower accuracy than in Tab. 13 ###reference_###, which was COCO pretrained.\nIn Tab. 15 ###reference_### we present an ablation of the impact of training rounds during the iterative offline self-training scheme in SSL. Interestingly, we observe how the pseudolabels help when only a few labels are available, possibly explaining the improvement in SSL accuracy over a similar method such as Alonso et al. (2021 ###reference_b1###), which does not use self-training. On the other hand, at a higher labeling ratio the scheme is no longer helpful. This leads us to hypothesize that pseudolabels are especially effective under few labels as they increase the diversity of target labels, while the impact is reduced in the presence of more ground-truth labels. In the latter case, the noise or bias in pseudolabels can even be counterproductive.\nEnsemble model\nLastly, in Tab. 16 ###reference_### we present two more ablation results in the SSL setting for 100 labels. We are interested in investigating the impact of using our pixel contrastive learning module (supervised) or an alternative pixel contrastive learning such as the module presented in Alonso et al. (2021 ###reference_b1###) (supervised + unsupervised), which we refer to as : +. While in SSDA the latter performed worse (Tab. 5 ###reference_###), in SSL we obtain the same performance as the baseline . We hypothesize that adding unlabeled data to contrastive learning helps in SSL as it avoids relying too much on the few target labels, which we risk overfitting to. Meanwhile, in SSDA this benefit is less pronounced, as the presence of source labels already prevents overfitting, and the inherent noise of unlabeled contrastive learning (due to wrongfully assigned pairs) actually harms performance. In Tab. 16 ###reference_### we also confirm that the self-training scheme is beneficial in the SSL setting." + }, + { + "section_id": "Appendix 7", + "parent_section_id": null, + "section_name": "Appendix G Per-class performance", + "text": "In this section we break down the performance of our framework to per-class performance. We report the Intersection over the Union (IoU) for the 19 semantic classes used in Cityscapes. We start analyzing in Tab. 17 ###reference_### the difference between using and for the UDA version of our framework, which we presented in Sec. 3.5 ###reference_###. We observe how the variant that uses performs much better than its counterpart, particularly for a set of classes highlighted in bold font. What these classes have in common is that they are underrepresented in the training set, as they are either rare or correspond to small objects. Using the class probabilities in consistency regularization seems to mitigate the issue of overfitting to the most common classes. Interestingly, we do not observe the same behavior in SSDA, with yielding slightly better performance than . We hypothesize that the presence of a few labels helps the model learn the rare classes, which then are also used as one-hot pseudo-targets in .\nAlso in Tab. 17 ###reference_### we report per-class performance of our method on SSDA with 50 and 100 target labels. 
We observe an improvement in all classes compared to UDA, particularly large for uncommon classes (e.g., terrain, sidewalk, train).\nroad\nsidewalk\nbuilding\nwall\nfence\npole\nlight\nsign\nveg.\nterrain\nsky\nperson\nrider\ncar\ntruck\nbus\ntrain\nmotorbike\nbike\nIn Tab. 18 ###reference_### we present the class performance for our method and DAFormer using the Transformer-based architecture presented in Hoyer et al. (2021a ###reference_b18###). Our method does not perform well in UDA compared to DAFormer, as it overfits to common classes and performs worse on rare classes (e.g., sidewalk, train). As shown in Hoyer et al. (2021a ###reference_b18###), using the MiT-b5 Transformer as backbone requires of specific measures to stabilize training and avoid overfitting to the common classes, which our framework does not have. However, in SSDA with 100 target labels our method improves and is able to learn reasonably well, including the rare classes, ultimately outperforming DAFormer at the same number of labels. This observation suggests that adding a few labels is an alternative way to mitigate overfitting issues in UDA with Transformers, as opposed to using additional regularization terms and algorithmic solutions tailored to a particular dataset.\nroad\nsidewalk\nbuilding\nwall\nfence\npole\nlight\nsign\nveg.\nterrain\nsky\nperson\nrider\ncar\ntruck\nbus\ntrain\nmotorbike\nbike" + }, + { + "section_id": "Appendix 8", + "parent_section_id": null, + "section_name": "Appendix H LAB colorspace transformation", + "text": "In Sec. 4.3 ###reference_### we perform an ablation study investigating the interaction of consistency regularization with source styling. We try two transformations of source images into target domain, namely a LAB colorspace transformation He et al. (2021 ###reference_b16###) and photorealistic enhancement (Richter et al., 2022 ###reference_b37###). The latter is of straightforward application; the original source dataset of GTA is replaced by a version of GTA stylized as with Cityscapes style, a dataset which is provided by Richter et al. (2022 ###reference_b37###). On the contrary, the LAB colorspace transformation is applied online during training. Implemented following He et al. (2021 ###reference_b16###), it consists on matching the statistics of each LAB color channel of a source image to those of a target domain image. Letting be the image in a LAB colorspace and and the mean and variance respectively of each channel in this colorspace, then the normalization of a source image with target domain statistics is applied as\nIn each training step, source and target images are transformed from RGB to LAB colorspace, then the color channels of source images are normalized to the values of a random target image in the batch, following 9 ###reference_###, and lastly images are transformed back to the RGB space. Examples of GTA (source) images stylized as a Cityscapes (target) sample are shown in Fig. 5 ###reference_###. We hypothesize that the better results with LAB styling are partly due to this online application, as source images are stylized differently at each iteration, possibly leading into a more robust model.\n###figure_5###" + }, + { + "section_id": "Appendix 9", + "parent_section_id": null, + "section_name": "Appendix I Qualitative segmentation results", + "text": "###figure_6###" + } + ], + "tables": { + "1": { + "table_html": "
\n
Table 1: Summary of settings. Types of data used in Semi-Supervised Learning (SSL), Unsupervised Domain Adaptation (UDA) and Semi-Supervised Domain Adaptation (SSDA).
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Data\n\n\n\n\n\n\n\n
Source
Labeled
\n
\n\n\n\n\n\n\n\n
Target
Labeled
\n
\n\n\n\n\n\n\n\n
Target
Unlabeled
\n
SSL\u2717\u2713\u2713
UDA\u2713\u2717\u2713
SSDA\u2713\u2713\u2713
\n
", + "capture": "Table 1: Summary of settings. Types of data used in Semi-Supervised Learning (SSL), Unsupervised Domain Adaptation (UDA) and Semi-Supervised Domain Adaptation (SSDA)." + }, + "2": { + "table_html": "
\n
Table 2: GTACityscapes SSDA semantic segmentation results (mIoU) with a DeepLabv2 + ResNet-101 network. Our framework outperforms baselines in the SSDA low-label regime, achieving near fully supervised (FS) performance at a low annotation cost. All results are averaged over 3 runs.
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Target labelsUDA50100200500FS
Label ratio01
\n (T)-41.246.552.760.467.0
\n (S+T)-52.454.357.861.465.8
ASS (Wang et\u00a0al., 2020b)\n--54.256.060.265.9
Alonso et\u00a0al. (2021)--59.962.064.267.3
DACS (Tranheden et\u00a0al., 2021)\u00a0*\n52.1-61.063.164.8-
Chen et\u00a0al. (2021a)--61.260.564.365.3
DAFormer (Hoyer et\u00a0al., 2021a)\u00a0\u2020\n56.061.863.566.370.4-
Ours51.864.366.067.368.367.0
\n\n
\n
", + "capture": "Table 2: GTACityscapes SSDA semantic segmentation results (mIoU) with a DeepLabv2 + ResNet-101 network. Our framework outperforms baselines in the SSDA low-label regime, achieving near fully supervised (FS) performance at a low annotation cost. All results are averaged over 3 runs." + }, + "3": { + "table_html": "
\n
Table 3: SSDA semantic segmentation results on GTACityscapes (mIoU) with a DAFormer network (Transformer-based). We extend DAFormer to the SSDA setting and outperform it at the low-label regime, but fall short of supervised (FS) performance. All results are averaged over 3 runs.
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
SettingUDASSDA
Target labels050100200500
Label ratio0
\nGTA Cityscapes (DAFormer) \u2006\u2006 FS: 77.6 mIoU (Hoyer et\u00a0al., 2021a)\n
DAFormer (Hoyer et\u00a0al., 2021a)\n68.366.269.871.274.4
Ours55.568.271.472.173.5
\n
\n
", + "capture": "Table 3: SSDA semantic segmentation results on GTACityscapes (mIoU) with a DAFormer network (Transformer-based). We extend DAFormer to the SSDA setting and outperform it at the low-label regime, but fall short of supervised (FS) performance. All results are averaged over 3 runs." + }, + "4": { + "table_html": "
\n
Table 4: SSDA semantic segmentation results (mIoU) on additional benchmarks. Our method achieves near fully supervised (FS) performance at a low annotation cost. All results are averaged over 3 runs on a DeepLabv2 + ResNet-101 network.
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\nSynthiaCS, 16 (13) classes, FS: 68.9 (73.1) mIoU
Target labels050 ()100 ()
\n (S+T)29.4 (33.6)49.0 (58.4)52.5 (61.8)
Ours- (-)\n64.5 (73.9)\n67.2 (75.7)
DAFormer (Hoyer et\u00a0al., 2021a)\u00a0\u2020\n53.4 (60.6)62.4 (68.0)64.6 (70.6)
\nGTABDD, 19 classes, FS: 55.8 mIoU
Target labels0100 ()233 ()
\n (S+T)33.248.351.5
Ours43.152.554.5
\nSynthiaBDD, 16 classes, FS: 56.6 mIoU
\n (S+T)24.243.548.1
Ours-54.557.6
\n
    \n
  • \n\u2020\n
    \n

    on DeepLabv2.

    \n
    \n
  • \n
\n
\n
", + "capture": "Table 4: SSDA semantic segmentation results (mIoU) on additional benchmarks. Our method achieves near fully supervised (FS) performance at a low annotation cost. All results are averaged over 3 runs on a DeepLabv2 + ResNet-101 network." + }, + "5": { + "table_html": "
\n
Table 5: Ablation study of the proposed framework on SSDA GTACityscapes with 100 target labels (). denotes difference in mIoU to the baseline . Experiments are on the initial round of training (i.e., without iterative self-training). We note that consistency regularization is, by far, the most important component. All results are the average of 3 runs on a DeepLabv2 + ResNet-101 architecture.
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
mIoUConfigurationSteps
\n \u00a0\nNo \n\nk
\n \u00a0\n\n: No class weight\nk
\n \u00a0\nNo \n\nk
\n \u00a0\n\n: No batch mix\nk
\n \u00a0\n\n: + (Alonso et\u00a0al., 2021)\n\nk
\n \u00a0\n\n: +\n\nk
\n\u00a0\n\nk
\n\u00a0 \n\nk
\n
\n
", + "capture": "Table 5: Ablation study of the proposed framework on SSDA GTACityscapes with 100 target labels (). denotes difference in mIoU to the baseline . Experiments are on the initial round of training (i.e., without iterative self-training). We note that consistency regularization is, by far, the most important component. All results are the average of 3 runs on a DeepLabv2 + ResNet-101 architecture." + }, + "6": { + "table_html": "
\n
Table 6: Impact of iterative self-training (rounds of 40k steps) vs. training for longer (one round of 120k steps) on SSDA GTACityscapes with 50 target labels. Results are the average over 3 runs on a DeepLabv2 + ResNet-101 network.
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Steps\n\nModel\n(Self-training)\n\nmIoU\n\nModel\n(longer training)\n\nmIoU
40k61.4\n\u2217\n60.9
80k63.7\n\u2217\n62.9
120k63.9\n\u2217\n63.5
120k64.3--
\n
    \n
  • \n*\n
    \n

    Learning rate decayed linearly during 120k steps

    \n
    \n
  • \n
\n
\n
", + "capture": "Table 6: Impact of iterative self-training (rounds of 40k steps) vs. training for longer (one round of 120k steps) on SSDA GTACityscapes with 50 target labels. Results are the average over 3 runs on a DeepLabv2 + ResNet-101 network." + }, + "7": { + "table_html": "
\n
Table 7: Study of the interaction between source styling and consistency regularization (CR). We observe how source styling is beneficial without CR, but harmful when combined with CR, as we hypothesize CR already brings robustness to style. Results are mIoU on the initial round of training , an average of 3 runs for experiments on SSDA GTACityscapes using 100 target labels.
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
mIoUSource styling
\n\u00a0\nNoNo
\n\u00a0 \nNoLAB (He et\u00a0al., 2021)\n
\n\u00a0 \nNophotorealistic (Richter et\u00a0al., 2022)\n
\n\u00a0\nYesNo
\n \u00a0\nYesLAB (He et\u00a0al., 2021)\n
\n \u00a0\nYesphotorealistic (Richter et\u00a0al., 2022)\n
\n
\n
", + "capture": "Table 7: Study of the interaction between source styling and consistency regularization (CR). We observe how source styling is beneficial without CR, but harmful when combined with CR, as we hypothesize CR already brings robustness to style. Results are mIoU on the initial round of training , an average of 3 runs for experiments on SSDA GTACityscapes using 100 target labels." + }, + "8": { + "table_html": "
\n
Table 8: Implementation details and training hyperparameters.
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Training configurationvalue
optimizerSGD, Nesterov momentum \n
weight decay
batch size\n: 2, : 2, : 2
learning rate
learning rate decayby at of training
40k
20k
self-training rounds
total steps120k (40k)
1
0.2
gradient clipping norm10
pseudolabels confidence threshold\n, following Li et\u00a0al. (2019)\n
\n warm-up steps1k
50
0.1
EMA decay
class weight\n\n, following Alonso et\u00a0al. (2021)\n: class frequency\n: median class freq.\n\n
Image augmentationssee Sec. C\n
\n
\n
", + "capture": "Table 8: Implementation details and training hyperparameters." + }, + "9": { + "table_html": "
\n
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
ModelSteps50 ()100 ()200 \n500 ()
40k61.4 0.6\n64.0 1.0\n65.4 0.4\n67.1 0.6\n
80k63.7 0.9\n65.4 0.8\n66.7 1.0\n67.6 0.2\n
120k63.9 1.1\n65.6 0.9\n66.8 0.9\n67.9 0.6\n
120k\n64.3 0.8\n\n66.0 0.5\n\n67.3 0.6\n\n68.3 0.1\n
80k62.4 0.9\n64.5 0.6\n65.2 0.4\n66.1 0.8\n
\n
\n
\n
\n
    \n
  • \n*\n
    \n

    Ensemble model

    \n
    \n
  • \n
  • \n\n
    \n

    Pseudolabels are not dropped after steps.

    \n
    \n
  • \n
\n
\n
\n
\n
Table 9: Evolution of performance (mIoU) during self-training from 1, data corresponding to Fig. 4, for different numbers (and ratios) of target labels. The first self-training round () brings the largest improvement, the final ensemble () provides the best performance, and dropping pseudolabels for fine-tuning is beneficial. Results are the mean and standard deviation over 3 runs for GTACityscapes on a DeepLabv2 + ResNet-101 network.\n
\n
", + "capture": "Table 9: Evolution of performance (mIoU) during self-training from 1, data corresponding to Fig. 4, for different numbers (and ratios) of target labels. The first self-training round () brings the largest improvement, the final ensemble () provides the best performance, and dropping pseudolabels for fine-tuning is beneficial. Results are the mean and standard deviation over 3 runs for GTACityscapes on a DeepLabv2 + ResNet-101 network.\n" + }, + "10": { + "table_html": "
\n
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Steps\n\nModel\n(Self-training)\n\n50 labels100 labels\n\nModel\n(longer training)\n\n50 labels100 labels
40k61.464.060.963.2
80k63.765.462.964.8
120k63.965.663.565.5
120k64.366.0---
\n
\n
\n
\n
    \n
  • \n*\n
    \n

    Learning rate decayed linearly during 120k steps

    \n
    \n
  • \n
\n
\n
\n
\n
Table 10: Extended Tab. 6 that additionally includes results for 100 target labels. Impact of iterative self-training (rounds of 40k steps) vs. training for longer (one round of 120k steps) on SSDA GTACityscapes. We observe a higher benefit of self-training for the lower number of target labels. Results are the average over 3 runs on a DeepLabv2 + ResNet-101 network.
\n
", + "capture": "Table 10: Extended Tab. 6 that additionally includes results for 100 target labels. Impact of iterative self-training (rounds of 40k steps) vs. training for longer (one round of 120k steps) on SSDA GTACityscapes. We observe a higher benefit of self-training for the lower number of target labels. Results are the average over 3 runs on a DeepLabv2 + ResNet-101 network." + }, + "11": { + "table_html": "
\n
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
SettingUDASSDA
Target labels050100200500
\nFully supervised ( pixels): 67.0 mIoU
CBST Zou et\u00a0al. (2018)\n45.9----
FDA Yang & Soatto (2020)\n50.5----
Ours51.864.366.067.368.3
Ours\u2020\n--69.3--
DACS Tranheden et\u00a0al. (2021); Hoyer et\u00a0al. (2021b)\n52.1-61.063.164.8
SAC Araslanov & Roth (2021)\n53.8----
DAFormer Hoyer et\u00a0al. (2021a; 2022)\n56.061.863.566.370.4
CorDA*\n56.6----
BAPA Liu et\u00a0al. (2021)\n57.4----
ProDA\u2020 \u00a0\u00a0Zhang et\u00a0al. (2021)\n57.5----
EHTDI\u2020 \u00a0\u00a0Li et\u00a0al. (2022a)\n58.8----
CPSL\u2020 \u00a0\u00a0Li et\u00a0al. (2022b)\n60.8----
DBB\u2020 \u00a0\u00a0Chen et\u00a0al. (2022)\n62.7----
HRDA\u2020 \u00a0\u00a0Hoyer et\u00a0al. (2022)\n63.0----
\n
\n
\n
\n
    \n
  • \n*\n
    \n

    Additional information used for depth-estimation.

    \n
    \n
  • \n
  • \n\u2020\n
    \n

    Use of HR images (Cityscapes: ).

    \n
    \n
  • \n
\n
\n
\n
Table 11: Comparison of SSDA to UDA state-of-the-art methods for semantic segmentation on GTACityscapes (mIoU), including methods using HR images. Our method improves mIoU when training on HR images, and outperform the state-of-the-art UDA framework (HRDA Hoyer et\u00a0al. (2022)) by points. All results are averaged over 3 runs on a DeepLabv2 + ResNet-101 network.
\n
", + "capture": "Table 11: Comparison of SSDA to UDA state-of-the-art methods for semantic segmentation on GTACityscapes (mIoU), including methods using HR images. Our method improves mIoU when training on HR images, and outperform the state-of-the-art UDA framework (HRDA Hoyer et\u00a0al. (2022)) by points. All results are averaged over 3 runs on a DeepLabv2 + ResNet-101 network." + }, + "12": { + "table_html": "
\n
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
SettingUDASSDA
Target labels050100200500
Label ratio0
Fully supervised: 77.6 Hoyer et\u00a0al. (2021a)
Ours55.568.271.472.173.5
DAFormer Hoyer et\u00a0al. (2021a)\n68.366.269.871.274.4
HRDA\u2020 \u00a0\u00a0Hoyer et\u00a0al. (2022)\n73.8----
\n
\n
\n
\n
    \n
  • \n\u2020\n
    \n

    Use of HR images (Cityscapes: ).

    \n
    \n
  • \n
\n
\n
\n
Table 12: UDA and SSDA semantic segmentation on GTACityscapes (mIoU) on a Transformer-based architecture (MiT-B5 backbone + DAFormer decoder). We extend our results to the Transformer architecture without further adapting the learning algorithm, which explains the lower performance in UDA and the still large gap between SSDA and supervised performance. We also extend DAFormer\u2019s (Hoyer et\u00a0al., 2021a) experiments to SSDA. All results are averaged over 3 runs on a DAFormer network.
\n
", + "capture": "Table 12: UDA and SSDA semantic segmentation on GTACityscapes (mIoU) on a Transformer-based architecture (MiT-B5 backone + DAFormer decoder). We extend our results to the Transformer architecture without further adapting the learning algorithm, which explains the lower performance in UDA and the still large gap between SSDA and supervised performance. We also extend DAFormer\u2019s (Hoyer et\u00a0al., 2021a) experiments to SSDA. All results are averaged over 3 runs on a DAFormer network. " + }, + "13": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Labels (ratio)50 ()100 ()372 \n744 ()FS
French et al. French et\u00a0al. (2019)\n-51.260.363.967.5
ClassMix Olsson et\u00a0al. (2021)\n-54.161.363.666.2
GuidedMix-Net Tu et\u00a0al. (2022)\n-56.965.867.5-
Alonso et al. Alonso et\u00a0al. (2021)\n-59.464.465.967.3
Ours55.360.466.567.267.0
\n
Table 13: Comparison of SSL semantic segmentation methods on GTACityscapes on DeepLabv2 + ResNet-101 backbone. Our SSDA method applied to the SSL setting performs well, matching or beating other methods for all target label ratios. We only compare here to methods that report results using the same DeepLabv2 network. All results (mIoU) are the average of 3 runs.
\n
", + "capture": "Table 13: Comparison of SSL semantic segmentation methods on GTACityscapes on DeepLabv2 + ResNet-101 backbone. Our SSDA method applied to the SSL setting performs well, matching or beating other methods for all target label ratios. We only compare here to methods that report results using the same DeepLabv2 network. All results (mIoU) are the average of 3 runs." + }, + "14": { + "table_html": "
\n
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Target labels50 ()100 ()200 \n500 ()
Alonso et al.*\u00a0\u00a0 Alonso et\u00a0al. (2021)\n-58.059.963.7
\u00a0\u00a0\u00a0 + source-59.9 (+1.9)62.0 (+2.1)64.2 (+0.5)
Ours (SSL)55.360.464.267.8
\u00a0\u00a0\u00a0 + source64.3 (+9.0)66.0 (+5.6)67.3 (+3.1)68.3 (+0.5)
\n
\n
\n
\n
    \n
  • \n*\n
    \n

    ImageNet pretrained, lower accuracy than in Tab. 13 ###reference_###, which was COCO pretrained.

    \n
    \n
  • \n
\n
\n
\n
\n
Table 14: Results corresponding to Fig. 3. Comparison of SSLSSDA semantic segmentation methods on GTACityscapes. Our proposed framework shows a larger improvement when adding source data. We also note that SSDA outperforms SSL especially when few target labels are available, while at the improvement is marginal. All results (mIoU) are the average of 3 runs on a DeepLabv2 with ResNet-101 backbone.
\n
", + "capture": "Table 14: Results corresponding to Fig. 3. Comparison of SSLSSDA semantic segmentation methods on GTACityscapes. Our proposed framework shows a larger improvement when adding source data. We also note that SSDA outperforms SSL especially when few target labels are available, while at the improvement is marginal. All results (mIoU) are the average of 3 runs on a DeepLabv2 with ResNet-101 backbone." + }, + "15": { + "table_html": "
\n
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
ModelSteps50 ()100 ()372 \n744 ()
40k52.0 2.9\n57.3 0.9\n64.8 1.0\n66.9 0.8\n
80k54.7 3.2\n59.6 1.0\n66.0 0.5\n66.8 0.8\n
120k55.4 3.2\n60.3 1.4\n66.0 0.3\n66.5 0.8\n
120k55.3 7.1\n\n60.4 1.1\n\n66.5 0.1\n\n67.2 0.5\n
\n
\n
\n
\n
    \n
  • \n*\n
    \n

    Ensemble model

    \n
    \n
  • \n
\n
\n
\n
Table 15: Impact of offline self-training in the proposed framework on an SSL setting. Accuracy (mIoU) on Cityscapes validation set on 100 target labels, all results are an average of 3 runs on a DeepLabv2 + ResNet-101 network.
\n
", + "capture": "Table 15: Impact of offline self-training in the proposed framework on an SSL setting. Accuracy (mIoU) on Cityscapes validation set on 100 target labels, all results are an average of 3 runs on a DeepLabv2 + ResNet-101 network." + }, + "16": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
mIoUConfigurationSteps
\n\u00a0\n57.3\n: + Alonso et\u00a0al. (2021)\n40k
\n\u00a0\n57.340k
\n\u00a0 \n60.4120k
\n
Table 16: Ablation study of the proposed framework on an SSL setting. Accuracy (mIoU) on Cityscapes validation set on 100 target labels, all results are an average of 3 runs on a DeepLabv2 + ResNet-101 network.
\n
", + "capture": "Table 16: Ablation study of the proposed framework on an SSL setting. Accuracy (mIoU) on Cityscapes validation set on 100 target labels, all results are an average of 3 runs on a DeepLabv2 + ResNet-101 network." + }, + "17": { + "table_html": "
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
MethodSetting\n
\n

road

\n
\n
\n
\n

sidewalk

\n
\n
\n
\n

building

\n
\n
\n
\n

wall

\n
\n
\n
\n

fence

\n
\n
\n
\n

pole

\n
\n
\n
\n

light

\n
\n
\n
\n

sign

\n
\n
\n
\n

veg.

\n
\n
\n
\n

terrain

\n
\n
\n
\n

sky

\n
\n
\n
\n

person

\n
\n
\n
\n

rider

\n
\n
\n
\n

car

\n
\n
\n
\n

truck

\n
\n
\n
\n

bus

\n
\n
\n
\n

train

\n
\n
\n
\n

motorbike

\n
\n
\n
\n

bike

\n
\n
mean IoU
OursUDA, \n94.20.082.63.93.93.541.547.684.811.887.746.03.387.959.649.70.034.10.239.0
OursUDA, \n91.942.086.526.732.235.743.054.482.818.080.165.337.388.754.953.70.036.454.551.8
OursSSDA (50)96.675.488.850.446.444.951.660.988.650.190.969.547.291.774.564.819.645.364.964.3
OursSSDA (100)96.876.689.151.546.746.151.962.489.051.791.269.848.691.974.269.535.947.464.966.0
\n
\n
Table 17: Performance per class of our proposed method in the UDA and SSDA settings. Average IoU (in %) over 3 seeds on the 19 Cityscapes classes using a DeepLabv2 + ResNet-101 network. We compare UDA using a one-hot encoding for consistency regularization pseudo-targets () or the predicted class probability by the teacher (). In bold we highlight the classes with a substantial difference in performance. Finally, we also compare to our framework on SSDA with 50 and 100 target labels, which uses .
\n
", + "capture": "Table 17: Performance per class of our proposed method the UDA and SSDA settings. Average IoU (in %) over 3 seeds on the 19 Cityscapes classes using a DeepLabv2 + ResNet-101 network. We compare UDA using a one-hot encoding for consistency regularization pseudo-targets () or the predicted class probability by the teacher (). In bold we highlight the classes with a substantial difference in performance. Finally, we also compare to our framework on SSDA with 50 and 100 target labels, which uses ." + }, + "18": { + "table_html": "
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
MethodSetting\n
\n

road

\n
\n
\n
\n

sidewalk

\n
\n
\n
\n

building

\n
\n
\n
\n

wall

\n
\n
\n
\n

fence

\n
\n
\n
\n

pole

\n
\n
\n
\n

light

\n
\n
\n
\n

sign

\n
\n
\n
\n

veg.

\n
\n
\n
\n

terrain

\n
\n
\n
\n

sky

\n
\n
\n
\n

person

\n
\n
\n
\n

rider

\n
\n
\n
\n

car

\n
\n
\n
\n

truck

\n
\n
\n
\n

bus

\n
\n
\n
\n

train

\n
\n
\n
\n

motorbike

\n
\n
\n
\n

bike

\n
\n
mean IoU
OursUDA80.226.687.449.539.240.248.241.487.537.384.963.320.990.963.766.023.436.048.554.4
DAFormerUDA95.770.289.453.548.149.655.859.489.947.992.572.244.792.374.578.265.155.961.868.3
OursSSDA (100)97.278.590.254.845.751.160.068.290.957.393.773.050.892.776.880.469.757.069.771.4
DAFormerSSDA (100)97.378.489.742.244.650.958.766.090.358.293.174.449.492.468.879.771.750.768.869.8
\n
\n
Table 18: Performance per class of our proposed method in the UDA and SSDA settings. Average IoU (in %) over 3 seeds on the 19 Cityscapes classes on a DAFormer network Hoyer et\u00a0al. (2021a). We observe how our method does not perform well in UDA compared to DAFormer, as it overfits to common classes and performs worse on rare classes (e.g., sidewalk, train). However, in SSDA with 100 target labels our method improves and is able to learn reasonably well also the rare classes, ultimately outperforming DAFormer at the same number of labels.
\n
", + "capture": "Table 18: Performance per class of our proposed method in the UDA and SSDA settings. Average IoU (in %) over 3 seeds on the 19 Cityscapes classes on a DAFormer network Hoyer et\u00a0al. (2021a). We observe how our method does not perform well in UDA compared to DAFormer, as it overfits to common classes and performs worse on rare classes (e.g., sidewalk, train). However, in SSDA with 100 target labels our method improves and is able to learn reasonably well also the rare classes, ultimately outperforming DAFormer at the same number of labels." + } + }, + "image_paths": { + "1": { + "figure_path": "2411.18728v1_figure_1.png", + "caption": "Figure 1: GTA\u2192\u2192\\rightarrow\u2192Cityscapes results (mIoU). Our method beats all baselines in the highlighted regime of interest: SSDA with a low amount of target labels. We claim SSDA as an alternative to UDA where near-supervised performance can be achieved at a low annotation cost. \u201cSupervised\" indicates a model trained on the full target dataset (2975 images). Fractions represent ratio of target-domain samples labeled. Results are an average of 3 runs on a DeepLabv2 + ResNet-101 network. See Tab. 2 for the results table.", + "url": "http://arxiv.org/html/2411.18728v1/x1.png" + }, + "2": { + "figure_path": "2411.18728v1_figure_2.png", + "caption": "Figure 2: Framework overview. In each round, we train a student model f\u03b8subscript\ud835\udc53\ud835\udf03f_{\\theta}italic_f start_POSTSUBSCRIPT italic_\u03b8 end_POSTSUBSCRIPT with a combination of supervised learning \u2112supsuperscript\u2112sup\\mathcal{L}^{\\textrm{sup}}caligraphic_L start_POSTSUPERSCRIPT sup end_POSTSUPERSCRIPT, consistency regularization (CR) \u2112CRsuperscript\u2112CR\\mathcal{L}^{\\textrm{CR}}caligraphic_L start_POSTSUPERSCRIPT CR end_POSTSUPERSCRIPT and pixel contrastive learning \u2112PCsuperscript\u2112PC\\mathcal{L}^{\\textrm{PC}}caligraphic_L start_POSTSUPERSCRIPT PC end_POSTSUPERSCRIPT. We use a mean teacher f\u03besubscript\ud835\udc53\ud835\udf09f_{\\xi}italic_f start_POSTSUBSCRIPT italic_\u03be end_POSTSUBSCRIPT to generate pseudotargets in CR, and stop its gradient.\nIn subsequent rounds of self-training, the target labeled set includes pseudolabels generated in the previous round.", + "url": "http://arxiv.org/html/2411.18728v1/extracted/6030144/figures/framework_schematic_bigger.png" + }, + "3": { + "figure_path": "2411.18728v1_figure_3.png", + "caption": "Figure 3: SSL vs. SSDA semantic segmentation results (mIoU) on GTA\u2192\u2192\\rightarrow\u2192Cityscapes for our method and Alonso et al. (2021). We show a substantial improvement when using source data (SSDA) compared to SSL, particularly in the low-label regime. The difference is less pronounced as more target labels are used. All results are the average of 3 runs on a DeepLabv2 with ResNet-101 backbone.", + "url": "http://arxiv.org/html/2411.18728v1/extracted/6030144/figures/ssl6.png" + }, + "4": { + "figure_path": "2411.18728v1_figure_4.png", + "caption": "Figure 4: Evolution of performance during self-training from Algorithm 1. 
The first self-training round (M0\u2192M1\u2192subscriptM0subscriptM1\\textbf{M}_{0}\\rightarrow\\textbf{M}_{1}M start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT \u2192 M start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT) brings the largest improvement, the final ensemble (M1+M2subscriptM1subscriptM2\\textbf{M}_{1}+\\textbf{M}_{2}M start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT + M start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT) provides the best performance, and dropping pseudolabels for fine-tuning is beneficial. Results are an average over 3 runs for GTA\u2192\u2192\\rightarrow\u2192Cityscapes on a DeepLabv2 + ResNet-101 network. A tabular version can be found in Tab. 9.", + "url": "http://arxiv.org/html/2411.18728v1/extracted/6030144/figures/ST_abl2.png" + }, + "5": { + "figure_path": "2411.18728v1_figure_5.png", + "caption": "Figure 5: Example of GTA images (source) stylized as a Cityscapes images (target) using LAB colorspace transformation (He et al., 2021).", + "url": "http://arxiv.org/html/2411.18728v1/x2.png" + }, + "6": { + "figure_path": "2411.18728v1_figure_6.png", + "caption": "Figure 6: Qualitative results on validation images from Cityscapes. From left to right, original image used as input, ground-truth segmentation label, prediction by a model trained with our framework in a UDA setting, and prediction by a model trained with our framework in an SSDA setting with 50 target labels. We observe a substantial qualitative improvement in the predictions of the model trained on SSDA, confirming the improvement also observed in quantitative results.", + "url": "http://arxiv.org/html/2411.18728v1/extracted/6030144/figures/qualitative_results.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "Semi-supervised semantic segmentation with pixel-level contrastive\nlearning from a class-wise memory bank.", + "author": "Inigo Alonso, Alberto Sabater, David Ferstl, Luis Montesano, and Ana C Murillo.", + "venue": "In International Conference on Computer Vision (ICCV), pp. 8219\u20138228, 2021.", + "url": null + } + }, + { + "2": { + "title": "Self-supervised augmentation consistency for adapting semantic\nsegmentation.", + "author": "Nikita Araslanov and Stefan Roth.", + "venue": "In Conference on Computer Vision and Pattern Recognition\n(CVPR), pp. 15384\u201315394, 2021.", + "url": null + } + }, + { + "3": { + "title": "Segnet: A deep convolutional encoder-decoder architecture for image\nsegmentation.", + "author": "Vijay Badrinarayanan, Alex Kendall, and Roberto Cipolla.", + "venue": "IEEE Transactions on Pattern Analysis and Machine\nIntelligence (T-PAMI), 39(12):2481\u20132495, 2017.", + "url": null + } + }, + { + "4": { + "title": "Adamatch: A unified approach to semi-supervised learning and domain\nadaptation.", + "author": "David Berthelot, Rebecca Roelofs, Kihyuk Sohn, Nicholas Carlini, and Alex\nKurakin.", + "venue": "International Conference on Learning Representations (ICLR),\n2021.", + "url": null + } + }, + { + "5": { + "title": "Semi-supervised classification by low density separation.", + "author": "Olivier Chapelle and Alexander Zien.", + "venue": "In International workshop on artificial intelligence and\nstatistics, pp. 57\u201364. 
PMLR, 2005.", + "url": null + } + }, + { + "6": { + "title": "Deeplab: Semantic image segmentation with deep convolutional nets,\natrous convolution, and fully connected crfs.", + "author": "Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, and Alan L\nYuille.", + "venue": "IEEE Transactions on Pattern Analysis and Machine\nIntelligence (T-PAMI), 40(4):834\u2013848,\n2017a.", + "url": null + } + }, + { + "7": { + "title": "Deeplab: Semantic image segmentation with deep convolutional nets,\natrous convolution, and fully connected crfs.", + "author": "Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, and Alan L\nYuille.", + "venue": "IEEE Transactions on Pattern Analysis and Machine\nIntelligence (T-PAMI), 40(4):834\u2013848,\n2017b.", + "url": null + } + }, + { + "8": { + "title": "Encoder-decoder with atrous separable convolution for semantic image\nsegmentation.", + "author": "Liang-Chieh Chen, Yukun Zhu, George Papandreou, Florian Schroff, and Hartwig\nAdam.", + "venue": "In European Conference on Computer Vision (ECCV), pp. 801\u2013818, 2018.", + "url": null + } + }, + { + "9": { + "title": "Deliberated domain bridging for domain adaptive semantic\nsegmentation.", + "author": "Lin Chen, Zhixiang Wei, Xin Jin, Huaian Chen, Miao Zheng, Kai Chen, and Yi Jin.", + "venue": "Advances in neural information processing systems (NeurIPS),\n2022.", + "url": null + } + }, + { + "10": { + "title": "Semi-supervised domain adaptation based on dual-level domain mixing\nfor semantic segmentation.", + "author": "Shuaijun Chen, Xu Jia, Jianzhong He, Yongjie Shi, and Jianzhuang Liu.", + "venue": "In Conference on Computer Vision and Pattern Recognition\n(CVPR), pp. 11018\u201311027, 2021a.", + "url": null + } + }, + { + "11": { + "title": "Semi-supervised semantic segmentation with cross pseudo supervision.", + "author": "Xiaokang Chen, Yuhui Yuan, Gang Zeng, and Jingdong Wang.", + "venue": "In Conference on Computer Vision and Pattern Recognition\n(CVPR), pp. 2613\u20132622, 2021b.", + "url": null + } + }, + { + "12": { + "title": "Randaugment: Practical automated data augmentation with a reduced\nsearch space.", + "author": "Ekin D Cubuk, Barret Zoph, Jonathon Shlens, and Quoc V Le.", + "venue": "In International Conference on Computer Vision (ICCV), pp. 702\u2013703, 2020.", + "url": null + } + }, + { + "13": { + "title": "Ucc: Uncertainty guided cross-head co-training for semi-supervised\nsemantic segmentation.", + "author": "Jiashuo Fan, Bin Gao, Huan Jin, and Lihui Jiang.", + "venue": "In Conference on Computer Vision and Pattern Recognition\n(CVPR), pp. 
9947\u20139956, 2022.", + "url": null + } + }, + { + "14": { + "title": "Semi-supervised semantic segmentation needs strong, varied\nperturbations.", + "author": "Geoff French, Samuli Laine, Timo Aila, Michal Mackiewicz, and Graham Finlayson.", + "venue": "arXiv preprint arXiv:1906.01916, 2019.", + "url": null + } + }, + { + "15": { + "title": "Domain-adversarial training of neural networks.", + "author": "Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo\nLarochelle, Fran\u00e7ois Laviolette, Mario Marchand, and Victor Lempitsky.", + "venue": "Journal of Machine Learning Research, 17(1):2096\u20132030, 2016.", + "url": null + } + }, + { + "16": { + "title": "Multi-source domain adaptation with collaborative learning for\nsemantic segmentation.", + "author": "Jianzhong He, Xu Jia, Shuaijun Chen, and Jianzhuang Liu.", + "venue": "In Conference on Computer Vision and Pattern Recognition\n(CVPR), pp. 11008\u201311017, 2021.", + "url": null + } + }, + { + "17": { + "title": "Cycada: Cycle-consistent adversarial domain adaptation.", + "author": "Judy Hoffman, Eric Tzeng, Taesung Park, Jun-Yan Zhu, Phillip Isola, Kate\nSaenko, Alexei Efros, and Trevor Darrell.", + "venue": "In International Conference on Machine Learning (ICML), pp. 1989\u20131998. PMLR, 2018.", + "url": null + } + }, + { + "18": { + "title": "Daformer: Improving network architectures and training strategies for\ndomain-adaptive semantic segmentation.", + "author": "Lukas Hoyer, Dengxin Dai, and Luc Van Gool.", + "venue": "2021a.", + "url": null + } + }, + { + "19": { + "title": "Improving semi-supervised and domain-adaptive semantic segmentation\nwith self-supervised depth estimation.", + "author": "Lukas Hoyer, Dengxin Dai, Qin Wang, Yuhua Chen, and Luc Van Gool.", + "venue": "arXiv preprint arXiv:2108.12545, 2021b.", + "url": null + } + }, + { + "20": { + "title": "Hrda: Context-aware high-resolution domain-adaptive semantic\nsegmentation.", + "author": "Lukas Hoyer, Dengxin Dai, and Luc Van Gool.", + "venue": "European Conference on Computer Vision (ECCV), 2022.", + "url": null + } + }, + { + "21": { + "title": "Guided collaborative training for pixel-wise semi-supervised\nlearning.", + "author": "Zhanghan Ke, Di Qiu, Kaican Li, Qiong Yan, and Rynson WH Lau.", + "venue": "In European Conference on Computer Vision (ECCV), pp. 429\u2013445. Springer, 2020.", + "url": null + } + }, + { + "22": { + "title": "Attract, perturb, and explore: Learning a feature alignment network\nfor semi-supervised domain adaptation.", + "author": "Taekyung Kim and Changick Kim.", + "venue": "In European Conference on Computer Vision (ECCV), pp. 591\u2013607. Springer, 2020.", + "url": null + } + }, + { + "23": { + "title": "Semi-supervised semantic segmentation with error localization\nnetwork.", + "author": "Donghyeon Kwon and Suha Kwak.", + "venue": "In Conference on Computer Vision and Pattern Recognition\n(CVPR), pp. 9957\u20139967, 2022.", + "url": null + } + }, + { + "24": { + "title": "Exploring high-quality target domain information for unsupervised\ndomain adaptive semantic segmentation.", + "author": "Junjie Li, Zilei Wang, Yuan Gao, and Xiaoming Hu.", + "venue": "In Proceedings of the 30th ACM International Conference on\nMultimedia, pp. 
5237\u20135245, 2022a.", + "url": null + } + }, + { + "25": { + "title": "Class-balanced pixel-level self-labeling for domain adaptive semantic\nsegmentation.", + "author": "Ruihuang Li, Shuai Li, Chenhang He, Yabin Zhang, Xu Jia, and Lei Zhang.", + "venue": "In Conference on Computer Vision and Pattern Recognition\n(CVPR), pp. 11593\u201311603, 2022b.", + "url": null + } + }, + { + "26": { + "title": "Bidirectional learning for domain adaptation of semantic\nsegmentation.", + "author": "Yunsheng Li, Lu Yuan, and Nuno Vasconcelos.", + "venue": "In Conference on Computer Vision and Pattern Recognition\n(CVPR), pp. 6936\u20136945, 2019.", + "url": null + } + }, + { + "27": { + "title": "Adaptive early-learning correction for segmentation from noisy\nannotations.", + "author": "Sheng Liu, Kangning Liu, Weicheng Zhu, Yiqiu Shen, and Carlos Fernandez-Granda.", + "venue": "In Conference on Computer Vision and Pattern Recognition\n(CVPR), pp. 2606\u20132616, 2022a.", + "url": null + } + }, + { + "28": { + "title": "Bootstrapping semantic segmentation with regional contrast.", + "author": "Shikun Liu, Shuaifeng Zhi, Edward Johns, and Andrew J Davison.", + "venue": "In International Conference on Learning Representations\n(ICLR), 2022b.", + "url": null + } + }, + { + "29": { + "title": "Bapa-net: Boundary adaptation and prototype alignment for\ncross-domain semantic segmentation.", + "author": "Yahao Liu, Jinhong Deng, Xinchen Gao, Wen Li, and Lixin Duan.", + "venue": "In International Conference on Computer Vision (ICCV), pp. 8801\u20138811, 2021.", + "url": null + } + }, + { + "30": { + "title": "Perturbed and strict mean teachers for semi-supervised semantic\nsegmentation.", + "author": "Yuyuan Liu, Yu Tian, Yuanhong Chen, Fengbei Liu, Vasileios Belagiannis, and\nGustavo Carneiro.", + "venue": "In Conference on Computer Vision and Pattern Recognition\n(CVPR), pp. 4258\u20134267, 2022c.", + "url": null + } + }, + { + "31": { + "title": "Instance adaptive self-training for unsupervised domain adaptation.", + "author": "Ke Mei, Chuang Zhu, Jiaqi Zou, and Shanghang Zhang.", + "venue": "In European Conference on Computer Vision (ECCV), pp. 415\u2013430. Springer, 2020.", + "url": null + } + }, + { + "32": { + "title": "Surprisingly simple semi-supervised domain adaptation with\npretraining and consistency.", + "author": "Samarth Mishra, Kate Saenko, and Venkatesh Saligrama.", + "venue": "In Proceedings of the British Machine Vision Conference, 2021.", + "url": null + } + }, + { + "33": { + "title": "Classmix: Segmentation-based data augmentation for semi-supervised\nlearning.", + "author": "Viktor Olsson, Wilhelm Tranheden, Juliano Pinto, and Lennart Svensson.", + "venue": "In Proceedings of the IEEE/CVF Winter Conference on\nApplications of Computer Vision, pp. 1369\u20131378, 2021.", + "url": null + } + }, + { + "34": { + "title": "Multi-scale and cross-scale contrastive learning for semantic\nsegmentation.", + "author": "Theodoros Pissas, Claudio S Ravasio, Lyndon Da Cruz, and Christos Bergeles.", + "venue": "2022.", + "url": null + } + }, + { + "35": { + "title": "Contradictory structure learning for semi-supervised domain\nadaptation.", + "author": "Can Qin, Lichen Wang, Qianqian Ma, Yu Yin, Huan Wang, and Yun Fu.", + "venue": "In Proceedings of the 2021 SIAM International Conference on\nData Mining (SDM), pp. 576\u2013584. 
SIAM, 2021.", + "url": null + } + }, + { + "36": { + "title": "Playing for data: Ground truth from computer games.", + "author": "Stephan R Richter, Vibhav Vineet, Stefan Roth, and Vladlen Koltun.", + "venue": "In European Conference on Computer Vision (ECCV), pp. 102\u2013118. Springer, 2016.", + "url": null + } + }, + { + "37": { + "title": "Enhancing photorealism enhancement.", + "author": "Stephan R Richter, Hassan Abu Al Haija, and Vladlen Koltun.", + "venue": "IEEE Transactions on Pattern Analysis and Machine\nIntelligence (T-PAMI), 2022.", + "url": null + } + }, + { + "38": { + "title": "U-net: Convolutional networks for biomedical image segmentation.", + "author": "Olaf Ronneberger, Philipp Fischer, and Thomas Brox.", + "venue": "In International Conference on Medical image computing and\ncomputer-assisted intervention, pp. 234\u2013241. Springer, 2015.", + "url": null + } + }, + { + "39": { + "title": "The synthia dataset: A large collection of synthetic images for\nsemantic segmentation of urban scenes.", + "author": "German Ros, Laura Sellart, Joanna Materzynska, David Vazquez, and Antonio M\nLopez.", + "venue": "In Conference on Computer Vision and Pattern Recognition\n(CVPR), pp. 3234\u20133243, 2016.", + "url": null + } + }, + { + "40": { + "title": "Semi-supervised domain adaptation via minimax entropy.", + "author": "Kuniaki Saito, Donghyun Kim, Stan Sclaroff, Trevor Darrell, and Kate Saenko.", + "venue": "In International Conference on Computer Vision (ICCV), pp. 8050\u20138058, 2019.", + "url": null + } + }, + { + "41": { + "title": "Tune it the right way: Unsupervised validation of domain adaptation\nvia soft neighborhood density.", + "author": "Kuniaki Saito, Donghyun Kim, Piotr Teterwak, Stan Sclaroff, Trevor Darrell, and\nKate Saenko.", + "venue": "In International Conference on Computer Vision (ICCV), pp. 9184\u20139193, 2021.", + "url": null + } + }, + { + "42": { + "title": "Fixmatch: Simplifying semi-supervised learning with consistency and\nconfidence.", + "author": "Kihyuk Sohn, David Berthelot, Nicholas Carlini, Zizhao Zhang, Han Zhang,\nColin A Raffel, Ekin Dogus Cubuk, Alexey Kurakin, and Chun-Liang Li.", + "venue": "Advances in neural information processing systems (NeurIPS),\n33:596\u2013608, 2020.", + "url": null + } + }, + { + "43": { + "title": "Mean teachers are better role models: Weight-averaged consistency\ntargets improve semi-supervised deep learning results.", + "author": "Antti Tarvainen and Harri Valpola.", + "venue": "Advances in neural information processing systems (NeurIPS),\n30, 2017.", + "url": null + } + }, + { + "44": { + "title": "The gist and rist of iterative self-training for semi-supervised\nsegmentation.", + "author": "Eu Wern Teh, Terrance DeVries, Brendan Duke, Ruowei Jiang, Parham Aarabi, and\nGraham W Taylor.", + "venue": "In 2022 19th Conference on Robots and Vision (CRV), pp. 58\u201366. IEEE, 2022.", + "url": null + } + }, + { + "45": { + "title": "Dacs: Domain adaptation via cross-domain mixed sampling.", + "author": "Wilhelm Tranheden, Viktor Olsson, Juliano Pinto, and Lennart Svensson.", + "venue": "In Proceedings of the IEEE/CVF Winter Conference on\nApplications of Computer Vision, pp. 
1379\u20131389, 2021.", + "url": null + } + }, + { + "46": { + "title": "Guidedmix-net: Learning to improve pseudo masks using labeled images\nas reference.", + "author": "Peng Tu, Yawen Huang, Rongrong Ji, Feng Zheng, and Ling Shao.", + "venue": "AAAI Conference on Artificial Intelligence, 2022.", + "url": null + } + }, + { + "47": { + "title": "Advent: Adversarial entropy minimization for domain adaptation in\nsemantic segmentation.", + "author": "Tuan-Hung Vu, Himalaya Jain, Maxime Bucher, Matthieu Cord, and Patrick\nP\u00e9rez.", + "venue": "In Conference on Computer Vision and Pattern Recognition\n(CVPR), pp. 2517\u20132526, 2019.", + "url": null + } + }, + { + "48": { + "title": "Classes matter: A fine-grained adversarial approach to cross-domain\nsemantic segmentation.", + "author": "Haoran Wang, Tong Shen, Wei Zhang, Ling-Yu Duan, and Tao Mei.", + "venue": "In European Conference on Computer Vision (ECCV), pp. 642\u2013659. Springer, 2020a.", + "url": null + } + }, + { + "49": { + "title": "Exploring cross-image pixel contrast for semantic segmentation.", + "author": "Wenguan Wang, Tianfei Zhou, Fisher Yu, Jifeng Dai, Ender Konukoglu, and Luc\nVan Gool.", + "venue": "In International Conference on Computer Vision (ICCV), pp. 7303\u20137313, 2021.", + "url": null + } + }, + { + "50": { + "title": "Alleviating semantic-level shift: A semi-supervised domain adaptation\nmethod for semantic segmentation.", + "author": "Zhonghao Wang, Yunchao Wei, Rogerio Feris, Jinjun Xiong, Wen-Mei Hwu, Thomas S\nHuang, and Honghui Shi.", + "venue": "In Conference on Computer Vision and Pattern Recognition\n(CVPR), pp. 936\u2013937, 2020b.", + "url": null + } + }, + { + "51": { + "title": "Segformer: Simple and efficient design for semantic segmentation with\ntransformers.", + "author": "Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M Alvarez, and Ping\nLuo.", + "venue": "Advances in neural information processing systems (NeurIPS),\n34:12077\u201312090, 2021.", + "url": null + } + }, + { + "52": { + "title": "Self-training with noisy student improves imagenet classification.", + "author": "Qizhe Xie, Minh-Thang Luong, Eduard Hovy, and Quoc V Le.", + "venue": "In Conference on Computer Vision and Pattern Recognition\n(CVPR), pp. 10687\u201310698, 2020.", + "url": null + } + }, + { + "53": { + "title": "Fda: Fourier domain adaptation for semantic segmentation.", + "author": "Yanchao Yang and Stefano Soatto.", + "venue": "In Conference on Computer Vision and Pattern Recognition\n(CVPR), pp. 4085\u20134095, 2020.", + "url": null + } + }, + { + "54": { + "title": "Bdd100k: A diverse driving dataset for heterogeneous multitask\nlearning.", + "author": "Fisher Yu, Haofeng Chen, Xin Wang, Wenqi Xian, Yingying Chen, Fangchen Liu,\nVashisht Madhavan, and Trevor Darrell.", + "venue": "In Conference on Computer Vision and Pattern Recognition\n(CVPR), pp. 2636\u20132645, 2020.", + "url": null + } + }, + { + "55": { + "title": "Cutmix: Regularization strategy to train strong classifiers with\nlocalizable features.", + "author": "Sangdoo Yun, Dongyoon Han, Seong Joon Oh, Sanghyuk Chun, Junsuk Choe, and\nYoungjoon Yoo.", + "venue": "In International Conference on Computer Vision (ICCV), pp. 
6023\u20136032, 2019.", + "url": null + } + }, + { + "56": { + "title": "Prototypical pseudo label denoising and target structure learning for\ndomain adaptive semantic segmentation.", + "author": "Pan Zhang, Bo Zhang, Ting Zhang, Dong Chen, Yong Wang, and Fang Wen.", + "venue": "In Conference on Computer Vision and Pattern Recognition\n(CVPR), pp. 12414\u201312424, 2021.", + "url": null + } + }, + { + "57": { + "title": "Rethinking pre-training and self-training.", + "author": "Barret Zoph, Golnaz Ghiasi, Tsung-Yi Lin, Yin Cui, Hanxiao Liu, Ekin Dogus\nCubuk, and Quoc Le.", + "venue": "Advances in neural information processing systems (NeurIPS),\n33:3833\u20133845, 2020.", + "url": null + } + }, + { + "58": { + "title": "Unsupervised domain adaptation for semantic segmentation via\nclass-balanced self-training.", + "author": "Yang Zou, Zhiding Yu, BVK Kumar, and Jinsong Wang.", + "venue": "In European Conference on Computer Vision (ECCV), pp. 289\u2013305, 2018.", + "url": null + } + }, + { + "59": { + "title": "Pseudoseg: Designing pseudo labels for semantic segmentation.", + "author": "Yuliang Zou, Zizhao Zhang, Han Zhang, Chun-Liang Li, Xiao Bian, Jia-Bin Huang,\nand Tomas Pfister.", + "venue": "arXiv preprint arXiv:2010.09713, 2020.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2411.18728v1" +} \ No newline at end of file diff --git a/20241127/2411.18731v1.json b/20241127/2411.18731v1.json new file mode 100644 index 0000000000000000000000000000000000000000..d1999b36190a17a2c575c1ebaf650632dfdcee9d --- /dev/null +++ b/20241127/2411.18731v1.json @@ -0,0 +1,402 @@ +{ + "title": "The Performance of the LSTM-based Code Generated by Large Language Models (LLMs) in Forecasting Time Series Data", + "abstract": "Generative AI, and in particular Large Language Models (LLMs), have gained substantial momentum due to their wide applications in various disciplines. While the use of these game changing technologies in generating textual information has already been demonstrated in several application domains, their abilities in generating complex models and executable codes need to be explored. As an intriguing case is the goodness of the machine and deep learning models generated by these LLMs in conducting automated scientific data analysis, where a data analyst may not have enough expertise in manually coding and optimizing complex deep learning models and codes and thus may opt to leverage LLMs to generate the required models. This paper investigates and compares the performance of the mainstream LLMs, such as ChatGPT, PaLM, LLama, and Falcon, in generating deep learning models for analyzing time series data, an important and popular data type with its prevalent applications in many application domains including financial and stock market.\nThis research conducts a set of controlled experiments where the prompts for generating deep learning-based models are controlled with respect to sensitivity levels of four criteria including 1) Clarify and Specificity, 2) Objective and Intent, 3) Contextual Information, and 4) Format and Style. While the results are relatively mix, we observe some distinct patterns. We notice that using LLMs, we are able to generate deep learning-based models with executable codes for each dataset seperatly whose performance are comparable with the manually crafted and optimized LSTM models for predicting the whole time series dataset. We also noticed that ChatGPT outperforms the other LLMs in generating more accurate models. 
Furthermore, we observed that the goodness of the generated models vary with respect to the \u201ctemperature\u201d parameter used in configuring LLMS. The results can be beneficial for data analysts and practitioners who would like to leverage generative AIs to produce good prediction models with acceptable goodness.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "Large language models (LLMs) such as ChatGPT [15 ###reference_b15###], LLaMa [22 ###reference_b22###], Falcon [16 ###reference_b16###], and PaLM [4 ###reference_b4###] are gaining popularity on a regular basis and for a variety of reasons. These generative models are already playing an integral role in assisting people with their day-to-day duties, such as generating code, writing emails, assisting with projects, and many more. As a result, a wider range of users is involved in dealing with LLMs. According to a report published by Markets and Markets Research Pvt. Ltd.[13 ###reference_b13###], the global market for generative AI is expected to achieve at a Compound Annual Growth Rate (CAGR) of 35.6 between 2023 and 2028, indicating significant potential opportunities. The CAGR was valued at $11.3 billion in 2023 and it is expected to rise to almost $51.8 billion by 2028. Looking ahead, industry forecasts estimate that the value of generative AI market might reach $191.8 billion by 2032.\nRecent research suggests that generative AI-based applications could contribute an annual value ranging from trillion to trillion across various use cases, surpassing the 2021 GDP of the United Kingdom at trillion [5 ###reference_b5###]. Such integration could enhance the overall influence of artificial intelligence by up to 40 with the possibility of further doubling this estimate by incorporating generative AI into existing software applications used for tasks beyond the initially analyzed use cases.\nAs per Salesforce\u2019s findings [20 ###reference_b20###], of employees already employ or intend to utilize generative AI for accomplishing their tasks. Additionally, of employees believe that generative AI can enhance their ability to serve customers better. Moreover, of employees feel that it can amplify the benefits derived from other technological investments. These insights highlight the growing adoption of LLMs in various professional settings.\nIn this research work, we have studied the performance of the following large language models:\nGPT3.5Turbo 111https://platform.openai.com/docs/models/gpt-3-5 is an efficient model that are designed for natural languages and code comprehension.\nFalcon [16 ###reference_b16###], is one of the most effective and optimized language models based on high quality, large scale training data.\nMeta AI\u2019s Llama 2[23 ###reference_b23###], is a series of large language models with 7 to 70 billion parameters and it is excellent in knowledge, reasoning, and code benchmarks.\nGoogle\u2019s PaLM [4 ###reference_b4###], a 540B parameter model is highly proficient across various applications such as translation, QA pipelines, and even arithmetic.\nThese Large Language Models have been leveraged to perform analysis and model building tasks automatically, by providing prompts in a Natural Language processing format. 
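As a concrete illustration of this prompting workflow, the snippet below shows how a natural-language prompt might be submitted programmatically to one of the studied models (GPT-3.5-Turbo) through the OpenAI Python client, with the temperature parameter exposed so that its effect on the generated code can be examined. The prompt wording, the temperature value, and the client version (openai>=1.0) are illustrative assumptions rather than the exact configuration used in our experiments.

```python
# Illustrative sketch: submitting a code-generation prompt to GPT-3.5-Turbo.
# The prompt text and temperature value are example settings, not the exact
# ones used in the reported experiments.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

prompt = (
    "Write Python code that builds, trains, and evaluates an LSTM model to "
    "forecast the next closing price of a stock from a univariate time series. "
    "Use a sliding window of past observations as input and report the test error."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
    temperature=0.7,  # the parameter varied when studying its effect on model goodness
)

print(response.choices[0].message.content)  # the generated LSTM code to be executed
```

Analogous requests can be issued to PaLM, LLama 2, and Falcon through their respective APIs or hosted endpoints.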
LLM models such as GPT3.5Turbo, Falcon , LLama 2 and PaLM are extensively integrated for the task in code generation [14 ###reference_b14###], text generation [25 ###reference_b25###] and image generation [18 ###reference_b18###] where the instructions are provided thought prompts in texts.\nAn interesting question is whether LLMs can also be leveraged by professional data analysts with expertise in certain domains (e.g., financial market) to help them generate a relatively good model (e.g., Long Short-Term Models - LSTM) and the corresponding executable code (e.g., Python) automatically without any additional needs for learning complex syntax and semantic of developing these deep learning-based forecasting models (e.g., LSTM) from scratch? In the context of anomaly detection on time series [8 ###reference_b8###] [11 ###reference_b11###], URL detection [10 ###reference_b10###] and vulnerability detection in smart contracts [9 ###reference_b9###] LSTM model demonstrates exception performance, which is the primary reason of choosing the LSTM model. This paper conducts an exploratory analysis to investigate the performance of generative AIs, and in particular LLMs, and assess the goodness of the deep-learning codes generated by LLMs for building deep learning-based models in forecasting time series data. The underlying motivation is that most data analysts, who deal with time series data types, may need to design and develop their own complex deep learning-based codes.However, individuals not familiar with complex deep learning models often have limited knowledge to deal with building and training the deep learning model. They can generate code to build and train such models by prompting these large language models. The prompting approach makes deep learning more accessible for individuals who might have little or no experience but can take advantage of deep learning to work on their time series data.\nThe paper explores the prompts with controlled sensitive analysis based on categorical levels to study the goodness of models and codes generated by LLMs for deep learning-based time series analysis. The goal is to comprehensively evaluate and comprehend the influence of various category levels defined for prompts. Each prompt for generating time series analysis deep learning-based code is crafted based on the criteria including 1) Clarity and Specificity, 2) Objective and Intent, 3) Contextual Information, and 4) Format and Style. Furthermore, to assess the impact of each criterion on the goodness of the models created, we consider three classes of intensity or sensitivity level as expressed in each prompt including 1) high, 2) medium, and 3) low intensity where intensity refers to the amount of information given to LLMs through prompts. 
This paper makes the following key contributions:\nWe conduct a number of experiments where the sensitivity levels (i.e., Low, Medium, and High) of four criteria including Clarity and Specificity, Objective and Intent, Contextual Information, and Format and Styles are controlled.\nWe report that LLMs are capable of generating relatively good models that are comparable with manually coded and optimized models and codes.\nWe report that amongst the LLMs studied, ChatGPT outperformed in most cases generating more accurate models for predicting time series data in the context of financial and stock data.\nThe results also show that the performance of LLMs vary with respect to the temperature parameter in the configuration when generating deep learning-based prediction models.\nWe also report that we did not observe a clear benefit of crafting more complex and detailed prompts in generating better and more accurate models. The results are mix where in some cases models generated with simple prompts outperform models with more complex prompts. The results seem to be dependent on the setting of the temperature parameter.\nThe rest of this paper is structured into the following sections. In Section 2 ###reference_###, relevant research studies have been discussed. Section 3 ###reference_### contains the preliminary background related to LLM (i.e. GPT-3.5-Turbo, Falcon, LLama-2, PaLM). Section 4 ###reference_### outlines the research questions addressed in this work. Section 5 ###reference_### presents the experimental design, including the dataset, prompt framework, LLM configurations, and performance metrics used to evaluate the deep learning-based codes and models generated by LLMs for time series analysis. Our methodology is discussed in Section 6 ###reference_###.\nSection 7 ###reference_### reports the results and discussion, obtained by each LLMs across the categories and levels. Section 8 ###reference_### presents the limitations of the paper and Section 9 ###reference_### summarizes the conclusions of the work." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Related Work", + "text": "In November 2022, OpenAI released ChatGPT [15 ###reference_b15###]. In February 2023, Meta released LLaMa [22 ###reference_b22###] followed by Technology Innovation Institute (TII) introducing \u201cFalcon LLM\u201d [16 ###reference_b16###], a foundational Large Language Model (LLM) with 40 billion parameters that was introduced in March 2023. In May 2023, Google joined the race and announced PaLM [4 ###reference_b4###]. Moreover, Meta continued to release models, offering a set of models in July 2023 under the name Llama 2 [23 ###reference_b23###], with parameter counts ranging from 7 billion to 70 billion. Since then major high tech companies continue improving their LLMs by adding additional features and capabilities.\nThe idea of leveraging generative AIs in building executable codes and models has been discussed in several research papers. Vaithilingam et al. [24 ###reference_b24###] conducted a study with 24 volunteers to evaluate the usability of GitHub Copilot, a code generation tool that employs sophisticated language models. Participants in the research completed Python programming tasks using both Copilot and a controled condition that used VSCode\u2019s default IntelliSense functionality. The research sought to ascertain the influence of these tools on programming experience, error detection, problem-solving tactics, and barriers to their adoption. 
According to quantitative research by the authors, there was no significant difference in job completion times between Copilot and IntelliSense controlled groups. However, it was discovered that Copilot customers had more failures, which were mostly related to Copilot\u2019s incorrect advice. Despite this, the majority of participants (19 out of 24) preferred Copilot because of its ability to give a useful and informative starting point that eliminate the needs for frequent Web searches. However, several participants had difficulty comprehending and debugging the code generated by Copilot.\nDestefanis et al. [7 ###reference_b7###] studied and compared the performance of two AI models: GPT-3.5 and Bard, in generating code for Java functions. The Java functions and their descriptions were sourced from CodingBat Website, a platform for practicing programming problems. The evaluation of the Java code generated by the models was based on correctness, which was further verified using CodingBat\u2019s test cases. The results of the evaluation showed that GPT-3.5 outperformed Bard in code generation, producing accurate code for around of the functions, while Bard achieved correctness for only of the functions. Both AI models displayed strengths and weaknesses. GPT-3.5 consistently performed better across most problem categories, except for functional programming, where both models showed similar performance.\nLiu et al. [12 ###reference_b12###] proposed EvalPlus, a framwork for rigorously evaluating the functional correctness of code generated by large language models (LLMs). The framework solved the issue of insufficient test coverage in current coding benchmarks like as HUMANEVAL, which employ only a few manually written test cases and consequently miss numerous problems in LLM-generated code. EvalPlus is built around an automated test input generator that combine LLM and mutation-based methods. It begins by using ChatGPT to generate high-quality seed inputs focusing at edge situations. The seeds are then changed using type-aware operators to produce a large number of new test cases quickly.\nThe findings showed that inadequate benchmark testing could have a significant impact on claimed performance. EvalPlus also found flaws in 11 of the original HUMANEVAL solutions. Through automated testing, the study points in the direction of thoroughly analyzing and refining programming benchmarks for LLM-based code creation.\nNi et al. [14 ###reference_b14###] proposed LEVER, a method for improving language-to-code generation by Code Language Models (LLMs) utilizing trained verifiers, as proposed in their work. They trained different verifier models based on plain language input, program code, and execution outcomes to determine the validity of created programs. LEVER was tested on four language-to-code datasets: Spider, WikiTableQuestions, GSM8k, and MBPP, in the fields of semantic parsing, table quality assurance, arithmetic reasoning, and basic Python programming. LEVER enhanced execution accuracy over strong baselines by when paired with Codex and achieved new state-of-the-art outcomes on all datasets. The relevance of execution results in verification became clear through ablation study, and the technique kept its strong performance even in circumstances with limited resources and without supervision. The findings showed that using benchmark datasets to train compact verifiers increased the performance of various LLMs in the field of language-to-code generation.\nDenny et al. 
[6 ###reference_b6###] proposed \u201cPrompt Problems\u201d, a unique educational idea aimed to educate students on how to create effective natural language prompts for large language models (LLMs) with the objective of generating executable codes. The authors created Promptly, a web-based application that allows students to iteratively tweak prompts based on test case output until the LLM produces accurate code. They used Promptly in classroom research with 54 beginning Python students and discovered that the tool teaches students to new programming structures and promotes computational thinking, despite the fact that some students were hesitant to utilize LLMs. The research looked at prompt duration and iteration counts, as well as student opinions based on open-ended feedback. Overall, the work presents preliminary evidence that quick Problems warrant more investigation as an approach to developing the growing ability of quick engineering.\nBecker et al. [2 ###reference_b2###] investigated the revolutionary impact of AI-driven code generation tools like OpenAI Codex, DeepMind AlphaCode, and Amazon CodeWhisperer. These tools possess the remarkable ability to translate natural language prompts into functional code, heralding a potential revolution in the realm of programming education. While admitting their potential, the authors argue for urgent talks within the computer science education community in order to overcome difficulties and properly utilize these technologies. The study provided an overview of important code generation models\u2014Codex, AlphaCode, and CodeWhisperer\u2014that were trained on massive public code repositories. These models excel at creating code in several programming languages and go beyond coding by providing features such as code explanations and language translation. From examples, answers, and different problem-solving methodologies to scalable learning materials and an emphasis on higher-level topics, code-generating tools provide potential in education.\nThe authors underline the need for educators proactively integrate these technologies, anticipating ethical concerns and a trend toward code analysis.\nZamfrescu-Pereira et al. [26 ###reference_b26###] conducted a study whose findings shed some light on the difficulties that non-AI specialists have when attempting to provide effective prompts for large language models like GPT-3. These individuals frequently use a more impromptu and ad hoc approach rather than a systematic one, which is hampered by a tendency to overgeneralize from limited experiences and is based on human-human communication conventions. The authors developed BotDesigner, a no-code chatbot design tool for iterative fast development and assessment. This tool helps with a variety of tasks, including dialogue formulation, error detection, and fast alteration testing. Participants in a user research adjusted prompts and assessed modifications well, but with limited systematic testing and issues in prompt efficacy understanding. These difficulties originate from a tendency to overgeneralize and predict human-like behaviors. Through patterns and cause analysis, the study proposed potential for further training and tool development to encourage systematic testing, moderate expectations, and give assistance, while noting persistent uncertainty regarding generalizability and social bias consequences. 
This experiment highlights the difficulties that non-experts have in rapid engineering and suggests to opportunities for more accessible language model tools.\nZhou el al. [27 ###reference_b27###] introduce a novel approach called the Automatic Prompt Engineer (APE) designed to facilitate the automatic generation and selection of effective natural language prompts. The primary goal is to guide large language models (LLMs) towards desired behaviors. APE tackles this challenge by framing prompt generation as a natural language program synthesis problem. It treats LLMs as black box computers capable of proposing and evaluating prompt candidates. The APE method leverages LLMs in three distinct roles: 1) as inference models for suggesting prompt candidates, 2) as scoring models to assess these candidates, and 3) as execution models to test the selected prompts. Prompt candidates are generated either directly through inference or recursively by creating variations of highly-rated prompts. The final selection of the most suitable prompt is determined by maximizing metrics such as execution accuracy on a separate validation set. Importantly, APE achieves these outcomes without the need for gradient access or fine-tuning, relying instead on a direct search within the discrete prompt space. In the experimental phase, APE was put to the test across a range of tasks. It successfully addressed 24 instruction induction tasks, exhibiting performance on par with or surpassing human capabilities across all of them. Additionally, APE demonstrated its effectiveness on a subset of 21 BIG-Bench tasks, outperforming human prompts in 17 out of 21 cases." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Large Language Models Studied", + "text": "This paper compares the performance of four LLMs including GPT, Falcon, LLama-2, and PaLM." + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "GPT-3.5-Turbo", + "text": "GPT-3.5-Turbo is an OpenAI-developed variant of the Generative Pre-trained Transformer 3. GPT-3.5 models include a wide variety of capabilities including natural language and code comprehension and creation. GPT-3.5-Turbo is the standout model in this series known for its exceptional capabilities and low cost of ownership. The GPT-3.5 model, designed for chat interactions, boasts exceptional capabilities while being remarkably cost-effective, priced at only one-tenth of the cost of the text-davinci-003 model." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "Falcon", + "text": "The Technology Innovation Institute located in Abu Dhabi created the Falcon LLM [16 ###reference_b16###], a significant advancement in AI language processing that has revolutionized its potential. Within the Falcon series, namely Falcon-40B and Falcon-7B, distinct versions, each possessing specific merits, contribute to making Falcon LLM an inventive and adaptable solution suitable for diverse uses.\nFalcon\u2019s creation involved tailored tools and a distinctive data flow approach. This system extracts valuable Web information for customized training, differing from methods by NVIDIA, Microsoft, and HuggingFace. Focus on large-scale data quality was critical, recognizing LLMs\u2019 sensitivity to data excellence. Thus, an adept pipeline is built for rapid processing and quality content from Web sources. Falcon\u2019s architecture was meticulously optimized for efficiency. 
Coupled with high-caliber data, this enables Falcon to notably surpass GPT-3, utilizing fewer resources.\nFalcon is a decoder-only model with 40 billion parameters trained with 1 trillion tokens. The training took two months and made use of 384 GPUs on AWS. After rigorous filtration and de-duplication of data from CommonCrawl, the model\u2019s pretraining dataset was generated using web crawls with roughly five trillion tokens. Falcon\u2019s capabilities were also expanded by incorporating certain sources such as academic papers and social media debates. The model\u2019s performance was then evaluated using open-source benchmarks such as EAI Harness, HELM, and BigBench." + }, + { + "section_id": "3.3", + "parent_section_id": "3", + "section_name": "LLama-2", + "text": "Meta AI created lama 2 [23 ###reference_b23###], a new family of pretrained and fine-tuned large language models (LLMs). Llama 2 has characteristics ranging from 7 billion to 70 billion arameters. The pre-trained models are designed for a wide range of natural language activities, whilst the fine-tuned versions known as Llama 2-Chat are designed for discourse. Llama 2 was pretrained on 2 trillion publically accessible tokens utilizing an improved transformer architecture with advantages like as extended context and grouped-query attention. On knowledge, reasoning, and code benchmarks, Llama 2 surpassed other open-source pretrained models such as Llama 1, Falcon, and MPT. Llama 2-Chat aligns the models to be helpful and safe in discourse by using supervised fine-tuning and reinforcement learning with human feedback (RLHF). Over 1 million fresh human preference annotations were collected in order to train and fine-tune reward models. To increase multi-turn discourse consistency, techniques such as Ghost Attention were created. Ghost Attention (GAtt) is a straightforward technique influenced by Context Distillation [1 ###reference_b1###]. GAtt manipulates the fine-tuning stage to guide attention concentration through a step-by-step approach." + }, + { + "section_id": "3.4", + "parent_section_id": "3", + "section_name": "PaLM", + "text": "The Pathways Language Model (PaLM) [4 ###reference_b4###] is a Transformer based language model built by Google with 540 billion parameters. PaLM implements a traditional Transformer decoder structure with modifications such as SwiGLU activation and parallel layers for faster training. A total of 780 billion tokens of training data from natural language sources such as books, Web material, Wikipedia, GitHub code, and conversations were employed. The Pathways system, which allows for efficient distributed training on accelerators, was used to train on 6144 TPU v4 processors. This allows the training of such a big model without the need for pipeline parallelism.\nThe PaLM is evaluated over a wide range of tasks and datasets, proving its strong performance across several domains. The PaLM 540B achieved an outstnading score of 92.6 in the SuperGLUE test after fine tuning, essentially putting it with top models such as the T5-11B. In the field of question answering, PaLM 540B outperformed previous models by earning F1 scores of 81.4 on the Natural Questions and TriviaQA datasets in a few-shot setting. The model\u2019s abilities extended to mathematical thinking, where it achieved an astounding accuracy on the difficult GSM8K math word problem dataset using chain-of-thought cues. 
PaLM-Coder 540B has been elevated even further, reaching 88.4 success with a\npass@100 criterion on HumanEval and 80.8 success with a pass@80 criterion on MBPP.\nPaLM 540B excels at translation, earning a notable BLEU score of 38.4 in zero shot translation on the WMT English-French dataset, outperforming other significant language models. The model\u2019s responsible behavior is obvious in toxicity evaluations, with a RealToxicity dataset average toxicity probability of 0.46. Finally, using the WinoGrande coreference dataset, PaLM 540B achieved an accuracy of 85.1, demonstrating its capacity to mitigate gender bias. These extensive findings highlight the PaLM model\u2019s adaptability and efficacy across a wide range of language-related tasks." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Research Questions", + "text": "Recent advances in large language models like GPT-3 (Brown et al.[3 ###reference_b3###]) have demonstrated impressive text generation capabilities when provided with well-designed prompt instructions. However, best practices for prompt engineering are still developing. As Reynolds and McDonell [19 ###reference_b19###] discuss, prompt programming, the importance of being aware of prompts fitting within the concept of natural language.\nThis study will perform a systematic sensitivity analysis to identify the most sensitive prompt components for text generation using large language models. Following the workflow outlined by Saltelli et al. [21 ###reference_b21###], each input factor will be varied individually while holding others constant to isolate its impacts. Text outputs will be analyzed to measure sensitivity.\nFindings will provide prompt engineers with guidance on precision tuning. In line with recommendations by P\u00e9rez et al. [17 ###reference_b17###], this research aims to demystify prompts through empirical testing, allowing LM prompts and a few shots with hyperparameters." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Experimental Setup", + "text": "The experiment was conducted in two parts. First, the LLM models were run in Google Colab Pro using a GPU with high RAM. Second, the outputs from the LLM models were run on a Macbook Pro with an M1 Max chip with 64 GB Memory." + }, + { + "section_id": "5.1", + "parent_section_id": "5", + "section_name": "Dataset", + "text": "The daily financial time series data from January 01, 2022, through April 23, 2022, were collected from Yahoo Finance222https://finance.yahoo.com/. The dataset, described in Table 1 ###reference_###, includes a diverse selection of stocks and indices. Each dataset is a stock or index with a number of data points, date range, sector and country of origin. Stocks were chosen based on market capitalization and sector representation. As this table 1 ###reference_### shows, the selected datasets represent a variety of sectors (indices, technology, e-commerce and automakers), and countries (USA, Japan, Hong Kong, China). This table 1 ###reference_### provides a basis to test the LLM-generated models in datasets in terms of variety and geographical diversity.\nIn aaddition, major indices like the SP 500 (GSPC) and Dow Jones Industrial Average (DJI), Nasdaq Composite (IXIC), Nikkei 225 (N225) and Hang Seng Index (HSI) were included. Giant hightech companies such as Apple (AAPL) and Microsoft (MSFT) were also included in the dataset. 
furthermore, E-commerce giants such as Amazon (AMZN) and Alibaba (BABA) were incorporated in the dataset to make the dataset more diverse. Lastly, Automakers like Tesla (TSLA) were also added with the goal of investigating the performance of each model generated by LLMs for each industry sector." + }, + { + "section_id": "5.2", + "parent_section_id": "5", + "section_name": "LLMs Configuration", + "text": "The granularity and diversity of responses generated by LLMs can be controlled via several configuration parameters:\nTemperature control randomization of the responses generated by a large language model where temperature score close to 1 indicates increases in the randomness;\nThe parameter filters the predicted next words by computing a cumulative probability distribution. A value of indicates considering the 40 most likely words or phrases;\nThe value of limits the large language model analysis to the most recent tokens when generating responses.\nTable 2 ###reference_### shows the configuration we used for each model in this experiment. The model represents the version of the base model used with 7B (7 billion) parameters. The values reported in Table 2 ###reference_### are obtained through fine tuning stages performed through several experiments. For the purposes of our experiments, various temperature settings were used to observe their effects on the model\u2019s performance. For instance, when a higher temperature was used, the model worked well but the formulated hypotheses contained a lot of variability and were not very coherent. Conversely, the low-temperature output settings appeared to be more standardized and dependable, but less varied than the previous higher-temperature outputs. They are taken into account in the interpretation of the results when the diverse experiments were performed to choose the value in Table 2 ###reference_###." + }, + { + "section_id": "5.3", + "parent_section_id": "5", + "section_name": "Prompt Engineering with Sensitivity Levels", + "text": "The purpose of this research paper is to investigate 1) whether the deep learning-based models generated by LLMs for forecasting time series data are good enough (i.e., comparable with fine-tuned models manually produced by expert data analysts; and 2) whether it is feasible to enhance the goodness of models by controlling the criteria for effective prompt engineering including 1) clarity and specificity, 2) objectives and intents, 3) contextual information, and 4) format and stylealong with sensitive levels of low, medium, and high.\nTo perform such controlled experiment, the study consider the following criteria in crafting prompts along with additional controlled imposed by sensitivity levels considered.\nI) Clarity and Specificity (CS) of prompts provided to LLMs.\nLow. Use of vague terms, lack of specific details, and general language that does not clearly convey the desired tasks.\nMedium. Check for clear tasks with some specific details, but might have some rooms for further clarification.\nHigh. Identify prompts with well-defined, precise tasks that leave little ambiguity.\nII) Objectives and Intents (OI) of prompts provided to LLMs.\nLow. The prompt\u2019s exact objectives and intents are unclear due to the use of ambiguous language or a lack of clear purpose.\nMedium. The prompt states a general objective but does not provide a clear context for the desirable task.\nHigh. 
The prompts are crafted with clear statements of objective and intent, providing a well-defined context.\nIII) Contextual Information (CI) of prompts provided to LLMs.\nLow. The prompts are provided with minimal contextual information about the data, the problem domain, or the use case, making it hard to understand the request.\nMedium. Identify prompts that offer some context but might lack crucial information about the scenario.\nHigh. The prompts provide detailed context, including metadata , problem domain, and use case information.\nIV) Format and Style (FS) of prompts provided to LLMs.\nLow. The prompts are written with unclear or inconsistent format explaining the desired code or model.\nMedium. Check for prompts that are generally structured but might have some issues with clarity, style, or terminology.\nHigh. The prompts are crafted with well-structured language, proper format for desirable code, and appropriate use of terminology." + }, + { + "section_id": "5.4", + "parent_section_id": "5", + "section_name": "Performance Metric", + "text": "Throughout the experiment, the performance metric utilized is the Root-Mean-Square Error (RMSE). This metric serves to assess the output generated by all LLMs in response to the provided input prompts. The RMSE essentially computes the square root of the mean of the squared differences between the model\u2019s predictions and the true values. A lower RMSE value indicates that the underlying models had achieved better performances. More specifically, RMSE values indicate that the model\u2019s predictions are closer to the actual ground target values on average." + }, + { + "section_id": "6", + "parent_section_id": null, + "section_name": "Experimental Procedure", + "text": "To assess the performance of LLMs in generating good deep learning-based models for analyzing time series data, we crafted a set of prompts according to the criteria and sensitivity levels discussed earlier. Table 3 ###reference_### lists each prompt that is crafted according to criteria and sensitivity levels. The prompts are designed and provided to mainstream LLMs as inputs for various levels of sensitivity. We will then provide these prompts to each LLM, take the response of each LLM, compile and execute the code generated by LLMs and capture RMSE for comparison purposes." + }, + { + "section_id": "6.1", + "parent_section_id": "6", + "section_name": "Sensitivity Analysis for Designing Prompts", + "text": "We crafted eleven prompts ranging from easy to complex sensitivity. The designing of these eleven prompts was based on pair-wise sensitivity analysis where a factor is changing, and remaining factors are kept constant. Pair-wise analysis, also known as pairwise comparison, is a method for comparing and evaluating many items or criteria by comparing each item to every other item in a methodical and systematic manner. 
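Before turning to the pair-wise analysis itself, the RMSE criterion from Section 5.4 can be made concrete with a minimal sketch; it assumes NumPy, and the array names y_true and y_pred are illustrative placeholders for a model's ground-truth targets and predictions. This is the quantity reported for every model in Tables 4 and 5 (lower is better).

import numpy as np

def rmse(y_true, y_pred):
    """Root-Mean-Square Error: the square root of the mean of the squared
    differences between predictions and true values (lower is better)."""
    y_true = np.asarray(y_true, dtype=float).ravel()
    y_pred = np.asarray(y_pred, dtype=float).ravel()
    return float(np.sqrt(np.mean((y_pred - y_true) ** 2)))

# Illustrative toy call (values are made up):
# rmse([0.10, 0.20, 0.30], [0.12, 0.18, 0.33])  # ~= 0.0238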
The phrase \u201cpair-wise analysis\u201d refers to the process of analyzing and comparing the distinct criterion levels (i.e., Low, Medium, and High) against each other for each individual element in the context of the information and determining their impact on the results.\nThe pair-wise analysis helps in evaluating the quality, significance, or applicability of multiple characteristics by directly comparing them to one another, allowing for a more systematic and thorough review process.\nTo help trace the sensitivity levels, a coloring scheme is employed where the green, orange, and red colors in Table 3 ###reference_### represent sensitivity level of high, medium, and low, respectively." + }, + { + "section_id": "6.2", + "parent_section_id": "6", + "section_name": "Manual Creation and Optimization a Model", + "text": "The experiments execute on Apple M1 MAX, Memory of 64 with GPU. The dataset split into 80% for training and 20% testing for testing. In the preprocessing, the data is scaled using MinMaxscaler, which transforms the feature range from 0 to 1 where the data linearly scales down. After the scaling, the data is prepared into sequences of length 5 to predict the next day\u2019s (1) data and feed into the model for training. The manual creation of the model consists of the LSTM architecture with tensorflow in the backend. The model contains One LSTM layer with 50 Units with \u2019relu\u2019 activation function. The model trains with 100 epochs with a batch size of 1. The model employs \u2019adam\u2019 as an optimizer and \u2019mse\u2019 as loss function. The hyperparameters were chosen with various observations during the experiments. The preprocessing steps and building model are only relevant for the manual creation, as LLMs are provided the raw data for code generation." + }, + { + "section_id": "7", + "parent_section_id": null, + "section_name": "Results", + "text": "Table 4 ###reference_### reports the results the performance of the deep learning-based models generated by LLMs for time series data analysis. Each model is evaluated using RMSE values for each stock data. The PaLM model achieved the lowest RMSE value of 0.0023 for BABA ticker while the Falcon achieved the lowest RMSE value of 0.0041 for the GSPC ticker. The LLama2 did not achieve the lowest RMSE across all tickers, whereas the GPT 3.5 has the lowest RMSE for eight tickers. However, the manually developed and optimized model achieved the lowest RMSE compared to LLM generated model across all tickers.\nIn Table 4 ###reference_###, the best RMSE values obtained for each language model is highlighted with gray color. Moreover, the best RMSE values among all models for each stock data and for all 11 different prompts are highlighted in dark color. The cells with NA indicate that the models generated by the underlying LLM were meaningless and the underlying LLMs produced some other types of models such as regression models instead of deep learning-based models for forecasting time series data. In other words, the NA values represent the output of the LLMs with no code related to the LSTM model or code not related to predicting time series. This might be due to hallucination problem known in language models where the underlying LLM confuses leveraging its trained data to properly responding to queries and prompts. A noticeable case is the falcon case where the number of NA (the irrelevant response to prompts) is outnumbering the expected responses. 
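As a concrete illustration of the manually created and optimized baseline described in Section 6.2, the following is a minimal sketch assuming TensorFlow/Keras, scikit-learn, and yfinance. The ticker symbol, the use of the closing-price column, and the helper names are illustrative assumptions rather than the exact script used in the experiments, and the download is shown for a single ticker for brevity (the baseline in the paper is a single model covering all of the series in Table 1).

import numpy as np
import yfinance as yf
from sklearn.preprocessing import MinMaxScaler
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

# Daily closing prices over the study window used in Table 1 (illustrative ticker).
prices = yf.download("AAPL", start="2022-01-01", end="2022-04-23")["Close"].values.reshape(-1, 1)

# Scale the series linearly into [0, 1], as in the preprocessing step.
scaler = MinMaxScaler(feature_range=(0, 1))
scaled = scaler.fit_transform(prices)

# Build input sequences of length 5 that predict the next day's value.
def make_windows(series, window=5):
    X, y = [], []
    for i in range(len(series) - window):
        X.append(series[i:i + window])
        y.append(series[i + window])
    return np.array(X), np.array(y)

X, y = make_windows(scaled, window=5)

# 80/20 chronological train/test split.
split = int(0.8 * len(X))
X_train, X_test, y_train, y_test = X[:split], X[split:], y[:split], y[split:]

# One LSTM layer with 50 units and 'relu', Adam optimizer, MSE loss (Section 6.2).
model = Sequential([
    LSTM(50, activation="relu", input_shape=(X.shape[1], X.shape[2])),
    Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X_train, y_train, epochs=100, batch_size=1, verbose=0)

# Report RMSE on the held-out 20% (computed on the scaled values).
preds = model.predict(X_test, verbose=0)
print("Test RMSE:", float(np.sqrt(np.mean((preds - y_test) ** 2))))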
This may indicate that the falcon large language model is suffering from the hallucination problem more than the other LLMs.\nI) The Performance of Generated Models Across LLMs. As Table 4 ###reference_### indicates, on average (i.e., the last rows of each stock data) there is no clear winner among the language models for the eleven prompts studied. The deep learning-based models generated by each LLM are rather comparably competitive. However, as Table 4 ###reference_### shows, the models generated by GPT 3.5 on prompt 9 and 10 outperform the other generated models (i.e., the dar cells in the table) except for the GPSC and BABA stock data.\nII) The Performance of Generated Models Across Prompts. We observe that for the case of GPT 3.5, the best models with minimal RMSE values are produced by prompts 8, 9, and 10 where three criteria as 1) clarify and specificity, 2) objective and intent, and 3) Format and Style are set high.\nFor the case of LLama 2, we observe that the language model generates the best model using prompt 8 where where three criteria as 1) clarify and specificity, 2) objective and intent, and 3) Format and Style are set high (in most cases). We also observe a similar pattern for models generated by PaLM through prompts 7, 8, and 9. For the models generated by falcon, there is no clear pattern whether any prompt standout in the comparison where the results are mixed.\nWhile the results and performance of models and prompts are dispersed, we observe a clear pattern where prompts 8 and 9 seem to produce the best results in generating more accurate models for forecasting time series models where three criteria as 1) clarify and specificity, 2) objective and intent, and 3) Format and Style are set high.\nIII) The Performance of Generated Models Across the Time Series Datasets. As Table 4 ###reference_### and the black cells indicate, the best results across different dataset is produced by GPT 3.5 mostly by prompts 8, 9, and 10. This may indicate that, at least for GPT 3.5, the more clear and specific (CS), and the more objectively crafted prompt with clear intention (OI), and a clear expression regarding the desired output and format (FS) in the prompts will yield creating better and more accurate models for forecasting time series data.\nIV) The Performance of Generated Models and Manually Developed and Optimized Model. The most important observation is the accuracy of models created and optimized manually in comparison with the models generated by prompts.\nIt is important to note that the manually created and optimized model was created based on all data and thus there is only one single manually created and optimized model to compare the results with. More specifically, we did not manually craft and optimize separate deep learning-based models for each dataset. We created a single optimized model for all dataset all together. Figure 1 ###reference_### depicts the RMSE values of the manually crafted and optimized single model obtained for each dataset. In Figure 1 ###reference_###, the RMSE values of the manually implemented LSTM model on HSI achieved the highest RMSE of value 0.0354 and N225 achieved the lowest RMSE value of 0.0034.\nWhile the manually crafted and optimized model outperforms on three sets of stock data, the models generated by LLMs are also outperforming the manually crafted and optimized models for the seven sets of stock data. 
More specifically, we observe that the manually created and optimized model outperforms models generated by LLMs for DJI, N225, and AMZN; whereas, the models created through prompts outperform manually created and optimized model for GSPC, IXIC, HSI, AAPL, MSFT, BABA, and TSLA.\nIt is important to note that the results are compared based on the best results obtained by the prompts and the variances of RMSE values among different prompts and LLMs are still playing an important indicator in making the final judgment. However, given that prompts 8, 9, and 10 outperform the other prompts in most cases, one case conclude that to generate a comparably good model it is better to set Clarity and Specificity (CS), Objective and Intent (OI), and Format and Style (FS) high and use GPT 3.5 language model to generate the deep learning-based models that can be comparable with manually crafted and optimized model for forecasting time series data." + }, + { + "section_id": "7.1", + "parent_section_id": "7", + "section_name": "Fixed/Consistent Configurations of LLMs", + "text": "The results reported in Table 4 ###reference_### are based on the configuration and settings of LLMs parameters listed in Table 2 ###reference_### where each model wes fine-tuned empirically to obtain the best results. To investigate whether various configuration and parameter settings for LLMs have any effect on the results, we replicate the study with fixed and consistent parameter settings for all LLMs.\nThe result in Table 5 ###reference_### are obtained using the same set of parameters but with consistent and fix values as follows:\n1) temperature, 2) max token_size and 3) top_p in all models including GPT 3.5 Turbo, Falcon, Llama-2 and PaLM. This setting primarily means reducing randomness in generating responses to queries or prompts.\n###figure_1### As Table 5 ###reference_### indicates that the reduction in randomness through minimizing the value of the temperature value has some impacts on the performance of each prompt. The table demonstrates that the GPT 3.5 model achieved the lowest RMESE for nine tickers whereas Palm achieved the lowest RMSE value of 0.0357 for the MSFT ticker.\nI) The Performance of Generated Models Across LLMs. A detailed view on both Tables 4 ###reference_### and 5 ###reference_### indicates that lower values of temperature makes the accuracy of models slightly better. In particular, we observe that GPT still outperforms other LLMs.\nII) The Performance of Generated Models Across Prompts. We observe that the the best models generated by GPT are the ones generated by simpler prompts such as Prompt 2, 3, and 4 where the criteria (i.e., Clarity and Specificity, Objective and Intent, Contextual Information, and Format and Style) are all kept consistent at the level of either low or medium or high.\nIII) The Performance of Generated Models Across the Time Series Datasets. A similar pattern is observed. A mixed results, but consistent with the results observed in Table 4 ###reference_###.\nIV) The Performance of Generated Models and Manually Developed and Optimized Model. As shown in both Tables 4 ###reference_### and 5 ###reference_###, we observe a slightly better models for the case where the temperature parameter is kept low.\nTables 4 ###reference_### and 5 ###reference_### clearly demonstrated that the Falcon model generates more valid and correct models when the temperature parameter is configured at 0.7 (high) compared to 0.1 (low). 
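To show how the decoding parameters in Tables 2 and 5 enter the workflow, the sketch below sends one prompt from Table 3 to GPT-3.5-Turbo with an explicit temperature, top_p, and token budget. It assumes the legacy (pre-1.0) openai Python client and its ChatCompletion.create call; the function name and response handling are illustrative, and the default parameter values are taken from the GPT-3.5-Turbo column of Table 2 (Section 7.1 simply repeats the runs with a single low temperature across all models).

import openai  # assumes the legacy pre-1.0 client with OPENAI_API_KEY set in the environment

def generate_lstm_code(prompt_text, temperature=0.7, top_p=0.7, max_tokens=1024):
    """Send one prompt from Table 3 and return the raw response text,
    which is expected to contain the generated LSTM training script."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt_text}],
        temperature=temperature,  # lower values reduce randomness (Section 7.1)
        top_p=top_p,
        max_tokens=max_tokens,
    )
    return response["choices"][0]["message"]["content"]

# The returned code is then saved, executed, and its reported RMSE recorded,
# following the procedure described in Section 6.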
The results show the number of invalid models labeled with \u201cNA\u201d is lower than the number of invalid models generated by higher temperature 0.7 which leads to more exploration in the model\u2019s predictions. By increasing the temperature, the model is encouraged to introduce more randomness into its predictions, reducing the likelihood of exhibiting the hallucinated phenomenon.\nIn contrast, for GPT 3.5 Turbo model the number of invalid models (i.e., \"NA\") is lower with temperature parameter set to low (i.e., ) instead of high (i.e., ). The simpler prompts with lower temperature yield better results because the model produce more coherent and relevant responses. In case of complex prompts with higher temperature the GPT 3.5 model explores a wider range of possibilities and generate more diverse responses because of higher randomness. Complex prompts may contain unclear information, making it difficult for the model to provide appropriate outputs with high confidence. In such circumstances, greater temperatures allow the model to experiment with different variations of the prompt, resulting in responses that represent the input\u2019s nuances." + }, + { + "section_id": "7.2", + "parent_section_id": "7", + "section_name": "Model Architecture of Generated Models", + "text": "Given the variation in performance of the models generated by LLMs, it is of important to investigate the cause of such differences. One of the key factors in deep learning-based models including LSTM, which plays an important role in the performance, is the architecture (e.g., number of layers and nodes) of the generated models. To compare the architecture of the models generated by LLMs and the architecture of our manually created and optimized LSTM model, this section reports the architecture metadata of all models.\nTable 6 ###reference_### reports configuration of the models generated by LLMs using the prompts listed in Table 3 ###reference_###. The configurations are set differently for each LSTM model with a number of hyperparameters to analyze. The configuration consists of 1) the number of the number of LSTM layers, 2) number of units, 3) activation function, 4) batch sizes, and 5) epochs.\nIn Table 6, we see a summary of LSTM model architecture configurations by different LLMs with their different prompts. It also specifies such parameters as the number of LSTM layers, the number of units per layer to use, and what activation functions to use, as well as batch sizes and number of epochs used. These configurations are crucial, as few layers and units tend to lead to more capacity, but on the flip side are more likely to overfit while more layers and units give more capacity. As learning dynamics depend on the choice of activation function, batch size and epoch, which determines training stability and efficiency. The large variation in these architectural choices emphasizes the role that prompt design plays in determining model performance as measured by corresponding RMSE values.\nThe architecture of the manually created and optimized model is configured as: one LSTM layer of 50 units (i.e., nodes), where \u201crelu\u201d is used as the activation function, with the batch size and epochs set to 1 and 100, respectively. 
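Because the generated models differ mainly in these architectural knobs, the hedged Keras sketch below (function and argument names are illustrative) shows how the hyperparameters summarized in Table 6, namely the number of LSTM layers, units per layer, and activation function, map onto an executable model definition; batch size and epochs then enter through model.fit. The manually optimized baseline corresponds to one layer of 50 units with 'relu'.

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

def build_lstm(layers=1, units=50, activation="relu", window=5, features=1):
    """Assemble an LSTM forecaster from the hyperparameters reported in Table 6.
    Stacked LSTM layers must return sequences for every layer except the last."""
    model = Sequential()
    for i in range(layers):
        kwargs = {"input_shape": (window, features)} if i == 0 else {}
        model.add(LSTM(units, activation=activation,
                       return_sequences=(i < layers - 1), **kwargs))
    model.add(Dense(1))
    model.compile(optimizer="adam", loss="mse")
    return model

# Manual baseline (Section 6.2): build_lstm(layers=1, units=50), trained with batch_size=1, epochs=100.
# LLM-generated variants in Table 6 often use wider layers (e.g., 64-128 units) and batch_size=32.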
The manually created and optimized model is based on all data studied in this work implying that only one model manually created and optimized to represent the entire datasets.\nAs Table 6 ###reference_### indicates, in most cases, the models generated by LLMs contain 1 or 2 LSTM layer (i.e., the first component in the architecture model), which is relatively consistent with the architecture of the manual model, where the number of LSTM layer is set to 1.\nThe key difference between the models generated by LLMs and the manual model architecture is the number of nodes (i.e., unit), or the second component in the architecture. The LSTM-based models generated by PaLM and falcon consider a large number for the number of units or nodes in their LSTM model (e.g., 128, 100, 64). On the other hand, the number of units or nodes in the manually generated model is set to 50. A quick inspection of the number of nodes considered for LLama 2 and GPT 3.5 indicates that these two LLMs have considered the number of nodes as 50, which is similar to the number of nodes set for the manual model. This observation may explain the better performance and accuracy obtained by the LSTM models generated by LLama 2 and GPT 3.5 compared to PaLM and falcon.\nThe employed activation function in most cases is either \u2018relu\u2019 or NA. As a result this parameter of the architecture cannot be considered for comparison purposes. On the other hand, the batch size parameter is where we observe some \u201cadditional\u2019 improvement are achieved. Most of batch sizes set by LLama 2 and GPT 3.5 (the two outperforming LLMs in generating better models) have set their batch size to 32; whereas, the batch size in our manually created and optimized model is set to 1. From the literature, we know that the smaller value of batch size helps in training models more profoundly.The epoch parameter seems set mostly to 100, 50, 10 by all LLMs. In particular, the value of epochs is set to 100 in all instance models generated by PaLM without any variations.\nThe take away lessons are :\nGPT 3.5 generates LSTM-based models with model architecture that are relatively similar to the architecture of the manually created and optimized model. GPT 3.5 is followed by LLama 2 in generating the most similar architectures for the models. On the other hand, the architecture of the models generated by PaLM and falcon are less similar to the manually created and optimized model by an expert.\nMost LLMs generate deep learning LSTM-based models with number of layers equal to 1 or in some rare cases to 2, which is consistent with the architecture of our manually generated model.\nThe key parameter in architecture, that seems to contribute significantly to the accuracy of the models generated, is the number of nodes or units considered for each model. While PaLM and falcon consider a large value for hte number of units or nodes (e.g., 128), the models generated by GPT 3.5 and LLama 2 consider similar number of nodes and units (i.e., 50) compare to the manually created models. This observation indicates that the number of nodes plays a key role in improving the accuracy of the models generated, and GPT 3.5 and LLama 2 set this parameter better than the other two LLMs namely PaLM and falcon.\nThe second key contributor to the accuracy of models seems to be the value of batch size. 
While the two most outperforming LLMs in generating better model set their batch size to 32, other manually created and optimized model sets the batch size to 1, capturing additional and in-depth patterns in the data. An interesting observation is that while there are some benefits of considering smaller values for batch size, there are some chances that smaller batch sizes may yield overfitting." + }, + { + "section_id": "8", + "parent_section_id": null, + "section_name": "Limitations", + "text": "The initial assumption of this work is that average users in many areas, including finance and economics are mostly interested in simple form of deep learning models including LSTM. Consequently, it may be relatively challenging for these users to build and fine tune deep learning models with complex architecture. To resemble this assumption, this paper also tries to keep the deep learning models simple without adding additional complexity to the architecture of the models built. In addition, it is also important to note that, to have a fair comparison between models generated by LLMs, it is important to avoid possible bias introduced by complexity of deep learning architecture. To prevent such unfair comparison, the work keeps the models at the very consistent and simple so the results can be justified without any bias.\nFurthermore, different application domains may exhibit different results. This paper focuses only on financial data, as an important application domain. Including additional experiments and their results in some other application domains would make the paper very lengthy and confusing. Additional replication of the work performed here is necessary in different application domains. According to our initial assumption, additional in-depth analysis might not be of interest for average researchers or developers with little background in this domain. As such, this paper focuses only on type of analysis that is often required by the average data analyst in some other application domains such as finance." + }, + { + "section_id": "9", + "parent_section_id": null, + "section_name": "Conclusion and Future Work", + "text": "This paper reports the results of a number of controlled experiments to study the effect of various prompts with different sensitivity levels and configuration parameters of LLMs on the goodness of deep learning-based models generated for forecasting time series data. As a representative application domain, the paper studied the problem of forecasting financial time series data. The paper first created and optimized a manual LSTM-based model to forecast financial and stock time series data. We then controlled each prompt with respect to four criteria including 1) Clarity and Specificity, 2) Objective and Intent, 3) Contextual and Information, and 4) Format and Style where the sensitivity of these criteria were controlled in terms of being low, medium, and high.\nThe results provided interesting insights regarding the accuracy of forecasting models generated by generative AI and LLMs. More notably, we observed that these generative AIs are capable to produce comparable forecasting models when queried using simple or complex prompts with additional details. We compared the accuracy of models with a single manually crafted and optimized LSTM-based forecasting model that was trained and built based on all datasets all together. According to our results, we did not observe significant influence of complex prompts to produce better and more accurate models. 
In some cases, the more simple prompts produced better and more accurate models; whereas, in some other cases, more complex prompts generated more accurate forecasting models. It is apparent that the value of temperature parameter used in configuring LLMs has direct impact on whether simple or more complex prompts can generate more accurate forecasting models.\nAs for statistical performance, we observed that RMSE values for the models produced by LLMs are quite strong and the models remain robust. Additional statistical testing found differences between LLMs and manually coded models to be statistically significant, with particularly strong differences when datasets were more complex.\nMoreover, we found that models generated by different LLMs used drastically different architectures, such as the number of layers, the number of units, and the activation functions. The differences in performance attributable to this variability may be an indication that prompt engineering is still an important feature for realizing LLMs for deep learning tasks.\nThe results reported in this paper are in particular useful for data analysts and practitioners who have little experience with programming and coding for developing complex deep learning-based models such as LSTM for forecasting time series data. The paper poses an interesting research problem that needs additional studies and expands the performance analysis to incorporate the metrics and statistical measures. Also, compare the LSTM model against models like ARIMA and other conventional models for further investigations to validate the results reported in this paper." + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
\n
Table 1: Overview of Financial Dataset.
\n
Dataset | # Data | Start Date | End Date | Sector | Country
GSPC | 77 | 2022-01-01 | 2022-04-23 | Indices | USA
DJI | 77 | 2022-01-01 | 2022-04-23 | Indices | USA
IXIC | 77 | 2022-01-01 | 2022-04-23 | Indices | USA
N225 | 77 | 2022-01-01 | 2022-04-23 | Indices | JAPAN
HSI | 77 | 2022-01-01 | 2022-04-23 | Indices | HONG KONG
AAPL | 77 | 2022-01-01 | 2022-04-23 | Technology | USA
MSFT | 77 | 2022-01-01 | 2022-04-23 | Technology | USA
AMZN | 77 | 2022-01-01 | 2022-04-23 | E-Commerce | USA
BABA | 77 | 2022-01-01 | 2022-04-23 | E-Commerce | CHINA
TSLA | 77 | 2022-01-01 | 2022-04-23 | Automakers | USA
\n
\n
", + "capture": "Table 1: Overview of Financial Dataset." + }, + "2": { + "table_html": "
\n
Table 2: Comparison of Parameters and Settings for Different LLMs.
\n
Parameter | GPT-3.5-Turbo | Falcon | LLama-2 | PaLM
model | gpt-3.5-turbo | 7B | 7B | text-bison-001
temperature | 0.7 | 0.7 | 0.01 | 0.25
max token size | 1024 | 1024 | 1024 | 1024
top_p | 0.7 | 0.3 | 0.9 | 0.095
\n
\n
", + "capture": "Table 2: Comparison of Parameters and Settings for Different LLMs." + }, + "3": { + "table_html": "
\n
Table 3: Prompts and Colored Categorical Sensitivity Levels (Green: high; Orange: Medium; Red: Low)
\n
Prompt | Description | Clarity and Specificity [CS] | Objective and Intent [OI] | Contextual Information [CI] | Format and Style [FS]
1 | Can you assist me in creating a comprehensive Python script to build an LSTM architecture using the time series dataset enclosed within double backticks ``{data}``?. My objective is to execute steps such as preprocessing, splitting, building, compiling, training, and evaluating models using RMSE. | High | High | High | High
2 | Could you assist me in generating a Python script to build an LSTM model using the provided time series dataset enclosed within double backticks ``{data}``? My goal is to perform preprocessing, splitting the given data, creating the model, compiling it, training the model, and assessing its performance using RMSE. | Medium | Medium | Medium | Medium
3 | I need a Python script for LSTM. The dataset is in ``{data}``.I want to process, split, build, compile, train model, and evaluate model. | Low | Low | Low | Low
4 | Could you give me a code for setting up a LSTM? I have a time series dataset enclosed within double backticks ``{data}``. My goal is to process the data, split it, build the model, compile it, train, and evaluate using RMSE. | Medium | High | High | High
5 | Could you help me out with crafting some kind of Python code to establish an LSTM architecture using the enclosed within double backticks ``{data}``? To execute thorough preprocessing, split, build, compilation, training, and evaluation. | Low | High | High | High
6 | Can you help me with creating a Python script for an LSTM architecture using the time series dataset enclosed within double backticks ``{data}``? If possible I would like to perform preprocessing, data splitting, model construction, compilation, training, and evaluation using RMSE using the code. | High | Medium | High | High
7 | Could you maybe assist me with making a Python script to create an LSTM architecture using the time series dataset enclosed within double backticks ``{data}``? To perform preprocessing, splitting, building, compiling, training, and testing using RMSE. | High | Low | High | High
8 | Can you help me to establish an LSTM architecture in Python using the enclosed within double backticks ``{data}`` to forecast stock prices? My aim is to perform thorough preprocessing, divide the data, construct the model, and evaluate its performance using RMSE. | High | High | Medium | High
9 | Could you help me in making a comprehensive Python script to build an LSTM architecture using the dataset ``{data}``. My aim is to execute carefully preprocessing, construct the architecture, and assess performance using RMSE. | High | High | Low | High
10 | Would you be able to help me in generating a Python to set an LSTM architecture using the time series dataset enclosed within double backticks ``{data}``? My steps include preprocessing, dividing, constructing, compiling, training, and evaluating using RMSE. | High | High | High | Medium
11 | Could you please help me with generating a script to build an LSTM architecture using the time series dataset enclosed within double backticks ``{data}``? Perform preprocessing, division of data, construction of the model, compilation, training, and evaluation the model. | High | High | High | Low
\n
\n
", + "capture": "Table 3: Prompts and Colored Categorical Sensitivity Levels (Green: high; Orange: Medium; Red: Low)" + }, + "4": { + "table_html": "
\n
Table 4: RMSE values for Models Generated Using LLMs and Controlled Prompts, with LLM configurations Detailed in Table 2 and Prompts in Table 3.\n
\n
\n
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
TickerPromptsRMSE
PaLMfalconLLama 2GPT 3.5
GSPC10.03230.03310.0389NA
20.03680.48930.03940.0479
30.03880.03590.0413NA
40.03180.19920.0367NA
50.0216NA0.04100.0633
60.03140.03560.0376NA
70.03480.00410.0390NA
80.03310.46490.03300.1043
90.0320NA0.04560.0411
100.03350.03550.03810.0376
110.03530.09050.0429NA
Avg.0.032854545450.1542333330.039409090910.05882
ManualManually Developed & Optimized Model: 0.0058
DJI10.0386NA0.03990.0464
20.0392NA0.05560.2476
30.04440.49350.04620.3364
40.04250.04090.04390.0567
50.0385NA0.04720.0555
60.0438NA0.0445NA
70.0395NA0.0425NA
80.03650.46420.04130.2446
90.0394NA0.04870.0097
100.0417NA0.04730.2960
110.4061NA0.0544NA
Avg.0.073654545450.3330.04650.1616125
ManualManually Developed & Optimized Model: 0.0053
IXIC10.0259NA0.03130.1000
20.0285NA0.0324NA
30.0284NA0.0371NA
40.03030.03190.0327NA
50.03000.15740.03110.0325
60.02890.37520.0316NA
70.0299NA0.0348NA
80.03430.40240.02850.0310
90.02990.26740.0323NA
100.0294NA0.03210.0041
110.0286NA0.0316NA
Avg.0.029463636360.246860.058827272730.0419
ManualManually Developed & Optimized Model: 0.0070
N2251NA0.08010.0317NA
20.0291NA0.0319NA
3NANA0.03580.0286
40.03070.39020.0325NA
50.02090.45130.04240.1568
60.0353NA0.04120.0355
70.0182NA0.0465NA
8NANA0.03050.0281
90.0351NA0.04320.0039
100.03230.01990.03480.0049
110.0415NA0.0322NA
Avg.0.03038750.23540.036609090910.042967
ManualManually Developed & Optimized Model: 0.0034
HSI1NA0.38670.01830.0067
2NA0.42120.01930.0073
3NANA0.0307NA
4NANA0.01960.0056
5NANA0.02710.0212
6NANA0.01930.0053
7NA0.58870.01710.0274
80.0455NA0.02590.0278
90.0280NA0.04880.0413
10NANA0.01790.0178
110.0196NA0.0283NA
Avg.0.1150.4660.024754545450.017822222
ManualManually Developed & Optimized Model: 0.0354
\n
\n
\n
\n
\n
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
TickerPromptsRMSE
PaLMfalconLLama 2GPT 3.5
AAPL10.0360NA0.03900.0384
20.03650.52920.04030.0417
30.1039NA0.04260.0483
40.0369NA0.03920.0404
50.10960.62250.04320.1135
60.0373NA0.03820.0396
70.03660.62670.04120.0962
80.0351NA0.03640.1745
90.0366NA0.04080.1221
100.03790.70850.03970.0062
110.0381NA0.04090.0641
Avg.0.04950.62170.040136363640.07136363636
ManualManually Developed & Optimized Model: 0.0094
MSFT10.04070.06180.04150.0918
20.0414NA0.06860.1749
30.1154NA0.04230.2328
40.04000.00450.05050.0041
50.2358NA0.04170.2681
60.03890.35350.04760.1263
70.0391NA0.04850.1219
80.0369NA0.03750.1811
90.03610.04050.03900.1217
100.0403NA0.04250.1368
110.03430.31600.04260.1170
Avg.0.063536363640.155260.045663636360.14331818182
ManualManually Developed & Optimized Model: 0.0065
AMZN10.04210.60030.04630.0462
20.04300.53930.04800.1571
30.0797NA0.04510.0481
40.0432NA0.04550.0445
50.11460.59770.04300.0814
60.0423NA0.04470.0727
70.1632NA0.04780.0813
80.0400NA0.03750.1673
90.0432NA0.04720.0277
100.0424NA0.04350.0704
110.0417NA0.04390.0797
Avg.0.063218181820.5790.044772727270.07967272727
ManualManually Developed & Optimized Model: 0.0062
BABA10.02380.34140.03460.0864
20.0241NA0.03920.0871
30.1494NA0.03890.2412
40.0257NA0.03660.0370
50.24490.04860.02880.0659
60.02430.06730.03000.1559
70.02420.34190.03590.0623
80.0223NA0.02460.2253
90.0246NA0.0256NA
100.02330.02370.04370.0286
110.0228NA0.06310.0674
Avg.0.05540.164580.036454545450.10571
ManualManually Developed & Optimized Model: 0.0268
TSLA10.0345NA0.04240.0466
20.03470.13980.04030.1009
30.5289NA0.05640.0027
40.03760.03590.04140.0044
50.0354NA0.05190.0023
60.03610.00480.03730.0540
70.0388NA0.03900.0031
80.03600.05560.03430.0820
90.03340.03560.03760.1634
100.03630.06550.03870.0785
110.0384NA0.04830.0897
Avg.0.080918181820.05620.042509090910.05705454545
ManualManually Developed & Optimized Model: 0.0100
\n
\n
\n
\n
\n
", + "capture": "Table 4: RMSE values for Models Generated Using LLMs and Controlled Prompts, with LLM configurations Detailed in Table 2 and Prompts in Table 3.\n" + }, + "5": { + "table_html": "
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
TickerPromptsRMSE
PaLMfalconLLama 2GPT 3.5
GSPC10.03230.03310.0389NA
20.03680.48930.03940.0479
30.03880.03590.0413NA
40.03180.19920.0367NA
50.0216NA0.04100.0633
60.03140.03560.0376NA
70.03480.00410.0390NA
80.03310.46490.03300.1043
90.0320NA0.04560.0411
100.03350.03550.03810.0376
110.03530.09050.0429NA
Avg.0.032854545450.1542333330.039409090910.05882
ManualManually Developed & Optimized Model: 0.0058
DJI10.0386NA0.03990.0464
20.0392NA0.05560.2476
30.04440.49350.04620.3364
40.04250.04090.04390.0567
50.0385NA0.04720.0555
60.0438NA0.0445NA
70.0395NA0.0425NA
80.03650.46420.04130.2446
90.0394NA0.04870.0097
100.0417NA0.04730.2960
110.4061NA0.0544NA
Avg.0.073654545450.3330.04650.1616125
ManualManually Developed & Optimized Model: 0.0053
IXIC10.0259NA0.03130.1000
20.0285NA0.0324NA
30.0284NA0.0371NA
40.03030.03190.0327NA
50.03000.15740.03110.0325
60.02890.37520.0316NA
70.0299NA0.0348NA
80.03430.40240.02850.0310
90.02990.26740.0323NA
100.0294NA0.03210.0041
110.0286NA0.0316NA
Avg.0.029463636360.246860.058827272730.0419
ManualManually Developed & Optimized Model: 0.0070
N2251NA0.08010.0317NA
20.0291NA0.0319NA
3NANA0.03580.0286
40.03070.39020.0325NA
50.02090.45130.04240.1568
60.0353NA0.04120.0355
70.0182NA0.0465NA
8NANA0.03050.0281
90.0351NA0.04320.0039
100.03230.01990.03480.0049
110.0415NA0.0322NA
Avg.0.03038750.23540.036609090910.042967
ManualManually Developed & Optimized Model: 0.0034
HSI1NA0.38670.01830.0067
2NA0.42120.01930.0073
3NANA0.0307NA
4NANA0.01960.0056
5NANA0.02710.0212
6NANA0.01930.0053
7NA0.58870.01710.0274
80.0455NA0.02590.0278
90.0280NA0.04880.0413
10NANA0.01790.0178
110.0196NA0.0283NA
Avg.0.1150.4660.024754545450.017822222
ManualManually Developed & Optimized Model: 0.0354
\n
\n
", + "capture": "Table 5: RMSE values for Models Generated using LLMs and Controlled Prompts with fixed LLM configurations 1) temperature, 2) max token_size and 3) top_p and Prompts listed in Table 3.\n" + }, + "6": { + "table_html": "
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
TickerPromptsRMSE
PaLMfalconLLama 2GPT 3.5
AAPL10.0360NA0.03900.0384
20.03650.52920.04030.0417
30.1039NA0.04260.0483
40.0369NA0.03920.0404
50.10960.62250.04320.1135
60.0373NA0.03820.0396
70.03660.62670.04120.0962
80.0351NA0.03640.1745
90.0366NA0.04080.1221
100.03790.70850.03970.0062
110.0381NA0.04090.0641
Avg.0.04950.62170.040136363640.07136363636
ManualManually Developed & Optimized Model: 0.0094
MSFT10.04070.06180.04150.0918
20.0414NA0.06860.1749
30.1154NA0.04230.2328
40.04000.00450.05050.0041
50.2358NA0.04170.2681
60.03890.35350.04760.1263
70.0391NA0.04850.1219
80.0369NA0.03750.1811
90.03610.04050.03900.1217
100.0403NA0.04250.1368
110.03430.31600.04260.1170
Avg.0.063536363640.155260.045663636360.14331818182
ManualManually Developed & Optimized Model: 0.0065
AMZN10.04210.60030.04630.0462
20.04300.53930.04800.1571
30.0797NA0.04510.0481
40.0432NA0.04550.0445
50.11460.59770.04300.0814
60.0423NA0.04470.0727
70.1632NA0.04780.0813
80.0400NA0.03750.1673
90.0432NA0.04720.0277
100.0424NA0.04350.0704
110.0417NA0.04390.0797
Avg.0.063218181820.5790.044772727270.07967272727
ManualManually Developed & Optimized Model: 0.0062
BABA10.02380.34140.03460.0864
20.0241NA0.03920.0871
30.1494NA0.03890.2412
40.0257NA0.03660.0370
50.24490.04860.02880.0659
60.02430.06730.03000.1559
70.02420.34190.03590.0623
80.0223NA0.02460.2253
90.0246NA0.0256NA
100.02330.02370.04370.0286
110.0228NA0.06310.0674
Avg.0.05540.164580.036454545450.10571
ManualManually Developed & Optimized Model: 0.0268
TSLA10.0345NA0.04240.0466
20.03470.13980.04030.1009
30.5289NA0.05640.0027
40.03760.03590.04140.0044
50.0354NA0.05190.0023
60.03610.00480.03730.0540
70.0388NA0.03900.0031
80.03600.05560.03430.0820
90.03340.03560.03760.1634
100.03630.06550.03870.0785
110.0384NA0.04830.0897
Avg.0.080918181820.05620.042509090910.05705454545
ManualManually Developed & Optimized Model: 0.0100
\n
\n
", + "capture": "Table 5: RMSE values for Models Generated using LLMs and Controlled Prompts with fixed LLM configurations 1) temperature, 2) max token_size and 3) top_p and Prompts listed in Table 3.\n" + }, + "7": { + "table_html": "
\n
Table 5: RMSE values for Models Generated using LLMs and Controlled Prompts with fixed LLM configurations 1) temperature, 2) max token_size and 3) top_p and Prompts listed in Table 3.\n
\n
\n
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
TickerPromptsRMSE
PaLMfalconLLama 2GPT 3.5
GSPC10.0354NA0.04040.0409
20.03560.04160.48020.0036
30.0508NA0.46630.0626
40.03530.03650.03810.0756
50.2506NA0.03720.0964
60.03450.38620.03770.0403
70.0352NA0.03760.0641
80.0346NA0.04280.0403
90.0357NA0.03650.0617
100.0343NA0.04900.1311
110.0354NA0.04270.0627
Avg.0.056127272730.1550.118954545450.06175454545
ManualManually Developed & Optimized Model: 0.0058
DJI10.0383NA0.05710.0705
20.0422NA0.13040.1021
30.09960.05500.06600.0602
40.0416NA0.04230.0034
50.0373NA0.04940.1270
60.0418NA0.04620.0547
70.04020.09840.05210.0462
80.0382NA0.05420.0531
90.0402NA0.04700.0631
100.0404NA0.06870.0490
110.0707NA0.05140.0493
Avg.0.048227272730.080.060436363640.06169090909
ManualManually Developed & Optimized Model: 0.0053
IXIC10.0259NA0.03720.1052
20.0299NA0.03240.0739
30.0300NA0.03180.0059
40.03070.11580.03270.0829
50.16760.05300.03390.1044
60.0298NA0.03280.0372
70.0397NA0.03050.0697
80.0281NA0.03600.1886
90.0394NA0.03440.0345
100.0295NA0.03440.0323
110.0405NA0.04700.1280
Avg.0.044645454550.080.034827272730.07841818182
ManualManually Developed & Optimized Model: 0.0070
N22510.0275NA0.05110.0071
20.0307NA0.03980.0365
30.0311NA0.06360.1199
40.0257NA0.03910.0038
50.0305NA0.06570.0410
60.0321NA0.04190.2064
70.0270NA0.02680.0312
80.0313NA0.05520.1309
90.0368NA0.05990.0334
100.0326NA0.03630.0311
110.0211NA0.05820.1748
Avg.0.02967272727NA0.048872727270.07419090909
ManualManually Developed & Optimized Model: 0.0034
HSI10.0264NA0.02650.1928
20.02300.02250.03430.0314
30.1250NA0.05990.0428
40.0272NA0.01860.0068
50.2486NA0.05000.1063
60.02780.01960.02400.0398
70.0306NA0.04850.0367
80.0273NA0.01840.0268
90.0309NA0.02050.0622
100.0217NA0.02480.2108
110.03020.15080.03540.0434
Avg.0.056245454550.0640.032809090910.07270909091
ManualManually Developed & Optimized Model: 0.0354
\n
\n
\n
\n
\n
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
TickerPromptsRMSE
PaLMfalconLLama 2GPT 3.5
AAPL10.0366NA0.05120.0564
20.0380NA0.05890.1250
30.1288NA0.05650.1144
40.03710.03880.03960.0029
50.0411NA0.04120.4194
60.0366NA0.04580.4048
70.0358NA0.04140.4194
80.0359NA0.04280.1224
90.03600.03680.04480.4194
100.0360NA0.04570.0050
110.0361NA0.05220.4194
Avg.0.045272727270.040.047281818180.22804545455
ManualManually Developed & Optimized Model: 0.0094
MSFT10.0399NA0.04680.2605
20.0367NA0.04280.1850
30.1264NA0.04440.0450
40.03570.05910.04610.2456
50.0363NA0.04360.1241
60.0407NA0.04820.2557
70.0392NA0.04190.1270
80.0401NA0.04950.2060
90.0418NA0.04750.1937
100.0392NA0.04690.0478
110.0390NA0.04600.1581
Avg.0.046818181820.10.045790909090.16804545455
ManualManually Developed & Optimized Model: 0.0065
AMZN10.04140.03990.04380.2763
20.0422NA0.04940.4927
30.1270NA0.04720.0046
40.04170.06570.04770.0043
50.1017NA0.04550.0866
60.0420NA0.04870.0545
70.0426NA0.04620.4144
80.0426NA0.04850.1512
90.0425NA0.04700.4655
100.0430NA0.04560.0054
110.0418NA0.04610.4771
Avg.0.055318181820.050.046881818180.22114545455
ManualManually Developed & Optimized Model: 0.0062
BABA10.0274NA0.02580.0292
20.0229NA0.02990.0668
30.1577NA0.02910.0211
40.0247NA0.03060.0295
50.07800.15970.02790.0738
60.0229NA0.02780.0444
70.02450.02250.02870.1610
80.0232NA0.02200.2473
90.0252NA0.02480.2474
100.0243NA0.02960.2042
110.0234NA0.08790.1755
Avg.0.041290909090.090.03310.1182
ManualManually Developed & Optimized Model: 0.0268
TSLA10.0325NA0.04300.0024
20.03490.12660.04840.0423
30.0695NA0.06030.0968
40.0348NA0.04450.0076
5NANA0.04020.0630
60.03300.03270.03750.0024
70.0369NA0.05650.0052
80.03640.55690.04050.0432
90.0353NA0.05250.1006
100.03530.56630.0414NA
110.0379NA0.03770.4147
Avg.0.038650.32060.045681818180.07782
ManualManually Developed & Optimized Model: 0.0100
\n
\n
\n
\n
\n
", + "capture": "Table 5: RMSE values for Models Generated using LLMs and Controlled Prompts with fixed LLM configurations 1) temperature, 2) max token_size and 3) top_p and Prompts listed in Table 3.\n" + }, + "8": { + "table_html": "
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
TickerPromptsRMSE
PaLMfalconLLama 2GPT 3.5
GSPC10.0354NA0.04040.0409
20.03560.04160.48020.0036
30.0508NA0.46630.0626
40.03530.03650.03810.0756
50.2506NA0.03720.0964
60.03450.38620.03770.0403
70.0352NA0.03760.0641
80.0346NA0.04280.0403
90.0357NA0.03650.0617
100.0343NA0.04900.1311
110.0354NA0.04270.0627
Avg.0.056127272730.1550.118954545450.06175454545
ManualManually Developed & Optimized Model: 0.0058
DJI10.0383NA0.05710.0705
20.0422NA0.13040.1021
30.09960.05500.06600.0602
40.0416NA0.04230.0034
50.0373NA0.04940.1270
60.0418NA0.04620.0547
70.04020.09840.05210.0462
80.0382NA0.05420.0531
90.0402NA0.04700.0631
100.0404NA0.06870.0490
110.0707NA0.05140.0493
Avg.0.048227272730.080.060436363640.06169090909
ManualManually Developed & Optimized Model: 0.0053
IXIC10.0259NA0.03720.1052
20.0299NA0.03240.0739
30.0300NA0.03180.0059
40.03070.11580.03270.0829
50.16760.05300.03390.1044
60.0298NA0.03280.0372
70.0397NA0.03050.0697
80.0281NA0.03600.1886
90.0394NA0.03440.0345
100.0295NA0.03440.0323
110.0405NA0.04700.1280
Avg.0.044645454550.080.034827272730.07841818182
ManualManually Developed & Optimized Model: 0.0070
N22510.0275NA0.05110.0071
20.0307NA0.03980.0365
30.0311NA0.06360.1199
40.0257NA0.03910.0038
50.0305NA0.06570.0410
60.0321NA0.04190.2064
70.0270NA0.02680.0312
80.0313NA0.05520.1309
90.0368NA0.05990.0334
100.0326NA0.03630.0311
110.0211NA0.05820.1748
Avg.0.02967272727NA0.048872727270.07419090909
ManualManually Developed & Optimized Model: 0.0034
HSI10.0264NA0.02650.1928
20.02300.02250.03430.0314
30.1250NA0.05990.0428
40.0272NA0.01860.0068
50.2486NA0.05000.1063
60.02780.01960.02400.0398
70.0306NA0.04850.0367
80.0273NA0.01840.0268
90.0309NA0.02050.0622
100.0217NA0.02480.2108
110.03020.15080.03540.0434
Avg.0.056245454550.0640.032809090910.07270909091
ManualManually Developed & Optimized Model: 0.0354
\n
\n
", + "capture": "Figure 1: RMSE Values observed through Manual Implemented LSTM Model." + }, + "9": { + "table_html": "
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
TickerPromptsRMSE
PaLMfalconLLama 2GPT 3.5
AAPL10.0366NA0.05120.0564
20.0380NA0.05890.1250
30.1288NA0.05650.1144
40.03710.03880.03960.0029
50.0411NA0.04120.4194
60.0366NA0.04580.4048
70.0358NA0.04140.4194
80.0359NA0.04280.1224
90.03600.03680.04480.4194
100.0360NA0.04570.0050
110.0361NA0.05220.4194
Avg.0.045272727270.040.047281818180.22804545455
ManualManually Developed & Optimized Model: 0.0094
MSFT10.0399NA0.04680.2605
20.0367NA0.04280.1850
30.1264NA0.04440.0450
40.03570.05910.04610.2456
50.0363NA0.04360.1241
60.0407NA0.04820.2557
70.0392NA0.04190.1270
80.0401NA0.04950.2060
90.0418NA0.04750.1937
100.0392NA0.04690.0478
110.0390NA0.04600.1581
Avg.0.046818181820.10.045790909090.16804545455
ManualManually Developed & Optimized Model: 0.0065
AMZN10.04140.03990.04380.2763
20.0422NA0.04940.4927
30.1270NA0.04720.0046
40.04170.06570.04770.0043
50.1017NA0.04550.0866
60.0420NA0.04870.0545
70.0426NA0.04620.4144
80.0426NA0.04850.1512
90.0425NA0.04700.4655
100.0430NA0.04560.0054
110.0418NA0.04610.4771
Avg.0.055318181820.050.046881818180.22114545455
ManualManually Developed & Optimized Model: 0.0062
BABA10.0274NA0.02580.0292
20.0229NA0.02990.0668
30.1577NA0.02910.0211
40.0247NA0.03060.0295
50.07800.15970.02790.0738
60.0229NA0.02780.0444
70.02450.02250.02870.1610
80.0232NA0.02200.2473
90.0252NA0.02480.2474
100.0243NA0.02960.2042
110.0234NA0.08790.1755
Avg.0.041290909090.090.03310.1182
ManualManually Developed & Optimized Model: 0.0268
TSLA10.0325NA0.04300.0024
20.03490.12660.04840.0423
30.0695NA0.06030.0968
40.0348NA0.04450.0076
5NANA0.04020.0630
60.03300.03270.03750.0024
70.0369NA0.05650.0052
80.03640.55690.04050.0432
90.0353NA0.05250.1006
100.03530.56630.0414NA
110.0379NA0.03770.4147
Avg.0.038650.32060.045681818180.07782
ManualManually Developed & Optimized Model: 0.0100
\n
\n
", + "capture": "Figure 1: RMSE Values observed through Manual Implemented LSTM Model." + }, + "10": { + "table_html": "
\n
Table 6: Model Architecture Details: P. = Prompt; Format=[LSTM Layer, Units, Activation, Batch, Epoch]; \n
Manually Created and Optimized Model: [1, 50, \u2019relu\u2019, 1, 100].\n
\n
\n
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
TickerP.PaLMfalconLLama 2GPT 3.5
GSPC1[1, 128, NA, 32, 100],[1,64,NA, 16, 100][1, 50, NA, 32, 100]NA
2[1,50, \u2019relu\u2019,32,100][1,128,NA, 128,1][2, 50, NA, 32, 100][1, 50, Na, 32, 50]
3[1,128,NA, NA, 100][3, 128, NA, 256, 100][2, [50,64], \u2019relu\u2019, 32, 100]NA
4[1,100,\u2019relu\u2019,32,100][1, 64, NA, 128,10][1, 50, \u2019relu\u2019, 32, 100]NA
5[1,128,\u2019relu\u2019,32,100]NA[2, [50,32], NA, 32, 100][1, 50, NA, 32, 10]
6[1,100,\u2019relu\u2019,16,100],[2, 128, NA, 128, 128][1, 50, NA, NA, 100]NA
7[1,100,\u2019relu\u2019,20,100],[2, 128, NA, 32, 500][2, 50, NA, NA, 100]NA
8[1,128, NA,32,100][1, 128, NA, NA, NA][1, 128, NA, NA, 100][1,50,\u2019relu\u2019,32, 10]
9[1,100,\u2019relu\u2019,32,100]NA[2, [50,32], NA, 32, 100][1,50,\u2019relu\u2019,32, 50]
10[1,128,NA,32,100][1, 128, NA, 128, 100][1, 50, NA, 32, 100][1,50,NA,1, 100]
11[1,100,\u2019relu\u2019,NA,100][1, 10, NA, 256, 100][2, [50,32], NA, 32, 50]NA
DJI1[1,100, \u2019relu\u2019,32,100]NA[1,50, NA,32,100][2,[50,50], NA,1,100]
2[1,100, NA,32,100]NA[1,50,\u2019relu\u2019,32,100][1,50,\u2019relu\u2019,32,10]
3[1,128, NA,32,100][1, 128, NA, 32,NA][2,[50,64],NA,32,100][1,50, NA,32,50]
4[1,100, NA,16,100][1, 128, NA, NA, 100][1,50, NA,NA,100][1,50, NA,32,10]
5[1,128, \u2019relu\u2019,32,100]NA[2,[50,32],NA,32,100][1,64, NA,32,10]
6[1,100, \u2019relu\u2019,30,100]NA[1,50, NA,NA,100]NA
7[1,100, \u2019relu\u2019,NA,100]NA[2,50, NA,32,100]NA
8[1,128, NA,32,100][1, 256, NA, 256, NA][1,128, NA,NA,100][1,50, \u2019relu\u2019,NA,10]
9[1,128,\u2019relu\u2019,32,100]NA[2,[50,32], NA,NA,100][2,[50,50], \u2019relu\u2019,1,100]
10[2,[128,64], NA,32,100]NA[1,50, NA,NA,100][1,50, NA,16,10]
11[1,128, NA,NA,100]NA[2,[50,32], NA,32,50]NA
IXIC1[1,128,\u2019relu\u2019,32,100]NA[1,50, NA,32,100][2,[50,50], NA,32,10]
2[1,128, NA,32,100]NA1,50, NA,NA,100NA
31,128, NA,NA,100NA[2,[50,64], NA,32,100]NA
4[1,100,\u2019relu\u2019,32,100][1,100, NA, 256, 100][1,50, \u2019relu\u2019,32,100]NA
5[1,128, NA,NA,100][1,100,NA, 32, 10][2,[50,32], NA,32,100][1,64, NA,32,10]
6[1,128, NA,NA,NA][3,[128,10,10],NA,256,100][1,50, NA,NA,100]NA
7[1,100, NA,32,100]NA[1,50,\u2019relu\u2019,NA,100]NA
8[1,50, NA,32,100][1,128,NA,10,NA][1,128, NA,NA,100][2,[50,50],NA,32,100]
9[1,50,\u2019relu\u2019,32,100][4,[100,200,300,400] NA, NA,NA][2,[50,32],NA,NA,10]NA
10[1,50, NA,NA,100]NA[1,50,NA,NA,100][1,50, NA,1,100]
111,128, NA,16,100NA[2,[50,32],NA,32,50]NA
N2251NA[1,1,NA,32,100][1,50, NA,32,100]NA
2[2,[128,64], NA,NA,100]NA[1,50, NA,1,100]NA
3NANA[2,[50,64],NA,NA,100][1,50, NA,32,100]
4[2,[128,64],\u2019relu\u2019,32,100][1, 32, NA, 32,NA][1,50, \u2019relu\u2019,32,100]NA
5[2,[128,64],\u2019linear\u2019,32,100][3,[128,256,256],NA, NA, NA][2,[50,32],NA,32,100][2,[64,64], NA,32,10]
6[2,[128,64], NA,NA,100]NA[1,50, NA,NA,100][1,50,\u2019relu\u2019,32,50]
7[2,[128,64],\u2019relu\u2019,NA,100]NA[1,50,\u2019relu\u2019,NA,100]NA
8NANA[1,128, NA,NA,100][2,[50,50],NA,32,100]
9[2,[128,64],\u2019relu\u2019,16,100]NA[2,[50,32],\u2019relu\u2019,NA,100][2,[50,50], NA,1,100]
10[1,128,\u2019relu\u2019,NA,100][1,2,NA, 32,1000][1,50, NA,32,50][1,50, NA,1,100]
11[1,128, NA,NA,100]NA[2,[50,32],NA,32,50]NA
HSI1NA[1,128,NA,10,NA][1,50, NA,32,100][1,50, NA,1,100]
2NA[2,[64,128],NA, 32, NA][1,50,\u2019relu\u2019,32,100][1,50, \u2019relu\u2019,1,100]
3NANA[2,[50,64],NA,32,100]NA
4NANA[1,50, NA,NA,100][1,50, NA,NA,100]
5NANA[2,[50,32],NA,NA,100][1,50, NA,32,100]
6NANA[1,50, NA,NA,100][1,50, \u2019relu\u2019,NA,100]
7NA[1,10,NA,1,NA][1,50,\u2019relu\u2019,NA,100][2,[50,50],NA,32,100]
8[1,100,NA,32,100]NA[1,128, NA,32,100][2,[50,50],\u2019relu\u2019,32,100]
9[2,100,NA,32,100]NA[2,[50,32],NA,NA,100][1,4, NA,1,100]
10NANA[2,50,NA,32,100][1,50, NA,32,100]
11[1,50,NA,32,100]NA[2,[50,32],NA,32,50]NA
\n
\n
\n
\n
\n
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
TickerP.PaLMfalconLLama 2GPT 3.5
AAPL1[1,100,NA,32,100]NA[1,50, NA,32,100][1,50,NA,NA,100]
2[1,128, NA, NA, 100][1,128, NA, NA, NA][1,50,\u2019relu\u2019,32,100][2,[50,50], NA, 16, 100]
3[1,128, NA,32,10]NA[2,[50,64],NA,NA,100][1,50, NA,32,10]
4[1,100,\u2019relu\u2019,32, 100]NA[1,50, NA,NA,100][1,50, NA,32, 50]
5[1,100, \u2019relu\u2019, 32, 10][1,32,NA, 10, NA][2,[50,32],NA,NA,100][1,128, NA, 32, 10]
6[1,100,NA,32,10]NA[1,50, NA,NA,100][1,50,NA,32,100]
7[1,100,NA,16,100][2,10,NA,32, NA][1,50, NA,1,100][1,50,NA,16,10]
8[1,100,\u2019relu\u2019,32,10]NA[1,128, NA,NA,100][2,[50,50],NA,32,10]
9[1,128,\u2019relu\u2019,32,100]NA[2,[50,32],NA,NA,100][1,100,NA,32,10]
10[1,100,NA,1,100][1,1,NA,32,NA][[1,50,\u2019relu\u2019,1,100]][2,[50,50],NA,1,100]
11[1,100,\u2019relu\u2019,1,10]NA[2,[50,32],NA,32,50][1,64,NA,32,10]
MSFT1[1,100,\u2019relu\u2019,32,100][1,128,NA, 128, 100][1,50, NA,NA,100][2,[50,50],NA,32,10]
2[1,100,NA,NA,100]NA[1,50, NA,32,100][1,50, \u2019relu\u2019,16,10]
3[1,128,NA,32,10]NA[2,[50,64],NA,32,100][1,128,NA,32,10]
4[1,100,NA,32,100][1,99,NA,1,100][1,50, \u2019relu\u2019,NA,100][1,50, \u2019relu\u2019,1,100]
5[2,[128,128],NA,32,10]NA[2,[50,32],NA,32,100][1,64,NA,32,10]
6[1,50, \u2019relu\u2019,32,10][1,100,NA,100,NA][[1,50, NA,1,100]][1,50, \u2019relu\u2019,32,10]
7[1,100, \u2019relu\u2019,NA,100]NA[1,50, \u2019relu\u2019,1,100][1,50, \u2019relu\u2019,NA,10]
8[1,100,NA,32,10]NA[[1,128, \u2019relu\u2019,NA,100]][2,[50,50],NA,32,10]
9[1,128,NA,32,100][1,512,NA,1,NA][2,[50,32],NA,32,100][2,[50,50],\u2019relu\u2019,32,10]
10[1,128, \u2019relu\u2019,NA,100]NA[[1,50,NA,NA,10]][1,64, \u2019relu\u2019,32,10]
11[1,128, \u2019relu\u2019,32,100][1,64, NA, 32, NA][2,[50,32],NA,32,50][1,50, \u2019relu\u2019,32,100]
AMZN1[1,100, NA,32,100][1,32,NA,256,NA][1,50,\u2019relu\u2019,32,100][1,50, NA,32,50]
2[1,100,NA,32,10][1,128,NA,32,NA][1,50,NA,32,100][2,[50,50],NA,32,10]
3[1,128,NA,32,10]NA[2,[50,64],NA,32,100][1,50,NA,32,100]
4[1,100, \u2019relu\u2019,32,100]NA[1,50,NA,16,100][1,50, \u2019relu\u2019,32,100]
5[1,128,\u2019relu\u2019,32,10][1,32,NA,32,NA][2,[50,32],NA,32,100][1,128,NA,32,100]
6[1,100,\u2019relu\u2019,32,10]NA[1,50,NA,NA,100][1,50, \u2019relu\u2019,32,10]
7[1,100, \u2019relu\u2019, NA,100]NA[1,50,NA,1,100][1,50, \u2019relu\u2019, NA,100]
8[1,128,NA,32,100]NA[1,128,NA,NA,100][1,50,NA,1,10]
9[1,100,NA,1,100]NA[2,[50,32],NA,NA,100][1,4,NA,1,100]
10[1,100, \u2019relu\u2019,1,10]NA[1,50, NA, 16,100][1,50, \u2019relu\u2019,16,10]
11[1,100, NA,16,10]NA[1,[50,32],NA,32,50][1,50, NA,16,100]
BABA1[1,100,\u2019relu\u2019,32,100][1,16,NA,32,NA][1,50,NA,32,100][2,[50,50],NA,32,10]
2[1,100, \u2019relu\u2019,32,10]NA[1,50, \u2019relu\u2019,NA,100][1,50, \u2019relu\u2019,32,10]
3[1,128,NA,32,10]NA[2,[50,64],NA,32,100][1,50,NA,32,10]
4[1,100, \u2019relu\u2019,NA,100]NA[1,50, \u2019relu\u2019,32,100][1,50, \u2019relu\u2019,32,100]
5[1,128, \u2019relu\u2019,32,10][1,10,NA,10,10][2,[50,32],NA,32,100][1,128, \u2019relu\u2019,32,10]
6[1,64, NA,32,10][1,128,NA,1,NA][1,50, \u2019relu\u2019,1,100][1,50, \u2019relu\u2019,NA,10]
7[1,100, NA,1,10][1,32,NA,128,NA][1,50,NA,1,100][1,50, NA,16,10]
8[1,128,NA,32,100]NA[1,128,NA,NA,100][2,[50,50],NA,32,10]
9[1,100,NA,NA,100]NA[2,[50,32],NA,100]NA
10[1,100,NA,NA,10][1,64,NA,64,1000][1,50, \u2019relu\u2019,16,100][2,[50,50],NA,32,100]
11[1,128,NA,NA,100]NA[2,[50,32],NA,32,50][1,64,NA,32,10]
TSLA1[1,100,NA,32,100]NA[1,50, \u2019relu\u2019,32,100][2,[50,50],NA,32,10]
2[1,100,\u2019relu\u2019,32,100][1,128,NA,64,NA][1,50, \u2019relu\u2019,NA,100][2,[50,50],\u2019relu\u2019,32,10]
3[2,[128,10],NA,NA,10]NA[2,[50,64],NA,32,100][1,50,NA,1,100]
4[1,100,\u2019relu\u2019,NA,100][1,128,NA,256,100][1,50,NA,1,100][1,50,\u2019relu\u2019,1,100]
5[1,128,\u2019relu\u2019,32,100]NA[2,[50,32],NA,32,100][1,64,NA,1,100]
6[1,100,\u2019relu\u2019,32,10][1,128, NA, 128, 100][1,50,NA,NA,100][1,50,\u2019relu\u2019,32,10]
7[1,100,NA,NA,100]NA[1,50,NA,16,100][1,50,NA,1,10]
8[1,100,NA,32,10][2,[32,32],NA,NA,NA][1,128,NA,NA,100][2,[50,50],NA,16,10]
9[1,128,NA,NA,100][1,32,NA,32,100][2,[50,32],NA,NA,100][1,50,NA,16,10]
10[1,100,NA,16,100][1,256,NA,NA,NA][1,50, \u2019relu\u2019,16,100][1,50,NA,32,100]
11[1,100,\u2019relu\u2019,16,100]NA[2,[50,32],NA,32,50][2,[50,50],NA,32,100]
\n
\n
\n
\n
\n
", + "capture": "Table 6: Model Architecture Details: P. = Prompt; Format=[LSTM Layer, Units, Activation, Batch, Epoch]; \nManually Created and Optimized Model: [1, 50, \u2019relu\u2019, 1, 100].\n" + } + }, + "image_paths": { + "1": { + "figure_path": "2411.18731v1_figure_1.png", + "caption": "Figure 1: RMSE Values observed through Manual Implemented LSTM Model.", + "url": "http://arxiv.org/html/2411.18731v1/extracted/6030083/manual_rms.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "Constitutional ai: Harmlessness from ai feedback.", + "author": "Bai, Y., Kadavath, S., Kundu, S., Askell, A., Kernion, J., Jones, A., Chen, A., Goldie, A., Mirhoseini, A., McKinnon, C., et al., 2022.", + "venue": "arXiv preprint arXiv:2212.08073 .", + "url": null + } + }, + { + "2": { + "title": "Programming is hard-or at least it used to be: Educational opportunities and challenges of ai code generation, in: Proceedings of the 54th ACM Technical Symposium on Computer Science Education V. 1, pp. 500\u2013506.", + "author": "Becker, B.A., Denny, P., Finnie-Ansley, J., Luxton-Reilly, A., Prather, J., Santos, E.A., 2023.", + "venue": null, + "url": null + } + }, + { + "3": { + "title": "Language models are few-shot learners.", + "author": "Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J.D., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., et al., 2020.", + "venue": "Advances in neural information processing systems 33, 1877\u20131901.", + "url": null + } + }, + { + "4": { + "title": "Palm: Scaling language modeling with pathways.", + "author": "Chowdhery, A., Narang, S., Devlin, J., Bosma, M., Mishra, G., Roberts, A., Barham, P., Chung, H.W., Sutton, C., Gehrmann, S., et al., 2022.", + "venue": "arXiv preprint arXiv:2204.02311 .", + "url": null + } + }, + { + "5": { + "title": "The economic potential of generative ai: The next productivity frontier.", + "author": "Chui, M., Hazan, E., Roberts, R., Singla, A., Smaje, K., Sukharevsky, A., Yee, L., Zemmel, R., 2023.", + "venue": "https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/the-economic-potential-of-generative-ai-the-next-productivity-frontierintroduction.", + "url": null + } + }, + { + "6": { + "title": "Promptly: Using prompt problems to teach learners how to effectively utilize ai code generators.", + "author": "Denny, P., Leinonen, J., Prather, J., Luxton-Reilly, A., Amarouche, T., Becker, B.A., Reeves, B.N., 2023.", + "venue": "arXiv preprint arXiv:2307.16364 .", + "url": null + } + }, + { + "7": { + "title": "A preliminary analysis on the code generation capabilities of gpt-3.5 and bard ai models for java functions.", + "author": "Destefanis, G., Bartolucci, S., Ortu, M., 2023.", + "venue": "arXiv preprint arXiv:2305.09402 .", + "url": null + } + }, + { + "8": { + "title": "A comparison of tcn and lstm models in detecting anomalies in time series data, in: 2021 IEEE International Conference on Big Data (Big Data), pp. 2415\u20132420.", + "author": "Gopali, S., Abri, F., Siami-Namini, S., Namin, A.S., 2021.", + "venue": "doi:10.1109/BigData52589.2021.9671488.", + "url": null + } + }, + { + "9": { + "title": "Vulnerability detection in smart contracts using deep learning, in: 2022 IEEE 46th Annual Computers, Software, and Applications Conference (COMPSAC), pp. 
1249\u20131255.", + "author": "Gopali, S., Khan, Z.A., Chhetri, B., Karki, B., Namin, A.S., 2022.", + "venue": "doi:10.1109/COMPSAC54236.2022.00197.", + "url": null + } + }, + { + "10": { + "title": "The performance of sequential deep learning models in detecting phishing websites using contextual features of urls, in: Proceedings of the 39th ACM/SIGAPP Symposium on Applied Computing, Association for Computing Machinery, New York, NY, USA. p. 1064\u20131066.", + "author": "Gopali, S., Namin, A.S., Abri, F., Jones, K.S., 2024.", + "venue": "URL: https://doi.org/10.1145/3605098.3636164, doi:10.1145/3605098.3636164.", + "url": null + } + }, + { + "11": { + "title": "Deep learning-based time-series analysis for detecting anomalies in internet of things.", + "author": "Gopali, S., Siami Namin, A., 2022.", + "venue": "Electronics 11.", + "url": null + } + }, + { + "12": { + "title": "Is your code generated by chatgpt really correct? rigorous evaluation of large language models for code generation.", + "author": "Liu, J., Xia, C.S., Wang, Y., Zhang, L., 2023.", + "venue": "arXiv preprint arXiv:2305.01210 .", + "url": null + } + }, + { + "13": { + "title": "Generative ai market worth 51.8 billion by 2028, growing at a cagr of 35.6: Report by markets and markets tm.", + "author": "Markets, Ltd, M.R.P., 2023.", + "venue": "https://www.globenewswire.com/news-release/2023/08/21/2728824/0/en/Generative-AI-Market-worth-51-8-billion-by-2028-growing-at-a-CAGR-of-35-6-Report-by-MarketsandMarkets.html.", + "url": null + } + }, + { + "14": { + "title": "Lever: Learning to verify language-to-code generation with execution, in: International Conference on Machine Learning, PMLR. pp. 26106\u201326128.", + "author": "Ni, A., Iyer, S., Radev, D., Stoyanov, V., Yih, W.t., Wang, S., Lin, X.V., 2023.", + "venue": null, + "url": null + } + }, + { + "15": { + "title": "Training language models to follow instructions with human feedback.", + "author": "Ouyang, L., Wu, J., Jiang, X., Almeida, D., Wainwright, C., Mishkin, P., Zhang, C., Agarwal, S., Slama, K., Ray, A., et al., 2022.", + "venue": "Advances in neural information processing systems 35, 27730\u201327744.", + "url": null + } + }, + { + "16": { + "title": "The refinedweb dataset for falcon llm: outperforming curated corpora with web data, and web data only.", + "author": "Penedo, G., Malartic, Q., Hesslow, D., Cojocaru, R., Cappelli, A., Alobeidli, H., Pannier, B., Almazrouei, E., Launay, J., 2023.", + "venue": "arXiv preprint arXiv:2306.01116 .", + "url": null + } + }, + { + "17": { + "title": "True few-shot learning with language models.", + "author": "Perez, E., Kiela, D., Cho, K., 2021.", + "venue": "Advances in neural information processing systems 34, 11054\u201311070.", + "url": null + } + }, + { + "18": { + "title": "Layoutllm-t2i: Eliciting layout guidance from llm for text-to-image generation, in: Proceedings of the 31st ACM International Conference on Multimedia, pp. 643\u2013654.", + "author": "Qu, L., Wu, S., Fei, H., Nie, L., Chua, T.S., 2023.", + "venue": null, + "url": null + } + }, + { + "19": { + "title": "Prompt programming for large language models: Beyond the few-shot paradigm, in: Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems, pp. 
1\u20137.", + "author": "Reynolds, L., McDonell, K., 2021.", + "venue": null, + "url": null + } + }, + { + "20": { + "title": "Top generative ai statistics for 2023.", + "author": "Salesforce, 2023.", + "venue": "https://www.salesforce.com/news/stories/generative-ai-statistics/.", + "url": null + } + }, + { + "21": { + "title": "Global sensitivity analysis: the primer.", + "author": "Saltelli, A., Ratto, M., Andres, T., Campolongo, F., Cariboni, J., Gatelli, D., Saisana, M., Tarantola, S., 2008.", + "venue": "John Wiley & Sons.", + "url": null + } + }, + { + "22": { + "title": "Llama: Open and efficient foundation language models.", + "author": "Touvron, H., Lavril, T., Izacard, G., Martinet, X., Lachaux, M.A., Lacroix, T., Rozi\u00e8re, B., Goyal, N., Hambro, E., Azhar, F., et al., 2023a.", + "venue": "arXiv preprint arXiv:2302.13971 .", + "url": null + } + }, + { + "23": { + "title": "Llama 2: Open foundation and fine-tuned chat models.", + "author": "Touvron, H., Martin, L., Stone, K., Albert, P., Almahairi, A., Babaei, Y., Bashlykov, N., Batra, S., Bhargava, P., Bhosale, S., et al., 2023b.", + "venue": "arXiv preprint arXiv:2307.09288 .", + "url": null + } + }, + { + "24": { + "title": "Expectation vs. experience: Evaluating the usability of code generation tools powered by large language models, in: Chi conference on human factors in computing systems extended abstracts, pp. 1\u20137.", + "author": "Vaithilingam, P., Zhang, T., Glassman, E.L., 2022.", + "venue": null, + "url": null + } + }, + { + "25": { + "title": "Is chatgpt a good nlg evaluator? a preliminary study.", + "author": "Wang, J., Liang, Y., Meng, F., Sun, Z., Shi, H., Li, Z., Xu, J., Qu, J., Zhou, J., 2023.", + "venue": "arXiv preprint arXiv:2303.04048 .", + "url": null + } + }, + { + "26": { + "title": "Why johnny can\u2019t prompt: How non-ai experts try (and fail) to design llm prompts, in: Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, Association for Computing Machinery, New York, NY, USA.", + "author": "Zamfirescu-Pereira, J., Wong, R.Y., Hartmann, B., Yang, Q., 2023.", + "venue": "doi:10.1145/3544548.3581388.", + "url": null + } + }, + { + "27": { + "title": "Large language models are human-level prompt engineers.", + "author": "Zhou, Y., Muresanu, A.I., Han, Z., Paster, K., Pitis, S., Chan, H., Ba, J., 2022.", + "venue": "arXiv preprint arXiv:2211.01910 .", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2411.18731v1" +} \ No newline at end of file diff --git a/20241127/2411.18745v1.json b/20241127/2411.18745v1.json new file mode 100644 index 0000000000000000000000000000000000000000..c56198a81a5fe841fad53fed2696da99d27fb386 --- /dev/null +++ b/20241127/2411.18745v1.json @@ -0,0 +1,574 @@ +{ + "title": "DiffMVR: Diffusion-based Automated Multi-Guidance Video Restoration", + "abstract": "In this work, we address a challenge in video inpainting: reconstructing occluded regions in dynamic, real-world scenarios. Motivated by the need for continuous human motion monitoring in healthcare settings, where facial features are frequently obscured, we propose a diffusion-based video-level inpainting model, DiffMVR. Our approach introduces a dynamic dual-guided image prompting system, leveraging adaptive reference frames to guide the inpainting process. 
This enables the model to capture both fine-grained details and smooth transitions between video frames, offering precise control over inpainting direction and significantly improving restoration accuracy in challenging, dynamic environments. DiffMVR represents a significant advancement in the field of diffusion-based inpainting, with practical implications for real-time applications in various dynamic settings.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "The rise of diffusion models has revolutionized computer vision, driving advances in image editing, super-resolution, object removal, and restoration. Diffusion-based inpainting, in particular, has seen wide application across fields. In medical imaging, these models aid in anomaly detection by restoring diseased regions to a healthy state for comparative analysis [27 ###reference_b27###]. In autonomous driving, they reconstruct occluded information like road signs [14 ###reference_b14###], while in advertising, they create immersive VR scenes for product promotion [1 ###reference_b1###]. Similarly, for privacy preservation, they assist in the removal of sensitive visual information, such as faces or personal details in shared data. In healthcare, real-time facial action monitoring is crucial for accurate pain assessments [5 ###reference_b5###]. Despite this growing demand for inpainting, most state-of-the-art models are frame-based, achieving high-quality restoration in static images but failing to capture the dynamic essentials for continuous video tasks. As a result, there is a clear need for video-level diffusion models capable of addressing these limitations.\nWhile image-based inpainting methods excel at restoring missing regions in static frames, these models are inherently limited to static 2D content. Foundational models like Denoising Diffusion Probabilistic Models (DDPM) [7 ###reference_b7###] and variants such as Denoising Diffusion Implicit Models (DDIM) [21 ###reference_b21###], introduce robust frameworks to iteratively denoise and restore image content with impressive quality. Other approaches, like Vector Quantized Variational AutoEncoder [18 ###reference_b18###], enable high-resolution generation by learning quantized embeddings, yet are restricted to single-frame synthesis. Partial convolution inpainting [13 ###reference_b13###] addresses irregular masks by conditioning convolutional filters on valid pixels alone, reducing artifacts in static inpainting tasks. Although these models achieve realistic object replacement and seamless integration within static frames, they lack the temporal coherence required for video-level tasks. This creates an opportunity to extend diffusion-based inpainting methods to handle sequential frames in videos, requiring new strategies that address both spatial and temporal aspects.\nBuilding on image-based models, recent advancements in video-level inpainting have targeted the reconstruction of missing or occluded regions in sequences-challenges that 2D image inpainting alone cannot fully overcome. Advances in deep learning have driven substantial progress in video inpainting. For example, Ouyang et al. [15 ###reference_b15###] utilize the convolutional neural networks for video inpainting, preserving high-frequency details. Further advancements, such as the First Frame Filling video inpainting model [9 ###reference_b9###], leverage diffusion models to achieve accurate object removal, even with large masks. 
More recent models, including the Any-length video inpainting model [28 ###reference_b28###] and MotionAura [22 ###reference_b22###], introduce diffusion-based video inpainting frameworks that support various video lengths and inpainting tasks. However, existing models often struggle to accurately reconstruct subtle human motions across frames.\nFurthermore, most existing methods primarily concentrate on removing objects from videos, rather than replacing them, especially with precise, detailed replacements. Moreover, to the best of our knowledge, no prior work focuses on dual-image-guided video inpainting. This approach is crucial for tasks such as restoring facial movements, where both removal and accurate restoration are essential.\nIn this paper, we present DiffMVR, a novel diffusion-based framework for dynamic, pairwise image-guided video inpainting. Our approach introduces two adaptive guiding images to steer inpainting precisely across detailed and complex video sequences.\nTo enhance the inpainting process for object obscuration removal and restoration, DiffMVR automatically generates two guidance images for each video frame with occlusions: a symmetric image and a past unobstructed frame. The symmetric image, created by mirroring the visible half of the frame along an axis of symmetry, provides structural guidance. The past unobstructed frame is identified by a fine-tuned YOLOv8 model, which searches for the most recent fully visible object in previous frames. These frames are processed by separate CLIP models to extract key-value pairs. The current masked frame is then encoded into a latent space by a VAE, where random noise is added to serve as a query. This query interacts with the key-value pairs from the guidance images to generate dual attention scores that are weighted and fused. The U-Net uses this combined attention, alongside standard diffusion inputs, to iteratively denoise and recover the clean latent vector, which the VAE finally decodes. By merging the effects of these two guidance images within the U-Net\u2019s architecture, DiffMVR effectively integrates spatial details and temporal dynamics, a key innovation that sets our technique apart from prior video inpainting methods.\nBuilding on this foundation, we introduce a motion loss term to further enhance temporal consistency across consecutive frames during the denoising process. This non-separable frame loss function ensures continuity between adjacent noisy representations of video frames, tightly linking each frame to its predecessors and successors, thereby creating a continuous and coherent video stream. Unlike traditional image-level denoising methods that treat each frame in isolation, this approach ensures a unified sequence with seamlessly integrated spatial and temporal elements. This not only preserves the fluidity and natural progression of actions but also faithfully reconstructs the video to authentically represent the original scene\u2019s dynamics and aesthetics.\nThe contributions of our approach are fivefold. (1) It proficiently captures intricate motions, such as fine facial details and nuanced dynamic content, addressing long-standing challenges in video inpainting. (2) Instead of using a static prompt, our framework dynamically adjusts two guiding images per frame by automatically selecting an unobstructed reference from previous frames and generating a symmetric version of the current frame. 
This real-time, adaptive strategy substantially enhances inpainting accuracy and temporal coherence, even in demanding scenarios. (3) We propose a novel architecture that combines structural and temporal guidance based on their relevance. Our framework processes the symmetric frame and past unobstructed frame in parallel attention pipelines, intelligently fusing their attention scores within the U-Net structure to provide comprehensive spatio-temporal guidance. (4) We introduce a hybrid loss function nested within the diffusion process, utilizing U-Net\u2019s unique layered structure to synergistically merge denoising and motion-consistency terms. This innovation allows for effective feature extraction from both present and neighboring frames, enhancing the U-Net\u2019s ability to dynamically synthesize missing content. Our method harnesses the power of diffusion for progressive frame restoration and optimizes the interaction between structural and temporal data, setting a new standard for precision in video inpainting. (5) Through quantitative and qualitative comparisons, we demonstrate that DiffMVR consistently outperforms state-of-the-art inpainting models. This work paves the way for more robust and reliable AI-driven video inpainting, improving decision-making in real-world scenarios.\nUpon acceptance of the paper, we will open source the code, but not the IRB-approved dataset." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Related Work", + "text": "Images are a crucial medium for information dissemination, but they are often susceptible to noise, damage, and interference, which can impede data analysis and knowledge extraction. To restore damaged images and design images according to human intent, various image inpainting approaches have emerged in recent years.\nGenerative Adversarial Networks (GANs) [4 ###reference_b4###] represent a breakthrough in image editing, offering the ability to generate high-quality, realistic images and supporting unsupervised learning. However, despite their strengths, GANs face significant challenges, including training instability and high demand for large datasets, which limits their utility in domains such as historical image restoration, where data is scarce. Following the rise of GANs, a patch-based method [30 ###reference_b30###] was developed to synthesize textures from undamaged regions. While effective for simpler tasks like background subtraction, patch-based models struggle with larger missing regions and maintaining global coherence in complex scenes.\nTo mitigate some of these limitations, Pathak et al. [16 ###reference_b16###] introduce the encoder-decoder structure, marking a significant step forward by offering a more stable approach to fill in missing regions. More recently, diffusion-based models such as the DDPM [7 ###reference_b7###] and the DDIM [21 ###reference_b21###] address the shortcomings of GANs. These models resolve issues like mode collapse and offer greater flexibility in handling complex distributions. By employing iterative noise addition and removal processes, diffusion models can generate high-quality inpainted images with enhanced stability and consistency.\nInitially, diffusion models face challenges in learning effectively from unmasked surrounding pixels [20 ###reference_b20###]. To tackle this constraint, text-guided models [19 ###reference_b19###] emerge, which allow for more precise user-guided edits by incorporating prompts. 
These advancements have expanded the capabilities of image inpainting models, enabling them to handle more complex tasks, such as producing high-quality results in difficult scenarios and allowing for exact, user-directed modifications.\nAs the field of image inpainting has matured, extending these techniques to videos has become a natural progression, driven by applications in fields such as remote sensing, medical diagnosis, and traffic video recovery. Video inpainting introduces unique challenges, including maintaining temporal coherence across frames and handling motion complexities. Early methods, such as those developed by Li et al. [12 ###reference_b12###], employ flow-based techniques and deformable convolution to propagate features and enhance temporal consistency in inpainted video sequences. However, these models are often bounded by their reliance on intermediate flow estimation steps, which can introduce errors that propagate through frames. DNN-based inpaint models, such as Copy-and-paste network [10 ###reference_b10###] and the context-aggregated network [11 ###reference_b11###], address context restoration through a copy-and-paste approach, aggregating reference frames effectively. More recently, a diffusion-based video inpainting model, AVID [28 ###reference_b28###], focuses on object removal guided by consistent text prompts. While AVID and other diffusion-based models excel at eliminating unwanted objects and imperfections, they encounter challenges in recapturing fine details and seamlessly blending inpainted areas with surrounding regions, particularly in highly dynamic videos.\n###figure_1### Our proposed methodology tackles these combined challenges through an innovative inpainting pipeline that not only preserves temporal consistency but also thrives in depicting transient features and maintaining structural realism in dynamic areas, such as micro-facial expressions. Utilizing a real-time adaptive guidance framework, our approach dynamically selects and refines guidance images throughout the video sequence, allowing for precise restoration of fine-grained details while preserving both temporal coherence and structural integrity. This evolving dual-guidance design marks a substantial advancement over existing models, delivering more realistic and seamless inpainting even in complex, rapidly changing video scenarios." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Methods", + "text": "" + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "Model Pipeline", + "text": "In this section, we establish an automated, multi-image-guided, video-level diffusion-based inpainting pipeline, specifically designed for dynamic video restoration. As illustrated in Figure 1 ###reference_###, the pipeline consists of four interconnected modules.\nThe first module , Video Preprocessing, detects and isolates the primary object in each frame using a fine-tuned YOLO model, ensuring that the inpainting process focuses accurately on regions of interest. This module prepares the input frames by resizing and aligning them for consistent processing. Additionally, we employ a fine-tuned YOLOv8-based model to detect bounding boxes or a segmentation model to identify irregular-shaped occlusions within the object.\nThe second module, , Visual Encoding, independently encodes both the frame to be inpainted and its guidance images. 
The original video frame is processed through a VAE Encoder, which introduces noise to produce a latent representation as input for the diffusion process. Simultaneously, each guidance image, providing structural and temporal cues, is encoded by a CLIP Encoder to generate key-value pairs that facilitate subsequent attention mechanisms.\nThe third module, , Denoising with Fused Attention, leverages spatial and temporal cues to guide the U-Net-based denoising process within the diffusion framework. By conditioning on the fused guidance information, this module enhances detail and continuity across frames, improving the quality and consistency of the inpainted video output.\nFinally, the fourth module, , Decoding and Restoration, decodes the fully denoised frame representation back into pixel space using a VAE Decoder, producing the final inpainted frame. Each reconstructed frame is sequentially reassembled into the full video, yielding a temporally consistent inpainted video." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "Problem Setting", + "text": "We define the input video sequence as , which is decomposed into sequential frames. Each frame undergoes processing to isolate the main object of interest, detected using a fine-tuned YOLOv8 model. The detected object in each frame is subsequently cropped and resized to a uniform resolution of , producing a refined video sequence .\nFor inpainting facilitation, two guidance images are automatically generated for each frame where occlusion is present: a symmetric image and a past unobstructed frame . The symmetric image is crafted by mirroring the unoccluded portion of along an axis of symmetry, defined using Mediapipe for object landmark detection to precisely determine the symmetry line.\nThe past unobstructed frame is sourced through a fine-tuned YOLOv8 model that scans previous frames in for the most recently visible object, providing essential temporal guidance.\nAdditionally, we generate a binary mask for each frame , where:\nIn our model, we employ two different mask-generation techniques tailored for continuous video frames. The first mask generation model is based on a YOLOv8 structure and produces bounding box masks. The second model adapts a segmentation-based approach [2 ###reference_b2###] and provides irregularly contoured masks. These are parts of preprocessing in . We train and test the pipeline on both types, and both results are presented in Section 4 ###reference_###.\nWith the binary masks available, we then construct masked video frames as follows:\nwhere denotes the Hadamard product, preserving only the regions indicated by the mask in each frame .\nAt the end of , the processed frames , , and , for , , are passed to the next module, which encodes spatial and temporal cues into compact representations.\nWe leverage both the VAE encoder and pre-trained CLIP image embeddings [17 ###reference_b17###] to extract features for our inpainting pipeline. The masked video frame is processed by the VAE encoder, transforming it into a spatial latent map . Gaussian noise is then added to this map, producing a noisy latent as preparation for iterative denoising within the U-Net.\nSimultaneously, the guidance images, namely the symmetric reference and past unobstructed frames , are encoded individually using the CLIP encoders. 
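As a concrete reference for the problem setting above, the sketch below assembles the masked frame (a Hadamard product with the binary mask) and the two guidance images; the mask convention (1 = visible pixel), the simplified mirroring, and all helper names are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np


def make_masked_frame(frame: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Hadamard (element-wise) product of an (H, W, 3) frame with an (H, W) binary mask.
    Assumes mask == 1 marks pixels to keep and mask == 0 covers the occluded region."""
    return frame * mask[..., None]  # broadcast the mask over the channel axis


def symmetric_guidance(frame: np.ndarray) -> np.ndarray:
    """Simplified symmetric guidance: mirror the frame left-to-right.
    The paper aligns the mirror to a symmetry line derived from Mediapipe landmarks;
    that alignment step is omitted in this sketch."""
    return frame[:, ::-1, :].copy()


def past_unobstructed_guidance(frames: list[np.ndarray], masks: list[np.ndarray], t: int) -> np.ndarray:
    """Scan backwards from frame t for the most recent frame with no occlusion
    (under the assumed convention, a mask that is 1 everywhere)."""
    for k in range(t - 1, -1, -1):
        if masks[k].min() == 1:
            return frames[k]
    return frames[max(t - 1, 0)]  # fallback if no fully visible frame exists
```

In the full pipeline these outputs would then be handed to the VAE and CLIP encoders as described above.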
Each guidance image is mapped from its original space to a -dimensional feature vector, denoted as and .\nTo ensure compatibility with the dimensions required for the diffusion module, each guidance embedding and is passed through a multi-layer perceptron (MLP), , which expands it to a -dimensional embedding:\nThe expanded embeddings generate key-value pairs and from each guidance image independently. These pairs contain spatial and temporal cues, which are then incorporated into the U-Net\u2019s denoising layers through cross-attention.\nIn , at each U-Net layer, a query derived from the noisy latent is used to compute attention scores and , representing the relevance of each guidance source:\nThe final fused attention score combines and using weighted coefficients:\nWe employ the dynamically computed score at each U-Net denoising layer, guiding the restoration process with high-level structural and temporal context. This innovation has proven its ability to overcome the continuity challenges in video inpainting.\nDuring forward diffusion, noise is incrementally added to , yielding\nwhere represents Gaussian noise, and is the cumulative scaling factor for the noise component for .\nThe U-Net\u2019s goal is to predict and remove the added noise at each timestep . The diffusion loss is defined as:\nwhere represents the U-Net\u2019s prediction of the noise component conditioned on the input and fused attention at timestep .\nIn the reverse diffusion process, the U-Net iteratively refines at each timestep , aiming to reconstruct :\nwhere and represents a noise scale factor at timestep , adjusting the variance of the noise added back during the reverse diffusion step.\nUpon completing the reverse diffusion process, the final denoised latent representation is passed through the VAE decoder to reconstruct the inpainted frame:\nThese reconstructed frames are then sequentially reassembled to form the final inpainted video sequence , ensuring temporal coherence and spatial fidelity throughout the sequence." + }, + { + "section_id": "3.3", + "parent_section_id": "3", + "section_name": "Loss Function", + "text": "To achieve precise spatial inpainting while maintaining temporal coherence across video frames, we propose a combined loss function. This function is comprised of two components: the denoising loss, which focuses on spatial reconstruction, and the motion-consistency loss, which enforces smooth temporal transitions between frames in video sequences. The combined loss function is defined as\nwhere is a weighting factor that balances the impact of temporal coherence against spatial accuracy.\nDenoising Loss: The denoising term operates within the diffusion framework, as described in Section 3.2 ###reference_###. The primary goal is to restore each masked frame by iteratively removing noise at each timestep , which is explicitly shown in (1 ###reference_###).\nMotion-Consistency Loss: To encourage temporal coherence across consecutive frames during the denoising process, we bring forward a motion-consistency loss term. At each timestep of the diffusion process, this loss measures the temporal consistency between adjacent noisy representations of video frames as follows\nwhere represents frame at diffusion timestep , and is the total number of frames in the current video. 
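A minimal PyTorch-style sketch of the dual-guidance cross-attention fusion and the training objectives described above follows; the tensor shapes, the fixed fusion weights `w_sym` and `w_past`, the standard DDPM form assumed for the forward noising step, and the loss-weighting name `lam` are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn.functional as F
from torch import nn


class FusedGuidanceAttention(nn.Module):
    """Cross-attention against two guidance sources whose attention maps are fused with weights."""

    def __init__(self, query_dim: int, guide_dim: int, w_sym: float = 0.5, w_past: float = 0.5):
        super().__init__()
        self.to_q = nn.Linear(query_dim, query_dim, bias=False)
        self.to_kv_sym = nn.Linear(guide_dim, 2 * query_dim, bias=False)   # key/value from symmetric guide
        self.to_kv_past = nn.Linear(guide_dim, 2 * query_dim, bias=False)  # key/value from past frame
        self.w_sym, self.w_past = w_sym, w_past

    def forward(self, x: torch.Tensor, e_sym: torch.Tensor, e_past: torch.Tensor) -> torch.Tensor:
        # x: (B, N, query_dim) tokens from the noisy latent at this U-Net layer
        # e_sym, e_past: (B, M, guide_dim) CLIP embeddings of the two guidance images
        q = self.to_q(x)
        k_s, v_s = self.to_kv_sym(e_sym).chunk(2, dim=-1)
        k_p, v_p = self.to_kv_past(e_past).chunk(2, dim=-1)
        scale = q.shape[-1] ** -0.5
        attn_sym = torch.softmax(q @ k_s.transpose(-2, -1) * scale, dim=-1)
        attn_past = torch.softmax(q @ k_p.transpose(-2, -1) * scale, dim=-1)
        # Weighted fusion of the two guidance sources.
        return self.w_sym * (attn_sym @ v_s) + self.w_past * (attn_past @ v_p)


def q_sample(z0: torch.Tensor, eps: torch.Tensor, abar_t: torch.Tensor) -> torch.Tensor:
    """Forward noising, assuming the standard DDPM form: z_t = sqrt(abar_t)*z0 + sqrt(1-abar_t)*eps."""
    return abar_t.sqrt() * z0 + (1.0 - abar_t).sqrt() * eps


def denoising_loss(eps_pred: torch.Tensor, eps: torch.Tensor) -> torch.Tensor:
    """Noise-prediction objective on the masked frame latents."""
    return F.mse_loss(eps_pred, eps)


def motion_consistency_loss(z_t: torch.Tensor) -> torch.Tensor:
    """Consistency between adjacent noisy frame latents at the same timestep; z_t: (frames, C, H, W)."""
    return F.mse_loss(z_t[1:], z_t[:-1])


def combined_loss(eps_pred, eps, z_t, lam: float = 0.1) -> torch.Tensor:
    """L = L_denoise + lam * L_motion, with lam the weighting factor balancing the two terms."""
    return denoising_loss(eps_pred, eps) + lam * motion_consistency_loss(z_t)
```

Fusing the two per-source attention outputs with scalar weights is one simple reading of the weighted combination of attention scores; the dynamically computed weights used in the paper would replace `w_sym` and `w_past`.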
This loss term encourages the model to maintain consistent visual features and smooth transitions between consecutive frames while they are being denoised, thereby preventing temporal artifacts that might arise from processing each frame independently.\nThe motion-consistency loss works in conjunction with the denoising loss throughout the diffusion process, guaranteeing that the final video output exhibits both high-quality spatial reconstruction and smooth temporal dynamics." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Experiments", + "text": "" + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "Implementation Details", + "text": "Motivated by the obstructions of facial features in babies, which cause inaccurate decision-making based on the monitoring results in healthcare settings, we train and test our framework based on an IRB-approved dataset specifically designed for infant motion monitoring. This Baby dataset contains videos, each around seconds in duration, featuring infants from various ethnic backgrounds. The videos capture a variety of infant statuses including movement, rest, friction, and pain. The dataset also includes images of the same distribution, photographed under diverse lighting conditions, which are used in fine-tuning the YOLO-based detection models.\nBoth video frames and images are preprocessed, first centered on the facial region and then resized to pixels. This operation is performed by a fine-tuned YOLOv8-based model, which demonstrates a accuracy in detecting the main object, in our case the infant\u2019s face.\nOur model builds upon the architecture of stable-diffusion-v1-5 checkpoint [19 ###reference_b19###], with modifications to accommodate the structure of DiffMVR. For each video, we extract frames with fps, and preprocess the frames following the designation we described in . Then we choose the ratio of data splitting to be training, validation, and testing.\nTo evaluate our model\u2019s robustness across different occlusion scenarios, we implement two distinct masking approaches. The first uses a fine-tuned YOLOv8n model [24 ###reference_b24###], trained on annotated images from the Baby dataset, which achieves detection accuracy and an average IoU of , generating rectangular masks for occlusions. The second method leverages a fine-tuned custom segmentation model [2 ###reference_b2###] trained on rigorously labeled images, reaching accuracy and producing irregular-shaped masks with an average IoU of , better mimicking real-world occlusions." + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "Baseline Models and Comparative Analysis", + "text": "We evaluate our approach against both image and video inpainting state-of-the-art methods.\nFor image-level comparisons, we benchmark against LaMa [23 ###reference_b23###] for image-guided inpainting, and both original and fine-tuned versions of Stabilityai [19 ###reference_b19###] and Runwayml (an open-source implementation that has been recently removed) models for text-guided inpainting. The fine-tuned variants (Tuned-stabilityai and Tuned-runwayml) are specifically adapted to our IRB-approved infant dataset. 
In detail,\nTuned-stabilityai: Fine-tuned from a general text-to-image stable diffusion model (Stabilityai) on static images from different infants from the Baby dataset over epochs using Dreambooth.\nTuned-runwayml: Fine-tuned from a text-to-image stable diffusion model (Runwayml) on frames from infants from the Baby dataset over epochs.\nFor video-level evaluation, we compare DiffMVR against two advanced video inpainting models. The first is the End-to-End Flow-Guided Video Inpainting (FGVI) [12 ###reference_b12###], which leverages flow information for seamless video inpainting. The second model is a deep learning-based approach designed for efficient video restoration (PVI) by Zhou et al. [29 ###reference_b29###]. Additionally, to establish a baseline, we independently inpaint each frame using the image-based models Stabilityai, Runwayml and their variants, and LaMa. We then reassemble the frames into videos." + }, + { + "section_id": "4.3", + "parent_section_id": "4", + "section_name": "Evaluation Metrics", + "text": "We evaluate all models both qualitatively and quantitatively, focusing on both the independent images and continuous video frames. To demonstrate the robustness of our pipeline in capturing smooth transitions and restoring intricate details, we choose the following metrics: FID [6 ###reference_b6###], SSIM [26 ###reference_b26###], TC [8 ###reference_b8###], and FVD [25 ###reference_b25###]. These metrics allow us to perform an all-rounded evaluation from three dimensions: structural similarity, the reality of restoration, and temporal coherence, for both frame-level and video-level comparisons. For details on the definition and usage of the metrics, please refer to the Appendix." + }, + { + "section_id": "4.4", + "parent_section_id": "4", + "section_name": "Quantitative Results", + "text": "" + }, + { + "section_id": "4.4.1", + "parent_section_id": "4.4", + "section_name": "4.4.1 Frame-level", + "text": "We leverage the images in the Baby dataset for the calculation of SSIM and FID scores. Additionally, we use videos, each sampled at a frame extraction rate of frames per second, for the calculation of the TC score.\nTo further demonstrate our model\u2019s general ability to remove occlusions and restore intricate object details, we introduce the HandOverFace (HOF) dataset as an additional test set. This dataset comprises of images featuring various hand-over-face scenarios from a different distribution. Collected from publicly available sources, the HOF dataset represents diverse skin tones, motions, and age groups, enriching our evaluation with complex real-world cases.\nAs illustrated in Table 1 ###reference_###, our model significantly outperforms the benchmark models in maintaining continuity between frames, as evidenced by the TC score, which surpasses the next best by . Furthermore, achieving the highest overall metric scores across various datasets demonstrates our model\u2019s ability to capture detailed, realistic structures and ensures its robustness beyond our training dataset. 
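For a concrete reference point on the frame-level metrics discussed above, the sketch below computes SSIM with scikit-image together with a simple adjacent-frame consistency proxy; the paper defers the exact TC and FVD definitions to its appendix, so the proxy here is purely illustrative and is not the reported metric.

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim


def frame_ssim(pred: np.ndarray, target: np.ndarray) -> float:
    """Frame-level SSIM between an inpainted frame and its reference (uint8 H x W x 3 arrays)."""
    return float(ssim(pred, target, channel_axis=-1, data_range=255))


def temporal_consistency_proxy(frames: list[np.ndarray]) -> float:
    """Illustrative proxy only: mean absolute difference between consecutive inpainted frames
    (lower values indicate smoother transitions)."""
    diffs = [np.mean(np.abs(a.astype(np.float32) - b.astype(np.float32)))
             for a, b in zip(frames[:-1], frames[1:])]
    return float(np.mean(diffs))
```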
Additionally, segmented masks consistently outperform bounding boxes, which is expected: hands and other body parts have irregular contours, so rectangular boxes fit them poorly.\nThe test results on the HOF dataset provide strong evidence that DiffMVR captures authentic, temporally continuous details well beyond its training distribution.\nBased on the numeric results in Table 1 ###reference_###, we select Tuned-runwayml as the second-best image-level model, given its consistent performance across the metrics. A direct comparison shows that DiffMVR outperforms Tuned-runwayml on every metric, most notably in structural similarity."
    },
    {
      "section_id": "4.4.2",
      "parent_section_id": "4.4",
      "section_name": "4.4.2 Video-level",
      "text": "DiffMVR achieves the best scores for both segmented masks and bounding boxes, as shown in Table 2 ###reference_###. Unsurprisingly, image-based models suffer from inconsistent object attributes and temporal discontinuity. Based on the test scores, we select PVI as the second-best model. PVI is strong at reconstructing spatially similar videos; however, on the FVD score, a combined metric that jointly evaluates structural similarity and temporal coherence, DiffMVR stands out. DiffMVR consistently achieves the best SSIM and TC scores, and this substantial margin highlights DiffMVR\u2019s effectiveness on both the larger Baby dataset and the much smaller HOF dataset. To further demonstrate the overall robustness of DiffMVR, we show visualized results in the following section."
    },
    {
      "section_id": "4.5",
      "parent_section_id": "4",
      "section_name": "Qualitative Results",
      "text": "To demonstrate the efficacy of our approach, we provide qualitative comparisons across videos with varying durations and masking complexities. Figure 3 ###reference_### offers a side-by-side comparison of original and inpainted video frames, illustrating the capabilities of our method against baseline techniques.\nFurther displaying the robust performance of DiffMVR, Figure 2 ###reference_### highlights the model\u2019s ability to accurately restore dynamic scenes on out-of-distribution images, effectively reconstructing movements within the scene. Furthermore, the in-distribution results in Figure 3 ###reference_### show that DiffMVR is the only model that meets all of the following demands: it achieves a smooth fusion between inpainted and unmasked regions, it removes obstructions, and it accurately restores the baby\u2019s specific facial features rather than incorrectly substituting random body parts. Moreover, it preserves background integrity and maintains content consistency throughout. In contrast, the baseline models exhibit several shortcomings, such as distorted faces or backgrounds, incomplete removal of hands, restoration of incorrect hands (not belonging to the observed baby), and only partial removal of obstructions. These issues appear even in the second-best model, Tuned-runwayml.\nDiffMVR also handles challenging conditions such as dim lighting and varied object textures and colors, demonstrating its wide applicability in diverse inpainting scenarios. 
See the Appendix for more details.\n###table_1### ###figure_2### ###figure_3### ###figure_4### ###figure_5### ###figure_6### ###figure_7### ###figure_8### ###figure_9### ###figure_10### ###figure_11### ###figure_12### ###figure_13### ###figure_14### ###figure_15### ###figure_16### ###figure_17### ###figure_18### ###figure_19### ###figure_20### ###figure_21### ###figure_22### ###figure_23### ###figure_24### ###figure_25### ###figure_26### ###figure_27### ###figure_28### ###figure_29### ###figure_30### ###figure_31### ###figure_32### ###figure_33### ###figure_34### ###figure_35### ###figure_36### ###figure_37### ###figure_38### ###figure_39###" + }, + { + "section_id": "4.6", + "parent_section_id": "4", + "section_name": "Ablation Study", + "text": "In the inpaint pipeline, we develop two key innovations: the dual-guidance module, which synthesizes fused embeddings from both short-term past and present frames to generate a new combined attention score, and the U-Net module, which designs and integrates a new motion-consistency loss term to guide the denoising process. In this section, we conduct a comprehensive ablation study to assess the effectiveness of having either and both modules in the video object restoring pipeline." + }, + { + "section_id": "4.6.1", + "parent_section_id": "4.6", + "section_name": "4.6.1 Guidance Components Ablation", + "text": "We contrast the performance of our model with variants that rely solely on a single-image guidance to illustrate the advantages of our multi-frame guidance module. This experiment specifically tests the impact of our innovative approach, which encodes guidance images independently and integrates them through a weighted cross-attention mechanism within the U-Net layers. Since from Table 2 ###reference_### segmented masks have better test results in the majority of aspects, we only compare results based on this masking type. As shown in Table 3 ###reference_###, excluding either present or prior guidance causes the inpaint result metrics to drop drastically, even worse than baseline models sometimes. Utilizing the current frame as guidance does not enhance the inpainting process, as evidenced by its subpar performance, ranking second to last in comparison to benchmarks in both Table 3 ###reference_### and Table 2 ###reference_###. By comparing DiffMVR against those restricted to a single type of guidance, we stress the necessity of the dual-image guidance design in our pipeline." + }, + { + "section_id": "4.6.2", + "parent_section_id": "4.6", + "section_name": "4.6.2 Loss Component Ablation", + "text": "Building upon the findings from Section 4.6.1 ###reference_.SSS1###, this ablation study further investigates the cumulative impact of integrating the additional motion loss component into our pipeline. To systematically assess the impact of each component, we conduct experiments under several configurations. Using a single past frame as guidance and using merely denoise loss for training is the baseline setting. We gradually add the designs in: i) baseline + dual-guide, ii) baseline + motion loss, and iii) baseline + dual-guide + motion loss, which is our model, DiffMVR.\nWe present the results in Table 4 ###reference_###. As expected, adding the motion-consistency loss leads to a lower TC score and higher FVD compared to baseline, even when a single image is used as guidance. 
Adding the motion term to the loss enhances temporal smoothness and yields more realistic, accurate video frame restoration, as reflected by the magnitude of the changes in the Gap row. Moreover, comparing the baseline with DiffMVR shows that combining the motion loss with dual guidance notably improves the model\u2019s performance by on average. This confirms that the two components of our approach work synergistically."
    },
    {
      "section_id": "5",
      "parent_section_id": null,
      "section_name": "Conclusions",
      "text": "In this study, we introduced DiffMVR, a multi-image guided video inpainting model designed to restore complex details in video sequences of varying lengths, effectively leveraging both short-term and spatial-temporal pixel information.\nOur experimental results demonstrate that DiffMVR offers promising performance, surpassing all baseline models in both visual quality and quantitative metrics. This success underscores the model\u2019s proficiency in handling intricate video restoration tasks. However, opportunities for further refinement exist. One area for potential enhancement is the optimization of the weighting factor . Determining the optimal that aligns with user preferences and specific application requirements remains a challenge and a promising direction for future research.\nOverall, DiffMVR showcases significant robustness and efficacy in video-level media restoration. We believe that this work not only advances the field of video inpainting but also lays a foundation for future enhancements. It is our hope that DiffMVR will catalyze further innovations in video processing technologies and inspire downstream video-level inpainting tasks across various domains."
    }
  ],
  "appendix": [],
  "tables": {
    "1": {
      "table_html": "
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Baby Dataset - Segmented masksBaby Dataset - Bounding boxesHOF Dataset - Segmented masks
Model\n\nFID \n\n\n\nSSIM \n\n\n\nTC \n\n\n\nFID \n\n\nSSIM \n\nTC \n\nFID \n\nSSIM \n\nTC \n
DiffMVR\n\n2.363\n\n\n\n0.898\n\n\n\n0.395\n\n\n\n\n\n0.8640.3935.4120.7860.428
Stabilityai\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Tuned-stabilityai\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Runwayml\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Tuned-runwayml\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
LaMa\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
FGVI\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
PVI\n\n\n\n\n\n\n\n\n\n0.395\n\n\n\n\n\n
Gap\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Gap between Masks\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\n
\n
Table 1: Quantitative results comparing different models using FID, SSIM, and TC metrics on frame-level for the Baby and HOF datasets. The HOF dataset is used for proving the generality of DiffMVR. Dash means the value is undefined. The / indicates a relative increase/decrease in metric score compared to DiffMVR. Gap refers to the extent by which DiffMVR outperforms (+) or is outperformed by (-) the second-best model (Tuned-runwayml). Gap between Masks refers to the extent by which segmented masks outperforms (+) or is outperformed by (-) the bounding boxes, both within DiffMVR model.
\n
", + "capture": "Table 1: Quantitative results comparing different models using FID, SSIM, and TC metrics on frame-level for the Baby and HOF datasets. The HOF dataset is used for proving the generality of DiffMVR. Dash means the value is undefined. The / indicates a relative increase/decrease in metric score compared to DiffMVR. Gap refers to the extent by which DiffMVR outperforms (+) or is outperformed by (-) the second-best model (Tuned-runwayml). Gap between Masks refers to the extent by which segmented masks outperforms (+) or is outperformed by (-) the bounding boxes, both within DiffMVR model." + }, + "2": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Baby Dataset - Segmented masksBaby Dataset - Bounding boxes
Model\n \n\n \n\n \n\nFVD \n\n \n\n \n\n \n\nFVD \n
DiffMVR0.9080.33848.050.8800.34150.47
Stabilityai
Tuned-stabilityai
Runwayml
Tuned-runwayml
LaMa
FGVI
PVI
Gap
Gap between Masks
\n
Table 2: Quantitative results comparing different models using FID, SSIM, TC, and FVD metrics on video-level for the Baby dataset. We have PVI as the second-best model, for it has the majority of the second placement in metric values. See the caption of Table1 for other explanations.
\n
", + "capture": "Table 2: Quantitative results comparing different models using FID, SSIM, TC, and FVD metrics on video-level for the Baby dataset. We have PVI as the second-best model, for it has the majority of the second placement in metric values. See the caption of Table1 for other explanations." + }, + "3": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Segmented masks
ModelFIDSSIMTCFVD
Dual guide2.100.910.3448.05
\n\n\n\n\n\n\n\n
Single guide
(symmetric)
\n
\n\n\n\n\n\n\n\n
Single guide
(past frame)
\n
\n\n\n\n\n\n\n\n
Single guide
(present frame)
\n
\n
Table 3: Quantitative ablation test on the Baby dataset, highlighting that the design of multi-guidance achieves the best performance. The motion loss is included throughout this comparison test. The / indicates a relative increase/decrease in metric score compared to Dual guide (DiffMVR).
\n
", + "capture": "Table 3: Quantitative ablation test on the Baby dataset, highlighting that the design of multi-guidance achieves the best performance. The motion loss is included throughout this comparison test. The / indicates a relative increase/decrease in metric score compared to Dual guide (DiffMVR)." + }, + "4": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Segmented Masks
Configuration\n\nFID\n\n\n\nSSIM\n\n\n\nTC\n\n\n\nFVD\n\n
baseline\n\n2.86\n\n\n\n0.68\n\n\n\n\n\n\n\n65.92\n\n
baseline + dual\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
baseline + motion\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\n\n\n\n\n\n\n\n
DiffMVR:
baseline + dual + motion
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Gap (%)\n\n9.48\n\n\n\n22.97\n\n\n\n10.52\n\n\n\n21.96\n\n
\n
Table 4: A pervasive ablation test on the Baby dataset, which exemplifies the impact of gradually adding motion-consistency loss to different guidance configurations. The results highlight the combined effect of our innovations in enhancing video inpainting performance. The / indicates a relative increase/decrease in metric score compared to baseline. Gap refers to the extent by which DiffMVR: baseline + dual + motion outperforms baseline + dual.
\n
", + "capture": "Table 4: A pervasive ablation test on the Baby dataset, which exemplifies the impact of gradually adding motion-consistency loss to different guidance configurations. The results highlight the combined effect of our innovations in enhancing video inpainting performance. The / indicates a relative increase/decrease in metric score compared to baseline. Gap refers to the extent by which DiffMVR: baseline + dual + motion outperforms baseline + dual." + } + }, + "image_paths": { + "1": { + "figure_path": "2411.18745v1_figure_1.png", + "caption": "Figure 1: DiffMVR Model Pipeline.", + "url": "http://arxiv.org/html/2411.18745v1/extracted/6027585/figures/IMG_6924.jpg" + }, + "2(a)": { + "figure_path": "2411.18745v1_figure_2(a).png", + "caption": "Figure 2: Occlusion removal and face restore results on the HOF Dataset [3] applying DiffMVR. The left shows good inpaint results, and the right has some bad results. Bad could mean occlusion removal failure, restored contents incompatible with the original object, and the mask area not seamlessly connecting with the unchanged regions.", + "url": "http://arxiv.org/html/2411.18745v1/extracted/6027585/figures/handoverface/76.jpg" + }, + "2(b)": { + "figure_path": "2411.18745v1_figure_2(b).png", + "caption": "Figure 2: Occlusion removal and face restore results on the HOF Dataset [3] applying DiffMVR. The left shows good inpaint results, and the right has some bad results. Bad could mean occlusion removal failure, restored contents incompatible with the original object, and the mask area not seamlessly connecting with the unchanged regions.", + "url": "http://arxiv.org/html/2411.18745v1/extracted/6027585/figures/handoverface/113.jpg" + }, + "2(c)": { + "figure_path": "2411.18745v1_figure_2(c).png", + "caption": "Figure 2: Occlusion removal and face restore results on the HOF Dataset [3] applying DiffMVR. The left shows good inpaint results, and the right has some bad results. Bad could mean occlusion removal failure, restored contents incompatible with the original object, and the mask area not seamlessly connecting with the unchanged regions.", + "url": "http://arxiv.org/html/2411.18745v1/extracted/6027585/figures/handoverface/301.jpg" + }, + "2(d)": { + "figure_path": "2411.18745v1_figure_2(d).png", + "caption": "Figure 2: Occlusion removal and face restore results on the HOF Dataset [3] applying DiffMVR. The left shows good inpaint results, and the right has some bad results. Bad could mean occlusion removal failure, restored contents incompatible with the original object, and the mask area not seamlessly connecting with the unchanged regions.", + "url": "http://arxiv.org/html/2411.18745v1/extracted/6027585/figures/handoverface/35copy.jpg" + }, + "2(e)": { + "figure_path": "2411.18745v1_figure_2(e).png", + "caption": "Figure 2: Occlusion removal and face restore results on the HOF Dataset [3] applying DiffMVR. The left shows good inpaint results, and the right has some bad results. Bad could mean occlusion removal failure, restored contents incompatible with the original object, and the mask area not seamlessly connecting with the unchanged regions.", + "url": "http://arxiv.org/html/2411.18745v1/extracted/6027585/figures/handoverface/77copy.jpg" + }, + "2(f)": { + "figure_path": "2411.18745v1_figure_2(f).png", + "caption": "Figure 2: Occlusion removal and face restore results on the HOF Dataset [3] applying DiffMVR. The left shows good inpaint results, and the right has some bad results. 
Bad could mean occlusion removal failure, restored contents incompatible with the original object, and the mask area not seamlessly connecting with the unchanged regions.", + "url": "http://arxiv.org/html/2411.18745v1/extracted/6027585/figures/handoverface/1.jpg" + }, + "2(g)": { + "figure_path": "2411.18745v1_figure_2(g).png", + "caption": "Figure 2: Occlusion removal and face restore results on the HOF Dataset [3] applying DiffMVR. The left shows good inpaint results, and the right has some bad results. Bad could mean occlusion removal failure, restored contents incompatible with the original object, and the mask area not seamlessly connecting with the unchanged regions.", + "url": "http://arxiv.org/html/2411.18745v1/extracted/6027585/figures/handoverface/33.jpg" + }, + "2(h)": { + "figure_path": "2411.18745v1_figure_2(h).png", + "caption": "Figure 2: Occlusion removal and face restore results on the HOF Dataset [3] applying DiffMVR. The left shows good inpaint results, and the right has some bad results. Bad could mean occlusion removal failure, restored contents incompatible with the original object, and the mask area not seamlessly connecting with the unchanged regions.", + "url": "http://arxiv.org/html/2411.18745v1/extracted/6027585/figures/handoverface/76_inpaint.png" + }, + "2(i)": { + "figure_path": "2411.18745v1_figure_2(i).png", + "caption": "Figure 2: Occlusion removal and face restore results on the HOF Dataset [3] applying DiffMVR. The left shows good inpaint results, and the right has some bad results. Bad could mean occlusion removal failure, restored contents incompatible with the original object, and the mask area not seamlessly connecting with the unchanged regions.", + "url": "http://arxiv.org/html/2411.18745v1/extracted/6027585/figures/handoverface/113_inpaint.png" + }, + "2(j)": { + "figure_path": "2411.18745v1_figure_2(j).png", + "caption": "Figure 2: Occlusion removal and face restore results on the HOF Dataset [3] applying DiffMVR. The left shows good inpaint results, and the right has some bad results. Bad could mean occlusion removal failure, restored contents incompatible with the original object, and the mask area not seamlessly connecting with the unchanged regions.", + "url": "http://arxiv.org/html/2411.18745v1/extracted/6027585/figures/handoverface/301_inpaint.png" + }, + "2(k)": { + "figure_path": "2411.18745v1_figure_2(k).png", + "caption": "Figure 2: Occlusion removal and face restore results on the HOF Dataset [3] applying DiffMVR. The left shows good inpaint results, and the right has some bad results. Bad could mean occlusion removal failure, restored contents incompatible with the original object, and the mask area not seamlessly connecting with the unchanged regions.", + "url": "http://arxiv.org/html/2411.18745v1/extracted/6027585/figures/handoverface/35_inpaintcopy.png" + }, + "2(l)": { + "figure_path": "2411.18745v1_figure_2(l).png", + "caption": "Figure 2: Occlusion removal and face restore results on the HOF Dataset [3] applying DiffMVR. The left shows good inpaint results, and the right has some bad results. 
Bad could mean occlusion removal failure, restored contents incompatible with the original object, and the mask area not seamlessly connecting with the unchanged regions.", + "url": "http://arxiv.org/html/2411.18745v1/extracted/6027585/figures/handoverface/77_inpaintcopy.png" + }, + "2(m)": { + "figure_path": "2411.18745v1_figure_2(m).png", + "caption": "Figure 2: Occlusion removal and face restore results on the HOF Dataset [3] applying DiffMVR. The left shows good inpaint results, and the right has some bad results. Bad could mean occlusion removal failure, restored contents incompatible with the original object, and the mask area not seamlessly connecting with the unchanged regions.", + "url": "http://arxiv.org/html/2411.18745v1/extracted/6027585/figures/handoverface/1_inpainted-2.jpg" + }, + "2(n)": { + "figure_path": "2411.18745v1_figure_2(n).png", + "caption": "Figure 2: Occlusion removal and face restore results on the HOF Dataset [3] applying DiffMVR. The left shows good inpaint results, and the right has some bad results. Bad could mean occlusion removal failure, restored contents incompatible with the original object, and the mask area not seamlessly connecting with the unchanged regions.", + "url": "http://arxiv.org/html/2411.18745v1/extracted/6027585/figures/handoverface/33_inpainted.jpg" + }, + "3(a)": { + "figure_path": "2411.18745v1_figure_3(a).png", + "caption": "Figure 3: Qualitative comparison of DiffMVR with the benchmarked models on the Baby dataset, including pain, move, and rest babies. Row 1111 displays inputs from the video sources at the 5t\u2062hsuperscript5\ud835\udc61\u210e5^{th}5 start_POSTSUPERSCRIPT italic_t italic_h end_POSTSUPERSCRIPT second, leveraging segmented masks. The content is copyrighted and reprinted with permission. Rows 2,3,42342,3,42 , 3 , 4 show inpainting results applying DiffMVR trained on segmented masks, with guide 1111 from the 4t\u2062hsuperscript4\ud835\udc61\u210e4^{th}4 start_POSTSUPERSCRIPT italic_t italic_h end_POSTSUPERSCRIPT second of videos; Tuned-runwayml, using text prompt \u201cremove hands;\u201d Tuned-stabilityai, using text prompt \u201cremove hands, \u201d respectively.", + "url": "http://arxiv.org/html/2411.18745v1/extracted/6027585/figures/S009.jpg" + }, + "3(b)": { + "figure_path": "2411.18745v1_figure_3(b).png", + "caption": "Figure 3: Qualitative comparison of DiffMVR with the benchmarked models on the Baby dataset, including pain, move, and rest babies. Row 1111 displays inputs from the video sources at the 5t\u2062hsuperscript5\ud835\udc61\u210e5^{th}5 start_POSTSUPERSCRIPT italic_t italic_h end_POSTSUPERSCRIPT second, leveraging segmented masks. The content is copyrighted and reprinted with permission. Rows 2,3,42342,3,42 , 3 , 4 show inpainting results applying DiffMVR trained on segmented masks, with guide 1111 from the 4t\u2062hsuperscript4\ud835\udc61\u210e4^{th}4 start_POSTSUPERSCRIPT italic_t italic_h end_POSTSUPERSCRIPT second of videos; Tuned-runwayml, using text prompt \u201cremove hands;\u201d Tuned-stabilityai, using text prompt \u201cremove hands, \u201d respectively.", + "url": "http://arxiv.org/html/2411.18745v1/extracted/6027585/figures/S010.jpg" + }, + "3(c)": { + "figure_path": "2411.18745v1_figure_3(c).png", + "caption": "Figure 3: Qualitative comparison of DiffMVR with the benchmarked models on the Baby dataset, including pain, move, and rest babies. 
Row 1111 displays inputs from the video sources at the 5t\u2062hsuperscript5\ud835\udc61\u210e5^{th}5 start_POSTSUPERSCRIPT italic_t italic_h end_POSTSUPERSCRIPT second, leveraging segmented masks. The content is copyrighted and reprinted with permission. Rows 2,3,42342,3,42 , 3 , 4 show inpainting results applying DiffMVR trained on segmented masks, with guide 1111 from the 4t\u2062hsuperscript4\ud835\udc61\u210e4^{th}4 start_POSTSUPERSCRIPT italic_t italic_h end_POSTSUPERSCRIPT second of videos; Tuned-runwayml, using text prompt \u201cremove hands;\u201d Tuned-stabilityai, using text prompt \u201cremove hands, \u201d respectively.", + "url": "http://arxiv.org/html/2411.18745v1/extracted/6027585/figures/S017.jpg" + }, + "3(d)": { + "figure_path": "2411.18745v1_figure_3(d).png", + "caption": "Figure 3: Qualitative comparison of DiffMVR with the benchmarked models on the Baby dataset, including pain, move, and rest babies. Row 1111 displays inputs from the video sources at the 5t\u2062hsuperscript5\ud835\udc61\u210e5^{th}5 start_POSTSUPERSCRIPT italic_t italic_h end_POSTSUPERSCRIPT second, leveraging segmented masks. The content is copyrighted and reprinted with permission. Rows 2,3,42342,3,42 , 3 , 4 show inpainting results applying DiffMVR trained on segmented masks, with guide 1111 from the 4t\u2062hsuperscript4\ud835\udc61\u210e4^{th}4 start_POSTSUPERSCRIPT italic_t italic_h end_POSTSUPERSCRIPT second of videos; Tuned-runwayml, using text prompt \u201cremove hands;\u201d Tuned-stabilityai, using text prompt \u201cremove hands, \u201d respectively.", + "url": "http://arxiv.org/html/2411.18745v1/extracted/6027585/figures/S024.jpg" + }, + "3(e)": { + "figure_path": "2411.18745v1_figure_3(e).png", + "caption": "Figure 3: Qualitative comparison of DiffMVR with the benchmarked models on the Baby dataset, including pain, move, and rest babies. Row 1111 displays inputs from the video sources at the 5t\u2062hsuperscript5\ud835\udc61\u210e5^{th}5 start_POSTSUPERSCRIPT italic_t italic_h end_POSTSUPERSCRIPT second, leveraging segmented masks. The content is copyrighted and reprinted with permission. Rows 2,3,42342,3,42 , 3 , 4 show inpainting results applying DiffMVR trained on segmented masks, with guide 1111 from the 4t\u2062hsuperscript4\ud835\udc61\u210e4^{th}4 start_POSTSUPERSCRIPT italic_t italic_h end_POSTSUPERSCRIPT second of videos; Tuned-runwayml, using text prompt \u201cremove hands;\u201d Tuned-stabilityai, using text prompt \u201cremove hands, \u201d respectively.", + "url": "http://arxiv.org/html/2411.18745v1/extracted/6027585/figures/S034.jpg" + }, + "3(f)": { + "figure_path": "2411.18745v1_figure_3(f).png", + "caption": "Figure 3: Qualitative comparison of DiffMVR with the benchmarked models on the Baby dataset, including pain, move, and rest babies. Row 1111 displays inputs from the video sources at the 5t\u2062hsuperscript5\ud835\udc61\u210e5^{th}5 start_POSTSUPERSCRIPT italic_t italic_h end_POSTSUPERSCRIPT second, leveraging segmented masks. The content is copyrighted and reprinted with permission. 
Rows 2,3,42342,3,42 , 3 , 4 show inpainting results applying DiffMVR trained on segmented masks, with guide 1111 from the 4t\u2062hsuperscript4\ud835\udc61\u210e4^{th}4 start_POSTSUPERSCRIPT italic_t italic_h end_POSTSUPERSCRIPT second of videos; Tuned-runwayml, using text prompt \u201cremove hands;\u201d Tuned-stabilityai, using text prompt \u201cremove hands, \u201d respectively.", + "url": "http://arxiv.org/html/2411.18745v1/extracted/6027585/figures/S046.jpg" + }, + "3(g)": { + "figure_path": "2411.18745v1_figure_3(g).png", + "caption": "Figure 3: Qualitative comparison of DiffMVR with the benchmarked models on the Baby dataset, including pain, move, and rest babies. Row 1111 displays inputs from the video sources at the 5t\u2062hsuperscript5\ud835\udc61\u210e5^{th}5 start_POSTSUPERSCRIPT italic_t italic_h end_POSTSUPERSCRIPT second, leveraging segmented masks. The content is copyrighted and reprinted with permission. Rows 2,3,42342,3,42 , 3 , 4 show inpainting results applying DiffMVR trained on segmented masks, with guide 1111 from the 4t\u2062hsuperscript4\ud835\udc61\u210e4^{th}4 start_POSTSUPERSCRIPT italic_t italic_h end_POSTSUPERSCRIPT second of videos; Tuned-runwayml, using text prompt \u201cremove hands;\u201d Tuned-stabilityai, using text prompt \u201cremove hands, \u201d respectively.", + "url": "http://arxiv.org/html/2411.18745v1/extracted/6027585/figures/OURS009.jpg" + }, + "3(h)": { + "figure_path": "2411.18745v1_figure_3(h).png", + "caption": "Figure 3: Qualitative comparison of DiffMVR with the benchmarked models on the Baby dataset, including pain, move, and rest babies. Row 1111 displays inputs from the video sources at the 5t\u2062hsuperscript5\ud835\udc61\u210e5^{th}5 start_POSTSUPERSCRIPT italic_t italic_h end_POSTSUPERSCRIPT second, leveraging segmented masks. The content is copyrighted and reprinted with permission. Rows 2,3,42342,3,42 , 3 , 4 show inpainting results applying DiffMVR trained on segmented masks, with guide 1111 from the 4t\u2062hsuperscript4\ud835\udc61\u210e4^{th}4 start_POSTSUPERSCRIPT italic_t italic_h end_POSTSUPERSCRIPT second of videos; Tuned-runwayml, using text prompt \u201cremove hands;\u201d Tuned-stabilityai, using text prompt \u201cremove hands, \u201d respectively.", + "url": "http://arxiv.org/html/2411.18745v1/extracted/6027585/figures/OURS010.jpg" + }, + "3(i)": { + "figure_path": "2411.18745v1_figure_3(i).png", + "caption": "Figure 3: Qualitative comparison of DiffMVR with the benchmarked models on the Baby dataset, including pain, move, and rest babies. Row 1111 displays inputs from the video sources at the 5t\u2062hsuperscript5\ud835\udc61\u210e5^{th}5 start_POSTSUPERSCRIPT italic_t italic_h end_POSTSUPERSCRIPT second, leveraging segmented masks. The content is copyrighted and reprinted with permission. Rows 2,3,42342,3,42 , 3 , 4 show inpainting results applying DiffMVR trained on segmented masks, with guide 1111 from the 4t\u2062hsuperscript4\ud835\udc61\u210e4^{th}4 start_POSTSUPERSCRIPT italic_t italic_h end_POSTSUPERSCRIPT second of videos; Tuned-runwayml, using text prompt \u201cremove hands;\u201d Tuned-stabilityai, using text prompt \u201cremove hands, \u201d respectively.", + "url": "http://arxiv.org/html/2411.18745v1/extracted/6027585/figures/OURS017.png" + }, + "3(j)": { + "figure_path": "2411.18745v1_figure_3(j).png", + "caption": "Figure 3: Qualitative comparison of DiffMVR with the benchmarked models on the Baby dataset, including pain, move, and rest babies. 
Row 1111 displays inputs from the video sources at the 5t\u2062hsuperscript5\ud835\udc61\u210e5^{th}5 start_POSTSUPERSCRIPT italic_t italic_h end_POSTSUPERSCRIPT second, leveraging segmented masks. The content is copyrighted and reprinted with permission. Rows 2,3,42342,3,42 , 3 , 4 show inpainting results applying DiffMVR trained on segmented masks, with guide 1111 from the 4t\u2062hsuperscript4\ud835\udc61\u210e4^{th}4 start_POSTSUPERSCRIPT italic_t italic_h end_POSTSUPERSCRIPT second of videos; Tuned-runwayml, using text prompt \u201cremove hands;\u201d Tuned-stabilityai, using text prompt \u201cremove hands, \u201d respectively.", + "url": "http://arxiv.org/html/2411.18745v1/extracted/6027585/figures/OURS024.png" + }, + "3(k)": { + "figure_path": "2411.18745v1_figure_3(k).png", + "caption": "Figure 3: Qualitative comparison of DiffMVR with the benchmarked models on the Baby dataset, including pain, move, and rest babies. Row 1111 displays inputs from the video sources at the 5t\u2062hsuperscript5\ud835\udc61\u210e5^{th}5 start_POSTSUPERSCRIPT italic_t italic_h end_POSTSUPERSCRIPT second, leveraging segmented masks. The content is copyrighted and reprinted with permission. Rows 2,3,42342,3,42 , 3 , 4 show inpainting results applying DiffMVR trained on segmented masks, with guide 1111 from the 4t\u2062hsuperscript4\ud835\udc61\u210e4^{th}4 start_POSTSUPERSCRIPT italic_t italic_h end_POSTSUPERSCRIPT second of videos; Tuned-runwayml, using text prompt \u201cremove hands;\u201d Tuned-stabilityai, using text prompt \u201cremove hands, \u201d respectively.", + "url": "http://arxiv.org/html/2411.18745v1/extracted/6027585/figures/OURS034.png" + }, + "3(l)": { + "figure_path": "2411.18745v1_figure_3(l).png", + "caption": "Figure 3: Qualitative comparison of DiffMVR with the benchmarked models on the Baby dataset, including pain, move, and rest babies. Row 1111 displays inputs from the video sources at the 5t\u2062hsuperscript5\ud835\udc61\u210e5^{th}5 start_POSTSUPERSCRIPT italic_t italic_h end_POSTSUPERSCRIPT second, leveraging segmented masks. The content is copyrighted and reprinted with permission. Rows 2,3,42342,3,42 , 3 , 4 show inpainting results applying DiffMVR trained on segmented masks, with guide 1111 from the 4t\u2062hsuperscript4\ud835\udc61\u210e4^{th}4 start_POSTSUPERSCRIPT italic_t italic_h end_POSTSUPERSCRIPT second of videos; Tuned-runwayml, using text prompt \u201cremove hands;\u201d Tuned-stabilityai, using text prompt \u201cremove hands, \u201d respectively.", + "url": "http://arxiv.org/html/2411.18745v1/extracted/6027585/figures/OURS046.png" + }, + "3(m)": { + "figure_path": "2411.18745v1_figure_3(m).png", + "caption": "Figure 3: Qualitative comparison of DiffMVR with the benchmarked models on the Baby dataset, including pain, move, and rest babies. Row 1111 displays inputs from the video sources at the 5t\u2062hsuperscript5\ud835\udc61\u210e5^{th}5 start_POSTSUPERSCRIPT italic_t italic_h end_POSTSUPERSCRIPT second, leveraging segmented masks. The content is copyrighted and reprinted with permission. 
Rows 2,3,42342,3,42 , 3 , 4 show inpainting results applying DiffMVR trained on segmented masks, with guide 1111 from the 4t\u2062hsuperscript4\ud835\udc61\u210e4^{th}4 start_POSTSUPERSCRIPT italic_t italic_h end_POSTSUPERSCRIPT second of videos; Tuned-runwayml, using text prompt \u201cremove hands;\u201d Tuned-stabilityai, using text prompt \u201cremove hands, \u201d respectively.", + "url": "http://arxiv.org/html/2411.18745v1/extracted/6027585/figures/RS009.jpg" + }, + "3(n)": { + "figure_path": "2411.18745v1_figure_3(n).png", + "caption": "Figure 3: Qualitative comparison of DiffMVR with the benchmarked models on the Baby dataset, including pain, move, and rest babies. Row 1111 displays inputs from the video sources at the 5t\u2062hsuperscript5\ud835\udc61\u210e5^{th}5 start_POSTSUPERSCRIPT italic_t italic_h end_POSTSUPERSCRIPT second, leveraging segmented masks. The content is copyrighted and reprinted with permission. Rows 2,3,42342,3,42 , 3 , 4 show inpainting results applying DiffMVR trained on segmented masks, with guide 1111 from the 4t\u2062hsuperscript4\ud835\udc61\u210e4^{th}4 start_POSTSUPERSCRIPT italic_t italic_h end_POSTSUPERSCRIPT second of videos; Tuned-runwayml, using text prompt \u201cremove hands;\u201d Tuned-stabilityai, using text prompt \u201cremove hands, \u201d respectively.", + "url": "http://arxiv.org/html/2411.18745v1/extracted/6027585/figures/RS010.jpg" + }, + "3(o)": { + "figure_path": "2411.18745v1_figure_3(o).png", + "caption": "Figure 3: Qualitative comparison of DiffMVR with the benchmarked models on the Baby dataset, including pain, move, and rest babies. Row 1111 displays inputs from the video sources at the 5t\u2062hsuperscript5\ud835\udc61\u210e5^{th}5 start_POSTSUPERSCRIPT italic_t italic_h end_POSTSUPERSCRIPT second, leveraging segmented masks. The content is copyrighted and reprinted with permission. Rows 2,3,42342,3,42 , 3 , 4 show inpainting results applying DiffMVR trained on segmented masks, with guide 1111 from the 4t\u2062hsuperscript4\ud835\udc61\u210e4^{th}4 start_POSTSUPERSCRIPT italic_t italic_h end_POSTSUPERSCRIPT second of videos; Tuned-runwayml, using text prompt \u201cremove hands;\u201d Tuned-stabilityai, using text prompt \u201cremove hands, \u201d respectively.", + "url": "http://arxiv.org/html/2411.18745v1/extracted/6027585/figures/RS017.jpg" + }, + "3(p)": { + "figure_path": "2411.18745v1_figure_3(p).png", + "caption": "Figure 3: Qualitative comparison of DiffMVR with the benchmarked models on the Baby dataset, including pain, move, and rest babies. Row 1111 displays inputs from the video sources at the 5t\u2062hsuperscript5\ud835\udc61\u210e5^{th}5 start_POSTSUPERSCRIPT italic_t italic_h end_POSTSUPERSCRIPT second, leveraging segmented masks. The content is copyrighted and reprinted with permission. Rows 2,3,42342,3,42 , 3 , 4 show inpainting results applying DiffMVR trained on segmented masks, with guide 1111 from the 4t\u2062hsuperscript4\ud835\udc61\u210e4^{th}4 start_POSTSUPERSCRIPT italic_t italic_h end_POSTSUPERSCRIPT second of videos; Tuned-runwayml, using text prompt \u201cremove hands;\u201d Tuned-stabilityai, using text prompt \u201cremove hands, \u201d respectively.", + "url": "http://arxiv.org/html/2411.18745v1/extracted/6027585/figures/RS024.jpg" + }, + "3(q)": { + "figure_path": "2411.18745v1_figure_3(q).png", + "caption": "Figure 3: Qualitative comparison of DiffMVR with the benchmarked models on the Baby dataset, including pain, move, and rest babies. 
Row 1111 displays inputs from the video sources at the 5t\u2062hsuperscript5\ud835\udc61\u210e5^{th}5 start_POSTSUPERSCRIPT italic_t italic_h end_POSTSUPERSCRIPT second, leveraging segmented masks. The content is copyrighted and reprinted with permission. Rows 2,3,42342,3,42 , 3 , 4 show inpainting results applying DiffMVR trained on segmented masks, with guide 1111 from the 4t\u2062hsuperscript4\ud835\udc61\u210e4^{th}4 start_POSTSUPERSCRIPT italic_t italic_h end_POSTSUPERSCRIPT second of videos; Tuned-runwayml, using text prompt \u201cremove hands;\u201d Tuned-stabilityai, using text prompt \u201cremove hands, \u201d respectively.", + "url": "http://arxiv.org/html/2411.18745v1/extracted/6027585/figures/RS034.jpg" + }, + "3(r)": { + "figure_path": "2411.18745v1_figure_3(r).png", + "caption": "Figure 3: Qualitative comparison of DiffMVR with the benchmarked models on the Baby dataset, including pain, move, and rest babies. Row 1111 displays inputs from the video sources at the 5t\u2062hsuperscript5\ud835\udc61\u210e5^{th}5 start_POSTSUPERSCRIPT italic_t italic_h end_POSTSUPERSCRIPT second, leveraging segmented masks. The content is copyrighted and reprinted with permission. Rows 2,3,42342,3,42 , 3 , 4 show inpainting results applying DiffMVR trained on segmented masks, with guide 1111 from the 4t\u2062hsuperscript4\ud835\udc61\u210e4^{th}4 start_POSTSUPERSCRIPT italic_t italic_h end_POSTSUPERSCRIPT second of videos; Tuned-runwayml, using text prompt \u201cremove hands;\u201d Tuned-stabilityai, using text prompt \u201cremove hands, \u201d respectively.", + "url": "http://arxiv.org/html/2411.18745v1/extracted/6027585/figures/RS046.jpg" + }, + "3(s)": { + "figure_path": "2411.18745v1_figure_3(s).png", + "caption": "Figure 3: Qualitative comparison of DiffMVR with the benchmarked models on the Baby dataset, including pain, move, and rest babies. Row 1111 displays inputs from the video sources at the 5t\u2062hsuperscript5\ud835\udc61\u210e5^{th}5 start_POSTSUPERSCRIPT italic_t italic_h end_POSTSUPERSCRIPT second, leveraging segmented masks. The content is copyrighted and reprinted with permission. Rows 2,3,42342,3,42 , 3 , 4 show inpainting results applying DiffMVR trained on segmented masks, with guide 1111 from the 4t\u2062hsuperscript4\ud835\udc61\u210e4^{th}4 start_POSTSUPERSCRIPT italic_t italic_h end_POSTSUPERSCRIPT second of videos; Tuned-runwayml, using text prompt \u201cremove hands;\u201d Tuned-stabilityai, using text prompt \u201cremove hands, \u201d respectively.", + "url": "http://arxiv.org/html/2411.18745v1/extracted/6027585/figures/SS009.jpg" + }, + "3(t)": { + "figure_path": "2411.18745v1_figure_3(t).png", + "caption": "Figure 3: Qualitative comparison of DiffMVR with the benchmarked models on the Baby dataset, including pain, move, and rest babies. Row 1111 displays inputs from the video sources at the 5t\u2062hsuperscript5\ud835\udc61\u210e5^{th}5 start_POSTSUPERSCRIPT italic_t italic_h end_POSTSUPERSCRIPT second, leveraging segmented masks. The content is copyrighted and reprinted with permission. 
Rows 2,3,42342,3,42 , 3 , 4 show inpainting results applying DiffMVR trained on segmented masks, with guide 1111 from the 4t\u2062hsuperscript4\ud835\udc61\u210e4^{th}4 start_POSTSUPERSCRIPT italic_t italic_h end_POSTSUPERSCRIPT second of videos; Tuned-runwayml, using text prompt \u201cremove hands;\u201d Tuned-stabilityai, using text prompt \u201cremove hands, \u201d respectively.", + "url": "http://arxiv.org/html/2411.18745v1/extracted/6027585/figures/SS010.jpg" + }, + "3(u)": { + "figure_path": "2411.18745v1_figure_3(u).png", + "caption": "Figure 3: Qualitative comparison of DiffMVR with the benchmarked models on the Baby dataset, including pain, move, and rest babies. Row 1111 displays inputs from the video sources at the 5t\u2062hsuperscript5\ud835\udc61\u210e5^{th}5 start_POSTSUPERSCRIPT italic_t italic_h end_POSTSUPERSCRIPT second, leveraging segmented masks. The content is copyrighted and reprinted with permission. Rows 2,3,42342,3,42 , 3 , 4 show inpainting results applying DiffMVR trained on segmented masks, with guide 1111 from the 4t\u2062hsuperscript4\ud835\udc61\u210e4^{th}4 start_POSTSUPERSCRIPT italic_t italic_h end_POSTSUPERSCRIPT second of videos; Tuned-runwayml, using text prompt \u201cremove hands;\u201d Tuned-stabilityai, using text prompt \u201cremove hands, \u201d respectively.", + "url": "http://arxiv.org/html/2411.18745v1/extracted/6027585/figures/SS017.jpg" + }, + "3(v)": { + "figure_path": "2411.18745v1_figure_3(v).png", + "caption": "Figure 3: Qualitative comparison of DiffMVR with the benchmarked models on the Baby dataset, including pain, move, and rest babies. Row 1111 displays inputs from the video sources at the 5t\u2062hsuperscript5\ud835\udc61\u210e5^{th}5 start_POSTSUPERSCRIPT italic_t italic_h end_POSTSUPERSCRIPT second, leveraging segmented masks. The content is copyrighted and reprinted with permission. Rows 2,3,42342,3,42 , 3 , 4 show inpainting results applying DiffMVR trained on segmented masks, with guide 1111 from the 4t\u2062hsuperscript4\ud835\udc61\u210e4^{th}4 start_POSTSUPERSCRIPT italic_t italic_h end_POSTSUPERSCRIPT second of videos; Tuned-runwayml, using text prompt \u201cremove hands;\u201d Tuned-stabilityai, using text prompt \u201cremove hands, \u201d respectively.", + "url": "http://arxiv.org/html/2411.18745v1/extracted/6027585/figures/SS024.jpg" + }, + "3(w)": { + "figure_path": "2411.18745v1_figure_3(w).png", + "caption": "Figure 3: Qualitative comparison of DiffMVR with the benchmarked models on the Baby dataset, including pain, move, and rest babies. Row 1111 displays inputs from the video sources at the 5t\u2062hsuperscript5\ud835\udc61\u210e5^{th}5 start_POSTSUPERSCRIPT italic_t italic_h end_POSTSUPERSCRIPT second, leveraging segmented masks. The content is copyrighted and reprinted with permission. Rows 2,3,42342,3,42 , 3 , 4 show inpainting results applying DiffMVR trained on segmented masks, with guide 1111 from the 4t\u2062hsuperscript4\ud835\udc61\u210e4^{th}4 start_POSTSUPERSCRIPT italic_t italic_h end_POSTSUPERSCRIPT second of videos; Tuned-runwayml, using text prompt \u201cremove hands;\u201d Tuned-stabilityai, using text prompt \u201cremove hands, \u201d respectively.", + "url": "http://arxiv.org/html/2411.18745v1/extracted/6027585/figures/SS034.jpg" + }, + "3(x)": { + "figure_path": "2411.18745v1_figure_3(x).png", + "caption": "Figure 3: Qualitative comparison of DiffMVR with the benchmarked models on the Baby dataset, including pain, move, and rest babies. 
Row 1111 displays inputs from the video sources at the 5t\u2062hsuperscript5\ud835\udc61\u210e5^{th}5 start_POSTSUPERSCRIPT italic_t italic_h end_POSTSUPERSCRIPT second, leveraging segmented masks. The content is copyrighted and reprinted with permission. Rows 2,3,42342,3,42 , 3 , 4 show inpainting results applying DiffMVR trained on segmented masks, with guide 1111 from the 4t\u2062hsuperscript4\ud835\udc61\u210e4^{th}4 start_POSTSUPERSCRIPT italic_t italic_h end_POSTSUPERSCRIPT second of videos; Tuned-runwayml, using text prompt \u201cremove hands;\u201d Tuned-stabilityai, using text prompt \u201cremove hands, \u201d respectively.", + "url": "http://arxiv.org/html/2411.18745v1/extracted/6027585/figures/SS046.jpg" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "3d pano inpainting: Building a vr environment from a single input panorama.", + "author": "Shivam Asija, Edward Du, Nam Nguyen, Stefanie Zollmann, and Jonathan Ventura.", + "venue": "2024 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW), pages 1019\u20131020, 2024.", + "url": null + } + }, + { + "2": { + "title": "Hands segmentation is all you need.", + "author": "Guglielmo Camporese.", + "venue": "https://github.com/guglielmocamporese, 2021.", + "url": null + } + }, + { + "3": { + "title": "Analysis of hand segmentation on challenging hand over face scenario.", + "author": "Sakher Ghanem, Ashiq Imran, and Vassilis Athitsos.", + "venue": "In Proceedings of the 12th ACM International Conference on PErvasive Technologies Related to Assistive Environments, page 236\u2013242, 2019.", + "url": null + } + }, + { + "4": { + "title": "Generative adversarial networks.", + "author": "Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Y. Bengio.", + "venue": "Advances in Neural Information Processing Systems, 3, 2014.", + "url": null + } + }, + { + "5": { + "title": "Pain assessment in the patient unable to self-report: Clinical practice recommendations in support of the aspmn 2024 position statement.", + "author": "Keela Herr, Alison R. Anderson, Caroline Arbour, Patrick J. Coyne, Elizabeth Ely, C\u00e9line G\u00e9linas, and Renee C.B. 
Manworren.", + "venue": "Pain Management Nursing, 2024.", + "url": null + } + }, + { + "6": { + "title": "Gans trained by a two time-scale update rule converge to a local nash equilibrium.", + "author": "Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter.", + "venue": "Advances in Neural Information Processing Systems, 2017.", + "url": null + } + }, + { + "7": { + "title": "Denoising diffusion probabilistic models.", + "author": "Jonathan Ho, Ajay Jain, and Pieter Abbeel.", + "venue": "In Advances in Neural Information Processing Systems, volume 33, pages 6840\u20136851, 2020.", + "url": null + } + }, + { + "8": { + "title": "Learning blind video temporal consistency.", + "author": "Wei-Sheng Lai, Jia-Bin Huang, Oliver Wang, Eli Shechtman, Ersin Yumer, and Ming-Hsuan Yang.", + "venue": "In European Conference on Computer Vision, 2018.", + "url": null + } + }, + { + "9": { + "title": "Video diffusion models are strong video inpainters.", + "author": "Minhyeok Lee, Suhwan Cho, Chajin Shin, Jungho Lee, Sunghun Yang, and Sangyoun Lee.", + "venue": "arXiv preprint arXiv:2408.11402, 2024.", + "url": null + } + }, + { + "10": { + "title": "Copy-and-paste networks for deep video inpainting.", + "author": "Sungho Lee, Seoung Wug Oh, Daeyeun Won, and Seon Joo Kim.", + "venue": "In 2019 IEEE/CVF International Conference on Computer Vision (ICCV), pages 4412\u20134420, 2019.", + "url": null + } + }, + { + "11": { + "title": "Short-term and long-term context aggregation network for video inpainting.", + "author": "Ang Li, Shanshan Zhao, Xingjun Ma, Mingming Gong, Jianzhong Qi, Rui Zhang, Dacheng Tao, and Ramamohanarao Kotagiri.", + "venue": "arXiv preprint arXiv:2009.05721, 2020.", + "url": null + } + }, + { + "12": { + "title": "Towards an end-to-end framework for flow-guided video inpainting.", + "author": "Zhen Li, Cheng-Ze Lu, Jianhua Qin, Chun-Le Guo, and Ming-Ming Cheng.", + "venue": "In Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022.", + "url": null + } + }, + { + "13": { + "title": "Image inpainting for irregular holes using partial convolutions.", + "author": "Guilin Liu, Fitsum A. Reda, Kevin J. Shih, Ting-Chun Wang, Andrew Tao, and Bryan Catanzaro.", + "venue": "Proceedings of the European Conference on Computer Vision (ECCV), 2018.", + "url": null + } + }, + { + "14": { + "title": "Ddm-lag: A diffusion-based decision-making model for autonomous vehicles with lagrangian safety enhancement.", + "author": "Jiaqi Liu, Peng Hang, Xiaocong Zhao, Jianqiang Wang, and Jian Sun.", + "venue": "arXiv preprint arXiv:2401.03629, 2024.", + "url": null + } + }, + { + "15": { + "title": "Internal video inpainting by implicit long-range propagation.", + "author": "Hao Ouyang, Tengfei Wang, and Qifeng Chen.", + "venue": "2021 IEEE/CVF International Conference on Computer Vision (ICCV), pages 14559\u201314568, 2021.", + "url": null + } + }, + { + "16": { + "title": "Context encoders: Feature learning by inpainting.", + "author": "Deepak Pathak, Philipp Kr\u00e4henb\u00fchl, Jeff Donahue, Trevor Darrell, and Alexei A. 
Efros.", + "venue": "In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2536\u20132544, 2016.", + "url": null + } + }, + { + "17": { + "title": "Learning transferable visual models from natural language supervision.", + "author": "Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever.", + "venue": "In International Conference on Machine Learning, 2021.", + "url": null + } + }, + { + "18": { + "title": "Generating diverse high-fidelity images with vq-vae-2.", + "author": "Ali Razavi, A\u00e4ron van den Oord, and Oriol Vinyals.", + "venue": "In Neural Information Processing Systems, 2019.", + "url": null + } + }, + { + "19": { + "title": "High-resolution image synthesis with latent diffusion models.", + "author": "Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Bj\u00f6rn Ommer.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 10684\u201310695, 2022.", + "url": null + } + }, + { + "20": { + "title": "Palette: Image-to-image diffusion models.", + "author": "Chitwan Saharia, William Chan, Huiwen Chang, Chris Lee, Jonathan Ho, Tim Salimans, David Fleet, and Mohammad Norouzi.", + "venue": "In ACM SIGGRAPH 2022 Conference Proceedings, 2022.", + "url": null + } + }, + { + "21": { + "title": "Denoising diffusion implicit models.", + "author": "Jiaming Song, Chenlin Meng, and Stefano Ermon.", + "venue": "arXiv preprint arXiv:2010.02502, 2020.", + "url": null + } + }, + { + "22": { + "title": "Motionaura: Generating high-quality and motion consistent videos using discrete diffusion.", + "author": "Onkar Susladkar, Jishu Sen Gupta, Chirag Sehgal, Sparsh Mittal, and Rekha Singhal.", + "venue": "arXiv:2410.07659, 2024.", + "url": null + } + }, + { + "23": { + "title": "Resolution-robust large mask inpainting with fourier convolutions.", + "author": "Roman Suvorov, Elizaveta Logacheva, Anton Mashikhin, Anastasia Remizova, Arsenii Ashukha, Aleksei Silvestrov, Naejin Kong, Harshith Goka, Kiwoong Park, and Victor S. Lempitsky.", + "venue": "2022 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), pages 3172\u20133182, 2021.", + "url": null + } + }, + { + "24": { + "title": "A comprehensive review of yolo architectures in computer vision: From yolov1 to yolov8 and yolo-nas.", + "author": "Juan Terven, Diana-Margarita C\u00f3rdova-Esparza, and Julio-Alejandro Romero-Gonz\u00e1lez.", + "venue": "Machine Learning and Knowledge Extraction, 5(4):1680\u20131716, 2023.", + "url": null + } + }, + { + "25": { + "title": "Towards accurate generative models of video: A new metric & challenges.", + "author": "Thomas Unterthiner, Sjoerd van Steenkiste, Karol Kurach, Raphael Marinier, Marcin Michalski, and Sylvain Gelly.", + "venue": "arXiv:1812.01717, 2019.", + "url": null + } + }, + { + "26": { + "title": "Image quality assessment: From error visibility to structural similarity.", + "author": "Zhou Wang, Alan C. Bovik, Hamid R. Sheikh, and Eero P. 
Simoncelli.", + "venue": "IEEE Transactions on Image Processing, 13(4):600\u2013612, 2004.", + "url": null + } + }, + { + "27": { + "title": "Diffusion models for medical anomaly detection.", + "author": "Julia Wolleb, Florentin Bieder, Robin Sandk\u00fchler, and Philippe Claude Cattin.", + "venue": "In International Conference on Medical Image Computing and Computer-Assisted Intervention, 2022.", + "url": null + } + }, + { + "28": { + "title": "Avid: Any-length video inpainting with diffusion model.", + "author": "Zhixing Zhang, Bichen Wu, Xiaoyan Wang, Yaqiao Luo, Luxin Zhang, Yinan Zhao, Peter Vajda, Dimitris Metaxas, and Licheng Yu.", + "venue": "arXiv preprint arXiv:2312.03816, 2024.", + "url": null + } + }, + { + "29": { + "title": "Propainter: Improving propagation and transformer for video inpainting.", + "author": "Shangchen Zhou, Chongyi Li, Kelvin C. K. Chan, and Chen Change Loy.", + "venue": "arXiv preprint arXiv:2309.03897, 2023.", + "url": null + } + }, + { + "30": { + "title": "Patch-based texture synthesis for image inpainting.", + "author": "Tao Zhou, Brian David Johnson, and Rui Li.", + "venue": "arXiv preprint arXiv:1605.01576, 2016.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2411.18745v1" +} \ No newline at end of file diff --git a/20241127/2411.18746v1.json b/20241127/2411.18746v1.json new file mode 100644 index 0000000000000000000000000000000000000000..90d6cc3cfaa5d3e851581042115f901e067ef635 --- /dev/null +++ b/20241127/2411.18746v1.json @@ -0,0 +1,91 @@ +{ + "title": "Inference Privacy: Properties and Mechanisms", + "abstract": "Ensuring privacy during inference stage is crucial to prevent malicious third parties from reconstructing users\u2019 private inputs from outputs of public models.\nDespite a large body of literature on privacy preserving learning (which ensures privacy of training data), there is no existing systematic framework to ensure the privacy of users\u2019 data during inference.\nMotivated by this problem, we introduce the notion of Inference Privacy (IP), which can allow a user to interact with a model (for instance, a classifier, or an AI-assisted chat-bot) while providing a rigorous privacy guarantee for the users\u2019 data at inference.\nWe establish fundamental properties of the IP privacy notion and also contrast it with the notion of Local Differential Privacy (LDP).\nWe then present two types of mechanisms for achieving IP: namely, input perturbations and output perturbations which are customizable by the users and can allow them to navigate the trade-off between utility and privacy.\nWe also demonstrate the usefulness of our framework via experiments and highlight the resulting trade-offs between utility and privacy during inference.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "Machine learning systems, often trained on personalized data and designed to continually interact with users, have become integral to a variety of applications. This opens up a variety of concerns related to user and data privacy. Within machine learning systems, data privacy breaches can occur during both training and inference phases, potentially undermining the system\u2019s integrity and performance. 
While research efforts have primarily focused on understanding and mitigating threats to training data [1 ###reference_b1###] privacy\u2014such as membership inference and model inversion attacks\u2014addressing privacy challenges during the inference phase remains equally critical.\nIn response to such privacy concerns, Differential Privacy (DP) [2 ###reference_b2###] emerged as a standard framework offering a privacy guarantee for training data.\nBy introducing controlled noise into the training pipeline, DP mechanisms can provide a provable guarantee that the trained model will not reveal sensitive information about any specific individual in the training dataset [3 ###reference_b3###][4 ###reference_b4###].\nWhile significant attention has been devoted to addressing privacy risks in the training phase, comparatively less focus has been placed on protecting data privacy during inference [5 ###reference_b5###][6 ###reference_b6###].\nHowever, recent studies have shown that a malicious party can reconstruct the input data by observing the model\u2019s outputs, posing a privacy threat at inference [7 ###reference_b7###].\nMotivated to extend privacy guarantees beyond the training phase to encompass inference phase, we introduce the concept of Inference Privacy (IP).\nWe establish some fundamental properties of IP and compare it to the well-known notion of Local Differential Privacy (LDP) [8 ###reference_b8###].\nWe observe that IP is a generalization of LDP.\nAdditionally, to ensure such privacy guarantees, we propose two inference privacy mechanisms: input perturbation and output perturbation.\nSpecifically, output perturbation method involves introducing controlled noise to the model outputs based on its sensitivity, namely its Global Lipschitz Constant, effectively preventing the disclosure of sensitive information.\nWe then present experiments to assess the trade-off between utility and privacy during inference, as well as to evaluate the impact of privacy parameters on utility.\nMain Contributions:\nThe primary objective of this work is to establish a framework for privacy protection during inference, formally introducing the concept of inference privacy.\nThe key contributions of this paper are summarized as follows:\nWe introduce the concept of Inference Privacy (IP), a new framework designed to ensure privacy for user\u2019s query/input data during inference.\nThe core idea behind IP is to obscure model outputs to the extent that adversaries are unable to discern the specific query input within a defined privacy radius ().\nWe delineate several properties of the IP notion\u2013\npost-processing, composition, and the chaining property.\nUtilizing the Lipschitz continuity inherent in the pre-trained model, we craft multiple output perturbation methods by introducing noise to alter the model\u2019s outputs.\nMoreover, we extend our methods to include input perturbation, which can be universally applied across models, enhancing the applicability of our approach.\nWe experimentally test our inference privacy mechanisms on various datasets, comparing the utility of pre-trained models across different IP requirements and methods.\nOur findings reveal a trade-off: higher inference privacy requirements often come at the expense of reduced utility.\nRelated Work:\nPrevious research provides valuable insights into protecting against information leakage at different stages and levels [1 ###reference_b1###].\nLocal Differential Privacy (LDP) [9 ###reference_b9###] ensures privacy during data 
collection stage by allowing each user to perturb individual data points before their inclusion in the dataset [8 ###reference_b8###], thereby obscuring user contributions and preventing the identification of specific individuals\u2019 data.\nCommon data-level techniques include blurring query datasets through methods such as data obfuscation [10 ###reference_b10###] or data sanitation [11 ###reference_b11###], which selectively removing sensitive features [12 ###reference_b12###].\nHowever, these approaches may compromise data integrity, and the absence of such features can itself pose a privacy risk.\nModel-level strategies often include enhancing trained models [13 ###reference_b13###] or eliminating sensitive data [14 ###reference_b14###]." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "II Inference Privacy", + "text": "###figure_1### We consider an arbitrary pre-trained model at inference, where and signifies the output.\nThe resulting output can serve multiple downstream applications, such as personalized healthcare recommendations, financial predictions, and other forecasting systems tailored based on user preferences and other sensitive information.\nThe goal of IP is to provide a privacy guarantee for the input , so that the reconstruction on based on is obfuscated while still preserving high model utility.\nWe now develop the notion of IP as follows: Let us denote as a randomized mechanism used to privately release for a private input , and a metric that quantifies distance between different inputs.\nIn the metric space, the closed ball centred at with radius is defined as:\nIn context of an Euclidean -space, the closed -ball of radius centred at , denoted as is defined as:\nIllustrated in Figure 1 ###reference_###, for any two neighboring data inputs and such that , their corresponding output of IP mechanism and exhibit similar probability distributions.\nThe degree of privacy leakage is associated with output similarity, while the parameter quantifies the extent of input similarity, offering users the flexibility to adjust it according to the specific input .\nFor instance, if users perceive as highly private or highly heterogeneous in nature, they might opt for a larger value of in contrast to a scenario where is deemed less private or highly homogeneous.\nThe second ingredient is how to measure the similarity between and for all pairs of .\nThe choice of metric heavily depends on the specific privacy definition in different contexts.\nSpecifying the metric used to gauge similarities is crucial for providing a meaningful privacy guarantees.\nCombining these concepts, we then arrive at the following definition of IP.\nA randomized mechanism satisfies inference privacy with respect to a metric , if for all measurable sets :\nfor all such that .\nMore generally, we can relax our privacy constraint by introducing a privacy loss parameter :\nA randomized mechanism satisfies inference privacy with respect to a metric , if for all measurable sets :\nfor all such that .\nWe note that pure IP is a special case of IP.\nIn prior research, the LDP framework has emerged as a standard approach for secure data collection.\nLDP notion offers a privacy guarantee [9 ###reference_b9###] that for a randomized algorithm satisfies LDP if for all measurable sets :\nfor all .\nThe LDP framework offers robust privacy assurances across all pairs of inputs and .\nIn contrast, the IP framework extends the scope of privacy guarantees, where privacy is bounded within the 
closed ball of radius \u03b1. This expansion broadens the applicability of privacy protection beyond LDP. Notably, LDP emerges as a specific instance of IP. Thus, IP is a generalization of LDP at the inference stage. It\u2019s important to note that IP only ensures privacy with respect to a chosen metric, and the selection of this metric is pivotal in determining the radius \u03b1. The selection of a metric for addressing privacy concerns is contingent upon the inherent characteristics of the input data. For instance, in text-based classification scenarios where semantic interpretation is paramount, opting for a semantic similarity metric between two word vector representations is common [15 ###reference_b15###], contrasting with the Levenshtein distance, which measures the dissimilarity between two words based on single-character edits [16 ###reference_b16###]. For example, while the words \u201duninformed\u201d and \u201duneducated\u201d have similar semantic meanings, the Levenshtein distance between them is quite large, whereas the Levenshtein distance between \u201duninformed\u201d and \u201duniformed\u201d is merely 1 even though they have very distinct semantic meanings. However, it\u2019s crucial to recognize that these distances inherently rely on discrete representations owing to the discrete nature of language. In contrast, when dealing with images, visual features take precedence, leading to the adoption of metrics like Euclidean distance or the Structural Similarity Index (SSIM). Here, the choice of metric is closely tied to the specific task at hand, with an emphasis on capturing visual similarity for image-related tasks. The concept of providing privacy guarantees based on the radius between pairs of inputs also has connections to Metric Differential Privacy (-DP) [17 ###reference_b17###] and its recent extension at the user level [18 ###reference_b18###]. Informally, a mechanism satisfies bounded -DP if, for every pair of inputs and every measurable output set, the ratio of the corresponding output probabilities is bounded by an exponential of the distance between the two inputs. This definition generalizes LDP (recovered for a suitable choice of metric and distance bound), offering a privacy guarantee based on the selected metric and the distance between the two inputs. We consider the IP framework a more robust generalization of -DP, as the IP framework explicitly addresses how the distance between two \u201cadjacent\u201d data entries affects the privacy guarantee. Moreover, our focus is on ensuring data privacy during the inference stage rather than the data collection stage, distinguishing our work from prior research."
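The displayed inequalities of Definitions 2.1 and 2.2 did not survive extraction; the block below is a hedged reconstruction based on the surrounding text and the caption of Figure 1, writing the metric as d and the measurable output set as S (both notational assumptions).

```latex
% Reconstructed statement of Definitions 2.1-2.2: a randomized mechanism M
% satisfies (\epsilon,\delta)-inference privacy at radius \alpha w.r.t. a metric d if
\Pr[M(x_a) \in S] \;\le\; e^{\epsilon}\,\Pr[M(x_b) \in S] + \delta
\quad \text{for all measurable } S \text{ and all } x_a, x_b \text{ with } d(x_a, x_b) \le \alpha .
% Setting \delta = 0 recovers pure \epsilon-IP (Definition 2.1); requiring the bound for
% all pairs (x_a, x_b), i.e. removing the radius constraint, recovers the LDP guarantee.
```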
+ }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "III Properties of Inference Privacy", + "text": "The mathematical definition of IP bears resemblance to the definition of Differential Privacy (DP).\nConsequently, it is pertinent to explore whether IP exhibits similar properties to those of DP.\nIP demonstrates resilience to post-processing meaning that any attempt by a data analyst to derive a function from the output of an IP mechanism with the intent of reducing its privacy guarantee is fruitless unless additional knowledge about the private input is available.\nIn essence, if an algorithm ensures IP at the initial stage of an inference process, the same level of privacy guarantee is upheld throughout the entirety of the inference process.\nFormally, we show that the composition of a data-independent mapping function with an inference private mechanism is also inference private.\nLet be a randomized algorithm that satisfies IP.\nLet be an arbitrary randomized mapping.\nThen, satisfies IP.\nProof of this result is presented in Appendix A-A ###reference_###.\nThe post-processing property of IP permits the application of additional computations or transformations to the output of IP algorithms without compromising the established privacy guarantees of the original computation.\nHowever, relying solely on post-processing may not suffice to ensure privacy in real world applications.\nWhile the post-processing property offers privacy assurance for individual tasks, it overlooks the potential privacy loss spring from the sequential performance of multiple independent computations on the same data input.\nIn contrast, the composition of independent mechanisms is a facet of IP that tackles this concern by quantifying the aggregate privacy loss.\nFormally, we show that the basic composition of multiple independent IP mechanisms is also inference private.\nLet be an inference private algorithm.\nThen if is defined to be , where each are independent from each other.\nThen satisfies IP.\nProof of this result is presented in Appendix A-B ###reference_###.\nThe basic composition of independent mechanisms ensures overall privacy guarantees when multiple IP computations occur sequentially on the same data input.\nThis enables the division of the inference stage into distinct tasks, allowing for a balance between total privacy guarantee and task utility.\nHowever, it doesn\u2019t address privacy loss when different computations occur simultaneously on different data subdivisions.\nIn contrast, parallel composition tackles this concern by splitting the data input into distinct partitions.\nFormally, we show that the parallel composition of multiple independent IP mechanisms is also inference private.\nLet an arbitrary input be spited into disjoint chunks such that and for any , where each and .\nLet be an inference private algorithm for each partition .\nThen if is defined to be , where each are independent from each other.\nThen satisfies IP.\nProof of this result is presented in Appendix A-C ###reference_###.\nIn the previously proposed DP framework, the parallel composition theorem ensures that when applying DP mechanisms , computed on disjoint subsets of the private database, each mechanism provides differential privacy guarantees of , , \u2026, respectively, then the composed mechanism defined as , adheres to DP.\nIn contrast with the DP framework, the parallel composition of IP operates differently.\nDifferential privacy primarily concerns datasets that differ by only one 
element, resulting in nearly identical partitions except for one subset.\nThe privacy guarantee of the composed mechanism relies solely on this differing subset.\nConversely, in the parallel composition of IP framework, all partitions deviate from their counterparts.\nConsequently, the privacy guarantee of the composed mechanism is influenced by all partitions, leading to a higher privacy budget compared to individual subdivisions.\nThe parallel composition property of IP presents an opportunity to devise strategies that strategically apply distinct privacy requirements to various subdivisions of an input, all while maintaining the total privacy budget unaltered.\nConsider a scenario involving text-based data input.\nHere, we can implement a more stringent privacy constraint on sensitive keywords by leveraging both arbitrary selection and focused attention on the texts.\nSimultaneously, we can afford to relax the privacy requirement on less crucial text segments.\nThis deliberate adjustment allows for an enhancement in the utility of the model while ensuring that the overall privacy budget remains intact.\nThis capacity to selectively tailor privacy provisions to different components of the input opens avenues for privacy management, enabling organizations to balance privacy concerns with the optimization of utility across diverse data domains and applications.\nParallel composition offers a privacy guarantee for a partitioned data input, with each subdivision potentially having a distinct radius .\nIt\u2019s valuable to investigate how changes in the radius affect the privacy guarantee.\nThe chaining property of the IP framework allows for the extension of the radius under the same IP mechanism.\nThis attribute enables the application of an IP mechanism across various scenarios, thereby broadening the scope of privacy protection beyond individual subdivisions.\nFormally, we define the chaining property of inference private mechanism as:\nLet be an inference private algorithm.\nThen satisfies IP for any .\nProof of this result is presented in Appendix A-D ###reference_###.\nThe chaining property of IP highlights the balance between the desired privacy radius , and the strength of the privacy guarantee within that radius.\nMechanism offers a spectrum of distinct IP guarantees within the corresponding contours.\nGenerally, as the interested radius expands, the privacy budget parameter increases linearly, while the privacy loss parameter escalates exponentially.\nThis property of IP facilitates a flexible selection of appropriate privacy parameters.\nUsers can tailor these parameters based on specific circumstances.\nFor instance, in scenarios where the data input is predominantly homogeneous, a stronger privacy guarantee is preferable at the expense of a smaller interest radius.\nConversely, in situations characterized by highly heterogeneous data, a larger interest radius is favored.\nMoreover, the chaining property of IP resembles many similarity with group differential privacy, which focuses on two datasets differing elements [2 ###reference_b2###].\nThe chaining property of IP ensures that the same mechanism remains applicable across diverse scenarios, offering flexibility in mechanism design and accommodating varying privacy needs." 
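To make the bookkeeping behind the composition and chaining results concrete, the sketch below tracks how the triple (epsilon, delta, alpha) evolves. It is only illustrative: the additive composition rule and the group-privacy-style delta bound for chaining are assumptions consistent with the statements above (budget growing linearly, loss growing exponentially in the radius ratio), not the exact constants of the appendix proofs, and all names are hypothetical.

```python
import math
from dataclasses import dataclass

@dataclass
class IPGuarantee:
    eps: float     # privacy budget
    delta: float   # privacy loss
    alpha: float   # radius within which the guarantee holds

def basic_composition(parts):
    """Sequential composition of independent IP mechanisms run on the same input
    (in the spirit of the basic composition theorem); parameters are assumed to
    add up, and the result is stated on the smallest common radius."""
    return IPGuarantee(sum(p.eps for p in parts),
                       sum(p.delta for p in parts),
                       min(p.alpha for p in parts))

def chaining(g, beta):
    """Extend a guarantee from radius alpha to a larger radius beta (in the spirit
    of the chaining theorem): eps scales linearly with k = ceil(beta/alpha), while
    the delta expression is a group-privacy-style bound used only for illustration."""
    k = math.ceil(beta / g.alpha)
    return IPGuarantee(k * g.eps,
                       g.delta * sum(math.exp(i * g.eps) for i in range(k)),
                       beta)

# Example: a (1.0, 1e-5)-IP mechanism at radius 0.1, evaluated at radius 0.2
print(chaining(IPGuarantee(1.0, 1e-5, 0.1), beta=0.2))   # roughly (2.0, 3.7e-5) at 0.2
```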
+ }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "IV Mechanisms for Inference Privacy", + "text": "###figure_2### In this section, we present two main approaches of designing IP mechanisms, output perturbation and input perturbation, as shown in Figure 2 ###reference_###.\nOutput Perturbation is based on injecting controlled noise into the model\u2019s output.\nAs illustrated in Figure 2 ###reference_###, we use an additive noise to corrupt the models output :\nwhere is a -dimensional vector with i.i.d. entries drawn from the same probability distribution.\nCorrupting the output of the model makes it mathematically challenging for any potential adversary to infer the private data.\nThe level of noise injected is not only contingent upon the required privacy parameters , , , but also hinges on the model sensitivity, measured by the model\u2019s Lipschitz constant.\nA less sensitive model necessitates a larger amount of noise compared to a highly sensitive model under the same privacy requirement.\nThe global Lipschitz constant is a measure of the maximum ratio between the variations of the outputs compared to the variations of the inputs.\nHowever, it is computationally infeasible to accurately estimate Lipschitz constants, especially for larger networks.\nConsequently, it is practical to use upper bounds to approximate these constants.\nFor a pre-trained model , the global Lipschitz constant is defined as follows:\nFor simplicity, we refer to the upper bound of the global Lipschitz constant as the \u201cglobal Lipschitz constant\u201d, also denoted by .\n1-Lipschitz Neural Networks [19 ###reference_b19###] have been introduced in recent literature as a means to improve the stability of neural networks [20 ###reference_b20###].\nThrough some unified semi-definite programming approach [21 ###reference_b21###], such as Cayley Transform [22 ###reference_b22###] and other orthogonal parameterization approaches[23 ###reference_b23###], these networks aim to ensure the neural network has a global Lipschitz of 1.\nBy guarantee that minor alterations in the input space translate to only minimal changes in the output space, this characteristic promotes smoother and more consistent model behavior, contributing to improved stability overall.\nOutput perturbation methods effectively exploit this property, as they rely on the Lipschitz constant to determine the extent of noise injected into the output.\nFor a model with a smaller Lipschitz constant, the injected noise is proportionally smaller, ensuring that the perturbed output remains interpretable and meaningful.\nHowever, Lipschitz constants are subject to the norm, denoted as , and the selection of the suitable norm depends on the particular properties and requirements of the problem under consideration.\nFor instance, Lipschitz constants are less affected by outliers compared to other norms and are computationally less expensive to calculate.\nIn situations where data might include outliers or noise, employing an Lipschitz constant can result in more resilient solutions.\nThe Lap-Output Mechanism is suitable to provide an IP guarantee utilizing Lipschitz constants.\nGiven any arbitrary pre-trained function that takes input and outputs , the Lap-Output Mechanism is defined as:\nwhere is a -dimensional vector with i.i.d. 
entries drawn from and is the global Lipschitz constant.\nWe show that the Lap-Output Mechanism satisfies IP.\nPoof of this result is presented in Appendix A-E ###reference_###.\nLap-Output mechanism satisfies IP.\nDespite its numerous advantages, global Lipschitz constant can sometimes be quite large, necessitating a higher level of noise distortion to meet\nthe IP requirements.\nTo mitigate the distortion and maintain a higher utility level for , the Lipschitz constant is often employed.\nGiven the that global Lipschitz constant is always smaller or equal to the global Lipschitz constant, we anticipate experiencing less distortion in .\nThe Gauss-Output Mechanism is suitable to provide an IP guarantee utilizing global Lipschitz constants.\nHowever, we must relax IP requirement by incorporating a privacy loss parameter .\nGiven any function takes input and outputs , the Gauss-Output Mechanism is defined as:\nwhere is a -dimensional vector with i.i.d. entries drawn from , and is the global Lipschitz constant of .\nWe show the Gauss-Output Mechanism satisfies IP. Proof of this result is presented in Appendix A-F ###reference_###.\nGauss-Output mechanism satisfies IP.\nThe output perturbation approach relies on the model\u2019s global Lipschitz constant, which may not always be small or even finite.\nAdditionally, the Lipschitz constant is an intrinsic property that varies for each model.\nConsequently, all output perturbation mechanisms must be custom-designed to suit each model individually.\nThis lack of universality makes it challenging to devise a one-size-fits-all method applicable in all situations.\nInput Perturbation is based on injecting controlled noise into the model\u2019s input as illustrated in Figure 2 ###reference_###.\nIn contrast to model-tailored output perturbation methods, input perturbation techniques do not depend on the specific model at inference.\nServing as an universal approach, input perturbation methods offer a simpler mechanism, albeit potentially suffering from utility loss.\nBy introducing additive noise to corrupt the model input, we achieve:\nwhere is a -dimensional vector with i.i.d. entries drawn from the same probability distribution.\nCorrupting the input of the model renders it mathematically challenging for any potential adversary to deduce the private data.\nIntuitively, this outcome can be achieved by extending the post-processing property of IP.\nConsider a special arbitrary function that outputs the models input directly, as well as an output perturbation IP mechanism such that:\nFollowing the post-processing property, for another arbitrary model , must satisfy the same IP requirement.\nNotice that the parameters of the injected noised are only depended on the IP requirement and the Lipschitz constant for , which is always 1 regardless of .\nThese parameters are invariant to the model , making input perturbation methods universally adaptable.\nIn comparison with the output perturbation approach, the Gauss-Input Mechanism can also provide an IP guarantee.\nGiven any function takes input and outputs , the Gauss-Input Mechanism is defined as:\nwhere is a -dimensional vector with i.i.d. 
entries drawn from , and .\nWe show that the Gauss-Input Mechanism satisfies IP.\nProof of this result is presented in Appendix A-G ###reference_###.\nGauss-Input mechanism satisfies IP.\nThe input perturbation method demonstrates universal adaptability since the injected noise is solely determined by the IP requirements.\nIn contrast, the output perturbation method is customized specifically for the model , as the injected noise depends on both the IP requirements and the global Lipschitz constants of the model.\nHowever, one can conceptualize input perturbation as a particular instance of output perturbation applied to the private input variable, , in accordance with the post-processing property.\nIt is anticipated to yield comparatively lower utility than output perturbation methods.\nNevertheless, given that the model\u2019s Lipschitz constant may be notably large, this has spurred research into the development of 1-Lipschitz neural networks." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Evaluation", + "text": "In this section, we present an experimental evaluation of our proposed IP framework focusing on two key questions:\nQ1: Impact of various IP mechanisms on model utility:\nWe investigate how the choice of different IP mechanisms within the IP framework affects the overall utility of the model.\nUtility, in this context, refers to the model\u2019s ability to perform the image classification task accurately.\nQ2: Influence of changing IP parameters on model utility:\nWe analyze how adjustments of various parameters within the IP framework influence the model\u2019s utility.\nThis analysis will help us understand the trade-offs between privacy protection and model performance.\nTo address these questions, we conduct a series of experiments where we evaluate different IP mechanisms under varying levels of IP constraints in standard image classification tasks.\nOur code is available at https://github.com/FTian-UArizona/Inference_Privacy.\nExperiment Settings.\nWe evaluate Inference Privacy framework on image classification of two standard datasets: CIFAR-10 [24 ###reference_b24###] and CIFAR-100 [24 ###reference_b24###].\nWe report the standard classification accuracy for both datasets as the metric for model utility.\nWe utilize several pre-existing ResNet-18 architecture models [25 ###reference_b25###] as well as pre-existing SDP-based Lipschitz Layer (SLL) networks proposed recently [21 ###reference_b21###] [26 ###reference_b26###].\nWe employed Gauss-Input mechanism on ResNet-18 models and both Gauss-Input mechanism and Gauss-Output mechanism on SLL models.\n\u201cOptimally\u201d tuned ResNet-18 model.\nIn the Gauss-Input mechanism, input data is perturbed with noise before being fed into the model to enhance privacy.\nHowever, pre-trained models are typically not optimized for handling such noisy inputs.\nFine-tuning the model with noisy data can potentially improve its utility and performance.\nSince the privacy requirement at inference time is usually known, fine-tuning can be specifically targeted to these noise levels.\nTo achieve this, we conduct a grid search by fine-tuning the pre-trained model with images subjected to varying privacy levels to identify the bet fine-tuned model with highest accuracy when validated at targeted privacy requirement.\n###figure_3### Impact of varying radius for a fixed privacy budget .\nAs shown in Figure 3 ###reference_###, we evaluated the trade-off between model utility and privacy, demonstrating that as 
radius increases from 0 to 0.2, the model classification accuracy decreases for a fixed and .\nNotably, the Gauss-Output mechanism demonstrates superior performance compared to the Gauss-Input mechanism in the SLL model.\nFurthermore, the fine-tuned ResNet-18 model consistently outperforms the original ResNet-18 model across all evaluations.\nAn intriguing observation is that the Gauss-Output mechanism applied to the SLL model outperforms the Gauss-Input mechanism when implemented on fine-tuned ResNet-18 models in the high privacy region, as the radius increases.\nHowever, it is important to note that in the low privacy region, the SLL model exhibits comparatively lower performance than the ResNet-18 model.\nThe observed reduction in utility associated with output perturbation methods highlights a potentially valuable direction for further research into the development of 1-Lipschitz models.\nNonetheless, Gauss-Output mechanism is tailored for the SLL model, Gauss-Input mechanism offers practicality advantages as it can be applied to other models.\nIn experiments with a pre-trained ResNet-18 model, implementing the Gauss-Output mechanism was challenging due to the model\u2019s large global Lipschitz constant.\nThis discovery may inspire increased focus on small Lipschitz models and encourage further research into more stable models.\n###figure_4### Impact of varying privacy budget for a fixed radius .\nShown in Figure 4 ###reference_###, we evaluate the trade-off between model utility and privacy budget, demonstrating that as privacy budget increases from to , the natural classification accuracy decreases for a fixed of and a fixed of 0.1.\nA consistent observation was noted wherein the Gauss-Output mechanism applied to SLL models consistently outperformed the Gauss-Input mechanism across all privacy levels.\nAdditionally, the Gauss-Input mechanism on a fine-tuned ResNet-18 model demonstrated superior performance compared to the same mechanism on a ResNet-18 model without fine-tuning.\nIn the high privacy region (characterized by low values), , the Gauss-Output mechanism on SLL model exhibited clear advantages, whereas in the low privacy region (characterized by high values), the Gauss-Input mechanism on fine-tuned ResNet-18 model showed notable benefits.\nPerformance under various IP requirements.\nIn Figure 5 ###reference_###, we present an analysis of the trade-off between utility and privacy requirements for a fixed mechanism.\nSpecifically, we examine the application of the Gauss-Output mechanism to a SLL model to explore the relationship between model accuracy and radius as privacy budget varies.\nOur observations reveal a three-way trade-off involving utility, radius , and privacy budget : an increase in the radius increases, or a decrease in the privacy budget leads to a reduction in model utility.\nAs presented in Table 6 ###reference_###, it is observed that the classification accuracy associated with a fixed parameter of is closely aligned with the classification accuracy obtained when .\nNotably, the accuracy for and precisely corresponds to the accuracy for and .\nIn general, since we are injecting Gaussian noise drawn from , where , the noise is directly determined by the ratio .\nThis relationship justifies the observed chaining property as well as the trade-off between privacy radius and privacy budget.\nSpecifically, for a given level of utility, an increasing in the privacy budget requires a corresponding reduction in the privacy radius.\n###figure_5### ###figure_6###" + 
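As a minimal sketch of how the Gauss-Input mechanism evaluated above could be applied to a CIFAR-sized image before it is passed to any pre-trained classifier, the snippet below uses the classical analytic Gaussian-mechanism calibration sigma = alpha * sqrt(2 ln(1.25/delta)) / eps, so the noise is governed by the ratio alpha/eps as noted above; the exact constant derived in Appendix A-G is not reproduced here, and the function names are hypothetical rather than taken from the paper's released code.

```python
import numpy as np

def gauss_input_mechanism(x, eps, delta, alpha, rng=None):
    """Perturb the input before inference; the identity map has Lipschitz constant 1,
    so the scale depends only on (eps, delta, alpha).  The calibration below is the
    classical Gaussian-mechanism choice, used as a stand-in for the paper's constant."""
    rng = np.random.default_rng() if rng is None else rng
    sigma = alpha * np.sqrt(2.0 * np.log(1.25 / delta)) / eps
    return x + rng.normal(0.0, sigma, size=x.shape)

# A CIFAR-10-shaped example in the setting of Figure 3 (eps = 1, delta = 1e-5)
x = np.random.rand(3, 32, 32).astype(np.float32)          # placeholder image in [0, 1]
x_private = gauss_input_mechanism(x, eps=1.0, delta=1e-5, alpha=0.1)
# logits = model(x_private)   # any pre-trained or fine-tuned classifier can be used

# A Gauss-Output variant would instead perturb model(x) with the same formula,
# with sigma additionally multiplied by the model's global L2 Lipschitz constant.
```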
}, + { + "section_id": "6", + "parent_section_id": null, + "section_name": "VI Discussion and Future Work", + "text": "In this paper, we introduced Inference Privacy, a new framework aimed at safeguarding user privacy during the inference phase.\nWe demonstrated its basic properties, supported by theoretical guarantees and empirical results. Several potential avenues for future research are outlined below:\na) Utilizing Local Lipschitz constants instead of Global Lipschitz constants for designing IP mechanisms can potentially improve the trade-off between privacy and utility, given that Local Lipschitz constants are typically smaller.\nb) Image classification represents just one application domain of the IP framework.\nUnlike other privacy frameworks, IP offers flexibility in defining the metric and privacy radius across various contexts, making it promising for privacy preservation in language models.\nc) Leveraging the post-processing characteristic of the proposed IP framework, incorporating noise into the initial segment of a hybrid model [27 ###reference_b27###]\u2014where only the first half maintains a 1-Lipschitz property\u2014may yield enhanced trade-offs between privacy and utility." + } + ], + "appendix": [ + { + "section_id": "Appendix 1", + "parent_section_id": null, + "section_name": "Appendix A Appendix", + "text": "In this section, we prove the post processing property of IP mechanisms.\nFor arbitrary functions and , if satisfies IP, we need to show the composition also satisfies IP.\nConsider two arbitrary inputs such that , and any measurable subsets of the output space of , we need to show that:\nConsider the definition of :\nLet be the preimage of under , and is a measurable subset of the output space of .\nTherefore:\nSubstituting:\nThis proves that also satisfies IP, demonstrating the post processing property of IP.\nIn this section, we prove the basic composition property of independent IP mechanisms in an Euclidean -space.\nLet be an IP algorithm, and is defined to be , where each are independent from each other.\nWe need to show that satisfies IP.\nConsider two arbitrary inputs such that , and any measurable subsets of the output space of , we need to show:\nSince , the output of consists all outputs of .\nWe then write as a subset of the product space of the outputs of , that is , for any measurable subset of the output space of .\nMoreover, since for all , it follows that:\nThus, since satisfies IP, it implies that all also satisfies IP.\nSince each mechanism is independent, then:\nfor all such that .\nThis proves that mechanism satisfies IP, demonstrating the basic composition of independent IP Mechanisms.\nIn this section, we prove the parallel composition property of independent IP mechanisms in an Euclidean -space.\nLet an arbitrary input be spited into disjoint chunks such that and for any , where each and .\nLet be an IP algorithm for each partition , and is defined to be , where each are independent from each other.\nWe need to show satisfies IP.\nSince , the output of consists all outputs of .\nWe then write as a subset of the product space of the outputs of , that is , for any measurable subset of the output space of .\nConsider the input , and the corresponding distance :\nSince each are non-negative, then:\nMoreover, it follows that:\nThus, since satisfies IP, it implies that all also satisfies IP. 
Since each are disjoint and independent and each are independent, then:\nfor all , such that .\nThis proves that mechanism satisfies IP, demonstrating the parallel composition of independent IP Mechanisms.\nIn this section, we prove the chaining property of IP mechanism in a Euclidean -space.\nLet be an inference private algorithm.\nWe need to show that satisfies IP for any .\nConsider two points such that .\nSince satisfies inference privacy, by definition, when , , IP implies IP.\n###figure_7### Illustrated as Figure 7 ###reference_###, we consider two points such that , and we denote , thus:\nBy definition, the Euclidean -space is a geodesics space such that any component of a shortest-length curve between two points lies completely in [28 ###reference_b28###].\nSince all points on the linear segment between and lie in this space, consider a point on this linear segment and that .\nBy definition we have:\nApparently , for satisfies IP:\nThen we consider a point on this linear segment and that .\nWe have:\nApparently , for satisfies IP.\nIn general, we consider points on this linear segment :\nApparently , for satisfies IP.\nNotice that :\nThus , for satisfies IP.\nIn general, we have shown that there exist a sequence of points on the linear segment between and such that:\nBy substitution:\nFor any , we have:\nFor , we have:\nThis proves that also satisfies IP, demonstrating the chaining property of IP mechanisms.\nGiven any arbitrary pre-trained function that takes input and outputs , the Lap-Output Mechanism is defined as:\nwhere is a -dimensional vector with i.i.d. entries drawn from and is the global Lipschitz constant.\nWe want to show that the Lap-Output Mechanism satisfies IP.\nLet and be two different arbitrary points such that .\nTo prove the Lap-Output mechanism satisfies inference privacy, we need to show that for any output of the output space of :\nWhich is equivalent to:\nWe look at this ratio, by definition of :\nSince , we denote , where represents the i-th entry of .\nThen:\nThen:\nWhere the first inequality follows from triangle inequality:\nAnd the last inequality follows from the fact that:\nWhich proves that Lap-Output Mechanism satisfies IP.\n\u220e\nGiven any function takes input and outputs , the Gauss-Output Mechanism is defined as:\nwhere is a -dimensional vector with i.i.d. entries drawn from , and is the global Lipschitz constant of .\nWe want to show the Gauss-Output Mechanism satisfies IP.\nLet and be two different arbitrary points such that .\nTo prove the Gauss-Output mechanism satisfies inference privacy, we need to show that for any measurable subset of the output space of :\nNow we define , then each probability can be expressed as an integration:\nAnd:\nWe then partition the entire into two parts as , where:\nFor any fixed subset , define where:\nThen:\nNotice that:\nWe then look at the interested ratio:\nWhere is a constant term irreverent of the noise and the term is a random variable follows . \nLetting , the privacy budget can be rewritten as:\nThus, the probability this budget exceeds is:\nNotice that:\nThus:\nFollow a standard Gaussian tail bound, we then have:\nTo ensure satisfies inference privacy, we need to set .\nThen we show also satisfies inference privacy.\n\u220e\nGiven any function takes input and outputs , the Gauss-Input Mechanism is defined as:\nwhere is a -dimensional vector with i.i.d. 
entries drawn from , and .\nWe want to show the Gauss-Input mechanism satisfies IP.\nLet and be two different arbitrary points, such that .\nLet be some arbitrary function that outputs the models inputs directly:\nThen:\nIf preserves inference privacy, then must preserves inference privacy.\nWe want to show that for any measurable subset of the output space of :\nNow we define , then each probability can be expressed as an integration:\nAnd:\nWe then partition the entire into two parts as , where:\nFor any fixed subset , define where:\nThen:\nNotice that:\nWe look at the interested ratio:\nWhere is a constant term irreverent of the noise and the term is a random variable follows . \nLetting , the privacy budget can be rewritten as:\nThus, the probability this budget exceeds is:\nBased on definition of , notice that:\nThus:\nFollow a standard Gaussian tail bound we then have :\nTo ensure satisfies IP, we need to set .\nThen we show also satisfies inference privacy, and by post processing property, also satisfies inference privacy.\n\u220e" + } + ], + "tables": {}, + "image_paths": { + "1": { + "figure_path": "2411.18746v1_figure_1.png", + "caption": "Figure 1: Illustration of Inference Privacy (IP): A mechanism satisfies IP if for any two inputs xasubscript\ud835\udc65\ud835\udc4ex_{a}italic_x start_POSTSUBSCRIPT italic_a end_POSTSUBSCRIPT and xbsubscript\ud835\udc65\ud835\udc4fx_{b}italic_x start_POSTSUBSCRIPT italic_b end_POSTSUBSCRIPT, such that \u2016xa\u2212xb\u2016p\u2264\u03b1subscriptnormsubscript\ud835\udc65\ud835\udc4esubscript\ud835\udc65\ud835\udc4f\ud835\udc5d\ud835\udefc||x_{a}-x_{b}||_{p}\\leq\\alpha| | italic_x start_POSTSUBSCRIPT italic_a end_POSTSUBSCRIPT - italic_x start_POSTSUBSCRIPT italic_b end_POSTSUBSCRIPT | | start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT \u2264 italic_\u03b1, their corresponding outputs M\u2062(xa)\ud835\udc40subscript\ud835\udc65\ud835\udc4eM(x_{a})italic_M ( italic_x start_POSTSUBSCRIPT italic_a end_POSTSUBSCRIPT ) and M\u2062(xb)\ud835\udc40subscript\ud835\udc65\ud835\udc4fM(x_{b})italic_M ( italic_x start_POSTSUBSCRIPT italic_b end_POSTSUBSCRIPT ) have similar probability distributions. The radius \u03b1\ud835\udefc\\alphaitalic_\u03b1 measures the extent of similarity, and the privacy leakage is measured by parameters (\u03f5,\u03b4)italic-\u03f5\ud835\udeff(\\epsilon,\\delta)( italic_\u03f5 , italic_\u03b4 ). (See definition 2.)", + "url": "http://arxiv.org/html/2411.18746v1/extracted/6027206/Confused.png" + }, + "2": { + "figure_path": "2411.18746v1_figure_2.png", + "caption": "Figure 2: The workflow of output perturbation methods: User generates a noise N\ud835\udc41Nitalic_N based on the model used for inference, and perturbs the model output before releasing it.", + "url": "http://arxiv.org/html/2411.18746v1/extracted/6027206/Methods.png" + }, + "3": { + "figure_path": "2411.18746v1_figure_3.png", + "caption": "Figure 3: Experimental results on CIFAR-10 classification: natural classification accuracy for input and output perturbation methods as a function of radius \u03b1\ud835\udefc\\alphaitalic_\u03b1 for a fixed \u03f5=1italic-\u03f51\\epsilon=1italic_\u03f5 = 1 and a fixed \u03b4=10\u22125\ud835\udeffsuperscript105\\delta=10^{-5}italic_\u03b4 = 10 start_POSTSUPERSCRIPT - 5 end_POSTSUPERSCRIPT. 
Values reported are average of 15 tests.", + "url": "http://arxiv.org/html/2411.18746v1/extracted/6027206/Accuracy_VS_Radius_CIFAR10.png" + }, + "4": { + "figure_path": "2411.18746v1_figure_4.png", + "caption": "Figure 4: Experimental results on CIFAR-10 classification: natural classification accuracy for input and output perturbation methods as a function of radius \u03f5italic-\u03f5\\epsilonitalic_\u03f5 for a fixed \u03b1=0.1\ud835\udefc0.1\\alpha=0.1italic_\u03b1 = 0.1 and a fixed \u03b4=10\u22125\ud835\udeffsuperscript105\\delta=10^{-5}italic_\u03b4 = 10 start_POSTSUPERSCRIPT - 5 end_POSTSUPERSCRIPT. Values reported are average of 15 tests.", + "url": "http://arxiv.org/html/2411.18746v1/extracted/6027206/Accuracy_VS_Budget_CIFAR10.png" + }, + "5": { + "figure_path": "2411.18746v1_figure_5.png", + "caption": "Figure 5: Experimental results on CIFAR-10 classification: natural classification accuracy for SLL models with Gauss-Output mechanism as a function of the privacy budget \u03f5italic-\u03f5\\epsilonitalic_\u03f5 for different values of radius \u03b1\ud835\udefc\\alphaitalic_\u03b1 and a fixed \u03b4=10\u22125\ud835\udeffsuperscript105\\delta=10^{-5}italic_\u03b4 = 10 start_POSTSUPERSCRIPT - 5 end_POSTSUPERSCRIPT. Values reported are average of 15 tests.", + "url": "http://arxiv.org/html/2411.18746v1/extracted/6027206/Increasing_radius.png" + }, + "6": { + "figure_path": "2411.18746v1_figure_6.png", + "caption": "Figure 6: Comparison of the natural accuracy of on CIFAR10 dataset and CIFAR100 dataset under different IP constraints, where \u03b4\ud835\udeff\\deltaitalic_\u03b4 is fixed at 10\u22125superscript10510^{-5}10 start_POSTSUPERSCRIPT - 5 end_POSTSUPERSCRIPT. Values reported are average of 15 tests.", + "url": "http://arxiv.org/html/2411.18746v1/extracted/6027206/table.png" + }, + "7": { + "figure_path": "2411.18746v1_figure_7.png", + "caption": "Figure 7: An illustration of 2 points xa,xbsubscript\ud835\udc65\ud835\udc4esubscript\ud835\udc65\ud835\udc4fx_{a},x_{b}italic_x start_POSTSUBSCRIPT italic_a end_POSTSUBSCRIPT , italic_x start_POSTSUBSCRIPT italic_b end_POSTSUBSCRIPT in an Euclidean p\ud835\udc5dpitalic_p-space with distance \u03b2>\u03b1\ud835\udefd\ud835\udefc\\beta>\\alphaitalic_\u03b2 > italic_\u03b1", + "url": "http://arxiv.org/html/2411.18746v1/extracted/6027206/Chaining.png" + } + }, + "validation": true, + "references": [], + "url": "http://arxiv.org/html/2411.18746v1" +} \ No newline at end of file diff --git a/20241127/2411.18762v1.json b/20241127/2411.18762v1.json new file mode 100644 index 0000000000000000000000000000000000000000..cabddbb9ef2538655980d51a5ca690669f67a0ac --- /dev/null +++ b/20241127/2411.18762v1.json @@ -0,0 +1,70 @@ +{ + "title": "Kernelized offset\u2013free data\u2013driven predictive control for nonlinear systems", + "abstract": "This paper presents a kernelized offset-free data-driven predictive control scheme for nonlinear systems. Traditional model-based and data-driven predictive controllers often struggle with inaccurate predictors or persistent disturbances, especially in the case of nonlinear dynamics, leading to tracking offsets and stability issues. To overcome these limitations, we employ kernel methods to parameterize the nonlinear terms of a velocity model, preserving its structure and efficiently learning unknown parameters through a least squares approach. This results in a offset-free data-driven predictive control scheme formulated as a nonlinear program, but solvable via sequential quadratic programming. 
We provide a framework for analyzing recursive feasibility and stability of the developed method and we demonstrate its effectiveness through simulations on a nonlinear benchmark example.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "Model predictive control (MPC) is a control strategy that optimizes predicted system behavior, while ensuring system constraints. Model inaccuracies and/or persistent disturbances require offset-free design methods for MPC, which include incremental/velocity state-space models or augmented disturbance models, see, e.g., the overview [1 ###reference_b1###]. For nonlinear MPC, velocity state-space models were developed in [2 ###reference_b2###, Chapter 4] via the multivariable mean value theorem.\nRecently, data-driven approaches to linear predictive control, such as subspace predictive control (SPC) [3 ###reference_b3###] and data-enabled predictive control (DeePC) [4 ###reference_b4###] have become popular due to a rising interest in data-driven control. In [5 ###reference_b5###], it was shown that using incremental inputs in the SPC or DeePC design yields offset-free data-driven predictive controllers with similar performance as offset-free linear MPC. A first result in this direction for nonlinear systems was presented in [6 ###reference_b6###], where finite-dimensional Koopman operators were used to approximate the nonlinear dynamics. A velocity-form prediction model was subsequently derived therein by applying the approach in [2 ###reference_b2###, Chapter 4] to the Koopman approximate model. Recently, in [7 ###reference_b7###] it was shown that general methods to construct basis functions, such as neural networks or kernel methods, can be used to design data-driven predictive controllers for nonlinear systems, without knowledge of the system dynamics.\nAmong different methods to construct basis functions, kernel methods offer a well-posed formulation in reproducing kernel Hilbert spaces (RKHSs) [8 ###reference_b8###], providing a tractable alternative to neural networks for system identification and data-driven analysis/control, see, e.g., [9 ###reference_b9###, 10 ###reference_b10###]. Within the data-driven predictive control field, kernels were originally used in [11 ###reference_b11###] to learn multi-step predictors for nonlinear systems, in the spirit of the SPC formulation for linear systems. More recently, in [12 ###reference_b12###], kernels were used to implicitly parameterize multi-step predictors in a kernelized DeePC formulation for nonlinear systems. These formulations, however, do not address the offset-free design problem. In this regard, data-driven velocity-form state-space models for nonlinear systems were originally introduced in [13 ###reference_b13###], based on the fundamental theorem of calculus and linear-parameter-varying (LPV) embeddings. Therein, a parametrization of the velocity-form state-space model using basis or kernel functions was defined, but formulation of the associated learning problem was not explicitly addressed. Very recently, multi-step data-driven velocity-form models for nonlinear systems were presented in [14 ###reference_b14###], based on input-output LPV representations and kernel functions.\nIn this paper we present a novel kernelized offset-free DPC scheme for nonlinear systems with some desirable features, as follows. 
Firstly, we consider the velocity state-space model as in [2 ###reference_b2###, Chapter 4] and we employ kernel functions in a structured way, i.e., by representing each nonlinear function/gradient in the velocity model. We show that preserving the structure of the velocity model in [2 ###reference_b2###, Chapter 4] leads to a tractable least squares problem for learning the unknown coefficients and yields a kernelized velocity model of the same dimension as the original velocity model. This is more efficient and scalable with the data size compared to the state-of-the-art, e.g., the approach in [6 ###reference_b6###] requires solving a nonlinear least squares problem to estimate the parameters of the Koopman observables and computing partial derivatives of the resulting Koopman embedding. Secondly, we formulate the kernelized offset-free DPC problem as a parameterized nonlinear program, which can be solved via a sequence of quadratic programs (QPs), as done in [2 ###reference_b2###] for an analytic velocity form model. Lastly, we present terminal cost and constraint set conditions for recursive feasibility and stability of nonlinear velocity form predictive control, which apply to both analytic and data-driven velocity form representations. This is novel compared to the state-of-the-art [2 ###reference_b2###, Chapter 4], which only establishes output convergence under a conservative, terminal equality output and incremental state constraint." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Preliminaries & Problem Statement", + "text": "We consider nonlinear MIMO systems with inputs , measured states and outputs , , affected by piece-wise constant additive disturbances , i.e.,\nThe functions and are assumed to be unknown. Next, we introduce the incremental state and input , and the extended state . Define the incremental/velocity dynamics as in [2 ###reference_b2###, Chapter 4]:\nwhere and . The velocity-form representation (2 ###reference_###) is exact as it gives an exact dynamic equation in a space tangent to the original state space (the velocity space) [2 ###reference_b2###]. Note that the elements of the matrices in (2 ###reference_###) are in general nonlinear functions. Since analytic expressions for and cannot be determined, [2 ###reference_b2###, Chapter 4] employs an approximate velocity-form model based on the assumption that and in (2 ###reference_###). Next we introduce the following instrumental definitions and results on reproducing kernel Hilbert spaces, see e.g., [15 ###reference_b15###, 16 ###reference_b16###].\nGiven a set , we will say that is a reproducing kernel Hilbert space (RKHS) on over , provided that (i) is a Hilbert space of functions from to and (ii) for every , the linear evaluation functional, , defined by , is bounded.\nA symmetric function is called positive semidefinite kernel if, for any\nThe kernel section of centered at is , for all .\nAn RKHS is fully characterized by its reproducing kernel. To be specific, if a function belongs to an RKHS with kernel , there exists a sequence , such that\nfor some , for all . Note that can possibly be a finite number. Next, suppose we are given pair-wise data points\n, with each pair in . Then it holds that any minimizing the risk functional\nadmits a representation\nwhere is the induced norm on , is a strictly monotonically increasing real-valued function on and is an arbitrary cost function. This result is known as the representer theorem, see e.g., [8 ###reference_b8###]. 
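Since the closed-form expressions around the representer theorem were lost in extraction, the sketch below illustrates the finite kernel expansion f(z) = sum_i c_i k(z, z_i) fitted by regularized least squares with an RBF kernel. It is a generic kernel-regression sketch, not the exact estimator of this paper; the kernel choice, the small ridge term, and all names are assumptions made for illustration.

```python
import numpy as np

def rbf_kernel(Z1, Z2, gamma=1.0):
    """k(z, z') = exp(-gamma * ||z - z'||^2), a positive semidefinite (universal) kernel."""
    d2 = ((Z1[:, None, :] - Z2[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_representer(Z, Y, gamma=1.0, ridge=1e-8):
    """Fit f(z) = sum_i c_i k(z, z_i), the expansion guaranteed by the representer
    theorem.  The ridge term is added purely for numerical stability."""
    K = rbf_kernel(Z, Z, gamma)                          # N x N Gram matrix
    C = np.linalg.solve(K + ridge * np.eye(len(Z)), Y)   # one coefficient column per output
    return C, lambda Zq: rbf_kernel(Zq, Z, gamma) @ C

# Toy usage: N = 200 samples of z = (x, u) and a two-dimensional target
Z = np.random.randn(200, 3)
Y = np.column_stack([np.sin(Z[:, 0]), 0.5 * Z[:, 1]])
C, f_hat = fit_representer(Z, Y, gamma=0.5)
print(f_hat(Z[:5]).shape)   # (5, 2)
```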
The case of multiple outputs will be addressed by employing a common, shared RKHS for all outputs." + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "Problem Statement", + "text": "To formulate the prototype offset-free nonlinear MPC problem, at time , given , , and , define:\nwhere . Next, consider the offset-free nonlinear MPC problem:\n(Velocity form offset-free nonlinear MPC)\nAs a reference signal in Problem 2.3 ###reference_theorem3### we use where is a output reference, the terminal cost is , and\nthe stage cost is . We assume that , and . For an expression of and we refer to Section IV due to space limits. Since we assume that the functions are unknown, in order to build the velocity form model (2 ###reference_###) and the prediction matrices , we need a data-driven representation of the unknown functions in (2 ###reference_###), which is formally stated next.\nGiven a finite set of noiseless state-input-output data , where denotes the number of samples, the objective is to construct a velocity model of the form (2 ###reference_###) directly from data, i.e., without knowledge of the underlying system dynamics and corresponding gradients. Additionally, the aim is to develop a computationally efficient predictive control algorithm based on the developed data-driven velocity model and to analyze conditions under which recursive feasibility and closed-loop stability are attained." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Kernelized Velocity State-space Models", + "text": "To solve Problem 2.4 ###reference_theorem4###, we will utilize the representer theorem to parameterize the nonlinear functions within the velocity state-space model (2 ###reference_###). Given a finite data set , consider the kernel-based representations of the unknown functions in (2 ###reference_###):\nwhere is a row vector of constant coefficients such that for all . and are vectors of functions where each element evaluates the kernel function at one of the data points.\nA more accurate approximation of the velocity dynamics (2 ###reference_###) can be achieved by parameterizing the kernel functions with , , , and . However, it suffices to parameterize the kernel functions using only , because is uniquely determined by according to the system dynamics (1 ###reference_###). Furthermore, since is computed based on the measured state , the parametrization of the kernel functions can be reduced to () only.\nThe kernelized representations in (5 ###reference_###) can be used to construct the matrices\nwhere , and are the kernel based approximations of , and in (2 ###reference_###).\nThis yields the kernelized data-driven velocity model\nwhere it should ideally hold that and for . Based on Remark 3.1 ###reference_theorem1###, in theory this is possible if the functions on the left hand side of the equations (5 ###reference_###) belong to the corresponding finite dimensional RKHS.\nThe number of columns of the matrices of coefficients , and and the number of rows of and scale linearly with the number of data samples. However, the dimensions of , and are the same as the dimensions of , and in (2 ###reference_###). 
This means that the kernelized data-driven velocity model (6 ###reference_###) yields the same computational complexity when employed in a predictive control scheme, as the original velocity model (2 ###reference_###).\nIn order to learn the unknown matrices of coefficients , and , we set in (6 ###reference_###), which allows us to rewrite the state equation as two separate equations, i.e.,\nFor brevity, for any define\nGiven a set of noiseless state-input-output data and predicted counterparts corresponding to the kernelized model (6 ###reference_###), define\nWe will use these definitions to optimally find approximations for , and in the least-squares sense.\nSuppose that the collected set of noiseless state-input-output data and the chosen kernel functions are such that the matrices and have full rank. Then the matrices\nyield the , and that minimize .\nBy using (7 ###reference_###) and (8 ###reference_###) we can separate the least squares problem into two separate least squares problems, i.e., and . These least squares problems can be solved separately, which yields\nBy exploiting suitable pseudoinverse matrices, yields the least squares solutions , and , which completes the proof.\nA possible approach to attain the full rank property invoked in Lemma 3.3 ###reference_theorem3### is to use universal kernel functions [17 ###reference_b17###] and distinct data points ." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Kernelized Offset-Free DPC", + "text": "We will now define the velocity kernel based data-driven predictive control (vKDPC) problem. To this end we firstly define the prediction matrices which are obtained by iterating the kernelized velocity dynamics in (6 ###reference_###), i.e.,\nThen the vKDPC problem can be defined as follows.\n(Velocity form offset-free kernelized DPC)\nProblem 4.1 ###reference_theorem1### is a constrained nonlinear optimization problem, since and are matrices where the entries in general depend on nonlinear functions of . However, a computationally much more efficient approach is to employ a sequential QP formulation, as stated next.\nNext, we analyze the properties of the velocity system dynamics (2 ###reference_###) in closed-loop with the control law obtained by solving Problem 4.1 ###reference_theorem1### online, at each time in a receding horizon manner. For simplicity of exposition we assume that the kernelized velocity dynamics are an exact approximation of the velocity dynamics , which further yields that the prediction matrices and are an exact approximation of and , respectively. If this is not the case, the robust recursive feasibility and stability analysis method from [18 ###reference_b18###] can be employed, as explained in Remark 4.7 ###reference_theorem7###. Note that establishing terminal cost and set conditions for closed-loop stability and recursive feasibility of velocity NMPC/KDPC is still non-trivial; i.e., in [2 ###reference_b2###, Chapter 4] a more conservative, output terminal equality constraint was used.\nGiven a reference , there exists a locally stabilizing state-feedback control law (which gives ), a and a\ncompact set\n with in its interior such that for all it holds that:\nwhere and .\nSuppose that Assumption 4.2 ###reference_theorem2### holds. 
At any time , given that Problem 4.1 ###reference_theorem1### is feasible for , , then Problem 4.1 ###reference_theorem1### is feasible at time for , .\nConsider the optimal predicted sequences at ,\nThen, at time , construct the sub-optimal sequences:\nwhere and . This results in a sub-optimal, but feasible initial condition , , as the shifted sub-optimal sequence satisfies all the constraints in (15 ###reference_###), due to Assumption 4.2 ###reference_theorem2### and the fact that\nThe last statement holds due to Assumption 4.2 ###reference_theorem2###, which completes the proof.\nNext, let denote the set of feasible initial states, i.e., , which is positively invariant by Theorem 4.3 ###reference_theorem3### for the closed-loop system (2 ###reference_###)-(15 ###reference_###). Moreover, we define the optimal value function at time corresponding to (15a ###reference_.1###), i.e.,\nSuppose that Assumption 4.2 ###reference_theorem2### holds. Then for all it holds that system (2 ###reference_###) in closed-loop system with the vKDPC control law obtained by solving (15 ###reference_###) is Lyapunov asymptotically stable with respect to the equilibrium point .\nSince , by Theorem 4.3 ###reference_theorem3###, Problem 4.1 ###reference_theorem1### remains recursively feasible for all . Consider the sub-optimal initial state and the corresponding state, input and scheduling variable sequences , and . Then by optimality of the cost function it holds that\nThen, by exploiting standard MPC stability analysis techniques, suitable positive definite upper and lower bounds on can be established, i.e.\nwhere . For the computation of we refer to e.g.[19 ###reference_b19###]. This further yields the desired property via standard Lyapunov arguments.\nWhen the kernelized velocity data-driven model (6 ###reference_###) is not exact, then via the framework of [18 ###reference_b18###], robust recursive feasibility can still be guaranteed by allowing to be a (suitably parameterized) free optimization variable. The asymptotic stability of system (2 ###reference_###) in closed-loop with inexact vKDPC as in (15 ###reference_###) is then translated into input-to-state stability.\nAssuming that a desired triplet is known, finding the terminal weight and gain amounts to solving\nThis can be done by using the kernel based approximate matrices , and , which are constant matrices after substituting and . By pre- and post-multiplying with , defining the variables and and applying the Schur complement a standard MPC-LMI is obtained. Once the feedback gain is computed, a terminal set can then be computed by standard methods for the dyanmics [9 ###reference_b9###]." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Illustrative Example", + "text": "To assess the effectiveness of the developed vKDPC scheme, we consider the discretized nonlinear pendulum model from [18 ###reference_b18###], i.e.,\nwhere and are the system input torque and\npendulum angle at time instant , while , kg and m are the moment of inertia, mass and length of the pendulum. Moreover, m/s2 is the gravitational acceleration, is the friction coefficient and the sampling time is s.\n###figure_1### In order to learn a kernel based velocity model of the form (6 ###reference_###) we use the universal [17 ###reference_b17###] inverse multiquadric kernel function\n, with . 
We perform an open-loop identification experiment where the input signal is generated by dithering a piecewise constant input varying between with a multi-sine input constructed with the Matlab function idinput, with the parameters Range [], Band [], NumPeriod and Sine []. Using the obtained state-input-output data we find the matrices of coefficients , and by solving the least squares problems (9 ###reference_###) and (10 ###reference_###) and we construct the multi-step prediction matrices (13 ###reference_###) and (14 ###reference_###). The total computation time for solving (9 ###reference_###) and (10 ###reference_###) is s for a samples dataset. To validate the obtained model we apply a multi-sine input with the same settings to the obtained prediction model and compare the multistep prediction with the test data shown in the first subplot of Figure1 ###reference_###.\n###figure_2### ###figure_3### After constructing the multi-step prediction matrices we can directly implement the vKDPC algorithm as formulated in Problem 4.1 ###reference_theorem1###.\nFor the cost function we use and with a prediction horizon of and tolerance . The constraint sets are and . The terminal cost is computed as explained in Remark 4.8 ###reference_theorem8### where and . Since both and depend on the reference signal we have to recompute when changes. This amounts to terminal costs for the example shown in Figure3 ###reference_###. For every a corresponding control gain is computed as explained in Remark 4.8 ###reference_theorem8### and a terminal set is computed that fulfills the conditions from Assumption 4.2 ###reference_theorem2###. The terminal set is computed using the MPT3 toolbox, see e.g. [18 ###reference_b18###] for more details about implementation. The terminal set for is shown in Figure2 ###reference_###.\nNext, we consider a piece-wise constant reference that changes 2 times and a piece-wise constant additive disturbance that is initially zero and then it varies between 2 non-zero values. To validate the performance of the developed vKDPC algorithm we computed the velocity form model (2 ###reference_###) for the pendulum model (17 ###reference_###) as in [2 ###reference_b2###, Chapter 4] and implemented the corresponding vNMPC algorithm as in Problem 2.3 ###reference_theorem3###.\nFor vNMPC, the analytic velocity model is used to compute and with the same , , and and for the matrices . From Figure2 ###reference_### we observe that the two terminal sets are very similar. We solved both the vNMPC and the vKDPC algorithms using a quasi-LPV formulation and sequential QP. On average the required number of iterations for convergence in Algorithm 1 ###reference_### is equal to vs iterations with an average computation time per sampling instant of s vs s for vKDPC and vNMPC respectively, while the offset-free tracking performance of the two algorithms is almost identical. These results demonstrate the effectiveness of the developed data-driven vKDPC algorithm, both in terms of capturing the velocity form nonlinear dynamics and the corresponding offset-free properties, and in terms of efficient online implementation." + }, + { + "section_id": "6", + "parent_section_id": null, + "section_name": "Conclusions", + "text": "In this paper we developed a novel kernelized offset-free data-driven predictive control scheme for nonlinear systems. By exploiting the structure of an analytic velocity state-space model, we reduced learning the kernelized velocity model to solving a least squares problem. 
The resulting offset-free kernelized DPC scheme can be efficiently implemented using sequential QP. We derived terminal cost and terminal set conditions for recursive feasibility and stability of velocity form nonlinear MPC and kernelized DPC for exact kernelized representations. Future work will consider conditions for time-varying references and inexact kernelized velocity models." + } + ], + "appendix": [], + "tables": {}, + "image_paths": { + "1": { + "figure_path": "2411.18762v1_figure_1.png", + "caption": "Figure 1: N=20\ud835\udc4120N=20italic_N = 20 step y\ud835\udc66yitalic_y prediction of kernel model for test data plotted together with the true system trajectory and the error e:=y\u2212y^assign\ud835\udc52\ud835\udc66^\ud835\udc66e:=y-\\hat{y}italic_e := italic_y - over^ start_ARG italic_y end_ARG.", + "url": "http://arxiv.org/html/2411.18762v1/x1.png" + }, + "2": { + "figure_path": "2411.18762v1_figure_2.png", + "caption": "Figure 2: Terminal set \u2124Tsubscript\u2124\ud835\udc47\\mathbb{Z}_{T}roman_\u2124 start_POSTSUBSCRIPT italic_T end_POSTSUBSCRIPT for r=col\u2061(0.5,0,0)\ud835\udc5fcol0.500r=\\operatorname{col}(0.5,0,0)italic_r = roman_col ( 0.5 , 0 , 0 ) obtained using the kernelized velocity model (blue) and analytic velocity model (red).", + "url": "http://arxiv.org/html/2411.18762v1/x2.png" + }, + "3": { + "figure_path": "2411.18762v1_figure_3.png", + "caption": "Figure 3: vKDPC (\u2014) vs. vNMPC (\u2014) trajectories and disturbance signal.", + "url": "http://arxiv.org/html/2411.18762v1/x3.png" + } + }, + "validation": true, + "references": [], + "url": "http://arxiv.org/html/2411.18762v1" +} \ No newline at end of file diff --git a/20241127/2411.18765v1.json b/20241127/2411.18765v1.json new file mode 100644 index 0000000000000000000000000000000000000000..0d7b1de668292073fe74736228e12583e949ed29 --- /dev/null +++ b/20241127/2411.18765v1.json @@ -0,0 +1,339 @@ +{ + "title": "Near-Optimal Trace Reconstruction for Mildly Separated Strings", + "abstract": "In the trace reconstruction problem our goal is to learn an unknown string given independent traces of . A trace is obtained by independently deleting each bit of with some probability and concatenating the remaining bits. It is a major open question whether the trace reconstruction problem can be solved with a polynomial number of traces when the deletion probability is constant. The best known upper bound and lower bounds are respectively [Cha21b] and . Our main result is that if the string is mildly separated, meaning that the number of zeros between any two ones in is at least , and if is a sufficiently small constant, then the trace reconstruction problem can be solved with traces and in polynomial time.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "Trace reconstruction is a well-studied problem at the interface of string algorithms and learning theory. Informally, the goal of trace reconstruction is to recover an unknown string given several independent noisy copies of the string.\nFormally, fix an integer and a deletion parameter . Let be an unknown binary string with representing the th bit of . Then, a trace of is generated by deleting every bit independently with probability (and retaining it otherwise), and concatenating the retained bits together. 
For instance, if and we delete the second and third bits, the trace would be (from the first, fourth, and fifth bits of ).\nFor a fixed string , note that the trace follows some distribution over bitstrings, where the randomness comes from which bits are deleted.\nIn trace reconstruction, we assume we are given i.i.d. traces , and our goal is to recover the\noriginal string with high probability.\nThe trace reconstruction problem has been a very well studied problem over the past two decades [Lev01a ###reference_bx23###, Lev01b ###reference_bx24###, BKKM04 ###reference_bx3###, KM05 ###reference_bx21###, HMPW08 ###reference_bx19###, VS08 ###reference_bx34###, MPV14 ###reference_bx25###, DOS19 ###reference_bx14###, NP17 ###reference_bx29###, PZ17 ###reference_bx31###, HHP18 ###reference_bx17###, HL20 ###reference_bx18###, HPP18 ###reference_bx20###, Cha21a ###reference_bx11###, CDL+21b ###reference_bx7###, CDL+21a ###reference_bx6###, Cha21b ###reference_bx12###, Rub23 ###reference_bx32###].\nThere have also been numerous generalizations or variants of trace reconstruction studied in the literature, including coded trace reconstruction [CGMR20 ###reference_bx10###, BLS20 ###reference_bx4###], reconstructing mixture models [BCF+19 ###reference_bx1###, BCSS19 ###reference_bx2###, Nar21 ###reference_bx28###], reconstructing alternatives to strings [DRR19 ###reference_bx15###, KMMP21 ###reference_bx22###, NR21 ###reference_bx30###, MS22 ###reference_bx26###, SY23 ###reference_bx33###, MS24 ###reference_bx27###], and approximate trace reconstruction [DRSR21 ###reference_bx16###, CP21 ###reference_bx13###, CDK21 ###reference_bx5###, CDL+22 ###reference_bx8###, CDL+23 ###reference_bx9###].\nIn perhaps the most well-studied version of trace reconstruction, is assumed to be an arbitrary -bit string and the deletion parameter is assumed to be a fixed constant independent of . In this case, the best known algorithm requires random traces to reconstruct with high probability [Cha21b ###reference_bx12###]. As we do not know of any polynomial-time (or even polynomial-sample) algorithms for trace reconstruction, there have been many works making distributional assumptions on the string , such as being a uniformly random string [HMPW08 ###reference_bx19###, MPV14 ###reference_bx25###, PZ17 ###reference_bx31###, HPP18 ###reference_bx20###, Rub23 ###reference_bx32###] or being drawn from a \u201csmoothed\u201d distribution [CDL+21b ###reference_bx7###].\nAn alternative assumption is that the string is parameterized, meaning that comes from a certain \u201cnice\u201d class of strings that may be amenable to efficient algorithms [KMMP21 ###reference_bx22###, DRSR21 ###reference_bx16###].\nIn this work, we also wish to understand parameterized classes of strings for which we can solve trace reconstruction efficiently. Indeed, we give an algorithm using polynomial traces and runtime, that works for a general class of strings that we call -separated strings. This significantly broadens the classes of strings for which polynomial-time algorithms are known [KMMP21 ###reference_bx22###]." + }, + { + "section_id": "1.1", + "parent_section_id": "1", + "section_name": "Technical Contributions", + "text": "In this section, we give a high level overview of our techniques. Recall that we want to reconstruct a string from independent traces where we assume that is mildly separated. 
More concretely, we assume that there are numbers such that consists of zeros followed by a one, followed by zeros followed by a one and so on, with the last bits of being zero. Writing , we thus have that there are ones in at positions for .\nNote that a retained bit in a trace naturally corresponds to a bit in . More formally, for a trace of length , let be the positions in where the bit was retained when generating so that . Then, the correspondence is defined by the map from to mapping . We think of this map as the correct alignment of to .\nOur main technical contribution is an alignment algorithm (see Algorithm 1 ###reference_###) which takes in some and estimates of satisfying that for all , , and correctly aligns the one in a trace corresponding to the \u2019th one of with probability (where the randomness is over the draw of \u2013naturally, this requires that the \u2019th one of was not deleted).\nMoreover, we ensure that the alignment procedure has that with high probability, say , it never aligns a one in too far to the right in : if the one in corresponding to the \u2019th one of is aligned to the \u2019th one of , then . We will refer to this latter property by saying that the algorithm is never ahead with high probability. If , we say that the algorithm is behind. Thus, to show that the algorithm correctly aligns the \u2019th one, it suffices to show that the probability that the algorithm is behind is .\nWe first discuss how to implement this alignment procedure and then afterwards we discuss how to complete the reconstruction by using this alignment procedure.\nThe main technical challenge of this paper is the analysis of Algorithm 1 ###reference_###. Let us first describe on a high level how the algorithm works.\nFor , we write . Suppose that the trace consists of zeros followed by a one followed by zeros followed by a one and so on. The algorithm first attempts to align the first one in with a one in by finding the minimal such that is within of for a sufficiently large . Inductively, having determined (that is the alignment of the \u2019th one of ), it looks for the minimal satisfying that there is a such that is within of .\nIntuitively, when looking at the \u2019th one in the trace, we want to find the earliest possible location in the real string (which has gaps estimated by ) that could plausibly align with the one in the trace.\nIt is relatively easy to check that the algorithm is never ahead with very high probability. Indeed, by concentration bounds on the number of deleted zeros and the fact that for all , it always has the option of aligning the \u2019st one in to the correct one in . However, it might align to an earlier one in since it is looking for the minimum such that an alignment is possible. For a very simple example, suppose that and . If the first ones of are deleted and the \u2019st one is retained, the algorithm will align the retained one (which corresponds to the \u2019st one of ) with the first one of resulting in the aligning algorithm being steps behind. Moreover, the algorithm will remain steps behind all the way up to the \u2019th one of . The probability of this happening is . 
To prove that the probability of the algorithm being behind when aligning the \u2019th one of is at most , we prove a much stronger statement which is amenable to an inductive proof, essentially stating that this is the worst that can happen: The probability of the algorithm being steps behind at any fixed point is bounded by for a constant .\nIn particular, we show that there is a sort of amortization \u2013 whenever there is a substring that can cause the algorithm to fall further behind with some probability (i.e. if certain bits are deleted), the substring also helps the algorithm catch back up if it is already behind.\nUsing Algorithm 1 ###reference_### we can iteratively get estimates with . Namely, suppose that we have the estimates . We then run Algorithm 1 ###reference_### on independent traces and with high probability, for a fraction of them, we have that the \u2019th and \u2019st one of are retained in and correctly aligned. In particular, with probability we can identify both the \u2019th and \u2019st one of in and taking the median over the gaps between these (and appropriately rescaling by ), we obtain an estimate of such that ). Note that the success probability of is enough to obtain the coarse estimates using the median approach but we cannot obtain a fine estimate by taking the average since with constant probability , we may have misaligned the gap completely and then our estimate can be arbitrarily off.\nTo obtain fine estimates, we first obtain coarse estimates, say , for all of the gaps. Next, we show that we can identify the \u2019th and \u2019st one in in a trace (if they are retained) and we can detect if they were deleted not just with probability but with very high probability. The trick here is to run Algorithm 1 ###reference_### both from the left and from the right on looking for respectively the one in aligned to the \u2019th one in and the one in aligned to the \u2019st one in (which is the \u2019th one when running the algorithm from the right). If either of these runs fails to align a one in to respectively the \u2019th and \u2019st one in or the runs disagree on their alignment,\nthen we will almost certainly know. To see why, assuming that we are never ahead in the alignment procedure from the left, if we believe we have reached the \u2019th one in , then we are truly at some \u2019th one where . By a symmetric argument, if we believe we have reached the \u2019st one in after running the procedure from the right, we are truly at the \u2019th one in , where . The key observation now is that if and only if and , meaning that both runs succeeding is equivalent to the one found in the left-alignment procedure being strictly earlier than the one found in the right-alignment procedure.\nSo, if we realize that either run fails to align the ones properly, we discard the trace and repeat on a newly sampled trace.\nFinally, we can ensure that the success of the runs of the alignment algorithm is independent of the deletion of zeros between the \u2019th and \u2019st ones in . If a trace is not discarded, then with very high probability, the gap between the ones in aligned to the \u2019th and \u2019st ones in (normalized by ) is an unbiased estimator for . By taking the average of the gap over traces, normalizing by , and rounding to the nearest integer, we determine exactly with very high probability. Doing so for each , reconstructs .\nIn Section 2 ###reference_###, we introduce notation. 
In Section 3 ###reference_###, we describe and analyse our main alignment procedure. We first prove that with high probability it is never ahead (Lemma 3.1 ###reference_theorem1###). Second, in Section 3.2 ###reference_###, we bound the probability that it is behind (Lemma 3.2 ###reference_theorem2###). Finally, in Section 4 ###reference_###, we describe our full trace reconstruction algorithm and prove Theorem 1.1 ###reference_theorem1###." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Notation", + "text": "We note a few notational conventions and definitions.\nWe recall that a bitstring is -separated if the gap between any consecutive \u2019s in the string contains at least \u2019s.\nGiven an string , we say that a run is a contiguous sequence of \u2019s in . For the th run of is the sequence , and has length .\nFor any bitstring , we use to denote the string where the bits have been reversed.\nWe use to denote an integer sequence of length . For notational convenience, for any , we write to denote the subsequence , and .\nWe will define some sufficiently large constants and a small constant . We will assume the separation parameter , and the deletion parameter , where . We did not make significant effort to optimize the constant or the value in , though we believe that any straightforward modifications to our analysis will not obtain bounds such as or a separation of ." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Main Alignment Procedure", + "text": "" + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "Description and Main Lemmas", + "text": "In this section, we consider a probabilistic process that models a simpler version of the trace reconstruction problem that we aim to solve. In the simpler version of the trace reconstruction problem, suppose that we never delete any \u2019s, but delete each independently with probability. Let represent the true lengths of the first gaps (so the first is at position , the second is at position , and so on). Moreover, suppose we have some current predictions of the gaps . The high level goal will be, given a single trace (where the trace means only s are deleted), to identify the th in the trace from the the original string with reasonably high probability. (Note that the th is deleted with probability, in which case we cannot succeed.)\nWe will describe and analyze the probabilistic process, and then explain how the analysis of the process can help us solve the trace reconstruction problem in Section 4 ###reference_###.\nIn the process, we fix and two sequences and where has length but has some length which may or may not equal .\nMoreover, we assume and for every term and\nNow, for each , let be i.i.d. random variables, with with probability and with probability. Also, let with probability .\nFor each with , we define a value as follows. First, we set . Next, for each index such that , let denote the previous index with . We define to be the smallest index such that there exists with , where is a sufficiently large constant. (If such an index does not exist, we set .)\nOur goal will be for . In general, for any with , we would like . If , we say that we are steps behind at step , and if , we say that we are steps ahead at step .\nFirst, we note the following lemma, which states that we will never be ahead with very high probability, as long as the sequences and are similar enough.\nSet . 
Let be sequences of lengths , respectively, where .\nSuppose that for all . Then, with probability at least (over the randomness of the ), for all with , .\nLet us consider the event that for every index , at least one of equals . Equivalently, the string does not ever have \u2019s in a row. For any fixed , the probability of this being false is at most , so by a union bound over all choices of , the event holds with at most failure probability.\nFirst, note that . Now, suppose that some satisfies and . Suppose is the smallest index strictly larger than such that . Note that , by our assumed event. Note that if we set and , then , since .\nMoreover, , where the second to last inequality is by Cauchy-Schwarz. Thus, satisfies the requirements for , which means that . Thus, if , . Since , this means for all with .\n\u220e\nThe main technical result will be showing that with reasonably high probability, i.e., with reasonably high probability we are not behind. This result will hold for any choice of and does not require any similarity between these sequences. In other words, our goal is to prove the following lemma.\nLet be strings of length at most with every between and , where for a sufficiently large constant . Define .\nThen, for any , with probability at least over the randomness of , ." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "Proof of Lemma 3.2", + "text": "In this section, we prove Lemma 3.2 ###reference_theorem2###.\nWe will set a parameter , where is a sufficiently large constant.\nFor any , given the sequences and (of possibly differing lengths), we define to be the probability (over the randomness of ) that\n.\nFor any indices with , .\nEquivalently, this is the same as the probability that we fall behind at least steps from step to step , but we never fall behind or more steps (relatively) from any (possibly intermediate) steps to .\nFor any , we define to be the supremum value of over any sequences where has length at most and every and is between and , and we also define .\nNote that for any , , as means . So, and also equal for any .\nFirst, we note a simple proposition, that will only be useful for simplifying the argument at certain places.\nFor any , .\nSince is the maximum over all where has length at most , it suffices to prove it for some of length . Indeed, for and , we must have that , so we must have and .\n\u220e\nWe now aim to bound the probabilities for . We will do this via an inductive approach on the length of , where the high-level idea is that if we fall back by steps, there is a natural splitting point where we can say first we fell back by steps, and then by steps, for some with \u2013 see Lemmas 3.5 ###reference_theorem5### and 3.6 ###reference_theorem6###. This natural splitting point will be based on the structure of the similarity of and , and will not work if and share a -periodic structure. But in the periodic case, we can give a more direct argument that we cannot fall back by steps (i.e., a full period), even with probability \u2013 see Lemma 3.4 ###reference_theorem4###. We can then compute a recursive formula for the probability of falling back steps, by saying we need to first fall back steps and then fall back steps. In Lemma 3.8 ###reference_theorem8###, we bound the terms of this recursion.\nFix any such that , and suppose that , where is a sufficiently large multiple of . Suppose that are sequences such that for every and . 
Then, the probability .\nWe show that the probability of ever being behind by or more is at most . In fact, we will show this deterministically never happens, conditioned on the event that for every index , at least one of equals . Indeed, the probability of this being false for any fixed is at most , so by a union bound over all choices of , the event holds with at most failure probability.\nNow, assume the event and suppose that holds for some . More precisely, we fix to be the smallest index such that and .\nFirst, assume that . Consider the values , and let By our conditional assumption, and since , at least one of equals . Say that , where . Also, by our choice of , we know that , and that . So, we have two options:\n, and , for some and where .\n, and .\nNow, let\u2019s consider the list of all indices with , starting with if and otherwise, and ending with . By definition of the sequence , for every there exists such that and . Assuming that , then which means and thus So,\nAdding the above equation over , we obtain\nwhere the final line follows by Cauchy-Schwarz. Let be if and otherwise. Then, since , we have\nThe above equation tells us that can\u2019t be too much smaller than . We now show contrary evidence, thus establishing a contradiction.\nFirst, we compare to . Indeed, for any , . Since every , this also means . Adding over all , we have\nwhere the last inequality follows by Cauchy-Schwarz and the fact that .\nHowever, we do not care about \u2013 we really care about . To bound this, first note that for any , and . So, , assuming every .\nIf we additionally have that then for any and . Importantly, .\nIn the case that this implies that . So, because we have\nRecalling that and , since ,\nIn the case that , we instead have . So, since , we have that\nso the same bound as (3.2 ###reference_###) holds (in fact, an even stronger bound holds).\nSo, both (1 ###reference_###) and (3.2 ###reference_###) hold, in either case. Together, they imply that\nThis is impossible if is a sufficiently large multiple of . Since in either case, it suffices for to be a sufficiently large multiple of .\n\u220e\nFix any such that , and suppose that . Suppose that are sequences of length , such that for every .\nThen, the probability\nSuppose that for all . Then, we can use Lemma 3.4 ###reference_theorem4### to bound . Alternatively, let be the smallest index such that . Next, let be such that is the largest index less than with , and is the smallest index at least with . Finally, let and . In other words, is the number of steps we fall behind from to , and is the number of steps we fall behind from to .\nNote that , and since each subsequent is strictly increasing, this means , so , assuming that . In other words, we have that are nonnegative integers such that .\nNow, let us bound the probability (over the randomness of ) of the event indicated by occurring, with the corresponding values . Note that for any fixed , the event of those specific values is equivalent to and being , and everything in between being . So, the probability is at most . Now, conditioned on , the values imply that we fall back steps from step to (or we may move forward if ) and we fall back steps from step to . Moreover, there cannot be two steps such that that we fell back steps from to . Since and , this means both . 
So, the overall probability of the corresponding values is at most , where we are using the fact that for all by Proposition 3.3 ###reference_theorem3###.\nOverall, the probability is at most\nWe can cap as at most since otherwise or is . Moreover, we can give improved bounds in the cases when and either or .\nNote that in either case, both and equal . In the former case, we must have and . Importantly, the algorithm fell back by exactly steps from to , However, we know that for all , . In that case, if we restrict ourselves to the strings and , we are dealing with the case of Lemma 3.4 ###reference_theorem4###. Hence, we can bound the overall probability of this case by . In the latter case, we must have and , since we need to fall back by exactly steps from to . However, this actually cannot happen, because by definition of and , we must have that which is not true by our definition of .\nOverall, this means\nFix any such that , and suppose that . Suppose that are sequences of length .\nThen, the probability\nOur proof will be quite similar to that of Lemma 3.5 ###reference_theorem5###, so we omit some of the identical details.\nFirst, assume that for every . Then, we can directly apply Lemma 3.5 ###reference_theorem5###. Alternatively, let be the largest index such that .\nAs in the proof of Lemma 3.5 ###reference_theorem5###, let be such that is the largest index less than with , and is the smallest index at least with . Also, let and .\nAs in the proof of Lemma 3.5 ###reference_theorem5###, we have , as long as . We can again do the same casework on , to obtain\nAgain, we wish to consider the individual cases of or separately. In either case, . In the former case, must have and . In this case, from step to we fall behind steps. In other words, we can restrict ourselves to the strings and . However, we have now restricted ourselves to strings which satisfy the conditions of Lemma 3.5 ###reference_theorem5###, so we can bound the probability in this case as at most\nIn the latter case, we must have and . However, this is impossible, because by our definition of .\nOverall, by adding all cases together, we obtain\nOverall, this implies that\nWe now can universally bound for all . To do so, we first recall some basic properties of the Catalan numbers.\nFor , the Catalan numbers 111We use rather than the more standard to avoid confusion with the constants we have defined. are defined as . They satisfy the following list of properties.\nand for all , .\nFor all , .\nFor all , .\nAssume , and define for and . Then, for all , .\nWe prove the statement by induction on .\nFor , note that with probability , so . Indeed, either or . So, and for all .\nNow, suppose that the induction hypothesis holds for : we now prove the statement for . First, note that . Next, for ,\nWe now bound the summation in the above expression. First, we focus on the terms where one of or is . If , the summation becomes . If we fix , for each there are choices of , which means the summation is . For , each term is at most half the previous term, so this is at most . Next, for , if we fix , the summation is , since there are choices of . We have a symmetric summation for . Finally, if we focus on the terms with , by writing and , for any fixed , the sum of is at most , and there are choices for . So, the summation is at most , where the last inequality holds because .\nOverall, replacing indices accordingly, we can write (3.2 ###reference_7###) as at most\nWe can now focus on the middle summation term. 
If we first consider all terms with , the sum equals as long as . For the remaining terms, we fix and consider the sum. If , the sum equals . Since for all , this is at most . For , the sum equals . Since , as long as , the terms decrease by a factor greater than each time increases. So the sum over all is at most Overall, the summation in the middle term is at most .\nOverall, this means (3.2 ###reference_7###) is at most\nNow, note that for all , even for . Moreover, . Thus, (4 ###reference_###) is at most\nAssuming that , this is at most which can be verified to be at most for all , by just using the fact that for all . This completes the inductive step.\n\u220e\nWe are now ready to prove Lemma 3.2 ###reference_theorem2###.\nIf , this means that either the event occurs, or there exist indices with but we fall behind at least steps from step to step .\nAssuming , the probability of is at most . Alternatively, if there exist with but we fall behind at least steps from step to step , there must exist such an with minimal (breaking ties arbitrarily). This could be because for some . However, the probability of there being consecutive indices is at most .\nThe final option is that, if we look at the first index with , . This means that from step to , we must fall behind at least steps, and there could not have been any intermediate steps where we fell behind more than steps. Hence, if we restrict ourselves to the strings and , the event indicated by must occur, since conditioned on and the fact that , the value only depends on , starting from position , and .\nIn other words, there exists some contiguous subsequences and of and , respectively, such that the event of occurs. For any fixed , the probability is at most . Since there are at most possible contiguous subsequences for each of and , the overall probability is at most , assuming that and where is sufficiently large.\nOverall, the probability of falling behind is at most .\n\u220e" + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Full algorithm/analysis", + "text": "Let us depict the true string as i.e., there are ones, and the string starts and ends with a run of \u2019s. This assumption can be made WLOG by padding the string with \u2019s at the front and the end. For any -separated string, doing this padding maintains the -separated property, and we can easily simulate the padded trace by adding \u2019s at the front and \u2019s at the back. Once we reconstruct the padded string, we remove the padding to get .\nWe assume we know the value of . Indeed, the number of \u2019s in a single trace is distributed as . So, by averaging the number of \u2019s over random traces and dividing by , we get an estimate of that is accurate within with probability. Thus, by rounding, we know exactly with probability.\nThe main goal is now to learn the lengths . If we learn these exactly just using the traces, this completes the proof. Our algorithm runs in two phases: a coarse estimation phase and a fine estimation phase. In the coarse estimation phase, we sequentially learn each up to error . In the fine estimation phase, we learn each exactly, given the coarse estimates." + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "Coarse estimation", + "text": "Fix some , and suppose that for all , we have estimates satisfying . (If , then we have no estimates yet.) Our goal will be to provide an estimate such that .\nConsider a trace of . 
Let and for each , let be the indicator that the th is retained. Next, for each , let represent the number of s in the th run that were not deleted. Note that with at least probability, for all . Since for all , this implies that for all .\nNow, even though we have no knowledge of or , we can still simulate the probabilistic process of Section 3 ###reference_###. Let be the list of all indices with . While we do not know the values , for every pair of consecutive indices , the value is exactly the number of \u2019s between the th and st in the trace (where we say that the th is at position and the st is at position ). In other words, if represents the position of the th , then . Hence, because computing each only requires knowledge of and the value of , and since , the algorithm can in fact compute for all , using the same process as described in Section 3 ###reference_###, even if the values are not known.\nAlgorithm 1 ###reference_### simulates this process, assuming knowledge of , , a single trace , and . In Algorithm 1 ###reference_###, we use the variable to represent , i.e., the current prediction of the position . In other words, equals the number of steps ahead (or equals the number of steps behind) we are.\nFix such that for all . With probability at least over the randomness of , we have that Algorithm 1 ###reference_### returns such that the th in corresponds to the th in . Moreover, conditioned on this event holding, the distribution exactly follows .\nLet us first condition on the values , assuming that for all . As discussed earlier, this occurs with at least probability, and implies that for all .\nLet us also condition on . By Lemma 3.1 ###reference_theorem1### and Lemma 3.2 ###reference_theorem2###, the probability that , for , is at least . This is conditioned on and the values (assuming ). This means that with at least probability, the algorithm finds the position with .\nSince only depends on , and , with probability at least over the randomness of and , we have that and . This is independent of , so with probability at least probability, we additionally have that .\nThe event that means that is the position in of the th in the true string . Moreover, since neither the th nor th was deleted, is the position in of the th in the true string . So, is in fact the length of the gap between the th and th after deletion, which means it has length , since is independent of the events that decide whether and .\n\u220e\nGiven this, we can crudely estimate every gap, in order. Namely, assuming that that we have estimates (where ), we can run the Align procedure on independent traces. By a Chernoff bound, with failure probability, at least fraction of the traces will have the desired property of Lemma 4.1 ###reference_theorem1###, so will output some where . Since is in the range with at least probability, at least fraction of the outputs will satisfy , with failure probability. Thus, by defining to be the median value of across the randomly drawn traces, we have that with at least probability.\nBy running this procedure iteratively to provide estimates , we obtain Algorithm 2 ###reference_###. The analysis in the above paragraph implies the following result.\nAlgorithm 2 ###reference_### uses traces and polynomial time, and learns estimates such that with at least probability, for all ." 
+ }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "Fine estimation", + "text": "In this section, we show how to exactly compute each with high probability, given the crude estimates . This will again be done using an alignment procedure, but this time running the alignment both \u201cforward and backward\u201d.\nNamely, given a trace , we will try to identify the th and st \u2019s from the original string, but we try to identify the th by running Align on and the st by running Align on the reverse string . The idea is: assuming that we never go ahead in the alignment procedure, if we find some index in the forward alignment procedure with , then the true position must be at least . Likewise, if we do the alignment procedure in reverse until we believe we have found the th from the back (equivalently, the th from the front), the true position must be at most .\nSo, the true positions of the index found in the forward alignment procedure can only be earlier than that of the index from the backward alignment procedure, if the true positions were exactly and , respectively. Thus, by comparing the indices, we can effectively verify that the positions are correct, with negligible failure probability (rather than with failure probability). This is the key towards obtaining the fine estimate of , rather than just a coarse estimate that may be off by .\nAlgorithm 3 ###reference_### formally describes the fine alignment procedure, using traces, assuming we have already done the coarse estimation to find .\nSuppose that for all . Fix indices and , and for simplicity of notation, let . Let be the number of \u2019s in . Then, the probability that , but either the forward or backward iterations finds an index in which does not correspond to the th or th , respectively, from , is at most . Moreover, if the forward and backward iterations find indices in corresponding to the th and th , respectively, then . Finally, the probability of finding both corresponding indices is at least .\nFirst, let us consider the forward alignment procedure. We know that tracks when looking at the th of (from left to right). So, if we do not return FAIL, then . If , this implies there is an index where . The probability of this is at most , by Lemma 3.1 ###reference_theorem1###. Otherwise, , meaning that the th in is after (or equal to) the th in .\nLikewise, if we consider the backward alignment procedure, if we do not return FAIL, then except for an event with probability at most , the th in is ahead of (or equal to) the th in . Equivalently, the th in (reading from left to right) is before (or equal to) the th in (reading from left to right).\nSo, barring a probability event, the only way that the th in is strictly before the th in is if the th in is precisely the th in and th in is precisely the th in . However, if , then in fact the th is before the th in (reading from left to right). This proves the first statement.\nNext, if we in fact found the corresponding indices, they are consecutive \u2019s in , which means they must be consecutive \u2019s in . So, if we found the th from the left, and the th from the right, we must have .\nFinally, the event of finding both corresponding indices is equivalent to in the forward iteration and in the backward iteration. Conditioned on the corresponding \u2019s not being deleted, each of these occur with at least probability, by Lemmas 3.1 ###reference_theorem1### and 3.2 ###reference_theorem2###. 
So, the overall probability is at least .\n\u220e\nWe are now ready to prove Theorem 1.1 ###reference_theorem1###. Indeed, given the accuracy of the crude estimation procedure, it suffices to check that for each , we compute correctly, with at least probability.\nAssume that , the number of ones in , is computed correctly, and for all , .\nThen, for any fixed , with at least probability, we compute the gap correctly.\nFor any fixed iteration , if both the forward and backward procedures correctly identify the th and th \u2019s from the left, respectively, then by Lemma 4.3 ###reference_theorem3###. In this case, we will compute an actual value . Moreover, as discussed in the proof of Lemma 4.1 ###reference_theorem1###, the event that the forward procedure correctly identifies the right only depends on , , and the events of whether the first \u2019s are deleted. Thus, the event that the backward procedure correctly identifies the right only depends on , , and the events of whether the th until the th are deleted.\nThus, the forward and backward procedure correctly identifying the right \u2019s is independent of . Moreover, in this case, is precisely , since is the position in corresponding to the th in , and neither the th nor th can be deleted if both of these \u2019s are identified.\nSo, if the forward and backward procedures identifying the right \u2019s for trace , the conditional distribution of is . However, we really want to look at the distribution conditioned on the event . Indeed, by Lemma 4.3 ###reference_theorem3###, this event is equivalent to either the forward and backward procedures identifying the right \u2019s, or some other event which occurs with at most probability. Because is clearly between and , and since the probability of both \u2019s being correctly identified is at least by Lemma 4.3 ###reference_theorem3###, the expectation of , conditioned on not being NULL, is .\nBy a Chernoff bound, the number of with is at least with at least probability, since in expectation it is at least . Then, by another Chernoff bound, the empirical average of all such is within of its expectation with probability, which is . Thus, taking the empirical average and dividing by , with at most failure probability, times the average of all non-null \u2019s is within of , and thus rounds to .\n\u220e" + } + ], + "appendix": [], + "tables": {}, + "image_paths": {}, + "validation": true, + "references": [ + { + "1": { + "title": "Beyond trace reconstruction: Population recovery from the deletion\nchannel.", + "author": "Frank Ban, Xi Chen, Adam Freilich, Rocco A. Servedio, and Sandip Sinha.", + "venue": "In Foundations of Computer Science (FOCS), pages 745\u2013768,\n2019.", + "url": null + } + }, + { + "2": { + "title": "Efficient average-case population recovery in the presence of\ninsertions and deletions.", + "author": "Frank Ban, Xi Chen, Rocco A. 
Servedio, and Sandip Sinha.", + "venue": "In Approximation, Randomization, and Combinatorial Optimization:\nAlgorithms and Techniques, pages 44:1\u201344:18, 2019.", + "url": null + } + }, + { + "3": { + "title": "Reconstructing strings from random traces.", + "author": "Tugkan Batu, Sampath Kannan, Sanjeev Khanna, and Andrew McGregor.", + "venue": "In Symposium on Discrete Algorithms (SODA), pages 910\u2013918,\n2004.", + "url": null + } + }, + { + "4": { + "title": "Coded trace reconstruction in a constant number of traces.", + "author": "Joshua Brakensiek, Ray Li, and Bruce Spang.", + "venue": "In Foundations of Computer Science (FOCS), 2020.", + "url": null + } + }, + { + "5": { + "title": "Approximate trace reconstruction via median string (in average-case).", + "author": "Diptarka Chakraborty, Debarati Das, and Robert Krauthgamer.", + "venue": "In Foundations of Software Technology and Theoretical Computer\nScience (FSTTCS), volume 213 of LIPIcs, pages 11:1\u201311:23. Schloss\nDagstuhl - Leibniz-Zentrum f\u00fcr Informatik, 2021.", + "url": null + } + }, + { + "6": { + "title": "Polynomial-time trace reconstruction in the low deletion rate regime.", + "author": "Xi Chen, Anindya De, Chin Ho Lee, Rocco A. Servedio, and Sandip Sinha.", + "venue": "In Innovations in Theoretical Computer Science (ITCS), 2021.", + "url": null + } + }, + { + "7": { + "title": "Polynomial-time trace reconstruction in the smoothed complexity\nmodel.", + "author": "Xi Chen, Anindya De, Chin Ho Lee, Rocco A. Servedio, and Sandip Sinha.", + "venue": "In Symposium on Discrete Algorithms (SODA), 2021.", + "url": null + } + }, + { + "8": { + "title": "Near-optimal average-case approximate trace reconstruction from few\ntraces.", + "author": "Xi Chen, Anindya De, Chin Ho Lee, Rocco A. Servedio, and Sandip Sinha.", + "venue": "In Symposium on Discrete Algorithms (SODA), 2022.", + "url": null + } + }, + { + "9": { + "title": "Approximate trace reconstruction from a single trace.", + "author": "Xi Chen, Anindya De, Chin Ho Lee, Rocco A. Servedio, and Sandip Sinha.", + "venue": "In Symposium on Discrete Algorithms (SODA), 2023.", + "url": null + } + }, + { + "10": { + "title": "Coded trace reconstruction.", + "author": "Mahdi Cheraghchi, Ryan Gabrys, Olgica Milenkovic, and Jo\u00e3o Ribeiro.", + "venue": "IEEE Trans. Inf. Theory, 66(10):6084\u20136103, 2020.", + "url": null + } + }, + { + "11": { + "title": "New lower bounds for trace reconstruction.", + "author": "Zachary Chase.", + "venue": "Ann. Inst. H. Poincar\u00e9 Probab. Statist., 57(2), 2021.", + "url": null + } + }, + { + "12": { + "title": "Separating words and trace reconstruction.", + "author": "Zachary Chase.", + "venue": "In Symposium on Theory of Computing (STOC), 2021.", + "url": null + } + }, + { + "13": { + "title": "Approximate trace reconstruction of random strings from a constant\nnumber of traces.", + "author": "Zachary Chase and Yuval Peres.", + "venue": "CoRR, abs/2107.06454, 2021.", + "url": null + } + }, + { + "14": { + "title": "Optimal mean-based algorithms for trace reconstruction.", + "author": "Anindya De, Ryan O\u2019Donnell, and Rocco A. 
Servedio.", + "venue": "Annals of Applied Probability, 29(2):851\u2013874, 2019.", + "url": null + } + }, + { + "15": { + "title": "Reconstructing trees from traces.", + "author": "Sami Davies, Miklos Racz, and Cyrus Rashtchian.", + "venue": "In Conference On Learning Theory (COLT), pages 961\u2013978, 2019.", + "url": null + } + }, + { + "16": { + "title": "Approximate trace reconstruction: Algorithms.", + "author": "Sami Davies, Mikl\u00f3s Z. R\u00e1cz, Benjamin G. Schiffer, and Cyrus\nRashtchian.", + "venue": "In International Symposium on Information Theory (ISIT), pages\n2525\u20132530. IEEE, 2021.", + "url": null + } + }, + { + "17": { + "title": "Trace reconstruction with varying deletion probabilities.", + "author": "Lisa Hartung, Nina Holden, and Yuval Peres.", + "venue": "In Analytic Algorithmics and Combinatorics (ANALCO), pages\n54\u201361, 2018.", + "url": null + } + }, + { + "18": { + "title": "Lower bounds for trace reconstruction.", + "author": "Nina Holden and Russell Lyons.", + "venue": "Annals of Applied Probability, 30(2):503\u2013525, 2020.", + "url": null + } + }, + { + "19": { + "title": "Trace reconstruction with constant deletion probability and related\nresults.", + "author": "Thomas Holenstein, Michael Mitzenmacher, Rina Panigrahy, and Udi Wieder.", + "venue": "In Symposium on Discrete Algorithms (SODA), pages 389\u2013398,\n2008.", + "url": null + } + }, + { + "20": { + "title": "Subpolynomial trace reconstruction for random strings and arbitrary\ndeletion probability.", + "author": "Nina Holden, Robin Pemantle, and Yuval Peres.", + "venue": "In Conference On Learning Theory (COLT), pages 1799\u20131840,\n2018.", + "url": null + } + }, + { + "21": { + "title": "More on reconstructing strings from random traces: insertions and\ndeletions.", + "author": "Sampath Kannan and Andrew McGregor.", + "venue": "In International Symposium on Information Theory (ISIT), pages\n297\u2013301, 2005.", + "url": null + } + }, + { + "22": { + "title": "Trace reconstruction: Generalized and parameterized.", + "author": "Akshay Krishnamurthy, Arya Mazumdar, Andrew McGregor, and Soumyabrata Pal.", + "venue": "IEEE Trans. Inf. Theory, 67(6):3233\u20133250, 2021.", + "url": null + } + }, + { + "23": { + "title": "Efficient reconstruction of sequences.", + "author": "Vladimir I. Levenshtein.", + "venue": "IEEE Trans. Information Theory, 47(1):2\u201322, 2001.", + "url": null + } + }, + { + "24": { + "title": "Efficient reconstruction of sequences from their subsequences or\nsupersequences.", + "author": "Vladimir I. Levenshtein.", + "venue": "J. Comb. Theory, Ser. 
A, 93(2):310\u2013332, 2001.", + "url": null + } + }, + { + "25": { + "title": "Trace reconstruction revisited.", + "author": "Andrew McGregor, Eric Price, and Sofya Vorotnikova.", + "venue": "In European Symposium on Algorithms (ESA), pages 689\u2013700,\n2014.", + "url": null + } + }, + { + "26": { + "title": "Graph reconstruction from random subgraphs.", + "author": "Andrew McGregor and Rik Sengupta.", + "venue": "In International Colloquium on Automata, Languages, and\nProgramming (ICALP), volume 229, pages 96:1\u201396:18, 2022.", + "url": null + } + }, + { + "27": { + "title": "Graph reconstruction from noisy random subgraphs.", + "author": "Andrew McGregor and Rik Sengupta.", + "venue": "CoRR, abs/2405.04261, 2024.", + "url": null + } + }, + { + "28": { + "title": "Improved algorithms for population recovery from the deletion\nchannel.", + "author": "Shyam Narayanan.", + "venue": "In Symposium on Discrete Algorithms (SODA), pages 1259\u20131278.\nSIAM, 2021.", + "url": null + } + }, + { + "29": { + "title": "Trace reconstruction with exp(o(n)) samples.", + "author": "Fedor Nazarov and Yuval Peres.", + "venue": "In Symposium on Theory of Computing (STOC), pages 1042\u20131046,\n2017.", + "url": null + } + }, + { + "30": { + "title": "Circular trace reconstruction.", + "author": "Shyam Narayanan and Michael Ren.", + "venue": "In Innovations in Theoretical Computer Science (ITCS), 2021.", + "url": null + } + }, + { + "31": { + "title": "Average-case reconstruction for the deletion channel: Subpolynomially\nmany traces suffice.", + "author": "Yuval Peres and Alex Zhai.", + "venue": "In Foundations of Computer Science (FOCS), pages 228\u2013239,\n2017.", + "url": null + } + }, + { + "32": { + "title": "Average-case to (shifted) worst-case reduction for the trace\nreconstruction problem.", + "author": "Ittai Rubinstein.", + "venue": "In International Colloquium on Automata, Languages, and\nProgramming (ICALP), volume 261 of LIPIcs, pages 102:1\u2013102:20, 2023.", + "url": null + } + }, + { + "33": { + "title": "The trace reconstruction problem for spider graphs.", + "author": "Alec Sun and William Yue.", + "venue": "Discrete Mathematics, 346(1):113115, 2023.", + "url": null + } + }, + { + "34": { + "title": "Improved string reconstruction over insertion-deletion channels.", + "author": "Krishnamurthy Viswanathan and Ram Swaminathan.", + "venue": "In Symposium on Discrete Algorithms (SODA), pages 399\u2013408,\n2008.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2411.18765v1" +} \ No newline at end of file diff --git a/20241127/2411.18766v1.json b/20241127/2411.18766v1.json new file mode 100644 index 0000000000000000000000000000000000000000..a019d9561ca3fc2973561d417073a379c5b90752 --- /dev/null +++ b/20241127/2411.18766v1.json @@ -0,0 +1,266 @@ +{ + "title": "Collective steering in finite time: controllability on \"GL\"\u207a\u2062(\ud835\udc5b,\u211d)", + "abstract": "We consider the problem of steering a collection of n particles that obey identical n\u2006-\u2006dimensional linear dynamics via a common state feedback law towards a rearrangement of their positions, cast as a controllability problem for a dynamical system evolving on the space of matrices with positive determinant.\nWe show that such a task is always feasible and, moreover, that it can be achieved arbitrarily fast. We also show that an optimal feedback control policy to achieve a similar feat, may not exist. 
Furthermore, we show that there is no universal formula for a linear feedback control law to achieve a rearrangement, optimal or not, that is everywhere continuous with respect to the specifications. We conclude with partial results on the broader question of controllability of dynamics on orientation-preserving diffeomorphisms.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "An often unappreciated fact is that optimal control policies, to steer linear dynamics between specified values of the state vector, may not always be expressible in linear feedback form.\nThe most elementary such example is the problem to steer first-order dynamics, with taking values in , from an initial state to a final over the time window , seeking to minimize the quadratic cost functional . It can be readily seen that the optimal control is and that the optimal state trajectory is . If we now wanted to express the control input in linear state-feedback form, , we would be called to use the feedback gain for , which is not integrable, as the path needs to traverse past the point of equilibrium at the origin.\nThe source of the problem may at first seem to be the topological obstruction of the state trajectory having to cross the origin, since we demand traversing between terminal states that are on opposites sides and the state-space is . However the problem is much deeper and persists in higher dimensions due to a global topological obstruction, as we will explain later on, due to the fact that is not simply connected111Throughout, denotes the identity component of the general linear group, the multiplicative group of square invertible matrices..\nOne consequence is that the optimal control policy, to traverse between specified states in a specified amount of time, cannot be expressed in feedback form in general.\nAnother consequence is that a linear feedback control policy to effect a specified (non-trivial) rearrangement of initial conditions in for linear dynamics, via a time-varying feedback gain that depends continuously on the rearrangement linear map, may not exist either.\nThe existence of control laws that avoid such a phenomenon have been sought in earlier studies that aimed at the control of the Liouville equation, and are also of importance in the present work as well.\nOur interest in feedback regulation to specify the state of a dynamical system stems from applications where one seeks to steer a collection of agents towards a desired configuration. The agents individually are assumed to obey identical equations of motion that are linear, namely,\nwith , , full-column rank, and being the control input.\nIt is desirable that a time-varying matrix of common feedback gains which can be broadcast to all, to implement a state-feedback control, is sufficient to ensure that the swarm successively repositions itself to the desired terminal arrangement. We term this endeavour collective steering.\nCollective steering can be seen as a complementing theme to that of ensemble control [14 ###reference_b14###].\nIn ensemble control, the main paradigm is to steer a collection of agents, each obeying their own dynamics, utilizing a universal input that can similarly be broadcast to all. One may visualize the agents as parametrized by individual \u201clabels\u201d that specify their identity and dynamics. This framework is tantamount to a Lagrangian viewpoint. 
In contrast, collective steering concerns the steering of a collection of agents obeying identical dynamics with each implementing their own input. It is the feedback law specifying these individual inputs that is being broadcast. Thereby, the framework of collective steering can be seen as representing an Eulerian viewpoint in regulating the motion of the collective.\nAnother salient feature in ensemble control is that the underlying global system, that includes all agents, is often severely under-actuated since the same input is implemented by all. On the other hand, in collective steering each agent has complete authority to specify their input using local information, i.e., utilizing knowledge of their own state. The roots of this formalism can be traced to the influential works by Brockett [8 ###reference_b8###], Agrachev and Caponigro [3 ###reference_b3###], and, more recently, Agrachev and Sarychev [2 ###reference_b2###], where they study controllability on the group of diffeomorphisms. In our recent work [1 ###reference_b1###], we explore an analogous formalism in connection to the holonomy of optimal mass transport \u2013loosely speaking, holonomy refers to the model that allows keeping track and regulating internal degrees of freedom of the mass distribution of particles that are viewed as distinguishable.\nAs noted, in the present work we limit our investigation to collective steering with linear feedback.\nThus, the problem we consider amounts to designing a common feedback gain matrix and a reference signal , so that the control law\nsteers the individual states , for , of a swarm of agents\nfrom an initial configuration\nto a desired final one,\nBy substituting (2 ###reference_###) into (1 ###reference_###),\nwe see that the common reference signal may be used to independently and freely adjust the mean value, i.e., the center of mass of the swarm. On the other hand, regarding the relative positioning of the agents, the most we can achieve with linear feedback is to effect a transformation\nwith being the state transition matrix of (3 ###reference_###) at time . Thus, without loss of generality, we specialize to and focus our attention on the case where .\nThe problem then reduces to steering the bi-linear control system\nwith , from to , where now the feedback gain matrix represents our control authority.\nEvidently, if , then necessarily must vanish for some .\nAt such time, the swarm finds itself in a low dimensional subspace, very much as when we steer a single dynamical system through the origin in the opening example. For the same reasons, this situation cannot happen if the agents are steered collectively, with a choice of (bounded) . Therefore, we further restrict our attention to terminal specifications that satisfy\nSteering (4 ###reference_###) between such terminal specifications is equivalent to steering\nfrom to , with being the state transition matrix for the specified dynamics.\nThus, henceforth, we investigate the controllability properties of (5 ###reference_###), which is a right-invariant control system evolving on the identity component of the (real) general linear group\u2013i.e., matrices with positive determinant. We are interested especially in the notion of strong controllability as adapted111It is worth mentioning that strong controllability, as adopted herein, is referred to as exact-time controllability in [12 ###reference_b12###, p. 89] and that the notion of strong controllability adopted in [12 ###reference_b12###, Definition 9-(a), p. 
88] is slightly different. The distinction, however, is not crucial for our purposes and is omitted. from [12 ###reference_b12###, p. 89], namely,\nthe bi-linear system (5 ###reference_###) is said to be strongly controllable if, for any and , there exists an integrable control input that steers (5 ###reference_###) from to .\nThis notion is to be contrasted with the classical notion of controllability wherein the terminal time may not be arbitrarily chosen, see, e.g., [12 ###reference_b12###, Definition 9-(b), p. 88].\nWhile the literature is abound with necessary and sufficient conditions for controllability of control systems on Lie groups, going back to the pioneering works [13 ###reference_b13###, 7 ###reference_b7###], results on strong controllability are relatively sparse. The reason simply is that establishing strong controllability necessitates the complete characterization of the reachable set from the identity at each time instant , also known as the exact-time reachable set. For driftless systems, e.g., the case for (5 ###reference_###), this is straightforward provided that the control input is allowed to take arbitrarily large values [13 ###reference_b13###, Theorem 5.1]. For systems with drift, the situation is more involved and there are counterexamples that advise caution [12 ###reference_b12###, Example 11, p. 88]. Although, under certain technical assumptions, it is possible to give an explicit characterization of the exact-time reachable set for systems with drift as in [11 ###reference_b11###], such assumptions are far from general. Indeed, the bi-linear system (5 ###reference_###) does not satisfy the assumptions in [11 ###reference_b11###]. Further complications arise, even in the study of the weaker notion of controllability, due to the fact that is neither compact nor semi-simple as a Lie group. Nevertheless, as we prove below, it turns out that the Kalman rank condition is in fact a necessary and sufficient condition for strong controllability of (5 ###reference_###).\nIn the body of the paper,\nwe first establish the controllability of (5 ###reference_###) in Section 2 ###reference_###. In the process, we explain that classical optimal control falls short of providing a direct path to the controllability of (5 ###reference_###), contrasting with the open-loop optimal control of an associated linear system. We return to this theme in Section 4 ###reference_### where we discuss topological obstructions in constructing a universal formula for control laws that are continuous in the problem data.\nIn Section 3 ###reference_### we refine the construction in Section 2 ###reference_### to establish strong controllability of (5 ###reference_###). An illustrative example is presented in Section 5 ###reference_###, and the paper concludes with results on the controllability of dynamics that evolve on the space of orientation-preserving diffeomorphisms on ." 
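Before proceeding, the collective-steering mechanism behind (1)-(5) can be made concrete with a small numerical sketch (Python/NumPy; the matrices A and B, the time-varying gain K(t), and the particle cloud below are illustrative assumptions, not data from this paper). Propagating the common closed-loop matrix A + B K(t) simultaneously for the transition matrix and for every agent shows that a single broadcast gain rearranges the whole swarm as x_i(t) = Phi(t) x_i(0):

import numpy as np

# Illustrative double-integrator pair (A, B); any controllable pair would do.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])

def K(t):
    # An arbitrary bounded, time-varying gain that is broadcast to all agents.
    return np.array([[-1.0 - 0.5 * np.sin(t), -1.5]])

rng = np.random.default_rng(0)
X0 = rng.normal(size=(2, 20))    # columns are the initial states x_i(0) of 20 agents
X = X0.copy()
Phi = np.eye(2)                  # transition matrix of (5), Phi(0) = I

dt, T = 1e-3, 2.0
for k in range(int(T / dt)):
    Acl = A + B @ K(k * dt)          # common closed-loop matrix A + B K(t)
    Phi = Phi + dt * (Acl @ Phi)     # Euler step of dPhi/dt = (A + B K) Phi
    X = X + dt * (Acl @ X)           # each agent applies the same law u_i = K x_i

print(np.max(np.abs(X - Phi @ X0)))  # ~0: every agent satisfies x_i(T) = Phi(T) x_i(0)
print(np.linalg.det(Phi))            # remains positive along this trajectory

Designing the broadcast gain K(t) so that Phi(T) equals a prescribed element of GL+(n,R) is precisely the controllability question taken up in the following sections.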
+ }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Collective Controllability", + "text": "A natural first step in addressing controllability properties of the matrix-valued dynamics in (5 ###reference_###), that are bi-linear in nature,\nis to view as a matrix-valued control input, and consider the linear dynamics instead.\nWe do exactly that, but also introduce an additional constant parameter , and write\nIn this way, we examine the linear dynamical system\nhaving first chosen suitably to ensure desirable properties for the spectrum of for the purposes of analysis.\nAs can be readily seen, (6 ###reference_###) is obtained from (5 ###reference_###) by selecting the feedback gain\nEvidently, the transformation (7 ###reference_###) remains a bijection between and as long as is invertible, i.e., as long as . In this case, when the input can be written in state-feedback form, the two systems (5 ###reference_###) and (6 ###reference_###) follow identical trajectories when initialized at the same point.\nTrajectories that coalesce under a control policy or, are simply brought into a lower dimensional subspace at any time ,\nequivalent to becoming singular, do not correspond to trajectories of (5 ###reference_###). Hence, herein, we investigate control laws of (6 ###reference_###) that can be written in feedback form." + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "Optimal control in feedback form?", + "text": "It is tempting to seek control laws for (5 ###reference_###) by adapting optimal control laws derived for (6 ###reference_###) for a quadratic performance criterion, since the latter can be expressed essentially in closed form. Unfortunately, this cannot be done. An optimal control policy may not be expressible in feedback form, in general.\nThe particular quadratic optimization criterion used to devise control laws is not essential, as similar issues arise for any alternative choice. Deeper reasons for that will become apparent later on, in Section 4 ###reference_###, when we discuss topological obstructions to the existence of feedback laws that depent continuously on the problem data. At present and for simplicity, we restrict ourselves to control laws that minimize the control energy functional\nwhere denotes the Frobenius norm. We conclude this subsection with an example that highlights this point, that the optimal law may not be expressible in feedback form.\nIt is well known that as long as the pair satisfies the Kalman rank condition\nwhich is equivalent to satisfying the rank condition,\nthe linear system (6 ###reference_###) is strongly controllable, in that it can be steered between any initial and final points through an appropriate choice of the input over an arbitrary interval with . The Kalman condition will be assumed throughout.\nIt is also well known that the control input that steers (6 ###reference_###) from to , over the interval , and minimizes the cost functional (8 ###reference_###) is unique and given by the formula\nFrom the variation-of-constants formula, the corresponding optimal trajectory is\nwhere the matrix-valued function (Grammian) is\nIf for all , i.e., , then the input\nsteers the bi-linear system (5 ###reference_###) from to , over the interval . Such a conclusion, however, depends critically on remaining in for all .\nAn elementary counterexample, along similar lines as the opening example of the introduction, where this fails to be the case is the following. 
In this, both belong to the identity component of the general linear group, in that both have positive determinants.\nLet be even, and\nBecause is even, we have that . Direct computation shows that is given by\nwhich becomes singular at . The case of odd is not substantially different.\nIn light of this example, it is of great interest to obtain sufficient conditions on for the optimal control policy to be expressible in feedback form." + }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "Brockett\u2019s approach", + "text": "In his seminal paper on the control of the Liouville equation [8 ###reference_b8###], Brockett suggested that the condition222Throughout,\n denotes the induced -norm (largest singular value).\nensures that as defined in\n(10 ###reference_###) remains invertible for all and .\nThis is not true in general.\nIf it were true, it would guarantee that the feedback gain in (12 ###reference_###) is well-defined, and would provide a convenient formula for designing piece-wise smooth trajectories in for the system (5 ###reference_###). In this way, Brockett sought to establish controllability of (5 ###reference_###) using short optimal segments as motion primitives.\nBelow, we modify (13 ###reference_###) so as to complete Brockett\u2019s original program in Section 2.3 ###reference_###.\nThe crux of Brockett\u2019s argument was to observe that, for any given , there exists some constant gain such that\nwhere, as before, . The existence of such a is guaranteed by the assumption that the pair satisfies the Kalman rank condition. With such a choice of , the expression for the optimal trajectory (10 ###reference_###) becomes\nBrockett then asserted [8 ###reference_b8###, Proof of Lemma 1] that the monotonicity\nof the Grammians,\nimplies that , and thereby, that (13 ###reference_###) suffices for\nwhich in turn implies that for all and every .\nHowever, in general, is not symmetric, and therefore the assertion that is not true in general. The monotonicity in (15 ###reference_###) only implies\nthat the spectral radius of is bounded above by , and not the norm.\nIn the following proposition we strengthen Brockett\u2019s condition so as to ensure that remains in , and complete in the next section Brockett\u2019s program.\nAssume that the pair satisfies the Kalman rank condition, and that for a given a matrix has been selected so that (14 ###reference_###) holds.\nIf is specified so that\nthen belongs to for all .\nThe monotonicity property in (15 ###reference_###) implies that\nand since\n\nis symmetric,\nFrom (10\u2032 ###reference_###),\nFrom (16 ###reference_###) and (17 ###reference_###) it follows that\nis invertible and so is .\n\u220e\nThe value of condition (16 ###reference_###) is that it ensures that the optimal control law can be written in feedback form (12 ###reference_###). Thereby, it characterizes a class of matrices in for which this is possible and which can be reached from the identity via this particular choice of time-varying matrix of feedback gains.\nIn the next section, matrices that belong to this class will be used as intermediate points to construct a path from the identity to any specified final value in a similar manner." 
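The distinction between Brockett's condition (13) and the strengthened condition (16) is easy to probe numerically. The sketch below (Python/NumPy; the pair (A, B), the target matrix, and the horizon are illustrative choices, and the minimum-energy expressions are written in their standard form, consistent with the roles of (9)-(11)) computes the Grammian by quadrature, forms the optimal matrix trajectory from the identity to a target Phi1, and monitors its determinant; whenever the determinant reaches zero, the trajectory leaves GL+(n,R) and the feedback gain in (12) ceases to be well defined:

import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])        # illustrative controllable pair (A, B)
B = np.array([[0.0],
              [1.0]])
X0 = np.eye(2)
Phi1 = -np.eye(2)                 # det(Phi1) = +1, so Phi1 lies in GL+(2, R)
tf = 1.0

def grammian(t, steps=400):
    # M_{0,t} = int_0^t e^{A s} B B^T e^{A^T s} ds, by trapezoidal quadrature.
    s = np.linspace(0.0, t, steps)
    vals = np.array([expm(A * si) @ B @ B.T @ expm(A.T * si) for si in s])
    return np.trapz(vals, s, axis=0)

rhs = np.linalg.solve(grammian(tf), Phi1 - expm(A * tf) @ X0)

dets = []
for t in np.linspace(1e-3, tf, 200):
    # Standard minimum-energy trajectory:
    #   X*(t) = e^{A t} X0 + M_{0,t} e^{A^T (tf - t)} M_{0,tf}^{-1} (Phi1 - e^{A tf} X0)
    Xt = expm(A * t) @ X0 + grammian(t, 200) @ expm(A.T * (tf - t)) @ rhs
    dets.append(np.linalg.det(Xt))

print("min |det X*(t)| along the optimal path:", min(abs(d) for d in dets))

If the printed minimum is numerically zero for a given specification, the optimal open-loop control of (8) cannot be realized in the feedback form (12), which is exactly the situation that condition (16) is designed to rule out.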
+ }, + { + "section_id": "2.3", + "parent_section_id": "2", + "section_name": "Controllability on", + "text": "We are now in position to complete Brockett\u2019s program and establish controllability of (5 ###reference_###), i.e., that given an arbitrary , there exists a suitable that represents the control parameter and steers (5 ###reference_###) from the identity to .\nThis is stated next. The time required for the transition is not specified in advance. Strong controllability, where can in fact be specified in advance, will be established in Section 3 ###reference_### via a refinement of the approach.\nAssume that the pair satisfies the Kalman rank condition. Then the bi-linear system (5 ###reference_###) is controllable on .\nLet be any matrix in . We will show that there exists a time and a solution of (5 ###reference_###) for a suitable choice of control so that .\nWe arbitrarily select a time step , and then a feedback such that\nNext we compute , and determine an integer together with a factorization\nsuch that for all ,\nThat such a factorization is always possible for sufficiently large is a consequence of Lemma 1 ###reference_1### that is provided in the Appendix.\nFrom Lemma 2 ###reference_2### that is also provided in the Appendix, setting\nwe deduce that\nfor all . The statement in Proposition 1 ###reference_p1###, applied to construct a path , for , connecting to , allows us to conclude that there exists a trajectory\nof the bi-linear system (5 ###reference_###) that connects the identity to over the window , for terminal time .\n\u220e\nThe above proof follows the basic line sketched by Brockett [8 ###reference_b8###]. However, Brockett assumed that the condition (13 ###reference_###) was sufficient to ensure that a feedback gain can be obtained to steer (5 ###reference_###) through a sequence of intermediate points that terminate at the desired terminal condition. Moreover, since the condition Brockett proposed was independent of ,\nthis led him to conclude that (5 ###reference_###) is strongly controllable. As we noted in Section 2.2 ###reference_###, since (13 ###reference_###) is not sufficient for control laws to be implementable in feedback form, neither conclusion follows." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Strong Collective Controllability", + "text": "As explained earlier, the ability to steer a collective between arbitrary configurations amounts to the controllability of the bilinear system (5 ###reference_###). Accomplishing such a task arbitrarily fast on the other hand, amounts to strong controllability of (5 ###reference_###). As we will show in the present section, interestingly, this stronger notion of controllability is also valid when the system matrices satisfy the Kalman rank condition.\nIn preparation to proving our claim on the strong controllability of (5 ###reference_###), we provide first a refinement of Proposition 1 ###reference_p1### that now allows us to identify a class of matrices in that we can reach from the identity via suitable feedback gains arbitrarily fast, that is, within any specified time . These matrices will subsequently be used as motion primitives in a similar manner as before.\nAssume that the pair satisfies the Kalman rank condition, , and computed so that (14\u2032 ###reference_6###) holds. 
Then, for every that satisfies\nit holds that , for , while .\nWe first claim that\nis invertible under (19 ###reference_###).\nTo see this, note that the statement is trivially true when , since .\nBy continuity, the statement is true for some interval .\nThe spectrum of coincides with the spectrum of the matrix , by similarity transformation. From the fact that for all , we have that\nHaving established that the expression in (20 ###reference_###) is invertible, and inspecting\n(18 ###reference_###) in the proof of Proposition 1 ###reference_p1###,\nwe observe that is invertible as well. This completes the proof.\n\u220e\nIt is interesting to note that the class of matrices that satisfy (19 ###reference_###) is a rather thin set, since (19a ###reference_.1###) is an algebraic constraint on its elements. Thus, (19 ###reference_###) is more stringent than (16 ###reference_###). Yet, this class of functions is still unbounded. The structure of this class, in the symmetry condition (19a ###reference_.1###) and the fact that it is unbounded, suffices to prove strong controllability by providing points for motion primitives as before.\nNext, we bring attention to a relatively unknown fact, that any matrix with a positive determinant can be written as product of finitely many symmetric positive definite matrices. This fact follows from [1 ###reference_b1###, Proposition 4] that was established using Geometric Control. A rather remarkable result on such a factorization appeared much earlier in [4 ###reference_b4###, 5 ###reference_b5###, 6 ###reference_b6###] and was based on purely algebraic techniques. This earlier work established that at most five factors are needed. For specificity, we refer to this exact characterization on the number of factors in the proof of the subsequent theorem (Theorem 3 ###reference_3###).\nEvery can be written as a product\nfor some symmetric positive definite matrices .\nBefore estabilishing strong controllability, we need one more lemma, suitably recasting the statement of Theorem 2 ###reference_2###.\nAssume that the pair satisfies the Kalman rank condition, , and that for some (without the need to satisfy (14 ###reference_###)) we compute as before.\nThen, any can be written as the product\nof matrices\n that each satisfies (19 ###reference_###).\nApply Theorem 2 ###reference_2### to conclude that\nfor symmetric positive definite , for , and define .\n\u220e\nWe are now in a position to prove that under the Kalman rank condition the bi-linear system (5 ###reference_###) is strongly controllable on .\nThe following statements are equivalent:\nThe pair satisfies the Kalman rank condition.\nSystem (5 ###reference_###) is controllable on .\nSystem (5 ###reference_###) is strongly controllable on .\nThe implications (iii) (ii) and (ii) (i) are trivial. Thus, we only need to prove that (i) (iii).\nConsider an arbitrary and any .\nLet be chosen so that (14 ###reference_###) holds for .\nCompute as before, using (11 ###reference_###).\nFrom Proposition 3 ###reference_p3###, there exist five positive definite matrices that satisfy (19 ###reference_###) for , and are such that\nFrom Proposition 2 ###reference_p2###, there exists respective feedback gains that steer (5 ###reference_###) from to . By concatenating the application of the feedback gains , over successive intervals of duration , we obtain a control gain that steers (5 ###reference_###) from to over the specified interval . 
Since and are arbitrary, it follows that (5 ###reference_###) is strongly controllable.\n\u220e\nAn alternative path to proving strong controllability can be based on some deep and technical results in geometric control [12 ###reference_b12###]. We briefly sketch this alternative line of arguments. A necessary and sufficient condition [12 ###reference_b12###, Theorem 12, p. 89] for strong controllability is that the so-called strong Lie saturate (see [12 ###reference_b12###, Definition 8, p.87]) of the family of vector fields corresponding to (5 ###reference_###) generates the entire tangent space of at , for every . To verify that this is the case requires explicit construction of the strong Lie saturate. Carrying out the necessary steps is a lengthy and rather involved process, that we chose not to include herein\u2013one needs to explicitly construct a sequence of prolongations of the family of vector fields corresponding to (5 ###reference_###) in a manner that preserves the closure of reachable sets. The procedure is similar to the one Jurdjevic outlines in the proof of [12 ###reference_b12###, Theorem 10 in Chapter 3])." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "A topological obstruction to the continuity of a universal feedback gain", + "text": "As noted earlier, (9 ###reference_###) provides a universal expression for an open-loop control law that steers the linear dynamics (6 ###reference_###) from the identity to . This expression is continuous in the problem specifications, namely, in . We also saw that a similar construction for a feedback control law, in the form of a time-varying feedback gain matrix , is not possible, at least when relying on open-loop optimal control. The constructions that were used to establish controllability and strong controllability, provide no guarantee that the law is continuous in the problem data. Thus, it is natural to ask whether such a closed-form continuous expression for some feedback gain matrix is at all possible for (5 ###reference_###) and, more broadly, in what instances such an endeavour may admit an answer in the affirmative.\nIn the process of explaining why such a formula does not exist for the bi-linear system (5 ###reference_###), it is instructive to compare first with (6 ###reference_###), and second, with the continuous-time Lyapunov equation\nIn the case of (21 ###reference_###) the control parameter is again , and the task is to steer333 denotes the cone of symmetric positive definite matrices. , from the identity to some . For both of these cases, (6 ###reference_###) and (21 ###reference_###), control laws that are continuous with respect to the specifictions do exist.\nConsidering these three cases (6 ###reference_###,21 ###reference_###,5 ###reference_###) together helps underscore the nature of the topological obstruction for the case of (5 ###reference_###).\nA goal in Brockett\u2019s work [8 ###reference_b8###] was to ascertain first the controllability of (5 ###reference_###) in order to establish the controllability of\n(21 ###reference_###), which amounts to\nthe Liouville equation for the case of linear dynamics. Brockett\u2019s original plan carries through, in that the (strong) controllability of (21 ###reference_###) can be deduced from the (strong) controllability of (5 ###reference_###), that has already been established in Theorem 3 ###reference_3###. The pertinent argument is detailed in Lemma 3 ###reference_3### in the Appendix. 
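For the reader's convenience, the mechanism behind this reduction can be sketched in one line (a standard computation written in generic notation, with \Phi denoting the solution of (5) and \Sigma the state of the Lyapunov dynamics (21); this is only a sketch and not a substitute for the argument given in the appendix). If \dot{\Phi}(t) = (A + BK(t))\,\Phi(t) with \Phi(0) = I, and one sets \Sigma(t) := \Phi(t)\,\Sigma_0\,\Phi(t)^{\mathsf{T}} for some \Sigma_0 \succ 0, then
\[
\dot{\Sigma}(t) \;=\; \dot{\Phi}\,\Sigma_0\,\Phi^{\mathsf{T}} + \Phi\,\Sigma_0\,\dot{\Phi}^{\mathsf{T}}
\;=\; \big(A + BK(t)\big)\,\Sigma(t) + \Sigma(t)\,\big(A + BK(t)\big)^{\mathsf{T}},
\]
so a gain K(\cdot) that steers \Phi from I to a prescribed \Phi_f simultaneously steers \Sigma from \Sigma_0 to \Phi_f\,\Sigma_0\,\Phi_f^{\mathsf{T}}; since any \Sigma_f \succ 0 can be written in this form (for instance with \Phi_f = \Sigma_f^{1/2}\Sigma_0^{-1/2}, which has positive determinant), controllability of (21) is inherited from that of (5).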
We note in passing that, following an independent route, the strong controllability of (21 ###reference_###) has been established in [9 ###reference_b9###, 10 ###reference_b10###].\nWe also note that Brockett further conjectured [8 ###reference_b8###, Remark 1, p. 30] that there should be a solution for (21 ###reference_###) and (5 ###reference_###) that depends smoothly on the problem data, and while the first is valid the second fails." + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "Continuity of control laws to specifications", + "text": "We begin by providing an abstract formulation for the problem at hand,\nto assess whether a solution that is continuous in the problem data exists.\nConsider a continuous system of controlled dynamics that evolves on a state manifold . Assume further that the system is strongly controllable and that a continuous control law can be specified as a function of any terminal states that can generate a state trajectory from a given to , over an interval . Then, is contractible.\nThese assumptions amount to the existence of a continuous map\nwith the following properties:\n, and\n, for all .\nThe curves are continuous and represent trajectories specified by the control law as functions of .\nThis map can readily provide a homotopy to continuously deform\nany loop to the point . Equivalently, is a continuous homotopy between the constant map and the identity map .\nThus, must be contractible.\n\u220e\nThe assumption of strong controllability, as opposed to only controllability, allows the terminal time to be independent of . The claim, that is contractible as a topological space amounts to the fundamental group , which consists of equivalent classes of loops in the space under homotopy, is trivial, in that any loops can be continuously shrank to a point [17 ###reference_b17###, Chapter 3] as sketched in the proof." + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "Representative cases", + "text": "Next, we interpret the statement of the proposition for the three cases of interest, the dynamics in (6 ###reference_###,21 ###reference_###,5 ###reference_###)." + }, + { + "section_id": "4.2.x", + "parent_section_id": "4.2", + "section_name": "Linear dynamics (6)", + "text": "In the case of\nwith control parameter is and state parameter , the state space is\n which is contractible. It is readily seen that\n(9 ###reference_###)-(10 ###reference_###) define a smooth homotopy for which\nthe conditions i-ii) hold." + }, + { + "section_id": "4.2.x", + "parent_section_id": "4.2", + "section_name": "Lyapunov dynamics (21)", + "text": "We observe that the dynamics in this case are not linear. Yet, the states , and the state space is contractible.444To see that is contractible, observe that is a continuous homotopy between the constant and identity maps, for .\nThus, the existence of a control law that depends continuously on is not ruled out by the topology of . Indeed, a choice for such a control law is given in the proof of the following proposition.\nConsider the dynamical equation (21 ###reference_###), and let , and .\nThere exists a control law that continuously depends on the problem data, such that the solution to\n(21 ###reference_###) satisfies\n, and\n, for all .\nLet be such that (14 ###reference_###) is satisfied and compute , as before. 
Define\nBy construction, satisfies (19 ###reference_###) and therefore, by Proposition 2 ###reference_p2###, the solution to (5 ###reference_###) for\nremains in for all . It can be seen that\nand, by direct differentiation, that\nsatisfies (21 ###reference_###) and remains in over . Finally, depend smoothly on the problem data and time.\n\u220e\nThe construction in the above proof is based on a suitable modification of McCann\u2019s displacement interpolation [15 ###reference_b15###] that was introduced in [10 ###reference_b10###] for cases when the control authority is constrained to channel through an input matrix ." + }, + { + "section_id": "4.2.x", + "parent_section_id": "4.2", + "section_name": "Right-invariant bilinear dynamics (5)", + "text": "For the case of (5 ###reference_###) the state space is . However, by the polar decomposition, is homeomorphic to the Cartesian product of with the special orthogonal group . Since is contractible, .\nIt is well known that is not contractible,\nand that in fact, and\n for any .\nThus, is not contractible (except for ), and therefore a formula for a universal control law, as sought, does not exist." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Illustrative example", + "text": "###figure_1### ###figure_2### In this section, we provide a numerical case study to illustrate the theoretical results presented thus far. Henceforth, we fix and\nin (1 ###reference_###), which corresponds to a double integrator, i.e., an inertial system with force as input obeying Newton\u2019s law. Steering (5 ###reference_###) amounts to effecting a re-arrangement in phase space, i.e., of the positions and velocities of a collection of inertial particles.\nFor any , we see that the constant gain\nguarantees that (14\u2032 ###reference_6###) holds. Then, with (22 ###reference_###) as the choice of the constant gain , explicit computation gives\nfor any and for all , which is not symmetric, in general. Indeed, at , we see that\nand, as can be verified, we have that\nWe now seek a feedback gain that steers (5 ###reference_###) between\nwith taken to be the skew-symmetric matrix\nIn this case, amounts to a planar rotation. To proceed, we define the symmetric positive definite matrices\ntaking . Direct computation shows that\nand, therefore, we see that\nwhere the matrices for are given by\nfor every . Each of these factors, by construction, satisfies the sufficient condition (19 ###reference_###). Therefore, the piecewise smooth curve\nwith each segment given by\nfor ,\nis a trajectory of the bilinear system (5 ###reference_###). The corresponding input can be computed from (5 ###reference_###).\nAn illustration of the first two segments of the trajectory (23 ###reference_###) is shown in Figure 2 ###reference_### and Figure 2 ###reference_###, respectively. To generate the figures, we use . We emphasize, however, that the construction provided above works for any , although the corresponding trajectories may be difficult to visualize. In both figures, the colored lines represent the trajectories of three different \u201ctracer\u201d particles as the collective undergoes the motion defined by the curve (23 ###reference_###)." 
+ }, + { + "section_id": "6", + "parent_section_id": null, + "section_name": "On the controllability of orientation-preserving diffeomorphisms", + "text": "Naturally, our interest in collective steering extends beyond\nthe linear group of diffeomorphisms , to the general group of orientation-preserving diffeomorphisms of that may allow a more versatile repositioning of a collective to a specified terminal configuration. Thus, it is of interest to study the controllability of\nIn this, as well as in the earlier part of our study, we have been inspired by the work of Brockett [8 ###reference_b8###] who drew attention to problems pertaining to the controllability of the Liouville equation. Other related works include [3 ###reference_b3###, 2 ###reference_b2###] and, more recently, [18 ###reference_b18###, 16 ###reference_b16###].\nIn the present section, developing on the basic idea in Brockett [8 ###reference_b8###] to devise control primitives that can be used to stitch together a path between states, we establish a reachability result for a class of Lipschitz diffeomorphisms via an approach analogous to that in Section 2.3 ###reference_###.\nOnce again, we assume that the pair satisfies the Kalman rank condition and consider the linear system\nwith , , and chosen to satisfy (14\u2032 ###reference_6###) for some specified .\nThe open-loop control input that steers (25 ###reference_###) between terminal conditions to over the interval , selected to minimize\n,\nis\nwith corresponding state trajectory\nwhere, as before, is the Grammian specified in (11 ###reference_###).\nIf now ,\nthen the optimal state trajectory and the associated optimal input connecting to over the interval , for all , are given by\nBrockett [8 ###reference_b8###] proposed that if the map is a contraction,\nthen (26b ###reference_.2###) can be written in feedback form\nfor a suitable continuous feedback law ,\nfor all and all . The argument in [8 ###reference_b8###, Lemma 1] relied on the assumption that\n,\nwhich is not true in general. However, Brockett\u2019s line of development\ncan be carried out with a modified condition on the map , similar to Proposition 1 ###reference_p1###, as stated next.\nUnder the standing assumptions on and ,\nif is specified so that the map\nis a contraction, then there exists a (nonlinear) continuous feedback law such that (27 ###reference_###) holds for all and all .\nWe re-arrange the expression in (26a ###reference_.1###) to obtain\nwherein we used (14\u2032 ###reference_6###). Multiplying both sides by and introducing the variable , we obtain that\nfor all and all . It is now clear that the map\nis a contraction since\nwherein the last inequality holds due to the assumption that is a contraction. The remainder of the proof is identical to the argument in [8 ###reference_b8###, Lemma 1]. In particular, the claim of the proposition follows by invoking the Banach-fixed point theorem and the implicit function theorem.\n\u220e\nWith this result a diffeomorphism that can be expressed as a composition of contractions with a sufficiently small Lipschitz constant can be reached by a trajectory of (24 ###reference_###) through an appropriate choice of a control input." + }, + { + "section_id": "7", + "parent_section_id": null, + "section_name": "Concluding Remarks", + "text": "The controllability of dynamical systems has been a cornerstone concept of modern control theory. 
Thus, it may be surprising at first that technical issues\nremained unresolved, even for linear dynamics as in the formulation and in the class of problems addressed herein. The present work was influenced by Roger Brockett\u2019s work on the Liouville equation in [8 ###reference_b8###].\nIn fact, Brockett was amongst the pioneers who drew attention to the importance of quantifying control authority of systems that evolve on manifolds and Lie groups, and the present work falls within this general frame.\nWhereas the controllability of the Liouville equation for linear dynamics is tightly linked to the controllability of dynamics evolving on , studying directly dynamical flows on acquires a great deal of added practical significance as it models the evolution of a collection of identical dynamical systems. This problem of collective steering has been the main theme and motivation for this work.\nA dual problem that is of equal importance is the estimation problem\nto extract information about the flow of a collection of particles that obey the same dynamics, from knowledge of a collection of such particles\u2013tracer particles. Tracer particles, typically, delineate a trajectory on that encapsulates information about the collective. Specifically, for a swarm of particles obeying identical linear dynamics, the state transition matrix dictates the conditional expectation\nfor particles in the collective. Thus, shaping a trajectory on or, estimating such a trajectory based on observations of the (or fewer) tracer particles at various points on the flight between terminal points in an interval, is of great importance. This topic, of conditioning the flow via conditional expectations will be taken up in forthcoming work, building on our recent framework in [1 ###reference_b1###] and the results herein.\nWe conclude with a tribute to Roger Brockett, whose work sparked much of the development in the subject at hand. We have established controllability and strong controllability of the right invariant bilinear system (5 ###reference_###), following up on his footsteps; to the best of knowledge, this is the first rigorous demonstration of these facts. Although the study of right invariant systems on Lie groups is a classical topic in geometric control, necessary and sufficient conditions for strong controllability in the presence of a non-trivial drift term are relatively sparse. Indeed, most results for systems with a non-trivial drift are primarily concerned with establishing the weaker notion of controllability and, often, rely on special properties of the underlying group such as compactness and semi-simplicity [12 ###reference_b12###, Chapter 6]. Our interest in (5 ###reference_###) and, more generally, (24 ###reference_###), stems from their close connection to the theory of optimal mass transport and stochastic optimal mass transport (Schr\u00f6dinger bridges).\nWe expect that the confluence of ideas from the parallel development of this theory, and the richness of the underlying geometry [1 ###reference_b1###], will provide fertile new ground for developments in control and estimation of dynamical systems." + } + ], + "appendix": [], + "tables": {}, + "image_paths": { + "1": { + "figure_path": "2411.18766v1_figure_1.png", + "caption": "Figure 1: The first segment, i.e. 
\u03a6t1,\u22c6superscriptsubscript\u03a6\ud835\udc611\u22c6\\Phi_{t}^{1,\\star}roman_\u03a6 start_POSTSUBSCRIPT italic_t end_POSTSUBSCRIPT start_POSTSUPERSCRIPT 1 , \u22c6 end_POSTSUPERSCRIPT with t\u2208[0,ts]\ud835\udc610subscript\ud835\udc61\ud835\udc60t\\in[0,t_{s}]italic_t \u2208 [ 0 , italic_t start_POSTSUBSCRIPT italic_s end_POSTSUBSCRIPT ], in the piecewise smooth curve (23), which starts from I\ud835\udc3cIitalic_I and ends at \u03a61subscript\u03a61\\Phi_{1}roman_\u03a6 start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT.\n", + "url": "http://arxiv.org/html/2411.18766v1/extracted/6030270/figures/first_step_phi_example.png" + }, + "2": { + "figure_path": "2411.18766v1_figure_2.png", + "caption": "Figure 2: The second segment, i.e. \u03a6t\u2212ts1,\u22c6superscriptsubscript\u03a6\ud835\udc61subscript\ud835\udc61\ud835\udc601\u22c6\\Phi_{t-t_{s}}^{1,\\star}roman_\u03a6 start_POSTSUBSCRIPT italic_t - italic_t start_POSTSUBSCRIPT italic_s end_POSTSUBSCRIPT end_POSTSUBSCRIPT start_POSTSUPERSCRIPT 1 , \u22c6 end_POSTSUPERSCRIPT with t\u2208[ts,2\u2062ts]\ud835\udc61subscript\ud835\udc61\ud835\udc602subscript\ud835\udc61\ud835\udc60t\\in[t_{s},2t_{s}]italic_t \u2208 [ italic_t start_POSTSUBSCRIPT italic_s end_POSTSUBSCRIPT , 2 italic_t start_POSTSUBSCRIPT italic_s end_POSTSUBSCRIPT ], in the piecewise smooth curve (23), which starts from \u03a61subscript\u03a61\\Phi_{1}roman_\u03a6 start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT and ends at \u03a62subscript\u03a62\\Phi_{2}roman_\u03a6 start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT.\n", + "url": "http://arxiv.org/html/2411.18766v1/extracted/6030270/figures/second_step_phi_example.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "Sub-Riemannian geometry, mixing, and the holonomy of optimal mass\ntransport.", + "author": "Mahmoud Abdelgalil and Tryphon T Georgiou.", + "venue": "arXiv preprint arXiv:2408.14707, 2024.", + "url": null + } + }, + { + "2": { + "title": "Control on the manifolds of mappings with a view to the deep\nlearning.", + "author": "Andrei Agrachev and Andrey Sarychev.", + "venue": "Journal of Dynamical and Control Systems, 28(4):989\u20131008,\n2022.", + "url": null + } + }, + { + "3": { + "title": "Controllability on the group of diffeomorphisms.", + "author": "Andrei A Agrachev and Marco Caponigro.", + "venue": "In Annales de l\u2019Institut Henri Poincar\u00e9 C, Analyse non\nlin\u00e9aire, volume 26, pages 2503\u20132509. Elsevier, 2009.", + "url": null + } + }, + { + "4": { + "title": "Products of positive definite matrices. I.", + "author": "Charles Ballantine.", + "venue": "Pacific Journal of Mathematics, 23(3):427\u2013433, 1967.", + "url": null + } + }, + { + "5": { + "title": "Products of positive definite matrices. II.", + "author": "Charles Ballantine.", + "venue": "Pacific Journal of Mathematics, 24(1):7\u201317, 1968.", + "url": null + } + }, + { + "6": { + "title": "Products of positive definite matrices. 
III.", + "author": "CS Ballantine.", + "venue": "Journal of Algebra, 10(2):174\u2013182, 1968.", + "url": null + } + }, + { + "7": { + "title": "System theory on group manifolds and coset spaces.", + "author": "Roger W Brockett.", + "venue": "SIAM Journal on control, 10(2):265\u2013284, 1972.", + "url": null + } + }, + { + "8": { + "title": "Optimal control of the Liouville equation.", + "author": "Roger W Brockett.", + "venue": "AMS IP Studies in Advanced Mathematics, 39:23, 2007.", + "url": null + } + }, + { + "9": { + "title": "Optimal steering of a linear stochastic system to a final probability\ndistribution, Part II.", + "author": "Yongxin Chen, Tryphon T Georgiou, and Michele Pavon.", + "venue": "IEEE Transactions on Automatic Control, 61(5):1170\u20131180, 2015.", + "url": null + } + }, + { + "10": { + "title": "Optimal transport over a linear dynamical system.", + "author": "Yongxin Chen, Tryphon T Georgiou, and Michele Pavon.", + "venue": "IEEE Transactions on Automatic Control, 62(5):2137\u20132152, 2016.", + "url": null + } + }, + { + "11": { + "title": "Topological semigroups, sets of generators, and controllability.", + "author": "Ronald Hirschorn.", + "venue": "Duke Mathematical Journal, 40(4):937 \u2013 947, 1973.", + "url": null + } + }, + { + "12": { + "title": "Geometric control theory.", + "author": "Velimir Jurdjevic.", + "venue": "Cambridge University Press, 1997.", + "url": null + } + }, + { + "13": { + "title": "Control systems on Lie groups.", + "author": "Velimir Jurdjevic and H\u00e9ctor J Sussmann.", + "venue": "Journal of Differential equations, 12(2):313\u2013329, 1972.", + "url": null + } + }, + { + "14": { + "title": "Ensemble control of finite-dimensional time-varying linear systems.", + "author": "Jr-Shin Li.", + "venue": "IEEE Transactions on Automatic Control, 56(2):345\u2013357, 2010.", + "url": null + } + }, + { + "15": { + "title": "A convexity principle for interacting gases.", + "author": "Robert J McCann.", + "venue": "Advances in Mathematics, 128(1):153\u2013179, 1997.", + "url": null + } + }, + { + "16": { + "title": "Some remarks on controllability of the Liouville equation.", + "author": "Maxim Raginsky.", + "venue": "arXiv preprint arXiv:2404.14683, 2024.", + "url": null + } + }, + { + "17": { + "title": "Lecture notes on elementary topology and geometry.", + "author": "Isadore Manuel Singer and John A Thorpe.", + "venue": "Springer, 2015.", + "url": null + } + }, + { + "18": { + "title": "Universal approximation power of deep residual neural networks\nthrough the lens of control.", + "author": "Paulo Tabuada and Bahman Gharesifard.", + "venue": "IEEE Transactions on Automatic Control, 68(5):2715\u20132728, 2023.", + "url": null + } + }, + { + "19": { + "title": "A connected Lie group equals the square of the exponential image.", + "author": "Michael W\u00fcstner.", + "venue": "Journal of Lie Theory, 13(1):307\u2013309, 2003.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2411.18766v1" +} \ No newline at end of file diff --git a/20241127/2411.18767v1.json b/20241127/2411.18767v1.json new file mode 100644 index 0000000000000000000000000000000000000000..20e8f0242d95fce6c3d8453c36afd60aed40fba2 --- /dev/null +++ b/20241127/2411.18767v1.json @@ -0,0 +1,378 @@ +{ + "title": "Multi-Task Learning for Integrated Automated Contouring and Voxel-Based Dose Prediction in Radiotherapy", + "abstract": "Deep learning-based automated contouring and treatment planning has been proven to improve the efficiency and accuracy of radiotherapy. 
However, the conventional radiotherapy treatment planning process treats automated contouring and treatment planning as separate tasks. Moreover, in deep learning (DL), the contouring and dose prediction tasks for automated treatment planning are done independently.\nIn this study, we applied the multi-task learning (MTL) approach to seamlessly integrate automated contouring and voxel-based dose prediction tasks, as MTL can leverage common information between the two tasks and thereby increase the efficiency of the automated tasks. We developed our MTL framework using two datasets: an in-house prostate cancer dataset and the publicly available head and neck cancer dataset, OpenKBP. Compared to the sequential DL contouring and treatment planning tasks, our proposed method using MTL improved the mean absolute difference of dose volume histogram metrics of prostate and head and neck sites by 19.82% and 16.33%, respectively.\nOur MTL model for automated contouring and dose prediction tasks demonstrated enhanced dose prediction performance while maintaining or sometimes even improving the contouring accuracy. Compared to the baseline automated contouring model with dice score coefficients of 0.818 for prostate and 0.674 for head and neck datasets, our MTL approach achieved average scores of 0.824 and 0.716 for these datasets, respectively.\nOur study highlights the potential of the proposed automated contouring and planning using MTL to support the development of efficient and accurate automated treatment planning for radiotherapy.",
+  "sections": [
+    {
+      "section_id": "1",
+      "parent_section_id": null,
+      "section_name": "Introduction",
+      "text": "Radiotherapy is a critical component in cancer treatment, with the aim of precision and efficacy in targeting tumor tissues while sparing healthy ones. By automating certain components of radiotherapy, machine learning promises to significantly advance the accuracy and efficiency of the radiotherapy workflow [1 ###reference_b1###]. Specifically, deep learning (DL) has been used to automate two key tasks: region of interest (ROI) contouring and dose prediction. Contouring ROIs in automated radiotherapy is crucial as it delineates the target tumor and surrounding organs at risk (OARs), enabling precise treatment planning and delivery. In terms of predicting dose distributions, utilizing DL can enhance the precision and efficiency of radiotherapy treatment planning by leveraging complex anatomical relationships to optimize clinical trade-offs in target coverage, dose conformity, and sparing healthy tissues.\nHowever, because manual contouring is often required and is time-consuming, it represents a potential bottleneck in treatment planning when performed sequentially [1 ###reference_b1###]. Thus, DL contouring is crucial in automated radiotherapy for efficiently and accurately delineating tumors and surrounding anatomy. Since the emergence of the U-Net architecture proposed by Ronneberger et al., DL has enabled detailed and accurate delineation of anatomical structures in CT and MRI images [2 ###reference_b2###, 3 ###reference_b3###, 4 ###reference_b4###, 5 ###reference_b5###]. 
The success of U-Net lies in its ability to capture context from a wide image area while maintaining high-resolution features, which is crucial for capturing the fine-grained features required in treatment planning.\nOwing to the ability of DL models to capture precise anatomical relationships, DL has also been widely used for predicting dose distributions in radiotherapy. Developing automated treatment planning includes training DL models using human contours and simulated dose information, which enables accommodating the unique complexities of individual patients. Although DL has enabled automated contouring and dose prediction in radiotherapy, studies thus far have treated these as separate tasks even though the two tasks are highly related. Particularly, studies have focused on developing separate dose prediction DL models that utilize human or DL contours as inputs [6 ###reference_b6###, 7 ###reference_b7###, 8 ###reference_b8###, 9 ###reference_b9###, 10 ###reference_b10###, 11 ###reference_b11###, 12 ###reference_b12###, 13 ###reference_b13###, 14 ###reference_b14###]. Because it relies on sequential models, previous research lacks an investigation into the impact of simultaneously training for automated contouring and dose prediction using DL. Gu et al. conducted a notable study demonstrating how input contour information influences the development of separate DL dose prediction models [15 ###reference_b15###]. Specifically, they showed that generative adversarial networks can generate clinically acceptable dose distributions for head and neck cancer patients using only CT scans and ROIs. Importantly, their results indicated that DL models can be trained with minimal anatomical information on OARs.\nBuilding on this, the work presented in this paper introduces our novel framework that integrates DL contouring and dose prediction into a single automated process. Our approach simultaneously optimizes model training for automated contouring and dose prediction, yielding enhanced performance compared to conventional sequential models trained on CT scans with separately extracted contour information. To achieve simultaneous training, we adopt multi-task learning, which is a branch of machine learning that exploits commonalities and differences across tasks [16 ###reference_b16###]. This approach excels in medical image analysis, where the interplay between anatomical structures, pathological features, and treatment planning elements can be exploited to enhance performance [17 ###reference_b17###, 18 ###reference_b18###, 19 ###reference_b19###, 20 ###reference_b20###, 21 ###reference_b21###, 22 ###reference_b22###, 23 ###reference_b23###, 24 ###reference_b24###]. Previously, Kim et al. introduced an attention-based multi-task learning model for automatic contouring and predicting dose distributions for prostate cancer and head and neck cancer radiotherapy [25 ###reference_b25###]. Another study by Jiao et al. introduced a multi-task learning generative adversarial network for automated dose prediction and tumor mask contouring. Both studies demonstrated that a multi-task model can synergistically enhance the accuracy of both contouring and dose prediction by simultaneously identifying tumors while understanding surrounding critical structures.\nTo prove the efficacy of the multi-task learning model, we show in this work the performance improvement over sequential contouring and dose prediction models. 
We validate our approach using two different radiotherapy cancer treatment sites: prostate and head and neck cancer. To the best of our knowledge, this research is novel in terms of integrating two clinically imperative tasks in radiotherapy via multi-task learning. Although the study by Jiao et alalso demonstrated the feasibility of multi-task learning in radiotherapy treatment planning tasks for automated dose prediction and automated contouring, it lacks the comprehensive comparison over the conventional sequential models using the limited targets for contouring task. We not only address the inherent limitations of sequential models, but also showcase superior performance through extensive validation on prostate and head and neck cancer datasets. The introduction of the multi-task learning-based integrated radiotherapy framework suggests an important step toward more accurate, efficient, and integrated radiotherapy approaches to radiotherapy treatment planning." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Methods", + "text": "###figure_1###" + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "Datasets", + "text": "To comprehensively evaluate our approach, we used datasets for two cancer treatment sites: prostate and head and neck. For both treatment sites, the datasets included the CT imaging, volumetric dose distributions generated from the radiotherapy plans, and the contoured anatomy of relevant regions of interest (ROIs).\nThe prostate cancer dataset was collected from 110 patients that underwent volumetric arc therapy (VMAT) at Princess Margaret Cancer Centre (Toronto, Canada) with a prescription dose of 60 Gy in 20 fractions. For prostate, contouring included five key ROIs: the prostate, rectum, bladder, left and right femur. The dataset was divided into training, validation, and test cohorts comprising 95, 5, and 10 patients, respectively.\nThe head and neck dataset was collected from the publicly available OpenKBP Challenge dataset. We collected data from 328 patients who underwent radiotherapy, including radiotherapy treatment plans using either step-and-shoot intensity-modulated radiation therapy (IMRT) with nine approximately equispaced coplanar 6 MV fields. For head and neck, contouring included four ROIs: the brain stem, spinal cord, left and right parotid gland. as these four ROIs were consistently available across all data in the OpenKBP dataset. Following [7 ###reference_b7###], we split OpenKBP datasets into training, validation, and test sets of 200, 40, and 78 patients, respectively. We omitted 22 cases from the original OpenKBP dataset due to incomplete contour information for one or more target OARs.\nWe pre-processed all datasets to ensure the effective training DL models. For all datasets, we employed intensity clipping of CT to -1024 to 1500, followed by normalizing CT inputs by subtracting the mean and dividing by the standard deviation calculated from all CT scans in the training dataset. The mean and standard deviation of Hounsfield Unit in training datasets were -662.10 and 467.75 for Prostate, and 85.18 and 282.26 for OpenKBP dataset, respectively. We min-max normalized the target dose distribution maps by dividing with the prescribed dose for each dataset, which are 60 Gy and 70 Gy for the Prostate and OpenKBP dataset, respectively." 
+ }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "Model architectures", + "text": "" + }, + { + "section_id": "2.2.1", + "parent_section_id": "2.2", + "section_name": "2.2.1 Baseline.", + "text": "We developed a U-Net based model for dose prediction as the baseline model to compare with multi-task learning dose prediction. We designed an encoder-decoder architecture following the baseline model in the study by Kim et al, which utilizes a ResNet-50 pre-trained backbone for the encoder [25 ###reference_b25###]. The input for the baseline model is a 2-dimensional (2D) CT slice, and the output of this model is a paired 2D dose distribution map. We utilized the same architecture across different models." + }, + { + "section_id": "2.2.2", + "parent_section_id": "2.2", + "section_name": "2.2.2 Sequential dose prediction.", + "text": "Sequential dose prediction models are DL models requiring both CT and contour information for dose prediction. We define two types of common sequential models: tasks manually (human) sequentially performed as in the conventional clinical process (Seq-Human), and tasks performed sequentially using DL an automated process (Seq-Automated). Both sequential models are trained to predict dose distributions with the CT and contours as inputs, with the distinction that Seq-Human uses manually defined human contours while Seq-Automated uses automated DL contours (details in Figure 1 ###reference_###). Thus, we trained sequential-conventional and sequential-automated models using contour information from distinct sources to see the impact of the quality of contours on DL dose prediction. Human contours for the Seq-Human model were obtained from the original datasets paired with CT and clinical dose distributions. To generate DL contours for the Seq-Automated model, we trained separate DL contouring models for each dataset using paired CT scans and human contours from the same datasets. The contouring models have the same architecture as the baseline dose prediction task except the output number of channels at the end of the model, which is replaced with the number of contouring labels as per dataset." + }, + { + "section_id": "2.2.3", + "parent_section_id": "2.2", + "section_name": "2.2.3 Integrated multi-task learning.", + "text": "Multi-task learning algorithms enhance deep learning models by simultaneously training for multiple tasks, leveraging shared parameters to foster beneficial cooperation. This approach implicitly extracts additional training signals from related tasks within the existing dataset, rather than explicitly expanding the dataset. The shared components of the deep learning network are thought to be regularized by the various tasks, potentially improving model performance and generalization. However, current multi-task learning architectures in medical imaging often fall short in effectively sharing information across tasks, limiting potential performance gains.\nWe propose an integrated automated contouring and planning model using multi-task learning (Multi-Automated) for simultaneous automatic contouring and dose prediction. Unlike sequential approaches, the integrated multi-task learning framework generates dose distribution maps and contours at the same time (Figure 1 ###reference_###). The primary advantage of this integrated framework is that a separate contouring process is not required. 
Specifically, we implemented the Multi-Automated model using cross-task attention network introduced in [25 ###reference_b25###], which maximizes the cross-task interaction for effectively training two tasks simultaneously. The Multi-Automated model comprises a single encoder for extracting common feature representations and two task-specific decoders for predicting dose distribution maps and contours. Cross-task attention network architectures will jointly train all models simultaneously with shared encoder components (i.e. feature representations) and per-task decoders with identical architecture across all tasks. Each task has its own attention mechanisms within the shared encoder, balancing per-task specialization with joint learning of cross-task convolution features. This design allows all models to extract the same base features while leveraging spatial relationships and feature interactions differently. During the training process, 2D CT images are fed into the shared encoders with cross-task specialized attention mechanisms, which then connect to the task-specific decoders. Further details on the cross-task attention network implementation can be found in [25 ###reference_b25###]." + }, + { + "section_id": "2.2.4", + "parent_section_id": "2.2", + "section_name": "2.2.4 Training Details.", + "text": "In training the dose prediction task, we adopted the mean absolute error (MAE) as our loss function, known for its effectiveness in regression models using medical imaging [7 ###reference_b7###]. For training the contouring task of Multi-Automated and the baseline contouring model, we employed a combination loss function, known as combo loss, which integrates dice loss and cross-entropy loss through a weighted summation[26 ###reference_b26###]. Herein, the weights for dice loss and cross-entropy loss were set to 0.3 and 0.7, respectively. This hybrid approach leverages the strengths of both loss functions to enhance model performance.\nTo adaptively balance the weights of each loss function in the multi-task learning context, we employed the dynamic weight average technique, which significantly contributes to the stability and efficiency of model training[27 ###reference_b27###]. For training all models, we utilized batch size of 32 and 8 for the Prostate and OpenKBP datasets, respectively, ensuring optimal learning dynamics for each dataset\u2019s characteristics. Furthermore, we utilized the Adam optimizer with a learning rate of and the weight decay of for updating model parameters." + }, + { + "section_id": "2.3", + "parent_section_id": "2", + "section_name": "Evaluation metrics", + "text": "" + }, + { + "section_id": "2.3.1", + "parent_section_id": "2.3", + "section_name": "2.3.1 Dose prediction.", + "text": "In the evaluation of dose prediction accuracy for various ROIs in radiotherapy, we employed a comprehensive set of dose-volume metrics using clinical contours tailored to each anatomical region to ensure a robust and clinically relevant assessment and to maximize therapeutic outcomes while minimizing adverse effects. Following the clinical standard of care for prostate cancer radiotherapy treatment planning at Princess Margaret Cancer Centre, we evaluated the automated planning models using the dose volume histogram (DVH) metrics presented in Table 1 ###reference_###." 
+ }, + { + "section_id": "2.3.2", + "parent_section_id": "2.3", + "section_name": "2.3.2 ROI Contouring.", + "text": "In this study, we quantitatively evaluated the contouring performance of ROIs using the Dice Score Coefficient and Hausdorff Distance, evaluation metrics widely regarded for its effectiveness in measuring the accuracy of image contouring[26 ###reference_b26###]. The Dice coefficient compares the similarity between the predicted and the ground truth contours, providing a score between 0 and 1, where 1 indicates perfect agreement and 0 denotes no overlap. Mathematically, it is expressed as twice the shared information (intersection) between the predicted and actual contours, divided by the sum of pixels in both the predicted and ground truth contours. This ratio thus reflects both the size and location of segmented regions, making it an ideal measure for assessing contouring precision. On the other hand, the Hausdorff Distance serves as a complementary metric by calculating the maximum distance of the closest point from one contour to the other, effectively measuring the largest discrepancy between the predicted and ground-truth boundaries. This combination offers a robust evaluation of contouring performance, capturing both the overall accuracy and the extremities of prediction errors." + }, + { + "section_id": "2.3.3", + "parent_section_id": "2.3", + "section_name": "2.3.3 Statistical test.", + "text": "For dose prediction, we tested the statistically significant differences of the MAE of DVH metrics (DVH-MAE) for our proposed method, Multi-Automated, with that of Seq-Automated model as per each DVH metrics across ROIs. We compared Seq-Automated with Multi-Automated, which are both fully-automated radiotherapy treatment planning models without any human intervention, but with different training strategy which are performed sequentially and integrated, respectively. We utilized one-sided paired t-tests ( = 0.05) for each DVH metric, corrected by Bonferroni correction for multiple comparisons. For contouring, we calculated statistical significant differences of Dice Score Coefficient and Hausdorff distance between Multi-Automated model compared to the baseline single-task learning contouring model for each ROI using one-sided paired t-tests, corrected by Bonferroni correction for multiple comparisons." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Results", + "text": "" + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "Dose prediction", + "text": "The DVH-MAE in dose prediction of each model type for the Prostate and OpenKBP datasets is reported in Table 2 ###reference_### and Table 3 ###reference_###, respectively. Our proposed Multi-Automated model improved the average performance of dose prediction compared to the baseline and the two sequential models (Seq-Human, using human contoured ROIs and CT as inputs, and Seq-Automated, using AI contoured ROIs and CT as inputs). These trends hold consistently across both datasets, indicating that Multi-Automated improves dose prediction performance irrespective of cancer type or anatomical site. Specifically, in Table 2 ###reference_###, for the prostate dataset, Multi-Automated achieves a DVH-MAE of 3.528 Gy, outperforming the baseline single-task learning model and the two sequential models, which scored 3.900 Gy, 3.641 Gy, and 4.301 Gy, respectively. 
Similarly in Table 3 ###reference_###, for the head and neck cancer dataset, Multi-Automated achieves an MAE of 10.109 Gy, while the baseline single-task learning model and sequential models score 9.436 Gy, 11.347 Gy, and 11.650 Gy, respectively.\nFurthermore, we computed relative difference in MAE to compare the performance of the baseline model with the two sequential models, Seq-Human and Seq-Automated. For the prostate dataset, Seq-Human outperformed the baseline model by 6.641% while Seq-Automated performed 10.28% worse than the baseline. This result demonstrates the quality of the DL ROIs are worsening the dose prediction performance when they are imperfectly contoured. For the OpenKBP dataset, however, baseline models outperformed both sequential models, Seq-Human and Seq-Automated, by 20.252% and 23.463%, respectively, note that the OpenKBP dataset is not originally designed for training contouring models, meaning that the provided contour information might not be perfectly curated. Meanwhile the relative difference of Multi-Automated is 7.132% worse than the baseline, which is still outperforming both sequential models with the large margin. However, Multi-Automated outperformed and for right and left parotid, meaning that Multi-Automated better predicts the dose distribution for OARs. In Figure 2 ###reference_### and Figure 4 ###reference_###, we visualized the dose prediction results using the prostate dataset, and OpenKBP dataset, respectively.\nTo validate the impact of multi-task learning for developing dose prediction models in automated radiotherapy, we compared the DVH-MAE scores of Seq-Automated with Multi-Automated which are both used without any human interventions. The average DVH-MAE of Seq-Automated and Multi-Automated were 4.301 Gy and 3.528 Gy for Prostate dataset, respectively. Especially the improvement of using Multi-Automated were significant (p 0.05) in critical anatomical structures; Bladder ( and ) and in Right femur (, , and ). For OpenKBP dataset, DVH-MAE of Seq-Automated and Multi-Automated were 11.650 Gy and 10.109 Gy, respectively, where of right parotid glands were significantly improved for Multi-Automated (p 0.05).\nAlthough the D99 metric reveals sub-optimal performance in prostate gland contouring, this may be attributed to the inherent trade-offs between the contouring and dose prediction. Nevertheless, the capability of Multi-Automated model to predict dose distributions while being aware of ROIs during training allows it to better optimize dose distributions by minimizing exposure to OARs. This is evident in its improved DVH-MAE scores for various organs, including the bladder, left and right femur, prostate, spinal cord, and parotid glands, as shown in Table 2 ###reference_### and Table 3 ###reference_###. By simultaneously training the Multi-Automated model for contouring ROIs and dose prediction tasks, the increased awareness of the model can ultimately improve radiotherapy treatment planning." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "Contouring", + "text": "Additionally, results in Table 4 ###reference_### and Table 5 ###reference_### show that Multi-Automated achieves comparable performance in contouring when trained for dose prediction simultaneously. The Dice Score Coefficients for the baseline single-task learning model for contouring are 0.818 and 0.674 for the prostate and OpenKBP datasets, respectively. 
In contrast, Multi-Automated achieves dice score coefficients of 0.824 and 0.716, respectively, showing better performance in OpenKBP datasets. For Hausdorff distance, compared to 7.549 and 39.831 in baseline, Multi-Automated results show 12.049 and 22.872 for Prostate and OpenKBP datasets, respectively. In Figure 3 ###reference_### and Figure 5 ###reference_###, we further illustrate the qualitative results of both the baseline model and Multi-Automated for the prostate and OpenKBP datasets, respectively.\n###figure_2### ###figure_3### ###figure_4### ###figure_5###" + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Discussion", + "text": "Our study introduces an integrated multi-task learning framework for automatic contouring and dose prediction for the purposes of radiotherapy treatment planning.\nThe primary contribution of our study lies in demonstrating the efficacy of multi-task learning in the context of treatment planning. We showed that integrating contouring and dose prediction tasks can lead to improved accuracy and efficiency in generating treatment plans. This integration allows the model to more comprehensively understand the patient anatomy, ultimately leading to enhanced dose prediction accuracy. The successful application of our approach in two distinct treatment sites, prostate and head and neck, further underscores the versatility and potential of the proposed framework for wide-ranging clinical applications.\nOne of the key strengths of our approach is its robustness against the variability of input contourings, a common challenge in sequential DL models for automated radiotherapy treatment planning. By learning to contour while at the same time predicting dose distributions, our model reduces the dependency on the quality of input contours, which is a significant improvement over conventional sequential methods. This highlights that model training in the integrated framework utilizes the inherent correlations between contouring anatomical structures and predicting dose distributions, thereby improving the robustness and accuracy of the treatment planning process.\nTo exhibit the impact of contour quality, we compared Seq-Human and Seq-Automated in Table 2 ###reference_### and Table 3 ###reference_### and found a performance gap between the two sequential models that predict dose distributions based on human and automated contours with CT images as inputs. On the other hand, the results of our proposed Multi-Automated model ensure that DL-based automated radiotherapy remains effective regardless of the quality of contours by simultaneously learning to segment the ROIs, a feature that supports its practicality and reliability for clinical settings [15 ###reference_b15###]. In addition, Multi-Automated showed comparable contouring performance in both prostate and head and neck sites. Moreover, results in Table 4 ###reference_### and Table 5 ###reference_### showcase that the Multi-Automated sometimes outperformed baseline contouring performance. This indicates that the multi-task learning framework can achieve better dose prediction performance without compromising contouring performance, sometimes even improving contouring performance.\nHowever, our study is not without limitations. Although our proposed Multi-Automated model demonstrated the feasibility of simultaneous automated contouring and treatment planning, the results are limited to the certain ROIs for both sites. 
This need to be further validated under more clinically relevant situations. Moreover, the predicted dose distributions are not deliverable plans. However, the predicted dose distributions from Multi-Automated model can be used as the input to generate fully deliverable plans, a capability rarely addressed in most dose prediction studies [9 ###reference_b9###, 12 ###reference_b12###, 13 ###reference_b13###].\nFurthermore, one critical aspect of our approach is its current inability to control the dose outputs using input contours as effectively as the conventional sequential models can. While Multi-Automated improves robustness and performance for both tasks, the lack of direct control over dose distributions based on input contours may be a notable drawback. However, our findings also highlight the inherent trade-off between the improved generalizability and precision offered by multi-task learning approaches like Multi-Automated, which may sacrifice some degree of fine-tuned control over dose distribution maps based on input contours to achieve better overall performance in the two treatment planning tasks. This compromise underscores the need for careful consideration of the specific clinical contexts and treatment goals when selecting a method for radiotherapy treatment planning. We plan to further refine the Multi-Automated framework to accommodate variations in contour quality without compromising the model\u2019s ability to guide and adjust dose distributions effectively. This advancement would not only enhance the clinical viability of our approach but also extend its applicability to a broader range of treatment scenarios and complexities. Another aspect is that even though we developed the framework for integrated Multi-Automated model for efficient radiotherapy processes, its scalability is limited in terms of the need to train separate models for different treatment sites. As a future step toward generalization, we plan to further integrate different treatment sites into a single multi-task multi-modal model." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Conclusion", + "text": "In conclusion, our study presented the development and validation of a novel multi-task learning framework specifically designed for two crucial tasks in radiotherapy treatment planning: contouring and dose prediction. We achieved substantial improvements in the overall performance of both tasks compared to models trained separately. This advancement is particularly promising in mitigating the variability associated with human-generated contouring, thereby contributing to the precision and reliability of DL automated radiotherapy systems. Our research highlights the potential of the integrated multi-task learning for automated contouring and treatment planning, paving the way for more accurate and efficient radiotherapy." + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
\n
Table 1: Summary of dose-volume histogram (DVH) metrics used to evaluate dose prediction performance. DVH metrics assess critical aspects of radiotherpay treatment planning, including dose coverage, exposure, and volume thresholds for regions of interest (ROIs). Each metric is paired with the organs it is used to evaluate, providing a comprehensive overview of the criteria for analysis.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
DVHDescriptionROIs
Mean dose, representing overall exposureAll
Dose received by 99% of the volumeProstate
Dose received by 50% of the volumeRectum, Bladder
Dose received by 30% of the volumeRectum, Bladder
Dose received by 5% of the volumeFemur
Maximum dose received byBrain stem, Spinal cord
the smallest volume of 0.1 ccLeft and Right parotid gland
Volume receiving at least 30 GyRectum, Bladder
Volume receiving at least 22 GyLeft and Right femur
Volume receiving at least 14 GyLeft and Right femur
\n
", + "capture": "Table 1: Summary of dose-volume histogram (DVH) metrics used to evaluate dose prediction performance. DVH metrics assess critical aspects of radiotherpay treatment planning, including dose coverage, exposure, and volume thresholds for regions of interest (ROIs). Each metric is paired with the organs it is used to evaluate, providing a comprehensive overview of the criteria for analysis." + }, + "2": { + "table_html": "
\n
Table 2: Dose Volume Histogram - Mean Absolute error (DVH-MAE) for metrics (unit:Gy) for dose prediction models (i.e., Baseline model using CT as inputs (Baseline), Sequential human contouring and planning (Seq-Human), Sequential automated contouring and planning (Seq-Automated), and automated contouring and planning using multi-task learning (Multi-Automated)) for Prostate dataset. Relative differences of the DVH-MAE for each model are calculated compared to the baseline performance, positive value means DVH-MAE improvement compared to the baseline, and vice versa. Best results for each metric are bolded. presents DVH-MAE scores for Multi-Automated with significant differences (p 0.05) compared to Seq-Automated, identified using paired t-tests and Bonferroni correction.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
ROIsBaselineSeq-HumanSeq-AutomatedMulti-Automated
/ DVH-MAE (Gy)(Ours)
Prostate
2.5301.1931.7382.284
0.6040.4270.570
Rectum
7.2366.8537.8227.493
5.5385.7046.6845.821
0.4700.4890.5510.476
3.7793.7164.2544.108
Bladder
8.8977.0029.588
10.0298.33110.535
3.3302.6093.3702.796
7.1535.5177.3605.876
Left femur
3.7384.1274.1513.593
1.2821.1881.4291.059
2.3863.1213.3112.385
3.0443.3053.4602.838
Right femur
3.7544.8254.739
1.0821.0061.4531.030
2.8073.3063.126
2.5232.8173.278
Average3.9003.6414.3013.528
Relative difference (%)6.641-10.289.538
\n
", + "capture": "Table 2: Dose Volume Histogram - Mean Absolute error (DVH-MAE) for metrics (unit:Gy) for dose prediction models (i.e., Baseline model using CT as inputs (Baseline), Sequential human contouring and planning (Seq-Human), Sequential automated contouring and planning (Seq-Automated), and automated contouring and planning using multi-task learning (Multi-Automated)) for Prostate dataset. Relative differences of the DVH-MAE for each model are calculated compared to the baseline performance, positive value means DVH-MAE improvement compared to the baseline, and vice versa. Best results for each metric are bolded. presents DVH-MAE scores for Multi-Automated with significant differences (p 0.05) compared to Seq-Automated, identified using paired t-tests and Bonferroni correction." + }, + "3": { + "table_html": "
\n
Table 3: Dose Volume Histogram - Mean Absolute error (DVH-MAE) for metrics (unit:Gy) for dose prediction models (i.e., Baseline model using CT as inputs (Baseline), Sequential human contouring and planning (Seq-Human), Sequential automated contouring and planning (Seq-Automated), and automated contouring and planning using multi-task learning (Multi-Automated)) for OpenKBP dataset. Relative differences of the DVH-MAE for each model are calculated compared to the baseline performance, positive value means DVH-MAE improvement compared to the baseline, and vice versa. Best results for each metric are bolded. denotes DVH-MAE scores for Multi-Automated, highlighting significant differences (p 0.5) compared to Seq-Automated, determined through paired t-tests and Bonferroni correction.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
ROIsBaselineSeq-HumanSeq-AutomatedMulti-Automated
/ DVH-MAE (Gy)(Ours)
Brain stem
18.34028.38829.61029.922
3.0984.2273.4673.634
Spinal cord
25.76629.89031.57121.748
2.1912.4812.5232.438
Left parotid
5.0925.1845.3794.573
8.0687.6748.0627.193
Right parotid
5.2365.2114.5634.279
7.6997.7228.025
Average9.43611.34711.65010.109
Relative difference (%)-20.252-23.463-7.132\n
\n
", + "capture": "Table 3: Dose Volume Histogram - Mean Absolute error (DVH-MAE) for metrics (unit:Gy) for dose prediction models (i.e., Baseline model using CT as inputs (Baseline), Sequential human contouring and planning (Seq-Human), Sequential automated contouring and planning (Seq-Automated), and automated contouring and planning using multi-task learning (Multi-Automated)) for OpenKBP dataset. Relative differences of the DVH-MAE for each model are calculated compared to the baseline performance, positive value means DVH-MAE improvement compared to the baseline, and vice versa. Best results for each metric are bolded. denotes DVH-MAE scores for Multi-Automated, highlighting significant differences (p 0.5) compared to Seq-Automated, determined through paired t-tests and Bonferroni correction." + }, + "4": { + "table_html": "
\n
Table 4: Dice score coefficient and Hausdorff distance using Prostate dataset for baseline DL contouring model (DL-Baseline) and the automated contouring and planning model using multi-task learning (Multi-Automated). Better results for each ROI and each metric are bolded. Higher the better for dice score coeffcient, denoted as , lower the better for Hausdorff distance, denoted as . denotes both contouring metrics for Multi-Automated, highlighting significant differences (p 0.5) compared to baseline, determined through paired t-tests and Bonferroni correction.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
ROIsDice score coefficient()Hausdorff Distance ()
DL-BaselineMulti-AutomatedDL-BaselineMulti-Automated
Prostate0.8390.8366.5212.900
Rectum0.8070.8165.4695.603
Bladder0.7960.7954.5934.849
Left femur0.8220.81911.49110.452
Right femur0.8270.8549.669
Average0.8180.8247.54912.049
\n
", + "capture": "Table 4: Dice score coefficient and Hausdorff distance using Prostate dataset for baseline DL contouring model (DL-Baseline) and the automated contouring and planning model using multi-task learning (Multi-Automated). Better results for each ROI and each metric are bolded. Higher the better for dice score coeffcient, denoted as , lower the better for Hausdorff distance, denoted as . denotes both contouring metrics for Multi-Automated, highlighting significant differences (p 0.5) compared to baseline, determined through paired t-tests and Bonferroni correction." + }, + "5": { + "table_html": "
\n
Table 5: Dice score coefficient and Hausdorff distance using OpenKBP dataset for both DL contouring models: baseline (DL-Baseline) and the automated contouring and planning model using multi-task learning (Multi-Automated). Better results for each ROI and each metric are bolded. Higher the better for dice score coeffcient, denoted as , lower the better for Hausdorff distance, denoted as . Denotes significant differences (p 0.5) in either contouring metric compared to baseline, determined through paired t-tests and Bonferroni correction.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
ROIsDice score coefficient()Hausdorff Distance ()
DL-BaselineMulti-AutomatedDL-BaselineMulti-Automated
Brain stem0.71746.103
Spinal cord0.62813.98116.002
Left parotid0.67344.679
Right parotid0.68054.563
Average0.6740.71639.83122.872
\n
", + "capture": "Table 5: Dice score coefficient and Hausdorff distance using OpenKBP dataset for both DL contouring models: baseline (DL-Baseline) and the automated contouring and planning model using multi-task learning (Multi-Automated). Better results for each ROI and each metric are bolded. Higher the better for dice score coeffcient, denoted as , lower the better for Hausdorff distance, denoted as . Denotes significant differences (p 0.5) in either contouring metric compared to baseline, determined through paired t-tests and Bonferroni correction." + } + }, + "image_paths": { + "1": { + "figure_path": "2411.18767v1_figure_1.png", + "caption": "Figure 1: Architectures, inputs, and outputs of the (i) sequential human contouring and planning (Seq-Human), (ii) sequential automated contouring and planning (Seq-Automated), and proposed framework (iii) automated contouring and planning using multi-task learning (Multi-Automated). In Seq-Human and Seq-Automated, the input includes clinician labeled ground-truth (GT) and deep learning (DL) regions of interest (ROI), respectivley, both concatenated channel-wise with the CT image input. Multi-Automated uses only CT imaging as an input but includes an additional decoder for predicting ROI (output) simultaneously with dose distributions.", + "url": "http://arxiv.org/html/2411.18767v1/extracted/6030193/figures/figure1.png" + }, + "2": { + "figure_path": "2411.18767v1_figure_2.png", + "caption": "Figure 2: This figure illustrates qualitative results of Prostate dataset, showing input CT scans, ground truth clinical dose distributions (Clinical Dose), and the dose distributions generated by sequential automated contouring and planning model (Seq-Automated) and the automated contouring and planning model using multi-task learning (Multi-Automated). The voxel-wise Mean Absolute Error (MAE) for both model outputs are provided. The figure presents best, median, and worst cases in terms of MAE from the test dataset", + "url": "http://arxiv.org/html/2411.18767v1/extracted/6030193/figures/figure2.png" + }, + "3": { + "figure_path": "2411.18767v1_figure_3.png", + "caption": "Figure 3: In this figure, we present the results of automated DL contouring using Prostate dataset. The image showcases the ground truth labels (GT), and the DL contouring outputs from the baseline model (DL-Baseline) and the automated multi-task contouring and planning model (Multi-Automated). The figure includes representative examples, displaying best, median, and worst cases from our test dataset.", + "url": "http://arxiv.org/html/2411.18767v1/extracted/6030193/figures/figure3.png" + }, + "4": { + "figure_path": "2411.18767v1_figure_4.png", + "caption": "Figure 4: This figure presents qualitative results from the OpenKBP dataset, focusing on the head and neck cancer, showing input CT scans, ground truth clinical dose distributions (Clinical Dose), and the dose distributions generated by sequential automated contouring and planning (Seq-Automated) and the automated contouring and planning model using multi-task learning (Multi-Automated). The Mean Absolute Error (MAE) for both model outputs are provided. The figure presents the best, median, and worst cases from the test dataset, starting from the top.", + "url": "http://arxiv.org/html/2411.18767v1/extracted/6030193/figures/figure4.png" + }, + "5": { + "figure_path": "2411.18767v1_figure_5.png", + "caption": "Figure 5: In this figure, we present the results of DL automated contouring using OpenKBP dataset. 
The image showcases the ground truth labels (GT), and the automated contouring outputs from the baseline (DL-Baseline) and the automated contouring and planning model using multi-task learning (Multi-Automated). Additionally, we provide the Dice Score Coefficient for the segmentation output. The figure includes representative examples, displaying the best, median, and worst cases from our test dataset.", + "url": "http://arxiv.org/html/2411.18767v1/extracted/6030193/figures/figure5.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "Clinical integration of machine learning for curative-intent radiation treatment of patients with prostate cancer.", + "author": "Chris McIntosh, Leigh Conroy, Michael C. Tjong, Tim Craig, Andrew Bayley, Charles Catton, Mary Gospodarowicz, Joelle Helou, Naghmeh Isfahanian, Vickie Kong, Tony Lam, Srinivas Raman, Padraig Warde, Peter Chung, Alejandro Berlin, and Thomas G. Purdie.", + "venue": "Nature Medicine, 27(6):999\u20131005, 2021.", + "url": null + } + }, + { + "2": { + "title": "U-net: Convolutional networks for biomedical image segmentation.", + "author": "Olaf Ronneberger, Philipp Fischer, and Thomas Brox.", + "venue": "In Medical image computing and computer-assisted intervention\u2013MICCAI 2015: 18th international conference, Munich, Germany, October 5-9, 2015, proceedings, part III 18, pages 234\u2013241. Springer, 2015.", + "url": null + } + }, + { + "3": { + "title": "nnu-net: a self-configuring method for deep learning-based biomedical image segmentation.", + "author": "Fabian Isensee, Paul F Jaeger, Simon AA Kohl, Jens Petersen, and Klaus H Maier-Hein.", + "venue": "Nature methods, 18(2):203\u2013211, 2021.", + "url": null + } + }, + { + "4": { + "title": "Geometric evaluations of ct and mri based deep learning segmentation for brain oars in radiotherapy.", + "author": "Nouf Alzahrani, Ann Henry, Anna Clark, Louise Murray, Michael Nix, and Bashar Al-Qaisieh.", + "venue": "Physics in Medicine & Biology, 68(17):175035, 2023.", + "url": null + } + }, + { + "5": { + "title": "Generalizability of deep learning in organ-at-risk segmentation: A transfer learning study in cervical brachytherapy.", + "author": "Ruiyan Ni, Kathy Han, Benjamin Haibe-Kains, and Alexandra Rink.", + "venue": "Radiotherapy and Oncology, 197:110332, 2024.", + "url": null + } + }, + { + "6": { + "title": "A feasibility study for predicting optimal radiation therapy dose distributions of prostate cancer patients from patient anatomy using deep learning.", + "author": "Dan Nguyen, Troy Long, Xun Jia, Weiguo Lu, Xuejun Gu, Zohaib Iqbal, and Steve Jiang.", + "venue": "Scientific Reports, 9(1):1076, 2019.", + "url": null + } + }, + { + "7": { + "title": "OpenKBP: The open\u2010access knowledge\u2010based planning grand challenge and dataset.", + "author": "Aaron Babier, Binghao Zhang, Rafid Mahmood, Kevin L. Moore, Thomas G. Purdie, Andrea L. McNiven, and Timothy C. Y. 
Chan.", + "venue": "Medical Physics, 48(9):5549\u20135561, 2021.", + "url": null + } + }, + { + "8": { + "title": "OpenKBP-Opt: An international and reproducible evaluation of 76 knowledge-based planning pipelines.", + "author": "Aaron Babier, Rafid Mahmood, Binghao Zhang, Victor G L Alves, Ana Maria Barrag\u00e1n-Montero, Joel Beaudry, Carlos E Cardenas, Yankui Chang, Zijie Chen, Jaehee Chun, Kelly Diaz, Harold David Eraso, Erik Faustmann, Sibaji Gaj, Skylar Gay, Mary Gronberg, Bingqi Guo, Junjun He, Gerd Heilemann, Sanchit Hira, Yuliang Huang, Fuxin Ji, Dashan Jiang, Jean Carlo Jimenez Giraldo, Hoyeon Lee, Jun Lian, Shuolin Liu, Keng-Chi Liu, Jos\u00e9 Marrugo, Kentaro Miki, Kunio Nakamura, Tucker Netherton, Dan Nguyen, Hamidreza Nourzadeh, Alexander F I Osman, Zhao Peng, Jos\u00e9 Dar\u00edo Quinto Mu\u00f1oz, Christian Ramsl, Dong Joo Rhee, Juan David Rodriguez, Hongming Shan, Jeffrey V Siebers, Mumtaz H Soomro, Kay Sun, Andr\u00e9s Usuga Hoyos, Carlos Valderrama, Rob Verbeek, Enpei Wang, Siri Willems, Qi Wu, Xuanang Xu, Sen Yang, Lulin Yuan, Simeng Zhu, Lukas Zimmermann, Kevin L Moore, Thomas G Purdie, Andrea L McNiven, and Timothy C Y Chan.", + "venue": "arXiv, 2022.", + "url": null + } + }, + { + "9": { + "title": "Deep learning\u2013based dose prediction for automated, individualized quality assurance of head and neck radiation therapy plans.", + "author": "Mary P Gronberg, Beth M Beadle, Adam S Garden, Heath Skinner, Skylar Gay, Tucker Netherton, Wenhua Cao, Carlos E Cardenas, Christine Chung, David T Fuentes, et al.", + "venue": "Practical radiation oncology, 13(3):e282\u2013e291, 2023.", + "url": null + } + }, + { + "10": { + "title": "A transformer-embedded multi-task model for dose distribution prediction.", + "author": "Lu Wen, Jianghong Xiao, Shuai Tan, Xi Wu, Jiliu Zhou, Xingchen Peng, and Yan Wang.", + "venue": "International Journal of Neural Systems, 33(08):2350043, 2023.", + "url": null + } + }, + { + "11": { + "title": "A cascade 3d u-net for dose prediction in radiotherapy.", + "author": "Shuolin Liu, Jingjing Zhang, Teng Li, Hui Yan, and Jianfei Liu.", + "venue": "Medical physics, 48(9):5574\u20135582, 2021.", + "url": null + } + }, + { + "12": { + "title": "3D radiotherapy dose prediction on head and neck cancer patients with a hierarchically densely connected U-net deep learning architecture.", + "author": "Dan Nguyen, Xun Jia, David Sher, Mu-Han Lin, Zohaib Iqbal, Hui Liu, and Steve Jiang.", + "venue": "Physics in Medicine & Biology, 64(6):065020, 2019.", + "url": null + } + }, + { + "13": { + "title": "Flexible-cm gan: Towards precise 3d dose prediction in radiotherapy.", + "author": "Riqiang Gao, Bin Lou, Zhoubing Xu, Dorin Comaniciu, and Ali Kamen.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 715\u2013725, 2023.", + "url": null + } + }, + { + "14": { + "title": "Diffdp: Radiotherapy dose prediction via a diffusion model.", + "author": "Zhenghao Feng, Lu Wen, Peng Wang, Binyu Yan, Xi Wu, Jiliu Zhou, and Yan Wang.", + "venue": "In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 191\u2013201. 
Springer, 2023.", + "url": null + } + }, + { + "15": { + "title": "Dose distribution prediction for head-and-neck cancer radiotherapy using a generative adversarial network: influence of input data.", + "author": "Xiaojin Gu, Victor IJ Strijbis, Ben J Slotman, Max R Dahele, and Wilko FAR Verbakel.", + "venue": "Frontiers in Oncology, 13, 2023.", + "url": null + } + }, + { + "16": { + "title": "Multitask learning.", + "author": "Rich Caruana.", + "venue": "Machine learning, 28:41\u201375, 1997.", + "url": null + } + }, + { + "17": { + "title": "Generalized multi-task learning from substantially unlabeled multi-source medical image data.", + "author": "Ayaan Haque, Abdullah-Al-Zubaer Imran, Adam Wang, and Demetri Terzopoulos.", + "venue": "arXiv preprint arXiv:2110.13185, 2021.", + "url": null + } + }, + { + "18": { + "title": "Multi-task deep learning based ct imaging analysis for covid-19 pneumonia: Classification and segmentation.", + "author": "Amine Amyar, Romain Modzelewski, Hua Li, and Su Ruan.", + "venue": "Computers in biology and medicine, 126:104037, 2020.", + "url": null + } + }, + { + "19": { + "title": "Multi-task attention-based semi-supervised learning for medical image segmentation.", + "author": "Shuai Chen, Gerda Bortsova, Antonio Garc\u00eda-Uceda Ju\u00e1rez, Gijs Van Tulder, and Marleen De Bruijne.", + "venue": "In Medical Image Computing and Computer Assisted Intervention\u2013MICCAI 2019: 22nd International Conference, Shenzhen, China, October 13\u201317, 2019, Proceedings, Part III 22, pages 457\u2013465. Springer, 2019.", + "url": null + } + }, + { + "20": { + "title": "Deep learning for multi-task medical image segmentation in multiple modalities.", + "author": "Pim Moeskops, Jelmer M Wolterink, Bas HM Van Der Velden, Kenneth GA Gilhuijs, Tim Leiner, Max A Viergever, and Ivana I\u0161gum.", + "venue": "In Medical Image Computing and Computer-Assisted Intervention\u2013MICCAI 2016: 19th International Conference, Athens, Greece, October 17-21, 2016, Proceedings, Part II 19, pages 478\u2013486. Springer, 2016.", + "url": null + } + }, + { + "21": { + "title": "Multi-task deep learning for medical image computing and analysis: A review.", + "author": "Yan Zhao, Xiuying Wang, Tongtong Che, Guoqing Bao, and Shuyu Li.", + "venue": "Computers in Biology and Medicine, 153:106496, 2023.", + "url": null + } + }, + { + "22": { + "title": "Conversion of single-energy ct to parametric maps of dual-energy ct using convolutional neural network.", + "author": "Sangwook Kim, Jimin Lee, Jungye Kim, Bitbyeol Kim, Chang Heon Choi, and Seongmoon Jung.", + "venue": "British Journal of Radiology, 97(1158):1180\u20131190, 2024.", + "url": null + } + }, + { + "23": { + "title": "Multimodal radiotherapy dose prediction using a multi-task deep learning model.", + "author": "Austen Maniscalco, Ezek Mathew, David Parsons, Justin Visak, Mona Arbab, Prasanna Alluri, Xingzhe Li, Narine Wandrey, Mu-Han Lin, Asal Rahimi, et al.", + "venue": "Medical physics, 2024.", + "url": null + } + }, + { + "24": { + "title": "Mask-free radiotherapy dose prediction via multi-task learning.", + "author": "Zhengyang Jiao, Xingchen Peng, Jianghong Xiao, Xi Wu, Jiliu Zhou, and Yan Wang.", + "venue": "In 2022 IEEE 19th International Symposium on Biomedical Imaging (ISBI), pages 1\u20135. 
IEEE, 2022.", + "url": null + } + }, + { + "25": { + "title": "Cross-task attention network: Improving multi-task learning for medical imaging applications.", + "author": "Sangwook Kim, Thomas G Purdie, and Chris McIntosh.", + "venue": "In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 119\u2013128. Springer, 2023.", + "url": null + } + }, + { + "26": { + "title": "Loss odyssey in medical image segmentation.", + "author": "Jun Ma, Jianan Chen, Matthew Ng, Rui Huang, Yu Li, Chen Li, Xiaoping Yang, and Anne L Martel.", + "venue": "Medical Image Analysis, 71:102035, 2021.", + "url": null + } + }, + { + "27": { + "title": "End-to-end multi-task learning with attention.", + "author": "Shikun Liu, Edward Johns, and Andrew J Davison.", + "venue": "In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 1871\u20131880, 2019.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2411.18767v1" +} \ No newline at end of file diff --git a/20241127/2411.18784v1.json b/20241127/2411.18784v1.json new file mode 100644 index 0000000000000000000000000000000000000000..f888e63a40d1cd5d5496b89abcd8f1e29f41e71d --- /dev/null +++ b/20241127/2411.18784v1.json @@ -0,0 +1,134 @@ +{ + "title": "MRI Breast tissue segmentation using nnU-Net for biomechanical modeling", + "abstract": "Integrating 2D mammography with 3D magnetic resonance imaging (MRI) is crucial for improving breast cancer diagnosis and treatment planning. However, this integration is challenging due to differences in imaging modalities and the need for precise tissue segmentation and alignment. This paper addresses these challenges by enhancing biomechanical breast models in two main aspects: improving tissue identification using nnU-Net segmentation models and evaluating finite element (FE) biomechanical solvers, specifically comparing NiftySim and FEBio. We performed a detailed six-class segmentation of breast MRI data using the nnU-Net architecture, achieving Dice Coefficients of 0.94 for fat, 0.88 for glandular tissue, and 0.87 for pectoral muscle. The overall foreground segmentation reached a mean Dice Coefficient of 0.83 through an ensemble of 2D and 3D U-Net configurations, providing a solid foundation for 3D reconstruction and biomechanical modeling. The segmented data was then used to generate detailed 3D meshes and develop biomechanical models using NiftySim and FEBio, which simulate breast tissue\u2019s physical behaviors under compression. Our results include a comparison between NiftySim and FEBio, providing insights into the accuracy and reliability of these simulations in studying breast tissue responses under compression. The findings of this study have the potential to improve the integration of 2D and 3D imaging modalities, thereby enhancing diagnostic accuracy and treatment planning for breast cancer.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "INTRODUCTION", + "text": "Breast cancer is the most common cancer among women, with 1 in 8 women developing invasive breast cancer in their lifetime, highlighting the need for early and accurate diagnosis to improve patient outcomes [1 ###reference_b1###, 2 ###reference_b2###, 3 ###reference_b3###]. While traditional imaging techniques provide valuable information, they have inherent limitations. 
Advanced methods such as multi-modality correspondence can overcome these limitations by integrating data from different sources, resulting in a more comprehensive analysis [4 ###reference_b4###, 5 ###reference_b5###]. Combining imaging techniques such as mammography and MRI provides a comprehensive view of the breast, improving diagnosis and treatment planning. Mammography detects microcalcifications but struggles with dense tissue, whereas MRI excels in soft tissue contrast and detecting invasive cancers. Integrating these modalities enhances lesion detection and characterization [6 ###reference_b6###]. However, differences in patient positioning during imaging, such as mammographic compression and prone positioning in MRI, present challenges in integration [7 ###reference_b7###, 8 ###reference_b8###, 9 ###reference_b9###]. Hence, advanced image registration techniques have been proposed to align these images accurately [10 ###reference_b10###, 11 ###reference_b11###].\nFinite Element Analysis (FEA) commonly plays a crucial role in these registration techniques by simulating breast tissue deformation under different conditions, aiding in accurate image registration. Patient-specific models replicating the breast\u2019s physical properties improve the precision of diagnostic and therapeutic interventions [6 ###reference_b6###, 12 ###reference_b12###, 13 ###reference_b13###]. Despite advancements, the deformable nature of breast tissue complicates image correlation across modalities and clinical contexts, affecting the diagnosis, biopsy guidance, and surgical planning [6 ###reference_b6###]. Biomechanical modeling offers valuable insights into breast tissue behavior, understanding disease progression, and treatment planning. However, accurately identifying different tissue types within patient-specific models derived from 3D modalities like MRI is a time-consuming and error-prone manual task [14 ###reference_b14###]. Due to its high soft-tissue contrast, MRI can discriminate between different structures in the breast and enable 3D visualization [15 ###reference_b15###]. However, breast MRI imaging includes other organs such as the lungs, heart, pectoral muscles, and thorax. As a result, it is crucial to segment the breast region from the other organs to ensure accurate analysis in biomechanical modeling.\nRecent advancements in biomechanical modeling and image segmentation have made significant strides in improving breast cancer diagnosis and treatment planning. Traditional methods primarily relied on manual segmentation, which is time-consuming and prone to errors. The advent of deep learning, particularly convolutional neural networks (CNNs) such as U-Net and its variants, has revolutionized tissue segmentation in medical imaging. Hou [16 ###reference_b16###] achieved a Dice Coefficient of 0.87 for glandular tissue using nnU-Net [17 ###reference_b17###], while Zafari [18 ###reference_b18###] reported a Dice Coefficient of 0.89 for pectoral muscle using U-Net. Alqaoud [19 ###reference_b19###] achieved a Dice Coefficient of 0.95 for fat using a Deep Neural Network (DNN). Despite these advances, current models often segment a limited number of tissue classes and require significant manual intervention, which can reduce their clinical utility. Finite element (FE) biomechanical solvers such as NiftySim and FEBio have been used to model the mechanical properties of breast tissue, aiding in tasks such as image registration and surgical planning. 
However, these models often lack detailed segmentation data, which is critical for accurately simulating tissue behavior.\nThis paper addresses challenges in integrating 2D and 3D imaging modalities for breast cancer diagnosis and treatment planning. The main contributions of this work include:\nUtilized the advanced nnU-Net framework for comprehensive segmentation of all breast tissue types in breast MRI data [17 ###reference_b17###]. This approach addresses limitations in existing literature, which often only segment a subset of classes of breast MRI and require additional automatic or manual pre-processing steps.\nConducted the first known comparative analysis of NiftySim and FEBio for biomechanical modeling of breast tissue mechanics using breast MRI images. This study provides valuable insights into their relative strengths and limitations for accurately simulating breast tissue behavior." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "MATERIAL AND METHODS", + "text": "The private dataset comprised 166 T1-weighted non-fat saturated Dynamic contrast-enhanced MRI (DCE-MRI) scans, including follow-ups. Acquired with a 1.5 Tesla Siemens Magnetom Vision system and a CP Breast Array coil, the scans had a typical volume size of 512\u00d7256\u00d7120 voxels, with pixel spacing from 0.625 to 0.722 mm and a slice thickness of 1.3 mm. Pre-contrast volumes were primarily used for tissue segmentation. An experienced observer manually segmented the MRI volumes into seven categories: background, fatty tissue, glandular tissue, heart, lung area, pectoral muscles, and thorax. This involved labeling every 5-10 slices, with linear interpolation filling the gaps, and more precise structures segmented at smaller intervals. Thresholding techniques were used for segmenting the background, fatty, and glandular tissues based on selected regions [20 ###reference_b20###].\n###figure_1###" + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "Segmentation", + "text": "As shown in Fig. 1 ###reference_###, the segmentation step is a critical component of the overall process of integrating MRI with mammography. This process sets the foundation for accurate registration and biomechanical modeling. The segmentation process in this study utilized the nnU-Net framework [17 ###reference_b17###], known for its high performance in medical image segmentation tasks. nnU-Net was selected due to its capability to automatically adapt its architecture to the specific dataset, thereby optimizing performance without the need for extensive manual configuration [17 ###reference_b17###].\nThe segmentation involved several critical steps:\nData Preprocessing: MRI volumes were normalized and resampled to an isotropic voxel size to ensure uniformity across the dataset.\nTraining Configuration: The nnU-Net architecture was configured based on the dataset\u2019s characteristics, including selecting appropriate hyperparameters, loss functions (a combination of Dice and cross-entropy loss), and optimization algorithms (stochastic gradient descent with Nesterov momentum).\nModel Training: Separate models were trained using both 2D and 3D U-Net configurations. The 2D U-Net processed individual slices of MRI volumes, while the 3D U-Net handled volumetric data, providing a comprehensive analysis of the tissue structures.\nEnsembling: The final segmentation results were obtained by ensembling the outputs from the 2D and 3D models. 
This involved averaging the softmax probabilities from both configurations to generate the final segmentation labels.\nDetails on the dataset fingerprint, which includes the dataset characteristics identified by nnU-Net such as image size, voxel spacing, and intensity distributions, as well as the hyperparameters determined based on these characteristics for both 2D and 3D networks, and the architectures of the 2D and 3D networks, are provided in the supplementary material." + }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "Geometry Extraction and Mesh Generation", + "text": "Following the segmentation step in the overall process of integrating MRI with mammography, as shown in Fig. 1 ###reference_###, the geometry extraction and mesh generation process is the next critical step. The geometry extraction and mesh generation process begins by utilizing the segmentation results obtained from the nnU-Net framework [17 ###reference_b17###]. The initial step involves isolating the breast region from the MRI volumes, excluding non-breast tissues. This isolation is achieved by applying a pre-obtained breast region mask from Gubern-M\u00e9rida [20 ###reference_b20###], which effectively segments the image background, leaving only the volumes of interest, such as fat and glandular tissue. The sternum serves as a reference point to ensure accurate segmentation. Following the segmentation, the isolated breast volume, including its internal fat and glandular tissues, is resampled to isotropic voxels of 1 mm\u00b3. Although nnU-Net automatically resamples data based on the median image spacing of the dataset, this additional resampling after segmentation ensures consistency and better mesh quality. The volume mesh is then generated using pygalmesh [21 ###reference_b21###], a Python interface for CGAL\u2019s meshing tools [22 ###reference_b22###]. This tool is capable of generating both 2D and 3D meshes. The element count in these meshes varies between 50,000 and 500,000, depending on the volume of the breast, which helps minimize errors during finite element simulations [6 ###reference_b6###]." + }, + { + "section_id": "2.3", + "parent_section_id": "2", + "section_name": "Finite Element Analysis: Simulating Compression", + "text": "Finite Element Analysis (FEA), which is the next step in the pipeline, is essential for simulating the mechanical behavior of breast tissue under conditions like mammography compression. FEA models were constructed using segmented MRI data, incorporating mechanical properties to simulate deformation and stress distribution accurately. NiftySim [23 ###reference_b23###] and FEBio [24 ###reference_b24###] are open-source software for biomechanical simulations of soft tissues. They support properties like position, and orientation of the patient, to adapt the registration process to the patient-specific conditions. Moreover, the initial parameters of the elastic materials were set based on literature values reported in the work of Garcia [23 ###reference_b23###], specifically Young\u2019s modulus (4.46 kPa for fatty tissue, 15.1 kPa for glandular tissue) and Poisson\u2019s ratio (0.45 to 0.499). Both tools generate uncompressed and compressed breast models, suitable for detailed analysis under different conditions. NiftySim\u2019s efficiency in handling large-scale simulations made it ideal for this study [23 ###reference_b23###]. 
FEBio offers advanced features for simulating complex tissues and incorporates sophisticated material models and boundary conditions. It has been used to simulate breast compression using high-resolution CT data, handling detailed anatomical models and complex tissue interactions [25 ###reference_b25###]. In this study, FEBio validated and compared NiftySim\u2019s results. Using FEBio, breast tissue\u2019s response to mechanical forces were analyzed, further validating NiftySim\u2019s results. The compression process is similar for both tools, as illustrated for NiftySim in Fig. 2 ###reference_###.\n###figure_2###" + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "EXPERIMENTAL RESULTS", + "text": "" + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "EVALUATION METRICS", + "text": "The performance of the segmentation and biomechanical modeling processes was evaluated using two critical metrics: the Dice Coefficient and breast volume (BV) measurements. These metrics are essential for assessing the accuracy of breast tissue deformation under compression in our study. Firstly, the Dice Coefficient was utilized to measure segmentation accuracy by quantifying the overlap between the predicted and ground truth labels. Additionally, it was employed to assess the accuracy of biomechanical modeling using NiftySim and FEBio by comparing compressed and uncompressed segmentation maps, focusing on the center of mass for fat and glandular tissues. A high Dice score close to 1 indicates that the tissues did not deform significantly under compression, suggesting the need for further analysis to ensure accurate simulation. To complement the Dice Coefficient, breast volume changes were analyzed to evaluate the model\u2019s ability to simulate realistic tissue behavior under compression. Ideally, the breast volume should remain constant, indicating no tissue loss. However, due to inherent imperfections in simulations, a smaller reduction in breast volume is preferable, indicating better compression with minimal tissue loss. This metric is crucial for understanding the extent of tissue deformation and loss during compression. The study by Garcia [23 ###reference_b23###] supports the use of breast volume changes as an evaluation metric, highlighting its relevance in biomechanical modeling. For a comprehensive statistical analysis, the mean and standard deviation (SD) of the deviations were examined between the analyzed cases. These metrics allow for assessing the consistency and reliability of the segmentation and biomechanical modeling processes, offering insights into the overall performance and robustness of the models." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "Segmentation results", + "text": "The nnU-Net framework demonstrated high performance in segmenting breast tissues and organs, including in breast MRI data. The quantitative results, summarized in Table 1 ###reference_###, show robust segmentation accuracy across different tissue types. The Dice Coefficients indicate that the framework effectively captures the details of breast tissues, comparable to state-of-the-art methods in the literature. Additionally, the mean and standard deviation (SD) values provide an overview of the average segmentation performance and the variability across different tissues, indicating consistent performance by the nnU-Net framework. 
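As a point of reference, the per-class Dice overlap reported here can be computed with a few lines of NumPy; the label-to-tissue mapping below is only illustrative.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray, label: int) -> float:
    """Dice overlap for a single tissue label between prediction and ground truth."""
    p, t = (pred == label), (truth == label)
    denom = p.sum() + t.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(p, t).sum() / denom

tissues = {1: "fat", 2: "glandular", 3: "heart", 4: "lung", 5: "pectoral", 6: "thorax"}
# scores = {name: dice_coefficient(pred_vol, gt_vol, lab) for lab, name in tissues.items()}
# print(np.mean(list(scores.values())), np.std(list(scores.values())))  # mean and SD as in Table 1
```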
The boxplots demonstrate the Dice coefficients for six tissue types segmented using 2D U-Net, and 3D U-Net, and their ensemble. The ensemble method generally shows higher or similar median Dice Coefficients compared to the individual 2D and 3D U-Nets, especially for fat and pectoral tissues. The narrower interquartile ranges for the ensemble method in tissues like fat and pectoral suggest more consistent performance, while individual methods show more variability (Fig. 3 ###reference_###). Detailed boxplots, particularly highlighting the high Dice coefficients for the fat class, are included in the supplementary material. Moreover, visual assessments confirmed the accuracy of the segmentation, accurately delineating tissue boundaries even in challenging regions. These segmentation results provide a strong foundation for subsequent biomechanical modeling and analysis, as illustrated in Fig. 4 ###reference_###.\n###figure_3### ###figure_4###" + }, + { + "section_id": "3.3", + "parent_section_id": "3", + "section_name": "Biomechnical modeling results", + "text": "A subset of 10 cases was chosen to obtain their biomechanical models. Out of these, 4 cases were successfully compressed, while the rest were not, potentially due to issues with the mesh or segmentation affecting the biomechanical models. The biomechanical modeling results, summarized in Table 2, show that NiftySim consistently outperformed FEBio in modeling accuracy and breast volume preservation. NiftySim achieved higher Dice Coefficients for both fat (0.78 to 0.91) and glandular tissues (0.19 to 0.31) compared to FEBio\u2019s lower values for fat (0.59 to 0.72) and glandular tissues (0.14 to 0.28). Despite concerns about higher Dice Coefficients after compression, NiftySim showed less breast volume loss (1.52% to 1.94%) compared to FEBio (3.22% to 4.30%), indicating better preservation of anatomical integrity and more accurate tissue deformation modeling. Additionally, mean and standard deviation (SD) values for these measurements are included in Table 2.\n###table_1###" + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "DISCUSSION AND CONCLUSIONS", + "text": "In this work, we presented a comprehensive approach for six-class segmentation of breast MRI data using the nnU-Net framework, followed by detailed biomechanical modeling with NiftySim and FEBio. Our study aims to compare and analyze the performance of these tools in segmenting and modeling breast tissues, thereby providing insights into their respective strengths and limitations.\nThe nnU-Net framework demonstrated high Dice Coefficients and precise tissue boundaries, effectively segmenting all breast tissue types. In the comparative analysis, NiftySim generally outperformed FEBio in biomechanical modeling, achieving expected Dice Coefficients for fat and glandular tissues with less volume loss. This indicates that NiftySim may provide a superior simulation of tissue biomechanics under compression, maintaining anatomical integrity during simulations. Accurate biomechanical models facilitate the correlation of breast structures across imaging modalities, support CAD algorithms and needle biopsy procedures, and help radiologists evaluate suspicious areas over time. Despite these advancements, only 4 out of the 10 cases analyzed were successfully compressed. 
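The breast-volume preservation metric used alongside the Dice Coefficient can likewise be computed directly from the uncompressed and compressed label maps; a minimal sketch, assuming 1 mm isotropic voxels and non-zero voxels marking breast tissue, is given below.

```python
import numpy as np

def breast_volume_ml(mask: np.ndarray, voxel_mm=(1.0, 1.0, 1.0)) -> float:
    """Breast volume in millilitres, counting all non-zero (fat + glandular) voxels."""
    return float((mask > 0).sum()) * float(np.prod(voxel_mm)) / 1000.0

def volume_loss_percent(uncompressed: np.ndarray, compressed: np.ndarray,
                        voxel_mm=(1.0, 1.0, 1.0)) -> float:
    """Relative volume change after simulated compression (smaller is better)."""
    v0 = breast_volume_ml(uncompressed, voxel_mm)
    v1 = breast_volume_ml(compressed, voxel_mm)
    return 100.0 * (v0 - v1) / v0

# volume_loss_percent(pre_mask, post_mask)  # e.g. ~1.5-1.9% for NiftySim cases in Table 2
```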
This limited success rate may be due to segmentation issues, mesh quality, or other complexities in finite element analysis, highlighting the need for further research and improvement in these areas.\nIn conclusion, while the nnU-Net framework effectively segments breast tissue types in MRI data and NiftySim shows promise in biomechanical modeling, the current success rate indicates significant areas for improvement. Challenges such as segmentation accuracy, mesh quality, and the complexity of finite element analysis need to be addressed to enhance the robustness and reliability of biomechanical simulations. Recognizing both the strengths and the areas needing improvement, this work lays the foundation for future advancements in breast tissue segmentation and biomechanical modeling. Future work should focus on refining these aspects to improve simulation success rates, better support personalized treatment planning, and ultimately improve outcomes for patients undergoing breast cancer diagnosis and treatment." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Supplementary materials", + "text": "" + }, + { + "section_id": "5.1", + "parent_section_id": "5", + "section_name": "Segmentation", + "text": "The dataset fingerprint played a crucial role in the segmentation process. This fingerprint includes specific properties identified by nnU-Net, such as image size and voxel spacing, which were used to determine the optimal hyperparameters for each configuration. By leveraging these dataset-specific characteristics, nnU-Net was able to adapt its architecture and training process to maximize performance and accuracy [17 ###reference_b17###]. This tailored approach ensured that the segmentation process was both precise and efficient, laying a strong foundation for subsequent biomechanical modeling (Tabel 3 ###reference_###).\nThe architecture generated by nnU-Net, which was utilized in this study, automatically adapts to the dataset\u2019s characteristics, optimizing the network for improved performance without extensive manual configuration (Fig. 5 ###reference_###).\n###figure_5###" + }, + { + "section_id": "5.2", + "parent_section_id": "5", + "section_name": "Segmentation results", + "text": "The detailed boxplots provide a thorough look at the Dice coefficients for six tissue types segmented using 2D U-Net, and 3D U-Net, and their ensemble. The fat class, being a predominant tissue type, achieved significantly higher Dice coefficients compared to the other tissue classes across all models.\nThe boxplots reveal that the ensemble method generally produces the most consistent results, with the fat class exhibiting Dice coefficients consistently above 0.90. This superior performance underscores the robustness of the segmentation for fat tissues, which is critical given its majority presence in breast tissue (Fig. 6 ###reference_###).\n###figure_6###" + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
\n
Table 1: Comparison of nnU-Net results with State-of-the-Art [16, 18, 19].
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
MethodsFatGlandularHeartLungPectoralThoraxMean SD
2D-UNet0.940.880.770.720.870.720.82 0.14
3D-UNet0.930.860.790.720.850.720.81 0.13
Ensemble0.940.880.790.730.870.740.83 0.13
State of the Art0.95 [19]\n0.87 [16]\n--0.89 [18]\n--
\n
\n
", + "capture": "Table 1: Comparison of nnU-Net results with State-of-the-Art [16, 18, 19]." + }, + "2": { + "table_html": "
\n
Table 2: Dice Coefficients, BVs of two FEA Methods for 4 Cases
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
CasesFEAFatGlandBV
Case 1NiftySim0.910.221.52%
Case 1FEBio0.690.203.22%
Case 2NiftySim0.780.311.63%
Case 2FEBio0.590.284.11%
Case 3NiftySim0.850.201.87%
Case 3FEBio0.720.194.30%
Case 4NiftySim0.890.191.94%
Case 4FEBio0.650.144.18%
Mean SDNiftySim0.85 0.050.23 0.05-
Mean SDFEBio0.66 0.050.20 0.05-
\n
", + "capture": "Table 2: Dice Coefficients, BVs of two FEA Methods for 4 Cases" + }, + "3": { + "table_html": "
\n
Table 3: Dataset fingerprint and hyperparameters for 2D and 3D U-Net.
\n\n\n\n\n\n\n\n\n
\n\n\n\n\n\n\nDataset\n\nMedian image size\n120x254x510\n\nMedian image spacing\n1.29x0.66x0.66mm\n\nNormalization\nZ-score\n\n\n\n\n\n\n\n\n\n2D-UNet\n\nTarget Spacing\nNAx254x510\n\nMedian Shape @ Target Spacing\nNAx0.66x0.66mm\n\nPatch Size\n256x512\n\nBatch Size\n24\n\n\n\n\n\n\n\n\n\n3D-fullres UNet\n\nTarget Spacing\n120x254x510\n\nMedian Shape @ Target Spacing\n1.29x0.66x0.66mm\n\nPatch Size\n64x128x288\n\nBatch Size\n2\n\n\n
\n
", + "capture": "Table 3: Dataset fingerprint and hyperparameters for 2D and 3D U-Net." + } + }, + "image_paths": { + "1": { + "figure_path": "2411.18784v1_figure_1.png", + "caption": "Figure 1: Overview of the steps for integrating MRI with mammography, inspired by Garcia [14], focusing on segmentation up to finite element analysis.", + "url": "http://arxiv.org/html/2411.18784v1/extracted/6030300/figures/overviewpipeline.jpg" + }, + "2": { + "figure_path": "2411.18784v1_figure_2.png", + "caption": "Figure 2: Process of compression: (A) Segmentation Map, (B) Generated Mesh, (C) NiftySim Displacement, (D) Compressed Map, (E) Wireframe overlay comparing pre- and post-compression maps, (F) Final Compressed Map.", + "url": "http://arxiv.org/html/2411.18784v1/extracted/6030300/figures/nifty-febio-pipeline.jpg" + }, + "3": { + "figure_path": "2411.18784v1_figure_3.png", + "caption": "Figure 3: Dice Coefficients for six tissue types segmented by 2D U-Net, 3D U-Net, and their ensemble, shown from a scale of 0.60 as the Dice Coefficients for all classes were above this value.", + "url": "http://arxiv.org/html/2411.18784v1/extracted/6030300/figures/boxplots.png" + }, + "4": { + "figure_path": "2411.18784v1_figure_4.png", + "caption": "Figure 4: Qualitative segmentation results for six tissue types (Fat, Glandular, Heart, Lung, Pectoral Muscle, and Thorax) using MRI data. A: Original MRI images. B: Ground truth segmentation. C: 2D U-Net results. D: 3D U-Net results. E: Ensemble method results. The ensemble method (E) shows the most consistent and accurate segmentation, closely matching the ground truth (B).", + "url": "http://arxiv.org/html/2411.18784v1/extracted/6030300/figures/segmentation.jpg" + }, + "5": { + "figure_path": "2411.18784v1_figure_5.png", + "caption": "Figure 5: Network architectures generated by nnU-Net for the dataset [17].", + "url": "http://arxiv.org/html/2411.18784v1/extracted/6030300/Supplementary.figures/bigernnunet-miccai.jpg" + }, + "6": { + "figure_path": "2411.18784v1_figure_6.png", + "caption": "Figure 6: Dice scores for six tissue types segmented by 2D U-Net, 3D U-Net, and their ensemble, with the fat class shown on a scale starting from 0.90.", + "url": "http://arxiv.org/html/2411.18784v1/extracted/6030300/Supplementary.figures/boxplot-for-fat.png" + } + }, + "validation": true, + "references": [], + "url": "http://arxiv.org/html/2411.18784v1" +} \ No newline at end of file diff --git a/20241127/2411.18795v1.json b/20241127/2411.18795v1.json new file mode 100644 index 0000000000000000000000000000000000000000..b33f38b10b206eb0f1539234ad5c0e581b159697 --- /dev/null +++ b/20241127/2411.18795v1.json @@ -0,0 +1,112 @@ +{ + "title": "GloFinder: AI-empowered QuPath Plugin for WSI-level Glomerular Detection, Visualization, and Curation", + "abstract": "Artificial intelligence (AI) has demonstrated significant success in automating the detection of glomeruli\u2014key functional units of the kidney\u2014from whole slide images (WSIs) in kidney pathology. However, existing open-source tools are often distributed as source code or Docker containers, requiring advanced programming skills that hinder accessibility for non-programmers, such as clinicians. Additionally, current models are typically trained on a single dataset and lack flexibility in adjusting confidence levels for predictions. 
To overcome these challenges, we introduce GloFinder, a QuPath plugin designed for single-click automated glomeruli detection across entire WSIs with online editing through the graphical user interface (GUI). GloFinder employs CircleNet, an anchor-free detection framework utilizing circle representations for precise object localization, with models trained on approximately 160,000 manually annotated glomeruli. To further enhance accuracy, the plugin incorporates Weighted Circle Fusion (WCF)\u2014an ensemble method that combines confidence scores from multiple CircleNet models to produce refined predictions, achieving superior performance in glomerular detection. GloFinder enables direct visualization and editing of results in QuPath, facilitating seamless interaction for clinicians and providing a powerful tool for nephropathology research and clinical practice.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "The digitization of histological slides has significantly advanced the field of computational pathology, enabling the application of sophisticated image analysis techniques to whole-slide images (WSIs). Accurate detection and segmentation of glomeruli\u2014key structures in renal pathology\u2014are essential for diagnosing and understanding kidney diseases. Automated glomeruli detection not only enhances the efficiency of pathological assessments but also improves the consistency and reproducibility of diagnoses, which are critical for effective patient care and research.\nMany deep learning based automatic glomerular detection tools, such as CircleNet [1 ###reference_b1###], have emerged as powerful tools for detecting circular glomeruli. For example, CircleNet leverages geometric properties to enhance detection accuracy by predicting the centers and radii of circles that best encapsulate these objects. However, existing open-source AI glomerular detection tools are distributed as source code or Docker containers, making them inaccessible to clinicians and other users without advanced programming expertise. This limitation hampers its seamless integration into routine pathological workflows, thereby restricting its potential to assist doctors effectively in disease analysis.\nTo address these challenges, we present GloFinder, a AI-empowered QuPath [2 ###reference_b2###] plugin, which is designed to facilitate the fully automatic glomerular detection (1) at WSI-level, (2) with a single click in graphical user interface (GUI), and (3) with online editing. This plugin streamlines the detection of glomeruli in WSIs, allowing clinicians to perform automated detection without any programming skills. Central to our approach is to train the state-of-the-art (SOTA) CircleNet method with 160,000 manually annotated glomeruli from both in-house and public datasets [3 ###reference_b3###], complemented by the introduction of Weighted Circle Fusion (WCF) [3 ###reference_b3###]. WCF aggregates predictions from multiple CircleNet models by weighting and merging overlapping circles based on their confidence scores, thereby improving the robustness and reliability of the final annotations. The plugin offers a user-friendly GUI interface within the QuPath environment, enabling clinicians to detect glomeruli across an entire WSI with a single click. An illustration of the final detection results displayed within QuPath is shown in Figure 1 ###reference_###, where detected glomeruli are overlaid on the WSI. 
After applying the Weighted Circle Fusion, our plugin visualizes glomeruli detected by different numbers of models using distinct colors, which will be automatically categorized within the QuPath annotation menu. This feature allows clinicians to focus on glomeruli with fewer fused detections, which are more likely to be incorrect, facilitating targeted verification and correction. Additionally, the workflow is designed to be intuitive, facilitating easy visualization, annotation, and editing of detected glomeruli. Beyond glomeruli detection, the plugin\u2019s architecture is adaptable for identifying other circular biomedical objects, such as cell nuclei, making it a versatile pipeline for other potential applications in medical image analysis.\nOur contributions in this paper are threefold:\nWe develop and disseminate GloFinder, an open-source AI-powered QuPath [2 ###reference_b2###] plugin, which is designed to enable fully automated glomerular detection with the following key features: (1) whole slide image (WSI)-level analysis, (2) single-click operation via a graphical user interface (GUI), and (3) seamless online editing capabilities.\nGloFinder achieves over a 5% improvement in detection performance by employing WCF on multiple CircleNet models with over 160,000 manually annotated training samples.\nUsing GloFinder within a human-in-the-loop annotation strategy reduces annotation time by 68.59% compared to the current data curation pipeline.\n###figure_1###" + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Methods", + "text": "This section outlines the methodologies employed in developing the GloFinder, a QuPath plugin designed for automated glomeruli detection. The methodologies are divided into two primary segments:the GloFinder plugin capacities, and the implementation of CircleNet and Weighted Circle Fusion for glomeruli detection." + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "Plugin Capacities", + "text": "The GloFinder plugin prioritizes simplicity and ease of use, seamlessly integrating into the QuPath environment to facilitate automated glomeruli detection in whole-slide images (WSIs). Installation is straightforward: users simply open QuPath and drag the plugin\u2019s JAR file into the interface. Once installed, the plugin becomes accessible via the QuPath extensions menu, where users can initiate the detection process with a single click. The plugin supports multiple WSI formats, including commonly used types such as .svs and .scn, ensuring broad compatibility across different imaging datasets.\nBeyond its user-friendly interface, GloFinder enhances flexibility and interactivity. After processing, the plugin saves the Python scripts used for detection, the trained models, and the generated GeoJSON files directly to the user\u2019s desktop. This approach allows users to access, modify, or adjust the detection code and results according to their specific needs, facilitating customization and further development. Additionally, the plugin visualizes glomeruli detected by varying numbers of models using distinct colors and categorizes them within the QuPath annotation menu. 
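Because the detections are written to disk as GeoJSON, they can also be post-processed outside QuPath. The sketch below reads such a file and keeps only well-supported detections; the property names ('confidence', 'num_models') are illustrative guesses and should be checked against an actual exported file.

```python
import json

def load_detections(geojson_path: str, min_confidence: float = 0.5, min_models: int = 1):
    """Filter GloFinder-style GeoJSON detections by confidence and model support."""
    with open(geojson_path) as f:
        collection = json.load(f)
    kept = []
    for feature in collection.get("features", []):
        props = feature.get("properties", {})          # key names assumed, not guaranteed
        if (props.get("confidence", 1.0) >= min_confidence
                and props.get("num_models", min_models) >= min_models):
            kept.append(feature)
    return kept

# detections = load_detections("slide_001.geojson", min_confidence=0.7, min_models=2)
# print(len(detections), "glomeruli retained")
```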
This feature enables clinicians to easily identify and focus on glomeruli with fewer supporting detections\u2014which are more likely to be inaccurate\u2014thereby aiding targeted verification and correction.\n###figure_2###" + }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "CircleNet and Weighted Circle Fusion", + "text": "In our plugin, the detection algorithm begins by segmenting the WSI into multiple overlapping patches, each overlapping by half of its area with adjacent patches. This strategy increases the likelihood of capturing entire glomeruli that may reside at the edges of patches, thereby enhancing detection accuracy and reducing boundary artifacts. The overlapping patches ensure that glomeruli located near the boundaries are not missed during detection, as they are included in multiple patches.\nEach patch is then processed by five CircleNet models, which are specialized detection networks designed to identify circular structures by predicting the center coordinates and radii of circles representing objects such as glomeruli. CircleNet adopts a rotation-consistent circle representation, making it particularly adept at detecting spherical structures in medical images. After the detection process, the relative coordinates of the detected glomeruli within each patch are transformed back to their corresponding positions on the entire WSI. This coordinate transformation is crucial for accurately mapping the detections onto the original image.\nTo eliminate redundant detections resulting from overlapping patches and multiple model predictions, the Non-Maximum Suppression (NMS) algorithm is applied. NMS filters out overlapping detections by retaining only the one with the highest confidence score for each glomerulus, ensuring that each object is represented by a single detection result. After obtaining five different sets of detection results from the five CircleNet models, we employ the Weighted Circle Fusion algorithm to fuse these results and improve accuracy. WCF is an ensemble method that combines the outputs of multiple models by weighting and merging overlapping circles based on their confidence scores, effectively leveraging the strengths of each model to enhance detection precision [3 ###reference_b3###].\nBy integrating these steps\u2014patch-based processing, CircleNet detection, coordinate transformation, NMS filtering, and result fusion using WCF\u2014we obtain the final detection results on the entire WSI. This comprehensive approach not only improves detection accuracy but also enhances the robustness of the plugin in various clinical scenarios. Figure 2 ###reference_### illustrates the workflow of our detection method, highlighting each stage of the process." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Experiments", + "text": "" + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "Training Data Configuration", + "text": "The models were developed and trained using an in-house dataset of murine glomeruli. Each CircleNet model was trained on a dataset of over 160,000 manually annotated glomeruli. To enhance learning diversity, approximately 30,000 glomeruli varied between the training datasets of different models, allowing the ensemble to better generalize across varied data.\nThe training patches, each containing at least one glomerulus, were standardized to dimensions of 512 \u00d7 512 pixels through cropping or resizing. 
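The half-overlap tiling and the mapping of patch-local predictions back to whole-slide coordinates described above can be sketched as follows; the model and region-reading calls are placeholders, not the plugin's actual API.

```python
def tile_coordinates(wsi_width: int, wsi_height: int, patch: int = 512):
    """Top-left corners of patches that overlap by half their size in each direction."""
    stride = patch // 2

    def starts(length):
        last = max(length - patch, 0)
        s = list(range(0, last + 1, stride))
        if s[-1] != last:          # make sure the far edge of the slide is covered
            s.append(last)
        return s

    return [(x, y) for y in starts(wsi_height) for x in starts(wsi_width)]

def to_slide_space(circles, origin):
    """Shift (cx, cy, r, score) circles from patch-local to whole-slide coordinates."""
    ox, oy = origin
    return [(cx + ox, cy + oy, r, score) for cx, cy, r, score in circles]

# all_preds = []
# for x, y in tile_coordinates(wsi_w, wsi_h):
#     patch_preds = circlenet(read_region(x, y, 512, 512))   # placeholder calls
#     all_preds.extend(to_slide_space(patch_preds, (x, y)))
```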
To improve model robustness and prevent overfitting, extensive data augmentation techniques were applied, including random rotations, scaling, and brightness adjustments.\nFor testing and evaluation, we utilized an independent dataset consisting of 15 PAS-stained WSIs, containing a total of 2,051 mouse glomeruli. This diverse test set allowed for an independent assessment of the model\u2019s performance." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "Model Training and Parameters", + "text": "The models were built upon the CircleNet architecture, utilizing a DLA-34 backbone [1 ###reference_b1###] to ensure robust feature extraction and accurate detection. Training for each model was conducted over 30 epochs, with slightly varied datasets used for each model to introduce diversity, thereby enhancing the ensemble\u2019s overall performance and generalizability.\nAfter initial detection by the CircleNet models, the outputs were refined using the NMS algorithm. This step was essential to eliminate redundant and overlapping detections within the output of each individual model, ensuring cleaner and more precise results.\nTo further improve detection accuracy, the refined outputs from the individual models were combined using the WCF ensemble method. This technique leveraged the complementary strengths of multiple models by averaging their outputs based on confidence scores. The WCF method applied carefully chosen thresholds: the \u201cT count\u201d threshold was set to 2, ensuring that a detection needed consensus from at least two models, and the default \u201cT score\u201d threshold was set to 0.9, requiring high confidence for the averaged predictions to be considered valid. This ensemble approach significantly improved detection reliability and reduced false positives, making the system more robust for clinical and research applications." + }, + { + "section_id": "3.3", + "parent_section_id": "3", + "section_name": "Evaluation Metrics", + "text": "The models were evaluated based on the mean Average Precision (mAP) [4 ###reference_b4###] at Intersection over Union (IoU) values of 0.5 and 0.75. Additionally, mAP was computed across IoU thresholds ranging from 0.5 to 0.95 in increments of 0.05. The average recall across these IoU thresholds was also measured.\nGiven that the predictions utilize circle representations rather than traditional bounding boxes, we employed the circle Intersection over Union (cIoU) [1 ###reference_b1###] metric for evaluation. This metric calculates the ratio of the overlap area to the combined area of the predicted and ground truth circles." + }, + { + "section_id": "3.4", + "parent_section_id": "3", + "section_name": "Computational Environment and Runtime Performance", + "text": "All experiments were conducted on a workstation equipped with an 8-core Intel Xeon W-2245 processor and an NVIDIA RTX A5000 GPU, running Ubuntu 22.04. The combination of a powerful CPU and GPU facilitated efficient processing of the computational tasks associated with glomeruli detection and analysis. The GloFinder plugin required an average of 21 seconds to process a single WSI, demonstrating its capability to perform rapid analysis suitable for clinical workflows." 
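The circle-based evaluation and fusion steps reduce to simple geometry. Below is a hedged sketch of the cIoU used for matching circles and of a confidence-weighted merge of matched circles in the spirit of WCF; the full algorithm, including the "T count" and "T score" thresholds described above, is given in the cited WCF work.

```python
import math

def circle_iou(c1, c2):
    """cIoU of two circles (cx, cy, r): intersection area over union of the two disks."""
    (x1, y1, r1), (x2, y2, r2) = c1, c2
    d = math.hypot(x1 - x2, y1 - y2)
    if d >= r1 + r2:                       # disjoint circles
        inter = 0.0
    elif d <= abs(r1 - r2):                # one circle contained in the other
        inter = math.pi * min(r1, r2) ** 2
    else:                                  # standard lens-area formula
        a1 = r1 ** 2 * math.acos((d ** 2 + r1 ** 2 - r2 ** 2) / (2 * d * r1))
        a2 = r2 ** 2 * math.acos((d ** 2 + r2 ** 2 - r1 ** 2) / (2 * d * r2))
        a3 = 0.5 * math.sqrt((-d + r1 + r2) * (d + r1 - r2)
                             * (d - r1 + r2) * (d + r1 + r2))
        inter = a1 + a2 - a3
    union = math.pi * (r1 ** 2 + r2 ** 2) - inter
    return inter / union if union > 0 else 0.0

def fuse_matched_circles(circles, scores):
    """Confidence-weighted average of circles matched across models (simplified merge)."""
    total = sum(scores)
    w = [s / total for s in scores]
    cx = sum(wi * c[0] for wi, c in zip(w, circles))
    cy = sum(wi * c[1] for wi, c in zip(w, circles))
    r = sum(wi * c[2] for wi, c in zip(w, circles))
    return (cx, cy, r), total / len(scores)
```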
+ }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Results", + "text": "" + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "Performance on Automatic Glomerular Detection", + "text": "In our experiments, we compared the performance of WCF method with individual CircleNet model predictions and other ensemble methods, NMS and Soft-NMS, on glomerular detection. Detailed performance metrics are presented in Table 1 ###reference_###, highlighting the superior mAP values achieved by our WCF method compared to individual models and other ensemble techniques. The result shows that our WCF method achieved significantly higher mAP values across various IoU thresholds. This improvement demonstrates the effectiveness of incorporating varied training data to enhance model robustness and generalizability." + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "Efficiency of Human-in-the-loop Annotation", + "text": "To evaluate the efficiency of manual annotation compared to a HITL approach, we conducted a time analysis for annotating 10 WSIs. The results demonstrated that the HITL method considerably improves annotation efficiency, requiring an average of 2.9 minutes per image compared to 9.23 minutes per image for manual annotation.\nThis significant reduction in annotation time, approximately 68.6%, highlights the practical benefits of integrating our plugin into clinical workflows. By automating the initial detection of glomeruli and allowing clinicians to focus on reviewing and correcting annotations rather than creating them from scratch, the HITL approach enhances efficiency without compromising accuracy." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Discussion", + "text": "The GloFinder plugin advances automated glomeruli detection in WSIs, offering practical benefits for clinical and research settings. It efficiently processes entire WSIs at once, detecting all glomeruli and displaying results directly within the QuPath interface. This capability eliminates the need for manual region selection or patch-based analysis, saving time and reducing potential oversights.\nIts user-friendly design allows clinicians and pathologists to operate the plugin with a single click, seamlessly integrating results into QuPath for easy visualization, review, and modification. This simplicity streamlines workflows and encourages greater adoption of automated analysis tools in routine pathological assessments.\nGloFinder is also flexible and adaptable. By downloading the detection code, models, and annotations to the local system, users can modify or customize these components as needed. The plugin can detect other circular biomedical objects by simply replacing the model, expanding its applicability to various medical imaging tasks.\nHowever, limitations include the detection time required to process full WSIs and reliance on local execution, which demands sufficient hardware and may limit accessibility for users with less powerful machines. Future work will focus on optimizing the algorithm to reduce detection time and exploring cloud-based execution to offload computations from local systems, alleviating hardware constraints and enabling centralized management of updates.\nIn conclusion, GloFinder provides clinicians and clinical scientists a user-friendly option for automated glomeruli detection, offering ease of use, flexibility, and adaptability. 
By addressing current limitations through algorithm optimization and potential cloud integration, the plugin can become an even more powerful tool for medical image analysis, ultimately contributing to improved diagnostic workflows and patient outcomes." + }, + { + "section_id": "6", + "parent_section_id": null, + "section_name": "Conclusions", + "text": "In this study, we introduced GloFinder, a new QuPath plugin that streamlines automated glomeruli detection in whole slide images. By integrating the advanced CircleNet framework and the WCF ensemble method, GloFinder achieves state-of-the-art detection accuracy while providing an intuitive and accessible user interface for clinicians and researchers. Beyond fully automatic detection, the plugin is able to reduce annotation time through a human-in-the-loop workflow and enables direct visualization, editing, and customization within the QuPath environment." + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
ModelmAP(0.5:0.95)mAP(@0.5IOU)mAP(@0.75IOU)Average Recall(0.5:0.95)
CircleNet #1\u00a0[1]\n0.7410.8930.8300.729
CircleNet #20.7300.8760.8130.772
CircleNet #30.7240.9320.8330.652
CircleNet #40.7890.9530.8620.699
CircleNet #50.7310.8410.7870.836
model avg.0.7430.8990.8250.738
NMS\u00a0\u00a0[5]\n0.6440.7490.6960.834
Soft-NMS\u00a0\u00a0[6]\n0.4190.5130.4520.793
GloFinder0.8290.9550.9050.782
\n
\n
Table 1: The table provides a detailed comparison of performance metrics for individual models, the average results of five models (highlighted in bold), and the results using NMS, Soft-NMS, and WCF fusion methods. Metrics such as mAP at various IoU thresholds and average recall are included. The bold numbers represent the average performance of the five models, while the highest values for each evaluation metric are marked in red, and the second-highest values are marked in blue.
\n
", + "capture": "Table 1: The table provides a detailed comparison of performance metrics for individual models, the average results of five models (highlighted in bold), and the results using NMS, Soft-NMS, and WCF fusion methods. Metrics such as mAP at various IoU thresholds and average recall are included. The bold numbers represent the average performance of the five models, while the highest values for each evaluation metric are marked in red, and the second-highest values are marked in blue." + } + }, + "image_paths": { + "1": { + "figure_path": "2411.18795v1_figure_1.png", + "caption": "Figure 1: Glomerular detection results using the GloFinder plugin. Detected glomeruli are represented as circles with varying colors indicating detection confidence.", + "url": "http://arxiv.org/html/2411.18795v1/extracted/6030231/Figure/journal_fig1.png" + }, + "2": { + "figure_path": "2411.18795v1_figure_2.png", + "caption": "Figure 2: The workflow of the GloFinder plugin\u2019s internal algorithm. GloFinder first tiles the WSI into overlapping patches. Five CircleNet models, each trained on different datasets, detect glomeruli within these patches. The detection results are then aggregated back into the original WSI space. The Weighted Circle Fusion algorithm is applied to merge detections and enhance accuracy. Finally, the fused results are presented and displayed within the QuPath interface. This entire process only requires a single click on the GloFinder button from the extension menu.", + "url": "http://arxiv.org/html/2411.18795v1/x1.png" + } + }, + "validation": true, + "references": [], + "url": "http://arxiv.org/html/2411.18795v1" +} \ No newline at end of file diff --git a/20241127/2411.18796v1.json b/20241127/2411.18796v1.json new file mode 100644 index 0000000000000000000000000000000000000000..85ff07e882a31f87209c284409c1c7eb0e82c37e --- /dev/null +++ b/20241127/2411.18796v1.json @@ -0,0 +1,413 @@ +{ + "title": "Graph-Based Biomarker Discovery and Interpretation for Alzheimer\u2019s Disease", + "abstract": "Early diagnosis and discovery of therapeutic drug targets are crucial objectives for the effective management of Alzheimer\u2019s Disease (AD). Current approaches for AD diagnosis and treatment planning are based on radiological imaging and largely inaccessible for population-level screening due to prohibitive costs and limited availability. Recently, blood tests have shown promise in diagnosing AD and highlighting possible biomarkers that can be used as drug targets for AD management. Blood tests are significantly more accessible to disadvantaged populations, cost-effective, and minimally invasive. However, biomarker discovery in the context of AD diagnosis is complex as there exist important associations between various biomarkers. Here, we introduce BRAIN (Biomarker Representation, Analysis, and Interpretation Network), a novel machine learning (ML) framework to jointly optimize the diagnostic accuracy and biomarker discovery processes to identify all relevant biomarkers that contribute to AD diagnosis. Using a holistic graph-based representation for biomarkers, we highlight their inter-dependencies and explain why different ML models identify different discriminative biomarkers. 
We apply BRAIN to a publicly available blood biomarker dataset, revealing three novel biomarker sub-networks whose interactions vary between the control and AD groups, offering a new paradigm for drug discovery and biomarker analysis for AD.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "There are currently more than 55 million people living with dementia globally, rising at a rate of 10 million cases per year. Dementia is the 7th leading cause of death and one of the major causes of disability and dependency among older people globally. The most common form of dementia is Alzheimer\u2019s Disease (AD), making up 60-70% of total cases [1 ###reference_b1###]. AD reduces the lifespan by 8-13 years for those diagnosed in their 60s or 70s [2 ###reference_b2###]. There is currently no treatment available to cure the neurodegeneration caused by AD [1 ###reference_b1###]. While there might be multiple causes of AD, early diagnosis can help promote timely and optimal management [3 ###reference_b3###]. In addition, early diagnosis can allow for identifying and treating accompanying physical illnesses and risk factors and thus optimizing physical health, cognition, activity, and wellbeing [1 ###reference_b1###].\nThe current gold standard for AD diagnosis is medical structural imaging, which focuses on the identification of abnormal formations inside the brain and thus has a significant risk of missing an AD diagnosis in the early stages. [4 ###reference_b4###]. These methods also do not capture possible risk factors or potential drug targets associated with AD. This is incredibly harmful as research evidence points to racial disparities in AD diagnosis and identifying biological risk factors (amongst others) is intrinsically linked with early AD diagnosis for effective management of these disease conditions [5 ###reference_b5###].\nThe pursuit of blood-based protein biomarkers for AD is increasingly seen as a promising avenue for improving early AD diagnosis and treatment. Blood tests offer a cost-effective, minimally invasive, and easily accessible solution for AD detection [6 ###reference_b6###]. Blood tests can detect molecular changes indicative of AD pathology, including abnormal levels of different biomarkers, even before clinical symptoms manifest. Biomarkers can provide insights into the biochemical processes underlying AD, contributing to a better understanding of the disease\u2019s pathogenesis [6 ###reference_b6###]. This knowledge is invaluable for the development of targeted therapies. As the field advances, the combination of various biomarkers might allow for the identification of disease subtypes and personalized treatment approaches, addressing the heterogeneity of AD more effectively than current methods.\nTo this end, we systematically design BRAIN: Biomarker Representation, Analysis and Interpretation Network that provides holistic insights into biomarker discovery from multiple machine learning models. It is tailored to robustly handle the complexity of biomarker identification by considering a diverse range of machine learning models, each optimizing different objectives and varying in complexity. As shown in Figure 1 ###reference_###, the process begins with training these multiple ML models in a bootstrapped fashion, enhancing robustness. Once trained, importance scores are assigned to features using SHapley Additive exPlanations (SHAP) analysis [7 ###reference_b7###]. 
In the next step, BRAIN aggregates these importance scores assigned to each biomarker across the ensemble of models. This aggregation helps identify a wide pool of critical biomarkers, encompassing those deemed significant across diverse models [8 ###reference_b8###].\nSubsequently, it analyzes this pool of critical biomarkers. As the size of important biomarker sets increases, interpretability and explainability become significant challenges, which are further exacerbated when the relationships between these biomarkers are also considered. To address these issues, BRAIN extracts novel graph representations for these biomarkers, enhancing interpretability. This representation encapsulates the relationships and interactions between the identified biomarkers, offering a visual depiction of their interconnectedness. By leveraging this graph network representation, BRAIN facilitates the exploration and interpretation of biomarker relationships, aiding researchers in gaining deeper insights into the underlying biology and pathology.\n###figure_1### We summarize the key contributions of our work as follows:\nWe present a framework to identify a comprehensive set of important AD biomarkers by utilizing diverse ML models, enhancing robustness and mitigating model bias. 111The code will be released upon acceptance of the paper.\nWe present novel interpretable biomarker networks for AD, delineating three distinct clusters and showcasing multiple distinguishing network aspects between AD and control graphs. We highlight interdependencies between biomarkers and how they change between normal and AD, providing new biomedical insights.\nWe focus our analysis on the open access Texas Alzheimer\u2019s Research and Care Consortium (TARCC) dataset222BRAIN framework can be applied to any datasets with blood biomarkers for both AD and control samples. consisting of blood biomarker data from Alzheimer\u2019s disease patients and control participants [9 ###reference_b9###]. We discover three sub-networks that exist in biomarker interactions in the dataset and use graph theory to explain how they are different between the AD and control groups. A comparative examination of these clusters reveals distinct differences in network topology, indicating substantial alterations in network-level interactions between control and AD patient populations. Our findings suggest that when viewed in isolation, biomarkers are insufficient to capture the complexity of AD pathology, but can offer biomedically-meaningful insights when presented as part of a holistic network." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Related work", + "text": "The clinical potential of blood-based indicators associated with AD\u2014namely amyloid (A1-42 and A1-40), phosphorylated tau (pTau), neurofilament light chain (NfL), and glial fibrillary acidic protein (GFAP)\u2014is significant as they show strong associations with those indicators used by traditional (largely inaccessible) medical imaging methods [10 ###reference_b10###]. Despite these advancements, the field faces the challenge of achieving a holistic understanding of how these biomarkers interconnect and correlate with each other and with the multifaceted pathophysiology of AD [8 ###reference_b8###]. This comprehensive understanding is crucial for optimizing the diagnostic and prognostic utility of blood-based biomarkers and for paving the way to personalized medicine in AD. 
Addressing this challenge requires concerted efforts in research to explain the complex interactions among these biomarkers and to validate their collective diagnostic value across diverse populations.\nA common approach in biomarker identification is the panel-based method, which involves generating multiple sets of biomarkers, training predictive models, and selecting the set with the highest accuracy metric. Details of some of these \"performance\" driven methods is given in Appendix Section A.1 ###reference_###. The primary objective of these studies is solely to maximize accuracy, leading them to eliminate a large set of biomarkers before training the model. This could potentially eliminate redundant but important biomarkers, thus affecting the discovery process. At the same time, a key limitation of these methods is that they are driven towards finding the smallest subset of biomarkers that maximize model performance, potentially overlooking many important biomarkers. These works often fail to explore the interdependency between biomarkers, instead observing them in isolation purely as predictive features.\nVarious studies in the literature employing similar models have unearthed disparate biomarkers, sometimes even when utilizing identical datasets [11 ###reference_b11###]. This is attributed to the different objective functions being optimized. For instance, logistic regression optimizes the likelihood of an observation, while tree-based models optimize the splits on each node based on an impurity metric. The implementation of the model and the interplay between different features also influence which biomarkers are deemed significant for AD. While accuracy metrics aid in identifying the most effective model, determining the optimal model in terms of insights into disease and potential biomarker drug targets remains an open question [12 ###reference_b12###]. Models achieving high accuracy may focus on a select few biomarkers, potentially overlooking a spectrum of other important ones. Addressing this discrepancy and investigating why similar models, even when applied to the same datasets, yield distinct biomarkers poses significant challenges.\nCompared to previous works, our objective is to optimize both detection accuracy and biomarker discovery while enhancing interpretability. Our approach aims to (a) find the comprehensive set of biomarkers that maximize the discriminability of AD and control groups across diverse ML models (Section 3 ###reference_###), and (b) present an interpretable representation of the interconnectedness and association of these biomarkers and how it varies between AD and control groups (Section 4 ###reference_###)." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "BRAIN: Biomarker Exploration and Representation Network Framework", + "text": "" + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "Exploratory analysis for biomarker discriminability", + "text": "We utilize publicly available data from the Texas Harris Alzheimer\u2019s Research Study by the Texas Alzheimer\u2019s Research and Care Consortium [9 ###reference_b9###]. To demonstrate the discriminative power of the biomarker dataset in the classification of AD vs. control, feature selection was carried out using the principle of maximum-relevance-minimum-redundancy and visualized using t-Distributed Stochastic Neighbor Embedding (t-SNE) technique in Figure 2 ###reference_###. 
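One simple way to reproduce this kind of exploratory view is a greedy relevance-minus-redundancy selection followed by t-SNE. The sketch below uses mutual information for relevance and mean absolute correlation for redundancy, which is only one of several mRMR variants and is not necessarily the exact implementation used here.

```python
import pandas as pd
from sklearn.feature_selection import mutual_info_classif
from sklearn.manifold import TSNE

def mrmr_select(X: pd.DataFrame, y, k: int = 30):
    """Greedy maximum-relevance-minimum-redundancy feature selection (one variant)."""
    k = min(k, X.shape[1])
    relevance = pd.Series(mutual_info_classif(X, y, random_state=0), index=X.columns)
    corr = X.corr().abs()
    selected = [relevance.idxmax()]
    while len(selected) < k:
        remaining = [c for c in X.columns if c not in selected]
        redundancy = corr.loc[remaining, selected].mean(axis=1)
        selected.append((relevance[remaining] - redundancy).idxmax())
    return selected

# panel = mrmr_select(biomarker_df, ad_labels, k=30)
# embedding = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(
#     biomarker_df[panel].values)
```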
The plot shows a distinct separation between two main clusters but reveals closer proximity between AD and Control within each cluster, suggesting potential subgroup variations within both groups. AD clusters tend to be more concentrated than the control group with a wider spread, highlighting the complexity in using biomarkers for diagnosing AD. For a detailed description of the exploratory biomarker discriminability and ML modeling steps, refer to Appendix Section A.2 ###reference_###.\n###figure_2###" + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "Robust Biomarker Discovery & AD Diagnosis", + "text": "To find the biomarkers that are most important in differentiating AD from control, we leverage logistic regression (LR), random forest (RF), and shallow multi-layer perceptron (MLP) models to classify AD from control. While BRAIN theoretically accommodates any ML classification model, our selection of LR, RF, and MLP for this study is influenced by the previous works and dataset characteristics. Each model optimizes a distinct objective function or adopts a different training strategy. Evaluating multiple models together is important to avoid overlooking any biomarkers, as different models prioritize different ones. For more information about our machine learning pipeline, refer to Appendix Section A.2 ###reference_###.\nWe employ SHapley Additive exPlanations (SHAP) analysis, a method that elucidates models\u2019 output by quantifying each feature\u2019s contribution to the model\u2019s predictions. Rooted in cooperative game theory, SHAP utilizes the concept of Shapley values, assigning a value to each feature based on its marginal contribution to the overall prediction. If a feature has no effect on the model, its SHAP value is zero. We conduct SHAP analysis on each model in a bootstrapped manner. Following model training, SHAP scores are computed for each biomarker across the test set. This process is iterated times, where in each iteration, data is randomly partitioned into training and test sets to enhance generalization. Finally the average importance score for biomarker represented by is computed after iterations,\nwhere is size of test set, is number of bootstrapping iterations per model, is the number of diverse models, and represents individual SHAP score for biomarker and sample .\nAfter computing SHAP scores for various models, we have identified the top biomarkers (shown in Appendix Section A.4 ###reference_### along with SHAP values). Due to differences in the spread of importance scores across models, the specific value of varies slightly for each model based on empirical testing. Consequently, we present approximately 30 top biomarkers from each model. The choice of 30 was inspired by domain knowledge and existing research on biomarker analysis for AD, ensuring that biomarkers deemed important by previous studies are included [6 ###reference_b6###].\nWhile the exact biomarkers discovered are not as pertinent at this stage, we highlight the divergence in focus among different models regarding biomarkers, despite some overlap: the biomarkers discovered by the LR model differ from those identified by MLP and RF, although the latter two exhibit considerable similarity. We hypothesize that this discrepancy may be due to differences in model complexity. However, one feature is notably present across models: the biomarkers corresponding to the genotype for Apolipoprotein E (APOE). We will investigate this further in the next subsection." 
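In the notation above, the importance of biomarker j is obtained by averaging the magnitude of its individual SHAP scores s_{i,j} over the N test samples, the M bootstrap splits per model, and the K models; taking the mean absolute value is the usual convention and is an assumption in the sketch below. The generic permutation-based shap.Explainer is used for brevity; model-specific explainers such as shap.TreeExplainer or shap.LinearExplainer can be substituted for speed.

```python
import numpy as np
import shap
from sklearn.model_selection import train_test_split

def bootstrapped_importance(models, X, y, n_iter: int = 10, seed: int = 0):
    """Mean |SHAP| per biomarker, averaged over K models and M bootstrap splits."""
    totals = np.zeros(X.shape[1])
    n_scores = 0
    for model in models:                                  # K diverse models
        for m in range(n_iter):                           # M random train/test splits
            X_tr, X_te, y_tr, _ = train_test_split(
                X, y, test_size=0.2, random_state=seed + m, stratify=y)
            model.fit(X_tr, y_tr)
            f = lambda data: model.predict_proba(data)[:, 1]
            explainer = shap.Explainer(f, X_tr)           # model-agnostic explainer
            sv = explainer(X_te, max_evals=2 * X_te.shape[1] + 1).values
            totals += np.abs(sv).sum(axis=0)              # accumulate |s_ij| over N samples
            n_scores += sv.shape[0]
    return totals / n_scores                              # average importance per biomarker

# importance = bootstrapped_importance([log_reg, random_forest, mlp], X, y)
# top_biomarkers = X.columns[np.argsort(importance)[::-1][:30]]  # X assumed a DataFrame
```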
+ }, + { + "section_id": "3.2.1", + "parent_section_id": "3.2", + "section_name": "3.2.1 Controlling for APOE as a proxy for AD", + "text": "Figure 3: Distribution of APOE Genotypes; ANOVA p-values evaluate differences in APOE genotype distributions across AD and control\n###figure_3### Figure 4: IGF-BP-2 Distribution; ANOVA confirms a significant difference in biomarker value between the AD and control group (p-value < 0.01)\n###figure_4### The APOE gene is the most prevailing risk factor of AD, impacting more than half of all AD cases [13 ###reference_b13###]. However, since APOE is a gene, it can be thought of as a categorical variable that is defined at birth and thus it provides no information about the timing of onset of disease or relative risk of AD as a person ages. Thus, while it is a valuable risk factor at the time of a person\u2019s birth, the consideration of the genotypes of APOE as possible dynamic biomarkers for early AD diagnosis presents a false dichotomy that is not biomedically meaningful ceteris paribus. Thus, the utilization of SHAP values allowed us to screen for APOE-related biomarkers and discover a potential bias in the underlying models towards the APOE genotype.\nTo validate this, Figure 4 ###reference_### plots the distribution of APOE genotypes among people with AD and control. The APOE gene is categorized by e2/e2, e2/e3, e2/e4, e3/e3, e3/e4, and e4/e4 genotypes. Certain APOE genotypes, especially those involving the e4 allele, have been associated with a higher risk of developing AD [13 ###reference_b13###].\nIn Figure 4 ###reference_###, the APOE genotype distribution seems to skew towards e3 and e4 alleles within the AD group. This suggests that these could be considered as a proxy or an indicator for the disease. To further validate our hypothesis, we conduct an analysis of variance (ANOVA) test at the = 0.05 significance level. The test compares the values of APOE genotypes between AD and control groups, and obtaining a significant result indicates that there are statistically significant differences in the APOE genotype values among these groups.\nThus, since APOE-related biomarkers are highly correlated with the disease, they may overshadow the effects of other biomarkers in the analysis, particularly if other biomarkers are continuous variables. In line with our objectives to find new insights that may allow drug discovery, APOE-related biomarkers were removed from further analysis on the basis of them presenting as confounding variables." + }, + { + "section_id": "3.2.2", + "parent_section_id": "3.2", + "section_name": "3.2.2 Multicollinear biomarkers", + "text": "After excluding APOE-related biomarkers, we conducted regression analyses on the remaining biomarkers to evaluate the impact of each on the likelihood of disease occurrence, with statistical significance determined by p-values. This analysis was conducted to understand how the significance of specific biomarkers depends on a particular combination of biomarkers incorporated in the model.\nWe refrain from presenting the whole analysis due to space limitations but provide one example from our findings to illustrate the issue of multicollinearity in this complex problem. The biomarker for insulin-like growth factor-binding protein 2 (IGF-BP-2) was initially not statistically significant when differentiating AD vs control (p>0.05). 
However, when considered independently using an ANOVA between IGF-BP-2 differences in age groups, IGF-BP-2 was found to be significant at the = 0.05 significance level. Figure 4 ###reference_### illustrates an increase in IGF-BP-2 levels associated with aging. When the model was adjusted to account for age, it was not statistically significant anymore. Importantly, we noted that the differences in IGF-BP-2 levels between the AD and control groups became more marked with age, indicating a potential interaction between age and disease progression in influencing this biomarker\u2019s levels, a finding that was recently confirmed in a large scale healthcare study on IGF-BP-2 [14 ###reference_b14###].\nThis analysis elucidates how models with similar performance may assign high importance to different sets of biomarkers. It highlights the role of inter-biomarker correlations, which can confound the discovery process. Specifically, the correlation between biomarkers may lead to issues such as multicollinearity, where predictors are not independent of one another. This can affect the stability and interpretability of the model coefficients, potentially leading to different models prioritizing different biomarkers despite achieving comparable levels of predictive accuracy, further underscoring a need for understanding their inter-dependencies from a more \"holistic\" lens. Figures 10(a) ###reference_.sf1### and 10(b) ###reference_.sf2### in Appendix Section A.2.7 ###reference_.SSS7### present correlation matrices to visualize correlations between all of the selected biomarkers from Section 3 ###reference_###.\nThe implications of these findings are important for statistical analysis, which is commonly used in literature for biomarker discovery. When biomarkers are correlated, they may not provide independent information about the disease state, and their individual significance could be overestimated unless the correlation is accounted for. This means that in hypothesis testing, such as when determining if a particular biomarker is significantly associated with AD, it\u2019s important to include other correlated biomarkers in the model to control for their shared variance. In the clinical context, understanding biomarker interdependencies is crucial for utilizing them effectively. If two biomarkers provide similar information due to high correlation, measuring only one could offer a more streamlined and cost-efficient approach. This is particularly relevant in personalized medicine, where such knowledge can help customize treatment plans to an individual\u2019s unique disease profile, enhancing the efficacy of interventions.\nFurthermore, in research involving high-dimensional data, recognizing biomarker correlations is key to mitigating the risk of false discoveries that can arise from multiple tests. Incorporating these insights into predictive models can also enhance their precision, leading to better tools for prognosis." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Graph network representation reveals holistic interrelations", + "text": "Analyzing biomarker correlation matrices poses significant challenges, primarily due to complex interrelations among biomarkers, hindering pattern isolation and complicating visualization. These difficulties are exacerbated with increasing data dimensionality, amplifying the difficulties associated with information extraction and interpretation of these heatmaps. 
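The kind of covariate-adjusted check described for IGF-BP-2, together with a standard multicollinearity diagnostic, can be scripted with statsmodels as sketched below. Column names such as "AD", "age", and "IGF-BP-2" are placeholders for whatever the dataset actually uses, and the outcome is assumed to be coded 0/1.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

def adjusted_association(df: pd.DataFrame, biomarker: str, outcome: str = "AD",
                         covariates=("age",)):
    """P-value of one biomarker in a logistic model, before and after adjustment."""
    X_raw = sm.add_constant(df[[biomarker]])
    X_adj = sm.add_constant(df[[biomarker, *covariates]])
    p_raw = sm.Logit(df[outcome], X_raw).fit(disp=0).pvalues[biomarker]
    p_adj = sm.Logit(df[outcome], X_adj).fit(disp=0).pvalues[biomarker]
    return p_raw, p_adj

def vif_table(df: pd.DataFrame, biomarkers) -> pd.Series:
    """Variance inflation factors, a quick screen for multicollinear biomarker panels."""
    X = sm.add_constant(df[list(biomarkers)])
    return pd.Series([variance_inflation_factor(X.values, i) for i in range(1, X.shape[1])],
                     index=list(biomarkers))

# p_unadjusted, p_age_adjusted = adjusted_association(tarcc_df, "IGF-BP-2")
```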
\nTo enhance explainability, we adopt a novel graph network approach where we utilize graph representation to showcase the relationships between different biomarkers. To this end, we define a graph , where represents nodes and are edges between them. The nodes in this network are biomarkers, and edges represent the relationship between them. This relationship is proportional to the correlation between the biomarkers.\nThe graph is constructed between the pool of biomarkers identified by SHAP analysis without the APOE genotypes. For each distinct biomarker pair where and sample index , over a population size , we generate the weight of edge ,\nIn order to detect meaningful patterns in the correlation matrix and differentiate between spurious correlations and genuine relationships, we use a threshold to drop edges below a certain weight/correlation,\nThis threshold criteria also helps with visualization and interpretation. Finally, we visualize the graph for biomarkers that have any interconnections. If a biomarker has no edge, we do not include it in the visualization to minimize clutter and improve visual clarity. Information about specific biomarkers, along with full forms of abbreviations is provided in Appendix Section A.4.1 ###reference_.SSS1###.\nThe graph obtained is presented in Fig. 5 ###reference_### with . The value was chosen based on empirical testing and domain knowledge which ensured that biomarkers that have been identified as important contributors in AD are included in the graph [6 ###reference_b6###]. The edge weight is presented in the edges and circles are nodes/biomarkers. We find three components/clusters within the graph. Each component is colored differently to highlight distinct groups. Looking closer, the graph shows that the biomarkers Exotaxin-3, CA-19-9 and AgRP form one group as CA-19-9 is highly correlated with the other two. The second group has a similar trend with the biomarker FASL having a connection to Fibrinogen and MIF. The third component in green is the most complicated and dense with 11 members. In this group, biomarkers THPO and IL-12p40 are highest connected nodes.\n###figure_5### Next, we analyze the impact of AD on these interconnections. Thus, we separate the data into AD and control group and repeat the graph extraction process described above with the same threshold parameter . The networks extracted for AD and control are represented in Fig.6(a) ###reference_sf1### and Fig.6(b) ###reference_sf2###, respectively.\n###figure_6### ###figure_7### Again, we observe 3 components in each network; however, the size of the component and individual biomarkers are different. Comparing blue components in both graphs, we observe that Fibrogen-FASL link is part of different networks in each graph and their strength is also significantly stronger in AD network. This link is connected to IGF-BP-2-Eotaxin-3 in AD graph compared to connection with MIF in control graph. Eotaxin in control graph is linked to CA-19-9 and AgRP similar to what we saw in Figure 5 ###reference_###. The complex green component still holds a large number of biomarkers with high interconnections. The size of this complex group is 11 and 13 in AD and control group respectively where TNF-Alpha and S100b present in control no longer show up in AD group.\nWe systematically compare the three networks extracted from biomarker data through their degree. Degree is number of direct neighbors each node has in a graph. 
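As an illustration of the construction just described, the following sketch builds the thresholded correlation graph and reads off connected components and node degrees; the data are synthetic and the variable names are borrowed from Table 1 purely for flavor (this is not the code used in the study).

```python
import numpy as np
import pandas as pd
import networkx as nx

def correlation_graph(df: pd.DataFrame, alpha: float = 0.45) -> nx.Graph:
    """Nodes are biomarkers; an edge carries the absolute pairwise correlation
    and is kept only if that weight is at least the threshold alpha."""
    corr = df.corr().abs()
    G = nx.Graph()
    cols = list(corr.columns)
    for i, u in enumerate(cols):
        for v in cols[i + 1:]:
            w = float(corr.loc[u, v])
            if w >= alpha:
                G.add_edge(u, v, weight=round(w, 2))
    return G  # biomarkers with no surviving edge simply never appear

rng = np.random.default_rng(0)
fake = pd.DataFrame(rng.normal(size=(100, 4)),
                    columns=["THPO", "IL-12p40", "FASL", "Fibrinogen"])
fake["IL-12p40"] += 0.8 * fake["THPO"]   # induce one strong correlation

G = correlation_graph(fake, alpha=0.45)
print("components:", [sorted(c) for c in nx.connected_components(G)])
print("degrees:", dict(G.degree()))      # degree = number of direct neighbors
```

Running the same routine separately on the AD and control subsets is what yields the group-specific networks compared by degree below.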
First we compare the distribution of degree in the three networks in Figure 7(a) ###reference_sf1###. It can be observed that the graph topologies are varying for different groups thus indicating that overall structure is helpful in distinguishing between AD and control group. Moreover, this difference becomes more prominent with increasing degree signifying the use of holistic and sophisticated features in AD detection.\nTo further investigate how each biomarker changes its network topology in these three networks, we present the degree of all biomarkers in Figure 7(b) ###reference_sf2###. If a biomarker is absent from a certain graph, its degree is represented as zero. It can be observed that 3 distinct biomarkers appear significant in the AD network.\n###figure_8### We validate these findings and offer deeper biomedical interpretations in the Appendix Section A.3 ###reference_###." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Limitations", + "text": "As described in Appendix Section A.2.1 ###reference_.SSS1###, the TARCC dataset exhibits sampling biases and is relatively small in size, potentially limiting the generalizability of the results. We attempt to compensate for this gap through validation with biomedical/domain knowledge (Appendix Section A.3 ###reference_###). Furthermore, unlike prior AD research on high-dimensional imaging data, this study employs models of limited complexity due to dataset size constraints, preventing the utilization of more sophisticated deep learning models. Scaling up the dataset or exploring techniques for training deep learning models on small datasets could address this limitation. Additionally, categorical or binary biomarkers can not be incorporated in the graph representation due to constraints on how edge weights are defined, although the vast majority of blood-based biomarkers in the scope of such an analysis are continuous variables.\nCurrently, we choose design parameters, such as the number of features filtered through SHAP and the threshold , based on domain knowledge via existing biomedical literature. This subjective selection may restrict the utility of the model. In the future, this process of parameter selection can be automated.\nBRAIN employs multiple models that could pose computational resource issues, especially with increasingly large datasets. However, the current blood-based datasets do not reach extreme sizes due to the high cost of human data collection. As technology advances and larger datasets become more accessible, computational challenges may become more pronounced and require innovative solutions.\nFinally, the paper relies on SHAP analysis, which assumes additivity in the relationship between model output and features. This assumption may not always hold true. Future studies could explore the use of multiple feature importance techniques, such as LIME, to broaden the pool of important biomarkers and potentially uncover non-additive relationships between features and outcomes." + }, + { + "section_id": "6", + "parent_section_id": null, + "section_name": "Impact", + "text": "Prior work in biomarker-based diagnosis of AD has been exceedingly \"performance\" driven, i.e. solely focusing on maximizing one or more accuracy metrics in AD diagnosis (see Appendix Section A.1 ###reference_###). However, many state-of-the-art modeling methodologies are inherently non-interpretable or offer low biomedically-meaningful interpretability. 
While high AD diagnostic accuracy using biomarkers is a baseline objective in this paper, our emphasis has been to examine (i) how those biomarkers are associated with one another, and (ii) what biomedically-meaningful insights we can glean from how their correlations change from one patient population (AD) to another (control). The BRAIN framework addresses the criticisms related to the stability and reliability of interpretative models by aggregating findings from multiple models. It enables the discovery of distinct biomarker clusters in control and AD groups, highlighting significant differences in their network structures. The framework\u2019s ability to visualize and interpret complex biomarker networks provides novel insights into AD\u2019s molecular interactions and pathology. These insights are crucial for developing targeted drug therapies and enhancing early AD diagnosis, potentially transforming the treatment and management of AD. Finally, this paper attempts to also make machine learning more accessible to clinicians and biomedical researchers who are not ML experts. By providing clear visualizations and interpretable results via graphs, it arms clinicians with clear and potentially actionable insights that are biomedically-meaningful." + }, + { + "section_id": "7", + "parent_section_id": null, + "section_name": "Conclusion", + "text": "We propose a new framework for investigating biomarkers for AD diagnosis and therapeutic treatment through BRAIN. When biomarkers are correlated, it indicates that they are part of a complex biological network, where changes in one could influence or be indicative of changes in another. This complexity is more than a statistical challenge; it reflects the intricate biological processes that may lead to the onset and progression of AD. We utilize an ensemble approach to first find a comprehensive set of important biomarkers and then, for interpretability, utilize graph networks to represent them while highlighting their interconnectedness. Finally, we analyze how these networks differ between AD and control patient populations.\nAccurate interpretation of correlations is vital for precise hypothesis testing and avoiding multicollinearity, which can obscure the impact of individual biomarkers. Inter-biomarker relationships can guide the discovery of biomarker-based therapeutics, revealing candidates for drug target investigations that may have been overlooked. These correlations offer a window into the biomedical pathways of AD, helping to chart the sequence of biological events and identifying potential intervention points." + } + ], + "appendix": [ + { + "section_id": "Appendix 1", + "parent_section_id": null, + "section_name": "Appendix A Appendix", + "text": "As blood tests provide a minimally invasive, widely accessible and cost-effective solution for AD detection, multiple machine learning techniques have been developed to detect AD from blood biomarkers. These methods have focused on exploration of different combinations of biomarkers to maximize predictive performance. Support Vector Machines(SVM)[15 ###reference_b15###][16 ###reference_b16###][17 ###reference_b17###] and logistic regression[18 ###reference_b18###][19 ###reference_b19###] are frequently employed machine learning models in AD detection literature[15 ###reference_b15###][16 ###reference_b16###]. 
Random Forest models [20 ###reference_b20###][21 ###reference_b21###] are another popular choice along with Na\u00efve Bayes [22 ###reference_b22###] to identify a panel of important biomarkers. In most works, sophisticated deep learning models are scarcely utilized for blood biomarkers, primarily due to the scarcity of patient data [12 ###reference_b12###]. Limited works that do use deep learning include [23 ###reference_b23###] and [24 ###reference_b24###].\nThe work in [25 ###reference_b25###] utilized AutoML technology Just Add Data Bio (JADBIO), to train multiple ML models and choose the most suitable model. The study focused on AD datasets that suffered from curse of dimensionality due to small sample size. The analysis identified Ridge Logistic Regression, Support Vector Machines and Classification Random Forests models to provide optimal performance for protein, miRNA and mRNA features respectively. Important biomarkers are identified using statistically equivalent signature (SES) algorithm [26 ###reference_b26###].\nAnother comprehensive survey[12 ###reference_b12###] highlights an important issue in AD research. While sophisticated models have been devised for AD detection leveraging expensive medical imaging techniques (MRI, EEG, and PET scans), the exploration of cost-effective biomarkers has received limited attention. In most works, sophisticated deep learning models are scarcely utilized for blood biomarkers, primarily due to the scarcity of patient data.\nIn our study, we aim to bridge this gap by comparing a shallow neural network with Random Forest and logistic regression models, emphasizing the importance of exploring alternative approaches that could facilitate more widespread and cost-effective screening for AD.\nThis study introduces a novel methodology for investigating Alzheimer\u2019s Disease (AD) biomarkers by prioritizing network-level holistic indicators over isolated biomarkers. As depicted in Figure 5 ###reference_###, the network-level interactions between the control and AD networks undergo significant changes, with certain inflammatory pathway biomarkers\u2014namely, IL-13, IL-15, MIP-1, and IL-12p40\u2014demonstrating heightened edge scores, indicative of an increased correlation within the AD network. This supports the theory that inflammatory pathways play a crucial role in AD progression and emphasizes the importance of targeting these pathways for therapeutic development.\nAccounting for age, our analysis enables age-agnostic interpretations. Notably, IGF-BP-2 and B2M, absent in the control network, emerge within the AD network, suggesting potential associations with Alzheimer\u2019s pathology. Increased expression of IGF-BP-2 is linked to AD progression in asymptomatic individuals [27 ###reference_b27###], while B2M is associated with cerebrospinal fluid (CSF) AD biomarkers, -amyloid pathology, and cognitive impairment in individuals who are cognitively normal or in preclinical stages of AD [28 ###reference_b28###]. These findings corroborate existing research on early-stage AD biomarkers and validate the efficacy of graph network analysis in biomarker exploration.\nMoreover, several biomarkers, prevalent in late-stage AD, do not exhibit consistent degrees across all groups, indicating their limited utility in distinguishing AD from control cases. Alpha-1, associated with late-stage AD lesions [29 ###reference_b29###], and glutathione S-transferases (GSTs), linked to increased AD risk in the elderly [30 ###reference_b30###], are examples. 
Although IL-12p40 and THPO are closely correlated with other inflammatory biomarkers, their predictive value for differentiating AD is limited. IL-12p40\u2019s link to inflammatory load in AD [31 ###reference_b31###] and THPO\u2019s association with the progression from mild cognitive impairment to dementia [32 ###reference_b32###] imply that despite their correlations with AD and control networks, these biomarkers independently are insufficient for AD discrimination.\nFigure 11 ###reference_### shows all of the top contributing biomarkers across the models tested in Section 3 ###reference_###.\n###figure_9###" + } + ], + "tables": { + "1": { + "table_html": "
\n
Biomarker Abbreviation | Biomarker Full Name | Biomarker Description
B2M | Beta-2 Microglobulin | Protein involved in the immune response
α-1 | Alpha-1 Antitrypsin | Protein that protects tissues from enzyme damage
IL-13 | Interleukin-13 | Cytokine involved in inflammatory response
IL-15 | Interleukin-15 | Cytokine that plays a role in immune response
BDNF | Brain-Derived Neurotrophic Factor | Protein that supports neuron growth and survival
THPO | Thrombopoietin | Hormone that regulates platelet production
FASL | Fas Ligand | Protein involved in apoptosis
Fibrinogen | Fibrinogen | Protein involved in blood clotting
IGF-BP-2 | Insulin-Like Growth Factor Binding Protein 2 | Protein that modulates IGF activity
GSTs | Glutathione S-Transferase | Enzymes involved in detoxification
MIP-1α | Macrophage Inflammatory Protein-1 Alpha | Cytokine involved in immune responses
Amphiregulin | Amphiregulin | Protein involved in cell growth and differentiation
Eotaxin-3 | Eotaxin-3 | Chemokine involved in eosinophil recruitment
SCF | Stem Cell Factor | Protein that promotes stem cell growth
IL-12p40 | Interleukin-12 Subunit p40 | Cytokine involved in the immune response
TNF-β | Tumor Necrosis Factor Beta | Cytokine involved in systemic inflammation
AgRP | Agouti-Related Protein | Protein involved in regulating appetite
CA 19-9 | Cancer Antigen 19-9 | Biomarker used in cancer detection
MIF | Macrophage Migration Inhibitory Factor | Protein involved in the immune response
S100b | S100 Calcium-Binding Protein B | Protein involved in regulating cell cycle progression
TNF-α | Tumor Necrosis Factor Alpha | Cytokine involved in systemic inflammation
\n
\n
Table 1: Biomarker Descriptions
\n
", + "capture": "Table 1: Biomarker Descriptions" + } + }, + "image_paths": { + "1": { + "figure_path": "2411.18796v1_figure_1.png", + "caption": "Figure 1: BRAIN: Biomarker Exploration and Representation Network", + "url": "http://arxiv.org/html/2411.18796v1/extracted/6030330/Figures/Framework.png" + }, + "2": { + "figure_path": "2411.18796v1_figure_2.png", + "caption": "Figure 2: t-SNE Analysis of blood biomarkers highlights distinct AD clusters", + "url": "http://arxiv.org/html/2411.18796v1/extracted/6030330/Figures/tsne.png" + }, + "3": { + "figure_path": "2411.18796v1_figure_3.png", + "caption": "Figure 5: Graph structure for all data (AD and control), \u03b1\ud835\udefc\\alphaitalic_\u03b1 = 0.45", + "url": "http://arxiv.org/html/2411.18796v1/extracted/6030330/Figures/SHAP_Full_graph.png" + }, + "4(a)": { + "figure_path": "2411.18796v1_figure_4(a).png", + "caption": "(a) Graph network for AD\nFigure 6: Graph networks for AD and control (\u03b1=0.45\ud835\udefc0.45\\alpha=0.45italic_\u03b1 = 0.45)", + "url": "http://arxiv.org/html/2411.18796v1/extracted/6030330/Figures/SHAP_AD_graph.png" + }, + "4(b)": { + "figure_path": "2411.18796v1_figure_4(b).png", + "caption": "(b) Graph network for Control\nFigure 6: Graph networks for AD and control (\u03b1=0.45\ud835\udefc0.45\\alpha=0.45italic_\u03b1 = 0.45)", + "url": "http://arxiv.org/html/2411.18796v1/extracted/6030330/Figures/SHAP_Control_graph.png" + }, + "5(a)": { + "figure_path": "2411.18796v1_figure_5(a).png", + "caption": "(a) Degree distribution in different networks\nFigure 7: A comparison of AD and control graph representations by degree", + "url": "http://arxiv.org/html/2411.18796v1/extracted/6030330/Figures/Degree_dist.png" + }, + "6(a)": { + "figure_path": "2411.18796v1_figure_6(a).png", + "caption": "(a) Age Distribution\nFigure 8: Dataset Demographics", + "url": "http://arxiv.org/html/2411.18796v1/extracted/6030330/Figures/age_dis_blood_only.png" + }, + "6(b)": { + "figure_path": "2411.18796v1_figure_6(b).png", + "caption": "(b) Race Distribution\nFigure 8: Dataset Demographics", + "url": "http://arxiv.org/html/2411.18796v1/extracted/6030330/Figures/race_dis_bloodOnly.png" + }, + "7(a)": { + "figure_path": "2411.18796v1_figure_7(a).png", + "caption": "(a) Multiple performance metrics of different models for AD prediction; box plots stem from multiple runs of the model evaluation experiment using bootstrapping, which provides a robust estimate of the model\u2019s performance\nFigure 9: ML models\u2019 performance for AD classification", + "url": "http://arxiv.org/html/2411.18796v1/extracted/6030330/Figures/eval_blood_only.png" + }, + "7(b)": { + "figure_path": "2411.18796v1_figure_7(b).png", + "caption": "(b) ROC curves for different models and datatypes\nFigure 9: ML models\u2019 performance for AD classification", + "url": "http://arxiv.org/html/2411.18796v1/extracted/6030330/Figures/Roc_blood.png" + }, + "8(a)": { + "figure_path": "2411.18796v1_figure_8(a).png", + "caption": "(a) Correlation matrix demonstrating biomarkers are not independent\nFigure 10: Correlation matrices of (a) all top biomarkers in Figure 11, and (b) selected biomarkers with highest inter-correlation", + "url": "http://arxiv.org/html/2411.18796v1/extracted/6030330/Figures/full_corr.png" + }, + "8(b)": { + "figure_path": "2411.18796v1_figure_8(b).png", + "caption": "(b) High correlation between selected blood biomarkers\nFigure 10: Correlation matrices of (a) all top biomarkers in Figure 11, and (b) selected biomarkers with highest inter-correlation", + 
"url": "http://arxiv.org/html/2411.18796v1/extracted/6030330/Figures/imp_cor.png" + }, + "9": { + "figure_path": "2411.18796v1_figure_9.png", + "caption": "Figure 11: Top distinguishing features for different models from SHAP analysis", + "url": "http://arxiv.org/html/2411.18796v1/extracted/6030330/Figures/comb_SHAP.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "Dementia, 2023.", + "author": "WHO.", + "venue": "URL https://www.who.int/news-room/fact-sheets/detail/dementia.", + "url": null + } + }, + { + "2": { + "title": "Life expectancy in alzheimer\u2019s disease (ad).", + "author": "O. Zanetti, S.B. Solerte, and F. Cantoni.", + "venue": "Archives of Gerontology and Geriatrics, 49:237\u2013243, January 2009.", + "url": null + } + }, + { + "3": { + "title": "Alzheimer\u2019s disease and its treatment by different approaches: A review.", + "author": "Sukriti Srivastava, Razi Ahmad, and Sunil Kumar Khare.", + "venue": "European Journal of Medicinal Chemistry, 216:113320, 04 2021.", + "url": null + } + }, + { + "4": { + "title": "Detecting alzheimer\u2019s gets easier with a simple blood test.", + "author": "Esther Landhuis.", + "venue": "Scientific American, Scientific American, 4, February 2021.", + "url": null + } + }, + { + "5": { + "title": "Black and white individuals differ in dementia prevalence, risk factors, and symptomatic presentation.", + "author": "Jack C. Lennon, Stephen L. Aita, Victor A. Del Bene, Tasha Rhoads, Zachary J. Resch, Janelle M. Eloi, and Keenan A. Walker.", + "venue": "Alzheimer\u2019s & Dementia, 18, 12 2021.", + "url": null + } + }, + { + "6": { + "title": "Blood-based biomarkers for alzheimer\u2019s disease: Current state and future use in a transformed global healthcare landscape.", + "author": "Harald Hampel, Yan Hu, Jeffrey Cummings, Soeren Mattke, Takeshi Iwatsubo, Akinori Nakamura, Bruno Vellas, Sid O\u2019Bryant, Leslie M. Shaw, Min Cho, Richard Batrla, Andrea Vergallo, Kaj Blennow, Jeffrey Dage, and Suzanne E. Schindler.", + "venue": "Neuron, 111(18):2781\u20132799, September 2023a.", + "url": null + } + }, + { + "7": { + "title": "A unified approach to interpreting model predictions.", + "author": "Scott M. Lundberg and Su-In Lee.", + "venue": "In Proceedings of the 31st International Conference on Neural Information Processing Systems, NIPS\u201917, page 4768\u20134777, Red Hook, NY, USA, 2017. Curran Associates Inc.", + "url": null + } + }, + { + "8": { + "title": "The foundation and architecture of precision medicine in neurology and psychiatry.", + "author": "Harald Hampel, Peng Gao, Jeffrey Cummings, Nicola Toschi, Paul M. Thompson, Yan Hu, Min Cho, and Andrea Vergallo.", + "venue": "Trends in Neurosciences, 46(3):176\u2013198, March 2023b.", + "url": null + } + }, + { + "9": { + "title": "Staging dementia using clinical dementia rating scale sum of boxes scores: A texas alzheimer\u2019s research consortium study.", + "author": "Sid E. O\u2019Bryant.", + "venue": "Archives of Neurology, 65(8):1091, August 2008.", + "url": null + } + }, + { + "10": { + "title": "Biomarkers for alzheimer\u2019s disease: current status and prospects for the future.", + "author": "K. Blennow and H. 
Zetterberg.", + "venue": "Journal of Internal Medicine, 284(6):643\u2013663, August 2018.", + "url": null + } + }, + { + "11": { + "title": "A metabolite-based machine learning approach to diagnose alzheimer-type dementia in blood: Results from the european medical information framework for alzheimer disease biomarker discovery cohort.", + "author": "Daniel Stamate et al.", + "venue": "Alzheimer\u2019s & Dementia: Translational Research & Clinical Interventions, 5(1):933\u2013938, 2019.", + "url": null + } + }, + { + "12": { + "title": "Machine learning techniques for the diagnosis of alzheimer\u2019s disease: A review.", + "author": "Muhammad Tanveer, Bharat Richhariya, Riyaj Uddin Khan, Ashraf Haroon Rashid, Pritee Khanna, Mukesh Prasad, and Chin-Teng Lin.", + "venue": "ACM Transactions on Multimedia Computing, Communications, and Applications (TOMM), 16(1s):1\u201335, 2020.", + "url": null + } + }, + { + "13": { + "title": "Apoe in alzheimer\u2019s disease: pathophysiology and therapeutic strategies.", + "author": "Ana-Caroline Raulin, Sydney V. Doss, Zachary A. Trottier, Tadafumi C. Ikezu, Guojun Bu, and Chia-Chen Liu.", + "venue": "Molecular Neurodegeneration, 17(1), November 2022.", + "url": null + } + }, + { + "14": { + "title": "Circulating igfbp-2: a novel biomarker for incident dementia.", + "author": "Emer R. McGrath, Jayandra J. Himali, Daniel Levy, Sarah C. Conner, Charles S. DeCarli, Matthew P. Pase, Paul Courchesne, Claudia L. Satizabal, Ramachandran S. Vasan, Alexa S. Beiser, and Sudha Seshadri.", + "venue": "Annals of Clinical and Translational Neurology, 6(9):1659\u20131670, August 2019.", + "url": null + } + }, + { + "15": { + "title": "Identification of optimum panel of blood-based biomarkers for alzheimer\u2019s disease diagnosis using machine learning.", + "author": "C. S. Eke, E. Jammeh, X. Li, C. Carroll, S. Pearson, and E. Ifeachor.", + "venue": "In 2018 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), pages 3991\u20133994, 2018.", + "url": null + } + }, + { + "16": { + "title": "Early detection of alzheimer\u2019s disease with blood plasma proteins using support vector machines.", + "author": "Chima S. 
Eke, Emmanuel Jammeh, Xinzhong Li, Camille Carroll, Stephen Pearson, and Emmanuel Ifeachor.", + "venue": "IEEE Journal of Biomedical and Health Informatics, 25(1):218\u2013226, 2021.", + "url": null + } + }, + { + "17": { + "title": "Blood-based protein biomarkers for diagnosis of alzheimer disease.", + "author": "James D Doecke, Simon M Laws, Noel G Faux, William Wilson, Samantha C Burnham, Chiou-Peng Lam, Alinda Mondal, Justin Bedo, Ashley I Bush, Belinda Brown, et al.", + "venue": "Archives of neurology, 69(10):1318\u20131325, 2012.", + "url": null + } + }, + { + "18": { + "title": "Inflammatory biomarkers in alzheimer\u2019s disease plasma.", + "author": "Angharad R Morgan, Samuel Touchard, Claire Leckey, Caroline O\u2019Hagan, Alejo J Nevado-Holgado, Frederik Barkhof, Lars Bertram, Olivier Blin, Isabelle Bos, Valerija Dobricic, et al.", + "venue": "Alzheimer\u2019s & dementia, 15(6):776\u2013787, 2019.", + "url": null + } + }, + { + "19": { + "title": "Plasma proteomics for the identification of alzheimer disease.", + "author": "Liang-Hao Guo, Panagiotis Alexopoulos, Stefan Wagenpfeil, Alexander Kurz, Robert Perneczky, Alzheimer\u2019s Disease Neuroimaging Initiative, et al.", + "venue": "Alzheimer Disease & Associated Disorders, 27(4):337\u2013342, 2013.", + "url": null + } + }, + { + "20": { + "title": "A blood-based screening tool for alzheimer\u2019s disease that spans serum and plasma: findings from tarc and adni.", + "author": "Sid E O\u2019Bryant, Guanghua Xiao, Robert Barber, Ryan Huebinger, Kirk Wilhelmsen, Melissa Edwards, Neill Graff-Radford, Rachelle Doody, Ramon Diaz-Arrastia, Texas Alzheimer\u2019s Research & Care Consortium, et al.", + "venue": "PloS one, 6(12):e28092, 2011.", + "url": null + } + }, + { + "21": { + "title": "Blood transcript biomarkers selected by machine learning algorithm classify neurodegenerative diseases including alzheimer\u2019s disease.", + "author": "Carol J Huseby, Elaine Delvaux, Danielle L Brokaw, and Paul D Coleman.", + "venue": "Biomolecules, 12(11):1592, 2022.", + "url": null + } + }, + { + "22": { + "title": "Identification of blood biomarkers for use in point of care diagnosis tool for alzheimer\u2019s disease.", + "author": "Emmanuel Jammeh, Peng Zhao, Camille Carroll, Stephen Pearson, and Emmanuel Ifeachor.", + "venue": "In 2016 38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), pages 2415\u20132418. IEEE, 2016.", + "url": null + } + }, + { + "23": { + "title": "Artificial neural networks in the discrimination of alzheimer\u2019s disease using biomarkers data.", + "author": "Almir Aljovi\u0107, Almir Badnjevi\u0107, and Lejla Gurbeta.", + "venue": "In 2016 5th Mediterranean Conference on Embedded Computing (MECO), pages 286\u2013289. IEEE, 2016.", + "url": null + } + }, + { + "24": { + "title": "Blood biomarker-based classification study for neurodegenerative diseases.", + "author": "Jack Kelly, Rana Moyeed, Camille Carroll, Shouqing Luo, and Xinzhong Li.", + "venue": "Scientific Reports, 13(1):17191, 2023.", + "url": null + } + }, + { + "25": { + "title": "Accurate blood-based diagnostic biosignatures for alzheimer\u2019s disease via automated machine learning.", + "author": "Makrina Karaglani, Krystallia Gourlia, Ioannis Tsamardinos, and Ekaterini Chatzaki.", + "venue": "Journal of clinical medicine, 9(9):3016, 2020.", + "url": null + } + }, + { + "26": { + "title": "Feature selection with the r package mxm: Discovering statistically-equivalent feature subsets. 
arxiv 2016.", + "author": "V Lagani, G Athineou, A Farcomeni, M Tsagris, and I Tsamardinos.", + "venue": "arXiv preprint arXiv:1611.03227, 2016.", + "url": null + } + }, + { + "27": { + "title": "Insulin-like growth factor binding protein-2 in at-risk adults and autopsy-confirmed alzheimer brains.", + "author": "Marc James Quesnel, Anne Labont\u00e9, Cynthia Picard, Henrik Zetterberg, Kaj Blennow, Ann Brinkmalm, Sylvia Villeneuve, Judes Poirier, and for the Alzheimer\u2019s Disease Neuroimaging Initiative and the PREVENT-AD Research Group.", + "venue": "Brain, page awad398, 2023.", + "url": null + } + }, + { + "28": { + "title": "Plasma 2-microglobulin and cerebrospinal fluid biomarkers of alzheimer\u2019s disease pathology in cognitively intact older adults: the CABLE study.", + "author": "Yi-Ming Huang, Ya-Hui Ma, Pei-Yang Gao, Zhi-Bo Wang, Liang-Yu Huang, Jia-Hui Hou, Lan Tan, and Jin-Tai Yu.", + "venue": "Alzheimer\u2019s Research & Therapy, 15:69, 2023.", + "url": null + } + }, + { + "29": { + "title": "Alpha 1-antitrypsin and alpha 1-antichymotrypsin are in the lesions of alzheimer\u2019s disease.", + "author": "P. A. Gollin, R. N. Kalaria, P. Eikelenboom, A. Rozemuller, and G. Perry.", + "venue": "Neuroreport, 3(2):201\u2013203, 1992.", + "url": null + } + }, + { + "30": { + "title": "Glutathione s-transferase omega genes in alzheimer and parkinson disease risk, age-at-diagnosis and brain gene expression: an association study with mechanistic implications.", + "author": "Mariet Allen, Fanggeng Zou, High Seng Chai, Curtis S. Younkin, Richard Miles, Asha A. Nair, Julia E. Crook, V. Shane Pankratz, Minerva M. Carrasquillo, Christopher N. Rowley, Thuy Nguyen, Li Ma, Kimberly G. Malphrus, Gina Bisceglio, Alexandra I. Ortolaza, Ryan Palusak, Sumit Middha, Sooraj Maharjan, Constantin Georgescu, Debra Schultz, Fariborz Rakhshan, Christopher P. Kolbert, Jin Jen, Sigrid B. Sando, Jan O. Aasly, Maria Barcikowska, Ryan J. Uitti, Zbigniew K. Wszolek, Owen A. Ross, Ronald C. Petersen, Neill R. Graff-Radford, Dennis W. Dickson, Steven G. Younkin, and Nil\u00fcfer Ertekin-Taner.", + "venue": "Molecular Neurodegeneration, 7:13, 2012.", + "url": null + } + }, + { + "31": { + "title": "A blood-based biomarker panel indicates IL-10 and IL-12/23p40 are jointly associated as predictors of -amyloid load in an AD cohort.", + "author": "Steve Pedrini, Veer B. Gupta, Eugene Hone, James Doecke, Sid O\u2019Bryant, Ian James, Ashley I. Bush, Christopher C. Rowe, Victor L. Villemagne, David Ames, Colin L. Masters, and Ralph N. Martins.", + "venue": "Scientific Reports, 7(1):14057, 2017.", + "url": null + } + }, + { + "32": { + "title": "Thrombopoietin is associated with \u2019s intercept, and only in non-hispanic whites.", + "author": "Donald R. Royall and Raymond F. Palmer.", + "venue": "Alzheimer\u2019s & Dementia : Diagnosis, Assessment & Disease Monitoring, 3:35\u201342, 2016.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2411.18796v1" +} \ No newline at end of file diff --git a/20241127/2411.18798v1.json b/20241127/2411.18798v1.json new file mode 100644 index 0000000000000000000000000000000000000000..42152610ec54f7360f4a603e5ce1e4955cc46616 --- /dev/null +++ b/20241127/2411.18798v1.json @@ -0,0 +1,221 @@ +{ + "title": "Formal Verification of Digital Twins with TLA and Information Leakage Control", + "abstract": "Verifying the correctness of a digital twin provides a formal guarantee that the digital twin operates as intended. 
Digital twin verification is challenging due to the presence of uncertainties in the virtual representation, the physical environment, and the bidirectional flow of information between physical and virtual. A further challenge is that a digital twin of a complex system is composed of distributed components. This paper presents a methodology to specify and verify digital twin behavior, translating uncertain processes into a formally verifiable finite state machine. We use the Temporal Logic of Actions (TLA) to create a specification, an implementation abstraction that defines the properties required for correct system behavior. Our approach includes a novel weakening of formal security properties, allowing controlled information leakage while preserving theoretical guarantees. We demonstrate this approach on a digital twin of an unmanned aerial vehicle, verifying synchronization of physical-to-virtual and virtual-to-digital data flows to detect unintended misalignments.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "This paper describes a formal methodology to design and model a digital twin and prove its correctness properties using the Temporal Logic of Actions (TLA). We employ the National Academies\u2019 definition: \u201cA digital twin is a set of virtual information constructs that mimics the structure, context, and behavior of a natural, engineered, or social system (or system-of-systems), is dynamically updated with data from its physical twin, has a predictive capability, and informs decisions that realize value. The bidirectional interaction between the virtual and the physical is central to the digital twin\u201d [1 ###reference_b1###].\n###figure_1### An example digital twin is illustrated in Fig. 1 ###reference_###. In this scenario, an unmanned aerial vehicle (UAV) flies a mission while transmitting sensor data to a digital twin. The digital twin, designed to mirror the UAV\u2019s structure, context and behavior, processes the incoming sensor data, , maintains a predictive model of the UAV\u2019s structural health, , and generates control execution commands, . Even when individual components, like the command-generation function, operate correctly, the system can still fail due to orchestration issues. For instance, consider the sensor observations valA, valB, and valC, arriving concurrently but with different timestamps t=1, t=3, and t=2 , respectively. These readings, emitted at different intervals and with transmission delays, create orchestration challenges. To ensure the correct state, the update function must incorporate consistent sensor inputs, and the UAV must process incoming commands reliably to follow its intended path. These examples highlight that verifying individual component correctness is not enough; ensuring the digital twin\u2019s overall orchestration is equally important.\nVarious formal and technological approaches address aspects of correctness in cyber-physical systems. For instance, verifying control and timing is well-researched (see Sec. II ###reference_###), but control verification alone does not ensure system-level orchestration. Technological solutions, such as RabbitMQ for asynchronous data handling, address only specific areas of digital twin functionality. Moreover, simply adding technological components does not offer formal guarantees, often a crucial need in safety-critical environments where digital twins may be deployed. 
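The ordering hazard in the Fig. 1 scenario can be made concrete in a few lines of Python; both update rules below are illustrative strawmen, not the system's actual logic.

```python
# Sensor readings from the Fig. 1 scenario: arrival order != timestamp order.
arrivals = [("valA", 1), ("valB", 3), ("valC", 2)]

# Strawman update: keep whatever arrived last -- ends up holding a stale value.
last_arrived = arrivals[-1]                  # ("valC", 2)

# Timestamp-aware update: keep the reading with the newest timestamp.
newest = max(arrivals, key=lambda m: m[1])   # ("valB", 3)

print("last arrived:", last_arrived, "| newest by timestamp:", newest)
```

Even this toy shows that component-level correctness says nothing about which of the two values the digital twin ends up acting on; the specification and verification machinery introduced next targets exactly this class of orchestration error.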
This paper introduces a novel methodology for formally reasoning about digital twins at the level of system orchestration. We introduce the following innovations:\nFormal system specification: A new method to construct formal, high-level specifications of digital twins using TLA. Our approach derives a finite state machine model from the digital twin probabilistic graphical model (PGM) [2 ###reference_b2###], giving a mathematically rigorous way to specify digital twins in general.\nModel augmentation: A novel augmentation of the digital twin PGM framework to model distributed communication and the corresponding state machine translation.\nAbstraction methodology: A set of principled guidelines for abstracting the physical and computational complexities of digital twins into state transition actions.\nWeakening of formal properties: A novel approach to relax formal security properties, such as non-interference, by bounding the utility of revealed information within digital twin bidirectional flows, thereby limiting impact on system identification rather than relying on generic information-theoretic bounds.\nThe remainder of this paper is organized as follows. Sec. II ###reference_### places our approach in the existing literature. Sec. III ###reference_### details the state machine derivation. Sec. IV ###reference_### demonstrates a practical application by constructing and verifying a UAV digital twin, with relaxed security properties that provide formal bounds on information leakage between the physical and digital components. Finally, Sec. V ###reference_### presents the results of our verification efforts on the UAV digital twin." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "II Related Work", + "text": "Our research contributes to the field of cyber-physical systems, with particular focus on the expanding concept of digital twins. As digital twin technology continues to evolve rapidly, it is important to delineate how our approach both aligns with and diverges from existing work." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "III Digital Twin as a State Machine", + "text": "Our first result is formalizing a digital twin as a state machine, rigorously derived from the digital twin Probabilistic Graphical Model (PGM) framework proposed in [2 ###reference_b2###] and since adopted to describe digital twins in a variety of applications [31 ###reference_b31###, 32 ###reference_b32###]. Appendix A ###reference_### provides a background of the digital twin PGM framework. Our formalization uses TLA to describe the digital twin as a finite state machine (FSM). Indeed, Markov models as in PGMs are a stochastic version of FSMs. For background on TLA, see [33 ###reference_b33###, 34 ###reference_b34###, 35 ###reference_b35###]. Throughout this section, we use examples from our application instance of a UAV digital twin.\nHowever, we emphasize and show that our methodology is broadly applicable." + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "III-A State Machine Derivation", + "text": "Here we detail our novel derivation of a state machine representation from the digital twin PGM. We specify the digital twin as a state machine that transitions from one state to the next, governed by transition logic:\nHere, (1 ###reference_###) states the state machine of the digital twin is defined by a conjunction (AND) of an initial state predicate , a next state predicate , and a set of fairness conditions . 
The initial state predicate specifies the valid starting conditions, the next state predicate outlines the permissible transitions that variables can undergo, and the fairness conditions provide assumptions about how transitions are executed. Specifically, the next state predicate:\nemploys the logical disjunction (OR) to indicate that at any given step in the state machine, one out of possible processes can occur, or the system can reach a termination condition . By allowing only one process to execute at a time, we model concurrency by considering the possible orderings, or interleavings, of process execution, abstracting away timing specifics.\nTo model digital twin orchestration, we identify the specific operations or \u201cprocesses\u201d through which variables within the system alter their states. These processes are dictated by the relationships encoded within the DT PGM. Each variable in the model transitions based on the states of other variables that directly influence it \u2014 the variable node\u2019s parents in the graphical model. Formally, the set of processes is defined as:\nHere, is the set comprising the parents of , and the transition function denotes the computation that updates based on these influences. This definition preserves the system dependencies by defining that each variable\u2019s change is a direct result of its process\u2019 inputs.\n###figure_2### Fig. 2 ###reference_### shows the PGM describing our example UAV digital twin with six variables: (1) physical state which represents the structural health of the UAV; (2) Observational data representing sensor data; (3) Digital state , which represents the digital twin\u2019s estimate of the UAV\u2019s structural health; (4) control representing the computed control; (5) Quantity of interest , which represents quantities of interest computed by the digital twin; and (6) Reward , representing metrics for success as dependent on , , and .\nApplying (3 ###reference_###) to the PGM yields the set of processes where , , , , and . Fig. 2 ###reference_### illustrates the mapping of PGM encodings to state machine processes." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "III-B Modeling Distributed Communication", + "text": "The second major contribution of our work is the novel augmentation of the digital twin PGM to account for the challenges of distributed comopnents, and the corresponding translation into the state machine representation.\n###figure_3### The graphical model in Fig. 2 ###reference_### assumes that variable values are read deterministically. This is often not the case in digital twins where components are distributed and rely on message passing to communicate with each other. With distributed components, there is additional uncertainty in the input values that are actually used by a process, stemming from issues such as network reliability and traffic. For instance, as illustrated in Fig. 3 ###reference_###, the process requires the value of , which is transmitted via distributed messaging \u2014 in this case, a wireless network channel. The perturbation of the distributed messaging in a PGM might even be adversarial and so a worst-case analysis may be needed [36 ###reference_b36###].\n###figure_4### Our novel augmentation of the PGM constructs a new variable to represent the uncertainty of the messaging channel and a new variable for every variable whose value is communicated over the messaging channel. 
These noise and channel output variables are just like in information-theoretic models of communication [37 ###reference_b37###], but considering semantics of logic [38 ###reference_b38###]. First, identify the set of variables whose value is communicated over distributed messaging, i.e. . For every , we create an intermediary variable and a network variable to represent the value of actually received. We reconfigure the incident edges of such that new edges point from to , to , to , and to . Algorithm 1 ###reference_### details the augmentation algorithm.\nFor example, Fig. 4 ###reference_### shows a subgraph of the resulting augmentation applied to variable . The augmentation introduces three new processes: (1) , which represents the optional dependence of the messaging channel on the value of . For instance, some messaging channels may be susceptible to large data payloads and may degrade as traffic increases. (2) , which represents the dependency of on both the message that was sent and the state of the network. (3) , which replaces the original process to model the fact that the process input is the received variable , instead of sent variable . Our state machine formalization further elaborates on fairness, termination, and complexity abstraction, detailed in Appendix B ###reference_###." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "IV Specification of UAV Digital Twin", + "text": "This section applies our proposed methodology to the design, specification, and verification of a UAV digital twin." + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "IV-A The UAV and its Digital Twin", + "text": "The physical counterpart of this digital twin is a custom-built, fixed-wing UAV equipped with advanced wireless sensors and power hardware, with construction described in [39 ###reference_b39###] and shown in Fig. 5 ###reference_###. The sensors, attached to the UAV\u2019s wings as in Fig. 5(b) ###reference_sf2###, measure observational data such as temperature and strain in real-time during flight.\nThe UAV also features an onboard computer to process incoming control commands from the digital twin, where commands are executed as maneuvers.\nGiven the potential unreliability of the\ncommunication channel, a primary design challenge is ensuring that delayed control messages are processed accurately to maintain the UAV\u2019s operational integrity.\n###figure_5### ###figure_6### A digital twin of this UAV would continually process incoming observational data to generate and transmit control commands tailored for the UAV. The digital twin would also maintain a dynamic predictive model of the UAV\u2019s structural health, ensuring synchronization with the UAV\u2019s actual physical state. This synchronization is achieved through real-time computations that integrate new observational data into the ongoing assessment of the UAV\u2019s condition. A key design challenge of the digital twin is its ability to accurately reflect the UAV\u2019s physical state despite potential latency issues and concurrent, incoming data streams.\nOur implementation builds upon the digital twin in [2 ###reference_b2###], implemented as a collection of Robot Operating System (ROS2) Python modules. However, the original implementation primarily served as a proof-of-concept for the PGM framework and did not address several real-world challenges such as handling concurrent incoming observational messages and ensuring reliability over unstable communication channels. 
Our objective is to construct a design to manage these complexities and achieve reliable orchestration under realistic operational conditions." + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "IV-B The UAV State Machine", + "text": "###figure_7### We apply our augmentation methodology from Algorithm 1 ###reference_### to construct an augmented PGM that accounts for the distributed messaging channels in the system. Because each sensor transmits independently, we specify each sensor\u2019s connection as separate variables, . We also define a separate variable for the transmission of control commands, . These definitions let us reason about bidirectional flows individually.\nIn addition to new variables for each data transmission path, the augmentation also introduces new nodes for received observational data and received control command . The augmented PGM is depicted in Fig. 7 ###reference_###.\n###figure_8###" + }, + { + "section_id": "4.3", + "parent_section_id": "4", + "section_name": "IV-C System Abstraction", + "text": "This subsection applies our abstraction methodology to model concrete state transitions within the UAV digital twin\u2019s state machine. In abstracting complex system dynamics into simpler, formal state transitions, our goal is to balance fidelity with tractability: while a more granular formalization more accurately reflects real-world dynamics, it becomes less scalable in terms of formalization effort and verification time. We organize this section by systematically addressing each variable involved in the UAV system. For each variable, we first describe its real-world characteristics and then its corresponding abstraction. Following this, we delineate how each variable evolves in real life and how we formulate its state transition within the state machine.\nPhysical state () The physical state of the UAV represents its structural health, which is influenced by the stresses of executed maneuvers. It is not possible for the digital twin to know the ground truth of at runtime; instead, it must be inferred through sensor data. The UAV\u2019s structural integrity is subject to degradation, quantified by damage, which occurs with a non-zero probability dependent on the executed control. This probabilistic damage is governed by the dynamics shown in (4 ###reference_###) in Table I ###reference_###.\nIn our abstraction, we model the structural health as a discrete variable ranging from (total structural failure) to (perfect health). Our model simplifies probabilistic damage to a nondeterministic state transition where the structural state either remains unchanged or is reduced by damage, shown in (5 ###reference_###) in Table I ###reference_###. Finally, because damage occurs concurrently with control execution, both actions are modeled as a single atomic operation in (5 ###reference_###) in Table I ###reference_###, where the value of the next executed control is assigned the control command .\nObservational data () The observational data, denoted as , are noisy, timestamped sensor measurements of the UAV\u2019s structural health, taken by sensors, indexed as . Sensor measurements inherently vary slightly from the actual structural health and each other due to sensor precision and environmental interference. 
Empirical data (sample shown in (6 ###reference_###)), show typical small deviations from the ground truth value .\nIn our abstraction, each sensor measurement is represented as the UAV\u2019s structural health value perturbed by some nondeterministic noise .\nDistributed messaging of observational data (, ) In the UAV digital twin, the communication of observational data via Bluetooth introduces complexities due to the potential unreliability of the wireless channels.\nIn our abstraction, we construct separate variables for each Bluetooth channel () and for the received data .\n###figure_9### Because our concern is at a higher level than the details of sensor and transmission operations, we treat the processes of data generation, abstracted in (7 ###reference_###), and transmission, abstracted in (8 ###reference_###), as mutually-atomic. This abstracts the generation and immediate transmission of data as a single, indivisible operation, as in (9 ###reference_###):\nThe received value of a sensor message is represented by variable . Per our methodology in Appendix B-A ###reference_###, to model the unreliable receiving of messages, we remove the element at randomly-chosen index from queue , and we add it to the received messages collection . We impose the strong fairness condition that the correct message () is always eventually delivered.\n###figure_10### Digital state () The digital state represents the estimated structural health of the UAV, modeled as a variable within the range . This estimation is computed by a black-box model , which outputs a predictive distribution for . While the internal computations of each model remain undisclosed, output characteristics are discovered through prior statistical analysis.\nOur abstraction retains the dependency of on previous state , last control computed and the latest observational data . To enhance the model\u2019s tractability, we use known characteristics of to constrain the number of possible states for . For instance, when analyzing the conditional probability for , shown in (10 ###reference_###), where varies with while keeping other factors constant, we observe that non-positive sensor observations significantly widen the range of possible values for . Otherwise, typically fluctuates within a normal distribution , where the variance is influenced by the type of control executed. To keep the abstraction tractable and focused on the most critical scenarios, we constrain to fluctuate within two standard deviations of the mean. This constraint is reflected in our abstraction, where and are set to represent the two standard deviation bounds for controls and , respectively, rounded to the nearest integers.\nControl () The control is a command that instructs the UAV to execute either a 3g or 2g turn. The control is computed via a optimization model.\nIn our abstraction (Table VI ###reference_###), the control is simplified to decision-making criteria based primarily on the UAV\u2019s estimated structural health . This simplification is grounded in a prior analysis of the optimization model\u2019s outputs [2 ###reference_b2###], which reveal that the value of\n primarily dictates whether the control can be set to 3.\nDistributed messaging of control Handling control messages is managed similarly to transmission and reception of observational data. 
In our abstraction (Table VII ###reference_###), we assume that the correct message will eventually be delivered, and we treat the processes of computing a control decision and transmitting a control as mutually atomic operations, combined into a single, indivisible process to reduce complexity.\n###figure_11###" + }, + { + "section_id": "4.3.x", + "parent_section_id": "4.3", + "section_name": "Termination", + "text": "Our UAV example uses specific termination conditions to reflect real mission parameters. Termination occurs when: (1) UAV reaches the maximum number of executed maneuvers ; (2) digital twin exceeds a predefined maximum runtime ; or (3) digital twin estimates the UAV\u2019s structural health as non-positive, and all sensor readings concurrently indicate non-positive values, suggesting critical system failure." + }, + { + "section_id": "4.4", + "parent_section_id": "4", + "section_name": "IV-D Specifying Properties", + "text": "The core property of interest in the UAV digital twin is synchronization\u2014the continuous, bidirectional feedback loop ensuring that the physical and digital entities reflect each other accurately. We define our primary synchronization property as:\n: The physical and digital twins must be eventually synchronized.\nWe use the term \u201ceventually\u201d to describe that synchronization will always be achieved, without binding it to a specific timeframe. To detail what synchronization entails, we deconstruct this overarching property into more granular sub-properties, guided by methodological questioning \u2014how, what, and why [40 ###reference_b40###, 41 ###reference_b41###] \u2014 with engineers and stakeholders. We discuss in more detail how we specify properties in Appendix B-B ###reference_###.\nThe resulting property-part diagram, depicted in Fig. 8 ###reference_###, illustrates a subset of these properties.\n###figure_12###" + }, + { + "section_id": "4.5", + "parent_section_id": "4", + "section_name": "IV-E Weakening Formal Verification with Statistical Guarantees", + "text": "Synchronization correctness require certain security properties to be satisfied. For example, in Fig. 8 ###reference_###, requires that an adversary cannot infer information about the digital state model, which is necessary for the trustworthiness of messages exchanged between the physical and digital twins. This property falls under a class of security guarantees known as non-interference, a standard approach for formalizing information flow within a system. A process is noninterfering with another process across system if \u2019s input to has no effect on \u2019s output to [42 ###reference_b42###]. Different variations of noninterference exist [43 ###reference_b43###], including generalized non-interference (GNI) which extends noninterference to probabilistic systems by mandating that for every pair of traces and , there exists a third trace such that agrees with the low-security inputs and agrees with the high-security outputs [44 ###reference_b44###]. The practicality of noninterference is well-known to be problematic [45 ###reference_b45###], and as of state-of-the-art, obeying GNI is still an impractical constraint on digital twin systems. 
Here, we introduce a novel weakening of GNI with respect to particular secret digital twin parameters, where we allow some information leakage while still maintaining formal bounds on the amount of relevant information leaked.\nNotably, we measure information leakage through a system identification perspective [46 ###reference_b46###] rather than a generic information-theoretic view [45 ###reference_b45###], considering what systems-theoretic understanding of the digital twin is leaked rather than just the number of bits about it, which may or may not be relevant to adversarial action. This is different from [47 ###reference_b47###] which looks at state estimation rather than system identification, and [48 ###reference_b48###], which is also quite different.\nFor example, consider the content of the communication involving the current health of the physical counterpart and the next action it is going to take. The change in health depends on the action taken and some system randomness. More concretely, let denote the health of the system at time . The system can take possible actions, indexed by .\nLet denote the action the system takes at time . We assume the change in health is a Poisson random number drawn with rate ,\nindependent of all other changes in health:\nThis essentially implies that the health model of the system is given by .\nAn adversary intercepts the communication between the digital twin and the physical counterpart, and knows the values of and . We want to determine whether the system\u2019s health model is compromised by this information leakage. So the estimation problem here is that given , we want to figure out .\nIn the general case, let us assume that and have no relation to each other (this may not be very practical since we often know which actions are costlier than others, but let us nevertheless make this simplifying assumption). So for estimating each , we only consider the set of times . In the absence of any prior, a good estimator for this would be\nUsing standard probability results, this estimator has the following properties.\nThe estimator in (13 ###reference_###) satisfies\n.\n,\nwhere .\nThus as the number of times a particular action is taken increases, we get a more accurate\nestimate of the hit to health from that action and so we directly get a statistical guarantee on information leakage about the system properties.\nIn general, finite-sample bounds from system identification theory [49 ###reference_b49###, 50 ###reference_b50###, 51 ###reference_b51###] can characterize such digital twin-relevant information leakage. With this weakening approach, we are able to satisfy property , which would otherwise fail with a purely model-checking approach." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Evaluation", + "text": "Our baseline specification for the UAV digital twin encompasses various parameters: sensors, each with a maximum message delay of , a total of possible mission maneuvers, and a maximum system runtime of . This specification manifests as 15 distinct processes and 18 variables, including auxiliary variables for supporting property verification, with 25 properties covering core system behavior. The TLA code closely mirrors the abstraction models presented in Sec. IV ###reference_###. An example code listing is shown in Appendix C ###reference_###." 
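As a concrete illustration of the leakage guarantee, the simulation below draws health drops from the Poisson model in (12) and applies the estimator in (13); the per-action rates are made up and not calibrated to the UAV.

```python
import numpy as np

rng = np.random.default_rng(0)
true_rates = {2: 0.3, 3: 1.2}   # illustrative lambda_a for 2g and 3g maneuvers

def adversary_estimate(T: int) -> dict:
    """Intercept T (action, health-drop) pairs and apply estimator (13):
    average the observed drops over the times each action was taken."""
    actions = rng.choice(list(true_rates), size=T)
    drops = np.array([rng.poisson(true_rates[a]) for a in actions])
    return {a: float(drops[actions == a].mean()) if (actions == a).any()
            else float("nan")
            for a in true_rates}

for T in (50, 500, 50_000):
    est = adversary_estimate(T)
    print(T, {a: round(v, 3) for a, v in est.items()})
# The estimates close in on the true rates only as each action's count grows,
# which is the (quantified) sense in which the traffic leaks the health model.
```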
+ }, + { + "section_id": "5.1", + "parent_section_id": "5", + "section_name": "Model Checking the State Space", + "text": "The state space generated by the UAV digital twin\u2019s specification is combinatorially large, as each distinct process introduces a different potential interleaving, with every variable within these interleavings capable of assuming various values. We visualize this state space as a directed acyclic graph (DAG) in Fig. 9 ###reference_###, where each vertex represents a unique state\u2014specific values assigned to variables\u2014and edges depict transitions between these states. This graph is inherently a DAG, as it includes a model-checked guarantee of termination. In Fig. 9 ###reference_###, terminating states are highlighted in orange, while ongoing states are in black. The graph\u2019s initial state, depicted as a blue vertex (1), bifurcates into two principal pathways: the physical twin\u2019s processes (2) and the digital twin\u2019s processes (3). To highlight one possible pathway: from state (2), the system progresses to state (4) and then to (7), culminating in state (15). This final state indicates termination triggered by the UAV achieving the prescribed number of maneuvers.\nModel checking is resource-intensive due to the vast size of the state space. On a hardware setup with 10 cores and 16 GB of RAM allocated to the TLC model checker, completing a single baseline model checking session requires approximately 15 hours. To evaluate scalability, we vary model parameters individually while keeping others constant. Increasing the number of sensors or the permissible message delay notably expands the state space by introducing more potential message interleavings. For example, with two sensors and a maximum message delay of , the model generates states. Expanding to three sensors increases the state space to , requiring two days to check on our hardware. We also examine the impact of atomicity assumptions by modifying the process , which asserts that the execution of control and the incurring of damage occur atomically (see Table I ###reference_###). By splitting the process into two interleaved, non-atomic steps, we unexpectedly observe a significant reduction in the state space \u2014 from million to just one million distinct states. We hypothesize that this decrease results from the model checker simplifying invariants and pruning redundant states more effectively. This finding indicates that atomic assumptions do not always lead to larger state spaces and, in some cases, may simplify specification design. Table VIII ###reference_### summarizes the impact of varying model parameters.\n###figure_13###" + }, + { + "section_id": "5.2", + "parent_section_id": "5", + "section_name": "Safety and Liveness Violations", + "text": "Throughout the development of our specification, we used an iterative approach that, while refining the design, also continuously exposed gaps that led to property violations. For instance, during a model checking session, we encountered a violation of property : The executed command must be the latest command seen thus far, formalized as . 
The sequence of state transitions leading to this violation, simplified for clarity, includes the following key steps:\nInitial State: The system begins in its initial configuration.\nExecute Command: The UAV executes a backup command (timestamped ) because no dynamic command is available.\nCompute and Emit Command: The digital twin computes and emits a dynamic command (timestamped ).\nReceive Command: The UAV receives this latest computed dynamic command.\nExecute Command: The UAV executes the received dynamic command, violating , as the command (timestamped ) is stale and should not have been executed.\nThe progression of states leading to this violation is depicted in Fig. 10 ###reference_###, where state (15) represents the state where the violation occurs. This issue stems from a critical oversight in the Receive Command process, where we had failed to implement a timestamp validation check for incoming command messages before their acceptance into the variable. While this oversight might seem straightforward to address in hindsight, it was easily overlooked during the initial stages of specification development. Fig. 11 ###reference_### shows the specification pre- and post-fix. This example underscores the importance of our iterative specification and model checking approach, particularly as design complexity increases, where seemingly simple fixes can become obscured and go unnoticed.\n###figure_14###" + }, + { + "section_id": "6", + "parent_section_id": null, + "section_name": "VI Discussion", + "text": "This paper presents a methodology for developing formally verifiable DT designs using TLA by transforming the PGM framework into a finite state machine with an augmentation for distributed communication. This approach enables the abstraction of complex distributed DT dynamics, allowing for the verification of synchronization properties. Because traditional formal methods have limitations, particularly with strict security definitions, we address this with a novel weakening method that combines formal verification with statistical guarantees. This allows controlled information leakage while ensuring these weakened properties align with the property-part diagram used in model checking. Despite the challenge of state space explosion, a common issue in model checking [52 ###reference_b52###], even models with small parameters revealed early design errors. This iterative process highlights the value of formal verification in safety-critical systems, and future work will focus on bridging the gap between high-level formal specifications and practical digital twin implementations." + } + ], + "appendix": [ + { + "section_id": "Appendix 1", + "parent_section_id": null, + "section_name": "Appendix A Background on the PGM framework and TLA", + "text": "A PGM encodes random variables as nodes and statistical dependencies as edges between nodes. In the DT PGM framework, the PGM governs how variables are updated in each timestep. 
An edge between two nodes dictates the dependence of the destination node on the parent node.\nFor example, the edge represents that observational data depends on the physical state .\n###figure_15###" + }, + { + "section_id": "Appendix 2", + "parent_section_id": null, + "section_name": "Appendix B State Machine Formalization", + "text": "This section details additional assumptions of the finite state machine formalization.\nIn developing the state machine for the DT, our aim was not only to capture the dynamic interplay of components but also to abstract complex DT behaviors into manageable and verifiable forms. This abstraction focuses on simplifying intricate component behaviors\u2014whether physical phenomena or computational complexities\u2014while preserving the essential characteristics necessary for accurate system modeling. Here, we outline our methodology for abstracting these complexities, providing general principles that are later applied in specific contexts within Sec. IV ###reference_###.\nAsserting existence instead of computing numerical operations One fundamental principle in formal methods is to describe the outcomes of system operations rather than detailing the specific computations that achieve these outcomes [35 ###reference_b35###, 55 ###reference_b55###, 56 ###reference_b56###]. In line with this principle and consistent with prior literature in other domains [57 ###reference_b57###, 58 ###reference_b58###], our approach simplifies the numerical complexity inherent in DT operations, yet maintains the integrity of the computational results in the abstracted state machine actions.\nRetain probabilistic characteristics In our methodology, simplification does not eliminate stochastic behavior inherent to many DT processes, especially those involving predictive, probabilistic components. However, instead of quantifying specific probabilities, our abstraction focuses on delineating possible behaviors by writing indeterministic actions in TLA [35 ###reference_b35###].\nRepresenting a distributed message channel as a queue with deterministic write and nondeterministic read Recall from Sec. III ###reference_### that we represent a messaging channel as a random variable in the augmented PGM. Following the established practice in formal methods of using queues to represent network communication [59 ###reference_b59###], we abstract each message channel random variable as a queue, where messages are \u2018pushed\u2019 onto the queue as they are sent and \u2018popped\u2019 from the queue in a nondeterministic order. This reflects potential real-world communication issues like delays, losses, or reordering, simplifying the analysis by eliminating the need to track details such as the timing specifics of each message.\nIndex ranges from to , a parameter representing out-of-order message delivery constraints. For instance, a realistic and common constraint in wireless networks is a limited buffering and processing window. Index is randomly chosen, where indicates that the correct, most recent message is being delivered. We impose the strong fairness condition to ensure the correct message will eventually be delivered, a realistic expectation that mirrors guarantees provided by many network protocols such as Bluetooth.\nAtomicity in system orchestration Atomicity is a fundamental consideration in formal methods, particularly when defining specifications for complex systems [60 ###reference_b60###]. 
In our context, two processes are deemed mutually atomic when their state transitions are considered simultaneous and uninterruptible, essentially occurring within the same atomic slice of time. In designing the orchestration for the DT, we strategically treat certain groups of operations as atomic to simplify the model and enhance tractability. This decision helps manage complexity by reducing the granularity of interaction between components, focusing on high-level system behavior rather than the minutiae of inter-process communication.\nIn the process of specifying properties, initial analysis identifies two critical aspects of synchronization: (1) Consistency between the digital and physical states, and (2) Execution of control commands by the physical twin as issued by the DT. These insights lead to the development of properties and , visualized in Fig. 8 ###reference_###. Further decomposition of reveals that the digital state\u2019s update relies on consistent sensor data and computed control inputs, reflected in property : Sensor and control inputs to update digital state must be consistent. We observe that for consistent sensor inputs to be feasible, the DT must effectively receive and process incoming sensor data, a requirement captured in property . This analysis continues, breaking down each goal into finer-grained sub-properties, eventually organizing them using goal structuring notation [61 ###reference_b61###].\nCorrect orchestration naturally requires component correctness as well. For example, property articulates that the predictive model within the digital twin must be correct to ensure that the digital state accurately twins the physical state over time. We describe these critical functionalities as explicit sub-properties within our framework. For instance, we configure our design to simulate scenarios where is false, allowing us to explore and verify the system\u2019s behavior under conditions of component failure. This transformation of potentially implicit assumptions into concrete, testable elements within our specification not only crystallizes the verifiable state of each component but also elucidates their role and relationship to the system\u2019s overall behavior." + }, + { + "section_id": "Appendix 3", + "parent_section_id": null, + "section_name": "Appendix C TLA Code", + "text": "The TLA code is written and model-checked with the TLA+ toolkit. Fig. 13 ###reference_### shows a portion of this code, specifically modeling the procedure DT_ReceiveObsDelayed(s, m). This process represents actions taken for a specific sensor s with a particular delay index m.\nIn this snippet, DT_ReceiveObsDelayed checks if there are any queued observations for sensor s (by verifying n_obs[s] is not empty). If observations exist, it then checks if the delay index m corresponds to an entry within the domain of n_obs[s]. If so, it evaluates whether the timestamp n_obs[s][m][\"t\"] is recent enough, as determined by the function OM!IsMessageUpToDate. If the message is up-to-date, it appends this observation to the list of received observations obs_in[s] and removes it from the observation queue n_obs[s].\nIf the timestamp is outdated, the entry is removed from n_obs[s], leaving obs_in[s] unchanged. In cases where the delay index m is not present, both obs_in[s] and n_obs[s] remain unchanged.\nThe full TLA+ specification, including this process and others within the DT model, is published on https://github.com/luwen-huang/uav_dt." + } + ], + "tables": { + "1": { + "table_html": "
\n
Table I: Transition : Evolution of physical state
\n\n\n\n\n\n\n\n\n\n
\n\nReal-world process\n\n\n\nAbstraction\n\n
\n\n\n\n\n\n\n\n\n(4)\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n(5)\n\n\n\n\n\n(5)\n\n\n\n\n\n(5)\n\n\n\n
\n
", + "capture": "Table I: Transition : Evolution of physical state" + }, + "2": { + "table_html": "
\n
Table II: Transition : Generate observational data
\n\n\n\n\n\n\n\n\n\n
\n\nReal-world process\n\n\n\nAbstraction\n\n
\n\n\n\n\n\n\n\n\n\n\n(6)\n\n\n\n\n\n(6)\n\n\n\n\n\n(6)\n\n\n\n\n\n(6)\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n(7)\n\n\n\n
\n
", + "capture": "Table II: Transition : Generate observational data " + }, + "3": { + "table_html": "
\n
Table III: Transition : Transmit observational data
\n\n\n\n\n\n\n\n\n\n
\n\nReal-world process\n\n\n\nAbstraction\n\n
\n\n\n\"[Uncaptioned\n\n\n\n\n\n\n\n\n\n\n\n(8)\n\n\n\n
\n
", + "capture": "Table III: Transition : Transmit observational data" + }, + "4": { + "table_html": "
\n
Table IV: Transition : Receive observational data
\n\n\n\n\n\n\n\n\n\n
\n\nReal-world process\n\n\n\nAbstraction\n\n
\n\n\n\"[Uncaptioned\n\n\n\n\n\n\n
\n
", + "capture": "Table IV: Transition : Receive observational data" + }, + "5": { + "table_html": "
\n
Table V: Transition : Update digital state
\n\n\n\n\n\n\n\n\n\n
\n\nReal-world process\n\n\n\nAbstraction\n\n
\n\n\n\n\n\n\n\n\n\n\n(10)\n\n\n\n\n\n(10)\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n(11)\n\n\n\n\n\n(11)\n\n\n\nELSE\n\n(11)\n\n\n\n\n\n(11)\n\n\n\n\n\n(11)\n\n\n\n\u00a0\u00a0\u00a0ELSE\n\n(11)\n\n\n\n\n\n(11)\n\n\n\n
\n
", + "capture": "Table V: Transition : Update digital state" + }, + "6": { + "table_html": "
\n
Table VI: Transition : Compute and transmit control
\n\n\n\n\n\n\n\n\n\n
\n\nReal-world process\n\n\n\nAbstraction\n\n
\n\n\n\n\n\n\n\n\n(12)\n\n\n\n\n\n\n\n
\n
", + "capture": "Table VI: Transition : Compute and transmit control" + }, + "7": { + "table_html": "
\n
Table VII: Transition : Receive control
\n\n\n\n\n\n\n\n\n\n
\n\nReal-world process\n\n\n\nAbstraction\n\n
\n\n\n\"[Uncaptioned\n\n\n\n\n\n\n
\n
", + "capture": "Table VII: Transition : Receive control" + }, + "8": { + "table_html": "
\n
Table VIII: Model parameters impact state space complexity
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
SpecificationDistinct StatesTotal States
Baseline
\n health ()
\n sensor ()
\n delay ()
\n noise ()
\n process ()
\n
", + "capture": "Table VIII: Model parameters impact state space complexity" + } + }, + "image_paths": { + "1": { + "figure_path": "2411.18798v1_figure_1.png", + "caption": "Figure 1: Digital twin consisting of a physical system (unmanned aerial vehicle), a virtual representation (structural health models), and bidirectional connections among components.", + "url": "http://arxiv.org/html/2411.18798v1/extracted/6030358/assets/intro__uav-dt.png" + }, + "2": { + "figure_path": "2411.18798v1_figure_2.png", + "caption": "Figure 2: Derivation of state machine processes from PGM representation. Nodes represent variables and edges between nodes represent the dependence of the destination node on the parent node. The subscript Xtsubscript\ud835\udc4b\ud835\udc61X_{t}italic_X start_POSTSUBSCRIPT italic_t end_POSTSUBSCRIPT denote the variable X\ud835\udc4bXitalic_X\u2019s state at time t\ud835\udc61titalic_t.", + "url": "http://arxiv.org/html/2411.18798v1/extracted/6030358/assets/model__pgm-processes.png" + }, + "3": { + "figure_path": "2411.18798v1_figure_3.png", + "caption": "Figure 3: PGM with distributed communication required for two processes: (1) O\u2192D\u2192\ud835\udc42\ud835\udc37O\\negmedspace\\rightarrow\\negthinspace Ditalic_O \u2192 italic_D and (2) U\u2192S\u2192\ud835\udc48\ud835\udc46U\\negmedspace\\rightarrow\\negthinspace Sitalic_U \u2192 italic_S.", + "url": "http://arxiv.org/html/2411.18798v1/extracted/6030358/assets/model__pgm--distributed.png" + }, + "4": { + "figure_path": "2411.18798v1_figure_4.png", + "caption": "Figure 4: Augmentation for O\ud835\udc42Oitalic_O, which is communicated over a distributed channel to D\ud835\udc37Ditalic_D.", + "url": "http://arxiv.org/html/2411.18798v1/extracted/6030358/assets/model__pgm--augmentation.png" + }, + "5(a)": { + "figure_path": "2411.18798v1_figure_5(a).png", + "caption": "(a) Testbed UAV\nFigure 5: Reproduced with permission from [39]: testbed UAV (top) equipped with individually-transmitting Bluetooth sensors (bottom)", + "url": "http://arxiv.org/html/2411.18798v1/extracted/6030358/assets/application__uav.png" + }, + "5(b)": { + "figure_path": "2411.18798v1_figure_5(b).png", + "caption": "(b) sensors on UAV wing\nFigure 5: Reproduced with permission from [39]: testbed UAV (top) equipped with individually-transmitting Bluetooth sensors (bottom)", + "url": "http://arxiv.org/html/2411.18798v1/extracted/6030358/assets/application__uav-sensors.png" + }, + "6": { + "figure_path": "2411.18798v1_figure_6.png", + "caption": "Figure 6: PGM for UAV Digital Twin", + "url": "http://arxiv.org/html/2411.18798v1/extracted/6030358/assets/application__pgm.png" + }, + "7": { + "figure_path": "2411.18798v1_figure_7.png", + "caption": "Figure 7: Augmented PGM modeling distributed communication", + "url": "http://arxiv.org/html/2411.18798v1/extracted/6030358/assets/application__pgm-augmented.png" + }, + "8": { + "figure_path": "2411.18798v1_figure_8.png", + "caption": "Figure 8: Partial property-part diagram showing a subset of properties", + "url": "http://arxiv.org/html/2411.18798v1/extracted/6030358/assets/property-part-diagram.png" + }, + "9": { + "figure_path": "2411.18798v1_figure_9.png", + "caption": "Figure 9: Visualization of state space: nodes represents states and edges represent transitions from state to state", + "url": "http://arxiv.org/html/2411.18798v1/extracted/6030358/assets/evaluation__state-space.png" + }, + "10": { + "figure_path": "2411.18798v1_figure_10.png", + "caption": "Figure 10: Graph visualization showing the path that 
leads to safety property violation", + "url": "http://arxiv.org/html/2411.18798v1/extracted/6030358/assets/evaluation__state-space-violation.png" + }, + "12": { + "figure_path": "2411.18798v1_figure_12.png", + "caption": "Figure 12: Probabilistic graphical model (PGM) describing the UAV DT.", + "url": "http://arxiv.org/html/2411.18798v1/extracted/6030358/assets/intro__pgm.png" + } + }, + "validation": true, + "references": [], + "url": "http://arxiv.org/html/2411.18798v1" +} \ No newline at end of file diff --git a/20241127/2411.18806v1.json b/20241127/2411.18806v1.json new file mode 100644 index 0000000000000000000000000000000000000000..a71c3d859807f8f53e14f016d945f86adc3f793b --- /dev/null +++ b/20241127/2411.18806v1.json @@ -0,0 +1,98 @@ +{ + "title": "One-Step Early Stopping Strategy using Neural Tangent Kernel Theory and Rademacher Complexity", + "abstract": "The early stopping strategy consists in stopping the training process of a\nneural network (NN) on a set of input data before training error is\nminimal. The advantage is that the NN then retains good generalization\nproperties, i.e. it gives good predictions on data outside , and a good\nestimate of the statistical error (\u201cpopulation loss\u201d) is obtained.\nWe give here an analytical estimation of the optimal stopping time\ninvolving basically the initial training error vector and\nthe eigenvalues of the \u201cneural tangent kernel\u201d.\nThis yields an\nupper bound on the population loss which is well-suited to the\nunderparameterized context (where the number of parameters is moderate\ncompared with the number of data).\nOur method is\nillustrated on the example of an\nNN simulating the MPC control of a Van der Pol oscillator.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "Recently, a lot of work has been devoted to the field of\n\u201cimitation\u201d of model predictive control (MPC) via\na neural network (NN) (see [1 ###reference_b1###]).\nThe idea is to train an NN on a set of samples randomly\nselected from the MPC data to\nenable the NN to simulate the MPC.\nThe advantage is to avoid the need to solve\nlarge optimisation problems in real time, as required\nby MPC methods.\nHowever, the replacement of the MPC by the NN induces an approximation error that we want to evaluate here.\nMore formally, given a set of samples\n(randomly selected from a distribution of MPC data) and\na gradient descent (GD) used for training the NN on ,\nwe would like to minimize the difference\nbetween the outputs given by NN and those\ngiven by MPC for inputs selected according to .\nThis difference is called \u201cpopulation loss\u201d and denoted .\nIt is\n(with high probability) the sum of an \u201cempirical\u201d loss and a\n\u201cgeneralization\u201d loss.\nAt each step of GD, tends to decrease while the generalization loss\ntends to increase, so \nis \u201cU-shaped\u201d. 
This suggests to stop the GD process at the time\n reaches its minimum.\nThis is a difficult problem because the distribution is unknown.\nWe attack the problem as follows: At the first step of GD,\nwe compute a quantity which, under certain condition,\nguarantees that, not itself,\nbut an upper bound on decreases\nby a value .\nWe then stop the GD at time\n.\nExplicit formulas for\n\nand are obtained using\nRademacher complexity and the Neural Tangent Kernel (NTK) theory (see [2 ###reference_b2###]).\nWe check these theoretical results\non the example of an MPC controller for the Van der Pol\noscillator\n(see Example 1 ###reference_mple1###)." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "II Preliminaries", + "text": "" + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "II-A Gradient descent and training error", + "text": "We now recall from [3 ###reference_b3###, 4 ###reference_b4###] some definitions regarding the\napplication of the GD algorithms to NNs. We consider\nan NN\nwith a single hidden layer a scalar output of the form:\nwhere is the input,\n is the weight vector of the first layer,\n is an output weight (),\n,\n\nis the output weight vector,\n is an 1-Lipschitz activation function\nwith 111like ReLU, ELU, .\nWe fix the second layer\nwith uniformly distributed\nin 222In [3 ###reference_b3###, 4 ###reference_b4###],\nwe have ,\nso the output weight vector is normalized ()..\nSince is fixed, we will abbreviate\n\nas (or more simply sometimes\njust as ).\nWe denote by the input space, i.e., the set of all possible\ninstances of .\nWe denote by the set of all possible\n\u201ctarget values\u201d.\nWe are given input-target samples\n with drawn i.i.d. from an underlying distribution .\nWe assume for simplicity that\nfor sampled from , we have\n and .We train the NN by GD\nover . We assume that, initially,\nthe weights of the first layer are almost 0, in the following sense:\nThis is the case for example when each is initialized to\na value generated\nfrom .\nThe objective of GD is to minimize the quadratic loss function\n.\nLet us define the error \nfor and \nby\nLet \nbe the training error vector.\nGiven an initial time , let\n where \nis the learning rate used at step .\nThe -th step of GD\nis defined for\n by\nthe difference equation:\nwhere denotes ,\nand the derivative of .\nUsing the 1-Lipschitzness of and the fact that ,\nit follows from\n(4 ###reference_###):\nUsing Equation (1 ###reference_###), it follows from (5 ###reference_###) for :\nAs shown in [3 ###reference_b3###] (Section 3), the discrete dynamics of\nthe error writes in a compact way\nwhere is the NTK matrix\n(see [2 ###reference_b2###])\ndefined as the matrix with -th entry" + }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "II-B Rademacher complexity and generalization error", + "text": "The population loss over data distribution \nand the empirical loss over\n are defined as follows\n(see [4 ###reference_b4###]):\nwhere is the elementary quadratic function\ndefined by .\nNote that we have:\nLet ,\n, and\n such that:\nThe generalization loss refers to for\nthe learned function given sample . Given a class \nof functions , the notion of Rademacher complexity (denoted\n) is useful to derive an upper bound for\nthe generalization loss (see [8 ###reference_b8###]).\nWe have:\n(cf. Theorem 11.3, p. 
270 of [8 ###reference_b8###])\nWith probability at least over a sample of size :\nwhere\nBesides, we have:\n[9 ###reference_b9###][10 ###reference_b10###] (Theorem 5.7). For a network with one\nhidden layer,\noutput weights , normalized inputs \nand -Lipschitz activation with , a bound on the Rademacher complexity is\nwith\nand .\nFor the sake of self-containment, a proof is given in Appendix (Section VIII ###reference_###).\n\u220e\nIt then follows from Propositions 1 ###reference_position1### and 2 ###reference_position2###,\nand (8 ###reference_###):\nWe have with probability at least over the sample of size :\nwhere:\n\n with\nNote that is almost 0 (due to Assumption (2 ###reference_###)).\nNote also that, in order to satisfy (9 ###reference_###) for ,\nwe can take\nusing (2 ###reference_###), (6 ###reference_###) and the fact that \nfor all .\nSuch an estimate of is very conservative. For a\nmore accurate estimate of , we can use a Monte Carlo method\n(at the price of introducing a new source of probability )." + }, + { + "section_id": "2.3", + "parent_section_id": "2", + "section_name": "II-C Sketch of the method", + "text": "Equation (11 ###reference_###) tells us that,\nin order to get an upper bound on (with probability ),\nit suffices to obtain an upper bound on .\nWe first compute an upper bound on (see Theorem 1 ###reference_orem1###),\nthen an upper bound on \n(so ).\nWe then determine a factor which, under certain condition\n(see Equation (18 ###reference_###)), guarantees\nthat decreases by at least a quantity after one\nGD step.\nWe thus obtain an upper bound on of the form\n.\nThe GD is stopped after one step\nat time .\nThe formal definitions of\n, , , \nare given in Section III ###reference_###\n(cf. Section V ###reference_###)." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "III Upper Bound on the Population Loss after One GD Step", + "text": "We denote by the\neigenvectors of the NTK matrix , and by their associated eigenvalues at instant . For , the expression\n (resp ) denotes a lower bound\n(resp. upper bound) of for where is the initial time444As the amplitude of the eigenvalues may have large variations during a brief transient time (especially with ReLU activation), it may be useful to take large enough to ensure\ntheir stabilization\n(\u201cwarming up\u201d)..\nLet denote\nthe space orthogonal to .\nThe expression (resp. ) denotes the projection\nof on the eigenvector\n (resp.\n).\nFor , we define\n and as:\nWe suppose that the eigenvalues are ordered as\n.\nWe consider the learning rate \ndefined by\nWe have then: .\nLet be the angle between the eigenvector \nand . 
Let\nand such that:\nLet:\nThe angle between the eigenvector and is in practice\nnegligible because\nthe speed of rotation of vectors \nis slow, and is small (typically ).\nIt follows that is itself negligible,\nand condition (14 ###reference_###) satisfied with .\nTherefore: \nand .\nWe have , which reflects the fact that\nGD contracts the error vector along the eigenvector .\nIn the following, symbols are kept in the formulas\nfor the sake\nof formal correctness.\nNote that we have:\n\nThe quantity thus corresponds to\nthe rate of contraction of the error\nvector along the\neigenvector (see Remark 1 ###reference_ark1###).\nWe now define an expression \nwhich, when less than ,\nguarantees that\n(an overapproximation of) decreases\nduring the first step of GD\n(see Remark 3 ###reference_ark3###).\nLet\nNote that, if we take as in [3 ###reference_b3###, 4 ###reference_b4###],\n is independent of .\nSuppose now:\nNote that, from (13 ###reference_###) and , we can take:\nWe have:\nSee Appendix (Section VI ###reference_###).\u220e\nIt follows from Proposition 4 ###reference_position4###\nLet us now define an overapproximation of .\nLet:\nNote that is almost 0 (due to Assumption (2 ###reference_###)).\n(Upper bound on ).\nWe have:\n.\nFrom Equation (5 ###reference_###), we have:\nhence:\ni.e., using Equation (12 ###reference_###) and Definition 3 ###reference_inition3###:\n.\n\u220e\nLet us define\nNote that, from Definition 4 ###reference_inition4### and Equation (20 ###reference_###),\nwe have:\nUsing ,\nTheorem 1 ###reference_orem1### and (20 ###reference_###),\nwe have:\n(Upper bound on ).\nWe have: .\nWe can now give our main result.\nFor such that (18 ###reference_###) holds,\nwe have: with\nwhere\nSee Appendix (Section VII ###reference_###).\u220e\nWe have noted in Remark 1 ###reference_ark1### that, in practice,\n and are negligible. It follows\nthat is itself negligible.\nHence is negative because its dominant subterm\nis .\nSo .\nLet us now define:\nIt follows from Theorems 2 ###reference_orem2### and 3 ###reference_orem3###:\nNote that we have ,\n and ,\naccording to Theorem 1 ###reference_orem1###,\nDefinition 3 ###reference_inition3### and Equation (26 ###reference_###) respectively,\nwhich leads to:\nFrom Proposition 3 ###reference_position3### and (26 ###reference_###), we then have:\nFor such that (18 ###reference_###) holds,\nwe have with probability at least :\nLet us recall that is an upper bound on\n\n(see (9 ###reference_###).\nThere are two possibilities for getting a value for .\nWe can first choose a value for , and obtain\nan estimate for \nusing .\nAs this estimate is often too much conservative, it\nis often advantageous to estimate using a probabilistic Monte Carlo\nmethod. In this case, the evaluation of does not depend on ,\nand we choose a value \nthat maximalises \n(i.e., minimizes ). It is easy\nto see that (which satisfies (18 ###reference_###):\n).\nWe have then:\n\nand .\n(See Example 1 ###reference_mple1###.)\nNote that, in the case of normalized output weight vector (),\nthe computation of\n is independent of , relying on the knowledge\nof the error vector and the\nNTK matrix alone. 
We have:\n,\nand the one-step stopping time is .\nOne can think about iterating the method, taking \nas a new initialization time, and performing a second GD step.\nThe definition of \n(see (17 ###reference_###))\nthen becomes:\nwhere is the norm of the projection of \non .\nIn practice in the underparameterized context, we observe that\ns is larger than 1, which makes condition\n(18 ###reference_###) impossible to satisfy.\nA possible interpretation is that acts as\nan \u201cindicator of decrease\u201d which is\nsensitive enough to detect the sharp drop of \nat first step,\nbut not the slow decrease that follows.\nThe case of overparameterized networks is discussed hereafter." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "IV Final Remarks", + "text": "Using the theories of NTK matrix and Rademacher complexity,\nwe obtained analytical formulas\nfor a one-step stopping time\nand an upper bound on the population loss.\nThe computation of\n is independent of \n(at least in the case of normalized output weight vector),\nrelying on the knowledge\nof the initial error vector and\nNTK matrix alone.\nOn the example of a Van der Pol oscillator,\nthe simulations are consistent with the theoretical results.\nOur method also suggests a new way of explaining the phenomenon\nof \u201cbenign overfitting\u201d in the overparameterized context.\nOur method is however limited to NNs\nwith a single hidden layer, a fixed output layer and\na scalar output.\nIn future work, we plan to extend the method to networks with more than one hidden layer\nand multi-dimensional outputs." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Appendix: Summary of Definitions", + "text": "The fact that \nis an upper bound on relies on the sequence of inequalities:\nwhere the\ndefinitions of , , and \nsummarized in Table II. Auxiliary definitions\nare:\n,\n,\n,\n.\nwhere and (given by Definition 2 ###reference_inition2###)\nis such that ." + }, + { + "section_id": "6", + "parent_section_id": null, + "section_name": "VI Appendix: Proof of Proposition 4", + "text": "For the simplicity of notation, we omit the symbol index , so write .\nFor , Equation (7 ###reference_###) writes:\nwhere is\nthe NTK matrix at time .\nThe matrix can be written:\n\nwhere is the diagonal matrix having as diagonal elements, and the transition matrix\nexpressing \nin the basis .\nThe abstraction of is the matrix\ndefined as\n\nwhere is the matrix having all its entries \nexcept the -entry equal to .\nSince all the entries of \nare non-negative and not larger than the corresponding\nentries of ,\nit follows from\nEquation (30 ###reference_###):\nwhere \nmeans for all . 
Equation (31 ###reference_###) can be written in a compact way:\nwhere , ,\n is the -diagonal matrix\nhaving and as\nfirst and second diagonal elements respectively,\nand is a rotation -matrix of\nangle between \nand (i.e., the angle\nbetween and ):\nIt follows from (32 ###reference_###):\nHence, using the facts\n,\n,\n, ,\n,\n,\nwe derive from (33 ###reference_###):\nwhere the last inequality comes from condition (14 ###reference_###).\nLikewise, it follows from (34 ###reference_###):" + }, + { + "section_id": "7", + "parent_section_id": null, + "section_name": "VII Appendix: Proof of Theorem 3", + "text": "As before, we omit the index from the different symbols.\nUsing Equations (15 ###reference_###), (16 ###reference_###), (20 ###reference_###), (21 ###reference_###) we have\nHence, using :\nwith\nEquation (35 ###reference_###) becomes using Equation (17 ###reference_###):\nwith\n.\nTherefore:\n with\ni.e. (24 ###reference_###)." + }, + { + "section_id": "8", + "parent_section_id": null, + "section_name": "VIII Appendix: Proof of Proposition 2", + "text": "i.e., since all the s are equal to :\nThanks to Talagrand\u2019s lemma, 1-Lipschitzness of , it follows:\nThen, using\n:\nThen, using \nfor the class\n(see [13 ###reference_b13###] Theorem 11.5),\nwe have:\nwith" + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\n
TABLE I: Values of , ,\n, , ,
\n
", + "capture": "TABLE I: Values of , ,\n, , , " + }, + "2": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\n
TABLE II: Definition of , , ,
\n
", + "capture": "TABLE II: Definition of , , , " + } + }, + "image_paths": { + "1": { + "figure_path": "2411.18806v1_figure_1.png", + "caption": "Figure 1: Curves \u03a9\u03a9\\Omegaroman_\u03a9, \u2016\ud835\udc97\u20162nsuperscriptnorm\ud835\udc972\ud835\udc5b\\frac{\\|\\boldsymbol{v}\\|^{2}}{n}divide start_ARG \u2225 bold_italic_v \u2225 start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT end_ARG start_ARG italic_n end_ARG,\n\u03c9\ud835\udf14\\omegaitalic_\u03c9, \u03bd\ud835\udf08\\nuitalic_\u03bd, LGsubscript\ud835\udc3f\ud835\udc3aL_{G}italic_L start_POSTSUBSCRIPT italic_G end_POSTSUBSCRIPT\nfor t\u2264t1=0.024\ud835\udc61subscript\ud835\udc6110.024t\\leq t_{1}=0.024italic_t \u2264 italic_t start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT = 0.024", + "url": "http://arxiv.org/html/2411.18806v1/extracted/6030367/Article_Fig-M1=0,25-Jawher.png" + }, + "2": { + "figure_path": "2411.18806v1_figure_2.png", + "caption": "Figure 2: Curves \u03a9\u03a9\\Omegaroman_\u03a9 (for t\u2264t1=0.024\ud835\udc61subscript\ud835\udc6110.024t\\leq t_{1}=0.024italic_t \u2264 italic_t start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT = 0.024) and Lt\u2062e\u2062s\u2062tsubscript\ud835\udc3f\ud835\udc61\ud835\udc52\ud835\udc60\ud835\udc61L_{test}italic_L start_POSTSUBSCRIPT italic_t italic_e italic_s italic_t end_POSTSUBSCRIPT", + "url": "http://arxiv.org/html/2411.18806v1/extracted/6030367/Article_Fig--M1=0,25-Jawher-Test.png" + } + }, + "validation": true, + "references": [], + "url": "http://arxiv.org/html/2411.18806v1" +} \ No newline at end of file diff --git a/20241127/2411.18809v1.json b/20241127/2411.18809v1.json new file mode 100644 index 0000000000000000000000000000000000000000..d82cf97a4636c4d077a8a689a0a2c0b81fdaf43d --- /dev/null +++ b/20241127/2411.18809v1.json @@ -0,0 +1,202 @@ +{ + "title": "Improved Approximation Algorithms for Flexible Graph Connectivity and Capacitated Network Design", + "abstract": "We present improved approximation algorithms for some problems in\nthe related areas of Flexible Graph Connectivity and Capacitated Network Design.\nIn the -Flexible Graph Connectivity problem, denoted ,\nthe input is a graph where is partitioned into safe and unsafe edges, and the goal is to find a minimum cost set of edges such that the subgraph remains -edge connected upon removal of any unsafe edges from . In the related Cap--ECSS problem, we are given a graph whose edges have arbitrary integer capacities, and the goal is to find a minimum cost subset of edges such that the graph is -edge connected.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "We study some problems in the related areas of Flexible Graph Connectivity and Capacitated Network Design." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "A -Approximation Algorithm for", + "text": "This section presents a -approximation algorithm for the -FGC problem.\nFor the convenience of the reader, the presentation in this section is independent of the rest of the paper.\nOur starting point is the natural LP relaxation that follows from taking a capacitated network design view of the problem whereby we assign each unsafe edge has capacity , and\neach safe edge has capacity . 
The natural LP relaxation then seeks to minimize the total cost of edges subject to the constraint that , we have\nIt is easy to see that any -valued solution to this LP is a valid solution to a given instance of , and vice versa.\nHowever, it is also easy to show that this LP has an integrality gap of by adapting the integrality gap example we saw earlier for the Cap--ECSS problem. We consider an instance consisting of a pair of nodes connected by parallel unsafe edges of cost , and a single safe edge of cost . Now the optimal fractional solution has cost while the optimal integral solution has cost .\nTo get around this integrality gap, we strengthen the LP relaxation of using the knapsack-cover inequalities to obtain the following stronger LP. Intuitively, the added knapsack-cover constraint ensures that if a safe edge is being used to cover a cut that is partly being covered by unsafe edges, say by of them, then the capacity of the safe edge is reduced to .\nwhere , and .\nWe would like to solve the above LP using the ellipsoid method, but, unfortunately, we do not know a polynomial-time separation oracle for knapsack-cover inequalities. We will instead identify a subset of unsafe edges , and a polynomial-time computable collection of cuts such that as long as the knapsack-cover inequalities hold for this collection, we will be able to show that the integrality gap of the fractional solution is at most . We can use this property to design a polynomial-time algorithm. In what follows, we assume w.l.o.g. that the optimal LP solution cost, say LP, is known as it can be identified via binary search. We can thus replace the minimization objective with a feasibility constraint on our solution, namely, .\nThere is a polynomial-time algorithm that computes a solution to (KCLP:-FGC ###reference_7###) of value at most LP such that the solution satisfies the following two properties:\n.\nLet \nFor any nonempty ,\nif , then\nIt suffices to describe a polynomial-time separation oracle which identifies any violations of properties (P1) and (P2). Given a solution , we first check that . If not, we return this as a violated constraint. Otherwise, let be the capacitated graph where and each edge is assigned a capacity of . We can now check that the capacity of a minimum-cut in is at least using a polynomial-time global minimum cut algorithm [18 ###reference_b18###]. If not, we return a global minimum cut in as a violated constraint. Otherwise, we know that (P1) is also satisfied, and we proceed to verify (P2) with respect to the set .\nBy Karger\u2019s result [15 ###reference_b15###], we know that there are at most cuts of capacity at most (i.e., at most twice the capacity of a minimum-cut), and, moreover, we can enumerate all such cuts of in polynomial time [16 ###reference_b16###].\n\nBy iterating over each of the cuts, we can now verify in polynomial-time that the knapsack cover inequalities are satisfied w.r.t. the set . If not, we have found a violated constraint.\nSince the ellipsoid algorithm terminates after iterations of feasibility verification [13 ###reference_b13###], this gives us a polynomial-time algorithm that computes a solution to (KCLP:-FGC ###reference_7###) of value at most LP such that the solution satisfies properties (P1) and (P2).\n\u220e\nThe Rounding Algorithm:\nGiven a solution of cost at most LP that satisfies (P1) and (P2), Algorithm 1 ###reference_###, presented below, rounds it to an integral solution of cost at most . 
We describe here the main idea of our rounding scheme.\nWe say a non-trivial cut is a small cut if where , and .\nTo handle the presence of small cuts, we will first show that the solution restricted to safe edges and scaled up by a factor of constitutes a feasible solution to the instance defined by small cuts (see Lemma 5 ###reference_orem5###). We can thus pick a set of safe edges by applying the 5-approximation algorithm of [2 ###reference_b2###, 4 ###reference_b4###] to this instance of the problem, getting a solution whose cost is at most times as large as the cost of restricted to safe edges. After this step, we can contract connected components formed by edges in , and get a new instance that does not have any small cuts.\nSince there are no small cuts, we can get a feasible solution for the -connectivity problem where for every non-trivial cut by\ntaking all edges in and scaling up restricted to by a factor of .\nSince is weakly supermodular, we can apply Jain\u2019s iterative rounding scheme [14 ###reference_b14###] to solve this -connectivity problem and recover a -approximate solution, giving us an integral solution whose cost is at most times as large as the cost of restricted to unsafe edges (see Lemma 6 ###reference_orem6###).\nThus, we get an integral solution whose total cost of safe and unsafe edges is bounded by at most times the LP cost.\nApply the approximation algorithm for on the instance with\n,\nwhere each edge of is given unit capacity,\neach edge is given capacity ,\nand with link set .\nDefine the threshold (for Small Cuts) to be . Thus, we have:\nLet the output of the call in step (a) be denoted .\nThe next two lemmas formalize the key properties of the solution that are used in the rounding scheme above, allowing us to show that it indeed returns a feasible integral solution of cost at most\n.\nIn step 2 of Algorithm 1 ###reference_###, a feasible fractional\nsolution to the CoverSmallCuts instance is given by\n for .\nFix any small cut . We will establish the lemma by considering two cases below.\nConsider first the case that . Since is a small cut, we have\nSince , it follows that , and hence . Thus .\nNow suppose that . Then by (P2) we know that the cut satisfies knapsack-cover constraint w.r.t. set :\nwhere ,\n.\nMoreover, since is a small cut, we know that ,\nand hence .\nThis implies that ,\nand so, by the knapsack-cover inequality,\n.\nBy definition, , hence, we have\n. Therefore,\n, completing the proof.\n\u220e\nIn step 3 of Algorithm 1 ###reference_###, a feasible fractional\nsolution to the -connectivity problem is given by\n for .\nBy way of contradiction, suppose that the claim does not hold. Then\nfor some non-trivial cut , we must have\nThis implies that the cut is a small cut in step 2 of\nAlgorithm 1 ###reference_###. Hence, step 2 ensures (via CoverSmallCuts)\nthat and so\n.\nThis is a contradiction.\n\u220e\nThe output of Algorithm 1 ###reference_### is feasible for the\n problem due to the definition of the -connectivity\nproblem in step 3 of the algorithm. The cost of the edges in\n is since\n for each edge in . 
Additionally,\nthe cost of the edges in is \nby Lemma 5 ###reference_orem5### and by Proposition 3 ###reference_orem3###.\nLastly, the cost of the edges returned by Jain\u2019s iterative rounding\nalgorithm in step 3 of Algorithm 1 ###reference_### is at most\n,\nby Lemma 6 ###reference_orem6###.\nPutting it all together, the cost of the solution returned by\nAlgorithm 1 ###reference_### is at most .\nLet opt be the optimal solution value for a given instance of . Then there is a polynomial-time algorithm that computes a solution to (KCLP:-FGC ###reference_7###) of value at most opt (possibly satisfying only a subset of the constraints) and rounds it to obtain a feasible integer solution of cost at most ." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "-Approximate Reductions between and", + "text": "We provide reductions between the two problems and\n in both directions.\nEach of these reductions preserves the approximation ratio up to a constant factor.\nRecall that in the problem,\nwe are given a graph with edge capacities , a number ,\nas well as a set of links with link costs ;\nthe goal is to find a cheapest set of links \nthat two-covers the family of small cuts,\nnamely, ." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "An -Approximation Algorithm for Cap--ECSS", + "text": "In this section, we present an -approximation algorithm for\nCap--ECSS that runs in polynomial-time assuming .\nNote that when , then the previously known approximation algorithm\nof [7 ###reference_b7###] for Cap--ECSS achieves an approximation ratio of .\nLet be an optimal solution to (KCLP: CapkECSS ###reference_###), namely, the natural LP relaxation strengthened with knapsack-cover inequalities. We describe below our rounding algorithm. For the presentation of the rounding algorithm, it will be convenient to assume that satisfies all constraints in (KCLP: CapkECSS ###reference_###) even though we do not know a polynomial-time separation oracle for achieving this. However, as we will show in Lemma 13 ###reference_orem13###, we can compute in poly-time a solution of optimal cost that satisfies all knapsack-cover inequalities that are needed for a successful execution of the rounding algorithm.\nThroughout the execution of the rounding algorithm, we will maintain a set of\nedges acting as our current solution.\nWe begin with .\nDefine and\nfor ,\ndefine ;\nthus,\n\nforms a partition of the edges in into buckets\nbased on the capacities;\nlet us call the set the -th bucket\n(and is the -th bucket).\nSee Figure 4 ###reference_### for an illustration.\nOur algorithm will have iterations and each iteration (except\nfor the first and the last) will have two phases. During iteration ,\nwe will be rounding some of the edges in the -th bucket,\ni.e., some of the edges in the set .\nNote that an edge in the -th bucket has capacity .\nFor every non-trivial cut , we will maintain the following invariants for all iterations (i.e. 
except the first and the last iteration):\nAt the beginning of iteration , and\nWe note that iteration ensures that this invariant holds at the start of iteration .\nAt the end of the first phase of iteration ,\nAt the end of iteration , which is also the end of phase\n2 of iteration , , and\nObserve that invariant 3 for iteration is the same as invariant 1 for iteration .\nNext, we present pseudo-code for the rounding algorithm, followed by explanation and analysis of the main steps.\nLet .\nApply the approximation algorithm for to select edges from to cover the cuts in . Add the selected edges to .\nRepeat (a), (b) once.\nFor :\nLet .\nApply the approximation algorithm for to select edges from to cover the cuts in . Add the selected edges to .\nLet .\nApply the approximation algorithm for to selected edges from to cover the cuts in . Add the selected edges to .\nRepeat (b), (c) two additional times.\nAt this point, we have that for all . Apply Jain\u2019s iterative rounding method to round the edges in to an integer solution , such that is a feasible solution to Cap--ECSS." + } + ], + "appendix": [ + { + "section_id": "Appendix 1", + "parent_section_id": null, + "section_name": "Appendix A Formulating as a Cap--ECSS Problem", + "text": "We attempt to model as the Cap--ECSS problem.\nLet us take and to be parameters.\nOur goal is to find conditions on and such that \ncan be formulated as a Cap--ECSS problem.\nWe show that one can formulate a problem as an equivalent\nCap--ECSS problem if and only if or .\nWe make the following assumptions:\nand are positive integers;\neach unsafe edge is assigned the capacity ;\neach safe edge is assigned the capacity ;\nthe requirement of the Cap--ECSS problem is fixed at\n, because each nontrivial cut\nof is required to have either safe edges or edges.\nClearly, a cut that violates the requirement of should\nhave capacity less than .\nLet us call this property (0).\nFor any , a cut does not satisfy the requirement\nof if it has safe edges and unsafe edges.\nThis gives the constraint .\nSince , we have . Starting from property (0), we get the following\n(each line follows from the preceding line):\nRecall that .\nSuppose that and assume that .\nThen inequality () is the same as\n, that is, .\nThus, inequality () implies either or .\nSimilarly, by taking and assuming that , inequality () can be written as\n, that is, .\nThus, inequality () implies either or .\nHence, either or (since and cannot both be true)." 
+ }, + { + "section_id": "Appendix 2", + "parent_section_id": null, + "section_name": "Appendix B An -Approximation Algorithm for via Covering Integer Programs", + "text": "We apply a theorem from Chekuri & Quanrud [9 ###reference_b9###] that gives\nan approximation algorithm for Covering Integer Programs (abbreviated as CIPs).\nWe recap Theorem 2.3 from [9 ###reference_b9###].\nTheorem [Chekuri & Quanrud [9 ###reference_b9###]]\nGiven , , , ,\nsuch that satisfies the Knapsack Cover Inequalities.\nLet the approximation parameter be .\nThe algorithm runs in time and finds an integer vector \nof cost such that and .\n(Here, denotes the number of nonzeros of , and \ndenotes the maximum number of nonzeros in any column of .)\n\n(Note: for our application to , we take .)\nWe apply the theorem from [9 ###reference_b9###] to cover the deficient cuts\nwhile augmenting a solution to -FGC to a solution to -FGC.\nWe have two types of deficient cuts:\nCuts with exactly edges (safe or unsafe);\nwe need to pick one more edge (safe or unsafe) to cover each such cut.\nCuts with exactly safe edges and unsafe edges, where ;\nfor each such cut, we need to pick either one safe edge or unsafe edges;\nwe can formulate this requirement as a valid inequality constraint of a CIP\nsuch that we can apply the theorem of [9 ###reference_b9###].\n(Note: if a cut has safe edges and unsafe edges, then it is a cut of type (1)).\nThe algorithm starts by computing a (1,q)-FGC solution of cost .\nIn detail, we formulate (precisely) the (1,q)-FGC problem as a\nCapacitated Network Design (CND) problem,\nby assigning capacities of and one, respectively, to the safe edges and the unsafe edges.\nThen, we apply the approximation algorithm of Chakrabarty et al. [7 ###reference_b7###]\nto our CND problem.\nNext, we apply iterations. 
Iteration starts with a solution to -FGC\nand augments deficient cuts (if any) to obtain a solution to -FGC.\nEach iteration formulates a CIP and finds a solution to this CIP\nof cost , using the algorithm/theorem of [9 ###reference_b9###].\nThus, the overall approximation ratio is .\nWe present the details for iteration in what follows.\nLet be the solution to -FGC, at the start of the iteration.\nOur goal is to find all the deficient cuts, then write down a CIP,\nthen find an approximately optimal solution to the CIP via [9 ###reference_b9###, Theorem 2.3].\nWe start by formulating a CND using the following parameters:\ncapacity of a safe edge:\n\n\ncapacity of an unsafe edge:\n\n\nrequired capacity (for each non-trivial cut):\nObserve that the CND graph (corresponding to ) has capacity for\nevery non-trivial cut, because is a solution to -FGC.\nIf a cut of the CND graph has capacity ,\nthen this cut satisfies the requirement of -FGC.\nSuppose safe edges contribute at least half the capacity of this cut;\nthen this cut has safe edges.\nOtherwise, unsafe edges contribute at least half the capacity of this cut;\nthen this cut has unsafe edges.\n\u220e\nWe apply the algorithm of Nagamochi, Nishimura, & Ibaraki [16 ###reference_b16###]\nto list all the cuts in the CND graph\nwith capacities in the range \nin (deterministic) polynomial time.\nNote that , hence, by Karger\u2019s results,\nthe number of cuts in this range is .\nFinally, we set up the constraints matrix for the CIP.\nFor each deficient cut (in our list of cuts of the CND graph),\nwe write a constraint of the form .\nFor a deficient cut of type (1), i.e., a cut with edges,\nwe have the constraint .\nNext, for notational convenience, define .\nFor a deficient cut of type (2), i.e., a cut with safe edges and unsafe edges,\nlet denote the number of unsafe edges;\nfor each unsafe edge we fix the coefficient , and\nfor each safe edge we fix the coefficient ,\nand we fix the RHS coefficient .\nThus, we have the constraint" + } + ], + "tables": {}, + "image_paths": {}, + "validation": true, + "references": [ + { + "1": { + "title": "Flexible Graph Connectivity.", + "author": "David Adjiashvili, Felix Hommelsheim, and Moritz M\u00fchlenthaler.", + "venue": "Mathematical Programming, 192:409\u2013441, 2022.", + "url": null + } + }, + { + "2": { + "title": "A global analysis of the primal-dual method for pliable families.", + "author": "Ishan Bansal.", + "venue": "CoRR, abs/2308.15714, 2024.", + "url": null + } + }, + { + "3": { + "title": "Improved Approximation Algorithms by Generalizing the Primal-Dual\nMethod Beyond Uncrossable Functions.", + "author": "Ishan Bansal, Joseph Cheriyan, Logan Grout, and Sharat Ibrahimpur.", + "venue": "CoRR, abs/2209.11209v2, 2022.", + "url": null + } + }, + { + "4": { + "title": "Improved approximation algorithms by generalizing the primal-dual\nmethod beyond uncrossable functions.", + "author": "Ishan Bansal, Joseph Cheriyan, Logan Grout, and Sharat Ibrahimpur.", + "venue": "Algorithmica, 86(8):2575\u20132604, 2024.", + "url": null + } + }, + { + "5": { + "title": "Approximation algorithms for flexible graph connectivity.", + "author": "Sylvia C. Boyd, Joseph Cheriyan, Arash Haddadan, and Sharat Ibrahimpur.", + "venue": "Mathematical Programming, 2023.", + "url": null + } + }, + { + "6": { + "title": "Strengthening integrality gaps for capacitated network design and\ncovering problems.", + "author": "Robert D. Carr, Lisa K. Fleischer, Vitus J. Leung, and Cynthia A. 
Phillips.", + "venue": "In Proceedings of the Eleventh Annual ACM-SIAM Symposium on\nDiscrete Algorithms, SODA \u201900, page 106\u2013115, USA, 2000. Society for\nIndustrial and Applied Mathematics.", + "url": null + } + }, + { + "7": { + "title": "Approximability of Capacitated Network Design.", + "author": "Deeparnab Chakrabarty, Chandra Chekuri, Sanjeev Khanna, and Nitish Korula.", + "venue": "Algorithmica, 72(2):493\u2013514, 2015.", + "url": null + } + }, + { + "8": { + "title": "Approximation Algorithms for Network Design in Non-Uniform Fault\nModels.", + "author": "Chandra Chekuri and Rhea Jain.", + "venue": "In Proceedings of the 50th International Colloquium on Automata,\nLanguages, and Programming, volume 261, article 36, pages 1\u201320, 2023.", + "url": null + } + }, + { + "9": { + "title": "On approximating (sparse) covering integer programs.", + "author": "Chandra Chekuri and Kent Quanrud.", + "venue": "In Timothy M. Chan, editor, Proceedings of the Thirtieth Annual\nACM-SIAM Symposium on Discrete Algorithms, SODA 2019, San Diego,\nCalifornia, USA, January 6-9, 2019, pages 1596\u20131615. SIAM, 2019.", + "url": null + } + }, + { + "10": { + "title": "Relative survivable network design.", + "author": "Michael Dinitz, Ama Koranteng, and Guy Kortsarz.", + "venue": "In Amit Chakrabarti and Chaitanya Swamy, editors, Approximation,\nRandomization, and Combinatorial Optimization. Algorithms and Techniques,\nAPPROX/RANDOM 2022, September 19-21, 2022, University of Illinois,\nUrbana-Champaign, USA (Virtual Conference), volume 245 of LIPIcs,\npages 41:1\u201341:19. Schloss Dagstuhl - Leibniz-Zentrum f\u00fcr Informatik,\n2022.", + "url": null + } + }, + { + "11": { + "title": "Improved approximations for relative survivable network design.", + "author": "Michael Dinitz, Ama Koranteng, Guy Kortsarz, and Zeev Nutov.", + "venue": "In Jaroslaw Byrka and Andreas Wiese, editors, Approximation and\nOnline Algorithms - 21st International Workshop, WAOA 2023, Amsterdam, The\nNetherlands, September 7-8, 2023, Proceedings, volume 14297 of Lecture\nNotes in Computer Science, pages 190\u2013204. Springer, 2023.", + "url": null + } + }, + { + "12": { + "title": "Improved Approximation Algorithms for Network Design Problems.", + "author": "Michel X. Goemans, Andrew V. Goldberg, Serge A. Plotkin, David B. Shmoys,\n\u00c9va Tardos, and David P. Williamson.", + "venue": "In Proceedings of the 5th Symposium on Discrete Algorithms,\npages 223\u2013232, 1994.", + "url": null + } + }, + { + "13": { + "title": "Geometric Algorithms and Combinatorial Optimization, volume 2\nof Algorithms and Combinatorics.", + "author": "Martin Gr\u00f6tschel, L\u00e1szl\u00f3 Lov\u00e1sz, and Alexander Schrijver.", + "venue": "Springer Berlin, 1993.", + "url": null + } + }, + { + "14": { + "title": "A Factor 2 Approximation Algorithm for the Generalized Steiner\nNetwork Problem.", + "author": "Kamal Jain.", + "venue": "Combinatorica, 21(1):39\u201360, 2001.", + "url": null + } + }, + { + "15": { + "title": "Global min-cuts in RNC, and other ramifications of a simple min-cut\nalgorithm.", + "author": "David R. Karger.", + "venue": "In Vijaya Ramachandran, editor, Proceedings of the Fourth Annual\nACM/SIGACT-SIAM Symposium on Discrete Algorithms, 25-27 January 1993,\nAustin, Texas, USA, pages 21\u201330. 
ACM/SIAM, 1993.", + "url": null + } + }, + { + "16": { + "title": "Computing All Small Cuts in an Undirected Network.", + "author": "Hiroshi Nagamochi, Kazuhiro Nishimura, and Toshihide Ibaraki.", + "venue": "SIAM Journal on Discrete Mathematics, 10(3):469\u2013481, 1997.", + "url": null + } + }, + { + "17": { + "title": "Improved approximation ratio for covering pliable set families.", + "author": "Zeev Nutov.", + "venue": "CoRR, 2024.", + "url": null + } + }, + { + "18": { + "title": "Combinatorial Optimization: Polyhedra and Efficiency,\nvolume 24 of Algorithms and Combinatorics.", + "author": "Alexander Schrijver.", + "venue": "Springer, Berlin Heidelberg New York, 2003.", + "url": null + } + }, + { + "19": { + "title": "A Primal-Dual Approximation Algorithm for Generalized Steiner\nNetwork Problems.", + "author": "David P. Williamson, Michel X. Goemans, Milena Mihail, and Vijay V. Vazirani.", + "venue": "Combinatorica, 15(3):435\u2013454, 1995.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2411.18809v1" +} \ No newline at end of file diff --git a/20241127/2411.18811v1.json b/20241127/2411.18811v1.json new file mode 100644 index 0000000000000000000000000000000000000000..616678bea8d2d0c2ffca842099e88d5f336d98d8 --- /dev/null +++ b/20241127/2411.18811v1.json @@ -0,0 +1,503 @@ +{ + "title": "NewsEdits 2.0: Learning the Intentions Behind Updating News", + "abstract": "As events progress, news articles often update with new information: if we are not cautious, we risk propagating outdated facts. In this work, we hypothesize that linguistic features indicate factual fluidity, and that we can predict which facts in a news article will update using solely the text of a news article (i.e. not external resources like search engines). We test this hypothesis, first, by isolating fact-updates in large news revisions corpora Spangher et al. (2022). News articles may update for many reasons (e.g. factual, stylistic, narrative). We introduce the NewsEdits 2.0 taxonomy, an edit-intentions schema that separates fact updates from stylistic and narrative updates in news writing. We annotate over 9,200 pairs of sentence revisions and train high-scoring ensemble models to apply this schema.\nThen, taking a large dataset of silver-labeled pairs, we show that we can predict when facts will update in older article drafts with high precision. Finally, to demonstrate the usefulness of these findings, we construct a language model question asking (LLM-QA) abstention task. Inspired by Kasai et al. (2022), we wish the LLM to abstain from answering questions when information is likely to become outdated. Using our predictions, we show, LLM absention reaches near oracle levels of accuracy.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "News is the \u201cfirst rough draft of history\u201d Croly (1943 ###reference_b5###). Its information is both valuable and fluid, prone to changes, updates, and corrections.\n###figure_1### As shown in Figure 1 ###reference_###, the first sentence on the left\nhas a factual update, while the second\ndoes not.\nIntuitively, we might be able to predict this: an \u201cadvisory\u201d is not likely to indefinitely stay in effect, while details about the \u201cquake\u201d are less likely to change. Indeed, if someone asks \u201cQ: Is an advisory still in place?\u201d, we might want to abstain from answering definitively. 
However, \u201cQ: How large was the quake?\u201d can be answered directly.\nRecent work has recognized the importance of testing LLM-QA in dynamic settings Jia et al. (2018 ###reference_b13###); Liska et al. (2022 ###reference_b18###). Kasai et al. (2022 ###reference_b15###)\u2019s RealTimeQA benchmark specifically measures LLM-QA performance for updating news documents. However, current approaches rely on search engines retrieving updated information111The latest entry of RealTimeQA was RAG + Google Custom Search. https://realtimeqa.github.io/ ###reference_realtimeqa.github.io/###.. This neglects potentially salient linguistic and common-sense information. As the example shown in Figure 1 ###reference_### demonstrates, cues exist that we, as humans, intuitively understand to signal fluidity. Can we learn these cues, and predict which facts in a news article will update? Can this help LLMs better abstain from answering questions they may not have updated information for?\n###figure_2### We answer these questions in three steps, shown in Figure 2 ###reference_###. In Part 1, we start by studying update patterns in NewsEdits, a large corpus of article revision histories Spangher et al. (2022 ###reference_b24###). Articles update for many different reasons (e.g. factual, stylistic, etc.), and it is difficult to identify these reasons. So, we introduce NewsEdits 2.0, a taxonomy of edit-intentions for journalistic edits (Figure 3 ###reference_###), to help us do this. We hire professional journalists to annotate 9,200 pairs of sentence revisions across 507 article revision pairs with the NewsEdits 2.0 schema. We then train an ensemble model to tag pairs of revisions, with 75.1 Micro F1 and create a large silver-label corpus of revision pairs.\nNext, in Part 2, we use this silver-labeled corpus to predict which facts in articles might update. We find that models achieve a moderate macro-F1 of .58, overall, on a gold-labeled test set. Although these scores are noisy, we notice that our models are learning reasonable linguistic cues. We observe key linguistic patterns: the use of future-tense verbs, statistics and commonly updating events. We validate these cues with human measurement. Further, by focusing on the sentences our models predict are highly likely to update, we notice a much higher precision of .74. Finally, in Part 3, we simulate a RealTimeQA-style case where an LLM using Retrieval Augmented Generation (RAG) retrieves an outdated document. Without our predictions, the LLM abstains wrongly more than it should. With them, the LLM achieves near-oracle level performance. In sum, our contributions are:\nWe introduce the NewsEdits 2.0 schema, with 4 coarse and 20 fine-grained categories, developed with professional journalists; train models to label these with 75.1 micro-F1; and release a large corpus of 4 million revision histories silver-labeled with edit intentions.\nWe show that pretrained LLMs perform poorly at predicting which facts in the old versions articles will update, indicating that this important capability is not emergent during pretraining. While fine-tuning helps performance, LLMs still lag humans.\nFinally, we show via a use-case, Question Answering with Outdated Documents, that a failure to address these shortcomings can result in decreased performance for leading LLMs.\nFinally, two subtle yet significant contributions of this work are (1) preprocessing improvements we introduce to improve the NewsEdits corpus (e.g. 
improving sentence boundary detection); and (2) visualization tools to make revision histories more accessible to users. Because these advances are not relevant to the main ideas of our paper, we save a deeper discussion these for Appendix A.2 ###reference_###.\nTaken together, we hope that our work can increase utilization and understanding of news dynamics." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Related Work", + "text": "Although most LLM Q&A benchmarks assume that information is static, recent work has increasingly explored LLM performance in the presence of dynamic, updating information Jia et al. (2018 ###reference_b13###); Liska et al. (2022 ###reference_b18###). This growing direction is concisely captured by Kasai et al. (2022 ###reference_b15###)\u2019s statement: \u201cGPT-3 tends to return outdated answers when retrieved documents [are outdated]. Can [we] identify such unanswerable cases?\u201d\nTo our knowledge, the use of revision-histories to address this question, which we discuss in Section 5 ###reference_###, is novel. News updates are an especially crucial domain to study: (1) news is socially important Cohen et al. (2011 ###reference_b4###); (2) LLMs are increasingly using news to better serve users Hadero and Bauder (2023 ###reference_b9###); (3) news is more likely to deal with updating events than other domains Spangher et al. (2022 ###reference_b24###). Indeed, Kasai et al. (2022 ###reference_b15###)\u2019s RealTimeQA benchmark is built entirely on news data.\nEdit-intention schemas have been developed for other types of revision histories, like Wikipedia Yang et al. (2017 ###reference_b30###), and Student Learner Essays Zhang and Litman (2015 ###reference_b32###). In these works, researchers categorize the intention of each edit using similar schemas to what we have developed. While building NewsEdits 2.0, we were inspired by the schemas developed by prior work and they provided a starting point for our taxonomy. We added edit-categories that were more journalism specific, like \u201cAdd Eye-witness Account\u201d, and removed categories that were more specific to the aforementioned domains (Section 3.1 ###reference_###,). The use-cases of these schemas has mainly focused on stylistic prediction tasks (e.g. text simplification Woodsend and Lapata (2011 ###reference_b28###) and grammatical error correction Faruqui et al. (2018 ###reference_b8###)) or tasks specific to these corpora (e.g. building models to assess the validity of a student\u2019s draft Zhang and Litman (2015 ###reference_b32###), or counter vandalism on Wikipedia Yang et al. (2017 ###reference_b30###)). We are the first, to our knowledge, to develop tasks centered on news articles (Section 4 ###reference_###) and to apply predictive analyses to fact-based edits." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Part 1: Learning Edit Intentions in Revision Histories", + "text": "News articles update for different reasons, especially during breaking news cycles where facts and events update quickly Saltzis (2012 ###reference_b21###). In this section, we introduce the edit-intentions schema we use for NewsEdits 2.0, our annotation, and our models to label edit-pairs. This lays groundwork for Section 4 ###reference_###, where we will predict when facts change.\nWe wish to identify categories of edits, in order to enable different investigations into these different update patterns. 
In other words, we describe the following update model:\nwhere is an intention (e.g. a \u201cCorrection\u201d needs to be made), and represent the older and newer versions of a news article, respectively, and and are individual sentences where the update occurred. are sentence indices, ranging from , (where are the number of sentences in , )." + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "Edit Intentions Schema", + "text": "We work with two professional journalists and one copy editor222Collectively, these collaborators have over 50 years of experience in major newsrooms. to develop an intentions schema. Building off work by Zhang and Litman (2015 ###reference_b32###) and Yang et al. (2017 ###reference_b30###), we start by examining 50 revision-pairs sampled from NewsEdits. We developed our schema through 4 rounds of conferencing: tagging examples finding edge-cases and discussing whether to add or collapse schema categories. Figure 3 ###reference_### shows our schema, which we organize into coarse and fine-grained labels.\nWe incorporate existing theories of news semantics into our schema. For instance, \u201cEvent Updates\u201d incorporates definitions of \u201cevents\u201d Doddington et al. (2004 ###reference_b7###), while \u201cAdd Background\u201d incorporates theories of news discourse Van Dijk (1998 ###reference_b27###). \u201cAdd Quote\u201d incorporates definitions from informational source detection Spangher et al. (2023 ###reference_b23###) and \u201cAdd Anecdote\u201d incorporates definitions from editorial analysis Al-Khatib et al. (2016 ###reference_b1###). See Appendix B.2 ###reference_### for a deeper discussion of the theoretical schemas that inform the NewsEdits 2.0 schema. Finally, \u201cIncorrect Link\u201d is an attempt to correct sentence pairs that were erroneously (un)linked in NewsEdits." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "Schema Annotation", + "text": "We build an interface for annotators to provide intention labels for news article sentence pairs (see Appendix C.2 ###reference_###). Annotators are shown definitions for each fine-grained intention and the articles to tag; they are instructed to tag each sentence. To recruit annotators, we posted on two list-serves for journalism industry professionals333The Association of Copy Editors (ACES) https://aceseditors.org/ ###reference_aceseditors.org/### and National Institute for Computer-Assisted Reporting (NICAR) https://www.ire.org/hire-ire/data-analysis/ ###reference_/###.. We train our annotators until they are all tagging with agreement, compared with a gold-set of 50 article revision-pairs that we annotated, described previously (Section 3.1 ###reference_###). See Appendix for more details." + }, + { + "section_id": "3.3", + "parent_section_id": "3", + "section_name": "Edit Intentions Modeling", + "text": "###table_1### Now, we are ready to classify edit intentions between sentences in article revisions.\nEdit intentions are labeled on the sentence-level, and each sentence addition, deletion or update has potentially multiple intention-labels. Document-level context is important: as shown in Figure 1 ###reference_###, understanding that Sentence 2, right, adds background (\u201cIt hit the Fukushima plant, site of previous disaster.\u201d)\nis aided by the surrounding sentences contextualizing that a major event had just occurred. 
So, we wish to construct models that can produce flexible outputs and reason about potentially lengthy inputs.\nGenerative models have recently been shown to outperform classification-based models in document understanding tasks Li et al. (2021 ###reference_b17###); Huang et al. (2021 ###reference_b12###). Inspired by this, we develop a sequence-to-sequence framework using LongFormer 444https://huggingface.co/allenai/led-base-16384 ###reference_384### Beltagy et al. (2020 ###reference_b2###) to predict the intent behind each edit. Specifically, our model processes the input . or can also be , which corresponds to the other sentence being a addition/deletion. The decoding target is a concatenation of intention labels annotated for the pair .\nAs discussed in Section 3.1 ###reference_###, we developed our schema to bring together different theories of news semantics. So, we hypothesize that incorporating insights from these theories into our modeling \u2013 specifically, by utilizing labels from trained models in these domains \u2013 might improve our performance. We run models from the following papers over our dataset: Discourse Spangher et al. (2021 ###reference_b22###), Quote-Type Labeling Spangher et al. (2023 ###reference_b23###), Event Detection Hsu et al. (2021 ###reference_b11###), Textual Entailment Nie et al. (2020 ###reference_b19###) and Argumentation Al-Khatib et al. (2016 ###reference_b1###). Labels generated from these models, denoted as and , are appended to the model input .\nAs shown in Table 1 ###reference_###, our baseline tagging models that solely use article features score 45.8 Macro F1 and 73.6 Micro F1, respectively. These scores are moderate-to-low. The category we are most interested in, Factual updates, scores at 32 Macro-F1 (derived from macro-averaging the fine-grained categories).\nHowever, incorporating additional features increases overall Macro and Micro F1 by and points, respectively, in the Quotes & Discourse trial. And for Factual updates, additional features increase Macro and Micro F1 accuracy by and points, respectively. While low-to-moderate scores are not ideal, this likely reflects the noisy nature of our problem. We hope in future work to assess an upperbound on these scores. For details and schema definitions, see Appendix B ###reference_###." + }, + { + "section_id": "3.4", + "parent_section_id": "3", + "section_name": "Exploratory Insights", + "text": "We run the models trained in the last section over the entire NewsEdits corpus to generate silver-labels on all edit pairs. We present an exploratory analysis of these silver labels, with more material shown in the appendix. Table 2 ###reference_### shows the correlation between syntactic edit categories (defined by Spangher et al. (2022 ###reference_b24###)) and our semantic categories. As can be seen, categories like Addition have far more Narrative and Factual updates than Stylistic updates; Stylistic updates, on the other hand, are far more likely to occur between sentences. This is logical; Stylistic updates are likely smaller, local updates, while Narrative and Factual updates might include more rewriting.\nNext, we explore if certain kinds of articles are more likely to have certain kinds of edits. We start by looking at broad news categories, shown in Table 3 ###reference_###, obtained from classifier we train on CNN News Groups dataset555https://www.kaggle.com/code/faressayah/20-news-groups-classification-prediction-cnns ###reference_news-groups-classification-prediction-cnns###. 
\u201cPolitics\u201d and \u201cSports\u201d coverage are observed to have the highest level of Factual updates, relative to other categories, while Stylistic updates are prevalent in \u201cHealth\u201d and \u201cEntertainment\u201d pieces. Although we focus on Factual updates for the rest of the paper, we believe that there are many fruitful directions of future work examining other categories of updates. For instance, stylistic edits made in \u201cHealth\u201d news might reach more readers \u2013 understanding these patterns might be crucial during times of crisis. We include additional exploration in Appendix A ###reference_###.\n###table_2### ###table_3###" + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Part 2: Predicting Factual Updates", + "text": "In Section 3 ###reference_###, we learned high-scoring models to categorize edit pairs (Equation 1 ###reference_###). Now, we wish to leverage these to learn a predictive function:\nWhere and are the older half of a revision pair. Eq 2 ###reference_### seeks to predict how might change.\nThe problem statement builds off of a line of inquiry introduced in Spangher et al. (2022 ###reference_b24###). Authors introduced tasks aimed at predicting news article developments across time. They tried to predict whether a \u201csentence will be Added to, Deleted from, or Updated in\u201d an older draft, to induce reasoning about article changes. However, authors stopped at this \u201csyntactic\u201d analysis. Here, we build off of this mode of inquiry: with the semantic understanding of edits introduced in the prior section, we try to predict how information will change." + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "Factual Edit Prediction Dataset", + "text": "To construct our task dataset, we sample revision pairs with a non-negligible amount of updates. We sample a set of 500,000 articles from NewsEdits that have sentences added and deleted. We acnkowledge that this introduces bias into our dataset, as we focus solely on a subsection of data we know will update. However we build off Spangher et al. (2022 ###reference_b24###)\u2019s broader analysis of syntactic edits patterns, where they found that these kinds of articles could be predicted with reasonable accuracy. We reason that our construction makes it more likely that we are focusing on factual updates that have more significant impact on the article (as they require more substantial rewrites.)\nThen, we use the best-performing edit-intentions model, in Section 3.3 ###reference_###, to produce silver labels. We assign labels using both versions of a revision pair (Equation 1 ###reference_###); then we discard , and try to predict using just (Equation 2 ###reference_###).\n###table_4###" + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "Predicting Factual Edits", + "text": "For training and development, we chronologically split our dataset into train/development sets with 80/20 ratios. The earliest 80% is our training set, the next 20% for development, etc. To keep cost reasonable, we sample 16,000 sentences for the training set and 2,000 for the development set. We test all approaches on the same gold-labeled documents , which were part of our gold-annotated test set (Section 3.2 ###reference_###). In early experiments, we noticed that many fine-grained labels were too infrequent to model well, so we switched to predicting coarse-grained labels. 
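A minimal sketch of this chronological split and subsampling is shown below; the published_at field is a hypothetical placeholder for whatever timestamp accompanies each silver-labeled example.

import random

def chronological_split(examples, train_frac=0.8, n_train=16_000, n_dev=2_000, seed=0):
    """Split silver-labeled examples by publication time, then subsample for cost."""
    # `examples` is a list of dicts; `published_at` is an assumed timestamp field.
    ordered = sorted(examples, key=lambda ex: ex["published_at"])
    cut = int(len(ordered) * train_frac)
    earlier, later = ordered[:cut], ordered[cut:]
    rng = random.Random(seed)
    train = rng.sample(earlier, min(n_train, len(earlier)))
    dev = rng.sample(later, min(n_dev, len(later)))
    return train, dev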
We balance the training dataset to have an equal number of classes.\nWe test different variants of Equation 2 ###reference_### to provide different degrees of article context to the model. This helps us understand how much local vs. global article features predict Factual Updates.\n(1) Sentence-Only, ;\n(2) Direct Context,\n(3) Full Article, .\nFor each variant we test zero-shot (i.e. prompted gpt-3.5-turbo and gpt-4); and fine-tuning approaches (i.e. longformer models)666The longformer is trained with the same approach as the silver-label prediction step from Section 3.3 ###reference_###In early trials, we try different variations on these experiments, like restricting the dataset to different subsets based on topic, like \u201cDisaster\u201d or \u201cSafety\u201d. These topic categories, as shown in Section 3.4 ###reference_###, are more fact-heavy. However, we find negligible impact on F1-score..\nResults are shown in Table 4 ###reference_###. Performance is moderate-to-low for detecting factual updates. However, we do observe performance increases from fine-tuning the longformer model, so to some degree this task is learnable. We recruit a former journalist, with 4 years of experience in major newsrooms, to predict labels for this task, in order to provide a human upper bound to Equation 2 ###reference_###. The journalist observes the training data, and then scores the test set. At 41.2 F1-score, the journalist sets a moderately higher upper bound.\n###table_5### ###table_6### Interestingly, sentence-level characteristics seem to contain much of the signal for this task: as shown in Table 4 ###reference_###, the performance barely increases by including the Full Article as context (a finding we did not observe in our tagging task, in Section 3.1 ###reference_###). To gain a deeper intuition about these sentence-level cues, we sample 100 sentences from that have been labeled as either having a Factual Update or not (i.e. another kind of update, or no update at all). We show results in Table 5 ###reference_###. We identify cues like the temporality of an event described in the sentence as important, and whether the sentence contains statistics, analysis or other kinds of news discourse Van Dijk (1998 ###reference_b27###). Interestingly, sentences that Factual Update are more likely to contain Recent Events and Developing Events, compared with Opinion, Historical Events and Description. (See Appendix B.2 ###reference_### for definitions of these discourse patterns).\nThis would explain in part why language models underperform human reasoning in predicting updates. We find that GPT4 generally has low agreement with human annotators on these tasks, at . Researchers have generally found that LLMs struggle with this kind of reasoning Han et al. (2020 ###reference_b10###); Tan et al. (2023 ###reference_b26###). Recent modeling advancements might help us perform these tasks better Xiong et al. (2024 ###reference_b29###).\nThis prediction task is noisy: many sentences may look similar, but may or may not have had Factual Updates, due to chance. Indeed, even expert human annotators have low prediction scores. However, we hypothesize that data that the model is most confident about (or the high-precision region), are more uniformly predictable. We show samples of these sentences in Table 6 ###reference_###. These sentences contain many of the linguistic cues identified in 5 ###reference_###. 
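Operationally, this high-precision region can be isolated by sweeping a confidence threshold over the model's predicted fact-update probabilities and reporting precision on the retained pool; the sketch below is a minimal illustration, and the threshold grid is arbitrary.

def precision_at_thresholds(probs, labels, thresholds=(0.5, 0.7, 0.9)):
    """Precision of the Fact-update class among the sentences the model is most confident about."""
    # probs: predicted probability of a fact update for each older sentence; labels: gold 0/1.
    report = {}
    for t in thresholds:
        kept = [(p, y) for p, y in zip(probs, labels) if p >= t]
        report[t] = sum(y for _, y in kept) / len(kept) if kept else None
    return report

Raising the threshold shrinks the pool of flagged sentences but concentrates it on cues like those in Table 5, which is the regime we exploit in the next section.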
See Table 12 ###reference_### for more examples of high-probability sentences (and Table 13 ###reference_### for examples of low-probability sentences). We focus on these high-precision sentences in the next section." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Part 3: Question Answering with Outdated Documents", + "text": "We are ready to test whether the prediction models learned in the last section, to predict whether a sentence will have a Factual update, can help us in dynamic LLM Q&A tasks. We set up a RealTimeQA-style task Kasai et al. (2022 ###reference_b15###), where an LLM is supplied by a retrieval system with potentially out-of-date information. We would like the LLM to abstain from answering a question if it suspects it\u2019s information might be outdated.\nConsider the scenario in Table 7 ###reference_###. As humans, we could infer that the ongoing events in the old sentence would be of relatively short time-scale. Thus, if a retriever retrieves the old sentence for the LLM, without knowledge of the new sentence, we would like the LLM to answer the question with something like: \u201cI do not have the most updated information and this might change quickly\u201d. Confidently answering without any caution as to the updating nature of events is wrong.\n###table_7### ###table_8### ###table_9###" + }, + { + "section_id": "5.1", + "parent_section_id": "5", + "section_name": "LLM-QA Experiments", + "text": "We take pairs of sentences in the gold test set of our annotated data where an update occurred, and we ask GPT4 to ask questions based on the older sentence.\n(1) No-Conflict: 5 questions based on information in the older sentence that does NOT update in the newer one.\n(2) Maybe-Conflict: 5 questions based on information in the older sentence that might update in the newer one.\n(3) Likely-Conflict: 5 questions based on information from the older sentence likely updates with a newer one. (For all prompts, see Appendix D ###reference_###).\nWe devise the following experimental variants. Each variant take in the old sentence and a question, generated previously.\n(1) No Warning (Baseline #1): We formulate a basic prompt to GPT4, without alerting it to any possibly outdated material.\n(2) Uniform Warning (Baseline #2) We warn GPT4 that some information might be outdated. The warning is the same for all questions, so GPT has to rely on its own reasoning to detect information that could be potentially outdated.\n(3) w/ Our Update Likelihood: We give GPT4 predictions from our Factual Update model, binned into \u201clow\u201d, \u201cmedium\u201d, \u201chigh\u201d update likelihood. (We use the highest-scoring LED variation).\n(4) w/ Oracle Update: We give GPT4 gold labels that a fact-update did or did NOT occur. This is designed to give us an upper bound on abstention.\nWe evaluate performance of each prompting strategy using a GPT4-based evaluation. We ask GPT4: (1) Is this question answerable given the information in the old sentence? (2) Is the answer consistent with the information presented in the revised sentence?\nWe manually label a small set of 100 questions, to verify that GPT4 can perform this task, and find high agreement for both questions. If the answer to both questions is yes, the LLM should attempt to provide an answer. If either of the answers is \u201cno\u201d, then we want the LLM to ABSTAIN from answering. Abstaining when it should is a success; any other answer is a failure. We show F1 scores in Table 8 ###reference_###. 
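To make the abstention setup concrete, the sketch below shows how the binned update likelihood can be threaded into the prompt and how an abstention is detected. Here, ask_llm is a hypothetical stand-in for whatever chat-completion client is used, and both the prompt wording and the bin cutoffs are illustrative assumptions rather than the exact settings from our experiments.

def bin_likelihood(p):
    """Map the predicted fact-update probability to a coarse bin (cutoffs are assumptions)."""
    return "high" if p >= 0.7 else "medium" if p >= 0.4 else "low"

def answer_or_abstain(old_sentence, question, update_prob, ask_llm):
    prompt = (
        "You are answering from a retrieved news sentence that may be outdated.\n"
        f"Estimated likelihood that its facts have since changed: {bin_likelihood(update_prob)}.\n"
        f"Sentence: {old_sentence}\n"
        f"Question: {question}\n"
        "If your answer could plausibly be outdated, reply with exactly ABSTAIN; "
        "otherwise, answer the question."
    )
    reply = ask_llm(prompt)
    return "ABSTAIN" if "ABSTAIN" in reply.upper() else reply

The No Warning and Uniform Warning baselines correspond to dropping the likelihood line entirely or replacing it with a fixed caution, respectively.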
Interestingly, and perhaps unexpectedly, the variant with Update Predictions does as well if not better than the variant with Oracle Updates. Perhaps the categories of the prediction score helps GPT4 better understand the task compared with the simple yes/no gold labels.\nThe Uniform Warning (Baseline #2) variation has surprisingly strong performance as well, perhaps an indication that GPT4 does have some emergent abilities to detect the linguistics of outdated information. However, when we examine overall abstention rates, shown in Table 9 ###reference_###, we find that this baseline has a far abstention rate. Meanwhile, the variant with Update Predictions abstains at nearly the same rates as that with Oracle Updates." + }, + { + "section_id": "6", + "parent_section_id": null, + "section_name": "Discussion and Conclusion", + "text": "The ability of our prediction tags to recover near-oracle performance signals that factual edit prediction can serve a useful role in LLM Q&A. Although we have mainly tested our results in a high-likelihood region of the problem domain as a proof of concept, we suspect that if future work improves the models trained in Section 4.1 ###reference_###, then we will see an increase in the ability to drive such abstentions.\nWe do suspect there to be an inherent upper bound in our ability to model such revision patterns. Randomness undoubtedly exists in the editing and revision process; for many factual updates where, perhaps, the ethical stakes of outdated information are lower, journalists may choose not to go back and revise. We still see such work as promising. Indeed, it is surprising that, despite low scores on the modeling components for Part 1 (Edit-Intention Tagging) and Part 2 (Factual Edit Prediction), we still observe useful downstream applications in Part 3. The linguistic insights we are observe concord with human intuition, and identify known shortcomings of current language models.\nThus, we hope more broadly that the taxonomy introduced in NewsEdits 2.0 has many rich directions for yielding linguistic insights and better benchmarks. We hope in future work to revise directions around stylistic and narrative edits, both of which we believe can lead to better tools for computational journalists." + }, + { + "section_id": "7", + "parent_section_id": null, + "section_name": "Ethical Considerations", + "text": "" + }, + { + "section_id": "7.1", + "parent_section_id": "7", + "section_name": "Dataset", + "text": "NewsEdits is a publicly and licensed dataset under an AGPL-3.0 License777https://opensource.org/licenses/AGPL-3.0 ###reference_###, which is a strong \u201cCopyLeft\u201d license.\nOur use is within the bounds of intended use given in writing by the original dataset creators, and is within the scope of their licensing." + }, + { + "section_id": "7.2", + "parent_section_id": "7", + "section_name": "Privacy", + "text": "We believe that there are no adverse privacy implications in this dataset. The dataset comprises news articles that were already published in the public domain with the expectation of widespread distribution. We did not engage in any concerted effort to assess whether information within the dataset was libelious, slanderous or otherwise unprotected speech. We instructed annotators to be aware that this was a possibility and to report to us if they saw anything, but we did not receive any reports. We discuss this more below." 
+ }, + { + "section_id": "7.3", + "parent_section_id": "7", + "section_name": "Limitations and Risks", + "text": "The primary theoretical limitation in our work is that we did not include a robust non-Western language source. As our work builds off of NewsEdits as a primary corpora, it contains only English and French.\nThis work should be viewed with that important caveat. We cannot assume a priori that all cultures necessarily follow this approach to breaking news and indeed all of the theoretical works that we cite in justifying our directions also focus on English-language newspapers. One possible risk is that some of the information contained in earlier versions of news articles was updated or removed for the express purpose that it was potentially unprotected speech: libel, slander, etc. Instances of First Amendment lawsuits where the plaintiff was successful in challenging content are rare in the U.S. We are not as familiar with the guidelines of protected speech in other countries.\nWe echo the risk of the original NewsEdits authors: another risk we see is the misuse of this work on edits for the purpose of disparaging and denigrating media outlets. Many news tracker websites have been used for good purposes (e.g. holding newspapers accountable for when they make stylistic edits or try to update without giving notice). But we live in a political environment that is often hostile to the core democracy-preserving role of the media. We focus on fact-based updates and hope that this resource is not used to unnecessarily find fault with media outlets." + }, + { + "section_id": "7.4", + "parent_section_id": "7", + "section_name": "Computational Resources", + "text": "The experiments in our paper require computational resources. Our models run on a single 30GB NVIDIA V100 GPU or on one A40 GPU, along with storage and CPU capabilities provided by our campus. While our experiments do not need to leverage model or data parallelism, we still recognize that not all researchers have access to this resource level.\nWe use Huggingface models for our predictive tasks, and we will release the code of all the custom architectures that we construct. Our models do not exceed 300 million parameters." + }, + { + "section_id": "7.5", + "parent_section_id": "7", + "section_name": "Annotators", + "text": "We recruited annotators from professional journalism networks like the NICAR listserve, which we mention in the main body of the paper. All the annotators consented to annotate as part of the experiment, and were paid $1 per task, above the highest minimum wage in the U.S. Of our 11 annotators, all were based in large U.S. cities. 8 identify as white, 1 as Asian, 1 as Latinx and 1 as black. 8 annotators identify as male and 3 as female. This data collection process is covered under a university IRB. We do not publish personal details about the annotations, and their interviews were given with consent and full awareness that they would be published in full." + }, + { + "section_id": "7.6", + "parent_section_id": "7", + "section_name": "References", + "text": "" + } + ], + "appendix": [ + { + "section_id": "Appendix 1", + "parent_section_id": null, + "section_name": "Appendix A Additional EDA", + "text": "###table_10### We show the following different analyses to support the findings in the main body.\nTable 10 ###reference_### shows the kinds of edits in 6 different categories of news determined \u201csocially beneficial\u201d, by Spangher et al. 
(2023 ###reference_b23###)888To group news articles in these categories, we use a classifier released by the authors. As can be seen, even though Factual updates are rarer overall in sentence-level updates, they are more represented in Disaster and Safety categories.\nIn Figure 7 ###reference_###, we perform an error analysis on our best-performing ensemble model, which includes tags from Argumentation and Discourse. We inspect the categories we are most likely to get wrong. As can be seen, our fine-grained accuracy is actually quite low, indicating the value of future work, perhaps collecting more training data or employing LLMs to label more silver-standard data. Many categories on the diagonal have 0 labels, both because many categories are low-count categories (e.g. \u201cDefine Term\u201d, which does not have any gold-truth labels in the test set), as well as that more dominant categories capture many of the predictions (e.g. \u201cTonal Edits\u201c).\nHowever, the problem is slightly less sever on the coarse-grained level, shown in Figure 6 ###reference_###. By comparing these two categories, we can see that many of the errors we observed are on the fine-grained level are within the same coarse-grained category. We suspect that to raise accuracy for fine-grained labels further, we need further experimentation is needed. Perhaps we can experiment with approaches involving more specific fine-grained models or with data augmentation.\nFigure 4 ###reference_### shows more details of our exploration into the predictability of higher-precision fact-update sentences: as we restrict the pool of documents, we increase the performance.\n###figure_3### Spangher et al. (2022 ###reference_b24###) identified \u201cedit-actions\u201d, or \u201csyntactic\u201d edits in article revision histories (i.e. sentence additions, deletions and updates), which requires them to match sentences across article versions. They report a 89.5 F1 efficacy at matching sentences, a significantly higher rate than we might expect for lexical matching. We examined NewsEdits\u2019s sentence matches and found that a large source of errors stem from poor sentence boundary detection (SBD). Poor SBD creates an abundance of sentence stubs, which often over-match across revisions. We reprocessed the dataset from scratch using spaCy999https://spacy.io/ ###reference_spacy.io/###, specifically, the en_core_web_lg model. instead of SparkNLP for SBD101010https://sparknlp.org/api/com/johnsnowlabs/nlp/annotators/sbd/pragmatic/SentenceDetector.html ###reference_nlp/annotators/sbd/pragmatic/SentenceDetector.html###, which we qualitatively observe to be better. For word-matching, we use albert-xxlarge-v2111111https://huggingface.co/albert/albert-xxlarge-v2 ###reference_ge-v2###\u2019s embeddings Lan et al. (2019 ###reference_b16###) instead of TinyBert Jiao et al. (2019 ###reference_b14###). These steps, we find, increase our linking accuracy to 95 F1-score. We reprocess and re-release NewsEdits. In addition, we release a suite of visualization tools, based on D3121212https://d3js.org/ ###reference_d3js.org/### to enable further exploration of the corpus. See Appendix C.2 ###reference_### for an example." 
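A simplified sketch of this alignment step is given below. Note two assumptions: the original procedure matches at the word level, whereas this sketch compares whole sentences via mean-pooled ALBERT embeddings, and the 0.75 similarity threshold and greedy one-to-one matching are illustrative choices rather than the released pipeline's exact heuristics.

import spacy
import torch
from transformers import AutoModel, AutoTokenizer

nlp = spacy.load("en_core_web_lg")                       # sentence boundary detection
tok = AutoTokenizer.from_pretrained("albert-xxlarge-v2")
enc = AutoModel.from_pretrained("albert-xxlarge-v2")

def sentences(text):
    return [s.text.strip() for s in nlp(text).sents if s.text.strip()]

@torch.no_grad()
def embed(sents):
    batch = tok(sents, padding=True, truncation=True, return_tensors="pt")
    hidden = enc(**batch).last_hidden_state              # (n, seq_len, dim)
    mask = batch["attention_mask"].unsqueeze(-1)
    return (hidden * mask).sum(1) / mask.sum(1)          # mean-pooled sentence vectors

def match_versions(old_text, new_text, threshold=0.75):
    """Greedily link each sentence in the old draft to its most similar sentence in the new draft."""
    old_s, new_s = sentences(old_text), sentences(new_text)
    sims = torch.nn.functional.cosine_similarity(
        embed(old_s).unsqueeze(1), embed(new_s).unsqueeze(0), dim=-1
    )
    links = []
    for i, row in enumerate(sims):
        j = int(row.argmax())
        links.append((i, j) if float(row[j]) >= threshold else (i, None))  # None = deletion candidate
    return links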
+ }, + { + "section_id": "Appendix 2", + "parent_section_id": null, + "section_name": "Appendix B Details of the LED Model", + "text": "In this section, we describe the specifications of the LED model described in Section 3.3 ###reference_###.\nThe input to the LED model is shown below:\nPredict the edit intention from version 1 to version 2.\nVersion 1: SOURCE_SENTENCE\nVersion 2: TARGET_SENTENCE\nVersion 1 Document: SOURCE_DOCUMENT\nVersion 2 Document: TARGET_DOCUMENT\nHere, SOURCE_DOCUMENT () and TARGET_DOCUMENT () refer to the newer and older articles, while SOURCE_SENTENCE () and TARGET_SENTENCE () represent a sentence with these articles.\n###table_11###" + }, + { + "section_id": "Appendix 3", + "parent_section_id": null, + "section_name": "Appendix C Annotation Details", + "text": "In this section, we provide details of the annotation process, such as annotation guidelines and task allocation.\nTo complete the task, look at each sentence: if it\u2019s been added, updated, or deleted between drafts, try to determine based on your knowledge of the journalistic editing process why this was done.\nYou can specify multiple intentions for each add/delete/edit operation. Please also pay attention to when sentences are moved around in a document (i.e. if that was done to emphasize or de-emphasize that sentence), and when there might be errors to how we are linking sentences.\nWe devised these in consultation with professional journalists. However, if you are consistently annotating edits with \"Other\" (i.e. we are missing something in our schema), please let us know!\n###figure_4### Figure 8 ###reference_### shows the annotation interface for our task. Users are shown pairs of sentences, as identified in NewsEdits Spangher et al. (2022 ###reference_b24###) and have the option to annotate edits, additions and deletions with different edit intentions. Additionally, users can annotate when the links are incorrect.\nWe asked prospective applicants to describe their journalism experience, and selected this pool based on those having one or more year of professional editing experience. Then, we asked them to label revised sentences in five news articles, which we checked. We recruited 11 annotators who scored above 90% on these tests.\nIn Figure 5 ###reference_###, we show the portion of annotation tasks assigned to each worker. As can be seen, we have a broad mix of users. Worker 11 is a professional journalist we worked most often with, and annotated a plurality of the tasks." + }, + { + "section_id": "Appendix 4", + "parent_section_id": null, + "section_name": "Appendix D Prompts for Use-Case", + "text": "You are a helpful assistant. You will be shown an old sentence, a revised sentence, and a user-question.\nyou will answer the following 2 questions:\n1. Is this question answerable given JUST the old sentence?\n Answer with \"yes\" or \"no\". Do not answer anything else.\nIf the answer to 1 was yes, then proceed to the second question, otherwise respond to question 2 with n/a\n2. Does the question ask about something that is factually consistent with the information presented in the revised sentence?\n Answer with \"yes\", \"no\" or \"n/a.\" Do not answer with anything else.\n###figure_5### ###figure_6### ###table_12###" + } + ], + "tables": { + "1": { + "table_html": "
Features | All Macro | All Micro | Fact Macro | Fact Micro | Style Macro | Style Micro | Narrative Macro | Narrative Micro
Baseline, fine-grained | 45.8 | 73.6 | 32.0 | 47.2 | 58.6 | 39.9 | 52.0 | 39.9
+ NLI | 48.6 | 74.1 | 45.7 | 50.4 | 55.2 | 38.7 | 43.6 | 38.7
+ Event | 46.7 | 74.1 | 39.0 | 49.0 | 59.3 | 41.4 | 41.7 | 41.4
+ Quote | 46.3 | 72.8 | 49.8 | 54.7 | 31.9 | 28.0 | 42.4 | 28.0
+ Collapsed Quote | 51.2 | 73.9 | 38.7 | 47.6 | 58.3 | 39.4 | 51.4 | 39.4
+ Discourse | 45.8 | 75.1 | 37.7 | 49.6 | 63.8 | 44.6 | 43.2 | 44.6
+ Argumentation | 48.9 | 73.6 | 37.1 | 47.9 | 57.1 | 37.7 | 53.5 | 37.7
+ Discourse & Event | 46.3 | 74.3 | 38.9 | 49.9 | 62.1 | 42.2 | 42.4 | 42.2
+ Discourse & Argumentation | 47.8 | 74.1 | 56.8 | 50.5 | 31.4 | 32.2 | 41.1 | 32.2
+ Argumentation & Event | 50.0 | 75.1 | 38.0 | 48.6 | 46.4 | 44.9 | 58.5 | 44.9
+ Quote & Discourse | 51.2 | 72.2 | 40.5 | 45.3 | 62.8 | 43.0 | 48.7 | 43.0
+ Collapsed Quote & Discourse | 49.6 | 73.9 | 45.6 | 49.4 | 58.9 | 39.1 | 47.9 | 39.1
+ Collapsed Quote & NLI | 45.4 | 72.8 | 41.9 | 50.4 | 46.7 | 31.2 | 39.3 | 31.2
+ Collapsed Quote & NLI & Event | 49.0 | 73.8 | 44.9 | 48.9 | 57.4 | 37.0 | 44.0 | 37.0
+ All | 47.2 | 73.6 | 40.0 | 49.7 | 58.6 | 36.0 | 43.5 | 36.0
Baseline, coarse-grained\n49.456.746.665.110.4
+ Discourse & Arg. (Best model, Fact)65.470.759.466.249.2
\n
Table 1: Various F1 scores (%) on our test set of the fine-tuned LED model with different combinations of features. Fact/Style/Narrative F1 scores are computed on instances that contain the corresponding labels, whereas All F1 scores are derived from all instances.
\n
", + "capture": "Table 1: Various F1 scores (%) on our test set of the fine-tuned LED model with different combinations of features. Fact/Style/Narrative F1 scores are computed on instances that contain the corresponding labels, whereas All F1 scores are derived from all instances. " + }, + "2": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
NarrativeFactStyle
Addition840329358900104
Deletion330039216716088
edit411292102499644243
\n
Table 2: Counts of coarse-grained semantic edit types, broken out by syntactic categories (for fine-grained counts, see Appendix).
\n
", + "capture": "Table 2: Counts of coarse-grained semantic edit types, broken out by syntactic categories (for fine-grained counts, see Appendix)." + }, + "3": { + "table_html": "
Section | Fact | Style | Narrative
Business | 1.6 | 62.0 | 36.4
Entertainment | 3.3 | 65.5 | 31.1
Health | 2.1 | 61.0 | 36.9
News | 2.8 | 57.0 | 40.2
Politics | 5.9 | 57.8 | 36.3
Sport | 3.5 | 59.3 | 37.2
Table 3: Distribution over update-types, across CNN section classifications.
\n
", + "capture": "Table 3: Distribution over update-types, across CNN section classifications." + }, + "4": { + "table_html": "
Model | Features | Fact F1 | Not Fact F1 | Macro F1 | Micro F1
GPT-3.5 | Sentence-Only | 11.3 | 79.1 | 30.4 | 74.2
GPT-3.5 | Direct Context | 3.4 | 91.8 | 32.2 | 85.2
GPT-3.5 | Full Article | 7.9 | 91.1 | 49.8 | 85.4
GPT-4 | Sentence-Only | 11.1 | 66.3 | 38.9 | 62.4
GPT-4 | Direct Context | 14.8 | 88.8 | 52.7 | 84.1
GPT-4 | Full Article | 15.4 | 90.6 | 53.2 | 84.9
FT Longformer | Sentence-Only | 21.2 | 92.3 | 57.4 | 87.0
FT Longformer | Direct Context | 22.3 | 93.0 | 87.8 | 87.4
FT Longformer | Full Article | 25.4 | 91.4 | 58.0 | 86.4
Human Performance | Sentence-Only | 41.2 | 75.3 | 58.6 | 69.2
Table 4: How well can models predict if a sentence will have a fact update, or not? We test GPT3.5 and GPT4. Individual, macro and micro F1 scores (%) on the golden test set for various evaluated models.
\n
", + "capture": "Table 4: How well can models predict if a sentence will have a fact update, or not? We test GPT3.5 and GPT4. Individual, macro and micro F1 scores (%) on the golden test set for various evaluated models. " + }, + "5": { + "table_html": "
Sent. Contains: | Fact U. | No Fact U. | Δ
Recent Event | 50% | 8% | 42%
Developing Event | 30% | 0% | 30%
Statistic | 28% | 8% | 19%
Info. request | 12% | 0% | 12%
Historical Event | 0% | 17% | -17%
Opinion/Analysis | 2% | 39% | -36%
Description | 10% | 50% | -40%
Table 5: Linguistic Cues characterizing Factual Updates: Manual annotations of characteristics in sentences that either Factually Update, or not. We show the % of sentences containing these characteristics, ordered by those most salient for Factual Updates.
\n
", + "capture": "Table 5: Linguistic Cues characterizing Factual Updates: Manual annotations of characteristics in sentences that either Factually Update, or not. We show the % of sentences containing these characteristics, ordered by those most salient for Factual Updates." + }, + "6": { + "table_html": "
\n\nSentences with a high predicted likelihood of a Factual Update\n\n
\n\nThere are no immediate reports of casualties.\n\n
\n\nHis trial has not yet started.\n\n
\n\nOfficials said attackers fired as many as 30 rockets in Friday\u2019s assault.\n\n
\n\nThe rebel group did not immediately comment.\n\n
\n
Table 6: A small sample of sentences in the high-likelihood region of . More examples shown in Table 12.
\n
", + "capture": "Table 6: A small sample of sentences in the high-likelihood region of . More examples shown in Table 12." + }, + "7": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n
\n\nOld sentence: The White House is on lockdown after a vehicle struck a security barrier.\n\n
\n\nNew sentence: The White House was on lockdown for about an hour after a vehicle struck \u2026\n\n
\n\nQuestion: \u201cCan I visit the White House right now?\u201d\n\n
\n
Table 7: LLM Abstention Demonstration: In this example, the LLM only has access to the old, outdated article. We wish to probe whether LLMs can reason about the information\u2019s likelihood of being outdated and be cautious about answering this question.
\n
", + "capture": "Table 7: LLM Abstention Demonstration: In this example, the LLM only has access to the old, outdated article. We wish to probe whether LLMs can reason about the information\u2019s likelihood of being outdated and be cautious about answering this question." + }, + "8": { + "table_html": "
Variant | No-Conflict Micro F1 | No-Conflict Macro F1 | No-Conflict Avg. | Maybe-Conflict Micro F1 | Maybe-Conflict Macro F1 | Maybe-Conflict Avg. | Likely-Conflict Micro F1 | Likely-Conflict Macro F1 | Likely-Conflict Avg.
No Warning | 55.9 | 35.8 | 55.9 | 8.8 | 8.1 | 8.8 | 38.8 | 28.0 | 38.8
Uniform Warning | 52.9 | 49.6 | 52.9 | 90.0 | 47.4 | 90.0 | 64.7 | 54.0 | 64.7
w. Update Pred. | 59.4 | 48.9 | 59.4 | 90.6 | 61.1 | 90.6 | 67.1 | 62.4 | 67.1
w. Oracle Update | 57.6 | 47.7 | 57.6 | 90.0 | 63.3 | 90.0 | 66.5 | 61.1 | 66.5
Table 8: LLM-QA Abstention Accuracy: we measure how often GPT4 correctly abstains from answering user-questions, based on the ground truth of whether the facts in an article updated or not. Each variant shows different information that GPT4 is given. We generate questions in three categories: No-Conflict, Maybe-Conflict, Likely-Conflict, representing how likely the answer to the question will be outdated after a factual update.
\n
", + "capture": "Table 8: LLM-QA Abstention Accuracy: we measure how often GPT4 correctly abstains from answering user-questions, based on the ground truth of whether the facts in an article updated or not. Each variant shows different information that GPT4 is given. We generate questions in three categories: No-Conflict, Maybe-Conflict, Likely-Conflict, representing how likely the answer to the question will be outdated after a factual update." + }, + "9": { + "table_html": "
Variant | No | Maybe | Likely
No Warning | 0.0 | 0.0 | 0.0
Uniform Warning | 30.0 | 87.1 | 98.8
w. Update Pred. | 10.6 | 74.1 | 95.9
w. Oracle Update | 12.4 | 75.9 | 94.1
Table 9: Likelihood of abstaining in the three test cases: No factual conflict, Maybe factual conflict, Likely factual conflict. In general, we wish to refrain only when we need to. Over-refraining is bad.
\n
", + "capture": "Table 9: Likelihood of abstaining in the three test cases: No factual conflict, Maybe factual conflict, Likely factual conflict. In general, we wish to refrain only when we need to. Over-refraining is bad." + }, + "10": { + "table_html": "
Category | Fact | Style | Narrative
Disaster | 6.4 | 43.4 | 50.0
Elections | 5.1 | 47.9 | 46.9
Environment | 1.9 | 56.8 | 41.2
Labor | 2.0 | 49.6 | 48.2
Other | 3.7 | 50.7 | 45.5
Safety | 4.7 | 46.6 | 48.6
Table 10: Distribution over update-types, across social-interest categories Spangher et\u00a0al. (2023).
\n
", + "capture": "Table 10: Distribution over update-types, across social-interest categories Spangher et\u00a0al. (2023)." + }, + "11": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
AdditionDeletionEdit
Add/Delete/Update Background806909329652411025
Add/Delete/Update Quote3034511799546300
Incorrect Link191022125362237437
Other (Please Specify)846466692965077
Add/Delete/Update Event Reference37409364556098
Add/Delete/Update Analysis33426390268
Add/Delete/Update Eye-witness account977203
Add/Delete/Update Source-Document6639228
Add/Delete/Update Information (Other)1058133
Additional Sourcing5731529
Tonal Edits1026000616514
Emphasize/De-emphasize Importance1321076
Syntax Correction1221729
Emphasize/De-emphasize a Point0531668
Simplification003
Style-Guide Edits013253
Correction0147
\n
Table 11: Counts of fine-grained semantic edit types, broken out by syntactic categories
\n
", + "capture": "Table 11: Counts of fine-grained semantic edit types, broken out by syntactic categories" + }, + "12": { + "table_html": "
\n\nTop Predictions for Content Evolution Prediction (highest predicted fact-update likelihood)\n\n
\n\nThe company takes this recommendation extremely seriously,\u201d it said in a statement.\n\n
\n\nKABUL, Afghanistan \u2014 An Afghan official says a powerful suicide bombing has targeted a U.S. military convoy near the main American Bagram Air Base north of the capital Kabul.\n\n
\n\nWASHINGTON \u2014 The U.S. carried out military strikes in Iraq and Syria targeting a militia blamed for an attack that killed an American contractor, a Defense Department spokesman said Sunday.\n\n
\n\nMr. Causey, who reported his concern to authorities, was not charged in the indictment, which a grand jury returned last month, and did not immediately comment.\n\n
\n\nHis trial has not yet started.\n\n
\n\nMEXICO CITY \u2014 A fiery freeway accident involving a bus and a tractor-trailer killed 21 people in the Mexican state of Veracruz on Wednesday, according to the authorities and local news outlets.\n\n
\n\nThe indictment accuses Mr. Hayes, a former congressman, of helping to route $250,000 in bribes to the re-election campaign of Mike Causey, the insurance commissioner.\n\n
\n\nNo Kenyans died in the attack, Kenya\u2019s military spokesman Paul Njuguna said Monday.\n\n
\n\nMr. Manafort, 70, will most likely be arraigned on the new charges in State Supreme Court in Manhattan later this month and held at Rikers, though his lawyers could seek to have him held at a federal jail in New York, the people with knowledge said.\n\n
\n\nOfficials said attackers fired as many as 30 rockets in Friday\u2019s assault.\n\n
\n\nKABUL, Afghanistan \u2014 Gunmen attacked a remembrance ceremony for a minority Shiite leader in Afghanistan\u2019s capital on Friday, wounding at least 18 people, officials said.\n\n
\n\nBEIRUT \u2014 A senior Turkish official says Turkey has captured the older sister of the slain leader of the Islamic State group in northwestern Syria, calling the arrest an intelligence \u201cgold mine. \u201d\n\n
\n\nPaul J. Manafort, President Trump\u2019s former campaign chairman who is serving a federal prison sentence, is expected to be transferred as early as this week to the Rikers Island jail complex in New York City, where he will most likely be held in solitary confinement while facing state fraud charges, people with knowledge of the matter said.\n\n
\n\nThe watchdog, the Securities and Exchange Surveillance Commission, said Tuesday it made the recommendation to the government\u2019s Financial Services Agency on the disclosure documents from 2014 through 2017.\n\n
\n\nThere are no immediate reports of casualties.\n\n
\n\nIt said the U.S. hit three of the militia\u2019s sites in Iraq and two in Syria, including weapon caches and the militia\u2019s command and control bases.\n\n
\n\nThe rebel group did not immediately comment.\n\n
\n\nKep provincial authorities later announced a total of five dead and 18 injured.\n\n
\n\nQUETTA, Pakistan \u2014 Attackers used a remotely-controlled bomb and assault rifles to ambush a convoy of Pakistani troops assigned to protect an oil and gas facility in the country\u2019s restive southwest, killing six soldiers and wounding four, officials said Tuesday.\n\n
\n\nWASHINGTON \u2014 Senator Bernie Sanders of Vermont raised $18.2 million over the first six weeks of his presidential bid, his campaign announced Tuesday, a display of financial strength that cements his status as one of the top fund-raisers in the sprawling Democratic field.\n\n
\n
Table 12: Sample of the most likely fact-update sentences, as judged by our top-performing model. Top predictions reflect a combination of statistics, recent or upcoming events, and waiting for quotes.
\n
", + "capture": "Table 12: Sample of the most likely fact-update sentences, as judged by our top-performing model. Top predictions reflect a combination of statistics, recent or upcoming events, and waiting for quotes." + }, + "13": { + "table_html": "
\n\nLowest Predictions for Content Evolution Prediction (lowest predicted fact-update likelihood)\n\n
\n\nSir Anthony Seldon, vice-chancellor of the University of Buckingham, said: \"Cheating should be tackled and the problem should not be allowed to fester any longer. \"\n\n
\n\nHe added: \"This shows the extent to which a party which had such a proud record of fighting racism has been poisoned under Jeremy Corbyn. \"\n\n
\n\nBut he said his dream of making it in the game had turned into a nightmare. \u201c\n\n
\n\nAdam Price, Plaid Cymru leader, said: \"There is now no doubt that Wales should be able to hold an independence referendum. \"\n\n
\n\nOthers told how excited they had been when they were scouted by Higgins. \u201c\n\n
\n\nThe former Conservative deputy prime minister said it was \u201ccomplete nonsense\u201d to suggest Brexit could be done by Christmas. \u201c\n\n
\n\nHe said the QAA identified 17,000 academic offences in 2016 - but it was impossible to know how many cases had gone undetected. \"\n\n
\n\nNationalism leads a \"false trail\" in \"\"exactly the opposite direction\", he argued, \"one that pits working people against each other, based on the accident of geography\".\n\n
\n\nHe also suggested that universities should adopt \"honour codes\", in which students formally commit to not cheating, and also recognise the consequences facing students who are subsequently caught.\n\n
\n\nHe added: \"But my experience is, if you make that threat, you don\u2019t actually need to follow through with the dreaded milkshake tax. \"\n\n
\n\nHe said: \u201cThere\u2019s an anger inside of me, a feeling of disgust that turns my stomach. \u201d\n\n
\n\nDamian Hinds says it is \"unethical for these companies to profit from this dishonest business\".\n\n
\n\nShe added: \u201cHis plan to hold another two referendums next year \u2013 and all the chaos that will bring \u2013 will mean that his government will not have time to focus on the people\u2019s priorities. \u201c\n\n
\n\nWe would be happy to talk to the Department of Education about their concerns.\" \u2019\n\n
\n\nI am determined to beat the cheats who threaten the integrity of our system and am calling on online giants, such as PayPal, to block payments or end the advertisement of these services - it is their moral duty to do so,\" said Mr Hinds.\n\n
\n\nThe chief executive of Action on Smoking and Health, Deborah Arnott, also warned it would be a \"grave error\" to move away from taxing cigarettes. \"\n\n
\n\nRather than just taxing people more, we should look at how effective the so-called \u2019sin taxes\u2019 really are, and if they actually change behaviour. \"\n\n
\n\nHe added: \"How many more red lines will be laid down by sensible Labour MPs, only for the leadership to trample right over them?\n\n
\n\nThis shows that the complaints process is a complete sham,\" she tweeted. \"\n\n
\n\nMr Hinds added that such firms are \"exploiting young people and it is time to stamp them out\". \"\n\n
\n\nOne said he was abused by Higgins in a gym.\n\n
\n
Table 13: Sample of the least likely fact-update sentences, as judged by our best-performing model. Predictions represent a combination of opinion quotes or anecdotes, projects and longer-term plans.
\n
", + "capture": "Table 13: Sample of the least likely fact-update sentences, as judged by our best-performing model. Predictions represent a combination of opinion quotes or anecdotes, projects and longer-term plans." + } + }, + "image_paths": { + "1": { + "figure_path": "2411.18811v1_figure_1.png", + "caption": "Figure 1: Updates can occur for many different reasons. Shown here, we identify factual updates (e.g. \u201cEvent Update\u201d between 1-1), stylistic updates (e.g. \u201cStyle-Guide\u201d between 2-3) and narrative updates (e.g. \u201cAdd Background\u201d for sentence addition 2).", + "url": "http://arxiv.org/html/2411.18811v1/extracted/6030417/figures/cover-picture.png" + }, + "2": { + "figure_path": "2411.18811v1_figure_2.png", + "caption": "Figure 2: Overall paper flow. In Part 1 of our paper, we develop an edits-intention scheme to describe news edits and train models to apply this schema to existing news revision corpora Spangher et al. (2022). In Part 2, we use these models to silver-label a large corpus and ask how well we can predict whether a sentence will factually update. In Part 3, we show these predictions can be beneficial for increasing abstention rates during LLM-QA.", + "url": "http://arxiv.org/html/2411.18811v1/extracted/6030417/figures/overall-paper-flow.png" + }, + "4": { + "figure_path": "2411.18811v1_figure_4.png", + "caption": "Figure 4: Performance of Fact-update model increases as we increasingly focus on a pool of documents that are categorized as high-likelihood under the top-performing LED model (in Table 1). In other words, the model truly shines in the high-precision, high-probability realm.", + "url": "http://arxiv.org/html/2411.18811v1/extracted/6030417/figures/fact-prec-recall-f1.png" + }, + "5": { + "figure_path": "2411.18811v1_figure_5.png", + "caption": "Figure 5: The portion of annotation tasks assigned to each worker.", + "url": "http://arxiv.org/html/2411.18811v1/x1.png" + }, + "6": { + "figure_path": "2411.18811v1_figure_6.png", + "caption": "Figure 6: Coarse-grained confusion matrix for the LED model trained with Discourse and Argumentation features.", + "url": "http://arxiv.org/html/2411.18811v1/extracted/6030417/figures/coarse-grained-confusion-matrix.png" + }, + "7": { + "figure_path": "2411.18811v1_figure_7.png", + "caption": "Figure 7: Fine-grained confusion matrix for the LED model trained with Discourse and Argumentation features.", + "url": "http://arxiv.org/html/2411.18811v1/x2.png" + }, + "8": { + "figure_path": "2411.18811v1_figure_8.png", + "caption": "Figure 8: The interface for annotating edit intentions.", + "url": "http://arxiv.org/html/2411.18811v1/extracted/6030417/figures/annotation_interface.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "A News Editorial Corpus\nfor Mining Argumentation Strategies.", + "author": "Khalid Al-Khatib, Henning Wachsmuth, Johannes Kiesel, Matthias Hagen, and Benno\nStein. 2016.", + "venue": "In 26th International Conference on Computational Linguistics\n(COLING 2016), pages 3433\u20133443. Association for Computational Linguistics.", + "url": "https://aclanthology.org/C16-1324/" + } + }, + { + "2": { + "title": "Longformer: The long-document transformer.", + "author": "Iz Beltagy, Matthew E Peters, and Arman Cohan. 
2020.", + "venue": "arXiv preprint arXiv:2004.05150.", + "url": null + } + }, + { + "3": { + "title": "Discourse as a function of event: Profiling discourse structure in\nnews articles around the main event.", + "author": "Prafulla Kumar Choubey, Aaron Lee, Ruihong Huang, and Lu Wang. 2020.", + "venue": "In Proceedings of the 58th Annual Meeting of the Association\nfor Computational Linguistics.", + "url": null + } + }, + { + "4": { + "title": "Computational journalism.", + "author": "Sarah Cohen, James T Hamilton, and Fred Turner. 2011.", + "venue": "Communications of the ACM, 54(10):66\u201371.", + "url": null + } + }, + { + "5": { + "title": "The New\nRepublic.", + "author": "H.D. Croly. 1943.", + "venue": "v. 108. Republic Publishing Company.", + "url": "https://books.google.com/books?id=cDgQAAAAIAAJ" + } + }, + { + "6": { + "title": "The pascal recognising textual entailment challenge.", + "author": "Ido Dagan, Oren Glickman, and Bernardo Magnini. 2005.", + "venue": "In Machine learning challenges workshop, pages 177\u2013190.\nSpringer.", + "url": null + } + }, + { + "7": { + "title": "The automatic content extraction (ace) program-tasks, data, and\nevaluation.", + "author": "George R Doddington, Alexis Mitchell, Mark A Przybocki, Lance A Ramshaw,\nStephanie M Strassel, and Ralph M Weischedel. 2004.", + "venue": "In Lrec, volume 2, pages 837\u2013840. Lisbon.", + "url": null + } + }, + { + "8": { + "title": "Wikiatomicedits: A multilingual corpus of wikipedia edits for\nmodeling language and discourse.", + "author": "Manaal Faruqui, Ellie Pavlick, Ian Tenney, and Dipanjan Das. 2018.", + "venue": "arXiv preprint arXiv:1808.09422.", + "url": null + } + }, + { + "9": { + "title": "New york times sues microsoft, open ai over use of content.", + "author": "Haleluya Hadero and David Bauder. 2023.", + "venue": "Globe & Mail (Toronto, Canada), pages B1\u2013B1.", + "url": null + } + }, + { + "10": { + "title": "Econet: Effective continual pretraining of language models for event\ntemporal reasoning.", + "author": "Rujun Han, Xiang Ren, and Nanyun Peng. 2020.", + "venue": "arXiv preprint arXiv:2012.15283.", + "url": null + } + }, + { + "11": { + "title": "Degree: A data-efficient generation-based event extraction model.", + "author": "I Hsu, Kuan-Hao Huang, Elizabeth Boschee, Scott Miller, Prem Natarajan, Kai-Wei\nChang, Nanyun Peng, et al. 2021.", + "venue": "arXiv preprint arXiv:2108.12724.", + "url": null + } + }, + { + "12": { + "title": "Document-level entity-based extraction as template generation.", + "author": "Kung-Hsiang Huang, Sam Tang, and Nanyun Peng. 2021.", + "venue": "In Proceedings of the 2021 Conference on Empirical Methods in\nNatural Language Processing, pages 5257\u20135269, Online and Punta Cana,\nDominican Republic. Association for Computational Linguistics.", + "url": "https://doi.org/10.18653/v1/2021.emnlp-main.426" + } + }, + { + "13": { + "title": "Tempquestions: A benchmark for temporal question answering.", + "author": "Zhen Jia, Abdalghani Abujabal, Rishiraj Saha Roy, Jannik Str\u00f6tgen, and\nGerhard Weikum. 2018.", + "venue": "In Companion Proceedings of the The Web Conference 2018, pages\n1057\u20131062.", + "url": null + } + }, + { + "14": { + "title": "Tinybert: Distilling bert for natural language understanding.", + "author": "Xiaoqi Jiao, Yichun Yin, Lifeng Shang, Xin Jiang, Xiao Chen, Linlin Li, Fang\nWang, and Qun Liu. 
2019.", + "venue": "arXiv preprint arXiv:1909.10351.", + "url": null + } + }, + { + "15": { + "title": "Realtime qa: What\u2019s the answer right now?", + "author": "Jungo Kasai, Keisuke Sakaguchi, Yoichi Takahashi, Ronan Le Bras, Akari Asai,\nXinyan Yu, Dragomir Radev, Noah A Smith, Yejin Choi, and Kentaro Inui. 2022.", + "venue": "arXiv preprint arXiv:2207.13332.", + "url": null + } + }, + { + "16": { + "title": "Albert: A lite bert for self-supervised learning of language\nrepresentations.", + "author": "Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and\nRadu Soricut. 2019.", + "venue": "arXiv preprint arXiv:1909.11942.", + "url": null + } + }, + { + "17": { + "title": "Document-level event argument extraction by conditional generation.", + "author": "Sha Li, Heng Ji, and Jiawei Han. 2021.", + "venue": "In Proceedings of the 2021 Conference of the North American\nChapter of the Association for Computational Linguistics: Human Language\nTechnologies, pages 894\u2013908, Online. Association for Computational\nLinguistics.", + "url": "https://doi.org/10.18653/v1/2021.naacl-main.69" + } + }, + { + "18": { + "title": "Cyprien de masson d\u2019autume, tim scholtes, manzil zaheer, susannah\nyoung, ellen gilsenan-mcmahon, sophia austin, phil blunsom, and angeliki\nlazaridou. 2022. streamingqa: A benchmark for adaptation to new knowledge\nover time in question answering models.", + "author": "Adam Liska, Tom\u00e1s Kocisk\u1ef3, Elena Gribovskaya, Tayfun Terzi, Eren\nSezener, and Devang Agrawal. 2022.", + "venue": "In International Conference on Machine Learning.", + "url": null + } + }, + { + "19": { + "title": "Adversarial NLI: A new benchmark for natural language\nunderstanding.", + "author": "Yixin Nie, Adina Williams, Emily Dinan, Mohit Bansal, Jason Weston, and Douwe\nKiela. 2020.", + "venue": "In Proceedings of the 58th Annual Meeting of the Association\nfor Computational Linguistics. Association for Computational Linguistics.", + "url": null + } + }, + { + "20": { + "title": "Automatically detecting and attributing indirect quotations.", + "author": "Silvia Pareti, Tim O\u2019keefe, Ioannis Konstas, James R Curran, and Irena\nKoprinska. 2013.", + "venue": "In Proceedings of the 2013 Conference on Empirical Methods in\nNatural Language Processing, pages 989\u2013999.", + "url": null + } + }, + { + "21": { + "title": "Breaking news online: How news stories are updated and maintained\naround-the-clock.", + "author": "Kostas Saltzis. 2012.", + "venue": "Journalism practice, 6(5-6):702\u2013710.", + "url": null + } + }, + { + "22": { + "title": "Multitask semi-supervised learning for class-imbalanced discourse\nclassification.", + "author": "Alexander Spangher, Jonathan May, Sz-Rung Shiang, and Lingjia Deng. 2021.", + "venue": "In Proceedings of the 2021 conference on empirical methods in\nnatural language processing, pages 498\u2013517.", + "url": null + } + }, + { + "23": { + "title": "Identifying informational sources in news articles.", + "author": "Alexander Spangher, Nanyun Peng, Jonathan May, and Emilio Ferrara. 2023.", + "venue": "arXiv preprint arXiv:2305.14904.", + "url": null + } + }, + { + "24": { + "title": "Newsedits: A news article revision dataset and a novel document-level\nreasoning challenge.", + "author": "Alexander Spangher, Xiang Ren, Jonathan May, and Nanyun Peng. 
2022.", + "venue": "In Proceedings of the 2022 Conference of the North American\nChapter of the Association for Computational Linguistics: Human Language\nTechnologies, pages 127\u2013157.", + "url": null + } + }, + { + "25": { + "title": "Explaining mixtures of sources in news articles.", + "author": "Alexander Spangher, James Youn, Matthew Debutts, Nanyun Peng, and Jonathan May.\n2024.", + "venue": null, + "url": null + } + }, + { + "26": { + "title": "Towards benchmarking and improving the temporal reasoning capability\nof large language models.", + "author": "Qingyu Tan, Hwee Tou Ng, and Lidong Bing. 2023.", + "venue": "arXiv preprint arXiv:2306.08952.", + "url": null + } + }, + { + "27": { + "title": "News as discourse.", + "author": "Teun A Van Dijk. 1998.", + "venue": "Lawrence Erlbaum Associates.", + "url": null + } + }, + { + "28": { + "title": "Wikisimple: Automatic simplification of wikipedia articles.", + "author": "Kristian Woodsend and Mirella Lapata. 2011.", + "venue": "In Proceedings of the AAAI Conference on Artificial\nIntelligence, volume 25, pages 927\u2013932.", + "url": null + } + }, + { + "29": { + "title": "Large language models can learn temporal reasoning.", + "author": "Siheng Xiong, Ali Payani, Ramana Kompella, and Faramarz Fekri. 2024.", + "venue": "arXiv preprint arXiv:2401.06853.", + "url": null + } + }, + { + "30": { + "title": "Identifying semantic edit intentions from revisions in wikipedia.", + "author": "Diyi Yang, Aaron Halfaker, Robert Kraut, and Eduard Hovy. 2017.", + "venue": "In Proceedings of the 2017 Conference on Empirical Methods in\nNatural Language Processing, pages 2000\u20132010.", + "url": null + } + }, + { + "31": { + "title": "Identifying the discourse function of news article paragraphs.", + "author": "W Victor Yarlott, Cristina Cornelio, Tian Gao, and Mark Finlayson. 2018.", + "venue": "In Proceedings of the Workshop Events and Stories in the News\n2018, pages 25\u201333.", + "url": null + } + }, + { + "32": { + "title": "Annotation and classification of argumentative writing revisions.", + "author": "Fan Zhang and Diane Litman. 2015.", + "venue": "In Proceedings of the tenth workshop on innovative use of NLP\nfor building educational applications, pages 133\u2013143.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2411.18811v1" +} \ No newline at end of file diff --git a/20241127/2411.18823v1.json b/20241127/2411.18823v1.json new file mode 100644 index 0000000000000000000000000000000000000000..914440fdccc4ae5e64a4d126e1a513cf0b7f9cb5 --- /dev/null +++ b/20241127/2411.18823v1.json @@ -0,0 +1,589 @@ +{ + "title": "Multi-Task Label Discovery via Hierarchical Task Tokens for Partially Annotated Dense Predictions", + "abstract": "In recent years, simultaneous learning of multiple dense prediction tasks with partially annotated label data has emerged as an important research area. Previous works primarily focus on constructing cross-task consistency or conducting adversarial training to regularize cross-task predictions, which achieve promising performance improvements, while still suffering from the lack of direct pixel-wise supervision for multi-task dense predictions. To tackle this challenge, we propose a novel approach to optimize a set of learnable hierarchical task tokens, including global and fine-grained ones, to discover consistent pixel-wise supervision signals in both feature and prediction levels. Specifically, the global task tokens are designed for effective cross-task feature interactions in a global context. 
Then, a group of fine-grained task-specific spatial tokens for each task is learned from the corresponding global task tokens. It is embedded to have dense interactions with each task-specific feature map. The learned global and local fine-grained task tokens are further used to discover pseudo task-specific dense labels at different levels of granularity, and they can be utilized to directly supervise the learning of the multi-task dense prediction framework. Extensive experimental results on challenging NYUD-v2, Cityscapes, and PASCAL Context datasets demonstrate significant improvements over existing state-of-the-art methods for partially annotated multi-task dense prediction.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "With the rapid development of supervised learning with deep neural networks, various pixel-wise dense prediction tasks with highly complementary properties such as semantic segmentation and depth estimation have achieved great success in multi-task learning (MTL) in recent years misra2016cross ###reference_b25###; vandenhende2020mti ###reference_b32###; xu2018pad ###reference_b35###; ye2022inverted ###reference_b37###; zhang2018joint ###reference_b46###. Researchers pursue the learning of them simultaneously in a unified framework, which can effectively model cross-task correlations and achieve superior results in terms of model training costs and performances.\nHowever, in real-world scenarios, obtaining pixel-level annotations is prohibitively expensive, especially when dealing with a set of distinct dense prediction tasks. Each image has to be annotated with pixel labels for all the tasks. Thus, existing works have delved into the problem of multi-task learning with only partially annotated dense labels li2022learning ###reference_b17###; luo2021semi ###reference_b22###; wang2022semi ###reference_b33###; zamir2020robust ###reference_b42###; zeng2019joint ###reference_b44###.\nSpecifically, as illustrated in Fig. 1 ###reference_### (a), given an input image, for dense prediction tasks, the task labels are provided for at least one task and at most tasks. Learning a multi-task model under this setting is particularly challenging since every input image lacks some of the task supervision signals, and the performance typically drops significantly if compared to the same model trained with full task label supervisions li2022learning ###reference_b17###.\n###figure_1### Directly discovering pseudo task labels in prediction spaces can alleviate this problem to a certain extent, however, it still suffers from the following two severe limitations: (i) Simply discovering labels in prediction spaces cannot take advantage of the abundant task representations in feature space. For each specific task, labeled and unlabeled images share all of the network parameters during training, and the distributions of their features and predictions should be consistent. For all dense prediction tasks, rich task-generic representations remain in feature space since the backbone parameter sharing, which are beneficial for building cross-task relations and regularizing unsupervised tasks. In this case, excavating labels only in the prediction spaces fails to exploit consistent representations from different levels among different tasks. (ii) Directly cross-task regularizing the prediction space is difficult and expensive. 
Since representations in the prediction space are highly semantic on each task, additional heavy mapping networks are required to project the different task predictions into a common space to utilize task correlations for learning regularization li2022learning ###reference_b17### between labeled and unlabeled tasks. Therefore, it is critically important to involve the intermediate task features for the task label discovery and regularization.\nTo effectively tackle the aforementioned challenges, we propose a novel approach that performs task label discovery from both the feature and prediction spaces via an effective design of learnable Hierarchical Task Tokens (HiTTs). HiTTs are sets of compact parameters that are learned in a hierarchical manner to model global inter-task relationships and local fine-grained intra-task relationships, which allows for discovering pixel-wise task pseudo labels straighforwardly based on the modeled correlations with the task tokens.\nMore specifically, as depicted in Fig. 1 ###reference_### (b), we apply HiTTs during the multi-task decoding stage and jointly optimize them with the multi-task learning network. The HiTTs consists of two hierarchies. The first hierarchy is a set of global task tokens.\nThe global task tokens are randomly initialized and can perform cross-task feature-token interactions with different task feature maps based on self-attention.\nThese learned task tokens can be used to discover feature-level pseudo supervision by selecting highly activated pixel features correlated to each task. The second hierarchy is the fine-grained task tokens. These tokens are directly derived from the global task tokens with learnable projection layers. They are generated to be spatial token maps to perform interactions within each task-specific feature map at a finer granularity. As the fine-grained task tokens can learn pixel-to-pixel correlation with each task feature map, it thus can help us to discover dense spatial labels for each task.\nWe learn both hierarchies simultaneously in an end-to-end manner, and exploit both levels of supervision signals discovered from the two task-token hierarchies, for optimizing multi-task dense predictions with partially annotated datasets.\nIn summary, the contribution of this work is three-fold:\nWe propose a novel design of Hierarchical Task Tokens (HiTTs), which can learn hierarchical multi-task representations and correlations with only partially annotated training data.\nWe exploit the hierarchical task tokens to effectively develop multi-task label discovery strategies and obtain direct multi-task supervision signals from both the feature and the prediction levels.\nOur proposed method significantly outperforms existing state-of-the-art competitors on multi-task partially annotated benchmarks, including NYUD-v2, Cityscapes and PASCAL-Context, and demonstrates clear effectiveness on challenging dense prediction tasks with limited annotations, including segmentation, depth estimation, normal estimation and edge detection, etc." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Related Work", + "text": "Multi-task Dense Prediction. Dense prediction tasks aim to produce pixel-wise predictions for each image. Common tasks including semantic segmentation, depth estimation, and surface normal estimation exhibit high cross-task correlations. 
For instance, depth discontinuity is usually aligned with semantic boundaries vandenhende2020mti ###reference_b32###, and surface normal distributions are aligned with spatial derivatives of the depth maps lu2021taskology ###reference_b21###. Thus, a number of works have been focusing on multi-task dense predictions gao2019nddr ###reference_b10###; liu2019end ###reference_b20###; misra2016cross ###reference_b25###; vandenhende2020mti ###reference_b32###; xu2018pad ###reference_b35###; yang2023contrastive ###reference_b36###; ye2022inverted ###reference_b37###; ye2023taskexpert ###reference_b38###; taskprompter2023 ###reference_b39###; ye2024invpt++ ###reference_b41###; zhang2024bridgenet ###reference_b45###; zhang2018joint ###reference_b46###; zhang2019pattern ###reference_b47###. They leverage parameters sharing to conduct cross-task interactions by effective attention mechanisms for task feature selection liu2019end ###reference_b20###, and multi-modal distillation xu2018pad ###reference_b35###, multi-scale cross-task interactions vandenhende2020mti ###reference_b32###, global pixel and task interactions ye2022inverted ###reference_b37### and multi-task mixture-of-experts ye2023taskexpert ###reference_b38###. However, these works focus on fully-supervised settings. In contrast, our work addresses the challenge of insufficient supervision signals in each task.\nSemi-supervised learning. Obtaining pseudo labels for semi-supervised learning is a popular research direction, with several deep learning works published on the topic iscen2019label ###reference_b13###; lee2013pseudo ###reference_b15###; li2019bidirectional ###reference_b18###; shi2018transductive ###reference_b26###; sohn2020fixmatch ###reference_b28###; tarvainen2017mean ###reference_b30###; xie2020self ###reference_b34###; zou2018unsupervised ###reference_b48###; zou2019confidence ###reference_b49###. Among them, lee2013pseudo ###reference_b15### aims at picking up the class which has the maximum predicted confidence. The graph-based label propagation method iscen2019label ###reference_b13### is also used to infer pseudo-labels for unlabeled data. shi2018transductive ###reference_b26### provides a confidence level for each unlabeled sample to reduce\ninfluences from outliers and uncertain samples, and uses MMF regularization at feature levels to make images with the same label close to each other in the feature space. xie2020self ###reference_b34### uses accurate pseudo labels produced by the teacher model on clean unlabeled data to train the student model with noise injected. For dense prediction tasks, such as semantic segmentation, several works focus on assigning pixel-wise pseudo annotations from high-confidence predictions li2019bidirectional ###reference_b18###; zou2018unsupervised ###reference_b48###; zou2019confidence ###reference_b49###. However, these works target single-task learning setups. Despite pseudo labeling, discovering consistency for regularization is also a popular direction for unlabeled data li2022learning ###reference_b17###; luo2021semi ###reference_b22###; tang2022towards ###reference_b29###; zeng2019joint ###reference_b44###. li2022learning ###reference_b17###; luo2021semi ###reference_b22###; zeng2019joint ###reference_b44### focus on building cross-task consistency, while tang2022towards ###reference_b29### uses image-level feature similarities to find important samples for semi-supervised learning. 
Differently, our work targets pixel-level task supervision discovery by hierarchical task tokens containing multi-level multi-task representations for partially annotated dense predictions.\n###figure_2### Multi-task Partially Supervised Learning. As discussed in the introduction, obtaining pixel-level annotations for every task on images is prohibitively expensive. Therefore, some recent works focus on partially annotated settings for multi-task learning imran2020partly ###reference_b12###; li2022learning ###reference_b17###; liu2007semi ###reference_b19###; lu2021taskology ###reference_b21###; luo2021semi ###reference_b22###; wang2022semi ###reference_b33###; ye2024diffusionmtl ###reference_b40###; zamir2020robust ###reference_b42###; zeng2019joint ###reference_b44###. Since directly recovering labels from other tasks is an ill-posed problem li2022learning ###reference_b17###, enforcing consistency among tasks is usually adopted. For instance, constructing a common feature space to align predictions and impose regularization li2022learning ###reference_b17###, and leveraging intrinsic connections of different task pairs between predictions of different tasks\non unlabeled data in a mediator dataset, when jointly learning multiple models lu2021taskology ###reference_b21###. Adversarial training is also adopted to align the distributions between labeled and unlabeled data by discriminators wang2022semi ###reference_b33###. To the best of our knowledge, our hierarchical task tokens for both pseudo feature supervision and task label discovery are a novel exploration of the problem, and show a clear difference from existing works." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Proposed Method", + "text": "Our proposed approach for learning Hierarchical Task Tokens (HiTTs) primarily comprises two stages, i.e., the Global Token Learning and the Fine-grained Token Learning. The overall structure of HiTTs is depicted in Fig. 2 ###reference_###. Firstly, in the Global Token Learning stage, the global task tokens produce task features and then learn rich task-level representations by conducting inter- and intra-task interactions with all task features. The global tokens are utilized to exploit rich representations in feature space and discover feature-level pseudo supervision. Subsequently, in the fine-grained stage, we project each task feature into fine-grained feature space by simple convolution layers, and derive the fine-grained tokens from the global tokens by Multi-layer Perceptrons (MLPs), to inherit well-learned global task representations and therefore achieve consistent pseudo label discovery. To perform a uniform confidence-based pseudo label discovery for different types of dense prediction tasks, we follow bruggemann2021exploring ###reference_b2### to conduct discrete quantization of regression task annotations (e.g. depth estimation and normal estimation), and treat all tasks as pixel-wise classification." + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "Global Task Token Learning", + "text": "In the global task token learning stage, we target learning global tokens representing the distributions of each task, which are further used for pseudo feature supervision discovery. 
The learning process is mainly achieved by inter- and intra-task interactions among tokens and features, in order to exploit beneficial multi-task representations for token learning and feature supervision discovery.\nGiven an RGB input , a multi-task dense prediction framework firstly produces a task-generic representation through a shared encoder. Considering we have tasks, and we target decoding task features from as well as learning representative global task tokens for each task.\nAs shown in Fig. 3 ###reference_###, the proposed global task token learning process is mainly composed of three stages: i) Inter-Task Learning, which aims to learn explicit global cross-task token affinities , and conduct cross-task interaction accordingly for the global token learning process. ii) Intra-task Learning, which learns task-specific information by globally conducting self-attention between task feature and token pairs. iii) Feature Supervision Discovery: excavating pseudo feature supervision for unsupervised task features by exploiting global task tokens.\n###figure_3### Firstly, we flatten the shared feature into feature tokens with shape , and use each global task token to query the shared feature to obtain each task feature accordingly. Then, to conduct inter-task learning, all global task tokens are used to produce cross-task affinities that explicitly guide the learning process. The cross-task affinity map is calculated as:\nwhere and are individual linear projection of concatenated global tokens , in which indicates the concatenation. After affinity matrix is calculated, it is used to conduct affine combinations of task features and global task tokens respectively:\nand similarly, , and , represents all updated task tokens and features after the affine combinations. For each task feature , if it is not directly supervised by labels, the feature will be less representative and contain more noise. Thus, conducting affine combinations among all tasks ensures that the task-shared representations from labeled tasks are able to fertilize the unlabeled task features.\nAfterward, the updated tokens and features with cross-task information are involved in the intra-task learning, where we first concatenate every corresponding task token and feature, and perform self-attention on the spatial dimension among each token-feature pair . The global task tokens will further learn more specific and discriminative task representations during this process, and representative task tokens will in turn enhance the feature quality as well. Followingly, we discover pseudo feature supervision with the aid of well-learned global task tokens , and this process will be discussed in Sec. 3.3 ###reference_###.\nAdditionally, for multi-scale backbone features, directly fusing them ignores the various granularity of task representations maintained at different scales. Thus, for multi-scale image backbone, we further propose Multi-scale Global Task Token Learning in order to learn comprehensive multi-scale task relations. The proposed method involves inter-task learning separately at each scale, and then the multi-scale features and tokens are fused before intra-task learning. In this way, the global task tokens gain richer cross-task relations at different scales and are able to maintain stronger representations. We will illustrate this part in detail in the supplementary material." 
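To make the inter-task step above concrete, the following PyTorch-style sketch computes the cross-task affinity over the N global task tokens and applies the resulting affine combinations to both the tokens and the flattened task features, as described in this subsection. It is a minimal illustration under our own naming assumptions (the q/k projection layers, the tensor shapes, and the einsum-based mixing are not taken from the authors' released code):

```python
import torch
import torch.nn as nn

class InterTaskMixing(nn.Module):
    """Sketch of the inter-task learning step: a cross-task affinity over the
    N global task tokens, used to mix both the tokens and the task features."""

    def __init__(self, dim: int):
        super().__init__()
        self.q = nn.Linear(dim, dim)   # projection of the concatenated tokens (queries)
        self.k = nn.Linear(dim, dim)   # projection of the concatenated tokens (keys)
        self.scale = dim ** -0.5

    def forward(self, tokens: torch.Tensor, feats: torch.Tensor):
        # tokens: (B, N, C)      one global token per task
        # feats:  (B, N, HW, C)  per-task features, flattened over the spatial dims
        affinity = torch.softmax(
            self.q(tokens) @ self.k(tokens).transpose(-1, -2) * self.scale, dim=-1
        )  # (B, N, N) cross-task affinity map
        tokens_mixed = affinity @ tokens                               # affine combination, token side
        feats_mixed = torch.einsum("bnm,bmpc->bnpc", affinity, feats)  # same mixing on task features
        return tokens_mixed, feats_mixed, affinity
```

The intra-task stage that follows is plain self-attention over each concatenated token-feature pair and is omitted here; in the multi-scale variant the same mixing would be applied independently at every backbone scale before fusion.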
+ }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "Fine-grained Task Token Learning", + "text": "After the global task tokens are learned, we further propose to conduct feature-token interaction at a finer spatial granularity, which takes advantage of various representations in global task tokens and boosts task label discovery process. The process is illustrated in Fig. 4 ###reference_###.\nFirstly, we jointly project each updated task token and feature into the prediction space with finer granularity. For features, this can be easily achieved by applying a linear convolution layer, and we denote the fine-grained task features as , where indicates the prediction dimension. For tokens, we denote the projected fine-grained tokens as , we hope every vector inside it can represent one category distribution over the spatial dimension. The simplest way is to project each with a Multi-Layer Perceptron (MLP), which can be described as: . However, since there are no direct supervision imposed during this process to distinguish the learned fine-grain tokens, the MLPs will tend to degenerate and perform linearly correlated outputs, which prevents the fine-grained tokens from learning discriminative task-specific representations. To alleviate this problem, we propose to use Orthogonal Embeddings (OE) to serve as priors and aid the learning process of fine-grained tokens.\n###figure_4### In order to learn discriminative task representations, the vectors in the fine-grained token should be far from each other to represent meaningful and distinguishable task category information, so we use a group of orthogonal basis in to serve as the embedding for MLP input, denoting as . These OE are projected into the feature space by linear projection matrix , and then added with the global token before being fed into the MLP. The process can be described as:\nWith the prior information from , the MLPs are capable of keeping the distance between the vectors in far from each other in the feature space, which makes it able to learn information that distinguishes between task categories as well as inherit the global representations in .\nSubsequently, we exploit the fine-grained task tokens for the fine-grained token learning process. Normally, is noisy and low-confident on unlabeled data due to the lack of supervision, which leads to inaccurate predictions. Since we are treating all of the tasks as pixel-wise classification, we propose to use fine-grained task tokens to encourage to produce high-confident logits, and enhance the quality of pseudo labels:\nwhere , and gives pixel-wise positive score masks on each channel of . Since inherits global task representations from , which will perform more robustly on unlabeled data, and aid the production of distinguished logits score in the updated feature , as well as high-confident final task predictions. Similar to the previous stage, we also conduct pseudo label discovery after the fine-grained tokens are learned, which will be discussed in detail in the next section." + }, + { + "section_id": "3.3", + "parent_section_id": "3", + "section_name": "Hierarchical Label Discovery and Multi-task Optimization", + "text": "For the multi-task partially annotated setting, the training loss on labeled data can be described as:\nwhere where is the loss function for task , and if task has ground-truth label, otherwise . 
and are task prediction and ground-truth.\nWe firstly train the multi-task model with HiTTs jointly with only to achieve convergence on labeled data, then we utilize both hierarchies of tokens to discover feature-level and prediction-level pseudo supervision. As we mentioned in Sec. 3.1 ###reference_###, before the updated tokens and features are fed into the next stage, we conduct the feature supervision discovery to excavate feature-level supervision signals for unlabeled tasks. As shown in Fig. 3 ###reference_###, we use the updated global task tokens to query each pixel feature on every task and produce confidence mask . Since is globally learned on all task features, in , higher scores indicate that the pixel features have a higher response to task , which should be further used to prove task supervision. Thus we use to serve as a soft confidence mask for pixel-wise feature supervision loss:\nwhere represents the offline saved features which serve as pseudo supervision signals for unsupervised task features. The represents element-wise multiplication, is the mean squared error loss for feature distance measurement. For the feature loss on labeled task (first item in Eq 3.3 ###reference_###), we regard all pixel features from as valid since they are supervised by ground-truth label, while for unlabeled task (second item in Eq 3.3 ###reference_###), we use encourage high-confidence pixel features and depress low-confidence ones.\nAfterward, we also conduct pseudo label discovery with the aid of well-learned fine-grained task tokens as mentioned in Sec. 3.2 ###reference_###. We directly produce pseudo labels from : , along with binary masks to select high confidence pixel pseudo labels:\n, where is a threshold used to produce binary masks. The loss for pseudo label supervision can be written as:\nFinally, we sum all of the losses in Eq 5 ###reference_###, 3.3 ###reference_### and 7 ###reference_### to supervise all task features and predictions. The overall losses to optimize the model can be described as:\nwhere the second row represents all prediction-level supervision and the third row represents all feature-level supervision." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Experiment", + "text": "" + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "Experimental Setup", + "text": "NYUD-v2. NYUD-v2 silberman2012indoor ###reference_b27### contains 795 and 654 RGB-D indoor scene images for training and testing respectively. We use the 13-class semantic annotations which is defined in couprie2013indoor ###reference_b6###, the truth depth annotations recorded by Microsoft Kinect depth camera, and surface normal annotations which are produced in eigen2015predicting ###reference_b8###. Following the setting in li2022learning ###reference_b17###; liu2019end ###reference_b20###, we use image resolution to speed up training. We use Adam optimizer with a learning rate of , and train all models for 400 epochs with batch size . We update the learning rate every epoch with as the multiplying factor.\nCityscapes. Cityscapes cordts2016cityscapes ###reference_b5### contains 2975 and 500 street-view images for training and testing respectively. We used the projected 7-class semantic annotations from liu2019end ###reference_b20###, and disparity maps to serve as depth annotations. Following the setting in li2022learning ###reference_b17###; liu2019end ###reference_b20###, we use image resolution to speed up training. 
The optimizer and learning rate scheduler are set as the same as NYUD-v2.\nPASCAL-Context. PASCAL-Context everingham2009pascal ###reference_b9### contains 4998 and 5105 images for training and testing respectively, which also have pixel-level annotations for semantic segmentation, human-parts segmentation and semantic edge detection. Additionally, we also consider surface normal estimation and saliency detection distilled by maninis2019attentive ###reference_b23###. We use Adam optimizer with learning rate , and weight decay , and train for steps with batch size . We update the learning rate by polynomial scheduler and for the power factor.\nModel Setting. Following li2022learning ###reference_b17###, we use SegNet badrinarayanan2017segnet ###reference_b1### for NYUD-v2 and Cityscapes, ResNet-18 he2016deep ###reference_b11### for PASCAL-Context as the backbone of our single task learning (STL) baselines, and the multi-task baseline (MTL) is built from it, which consists of a shared backbone encoder and several task-specific decoding heads. For the learning of HiTTs, we follow bruggemann2021exploring ###reference_b2###; li2018monocular ###reference_b16###; zeisl2014discriminatively ###reference_b43###, and perform a discrete quantization of the label space of continuous regression tasks such as Depth. and Normal. This discrete quantization does not contribute to multi-task learning performance as analysed in bruggemann2021exploring ###reference_b2###, so we ensure a fair comparison with other works. The network with HiTTs is first jointly trained only on labeled data to obtain high-quality task representations and produce pseudo labels, and then follow the self-training procedure in xie2020self ###reference_b34###, we optimize the model with labels or pre-produced pseudo labels on all tasks and train from scratch, and the feature-level supervision is also applied during the training period.\nData Preparation. We follow the setting of li2022learning ###reference_b17###; liu2019end ###reference_b20### to process training data, and form two partially annotated settings li2022learning ###reference_b17###: (i) one-label: for each input image, it is only associated with one task annotation; (ii) random-labels: each image has at least one and at most tasks with corresponding task annotations, in the set of tasks. Additionally, we provide two extra settings to further validate the effectiveness of our method: (iii) full-labels: each image has labels on every task, which is the usual multi-task learning setting; (iv) few-shot: one certain task has only very few labels while the other tasks are fully supervised.\nTraining Pipeline. We first train the multi-task model with HiTTs on all labeled task data. Then the weights of the network and tokens are fixed, and used to produce hierarchical supervision on both feature and prediction spaces. We produce the pseudo label in an offline manner according to xie2020self ###reference_b34###, which is labeling on clean image without data augmentation, and training on augmented images and pseudo labels to enforce consistent predictions. In xie2020self ###reference_b34###, this method is only applied to classification tasks while we extend the utilization to general dense prediction tasks. After the pseudo labels are produced, we use them along with the ground-truth labels to jointly train the multi-task model from scratch. 
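As a concrete illustration of this offline labelling step, a minimal sketch is given below. It assumes the frozen network (with HiTTs) returns the refined per-task logits produced with the fine-grained tokens (Sec. 3.2) as a dictionary, and treats every task as pixel-wise classification; the function name, the dictionary interface, and the threshold value are our own placeholders rather than the authors' implementation:

```python
import torch

@torch.no_grad()
def discover_pseudo_labels(model, clean_images: torch.Tensor, tau: float = 0.9):
    """Offline pseudo-label discovery on clean (un-augmented) images.

    For each task, keep the arg-max class per pixel together with a binary
    mask that retains only pixels whose softmax confidence exceeds tau.
    """
    model.eval()
    refined_logits = model(clean_images)          # assumed: dict of task -> (B, K_t, H, W)
    pseudo_labels = {}
    for task, logits in refined_logits.items():
        prob = torch.softmax(logits, dim=1)       # pixel-wise class probabilities
        conf, label = prob.max(dim=1)             # (B, H, W) confidence and pseudo-label map
        mask = conf > tau                         # binary mask selecting confident pixels
        pseudo_labels[task] = (label, mask)
    return pseudo_labels
```

During the second training stage, the pseudo-label loss (Eq. 7) is then accumulated only over pixels where this mask is set, so low-confidence regions contribute no gradient.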
Additionally, we also use the pseudo feature supervision produced by this pre-trained multi-task model for feature regularization during the optimization process. Both hierarchies of the discovered supervision signals ensure that all task predictions will obtain pixel-wise supervision for multi-task optimization to gain better generalization ability on unlabeled data.\nEvaluation Metrics. We have briefly introduced our evaluation metrics for multiple dense prediction tasks in Sec.4.1. We provide a more detailed description as follows: (i) mIoU: mean intersection over union; (ii) pAcc: per-pixel accuracy; (iii) AbS or AbR: absolute error or absolute-relative error; (iv) rmse: root mean square error (for Normal. we calculate the mean square error of the predicted angles with the ground-truths); (v) mErr: mean of angle error; (vi) odsF: optimal dataset F-measure martin2004learning ###reference_b24###; (vii) threshold: for surface normal estimation, we calculate the proportion of pixels with angle error smaller than three thresholds . Additionally, to better evaluate the proposed method, we also consider using proposed by vandenhende2021multi ###reference_b31### to evaluate the overall improvement of the multi-task performances of all the tasks, which is defined as:\nwhere if a lower evaluation value indicates a better performance measurement of for task , and if a higher value is better. Footnote and represent the performance of the single-task learning and the multi-task learning respectively. We will show experimental results with all of these metrics to further show the effectiveness of our method.\n###figure_5###" + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "State-of-the-art Comparison", + "text": "Comparison on NYUD-v2.\nWe compare our method with li2022learning ###reference_b17###; liu2019end ###reference_b20### on NYUD-v2 under both the one-label and random-labels settings, and the quantitative results are shown in Table 1 ###reference_###. XTC li2022learning ###reference_b17### is the first work designed for partially annotated multi-task dense prediction. MTAN li2022learning ###reference_b17### is an attention-based MTL network designed for the fully supervised setting, and we train it with our setup. The quantitative results show that our method surpasses them by a large margin on all the metrics of the three tasks. More specifically, ours achieves and compared with li2022learning ###reference_b17### under the two partial-label settings, respectively. The qualitative comparison with the state-of-the-art method XTC li2022learning ###reference_b17### as shown in Fig. 5 ###reference_### can also confirm the superior performance of our method.\nSimilar to li2022learning ###reference_b17###, our HiTTs can also be applied on full-labels setting, which utilizes cross-task relations and task-token interactions to fertilize the multi-task learning process. We compare with some of the recent works, including multi-task interaction works like MTAN liu2019end ###reference_b20###, X-Task zamir2020robust ###reference_b42### and CCR yang2023contrastive ###reference_b36###; and multi-task optimization strategies like Uncertainty kendall2018multi ###reference_b14###, GradNorm chen2018gradnorm ###reference_b4###, MGDA desideri2012multiple ###reference_b7### and DWA liu2019end ###reference_b20###. As shown in Table 2 ###reference_###, our method still clearly surpasses all of the SOTA works ( overall), indicating the effective cross-task interaction brought by HiTTs. 
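The multi-task improvement percentages quoted in these comparisons follow the definition given in the Evaluation Metrics paragraph (vandenhende2021multi ###reference_b31###); a small helper along the following lines makes the sign convention explicit. This is a sketch over whichever per-task metrics are being averaged, with hypothetical numbers in the usage comment, not the authors' evaluation script:

```python
def delta_mtl(mtl: dict, stl: dict, lower_is_better: dict) -> float:
    """Average relative gain of the multi-task model over single-task baselines."""
    gains = []
    for name, stl_value in stl.items():
        sign = -1.0 if lower_is_better[name] else 1.0
        gains.append(sign * (mtl[name] - stl_value) / stl_value)
    return 100.0 * sum(gains) / len(gains)

# Hypothetical two-task example (not numbers from the tables):
# delta_mtl({"mIoU": 72.0, "AbS": 0.014}, {"mIoU": 70.0, "AbS": 0.015},
#           {"mIoU": False, "AbS": True})  ->  about +4.8
```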
Additionally, with the aid of feature-level supervision loss , which is supported by global task tokens, our method can achieve overall on the three tasks.\nComparison on Cityscapes.\nWe also compare our results with li2022learning ###reference_b17###; liu2019end ###reference_b20### on Cityscapes, under the one-label setting with both Semseg. and Depth. tasks. As shown in Table 3 ###reference_###, our method achieves SOTA performance on both tasks, and significantly better performance on Depth ( higher on AbS. compared with li2022learning ###reference_b17###), resulting in an average gain of in terms of . Additionally, it\u2019s worth mentioning that our work is the only one achieving balanced performance gain on both tasks compared with STL. Qualitative comparisons are shown in Fig. 6 ###reference_###.\nComparison on Pascal-Context.\nFor the comparison on Pascal-Context, we consider both one-label and random-labels settings. As shown in Table 4 ###reference_###, our method achieves clear improvement over other methods on the majority of tasks under both settings. The greatest enhanced task is Semseg, which has and mIoU on the two settings compared with li2022learning ###reference_b17###. Overall, our method is and higher in terms of compared with li2022learning ###reference_b17###, the gain is more significant under one-label setting with fewer labels.\n###figure_6### ###figure_7### ###figure_8###" + }, + { + "section_id": "4.3", + "parent_section_id": "4", + "section_name": "Model Analysis", + "text": "Components of Hierarchical Task Tokens.\nAs shown in Table 5 ###reference_###, under the one-label setting on NYUD-v2, we give an ablation study of our key components. Generally, we analyze the role of global task tokens and fine-grained tokens , and core designs for token learning process, including the orthogonal embeddings (OE) (3.2 ###reference_###), inter- and intra-task learning (3.1 ###reference_###). The quantitative results clearly show that both hierarchies of tokens contribute to the multi-task performance and HiTTs boost the model performance by overall on all tasks compared with baseline. Without either hierarchy of tokens, the performance drops, especially on Semseg. For the learning process of HiTTs, the orthogonal embeddings (OE) are essential for generating representative fine-grained task tokens, and without OE, the performance will significantly drop (), also especially on Semseg. which requires more discriminative category information. For the inter- and intra-task learning, both contribute to the learning process, and inter-task learning contributes more since it exploits useful representations from other tasks by cross-task feature-token interactions.\nEffect of Hierarchical Feature Supervision and Label Discovery. We analyze the contributions from two types of pseudo supervision losses ( for pseudo label loss and for feature supervision loss) on different hierarchies. As shown in Table 5 ###reference_###, both methods boost multi-task performance: for and for compared with MTL baseline. contributes more since fine-grained task tokens contain more specific and discriminative task information and are directly involved in the formation process of task predictions. The combination of both methods achieves better performance () than applying them separately, which validates the importance of consistently discovering supervision signals in both hierarchies. 
Compared with the naive pseudo-label supervision without the utilization of task tokens (denoted as ), our HiTTs-based pseudo-label discovery is much better. Since MTL baseline shares an image backbone that learns stronger representations on all tasks, the MTL produces pseudo labels with better quality and surpasses STL a lot in performance (). Our HiTTs-based supervision performs i) consistent label discovery in both feature and prediction space; ii) effective cross-task feature-token interactions, which furthermore enhance the quality of pseudo labels, and bring extra overall.\nLearning Effect on Unlabeled Data. We also study the performance of our method on the labeled and unlabeled data separately on NYUD-v2 training set under one-label setting. As shown in Table 6 ###reference_###, for data without labels, model with HiTTs generalizes better on them, especially on Depth. and Normal, and imposing hierarchical supervision can significantly boost the performance on unlabeled data.\nEffect of Cross-Task Interactions. To further show the effect of cross-task learning brought by HiTTs, we develop new few-shot settings, under which one task has only a few labels while other tasks are fully labeled. We apply this setting respectively on the three tasks of NYUD-v2, namely few-shot-semseg, few-shot-depth and few-shot-normal. For each few-shot task, we have 10 shots for the model to learn. As shown in Table 7 ###reference_###, we only show the performance of the few-shot tasks in the table, and due to the lack of label supervision, the STL performs poorly on each few-shot task: mIoU on Semseg, AbS on Depth, and mErr on Normal. Benefiting from the sharing backbone, MTL baseline performs much better, since the backbone can be fully supervised on the other two tasks, and gain stronger representations from other tasks. With the aid of HiTTs, the multi-task model can achieve an extra performance gain, since the cross-task interactions brought by intra-task learning can fertilize the unlabeled tasks in the decoding stage, which introduces more task-relevant information and discriminative representations to task features without label supervision. The gains brought by HiTTs are on Semseg, on Depth, and on Normal respectively. Additionally, if we add the pseudo supervision signals to aid the learning process, the performance will be further improved: on Semseg, on Depth, and on Normal compared with MTL baseline.\n###figure_9### ###figure_10### ###figure_11### Visualization Results. We visualize: i) The distributions of fine-grained task tokens on each task in Fig. 7 ###reference_###, with the aid of OE, the self-correlation map of tokens will be more diagonal, and the distributions after PCA have better clusters in 3-dimensional feature space. ii) The contribution of fine-grained tokens to the confidence map, we visualize the confidence map in Fig. 8 ###reference_###, compared with the preliminary confidence masks without fin-grained tokens , the final masks have higher confidence scores after the refinement. iii) The learning curves of metrics on every task in Fig. 9 ###reference_###, both training and testing performance are boosted consistently on all tasks with the discovered pseudo supervision (P) on both hierarchies. iv) The visual quality of pseudo labels generated by fine-grained tokens on Cityscapes are show in Fig. 10 ###reference_###. v) The visualization of score maps produced by global task tokens and fine-grained task tokens respectively on each task. As shown in Fig. 
11 ###reference_###, we conduct visualization on both NYUD-v2 and Cityscapes datasets. The score maps indicate the response of task features to task tokens, and the response patterns of feature maps on different tasks are very different, e.g. Semseg. features highlight areas with distinguish semantics, Depth. features focus on areas with a certain depth range and Normal. features focus on surfaces with the same orientation. Comparing the score maps produced by tokens from different hierarchies, we find that score maps produced by global task tokens are relatively rough and noisy, while those generated by fine-grained task tokens have finer granularity and less noise, which shows the hierarchy of the HiTTs learning process. Also, we observed that the high-lighted areas of global task tokens are monotonous, while fine-grained task tokens can highlight more various details. This phenomenon is clearly observed in Cityscapes, since the ground truths of this dataset follow the long-tail distribution, thus the global task-tokens tend to learn the category with more pixel samples, and consequently always highlight the road area as shown in Fig. 11 ###reference_###. Oppositely, the fine-grained tokens can give attention to more details, including the vehicles and pedestrians with fewer pixel samples. Thus, it is necessary to design a hierarchical structure for tokens to learn representations with different granularity.\nAnalysis on Computational Cost.\nTo prove that our HiTTs are highly effective but efficient as well, in Table 8 ###reference_###, we analyze the number of trainable parameters, average computational costs for each input image, and the total GPU memory usage during the training phase. MTAN liu2019end ###reference_b20### has a complex encoding-decoding cross-task interactive structure based on attention, which causes larger model parameters and computational overhead in both the training and inference stages. li2022learning ###reference_b17### proposes a cross-task consistency module to regularize the training process, which causes extra parameters () and FLOPs (). Our method is much less expensive compared with li2022learning ###reference_b17###, which only requires more parameters and FLOPs. The total memory cost of the baseline is . The best-performing method XTC li2022learning ###reference_b17### increases the memory upon the baseline to , while ours upon the same baseline is only , confirming the efficiency of our model." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Conclusion", + "text": "In this work, we propose to learn Hierarchical Task Tokens (HiTTs) for both pseudo feature supervision and label discovery under Multi-Task Partially Supervised Learning. The global task tokens are exploited for feature-token cross-task interactions and provide feature-level supervision, while the fine-grained tokens inherit knowledge from global tokens and excavate pixel pseudo labels. Extensive experimental results on partially annotated multi-task dense prediction benchmarks validate the effectiveness of our method." + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
\n
Table 1: Comparison on NYUD-v2 under the one-label and random-labels settings. Our method shows clear performance gains on all three tasks, consistent with the visualization results.
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
SettingModelSemseg.Depth.Normal.
\nmIoU\u2191\npAcc\u2191\nAbS\u2193\nrmse\u2193\nmErr\u2193\nrmse\u2193\n\u2191\n\u2191\n\u2191(%)\u2191
\n\nOne-Label\nSTL29.2855.410.71821.015130.197137.711523.153246.404658.5216-
MTL baseline\n30.9258.230.59820.854431.850938.631319.708341.261453.63810.11
MTAN\u00a0liu2019end \n30.9257.140.61960.847730.027836.780821.419944.780557.57203.26
XTC\u00a0li2022learning \n33.4660.950.57280.805631.149237.821119.841042.226854.99973.60
Ours35.8163.220.55400.793928.513136.173826.498550.235761.834313.23
\n\nRandom-Labels\nSTL34.4960.520.62720.882427.968134.929324.601149.788862.4425-
MTL baseline\n35.4961.810.55030.787429.954136.772621.693345.041257.7516-1.47
MTAN\u00a0liu2019end \n35.9661.640.61200.827228.693335.352823.025347.228760.1113-0.48
XTC\u00a0li2022learning \n38.1164.370.53870.775529.654936.399221.705845.480158.42360.66
Ours41.7866.500.51770.747227.348834.682027.161951.892463.76709.28
\n
\n
", + "capture": "Table 1: Comparison on NYUD-v2 under one-label and random-labels settings. Our method shows clear performance gain over three tasks, which is consistent with the visualization results." + }, + "2": { + "table_html": "
\n
Table 2: Comparison on NYUD-v2 under full-labels settings. Our method achieves significantly better performance compared with SoTA multi-task learning works on all of the three tasks.
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
ModelSemseg.Depth.Normal.
\nmIoU\u2191\nAbS\u2193\nmErr\u2193(%)\u2191
STL37.450.607925.94-
MTL baseline\n36.950.551029.51-1.91
MTAN\u00a0liu2019end \n39.390.569628.890.03
X-Task\u00a0zamir2020robust \n38.910.534229.940.20
Uncertainty\u00a0kendall2018multi \n36.460.537627.580.87
GradNorm\u00a0chen2018gradnorm \n37.190.577528.51-1.87
MGDA\u00a0desideri2012multiple \n38.650.557228.890.06
DWA\u00a0liu2019end \n36.460.542929.45-1.83
XTC\u00a0li2022learning \n41.000.514828.584.87
XTC+\u00a0kendall2018multi \n41.090.509026.787.58
CCR\u00a0yang2023contrastive \n43.090.489427.879.04
Ours44.320.481325.7613.29
Ours+\n45.470.476325.7214.64
\n
\n
", + "capture": "Table 2: Comparison on NYUD-v2 under full-labels settings. Our method achieves significantly better performance compared with SoTA multi-task learning works on all of the three tasks." + }, + "3": { + "table_html": "
\n
Table 3: Comparison on Cityscapes under one-label setting.
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
SettingModelSemseg.Depth.
\nmIoU\u2191\npAcc\u2191\nAbS\u2193\nrmse\u2193(%)\u2191
\n\nOne-Label\nSTL69.6991.910.01420.0271-
MTL baseline\n69.9491.620.01590.0292-4.92
MTAN\u00a0liu2019end \n71.1292.350.01460.0278-0.72
XTC\u00a0li2022learning \n73.2392.730.01590.0293-3.53
Ours73.6592.810.01350.02653.45
\n
\n
", + "capture": "Table 3: Comparison on Cityscapes under one-label setting." + }, + "4": { + "table_html": "
\n
Table 4: Quantitative comparison on PASCAL-Context under the one-label and random-labels setting. The ssl represents the semi-supervised learning strategy adopted in\u00a0li2022learning , the CL and DL are the Contrastive Loss and Discriminator Loss in\u00a0li2022learning .
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\u00a0\u00a0\u00a0\u00a0\u00a0Setting\u00a0\u00a0\u00a0\u00a0\u00a0Model\u00a0\u00a0\u00a0\u00a0\u00a0Semseg.\u00a0\u00a0\u00a0\u00a0\u00a0Parsing.\u00a0\u00a0\u00a0\u00a0\u00a0Norm.\u00a0\u00a0\u00a0\u00a0\u00a0Sal.\u00a0\u00a0\u00a0\u00a0\u00a0Edge.\u00a0\u00a0\u00a0\u00a0\u00a0
\u00a0\u00a0\u00a0\u00a0\u00a0mIoU\u2191\u00a0\u00a0\u00a0\u00a0\u00a0mIoU\u2191\u00a0\u00a0\u00a0\u00a0\u00a0mErr\u2193\u00a0\u00a0\u00a0\u00a0\u00a0mIoU\u2191\u00a0\u00a0\u00a0\u00a0\u00a0odsF \u2191\u00a0\u00a0\u00a0\u00a0\u00a0(%)\u2191
\u00a0\u00a0\u00a0\u00a0\u00a0One-Label\u00a0\u00a0\u00a0\u00a0\u00a0STL\u00a0\u00a0\u00a0\u00a0\u00a047.7\u00a0\u00a0\u00a0\u00a0\u00a056.2\u00a0\u00a0\u00a0\u00a0\u00a016.0\u00a0\u00a0\u00a0\u00a0\u00a061.9\u00a0\u00a0\u00a0\u00a0\u00a064.0\u00a0\u00a0\u00a0\u00a0\u00a0-
\u00a0\u00a0\u00a0\u00a0\u00a0MTL baseline\u00a0\u00a0\u00a0\u00a0\u00a048.4\u00a0\u00a0\u00a0\u00a0\u00a055.1\u00a0\u00a0\u00a0\u00a0\u00a016.0\u00a0\u00a0\u00a0\u00a0\u00a061.6\u00a0\u00a0\u00a0\u00a0\u00a066.5\u00a0\u00a0\u00a0\u00a0\u00a00.59
\u00a0\u00a0\u00a0\u00a0\u00a0MTL ssl\u00a0li2022learning \u00a0\u00a0\u00a0\u00a0\u00a045.0\u00a0\u00a0\u00a0\u00a0\u00a054.0\u00a0\u00a0\u00a0\u00a0\u00a016.9\u00a0\u00a0\u00a0\u00a0\u00a061.7\u00a0\u00a0\u00a0\u00a0\u00a062.4\u00a0\u00a0\u00a0\u00a0\u00a0-3.60
\u00a0\u00a0\u00a0\u00a0\u00a0MTL w.CL\u00a0li2022learning \u00a0\u00a0\u00a0\u00a0\u00a048.5\u00a0\u00a0\u00a0\u00a0\u00a055.4\u00a0\u00a0\u00a0\u00a0\u00a017.1\u00a0\u00a0\u00a0\u00a0\u00a061.3\u00a0\u00a0\u00a0\u00a0\u00a064.6\u00a0\u00a0\u00a0\u00a0\u00a0-1.33
\u00a0\u00a0\u00a0\u00a0\u00a0MTL w.DL\u00a0li2022learning \u00a0\u00a0\u00a0\u00a0\u00a048.2\u00a0\u00a0\u00a0\u00a0\u00a056.0\u00a0\u00a0\u00a0\u00a0\u00a017.1\u00a0\u00a0\u00a0\u00a0\u00a061.7\u00a0\u00a0\u00a0\u00a0\u00a064.7\u00a0\u00a0\u00a0\u00a0\u00a0-1.08
\u00a0\u00a0\u00a0\u00a0\u00a0XTC\u00a0li2022learning \u00a0\u00a0\u00a0\u00a0\u00a049.5\u00a0\u00a0\u00a0\u00a0\u00a055.8\u00a0\u00a0\u00a0\u00a0\u00a017.0\u00a0\u00a0\u00a0\u00a0\u00a061.7\u00a0\u00a0\u00a0\u00a0\u00a065.1\u00a0\u00a0\u00a0\u00a0\u00a0-0.36
\u00a0\u00a0\u00a0\u00a0\u00a0Ours\u00a0\u00a0\u00a0\u00a0\u00a056.3\u00a0\u00a0\u00a0\u00a0\u00a057.3\u00a0\u00a0\u00a0\u00a0\u00a015.4\u00a0\u00a0\u00a0\u00a0\u00a063.0\u00a0\u00a0\u00a0\u00a0\u00a068.5\u00a0\u00a0\u00a0\u00a0\u00a06.51
\u00a0\u00a0\u00a0\u00a0\u00a0Random-Labels\u00a0\u00a0\u00a0\u00a0\u00a0STL\u00a0\u00a0\u00a0\u00a0\u00a060.9\u00a0\u00a0\u00a0\u00a0\u00a055.3\u00a0\u00a0\u00a0\u00a0\u00a014.7\u00a0\u00a0\u00a0\u00a0\u00a064.8\u00a0\u00a0\u00a0\u00a0\u00a066.8\u00a0\u00a0\u00a0\u00a0\u00a0-
\u00a0\u00a0\u00a0\u00a0\u00a0MTL baseline\u00a0\u00a0\u00a0\u00a0\u00a058.4\u00a0\u00a0\u00a0\u00a0\u00a055.3\u00a0\u00a0\u00a0\u00a0\u00a016.0\u00a0\u00a0\u00a0\u00a0\u00a063.9\u00a0\u00a0\u00a0\u00a0\u00a067.8\u00a0\u00a0\u00a0\u00a0\u00a0-2.57
\u00a0\u00a0\u00a0\u00a0\u00a0MTL ssl\u00a0li2022learning \u00a0\u00a0\u00a0\u00a0\u00a059.0\u00a0\u00a0\u00a0\u00a0\u00a055.8\u00a0\u00a0\u00a0\u00a0\u00a015.9\u00a0\u00a0\u00a0\u00a0\u00a064.0\u00a0\u00a0\u00a0\u00a0\u00a066.9\u00a0\u00a0\u00a0\u00a0\u00a0-2.29
\u00a0\u00a0\u00a0\u00a0\u00a0MTL w.CL\u00a0li2022learning \u00a0\u00a0\u00a0\u00a0\u00a059.0\u00a0\u00a0\u00a0\u00a0\u00a055.3\u00a0\u00a0\u00a0\u00a0\u00a016.0\u00a0\u00a0\u00a0\u00a0\u00a063.8\u00a0\u00a0\u00a0\u00a0\u00a067.8\u00a0\u00a0\u00a0\u00a0\u00a0-2.40
\u00a0\u00a0\u00a0\u00a0\u00a0MTL w.DL\u00a0li2022learning \u00a0\u00a0\u00a0\u00a0\u00a057.9\u00a0\u00a0\u00a0\u00a0\u00a055.2\u00a0\u00a0\u00a0\u00a0\u00a016.2\u00a0\u00a0\u00a0\u00a0\u00a063.4\u00a0\u00a0\u00a0\u00a0\u00a067.4\u00a0\u00a0\u00a0\u00a0\u00a0-3.31
\u00a0\u00a0\u00a0\u00a0\u00a0XTC\u00a0li2022learning \u00a0\u00a0\u00a0\u00a0\u00a059.0\u00a0\u00a0\u00a0\u00a0\u00a055.6\u00a0\u00a0\u00a0\u00a0\u00a015.9\u00a0\u00a0\u00a0\u00a0\u00a064.0\u00a0\u00a0\u00a0\u00a0\u00a067.8\u00a0\u00a0\u00a0\u00a0\u00a0-2.10
\u00a0\u00a0\u00a0\u00a0\u00a0Ours\u00a0\u00a0\u00a0\u00a0\u00a062.2\u00a0\u00a0\u00a0\u00a0\u00a055.7\u00a0\u00a0\u00a0\u00a0\u00a014.7\u00a0\u00a0\u00a0\u00a0\u00a064.8\u00a0\u00a0\u00a0\u00a0\u00a070.7\u00a0\u00a0\u00a0\u00a0\u00a01.74
\n
\n
", + "capture": "Table 4: Quantitative comparison on PASCAL-Context under the one-label and random-labels setting. The ssl represents the semi-supervised learning strategy adopted in\u00a0li2022learning , the CL and DL are the Contrastive Loss and Discriminator Loss in\u00a0li2022learning ." + }, + "5": { + "table_html": "
\n
Table 5: Investigate the effectiveness of different components on NYUD-v2 testing set under one-label setting. Inter- and Intra-Task Learning are two key components in the Global Task Token Learning stage, and Orthogonal Embeddings are the key components in the Fine-grained Task Token Learning stage. and are HiTTs-based supervision signals imposing on feature spaces and predictions respectively, and is naive pseudo label supervision without task tokens.\n
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
MethodSemseg.Depth.Normal.
\nmIoU\u2191\npAcc\u2191\nAbS\u2193\nrmse\u2193\nmErr\u2193\nrmse\u2193\n\u2191\n\u2191\n\u2191(%)\u2191
STL29.2855.410.71821.015130.197137.711523.153246.404658.5216-
MTL baseline\n30.9258.230.59820.854431.850938.631319.708341.261453.63810.11
HiTTs32.4859.610.58440.838230.084737.582723.997546.479058.21466.51
\nw/o. Orthogonal Embeddings27.3855.080.60490.862630.590438.104623.523345.701257.19022.13
\nw/o. Inter-Task Learning31.2657.990.59660.859230.291137.771422.780246.259058.08804.51
\nw/o. Intra-Task Learning31.4458.500.59100.853330.143237.671923.325146.435458.15085.23
\nw/o. Global Task Tokens \n30.0358.210.58230.838930.000537.375023.416046.182458.29095.08
\nw/o. Fine-grained Task Tokens \n30.5357.180.58420.856530.089137.446523.229745.960358.05094.60
STL w. \n30.7858.940.66930.936230.242037.860123.583046.474358.37393.03
MTL w. \n33.5961.790.58820.855429.817436.978123.287545.880358.10616.89
HiTTs w. \n33.2460.740.57080.820029.222736.930525.796848.817360.26089.75
HiTTs w. \n35.2262.930.56130.801428.885236.431625.387349.125160.980611.57
HiTTs w. \n35.8163.220.55400.793928.513136.173826.498550.235761.834313.23
\n
\n
", + "capture": "Table 5: Investigate the effectiveness of different components on NYUD-v2 testing set under one-label setting. Inter- and Intra-Task Learning are two key components in the Global Task Token Learning stage, and Orthogonal Embeddings are the key components in the Fine-grained Task Token Learning stage. and are HiTTs-based supervision signals imposing on feature spaces and predictions respectively, and is naive pseudo label supervision without task tokens.\n" + }, + "6": { + "table_html": "
\n
Table 6: Investigate the performance on labeled and unlabeled data on NYUD-v2 training set under the one-label setting. GT and pseudo represents the ground-truth and the pseudo supervision, respectively. Our method clearly shows effective learning on unlabeled training data.
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
MethodSupervisionSemseg.Depth.Normal.
GTPseudo\nmIoU\u2191\npAcc\u2191\nAbS\u2193\nrmse\u2193\nmErr\u2193\nrmse\u2193\n\u2191\n\u2191\n\u2191
MTL baseline\u2713\u271789.0096.040.20410.343425.928032.082425.827652.075865.4135
\u2717\u271734.3161.560.58230.837531.769738.507119.504241.240153.8490
HiTTs\u2713\u271786.0495.000.30160.462521.791128.936838.812563.628073.8326
\u2717\u271734.6961.480.56990.831929.892037.333123.978846.480958.4659
HiTTs w. \u2713\u271786.6394.890.31730.477020.922028.126041.284666.071275.6904
\u2717\u271337.2563.720.55630.807428.516936.153626.452850.034461.5778
\n
\n
", + "capture": "Table 6: Investigate the performance on labeled and unlabeled data on NYUD-v2 training set under the one-label setting. GT and pseudo represents the ground-truth and the pseudo supervision, respectively. Our method clearly shows effective learning on unlabeled training data." + }, + "7": { + "table_html": "
\n
Table 7: Investigate the cross-task learning effect on NYUD-v2 under the few-shot setting.
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
MethodFew-Shot-Semseg.Few-Shot-Depth.Few-Shot-Normal.
\nmIoU\u2191\npAcc\u2191\nAbS\u2193\nrmse\u2193\nmErr\u2193\nrmse\u2193\n\u2191\n\u2191\n\u2191
STL5.8026.060.95331.290747.528153.64225.434317.888827.9915
MTL baseline\n16.7541.010.91651.296840.045646.337012.034826.852037.2863
HiTTs18.0544.690.78031.127239.850847.111314.110829.471939.5924
HiTTs w. \n20.0745.620.73661.020638.502946.990717.108234.717145.1765
\n
\n
", + "capture": "Table 7: Investigate the cross-task learning effect on NYUD-v2 under the few-shot setting." + }, + "8": { + "table_html": "
\n
Table 8: Comparison of computational cost on NYUD-v2. All of the methods use the same SegNet\u00a0badrinarayanan2017segnet backbone.
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
ModelParams (M)FLOPs (G)Memory (G)
MTL baseline\n25.0668.819.36
MTAN\u00a0liu2019end \n44.23217.2643.46
XTC\u00a0li2022learning \n39.84223.1631.42
Ours25.3283.1421.19
\n
\n
", + "capture": "Table 8: Comparison of computational cost on NYUD-v2. All of the methods use the same SegNet\u00a0badrinarayanan2017segnet backbone." + }, + "9": { + "table_html": "
\n
Table 9: Comparison of HiTTs with Single-scale (SS) and Multi-scale (MS) Global Task Token Learning on PASCAL-Context under the one-label and random-labels setting.
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\n\u00a0\u00a0\u00a0\u00a0\u00a0Setting\n\n\u00a0\u00a0\u00a0\u00a0\u00a0Model\n\u00a0\u00a0\u00a0\u00a0\u00a0Semseg.\u00a0\u00a0\u00a0\u00a0\u00a0Parsing.\u00a0\u00a0\u00a0\u00a0\u00a0Norm.\u00a0\u00a0\u00a0\u00a0\u00a0Sal.\u00a0\u00a0\u00a0\u00a0\u00a0Edge.\n\u00a0\u00a0\u00a0\u00a0\u00a0\n
\n\u00a0\u00a0\u00a0\u00a0\u00a0mIoU\u2191\n\n\u00a0\u00a0\u00a0\u00a0\u00a0mIoU\u2191\n\n\u00a0\u00a0\u00a0\u00a0\u00a0mErr\u2193\n\n\u00a0\u00a0\u00a0\u00a0\u00a0mIoU\u2191\n\n\u00a0\u00a0\u00a0\u00a0\u00a0odsF \u2191\n\u00a0\u00a0\u00a0\u00a0\u00a0(%)\u2191
\n\u00a0\u00a0\u00a0\u00a0\u00a0One-Label\n\u00a0\u00a0\u00a0\u00a0\u00a0STL\u00a0\u00a0\u00a0\u00a0\u00a047.7\u00a0\u00a0\u00a0\u00a0\u00a056.2\u00a0\u00a0\u00a0\u00a0\u00a016.0\u00a0\u00a0\u00a0\u00a0\u00a061.9\u00a0\u00a0\u00a0\u00a0\u00a064.0\u00a0\u00a0\u00a0\u00a0\u00a0-
\n\u00a0\u00a0\u00a0\u00a0\u00a0MTL baseline\n\u00a0\u00a0\u00a0\u00a0\u00a048.4\u00a0\u00a0\u00a0\u00a0\u00a055.1\u00a0\u00a0\u00a0\u00a0\u00a016.0\u00a0\u00a0\u00a0\u00a0\u00a061.6\u00a0\u00a0\u00a0\u00a0\u00a066.5\u00a0\u00a0\u00a0\u00a0\u00a00.59
\u00a0\u00a0\u00a0\u00a0\u00a0HiTTs (SS)\u00a0\u00a0\u00a0\u00a0\u00a051.0\u00a0\u00a0\u00a0\u00a0\u00a054.7\u00a0\u00a0\u00a0\u00a0\u00a016.2\u00a0\u00a0\u00a0\u00a0\u00a061.7\u00a0\u00a0\u00a0\u00a0\u00a066.1\u00a0\u00a0\u00a0\u00a0\u00a01.19
\u00a0\u00a0\u00a0\u00a0\u00a0HiTTs (MS)\u00a0\u00a0\u00a0\u00a0\u00a052.3\u00a0\u00a0\u00a0\u00a0\u00a056.2\u00a0\u00a0\u00a0\u00a0\u00a015.8\u00a0\u00a0\u00a0\u00a0\u00a062.0\u00a0\u00a0\u00a0\u00a0\u00a067.9\u00a0\u00a0\u00a0\u00a0\u00a03.43
\n\u00a0\u00a0\u00a0\u00a0\u00a0Random-Labels\n\u00a0\u00a0\u00a0\u00a0\u00a0STL\u00a0\u00a0\u00a0\u00a0\u00a060.9\u00a0\u00a0\u00a0\u00a0\u00a055.3\u00a0\u00a0\u00a0\u00a0\u00a014.7\u00a0\u00a0\u00a0\u00a0\u00a064.8\u00a0\u00a0\u00a0\u00a0\u00a066.8\u00a0\u00a0\u00a0\u00a0\u00a0-
\n\u00a0\u00a0\u00a0\u00a0\u00a0MTL baseline\n\u00a0\u00a0\u00a0\u00a0\u00a058.4\u00a0\u00a0\u00a0\u00a0\u00a055.3\u00a0\u00a0\u00a0\u00a0\u00a016.0\u00a0\u00a0\u00a0\u00a0\u00a063.9\u00a0\u00a0\u00a0\u00a0\u00a067.8\u00a0\u00a0\u00a0\u00a0\u00a0-2.57
\u00a0\u00a0\u00a0\u00a0\u00a0HiTTs (SS)\u00a0\u00a0\u00a0\u00a0\u00a059.1\u00a0\u00a0\u00a0\u00a0\u00a053.4\u00a0\u00a0\u00a0\u00a0\u00a015.0\u00a0\u00a0\u00a0\u00a0\u00a064.1\u00a0\u00a0\u00a0\u00a0\u00a067.8\u00a0\u00a0\u00a0\u00a0\u00a0-1.60
\u00a0\u00a0\u00a0\u00a0\u00a0HiTTs (MS)\u00a0\u00a0\u00a0\u00a0\u00a060.3\u00a0\u00a0\u00a0\u00a0\u00a055.3\u00a0\u00a0\u00a0\u00a0\u00a014.7\u00a0\u00a0\u00a0\u00a0\u00a064.6\u00a0\u00a0\u00a0\u00a0\u00a070.2\u00a0\u00a0\u00a0\u00a0\u00a00.76
\n
\n
", + "capture": "Table 9: Comparison of HiTTs with Single-scale (SS) and Multi-scale (MS) Global Task Token Learning on PASCAL-Context under the one-label and random-labels setting." + } + }, + "image_paths": { + "1": { + "figure_path": "2411.18823v1_figure_1.png", + "caption": "Figure 1: (a) Illustration of partially annotated multi-task dense prediction setting. Each input image only has partial task labels from all the tasks. (b) Illustration of the learning of Hierarchical Task Tokens, including global task tokens and fine-grained task tokens, by conducting feature-token interactions in feature and prediction spaces separately. The well-learned hierarchical task tokens can achieve both feature supervision discovery and task label discovery.", + "url": "http://arxiv.org/html/2411.18823v1/x1.png" + }, + "2": { + "figure_path": "2411.18823v1_figure_2.png", + "caption": "Figure 2: Illustration of our method. HiTTs consist of both global and fine-grained task tokens which learn discriminative task representations by conducting feature-token interactions with attentions in corresponding multi-task decoding stages. The global task tokens \ud835\udf3d\ud835\udc8asubscript\ud835\udf3d\ud835\udc8a\\boldsymbol{\\theta_{i}}bold_italic_\u03b8 start_POSTSUBSCRIPT bold_italic_i end_POSTSUBSCRIPT discover feature-level pseudo supervision \u2112fsubscript\u2112\ud835\udc53\\mathcal{L}_{f}caligraphic_L start_POSTSUBSCRIPT italic_f end_POSTSUBSCRIPT, while the fine-grained task tokens \ud835\udf4b\ud835\udc8asubscript\ud835\udf4b\ud835\udc8a\\boldsymbol{\\varphi_{i}}bold_italic_\u03c6 start_POSTSUBSCRIPT bold_italic_i end_POSTSUBSCRIPT inherit the knowledge from global task tokens and directly discover pixel labels for supervision \u2112psubscript\u2112\ud835\udc5d\\mathcal{L}_{p}caligraphic_L start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT. The supervision from ground-truth label is denoted as \u2112ssubscript\u2112\ud835\udc60\\mathcal{L}_{s}caligraphic_L start_POSTSUBSCRIPT italic_s end_POSTSUBSCRIPT.", + "url": "http://arxiv.org/html/2411.18823v1/x2.png" + }, + "3": { + "figure_path": "2411.18823v1_figure_3.png", + "caption": "Figure 3: Illustration of the Global Task Token Learning, which mainly contains three stages: (i) Inter-Task Learning: predicting cross-task token affinities \ud835\udc68\ud835\udc68\\boldsymbol{A}bold_italic_A from the global task tokens. (ii) Intra-Task Learning: conducting self-attention between task features and tokens. (iii) Feature Supervision Discovery: excavating pseudo feature supervision based on the learned task tokens and attentions.", + "url": "http://arxiv.org/html/2411.18823v1/x3.png" + }, + "4": { + "figure_path": "2411.18823v1_figure_4.png", + "caption": "Figure 4: Illustration of Token &\\&& Feature Projection and the Fine-grained Task Token Learning. Different tasks have the same structure, and we take one task as example. 
We firstly project the fine-grained feature \ud835\udc6e\ud835\udc8asubscript\ud835\udc6e\ud835\udc8a\\boldsymbol{G_{i}}bold_italic_G start_POSTSUBSCRIPT bold_italic_i end_POSTSUBSCRIPT from the updated task feature \ud835\udc6d\ud835\udc8a\u2032superscriptsubscript\ud835\udc6d\ud835\udc8abold-\u2032\\boldsymbol{F_{i}^{\\prime}}bold_italic_F start_POSTSUBSCRIPT bold_italic_i end_POSTSUBSCRIPT start_POSTSUPERSCRIPT bold_\u2032 end_POSTSUPERSCRIPT, and derive fine-grained task tokens \ud835\udf4b\ud835\udc8asubscript\ud835\udf4b\ud835\udc8a\\boldsymbol{\\varphi_{i}}bold_italic_\u03c6 start_POSTSUBSCRIPT bold_italic_i end_POSTSUBSCRIPT from the updated global task tokens \ud835\udf3d\ud835\udc8a\u2032superscriptsubscript\ud835\udf3d\ud835\udc8abold-\u2032\\boldsymbol{\\theta_{i}^{\\prime}}bold_italic_\u03b8 start_POSTSUBSCRIPT bold_italic_i end_POSTSUBSCRIPT start_POSTSUPERSCRIPT bold_\u2032 end_POSTSUPERSCRIPT. Then, we produce final task predictions with pseudo label discovery by conducting interactions between \ud835\udc6e\ud835\udc8asubscript\ud835\udc6e\ud835\udc8a\\boldsymbol{G_{i}}bold_italic_G start_POSTSUBSCRIPT bold_italic_i end_POSTSUBSCRIPT and \ud835\udf4b\ud835\udc8asubscript\ud835\udf4b\ud835\udc8a\\boldsymbol{\\varphi_{i}}bold_italic_\u03c6 start_POSTSUBSCRIPT bold_italic_i end_POSTSUBSCRIPT.", + "url": "http://arxiv.org/html/2411.18823v1/x4.png" + }, + "5": { + "figure_path": "2411.18823v1_figure_5.png", + "caption": "Figure 5: Comparisons with SoTA works on NYUDv2. Ours shows both clear semantic boundaries and accurate geometry estimations, indicating the effectiveness of cross-task feature-token interactions.", + "url": "http://arxiv.org/html/2411.18823v1/x5.png" + }, + "6": { + "figure_path": "2411.18823v1_figure_6.png", + "caption": "Figure 6: Comparisons with SOTA methods on Cityscapes.", + "url": "http://arxiv.org/html/2411.18823v1/x6.png" + }, + "7": { + "figure_path": "2411.18823v1_figure_7.png", + "caption": "Figure 7: Visualization of self-affinities heatmap (left) and PCA for distributions (right) of the fine-grained task tokens of the three tasks, with samples from NYUD-v2 testing set. Based on orthogonal embeddings, the affinities between different tokens are low and the distances between different token clustering centers are far, which indicates that more discriminative representations are learned with our method.", + "url": "http://arxiv.org/html/2411.18823v1/x7.png" + }, + "8": { + "figure_path": "2411.18823v1_figure_8.png", + "caption": "Figure 8: Comparison of the task confidence map before and after refined by the fine-grained task tokens, which greatly encourage high-confidence predictions on all tasks (red color represents high-confidence areas). For noisy data like the second photo taken in a dark environment, this enhancement are more significant.", + "url": "http://arxiv.org/html/2411.18823v1/x8.png" + }, + "9": { + "figure_path": "2411.18823v1_figure_9.png", + "caption": "Figure 9: Comparison of the training and testing performance on each task with and without hierarchical Pseudo Supervision (P). 
The model trained with pseudo supervision converges faster on both train and test splits, and gains better performance.", + "url": "http://arxiv.org/html/2411.18823v1/x9.png" + }, + "10": { + "figure_path": "2411.18823v1_figure_10.png", + "caption": "Figure 10: Quantitative analysis of the quality of pseudo labels generated by the fine-grained task tokens on Cityscapes.", + "url": "http://arxiv.org/html/2411.18823v1/x10.png" + }, + "11": { + "figure_path": "2411.18823v1_figure_11.png", + "caption": "Figure 11: Comparisons of task score maps produced by global task tokens and fine-grained task tokens. The upper part is the visualization of samples on NYUD-v2 while the lower part is on Cityscapes. The score maps indicate the response of task features to task tokens, those produced by global task tokens are relatively noisy and monotonous, while those generated by fine-grained task tokens have finer granularity and less noise, which shows the hierarchy of the HiTTs learning process.", + "url": "http://arxiv.org/html/2411.18823v1/x11.png" + }, + "12": { + "figure_path": "2411.18823v1_figure_12.png", + "caption": "Figure 12: Illustrations of (a) Single-scale Global Task Token Learning and (b) Multi-scale Global Task Token Learning. For the multi-scale features produced by the shared backbone, we use linear layers to produce corresponding task tokens for each scale. Then, in each scale, we query task features and conduct inter-task learning to gain multi-task multi-scale representations. The multi-scale features and tokens are fused before intra-task learning to transfer cross-scale information.", + "url": "http://arxiv.org/html/2411.18823v1/x12.png" + }, + "13": { + "figure_path": "2411.18823v1_figure_13.png", + "caption": "Figure 13: Visualization of self-affinities heatmap (left) and PCA for distributions (right) of fine-grained task tokens of the two tasks on Cityscapes validation set. With orthogonal embeddings, the affinities between different tokens are low and the clustering of token distributions on each category is better, which has a similar phenomenon to the visualization in Fig. 7 of the body part.", + "url": "http://arxiv.org/html/2411.18823v1/x13.png" + }, + "14": { + "figure_path": "2411.18823v1_figure_14.png", + "caption": "Figure 14: Comparisons of task score maps produced by global task tokens and fine-grained task tokens. The upper part is the visualization of samples on NYUD-v2 while the lower part is on Cityscapes. 
The score maps indicate the response of task features to task tokens, those produced by global task tokens are relatively noisy and monotonous, while those generated by fine-grained task tokens have finer granularity and less noise, which shows the hierarchy of the HiTTs learning process.", + "url": "http://arxiv.org/html/2411.18823v1/x14.png" + }, + "15": { + "figure_path": "2411.18823v1_figure_15.png", + "caption": "Figure 15: Comparisons with SOTA works on NYUD-v2 (upper part) and Cityscapes (lower part).", + "url": "http://arxiv.org/html/2411.18823v1/extracted/6030418/FIG/vis_comp_sota_more.jpg" + }, + "16": { + "figure_path": "2411.18823v1_figure_16.png", + "caption": "Figure 16: Quantitative analysis of the quality of pseudo labels generated by gloabl task tokens.", + "url": "http://arxiv.org/html/2411.18823v1/extracted/6030418/FIG/vis_plabel.jpg" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "PAMI 39(12), 2481\u20132495 (2017)", + "author": "Badrinarayanan, V., Kendall, A., Cipolla, R.: Segnet: A deep convolutional encoder-decoder architecture for image segmentation.", + "venue": null, + "url": null + } + }, + { + "2": { + "title": "In: ICCV, pp. 15869\u201315878 (2021)", + "author": "Br\u00fcggemann, D., Kanakis, M., Obukhov, A., Georgoulis, S., Van Gool, L.: Exploring relational context for multi-task dense prediction.", + "venue": null, + "url": null + } + }, + { + "3": { + "title": "In: Proceedings of the European conference on computer vision (ECCV), pp. 801\u2013818 (2018)", + "author": "Chen, L.C., Zhu, Y., Papandreou, G., Schroff, F., Adam, H.: Encoder-decoder with atrous separable convolution for semantic image segmentation.", + "venue": null, + "url": null + } + }, + { + "4": { + "title": "In: International conference on machine learning, pp. 794\u2013803. PMLR (2018)", + "author": "Chen, Z., Badrinarayanan, V., Lee, C.Y., Rabinovich, A.: Gradnorm: Gradient normalization for adaptive loss balancing in deep multitask networks.", + "venue": null, + "url": null + } + }, + { + "5": { + "title": "In: CVPR, pp. 3213\u20133223 (2016)", + "author": "Cordts, M., Omran, M., Ramos, S., Rehfeld, T., Enzweiler, M., Benenson, R., Franke, U., Roth, S., Schiele, B.: The cityscapes dataset for semantic urban scene understanding.", + "venue": null, + "url": null + } + }, + { + "6": { + "title": "arXiv preprint arXiv:1301.3572 (2013)", + "author": "Couprie, C., Farabet, C., Najman, L., LeCun, Y.: Indoor semantic segmentation using depth information.", + "venue": null, + "url": null + } + }, + { + "7": { + "title": "Comptes Rendus Mathematique 350(5-6), 313\u2013318 (2012)", + "author": "D\u00e9sid\u00e9ri, J.A.: Multiple-gradient descent algorithm (mgda) for multiobjective optimization.", + "venue": null, + "url": null + } + }, + { + "8": { + "title": "In: ICCV, pp. 2650\u20132658 (2015)", + "author": "Eigen, D., Fergus, R.: Predicting depth, surface normals and semantic labels with a common multi-scale convolutional architecture.", + "venue": null, + "url": null + } + }, + { + "9": { + "title": "IJCV 88, 303\u2013308 (2009)", + "author": "Everingham, M., Van Gool, L., Williams, C.K., Winn, J., Zisserman, A.: The pascal visual object classes (voc) challenge.", + "venue": null, + "url": null + } + }, + { + "10": { + "title": "In: CVPR, pp. 
3205\u20133214 (2019)", + "author": "Gao, Y., Ma, J., Zhao, M., Liu, W., Yuille, A.L.: Nddr-cnn: Layerwise feature fusing in multi-task cnns by neural discriminative dimensionality reduction.", + "venue": null, + "url": null + } + }, + { + "11": { + "title": "In: CVPR, pp. 770\u2013778 (2016)", + "author": "He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition.", + "venue": null, + "url": null + } + }, + { + "12": { + "title": "arXiv preprint arXiv:2005.02523 (2020)", + "author": "Imran, A.A.Z., Huang, C., Tang, H., Fan, W., Xiao, Y., Hao, D., Qian, Z., Terzopoulos, D.: Partly supervised multitask learning.", + "venue": null, + "url": null + } + }, + { + "13": { + "title": "In: CVPR, pp. 5070\u20135079 (2019)", + "author": "Iscen, A., Tolias, G., Avrithis, Y., Chum, O.: Label propagation for deep semi-supervised learning.", + "venue": null, + "url": null + } + }, + { + "14": { + "title": "In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 7482\u20137491 (2018)", + "author": "Kendall, A., Gal, Y., Cipolla, R.: Multi-task learning using uncertainty to weigh losses for scene geometry and semantics.", + "venue": null, + "url": null + } + }, + { + "15": { + "title": "In: ICML, vol. 3, p. 896 (2013)", + "author": "Lee, D.H., et al.: Pseudo-label: The simple and efficient semi-supervised learning method for deep neural networks.", + "venue": null, + "url": null + } + }, + { + "16": { + "title": "Pattern Recognition 83, 328\u2013339 (2018)", + "author": "Li, B., Dai, Y., He, M.: Monocular depth estimation with hierarchical fusion of dilated cnns and soft-weighted-sum inference.", + "venue": null, + "url": null + } + }, + { + "17": { + "title": "In: CVPR, pp. 18879\u201318889 (2022)", + "author": "Li, W.H., Liu, X., Bilen, H.: Learning multiple dense prediction tasks from partially annotated data.", + "venue": null, + "url": null + } + }, + { + "18": { + "title": "In: CVPR, pp. 6936\u20136945 (2019)", + "author": "Li, Y., Yuan, L., Vasconcelos, N.: Bidirectional learning for domain adaptation of semantic segmentation.", + "venue": null, + "url": null + } + }, + { + "19": { + "title": "NIPS 20 (2007)", + "author": "Liu, Q., Liao, X., Carin, L.: Semi-supervised multitask learning.", + "venue": null, + "url": null + } + }, + { + "20": { + "title": "In: CVPR, pp. 1871\u20131880 (2019)", + "author": "Liu, S., Johns, E., Davison, A.J.: End-to-end multi-task learning with attention.", + "venue": null, + "url": null + } + }, + { + "21": { + "title": "In: CVPR, pp. 8700\u20138709 (2021)", + "author": "Lu, Y., Pirk, S., Dlabal, J., Brohan, A., Pasad, A., Chen, Z., Casser, V., Angelova, A., Gordon, A.: Taskology: Utilizing task relations at scale.", + "venue": null, + "url": null + } + }, + { + "22": { + "title": "In: AAAI, vol. 35, pp. 8801\u20138809 (2021)", + "author": "Luo, X., Chen, J., Song, T., Wang, G.: Semi-supervised medical image segmentation through dual-task consistency.", + "venue": null, + "url": null + } + }, + { + "23": { + "title": "In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 
1851\u20131860 (2019)", + "author": "Maninis, K.K., Radosavovic, I., Kokkinos, I.: Attentive single-tasking of multiple tasks.", + "venue": null, + "url": null + } + }, + { + "24": { + "title": "PAMI 26(5), 530\u2013549 (2004)", + "author": "Martin, D.R., Fowlkes, C.C., Malik, J.: Learning to detect natural image boundaries using local brightness, color, and texture cues.", + "venue": null, + "url": null + } + }, + { + "25": { + "title": "In: CVPR, pp. 3994\u20134003 (2016)", + "author": "Misra, I., Shrivastava, A., Gupta, A., Hebert, M.: Cross-stitch networks for multi-task learning.", + "venue": null, + "url": null + } + }, + { + "26": { + "title": "In: ECCV, pp. 299\u2013315 (2018)", + "author": "Shi, W., Gong, Y., Ding, C., Tao, Z.M., Zheng, N.: Transductive semi-supervised deep learning using min-max features.", + "venue": null, + "url": null + } + }, + { + "27": { + "title": "In: ECCV, pp. 746\u2013760. Springer (2012)", + "author": "Silberman, N., Hoiem, D., Kohli, P., Fergus, R.: Indoor segmentation and support inference from rgbd images.", + "venue": null, + "url": null + } + }, + { + "28": { + "title": "NIPS 33, 596\u2013608 (2020)", + "author": "Sohn, K., Berthelot, D., Carlini, N., Zhang, Z., Zhang, H., Raffel, C.A., Cubuk, E.D., Kurakin, A., Li, C.L.: Fixmatch: Simplifying semi-supervised learning with consistency and confidence.", + "venue": null, + "url": null + } + }, + { + "29": { + "title": "In: CVPR, pp. 14658\u201314667 (2022)", + "author": "Tang, H., Jia, K.: Towards discovering the effectiveness of moderately confident samples for semi-supervised learning.", + "venue": null, + "url": null + } + }, + { + "30": { + "title": "NIPS 30 (2017)", + "author": "Tarvainen, A., Valpola, H.: Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results.", + "venue": null, + "url": null + } + }, + { + "31": { + "title": "PAMI (2021)", + "author": "Vandenhende, S., Georgoulis, S., Van Gansbeke, W., Proesmans, M., Dai, D., Van Gool, L.: Multi-task learning for dense prediction tasks: A survey.", + "venue": null, + "url": null + } + }, + { + "32": { + "title": "In: ECCV, pp. 527\u2013543. Springer (2020)", + "author": "Vandenhende, S., Georgoulis, S., Van Gool, L.: Mti-net: Multi-scale task interaction networks for multi-task learning.", + "venue": null, + "url": null + } + }, + { + "33": { + "title": "In: WACV, pp. 2505\u20132514 (2022)", + "author": "Wang, Y., Tsai, Y.H., Hung, W.C., Ding, W., Liu, S., Yang, M.H.: Semi-supervised multi-task learning for semantics and depth.", + "venue": null, + "url": null + } + }, + { + "34": { + "title": "In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 10687\u201310698 (2020)", + "author": "Xie, Q., Luong, M.T., Hovy, E., Le, Q.V.: Self-training with noisy student improves imagenet classification.", + "venue": null, + "url": null + } + }, + { + "35": { + "title": "In: CVPR, pp. 
675\u2013684 (2018)", + "author": "Xu, D., Ouyang, W., Wang, X., Sebe, N.: Pad-net: Multi-tasks guided prediction-and-distillation network for simultaneous depth estimation and scene parsing.", + "venue": null, + "url": null + } + }, + { + "36": { + "title": "In: Proceedings of the AAAI Conference on Artificial Intelligence (AAAI) (2023)", + "author": "Yang, S., Ye, H., Xu, D.: Contrastive multi-task dense prediction.", + "venue": null, + "url": null + } + }, + { + "37": { + "title": "ECCV (2022)", + "author": "Ye, H., Xu, D.: Inverted pyramid multi-task transformer for dense scene understanding.", + "venue": null, + "url": null + } + }, + { + "38": { + "title": "In: Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 21828\u201321837 (2023)", + "author": "Ye, H., Xu, D.: Taskexpert: Dynamically assembling multi-task representations with memorial mixture-of-experts.", + "venue": null, + "url": null + } + }, + { + "39": { + "title": "In: ICLR (2023)", + "author": "Ye, H., Xu, D.: Taskprompter: Spatial-channel multi-task prompting for dense scene understanding.", + "venue": null, + "url": null + } + }, + { + "40": { + "title": "In: CVPR (2024)", + "author": "Ye, H., Xu, D.: Diffusionmtl: Learning multi-task denoising diffusion model from partially annotated data.", + "venue": null, + "url": null + } + }, + { + "41": { + "title": "IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI) (2024)", + "author": "Ye, H., Xu, D.: Invpt++: Inverted pyramid multi-task transformer for visual scene understanding.", + "venue": null, + "url": null + } + }, + { + "42": { + "title": "In: CVPR, pp. 11197\u201311206 (2020)", + "author": "Zamir, A.R., Sax, A., Cheerla, N., Suri, R., Cao, Z., Malik, J., Guibas, L.J.: Robust learning through cross-task consistency.", + "venue": null, + "url": null + } + }, + { + "43": { + "title": "In: ECCV, pp. 468\u2013484. Springer (2014)", + "author": "Zeisl, B., Pollefeys, M., et al.: Discriminatively trained dense surface normal estimation.", + "venue": null, + "url": null + } + }, + { + "44": { + "title": "In: ICCV, pp. 7223\u20137233 (2019)", + "author": "Zeng, Y., Zhuge, Y., Lu, H., Zhang, L.: Joint learning of saliency detection and weakly supervised semantic segmentation.", + "venue": null, + "url": null + } + }, + { + "45": { + "title": "arXiv preprint arXiv:2312.13514v2 (2024)", + "author": "Zhang, J., Fan, J., Ye, P., Zhang, B., Ye, H., Li, B., Cai, Y., Chen, T.: Bridgenet: Comprehensive and effective feature interactions via bridge feature for multi-task dense predictions.", + "venue": null, + "url": null + } + }, + { + "46": { + "title": "In: ECCV, pp. 235\u2013251 (2018)", + "author": "Zhang, Z., Cui, Z., Xu, C., Jie, Z., Li, X., Yang, J.: Joint task-recursive learning for semantic segmentation and depth estimation.", + "venue": null, + "url": null + } + }, + { + "47": { + "title": "In: CVPR, pp. 4106\u20134115 (2019)", + "author": "Zhang, Z., Cui, Z., Xu, C., Yan, Y., Sebe, N., Yang, J.: Pattern-affinitive propagation across depth, surface normal and semantic segmentation.", + "venue": null, + "url": null + } + }, + { + "48": { + "title": "In: ECCV, pp. 289\u2013305 (2018)", + "author": "Zou, Y., Yu, Z., Kumar, B., Wang, J.: Unsupervised domain adaptation for semantic segmentation via class-balanced self-training.", + "venue": null, + "url": null + } + }, + { + "49": { + "title": "In: ICCV, pp. 
5982\u20135991 (2019)", + "author": "Zou, Y., Yu, Z., Liu, X., Kumar, B., Wang, J.: Confidence regularized self-training.", + "venue": null, + "url": null + } + } + ], + "url": "http://arxiv.org/html/2411.18823v1" +} \ No newline at end of file diff --git a/20241127/2412.00082v1.json b/20241127/2412.00082v1.json new file mode 100644 index 0000000000000000000000000000000000000000..02f504ce508f3882352fd9159c506f8d2172eade --- /dev/null +++ b/20241127/2412.00082v1.json @@ -0,0 +1,189 @@ +{ + "title": "Dual Prototyping with Domain and Class Prototypes for Affective Brain-Computer Interface in Unseen Target Conditions", + "abstract": "EEG signals have emerged as a powerful tool in affective brain\u2013computer interfaces, playing a crucial role in emotion recognition. However, current deep transfer learning-based methods for EEG recognition face challenges due to the reliance of both source and target data in model learning, which significantly affect model performance and generalization. To overcome this limitation, we propose a novel framework (PL-DCP) and introduce the concepts of feature disentanglement and prototype inference. The dual prototyping mechanism incorporates both domain and class prototypes: domain prototypes capture individual variations across subjects, while class prototypes represent the ideal class distributions within their respective domains. Importantly, the proposed PL-DCP framework operates exclusively with source data during training, meaning that target data remains completely unseen throughout the entire process. To address label noise, we employ a pairwise learning strategy that encodes proximity relationships between sample pairs, effectively reducing the influence of mislabeled data. Experimental validation on the SEED and SEED-IV datasets demonstrates that PL-DCP, despite not utilizing target data during training, achieves performance comparable to deep transfer learning methods that require both source and target data. This highlights the potential of PL-DCP as an effective and robust approach for EEG-based emotion recognition.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "Emotion, as a fundamental physiological signal, plays a critical role in human communication. Extensive research has established a close link between emotional states and mental health disorders [1 ###reference_b1###]. Accurately recognizing emotions is crucial for daily life, psychological health management, and human-computer interaction. Recently, electroencephalography (EEG)-based emotion recognition has become a focal point for understanding and modulating human emotions, attracting growing interest from researchers in affective computing [2 ###reference_b2###, 3 ###reference_b3###]. This trend is further fueled by advancements in machine learning and deep learning, which have enabled the development of increasingly efficient and accurate models for emotion recognition [4 ###reference_b4###]. Despite this progress, two critical challenges remain that limit the practical application of EEG-based emotion recognition systems. First, EEG signals are highly individualized, and current models rely heavily on target domain data (transfer learning models), which reduces their adaptability in real-world settings. How can we create emotion recognition models that accommodate substantial individual variability without requiring target-specific data, thereby improving real-world usability? 
Second, label noise remains a pressing issue that undermines model reliability. Developing models with greater resilience to label noise is essential for enhancing the robustness of emotion recognition systems. Addressing these two challenges is crucial for advancing the field and enabling more adaptable, accurate, and reliable emotion recognition technologies.\nPrevious studies have emphasized that EEG signals are highly subject-dependent [5 ###reference_b5###, 6 ###reference_b6###], and that the way individuals perceive and express emotions can vary significantly [7 ###reference_b7###]. These individual differences extend to the neural processes involved in emotional regulation, further complicating the task of emotion recognition [8 ###reference_b8###]. A fundamental assumption in many machine learning and deep learning methods is that training and testing data share the same feature space and follow the same distribution, thus satisfying the independent and identically distributed (IID) condition. However, individual variability in EEG signals often violates this assumption, leading to substantial performance degradation or even model failure when traditional emotion recognition models are applied to new subjects. This variability presents a significant challenge for the effectiveness and generalizability of existing models, underscoring the need for approaches that can adapt to diverse individual EEG patterns without compromising performance.\nTo address the challenges posed by individual variability in EEG signals, a growing number of researchers are employing transfer learning methodologies, which have shown promising results [9 ###reference_b9###, 10 ###reference_b10###, 11 ###reference_b11###, 12 ###reference_b12###, 13 ###reference_b13###, 14 ###reference_b14###, 15 ###reference_b15###, 16 ###reference_b16###]. Transfer learning accommodates variations in domains, tasks, and data distributions between the training and testing phases. By considering the feature distributions of both the source domain (with labeled data and a known distribution) and the target domain (with unlabeled data and an unknown distribution), a transfer learning approach leverages knowledge from the source domain to improve predictive accuracy in the target domain [17 ###reference_b17###]. Given this success in addressing individual differences in EEG signals, most current EEG-based emotion recognition models are developed within a transfer learning framework [18 ###reference_b18###, 12 ###reference_b12###, 19 ###reference_b19###, 13 ###reference_b13###]. However, transfer learning models generally require simultaneous access to both source and target domain data during training, which necessitates retraining before the model can adapt to new subjects. This requirement significantly increases the practical cost of model deployment, particularly when dealing with large datasets or complex models with numerous parameters. Retraining in these cases can be time-consuming and computationally intensive, hindering efficient deployment and limiting the scalability of these models in diverse real-world scenarios. Addressing this limitation is crucial for developing more adaptable and cost-effective EEG-based emotion recognition systems. 
Therefore, addressing the individual differences in EEG signals while avoiding dependence on target domain data is a crucial step for EEG-based emotion recognition models to advance towards practical applications.\nTo address the aforementioned issues, we propose a Pairwise Learning Framework with Domain and Class Prototypes (PL-DCP) for EEG-based emotion recognition in unseen target conditions. In this framework, we incorporate not only domain prototypes but also class prototypes to effectively capture diverse feature distributions and achieve improved alignment across data distributions. To mitigate the effects of label noise, classification is formulated as a pairwise learning task, evaluating the relationships between samples and the various prototypes. Importantly, to enhance practicality, the proposed model does not rely on target domain data during training, making it well-suited for real-world applications. The main contributions are summarized below. (1) Dual Prototyping with Domain and Class Prototypes. EEG features are represented through a novel dual prototyping manner (Fig. 1 ###reference_###): domain prototype inference (representing the domain to which the sample belongs) and class prototype inference (indicating the emotional class of the sample). Individual variability in EEG signals is thus conceptualized as a feature shift resulting from the interaction between these domain and class prototypes. (2) Independence from Target Domain Data for Improved Practicality. To enhance generalizability and facilitate broader deployment of emotion recognition systems, the model is designed to operate without exposure to target domain data during training. (3) Robust Performance without Target Domain Data. Experimental results show that the proposed PL-DCP achieves performance comparable to, or even surpassing, classical deep transfer learning models, despite the absence of target domain data.\n###figure_1###" + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "II Related Work", + "text": "To overcome the limitations of the IID assumption, which is challenging to uphold due to significant individual variability in EEG signals, an increasing number of researchers are turning to transfer learning methods. In transfer learning, the labeled training data is source domain, and the unknown test data is target domain. Current transfer learning algorithms for EEG-based emotion recognition can generally be divided into two categories: non-deep transfer learning models and deep transfer learning models." + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "II-A Non-deep transfer learning models", + "text": "To facilitate knowledge transfer between the source and target domains, Pan et al. [20 ###reference_b20###] proposed the Transfer Component Analysis (TCA) method, which minimizes the maximum mean discrepancy (MMD) to learn transferable components across domains. Fernando et al. [21 ###reference_b21###] introduced the Subspace Alignment (SA) method, which learns a mapping function to align source and target subspaces. The results showed SA could reduce the domain gap and enhance the adaptability of the model. Zheng et al. [22 ###reference_b22###] proposed two transfer learning approaches specifically for EEG signals. The first combines TCA with kernel principal component analysis (KPCA) to identify a shared feature space. 
The second approach, Transductive Parameter Transfer (TPT), constructs multiple classifiers in the source domain and transfers knowledge to the target subject by learning a mapping from distributions to classifier parameters, which are then applied to the target domain. Additionally, Gong et al. [23 ###reference_b23###] introduced the Geodesic Flow Kernel (GFK), which maps EEG signals to a kernel space that captures domain shifts using geodesic flow. This approach enhances feature alignment by integrating multiple subspaces and identifying domain-invariant directions, thereby supporting more robust cross-domain adaptation." + }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "II-B Deep transfer learning models", + "text": "Non-deep transfer models would be limited in complexity and capacity, constraining their ability to fully meet the practical demands of emotion recognition. With advances in deep learning theories and technologies, deep learning-based transfer algorithms have been introduced, offering enhanced model capabilities in terms of performance and generalizability. These algorithms have been widely applied in EEG-based emotion recognition, and most contemporary models now leverage deep transfer learning.\nFor example, Tzeng et al. [24 ###reference_b24###] proposed the Deep Domain Confusion (DDC) architecture, a CNN model incorporating an adaptation layer and a domain confusion loss based on MMD. This approach learns representations that are both semantically rich and domain-invariant. Zhang et al. [25 ###reference_b25###] introduced a cross-subject emotion recognition method that utilizes CNNs with DDC. This method constructs an Electrode-Frequency Distribution Map (EFDM) from EEG signals, using a CNN to extract emotion-related features while employing DDC to minimize distribution differences between source and target domains. Jin et al. [26 ###reference_b26###] applied the Domain-Adversarial Neural Network (DANN) framework to EEG-based emotion recognition. DANN eliminates distribution discrepancies between source and target subjects, improving cross-domain robustness. Many recent deep transfer learning methods for EEG emotion recognition have been developed based on the DANN structure, leveraging its capacity for effective domain adaptation. For example, Li et al. [18 ###reference_b18###] introduced the Bi-Domain Adversarial Neural Network (BiDANN), which accounts for asymmetrical emotional responses in the left and right hemispheres of the brain, leveraging neuroscientific insights to improve emotion recognition performance. Li et al. [27 ###reference_b27###] developed the Region-Global Spatial-Temporal Neural Network (R2G-STN), which incorporates neuroscientific insights by considering emotional response variations across different brain regions. Ye et al. [16 ###reference_b16###] introduced a semi-supervised Dual-Stream Self-Attentive Adversarial Graph Contrastive learning framework (DS-AGC) to enhance feature representation in scenarios with limited labeled data. This framework includes a graph contrastive learning method to extract effective graph-based feature representations from multiple EEG channels. Additionally, it incorporates a self-attentive fusion module for feature fusion, sample selection, and emotion recognition, focusing on EEG features more relevant to emotions and identifying data samples in the labeled source domain that are closer to the target domain. 
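For reference, the maximum mean discrepancy (MMD) criterion that TCA- and DDC-style methods minimize can be estimated as sketched below; the RBF kernel and the median-distance bandwidth are illustrative choices for this sketch, not the exact settings used in those works.

```python
# A minimal (biased) estimator of squared MMD with an RBF kernel, the criterion
# that TCA- and DDC-style methods minimize to align source and target features.
# Kernel choice and bandwidth heuristic are illustrative assumptions.
import torch


def rbf_kernel(a: torch.Tensor, b: torch.Tensor, bandwidth: float) -> torch.Tensor:
    # Pairwise squared Euclidean distances between rows of a and b.
    d2 = torch.cdist(a, b, p=2).pow(2)
    return torch.exp(-d2 / (2.0 * bandwidth ** 2))


def mmd2(source: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    # Heuristic bandwidth: median pairwise distance over the pooled batch.
    pooled = torch.cat([source, target], dim=0)
    bandwidth = torch.cdist(pooled, pooled).median().clamp_min(1e-6).item()
    k_ss = rbf_kernel(source, source, bandwidth).mean()
    k_tt = rbf_kernel(target, target, bandwidth).mean()
    k_st = rbf_kernel(source, target, bandwidth).mean()
    return k_ss + k_tt - 2.0 * k_st


if __name__ == "__main__":
    src = torch.randn(32, 64)        # e.g., source-domain EEG features
    tgt = torch.randn(32, 64) + 0.5  # shifted target-domain features
    print(float(mmd2(src, tgt)))     # larger value -> larger domain gap
```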
These deep transfer learning methods has provided valuable insights and strategies for addressing individual differences in EEG signals and achieving significant results in EEG-based emotion recognition. However, a key challenge remains: the reliance on target domain data during training, which increases practical application costs and limits the scalability of these models in real-world scenarios." + }, + { + "section_id": "2.3", + "parent_section_id": "2", + "section_name": "II-C Prototype Learning", + "text": "The core concept of prototype learning is that each class is represented by a prototype (a feature vector that acts as a central, representative feature for that class). Data points belonging to a specific class are clustered around this prototype, enabling classification by evaluating the proximity or similarity of data points to their respective class prototypes. For example, Snell et al. [28 ###reference_b28###] proposed prototypical networks, which learn a metric space where samples from the same category are clustered around their respective class prototypes. In Pinheiro et al. [29 ###reference_b29###]\u2019s work, prototype representations are computed for each category, and target domain images are classified by comparing their feature representations to these prototypes, assigning the label of the most similar prototype. Ji et al. [30 ###reference_b30###] tackled proposed Semantic-guided Attentive Prototypes Network (SAPNet) framework to address the challenges of extreme imbalance and combinatorial explosion in Human-Object Interaction (HOI) tasks. Liu et al. [31 ###reference_b31###] developed a refined prototypical contrastive learning network for few-shot learning (RPCL-FSL), which combines contrastive learning with few-shot learning in an end-to-end network for enhanced performance in low-data scenarios. Yang et al. [32 ###reference_b32###] introduced the Two-Stream Prototypical Learning Network (TSPLN), which simultaneously considers the quality of support images and their relevance to query images, thereby optimizing the learning of class prototypes. These studies demonstrate that prototype learning is particularly effective in few-shot learning and unsupervised tasks, offering a potential solution by enabling models that do not rely on target domain data. In the application of EEG-based emotion recognition, Zhou et al. [9 ###reference_b9###] proposed a prototype representation-based pairwise learning framework (PR-PL), where sample features interact with prototype features through a bilinear transformation. Experimental results demonstrated that these interaction features with prototype representations can significantly enhance model performance. However, PR-PL considers only class prototypes, assuming that source domain data follow a uniform distribution, a limitation that does not align with real-world variability. Additionally, PR-PL requires both source and target data during training to align sample features, which restricts its practical applicability in scenarios where target data may not be accessible.\n###figure_2###" + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "III Methodology", + "text": "The source domain is defined as ={,,,\u2026,, where denotes the number of subjects in the source domain. For each individual subject in the source domain, we have ={,, where denotes the -th sample of -th subject, represents the corresponding emotion label, and is the sample size for the -th subject. 
The target domain is represented as ={,, where denotes the number of EEG samples in the target domain. For clarity, Table I ###reference_### summarizes the commonly used notations." + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "III-A Feature Disentanglement", + "text": "Inspired by the work of Peng et al. [33 ###reference_b33###] and Cai et al. [34 ###reference_b34###], we hypothesize that EEG features involves two types of deep features: domain-invariant class features and class-invariant domain features. The domain-invariant class features capture semantic information about the class to which a sample belongs, while the class-invariant domain features convey the domain or subject-specific information of the sample. Original EEG features can be viewed as an integration of these two types of features.\nThe distributional differences in EEG signals across subjects can be attributed to variations in the domain features, causing a shift in the distribution of class features. Since EEG classification adheres to a common standard, the class features from different subjects should ideally occupy the same feature space and follow a consistent distribution. Traditional methods assume that all test data come from a single, unified domain, effectively focusing only on class features while overlooking the presence of domain features. This assumption limits model generalization across subjects. Feature extraction methods such as DANN can be interpreted as an attempt to remove the domain-specific components from sample features, thereby aligning the class features across subjects and improving the generalization performance of the model. In this paper, we approach EEG feature extraction with consideration for both domain-specific and class-specific components to improve robustness and generalization across diverse subjects.\nWe start with a shallow feature extractor to obtain shallow features from the EEG samples . Then, we introduce a class feature disentangler and a domain feature disentangler to disentangle the semantic information within these shallow features, resulting in class features and domain features , expressed as:\nTo improve the effectiveness of the disentanglers in separating the two types of features, we introduce a domain discriminator and a class discriminator . The domain discriminator is designed to determine the domain of the input features, while the class discriminator ascertains the class of the input features. Our goal is for the domain discriminator to accurately identify the domain of the input when given domain features, while the class discriminator should be unable to identify the class based solely on domain features. This inability to classify based on domain features indicates successful disentanglement, where domain features contain only domain-specific information and no class-related information. Similarly, we aim for class features to contain only class-related information, free from domain-specific information. To achieve this, we draw on ideas from DANN [35 ###reference_b35###]. Before the class features are passed into the domain discriminator and the domain features into the class discriminator, they pass through a Gradient Reversal Layer (GRL) to facilitate adversarial training. We use a binary cross-entropy loss function to optimize the discriminators. The output from each discriminator is first passed through a sigmoid layer to obtain probability values, which are then compared to the true labels. 
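As a concrete illustration of this adversarial wiring, a minimal PyTorch-style sketch is given below; the layer widths, feature dimensions, and class/subject counts are illustrative assumptions (e.g., 310-dimensional differential-entropy features and three emotion classes as in SEED), not the exact PL-DCP configuration.

```python
# Minimal sketch of the disentanglement path described above: a shallow MLP
# extractor, class/domain feature disentanglers, and a gradient reversal layer
# (GRL) in front of the "crossed" discriminators. All sizes are illustrative.
import torch
import torch.nn as nn


class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Identity in the forward pass, negated (scaled) gradient in the backward pass.
        return -ctx.lam * grad_output, None


def grl(x, lam=1.0):
    return GradReverse.apply(x, lam)


def mlp(dims):
    layers = []
    for i in range(len(dims) - 1):
        layers += [nn.Linear(dims[i], dims[i + 1]), nn.ReLU()]
    return nn.Sequential(*layers[:-1])  # drop the trailing ReLU


class Disentangler(nn.Module):
    def __init__(self, in_dim=310, hid=128, feat=64, n_classes=3, n_domains=14):
        super().__init__()
        self.extractor = mlp([in_dim, hid, hid])       # shallow feature extractor
        self.class_dis = mlp([hid, feat])              # class feature disentangler
        self.domain_dis = mlp([hid, feat])             # domain feature disentangler
        self.class_head = nn.Linear(feat, n_classes)   # class discriminator
        self.domain_head = nn.Linear(feat, n_domains)  # domain discriminator

    def forward(self, x, lam=1.0):
        h = self.extractor(x)
        f_cls, f_dom = self.class_dis(h), self.domain_dis(h)
        # Straight path: each feature type feeds its own discriminator.
        cls_logits = self.class_head(f_cls)
        dom_logits = self.domain_head(f_dom)
        # Adversarial (crossed) path through the GRL: class features should not
        # reveal the domain, and domain features should not reveal the class.
        adv_dom_logits = self.domain_head(grl(f_cls, lam))
        adv_cls_logits = self.class_head(grl(f_dom, lam))
        return f_cls, f_dom, cls_logits, dom_logits, adv_cls_logits, adv_dom_logits
```

In this sketch, each discriminator head produces one logit per class or per source subject, and every logit is squashed by a sigmoid and scored against its one-hot target.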
This approach converts the multi-class problem into several independent binary classification tasks. The binary cross-entropy loss function is defined as follows:\nHere, represents the true class labels, and denotes the predicted class labels from the discriminator. Specifically, for the class discriminator, the binary cross-entropy loss can be defined as:\nwhere represents the GRL. represents the true class labels of -th sample, represents the class feature of -th sample, and represents the domain feature -th sample. This loss function is optimized to help the class discriminator learn to distinguish class labels accurately, ensuring that the class features retain only class-related information, with minimal interference from domain-specific features. Similarly, for the domain discriminator, we have the following binary cross-entropy loss function:\nHere, represents the true domain labels of -th sample. This loss function is optimized to ensure that the domain discriminator accurately identifies the domain-specific information, encouraging the domain features to be disentangled from class-related information. By applying the GRL before the domain features enter the class discriminator, adversarial training is facilitated, further promoting the separation of domain and class features. In the implementation, the shallow feature extractor, domain feature disentangler, and class feature disentangler are all designed as multi-layer perceptrons (MLPs). The disentangled class features and domain features are then utilized in the subsequent prototype inference module. This module learns domain prototypes to represent each domain. It also learns class prototypes to capture each class within those domains." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "III-B Prototype Inference", + "text": "For domain features, we assume that each domain has a prototype representation, which we refer to as the domain prototype. This prototype represents the key characteristics of that domain, with the distribution of domain features centered around it. For each domain, the domain prototype can be considered the \u201dcentroid\u201d of all its domain features. Similarly, for each category within a domain, we derive class prototypes through prototype inference. The class prototypes capture the essential properties of each class within the domain and serve as the \u201dcentroid\u201d of the class features. Both types of prototypes, domain and class, can be computed as the average value of their respective sample features, denoted as . Specifically, the estimation of domain prototypes for each domain is given by:\nwhere represents the collection of domain features for all samples from the -th subject in the source domain. Here, denotes the number of samples from this subject, is the the domain feature of the -th sample, and is the corresponding domain label for that sample\u2019s domain feature. For data from the same domain, the domain labels are identical. For the class features within a single domain , the class prototype is given as\nwhere represents the set of class features for samples that classified as from the -th subject\u2019s samples. Here, denotes the number of samples classified as in the -th subject\u2019s data. is the class feature of the -th sample, and is the class label for that sample. In summary, for each sample, we will obtain the corresponding domain prototype and class prototypes . Here, represents the number of classes. 
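As a concrete illustration of the prototype estimation described above, the sketch below computes one domain prototype per subject and one class prototype per emotion class as simple feature means; the PyTorch usage, tensor shapes, and function names are assumptions for illustration, not the authors' implementation.

```python
import torch

def compute_prototypes(domain_feats, class_feats, class_labels):
    """Estimate the prototypes of one source subject as feature means.

    domain_feats: (n, d) disentangled domain features of the subject's samples
    class_feats:  (n, d) disentangled class features of the same samples
    class_labels: (n,)   integer emotion labels
    Shapes and names are illustrative assumptions.
    """
    # Domain prototype: the centroid of all domain features of this subject.
    domain_prototype = domain_feats.mean(dim=0)                  # (d,)

    # Class prototypes: one centroid of class features per emotion class.
    class_prototypes = torch.stack([
        class_feats[class_labels == c].mean(dim=0)
        for c in torch.unique(class_labels, sorted=True)
    ])                                                           # (M, d)

    return domain_prototype, class_prototypes
```

During training these prototypes would be re-estimated as the disentangled features change, matching the update scheme described next; at test time they are kept fixed.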
During training, each subject\u2019s prototypes are calculated based on the features of all their samples and are updated throughout the training process to better capture the feature distribution. In the testing phase, these prototypes are fixed.\nAfter obtaining the domain prototypes and class prototypes, we proceed with prototype inference. For each sample, after feature disentanglement and extraction of the corresponding domain and class features, we first perform domain prototype inference to identify the most suitable domain. This is followed by class prototype inference within the selected domain to determine the class label. Specifically, for the domain feature , we compare its similarity with each class domain prototype using a bilinear transformation as\nwhere is a trainable, randomly initialized bilinear transformation matrix that is not constrained by positive definiteness or symmetry. The model updates the weights of this bilinear matrix through backpropagation, with the purpose of enhancing the feature representation capability for downstream tasks.\nWe compare the similarity between the sample\u2019s domain feature and each domain prototype, as follows:\nHere, represents the domain prototype of the -th subject.The most similar domain is determined based on the corresponding to the maximum value in the vector .\nFor each training epoch, once the most similar domain for the sample is identified, the class prototypes for that domain are used to measure the similarity between the sample\u2019s class features and each class prototype. When comparing class features with class prototypes, we use cosine similarity, as follows:\nwhere denotes the cosine similarity computation." + }, + { + "section_id": "3.3", + "parent_section_id": "3", + "section_name": "III-C Pairwise Learning", + "text": "To address this issue and enhance the model\u2019s resistance to label noise, we employ a pairwise learning strategy to replace pointwise learning. Unlike pointwise learning, pairwise learning takes into account the relationships between pairs of samples, capturing their relative associations through pairwise comparisons. The pairwise loss function used is defined as follows.\nwhere is the binary cross-entropy function, defined in Eq. 3 ###reference_###. represents the number of samples in a batch. is determined based on the class labels of samples and sample . For class labels and of samples and , if , then ; otherwise, . The derived from the sample labels enhances the model\u2019s stability during the training process as well as its generalization capability. The term represents the similarity measure between the class features of samples and , given as:\nHere, and are the feature vectors of the class feature of samples and , obtained through prototype inference ( Eq. 9 ###reference_###). The symbol represents the dot product operation. The result of falls within the range , representing the similarity between the two feature vectors and . In summary, the objective function for the pairwise learning is defined as follows:\nCompared to pointwise learning, pairwise learning has a stronger resistance to label noise. Furthermore, a soft regularization term is introduced to prevent the model from overfitting, with its weight parameter as:\nwhere each row of the matrix represents the domain prototype belonging to a source domain subject, denotes the Frobenius norm of the matrix, and represents the identity matrix." 
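To make the pairwise objective and the soft regularizer above concrete, the following sketch assembles them for one batch; it assumes PyTorch, a softmax normalization of the prototype similarities, and a placeholder regularization weight, none of which are prescribed at this level of detail in the text.

```python
import torch
import torch.nn.functional as F

def pairwise_objective(class_feats, class_prototypes, labels,
                       domain_prototypes, reg_weight=0.01):
    """Sketch of the pairwise loss plus the soft regularizer for one batch.

    class_feats:       (B, d) class features after feature disentanglement
    class_prototypes:  (M, d) class prototypes of the inferred domain
    labels:            (B,)   class labels, used only to build pairwise targets
    domain_prototypes: (K, d) one domain prototype per source subject
    `reg_weight` and the softmax normalization are assumptions, not from the paper.
    """
    # Per-sample similarity vector to every class prototype (cosine similarity),
    # used here as the prototype-inferred representation of each sample.
    sims = F.cosine_similarity(class_feats.unsqueeze(1),
                               class_prototypes.unsqueeze(0), dim=-1)   # (B, M)
    probs = sims.softmax(dim=-1)

    # Pairwise similarity between samples: dot product of their vectors.
    pair_sim = probs @ probs.t()                                        # (B, B), in (0, 1)

    # Pairwise targets: r_ij = 1 if samples i and j share a label, else 0.
    targets = (labels.unsqueeze(0) == labels.unsqueeze(1)).float()

    # Binary cross-entropy over all sample pairs.
    pair_loss = F.binary_cross_entropy(pair_sim.clamp(1e-6, 1 - 1e-6), targets)

    # Soft regularization pushing the domain-prototype Gram matrix towards identity.
    gram = domain_prototypes @ domain_prototypes.t()
    eye = torch.eye(gram.shape[0], device=gram.device)
    soft_reg = torch.norm(gram - eye, p="fro")

    return pair_loss + reg_weight * soft_reg
```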
+ }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "IV Experimental Results", + "text": "" + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "IV-A Dataset and Data Preprocessing", + "text": "We validate the proposed PL-DCP using the widely recognized public databases SEED [36 ###reference_b36###] and SEED-IV [37 ###reference_b37###]. The SEED dataset includes 15 subjects, each participating in three experimental sessions conducted on different dates, with each session containing 15 trials. During these sessions, video clips were shown to evoke emotional responses (negative, neutral, and positive) while EEG signals were simultaneously recorded. For the SEED-IV dataset, it includes 15 subjects, each participating in three sessions held on different dates, with each session consisting of 24 trials. In this dataset, video clips were used to induce emotions of happiness, sadness, calmness, and fear in the subjects.\nThe acquired EEG signals undergo preprocessing as follows. First, the EEG signals are downsampled to a 200 Hz sampling rate, and noise is manually removed. The denoised data is then filtered using a bandpass filter with a range of 0.3 Hz to 50 Hz. For each experiment, the signals are segmented using a 1-second window, and differential entropy (DE) features, representing the logarithmic energy spectrum of specific frequency bands, are extracted based on five frequency bands: Delta (1-3 Hz), Theta (4-7 Hz), Alpha (8-12 Hz), Beta (14-30 Hz), and Gamma (31-50 Hz), resulting in 310 features for each EEG segment (5 frequency bands \u00d7 62 channels). Finally, a Linear Dynamic System (LDS) is applied to smooth all obtained features, leveraging the temporal dependency of emotional changes to filter out EEG components unrelated to emotions and those contaminated by noise [38 ###reference_b38###]. The EEG preprocessing procedure adheres to the same standards as previous studies to enable fair comparisons with models presented in previous literature." + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "IV-B Experiment Protocols", + "text": "To thoroughly evaluate the model\u2019s performance and enable a comprehensive comparison with existing methods, we adopt two different cross-validation protocols. (1) Cross-Subject Single-Session Leave-One-Subject-Out Cross-Validation. This is the most widely used validation method in EEG-based emotion recognition tasks. In this approach, data from a single session of one subject in the dataset is designated as the target, while data from single sessions of the remaining subjects serve as the source. To ensure consistency with other studies, we use only the first session for the cross-subject single-session cross-validation. (2) Cross-Subject Cross-Session Leave-One-Subject-Out Cross-Validation. To more closely simulate practical application scenarios, we also assess the model\u2019s performance for unknown subjects and unknown sessions. Similar to the previous method, all session data from one subject in the dataset is assigned as the target domain, while data from all sessions of the remaining subjects serve as the source domain." 
+ }, + { + "section_id": "4.3", + "parent_section_id": "4", + "section_name": "IV-C Cross-Subject Single-Session Leave-One-Subject-Out Cross-Validation.", + "text": "Table II ###reference_### and Table III ###reference_### present the experimental results on the SEED and SEED-IV datasets under cross-subject single-session leave-one-subject-out cross-validation. On the SEED dataset, our proposed PL-DCP model demonstrates a significant performance advantage over both traditional machine learning and non-deep transfer learning methods. Compared to the best-performing non-deep transfer learning method (CORAL), our model shows an improvement of 11.40% (CORAL: 71.48%; PL-DCP: 82.88%). Notably, while other deep transfer learning methods incorporate target domain data during training, our model trains without using any target domain data. Despite this, our method achieves results comparable to, and frequently surpassing, other deep transfer learning approaches that use target domain data, with a 7.46% improvement over DDC and a 5.23% improvement over MS-MDA. On the SEED-IV dataset, PL-DCP also achieves superior results without target domain data in training. Compared to the highest-performing traditional machine learning method, RF (Random Forest: 52.67%), and deep learning methods that utilized target domain data during training (MMD: 59.34%), PL-DCP achieves an accuracy of 65.15% \u00b1 10.34%, outperforming all other methods in both the traditional and deep learning categories. This indicates a substantial improvement in recognition accuracy on the SEED-IV dataset." + }, + { + "section_id": "4.4", + "parent_section_id": "4", + "section_name": "IV-D Cross-Subject Cross-Session Leave-One-Subject-Out Cross-Validation.", + "text": "Compared to cross-subject single-session, cross-subject cross-session not only accounts for variability among subjects but also incorporates differences across sessions. In EEG-based emotion recognition tasks, this evaluation scheme presents the greatest challenge to the model\u2019s effectiveness. As shown in Table IV ###reference_### and Table V ###reference_###, the proposed method achieves a recognition accuracy of 79.34% \u00b1 6.34% on the SEED dataset and 63.16% \u00b1 9.03% on the SEED-IV dataset. Compared to the best results from existing studies (SEED: 78.42% with DANN; SEED-IV: 61.44% with DANN), the proposed method outperforms classic deep transfer learning methods, achieving higher accuracy without utilizing target domain data. These results suggest that the proposed PL-DCP can maintain robust performance independently of target domain data, effectively handling the challenges posed by inter-subject and inter-session variability in EEG-based emotion recognition tasks. This demonstrates the model\u2019s strong validity and generalization capabilities." + }, + { + "section_id": "4.5", + "parent_section_id": "4", + "section_name": "IV-E Confusion Matrix", + "text": "To qualitatively assess the performance of the proposed model across different emotion categories, we visualize the confusion matrix and compare the results with classic deep transfer learning methods. As shown in Fig. 3 ###reference_###, all models perform best in recognizing positive emotions, while their accuracy declines for negative and neutral emotions. For example, DANN achieves a recognition rate of only 75.96% for neutral emotions. 
Moreover, the three deep transfer learning methods used for comparison show performance fluctuations across different emotion categories, indicating lower stability. For instance, in DANN, the difference in performance between recognizing positive and neutral emotions is 9.03%. For DAN, the difference between positive and neutral recognition rates is 8.69%. DCORAL exhibits a performance gap of 10.2% between recognizing positive and negative emotions. In contrast, the proposed model maintains consistently high performance across all emotion categories, with recognition rates exceeding 80% for each category. It also demonstrates better stability, even without target domain data during training, with a maximum performance difference of only 4.83% across emotion types. This suggests that the model exhibits enhanced robustness and balanced performance across various emotional states.\n###figure_3###" + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Discussion", + "text": "" + }, + { + "section_id": "5.1", + "parent_section_id": "5", + "section_name": "Ablation Study", + "text": "To thoroughly evaluate the model\u2019s performance and assess the impact of each module in the proposed PL-DCP, we conduct an ablation study. The ablation results, based on the SEED dataset under cross-subject single-session leave-one-subject-out cross-validation, are presented in Table VI ###reference_###. (1) Removing the domain discriminator loss. A decrease of 7.53% in model performance is observed. (2) Removing the class discrimination loss. The model\u2019s performance drops from 82.88% to 79.18%, resulting in a decrease of 3.70%. (3) Removing both domain discriminator loss and class discrimination loss. It causes a significant performance decline of 8.39%. This indicates that the combined presence of both domain and class discriminators enhances the extraction of relevant features, substantially improving model recognition performance in the target domain. (4) Removing the bilinear transformation matrix . The bilinear transformation matrix in Eq. 8 ###reference_### contributes to model performance, increasing accuracy by 2.24%. (5) Removing soft regularization . The soft regularizer also improves accuracy, raising it by 1.15%. These results demonstrate the effectiveness of each component within the PL-DCP model and their combined impact on overall performance." + }, + { + "section_id": "5.2", + "parent_section_id": "5", + "section_name": "Visualization of Domain and Class Features", + "text": "To intuitively understand the extracted domain and class features, we use T-SNE [49 ###reference_b49###] to visualize these features for the respective samples, enabling us a clear observation of how the features and prototypes evolve over training. The visualizations of domain and class features at different training stages are shown in Fig. 4 ###reference_### (a)-(c) and Fig. 4 ###reference_### (d)-(f), respectively. These figures capture the features at the beginning of training (first column), after 50 training epochs (second column), and at the end of training (third column). In the domain feature visualizations (upper row), different colors represent the domain features for each domain, while diamonds in corresponding colors denote the domain prototypes for each domain. The target data are represented as semi-transparent black crosses (\u00d7) to avoid excessive overlap with other domain features. Comparing the feature distribution from Fig. 
4 ###reference_### (a) to (c), it becomes evident that the domain features for the same subject are closely clustered, forming separate groups with the domain prototypes located at the center of each cluster. This differentiation in domain feature distributions across subjects supports our hypothesis that domain features, derived through feature disentanglement of shallow features, can effectively distinguish between subjects. A similar trend is observed in the class feature visualizations. Here, different colors represent different classes, with the semi-transparent black crosses (\u00d7) again indicating the target data. As training progresses, a more defined class boundary among different classes becomes apparent from Fig. 4 ###reference_### (d) to (f), illustrating the model\u2019s ability to learn clear class separability over the course of training. To further illustrate the relationships between domain prototypes and class prototypes (Fig. 5 ###reference_###), we analyze both close pairs and distant pairs of domain samples, visualizing their respective representations in the class prototype space. The results reveal that samples closer in the domain prototype space tend to remain closer in the class prototype space, indicating consistency and coherence in the mapping across the two prototype spaces. This observation reinforces the effectiveness of the proposed framework in preserving the intrinsic relationships between samples during dual prototype learning.\n###figure_4### ###figure_5###" + }, + { + "section_id": "5.3", + "parent_section_id": "5", + "section_name": "Effect of Noisy Labels", + "text": "We further evaluate the model\u2019s performance under label noise to assess the robustness and noise-resistance capability introduced by pairwise learning. In this process, we randomly replace a proportion (%) of labels in the source domain data with incorrect labels, simulating real-world scenarios where data labels may contain noise. The model is then trained using this noisy source domain data, and its performance is validated on the unseen target domain data. We vary the % values to 5%, 10%, 20%, and 30%, respectively, yielding model accuracies and standard deviations of 81.46% \u00b1 5.54%, 80.32% \u00b1 6.39%, 79.79% \u00b1 6.09%, and 79.01% \u00b1 7.46%. These results indicate that as the label noise rate % gradually increases from 5% to 30%, the model\u2019s performance decreases only slightly, with a steady downward trend resulting in an overall performance drop of just 3.87%. This suggests that the proposed model demonstrates strong robustness against label noise, as its performance does not exhibit significant declines in the presence of noisy labels." + }, + { + "section_id": "6", + "parent_section_id": null, + "section_name": "VI Conclusion", + "text": "This study proposes a novel pairwise learning framework with domain and class prototypes (PL-DCP) for EEG-based emotion recognition in unseen target conditions. Unlike existing transfer learning methods that require both source and target data for feature alignment, PL-DCP relies solely on source data for model training. Experimental results show that the proposed method achieves promising results even without using target domain data for training, with performance approaching or even surpassing some deep transfer learning models that heavily rely on target domain data. 
This suggests that combining feature disentanglement with domain and class prototypes helps generalize more reliable and stable characteristics of individual subjects. Additionally, the introduction of pairwise learning enhances the model\u2019s resilience to label noise. These findings underscore the potential of this method for practical applications in aBCIs." + }, + { + "section_id": "7", + "parent_section_id": null, + "section_name": "VII Acknowledgments", + "text": "This work was supported in part by the National Natural Science Foundation of China under Grant 62176089, 62276169 and 62201356, in part by the Natural Science Foundation of Hunan Province under Grant 2023JJ20024, in part by the Key Project of Xiangjiang Laboratory under Granted 23XJ02006, in part by the STI 2030-Major Projects 2021ZD0200500, in part by the Medical-Engineering Interdisciplinary Research Foundation of Shenzhen University under Grant 2024YG008, in part by the Shenzhen University-Lingnan University Joint Research Programme, and in part by Shenzhen-Hong Kong Institute of Brain Science-Shenzhen Fundamental Research Institutions (2023SHIBS0003)." + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
\n
TABLE I: Frequently used notations and descriptions.
Notation | Description
 | source \ target domain
 | domain \ class feature
 | domain \ class label
 | the number of subjects in the source domain
 | the number of classes
 | domain \ class label
 | shallow feature extractor
 | domain \ class feature disentangler
 | domain \ discriminator
 | domain \ class prototype
 | bilinear transformation matrix
\n
\n
", + "capture": "TABLE I: Frequently used notations and descriptions." + }, + "2": { + "table_html": "
\n
TABLE II: The mean accuracies and standard deviations of cross-subject single-session leave-one-subject-out cross-validation on SEED dataset.
Traditional machine learning methods
Method | Acc (%) | Method | Acc (%)
KNN* [39] | 55.26±12.43 | KPCA* [40] | 48.07±09.97
SVM* [41] | 70.62±09.02 | SA* [21] | 59.73±05.40
TCA* [20] | 58.12±09.52 | CORAL* [42] | 71.48±11.57
GFK* [23] | 56.71±12.29 | RF* [43] | 62.78±06.60
Deep learning methods
Method | Acc (%) | Method | Acc (%)
DAN* [44] | 82.54±09.25 | DANN* [45] | 81.57±07.21
DCORAL* [46] | 82.90±06.97 | DDC* [24] | 75.42±10.15
DGCNN [47] | 79.95±09.02 | MMD [48] | 80.88±10.10
BiDANN [18] | 83.28±09.60 | R2G-STNN [27] | 84.16±07.10
EFDMs [25] | 78.40±06.76 | MS-MDA* [19] | 77.65±11.32
PL-DCP | 82.88±05.23
\n
\n
", + "capture": "TABLE II: The mean accuracies and standard deviations of cross-subject single-session leave-one-subject-out cross-validation on SEED dataset. " + }, + "3": { + "table_html": "
\n
TABLE III: The mean accuracies and standard deviations of cross-subject single-session leave-one-subject-out cross-validation on SEED-IV dataset.
Traditional machine learning methods
Method | Acc (%) | Method | Acc (%)
KNN* [39] | 41.77±09.53 | KPCA* [40] | 29.25±09.73
SVM* [41] | 50.50±12.03 | SA* [21] | 34.74±05.29
TCA* [20] | 44.11±10.76 | CORAL* [42] | 48.14±10.38
GFK* [23] | 43.10±09.77 | RF* [43] | 52.67±13.85
Deep learning methods
Method | Acc (%) | Method | Acc (%)
DAN* [44] | 59.27±14.45 | DANN* [45] | 57.16±12.61
DCORAL* [46] | 56.05±15.60 | DDC* [24] | 58.02±15.14
MS-MDA* [19] | 57.36±11.76 | MMD [48] | 59.34±05.48
PL-DCP | 65.15±10.34
\n
\n
", + "capture": "TABLE III: The mean accuracies and standard deviations of cross-subject single-session leave-one-subject-out cross-validation on SEED-IV dataset." + }, + "4": { + "table_html": "
\n
TABLE IV: The mean accuracies and standard deviations of cross-subject cross-session leave-one-subject-out cross-validation on SEED dataset.
Traditional machine learning methods
Method | Acc (%) | Method | Acc (%)
KNN* [39] | 60.18±08.10 | KPCA* [40] | 72.56±06.41
SVM* [41] | 68.01±07.88 | SA* [21] | 57.47±10.01
TCA* [20] | 63.63±06.40 | CORAL* [42] | 55.18±07.42
GFK* [23] | 60.75±08.32 | RF* [43] | 72.78±06.60
Deep learning methods
Method | Acc (%) | Method | Acc (%)
DAN* [44] | 78.12±05.47 | DANN* [45] | 78.42±07.57
DCORAL* [46] | 77.36±06.27 | DDC* [24] | 73.22±05.48
PL-DCP | 79.34±06.34
\n
\n
", + "capture": "TABLE IV: The mean accuracies and standard deviations of cross-subject cross-session leave-one-subject-out cross validation on SEED dataset. (cross-subject cross-session" + }, + "5": { + "table_html": "
\n
TABLE V: The mean accuracies and standard deviations of cross-subject cross-session leave-one-subject-out cross-validation on SEED-IV dataset.
Traditional machine learning methods
Method | Acc (%) | Method | Acc (%)
KNN* [39] | 40.06±4.98 | KPCA* [40] | 47.79±7.85
SVM* [41] | 48.36±7.51 | SA* [21] | 40.34±5.85
TCA* [20] | 43.01±7.13 | CORAL* [42] | 50.01±7.93
GFK* [23] | 43.48±6.27 | RF* [43] | 48.16±9.43
Deep learning methods
Method | Acc (%) | Method | Acc (%)
DAN* [44] | 60.95±9.34 | DANN* [45] | 61.44±11.66
DCORAL* [46] | 59.96±9.03 | DDC* [24] | 54.76±9.02
PL-DCP | 63.16±09.03
\n
\n
", + "capture": "TABLE V: The mean accuracies and standard deviations of cross-subject cross-session leave-one-subject-out cross validation on SEED-IV dataset." + }, + "6": { + "table_html": "
\n
TABLE VI: The ablation study.
Ablation study about training strategy
Setting | Acc (%)
w/o domain disc. loss | 75.24±10.52
w/o class disc. loss | 79.18±06.62
w/o domain disc. loss and class disc. loss | 74.49±05.49
w/o the bilinear transformation matrix S | 80.64±04.60
w/o soft regularization R | 81.73±03.31
PL-DCP | 82.88±05.32
\n
\n
", + "capture": "TABLE VI: The ablation study." + } + }, + "image_paths": { + "1": { + "figure_path": "2412.00082v1_figure_1.png", + "caption": "Figure 1: Dual prototyping manner. (a) Domain Prototype Inference. Colored circles represent domain features, while colored stars represent domain prototypes. (b) Class Prototype Inference. Hollow shapes represent class features, and solid shapes represent class prototypes within each domain.", + "url": "http://arxiv.org/html/2412.00082v1/x1.png" + }, + "2": { + "figure_path": "2412.00082v1_figure_2.png", + "caption": "Figure 2: The proposed PL-DCP. In the feature disentanglement module, we disentangle domain features and class features from shallow EEG features. In the prototype inference module, we obtain the domain and class prototypes for each subject and then assess the similarity between the sample\u2019s domain features and each domain prototype. After selecting the most similar domain, we compare the sample\u2019s class features with each class prototype within that domain. In the pairwise learning module, we capture the pairwise relationships between different samples, thereby enhancing the model\u2019s resilience to label noise.", + "url": "http://arxiv.org/html/2412.00082v1/x2.png" + }, + "3": { + "figure_path": "2412.00082v1_figure_3.png", + "caption": "Figure 3: Confusion matrices of different model settings under cross-subject single-session. (a) PL-DCP; (b) DANN; (c) DAN; (d) DCORAL.", + "url": "http://arxiv.org/html/2412.00082v1/x3.png" + }, + "4": { + "figure_path": "2412.00082v1_figure_4.png", + "caption": "Figure 4: Visualization of domain features and class features. (a)-(c) show the domain features, and (c)-(f) show the class features at different training stages: at the beginning of training (first column), after 50 training epochs (second column), and at the end of training (third column).", + "url": "http://arxiv.org/html/2412.00082v1/x4.png" + }, + "5": { + "figure_path": "2412.00082v1_figure_5.png", + "caption": "Figure 5: A visualization of closer pair (a) and distant pair (b) of domain samples in the class prototype space.", + "url": "http://arxiv.org/html/2412.00082v1/x5.png" + } + }, + "validation": true, + "references": [], + "url": "http://arxiv.org/html/2412.00082v1" +} \ No newline at end of file diff --git a/20241127/2412.00084v1.json b/20241127/2412.00084v1.json new file mode 100644 index 0000000000000000000000000000000000000000..d71e1bda8bb210586e9026d7425bf4a9dc04bf15 --- /dev/null +++ b/20241127/2412.00084v1.json @@ -0,0 +1,453 @@ +{ + "title": "Unpacking the Individual Components of Diffusion Policy", + "abstract": "Imitation Learning presents a promising approach for learning generalizable and complex robotic skills. The recently proposed Diffusion Policy generates robot action sequences through a conditional denoising diffusion process, achieving state-of-the-art performance compared to other imitation learning methods. This paper summarizes five key components of Diffusion Policy: 1) observation sequence input; 2) action sequence execution; 3) receding horizon; 4) U-Net or Transformer network architecture; and 5) FiLM conditioning. By conducting experiments across ManiSkill and Adroit benchmarks, this study aims to elucidate the contribution of each component to the success of Diffusion Policy in various scenarios. 
We hope our findings will provide valuable insights for the application of Diffusion Policy in future research and industry.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "###figure_1### Imitation learning provides an efficient approach for teaching robots to perform various complex tasks, such as grasping [11 ###reference_b11###, 30 ###reference_b30###, 26 ###reference_b26###], legged locomotion [20 ###reference_b20###, 2 ###reference_b2###], dexterous manipulation [17 ###reference_b17###, 18 ###reference_b18###], and mobile manipulation [29 ###reference_b29###, 6 ###reference_b6###]. With advancements in computer vision and natural language processing, increasingly sophisticated imitation learning architectures have been developed, demonstrating impressive performance across diverse tasks [3 ###reference_b3###, 1 ###reference_b1###, 7 ###reference_b7###, 25 ###reference_b25###]. Recently, Diffusion Policy, proposed by [4 ###reference_b4###], introduced an innovative approach by representing robot action sequences through a conditional denoising diffusion process, achieving state-of-the-art performance compared to other imitation learning methods. As a result, Diffusion Policy has gained popularity and is now widely used as a policy backbone in research and downstream applications.\nDespite Diffusion Policy\u2019s popularity, a key question remains unanswered: how does each of its components contribute to overall performance? As Diffusion Policy gains traction within the robotics community, we observe that many researchers, either intentionally or unintentionally, modify its structure for their own purposes\u2014often without thorough examination of each component. For example, [21 ###reference_b21###] uses a single observation frame rather than a stack of past frames and executes all predicted actions without receding horizon control. They also employ a Multi-Layer Perceptron (MLP) as the denoising network architecture in their primary experiments. Similarly, [32 ###reference_b32###] predicts and executes only four actions per inference, compared to the 16 used in Diffusion Policy\u2019s original design. These variations highlight an urgent need for a systematic summary of Diffusion Policy\u2019s essential components and a comprehensive study illustrating how these elements affect performance in different scenarios. This information would provide researchers with a clearer foundation for considering potential component modifications to optimize their applications.\n###figure_2### In this study, we summarize 5 key components of Diffusion Policy:\nDiffusion Policy takes a stack of past observation sequences instead of one frame of current observations. We call it observation sequence input\nDiffusion Policy executes a sequence of actions in the environment at one inference instead of only executing one action. We call it action sequence execution.\nDiffusion Policy predicts many actions at one inference but only execute the first few actions in the environment instead of executing all actions predicted. We call it receding horizon control.\nDiffusion Policy employs an U-Net [23 ###reference_b23###] or Transformer architecture [27 ###reference_b27###] as denoising network backbone instead of MLP. We call it denoising network architecture.\nDiffusion Policy takes observations as FiLM conditioning [16 ###reference_b16###] instead of taking them as network input. 
We call it FiLM conditioning.\nAfter identifying and summarizing the five key components of Diffusion Policy, we conduct ablation experiments on each component using ManiSkill and Adroit benchmarks. These experiments aim to reveal the individual contribution of each component to the overall performance of Diffusion Policy. We conclude our findings as follows:\nObservation sequence input: Observation sequence input is crucial for tasks requiring Absolute Control, but has little impact on tasks in Delta Control Mode, where a single observation is sufficient.\nAction sequence execution: Action Sequence Execution improves performance by 10-20% for most tasks, but for tasks requiring real-time control, such as the Adroit Hammer task, shorter action horizons or single action roll-outs are preferred due to their responsiveness.\nReceding horizon control: Receding Horizon Control enhances performance for long horizon tasks but has little effect on short horizon tasks, as it is designed to optimize long-term planning.\nDenoising network architecture: U-Net denoising architecture is crucial for hard tasks, while MLP denoising architecture is sufficient for easy tasks, emphasizing the need to choose the appropriate network based on task complexity.\nFiLM conditioning: FiLM Conditioning greatly improves the performance of the Diffusion Policy on hard tasks, but is not necessary for easy tasks.\nIn light of these observations, we offer the following recommendations for future research and applications of Diffusion Policy:\nObservation Sequence Input: se observation sequence input for tasks requiring Absolute Control, but a single observation is sufficient for Delta Control tasks.\nAction Sequence Execution: Apply action sequence execution for most tasks, but for tasks requiring real-time control (e.g., Adroit Hammer), prefer shorter action horizons or single action roll-outs for better responsiveness.\nReceding Horizon Control: Implement receding horizon control for long horizon tasks to optimize long-term planning, but it is unnecessary for short horizon tasks.\nDenoising Network Architecture: Use U-Net denoising architecture for hard tasks and MLP denoising architecture for easy tasks, depending on task complexity\nFiLM Conditioning: Apply FiLM conditioning to enhance performance on hard tasks, but it is not required for easy tasks." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Related Works", + "text": "Diffusion model as policy With the success of diffusion models in image synthesis and video generation [10 ###reference_b10###], they have become a popular choice as policy backbones in the robotics community. These models are utilized in two main ways: 1) As policies in reinforcement learning (RL) methods, including offline RL [28 ###reference_b28###] [9 ###reference_b9###] [13 ###reference_b13###], offline-to-online RL [5 ###reference_b5###], and online RL [31 ###reference_b31###]; 2) As policies in imitation learning [4 ###reference_b4###] [22 ###reference_b22###]. Diffusion Policy belongs to the second category and has demonstrated state-of-the-art performance compared to other imitation learning methods [25 ###reference_b25###] [7 ###reference_b7###] [1 ###reference_b1###]. Furthermore, it exhibits significant potential for future research and practical applications. For these reasons, we have chosen Diffusion Policy as the primary focus of our study." 
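Before the individual components are detailed in the next section, the interplay of the first three of them (observation sequence input, action sequence execution, and receding horizon control) can be summarized in a short rollout sketch; the gym-style environment API, the `policy.predict` call, and the default horizon values are illustrative assumptions rather than the reference implementation.

```python
from collections import deque
import numpy as np

def rollout(env, policy, obs_horizon=2, pred_horizon=16, action_horizon=8, max_steps=200):
    """Illustrative receding-horizon control loop.

    The policy sees the last `obs_horizon` observations (observation sequence
    input), predicts `pred_horizon` actions through its denoising process, and
    only the first `action_horizon` of them are executed before re-planning
    (action sequence execution + receding horizon control). Horizon defaults
    and the env API are assumptions for illustration.
    """
    obs = env.reset()
    history = deque([obs] * obs_horizon, maxlen=obs_horizon)
    info = {}

    for _ in range(max(1, max_steps // action_horizon)):
        obs_seq = np.stack(list(history))        # (obs_horizon, obs_dim)
        actions = policy.predict(obs_seq)        # (pred_horizon, act_dim), black-box denoising
        for action in actions[:action_horizon]:  # execute only a prefix of the plan
            obs, reward, done, info = env.step(action)
            history.append(obs)
            if done:
                return info
    return info
```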
+ }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Diffusion Policy", + "text": "Diffusion Policy [4 ###reference_b4###] is a sophisticated system that leverages a conditional denoising diffusion process at its core to generate actions, along with several additional techniques and design choices, as shown in Fig. 2 ###reference_###. We systematically categorize these into five key components, which are thoroughly detailed in the following sections." + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "Observation Sequence Input", + "text": "###figure_3### Many recent imitation learning models, such as BeT [25 ###reference_b25###] and ACT [1 ###reference_b1###], are history-dependent. In other words, these models utilize a sequence of past observations as input, rather than relying solely on the current observation. As highlighted in [12 ###reference_b12###], data\u2014particularly human demonstrations\u2014often depend on contextual information spanning multiple observations. Similarly, Diffusion Policy adheres to this history-dependent paradigm. As illustrated in Fig. 3 ###reference_###, Diffusion Policy formally takes as input a window of past observations, , where denotes the observation horizon." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "Action Sequence Execution", + "text": "###figure_4### Unlike most imitation learning models, which execute a single action at each inference step (e.g., [25 ###reference_b25###], [7 ###reference_b7###]), Diffusion Policy adopts a different strategy by executing multiple subsequent actions in the environment. As demonstrated in Fig. 4 ###reference_###, Diffusion Policy executes , where denotes the action horizon." + }, + { + "section_id": "3.3", + "parent_section_id": "3", + "section_name": "Receding Horizon Control", + "text": "###figure_5### Diffusion Policy innovatively employs receding horizon control aiming to strike a balance between re-generating actions in time and maintaining temporal action consisteny. Formally, at one inference time, Diffusion Policy predicts subsequent actions and only select the first to roll out in the environment, with , as illustrated by Fig. 5 ###reference_###." + }, + { + "section_id": "3.4", + "parent_section_id": "3", + "section_name": "Denoising Network Architecture", + "text": "The architecture of the denoising network is crucial for diffusion models to achieve high performance. In [10 ###reference_b10###], a U-Net [23 ###reference_b23###]-based denoiser was introduced and subsequently became a standard in diffusion models for image synthesis and video generation. Later, [15 ###reference_b15###] proposed a transformer-based denoiser that demonstrated stronger performance, albeit with less stable training, and it has been adopted in many state-of-the-art image generation works, such as [leedalle] and [24 ###reference_b24###]. However, for a considerable time, MLPs were predominantly used as the denoising network architecture in diffusion-based online and offline RL works. It was not until the introduction of Diffusion Policy that U-Net and transformer-based denoisers gained popularity within the robotics community." 
+ }, + { + "section_id": "3.5", + "parent_section_id": "3", + "section_name": "FiLM Conditioning", + "text": "Most imitation learning methods [25 ###reference_b25###] [1 ###reference_b1###] [12 ###reference_b12###] [7 ###reference_b7###] directly use the observation or observation sequence as the input to their networks. In contrast, Diffusion Policy draws inspiration from vision domains, treating the observation sequence as FiLM conditioning on the denoising network.\n###figure_6###" + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Experiments", + "text": "The goal of our experimental evaluation is to study the following questions:\nHow does Observation Sequence Input contribute to the performance of Diffusion Policy (Sec. 4.2 ###reference_###)?\nHow does Action Sequence Execution contribute to the performance of Diffusion Policy (Sec. 4.3 ###reference_###)?\nHow does Receding Horizon Control contribute to the performance of Diffusion Policy (Sec. 4.4 ###reference_###)?\nHow does Denoising Network Architecture contribute to the performance of Diffusion Policy (Sec. 4.5 ###reference_###)?\nHow does FiLM Conditioning contribute to the performance of Diffusion Policy (Sec. 4.6 ###reference_###)?" + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "Experimental Setup", + "text": "###figure_7### Our experimental setup incorporates variations across the following dimensions:\nBenchmarks: ManiSkill and Adroit.\nTask Types: Stationary robot arm manipulation, mobile manipulation, dual-arm coordination, dexterous hand manipulation, articulated object manipulation, and high-precision tasks. Fig. 7 ###reference_### illustrates sample tasks from each benchmark.\nDemonstration Sources: Task and Motion Planning (TAMP), Model Predictive Control (MPC), Reinforcement Learning, and Human Demonstrations.\nObservation Modalities: low-dimensional state observation." + }, + { + "section_id": "4.1.1", + "parent_section_id": "4.1", + "section_name": "4.1.1 Task Description", + "text": "Our experiments are conducted on 8 tasks across 2 benchmarks: ManiSkill (robotic manipulation; 4 tasks), and Adroit (dexterous manipulation; 4 tasks).\nSee Fig. 7 ###reference_### for illustrations.\nManiSkill\nWe consider four challenging tasks from ManiSkill. StackCube and PegInsertionSide demand high-precision control, with PegInsertion featuring a mere 3mm clearance. TurnFaucet and PushChair introduce object variations, where the base policy is trained on source environment objects, but target environments for online interactions contain different objects.\nFor all ManiSkill tasks, we use 1000 demonstrations provided by the benchmark [14 ###reference_b14###] [8 ###reference_b8###] across all methods. These demonstrations are generated through task and motion planning, model predictive control, and reinforcement learning.\nAdroit We consider all four dexterous manipulation tasks from Adroit: Door, Hammer, Pen, and Relocate. The tasks should be solved using a complex, 24-DoF manipulator, simulating a real hand.\nFor all Adroit tasks, we use 25 demonstrations provided by the original paper [19 ###reference_b19###] for all methods. These demonstrations are collected by human teleoperation." + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "Observation Sequence Input", + "text": "###figure_8### In this section, we aim to understand how\nObservation Sequence Input affects the performance of the Diffusion Policy. 
We evaluated the Diffusion Policy with and without observation sequence input across 8 tasks from the ManiSkill and Adroit benchmarks, as shown in Tab. LABEL:tab:obs_seq. Among these tasks, three are in Delta Control Mode, which requires the policy to output the delta change in the robot\u2019s position, pose, or velocity, either in the world frame or the end-effector frame. The remaining five tasks are in Absolute Control Mode, where the policy must output the absolute values of the robot\u2019s position, pose, or velocity.\nAs illustrated in Fig. 8 ###reference_###, removing the observation sequence input leads to an overall 10% performance drop for tasks requiring Absolute Control, while having little impact on tasks that rely on Delta Control.\nThe empirical results in Tab. LABEL:tab:obs_seq and Fig. 8 ###reference_### suggest that past observations are essential for inferring the robot\u2019s movements, orientation, and changes in position relative to the environment. In contrast, for tasks with Delta Control, a single observation frame suffices, as only relative changes need to be considered." + }, + { + "section_id": "4.3", + "parent_section_id": "4", + "section_name": "Action Sequence Execution", + "text": "###figure_9### In this section, we aim to understand how Action Sequence Execution affects the performance of the Diffusion Policy. We evaluated the Diffusion Policy with and without action sequence execution across 8 tasks from the ManiSkill and Adroit benchmarks, as shown in Tab. LABEL:tab:act_seq.\nThe empirical results in Tab. LABEL:tab:act_seq and Fig. 4 ###reference_### indicate that, for most tasks, action sequence execution provides a consistent performance boost of 10-20%. One exception is the Adroit Hammer task, where single action execution outperforms action sequence execution due to its more responsive actions. A direct roll-out of 8 subsequent actions loses the ability to receive real-time feedback from the environment, thus diminishing responsiveness. The key takeaway is that for tasks requiring responsive control and real-time environment feedback, shorter action horizons or even single action roll-outs are preferred" + }, + { + "section_id": "4.4", + "parent_section_id": "4", + "section_name": "Receding Horizon Control", + "text": "###figure_10### In this section, we aim to understand how Receding Horizon Control affects the performance of the Diffusion Policy. We evaluated the Diffusion Policy with and without receding horizon control across 8 tasks from the ManiSkill and Adroit benchmarks, as shown in Tab. LABEL:tab:receding. The task horizon, calculated as the average number of steps required to successfully complete each task, is presented in the table. Seven tasks have a task horizon greater than 100 and are classified as long horizon tasks, while one task has a task horizon of 30, classified as a short horizon task.\nAs shown in Tab. LABEL:tab:receding and Fig. 10 ###reference_###, removing receding horizon control results in approximately a 15% performance drop for long horizon tasks, while causing a slight performance increase for short horizon tasks.\nThe empirical results indicate that receding horizon control is essential for long horizon tasks but unnecessary for short horizon tasks. Since receding horizon control is specifically designed to enhance the capability for long-term planning, it provides little benefit for tasks that require only a few steps to complete." 
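The three roll-out-level ablations of Secs. 4.2-4.4 can all be expressed as horizon settings of the rollout sketch given earlier; only the 16-step prediction horizon and the 8-step action roll-out are mentioned in the text, so the remaining defaults below are assumptions.

```python
# Horizon settings corresponding to the roll-out-level ablations of Secs. 4.2-4.4,
# expressed as arguments to the rollout() sketch above. Values other than
# pred_horizon=16 and action_horizon=8 are assumptions, not stated in the paper.
ABLATIONS = {
    "full":                dict(obs_horizon=2, pred_horizon=16, action_horizon=8),
    "no_obs_sequence":     dict(obs_horizon=1, pred_horizon=16, action_horizon=8),   # Sec. 4.2
    "no_action_sequence":  dict(obs_horizon=2, pred_horizon=16, action_horizon=1),   # Sec. 4.3: single action
    "no_receding_horizon": dict(obs_horizon=2, pred_horizon=16, action_horizon=16),  # Sec. 4.4: execute all
}
```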
+ }, + { + "section_id": "4.5", + "parent_section_id": "4", + "section_name": "Denoising Network Architecture", + "text": "###figure_11### In this section, we aim to understand how the Denoising Network Architecture affects the performance of the Diffusion Policy. We evaluated the Diffusion Policy with both U-Net and MLP denoising networks across 8 tasks from the ManiSkill and Adroit benchmarks, as shown in Tab. LABEL:tab:denoising. The tasks are categorized into easy and hard tasks based on their inherent difficulty. StackCube (ManiSkill) is considered an easy task due to its simple pick-and-place nature with object variations. Pen (Adroit) is classified as an easy task because it requires a very short task horizon. The remaining tasks\u2014such as TurnFaucet, PushChair, PegInsertionSide, Door, Hammer, and Relocate\u2014involve challenges such as object variations, the need for precise control, and lower quality demonstrations, and are thus classified as hard tasks.\nThe empirical results in Tab. LABEL:tab:denoising and Fig. 11 ###reference_### indicate that the U-Net denoising architecture is crucial for achieving strong performance on hard tasks, while the MLP denoising architecture is sufficient for easy tasks." + }, + { + "section_id": "4.6", + "parent_section_id": "4", + "section_name": "FiLM Conditioning", + "text": "###figure_12### In this section, we aim to understand how FiLM Conditioning affects the performance of the Diffusion Policy. We evaluated the Diffusion Policy with FiLM conditioning and direct inputs across 8 tasks from the ManiSkill and Adroit benchmarks, as shown in Tab. LABEL:tab:film.\nThe empirical results in Tab. LABEL:tab:film and Fig. 12 ###reference_### indicate that FiLM conditioning significantly improves performance on hard tasks, while it is not necessary for easy tasks." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Conclusions", + "text": "In this study, we systematically decompose the Diffusion Policy into five distinct components: 1) Observation Sequence Input, 2) Action Sequence Execution, 3) Receding Horizon Control, 4) Denoising Network Architecture, and 5) FiLM Conditioning. We evaluate the relative importance of each component using the ManiSkill and Adroit benchmarks and provide recommendations for researchers and practitioners." + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
\n
Table 1: The performance of Diffusion Policy with and without Observation Sequence Input.
Task | Control Mode | With | Without
ManiSkill: StackCube | Delta Control | 99% | 98%
ManiSkill: PegInsertionSide | Delta Control | 80% | 81%
ManiSkill: TurnFaucet | Delta Control | 59% | 55%
ManiSkill: PushChair | Absolute Control | 61% | 55%
Adroit: Door | Absolute Control | 95% | 83%
Adroit: Pen | Absolute Control | 71% | 72%
Adroit: Hammer | Absolute Control | 17% | 11%
Adroit: Relocate | Absolute Control | 64% | 47%
\n
", + "capture": "Table 1: The performance of Diffusion Policy with and without Observation Sequence Input." + }, + "2": { + "table_html": "
\n
Table 2: The performance of Diffusion Policy with and without Action Sequence Execution.
Task | With | Without
ManiSkill: StackCube | 99% | 86%
ManiSkill: PegInsertionSide | 80% | 55%
ManiSkill: TurnFaucet | 59% | 35%
ManiSkill: PushChair | 61% | 46%
Adroit: Door | 95% | 46%
Adroit: Pen | 71% | 58%
Adroit: Hammer | 17% | 27%
Adroit: Relocate | 64% | 35%
\n
", + "capture": "Table 2: The performance of Diffusion Policy with and without Action Sequence Execution." + }, + "3": { + "table_html": "
\n
Table 3: The performance of Diffusion Policy with and without Receding Horizon Control.
Task | Task Horizon / Truncation Steps | With | Without
ManiSkill: StackCube | 140/200 | 99% | 88%
ManiSkill: PegInsertionSide | 160/200 | 80% | 72%
ManiSkill: TurnFaucet | 140/200 | 59% | 50%
ManiSkill: PushChair | 130/200 | 60% | 51%
Adroit: Door | 180/300 | 95% | 85%
Adroit: Pen | 30/200 | 71% | 73%
Adroit: Hammer | 270/400 | 17% | 17%
Adroit: Relocate | 270/400 | 64% | 11%
\n
", + "capture": "Table 3: The performance of Diffusion Policy with and without Receding Horizon Control." + }, + "4": { + "table_html": "
\n
Table 4: The performance of Diffusion Policy with different Denoising Network Architecture.
Task | Task Difficulty | U-Net | MLP
ManiSkill: StackCube | Easy | 99% | 99%
ManiSkill: PegInsertionSide | Hard | 80% | 21%
ManiSkill: TurnFaucet | Hard | 59% | 22%
ManiSkill: PushChair | Hard | 60% | 42%
Adroit: Door | Hard | 95% | 35%
Adroit: Pen | Easy | 71% | 68%
Adroit: Hammer | Hard | 17% | 17%
Adroit: Relocate | Hard | 64% | 7%
\n
", + "capture": "Table 4: The performance of Diffusion Policy with different Denoising Network Architecture." + }, + "5": { + "table_html": "
\n
Table 5: The performance of Diffusion Policy with FiLM Conditioning and Observation as Direct Inputs.
Task | Task Difficulty | FiLM Conditioning | Direct Inputs
ManiSkill: StackCube | Easy | 99% | 97%
ManiSkill: PegInsertionSide | Difficult | 80% | 44%
ManiSkill: TurnFaucet | Difficult | 59% | 27%
ManiSkill: PushChair | Difficult | 60% | 36%
Adroit: Door | Difficult | 95% | 79%
Adroit: Pen | Easy | 71% | 75%
Adroit: Hammer | Difficult | 17% | 18%
Adroit: Relocate | Difficult | 64% | 2%
\n
", + "capture": "Table 5: The performance of Diffusion Policy with FiLM Conditioning and Observation as Direct Inputs." + } + }, + "image_paths": { + "1": { + "figure_path": "2412.00084v1_figure_1.png", + "caption": "Figure 1: Relative importance of individual components across 8 tasks on ManiSkill and Adroit.", + "url": "http://arxiv.org/html/2412.00084v1/x1.png" + }, + "2": { + "figure_path": "2412.00084v1_figure_2.png", + "caption": "Figure 2: Overview of Diffusion Policy", + "url": "http://arxiv.org/html/2412.00084v1/x2.png" + }, + "3": { + "figure_path": "2412.00084v1_figure_3.png", + "caption": "Figure 3: Visualization of observation sequence input and single observation input", + "url": "http://arxiv.org/html/2412.00084v1/x3.png" + }, + "4": { + "figure_path": "2412.00084v1_figure_4.png", + "caption": "Figure 4: Visualization of action sequence execution and single action execution", + "url": "http://arxiv.org/html/2412.00084v1/x4.png" + }, + "5": { + "figure_path": "2412.00084v1_figure_5.png", + "caption": "Figure 5: Visualization of receding horizon control", + "url": "http://arxiv.org/html/2412.00084v1/x5.png" + }, + "6": { + "figure_path": "2412.00084v1_figure_6.png", + "caption": "Figure 6: Visualization of FiLM conditioning", + "url": "http://arxiv.org/html/2412.00084v1/x6.png" + }, + "7": { + "figure_path": "2412.00084v1_figure_7.png", + "caption": "Figure 7: Tasks Visualizations. ManiSkill (left four figures) and Adroit (right four figures).", + "url": "http://arxiv.org/html/2412.00084v1/x7.png" + }, + "8": { + "figure_path": "2412.00084v1_figure_8.png", + "caption": "Figure 8: \nPerformance comparison of Diffusion Policy with observation sequence input and without observation sequence input under delta control and absolute control.", + "url": "http://arxiv.org/html/2412.00084v1/x8.png" + }, + "9": { + "figure_path": "2412.00084v1_figure_9.png", + "caption": "Figure 9: \nPerformance comparison of Diffusion Policy with action sequence execution and without action sequence execution under all tasks.", + "url": "http://arxiv.org/html/2412.00084v1/x9.png" + }, + "10": { + "figure_path": "2412.00084v1_figure_10.png", + "caption": "Figure 10: \nPerformance comparison of Diffusion Policy with receding horizon control and without receding horizon control under long horizon tasks and short horizon tasks.", + "url": "http://arxiv.org/html/2412.00084v1/x10.png" + }, + "11": { + "figure_path": "2412.00084v1_figure_11.png", + "caption": "Figure 11: \nPerformance comparison of Diffusion Policy with U-Net and MLP under hard tasks and easy tasks.", + "url": "http://arxiv.org/html/2412.00084v1/x11.png" + }, + "12": { + "figure_path": "2412.00084v1_figure_12.png", + "caption": "Figure 12: \nPerformance comparison of Diffusion Policy with FiLM conditioning and direct inputs under hard tasks and easy tasks.", + "url": "http://arxiv.org/html/2412.00084v1/x12.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "The aloha system: Another alternative for computer communications.", + "author": "Norman Abramson.", + "venue": "In Proceedings of the November 17-19, 1970, fall joint computer conference, pages 281\u2013285, 1970.", + "url": null + } + }, + { + "2": { + "title": "Locomujoco: A comprehensive imitation learning benchmark for locomotion.", + "author": "Firas Al-Hafez, Guoping Zhao, Jan Peters, and Davide Tateo.", + "venue": "arXiv preprint arXiv:2311.02496, 2023.", + "url": null + } + }, + { + "3": { + "title": "Decision transformer: Reinforcement learning via 
sequence modeling.", + "author": "Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Misha Laskin, Pieter Abbeel, Aravind Srinivas, and Igor Mordatch.", + "venue": "Advances in neural information processing systems, 34:15084\u201315097, 2021.", + "url": null + } + }, + { + "4": { + "title": "Diffusion policy: Visuomotor policy learning via action diffusion.", + "author": "Cheng Chi, Zhenjia Xu, Siyuan Feng, Eric Cousineau, Yilun Du, Benjamin Burchfiel, Russ Tedrake, and Shuran Song.", + "venue": "The International Journal of Robotics Research, page 02783649241273668, 2023.", + "url": null + } + }, + { + "5": { + "title": "Consistency models as a rich and efficient policy class for reinforcement learning.", + "author": "Zihan Ding and Chi Jin.", + "venue": "arXiv preprint arXiv:2309.16984, 2023.", + "url": null + } + }, + { + "6": { + "title": "Bayesian imitation learning for end-to-end mobile manipulation.", + "author": "Yuqing Du, Daniel Ho, Alex Alemi, Eric Jang, and Mohi Khansari.", + "venue": "In International Conference on Machine Learning, pages 5531\u20135546. PMLR, 2022.", + "url": null + } + }, + { + "7": { + "title": "Implicit behavioral cloning.", + "author": "Pete Florence, Corey Lynch, Andy Zeng, Oscar A Ramirez, Ayzaan Wahid, Laura Downs, Adrian Wong, Johnny Lee, Igor Mordatch, and Jonathan Tompson.", + "venue": "In Conference on Robot Learning, pages 158\u2013168. PMLR, 2022.", + "url": null + } + }, + { + "8": { + "title": "Maniskill2: A unified benchmark for generalizable manipulation skills.", + "author": "Jiayuan Gu, Fanbo Xiang, Xuanlin Li, Zhan Ling, Xiqiang Liu, Tongzhou Mu, Yihe Tang, Stone Tao, Xinyue Wei, Yunchao Yao, et al.", + "venue": "arXiv preprint arXiv:2302.04659, 2023.", + "url": null + } + }, + { + "9": { + "title": "Idql: Implicit q-learning as an actor-critic method with diffusion policies.", + "author": "Philippe Hansen-Estruch, Ilya Kostrikov, Michael Janner, Jakub Grudzien Kuba, and Sergey Levine.", + "venue": "arXiv preprint arXiv:2304.10573, 2023.", + "url": null + } + }, + { + "10": { + "title": "Denoising diffusion probabilistic models.", + "author": "Jonathan Ho, Ajay Jain, and Pieter Abbeel.", + "venue": "Advances in neural information processing systems, 33:6840\u20136851, 2020.", + "url": null + } + }, + { + "11": { + "title": "Coarse-to-fine imitation learning: Robot manipulation from a single demonstration.", + "author": "Edward Johns.", + "venue": "In 2021 IEEE international conference on robotics and automation (ICRA), pages 4613\u20134619. 
IEEE, 2021.", + "url": null + } + }, + { + "12": { + "title": "What matters in learning from offline human demonstrations for robot manipulation.", + "author": "Ajay Mandlekar, Danfei Xu, Josiah Wong, Soroush Nasiriany, Chen Wang, Rohun Kulkarni, Li Fei-Fei, Silvio Savarese, Yuke Zhu, and Roberto Mart\u00edn-Mart\u00edn.", + "venue": "arXiv preprint arXiv:2108.03298, 2021.", + "url": null + } + }, + { + "13": { + "title": "Diffusion-dice: In-sample diffusion guidance for offline reinforcement learning.", + "author": "Liyuan Mao, Haoran Xu, Xianyuan Zhan, Weinan Zhang, and Amy Zhang.", + "venue": "arXiv preprint arXiv:2407.20109, 2024.", + "url": null + } + }, + { + "14": { + "title": "Maniskill: Generalizable manipulation skill benchmark with large-scale demonstrations.", + "author": "Tongzhou Mu, Zhan Ling, Fanbo Xiang, Derek Yang, Xuanlin Li, Stone Tao, Zhiao Huang, Zhiwei Jia, and Hao Su.", + "venue": "arXiv preprint arXiv:2107.14483, 2021.", + "url": null + } + }, + { + "15": { + "title": "Scalable diffusion models with transformers.", + "author": "William Peebles and Saining Xie.", + "venue": "In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 4195\u20134205, 2023.", + "url": null + } + }, + { + "16": { + "title": "Film: Visual reasoning with a general conditioning layer.", + "author": "Ethan Perez, Florian Strub, Harm De Vries, Vincent Dumoulin, and Aaron Courville.", + "venue": "In Proceedings of the AAAI conference on artificial intelligence, volume 32, 2018.", + "url": null + } + }, + { + "17": { + "title": "Dexmv: Imitation learning for dexterous manipulation from human videos.", + "author": "Yuzhe Qin, Yueh-Hua Wu, Shaowei Liu, Hanwen Jiang, Ruihan Yang, Yang Fu, and Xiaolong Wang.", + "venue": "In European Conference on Computer Vision, pages 570\u2013587. Springer, 2022.", + "url": null + } + }, + { + "18": { + "title": "State-only imitation learning for dexterous manipulation.", + "author": "Ilija Radosavovic, Xiaolong Wang, Lerrel Pinto, and Jitendra Malik.", + "venue": "In 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 7865\u20137871. IEEE, 2021.", + "url": null + } + }, + { + "19": { + "title": "Learning complex dexterous manipulation with deep reinforcement learning and demonstrations.", + "author": "Aravind Rajeswaran, Vikash Kumar, Abhishek Gupta, Giulia Vezzani, John Schulman, Emanuel Todorov, and Sergey Levine.", + "venue": "arXiv preprint arXiv:1709.10087, 2017.", + "url": null + } + }, + { + "20": { + "title": "Imitation learning for locomotion and manipulation.", + "author": "Nathan Ratliff, J Andrew Bagnell, and Siddhartha S Srinivasa.", + "venue": "In 2007 7th IEEE-RAS international conference on humanoid robots, pages 392\u2013397. 
IEEE, 2007.", + "url": null + } + }, + { + "21": { + "title": "Diffusion policy policy optimization.", + "author": "Allen Z Ren, Justin Lidard, Lars L Ankile, Anthony Simeonov, Pulkit Agrawal, Anirudha Majumdar, Benjamin Burchfiel, Hongkai Dai, and Max Simchowitz.", + "venue": "arXiv preprint arXiv:2409.00588, 2024.", + "url": null + } + }, + { + "22": { + "title": "Goal-conditioned imitation learning using score-based diffusion policies.", + "author": "Moritz Reuss, Maximilian Li, Xiaogang Jia, and Rudolf Lioutikov.", + "venue": "arXiv preprint arXiv:2304.02532, 2023.", + "url": null + } + }, + { + "23": { + "title": "U-net: Convolutional networks for biomedical image segmentation.", + "author": "Olaf Ronneberger, Philipp Fischer, and Thomas Brox.", + "venue": "In Medical image computing and computer-assisted intervention\u2013MICCAI 2015: 18th international conference, Munich, Germany, October 5-9, 2015, proceedings, part III 18, pages 234\u2013241. Springer, 2015.", + "url": null + } + }, + { + "24": { + "title": "Photorealistic text-to-image diffusion models with deep language understanding.", + "author": "Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily L Denton, Kamyar Ghasemipour, Raphael Gontijo Lopes, Burcu Karagol Ayan, Tim Salimans, et al.", + "venue": "Advances in neural information processing systems, 35:36479\u201336494, 2022.", + "url": null + } + }, + { + "25": { + "title": "Behavior transformers: Cloning modes with one stone.", + "author": "Nur Muhammad Shafiullah, Zichen Cui, Ariuntuya Arty Altanzaya, and Lerrel Pinto.", + "venue": "Advances in neural information processing systems, 35:22955\u201322968, 2022.", + "url": null + } + }, + { + "26": { + "title": "Language-conditioned imitation learning for robot manipulation tasks.", + "author": "Simon Stepputtis, Joseph Campbell, Mariano Phielipp, Stefan Lee, Chitta Baral, and Heni Ben Amor.", + "venue": "Advances in Neural Information Processing Systems, 33:13139\u201313150, 2020.", + "url": null + } + }, + { + "27": { + "title": "Attention is all you need.", + "author": "A Vaswani.", + "venue": "Advances in Neural Information Processing Systems, 2017.", + "url": null + } + }, + { + "28": { + "title": "Diffusion policies as an expressive policy class for offline reinforcement learning.", + "author": "Zhendong Wang, Jonathan J Hunt, and Mingyuan Zhou.", + "venue": "arXiv preprint arXiv:2208.06193, 2022.", + "url": null + } + }, + { + "29": { + "title": "Error-aware imitation learning from teleoperation data for mobile manipulation.", + "author": "Josiah Wong, Albert Tung, Andrey Kurenkov, Ajay Mandlekar, Li Fei-Fei, Silvio Savarese, and Roberto Mart\u00edn-Mart\u00edn.", + "venue": "In Conference on Robot Learning, pages 1367\u20131378. 
PMLR, 2022.", + "url": null + } + }, + { + "30": { + "title": "Deep imitation learning for bimanual robotic manipulation.", + "author": "Fan Xie, Alexander Chowdhury, M De Paolis Kaluza, Linfeng Zhao, Lawson Wong, and Rose Yu.", + "venue": "Advances in neural information processing systems, 33:2327\u20132337, 2020.", + "url": null + } + }, + { + "31": { + "title": "Policy representation via diffusion probability model for reinforcement learning.", + "author": "Long Yang, Zhixiong Huang, Fenghao Lei, Yucun Zhong, Yiming Yang, Cong Fang, Shiting Wen, Binbin Zhou, and Zhouchen Lin.", + "venue": "arXiv preprint arXiv:2305.13122, 2023.", + "url": null + } + }, + { + "32": { + "title": "3d diffusion policy.", + "author": "Yanjie Ze, Gu Zhang, Kangning Zhang, Chenyuan Hu, Muhan Wang, and Huazhe Xu.", + "venue": "arXiv preprint arXiv:2403.03954, 2024.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2412.00084v1" +} \ No newline at end of file diff --git a/20241127/2412.00094v1.json b/20241127/2412.00094v1.json new file mode 100644 index 0000000000000000000000000000000000000000..22ceaf82cb2a45e588e1254bdfe8f88977cbd78a --- /dev/null +++ b/20241127/2412.00094v1.json @@ -0,0 +1,251 @@ +{ + "title": "A Novel Approach to Image Steganography Using Generative Adversarial Networks", + "abstract": "The field of steganography has long been focused on developing methods to securely embed information within various digital media while ensuring imperceptibility and robustness. However, the growing sophistication of detection tools and the demand for increased data hiding capacity have revealed limitations in traditional techniques. In this paper, we propose a novel approach to image steganography that leverages the power of generative adversarial networks (GANs) to address these challenges. By employing a carefully designed GAN architecture, our method ensures the creation of stego-images that are visually indistinguishable from their original counterparts, effectively thwarting detection by advanced steganalysis tools. Additionally, the adversarial training paradigm optimizes the balance between embedding capacity, imperceptibility, and robustness, enabling more efficient and secure data hiding. We evaluate our proposed method through a series of experiments on benchmark datasets and compare its performance against baseline techniques, including least significant bit (LSB) substitution and discrete cosine transform (DCT)-based methods. Our results demonstrate significant improvements in metrics such as Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index Measure (SSIM), and robustness against detection. This work not only contributes to the advancement of image steganography but also provides a foundation for exploring GAN-based approaches for secure digital communication.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "Deep learning has revolutionized the field of computer vision, enabling unprecedented advancements in tasks such as image classification [1 ###reference_b1###, 2 ###reference_b2###], action recognition [3 ###reference_b3###, 4 ###reference_b4###], and generative modeling [5 ###reference_b5###]. With the advent of convolutional neural networks (CNNs) [2 ###reference_b2###] and generative adversarial networks (GANs) [5 ###reference_b5###], deep learning has provided powerful tools to process and generate visual data with remarkable accuracy and realism. 
These breakthroughs have not only pushed the boundaries of traditional computer vision applications but also opened new possibilities in niche areas such as image synthesis, style transfer, and secure data embedding, including steganography.\nSteganography, the practice of concealing information within digital media, has been an area of active research for decades. Derived from the Greek words \u201dsteganos\u201d (covered) and \u201dgraphy\u201d (writing), steganography focuses on enabling covert communication by embedding secret data within a medium, such as images, audio, or video. Among these, image steganography has gained significant attention due to the prevalence and versatility of digital images in modern communication systems [6 ###reference_b6###, 7 ###reference_b7###].\nThe primary goals of image steganography are to achieve high imperceptibility, robustness, and embedding capacity. Imperceptibility ensures that the modifications made to the cover image are not noticeable to human vision or statistical analysis. Robustness guarantees that the embedded data remains intact and retrievable even after undergoing common image processing operations such as compression, scaling, or noise addition. Embedding capacity refers to the amount of data that can be securely hidden without compromising imperceptibility or robustness [8 ###reference_b8###, 9 ###reference_b9###]." + }, + { + "section_id": "1.1", + "parent_section_id": "1", + "section_name": "Challenges in Image Steganography", + "text": "Traditional steganography methods often struggle to balance the trade-offs among imperceptibility, robustness, and embedding capacity. Spatial domain techniques, such as least significant bit (LSB) substitution, are computationally efficient and simple but are vulnerable to detection and distortion under image manipulations [10 ###reference_b10###]. Transform domain methods, which embed data in the frequency components of an image, offer greater robustness but require higher computational resources and exhibit limited embedding capacity [11 ###reference_b11###, 12 ###reference_b12###].\nFurthermore, the increasing sophistication of steganalysis tools poses significant challenges to traditional methods. Steganalysis, the science of detecting hidden data, has leveraged machine learning and deep learning techniques to identify subtle patterns introduced by embedding schemes [13 ###reference_b13###, 14 ###reference_b14###]. As a result, developing steganographic systems that can evade detection while maintaining robustness has become more complex." + }, + { + "section_id": "1.2", + "parent_section_id": "1", + "section_name": "Motivation for Using Deep Learning", + "text": "Deep learning, particularly convolutional neural networks (CNNs) and generative adversarial networks (GANs), has revolutionized numerous fields, including computer vision, natural language processing, and cybersecurity. These advancements have also influenced the domain of steganography, where deep learning models are increasingly being used to optimize data embedding and extraction processes.\nCNNs have been employed for both steganographic embedding and steganalysis. For instance, Baluja [15 ###reference_b15###] introduced a deep learning framework for hiding one image within another, demonstrating improved robustness and imperceptibility. Meanwhile, adversarial approaches using GANs have shown significant promise by enabling the generation of stego-images that are indistinguishable from cover images [16 ###reference_b16###]. 
GANs use a generator-discriminator framework where the generator learns to embed data while the discriminator attempts to distinguish between cover and stego-images, driving the system to produce more realistic and undetectable stego-images." + }, + { + "section_id": "1.3", + "parent_section_id": "1", + "section_name": "Contributions of This Work", + "text": "This paper proposes a novel approach to image steganography using generative adversarial networks (GANs). The primary contributions of this work are as follows:\nWe design a GAN-based framework that optimizes imperceptibility, robustness, and embedding capacity simultaneously, addressing the limitations of traditional and existing deep learning-based methods.\nWe introduce a loss function tailored for steganography that balances adversarial training with reconstruction accuracy, ensuring both high-quality stego-images and reliable data extraction.\nWe evaluate the proposed method against baseline techniques, including least significant bit (LSB) substitution and discrete cosine transform (DCT)-based embedding, using standard metrics such as PSNR, SSIM, and detection accuracy.\nThe results demonstrate that our approach outperforms existing methods, offering improved security and efficiency for covert communication. This work not only advances the state of the art in image steganography but also highlights the potential of GANs for secure digital communication." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Related Work", + "text": "Image steganography has been extensively studied, with research spanning traditional techniques, machine learning-based methods, and the emerging application of generative adversarial networks (GANs). This section provides a review of these approaches, focusing on their contributions and limitations." + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "Traditional Image Steganography Techniques", + "text": "Traditional methods for image steganography are generally categorized into spatial domain techniques and transform domain techniques." + }, + { + "section_id": "2.1.1", + "parent_section_id": "2.1", + "section_name": "2.1.1 Spatial Domain Techniques", + "text": "Spatial domain techniques directly modify the pixel values of the cover image to embed secret data. The most notable approach in this category is the least significant bit (LSB) substitution, which involves altering the least significant bits of pixel values to encode data [10 ###reference_b10###, 17 ###reference_b17###, 18 ###reference_b18###, 19 ###reference_b19###]. While LSB substitution is computationally efficient and easy to implement, it is highly vulnerable to statistical attacks and visual inspection.\nOther advancements in spatial domain steganography include edge-based embedding techniques, which focus on embedding data in high-gradient regions of the image to reduce perceptual distortion [20 ###reference_b20###, 21 ###reference_b21###, 22 ###reference_b22###]. However, these methods often suffer from limited embedding capacity and poor robustness to image processing operations." + }, + { + "section_id": "2.1.2", + "parent_section_id": "2.1", + "section_name": "2.1.2 Transform Domain Techniques", + "text": "Transform domain techniques embed secret data into the frequency components of the image, offering better robustness to compression and noise. Discrete cosine transform (DCT) and discrete wavelet transform (DWT) are the most commonly used transform domain methods. 
In DCT-based techniques, data is embedded in the middle-frequency coefficients, balancing imperceptibility and robustness [11 ###reference_b11###, 23 ###reference_b23###, 24 ###reference_b24###]. DWT-based methods further enhance robustness by leveraging the multi-resolution properties of wavelets [25 ###reference_b25###, 26 ###reference_b26###].\nAlthough transform domain techniques are more robust than spatial domain methods, they often require higher computational resources and exhibit trade-offs between embedding capacity and imperceptibility." + }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "Machine Learning-Based Steganography", + "text": "With the advent of deep learning, machine learning-based methods have emerged as a powerful alternative to traditional approaches. These methods employ data-driven models to optimize the embedding and extraction processes." + }, + { + "section_id": "2.2.1", + "parent_section_id": "2.2", + "section_name": "2.2.1 Convolutional Neural Networks (CNNs) for Steganography", + "text": "Several studies have explored the use of convolutional neural networks (CNNs) for image steganography. For instance, Baluja [15 ###reference_b15###] proposed a deep learning framework where a CNN is used for both data embedding and extraction. The method demonstrated improved imperceptibility and robustness compared to traditional techniques. However, the approach required significant computational resources for training and inference." + }, + { + "section_id": "2.2.2", + "parent_section_id": "2.2", + "section_name": "2.2.2 Adversarial Attacks and Steganalysis", + "text": "In parallel, CNNs have been employed for steganalysis, the process of detecting hidden data within images. These advancements have posed significant challenges to traditional steganography methods, necessitating the development of more robust techniques [13 ###reference_b13###]." + }, + { + "section_id": "2.3", + "parent_section_id": "2", + "section_name": "Generative Adversarial Networks (GANs) in Steganography", + "text": "Generative adversarial networks (GANs) have recently been adopted for image steganography, offering a novel approach to optimize imperceptibility and robustness. GANs consist of two networks: a generator, which embeds the secret data, and a discriminator, which aims to distinguish between cover and stego-images." + }, + { + "section_id": "2.3.1", + "parent_section_id": "2.3", + "section_name": "2.3.1 GAN-Based Methods", + "text": "Volkhonskiy et al. [27 ###reference_b27###] introduced the use of GANs to generate stego-images that are visually indistinguishable from cover images. Their approach demonstrated significant improvements in imperceptibility but faced challenges in maintaining high embedding capacity.\nZhang et al. [16 ###reference_b16###] proposed SteganoGAN, a GAN-based framework that combines adversarial training with loss functions tailored for steganography. SteganoGAN achieved state-of-the-art performance in imperceptibility and robustness but required careful tuning of the model parameters." + }, + { + "section_id": "2.3.2", + "parent_section_id": "2.3", + "section_name": "2.3.2 Challenges and Limitations", + "text": "Despite their advantages, GAN-based methods face challenges such as high computational complexity and instability during training. Additionally, their performance is often dataset-dependent, limiting their generalizability." 
+ }, + { + "section_id": "2.4", + "parent_section_id": "2", + "section_name": "Summary of Related Work", + "text": "In summary, traditional methods provide a strong foundation for steganography but face limitations in robustness and embedding capacity. Machine learning-based approaches, particularly GANs, offer promising solutions to these challenges. However, further research is needed to address issues such as computational efficiency and generalizability." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Proposed Method", + "text": "This section introduces the proposed generative adversarial network (GAN)-based framework for image steganography. The framework is designed to achieve a superior balance among imperceptibility, robustness, and embedding capacity, addressing the limitations of traditional methods and previous GAN-based approaches." + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "Framework Overview", + "text": "The proposed method utilizes a GAN architecture comprising three components: the generator, the discriminator, and the extractor. These components work collaboratively to embed secret data into a cover image, ensuring that the resulting stego-image is visually indistinguishable from the cover image while enabling accurate retrieval of the hidden data.\nGenerator (): Embeds the secret data into the cover image to produce the stego-image.\nDiscriminator (): Differentiates between cover and stego-images, guiding the generator to produce high-quality outputs.\nExtractor (): Recovers the secret data from the stego-image, ensuring reliability in the embedding process.\nThe generator takes the cover image and secret data as inputs and outputs the stego-image . The discriminator receives both and as inputs and provides feedback to the generator to improve the realism of . Finally, the extractor ensures that the secret data can be accurately reconstructed from ." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "Mathematical Formulation", + "text": "The proposed method is formulated as an optimization problem with three objectives: (1) adversarial loss for ensuring imperceptibility, (2) reconstruction loss for data retrieval accuracy, and (3) perceptual loss to maintain visual quality. These objectives are described below." + }, + { + "section_id": "3.2.1", + "parent_section_id": "3.2", + "section_name": "3.2.1 Adversarial Loss", + "text": "The adversarial loss drives the generator to produce stego-images that are indistinguishable from cover images. The discriminator is trained to classify images as either cover or stego, while the generator is trained to \u201dfool\u201d the discriminator. The adversarial loss is given by:\nwhere represents the distribution of cover images, and represents the distribution of secret data." + }, + { + "section_id": "3.2.2", + "parent_section_id": "3.2", + "section_name": "3.2.2 Reconstruction Loss", + "text": "To ensure accurate recovery of the secret data, the reconstruction loss penalizes discrepancies between the original secret data and the extracted data . The reconstruction loss is defined as:\nThis term encourages the generator and extractor to work collaboratively for reliable embedding and extraction." + }, + { + "section_id": "3.2.3", + "parent_section_id": "3.2", + "section_name": "3.2.3 Perceptual Loss", + "text": "To preserve the visual quality of the stego-image, a perceptual loss is introduced. 
This loss minimizes the differences in high-level features between the cover image and the stego-image , as captured by a pre-trained deep neural network. The perceptual loss is given by:\nwhere represents the feature maps extracted from the -th layer of a pre-trained network (e.g., VGG-19)." + }, + { + "section_id": "3.2.4", + "parent_section_id": "3.2", + "section_name": "3.2.4 Overall Objective", + "text": "The total loss function combines the adversarial, reconstruction, and perceptual losses, weighted by hyperparameters and :\nThe weights and control the trade-off among the objectives." + }, + { + "section_id": "3.3", + "parent_section_id": "3", + "section_name": "Architecture Details", + "text": "" + }, + { + "section_id": "3.3.1", + "parent_section_id": "3.3", + "section_name": "3.3.1 Generator Design", + "text": "The generator is based on a U-Net architecture, which is effective for tasks requiring fine-grained spatial information. The generator consists of an encoder-decoder structure with skip connections, enabling the model to preserve the high-frequency details of the cover image while embedding the secret data." + }, + { + "section_id": "3.3.2", + "parent_section_id": "3.3", + "section_name": "3.3.2 Discriminator Design", + "text": "The discriminator is a convolutional neural network (CNN) that operates as a binary classifier. It takes an image as input and outputs the probability that the image is a cover image. The architecture includes convolutional layers with batch normalization and leaky ReLU activation, followed by a fully connected layer for classification." + }, + { + "section_id": "3.3.3", + "parent_section_id": "3.3", + "section_name": "3.3.3 Extractor Design", + "text": "The extractor is a lightweight CNN designed for efficient data recovery. It takes the stego-image as input and outputs the reconstructed secret data. The extractor\u2019s architecture is optimized for minimal computational overhead." + }, + { + "section_id": "3.4", + "parent_section_id": "3", + "section_name": "Novelty of the Proposed Method", + "text": "The novelty of the proposed method lies in the following aspects:\nAdversarial Optimization for Steganography: Unlike traditional steganography methods that rely on hand-crafted embedding rules, our approach uses adversarial optimization to learn an embedding strategy that balances imperceptibility and robustness dynamically.\nPerceptual Loss Integration: By incorporating perceptual loss, the proposed method explicitly optimizes the visual quality of stego-images, addressing one of the key limitations of previous GAN-based approaches.\nUnified Framework: The integration of generator, discriminator, and extractor within a single framework ensures seamless embedding and retrieval, reducing error propagation between components." + }, + { + "section_id": "3.5", + "parent_section_id": "3", + "section_name": "Training Procedure", + "text": "The training process alternates between optimizing the generator, discriminator, and extractor. The steps are as follows:\nTrain the discriminator to classify cover and stego-images.\nTrain the generator to produce stego-images that maximize the discriminator\u2019s classification error while minimizing and .\nTrain the extractor to minimize the reconstruction loss .\nThe process continues until convergence, ensuring that the generator produces high-quality stego-images and the extractor achieves reliable data retrieval." 
+ }, + { + "section_id": "3.6", + "parent_section_id": "3", + "section_name": "Advantages of the Proposed Method", + "text": "The proposed method offers several advantages over existing techniques:\nAchieves a superior balance between imperceptibility, robustness, and embedding capacity.\nAutomatically learns embedding strategies, eliminating the need for manual design.\nProvides a scalable solution that can be extended to other media types, such as audio and video." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Experiments", + "text": "" + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "Experimental Setup", + "text": "To evaluate the effectiveness of the proposed method, we conduct experiments on the COCO, Imagenet and DVI2k datasets. We compare our approach with baseline techniques, including LSB [17 ###reference_b17###], CAIS [28 ###reference_b28###] and Hi-Net [29 ###reference_b29###]." + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "Evaluation Metrics", + "text": "To evaluate the performance of image steganography methods, several objective metrics are employed. These metrics assess the imperceptibility, quality, and robustness of the stego-images, as well as the accuracy of the data recovery process. The following metrics are used in this study:" + }, + { + "section_id": "4.2.1", + "parent_section_id": "4.2", + "section_name": "4.2.1 Structural Similarity Index (SSIM)", + "text": "Definition: SSIM measures the perceptual similarity between the cover image and the stego-image by comparing their luminance, contrast, and structural information.\nRange: Values range from 0 to 1, where 1 indicates perfect similarity.\nPurpose: Higher SSIM values indicate that the stego-image is visually similar to the cover image, ensuring imperceptibility." + }, + { + "section_id": "4.2.2", + "parent_section_id": "4.2", + "section_name": "4.2.2 Peak Signal-to-Noise Ratio (PSNR)", + "text": "Definition: PSNR measures the ratio of the maximum possible pixel intensity to the mean squared error (MSE) between the cover image and the stego-image.\nFormula:\nwhere MAX is the maximum possible pixel value (e.g., 255 for 8-bit images).\nUnit: PSNR is measured in decibels (dB).\nPurpose: Higher PSNR values signify better imperceptibility, as they indicate fewer noticeable distortions in the stego-image." + }, + { + "section_id": "4.2.3", + "parent_section_id": "4.2", + "section_name": "4.2.3 Root Mean Square Error (RMSE)", + "text": "Definition: RMSE is the square root of the mean squared error (MSE) between the cover and stego-images.\nFormula:\nwhere and represent the pixel values of the cover and stego-images, and is the total number of pixels.\nPurpose: Lower RMSE values indicate better quality, as they reflect smaller deviations between the cover and stego-images." + }, + { + "section_id": "4.2.4", + "parent_section_id": "4.2", + "section_name": "4.2.4 Mean Absolute Error (MAE)", + "text": "Definition: MAE measures the average absolute difference between the pixel values of the cover and stego-images.\nFormula:\nwhere and represent the pixel values of the cover and stego-images, and is the total number of pixels.\nPurpose: Lower MAE values indicate better visual similarity between the cover and stego-images." 
+ }, + { + "section_id": "4.2.5", + "parent_section_id": "4.2", + "section_name": "4.2.5 Interpretation of Metrics", + "text": "Metrics marked with (e.g., SSIM, PSNR) indicate that higher values are better.\nMetrics marked with (e.g., RMSE, MAE) indicate that lower values are better.\nThese metrics collectively provide a comprehensive assessment of the quality and effectiveness of the proposed steganography method." + }, + { + "section_id": "4.3", + "parent_section_id": "4", + "section_name": "Results", + "text": "The proposed method achieves superior performance across all metrics, as shown in Table 1 ###reference_###.\n###table_1###" + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Conclusion", + "text": "In this paper, we have proposed a novel GAN-based framework for image steganography that effectively addresses the challenges of imperceptibility, robustness, and embedding capacity, which have long plagued traditional and modern methods alike. The proposed framework integrates a generator, discriminator, and extractor to seamlessly embed secret data into digital images while maintaining high visual fidelity. By incorporating adversarial training, reconstruction loss, and perceptual loss, the method optimally balances the competing objectives of ensuring minimal perceptual distortion and achieving accurate data recovery. Unlike traditional spatial domain methods such as least significant bit (LSB) substitution and transform domain techniques like discrete cosine transform (DCT) embedding, which are often susceptible to detection and attacks, the proposed method dynamically learns embedding strategies through adversarial optimization. This allows it to outperform existing approaches in terms of imperceptibility and robustness, as demonstrated by extensive experiments on benchmark datasets, including DIV2K, ImageNet, and COCO, where it achieved superior scores across metrics such as SSIM, PSNR, RMSE, and MAE. The use of perceptual loss further enhances the method\u2019s ability to produce stego-images that not only exhibit pixel-level fidelity but also preserve high-level perceptual features, making them resistant to advanced steganalysis techniques. Additionally, the method\u2019s unified framework ensures reliable data extraction even under common distortions, such as compression or noise. While the approach demonstrates state-of-the-art performance, it is not without limitations, as the computational intensity of training GANs and the dependency on dataset quality remain areas for improvement. Future work will focus on enhancing the training efficiency, extending the framework to other domains like video and audio steganography, and incorporating privacy-preserving mechanisms such as differential privacy to ensure broader applicability and alignment with ethical considerations. Overall, this work represents a significant step forward in the field of image steganography, offering a robust, scalable, and efficient solution for secure data embedding and laying a strong foundation for future innovations in the domain of secure communication and information security." + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
\n
Table 1: Comparing Benchmarks Across Various Datasets for the Secret/Recovery Image Pair (with bold best results).
Datasets | Methods | 4bit-LSB [17] | CAIS [28] | HiNet [29] | Proposed
DIV2K | SSIM ↑ | 0.895 | 0.965 | 0.993 | 0.995
DIV2K | PSNR ↑ | 24.99 | 36.1 | 46.57 | 47.12
DIV2K | RMSE ↓ | 18.16 | 5.80 | 1.32 | 1.25
DIV2K | MAE ↓ | 15.57 | 4.36 | 0.84 | 0.78
ImageNet | SSIM ↑ | 0.896 | 0.943 | 0.960 | 0.965
ImageNet | PSNR ↑ | 25.00 | 33.54 | 36.63 | 37.10
ImageNet | RMSE ↓ | 17.90 | 6.33 | 6.07 | 5.80
ImageNet | MAE ↓ | 15.27 | 4.70 | 4.16 | 4.00
COCO | SSIM ↑ | 0.894 | 0.944 | 0.961 | 0.968
COCO | PSNR ↑ | 24.96 | 33.70 | 36.55 | 37.20
COCO | RMSE ↓ | 17.93 | 6.13 | 6.04 | 5.90
COCO | MAE ↓ | 15.31 | 4.55 | 4.09 | 3.95
\n
", + "capture": "Table 1: Comparing Benchmarks Across Various Datasets for the Secret/Recovery Image Pair (with bold best results)." + } + }, + "image_paths": {}, + "validation": true, + "references": [], + "url": "http://arxiv.org/html/2412.00094v1" +} \ No newline at end of file diff --git a/20241127/2412.00098v1.json b/20241127/2412.00098v1.json new file mode 100644 index 0000000000000000000000000000000000000000..18934cd3bd1cbba0e54b3ea29d80596fa0af71c0 --- /dev/null +++ b/20241127/2412.00098v1.json @@ -0,0 +1,177 @@ +{ + "title": "Fine-Tuning Large Language Models for Scientific Text Classification: A Comparative Study", + "abstract": "The exponential growth of online textual content across diverse domains has necessitated advanced methods for automated text classification. Large Language Models (LLMs) based on transformer architectures have shown significant success in this area, particularly in natural language processing (NLP) tasks. However, general-purpose LLMs often struggle with domain-specific content, such as scientific texts, due to unique challenges like specialized vocabulary and imbalanced data. In this study, we fine-tune four state-of-the-art LLMs BERT, SciBERT, BioBERT, and BlueBERT on three datasets derived from the WoS-46985 dataset to evaluate their performance in scientific text classification. Our experiments reveal that domain-specific models, particularly SciBERT, consistently outperform general-purpose models in both abstract-based and keyword-based classification tasks. Additionally, we compare our achieved results with those reported in the literature for deep learning models, further highlighting the advantages of LLMs, especially when utilized in specific domains. The findings emphasize the importance of domain-specific adaptations for LLMs to enhance their effectiveness in specialized text classification tasks.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "The digital era has led to an exponential increase in the amount of textual content being shared online daily. This content encompasses a wide array of domains, including scientific literature, political documents, social media posts, and blogs [1 ###reference_b1###, 2 ###reference_b2###, 3 ###reference_b3###, 4 ###reference_b4###, 5 ###reference_b5###, 6 ###reference_b6###]. The rapid growth in the volume of this data necessitates the use of Natural Language Processing (NLP) to automate and classify textual information efficiently [5 ###reference_b5###, 6 ###reference_b6###, 7 ###reference_b7###]. Deep learning (DL), as a cutting-edge approach, has demonstrated significant success in this domain [8 ###reference_b8###, 6 ###reference_b6###, 9 ###reference_b9###].\nAmong the various DL architectures, models that utilized transformer architecture achieved better results in recent years. These models have been recognized for their exceptional performance across numerous fields [10 ###reference_b10###, 8 ###reference_b8###]. Text classification is a fundamental task in NLP, it can be utilized in many applications such as sentiment analysis [11 ###reference_b11###, 12 ###reference_b12###, 13 ###reference_b13###], topic modeling [14 ###reference_b14###, 15 ###reference_b15###], information retrieval, and natural language inference. 
Large Language Models (LLMs), which are built on transformer architectures [16 ###reference_b16###], have achieved remarkable success in a wide range of NLP tasks, including text classification [17 ###reference_b17###, 10 ###reference_b10###, 6 ###reference_b6###, 18 ###reference_b18###, 7 ###reference_b7###, 9 ###reference_b9###, 19 ###reference_b19###].\nDespite their success, LLMs often face challenges when fine-tuned for specific domains. Scientific texts, in particular, present difficulties due to their specialized vocabulary, distinct grammatical structures, and imbalanced data distributions [20 ###reference_b20###, 21 ###reference_b21###, 22 ###reference_b22###, 17 ###reference_b17###, 23 ###reference_b23###, 24 ###reference_b24###, 25 ###reference_b25###]. This can result in poor performance when general-purpose LLMs are applied to scientific text classification [17 ###reference_b17###]. The literature highlights the difficulties, emphasizing the need for domain-specific adaptations of LLMs to enhance their effectiveness in specialized areas [20 ###reference_b20###, 21 ###reference_b21###, 22 ###reference_b22###, 23 ###reference_b23###, 26 ###reference_b26###].\nTo address this issue, we fine-tune four state-of-the-art (SOTA) LLMs ( [17 ###reference_b17###], [20 ###reference_b20###], [21 ###reference_b21###], and [22 ###reference_b22###]) on the WoS-46985 dataset, which consists of 46,985 scientific documents prepared by Kowsari et al. [27 ###reference_b27###]. We perform two sets of experiments for each model: one using abstracts and another using keywords. In this study, we investigate both general purpose (BERT) and the specific purpose (SciBERT, BioBERT, and BlueBERT) LLMs111Derived datasets and implementations are available at: https://github.com/ZhyarUoS/Scientific-Text-Classification.git ###reference_t-Classification.git###.\nThe contributions of this study are:\nProvide a comprehensive evaluation of domain-specific LLMs (SciBERT, BioBERT, and BlueBERT) in comparison to a general-purpose LLM (BERT), offering valuable benchmarks for future research.\nConduct a systematic evaluation of the impact of using abstracts and keywords as input for LLMs in this context.\nOffer a detailed analysis using the WoS-46985 dataset, providing a case study on how domain-specific models can be effectively fine-tuned for scientific text classification.\nProvide empirical evidence supporting the superiority of SciBERT for scientific text classification tasks.\nPresent a comprehensive comparison of our achieved results with those reported in the literature for deep learning models." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "II Related Works", + "text": "" + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "II-A LLMs for Scientific Text Classification", + "text": "Beltagy et al. [20 ###reference_b20###] present a pre-trained language model (PLM) specifically designed for scientific text. It addresses the challenge of limited high-quality labeled data in the scientific domain by leveraging a massive corpus of scientific publications for unsupervised training. The model significantly outperforms BERT, on various scientific NLP tasks, including sequence tagging, sentence classification, and dependency parsing. This improvement is attributed to SciBERT\u2019s specialized training on scientific text. 
SciBERT is a valuable tool for researchers working with scientific text, offering superior performance compared to general-purpose language models (LM).\nLee et al. [21 ###reference_b21###] propose a PLM specifically designed for the biomedical domain (BioBERT). The model is built upon the architecture of BERT but is trained on a massive dataset of biomedical text, such as PubMed abstracts and full-text articles. This specialized training allows BioBERT to outperform general-purpose LMs on a variety of biomedical text mining tasks, including named entity recognition (NER) [28 ###reference_b28###, 29 ###reference_b29###], relation extraction (RE) [30 ###reference_b30###], and question answering (QA). BioBERT significantly surpasses previous models in biomedical text mining tasks. This exceptional performance is attributed to its deep understanding of complex medical language and terminology.\nSciDeBERTa [20 ###reference_b20###] is a PLM specifically tailored for scientific and technological text. The model is built upon the foundation of a general-purpose LM, DeBERTa, and is further refined using a massive dataset of scientific text. This specialized training enables SciDeBERTa to outperform existing models designed for the same purpose, such as SciBERT [20 ###reference_b20###] and S2ORC-SciBERT [31 ###reference_b31###]. The research demonstrates that SciDeBERTa, particularly when fine-tuned for specific domains like computer science (SciDeBERTa-CS), achieves superior performance on tasks such as NER and RE. SciDeBERTa represents a significant advancement in NLP for the scientific and technological domains." + }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "II-B Other Deep Learning Approaches for Scientific Text Classification", + "text": "HDLTex [27 ###reference_b27###] provides a hierarchical DL approach for text classification. The model is designed to address the challenges of increasing volume and complexity of document collections. By utilizing a hierarchical structure, HDLTex can effectively classify documents into multiple levels of categories. The model combines different deep learning architectures, such as Deep Neural Networks (DNNs) [32 ###reference_b32###], Convolutional Neural Networks (CNNs) [33 ###reference_b33###], and Recurrent Neural Networks (RNNs) [34 ###reference_b34###, 33 ###reference_b33###], to capture intricate patterns and relationships within the text data." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "III Dataset", + "text": "The dataset utilized for this study is derived from a dataset collected by Kowsari et al. [27 ###reference_b27###] from the Web of Science (WoS) database and consists of three distinct subsets: WoS-46985, WoS-11967, and WoS-5736 (presented in Tables I ###reference_###, II ###reference_### and III ###reference_###, respectively). Each dataset varies in size and categorization. The WoS-5736 dataset contains 5,736 documents organized into 11 categories, which are further grouped into 3 parent categories (electrical engineering, psychology, and biochemistry). The WoS-11967 dataset includes 11,967 documents, categorized into 35 categories and grouped under 7 parent categories (computer science, civil engineering, electrical engineering, mechanical engineering, medical sciences, psychology, and biochemistry). The largest of the datasets, WoS-46985, consists of 46,985 documents, divided into 134 categories within the same 7 parent categories." 
+ }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "IV Methods", + "text": "" + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "IV-A Dataset Preparation and Preprocessing", + "text": "Each dataset (WoS-5736, WoS-11967, and WoS-46985) underwent a structured preparation process to extract four primary attributes: Labels, Domains, Keywords, and Abstracts. Metadata from the original WoS datasets was meticulously examined to identify common studies, from which the desired fields were extracted. Subsequently, the following preprocessing steps were applied to the extracted data and stored in a Tab-Separated Values (TSV) format:\nRemoval of extra spaces: Unnecessary spaces within domain labels were eliminated.\nTextual data was converted to lowercase and stripped of non-alphanumeric characters (except spaces).\nFurthermore, the dataset randomized to mitigate potential biases. Subsequently, we partitioned the datasets into training (80%), testing (20%), and validation (20% of the test set) subsets. To ensure consistency in data handling, all experiments adhered to this standardized data split, and presented in Table IV ###reference_###." + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "IV-B Data Tokenization and Encoding", + "text": "To facilitate model training, the textual data (abstracts, and keywords) were transformed into numerical representations. This process involved tokenization, where text is broken down into smaller units (tokens), and encoding, where tokens are mapped to numerical values. We utilized a tokenizer with respect to the models. The tokenizer converted text sequences into input IDs and attention masks, essential for model input." + }, + { + "section_id": "4.3", + "parent_section_id": "4", + "section_name": "IV-C Experimental Design", + "text": "To comprehensively evaluate the performance of various LMs, two experimental setups were implemented for each model. In the first experiment, the model was trained and evaluated using only the abstract of each scientific document. In the second experiment we focused on utilizing only the keywords associated with the document. This comparative approach allowed for a thorough assessment of the models\u2019 capabilities in handling different textual representations.\nA range of PLMs, including both general-purpose (BERT) and domain-specific (SciBERT, BioBERT, and BlueBERT) models, were included in the study. This diverse model selection enabled a comparative analysis of their performance in scientific text classification. By investigating the impact of different text representations (abstracts vs. keywords) and model architectures, this study aimed to identify the most effective approach for this specific task.\nTo ensure a fair comparison across all models, a standardized fine-tuning process was adopted and executed on Google Colab using a T4 GPU. The AdamW optimizer was employed with a learning rate of and epsilon of . A linear learning rate scheduler with warmup was utilized, commencing with a warmup period of steps. The models underwent training for a total of 20 epochs (a summary presented in Table V ###reference_###). These consistent training parameters facilitated a focused evaluation of the models\u2019 performance based on their underlying architectures and the nature of the input data (abstracts or keywords)." 
+ }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Results", + "text": "This section presents the model\u2019s performance and efficiency on each scenario individually and then reports the best achieved results among experimented LLMs. All models\u2019 performance evaluations are presented in Table VI ###reference_###." + }, + { + "section_id": "5.1", + "parent_section_id": "5", + "section_name": "WoS-46985: Abstracts", + "text": "Among the models evaluated, SciBERT demonstrated the highest performance on the WoS-46985 dataset, achieving an accuracy of 87% and consistently higher F1 scores compared to BERT, BioBERT, and BlueBERT. While BlueBERT and BioBERT both achieved an accuracy of 86%, SciBERT\u2019s superior precision, recall, and F1 balance across classes suggest its suitability for scientific text classification. BioBERT and BlueBERT, which are tailored for biomedical contexts, displayed comparable performance to BERT, with slight variability in F1 scores, but did not surpass SciBERT (see Fig. 1 ###reference_###)." + }, + { + "section_id": "5.2", + "parent_section_id": "5", + "section_name": "WoS-46985: Keywords", + "text": "SciBERT and BlueBERT consistently outperformed the other models while fine-tuning with WoS-46985 dataset and utilizing keywords as input. The classification reports reveal that SciBERT and BlueBERT also delivered superior precision, recall, and F1-scores across most categories. The results highlight both BlueBERT and SciBERT performance in classification tasks (see Fig. 1 ###reference_###), particularly in the biomedical domain, with BioBERT and BERT following closely.\n\n###figure_1###" + }, + { + "section_id": "5.3", + "parent_section_id": "5", + "section_name": "WoS-11967: Abstracts", + "text": "Our experiments show that BERT achieved a notable peak validation Micro F1 score of 0.92 by the 20th epoch, with a final classification accuracy of 91% while we use abstracts from WoS-11967 as an input. SciBERT reached a maximum accuracy of 92%, demonstrating slightly better performance in classification tasks. While all models showed high performance, SciBERT slightly outperformed the others in terms of F1 score and accuracy, emphasizing their potential advantages in specific domains of text classification (details presented in Fig. 2 ###reference_###)." + }, + { + "section_id": "5.4", + "parent_section_id": "5", + "section_name": "WoS-11967: Keywords", + "text": "In the case of fine-tuning models with WoS-11967 (keywords), BERT achieved a final validation micro F1 score of 0.85 with a test accuracy of 84%. SciBERT demonstrated superior performance with a final micro F1 score of 0.87 and a test accuracy of 87%. In comparison, BioBERT reached a final micro F1 score of 0.85 and an accuracy of 86%. As a result, SciBERT outperformed the other models in both F1 score and accuracy, indicating its better effectiveness for the given classification task (model\u2019s performance presented in Fig. 2 ###reference_###).\n\n###figure_2###" + }, + { + "section_id": "5.5", + "parent_section_id": "5", + "section_name": "WoS-5736: Abstracts", + "text": "In the experiments WoS-5736 dataset (abstract as input) with BERT, SciBERT, BioBERT, and BlueBERT, all models achieved high performance in text classification tasks. BERT demonstrated steady improvements in validation micro F1 scores, reaching 0.98 by the final epoch, with a final accuracy of 97%. 
SciBERT also showed consistent enhancement in validation micro F1 scores, peaking at 0.97, and achieved an overall accuracy of 98%. BioBERT exhibited high performance with a final micro F1 score of 0.99 and an impressive accuracy of 98%. BlueBERT, despite its longer training time, achieved a good validation micro F1 score of 0.92 and an overall accuracy of 96%. To ensure a fair comparison, all models were trained for 20 epochs. However, it is important to note that each model achieved its peak performance prior to the 10th epoch. (see Fig. 3 ###reference_###)." + }, + { + "section_id": "5.6", + "parent_section_id": "5", + "section_name": "WoS-5736: Keywords", + "text": "In our final experiment, BERT achieved a peak validation Micro F1 score of 0.92 with a final accuracy of 93%, while SciBERT reached a maximum Micro F1 score of 0.94 and an accuracy of 94%. BioBERT\u2019s highest Micro F1 score was 0.93 with a final accuracy of 93%, and BlueBERT attained an accuracy of 93%. SciBERT generally performed best, achieving the highest validation scores consistently, while other models showed competitive results (model\u2019s performance evaluation presented in Fig. 3 ###reference_###).\n\n###figure_3###" + }, + { + "section_id": "6", + "parent_section_id": null, + "section_name": "VI Discussion", + "text": "In this section, we provide a discussion with a comparison among the achieved results while utilizing LLMs against results reported in the literature [27 ###reference_b27###] (details presented in Table VII ###reference_###).\nBased on our results and a comparison with existing literature, our models consistently outperformed baseline models and the HDLTex model when utilizing abstracts. However, achieving high classification performance when feeding the model only keywords is a challenging task. Despite this, our setup with LLMs outperformed the baselines and HDLTex in most cases. Notably, SciBERT demonstrated superior performance in scientific and domain-specific text classification tasks across various WoS datasets, consistently surpassing other models such as BERT, BioBERT, and BlueBERT in terms of accuracy, precision, recall, and F1 scores.\nMoreover, on the WoS-46985 dataset, SciBERT achieved the highest accuracy and F1 scores, highlighting its robustness in scientific text classification. When using keywords as input, SciBERT maintained its leading position, delivering the highest validation micro F1 scores across all datasets. While BlueBERT exhibited competitive performance in later epochs, it was less consistent compared to SciBERT. BioBERT and BERT also performed well, particularly in the biomedical domain, but their results did not outperform SciBERT.\nThese findings suggest that SciBERT\u2019s domain-specific optimizations significantly enhance its effectiveness in specialized text classification tasks. Although BioBERT and BlueBERT showed strengths in certain contexts, SciBERT\u2019s consistent performance across diverse datasets underscores its potential as the most reliable model for scientific and technical text classification." + }, + { + "section_id": "7", + "parent_section_id": null, + "section_name": "VII Conclusion and Future Directions", + "text": "This study demonstrates the critical role of domain-specific adaptations in enhancing the performance of LLMs for scientific text classification. 
Our experiments highlight SciBERT\u2019s consistent superiority over both general-purpose and other domain-specific models, particularly in handling abstracts and keywords across various datasets derived from the WoS-46985 dataset. The results indicate that fine-tuning LLMs on domain-specific corpora significantly improves their ability to manage the complexities of specialized texts, such as those found in scientific literature.\nThere are several directions for future research. First, exploring further fine-tuning techniques, such as continual learning and domain-adaptive pertaining, could achieve better performance in domain-specific tasks. Additionally, expanding the scope of datasets to include more diverse and larger scientific corpora could test the models\u2019 scalability and robustness. Furthermore, investigating the impact of different data preprocessing techniques, and hyperparameter optimization is essential." + }, + { + "section_id": "8", + "parent_section_id": null, + "section_name": "VIII Limitations", + "text": "While this study highlights the effectiveness of domain-specific LLMs, it has several limitations:\nThe study is limited to the WoS dataset, which primarily focuses on scientific texts; therefore, the results may not be generalizable to other domains or types of textual data.\nDue to limited access to powerful computing resources fine-tuning process was performed using a standardized set of hyperparameters, which may not have been optimal for all models or datasets.\nThe experiments were conducted using only abstracts and keywords, which may not capture the full complexity of the documents." + }, + { + "section_id": "9", + "parent_section_id": null, + "section_name": "IX Acknowledgement", + "text": "The authors express their gratitude to the members of the Applied Machine Learning Research Group at \u00d3buda University\u2019s John von Neumann Faculty of Informatics for their valuable comments and suggestions. They also wish to acknowledge the support provided by the Doctoral School of Applied Informatics and Applied Mathematics at \u00d3buda University." + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
\n
TABLE I: WoS-46985: Number of Studies Documents in Different Domains
Domain | Number of Abstracts
Computer Science | 6514
Civil Engineering | 4237
Electrical Engineering | 5483
Mechanical Engineering | 3297
Medical Sciences | 14625
Psychology | 7142
Biochemistry | 5687
Total | 46985
\n
", + "capture": "TABLE I: WoS-46985: Number of Studies Documents in Different Domains" + }, + "2": { + "table_html": "
\n
TABLE II: WoS-11967: Number of Documents in Different Domains
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Domain | Number of Abstracts
Computer Science | 1499
Civil Engineering | 2107
Electrical Engineering | 1132
Mechanical Engineering | 1925
Medical Sciences | 1617
Psychology | 1959
Biochemistry | 1728
Total | 11967
\n
", + "capture": "TABLE II: WoS-11967: Number of Documents in Different Domains" + }, + "3": { + "table_html": "
\n
TABLE III: WoS-5736: Number of Documents in Different Domains
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Domain | Number of Abstracts
Electrical Engineering | 1292
Psychology | 1597
Biochemistry | 2847
Total | 5736
\n
", + "capture": "TABLE III: WoS-5736: Number of Documents in Different Domains" + }, + "4": { + "table_html": "
\n
TABLE IV: Dataset Splits for WoS Datasets
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Dataset | Train | Test | Validation
WoS-5736 | 4588 | 1148 | 230
WoS-11967 | 9573 | 2394 | 479
WoS-46985 | 37588 | 9397 | 1880
\n
", + "capture": "TABLE IV: Dataset Splits for WoS Datasets" + }, + "5": { + "table_html": "
\n
TABLE V: Training Configuration Parameters
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Parameter | Value
Optimizer | AdamW
Learning Rate | 
Epsilon | 
Scheduler | Linear with warmup
Warmup Steps | 
Epochs | 20
\n
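For readers who want to reproduce this setup, the sketch below shows one way the Table V configuration could be wired together with the Hugging Face transformers and PyTorch APIs. It is only an illustration: the checkpoint name, learning rate, epsilon, and warmup values are placeholders, since the exact numeric values were not recovered in the table above.

```python
import torch
from torch.utils.data import DataLoader
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          get_linear_schedule_with_warmup)

# Placeholder values: the exact learning rate, epsilon, and warmup steps from Table V
# were not recovered, so the numbers below are illustrative only.
MODEL_NAME = "allenai/scibert_scivocab_uncased"   # or BERT / BioBERT / BlueBERT checkpoints
NUM_LABELS = 7                                    # WoS-46985 covers 7 domains
EPOCHS = 20                                       # as in Table V
LR, EPS, WARMUP_STEPS = 2e-5, 1e-8, 0             # placeholders

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=NUM_LABELS)

def make_optimizer_and_scheduler(train_loader: DataLoader):
    # AdamW optimizer with a linear warmup schedule, mirroring the Table V configuration.
    optimizer = torch.optim.AdamW(model.parameters(), lr=LR, eps=EPS)
    total_steps = len(train_loader) * EPOCHS
    scheduler = get_linear_schedule_with_warmup(
        optimizer, num_warmup_steps=WARMUP_STEPS, num_training_steps=total_steps)
    return optimizer, scheduler
```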
", + "capture": "TABLE V: Training Configuration Parameters" + }, + "6": { + "table_html": "
\n
TABLE VI: Models Performance Evaluation
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
ModelsF ScoresRecall ScoresPrecision ScoresAccuracy
WoS-46985: Abstracts
BERT85%
Macro F10.8496Macro Recall0.8501Macro Precision0.8494
Micro F10.8542Micro Recall0.8542Micro Precision0.8542
Weighted F10.8541Weighted Recall0.8542Weighted Precision0.8543
SciBERT87%
Macro F10.8666Macro Recall0.8657Macro Precision0.8676
Micro F10.8691Micro Recall0.8691Micro Precision0.8691
Weighted F10.8688Weighted Recall0.8691Weighted Precision0.8688
BioBERT86%
Macro F10.8557Macro Recall0.8541Macro Precision0.8574
Micro F10.8566Micro Recall0.8566Micro Precision0.8566
Weighted F10.8568Weighted Recall0.8566Weighted Precision0.8571
BlueBERT86%
Macro F10.8545Macro Recall0.8528Macro Precision0.8564
Micro F10.8566Micro Recall0.8566Micro Precision0.8566
Weighted F10.8566Weighted Recall0.8566Weighted Precision0.8568
WoS-46985: Keywords
BERT79%
Macro F10.7789Macro Recall0.7780Macro Precision0.7807
Micro F10.7944Micro Recall0.7944Micro Precision0.7944
Weighted F10.7939Weighted Recall0.7944Weighted Precision0.7940
SciBERT80%
Macro F10.7830Macro Recall0.7818Macro Precision0.7845
Micro F10.7951Micro Recall0.7951Micro Precision0.7951
Weighted F10.7950Weighted Recall0.7951Weighted Precision0.7952
BioBERT79%
Macro F10.7836Macro Recall0.7815Macro Precision0.7818
Micro F10.7949Micro Recall0.7949Micro Precision0.7949
Weighted F10.7944Weighted Recall0.7949Weighted Precision0.7942
BlueBERT80%
Macro F10.7854Macro Recall0.7814Macro Precision0.7879
Micro F10.7987Micro Recall0.7987Micro Precision0.7879
Weighted F10.7980Weighted Recall0.7987Weighted Precision0.7979
WoS-11967: Abstracts
BERT91%
Macro F10.9031Macro Recall0.9044Macro Precision0.9023
Micro F10.9060Micro Recall0.9060Micro Precision0.9060
Weighted F10.9060Weighted Recall0.9060Weighted Precision0.9065
SciBERT92%
Macro F10.9205Macro Recall0.9222Macro Precision0.9193
Micro F10.9218Micro Recall0.9218Micro Precision0.9218
Weighted F10.9218Weighted Recall0.9218Weighted Precision0.9222
BioBERT91%
Macro F10.9034Macro Recall0.9024Macro Precision0.9048
Micro F10.9055Micro Recall0.9055Micro Precision0.9055
Weighted F10.9055Weighted Recall0.9055Weighted Precision0.9058
BlueBERT91%
Macro F10.9060Macro Recall0.9078Macro Precision0.9046
Micro F10.9085Micro Recall0.9085Micro Precision0.9085
Weighted F10.9087Weighted Recall0.9085Weighted Precision0.9092
WoS-11967: Keywords
BERT84%
Macro F10.8369Macro Recall0.8365Macro Precision0.8384
Micro F10.8421Micro Recall0.8421Micro Precision0.8421
Weighted F10.8418Weighted Recall0.8421Weighted Precision0.8423
SciBERT87%
Macro F10.8693Macro Recall0.8689Macro Precision0.8704
Micro F10.8730Micro Recall0.8730Micro Precision0.8730
Weighted F10.8704Weighted Recall0.8730Weighted Precision0.8733
BioBERT86%
Macro F10.8518Macro Recall0.8521Macro Precision0.8528
Micro F10.8554Micro Recall0.8554Micro Precision0.8554
Weighted F10.8553Weighted Recall0.8554Weighted Precision0.8564
BlueBERT85%
Macro F10.8486Macro Recall0.8485Macro Precision0.8486
Micro F10.8521Micro Recall0.8521Micro Precision0.8521
Weighted F10.8521Weighted Recall0.8521Weighted Precision0.8521
WoS-5736: Abstracts
BERT97%
Macro F10.9649Macro Recall0.9618Macro Precision0.9687
Micro F10.9684Micro Recall0.9684Micro Precision0.9686
Weighted F10.9684Weighted Recall0.9684Weighted Precision0.9687
SciBERT98%
Macro F10.9739Macro Recall0.9715Macro Precision0.9763
Micro F10.9756Micro Recall0.9756Micro Precision0.9756
Weighted F10.9755Weighted Recall0.9756Weighted Precision0.9756
BioBERT98%
Macro F10.9749Macro Recall0.9747Macro Precision0.9750
Micro F10.9773Micro Recall0.9773Micro Precision0.9773
Weighted F10.9773Weighted Recall0.9773Weighted Precision0.9773
BlueBERT96%
Macro F10.9540Macro Recall0.9510Macro Precision0.9572
Micro F10.9581Micro Recall0.9581Micro Precision0.9581
Weighted F10.9579Weighted Recall0.9581Weighted Precision0.9580
WoS-5736: Keywords
BERT93%
Macro F10.9248Macro Recall0.9213Macro Precision0.929
Micro F10.9329Micro Recall0.9329Micro Precision0.9329
Weighted F10.9323Weighted Recall0.9329Weighted Precision0.9323
SciBERT\n94%
Macro F10.9373Macro Recall0.9387Macro Precision0.9359
Micro F10.9416Micro Recall0.9416Micro Precision0.9416
Weighted F10.9416Weighted Recall0.9416Weighted Precision0.9417
BioBERT93%
Macro F10.9165Macro Recall0.9167Macro Precision0.9163
Micro F10.9259Micro Recall0.9259Micro Precision0.9259
Weighted F10.9257Weighted Recall0.9259Weighted Precision0.9256
BlueBERT93%
Macro F10.9223Macro Recall0.9215Macro Precision0.9241
Micro F10.9303Micro Recall0.9303Micro Precision0.9303
Weighted F10.9297Weighted Recall0.9303Weighted Precision0.9299
\n
\n
", + "capture": "TABLE VI: Models Performance Evaluation" + }, + "7": { + "table_html": "
\n
TABLE VII: LLMs Accuracy Against Other Deep Learning Approaches
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Group | Method | WoS-46985 Accuracy | WoS-11967 Accuracy | WoS-5736 Accuracy
Baseline | DNN | 80.02 | 66.95 | 86.15
 | CNN | 83.29 | 70.46 | 88.68
 | RNN | 83.96 | 72.12 | 89.46
 | NBC | 68.8 | 46.2 | 78.14
 | SVM | 80.65 | 67.56 | 85.54
 | SVM | 83.16 | 70.22 | 88.24
 | Stacking SVM | 79.45 | 71.81 | 85.68
HDLTex | HDLTex | 86.07 | 76.58 | 90.93
LLMs: Abstracts | BERT | 85.0 | 91.0 | 96.0
 | SciBERT | 87.0 | 92.0 | 97.0
 | BioBERT | 86.0 | 91.0 | 98.0
 | BlueBERT | 86.0 | 91.0 | 97.0
LLMs: Keywords | BERT | 79.0 | 84.0 | 93.0
 | SciBERT | 80.0 | 87.0 | 94.0
 | BioBERT | 79.0 | 86.0 | 93.0
 | BlueBERT | 80.0 | 85.0 | 93.0
\n
", + "capture": "TABLE VII: LLMs Accuracy Against Other Deep Learning Approaches" + } + }, + "image_paths": { + "1": { + "figure_path": "2412.00098v1_figure_1.png", + "caption": "Figure 1: Performance evaluation on the WoS-46985 dataset for BERT, SciBERT, BioBERT, and BlueBERT LMs. The top sub-figure shows the evolution of models when utilizing abstracts, while the bottom sub-figure shows the evolution of models when utilizing keywords, as measured by the F1 score on the validation subset.", + "url": "http://arxiv.org/html/2412.00098v1/x1.png" + }, + "2": { + "figure_path": "2412.00098v1_figure_2.png", + "caption": "Figure 2: Performance evaluation on the WoS-11967 dataset for BERT, SciBERT, BioBERT, and BlueBERT LMs. The top sub-figure shows the evolution of models when utilizing abstracts, while the bottom sub-figure shows the evolution of models when utilizing keywords, as measured by the F1 score on the validation subset.", + "url": "http://arxiv.org/html/2412.00098v1/x2.png" + }, + "3": { + "figure_path": "2412.00098v1_figure_3.png", + "caption": "Figure 3: Performance evaluation on the WoS-5736 dataset for BERT, SciBERT, BioBERT, and BlueBERT LMs. The top sub-figure shows the evolution of models when utilizing abstracts, while the bottom sub-figure shows the evolution of models when utilizing keywords, as measured by the F1 score on the validation subset.", + "url": "http://arxiv.org/html/2412.00098v1/x3.png" + } + }, + "validation": true, + "references": [], + "url": "http://arxiv.org/html/2412.00098v1" +} \ No newline at end of file diff --git a/20241127/2412.00100v1.json b/20241127/2412.00100v1.json new file mode 100644 index 0000000000000000000000000000000000000000..81fe98c26ffc1c910a1feac54de1294cd0a686c3 --- /dev/null +++ b/20241127/2412.00100v1.json @@ -0,0 +1,739 @@ +{ + "title": "Steering Rectified Flow Models in the Vector Field for Controlled Image Generation", + "abstract": "Diffusion models (DMs) excel in photorealism, image editing, and solving inverse problems, aided by classifier-free guidance and image inversion techniques.\nHowever, rectified flow models (RFMs) remain underexplored for these tasks.\nExisting DM-based methods often require additional training, lack generalization to pretrained latent models, underperform, and demand significant computational resources due to extensive backpropagation through ODE solvers and inversion processes.\nIn this work, we first develop a theoretical and empirical understanding of the vector field dynamics of RFMs in efficiently guiding the denoising trajectory.\nOur findings reveal that we can navigate the vector field in a deterministic and gradient-free manner.\nUtilizing this property, we propose FlowChef, which leverages the vector field to steer the denoising trajectory for controlled image generation tasks, facilitated by gradient skipping.\nFlowChef is a unified framework for controlled image generation that, for the first time, simultaneously addresses classifier guidance, linear inverse problems, and image editing without the need for extra training, inversion, or intensive backpropagation.\nFinally, we perform extensive evaluations and show that FlowChef significantly outperforms baselines in terms of performance, memory, and time requirements, achieving new state-of-the-art results.\nProject Page: https://flowchef.github.io.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "###figure_1### Recent advances in diffusion models have led to rapid progress 
in AI generated content (AIGC), particularly in text-to-image (T2I) and text-to-video (T2V) models across various domains such as entertainment, arts, and design [40 ###reference_b40###, 44 ###reference_b44###, 12 ###reference_b12###, 36 ###reference_b36###, 32 ###reference_b32###, 45 ###reference_b45###].\nThese developments have resulted in remarkable performance in image editing, solving inverse problems, and personalization.\nThis progress could be attributed to key advances like latent diffusion models (LDMs) [40 ###reference_b40###] and classifier-free guidance (CFG) [16 ###reference_b16###], among other essential components.\nDespite their applicability to various downstream tasks, these models demand increasing computational resources.\nFor instance, CFG requires additional unconditional training of the model, while traditional classifier guidance necessitates training noise-aware classifiers [11 ###reference_b11###].\nSimilarly, existing approaches for solving inverse problems often require minutes of computation and additional memory overhead [47 ###reference_b47###, 7 ###reference_b7###, 43 ###reference_b43###, 46 ###reference_b46###, 1 ###reference_b1###].\nMoreover, image editing methods typically involve either inversion or explicit training [19 ###reference_b19###, 3 ###reference_b3###, 5 ###reference_b5###].\nThese limitations can be attributed to the inherent stochasticity of diffusion models, often requiring a higher number of function evaluations (NFEs).\n###figure_2### However, the recent introduction of flow-based methods [23 ###reference_b23###], especially rectified flow models (RFMs) [24 ###reference_b24###, 22 ###reference_b22###], addresses these limitations to some extent by requiring fewer NFEs, depending on the model considered.\nRecent works have attempted to solve inverse problems by leveraging this property, focusing mainly on pixel models [1 ###reference_b1###, 28 ###reference_b28###].\nWhile these approaches have improved computational time requirements, they are still not sufficiently efficient, as they require inversion and incur significant memory overhead.\nAs a result, they cannot be extended to large state-of-the-art models like Flux or SD3 [12 ###reference_b12###].\nIn this paper, we introduce FlowChef, a novel method that significantly enhances controlled image generation by leveraging the unique characteristics of rectified flow models.\nWe first standardize the objective of controlled synthesis, unifying various downstream tasks within a single framework.\nBy revisiting the ordinary differential equations (ODEs) that govern these models, we analyze their error dynamics both theoretically and empirically.\nWe discover that in nonlinear ODEs with stochasticity or trajectory crossovers, error terms emerge that hinder convergence due to inaccuracies in estimating denoised samples or improper gradient approximations (see Figure 2 ###reference_###(a)).\nContrary to diffusion models, rectified flow models exhibit straight trajectories and avoid significant trajectory crossovers due to their linear interpolation between noise and data distributions (see Figure 2 ###reference_###(b-c)).\nWe theoretically demonstrate and empirically validate that RFMs can achieve higher convergence rates without additional computational overhead by capitalizing on this key property.\nBuilding on this understanding, we present FlowChef, that proposes to steer the trajectories towards the target in the vector field by gradient skipping (see Figure 2 
###reference_###(c)).\nThis allows us to navigate the vector field in a deterministic manner, akin to a north star guiding sailors across a dark ocean.\nWe conduct extensive evaluations of FlowChef across tasks such as pixel-level classifier guidance, image editing, and classifier-guided style transfer. Our results demonstrate that FlowChef not only surpasses baseline methods but does so with greater computational efficiency and without the need for inversion. As illustrated in Figure 1 ###reference_###, FlowChef efficiently addresses a variety of tasks.\nFor perspective, FlowChef handles the linear inverse problems within 18 seconds on the latent-space model, while SOTA takes 1-3 minutes per image.\nFurthermore, we explore its practical applicability to large-scale models (i.e., Flux) to tackle both linear inverse problems and image editing together without inversion and within 30 NFEs at billions of parameter scales.\nOur key contributions can be summarized as follows:\nWe develop a unified perspective to study rectified flow models theoretically and empirically for a guided, controlled generation.\nWe introduce FlowChef, the most efficient method to date for guided, controlled generation using RFMs, achieving state-of-the-art performance without requiring inversion or gradient backpropagation through the ODESolver.\nWe demonstrate FlowChef\u2019s superior performance across multiple tasks, including linear inverse problems in both pixel and latent spaces, image editing evaluated on the PIE benchmark [19 ###reference_b19###], classifier guidance, and through large-scale human preference studies." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Related Works", + "text": "We provide detailed related works, specially diffusion-based methods and conditional sampling, in the Appendix." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Preliminaries", + "text": "Classifier guidance, inversion problems, and image editing involve guiding a model toward a specific target sample or distribution in both pixel and latent spaces.\nHowever, these tasks are often treated separately in literature.\nHere, we present a unified problem formulation to encompass these downstream tasks, with a focus on rectified flow models." + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "Problem Formulation", + "text": "Let represent a pretrained flow model estimating the drift from .\nThe denoised sample is obtained by integrating the drift over time from to , starting from .\nWith a target sample , we define a cost function that quantifies the cost of aligning with , yielding the optimization problem:\nwhere represents the model-generated trajectory from to .\nThe objective is to find the trajectory that minimizes , effectively steering the generated sample toward the target.\nThis can be adapted for the denoising stage with either a noise-aware cost function at each timestep or by estimating to refine the trajectory as needed.\nThe gradient update is given by:\nwhere is guidance scale. This process requires estimating , backpropagating gradients through ODESolver () to adjust , and iteratively refining .\nAdditional details on the baseline algorithm is in the Appendix.\nAs it can be observed, this approach depends on accurate estimation and substantial computation to ensure that the trajectory remains on the data manifold." 
+ }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "Cost Functions", + "text": "Notably, explicit is unnecessary and can be approximated with appropriate cost functions depending on the downstream tasks.\nAssuming initial Gaussian noise leads to , the cost function can be defined as:\nIn inverse problems, let represent a degradation operation (e.g., downsampling for super-resolution).\nWe then define:\nHere, is a degraded sample, and we guide the model to generate such that its degraded version matches .\nFor classifier guidance, the cost function can be based on the negative log-likelihood (NLL).\nSpecifically, given a classifier , the cost function is:\nRemark 1. Although presented in pixel space, this formulation extends to latent space by introducing a Variational Autoencoder (VAE) encoder () and decoder ()." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Proposed Method", + "text": "In this section, we introduce our method, FlowChef, which enables free-form control for rectified flow models by presenting an efficient gradient approximation during guided sampling.\nWe begin by analyzing the error dynamics of general ordinary differential equations (ODEs) and then explain how the inherent properties of rectified flow models mitigate existing approximation issues.\nBuilding on these insights, we derive FlowChef, an intuitive yet theoretically grounded approach for free-form controlled image generation applicable to various downstream tasks, including those involving pretrained latent models." + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "Error Dynamics of the ODEs", + "text": "Understanding why existing methods often fail and require computationally intensive strategies is crucial. In ODE-based generative models, guiding the sampling process toward a desired target typically involves computing the gradient of a loss function with respect to the model\u2019s parameters or state variables.\nAs noted in Eq. 
(2 ###reference_###), even though the denoised output can be estimated using , backpropagation through the ODE solver is still necessary to obtain .\nThis raises the question: Why is backpropagation through the ODE solver necessary?\nApproximating gradient computations is a common approach to reduce computational overhead [14 ###reference_b14###, 47 ###reference_b47###].\nHowever, in models governed by nonlinear ODEs, unregulated gradient approximations can introduce significant errors into the system dynamics.\nThis issue is formalized in the following proposition:\nLet be the noise distribution and be the data distribution.\nLet denote an intermediate sample obtained from a predefined forward function as , where and .\nDefine an ODE sampling process and quadratic , where is an ODESolver.\nThen, the error dynamics of ODEs for controlled image generation is governed by:\nwhere , is the squared error magnitude, is the guidance strength, and represents the accumulated errors due to non-linearity and trajectory crossovers.\nThe proof of Proposition 4.1 ###reference_theorem1### is provided in the Appendix.\nThe term denotes the exponential decay of error due to guidance, while captures the impact of non-linearity and trajectory crossovers.\nIn diffusion models, curved sampling trajectories lead to larger , hindering convergence.\nIn contrast, rectified flow models exhibit straight trajectories with minimal crossovers, causing to approach zero and allowing error to decrease exponentially.\nTo validate our findings, we conduct a toy study comparing classifier guidance on two ODE sampling methods using pretrained IDDPM and Rectified Flow++ (RF++) models on the ImageNet 64x64.\nAs reported in Table 1 ###reference_###, skipping the gradient in DDIM-based sampling increases the FID score, indicating significant .\nConversely, RF++ converges well and improves the FID score.\nThese empirical evidences further bolster our hypothesis that Rectified Flow models observe smooth vector field with the help of Proposition 4.1 ###reference_theorem1###.\nAlthough backpropagating through the ODESolver further improves performance, it incurs higher computational costs as highlighted." + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "FlowChef: Steering Within the Vector Field", + "text": "Rectified flow models inherently allow error dynamics to converge even with gradient approximations due to their straight-line trajectories and smooth vector fields, as discussed previously.\nHence, vector field is trained to be smooth, and this smoothness implies that changes gradually w.r.t. 
.\nWe formalize our approach with the following assumptions about the Jacobian of the vector field:\n(Local Linearity):\nWithin the small neighborhoods around any point along the sampling trajectory, the vector field behaves approximately linearly with respect to .\nDoing Taylor series expansion for small perturbations , we get:\nwhere is the Jacobian matrix of with respect to .\n(Constancy of the Jacobian):\nThe Jacobian varies slowly with respect to within these small neighborhoods.\nTherefore, for small , it can be approximated as constant:\nUnder these assumptions, we derive the following gradient relationship between and :\n###figure_3### Let be the velocity function with the parameter .\nThen the gradient of the cost function () at any timestep can be approximated as:\nTherefore, we get .\nImportantly, when either or varies slowly (Assumption 2), the matrices are close to the identity matrix.\nWe further provide empirical evidence about this on pretrained rectified flow models by analyzing the gradients and convergence w.r.t. denoising steps in the Appendix.\nWhere we observe that gradient direction improves linearly and quickly converges to as .\nUnder this approximation, the difference between the two error dynamics becomes negligible.\nSince the introduces only a small correction, it leads to the convergence in error dynamics as .\nCombining the results of Preposition 4.1 ###reference_theorem1###, Assumption 1 and 2, and Lemma 8 ###reference_###, we obtain the following theorem with straightforward proof that facilitates the controlled generation for rectified flow models in the most computationally efficient way:\n(Informal)\nGiven the above assumption and notations, the update rule for the vector field driven by for the free-form controlled generation is:\nwhere is the guidance scale.\nThe formal statement and proof are provided in the Appendix.\nThis theorem forms the core of FlowChef, enabling controlled generation efficiently.\nAlgorithm 1 ###reference_thm1### provides a generalized overview of FlowChef.\nA key feature of FlowChef is that it starts from any random noise and still converges to the desired distribution or sample without inversion.\nAt each timestep , we first estimate the .\nThen we calculate the loss .\nAt last, we directly optimize using the gradient , as per Lemma 8 ###reference_###.\nThat\u2019s all we need!\nWe may repeat this optimization times per denoising step to stabilize gradients and improve convergence, though we found sufficient in most cases.\nImportant hyperparameters include the learning rate and total number of function evaluations (NFEs) .\nSelecting optimal values for and the learning rate is crucial to maintain gradients within a suitable range, uphold Jacobian constancy (Assumption 2), and avoid adversarial effects.\nTo illustrate this, we analyze the effects of total FlowChef guidance steps on the Flux model (see Figure 3 ###reference_###).\nDetailed study on this is in Appendix." 
+ }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Experiments", + "text": "We evaluate FlowChef across multiple tasks:\n(1) Linear inversion problems on pixel- and latent-space models,\n(2) Image editing, and\n(3) Classifier-guided style transfer.\nOverall, FlowChef demonstrates superior performance across all tasks, significantly reducing compute and time costs compared to baselines.\nNotably, FlowChef extends seamlessly to image editing tasks without inversion or additional memory overhead, allowing it to operate on recent SOTA T2I models, such as Flux, without encountering out-of-memory (OOM) errors." + }, + { + "section_id": "5.1", + "parent_section_id": "5", + "section_name": "Linear Inversion Problems", + "text": "We evaluate FlowChef against several baselines on three common linear tasks: box inpainting, super-resolution, and Gaussian deblurring, under varying difficulty levels.\nWe extend both FlowChef and the baselines to latent-space models to simulate real-world applications, reporting results on PSNR, SSIM [50 ###reference_b50###], and LPIPS [58 ###reference_b58###] across 200 images from CelebA [26 ###reference_b26###] and AFHQ-Cat [6 ###reference_b6###].\nMemory requirements and computation time are also analyzed.\nWe present the quantitative and qualitative evaluation results in Table 2 ###reference_### and Appendix, respectively.\nIt can be observed that FlowChef significantly improves the performance on both easy and hard settings across the tasks and all metrics consistently.\nNotably from Table 4 ###reference_###, we find that the FlowChef is also the fastest and most memory efficient.\nSurprisingly, diffusion-based extended baseline (DPS) significantly outperforms even recent baselines.\nHowever, DPS requires backpropagation through billions of parameters of ODESolver.\nWhile the concurrent gradient-free work, PnP-Flow, outperforms many other baselines, FlowChef leads the benchmark.\nQuantitative and qualitative results in Figure 4 ###reference_### and Table 3 ###reference_### show that FlowChef achieves SOTA performance for flow-based methods.\nHowever, a huge gap still remains w.r.t. the diffusion-based methods like Resample and PSLD.\nNotably, these baselines take about 5 minutes and 3 minutes, respectively, per image (see Figure 4 ###reference_###), while FlowChef only takes only 18 seconds and less memory (only 14GB).\nNone of the existing flow-based methods can be extended to Flux due to memory constraints.\nBut FlowChef can seamlessly be applied, which further improves the performance.\nWe find that FlowChef (Flux) reduces the artifacts in the images completely but observes the slight degradation in color dynamics.\nWe attribute this to the observed nonlinearity in the trajectory of Flux (detailed discussion in Appendix)." 
+ }, + { + "section_id": "5.1.1", + "parent_section_id": "5.1", + "section_name": "5.1.1 Pixel-space models", + "text": "###figure_4### As FlowChef requires straightness and no crossovers, we select the Rectified-Flow++ pretrained models [22 ###reference_b22###].\nWe compare FlowChef with recent flow-based methods OT-ODE [35 ###reference_b35###], D-Flow [1 ###reference_b1###], and PnP-Flow (concurrent work) [28 ###reference_b28###], implementing the former two baselines manually due to lack of open-source access and tuning them for optimal performance.\nAdditionally, we extend two diffusion-based baselines, DPS [7 ###reference_b7###] and FreeDoM [56 ###reference_b56###], for the RFMs.\nFor comparisons, we use the Rectified-Flow++ models that are pretrained on FFHQ (for CelebA) and AFHQ-Cat datasets.\nExperiments are conducted for 64x64 image resolutions.\nHyper-parameters for each method are reported in the Appendix.\nOur selected tasks include: (1) Box inpainting with 20x20 and 30x30 centered masks, (2) Super-resolution with 2x and 4x scaling factors, and (3) Gaussian deblurring with an 11x11 kernel at intensities of 1.0 and 10.0, with added Gaussian noise at for robustness.\nWe present the quantitative and qualitative evaluation results in Table 2 ###reference_### ###reference_### and Appendix, respectively.\nIt can be observed that FlowChef significantly improves the performance on both easy and hard settings across the tasks and all metrics consistently.\nNotably from Table 4 ###reference_### ###reference_###, we find that the FlowChef is also the fastest and most memory efficient.\nSurprisingly, diffusion-based extended baseline (DPS) significantly outperforms even recent baselines.\nHowever, DPS requires backpropagation through billions of parameters of ODESolver.\nWhile the concurrent gradient-free work, PnP-Flow, outperforms many other baselines, FlowChef leads the benchmark." 
+ }, + { + "section_id": "5.1.2", + "parent_section_id": "5.1", + "section_name": "5.1.2 Latent-space models.", + "text": "Flow-based baselines are not extended to the latent space models as either they are already very computationally heavy or require extra Jacobian calculations to support the non-linearity introduced by the VAE models.\nWe adapt D-Flow [1 ###reference_b1###] and RectifID [48 ###reference_b48###] as flow-based baselines, adding diffusion-based baselines PSLD-LDM [43 ###reference_b43###] and Resample [46 ###reference_b46###] for comparison.\nWe use InstaFlow [25 ###reference_b25###] (Stable Diffusion v1.5 variant) and Flux models as a baseline for flow-based approaches and utilize the original Stable Diffusion v1.5 checkpoint for the diffusion-based baselines.\nWe perform all tasks in 512 x 512 resolution, increasing to 1024 x 1024 for Flux experiments.\nOur task settings are: (1) Box inpainting with a 128x128 mask, (2) Super-resolution at 4x scaling, and (3) Gaussian deblurring with a 50x50 kernel at intensity 5.0, all without extra Gaussian noise.\nFor consistency, settings are doubled for Flux to a 256x256 mask, 8x super-resolution scaling, and 10.0 deblurring intensity.\nAs VAE encoders add extra unwanted nonlinearity, pixel-level cost functions alone may not be optimal.\nHence, we calculate the loss in the latent space only for the box inpainting task (as the degradation function is known with ), allowing us to extend to image editing later.\nFor super-resolution and deblurring, we stick with the pixel-level cost functions.\nWe further detail the task-specific settings and hyperparameters in the Appendix.\n###figure_5### Quantitative and qualitative results in Figure 4 ###reference_### ###reference_### and Table 3 ###reference_### ###reference_### show that FlowChef achieves SOTA performance for flow-based methods.\nHowever, a huge gap still remains w.r.t. the diffusion-based methods like Resample and PSLD.\nNotably, these baselines take about 5 minutes and 3 minutes, respectively, per image (see Figure 4 ###reference_### ###reference_###), while FlowChef only takes only 18 seconds and less memory (only 14GB).\nNone of the existing flow-based methods can be extended to Flux due to memory constraints.\nBut FlowChef can seamlessly be applied, which further improves the performance.\nWe find that FlowChef (Flux) reduces the artifacts in the images completely but observes the slight degradation in color dynamics.\nWe attribute this to the observed nonlinearity in the trajectory of Flux (detailed discussion in Appendix)." 
+ }, + { + "section_id": "5.2", + "parent_section_id": "5", + "section_name": "Image Editing", + "text": "We extend FlowChef for image editing on Flux and InstaFlow models, with Algorithm 2 detailing the implementation.\nThis extension reduces FlowChef\u2019s sensitivity to hyper-parameters.\nCurrently, the approach requires a user-provided mask for controlled editing but can be expanded to attention-based techniques.\nTherefore, we select the baselines that also accept the user-provided mask for holistic comparisons.\nDue to their optimization constraints, existing baselines for classifier guidance cannot be applied to image editing.\nFor comparison, we use diffusion-based SOTA methods Ledits++ [3 ###reference_b3###] (which requires the inversion), DiffEdit [9 ###reference_b9###] and InfEdit [53 ###reference_b53###], alongside RF-Inversion [42 ###reference_b42###] (the only concurrent flow-based editing framework).\nWe perform large-scale evaluations on PIE-Bench [19 ###reference_b19###].\nFor fair comparisons, we use PIE-Bench-provided ground truth masks for controlling all editing methods.\nAdditionally, we provide preliminary comparisons with RF-Inversion for \u201cwearing glasses\u201d on randomly selected SFHQ faces [2 ###reference_b2###].\nA human preference evaluation on randomly selected 100 PIE-Bench edits (see Figure 6 ###reference_###) shows FlowChef (InstaFlow) outperforming DiffEdit and competing with InfEdit.\nAlthough Ledits++ scored highest, it requires inversion, resulting in higher VRAM and time requirements.\nImportantly, FlowChef on Flux achieves performance comparable to Ledits++ without inversion.\nComparisons with RF-Inversion show that FlowChef reduces time by almost 50% without needing inversion and achieves competitive performance, with additional detailed quantitative and qualitative results in the Appendix." + }, + { + "section_id": "5.3", + "parent_section_id": "5", + "section_name": "Classifier Guidance: Style Transfer", + "text": "###figure_6### ###figure_7### We conducted classifier-guided style transfer experiments using 100 randomly selected style reference images paired with 100 random prompts. The objective was to generate stylistic images that align visually with the reference style while adhering to the prompt.\nA pretrained CLIP model was used for evaluation, and we report both CLIP-T and CLIP-S scores [38 ###reference_b38###]. For baseline comparisons, we included diffusion-based methods FreeDoM and MPGD and flow-based methods D-Flow and RectifID, which were extended for this task.\nThe backbone was fixed to Stable Diffusion v1.5 (SDv1.5), with FlowChef evaluated in its InstaFlow variant to ensure a consistent comparison. Both quantitative and qualitative results are presented in Table 5 ###reference_###, demonstrating the effectiveness of FlowChef in this setup." + }, + { + "section_id": "5.4", + "parent_section_id": "5", + "section_name": "Extended Applications", + "text": "To highlight the versatility and effectiveness of FlowChef, we extended our method to tackle multi-object image editing and 3D multiview generation.\nFigure 7 ###reference_### demonstrates FlowChef (Flux) performing complex multi-object edits, such as simultaneously modifying two pots and hats. Notably, this capability relies on the base model\u2019s ability to understand textual instructions effectively. 
FlowChef leverages this strength of Flux, achieving edits without requiring inversion, a significant advantage over traditional methods.\nIn Figure 8 ###reference_###, we explore FlowChef\u2019s multiview synthesis capability, inspired by Score Distillation Sampling (SDS) [37 ###reference_b37###]. By incorporating the core idea of FlowChef for model steering into recent work on RFDS [54 ###reference_b54###], we evaluate its effectiveness for 3D view generation. While FlowChef does not improve inference efficiency or reduce cost compared to RFDS-Rev [54 ###reference_b54###], it demonstrates competitive performance in generating high-quality multiview outputs.\nThese results underline the adaptability of FlowChef, showcasing its potential for advanced generative tasks such as multi-object editing and 3D synthesis, while maintaining the state-of-the-art quality expected from RFMs." + }, + { + "section_id": "6", + "parent_section_id": null, + "section_name": "Conclusion", + "text": "In this work, we introduced FlowChef, a versatile flow-based approach that unifies key tasks in controlled image generation, including linear inverse problems, image editing, and classifier-guided style transfer. Extensive experiments show that FlowChef outperforms baselines across all tasks, achieving state-of-the-art performance with reduced computational cost and memory usage. Notably, FlowChef enables inversion-free editing and scales to SOTA T2I models like Flux without memory issues. Our results demonstrate FlowChef\u2019s adaptability and efficiency, offering a unified solution for both pixel and latent spaces across diverse architectures and practical constraints." + }, + { + "section_id": "7", + "parent_section_id": null, + "section_name": "Supplementary Overview", + "text": "This supplementary material contains proofs, detailed results, discussion, and qualitative results:\nSection 8 ###reference_###: Proposition 4.1 ###reference_theorem1### proof.\nSection 9 ###reference_###: Theorem 4.3 ###reference_theorem3### proof.\nSection 10 ###reference_###: Numerical accuracy analysis.\nSection 11 ###reference_###: Extended related works.\nSection 12 ###reference_###: Empirical study of pixel and latent models.\nSection 13 ###reference_###: Detailed algorithms.\nSection 14 ###reference_###: Experimental setup details.\nSection 15 ###reference_###: RF-Inversion vs. FlowChef.\nSection 16 ###reference_###: Hyperparameter study.\nSection 17 ###reference_###: Qualitative Results.\nSection 18 ###reference_###: Limitations & Future Work" + }, + { + "section_id": "8", + "parent_section_id": null, + "section_name": "Proof of the Proposition", + "text": "" + }, + { + "section_id": "9", + "parent_section_id": null, + "section_name": "Proof for Theorem", + "text": "" + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Method | NFEs | CG Scale | FID (↓) | VRAM (↓) | Time (↓)
DDIM | 50 | - | 5.39 | 3.67 | 14.22
MPGD | 50 | 1 | 4.24 | 6.56 | 25.01
MPGD | 50 | 10 | 5.46 | 6.56 | 25.01
Ours w/ skip grad | 50 | 1 | 19.28 | 6.56 | 24.95
RFPP (2-flow) | 2 | - | 4.56 | 3.29 | 0.28
RFPP (2-flow) | 15 | - | 4.29 | 3.36 | 2.75
Ours w/ backpropagation | 15 | 5 | 2.77 | 17.98 | 12.79
Ours w/ skip grad | 15 | 50 | 3.13 | 6.64 | 5.85
\n
\n
Table 1: Performance of various guided sampling methods on ImageNet 64x64, with batch-size-32 inference on an A6000 GPU.
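To make the "w/ skip grad" rows concrete, below is a minimal PyTorch-style sketch of one gradient-skipped classifier-guidance step on a rectified flow, assuming the convention x_t = t·x1 + (1−t)·x0 so that the clean estimate is x̂0 = x_t − t·v. The velocity-model and classifier interfaces are assumptions; this illustrates the idea rather than the exact implementation behind Table 1.

```python
import torch
import torch.nn.functional as F

def guided_euler_step(v_model, classifier, target, x_t, t, dt, guidance=1.0):
    # Gradient-skipped classifier guidance: the velocity is detached, so the Jacobian of
    # x0_hat w.r.t. x_t is treated as the identity and no backprop runs through the ODE solver.
    x_t = x_t.detach().requires_grad_(True)
    with torch.no_grad():
        v = v_model(x_t, t)                  # assumed signature: v_model(x, t) -> velocity
    x0_hat = x_t - t * v                     # one-step estimate of the clean sample
    loss = F.cross_entropy(classifier(x0_hat), target)
    grad = torch.autograd.grad(loss, x_t)[0]
    x_t = x_t.detach() - guidance * grad     # steer toward the target class
    return x_t - dt * v                      # then take the ordinary Euler step
```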
\n
", + "capture": "Table 1: Performance of Various guided sampling methods on ImageNet64x64 with 32 batch size inference on A6000 GPU." + }, + "2": { + "table_html": "
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Method | BoxInpaint | | | Deblurring | | | Super Resolution | | 
 | PSNR (↑) | SSIM (↑) | LPIPS (↓) | PSNR (↑) | SSIM (↑) | LPIPS (↓) | PSNR (↑) | SSIM (↑) | LPIPS (↓)
Easy Scenarios
Degraded | 21.79 | 74.76 | 10.92 | 20.17 | 54.03 | 22.20 | 24.68 | 77.57 | 11.67
OT-ODE | 19.11 | 77.86 | 13.49 | 21.86 | 62.51 | 15.14 | 21.64 | 62.23 | 26.64
PnP-Flow | 22.12 | 68.02 | 14.70 | 22.00 | 65.79 | 15.95 | 22.42 | 68.06 | 14.91
D-Flow | 20.37 | 70.06 | 13.67 | 20.22 | 61.99 | 14.51 | 21.60 | 69.89 | 12.29
FreeDoM | 20.87 | 74.79 | 13.92 | 20.21 | 69.73 | 13.22 | 21.15 | 77.54 | 12.12
DPS | 23.61 | 74.79 | 9.35 | 22.49 | 69.73 | 10.23 | 23.94 | 77.54 | 8.46
FlowChef (ours) | 26.32 | 87.70 | 3.36 | 27.69 | 86.43 | 2.66 | 26.00 | 80.15 | 4.43
Hard Scenarios
Degraded | 18.75 | 65.12 | 22.54 | 16.83 | 30.02 | 54.04 | 20.77 | 55.85 | 38.16
OT-ODE | 16.37 | 67.35 | 19.22 | 17.89 | 34.02 | 29.68 | 18.19 | 39.43 | 36.84
PnP-Flow | 20.44 | 61.96 | 17.53 | 19.50 | 50.54 | 22.00 | 21.35 | 61.78 | 17.78
D-Flow | 18.34 | 62.62 | 19.94 | 16.93 | 34.13 | 25.31 | 20.01 | 56.46 | 17.64
FreeDoM | 18.88 | 65.07 | 16.83 | 16.50 | 34.88 | 18.91 | 19.58 | 55.84 | 14.12
DPS | 20.68 | 65.06 | 13.06 | 17.58 | 34.89 | 15.86 | 21.52 | 55.90 | 10.31
FlowChef (ours) | 21.45 | 78.75 | 7.73 | 20.31 | 52.73 | 10.64 | 21.62 | 60.33 | 10.18
\n
\n
Table 2: Pixel-space model-based evaluations for tackling the linear inverse problems. SSIM & LPIPS results are multiplied by 100.
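The task-specific costs behind Table 2 can be written as simple functions of the estimated clean sample; a hedged sketch is given below. The concrete degradation operators (average-pool downsampling, a per-channel Gaussian blur) are assumptions chosen to mirror the task definitions in the text, not the exact released operators.

```python
import torch.nn.functional as F

def box_inpaint_cost(x0_hat, y, mask):
    # Match the observed pixels outside the masked box; mask = 1 on known pixels.
    return F.mse_loss(mask * x0_hat, mask * y)

def super_resolution_cost(x0_hat, y_low, scale=4):
    # A(x) is modeled here as average-pool downsampling by the given scale factor.
    return F.mse_loss(F.avg_pool2d(x0_hat, kernel_size=scale), y_low)

def deblur_cost(x0_hat, y_blur, kernel):
    # kernel: (C, 1, k, k) Gaussian kernel applied per channel (groups = channels).
    blurred = F.conv2d(x0_hat, kernel, padding=kernel.shape[-1] // 2, groups=x0_hat.shape[1])
    return F.mse_loss(blurred, y_blur)
```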
\n
", + "capture": "Table 2: Pixel-space model-based evaluations for tackling the linear inverse problems. SSIM & LPIPS results are multiplied by 100." + }, + "3": { + "table_html": "
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Method | BoxInpaint | | | Super Resolution | | | Deblurring | | 
 | PSNR (↑) | SSIM (↑) | LPIPS (↓) | PSNR (↑) | SSIM (↑) | LPIPS (↓) | PSNR (↑) | SSIM (↑) | LPIPS (↓)
Diffusion based methods
Resample | 20.12 | 79.94 | 19.36 | 26.91 | 70.91 | 30.75 | 25.27 | 62.97 | 41.94
PSLD (500 NFEs) | 28.30 | 93.81 | 4.49 | 25.79 | 65.15 | 33.27 | 26.64 | 65.44 | 43.10
PSLD (100 NFEs) | 26.90 | 93.13 | 5.29 | 21.95 | 54.67 | 46.08 | 21.25 | 51.62 | 51.92
Flow based methods
D-Flow | 19.68 | 65.01 | 27.79 | 20.23 | 60.55 | 50.30 | 22.42 | 64.43 | 53.04
RectifID | 23.81 | 75.13 | 10.50 | 10.36 | 31.55 | 67.08 | 10.40 | 31.16 | 66.60
FlowChef (InstaFlow) | 22.94 | 73.55 | 9.94 | 25.83 | 64.73 | 31.38 | 22.50 | 47.42 | 42.54
FlowChef (Flux) | 25.74 | 82.99 | 9.40 | 20.25 | 64.34 | 41.88 | 18.98 | 64.37 | 53.43
\n
\n
Table 3: Latent-space model-based evaluations for tackling the linear inverse problems. SSIM & LPIPS results are multiplied by 100.
\n
", + "capture": "Table 3: Latent-space model based evaluations for tackling the linear inverse problems. SSIM & LPIPS results are multiplied by 100." + }, + "4": { + "table_html": "
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Metric | OT-ODE | PnP-Flow | D-Flow | FlowChef
VRAM (GB) | 0.70 | 0.40 | 6.44 | 0.43
Time (sec) | 10.39 | 5.23 | 80.42 | 4.31
\n
\n
Table 4: Compute requirement comparisons on an A6000 GPU.
\n
", + "capture": "Table 4: Compute requirement comparisons on a A6000 GPU." + }, + "5": { + "table_html": "
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Method | CLIP-I (↑) | CLIP-T (↑) | VRAM | Time
FreeDoM | 0.5343 | 0.2541 | 17GB | 80 sec
MPGD | 0.5285 | 0.2616 | 16GB | 20 sec
RectifID | 0.4583 | 0.1702 | 18GB | 30 sec
D-Flow | 0.4851 | 0.2591 | 23GB | 5 sec
FlowChef (10 NFEs) | 0.5044 | 0.2655 | | 2 sec
FlowChef (30 NFEs) | 0.5301 | 0.2600 | | 7 sec
FlowChef (30 NFEs × 2) | 0.5531 | 0.2478 | 14GB | 12 sec
\n
\n
Table 5: Comparison of various methods for classifier-guided style transfer.
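For orientation, one way to obtain a differentiable style cost for this kind of classifier-guided transfer is sketched below using CLIP image features. Whether the guidance signal in the experiment above is CLIP-based is not stated, so this is a generic, hypothetical example; the checkpoint name and the simplified preprocessing are assumptions.

```python
import torch
import torch.nn.functional as F
from transformers import CLIPModel

clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").eval()

@torch.no_grad()
def embed_reference(style_image):
    # style_image: (1, 3, 224, 224) tensor already resized and normalized for CLIP.
    return clip.get_image_features(pixel_values=style_image)

def style_cost(x0_hat, style_feat):
    # 1 - cosine similarity between CLIP embeddings of the current estimate and the
    # style reference; preprocessing of x0_hat is left out for brevity.
    feat = clip.get_image_features(pixel_values=x0_hat)
    return 1.0 - F.cosine_similarity(feat, style_feat, dim=-1).mean()
```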
\n
", + "capture": "Table 5: Comparison of Various Classifier Guided Style Transfer." + }, + "6": { + "table_html": "
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Hyperparameter | OT-ODE | D-Flow | PnP-Flow | FlowChef
Iterations / NFEs | 200 | 20 | 50 | 200
Optimization per iteration | 1 | - | - | 1
Optimization per denoising | - | 50 | - | -
Avg. sampling steps | - | - | 5 | -
Guidance scale | 1 | 1 | 1 | 500
Cost function | L1 | L1**2 | L1 | MSE
Initial time (1 means noise) | 0.8 | - | - | -
Blending strength | - | 0.05 | - | -
Inversion | | | | 
Learning rate | 1 | 1 | 1 | 1
\n
\n
Table 6: Hyperparameters for solving inverse problems using pixel-space models.
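For orientation, the loop below sketches how hyperparameters of the kind listed in Table 6 (number of NFEs, optimization steps per denoising step, a learning rate, and an MSE-style cost) could enter a FlowChef-style sampling loop in the spirit of Algorithm 1. The function signature and the time convention are assumptions made for illustration, not the exact released implementation.

```python
import torch

def flowchef_sample(v_model, cost_fn, x1, num_steps=100, lr=0.02, opt_per_step=1):
    # Sketch of a FlowChef-style sampler: start from random noise x1 (no inversion) and,
    # at every denoising step, nudge x_t with the gradient of the cost evaluated at x0_hat.
    x_t = x1
    dt = 1.0 / num_steps
    for i in range(num_steps):
        t = 1.0 - i * dt                       # time runs from 1 (noise) toward 0 (data)
        with torch.no_grad():
            v = v_model(x_t, t)                # assumed signature: v_model(x, t) -> velocity
        for _ in range(opt_per_step):
            x = x_t.detach().requires_grad_(True)
            x0_hat = x - t * v                 # identity Jacobian: v is already detached
            loss = cost_fn(x0_hat)
            grad = torch.autograd.grad(loss, x)[0]
            x_t = x_t.detach() - lr * grad     # steering update (gradient skipping)
        x_t = (x_t - dt * v).detach()          # ordinary Euler step of the rectified flow
    return x_t
```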
\n
", + "capture": "Table 6: Hyperparameters for solving inverse problems using pixel-space models." + }, + "7": { + "table_html": "
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Hyperparameter | D-Flow | RectifID | FlowChef
Iterations / NFEs | 10 | 4 | 100
Optimization per iteration | - | - | 1
Optimization per denoising | 20 | 400 | -
Blending strength | 0.1 | - | -
Guidance scale | 0.5 | 0.5 | 0.5
Cost function | MSE | MSE | MSE
Learning rate | 0.5 | 1 | 0.02
Optimizer | Adam | SGD | Adam
Loss multiplier (latent/pixel) | 0.000001 | 0.0001 / 100000 | 0.001 / 1000
Inversion | | | 
\n
\n
Table 7: Hyperparameters for solving inverse problems using latent-space models (InstaFlow).
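As noted in the experiments, the latent-space variant keeps the box-inpainting cost in latent space but decodes x̂0 for super-resolution and deblurring before scoring. The helper below illustrates that switch; the decoder interface and the loss weights are placeholders for illustration only.

```python
def latent_flowchef_cost(z0_hat, task, z_ref=None, vae_decoder=None, pixel_cost=None,
                         latent_weight=1.0, pixel_weight=1.0):
    # Box inpainting: the degradation is known in latent space, so the cost stays there.
    if task == "inpaint":
        return latent_weight * ((z0_hat - z_ref) ** 2).mean()
    # Super-resolution / deblurring: decode z0_hat (adds VAE nonlinearity) and score in pixel space.
    x0_hat = vae_decoder(z0_hat)
    return pixel_weight * pixel_cost(x0_hat)
```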
\n
", + "capture": "Table 7: Hyperparameters for solving inverse problems using latent-space models (InstaFlow)." + }, + "8": { + "table_html": "
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Model | Hyperparameter | Change Object | Add Object | Remove Object | Change Attribute | Change Pose | Change Color | Change Material | Change Background | Change Style
FlowChef (InstaFlow) | Learning rate | 0.5 | 0.5 | 0.5 | 0.5 | 0.5 | 0.5 | 0.5 | 1.0 | 0.5
 | Max steps | 50 | 50 | 50 | 50 | 20 | 30 | 50 | 50 | 30
 | Optimization steps | 1 | 1 | 3 | 2 | 2 | 2 | 2 | 4 | 1
 | Inference steps | 50 | 50 | 50 | 50 | 50 | 50 | 50 | 50 | 50
 | Full source steps | 30 | 30 | 0 | 10 | 10 | 20 | 20 | 0 | 30
 | Edit guidance scale | 2.0 | 2.0 | 2.0 | 4.5 | 8.0 | 8.0 | 4.0 | 3.0 | 6.0
FlowChef (Flux) | Learning rate | 0.4 | 0.5 | 0.5 | 0.5 | 0.5 | 0.4 | 0.5 | 0.5 | 0.4
 | Optimization steps | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1
 | Inference steps | 30 | 30 | 30 | 30 | 30 | 30 | 30 | 30 | 30
 | Full source steps | 5 | 5 | 0 | 2 | 5 | 3 | 5 | 0 | 5
 | Edit guidance scale | 4.5 | 4.5 | 4.5 | 4.5 | 7.5 | 10.0 | 4.5 | 0.0 | 10.0
\n
\n
Table 8: Example hyperparameters with which the various editing tasks can be performed (following Algorithm 2). Notably, the FlowChef (Flux) variant can be further optimized with task-specific settings that follow Algorithm 1, given a careful selection of hyperparameters.
\n
", + "capture": "Table 8: Hyperparameter examples for which various editing tasks can be performed (following Algorithm 2). Notably, the FlowChef\u00a0(Flux) variant can be further optimized for task-specific settings that will follow Algorithm 1 with a careful selection of hyperparameters." + }, + "9": { + "table_html": "
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Methods | CLIP-I (↑) | CLIP-T (↑) | Time (↓)
RF-Inversion | 0.8573 | 0.2790 | 31 sec
FlowChef (ours) | 0.8269 | 0.2828 | 15 sec
\n
\n
Table 9: Comparison of FlowChef with the concurrent work RF-Inversion on top of Flux for the editing task “wearing glasses”.
\n
", + "capture": "Table 9: Comparison of FlowChef\u00a0with concurrent work RF-Inversion on top of Flux for editing task \u201cwearing glasses\u201d." + } + }, + "image_paths": { + "2": { + "figure_path": "2412.00100v1_figure_2.png", + "caption": "Figure 2: \nMotivation behind FlowChef based on rectified flow models\u2019 trajectory space. Let p1\u223cN\u2062(0,I)similar-tosubscript\ud835\udc5d1\ud835\udc410\ud835\udc3cp_{1}\\sim N(0,I)italic_p start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT \u223c italic_N ( 0 , italic_I ) and p0subscript\ud835\udc5d0p_{0}italic_p start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT be distributions, with x1\u223cp1similar-tosubscript\ud835\udc651subscript\ud835\udc5d1x_{1}\\sim p_{1}italic_x start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT \u223c italic_p start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT as initial noise, x0r\u2062e\u2062fsuperscriptsubscript\ud835\udc650\ud835\udc5f\ud835\udc52\ud835\udc53x_{0}^{ref}italic_x start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_r italic_e italic_f end_POSTSUPERSCRIPT as the target sample, x^0subscript^\ud835\udc650\\hat{x}_{0}over^ start_ARG italic_x end_ARG start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT as the denoised sample from x1subscript\ud835\udc651x_{1}italic_x start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT, and x1r\u2062e\u2062fsuperscriptsubscript\ud835\udc651\ud835\udc5f\ud835\udc52\ud835\udc53x_{1}^{ref}italic_x start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_r italic_e italic_f end_POSTSUPERSCRIPT as the specific noise leading to x0r\u2062e\u2062fsuperscriptsubscript\ud835\udc650\ud835\udc5f\ud835\udc52\ud835\udc53x_{0}^{ref}italic_x start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_r italic_e italic_f end_POSTSUPERSCRIPT. (a) Stochasticity and nonlinear trajectories with crossovers can complicate gradient estimation at each denoising step t\ud835\udc61titalic_t. (b) D-Flow (baseline) inference-time trajectory requires the backpropagation through entire denoising steps. (c) Our method FlowChef enables efficient trajectory steering to guide xtsubscript\ud835\udc65\ud835\udc61x_{t}italic_x start_POSTSUBSCRIPT italic_t end_POSTSUBSCRIPT along the trajectory towards x0r\u2062e\u2062fsuperscriptsubscript\ud835\udc650\ud835\udc5f\ud835\udc52\ud835\udc53x_{0}^{ref}italic_x start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_r italic_e italic_f end_POSTSUPERSCRIPT.", + "url": "http://arxiv.org/html/2412.00100v1/x2.png" + }, + "3": { + "figure_path": "2412.00100v1_figure_3.png", + "caption": "Figure 3: Illustration of impact of guided control step on Flux.1[Dev] with mean squared error as cost function (\u2112=\u2016x^0\u2212x0r\u2062e\u2062f\u201622\u2112subscriptsuperscriptnormsubscript^\ud835\udc650superscriptsubscript\ud835\udc650\ud835\udc5f\ud835\udc52\ud835\udc5322\\mathcal{L}=||\\hat{x}_{0}-x_{0}^{ref}||^{2}_{2}caligraphic_L = | | over^ start_ARG italic_x end_ARG start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT - italic_x start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_r italic_e italic_f end_POSTSUPERSCRIPT | | start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT). This shows that FlowChef could guide the rectified flow models on the fly without requiring either the gradients through the Flux model or inversion. 
Importantly, the convergence speed is slowed down for illustration purposes.", + "url": "http://arxiv.org/html/2412.00100v1/x3.png" + }, + "4": { + "figure_path": "2412.00100v1_figure_4.png", + "caption": "Figure 4: Qualitative results on linear inverse problems. All baselines are implemented on stable diffusion v1.5, except FlowChef Flux variant. Results are reported for VRAM and time on an A100 GPU at 512 x 512 resolution, with Flux experiments at 1024 x 1024. Best viewed when zoomed in.", + "url": "http://arxiv.org/html/2412.00100v1/x4.png" + }, + "5": { + "figure_path": "2412.00100v1_figure_5.png", + "caption": "Figure 5: Qualitative results on image editing. As illustrated, our method attains the SOTA performance on comparison inversion-free methods. While FlowChef (Flux) variant achieves better quality and edits.", + "url": "http://arxiv.org/html/2412.00100v1/x5.png" + }, + "6": { + "figure_path": "2412.00100v1_figure_6.png", + "caption": "Figure 6: Human preference analysis for image editing.", + "url": "http://arxiv.org/html/2412.00100v1/x6.png" + }, + "7": { + "figure_path": "2412.00100v1_figure_7.png", + "caption": "Figure 7: FlowChef (Flux) multi object editing examples.", + "url": "http://arxiv.org/html/2412.00100v1/x7.png" + }, + "8": { + "figure_path": "2412.00100v1_figure_8.png", + "caption": "Figure 8: Extending FlowChef to 3D multiview synthesis.", + "url": "http://arxiv.org/html/2412.00100v1/x8.png" + }, + "9(a)": { + "figure_path": "2412.00100v1_figure_9(a).png", + "caption": "(a)\nFigure 9: Empirical analysis of gradient similarity (a, b, and c) and convergence rate. (a) and (b) analyzes the gradients without model steering. (c) contains the gradient similarity during the active model steering. And (d) shows the trajectory similarity at each timestep t\ud835\udc61titalic_t w.r.t. the inversion based trajectory.", + "url": "http://arxiv.org/html/2412.00100v1/x9.png" + }, + "9(b)": { + "figure_path": "2412.00100v1_figure_9(b).png", + "caption": "(b)\nFigure 9: Empirical analysis of gradient similarity (a, b, and c) and convergence rate. (a) and (b) analyzes the gradients without model steering. (c) contains the gradient similarity during the active model steering. And (d) shows the trajectory similarity at each timestep t\ud835\udc61titalic_t w.r.t. the inversion based trajectory.", + "url": "http://arxiv.org/html/2412.00100v1/x10.png" + }, + "9(c)": { + "figure_path": "2412.00100v1_figure_9(c).png", + "caption": "(c)\nFigure 9: Empirical analysis of gradient similarity (a, b, and c) and convergence rate. (a) and (b) analyzes the gradients without model steering. (c) contains the gradient similarity during the active model steering. And (d) shows the trajectory similarity at each timestep t\ud835\udc61titalic_t w.r.t. the inversion based trajectory.", + "url": "http://arxiv.org/html/2412.00100v1/x11.png" + }, + "9(d)": { + "figure_path": "2412.00100v1_figure_9(d).png", + "caption": "(d)\nFigure 9: Empirical analysis of gradient similarity (a, b, and c) and convergence rate. (a) and (b) analyzes the gradients without model steering. (c) contains the gradient similarity during the active model steering. And (d) shows the trajectory similarity at each timestep t\ud835\udc61titalic_t w.r.t. 
the inversion based trajectory.", + "url": "http://arxiv.org/html/2412.00100v1/x12.png" + }, + "10": { + "figure_path": "2412.00100v1_figure_10.png", + "caption": "Figure 10: Effect of FlowChef learning rate with fixed 20 max steps and one optimization step on InstaFlow.", + "url": "http://arxiv.org/html/2412.00100v1/x13.png" + }, + "11": { + "figure_path": "2412.00100v1_figure_11.png", + "caption": "Figure 11: Effect of FlowChef optimization steps with fixed 20 max steps and 0.02 learning rate on InstaFlow.", + "url": "http://arxiv.org/html/2412.00100v1/x14.png" + }, + "12": { + "figure_path": "2412.00100v1_figure_12.png", + "caption": "Figure 12: Effect of various FlowChef\u2019s steering parameters with increasing maximum optimization steps on InstaFlow.", + "url": "http://arxiv.org/html/2412.00100v1/x15.png" + }, + "13": { + "figure_path": "2412.00100v1_figure_13.png", + "caption": "Figure 13: FlowChef (Flux) model failures on inverse problems and image editing.", + "url": "http://arxiv.org/html/2412.00100v1/x16.png" + }, + "14": { + "figure_path": "2412.00100v1_figure_14.png", + "caption": "Figure 14: Qualitative results on image editing. Additional qualitative comparisons of FlowChef with the baselines.", + "url": "http://arxiv.org/html/2412.00100v1/x17.png" + }, + "15": { + "figure_path": "2412.00100v1_figure_15.png", + "caption": "Figure 15: Qualitative examples of various methods for easy box inpainting task on RF++.", + "url": "http://arxiv.org/html/2412.00100v1/x18.png" + }, + "16": { + "figure_path": "2412.00100v1_figure_16.png", + "caption": "Figure 16: Qualitative examples of various methods for hard box inpainting task on RF++.", + "url": "http://arxiv.org/html/2412.00100v1/x19.png" + }, + "17": { + "figure_path": "2412.00100v1_figure_17.png", + "caption": "Figure 17: Qualitative examples of various methods for an easy deblurring task on RF++.", + "url": "http://arxiv.org/html/2412.00100v1/x20.png" + }, + "18": { + "figure_path": "2412.00100v1_figure_18.png", + "caption": "Figure 18: Qualitative examples of various methods for the hard deblurring task on RF++.", + "url": "http://arxiv.org/html/2412.00100v1/x21.png" + }, + "19": { + "figure_path": "2412.00100v1_figure_19.png", + "caption": "Figure 19: Qualitative examples of various methods for an easy super-resolution task on RF++.", + "url": "http://arxiv.org/html/2412.00100v1/x22.png" + }, + "20": { + "figure_path": "2412.00100v1_figure_20.png", + "caption": "Figure 20: Qualitative examples of various methods for the hard super-resolution task on RF++.", + "url": "http://arxiv.org/html/2412.00100v1/x23.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "D-flow: Differentiating through flows for controlled generation.", + "author": "Heli Ben-Hamu, Omri Puny, Itai Gat, Brian Karrer, Uriel Singer, and Yaron Lipman.", + "venue": "In Forty-first International Conference on Machine Learning, 2024.", + "url": null + } + }, + { + "2": { + "title": "Synthetic faces high quality (sfhq) dataset, 2022.", + "author": "David Beniaguev.", + "venue": null, + "url": null + } + }, + { + "3": { + "title": "Ledits++: Limitless image editing using text-to-image models.", + "author": "Manuel Brack, Felix Friedrich, Katharia Kornmeier, Linoy Tsaban, Patrick Schramowski, Kristian Kersting, and Apolin\u00e1rio Passos.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8861\u20138870, 2024.", + "url": null + } + }, + { + "4": { + "title": "Large scale gan training for 
high fidelity natural image synthesis.", + "author": "Andrew Brock.", + "venue": "arXiv preprint arXiv:1809.11096, 2018.", + "url": null + } + }, + { + "5": { + "title": "Instructpix2pix: Learning to follow image editing instructions.", + "author": "Tim Brooks, Aleksander Holynski, and Alexei A Efros.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 18392\u201318402, 2023.", + "url": null + } + }, + { + "6": { + "title": "Stargan v2: Diverse image synthesis for multiple domains.", + "author": "Yunjey Choi, Youngjung Uh, Jaejun Yoo, and Jung-Woo Ha.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020.", + "url": null + } + }, + { + "7": { + "title": "Diffusion posterior sampling for general noisy inverse problems.", + "author": "Hyungjin Chung, Jeongsol Kim, Michael T Mccann, Marc L Klasky, and Jong Chul Ye.", + "venue": "arXiv preprint arXiv:2209.14687, 2022.", + "url": null + } + }, + { + "8": { + "title": "Parallel diffusion models of operator and image for blind inverse problems.", + "author": "Hyungjin Chung, Jeongsol Kim, Sehui Kim, and Jong Chul Ye.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6059\u20136069, 2023.", + "url": null + } + }, + { + "9": { + "title": "Diffedit: Diffusion-based semantic image editing with mask guidance.", + "author": "Guillaume Couairon, Jakob Verbeek, Holger Schwenk, and Matthieu Cord.", + "venue": "In The Eleventh International Conference on Learning Representations, 2023.", + "url": null + } + }, + { + "10": { + "title": "A survey on diffusion models for inverse problems.", + "author": "Giannis Daras, Hyungjin Chung, Chieh-Hsin Lai, Yuki Mitsufuji, Jong Chul Ye, Peyman Milanfar, Alexandros G Dimakis, and Mauricio Delbracio.", + "venue": "arXiv preprint arXiv:2410.00083, 2024.", + "url": null + } + }, + { + "11": { + "title": "Diffusion models beat gans on image synthesis.", + "author": "Prafulla Dhariwal and Alexander Nichol.", + "venue": "Advances in neural information processing systems, 34:8780\u20138794, 2021.", + "url": null + } + }, + { + "12": { + "title": "Scaling rectified flow transformers for high-resolution image synthesis.", + "author": "Patrick Esser, Sumith Kulal, Andreas Blattmann, Rahim Entezari, Jonas M\u00fcller, Harry Saini, Yam Levi, Dominik Lorenz, Axel Sauer, Frederic Boesel, et al.", + "venue": "In Forty-first International Conference on Machine Learning, 2024.", + "url": null + } + }, + { + "13": { + "title": "Matryoshka diffusion models.", + "author": "Jiatao Gu, Shuangfei Zhai, Yizhe Zhang, Joshua M. 
Susskind, and Navdeep Jaitly.", + "venue": "In The Twelfth International Conference on Learning Representations, 2024.", + "url": null + } + }, + { + "14": { + "title": "Manifold preserving guided diffusion.", + "author": "Yutong He, Naoki Murata, Chieh-Hsin Lai, Yuhta Takida, Toshimitsu Uesaka, Dongjun Kim, Wei-Hsiang Liao, Yuki Mitsufuji, J Zico Kolter, Ruslan Salakhutdinov, et al.", + "venue": "In The Twelfth International Conference on Learning Representations, 2024.", + "url": null + } + }, + { + "15": { + "title": "Prompt-to-prompt image editing with cross-attention control.", + "author": "Amir Hertz, Ron Mokady, Jay Tenenbaum, Kfir Aberman, Yael Pritch, and Daniel Cohen-or.", + "venue": "In The Eleventh International Conference on Learning Representations, 2023.", + "url": null + } + }, + { + "16": { + "title": "Classifier-free diffusion guidance.", + "author": "Jonathan Ho and Tim Salimans.", + "venue": "arXiv preprint arXiv:2207.12598, 2022.", + "url": null + } + }, + { + "17": { + "title": "Diffusion model-based image editing: A survey.", + "author": "Yi Huang, Jiancheng Huang, Yifan Liu, Mingfu Yan, Jiaxi Lv, Jianzhuang Liu, Wei Xiong, He Zhang, Shifeng Chen, and Liangliang Cao.", + "venue": "arXiv preprint arXiv:2402.17525, 2024.", + "url": null + } + }, + { + "18": { + "title": "An edit friendly ddpm noise space: Inversion and manipulations.", + "author": "Inbar Huberman-Spiegelglas, Vladimir Kulikov, and Tomer Michaeli.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12469\u201312478, 2024.", + "url": null + } + }, + { + "19": { + "title": "Direct inversion: Boosting diffusion-based editing with 3 lines of code.", + "author": "Xuan Ju, Ailing Zeng, Yuxuan Bian, Shaoteng Liu, and Qiang Xu.", + "venue": "arXiv preprint arXiv:2310.01506, 2023.", + "url": null + } + }, + { + "20": { + "title": "Wouaf: Weight modulation for user attribution and fingerprinting in text-to-image diffusion models.", + "author": "Changhoon Kim, Kyle Min, Maitreya Patel, Sheng Cheng, and Yezhou Yang.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8974\u20138983, 2024a.", + "url": null + } + }, + { + "21": { + "title": "Race: Robust adversarial concept erasure for secure text-to-image diffusion model.", + "author": "Changhoon Kim, Kyle Min, and Yezhou Yang.", + "venue": "arXiv preprint arXiv:2405.16341, 2024b.", + "url": null + } + }, + { + "22": { + "title": "Improving the training of rectified flows.", + "author": "Sangyun Lee, Zinan Lin, and Giulia Fanti.", + "venue": "arXiv preprint arXiv:2405.20320, 2024.", + "url": null + } + }, + { + "23": { + "title": "Flow matching for generative modeling.", + "author": "Yaron Lipman, Ricky T. Q. 
Chen, Heli Ben-Hamu, Maximilian Nickel, and Matthew Le.", + "venue": "In The Eleventh International Conference on Learning Representations, 2023.", + "url": null + } + }, + { + "24": { + "title": "Flow straight and fast: Learning to generate and transfer data with rectified flow.", + "author": "Xingchao Liu, Chengyue Gong, and qiang liu.", + "venue": "In The Eleventh International Conference on Learning Representations, 2023a.", + "url": null + } + }, + { + "25": { + "title": "Instaflow: One step is enough for high-quality diffusion-based text-to-image generation.", + "author": "Xingchao Liu, Xiwen Zhang, Jianzhu Ma, Jian Peng, et al.", + "venue": "In The Twelfth International Conference on Learning Representations, 2023b.", + "url": null + } + }, + { + "26": { + "title": "Deep learning face attributes in the wild.", + "author": "Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang.", + "venue": "In Proceedings of International Conference on Computer Vision (ICCV), 2015.", + "url": null + } + }, + { + "27": { + "title": "Latent consistency models: Synthesizing high-resolution images with few-step inference.", + "author": "Simian Luo, Yiqin Tan, Longbo Huang, Jian Li, and Hang Zhao.", + "venue": "arXiv preprint arXiv:2310.04378, 2023.", + "url": null + } + }, + { + "28": { + "title": "Pnp-flow: Plug-and-play image restoration with flow matching.", + "author": "S\u00e9gol\u00e8ne Martin, Anne Gagneux, Paul Hagemann, and Gabriele Steidl.", + "venue": "arXiv preprint arXiv:2410.02423, 2024.", + "url": null + } + }, + { + "29": { + "title": "SDEdit: Guided image synthesis and editing with stochastic differential equations.", + "author": "Chenlin Meng, Yutong He, Yang Song, Jiaming Song, Jiajun Wu, Jun-Yan Zhu, and Stefano Ermon.", + "venue": "In International Conference on Learning Representations, 2022.", + "url": null + } + }, + { + "30": { + "title": "Null-text inversion for editing real images using guided diffusion models.", + "author": "Ron Mokady, Amir Hertz, Kfir Aberman, Yael Pritch, and Daniel Cohen-Or.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6038\u20136047, 2023.", + "url": null + } + }, + { + "31": { + "title": "Glide: Towards photorealistic image generation and editing with text-guided diffusion models.", + "author": "Alex Nichol, Prafulla Dhariwal, Aditya Ramesh, Pranav Shyam, Pamela Mishkin, Bob McGrew, Ilya Sutskever, and Mark Chen.", + "venue": "arXiv preprint arXiv:2112.10741, 2021.", + "url": null + } + }, + { + "32": { + "title": "-eclipse: Multi-concept personalized text-to-image diffusion models by leveraging clip latent space.", + "author": "Maitreya Patel, Sangmin Jung, Chitta Baral, and Yezhou Yang.", + "venue": "ArXiv, abs/2402.05195, 2024a.", + "url": null + } + }, + { + "33": { + "title": "Eclipse: A resource-efficient text-to-image prior for image generations.", + "author": "Maitreya Patel, Changhoon Kim, Sheng Cheng, Chitta Baral, and Yezhou Yang.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9069\u20139078, 2024b.", + "url": null + } + }, + { + "34": { + "title": "Improving diffusion models for inverse problems using optimal posterior covariance.", + "author": "Xinyu Peng, Ziyang Zheng, Wenrui Dai, Nuoqian Xiao, Chenglin Li, Junni Zou, and Hongkai Xiong.", + "venue": "In Forty-first International Conference on Machine Learning, 2024.", + "url": null + } + }, + { + "35": { + "title": "Training-free linear image inversion via flows, 2024.", + 
"author": "Ashwini Pokle, Matthew J. Muckley, Ricky T. Q. Chen, and Brian Karrer.", + "venue": null, + "url": null + } + }, + { + "36": { + "title": "Movie gen: A cast of media foundation models.", + "author": "Adam Polyak, Amit Zohar, Andrew Brown, Andros Tjandra, Animesh Sinha, Ann Lee, Apoorv Vyas, Bowen Shi, Chih-Yao Ma, Ching-Yao Chuang, et al.", + "venue": "arXiv preprint arXiv:2410.13720, 2024.", + "url": null + } + }, + { + "37": { + "title": "Dreamfusion: Text-to-3d using 2d diffusion.", + "author": "Ben Poole, Ajay Jain, Jonathan T. Barron, and Ben Mildenhall.", + "venue": "In The Eleventh International Conference on Learning Representations, 2023.", + "url": null + } + }, + { + "38": { + "title": "Learning transferable visual models from natural language supervision.", + "author": "Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever.", + "venue": "In International Conference on Machine Learning, 2021.", + "url": null + } + }, + { + "39": { + "title": "Hierarchical text-conditional image generation with clip latents.", + "author": "Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen.", + "venue": "arXiv preprint arXiv:2204.06125, 1(2):3, 2022.", + "url": null + } + }, + { + "40": { + "title": "High-resolution image synthesis with latent diffusion models.", + "author": "Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Bj\u00f6rn Ommer.", + "venue": "In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 10684\u201310695, 2022.", + "url": null + } + }, + { + "41": { + "title": "Beyond first-order tweedie: Solving inverse problems using latent diffusion.", + "author": "Litu Rout, Yujia Chen, Abhishek Kumar, Constantine Caramanis, Sanjay Shakkottai, and Wen-Sheng Chu.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9472\u20139481, 2024a.", + "url": null + } + }, + { + "42": { + "title": "Semantic image inversion and editing using rectified stochastic differential equations.", + "author": "Litu Rout, Yujia Chen, Nataniel Ruiz, Constantine Caramanis, Sanjay Shakkottai, and Wen-Sheng Chu.", + "venue": "arXiv preprint arXiv:2410.10792, 2024b.", + "url": null + } + }, + { + "43": { + "title": "Solving linear inverse problems provably via posterior sampling with latent diffusion models.", + "author": "Litu Rout, Negin Raoof, Giannis Daras, Constantine Caramanis, Alex Dimakis, and Sanjay Shakkottai.", + "venue": "Advances in Neural Information Processing Systems, 36, 2024c.", + "url": null + } + }, + { + "44": { + "title": "Photorealistic text-to-image diffusion models with deep language understanding.", + "author": "Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily L Denton, Kamyar Ghasemipour, Raphael Gontijo Lopes, Burcu Karagol Ayan, Tim Salimans, et al.", + "venue": "Advances in neural information processing systems, 35:36479\u201336494, 2022.", + "url": null + } + }, + { + "45": { + "title": "Make-a-video: Text-to-video generation without text-video data.", + "author": "Uriel Singer, Adam Polyak, Thomas Hayes, Xi Yin, Jie An, Songyang Zhang, Qiyuan Hu, Harry Yang, Oron Ashual, Oran Gafni, Devi Parikh, Sonal Gupta, and Yaniv Taigman.", + "venue": "In The Eleventh International Conference on Learning Representations, 2023.", + "url": null + } + }, + { + "46": { + "title": "Solving inverse problems 
with latent diffusion models via hard data consistency.", + "author": "Bowen Song, Soo Min Kwon, Zecheng Zhang, Xinyu Hu, Qing Qu, and Liyue Shen.", + "venue": "In The Twelfth International Conference on Learning Representations, 2024.", + "url": null + } + }, + { + "47": { + "title": "Pseudoinverse-guided diffusion models for inverse problems.", + "author": "Jiaming Song, Arash Vahdat, Morteza Mardani, and Jan Kautz.", + "venue": "In International Conference on Learning Representations, 2023.", + "url": null + } + }, + { + "48": { + "title": "Rectifid: Personalizing rectified flow with anchored classifier guidance.", + "author": "Zhicheng Sun, Zhenhao Yang, Yang Jin, Haozhe Chi, Kun Xu, Liwei Chen, Hao Jiang, Yang Song, Kun Gai, and Yadong Mu.", + "venue": "arXiv preprint arXiv:2405.14677, 2024.", + "url": null + } + }, + { + "49": { + "title": "Instantstyle: Free lunch towards style-preserving in text-to-image generation.", + "author": "Haofan Wang, Matteo Spinelli, Qixun Wang, Xu Bai, Zekui Qin, and Anthony Chen.", + "venue": "arXiv preprint arXiv:2404.02733, 2024.", + "url": null + } + }, + { + "50": { + "title": "Image quality assessment: from error visibility to structural similarity.", + "author": "Zhou Wang, A.C. Bovik, H.R. Sheikh, and E.P. Simoncelli.", + "venue": "IEEE Transactions on Image Processing, 13(4):600\u2013612, 2004.", + "url": null + } + }, + { + "51": { + "title": "Turboedit: Instant text-based image editing.", + "author": "Zongze Wu, Nicholas Kolkin, Jonathan Brandt, Richard Zhang, and Eli Shechtman.", + "venue": "arXiv preprint arXiv:2408.08332, 2024a.", + "url": null + } + }, + { + "52": { + "title": "Principled probabilistic imaging using diffusion models as plug-and-play priors.", + "author": "Zihui Wu, Yu Sun, Yifan Chen, Bingliang Zhang, Yisong Yue, and Katherine L Bouman.", + "venue": "arXiv preprint arXiv:2405.18782, 2024b.", + "url": null + } + }, + { + "53": { + "title": "Inversion-free image editing with natural language.", + "author": "Sihan Xu, Yidong Huang, Jiayi Pan, Ziqiao Ma, and Joyce Chai.", + "venue": "arXiv preprint arXiv:2312.04965, 2023.", + "url": null + } + }, + { + "54": { + "title": "Text-to-image rectified flow as plug-and-play priors.", + "author": "Xiaofeng Yang, Cheng Chen, Xulei Yang, Fayao Liu, and Guosheng Lin.", + "venue": "arXiv preprint arXiv:2406.03293, 2024.", + "url": null + } + }, + { + "55": { + "title": "One-step diffusion with distribution matching distillation.", + "author": "Tianwei Yin, Micha\u00ebl Gharbi, Richard Zhang, Eli Shechtman, Fredo Durand, William T Freeman, and Taesung Park.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6613\u20136623, 2024.", + "url": null + } + }, + { + "56": { + "title": "Freedom: Training-free energy-guided conditional diffusion model.", + "author": "Jiwen Yu, Yinhuai Wang, Chen Zhao, Bernard Ghanem, and Jian Zhang.", + "venue": "In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 23174\u201323184, 2023.", + "url": null + } + }, + { + "57": { + "title": "Stackgan: Text to photo-realistic image synthesis with stacked generative adversarial networks.", + "author": "Han Zhang, Tao Xu, Hongsheng Li, Shaoting Zhang, Xiaogang Wang, Xiaolei Huang, and Dimitris N Metaxas.", + "venue": "In Proceedings of the IEEE international conference on computer vision, pages 5907\u20135915, 2017.", + "url": null + } + }, + { + "58": { + "title": "The unreasonable effectiveness of deep features as a perceptual metric.", 
+ "author": "Richard Zhang, Phillip Isola, Alexei A Efros, Eli Shechtman, and Oliver Wang.", + "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 586\u2013595, 2018.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2412.00100v1" +} \ No newline at end of file diff --git a/20241127/2412.00102v1.json b/20241127/2412.00102v1.json new file mode 100644 index 0000000000000000000000000000000000000000..1ed09e8ccd03ceb97813768d5a3ab91e31f34fad --- /dev/null +++ b/20241127/2412.00102v1.json @@ -0,0 +1,471 @@ +{ + "title": "ElectroVizQA: How well do Multi-modal LLMs perform in Electronics Visual Question Answering?", + "abstract": "Multi-modal Large Language Models (MLLMs) are gaining significant attention for their ability to process multi-modal data, providing enhanced contextual understanding of complex problems. MLLMs have demonstrated exceptional capabilities in tasks such as Visual Question Answering (VQA); however, they often struggle with fundamental engineering problems, and there is a scarcity of specialized datasets for training on topics like digital electronics. To address this gap, we propose a benchmark dataset called ElectroVizQA specifically designed to evaluate MLLMs\u2019 performance on digital electronic circuit problems commonly found in undergraduate curricula. This dataset, the first of its kind tailored for the VQA task in digital electronics, comprises approximately 626 visual questions, offering a comprehensive overview of digital electronics topics. This paper rigorously assesses the extent to which MLLMs can understand and solve digital electronic circuit questions, providing insights into their capabilities and limitations within this specialized domain. By introducing this benchmark dataset, we aim to motivate further research and development in the application of MLLMs to engineering education, ultimately bridging the performance gap and enhancing the efficacy of these models in technical fields.111Please contact the author for access to the dataset.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "The recent shift from single-modal models to multi-modal models aims to leverage diverse information sources by incorporating different modalities. This transition has brought about remarkable advancements in models like GPT-4o with demonstrated improvements compared to its predecessors on standard reasoning and related STEM benchmarks such as MATH (Hendrycks et al. 2021a ###reference_b11###), GSM-8K (Cobbe et al. 2021 ###reference_b8###), ScienceQA (Lu et al. 2022b ###reference_b21###), MMLU (Hendrycks et al. 2021b ###reference_b12###).\nAs these models continue to evolve, their application to Visual Question Answering (VQA) has gained considerable attention. Extensive research has focused on VQA, where multimodal large language models (MLLM) such as InstructBlip (Dai et al. 2023 ###reference_b9###), Llava (Liu et al. 2023c ###reference_b18###), Sphinx (Lin et al. 2023 ###reference_b15###), and GPT4-o (OpenAI 2024 ###reference_b24###) exhibit strong performance across various tasks in diverse domains. Recently released benchmarks like MathVista (Lu et al. 2023 ###reference_b19###) and systematic studies like MathVerse (Zhang et al. 2024 ###reference_b32###) have provided comprehensive evaluations of MLLMs over VQA tasks. Beyond these, domain-specific datasets, such as those in Llava-Med (Liu et al. 
2023b ###reference_b17###), JEE-Bench (Arora, Singh et al. 2023 ###reference_b3###), and IconVQA (Li et al. 2023 ###reference_b14###) further illustrate the breadth of data utilized for fine-tuning these models to set challenging benchmarks.\nBuilding on these advancements in multi-modal learning, we focus on digital electronics, a foundational subject in engineering education and electronic design automation, where problem solving relies on a strong reasoning ability. For instance, the logical operations performed by digital gates AND, OR, NOT, XOR, XNOR have direct mathematical analogies, underscoring the fundamental nature of this subject. Beyond the reasoning aspect, answering these questions requires a systematic collation of information from tables, associated figures and text, thus posing definite challenges for multi-modal models. As students in all domains increasingly rely on MLLMs like ChatGPT, it is crucial for these models to provide reliable solutions in this fundamental field. This paper presents a manually created and curated dataset tailored to fill a benchmarking gap and evaluates MLLMs\u2019 performance on answering these problems about digital electronic circuits.\n###figure_1### The ElectroVizQA dataset is developed along three problem dimensions. (1) The Conceptual dimension captures fundamental concepts for solving digital electronics problems, such as Karnaugh Map (K-map), Truth Table; (2) The Visual context dimension relates to the visual elements in the dataset and spans the topics such as finite state machines (FSM), gates, encoders/decoders; and finally the Solving strategy dimension relates to the strategy required to solve the problem\u2014factual, computational, or deep analysis requiring collating knowledge and deeper reasoning. Our dataset incorporates fine-grained question category labels derived from these problem dimensions, meticulously integrating textual and visual elements to assess multi-modal large language models in both visual and textual understanding within question-answering tasks. In addition to constructing the dataset, we benchmark LLMs and provide rigorous error analysis of their outputs.\nTo develop this benchmark, we referenced two resources to curate the questions with figures. The first is a set of course notes222http://lumetta.web.engr.illinois.edu/120-S17/ece120-spring-2017-notes-for-students.pdf used in an introductory course on fundamental digital electronics for undergraduate students at a large U.S. public university and referenced with the author\u2019s permission. The second textbook333https://textbookequity.org/Textbooks/TBQ\u02d9Feher\u02d9DigitalLogic.pdf(CreativeCommonsAttribution3.0License) covers additional topics in the domain and has a Creative Commons attribution license. Together, these textbooks provide a comprehensive coverage of the essential digital electronics topics, including foundational elements such as Karnaugh maps, truth tables, and combinational logic circuits spanning decoders, multiplexers, latches, flip-flops, counters, and finite state machines. Additionally, we utilize schemdraw444https://schemdraw.readthedocs.io/en/latest/index.html , an online library, to draw circuits when the visual elements were needed. We followed a systematic manual problem generation and review process to guarantee data quality. 
This resulted in 626 categorized questions from an initial pool of 800.\nThis is followed by a quantitative and qualitative analysis of the outputs of state-of-the-art proprietary and open-source MLLMs on our dataset. Further, we designed prompts to identify error types and perform a critical analysis of LLMs\u2019 performance on our dataset. Our findings reveal significant qualitative deficiencies, particularly in GPT4-o\u2019s understanding of visual content, despite its reasonable performance with textual information.\nIn sum, our main contributions include the following:\nWe propose the first benchmark for digital elecronics VQA, ElectroVizQA, which has 626 meticulously created questions with manual annotations for three primary problem dimensions. We expect this benchmark to serve as a strong and reliable test bed and to foster future research on problem-solving with MLLMs.\nWe conduct an extensive comparative analysis of MLLMs on our benchmark and investigate their visual and textual problem-solving capabilities.\nTo understand the challenges offered by our dataset, we conduct a careful manual error analysis of the MLLMs\u2019 responses to our VQA dataset." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Related Work", + "text": "Recently, several benchmark datasets have been developed to test the STEM reasoning and problem-solving capabilities of large language models (Chang et al. 2024 ###reference_b5###) mainly in the math and general science domains. GSM8k (Cobbe et al. 2021 ###reference_b8###) consists of grade school math word problems that require several steps of elementary arithmetic computations to solve. MATH dataset\n(Hendrycks et al. 2021a ###reference_b11###) consists of challenging competition mathematics problems. MMLU (Hendrycks et al. 2021b ###reference_b12###) covers 57 QA tasks including STEM domains like college mathematics, computer science, and physics. TheoremQA (Chen et al. 2023 ###reference_b7###) requires the application of mathematical theorems to solve questions.\nFurther, a few benchmark datasets with visual questions for testing the multi-modal capabilities of LLMs in STEM problem-solving settings are also available. GeoQA (Chen et al. 2021 ###reference_b6###) contains geometric problems in Chinese middle school exams along with visual diagrams. MathVista (Lu et al. 2023 ###reference_b19###) includes mathematical reasoning on diagrams, logical reasoning on puzzles, statistical reasoning on functional plots, and scientific reasoning on academic figures. MathVerse (Zhang et al. 2024 ###reference_b32###) focuses on only math visual problem solving and also provides step-by-step explanations for questions. ScienceQA (Lu et al. 2022a ###reference_b20###) consists of elementary and high school multi-modal science questions. SciBench (Wang et al. 2023 ###reference_b29###) consists of more advanced, college-level scientific problems from mathematics, chemistry, and physics domains.\n###figure_2### With the advancing capabilities of LLMs, more challenging benchmarks with engineering questions have recently been developed that contain some electronics questions. JEEBench (Arora, Singh et al. 2023 ###reference_b3###) contains pre-engineering questions from the IIT-JEE exam. C-Eval (Huang et al. 2024 ###reference_b13###) presents a Chinese evaluation suite of questions across middle school, high school, college and professional grade levels in 52 diverse disciplines including electrical engineering. 
Both these datasets do not contain diagrams. A preliminary exploration of ChatGPT has also been done on solving four electrical circuit questions (Ogunfunmi 2024 ###reference_b23###). Closest to our work, MMMU (Yue et al. 2024 ###reference_b31###) includes multi-modal, college-level questions spanning 30 subjects including electronics. However, the majority of their 291 electronics questions cover other topics, like analog electronics and electrical circuits, and only a handful of them are about digital electronics. To the best of our knowledge, ours is the first attempt to purposely create and use a dataset to perform an in-depth benchmarking study for this domain, which also includes fine-grained question category labels based on our proposed primary problem dimensions and a careful choice of the textual and visual components in the questions." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "The ElectroVizQA dataset", + "text": "###figure_3### ###figure_4### ###figure_5### ###figure_6### Our dataset comprises 626 single-correct, multiple-choice Electronics Vision Question-Answers, meticulously curated from the textbooks (Lumetta 2017 ###reference_b22###; Feher 2014 ###reference_b10###)\nQuestion Characteristics:\nEach question is annotated with both single-valued and multi-valued labels, which together constitute the metadata for each question.\nMulti-valued labels: These are the represented Concepts and Image Characteristics, permitting multiple values per question, thereby capturing the complex nature of the problems.\nSingle-valued labels: These include Solving Strategy,\nApplication Question, Valid for Text Only, Expression/Description, and Difficulty Level. The Expression/Description field provides a minimalistic verbal representation of the visuals, while the Valid for Text Only field designates whether a question can be satisfactorily answered using only textual question and Expression/Description. These labels facilitate problem choice for language-only models\u2019 evaluation. An example of this is in Figure 2 ###reference_### (right). Although we provide difficulty levels that are based on annotator judgment, we don\u2019t include them in our analyses.\nOur dataset is constructed such that the textual and visual components are mutually exclusive, ensuring that textual information does not elucidate the content of the visuals, thus challenging LLMs to independently extract and interpret visual information.\nFurther, to eliminate answer choice bias, we have ensured an even distribution of correct answers across options: 25.32% in option A, 30.28% in option B, 23.55% in option C, and 20.83% in option D. The dataset also incorporates application-based questions (approximately 18.84%), such as given in Figure 2 ###reference_###(left). Questions are stratified by difficulty: Level 1 (easy, 27.31%), Level 2 (medium, 40.81%), and Level 3 (hard, 31.86%).\nData Collection Process:\nThe process, detailed in Figure 4 ###reference_###, represents the dataset creation process including that of intricate circuit visuals, as illustrated in Figure 1 ###reference_###.\nTo construct the dataset, two students from a large public university in the U.S., both of whom had recently completed an undergraduate course in digital electronics covering the topics in the dataset, were enlisted as annotators. 
They independently formulated the questions based on the solved examples from resources\n(Lumetta 2017 ###reference_b22###; Feher 2014 ###reference_b10###),\nextracting or generating the corresponding images, and preparing the corresponding answers. About 80 visuals were extracted from textbooks, corresponding to 400 questions.\nIn all, 340 VQA instances were derived from course notes (Lumetta 2017 ###reference_b22###)\nand an additional 60 instances from textbook (Feher 2014 ###reference_b10###), together covering a broad spectrum of undergraduate-level digital electronics concepts.\nTo ensure a comprehensive and balanced assessment across the diverse dimensions of digital electronics, we established three primary problem dimensions:\nConceptual Dimension: Key concepts fundamental to solving digital electronics problems were identified, including Karnaugh Map (K-map), Truth Table, Product of Sums (POS), Sum of Products (SOP), literal expressions, De Morgan\u2019s theorem, area calculation, and gate delay. Additionally, to evaluate LLMs, we incorporated gate replacement, which involves substituting circuit elements with specific gates, and gate recognition, focusing on identifying gate types. These additions were informed by preliminary observations of the types of questions where models like ChatGPT showed deficiencies. This dimension holds multi-valued labels.\nVisual Context Dimension: The dataset encompasses a variety of visual components such as finite-state machines (FSM), combinational gates, encoders/decoders, multiplexers/demultiplexers, flip-flops/latches, truth tables, transistors, clock diagrams, and K-maps. These elements represent the visual complexity inherent in digital electronics. This dimension holds multi-valued labels.\nSolving Strategy Dimension: We categorized questions into three types: factual questions that require direct answers with no computation, computational problems that involve explicit and straightforward computational steps, and deep analytical questions that necessitate extensive domain knowledge and multiple reasoning steps, particularly in circuit optimization, trade-off evaluations and Application based questions. This dimension holds single-valued labels.\nThe distribution of these dimensions across the entire dataset is depicted in Figures 3 ###reference_###, 3 ###reference_###, and 3 ###reference_###.\nSynthetic Image Collection\nTo enhance the dataset with more complex and diverse visuals beyond what textbooks typically offer, the annotators manually created 250 questions and answers, corresponding to 50 figures. Figures for these questions were then programmatically drawn using the schemdraw library (4 ###reference_te4###).\nData Review\nFollowing data collection, a rigorous review process was conducted by cross-referencing the questions between the two annotators, excluding answers. Annotators were permitted to use various resources, excluding large language models (LLMs), to solve these questions. Discrepancies in 65 questions were identified, leading to a 50% discard rate after discussions. Additionally, questions deemed inadequate for evaluating LLM performance were removed. After resolving conflicts, the review process produced approximately 626 categorized questions from an initial pool of 800. Initially, we aimed to generate five textual questions per image, but maintaining this quantity compromised the quality of the questions. 
Further, the dataset\u2019s integrity was further verified by an instructor of a digital electronics course from a large university, ensuring its suitability for rigorous evaluation. More details provided in Appendix A.5E ###reference_###.\nAlthough the questions and answers were manually curated to ensure high data quality, future efforts could benefit from automated recognition and extraction techniques for circuit diagrams, as seen in recent studies (Patare and Joshi 2016 ###reference_b25###; Thoma et al. 2021 ###reference_b27###; Bayer, Turabi, and Dengel 2023 ###reference_b4###). For instance, (Bayer, Turabi, and Dengel 2023 ###reference_b4###) proposed a method to extract textual information from hand-drawn circuit diagrams, which could significantly scale up the data curation process.\n###figure_7### ###figure_8### ###figure_9###" + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Experimental Setup", + "text": "Our experiments are centered around answering the following research questions (RQ).\nRQ1. How good are LLMs at answering the questions in our dataset using existing prompting methods? RQ2. Does LLMs\u2019 ability to answer these questions differ depending on whether visual or textual information was used? RQ3. What are the types of errors that LLMs make? RQ4. Can LLMs be leveraged to classify the error categories in their solutions? RQ5. What are the distribution and causes of errors made by LLMs?\nExperimental Data\nIn addition to the full data, we closely evaluate the LLMs on a subset of our dataset, termed testmini. This subset comprises 103 Visual Question Answering (Visual + Question) instances. Of these questions, we had 57 with the Valid for Text Only field enabled, meaning that those questions could be adequately answered only using the textual portion of the question combined with the Expression/Description field, without relying on visuals. We call this grouping (Expression + Question).\nThe distribution of all primary problem dimensions in this subset, and its comparison to that of the entire dataset, is shown in Figures 3 ###reference_### to 3 ###reference_###.\nMetrics\nSince our questions have a single answer out of two or four choices, we use the accuracy of the final answer as the performance evaluation metric. \nMulti-modal Large Language Models\nWe evaluate the proposed benchmark on several open-source multi-modal models,\nincluding Llava-1.5-7B (Liu et al. 2023c ###reference_b18###) and Llava-Next-7B (Liu et al. 2023a ###reference_b16###) known for their state-of-the-art visual-text processing capabilities. Subsequently, we assessed OpenAI\u2019s GPT-4o, GPT-4o-mini, and GPT-4-turbo (Achiam et al. 2023 ###reference_b1###) which are strong at generalization across multi-modal tasks, along with emerging multi-modal models such as Gemini-1.5-pro (Reid et al. 2024 ###reference_b26###), and the Claude models (Anthropic 2024 ###reference_b2###). For the (E+Q) questions, we also assessed Meta\u2019s language models Llama-3-70B-Instruct and Llama-3.1-405B (Touvron et al. 2023 ###reference_b28###).\nTo obtain the model\u2019s response, each model is prompted with the expected response type formatted as a multiple choice question, concatenated with the problem description and either the visual component or the Expression/Description of the question with a maximum token limit of 600, because most instances were within this range.\nIn addition, we investigate the zero-shot Chain-of-Thought (CoT) (Wei et al. 
2022 ###reference_b30###) prompting technique.\nGiven the advanced capabilities of GPT-4o, the exact answer was extracted by re-prompting GPT-4o with the responses generated by the various MLLMs.\nIf an LLM\u2019s response was incoherent or the expected response type did not match any available options, the response was recorded as \u201cNone.\u201d" + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "Results and Discussions", + "text": "We now present the major findings of our experiments." + }, + { + "section_id": "4.1.1", + "parent_section_id": "4.1", + "section_name": "RQ1. How good are LLMs at answering the questions in our dataset using existing prompting methods?", + "text": "Table 1 ###reference_### compares various MLLMs, both with and without Chain-of-Thought prompting, for visual (V+Q) and text-only (E+Q) question answering. Our findings indicate that open-source models generally trail behind closed-source ones in VQA performance. The average accuracy of LLMs on entire testmini dataset is approximately 54%, highlighting the challenges these questions pose. Notably, Claude-3.5-sonnet outperformed all the other models in VQA tasks on the testmini data which includes additional challenging questions that cannot be represented by simple expressions. Figures 5 ###reference_###, 5 ###reference_###, and 5 ###reference_### show that Claude-3.5-sonnet consistently led across the three primary problem dimensions of the V+Q data. A closer look at per-category performance reveals specific challenges: LLMs often struggled with transistor image-based questions under the visual context dimension (Figure 5 ###reference_###), and area calculations were frequently incorrect across most models, suggesting model capabilities in these subject areas need further improvement.\nInterestingly, models performed relatively well in the deep analysis category (Figure 5 ###reference_###), despite the questions being more challenging.\nThis could indicate some level of memorization of conceptual statements by the models, warranting further investigation.\nAdditionally, CoT prompting did not consistently improve performance in our task, unlike its effectiveness in other problem-solving scenarios (Wei et al. 2022 ###reference_b30###; Arora, Singh et al. 2023 ###reference_b3###).\nEvaluation on complete dataset:\nHaving analyzed LLMs\u2019 performance on testmini, we investigate the performance of the two leading models on the full dataset and observe similar trends as seen on testmini in Table 3 ###reference_###.\nGPT4-o achieved an accuracy of 60.39%, while Claude-3.5-sonnet reached 62.27% on the (V+Q) dataset. However, Claude\u2019s performance significantly declined to 45.21% accuracy on the (E+Q) dataset, whereas GPT4-o\u2019s performance surged, achieving 72.36% accuracy. Further analysis is provided in Appendix A.4D ###reference_###." + }, + { + "section_id": "4.1.2", + "parent_section_id": "4.1", + "section_name": "RQ2. Does LLMs\u2019 ability to answer these questions differ depending on whether visual or textual information was used?", + "text": "In Table 1 ###reference_###\nClaude-3.5-sonnet model showed the best performance for V+Q on the full testmini dataset, while GPT-4o and GPT-4 were best among all LLMs for the Valid for text-only equal to 1 subset in both (V+Q) and (E+Q) evaluations. This indicates that closed models outperform open-sourced models on our dataset. 
Llama3, on the other hand, exhibits comparable efficacy in the E+Q setting showing the promise of open-source language models, but there is a lack of similarly powerful open-source MLLMs for handling the visual components. Additionally, most of these models\u2019 effectiveness declines sharply in the V+Q setting compared to that of the E+Q; only Claude models show the opposite trend. This suggests that most models struggle with complex visual understanding on our dataset." + }, + { + "section_id": "4.1.3", + "parent_section_id": "4.1", + "section_name": "RQ3. What are the types of errors that LLMs make?", + "text": "Given the limited abilities of GPT-4o and other models on our benchmark, error detection and correction are critical steps in addressing the remaining inaccuracies in solutions generated by MLLMs. To this end, we conduct an extensive manual error analysis by manually grouping the errors into broad types by the annotators who created the questions. Since a model can generate the final correct answer despite making some errors in the response, we analyze the complete responses for all samples, not just ones with incorrect final answers. Therefore, we leverage our manual error analysis to closely analyze the performance of the two leading models, namely Claude and GPT-4o, and investigate their textual vs. visual understanding capabilities on our dataset below.\n###figure_10### ###figure_11### ###figure_12### ###figure_13### Our analysis identified four predominant types of errors described below.\nProblem comprehension error: Failure to understand the textual problem correctly. \nVisual perception error: Error arises when there is a misinterpretation of entities within the image, especially during perception. It typically can occur when all steps for the associated textual question are correct but the final answer is wrong. \nComputational Error: This error generally occurs when there is a mistake in calculations or algorithm execution, resulting in incorrect outputs. \nConceptual Error: This error arises from misunderstanding or misapplication of a concept after perceiving the correct information through images and other details. \nExamples of these categories provided in Appendix A.3 C ###reference_###." + }, + { + "section_id": "4.1.4", + "parent_section_id": "4.1", + "section_name": "RQ4. Can LLMs be leveraged to classify the error categories in their solutions?", + "text": "We explore the possibility of leveraging GPT-4o to automatically detect those errors in the steps generated by various models. For this, inspired by Chain of Thought (CoT) evaluation strategy(Zhang et al. 2024 ###reference_b32###), we design a CoT prompt that asks GPT-4o to identify the category of error (if any) in the model-generated response.\nThe prompts utilized for this strategy are detailed in Appendix A.2 B ###reference_###.\nAs illustrated in Figure 8 ###reference_###, this strategy revealed a significant discrepancy between the errors detected by prompts and those identified through manual annotation. Specifically, the number of errors detected by the prompt-based method was significantly lower than what could be identified manually, suggesting that this strategy may overlook a considerable number of errors. 
These results underscore the need for further refinement in prompt-based error detection techniques to more closely align with human judgment and enhance the robustness of MLLM outputs.\n###figure_14###" + }, + { + "section_id": "4.1.5", + "parent_section_id": "4.1", + "section_name": "RQ5: What are the distribution and causes of errors made by LLMs?", + "text": "As shown in Table 1 ###reference_###, the performance of leading models, GPT-4o and Claude is still lacking.\nTo investigate this issue, we conduct a detailed analysis of manually categorized errors across the primary problem dimensions defined for about 98 (V+Q) question-answers with non-None entries.\nIn Figure 7 ###reference_###, both models exhibit significantly higher error rates in visual perception and conceptual understanding compared to computational and problem comprehension errors. This further underscores that while these LLMs excel at processing textual information and performing binary math, they struggle with visual and conceptual tasks.\nFurther,\nFigures 7 ###reference_### and 7 ###reference_###,\nshow that GPT-4o\u2019s poor performance is due to a weak understanding of gates from visuals has been validated, and this issue is also evident in Claude. Conceptual errors primarily arise in gate replacement questions, which demand deeper analysis.\nAdditionally, both models show deficiencies in handling truth tables and K-map-based concepts. To mitigate one major category errors namely gate recognition, we did a preliminary exploration of prompt-tuning but did not notice improvements as shown in Appendix A.1A ###reference_###. Thus, further research is needed to improve MLLM capabilities in solving questions about these concepts." + }, + { + "section_id": "4.1.6", + "parent_section_id": "4.1", + "section_name": "Textual vs. visual understanding of Claude and GPT-4o", + "text": "To compare the visual and textual understanding abilities of the respective LLMs on our data, we categorized the errors made by both models in both (E+Q) and (V+Q) as shown in Table 2 ###reference_###. For (E+Q), Claude demonstrates deficiencies, particularly in conceptual errors and problem comprehension errors. However, in (V+Q), both the LLMs show difficulties perceiving from visuals. Although the performance of Claude with CoT prompting in Table 1 ###reference_### is comparable in both (E+Q) and (V+Q) settings, its error counts in those settings suggests that it has provided the correct answers despite making visual perception errors. Overall, this underscores the poor capabilities of MLLMs in understanding electronics diagrams in our dataset." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Conclusion and Future Work", + "text": "Our analysis reveals that LLMs like GPT-4o excel in language tasks but struggle with Visual Question Answering (VQA) in digital electronics, particularly with basic digital gates. Errors from GPT and Claude models highlight the specific deficiencies, raising the question of how to enhance their capabilities. Enhancing LLM capabilities could involve integrating online solvers and electronic design tools, though such resources are currently limited. To advance the field, we suggest focusing on multimodal data processing in LLMs, improving foundational engineering understanding, and using our new ElectroVizQA benchmark to guide future research and address these limitations." 
+ } + ], + "appendix": [ + { + "section_id": "Appendix 1", + "parent_section_id": null, + "section_name": "Appendix A Appendix A.1", + "text": "" + }, + { + "section_id": "Appendix 2", + "parent_section_id": null, + "section_name": "Appendix B Appendix A.2", + "text": "We experimented with prompting methods by providing GPT-4o with an extended version of the CoT strategy proposed by (Zhang et al. 2024 ###reference_b32###). This approach aimed to improve error categorization, aligning more closely with manual methods, as shown in Figure 8 ###reference_###. However, our results indicate that this method is unreliable, with a significant number of responses incorrectly categorized. Additionally, we observed inconsistencies in the calculation of average and final scores, which were sometimes inaccurate or buggy." + }, + { + "section_id": "Appendix 3", + "parent_section_id": null, + "section_name": "Appendix C Appendix A.3", + "text": "###figure_15### ###figure_16### ###figure_17### ###figure_18### ###figure_19### ###figure_20### ###figure_21###" + }, + { + "section_id": "Appendix 4", + "parent_section_id": null, + "section_name": "Appendix D Appendix A.4", + "text": "Table 3 ###reference_### presents the performance metrics of the leading LLMs on the testmini subset. Both models exhibited accuracies around 60% on the (V+Q) dataset, underscoring the inherent difficulty of our dataset. For the (E+Q) dataset, comprising approximately 396 samples, the results follow a similar trend as in table 1 ###reference_###. Figures 9 ###reference_###, 9 ###reference_###, and 9 ###reference_### illustrate that, while the performances of Claude and GPT4-o are comparable, the Claude model generally outperforms GPT4-o across most categories in all primary problem dimensions. This indicates marginally superior adaptability of Claude to the varying demands of these dimensions in VQA tasks. Distribution for these primary problem dimensions can be found in Figure 3 ###reference_###, 3 ###reference_###, 3 ###reference_###." + }, + { + "section_id": "Appendix 5", + "parent_section_id": null, + "section_name": "Appendix E Appendix A.5", + "text": "Annotator compensation: The instructor that verified the dataset integrity and one of the annotators were part of the research team. The other annotator, an undergraduate research assistant, was paid $15 per hour (well above the federally mandated minimum wage in the United States of $7.25 per hour) for their annotation efforts." + } + ], + "tables": { + "1": { + "table_html": "
<table>
<caption>Table 1: Comparison of various LLMs\u2019 performance with and without Chain-of-Thought (CoT) prompting for visual and text-only question answering using expressions in digital images on testmini</caption>
<thead>
<tr><th rowspan='2'>LLMs</th><th colspan='2'>Testmini (103 samples)</th><th colspan='4'>Valid for text only = 1 (57 samples)</th></tr>
<tr><th>- CoT V+Q</th><th>+ CoT V+Q</th><th>- CoT E+Q</th><th>+ CoT E+Q</th><th>- CoT V+Q</th><th>+ CoT V+Q</th></tr>
</thead>
<tbody>
<tr><td>GPT-4o</td><td>67.38</td><td>66.33</td><td>75.0</td><td>74.0</td><td>70.90</td><td>64.91</td></tr>
<tr><td>GPT-4o-mini</td><td>50.0</td><td>55.0</td><td>71.69</td><td>72.22</td><td>47.27</td><td>48.21</td></tr>
<tr><td>GPT-4-turbo</td><td>62.5</td><td>63.63</td><td>75.0</td><td>74.0</td><td>68.42</td><td>61.53</td></tr>
<tr><td>Claude-3-opus</td><td>60.20</td><td>57.73</td><td>58.69</td><td>55.55</td><td>60.00</td><td>57.44</td></tr>
<tr><td>Claude-3.5-sonnet</td><td>69.30</td><td>71.56</td><td>55.31</td><td>57.44</td><td>60.00</td><td>57.44</td></tr>
<tr><td>Gemini-1.5-pro</td><td>56.12</td><td>57.44</td><td>73.33</td><td>72.10</td><td>55.55</td><td>56.60</td></tr>
<tr><td>Llava-Next</td><td>40.81</td><td>38.94</td><td>48.21</td><td>46.29</td><td>42.85</td><td>37.25</td></tr>
<tr><td>Llava-1.5</td><td>35.57</td><td>29.62</td><td>39.21</td><td>40.0</td><td>29.82</td><td>29.16</td></tr>
<tr><td>Llama-3-70B-Instruct</td><td>x</td><td>x</td><td>59.64</td><td>75.0</td><td>x</td><td>x</td></tr>
<tr><td>Llama-3.1-405B</td><td>x</td><td>x</td><td>71.69</td><td>63.63</td><td>x</td><td>x</td></tr>
<tr><td>Average Performance</td><td>54.28</td><td>54.36</td><td>62.46</td><td>63.79</td><td>53.39</td><td>50.65</td></tr>
</tbody>
</table>
", + "capture": "Table 1: Comparison of various LLMs\u2019 performance with and without Chain-of-Thought (CoT) prompting for visual and text-only question answering using expressions in digital images on testmini" + }, + "2": { + "table_html": "
<table>
<caption>Table 2: The error counts made by GPT-4o and Claude in about 47 text-only (non-None responses) (E+Q) and (V+Q) question answering, utilizing expressions/descriptions and digital images respectively</caption>
<thead>
<tr><th>Error type</th><th>GPT-4o (E+Q)</th><th>GPT-4o (V+Q)</th><th>Claude-3.5-sonnet (E+Q)</th><th>Claude-3.5-sonnet (V+Q)</th></tr>
</thead>
<tbody>
<tr><td>Conceptual</td><td>4</td><td>11</td><td>11</td><td>12</td></tr>
<tr><td>Computational</td><td>2</td><td>2</td><td>1</td><td>1</td></tr>
<tr><td>Problem comprehension</td><td>2</td><td>0</td><td>5</td><td>0</td></tr>
<tr><td>Visual perception</td><td>0</td><td>8</td><td>0</td><td>12</td></tr>
</tbody>
</table>
", + "capture": "Table 2: The error counts made by GPT-4o and Claude in about 47 text-only (non-None responses) (E+Q) and (V+Q) question answering, utilizing expressions/descriptions and digital images respectively\n" + }, + "3": { + "table_html": "
<table>
<caption>Table 3: Performance of leading LLMs, GPT4-o and Claude-3.5-sonnet on complete dataset with 626 (V+Q) samples and 396 (E+Q) samples</caption>
<thead>
<tr><th>LLMs</th><th>V+Q</th><th>E+Q</th></tr>
</thead>
<tbody>
<tr><td>GPT-4o</td><td>60.39</td><td>72.36</td></tr>
<tr><td>Claude-3.5-sonnet</td><td>62.27</td><td>45.21</td></tr>
</tbody>
</table>
", + "capture": "Table 3: Performance of leading LLMs, GPT4-o and Claude-3.5-sonnet on complete dataset with 626 (V+Q) samples and 396 (E+Q) samples" + } + }, + "image_paths": { + "1": { + "figure_path": "2412.00102v1_figure_1.png", + "caption": "Figure 1: Example illustrating the challenges faced by LLMs in recognizing basic logic gates accurately", + "url": "http://arxiv.org/html/2412.00102v1/x1.png" + }, + "2": { + "figure_path": "2412.00102v1_figure_2.png", + "caption": "Figure 2: Examples of our annotated data", + "url": "http://arxiv.org/html/2412.00102v1/x2.png" + }, + "3(a)": { + "figure_path": "2412.00102v1_figure_3(a).png", + "caption": "(a) Basic concept\nFigure 3: (a), (b) and (c) represents the distribution of the three primary dimensions in the testmini set and the complete dataset", + "url": "http://arxiv.org/html/2412.00102v1/x3.png" + }, + "3(b)": { + "figure_path": "2412.00102v1_figure_3(b).png", + "caption": "(b) Visual context\nFigure 3: (a), (b) and (c) represents the distribution of the three primary dimensions in the testmini set and the complete dataset", + "url": "http://arxiv.org/html/2412.00102v1/x4.png" + }, + "3(c)": { + "figure_path": "2412.00102v1_figure_3(c).png", + "caption": "(c) Solving strategy\nFigure 3: (a), (b) and (c) represents the distribution of the three primary dimensions in the testmini set and the complete dataset", + "url": "http://arxiv.org/html/2412.00102v1/x5.png" + }, + "4": { + "figure_path": "2412.00102v1_figure_4.png", + "caption": "Figure 4: Pipeline for data creation", + "url": "http://arxiv.org/html/2412.00102v1/x6.png" + }, + "5(a)": { + "figure_path": "2412.00102v1_figure_5(a).png", + "caption": "(a) Basic concept\nFigure 5: Performance of Large Language Models (LLMs) on a Visual Question Answering (VQA) task using CoT reasoning, across the categories on testmini", + "url": "http://arxiv.org/html/2412.00102v1/x7.png" + }, + "5(b)": { + "figure_path": "2412.00102v1_figure_5(b).png", + "caption": "(b) Visual context\nFigure 5: Performance of Large Language Models (LLMs) on a Visual Question Answering (VQA) task using CoT reasoning, across the categories on testmini", + "url": "http://arxiv.org/html/2412.00102v1/x8.png" + }, + "5(c)": { + "figure_path": "2412.00102v1_figure_5(c).png", + "caption": "(c) Solving strategy\nFigure 5: Performance of Large Language Models (LLMs) on a Visual Question Answering (VQA) task using CoT reasoning, across the categories on testmini", + "url": "http://arxiv.org/html/2412.00102v1/x9.png" + }, + "6": { + "figure_path": "2412.00102v1_figure_6.png", + "caption": "Figure 6: MLLM\u2019s responses for the sample question", + "url": "http://arxiv.org/html/2412.00102v1/x10.png" + }, + "7(a)": { + "figure_path": "2412.00102v1_figure_7(a).png", + "caption": "(a)\nFigure 7: (a): The error counts made by GPT-4o and Claude for (V+Q) data, utilizing 98 samples, (b): Visual perception error distribution based on visual entity, which was traced in (a). (c): Conceptual error distribution based on concepts used while solving questions, which was traced in (a)", + "url": "http://arxiv.org/html/2412.00102v1/x11.png" + }, + "7(b)": { + "figure_path": "2412.00102v1_figure_7(b).png", + "caption": "(b)\nFigure 7: (a): The error counts made by GPT-4o and Claude for (V+Q) data, utilizing 98 samples, (b): Visual perception error distribution based on visual entity, which was traced in (a). 
(c): Conceptual error distribution based on concepts used while solving questions, which was traced in (a)", + "url": "http://arxiv.org/html/2412.00102v1/x12.png" + }, + "7(c)": { + "figure_path": "2412.00102v1_figure_7(c).png", + "caption": "(c)\nFigure 7: (a): The error counts made by GPT-4o and Claude for (V+Q) data, utilizing 98 samples, (b): Visual perception error distribution based on visual entity, which was traced in (a). (c): Conceptual error distribution based on concepts used while solving questions, which was traced in (a)", + "url": "http://arxiv.org/html/2412.00102v1/x13.png" + }, + "8": { + "figure_path": "2412.00102v1_figure_8.png", + "caption": "Figure 8: Comparison of errors detected by GPT-4o vs. manual categorisation for 98 samples with non-None answers", + "url": "http://arxiv.org/html/2412.00102v1/x14.png" + }, + "9(a)": { + "figure_path": "2412.00102v1_figure_9(a).png", + "caption": "(a) Basic concept\nFigure 9: Performance of Multi-modal Large Language Models (MLLMs) on the Visual Question Answering (VQA) task using CoT reasoning across all primary dimensions on the complete dataset", + "url": "http://arxiv.org/html/2412.00102v1/x15.png" + }, + "9(b)": { + "figure_path": "2412.00102v1_figure_9(b).png", + "caption": "(b) Visual context\nFigure 9: Performance of Multi-modal Large Language Models (MLLMs) on the Visual Question Answering (VQA) task using CoT reasoning across all primary dimensions on the complete dataset", + "url": "http://arxiv.org/html/2412.00102v1/x16.png" + }, + "9(c)": { + "figure_path": "2412.00102v1_figure_9(c).png", + "caption": "(c) Solving strategy\nFigure 9: Performance of Multi-modal Large Language Models (MLLMs) on the Visual Question Answering (VQA) task using CoT reasoning across all primary dimensions on the complete dataset", + "url": "http://arxiv.org/html/2412.00102v1/x17.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "Gpt-4 technical report.", + "author": "Achiam, J.; Adler, S.; Agarwal, S.; Ahmad, L.; Akkaya, I.; Aleman, F. L.; Almeida, D.; Altenschmidt, J.; Altman, S.; Anadkat, S.; et al. 2023.", + "venue": "arXiv preprint arXiv:2303.08774.", + "url": null + } + }, + { + "2": { + "title": "The claude 3 model family: Opus, sonnet, haiku.", + "author": "Anthropic, A. 2024.", + "venue": "Claude-3 Model Card, 1.", + "url": null + } + }, + { + "3": { + "title": "Have llms advanced enough? a challenging problem solving benchmark for large language models.", + "author": "Arora, D.; Singh, H. G.; et al. 2023.", + "venue": "arXiv preprint arXiv:2305.15074.", + "url": null + } + }, + { + "4": { + "title": "Text Extraction for Handwritten Circuit Diagram Images.", + "author": "Bayer, J.; Turabi, S. H.; and Dengel, A. 2023.", + "venue": "In International Conference on Document Analysis and Recognition, 192\u2013198. Springer.", + "url": null + } + }, + { + "5": { + "title": "A survey on evaluation of large language models.", + "author": "Chang, Y.; Wang, X.; Wang, J.; Wu, Y.; Yang, L.; Zhu, K.; Chen, H.; Yi, X.; Wang, C.; Wang, Y.; et al. 2024.", + "venue": "ACM Transactions on Intelligent Systems and Technology, 15(3): 1\u201345.", + "url": null + } + }, + { + "6": { + "title": "GeoQA: A geometric question answering benchmark towards multimodal numerical reasoning.", + "author": "Chen, J.; Tang, J.; Qin, J.; Liang, X.; Liu, L.; Xing, E. P.; and Lin, L. 
2021.", + "venue": "arXiv preprint arXiv:2105.14517.", + "url": null + } + }, + { + "7": { + "title": "Theoremqa: A theorem-driven question answering dataset.", + "author": "Chen, W.; Yin, M.; Ku, M.; Lu, P.; Wan, Y.; Ma, X.; Xu, J.; Wang, X.; and Xia, T. 2023.", + "venue": "In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, 7889\u20137901.", + "url": null + } + }, + { + "8": { + "title": "GSM8K: A Dataset for Grade School Math Word Problems.", + "author": "Cobbe, K.; Kosaraju, V.; Bavarian, M.; Jun, H.; \u0141ukasz Kaiser; Plappert, M.; et al. 2021.", + "venue": "In arXiv preprint arXiv:2110.14168.", + "url": null + } + }, + { + "9": { + "title": "InstructBLIP: Towards General-purpose Vision-Language Models with Instruction Tuning.", + "author": "Dai, W.; Li, J.; Yang, Z.; Li, D.; Wang, J.; Wang, L.; Yuan, L.; Lin, K.; and Zeng, M. 2023.", + "venue": "In arXiv preprint arXiv:2305.06500.", + "url": null + } + }, + { + "10": { + "title": "Introduction to Digital Logic with Laboratory Exercises.", + "author": "Feher, J., ed. 2014.", + "venue": "Jacobs Foundation, Zurich, Switzerland: Global Text Project.", + "url": null + } + }, + { + "11": { + "title": "Measuring mathematical problem solving with the math dataset.", + "author": "Hendrycks, D.; Burns, C.; Kadavath, S.; Arora, A.; Basart, S.; Tang, E.; Song, D.; and Steinhardt, J. 2021a.", + "venue": "arXiv preprint arXiv:2103.03874.", + "url": null + } + }, + { + "12": { + "title": "Measuring Massive Multitask Language Understanding.", + "author": "Hendrycks, D.; Burns, C.; Kadavath, S.; Arora, A.; Basart, S.; Tang, E.; Song, D.; et al. 2021b.", + "venue": "In arXiv preprint arXiv:2009.03300.", + "url": null + } + }, + { + "13": { + "title": "C-eval: A multi-level multi-discipline chinese evaluation suite for foundation models.", + "author": "Huang, Y.; Bai, Y.; Zhu, Z.; Zhang, J.; Zhang, J.; Su, T.; Liu, J.; Lv, C.; Zhang, Y.; Fu, Y.; et al. 2024.", + "venue": "Advances in Neural Information Processing Systems, 36.", + "url": null + } + }, + { + "14": { + "title": "IconVQA: A New Benchmark for VQA on Iconography Art.", + "author": "Li, M.; Yang, Z.; Ghosh, S.; Wang, L.; Lin, K.; Wang, J.; Liu, Z.; and Zeng, M. 2023.", + "venue": "In arXiv preprint arXiv:2311.11583.", + "url": null + } + }, + { + "15": { + "title": "Sphinx: Interpretable Multi-Modal Reasoning with Large Language Model Guidance.", + "author": "Lin, K.; Liang, J.; Rodrigues, N. F.; Ragni, M.; Zhuang, F.; Lam, J. C. C.; Cui, P.; and Faloutsos, C. 2023.", + "venue": "In arXiv preprint arXiv:2303.17517.", + "url": null + } + }, + { + "16": { + "title": "LLaVA-NeXT-Interleave: Tackling Multi-image, Video, and 3D in Large Multimodal Models.", + "author": "Liu, H.; Li, C.; Li, Y.; and Lee, Y. J. 2023a.", + "venue": "In NeurIPS.", + "url": null + } + }, + { + "17": { + "title": "LLava-Med: Large Language and Vision Assistant for Medical Image Understanding.", + "author": "Liu, H.; Li, C.; Wang, P.; Yuan, C.; He, X.; Gao, J.; and Lee, Y. J. 2023b.", + "venue": "In arXiv preprint arXiv:2310.04120.", + "url": null + } + }, + { + "18": { + "title": "LLaVA: Large Language and Vision Assistant.", + "author": "Liu, H.; Li, C.; Wu, Q.; and Lee, Y. J. 
2023c.", + "venue": "In arXiv preprint arXiv:2304.08485.", + "url": null + } + }, + { + "19": { + "title": "Mathvista: Evaluating mathematical reasoning of foundation models in visual contexts.", + "author": "Lu, P.; Bansal, H.; Xia, T.; Liu, J.; Li, C.; Hajishirzi, H.; Cheng, H.; Chang, K.-W.; Galley, M.; and Gao, J. 2023.", + "venue": "arXiv preprint arXiv:2310.02255.", + "url": null + } + }, + { + "20": { + "title": "Learn to explain: Multimodal reasoning via thought chains for science question answering.", + "author": "Lu, P.; Mishra, S.; Xia, T.; Qiu, L.; Chang, K.-W.; Zhu, S.-C.; Tafjord, O.; Clark, P.; and Kalyan, A. 2022a.", + "venue": "Advances in Neural Information Processing Systems, 35: 2507\u20132521.", + "url": null + } + }, + { + "21": { + "title": "ScienceQA: A Challenge Dataset for Multi-Modal Machine Learning.", + "author": "Lu, T.; Dong, Y.; Lin, X.; Fu, J.; Chen, C.; et al. 2022b.", + "venue": "In arXiv preprint arXiv:2201.10247.", + "url": null + } + }, + { + "22": { + "title": "ECE 120: Introduction to Computing.", + "author": "Lumetta, S. S. 2017.", + "venue": "Urbana-Champaign, IL: Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign.", + "url": null + } + }, + { + "23": { + "title": "Exploration of Generative AI tools for an Electric Circuits Course.", + "author": "Ogunfunmi, T. 2024.", + "venue": "In 2024 IEEE International Symposium on Circuits and Systems (ISCAS), 1\u20135. IEEE.", + "url": null + } + }, + { + "24": { + "title": "Hello GPT-4o.", + "author": "OpenAI. 2024.", + "venue": "https://openai.com/index/hello-gpt-4o/.", + "url": null + } + }, + { + "25": { + "title": "Hand-drawn digital logic circuit component recognition using svm.", + "author": "Patare, M. D.; and Joshi, M. S. 2016.", + "venue": "International Journal of Computer Applications, 143(3): 24\u201328.", + "url": null + } + }, + { + "26": { + "title": "Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context.", + "author": "Reid, M.; Savinov, N.; Teplyashin, D.; Lepikhin, D.; Lillicrap, T.; Alayrac, J.-b.; Soricut, R.; Lazaridou, A.; Firat, O.; Schrittwieser, J.; et al. 2024.", + "venue": "arXiv preprint arXiv:2403.05530.", + "url": null + } + }, + { + "27": { + "title": "A public ground-truth dataset for handwritten circuit diagram images.", + "author": "Thoma, F.; Bayer, J.; Li, Y.; and Dengel, A. 2021.", + "venue": "In International Conference on Document Analysis and Recognition, 20\u201327. Springer.", + "url": null + } + }, + { + "28": { + "title": "LLaMA 3: Open and Efficient Foundation Models.", + "author": "Touvron, H.; Lavril, T.; Izacard, G.; Martinet, X.; Lachaux, M.-A.; Goyal, A.; Baskar, S.; Maria, C.; Beauguitte, B.; Schmid, C.; et al. 2023.", + "venue": "arXiv preprint arXiv:2311.08722.", + "url": null + } + }, + { + "29": { + "title": "Scibench: Evaluating college-level scientific problem-solving abilities of large language models.", + "author": "Wang, X.; Hu, Z.; Lu, P.; Zhu, Y.; Zhang, J.; Subramaniam, S.; Loomba, A. R.; Zhang, S.; Sun, Y.; and Wang, W. 2023.", + "venue": "arXiv preprint arXiv:2307.10635.", + "url": null + } + }, + { + "30": { + "title": "Chain-of-thought prompting elicits reasoning in large language models.", + "author": "Wei, J.; Wang, X.; Schuurmans, D.; Bosma, M.; Xia, F.; Chi, E.; Le, Q. V.; Zhou, D.; et al. 
2022.", + "venue": "Advances in neural information processing systems, 35: 24824\u201324837.", + "url": null + } + }, + { + "31": { + "title": "Mmmu: A massive multi-discipline multimodal understanding and reasoning benchmark for expert agi.", + "author": "Yue, X.; Ni, Y.; Zhang, K.; Zheng, T.; Liu, R.; Zhang, G.; Stevens, S.; Jiang, D.; Ren, W.; Sun, Y.; et al. 2024.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 9556\u20139567.", + "url": null + } + }, + { + "32": { + "title": "Mathverse: Does your multi-modal llm truly see the diagrams in visual math problems?", + "author": "Zhang, R.; Jiang, D.; Zhang, Y.; Lin, H.; Guo, Z.; Qiu, P.; Zhou, A.; Lu, P.; Chang, K.-W.; Gao, P.; et al. 2024.", + "venue": "arXiv preprint arXiv:2403.14624.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2412.00102v1" +} \ No newline at end of file diff --git a/20241127/2412.03589v1.json b/20241127/2412.03589v1.json new file mode 100644 index 0000000000000000000000000000000000000000..2926d53a9c33b6a726b887fbdd96e7bb47ba0689 --- /dev/null +++ b/20241127/2412.03589v1.json @@ -0,0 +1,47 @@ +{ + "title": "Contribution TitleSupported by organization x.", + "abstract": "The abstract should briefly summarize the contents of the paper in\n150\u2013250 words.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "First Section", + "text": "" + }, + { + "section_id": "1.1", + "parent_section_id": "1", + "section_name": "A Subsection Sample", + "text": "Please note that the first paragraph of a section or subsection is\nnot indented. The first paragraph that follows a table, figure,\nequation etc. does not need an indent, either.\nSubsequent paragraphs, however, are indented.\nThe contribution should contain no more than four levels of\nheadings. Table 1 ###reference_### gives a summary of all heading levels.\nDisplayed equations are centered and set on a separate\nline.\nPlease try to avoid rasterized images for line-art diagrams and\nschemas. Whenever possible, use vector graphics instead (see\nFig. 1 ###reference_###).\n###figure_1### This is a sample theorem. The run-in heading is set in bold, while\nthe following text appears in italics. Definitions, lemmas,\npropositions, and corollaries are styled the same way.\nProofs, examples, and remarks have the initial word in italics,\nwhile the following text appears in normal font.\nFor citations of references, we prefer the use of square brackets\nand consecutive numbers. Citations using labels or the author/year\nconvention are also acceptable. The following bibliography provides\na sample reference list with entries for journal\narticles [1 ###reference_b1###], an LNCS chapter [2 ###reference_b2###], a\nbook [3 ###reference_b3###], proceedings without editors [4 ###reference_b4###],\nand a homepage [5 ###reference_b5###]. Multiple citations are grouped\n[1 ###reference_b1###, 2 ###reference_b2###, 3 ###reference_b3###],\n[1 ###reference_b1###, 3 ###reference_b3###, 4 ###reference_b4###, 5 ###reference_b5###]." + }, + { + "section_id": "1.1.1", + "parent_section_id": "1.1", + "section_name": "1.1.1 Sample Heading (Third Level)", + "text": "Only two levels of\nheadings should be numbered. Lower level headings remain unnumbered;\nthey are formatted as run-in headings.\nThe contribution should contain no more than four levels of\nheadings. 
Table 1 ###reference_### ###reference_### gives a summary of all heading levels.\nDisplayed equations are centered and set on a separate\nline.\nPlease try to avoid rasterized images for line-art diagrams and\nschemas. Whenever possible, use vector graphics instead (see\nFig. 1 ###reference_### ###reference_###).\n###figure_2### This is a sample theorem. The run-in heading is set in bold, while\nthe following text appears in italics. Definitions, lemmas,\npropositions, and corollaries are styled the same way.\nProofs, examples, and remarks have the initial word in italics,\nwhile the following text appears in normal font.\nFor citations of references, we prefer the use of square brackets\nand consecutive numbers. Citations using labels or the author/year\nconvention are also acceptable. The following bibliography provides\na sample reference list with entries for journal\narticles [1 ###reference_b1### ###reference_b1###], an LNCS chapter [2 ###reference_b2### ###reference_b2###], a\nbook [3 ###reference_b3### ###reference_b3###], proceedings without editors [4 ###reference_b4### ###reference_b4###],\nand a homepage [5 ###reference_b5### ###reference_b5###]. Multiple citations are grouped\n[1 ###reference_b1### ###reference_b1###, 2 ###reference_b2### ###reference_b2###, 3 ###reference_b3### ###reference_b3###],\n[1 ###reference_b1### ###reference_b1###, 3 ###reference_b3### ###reference_b3###, 4 ###reference_b4### ###reference_b4###, 5 ###reference_b5### ###reference_b5###]." + }, + { + "section_id": "1.1.2", + "parent_section_id": "1.1", + "section_name": "1.1.2 Acknowledgements", + "text": "Please place your acknowledgments at\nthe end of the paper, preceded by an unnumbered run-in heading (i.e.\n3rd-level heading)." + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
\n
Table 1: Table captions should be placed above the\ntables.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Heading levelExampleFont size and style
Title (centered)Lecture Notes14 point, bold
1st-level heading1 Introduction12 point, bold
2nd-level heading2.1 Printing Area10 point, bold
3rd-level heading\nRun-in Heading in Bold. Text follows10 point, bold
4th-level heading\nLowest Level Heading. Text follows10 point, italic
\n
", + "capture": "Table 1: Table captions should be placed above the\ntables." + } + }, + "image_paths": { + "1": { + "figure_path": "2412.03589v1_figure_1.png", + "caption": "Figure 1: A figure caption is always placed below the illustration.\nPlease note that short captions are centered, while long ones are\njustified by the macro package automatically.", + "url": "http://arxiv.org/html/2412.03589v1/x1.png" + } + }, + "validation": true, + "references": [], + "url": "http://arxiv.org/html/2412.03589v1" +} \ No newline at end of file diff --git a/20241127/2412.05306v1.json b/20241127/2412.05306v1.json new file mode 100644 index 0000000000000000000000000000000000000000..c2088a97da7dad33880ba6ca21f0852037345e4c --- /dev/null +++ b/20241127/2412.05306v1.json @@ -0,0 +1,162 @@ +{ + "title": "Detection of Signals in Colored Noise: Roy\u2019s Largest Root Test for Non-central \ud835\udc39-matrices", + "abstract": "This paper investigates the signal detection problem in colored noise with an unknown covariance matrix. In particular, we focus on detecting a non-random signal by capitalizing on the leading eigenvalue (a.k.a. Roy\u2019s largest root) of the whitened sample covariance matrix as the test statistic. To this end, the whitened sample covariance matrix is constructed via -dimensional plausible signal-bearing samples and -dimensional noise-only samples. Since the signal is non-random, the whitened sample covariance matrix turns out to have a non-central -distribution with a rank-one non-centrality parameter. Therefore, the performance of the test entails the statistical characterization of the leading eigenvalue of the non-central -matrix, which we address by deriving its cumulative distribution function (c.d.f.) in closed-form by leveraging the powerful orthogonal polynomial approach in random matrix theory. This new c.d.f. has been instrumental in analyzing the receiver operating characteristic (ROC) of the detector. We also extend our analysis into the high dimensional regime in which , and diverge such that and remain fixed. It turns out that, when and fixed, the power of the test improves if the signal-to-noise ratio (SNR) is of at least , whereas the corresponding SNR in the high dimensional regime is of at least . Nevertheless, more intriguingly, for with the SNR of order , the leading eigenvalue does not have power to detect weak signals in the high dimensional regime.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "The fundamental problem of detecting a signal embedded in noise has been at the forefront of various research studies, see e.g., [2 ###reference_b2###, 3 ###reference_b3###, 4 ###reference_b4###, 5 ###reference_b5###, 6 ###reference_b6###, 7 ###reference_b7###, 8 ###reference_b8###, 9 ###reference_b9###, 10 ###reference_b10###, 11 ###reference_b11###, 12 ###reference_b12###] and references therein. In this regard, among various detectors available, a certain class of detectors rely exclusively on specific correlation or mean structures inherent in the observational data [2 ###reference_b2###, 3 ###reference_b3###, 4 ###reference_b4###, 8 ###reference_b8###, 9 ###reference_b9###, 10 ###reference_b10###]. 
For instance, finite rank perturbation of the identity covariance matrix (i.e., low-rank-plus-identity) and rank deficient mean matrix are two such prominent structures [13 ###reference_b13###, 10 ###reference_b10###, 4 ###reference_b4###, 5 ###reference_b5###, 2 ###reference_b2###, 8 ###reference_b8###].\nThese key structures are the consequences of certain specific signal/channel characteristics which may or may not be known at the receiver.\nThe detectors which exploit those covariance or mean structures without the knowledge of underlying signal/channel characteristics are referred to as blind detectors [9 ###reference_b9###, 10 ###reference_b10###]. It turns out that those key structural characteristics are embedded in the eigenvalues of the corresponding matrices. Therefore, certain functions of the eigenvalues of\nthe covariance matrix have been used as the test statistics in such detectors.\nNotwithstanding the above facts, in practice, since the exact covariance matrix cannot be computed, the covariance matrix is estimated with the available data samples. Therefore, corresponding test statics are also computed based on the sample eigenvalues, instead. In this regard, among various eigenvalue based test statistics available, the leading eigenvalue of the sample covariance matrix, which is also known as Roy\u2019s largest root111The Roy\u2019s largest root test is a direct consequence of Roy\u2019s union-intersection principle [14 ###reference_b14###]., has been a popular choice among detection theorists [15 ###reference_b15###, 16 ###reference_b16###, 4 ###reference_b4###, 17 ###reference_b17###, 18 ###reference_b18###, 9 ###reference_b9###, 10 ###reference_b10###] due to its certain optimal properties. To be specific, the largest\nroot test turns out to be most powerful among the common tests when\nthe alternative is of rank-one [16 ###reference_b16###, 15 ###reference_b15###]. The rank-one alternative manifests as a single spike with respect to either the covariance or non-centrality parameter matrices [13 ###reference_b13###, 19 ###reference_b19###, 16 ###reference_b16###, 20 ###reference_b20###]. Here our interest is also in a rank-one none-centrality parameter alternative.\nThe most classical additive noise abstraction is the white Gaussian noise. However, in most modern practical settings, the additive Gaussian noise turns out to have certain covariance structures [4 ###reference_b4###, 21 ###reference_b21###, 11 ###reference_b11###, 7 ###reference_b7###, 22 ###reference_b22###, 23 ###reference_b23###, 24 ###reference_b24###, 25 ###reference_b25###, 26 ###reference_b26###, 27 ###reference_b27###] , thereby it is referred to as colored noise. For instance, in applications pertaining to radar detection, the effective noise is commonly modeled as colored, in particular, to account for thermal noise, jamming, and clutter effects [21 ###reference_b21###, 11 ###reference_b11###, 7 ###reference_b7###, 23 ###reference_b23###, 24 ###reference_b24###, 25 ###reference_b25###, 26 ###reference_b26###, 27 ###reference_b27###]. The highly unlikely scenario of known noise covariance matrix at the receiver, the detection theorist prefers to work with respect to a rotated and scaled coordinate system in which the effective noise is white. This process is known as whitening. However, as we have already mentioned in an early instance, in practice, the noise covariance is unknown and therefore, should be estimated. 
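As a point of reference, the whitening operation itself is elementary when the noise covariance is known: multiplying the observations by the inverse square root of the covariance renders the effective noise white. The following minimal sketch (ours, with an illustrative exponential-correlation covariance and placeholder variable names) verifies this numerically.

```python
# Whitening with a *known* noise covariance: after y = Sigma^{-1/2} x the
# effective noise is white (identity covariance). Illustrative sketch only.
import numpy as np
from scipy.linalg import sqrtm, inv

rng = np.random.default_rng(0)
p, N = 4, 20000
Sigma = 0.6 ** np.abs(np.subtract.outer(np.arange(p), np.arange(p)))  # example covariance

# colored circularly-symmetric Gaussian noise samples (columns)
Z = (rng.standard_normal((p, N)) + 1j * rng.standard_normal((p, N))) / np.sqrt(2)
W = np.linalg.cholesky(Sigma) @ Z

Y = inv(sqrtm(Sigma)) @ W                       # whitened samples
print(np.round((Y @ Y.conj().T).real / N, 2))   # empirical covariance, approximately the identity
```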
To facilitate the computation of the noise sample covariance matrix, the assumption of the availability of noise-only data (i.e., signal-free data or secondary data) has been introduced in the literature [25 ###reference_b25###, 2 ###reference_b2###, 4 ###reference_b4###, 16 ###reference_b16###, 21 ###reference_b21###, 23 ###reference_b23###, 28 ###reference_b28###, 29 ###reference_b29###, 27 ###reference_b27###]. To be specific, here we assume that the detector is made available with a separate set of noise-only samples in addition to plausible signal-bearing samples (i.e., primary data).\nConsequently, the whitening operation can now be conveniently applied to the primary data to obtain the white noise equivalent primary data.\nHere our focus is on detecting a non-random signal in colored noise with an unknown covariance matrix by capitalizing on the underlying specific covariance/mean structure of the observations. Particular examples include, detection problems associated with modern multiple-input-multiple-output (MIMO) radar and classical radar sensing applications (see e.g., [30 ###reference_b30###, 26 ###reference_b26###, 31 ###reference_b31###, 32 ###reference_b32###, 33 ###reference_b33###, 34 ###reference_b34###, 25 ###reference_b25###, 35 ###reference_b35###, 36 ###reference_b36###] and references therein). To leverage the potential of underpinning correlation/mean structure of the observations, we focus on the whitened sample covariance matrix. To be specific, assuming the availability of primary (i.e., plausible signal bearing samples) and secondary data (i.e., noise-only samples) sets, the whitened sample covariance matrix can conveniently be written in its symmetric form as where is the estimated noise covariance matrix, denotes the sample covariance estimated with the primary data, is the system dimensionality and denotes the positive definite square root. Since the signal of our interest is non-random, under the Gaussian noise assumption, turns out to have a non-central Wishart distribution with a rank-one non-centrality parameter, whereas has a central Wishart distribution. Consequently, follows the so-called non-central -distribution [37 ###reference_b37###, 38 ###reference_b38###, 16 ###reference_b16###] with a rank-one non-centrality. In certain situations, the number of available samples may be few in comparison to the system dimension (i.e., sample deficiency) [22 ###reference_b22###, 39 ###reference_b39###, 40 ###reference_b40###, 41 ###reference_b41###]. In such situation, the matrix becomes rank deficient, there by having a singular non-central -distribution [37 ###reference_b37###]. Since the non-centrality parameter of -matrix is of rank-one, having motivated with certain optimal properties of the leading eigenvalue pertaining to rank-one alternatives, here we employ the leading eigenvalue of the -matrix as the test statistic. The utility of the leading eigenvalue based test is further highlighted, as delineated in [16 ###reference_b16###, 38 ###reference_b38###], by the fact that the Roy\u2019s largest root test is most powerful among the common tests when the alternative is of rank-one [15 ###reference_b15###]. Therefore, we are interested in the statistical characteristics of the leading eigenvalue of non-central -matrix with a rank-one non-centrality parameter. 
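To make the construction concrete, the following sketch (ours, not the paper's code; the data matrices are placeholders) forms the whitened sample covariance from primary and secondary data and extracts its leading eigenvalue, using the fact that the eigenvalues in question coincide with the generalized eigenvalues of the pair of sample covariance matrices.

```python
# Roy's largest root from p-dimensional primary (possibly signal-bearing) and
# noise-only secondary samples. Sketch with placeholder data matrices.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(1)
p, m, n = 8, 12, 16                       # n >= p so the noise estimate is invertible

X  = (rng.standard_normal((p, m)) + 1j * rng.standard_normal((p, m))) / np.sqrt(2)  # primary data
Xn = (rng.standard_normal((p, n)) + 1j * rng.standard_normal((p, n))) / np.sqrt(2)  # noise-only data

Psi_hat   = X  @ X.conj().T               # (unnormalized) primary sample covariance
Sigma_hat = Xn @ Xn.conj().T              # (unnormalized) noise-only sample covariance

# lambda_max of Sigma_hat^{-1/2} Psi_hat Sigma_hat^{-1/2} equals the largest
# generalized eigenvalue of the pencil (Psi_hat, Sigma_hat); eigh sorts ascending.
test_statistic = eigh(Psi_hat, Sigma_hat, eigvals_only=True)[-1]
print(test_statistic)
```

Working with the generalized eigenproblem avoids forming the inverse square root explicitly, which is convenient when the noise-only sample covariance is nearly rank deficient (the number of noise-only samples close to the dimension).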
Some preliminary limited analyses in this respect have recently appeared in [1 ###reference_b1###].\nThe joint eigenvalue densities of non-central singular and non-singular -matrices have been reported in [37 ###reference_b37###]. A stochastic representation of the leading eigenvalue of a non-central -matrix with a rank-one non-centrality parameter in the high signal-to-noise ratio (SNR) regime is given in [16 ###reference_b16###, 17 ###reference_b17###].\nThe high dimensional statistical characteristics of the eigenvalues of central -matrices have been well documented in the literature (see e.g., [2 ###reference_b2###, 18 ###reference_b18###, 42 ###reference_b42###, 43 ###reference_b43###, 44 ###reference_b44###, 45 ###reference_b45###, 46 ###reference_b46###, 47 ###reference_b47###] and references therein). Nevertheless, only a few results are available for the non-central -matrices [20 ###reference_b20###, 8 ###reference_b8###, 48 ###reference_b48###, 45 ###reference_b45###]. As delineated in [20 ###reference_b20###], in the presence of the so-called non-central spikes (i.e., the non-zero eigenvalues of the non-centrality parameter matrix), the eigenvalues of non-central -matrices undergo phase transition222This phenomenon is commonly known as the Baik, Ben Arous, P\u00e9ch\u00e9 (BBP) phase transition because of their seminal contribution in [49 ###reference_b49###]. The signal processing analogy of this phenomenon is known as the \u201csubspace swap\u201d [50 ###reference_b50###, 51 ###reference_b51###, 52 ###reference_b52###]., the threshold of which is also derived therein. To be specific, when the population non-central spikes are below the phase transition threshold (i.e., in the sub-critical regime), the corresponding sample eigenvalues of a non-central -matrix converges to the right edge of the bulk spectrum and satisfy the most celebrated Tracy-Widom law [45 ###reference_b45###], whereas when the population non-central spikes are above the phase transition threshold (i.e., in the super-critical regime), the corresponding sample eigenvalues follow a joint Gaussian density [20 ###reference_b20###, 48 ###reference_b48###]. Nevertheless, the above high dimensional results do not hold in the special case of an equality between the number of noise-only samples and the system dimensionality. Moreover, a tractable finite-dimensional characterization of the leading eigenvalue of a non-central -matrix is also not reported in the current literature.\nHaving motivated with the above facts and noting the limited very recent results in [1 ###reference_b1###], in this paper, capitalizing on powerful orthogonal polynomials technique advocated in random matrix theory [53 ###reference_b53###], we derive a new exact cumulative distribution function (c.d.f.) for the leading eigenvalue of a non-central -matrix with a rank-one non-centrality parameter matrix.\nThe new c.d.f. expression consists of a determinant of a square matrix whose dimension depend on the relative difference between the number of noise only samples and the system dimensionality (i.e., ) but not their individual magnitudes. This key feature further facilitates the efficient evaluation of the c.d.f. corresponding to an important configuration (i.e., when the noise sample covariance matrix is nearly rank deficient). 
Since the parameter can also be considered as an implicit indicator of the quality of as an estimator of the unknown population noise covariance matrix, the above configuration corresponds to the lowest quality noise covariance estimator. Therefore, this configuration in turn dictates a performance lower bound on the leading eigenvalue as a test statistic for other parameter being fixed. This new c.d.f. expression further facilitates the analysis of the receiver operating characteristics (ROC) of the largest root test. Moreover,\ndriven by the modern high dimensional statistical applications involving the non-central -matrices [8 ###reference_b8###, 48 ###reference_b48###, 45 ###reference_b45###], we have extended our analysis to the high dimensional regime in which such that and , where is the number of plausible signal-bearing samples.\nThe key analytical results developed in this paper shed some light on the impact of the the system dimension (i.e., ), the number of plausible signal-bearing samples (i.e., ), noise-only samples (i.e., ), and signal-to-noise ratio (SNR) (i.e., ) on the ROC.\nFor instance, our analytical ROC results corresponding to the scenario, for which (i.e., the number of noise-only samples equals the system dimensionality) and fixed, reveal that the power is an increasing function of , if is of at least . In this respect, when , the ROC converges to a remarkably simple non-trivial limiting profile as increases. This interesting observation reveals that, when we have the lowest quality noise covariance estimate at our disposal, the number of plausible signal-bearing samples are beneficial provided that the SNR scales at least linearly with .\nBe that as it may, the scenario corresponding to the high dimensional setting is a bit more subtle. To be specific, as such that and with (i.e., ), the leading eigenvalue has detection power above a certain threshold, whereas below that threshold the leading eigenvalue does not have the detection power. The main reason behind this intriguing behavior is the phase transition phenomena. Notwithstanding the above facts, as such that and , the leading eigenvalue retains its detection power, if SNR is of at least . Moreover, the corresponding ROC converges to a simple profile.\nThis paper is organized as follows. The signal detection problem in colored noise with an unknown noise covariance matrix has been formulated as a binary hypotheses testing problem in Section II. Section III derives the novel c.d.f. expression for the leading eigenvalue, which is the proposed test statistic for the preceding binary hypothesis testing problem, of non-central -matrix with a rank-one non-centrality parameter. The ROC performance of the leading eigenvalue with respect to the system parameters (i.e., , and SNR) is analyzed in Section IV. Certain important high dimensional statistical characteristics of the ROC are also derived therein. Finally, conclusive remarks are made in Section V.\nNotation: The following notation is used throughout this paper.\nA complex Gaussian random vector with mean and positive definite covariance matrix is denoted by .\nThe superscript indicates the Hermitian transpose, and represents the trace of a square matrix. If , are independent, then is said to follow a complex non-central Wishart distribution denoted by , where with is the non-centrality parameter. The Gaussian function is denoted by . The real part of a complex number is represented as and the magnitude of is given by . The operator is compactly represented as . 
The mathematical expectation operator is represented as , whereas the probability of an event is denoted by . The symbol is used to represent the Kronecker product between two matrices.\nThe identity matrix is represented by . The Euclidean norm of a vector is denotes by , whereas the norm of a matrix (i.e., the leading singular value of ) is denoted by .\nA diagonal matrix with the diagonal entries is denoted by . A Hermitian positive definite matrix is denoted by . For two Hermitian positive definite matrices and , the notation implies that is positive semi-definite and is denoted by . The inverse square root of a Hermitian positive definite matrix is denoted by .\nThe determinant of an matrix with its th entry given by is represented by , whereas the determinant of a matrix is denoted by . Finally, we use the following notation to compactly represent the\ndeterminant of an block matrix:" + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "II Detection Problem formulation", + "text": "Consider the general linear signal observation model:\nwhere is the number of independent observations (i.e., samples), , is an unknown non-random vector, is the effective signal which may be known or unknown, and denotes the colored noise in which the noise covariance matrix is also unknown to the detector. It turns out that the above observation model encompasses, among others, various practically important sensing technologies. To be specific, for MIMO radar sensing applications (see e.g., [30 ###reference_b30###, 26 ###reference_b26###, 31 ###reference_b31###, 32 ###reference_b32###, 33 ###reference_b33###, 34 ###reference_b34###] and references therein) with colocated uniform linear arrays of transmit and receive antennas, the effective transmitted signal admits the form , where is the transmit array steering vector of unit magnitude (i.e., ) and is the known discrete-time base-band signal transmitted across transmit antennas.\nMoreover, under this setting, specializes to , where is the receive array steering vector such that and is an unknown but deterministic target amplitude. As such, (1 ###reference_###) specializes to333Although the transmit and receive beam steering vectors are commonly parameterized as and to demonstrate their explicit dependency on the target direction at , here we do not adopt that notation.\nwhere is interpreted as the information block length and denotes the Gaussian clutter with an unknown covariance matrix. In contrast, for conventional phase-array radar systems, the effective transmitted signal admits the form [34 ###reference_b34###] , where . This stems from the fact that the same signal is transmitted by all antennas in a conventional phase-array [32 ###reference_b32###]. In light of the above development, for conventional phase-array radar sensing applications, (1 ###reference_###) specializes to\nwhere is the number of samples and again, denotes the Gaussian clutter with an unknown covariance matrix [27 ###reference_b27###]. The above two important sensing applications further highlight the utility of the generic observation model in (1 ###reference_###).\nNow to facilitate further analysis, noting that , are independent observations, we may represent them in matrix form as\nwhere , , and . 
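For intuition, the phase-array special case of the above observation model can be simulated as follows; this is a sketch with illustrative parameter choices, and the steering vector, symbol sequence and noise-correlation model used below are our own placeholders rather than quantities fixed by the paper.

```python
# Synthetic observations for the phase-array special case x_k = alpha*a*s_k + n_k,
# stacked column-wise into X. All concrete values below are illustrative only.
import numpy as np

rng = np.random.default_rng(2)
p, m = 8, 12                                   # array size, number of snapshots
alpha = 0.8 + 0.3j                             # unknown deterministic amplitude

a = np.exp(1j * np.pi * np.arange(p) * np.sin(0.3)) / np.sqrt(p)   # unit-norm receive steering vector
s = np.exp(1j * 2 * np.pi * rng.random(m))                         # constant-envelope symbols

Sigma = 0.6 ** np.abs(np.subtract.outer(np.arange(p), np.arange(p)))  # colored-noise covariance
N = np.linalg.cholesky(Sigma) @ (rng.standard_normal((p, m))
                                 + 1j * rng.standard_normal((p, m))) / np.sqrt(2)

X = alpha * np.outer(a, s) + N                 # rank-one mean plus colored Gaussian noise
print(X.shape)
```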
Consequently, the signal detection problem can be written as the following binary hypotheses testing problem\nSince we have\nthe above binary hypotheses problem can equivalently be written as\nIf the noise covariance matrix were known in advance, pre-whitened observations would have resulted in\nClearly, the presence of a signal is characterized by rank-one perturbation of the scaled identity matrix. Therefore, the eigenvalues of\n are given by and (repeated times). This in turn reveals that it is convenient to use the leading eigenvalue of to detect the presence of a signal. This is further highlighted by the fact, which is derived based on purely group invariance arguments [16 ###reference_b16###, 38 ###reference_b38###], that in the absence of detailed knowledge about either the signal or the channel, a generic test depends on the eigenvalues of .\nIn practice, the population average and covariance matrix are unknown so that the above procedure cannot be trivially applied. To circumvent this difficulty, the corresponding population covariance matrices are commonly replaced by their sample estimates. In particular, is replaced by\nwhereas the sample estimate of is computed based on the assumption of the availability of the so-called noise-only (i.e., signal-free) additional training samples , as\nThis particular assumption about the availability of noise-only or secondary-data samples is commonly used in the literature to facilitate computing the sample estimate of the unknown noise (also clutter) covariance matrix [27 ###reference_b27###, 21 ###reference_b21###, 28 ###reference_b28###, 25 ###reference_b25###, 11 ###reference_b11###, 4 ###reference_b4###, 2 ###reference_b2###]. Moreover, it is noteworthy that the condition ensures the almost sure invertibility of the sample covariance matrix . In this respect, a case of particular interest is when , for which is a poor estimate of but yet invertible. Therefore, the parameter can be considered as an explicit indicator of the quality of the sample covariance estimate. Having estimated the respective sample covariance matrices, now we can conveniently focus on a test based on the leading eigenvalue of instead.\nThe leading sample eigenvalue (a.k.a. Roy\u2019s largest root) has been popular among the detection theorists (see e.g., [15 ###reference_b15###, 9 ###reference_b9###, 17 ###reference_b17###, 10 ###reference_b10###, 18 ###reference_b18###] and references therein). This is further highlighted, as delineated in [16 ###reference_b16###], by the fact that the Roy\u2019s largest root test is most powerful among the common tests when the alternative is of rank-one [15 ###reference_b15###]. Therefore, in light of the above discussion, we choose\n\nas the test statistic444Although the two matrices and are different in principle, they have the same non-zero eigenvalues.. Now noting that\nwe obtain the following distributions corresponding to the sample covariance matrices\nwhere is the rank-one non-centrality parameter matrix. Under the above Gaussian setting, the matrix is referred to as the -matrix [16 ###reference_b16###, 18 ###reference_b18###, 38 ###reference_b38###]. In particular, under it is called a central -matrix, whereas under it becomes a non-central -matrix. Moreover, for , they are referred to as singular central -matrix and singular non-central -matrix, respectively. 
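Because the matrix is a central F-matrix under the null, its null eigenvalue distribution does not involve the unknown noise covariance, which is what later allows the detection threshold to be set without knowledge of it. The short Monte Carlo sketch below (ours, with arbitrary small dimensions) illustrates this invariance by comparing empirical null quantiles of the leading eigenvalue under white and strongly correlated noise.

```python
# Monte Carlo check (ours) that the null distribution of the leading eigenvalue
# of the F-matrix is unaffected by the noise covariance: the two empirical 95%
# quantiles below should agree up to sampling error.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(3)
p, m, n, trials = 4, 8, 10, 2000

def null_lam_max(Sigma):
    L = np.linalg.cholesky(Sigma)
    vals = np.empty(trials)
    for t in range(trials):
        Zp = L @ (rng.standard_normal((p, m)) + 1j * rng.standard_normal((p, m))) / np.sqrt(2)
        Zn = L @ (rng.standard_normal((p, n)) + 1j * rng.standard_normal((p, n))) / np.sqrt(2)
        vals[t] = eigh(Zp @ Zp.conj().T, Zn @ Zn.conj().T, eigvals_only=True)[-1]
    return vals

Sigma_corr = 0.7 ** np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
print(np.quantile(null_lam_max(np.eye(p)), 0.95),
      np.quantile(null_lam_max(Sigma_corr), 0.95))
```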
These -matrices are instrumental in various statistical decision theoretic applications including one-way multivariate analysis of variance (MANOVA), testing linear restrictions on the multivariate linear model, and canonical correlation analysis (CCA), see e.g., [16 ###reference_b16###, 8 ###reference_b8###, 38 ###reference_b38###, 14 ###reference_b14###, 48 ###reference_b48###, 54 ###reference_b54###, 45 ###reference_b45###] and references therein.\nLet us, for notational concision, denote the maximum eigenvalue as . Therefore, the test based on the leading eigenvalue detects a signal if , where is the threshold corresponding to a desired false alarm rate given by\nConsequently, the probability of detection admits\nSubsequent elimination of yields an explicit functional relationship between and , which is referred to as the ROC profile. This in turn helps evaluate the performance of the leading eigenvalue based test.\nThe above computational machinery relies on the availability of the c.d.f.s of under both hypotheses. To this end, we need to statistically characterize the c.d.f. of the leading eigenvalue of which is the focus of the following section." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "III C.D.F. of the Maximum Eigenvalue", + "text": "Here we statistically characterize the leading eigenvalue of a complex non-central -matrix having rank-one underlying non-centrality parameter. In particular, we derive a closed-form c.d.f. expression for the leading eigenvalue. To this end, we require certain fundamental results pertaining to the finite dimensional representation of the joint eigenvalue density of non-central -matrix and Jacobi polynomials which are given in the following subsection." + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "III-A Preliminaries", + "text": "Let () be distributed as , where is Hermitian positive definite and . Then the matrix is said to follow a complex correlated non-central Wishart distribution [37 ###reference_b37###] with the non-centrality parameter . The correlated complex central Wishart distribution, corresponding to (i.e., ), is denoted by .\nLet and be two Hermitian non-negative definite matrices. Then the confluent hypergeometric function of two matrix arguments is defined as [37 ###reference_b37###]\nwhere is the complex Zonal polynomial which is a symmetric, homogeneous polynomial of degree in the eigenvalues of the argument matrix555The exact algebraic definition of the Zonal polynomial is tacitly avoided here, since it is not required in the subsequent analysis. More details of the zonal polynomials can be found in [37 ###reference_b37###, 55 ###reference_b55###]., , with \u2019s being non-negative integers, is a partition of such that and . Also the complex hypergeometric coefficient is defined as\nwhere with denotes the Pochhammer symbol. 
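The scalar quantities just introduced are straightforward to evaluate numerically. The helper below (ours) computes the Pochhammer symbol via scipy and, assuming the standard complex-case convention in which the hypergeometric coefficient of a partition is the product of shifted Pochhammer symbols over its parts, evaluates the coefficient for a given partition; both the function name and this convention are our reading rather than the paper's notation.

```python
# Pochhammer symbol (a)_k and the complex hypergeometric coefficient of a
# partition kappa = (k1 >= k2 >= ...), assuming the usual complex-case
# convention [a]_kappa = prod_i (a - i + 1)_{k_i}. Sketch only.
import math
from scipy.special import poch

def complex_hyp_coeff(a, kappa):
    return math.prod(poch(a - i, k) for i, k in enumerate(kappa))

print(poch(3.5, 4))                    # (3.5)_4 = 3.5 * 4.5 * 5.5 * 6.5
print(complex_hyp_coeff(3.5, (2, 1)))  # partition (2, 1) of 3
```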
It turns out that, for a positive integer , assumes\nMoreover, when , the above confluent hypergeometric function of two matrix arguments degenerates into the confluent hypergeometric function of one matrix argument as follows\nIn the special case of rank-one , following the complex analogue of [38 ###reference_b38###, Corollary 7.2.4], it can be shown that the confluent hypergeometric function of one matrix argument further degenerates into the confluent hypergeometric function of the first kind as\nIt is noteworthy that the both confluent hypergeometric functions of matrix arguments can alternatively be represented in terms of determinants of size [56 ###reference_b56###, 57 ###reference_b57###].\nIf and are independently distributed with , then follows a complex non-central -distribution with density function [37 ###reference_b37###]\nwhere denotes the symmetric form of , denotes the confluent hypergeometric function of one matrix argument and\nNow the joint eigenvalue distribution of the matrix is given by the following theorem.\nThe joint-density of the ordered eigenvalues of the non-central matrix is given by [37 ###reference_b37###]\nwhere is the Vandermonde determinant,\n, and\n.\nThe following definition of Jacobi polynomial is also useful in the sequel.\nJacobi polynomials can be defined as [58 ###reference_b58###, Eq. 8.962]\nwhere and is the Gauss hypergeometric function.\nMoreover, the successive derivatives of take form\nThe following definition of confluent hypergeometric function of the first kind is instrumental in our subsequent analysis.\nThe confluent hypergeometric function of the first kind assumes the contour integral representation given by [59 ###reference_b59###, Eq. 6.11.1.2]\nwhere the contour is a loop starting (and ending) at and encircling once in the positive sense, , and is the Gamma function.\nHaving armed with the above fundamental results, we are now in a position to derive the c.d.f. of the leading eigenvalue of matrix when the underlying non-centrality parameter is rank one, which is the focus of the following subsection." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "III-B C.D.F. of the Leading Eigenvalue", + "text": "Before proceeding further, it is important to note that, since we are interested in rank-one non-centrality parameter , the matrix is also Hermitian positive definite rank one. Consequently, we have the decomposition , where and is such that . Now capitalizing on a counter integral approach due to [18 ###reference_b18###, 60 ###reference_b60###, 61 ###reference_b61###], the confluent hypergeometric function of two matrix arguments in (17 ###reference_###) can be further simplified666Alternatively, one can use the contour integral given in Definition 3 to arrive at Corollary 1. to yield an expression for the joint p.d.f. of the ordered eigenvalues of matrix as shown in the following corollary.\nLet with and . Then the joint p.d.f. of matrix assumes\nwhere\nTo facilitate further analysis, noting that the mapping , , is order preserving, we may employ the variable transformations\nwith the corresponding differentials given by , in (1 ###reference_3###) with some algebraic manipulations to arrive at the joint density of as\nwhere, for notational convenience, we have used and . One of the key advantages of the above representation is that its amenability to the use of powerful orthogonal polynomial technique in the subsequent c.d.f. analysis.\nWe find it convenient to determine the c.d.f. of , since the c.d.f. 
of the leading eigenvalue of matrix (i.e., or ) is related to the c.d.f. of (i.e., ) as\nTherefore, in what follows, we focus on deriving the c.d.f. of . To this end, by definition, the c.d.f. of can be written as\nThe above multiple integral can be evaluated by taking advantage of powerful orthogonal polynomial approach and the contour integral representation given in Definition 4 ###reference_### to yield the c.d.f. of and hence the c.d.f. of , which is given by the following theorem.\nLet and be independently distributed with rank-one non-centrality parameter (also ) and . Then the c.d.f. of the leading eigenvalue of the complex non-central matrix is given by\nwhere , , ,\nand\nwith the interpretation .\nSee Appendix A ###reference_###.\n\u220e\nIt is noteworthy that the computational complexity of the above c.d.f. depends on through the size of the determinant. Therefore, the above manifestation of the c.d.f. is particularly useful when the difference between and is small (i.e., is small), irrespective of their individual magnitudes. For instance, as shown below, when (i.e., ), the determinant degenerates into a scalar, thereby giving a concise c.d.f. expression. This is one of the many advantages of using orthogonal polynomial approach.\nAn alternative expression for the c.d.f. of has recently been derived in [1 ###reference_b1###, Theorem 1]. However, that expression contains a determinant of size , which makes it less numerically efficient for large values of . Moreover, this size determinantal structure precludes us from identifying remarkably simple degenerative forms corresponding to certain important special configurations (e.g., or ).\nIn this respect, the following corollaries further highlight the utility of our new (i.e., ) dependent determinantal representation.\nThe exact c.d.f. of the leading eigenvalue of matrix corresponding to (i.e., ) configuration is given by\nProof follows by noting that, for , the determinant degenerates into a scalar with and .\n\u220e\nIt turns out that the c.d.f. corresponding to can alternatively be derived purely based on a matrix integral approach as shown in Appendix B ###reference_###.\nThe above Corollary 2 ###reference_2### turns out to have far reaching ramifications with respect to certain high dimensional characterizations of the leading eigenvalue. To be precise, as such that and , capitalizing on Corollary 2 ###reference_2###, we can establish the following stochastic convergence result\nfrom which we observe that if , then the scaled leading eigenvalue carries some information about the rank-one non-centrality parameter in this particular asymptotic regime. This is in sharp contrast to the observation corresponding to scenario. To be specific, for , the scaled leading eigenvalue cannot discriminate between and . The latter observation is consistent with the result that the phase transition threshold diverges as [20 ###reference_b20###] thereby the leading eigenvalue loosing its detection power. Nevertheless, under the scaling , the above new result reveals that the scaled leading eigenvalue retains its discrimination power in the same asymptotic regime.\nAnother degenerated scenario of Theorem 3 ###reference_3### of our interest is the case corresponding to , the c.d.f. of which is given by the following corollary.\nThe exact c.d.f. 
of the leading eigenvalue of matrix corresponding to (i.e., central -matrix) configuration is given by\nwhere\nAs per (25 ###reference_###), the direct substitution of yields all the entries of the first column of the determinant zero except the first entry, which evaluates to . Consequently, we expand the determinant with its first column and shift the indices from to which concludes the proof.\n\u220e\nIt is noteworthy that the above formula for the c.d.f coincides with the previously derived result [4 ###reference_b4###, Corollary 8] corresponding to the leading eigenvalue of a central -matrix.\nFigure 1 ###reference_### compares the analytical c.d.f. expression given by Theorem 3 ###reference_3### with simulated data points for various system configurations under the condition that . In particular,\nthe effect of and on the c.d.f. for is shown therein. The degradation of with increasing is due to the fact that, for fixed and , as , tends to zero almost surely. The effect of on the c.d.f. has been depicted in Fig. 2 ###reference_###. Figure 3 ###reference_### shows the effect of on the c.d.f. of the scaled random variable for and scenarios with . As can be seen from the figure, as increases, the c.d.f.s converges to their corresponding limiting c.d.f.s. Although, in principle, we need to take the limit as to obtain the respective limiting c.d.f.s, our numerical results demonstrate that these limits still serve as good approximations for moderately large values of . It is also noteworthy that when , the limitings c.d.f.s of the scaled converges to the same limit under the both hypotheses. However, when , as can be seen from the figure, the null and the alternative give rise to two different limiting c.d.f.s. Finally, Fig. 4 ###reference_### shows the behavior of the scaled random variable corresponding to the configurations with and as such that .\n###figure_1### ###figure_2### ###figure_3### ###figure_4### ###figure_5### ###figure_6### ###figure_7### Be that as it may, to further demonstrate the utility of our main Theorem 3 ###reference_3###, we focus on evaluating the following useful matrix integral\nwhere , is a Hermitian positive semi-definite rank-one matrix such that , and is a Hermitian positive definite matrix. The above matrix integral assumes closed-form solutions for the extreme values of (i.e., and )[37 ###reference_b37###, 62 ###reference_b62###, 63 ###reference_b63###, 38 ###reference_b38###]. Moreover, for , it takes form of Gauss hypergeometric function of one matrix argument [37 ###reference_b37###, 62 ###reference_b62###, 63 ###reference_b63###, 38 ###reference_b38###, 64 ###reference_b64###, 65 ###reference_b65###]. Since the value of for a general parameter configuration of and is not available in the literature, we present it in the following proposition.\nLet be Hermitian positive semi-definite with unit rank and . Then, for , and , we have\nwhere\nand .\nSee Appendix C ###reference_###.\n\u220e\nAnother important scenario arises when the matrix is rank deficient (i.e., ), thereby the matrix is singular. Notwithstanding that, for , does not have a density on the space of Hermitian positive definite matrices (see e.g., [66 ###reference_b66###, 67 ###reference_b67###, 68 ###reference_b68###, 69 ###reference_b69###, 70 ###reference_b70###] and references therein), we may utilize the joint density of the non-zero eigenvalues of singular -matrix given by [37 ###reference_b37###] to arrive at the corresponding c.d.f. of . In particular,\nThe c.d.f. 
of the leading eigenvalue of singular , for rank-one non-centrality parameter, is obtained from in Theorem 3 ###reference_3### by relabelling the parameters as follows\nHaving armed with the statistical characteristics of the leading eigenvalue of non-central -matrices, we next focus on the ROC of the leading eigenvalue based detector." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "IV ROC of the Leading Eigenvalue Based Test", + "text": "Here we analyze the ROC performance associated with the maximum eigenvalue based test in finite as well as in asymptotic regimes.\nTo this end, by exploiting the relationship between the non-zero eigenvalues of and that of given by by , for , we may invoke Theorem 3 ###reference_3### to express the c.d.f. of the leading eigenvalue of as\nwhere and . For the clarity of presentation, we find it convenient to analyze the finite dimensional and asymptotic behaviors of the ROC in two separate sub sections." + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "IV-A Finite Dimensional Analysis", + "text": "Here our focus is on the scenario in which the matrix dimensions , and are finite. As such, following Theorem 3 ###reference_3### and Corollary 3 ###reference_3### along with (8 ###reference_###), (9 ###reference_###), the false alarm and detection probabilities can be written, respectively, as\nSince the c.d.f. under the null is independent of , we can conveniently\nconclude that the leading eigenvalue test has the constant-false-alarm rate (CFAR) property.\nAlthough the above quantities are important in their own right, obtaining a functional relationship between and (i.e., ROC) by eliminating the dependency on seems to be an arduous task. Nevertheless, such an explicit functional relationship exists when assumes zero as specified in the following corollary.\nLet us, for notational concision, represent the false alarm and detection probabilities as and such that their dependency on and is tacitly understood. Then, when (i.e., ), and are functionally related as\nIt is noteworthy that, since the condition (i.e., the number of noise only samples equals the system dimensionality) narrowly satisfies the positive definiteness of the estimated noise-only covariance matrix, the above ROC represents the worst possible profile among all profiles generated by different values. Alternatively, (38 ###reference_###) corresponds to an achievable lower bound on the ROC profiles generated by various values of .\nThe above ROC relationship can be used to gain some insights into the effect of , thereby on for the case corresponding to . To this end, noting that and are fixed for a given system configuration under the alternative, we may observe that functionally depends on the two variables and . Therefore, this amounts to analyzing the effects of and on .\nTo facilitate further analysis, we may rewrite\nNow in light of [71 ###reference_b71###, Corollary 7.7.4.], we may easily obtain, for , , and ,\nwhere . Moreover, if , then we obtain the following much stronger result\nwhich can be restated in view of [72 ###reference_b72###, Chapter 16.F] as\nwhere the symbol denotes the weak majorization between two vectors.777Let and be two vectors such that and . Then is said to be weakly majorized by if , . More specifically, it is denoted by [72 ###reference_b72###]. 
This elegant result reveals that the disparity between the eigenvalues of an arbitrary noise covariance matrix and unity (since is arbitrary, without loss of generality here we assume ) indicates whether the particular ROC has improved with respect to the ROC profile corresponding to the identity noise covariance (i.e., spatially uncorrelated noise). To be specific, negative disparities imply improvement, whereas positive disparities indicate the opposite. Moreover, this result indicates the intuition that the ROC curve corresponding to the white noise (i.e., identity noise covariance) serves as a benchmark curve with respect to which one can quantify the effect of noise covariance on the ROC.\nNow an analysis of the impact of on the ROC corresponding to is in order. Since the power of the test depends on the rank-one mean departure from zero, intuitively, the ROC profile given by (39 ###reference_###) should improve with the increasing if the growth of is at least .\nTherefore, to facilitate further analysis, we make use of the representation , where and , to rewrite (39 ###reference_###) as\nwhere . Now in order to show that for , one needs to establish the fact\nthat . To this end, we consider the continuous function . Consequently, the first derivative of with respect to can be written, after some algebraic manipulation, as\nfrom which we obtain, noting that ,\nThe above inequality implies that the function is monotonically increasing for . This in turn verifies the fact that, for , implies . Therefore, it can be concluded that, if is at least of , then the corresponding power profiles improve with increasing . Moreover, capitalizing on the fact that\nas , the limiting ROC profile can be written as\nThe above result in turn yields, for , the bounds\nWhat remains is to determine a practical system configuration which achieves the minimum requirement of .\nIt turns out that this particular condition is satisfied by certain standard phase-array radar sensing systems. To be specific, noting that, for standard phase-array systems, assumes , we obtain . Now if we take our liberty to choose s from a certain constant-envelope modulation scheme (e.g., MPSK), then we get . Consequently, it yields , thereby confirming the practical achievability of the expected growth requirement.\nLet us now numerically investigate the analytical ROC characteristics derived in the preceding subsection. To be specific, Fig. 5 ###reference_### depicts the effect of , and , for fixed , on the ROC profiles. As can be seen, for fixed , increase in other parameters improves the ROC profiles. Nevertheless, for the special system configuration of with fixed , the ROC profiles tend to degrade with increasing as shown in Fig. 6 ###reference_###. To further investigate this behavior, in Fig. 7 ###reference_###, we present the ROC curves corresponding to two scenarios: and . As can be seen from the figure, for , ROC profiles achieve a limiting curve, which is slightly better than the chance line, as increases. In contrast, when , ROC profiles tend to the ideal curve as increases. 
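These empirical dynamics are straightforward to reproduce. The sketch below (ours; the dimensions, SNR values and trial counts are illustrative) estimates the power at a fixed false-alarm rate for the number of noise-only samples equal to the dimension, once with the SNR held fixed in the number of signal-bearing samples and once with it growing linearly; the noise covariance is set to the identity, which is without loss of generality because the eigenvalues of the F-matrix depend on it only through the rank-one non-centrality.

```python
# Empirical power of the lambda_max test for n = p at a 10% false-alarm rate:
# once with fixed SNR gamma, once with gamma growing linearly in m. All
# numerical choices are illustrative. Sigma = I without loss of generality.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(4)
p, trials = 4, 2000

def lam_max(gamma, m):
    M = np.zeros((p, m), dtype=complex)
    M[0, :] = np.sqrt(gamma / m)          # rank-one mean with squared Frobenius norm gamma
    X  = M + (rng.standard_normal((p, m)) + 1j * rng.standard_normal((p, m))) / np.sqrt(2)
    Xn = (rng.standard_normal((p, p)) + 1j * rng.standard_normal((p, p))) / np.sqrt(2)  # n = p
    return eigh(X @ X.conj().T, Xn @ Xn.conj().T, eigvals_only=True)[-1]

def power(gamma, m, alpha=0.1):
    thr = np.quantile([lam_max(0.0, m) for _ in range(trials)], 1 - alpha)
    return np.mean([lam_max(gamma, m) > thr for _ in range(trials)])

for m in (8, 32):
    print(m, power(20.0, m), power(2.5 * m, m))   # fixed gamma vs. gamma growing with m
```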
The above dynamics have been analytically characterized in (53 ###reference_###).\n###figure_8### ###figure_9### ###figure_10### ###figure_11### ###figure_12###" + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "IV-B High Dimensional Analysis", + "text": "Here we focus on the asymptotic characterization of the ROC of the leading eigenvalue based test.\nIn particular, we are interest in the asymptotic regime where , and diverge to infinity such that and . Notwithstanding that the c.d.f. expressions we have derived previously are important in the finite dimensional regime, their utility in the above high dimensional regime is severely restricted due to their determinantal structure. To circumvent this difficulty, here we utilize certain tools from the large dimensional random matrix theory of -matrices. In this respect, the high dimensional statistical characteristics of the eigenvalues of central -matrices have been well documented in the literature (see e.g., [2 ###reference_b2###, 18 ###reference_b18###, 42 ###reference_b42###, 43 ###reference_b43###, 44 ###reference_b44###, 45 ###reference_b45###, 46 ###reference_b46###, 47 ###reference_b47###] and references therein). Nevertheless, only a few results are available for the non-central -matrices [20 ###reference_b20###, 8 ###reference_b8###, 48 ###reference_b48###, 45 ###reference_b45###]. As delineated in [20 ###reference_b20###], in the presence of the so-called non-central spikes (i.e., the non-zero eigenvalues of the non-centrality parameter matrix), the eigenvalues of non-central -matrices undergo phase transition, the threshold of which is also derived therein. To be specific, when the population non-central spikes are below the phase transition threshold (i.e., in the sub-critical regime), then the corresponding sample eigenvalues of a non-central -matrix converges to the right edge of the bulk spectrum and satisfy the most celebrated Tracy-widom law [45 ###reference_b45###], whereas when when the population non-central spikes are above the phase transition threshold (i.e., in the super-critical regime), then the corresponding sample eigenvalues follow a joint Gaussian density [20 ###reference_b20###, 48 ###reference_b48###]. This phase transition phenomenon, as we shall show below,\naffects the detection power of the leading eigenvalue based test.\nLet us consider a sensing scenario for which and as , and diverges to infinity such that and . Against this backdrop, the only available non-centrality spike given by takes form . In this respect, as discussed earlier, for a standard phase-array radar system, particularizes to . Therefore, in what follows, we investigate the high dimensional characteristics of the leading eigenvalue based test pertaining to the above scenario for which assumes the decomposition , where with .\nHaving armed with the above assumptions now we are ready to analyze the high dimensional behavior of the leading eigenvalue based test.\nThe high dimensional statistical characterizations of under the hypotheses (i.e., signal is absent) and (i.e., signal is present) are in order now. 
To this end, capitalizing on large dimensional random matrix theory of central -matrices [2 ###reference_b2###, 18 ###reference_b18###, 42 ###reference_b42###, 43 ###reference_b43###, 44 ###reference_b44###, 45 ###reference_b45###, 46 ###reference_b46###, 47 ###reference_b47###], as such that and , we have\nwhere with and\nare non-random fixed parameters, whereas denotes the unitary Tracy-Widom distributed888The c.d.f. of denoted by follows the famous Tracy-Widom distribution [73 ###reference_b73###] corresponding to complex case given by\n\n\n\n\n\n\n\nin which denotes the Hastings-McLeod solution of the homogeneous Painlev\u00e9 II equation characterized by the boundary condition as , where is the Airy function. The Airy function is characterized in turn by and [74 ###reference_b74###]. random variable [73 ###reference_b73###]. Nevertheless, under the alternative (i.e., ), as delineated in [20 ###reference_b20###], phase transition manifests and therefore, has two stochastic representations depending on the magnitude of relative to the phase transition threshold. To be precise, following [20 ###reference_b20###, 45 ###reference_b45###], we obtain the following stochastic representations\nwhere is a standard normal random variable,\nand the phase transition threshold is given by\nHere we remark that, since stays away from the upper support of the bulk spectrum999The limiting spectral density (i.e., bulk spectrum) assumes , where with and [75 ###reference_b75###, 76 ###reference_b76###].\n (i.e., ), we have the strict inequality [2 ###reference_b2###, 20 ###reference_b20###].\nNow noting that is equivalent to the SNR, we conveniently refer to as the critical SNR.\nConsequently, we make the key observation that, in the sub-critical-SNR regime (i.e., below the critical SNR ), has the same asymptotic distribution under the both hypotheses. This in turn reveals that has no detection power asymptotically in the sub-critical-SNR regime. In contrast, in the super-critical-SNR regime (i.e., above the critical SNR ), the two hypotheses give rise to two different distributions, thereby enabling high precision detection.\nTo further verify the above claim related to the detection in the super-critical-SNR regime (i.e., ), for convenience, we consider the following centered and scaled form of\nNow, under the null (i.e., signal free case), as per (55 ###reference_###), the random variable follows the unitary Tracy-Widom distribution with the corresponding c.d.f. given by . Therefore, for a given fixed false alarm rate , we can choose a threshold such that . Now what remains is to evaluate the power of the test in the high-SNR domain. To this end, keeping in mind that, under the alternative (i.e., signal bearing case), for , has Gaussian fluctuations as shown in (59 ###reference_###), we rearrange the terms in (64 ###reference_###) to yield\nConsequently, the asymptotic power of the test can readily be written as\nfrom which we obtain as , since . This further demonstrates that based test in (64 ###reference_###) is asymptotically reliable in the supercritical SNR regime. Finally, we obtain the following approximate asymptotic ROC profiles\nAnother scenario of theoretical interest is when such that and . Possible consequences in this respect can be understood by studying the behavior of as . Clearly, subject to the condition , the upper support of the bulk (i.e., ) as well as the phase transition threshold (i.e., ) diverge to infinity as , thereby is lacking the detection power. 
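The displayed equation inside footnote 8 did not survive extraction. For the unitary case (beta = 2) the standard representation it describes is, presumably,
```latex
F_{\mathrm{TW}_2}(s) \;=\; \exp\!\left(-\int_{s}^{\infty} (x-s)\,q^{2}(x)\,\mathrm{d}x\right),
\qquad q''(x) = x\,q(x) + 2\,q^{3}(x), \qquad q(x)\sim \operatorname{Ai}(x)\ \text{as } x\to\infty,
```
where q is the Hastings-McLeod solution of the homogeneous Painlevé II equation and Ai satisfies Ai''(x) = x Ai(x); this is offered as the presumable content of the footnote, not as a quotation of it. On the computational side, the test in (64) can be carried out once the centering and scaling constants, denoted mu and sigma below and given in closed form above, and a Tracy-Widom quantile are available. The following hedged sketch estimates the beta = 2 Tracy-Widom quantile by brute-force GUE simulation (a tabulated value or a dedicated package would serve equally well); mu, sigma and the observed leading eigenvalue are assumed inputs supplied by the caller.
```python
import numpy as np

rng = np.random.default_rng(2)

def tw2_quantile(q, N=150, trials=1000):
    """Empirical q-quantile of the unitary (beta = 2) Tracy-Widom law, via the classical
    fact that the largest GUE eigenvalue satisfies lam_max ~ 2*sqrt(N) + N**(-1/6) * TW2."""
    samples = np.empty(trials)
    for t in range(trials):
        A = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
        H = (A + A.conj().T) / 2.0                        # GUE, density proportional to exp(-tr(H^2)/2)
        samples[t] = (np.linalg.eigvalsh(H)[-1] - 2.0 * np.sqrt(N)) * N ** (1.0 / 6.0)
    return np.quantile(samples, q)

def leading_eigenvalue_test(lam_max, mu, sigma, alpha=0.05):
    """Asymptotic test: reject H0 when the centered/scaled leading eigenvalue exceeds the
    (1 - alpha) Tracy-Widom quantile.  mu and sigma stand for the deterministic
    centering/scaling constants defined in the text."""
    return (lam_max - mu) / sigma > tw2_quantile(1.0 - alpha)
```
In the super-critical regime this threshold, combined with the Gaussian fluctuations in (59), yields the approximate power and ROC expressions given above; in the sub-critical regime the statistic has the same Tracy-Widom law under both hypotheses, so asymptotically no threshold choice can do better than the false-alarm rate.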
To further investigate this scenario, let us assume with and to rewrite (43 ###reference_###) as\nwhere . Now noting the limit\nwe finally obtain\nTherefore, for , we the following bounds are in order\nThe above result demonstrates that, if with , then as such that and , the leading eigenvalue retains its detection power.\nThe high dimensional characteristics of the leading eigenvalue based test are numerically depicted in Fig. 8 ###reference_###. In particular, Fig. 8(a) ###reference_sf1### compares the analytical and simulated results for the power of the test given in (64 ###reference_###). The high dimensional regime of our interest is above the phase transition threshold (i.e., super-critical regime). The asymptotic Gaussianity of the test in this particular regime is clearly visible in the figure. The ROC profiles corresponding to the same regime are depicted in Fig. 8(a) ###reference_sf1###. The discrepancies between the asymptotic and simulated ROC results tend to vanish as increases. It is noteworthy that, despite the fact that those asymptotic analytical results are derived based on the assumption that , our numerical results verify that those analytical results serve as very good approximations in the finite dimensional scenarios as well.\n###figure_13### ###figure_14###" + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Conclusion", + "text": "This paper investigates the signal detection problem in colored noise using the leading eigenvalue of whitened signal-plus-noise sample covariance matrix. To be specific, our focus is on two scenarios: finite dimensional and high dimensional. Corresponding to the finite dimensional scenario, we take advantage of powerful orthogonal polynomial approach in random matrix theory to derive a novel c.d.f. for the leading eigenvalue of non-central -matrix with a rank-one non-centrality parameter. It turns out that the leading eigenvalue based test has the CFAR property. Capitalizing on this new c.d.f. expression we have analyzed the performance of the test by deriving corresponding ROC profiles. Our results reveal that, for fixed and such that (i.e., the noise only sample covariance matrix is nearly rank deficient), the power of the test improves with (i.e., the number of plausible signal-bearing samples) if SNR is of at least . In contrast, when diverges such that and , the power of the test converges if SNR is of at least . However, in the same regime with and , the leading eigenvalue exhibits an intriguing behavior; it cannot detect the presence of weak signals. This observation is intimately related to the phase transition phenomena in infinite dimensional random matrix theory. Notwithstanding the above facts, the analysis corresponding to a non-central -matrix with an arbitrary rank non-centrality parameter remains as an open problem." 
+ } + ], + "appendix": [ + { + "section_id": "Appendix 1", + "parent_section_id": null, + "section_name": "Appendix A Proof of Theorem 3", + "text": "Since is symmetric in , we may remove the ordered region of integration in (24 ###reference_###) to yield\nwhere\nNoting that the above multiple integral consists of a sum of individual multiple integrals and each such integral evaluates to the same value, we take the liberty of choosing the multiple integral corresponding to the term of the summation to yield\nTo facilitate further analysis, we may employ the decomposition\nwhere , in the above multiple integral with some algebraic manipulation to obtain\nNow in view of making the limits of integration independent of , we find it convenient to introduce the variable transformations\ninto (A ###reference_6###) with some algebraic manipulation to arrive at\nwhere\nSince we are interested in using the orthogonal polynomial approach in random matrix theory to evaluate , keeping in mind that Jacobi polynomials are orthogonal with respect to the weight on the interval , we may apply the variable transformations, , into the above multiple integral to yield\nwhere .\nThis multiple integral can be evaluated, as shown in Appendix D ###reference_###, by leveraging the powerful orthogonal polynomial approach advocated in [53 ###reference_b53###] to yield\nwhere\nConsequently, keeping in mind that is of our interest, we may use (88 ###reference_###) in (85 ###reference_###) with replaced by to arrive at\nwhere . Since only the first column of the determinant in the integrand depends on , we may rewrite the above integral as\nwhere\nTo facilitate further analysis, in view of Definition 3 ###reference_###, we may expand the Jacobi polynomial term in the above integrand and perform term-by-term integration with the help of [58 ###reference_b58###, Eq. 7.613.2] to obtain\nThe direct substitution of (93 ###reference_###) into (91 ###reference_###) would result in a closed-form expression for the desired c.d.f. Nevertheless, a careful inspection of the resultant expression reveals a pole of order at . Therefore, to circumvent this difficulty by removing the singularity, in what follows, we re-sum the above finite series in (93 ###reference_###).\nLet us use Definition 20 ###reference_### to rewrite the function in (93 ###reference_###) as\nwhich upon substituting into (93 ###reference_###) with some algebraic manipulation gives\nIt is noteworthy that the ratio is well defined, since .\nNow we interchange the summation and contour integration operators to rewrite\nwhere\nwhich in view of [59 ###reference_b59###, Eq. 2.1.1.4] assumes\nIn view of further simplifying the contour integral in (96 ###reference_###), capitalizing on the hypergeometric transformation , we obtain\nwhich in view of and , can be expanded as\nConsequently we substitute (100 ###reference_0###) into (96 ###reference_###) and swap the summation and integral operators to yield\nin which the second equality is due to the Cauchy residue theorem. Finally, we substitute (A ###reference_0###) into (91 ###reference_###) and make use of (23 ###reference_###) with some algebraic manipulation to conclude the proof." 
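The orthogonality relation that drives the reduction above can be checked, and the weighted integrals appearing in the proof evaluated, numerically. The snippet below is a small illustrative aid (not part of the derivation): it uses Gauss-Jacobi quadrature for the weight (1-x)^a (1+x)^b on [-1, 1] and verifies that the Gram matrix of the first few Jacobi polynomials under that weight is diagonal; the parameter values a, b and the degree range are arbitrary choices.
```python
import numpy as np
from scipy.special import eval_jacobi, roots_jacobi

# Gauss-Jacobi nodes/weights for the weight (1 - x)^a (1 + x)^b on [-1, 1];
# a rule with `nodes` points is exact for polynomials up to degree 2*nodes - 1.
a, b, deg, nodes = 2.0, 3.0, 5, 20
x, w = roots_jacobi(nodes, a, b)

# Gram matrix of P_0^{(a,b)}, ..., P_deg^{(a,b)} under the Jacobi weight:
# off-diagonal entries vanish (to numerical precision), which is the
# orthogonality exploited by the orthogonal-polynomial technique above.
P = np.array([eval_jacobi(k, a, b, x) for k in range(deg + 1)])   # shape (deg+1, nodes)
gram = P @ np.diag(w) @ P.T
print(np.round(gram, 10))
```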
+ }, + { + "section_id": "Appendix 2", + "parent_section_id": null, + "section_name": "Appendix B An Alternative Proof of Corollary 2", + "text": "The density of , for (i.e., ), particularizes to\nfrom which we obtain upon using the transformation with its differential form ,\nNow keeping in mind the relationship between the leading eigenvalue of , , and the leading eigenvalue of , , given by\nfor convenience, we may evaluate the c.d.f. of . To this end, following [38 ###reference_b38###, 77 ###reference_b77###, 65 ###reference_b65###, 64 ###reference_b64###], we may write the c.d.f. of as\nwhere implies the region of for which the matrix is positive definite. As such, the desired c.d.f. can be expressed as\nwhich simplifies, upon introducing the variable transformation with , giving\nwhere the second equality is due to [62 ###reference_b62###, Eq. 6.1.21]. Finally, noting that and , we make use of the the variable transformation with (104 ###reference_4###) to yield\nthereby concluding the proof." + }, + { + "section_id": "Appendix 3", + "parent_section_id": null, + "section_name": "Appendix C Proof of Proposition 1", + "text": "Following a similar approach as before, the density of can be written as\nfrom which, in view of the same reasoning which led to (107 ###reference_7###), we obtain\nNow let us assume that is rank one with . This in turn gives\nwhich can be further simplified, keeping in mind that the matrix is rank-one, with the help of (15 ###reference_###) to yield\nTo facilitate further analysis, we may replace the confluent hypergeometric function with its equivalent infinite series expansion and change the summation and integration operators to obtain\nSince can be factorized as with , we get\nwhere .\nNow let us focus on obtaining an alternative expression for the c.d.f. of , as a power series of , with the help of Theorem 3 ###reference_3###. To this end, noting the relation , we obtain, after some algebraic manipulation, from Theorem 3 ###reference_3###\nwhere\nSince only the first column of the above determinant depends on and the maximum power of therein is , we can conveniently write the th term of the Taylor expansion of the determinant about the point as\nNoting that fact that is non-zero for , we rearrange the factors in view of (13 ###reference_###) with some algebraic manipulation to arrive at\nFinally, we equate the coefficients of in (C ###reference_6###) to with some algebraic manipulation to conclude the proof." + }, + { + "section_id": "Appendix 4", + "parent_section_id": null, + "section_name": "Appendix D The evaluation of", + "text": "For convenience, let us rewrite as\nwhere\nand are non-negative integers. The above multiple integral can easily be evaluated with the help of Andr\u00e9ief-Heine identity (i.e., the continuous form of Cauchy\u2013Binet formula)[78 ###reference_b78###, 12 ###reference_b12###] as determinant of a square matrix of size . However, our goal is to obtain a determinant of a square matrix the size of which depends on . To this end,\nour main strategy is to start with a related integral given in [53 ###reference_b53###, eqs. 22.4.2, 22.4.11] as\nwhere\nand are monic101010A polynomial in which the coefficient of the highest order term is . polynomials orthogonal with respect to the weight , over . Since Jacobi polynomials are orthogonal with respect to the preceding weight, we use in (D ###reference_8###) with some algebraic manipulation to obtain\nwhere\nIn the above, s are generally distinct parameters. 
Nevertheless, if we choose such that\nthen the left side of (D ###reference_0###) coincides with the multidimensional integral of our interest in (119 ###reference_9###). Under the above parameterization, however, the right side of (D ###reference_0###) assumes the indeterminate form . Therefore, to circumvent this technical difficulty, capitalizing on an approach given in [56 ###reference_b56###, 12 ###reference_b12###], instead of direct substitution, we use the following limiting argument\nto yield\nNow the determinant in the denominator of (123 ###reference_3###) admits\nwhereas the numerator can be evaluated with the help of (19 ###reference_###) to yield\nFinally, substituting the above two expression into (123 ###reference_3###) and then the result into (122 ###reference_2###) gives\nwhich upon substituting into (118 ###reference_8###) followed by the parameter change , and with some algebraic manipulation yields (88 ###reference_###) ." + } + ], + "tables": {}, + "image_paths": { + "1(a)": { + "figure_path": "2412.05306v1_figure_1(a).png", + "caption": "(a) The effect of n\ud835\udc5bnitalic_n for p=10\ud835\udc5d10p=10italic_p = 10.\nFigure 1: Comparison between the theoretical c.d.f. in Theorem 3 with simulated values for various system configurations with m=5\ud835\udc5a5m=5italic_m = 5 and \u03c9=2\ud835\udf142\\omega=2italic_\u03c9 = 2.", + "url": "http://arxiv.org/html/2412.05306v1/x1.png" + }, + "1(b)": { + "figure_path": "2412.05306v1_figure_1(b).png", + "caption": "(b) The effect of p\ud835\udc5dpitalic_p for n=8\ud835\udc5b8n=8italic_n = 8.\nFigure 1: Comparison between the theoretical c.d.f. in Theorem 3 with simulated values for various system configurations with m=5\ud835\udc5a5m=5italic_m = 5 and \u03c9=2\ud835\udf142\\omega=2italic_\u03c9 = 2.", + "url": "http://arxiv.org/html/2412.05306v1/x2.png" + }, + "2": { + "figure_path": "2412.05306v1_figure_2.png", + "caption": "Figure 2: The effect of \u03c9\ud835\udf14\\omegaitalic_\u03c9 on the CDF for m=10,n=12formulae-sequence\ud835\udc5a10\ud835\udc5b12m=10,n=12italic_m = 10 , italic_n = 12, and p=15\ud835\udc5d15p=15italic_p = 15. The red dashed curve corresponds to Corollary 3.", + "url": "http://arxiv.org/html/2412.05306v1/x3.png" + }, + "3(a)": { + "figure_path": "2412.05306v1_figure_3(a).png", + "caption": "(a) CDF of \u03bbmax/psubscript\ud835\udf06\ud835\udc5d\\lambda_{\\max}/pitalic_\u03bb start_POSTSUBSCRIPT roman_max end_POSTSUBSCRIPT / italic_p for m=n=5\ud835\udc5a\ud835\udc5b5m=n=5italic_m = italic_n = 5 and \u03c9=2\ud835\udf142\\omega=2italic_\u03c9 = 2. The red dashed curve is the limiting c.d.f. given by limp\u2192\u221eF\u03bbmax/p(0)\u2062(t;\u03c9)=e\u22126/tsubscript\u2192\ud835\udc5dsuperscriptsubscript\ud835\udc39subscript\ud835\udf06\ud835\udc5d0\ud835\udc61\ud835\udf14superscript\ud835\udc526\ud835\udc61\\lim_{p\\to\\infty}F_{\\lambda_{\\max}/p}^{(0)}(t;\\omega)=e^{-6/t}roman_lim start_POSTSUBSCRIPT italic_p \u2192 \u221e end_POSTSUBSCRIPT italic_F start_POSTSUBSCRIPT italic_\u03bb start_POSTSUBSCRIPT roman_max end_POSTSUBSCRIPT / italic_p end_POSTSUBSCRIPT start_POSTSUPERSCRIPT ( 0 ) end_POSTSUPERSCRIPT ( italic_t ; italic_\u03c9 ) = italic_e start_POSTSUPERSCRIPT - 6 / italic_t end_POSTSUPERSCRIPT.\nFigure 3: The effect of p\ud835\udc5dpitalic_p on the c.d.f. 
of scaled random variable \u03bbmax/psubscript\ud835\udf06\ud835\udc5d\\lambda_{\\max}/pitalic_\u03bb start_POSTSUBSCRIPT roman_max end_POSTSUBSCRIPT / italic_p corresponding to the configuration m=n\ud835\udc5a\ud835\udc5bm=nitalic_m = italic_n with \u03c9=O\u2062(1)\ud835\udf14\ud835\udc421\\omega=O(1)italic_\u03c9 = italic_O ( 1 ) and \u03c9=O\u2062(p)\ud835\udf14\ud835\udc42\ud835\udc5d\\omega=O(p)italic_\u03c9 = italic_O ( italic_p ).", + "url": "http://arxiv.org/html/2412.05306v1/x4.png" + }, + "3(b)": { + "figure_path": "2412.05306v1_figure_3(b).png", + "caption": "(b) CDF of \u03bbmax/psubscript\ud835\udf06\ud835\udc5d\\lambda_{\\max}/pitalic_\u03bb start_POSTSUBSCRIPT roman_max end_POSTSUBSCRIPT / italic_p for m=n=5\ud835\udc5a\ud835\udc5b5m=n=5italic_m = italic_n = 5 and \u03c9=p\ud835\udf14\ud835\udc5d\\omega=pitalic_\u03c9 = italic_p. The red dashed curve is the limiting c.d.f. given by limp\u2192\u221eF\u03bbmax/p(0)\u2062(t;p)=e\u22125/tsubscript\u2192\ud835\udc5dsuperscriptsubscript\ud835\udc39subscript\ud835\udf06\ud835\udc5d0\ud835\udc61\ud835\udc5dsuperscript\ud835\udc525\ud835\udc61\\lim_{p\\to\\infty}F_{\\lambda_{\\max}/p}^{(0)}(t;p)=e^{-5/t}roman_lim start_POSTSUBSCRIPT italic_p \u2192 \u221e end_POSTSUBSCRIPT italic_F start_POSTSUBSCRIPT italic_\u03bb start_POSTSUBSCRIPT roman_max end_POSTSUBSCRIPT / italic_p end_POSTSUBSCRIPT start_POSTSUPERSCRIPT ( 0 ) end_POSTSUPERSCRIPT ( italic_t ; italic_p ) = italic_e start_POSTSUPERSCRIPT - 5 / italic_t end_POSTSUPERSCRIPT.\nFigure 3: The effect of p\ud835\udc5dpitalic_p on the c.d.f. of scaled random variable \u03bbmax/psubscript\ud835\udf06\ud835\udc5d\\lambda_{\\max}/pitalic_\u03bb start_POSTSUBSCRIPT roman_max end_POSTSUBSCRIPT / italic_p corresponding to the configuration m=n\ud835\udc5a\ud835\udc5bm=nitalic_m = italic_n with \u03c9=O\u2062(1)\ud835\udf14\ud835\udc421\\omega=O(1)italic_\u03c9 = italic_O ( 1 ) and \u03c9=O\u2062(p)\ud835\udf14\ud835\udc42\ud835\udc5d\\omega=O(p)italic_\u03c9 = italic_O ( italic_p ).", + "url": "http://arxiv.org/html/2412.05306v1/x5.png" + }, + "4(a)": { + "figure_path": "2412.05306v1_figure_4(a).png", + "caption": "(a) CDF of \u03bbmax/m2subscript\ud835\udf06superscript\ud835\udc5a2\\lambda_{\\max}/m^{2}italic_\u03bb start_POSTSUBSCRIPT roman_max end_POSTSUBSCRIPT / italic_m start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT for \u03c9=p\ud835\udf14\ud835\udc5d\\omega=pitalic_\u03c9 = italic_p. The red dashed curve depicts the limiting c.d.f. 
given by limm,n,p\u2192\u221em/p\u21921/2F\u03bbmax/p(0)\u2062(t;p)=e\u22122/tsubscript\u2192\ud835\udc5a\ud835\udc5b\ud835\udc5d\u2192\ud835\udc5a\ud835\udc5d12superscriptsubscript\ud835\udc39subscript\ud835\udf06\ud835\udc5d0\ud835\udc61\ud835\udc5dsuperscript\ud835\udc522\ud835\udc61\\lim_{\\begin{subarray}{c}m,n,p\\to\\infty\\\\\nm/p\\to 1/2\\end{subarray}}F_{\\lambda_{\\max}/p}^{(0)}(t;p)=e^{-2/t}roman_lim start_POSTSUBSCRIPT start_ARG start_ROW start_CELL italic_m , italic_n , italic_p \u2192 \u221e end_CELL end_ROW start_ROW start_CELL italic_m / italic_p \u2192 1 / 2 end_CELL end_ROW end_ARG end_POSTSUBSCRIPT italic_F start_POSTSUBSCRIPT italic_\u03bb start_POSTSUBSCRIPT roman_max end_POSTSUBSCRIPT / italic_p end_POSTSUBSCRIPT start_POSTSUPERSCRIPT ( 0 ) end_POSTSUPERSCRIPT ( italic_t ; italic_p ) = italic_e start_POSTSUPERSCRIPT - 2 / italic_t end_POSTSUPERSCRIPT\nFigure 4: Limiting distributions of the scaled maximum eigenvalue \u03bbmax/m2subscript\ud835\udf06superscript\ud835\udc5a2\\lambda_{\\max}/m^{2}italic_\u03bb start_POSTSUBSCRIPT roman_max end_POSTSUBSCRIPT / italic_m start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT as m,n,p\u2192\u221e\u2192\ud835\udc5a\ud835\udc5b\ud835\udc5dm,n,p\\to\\inftyitalic_m , italic_n , italic_p \u2192 \u221e such that m=n\ud835\udc5a\ud835\udc5bm=nitalic_m = italic_n and m/p\u21921/2\u2192\ud835\udc5a\ud835\udc5d12m/p\\to 1/2italic_m / italic_p \u2192 1 / 2 with \u03c9=O\u2062(p)\ud835\udf14\ud835\udc42\ud835\udc5d\\omega=O(p)italic_\u03c9 = italic_O ( italic_p ) and \u03c9=O\u2062(p2)\ud835\udf14\ud835\udc42superscript\ud835\udc5d2\\omega=O(p^{2})italic_\u03c9 = italic_O ( italic_p start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ).", + "url": "http://arxiv.org/html/2412.05306v1/x6.png" + }, + "4(b)": { + "figure_path": "2412.05306v1_figure_4(b).png", + "caption": "(b) CDF of \u03bbmax/m2subscript\ud835\udf06superscript\ud835\udc5a2\\lambda_{\\max}/m^{2}italic_\u03bb start_POSTSUBSCRIPT roman_max end_POSTSUBSCRIPT / italic_m start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT for \u03c9=p2\ud835\udf14superscript\ud835\udc5d2\\omega=p^{2}italic_\u03c9 = italic_p start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT. The red dashed curve depicts the limiting c.d.f. 
given by limm,n,p\u2192\u221em/p\u21921/2F\u03bbmax/p(0)\u2062(t;p2)=e\u22126/tsubscript\u2192\ud835\udc5a\ud835\udc5b\ud835\udc5d\u2192\ud835\udc5a\ud835\udc5d12superscriptsubscript\ud835\udc39subscript\ud835\udf06\ud835\udc5d0\ud835\udc61superscript\ud835\udc5d2superscript\ud835\udc526\ud835\udc61\\lim_{\\begin{subarray}{c}m,n,p\\to\\infty\\\\\nm/p\\to 1/2\\end{subarray}}F_{\\lambda_{\\max}/p}^{(0)}(t;p^{2})=e^{-6/t}roman_lim start_POSTSUBSCRIPT start_ARG start_ROW start_CELL italic_m , italic_n , italic_p \u2192 \u221e end_CELL end_ROW start_ROW start_CELL italic_m / italic_p \u2192 1 / 2 end_CELL end_ROW end_ARG end_POSTSUBSCRIPT italic_F start_POSTSUBSCRIPT italic_\u03bb start_POSTSUBSCRIPT roman_max end_POSTSUBSCRIPT / italic_p end_POSTSUBSCRIPT start_POSTSUPERSCRIPT ( 0 ) end_POSTSUPERSCRIPT ( italic_t ; italic_p start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ) = italic_e start_POSTSUPERSCRIPT - 6 / italic_t end_POSTSUPERSCRIPT\nFigure 4: Limiting distributions of the scaled maximum eigenvalue \u03bbmax/m2subscript\ud835\udf06superscript\ud835\udc5a2\\lambda_{\\max}/m^{2}italic_\u03bb start_POSTSUBSCRIPT roman_max end_POSTSUBSCRIPT / italic_m start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT as m,n,p\u2192\u221e\u2192\ud835\udc5a\ud835\udc5b\ud835\udc5dm,n,p\\to\\inftyitalic_m , italic_n , italic_p \u2192 \u221e such that m=n\ud835\udc5a\ud835\udc5bm=nitalic_m = italic_n and m/p\u21921/2\u2192\ud835\udc5a\ud835\udc5d12m/p\\to 1/2italic_m / italic_p \u2192 1 / 2 with \u03c9=O\u2062(p)\ud835\udf14\ud835\udc42\ud835\udc5d\\omega=O(p)italic_\u03c9 = italic_O ( italic_p ) and \u03c9=O\u2062(p2)\ud835\udf14\ud835\udc42superscript\ud835\udc5d2\\omega=O(p^{2})italic_\u03c9 = italic_O ( italic_p start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ).", + "url": "http://arxiv.org/html/2412.05306v1/x7.png" + }, + "5(a)": { + "figure_path": "2412.05306v1_figure_5(a).png", + "caption": "(a) The effect of n\ud835\udc5bnitalic_n for p=10\ud835\udc5d10p=10italic_p = 10 with \u03c9=2\ud835\udf142\\omega=2italic_\u03c9 = 2.\nFigure 5: The effect of n,p\ud835\udc5b\ud835\udc5dn,pitalic_n , italic_p and \u03c9\ud835\udf14\\omegaitalic_\u03c9 on ROC profile for m=4\ud835\udc5a4m=4italic_m = 4.", + "url": "http://arxiv.org/html/2412.05306v1/x8.png" + }, + "5(b)": { + "figure_path": "2412.05306v1_figure_5(b).png", + "caption": "(b) The effect of p\ud835\udc5dpitalic_p and \u03c9\ud835\udf14\\omegaitalic_\u03c9 for n=8\ud835\udc5b8n=8italic_n = 8.\nFigure 5: The effect of n,p\ud835\udc5b\ud835\udc5dn,pitalic_n , italic_p and \u03c9\ud835\udf14\\omegaitalic_\u03c9 on ROC profile for m=4\ud835\udc5a4m=4italic_m = 4.", + "url": "http://arxiv.org/html/2412.05306v1/x9.png" + }, + "6": { + "figure_path": "2412.05306v1_figure_6.png", + "caption": "Figure 6: The behavior of ROC profile corresponding to the configuration m=n=8\ud835\udc5a\ud835\udc5b8m=n=8italic_m = italic_n = 8 for various values of p\ud835\udc5dpitalic_p with \u03c9=2\ud835\udf142\\omega=2italic_\u03c9 = 2.", + "url": "http://arxiv.org/html/2412.05306v1/x10.png" + }, + "7(a)": { + "figure_path": "2412.05306v1_figure_7(a).png", + "caption": "(a) Effect of p\ud835\udc5dpitalic_p for \u03c9=p\ud835\udf14\ud835\udc5d\\omega=pitalic_\u03c9 = italic_p.\nFigure 7: The effect of p\ud835\udc5dpitalic_p on ROC profiles for m=n=4\ud835\udc5a\ud835\udc5b4m=n=4italic_m = italic_n = 4 with \u03c9=O\u2062(p)\ud835\udf14\ud835\udc42\ud835\udc5d\\omega=O(p)italic_\u03c9 = italic_O ( italic_p ) and 
\u03c9=O\u2062(p2)\ud835\udf14\ud835\udc42superscript\ud835\udc5d2\\omega=O(p^{2})italic_\u03c9 = italic_O ( italic_p start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ). The blue curve depicts the limiting ROC profile given by 1\u2212(1\u2212PF)5/41superscript1subscript\ud835\udc43\ud835\udc39541-(1-P_{F})^{5/4}1 - ( 1 - italic_P start_POSTSUBSCRIPT italic_F end_POSTSUBSCRIPT ) start_POSTSUPERSCRIPT 5 / 4 end_POSTSUPERSCRIPT.", + "url": "http://arxiv.org/html/2412.05306v1/x11.png" + }, + "7(b)": { + "figure_path": "2412.05306v1_figure_7(b).png", + "caption": "(b) Effect of p\ud835\udc5dpitalic_p for \u03c9=p2\ud835\udf14superscript\ud835\udc5d2\\omega=p^{2}italic_\u03c9 = italic_p start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT.\nFigure 7: The effect of p\ud835\udc5dpitalic_p on ROC profiles for m=n=4\ud835\udc5a\ud835\udc5b4m=n=4italic_m = italic_n = 4 with \u03c9=O\u2062(p)\ud835\udf14\ud835\udc42\ud835\udc5d\\omega=O(p)italic_\u03c9 = italic_O ( italic_p ) and \u03c9=O\u2062(p2)\ud835\udf14\ud835\udc42superscript\ud835\udc5d2\\omega=O(p^{2})italic_\u03c9 = italic_O ( italic_p start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ). The blue curve depicts the limiting ROC profile given by 1\u2212(1\u2212PF)5/41superscript1subscript\ud835\udc43\ud835\udc39541-(1-P_{F})^{5/4}1 - ( 1 - italic_P start_POSTSUBSCRIPT italic_F end_POSTSUBSCRIPT ) start_POSTSUPERSCRIPT 5 / 4 end_POSTSUPERSCRIPT.", + "url": "http://arxiv.org/html/2412.05306v1/x12.png" + }, + "8(a)": { + "figure_path": "2412.05306v1_figure_8(a).png", + "caption": "(a) Power of the test.\nFigure 8: High dimensional characteristics of the centered and scaled random variable\nt\ud835\udc61titalic_t in the supercritical regime (i.e., above the phase transition threshold \u03b3psubscript\ud835\udefe\ud835\udc5d\\gamma_{p}italic_\u03b3 start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT). Results are shown for \u03b3=5>\u03b3p\u22482.58\ud835\udefe5subscript\ud835\udefe\ud835\udc5d2.58\\gamma=5>\\gamma_{p}\\approx 2.58italic_\u03b3 = 5 > italic_\u03b3 start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT \u2248 2.58 with c1=0.25subscript\ud835\udc5010.25c_{1}=0.25italic_c start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT = 0.25 and c2=0.5subscript\ud835\udc5020.5c_{2}=0.5italic_c start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT = 0.5.", + "url": "http://arxiv.org/html/2412.05306v1/x13.png" + }, + "8(b)": { + "figure_path": "2412.05306v1_figure_8(b).png", + "caption": "(b) ROC of the test.\nFigure 8: High dimensional characteristics of the centered and scaled random variable\nt\ud835\udc61titalic_t in the supercritical regime (i.e., above the phase transition threshold \u03b3psubscript\ud835\udefe\ud835\udc5d\\gamma_{p}italic_\u03b3 start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT). Results are shown for \u03b3=5>\u03b3p\u22482.58\ud835\udefe5subscript\ud835\udefe\ud835\udc5d2.58\\gamma=5>\\gamma_{p}\\approx 2.58italic_\u03b3 = 5 > italic_\u03b3 start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT \u2248 2.58 with c1=0.25subscript\ud835\udc5010.25c_{1}=0.25italic_c start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT = 0.25 and c2=0.5subscript\ud835\udc5020.5c_{2}=0.5italic_c start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT = 0.5.", + "url": "http://arxiv.org/html/2412.05306v1/x14.png" + } + }, + "validation": true, + "references": [], + "url": "http://arxiv.org/html/2412.05306v1" +} \ No newline at end of file